Lecture Notes in Computer Science Commenced Publication in 1973 Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board David Hutchison Lancaster University, UK Takeo Kanade Carnegie Mellon University, Pittsburgh, PA, USA Josef Kittler University of Surrey, Guildford, UK Jon M. Kleinberg Cornell University, Ithaca, NY, USA Alfred Kobsa University of California, Irvine, CA, USA Friedemann Mattern ETH Zurich, Switzerland John C. Mitchell Stanford University, CA, USA Moni Naor Weizmann Institute of Science, Rehovot, Israel Oscar Nierstrasz University of Bern, Switzerland C. Pandu Rangan Indian Institute of Technology, Madras, India Bernhard Steffen University of Dortmund, Germany Madhu Sudan Microsoft Research, Cambridge, MA, USA Demetri Terzopoulos University of California, Los Angeles, CA, USA Doug Tygar University of California, Berkeley, CA, USA Gerhard Weikum Max-Planck Institute of Computer Science, Saarbruecken, Germany
5667
Lei Chen Chengfei Liu Qing Liu Ke Deng (Eds.)
Database Systems for Advanced Applications DASFAA 2009 International Workshops: BenchmarX, MCIS, WDPP, PPDA, MBC, PhD Brisbane, Australia, April 20-23, 2009
Volume Editors Lei Chen Hong Kong University of Science and Technology E-mail:
[email protected] Chengfei Liu Swinburne University of Technology, Melbourne, Australia E-mail:
[email protected] Qing Liu CSIRO, Castray Esplanade, Hobart, TAS 7000, Australia E-mail:
[email protected] Ke Deng The University of Queensland, Brisbane, QLD 4072, Australia E-mail:
[email protected]
Library of Congress Control Number: 2009933477
CR Subject Classification (1998): H.2, H.3, H.4, H.5, J.1, H.2.4, H.3.4, K.6.5, I.7
LNCS Sublibrary: SL 3 – Information Systems and Applications, incl. Internet/Web and HCI
ISSN 0302-9743
ISBN-10 3-642-04204-X Springer Berlin Heidelberg New York
ISBN-13 978-3-642-04204-1 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. springer.com © Springer-Verlag Berlin Heidelberg 2009 Printed in Germany Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper SPIN: 12743681 06/3180 543210
Preface
DASFAA is an annual international database conference, located in the Asia-Pacific region, which showcases state-of-the-art R&D activities in database systems and their applications. It provides a forum for technical presentations and discussions among database researchers, developers and users from academia, business and industry. DASFAA 2009, the 14th in the series, was held during April 20-23, 2009 in Brisbane, Australia. This year, we carefully selected six workshops, each focusing on specific research issues that contribute to the main themes of the DASFAA conference. This volume contains the final versions of papers accepted for these six workshops that were held in conjunction with DASFAA 2009. They are:
– First International Workshop on Benchmarking of XML and Semantic Web Applications (BenchmarX 2009)
– Second International Workshop on Managing Data Quality in Collaborative Information Systems (MCIS 2009)
– First International Workshop on Data and Process Provenance (WDPP 2009)
– First International Workshop on Privacy-Preserving Data Analysis (PPDA 2009)
– First International Workshop on Mobile Business Collaboration (MBC 2009)
– DASFAA 2009 PhD Workshop
All the workshops were selected via a public call-for-proposals process. The workshop organizers put a tremendous amount of effort into soliciting and selecting papers with a balance of high quality, new ideas and new applications. We asked all workshops to follow a rigorous paper selection process, including procedures to ensure that Program Committee members were excluded from the review of any paper with which they were involved. An overall paper acceptance rate of no more than 50% was also imposed on all the workshops. The conference and the workshops received generous financial support from The University of Melbourne, The University of New South Wales, The University of Sydney, The University of Queensland, National ICT Australia (NICTA), Australian Research Council (ARC) Research Network in Enterprise Information Infrastructure (EII), ARC Research Network on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), and ARC Research Network for a Secure Australia. We also received extensive help and logistic support from the DASFAA Steering Committee, The University of Queensland, Tokyo Institute of Technology and the Conference Management Toolkit Support Team at Microsoft. We are very grateful to Xiaofang Zhou, Haruo Yokota, Qing Liu, Ke Deng, Shazia Sadiq, Gabriel Fung, Kathleen Williamson and many other people for
their effort in supporting the workshop organization. We would like to take this opportunity to thank all workshop organizers and Program Committee members for their effort to put together the workshop program of DASFAA 2009. April 2009
Lei Chen Chengfei Liu
DASFAA 2009 Workshop Organization
Workshop Chairs Lei Chen Chengfei Liu
Hong Kong University of Science and Technology, China Swinburne University of Technology, Australia
Publication Chairs Qing Liu Ke Deng
CSIRO, Australia University of Queensland, Australia
BenchmarX 2009 Workshop Program Committee Chairs Michal Kratky Irena Mlynkova Eric Pardede
Technical University of Ostrava, Czech Republic Charles University in Prague, Czech Republic La Trobe University, Bundoora, Australia
Program Committee Radim Baca Martine Collard Jiri Dokulil Peter Gursky Tomas Horvath Jana Katreniakova Markus Kirchberg Agnes Koschmider Michal Kratky Sebastian Link
Technical University of Ostrava, Czech Republic INRIA Sophia Antipolis, France Charles University in Prague, Czech Republic Pavol Jozef Safarik University in Kosice, Slovakia Pavol Jozef Safarik University in Kosice, Slovakia Comenius University in Bratislava, Slovakia Institute for Infocomm Research, A*STAR, Singapore Institute AIFB, Universität Karlsruhe, Germany Technical University of Ostrava, Czech Republic Victoria University of Wellington, New Zealand
Pavel Loupal Mary Ann Malloy Marco Mevius Irena Mlynkova Martin Necasky Alexander Paar Incheon Paik Eric Pardede Jorge Perez Dmitry Shaporenkov Michal Valenta
Czech Technical University in Prague, Czech Republic The MITRE Corporation, USA Institute AIFB, Universität Karlsruhe, Germany Charles University in Prague, Czech Republic Charles University in Prague, Czech Republic Universität Karlsruhe, Germany The University of Aizu, Japan La Trobe University, Bundoora, Australia Pontificia Universidad Catolica, Chile University of Saint Petersburg, Russia Czech Technical University in Prague, Czech Republic
MCIS 2009 and WDPP 2009 Workshops Program Committee Chairs MCIS Shazia Sadiq Ke Deng Xiaofang Zhou Xiaochun Yang WDPP Walid G. Aref Alex Delis Qing Liu
The University of Queensland, Australia The University of Queensland, Australia The University of Queensland, Australia Northeastern University, China Purdue University, USA University of Athens, Greece CSIRO ICT Centre, Australia
Publicity Chair ( WDPP) Kai Xu
CSIRO ICT Centre, Australia
Program Committee MCIS Yi Chen Markus Helfert Ruoming Jin Chen Li Jiaheng Lu Graeme Shanks Can Turker Haixun Wang Xuemin Lin
Arizona State University, USA Dublin City University, Ireland Kent State University, USA UC Irvine, USA Renmin University, China Monash University, Australia FGCZ Zurich, Switzerland IBM, USA UNSW, Australia
WDPP Mohamed S. Abougabal Ilkay Altintas Athman Bouguettaya Susan B. Davidson Antonios Deligiannakis Juliana Freire James Frew Paul Groth Georgia Koutrika Bertram Ludascher Simon McBride Simon Miles Brahim Medjahed Khaled Nagi Anne Ngu Mourad Ouzzani Thomas Risse Satya S. Sahoo Zahir Tari Qi Yu Jun Zhao
University of Alexandria, Egypt San Diego Supercomputer Centre, USA CSIRO, Australia University of Pennsylvania, USA Technical University of Crete, Greece University of Utah, USA University of California, Santa Barbara, USA University of Southern California, USA Stanford University, USA University of California, Davis, USA The Australian E-Health Research Centre, Australia King’s College London, UK University of Michigan, Dearborn, USA Alexandria University, Egypt Texas State University, San Marcos, USA Purdue University, USA L3S Lab, Germany Kno.e.sis Center, Wright State University, USA RMIT, Australia Rochester Institute of Technology, USA University of Oxford, UK
PPDA 2009 Workshop Raymond Chi-Wing Wong Ada Wai-Chee Fu
The Hong Kong University of Science and Technology, China The Chinese University of Hong Kong, China
Program Committee Claudio Bettini Chris Clifton Claudia Diaz Josep Domingo-Ferrer Elena Ferrari Sara Foresti Benjamin C.M. Fung Christopher Andrew Leckie Jiuyong Li Jun-Lin Lin
University of Milan, Italy Purdue University, USA K.U.Leuven, Belgium Rovira i Virgili University, Spain University of Insubria, Italy University of Milan, Italy Concordia University, Canada The University of Melbourne, Australia University of South Australia, Australia Yuan Ze University, Taiwan
Kun Liu Ashwin Machanavajjhala Bradley Malin Nikos Mamoulis Wee Keong Ng Jian Pei Yucel Saygin Jianhua Shao Yufei Tao Vicenc Torra Carmela Troncoso Hua Wang Ke Wang Sean Wang Duminda Wijesekera Xintao Wu Jeffrey Yu Philip S. Yu
IBM Almaden Research Center, USA Cornell University, USA Vanderbilt University, USA Hong Kong University, China Nanyang Technological University, Singapore Simon Fraser University, Canada Sabanci University, Turkey Cardiff University, UK The Chinese University of Hong Kong, China Universitat Autonoma de Barcelona, Spain K.U. Leuven, Belgium University of Southern Queensland, Australia Simon Fraser University, Canada University of Vermont, USA George Mason University, USA University of North Carolina at Charlotte, USA The Chinese University of Hong Kong, China University of Illinois at Chicago, USA
MBC 2009 Workshop General Chairs Qing Li Hua Hu
City University of Hong Kong, China Zhejiang Gongshang University, China
Program Committee Chairs Dickson K.W. Chiu Yi Zhuang
Dickson Computer Systems, Hong Kong, China Zhejiang Gongshang University, China
Program Committee Patrick C.K. Hung Samuel P.M. Choi Eleanna Kafeza Baihua Zheng Edward Hung Ho-fung Leung Zakaria Maamar Stefan Voss Cuiping Li
University of Ontario Institute of Technology, Canada The Open University of Hong Kong, China Athens University of Economics and Business, Greece Singapore Management University, Singapore Hong Kong Polytechnic University, China Chinese University of Hong Kong, China Zayed University, UAE University of Hamburg, Germany Renmin University, China
Chi-hung Chi Stephen Yang Ibrahim Kushchu Haiyang Hu Huiye Ma Pirkko Walden Raymond Wong Lidan Shou Matti Rossi Achim Karduck
National Tsing Hua University, Taiwan, China National Central University, Taiwan, China Mobile Government Consortium International, UK Zhejiang Gongshang University, China CWI, The Netherlands Abo Akademi University, Finland National ICT, Australia Zhejiang University, China Helsinki School of Economics, Finland Furtwangen University, Germany
DASFAA 2009 PhD Workshop Program Committee Chairs Wei Wang Baihua Zheng
University of New South Wales, Australia Singapore Management University, Singapore
Program Committee Bin Cui Jianlin Feng Haifeng Jiang Takahiro Hara Wang-chien Lee Jiaheng Lu Lidan Shou Bill Shui Xueyan Tang Jianliang Xu
Peking University, China Zhongshan University, China Google, USA Osaka University, Japan Pennsylvania State University, USA Renmin University, China Zhejiang University, China NICTA, Australia Nanyang Technological University, Singapore Hong Kong Baptist University, China
Table of Contents
First International Workshop on Benchmarking of XML and Semantic Web Applications (BenchmarX'09) Workshop Organizers’ Message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Michal Krátký, Irena Mlynkova, and Eric Pardede
3
Current Approaches to XML Benchmarking (Invited Talk) . . . . . . . . . . . . Stéphane Bressan
4
TJDewey – On the Efficient Path Labeling Scheme Holistic Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Radim Bača and Michal Krátký
6
The XMLBench Project: Comparison of Fast, Multi-Platform XML Libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Suren Chilingaryan
21
A Synthetic, Trend-Based Benchmark for XPath . . . . . . . . . . . . . . . . . . . . . Curtis Dyreson and Hao Jin
35
An Empirical Evaluation of XML Compression Tools . . . . . . . . . . . . . . . . . Sherif Sakr
49
Benchmarking Performance-Critical Components in a Native XML Database System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Karsten Schmidt, Sebastian Bächle, and Theo Härder On Benchmarking Transaction Managers . . . . . . . . . . . . . . . . . . . . . . . . . . . Pavel Strnad and Michal Valenta
64 79
Second International Workshop on Managing Data Quality in Collaborative Information Systems and First International Workshop on Data and Process Provenance (MCIS’09 & WDPP’09) Workshop Organizers’ Message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Shazia Sadiq, Ke Deng, Xiaofang Zhou, Xiaochun Yang, Walid G. Aref, Alex Delis, Qing Liu, and Kai Xu Data Provenance Support in Relational Databases for Stored Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Winly Jurnawan and Uwe Röhm
95
97
A Vision and Agenda for Theory Provenance in Scientific Publishing . . . Ian Wood, J. Walter Larson, and Henry Gardner
112
Probabilistic Ranking in Uncertain Vector Spaces . . . . . . . . . . . . . . . . . . . . Thomas Bernecker, Hans-Peter Kriegel, Matthias Renz, and Andreas Zuefle
122
Logical Foundations for Similarity-Based Databases . . . . . . . . . . . . . . . . . . Radim Belohlavek and Vilem Vychodil
137
Tailoring Data Quality Models Using Social Network Preferences . . . . . . . Ismael Caballero, Eugenio Verbo, Manuel Serrano, Coral Calero, and Mario Piattini
152
The Effect of Data Quality Tag Values and Usable Data Quality Tags on Decision-Making . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Rosanne Price and Graeme Shanks Predicting Timing Failures in Web Services . . . . . . . . . . . . . . . . . . . . . . . . . Nuno Laranjeiro, Marco Vieira, and Henrique Madeira A Two-Tire Index Structure for Approximate String Matching with Block Moves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bin Wang, Long Xie, and Guoren Wang
167
182
197
First International Workshop on Privacy-Preserving Data Analysis (PPDA’09) Workshop Organizers’ Message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Raymond Chi-Wing Wong and Ada Wai-Chee Fu
215
Privacy Risk Diagnosis: Mining l-Diversity . . . . . . . . . . . . . . . . . . . . . . . . . . Mohammad-Reza Zare-Mirakabad, Aman Jantan, and Stéphane Bressan
216
Towards Preference-Constrained k-Anonymisation . . . . . . . . . . . . . . . . . . . . Grigorios Loukides, Achilles Tziatzios, and Jianhua Shao
231
Privacy FP-Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sampson Pun and Ken Barker
246
Classification with Meta-learning in Privacy Preserving Data Mining . . . Piotr Andruszkiewicz
261
Importance of Data Standardization in Privacy-Preserving K-Means Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chunhua Su, Justin Zhan, and Kouichi Sakurai
276
First International Workshop on Mobile Business Collaboration (MBC’09) Workshop Organizers’ Message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Dickson K.W. Chiu and Yi Zhuang A Decomposition Approach with Invariant Analysis for Workflow Coordination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jidong Ge and Haiyang Hu An Efficient P2P Range Query Processing Approach for Multi-dimensional Uncertain Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ye Yuan, Guoren Wang, Yongjiao Sun, Bin Wang, Xiaochun Yang, and Ge Yu Flexibility as a Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . W.M.P. van der Aalst, M. Adams, A.H.M. ter Hofstede, M. Pesic, and H. Schonenberg
289
290
303
319
Concept Shift Detection for Frequent Itemsets from Sliding Windows over Data Streams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jia-Ling Koh and Ching-Yi Lin
334
A Framework for Mining Stochastic Model of Business Process in Mobile Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Haiyang Hu, Bo Xie, JiDong Ge, Yi Zhuang, and Hua Hu
349
DASFAA 2009 PhD Workshop Workshop Organizers’ Message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Wei Wang and Baihua Zheng
357
Encryption over Semi-trusted Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hasan Kadhem, Toshiyuki Amagasa, and Hiroyuki Kitagawa
358
Integration of Domain Knowledge for Outlier Detection in High Dimensional Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sakshi Babbar
363
Towards a Spreadsheet-Based Service Composition Framework . . . . . . . . . Dat Dac Hoang, Boualem Benatallah, and Hye-young Paik
369
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
375
Workshop Organizers’ Message
Michal Kratky 1, Irena Mlynkova 2, and Eric Pardede 3
1 Technical University of Ostrava, Czech Republic
2 Charles University in Prague, Czech Republic
3 La Trobe University, Bundoora, Australia
The 1st International Workshop on Benchmarking of XML and Semantic Web Applications (BenchmarX’09) was held on April 20, 2009 at the University of Queensland in Brisbane, Australia in conjunction with the 14th International Conference on Database Systems for Advanced Applications (DASFAA’09). It was organized by Jiri Dokulil, Irena Mlynkova and Martin Necasky from the Department of Software Engineering of the Charles University in Prague, Czech Republic. The main motivation of the workshop was based on the observation that even though XML and semantic data processing is the main topic of many conferences around the world, the communities dealing with XML and semantic data benchmarking and related issues are still scattered. Moreover, although benchmarking is one of the key aspects of improving data processing, the majority of researchers naturally concentrate on proposing new approaches, while benchmarking is often neglected. Therefore, the aim of BenchmarX was and is to bring the benchmarking research community together and to provide an opportunity to deal with this topic more thoroughly. The program committee of the workshop consisted of 21 researchers and specialists representing 15 universities and institutions from 11 different countries. To ensure a high level of objectivity in the paper selection process, three PC chairs from different institutions were selected, in particular Michal Kratky, Irena Mlynkova and Eric Pardede. Each of the papers submitted to BenchmarX’09 was reviewed by three PC members for its technical merit, originality, significance, and relevance to the workshop. Finally, the PC chairs decided to accept 40% of the submitted papers. The final program of the workshop consisted of an invited talk and two sessions involving the accepted papers. The invitation was kindly accepted by Stephane Bressan from the National University of Singapore, one of the authors of the XOO7 benchmark and an expert in various aspects of data management. Last but not least, let us mention that BenchmarX’09 would not have been possible without the support of our sponsors. In particular, it was partially supported by the Grant Agency of the Czech Republic (GACR), projects number 201/09/0990 and 201/09/P364. After the successful first year providing many interesting ideas and research problems, we believe that BenchmarX will become a traditional annual meeting opportunity for the whole benchmarking community.
Current Approaches to XML Benchmarking (Invited Talk)
Stéphane Bressan
School of Computing, National University of Singapore
[email protected]
Abstract. XML benchmarking is as versatile an issue as numerous and diverse are the potential applications of XML. It is however not yet clear which of these anticipated applications will be prevalent and which of their features and components will have such performance requirements that necessitate benchmarking. The performance evaluation of XML-based systems, tools and techniques can either use benchmarks that consist of a predefined data set and workload or it can use a data set with an ad hoc workload. In both cases the data set can be real or synthetic. XML data generators such as Toxgene and Alphawork can generate XML documents whose characteristics, such as depth, breadth and various distributions, are controlled. It is also expected that benchmarks provide a data generator with a fair amount of control of the size and shape of the data, if the data is synthesized, or offer a suite of data subsets of varying size and shape, if the data is real. Application-level evaluation emphasizes the representativeness of the data set and workload in terms of typical applications, while micro-level evaluation focuses on elementary and individual technical features. The dual view of XML, data view and document view, is reflected in its benchmarks. There exist several well-established benchmarks for XML data management systems that can be used for the evaluation of the performance of query processing. The main application-level benchmarks in this category are XOO7, XMach1, XMark, and XBench, while the Michigan Benchmark is a micro-benchmark. For the evaluation of XML information retrieval the prevalent benchmark is the series of INEX corpora and topics. However, in practice, whether for the evaluation of XML data management techniques or for the evaluation of XML-retrieval techniques, researchers seem to favor real or synthetic data sets with ad hoc workloads when needed. The University of Washington repository gathers links to a variety of XML data sets. Noticeably most of these data sets are small. The largest is 603MB. Popular data sets like Mundial or the Baseball Boxscore XML are much smaller. The Database and Logic Programming Bibliography XML data set, also used by many scientists, is around 500MB. All of these data sets are generally relatively structured and quite shallow, thus not necessarily conveying the expected challenges associated with the semi-structured nature of XML. If the application-level data sets and workloads are not satisfactory, it may well be the case that XML as a language used to structure and manage content has
not yet matured. We must ask ourselves what there really is to benchmark. As of today, XML data are most commonly produced by office suites and software development kits. Office suites supporting Office Open XML and the Open Document Format are or will soon become the principal producers of XML. Yet in these environments XML is principally used to represent formatting instructions. Similarly, the widespread adoption of Web service standards in software development frameworks and kits (in the .Net framework, for instance) also contributes to the creation of large amounts of XML data. Again, here XML is primarily used to represent formats (e.g. SOAP messages). Although both XML-based document standards and Web service standards have intrinsic provision for XML content and have been designed to enable the management of content in XML, few users yet have the tools, the desire and the culture to manage their data in XML. Consequently, at least for now, it seems that these huge amounts of XML data created in the background of authoring and programming activities need neither be queried nor searched but rather only need to be processed by the office suites and compilers. The emphasis is still on format rather than content structuring and management. Of course, it is hoped by proponents of XML as a format for content that the XML-ization of formats will facilitate the XML-ization of the content. With XML-based protocols and formats, XML as a "standards' standard" (as there are compiler compilers) has been most successful at the lower layers of information management. The efforts for content organization and management, on the other hand, do not seem to have been as pervasive and prolific (in terms of the amount of XML data produced and used). For instance, the volume of data in the much talked about business XML standards (RosettaNet or Universal Business Language, for instance) is still difficult to measure and may not be or become significant. In this presentation we critically review the existing approaches to benchmarking of XML-based systems and applications. We try to analyze the trends in the usage of XML in order to determine the needs and requirements for the successful design, development and adoption of benchmarks. CV: Stéphane Bressan is Associate Professor in the Computer Science department of the School of Computing (SoC) at the National University of Singapore (NUS). He joined the National University of Singapore in 1998. He has also been an adjunct Associate Professor at Malaysia University of Science and Technology (MUST) since 2004. He obtained his PhD in Computer Science from the University of Lille, France, in 1992. Stéphane was a research scientist at the European Computer-industry Research Centre (ECRC), Munich, Germany, and at the Sloan School of Management of the Massachusetts Institute of Technology (MIT), Cambridge, USA. Stéphane's research is concerned with the management and integration of multi-modal and multimedia information from distributed, heterogeneous, and autonomous sources. He is author and co-author of more than 100 papers. He is co-author of the XOO7 benchmark for XML data management systems. Stéphane is a member of the XML working group of the Singapore Information Technology Standards Committee (ITSC) and an advisory member of the executive committee of the XML user group of Singapore (XMLone).
TJDewey – On the Efficient Path Labeling Scheme Holistic Approach
Radim Bača and Michal Krátký
Department of Computer Science, Technical University of Ostrava, Czech Republic
{radim.baca,michal.kratky}@vsb.cz
Abstract. In recent years, many approaches to XML twig pattern searching have been developed. Holistic approaches are particularly significant in that they provide a theoretical model for optimal processing of some query classes and have very low main memory complexity. Holistic algorithms can be incorporated into XQuery algebra as a twig query pattern operator. We can find two types of labeling schemes used by indexing methods: element and path labeling schemes. The path labeling scheme is a labeling scheme where we can extract all the ancestors labels from a node label. In the TJFast method, authors have introduced an application of the path labeling scheme (Extended Dewey) in the case of holistic methods. In our paper, we depict some improvements of this method that lead to a better scalability of the TJFast algorithm. We introduce the TJDewey algorithm which combines the TJFast algorithm with the DataGuide summary tree. The path labeling schemes have better update features and our article shows that the utilization of a path labeling scheme can have comparable or even better query processing parameters compared to other element labeling scheme approaches. Keywords: XML, twig pattern query, holistic algorithms, path labeling scheme, TJFast.
1 Introduction XML (Extensible Mark-up Language) [20] has recently been embraced as a new approach to data modeling. A well-formed XML document or a set of documents is an XML database. Implementation of a system enabling storage and querying of XML documents efficiently (the so-called native XML databases) requires an efficient approach to indexing the XML document structure. Existing approaches to an XML document structure indexing use some kind of labeling scheme [22,6,19,1,17]. The labeling scheme associates every element of an XML document with a unique label, which allows us to determine the basic relationship between elements. We recognize two types of labeling schemes: (1) element labeling scheme (e.g., containment labeling scheme [22] or Dietz’s labeling scheme [6]), and (2) path labeling scheme (e.g., Dewey order [19] or OrdPath [17]). By the term ‘path
Work is partially supported by Grants of GACR No. 201/09/0990.
labeling scheme’ we mean a labeling scheme where we can extract all the ancestors’ labels from a node label. Path labeling schemes have generally better update features [19,17]. Moreover, a path labeling scheme such as OrdPath can be updated without any relabeling [17]. A path labeling scheme has variable-length labels; however, this problem can be solved using simple label encoding [7]. Despite these features we cannot find many efficient query processing algorithms using path labeling schemes. Existing algorithms using structural joins [22,1,6] or holistic joins [3,5,4] work only with an element labeling scheme. Holistic algorithms are designed for specific types of XPath queries, so-called twig query patterns. However, the holistic algorithm can be understood as an operator of XML algebra [15] and therefore it can be utilized in the case of more complex XPath queries. In [16], we can find a comparison of approaches to twig query processing. Holistic algorithms were considered the most robust solution, not requiring any complicated query optimizations. Moreover, holistic approaches provide a theoretical model for optimal processing of some query classes and their main memory requirements are minimal in this case. The TJFast holistic algorithm [13] applies the Extended Dewey path labeling scheme, but the update features are constrained by a finite state transducer used with Extended Dewey. Moreover, TJFast must extract the labeled path from every label and compare it with the regular expression. We improve the TJFast holistic algorithm using the DataGuide summary tree in order to decrease the unnecessary computing cost and I/O cost significantly. Our work shows that a query algorithm using a path labeling scheme can outperform existing state-of-the-art algorithms using an element labeling scheme [3,5,4]. Due to the fact that our algorithm can be used with the popular Dewey Order labeling scheme, we call it TJDewey. However, the TJDewey algorithm is not dependent on a specific path labeling scheme; it can be used with any path labeling scheme. This paper is organized as follows. In Section 2, we depict a model of an XML document. Section 3 focuses on a brief description of holistic approaches. Due to the fact that the TJFast method is in the scope of our paper, we describe this approach in more detail. In Section 4, we introduce the improvement of TJFast. Section 5 provides comprehensive experimental results of different holistic approaches. In the last section, we summarize the paper content and outline possibilities of our future work.
2 Model

An XML document can be modeled as a rooted, ordered, labeled tree, where every node of the tree corresponds to an element or an attribute of the document and edges connect elements, or elements and attributes, having a parent-child relationship. We call such a representation of an XML document an XML tree. An example of an XML tree is shown in Figure 1(a). We use the term ‘node’ in the meaning of a node of an XML tree which represents an element or an attribute. For each node of an XML tree, we shall define a labeled path as a sequence tag0/tag1/.../tagn of node tags lying on the path from the root to the node n. A labeled path provides additional information about a node that can be utilized to speed
Fig. 1. (a) XML tree labeled with Dewey Order (b) corresponding DataGuide
up query processing. We say that a node n corresponds to a labeled path lp if the labeled path from the root to n is identical to lp. Obviously, a labeled path is a path in a DataGuide [18]. Figure 1(b) depicts an example of a DataGuide for the XML document in Figure 1(a). A twig pattern query (TPQ) can be modeled as an unordered rooted query tree, where each node of the tree corresponds to a single query node and edges represent a descendant or child relationship between the connected nodes. Since each query node is labeled by a tag, we use the term ‘query node’ in the meaning of ‘tag of the query node’. Each query node can be constrained by a value-based condition; however, we do not consider content-based queries in this paper. A path query pattern pq in a TPQ is the sequence of query nodes in the TPQ from the root to the query node q. In this paper, we focus on searching for TPQ matches in an XML tree.
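To make the model concrete, the following C++ sketch shows one possible representation of Dewey Order labels; the type and function names (DeweyLabel, isAncestorOrSelf, ancestorLabels) are illustrative assumptions, not the authors' implementation. It only demonstrates the two properties used throughout the paper: ancestor labels are prefixes of a descendant's label, and all ancestor labels can be extracted from a node label.

#include <cassert>
#include <vector>

// A Dewey Order label is the sequence of child positions on the root-to-node path.
using DeweyLabel = std::vector<int>;

// a is an ancestor-or-self of d iff a's label is a prefix of d's label.
bool isAncestorOrSelf(const DeweyLabel& a, const DeweyLabel& d) {
    if (a.size() > d.size()) return false;
    for (std::size_t i = 0; i < a.size(); ++i)
        if (a[i] != d[i]) return false;
    return true;
}

// All ancestor-or-self labels of a node are obtained by taking prefixes of its label.
std::vector<DeweyLabel> ancestorLabels(const DeweyLabel& n) {
    std::vector<DeweyLabel> result;
    for (std::size_t len = 1; len <= n.size(); ++len)
        result.push_back(DeweyLabel(n.begin(), n.begin() + len));
    return result;
}

int main() {
    DeweyLabel a = {1, 2};        // e.g. the second child of the first child of the root
    DeweyLabel d = {1, 2, 3, 1};  // a descendant of a
    assert(isAncestorOrSelf(a, d));
    assert(ancestorLabels(d).size() == 4);  // one prefix per ancestor-or-self
    return 0;
}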
3 Holistic Algorithms In this section, we briefly review holistic algorithms for TPQ searching. There are many works describing various holistic algorithms [3,9,5,13,4]. Some of them may be combined, e.g., XB-tree [3] or XR-tree [8] can be utilized in all holistic approaches applying an element labeling scheme. Holistic approaches use input streams of nodes. The content of streams depends on the streaming scheme used. Nodes, labeled with a labeling scheme, are always sorted in the stream. We expect streams to be supported by a retrieval mechanism such as inverted list. Obviously, more than one input stream Tq may be related to one query node q. The TwigStack algorithm [3] uses the tag streaming scheme. In [5], Chen et al. propose an extension of this algorithm by using different streaming schemes and they introduce tag+level and prefix path streaming schemes. The prefix path streaming scheme is basically a utilization of labeled paths. The idea of having a labeled path as a key for node retrieval has already been introduced in works [14,11]. However, Chen et al. were the first who described a holistic algorithm using the labeled paths (prefix path streaming in their work). We use LPS instead of ‘labeled path streaming scheme’ or ‘prefix path streaming scheme’ in the following text. In any algorithm using labeled paths we must first find labeled paths matching the query. In other words, we reduce the searched space in this way, and, obviously, this is the main issue of path based methods. Searching of matching labeled paths is called
stream pruning in [5]. In [2], the authors show that the problem is identical to query matching in a DataGuide tree. Let us note that the set of labeled paths matching the query node q is called PRUq. It is important to specify the relation among streams in the context of a TPQ. In [5], the authors introduce the soln multiset capturing this relation in a simple way.

Definition 1. (Solution Streams) Solution streams soln(T, qi) for a stream T of class q and a child query node qi of q in a TPQ are the streams of class qi which satisfy the structural relationship of the edge (q, qi).

Since holistic algorithms apply different streaming and labeling schemes, we can classify these algorithms accordingly. In Table 1, the classification of holistic algorithms is proposed.

Table 1. Classification of the holistic algorithms
                          Tag streaming scheme   Labeled path streaming scheme
Element labeling scheme   TwigStack [3]          iTwigJoin+PPS [5]
Path labeling scheme      TJFast [13]            TJDewey
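As a small illustration of the stream pruning step described above (finding the labeled paths PRUq by matching the query against a DataGuide), the following C++ sketch matches a single tag against a DataGuide; a real implementation matches the whole twig pattern. The GuideNode structure and the example DataGuide are assumptions made for the illustration only.

#include <iostream>
#include <string>
#include <vector>

// A DataGuide node: a tag and its children.  Each root-to-node path in the
// DataGuide is one labeled path of the document.
struct GuideNode {
    std::string tag;
    std::vector<GuideNode> children;
};

// Collect all labeled paths of the DataGuide whose last tag equals `target`.
// This is a simplified form of stream pruning: only one query node is matched here.
void collect(const GuideNode& n, std::string prefix,
             const std::string& target, std::vector<std::string>& pru) {
    prefix += "/" + n.tag;
    if (n.tag == target) pru.push_back(prefix);
    for (const GuideNode& c : n.children) collect(c, prefix, target, pru);
}

int main() {
    // A small, hypothetical DataGuide (not the one of Figure 1(b)).
    GuideNode root{"a", {GuideNode{"b", {GuideNode{"d", {}}}},
                         GuideNode{"e", {GuideNode{"d", {}}}}}};
    std::vector<std::string> pru;
    collect(root, "", "d", pru);
    for (const std::string& lp : pru) std::cout << lp << std::endl;
    // prints /a/b/d and /a/e/d; these identify the streams PRU_d of a query node d
    return 0;
}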
TJFast [13] introduces an application of the path labeling scheme in the case of holistic methods. In our paper, we introduce an improvement of the TJFast algorithm – the TJDewey method applying the labeled path streaming scheme. In the case of top-down holistic algorithms like TwigStack or iTwigJoin, both space and I/O complexities for an optimal query are determined and the space complexity is low. In contrast, bottom-up holistic methods, e.g. Twig2Stack [4], tend to have a high space complexity. In other words, Twig2Stack can load the whole document into the main memory. Therefore, we do not consider this algorithm.

3.1 Optimality of Holistic Algorithms

Optimality is an important issue of holistic algorithms. Approaches such as TwigStack [3], iTwigJoin+TL [5], iTwigJoin+PPS [5], and TJFast [13] define their own query classes for which these algorithms are optimal. In the case of TJFast, the authors prove that this method is optimal in the case of queries containing only AD edges after the first branching node in the query. Moreover, TJFast is optimal for the same query class as TwigStack: queries containing only AD edges, where the query may begin with a PC edge. Every top-down holistic approach operates in two phases. In the first phase, solutions of a path query pattern from a TPQ are found and stored in output arrays. In the second phase, solutions in output arrays are merged and query matches are found. The number of output arrays is equal to the number of leaf nodes in a query. If a query is optimal, no irrelevant nodes are stored in output arrays (for detail see [3]). In other words, all nodes in output arrays are in the final result. In the case of a non-optimal query, we have to prune irrelevant nodes during the merge phase. We store solutions found during the first phase in a blocking structure. When we pop out
the last item from the root stack, we enumerate blocked solutions. As shown in [12], the number of irrelevant solutions may be significant in the case of non-optimal queries. The worst-case space complexity of the optimal iTwigJoin and TwigStack is |Q| · L, where |Q| is the number of query nodes in a TPQ Q and L is the maximal length of a root-to-leaf path in the XML document. The worst-case space complexity of the optimal TJFast is |QL| · L, where |QL| is the number of leaf query nodes in the TPQ Q. The worst-case I/O and time complexities are equal to the sum of the input stream sizes and the size of the output list in the case of any optimal holistic algorithm.

3.2 TJFast

Lu et al. [13] introduced the Extended Dewey labeling scheme which is applied in the proposed TJFast holistic algorithm. If we compare TJFast with other holistic algorithms, we observe one important improvement: a query is evaluated by retrieving nodes only from the streams of leaf query nodes. For example, having the query a//e/c, we find solutions by retrieving Extended Dewey labels only from the c stream. Although this idea is not novel in the case of XML query processing, e.g., a path-based approach [11,10] processes a query in this way, other holistic algorithms retrieve streams for all nodes in the TPQ. The main algorithm is shown in Algorithm 1; the most important function, getNext, is shown in Algorithm 3. Since the authors do not use an LPS, they have to extract the labeled path from a node label. The authors define a Finite State Transducer (FST) structure to enable labeled path extraction from an Extended Dewey label.

Example 1. (The Labeled Path Extraction) In Figure 2, we show an example of the FST corresponding to the XML tree in Figure 1(a). Each FST node n is labeled by a tag and its nmax (in parentheses). The value nmax is the maximal number of an FST edge starting from the node, incremented by 1. If we want to know the labeled path of a node with the (3, 5, 2) Extended Dewey label, we start from the a root FST node. This is also the first tag of the labeled path. We apply the modulo operation using the nmax of the current FST node and move to the next FST node according to the result. The modulo operation is applied to each number of the Extended Dewey label, and we extract the a/e/e/d labeled path from the label.
Fig. 2. The FST corresponding to the XML tree in Figure 1(a)
Once we have a labeled path corresponding to the node n, we have to apply a string matching algorithm in order to test whether the labeled path matches its query path. During this matching we also have to resolve the MB set, i.e., the set of ancestors of n matching the ancestor query nodes in the query path (see Section 4.1 for detail).
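The following C++ sketch illustrates the FST-based labeled path extraction described in Example 1: for each component of an Extended Dewey label, the component modulo the nmax of the current FST state selects the next state, and the visited tags form the labeled path. The transition table in main is a made-up two-tag transducer, not the FST of Figure 2; the FstState type is an assumption for illustration.

#include <iostream>
#include <string>
#include <vector>

// One state of the finite state transducer (FST) used with Extended Dewey:
// a tag, its nmax, and the successor state for each residue 0..nmax-1.
struct FstState {
    std::string tag;
    int nmax;
    std::vector<int> next;   // next[r] = index of the state reached for residue r
};

// Decode the labeled path of an Extended Dewey label by walking the FST:
// start at the root state and, for every label component, follow the edge
// selected by (component mod nmax) of the current state.
std::string labeledPath(const std::vector<FstState>& fst,
                        const std::vector<int>& label) {
    int state = 0;                       // the root state
    std::string path = fst[state].tag;   // the root tag is the first tag of the path
    for (int component : label) {
        int r = component % fst[state].nmax;
        state = fst[state].next[r];
        path += "/" + fst[state].tag;
    }
    return path;
}

int main() {
    // A hypothetical two-tag transducer: from "a", residue 0 leads to "b" and
    // residue 1 stays in "a"; from "b", every residue leads back to "a".
    std::vector<FstState> fst = {
        {"a", 2, {1, 0}},   // state 0
        {"b", 1, {0}}       // state 1
    };
    std::cout << labeledPath(fst, {2, 1, 0}) << std::endl;   // prints a/b/a/b
    return 0;
}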
4 TJDewey

In this section, we introduce the TJDewey algorithm, which is a variant of TJFast applying LPS. TJDewey does not use any time-consuming labeled path extraction and labeled path matching. TJDewey uses the original Dewey Order labeling scheme [19] instead of Extended Dewey. Obviously, this labeling scheme is more appropriate for future updates compared to Extended Dewey. TJDewey can also work with OrdPath [17] or any other path labeling scheme. Therefore, it is more general than TJFast from this point of view.

4.1 TJDewey Improvement

In Algorithm 1, we depict the main loop of the TJDewey algorithm. The application of an LPS has the following positive consequences:
1. We do not use the FST and the time-consuming labeled path extraction. Labeled paths are available due to the LPS scheme used.
2. Every node label matches its query path in TJDewey. Therefore, the string matching algorithm is removed. (Line 8 in Algorithm 1 is skipped in TJDewey.)
3. The MB set is not extracted from every node label during query processing. The MB sets may be created before the first phase of the algorithm. (Line 2 in Algorithm 1 is added.)

Algorithm 1. TJDewey/TJFast algorithm
1  // the following line occurs only in TJDewey
2  foreach lp ∈ PRUq do MBEnumeration(root, lp);
3  while ¬end do
4    q = getNext(root);
5    outputSolutions(q);
6    advance(Tqmin);
7    // the following line occurs in TJFast; it is skipped in TJDewey
8    locateMatchedLabel(Tq);
9  end
10 mergePhase();
function outputSolutions(Query node q): Return solutions of head(Tqmin) related to the path query pattern pq such that the nodes matching pq exist in the ancestors' stacks.
These improvements are described in the following subsections. The first improvement is clear; therefore, it is not elaborated in a separate subsection.

2nd Improvement: String Matching Issue

Lemma 1. (Streams after query matching in a DataGuide) For every stream lp ∈ PRUq, where q ≠ root, there exists at least one stream lp′ of class qparent such that the streams lp and lp′ satisfy the structural relationship of the edge (qparent, q).
To prove Lemma 1, it suffices to note that we find labeled paths as query matches.

Lemma 2. (Node matching) Let us have a TPQ Q; then every labeled path lp ∈ PRUq of every query node q ∈ Q matches its path query pattern pq.

Proof. By Lemma 1, every lp ∈ PRUq has to have at least one matching labeled path lp′ of class qparent. We can apply this lemma recursively and get matching labeled paths for every query node in pq, thus we are done.

Using Lemma 2, we can omit the locateMatchedLabel procedure used in the TJFast algorithm due to the fact that every node matches its query pattern in TJDewey.

3rd Improvement: MB Set Building. The most important function in Algorithm 1, getNext, is shown in Algorithm 3. The MB set is utilized there.

Definition 2. (MB Set) Let us have a node n of class q, a path query pattern pq, and a query node qi ∈ pq. Let pq be a tuple (q1, . . . , qm−1, qm), where qm = q. The MB set is defined recursively as follows:
  MB(n, qi) = {n}   if i = m,
  MB(n, qi) = { ni : ∃ ni+1 ∈ MB(n, qi+1) such that ni satisfies the structural relationship of the edge (qi, qi+1) with ni+1 }   otherwise.

Example 2. (MB Set) Let us have a node n with the (1, 2, 3, 4, 5) Extended Dewey label and the corresponding labeled path a/b/a/b/c/d (the FST is not important here). We want to know the MB set for every query node in the b//c/d path query pattern. Obviously, MB(n, d) = {(1, 2, 3, 4, 5)}. Furthermore, MB(n, c) = {(1, 2, 3, 4)}. For the query node b, MB(n, b) = {(1), (1, 2, 3)}, due to the fact that both nodes satisfy the AD relationship with the (1, 2, 3, 4) node.

TJDewey applies labeled paths in advance and we can extract the MB sets from them. Therefore, we do not have to extract the MB set from every node label as proposed in the case of TJFast. Obviously, the nodes of an MB set may be stored as numbers; each number indicates the length of the corresponding label. When we want to get the node ni ∈ MB(n, q), we use this length to extract ni from n. The whole labeled path does not have to be accessible even in the case of LPS. When working with labeled paths, we usually represent the relationship by the soln multiset. Every labeled path lp ∈ PRUq is represented by a stream id (a pointer to the start position of the stream) and the soln multiset for each child query node. This representation enables us to store it easily in the main memory, even when we have a high number of long labeled paths. Algorithm 2 shows that the MB set for each labeled path can be extracted from a simple structure, where we keep the labeled path length and references to labeled paths in the soln multiset. An important issue is that we build the MB set for the labeled path lp, and this MB set can then be used directly by every node n of class lp.
Algorithm 2. MBEnumeration(Query node q, labeled path lp)
1  if isLeaf(q) then
2    foreach qi ∈ path query pattern pq do
3      if prefixLenqi ∉ MB(lp, qi) then MB(lp, qi).add(prefixLenqi);
4    end
5  else
6    prefixLenq = lp.lpLength;
7    foreach qi ∈ childs(q) do
8      foreach lpk ∈ soln(lp, qi) do
9        MBEnumeration(qi, lpk);
10     end
11   end
12 end
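As an illustration of Definition 2, the following C++ sketch computes the MB sets for a single path query pattern and one node, returning each MB entry as a prefix length of the node's labeled path; Algorithm 2 achieves the same effect for whole streams via the soln multiset. The representation used here (tag sequences and depths instead of Extended Dewey prefix lengths) is a simplification chosen for the example.

#include <iostream>
#include <set>
#include <string>
#include <vector>

// Axis between two consecutive query nodes of a path query pattern.
enum class Axis { Child, Descendant };

// Compute the MB sets of Definition 2 for a path query pattern and a single
// node given by the tag sequence of its labeled path.  Each MB set is returned
// as a set of prefix lengths (depths) identifying ancestors of the node.
std::vector<std::set<int>> mbSets(const std::vector<std::string>& pathTags,
                                  const std::vector<std::string>& queryTags,
                                  const std::vector<Axis>& axes) {
    int m = static_cast<int>(queryTags.size());
    int depth = static_cast<int>(pathTags.size());
    std::vector<std::set<int>> mb(m);
    mb[m - 1].insert(depth);                       // the node itself
    for (int i = m - 2; i >= 0; --i) {             // walk the query path upwards
        for (int j = 1; j < depth; ++j) {          // candidate ancestors (proper prefixes)
            if (pathTags[j - 1] != queryTags[i]) continue;
            for (int k : mb[i + 1]) {
                bool ok = (axes[i] == Axis::Child) ? (j == k - 1) : (j < k);
                if (ok) { mb[i].insert(j); break; }
            }
        }
    }
    return mb;
}

int main() {
    // Example 2 of the paper: labeled path a/b/a/b/c/d, query path b//c/d.
    std::vector<std::string> path = {"a", "b", "a", "b", "c", "d"};
    auto mb = mbSets(path, {"b", "c", "d"}, {Axis::Descendant, Axis::Child});
    for (int j : mb[0]) std::cout << "MB(n,b) contains the prefix of length " << j << "\n";
    // prints lengths 2 and 4, i.e. the two b ancestors; the paper stores the
    // corresponding Extended Dewey prefixes (1) and (1,2,3).
    return 0;
}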
Let us define the MB(n, q).add(ni.lpLength) function inserting the prefix length of an ni ∈ MB(n, q) into MB(n, q). We use the prefixLen array containing the labeled path length for each inner query node. We build the MB set for every labeled path corresponding to each leaf query node in this way.

4.2 TJDewey Algorithm

TJDewey uses one stack Sq per query node q during query processing. Every item on the stack has only ancestors below itself during the query evaluation. The main loop of TJDewey is shown in Algorithm 1. It outputs the solutions corresponding to the head(Tqmin) node if it has matching nodes in the ancestor stacks. It also advances the Tqmin stream. In order to output only solutions corresponding to the twig query, we have to set all stacks properly. That is the main role of the getNext function (Algorithm 3).

Definition 3. (Minimal Extension) Let us have a TPQ Q. The node n = head(Tq), q ∈ Q, has a minimal extension if there is a query match of the subtree Qq where every node of the query match is the head node of its stream.

The previously depicted stacks are set via the clearSet and updateSet functions. The conditions in Lines 5 and 14 are true if the streams' head nodes do not create a minimal extension for a subtree rooted in q. We clear the stack Sq (Lines 6 and 15) since the nodes in the stack which are not ancestors of head(Tqmin) are useless. Functions minarg and maxarg return the order of the lowest and highest item in the e set, respectively (Lines 11 and 12). The condition in Line 20 is true if the p node belongs to a minimal extension for a subquery rooted in q. We push the node p onto the stack Sq and thus enable outputting solutions. All nodes in the Sq stack which are not ancestors of the p node are already useless, because all solutions for these nodes are already in the output array. We pop these nodes out of the Sq stack. Based on these observations we present Lemma 3 for the clearSet function. This lemma follows the corresponding property of the TJFast algorithm.
Algorithm 3. getNext(Query node q) function
1  if isLeaf(q) then return q;
2  foreach qi ∈ dbl(q) do
3    /* dbl(q) returns the set of all branching nodes b and leaf nodes f in the twig rooted with q such that in the path from q to b or f (excluding q, b or f) there is no branching node, for detail see [13]. */
4    fi = getNext(qi);
5    if ¬isBranching(fi) ∧ empty(Sqi) then
6      clearSet(Sq, head(Tqi,min));
7      return fi;
8    end
9    ei = max{p : p ∈ MB(fi, q)};
10 end
11 max = maxarg(e);
12 min = minarg(e);
13 foreach qi ∈ dbl(q) do
14   if ∀p ∈ MB(fi, q): ¬ancestor(p, emax) then
15     clearSet(Sq, head(Tqi,min));
16     return qi;
17   end
18 end
19 foreach p ∈ MB(qmin, q) do
20   if ancestor(p, emax) then updateSet(Sq, p);
21 end
22 return qmin;
function clearSet(Stack Sq, Node p): if ¬ancestor(Sq.top(), p) then Sq.pop();
function updateSet(Stack Sq, Node p): clearSet(Sq, p); Sq.push(p);
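The following C++ sketch shows one possible implementation of the per-query-node stacks and the clearSet/updateSet helpers of Algorithm 3, using the prefix test on Dewey labels as the ancestor check. Note that clearSet is written as a loop here, whereas the pseudocode pops a single element and relies on the stack invariant; the NodeStack type is an assumption for illustration.

#include <cassert>
#include <vector>

using DeweyLabel = std::vector<int>;

// a is an ancestor-or-self of d iff a's label is a prefix of d's label.
bool ancestor(const DeweyLabel& a, const DeweyLabel& d) {
    if (a.size() > d.size()) return false;
    for (std::size_t i = 0; i < a.size(); ++i)
        if (a[i] != d[i]) return false;
    return true;
}

// The per-query-node stack used by getNext: every popped element can no longer
// contribute to a new solution (Lemma 3).
struct NodeStack {
    std::vector<DeweyLabel> items;

    void clearSet(const DeweyLabel& p) {            // pop non-ancestors of p
        while (!items.empty() && !ancestor(items.back(), p))
            items.pop_back();
    }
    void updateSet(const DeweyLabel& p) {           // clear, then push p
        clearSet(p);
        items.push_back(p);
    }
};

int main() {
    NodeStack s;
    s.updateSet({1});        // some node a1
    s.updateSet({1, 2});     // a descendant of a1: both stay on the stack
    assert(s.items.size() == 2);
    s.updateSet({3});        // unrelated node: the stack is cleared before the push
    assert(s.items.size() == 1);
    return 0;
}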
Lemma 3. We pop out only nodes which cannot contribute to any new solution.

All nodes which can potentially participate in a query match are pushed by the updateSet function (Line 20). Lemma 3 shows that these nodes are not popped out until all potential solutions are found, and that enables us to state Theorem 1 about the correctness of the TJDewey algorithm.

Theorem 1. (TJDewey correctness) Having the TPQ Q and the XML document D, the TJDewey algorithm correctly finds all matches of Q in D.

4.3 LPS Issues

In the case of LPS, we have to adjust the head(Tq) and advance(Tq) methods in order to handle multiple streams per query node. The head(Tq) function of the query node q
selects the head of the lowest stream in the array. We must keep the streams of every query node q sorted according to their heads. As usual, we implement this by an array of references to the streams' heads. The advance(Tq) method first shifts the head of the current stream to the next node. Therefore, the array of references has to be re-sorted so that it keeps the streams ordered by their heads. Re-sorting can be easily implemented using an algorithm with logarithmic complexity. Consequently, the negative impact of this sorting is marginal compared to the other LPS algorithm, iTwigJoin+PPS, as shown in Section 5.

4.4 Optimality of the TJDewey Algorithm

Output nodes in the result set are determined only by nodes in the branching nodes' stacks. This part of the TJFast algorithm remains the same in TJDewey. Due to this fact, TJDewey is optimal for queries with AD axes between branching nodes and their descendants. The worst-case space complexity of the optimal TJDewey is |QL| · L, where |QL| is the number of leaf query nodes in the TPQ Q and L is the length of the maximal path in the XML tree. The worst-case I/O and time complexities are equal to the sum of the input stream sizes and the size of the output list. However, there are cases where TJDewey is optimal and TJFast is not. Let us show it in Example 3.

Example 3. (Processing a TPQ in TJFast and TJDewey, respectively) Let us have the XML document DOC1, the corresponding DataGuide DG1, and the query TPQ1 in Figures 3(a-c). There are three streams when processing this TPQ using TJFast. This algorithm calls getNext(a), where MB(c1, a) = {a2} and MB(b1, a) = {a1}, therefore fmax = c and fmin = b. TJFast pushes a1 into Sa (Line 19) and returns b. Solution (a1, b1) is output, stream Tb is advanced, and locateMatchedLabel(Tb) finds that b2 matches its query path. MB(b2, a) = {a1, a2} and node a2 is also pushed into Sa during the next call of getNext(a). It returns the b query node again and solutions (a1, b2), (a2, b2) and (a2, c1) are output. Another call of getNext(a) clears the stack Sa and returns c. The streams include head(Tc) = c3 and head(Tb) = b3 after the locateMatchedLabel(Tc) function is processed. This configuration pushes the a3 node into Sa during getNext(a), and therefore the last solution (a3, b3) is output. We
Fig. 3. Example of (a) an XML tree DOC1, (b) a DataGuide DG1, and (c) a query TPQ1
can observe that only solutions (a2, b2) and (a2, c1) are relevant to the query. The other nodes must be pruned during the merge phase. In the case of TJDewey, we process the TPQ search in the DataGuide. As a result, we get labeled paths r/a/a, r/a/a/c and r/a/a/c/b, where soln(Tr/a/a, c) = {Tr/a/a/c} and soln(Tr/a/a, b) = {Tr/a/a/c/b}. The algorithm calls MBEnumeration and starts with getNext(a), where head(Tcmin) = c1 and head(Tbmin) = b2. MB(c1, a) = {a2}, which is the same as in TJFast; however, MB(b2, a) = {a2}, therefore only a2 is pushed into Sa. Consequently, only solutions (a2, b2) and (a2, c1) are output. Both leaf streams Tr/a/a/c and Tr/a/a/c/b are finished after the advance and the algorithm ends. In this example, TJDewey stores only relevant solutions. The optimality improvement is caused by the MB set building: we create the MB set only from labeled paths matching the whole twig in the DataGuide, therefore ancestor nodes with irrelevant labeled paths are not a part of the MB set.
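To make the stream handling of Section 4.3 concrete, the following C++ sketch keeps the pruned streams of one query node in a binary heap ordered by their current heads, so that head(Tq) and advance(Tq) take logarithmic time in the number of streams. The Stream and StreamSet types are illustrative assumptions, not the authors' implementation; lexicographic comparison of Dewey labels corresponds to document order.

#include <algorithm>
#include <iostream>
#include <vector>

using DeweyLabel = std::vector<int>;

// One pruned stream of a query node: labels in document order plus a cursor.
struct Stream {
    std::vector<DeweyLabel> labels;
    std::size_t pos = 0;
    bool finished() const { return pos >= labels.size(); }
    const DeweyLabel& head() const { return labels[pos]; }
};

// Keeps the streams of one query node ordered by their current heads.
class StreamSet {
public:
    explicit StreamSet(std::vector<Stream> streams) : streams_(std::move(streams)) {
        for (std::size_t i = 0; i < streams_.size(); ++i)
            if (!streams_[i].finished()) heap_.push_back(i);
        std::make_heap(heap_.begin(), heap_.end(), cmp());
    }
    bool empty() const { return heap_.empty(); }
    const DeweyLabel& head() const { return streams_[heap_.front()].head(); }
    void advance() {                           // shift the lowest stream and re-heapify
        std::pop_heap(heap_.begin(), heap_.end(), cmp());
        std::size_t i = heap_.back();
        heap_.pop_back();
        ++streams_[i].pos;
        if (!streams_[i].finished()) {
            heap_.push_back(i);
            std::push_heap(heap_.begin(), heap_.end(), cmp());
        }
    }
private:
    // comparator that keeps the stream with the smallest head on top of the heap
    auto cmp() const {
        return [this](std::size_t a, std::size_t b) {
            return streams_[b].head() < streams_[a].head();
        };
    }
    std::vector<Stream> streams_;
    std::vector<std::size_t> heap_;
};

int main() {
    StreamSet ts({Stream{{{1, 1}, {2, 5}}}, Stream{{{1, 3}, {2, 1}}}});
    while (!ts.empty()) {
        for (int x : ts.head()) std::cout << x << '.';
        std::cout << ' ';
        ts.advance();
    }
    std::cout << '\n';   // prints 1.1. 1.3. 2.1. 2.5.
    return 0;
}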
5 Experimental Results

In our experiments1, we test all previously described top-down approaches. These approaches are implemented in C++. We use two XML collections: TreeBank2 and XMARK3 with factor 10.

Table 2. Characteristics of the XML collections

Collection | Node count  | No. of labeled paths | No. of element types | Max. depth
TreeBank   | 2,437,666   | 338,766              | 251                  | 36
XMARK      | 20,532,805  | 548                  | 77                   | 16

Table 3. Index size

Collection | iTwigJoin+Tag [MB] | iTwigJoin+LP [MB] | TJFast [MB] | TJDewey [MB]
TreeBank   | 46.6               | 53.2              | 44.8        | 42.2
XMARK      | 393                | 393               | 291         | 280
In Table 2, we observe statistics of the XML collections used in our experiments. The TreeBank collection includes many different labeled paths and the average depth of the XML document is quite high. It also contains a lot of recursive labeled paths. On the other hand, the XMARK collection with factor 10 includes 512 labeled paths, where only a few of them are recursive. Table 3 shows index sizes for all approaches. We use the Fibonacci encoding [21] to store the variable-length labels: Extended Dewey and Dewey Order labels in this case. We can observe that the size of the indices is almost the same in the case of the TreeBank collection. Indices for the path labeling schemes are smaller in the case of the XMARK collection since the average depth is small. We use inverted lists, where the elements are stored. Table 4 shows the queries used in our experiments. We selected queries with different features, involving combinations of AD and PC axes and with up to three branching nodes. Queries TB1 and TB2 cover the highest number of labeled paths.
2 3
The experiments were executed on an AMD Opteron 865 1.8Ghz, 2.0 MB L2 cache; 2GB of DDR333; Windows 2003 Server. http://www.cs.washington.edu/research/xmldatasets/ http://monetdb.cwi.nl/xml/
TJDewey – On the Efficient Path Labeling Scheme Holistic Approach
17
Table 4. Queries used in experiments

Abbreviation | Query | LP count | Result size
TB1 | //PP//NP[.//VP[.//VBG and .//SBAR//NNPS]]//_COMMA_ | 801 | 8
TB2 | //S/VP//PP[./NP/VBN]/IN | 1,298 | 426
TB3 | //SINV//NP[./PP//JJR and .//S]//NN | 429 | 5
TB4 | //SINV[/ADJP[.//RB and .//JJ]]/ADVP/S[./NP]//VP | 22 | 1
TB5 | //_QUOTES_//S[.//VP/SBAR[./_NONE_ and .//SQ/MD and ./S/NP]]//_COMMA_ | 84 | 3
TB6 | //EMPTY/S//NP[/SBAR/WHNP/PP//NN]/_COMMA_ | 57 | 12
XM1 | //asia/item[./description/parlist/listitem//text and ./mailbox/mail//emph]/name | 13 | 1,852
XM2 | //person[.//profile[./gender and ./business and ./gender] and .//address]//emailaddress | 7 | 31,864
XM3 | //open_auctions/open_auction[./annotation/description//text[.//bold/keyword and .//emph] and .//privacy]//reserve | 24 | 1,162
XM4 | //site//open_auction[.//bold/keyword and .//listitem/text]//reserve | 13 | 2,831
XM5 | //asia/item[.//listitem/text]//name | 7 | 5,792
XM6 | //item[.//listitem/text[.//bold/keyword and .//emph]]//name | 96 | 5,642
We compare all described approaches to TPQ processing in terms of processing time, main memory time, and disk access cost:
– Processing time equals the total time spent on twig pattern processing with a cold cache (the OS cache was turned off as well). This time includes labeled path searching (in the case of LPS) and both phases of a holistic algorithm. We process every query three times and compute the average processing time; the individual times were always very similar. We compare the processing time using a common hard disk (HDD, 8.3 ms random seek) and a progressive SSD (solid-state drive, 0.1 ms random seek). The SSD mainly shows that it is possible to minimize the random access issue.
– Main memory time is the part of the processing time covering all computations where the data is already in main memory (I/O operations are not considered).
– Disk access cost (DAC) equals the number of pages read from the secondary storage (I/O accesses). The inverted list in each method works with 4 kB pages; therefore, this parameter also indicates the number of nodes read from the secondary storage.
5.1 Query Performance
We compare TJDewey with TJFast and also with other state-of-the-art approaches such as iTwigJoin+PPS and TwigStack in order to show the competitiveness of our approach. In Figures 4(a-c) and 5(a-c) we observe that TJDewey is the most robust algorithm. TJDewey has the best average processing time; its only problem occurs in the case of many streams per query node (queries TB1, TB2) when the HDD is used. In this case, random accesses to the secondary storage become an issue. It is possible to minimize this issue using the SSD with its 83x lower random seek time. The inefficiency of TJFast is highlighted in the main memory run (Figures 4(a) and 5(a)), where TJFast falls behind even TwigStack, because it performs time-consuming labeled path extraction and string matching operations.
Fig. 4. Experiment results for the TreeBank dataset (queries TB1-TB6; TwigStack, iTwigJoin+PPS, TJFast, TJDewey): (a) main memory processing time, (b) processing time (HDD), (c) processing time (SSD), (d) DAC
Fig. 5. Experiment results for the XMARK dataset (queries XM1-XM6; TwigStack, iTwigJoin+PPS, TJFast, TJDewey): (a) main memory processing time, (b) processing time (HDD), (c) processing time (SSD), (d) DAC
From the solutions stored during the experiment we can observe whether an algorithm was optimal or not. TJDewey is suboptimal only for queries TB2 and TB3; however, TJFast stores unnecessary solutions for queries TB2 – TB6. This difference in optimality is depicted in Example 3. Moreover, TJDewey stores fewer solutions than TJFast in the case of suboptimal queries. If we compare TJDewey and iTwigJoin+PPS, the latter performs well in the case of the XMARK collection. However, iTwigJoin+PPS has problems with random access to the secondary storage as well as with stream sorting when the number of streams per query node is high. It is possible to minimize the random access issue using a warm cache and the SSD; however, the stream sorting issue remains. This issue is clearly visible for query TB2, which covers 1,298 labeled paths: iTwigJoin+PPS performs poorly even when the SSD is utilized. TJDewey also outperforms the other approaches in terms of disk accesses: it combines the advantages of LPS and the TJFast method, where only leaf nodes are read.
6 Conclusion
We propose the TJDewey algorithm, which utilizes labeled paths during twig query processing. Our article shows that the labeled path streaming scheme performs very well even in the case of many labeled paths per query node. We minimize the sorting issue using a logarithmic algorithm, and the issue of random accesses to the secondary storage may be avoided using a progressive SSD. TJDewey is a variant of the TJFast algorithm; it avoids some unnecessary computations of TJFast and allows us to apply any path-based labeling scheme, which can lead to more efficient updates. Our experiments show that the average processing time of TJDewey is up to 40% lower compared to the other considered approaches. The utilization of labeled paths also brings an optimality improvement, where TJDewey does not store some irrelevant solutions stored by TJFast. A formal definition of this optimality is left for future work.
References 1. Al-Khalifa, S., Jagadish, H.V., Koudas, N.: Structural Joins: A Primitive for Efficient XML Query Pattern Matching. In: Proceedings of ICDE 2002. IEEE CS, Los Alamitos (2002) 2. Bača, R., Krátký, M., Snášel, V.: On the Efficient Search of an XML Twig Query in Large DataGuide Trees. In: Proceedings of the Twelfth International Database Engineering and Applications Symposium, IDEAS 2008. ACM Press, New York (2008) 3. Bruno, N., Srivastava, D., Koudas, N.: Holistic Twig Joins: Optimal XML Pattern Matching. In: Proceedings of ACM SIGMOD 2002, pp. 310–321. ACM Press, New York (2002) 4. Chen, S., Li, H.-G., Tatemura, J., Hsiung, W.-P., Agrawal, D., Candan, K.S.: Twig2Stack: Bottom-up Processing of Generalized-tree-pattern Queries Over XML Documents. In: Proceedings of VLDB 2006, pp. 283–294 (2006) 5. Chen, T., Lu, J., Ling, T.: On Boosting Holism in XML Twig Pattern Matching Using Structural Indexing Techniques. In: Proceedings of ACM SIGMOD 2005. ACM Press, New York (2005)
6. Grust, T.: Accelerating XPath Location Steps. In: Proceedings of ACM SIGMOD 2002, Madison, USA. ACM Press, New York (2002) 7. Härder, T., Haustein, M., Mathis, C., Wagner, M.: Node Labeling Schemes for Dynamic XML Documents Reconsidered. Data & Knowledge Engineering 60(1), 126–149 (2007) 8. Jiang, H., Lu, H., Wang, W., Ooi, B.: XR-Tree: Indexing XML Data for Efficient Structural Joins. In: Proceedings of ICDE 2003. IEEE, Los Alamitos (2003) 9. Jiang, H., Wang, W., Lu, H., Yu, J.: Holistic Twig Joins on Indexed XML Documents. In: Proceedings of VLDB 2003, pp. 273–284 (2003) 10. Krátký, M., Bača, R., Snášel, V.: On the Efficient Processing Regular Path Expressions of an Enormous Volume of XML Data. In: Wagner, R., Revell, N., Pernul, G. (eds.) DEXA 2007. LNCS, vol. 4653, pp. 1–12. Springer, Heidelberg (2007) 11. Krátký, M., Pokorný, J., Snášel, V.: Implementation of XPath Axes in the Multi-dimensional Approach to Indexing XML Data. In: Lindner, W., Mesiti, M., Türker, C., Tzitzikas, Y., Vakali, A.I. (eds.) EDBT 2004. LNCS, vol. 3268, pp. 219–229. Springer, Heidelberg (2004) 12. Lu, J., Chen, T., Ling, T.: Efficient Processing of XML Twig Patterns with Parent Child Edges: a Look-ahead Approach. In: Proceedings of ACM CIKM 2004, pp. 533–542 (2004) 13. Lu, J., Ling, T., Chan, C., Chen, T.: From Region Encoding to Extended Dewey: on Efficient Processing of XML Twig Pattern Matching. In: Proceedings of VLDB 2005 (2005) 14. Yoshikawa, M., Amagasa, T., Shimura, T., Uemura, S.: XRel: a Path-based Approach to Storage and Retrieval of XML Documents Using Relational Databases. ACM Trans. Inter. Tech. 1(1), 110–141 (2001) 15. Michiels, P., Mihaila, G., Simeon, J.: Put a tree pattern in your algebra. In: Proceedings of the 23rd International Conference on Data Engineering, ICDE 2007, pp. 246–255 (2007) 16. Moro, M., Vagena, Z., Tsotras, V.: Tree-pattern Queries on a Lightweight XML Processor. In: Proceedings of VLDB 2005, pp. 205–216 (2005) 17. O'Neil, P., O'Neil, E., Pal, S., Cseri, I., Schaller, G., Westbury, N.: ORDPATHs: Insert-friendly XML Node Labels. In: Proceedings of ACM SIGMOD 2004 (2004) 18. Goldman, R., Widom, J.: DataGuides: Enabling Query Formulation and Optimization in Semistructured Databases. In: Proceedings of VLDB 1997, pp. 436–445 (1997) 19. Tatarinov, I., et al.: Storing and Querying Ordered XML Using a Relational Database System. In: Proceedings of ACM SIGMOD 2002, New York, USA, pp. 204–215 (2002) 20. W3 Consortium. Extensible Markup Language (XML) 1.0, W3C Recommendation (February 10, 1998), http://www.w3.org/TR/REC-xml 21. Witten, I.H., Moffat, A., Bell, T.C.: Managing Gigabytes, Compressing and Indexing Documents and Images, 2nd edn. Morgan Kaufmann, San Francisco (1999) 22. Zhang, C., Naughton, J., DeWitt, D., Luo, Q., Lohman, G.: On Supporting Containment Queries in Relational Database Management Systems. In: Proceedings of ACM SIGMOD 2001, pp. 425–436 (2001)
The XMLBench Project: Comparison of Fast, Multi-platform XML libraries Suren Chilingaryan Forschungszentrum Karlsruhe, Hermann-von-Helmholtz-Platz-1, 76344, Eggenstein-Leopoldshafen, Germany
[email protected] http://xmlbench.sourceforge.net
Abstract. XML technologies have brought many new ideas and capabilities to the field of information management systems. Nowadays, XML is used almost everywhere: from small configuration files to multi-gigabyte archives of measurements. Many network services use XML as a transport protocol. XML-based applications utilize multiple XML technologies to simplify software development: DOM is used to create and navigate XML documents, XSD schemas are used to check consistency and validity, XSL simplifies transformation between different formats, and XML Encryption and Signature establish a secure and trustworthy way of exchanging and storing information. These technologies are provided by multiple commercial and open source libraries which vary significantly in features and performance. Moreover, some libraries are optimized for certain tasks and, therefore, the actual library performance can vary significantly depending on the type of data processed. The XMLBench project was started to provide a comprehensive comparison of available XML toolkits in their functionality and their ability to sustain the required performance. The main target was fast C and C++ libraries able to work on multiple platforms. The applied tests compare different aspects of XML processing and are run on a few auto-generated data sets emulating library usage for different tasks. The details of the test setup and the achieved results are presented.
1 Introduction
The rapid spread of XML technologies in information management systems has resulted in the appearance of multiple XML toolkits and standalone tools aimed at automating various parts of XML processing. Event-driven and object-model-based parsers implement the DOM (Document Object Model [1]) and SAX (Simple API for XML [2]) specifications in order to provide applications with general access to XML data. Some of them implement DTD (Document Type Definition [3]), XSD (XML Schema Definition [4]), and Relax-NG [5] to provide consistency and type-conformance checking. A few tools implementing the XML Encryption [6] and XML Signature [7] specifications are used to protect data from unauthorized access and guarantee data integrity. XPath [8] and XML Query [9] based tools are used to search and extract certain information from
XML documents. XSL (Extensible Stylesheet Language [10]) formatters are able to present the original data in multiple ways using different formats. SOAP (Simple Object Access Protocol [11]) is a core component of Web Service architectures and is used to establish message interchange between remote components. OPC XML-DA (Open Process Control, XML Data Access [12]) servers are used to provide access to control data in process automation tasks. Since XML software is nowadays used at almost all stages of data processing, it greatly affects the overall stability and performance of the complete information system. This makes the selection of appropriate tools extremely important. To make a proper selection, the available tools should be analyzed for their ability to cover the required subset of the XML specifications and to stay in sync with the rapidly developing technologies. The performance and memory consumption on the different considered data sets should stay stable over long execution times and fit the parameters of the available hardware. Plenty of comparisons and benchmarks of available XML tools have already been published on the web [13]. The XML Parser Benchmarks by Matthias Farwick and Michael Hafner compare the SAX and DOM performance of a few C and Java parsers [14]. Unfortunately, the authors' interest is mainly in Java parsers, and the very important Xerces-C and Intel XSS are not covered. Dennis Sosnoski has published very interesting results providing a multi-aspect comparison of XML software [15]. Individual results are presented for XML parsing, document construction, document serialization, etc., and memory consumption is reported as well. However, the benchmark is quite outdated and only Java parsers are covered. Some vendors of XML software (for example, Intel [16]) provide benchmarking tools with their software. However, these as well include a very limited set of tested software, and such results should be treated with caution. Java performance has improved greatly in the last years and for many tasks can compete with the performance of C/C++ applications. The advantage of the latter is a much higher potential for exploiting features of modern hardware such as streaming SIMD extensions. Another important aspect is finer-grained control over memory allocation, which allows a significant reduction of memory consumption while working with in-memory representations of big XML documents. Finally, there are many embedded applications running on hardware which is only capable of C code execution. The XMLBench project was started to provide a comprehensive measurement of available and actively developed XML libraries written in the C and C++ languages. Only software available on multiple platforms was considered. The tests exploit a broad range of XML processing tasks using data sets ranging from a few kilobytes up to hundreds of megabytes. Section 2 describes the considered XML toolkits. Section 3 contains details on the benchmark setup, presents the results, and proposes a few ideas for performance improvement. Finally, Section 4 summarizes the obtained results and includes a discussion of their reliability.
2 XML Toolkits
This section describes the tested toolkits and includes information about the latest available version, the subset of provided XML features, the list of supported platforms and languages, etc. The latest versions of all presented toolkits were used during the benchmarks unless stated otherwise in this section.

2.1 Expat
Expat [17] is a very simple and fast XML parser widely used in various open-source software, including the Mozilla project. Although it only implements the SAX API, the Sablotron [18] and Arabica [19] libraries implement the DOM, XPath and XSLT specifications on top of Expat.
Version: Expat 2.0.1 (from 05.06.2007), Arabica Oct 2008, Sablotron 1.0.3
License: Open Source MIT License
Supported Systems: 32/64 bit versions; Windows, Linux, BSD, OS X, Solaris, QNX, HP-UX, AIX
Native Language: C
Language Bindings: Lua, OCaml, Perl, PHP, Python, Ruby, Tcl/Tk
XML Version: XML 1.0, Namespaces, XInclude
API Supported: SAX 1 and 2, DOM level 2
Network: -
Validation: DTD only
Security: -
Query: XPath 1.0
Stylesheets: XSLT 1.0
Web Services: -
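As an illustration of the SAX-style interface Expat provides, the following minimal C++ sketch (our own, not part of the benchmark code) registers a start-element callback and counts the elements of an in-memory document.

    #include <expat.h>
    #include <cstdio>
    #include <cstring>

    // Called by Expat for every start tag; userData carries our counter.
    static void XMLCALL onStartElement(void *userData, const XML_Char *name,
                                       const XML_Char **atts) {
        (void)name; (void)atts;
        ++*static_cast<long *>(userData);
    }

    int main() {
        const char *xml = "<Root><A><B/><B/></A></Root>";
        long elementCount = 0;

        XML_Parser parser = XML_ParserCreate(NULL);       // default encoding
        XML_SetUserData(parser, &elementCount);
        XML_SetElementHandler(parser, onStartElement, NULL);

        if (XML_Parse(parser, xml, (int)std::strlen(xml), 1) == XML_STATUS_ERROR)
            std::fprintf(stderr, "parse error: %s\n",
                         XML_ErrorString(XML_GetErrorCode(parser)));
        else
            std::printf("elements: %ld\n", elementCount);  // prints 4

        XML_ParserFree(parser);
        return 0;
    }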
2.2 Apache XML Project
The Apache XML Project is maintained by The Apache Software Foundation. The project consists of several libraries, each having C++ and Java versions: Apache Xerces [20] is a validating XML parser. Apache Xalan [21,22] provides an implementation of XPath and XSL transformations. Apache Axis [23] provides the Web Services infrastructure. Apache XML Security [24] implements the XML Signature and Encryption specifications. Apache FOP [25] implements XSL-FO. Finally, XPath 2 and XQuery are provided by a third-party project: XQilla [26]. Unfortunately, Xerces-C 3.0 is only supported by Xalan-C 1.11, which has not yet been released. In turn, Apache XML Security depends on Xalan-C, and even the latest snapshot does not include stable support for the Xalan-C 1.11 prereleases. Therefore, most of the tests were executed with Xerces-C 3.0 and a preliminary version of Xalan-C 1.11. The Encryption and Signature benchmarks were run with Xerces-C 2.8.0, Xalan-C 1.10, and Apache XML Security 1.4.0.
Version: Xerces 3.0 (from 29.09.2008), Xalan 1.10, XML Security 1.4.0
License: Open Source Apache License 2.0
Supported Systems: 32/64 bit versions; Windows, Linux, BSD, OS X, Solaris, AIX, HP-UX
Native Language: C++ and Java
Language Bindings: Perl
XML Version: XML 1.0 and 1.1 (partly), Namespaces, XInclude
API Supported: SAX 1 and 2, DOM levels 2 and 3
Network: FTP, HTTP
Validation: DTD, Schema 1.0
Security: Canonical XML 1.0, XML Signature & Encryption
Query: XPath 1.0 and 2.0
Stylesheets: XSLT 1.0, XSL-FO 1.1 (Java only)
Web Services: SOAP 1.1 and 1.2, attachments, binary serialization, WSDL 1.1, JSON

2.3 Gnome XML Library
Gnome XML is a set of libraries developed by various people for the Gnome Project. However, it has spread widely outside of Gnome since then. The core of the toolkit is the Gnome XML and XSLT libraries, also known as LibXML and LibXSLT, respectively [27]. XML security is provided by the XMLSec library [28], a DOM API by the GDome library [29], SOAP by CSOAP [30], and XSL-FO by xmlroff [31]. Due to unresolved problems, version 2.7.3 of LibXML was considerably slower in the creation/serialization of big DOM documents than the 2.6.x branch. For that reason, the results achieved with LibXML 2.6.32 are presented in the DOM creation benchmark; LibXML 2.7.3 was used to obtain all other results.
Version: LibXML 2.7.3 (from 18.01.2009), LibXSLT 1.1.24, XmlSec 1.2.11, GDome 0.8.1
License: Open Source MIT License
Supported Systems: 32/64 bit versions; Windows, Linux, BSD, OS X, Solaris, QNX
Native Language: C
Language Bindings: Perl, PHP, Python, Ruby, Tcl/Tk
XML Version: XML 1.0 5th Ed., Namespaces, XInclude
API Supported: SAX 1 and 2, native DOM style, DOM level 2
Network: FTP, HTTP
Validation: DTD, Schema 1.0 (partly), Relax-NG
Security: Canonical XML 1.0, XML Signature & Encryption
Query: XPath 1.0
Stylesheets: XSLT 1.0, EXSLT, XSL-FO 1.1 (partly)
Web Services: SOAP 1.1
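For orientation, an XPath 1.0 lookup with LibXML might look like the sketch below; the file name and the expression are placeholders and error handling is trimmed to the minimum.

    #include <libxml/parser.h>
    #include <libxml/xpath.h>
    #include <cstdio>

    int main() {
        xmlDocPtr doc = xmlReadFile("sample.xml", NULL, 0);      // placeholder input
        if (!doc) return 1;

        xmlXPathContextPtr ctx = xmlXPathNewContext(doc);
        xmlXPathObjectPtr res =
            xmlXPathEvalExpression(BAD_CAST "//item[@id]", ctx); // placeholder query

        if (res && res->nodesetval)
            std::printf("matched %d nodes\n", res->nodesetval->nodeNr);

        xmlXPathFreeObject(res);
        xmlXPathFreeContext(ctx);
        xmlFreeDoc(doc);
        xmlCleanupParser();
        return 0;
    }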
2.4 QT Software XML Module
Qt [32] is a popular cross-platform application framework. Since version 3 it has included a module providing some XML processing capabilities.
Version: Qt 4.4.3 (from 27.09.2008)
License: Commercial, Open Source GPL & LGPL Licenses
Supported Systems: 32/64 bit versions; Windows, Linux, BSD, OS X, Solaris, QNX, HP-UX, AIX
Native Language: C++
Language Bindings: Python
XML Version: XML 1.0, Namespaces
API Supported: SAX 2, DOM level 2
Network: FTP, HTTP
Validation: DTD only
Security: -
Query: XQuery 1.0, XPath 2.0
Stylesheets: -
Web Services: -

2.5 Intel XML Software Suite
Intel has developed its XML Software Suite (XSS [33]) using all the capabilities of its newest processors. XSS is heavily optimized and utilizes the full SSE instruction set, including SSE4. XSS is also the only tested toolkit which can benefit from multiple cores while performing manipulations on a single document. However, using a dual-core machine and the current benchmark set, the results did not show any performance benefit; therefore, multi-core results are not included in this article.
Version: Intel XSS 1.2 (from 19.01.2009)
License: Commercial, sources are not available
Supported Systems: 32/64 bit versions; Windows, Linux, HP-UX
Native Language: C++/Java
Language Bindings: -
XML Version: XML 1.0 5th Ed., Namespaces, XInclude
API Supported: SAX 2, DOM level 2 and partial level 3, StAX parser (Java only)
Network: FTP, HTTP
Validation: DTD, XML Schema 1.0
Security: -
Query: XPath 1.0
Stylesheets: XSLT 1.0
Web Services: -
2.6 Oracle XDK
The Oracle XML Development Kit (XDK [34]) is developed by the Oracle Corp. to provide XML capabilities in its database solutions.
Version: Oracle XDK 10g 10.2.0.2.0 (from 07.04.2006)
License: OTN, sources are not available
Supported Systems: 32 bit only; Windows, Linux, HP-UX, AIX
Native Language: C/C++/Java
Language Bindings: -
XML Version: XML 1.0, Namespaces, XInclude
API Supported: DOM level 2 and 3, StAX (Java only), SAX 1
Network: FTP, HTTP
Validation: DTD, XML Schema 1.0
Security: -
Query: XPath 1.0, XPath 2.0 in Java version
Stylesheets: XSLT 1.0, XSLT 2.0 in Java version
Web Services: -

3 Benchmark Results
3.1 System Setup
The hardware platform was a Fujitsu-Siemens Celsius W350 workstation:
– Intel Core Duo E6300 (1.86 GHz, 2 MB L2 cache)
– Fujitsu-Siemens D3217-A, Intel Q965 chipset (ICH8R, 1066 MHz bus)
– 4 GB DDR2-800 memory
All presented results were obtained using the 64-bit version of Gentoo Linux:
– Kernel 2.6.25
– GNU C Library 2.6.1
– GNU C Compiler 4.2.4
– Java SE Runtime Environment 1.6.0_07 (64-bit)
– -O2 -march=nocona optimization flags were used to build all open source libraries
The XML data was always pre-generated and loaded into memory before running a benchmark; therefore, hard drive performance does not affect the results. A few DOM benchmarks on big data sets were the only exception: in those cases some of the parsers used more than 4 GB of memory and disk swapping occurred. However, such results were in any case far above the rejection threshold and were scored with the worst value anyway.
1 Implemented on top of DOM according to the benchmark results
3.2 Data Description
Four different data sets were used in the benchmarking process. None of them included DTD definitions.
– RDF - A big RDF (Resource Description Framework [35]) document from the DMoz.org project describing various web resources. It includes nodes from a few namespaces and has a constant depth of 3 levels.
– XMLGen - A scalable data generator producing very simple XML content: 4 levels of depth, no namespaces, and a very limited number of different XML nodes.
– XMark - Another scalable data generator, provided by the XMark Project [36]. It produces XML documents modeling an auction website. The XML has a slightly more complicated structure: up to 8 levels of depth and a higher variety of XML elements. Namespaces are not used.
– OPCGen - A data generator emulating the behavior of an OPC XML-DA server [12]. The SOAP messages used in the data exchange are generated. The size of some of these messages is controlled by the scaling parameter; others stay constant. The nodes and attributes belong to 3 different namespaces. The depth of the generated XML depends on the message type and varies from 1 to 5.

3.3 Benchmark Setup
For each toolkit, the time required to process the data is measured. This time is then divided by the time required by a reference implementation to accomplish the same task on the same data. The resulting value is called the performance index and is shown in the performance charts below (smaller index values represent better results). To prevent a single failed test from poisoning the overall result, the maximal value of the performance index for a single run is limited to 10 (and 15 for the DOM parsing benchmark; see the explanation below). The libraries from the Gnome XML Toolkit (LibXML, LibXSLT, and XMLSec) are used as the reference implementations. All tests were run in single-user mode without any system services running. To eliminate any possibility of negative interference, a standalone test application was executed for each task, each type of data, and each XML toolkit. These applications evaluated the performance on multiple document instances of a given data set and an averaged value was used. The scaling parameters of the data generators remained the same, and the values of text nodes as well as attributes were generated randomly. The number of documents was selected so that a single test would need about 5 to 10 minutes to complete; at least 10 documents were processed in order to ensure result stability. The toolkit initialization, if possible, was performed before the data processing, and the time spent in initialization is not counted in the benchmark scores. The time required to generate the XML and to load it into memory is excluded as well.
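As an illustration of the scoring just described, here is a minimal sketch (our own, with hypothetical timing inputs) of how the capped performance index and its average over document instances could be computed.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Performance index: toolkit time relative to the reference implementation,
    // capped so that a single failed run cannot dominate the aggregate score.
    double performanceIndex(double toolkitSeconds, double referenceSeconds,
                            double cap = 10.0) {
        return std::min(toolkitSeconds / referenceSeconds, cap);
    }

    // Average index over all processed document instances of one data set.
    double averageIndex(const std::vector<double>& toolkitTimes,
                        const std::vector<double>& referenceTimes,
                        double cap = 10.0) {
        double sum = 0.0;
        for (std::size_t i = 0; i < toolkitTimes.size(); ++i)
            sum += performanceIndex(toolkitTimes[i], referenceTimes[i], cap);
        return sum / toolkitTimes.size();
    }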
3.4 Parsing Benchmark
The parsing benchmark evaluates the time required to parse an XML document and provide encompassed information to the application. The parsing should
be performed before any other action can be done on XML data. Hence, parsing performance contributes significantly to the overall performance of any application that uses XML data. Currently, two main parsing approaches are widely used. DOM parsers construct a linked tree of objects which provide methods to obtain and set the values of associated properties and to navigate to neighboring elements [1]. A SAX application defines a set of callbacks which are called when the corresponding data is encountered in the XML stream [2]. Basically, the DOM representation is much easier to handle and can be used to alter the content of an XML document. SAX parsers are normally faster and do not have limitations on the size of the XML document. To show the relative performance of SAX and DOM parsers, the timings achieved by the SAX-mode parser of LibXML are used to calculate performance indexes in both the SAX and DOM cases. The maximal allowed index of a DOM parser is raised to 15 to acknowledge the fact that most DOM parsers are slower than their SAX counterparts. Figure 1 presents the results of the parsing benchmark. DTD and other means of validity checking are not used.
Fig. 1. Benchmark results of SAX (top) and DOM (bottom) mode parsers
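To make the DOM/SAX distinction concrete, the sketch below parses an in-memory buffer in DOM mode with LibXML and walks the resulting tree; it is our own illustration of the parse-then-traverse pattern, not the benchmark driver.

    #include <libxml/parser.h>
    #include <libxml/tree.h>
    #include <cstdio>
    #include <cstring>

    // Recursively count element nodes in the DOM tree built by the parser.
    static long countElements(xmlNodePtr node) {
        long n = 0;
        for (xmlNodePtr cur = node; cur; cur = cur->next) {
            if (cur->type == XML_ELEMENT_NODE)
                ++n;
            n += countElements(cur->children);
        }
        return n;
    }

    int main() {
        const char *xml = "<Root><A x=\"1\"/><A><B>text</B></A></Root>";
        xmlDocPtr doc = xmlReadMemory(xml, (int)std::strlen(xml),
                                      "inmemory.xml", NULL, 0);
        if (!doc) return 1;

        std::printf("elements: %ld\n", countElements(xmlDocGetRootElement(doc)));
        xmlFreeDoc(doc);
        xmlCleanupParser();
        return 0;
    }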
3.5 DOM Manipulation Benchmark
This benchmark evaluates the ability of the XML toolkits to manipulate data in the DOM representation. First, an XMLGen-style document is created using the DOM API and then serialized into a UTF-8 string. In the tests presented in Figure 2 the resulting string sizes vary from 5 KB to 90 MB.
Fig. 2. Results of DOM manipulation benchmark
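A sketch of the two steps of this benchmark using the LibXML API (our own illustration; the document built here is deliberately tiny, whereas the benchmark scales it up to tens of megabytes).

    #include <libxml/parser.h>
    #include <libxml/tree.h>
    #include <cstdio>

    int main() {
        xmlDocPtr doc = xmlNewDoc(BAD_CAST "1.0");
        xmlNodePtr root = xmlNewNode(NULL, BAD_CAST "Root");
        xmlDocSetRootElement(doc, root);

        // Build a tiny XMLGen-like tree: a few children with text values.
        for (int i = 0; i < 4; ++i) {
            xmlNodePtr a = xmlNewChild(root, NULL, BAD_CAST "A", NULL);
            xmlNewChild(a, NULL, BAD_CAST "B", BAD_CAST "some text");
        }

        // Serialize the whole document into a UTF-8 encoded string.
        xmlChar *buffer = NULL;
        int size = 0;
        xmlDocDumpMemoryEnc(doc, &buffer, &size, "UTF-8");
        std::printf("serialized %d bytes\n", size);

        xmlFree(buffer);
        xmlFreeDoc(doc);
        xmlCleanupParser();
        return 0;
    }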
3.6 Schema Validation Benchmark
Figure 3 compares time needed by toolkits to validate documents against XML Schema [4].
Fig. 3. Results of XML schema validation benchmark
The data is generated by the XMLGen and OPCGen generators. The XMLGen schema is very simple: it does not include any type checking and consists of only 20 lines. The OPC data has a complex structure and is described by a rather complex schema definition involving checks of multiple XSI types. To get the pure validation time, the non-validating parsing time is subtracted from the complete time spent on parsing and validation. The validation context is created only once at the initialization stage.
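The corresponding validation step, sketched with the LibXML schema API under our own simplifications (placeholder file names, no error handling).

    #include <libxml/parser.h>
    #include <libxml/xmlschemas.h>
    #include <cstdio>

    int main() {
        // Compile the schema once (this part belongs to initialization).
        xmlSchemaParserCtxtPtr pctx = xmlSchemaNewParserCtxt("schema.xsd");
        xmlSchemaPtr schema = xmlSchemaParse(pctx);
        xmlSchemaFreeParserCtxt(pctx);
        xmlSchemaValidCtxtPtr vctx = xmlSchemaNewValidCtxt(schema);

        // Parse and validate one instance document.
        xmlDocPtr doc = xmlReadFile("instance.xml", NULL, 0);
        int rc = xmlSchemaValidateDoc(vctx, doc);   // 0 means the document is valid
        std::printf("validation result: %d\n", rc);

        xmlFreeDoc(doc);
        xmlSchemaFreeValidCtxt(vctx);
        xmlSchemaFree(schema);
        xmlCleanupParser();
        return 0;
    }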
3.7 XML Security Benchmark
The security capabilities are measured by two independent tests:
– The first evaluates the time required to sign an XML document with a digital signature and then verify this signature.
– The second measures the time used to encrypt an XML document and then decrypt it back.
The results of both tests are presented in Figure 4. The key generation and the initialization of the security contexts are performed during test initialization and do not affect the results. The time needed to parse the original document is not included either.
Fig. 4. Results of XML security benchmark
3.8 XSL Transformation Benchmark
This benchmark evaluates the performance of XSL transformation engines [10]. The XMLGen and RDF samples are converted to HTML. A rather big ODF (Open Document Format) document is converted to the MediaWiki format. Both XMLGen and RDF are processed using tiny and simple XSL stylesheets; however, the stylesheet used to process the RDF document demands a reference lookup over a big node-set. The stylesheet used to process the ODF document is taken from the OpenOffice 3 distribution and has a much more sophisticated structure. The performance charts are depicted in Figure 5. The transformation engine is created only once at the initialization stage, and the time needed to parse the original documents is excluded from the presented results.
Fig. 5. Results of XSL transformation benchmark
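For reference, the LibXSLT code path of such a test roughly follows the pattern below; the file names are placeholders and error handling is omitted.

    #include <libxml/parser.h>
    #include <libxslt/xslt.h>
    #include <libxslt/xsltInternals.h>
    #include <libxslt/transform.h>
    #include <libxslt/xsltutils.h>

    int main() {
        // Compile the stylesheet once at initialization time.
        xsltStylesheetPtr style = xsltParseStylesheetFile(BAD_CAST "to_html.xsl");

        // Parse the input and apply the transformation (no parameters).
        xmlDocPtr input = xmlReadFile("input.xml", NULL, 0);
        const char *params[] = { NULL };
        xmlDocPtr output = xsltApplyStylesheet(style, input, params);

        // Serialize the result using the output settings of the stylesheet.
        xsltSaveResultToFilename("output.html", output, style, 0);

        xmlFreeDoc(output);
        xmlFreeDoc(input);
        xsltFreeStylesheet(style);
        xsltCleanupGlobals();
        xmlCleanupParser();
        return 0;
    }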
Fig. 6. Impact of the compiler on the performance of the Apache Xerces and Xalan libraries. The timings of GCC 4.2.4/Glibc 2.6.1 (with standard optimization flags: -O2 -march=nocona) were used as reference values to calculate the performance indexes (this combination was used to build the libraries for the rest of the tests). The O3 optimization flags include: -O3 --unroll-loops -mfpmath=sse,387 -march=nocona. The O2 flags include: -O2 -march=nocona. The -xT flag of the Intel C Compiler enables SSE3 usage.
3.9 Software Optimization
A few things can be done to optimize the speed of XML applications linked with the open source libraries. The performance depends significantly on the compiler and optimization flags used to build the stack of XML libraries. However, in most Linux distributions the software is compiled with safe but not always optimal flags. A considerable performance increase can be achieved by recompiling certain libraries with better optimization flags. Upgrading to a 64-bit version of the operating system and installing the latest versions of the GCC compiler and the Glibc system library can improve performance as well. A basic estimation of the performance impact of different compilers and optimization options can be found in Figure 6. However, this is highly dependent on the libraries and hardware used and should be evaluated for each specific case. If the XML data includes text written in non-Latin alphabets, it becomes very important to select the proper character encoding. Figure 7 illustrates this fact. LibXML treats data internally in the UTF-8 encoding; the performance decreases significantly if the UTF-16 encoding is used for the data. Vice versa, Xerces-C has a UTF-16 internal data representation and the performance drops if a UTF-8 encoded document is passed. Finally, we want to reference two research projects which can be useful if performance is the ultimate goal. AsmXml is a very efficient open source XML parser providing a DOM-style API. It has a very limited set of capabilities (even namespaces are not supported) and requires a schema file describing the syntax of the XML documents which are going to be
Fig. 7. Performance of Xerces/C and LibXML libraries depending on Unicode encoding used to store Cyrillic characters. The time needed by LibXML to process a Latin document was used to calculate performance indexes in this test.
Fig. 8. Performance of AsmXml parser compared to Intel XML Suite and LibXML. AsmXml timings were used to calculate performance indexes in this test.
parsed and, for those reasons, was not included in the benchmarks. However, it is implemented in pure assembler and has a very efficient memory model [37]. The results shown in Figure 8 indicate that it is 3 to 10 times faster than the Intel XML Suite and LibXML in parsing XMLGen documents. Developers at IBM have implemented a non-validating high-performance XML parser on the IBM Cell platform. The parser consists of a front-end that provides a SAX application interface and eight back-end parse threads on the Cell SPEs. According to the published results, the parser is 2.3 times as fast as LibXML on an Intel Pentium M processor [38].
4 Discussion
As with any other benchmark results, the presented scores should be treated cautiously. As can be seen from the charts, the performance is highly dependent on the type of data and on the task performed. This point is especially well illustrated by the XSL transformation benchmark presented in Figure 5. The performance index of the Intel XML Suite (the ratio between the performance of the Intel XML Suite and LibXSLT) varies by more than a factor of 10 depending on the data set used. Xalan-C is in some tests up to 2 times faster than LibXSLT and in others 2 times slower. Figure 7 indicates that the parsing performance depends on the encoding used to code Unicode characters. The C library, compiler and
optimization flags can have a significant impact on performance as well. Figure 6 shows that the proper selection of compiler and optimization flags alone may give an extra 15% of performance. Finally, non-performance issues should be considered. Not all toolkits implement the specifications completely and properly. Various defects and gaps in the implementations are not easy to detect using benchmarks that evaluate only a few standard test cases. Taking the aforesaid into account, the Gnome XML Toolkit is clearly the fastest among the open source toolkits. It has a relatively rich set of features and shows good performance in most of the tests. The main problems are the low performance in some of the XSL transformation tests and the incomplete implementation of the XML Schema specification. The Apache XML Toolkit is slightly slower on average, especially in the tests related to XML security. However, it has the best list of supported features and its schema implementation is much more complete. The Intel XML Suite clearly has the top performance. Thoroughly using hardware optimizations, it provides the best results in most of the benchmarks. In the rest, actually a few DOM and Schema tests, it is only a little behind the winning LibXML and Xerces/Java (respectively). Especially impressive performance is demonstrated in the XSL transformation benchmark: the Intel XML Suite managed to convert the OpenOffice document to the MediaWiki format 10 times faster than any of the competing tools. In the other XSLT tests Intel also outperformed all competitors by 2 to 8 times. The drawbacks are as follows: it is a commercial product, it is only supported on Intel and HP platforms, and the XML Security implementation is still missing. The Java version of the Apache XML Toolkit has shown very good results as well. In most cases its performance is comparable with the fastest C libraries, and it even got the best score in the DOM manipulation benchmark. The main problem was memory consumption: in a few tests involving the processing of big documents, Java failed to perform the required actions using the available system memory and disk swapping occurred.
References 1. W3C: Document object model (2000), http://www.w3.org/TR/DOM-Level-2-Core/ 2. Megginson, D.: Simple api for xml (sax) (2004), http://www.saxproject.org 3. W3C: Extensible markup language (xml) 1.0 (2008), http://www.w3.org/TR/REC-xml 4. W3C: Xml schema part 0: Primer (2004), http://www.w3.org/TR/xmlschema-0/ 5. OASIS: Relax ng, iso/iec 19757-2:2003 (2001), http://www.oasis-open.org/committees/relax-ng/spec-20011203.html 6. W3C: Xml encryption syntax and processing (2002), http://www.w3.org/TR/xmlenc-core/ 7. W3C: Xml signature syntax and processing (2008), http://www.w3.org/TR/xmldsig-core/ 8. W3C: Xml path language (xpath) (1999), http://www.w3.org/TR/xpath 9. W3C: Xquery 1.0: An xml query language (2007), http://www.w3.org/TR/xquery/
10. W3C: Xsl transformations (1999), http://www.w3.org/TR/xslt 11. W3C: Soap version 1.2 part 0: Primer (2003), http://www.w3.org/TR/2003/REC-soap12-part0-20030624/ 12. OPC Foundation: Opc xmlda 1.01 specification (2004), http://opcfoundation.org 13. Mlynkova, I.: Xml benchmarking: Limitations and opportunities. Technical report, Charles University, Prague, Czech Republic (2008), http://www.ksi.mff.cuni.cz/~ mlynkova/doc/tr2008-1.pdf 14. Farwick, M., Hafner, M.: Xml parser benchmarks (2007), http://www.xml.com/pub/a/2007/05/16/xml-parser-benchmarks-part-2.html 15. Sosnoski, D.: Xmlbench document model benchmark (2002), http://www.sosnoski.com/opensrc/xmlbench/index.html 16. Intel: Xml benchmark tool (2009), http://software.intel.com/en-us/articles/intel-xml-software-products/ 17. Expat Team: The expat xml parser (2007), http://expat.sourceforge.net 18. Ginger Alliance: Sablotron: Xslt, dom and xpath processor (2006), http://www.gingerall.org/sablotron.html 19. Higgins, J.: Arabica xml and html processing toolkit (2008), http://www.jezuk.co.uk/arabica 20. Apache Foundation: Apache xerces (2008), http://xerces.apache.org 21. Apache Foundation: Apache xalan-c (2007), http://xml.apache.org/xalan-c/ 22. Apache Foundation: Apache xalan-j (2007), http://xml.apache.org/xalan-j/ 23. Apache Foundation: Apache axis (2008), http://ws.apache.org/axis2/ 24. Apache Foundation: Apache xml security (2007), http://santuario.apache.org 25. Apache Foundation: Apache fop (formating objects processor) (2008), http://projects.apache.org/projects/fop.html 26. XQilla Team: Xqilla (2009), http://xqilla.sourceforge.net 27. Veillard, D.: The xml c parser and toolkit of gnome (2009), http://xmlsoft.org 28. Sanin, A.: Xmlsec library (2007), http://www.aleksey.com/xmlsec/ 29. Casarini, P.: Gnome dom engine (2003), http://gdome2.cs.unibo.it 30. Ayaz, F.: Client/server soap library in pure c (2006), http://csoap.sourceforge.net 31. XMLroff Team: Xmlroff xsl formatter (2008), http://xmlroff.org 32. QT Software: Qt cross-platform application and ui framework (2009), http://www.qtsoftware.com 33. Intel: Intel xml software suite (2009), http://software.intel.com/en-us/articles/intel-xml-software-suite/ 34. Oracle: Oracle xml developer kit 10g (2006), http://www.oracle.com/technology/tech/xml/xdkhome.html 35. W3C: Resource description framework (2004), http://www.w3.org/TR/rdf-syntax-grammar/ 36. Schmidt, A.R., Waas, F., Kersten, M.L., Carey, M.J., Manolescu, I., Busse, R.: Xmark: A benchmark for xml data management. In: Proc. of Int. Conf. on Very Large Databases (VLDB), Hong Kong, China, pp. 974–985 (2002), http://www.xml-benchmark.org 37. Kerbiquet, M.: Asmxml (2008), http://mkerbiquet.free.fr/asm-xml/ 38. Letz, S., Zedler, M., Thierer, T., Schuetz, M., Roth, J., Seiffert, R.: Xml offload and acceleration with cell broadband engine. In: Proc. of XTech 2006, Amsterdam, Netherlands (2006), http://xtech06.usefulinc.com/schedule/paper/27
A Synthetic, Trend-Based Benchmark for XPath Curtis Dyreson1 and Hao Jin2 1
Department of Computer Science, Utah State University, Logan, Utah USA
[email protected] 2 Expedia.com, Seattle, Washington USA
[email protected]
Abstract. Interest in querying XML is increasing as it becomes an important medium for data representation and exchange. A core component in most XML query languages is XPath. This paper describes a benchmark for comparing the performance of XPath query evaluation engines. The benchmark consists of an XML document generator which generates synthetic XML documents using a variety of benchmark-specific control factors. The benchmark also has a set of queries to compare XPath evaluation for each control factor. This paper reports on the performance of several, popular XPath query engines using the benchmark and draws some general inferences from the performance. Keywords: Benchmark, query processing, XML, XPath.
1 Introduction
The fast-growing use of XML increases the need for efficient, flexible query languages specifically designed for XML. There are several query languages for XML data collections. Examples include XML-QL [8], LOREL [1], XQL [14], and XQuery. An important part of many XML query languages, especially those promulgated by the W3C, is XPath [7]. XPath is a language for addressing locations in an XML document. For instance, the XPath expression '(//paragraph)[5]' locates the fifth paragraph element in a document. XPath was developed in part by the XML Query and XSL working groups. In addition to being used in XQuery, XPath is also a core component of XSL Transformations (XSLT) and XPointer. XPath is an important part of these languages because each needs some mechanism to address locations within an XML document. A benchmark is a quantitative comparison of system performance. Benchmarks can be classified as either generic or application-specific. A generic benchmark measures general system performance, independent of an application. An application-specific benchmark, on the other hand, is tailored to synthesize a workload for a particular application domain. Generic benchmarks are useful because implementing and measuring a specific application on many different systems is prohibitively expensive. A limitation of generic benchmarks, though, is that no single metric can measure the performance of systems on all applications. Depending on the application domain, the performance of a system could vary enormously, and systems designed for a specialized domain may have weaker performance in other domains.
At this stage of XML development, there is no commonly agreed standard of XML application scenarios. So this paper reports on a generic benchmark. The benchmark focuses on measuring the cost of query processing. XPath queries are evaluated against a tree-like data model. Queries typically traverse part of the tree-like data model. The efficiency of the tree-traversal has a major impact on the cost of query processing. The tree can vary in depth, density, size, and the kind of information in each node. We designed an XML document generator which generates XML documents that conform to several factors that control the shape and size of the tree. By varying only one of the control factors (e.g., tree depth) and keeping the other factors constant the benchmark is able to isolate the impact of that factor on query performance. The benchmark also includes a suite of query templates that can be instantiated to produce a set of benchmark queries. Overall, the benchmark is designed to assess the impact of trees of different sizes and shapes on query performance. This will help query engine developers understand and evaluate implementation alternatives, and also help users to decide which query engine best fits their needs. This paper also reports on the results of running the benchmark on a number of popular XPath query engines. Among them, there are four Java packages, Saxon [9], Xalan-Java [16], Jaxen [20], and DOM4j [19]; two C++ packages, MSXML 4.0 [21], and Xalan-C++ [17]; and also three XML databases, Xindice [18], eXist [11], and one (unnamed) leading commercial XML DBMS. Benchmarks of Internet and web servers are becoming more important and widely discussed. There have been several benchmarks proposed for evaluating the performance of XML data management systems. XMark [15] is a benchmark for XML data stores. The benchmark consists of an Internet auction application scenario and twenty XQuery challenges designed to cover the essentials of XML query processing. XOO7 [12] is an XML version of the OO7 benchmark [6], which is a benchmark for OODBMSs. The OO7 schema and instances are mapped into a Document Type Definition (DTD) and the corresponding XML data sets. Then the eight OO7 queries are translated into three respective languages for query processing engines: Lore, Kweelt, and an ORDBMS. XMach-1 [4] is based on a web application that consists of text documents, schema-less data, and structured data. XMach-1 differs from other benchmarks in this area insofar as it is a multi-user benchmark, and it is based on a web-oriented usage scenario of XML data, not just the data store. The performance metrics that XMach-1 evaluates are throughput and cost effectiveness, so it is closer to the metrics that TPC-A and TPC-B use. The Michigan benchmark [13] is a micro-benchmark for XML data management systems. A micro-benchmark targets the performance of basic query operations such as selections, joins, and aggregations. The data set of the Michigan benchmark is generated by a synthetic XML generator, rather than from a particular application scenario. The primary difference between our benchmark and all the above benchmarks is that we are most interested in how the basic properties of an XML document and XPath query affect the performance of XPath query execution. In most benchmarks, each test case consists of a set of fixed parameters that are chosen to test the performance of the system under a particular (usually typical) configuration. We took a different approach to design the test cases. 
In each of our test cases, we allow one parameter to vary. The purpose is to enable us to study how this single factor affects
the query performance. So our test cases are trend-based tests rather than fixed-value tests. Another important difference is that our benchmark is not domain-specific. XMark, XOO7, and XMach-1 are domain-specific benchmarks. Our benchmark is similar to Michigan's micro-benchmark in the sense that we are studying the atomic properties that affect general cases of XPath query execution. We use a synthetic XML document generator rather than generating documents that conform to an application-specific XML schema.
2 XML Document Generator This section describes the XML document generator. The generator is used to create each of the benchmark’s data sets. Since the focus of this paper is on benchmarking the performance of XPath evaluation, we will describe the document generator in terms of the kinds of “trees” that are induced by the generated documents. Some benchmarks focus on including as much natural language as possible when generating documents for testing, e.g., the XMach-1 benchmark [4] and the Michigan benchmark [13]. Our approach is different. We generate synthetic data since XML query evaluation is based on data syntax rather than semantics. The generator has many control factors to manipulate the shape and content of the tree data model. We have carefully chosen a set of factors that we think are important properties of an XML document and hence, may have a significant impact on the performance of XPath queries. These factors represent the most common and influential properties of an XML document in the context of XPath query evaluation. We also chose control factors that are basic and do not depend on other factors, or combinations of factors. With these control factors, we are able to precisely control the document generation and isolate the impact of an individual factor on query evaluation. There are eleven control factors listed in Table 1. The factors are divided into two groups. The first group, the tree shape group, controls the general shape of the XML tree model. The second group, the tree data group, consists of factors that are more relevant to the content of the tree, such as the number of attributes, and length of text values. Though the number of attributes also affects the shape of the tree, since attribute nodes are on a special axis, we decided to place that control factor in the tree data group. The factors are described in more detail below. Table 1. XML document generator control factors
Group        Control factors
Tree shape   Number of root children, Depth, Bushiness, Density
Tree data    Number of attributes, Magic level, Selectivity, Magic position, Magic length, Text length, Random name
The generated document has a single root element named “Root”. The number of elements in the level below the root is controlled by the root children factor. Specifying a large number of root children will force the tree to broaden quickly. Since this is the first level containing real data, we call it level one. Names of
elements in level one begin with the letter ‘A’. Below level one, the number of children for a node is specified by the bushiness control. Note that the bushiness does not apply to the first level since the root children are a special case. In general, element names at level n begin with the nth letter in alphabetical order. The reason for using this naming scheme is that it allows us to construct XPath queries that descend to a desired level. For instance to descend to level five we might use the query ‘//E’. Each level also has some embedded magic elements. A magic element has a fixed suffix of ‘Magic’ in order to distinguish it from the non-magic elements at that level. The selectivity factor controls the percentage of magic elements. By setting a low selectivity only a small percentage of the elements in a level will be magic. A query that locates these elements, e.g., ‘//Emagic’, will generate a small result. Fig. 1 depicts how some of the factors impact the shape and content of the generated tree. The number of root children factor controls the level below the root. The bushiness controls the number of children of other interior nodes. The depth is the total number of levels. In the figure, some nodes at the leaf level are magic.
Fig. 1. XML Document Generator Data Model (tree sketch indicating the number of root children, the bushiness of interior nodes, the depth, and magic nodes at the leaf level)
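A small sketch of the naming convention just described (our own illustration, not the benchmark's generator code): the element name at level n starts with the n-th letter of the alphabet, and elements at the magic level receive the 'Magic' suffix with a probability given by the selectivity.

    #include <cstdlib>
    #include <string>

    // Element name for a node at the given level (level 1 -> "A", level 2 -> "B", ...).
    // A node at the magic level becomes magic (e.g. "EMagic" at level 5) with a
    // probability of selectivityPercent; magicLevel 0 means magic at all levels.
    std::string elementName(int level, int magicLevel, int selectivityPercent) {
        std::string name(1, static_cast<char>('A' + level - 1));
        bool atMagicLevel = (magicLevel == 0) || (level == magicLevel);
        if (atMagicLevel && (std::rand() % 100) < selectivityPercent)
            name += "Magic";
        return name;
    }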
The following list gives a detailed description of the eleven control factors in the XML document generator. 1) Number of root children — The number of root children controls the number of sub-trees under the root element node. A tree with only a few root children will have only a few nodes in level one, and by extension only a few nodes in the top few levels. By increasing the number of root children, the tree will acquire immediate breadth at the top. 2) Depth — The depth is the maximum number of levels in the tree, or the maximum depth of sub-element nesting in the generated document. The count starts from the level beneath the root, so the root is at level 0, the level with elements named ‘A’ is at level 1, etc.
3) Bushiness — The bushiness is the number of children of each element node, that is, the number of sub-elements in each element in the generated document. The parameter is used to control the width of the generated tree. 4) Number of attributes — Controls how many attributes each element can have. 5) Density — Using the first three control factors (number of root children, depth, and bushiness), the total number of elements in the tree can be pre-determined for a complete tree. But we would like some trees to be sparse. The density is the percentage of nodes generated relative to the number of nodes in a complete tree. For instance a density of 50% will generate a tree only half as full as a complete tree of the same depth, bushiness, and root children. 6) Magic level — The magic level is the level in the tree at which magic nodes appear. Distinguishing between magic and normal nodes controls the size of a query result. For instance, setting the magic level to 5 will generate some ‘EMagic’ nodes (how many depends on the selectivity). By setting the level to 0 magic nodes are inserted at all levels. 7) Selectivity — The selectivity is the percentage of magic nodes at a level, specified by the magic level control factor. A selectivity of 20 means that 20% of the nodes (randomly selected) at the magic level will be magic nodes. 8) Magic Positions — XPath queries can locate not only the element nodes, but also attribute and text nodes. This control factor specifies where magic nodes are positioned. There are four options: 1) element name, 2) attribute name, 3) attribute value, and 4) the text. 9) Magic length — By default, element and attribute names are a single character in length. Magic nodes have the string “Magic” appended. The magic length control factor adds a suffix of the specified length to the magic nodes. Since the magic nodes are the ones queries often select, this parameter can help to evaluate the impact of matching long vs. short strings. 10) Text length — This parameter controls the length of the text nodes in the document. The length can range from 0, which means no text nodes to any selected length. 11) Random name — By default, all of the non-magic elements in a level have the same name. The random name control factor increases the diversity of element names. When this control factor is selected, a random number is appended to the end of its regular name, so nodes will have different names (which impacts element indexes).
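One plausible reading of how the density factor interacts with the shape factors is sketched below under our own assumptions: the size of a complete tree is fixed by the number of root children, the bushiness, and the depth, and the density then scales the number of generated nodes.

    #include <cstdio>

    // Nodes in a complete tree (excluding the "Root" element): level 1 holds
    // rootChildren nodes, and every node below it has `bushiness` children
    // down to `depth` levels.  This is our reading of control factor 5.
    long completeTreeSize(long rootChildren, long bushiness, int depth) {
        long total = 0, levelNodes = rootChildren;
        for (int level = 1; level <= depth; ++level) {
            total += levelNodes;
            levelNodes *= bushiness;
        }
        return total;
    }

    int main() {
        long complete = completeTreeSize(100, 4, 5);   // hypothetical settings
        double density = 0.5;                          // 50% of a complete tree
        std::printf("complete: %ld, generated: %.0f\n",
                    complete, complete * density);
        return 0;
    }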
3 Benchmark Tests
There are many control factors in the XML document generator, each with a wide range of possible settings. Exhaustively testing all combinations is infeasible, so in this section we elaborate a small but representative suite of test cases. Each test case in the suite is intended to gauge the impact of a control factor on the performance of XPath queries. Each test is based on a hypothesis about the impact of that control factor. The experimental methodology is to vary the single control factor while keeping the others constant so that we can isolate the impact of this single factor and determine how it affects the performance of XPath queries in various implementations. Our tests are trend-based tests. The advantage of trend-based tests is
that it becomes possible to compare how well a system can handle a particular property of the XML document or the XPath query. For example, in an application scenario where the depth of the XML tree model is an important varying factor, not only can we tell the performance of a query engine in dealing with trees of a particular depth, but also which query engine scales well as the tree depth increases. In many cases, this information is much more important than just a single data point. The benchmark has fifteen test cases divided into three groups. The tree shape group focuses on the aspects that change the shape of a tree structure, such as tree depth and width, while the tree data group focuses on the aspects that relate to the content of an XML document, such as the magic level, selectivity, and text length. These two groups are similar to the groups of control factors for the benchmark's XML document generator. The third group, the XPath property group, focuses on the different location paths and functions that XPath provides. The name and description of each test is listed in Table 2, and the values of the control factors in each test case are listed in Table 3. In Table 2 the Test Case column lists a short, descriptive name for each test. The Description column provides a description of the test. In Table 3 the explicit settings for the control factors are given. The "Vary" value stands for the control factor that varies.

Table 2. Benchmark tests and descriptions

Tree Shape Group
Fat, flat tree (text value): Varies the number of root children to create trees with "broad shoulders"; queries access the text values in the leaves of the tree.
Fat, flat tree (attribute value): Varies the number of root children to create trees with "broad shoulders"; queries access the attribute values in the leaf elements.
Tree depth: Varies the depth of the XML document tree.
Tree width: Varies the width of the tree.

Tree Data Group
Magic level: Varies the level of magic node placement; queries descend through magic nodes.
Selectivity: Varies the percentage of magic nodes, controlling how many nodes are selected by a query.
Text length: Varies the length of text values in the document.
Random name: Runs the same query on documents with random vs. fixed element names.
Number of attributes: Varies the number of attributes in each element; queries select the attributes.
Magic length: Varies the length of the suffix, creating elements with long names.
Magic position: Varies the position of magic nodes (element name, attribute name, attribute value, and text/element value); queries are tailored to locate the magic.

XPath Property Group
XPath query type: Tests XPath queries on different axes.
Short-circuit eval.: Tests whether the query engines do short-circuit evaluation in predicates.
Steps vs. predicates: Trades predicates for steps (evaluating the efficiency of steps vs. predicates).
String function: Tests the efficiency of various XPath string functions.
Table 3. Benchmark test parameters ("Vary" marks the control factor that varies in each test)

Test case | Root | Depth | Bushiness | Attrs. | Magic Level | Selectivity | Magic Position | Magic Len. | Text Len. | Random Name | XPath

Tree Shape Group
Fat, flat tree (text value) test | Vary | 2 | 5 | 0 | 1 | 1% | T | 3 | 10 | No | Desc.
Fat, flat tree (attribute value) test | Vary | 1 | 1 | 5 | 1 | 1% | AV | 10 | 0 | No | Desc.
Tree depth test | 100 | Vary | 4 | 0 | Vary | 10% | E | 0 | 0 | No | Child
Tree width test | 100 | 4 | Vary | 0 | 4 | 10% | E | 0 | 0 | No | Child

Tree Data Group
Magic level test | 100 | 7 | 4 | 0 | Vary | 10% | E | 0 | 0 | No | Desc.
Selectivity test | 100 | 7 | 4 | 0 | 7 | Vary | E | 0 | 0 | No | Desc.
Text length test | 100 | 5 | 4 | 0 | 5 | 10% | T | 1 | Vary | No | Child
Random name test | 100 | 7 | 4 | 0 | All | 30% | E | 0 | 0 | Vary | Child
Number of attributes test | 100 | 5 | 4 | Vary | 5 | 10% | AN | 0 | 0 | No | Desc.
Magic length test | 100 | 5 | 4 | 0 | 5 | 10% | E | Vary | 0 | No | Child
Magic position test | 100 | 5 | 4 | 5 | 5 | 10% | Vary | 0 | 20 | No | Child

XPath Property Group
XPath query type test | 100 | 5 | 4 | 0 | 4 | 30% | E | 0 | 0 | No | Vary
Short-circuit evaluation test | 100 | 5 | 4 | 0 | 5 | 30% | E | 0 | 0 | No | Vary
Steps vs. predicates test | 100 | 5 | 4 | 0 | All | 30% | E | 0 | 0 | No | Vary
String function test | 100 | 5 | 4 | 0 | 5 | 30% | T | 5 | 100 | No | Pred.
Abbreviations: Magic position: E (element name), AN (attribute name), AV (attribute value), T (text). XPath query: Desc. (descendant axis), Child (child axis), Pred. (predicate).

There is one test case for each control factor, except for the density. The reason is that navigating an incomplete tree is similar to navigating a fraction of a complete tree. By using magic nodes, we are able to control how large a fraction of the tree is traversed in a query, so the density in all of the test cases is 100%. Although there are no tests currently in the benchmark that vary the density, we include density as a
control factor for future extensions. In particular, being able to set the density to a very low percentage is useful for creating very deep trees because deep, complete trees can exceed memory capacity.
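To make the role of these control factors concrete, the following is a minimal sketch of how a generator of this kind can be structured. It covers only the depth, bushiness, selectivity, and text-length factors; the class name, element names, and parameter defaults are our own assumptions and do not reproduce the benchmark's actual generator.

import java.io.*;
import java.util.Random;

// Minimal sketch of a synthetic XML document generator driven by a few of
// the control factors discussed above. Not the benchmark's implementation.
public class TreeGenerator {
    private final Random rnd;
    private final int depth, bushiness, textLength;
    private final double selectivity;   // fraction of elements marked "magic"

    public TreeGenerator(long seed, int depth, int bushiness,
                         double selectivity, int textLength) {
        this.rnd = new Random(seed);    // a different seed per run, results averaged
        this.depth = depth;
        this.bushiness = bushiness;
        this.selectivity = selectivity;
        this.textLength = textLength;
    }

    public void generate(Writer out) throws IOException {
        out.write("<root>");
        for (int i = 0; i < bushiness; i++) {
            writeSubtree(out, 1);
        }
        out.write("</root>");
        out.flush();
    }

    private void writeSubtree(Writer out, int level) throws IOException {
        // mark a controlled fraction of elements so that queries select them
        String name = (rnd.nextDouble() < selectivity) ? "magic" : "node";
        out.write("<" + name + ">");
        if (level < depth) {
            for (int i = 0; i < bushiness; i++) {
                writeSubtree(out, level + 1);
            }
        } else {
            for (int i = 0; i < textLength; i++) {
                out.write((char) ('a' + rnd.nextInt(26)));   // leaf text value
            }
        }
        out.write("</" + name + ">");
    }

    public static void main(String[] args) throws IOException {
        try (Writer w = new BufferedWriter(new FileWriter("test.xml"))) {
            new TreeGenerator(42L, 4, 5, 0.10, 20).generate(w);
        }
    }
}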
4 Performance Analysis

All the benchmark test cases were run on a single-processor 1.7 GHz Pentium IV machine with 1 GB of main memory. The benchmarking machine was running Windows XP and Sun JDK version 1.6. In each of the test cases described in Section 3, there are generally one or two control factors that vary while the rest remain constant. Furthermore, there are small, random differences in the documents generated for each test case (e.g., different randomly-generated numbers are used in random names). To smooth out the small variations between test cases, we run each benchmark test case five times, re-generating the document using a different seed value, and average the results.

Many XPath query engines are available. We chose several popular query systems to evaluate against the benchmark. The systems fall into two broad categories: in-memory systems and persistent systems. In-memory systems can use either SAX or DOM parsing to construct a tree-like data model (a DOM) in memory. After parsing completes, XPath queries are evaluated on the constructed data model. The persistent or XML database systems parse and save an XML document in a persistent data store. XPath queries are then run on the persistent data store, i.e., as database queries. XML database systems are much more likely to create secondary data structures, e.g., indexes, to improve query performance. The in-memory systems can be further divided by language: Java vs. C++. Based on the above categorizations, we set out to find typical systems that are representative of the technology for each category. The XPath systems that we chose are summarized in the following list.
• Java systems
o Saxon – Saxon [9] is an open-source XSLT processor. The version we used is 6.5.2, which supports XPath version 1.0. The XML document parser of Saxon is a slightly improved version of Ælfred from Microstar, based on the SAX API.
o Xalan-Java – Xalan-Java [16] is an open-source XSLT processor, developed by the Apache XML project, for transforming XML documents into HTML, text, or other XML document types. It fully implements XSLT version 1.0 and XPath version 1.0. The version we used is 2.5.0, which builds on SAX 2 and DOM level 2. We chose to evaluate the DOM component.
o Jaxen – Jaxen [20] is an open-source Java XPath Engine from the Werken Company. It is a universal object model walker, capable of evaluating XPath expressions across multiple models. Jaxen is based on SAXPath, which is an event-based model for parsing XPath expressions. Currently, it has implemented the XPath engine for DOM4j and JDOM, two popular and
convenient object models for representing XML documents. W3C DOM is also supported. In our tests, we will use both the SAX parser and DOM parser to build the document model for Jaxen. The version of Jaxen we used is 1.0-FCS.
o DOM4j – DOM4j [19] is another open-source framework for processing XML. It is integrated with XPath and fully supports DOM, SAX, JAXP, and Java platform features such as the Java 2 Collections. DOM4j works with any existing SAX parser via JAXP and/or any DOM implementation. So in our benchmark, we use both the default SAX parser of DOM4j and a DOM parser, just as we do for Jaxen. The version of DOM4j used is 1.4.
• C++ systems
o MSXML – Microsoft® XML Core Services (MSXML) [21] is a collection of tools that helps customers build high-performance XML-based applications. It fully supports XPath version 1.0 in its XSLT processor. The version we used is 4.0.
o Xalan-C++ – Xalan-C++ [17] is the C++ version of Xalan-Java. The version we used is 1.5.
• XML database systems
o eXist – eXist [11] is an open-source native XML database featuring efficient, index-based XPath query processing. The database is lightweight, completely written in Java, and can be run as a stand-alone server process, inside a servlet, or directly embedded into an application. Its Java API completely conforms to the XML:DB API, which provides a common interface to access XML database services. The version we used is 0.9.1.
o Xindice – Xindice [18] is another open-source XML product developed by the Apache XML project. It is also a native XML database using XPath as its query language. Xindice also implements the XML:DB API. The version we used is 1.0.
o COR – COR is the name of a leading commercial database with extensions to support XML by shredding a document into a back-end object-relational database. Due to a licensing agreement for the commercial system, we cannot disclose the actual name of the system, so we will just refer to it as COR.

Generally, we collect three performance metrics for each benchmark, but in-memory and database packages need to be measured differently due to the differences in their architectures. For in-memory packages, we measure
• parse time – the time to read an XML document from disk and parse it;
• query time – the time to evaluate a benchmark suite of queries; and
• output time – the time to iterate through the result set(s) and place every node into a Java Vector or C++ List.
For an XML database system, on the other hand, we measure
• store time – the time to read, parse, and store an XML document into the system's data store;
• query time – the time to evaluate a suite of benchmark queries; and
• output time – the time to iterate over the result set and place every node into a Java Vector or C++ List.

Since query performance is the focus of our benchmark, we ignore parse time and store time and concentrate on query time and output time. Often we sum the two times into a single time that we refer to as (total) query execution time. The reason why we include the output time in the query execution time is that some query engines have a "lazy evaluation" mechanism, which returns only a fraction of the result set and then grabs the rest as needed. Lazy evaluation saves the cost of generating unused query results. We force lazy evaluation to complete by iterating through the entire result set, thus providing a fairer comparison with eager-evaluation query packages.

Another query optimization technique is creating and using indexes. Indexing is of special, critical importance in XML database packages. The XML database packages that we chose (claim to) have the ability to index the content of XML documents. However, the scope and nature of the indexes vary greatly. To level the playing field, we decided to use only the default indexing, so our results reflect the default behavior of the tested packages.

Below we show a few of our results (due to space limitations we have omitted the complete results). Fig. 2 and Fig. 3 test "fat" trees, but place values in text nodes vs. attributes. Using attributes is in general faster for most of the packages, and the navigational trends are largely the same in both experiments.
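As an illustration of where these three timing points fall for an in-memory package, the sketch below uses the standard JAXP DOM and XPath APIs as a stand-in for the packages under test; the file name and the query are placeholders (for instance, the document produced by the generator sketch in Section 3), and the real benchmark drives each engine through its own API.

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.*;
import org.w3c.dom.*;
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Sketch of the three timing points (parse, query, output) for an
// in-memory package, using JAXP as a stand-in for the evaluated engines.
public class InMemoryTimer {
    public static void main(String[] args) throws Exception {
        long t0 = System.nanoTime();
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File("test.xml"));   // parse time
        long t1 = System.nanoTime();

        XPath xpath = XPathFactory.newInstance().newXPath();
        NodeList result = (NodeList) xpath.evaluate("//magic",
                doc, XPathConstants.NODESET);                        // query time
        long t2 = System.nanoTime();

        // Output time: iterate the complete result set and copy every node into
        // a list, which forces any lazily evaluated results to be produced.
        List<Node> nodes = new ArrayList<>();
        for (int i = 0; i < result.getLength(); i++) {
            nodes.add(result.item(i));
        }
        long t3 = System.nanoTime();

        System.out.printf("parse %.1f ms, query %.1f ms, output %.1f ms (%d nodes)%n",
                (t1 - t0) / 1e6, (t2 - t1) / 1e6, (t3 - t2) / 1e6, nodes.size());
    }
}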
[Fig. 2. Fat, flat tree (text value) test results: (a) in-memory packages, (b) XML database packages. Query execution time in seconds vs. the number of root children (in thousands: 1, 5, 10, 50, 100) for Saxon, Jaxen (SAX), Xalan, Xalan C++, MSXML, eXist, Xindice, and COR.]
[Fig. 3. Fat, flat tree (attribute value) test results: (a) in-memory packages, (b) XML database packages. Query execution time in seconds vs. the number of root children (in thousands).]
[Fig. 4. XPath query type test results: (a) in-memory packages, (b) XML database packages. Time in seconds per package for the descendant, child, ancestor, and preceding axes, predicate queries, and union queries; axes not supported by a package are marked N/A.]
Fig. 4 plots the effect of querying with different axes. The database packages largely have the same cost for all axes, but the performance of the in-memory packages "blows up" on the preceding axis. Interestingly, the packages also show some effect of increasing the text length while holding all other parameters constant, as shown in Fig. 5.

We now provide an analysis of trends that are present across the many individual tests in the benchmark (including results not shown here). Where possible, we also draw inferences about the behavior of the various packages. Saxon is generally the fastest Java application that we tested. Saxon uses an innovative tree structure [10] to represent an XML document, which, we believe, contributes to its good performance. The nodes in this tree structure are represented as integer arrays rather than as objects. Unfortunately, Saxon does not provide a complete DOM interface; for instance, DOM update is not supported. Though we focused exclusively on query performance, and Saxon supports all of the benchmark
[Fig. 5. Text length test results: (a) in-memory packages, (b) XML database packages. Time in seconds per package for text lengths of 0, 10, 100, 500, and 1000.]
queries, the lack of full DOM support indirectly enhances Saxon's performance because it reduces memory consumption. So for read-only applications in Java, Saxon is a very good choice.

In the C++ group, MSXML performs much better in all cases than Xalan-C++, and also much better than all the Java packages. MSXML, however, is currently supported only on the Windows platform. But for Windows-based applications that need fast performance, MSXML is the best choice among the packages we tested.

In the XML database group, eXist is a good choice because of its excellent performance. However, eXist does not support all of the queries in XPath. Some simplifications were made in the design of eXist, leading to improved performance at the cost of full functionality, just like Saxon; eXist fails to handle some uncommon axes and string functions. Xindice, on the other hand, provides very stable and uniform performance in all test cases, although it is slower than eXist.

Increasing the depth or the nesting level of the XML document comes at a very high cost, not surprisingly. To reduce depth, use attributes rather than subelements with text values. Alternatively, if possible, expand the tree horizontally rather than vertically. Both alternatives can generally yield better performance. Of course, whether either alternative is possible largely depends on the schema of the XML document and the application scenario. Another factor to keep in mind is that querying an attribute value is slightly more expensive than querying a text value (assuming both are at the same depth in the tree). So if attributes can shrink a tree, they improve performance, but otherwise, use text nodes.

If possible, use the child or descendant axis in your location paths, rather than one of the other axes. Try to avoid the preceding and following axes, and to a lesser extent the ancestor axis, because evaluating one of these axes can incur a much greater cost than a more common axis. The structure of the XML document always plays a more important role than the data itself in query performance. In other words, if you want to manipulate the XML document to get better query performance, try to reduce the depth, bushiness, or other shape properties of the document. Shortening element names, for instance, generally will not help to improve query performance.
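The flattening guideline can be illustrated with two small query shapes. The document fragments and paths below are our own example, not taken from the benchmark, and the attribute variant only pays off when it actually removes a level of nesting.

// Two ways to expose the same value, following the guidelines above.
public class FlatteningExample {
    // Deeply nested, value in a text node:
    //   <order><customer><name>Jones</name></customer></order>
    public static final String NESTED_QUERY = "/order/customer/name/text()";

    // Flattened, the same value in an attribute of the root element
    // (one fewer level to traverse):
    //   <order customer="Jones"/>
    public static final String FLATTENED_QUERY = "/order/@customer";
}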
5 Conclusion and Future Work

We proposed a benchmark that can be used to evaluate the impact of basic XML document properties and XPath query functions on the performance of XPath queries. Since a tree-like data structure is a natural representation of an XML document, we built an XML document generator that manipulates the properties of an XML document by controlling the shape of its tree model. We identified eleven control factors on the shape and location of data in the document tree. We hypothesized that these control factors potentially have an impact on query performance and so we designed fifteen tests to evaluate the impact of each factor. We ran the fifteen tests in our benchmark on a number of popular XPath query engines and analyzed the performance data to reach the following conclusions.
• From a performance perspective, Saxon is the best Java application, MSXML is the best C++ application, and eXist is the best XML database system, although eXist has limited support and does not handle some axes and string functions in XPath.
• Increasing the depth or the nesting level of an XML document comes at a very high cost.
• Always try to use the child and descendant axes and avoid using the ancestor, preceding, and following axes.
• Querying attribute values is more expensive than querying text values when both appear at the same depth in a tree.
• The structure of the XML document always plays a more important role than the data in the tree in query performance.
We plan to extend the benchmark in several ways. First, we plan to create a “benchmark” calculator to infer information about new test cases from existing data. Our test cases focus on varying a single control factor while keeping others constant. Thus they provide an insight on how each factor, in isolation, impacts query performance. From this data, the expected performance for test cases that are not in our benchmark, such as a test for extracting 20% of nodes from a very broad, very deep tree, can be predicted. Ideally, users will be able to use the calculator to get a quick idea of which systems will perform best in their application scenarios. Second, we hope to extend the benchmark to cover RDF query systems. The use of RDF to represent metadata, in combination with data represented in XML, is becoming more common as the Semantic Web takes shape.
References [1] Abiteboul, S., Quass, D., McHugh, J., Widom, J., Wiener, J.L.: The Lorel Query Language for Semistructured Data. Int. J. on Digital Libraries 1, 68–88 (1997) [2] Adler, S., Berglund, A., Caruso, J., Deach, S., Graham, T., Grosso, P., Gutentag, E., Milowski, A., Parnell, S., Richman, J., Zilles, S.: Extensible Stylesheet Language (XSL) Version 1.0. W3C (2001), http://www.w3c.org/TR/xsl
[3] Boag, S., Chamberlin, D., Fernandez, M.F., Florescu, D., Robie, J., Siméon, J.: XQuery 1.0: An XML Query Language. W3C (2003), http://www.w3c.org/TR/xquery [4] Böhme, T., Rahm, E.: XMach-1: A Benchmark for XML Data Management. In: Proc. BTW 2001, pp. 264–273 (2001) [5] Bressan, S., Dobbie, G., Lacroix, Z., Lee, M.L., Nambiar, U., Li, Y.G., Wadhwa, B.: X007: Applying 007 Benchmark to XML Query Processing Tools. NUS Technical Report TRB6/0 1 (June 2001) [6] Carey, M.J., DeWitt, D.J., Naughton, J.F.: The OO7 benchmark. In: Proc. ACM SIGMOD Conference, pp. 12–21 (1993) [7] Clark, J., DeRose, S.: XML Path Language (XPath) Version 1.0. W3C (1996), http://www.w3c.org/TR/xpath [8] Deutsch, A., Fernandez, M., Florescu, D., Levy, A., Suciu, D.: XML-QL: A Query Language for XML. W3C (1998), http://www.w3c.org/TR/NOTE-xml-ql [9] Kay, M.H.: SAXON: The XSLT and XQuery Processor, version 6.5.2 (current as of June 2003), http://saxon.sourceforge.net [10] Kay, M.H.: Saxon: Anatomy of an XSLT processor. IBM developerWorks (February 2001), http://www106.ibm.com/developerworks/library/xslt2/ [11] Meier, W.: eXist: An Open Source Native XML Database. Version 0.9.1 (current as of June 2003), http://exist-db.org [12] Nambiar, U., Lacroix, Z., Bressan, S., Lee, M.L., Li, Y.G.: Benchmarking XML Management Systems: The XOO7 Way. TR-01-005, Arizona State University (2001) [13] Runapongsa, K., Patel, J.M., Jagadish, H.V., Chen, Y., Al-Khalifa, S.: The Michigan Benchmark: Towards XML Query Performance Diagnostics. In: Proc. 29th VLDB Conf., Berlin, Germany (2003) [14] Robie, J., Lapp, J., Schach, D.: XML Query Language (XQL). In: Proc. of the Query Language Workshop, Cambridge, MA (December 1998), http://www.w3.org/TandS/QL/QL98/pp/xql.html [15] Schmidt, A., Waas, F., Kersten, M., Florescu, D., Carey, M.J., Manolescu, I., Busse, R.: Xmark: A Benchmark for XML Data Management. In: Proc. 28th VLDB Conf., Hong Kong, China, pp. 974–985 (2002) [16] Apache XML Project. Xalan-Java, http://xml.apache.org/xalan-j [17] Apache XML Project. Xalan-C++, http://xml.apache.org/xalan-c [18] Apache XML Project. Xindice, http://xml.apache.org/xindice [19] DOM4j: The Flexible XML Framework for Java, ver 1.4, http://www.dom4j.org [20] Jaxen: Universal Java XPath Engine, http://jaxen.sourceforge.net [21] Microsoft® XML Core Services (MSXML), http://msdn.microsoft.com/xml
An Empirical Evaluation of XML Compression Tools

Sherif Sakr

School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
[email protected]
Abstract. This paper presents an extensive experimental study of the state of the art of XML compression tools. The study reports the behavior of nine XML compressors using a large corpus of XML documents which covers the different natures and scales of XML documents. In addition to assessing and comparing the performance characteristics of the evaluated XML compression tools, the study tries to assess the effectiveness and practicality of using these tools in the real world. Finally, we provide some guidelines and recommendations which are useful for helping developers and users make an effective decision when selecting the most suitable XML compression tool for their needs.
1 Introduction
The eXtensible Markup Language (XML) has been acknowledged to be one of the most useful and important technologies that have emerged as a result of the immense popularity of HTML and the World Wide Web. Due to the simplicity of its basic concepts and the theories behind it, XML has been used in solving numerous problems such as providing a neutral data representation between completely different architectures, bridging the gap between software systems with minimal effort, and storing large volumes of semi-structured data. XML is often referred to as self-describing data because it is designed in a way that the schema is repeated for each record in the document. On one hand, this self-describing feature grants XML great flexibility; on the other hand, it introduces the main problem of the verbosity of XML documents, which results in huge document sizes. This huge size leads to the fact that the amount of information that has to be transmitted, processed, stored, and queried is often larger than that of other data formats. Since XML usage continues to grow and large repositories of XML documents are now pervasive, there is a great demand for efficient XML compression tools. To tackle this problem, several research efforts have proposed the use of XML-conscious compressors which exploit the well-known structure of XML documents to achieve compression ratios that are better than those of general text compressors. The usage of XML compression tools has many advantages, such as reducing the network bandwidth required for data exchange, reducing the disk space required for storage, and minimizing the main memory requirements of processing and querying XML documents.
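As a small, self-contained illustration of this verbosity argument (our own example, not part of the study), the snippet below gzips a synthetic XML fragment whose repeated tag names are exactly the kind of redundancy general-purpose compressors exploit, and prints the compressed/uncompressed ratio used as the compression-ratio metric later in the paper.

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

// Tiny demo: how strongly repetitive XML markup compresses under gzip.
public class XmlGzipDemo {
    public static void main(String[] args) throws Exception {
        StringBuilder xml = new StringBuilder("<orders>");
        for (int i = 0; i < 1000; i++) {
            xml.append("<order><id>").append(i)
               .append("</id><status>open</status></order>");
        }
        xml.append("</orders>");
        byte[] raw = xml.toString().getBytes(StandardCharsets.UTF_8);

        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(raw);
        }
        double ratio = (double) buf.size() / raw.length;   // compressed / uncompressed
        System.out.printf("original %d bytes, gzip %d bytes, ratio %.3f%n",
                raw.length, buf.size(), ratio);
    }
}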
Experimental evaluation and comparison of different techniques and algorithms which deal with the same problem is a crucial aspect, especially in applied domains of computer science. Previous studies of XML compression tools [17,29] were different and limited in their scope of investigated tools, data sets, and testing parameters. This paper presents an extensive experimental study for evaluating the state of the art of XML compression tools. We examine the performance characteristics of nine publicly available XML compression tools against a wide variety of data sets that consists of 57 XML documents. The web page of this study [1] provides access to the test files, the examined XML compressors, and the detailed results of this study. The remainder of this paper is organized as follows. Section 2 briefly introduces the XML compression tools examined in our study and classifies them in different ways. Section 3 presents the data sets used to perform the experiments. A detailed description of our experimental framework and the experimental results are presented in Section 4 before we draw our conclusions in Section 5.
2 Survey of XML Compression Tools

2.1 Features and Classifications
A very large number of XML compressors have been proposed in the literature in recent years. These XML compressors can be classified with respect to two main characteristics. The first classification is based on their awareness of the structure of the XML documents. According to this classification, compressors are divided into two main groups:
– General Text Compressors: Since XML data are stored as text files, the first logical approach for compressing XML documents was to use traditional general-purpose text compression tools. This group of XML compressors [2,5,21] is XML-blind; it treats XML documents as usual plain text documents and applies traditional text compression techniques [31].
– XML Conscious Compressors: This group of compressors is designed to take advantage of the XML document structure to achieve better compression ratios than the general text compressors. This group can be further classified according to its dependence on the availability of the schema information of the XML documents as follows:
  • Schema dependent compressors, where both the encoder and decoder must have access to the document schema information to achieve the compression process [3,7,10,23].
  • Schema independent compressors, where the availability of the schema information is not required to achieve the encoding and decoding processes [8,15,19,26].
Although schema dependent compressors may, theoretically, be able to achieve slightly higher compression ratios, they are not preferable or commonly used in practice because there is no guarantee that the schema information of the XML documents is always available.
The second classification of XML compressors is based on their ability to support queries.
– Non-Queriable (Archival) XML Compressors: This group of XML compressors does not allow any queries to be processed over the compressed format [8,10,19,26,34]. The main focus of this group is to achieve the highest compression ratio. By default, general-purpose text compressors belong to the non-queriable group of compressors.
– Queriable XML Compressors: This group of XML compressors allows queries to be processed over their compressed formats [12,27,32]. The compression ratio of this group is usually worse than that of the archival XML compressors. However, the main focus of this group is to avoid full document decompression during query execution. In fact, the ability to perform direct queries on compressed XML formats is important for many applications which are hosted on resource-limited computing devices such as mobile devices and GPS systems. By default, all queriable compressors are XML conscious compressors as well.

2.2 Examined Compressors
In our study we considered, to the best of our knowledge, all XML compression tools which fulfill the following conditions:
1. The tool is publicly and freely available, either in the form of open source code or a binary version.
2. The tool is schema-independent. As previously mentioned, compressors which do not fulfill this condition are not commonly used in practice.
3. The tool is able to run under our Linux operating system.
Table 1 lists the symbols that indicate the features of each XML compressor included in the list of Table 2. Table 2 lists the surveyed XML compressors and their features; the compressors which fulfill our conditions and are included in our experimental investigation are marked with an asterisk (bold font in the original table). The three compressors DTDPPM, XAUST, and rngzip have not been included in our study because they do not satisfy Condition 2. Although its source code is available and it can be successfully compiled, XGrind did not satisfy Condition 3: it always gives a fixed run-time error message during the execution process. The rest of the list (11 compressors) do not satisfy Condition 1. The lack of source code or binaries for a large number of the XML compressors proposed in the literature, to the best of our search efforts and contact with the authors, and especially from the queriable class [27,30,32], was a bit disappointing for us. This has limited a subset of our initially planned experiments, especially those targeted at assessing the performance of evaluating the results of XML queries over the compressed representations. In the following we give a brief description of each examined compressor.

General Text Compressors. Numerous algorithms have been devised over the past decades to efficiently compress text data. In our evaluation study we
Table 1. Symbols list of XML compressor features

Symbol  Description                      Symbol  Description
G       General Text Compressor          S       Specific XML Compressor
D       Schema dependent Compressor      I       Schema Independent Compressor
A       Archival XML Compressor          Q       Queriable XML Compressor

Table 2. XML Compressors List (* marks the compressors that fulfill our conditions and are included in our experimental investigation)

Compressor              Features   Code Available
GZIP (1.3.12) [5] *     GAI        Y
BZIP2 (1.0.4) [2] *     GAI        Y
PPM (j.1) [6] *         GAI        Y
XMill (0.7) [26] *      SAI        Y
XMLPPM (0.98.3) [19] *  SAI        Y
SCMPPM (0.93.3) [8] *   SAI        Y
XWRT (3.2) [15] *       SAI        Y
Exalt (0.1.0) [34] *    SAI        Y
AXECHOP [25] *          SAI        Y
DTDPPM [3]              SAD        Y
rngzip [7]              SQD        Y
XGrind [12]             SQI        Y
XBzip [22]              SQI        N
XQueC [14]              SQI        N
XCQ [30]                SQI        N
XPress [28]             SQI        N
XQzip [20]              SQI        N
XSeq [27]               SQI        N
QXT [32]                SQI        N
ISX [35]                SQI        N
XAUST [10]              SAD        Y
Millau [23]             SAD        N
selected three compressors which are considered to be the best representative implementations of the most popular and efficient text compression techniques: we selected the gzip [5], bzip2 [2], and PPM [21] compressors to represent this group.

XMill. In [26], Liefke and Suciu presented the first implementation of an XML conscious compressor. In XMill, both the structural and the data value parts of the source XML document are collected and compressed separately before being passed to one of three alternative back-end general-purpose compressors: gzip, bzip2, and PPM. In our experiments we evaluated the performance of the three alternative back-ends independently. Hence, in the rest of the paper we refer to the three alternative back-ends as XMillGzip, XMillBzip, and XMillPPM, respectively.

XMLPPM is considered an adaptation of the general-purpose Prediction by Partial Matching compression scheme (PPM) [21]. In [19], Cheney presented XMLPPM as a streaming XML compressor which uses a Multiplexed Hierarchical PPM Model called MHM. The main idea of this MHM model is to use four different PPM models for compressing the different XML symbols: element, attribute, character, and miscellaneous data.

SCMPPM is presented as a variant of the XMLPPM compressor [16]. It combines a technique called Structure Context Modelling (SCM) with the PPM compression scheme. It uses a bigger set of PPM models than XMLPPM, as it uses a separate model to compress the text content under each element symbol.

XWRT is presented by Skibinski et al. in [33]. It applies a dictionary-based compression technique called XML Word Replacing Transform. The idea of this technique is to replace the frequently appearing words with references to a dictionary which is obtained by a preliminary pass over the data. XWRT
submits the encoded results of the preprocessing step to three alternative general-purpose compression schemes: gzip, LZMA, and PPM.

Axechop is presented by Leighton et al. in [25]. It divides the source XML document into structural and data segments. The MPM compression algorithm is used to generate a context-free grammar for the structural segment, which is then passed to an adaptive arithmetic coder. The data segment contents are organized into a series of containers (one container for each element) before applying the BWT compression scheme [18] over each container.

Exalt. In [34], Toman presented the idea of applying a syntax-oriented approach for compressing XML documents. It is similar to AXECHOP in that it utilizes the fact that an XML document can be represented using a context-free grammar. It uses the grammar-based codes encoding technique introduced by Kieffer and Yang in [24] to encode the generated context-free grammars.
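The container idea that several of these tools share can be sketched with a few lines of SAX code: stream the document once, route the markup into one compressed stream, and append the character data to one compressed container per element name. This is our own minimal illustration of the principle, not XMill's (or any other tool's) actual encoding; error handling and attribute containers are omitted.

import java.io.*;
import java.util.*;
import java.util.zip.GZIPOutputStream;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

// Minimal illustration of splitting structure from per-element data containers.
public class ContainerSplitter extends DefaultHandler {
    private final Writer structure;
    private final Map<String, Writer> containers = new HashMap<>();
    private final Deque<String> openElements = new ArrayDeque<>();

    public ContainerSplitter() throws IOException {
        structure = gzWriter("structure.gz");
    }

    private static Writer gzWriter(String fileName) throws IOException {
        return new OutputStreamWriter(
                new GZIPOutputStream(new FileOutputStream(fileName)), "UTF-8");
    }

    @Override
    public void startElement(String uri, String local, String qName, Attributes atts) {
        try {
            structure.write("<" + qName + ">");   // structure stream: tags only
            openElements.push(qName);
        } catch (IOException e) { throw new RuntimeException(e); }
    }

    @Override
    public void endElement(String uri, String local, String qName) {
        try {
            structure.write("</" + qName + ">");
            openElements.pop();
        } catch (IOException e) { throw new RuntimeException(e); }
    }

    @Override
    public void characters(char[] ch, int start, int length) {
        try {
            String element = openElements.peek();          // enclosing element name
            Writer container = containers.get(element);
            if (container == null) {                       // one container per element name
                container = gzWriter(element + ".container.gz");
                containers.put(element, container);
            }
            container.write(ch, start, length);
        } catch (IOException e) { throw new RuntimeException(e); }
    }

    public static void main(String[] args) throws Exception {
        ContainerSplitter splitter = new ContainerSplitter();
        SAXParserFactory.newInstance().newSAXParser().parse(new File(args[0]), splitter);
        splitter.structure.close();
        for (Writer w : splitter.containers.values()) {
            w.close();
        }
    }
}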
3 Our Corpus

3.1 Corpus Characteristics
Determining the XML files that should be used for evaluating the set of XML compression tools is not a simple task. To provide an extensive set of experiments for assessing and evaluating the performance characteristics of the XML compression tools, we have collected and constructed a large corpus of XML documents. This corpus contains a wide variety of XML data sources and document sizes. Table 3 describes the characteristics of our corpus. Size denotes the disk space of each XML file in MBytes. Tags represents the number of distinct tag names in each XML document. Nodes represents the total number of nodes in each XML data set. Depth is the length of the longest path in the data set. Data Ratio represents the percentage of the size of data values with respect to the document size in each XML file. The documents are selected to cover a wide range of sizes, where the smallest document is 0.5 MB and the biggest document is 1.3 GB. The documents of our corpus can be classified into four categories depending on their characteristics:
– Structural documents: this group of documents has no data content at all; 100% of each document's size is devoted to its structure information. This category of documents is used to assess the claim of XML conscious compressors to exploit the well-known structure of XML documents for achieving higher compression ratios on the structural parts of XML documents. Initially, our corpus consisted of 30 XML documents. Three of these documents were generated by using our own implemented Java-based random XML generator. This generator produces completely random XML documents to a parameterized arbitrary depth with only structural information (no data values). In addition, we created a structural copy for each document of the other 27 original documents – with data values – of the corpus. Thus, each structural copy captures the structure information of the associated
Table 3. Characteristics of XML data sets

Data Set        Document Name                 Size (MB)  Tags  Number of Nodes  Depth  Data Ratio
EXI [4]         Telecomp.xml                  0.65       39    651398           7      0.48
EXI [4]         Weblog.xml                    2.60       12    178419           3      0.31
EXI [4]         Invoice.xml                   0.93       52    78377            7      0.57
EXI [4]         Array.xml                     22.18      47    1168115          10     0.68
EXI [4]         Factbook.xml                  4.12       199   104117           5      0.53
EXI [4]         Geographic Coordinates.xml    16.20      17    55               3      1
XMark [13]      XMark1.xml                    11.40      74    520546           12     0.74
XMark [13]      XMark2.xml                    113.80     74    5167121          12     0.74
XMark [13]      XMark3.xml                    571.75     74    25900899         12     0.74
XBench [11]     DCSD-Small.xml                10.60      50    6190628          8      0.45
XBench [11]     DCSD-Normal.xml               105.60     50    6190628          8      0.45
XBench [11]     TCSD-Small.xml                10.95      24    831393           8      0.78
XBench [11]     TCSD-Normal.xml               106.25     24    8085816          8      0.78
Wikipedia [9]   EnWikiNews.xml                71.09      20    2013778          5      0.91
Wikipedia [9]   EnWikiQuote.xml               127.25     20    2672870          5      0.97
Wikipedia [9]   EnWikiSource.xml              1036.66    20    13423014         5      0.98
Wikipedia [9]   EnWikiVersity.xml             83.35      20    3333622          5      0.91
Wikipedia [9]   EnWikTionary.xml              570.00     20    28656178         5      0.77
DBLP            DBLP.xml                      130.72     32    4718588          5      0.58
U.S. House      USHouse.xml                   0.52       43    16963            16     0.77
SwissProt       SwissProt.xml                 112.13     85    13917441         5      0.60
NASA            NASA.xml                      24.45      61    2278447          8      0.66
Shakespeare     Shakespeare.xml               7.47       22    574156           7      0.64
Lineitem        Lineitem.xml                  31.48      18    2045953          3      0.19
Mondial         Mondial.xml                   1.75       23    147207           5      0.77
BaseBall        BaseBall.xml                  0.65       46    57812            6      0.11
Treebank        Treebank.xml                  84.06      250   10795711         36     0.70
Random          Random-R1.xml                 14.20      100   1249997          28     0
Random          Random-R2.xml                 53.90      200   3750002          34     0
Random          Random-R3.xml                 97.85      300   7500017          30     0
XML original copy and removes all data values. In the rest of this paper, we refer to the documents which include the data values as original documents and to the documents with no data values as structural documents. As a result, the final status of our corpus is 57 documents: 27 original documents and 30 structural documents. The sizes of our own 3 randomly generated documents (R1, R2, R3) are indicated in Table 3, and the size of the structural copy of each original document can be computed using the following equation (a small worked example follows after this list):

  size(structural) = (1 − DR) * size(Original)

where DR represents the data ratio of the document.
– Textual documents: this category of documents has a simple structure, and a high proportion of its content consists of data values. The data content of these documents represents more than 70% of the document size.
– Regular documents: this category consists mainly of documents with a regular structure and short data contents. This document category reflects the XML view of relational data. The data ratio of these documents is between 40 and 60 percent.
– Irregular documents: this category consists of documents that have a very deep, complex, and irregular structure. Similar to the purely structural documents, this document category mainly focuses on evaluating the efficiency of compressing irregular structural information of XML documents.
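As a quick worked example of the size formula above (our own sanity check, not a number reported by the study): DBLP.xml has a size of 130.72 MB and a data ratio of 0.58 in Table 3, so its structural copy is roughly (1 − 0.58) * 130.72 MB ≈ 54.9 MB.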
3.2 Data Sets
Our data set consists of the following documents:

EXI-Group is a variant collection of XML documents included in the testing framework of the Efficient XML Interchange Working Group [4].

XMark-Group: the XMark documents model an auction database with deeply-nested elements. The XML document instances of the XMark benchmark are produced by the xmlgen tool of the XML benchmark project [13]. For our experiments, we generated three XML documents using three increasing scaling factors.

XBench-Group presents a family of benchmarks that captures different XML application characteristics [11]. The databases it generates come with two main models: Data-centric (DC) and Text-centric (TC). Each of these two models can be represented either in the form of a single document (SD) or multiple documents (MD). Hence, these two levels of classification are combined to generate four database instances: TCSD, DCSD, TCMD, DCMD. In addition, XBench can generate databases with four different sizes: small (11 MB), normal (108 MB), large (1 GB), and huge (10 GB). In our experiments, we only use the TCSD and DCSD instances of the small and normal sizes.

Wikipedia-Group: Wikipedia offers free copies of all content to interested users [9]. For our corpus, we selected five samples of the XML dumps with different sizes and characteristics.

DBLP presents the famous database of bibliographic information of computer science journals and conference proceedings.

U.S. House is a legislative document which provides information about the ongoing work of the U.S. House of Representatives.

SwissProt is a protein sequence database which describes DNA sequences. It provides a high level of annotation and a minimal level of redundancy.

NASA is an astronomical database which is constructed by converting legacy flat-file formats into XML documents.

Shakespeare represents the gathering of a collection of marked-up Shakespeare plays into a single XML file. It contains many long textual passages.

Lineitem is an XML representation of the transactional relational database benchmark (TPC-H).

Mondial provides basic statistical information on the countries of the world.
BaseBall provides the complete baseball statistics of all players of each team that participated in the 1998 Major League.

Treebank is a large collection of parsed English sentences from the Wall Street Journal. It has a very deep, non-regular, and recursive structure.

Random-Group: this group of documents has been generated using our own implementation of a Java-based random XML generator. This generator is designed to produce structural documents with very random, irregular, and deep structures according to its input parameters for the number of unique tag names, the maximum tree level, and the document size. We used this XML generator to produce three documents with different sizes and characteristics. The main aim of this group is to challenge the examined compressors and assess the efficiency of compressing the structural parts of XML documents.
4 Comprehensive Assessment

4.1 Testing Environments
To ensure the consistency of the performance behavior of the evaluated XML compressors, we ran our experiments in two different environments: one with high computing resources and one with considerably limited computing resources. Table 4 lists the setup details of the testing environments. Due to space limitations, we present in this paper the results of the high-resource environment. For the full results of both testing environments we refer the reader to the web page of this study [1].

Table 4. Setup details of the testing environments

       High Resources Setup                                         Limited Resources Setup
OS     Ubuntu 7.10 (Linux 2.6.22 Kernel)                            Ubuntu 7.10 (Linux 2.6.20 Kernel)
CPU    Intel Core 2 Duo E6850, 3.00 GHz, FSB 1333 MHz, 4 MB L2      Intel Pentium 4, 2.66 GHz, FSB 533 MHz, 512 KB L2
HD     Seagate ST3250820AS - 250 GB                                 Western Digital WD400BB - 40 GB
RAM    4 GB                                                         512 MB

4.2 Experimental Framework
We evaluated the performance characteristics of the XML compressors by running them through an extensive set of experiments. The setup of our experimental framework was very challenging and complex. The details of this experimental framework are described as follows:
– We evaluated 11 XML compressors: 3 general-purpose text compressors (gzip, bzip2, PPM) and 8 XML conscious compressors (XMillGzip, XMillBzip, XMillPPM, XMLPPM, SCMPPM, XWRT, Exalt, AXECHOP). For our main set of experiments, we evaluated the compressors under their default settings. The rationale behind this is that the default settings are considered to be the recommended settings from the developers of each compressor and thus can be assumed to represent the best behavior. In addition to this main set of experiments, we ran an additional set of experiments with tuned parameters, using the highest value of the compression-level parameter provided by some compressors (gzip, bzip2, PPM, XMillPPM, XWRT). That means in total we ran 16 compressor variants. The experiments with the tuned version of XWRT could only be performed on the high-resource setup because they require at least 1 GB RAM.
– Our corpus consists of 57 documents: 27 original documents, 27 structural copies, and 3 randomly generated structural documents (see Section 3.1).
– We ran the experiments on two different platforms, one with limited computing resources and the other with high computing resources.
– For each combination of an XML test document and an XML compressor, we ran two different operations (compression and decompression).
– To ensure accuracy, all reported numbers for our time metrics (compression time and decompression time, see Section 4.3) are the average of five executions with the highest and the lowest values removed.
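The measurement discipline in the last item can be sketched as follows: time five external compressor invocations, drop the fastest and slowest run, and average the rest. The gzip command line and file names below are placeholders for illustration only; each tool in the study has its own invocation syntax, and the actual experiments were driven by Unix shell and Perl scripts.

import java.io.File;
import java.util.Arrays;

// Sketch of "average of five executions with the highest and lowest removed".
public class RunAverager {

    // Time one external compressor invocation; "gzip -c doc > doc.gz" is a
    // placeholder command, not the invocation used for the tools in the study.
    static long timeOneRun(String document) throws Exception {
        long start = System.nanoTime();
        Process p = new ProcessBuilder("gzip", "-c", document)
                .redirectOutput(new File(document + ".gz"))
                .start();
        if (p.waitFor() != 0) {
            throw new IllegalStateException("compressor failed for " + document);
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws Exception {
        String document = "test.xml";                  // placeholder input file

        long[] runs = new long[5];
        for (int i = 0; i < runs.length; i++) {
            runs[i] = timeOneRun(document);
        }
        Arrays.sort(runs);
        // average of the three middle runs (highest and lowest removed)
        double avgMs = (runs[1] + runs[2] + runs[3]) / 3.0 / 1e6;

        long original = new File(document).length();
        long compressed = new File(document + ".gz").length();
        System.out.printf("compression time %.1f ms, compression ratio %.3f%n",
                avgMs, (double) compressed / original);
    }
}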
The above details lead to the conclusion that the number of runs was equal to 9120 on each experimental platform (16 * 57 * 2 * 5), i.e., 18240 runs in total. In addition to running this huge set of experiments, we needed to find the best way to collect, analyze, and present this huge amount of experimental results. To tackle this challenge, we created our own mix of Unix shell and Perl scripts to run and collect the results of this huge number of runs. In this paper, we present an important part of the results of our experiments. For the full detailed results, we refer the reader to the web page of this experimental study [1].

4.3 Performance Metrics
We measure and compare the performance of the XML compression tools using the following metrics:
– Compression Ratio: represents the ratio between the sizes of compressed and uncompressed XML documents. Compression Ratio = (Compressed Size) / (Uncompressed Size)
– Compression Time: represents the elapsed time during the compression process.
– Decompression Time: represents the elapsed time during the decompression process.
For all metrics: the lower the metric value, the better the compressor.

4.4 Experimental Results
In this section we report the results obtained by running our exhaustive set of experiments. Figures 1 to 4 represent an important part of the results of our
experiments. Several remarks and guidelines can be derived from the results of our exhaustive set of experiments. Some key remarks are given as follows:
– Except for the latest version of XMLPPM (0.98.3), none of the XML conscious compressors was able to execute the whole set of runs successfully. Moreover, the AXECHOP and Exalt compressors showed very poor stability; they failed to run the decoding parts of many runs successfully and were thus excluded from any consolidated results. For a detailed list of the errors generated during our experiments we refer to the web page of this study [1].
– Figure 1(a) shows that XMillPPM achieves the best compression ratio for all the datasets. The irregular structural documents (Treebank, R1, R2, R3) are very challenging to the set of the compressors, which explains why they all had the worst compression ratios there. Figure 2(a) shows that the three alternative back-ends of the XMill compressor achieve the best average compression ratios over the structural documents.
– Figures 1(b) and 2(b) show that the gzip-based compressors (gzip, XMillGzip) have the worst compression ratios. Excluding these two compressors, Figure 2(b) shows that the differences in the average compression ratios between the rest of the compressors are very narrow. They are very close to each other; the difference between the best and the worst average compression ratio is less than 5%. Among all compressors, SCMPPM achieves the best average compression ratio.
– Figure 4(a) shows that the gzip-based compressors have the best performance in terms of the compression time and decompression time metrics on both testing environments. The compression and decompression times of the PPM-based compression schemes (XMillPPM, XMLPPM, SCMPPM) are much slower than those of the other compressors. Among all compressors, SCMPPM has the longest compression and decompression times.
– The results of Figure 3 show that the tuned run of XWRT with the highest compression level achieves the overall best average compression ratio, at a very high cost in terms of compression and decompression times.
– Figure 4(a) illustrates the overall performance of the XML compressors where the values of the performance metrics are normalized with respect to bzip2. The results of this figure illustrate the narrow differences between the XML compressors in terms of their compression ratios and the wide differences in terms of their compression and decompression times.
Ranking
Obviously, it is a nice idea to use the results of our experiments and our performance metrics to provide a global ranking of XML compression tools. This is however an especially hard task. In fact, the results of our experiments have not shown a clear winner. Hence, different ranking methods and different weights for the factors could be used for this task. Deciding the weight of each metric is mainly dependant on the scenarios and requirements of the applications where these compression tools could be used. In this paper we used three ranking
[Fig. 1. Detailed compression ratios of the XML compression tools on each document of the corpus: (a) structural documents, (b) original documents.]
functions which give different weights to our performance metrics. These three ranking functions are defined as follows:
– WF1 = (1/3 * CR) + (1/3 * CT) + (1/3 * DCT)
– WF2 = (1/2 * CR) + (1/4 * CT) + (1/4 * DCT)
– WF3 = (3/5 * CR) + (1/5 * CT) + (1/5 * DCT)
where CR represents the compression ratio metric, CT represents the compression time metric, and DCT represents the decompression time metric. In these ranking functions we used increasing weights for the compression ratio (CR)
[Fig. 2. Average compression ratios of the XML compression tools: (a) structural documents, (b) original documents.]
[Fig. 3. Results of the tuned-parameter runs of the XML compression tools: (a) average compression ratios, (b) average compression and decompression times.]
metric (33%, 50%, and 60%), while CT and DCT equally share the remaining weight percentage in each function. Figure 4(b) shows that gzip and XMillGzip are ranked as the best compressors using all three ranking functions and on both of the testing environments. In addition, Figure 4(b) illustrates that none of the XML compression tools has shown a significant or noticeable
[Fig. 4. Consolidated assessment of the XML compression tools: (a) overall performance (compression ratio, compression time, and decompression time normalized to bzip2), (b) ranking functions WF1, WF2, WF3.]
improvement with respect to the compression ratio metric. The increasing weight assigned to CR does not change the order of the global ranking between the three ranking functions.
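To show how such weighting functions turn the three normalized metrics into a single score, the sketch below applies WF1, WF2, and WF3 to one hypothetical set of bzip2-normalized values; the sample numbers are purely illustrative and are not measurements from the study.

// Sketch of the weighted ranking functions; lower score is better.
public class RankingFunctions {
    // weights for CR, CT, DCT in WF1, WF2, WF3
    static final double[][] WEIGHTS = {
        { 1.0 / 3, 1.0 / 3, 1.0 / 3 },
        { 1.0 / 2, 1.0 / 4, 1.0 / 4 },
        { 3.0 / 5, 1.0 / 5, 1.0 / 5 },
    };

    static double score(double[] w, double cr, double ct, double dct) {
        return w[0] * cr + w[1] * ct + w[2] * dct;
    }

    public static void main(String[] args) {
        // metrics normalized with respect to bzip2 (bzip2 itself is 1.0 everywhere);
        // these values are made up for illustration
        double[] gzipLike = { 1.25, 0.20, 0.30 };   // hypothetical CR, CT, DCT
        for (int i = 0; i < WEIGHTS.length; i++) {
            System.out.printf("WF%d = %.3f%n", i + 1,
                    score(WEIGHTS[i], gzipLike[0], gzipLike[1], gzipLike[2]));
        }
    }
}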
5 Conclusions

We believe that this paper could be valuable for both the developers of new XML compression tools and interested users. Developers can use the results of this paper to decide which points can be improved in order to make an effective contribution. For this category of readers, we recommend tackling the area of developing stable, efficient, queriable XML compressors. Although a lot of literature has been presented in this domain, our experience from this study leads us to the conclusion that we are still missing efficient, scalable, and stable implementations in this domain. For users, this study could be helpful for making an effective decision in selecting the suitable compressor for their requirements. For example, for users with the highest compression ratio requirement, the results of Figure 3(a) recommend the usage of either the PPM compressor with the highest compression level (ppmd e -o16 document.xml) or the XWRT compressor with the highest compression level (xwrt -l14 document.xml), if they have more than 1 GB RAM on their systems, while for users with fastest compression time and moderate compression ratio requirements, gzip and XMillGzip are considered to be the best choice (Figure 4(b)). From the experience and the results of this experimental study, we can draw the following conclusions and recommendations:
– The primary innovation in XML compression mechanisms was presented in the first implementation in this domain, XMill. It introduced the idea of separating the structural part of the XML document from the data part and then grouping the related data items into homogeneous containers that can
be compressed separately. This separation improves the further steps of compressing these homogeneous containers using general-purpose compressors or any other compression mechanism, because the redundant data can be detected more easily. Most of the subsequent XML compressors have simulated this idea in different ways.
– The dominant practice in most of the XML compressors is to utilize the well-known structure of XML documents in a pre-processing encoding step and then forward the results of this step to a general-purpose compressor. Consequently, the compression ratio of most XML conscious compressors is highly dependent on the underlying general-purpose compressors such as gzip, bzip2, or PPM. Figure 2(b) shows that none of the XML conscious compressors has achieved an outstanding compression ratio over its back-end general-purpose compressor; the improvements are never significant, with 5% being the best case. This fact could explain why XML conscious compressors are not widely used in practice.
– The authors of XML compression tools should pay more attention to making the source code of their implementations available. Many tools presented in the literature – especially the queriable ones – have no available source code, which prevents the possibility of ensuring the repeatability of the reported numbers. It also hinders the possibility of performing fair and consistent comparisons between the different approaches. For example, in [27] the authors compared the results of their implementation XSeq with XBzip in an inconsistent way: they used the query evaluation times of XBzip reported in [22] to compare with their own times, although each implementation was run in a different environment.
– There are no publicly available solid implementations of grammar-based XML compression techniques or of queriable XML compressors. These two areas provide many interesting avenues for further research and development.
As future work, we are planning to continue maintaining and updating the web page of this study with further evaluations of any newly evolving XML compressors. In addition, we will enable the visitors of our web page to perform their own online experiments using the set of the available compressors and their own XML documents.
References

1. Benchmark of XML compression tools, http://xmlcompbench.sourceforge.net/
2. BZip2 Compressor, http://www.bzip.org/
3. DTDPPM Compressor, http://xmlppm.sourceforge.net/dtdppm/index.html
4. Efficient XML Interchange WG, http://www.w3.org/XML/EXI/
5. GZip Compressor, http://www.gzip.org/
6. PPM Compressor, http://www.compression.ru/ds/
7. rngzip Compressor, http://contrapunctus.net/league/haques/rngzip/
8. SCMPPM Compressor, http://www.infor.uva.es/jadiego/files/scmppm-0.93.3.zip
9. Wikipedia Data Set, http://download.wikipedia.org/backup-index.html
10. XAUST Compressor, http://drona.csa.iisc.ernet.in/~priti/xaust.tar.gz
11. XBench Benchmark, http://softbase.uwaterloo.ca/~ddbms/projects/xbench/
12. XGrind Compressor, http://sourceforge.net/projects/xgrind/
13. XMark Benchmark, http://monetdb.cwi.nl/xml/
14. XQueC Compressor, http://www.icar.cnr.it/angela/xquec/index.htm
15. XWRT Compressor, http://sourceforge.net/projects/xwrt
16. Adiego, J., Fuente, P., Navarro, G.: Merging Prediction by Partial Matching with Structural Contexts Model. In: The Data Compression Conference (2004)
17. Augeri, C., Bulutoglu, D., Mullins, B., Baldwin, R., Baird, L.: An analysis of XML compression efficiency. In: Workshop on Experimental Computer Science (2007)
18. Burrows, M., Wheeler, D.J.: A block-sorting lossless data compression algorithm. Technical Report 124 (1994)
19. Cheney, J.: Compressing XML with Multiplexed Hierarchical PPM Models. In: Proceedings of the Data Compression Conference (2001)
20. Cheng, J., Ng, W.: XQzip: Querying Compressed XML Using Structural Indexing. In: Bertino, E., Christodoulakis, S., Plexousakis, D., Christophides, V., Koubarakis, M., Böhm, K., Ferrari, E. (eds.) EDBT 2004. LNCS, vol. 2992, pp. 219–236. Springer, Heidelberg (2004)
21. Cleary, J.G., Witten, I.H.: Data compression using adaptive coding and partial string matching. IEEE Trans. on Communications COM-32(4) (1984)
22. Ferragina, P., Luccio, F., Manzini, G., Muthukrishnan, S.: Compressing and searching XML data via two zips. In: WWW (2006)
23. Girardot, M., Sundaresan, N.: Millau: an encoding format for efficient representation and exchange of XML over the Web. Comput. Networks 33(1-6) (2000)
24. Kieffer, J.C., Yang, E.-H.: Grammar-Based Codes: A New Class of Universal Lossless Source Codes. IEEE Transactions on Information Theory 46 (2000)
25. Leighton, G., Diamond, J., Muldner, T.: AXECHOP: A Grammar-based Compressor for XML. In: Proceedings of the Data Compression Conference (2005)
26. Liefke, H., Suciu, D.: XMill: An efficient compressor for XML data. In: SIGMOD (2000)
27. Lin, Y., Zhang, Y., Li, Q., Yang, J.: Supporting efficient query processing on compressed XML files. In: SAC (2005)
28. Min, J., Park, M., Chung, C.: XPRESS: A queriable compression for XML data. In: SIGMOD (2003)
29. Ng, W., Lam, W., Cheng, J.: Comparative Analysis of XML Compression Technologies. World Wide Web 9(1) (2006)
30. Ng, W., Lam, W., Wood, P.T., Levene, M.: XCQ: A queriable XML compression system. Knowl. Inf. Syst. 10(4) (2006)
31. Salomon, D.: Data Compression: The Complete Reference. Springer-Verlag (2004)
32. Skibiński, P., Swacha, J.: Combining Efficient XML Compression with Query Processing. In: Ioannidis, Y., Novikov, B., Rachev, B. (eds.) ADBIS 2007. LNCS, vol. 4690, pp. 330–342. Springer, Heidelberg (2007)
33. Skibiński, P., Swacha, J.: Fast Transform for Effective XML Compression. In: CADSM, pp. 323–326 (2007)
34. Toman, V.: Compression of XML Data. Master's thesis, Charles University, Prague (2004)
35. Wong, R.K., Lam, F., Shui, W.M.: Querying and maintaining a compact XML storage. In: WWW (2007)
Benchmarking Performance-Critical Components in a Native XML Database System

Karsten Schmidt, Sebastian Bächle, and Theo Härder

University of Kaiserslautern, Germany
{kschmidt,baechle,haerder}@cs.uni-kl.de
Abstract. The rapidly increasing number of XML-related applications indicates a growing need for efficient, dynamic, and native XML support in database management systems (XDBMS). So far, both industry and academia primarily focus on benchmarking of high-level performance figures for a variety of applications, queries, or documents – frequently executed in artificial workload scenarios – and, therefore, may analyze and compare only specific or incidental behavior of the underlying systems. To cover the full XDBMS support, it is mandatory to benchmark performance-critical components bottom-up, thereby removing bottlenecks and optimizing component behavior. In this way, wrong conclusions are avoided when new techniques such as tailored XML operators, index types, or storage mappings with unfamiliar performance characteristics are used. As an experience report, we present what we have learned from benchmarking a native XDBMS and recommend certain setups to do it in a systematic and meaningful way.
1 Motivation
The increasing presence of XML data and XML-enabled (database) applications is raising the demand for established XML benchmarks. During the last years, a handful of ad-hoc benchmarks emerged, and some of them served as the basis for on-going XML research [5,29,39], thus constituting a kind of XML "standard" benchmark. All these benchmarks address the XDBMS behavior and performance visible at the application interface (API) and fail to evaluate and compare properties of the XDBMS components involved in XQuery processing. However, the development of native XDBMSs should be test-driven for all system layers separately, as was successfully done in the relational world, before such high-level benchmarks are used to confirm the suitability and efficiency of an XDBMS for a given application domain. Likewise, existing comparisons and adaptations of XML benchmark capabilities [21,26,31,32] draw only on high-level features such as document store/retrieve and complete XQuery expressions. They can often be characterized as "black-box" approaches and are apparently inappropriate for analyzing the internal system behavior in a detailed way. This also applies to other approaches which focused on specific problems such as handling "shredding" or NULL values efficiently.
2 Related Work
Selecting an appropriate benchmark for the targeted application scenario can be challenging and has become a favorite research topic, especially for XML, in recent years. In particular, the definitions of XPath [36] as an XML query language for path expressions and of the Turing-complete XQuery [37] language, which is actually based on XPath, caused the emergence of several XML benchmarks. One of the first XML benchmarks is XMach-1 [5], developed to analyze web applications using XML documents of varying structure and content. Besides simple search queries, coarse operations to delete entire documents or to insert new documents are defined. Thus, XMach-1 does not address, among other issues, efficiency checking of concurrency control when evaluating fine-grained modifications in XML documents. Another very popular project is XMark [33], providing artificial auction data within a single, but scalable document and 20 pre-defined queries. Such a benchmark is useful to evaluate search functions at the query layer, whereas, again, multi-user access is not addressed at all. A more recent alternative proposed by IBM can be used to evaluate transactional processing over XML – TPoX [29], which utilizes the XQuery Update Facility [38] to include XML modification operations. By referring to a real-world XML schema (FIXML), the benchmark defines three types of documents having rather small sizes (≤ 26 KB) and 11 initial queries for a financial application scenario. The dataset size can be scaled from several GB to one PB, and the performance behavior of multi-user read/write transactions is a main objective of that project. Nevertheless, all benchmarking is done at the XQuery layer, and no further system internals are inspected in detail. However, other benchmarks explicitly investigate the XQuery engine in a stand-alone mode [1,23]. Such "black box" benchmarks seem to be reasonable for scenarios operating on XML-enabled databases having relational cores, e.g., [4,10,11,35], because their internals have been optimized over the last 40 years. Furthermore, it is interesting to take a look at the list of performance-critical aspects suggested by [31] and [32]: bulk loading, document reconstruction, path traversal, data type casting, missing elements, ordered access, references, value-based joins, construction of large result sets, indexing, and full-text search for the containment function of XQuery. Almost all aspects solely focus on the application level or query level of XML processing, or they address relational systems, which have to cope with missing elements (NULL values). Indeed, the development towards native XML databases, e.g., [15,18,28,30,34], was necessary to overcome the shortcomings of "shredding" [35] and led to new benchmark requirements and opportunities [27]. Other comparative work in [1] and [26] either compared existing XML benchmarks or discussed the cornerstones of good XML benchmarks (e.g., [21,32]) defining queries and datasets. The only work that tried to extend existing approaches is presented in [31]. Although the authors did not consider TPoX, which has a focal point on updates, they extended other existing benchmarks with update queries to overcome that drawback.
3 Performance-Critical Components
To explore the performance behavior of DBMS components, the analyzing tools must be able to address their specific operations in a controlled way, provoke their runtime effects under a variety of load situations, and make these effects visible to enable result interpretation and to conduct further research regarding the cause of bottleneck behavior or performance bugs. The characteristics of the XML documents used are described in detail in the next section. Because the component analysis always includes similar sets of node and path operations, we do not want to repeat them redundantly and only sketch the important aspects or operations leading to optimized behavior. XTC (XML Transaction Coordinator), our prototype XDBMS, is used in all measurements [15]. Its development over the last four years accumulated substantial experience concerning DBMS performance. Based on this experience, inspection of the internal flow of DBMS processing indicated the critical components benchmarked, which are highlighted in the illustration of the layered XTC architecture depicted in Fig. 1.

Fig. 1. XTC Architecture – overview [15]

Many of the sketched components have close relationships in functionality or similar implementations in the relational world. Therefore, stable and proven solutions with known performance characteristics exist for them. However, a closer study of the components highlighted reveals that their functionality and implementation exhibit the largest differences compared to those in a relational DBMS. Therefore, it is meaningful to concentrate on these components and their underlying concepts to analyze and optimize their (largely unfamiliar) performance behavior. While application-specific benchmarks such as XMark or TPoX are designed to pinpoint the essentials of a typical application of a given domain and only use the required resources and functions of it, benchmarks checking layer-specific functionality or common DBMS components have to strive for generality and have to serve for the determination of generic and sufficiently broad solutions. For the on-going system development of XDBMSs, we want to emphasize that benchmarks have to address each layer separately to deliver helpful hints for performance improvement. Local effects (e.g., expensive XML mapping, compression penalties, quality of operator selection among existing alternatives, buffer effects) are often invisible at the XQuery layer and must be analyzed via native interfaces or tailored benchmarks.
Table 1. Selected XML documents considered

Document   Description                   Size     Depth max/avg   Nodes     Paths
dblp       Computer science index        330 MB   6 / 3.4         17 Mio    153
uniprot    Universal protein resource    1.8 GB   6 / 4.5         135 Mio   121
lineitem   TPC-H data                    32 MB    4 / 3.0         2 Mio     17
treebank   Wall Street Journal records   86 MB    37 / 8.44       3.8 Mio   220k
SigRec     SIGMOD records                0.5 MB   7 / 5.7         23,000    7
XMark      Artificial auction data       12 MB    13 / 5.5        324,271   439
XMark      Artificial auction data       112 MB   13 / 5.6        3.2 Mio   451
account    TPoX account documents        ~6 KB    8 / 4.7         ~320      ~100
order      TPoX order documents          ~2 KB    5 / 2.6         ~81       ~83
security   TPoX security documents       ~6 KB    6 / 3.5         ~100      ~53
In the following, we show what kinds of approaches help to systematically explore performance-critical aspects within the layers of an XDBMS.
4 Storage Mapping
As the foundation of "external" XDBMS processing, special attention should be paid to storage mapping. Native XDBMSs avoid shredding to overcome shortcomings in scalability and flexibility [35] and developed tree-like storage structures [15,18,34]. One of the most important aspects is stable node labeling, which allows for fast node addressing and efficient IUD (insert, update, delete) operations on nodes or subtrees. Two classes of labeling schemes emerged, where the prefix-based schemes proved to be superior to the range-based schemes; they are more versatile, stable, and efficient [7,8]. In addition, they enable prefix compression, which significantly reduces space consumption and, in turn, IO costs. Furthermore, a node label delivers the labels of all ancestors, which is an unbeatable advantage in the case of hierarchical intention locking [12]. Another important mapping property deals with XML structure representation – in particular, with clever handling of path redundancy. A naive approach is to map each XML element, attribute, and text node to a distinct physical entity, leading to a high degree of redundancy in case of repetitive XML path instances. Because all implementations use dictionaries to substitute XML tag names (element, attribute) by short numbers (i.e., integers), we do not consider variations of dictionary encodings, but aim to eliminate the structure part to the extent possible. With the help of an auxiliary data structure called a path synopsis (a kind of path dictionary without duplicates), it is possible to avoid structural redundancy altogether [13]. However, text value redundancy has to be addressed by common compression techniques and can be benchmarked orthogonally to the actual XML mapping.
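To make the path synopsis and the resulting elementless (structure-virtualized) mapping more concrete, the following minimal sketch shows the idea in Java. It is not XTC's implementation; all class and method names are invented for illustration, and a real system would store binary records and dictionary-encoded tags rather than strings.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of a path synopsis: every distinct root-to-leaf tag path is
// stored once and addressed by a small integer (the path class / pathId).
// With such a synopsis, an "elementless" mapping only materializes records of
// the form (node label, pathId, text value); the element structure is
// recomputed on demand from the synopsis.
final class PathSynopsis {
    private final Map<String, Integer> pathToId = new HashMap<>();
    private final List<String> idToPath = new ArrayList<>();

    /** Interns a tag path such as "/dblp/article/title" and returns its pathId. */
    int intern(String tagPath) {
        return pathToId.computeIfAbsent(tagPath, p -> {
            idToPath.add(p);
            return idToPath.size() - 1;
        });
    }

    /** Re-derives the element chain (structure part) for a stored record. */
    List<String> expand(int pathId) {
        List<String> tags = new ArrayList<>();
        for (String tag : idToPath.get(pathId).split("/")) {
            if (!tag.isEmpty()) tags.add(tag);
        }
        return tags;
    }

    int numberOfPathClasses() { return idToPath.size(); }
}

final class ElementlessRecord {          // what actually goes to disk
    final String nodeLabel;              // e.g., a prefix-based label like "1.3.5"
    final int pathId;                    // pointer into the path synopsis
    final String value;                  // text or attribute content
    ElementlessRecord(String nodeLabel, int pathId, String value) {
        this.nodeLabel = nodeLabel; this.pathId = pathId; this.value = value;
    }
}

class ElementlessMappingDemo {
    public static void main(String[] args) {
        PathSynopsis ps = new PathSynopsis();
        ElementlessRecord r1 = new ElementlessRecord("1.3.3.3", ps.intern("/dblp/article/title"), "XML Storage");
        ElementlessRecord r2 = new ElementlessRecord("1.3.5.3", ps.intern("/dblp/article/title"), "Another Title");
        // Both records share one path class; no inner-structure nodes are stored.
        System.out.println(ps.numberOfPathClasses() + " path class(es), e.g. " + ps.expand(r1.pathId));
        System.out.println(r2.nodeLabel + " shares pathId " + r2.pathId);
    }
}
```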
Fig. 2. Mapping space efficiency (naive vs. prefix-compressed vs. elementless storage)

Fig. 3. IO and CPU fractions analyzed (TPoX document sets of 10 MB, 100 MB, and 1000 MB)
Because ordering is an important aspect for XML processing, the storage mapping has to observe the ordering and, to a certain degree, the round-trip property. Furthermore, the storage mapping has to be dynamic to allow for IUD operations within single documents and collections of documents. For benchmarking, we identified the following important aspects: space consumption (mapping efficiency), ratio and overhead of optional compression techniques, document size, number of documents, CPU vs. IO impact, and modifications (IUD). Benchmark datasets, listed in Table 1, were chosen from different sources to reflect the variety of XML documents. Therefore, we recommend collecting benchmark numbers from real data (e.g., dblp) and artificial data (e.g., XMark), tiny to huge documents (e.g., TPoX, protein datasets), documents with complex and simple structure (e.g., treebank, dblp), as well as document-centric and data-centric XML data (e.g., TPC-H data, SIGMOD records). The datasets used are derived from [25,29] and [33]. For scaling purposes, we generated differently sized XMark documents and differently composed sets of TPoX documents. Highlighting the fundamental importance of mapping efficiency, Fig. 2 gives some basic insight into three different XML mapping approaches, namely naive, prefix compressed, and elementless. While the naive approach simply maps each XML entity combined with its node label to a physical record, the prefix compressed mode drastically reduces this space overhead through prefix compression of node labels. The final optimization is to avoid redundancy in the structure part altogether through so-called structure virtualization, where all structure nodes and paths can be computed on demand by using the document's path synopsis. The mapping efficiency analysis in Fig. 2 shows that all kinds of documents benefit from optimized mappings when compared to plain (the external XML file size). The second benchmark example (see Fig. 3) identifies the impact of the mapping's compression techniques. For this purpose, we used TPoX document sets in the scaling range from 10 MB to 1000 MB and analyzed the share of IO and CPU time spent to randomly access and process these sets. For small sets (≤ 100 MB), the reduction of the IO impact is visible, whereas for large sets (> 100 MB) the differences are nearly leveled out. However, several other aspects heavily influence a storage-related benchmark, e.g., the block size chosen, the available hardware (disk, CPU(s), memory), and the software (OS, load).
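An IO-vs.-CPU breakdown like the one behind Fig. 3 can be approximated, for a single-threaded benchmark step, by comparing wall-clock time with the CPU time the JVM reports for the measuring thread and attributing the remainder mostly to IO waits. This is only a rough sketch under that simplifying assumption; the workload passed in is a placeholder.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Rough sketch: approximate the CPU/IO split of a single-threaded benchmark
// step by comparing wall-clock time with the thread's CPU time. The remainder
// is dominated by IO waits for IO-bound storage workloads (a simplifying
// assumption, and CPU-time measurement must be supported by the JVM);
// runWorkload() is a placeholder for the operation under test.
class CpuIoSplit {
    static void measure(Runnable runWorkload) {
        ThreadMXBean tmx = ManagementFactory.getThreadMXBean();
        long cpuBefore = tmx.getCurrentThreadCpuTime();   // nanoseconds
        long wallBefore = System.nanoTime();
        runWorkload.run();
        long wallNs = System.nanoTime() - wallBefore;
        long cpuNs = tmx.getCurrentThreadCpuTime() - cpuBefore;
        double cpuShare = 100.0 * cpuNs / wallNs;
        System.out.printf("wall=%.1f ms, cpu=%.1f ms (%.0f%%), non-cpu (mostly IO)=%.0f%%%n",
                wallNs / 1e6, cpuNs / 1e6, cpuShare, 100.0 - cpuShare);
    }

    public static void main(String[] args) {
        // Placeholder workload: burn some CPU so the example runs stand-alone.
        measure(() -> { long s = 0; for (int i = 0; i < 50_000_000; i++) s += i; });
    }
}
```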
Summarizing, the benchmarks have exhibited the influence of the various mapping options and the available performance spectrum, even at the lower system layers. Because these mappings serve different objectives, it depends on the specific XML document usage which option meets the actual workload best.
5 Indexes
Although most XML indexing techniques proposed are developed to support reader transactions, they are necessarily used in read/write applications. After all, they have to observe space restrictions in practical applications. Therefore, a universal index implementation has to meet additional requirements such as maintenance costs and footprint. Moreover, its applicability, scalability, and query support have to be analyzed, too. On the one hand, we can find elementary XML indexes such as DataGuide [9] variations that index all elements of an XML document, or full-text indexes covering all content nodes. Such indexes cover a fairly broad spectrum of search support, but need a lot of space and induce high maintenance costs. On the other hand, we can find more adjusted indexes such as path indexes [9,24] or CAS indexes [20] to fill existing gaps for specific access requirements. Comparing different index types should avoid common pitfalls, i.e., specialized indexes should not be evaluated against generic indexes, because different sets of indexed elements lead to different index sizes, selectivities, and, therefore, expressive power. Another drawback may be induced by unfavorable or complex update algorithms, leading to the difficult question which kind of workload is supported best. Moreover, destroying and rebuilding of indexes may be frequently required on demand in highly dynamic environments, thereby drawing the attention primarily to index building costs. Finally, query evaluation may be affected by costly index matching, in contrast to the relational case where an attribute-wise index matching is fairly cheap. Thus, to benchmark index configurations with sufficient quality, the performance of an abundant range of documents, workloads, and index definitions should be evaluated under the different storage mappings.

Table 2. Characteristics for selected XMark indexes built for an elementless document

#    Type       Definition              Size       Paths         Clustering   Entries
I1   CAS        //* (all text nodes)    25.9 MB    514 (94 %)    label        1,173,733
I2   CAS        //* (all text nodes)    25.9 MB    514 (94 %)    path         1,173,733
I3   CAS        //item/location         0.26 MB    6 (1.1 %)     label        21,750
I4   CAS        //asia/item/location    0.025 MB   1 (0.2 %)     label        2,000
I5   PATH       //keyword               0.67 MB    99 (18 %)     label        69,969
I6   PATH       //keyword               0.43 MB    99 (18 %)     path         69,969
I7   CONTENT    all content nodes       21.3 MB    –             –            1,555,603
I8   ELEMENT    all element nodes       10.2 MB    –             –            1,666,384
I9   CAS        //* ∧ //@               31.0 MB    548 (100 %)   path         1,555,603
D1   DOCUMENT   document index          94.5 MB    –             –            1,568,362
However, due to space restrictions, we can only refer to a set of examples based on elementless storage to identify which (of the many) aspects must be observed first when benchmarking indexes. Therefore, Table 2 shows a selection of different index types and their characteristics supporting different kinds of queries.2

2 All indexes are built for a 112 MB XMark document containing a subset of the document's 3,221,913 XML entities (element, attribute, and text nodes).

For instance, the CAS indexes I1 and I2 are equal except for the clustering techniques used, which optimize either document-ordered access or path-based access. Moreover, the path-based clustering may need an additional sort to combine entries from more than one path instance. The indexes I3 and I4 serve as examples for refinement; the more focused an index definition, the fewer XML entities are addressed, which leads to smaller (and, in case of IUD, cheaper to maintain) indexes. However, their expressive power and usability for query support is reduced by such refinements. Path indexes (e.g., I5 and I6) using prefix compression on their keys may differ in size, in contrast to CAS indexes where the index size is independent of the clustering. Furthermore, they can exploit optional clustering whose performance benefit is, however, query dependent. Storage-type-independent indexes such as the stated content index I7 and element index I8 are fairly generic by covering the entire XML document. Thus, they need maintenance for each IUD operation, but often provide limited fallback access and can thereby avoid a document scan. Moreover, the complexity of XML indexing is shown by I7 and I9, which actually index the same nodes (text and attribute content). But I9 needs more space to include path information and supports clustering. Therefore, XML indexes require fine-tuning to exploit their features and have to be tailored to the workload. In contrast, unknown workloads may benefit from more generic index approaches, whereas fixed workloads may be best supported by very specific path indexes or CAS indexes. Secondary features like the clustering may have a huge impact on query performance. This is confirmed by the indexing example in Fig. 4, which clearly shows that such details need to be considered for XML index benchmarking.

Fig. 4. Path index cluster comparison (execution time in ms over selectivity, label vs. path clustering)

Comparing the cluster impact of path indexes I5 and I6, it becomes obvious that low selectivities (i.e., small numbers of path classes) are better supported
by path-based clustering, whereas high selectivities (in our example ≥50 %) can better take advantage of document-ordered, label-based clustering.
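The clustering effect just described can be made concrete with a toy model: if path index entries are kept in a sorted structure keyed by (pathId, node label), all entries of one path class form a single contiguous key range, whereas under document-order (label) clustering they are scattered across the whole index. The sketch below is deliberately simplified (string keys, in-memory maps) and does not reflect XTC's actual index layout.

```java
import java.util.TreeMap;
import java.util.NavigableMap;

// Toy model of the two clustering options of a path index. Keys are strings
// for brevity; a real index would use binary keys and B-tree pages. The point:
// with path clustering, all entries of one path class are one contiguous key
// range (a single range scan), whereas with label clustering (document order)
// they are scattered over the whole index and must be filtered out.
class PathIndexClusteringDemo {
    public static void main(String[] args) {
        // path-clustered index: key = pathId + "|" + node label
        TreeMap<String, String> pathClustered = new TreeMap<>();
        // label-clustered index: key = node label, value = pathId
        TreeMap<String, Integer> labelClustered = new TreeMap<>();

        int[] pathIds = {7, 12, 99};
        for (int i = 1; i <= 9; i++) {
            String label = "1.3." + (2 * i + 1);      // some document-ordered labels
            int pathId = pathIds[i % pathIds.length];
            pathClustered.put(String.format("%03d|%s", pathId, label), "");
            labelClustered.put(label, pathId);
        }

        // Query: all nodes of path class 12 (e.g., //keyword).
        NavigableMap<String, String> range =
                pathClustered.subMap("012|", true, "012|\uffff", false);  // one range scan
        long scattered = labelClustered.values().stream()
                .filter(p -> p == 12).count();                            // full scan + filter
        System.out.println("path clustering: " + range.size() + " entries in one contiguous range");
        System.out.println("label clustering: " + scattered + " entries found by scanning all "
                + labelClustered.size());
    }
}
```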
6 Path-Processing Operators
With each level of abstraction in the system architecture, the objects become more complex, allowing more powerful operations and being constrained by a larger number of integrity rules. Therefore, the parameter space of the operators frequently increases dramatically, such that exhaustive analysis is not possible anymore. The options of the data structures and related operations at the path-processing layer are already so abundant and offer so many choices that it becomes hopeless to strive for complete coverage. Nevertheless, accurate-enough benchmarking needs to consider the most influential parameters (e.g., stack size(s), index usage, recursion, false positive filtering) at least in principle. Here, we sketch our search for optimal evaluation support concerning tree-pattern queries and how we coped with their inherent variety and complexity. Because so many path-processing operators and join mechanisms were proposed in the literature for the processing of query tree patterns (QTP) and because we wanted to check them with our own optimization ideas, we implemented for each of the various solution classes the best-rated algorithm in XTC to provide an identical runtime environment and to use a full-fledged XDBMS (with appropriate indexes available) for accurate cross-comparisons: Structural Joins, TwigStack, TJFast, Twig2Stack, and TwigList [2,6,17,19,22]. Structural Join as the oldest method decomposes a QTP into its binary relationships and executes them separately. Its key drawback is the large number of intermediate results produced during the matching process. TwigStack as a holistic method processes a QTP as a whole in two phases, where at first partial results for each QTP leg are derived, before the final result is created in an expensive merging phase. TJFast, inspired by TwigStack, aims at improvements by reducing IO. It uses a kind of prefix-based node labeling which enables the mapping of node labels to their related paths in the document. As a consequence, only document nodes potentially qualifying for QTP expressions have to be fetched, but it is still burdened by the expensive merging phase. Twig2Stack and its refined version TwigList evaluate QTPs without merging in a single phase, but they require more memory than TwigStack and TJFast. In the worst case, they have to load the entire document into memory. We complemented the set of these competitors with tailored solutions – especially developed in the XTC context to combine prefix-based node labeling and path synopsis use – called S3 and its optimized version OS3 [17], where query evaluation avoids document access to the extent possible. To figure out the query evaluation performance for them, we used a set of benchmark queries (see Table 3) for XMark documents which guaranteed sufficient coverage of all aspects needed to cross-compare the different path processing and join algorithms under the variation of important parameters (type of index, selectivity of values, evaluation mechanism (bottom-up or top-down), size of documents, etc.).
Table 3. Tree-pattern queries used to benchmark join algorithms on XMark documents

#    Query                                                                 Matches
X1   /site//open_auction[.//bidder/personref]//reserve                     146982
X2   //people//person[.//address/zipcode]/profile/education                15857
X3   //item[location]/description//keyword                                 136260
X4   //item[location][.//mailbox/mail//emph]/description//keyword          86568
X5   //item[location][quantity][//keyword]/name                            207639
X6   //people//person[.//address/zipcode][id]/profile[.//age]/education    7991
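As a minimal illustration of the oldest operator class above, the following sketch shows the ancestor-descendant primitive behind Structural Joins on prefix-based (DeweyID-like) labels: a label is an ancestor of another iff it is a proper prefix at a division boundary. The labels are made up, and a real operator would use a stack-based merge over document-ordered index scans instead of nested loops.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a binary structural join on prefix-based node labels
// (e.g., DeweyID-like dotted labels). A label a is an ancestor of d iff a is a
// proper prefix of d at a division boundary. Input lists are assumed to be in
// document order, as an index scan would deliver them; labels are made up.
class StructuralJoinSketch {
    static boolean isAncestor(String a, String d) {
        return d.length() > a.length() && d.startsWith(a) && d.charAt(a.length()) == '.';
    }

    /** Returns all (ancestor, descendant) pairs, e.g. for //item//keyword. */
    static List<String[]> structuralJoin(List<String> ancestors, List<String> descendants) {
        List<String[]> result = new ArrayList<>();
        for (String a : ancestors) {                  // a real implementation would use a
            for (String d : descendants) {            // stack-based merge instead of nested loops
                if (isAncestor(a, d)) result.add(new String[]{a, d});
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> items = List.of("1.3.5", "1.3.9");                      // //item
        List<String> keywords = List.of("1.3.5.3.7", "1.3.9.5", "1.7.3");    // //keyword
        structuralJoin(items, keywords)
                .forEach(p -> System.out.println(p[0] + " -> " + p[1]));
    }
}
```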
Fig. 5. Experimental results for tree-pattern queries on XMark (scale 5): a) total execution time (ms), b) IO time (ms); compared methods: TwigStack, TJFast, TwigList, S3, OS3
Unlike all competitor methods, S3 and OS3 executed path expressions not directly on the XML document, but first evaluated them against a path-synopsis-like structure, to minimize access to the document. Hence, variations of our idea underlying the S3 algorithm outperformed any kind of conventional path operator use, achieved stable performance gains, and proved their superiority under different benchmarks and in scalability experiments [17]. Fig. 5 shows our experimental results for the XMark (scale 5) dataset. Because the execution times for Structural Joins were typically orders of magnitude worse than those of the remaining methods, we have dropped them from our cross-comparison. As a general impression, our own methods – in particular, OS3 – are definitely superior to the competitors, in terms of execution time and IO time. As depicted in Fig. 5 (a), OS3 is at least three times faster than the other methods. S3 also obtains the same performance except for X3, X4, and X5. Here, S3 is about three times slower than TJFast for X4 and X5, and it is 1.3 times slower for X3; here, it exhibits the worst performance among all methods. As a result, processing time and IO cost for queries like X4 (see Fig. 5 (a) and (b)) are very high; OS3 can reduce these costs by tailored mechanisms [17]. As a consequence, OS3 is often more than two times faster than TJFast – the best of the competitor methods – for X3, X4, and X5. Finally, such twig operators should be confronted with largely varying input sizes to prove their general applicability, because stack-based operators
(e.g., TwigStack, Twig2Stack) or recursive algorithms stress memory capabilities more than iterative algorithms and iterator-based operators. Another aspect, not addressed in this work, is the preservation of document order for XML query processing. A fair evaluation has to ensure that all operators deliver the same set and order of results. Moreover, a comparison has to state whether indexes were used and whether the competing operators used different indexes.
7 Transaction Processing
Like all other types of DBMSs, XDBMSs, too, must be designed to scale in multi-user environments with both queries and updates from concurrent clients. Of course, we should here leverage experience collected in decades of database research, but we must also revise the appropriateness of prevailing principles. As already mentioned, aspects of logging and recovery are generally not different for transactional XML processing, because all current storage mappings are still built on page-based data structures. In terms of transaction isolation, however, we have to meet the concerns of XML's hierarchical structures and new data access patterns. The TPoX benchmark – the first XML benchmark that covers updates – defines a workload mix that queries and updates large collections of small documents. The authors claim that this setup is typical for most data-centric XML applications, which implies that relevant documents are easily identified through unique attribute values supported by additional indexes.3 Hence, document-level isolation would always provide sufficient concurrency. In general, however, data contention increases rapidly with the share of non-exact queries like, e.g., range queries, and the ratio between document size and number of documents. Research-focused transaction benchmarks for XDBMS should take these aspects into account, should not restrict themselves solely to current XML use cases, and thus should not close the door on new types of applications profiting from the use of semi-structured data. Therefore, we strive for an application-independent isolation concept, which provides us with both competitive performance through simple document-level isolation if sufficient and superior concurrency through fine-grained node-level isolation if beneficial. The essence of our efforts is a hierarchical lock protocol called taDOM [16]. It is based on the concepts of multi-granularity locking, which are used in most relational DBMSs, but is tailored to maximize concurrent access to XML document trees. To schedule read and write access for specific nodes, siblings, or whole subtrees at arbitrary document levels, transactions may choose from sophisticated lock modes that have to be acquired in conjunction with so-called intention locks on the ancestor path from root to leaf. The concept of edge locks [16] complements this approach to avoid phantoms during navigation in the document tree.
3 90% of the TPoX workload directly addresses the relevant document(s) through unique id attribute values supported by additional indexes for the measurements.
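The following sketch only illustrates the underlying multi-granularity idea: intention locks are requested on every ancestor, whose labels can be derived from the node's prefix-based label without any storage access, before the node itself is locked. It uses the classic IS/IX/S/X modes for brevity; taDOM3+ defines a richer, XML-specific mode set (and edge locks) that is not shown here, and the API is invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified illustration of hierarchical (multi-granularity) locking as used
// by taDOM-style protocols: before a node is locked, intention locks are
// requested on all ancestors, whose labels can be derived from the node's
// prefix-based label without any storage access. Only the classic IS/IX/S/X
// modes are shown here; taDOM3+ defines a richer, XML-specific mode set.
class HierarchicalLockingSketch {
    enum Mode { IS, IX, S, X }

    static List<String> ancestors(String label) {       // "1.5.3.9" -> ["1", "1.5", "1.5.3"]
        List<String> result = new ArrayList<>();
        int idx = label.indexOf('.');
        while (idx != -1) {
            result.add(label.substring(0, idx));
            idx = label.indexOf('.', idx + 1);
        }
        return result;
    }

    /** Lock requests a transaction would issue to update the node with this label. */
    static List<String> requestsForUpdate(String nodeLabel) {
        List<String> requests = new ArrayList<>();
        for (String anc : ancestors(nodeLabel)) {
            requests.add(Mode.IX + " on " + anc);        // intention-exclusive on the path
        }
        requests.add(Mode.X + " on " + nodeLabel);       // exclusive on the node itself
        return requests;
    }

    public static void main(String[] args) {
        requestsForUpdate("1.5.3.9").forEach(System.out::println);
    }
}
```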
To exemplify the benefits of node-level locking in terms of throughput and scalability, we executed a mix of eight read-only and update transaction types on a single 8 MB XML document and varied the number of concurrent clients. All transaction types follow a typical query access pattern. They choose one or more jump-in nodes directed by a secondary index and navigate from there in the document tree to build the query result or find the update position (see also [3]).

Fig. 6. Scalability of node-level locking (throughput in tpm over the number of concurrent clients, document-level vs. node-level concurrency control)

The results in Fig. 6 show that the reduced locking overhead of document-level isolation only paid off in the single-client case with a slightly higher throughput. In all other cases, node-level isolation improved not only the transaction rates, but also the scalability of the system, because a too coarse-grained isolation dramatically increases the danger of the so-called convoy effect. It arises if a system cannot scale with the rate of incoming client requests, because the requested data is exclusively accessed by update transactions. Accordingly, the more clients we had, the more document-level isolation suffered from rapidly growing request queues, leading consequently to increasing response times, more timeouts, and finally a complete collapse. Although this example nicely illustrates the potential of node-level locking, it creates new challenges that we have to conquer. One of the first experiments with fine-grained locking, for example, surprisingly revealed that isolation levels lower than repeatable achieve less throughput, which is completely different from what we know from relational databases [14]. The reason for this is the exploding number of request-release cycles for the intention locks on the document tree; a phenomenon that was not known from the small-granule hierarchies in relational systems. Further work led to the insight that prefix-based node labels are beneficial not only for storage and query purposes, but also for the cheap derivation of ancestor node labels for intention locks. Finally, latest results [3] proved that we can overrule the objections that taDOM is too expensive for larger documents. A simple yet effective lock escalation mechanism allows us to balance lock overhead and concurrency benefits dynamically at runtime. In the latter experiments, we also identified the importance of update-aware query planning as a new research topic. By now, we are not aware of any work that covers the implications of concurrent document access and modification and the danger of deadlocks during plan generation. Another open question is the concurrency-aware use of the various XML indexes and their interplay with document storage.
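The escalation mechanism of [3] is not specified in detail here, so the sketch below shows just one simple threshold-based variant of the general idea: count the fine-grained node locks a transaction holds below a subtree root and replace them with a single subtree lock once a threshold is exceeded. Class names and the policy are illustrative assumptions, not the actual implementation.

```java
import java.util.HashMap;
import java.util.Map;

// One simple, threshold-based variant of lock escalation (the actual mechanism
// used in [3] may differ): per subtree root, count the fine-grained node locks
// a transaction has acquired below it; once the count exceeds a threshold,
// replace them with a single coarse subtree lock to cap the locking overhead.
class LockEscalationSketch {
    private final int threshold;
    private final Map<String, Integer> nodeLocksPerSubtree = new HashMap<>();
    private final Map<String, Boolean> escalated = new HashMap<>();

    LockEscalationSketch(int threshold) { this.threshold = threshold; }

    /** Called for every node lock acquired below the given subtree root. */
    void onNodeLock(String subtreeRoot) {
        if (escalated.getOrDefault(subtreeRoot, false)) return;   // already coarse
        int n = nodeLocksPerSubtree.merge(subtreeRoot, 1, Integer::sum);
        if (n > threshold) {
            escalated.put(subtreeRoot, true);
            nodeLocksPerSubtree.remove(subtreeRoot);
            System.out.println("escalate: single subtree lock on " + subtreeRoot
                    + " replaces " + n + " node locks");
        }
    }

    public static void main(String[] args) {
        LockEscalationSketch e = new LockEscalationSketch(3);
        for (int i = 0; i < 5; i++) e.onNodeLock("1.5");   // the fourth lock triggers escalation
    }
}
```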
8 Integrated View
In addition to component-level (or layer-level) analysis, we must also take the dependencies and implications of certain design decisions into account.
As already indicated in the previous sections, key aspects like, e.g., the chosen node labeling scheme have a huge effect on the whole system. As another example, the virtualization of the inner document structure leverages not only the storage mapping, but also indexing and path-processing operators in higher system layers. On the other hand, however, improvements in one part of the system can also impose new challenges on other system components. In the concurrency experiments in [3], for example, we reached such a high concurrency at the XML level that our storage structures became the concurrency bottleneck – a completely new challenge for XML storage mappings. Obviously, the next step towards scalable and generic XDBMS architectures must turn attention to the interplay of all layers at the system boundary. To begin with, we can fall back on existing toolboxes to evaluate the performance of key XML functions like navigation (DOM), streaming (SAX, StAX), and path evaluation (XPath). Thereafter, a wide range of XQuery benchmarks should be applied to identify the pros and cons of an architecture. Of course, it does not seem possible to cover all potential performance-relevant aspects in every combination with exhaustive benchmarks, but, based on our experience, we can say that we need meaningful combinations of the following orthogonal aspects to get a thorough picture. The database size should be scaled in both dimensions: the number of documents and the individual document size. To cover the full range of XML's flexibility, different degrees of structure complexity should be addressed with unstructured, semi-structured, structured, and mixed-structured documents in terms of repeating "patterns". Furthermore, variations of parameters such as fan-out and depth can also give valuable insights. Queries should assess capabilities for full-text search, point and range queries over text content, as well as structural relationships like paths and twigs and combinations of both. Update capabilities should be addressed by scaling from read-only workloads over full document insertions and deletions to fine-grained intra-document updates. Finally, these workloads have to be evaluated in single- and multi-user scenarios. To identify meaningful combinations, there must be an exchange between database researchers and application developers. On the one hand, specific application needs must be satisfied by the XDBMS and, on the other hand, applications have to be adjusted to observe the strengths and weaknesses of XDBMSs. Although the flexibility of XML allows many ways to model data and relationships in logically equivalent variants, it may have a strong influence on the performance of an XDBMS, e.g., in terms of buffer locality. Hence, system capabilities will also cause a rethinking of the way XML data is modeled, because data modeling driven solely by business needs will not necessarily lead to an optimal representation for an XDBMS-based application. Consequently, there must be a distinction between logical and physical data modeling, as in the relational world.
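One way to make such "meaningful combinations" concrete is to enumerate a configuration matrix over the orthogonal dimensions listed above and then prune it to what an experiment campaign can afford. The dimension values in this sketch are examples only, not a prescribed benchmark.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a benchmark configuration matrix over the orthogonal dimensions
// discussed above. The concrete dimension values are examples only; a real
// campaign would prune the raw cross product to affordable, meaningful runs.
class BenchmarkMatrix {
    record Config(String dbScaling, String structure, String queryType, String updateShare, int clients) {}

    public static void main(String[] args) {
        List<String> dbScaling  = List.of("many small documents", "few large documents");
        List<String> structures = List.of("structured", "semi-structured", "document-centric");
        List<String> queries    = List.of("full-text", "point/range on content", "path/twig");
        List<String> updates    = List.of("read-only", "document insert/delete", "intra-document updates");
        List<Integer> clients   = List.of(1, 10, 100);

        List<Config> matrix = new ArrayList<>();
        for (String db : dbScaling)
            for (String st : structures)
                for (String q : queries)
                    for (String up : updates)
                        for (int c : clients)
                            matrix.add(new Config(db, st, q, up, c));

        System.out.println(matrix.size() + " raw combinations before pruning, e.g.:");
        System.out.println(matrix.get(matrix.size() - 1));   // the "hardest" corner case
    }
}
```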
9 Conclusions
We believe that benchmarking is a serious task for database development and, furthermore, we think that current benchmarks in the XML domain do not cover
the entire XML complexity provided by native XDBMSs. To reveal important insights into how database-based XML processing (e.g., XQuery) is executed, we started to implement various promising algorithms for all layers of the entire DBMS architecture. Although this approach is time-consuming (and sometimes error-prone, too), it allows for direct comparisons and analyses of competitive ideas and justifies the development of our own native XDBMS – XTC. However, in this work we want to motivate bottom-up development and simultaneous benchmarking by giving some insight into critical aspects which arose during the development. Furthermore, we emphasize common pitfalls and results gained through tailored benchmarking of distinct components. In addition, it turned out that benchmarking is an interplay of hardware, software, workload (data and queries), measuring setup, and fairness. Hardware selection has to be reasonable w.r.t. main memory, number of CPUs, disk size and speed, etc. For instance, the trade-off between CPU and IO costs can be adjusted either through the selection of algorithms or often by hardware adjustments. New concepts such as distributed processing or adaptivity can rapidly extend the benchmark matrix. Thus, benchmarking different algorithms needs to be performed under realistic system configurations (e.g., current and properly sized hardware). When it comes to workload modeling, either real-world datasets and queries and/or artificial datasets representing a wide range of applications should be used. Unfortunately, this is a difficult problem and needs sound consideration. Moreover, the situation that one algorithm put to the benchmark totally dominates its competitors is rather rare; in fact, scientific contributions most often emphasize an algorithm's strengths and omit its shortcomings. New findings gained by benchmarking new techniques and alternative algorithms may lead to a rethinking, and thereby to a reimplementation, which may also trigger expensive development costs. Thus, unfortunately, the integration of new ideas is slow and cumbersome. Here, research is challenged to evaluate such new ideas by proof-of-concept implementations before commercial systems may adopt them.

9.1 Should We Propose Another XML Benchmark?
A logical conclusion, drawn after evaluating the critical aspects of XML processing mentioned in this work and the existing XML benchmarks, would be to develop yet another (new) XML benchmark. However, we do not think that a new benchmark is necessary at all. The rich variety of XML workloads (i.e., datasets and queries) allows for generating critical (and corner) cases. For instance, during our storage layer benchmarking we started with single large documents from [25], before we learned that real applications may also need to process several million (small) documents [29]. Therefore, we extended our storage benchmarks to cover all kinds of XML documents. Furthermore, XML processing pervades certain areas such as information retrieval, where fast reads are mandatory, or the area of application logging, where inserts and updates prevail and occur concurrently. Thus, the spectrum of transactional processing is quite wide and requires tailored protocols to ensure ACID capabilities. Concurrent transactional
behavior, for instance, can be evaluated by weighting the pre-defined benchmark queries of [29,33] according to the objectives put to the benchmark. However, an open system design is helpful to adjust the measuring points for meaningful results. That means either the internal behavior (algorithms) has to be published and implemented in a single system, or at least proper interfaces have to be available on each system put to the test.

9.2 Future Work
For our future work, we plan to extend our benchmark findings and continue the bottom-up approach towards the query and application level. Here, we want to address XPath/XQuery in more detail, schema processing, XML applications and use cases, XML data modeling, and the domain of information retrieval on XML. Furthermore, aspects like query translation, query optimization, and XQuery language coverage are critical points for comparing XQuery compilers.
References

1. Afanasiev, L., Franceschet, M., Marx, M.: XCheck: a platform for benchmarking XQuery engines. In: Proc. VLDB, pp. 1247–1250 (2006)
2. Al-Khalifa, et al.: Structural Joins: A Primitive for Efficient XML Query Pattern Matching. In: Proc. ICDE, pp. 141–152 (2002)
3. Bächle, S., Härder, T., Haustein, M.: Implementing and Optimizing Fine-Granular Lock Management for XML Document Trees. In: Proc. DASFAA (2009)
4. Bourret, R.: XML Database Products, http://www.rpbourret.com/xml/XMLDatabaseProds.htm
5. Böhme, T., Rahm, E.: XMach-1: A Benchmark for XML Data Management. In: BTW, pp. 264–273 (2001)
6. Bruno, N., Koudas, N., Srivastava, D.: Holistic Twig Joins: Optimal XML Pattern Matching. In: Proc. SIGMOD, pp. 310–321 (2002)
7. Christophides, V., Plexousakis, D., Scholl, M., Tourtounis, S.: On Labeling Schemes for the Semantic Web. In: Proc. Int. WWW Conf., pp. 544–555 (2003)
8. Cohen, E., Kaplan, H., Milo, T.: Labeling Dynamic XML Trees. In: Proc. PODS, pp. 271–281 (2002)
9. Goldman, R., Widom, J.: DataGuides: Enabling Query Formulation and Optimization in Semistructured Databases. In: Proc. VLDB, pp. 436–445 (1997)
10. Grust, T., Rittinger, J., Teubner, J.: Why off-the-shelf RDBMSs are better at XPath than you might expect. In: Proc. SIGMOD, pp. 949–958 (2007)
11. Halverson, A., Josifovski, V., Lohman, G., Pirahesh, H., Mörschel, M.: ROX: relational over XML. In: Proc. VLDB, vol. 30, pp. 264–275 (2004)
12. Härder, T., Haustein, M., Mathis, C., Wagner, M.: Node Labeling Schemes for Dynamic XML Documents Reconsidered. DKE 60, 126–149 (2007)
13. Härder, T., Mathis, C., Schmidt, K.: Comparison of Complete and Elementless Native Storage of XML Documents. In: Proc. IDEAS (2007)
14. Haustein, M., Härder, T.: Adjustable Transaction Isolation in XML Database Management Systems. In: Bellahsène, Z., Milo, T., Rys, M., Suciu, D., Unland, R. (eds.) XSym 2004. LNCS, vol. 3186, pp. 173–188. Springer, Heidelberg (2004)
15. Haustein, M.P., Härder, T.: An efficient infrastructure for native transactional XML processing. DKE 61, 500–523 (2007)
16. Haustein, M., Härder, T.: Optimizing lock protocols for native XML processing. DKE 65, 147–173 (2008)
17. Izadi, K., Härder, T., Haghjoo, M.: S3: Evaluation of Tree-Pattern Queries Supported by Structural Summaries. DKE 68, 126–145 (2009)
18. Jagadish, H.V., et al.: TIMBER: A native XML database. VLDB Journal 11, 274–291 (2002)
19. Jiang, H., Wang, W., Lu, H., Xu Yu, J.: Holistic Twig Joins on Indexed XML Documents. In: Proc. VLDB, pp. 273–284 (2003)
20. Li, H.-G., Aghili, S.A., Agrawal, D., El Abbadi, A.: FLUX: Content and Structure Matching of XPath Queries with Range Predicates. In: Amer-Yahia, S., Bellahsène, Z., Hunt, E., Unland, R., Yu, J.X. (eds.) XSym 2006. LNCS, vol. 4156, pp. 61–76. Springer, Heidelberg (2006)
21. Lu, H., et al.: What makes the differences: benchmarking XML database implementations. ACM Trans. Internet Technol. 5, 154–194 (2005)
22. Lu, J., Ling, T.W., Chan, C.Y., Chen, T.: From region encoding to extended Dewey: on efficient processing of XML twig pattern matching. In: Proc. VLDB, pp. 193–204 (2005)
23. Michiels, P., Manolescu, I., Miachon, C.: Toward microbenchmarking XQuery. Inf. Syst. 33, 182–202 (2008)
24. Milo, T., Suciu, D.: Index structures for path expressions. In: Beeri, C., Buneman, P. (eds.) ICDT 1999. LNCS, vol. 1540, pp. 277–295. Springer, Heidelberg (1998)
25. Miklau, G.: XML Data Repository, http://www.cs.washington.edu/research/xmldatasets
26. Nambiar, U., Lee, M.L., Li, Y.: XML benchmarks put to the test. In: Proc. IIWAS (2001)
27. Nicola, M., John, J.: XML parsing: a threat to database performance. In: Proc. CIKM, pp. 175–178 (2003)
28. Nicola, M., van der Linden, B.: Native XML support in DB2 universal database. In: Proc. VLDB, pp. 1164–1174 (2005)
29. Nicola, M., Kogan, I., Schiefer, B.: An XML transaction processing benchmark. In: Proc. SIGMOD, pp. 937–948 (2007)
30. Oracle XML DB 11g, http://www.oracle.com/technology/tech/xml/xmldb/
31. Phan, B.V., Pardede, E.: Towards the Development of XML Benchmark for XML Updates. In: Proc. ITNG, pp. 500–505 (2008)
32. Schmidt, A., et al.: Why and how to benchmark XML databases. SIGMOD Rec. 30, 27–32 (2001)
33. Schmidt, A., Waas, F., Kersten, M., Carey, M.J., Manolescu, I., Busse, R.: XMark: a benchmark for XML data management. In: Proc. VLDB, pp. 974–985 (2002)
34. Schöning, D.H.: Tamino - A DBMS Designed for XML. In: Proc. ICDE (2001)
35. Shanmugasundaram, J., et al.: Relational Databases for Querying XML Documents: Limitations and Opportunities. In: Proc. VLDB, pp. 302–314 (1999)
36. XML Path Language (XPath) 2.0. W3C Recommendation, http://www.w3.org/TR/xpath
37. XQuery 1.0: An XML Query Language. W3C Recommendation (2005), http://www.w3.org/TR/xquery
38. XQuery Update Facility 1.0. W3C Candidate Recommendation (2008), http://www.w3.org/TR/xquery-update-10/
39. Yao, B.B., Özsu, M.T., Khandelwal, N.: XBench Benchmark and Performance Testing of XML DBMSs. In: Proc. ICDE, p. 621 (2004)
On Benchmarking Transaction Managers

Pavel Strnad and Michal Valenta
Department of Computer Science and Engineering
Faculty of Electrical Engineering, Czech Technical University
Karlovo náměstí 13, 121 35 Praha 2, Czech Republic
{strnap1,valenta}@fel.cvut.cz
Abstract. We describe an approach to measuring a transaction manager's performance. We design a very simple benchmark intended for evaluating this important component of a DB engine and then apply it to our own transaction manager implementation. We also describe the implementation of the transaction manager itself. It is realized as a software layer over the eXist database engine and is a standalone module which can be used to extend eXist's functionality with transactional processing when needed.
1 Motivation
As a data format, XML sits halfway between a strict data organization structure (as we know it from the relational database model) and unstructured text. In principle, it should be used when such a (semi-)organization of data is advantageous. Approximately 10 years after this data format appeared, it turned out that practical applications are numerous. These applications are usually divided into document-oriented and data-oriented. Hence, we talk about data-oriented databases (orders, auction catalogues, user profiles, etc.) on the one hand, and document-oriented databases (texts, spreadsheets, presentations, etc.) on the other. Now, having quite a lot of XML data, we need to store it effectively and, naturally, we need to be able to search it. Hence, the XML format became a very hot topic in the database community early on (at the beginning of the 21st century). There are a number of indexing methods, and there is lively research on XML stream processing and on XML query optimization techniques. However, a database engine should – next to effective storage and fast querying – provide parallel access for multiple users to shared data and, moreover, it should implement rules for transactional processing that do not block concurrent access of multiple users to the same data source, if possible. In particular, we want to support the properties of transactional processing known as ACID (Atomicity, Consistency, Isolation, Durability). Our contribution is precisely about the practical aspects of the transactional processing of XML data. Consider a team working on a shared semistructured and internally linked document (a book, a project proposal for a grant agency,
accreditation materials, etc.). For sure, there are collaborative applications that support the required teamwork with relative ease. However, if we had a database engine that provided (non-blocking) transactional processing for a document (and its parts), these additional applications would not be needed anymore, and team members could work on the shared project in their own environment, with a higher degree of comfort and effectiveness. In this article, we describe an implementation of a transaction manager that implements the taDOM3+ node-level locking protocol developed precisely for XML data. We want to inspect how much the transaction manager influences the rest of a database engine (new document insertion and querying in particular) in a realistic application. This transaction manager is implemented as part of the CellStore project. However, CellStore as a native XML database engine is not yet advanced enough to support a relevant experiment (as a massive restructuring of the low-level I/O module took place recently, the module is not properly optimized yet). So, we reimplemented our transaction manager in Java and applied it to the eXist database engine. In this environment, we carried out an experiment whose description is the core of this article. This experiment, naturally, is only a starting point for further research. We would like, for example, to find out experimentally the optimal depth for node locking (parts of the document below this threshold are locked automatically with their parent node) for a given application. Setting the depth of locking is a parameter of the taDOM3+ protocol. Another practical outcome may be an estimate of the slow-down of an application if locking is implemented (some applications may give up transactional processing because they are afraid of an unacceptable slow-down). The third reason is the possibility of comparing various transaction manager implementations by a specialized but simple transaction processing benchmark. Let us note, again, that the paper is focused on benchmarking just the transaction manager module, not the whole database engine. Our benchmark is nevertheless inspired by the application-based and broadly accepted benchmarks by TPC and their open-source (lightweight) variant – OSDB. The article is organized as follows: Section 2 describes the status quo of XML benchmarking, while Section 3 adds a few technical details related to XML transactional processing. Then, Section 4 portrays our (object-oriented) implementation of the transaction manager. The core of the article is found in Section 5, where the experiment (a rather simple benchmark) based on XMark data realized on the native XML database engine eXist is discussed. The outcomes and the future direction of the research are summarized in the final section of this article.
2 State of the Art
Many native XML database engines either do not support transactional processing at the user level (eXist, Xindice) or support it only partially (Berkeley DB, Sedna). So, there are only a few native XML engines with the ambition to fully implement a node-level locking mechanism.
On the other hand, there are complex, application-based benchmarks that care about transactions (see mainly TPoX below). But the main aim of this paper is a transaction manager benchmark that is simple enough to implement and measure.

2.1 XML Application Benchmarks
In the following paragraphs, we provide a very brief description of several XML benchmarks. More details about their data models, queries, etc. can be found in [2].

X007 Benchmark: This benchmark was developed upon the 007 benchmark – it is an XML version of 007, enriched by new elements and queries for specific XML-related testing. Similarly to the 007 benchmark, it provides 3 different data sets: small, intermediate and large. The majority of 007 queries is focused on document-oriented processing in object-oriented DBs. The X007 testing set is divided into three groups:
– traditional database queries,
– data navigation queries, and
– document-oriented queries.
Data manipulation and transactional processing are not considered in X007.

XMark Benchmark: This benchmark simulates an internet auction. It consists of 20 queries. The main entities are an item, a person, an opened and finished auction, and a category. Items represent either an object that has already been sold or an offered object. Persons have subelements such as a name, e-mail, telephone number, etc. A category, finally, includes a name and a description. The data included in the benchmark is a collection of the 17,000 most frequently used words in Shakespeare's plays. The standard size of the document is 100 MB. This size, then, is taken as 1.0 on a scale. A user can change the size of the data up to ten times from the default.

XMach-1: This benchmark is based on a web application and it considers different sets of XML data with a simple structure and a relatively small size. XMach-1 supports data with or without a defined structure. The basic measure unit is Xqps – queries per second. The benchmark architecture consists of four parts: the XML database, an application server, clients for data loading and clients for querying. The database has a folder-based structure and XML documents are designed to be loaded (by a load client) from various data sources on the internet. Each document has a unique URL maintained (together with metadata) in a folder-based structure. Furthermore, an application server keeps a web server and other middleware components for XML document processing and for interaction with a backend database.
Each XML file represents an article with elements such as name, chapter, paragraph, etc. Text data are taken from a natural language. A user can change the XML file size by changing the quantity of the article elements. The size of the data file is controlled by changing the number of XML files. XMach-1 assumes that the size of data files is small (1–14 kB). XMach-1 evaluates both standard and non-standard language features, such as insert, delete, URL query and aggregation functions. The benchmark consists of 8 queries and 2 update operations. The queries are divided into 4 groups according to the common characteristics they portray:
– group 1: simple selection and projection with a comparison of elements or attributes,
– group 2: requires the use of element order,
– group 3: tests aggregation capabilities and uses metadata information,
– group 4: tests update operations.

TPoX: Transaction Processing over XML is an application benchmark that simulates financial applications. It is used to evaluate the efficiency of XML database systems with special attention paid to XQuery, SQL/XML, XML storage, XML indexing, XML schema support, XML updates, logging and other database aspects. It appears to be the most complex one and it also is the best contemporary benchmark. The benchmark simulates on-line trading and it uses FIXML to model a certain part of the data. FIXML is an XML version of FIX (Financial Information eXchange): a protocol used by the majority of leading financial companies in the world. FIXML consists of 41 schemas which, in turn, contain more than 1300 type definitions and more than 3600 elements and attributes. TPoX has 3 different types of XML documents: Order, Security, and CustAcc, which includes a customer with all her accounts. The information about holdings is included in the account data. Order documents follow the standard FIXML schema. Typical document sizes are as follows: 3–10 KB for Security, 1–2 KB for Order, and 4–20 KB for combined Customer/Account documents. To capture the diversity/irregularity often seen in real-world XML data, there are hundreds of optional attributes and elements, with typically only a small subset present in any given document instance (such as in FIXML) [7].

2.2 Framework TaMix for XML Benchmarks
TaMix [4] is a framework that provides an automated runtime environment for benchmarks on XML documents. It was developed mainly at the University of Kaiserslautern. Its benchmarks consist of a specified number of update operations per transaction and simulate a bank application tailored to update operations. Unfortunately, a more detailed specification of the framework is not available; hence, we were not able to implement it. But we used the idea of this framework for our benchmark's implementation.
2.3 XML Application Benchmarks – Summary
The benchmarks XMark and X007 can be viewed as combined/composite: their data and queries are in fact fictitious application scenarios, but, at the same time, they try to test essential components of the languages – XQuery and XPath. On the other hand, they ignore update operations. The XMach-1 and TPoX benchmarks consider both queries and updates. Hence, these benchmarks seem much more relevant to our needs. Unfortunately, both benchmarks use very complicated data models and their implementation takes a lot of time. The TaMix framework seems suitable for our implementation, but there is no detailed description available.
3 Concurrency in Native XML DBMS
A common requirement for database management systems is concurrency control. There are four well-known properties for a transactional system, known as ACID [3]. A transaction is generally a unit of work in a database. The ACID properties are independent of the (logical) database model (i.e., they must be maintained in all transactional database systems). Isolation of transactions in a database system is usually ensured by a lock protocol. Direct application of a lock protocol used in relational databases does not provide high concurrency [5,9] (i.e., transactions wait longer than necessary). We show the huge difference between locking protocols for an RDBMS and a native XML database system with a small example. We consider two lock modes: Shared mode and Exclusive mode. The granularity of an exclusive lock in an RDBMS is often a row (or record) [1]. In a native XML database, we have the possibility to lock a single node or to lock a whole subtree. One of the protocols which takes the XML structure into account is the taDOM3+ lock protocol. We implemented this protocol in our transaction manager. A more detailed explanation of the taDOM3+ lock protocol is given in [5]. We consider only well-formed transactions and serializable plans of update operations [1]. All protocols quoted in this paper satisfy these requirements. In this paper, we simply call lock protocols for native XML databases XML-lock protocols. Most of these XML-lock protocols are based on the basic relational lock protocol. Hence, XML-lock protocols inherit most of its features, e.g., two-phase locking to ensure serializability.
4 Transaction Manager's Implementation
In this section we describe our implementation of the Transaction Manager. A complete description is not the aim of this paper; hence, we only sketch a few parts of the implementation, especially those which are important for a better understanding of this paper. A more detailed explanation of the Transaction Manager's implementation is given in [9].
The implementation of locking and transaction management consists of the Transaction Manager module and the Lock Manager module. The Transaction Manager module is responsible for managing transactions, detecting deadlocks, committing, etc. The Lock Manager, on the other hand, implements the XML locking protocol taDOM3+ and maintains the lock modes assigned to nodes. Each lock mode is represented by a class in our implementation. The compatibility matrix and the conversion matrix are implemented using multiple dispatch, with the visitor methods isCompatibleWith() and combineWith() respectively. For measuring purposes the Transaction Manager holds an instance of the Document class directly. In a real system, the Transaction Manager would not hold the instance directly but would implement a specified interface to communicate with other parts of the database system. The Transaction Manager's class diagram is shown in Figure 1. The implementation was done with emphasis on object design. We would like to expose the sequence of method calls for a DOM operation. It is not important which operation we select, because in general the sequence of method calls remains unchanged. The cascade of method calls is shown in Figure 2. The user represents a higher layer in the system hierarchy, such as a query optimizer, and Document represents the backend layer, for example the storage. One of the most important features for the Transaction Manager's performance is a stable numbering scheme for XML nodes. We use the DeweyID [6] ordering scheme in our implementation. A DeweyID is a unique identifier for nodes in our DBMS. The key feature important for the Transaction Manager is that the DeweyID of the parent of a context node is a prefix of the context node's DeweyID.
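The double-dispatch idea behind the compatibility matrix can be sketched as follows. This is a simplified illustration with only two lock modes and invented class names; the real taDOM3+ matrix contains many more modes, and combineWith() would be implemented analogously.

// Double-dispatch sketch of a lock-mode compatibility matrix.
interface LockMode {
    boolean isCompatibleWith(LockMode held);   // first dispatch: on the requested mode
    boolean compatibleWithShared();            // second dispatch: on the held mode
    boolean compatibleWithExclusive();
}

final class SharedMode implements LockMode {
    public boolean isCompatibleWith(LockMode held) { return held.compatibleWithShared(); }
    public boolean compatibleWithShared()    { return true;  }  // S is compatible with S
    public boolean compatibleWithExclusive() { return false; }  // S conflicts with X
}

final class ExclusiveMode implements LockMode {
    public boolean isCompatibleWith(LockMode held) { return held.compatibleWithExclusive(); }
    public boolean compatibleWithShared()    { return false; }
    public boolean compatibleWithExclusive() { return false; }
}

class CompatibilityDemo {
    public static void main(String[] args) {
        LockMode s = new SharedMode();
        LockMode x = new ExclusiveMode();
        System.out.println(s.isCompatibleWith(s));  // true
        System.out.println(x.isCompatibleWith(s));  // false
    }
}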
Fig. 1. Class Diagram of Transaction Manager
Fig. 2. Sequence Diagram of DomOperation
Hence, the Transaction Manager can retrieve the parent's DeweyID directly, without redundant storage accesses. This feature has a significant impact on the Transaction Manager's performance.
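The prefix property can be illustrated with a minimal DeweyID sketch (our own simplification; the actual DeweyID encoding described in [6] is a compressed, division-based format): the parent identifier is obtained by dropping the last division, so no storage access is needed.

import java.util.Arrays;

// Minimal DeweyID sketch: a node id is a sequence of integer divisions,
// e.g. 1.3.5; the parent id (1.3) is a prefix of the child id.
final class DeweyId {
    private final int[] divisions;

    DeweyId(int... divisions) { this.divisions = divisions.clone(); }

    // Parent id = all divisions except the last one; null for the root.
    DeweyId parent() {
        if (divisions.length <= 1) return null;
        return new DeweyId(Arrays.copyOf(divisions, divisions.length - 1));
    }

    // True if this id is an ancestor of (i.e. a proper prefix of) the other id.
    boolean isAncestorOf(DeweyId other) {
        if (divisions.length >= other.divisions.length) return false;
        for (int i = 0; i < divisions.length; i++) {
            if (divisions[i] != other.divisions[i]) return false;
        }
        return true;
    }

    @Override public String toString() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < divisions.length; i++) {
            if (i > 0) sb.append('.');
            sb.append(divisions[i]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        DeweyId node = new DeweyId(1, 3, 5);
        System.out.println(node + " parent: " + node.parent());   // 1.3
        System.out.println(new DeweyId(1, 3).isAncestorOf(node)); // true
    }
}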
5 Benchmarking of the Transaction Manager
5.1 Performance Benchmarking
When we implemented the Transaction Manager we asked the question "How can a transaction manager's performance be measured?" We found that there are two possibilities for measuring the Transaction Manager's performance. The first possibility is to measure the performance of the whole database system twice: a first measurement with the transaction manager and a second measurement without it. The advantage of this possibility is that the measurement is easier to realise, but it does not provide optimal results because it is influenced by the rest of the database system. The second possibility is based on separating the transaction manager from the database system. The important advantage of this possibility lies in providing more relevant results, but the designer of the database system has to think about the modularity of the system at design time.
Our approach for measuring the performance can be applied in both cases. Finally, we decided to design a simple benchmark to get a general overview of the Transaction Manager's performance.
5.2 Benchmark Specification
Our benchmark specification generally consists of:
– the XML Schema of the test database
– the sizes of the database instances
– the benchmarked operations (queries and updates)
– the output, which consists of the duration of the benchmarked operations in milliseconds.
As the schema for our test database we use XMark's database model schema [8]. This schema covers our requirements because it is a model of a real-world application schema for online transaction processing, based on a model of Internet auctions. XMark also includes a generator for database instances, so the benchmark can easily be adjusted to another testing environment. Our benchmark uses 4 different sizes of the test database. Table 1 describes the database instances that the benchmark uses; the generator's factor is the scaling factor f for the XMark generator.

Table 1. Database sizes

File Name    Generator's Factor   Size
db001.xml    0.01                 1 154 kB
db005.xml    0.05                 5 735 kB
db01.xml     0.1                  11 596 kB
db02.xml     0.2                  23 364 kB
The benchmark's tests are described in Table 2. We aim to extend our test cases in the near future. Tests 1 and 2 measure the transaction manager's initialization time, while Test 3 is intended to measure the transaction execution time in a real-world OLTP scenario. It can be executed in a single or a multiple transaction mode. In the single transaction mode we measure the time per transaction without conflicts; in the multiple transaction mode we measure transaction throughput. Finally, we can measure the transaction execution time with respect to the transaction manager. This mode has the following execution plan (a sketch of this setup is given after Table 3):
– 40 parallel transactions at a time
– each transaction executes Test 3
– 5 execution repetitions.
The result is the amount of time spent on that execution.
Table 2. Description of tests

Test 1: t1 = document initialization with DeweyID ordering
        t2 = document initialization without DeweyID ordering
        Result: Δt = t1 − t2

Test 2: t1 = DOM operation getNode() with Transaction Manager
        t2 = DOM operation getNode() without Transaction Manager
        Result: Δt = t1 − t2

Test 3: This test is intended to measure the transaction performance of the Transaction Manager's implementation. The schema S of the transaction consists of the following operations (the semantics of these operations is described in Table 3):

        BEGIN_TRANSACTION
        WAIT
        BID
        WAIT
        CLOSE_AUCTION
        WAIT
        INSERT_AUCTION
        WAIT
        GET_CATEGORIES
        WAIT
        REMOVE_ITEM
        COMMIT_TRANSACTION

        t1 = preceding schema S with Transaction Manager
        t2 = preceding schema S without Transaction Manager
        Result: Δt = t1 − t2
Table 3. Semantics of transaction operations

Operation        Semantics
WAIT             transaction waits a random time (0-5000 ms)
BID              bids on a random item in a random auction
CLOSE_AUCTION    moves a random auction to closed auctions
INSERT_AUCTION   inserts a new auction for a random item
REMOVE_ITEM      removes a random item including all referenced auctions
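For illustration, a driver for the multiple transaction mode of Test 3 could look like the following Java sketch. The operation bodies, class names and the transaction API are placeholders; the actual benchmark executes the operations of Tables 2 and 3 against our Transaction Manager.

import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of the Test 3 driver: N concurrent transactions, each executing
// the schema S (BID, CLOSE_AUCTION, ...) with random waits between operations.
class Test3Driver {

    interface AuctionOps {           // placeholder for calls into the benchmarked system
        void bid(); void closeAuction(); void insertAuction();
        void getCategories(); void removeItem();
    }

    static void runSchema(AuctionOps ops, Random rnd) throws InterruptedException {
        Thread.sleep(rnd.nextInt(5000));  ops.bid();
        Thread.sleep(rnd.nextInt(5000));  ops.closeAuction();
        Thread.sleep(rnd.nextInt(5000));  ops.insertAuction();
        Thread.sleep(rnd.nextInt(5000));  ops.getCategories();
        Thread.sleep(rnd.nextInt(5000));  ops.removeItem();
    }

    // Executes 'transactions' concurrent runs of schema S and returns the elapsed time in ms.
    static long measure(AuctionOps ops, int transactions) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(transactions);
        long start = System.nanoTime();
        for (int i = 0; i < transactions; i++) {
            pool.submit(() -> {
                try { runSchema(ops, new Random()); } catch (InterruptedException ignored) { }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        return (System.nanoTime() - start) / 1_000_000;
    }
}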
5.3 Benchmarking Environment
The environment used for executing the benchmark conforms to the Transaction Manager's implementation. The Transaction Manager is implemented in Java. Java code is compiled into byte-code and then executed in a
Java Virtual Machine (JVM). The JVM has a significant influence on the Transaction Manager's performance, because the executed byte-code is preprocessed and optimized, so the tested methods execute faster during repeated test runs. The computer used for performing the tests was an Intel Core 2 Duo, 2.0 GHz, with a SATA 5400 rpm HDD, running 32-bit Windows Vista and Java Runtime Environment version 1.6.0_07.
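Because of JIT compilation, the measurement loop has to account for warm-up. The following sketch shows the general pattern we assume (discarding warm-up runs and averaging the remaining repetitions); it is illustrative only and not the exact harness used in our tests.

// Sketch of a measurement loop: warm-up runs are discarded so that class loading
// and JIT compilation do not distort the averaged result.
class Timing {
    static long averageMillis(Runnable test, int warmupRuns, int measuredRuns) {
        for (int i = 0; i < warmupRuns; i++) {
            test.run();                          // discarded: triggers class loading / JIT
        }
        long total = 0;
        for (int i = 0; i < measuredRuns; i++) {
            long start = System.nanoTime();
            test.run();
            total += (System.nanoTime() - start) / 1_000_000;
        }
        return total / measuredRuns;
    }
}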
5.4 Results
This section sums up our results. Each test was executed five times. Each test began with an initialisation phase; this is important because the JVM loads classes when they are invoked for the first time, which is necessary in any interpreted (byte-code based) language.

The results of Test 1 are shown in Table 4. Figure 3 shows an approximately linear relation of Δt to the size of the database instance. The cost of the DeweyID ordering algorithm is approximately 90% of the time needed to build and initialize a database instance, which leaves a wide area for future research.

The results of Test 2 are displayed in Table 5. The relation of Δt to the size of the database instance is depicted in Figure 4 and appears to be a sublinear function. This behavior is caused by the implementation of the DeweyID accessor, which uses a hash table. The time complexity of a search operation in a hash table is O(1), i.e., constant, but there is a small overhead of the Transaction Manager with time complexity O(n); hence the relation is not constant.

We executed Test 3 in the multiple transaction mode, i.e., the test executes transactions concurrently. The waiting time between adjacent operations was 1000 ms. We performed two measurements with different settings: the first used 20 concurrent transactions, the second 50. The results of these settings are displayed in Tables 6 and 7, and the corresponding graphs are shown in Figures 5 and 6. There is a significant result in Figure 6: the execution with 50 transactions is, surprisingly, faster with the Transaction Manager.

Table 4. Test 1 results

File Name    t1 [ms]   t2 [ms]   Δt [ms]
db001.xml    883       592       291
db005.xml    2510      239       2271
db01.xml     6042      577       5465
db02.xml     12794     2131      10663

Table 5. Test 2 results

File Name    t1 [ms]   t2 [ms]   Δt [ms]
db001.xml    182       131       51
db005.xml    217       84        133
db01.xml     325       121       204
db02.xml     380       154       226

Table 6. Test 3 results - 20 transactions

File Name    t1 [ms]   t2 [ms]   Δt [ms]
db001.xml    3762      3928      -166
db005.xml    3784      3644      140
db01.xml     4444      4726      -282
db02.xml     10293     10402     -109

Table 7. Test 3 results - 50 transactions

File Name    t1 [ms]   t2 [ms]   Δt [ms]
db001.xml    3739      4047      -308
db005.xml    5291      4626      665
db01.xml     9550      9483      67
db02.xml     29153     36494     -7341

Fig. 3. Test 1 results
Fig. 4. Test 2 results
Fig. 5. Test 3 results - 20 transactions
Fig. 6. Test 3 results - 50 transactions
This effect is caused by the inline caches of the Java Virtual Machine. Exception handling has an important impact on performance: with the Transaction Manager, many exceptions do not arise because conflicting operations are suspended.
6 Conclusions and Future Work
In this paper, we described a basic idea for measuring the performance of a transaction manager implementation. We sketched the implementation of our Transaction Manager and, as a result of our research, introduced our idea of transaction manager benchmarking. The main contribution of this paper lies in the execution and evaluation of the designed benchmark on our transaction manager.
Finally, in Section 5 we evaluated the results of our benchmark. We found that the cost of applying the DeweyID numbering has an approximately linear relation to the size of the database, but it takes almost 90% of the initialization time. One can argue that this slow-down is only a constant cost incurred at document initialization (i.e., when storing the document into the database), since the DeweyID values can be stored in the database's internal storage along with the XML document itself. This is true, but on the other hand the values may have to be re-computed for frequently updated nodes. The second experiment showed that introducing the necessary DeweyID ordering results in a speed-up for DOM operations. This is not a surprise at all; it can be regarded as a simple internal node index. The third experiment is the most interesting one. It shows that the exception handling of conflicting operations has an important impact on performance. In some cases, the concurrent execution of transactions with the Transaction Manager is surprisingly faster than without it. This behavior is caused by the inline caches of the Java Virtual Machine on one side and by the exception handling of the Transaction Manager on the other. The benchmark discussed in this paper is focused on testing just the transaction manager component; it is not meant as a complex benchmark like, for example, TPoX. Another notable side-result of this paper is the implementation of the Transaction Manager itself as a software layer over the eXist database engine. It is a stand-alone module which can be used to extend eXist with transactional processing when required.
Acknowledgement. This work has been supported by the Ministry of Education, Youth, and Sports under research program MSM 6840770014, by the GACR grant No. GA201/09/0990 "XML Data Processing", and by grant CTU0807313.
References
1. Bernstein, P., Newcomer, E.: Principles of Transaction Processing, 1st edn. Morgan Kaufmann, San Francisco (1997)
2. Chaudhri, A.B., Rashid, A., Zicari, R.: XML Data Management - Native XML and XML-Enabled Database Systems. Addison Wesley Professional, Reading (2003)
3. Date, C.J.: An Introduction to Database Systems, 6th edn. Addison-Wesley, Reading (1995)
4. Haustein, M., Härder, T.: Optimizing lock protocols for native XML processing. Data Knowl. Eng. 65(1), 147–173 (2008)
5. Haustein, M.P., Härder, T.: An efficient infrastructure for native transactional XML processing. Data Knowl. Eng. 61(3), 500–523 (2007)
6. Haustein, M.P., Härder, T., Mathis, C., Wagner, M.: DeweyIDs - the key to fine-grained management of XML documents. In: Proc. 20th Brazilian Symposium on Databases (SBBD 2005), Uberlandia, Brazil, October 2005, pp. 85–99 (2005)
7. Nicola, M., Kogan, I., Raghu, R., Gonzalez, A., Schiefer, B., Xie, K.: Transaction Processing over XML, TPoX (2008), http://tpox.sourceforge.net/TPoX_BenchmarkProposal_v1.2.pdf
8. Schmidt, A.R., Waas, F., Kersten, M.L., Florescu, D., Manolescu, I., Carey, M.J., Busse, R.: The XML Benchmark Project. Technical Report INS-R0103, CWI, Amsterdam, The Netherlands (April 2001)
9. Strnad, P., Valenta, M.: Object-oriented Implementation of Transaction Manager in CellStore Project. In: Objekty 2006, Praha, pp. 273–283 (2006)
Workshop Organizers' Message

MCIS: Shazia Sadiq, Ke Deng, Xiaofang Zhou (The University of Queensland, Australia), and Xiaochun Yang (Northeastern University, China)

WDPP: Walid G. Aref (Purdue University, USA), Alex Delis (University of Athens, Greece), Qing Liu, and Kai Xu (CSIRO ICT Centre, Australia)
Poor data quality is known to compromise the credibility and efficiency of commercial as well as public endeavours. Several developments from industry and academia have contributed significantly towards addressing the problem. These typically include analysts and practitioners who have contributed to the design of strategies and methodologies for data governance; solution architects, including software vendors, who have contributed towards appropriate system architectures that promote data integration; and data experts who have contributed to data quality problems such as duplicate detection, identification of outliers, consistency checking and many more through the use of computational techniques. The attainment of true data quality lies at the convergence of these three aspects, namely the organizational, architectural and computational. At the same time, the importance of managing data quality has increased manifold in today's global information-sharing environments, as the diversity of sources and formats and the volume of data grow. In the MCIS workshop we target data quality in the light of collaborative information systems, where data creation and ownership are increasingly difficult to establish. Collaborative settings are evident in enterprise systems, where partner/customer data may pollute enterprise databases, raising the need for data source attribution, as well as in scientific applications, where data lineage across long-running collaborative scientific processes needs to be established. Collaborative settings thus warrant a pipeline of data quality methods and techniques that commences with (source) data profiling and data cleansing, continues with methods for sustained quality, integration and linkage, and eventually provides the ability for audit and attribution. The workshop provided a forum to bring together diverse researchers and make a consolidated contribution to new and extended methods to address the challenges of data quality in collaborative settings. Six out of twelve papers were selected following a rigorous review process, with at least three program committee members reviewing each paper. The selected papers spanned the related topics, covering both empirical and theoretical aspects.
This year, a workshop on the related topic of data and process provenance (WDPP) was launched at DASFAA. Due to the topic overlap, MCIS and WDPP were merged. Two papers out of four were selected and presented at the WDPP workshop, focusing specifically on the topic of provenance as described below. Provenance is about describing the creation, modification history, ownership and generally any other aspect that may significantly influence the lifetime of data and data-sets. In many scientific environments, data-driven analyses are now performed on large volumes of data from multiple sources, yielding sophisticated methods. The manual maintenance of pertinent provenance information, such as which methods and parameters were used and who created and/or modified the data and when, is not only time consuming but also error-prone. The systematic and timely capture of this information may accurately keep track of the various stages that data go through and help precisely articulate the processes used. The provenance of data and processes is expected to help validate and reproduce experiments as well as to greatly assist in the interpretation of data-analysis outcomes. These are all crucial for knowledge discovery. While provenance plays an important role in scientific study at large and a number of systems have already furnished limited related functionality, provenance has only recently attracted the attention of computer scientists. Provenance poses many fundamental challenges, such as performance, scalability, and interoperability in various environments. The examination of data and process provenance is a multi-disciplinary activity. Among others, it involves data management, software engineering, workflow systems, information retrieval, web services, and security research. At the same time, it also requires an understanding of problems from specific scientific domains to better address the corresponding requirements. The main objective of this workshop was to bring together researchers from multiple disciplines who have confronted and dealt with provenance-related issues and to exchange ideas and experiences. Finally, we would like to acknowledge the contributions of the respective program committees as well as the DASFAA organization towards making the two workshops a success. We look forward to offering a third workshop in the series at next year's DASFAA in Japan on the topic of Data Quality.
Data Provenance Support in Relational Databases for Stored Procedures

Winly Jurnawan and Uwe Röhm
School of Information Technologies, University of Sydney, Sydney NSW 2006, Australia
[email protected],
[email protected]
Abstract. The increasing amounts of data produced by automated scientific instruments require scalable data management platforms for storing, transforming and analyzing scientific data. At the same time, it is paramount for scientific applications to keep track of the provenance information for quality control purposes and to be able to re-trace workflow steps. Relational database systems are designed to efficiently manage and analyze large data volumes, and modern extensible database systems can also host complex data transformations as stored procedures. However, the relational model does not naturally support data provenance or lineage tracking. In this paper, we focus on providing data provenance management in relational databases for stored procedures. Our approach, called PSP, leverages the XML capabilities of SQL:2003 to keep track of the lineage of the data that has been processed by any stored procedure in a relational database as part of a scientific workflow. We show how this approach can be implemented in a state-of-the-art DBMS and discuss how the captured provenance data can be efficiently queried and analyzed. Keywords: Provenance, Stored Procedure, Relational Database.
1 Introduction

Scientific research is currently experiencing a digital revolution. Computer-supported scientific instruments such as next-generation DNA sequencers or radio telescopes can conduct large experimental series automatically in a 24x7 setting and generate massive amounts of data. For instance, in bioinformatics the 1000 Genomes Project employs next-gen DNA sequencing technology that generates approximately 75 Terabyte of data weekly – in just one of three participating labs [1]. Another example is the exponential growth of the NCBI GenBank data, which grew to more than 56 billion base pairs (approx. 56 Gb) by 2005 [2]. And in physics, a prominent example is the Large Hadron Collider (LHC) project that expects data volumes to hit the Exabyte (10^18 B) scale [3]. These large-scale scientific experiments depend on a reliable and scalable production environment. For example, the 1000 Genomes project operates more than 30 next-gen DNA sequencers in parallel, producing terabytes of raw sequencing data automatically which have to be further processed, analyzed and archived. Given the high experimental data
throughput generated by various sequencer machines, supervised by many scientists and running in 24x7 operation mode, it is impossible to keep track of the experimental process manually. Traditional lab log book recording fails in such an environment because it is outperformed by the throughput of the data. Hence, computer support to keep track of these processes is needed; this support is often referred to as data provenance. Provenance, in brief, is the history of a piece of data, collected throughout its lifetime. The history refers to the processes that produced, manipulated and affected the data [4]. E-science research uses provenance information in various ways, depending on the purpose of the research and the requirements on the historical information. For instance, in human genome sequencing the amount of chemical used in the wet lab is crucial to ensure the reliability of the sequencing results. There have been several attempts to manage scientific data with relational databases and to implement scientific processes inside a DBMS using existing extensibility features, for instance dbBLAST [5] for alignment searches on large gene databases, SDSS [6] in astronomy research, and SciDB [7] for the physics community. Recently, [8] looked into supporting a whole DNA sequencing pipeline for high-throughput genomics inside a relational database using CLR-based stored procedures. These attempts indicate that large-scale experimental research, which is still mostly file-centric, will slowly move towards database-centric approaches. This phenomenon brings a new challenge to the database community, because there is no common model for capturing and representing data provenance for database-centric e-science research. In this paper, we propose a conceptual provenance model to track workflow provenance for e-science tools implemented inside a relational database. We implemented Provenance for Stored Procedures (PSP) as a proof of concept for our provenance model. It is implemented inside the relational database using CLR stored procedures, with provenance data represented in SQL/XML format. In this initial work, we focus on provenance for data which is manipulated by stored procedures. Our main contributions are:
– A conceptual provenance model for relational databases that consists of three main components: data, agent and process.
– An XML-based representation for provenance data in relational databases. The XML format is chosen to accommodate the versatility of the provenance data requirements, which change over time.
– Two variants of PSP implemented as proof of concept: Naïve PSP and Centralize PSP, which differ in their provenance data storage scheme.
– An evaluation of both approaches using dbBLAST [5] as the agent and the human genome as the data set.
The structure of the remainder of this paper is as follows. Section 2 presents other works on provenance which are related to our research. The conceptual foundation of our provenance model in relational databases is presented in Section 3. In Section 4, we present the implementation of PSP and instances of our approaches. Section 5 presents the experimental results and their analysis, and Sections 6 and 7 present our future work and conclusions.
2 Related Work

There are many existing provenance approaches which are implemented on top of various technologies such as scripting architectures, service-oriented architectures, and relational database architectures. In this section, we focus on the provenance approaches that are most closely related to relational database architectures. There are two major categories of provenance which are kept in scientific research: data flow provenance and data workflow provenance. Data flow provenance focuses on recording how or where data has been copied and moved throughout databases [9-11]. Similarly, [12] describes data flow provenance as "where provenance", which mainly keeps track of the source location(s) of the current data. Data flow provenance captures and stores information such as the source of the data, the type of operation performed to retrieve the data (i.e., copy, move, insert), who accomplished the operations, dates, and the version of the data. Data workflow provenance, on the other hand, focuses on recording the history of processes that have been applied to a particular piece of data up to its current stage. It can be considered as the recipe for producing the current state of that particular data. Workflow provenance is usually used in research experiments and explorations, where the provenance information provides firm verification and explanation of the results produced. However, scientists use provenance information in various ways depending on the purpose of the research and the requirements on the historical information; moreover, these requirements on provenance information vary over time. In the following we discuss existing works that are related to our research.

CPDB (Copy Paste Database) [9] is an implementation of the where-provenance concept which was introduced in [12]. The main focus of CPDB is on provenance for curated databases, where most of the content is copied or derived from other sources. Thus, CPDB focuses on data provenance, which records the source of the data, instead of workflow provenance, which records the series of processes or events that affect the data [9]. The actual provenance information is represented as XML data and stored separately from the actual data, in an auxiliary table. CPDB is implemented outside the relational database as a Java application. Another implementation is REDUX [13], a provenance management system which captures the provenance information for workflows built on the Windows Workflow Foundation (WinWF). REDUX stores its provenance information in an RDBMS, which supports better data management and data queries. However, it is still an implementation outside the relational database system which merely uses the relational database to store the provenance data. Another feature provided by REDUX is smart replay, which enables the user to replay what has happened to a particular piece of data before, i.e., the history of the data. There are also implementations of provenance capturing inside the database, namely DBNotes [10, 14] and Trio [15]. DBNotes is originally an implementation of annotation management for data in relational databases. The annotations of annotated data are propagated when the particular data is transformed. DBNotes takes advantage of the annotations that are propagated during transformation as the provenance trace. The authors also define provenance annotations, where all the data is annotated with its address.
Hence, as the annotations are propagated, one of the annotations contains the original address of the data. DBNotes can be categorized as where provenance: it traces the origin of data instead of the workflow [16]. Trio, on the other hand, is an implementation of provenance tracking which traces the information for view data and its transformations in a data warehouse environment through a query inversion method. Trio operates in the RDBMS environment where the data are queried, copied, moved, etc. It is a data provenance approach because it is mainly concerned with keeping track of the source or origin of a particular tuple in a view table. It uses the inversion model to automatically trace the source of the data for sets of tuples created by view queries. These inverse queries are recorded at the granularity of a tuple in a table called the Lineage table. This related work shows that there are existing provenance approaches which are implemented inside the database, but only for data flow provenance or annotation-based provenance. Even though REDUX captures workflow provenance, the actual system is still not implemented in the relational database; it only uses the relational database as data storage. We believe that implementing workflow provenance approaches purely inside the relational database is the gap that we can fill in this research area.
3 Provenance Model

In this section we present the provenance model which tracks workflow provenance in a relational database for data that is manipulated by stored procedures – written in either SQL or CLR-based (Common Language Runtime) form. In our approach we assume that all processes are carried out inside the relational database by an agent, which in our case is a stored procedure. These processes accept input data, manipulate data and generate output results. The object which is manipulated by the agent is the data, which refers to the actual/result data (records in a relational database table). It can be either data with provenance or data without any provenance attached to it. In this model, provenance data is generated by the facilitator, with contents such as the agent that performed the process, the execution time, the user who invoked the agent, the input query used in the process, etc.

3.1 Provenance Model

There are three main components in our provenance model: facilitator, data, and agent, as depicted in Fig. 1. The facilitator is the central part of our model because it facilitates the process which is carried out by the agent on the data. Since the facilitator facilitates the process, it has the authority to collect information about the process, the agent who participates in the process, and the data which also participates in the process. Below are short descriptions of each component:

− data is the actual data or records in the relational database; it refers to the records in tables or rows which are the objects processed by the agent. The data can be either the data itself or data with provenance attached to it – the provenance must be provenance which complies with our provenance model.
Fig. 1. Provenance for Stored Procedure (PSP) Model
− agent is a stored procedure in a relational database; it is an active component that processes data. The agent can be a Transact-SQL stored procedure or a CLR stored procedure. It accepts input data, manipulates data and generates output results (data). We assume that all data generated by an agent is in relational table form.

− facilitator is the central component of our model, because it facilitates the processes in which data and agent participate, and collects and generates provenance for them. The facilitator accepts the user input which invokes the agent to process the data, and while it facilitates the process it collects the provenance pertaining to the process and generates the workflow provenance on-the-fly. Finally it returns the result (the output data generated by the agent) and adds provenance data to the result in the form of an extra column called provenance (refer to Fig. 3). If the previous data contains provenance, it simply appends the new provenance to the existing provenance, under the condition that the existing provenance data complies with our provenance model. In this case, we are able to trace back the whole process history of this particular data. The output of our model is exactly the same as the result of the agent (in database table form), but one provenance column in XML format is appended to the result (refer to Fig. 3).

3.2 Provenance Data Representation

Provenance defines the value of a particular piece of data, and each scientific domain has its own interests and view on the provenance information. For example, in bioinformatics the information about data origins (where was the data derived from?) is very precious, while in the commerce area the time describing a certain transaction (which user first purchased this share?) is crucial. Therefore, provenance data vary from one domain to another, which compels us to design a provenance data representation that is flexible for every domain. We have considered two common options for data representation in a relational database: a relational table and an SQL/XML representation. The data representation in a relational table has a rigid
structure where all the columns have to be predefined, and adding an extra column may take much effort. On the other hand, XML as a semi-structured representation is more flexible. Extending the XML schema (e.g., when new information is required to be captured) does not affect the existing provenance data, which does not work quite as well with a relational table representation. To accommodate the versatility of provenance requirements, which change over time, the XML representation is the better choice for us. Although the XML representation is very flexible, we are compelled to control it: we created an XML schema to define what structure to follow and what data type should be used for a particular item. The purposes of this restriction are to ensure that correct XML data are created and to simplify provenance data retrieval. Note that the XML schema can be modified at any time to cope with changes in the provenance requirements.
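Validation against such a schema can be performed with standard APIs. The following Java sketch is illustrative only – PSP itself runs as a CLR stored procedure inside SQL Server, and the schema file name used here is hypothetical – and checks a provenance document against the XSD before it is attached to a result.

import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import java.io.File;
import java.io.StringReader;

// Validates a provenance XML fragment against a provenance XML Schema.
class ProvenanceValidator {
    public static boolean isValid(String provenanceXml, File xsdFile) {
        try {
            SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(xsdFile);
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new StringReader(provenanceXml)));
            return true;
        } catch (Exception e) {   // SAXException or IOException: document is invalid
            return false;
        }
    }

    public static void main(String[] args) {
        String xml = "<provenance><process><username>dbblast</username></process></provenance>";
        // "provenance.xsd" is a hypothetical schema file name.
        System.out.println(isValid(xml, new File("provenance.xsd")));
    }
}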
Fig. 2. The Sample of PSP Provenance Data with two nodes
3.3 Provenance Mapping Approach

The provenance data which is generated by PSP should be mapped to the result data returned by the agent; otherwise it would void the purpose of provenance. In this paper we present two provenance mapping approaches, the Naïve mapping approach and the Centralize mapping approach, which map the result data to its provenance.

3.3.1 Naïve Provenance Mapping Approach

The Naïve provenance mapping approach is a straightforward mapping, because it simply attaches the provenance data at the end of each tuple. The provenance data is in XML format and is attached by adding an extra column (provenance) to the result table. Fig. 3 depicts the series of data returned by the agent with the additional provenance column attached to it; GI_NUM, MSP_que_s, Score, and compliment are the columns of the result returned by the agent dbBLAST. The Naïve provenance mapping approach simply attaches the provenance as a new column (provenance) to this result.
3.3.2 Centralize Provenance Mapping Approach

The Centralize mapping approach is an optimization of the Naïve approach which does not store the provenance in such a tightly coupled manner. Instead of mapping the provenance data by adding an extra provenance column to the result data, it stores the provenance data in a central table (called ProvenanceSystem) and uses a unique ID to map between the provenance data in the central table and the result data returned by the agent. Fig. 4 depicts the series of data returned by the agent and the ProvenanceSystem table, which are linked by the unique ID. This mapping approach reduces the redundancy of provenance data if there are results which share common provenance. For instance, in Fig. 4 the results with GI_NUM 2230-2231 share the same provenance, which is mapped by ID 3.

3.4 Provenance with Series of Processes

In many cases, the data is processed in a workflow pipeline which consists of multiple processes. In this paper, we assume that all the processes are carried out sequentially in the relational database. Our provenance model handles multiple processes by simply attaching a new XML node to the existing provenance (in XML format). One node represents one process, and the last node in the XML document represents the last process which affected the respective data. Fig. 2 depicts provenance data which contains two processes; in this manner we can record all the processes that affected the data by adding new XML nodes to the existing provenance data. Hence, we are able to trace back all the process histories which have affected a particular piece of data.
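As a concrete illustration of the Centralize mapping of Section 3.3.2, retrieving the provenance for a result row amounts to a join over the shared ID. The following JDBC sketch assumes table and column names (BlastResult, ProvenanceSystem, prov_id) based on the description above; they are not the exact PSP schema, and the connection string is a placeholder.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Sketch: look up the shared provenance record for one dbBLAST result row.
class CentralizedProvenanceLookup {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:sqlserver://localhost;databaseName=psp");
             PreparedStatement ps = con.prepareStatement(
                 "SELECT p.provenance " +
                 "FROM BlastResult r JOIN ProvenanceSystem p ON r.prov_id = p.id " +
                 "WHERE r.GI_NUM = ?")) {
            ps.setInt(1, 2230);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));   // the provenance XML document
                }
            }
        }
    }
}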
[Figure content: example dbBLAST result rows with an appended provenance XML column; each provenance document contains <process> elements with <username>, <starttime>, <stoptime>, the executed statement and the agent (e.g. blastn).]
Fig. 3. The Output Data Provenance for Stored Procedure (PSP) with naïve approach
3.5 Provenance Query

Provenance information is useful only when it can be queried and can answer user questions about the data it describes. We define two major categories for making use of the data together with its provenance: queries of the provenance data based on the actual data, and inverted queries which retrieve the actual data based on the provenance data.
3.5.1 Query of Provenance Data (Based on the Result Data)

In this category, the user knows the information of an actual data item and intends to view its provenance. Consider an example: a scientist is reading a dbBLAST result and would like to find out more about the record with GI_NUM = 2203 – which version of dbBLAST generated this result, when it was generated and by whom. In this case it is a straightforward query: the scientist filters out the desired record, in this case the record with GI_NUM = 2203, and then queries its provenance data. Our provenance data can also handle queries such as: what are the processes that brought this data to its current stage? This basically lists all the processes recorded in the provenance data. Queries one to six in Table 1 are queries that return provenance data.

3.5.2 Query Data from Provenance Data

In this category, we would like to filter the actual or result data based on the available provenance data. Basically, we have to scan through all the provenance data and filter the actual data. Because it requires scanning all the existing provenance data, this takes much more time, especially when the provenance data grows large. Consider an example: a scientist would like to list the data that was produced by dbBLAST version 1.0 from 12 Dec 2008 to 20 Dec 2008. In this case, we have to scan all the provenance data, filter out the provenance data which matches the query (dbBLAST version 1.0, 12 Dec 2008 – 20 Dec 2008), and then display the corresponding actual data.
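Both query categories can be expressed over the PSP output with SQL/XML. The JDBC sketch below is illustrative: the table and column names and the provenance element names are assumptions based on the simplified examples in this paper, and the exist() predicate uses SQL Server's XML method syntax.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

class ProvenanceQueries {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:sqlserver://localhost;databaseName=psp")) {

            // (a) Forward query: provenance of a known result row (e.g. GI_NUM = 2203).
            try (PreparedStatement ps = con.prepareStatement(
                     "SELECT provenance FROM BlastResult WHERE GI_NUM = ?")) {
                ps.setInt(1, 2203);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) System.out.println(rs.getString(1));
                }
            }

            // (b) Inverted query: result rows whose provenance records a given agent.
            //     exist() evaluates an XQuery predicate over the XML column.
            try (PreparedStatement ps = con.prepareStatement(
                     "SELECT GI_NUM, Score FROM BlastResult " +
                     "WHERE provenance.exist('/provenance/process[agent=\"blastn\"]') = 1");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) System.out.println(rs.getInt(1) + " " + rs.getInt(2));
            }
        }
    }
}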
Fig. 4. The Output Data Provenance for Stored Procedure (PSP) with Centralize approach
4 Implementation

We implemented PSP, which keeps the provenance of processes that are carried out by stored procedures. In order to demonstrate the usability and scalability of our approach, we use a real-life bioinformatics application, dbBLAST [5] – a basic local alignment tool which is implemented in SQL Server. We further use the human genome from NCBI GenBank [17] as the dataset for the dbBLAST application. PSP is implemented as a CLR stored procedure that accepts an SQL query that invokes an agent (stored procedure) as an argument.
4.1 Overview

PSP is a stored procedure that accepts a user query, executes the query, collects and generates the provenance data, and then returns the results with additional provenance data in XML format. Since PSP is a stored procedure, it can simply be executed in an SQL command line interface or SQL query browser. In order to use the provenance function, users simply invoke the PSP stored procedure and pass the query (e.g., a dbBLAST query) as its argument. Refer to the following queries:

Semantics of a PSP query:
  PSP 'user query / dbBLAST query';                                                   (1)

A PSP query with a dbBLAST query:
  PSP 'DECLARE @q VARCHAR(max); SET @q = ''gcttataaa'';
       SELECT * FROM dbo.blastn(convert(varbinary(max), @q), 1, -3, 0, 0, 0);';       (2)
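From a client application, PSP can be invoked like any other stored procedure. The following JDBC sketch wraps query (2) in a call to PSP; the connection string is a placeholder and the result handling is simplified.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

// Sketch: invoking the PSP stored procedure with a dbBLAST query as its argument.
class PspClient {
    public static void main(String[] args) throws Exception {
        String dbblastQuery =
            "DECLARE @q VARCHAR(max); SET @q = 'gcttataaa'; " +
            "SELECT * FROM dbo.blastn(convert(varbinary(max), @q), 1, -3, 0, 0, 0);";

        try (Connection con = DriverManager.getConnection("jdbc:sqlserver://localhost;databaseName=psp");
             CallableStatement cs = con.prepareCall("{call PSP(?)}")) {
            cs.setString(1, dbblastQuery);
            try (ResultSet rs = cs.executeQuery()) {
                int cols = rs.getMetaData().getColumnCount();   // dbBLAST columns + provenance
                while (rs.next()) {
                    System.out.println(rs.getString(cols));     // the appended provenance column
                }
            }
        }
    }
}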
Since PSP acts as an SQL query, it allows flexibility in its use: it can be embedded in a program or script. Moreover, the result of PSP is basically the result of the original query (in this case the dbBLAST query), which is in table form with a provenance column appended to it. Hence, apart from the normal relational table display, no special settings are required to display the result (refer to Fig. 3 for Naïve PSP and Fig. 4 for Centralize PSP). We use the SQL/XML data type to represent the provenance data, which is supported by the DBMS through the SQL:2003 standard. The XML Schema definition in PSP complies with the W3C standard, which simplifies provenance queries with XQuery.

4.2 Architecture of PSP

The architecture of PSP consists of three main components: the provenance monitoring component, the provenance generator component and the provenance integration component. Fig. 5 shows the architecture of the PSP implementation. The provenance monitoring component is the only component which interacts with the agent (stored procedure) in the DBMS. Hence it is responsible for accepting the user query and invoking the agent, monitoring the whole process carried out by the agent, and finally fetching the results returned by the agent. The provenance information gained from the monitoring process is later passed to the provenance generator component, and the result returned by the agent is passed to the provenance integration component. The provenance generator component is only responsible for generating the provenance data in XML format. It generates the provenance data based on the defined XML Schema and finally passes it to the provenance integration component. In the current implementation we still have to program the XML generation manually; automatic generation is not yet supported. The provenance integration component is the last component in PSP and does the concluding work: it integrates the provenance data in XML format with the result which was returned by the agent. This integration process makes use of one of the provenance mapping approaches, i.e., the Naïve or the Centralize provenance mapping approach.
Fig. 5. The Architecture of PSP
4.3 Provenance Data Generation

There are four main steps taken by PSP to generate and display the provenance data. We assume that all results returned by the agent are in relational table form, and PSP maps the provenance to the result with the Naïve mapping approach or the Centralize mapping approach, depending on which approach is used. PSP takes the following steps:

Step 1: Query input. In this step PSP accepts the user query and activates the first component, the provenance monitoring component. At the same time the user query is passed to the provenance monitoring component.

Step 2: Provenance monitoring. This step involves three main activities: agent invocation, provenance information collection, and result fetching.

Step 3: Provenance generation. The third step involves the provenance generator component. It takes the provenance information passed by the previous component and, based on this information, generates the provenance data in XML format.

Step 4: Provenance integration. The provenance integration process is the last process in PSP; it integrates the result returned by the agent with the provenance data generated by the provenance generator component. Two variants of mapping approaches are implemented in this prototype: the Naïve mapping approach and the Centralize mapping approach. After the result is mapped to its provenance data by the Naïve or Centralize approach, the integrated result is returned by PSP. Fig. 3 shows the result of PSP with the Naïve mapping approach and Fig. 4 shows the result of PSP with the Centralize approach.

4.4 Provenance with Series of Processes

As mentioned earlier, PSP supports provenance for data with a series of processes as well as provenance for intermediate results. In usual practice, intermediate results are not discarded, in order to add credibility to the end result or for troubleshooting purposes. Since PSP works very closely with the relational database, it is also able to handle the provenance for all these intermediate processes as long as they take place in the relational database. For any result which is produced by one of these intermediate processes, it adds provenance data to it. If provenance data already exists, our provenance
framework simply appends a new XML node with the new provenance data to the existing provenance data. Hence, the provenance data of a result contains the complete history of the processes that have been applied to the data up to its current stage. There are two requirements for PSP to escalate these provenance processes: first, the previous provenance data must be provenance created by PSP; secondly, the agent must return the result with the provenance column and its data, or return the provenance unique ID, because PSP identifies the existing provenance based on these two pieces of information. If these requirements are fulfilled, PSP is able to escalate all the processes which have been applied to the data. For instance, for a piece of data which has gone through five different intermediate processes, the provenance data of the final result will contain the provenance data of the four previous stages plus the provenance of the fifth stage, while the intermediate result of the third process contains only the provenance data of stages 1, 2 and 3. The latest piece of provenance is appended to the end of the existing provenance data, so the bottom node is always the provenance of the latest process. In this manner we can keep track of the whole provenance of a piece of data.
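Appending a new process node to an existing provenance document is a small DOM operation. The following Java sketch illustrates the idea only – PSP performs this step inside a CLR stored procedure, and the element names here follow the simplified samples of Figs. 2 and 3 rather than the full PSP schema.

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import java.io.StringReader;
import java.io.StringWriter;

// Sketch: append one <process> node (the latest step) to an existing provenance document.
class ProvenanceAppender {
    static String appendProcess(String existingProvenance, String agent, String user) throws Exception {
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new InputSource(new StringReader(existingProvenance)));

        Element process = doc.createElement("process");
        Element agentEl = doc.createElement("agent");
        agentEl.setTextContent(agent);
        Element userEl = doc.createElement("username");
        userEl.setTextContent(user);
        process.appendChild(agentEl);
        process.appendChild(userEl);
        doc.getDocumentElement().appendChild(process);   // last node = latest process

        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        StringWriter out = new StringWriter();
        t.transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String prov = "<provenance><process><agent>blastn</agent></process></provenance>";
        System.out.println(appendProcess(prov, "ClearLog", "dbblast"));
    }
}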
5 Evaluation

Our experiments focus on the scalability of PSP and the usability of the PSP results. We measured the execution time and the storage space required by dbBLAST executions with and without PSP, i.e., PSP with the Naïve mapping approach, PSP with the Centralize mapping approach, and executions without PSP at all. All experiments were conducted on the same machine with the same environment setup.

5.1 Experimental Setup

The following performance and scalability tests were performed on a standard PC server with a 3 GHz Pentium 4 CPU (with hyper-threading enabled) and 2 GB RAM under Microsoft Windows Server 2003 Standard Edition. All experiments were carried out sequentially, and the execution time was measured over a "hot" cache, i.e., running each test three times before reporting the average runtime over seven subsequent runs. Our test queries are based on the dbBLAST implementation, which was loaded with 10% of the whole human genome downloaded from NCBI GenBank [17], about 36 million bp in total. In the experiments, we run dbBLAST queries as input and restrict the input sequence to at most 3500 bp, because the stored procedure only takes up to 4000 characters as an input argument.

5.2 Results Analysis

Fig. 6 shows the execution time in milliseconds (log scale) of dbBLAST with Naïve PSP, Centralize PSP and no PSP, as a function of the number of rows returned by dbBLAST. The Naïve PSP has the worst performance, at about 7.4 times the Centralize PSP's average execution time. On the other hand, the performance of the Centralize PSP is only about 1.14 times slower than the performance of dbBLAST without provenance.
[Chart: Performance of PSP with dbBLAST – execution time in milliseconds (log scale) vs. returned rows; series: No Provenance, Naïve Approach, Centralize Approach]
Fig. 6. The performance of dbBLAST with Naïve PSP, Centralize PSP and no PSP

[Chart: Storage Space Usage – storage (log scale) vs. number of records; series: Naïve Provenance, Optimized Provenance, No Provenance]
Fig. 7. Storage Space required for result of dbBLAST with Naïve PSP, Centralize PSP and no PSP
It is clear that Naïve PSP has the slowest performance compared to the others, because Naïve PSP appends a copy of the provenance data in XML format to every row of the result returned by dbBLAST. With Centralize PSP the overhead is quite small, because PSP only needs to write the provenance XML data once and assign a unique ID to every row returned by dbBLAST. Fig. 7 shows the storage space in KB (log scale) required if the results of dbBLAST are stored in a table. The storage space required by Naïve PSP is very high, about 100 times the storage space needed by Centralize PSP, while the storage space needed by the Centralize (optimized) approach for the same data is only about 1.14 times more than the storage space of the normal dbBLAST result (without provenance). The storage overhead of Naïve PSP is huge because for every row returned by dbBLAST it appends the provenance data in XML (refer to Fig. 3). Since the provenance data has a significant size, redundant records of XML
Table 1. List of the queries that have been tested on the output of PSP

No  Provenance Query                                                                                   Status
1   Find all the provenance of a particular data item                                                  answered
2   Find the process queries that led the particular data to this stage                                answered
3   Find the process queries that led the particular data to this stage, where the agent is blastn     answered
4   Find the process queries that led the particular data to this stage, excluding the blastn process  answered
5   Find the details of stages 3 and 4 that led the particular data to this stage                      answered
6   Find the version of the alignment that produced this data                                          answered
7   Find the alignment output produced by blastn version 1 in a certain time                           answered
8   Find the output of alignments between particular dates                                             answered
9   Find the output of alignments between particular dates by a particular user                        answered
10  Find the similar output produced by blastn version 1.0 and blastn version 2.0                      answered
can easily occupy a large amount of storage. Centralize PSP, in contrast, takes advantage of storing the provenance data only once per process and only assigns an ID (an integer) to every row returned by dbBLAST. Hence, Centralize PSP can dramatically save space by keeping one record of provenance data with redundant IDs instead of redundant provenance data – note that the provenance data is in XML format, which is much larger than an integer. As a result, Centralize PSP has a storage requirement comparable to the result without any provenance.

5.3 Query Analysis

We also conducted experiments with provenance queries to test how well our provenance representation supports the retrieval of provenance information. Since the actual data is stored in a relational table and the provenance data is XML, we have to combine SQL queries and XQuery to get the desired results. The result set consists of 16652 rows of data, with four processes contributing to the results. Naïve PSP stores 16652 records of provenance and Centralize PSP stores 3 records of provenance data; all the provenance data contains at most four nodes (four processes). The queries are designed based on possible real-life scenarios of dbBLAST result usage. The first six queries select provenance data based on the result data, and the remaining four are inverted queries which query the result data based on the provenance data. The first six queries have lower execution times because they simply select the provenance data, while the rest are inverted queries which have to scan through all the provenance data and filter the result data. The queries against the result of Centralize PSP have better execution times because they only access a small number of provenance records; Naïve PSP, on the other hand, has many redundant provenance records, and accessing more provenance data increases the execution time. Refer to Table 1 for the list of the queries and Fig. 8 for the performance of each query in milliseconds (log scale). In summary, we have conducted experiments to test the performance of PSP, the storage space requirements and the usability of the results and provenance generated by PSP. The performance of Naïve PSP is slow, but with Centralize PSP we can dramatically increase the performance and reduce the required storage space. Although Centralize PSP still incurs a little overhead compared to a normal query (without provenance), we believe this is an acceptable trade-off for having provenance data.
[Chart: Performance of Provenance Query – execution time in milliseconds (log scale) for query1 to query10; series: Naive PSP, Centralize PSP]
Fig. 8. The execution times of queries to query the result of Naïve PSP and Centralize PSP
6 Future Work

This is only our initial work on provenance, and there are some obvious extensions. We are currently working on how to represent the provenance of a result which has multiple sources of provenance (multiple parents), and on how to represent the provenance of data with multiple children in Centralize PSP. Our provenance data reveals the history of a piece of data and is thus able to provide the recipe that produced the current data. Smart replay has been implemented in [13] using a full relational table representation; providing a smart replay function for XML-based provenance with a visual approach will also be one of our next challenges.
7 Conclusions

We presented an XML-based data provenance model for relational databases that captures the provenance of data that is processed by stored procedures. We have implemented our provenance model, called Provenance for Stored Procedures (PSP), in a relational database using the built-in XML capabilities and CLR-based stored procedures. Our experimental results show that PSP with the Centralize provenance mapping performs 7.4 times faster than the Naïve provenance mapping and requires much less storage space. We have also demonstrated the usability of our PSP approaches by querying the results of PSP with both the Naïve and Centralize approaches using a combination of SQL and XQuery. The performance of simple queries which select the provenance data is fast, while the inverted queries which select the result data based on provenance data are slower because they have to scan all available provenance data. The results of these experiments show that PSP with the Centralize provenance mapping
approach has only a slight overhead compared to normal query execution (without PSP). We believe that this small overhead during querying is an acceptable trade-off for supporting data provenance.
References
[1] 1000 Genomes: "1000 Genomes", 16/10/2008 (2008)
[2] NCBI: "Growth of GenBank," vol. 2008 (2006)
[3] Harvey, B.N., Mark, H.E., John, A.O.: Data-intensive e-science frontier research. Commun. ACM 46, 68–77 (2003)
[4] Yogesh, L.S., Beth, P., Dennis, G.: A survey of data provenance in e-science. SIGMOD Rec. 34, 31–36 (2005)
[5] Röhm, U., Diep, T.-M.: How to BLAST your database — A study of stored procedures for BLAST searches. In: Li Lee, M., Tan, K.-L., Wuwongse, V. (eds.) DASFAA 2006. LNCS, vol. 3882, pp. 807–816. Springer, Heidelberg (2006)
[6] Alexander, S.S., Jim, G., Ani, R.T., Peter, Z.K., Tanu, M., Jordan, R., Christopher, S., Jan, v.: The SDSS SkyServer: public access to the Sloan Digital Sky Server data. In: Proceedings of the 2002 ACM SIGMOD International Conference on Management of Data, Madison, Wisconsin. ACM, New York (2002)
[7] Stonebraker, M., Becla, J., Lim, K., Maier, D., Ratzesberger, O., Zdonik, S.: Requirements for Science Data Bases and SciDB. In: CIDR, Asilomar, CA, USA (2009)
[8] Röhm, U., Blakeley, J.A.: Data Management for High-Throughput Genomics. In: CIDR, Asilomar, CA, USA (2009)
[9] Peter, B., Adriane, C., James, C.: Provenance management in curated databases. In: Proceedings of the 2006 ACM SIGMOD International Conference on Management of Data, Chicago, IL, USA. ACM, New York (2006)
[10] Laura, C., Wang-Chiew, T., Gaurav, V.: DBNotes: a post-it system for relational databases based on provenance. In: Proceedings of the 2005 ACM SIGMOD International Conference on Management of Data, Baltimore, Maryland. ACM, New York (2005)
[11] Cui, Y., Widom, J.: Lineage tracing for general data warehouse transformations. The VLDB Journal 12, 41–58 (2003)
[12] Buneman, P., Khanna, S., Wang-Chiew, T.: Why and where: A characterization of data provenance. In: Van den Bussche, J., Vianu, V. (eds.) ICDT 2001. LNCS, vol. 1973, pp. 316–330. Springer, Heidelberg (2000)
[13] Roger, S.B., Luciano, A.D.: Automatic capture and efficient storage of e-Science experiment provenance. Concurr. Comput.: Pract. Exper. 20, 419–429 (2008)
[14] Deepavali, B., Laura, C., Wang-Chiew, T., Gaurav, V.: An annotation management system for relational databases. In: Proceedings of the Thirtieth International Conference on Very Large Data Bases, Toronto, Canada, vol. 30. VLDB Endowment (2004)
[15] Benjelloun, O., Das Sarma, A., Halevy, A., Theobald, M., Widom, J.: Databases with uncertainty and lineage. The VLDB Journal 17, 243–264 (2008)
[16] Peter, B., James, C., Wang-Chiew, T., Stijn, V.: Curated databases. In: Proceedings of the Twenty-Seventh ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, Vancouver, Canada. ACM, New York (2008)
[17] Benson, D.A., Karsch-Mizrachi, I., Lipman, D.J., Ostell, J., Wheeler, D.L.: GenBank. Nucl. Acids Res. 35, D21–D25 (2007)
A Vision and Agenda for Theory Provenance in Scientific Publishing

Ian Wood¹, J. Walter Larson¹,²,³, and Henry Gardner¹

¹ Department of Computer Science, The Australian National University, Canberra ACT 0200, Australia
² Computation Institute, University of Chicago, Chicago, IL, USA
³ Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439, USA
Abstract. Primary motivations for effective data and process provenance in science are to facilitate validation and reproduction of experiments and to assist in the interpretation of data-analysis outcomes. Central to both these aims is an understanding of the ideas and hypotheses that the data supports, and how those ideas fit into the wider scientific context. Such knowledge consists of the collection of relevant previous ideas and experiments from the body of scientific knowledge, or, more specifically, how those ideas and hypotheses evolved, the steps in that evolution, and the experiments and results used to support those steps. This information we term the provenance of ideas or theory provenance. We propose an integrated approach to scientific knowledge management, combining data, process and theory provenance, providing full transparency for effective verification and review. Keywords: provenance, theory provenance, knowledge representation, scientific publishing, grid, semantic grid, semantic citation, semantic network.
1 Introduction
Data provenance has been described as a record of the computational steps that transform raw experimental data into that which is published [1]. Data provenance provides transparency in data acquisition and processing, allowing those who use the data to determine its validity and to verify its accuracy [2]. It helps identify the significance and meaning of derived data, which can be obscured by complex automated workflows, not only from those reading published work, but also from those who created the data [3]. Two surveys of data provenance practices in eScience have been compiled which report that, though provenance issues are being addressed, there is still much work to be done, in particular on standards to allow the portability of provenance metadata [4,5]. Zhao et al. recognised the need to identify theoretical context within data provenance [6]. The provenance of data and process provides, in essence, a history of how the data was produced and manipulated. The provenance of ideas provides a history
of how ideas evolve and how they relate to preceding enquiries. Data provenance provides the concrete history of the development of the data, whereas theory provenance provides the abstract history of the ideas that relate to the data. The provenance of ideas provides context for the data, helping us interpret its meaning and to understand the evolution of the experimental techniques used. Figure 1 illustrates this relationship during the development of new theories.
[Figure: diagram relating Previous Theory, Previous Experiments, Preliminary Experiments, Developing Ideas, Theory Formulation, Decisive Experiments, Theory Development, Data and Experiments, and Publishing]
Fig. 1. Science Lifecycle
Our vision is that all the steps in this figure can be represented as a semantic network and exposed to diverse automated analyses, thus improving knowledge utility and refining knowledge services such as knowledge discovery, validation and attribution. There have been concerns expressed in the scientific community about the lack of provenance in published work. For example, new algorithms published in computational science often lack sufficient detail to reproduce the published
results [7]. Furthermore, most data provenance approaches tag data transformations at the level of “name of software package,” “version number,” “platform and build configuration on which the code was executed,” but not at the level of “algorithm(s) implemented in the code,” “underlying assumptions under which the code is valid,” etc... These latter aspects speak to how much we can trust the results of a computational workflow. We believe these concerns are legitimate and require a solution that in fact complements data provenance. Specifically, we identify this missing element of scientific process provenance as the provenance of the ideas implemented by the data transformations applied. To track the evolution of ideas, we must delve into records of their development and presentation. These records could, for example, take the form of laboratory notes and records of collaborative events [8,9,10] or published scientific literature. The provenance of ideas is exposed when these records contain references indicating the relationships between ideas, such as “extends”, “depends on” or perhaps “refutes”. We refer to such references as semantic citations, extending current citation techniques. Zhuge has made a study of the types of relationships that might exist [11]. The resultant semantic web of knowledge, or knowledge grid [11] could be readily analysed to reveal the provenance of an idea. In Section 2 we further develop the idea of theory provenance in the context of scientific publishing and outline some initial requirements and implementation paths. In Section 3 we discuss theory provenance with refined knowledge representation and automated reasoning, introducing existing scientific knowledge representation initiatives. We conclude with a discussion of the need for knowledge management standards for metadata in scientific publishing and outline the core aspects that such a standard should include.
2 Theory Provenance and Scientific Publishing
Theory provenance provides a history of how ideas evolve and how they relate to those that precede them. In this section we outline how theory provenance can be integrated with and facilitated by existing e-science and knowledge management technologies. We suggest an incremental approach with a highly flexible standard for knowledge representation and linking and discuss research directions for easy implementation in publishing and knowledge development contexts.

2.1 Scientific Publishing
In order to access the provenance of ideas in published literature, we must delve into the body of published science and identify which previous results relate to the ideas under consideration, and the manner of that relationship. Advances in scientific publishing have greatly simplified that task. The majority of scientific journals and conferences (possibly all) now publish their material in digital form. Repositories of scientific articles often have associated knowledge management services such as keyword searches and subject categorisation (eg: Springer, IEEE and ACM) and third party search and categorisation services such as
CiteSeer [12], Google Scholar [13] and the ISI Web of Knowledge [14] provide one-stop portals to much of the world's published science. Despite these advances, our scientific publishing techniques can be said to retain the spirit of being “on paper”, albeit digital paper, largely failing to utilise many of the powerful knowledge management techniques that are available [15]. The underlying format remains solely text based, with unsophisticated citation techniques and little scope for directly referencing supporting data¹. Though keyword indexing and full text searches provide some ability to link related articles, they currently fall well short of tracking ideas through the body of scientific literature. A key concept needed here is the idea of semantic citation.

2.2 Semantic Citation
Citations are the vehicle for capturing provenance in scientific publishing, but current citation techniques are coarse. Without reading the text (a task difficult for machines and onerous for humans), a citation tells you nothing about which specific concepts are related, and nothing about the relationship between those concepts or results. Currently, this information can only be obtained by reading both papers and considering the context in which the citations appear. A citation could contain information about the nature of the relationship between publications. Further, if the key concepts and arguments in a paper were available in a machine-readable form (see Section 3), a citation could indicate which specific concepts are related. For example, it could indicate that a concept in the new paper assumes the validity or truth of one in the earlier one, or conversely that the new concept contradicts the earlier concept, or it could simply indicate that the new concept is distinct from or is a refinement or subconcept of the earlier one. One important semantic role is an indication that the cited concept and the citing concept are the same. We will refer to citations with semantic information about their relationship as semantic citations. A related idea was presented by Carr et al. in [16]. They present a service that semantically links documents that contain similar concepts, utilising existing document metadata. In essence, they are creating something similar to semantic citations between existing documents on the web. In practical terms, given a format for representing scientific knowledge, semantic citations should be straightforward to implement. Analogous to URIs (Uniform Resource Identifiers [17]) and DOIs (Digital Object Identifiers [18]), elements of represented knowledge (data, theories, entities, etc.) could be given unique identifiers which could be quoted in the citing document or its metadata. The imposition on scientists to annotate their citations would not be significantly greater than the current citation model. In addition, modern data mining techniques could be applied to existing publications to identify the semantic role of citations. To the best of our knowledge this has not been attempted.
¹ This publishing model was first used in 1665 when the first editions of “Journal des sçavans” and “Philosophical Transactions of the Royal Society” appeared and has not changed significantly in the ensuing 350 years.
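To make the notion of uniquely identified knowledge elements connected by semantic citations more concrete, the following Python sketch shows one possible, purely illustrative encoding. The identifier scheme, the relation vocabulary and all names are our own assumptions and are not part of any existing or proposed standard.

```python
from dataclasses import dataclass

# Hypothetical relation vocabulary for semantic citations (assumed, not standardized).
RELATIONS = {"extends", "depends_on", "refutes", "refines", "same_as"}

@dataclass(frozen=True)
class Concept:
    concept_id: str   # stable identifier, analogous to a URI/DOI for a knowledge element
    label: str

@dataclass(frozen=True)
class SemanticCitation:
    source: str       # concept_id in the citing document
    target: str       # concept_id in the cited document
    relation: str     # one of RELATIONS

    def __post_init__(self):
        if self.relation not in RELATIONS:
            raise ValueError(f"unknown relation: {self.relation}")

# A tiny knowledge network: two (invented) concepts and one typed link between them.
concepts = {
    "paperA#alg1": Concept("paperA#alg1", "refined data-processing algorithm"),
    "paperB#alg0": Concept("paperB#alg0", "original algorithm"),
}
citations = [SemanticCitation("paperA#alg1", "paperB#alg0", "extends")]

def provenance_of(concept_id, citations):
    """Follow lineage-type links backwards to collect the provenance of an idea.

    Assumes an acyclic citation graph, as in ordinary scholarly citation.
    """
    lineage, frontier = [], [concept_id]
    while frontier:
        current = frontier.pop()
        for c in citations:
            if c.source == current and c.relation in {"extends", "depends_on", "refines"}:
                lineage.append(c.target)
                frontier.append(c.target)
    return lineage

print(provenance_of("paperA#alg1", citations))  # ['paperB#alg0']
```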
2.3 Provenance and the Development of Ideas
The process of developing new ideas frequently entails a collection of notes, experimental results and other records that can be semantically linked in a similar way to published results. Electronic laboratory notebooks and other collaborative tools (see, for example, [8,9,10]) incorporate knowledge management services for annotating and organising records of experiments, meetings and other collaborative events. Used appropriately, these tools could track the provenance of ideas as they develop during scientific collaborations. A simple, flexible framework for representing semantic citations and knowledge could be implemented for such systems. Scientists could add citations to published papers to these notes as they work. This information would then be readily available when authoring a new paper, and the represented knowledge could be incorporated into the paper, providing a semantic representation of the published work with little extra effort.

2.4 Trust and Validity
We have discussed theory provenance both as ideas evolve during the development of new hypotheses and theorems and within the body of peer-reviewed, published science. It would be useful to give such contexts different levels of trust and validity. Other levels may be desirable as well, for example for pre-prints that have not yet undergone peer review but which the authors consider to be of publishable quality. There is scope for adapted peer-review structures, utilising the opinions of a wider community of scientists with relevant expertise in a similar way to collaborative tagging. A standard for theory provenance should include scope for levels of verification, validity and trust.
3 Granular Representation and Automated Reasoning
A scientific publication often contains several key concepts, experimental techniques and other elements. To maximise the effectiveness of semantic citations, these sub-concepts could also be represented in a machine-readable way. A citation could then point to and from specific semantic elements. The granularity of the represented concepts could, in principle, be very fine, including individual steps in the flow of logic within a publication. This could lead to automatic or semi-automatic verification of the logical conclusions presented. Compiled libraries of formally represented mathematics and their attendant theorem provers/checkers such as MIZAR [19] and IsarMathLib [20] faithfully represent theory dependencies and supporting arguments, and as such they provide a substantial step toward granular theory provenance for science.

3.1 Knowledge Representation in Science
As the quantity and complexity of scientific data and knowledge has increased, new technologies have been developed to organise and effectively utilise it. In
many areas of science, substantial knowledge bases have been created or are in the process of creation. These knowledge bases are primarily in the form of description logic ontologies. Their application has led to sophisticated data retrieval and resource management systems, and reference ontologies [21,22,23,24]. Another application of ontologies in science is data integration—well-developed ontologies, made in collaboration with relevant expert communities, serve as a standard form of annotation and allow diverse data formats to be utilised interoperably. There are several projects working on these issues [25,26]. Numerous platforms and methodologies for ontology construction and maintenance have been developed [22,27,28,29]. Significant work in ontology development for science has been in association with the construction of semantic grids. The term Semantic Grid was coined by De Roure, Jennings and Shadbolt to describe “the application of Semantic Web technologies both on and in the Grid” [30]. The effectiveness and efficiency of Grid services is substantially enhanced by this approach, particularly when the Grid contains large and complex resources [31,32]. This can also be seen in research on workflow automation [33] and resource discovery [34,35]. Virtual observatories [36,37], though they do not claim to be semantic grids, satisfy Foster’s grid criteria [38] and apply Semantic Web technologies. Semantic Grids may be the natural platform for our vision of semantic publishing. Grids federate resources to create virtual organisations; thus they may be employed by a group of scientists to define scope for their fields of study. Grids control access to resources, allowing differing levels of authorisation; thus they allow some users to read and write resources (such as ontologies or other semantic descriptive data), while others may merely read. Semantic Grids provide a framework for semantic annotation of resources and services for workflow automation and resource discovery. Grid protocols use open standards, reducing barriers to integration of knowledge repositories. Specialised scientific markup languages have been driven by the need to extend HTML to perform typesetting of technical information such as mathematical formulae, by the need for standard information and data exchange formats, and the need for standard formats for automated processing. Numerous markup languages supporting science and eResearch exist or are under development [39]. In general, these are not description logic based, and many are too expressive for effective automated reasoning, as we shall see in the following section. For theory provenance at any level to become an effective tool for science, there is a need for flexible standards for representing scientific concepts and the links between them. Ideally, such standards should be able to incorporate widely differing representation formats for scientific knowledge and information (such as the various markup languages and DL ontologies above) as well as sophisticated conceptual links such as those used in the mathematical libraries mentioned above.

3.2 Description Logics
The study of knowledge representation for artificial intelligence led to questions about the tractability and computational complexity of different systems.
Theoretical attempts by logicians to answer these questions led to a deeper understanding of the tractability of automated reasoning and the development of a family of languages for representing knowledge. These languages are called description logics (DLs). Seminal work on the computational complexity of DLs was done by Hector J. Levesque and Ronald J. Brachman [40], who recognized that there is a tradeoff between the expressive power of a language for knowledge representation and the difficulty of reasoning with the resultant knowledge bases. A fundamental result is that languages entailing first-order logic result in potentially unbounded reasoning operations. In a sufficiently expressive system, there will always be questions for which an automatic reasoner will never find an answer. Mathematics with the real numbers is such a system. Description logics form the basis of many automated reasoning applications and systems today. In particular, the Web Ontology Language (OWL—the W3C standard ontology language for the Semantic Web [41]) is based on a description logic. Semantic Web technologies utilising automated reasoning have been effectively applied to enhance qualities of service and efficiencies in semantic grids and other science applications. In the previous section we mentioned the need to accommodate diverse standards for representing scientific knowledge. In order to gain the indexing and search efficiencies and other services that Semantic Web technologies provide, translations from or approximations of these standards to the languages of the Semantic Web would be needed.
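As a concrete illustration of this trade-off, the following generic ALC-style axioms (textbook-style examples, not taken from any of the cited ontologies) show the kind of statement that stays within a decidable description logic fragment:

```latex
% Illustrative ALC TBox axioms (generic example only):
%   an Enzyme is a Protein that catalyses some Reaction; every Protein is a Molecule.
\begin{align*}
  \mathit{Enzyme}  &\equiv \mathit{Protein} \sqcap \exists\,\mathit{catalyses}.\mathit{Reaction}\\
  \mathit{Protein} &\sqsubseteq \mathit{Molecule}
\end{align*}
```

From such axioms a DL reasoner can decide subsumptions such as Enzyme ⊑ Molecule, whereas statements requiring full first-order quantification or arithmetic over the reals fall outside this decidable fragment.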
3.3 Reasoning with Complex Knowledge
We might hope that automated reasoning techniques could be applied to a body of finely-represented scientific knowledge to obtain new results that are implied by some combination of known results, but that have not, as yet, been recognised. This would be possible if the knowledge could be faithfully represented by appropriate description logics; however, in many areas of science core results are expressed in mathematics. We can devise formalised representation systems for such knowledge, but, as we saw in the previous section, automated reasoning based on logic is unreliable with such highly expressive systems. This does not mean, however, that automated reasoning is not useful for scientific applications. Instead, we observe that in mathematics, automated theorem proving systems have successfully aided researchers to find mathematical proofs that had not previously been known [42]. These systems often require (sometimes substantial) human intervention. Colton has recently reviewed the state of automated and semi-automated reasoning for mathematics with results that are encouraging [43]. However, much work would have to be done before these techniques can provide substantial support for mathematical representations of scientific knowledge.
4 Conclusions
We have defined two key concepts that we believe are necessary to extend data provenance to become scientific process provenance: Theory provenance is the
provenance of the ideas and reasoning behind scientific results or algorithms implemented in software that performs scientific data processing. Semantic citation is the addition of attributes to a citation to describe its semantic role. Taken together, these two concepts are at the centre of a vision for semantic publication that will (1) enable the integration of data provenance into a scientific argument, and (2) provide a fuller identification of the underlying assumptions and applicability of transformations used in scientific computational workflows. We have described in detail the relationships between theory provenance and semantic citation and a set of knowledge representation technologies that we believe can be employed to implement our vision for semantic publication. We have identified semantic grids as a promising platform for implementing our vision. The ideas we have presented here would have profound implications for science publishing; however, uptake of the ideas would require substantial changes to the science publishing infrastructure as well as to the publishing habits of scientists. It is unlikely that such changes will be adopted by the wider scientific community without effective demonstration of theory provenance. Clearly, the vision will also not be realised unless these changes can be implemented without undue burden on working scientists; automation in the form of collaborative technologies (see, for example, [8,9,10]) is likely applicable, or may be adapted, to allow automatic implementation of semantic citation. Note that there are other implications of our proposal outside of provenance. A network of finely-represented, semantically linked knowledge would be open to many forms of automated analysis, as well as innovative applications of Web 2.0 technologies and other technologies of the future. These ideas are also applicable beyond science. A promising area for future investigation is the applicability of the techniques described in this paper to public policy formulation, and objective measures of how well a given public policy aligns with supporting domain research—for example policies under discussion to respond to anthropogenically-generated global warming.

Acknowledgements. We would like to acknowledge Dr. Peter Baumgartner, Dr. Ian Barnes, Dr. Roger Clarke, Dr. Scott Sanner, Dr. Catherine Legg, Dr. Jason Grossman and Dr. Tom Worthington for their discussions and expert advice.
References 1. Foster, I., Kesselman, C. (eds.): The Grid 2: Blueprint for a New Computing Infrastructure. The Morgan Kaufmann Series in Computer Architecture and Design. Morgan Kaufmann, San Francisco (2003) 2. Moreau, L., et al.: Special Issue: The First Provenance Challenge. Concurrency and Computation: Practice and Experience 20(5), 409–418 (2008) 3. Miles, S., Deelman, E., Groth, P., Vahi, K., Mehta, G., Moreau, L.: Connecting scientific data to scientific experiments with provenance. In: IEEE International Conference on e-Science and Grid Computing, December 2007, pp. 179–186 (2007) 4. Simmhan, Y.L., Plale, B., Gannon, D.: A survey of data provenance in e-science. SIGMOD Rec. 34(3), 31–36 (2005) 5. Bose, R., Frew, J.: Lineage retrieval for scientific data processing: a survey. ACM Comput. Surv. 37(1), 1–28 (2005)
6. Zhao, J., Goble, C., Stevens, R., Bechhofer, S.: Semantically linking and browsing provenance logs for E-science. In: Bouzeghoub, M., Goble, C.A., Kashyap, V., Spaccapietra, S. (eds.) ICSNW 2004. LNCS, vol. 3226, pp. 158–176. Springer, Heidelberg (2004) 7. Quirk, J.: Computational science, same old silence, same old mistakes, something more is needed . . . . In: Adaptive Mesh Refinement - Theory and Applications. Lecture Notes in Computational Science and Engineering, vol. 41, pp. 3–28. Springer, Heidelberg (2005) 8. Shum, S., De Roure, D., Eisenstadt, M., Shadbolt, N., Tate, A.: CoAKTinG: Collaborative Advanced Knowledge Technologies in the Grid. In: 2nd Workshop Advanced Collaborative Environments, http://www.aktors.org/coakting/ 9. Myers, J., Mendoza, E., Hoopes, B.: A Collaborative Electronic Notebook. In: Proceedings of the IASTED International Conference on Internet and Multimedia Systems and Applications (IMSA 2001), August 2001, pp. 13–16. ACTA Press (2001) 10. Myers, J.D., Chappell, A., Elder, M., Geist, A., Schwidder, J.: Re-integrating the research record. Computing in Science and Engineering 5(3), 44–50 (2003) 11. Zhuge, H.: The Knowledge Grid. World Scientific, Singapore (2004) 12. CiteSeer: http://citeseer.ist.psu.edu/ or http://citeseersx.ist.psu.edu/ 13. GoogleScholar: http://scholar.google.com/ 14. ISI: Isi web of knowledge, http://apps.isiknowledge.com/ 15. de Waard, A.: Science publishing and the semantic web, or: Why are you reading this on paper. In: European Conference on the Semantic Web (2005) 16. Carr, L., Hall, W., Bechhofer, S., Goble, C.: Conceptual linking: ontology-based open hypermedia. In: WWW 2001: Proceedings of the 10th international conference on World Wide Web, pp. 334–342. ACM, New York (2001) 17. W3C: Naming and addressing: Uris, urls, ..., http://www.w3.org/Addressing/ 18. DOI: The digital object identifier system, http://www.doi.org/ 19. MIZAR: The mizar project for formalized representation of mathematics, http://www.mizar.org/ 20. IsarMathLib: Library of formalized mathematics for isabelle/isar (zf logic), http://savannah.nongnu.org/projects/isarmathlib 21. Stevens, R., Goble, C., Bechhofer, S.: Ontology-based knowledge representation for bioinformatics. Briefings in Bioinformatics 1(4), 398–414 (2000) 22. Stevens, R.D., Robinson, A.J., Goble, C.A.: myGrid: personalised bioinformatics on the information grid. Bioinformatics 19(suppl. 1), i302–i304 (2003) 23. EMBL-EBI: Biological ontology databases. European Bioinformatics Institute, an Outstation of the European Molecular Biology Laboratory, http://www.ebi.ac.uk/Databases/ontology.html 24. Hu, X., Lin, T., Song, I., Lin, X., Yoo, I., Lechner, M., Song, M.: Ontology-Based Scalable and Portable Information Extraction System to Extract Biological Knowledge from Huge Collection of Biomedical Web Documents. In: Proceedings of the 2004 IEEE/WIC/ACM International Conference on Web Intelligence, pp. 77–83. IEEE Computer Society, Washington (2004) 25. Fox, P., McGuinness, D., Raskin, R., Sinha, A.: Semantically-Enabled Scientific Data Integration. In: US Geological Survey Scientific Investigations Report, vol. 5201 (2006), http://sesdi.hao.ucar.edu/ 26. Zhang, X., Hu, C., Zhao, Q., Zhao, C.: Semantic data integration in materials science based on semantic model. In: IEEE International Conference on e-Science and Grid Computing, December 2007, pp. 320–327 (2007)
27. Bao, J., Hu, Z., Caragea, D., Reecy, J., Honavar, V.: A tool for collaborative construction of large biological ontologies. In: Bressan, S., K¨ ung, J., Wagner, R. (eds.) DEXA 2006. LNCS, vol. 4080, pp. 191–195. Springer, Heidelberg (2006) 28. Lin, H.N., Tseng, S.S., Weng, J.F., Lin, H.Y., Su, J.M.: An iterative, collaborative ontology construction scheme. In: Second International Conference on Innovative Computing, Information and Control, 2007. ICICIC 2007, September 2007, p. 150 (2007) 29. Xexeo, G., de Souza, J., Vivacqua, A., Miranda, B., Braga, B., Almentero, B., D’ Almeida Jr., J.N., Castilho, R.: Peer-to-peer collaborative editing of ontologies. In: The 8th International Conference on Computer Supported Cooperative Work in Design, 2004. Proceedings, May 2004, vol. 2, pp. 186–190 (2004) 30. Roure, D., Jennings, N., Shadbolt, N.: Research agenda for the semantic grid: a future escience infrastructure, vol. 9. National e-Science Centre, Edinburgh (2001) 31. Roure, D.D., Jennings, N.R., Shadbolt, N.R.: The semantic grid: A future e-science infrastructure. In: Berman, F., Fox, G., Hey, A.J.G. (eds.) Grid Computing, pp. 437–470. Wiley, Chichester (2003) 32. Goble, C.: Putting semantics into e-science and grids. In: First International Conference on e-Science and Grid Computing, 2005, December 2005, p. 1 (2005) 33. Siddiqui, M., Villazon, A., Fahringer, T.: Semantic-based on-demand synthesis of grid activities for automatic workflow generation. In: IEEE International Conference on e-Science and Grid Computing, December 2007, pp. 43–50 (2007) 34. Somasundaram, T., Balachandar, R., Kandasamy, V., Buyya, R., Raman, R., Mohanram, N., Varun, S.: Semantic-based grid resource discovery and its integration with the grid service broker. In: International Conference on Advanced Computing and Communications, 2006. ADCOM 2006, December 2006, pp. 84–89 (2006) 35. Andronico, G., Barbera, R., Falzone, A.: Grid portal based data management for lattice qcd. In: 13th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises, 2004. WET ICE 2004, pp. 347–351. IEEE, Los Alamitos (2004) 36. Berriman, B., Kirkpatrick, D., Hanisch, R., Szalay, A., Williams, R.: Large Telescopes and Virtual Observatory: Visions for the Future. In: 25th meeting of the IAU, Joint Discussion, vol. 8, p. 17 (2003) 37. Fox, P., McGuinness, D.L., Middleton, D., Cinquini, L., Darnell, J.A., Garcia, J., West, P., Benedict, J., Solomon, S.: Semantically-enabled large-scale science data repositories. In: Cruz, I., Decker, S., Allemang, D., Preist, C., Schwabe, D., Mika, P., Uschold, M., Aroyo, L.M. (eds.) ISWC 2006. LNCS, vol. 4273, pp. 792–805. Springer, Heidelberg (2006) 38. Foster, I.: What is the Grid? A Three Point Checklist. Grid Today 1(6), 22–25 (2002) 39. Bartolo, L.M., Cole, T.W., Giersch, S., Wright, M.: NSF/NSDL Workshop on Scientific Markup Languages. D-Lib Magazine 1(11) (2005) 40. Levesque, H., Brachman, R.: Expressiveness and tractability in knowledge representation and reasoning 1. Computational Intelligence 3(1), 78–93 (1987) 41. W3C: Web ontology language, http://www.w3.org/TR/owl-features/ 42. Mccune, W.: Solution of the robbins problem. Journal of Automated Reasoning 19(3), 263–276 (1997) 43. Colton, S.: Computational discovery in pure mathematics. In: Dˇzeroski, S., Todorovski, L. (eds.) Computational Discovery 2007. LNCS (LNAI), vol. 4660, pp. 175– 201. Springer, Heidelberg (2007)
Probabilistic Ranking in Uncertain Vector Spaces

Thomas Bernecker, Hans-Peter Kriegel, Matthias Renz, and Andreas Zuefle

Institute for Informatics, Ludwig-Maximilians-Universität München, Germany
{bernecker,kriegel,renz,zuefle}@dbs.ifi.lmu.de
Abstract. In many application domains, e.g. sensor databases, traffic management or recognition systems, objects have to be compared based on positionally and existentially uncertain data. Feature databases with uncertain data require special methods for effective similarity search. In this paper, we propose a probabilistic similarity ranking algorithm which computes the results dynamically based on the complete information given by inexact object representations. Hence, this can be performed in an effective and efficient way. We assume that the objects are given by a set of points in a vector space with confidence values following the discrete uncertainty model. Based on this representation, we introduce a probabilistic ranking algorithm that is able to reduce significantly the computational complexity of the computation of the probability that an object is at a certain ranking position. In a detailed experimental evaluation, we demonstrate the benefits of this approach compared to several competitors. The experiments show that, in addition to the gain of efficiency, we can achieve convenient query results of high quality.
1 Introduction

Similarity ranking is one of the most important query types in feature databases. A similarity ranking query iteratively reports objects in descending order of their similarity to a given query object. The iterative computation of the answers is very suitable for retrieving the results the user could have in mind. This is a big advantage of ranking queries over the most prominent similarity queries, the distance-range (ε-range) and the k-nearest neighbor query, in particular if the user does not know how to specify the query parameters ε and k. Many modern applications have to cope with uncertain or imprecise data. Example applications are location determination and proximity detection of moving objects, similarity search and pattern matching in sensor databases, or personal identification and recognition systems based on video images or scanned image data. The importance of this topic in the context of database systems is demonstrated by the increasing interest of the database research community in this subject matter. Several approaches coping with uncertain objects have been proposed [4,5,12,2]. All these approaches use continuous probability density functions (pdfs) for the description of the spatial uncertainty, while the approaches proposed in [7,8] use discrete representations of uncertain objects. The approach proposed in [7] supports probabilistic distance range queries on uncertain objects. In [8] efficient methods for probabilistic nearest-neighbor queries are proposed. However, in fact only one-nearest-neighbor queries are supported.
Fig. 1. Distance range query on objects with different uncertainty representations: (a) query on objects with aggregated uncertainty; (b) query on objects with probabilistic uncertainty
Similarity search in conjunction with multimedia data like images, music, or data from personal identification systems like face snapshots or fingerprints commonly involves distance computations within the feature space. If exact features cannot be generated from uncertain objects, we have to cope with positionally uncertain vectors in the feature space (i.e. objects are represented by ambiguous feature vectors). Basically, there exist two forms of representations of positionally uncertain data: Uncertain positions represented by a probability density function (pdf) or uncertain positions drawn by alternatives. In this paper we concentrate on uncertain objects represented by a set of alternative positions, each associated with a confidence value that indicates the degree of matching the exact object. This type of representation is motivated by the fact that we often have only discrete but ambiguous object information as usually returned by common sensor devices, e.g. discrete snapshots of continuously moving objects. A probabilistic ranking on uncertain objects computes for each object o ∈ D the probability that o is the k-th nearest neighbor (1 ≤ k ≤ |D|) of a given query object q. In the context of probabilistic ranking queries we propose diverse forms of ranking outputs which differ in the order the objects are reported to the user. Furthermore, we suggest diverse forms in which the results are reported (i.e. which kind of information is assigned to each result). The simplest solution to perform queries on uncertain objects is to represent the objects by an exact feature vector, e.g. the mean vector, and perform query processing in a traditional way. The advantage of this straightforward solution is that established query and indexing techniques can be applied. However, this solution is accompanied by information loss, since the similarity between uncertain objects is obviously more meaningful when taking the whole information of the object uncertainty into account. An example of the latter case is depicted in Figure 1(a), where a set of uncertain objects A . . . U represented by their mean values is depicted. The results of a distance range query with query object Q are shown in the upper right box. There are critical objects like P , that is included in the result, and O, which is not included, though they are
very close to each other and have quite similar uncertainty regions as depicted in Figure 1(b)1 . Here, the complete uncertainty information of the objects is taken into account. The gray shaded fields indicate those objects which are included in the non-probabilistic result (cf. Figure 1(a)). The results show that objects O and P have quite similar probabilities (P (O) = 50%, P (P ) = 60%) for belonging to the result. Additionally, we can see that objects E, F , G and M are certain results.
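The effect shown in Figure 1(b) can be reproduced directly from the discrete uncertainty model: the probability that an uncertain object belongs to the result of a distance range query is simply the sum of the confidences of its alternative positions that fall inside the range. The following minimal Python sketch uses invented coordinates, chosen only so that the resulting probabilities match the 50% and 60% values quoted above.

```python
import math

# Alternative positions with confidences (coordinates invented for illustration,
# chosen so that the probabilities reproduce the 50% / 60% example in the text).
O = [((1.0, 1.0), 0.25), ((1.4, 1.1), 0.25), ((2.6, 1.2), 0.25), ((2.9, 1.0), 0.25)]
P = [((1.1, 2.0), 0.2), ((1.3, 2.1), 0.2), ((1.2, 2.2), 0.2),
     ((2.8, 2.4), 0.2), ((3.0, 2.5), 0.2)]
q = (1.2, 1.5)   # query point (assumed certain here, for simplicity)
eps = 1.0        # query range

def range_probability(obj, q, eps):
    """P(object lies within distance eps of q) under the discrete uncertainty model."""
    return sum(p for x, p in obj if math.dist(x, q) <= eps)

print(range_probability(O, q, eps))   # 0.5 -> O belongs to the result with probability 50%
print(range_probability(P, q, eps))   # 0.6 -> P belongs to the result with probability 60%
```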
2 Related Work Several approaches for indexing uncertain vector objects have been proposed. They mainly differ in the type of uncertainty supported by the index and in the type of supported similarity query. In [3], the Gauss-tree is introduced which is an index structure for managing large amounts of Gaussian distribution functions. The proposed system aims at efficiently answering so-called identification queries. Additionally, [3] proposed probabilistic identification queries which are based on a Bayesian setting (i.e., given a query pdf, retrieve those pdfs in the database that correspond to the query pdf with the highest probability). The authors of [4,5,12,2] deal with an uncertainty model for positionally uncertain objects and propose queries which are specified by intervals in the query space. In this setting, a query retrieves uncertain objects w.r.t. the likelihood that the uncertain object is indeed placed in the given query interval. The authors of [2] adapt the Gauss-tree proposed in [3] to a positionally uncertainty model and discuss probabilistic ranking queries. Here, the probabilistic ranking query has another meaning than the queries proposed in this paper. In [2], probabilistic ranking queries retrieve those k objects which have the highest probability of being located inside a given query area. In [12] an index structure called U-Tree is proposed which organizes pdfs using linear approximations. Recently, [10] introduced a method and a corresponding index structure modeling pdfs using piecewise-linear approximations. This new approach also employs linear functions as the U-Tree but is more exact in its approximation. All the approaches mentioned above use continuous probability density functions (pdfs) for the description of the spatial uncertainty. Most of them only support specific types of pdfs, e.g. uniform distribution within an interval or Gaussian distributions. The approaches proposed in [7,8] use discrete representations of positionally uncertain objects. Instead of continuous probability density functions they use sampled object positions reflecting the positionally object uncertainty. Based on this concept they proposed efficient similarity search approaches that allow to approximate uncertain objects represented by pdfs of arbitrary structure. The main advantage of this approach is that sampled positions in space can efficiently be indexed using traditional spatial access methods thus allowing to reduce the computational complexity of complex query types. The approach proposed in [7] supports probabilistic distance range queries on uncertain objects. The advantage of probabilistic distance range queries is that the result probability for an uncertain object does not depend on the other uncertain objects in the database. 1
Here the object uncertainties are indicated by a set of alternative positions, i.e. each uncertain object consists of a set of alternative positions.
The approach proposed in [8] enables an efficient computation of probabilistic nearestneighbor queries. However, only one-nearest neighbor queries are supported. The main challenge for k-NN queries is that the neighborhood probability of objects depends on the other objects in the database. Recently, Soliman et al. presented in [11] a top-k query processing algorithm for uncertain data in relational databases. The uncertain objects are represented by multiple tuples where to each tuple a confidence value is assigned indicating the likelihood that the tuple is a representant of the corresponding object. The authors propose two different query methods, the uncertain top-k query (U-Topk) and the uncertain k ranks query (U-kRanks). The U-Topk query reports tuples with the maximum aggregated probability of being top-k for a given score function while the U-kRanks query reports for each ranking position one tuple which is a clear winner of the corresponding ranking position. For both query types efficient query processing algorithms are presented. An improved method is given for both query types by Yi et al. in [13]. Though the U-kRanks query problem defined in [11,13] is quite related to our problem it is based on the xrelation model and, thus, differs from our problem definition. Their approaches refer to the occurrence of single tuples in a possible world instead of objects composed by a set of mutually exclusive vector points. In this paper, we propose efficient solutions for probabilistic ranking queries. Thereby, the results are iteratively reported in ascending order of the ranking parameter k. Similar to the object uncertainty model used in [7,8], our approach assumes that the uncertain objects are represented by a set of points in a vector space. This allows us to use standard spatial access methods like the R∗ -tree [9] for the efficient organization of the uncertain objects. Furthermore standard similarity search paradigms can be exploited to support probabilistic ranking in an efficient way.
3 Problem Definition

In this section, we formally introduce the problem of probabilistic ranking queries on uncertain objects. We first start with the definition of (positionally) uncertain objects.

3.1 Positionally Uncertain Objects

Objects of a d-dimensional vector space R^d are called positionally uncertain if they do not have a unique position in R^d, but have multiple positions, each associated with a probability value. Thereby, the probability value assigned to a position p ∈ R^d of an object o denotes the likelihood that o is located at the position p in the feature space. A formal definition is given in the following:

Definition 1 (Uncertain Object Representation). Let D be a database of objects located in a d-dimensional feature space R^d. Corresponding to the discrete uncertainty model, an uncertain object o is modelled by a finite set of alternative positions in a d-dimensional vector space, each associated with a confidence value, i.e. o = {(x, p) : x ∈ R^d, p ∈ [0, 1], p is the probability that x is the position of o}. The confidence value p indicates the likelihood that the vector position matches the corresponding position of object o. The condition $\sum_{(x,p)\in o} p = 1$ holds.
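A direct transcription of Definition 1 might look as follows; this is only a sketch, and the helper name and the tolerance parameter are our own choices rather than part of the paper's model.

```python
from typing import List, Tuple

Vector = Tuple[float, ...]
UncertainObject = List[Tuple[Vector, float]]   # alternative positions with confidences

def make_uncertain_object(alternatives: UncertainObject, tol: float = 1e-9) -> UncertainObject:
    """Validate the discrete uncertainty model of Definition 1."""
    if any(p < 0.0 or p > 1.0 for _, p in alternatives):
        raise ValueError("each confidence must lie in [0, 1]")
    if abs(sum(p for _, p in alternatives) - 1.0) > tol:
        raise ValueError("confidences must sum to 1")
    return list(alternatives)

# A 2-dimensional uncertain object with three alternative positions (values invented).
o = make_uncertain_object([((0.0, 0.0), 0.5), ((0.2, 0.1), 0.3), ((1.0, 0.9), 0.2)])
```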
3.2 Distance Computation for Uncertain Objects

Positionally uncertain objects involve uncertain distances between them. Like the uncertain position, the distance between two uncertain objects (or between two objects where at least one of them is an uncertain object) can be described by a probability density function (pdf) that reflects the probability for each possible distance value. However, for uncertain objects with discrete uncertainty representations we need another form of distance.

Definition 2 (Uncertain Distance). Let dist: R^d × R^d → R⁺₀ be an Lp-norm-based similarity distance function, and let o_i ∈ D and o_j ∈ D be two uncertain objects, where o_i and o_j are assumed to be independent of each other. Then an uncertain distance in the discrete uncertainty model is a collection
\[ d_{uncertain}(o_i, o_j) = \{(d, p) : (x, p_x) \in o_i,\ (y, p_y) \in o_j,\ d = dist(x, y),\ p = p_x \cdot p_y\}. \]
Here, the condition $\sum_{(d,p)\in d_{uncertain}(o_i,o_j)} p = 1$ holds. The probability that the uncertain distance $d_{uncertain}(o_i, o_j)$ between two uncertain objects o_i and o_j does not exceed a given range ε ∈ R⁺₀ can be estimated by:
\[ P(d_{uncertain}(o_i, o_j) \le \varepsilon) = \sum_{(d,p)\in d_{uncertain}(o_i,o_j),\ d \le \varepsilon} p. \]
Since distance computations between uncertain objects are very expensive, we need computationally inexpensive distance approximations to reduce the candidate set in a filter step. For this reason, we introduce distance approximations that lower and upper bound the uncertain distance between two uncertain objects.

Definition 3 (Minimal and Maximal Object Distance). Let o_i = {o_{i,1}, o_{i,2}, .., o_{i,M}} and o_j = {o_{j,1}, o_{j,2}, .., o_{j,M}} be two uncertain objects. Then the distance $d_{min}(o_i, o_j) = \min_{s=1..M,\ s'=1..M}\{dist(o_{i,s}, o_{j,s'})\}$ is called the minimal distance between the objects o_i and o_j. Analogously, the distance $d_{max}(o_i, o_j) = \max_{s=1..M,\ s'=1..M}\{dist(o_{i,s}, o_{j,s'})\}$ is called the maximal distance between the objects o_i and o_j.

3.3 Probabilistic Ranking on Uncertain Objects

The output of probabilistic queries is usually in the form of a set of result objects, each associated with a probability value indicating the likelihood that the object fulfills the query predicate. However, in contrast to ε-range queries and k-nn queries, ranking queries do not have such a unique query predicate, since the query predicate changes with each ranking position. In the case of ranking queries, a set of probability values is assigned to each result object, one for each ranking position. We call this form of ranking output probabilistic ranking.

Definition 4 (Probabilistic Ranking). Let q be an uncertain query object and D be a database containing N = |D| uncertain objects. An uncertain ranking is a function
prob rankedq : (D × {1, .., N }) → [0..1] that reports for a database object o ∈ D and a ranking position k ∈ {1, .., N } the probability which reflects the likelihood that o is at the k th ranking position according to the uncertain distance duncertain (o, q) between o and the query object q in ascending order. The probabilistic ranking includes the following information, a probability value for each object and for each ranking position. Not all of this information might be relevant for the user and it could be difficult for the user to extract the relevant information. Commonly, a small part of the probabilistic ranking information should be sufficient and more easy to read and, thus, more convenient for most applications. Furthermore, due to the variance of the ranking positions of the objects, there does not exist a unique order in which the results are reported. For this reason, we define different types of probabilistic ranking queries which differ in the order the results are reported and in the form their confidence values are aggregated. In the following definitions, we assume an uncertain query object q and a database D containing N = |D| uncertain objects are given. Furthermore, we assume that prob rankedq is a probabilistic ranking over D according to q. The following variants of query definitions can be easily motivated by the fact that the user could be overstrained with ambiguous ranking results. They specify how the results of a probabilistic ranking can be aggregated and reported in a more comfortable form which is more easy to read. In particular, for each ranking position only one object is reported, i.e. for each ranking position k, the object which best fits the given position k is reported. Probabilistic Ranking Query Based on Maximal Confidence (PRQ MC): The first query definition reports the objects in such a way that the k th reported object has the highest confidence to be at the given ranking position k. This query definition is quite similar to the U-kRanks query defined in [11,13]. Definition 5. A probabilistic ranking query based on maximal confidence (PRQ MC) incrementally retrieves for the next ranking position i ∈ IN a result tuple of the form (o, prob rankedq (o, i)), where o ∈ D has not been reported at previous ranking iterations (i.e. at ranking positions j < i) and ∀p ∈ D which have not been reported at previous ranking iterations, the following statement holds: prob rankedq (o, i) ≥ prob rankedq (p, i). Note that this type of query only considers the occurrence confidence of a certain ranking position for an object. The confidences of prior ranking positions of an object are ignored in the case they are exceeded by another object. However, the confidences of prior ranking positions might also be relevant for the final setting of the ranking position of an object. This assumption is taken into account with the next query definition. Probabilistic Ranking Query Based on Maximal Aggregated Confidence (PRQ MAC): The next query definition PRQ MAC takes aggregated confidence values of ranking positions into account. Contrary to the previous definition, this query assigns each object o a unique ranking position k by aggregating over the confidences of all prior ranking positions i < k according to o.
Definition 6. A probabilistic ranking query based on maximal aggregated confidence (PRQ MAC) incrementally retrieves for the next ranking position i ∈ ℕ a result tuple of the form (o, $\sum_{j=1..i} prob\_ranked_q(o, j)$), where o ∈ D has not been reported at previous ranking iterations (i.e. at ranking positions j < i) and ∀p ∈ D which have not been reported at previous ranking iterations, the following statement holds:
\[ \sum_{j=1..i} prob\_ranked_q(o, j) \ \ge\ \sum_{j=1..i} prob\_ranked_q(p, j). \]
Both query types defined above specify the ranking position of each object o by comparing the ranking position confidence of o with that of the other objects. The next query specification takes into account, for the assignment of a ranking position to an object o, only the ranking confidences of o.

Probabilistic Ranking Query Based on Expected k-Matching (PRQ EkM): This query assigns to each object its expected ranking position without taking the confidences of the other objects into account.

Definition 7. A probabilistic ranking query based on expected k-matching (PRQ EkM) incrementally retrieves for the next ranking position i ∈ ℕ a result tuple of the form (o, prob_ranked_q(o, i)), where o ∈ D has not been reported at previous ranking iterations (i.e. at ranking positions j < i) and o has the i-th smallest expected ranking position
\[ \mu(o) = \sum_{j=1..N} j \cdot prob\_ranked_q(o, j). \]
In other words, the objects are reported in ascending order of their expected ranking position.
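Assuming the probabilistic ranking of Definition 4 is available as a matrix P with P[o, k] = prob_ranked_q(o, k+1), the three query variants reduce to different selection rules. The following unoptimized Python sketch (array layout and function names are our own) returns, for each variant, the order in which objects would be reported:

```python
import numpy as np

def prq_mc(P):
    """PRQ MC: at position k, report the unreported object with the highest P[o, k]."""
    n = P.shape[0]
    reported, order = set(), []
    for k in range(n):
        o = max((o for o in range(n) if o not in reported), key=lambda o: P[o, k])
        reported.add(o)
        order.append(o)
    return order

def prq_mac(P):
    """PRQ MAC: at position k, report the unreported object with the highest aggregated confidence."""
    n = P.shape[0]
    C = np.cumsum(P, axis=1)          # C[o, k] = sum of P[o, 0..k]
    reported, order = set(), []
    for k in range(n):
        o = max((o for o in range(n) if o not in reported), key=lambda o: C[o, k])
        reported.add(o)
        order.append(o)
    return order

def prq_ekm(P):
    """PRQ EkM: report objects in ascending order of their expected ranking position."""
    n = P.shape[0]
    mu = P @ np.arange(1, n + 1)      # mu[o] = sum_k k * P(o at position k)
    return list(np.argsort(mu))

# Toy example: rows = objects, columns = ranking positions (values invented).
P = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.2, 0.7]])
print(prq_mc(P), prq_mac(P), prq_ekm(P))
```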
4 Probabilistic Ranking Algorithm

The computation of the probabilistic ranking is very expensive and is the main bottleneck of the probabilistic ranking queries proposed in the previous section. In this section, we first introduce in Section 4.1 the data model used to compute the probabilistic ranking and then, in Section 4.2, we show how the computational cost of the probabilistic ranking on uncertain objects can be drastically reduced. We assume that each object is represented by M alternative vector positions, which we call sample points or simply samples in the remainder. Furthermore, we assume that the object samples are stored in a spatial index structure like the R*-tree [9], in order to organize the uncertain objects such that proximity queries can be efficiently processed. Up to now, we have assumed that the database objects are uncertain. If we assume that the query object is an uncertain object as well, we have to keep in mind the dependencies between the alternative object representations given by the sample representations. For this reason, we propose to solve the probabilistic ranking problem for each representative o_{q,j} of the query object o_q = {o_{q,1}, o_{q,2}, .., o_{q,M}} separately. Let us note that this computation can be done in parallel and, thus, can be efficiently supported by distributed systems.
In the following, we concentrate on the computation of the probabilistic ranking query according to one sample point q_j ∈ R^d of the query object q. The computation is done for each query sample point separately and, in a postprocessing step, the results can be easily merged to obtain the final result, which is shown in Section 4.3.

4.1 Iterative Probability Computation

Initially, an iterative computation of the nearest neighbors of q_j w.r.t. the sample points of all objects o ∈ D (sample point ranking ranks(q_j)) is started using the ranking algorithm proposed in [6]. Then, we iteratively pick object samples from the sample point ranking ranks(q_j) according to the query sample point q_j. For each sample point o_{i,s} (1 ≤ s ≤ M) returned from ranks(q_j), we immediately compute the probability that o_{i,s} is the k-th nearest neighbor of q_j for all k (1 ≤ k ≤ i). Thereby, all other samples o_{i,t} (t ≠ s) of object o_i have to be ignored due to the sample dependency within an object as mentioned above. For the probability computation we need two auxiliary data structures: the sample table (ST), required to compute the probabilities by incorporating the other objects o_l ∈ D (o_l ≠ o_i), and the probability table (PT), used to maintain the intermediate results w.r.t. o_{i,s} and which finally contains the overall results of the probabilistic ranking. In the following, both data structures ST and PT are introduced in detail.

Sample Table (ST): We maintain a table ST called sample table that stores for each accessed object separately the portion of samples already returned from ranks(q_j). Additionally, we need for each accessed object the portion of samples that has not been accessed so far. Entries of ST according to object o_i are defined as follows:
\[ ST[i][1] = \frac{\#\,\text{samples of } o_i \text{ already returned from } ranks(q_j)}{M}, \quad \text{where } M = \#\,\text{samples of object } o_i. \]
ST[i][0] can be directly computed by ST[i][0] = 1 − ST[i][1], such that in fact we only need to maintain entries of the form ST[i][1].

Probability Table (PT): In addition to the sample table, we maintain a table PT called probability table that stores for each object o_i and each k ∈ ℕ (1 ≤ k ≤ N) the actual probability that o_i is the k-th nearest neighbor of the query sample point q_j. The entries of PT according to the s-th sample point of object o_i are defined as follows:
\[ PT[k][i][s] = P\big((k-1) \text{ objects } o \in D,\ o \neq o_i, \text{ are closer to } q_j \text{ than the sample point } o_{i,s}\big). \]
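A minimal sketch of how the two auxiliary structures could be maintained while samples are fetched from the ranking is shown below; the dictionary layout, the on-the-fly averaging over the M samples (anticipating the table pruning of Section 4.2) and all names are our own choices.

```python
from collections import defaultdict

M = 10   # number of samples per object (assumed constant, as in the paper's model)

ST = defaultdict(float)   # ST[i] ~ ST[i][1]: fraction of object i's samples already returned
PT = defaultdict(float)   # PT[(k, i)]: accumulated probability that object i is at ranking position k

def on_sample_returned(i, rank_probabilities):
    """Process one sample of object i returned by the sample-point ranking.

    rank_probabilities[k-1] is the probability that this sample is the k-th
    nearest neighbor of the query sample point (computed, e.g., with the
    dynamic program of Section 4.2 from the ST entries of the other objects).
    """
    for k, p in enumerate(rank_probabilities, start=1):
        PT[(k, i)] += p / M          # average over the M samples on the fly
    ST[i] += 1.0 / M                 # one more sample of object i has been seen
```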
We assume that object o_i is the i-th object for which ranks(q_j) has reported at least one sample point. The same assumption is made for the sample points of an uncertain object (i.e., sample point o_{i,s} is the s-th-closest sample point of object o_i according to q_j). These assumptions hold for the remainder of this paper. Now, we show how to compute an entry PT[k][i][s] of the probability table using the information stored in the sample table ST. Let ST be a sample table of size N (i.e. ST stores the information corresponding to all N objects of the database D). Let σ_k(i) ⊆ {o ∈ D | o ≠ o_i} denote a set, called a k-set of o_i, containing exactly (k − 1) objects. If we assume k < N, obviously $\binom{N-1}{k-1}$ different k-set permutations σ_k(i)
exist. For the computation of PT[k][i][s], we have to consider the set S_k of all possible k-set permutations according to o_i. The probability that exactly (k − 1) objects are closer to the query sample point q_j than the sample point o_{i,s} can be computed as follows:
\[
PT[k][i][s] \;=\; \sum_{\sigma_k(i)\,\in\,S_k}\ \prod_{\substack{l=1..N\\ l\neq i}}
\begin{cases}
ST[l][1] & \text{if } o_l \in \sigma_k(i)\\
ST[l][0] & \text{if } o_l \notin \sigma_k(i)
\end{cases}
\]
Let us assume that we actually process the sample point o_{i,s}. Since the object samples are processed in ascending order according to their distance to q_j, the sample table entry ST[l][1] reflects the probability that object o_l is closer to q_j than the sample point o_{i,s}. On the other hand, ST[l][0] reflects the probability that o_{i,s} is closer to q_j than o_l. In the following, we show how the entries of the probability table can be computed by iteratively fetching the sample points from ranks(q_j). Thereby, we assume that all entries of the probability table are initially set to zero. Then the iterative ranking process ranks(q_j), which reports one sample point of an uncertain object in each iteration, is started. Each reported sample point o_{i,s} is used to compute for all k (1 ≤ k ≤ N) the probability value that corresponds to the table entry PT[k][i][s]. After filling the (i, s)-column of the probability table, we proceed with the next sample point fetched from ranks(q_j) in the same way as we did with o_{i,s}. This procedure is repeated until all sample points are fetched from ranks(q_j).

4.2 Accelerated Probability Computation

The computation of the probability table can be very costly in space and time. One reason is the size of the table, which grows drastically with the number of objects and the number of samples for each object. Another problem is the very expensive computation of the probability table entries PT[k][i][s]. In the following, we propose some methods that achieve a considerable reduction of the overall query cost.

Table Pruning: Obviously, we do not need to maintain separately the result according to each sample point of an object. Instead of maintaining a table entry for each sample point of an object, we have to compute the average over the sample probabilities according to an object and a ranking position. This can be done on the fly by simply summing up the iteratively computed sample probabilities. An additional reduction of the table (i.e., a reduction to those parts of the table that should be available at once) can be achieved by maintaining only those parts of the table that are required for further computations and skipping the rest. First, we have to maintain a table column only for those objects from which at least one sample point has been reported from ranks(q_j), whereas we can skip those from which we have already fetched all sample points. In the same way we can reduce the sample table in order to reduce the cost required to compute the probability table entries. Second, we can skip each probability table row that corresponds to a ranking position which is not within a certain ranking range. This range is given by the minimal and maximal ranking position of the uncertain objects for which we currently have to maintain a column of the probability table. The following lemmas utilize the bounds for uncertain distances that are introduced in Definition 3.
Lemma 1 (Minimal Ranking Position). Let oi ∈ D be an uncertain object and qj be the query sample point. Furthermore, let at least n ∈ IN objects have a maximal distance that is smaller than or equal to the minimal distance of oi, i.e. |{ol : ol ∈ D, dmax(ol, qj) ≤ dmin(oi, qj)}| ≥ n. Then, the ranking position of object oi must be larger than n.
In the same way, we can upper bound the ranking position of an uncertain object.
Lemma 2 (Maximal Ranking Position). Let oi ∈ D be an uncertain object and qj be the query sample point. Furthermore, let at most n′ ∈ IN objects have a minimal distance that is smaller than or equal to the maximal distance of oi, i.e. |{ol : ol ∈ D, dmin(ol, qj) ≤ dmax(oi, qj)}| ≤ n′. Then, the ranking position of object oi must be lower than or equal to n′.
As mentioned above, the computation of the object probabilities according to ranking position i only requires considering those objects whose minimal and maximal ranking positions cover the ranking position i. This holds for those objects having sample points within as well as outside of the actual ranking distance r-dist. Usually, in practice this is the case for only a small set of objects, depending on their spatial density and the specificity of their uncertainty.
Bisection-Based Algorithm: The computational cost can be significantly reduced if we utilize the bisection-based algorithm as proposed in [1]. The bisection-based algorithm uses a divide-and-conquer scheme which computes, for a query object q and a database object o, the probability that exactly k other objects are closer to q than the object o. The main idea is to recursively perform a binary split of the set of relevant objects, i.e. objects which have to be taken into account for the probability computation. Afterwards, the corresponding results can be efficiently merged into the final result. Here, we leave out details due to limited space. Note that, although this approach accelerates the computation of the PT[k][i][s] significantly, the asymptotic cost is still exponential in the ranking range. This approach is mentioned here because it is an important competitor to the dynamic-programming-based approach presented next (cf. Section 5).
Dynamic-Programming-Based Algorithm: In the following, we introduce our new approach, which is able to efficiently compute the PT[k][i][s] and whose runtime is O(|D|³). The key idea of our approach is based on the following property. Given a query sample point q and a set of j objects S = {o1, o2, ..., oj} for which the probability P(oi, q) that oi ∈ S is ranked higher than q is known, we want to compute the probability Pk,S,q that exactly k objects oi ∈ S are ranked higher than q.
Lemma 3. If we assume that object oj is ranked higher than q, then Pk,S,q is equal to the probability that exactly k−1 objects of S\{oj} are ranked higher than q. Otherwise, Pk,S,q is equal to the probability that exactly k objects of S\{oj} are ranked higher than q.
The above lemma leads to the following recursion, with pj = P(oj, q), that allows us to compute Pk,S,q by means of the paradigm of dynamic programming:
P_{k,S,q} = P_{k−1,S\{oj},q} · pj + P_{k,S\{oj},q} · (1 − pj), where P_{0,∅} = 1.
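The recursion above translates directly into a small dynamic program. The following sketch (Python, with hypothetical names; it is only our illustration of the recursion, not the authors' implementation) takes the probabilities p_j = P(o_j, q) and returns, for all k at once, the probability that exactly k objects of S are ranked higher than q.

```python
def exactly_k_ranked_higher(higher_probs):
    """higher_probs[j] = P(o_j, q); returns P with P[k] = probability
    that exactly k of the given objects are ranked higher than q."""
    P = [1.0]  # P_{0, emptyset, q} = 1
    for p in higher_probs:
        new_P = [0.0] * (len(P) + 1)
        for k in range(len(P)):
            new_P[k] += P[k] * (1.0 - p)   # o_j is not ranked higher: count stays k
            new_P[k + 1] += P[k] * p       # o_j is ranked higher: count becomes k + 1
        P = new_P
    return P

# toy usage with three objects
print(exactly_k_ranked_higher([0.9, 0.5, 0.2]))
```

Each added object extends the distribution in linear time, which is the source of the polynomial overall behaviour exploited by the dynamic-programming-based algorithm.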
Let us note that the above dynamic programming scheme was originally proposed in the context of top-k queries in the x-relational model [13]. Here, we can exploit this scheme to compute the probability that an uncertain object o ∈ D is assigned to a certain ranking position.
4.3 Building Final Query Results
Up to now, we have assumed that the query consists of one query sample point. In the following, we show how we support queries where the query object is also uncertain, i.e. consists of several query sample points. Let us assume that the query object q consists of M query sample points. Then we start a separate probabilistic ranking query, as described above, for each sample point qj ∈ q. The results are finally merged simply by computing, for each object, the average over all corresponding probabilities returned from the M queries, i.e.

prob_ranked_q(i)(o) = (1/M) · Σ_{j=1..M} prob_ranked_{qj}(i)(o).
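As a small illustration of this merging step (a sketch with hypothetical names, not code from the paper), the final probabilities are a plain average over the M per-sample results:

```python
def merge_query_samples(per_sample_probs):
    """per_sample_probs[j][(o, i)] = probability that object o has rank i
    w.r.t. query sample q_j; returns the averaged probabilities."""
    M = len(per_sample_probs)
    merged = {}
    for result in per_sample_probs:
        for key, p in result.items():
            merged[key] = merged.get(key, 0.0) + p / M
    return merged
```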
5 Experimental Evaluation
In this section, we examine the effectiveness and efficiency of our proposed probabilistic similarity ranking approaches. Since the computation is highly CPU-bound, we measured the efficiency by the overall runtime cost required to compute an entire ranking, averaged over 10 queries.
5.1 Datasets
The following experiments are based on artificial and real-world datasets. The artificial datasets, which are used for the efficiency experiments, contain 10 to 1000 10-dimensional uncertain objects that are placed in the data space according to a Gaussian distribution. Each object consists of M = 10 alternative positions that are distributed around the mean position of the object with a variance of 10% of the data space if not stated otherwise. Figure 2 depicts the distribution of uncertain objects when varying the variance of the positions of the uncertain objects, i.e. the degree of uncertainty. A growing variance leads to an increase of the overlap between the object samples. For the evaluation of the effectiveness of our methods we used two real-world datasets: O3 and NSP. The O3 dataset is an environmental dataset consisting of 30 uncertain time series, each comprising a set of measurements of O3 concentration in the air measured within one month. Thereby, each measurement features a daily O3 concentration curve. The dataset covers measurements from the year 2000 to 2004 and is classified according to the months in a year. The NSP dataset is a chronobiologic dataset describing the cell activity of Neurospora within sequences of day cycles. This dataset is used to investigate endogenous rhythms. It can be classified according to two parameters among
Neurospora is the name of a fungal genus containing several distinct species. For further information see The Neurospora Home Page: http://www.fgsc.net/Neurospora/neurospora.html.
Fig. 2. Uncertain object distribution (object variance = 10.0) in a 60×60 space for different degrees of positional uncertainty (number of objects N = 40, number of samples S = 20): (a) positional variance V = 2.0; (b) positional variance V = 5.0.
Fig. 3. Avg. precision for probabilistic ranking queries on different real-world datasets
others: day cycle and type of mold. For our experiments we utilized two subsets of the NSP data: NSP_h and NSP_frq. NSP_h is classified according to the day-cycle length. It consists of 36 objects that form three classes of day cycle (16, 18 and 20 hours). The NSP_frq dataset consists of 48 objects and is classified according to the type of the mold (frq1, frq7 and frq+).
5.2 Effectiveness
In the first experiments, we evaluate the quality of the different probabilistic ranking queries (PRQ MC, PRQ MAC, PRQ EkM) proposed in Section 3.3. In order to make a fair evaluation, we compare them with the results of a non-probabilistic ranking (MP) which ranks the objects based on the distance between their mean positions. For these experiments, we used the three real-world datasets O3, NSP_h and NSP_frq, each consisting of uncertain objects which are classified as described above. The quality of the proposed approaches can be compared directly in the table depicted in Figure 3, which shows the average precision over all recall values based on a k-NN classification for each probabilistic ranking query approach and each dataset. In all experiments, the PRQ MAC approach outperforms the other approaches, including the non-probabilistic ranking approach. Interestingly, the PRQ MC approach, whose definition is quite similar to that of the U-kRanks query proposed in [11,13], does not work very well and shows quality similar to the non-probabilistic ranking approach. The PRQ EkM approach clearly loses and is even significantly below the non-probabilistic ranking approach. This observation points out that the post-processing step, i.e. the way in which the results of the probabilistic rankings are post-processed, indeed affects the result.
Fig. 4. Query processing cost w.r.t. the degree of object uncertainty (query time [ms] vs. variance, for the strategies IT, TP, BS, TP+BS and DP): (a) sample size S = 10; (b) sample size S = 30.
5.3 Efficiency
In the next experiment, we evaluate the performance of our probabilistic ranking acceleration strategies proposed in Section 4.2 w.r.t. query processing time. We experimentally evaluate the performance of our algorithms by comparing the different proposed strategies against the straightforward solution which computes the query without any additional strategy. A summary of the competing methods is given below (cf. Section 4.2):
IT: Iterative fetching of the sample points from the sample point ranking ranks(qj) and computation of the probability table PT entries without any acceleration strategy.
TP: Table pruning strategy using the reduced table space.
BS: Bisection-based computation of the probability permutations.
DP: Dynamic-programming-based computation of the probability permutations.
Influence of Degree of Uncertainty: In the first experiment, we compare all strategies (including the straightforward solution) for probabilistic rankings on the artificial datasets with different degrees of uncertainty (variance). The evaluation of the query processing time of our approaches is illustrated in Figure 4. In particular, the differences between the computation strategies are depicted for two different sample sizes S = 10 and S = 30. Here, we used a database of 20 uncertain objects in a 10-dimensional feature space. Obviously, DP shows the best performance. Using only the recursive computation BS, the query processing time is quite high even for a low variance value. However, the query time increases only slightly when further increasing the variance. On the other hand, using only the table pruning strategy TP leads to a significant increase in computation time, in particular for high variances. The computation time of TP is much smaller for low variances compared to BS. The DP approach is not affected by an increasing variance. To sum up, using a combination of the TP and BS strategies results in quite good performance, but it is outperformed by the DP approach due to its polynomial computational complexity.
Scalability: Next, we evaluate the scalability based on the artificial datasets of different sizes. Here, we also considered different combinations of strategies. The experimental results are depicted in Figure 5.
Fig. 5. Comparison of the scalability of all strategies (uniform uncertainty distribution; query time [ms] vs. database size): (a) variance = 0.5; (b) variance = 5.0.
Obviously, the simple approach IT produces such
overwhelming cost compared to the other strategies that experiments for a database size above 30 objects are not applicable. It can clearly be observed that the combination TP+BS significantly outperforms TP alone. The scalability over a larger range of database sizes can be seen on the right-hand side of Figure 5. The basic TP approach is already not applicable for very small databases, even for a low degree of object uncertainty. It is interesting to see that for very small database sizes and a low degree of uncertainty TP+BS outperforms DP. Let us note that we would achieve considerably lower query processing cost if we limited the ranking output to a k ≪ N; a complete ranking of the database is usually not required anyway. In contrast to the other competitors, DP scales well even for large databases.
6 Conclusions
In this paper, we proposed an approach that efficiently computes probabilistic ranking queries on uncertain objects represented by sets of sample points. In particular, we proposed methods that are able to break down the high computational complexity required to compute, for an object o, the probability that o has the ranking position k (1 ≤ k ≤ N) according to the distance to a query object q. We showed theoretically and experimentally that our approach is able to speed up the query by several orders of magnitude. In the future we plan to apply probabilistic ranking queries to improve data mining applications.
References 1. Bernecker, T., Kriegel, H.-P., Renz, M.: Proud: Probabilistic ranking in uncertain databases. In: Lud¨ascher, B., Mamoulis, N. (eds.) SSDBM 2008. LNCS, vol. 5069, pp. 558–565. Springer, Heidelberg (2008) 2. B¨ohm, C., Pryakhin, A., Schubert, M.: Probabilistic Ranking Queries on Gaussians. In: Proc. of the 18th Int. Conf. on Scientific and Statistical Database Management (SSDBM 2006), pp. 169–178 (2006)
3. B¨ohm, C., Pryakhin, A., Schubert, M.: The Gauss-Tree: Efficient Object Identification of Probabilistic Feature Vectors. In: Proc. 22nd Int. Conf. on Data Engineering (ICDE 2006), Atlanta,GA, US, p. 9 (2006) 4. Cheng, R., Kalashnikov, D., Prabhakar, S.: Evaluating Probabilistic Queries over Imprecise Data. In: Proc. ACM SIGMOD Int. Conf. on Management of Data (SIGMOD 2003), San Diego, CA, pp. 551–562 (2003) 5. Cheng, R., Xia, Y., Prabhakar, S., Shah, R., Vitter, J.: Efficient Indexing Methods for Probabilistic Threshold Queries over Uncertain Data. In: Proc. 30th Int. Conf. on Very Large Databases (VLDB 2004), Toronto, Canada, pp. 876–887 (2004) 6. Hjaltason, G., Samet, H.: Ranking in Spatial Databases. In: Proc. 4th Int. Symposium on Large Spatial Databases, SSD 1995, Portland, USA, vol. 951, pp. 83–95 (1995) 7. Kriegel, H.-P., Kunath, P., Pfeifle, M., Renz, M.: Probabilistic similarity join on uncertain data. In: Li Lee, M., Tan, K.-L., Wuwongse, V. (eds.) DASFAA 2006. LNCS, vol. 3882, pp. 295–309. Springer, Heidelberg (2006) 8. Kriegel, H.-P., Kunath, P., Renz, M.: Probabilistic nearest-neighbor query on uncertain objects. In: Kotagiri, R., Radha Krishna, P., Mohania, M., Nantajeewarawat, E. (eds.) DASFAA 2007. LNCS, vol. 4443, pp. 337–348. Springer, Heidelberg (2007) 9. Kriegel, H.-P., Seeger, B., Schneider, R., Beckmann, N.: The R*-tree: An Efficient Access Method for Geographic Information System. In: Proc. Int. Conf. on Geographic Information Systems, Ottawa, Canada (1990) 10. Ljosa, V., Singh, A.K.: APLA: Indexing arbitrary probability distributions. In: Proc. of the 23rd Int. Conf. on Data Engineering, ICDE 2007 (2007) 11. Soliman, M., Ilyas, I., Chang, K.C.-C.: Top-k Query Processing in Uncertain Databases. In: Proc. 23rd Int. Conf. on Data Engineering (ICDE 2007), Istanbul, Turkey, pp. 896–905 (2007) 12. Tao, Y., Cheng, R., Xiao, X., Ngai, W., Kao, B., Prabhakar, S.: Indexing Multi-Dimensional Uncertain Data with Arbitrary Probability Density Functions. In: Proc. 31th Int. Conf. on Very Large Data Bases (VLDB 2005), Trondheim, Norway, pp. 922–933 (2005) 13. Yi, K., Li, F., Kollios, G., Srivastava, D.: Efficient Processing of Top-k Queries in Uncertain Databases. In: Proc. 24th Int. Conf. on Data Engineering (ICDE 2008), Canc´un, M´exico (2008)
Logical Foundations for Similarity-Based Databases Radim Belohlavek1,2 and Vilem Vychodil1,2 1
T. J. Watson School of Engineering and Applied Science Binghamton University–SUNY, P.O. Box 6000, Binghamton, NY 13902–6000, USA 2 Dept. Computer Science, Palacky University, Olomouc Tomkova 40, CZ-779 00 Olomouc, Czech Republic
[email protected],
[email protected]
Abstract. Extensions of relational databases which aim at utilizing various aspects of similarity and imprecision in data processing are widespread in the literature. A need for development of solid foundations for such extensions, sometimes called similarity-based relational databases, has repeatedly been emphasized by leading database experts. This paper argues that, contrary to what may be perceived from the literature, solid foundations for similarity-based databases can be developed in a conceptually simple way. In this paper, we outline such foundations and develop in detail a part of the facet related to similarity-based queries and relational algebra. The foundations are close in principle to Codd's foundations for relational databases, yet they account for the main aspects of similarity-based data manipulation. A major implication of the paper is that similarity-based data manipulation can be made an integral part of an extended, similarity-based, relational model of data, rather than glued atop the classic relational model in an ad hoc manner.
1 Introduction
Uncertainty, Similarity-Based Databases, and the Need for Foundations Uncertainty abounds in data management. In the past, numerous studies were devoted to uncertainty and imprecision management in database systems. Yet, the problem of uncertainty and imprecision management is considered a challenge with no satisfactory solutions obtained so far. As an example, the report from the Lowell debate by 25 senior database researchers [1] says “. . . current DBMS have no facilities for either approximate data or imprecise queries.” According to this report, the management of uncertainty and imprecision is one of the six currently most important research directions in database systems. Uncertainty has several facets. In this paper, we address one which gained considerable attention in the past, namely similarity and imprecision and related topics such as approximate/imprecise matches and similarity-based queries. Sometimes, a simple idea regarding foundational aspects has a groundbreaking impact on a field. Codd’s idea regarding the relational database model is an
Supported by institutional support, research plan MSM 6198959214.
example in the field of database systems: “A hundred years from now, I’m quite sure, database systems will still be based on Codd’s relational foundation.” [7, p. 1]. The main virtues of Codd’s model are due to its reliance on a simple yet powerful mathematical concept of a relation and first-order logic: “The relational approach really is rock solid, owing (once again) to its basis in mathematics and predicate logic.” [7, p. 138]. This paper presents a simple idea regarding foundations of similarity-based databases. Put in short, we argue that clear conceptual foundations for similarity-based databases, which have not yet been provided, can be developed in a purely logical and relational way by revisiting the classic Codd’s relational model of data. Most of the literature offers a view in which a similarity-based database is a classic relational database equipped with a “similarity-processing module”, which is as if glued atop the classic database and in which the relational database has its conceptual foundations in Codd’s relational model, while the “similarityprocessing module” is conceptually rooted in the theory of metric spaces. Under such a view, various fundamental classic concepts of relational databases, such as functional or other data dependencies, remain unchanged and thus untouched by similarity considerations, unless a more or less suitable merge of the relational and metric frameworks is proposed for pragmatic reasons which results in a concept with both the relational and the metric component. The concept of a similarity-based query is an example. Our Approach and Contribution: Contrary to this heterogeneous view, we offer a homogeneous view in which similarity is understood as a particular many-valued, or fuzzy, relation. This way we obtain a purely relational model of similaritybased databases, very much in the spirit of the classic Codd’s relational model. From the methodological point of view, we add the concept of similarity to the very basic concept of the classic Codd’s model, namely to the concept of a relation (data table), contrary to gluing a “similarity-processing module” atop Codd’s model, and obtain a concept of a ranked table over domains with similarities which is illustrated in Tab. 1. As Tab. 1 suggests, we take the classic concept of a relation, add similarity relations to domains (right and bottom part of Tab. 1) and add degrees, which we call ranks, to tuples of the relation (first column of the table in Tab. 1). Note that the similarity relations on numerical domains, such as year , can be defined (by users) using absolute distance and a simple scaling function in this example (similarity degree y1 ≈year y2 of years y1 and y2 is sy (|y1 − y2 |)). The similarities on domains can be seen as an additional subjective information which is supplied by users of the database system. Notice that the ranked table in Tab. 1 can be interpreted as a result of a similaritybased query “select all properties which are sold for approximately $250,000”. In general, every ranked table can be interpreted in such a way—this is an important feature which our model shares with Codd’s relational model, see also Remark 1. This naturally leads to a methodical development of our model because all the concepts derived from the concept of a ranked table over domains with similarities can and need to take similarity and approximate matches into account. As a result, degrees of similarity and approximate matches employed
Table 1. Ranked data table over domains with similarities

rank  name    type               bdrms  year  price    tax
1.0   Miller  Penthouse (P)      2      1994  250,000  2,100
1.0   Ortiz   Single Family (S)  3      1982  250,000  4,350
0.7   Nelson  Ranch (R)          4      1969  320,000  6,500
0.4   Lee     Single Family (S)  4      1975  370,000  8,350
0.2   Kelly   Log Cabin (L)      1      1956  85,000   1,250

Similarities on domains:
n1 ≈name n2 = 1 if n1 = n2, and 0 if n1 ≠ n2;
y1 ≈year y2 = sy(|y1 − y2|);
b1 ≈bdrms b2 = 1 − min(1, |b1 − b2|/2);
p1 ≈price p2 = sp(|p1 − p2|);
t1 ≈tax t2 = st(|t1 − t2|);

≈type   L    P    R    S
  L     1    0    0.3  0.2
  P     0    1    0    0.4
  R     0.3  0    1    0.8
  S     0.2  0.4  0.8  1

(The scaling functions sy, sp and st are given in the original by graphs mapping differences in years, price and tax, respectively, to [0, 1], with the depicted ranges 0–20 years, 50,000–200,000 in price and 3,000–7,500 in tax.)
in our model replace 1 (representing exact match, or equality) and 0 (mismatch, unequality) of the classic Codd’s model. In the classic model, 1 and 0 are manipulated according to the rules of classical first-order logic (in this sense, Codd’s model is based on classical logic). In order to manipulate the degrees of similarity and approximate matches in very much the same way 1 and 0 are manipulated in Codd’s model, we employ the recently developed calculus of first-order fuzzy logic [9,10]. This is an important step toward a transparent model with solid logical foundations. Namely, the resultant model is conceptually clear and simple, yet powerful. For example, functional dependencies in the new model naturally take similarities on domains into account and have a simple axiomatization using Armstrong-like rules; relational algebra in the new model automatically offers similarity-based queries; several seemingly non-relational concepts of similaritybased databases turn out to be relational in the new view—a nice example is the top k query which, contrary to what one can find in the literature, becomes a relational query in the new model, and is thus on the same conceptual level as the other similarity-based and classic relational queries. Thus, we propose a conservative step back from the currently available view on similarity-based databases which has two facets, namely relational and metric, to a purely relational view. The paper is meant to be a programmatic contribution which outlines the foundations of similarity-based databases, with emphasis on relational algebra and similarity-based queries. Content of This Paper: In Section 2, we provide an overview of principal concepts from fuzzy logic. Section 3 presents the concept of a ranked table with domains over similarities and overviews some of our previous results. Section 4 presents basic traits of a relational algebra in our model, with emphasis on similaritybased operations. Section 5 surveys the future research. Previous Work and Related Approaches: Our previous work on this topic includes [2,3], where we presented functional dependencies for domains with similarities, their completeness theorem, a procedure for extracting a non-redundant basis
of such functional dependencies from data, and a sketch of relational algebra with implementation considerations. These papers can be seen as developing technically the model, proposed from a foundational perspective in this paper. Another paper is [4] where we provided a detailed critical comparison of our model with related ones proposed previously in the literature. Namely, the idea of adding similarity to domains in the relational model appeared in several papers, but as a rule in an ad hoc manner, without proper logical foundations and comprehensive treatment, see e.g. [5,6,13,14,15] for selected papers.
2 Fuzzy Logic as the Underlying Logic
We use fuzzy logic [9,10] to represent and manipulate truth degrees of propositions like "u is similar to v", cf. also [8]. Fuzzy logic is a many-valued logic with a truth-functional semantics, which is an important property because, as a result, it does not depart too much from the principles of classical logic (truth functionality has important mathematical and computational consequences). Fuzzy logic allows us to process (aggregate) truth degrees in a natural way. For instance, consider a query "show all properties which are sold for about $250,000 and are built around 1970". According to Tab. 1, the property owned by Nelson satisfies the subqueries concerning price and year to degrees 0.7 and 0.9, respectively. Then, we combine the degrees using a fuzzy conjunction connective ⊗ to get a degree 0.7 ⊗ 0.9 to which the property owned by Nelson satisfies the conjunctive query. When using fuzzy logic, we have to pick an appropriate scale L of truth degrees (they serve as degrees of similarity, degrees to which a tuple matches a query, etc.) and appropriate fuzzy logic connectives (conjunction, implication, etc.). We follow a modern approach in fuzzy logic in that we take an arbitrary partially ordered scale ⟨L, ≤⟩ of truth degrees and require the existence of infima and suprema (for technical reasons, to be able to evaluate quantifiers). Furthermore, we consider an adjoint pair of a fuzzy conjunction ⊗ and the corresponding fuzzy implication → (called residuum) and require some further natural conditions, see [9]. Adjointness is crucial from the point of view of mathematical properties of our model. This way, we obtain a structure L = ⟨L, ≤, ⊗, →, ...⟩ of truth degrees with logical connectives. Such an approach, even though rather abstract, is easier to handle theoretically and supports the symbolical character of our model. Moreover, the various particular logical connectives typically used in fuzzy logic applications are particular cases of our structure L. Technically speaking, our structure of truth degrees is assumed to be a complete residuated lattice L = ⟨L, ∧, ∨, ⊗, →, 0, 1⟩, see [9,10] for details. A favorite choice of L is L = [0, 1] or a subchain of [0, 1]. Important examples of adjoint pairs of operations are Łukasiewicz (a ⊗ b = max(a + b − 1, 0), a → b = min(1 − a + b, 1)) and Gödel (a ⊗ b = min(a, b), a → b = 1 if a ≤ b, a → b = b otherwise). For instance, 0.7 ⊗ 0.9 = 0.6 if ⊗ is the Łukasiewicz conjunction; 0.7 ⊗ 0.9 = 0.7 if ⊗ is the Gödel one. Further logical connectives are often considered in fuzzy logic, such as the truth-stressing hedges (they model linguistic modifiers such as "very") [10], i.e. particular monotone unary functions ∗ : L → L. Two
boundary cases of hedges are (i) identity, i.e. a∗ = a (a ∈ L); (ii) globalization: 1∗ = 1, and a∗ = 0 (a ≠ 1). Note that a special case of a complete residuated lattice is the two-element Boolean algebra 2 of classical (bivalent) logic. Having L, we define the usual notions: an L-set (fuzzy set) A in universe U is a map A : U → L, A(u) being interpreted as "the degree to which u belongs to A". The operations with L-sets are defined componentwise. Binary L-relations (binary fuzzy relations) between X and Y can be thought of as L-sets in the universe X × Y. A fuzzy relation E in U is called reflexive if for each u ∈ U we have E(u, u) = 1; symmetric if for each u, v ∈ U we have E(u, v) = E(v, u). We call a reflexive and symmetric fuzzy relation a similarity. We often denote a similarity by ≈ and use an infix notation, i.e. we write (u ≈ v) instead of ≈(u, v).
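As an illustration of the connectives just described, the following sketch (in Python; the function names are ours, not the paper's) implements the two adjoint pairs mentioned above together with the globalization hedge on L = [0, 1].

```python
# Lukasiewicz adjoint pair on L = [0, 1]
def luk_conj(a, b):      # fuzzy conjunction (t-norm)
    return max(a + b - 1.0, 0.0)

def luk_impl(a, b):      # its residuum
    return min(1.0 - a + b, 1.0)

# Goedel adjoint pair on L = [0, 1]
def godel_conj(a, b):
    return min(a, b)

def godel_impl(a, b):
    return 1.0 if a <= b else b

def globalization(a):    # hedge: 1* = 1, a* = 0 otherwise
    return 1.0 if a == 1.0 else 0.0

# the example from the text: degrees 0.7 (price) and 0.9 (year)
print(luk_conj(0.7, 0.9))    # ~0.6
print(godel_conj(0.7, 0.9))  # 0.7
```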
3 Ranked Tables over Domains with Similarities
This section presents a concept of a ranked table over domains with similarities. We use Y to denote a set of attributes (attribute names) and denote the attributes from Y by y, y1 , . . . ; L denotes a fixed structure of truth degrees and connectives. Definition 1. A ranked data table over domains with similarity relations (with Y and L) is given by – domains: for each y ∈ Y , Dy is a non-empty set (domain of y, set of values of y); – similarities: for each y ∈ Y , ≈y is a binary fuzzy relation (called similarity) in Dy (i.e. a mapping ≈y : Dy × Dy → L) which is reflexive (i.e. u ≈y u = 1) and symmetric (u ≈y v = v≈y u); – ranking: for each tuple t ∈ y∈Y Dy , there is a degree D(t) ∈ L (called rank of t in D) assigned to t. Remark 1. (a) D can be seen as a table with rows and columns corresponding to tuples and attributes, like in Tab. 1. Ranked tables with similarities represent a simple concept which extends the concept of a table (relation) of the classical relational model by two features: similarity relations and ranks. (b) Formally, D is a fuzzy relation between domains Dy (y ∈ Y ). t[y] denotes a value from Dy of tuple t on attribute y. We require that D has a finite support, i.e. there is only a finite number of tuples t with a non-zero degree D(t). If L = {0, 1} and if each ≈y is ordinary equality, the concept of a ranked data table with similarities coincides with that of a data table over set Y of attributes (relation over a relation scheme Y ) of a classic model. (c) Rank D(t) is interpreted as the degree to which a tuple t satisfies requirements posed by a similarity-query. A table D representing just stored data, i.e. data prior to querying, has all the ranks equal to 1, i.e. D(t) = 1 for each tuple t. Hence again, D can be thought of as a result of a query, namely, the query “show all stored data”. Therefore, a general interpretation is: a ranked table over domains with similarities is a result of a similarity-based query. Thus in principle, the role of ranked tables over domains with similarities is the same as the role of tables (relations) in the classic relational model.
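A ranked data table over domains with similarities can be represented very directly as a data structure. The following sketch (Python; the names and representation choices are ours and merely illustrate Definition 1) stores the similarity relation of each attribute together with a finite map from tuples to ranks.

```python
class RankedTable:
    """A ranked data table over domains with similarities (cf. Definition 1).

    attributes : list of attribute names (the set Y)
    sims       : dict mapping attribute y to a reflexive, symmetric
                 similarity function Dy x Dy -> L
    ranks      : dict mapping a value tuple (ordered as `attributes`)
                 to its rank D(t); tuples not listed have rank 0
    """
    def __init__(self, attributes, sims, ranks):
        self.attributes = attributes
        self.sims = sims
        self.ranks = ranks

    def rank(self, t):
        return self.ranks.get(t, 0.0)

# a two-attribute toy instance: crisp similarity on names,
# scaled similarity on prices
name_sim = lambda a, b: 1.0 if a == b else 0.0
price_sim = lambda a, b: max(0.0, 1.0 - abs(a - b) / 150000.0)
D = RankedTable(["name", "price"],
                {"name": name_sim, "price": price_sim},
                {("Miller", 250000): 1.0, ("Kelly", 85000): 0.2})
```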
In [2], we introduced functional dependencies for ranked tables over domains with similarities, their Armstrong-like rules, and completeness theorems. From the technical point of view, this paper demonstrates well the advantage of using (formal) fuzzy logic. The main point is that even though degrees of similarity are taken into account and several aspects thus become more involved (e.g. proofs regarding properties of functional dependencies), functional dependencies over domains with similarities are conceptually simple and feasible both from the theoretical and computational point of view.
4 Relational Algebra and Similarity-Based Queries
In the original Codd's relational model, the relational algebra is based on the calculus of classical relations and on first-order predicate logic. In the same spirit, we introduce a relational algebra for our model with similarities. It will be based on the calculus of fuzzy relations and will have foundations in first-order predicate fuzzy logic [9,10]. A development of the algebra in full is beyond the scope of this paper. Therefore, we present selected groups of relational operations. For each group we present the operations, their properties, and demonstrate them using illustrative examples. At the end, we briefly comment on further topics and results.
4.1 Basic Relational Operations
This group of operations contains our counterparts to the basic Boolean operations of Codd's model: union, intersection, relational difference, etc. These operations emerge naturally because in our model we actually replace the two-element Boolean algebra by a general scale L = ⟨L, ⊗, →, ∧, ∨, 0, 1⟩ of truth degrees (residuated lattice). For instance, the classic union D1 ∪ D2 of data tables D1 and D2 is defined as D1 ∪ D2 = {t | t ∈ D1 or t ∈ D2} where "or" stands for Boolean disjunction. Replacing the Boolean disjunction by ∨ (supremum from L; ∨ is considered a truth function of disjunction in fuzzy logic) we can express the rank (D1 ∪ D2)(t) of t in the union D1 ∪ D2 by D1(t) ∨ D2(t), i.e.

(D1 ∪ D2)(t) = D1(t) ∨ D2(t).    (1)

In a similar way, we define two kinds of intersections of ranked data tables:

(D1 ∩ D2)(t) = D1(t) ∧ D2(t),    (2)
(D1 ⊗ D2)(t) = D1(t) ⊗ D2(t).    (3)

D1 ∩ D2 and D1 ⊗ D2 are called the ∧-intersection and ⊗-intersection of ranked data tables, respectively.
Remark 2. (a) Notice that since both D1 and D2 have finite supports (see Remark 1 (b)), the results of union and both the intersections have finite supports as well, i.e., they represent (finite) ranked data tables. (b) If D1 is a result of query Q1 and D2 is a result of query Q2, then (D1 ∪ D2)(t) should be interpreted as "a degree to which t matches Q1 or t matches Q2".
In most situations, ∨ coincides with maximum. Therefore, (D1 ∪ D2)(t) is just the maximum of D1(t) and D2(t). (c) Operations (1)–(3) generalize the classical two-valued operations in the following sense. If the underlying complete residuated lattice L equals 2 (the two-element Boolean algebra) then (1)–(3) become the Boolean operations.
A new relational operation is obtained based on the residuum →, which represents a "fuzzy implication". In this case, however, we cannot put (D1 → D2)(t) = D1(t) → D2(t), as the resulting ranked data table may be infinite. Indeed, if the relation scheme of D1 contains at least one attribute y with an infinite domain Dy, there are infinitely many tuples t such that (D1 → D2)(t) = 1. This is due to the fact that if t[y] is not a value of any tuple with a nonzero rank in D1 then D1(t) = 0 and consequently (D1 → D2)(t) = 0 → D2(t) = 1. This problem is analogous to the problem of relational complements in Codd's model [12] where an infinite domain can also produce infinite relations. We overcome the problem by considering only tuples whose values belong to so-called active domains of D1, see [12]. For any ranked data table D over attributes Y and an attribute y ∈ Y, adom(D, y) ⊆ Dy is defined as follows:

adom(D, y) = {d ∈ Dy | there is a tuple t such that D(t) > 0 and t[y] = d}.

Hence, the active domain adom(D, y) of y in D is the set of all values of attribute y which appear in D. Furthermore, we let

adom(D) = ∏_{y∈Y} adom(D, y).    (4)

Note that adom(D) defined by (4) is a finite ranked data table in its own right. Using (4), we define an active residuum D1 ⊸ D2 of D1 in D2 by

(D1 ⊸ D2)(t) = D1(t) → D2(t),  for each t ∈ adom(D1),    (5)

and (D1 ⊸ D2)(t) = 0 otherwise. Notice that (D1 ⊸ D2)(t) can be seen as a degree to which it is true that "if t matches Q1 then t matches Q2". The following proposition shows basic properties of the operations defined so far.
Proposition 1. For any ranked data tables D1, D2, D3,

D1 ⊗ (D2 ∪ D3) = (D1 ⊗ D2) ∪ (D1 ⊗ D3),    (6)
D1 ⊸ (D2 ∩ D3) = (D1 ⊸ D2) ∩ (D1 ⊸ D3),    (7)
adom(D1 ∩ D2) ∩ ((D1 ∪ D2) ⊸ D3) = (D1 ⊸ D3) ∩ (D2 ⊸ D3),    (8)
adom(D1 ⊗ D2) ∩ (D1 ⊸ (D2 ⊸ D3)) = (D1 ⊗ D2) ⊸ D3.    (9)

Proof. For illustration, we prove (6). By definition, (D1 ⊗ (D2 ∪ D3))(t) = D1(t) ⊗ (D2 ∪ D3)(t) = D1(t) ⊗ (D2(t) ∨ D3(t)). Using a ⊗ ⋁_i b_i = ⋁_i (a ⊗ b_i), which is true in any complete residuated lattice, we get (D1 ⊗ (D2 ∪ D3))(t) = (D1(t) ⊗ D2(t)) ∨ (D1(t) ⊗ D3(t)) = (D1 ⊗ D2)(t) ∨ (D1 ⊗ D3)(t) = ((D1 ⊗ D2) ∪ (D1 ⊗ D3))(t). (7) follows from a → ⋀_i b_i = ⋀_i (a → b_i). In order to prove (8) and (9) we have to take care about the different active domains on both sides of the equalities. Details are postponed to the full version of this paper.
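A direct, tuple-dictionary implementation of these operations is straightforward. The sketch below (Python; our own illustration, not the authors' code) represents a ranked table as a dict from value tuples to ranks, uses the Gödel connectives, and computes the active residuum only over the tuples of the active domain, as required by (5).

```python
from itertools import product

def union(D1, D2):
    keys = set(D1) | set(D2)
    return {t: max(D1.get(t, 0.0), D2.get(t, 0.0)) for t in keys}

def meet_intersection(D1, D2):      # the /\-intersection (2)
    return {t: min(D1.get(t, 0.0), D2.get(t, 0.0)) for t in set(D1) & set(D2)}

def tensor_intersection(D1, D2):    # the (x)-intersection (3); under Goedel it equals min
    return {t: min(D1.get(t, 0.0), D2.get(t, 0.0)) for t in set(D1) & set(D2)}

def residuum(a, b):                 # Goedel residuum on truth degrees
    return 1.0 if a <= b else b

def active_residuum(D1, D2, n_attrs):
    # active domain of D1: per-position values appearing with a nonzero rank
    adom = [set(t[i] for t, r in D1.items() if r > 0) for i in range(n_attrs)]
    return {t: residuum(D1.get(t, 0.0), D2.get(t, 0.0))
            for t in product(*adom)}
```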
The only unary Boolean operation that is considered in the usual model is the active complement of a data table. In our model, there are other nontrivial unary operations like hedges (see Section 2) and shifts. For any ranked data table D, 0 ≠ a ∈ L, and a unary truth function ∗ : L → L, we define ∼D, D∗, and a→D by

(∼D)(t) = D(t) → 0,  for each t ∈ adom(D),    (10)
(a→D)(t) = a → D(t),  for each t,    (11)
(D∗)(t) = D(t)∗,  for each t.    (12)
∼D and a→D are called an active complement and an a-shift of D, respectively. D∗ is a data table with ranks obtained by applying ∗ to the ranks of the original data table.
Remark 3. (a) Notice that ∼D equals D ⊸ ∅D where ∅D is an empty ranked data table, i.e. we have expressed complementation based on residuation and an empty table. This technique is widely used in two-valued logic as well (e.g., a formula ¬ϕ is logically equivalent to ϕ ⇒ 0 where 0 is a nullary connective representing falsity). (b) Residuated lattices are, in general, weaker structures than Boolean algebras, i.e. not all laws satisfied by Boolean algebras are satisfied by residuated lattices. For instance, the law of excluded middle a ∨ (a → 0) = 1 is satisfied by a residuated lattice L iff L is a Boolean algebra. Therefore, some properties of the operations from the original Codd's model are not preserved in our model.
Example 1. Since a → b = 1 iff a ≤ b, (a→D)(t) defined by (11) can be interpreted as a degree to which "t matches (a query) at least to degree a". For instance, if we consider 0.7→D with D from Tab. 1, the resulting table

rank  name    type           bdrms  year  price    tax
1.0   Miller  Penthouse      2      1994  250,000  2,100
1.0   Nelson  Ranch          4      1969  320,000  6,500
1.0   Ortiz   Single Family  3      1982  250,000  4,350
0.7   Lee     Single Family  4      1975  370,000  8,350
0.5   Kelly   Log Cabin      1      1956  85,000   1,250

represents the answer to the query "show all properties for sale where price = 250,000 at least to degree 0.7, i.e. where price is more or less equal to 250,000." Shifts play an important role in our model because they enable us to select tuples with sufficiently large ranks (in a logically clean way).
4.2 Derived Relational Operations
In this section we briefly discuss operations which can be derived from those presented in the previous section. Recall that the original Codd's model considers the relational difference D1 − D2 such that t ∈ D1 − D2 iff t ∈ D1 and t ∉ D2. In our setting there are multiple choices to define a generalization of this operation. Our intention is to choose the "best definition" which behaves naturally with respect to the other operations, the main criteria being expressiveness and feasibility of the resulting algebra.
In case of D1 − D2, we can proceed as follows: in the ordinary model, t ∈ D1 − D2 iff it is not true that if t ∈ D1 then t ∈ D2. Thus, we can define

D1 − D2 = ∼(D1 ⊸ D2),    (13)

i.e., (D1 − D2)(t) = (D1(t) → D2(t)) → 0 for any t (as one can check, the active domain can be disregarded in this case). Clearly, if L is 2, (13) is equivalent to the ordinary difference of relations.
Our relational algebra contains operations which either have no counterparts within the classic operations or whose counterparts are trivial (e.g. a-shifts introduced in Section 4.1). Another example of a useful operation is a so-called a-cut. For a ranked table D and a ∈ L, an a-cut of D is a ranked table ^aD defined by

(^aD)(t) = 1 if D(t) ≥ a, and (^aD)(t) = 0 otherwise.    (14)

That is, ^aD is a ("non-ranked") table which contains just the tuples of D with ranks greater than or equal to a. This is a useful operation for manipulation with ranked tables as it allows the user to select only a part of a query result given by threshold a. An a-cut is indeed a derived operation because ^aD = (a→D)∗ where ∗ is globalization, see Section 2. Note that in combination with intersection, we can use the a-cut to get the part of D with ranks at least a. Namely, we can put above(D, a) = D ∩ ^aD. Thus, (above(D, a))(t) = D(t) if D(t) ≥ a, and (above(D, a))(t) = 0 otherwise.
Example 2. above(D, 0.7) in case of D from Tab. 1 is the following:

rank  name    type           bdrms  year  price    tax
1.0   Miller  Penthouse      2      1994  250,000  2,100
1.0   Ortiz   Single Family  3      1982  250,000  4,350
0.7   Nelson  Ranch          4      1969  320,000  6,500

The result of ^{0.7}D is the same as above(D, 0.7) except for the ranks all being 1.
Another operation which is derivable in our model is top k which has gained considerable interest recently, see [8] and also [11]. top k (D) contains first k ranked tuples according to rank ordering of D. Therefore, the result of top k (D) is a ranked data table containing k best matches of a query (if there are less than k ranks in D then top k (D) = D; and top k (D) includes also the tuples with rank equal to the rank of the k-th tuple). It can be shown that top k is a derivable operation in our relational model. Namely, (top k (D))(t) = D(t) ⊗ (Q
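The semantics of the a-cut, above and top-k operations described above can be mirrored directly on the tuple-dictionary representation used earlier; the sketch below is our own illustration (it implements the tie-keeping behaviour of top-k as described in the text, not the derived algebraic formula).

```python
def a_cut(D, a):
    # ^aD: keep tuples with rank >= a, all ranks set to 1
    return {t: 1.0 for t, r in D.items() if r >= a}

def above(D, a):
    # above(D, a) = D /\ ^aD: keep tuples with rank >= a, ranks unchanged
    return {t: r for t, r in D.items() if r >= a}

def top_k(D, k):
    # k best matches; tuples tied with the k-th rank are included
    if len(D) <= k:
        return dict(D)
    ranks = sorted(D.values(), reverse=True)
    threshold = ranks[k - 1]
    return above(D, threshold)

# usage: top_k(result, 4) keeps the four best-ranked tuples of a query result
# (plus any tuples sharing the fourth-best rank)
```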
4.3 Projections of Ranked Data Tables
The role of projection in our model is the same as in Codd's model. Projection produces a ranked data table with tuples containing a subset of attributes from the original ranked data table. Notice that we may have a situation such that D(t) ≠ D(t′) and both tuples t and t′ have common values on all attributes from A ⊆ Y, i.e. t[A] = t′[A]. Therefore, the rank of t in the projection of D onto A will be computed as a supremum of the ranks of all tuples in D which agree with t on the attributes from A. Thus, the projection πA(D) of D onto A ⊆ Y is defined by

(πA(D))(t) = ⋁_{s[A]=t} D(s),  for each t ∈ ∏_{y∈A} Dy.    (15)

As in the classic model, if two (or more) projections are applied in a row, the last one subsumes the previous ones. Therefore, for any A ⊆ B ⊆ Y, πA(πB(D)) = πA(D). The following proposition shows the relationship of projections w.r.t. our basic relational operations.
Proposition 2. For any D1, D2, D, A ⊆ Y, a ∈ L,

πA(D1 ∪ D2) = πA(D1) ∪ πA(D2),   πA(D∗) ⊆ πA(D),    (16)
πA(D1 ∩ D2) ⊆ πA(D1) ∩ πA(D2),   πA(a→D) ⊆ a→πA(D),    (17)
πA(D1 ⊗ D2) ⊆ πA(D1) ⊗ πA(D2),   πA(a ⊗ D) = a ⊗ πA(D).    (18)

Proof. We prove (18). Observe that (πA(D1 ⊗ D2))(t) = ⋁_{s[A]=t} (D1 ⊗ D2)(s) = ⋁_{s[A]=t} (D1(s) ⊗ D2(s)) ≤ ⋁_{s1[A]=t} ⋁_{s2[A]=t} (D1(s1) ⊗ D2(s2)) = (⋁_{s1[A]=t} D1(s1)) ⊗ (⋁_{s2[A]=t} D2(s2)) = (πA(D1))(t) ⊗ (πA(D2))(t) = (πA(D1) ⊗ πA(D2))(t). The rest can be proved by similar arguments.
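In the tuple-dictionary representation, the projection (15) is a grouped supremum over the tuples that agree on the projected attributes; the following sketch (our illustration, using positional attribute indices rather than names) shows this directly.

```python
def project(D, positions):
    """pi_A(D): keep only the attribute positions in `positions`,
    taking the supremum (here: max) of ranks over tuples that
    coincide on those positions."""
    result = {}
    for t, r in D.items():
        key = tuple(t[i] for i in positions)
        result[key] = max(result.get(key, 0.0), r)
    return result

# usage: project a (name, type, price) table onto (type, price)
D = {("Miller", "Penthouse", 250000): 1.0,
     ("Ortiz", "Single Family", 250000): 1.0,
     ("Nelson", "Ranch", 320000): 0.7}
print(project(D, [1, 2]))
```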
4.4 Similarity-Based Selection
Relational operations considered in previous sections did not utilize similarities of values on domains. We now turn our attention to the similarity-based selection which is a counterpart to ordinary selection. Our selection can be seen as an operation which selects from a data table all tuples which approximately match a given condition. The condition can be in the form of an equality "y = c", saying that the value of attribute y should be similar (i.e., approximately equal) to c. Clearly, a tuple t matches y = c to a degree to which t[y] is similar to c, which equals t[y] ≈y c. This is where the similarity ≈y on the domain of y comes into play. In general, the selection formula which poses a restriction on tuples may be more complex than an atomic equality. In this paper, we will consider selection formulas as follows: each expression "p = q" where p, q are variables or (constants for) values (in the same domains) is a selection formula; each (constant for) a truth degree a ∈ L is a selection formula; if ϕ and ψ are selection formulas then (ϕ & ψ), (ϕ ∨ ψ), (ϕ ∧ ψ), (ϕ ⇒ ψ), and ϕ∗ are selection formulas.
Since fuzzy logic [10] is truth-functional, there is a regular way to define a degree ||ϕ||t ∈ L to which tuple t matches a selection formula ϕ as follows: if ϕ is a (constant for) a truth degree a then ||ϕ||t = a; if ϕ equals “y = c” where y is a variable and c ∈ Dy , ||ϕ||t = t[y] ≈y c (analogously for ϕ being an equality of the form variable = variable, constant = variable, constant = constant); if ϕ equals (ϑ ⇒ χ) then ||ϕ||t = ||ϑ||t → ||χ||t and similarly for other connectives &, , and their truth functions ⊗, ∨, ∧. Remark 5. Our approach strictly adheres to mathematical fuzzy logic. Selection formulas are defined as well-formed formulas. Such formulas are a part of the syntax of our relational algebra. In order to interpret such formulas, we supply a semantic component—a tuple of values. Therefore, we have the usual distinction between a syntax (a formula) and a semantics (interpretation) as we know it from the classical Boolean logic. The only difference is in that we use a more general structure of truth degrees and, therefore, the degree ||ϕ||t to which t matches ϕ may be an arbitrary value in L. For instance, ||ϕ||t = 1 means a “perfect match”, ||ϕ||t = 0.9 is an “almost perfect match”, ||ϕ||t = 0.7 can be seen as a “more or less good match”, ||ϕ||t = 0 is “not a match at all”. The strong connection to mathematical fuzzy logic we have allows us to utilize both the symbolic level of our calculus where we can deal with formulas as with symbolic expressions and the numerical level enabling us to deal with ranks which are directly interpretable as numerical degrees of matches. The selection σϕ (D) of tuples in D matching ϕ is defined by σϕ (D) = D(t) ⊗ ||ϕ||t .
(19)
Considering D as a result of query Q, the rank of t in σϕ(D) can be interpreted as a degree to which "t matches the query Q and in addition it matches the selection formula ϕ". Hence, if D is a table where all rows have ranks 1, (σϕ(D))(t) = ||ϕ||t, i.e. it is a degree to which "t matches the condition posed by ϕ". Selection satisfies several properties which are analogous to properties of the classic selection. Due to the limited scope, we present just the following.
Proposition 3. For any D1, D2, D, A ⊆ Y, and ϕ,

σϕ(D1 ∪ D2) = σϕ(D1) ∪ σϕ(D2),   σϕ(D1 ∩ D2) ⊆ σϕ(D1) ∩ D2,    (20)
σϕ(D1 ⊗ D2) = D1 ⊗ σϕ(D2),   σϕ(D1 ⊗ D2) = σϕ(D1) ⊗ D2.    (21)
Proof. In case of σϕ (D1 ∩ D2 ) ⊆ σϕ (D1 ) ∩ D2 , σϕ (D1 ∩ D2 )(t) = (D1 ∩ D2 )(t) ⊗ ||ϕ||t = (D1 (t) ∧ D2 (t)) ⊗ ||ϕ||t . Now, (D1 (t) ∧ D2 (t)) ⊗ ||ϕ||t ≤ D1 (t) ⊗ ||ϕ||t and (D1 (t) ∧ D2 (t)) ⊗ ||ϕ||t ≤ D2 (t). Putting the latter inequalities together, we get (D1 (t) ∧ D2 (t)) ⊗ ||ϕ||t ≤ (D1 (t) ⊗ ||ϕ||t ) ∧ D2 (t) = (σϕ (D1 ) ∩ D2 )(t). The other equalities can be shown by analogous arguments. Our selection preserves important properties with respect to nested selections and projections [12]. It can be shown that for any D, A ⊆ Y , ϕ, ψ, and ϑ which does not contain attributes from A, πA (σϑ (D)) = σϑ (πA (D)), σϕ (σψ (D)) = σϕ&ψ (D) = σψ&ϕ (D) = σψ (σϕ (D)).
The latter equality can be extended to arbitrarily many selections: σϕ1 & ϕ2 & ··· & ϕk(D) = σϕ1(σϕ2(···(σϕk(D))···)), saying that the selection of tuples matching the conjunctive formula ϕ1 & ϕ2 & ··· & ϕk is tantamount to performing a sequence of selections matching the ϕi. If each ϕi is an equality yi = di then, by previous observations, (σy1=d1 & ··· & yk=dk(D))(t) = D(t) ⊗ (t[y1] ≈y1 d1) ⊗ ··· ⊗ (t[yk] ≈yk dk), i.e. it is a degree of "t matches Q and the value of t in y1 is similar to d1 and ··· and the value of t in yk is similar to dk".
Example 3. Let D2 be a data table of possible buyers of properties:

rank  name    type           bdrms  price    score
1.0   Adams   Single Family  3      250,000  628
1.0   Black   Single Family  3      325,000  769
1.0   Chang   Residential    4      300,000  535
1.0   Davis   Condominium    1      200,000  567
1.0   Enke    Ranch          3      240,000  635
1.0   Flores  Penthouse      2      200,000  659

The attributes price and credit score contain the amount which the buyer is willing to spend and his FICO credit score, respectively. Let us assume that we use a reasonable similarity on the domain of credit scores which agrees with how agents perceive different credit score values (i.e., we assume that the similarities are user-supplied). The result of the selection of tuples with credit score approximately equal to 600 is σscore=600(D) and it may be the following:

rank  name    type           bdrms  price    score
0.9   Adams   Single Family  3      250,000  628
0.9   Davis   Condominium    1      200,000  567
0.9   Enke    Ranch          3      240,000  635
0.8   Chang   Residential    4      300,000  535
0.8   Flores  Penthouse      2      200,000  659
0.4   Black   Single Family  3      325,000  769

Selection σscore=600 & type="Single Family" & price=250,000(D) uses a compound formula containing three approximate equalities connected by conjunctions. The result of such a selection of tuples with credit score approximately 600, housing type similar to "Single Family", and price approximately 250,000 is

rank  name   type           bdrms  price    score
0.9   Adams  Single Family  3      250,000  628
0.6   Enke   Ranch          3      240,000  635
0.2   Chang  Residential    4      300,000  535

Selection top4(σ(0.7⇒(score=600)) & type="Single Family" & price=250,000(D)) yields

rank  name   type           bdrms  price    score
1.0   Adams  Single Family  3      250,000  628
0.7   Enke   Ranch          3      240,000  635
0.4   Chang  Residential    4      300,000  535
0.3   Black  Single Family  3      325,000  769
Notice that in the last case, our requirement on credit score has been relaxed using a 0.7-shift. Indeed, the subformula 0.7 ⇒ (score = 600) can be read "score is similar to 600 at least to degree 0.7". As we can see, the ranks in the last table are higher than the ranks in the previous table which corresponds to the fact that 0.7 ⇒ (score = 600) represents a weaker constraint than score = 600. This example demonstrates that truth degrees can be used as natural weights in queries allowing us to put more emphasis on particular requirements. Note that in addition to similarity, we may consider other approximate comparators like "approximately greater", etc.
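To make the evaluation of selection formulas concrete, here is a small sketch (our own formulation in Python; formulas are represented as nested tuples and only the connectives needed for the examples above are covered) of computing ||ϕ||_t and the selection σϕ(D).

```python
def eval_formula(phi, row, sims, conj, impl):
    """Degree ||phi||_t of a selection formula at a row (a dict of values).
    phi is ('eq', attr, const), ('const', a), ('and', p, q) or ('imp', p, q);
    sims maps an attribute name to its domain similarity function."""
    kind = phi[0]
    if kind == 'eq':
        _, attr, const = phi
        return sims[attr](row[attr], const)
    if kind == 'const':
        return phi[1]
    if kind == 'and':
        return conj(eval_formula(phi[1], row, sims, conj, impl),
                    eval_formula(phi[2], row, sims, conj, impl))
    if kind == 'imp':
        return impl(eval_formula(phi[1], row, sims, conj, impl),
                    eval_formula(phi[2], row, sims, conj, impl))
    raise ValueError(kind)

def select(rows, phi, sims, conj, impl):
    # sigma_phi: combine each rank with the match degree via the conjunction
    return [(row, conj(r, eval_formula(phi, row, sims, conj, impl)))
            for row, r in rows]

# e.g. the weighted condition (0.7 => (score = 600)) & (price = 250000)
phi = ('and', ('imp', ('const', 0.7), ('eq', 'score', 600)),
              ('eq', 'price', 250000))
```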
4.5 Similarity-Based Join
A similarity-based join is a fundamental operation used to combine information from two tables into a single one based on similarity of values of two columns or, in a more general setting, on a comparator based on a selection formula. Codd's model has several notions of a join. In our case, the family of operations generalizing joins is even wider. Because of the limited scope, we discuss one particular join operation generalizing the classic theta-join. Recall that the ordinary theta-join can be seen as a selection from a Cartesian product of two data tables. Therefore, one option is to define a join in our setting based on this property. For ranked data tables D1 and D2 with disjoint relation schemes we define a Cartesian product D1 × D2 by

(D1 × D2)(st) = D1(s) ⊗ D2(t),  where D1(s) > 0 and D2(t) > 0.    (22)

If D1 and D2 are results of queries Q1 and Q2, respectively, the rank of st in D1 × D2 is a degree to which "s matches Q1 and t matches Q2". Obviously, if L is the two-element Boolean algebra, D1 × D2 becomes the ordinary Cartesian product of relations. Using (22), we can define a join D1 ⋈ϕ D2 of D1 and D2 by a selection formula ϕ which may contain attributes from both D1 and D2 as follows:

D1 ⋈ϕ D2 = σϕ(D1 × D2).    (23)

Using the previous definitions, (23) is equivalent to

(D1 ⋈ϕ D2)(st) = D1(s) ⊗ D2(t) ⊗ ||ϕ||st.    (24)

If ϕ is of the form y1 = y2 where y1 and y2 are attributes from D1 and D2 defined on the same domain with similarity, then

(D1 ⋈y1=y2 D2)(st) = D1(s) ⊗ D2(t) ⊗ (s[y1] ≈y1 t[y2]),    (25)

which can be seen as a generalization of the classic equi-join.
Remark 6. Note that unlike the classic equi-join, D1 ⋈y1=y2 D2 includes both the attributes y1 and y2 because we are performing a similarity-based join, i.e. the join runs not only over tuples with equal values of y1 and y2 but also over tuples with similar values. This way, we obtain joins over values which may also be interesting but which are not covered by the classic join. There are ways to introduce joins using just one attribute for both y1 and y2, see [3] for details.
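Definition (24) translates into code as a filtered Cartesian product whose ranks combine the two input ranks with the similarity of the joined values; the following sketch (our illustration, reusing the row-list representation and a generic conjunction) corresponds to the similarity-based equi-join (25).

```python
def similarity_join(rows1, rows2, attr1, attr2, sim, conj, threshold=0.0):
    """D1 join_{attr1 = attr2} D2: rows are (row_dict, rank) pairs with
    disjoint attribute names; sim is the shared domain similarity."""
    joined = []
    for s, r1 in rows1:
        for t, r2 in rows2:
            degree = conj(conj(r1, r2), sim(s[attr1], t[attr2]))
            if degree > threshold:      # drop zero-rank (or low-rank) combinations
                joined.append(({**s, **t}, degree))
    return joined

# usage: join buyers and sellers on similar prices, then sort the result
# by degree to inspect the best matches
# matches = similarity_join(buyers, sellers, "price1", "price2", price_sim, min)
```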
Example 4. Let D1 be the ranked data table of buyers from Example 3 and let D2 be the ranked table of sellers from Tab. 1 prior to querying, i.e. with all nonzero ranks equal to 1. If we wish to reveal which buyers may want properties according to their type and price, we can use the following similarity-based join top5(D1 ⋈type1=type2 & price1=price2 D2) projected to interesting attributes:

rank  name1   price1   price2   type1          type2          bdrms2
1.0   Adams   250,000  250,000  Single Family  Single Family  3
0.8   Black   325,000  370,000  Single Family  Single Family  4
0.8   Flores  200,000  250,000  Penthouse      Penthouse      2
0.7   Black   325,000  320,000  Single Family  Ranch          4
0.7   Enke    240,000  250,000  Ranch          Single Family  3
As we can see, our best 5 matches contain one perfect match and 4 matches which, according to their ranks, can be considered as almost perfect or very good. With data like these, it does not make much sense to consider the usual equality joins because they may often produce empty data tables although there are interesting pairs of tuples which are almost joinable. For instance the query top5(D1 ⋈type1=type2 & price1=price2 & score=750 D2) projected to interesting attributes gives

rank  name1   price1   price2   type1          type2          score
0.7   Black   325,000  370,000  Single Family  Single Family  769
0.6   Adams   250,000  250,000  Single Family  Single Family  628
0.6   Black   325,000  320,000  Single Family  Ranch          769
0.5   Black   325,000  250,000  Single Family  Single Family  769
0.5   Flores  200,000  250,000  Penthouse      Penthouse      659
Hence, there is no exact match although Black should be seen as a potentially good candidate for buying the corresponding property. In this example, comparators other than "=" may be useful. When posing requirements on score, one usually wants it to be approximately greater (i.e., greater with a tolerance) than a value rather than similar to a value. Because of the limited scope of this paper, we are not going to discuss such comparators here but they can also be introduced in our model.
4.6 Domain Calculus, Tuple Calculus, and Equivalence Theorem
One of the fundamental results regarding the classic relational algebra is the so-called equivalence theorem which says that the expressive power of the algebra is equivalent to that of a domain calculus as well as to that of a tuple calculus. We obtained a counterpart to this result in our framework. Due to the limited scope, we will present it in a full version of this paper.
5 Conclusions and Further Research
We outlined logical foundations for similarity-based databases, with emphasis on relational algebra and similarity-based queries. Our future research will be
directed toward the development of further foundational issues in our model including standard issues from relational databases (further data dependencies, redundancy, normalization and design of databases, optimization issues, etc.), with a particular focus on similarity-related aspects. In addition, we will continue [3] and develop a prototype implementation of our model by means of existing relational database management systems.
References 1. Abiteboul, S., et al.: The Lowell database research self-assessment. Comm. ACM 48(5), 111–118 (2005) 2. Bˇelohl´ avek, R., Vychodil, V.: Data tables with similarity relations: functional dependencies, complete rules and non-redundant bases. In: Li Lee, M., Tan, K.-L., Wuwongse, V. (eds.) DASFAA 2006. LNCS, vol. 3882, pp. 644–658. Springer, Heidelberg (2006) 3. Belohlavek, R., Opichal, S., Vychodil, V.: Relational algebra for ranked tables with similarities: properties and implementation. In: Berthold, M.R., Shawe-Taylor, J., Lavraˇc, N. (eds.) IDA 2007. LNCS, vol. 4723, pp. 140–151. Springer, Heidelberg (2007) 4. Belohlavek, R., Vychodil, V.: Codd’s relational model from the point of view of fuzzy logic. J. Logic and Computation (to appear) 5. Bosc, P., Kraft, D., Petry, F.: Fuzzy sets in database and information systems: status and opportunities. Fuzzy Sets and Syst. 156, 418–426 (2005) 6. Buckles, B.P., Petry, F.E.: Fuzzy databases in the new era. In: ACM SAC 1995, Nashville, TN, pp. 497–502 (1995) 7. Date, C.J.: Database Relational Model: A Retrospective Review and Analysis. Addison Wesley, Reading (2000) 8. Fagin, R.: Combining fuzzy information: an overview. ACM SIGMOD Record 31(2), 109–118 (2002) 9. Gottwald, S.: Mathematical fuzzy logics. Bulletin for Symbolic Logic 14(2), 210– 239 (2008) 10. H´ ajek, P.: Metamathematics of Fuzzy Logic. Kluwer, Dordrecht (1998) 11. Li, C., Chang, K.C.-C., Ilyas, I.F., Song, S.: RankSQL: Query Algebra and Optimization for Relational top-k queries. In: ACM SIGMOD 2005, pp. 131–142 12. Maier, D.: The Theory of Relational Databases. Computer Science Press, Rockville (1983) 13. Prade, H., Testemale, C.: Generalizing database relational algebra for the treatment of incomplete or uncertain information and vague queries. Inf. Sci. 34, 115–143 (1984) 14. Raju, K.V.S.V.N., Majumdar, A.K.: Fuzzy functional dependencies and lossless join decomposition of fuzzy relational database systems. ACM Trans. Database Systems 13(2), 129–166 (1988) 15. Takahashi, Y.: Fuzzy database query languages and their relational completeness theorem. IEEE Trans. Knowledge and Data Engineering 5, 122–125 (1993)
Tailoring Data Quality Models Using Social Network Preferences

Ismael Caballero1, Eugenio Verbo2, Manuel Serrano1, Coral Calero1, and Mario Piattini1

1 University of Castilla–La Mancha, Grupo Alarcos - Institute of Information Technologies & Systems, Pº de la Universidad 4, 13071 Ciudad Real, Spain
{Ismael.Caballero,Manuel.Serrano,Coral.Calero,Mario.Piattini}@uclm.es
2 Indra Software Labs, R&D Department, Ronda Toledo s/n, 13004 Ciudad Real, Spain
[email protected]
Abstract. To succeed in their tasks, users need to manage data with the most adequate quality levels possible according to specific data quality models. Typically, data quality assessment consists of calculating a synthesizing value by means of a weighted average of values and weights associated with each data quality dimension of the data quality model. We shall study not only the overall perception of the level of importance for the set of users carrying out similar tasks, but also the different issues that can influence the selection of the data quality dimensions for the model. The core contribution of this paper is a framework for representing and managing data quality models using social networks. The framework includes a proposal for a data model for social networks centered on data quality (DQSN), and an extensible set of operators based on soft-computing theories for corresponding operations. Keywords: data quality, data quality dimensions, quality model, social networks.
1 Introduction

Users need documents containing data with an adequate level of quality to succeed in their tasks. The concept of data quality (typically used as a synonym for information quality [1]) provides users with a set of useful fundamentals for assessing the quality of data contained in relevant documents and hence for computing the overall data quality of a document. Let us assume that it is possible to define classes of users that access and use documents with the same data quality requirements [2]. In addition, let us generalize the term "user" by employing "stakeholder" instead, to mean any agent (person, system or process) involved in the use of documents. To study the level of data
quality of a document, and depending on the stakeholders' role with respect to the data, data quality is assessed with regard to a data quality model composed of a set of data quality dimensions [3]. It is important to highlight that, typically and for the sake of simplicity, data quality models present all data quality dimensions as being equally important for the assessment. But as Wang and Strong demonstrate in [3], not all of them are indeed equally important. This is the aim of our research: to study what foundations are appropriate for determining how important each data quality dimension is for an assessment scenario, and how various issues can influence the perceived level of importance of the dimensions. As we also want to make our findings operational, we propose the technological support for conducting such studies.

To be more precise, our interest is focused on how to manage the "weight" of each data quality dimension in the assessment. Initially, this weight would be provided by each of the stakeholders participating in the assessment scenario; it represents his or her perception of the relative importance of a specific data quality dimension with respect to the remaining ones in the model. What we want to set forth is that different data quality models could be required even when planning assessments for diverse scenarios. In any case, we posit that stakeholders working alone, or sets of stakeholders carrying out similar tasks, would take into consideration practically the same data quality dimensions across the different applications needed for their tasks in varying scenarios. The results of this research could be used as a starting point for new assessment efforts by establishing the corresponding benchmarks.

The simplest way to obtain a data quality model, containing different data quality dimensions and their associated weights, would consist of asking the group of stakeholders to decide, first, on the dimensions and, second, on the perceived level of importance of each data quality dimension for the data quality model to be used in the assessment. After this, the results should be synthesized, taking into account the different perceptions. Since representations of preferences are typically subjective, it is preferable to provide stakeholders with a set of linguistic variables to model this subjectivity, instead of the numeric values used in other research works related to data quality measurement. The development of the foundations for the desired calculations is based on soft computing and computing with words [4], as presented in Section 2.

To give sufficient automated computational support to our study, and taking into account the definition of social network provided by [5], we have decided to coin the term Data Quality Social Network (DQSN) for a set of stakeholders connected by social relationships who share a data quality model for assessing the data quality of the documents used when carrying out a task. We intend to use a DQSN, and the associated analysis, for each assessment scenario in order to manage the set of stakeholders, their perceptions and other future research issues, such as how to manage the dependence among data quality dimensions. The main contribution of this paper, therefore, is an extensible framework for managing data quality models, based on the overall perception of the level of importance of the different data quality dimensions identified by the members of a DQSN.
The remainder of the paper is structured as follows: Section 2 presents the main foundations of the proposed framework. Section 3 introduces an example of application. Finally, Section 4 presents several conclusions and outlines our intended future work.
2 The Framework

This section presents the main components of the framework for establishing a Data Quality Social Network (DQSN). The main aim of a DQSN is to build a data quality model collaboratively. This data quality model is composed of all those data quality dimensions, and their corresponding weights, that are of interest for the stakeholders; the weights correspond to the overall perception of the levels of importance of those dimensions. The data quality model is associated with the task at hand for the stakeholders, although it is intended to be shareable and interoperable for many other applications. For instance, it can be used for filtering or ranking documents in order to discriminate those that are not of high enough quality to be used, or for editing purposes, in order to guide designers in optimizing their results. The framework consists of two main elements:

• A Data Model for representing and managing information about the members of the DQSN and their relationships, and for supporting annotations regarding the perception of the level of importance of data quality dimensions made by the members of the DQSN. The data model covers the concepts related to data quality measurement, and it can easily be extended to address future research issues.
• An extensible set of Operators based on soft computing, for making the corresponding calculations, such as those used to synthesize the overall level of importance of the individual data quality dimensions of a model.
It is important to realize that the framework presented in this paper does not cover the entire process of measuring the data quality dimensions for each of the entities identifiable in the documents being assessed. See [6-8] for a broader explanation of measuring data quality under the hypothesis considered in this paper.

2.1 A Data Model for DQSN

The main aim of depicting a Social Network is to enable the introduction of Social Network Analysis techniques as a way to further extend the framework. In this framework, support for annotations is provided, aimed at collecting the perceptions (or preferences) of the stakeholders, based on their backgrounds and experiences, about each of the data quality dimensions used to assess the data quality of documents. This implies gathering and conveniently storing information about the elements that participate in the scenario [9]. The information we consider important to manage is grouped into the different classes and relationships of the data model presented below. As we seek to enable usage by machines, and interoperability between applications, we decided to implement the data model by means of an OWL ontology, given that Semantic Web and Social Network models support one another: on the one hand, the Semantic Web enables online and explicitly represented social information; on the other hand, social networks provide knowledge management in which users "outsource" knowledge and beliefs
via the social network. OWL has been chosen instead of any other ontology language because it is the most complete language for the representation of Web data [10]. Instances of this ontology with the gathered data are registered using RDF, since it is widely accepted and is the basis for Semantic Web applications [11]. The vocabularies used to implement some of the concepts of our data model are:

• Dublin Core (DC) [12]. DC emerged as a small set of descriptors for describing document metadata, such as creator or publisher.
• Friend of a Friend (FOAF) [13]. This is the basis for developing the Social Network, since it provides terms for describing personal information [5].
• Software Measurement Ontology (SMO), implemented from the software measurement ontology proposed by García et al. in [14]. It describes the most important concepts related to the measurement of software artifacts and is available at http://alarcos.inf-cr.uclm.es/ontologies/smo.
• Data Quality Measurement Ontology (DQMO), implemented from the Data Quality Measurement Information Model (DQMIM) proposed by [6]. The DQMIM describes the different concepts regarding data quality measurement issues.
The classes that we have identified are (see Fig. 1):

• dqsn:DQSN. Acronym for Data Quality Social Network, which is an aggregation of agents. This is the class of which a data quality social network is made up; it consists of an aggregation of different dqmo:Stakeholders.
• dqmo:Stakeholder. This class is intended to represent a first approach to the stakeholders, who can be the information consumers [1].
• dqmo:DataQualityDimension. This class represents the Data Quality Dimensions (see [7, 15]) whose levels of importance are going to be managed. It has a property named dqsn:overallLevelOfImportance, whose main aim is to store the synthesized level of importance calculated as the overall opinion of all the stakeholders involved in the definition of the data quality model. This value is calculated by means of the soft-computing operators described in Section 2.2.
• dqmo:DataQualityModel. This class is used to represent information about the Data Quality Model defined for each document under assessment. It contains a set of data quality dimensions, each with its overall perceived level of importance.
• foaf:Document. This represents the document whose data quality is to be assessed by using the data quality model.

Fig. 1. Data Model for Supporting DQSN
On the other hand, we have identified several properties which allow us to establish relationships between the different classes. Since we needed to complete the semantics of some properties, we have extended them with sub-properties. UML has been used to represent them, and some of the following relationships are depicted by means of association classes to keep their meaning. The properties that have been incorporated, represented as relationships or association classes in Fig. 1, are:

• foaf:interest. This is used to relate a data quality social network to the set of documents that the members of the network are currently interested in.
• dqsn:hasbeeninfluencedby. This is based on the property foaf:knows. Its main objective is to represent how a stakeholder has been influenced in his or her perception of the level of importance of a data quality dimension by another stakeholder, for different reasons [16].
• dqsn:considerimportant. This property is aimed at capturing the level of importance that a dqmo:Stakeholder gives to a given data quality dimension. As previously said, for the assessment of the data quality of a document, some authors like [8, 17] propose computing a weighted average of normalized vectors. Since it may be very difficult to give a precise value for each weight, we propose using the foundations of computing with words [4]. This limits the possible values for each level of importance to a few linguistic labels representing the perceived importance of the data quality dimension. After assigning the corresponding linguistic label, it is necessary to synthesize the global perception of the data quality dimension's importance. This task can be done using the Majority guided Linguistic Induced Ordered Weighted Averaging (MIOWA) operator provided by [18]. MIOWA operators will be discussed later.
• dqsn:uses. This establishes a relationship between the social network and the data quality model used to assess a document. This relationship is necessary because information consumers assess quality within specific business contexts [8].
• smo:evaluates. This is used to connect a specific data quality model with the data quality dimensions that are really important for the data quality social network.
2.2 The MIOWA and the Influence-Biased MIOWA Operators

In the previous subsection, we introduced the need for MIOWA operators to synthesize the opinions of the majority of stakeholders (decision makers) in order to calculate the overall level of importance of a data quality dimension. For a better understanding of the paper, in this subsection we briefly introduce such operators, and then present our contribution for synthesizing the opinion of the majority while taking into account the possibility that the opinion of some stakeholders can be influenced by that of others.

An Ordered Weighted Average (OWA) operator is an aggregation operator taking a collection of argument values ai and returning a single value. Yager and Filev in [19] define an OWA operator of dimension n as a function Φ: ℝⁿ → ℝ with an associated weighting vector W = {w1, w2, …, wn} such that wi ∈ [0,1] and ∑i wi = 1, where, for any arguments a1, a2, …, an ∈ [0,1], OWA(a1, a2, …, an) = ∑i bi wi, with bi being the i-th largest element of the aj. Let B be the vector containing the ai ordered according to this criterion, so that each bi corresponds to some value aj before ordering (where j is not necessarily related to i). Defining the index function a-index(i) = j, so that bi is the argument whose original index is a-index(i), the OWA operator can be expressed as in formula (1):

OWA(a1, a2, …, an) = WᵀB    (1)

Sometimes, a means of inducing the order of the arguments to be aggregated is provided: each argument ai has an associated order-inducing value vi, and the ai are ordered with respect to the vi. Let Bv be the vector of arguments in which v-index(i) is the index of the i-th largest vi. The Induced Ordered Weighted Averaging (IOWA) operator [19] is then defined as shown in (2):

IOWA(a1, a2, …, an) = WᵀBv    (2)

The weights wi must be calculated in a way that allows us to represent the increase of satisfaction in getting Si instead of Si−1 (see Fig. 2). Let Q: [0,1] → [0,1] be a function such that Q(0) = 0, Q(1) = 1, and Q(x) ≥ Q(y) for all x > y, with x, y ∈ ℝ. Each wi can then be calculated as follows:

wi = Q(i/n) − Q((i−1)/n)    (3)
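As a small illustration of formulas (1) and (3), the following sketch derives a weighting vector from a monotone quantifier and applies the plain OWA aggregation; the identity quantifier used in the usage example is an arbitrary choice for the sketch, not one prescribed by the paper.

```python
def quantifier_weights(Q, n):
    """Formula (3): w_i = Q(i/n) - Q((i-1)/n) for i = 1..n."""
    return [Q(i / n) - Q((i - 1) / n) for i in range(1, n + 1)]

def owa(values, weights):
    """Formula (1): weighted sum of the values sorted in descending order."""
    b = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, b))

# With the identity quantifier Q(x) = x, the OWA reduces to the arithmetic mean
w = quantifier_weights(lambda x: x, 4)
print(w)                             # [0.25, 0.25, 0.25, 0.25]
print(owa([0.9, 0.4, 0.7, 0.6], w))  # ~0.65
```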
The main strength of this weighting vector is that it can have an associated semantic meaning that determines the behavior of the OWA operator, since the weights have the effect of emphasizing or de-emphasizing different components in the aggregation. Therefore, a weighting vector allows better control over the aggregation stage developed in the resolution processes of Group Decision Making problems [21]. Thus, by introducing corresponding changes in the way it is calculated, it is possible to adapt the operator to any required computation: it can be used to synthesize the opinion of most of the decision makers, or of some of them, and so on.
Fig. 2. A set of seven linguistic terms ([20])

μmost(x) = 1 if x ≥ 0.9;  2x − 0.8 if 0.4 < x < 0.9;  0 if x ≤ 0.4

Fig. 3. Definition of the linguistic quantifier most [18] (satisfaction degree as a function of the percentage of fully satisfied criteria)
This can be satisfied by defining the function Q as a fuzzy membership function μ. In our case, as we want to capture the opinion of the majority, the function represented in Fig. 3 can be used. Pasi and Yager in [18] introduce the Majority IOWA operator by (1) suggesting that the most similar values should have close positions in the induced ordering, so that they can be aggregated together, and (2) suggesting a new strategy for constructing the weighting vector so as to better model the majority-based nature of the aggregation. For the first suggestion, they use a support function to compute similarities between pairs of opinion values. Let Sup(a, b) be the binary function that expresses the support from b for a (see Fig. 4); then the more similar two values are, the more they support each other.
Sup(a, b) = 1 if |a − b| < α;  0 otherwise

Fig. 4. Support Function
For the weighting vector, they suggest a construction procedure with non-decreasing weights. They modify the overall support si (the sum of the corresponding Sup(ai, aj)) by setting ti = si + 1, so that ti ≤ tj for all i < j, and define wi as in formula (4), where Q is the membership function μ of the linguistic quantifier "most" (see Fig. 3).

wi = Q(ti/n) / ∑j Q(tj/n)    (4)
Summing up, to use the MIOWA operators, the steps presented in Table 1 can be followed.

Table 1. Steps to apply a MIOWA operator

1. Calculate the order-inducing values by means of the support function (see Fig. 4).
2. Calculate the weighting vector W, taking into account the quantifier (see formula 4).
3. Compute the weighted average (see formula 2).
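The three steps can be sketched in a few lines of Python. The operator in [18] works on linguistic labels; here numeric stand-ins (VH = 5 down to VL = 1) and a support threshold α = 1.5 are used purely for illustration, as assumptions of the sketch rather than values prescribed by the operator.

```python
def q_most(x):
    """Membership function of the linguistic quantifier 'most' (Fig. 3)."""
    if x >= 0.9:
        return 1.0
    if x > 0.4:
        return 2 * x - 0.8
    return 0.0

def miowa(opinions, alpha=1.5):
    """Majority-guided IOWA over numeric opinion values (steps 1-3 of Table 1)."""
    n = len(opinions)
    # Step 1: order-inducing values t_i = (number of opinions within alpha of a_i,
    # itself included) + 1, following the tabulation used in the example of Sect. 3
    t = [1 + sum(1 for b in opinions if abs(a - b) < alpha) for a in opinions]
    # Induce the order: opinions sorted by increasing t_i (ties keep input order)
    order = sorted(range(n), key=lambda i: t[i])
    ordered = [opinions[i] for i in order]
    t_sorted = [t[i] for i in order]
    # Step 2: non-decreasing weights, formula (4)
    qs = [q_most(ti / n) for ti in t_sorted]
    weights = [q / sum(qs) for q in qs]
    # Step 3: weighted average of the induced-order values, formula (2)
    return sum(w * a for w, a in zip(weights, ordered))

# Reliability opinions of U01..U05 encoded as H, VH, L, M, VL
print(round(miowa([4, 5, 2, 3, 1])))  # 3, i.e. S3 = "Medium", as obtained in Sect. 3
```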
This calculation must be repeated for each of the data quality dimensions identified. At the end, we will have obtained a synthesized value for the level of importance of each data quality dimension in the data quality model, in the form LevelOfImportance(NumericalValue). With the set of all levels of importance, it is possible to compute the weighted average, together with the values of the measures corresponding to each data quality dimension, for the assessment of an entity containing data of interest for the stakeholders' task at hand (e.g., a Semantic Web document, an XML file, etc.). As a result of the measurement method, one could obtain a quantitative (numerical) value or a qualitative one (such as a linguistic label). Depending on the nature of the measures, the results must be converted to suit the appropriate operator. Hence, there are two alternatives:

• If the measures obtained are in a numerical format, we can (1) normalize the numerical values associated with the results of the MIOWA operator applied in the calculation described previously (see the example in Section 3) and then calculate the weighted average, or (2) obtain fuzzy values for the results of the measurement and apply a corresponding soft-computing operator based on OWA in order to get a fuzzy value, which must be interpreted according to organizational policies.
• If the obtained measures have qualitative values (e.g., linguistic labels corresponding to fuzzy sets), we can (1) apply the corresponding soft-computing operator based on OWA directly, and then interpret the result according to the aforementioned organizational policies, or (2) defuzzify all the measures corresponding to the values of the measures and those obtained for the level of importance of each data quality dimension, and then compute a numerical weighted average.
However, to obtain an assessment of the data quality level of the entities under evaluation, only one of these four options should be chosen and applied consistently. Nevertheless, the nature of the data quality measures must be taken into account for the best computational performance. Now that MIOWA operators have been presented, we introduce the Influence-biased MIOWA, since contextual factors such as personal characteristics, decision tasks, and organizational settings strongly influence the perception of data quality [8]. We have noted that the influence of decision makers on one another is not currently captured by MIOWA operators; indeed, MIOWA operators treat decision makers' opinions as independent. Our idea is that in a
social network, one stakeholder (decision maker) can influence the opinion of another (e.g., group leaders can suggest their vision to the rest of the group), with the result that the overall opinion of the group is not its actual opinion, since it could be biased. Indeed, people tend to assume that a thing is of quality simply because another, more experienced person has a more positive opinion about the thing being assessed. Since our main goal is to approximate the actual level of importance of each data quality dimension, we are conscious that, in order to compute the overall perception, all of the opinions should count equally. In this first approach, we represent the level of influence of a stakeholder Ui on another stakeholder Uj with a function LInfluence(Ui, Uj), which, only for demonstration purposes, can take one of the three values shown in Fig. 5. The values provided are hypotheses for demonstration and must be adapted to each specific scenario. Indeed, as the levels and their corresponding values do not interfere with the research itself, they should be calibrated as required for each scenario.
LInfluence(Ui, Uj) = 0.2 if Ui influences Uj;  0.1 if Ui semi-influences Uj;  0 if Ui does not influence Uj

Fig. 5. Function for computing the Level of Influence
The Influence-biased MIOWA operator is a MIOWA operator in which the similarity between pairs of values is modified by considering the effect of the level of influence. To achieve this goal, we define a modified support function Sup*(a, b) (see Fig. 6), based on the one shown in Fig. 4, where a and b are levels of importance and α is a threshold value of support:
Sup*(a, b) = Sup(a, b) − LInfluence(Ui, Uj)

Fig. 6. Support Function for the Influence-biased MIOWA Operator
The remainder of the necessary calculation is the same as that suggested by Pasi and Yager in [18] for computing the weighting vector and the order-inducing values. The main purpose of the Influence-biased MIOWA is to help DQSN analysts find patterns of influence in the perception of data quality in a social network, by comparing the results obtained with the MIOWA operator against those obtained with the Influence-biased one.
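The modified support can be slotted into the MIOWA sketch above by replacing the plain support in the computation of the ti. The following fragment reuses the numeric label encoding and threshold assumed there; the influence levels are the demonstration values of Fig. 5.

```python
def influence(level):
    """Demonstration levels from Fig. 5: 'yes' -> 0.2, 'semi' -> 0.1, otherwise 0."""
    return {"yes": 0.2, "semi": 0.1}.get(level, 0.0)

def sup_star(a, b, level, alpha=1.5):
    """Fig. 6: Sup*(a, b) = Sup(a, b) - LInfluence(Ui, Uj)."""
    sup = 1.0 if abs(a - b) < alpha else 0.0
    return sup - influence(level)

# U01's opinion (H = 4) supported by U02's (VH = 5), with U01 influencing U02 ('yes')
print(sup_star(4, 5, "yes"))  # 0.8, matching the influenced part of Table 4 in Sect. 3
```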
3 An Example of Application

In [6], the authors presented an example that can be extended here to illustrate the use of the proposed framework. In that work, the authors describe a situation in which the owners of a newspaper want to know whether the data quality of the news published on their website satisfies data consumers or not. There, the data quality model used for assessing the soundness of the provided data consisted of two data quality
dimensions, namely reliability and completeness. Let us now expand this data quality model by adding a third dimension: timeliness.

To make the example easier to understand, let us suppose that the users of the newspaper website (namely the stakeholders) are interested in performing three tasks: T01, T02 and T03. Let us suppose that the nature of T01 and T02 is so similar that, from the point of view of data quality, stakeholders performing them can be allocated to the same group. This group is the basis for depicting the corresponding DQSN-1. They are going to use a document D, which is intended for assessment. There are five users forming DQSN-1. We assume that influence need not be a symmetric relationship: if Ui influences the data quality perception of Uj, Uj does not necessarily influence the perception of Ui at the same level. We consider only the three possible levels given in Fig. 5. Table 2 shows in rows the influence level of each user on each workmate in the network (addressed by columns). These values are used to compute the overall perceived importance of each data quality dimension by applying the Influence-biased MIOWA operator.

Table 2. Influences on users' perception of the importance of the data quality dimensions

      U01   U02   U03   U04   U05
U01   -     Yes   Semi  Yes   Non
U02   Non   -     Non   Yes   Non
U03   Semi  Semi  -     Semi  Non
U04   Semi  Semi  Non   -     Yes
U05   Non   Non   Semi  Non   -
Let us use a linguistic set S = {S5 = VH, S4 = H, S3 = M, S2 = L, S1 = VL} (see Fig. 2), corresponding to the perceived levels of importance of data quality dimensions for the assessment: VH = "Very High", H = "High", M = "Medium", L = "Low", VL = "Very Low". The perceptions provided by the different users are gathered in Table 3.

Table 3. Perception of the importance level of the Data Quality Dimensions for the working example (the ai to aggregate)

      RELIABILITY  COMPLETENESS  TIMELINESS
U01   H            VH            L
U02   VH           VL            H
U03   L            M             M
U04   M            M             L
U05   VL           L             L
Let us now go through the calculations by following the steps summarized in Table 1. First, we calculate the corresponding values ti and t*i by means of the support functions (Fig. 4 and Fig. 6) for the non-influenced and influence-biased cases described in Section 2.2. Table 4 shows the results for Reliability.
Table 4. Calculating the overall modified support ti and support* t*i for Reliability

(a) Not considering the influence bias (Sup)
      H   VH  L   M   VL  ti = si + 1
H     1   1   0   1   0   4
VH    1   1   0   0   0   3
L     0   0   1   1   1   4
M     1   0   1   1   0   4
VL    0   0   1   0   1   3

(b) Considering the influence bias (Sup*)
      H     VH    L     M     VL    t*i = s*i + 1
H     1     0.8   -0.1  0.8   0     3.7
VH    1     1     0     -0.2  0     2.8
L     -0.1  -0.1  1     0.9   1     3.7
M     0.9   -0.1  1     1     -0.2  3.6
VL    0     0     0.9   0     1     2.9
The corresponding order-inducing values vi and v*i, and the consequent induced order, are shown in Table 5.

Table 5. Order-inducing values vi and v*i for Reliability

(a) Not considering the influence bias
ti   3    3    4    4    4
vi   U02  U05  U01  U03  U04

(b) Considering the influence bias
t*i  2.8  2.9  3.6  3.7  3.7
v*i  U02  U05  U04  U01  U03
The induced order of the values ai to aggregate is shown in Table 6 for both cases: not considering and considering the influence bias.

Table 6. Calculating the vectors Bvi and Bv*i

(a) Not considering the influence bias
ai      H    VH   L    M    VL
vi      U02  U05  U01  U03  U04
io-ai   VH   VL   H    L    M
Bvi     5    1    4    2    3

(b) Considering the influence bias
ai      H    VH   L    M    VL
v*i     U02  U05  U04  U01  U03
io-a*i  VH   VL   M    H    L
B*vi    5    1    3    4    2
The second step in Table 1 calculates the weighting vector. Table 7 shows how to calculate wi and w*i according to formula (4), using the linguistic quantifier most, Q2 (Fig. 3). Note that both weighting vectors are non-decreasing. Finally, we can perform step 3 in Table 1 and apply the OWA aggregation of formula (2): k = round(∑i wi Bvi) = round(0.13·5 + 0.13·1 + 0.25·4 + 0.25·2 + 0.25·3) = round(3.03) = 3, so Sk = S3 = "Medium".
Table 7. Calculating the weighting vectors W and W* by using Q2

ti  ti/n  Q2(ti/n)  wi    t*i  t*i/n  Q2(t*i/n)  w*i
3   0.6   0.4       0.13  2.8  0.56   0.52       0.14
3   0.6   0.4       0.13  2.9  0.58   0.56       0.15
4   0.8   0.8       0.25  3.6  0.72   0.84       0.23
4   0.8   0.8       0.25  3.7  0.74   0.88       0.24
4   0.8   0.8       0.25  3.7  0.74   0.88       0.24
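As a quick numeric check of step 3, the aggregation can be reproduced with the rounded weights from Table 7 and the induced-order label indices Bvi and Bv*i from Table 6 (the small deviation of k* from the 2.97 quoted below comes from the rounding of the weights):

```python
# Rounded weights (Table 7) and induced-order label indices (Table 6)
w,  bv  = [0.13, 0.13, 0.25, 0.25, 0.25], [5, 1, 4, 2, 3]
ws, bvs = [0.14, 0.15, 0.23, 0.24, 0.24], [5, 1, 3, 4, 2]

k      = sum(wi * bi for wi, bi in zip(w,  bv))   # 3.03 -> rounds to 3 ("Medium")
k_star = sum(wi * bi for wi, bi in zip(ws, bvs))  # 2.98 -> rounds to 3 ("Medium")
print(round(k, 2), round(k_star, 2))
```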
Repeating the calculation for k*, we obtain k* = round(2.97) = 3, and hence Sk* = "Medium". By repeating the same calculation for the remaining data quality dimensions of the data quality model, the values (and the normalized ones for each case) shown in Table 8 are obtained. The data quality model to be used for tasks T01 and T02 is thereby fully described.

Table 8. Data Quality Model for DQSN-1

Data Quality Dimension   Overall importance perceived       Overall importance perceived
                         without considering influence      considering influence
Reliability              Medium  0.375 (k = 3)              Medium  0.379 (k* = 2.98)
Completeness             Low     0.315 (k = 2.52)           Low     0.301 (k* = 2.37)
Timeliness               Medium  0.309 (k = 2.47)           Medium  0.318 (k* = 2.50)
Let us now use the data quality model to assess the data quality level of the data contained in the aforementioned document D. For the sake of simplicity, let us suppose that we have obtained numerical values when measuring the three data quality dimensions on D. Let us also suppose that all of the values are on the same scale, so that it is possible to operate on them. The values for the data quality dimensions, ranging between 0 and 100, are shown in Table 9.

Table 9. Measured values obtained for the data quality dimensions

Reliability  Completeness  Timeliness
52           80            90
Using a weighted average [17], it is easy to see that, without considering influence between members of the DQSN, the resulting value of the assessment is 0.375·52 + 0.315·80 + 0.309·90 = 72.510, whereas in the other case it is 0.379·52 + 0.301·80 + 0.318·90 = 72.408. This difference (72.510 vs. 72.408), although small, demonstrates that influence on the perceived level of importance between members of the DQSN affects the global perception of the level of data quality of a document D, and it therefore deserves to be taken into account.
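The two figures can be reproduced directly as plain arithmetic over the normalized weights of Table 8 and the measures of Table 9:

```python
weights_no_influence   = {"reliability": 0.375, "completeness": 0.315, "timeliness": 0.309}
weights_with_influence = {"reliability": 0.379, "completeness": 0.301, "timeliness": 0.318}
measures = {"reliability": 52, "completeness": 80, "timeliness": 90}  # Table 9

assess = lambda w: sum(w[d] * measures[d] for d in measures)
print(round(assess(weights_no_influence), 3),    # 72.51
      round(assess(weights_with_influence), 3))  # 72.408
```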
Finally, the results can be expressed by using the DQSN ontology provided in this paper; the final result is an RDF file, an extract of which is shown in Fig. 7.
Fig. 7. Extract of an RDF file for DQSN-1
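The RDF content of the figure is not reproduced in the source, so the following is only a sketch of how an instance of the data model in Section 2.1 might be built and serialized. Python's rdflib is assumed as tooling, and all namespace and resource URIs (other than the published SMO address) are hypothetical placeholders.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

# Hypothetical namespaces, except the SMO address published by the authors
DQSN = Namespace("http://example.org/dqsn#")
DQMO = Namespace("http://example.org/dqmo#")
SMO  = Namespace("http://alarcos.inf-cr.uclm.es/ontologies/smo#")
FOAF = Namespace("http://xmlns.com/foaf/0.1/")

g = Graph()
for prefix, ns in [("dqsn", DQSN), ("dqmo", DQMO), ("smo", SMO), ("foaf", FOAF)]:
    g.bind(prefix, ns)

network, model = URIRef("http://example.org/DQSN-1"), URIRef("http://example.org/DQM-1")
document, reliability = URIRef("http://example.org/docs/D"), URIRef("http://example.org/dims/Reliability")
u01, u02 = URIRef("http://example.org/users/U01"), URIRef("http://example.org/users/U02")

g.add((network, RDF.type, DQSN.DQSN))
g.add((network, FOAF.interest, document))          # the documents the network cares about
g.add((network, DQSN.uses, model))                 # the shared data quality model
g.add((model, RDF.type, DQMO.DataQualityModel))
g.add((model, SMO.evaluates, reliability))         # dimensions relevant to the network
g.add((reliability, RDF.type, DQMO.DataQualityDimension))
g.add((reliability, DQSN.overallLevelOfImportance, Literal(0.375)))  # value from Table 8
g.add((u01, RDF.type, DQMO.Stakeholder))
g.add((u02, RDF.type, DQMO.Stakeholder))
g.add((u02, DQSN.hasbeeninfluencedby, u01))        # U02 influenced by U01, cf. Table 2
g.add((u01, DQSN.considerimportant, reliability))  # individual perception (label omitted)

print(g.serialize(format="xml"))
```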
4 Conclusions and Future Work

In this paper, a framework for managing data quality models has been presented. This framework is based on the idea that not all data quality dimensions contained in a data quality model are equally important in the assessment of the data quality of a document. The data quality model is composed of data quality dimensions and an associated weight for each one. These weights are obtained as a synthesized value that reflects the stakeholders' perception of the importance level and other relevant issues, such as the effect of the influence of other stakeholders. Since it would be very difficult to assess this level of importance numerically, we have proposed the use of the computing-with-words theory to model, represent and manage the corresponding levels of perception by means of MIOWA operators. The more stakeholders participate in the calculation of the overall importance of data quality dimensions, the more representative and reliable the results are. In this sense, a first operator, the Influence-biased MIOWA, has been proposed and its introduction has been justified by means of an example.

It has also been proposed to manage the necessary information about stakeholders, their relationships and their perception of the level of importance of the data quality dimensions by using a data quality social network. This enables the use of, on the one hand,
Social Network Analysis [22] and, on the other hand, Semantic Web technologies to make the framework operative. An interesting use of the DQSN consists of studying the dynamics of data quality perceptions over a period of time, as [23] proposes, or even studying the effect on stakeholders of choosing some data quality dimensions instead of others, as studied by [24]. With these conclusions, rules to infer and predict the future behavior of network members can be devised in order to avoid data quality problems related to the data quality dimensions identified.

As a main conclusion, we can argue that the work presented is relevant as a promising step towards improving the effectiveness of applications. This may include ways in which it is possible to "filter" or "rank", according to an established data quality model, the great amount of documents available from different sources such as the Internet. As a DQSN is built on Semantic Web technologies, semantic interoperability between different applications becomes possible.

In the future, we want to refine both the data model for the DQSN and the operators by applying them to several case studies in practice. This will be done using a tool that automates the calculations and the management of the DQSN. We want to study the different types of influence identified by [8], which condition the choice of the weights, in order to extend the operators and manage these issues properly. In addition, we also want to introduce different Social Network Analysis techniques into the study of the behavior of stakeholders in the assessment of data quality.

Acknowledgments. This research is part of the projects DQNet (TIN2008-04951-E), supported by the Spanish Ministerio de Educación y Ciencia, IVISCUS (PAC080024-5991), supported by the JCCM, and IQMF-Tool, supported by the University of Castilla-La Mancha.
References

1. Wang, R.Y.: A Product Perspective on Total Data Quality Management. Communications of the ACM 41(2), 58–65 (1998)
2. Cappiello, C., Francalanci, C., Pernici, B.: Data quality assessment from the user's perspective. In: International Workshop on Information Quality in Information Systems (IQIS 2004), Paris, France, pp. 68–73. ACM, New York (2004)
3. Wang, R., Strong, D.: Beyond accuracy: What data quality means to data consumers. Journal of Management Information Systems 12(4), 5–33 (1996)
4. Zadeh, L.: From Computing with Numbers to Computing with Words - From Manipulation of Measurements to Manipulation of Perceptions. Fuzzy Control: Theory and Practice (2000)
5. Ding, L., Zhou, L., Finin, T., Joshi, A.: How the Semantic Web is Being Used: An Analysis of FOAF Documents. In: Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS 2005) - Track 4, vol. 04, p. 113. IEEE Computer Society, Los Alamitos (2005)
6. Caballero, I., Verbo, E.M., Calero, C., Piattini, M.: A Data Quality Measurement Information Model based on ISO/IEC 15939. In: 12th International Conference on Information Quality. MIT, Cambridge (2007)
7. Batini, C., Scannapieco, M.: Data Quality: Concepts, Methodologies and Techniques. Data-Centric Systems and Applications. Springer, Heidelberg (2006)
8. Even, A., Shankaranarayanan, G.: Utility-driven assessment of data quality. SIGMIS Database 38(2), 75–93 (2007)
9. Caballero, I., Verbo, E.M., Calero, C., Piattini, M.: DQRDFS: Towards a Semantic Web Enhanced with Data Quality. In: Web Information Systems and Technologies, Funchal, Madeira, Portugal, pp. 178–183 (2008)
10. Cappiello, C., Francalanci, C., Pernici, B., Martini, F.: Representation and Certification of Data Quality on the Web. In: 9th International Conference on Information Quality, pp. 402–417. MIT, Cambridge (2004)
11. Lassila, O., Hendler, J.: Embracing "Web 3.0". IEEE Internet Computing 11(3), 90–93 (2007)
12. DCMI: Dublin Core Metadata Element Set, Version 1.1 (2008), http://dublincore.org/documents/dces/#DCTERMS
13. Brickley, D., Miller, L.: FOAF Vocabulary Specification 0.91 (2007), http://xmlns.com/foaf/spec
14. García, F., Bertoa, M.F., Calero, C., Vallecillo, A., Ruiz, F., Genero, M.: Towards a consistent terminology for software measurement. Information and Software Technology 48(8), 631–644 (2006)
15. Strong, D.M., Lee, Y.W., Wang, R.Y.: Data Quality in Context. Communications of the ACM 40(5), 103–110 (1997)
16. Zhang, D., Lowry, P.B., Zhou, L., Fu, X.: The impact of Individualism-Collectivism, Social Presence, and Group Diversity on Group Decision Making under majority influence. Journal of Management Information Systems 23(4), 53–80 (2007)
17. Pipino, L., Lee, Y., Wang, R.: Data Quality Assessment. Communications of the ACM 45(4), 211–218 (2002)
18. Pasi, G., Yager, R.: Modeling the concept of majority opinion in group decision making. Information Sciences 176(4), 390–414 (2006)
19. Yager, R., Filev, D.: Induced ordered weighted averaging operators. IEEE Transactions on Systems, Man and Cybernetics, Part B 29(2), 141–150 (1999)
20. Herrera-Viedma, E., Herrera, F., Martinez, L., Herrera, J.C., Lopez, A.G.: Incorporating filtering techniques in a fuzzy linguistic multi-agent model for information gathering on the web. Fuzzy Sets and Systems 148(1), 61–83 (2004)
21. Chiclana, F., Herrera-Viedma, E., Herrera, F., Alonso, S.: Some induced ordered weighted averaging operators and their use for solving group decision-making problems based on fuzzy preference relations. European Journal of Operational Research 182(1), 383–399 (2007)
22. Jamali, M., Abolhassani, H.: Different Aspects of Social Network Analysis. In: Proceedings of the 2006 IEEE/WIC/ACM International Conference on Web Intelligence, pp. 66–72. IEEE Computer Society, Los Alamitos (2006)
23. Stvilia, B.: A Model for Information Quality Change. In: 12th International Conference on Information Quality, pp. 39–49. MIT, Cambridge (2007)
24. DeAmicis, F., Barone, D., Batini, C.: An Analytical Framework to Analyze Dependencies among Data Quality Dimensions. In: ICIQ 2006. MIT, Cambridge (2006)
The Effect of Data Quality Tag Values and Usable Data Quality Tags on Decision-Making

Rosanne Price and Graeme Shanks

Department of Information Systems, University of Melbourne, Victoria, 3010, Australia
{rprice,gshanks}@unimelb.edu.au
Abstract. Prior research has shown that supplying decision-makers with data quality (DQ) tags, metadata about the quality of data used in decision-making, can impact decision outcomes in certain circumstances. However, there is conflicting evidence as to how or when decision outcomes are affected. In order to improve experimental soundness, the current research addresses possible sources of ambiguity in previous DQ tagging experiments with respect to DQ tag design. In particular, usability, semantics, and the rationale for assigning DQ values are explicitly considered in the DQ tag design of experiments. DQ tag values are designed to test the question of whether DQ tag impact is influenced by the relative importance of the “poor quality” attribute(s) to the specific decision task, thus potentially explaining discrepancies noted in observed results of prior research. The results showed no significant impact on decision choice, confidence, or efficiency; even with moderately rather than critically important attributes identified as being of poor quality. There was, however, some evidence of decreased consensus with DQ tags. These findings thus contraindicate the inclusion of data quality metadata in decision-making applications as a general business practice. Keywords: Data quality tags, Decision-making, Contextual inquiry, Experimental soundness.
1 Introduction

Decision-making often relies on heterogeneous data sources that are remote from the user and possibly external to the organization or individual, particularly in the context of data warehouses. Decision-makers routinely use data whose context is unfamiliar and may vary depending on the individual data type and source. In particular, it is unlikely that the set of data used to make a decision is of uniform quality. The quality of the data used to make a multi-criteria decision potentially impacts the effectiveness of that decision. It has therefore been proposed [1] that information about data quality (DQ) be provided to decision-makers in the form of metadata, called DQ tags. Decision outcomes could be influenced by the use of DQ tags. For example, decision makers may take time to consider data quality and reduce decision-making efficiency; some decision choices may be disregarded because of their low quality ratings; and the confidence in decisions may be eroded. For example, when using an
on-line system to search for a rental property, users may choose to ignore floor-space or treat it as a low priority if it is known to be unreliable compared to other criteria. Since the use of DQ tags is associated with significant overheads with respect to tag creation, storage, and maintenance, the adoption of DQ tagging as a business practice needs to be justified by a clear demonstration of its efficacy. The use of DQ tags by decision-makers may depend on factors such as decision-maker characteristics (eg. experience), the decision-making strategy employed, or the complexity of the decision task (the number of decision criteria and alternatives). In order to predict the decision-making contexts most likely to benefit from the use of DQ tagging, it is therefore important to understand how such factors influence the use of DQ tags.

Decision-making strategies [2] can be classified based on whether multiple attributes of a single alternative or multiple alternatives for a single attribute are considered first (alternative or attribute based respectively) and whether desirable values for one attribute can or cannot compensate for undesirable values in another attribute (i.e. compensatory or cutoff respectively). An Additive decision-making strategy is therefore classified as alternative and compensatory because it involves assigning a desirability score to each attribute value for a single alternative, calculating a summed score for each alternative, and then choosing the alternative with the highest sum—even though that alternative may have very undesirable values for some attributes. In contrast, the Elimination by Attributes (EBA) strategy involves elimination of alternatives not meeting the minimum requirements for the value of the attribute most important to the decision-maker. This process is repeated for other attributes in descending order of priority until only one alternative remains. Similarly, different levels of task complexity can be defined in terms of the number of decision criteria and alternatives and the ease of differentiating between alternatives.

1.1 Previous Work

Although DQ tags have been shown to impact decision outcomes given certain conditions [1,3,4], there is no agreement as to the necessary conditions (see [4, p.12]). Previous studies have in common the use of attribute-level tagging, the definition of two levels of task complexity (simple and complex), and the premise that a significant difference in decision choice with and without DQ tags implies that the tags were used in the decision-making process when available. There is agreement [1,3,4] that increased task complexity is associated with reduced DQ tag usage, explained in terms of information overload. [4] reported DQ tag use only with an EBA but not an additive strategy, whereas [1] found DQ tag use to be more prevalent when DQ information was presented in a manner convenient for use with an additive strategy. In general, DQ tagging experiments to date indicate that DQ tag usage increases with experience (see [3, p.182])—with no use of tags by freshmen undergraduate students [3], use of tags for simple but not complex tasks for senior undergraduate students [1,4], and use of tags for both simple and complex tasks for professionals (ie. those with at least one year's prior work experience) [3]. All previous studies reported instances of decreased consensus in decision choice when DQ tags were used but no change in consensus when DQ tags were ignored (ie. present but not used).
Some studies have limitations with respect to the data sample size used and the control of the decision-making strategy employed by participants [1,3]. These studies
relied on small paper-based data sets of fewer than 8 alternatives, in marked contrast to the large data sets characterizing on-line decision-making. This has implications for the generalizability (ie. applicability to the real world and to contexts other than that of the experiment) of the experimental results to on-line decision-making. In general, decision-making strategy was not controlled in [1,3] (except that cut-off scores were presented in some treatments in [1], see [5] for further discussion). Consequently, observed decision outcomes that were attributed solely to the use of tags could actually have depended partly or completely on the strategy or strategies used—potentially impacting experimental validity (ie. satisfactorily establishing that observed results are based on manipulations of independent variables). In fact, evidence of the dependence of observed outcomes on decision-making strategy was reported in [4]. Concerns of scale and strategy were addressed in this study by using separate online interfaces, each with a different built-in decision-making strategy, to access an electronic database of 100 decision alternatives. However, usability concerns remain, as detailed in the following two paragraphs.

The specific DQ characteristic or criterion (e.g. security or precision) whose value is represented by the DQ tag is the tag semantics (i.e. meaning). The only guide to the meaning of the DQ tag used in previous experiments is its label (reliability in [1,3] and accuracy in [4]), without any further explanation given to participants. In fact, a DQ tag could potentially be based on a number of different DQ criteria discussed in the literature [6,7,8,9,10]. Unless DQ tag semantics are specified explicitly, there might not be agreement between the interpretations of different experimental subjects or between their interpretations and that of the researcher. Since each individual finds his or her own internal interpretation clear, a problem with consistency is likely to go undetected in laboratory-based pilot studies. Such inconsistency could lead to random error in when or how DQ tags are used that impacts the reliability (ie. repeatability) of the experiment. Specific documented cases of such an occurrence in practice are discussed in [5], where data labelled as being of poor quality using the term accuracy were used for decision-making by participants who interpreted the DQ tag semantics as numerical precision and ignored by participants who interpreted the semantics as correctness (ie. the correspondence of the database to the real world).
One previous DQ tagging experiment was designed to address these limitations and improve support for experimental soundness (i.e. generalizability, reliability, and validity) through explicit consideration of usability and DQ semantics [5]. The results of that study differed from earlier DQ tagging research, insofar as there was no evidence of DQ tags impacting the preferred decision choice but some indication of
decreased decision-making efficiency and consensus with DQ tags nonetheless. This was despite the fact that the decision task (ie. simple rental property selection), attributes, treatment group sample sizes, participant characteristics (ie. experienced), and DQ tag values were selected in order to be consistent with the treatments reported to have the highest levels of DQ tag use in earlier DQ tagging research [1,3,4]. In particular, the same attribute, commute time, had the lowest DQ value. However, participant comments regarding their concerns over the recent dramatic increase in petrol prices and environmental awareness highlighted the increased importance of this attribute in [5] as opposed to earlier DQ tagging studies [1,3,4]. Based on exit queries addressed to all participants with DQ tags, it was clear that the majority of participants with tags included commute time in their decision-making because of its importance to the decision despite understanding that it was of low quality (ie. 28 out of the 33 participants with DQ tags). This clearly showed that DQ tags are ignored for those attributes considered by decision-makers to be of critical importance.

Consequent on this observation are two related questions: (1) whether the difference in results between [5] and earlier studies [1,3,4] could be explained by a likely increase in the importance accorded to the attribute tagged with the lowest DQ (ie. commute time), and (2) whether decision-makers would be more willing to ignore attributes designated to be of low quality if those attributes were perceived by decision-makers to be of lower importance. In order to answer these two questions, it was proposed that further experiments be conducted that have attributes perceived to be of lesser (eg. moderate rather than critical) importance tagged with the lowest DQ value. Thus, the goal of the current experiments is to test whether discrepancies in previously reported results can be accounted for by differences in the relative importance of the attribute(s) identified as being of poor quality.

The current experimental design has tag semantics, including derivation rules (the method used to calculate tag values), explicitly specified and is based on recommendations from a usability study conducted earlier. This helps to ensure that the DQ information provided is meaningful and the experimental context is credible. Section 2.1 overviews semantic and cost-based considerations in DQ tag design, Section 2.2 overviews usability considerations in DQ tag design, and Section 2.3 describes the basis for deciding which attributes to designate as being of low DQ in the experimental design. Section 3 presents the experimental design and research hypotheses. Results and discussion follow in Section 4, with conclusions in Section 5.
2 Design Considerations

2.1 DQ Semantics and Cost Considerations

The semantics and derivation of DQ tags for the experiment are to be explicitly specified in the experimental materials given to participants. The semantics of the tags used in the current experiments are based on a previously defined information quality framework called InfoQual [8], selected by virtue of its sound theoretical foundation and comprehensive coverage of different aspects of DQ. Different types of DQ tags can be defined based on InfoQual's three DQ categories and their criteria. The first two categories (i.e. data conformance to rules and real-world correspondence) are
relatively objective in nature since they are inherently based on the data set itself rather than the individual user or context of use. In contrast, the third category (i.e. usefulness) is necessarily subjective since it is based on context-specific information consumer views (see [8] for a detailed discussion). Therefore DQ tags based on subjective quality measures must be associated with additional contextual information (e.g. activity or task, organizational or geographic context, user profile) to be meaningfully interpreted or used. Within each of these three categories, InfoQual defines a set of individual DQ criteria further detailing DQ aspects. For example, the usefulness category includes criteria such as security and timeliness, whereas the correspondence category includes criteria such as completeness, correctness, and consistency (of the real-world representation in the database).

Since the creation, storage, and maintenance of tags incurs additional costs that offset potential benefits, it is desirable to restrict the scope to those choices that are likely to be the most practical in terms of simplicity, cost, and use. In the following discussion, we assume that a single decision-making criterion is represented as a database attribute or, equivalently (assuming a relational database), as a column in a relational table; thus the three terms are used interchangeably. Cost-based issues that must be considered include those relating to DQ tag semantics, granularity, and level of consolidation, considered in detail in [11] and summarized here. Cost considerations thus suggest that DQ tags be:

• Objective: ie. have semantics based on objective rather than subjective aspects of DQ, so that there is no need to store any contextual information. In the context of InfoQual, this means tags should be based on data conformance to database rules or real-world correspondence rather than on usefulness.
• Column-based: ie. specified in terms of a single DQ measurement for all the values of a single column. Relation, column, row, and field levels of granularity have progressively increasing information value but with the trade-off of increasing overheads. Column-level tagging is a natural compromise in the context of multi-criteria decision making, since the underlying cognitive processes involve evaluation of decision alternatives in terms of relevant criteria that are represented as columns.
• Consolidated: composite tags used when possible to logically combine related DQ criteria in order to limit storage overheads and reduce semantic complexity for users, eg. previous research [8, p.53] shows that users find it difficult to distinguish between individual criteria in the data correspondence category of InfoQual, preferring to combine them in a single summarized (i.e. category-level) concept instead—thus supporting the use of a single composite tag in this case (see the sketch after this list).
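To make the three choices concrete, the following is a minimal, purely illustrative data structure for such a tag; the class name, fields and values are hypothetical and are not part of the experimental materials described in this paper.

```python
from dataclasses import dataclass

@dataclass
class ColumnDQTag:
    """One objective, consolidated DQ tag per column (all names here are illustrative)."""
    table: str
    column: str
    correspondence: float  # single composite real-world-correspondence value for the column
    derivation: str        # documented rule used to compute the value

tag = ColumnDQTag("rental_properties", "commute_time", 0.45,
                  "share of sampled values matching an independent real-world check")
print(tag)
```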
These choices serve to limit the amount of extra information (and thus overhead costs) required for DQ tags. However, cost is not the only consideration for DQ tag design. As discussed in section 1.1, it is important to address usability concerns, especially given their potential impact on experimental soundness and the lack of common business conventions that could serve to guide experimental design or participant understanding of DQ tags. Usability questions relate to which aspects of DQ and which possible DQ tag representations (including incorporation of such tags into the decision-making artefact) are the most understandable and relevant to decision-makers. Answers to the first question allow researchers to focus their
attention and limited resources on those types of DQ tags judged by decision-makers to be the most likely to impact decision-making. Answers to the second question can be used to provide support for improved experimental soundness. The next section overviews a usability study conducted for this purpose.

2.2 Usability

Contextual inquiry [12,13]—the interrogatory component of the user-centered design approach called contextual design [12]—can be used to solicit user opinions on DQ tag design and use in the work environment rather than in a contrived laboratory setting. This technique involves on-site interviews of users—in this case decision-makers—while they demonstrate their real decision-making tasks. [12,13] describe how work processes and artefacts serve as reminders enabling decision-makers to articulate their opinions in more detail and in reference to actual business practice (eg. rather than to the internal coherence of an experiment, as is typical in a pilot test). Based on this approach, [11] describes in detail a usability study designed to solicit judgements from decision-makers as to the types and representations of DQ information most understandable and relevant in the work and current experimental contexts.

Following the contextual inquiry guidelines from [12,13] for a single work role (ie. a decision-maker), investigators conducted one-hour audio-taped interviews with 9 different decision-makers representing a diverse set of organizational (eg. sizes), data (eg. types and sources), decision (eg. domain), and technological (eg. software) contexts. Interviewees were asked to demonstrate a multi-criteria, data-intensive, and on-line decision-making task and to explain their experience in response to interviewer questions throughout the interview. In order to address specific usability questions related to DQ tag and experimental design, an additional and novel segment was added after the standard contextual inquiry interview. This ordering ensures that the additional segment does not bias the initial demonstration of current decision-making practice. In the additional segment, decision-makers were first asked to reflect on possible ways to improve the demonstrated decision-making task using DQ tags and then asked to review alternative DQ tag representations and a proposed decision-making interface incorporating tags. Transcribed interviews were analyzed as specified in [12,13], with the end result being a table summarizing interviewee responses on a structured list of topics and a set of design recommendations based on this analysis. The value of these recommendations is supported by the general agreement between interviewed decision-makers despite their diverse contexts and despite the two different domains (work and experimental) considered.

The only type of DQ information considered to be of general interest at the attribute-based level was the degree of data correspondence to the represented real-world values. Based on clear respondent preferences, the representation of such information would use the nomenclature accuracy, use a traffic light to graphically represent range-based tag values based on both color and position, and include explicit documentation of tag semantics and derivation. With respect to the proposed experimental decision-making software artefact (ie. on-line decision-making interface), it was recommended that other interface elements (eg.
attributes, value units) be documented using pop-up boxes and that desirability scores not be included in the decision-making interface. Such scores have been used in some earlier studies [1,3,4] to allow the relative
desirability of different attribute values (i.e., criteria) and alternatives to be compared despite differences in attribute measurement units (e.g., dollars of rent versus number of bedrooms for a rental apartment) and directionality (a lower rent but higher number of bedrooms is preferred); however, they were deemed to be unnecessary and confusing. The usability study thus highlighted possible design issues in those earlier studies that did not explicitly consider usability [1,3,4]. In particular, their inclusion of desirability scores in the decision-making artefact and their use of a single numerical figure to represent DQ values without explicit specification of semantics are in direct contrast to the expressed preferences of the majority of interviewed decision-makers.

2.3 Selecting DQ Tag Values

The goal of the current experiment is to test whether decision-makers are more likely to ignore low quality attributes (i.e., based on consideration of their DQ tags) if they are not of critical importance to the decision task (see section 1.1). Therefore, in contrast to all previous studies [1,3,4,5], the selection of new DQ values must take into consideration decision-makers' perceptions of the relative importance of different attributes to rental-property selection. In particular, we need to justify the decision as to which attribute(s) should be associated with the lowest DQ tag value. The challenge is to find an attribute of the right level of importance: neither too high (in which case participants may ignore the DQ tags and use the attribute for decision-making regardless of its DQ tag value) nor too low (since participants may ignore the attribute regardless of whether it has a tag or not because it is of no interest to the decision). In order to find an attribute of moderate importance, we analyzed the rankings of decision attributes (i.e., criteria) by priority from the experimental answer sheets completed by those participants not given tags in [5]. The decision attributes rent, number of bedrooms, and commute time were ruled out as possibilities based on their ranking, with more than 65% of participants ranking commute time in the top two, more than 65% of participants ranking number of bedrooms in the top three, and all participants ranking rent in at least the top three and usually the topmost attribute(s) in terms of importance to the decision task of rental property selection. Of the other two attributes, the majority (64% for floor space and 72% for parking facilities) ranked them as the two least important attributes. However, a minority of participants did consider these attributes to be important, as indicated by the fact that 36% ranked floor space as either the second or third most important attribute and 28% ranked parking space as one of the top three attributes in terms of importance. The contrasting figures (in contrast to floor space, parking facilities has a larger percentage of participants ranking it low but some rank it as most important) make it difficult to judge which is of more importance overall. However, it is clear that there is rarely a case (only 1 out of 39) where a given participant ranks both attributes as high (i.e., from the first to third most important attribute). Thus it is unlikely that a single participant will regard both attributes as so critically important that they will include both of them even if of poor quality. We expect that participants would be willing to remove one or the other if both are of poor quality.
However, if participants rank both floor space and parking facilities very low, then they may not include either attribute in their selections in any case—so making these attributes poor quality
wouldn’t affect apartment selections. Apartment selections can only be potentially affected by quality information if participants are interested in including at least one of these two attributes in their selected attributes without considering attribute quality. In fact, one of the decision-makers interviewed in the usability study expressed his opinion that DQ information was only of potential use for moderately important decision-criteria, since decision-makers would include critical criteria and exclude marginal criteria from their decision-making process in any case. The decision was therefore made to include both as attributes with the same and lowest DQ value, in the hope that at least one would be viewed as moderately important by participants who might then consider both the attribute and its DQ tag in their decision-making.
3 Research Design

In order to examine the impact of DQ tagging on decision outcomes for different decision-making strategies, we use a laboratory experiment. The research model is shown below in Figure 1, illustrating the causal relationships between theoretical constructs (represented in circles) using solid lines and the measurement relationships between theoretical constructs and their empirical indicators (represented using rectangles) using dotted lines. In common with [4], the focus is on multi-criteria, data-intensive and on-line decision-making with the decision strategy controlled. In general, therefore, we base our experimental methodology and design on theirs. However, central to our work—and distinguishing it from earlier DQ tagging research [1,3,4]—is the explicit emphasis on usability and DQ semantics in designing the DQ tags and incorporating them into a decision-making artefact. To this end, the experimental design in [4] is modified in accordance with the semantic-, cost-, and usability-based recommendations outlined in section 2. The independent variables were decision strategy and DQ tagging, both of which had two levels. Additive and EBA strategies are selected based on their contrasting properties (i.e., compensatory and alternative-based versus cut-off—therefore non-compensatory—and attribute-based respectively). DQ tags are either present or absent. The result is four separate experimental treatments: additive with DQ tags, additive without DQ tags, EBA with DQ tags, or EBA without DQ tags. The dependent variables, potentially impacted by DQ tags, are decision complacency, consensus, efficiency, and confidence. As in previous DQ tagging studies [1,3,4], complacency and consensus relate to the impact of DQ tags on decision choice. Complacency refers to the degree to which decision-makers ignore DQ information. Thus a non-complacent outcome means that there is a significant change in the preferred decision choice when DQ tags are present as compared to that made when DQ tags are absent. The preferred decision choice is defined as that made by the plurality of participants in the treatment group. Consensus refers to the level of agreement on decision choice between decision-makers: we are interested in whether the level with DQ tags is the same as that without DQ tags. Essentially, we are asking whether there is any difference in the number of participants making the preferred decision choice with and without tags, even though the preferred choice may not be the same for the two different groups. In accordance with the research model in [4], we investigate whether the presence of DQ tags significantly changes the time required to make the decision or the decision-maker's confidence in the decision choice made.
Fig. 1. Research Model (independent variables: Decision Strategy, with levels Additive or EBA, and Data Quality Tagging, with levels Present or Absent; dependent variables: Decision Complacency, Decision Consensus, Decision Efficiency, and Decision Confidence; measures: Decision Choice, Decision Time, and Confidence Rating; hypotheses H1a,b to H4a,b)
The case for the potential benefit of DQ tags would best be supported if the experiment shows that decision-makers are not complacent and have increased (or at least the same) level of consensus, efficiency, and confidence with tags. Such results require the rejection of the corresponding null hypotheses, formulated as follows: decision-makers are complacent (H1), and there is no difference in decision consensus (H2), efficiency (H3), or confidence (H4) with DQ tags for either the additive (a) or EBA (b) decision-making strategies. A chi-squared statistic is used to test the first two hypotheses. Depending on whether the data is normally distributed or not, either an independent samples t-test or a Mann-Whitney test is used for the last two hypotheses. Tests were run using SPSS (version 15) and with reference to [14]. Since previous research shows that DQ tag usage generally increases with decision-maker experience (see section 1.1) and considering the resources available, participants were limited to postgraduate university students enrolled in a master's or PhD degree. Such students are likely to have had more decision-making experience and prior professional experience than undergraduates. In fact, 52 of the 69
participants (i.e., 75%) had prior work experience and 24 out of 69 (i.e., 35%) had prior managerial experience. The decision-making task used in the experiment involves selection of a preferred rental apartment based on the weekly rental cost, number of bedrooms, floor space, commute time, and parking facilities. The task involved recording decision start and finish times, nominating a confidence level using a 5-point Likert scale ranging from very low to very high, and providing a brief explanation of the decision. In particular, participants were asked to write down which attributes—if any—they ignored in their search and why. Surveys of the target participant population of postgraduate university students showed that they typically had a good understanding of the domain and were frequent users of actual on-line rental property selection applications. The rental-property application domain, the set of attributes used (both their description and number), and the treatment group sample sizes are selected in order to be consistent with the simple task used in previous DQ tagging research [1,3,4,5], since prior research findings suggest that DQ tag usage is more likely in a simple rather than a complex decision-making task—especially for university students. Following the recommendations from section 2.3, the attributes floor space and parking facilities are both given the lowest (and equal) DQ values based on decision-maker perceptions that they were less important than other attributes in the rental property domain. In the context of the decision-making task, the above hypotheses are operationalized as no significant difference in the preferred apartment (H1), the number of decision-makers selecting the preferred apartment (H2), the decision time (H3), or the nominated decision confidence (H4). We adopt a relational database-type interface and use Microsoft Access software for development as in [4] – both well-understood and widely used. Issues of scale and decision-making strategy (each potentially affecting the impact of DQ tagging) are similarly addressed through the use of separate on-line interfaces for each experimental treatment, each with 100 alternatives and a specific built-in decision strategy and some including DQ tags. A set of instructions and an answer sheet were also developed for each different interface. In common with previous DQ tagging experiments, the alternatives in the databases are designed so that one apartment is clearly the most desirable without considering DQ information but is less desirable if DQ values from either or both of the low quality attributes are considered (i.e., resulting in the associated low quality attribute(s) being ignored). In other words, the apartment automatically ranked as being the most desirable changes depending on whether both low quality attributes, only floor space, only parking facilities, or neither low quality attribute is included in the search. (This is true for either built-in decision-making strategy; however, usually the top three ranked apartments change for the additive decision strategy.) The additive interface with tags is shown in Figure 2. Calculated desirability scores are not displayed and concise explanations of interface elements are given in pop-up windows, in line with the recommendations resulting from the usability study described in section 2.2.
Further to this purpose and in contrast with previous experiments, the current experiments use DQ tags with (1) semantics based on data correspondence (to the real-world), (2) range-based values represented by a traffic light symbol labelled accuracy, and (3) semantics and
Fig. 2. Interface for Additive decision strategy with DQ tags
derivation explicitly specified¹ in both on-line and paper-based experimental materials. Furthermore, illustrative examples are included in the explanatory notes given to participants to clarify that the intended meaning of this term is correctness (based on real-world correspondence) rather than numerical precision (see Section 1.1 for documented evidence of such confusion). The DQ tags used in the experiment are attribute-based, based on objective rather than subjective DQ information, and consolidated. As explained in section 2.1, user preferences evident from prior research [8, p.53] support the consolidation of correspondence-based DQ information (described by the set of related InfoQual criteria in the correspondence category) into a single DQ tag. Five postgraduate students and one professional participated in a pilot study of the experiment. They were asked to verbalize their thoughts during the experiment, resulting in minor modifications to the screen display and instruction wording and the repair of one software bug. A further pilot study with 14 postgraduate students writing down their comments provided evidence that the materials were understandable and did not result in any new suggestions, indicative of saturation. Participants were randomly assigned to one of four treatment groups. After being introduced to the experimental task scenario, decision-makers are asked to search for, select, and rank in order of priority their preferred rental apartment. Decision-makers select and—for the EBA strategy—rank those attributes they wish to include in the search. Desirability scores are automatically calculated and alternatives sorted in order of decreasing desirability based on the attributes selected and the specific decision-making strategy built into the interface. Participants can then repeatedly change their attribute selections and re-sort. For the treatments involving DQ tags, completed answer sheets were checked before participants left the laboratory to see whether either of the attributes with the lowest DQ value—floor space or parking facilities—was used in the search. If so, the participant was asked whether he or she understood the meaning of the DQ tags and, if so, why the attribute was used despite its poor quality.

¹ As "The traffic light symbol shows the percentage of column values in a random data sample that correctly corresponded to the represented real-world information, with a red light at the top indicating 0-33%, a yellow light in the middle indicating 34-66%, and a green light at the bottom indicating 67-100% of the values were correct."
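As an illustration of the range-based accuracy tag just described, the following sketch (hypothetical names; the actual experimental artefact was a Microsoft Access interface, not this code) maps a sampled accuracy percentage to the three traffic-light values of footnote 1:

// Hypothetical sketch only: maps the accuracy percentage from footnote 1 to a
// traffic-light value; not part of the experimental software described above.
enum TrafficLight { RED, YELLOW, GREEN }

final class AccuracyTag {
    // percentCorrect: share of sampled column values matching real-world values
    static TrafficLight forAccuracy(int percentCorrect) {
        if (percentCorrect <= 33) return TrafficLight.RED;     // 0-33% correct
        if (percentCorrect <= 66) return TrafficLight.YELLOW;  // 34-66% correct
        return TrafficLight.GREEN;                             // 67-100% correct
    }

    private AccuracyTag() { }
}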
4 Results and Discussion

Differences between an observed and the expected frequency distribution, where expected frequencies are derived from groups with no DQ tags, were checked with a chi-squared test. This test is non-parametric and therefore relatively free of underlying assumptions. Yates' Correction for Continuity is used as appropriate for a 2x2 chi-squared table. Results for analysis of decision complacency (H1a, H1b) and consensus (H2a, H2b) are summarized in Table 1. We can see that there are no significant differences (p>.05) for either decision-making strategy; therefore, it is not possible to reject the null hypotheses.

Table 1. Analysis of Complacency and Consensus
                 Decision Strategy
                 Additive                       Elimination by Attributes
Complacency      χ2 = .000, p = 1.000 (H1a)     χ2 = .896, p = .344 (H1b)
Consensus        χ2 = .000, p = 1.000 (H2a)     χ2 = .896, p = .344 (H2b)
For either decision-making strategy, Table 2 shows that the preferred apartment is the same with and without tags; so the chi-squared statistic is the same for complacency and consensus. For each treatment group, Table 2 also shows the total number and percentage of participants selecting any apartment other than the preferred one under the label Other. Information about the next-largest plurality of participants (compared to those selecting the preferred apartment) is given under the label Alternate. The size of each treatment group is specified under the label Total. We can see that for all treatments, the plurality of participants selecting the preferred apartment is much larger (ranging from 11% to 64% larger) than any other plurality; thus the less sensitive non-parametric chi-squared statistic does not show a significant difference in consensus. However, for both decision strategies, the percentage of participants selecting the preferred apartment decreases (from 41% to 35% with additive and from 76% to 56% with EBA) when DQ information is present (compared to when it is not). This suggests that there may be an overall decline in consensus with tags that is not detected by the chi-squared test.
Table 2. Number of Participants Selecting Preferred and Other Apartments
                          Number of Participants Selecting Apartment
                          (% in treatment group selecting from set of specific apartments listed)
                          No Tags                                  Tags
Additive   Preferred      7 (41% for apt 83)                       6 (35% for apt 83)
           Other          10 (59% for apt 29, 57, 70, 90 or 91)    11 (65% for apt 36, 70, 90, 91 or 98)
           Alternate      3 (18% for apt 57)                       4 (24% for apt 70)
           Total          17                                       17
EBA        Preferred      13 (76% for apt 67)                      10 (56% for apt 67)
           Other          4 (24% for apt 1, 5 or 44)               8 (44% for apt 1, 5, 44 or 88)
           Alternate      2 (12% for apt 44)                       5 (28% for apt 5)
           Total          17                                       18
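The statistical analysis itself was run in SPSS; purely as an illustration of the kind of 2x2 test applied to these counts, the sketch below recomputes the Additive complacency test from Table 2 using Apache Commons Math (an assumption, not a tool used in the study). Note that ChiSquareTest does not apply Yates' correction, so the values will differ slightly from those reported in Table 1.

// Illustrative sketch only: 2x2 chi-squared test of independence on the Additive
// counts from Table 2 (preferred apartment vs. any other, with and without tags).
// Assumes Apache Commons Math 3 is available; the authors used SPSS 15.
import org.apache.commons.math3.stat.inference.ChiSquareTest;

public class ComplacencySketch {
    public static void main(String[] args) {
        // Rows: No Tags, Tags; columns: preferred apartment (83), other apartments
        long[][] additive = { { 7, 10 }, { 6, 11 } };
        ChiSquareTest test = new ChiSquareTest();
        System.out.printf("chi-squared = %.3f, p = %.3f%n",
                test.chiSquare(additive), test.chiSquareTest(additive));
    }
}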
Table 3. Analysis of Time and Confidence
                          Decision Strategy
                          Additive                            Elimination by Attributes
                          Mean rank   Mean    SD              Mean rank   Mean    SD
Time        No Tags       16.75       8.59    5.48            18.53       5.53    2.50
            Tags          15.20       8.44    6.46            17.50       5.44    3.48
                          p = .633 (H3a)                      p = .763 (H3b)
Confidence  No Tags       17.56       2.41    .71             18.32       2.00    .61
            Tags          17.44       2.35    .70             17.69       2.00    .84
                          p = .970 (H4a)                      p = .839 (H4b)
An analysis of answer sheet and exit query responses showed that 23% (8 out of 35) of the participants given tags and 18% (6 out of 34) of the participants not given tags ignored floor space, either because they didn't care or—for 62% (5 out of 8) of those given tags who ignored the attribute—because the attribute was labelled as being of poor quality. For parking facilities, 51% (18 out of 35) of the participants given tags and 44% (15 out of 34) of the participants not given tags ignored the attribute, for the majority of participants because they didn't care but—for some (22%, 4 out of 18) of those participants given tags who ignored the attribute—because the attribute was labelled as being of poor quality. So we can see that only a minority of participants took the tags into account when determining which attributes to use in the search. A visual inspection of relevant histograms, the Kolmogorov-Smirnov test, and the Shapiro-Wilk test revealed that both decision efficiency and confidence rating
demonstrated significant variations from normal distribution for both decision strategies. Hence, the non-parametric Mann-Whitney test was used for data analysis (mean rank and level of significance shown for each treatment group). The mean and standard deviation are also shown for each treatment group. Results for analysis of decision efficiency (H3a, H3b) and confidence (H4a, H4b) are summarized in Table 3. There were no significant results; thus none of the null hypotheses can be rejected.
5 Conclusion

Decision confidence remained high throughout all treatments, consistent with the findings reported in [4,5]. However, in contrast to these two studies, there was no change in decision time with DQ tags for either decision-making strategy. With respect to decision complacency and consensus, the results of this study are markedly different from those of previous DQ tagging research [1,3,4] but consistent with recent experiments by the authors [5]. That is, there were some indications of an overall decline in decision consensus with tags even when decision-makers were complacent, i.e., did not change their preferred decision choice. As detailed in section 1.1, the current experiments were designed to answer questions raised in [5] by participant comments that even a very low quality attribute would be considered if it was very important for the decision task. The current experiments were therefore designed to see whether DQ tags would be used if a less important attribute was tagged as being of low quality; however, there was no change in results. Thus the majority of experimental participants from either decision-strategy treatment ignored DQ tags indicating that attributes were of poor quality, regardless of whether those attributes were regarded as important to the decision task or not. These findings do not support the adoption of DQ tagging in general business contexts. Earlier DQ tagging studies [1,3,4] could be said to provide limited support for the possible utility of DQ tags in specific circumstances, although with caveats regarding the associated risk of increased decision time and reduced consensus. However, we can see that their findings and recommendations were not reproduced in recent DQ tagging experiments even though the task complexity (i.e., simple) and target population (i.e., with decision-making, work, and managerial experience) selected for these experiments were based on the treatments reported in these early studies to be associated with the highest levels of DQ tag usage. A notable distinction of the recent studies compared to the earlier ones is their explicit emphasis on usability and semantics in experimental design, with the aim of providing better support for experimental soundness. We plan to conduct cognitive process tracing studies to explain these results more fully and to understand why DQ tags were largely ignored in these experiments. The experimental decision task and domain were selected to be consistent with previous research and familiar to participants; however, future studies are planned to test whether observed results generalize to other contexts.

Acknowledgments. We thank Ranjani Nagarajan and Ganesh A. for their excellent work in preparing materials and all their help and the Monash University Schools of Information Technology for letting us run some experimental sessions there.
References

1. Chengalur-Smith, I.N., Pazer, H.L.: The Impact of Data Quality Information on Decision Making: An Exploratory Analysis. IEEE TKDE 11(6), 853–864 (1999)
2. Payne, J., Bettman, J., Johnson, E.: The Adaptive Decision Maker. Cambridge Univ. Press, Cambridge (1993)
3. Fisher, C., Chengalur-Smith, I.N., Ballou, D.: The Impact of Experience and Time on the Use of Data Quality Information in Decision Making. Information Systems Research 14(2), 170–188 (2003)
4. Shanks, G., Tansley, E.: Data Quality Tagging and Decision Outcomes: An Experimental Study. In: IFIP Working Group 8.3 Conference on Decision Making and Decision Support in the Internet Age, Cork, pp. 399–410 (2002)
5. Price, R., Shanks, G.: Data Quality Information and Decision-Making: Semantics, Usability, and Impact. Technical Report, Clayton School of Information Technology, Monash University (2009)
6. Eppler, M.J.: The Concept of Information Quality: An Interdisciplinary Evaluation of Recent Information Quality Frameworks. Studies in Communication Sciences 1, 167–182 (2001)
7. Even, A., Shankaranarayanan, G., Watts, S.: Enhancing Decision Making with Process Metadata: Theoretical Framework, Research Tool, and Exploratory Examination. In: 39th Hawaii International Conference on System Sciences (HICSS 2006), Hawaii, pp. 1–10 (2006)
8. Price, R., Shanks, G.: A Semiotic Information Quality Framework: Development and Comparative Analysis. Journal of Information Technology (JIT) 20(2), 88–102 (2005)
9. Wand, Y., Wang, R.: Anchoring Data Quality Dimensions in Ontological Foundations. CACM 39(11), 86–95 (1996)
10. Wang, R., Strong, D.M.: Beyond Accuracy: What Data Quality Means to Data Consumers. Journal of Management Information Systems (JMIS) 12(4), 5–34 (1996)
11. Price, R., Shanks, G.: Data Quality Tags and Decision-Making: Improving the Design and Validity of Experimental Studies. In: IFIP TC8/WG8.3 Working Group's International Conference on Collaborative Decision-Making (CDM 2008), Toulouse, pp. 233–244 (2008)
12. Beyer, H., Holtzblatt, K.: Contextual Design: Defining Customer-Centered Systems. Morgan Kaufmann, San Francisco (1998)
13. Holtzblatt, K., Jones, S.: Conducting and Analyzing a Contextual Interview (Excerpt). In: Readings in Human-Computer Interaction: Towards the Year 2000, pp. 241–253. Morgan Kaufmann, San Francisco (2000)
14. Pallant, J.: SPSS Survival Manual: A Step by Step Guide to Data Analysis Using SPSS for Windows (Version 10). Allen & Unwin, Crows Nest, NSW, Australia (2001)
Predicting Timing Failures in Web Services

Nuno Laranjeiro, Marco Vieira, and Henrique Madeira

CISUC, Department of Informatics Engineering, University of Coimbra, Portugal
{cnl,mvieira,henrique}@dei.uc.pt
Abstract. Web services are increasingly being used in business critical environments, enabling uniform access to services provided by distinct parties. In these environments, an operation that does not execute on due time may be completely useless, which may result in service abandonment, and reputation or monetary losses. However, existing web services environments do not provide mechanisms to detect or predict timing violations. This paper proposes a web services programming model that transparently allows temporal failure detection and uses historical data for temporal failure prediction. This enables providers to easily deploy time-aware web services and consumers to express their timeliness requirements. Timing failures detection and prediction can be used by client applications to select alternative services in runtime and by application servers to optimize the resources allocated to each service. Keywords: web services, timing failures, detection, prediction.
1 Introduction

Web services provide a simple interface between a provider and a consumer and are increasingly becoming a strategic vehicle for data exchange and content distribution [1]. Compositions, which are based on a collection of web services working together to achieve an objective, are particularly important. These compositions are normally defined at programming time as "business processes" that describe the sequencing and coordination of calls to the component web services. Calls between web service consumers and providers consist of messages that follow the SOAP protocol, which, along with WSDL and UDDI, form the core of the web services technology [1]. Developing web services able to deal with timeliness requirements is a difficult task as existing web services technology, programming models, and development tools do not provide easy support for assuring timeliness properties during web services execution. Although some transactional models provide basic support for detecting the cases when operations take longer than the expected/desired time [2], this usually requires a high development effort. In fact, developers have to select the most adequate middleware (including a transaction manager that must fit the deployment environment requirements), produce additional code to manage the transactions, specify their properties, and implement the basic support for timing requirements. Transactions are actually well suited for supporting typical transactional behavior, but they
are inadequate for deploying simple time-aware services. Indeed, transactions provide poor support for timing failures detection and no support for prediction. Despite the lack of mechanisms and tools for building time-aware web services, the number of real applications that have to support this kind of requirement is quickly increasing. Typically, developers deal with these requirements by implementing ad-hoc solutions to support timing failures (this is, obviously, expensive and prone to fail). The concept of time has been, in fact, completely absent from the standard web services programming environment. Important features such as timing failure detection and forecasting have been overlooked, although these are particularly important if we consider that services are typically deployed over wide-area or open environments that exhibit poor baseline synchrony and reliability properties. In these environments it is normal for services to exhibit high or highly variable execution times. High execution times are usually associated with the serialization process involved in each invocation, coupled with a high amount of protocol information that has to be transmitted per payload byte (e.g., the SOAP protocol requires a large amount of data to encapsulate the useful data to be transmitted). This serialization process is particularly important since a given web service can also behave as a client of another service, thus duplicating the end-to-end serialization effort. Variable execution times are essentially related to the use of unreliable, sometimes slow, transport channels (i.e., the internet) for client-server and inter web services communication. These characteristics make it difficult for developers to deal with timeliness requirements. Two outcomes are possible when considering timing requirements during a web service execution: either the server is able to produce an answer on due time, or not. The problem is that, in both cases, the client application has to wait for the execution to complete or for the deadline to be violated (in this case a timing failure detection mechanism must be implemented). However, in many situations it is possible to predict in advance the occurrence of timing failures. In fact, execution history can typically be used to confidently forecast whether a timely response can be obtained. Note that this is of utmost importance for client applications, which can retry or use alternative services, but also for servers, which can use this information to conveniently manage the resources allocated to each operation (e.g., an operation that is predictably useless can be canceled or proceed executing under a degraded mode). This paper proposes a new programming model for web services deployment that allows online detection and prediction of timing failures (wsTFDP: Web Services Timing Failures Detection and Prediction). By using this model clients are able to express their timeliness requirements for each service invocation by defining a timeout value and an associated confidence value for prediction. When timing requirements cannot be satisfied (e.g., because the deadline was exceeded or because it will predictably be exceeded) the server responds with a well-known and consistent exceptional behavior. A simple and ready-to-use programming interface implementing the proposed model is also provided. The structure of the paper is as follows. The following section presents background and related work.
Section 3 presents a high level view of the timing failures detection and prediction mechanism. Sections 4 and 5 detail the design of the detection and prediction components. Section 6 shows how the mechanism can be used in practice, and Section 7 presents some experimental results. Section 8 concludes the paper.
2 Background and Related Work

Web services environments can be characterized, essentially, as environments of partial synchrony. In fact, their basic synchrony properties are affected only by specific parts of the system. Several partial synchrony models have been proposed, with different solutions to address application timeliness requirements. The idea of using failure detectors that encapsulate the synchrony properties of the system was first proposed in [3]. The work in [4] introduces the notion of Global Stabilization Time (GST), which limits the uncertainty of the environment. The Timed model [5] allows the construction of fail-aware applications, which always behave timely or else provide awareness of their failure. The Timely Computing Base (TCB) model [6] provides a generic framework to deal with timeliness requirements in uncertain environments. In [7] we proposed a programming approach to help developers program time-bounded web service applications. The proposed approach implements timing detection (prediction is not addressed) in a transparent way. Several works on prediction can be found in the literature. In [8] a Support Vector Machine-based model for software reliability prediction is proposed and issues that affect the prediction accuracy are studied. These issues include: whether all historical failure data should be used and what type of failure data is more appropriate to use in terms of prediction accuracy. The accuracy of software reliability prediction models based on SVM and artificial neural networks is also compared in this work. Local prediction schemes only use the most recent information (and ignore information from far-away data). As a result, the accuracy of local prediction schemes is limited. Considering this, a novel approach named the Markov–Fourier gray model is proposed in [9]. The approach builds a gray model from a set of the most recent data and a Fourier series is used to fit the residuals produced. Then, the Markov matrices are employed to encode possible global information also generated by the residuals. The authors concluded that the global information encoded in the Markov matrices can provide quite useful information for predictions. In [10], a framework is presented that incorporates multiple prediction algorithms to enable navigation of autonomous vehicles in real-life, on-road traffic situations. At the lower levels, the authors use estimation-theoretic short-term predictions via an extended Kalman filter-based algorithm that uses sensor data to predict the future location of moving objects with an associated confidence measure. At the higher levels, the authors use a long-term situation-based probabilistic prediction using spatiotemporal relations and situation recognition. The authors also identify the different time periods where the two algorithms provide better estimates and demonstrate the possibility of using results of the short-term prediction algorithm to strengthen/weaken the estimates of the long-term prediction algorithm. A key aspect to retain from these works is that both recent and older historic data can be valuable when applying a particular prediction algorithm. In this sense, a prediction mechanism must include support for considering different periods of historical data when trying to predict a timing failure. This particular support should be at least configurable on the server side.
In some applications it may also be important to provide client-side selection of the historical data to be considered in each prediction. This, of course, depends on the particular goals of the software being deployed.
3 Timing Failures Detection and Prediction Mechanism

The Web Services Temporal Failure Detection and Prediction (wsTFDP) mechanism builds on top of the failure detection mechanism we proposed in [7]. Besides failure detection, wsTFDP enables us to collect accurate runtime failure data that is later used for failure prediction. To be useful in real scenarios, a timing failures detection and prediction mechanism must achieve a key set of quality attributes. Thus, before developing the mechanism we established the following objectives:
• Low latency: Timing failures must be detected within small latency intervals. In other words, failures must be detected on time, or the detection process itself may become of little or no use. Our goal was to design the mechanism in such a way that it guarantees detection latency under 100 ms, a reasonable value when considering the typically high execution times of web services.
• High accuracy: The number of false positives must be very low. A low false-positive rate means that only in a few cases does the mechanism detect or predict a timing failure that, in reality, will not occur. On the other hand, the number of false negatives (i.e., failures that are not predicted) must also be kept low. A system that fails less in identifying or predicting failures is obviously preferable. Based on our experimental knowledge, we aimed to provide an integrated mechanism able to maintain at least one of the rates under an average of 5%. We opted to define a single limit because lowering one of these two measures typically results in increasing the other.
• Low performance overhead: A time-aware service obviously performs more computational work than a basic service. For our initial prototype, we aimed to achieve a maximum overhead of 100 ms in terms of response time (when compared to the equivalent service without timing characteristics).
• Easy usage: Code, tool, or deployment complexity and non-portability are among the characteristics developers try to avoid nowadays. It is important that developers, whether consumers or providers, do not have to make significant changes to existing code in order to use the mechanism. In the same way, when developing new services, the development model must remain as similar as possible to common practice.
• Generic: The mechanism must be generic so that it can be reused, as much as possible, in different environments. Our goal is to provide extension features so that wsTFDP can be used outside the scope of web service applications, e.g., in Remote Method Invocation (RMI) methods.
wsTFDP consists of a detection component and a prediction component. The detection component performs two basic functions: it detects timing failures transparently at runtime and collects timing data about web service execution (this data is basically a profile of the service's execution time). The prediction component uses this historical data to predict timing failures by monitoring the current execution point of a service and performing a pessimistic analysis of the estimated duration of the remaining path. When a failure is predicted, the execution is aborted or continued under a degraded mode. In both cases, the client is immediately informed that an exceptional behavior has occurred.
A difficult aspect is the integration of this kind of mechanism with current applications and development models. In order to make wsTFDP as transparent as possible to programmers, we decided to deploy it as an isolated package that holds all logic related to timing failure detection and prediction and that can be merged into any application by using regular Aspect-Oriented Programming (AOP) tools [11]. This process consists of compiling the candidate application and the wsTFDP component (using an AOP framework compiler) into a single application that is time-aware ready. All wsTFDP logic is automatically injected at particular points in the target code (described further ahead). Using AOP involves understanding a few key concepts, namely (see [12] for details):
• Aspect: a concern that cuts across multiple objects.
• Joinpoint: a point during the execution of a program (e.g., the execution of a method or the handling of an exception).
• Advice: action taken by an aspect at a particular joinpoint. Types of advice include: ‘before’, ‘after’, and ‘around’ (i.e., before and after).
• Pointcut: a predicate that matches joinpoints. An advice is associated with a pointcut expression and runs at any joinpoint matched by the pointcut (e.g., the execution of a method with a certain name).
By using AOP we can inject crosscutting concerns into any application in a way that avoids writing extra code to execute these same crosscutting concerns. wsTFDP uses an around advice (so that we have control before and after our joinpoint) and a pointcut that matches all methods that correspond to a web service invocation and take a TimeRestriction object as parameter (see Section 5.1 for further details on the interception). This object is defined in the context of the wsTFDP mechanism and is the one used by clients to specify deadlines and confidence values (see Section 6 for details on the object including its usage mode). With this configuration it is possible to intercept the web service calls for which users want to perform timing failure prediction. At the moment of interception several actions are taken by our framework to determine whether the execution is on time, whether a timing violation has occurred, or whether a timing violation is expected to occur. Although the key concepts behind wsTFDP are completely generic, our initial prototype is Java-based, for practical reasons only (implementing the mechanism using another language does not impose any relevant technical difficulty). Note that this is relevant only at the server side. In fact, any client using any programming language with support for web services can use a time-aware web service (even if this service is implemented in Java) with no extra special measures.
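A minimal annotation-style AspectJ sketch of this kind of interception is shown below. The aspect and method names are assumptions for illustration; the actual wsTFDP pointcut and advice body differ, and only TimeRestriction is taken from the paper.

// Hypothetical sketch of an around advice intercepting @WebMethod operations that
// take a TimeRestriction as their first parameter; not the actual wsTFDP code.
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class TimeAwareAspect {

    @Around("execution(@javax.jws.WebMethod * *(..)) && args(restriction, ..)")
    public Object interceptTimedCall(ProceedingJoinPoint pjp,
                                     TimeRestriction restriction) throws Throwable {
        long start = System.nanoTime();
        try {
            // The detection and prediction logic described in Sections 4 and 5
            // would wrap this call; here we simply proceed with the invocation.
            return pjp.proceed();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            // The elapsed time would be recorded as an edge cost in the service graph.
        }
    }
}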
4 Detection Mechanism Design

As previously mentioned, we presented a preliminary version of the detection component of wsTFDP in [7]. For the sake of completeness we present here a brief description of its operating mode. Figure 1 illustrates the internal design of the detection component and the sequence of events that occur at the time of interception of a web service call. The horizontal solid lines represent a thread that is in a runnable state and is performing actual work. Dashed lines represent a thread that is waiting for some event.
Fig. 1. Temporal failure detection mechanism
At interception time each container thread (parent), which is responsible for serving a particular client request, spawns a new thread (child). This spawned thread is responsible for doing the actual web service work and placing the final result in its blocking queue. Immediately after the child thread is started, the parent tries to retrieve the result from the child's blocking queue. As at that starting point no result is available, the parent thread waits during a given time frame (the timeout specified by the client application) for an element to become available on the queue. During this period there is no periodic polling of the blocking queue, which reduces the impact of the mechanism. Instead, the parent waits for the occurrence of one of the following events:
• A signal to proceed with object removal, which occurs when a put operation is executed over the blocking queue. This operation signals any waiting thread (on that queue) to immediately stop waiting and proceed with object removal (this object corresponds to the result of the web service execution).
• The waiting time is exhausted. After the time has expired, the parent thread continues its execution (i.e., leaves the waiting state), ignoring any possible later results placed in the queue. In this case, an exception signaling the occurrence of the timing failure will be thrown to the client application.
There are two types of results that can be expected from the child thread's execution. The first is a regular result (i.e., the one that is defined as the return parameter in the method signature) and the second is an exception being thrown (it can be a checked or unchecked exception). In both cases, an object is placed in the waiting queue (exceptions are caught by the child thread and also put in the queue as regular objects). When the parent retrieves an object from the queue, type verification is performed. For a regular object, execution proceeds and the result is delivered to the client. On the other hand, if the retrieved object is an instance of Exception, the parent thread rethrows it, which enables us to maintain program correctness (that would be lost if the child thread threw the exception itself). The detection component offers two types of exceptions (it is the responsibility of the provider to choose the type that better fits his business model):
• TimeExceededException: this is a checked exception, i.e., if the provider decides to mark a given public method with a ‘throws’ clause, then the client will be forced to handle a possible exception at compile time (it will have to enclose the service invocation with a try/catch block).
• TimeExceededRuntimeException: this is an unchecked exception, i.e., the client does not need to explicitly handle a possible exception that may result from invoking a particular web service operation. This is true even if the provider adds a ‘throws’ clause to its public operation signature.
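A minimal sketch of the parent/child pattern described in this section is given below, using a blocking queue with a timed poll. Class and method names are assumptions; the real wsTFDP implementation differs in detail (e.g., thread pooling and null-result handling are omitted).

// Hypothetical sketch of the detection pattern: the child does the work, the
// parent waits on a blocking queue for at most the client-specified timeout.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class DetectionSketch {

    // Stand-in for the paper's checked exception type
    public static class TimeExceededException extends Exception { }

    // Stand-in for the intercepted web service work
    public interface ServiceWork { Object run() throws Exception; }

    public static Object executeWithDeadline(ServiceWork work, long timeoutMs)
            throws Exception {
        BlockingQueue<Object> results = new LinkedBlockingQueue<>();

        // Child thread: performs the actual work and places the outcome
        // (regular result or caught exception) in the queue.
        Thread child = new Thread(() -> {
            Object outcome;
            try {
                outcome = work.run();
            } catch (Exception e) {
                outcome = e;
            }
            try {
                results.put(outcome);
            } catch (InterruptedException ignored) {
                Thread.currentThread().interrupt();
            }
        });
        child.start();

        // Parent: waits without polling; poll() returns null when the deadline expires.
        Object outcome = results.poll(timeoutMs, TimeUnit.MILLISECONDS);
        if (outcome == null) {
            throw new TimeExceededException();      // timing failure detected
        }
        if (outcome instanceof Exception) {
            throw (Exception) outcome;              // rethrow to preserve correctness
        }
        return outcome;
    }
}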
5 Prediction Mechanism Design

In order to predict timing failures, wsTFDP executes the following set of steps: 1) it analyzes the service code and builds a graph to represent its logical structure; 2) it gathers time-related performance metrics in a transparent way during runtime; 3) it uses historical data to predict, with a degree of confidence chosen by the client, whether a given execution will conclude in due time.

5.1 Graph Organization

wsTFDP analyzes the logical structure of web services and automatically organizes a graph structure for each operation provided to clients. A graph consists of vertexes (or nodes) and edges that connect nodes. Each edge represents a connection between two nodes (i.e., an edge provides a path between two nodes, which in our case is unidirectional) that has an associated cost (i.e., travelling between two nodes involves a predefined cost) [13]. This data structure perfectly fits the requirements of our mechanism. To build the graph we have to define what information will constitute the nodes, edges, and costs:
- Nodes: specific instants in a service's execution where timing failures should be predicted. Natural candidates are the invocation of the target service itself and all nested service invocations (if any) performed by the service. Additionally, it is important to add support for user-identified critical points that allow the developer to instruct wsTFDP to add specific code parts to the graph.
- Edges: these naturally represent the available connections between nodes, and are automatically defined by wsTFDP based on a runtime analysis of each service.
- Cost: for our goals, the cost involved in travelling between two nodes is the execution time in milliseconds.
Figure 2 presents an example of a web service augmented to be time-aware and the respective graph organization (source code in bold is specific to our framework). The arrows connecting the nodes indicate unidirectional relations between those nodes. That is, it is possible to go from the ‘Service C’ node to the ‘DB query’ (database query) node but not the other way around, which of course accurately represents what the programmer intended when writing this particular piece of source code. This service uses two wsTFDP methods: check and ignore. The check method indicates that that specific point in the code is critical and must be included as a node on the graph and a point for timing failures prediction. The ignore method indicates that the following web service call should not be considered as a node when building the graph
and also when predicting timing failures. It is the responsibility of the programmer to identify critical execution points that should take part in the runtime graph. This task can be easily achieved with a regular profiling tool. For example, the invocation of ‘Service D’ (represented by the invokeServiceD(); statement) is not included in the graph because it was explicitly ignored by the programmer.

@WebMethod
public int buildInt(TimeRestriction restrict) {
    int result = 0;
    result += invokeServiceA();
    if (result == 0) {
        result += invokeServiceB();
    } else {
        result += invokeServiceC();
    }
    TimeChecker.ignore();
    result += invokeServiceD();
    TimeChecker.check();
    result = queryDataBase();
    return result;
}

(Graph nodes in the figure: Entry Point → Service A → Service B / Service C → DB Query → Exit Point)

Fig. 2. Transformation from source code into a graph structure by wsTFDP
The graph structure is organized during regular service execution. Instead of implementing a fairly complicated approach for compile-time path analysis, we use AOP (and AspectJ [14] in particular for our prototype) during service execution. In Java, a web service can be invoked by executing a generated method that is marked with the @WebMethod JAX-WS annotation [15]. In order to build the graph, wsTFDP assumes that the provider is using some implementation of JAX-WS, which is increasingly common (e.g., JBoss provides a JAX-WS compliant implementation, as do many other major middleware vendors). In our case, AspectJ is instructed to intercept calls to @WebMethod annotated methods and uses this interception information to build the service graph. As mentioned before, wsTFDP intercepts all external service calls (i.e., nested calls) by default. However, the mechanism can also be configured to intercept only specific (i.e., marked by the developer) external service invocations, ignoring the remaining ones.

5.2 Metrics Management

Time-related metrics are collected by the detection component and stored in the graph's edges. In each run, wsTFDP stores information that expresses how long it takes to travel from a preceding node to the current node (in terms of execution time). In practice, each edge maintains a list of historical values sorted from the shortest to the longest time. This list is updated in each web service call with the observed value. Obviously, as the maximum size of this list must be limited, the provider has to configure it according to the specificities of the web services being monitored. In environments that experience
small variations in execution time a small list may be sufficient, while other types of environments may require larger lists that can capture the natural execution time variation of the web service. When considering the mechanism's performance, smaller lists are preferable, since the sorting algorithm required by our pessimistic approach to failure prediction will obviously perform faster on smaller lists than on larger ones (see Section 5.3 for details on the prediction process). An important aspect is that the graph will be quite incomplete during the first executions of the web service. Typically, web services can be rather complex and several calls may be needed to explore all possible paths, and hence build a complete graph. The more paths are explored, the more accurate prediction becomes.

5.3 Prediction Process

At each node wsTFDP subtracts the elapsed time (counted from the moment execution started) from the maximum amount of time specified by the client application (which represents the total available time for execution). It then uses Dijkstra's shortest path algorithm [16], which makes use of the best (fastest) time values stored on the graph's edges, to determine whether execution can terminate on time. If history indicates that the amount of time left in a given service invocation is not enough to complete execution in a timely fashion, then it is fairly safe to return a well-known prediction exception to the client. When a timing failure is predicted, two outcomes are possible: the server aborts the operation or the server continues the operation under a degraded mode. The former should be used in the cases where it is safe to artificially abort a particular operation. The latter mode is useful for those cases where consistency is needed on the server side (i.e., an initiated operation must not be artificially aborted, as the application may not be prepared to deal with this situation). These two modes are particularly important to providers since they both ease the load experienced by servers when timing failures occur. Obviously, it is up to the provider to decide which mode better fits the application being deployed. An important aspect is that there is always some degree of confidence involved in a prediction process. In wsTFDP, this degree of confidence can be manipulated by the provider or by the consumer in the following ways:
• When the list that stores execution times reaches its maximum size on a given edge, it is necessary to remove elements to accommodate new historical data. The provider can configure two different removal strategies. If accurate prediction is critical, then the provider should opt for dropping the longest-duration values. This will make the prediction much more accurate since it uses a pessimistic approach (the algorithm always uses the fastest values on each edge). The other option is random removal. In this case the mechanism may lose some accuracy (which in many cases is not highly relevant to clients), but it is able to capture the web service's typical behavior in a better way.
• At runtime the consumer can discard a given percentage of the lowest execution times that exist in the collected history (i.e., discard the fastest values). This is quite useful to express an amount of confidence over the collected values. For example, if the consumer considers that 5% of the fastest executions happened in very particular conditions that do not capture the normal behavior of the web service
and its related resources, then those 5% of values should be discarded. This represents a 95% confidence degree for timing failures prediction (see Section 6 for more details on how to express this confidence value at the client-side). As in the detection mechanism, both checked and unchecked prediction exceptions are available. These are, respectively, FailurePredictionException and FailurePredictionRuntimeException. An important aspect is that these exceptions are hierarchically organized and extend a superclass (TimeFailureException) also available in unchecked and checked versions. This enables clients to handle both detected and predicted timing failures in a single catch block. The complete procedure for detecting and predicting timing failures is relatively straightforward. However, it is important to notice that, despite being a simple approach, it is able to produce very good results that can be used for service selection and resource management, with clear advantages for clients and servers respectively (see Section 7 for details on the experimental results).
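The sketch below illustrates, under assumed names and a deliberately simplified structure, the per-edge history of Section 5.2 and the pessimistic check of Section 5.3: each edge keeps a bounded, sorted list of observed times, the fastest remaining value (after discarding the client's confidence fraction) is used as the edge cost, and Dijkstra's algorithm estimates the shortest possible remaining time. It is only a sketch of the idea; the real wsTFDP data structures differ.

// Hypothetical sketch of the prediction check: bounded per-edge history,
// pessimistic (fastest) edge costs after a confidence-based discard, and
// Dijkstra's algorithm over the remaining path. Not the actual wsTFDP code.
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

public class PredictionSketch {

    static class Edge {
        final String to;
        final int maxHistory;
        final List<Long> history = new ArrayList<>();    // kept sorted, fastest first

        Edge(String to, int maxHistory) { this.to = to; this.maxHistory = maxHistory; }

        void record(long millis) {
            if (history.size() == maxHistory) {
                history.remove(history.size() - 1);       // drop the longest duration
            }
            history.add(millis);
            Collections.sort(history);
        }

        // Pessimistic cost: fastest remaining time after discarding the fastest
        // fraction implied by the client's confidence value (e.g., 0.95 drops 5%).
        long cost(double confidence) {
            if (history.isEmpty()) return 0L;             // incomplete graph: be optimistic
            int skip = (int) Math.floor(history.size() * (1.0 - confidence));
            return history.get(Math.min(skip, history.size() - 1));
        }
    }

    final Map<String, List<Edge>> graph = new HashMap<>();

    // Dijkstra from the current node to the exit node using pessimistic costs.
    long shortestRemaining(String from, String exit, double confidence) {
        Map<String, Long> dist = new HashMap<>();
        PriorityQueue<Map.Entry<String, Long>> queue =
                new PriorityQueue<>(Map.Entry.comparingByValue());
        dist.put(from, 0L);
        queue.add(Map.entry(from, 0L));
        while (!queue.isEmpty()) {
            Map.Entry<String, Long> head = queue.poll();
            String node = head.getKey();
            long d = head.getValue();
            if (d > dist.getOrDefault(node, Long.MAX_VALUE)) continue;   // stale entry
            if (node.equals(exit)) return d;
            for (Edge e : graph.getOrDefault(node, List.of())) {
                long candidate = d + e.cost(confidence);
                if (candidate < dist.getOrDefault(e.to, Long.MAX_VALUE)) {
                    dist.put(e.to, candidate);
                    queue.add(Map.entry(e.to, candidate));
                }
            }
        }
        return Long.MAX_VALUE;   // exit not reachable yet (paths not explored)
    }

    boolean predictFailure(String currentNode, String exit,
                           long elapsedMs, long deadlineMs, double confidence) {
        long remaining = shortestRemaining(currentNode, exit, confidence);
        if (remaining == Long.MAX_VALUE) return false;    // not enough history to predict
        return elapsedMs + remaining > deadlineMs;
    }
}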
6 Using the Timing Failure Detection Mechanism

Developing a web service with timing failure detection and prediction characteristics is an easy task. In fact, a developer wanting to provide time-aware web services just needs to execute the following steps:
1) Add the wsTFDP library (or source code) to the project. The library and source code are available at [17].
2) Add a TimeRestriction parameter to the web service operations for which timing failure detection and prediction is wanted. This object holds a numeric value that is set by clients to specify the desired service duration (in milliseconds).
3) Compile the project using an AOP compiler (AspectJ, in the case of our prototype). For example, when using Maven [18] as a building tool, compilation and packaging can be done by executing the following command from the command line: ‘mvn package’.
As we can see in Figure 2 (Section 5.1), the wsTFDP-related changes (presented in bold) are quite simple. In fact, the code needed to include timing failure prediction in a web service is quite straightforward and very easy to understand. In its basic form, the provider only needs to add an extra TimeRestriction parameter to the web service and our framework will perform all remaining tasks automatically. An interesting aspect is that, although wsTFDP targets web services technology, it is also able to intercept other methods (thereby fulfilling the ‘to be generic’ requirement presented in Section 3), such as Remote Method Invocation (RMI) methods. To achieve this, the programmer should mark these methods with an @Interceptable annotation (an annotation provided by the wsTFDP library). Obviously, these methods will also have to accept a TimeRestriction object as a parameter. No special measures are needed to invoke a time-aware service. In fact, the only difference between a regular service and a time-aware service, from the client point of view, is the use of a regular extra object – the TimeRestriction parameter. This has to be created and set with a desired time value and a prediction confidence factor.
Figure 3 shows the client code needed to invoke a time-aware service. The service in question is an implementation of the ‘New Customer’ web service specified by the TPC-App performance benchmark [19] and augmented to be time-aware. In this example the client is setting a maximum service execution time of 1000 ms and choosing a 95% confidence factor for the historical data. The wsTFDP-related changes are presented in bold.

NewCustomerInput myServiceInput = new NewCustomerInput();
TimeRestriction restriction = new TimeRestriction();
restriction.setTimeInMillis(1000L);
restriction.setConfidenceValue(0.95D);
NewCustomer proxy = new NewCustomerService().getNewCustomerPort();
proxy.newCustomer(restriction, myServiceInput);
Fig. 3. Client code for invoking a time-aware service
7 Experimental Evaluation

In this section we present the experimental evaluation performed to assess the effectiveness of wsTFDP. The experiments performed try to answer the following questions:
− Does the mechanism introduce a noticeable delay in services?
− How fast can the mechanism detect failures and how does this detection latency evolve under higher loads?
− Is it able to provide low false-positive and false-negative rates during the detection and prediction process?
− Can programmers easily use the mechanism?
7.1 Experimental Setup

The TPC-App benchmark was used as the test case [19]. This is a performance benchmark for web services and application servers widely accepted as representative of real environments. We applied our mechanism to a subset of the services defined by this benchmark (Change Payment Method, New Customer, New Product, and Product Detail). The Change Payment Method and New Customer services include an external service call that simulates a payment gateway. In order to have a more realistic representation of what usually happens when invoking services over the Internet, and based on our empirical knowledge of common service execution times, we introduced a variable delay of 1000 to 2000 ms whenever the payment gateway was contacted. The setup for the experiments consisted of deploying the three main test components onto three separate machines connected by a Fast Ethernet network (see Table 1). These components are: 1) a web service provider application that provides the set of web services used in the experimental evaluation; 2) a database server on top of the Oracle 10g Database Management System (DBMS) used by the TPC-App benchmark;
Table 1. Systems used for the experiments

Node    | Software                                                                      | Hardware
Server  | Windows Server 2003 R2 Enterprise x64 Edition Service Pack 2 & JBoss 4.2.2.GA | Dual Core Pentium 4, 3 GHz, 1.46 GB RAM
DBMS    | Windows Server 2003 R2 Enterprise x64 Edition Service Pack 2 & Oracle 10g     | Quad Core Intel Xeon 5300 Series, 7.84 GB RAM
Client  | Windows XP Professional SP2                                                   | Dual Core Pentium 4, 3 GHz, 1.96 GB RAM
and 3) a workload emulator that simulates the concurrent execution of business transactions by multiple clients (i.e., that performs web service invocations).

7.2 Results and Discussion

To analyze the effectiveness of our mechanism, we executed 36 tests with different configurations. In all tests, wsTFDP was configured to predict at all service invocation points (i.e., the target service entry point and any existing nested service invocations). Each test had a duration of 20 minutes and was executed three times to increase the representativeness of the results. We ran tests considering different service loads (i.e., 2, 4, 8, and 16 simultaneous clients) and the system state was restored between each run, so that tests could be performed based on the same starting conditions. Time measurement was done with nanosecond precision, as provided by Java 6. A first set of tests was conducted to assess the performance overhead. We ran two experiments, one without wsTFDP and another using wsTFDP (baseline experiment A and experiment B, respectively). Each of these experiments included 4 sets (corresponding to a client-side configuration of 2, 4, 8, and 16 emulated business clients) of 3 tests (3 executions of a given configuration). Our goal was to test the system using different service loads to get meaningful measurements. For experiment B we defined a high deadline for the execution so that we would not have any service being aborted due to timing failures (we just wanted to assess the performance overhead of the mechanism, when compared to the normal conditions of experiment A). The results obtained are presented in Figure 4.a). For each of the four web services, we calculated the average execution time (of 3 runs) in each set of 4 tests (2, 4, 8 and 16 clients) for experiments A and B. We then extracted the minimum, maximum and average differences between experiments in each set. We finally averaged the extracted differences of each of the four web services into a single value per set. As expected, as the number of clients increases, the overhead of the detection and prediction process also increases. However, the overhead remains quite low, with an observed maximum of about 56 ms, which perfectly fits our initial goal of keeping it under 100 ms for moderately loaded environments. Note that the introduced overhead is tightly associated with the complexity of running Dijkstra's algorithm over the graph. It is well known that when the graph is not fully connected, which is frequently the case in this type of application (i.e., each node is not connected to all remaining nodes), the time complexity is O((m + n) log n), where 'm' is the number of edges and 'n' is the number of vertices in the graph (for algorithms that make use of a priority queue, as in our case) [16]. The second experimental goal was to assess the detection latency (i.e., the time between the failure occurrence and its detection). As we were expecting very low
[Figure 4: two panels – a) Overhead per client load (overhead in ms) and b) Detection latency per client load (latency in ms); x-axis: 2, 4, 8 and 16 clients; series: min, avg, max]

Fig. 4. wsTFDP mechanism overhead a) and detection latency b) per client load
values for these experiments, we decided to add extraneous load to the server by creating two threads in the server application, running in a continuous loop. This pushed the CPU usage of our dual core machine to 100%. Based on the baseline reference values, we configured clients to request random service deadlines ranging from the observed average execution times to twice this value. Our goal was to mix services that finish on time with services that violate the client-set deadline and are terminated by wsTFDP, hence enabling us to measure the detection latency. As shown in Figure 4.b), four sets of tests (2, 4, 8, and 16 clients) were executed (experiment C) under the referred stress conditions. As expected, we observed increasing latency values as we added load to the server. The maximum observed latency was 52 ms, which fulfills our initial objective. The extracted results are, in general, very good, particularly when considering the demanding environment used in this experiment (i.e., the extraneous load in the server). In the third experiment we tried to assess the false-positive (i.e., the number of times wsTFDP predicted a failure that did not occur) and false-negative (i.e., the number of times a failure was detected but not predicted) rates. An important aspect is that these two measures can be considered conflicting, in the sense that, when we tune the mechanism to minimize the number of wrong predictions, we increase the number of failures that may occur without being predicted. Balancing both measures can be a hard task and it is up to the user to choose whether a balanced approach is preferable, or whether one particular measure is the most important. In our particular case, we executed a single experiment (experiment D) and tried to minimize the false-positives, assuming that a wrong prediction has a higher cost than a failure that is not predicted. Note that, even if a timing failure is not predicted, it will certainly be detected (the detection coverage observed in all experiments performed was 100%). We performed some preliminary tests to check which configuration parameters would provide the lowest false-positive rates. Note that we did not intend to execute an exhaustive search over all the available configurations, but merely aimed to choose a good enough configuration that enabled a solid demonstration of our mechanism. In real situations, users may perform a more thorough experiment according to their specific needs. We maintained the same client-side selection of deadline values (ranging from the average observed values to twice the average) and empirically we
observed that, for our goals, an 80% client-set confidence value provided the best results (we tested all lower and higher values in 5% increments, with worse results). On the provider side, the main configuration was related to the management of the graph edges. We opted for a maximum list size of 100 elements and also for a random element removal strategy (see Section 5.3 for details on the available removal strategies). Figure 5 presents the obtained false-positive rates for the executed tests.
[Figure 5: false-positive rate (%) per client load; x-axis: 2, 4, 8 and 16 clients; series: min, avg, max; y-axis: 0%–8%]

Fig. 5. The observed false-positive rate under different client loads
As we can see, with this moderate configuration tuning we managed to maintain the average false-positive rate under 5%, which fulfills our initial goals. This is an excellent result if we consider that we did not perform exhaustive configuration tweaking. An interesting aspect is that the false-positive rate generally decreases as we increase the number of clients. This is related to the fact that, under heavy loads, more operations are executed and, in consequence, the history covers a time-frame that is smaller and closer to the current environment, representing it more accurately. Concerning the false-negatives, we observed a total of 26% considering all tests performed in experiment D. This is quite acceptable, as our main goal was to have a low false-positive rate (which certainly increases the false-negative rate). Users that prefer more balanced results can fine-tune the wsTFDP mechanism, for instance by increasing the history size, decreasing the confidence value at the client-side or testing multiple combinations of the different available parameters. To check the ease of use of the wsTFDP mechanism we asked an external programmer to implement time-awareness into our reference TPC-App implementation. Since the application was already using Maven as its building tool, the programmer's tasks were reduced to merging the application's project descriptor (a Maven configuration file) with the one provided by wsTFDP, and adding a TimeRestriction parameter to each web service. The main task executed in the merging process was the replacement of the standard Java compiler with AspectJ's compiler. The whole process was concluded in less than 10 minutes, indicating that it is a fairly easy procedure.
8 Conclusion

This paper discussed the problem of timing failure detection and prediction in web service environments and proposed a programming approach to help developers in programming web services with time constraints. The development of a generic and non-intrusive detection and prediction mechanism is a complex task. Most temporal
failure detection/prediction solutions available are coupled to particular applications or to specific sets of services (i.e., these solutions are part of the application itself), which obviously has a huge impact on specific quality attributes. The approach proposed implements timing failure detection and prediction in a transparent way. The results from several experiments showed that our mechanism can be easily used and introduces a low overhead on the target system. The mechanism is also able to perform fast failure detection and can accurately predict timing failures. This paper sets the basis for a prediction mechanism that targets web services and is perfectly tailored for compound services. Future work will involve studying learning algorithms, and using them to further improve prediction capabilities.
References 1. Chappell, D.A., Jewell, T.: Java Web Services: Using Java in Service Oriented Architectures. O’Reilly, Sebastopol (2002) 2. Elmagarmid, A.K.: Database Transaction Models for Advanced Applications. Morgan Kaufmann, San Francisco (1992) 3. Chandra, T., Toueg, S.: Unreliable Failure Detectors for Reliable Distributed Systems. Journal of the ACM 43(2), 225–267 (1996) 4. Dwork, L., Stockmeyer, L.: Consensus in the Presence of Partial Synchrony. Journal of the ACM (1988) 5. Cristian, F., Fetzer, C.: The Timed Asynchronous Distributed System Model. IEEE Transactions on Parallel and Distributed Systems (1999) 6. Veríssimo, P., Casimiro, A.: The Timely Computing Base Model and Architecture, Transactions on Computers - Special Issue on Asynch. Real-Time Systems (2002) 7. Laranjeiro, N., Vieira, M., Madeira, H.: Timing Failures Detection in Web Services. IEEE Asia-Pacific Services Computing Conference (2008) 8. Bo, Y., Xiang, L.: A study on software reliability prediction based on support vector machines. In: IEEE Intl. Conf. on Industrial Eng. and Eng. Management, pp. 1176–1180 (2007) 9. Kootbally, Z., Madhavan, R., Schlenoff, C.: Prediction in Dynamic Environments via Identification of Critical Time Points. In: Military Comm. Conf. (MILCOM 2006), pp. 1–7 (2006) 10. Su, S.-F., Lin, C.-B., Hsu, Y.-T.: A high precision global prediction approach based on local prediction approaches. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 32, 416–425 (2002) 11. Kiczales, G., Lamping, J., Mendhekar, A., Maeda, C., Lopes, C.V., Loingtier, J.-M., Irwin, J.: Aspect-oriented programming. In: Aksit, M., Matsuoka, S. (eds.) ECOOP 1997. LNCS, vol. 1241, pp. 220–242. Springer, Heidelberg (1997) 12. SpringSource, Aspect Oriented Programming with Spring, http://static. springframework.org 13. Biggs, N., Lloyd, E.K., Wilson, R.J.: Graph Theory, pp. 1736–1936. Clarendon Press, Oxford (1986) 14. The Eclipse Foundation: The AspectJ Project, http://www.eclipse.org/ aspectj/ 15. Sun Microsystems, Inc.: JAX-WS: JAX-WS Reference Implementation, https://jaxws.dev.java.net/ 16. Dijkstra, E.W.: A note on two problems in connexion with graphs. Numerische Mathematik 1, 269–271 (1959) 17. Laranjeiro, N., Vieira, M.: wsTFDP: Web Services Timing Failures Detection and Prediction (2008), http://cisuc.dei.uc.pt/sse/downloads.php 18. Apache Software Foundation, Apache Maven Project, http://maven.apache.org/ 19. Transaction Processing Performance Council, TPC BenchmarkTM App. (Application Server) Standard Specification, Version 1.3, http://www.tpc.org/tpc_app/
A Two-Tier Index Structure for Approximate String Matching with Block Moves

Bin Wang 1,2, Long Xie 3, and Guoren Wang 1,2

1 Key Laboratory of Medical Image Computing (Northeastern University), Ministry of Education
2 School of Information Science and Engineering, Northeastern University, Shenyang, China
{binwang,wanggr}@mail.neu.edu.cn
3 Information School, Liaoning University, Shenyang, China
[email protected]
Abstract. Many applications need to solve the problem of approximate string matching with block moves. Computing the block edit distance between two strings is an NP-Complete problem. Our goal is to filter out non-candidate strings as much as possible. Based on two mature filter strategies, frequency distance and positional q-grams, we propose a two-tier index structure to make the use of the two filters more efficient. We give a full specification of the index structure, including how to choose a character order to achieve better filterability and how to balance the number of strings in different clusters. We present our experiments on real data sets to evaluate our technique and show that the proposed index structure provides good performance.
1 Introduction
Approximate string matching has been widely used in various fields, such as finding DNA pieces, which might be changed by some possible operations, in computational biology; regaining original signals which are transmitted through noisy channels in communication; and searching texts with error tolerance in IR, etc. There are many methods to measure the distance between two strings. The most widely used is edit distance, a.k.a. Levenshtein distance. Calculating the edit distance between two strings s and p is to find the minimum number of insertion, deletion, and substitution operations required to transform s to p. Besides the above three operations, the block move (i.e., substring move) operation is very general and has been of interest in many real applications. For example, we can use the block move operations shown in Fig. 1 to change aaccg to ggcaa; the resulting measure is called block edit distance. It is well known that the problem of calculating block edit distance is an NP-Complete problem [1,2]. The block move operation is flexible, i.e., the length of a
Supported by the National Natural Science Foundation of China (Grant No. 60828004), the Program for New Century Excellent Talents in Universities (Grant No. NCET-06-0290), and the Fok Ying Tong Education Foundation Award 104027.
[Figure 1: aaccg → ccgaa → gccaa → ggcaa, using two block move operations and one substitution]

Fig. 1. Translating aaccg to ggcaa by using block moves
block is not fixed, and even the position to which it should be moved is not fixed. For instance, in Fig. 1, to translate aaccg to ggcaa, the blocks of aaccg are all its substrings, {a, aa, aac, · · · }, and each block can be moved to every possible position in the string; for example, aa can be moved to the rear of the first c, the front of g, or the rear of g. Therefore, given a collection of strings S and a query string p, it is expected to reduce the search space to answer the query p, i.e., to choose a candidate set SC ⊆ S efficiently, such that the block edit distance between s (∈ SC) and p is not larger than a given threshold. Many filter approaches have been deployed to reduce the search space of queries on string collections. Frequency distance [3], denoted by FD, is a widely known lower bound of edit distance. Intuitively, if two strings do not share a similar number of the same characters, then they cannot be similar. For instance, string s1 = acc has one a and two c, while string s2 = abb has one a and two b. Since s1 and s2 do not share the two c or the two b, their edit distance cannot be less than 2. FD is easy to calculate, but its filterability is weak, since the character position information is ignored. For instance, strings abcdef and badcfe share the same number of characters, but they are not similar. Positional q-grams are another commonly used filter approach [4,5]. Given a string s, its positional q-grams are obtained by “sliding” a window of length q over the characters of s. If two strings are similar, they must share a sufficient number of common grams. Positional q-grams can prune away more non-candidate strings since they consider overlapped grams; however, they require more overhead for decomposing strings into gram sets. Therefore, it is desirable to combine the advantages of both FD and positional q-grams. In this paper, we propose a novel two-tier index structure to generate a candidate set as small as possible. The remainder of this paper is organized as follows: Section 2 briefly discusses related work; Section 3 provides background material; Section 4 discusses the construction of the two-tier index, 2TI; Section 5 introduces how to use 2TI to filter; Section 6 reports the experimental results; finally, we summarize the paper and give future work in Section 7.
2 Related Work
Approximate string matching is one of the basic problems in computer science; the earliest discussions of it began in the 1960s. There are deep and detailed introductions to the basic string matching problem in [6,7,8], and [9] by Navarro is an excellent survey of approximate string matching with the classical edit distance. Lopresti and Tomkins give ten models of approximate string matching with block
moves in [10], but they mainly focus on the complexity of those models. Since a substitution can be regarded as deleting a character and then inserting another character, Shapira and Storer introduce the edit distance with moves, which substitutes substring movement for substitution [1,2]. They consider the text as a linked list model, and the cost of adding a node, deleting a node and moving a sublist of nodes to another place is the same. The authors of [11,12] embed strings into a vector space and then use the L1 distance between the string vectors to get an approximate result for the block edit distance. The approximation to optimal of greedy algorithms is given in [13]. The difference between our work and the above is that they mainly focus on how to calculate the block edit distance between two strings, while we are interested in the problem of querying a string collection. There are also some indexing methods for approximate string matching [14], and the data structures used are mainly suffix trees/arrays and q-grams/samples. The methods using suffix trees/arrays as their index structure, such as in [15], are based on the pigeonhole principle, i.e., the fact that if there are n pigeons but only n − 1 pigeonholes, then at least two pigeons must be in the same pigeonhole. If k is the maximal number of tolerated errors, then the query is partitioned into k + 1 parts, and only the strings with at least one part as a substring need to be verified. The methods using q-grams/samples are based on the fact that if two strings are similar, they should share a certain number of q-grams/samples, such as the methods proposed in [4,5,16]. A new type of grams with variable lengths, called VGRAM, is proposed in [17,18].
3 Preliminaries

3.1 Block Edit Distance
Let Σ denote a finite alphabet with size |Σ|. s ∈ Σ∗ denotes a string consisting of characters from Σ with length |s|. s[i] represents the i-th character of s, while s[i, j] is the substring of s from s[i] to s[j], where 1 ≤ i ≤ j ≤ |s|. If j < i, s[i, j] is the empty string, denoted by ε. In particular, s[1, j] is a prefix of s, while s[i, |s|] is a suffix of s. Example 1. Given Σ = {a, c, g, t} and s = acagtc, then |Σ| = 4, |s| = 6, s[3] = a, s[2, 4] = cag, s[4, 3] = ε, s[1, 3] = aca is a prefix of s, and s[4, 6] = gtc is a suffix. As an extension of the three classical edit operations, block edit involves another edit operation, block moves. That is, given a string s, we can use a block move operation move(p1, l, p2) to move the substring s[p1, p1 + l − 1] to the destination position p2 (1 ≤ p2 ≤ |s| + 1), where p1 (1 ≤ p1 ≤ |s|) is the starting position of the substring and l is the length of the substring. For instance, move(1, 2, 4) transforms s = abcde into s′ = cabde.
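To make the semantics of move(p1, l, p2) concrete, the following small Java helper applies a single block move under the 1-based convention used above. It is an illustrative sketch written from the definition in this section (not code from [1,2]), and it assumes the destination position lies outside the moved block.

// Illustrative helper; 1-based positions as in the text.
public final class BlockMove {
    public static String move(String s, int p1, int l, int p2) {
        String block = s.substring(p1 - 1, p1 - 1 + l);          // s[p1, p1+l-1]
        String rest = s.substring(0, p1 - 1) + s.substring(p1 - 1 + l);
        int dest = (p2 > p1) ? p2 - 1 - l : p2 - 1;              // re-index after removal
        return rest.substring(0, dest) + block + rest.substring(dest);
    }

    public static void main(String[] args) {
        System.out.println(move("abcde", 1, 2, 4));              // prints "cabde"
    }
}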
Definition 1 (Block edit distance). The block edit distance between two strings is the minimum number of block edit operations needed to translate one string into the other.

Problem 1. Given a string collection S = {s1, s2, . . . , sn}, a query string p and a threshold k, find the subset T ⊆ S such that T = {si | si ∈ S ∧ BED(si, p) ≤ k}.

Example 2. Given S = {aaccg, aagcct, aaggct, ccaa, ctaa, aagcgt}, p = ccgaa, k = 1. Then BED(aaccg, ccgaa) = 1, since we can move the block aa to the rear of g. Similarly, BED(ccaa, ccgaa) = 1. So the result set is T = {aaccg, ccaa}.

3.2 Lower Bounds of Block Edit Distance
Frequency Distance: Given a string s, we call the number of occurrences of a character in s its local frequency. f(s) denotes the set of local frequencies of the different characters in s. Following [3], the frequency distance (FD) between two strings s and p is defined as follows:

  posDist = Σ_{i ∈ Σ ∧ f(s)[i] ≥ f(p)[i]} (f(s)[i] − f(p)[i])
  negDist = Σ_{i ∈ Σ ∧ f(p)[i] > f(s)[i]} (f(p)[i] − f(s)[i])        (1)
  FD(s, p) = max(posDist, negDist)
For instance, given that s = aggc is a string over {a, c, g, t}, f(aggc) = {a · 1, c · 1, g · 2, t · 0}, where a · 1 means the string contains one a. FD(aggc, aaccg) = max{2 − 1, 2 − 1 + 2 − 1} = 2. It is proved in [3] that frequency distance is a lower bound of edit distance. Since the block move operation has no effect on the character frequencies in a string, frequency distance is also a lower bound of block edit distance, i.e., FD(s, p) ≤ BED(s, p). Therefore, if FD(s, p) > k, then BED(s, p) must be larger than k. We can use it to filter non-candidate strings. Positional q-Grams: Given a string s and a positive integer q, a positional q-gram of s is a pair (i, s[i, i + q − 1]). Intuitively, if two strings are similar, they must share a sufficient number of common grams. In order to let two strings share more common grams and make the lower bound tight, according to the definition in [5], two characters # and $ that do not belong to Σ are introduced, and a new string s′ is constructed by prefixing q − 1 copies of # and suffixing q − 1 copies of $ on s. The set of positional q-grams of s, denoted by G(s, q), is obtained by sliding a window of length q over the characters of the new string s′. There are |s| + q − 1 positional q-grams in G(s, q). For instance, suppose q = 3 and s = abcde; then G(s, q) = {(1, ##a), (2, #ab), (3, abc), (4, bcd), (5, cde), (6, de$), (7, e$$)}. It is proved in [5] that if BED(s, p) ≤ k, then s and p must share at least (max(|s|, |p|) − 1 − 3(k − 1)q) q-grams.
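The two filters can be implemented directly from the definitions above. The following Java sketch computes the frequency distance of Equation (1) and the padded positional q-gram set; it is written from this section's definitions only and is not the authors' implementation.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public final class Filters {

    // FD(s, p) = max(posDist, negDist): a lower bound of (block) edit distance.
    public static int frequencyDistance(String s, String p) {
        Map<Character, Integer> fs = freq(s), fp = freq(p);
        Set<Character> alphabet = new HashSet<>(fs.keySet());
        alphabet.addAll(fp.keySet());
        int posDist = 0, negDist = 0;
        for (char c : alphabet) {
            int a = fs.getOrDefault(c, 0), b = fp.getOrDefault(c, 0);
            if (a >= b) posDist += a - b; else negDist += b - a;
        }
        return Math.max(posDist, negDist);
    }

    // Positional q-grams of s after prefixing q-1 '#' and suffixing q-1 '$'.
    public static List<String> qgrams(String s, int q) {
        StringBuilder padded = new StringBuilder();
        for (int i = 0; i < q - 1; i++) padded.append('#');
        padded.append(s);
        for (int i = 0; i < q - 1; i++) padded.append('$');
        List<String> grams = new ArrayList<>();                  // |s| + q - 1 grams
        for (int i = 0; i + q <= padded.length(); i++) grams.add(padded.substring(i, i + q));
        return grams;
    }

    private static Map<Character, Integer> freq(String s) {
        Map<Character, Integer> f = new HashMap<>();
        for (char c : s.toCharArray()) f.merge(c, 1, Integer::sum);
        return f;
    }

    public static void main(String[] args) {
        System.out.println(frequencyDistance("aggc", "aaccg")); // 2, as in the text
        System.out.println(qgrams("abcde", 3));                 // ##a, #ab, abc, bcd, cde, de$, e$$
    }
}

A string s then survives both filters only if FD(s, p) ≤ k and s shares at least (max(|s|, |p|) − 1 − 3(k − 1)q) q-grams with p.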
4 A Two-Tier Index Structure
Obviously, when we consider block edit distance, the lower bound (max(|s|, |p|) − 1 − 3(k − 1)q) becomes much looser compared with the classical edit distance. In order to make the lower bound tight, we need to choose a small q value (e.g., q = 2) to construct the gram sets for strings. However, when q is small, adopting a q-gram based inverted list index structure will generate long inverted lists for each gram. In order to shorten the inverted lists of each q-gram, it is desirable to build up inverted lists over a small number of strings. Therefore, we classify strings into different clusters and build up inverted lists for the strings in each cluster. Based on the above observation, we propose a two-tier index structure, called 2TI, to combine the advantages of both FD and q-grams. In the first tier, we extend the idea of FD to classify strings into several clusters. In the second tier, for each cluster of strings, we construct q-gram based inverted lists. Fig. 2(a) shows a collection of six strings and Fig. 2(b) shows an example of 2TI for the six strings.
(a) Strings:

ID   String
id1  taacgaa
id2  cctgc
id3  agaatta
id4  taacc
id5  tcgcc
id6  acctccc

(b) 2TI: [a compact cluster trie over character frequency regions (first tier), whose leaf clusters point to q-gram based inverted lists (second tier)]

Fig. 2. An example of a two-tier index (2TI)
4.1 Clustering Strings Using Frequency
Since frequency distance can be used to filter non-candidate strings, we use it to classify strings into non-overlapped clusters. We first propose a cluster trie, then we discuss how to construct a compact cluster trie.
Cluster Trie: A cluster trie is a tree-like structure which consists of two types of nodes, as follows.
– Character node. Each character node contains only one character, and is depicted as a circle in Fig. 3(a);
– Frequency node. Each frequency node records a local frequency of the character, and is depicted as a box in Fig. 3(a).
If the local frequency of a character c in a string s is m, then we build up a character node nc whose value is c and let it point to a frequency node nf with value m. The string s will appear in a certain cluster rooted at nf. For example, cluster C3 contains strings id2 and id5; both of them contain one g character, one t character, and three c characters. Given a collection of strings S = {s1, . . . , sn}, if the local frequencies of a character c in different strings are different, then we build up several frequency nodes to record each local frequency. For instance, in Fig. 2(a), the local frequency of character c in string id4 is 2, whereas in string id6 it is 5. We build up two frequency nodes rooted at the same character node; for example, there are two frequency nodes with values 2 and 5, respectively, rooted at the character node with value c in the right path of the cluster trie in Fig. 3(a). We call the number of different local frequencies for a set of strings the variety number of local frequencies.
[Figure 3: (a) the cluster trie for the strings of Fig. 2(a); (b) merging frequency nodes; (c) the resulting compact cluster trie]

Fig. 3. Constructing a compact cluster trie
Ordered Cluster Trie: For a query q, we can calculate the frequencies of the different characters in q. Computing the frequency distance between q and each string s in a cluster determines whether the string s is a candidate. Such a procedure requires comparing q with all paths in a 2TI to get all candidates. Ideally, we expect to traverse as few nodes as possible. That is, we should be careful when determining the structure of a constructed cluster trie.
Definition 2 (Partially ordered relationship ≺). Let a and b ∈ Σ. a ≺ b if and only if character node a is an ancestor of character node b in the 2TI.

Different partial orders of characters will result in different cluster trie structures. Ideally, we expect to filter out a path as early as possible.

Observation 1. A cluster trie with fewer top-level nodes is better than one with more top-level nodes.

Based on the above observation, in a 2TI, the character in a top-level character node should appear in most strings and have few frequency nodes. The following two policies can be used to construct the partial order of characters (a small comparator sketch is given after this list):
– Small variety number of local frequencies takes precedence. Given a collection of strings S, let c1 and c2 be two characters in S. If the variety number of c1 is smaller than that of c2, then we build up a partial order c1 ≺ c2.
– Large global frequency takes precedence. In a collection of strings S, the global frequency of a character c is the number of strings containing c. For any two characters c1 and c2, if their variety numbers of local frequencies are the same and the global frequency of c1 is larger than that of c2, then we build up a partial order c1 ≺ c2.
For example, in Fig. 2(a), the variety number of local frequencies of g is 1, so we first construct a character node for g. The global frequency of t is 6, so we also try to put t as high as possible in the cluster trie.
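A possible way to realize these two policies is a comparator that sorts the alphabet before the trie is built. The sketch below assumes that the variety numbers and global frequencies have already been computed from the collection; it is an illustration of the ordering rules, not the authors' code.

import java.util.Comparator;
import java.util.Map;

public final class CharacterOrder {

    // Characters with a smaller variety number of local frequencies come first;
    // ties are broken by the larger global frequency.
    public static Comparator<Character> order(Map<Character, Integer> varietyNumber,
                                              Map<Character, Integer> globalFrequency) {
        return (c1, c2) -> {
            int byVariety = Integer.compare(varietyNumber.getOrDefault(c1, 0),
                                            varietyNumber.getOrDefault(c2, 0));
            if (byVariety != 0) return byVariety;                        // ascending
            return Integer.compare(globalFrequency.getOrDefault(c2, 0),
                                   globalFrequency.getOrDefault(c1, 0)); // descending
        };
    }
}

For the strings of Fig. 2(a), g (variety number 1) would be ordered before t, matching the example above.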
4.2 Compact Cluster Trie
Using a cluster trie, every path might have to be traversed to determine whether the strings indexed by the path are candidates or not. There could be too many paths in the cluster trie, which causes a high traversal cost. Furthermore, strings in different clusters may be skewedly distributed. In order to solve the above problems, we merge nodes in the cluster trie to construct a compact cluster trie. Fig. 3(c) shows a compact cluster trie for the six strings in Fig. 2(a). In a compact cluster trie, the content of a frequency node is a frequency region [f1, f2], which means the local frequency of a character is between f1 and f2. For instance, in the left subtree in Fig. 3(c), t has a frequency node with [1, 2]. We use the following two rules to determine whether two frequency nodes should be merged or not:
– for a frequency region [f1, f2], f2 − f1 should be smaller than a given threshold θr. For instance, if a character c has two different local frequencies, 1 and 5, then we should not merge them into one node, since merging them would decrease the filterability greatly;
– the size of each cluster should be balanced, i.e., in between [θmin, θmax], where θmin and θmax denote the minimum and maximum size of a cluster, respectively.
Based on the above discussion, we use Algorithm 1 to construct the compact cluster trie. We use Fig. 3 to show how Algorithm 1 works.
Algorithm 1. Construction of the Compact Cluster Trie
Data: ordered cluster trie OCT, frequency region threshold θr and cluster size range [θmin, θmax]
Result: compact cluster trie CCT
1  repeat
2    foreach cluster Ci ∈ OCT do
3      if Ci.size < θmin then
4        reverse-traverse Ci's path to find the deepest frequency node node_d that can be merged with its brother node node_b;
5        merge node_d and node_b;
6      if Ci.size > θmax then
7        find a character σ1 that does not occur in the path indexing Ci and has the largest variety number of local frequencies;
8        use σ1 to re-partition Ci;
9      if θmin ≤ Ci.size ≤ θmax then
10       find a character σ2 that does not occur in the path indexing Ci and whose frequency region is smaller than or equal to θr;
11       add σ2 and its frequency region to the path;
12 until all thresholds are satisfied or there is no change to OCT;
13 CCT ← OCT;
14 return CCT;
Let the frequency region threshold θr be 1 and [θmin, θmax] be [2, 2]. In the first iteration, in Line 2, Algorithm 1 scans all clusters. In Fig. 3(a), the number of strings in C1 is less than θmin, so the two frequency nodes under t should be merged so that the merged cluster contains more strings. These two nodes are merged into a new node with frequency region [1, 2] that points to a new cluster C6 = {id1, id2, id3, id5}. Then C4 is processed and clusters C4 and C5 are merged into C7 in Fig. 3(b). Then the second iteration starts. Since there are four strings in C6, the cluster should be partitioned. The local frequencies of a and c are both 2, so the algorithm randomly chooses one (a is chosen in this case) and C6 is partitioned into two parts: {id1, id3} and {id2, id5}. Since the local frequencies of a in C7 are 1 and 2, respectively, a and the range [1, 2] are added under t → [1, 1]. The resulting compact cluster trie is shown in Fig. 3(c). Notice that the merging and the depth limitation will cause more false positives; for instance, given p = taagcga and k = 1, using the cluster trie in Fig. 3(a) the candidate string is id1, while using the compact cluster trie in Fig. 3(c) the candidates are id1 and id3. Although the compact cluster trie decreases the filterability of the first tier, the size of the cluster trie is reduced significantly. There are
23 nodes and 5 paths in Fig. 3(a), but only 13 nodes and 3 paths in Fig. 3(c). Meanwhile, the clusters become balanced, which is good for the q-gram based filter in the second tier.
5 Search Candidate Strings

Given a query string p and a block edit distance threshold k, we first use the 2TI to filter non-candidate strings in the string collection S. Then we adopt the greedy algorithm proposed in [1,2] to calculate the block edit distance between each candidate c and p. If BED(c, p) ≤ k, then c is an answer. We now use an example to show how the 2TI works. Given p = taagcga, k = 1, and the string collection illustrated in Fig. 2(a), first the frequency set of p is calculated: f(p) = {g · 2, t · 1, c · 1, a · 3}. The first tier of the 2TI, the compact cluster trie, is traversed using depth first search (DFS), lines 5–24 in Algorithm 2. To evaluate how many errors will be introduced when f(p) is matched with a (whole or partial) path in the trie, an error cost is used. We use posDist to record the number of deletion operations applied to the path, and negDist to record the number of addition operations applied to the path, and the error cost is max(posDist, negDist). Obviously, if the error cost is bigger than k, the visit of a path can be terminated. The traversal starts at the root of the structure, and the node containing character g is visited next, which means that the ranges in its sons are frequency regions of g. Since there are two g in p and the frequency region is [1, 1], negDist is set to 1. The error cost is max(0, 1) = 1, which equals k, so the traversal continues. The next node is a t node, and the son of the t node contains [1, 2]. The local frequency of t in p is 1, which is in between [1, 2], so the error cost is not changed. Then the node a is accessed; there are two sons of this node. The one containing [4, 5] is visited first. Since there are three a in p, posDist is set to 1, and the error cost of this node is max(1, 1) = 1, so cluster C1 is a candidate. Then we get back to the a node. Since all sons of the a node are visited, we get back to the parent of the a node. The remainder of the trie is traversed in this way, and the final candidate cluster is C1. Algorithm 2 presents the complete filter algorithm. After the first tier filter using the compact cluster trie, we use the q-gram inverted lists in the second tier to refine the candidates. To use the q-gram inverted list, the q-gram set of the query p is generated, which is G(p, 2) = {#t, ta, aa, ag, gc, cg, ga, a$} in this case. Then these q-grams are looked up in the q-gram index to count the number of common q-grams with the candidate strings. In this case, there are six q-grams shared by p and taacgaa, and the query taagcga and agaatta share five q-grams. Since the minimum number of shared q-grams is (max(|s|, |p|) − 1 − 3(k − 1)q) = max(7, 7) − 1 − 3 × (1 − 1) × 2 = 6, the string agaatta does not satisfy this bound, so only the string taacgaa may satisfy BED(taacgaa, taagcga) ≤ 1.
Algorithm 2. DFS Filtering
Data: the first tier of the 2TI, CCT; query p; error threshold k
Result: the candidate cluster set SC
1  f(p) ← the frequency set of p;
2  posDist ← 0, negDist ← 0, ec ← 0;
3  let nStack and disStack be stacks of nodes and of pairs (posDist, negDist);
4  pn ← the root of CCT; push pn to nStack;
5  repeat
6    if all sons of nStack.top() are visited then
7      if nStack.top() is a character node then
8        (posDist, negDist) ← disStack.top();
9        disStack.pop();
10     nStack.pop();
11   else
12     pn ← one of nStack.top()'s unvisited sons;
13     mark pn visited and push pn to nStack;
14     if pn is a cluster then
15       add pn to SC;
16     else if pn is a character node then
17       c ← pn.character;
18       push (posDist, negDist) to disStack;
19     else
20       if f(p)[c] < pn.min then posDist += pn.min − f(p)[c];
21       if f(p)[c] > pn.max then negDist += f(p)[c] − pn.max;
22       ec = max(posDist, negDist);
23       if ec > k then
24         nStack.pop();
25 until nStack is empty;
26 return SC;
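The pruning condition at the heart of Algorithm 2 (lines 20–23) is just a per-node update of the two distances. The sketch below isolates that arithmetic; the surrounding trie traversal and node classes are left out, and the method names are ours.

public final class ErrorCost {

    // Update (posDist, negDist) after matching the query's frequency of the
    // current character against a frequency region [min, max].
    public static int[] update(int queryFreq, int min, int max, int posDist, int negDist) {
        if (queryFreq < min) posDist += min - queryFreq;
        if (queryFreq > max) negDist += queryFreq - max;
        return new int[] { posDist, negDist };
    }

    // A path is abandoned as soon as the error cost exceeds the threshold k.
    public static boolean prune(int posDist, int negDist, int k) {
        return Math.max(posDist, negDist) > k;
    }
}

For p = taagcga, where f(p)[g] = 2, the region [1, 1] yields update(2, 1, 1, 0, 0) = (0, 1) and prune(0, 1, 1) is false, so the traversal continues, as in the walkthrough above.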
6 Performance Evaluation
Our experiments are written in C++ and run on a PC with 256 MB RAM, a 2.00 GHz CPU, and Windows XP. Two real data sets, book names (S1)¹ and paper titles (S2)², are used to evaluate our indices. The alphabet size of S1 is 86, and there are in total 96 different characters in S2. The average length of the strings in S1 is 18.12, and it is 65.7 in S2.

6.1 Construction Overhead
In this section, we show the performance of constructing the full cluster trie and the compact cluster trie. 2TIfull denotes the two-tier index with the full cluster trie as its first tier,

¹ Questia, http://www.questia.com/
² DBLP, http://www.informatik.uni-trier.de/~ley/db/
[Figure 4: construction time (sec.) vs. data set size n (×10^4) for 2TIfull, 2TIordered and 2TImerged; panels: (a) data set S1, (b) data set S2]

Fig. 4. Construction time vs. the size of data set

[Figure 5: index size (MB) vs. data set size n (×10^4) for 2TIfull, 2TIordered and 2TImerged; panels: (a) data set S1, (b) data set S2]

Fig. 5. Size of indices vs. the size of data set
2TIordered denotes the two-tier index with the reordered full cluster trie as its first tier, and 2TImerged denotes the two-tier index with the compact cluster trie as its first tier.
Time vs. n: As shown by Fig. 4, the construction time grows linearly with the size of the data set, n. The construction times of the three indices are almost the same. The cost of counting the local frequencies of characters is too small to be seen. This is mainly because the alphabet reorder rules reduce the number of nodes in 2TIordered. It can also be seen that when the size of S1 is large, the construction time of 2TImerged is more than that of 2TIfull, but the construction time of 2TImerged is still smaller than that of 2TIfull on S2. This is because the average string length of S1 is short, and strings are more likely to repeat in S1 when the size is large, reducing the addition of nodes in 2TIfull. Fig. 4(a) and Fig. 4(b) also show that the size of the alphabet and the average length of strings affect the construction of the indices: the larger they are, the more time is needed to construct the indices.
Size vs. n: As illustrated by Fig. 5, all indices' sizes grow linearly with the size of the data set, n. The alphabet reorder rules reduce the size of 2TIfull considerably for S1 but only reduce the size of S2's ordered cluster trie a little. This is because the average string length is long and the variety number of local frequencies is large in S2, so there may be many branches in a node of 2TIfull and it is hard to
[Figure 6: filtering time (sec.) vs. data set size n (×10^4) for 2TIfull, 2TIordered and 2TImerged; panels: (a) data set S1, (b) data set S2]

Fig. 6. Filtering time vs. the size of data set
reduce the size. It can also be seen from Fig. 5 that 2TImerged always remains much smaller.

6.2 Effect of Filtering
Two query sets, Q1 and Q2, are used to evaluate the filtering performance of our indices. Q1 and Q2 are formed by 100 randomly selected strings from data sets S1 and S2 respectively, and some random noise is also added to the strings in both sets. The query sets are run 100 times on their corresponding data set's indices (Q1 and Q2 correspond to S1 and S2 respectively). The average of these results is used as the final result.
Time vs. n: The error threshold k is set to 9. As Fig. 6 illustrates, the filtering time is linear in the size of the data set. Even when the size of the data set is 5.5 × 10^4, the worst filtering time is 0.63 s and 0.28 s for S1 and S2 respectively. 2TImerged has the best performance, because the alphabet reorder rules and the merging reduce the number of nodes in 2TImerged. The filtering times of 2TImerged and 2TIordered are almost the same in Fig. 6(a) mainly because the merging causes false positives, which raise the time cost of the second tier of 2TImerged.
Time vs. k: The size of the string collection n is fixed to 55000. The filtering times of all indices grow quickly with the increase of the threshold k, as illustrated by Fig. 7, because a larger k causes more nodes to be tested. It is also shown that 2TImerged has the best performance again. Differently, 2TIfull performs better than 2TIordered on S2 when k is small; this may be because some characters at the top of 2TIfull do not occur in the query string, and the searches on 2TIfull terminate earlier.
Filterability: We use Equation 2 to evaluate the filterability of our index structure:

  e = 1 − |SC| / |S|,        (2)
[Figure 7: filtering time (sec.) vs. error threshold k (4–9) for 2TIfull, 2TIordered and 2TImerged; panels: (a) data set S1, (b) data set S2]

Fig. 7. Filtering time vs. error threshold

[Figure 8: filterability e (%) for data sets S1 and S2; panels: (a) filterability vs. the size of the data set n (×10^4), (b) filterability vs. error threshold k (4–9)]

Fig. 8. Filterability
where SC denotes the candidate set generated by filtering with our indices, and S denotes the string collection. Fig. 8 shows the results. Fig. 8(a) shows the filterability when we fixed the threshold k = 6 and varied the size of the data set n. Fig. 8(b) shows the filterability when we fixed n = 55000 and varied k. We can see that the proposed two-tier index structure provides good filterability. In Fig. 8(a), e is close to 1 for the data set S2 because the frequency distances between the queries and S2 differ more than those between the queries and S1. When the size of the data set increases, e increases slowly for the data set S1. The result also shows that our indices are scalable. For a similar reason, in Fig. 8(b), e remains stable, i.e., near 1, for S2. Although e for S1 decreases quickly when k grows, its value is still larger than 0.97 when k is less than 9.

6.3 Cluster Trie vs. Q-Gram Index
In this section, we show the performance of using the cluster index and the q-gram index separately. The data set used is S2, and k is fixed to 1. As Fig. 9(a) illustrates, with the growth of the data set, the filterability of both the cluster trie and the q-gram index does not change much: the filterability of the cluster trie is nearly 99.88%, and it is nearly 99.94% for the q-gram index. Obviously, the
[Figure 9: (a) filterability e (%) vs. data set size n (×10^4) for the first tier and the second tier; (b) filtering time (sec.) vs. data set size for the first tier and the second tier; (c) filterability (%) of the cluster trie and the q-gram filter vs. cluster size ([8, 11] to [38, 41])]

Fig. 9. Cluster Trie vs. Q-gram Index
filterability of the q-gram filter is much more powerful than the cluster trie's, i.e., using q-grams may generate a smaller candidate set. However, the filtering time of the q-gram filter increases sharply with the growth of the data set size, as illustrated in Fig. 9(b), while the filtering time of the cluster trie only slightly increases, i.e., the cluster trie has a better response time. So, first using the cluster trie to generate a candidate set and then using q-grams to refine the candidate set can benefit from the advantages of both the cluster trie and q-grams and avoid their disadvantages. The filterability of the q-gram filter in Fig. 9(c) is based on the candidate set generated by the cluster trie. As Fig. 9(c) illustrates, with the increase of the cluster size, the filterability of the cluster trie decreases, i.e., the size of its candidate set increases. The increase of the q-gram filter's filterability is mainly because the size of the final candidate set generated by the q-gram filter does not change.
7 Conclusion

In this paper we have developed a two-tier index structure, called 2TI, to reduce the candidate strings for approximate string queries with block moves. The first tier of the index is based on the idea of utilizing the advantages of two existing mature filter strategies, frequency distance and positional q-gram inverted lists, to enhance the filtering of non-candidate strings. We extend the idea of frequency distance to build up the first tier index and classify strings into small non-overlapping clusters; then, for each cluster, we use q-gram based inverted lists as the second tier index to get a tight lower bound of the block edit distance.
We present our experiments on real data sets to evaluate our technique and show the proposed index structure can provide a good performance.
References 1. Shapira, D., Storer, J.A.: Edit distance with move operations. In: Apostolico, A., Takeda, M. (eds.) CPM 2002. LNCS, vol. 2373, p. 85. Springer, Heidelberg (2002) 2. Shapira, D., Storer, J.A.: Edit distance with move operations. Journal of discrete algorithms 5, 380–392 (2007) 3. Kahveci, T., Singh, A.K.: An Efficient Index Structure for String Databases. In: VLDB (2001) 4. Ukkonen, E.: Approximate String-matching with q-grams and Maximal Matches. Theoretical Computer Science 92, 191–211 (1992) 5. Gravano, L., Ipeirotis, P.G., Jagadish, H.V.: Approximate String Joins in a Database (Almost) for Free. In: VLDB (2001) 6. Crochemore, M., Rytter, W.: Text Algorithms. Oxford University Press, UK (1995) 7. Gusfield, D.: Algorithms on Strings, Trees, and Sequences: Computer Science and Computational. Cambridge University Press, UK (1997) 8. Crochemore, M., Rytter, W.: Jewels of Stringology. World scientific, Singapore (2002) 9. Navarro, G.: A Guided Tour to Approximate String Matching. ACM computing surveys(CSUR) 33, 31–88 (2001) 10. Lopresti, D., Tomkins, A.: Block Edit Models for Approximate String Matching. Theoretical computer science 181, 159–179 (1997) 11. Cormode, G., Muthukrishnan, S.: The String Edit Distance Matching Problem with Moves. In: ACM-SIAM symposium on Discrete algorithms (2002) 12. Cormode, G., Muthukrishnan, S.: The String Edit Distance Matching Problem with Moves. ACM Transactions on Algorithms(TALG) 3, 2–21 (2007) 13. Kaplan, H., Shfrir, N.: The Greedy Algorithm for Edit Distance with Moves. Information Processing Letters 97, 23–27 (2006) 14. Navarro, G., Baeza-yates, R., Sutinen, E., Tarhio, J.: Indexing Methods for Approximate String Matching. IEEE Data Engineering Bulletin 24, 19–27 (2001) 15. Navarro, G., Baeza-yates, R.: A Hybrid Indexing Method for Approximate String Matching. Journal of Discrete Algorithms 1, 205–239 (2000) 16. Kim, M.-s., Whang, K.-y., Lee, J.-g., Lee, M.-j.: n-Gram/2L: A Space and Time Efficient Two-Level n-Gram Inverted Index Structure. In: VLDB (2005) 17. Li, C., Wang, B., Yang, X.: VGRAM: Improving Performance of Approximate Queries on String Collections Using Variable-Length Grams. In: VLDB (2007) 18. Yang, X., Wang, B., Li, C.: Cost-based variable-length-gram selection for string collections to support approximate queries efficiently. In: SIGMOD (2008)
Workshop Organizers' Message

Raymond Chi-Wing Wong 1 and Ada Wai-Chee Fu 2

1 The Hong Kong University of Science and Technology, China
2 The Chinese University of Hong Kong, China
The International Workshop on Privacy-Preserving Data Analysis (PPDA) 2009 took place in Brisbane, Australia on 21 April 2009. There were two sessions for this workshop, with 2 paper presentations in the first session and 3 presentations in the second session. With the advancement of information technology and declining hardware prices, organizations and companies are able to collect large amounts of personal data. Moreover, advanced data analysis and mining techniques have been proposed to derive patterns hidden in data. However, the increasing power in data processing and analysis also raises concerns over the proper usage of personal information. Without proper treatment of the threat of sensitive information leakage, organizations would be reluctant to disclose their data even when there may be great potential gains from the data exploration. Therefore research in the area of privacy preservation is deemed important for both theoretical and practical reasons. PPDA 2009 was held in conjunction with the DASFAA 2009 conference, and aims to bring together researchers in different fields related to privacy-preserving data analysis and to provide a forum where researchers and practitioners can share and exchange their knowledge and experiences. There were 13 submissions in total from 8 countries, namely Canada, China, Denmark, Japan, Malaysia, Poland, Thailand and the United Kingdom. 5 papers were accepted and thus the acceptance rate is 38.46%. All papers accepted by PPDA 2009 were published in a combined volume of the Lecture Notes in Computer Science series published by Springer. We would like to take this opportunity to thank our Program Committee for their work in selecting the papers for this year's event.
Privacy Risk Diagnosis: Mining l-Diversity

Mohammad-Reza Zare-Mirakabad 1,2, Aman Jantan 1, and Stéphane Bressan 2

1 School of Computer Sciences, Universiti Sains Malaysia, Malaysia
[email protected], [email protected], [email protected]
2 School of Computing, National University of Singapore, Singapore
[email protected]
Abstract. Most of the recent efforts addressing the issue of data privacy have focused on devising algorithms for anonymization and diversification. Our objective is upstream of these works: we are concerned with the diagnosis of privacy risk and, more specifically in this paper, with l-diversity. We show that diagnosing l-diversity for various definitions of the concept is a knowledge discovery problem that can be mapped to the framework proposed by Mannila and Toivonen. The problem can therefore be solved with level-wise algorithms such as the apriori algorithm. We introduce and prove the necessary monotonicity property with respect to the subset operator on attribute sets for several instantiations of the l-diversity principle. We present and evaluate an algorithm based on the apriori algorithm. This algorithm computes, for instance, “maximum sets of attributes that can safely be published without jeopardizing sensitive attributes”, even if they were quasi-identifiers available from external sources, and “minimum subsets of attributes which jeopardize anonymity”.
Keywords: Privacy preservation, measuring l-diversity, monotonicity of l-diversity, apriori algorithm, and knowledge discovery problem.
1 Introduction
The modern information infrastructure has created an unprecedented opportunity for commercial, professional and social electronic exposure. Organizations and individuals commonly publish and share data. The risk associated with this opportunity is that confidential information may be unintentionally disclosed. Privacy preservation therefore involves a selective modification and publication of data in order to prevent cross-referencing and inferences while maintaining sufficient usefulness. k-anonymity and l-diversity are two different aspects of privacy preservation. k-anonymity prevents unwanted identification of individuals and objects. l-diversity prevents unwanted disclosure of sensitive information. Sparked by Sweeney and Samarati's work [9,26], numerous recent works address privacy preservation. However, most of these efforts have focused on devising efficient algorithms for transforming data (for example by generalization, suppression or fragmentation) in order to achieve an acceptable level of anonymity
and diversity. Namely, these efforts address the processes of anonymization and diversification. Fewer efforts have been made to devise techniques, tools and methodologies that assist data publishers, managers and analysts in their investigation and evaluation of privacy risks. This work is part of a project in which we try to design a one-stop privacy diagnosis centre that offers the necessary algorithms for the exploratory analysis of data and publication scenarios. Such a diagnosis centre should assist in answering not only questions such as “is my data anonymous?” and “is my data sufficiently diverse?” (checking) but also “which sets of attributes can safely be published?” and “which sets of attributes may jeopardize privacy if published?” (exploration). In this paper, we show that diagnosing the diversity of a relational table is a knowledge discovery problem of the class identified in [25] and can be solved with level-wise algorithms such as the apriori algorithm [3]. We then propose an algorithm to compute both the positive and negative borders, showing “maximum possible subsets of attributes that can be safely published” and “minimum subsets which jeopardize anonymity”, respectively, according to some selected anonymity principles. The rest of this paper is organized as follows. In Section 2 we survey related work. In Section 3, working definitions for diversity as well as the necessary properties are given. In Section 4, we present an algorithm based on the apriori algorithm that computes maximum sets of attributes that can safely be published without jeopardizing sensitive attributes. We empirically analyze the performance of the algorithm with a real data set and confirm the effectiveness of pruning in Section 5. Finally, we conclude our discussion in Section 6 with some directions for future works.
2 Literature Review
Data privacy, as per Samarati and Sweeney [9,11,26], is protected by ensuring that any record in the released data is indistinguishable from at least (k-1) other records with respect to the quasi-identifier, i.e. sets of attributes that can be cross-referenced in other sources and identify objects. Each equivalence class of tuples (the set of tuples with the same values for the attributes in the quasi-identifier) has at least k tuples. An individual is hidden in a crowd of size k, thus the name k-anonymity. Subsequent works on k-anonymity mostly propose algorithms for k-anonymization [10,13,15]. While k-anonymity prevents identification, l-diversity [16] aims at protecting sensitive information. Iyengar [12] characterizes k-anonymity and l-diversity as identity disclosure and attribute disclosure, respectively. l-diversity guarantees that one cannot associate an object with sensitive information beyond a certain probability. This is achieved by ensuring that values of sensitive attributes are “well represented” as per the l-diversity principle stated in [16]. Many different instances of this principle, together with corresponding transformation processes,
have been proposed. For instance, distinct l-diversity [19], entropy l-diversity and recursive (c,l)-diversity [16], (α,k)-anonymity [18], and t-closeness [19] are some of the proposed instances (usually presented with the corresponding diversification algorithms). Confusingly, the name l-diversity is sometimes used by authors to refer to different instances rather than to the general principle. In the simplest instance of l-diversity, given in [19], or the similar notion of p-sensitivity [24], a relation instance is (distinct) l-diverse if and only if each equivalence class of tuples with respect to a quasi-identifier has at least l different values for the sensitive attribute. However, this does not guarantee that some values are not too frequent, allowing undesirable statistical inferences. The authors of [8] present an instance of l-diversity such that in each equivalence class at most a 1/l fraction of tuples have the same value for the sensitive attribute. The protection that this definition provides is that one cannot associate an object with sensitive information with a probability higher than 1/l. This definition is the most popular in recent works such as [22]. We refer to it as frequency l-diversity to distinguish it from the other definitions. (α,k)-anonymity, introduced in [18], restricts a similar frequency requirement to selected values of the sensitive attributes known to be sensitive values. Entropy l-diversity and recursive (c,l)-diversity measure the distribution of values of the sensitive attribute, making sure that “the most frequent value of sensitive attribute in each equivalence class is not too frequent, and the less frequent is not too rare” [16]. These two instances are very restrictive, and it is hard for users to decide the appropriate values for l and c [18]. t-closeness, presented in [19], considers several shortcomings of the above instances. In particular, its authors address the issue that different values of sensitive attributes could be close enough to reveal information. They also remark that changing the distribution of values can reveal which attributes or values are sensitive. While diversification is generally achieved by generalizing the values of quasi-identifiers and suppressing tuples, as is done in k-anonymization, Anatomy [8] proposes a novel technique that considers decomposition of relation instances into multiple instances. This creates an interesting connection between privacy and normalization yet to be completely exploited. A similar idea, bucketization, is used in [20] for (α,k)-anonymization. To each of the above instances corresponds a checking algorithm that computes the l-diversity parameter(s) of a given instance. Few works have focused on evaluating the privacy risk beyond checking; most have rather concentrated on repair (that is, the anonymization and diversification processes). Only the authors of [23] have recently looked at the discovery of quasi-identifiers as a special case of the discovery of approximate functional dependencies. In this paper we show that a diagnosis of the privacy risk for l-diversity is a knowledge discovery and data mining task. This is a formalization for various kinds of l-diversity principles, and for k-anonymity as well. We already proposed a preliminary k-anonymity diagnosis as a special case in [28], but not within this exhaustive framework.
3 l-Diversity
We consider three representative instances of the l-diversity principle: namely, distinct l-diversity, frequency l-diversity and entropy l-diversity.
3.1 Definitions
Definition 1 (Equivalence class with respect to a set of attributes). Given an instance r¹ of a relation R² and a set of attributes Q ⊆ R; e ⊆ r is an equivalence class with respect to Q if and only if e is the multi-set of tuples in r that agree on the values of their attributes in Q. We ignore the empty equivalence class. These equivalence classes are the equivalence classes of the relation “have the same values for the attributes in Q” on tuples. The notion induces a partitioning of r. This notion is used in [6,18,21], to cite a few. l-diversity is defined with respect to sensitive attributes. In this paper, we write r(Q, s) to refer to the instance r of R in which s ∈ R is the sensitive attribute³, Q ⊆ R is the set of non-sensitive attributes and s ∉ Q. The l-diversity principle requires that each value of the sensitive attribute s in an equivalence class with respect to Q be “well-represented”. Different instances of l-diversity differ in their realization of the property of “well-represented”-ness. Let us now define, in homogeneous notations, representative forms of l-diversity found across recent literature, as discussed in the previous section. The reader can refer to the original papers (in particular [16]) for in-depth discussions of the respective rationale and respective merits of these definitions. We use two surrounding mid symbols “| · · · |” to denote cardinality. We use the opening and closing double curly brackets “{{· · ·}}” to denote a multi-set. Other notations are standard. The simplest instance of l-diversity counts the number of distinct values of the sensitive attribute in each equivalence class and requires that it is greater than or equal to l. Definition 2 (Distinct l-diversity [19]). An instance r(Q, s) of a relation R is distinct l-diverse if and only if for each equivalence class e with respect to Q, |{v | v ∈ dom(s) ∧ ∃t(t ∈ e ∧ t.s = v)}| ≥ l, where dom(s) is the domain of the attribute s. Frequency l-diversity requires that each value of the sensitive attribute in each equivalence class e appears at most |e|/l times in e. We refer to this form of l-diversity as “frequency l-diversity” in order to differentiate it from other definitions, although this name is not originally used by the authors using the notion.
¹ r is a multi-set (i.e. it can contain duplicates).
² R is both the name of a relation and its schema (i.e. a set of attributes).
³ This work easily extends to sensitive sets of attributes (combinations of attributes).
Definition 3 (Frequency l-diversity [8]). An instance r(Q, s) of a relation R is frequency l-diverse if and only if for each equivalence class e with respect to Q and each possible value v ∈ adom(s), p(e, v) ≤ 1/l, where adom(s) = {v | v ∈ dom(s) ∧ ∃t(t ∈ e ∧ t.s = v)} is the active domain of s, and p(e, v) = |{{t | t ∈ e ∧ t.s = v}}|/|e| (note that e is a multi-set).
Entropy l-diversity measures the closeness of the distribution of values of the sensitive attribute in each equivalence class to the uniform distribution. Namely, it requires its entropy (as used in information theory) to be at least log(l) for a given l.
Definition 4 (Entropy l-diversity [16]). An instance r(Q, s) of a relation R is entropy l-diverse if and only if for each equivalence class e with respect to Q, H(e) ≥ log(l), where H(e) = − Σ_{v∈adom(s)} p(e, v) log(p(e, v)) is the entropy of the equivalence class, adom(s) is the active domain of s, and p(e, v) = |{{t | t ∈ e ∧ t.s = v}}|/|e| is the fraction of tuples in e with sensitive value equal to v (as in Definition 3). We take p(e, v) log(p(e, v)) to be 0 when p(e, v) = 0.
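To make the three instances concrete, the following Python sketch (our own illustration, not part of the paper; all function and variable names are ours) computes the largest l achieved by a table under each definition, with the table represented as a list of dictionaries:

import math
from collections import Counter, defaultdict

def equivalence_classes(rows, qid):
    # Group tuples by their values on the attributes in Q (Definition 1).
    classes = defaultdict(list)
    for row in rows:
        classes[tuple(row[a] for a in qid)].append(row)
    return classes.values()

def distinct_l(rows, qid, s):
    # Largest l for which the table is distinct l-diverse (Definition 2).
    return min(len({row[s] for row in e}) for e in equivalence_classes(rows, qid))

def frequency_l(rows, qid, s):
    # Largest l for which the table is frequency l-diverse (Definition 3):
    # minimum over classes of |e| divided by the frequency of the most frequent value.
    return min(len(e) / max(Counter(row[s] for row in e).values())
               for e in equivalence_classes(rows, qid))

def entropy_l(rows, qid, s):
    # Largest l for which the table is entropy l-diverse (Definition 4):
    # minimum over classes of exp(H(e)), with natural logarithms throughout.
    results = []
    for e in equivalence_classes(rows, qid):
        counts = Counter(row[s] for row in e)
        h = -sum((c / len(e)) * math.log(c / len(e)) for c in counts.values())
        results.append(math.exp(h))
    return min(results)

rows = [{"age": 25, "sex": "M", "income": "low"},
        {"age": 25, "sex": "M", "income": "high"},
        {"age": 30, "sex": "F", "income": "low"},
        {"age": 30, "sex": "F", "income": "low"}]
print(distinct_l(rows, ["age", "sex"], "income"),
      frequency_l(rows, ["age", "sex"], "income"),
      entropy_l(rows, ["age", "sex"], "income"))   # -> 1 1.0 1.0

On this toy table all three measures return 1, since one equivalence class carries a single sensitive value.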
3.2 Mining l-Diversity
For a sensitive attribute s of a relation R and an instance r of R, l-diversity is at risk if one publishes a sensitive attribute s together with a set of non-sensitive attributes Q such that r(Q, s) is not l-diverse for a given acceptable l. The task of investigating the above risk can be seen as an instance of the generic knowledge discovery in databases (data mining) framework proposed in [25]. Definition 5 (Data mining framework [25]). Let r be an instance of a relation R. Let L be a language for expressing properties and defining subgroups of the data. Let q be a predicate used for evaluating whether a sentence ϕ ∈ L defines an interesting subclass of r. A theory of r with respect to L is the set Th(L, q, r) = {ϕ ∈ L | q(r, ϕ) is true}. In the case of l-diversity, L = 2^P, the set of subsets of R suspected to be quasi-identifiers, and q(r, ϕ), for a given s and for a given l, is the statement “r(ϕ, s) is l-diverse”. The task consists in finding sets of attributes that could be published safely or that could jeopardize privacy. Several other tasks, such as finding the potentially sensitive attributes and sensitive combinations of attributes that are not l-diverse, could be defined and mapped to the framework of [25]. The task we have selected is representative and useful.
Definition 6 (l-diversity mining framework). Let r be an instance of a relation R. Let P be a subset of R. Let s be an attribute of R not in P. Let l be a number. Let q(Q) (we can now omit r in the notation) be the predicate “r(Q, s) is l-diverse” for Q subset of P. A theory of r with respect to P (strictly speaking 2^P) is the set Th(P, q, r) = {Q | Q ⊆ P ∧ q(Q)} (strictly speaking Th(2^P, q, r)). The authors of [25] give a level-wise algorithm that can compute a theory of r with respect to P provided a partial order ≺ on 2^P that is a monotone specialization relation with respect to q. Definition 7 (Monotone specialization relation). Let r be an instance of a relation R. The partial order “≺” is a monotone specialization relation with respect to q if and only if for all Q1, Q2 ⊆ R, if Q2 ≺ Q1 then q(Q1) ⇒ q(Q2). In order to complete the l-diversity mining framework, we need to exhibit such a partial order. Inclusion, ⊂ and ⊆, is a partial order on 2^P. In the next section, we show that it is a monotone specialization relation for q for the various forms of l-diversity considered. Definition 8 (l-diversity borders). Let Th(P, “r(Q, s) is l-diverse”, r) be a theory for l-diversity. The positive and negative borders of the theory are Bd+(Th(P, q, r)) = {Q | Q ∈ Th(P, q, r) ∧ ∀N ⊆ R (Q ⊂ N ⇒ N ∉ Th(P, q, r))} and Bd−(Th(P, q, r)) = {Q | Q ∉ Th(P, q, r) ∧ ∀N ⊆ R (N ⊂ Q ⇒ N ∈ Th(P, q, r))}, respectively. Bd+ is the set of maximum subsets (maximum for inclusion) that verify l-diversity. Bd− is the set of minimum (for inclusion) subsets that do not verify l-diversity. Computing the positive border of the l-diversity theory tells us which sets of attributes (the sets in the positive border and their subsets) can safely be published together with the sensitive attribute while guaranteeing l-diversity. Computing the negative border of the l-diversity theory tells us which sets of attributes (the sets in the negative border and their supersets) could jeopardize privacy if published together with the sensitive attribute.
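Read literally, Definitions 6 and 8 can be evaluated by exhaustive enumeration of 2^P. The sketch below (illustrative only; the level-wise algorithm of Section 4 exists precisely to avoid this enumeration) takes the predicate q as a Python function and returns the two borders:

from itertools import combinations

def subsets(P):
    # All subsets of P, as frozensets.
    return [frozenset(c) for r in range(len(P) + 1) for c in combinations(sorted(P), r)]

def borders(P, q):
    # Brute-force Bd+ and Bd- of the theory Th(P, q, r) (Definitions 6 and 8).
    theory = {Q for Q in subsets(P) if q(Q)}
    bd_pos = [Q for Q in theory if not any(Q < N for N in theory)]
    bd_neg = [Q for Q in subsets(P)
              if Q not in theory and all(N in theory for N in subsets(Q) if N < Q)]
    return bd_pos, bd_neg

# Example, reusing the helpers sketched in Section 3.1:
#   q = lambda Q: frequency_l(rows, sorted(Q), "income") >= 2
#   bd_pos, bd_neg = borders({"age", "sex"}, q)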
3.3 Monotonicity
The monotonicity properties that we state and prove here are different from the well-known anti-monotonicity of k-anonymization in the value generalization hierarchy, which is proved and/or used in other studies (such as [7,16,18], to cite a few). Specifically, those works exploit the monotonicity property on the generalization lattice to avoid exhaustively searching the entire generalization space during the anonymization process. We are,
however, concerned with inclusion of sets of attributes, which allows pruning subsets of attributes in the checking process. Only the subset property stated and proved in [14] for k-anonymity is similar to our monotonicity property, but it applies only to k-anonymity and is very simple. Lemma 1. Let r be an instance of a relation R. Let Q2 ⊂ Q1 ⊆ R. For every equivalence class e1 with respect to Q1, there exists an equivalence class e2 with respect to Q2 such that e1 ⊆ e2. Proof. An equivalence class e1 with respect to Q1 is the set of tuples that agree on the values for the attributes in Q1. Since Q2 ⊂ Q1, the tuples of e1 agree a fortiori on the attributes in Q2. Therefore these tuples belong to an equivalence class e2 with respect to Q2. Lemma 2. Let r be an instance of a relation R. Let Q2 ⊂ Q1 ⊆ R. Each equivalence class e2 of r with respect to Q2 is the union of equivalence classes of r with respect to Q1. Proof. It suffices to remark that, by definition, equivalence classes with respect to any set of attributes (Q2 and Q1, respectively, in particular) are disjoint and form a cover of r to infer this result from Lemma 1. Note again that the equivalence classes are disjoint and non-empty by definition. Proposition 1 (Monotone specialization of distinct l-diversity). Let r be an instance of a relation R. Let P be a subset of R. Let s be an attribute of R not in P. Let l be a number. Let q(Q) be the statement “r(Q, s) is distinct l-diverse” for Q subset of P. Inclusion (⊂ and ⊆) is a monotone specialization relation with respect to q. Proof. Let us consider Q2 ⊂ Q1 ⊆ P such that q(Q1) is true. By definition of distinct l-diversity, for each equivalence class e1 of r with respect to Q1 we have |{v | v ∈ dom(s) ∧ ∃t(t ∈ e1 ∧ t.s = v)}| ≥ l. We know from Lemma 2 that any equivalence class e2 with respect to Q2 is the union of disjoint equivalence classes with respect to Q1. Therefore there exists an equivalence class e1 with respect to Q1 such that e1 ⊆ e2. Since e1 ⊆ e2, {v | v ∈ dom(s) ∧ ∃t(t ∈ e1 ∧ t.s = v)} ⊆ {v | v ∈ dom(s) ∧ ∃t(t ∈ e2 ∧ t.s = v)}. Therefore, we have l ≤ |{v | v ∈ dom(s) ∧ ∃t(t ∈ e1 ∧ t.s = v)}| ≤ |{v | v ∈ dom(s) ∧ ∃t(t ∈ e2 ∧ t.s = v)}|.
Hence q(Q2 ) is true.
Proposition 2 (Monotone specialization of frequency l-diversity). Let r be an instance of a relation R. Let P be a subset of R. Let s be an attribute of R not in P. Let l be a number. Let q(Q) be the statement “r(Q, s) is frequency l-diverse” for Q subset of P. Inclusion ( ⊂ and ⊆ ) is a monotone specialization relation with respect to q.
Proof. Let us consider Q2 ⊂ Q1 ⊆ P such that q(Q1) is true. By definition of frequency l-diversity, for each equivalence class e1 of r with respect to Q1 and for each value v ∈ adom(s), |e1(v)|/|e1| ≤ 1/l, or equivalently |e1(v)| ≤ |e1|/l. From Lemma 2, we know that every equivalence class e2 with respect to Q2 is the union of n equivalence classes e1^i (for i = 1 to n) with respect to Q1. We can see that |e2| = Σ_{i=1}^{n} |e1^i|, and that
e2(v) = {{t | t ∈ e2 ∧ t.s = v}} = ⋃_{i=1}^{n} {{t | t ∈ e1^i ∧ t.s = v}} = ⋃_{i=1}^{n} e1^i(v),
and that, because we have multi-sets,
|e2(v)| = |{{t | t ∈ e2 ∧ t.s = v}}| = Σ_{i=1}^{n} |{{t | t ∈ e1^i ∧ t.s = v}}| = Σ_{i=1}^{n} |e1^i(v)|.
Since for every i, |e1^i(v)| = |{{t | t ∈ e1^i ∧ t.s = v}}| ≤ |e1^i|/l, we get
|e2(v)| = Σ_{i=1}^{n} |e1^i(v)| ≤ Σ_{i=1}^{n} |e1^i|/l = |e2|/l.
Hence q(Q2 ) is true.
Proposition 3 (Monotone specialization of entropy l-diversity). Let r be an instance of a relation R. Let P be a subset of R. Let s be an attribute of R not in P. Let l be a number. Let q(Q) be the statement “r(Q, s) is entropy l-diverse” for Q subset of P. Inclusion (⊂ and ⊆) is a monotone specialization relation with respect to q. (Note that, as we already mentioned, the monotonicity property we are considering here is different from the monotonicity property announced and proved for entropy l-diversity in [17] for efficient lattice search algorithms in the generalization process.)
Proof. As suggested but not proved in [17] (the journal version of [16]), the monotonicity of entropy (also used for mining frequent itemsets, but not proved, in [2]) is due to the fact that log is a concave function. More precisely, concavity enables the application of Jensen's inequality to prove the log sum inequality; the proof is in [27]. The log sum inequality states that for non-negative numbers ai and bi:
(Σ_{i=1}^{n} ai) log((Σ_{i=1}^{n} ai) / (Σ_{i=1}^{n} bi)) ≤ Σ_{i=1}^{n} ai log(ai/bi).
Let us consider Q2 ⊂ Q1 ⊆ P such that q(Q1) is true. By definition of entropy l-diversity, for each equivalence class e1 of r with respect to Q1,
H(e1) = − Σ_{v∈dom(s)} |e1(v)| log(|e1(v)|/|e1|) ≥ |e1| log(l),
where e1(v) = {{t | t ∈ e1 ∧ t.s = v}}. From Lemma 2, we know that every equivalence class e2 with respect to Q2 is the union of n disjoint equivalence classes e1^i (for i = 1 to n) with respect to Q1. Then
H(e2) = − Σ_{v∈adom(s)} |e2(v)| log(|e2(v)|/|e2|) = − Σ_{v∈adom(s)} (Σ_{i=1}^{n} |e1^i(v)|) log((Σ_{i=1}^{n} |e1^i(v)|) / (Σ_{i=1}^{n} |e1^i|)).
Using the log sum inequality (reversed with the minus sign), we get
H(e2) ≥ − Σ_{v∈adom(s)} Σ_{i=1}^{n} |e1^i(v)| log(|e1^i(v)|/|e1^i|).
Since each equivalence class e1^i is entropy l-diverse, and after swapping the summations, we get
− Σ_{i=1}^{n} Σ_{v∈adom(s)} |e1^i(v)| log(|e1^i(v)|/|e1^i|) = Σ_{i=1}^{n} H(e1^i) ≥ Σ_{i=1}^{n} |e1^i| log(l) = |e2| log(l).
Hence q(Q2 ) is true.
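The three propositions are equivalent to saying that, for every Q2 ⊂ Q1, the largest l achieved on Q2 is at least the largest l achieved on Q1. Reusing the helper functions sketched in Section 3.1 (our own, not the paper's), this can be sanity-checked empirically on random data:

import random

def check_monotonicity(rows, attributes, s, measure, trials=200):
    # Coarsening Q (dropping attributes) must never decrease l (Propositions 1-3).
    for _ in range(trials):
        q1 = random.sample(attributes, random.randint(1, len(attributes)))
        q2 = random.sample(q1, random.randint(0, len(q1) - 1))   # strict subset of q1
        assert measure(rows, q2, s) >= measure(rows, q1, s) - 1e-9  # tolerance for float rounding

random.seed(0)
rows = [{"a": random.randint(0, 3), "b": random.randint(0, 2),
         "c": random.randint(0, 1), "s": random.choice("xyz")} for _ in range(500)]
for m in (distinct_l, frequency_l, entropy_l):   # helpers from the sketch in Section 3.1
    check_monotonicity(rows, ["a", "b", "c"], "s", m)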
4 Algorithms
4.1 Base Algorithms
We consider an instance r(Q, s) of a relation R. We are concerned with the l-diversity of a sensitive attribute s with respect to potential quasi-identifiers, i.e. a set of attributes Q. It is possible to directly compute l for each of the instances of l-diversity described above. If the data is stored in a relational database, this computation can be achieved by an SQL aggregate query. We give the SQL queries corresponding to each one of the three instances of l-diversity discussed. Algorithm 1.1 is the SQL query to compute distinct l-diversity. The algorithm computes l. It finds the smallest number of distinct values in an equivalence class.

SELECT MIN(CountS)
FROM (SELECT COUNT(DISTINCT s) AS CountS
      FROM T GROUP BY Q)
Algorithm 1.1. SQL query for computing distinct l-diversity
Note that in the base case of Q = ∅ the query is: SELECT COUNT(DISTINCT s) FROM T
It computes the l satisfied by the whole table, i.e. the maximum distinct l-diversity achievable by any modification of the data. Algorithm 1.2 is the SQL query to compute frequency l-diversity. The algorithm computes l. It finds, over all equivalence classes, the minimum ratio between the size of the class and the frequency of its most frequent sensitive value.

SELECT MIN(Counts)
FROM (SELECT Q, SUM(ValueCountS)/MAX(ValueCountS) AS Counts
      FROM (SELECT Q, S, COUNT(*) AS ValueCountS
            FROM T GROUP BY Q, S)
      GROUP BY Q)
Algorithm 1.2. SQL query for computing frequency l-diversity
Note that in the base case of Q = ∅ the query is: SELECT SUM(ValueCountS)/MAX(ValueCountS) FROM( SELECT COUNT(∗) AS ValueCountS FROM T GROUP BY S )
Algorithm 1.3 is the SQL query to compute entropy l-diversity. The algorithm computes l. The query counts the frequency of each sensitive value in each equivalence class with respect to Q, computes the cardinality of each equivalence class with respect to Q, computes l for each equivalence class with respect to Q, and finally takes the minimum of these values as the l for the data set.

SELECT MIN(GEntropyL)
FROM (SELECT VSumGroup.Q,
             EXP(-SUM((SCount/SumGroup)*LOG(SCount/SumGroup))) AS GEntropyL
      FROM (SELECT Q, S, COUNT(S) AS SCount FROM T GROUP BY Q, S) AS VSFrequencies
           INNER JOIN
           (SELECT Q, COUNT(*) AS SumGroup FROM T GROUP BY Q) AS VSumGroup
           ON VSFrequencies.Q = VSumGroup.Q
      GROUP BY VSumGroup.Q)
Algorithm 1.3. SQL query for computing entropy l-diversity
Note that in the base case of Q = ∅ the query is:

SELECT EXP(-SUM((SCount/SumGroup)*LOG(SCount/SumGroup))) AS GEntropyL
FROM (SELECT S, COUNT(S) AS SCount FROM T GROUP BY S),
     (SELECT COUNT(*) AS SumGroup FROM T)
We consider that, for each instance of l-diversity (distinct l-diversity, frequency l-diversity and entropy l-diversity), a (stored) function is available that computes and returns the value of l for a set of attributes Q and a sensitive attribute s. We call this function ComputeL(Q, s), where Q is the, possibly empty, set of attributes to be tested. We need not distinguish between the three instances of l-diversity, as the following algorithm is the same for all instances except for the call to the corresponding ComputeL(Q, s) function.
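As an illustration of how ComputeL(Q, s) could be realised on top of the queries above, the hypothetical wrapper below wires the distinct l-diversity query of Algorithm 1.1 to SQLite through Python's sqlite3 module; the table and attribute names are placeholders, and the frequency and entropy variants would be wrapped in exactly the same way.

import sqlite3

def compute_l_distinct(conn, table, q, s):
    # ComputeL(Q, s) for distinct l-diversity: smallest number of distinct
    # s values over the equivalence classes induced by Q. Identifiers are
    # assumed to come from a trusted schema, not from user input.
    if not q:   # base case Q = empty set: the whole table is one equivalence class
        sql = f"SELECT COUNT(DISTINCT {s}) FROM {table}"
    else:
        sql = (f"SELECT MIN(CountS) FROM (SELECT COUNT(DISTINCT {s}) AS CountS "
               f"FROM {table} GROUP BY {', '.join(q)})")
    return conn.execute(sql).fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T (age INT, sex TEXT, occupation TEXT)")
conn.executemany("INSERT INTO T VALUES (?, ?, ?)",
                 [(25, "M", "clerk"), (25, "M", "nurse"),
                  (30, "F", "clerk"), (30, "F", "clerk")])
print(compute_l_distinct(conn, "T", ["age", "sex"], "occupation"))   # -> 1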
4.2 Apriori Algorithm for l-Diversity
The framework of [25] allows us to use a level-wise algorithm with pruning to compute the positive border of the theory. Namely, an apriori-like algorithm can compute the set of attributes that can be safely published with a sensitive attribute s and a threshold value of l for the tested l-diversity. Note that a dual algorithm is straightforwardly devised to compute the negative border. Algorithm 1.4 finds the positive border of r(Q,s) for a given l. It starts by checking the empty set (i.e. it checks the l-diversity of the entire table) and moves up level-wise in the lattice of sets of attributes.

Input: r(Q,s) and l
Output: positive border for l-diversity
------------------------------------------------------
P = ∅, N = ∅, PBorder = ∅
maxL = ComputeL(∅, s)
if (l > maxL) then
    return "Positive and Negative Border = ∅"
end if
for Ai ∈ R do
    S = {Ai}
    L = ComputeL(S, s)
    if L >= l then P = P ∪ {S} end if
end for
while (P <> ∅)
    for Si ∈ P do
        isPositiveBorder = true
        for Sj ∈ P do
            if Si − Sj and Sj − Si are singleton then
                S = Si ∪ Sj
                if all subsets of S of cardinality |S|−1 are in P then
                    L = ComputeL(S, s)
                    if L >= l then
                        N = N ∪ {S}
                        isPositiveBorder = false
                    end if
                end if
            end if
        end for
        if isPositiveBorder then PBorder = PBorder ∪ {Si} end if
    end for
    P = N
    N = ∅
end while
output PBorder "as Positive Border"
Algorithm 1.4. Finding the positive border of r(Q,s) for a given l
At each level, the sensitive attribute s is tested for l-diversity with respect to the sets of attributes Q in the level, using the function ComputeL(Q, s) defined above. If ComputeL(Q, s) ≥ l, then the set is kept in the level; otherwise it is removed. The first two levels, for Q = ∅ and for Q singletons, are computed first. Then the next levels are iteratively created by combining sets of the current level that differ symmetrically in exactly one element (this generation is rather naïve for the sake of clarity and can easily be improved). Finally, when the next level is empty (there are no more sets Q to be checked), the current level, which is the positive border, is output. Note that a dual algorithm could start from R and remove attributes, browsing the lattice downwards.
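The candidate-generation step just described (combine two surviving sets whose symmetric difference is a single attribute on each side, and keep a candidate only if all of its immediate subsets survived) is the classical apriori join-and-prune step. A compact Python sketch, with names of our choosing:

from itertools import combinations

def next_level(current):
    # Apriori-style candidate generation: join two surviving sets that differ
    # in exactly one attribute each, then prune candidates that have a
    # non-surviving immediate subset.
    survivors = {frozenset(s) for s in current}
    candidates = set()
    for s1, s2 in combinations(survivors, 2):
        if len(s1 ^ s2) == 2:
            c = s1 | s2
            if all(c - {a} in survivors for a in c):
                candidates.add(c)
    return candidates

print(next_level([{"age", "sex"}, {"age", "race"}, {"sex", "race"}]))
# -> {frozenset({'age', 'race', 'sex'})}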
5 Performance Evaluation
In order to illustrate the performance of the proposed algorithm in a real setting and to confirm its ability to prune the search space, we have implemented it and run it on two real data sets. The Adult dataset [5] has become a de facto benchmark in anonymity research. For the sake of simplicity we keep the following 8 attributes: {age, work class, education, status, occupation, race, sex, country} and omit the others without loss of generality, as in many other studies [6,7,15]. After data cleaning and removal of tuples with missing values, it contains 30162 tuples. The OCC dataset [4] has the same schema as the Adult data set. It contains 500000 tuples. For both data sets we use the ‘occupation’ attribute as the sensitive attribute. It has 14 different values in the Adult data set and 50 in the OCC data set.
5.1 Effectiveness
For the purpose of illustration, we now discuss some example outputs for the Adult dataset. The complete schema comprises the following 8 attributes: “age”, “workclass”, “education”, “status”, “race”, “sex”, “country” and “occupation”. We consider the attribute “occupation” to be the sensitive attribute. Table 1 gives the positive and negative borders for varying l.

Table 1. Bd+ and Bd− for the Adult data set and frequency l-diversity
l | Bd+ | Bd−
2 | {workclass,sex} {race,sex} {status,sex} | {age} {education} {country} {workclass,status} {status,race} {workclass,race}
3 | {status,sex} {race,sex} | {age} {workclass} {education} {country} {status,race}
4 | {status} {race} | {age} {workclass} {education} {sex} {country} {status,race}
5 | {race} | {age} {workclass} {education} {status} {sex} {country}
6 | ∅ | {age} {workclass} {education} {status} {race} {sex} {country}
Let us look, for instance, at frequency diversity with l=3. The positive border Bd+ is equal to {{status, sex}, {race, sex}}. This means that any projection of {status, sex, occupation} and {race, sex, occupation} is guaranteed to have equivalence classes whose size is at least 3 times the frequency of their most frequent occupation value, and can be published if this is a sufficient guarantee. In other words, an adversary cannot link an individual with any occupation value with probability greater than 1/3. The negative border Bd− is equal to {{age}, {workclass}, {education}, {country}, {status, race}}. Any of these sets of attributes, if they constitute a quasi-identifier that can be associated with an object, in this case an individual, from external sources, should not be disclosed in association with occupation, because the probability of occupation disclosure will be more than 1/3 for certain otherwise indistinguishable individuals.
5.2 Efficiency
For varying values of l, we measure the performance in the number of calls to the function ComputeL(Q,s), as this number directly reflects the effect of pruning. Exactly the same curves would result if we measured running time, since running time is proportional to the number of calls. We therefore show only the number of calls, which is easier to interpret. We show the results for distinct, frequency and entropy l-diversity and for both the Adult and OCC datasets. For Figure 1, the size of the lattice is 128. The algorithm is able to find all subsets satisfying a given l, according to each of the three l-diversity principles, while checking the condition for only a small fraction of the 128 subsets that a naive algorithm would examine. For example, it can prune between 86% and 100% of the calls (at least one call is necessary) for all three l-diversity principles.
Fig. 1. Number of function calls in the computation of the positive border for Adult dataset and varying l values
In Figure 2, also for the OCC dataset, the size of the lattice is 128. The algorithm is able to prune between 78% and 100% of the calls (at least one call is necessary) for all three principles.
Fig. 2. Number of function calls in the computation of the positive border for OCC dataset and varying l values
6 Conclusion and Future Work
We have shown that diagnosing l-diversity for various definitions of the concept is a knowledge discovery problem that can be mapped to the framework proposed by Mannila and Toivonen. After the necessary monotonicity results have been proved, the problem can be solved with level-wise algorithms and their pruning-based optimizations. We have implemented and evaluated an apriori-like algorithm for l-diversity that computes maximum sets of attributes that can safely be published without jeopardizing sensitive attributes. The evaluation confirms the benefits of the approach and the effectiveness of the pruning. Choosing a threshold value for l is a difficult, if not impossible, task for a user left alone. That is why we consider a privacy diagnosis centre that proposes a variety of exploratory tools and algorithms enabling users to interactively understand, decide and realize the level of privacy they need for their data. Our ongoing and future work studies further tools and algorithms needed for such a one-stop diagnosis and repair centre. From the implementation point of view, we note that other known data mining algorithms, as well as their generic implementations such as those in the iZi library [1], can be used and their efficiency compared. This is possible once a problem is shown to be an instance of Mannila and Toivonen's framework [25].
References
1. Flouvat, F., Marchi, F.D., Petit, J.-M.: iZi: A New Toolkit for Pattern Mining Problems, pp. 131–136. Springer, Heidelberg (2008)
2. Knobbe, A.J., Ho, E.K.Y.: Maximally informative k-itemsets and their efficient discovery. In: 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 237–244 (2006)
3. Agrawal, R., Imielinski, T., Swami, A.: Mining Association Rules between Sets of Items in Large Databases. In: ACM Conference on Management of Data (SIGMOD), pp. 207–216 (1993)
4. USA CENSUS DATA
5. Blake, C., Merz, C.: UCI Repository of Machine Learning Databases (1998)
6. Byun, J.-W., Kamra, A., Bertino, E., Li, N.: Efficient k-Anonymity Using Clustering Technique. CERIAS Tech Report 2006-10, Center for Education and Research in Information Assurance and Security, Purdue University (2006)
7. Bayardo, R.J., Agrawal, R.: Data Privacy through Optimal k-Anonymization. In: 21st International Conference on Data Engineering, ICDE (2005)
8. Xiao, X., Tao, Y.: Anatomy: Simple and Effective Privacy Preservation. In: Very Large Data Bases (VLDB) Conference, pp. 139–150 (2006)
9. Sweeney, L.: k-Anonymity: A Model for Protecting Privacy. International Journal on Uncertainty, Fuzziness and Knowledge-based Systems 10, 557–570 (2002)
10. Aggarwal, G., Feder, T., Kenthapadi, K., Khuller, S., Panigrahy, R., Thomas, D., Zhu, A.: Achieving Anonymity via Clustering. In: Principles of Database Systems (PODS) (2006)
11. Samarati, P., Sweeney, L.: Protecting Privacy when Disclosing Information: k-Anonymity and its Enforcement through Generalization and Suppression. Technical Report SRI-CSL-98-04, SRI Computer Science Laboratory (1998)
12. Iyengar, V.: Transforming Data to Satisfy Privacy Constraints. In: SIGKDD, pp. 279–288 (2002)
13. Xu, J., Wang, W., Pei, J., Wang, X., Shi, B., Fu, A.W.-C.: Utility-based Anonymization Using Local Recoding. In: 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2006)
14. LeFevre, K., DeWitt, D.J., Ramakrishnan, R.: Incognito: Efficient Full-domain k-Anonymity. In: ACM Conference on Management of Data (SIGMOD), pp. 49–60 (2005)
15. LeFevre, K., DeWitt, D.J., Ramakrishnan, R.: Mondrian Multidimensional k-Anonymity. In: 22nd International Conference on Data Engineering, ICDE (2006)
16. Machanavajjhala, A., Kifer, D., Gehrke, J., Venkitasubramaniam, M.: l-Diversity: Privacy beyond k-Anonymity. In: IEEE 22nd International Conference on Data Engineering, ICDE 2006 (2006)
17. Machanavajjhala, A., Kifer, D., Gehrke, J., Venkitasubramaniam, M.: l-Diversity: Privacy beyond k-Anonymity. ACM Transactions on Knowledge Discovery from Data 1 (2007)
18. Wong, R.C.-W., Li, J., Fu, A.W.-C., Wang, K.: (alpha, k)-Anonymity: An Enhanced k-Anonymity Model for Privacy Preserving Data Publishing. In: 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD (2006)
19. Li, N., Li, T., Venkatasubramanian, S.: t-Closeness: Privacy Beyond k-Anonymity and l-Diversity. In: IEEE 23rd International Conference on Data Engineering (ICDE), pp. 106–115 (2007)
20. Wong, R.C.-W., Liu, Y., Yin, J., Huang, Z., Fu, A.W.-C., Pei, J.: (α, k)-anonymity based privacy preservation by lossy join. In: Dong, G., Lin, X., Wang, W., Yang, Y., Yu, J.X. (eds.) APWeb/WAIM 2007. LNCS, vol. 4505, pp. 733–744. Springer, Heidelberg (2007)
21. Li, J., Wong, R.C.-W., Fu, A.W.-C., Pei, J.: Achieving k-Anonymity by Clustering in Attribute Hierarchical Structures, pp. 405–416. Springer, Heidelberg (2006)
22. Ghinita, G., Karras, P., Kalnis, P., Mamoulis, N.: Fast Data Anonymization with Low Information Loss. In: Very Large Data Bases (VLDB) Conference. ACM, New York (2007)
23. Motwani, R., Xu, Y.: Efficient Algorithms for Masking and Finding Quasi-Identifiers. In: SIAM International Workshop on Practical Privacy-Preserving Data Mining (2008)
24. Truta, T.M., Vinay, B.: Privacy Protection: p-Sensitive k-Anonymity Property. In: International Workshop on Privacy Data Management (PDM), in conjunction with the 22nd International Conference on Data Engineering, ICDE (2006)
25. Mannila, H., Toivonen, H.: Levelwise Search and Borders of Theories in Knowledge Discovery. Data Mining and Knowledge Discovery 1, 241–258 (1997)
26. Samarati, P.: Protecting respondents’ identities in microdata release. IEEE Transactions on Knowledge and Data Engineering 13(6), 1010–1027 (2001)
27. Cover, T.M., Thomas, J.A.: Elements of Information Theory. John Wiley & Sons, Chichester (1991)
28. Zare Mirakabad, M.R., Jantan, A., Bressan, S.: Towards a Privacy Diagnosis Centre: Measuring k-anonymity. In: The 2008 International Symposium on Computer Science and its Applications. IEEE CS, Los Alamitos (2008)
Towards Preference-Constrained k-Anonymisation
Grigorios Loukides, Achilles Tziatzios, and Jianhua Shao
School of Computer Science, Cardiff University, Cardiff CF24 3AA, UK
{G.Loukides,A.Tziatzios,J.Shao}@cs.cf.ac.uk
Abstract. In this paper, we propose a novel preference-constrained approach to k-anonymisation. In contrast to the existing works on k-anonymisation, which attempt to satisfy a minimum level of protection requirement as a constraint and then optimise data utility within that constraint, we allow data owners and users to specify their detailed protection and usage requirements as a set of preferences on attributes or data values, treat such preferences as constraints and solve them as a multi-objective optimisation problem. This ensures that anonymised data will be actually useful to data users in their applications and sufficiently protected for data owners. Our preliminary experiments show that our method is capable of producing anonymisations that satisfy a range of preferences and have a high level of data utility and protection.
1 Introduction
Data about individuals (often termed microdata) is increasingly handled by a variety of applications, ranging from business analysis to medical studies. However, publishing microdata directly may allow sensitive information about individuals to be revealed [1]. In response, a number of techniques have been proposed to protect sensitive information when publishing microdata [2,3,4]. k-anonymisation is one such technique. Assume that we have a table T consisting of three types of attribute: identifiers (IDs), quasi-identifiers (QIDs) and sensitive attributes (SAs). IDs contain information that can explicitly identify individuals (e.g. names and phone numbers) and are normally removed before the data is published. QIDs may seem innocuous (e.g. age and postcode), but can be linked with other data to reveal sensitive information about individuals contained in SAs (e.g. income and diagnosed diseases). k-anonymising T is to derive a view of T, possibly as a result of some data transformation [1], such that each tuple in the view is identical to at least k − 1 other tuples with respect to QIDs.
1.1 Motivation
The ultimate goal of k-anonymisation is to transform microdata in a way that both data utility and privacy protection are maximised, but this is impossible as
they are conflicting requirements: privacy protection is usually achieved at the expense of data utility [5]. Most existing works, e.g. [6,7,8], attempt to satisfy a required level of protection as a constraint, e.g. l-diversity [6], and then optimise data utility within that constraint. Unfortunately, while this approach is logically plausible, it does not guarantee that the anonymised data will actually be useful in applications. For example, suppose that the data contained in Table 1 (a) is to be released to a study concerning student income. Age and Student-level are QIDs and Income is an SA. Assume that 2-anonymity is the only protection requirement. Using the algorithm described in [6], Table 1 (c) is generated. While Table 1 (c) is considered to be optimised by this algorithm w.r.t. utility, it is not actually useful to the data user if his intended usage of the data requires the range of each anonymised group w.r.t. Age to be no more than 10 (in order to have a certain degree of accuracy), and the values in Student-level to be less general than Student according to the generalisation hierarchy given in Figure 1 (b) (in order to have a sufficient discrimination power). What we need in this case is Table 1 (d). To ensure that the anonymised data is sufficiently protected and is actually useful in intended applications, it is necessary to consider how both utility and protection requirements may be expressed and how they may be incorporated into the k-anonymisation process. For instance, if the data user in our example is interested in analysing earning patterns of the first and second year undergraduate students, then he may require the values in Student-level not to be generalised to or beyond the Junior level after anonymisation. The students (the data owners in this case), on the other hand, may require their income values to be protected
Fig. 1. (a): Microdata (b): Generalisation Hierarchy for Student-level (c): A 2-anonymisation of the microdata (d): Another 2-anonymisation of the microdata
in such a way that an attacker should not be able to infer a student’s income with a probability higher than 0.7. To the best of our knowledge, none of the existing methods are capable of producing this sort of k-anonymisation, i.e. meeting the specific requirements from both data owners and users.
1.2 Our Approach
In this paper we propose a novel preference-constrained approach to k-anonymisation. Our idea is to allow data owners and users to express their detailed, specific protection and usage requirements as a set of preferences, and then solve these preferences as a multi-objective optimisation problem. This requires two issues to be considered. First, preferences can come in different forms – they may be expressed at an attribute level (e.g. imposing the length of range or the size of set that anonymised groups may have in a given attribute) or at a value level (e.g. imposing ranges or sets allowed for specified values), and they can be incorporated into different k-anonymisation models (e.g. generalisation and micro-aggregation [9]). We therefore need to consider how preferences may be specified in a generic way. In this paper, we formulate preferences based on well-known notions of information loss [10,11] and protection for sensitive information [12], and allow them to be specified at the attribute or value level. Second, we need to investigate how preferences can be incorporated into the anonymisation process. Naive extensions to existing methods, for example generating k-anonymisations as usual first and then checking for preference satisfaction, will not work because they do not fully explore the m-dimensional search space created by the attributes or values, nor handle conflicting preferences well in optimisation. Consequently, they may fail to construct anonymisations that have a slightly lower data utility than the optimal when the dataset is measured as a whole, but actually satisfy the specified preferences. In this paper, we use a multi-objective genetic algorithm to construct k-anonymisations. Instead of optimising the aggregated utility of all QIDs collectively, we treat each attribute (QID or SA) separately and attempt to optimise an objective function related to it, while satisfying its associated constraints. Our solution gives more room for finding anonymisations that satisfy stated preferences, and can handle multiple and conflicting constraints well. Our preliminary experiments show that our method is capable of producing anonymisations that satisfy a range of preferences, and achieve a high level of data utility and protection. The rest of the paper is organised as follows. In Section 2, we describe how preferences may be specified, and formulate the preference-constrained k-anonymisation problem. In Section 3, we discuss our approach to incorporating preferences into the anonymisation process. Section 4 is a preliminary experimental study comparing our method to existing algorithms in terms of their ability to produce anonymisations that satisfy the stated protection and utility preferences, and the level of data utility and privacy protection they achieve. Section 5 comments on the existing work related to ours. Finally, we conclude the paper and discuss future work in Section 6.
2 Specification of Preferences
Let T be a table containing microdata and consisting of two types of attribute: q QIDs a1, ..., aq, and m − q SAs aq+1, ..., am. We assume that each tuple in T represents only one individual and takes a value from a finite domain Dai in each attribute ai, i = 1, ..., m. If ai is a categorical attribute, we also assume that there is an anonymiser-specified domain hierarchy Hai associated to it, similar to the one shown in Figure 1 (b). A k-anonymisation of T is a partition T′ = {g1, . . . , gh} of T such that |gj| ≥ k for j = 1, ..., h, ⋂_{j=1}^{h} gj = ∅, ⋃_{j=1}^{h} gj = T, and tuples in each gj have the same values in each ai, i = 1, ..., q. We distinguish two types of preference: the attribute-level preference is applied uniformly on all anonymised values of an attribute, whereas the value-level preference is applied on specified values of an attribute. Both impose a limit on the distance among a set of values in an attribute (QID or SA), but in different ways. We explain the concept of attribute-level preference first. Definition 1 (Attribute-level Preference). Given an attribute a of a table T, the attribute-level preference for a, denoted with ap(a), is either r(πa(gj)) ≤ c if a is a QID, or r(πa(gj)) ≥ c if a is an SA, where 0 ≤ c ≤ 1 is a constant, and gj, j = 1, ..., h, is a group of T′, a k-anonymisation of T. πa(gj) denotes the projection of gj on a, and r(πa(gj)) denotes its range as defined in Definition 2. Definition 2 (Range [11]). Given a set of values Va ⊆ Da obtained from an attribute a, the range of Va, denoted by r(Va), is defined as:
r(Va) = (max(Va) − min(Va)) / (max(Da) − min(Da)) for numerical values,
r(Va) = s(Va) / |Da| for categorical values,
where max(Va), min(Va), max(Da) and min(Da) denote the maximum and minimum values in Va and Da respectively. If a domain hierarchy Ha exists for a, then s(Va) is the number of leaves of the subtree of Ha rooted at the closest common ancestor of the values in Va. Otherwise, s(Va) is the number of distinct values in Va. |Da| is the size of domain Da. An attribute-level preference places a limit on the amount of generalisation allowed for QID values, or on the closeness of a set of SA values in a group. For example, the anonymisation shown in Table 1 (d) satisfies the attribute-level preferences “the range for Age should be at most 10” and “the range for Income should be at least 7K”¹. Note that, founded on the notion of range, attribute-level preferences are related to standard notions of data utility and privacy protection [10,12,13]. Specifying an attribute-level preference is a straightforward way to ensure that all values in an attribute will not be over-generalised or under-protected. For instance, setting c = 0 in an attribute-level preference for a QID will essentially
¹ Income is treated as a numerical SA here.
require that the values in this attribute are released intact, while doing so for an SA will result in creating groups having SA values that cover the entire domain of this attribute. However, it may be useful to be able to specify preferences at a value level when values in the same attribute need to be treated differently. For example, we may want to prevent the values 4th-year and Diploma in Student-level shown in Figure 1 (b) from being anonymised beyond Undergraduate and Postgraduate respectively. Since the values in the subtree rooted at the value Undergraduate have a different range from those that belong to the subtree rooted at Diploma (they are 2/7 and 3/7 respectively, according to Definition 2), this requirement cannot be expressed using an attribute-level preference for Student-level. Therefore, we define value-level preferences. Definition 3 (Value-level Preference). Given a value u obtained from an attribute a, the value-level preference for u, denoted with vpa(u), is either r(πa(g)) ≤ c if a is a QID, or r(πa(g)) ≥ c if a is an SA, where g denotes the group of values to which u belongs, πa(g) denotes its projection on a, and 0 ≤ c ≤ 1 is a constant. Using a value-level preference is a direct way to specify the allowable generalisations for a specific QID value. For example, we can write vpStudent-level(4th-year) = Undergraduate and vpStudent-level(Diploma) = Postgraduate to ensure that the values 4th-year and Diploma will not be generalised beyond their parents in the hierarchy shown in Figure 1 (b). In addition, value-level preferences allow assigning different levels of protection to different SA values. This is essential in some applications and can enhance data utility [14]. For instance, when high income values (e.g. those above 20K) are regarded as “very” sensitive in an application, a greater level of protection may be imposed on them by using a more stringent value-level preference (e.g. by setting vpIncome(27K) to be greater than vpIncome(7K)). Our model for specifying attribute and value-level preferences offers two important benefits. First, it allows specification of preferences on datasets with various characteristics. This is because our notions of preference can be applied to attributes of any type (numerical or categorical QIDs and SAs), and are independent of the cardinality of the dataset. Second, preferences for QIDs can easily be specified by data users, since they only need to know the domains of these QIDs, and possibly their associated hierarchies. This knowledge can be safely provided to data users by anonymisers when there is no disclosure of the anonymisation mechanism [15,16]. So far, we have discussed preferences that are applied to a single attribute or value. However, decisions on whether an anonymisation is sufficiently useful or protected are generally based on a combination of preferences. Thus, we use the concept of composite preference to combine preferences as illustrated below. Definition 4 (Composite Preference). A composite preference cp is a set {p1, ..., pm}, where each pi, 1 ≤ i ≤ m, is either an attribute-level preference ap(a) on attribute a, or a set of value-level preferences {vpa(u1), ..., vpa(ul)} on values u1, ..., ul in a.
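As an illustration of Definitions 1–3, the Python sketch below computes the range of a group's projection and checks an attribute-level preference over a partition. It uses the simpler "number of distinct values" form of the categorical range (i.e. no hierarchy), and every name in it is ours rather than the paper's; a value-level preference would be checked the same way, but only for the group containing the specified value.

def attr_range(values, domain, numerical=True):
    # Range of a group's projection on an attribute (Definition 2). Without a
    # hierarchy, the categorical range is the number of distinct values / |domain|.
    if numerical:
        return (max(values) - min(values)) / (max(domain) - min(domain))
    return len(set(values)) / len(set(domain))

def satisfies_attribute_preference(partition, attr, domain, c, is_qid, numerical=True):
    # Attribute-level preference (Definition 1): every group's range must be
    # at most c for a QID, and at least c for an SA.
    for group in partition:
        r = attr_range([row[attr] for row in group], domain, numerical)
        if (is_qid and r > c) or (not is_qid and r < c):
            return False
    return True

# toy partition of four tuples into two groups (hypothetical data)
partition = [
    [{"age": 25, "income": 7},  {"age": 30, "income": 27}],
    [{"age": 40, "income": 12}, {"age": 45, "income": 30}],
]
print(satisfies_attribute_preference(partition, "age", range(0, 75), 10/74, is_qid=True))      # True
print(satisfies_attribute_preference(partition, "income", range(0, 51), 7/50, is_qid=False))   # True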
So a composite preference is a collection of attribute- and/or value-level preferences specified for any of the attributes of a table, and satisfying a composite preference requires satisfying all preferences simultaneously. This gives us the following definition of preference-constrained k-anonymisation. Definition 5 (Preference-Constrained (PC) k-Anonymisation). Given a table T and a composite preference cp, a preference-constrained k-anonymisation is a k-anonymous partition T′ = {g1, ..., gh} of T that satisfies cp.
3 Incorporating Preferences into Anonymisation
Using preferences to generate anonymisations is a challenging task. Intuitively, producing a preference-constrained k-anonymisation becomes more difficult when the composite preference includes a large number of preferences. To tackle the problem we adapt a method based on the multi-objective optimisation paradigm [17]. We consider multiple data utility and protection objective functions and attempt to find a solution that optimises all of them, while satisfying the composite preference. In this paper, we formulate data utility and protection objective functions based on the notion of range (see Definition 2). More specifically, we define worst-utility (WU) as the maximum range of all groups of T′ with respect to a QID attribute a (i.e. WU(a) = max(r(πa(g1)), ..., r(πa(gh)))), and worst-protection (WP) as the minimum range of all groups of T′ with respect to an SA a (i.e. WP(a) = min(r(πa(g1)), ..., r(πa(gh)))). Worst-utility and worst-protection indicate the information loss and level of protection of the worst group of T′ and are similar in principle to the measures proposed in [10,12]. However, our optimisation framework is general enough to handle different objective functions, such as the normalised certainty penalty for utility [10] or the l-diversity [6] for protection, but in this paper we do not explore this issue further. Using our proposed objective functions, we formulate the problem of finding a preference-constrained k-anonymisation as a multi-objective optimisation problem illustrated in Definition 6. Definition 6 (Optimal Preference-Constrained k-Anonymisation). Given a table T and a composite preference cp, find a k-anonymous partition T′ of T that satisfies cp and optimises the m-dimensional vector quality function given by q(T′) = [WU(a1), ..., WU(aq), WP(aq+1), ..., WP(am)]^T. Solving the problem requires finding a k-anonymous partition which satisfies the composite preference cp and yields an optimum value for q(T′). However, since the objective functions we use are conflicting in nature, it is not possible for all of them to be optimised within the same solution vector q(T′). For example, a completely generalised table T has an optimal score in terms of the WP objective functions, but the worst possible score with respect to WU functions. Thus, an optimal solution for q(T′) does not exist, and we attempt to find Pareto-optimal solutions instead. A Pareto-optimal solution is as good as other feasible
solutions in all the objective functions WU and WP, and better in at least one of them. Algorithms to solve similar problems have received considerable research attention, and can be grouped into aggregating and non-aggregating ones [18]. Aggregating algorithms use an aggregate objective function which can typically be derived by using the weighted sum of all the objective functions of the problem. Alternatively, they can work by optimising one of the objective functions and treating the others as constraints that a solution has to satisfy. However, these algorithms are considered to be inadequate for solving problems with more than two objectives. This is because they tend to produce solutions that perform poorly with respect to most of the objective functions, and they may require multiple runs and parameter tuning [18]. Thus, aggregating k-anonymisation algorithms [6,13] are not well-suited to solving the optimal preference-constrained k-anonymisation problem. The non-aggregating approaches use different policies to balance the relative importance of the objective functions and have been proven to be successful in solving problems involving multiple and conflicting objectives [18]. These approaches are typically based on genetic algorithms (GAs), which are iterative optimisation strategies that mimic natural selection. A solution to the problem is called an individual, and GAs operate on a set of individuals called a population. In each iteration, the population is transformed into a new generation in an attempt to generate better individuals with respect to the objectives. This is done by reproducing individuals, typically by performing two operators: crossover, which creates new individuals by combining parts of existing ones, and mutation, which slightly alters an individual. Objective functions are used to evaluate individuals, and the best ones are reproduced. The goal of multi-objective GAs is to produce a set of Pareto-optimal solutions. Then, a single compromise solution from this set can be selected according to some specified criterion. To produce a preference-constrained k-anonymisation, we employ the framework of the NSGA-II multi-objective algorithm [19]. In our case, each preference-constrained k-anonymous table is an individual and is represented as a bit string. To ensure that an individual defines a valid generalisation we use the bit string representation proposed in [20]. That is, we use a bit for each two successive values in a QID hierarchy, and the bit is 0 when these values are generalised into a common value in the resultant k-anonymous table and 1 otherwise. For instance, a string “010111” for the QID Student-level implies that values are generalised as shown in Table 1 (c), i.e. the first 0 implies that 1st-year and 2nd-year are generalised to Junior, the next 1 that 2nd-year and 3rd-year are not generalised into a common value, etc. When there are multiple QIDs, we simply concatenate the bits representing each QID to form the final individual. Once a new individual is derived through crossover or mutation (we used the traditional 2-point reduced surrogate crossover operator [21] and implemented mutation as a single bit flip operation), we check whether it satisfies the given cp. When cp is comprised of attribute-level preferences only, this checking requires the production of a k-anonymisation T′ based on the information encoded in
this individual’s bit string, evaluating the quality function q(T′) against the anonymisation produced, and comparing each WU(ai) (or WP(ai)) score with the preference ap(ai) specified (see Definition 1). Deciding whether a value-level preference is satisfied is done similarly using Definition 3. Thus, given a composite preference cp, this algorithm produces a number of (approximate) Pareto-optimal k-anonymisations with respect to the quality function q, each satisfying the given cp. We can then choose one solution based on some data utility or protection measure from those produced by the algorithm. Note that our bit representation of an anonymous table is consistent with the full-subtree global recoding model [22]. When a value u in a QID a is generalised to u′, this model requires generalising all values in the subtree rooted at u′ to u′ according to the hierarchy Ha. For example, according to the full-subtree global recoding model, generalising the value 1st-year in Student-level shown in Figure 1 (b) to Undergraduate will result in generalising the values 2nd-year, 3rd-year and 4th-year to Undergraduate as well. We note that using this model may limit the value-level preferences one can specify. For instance, a preference which requires the value 1st-year not to be generalised to Undergraduate, and another requiring the value 2nd-year not to be generalised to Student, cannot be specified together, as the resultant anonymisation may not be consistent with the full-subtree recoding model. In this preliminary report, we assume that all specified value-level preferences lead to anonymisations that are consistent with this recoding model.
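The bit-string representation can be decoded into groups of adjacent leaf values of a QID hierarchy as in the sketch below; the seven-leaf ordering of Student-level is our assumption about Figure 1 (b), chosen only to reproduce the “010111” illustration.

def decode(bits, leaves):
    # Bit-string decoding: a 0 between two successive leaves means they are
    # generalised into a common value; a 1 keeps them apart. Whether the
    # resulting groups respect the hierarchy (full-subtree recoding) is
    # checked separately.
    groups, current = [], [leaves[0]]
    for bit, leaf in zip(bits, leaves[1:]):
        if bit == "0":
            current.append(leaf)
        else:
            groups.append(current)
            current = [leaf]
    groups.append(current)
    return groups

student_level = ["1st-year", "2nd-year", "3rd-year", "4th-year",
                 "Diploma", "Masters", "PhD"]   # assumed leaf order for Figure 1 (b)
print(decode("010111", student_level))
# [['1st-year', '2nd-year'], ['3rd-year', '4th-year'], ['Diploma'], ['Masters'], ['PhD']]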
4 Experimental Evaluation
4.1 Experimental Setup
In this section, we present a preliminary experimental evaluation of our approach in terms of its effectiveness in producing k-anonymisations. We compared our method, which is referred to as PCMA (Preference-Constrained Multi-objective k-Anonymisation), to two other k-anonymisation algorithms: Mondrian [23], which is a partition-based algorithm that optimises data utility, and a modified version of Incognito [22] (called p-Incognito), which is based on an Apriori type of search and takes preferences on SAs into account by checking if the k-anonymous result satisfies them. We studied their performance in terms of how the specified preferences are satisfied and the level of data utility and privacy protection they produce. For these experiments, we used a sample comprised of 5000 tuples of the Adults dataset [24], which has become a benchmark for k-anonymisation. The dataset was configured as summarised in Table 1, and the parameters used in our algorithm are summarised in Table 2. All the algorithms were implemented in Java and ran on a computer with an Intel Xeon CPU at 2.33GHz with 3GB RAM under Windows XP.
4.2 Effectiveness w.r.t. Preference Satisfaction
In this set of experiments, we examined the performance of the methods in terms of satisfying preferences. We considered two types of attribute-level preference.
Table 1. Adults dataset configuration
Table 2. Parameters used in PCMA
The first type involves a single QID attribute, and the second involves a number of QIDs and/or SAs. In each case, we report the percentage of tuples that fail to satisfy the preferences specified, for various values of k. We first considered preferences involving a single QID. For this experiment, p-Incognito ran without specifying any preferences on SAs, which is equivalent to running Incognito itself. Figure 2 (a) illustrates the result for an attribute-level preference ap(Age) = 10/74.² As is evident from Figure 2 (a), both p-Incognito and Mondrian produced anonymisations that fail to satisfy this simple preference. More specifically, the percentage of tuples that did not meet the preference was more than 50% and 95% for Incognito and Mondrian respectively for all tested values of k. Incognito did better than Mondrian in this experiment because it uses a single-dimensional recoding model, instead of the multi-dimensional global recoding model, which did not heavily generalise Age. We also examined the percentage of tuples that fail to meet the attribute-level preference ap(Work-class) = 4/6. As shown in Figure 2 (b), Incognito satisfied the preference only when k is less than 10, but failed to do so in all other cases, while the result for Mondrian was qualitatively similar to the one illustrated in Figure 2 (a). On the other hand, PCMA was able to derive anonymisations that always satisfied the specified preference (i.e. the percentage of tuples that fail to satisfy preferences is zero in Figures 2 (a) and (b)). We then considered preferences involving multiple attributes. For these experiments, we ran p-Incognito specifying preferences for both SAs. Figure 3 (a)
² Note that in practice the user only needs to specify a range for values (e.g. the range for Age should be at most 10), while the size of the domain (e.g. 74 for the entire Age span in the dataset) can be established by the system automatically by simply scanning the dataset once.
Fig. 2. Percentage of tuples that fail to satisfy (a) ap(Age) = 10/74 (b) ap(Work-class) = 4/6
Fig. 3. Percentage of tuples that fail to satisfy (a) cp = {ap(Education) = 5/16, ap(Marital-status) = 2/7} (b) cp = {ap(Work-class) = 4/6, ap(Native-country) = 16/41, ap(Education) = 5/16} (c) cp = {ap(Work-class) = 4/6, ap(Native-country) = 16/41, ap(Marital-status) = 2/7}
reports the percentage of tuples that fail to satisfy the composite preference cp = {ap(Education) = 5/16, ap(Marital-status) = 2/7}, which requires forming groups containing values with a range that is at least 5/16 and 2/7 in Education and Marital-status respectively. This makes it more difficult for an attacker to gain an individual's sensitive information compared to using k-anonymity alone [12]. As can be seen, p-Incognito satisfied the preferences, while Mondrian was unable
to do so for all tested values of k. As expected, PCMA was also able to satisfy the preferences (i.e. the percentage of tuples that fail to satisfy preferences is zero in Figure 3 (a)). We also tested whether preferences involving both QIDs and SAs can be met. Intuitively, these preferences are more difficult to satisfy, since they are conflicting in nature. Figures 3 (b) and (c) illustrate the result for two different composite preferences. As is evident from these figures, both p-Incognito and Mondrian failed to meet these preferences for all values of k. In contrast, PCMA was again able to construct an anonymisation that satisfied the specified preferences (i.e. zero percentage of failures in Figures 3 (b) and (c)). We also note that we conducted experiments involving attribute-level preferences with different combinations of QIDs and SAs, as well as value-level preferences. The comparative performance of the methods was similar, and thus we do not report these results in this paper.
4.3 Effectiveness w.r.t. Data Utility and Privacy Protection
We then compared PCMA to the other methods with respect to the data utility and privacy protection of the anonymisations they produced. In these experiments, our method ran with a composite preference cp = {ap(Age) = 25/74, ap(Education) = 5/16, ap(Marital-status) = 2/7}, which involves conflicting preferences and is rather difficult to satisfy. We used two different data utility measures based on information loss: Utility Measure (UM) [11], which is defined as UM(T) = avg_{i=1,...,h} (Σ_{j=1}^{q} r(π_{a_j}(g_i))) and quantifies the average information loss over all groups, and Worst Group Utility (WGU), which is defined as WGU(T) = max_{i=1,...,h} (Σ_{j=1}^{q} r(π_{a_j}(g_i))) and measures the information loss of the most generalised group. UM and WGU take values between 0 and 1, and "low" scores imply that a "high" level of data utility is retained by T. Figures 4 (a) and (b) illustrate the result with respect to UM and WGU respectively. We also used the worst-protection scores WP(Education) and WP(Marital-status), as defined in Section 3, to measure protection against sensitive information disclosure, and report the result in Figure 5. This result shows that Mondrian may create seriously unprotected groups. For example, Figure 5 (a) shows that there were groups w.r.t. Education whose WP score is 1, the maximum (worst)
Fig. 4. Effectiveness w.r.t. (a) UM and (b) WGU
protection score for a group, for all tested values of k, while Figure 5 (b) indicates that for Marital-status equally unprotected groups were created when k is less than 15. In contrast, both p-Incognito and PCMA created groups whose range covers the entire domain of Education or Marital-status, as WP gets the minimum (best) possible score of 0 for all tested values of k.
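As an illustration only (the group representation, the range function r, and the normalisation are assumptions, not the authors' implementation), the two utility measures defined above can be computed along these lines:

```python
# Sketch: UM and WGU over the groups of an anonymised table.
# r(attribute, values) is assumed to return the normalised range of the
# projection of one group onto one QID, i.e. a score in [0, 1].

def utility_scores(groups, qids, r):
    losses = []
    for g in groups:                       # g is a list of records (dicts)
        per_qid = [r(a, [rec[a] for rec in g]) for a in qids]
        # average over the QIDs so that each group's loss stays in [0, 1]
        losses.append(sum(per_qid) / len(qids))
    um = sum(losses) / len(losses)         # average information loss (UM)
    wgu = max(losses)                      # loss of the worst group (WGU)
    return um, wgu
```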
Fig. 5. Worst-protection scores w.r.t. (a): Education (b): Marital-status
These results show that PCMA performs similarly to or better than both p-Incognito and Mondrian in terms of data utility and privacy protection. This is particularly encouraging, as it suggests that satisfying the specified preferences for some attributes or values does not harm the overall quality of the anonymised result.
4.4 Runtime Performance
PCMA is based on the algorithmic framework of NSGA-II, which has a time complexity of O(m × n²), where m is the number of objective functions and n is the cardinality of the dataset [19]. Furthermore, as in all genetic algorithms, the runtime performance depends on a number of parameters, such as the size of the population and the number of iterations to be run, which are set empirically. In our experiments, PCMA generated anonymisations within 2 hours. Thus, our approach is not as efficient as Incognito and Mondrian. Although we do not focus on performance in this paper, there are techniques that can speed up our method with minimal quality degradation [25]. We consider investigating these issues as future work.
5 Related Work
Achieving both data utility and sensitive information protection in k-anonymisation is critically important, and has been the focus of much recent work. The existing work on k-anonymisation may be grouped into two general approaches. The protection-constrained approach attempts to restrict data transformation to the level required to meet an additional constraint related to how SAs are grouped [6,7]. This approach may produce anonymisations with unacceptably low data utility, since there is no specific control over the amount of generalisation applied to certain attributes or values. The tradeoff-constrained
approach [13,11] attempts to balance the amount of generalisation and the risk of sensitive information disclosure, but does not separately control data utility and privacy protection. Thus, this approach may generate anonymised data that is either over-generalised or inadequately protected. Different from these approaches, we rely on preferences to produce anonymised tables with sufficient data utility and privacy protection.
There is much work on quantifying the level of privacy protection offered by published data using preferences. Wang et al. [26] proposed using privacy templates to limit the strength of the association between QIDs and certain SA values, Chen et al. [27] used preferences to model several types of background knowledge that an attacker may have about an individual using deterministic rules, while Du et al. [28] considered probabilistic modelling of background knowledge. All of these studies assume that preferences are specified by anonymisers. Xiao and Tao [8], on the other hand, proposed individual-specified preferences to limit the level of sensitive information disclosure. However, there is little work on using preferences to ensure data utility in privacy-preserving data publishing. Samarati proposed preferences based on the minimum number of suppressed tuples or on the height of hierarchies for categorical QID values [2], but these do not directly consider the information loss incurred in anonymisation [10] and are difficult for data users to express, as this requires knowledge of the characteristics of the dataset.
Algorithms for deriving k-anonymisations based on data partitioning [23], clustering [10,13], or an a-priori type of search [22] have been proposed. These algorithms attempt to minimise information loss in all QIDs together and can be extended to satisfy a single protection requirement [6,7] based on the aggregating approach, but are not well-suited to generating PC k-anonymisations as described in this paper.
6 Conclusion
In this paper, we address the problem of releasing data with sufficient utility and privacy protection by proposing a novel preference-constrained (PC) k-anonymisation model. We introduce preferences that allow expressing data utility and privacy protection requirements on attributes or values, and that can be specified by either anonymisers or data users. We also study how to incorporate preferences into anonymisation. We formulate k-anonymising data under the PC model as a multi-objective optimisation problem, treating preferences as constraints, and solve it with a genetic algorithm. Finally, we conduct a preliminary experimental evaluation which shows that our method is the only one that can produce k-anonymisations that satisfy the specified preferences while achieving a high level of data utility and protection.
There are some interesting directions for future research. First, it is important to consider preference models which allow more complex preferences to be specified [29]. Second, providing guidance for specifying preferences is worth investigating. For example, we can do so based on the accuracy of certain data
mining tasks (e.g. classification [30,16]) or by exploiting sample queries. Third, we aim to generalise our study to include protection requirements that can guard against attackers with stronger background knowledge [25].
References
1. Sweeney, L.: k-anonymity: a model for protecting privacy. International Journal on Uncertainty, Fuzziness and Knowledge-based Systems 10(5), 557–570 (2002)
2. Samarati, P.: Protecting respondents' identities in microdata release. IEEE Trans. on Knowledge and Data Engineering 13(9), 1010–1027 (2001)
3. Vaidya, J., Clifton, C.: Privacy-preserving k-means clustering over vertically partitioned data. In: KDD 2003, pp. 206–215 (2003)
4. Evfimievski, A., Srikant, R., Agrawal, R., Gehrke, J.: Privacy preserving mining of association rules. In: KDD 2002, pp. 217–228 (2002)
5. Ghinita, G., Karras, P., Kalnis, P., Mamoulis, N.: Fast data anonymization with low information loss. In: VLDB 2007, pp. 758–769 (2007)
6. Machanavajjhala, A., Gehrke, J., Kifer, D., Venkitasubramaniam, M.: l-diversity: Privacy beyond k-anonymity. In: ICDE 2006, p. 24 (2006)
7. Li, N., Li, T., Venkatasubramanian, S.: t-closeness: Privacy beyond k-anonymity and l-diversity. In: ICDE 2007, pp. 106–115 (2007)
8. Xiao, X., Tao, Y.: Personalized privacy preservation. In: SIGMOD 2006, pp. 229–240 (2006)
9. Domingo-Ferrer, J., Mateo-Sanz, J.M.: Practical data-oriented microaggregation for statistical disclosure control. IEEE Trans. on Knowledge and Data Engineering 14, 189–201 (2002)
10. Xu, J., Wang, W., Pei, J., Wang, X., Shi, B., Fu, A.: Utility-based anonymization using local recoding. In: KDD 2006, pp. 785–790 (2006)
11. Loukides, G., Shao, J.: Data utility and privacy protection trade-off in k-anonymisation. In: EDBT Workshop on Privacy and Anonymity in the Information Society, PAIS 2008 (2008)
12. Koudas, N., Zhang, Q., Srivastava, D., Yu, T.: Aggregate query answering on anonymized tables. In: ICDE 2007, pp. 116–125 (2007)
13. Loukides, G., Shao, J.: Capturing data usefulness and privacy protection in k-anonymisation. In: SAC 2007, pp. 370–374 (2007)
14. Wong, R., Li, J., Fu, A., Wang, K.: (α, k)-anonymity: An enhanced k-anonymity model for privacy-preserving data publishing. In: KDD 2006, pp. 754–759 (2006)
15. Wong, R.C., Fu, A.W., Wang, K., Pei, J.: Minimality attack in privacy preserving data publishing. In: VLDB 2007, pp. 543–554 (2007)
16. Zhang, L., Jajodia, S., Brodsky, A.: Information disclosure under realistic assumptions: privacy versus optimality. In: CCS 2007, pp. 573–583 (2007)
17. Deb, K.: Multi-Objective Optimization using Evolutionary Algorithms. John Wiley & Sons, Chichester (2001)
18. Coello, C.A.: A comprehensive survey of evolutionary-based multiobjective optimization techniques. Knowl. Inf. Syst. 1(3), 129–156 (1999)
19. Deb, K., Agrawal, S., Pratap, A., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evolutionary Computation 6(2), 182–197 (2002)
20. Iyengar, V.S.: Transforming data to satisfy privacy constraints. In: KDD 2002, pp. 279–288 (2002)
21. Booker, L.: Improving Search in Genetic Algorithms. Morgan Kaufmann, San Francisco (1987)
22. LeFevre, K., DeWitt, D.J., Ramakrishnan, R.: Incognito: efficient full-domain k-anonymity. In: SIGMOD 2005, pp. 49–60 (2005)
23. LeFevre, K., DeWitt, D.J., Ramakrishnan, R.: Mondrian multidimensional k-anonymity. In: ICDE 2006, p. 25 (2006)
24. Hettich, S., Merz, C.J.: UCI repository of machine learning databases (1998)
25. Li, J., Tao, Y., Xiao, X.: Preservation of proximity privacy in publishing numerical sensitive data. In: SIGMOD 2008, pp. 473–486 (2008)
26. Wang, K., Fung, B.C.M., Yu, P.S.: Handicapping attacker's confidence: an alternative to k-anonymization. Knowl. Inf. Syst. 11(3), 345–368 (2007)
27. Chen, B., Ramakrishnan, R., LeFevre, K.: Privacy skyline: Privacy with multidimensional adversarial knowledge. In: VLDB 2007, pp. 770–781 (2007)
28. Du, W., Teng, Z., Zhu, Z.: Privacy-MaxEnt: integrating background knowledge in privacy quantification. In: SIGMOD 2008, pp. 459–472 (2008)
29. Agrawal, R., Wimmers, E.L.: A framework for expressing and combining preferences. In: SIGMOD 2000, pp. 297–306 (2000)
30. Fung, B.C.M., Wang, K., Yu, P.S.: Top-down specialization for information and privacy preservation. In: ICDE 2005, pp. 205–216 (2005)
Privacy FP-Tree
Sampson Pun and Ken Barker
University of Calgary
2500 University Drive NW, Calgary, Alberta, Canada T2N 1N4
Tel.: (403) 220-5110
{szypun,kbarker}@ucalgary.ca
Abstract. Current technology has made the publication of people's private information a common occurrence. The implications for individual privacy and security are still largely poorly understood by the general public but the risks are undeniable, as evidenced by the increasing number of identity theft cases being reported recently. Two new definitions of privacy have been developed recently to help understand the exposure and how to protect individuals from privacy violations, namely, anonymized privacy and personalized privacy. This paper develops a methodology to validate whether a privacy violation exists for a published dataset. Determining whether privacy violations exist is a nontrivial task. Multiple privacy definitions and large datasets make exhaustive searches ineffective and computationally costly. We develop a compact tree structure called the Privacy FP-Tree to reduce the costs. This data structure stores the information of the published dataset in a format that allows for simple, efficient traversal. The Privacy FP-Tree can effectively determine the anonymity level of the dataset as well as identify any personalized privacy violations. The algorithm is O(n log n), which has acceptable characteristics for this application. Finally, experiments demonstrate the approach is scalable and practical.
Keywords: Privacy, Database, FP-Tree, Anonymized privacy, Personalized privacy.
1 Introduction
Increasing identity theft frequency throughout the world has made privacy a major concern for individuals. We are asked on a nearly daily basis to provide personal data in exchange for goods and services, particularly online. Credit card histories, phone call histories, medical histories, etc. are stored in various computers, often without our knowledge. This data is often released to the public either through the failings of security protocols or purposefully by companies undertaking data mining for reasons (possibly) beyond the individual's knowledge. One goal of privacy research is to ensure that an individual's privacy is not violated even when sensitive data is made available to the public. This paper provides an effective method for validating whether an individual's privacy has been (or is potentially exposed to be) violated.
Over the last 10 years there have been numerous breaches of privacy on publicly available data. Sweeney [6] showed that by cross-referencing a medical database in Massachusetts with voter registration lists, private medical data could be exposed. As a result, patient data thought to be private, by anonymization, could be linked to specific individuals. More recently, in 2006, AOL [11] was forced to make a public apology after releasing the search data of over 650,000 individuals. AOL employees thought the data was private and contained no identifiable information. AOL only removed the data from their website once users demonstrated that the data could be used to identify specific individuals. In the same year, the Netflix¹ prize was announced to encourage research into improving the search algorithm for its recommender system, to assist subscribers when selecting movies they might be interested in based on past preferences. Narayanan and Shmatikov [3] showed through their de-anonymization algorithms that the Netflix dataset exposed individually identifiable information. Each time such a violation is identified, work is undertaken to remove the vulnerability, but this retroactive approach does not prevent the damage caused by the initial violation, so new algorithms need to be developed to identify potential vulnerabilities.
Statistical databases were the focus of much of the privacy research in the late 1980s and early 1990s. These databases provided statistical information (sum, average, max, min, count, etc.) on the dataset without violating the privacy of its members. The research on statistical databases revolved mainly around query restriction [13] and data perturbation [4, 14]. However, with the current growth of data mining, researchers are demanding more user-specific data for analysis. Unfortunately, the data perturbation techniques utilized by statistical databases often left the tuples in a state that was inappropriate for data mining [6, 7]. To address this issue two new privacy classes have emerged: anonymized privacy and personalized privacy. These two privacy definitions allow the data collector to publish tuple-level information for data analysis while still guaranteeing some form of privacy to its members.
Anonymized privacy is a privacy guarantee made by the data collector. When publishing user-specific data, a member's tuple will be anonymized so that it cannot be identified with a high degree of confidence. Many properties have been defined within anonymized privacy. These include k-anonymity [6], l-diversity [2], and (α, k)-anonymity [10], among many others. Xiao and Tao [12] have also proposed the notion of personalized privacy. This notion allows users to define their own level of privacy to be used when data is published. If the data being published provides more information than the user is willing to release, then their privacy has been violated. Both privacy concepts will be discussed in further detail in Section 2.
In this paper, we have developed a novel approach that identifies the amount of privacy released within a published dataset. Using the concepts of anonymized and personalized privacy, we determine the privacy properties exhibited within an arbitrary dataset. If the privacy requirements are correctly and explicitly specified in the meta-data, then by comparing the exhibited privacy properties to those stated in the meta-data, we can expose discrepancies between the specification and the actual exposure found in the data.
Thus, given a privacy requirement (specification), we can validate whether a dataset conforms to that specification. This paper contributes
¹ http://www.netflixprize.com/ – Accessed September 17, 2008.
by characterizing the dataset, and leaves as future work the development of a suitable meta-data specification that can be used for comparison and/or validation. However, to help motivate the utility of the approach, consider the following simple privacy specification: “Anonymity is guaranteed to be at least 5-anonymous”. The dataset can now be tested, using the tool developed in this paper, to ensure that there exists at least 5 tuples in the dataset with the same quasi-identifier. Obviously this is a simple motivational example so the policy specifications are expected to be much more complex in a real-world data set. The contribution of this paper is a tool to analyze the data set using an efficient algorithm with respect to several anonymization criteria. The remainder of this paper is organized as follows. In Section 2, we describe the properties of anonymized privacy and personalized privacy. We present a new data structure called the privacy FP-Tree in Section 3. Section 4 explains how we use the privacy FP-tree to validate the privacy of the database. Section 5 describes scalability experiments to demonstrate the algorithm’s pragmatics and provides a complexity analysis. In Section 6, we discuss future work and draw conclusions about the privacy FP-tree.
2 Background
For anonymized and personalized privacy, the definition of privacy itself relies on four key concepts, which must be defined first. These are quasi-identifiers, identifiers, sensitive attributes, and non-sensitive attributes. All data values from a dataset must be categorized into one of these groups.
2.1 Identifiers and Quasi-Identifiers
An adversary interested in compromising data privacy may know either identifiers or quasi-identifiers. These values provide hints or insight to the adversary about which individuals are members of a particular dataset. Clearly some values reveal more information than others. Identifiers are pieces of information that, if published, will immediately identify an individual in the database. Social Insurance Numbers, Birth Certificate Numbers, and Passport Numbers are examples of such identifiers. Quasi-identifiers are sets of information that when combined can explicitly or implicitly identify a person. Addresses, gender, and birthdays are examples of quasi-identifiers because each individual value provides insufficient identifying information but collectively they could identify an individual. This paper uses the generic term identifier to reference both types and only uses the more specific terminology when necessary due to context.
2.2 Sensitive and Non-sensitive Attributes
Attributes that are not identifiers can be considered either sensitive or non-sensitive attributes. These attributes are assumed to be unknown by an adversary attempting to gain knowledge about a particular individual. The sensitive attributes are those that we must keep secret from an adversary. Information collected on an individual's health status, income, purchase history, and/or criminal past would be examples of
sensitive information. Non-sensitive attributes are those which are unknown by the adversary but which the user would not find problematic if the information were released as general knowledge. It is difficult to define where the dividing line is between these two types of attributes because each individual has their own preference, so some may consider all the information they provide as sensitive while others do not mind if all such information is released. This is the task of privacy policy specifications, which is beyond the scope of this paper. Thus, the provider must identify which attributes are considered sensitive and these are the only ones considered in the balance of this paper; we do not consider non-sensitive attributes further.
2.3 Anonymized Privacy
2.3.1 K-Anonymity
K-anonymity [6] is a privacy principle where any row within the dataset cannot be identified with a confidence greater than 1/k. To facilitate this property, each unique combination of identifiers must occur in the dataset at least k times. Table 1 provides a 2-anonymous table. The first three columns form the table's identifiers and each unique combination of the identifiers occurs within the table at least 2 times. While k-anonymity protects each row of the table from identification, it fails to fully protect all sensitive values. Each Q-Block² is linked to a set of sensitive values. If this set of sensitive values is identical, i.e., each row of a Q-block contains the same sensitive value, the adversary does not need to predict the victim's specific row: the adversary would still know the sensitive value for the victim with a confidence of 100%. This type of problem is called a homogeneity attack [2].
2.3.2 L-Diversity
Machanavajjhala et al. [2] describe a solution to the homogeneity attack called l-diversity. The l-diversity principle is defined as:
A Q-block is l-diverse if it contains at least l 'well represented' values for each sensitive attribute. A table is l-diverse if every Q-block is l-diverse.
(1)
The key to this definition is the term 'well represented'. Machanavajjhala et al. provide three different interpretations for this term [2]. Distinct l-diversity is the first interpretation. Distinct l-diversity states that for each q-block there must be l unique sensitive values. Distinct l-diversity can guarantee that the sensitive value is predicted correctly by the adversary at a rate of: (Q – (l – 1)) / Q, where Q is the number of rows in the Q-block.
(2)
Distinct l-diversity cannot provide a stronger privacy guarantee because there is no way to ensure the distribution among data values. It is feasible that a distinct 2-diverse table has a q-block containing 100 rows where one sensitive value contains a positive result while the other 99 contain negative results. An adversary would be able to predict with 99% accuracy that the victim has a negative sensitive value. This led Machanavajjhala et al. [2] to define two other definitions of 'well represented' l-diversity, namely entropy l-diversity and recursive (c, l)-diversity. Entropy
² Each set of rows corresponding to a unique combination of identifiers is known as a Q-Block.
l-diversity ensures that the distribution of sensitive values within each q-block conforms to:
−∑_s p(q, s) · log(p(q, s)) ≥ log(l), where p(q, s) = S / Q, S is the number of rows in the Q-block with a sensitive value s, and Q is the number of rows in the Q-block.
(3)
Therefore, to be entropy l-diverse a dataset must contain a relatively even distribution among the sensitive values (dependent on the l value chosen). Conversely, recursive (c, l)-diversity does not aim to have an even distribution among the values. Recursive diversity attempts to ensure that the most frequent sensitive value of a q-block is not too frequent. By counting the frequency of each sensitive value within a q-block and then sorting it, we are left with a sequence r1, r2, …, rm where r1 is the most frequent sensitive value. A Q-block satisfies recursive (c, l)-diversity if: r1< c * (rl +rl+1 +… +rm), where r1 is the most frequent value.
(4)
2.3.3 (α, k) – Anonymity (α, k)-anonymity [10] is a privacy principle similar to l-diversity. Simply stated there are two parts to (α, k)-anonymity. The k portion is the same as previously described, and α is the maximum percentage that any sensitive value within a Q-block can represent. Using (α, k)-anonymity can prevent a homogeneity attack by limiting the sensitive values within a Q-block. Formally, (α, k)-anonymity is defined as: A q-block fulfills the (α, k)-anonymity if p (q, s) ≤ α, where p (q, s) = S / Q. S is the number of rows in the Q-block with a sensitive value s. Q is the number of rows in the Q-block. A table is (α, k)-anonymous if every Q-block fulfills the (α, k)-anonymity requirement.
(5)
Table 1. Anonymized Table Containing Individual Incomes in different provinces of Canada (k= 2, α = 0.667, c = 2, l = 2, entropy l = 3)
Address     Date of Birth   SIN     Income
Alberta     1984            1234*   80k
Alberta     1984            1234*   85k
Ontario     19**            5****   120k
Ontario     19**            5****   123k
Manitoba    *               5****   152k
Manitoba    *               5****   32k
Manitoba    *               5****   32k
2.4 Personalized Privacy Preservation
Xiao et al. [12] introduce a different concept for protecting privacy called personalized privacy. In personalized privacy, the data collector must collect a guarding node along with the information of interest. This guarding node is a point on a taxonomy tree at
which the data provider is willing to release information. When publishing the dataset, each row is checked against the data provider's guarding node. A data provider's sensitive value cannot be published at a level of greater detail than the provider feels comfortable with, as indicated by their guard node. Figure 1 provides an example of a taxonomy tree drawn from the education domain. While data is collected at the lowest level (representing the most detailed or specific data), a person's guarding node may be at any point ranging from exact data (found at the leaves) up to the root node. For example, an individual may have completed Grade 9 but does not want this level of detail released to others. By setting their guarding node as 'Junior High School', data can only be published if the public cannot know with high confidence that this individual only completed Grade 9.
Fig. 1. Taxonomy Tree of the Education Domain (root ANY_EDU with children High School and University; High School splits into Jr. High with Grades 7–9 and Sr. High with Grades 10–12; University splits into Undergrad and Graduate, the latter covering Masters and Doctoral)
3 Privacy FP-Tree
Given the growing size of datasets, efficiency and capacity must be considered when attempting to protect privacy. Willenborg and De Waal developed a compact data structure called the FP-tree [8]. It shows that by storing a dataset in the form of an FP-Tree, file sizes differ by orders of magnitude. The main purpose of the FP-tree was to identify frequent patterns in transactional datasets. Instead, we use this functionality to identify the frequencies of each Q-block in a privacy context. As such, only columns of the dataset that are considered identifiers (recall Section 2) are used to create the FP-Tree.
3.1 FP-Tree Construction
Creating a FP-Tree requires two scans of the dataset. The first scan retrieves the frequency of each unique item and sorts that list in descending order. The second scan of the database orders the identifiers of each row according to their frequency and then appends each identifier to the FP-tree. A sketch of this algorithm is shown in Algorithm 1 below.
Input: A database DB
Output: FP-tree, the frequent-pattern tree of DB.
Method: The FP-tree is constructed as follows.
1. Scan the database DB once to collect the set of frequent identifiers (F) and the number of occurrences of each frequent identifier.
2. Sort F in occurrence-descending order as FList, the list of frequent attributes.
3. Create the root of an FP-tree, T, and label it as "null". For each row ROWS in DB do:
   – Select the frequent identifiers in ROWS and sort them according to the order of FList.
   – Let the sorted frequent-identifier list in ROWS be [p | P], where p is the first element and P is the remaining list.
   – Call insert tree([p | P], T).
Algorithm 1. Algorithm for FP-Tree Construction [5]
The function insert tree([p | P], T) is performed as follows: If T has a child N such that N.item-name = p.item-name, then increment N's count by 1; else create a new node N with its count initialized to 1, its parent link linked to T, and its node-link linked to the nodes with the same item-name via the node-link structure. If P is nonempty, call insert tree(P, N) recursively as indicated.
3.1.1 Example of FP-Tree Construction
To create the FP-Tree of Table 1, the first three columns are labeled as the identifiers and the last column is considered the sensitive attribute. The database used to create the FP-Tree only includes the portion of Table 1 containing identifying columns. Table 2 contains the sorted list of items based on their frequency within the dataset. Following step three of Algorithm 1 will then result in the FP-Tree shown in Figure 2.
Table 2. Frequency of Each Identifier within Table 1
Identifier          Frequency
SIN_5****           5
Address_Manitoba    3
DOB_*               3
Address_Alberta     2
DOB_1984            2
SIN_1234*           2
Address_Ontario     2
DOB_19**            2
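A compact Python rendering of Algorithm 1 might look as follows (an illustrative sketch with assumed class and function names, not the authors' Java implementation; node-links between nodes of equal item-name are omitted for brevity):

```python
from collections import Counter

class Node:
    def __init__(self, item, parent=None):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}        # item-name -> Node

def build_fp_tree(rows):
    """rows: list of lists of identifier values, e.g. ['Address_Alberta', ...]."""
    # Scan 1: frequency of each identifier.
    freq = Counter(v for row in rows for v in row)
    root = Node("null")
    # Scan 2: insert each row with its identifiers ordered by descending frequency.
    for row in rows:
        ordered = sorted(row, key=lambda v: -freq[v])
        node = root
        for item in ordered:
            child = node.children.setdefault(item, Node(item, node))
            child.count += 1
            node = child
    return root
```

Applied to the identifier columns of Table 1, this would produce leaf paths corresponding to the q-blocks shown in Figure 2 (up to tie-breaking among identifiers of equal frequency).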
Fig. 2. FP-Tree of Table 1 (paths from the root: Add_Alberta:2 → DOB_1984:2 → SIN_1234*:2; SIN_5****:5 → Add_Ontario:2 → DOB_19**:2; SIN_5****:5 → Add_Manitoba:3 → DOB_*:3)
3.2 Privacy FP-Tree Construction
This paper extends Algorithm 1 to develop Privacy FP-Trees. Using the FP-tree allows us to find privacy properties related to identifiers. However, sensitive values are a crucial part of any privacy definition. To account for this, sensitive values must be appended to the FP-tree. It was observed in Figure 2 that each leaf node of the FP-Tree represents one unique q-block within the dataset. Appending a list of sensitive values to the end of each leaf node allows the sensitive values to be associated with the correct q-block. In cases where the dataset contains multiple sensitive values, each column of sensitive values is stored in its own linked list at the end of each leaf node. The amendment to Algorithm 1 is as follows:
Input: A database DB, columns of identifiers, columns of sensitive values
Output: Privacy FP-tree, the frequent-pattern tree of DB with its associated sensitive values.
Method: The Privacy FP-tree is constructed as follows.
1. Scan the database DB once to collect the set of frequent identifiers (F) and the number of occurrences of each frequent identifier.
2. Sort F in occurrence-descending order as FList, the list of frequent attributes.
3. Create the root of an FP-tree, T, and label it as "null". For each row ROWS in DB do:
   – Select the frequent identifiers in ROWS and sort them according to the order of FList.
   – Let the sorted frequent-identifier list in ROWS be [p | P], where p is the first element and P is the remaining list.
   – If P is null (p represents the leaf node of the row), let the sensitive values of the row be [s | S], where s is the first and S are the remaining sensitive values, and call insert sensitive(p, [s | S]).
   – Call insert tree([p | P], T).
Algorithm 2. Algorithm for Privacy FP-Tree Construction
The function insert sensitive(p, [s|S]) is performed as follows: If p has a linked-list of the type³ of s, search through that linked-list for an element N such that N.item-name = s.item-name, then increment N's count by 1; else create a new node N with its count initialized to 1 and append node N to the end of the linked-list. If no linked-list is found, create a new node N with its count initialized to 1 and create a new linked-list for p of the type of s. If S is nonempty, call insert sensitive(p, S) recursively, as indicated.
3.2.1 Example of a Privacy FP-Tree Construction
The input for Algorithm 2 is a table, so we illustrate it with Table 1, with address, date of birth, and Social Insurance Number labeled as the identifying columns, and income as the sensitive column. The resulting Privacy FP-Tree is shown in Figure 3.
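Extending the FP-tree sketch above (again with assumed names, not the paper's implementation), the amendment of Algorithm 2 only changes what happens at a leaf: each leaf keeps one counter table per sensitive column, playing the role of the per-type linked lists.

```python
from collections import Counter, defaultdict

class PrivacyNode:
    def __init__(self, item, parent=None):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}
        # one Counter per sensitive column (stands in for the linked lists)
        self.sensitive = defaultdict(Counter)

def build_privacy_fp_tree(rows, id_cols, sens_cols):
    """rows: list of dicts; id_cols / sens_cols: column names."""
    freq = Counter(f"{c}_{row[c]}" for row in rows for c in id_cols)
    root = PrivacyNode("null")
    for row in rows:
        items = sorted((f"{c}_{row[c]}" for c in id_cols),
                       key=lambda v: -freq[v])
        node = root
        for item in items:
            node = node.children.setdefault(item, PrivacyNode(item, node))
            node.count += 1
        for c in sens_cols:          # leaf reached: record the sensitive values
            node.sensitive[c][row[c]] += 1
    return root
```

For Table 1 this would attach, for example, the counts {152k: 1, 32k: 2} to the leaf of the Manitoba q-block, matching the linked list shown in Figure 3.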
4 Determining Privacy
4.1 Anonymized Privacy
4.1.1 Finding K-Anonymity
To determine the k-anonymity of a dataset, the q-block with the minimum number of rows must be located. Using the privacy FP-tree, we represent each q-block by a leaf node. Within each leaf node is a frequency, which is the number of occurrences of the path between the leaf node and the root node. For example, node "SIN_1234*" has a frequency of 2. The value 2 is the number of times that "SIN_1234*, DOB_1984, Address_Alberta" appeared together within the dataset. Using this property of the tree, we traverse through all the leaf nodes. By identifying the minimum value among all the leaf nodes, k-anonymity is determined for the dataset. This minimum is the k of the dataset since no other q-block will have fewer than k common rows.
4.1.2 Finding l-Diversity
To find the distinct l-diversity of a dataset, the q-block that contains the fewest unique sensitive values must be located. Using the privacy FP-tree, the number of unique sensitive values of a q-block is represented by the depth of the linked-list stored in the leaf node. Traversing through each leaf node and storing the minimum depth of the linked-lists will identify the distinct l of the dataset.
The entropy of a q-block was defined above (3). Within each node of the linked-list is the sensitive value's name and count. p(q, s) is determined by accessing the count of the sensitive value and dividing it by the frequency within the leaf node. Traversing the linked-list of sensitive values for a q-block will determine p(q, s) for all sensitive values s in that q-block. Finally, to calculate the entropy of the q-block we sum −p(q, s) * log(p(q, s)) over all sensitive values s. The entropy of a dataset is that of the q-block with the lowest entropy. Once again we can determine this by traversing each leaf node to identify the q-block with the lowest entropy.
³ Values of the same type belong to the same sensitive domain. Examples provided in this document assume that values of the same type are within the same column of a dataset.
The l within (c, l)-diversity is the same l as the one found using the distinct l-diversity method. To calculate the c of a q-block, the most frequent sensitive value (i.e. its maximum count) must be determined. This can be accomplished by going through the linked-list of a q-block. Recall that formula (4) above captured the properties of (c, l)-diversity. The frequency of the leaf node is equal to the sum of all the counts of the sensitive values. Subtracting the counts of the l−1 most frequent sensitive values from this frequency results in (rl + rl+1 + … + rm). The c of a q-block can then be determined as r1 / (rl + rl+1 + … + rm). Traversing the leaf nodes to find the c of each q-block will determine the c for the dataset as a whole. The greatest c among the q-blocks is the c value for the dataset.
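A sketch of how Sections 4.1.1–4.1.3 read these properties off the leaves, assuming the PrivacyNode structure sketched earlier (illustrative only, not the authors' code):

```python
import math

def leaves(node):
    if not node.children:
        yield node
    for child in node.children.values():
        yield from leaves(child)

def anonymized_properties(root, sens_col):
    k = min(leaf.count for leaf in leaves(root))
    distinct_l = min(len(leaf.sensitive[sens_col]) for leaf in leaves(root))
    alpha, worst_entropy, worst_c = 0.0, float("inf"), 0.0
    for leaf in leaves(root):
        counts = sorted(leaf.sensitive[sens_col].values(), reverse=True)
        q = leaf.count
        alpha = max(alpha, counts[0] / q)                 # alpha of (alpha, k)
        entropy = -sum((s / q) * math.log(s / q) for s in counts)
        worst_entropy = min(worst_entropy, entropy)       # entropy l-diversity
        tail = sum(counts[distinct_l - 1:])               # r_l + ... + r_m
        worst_c = max(worst_c, counts[0] / tail if tail else float("inf"))
    return k, distinct_l, worst_entropy, worst_c, alpha
```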
Fig. 3. Privacy FP-Tree of Table 1 (the FP-Tree of Figure 2 with a linked list of sensitive values attached to each leaf: SIN_1234*:2 → {80k:1, 85k:1}; DOB_19**:2 → {120k:1, 123k:1}; DOB_*:3 → {152k:1, 32k:2})
4.1.3 Finding (α, k)-Anonymity
The method used to find k was described in Section 4.1.1. α can be determined by calculating the max(p(q, s)) within the privacy FP-Tree. To find the max(p(q, s)) of a q-block, the sensitive value with the maximum count, max, is returned. Max is then divided by the frequency within the leaf node. Once all the leaf nodes have been traversed, the q-block with the largest max(p(q, s)) will be known, and that value is the α of the dataset.
4.1.4 Multiple Sensitive Values
While the examples and explanations have only involved datasets with a single sensitive value, multiple sensitive values within a dataset are common. Machanavajjhala et al. [2] define a dataset to be l-diverse on a multi-sensitive table if the table is l-diverse when each sensitive attribute is treated as the sole sensitive attribute. This is easily implemented in our privacy FP-tree. Each sensitive attribute is
given its own linked-list within each q-block. By comparing the values calculated from each linked-list within the same q-block and returning only the value required (i.e. min or max), we can determine the correct anonymized privacy values on multi-attribute tables by iterating our algorithm appropriately.
4.2 Personalized Privacy
Prior to finding whether or not a dataset preserves personalized privacy, a mechanism to represent the taxonomy tree is required. Each node within a level of the taxonomy tree is assigned a unique number. The sequence of numbers from the root to the node is then used as the representation. The conversion of the taxonomy in Figure 1 is shown in Table 3. Null is included to account for the possibility of an individual who has no preference for the amount of information released.
Table 3. Numeric Representation of the Taxonomy Tree in Figure 1
Node          Representation
ANY_EDU       1
High School   1,1
University    1,2
Jr. High      1,1,1
Sr. High      1,1,2
Undergrad     1,2,1
Graduate      1,2,2
Grade 7       1,1,1,1
Grade 8       1,1,1,2
Grade 9       1,1,1,3
Grade 10      1,1,2,1
Grade 11      1,1,2,2
Grade 12      1,1,2,3
Masters       1,2,2,1
Doctoral      1,2,2,2
Null          Null
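The numeric representation of Table 3 can be generated mechanically from the taxonomy; the following small sketch (tree layout and names are illustrative assumptions) produces exactly the paths shown above:

```python
def number_taxonomy(node, prefix=(1,), table=None):
    """node = (name, [children]); the root is 1, children are numbered 1..n per level."""
    if table is None:
        table = {}
    name, children = node
    table[name] = ",".join(map(str, prefix))
    for i, child in enumerate(children, start=1):
        number_taxonomy(child, prefix + (i,), table)
    return table

education = ("ANY_EDU", [
    ("High School", [("Jr. High", [("Grade 7", []), ("Grade 8", []), ("Grade 9", [])]),
                     ("Sr. High", [("Grade 10", []), ("Grade 11", []), ("Grade 12", [])])]),
    ("University", [("Undergrad", []),
                    ("Graduate", [("Masters", []), ("Doctoral", [])])]),
])
print(number_taxonomy(education)["Grade 9"])   # '1,1,1,3'
```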
For datasets using personalized privacy, there are at least two sensitive columns. One column contains the sensitive values, which are going to be published, while the other contains the guarding nodes of each row. After building the privacy FP-tree, leaf nodes within the tree will contain two linked-lists corresponding to the two columns. By analyzing these two linked-lists we can determine whether privacy is violated in a q-block. To analyze the sensitive values and the guarding nodes we first convert both lists to their numerical representations. The sensitive values and guarding nodes are then passed to Algorithm 3. If any q-block violates the privacy of an individual then the table itself violates the personalized privacy constraint.
Input: a list of sensitive values S for a q-block, and a list of guarding nodes G for the same q-block
Output: a Boolean indicating whether the q-block preserves privacy
Method:
for each guard in G
    for each data in S
        if (data.length() < guard.length())
            guard satisfied
        else if (data != guard) (for the length of the guard)
            guard satisfied
        else
            guard is violated
if all guards are satisfied then the q-block preserves privacy, else privacy is violated.
Algorithm 3. Algorithm for determining personalized privacy violations
4.2.1 Discussion of Algorithm 3
A guard node is satisfied if a sensitive value within the q-block is higher on the taxonomy tree than the guard. In this situation an adversary cannot predict the sensitive value with high confidence⁴ at a level of detail the same as the guard node. For example, suppose a guard node was set at "High School" and within this q-block a sensitive value existed which was "ANY_EDU". The length of the guard node (High School: 1,1) would be 2 and the length of the sensitive value (ANY_EDU: 1) would be 1. In this situation the guard node would be satisfied because an adversary would not be able to predict this education level with high confidence. Secondly, a guard node would be satisfied if there was a sensitive value that does not have a common path to the root node. For example, if a guard node was "Grade 9" and there was a sensitive value "Masters" being published, then their respective numerical representations would be 1,1,1,3 and 1,2,2,1. Any difference in the values will prevent an adversary from predicting the sensitive value with high (100%) confidence.
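A runnable rendering of the guard check over the numeric paths (a sketch under the assumptions that paths are comma-separated strings as in Table 3, and that the exists-reading of the inner loop follows the discussion above; this is not the authors' implementation):

```python
def guard_satisfied(data_path, guard_path):
    """One published sensitive value vs. one guarding node ('1,1,2'-style paths)."""
    if guard_path in (None, "Null"):
        return True                      # no preference expressed
    data, guard = data_path.split(","), guard_path.split(",")
    if len(data) < len(guard):
        return True                      # published value is more general than the guard
    return data[:len(guard)] != guard    # different subtree -> guard satisfied

def q_block_preserves_privacy(sensitive_paths, guard_paths):
    """Every guard must be satisfied by at least one published sensitive value,
       so that a 100%-confidence inference at the guarded level is impossible."""
    return all(any(guard_satisfied(d, g) for d in sensitive_paths)
               for g in guard_paths)

# Example from the discussion: guard 'Grade 9' vs. published 'Masters'.
print(q_block_preserves_privacy(["1,2,2,1"], ["1,1,1,3"]))   # True
```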
5 Experiments
5.1 Environment
Experiments were completed using a Java (JRE 1.6) implementation. The hardware used was a Quad Core Q6600 @ 2.40 GHz with 1024 MB of memory allocated exclusively to the Eclipse platform for running the Java virtual machine. The datasets used for the experiments were variations of the "Adult" dataset provided by the UC Irvine Machine Learning Repository. In order to create a larger dataset for analysis we analyzed multiple copies of the Adult dataset as one single file.
⁴ High confidence is defined as 100% in Hansell [11].
Fig. 4. Time to Process Dataset of Varying Sizes
5.2 Scalability
The first experiment was to determine the time required to analyze the privacy of datasets of different sizes. In this experiment the number of identifier and sensitive columns remained constant while only the number of rows increased. Maintaining the same number of unique values meant the size of the privacy FP-tree would remain constant; only the counts within each node would differ. Figure 4 shows the results of the experiment.
Figure 4 shows that there was linear growth in the time to create the privacy FP-Tree and determine its anonymized privacy properties versus the number of rows within the dataset. The linear growth was a result of the increasing cost of scanning through datasets of larger sizes. Since the privacy FP-tree was the same size for all the datasets, determining the anonymized privacy properties took constant time.
The second experiment investigated the time required to determine the privacy properties of varying privacy FP-Trees. This experiment analyzed datasets that varied from one unique q-block to 1000 unique q-blocks. The size of the dataset remained constant at 1000 rows. The results of the experiment showed a negligible cost in running time for the various privacy FP-trees. Each dataset took approximately 5 seconds to complete and there was less than a 1% difference between the times. The initial overhead for file I/O and class instantiation required the most resources in the entire process. These two experiments have shown that the privacy FP-tree is a practical solution capable of verifying and determining the privacy characteristics of a dataset. While no experiments are reported here to verify the personalized privacy aspect of the paper, the cost of determining such a property is similar to the cost of determining anonymized privacy.
5.3 Complexity
This approach is readily split into two sections for complexity analysis. The first section is the creation of the privacy FP-tree and the second is the cost to determine the privacy properties of the dataset. The creation of the privacy FP-tree requires exactly
two scans of the database, or O(n). Prior to the second database scan, the frequencies of n nodes must be sorted in descending order. The sorting algorithm implemented was a simple merge sort with O(n log n) cost. Thus, the cost of creating the privacy FP-tree is O(n) + O(n log n), so the total complexity is O(n log n). To determine the privacy properties, our algorithms must traverse all q-blocks. At each q-block the privacy properties are calculated by looking at the sensitive values. In the worst-case scenario, the cost is O(n), where each row is a unique q-block. Therefore, the overall cost to complete our approach is O(n log n).
6 Comparison to Some Directly Related Work
The work by Atzori et al. [9] focused on identifying k-anonymity violations in the context of pattern discovery. The authors present an algorithm for detecting inference channels. In essence this algorithm identifies frequent item sets within a dataset consisting of only quasi-identifiers. This algorithm has exponential complexity and is not scalable. The authors acknowledge this and provide an alternate algorithm that reduces the dataset that needs to be analyzed. This optimization reduces the running time of their algorithm by an order of magnitude. In comparison, our approach provides the same analysis with a reduced running cost and more flexibility by allowing sensitive values to be incorporated into the model.
Friedman et al. [1] expanded on the existing k-anonymity definition beyond the release of a data table. They define k-anonymity based on the tuples released by the resulting model instead of the intermediate anonymized data tables. They provide an algorithm to induce a k-anonymous decision tree. This algorithm includes a component to maintain k-anonymity within the decision tree consistent with our definition of k-anonymity. To determine whether k-anonymity is breached in the decision tree, the authors have chosen to pre-scan the database and store the frequencies of all possible splits. Our algorithm would permit on-demand scanning of the privacy FP-tree to reduce the cost of this step.
7 Future Work and Conclusions
The main purpose of the privacy FP-tree is as a verification tool. This approach can be extended to ensure that datasets are stored in a manner that preserves privacy. Rather than storing information in tables, it can be automatically stored in a privacy FP-tree format. Thus, permissions can be developed to only allow entries into the privacy FP-tree if privacy is preserved after insertion. We intend to explore the possibility of implementing a privacy database system in which the privacy FP-tree data structure is used to store all information.
Research must also be done on the effects of adding, removing, and updating rows in the published dataset. Since altering the dataset also alters the structure of the privacy FP-tree, it requires us to rerun our algorithm to determine the privacy of the new dataset. Methods must be developed such that altering the dataset will not require the privacy FP-Tree to be rebuilt. Rebuilding the privacy FP-tree is the costliest
portion of our algorithm, as shown through our experiments, so further efficiencies will prove important.
In this paper we have shown that the privacy afforded in a database can be determined in an efficient and effective manner. The privacy FP-tree allows the storage of the database to be compressed while taking privacy principles into consideration. We have shown how k-anonymity, l-diversity, and (α, k)-anonymity can be correctly determined using the privacy FP-tree. We acknowledge that many other similar algorithms exist that merit future consideration. Furthermore, this approach can be used to verify whether a dataset will preserve personalized privacy. Through our experiments and complexity analysis we have shown that this approach is practical and an improvement on current methods.
References
1. Friedman, R.W., Schuster, A.: Providing k-anonymity in data mining. The VLDB Journal, pp. 789–804 (2008)
2. Machanavajjhala, J.G., Kifer, D., Venkitasubramaniam, M.: l-diversity: Privacy beyond k-anonymity. In: Proc. 22nd Intnl. Conf. Data Engg. (ICDE), p. 24 (2006)
3. Narayanan, A., Shmatikov, V.: Robust De-anonymization of Large Datasets, February 5 (2008)
4. Dwork, C.: An Ad Omnia Approach to Defining and Achieving Private Data Analysis. In: Proceedings of the First SIGKDD International Workshop on Privacy, Security, and Trust in KDD
5. Han, J., Pei, J., Yin, Y.: Mining Frequent Patterns without Candidate Generation. In: Chen, W., et al. (eds.) Proc. Int'l Conf. Management of Data, pp. 1–12 (2000)
6. Sweeney, L.: k-anonymity: a model for protecting privacy. International Journal on Uncertainty, Fuzziness and Knowledge-based Systems 10(5), 557–570 (2002)
7. Sweeney, L.: Weaving technology and policy together to maintain confidentiality. J. of Law, Medicine and Ethics 25(2-3), 98–110 (1997)
8. Willenborg, L., De Waal, T.: Statistical Disclosure Control in Practice. Springer, Heidelberg (1996)
9. Atzori, M., Bonchi, F., Giannotti, F., Pedreschi, D.: Anonymity preserving pattern discovery. The VLDB Journal, 703–727 (2008)
10. Wong, R., Li, J., Fu, A., Wang, K.: (α, k)-Anonymity: An Enhanced k-Anonymity Model for Privacy Preserving Data Publishing. In: KDD (2006)
11. Hansell, S.: AOL removes search data on vast group of web users. New York Times (August 8, 2006)
12. Xiao, X., Tao, Y.: Personalized Privacy Preservation. In: SIGMOD (2006)
13. Chin, F.Y., Ozsoyoglu, G.: Auditing and inference control in statistical databases. IEEE Trans. Softw. Eng. SE-8(6), 113–139 (1982)
14. Liew, K., Choi, U.J., Liew, C.J.: A data distortion by probability distribution. ACM TODS 10(3), 395–411 (1985)
Classification with Meta-learning in Privacy Preserving Data Mining
Piotr Andruszkiewicz
Institute of Computer Science, Warsaw University of Technology, Poland
[email protected]
Abstract. In privacy preserving classification, when data is stored in a centralized database and distorted using a randomization-based technique, we have information loss and reduced accuracy of classification. Moreover, there are several possible algorithms and different reconstruction types (in the case of a decision tree) to use, and we cannot point out the best combination of them. Meta-learning is the solution to combine information from all algorithms. Furthermore, it gives higher accuracy of classification. This paper presents a new meta-learning approach to privacy preserving classification for centralized data. The effectiveness of this solution has been tested on real data sets and is presented in this paper.
1 Introduction
Incorporating privacy in classification with randomization-based techniques¹ causes information loss. Thus, we obtain worse results (i.e., accuracy of classification) than without privacy. Moreover, there are many combinations of algorithms² which can be used and there is no best combination. In some situations we can point out the best combination of algorithms, but for different data sets it is no longer the best alternative. The many available combinations are the reason to use meta-learning for classification in privacy preserving data mining. Thus, we present in this paper a new meta-learning (bagging and boosting) approach to privacy preserving classification for centralized data distorted with a randomization-based technique. Meta-learning enables the miner to combine information from different algorithms and get better accuracy of classification, which reduces the information loss caused by privacy.
1.1 Related Work
Privacy preserving classification has been extensively discussed recently [1] [3] [4] [5] [6] [7] [8] [2]. 1 2
For details about randomization-based technique for continuous and nominal attributes see [1] and [2], respectively. Combination of algorithms in this paper means always a pair of algorithms for reconstruction of probability distribution, one for continuous attributes and second for nominal attributes.
L. Chen et al. (Eds.): DASFAA 2009 Workshops, LNCS 5667, pp. 261–275, 2009. c Springer-Verlag Berlin Heidelberg 2009
262
P. Andruszkiewicz
Papers [3] and [6] represent the cryptographic approach to privacy preserving. We use a different approach – randomization-based technique. Preserving privacy for individual values in distributed data is considered in [7] and [8]. In these works databases are distributed across a number of sites and each site is only willing to share mining process results, but does not want to reveal the source data. Techniques for distributed database require a corresponding part of the true database at each site. Our approach is complementary, because it collects only modified tuples, which are distorted at the client machine. There are various different approaches to privacy preserving classification, but we mention only the relevant ones. Agrawal and Srikant [1] proposed how to build a decision tree over (centralized) disturbed data with randomization-based technique. In this solution they also presented the algorithm (we will call it AS) for probability distribution reconstruction for continuous attributes, which estimates the original probability distribution based on distorted samples. Paper [4] extends the AS algorithm and presents the EM-based reconstruction algorithm, which does not take into account nominal attributes either. Multivariate Randomized Response technique was presented in [5]. It allows creating a decision tree only for nominal attributes. The solution shown in [2] differs from those above, because it enabled data miner to classify (centralized) perturbed data containing continuous and nominal attributes modified using randomization-based techniques to preserve privacy on an individual level. This approach uses the EM/AS algorithm to reconstruct probability distribution for nominal attributes and the algorithm for assigning reconstructed values to samples for this type of attributes to build a decision tree simultaneously with continuous attributes. The EQ algorithm for reconstructing probability distribution of nominal attributes was proposed in [9]. The algorithm achieves better results, especially for high privacy level, i.e., low probability of retaining the original value of the nominal attribute. In our paper we use the solutions proposed in [1] [4] [2] and [9]. Many papers concern meta-learning [10] [11] [12], but not in the context of privacy preserving data mining. Meta-learning for distributed data mining has been developed in [13] and [14]. It creates classifiers at individual sites to develop a global classifier. This could protect individual entities but for distributed data. In this paper we show how to use meta-learning in privacy preserving classification for centralized data. 1.2
1.2 Contributions of This Paper
The proposed meta-learning solution combines information from different algorithms and gives better results for classification with preserved privacy for centralized data distorted with a randomization-based technique.
1.3 Organization of This Paper
The remainder of this paper is organized as follows: Section 2 reviews meta-learning algorithms. In Section 3, we present our new approach to meta-learning in privacy preserving classification. The experimental results are highlighted in Section 4. Finally, Section 5 summarizes the conclusions of our study and Section 6 outlines future avenues to explore.
2 Review of Meta-learning
Meta-learning can be described as learning from information generated by a learner(s) [12]. We may also say that it is learning meta-knowledge from information on a lower level. In one variant, meta-learning uses several classifiers trained on different subsets of the data, and every sample is classified by all trained classifiers. We may also use different classification algorithms, train them on the training set or its subsets and then gather the predicted classes from these classifiers. To choose the final class we use voting or weighted voting. This approach is effective for "unstable" learning algorithms, for which a small change in the training set gives a significantly different hypothesis [12] [10]; these are, e.g., decision trees and rules. For stable algorithms (e.g., Naive Bayes) we probably will not get a significant improvement. The most popular meta-learning algorithms are bagging and boosting.
2.1 Bagging
Bagging [10] is a method for generating multiple classifiers (learners) from the same training set. The final class is chosen by, e.g., voting. Let T be the training set with n labelled samples and C be the classification algorithm, e.g., a decision tree. We learn k base classifiers cl_1, cl_2, ..., cl_k. Every classifier uses algorithm C and is learned on a training set T_i. T_i consists of n samples chosen uniformly at random with replacement from the original training set T. The number of samples may also be lower than the number of records in the original training set, e.g., in the range of (2/3)n to n. Every trained classifier gives its prediction for a sample and the final class is chosen according to simple voting (every voter has the same strength, i.e., weight).
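As an illustration of the bagging procedure just described, the following is a minimal (non-private) sketch; it assumes NumPy arrays X, y with non-negative integer class labels and scikit-learn decision trees as the base algorithm C, and the names and parameters are illustrative rather than the paper's implementation.

```python
# Minimal bagging sketch: k bootstrap samples, one decision tree each, simple voting.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, k=5, sample_frac=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    classifiers = []
    for _ in range(k):
        # T_i: samples drawn uniformly at random with replacement (size n, or 2/3 n .. n)
        idx = rng.integers(0, n, size=int(sample_frac * n))
        classifiers.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return classifiers

def bagging_predict(classifiers, X):
    # simple voting: every voter has the same weight
    votes = np.stack([clf.predict(X) for clf in classifiers])
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
```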
2.2 Boosting
In the boosting method [11], like in bagging, we create a set of k classifiers cl_1, cl_2, ..., cl_k. The classifiers use algorithm C and are trained on training sets T_i. The difference is in the process of choosing the T_i training sets. In bagging we assume that we draw samples according to the uniform distribution. In boosting every subsequent classifier is trained mainly on samples which were misclassified by previous learners, i.e., misclassified samples have a higher probability of being drawn.
One boosting method is AdaBoost [11]. Let $P_{il}$ ($i = 1..k$, $l = 1..n$) be the probability of choosing sample $s_l$ into $T_i$ from the original training set T. For the first classifier we have the same probability for all samples (like in bagging):

$P_{1l} = \frac{1}{n}, \quad l = 1..n$

where n is the number of samples in the original training set, $n = |T|$. For each classifier we have different probabilities, e.g., for classifier $cl_{i+1}$ we calculate

$S_i = \sum_{l:\, cl_i(t_l) \neq f(t_l)} P_{il}$

the sum of the probabilities of the samples for which classifier $cl_i$ gave the wrong answer; f is the empirical (true) classification. Then we compute the $\alpha_i$ fractions:

$\alpha_i = \frac{1}{2} \log \frac{1 - S_i}{S_i}, \quad i = 1..k$

The probabilities are modified as follows:

$P_{i+1,l} = \begin{cases} P_{i,l} \cdot e^{-\alpha_i} & \text{if } cl_i(t_l) = f(t_l) \\ P_{i,l} \cdot e^{\alpha_i} & \text{if } cl_i(t_l) \neq f(t_l) \end{cases} \qquad i = 1..k,\ l = 1..n$

Then we normalize the probabilities. The next classifier is trained mostly on the samples misclassified by previous learners. The final class is chosen using weighted voting with fraction $\alpha_i$ for each classifier.
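The weight-update loop above can be sketched as follows; this is a hedged illustration assuming NumPy arrays X, y with 0/1 integer labels and scikit-learn decision stumps as base learners, not the code used in the paper.

```python
# AdaBoost-style weight update: S_i is the weight of misclassified samples,
# alpha_i = 0.5 * log((1 - S_i) / S_i), weights rescaled and renormalized.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, k=5):
    n = len(X)
    P = np.full(n, 1.0 / n)                 # P_{1l} = 1/n
    classifiers, alphas = [], []
    for _ in range(k):
        clf = DecisionTreeClassifier(max_depth=1)
        clf.fit(X, y, sample_weight=P)
        wrong = clf.predict(X) != y
        S = P[wrong].sum()
        if S == 0 or S >= 0.5:              # degenerate cases: stop early
            classifiers.append(clf); alphas.append(1.0 if S == 0 else 0.0)
            break
        alpha = 0.5 * np.log((1 - S) / S)
        P *= np.where(wrong, np.exp(alpha), np.exp(-alpha))
        P /= P.sum()                        # normalize the probabilities
        classifiers.append(clf); alphas.append(alpha)
    return classifiers, alphas

def adaboost_predict(classifiers, alphas, X):
    # weighted voting with the alpha_i fractions (0/1 labels mapped to {-1, +1})
    score = sum(a * (2 * clf.predict(X) - 1) for clf, a in zip(classifiers, alphas))
    return (score > 0).astype(int)
```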
3 Meta-learning in Privacy Preserving Classification
While dealing with classification over data with preserved privacy we have several possible algorithms to choose from and there is no clear winner. In the case of privacy preserving classification for centralized data distorted with a randomization-based technique we may use two algorithms for reconstruction of the probability distribution of continuous attributes: AS [1] and EM [4]. Moreover, for nominal attributes we also have two possible algorithms: EM/AS [2] and EQ [9]. Thus, we have four combinations for sets containing continuous and nominal attributes simultaneously. When we use a decision tree as a classifier in privacy preserving classification, there are four reconstruction types: Local, By class, Global and Local all [1] [2]. Reconstruction type Global means that we reconstruct probability distributions only in the root of the tree. In By class we reconstruct separately for each class, but only in the root node. For Local, reconstruction is performed in every node, divided into classes. Local all means reconstruction in every node without dividing into classes.
Combining all algorithms currently available in the literature with the reconstruction types gives 16 possibilities. There is no one combination of algorithms which performs best, and we can only choose the best combination for a specific case (i.e., a given data set and parameters of the distortion procedure). Not only can we not point out the best combination of algorithms, but it is also hard to choose the best reconstruction type. We may say that there are two best reconstruction types: Local and By class [1] [2], but we still cannot choose the best one. Experiments conducted in [1] showed that these two reconstruction types are the best while building decision trees over distorted data containing only continuous attributes. [2] confirmed this statement for continuous and nominal attributes used simultaneously. Neither paper pointed out the best reconstruction type, because for some data sets Local gives better results, for others By class. Taking into account the high number of combinations of algorithms and reconstruction types, meta-learning can be used to eliminate these drawbacks. In privacy preserving data mining we may use meta-learning (without hierarchical structures) in two ways. The first is to apply bagging or boosting for a chosen combination of algorithms and reconstruction type. Second, we may apply bagging or boosting methods separately for different combinations of algorithms and reconstruction types. Then we use all the classifiers together and calculate the final class by voting. In both cases we conduct reconstruction separately for each classifier.
3.1 Boosting for Distorted Data
As stated in [1] [4] [2] [9] we train a classifier over distorted data, but the test set is undistorted. Boosting needs to classify the training (distorted) data to compute the probabilities P_il and the α_i fractions (see Section 2.2). Classifying distorted data in the way we classify undistorted data leads to inaccuracy. In this section we propose how to classify the training (distorted) data. In a standard test node of a (binary) decision tree we check whether the given value of the attribute meets the test. We choose one branch (left) when it meets the test and the other (right) when it does not. Having a distorted value of attribute A, we may calculate the probability P_A(yes) that the given value meets the test and the probability that it does not, P_A(no) = 1 − P_A(yes). For the s-th sample we want to classify, we calculate for the i-th leaf the probability that the s-th sample could go into the i-th leaf (Pl_i). This probability is equal to the product of the probabilities P_X(yes) when we choose the left branch and P_X(no) when we choose the right branch, for each test on the path which leads to the i-th leaf. Let us assume we have a decision tree with 3 tests: one (in the root) on attribute A, a second (on the left) on attribute B, and a third (on the right) on attribute C. The probability of the left leaf is Pl_1 = P_A(yes) · P_B(yes).
Having the probabilities for the l leaves estimated, we can calculate the probability that sample s belongs to category C_j:

$P_s(C_j) = \sum_{i=1,\, LeafCategory_i = C_j}^{l} Pl_i$
We choose the category with the highest probability. To calculate P_s(C_j) we need to estimate P_X(yes) and P_X(no) for each test.

Nominal tests: For the m-th test we have the modified values (of the training samples which go into the m-th test) of the nominal attribute, so we have the probability distribution of the modified attribute (i.e., P(Z = v_i), i = 1..k). We also have the reconstructed probability distribution (P(X = v_i), i = 1..k). Let us assume that the distorted value of attribute Z is equal to v_q (Z = v_q). We can calculate the probabilities P(X = v_i | Z = v_q), i = 1..k. Using Bayes' Theorem we write:

$P(X = v_i \mid Z = v_q) = \frac{P(X = v_i) \cdot P(Z = v_q \mid X = v_i)}{P(Z = v_q)}$

The probability P(Z = v_q | X = v_i) is the probability that the value v_i of the attribute will be changed to value v_q, and we know that probability because we know the parameters of the distorting method. To calculate P_X(yes) we sum the probabilities P(X = v_i | Z = v_q) over all values v_i which meet the test:

$P_X(yes) = \sum_{i=1,\, v_i \in test}^{k} P(X = v_i \mid Z = v_q)$
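As a small illustration of the nominal-test computation, the sketch below assumes a simple randomized-response distortion in which the original value is kept with probability p and otherwise replaced by another value chosen uniformly; the function names and the example prior are hypothetical.

```python
# Posterior P(X = v_i | Z = v_q) via Bayes' theorem and P_X(yes) for a nominal test.
import numpy as np

def posterior_given_distorted(P_X, q, p):
    k = len(P_X)
    # P(Z = v_q | X = v_i): p if i == q, else (1 - p) / (k - 1)
    P_Z_given_X = np.full(k, (1 - p) / (k - 1))
    P_Z_given_X[q] = p
    joint = P_X * P_Z_given_X          # numerator of Bayes' theorem
    return joint / joint.sum()          # divide by P(Z = v_q)

def p_x_yes(P_X, q, p, values_meeting_test):
    post = posterior_given_distorted(P_X, q, p)
    return post[list(values_meeting_test)].sum()

# Example: 3 values, reconstructed prior (0.5, 0.3, 0.2), observed distorted value v_0,
# retention probability 0.6, test met by values {v_0, v_2}.
print(p_x_yes(np.array([0.5, 0.3, 0.2]), q=0, p=0.6, values_meeting_test={0, 2}))
```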
Continuous tests: X is the original attribute. Y is used to modify X and obtain Z, the modified attribute. Z is equal to z, the distorted value. Let us assume that the continuous test is met if the value of attribute X is less than or equal to t. We may write:

$P_X(yes) = P(X \le t \mid Z = z, X + Y = Z) = F(t \mid Z = z, X + Y = Z)$   (1)

$P_X(yes) = \int_{-\infty}^{t} f_X(r \mid Z = z, X + Y = Z)\, dr$   (2)

Using Bayes' Theorem,

$P_X(yes) = \int_{-\infty}^{t} \frac{f_Z(Z = z, X + Y = Z \mid X = r)\, f_X(r)}{f_Z(z)}\, dr$   (3)

Since Y is independent of X and the denominator is independent of the integral,

$P_X(yes) = \int_{-\infty}^{t} \frac{f_Y(z - r)\, f_X(r)}{f_Z(z)}\, dr$   (4)

We know $f_Y$, thus we can compute $P_X(yes)$ and $P_X(no)$.
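Equation (4) can be evaluated numerically; the following sketch assumes uniform noise Y on [−w/2, w/2] and a reconstructed density f_X given on a grid, which is an illustrative setup rather than the paper's implementation.

```python
# Numerical estimate of Eq. (4): P_X(yes) = integral_{-inf}^{t} f_Y(z - r) f_X(r) dr / f_Z(z).
import numpy as np

def p_x_yes_continuous(grid, f_X, z, t, w):
    f_Y = np.where(np.abs(z - grid) <= w / 2, 1.0 / w, 0.0)   # uniform noise density f_Y(z - r)
    numerator = f_Y * f_X                                     # f_Y(z - r) * f_X(r)
    f_Z_at_z = np.trapz(numerator, grid)                      # denominator f_Z(z)
    mask = grid <= t                                          # integrate only over r <= t
    return np.trapz(numerator[mask], grid[mask]) / f_Z_at_z

grid = np.linspace(-10, 10, 2001)
f_X = np.exp(-0.5 * grid**2) / np.sqrt(2 * np.pi)             # e.g., a standard normal f_X
print(p_x_yes_continuous(grid, f_X, z=1.0, t=0.0, w=4.0))
```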
3.2 Bagging and Boosting for Chosen Combination of Algorithms and Reconstruction Type
Having chosen the combination of algorithms for the reconstruction of probability distribution, one for continuous attributes and one for nominal attributes, and the reconstruction type to be used in a decision tree, we may use either bagging or boosting. The final class is determined according to a simple or weighted voting method. We may also join the votes from bagging and boosting at one time and calculate the final class. We combine votes on the same level, i.e., we do not use hierarchical classifiers. In this case we choose the number of classifiers for each meta-learning method separately. For bagging the weights are equal to 1 and for boosting we use the α_i fractions as weights. For example, let the class be a binary (0-1) attribute and let us have bagging with 3 classifiers and boosting with 4 classifiers. We sum the weights for all (seven) classifiers which answered 0 and for all which answered 1, separately:

$W_0 = \sum_{j=1,\, cl_j(s)=0}^{7} w_j, \qquad W_1 = \sum_{j=1,\, cl_j(s)=1}^{7} w_j$
Then we choose the class with the highest sum (cumulative weight).
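A minimal sketch of this combined vote (weight 1 for bagging classifiers, α_i for boosting classifiers, binary 0/1 class) is given below; the function and variable names are illustrative.

```python
# Combined bagging + boosting vote: the class with the largest cumulative weight wins.
import numpy as np

def combined_vote(bagging_preds, boosting_preds, alphas):
    # bagging_preds: 0/1 answers from bagging classifiers (weight 1 each)
    # boosting_preds, alphas: answers and alpha weights of the boosting classifiers
    answers = np.array(list(bagging_preds) + list(boosting_preds))
    weights = np.array([1.0] * len(bagging_preds) + list(alphas))
    W0 = weights[answers == 0].sum()
    W1 = weights[answers == 1].sum()
    return 0 if W0 > W1 else 1

# e.g., bagging with 3 classifiers and boosting with 4 classifiers
print(combined_vote([0, 1, 1], [1, 0, 1, 1], alphas=[0.8, 0.3, 0.5, 0.9]))
```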
3.3 Combining Different Algorithms and Reconstruction Types with the Usage of Bagging and Boosting
There are 3 possible cases of using different algorithms and reconstruction types with meta-learning: we may use only different combinations of algorithms, only different reconstruction types, or both. In the first case, for each combination of reconstruction algorithms and the chosen reconstruction type we separately use either bagging or boosting, as in Section 3.2. Then we determine the final class using all created classifiers by (weighted) voting (all classifiers are on the same level and we sum all weights). For different reconstruction types and a chosen combination of algorithms we proceed in the same way as for the case with different algorithms. We may also combine these situations, i.e., we use bagging or boosting for each possible combination of algorithms and reconstruction types. The method of calculating the weights stays the same. In all cases we need not include all possible combinations, e.g., for different combinations of reconstruction types, to achieve better accuracy of classification we can use only Local and By class - the two best reconstruction types.
4 Experiments
This section presents the results of the experiments conducted with the usage of meta-learning in privacy preserving classification.
All sets used in the tests can be downloaded from the UCI Machine Learning Repository (http://www.datalab.uci.edu/). We used the following sets: Australian, Credit-g, Diabetes, Segment, chosen under the following conditions: two sets should contain only continuous attributes (Diabetes, Segment), two should contain both continuous and nominal attributes (Australian, Credit-g), and one of the sets should have a class attribute with multiple values (Segment).

In the experiments the accuracy, sensitivity, specificity, precision and F-measure were used. Sensitivity, also called recall rate, measures the proportion of actual positives which are correctly identified as such: Sensitivity = tp / (tp + fn). Specificity measures the proportion of negatives which are correctly identified: Specificity = tn / (tn + fp). Precision measures the proportion of samples pointed out by the classifier which are correctly identified: Precision = tp / (tp + fp). Accuracy is the percentage of correctly classified samples: Accuracy = (tp + tn) / (tp + tn + fp + fn). F-measure is the weighted harmonic mean of precision and recall: F-measure = 2·precision·recall / (precision + recall). For a class attribute with multiple (k) values we used the macro average: $Macro_{measure} = \frac{1}{k}\sum_{i=1}^{k} measure_i$. Tp (fp) denotes the number of true (false) positives; tn (fn) is the number of true (false) negatives.

We used the definition of privacy based on differential entropy [4].

Definition 1. The privacy inherent in a random variable A is defined as $\Pi(A) = 2^{h(A)}$, where h(A) is the differential entropy of A.

Definition 2. The differential entropy of a random variable A is defined as $h(A) = -\int_{\Omega_A} f_A(a) \log_2 f_A(a)\, da$
where Ω_A is the domain of variable A. A random variable A distributed uniformly between 0 and a has privacy equal to a. For a general random variable C, Π(C) denotes the length of the interval over which a uniformly distributed random variable has the same uncertainty as C. n% privacy means that we use a distorting distribution whose privacy is equal to n% of the range of values of the attribute being distorted; e.g., for 100% privacy and a random variable A with a range of values equal to 10, we distort it with a distribution which has privacy measure equal to 10 (for the uniform distribution we may use a random variable distributed between −5 and 5). For details about privacy measures see [1] and [4]. To achieve more reliable results we used 10-fold cross-validation [15] and calculated the average of 50 runs.
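For the uniform distorting distribution this privacy measure can be checked directly; the short sketch below (with illustrative function names) shows that a uniform variable on an interval of length a has Π = a, and how the noise interval for n% privacy would be chosen under that assumption.

```python
# Pi(A) = 2^h(A) for a uniform distribution, and the uniform noise interval for n% privacy.
import numpy as np

def privacy_of_uniform(width):
    # Uniform on an interval of length `width`: f = 1/width, so
    # h = -integral (1/width) * log2(1/width) = log2(width) and Pi = 2^h = width.
    return 2 ** np.log2(width)

def uniform_noise_for_privacy(attribute_range, privacy_percent):
    # n% privacy: the distorting distribution's privacy equals n% of the attribute's range;
    # for uniform noise the interval length is that value directly.
    width = attribute_range * privacy_percent / 100.0
    return (-width / 2, width / 2)

print(privacy_of_uniform(10.0))               # -> 10.0
print(uniform_noise_for_privacy(10.0, 100))   # -> (-5.0, 5.0), as in the example above
print(uniform_noise_for_privacy(10.0, 200))   # -> (-10.0, 10.0)
```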
We show mainly the results for the Australian set, because of the similarity of the results for the other sets and the ability to compare calculated measures between the experiments. In all experiments we distorted all attributes except the class/target attribute. All continuous attributes were distorted using the uniform distribution (unless explicitly stated that the normal distribution was used). For the normal distribution we obtained similar results. We used the SPRINT [16] decision tree modified to incorporate privacy according to [1].
4.1 Experiments with Chosen Combination of Algorithms and Reconstruction Type with the Usage of Bagging and Boosting
Figure 1 shows the accuracy of classification for Australian set. We used the following combination of algorithms: AS for continuous attributes and EM/AS (called EA) for nominal attributes and conducted experiments for all possible types of reconstruction: LO - Local, BC - By class, GL - Global, LA - Local all. NR means that we did not use any reconstruction. We used bagging and boosting separately and combined, as described in Section 3.2. We also experimented with accuracy and the number of classifiers for given meta-learning method. Figure 1 presents the results for separate usage of bagging - bg5 (5 classifiers were used), boosting bo5 (with 5 classifiers) and combination of meta-learning methods, e.g., bg5bo4 - bagging used 5 classifiers and boosting 4 decision trees. Thus, the number after short name of the meta-learning method tells how many classifiers were used for this particular meta-learning method. Without means that no meta-learning method was used, i.e., single classifier was built.
[Figure 1 appears here: two panels (Australian, 100% privacy and 200% privacy); x-axis: Reconstruction type (LO, BC, GL, LA, NR); y-axis: Accuracy; series: without, bo5, ba5, bg5bo5, bg11bo10.]
Fig. 1. Accuracy of classification with the usage of meta-learning (bagging and boosting methods) for set Australian with 100% and 200% privacy level for chosen combination of algorithms - AS.EA.
Table 1. Sensitivity, specificity, precision and F-measure with the usage of metalearning (bagging and boosting methods) for set Australian with 100% and 200% privacy level for chosen combination of algorithms - AS.EA.
100% privacy:
Measure, method       LO      BC      GL      LA      NR
Sensitivity without   0.7958  0.7958  0.8021  0.7985  0.7693
Sensitivity bg5bo5    0.8056  0.7836  0.7664  0.7708  0.7720
Specificity without   0.8403  0.8541  0.8508  0.8531  0.8369
Specificity bg5bo5    0.8679  0.8818  0.8654  0.8708  0.8730
Precision without     0.8031  0.8191  0.8155  0.8171  0.7982
Precision bg5bo5      0.8343  0.8463  0.8229  0.8303  0.8321
F without             0.7956  0.8031  0.8053  0.8041  0.7774
F bg5bo5              0.8162  0.8096  0.7906  0.7961  0.7972

200% privacy:
Measure, method       LO      BC      GL      LA      NR
Sensitivity without   0.4813  0.4068  0.4404  0.4427  0.4251
Sensitivity bg5bo5    0.3964  0.3357  0.4301  0.4050  0.3309
Specificity without   0.7157  0.7527  0.7669  0.7632  0.6688
Specificity bg5bo5    0.8339  0.8522  0.7598  0.7801  0.7928
Precision without     0.5839  0.5801  0.5945  0.5956  0.5180
Precision bg5bo5      0.6665  0.6581  0.5838  0.5905  0.5727
F without             0.5168  0.4669  0.4936  0.4937  0.4465
F bg5bo5              0.4862  0.4343  0.4872  0.4696  0.3998
For Local and By class reconstruction types and both presented levels of privacy (100%, 200%) we obtain in every case higher accuracy. For 100% privacy we have accuracy about 84-85%. Higher privacy reduces accuracy to the level of 62-64%. For 100% privacy bagging with 5 classifiers achieves better results than bagging and boosting together (both with 5 classifiers). But for 200% bagging and boosting together (5 classifiers each) perform better. For Global and Local all meta-learning decreases the accuracy of classification. The reason may be that in these two reconstruction types we do not divide samples into classes during reconstruction and changes made for each training set have low influence on the decision of classifiers. For the case without reconstruction and 100% privacy meta-learning increases accuracy. We may say that it is as high as for Global and Local all. For higher privacy (200%) and without reconstruction meta-learning still gives better results, but overall accuracy is the lowest among all reconstruction types. Table 1 shows sensitivity, specificity, precision and F-measure for set Australian in this experiment. For 100% privacy, Local and By class reconstruction types only sensitivity for By class has lower value for meta-learning with the simultaneous usage of bagging and boosting (5 classifiers for each method), compared to the situation without meta-learning. For 200%, Local and By class we have lower values for sensitivity and F-measure (precision increases). Thus, meta-learning causes slightly lower proportion of actual positives which are correctly identified. For 200% privacy, Global and Local all reconstruction type meta-learning gives better results only for 2 cases: sensitivity for Global and specificity for Local all. For 100% privacy all measures are between about 77 and 86%, for 200% sensitivity decreases to the level of 33%. To sum up, we can say that in general the higher number of classifiers, the better accuracy we obtain. The simultaneous usage of two meta-learning methods with high number of classifiers gives results with high accuracy.
4.2 Accuracy of Classification for Different Combinations of Algorithms with the Usage of Bagging and Boosting
We performed the experiment with the usage of all combinations of algorithms: AS.EA, EM.EA, AS.EQ, and EM.EQ. We used bagging and boosting separately (with 1 and 5 classifiers for each combination of algorithms). The obtained results are similar to the experiment from Section 4.1. Meta-learning gives better results for Local, By class and no reconstruction, but almost no improvement for Global and Local all. For 5 classifiers per combination of algorithms (for Local and By class) we have high accuracy - about 85% for 100% privacy and about 72% for 200% privacy (for 200% privacy even higher than for the experiment in Section 4.1). For bagging and boosting with 1 classifier per combination of algorithms we have lower accuracy, but still higher than without meta-learning (except for one case). To conclude, using different combinations of algorithms we obtain high accuracy. The higher the number of classifiers, the better the results. Once again, the Global and Local all reconstruction types give poor results for meta-learning.
4.3 Accuracy of Classification for Different Reconstruction Types with the Usage of Meta-learning
For the Australian set with the usage of meta-learning for all combinations of reconstruction types we get worse results than without meta-learning. The reason is that we obtain really low accuracy for the Global and Local all reconstruction types for this set. As mentioned earlier, these two reconstruction types seem to be the worst and for some sets they give very poor results (see the previous experiments). To eliminate the undesirable impact of Global and Local all, we use only the Local and By class reconstruction types. Results of the experiments for the Australian set are shown in Figure 2. For only the two best reconstruction types meta-learning performs better than a single classifier. For 100% privacy the accuracy is about 85%, for 150% slightly lower - 82-83%. For 200% privacy we obtain 65% accuracy for AS.EA and EM.EA, while algorithms AS.EQ and EM.EQ give accuracy of about 72%.
4.4 Accuracy of Classification for Different Combinations of Algorithms and Reconstruction Types with the Usage of Bagging and Boosting
The last possibility is to combine different algorithms and reconstruction types. According to the results from the previous section we use only the Local and By class types of reconstruction. Results of the experiments are shown in Table 2. The sets were distorted with the uniform and normal distributions.
[Figure 2 appears here: Australian set; x-axis: Algorithms (AS.EA, EM.EA, AS.EQ, EM.EQ); y-axis: Accuracy; series: m100%, m150%, m200%, Lo100%, Lo150%, Lo200%.]
Fig. 2. Accuracy of classification with the usage of meta-learning (simultaneously bagging and boosting with 5 classifiers per each method) for set Australian with 100%, 150%, and 200% privacy level for only Local and By class reconstruction types compared to Local reconstruction type

Table 2. Accuracy of classification with the usage of meta-learning (simultaneously bagging and boosting with 5 classifiers per each method) for only Local and By class reconstruction types and different combinations of algorithms compared to Local reconstruction type and AS.EA algorithms.

Set               Acc.            Sens.           Spec.           Prec.           F
                  100%    200%    100%    200%    100%    200%    100%    200%    100%    200%
mCredit-g         0.7250  0.6889  0.2426  0.1396  0.9342  0.9261  0.6213  0.4803  0.3349  0.2301
LoCredit-g        0.6813  0.6033  0.3749  0.3210  0.8134  0.7283  0.4636  0.3439  0.4074  0.3149
mCredit-g (n)     0.7300  0.6839  0.3280  0.1363  0.9038  0.9190  0.5910  0.5091  0.4113  0.2502
LoCredit-g (n)    0.6716  0.6061  0.4212  0.3372  0.7795  0.7239  0.4455  0.3495  0.4267  0.3250
mAustralian       0.8567  0.6962  0.8633  0.5011  0.8517  0.8526  0.8288  0.7461  0.8425  0.5858
LoAustralian      0.8199  0.6165  0.8040  0.4814  0.8329  0.7235  0.7995  0.5968  0.7971  0.5225
mAustralian (n)   0.8548  0.6822  0.8557  0.4719  0.8541  0.8500  0.8301  0.7267  0.8396  0.5404
LoAustralian (n)  0.8261  0.5696  0.8004  0.4637  0.8462  0.6541  0.8095  0.5566  0.8021  0.4784
mDiabetes         0.7392  0.7158  0.8687  0.8412  0.4936  0.4802  0.7594  0.7492  0.8090  0.7901
LoDiabetes        0.6908  0.6699  0.8216  0.7785  0.4467  0.4666  0.7326  0.7307  0.7718  0.7493
mDiabetes (n)     0.7409  0.7025  0.8856  0.9266  0.4710  0.2859  0.7541  0.7068  0.8127  0.7993
LoDiabetes (n)    0.7039  0.6762  0.8399  0.8484  0.4515  0.3559  0.7375  0.7095  0.7827  0.7680
mSegment          0.8354  0.8216  0.8354  0.8231  0.9726  0.9703  0.8412  0.8184  0.8339  0.8157
LoSegment         0.7974  0.7882  0.7972  0.7890  0.9663  0.9647  0.8022  0.7921  0.7935  0.7829
mSegment (n)      0.8110  0.7802  0.8115  0.7823  0.9685  0.9634  0.8239  0.7814  0.8109  0.7712
LoSegment (n)     0.7877  0.7272  0.7884  0.7290  0.9646  0.9546  0.7994  0.7247  0.7849  0.7080
Only for Credit-g does meta-learning significantly decrease sensitivity and F-measure. In the remaining cases meta-learning improves all measures (with only one case with a significantly worse result).
Table 3 shows the relative gain caused by meta-learning for two cases: undistorted data and data with preserved privacy (for uniform and normal distorting distributions). For all cases the meta-learning gain for privacy 100%, 150%, and 200% is higher than for undistorted data. Meta-learning gives higher accuracy and once more proves its usefulness in privacy preserving data mining.

Table 3. Comparison of meta-learning accuracy gain for undistorted and distorted data (the simultaneous usage of bagging and boosting with 5 classifiers per each method).

Set         Acc. without  Acc. meta  Privacy 0%  100%   150%   200%   100% (n)  150% (n)  200% (n)
Credit-g    0.7210        0.7508     4.1%        6.4%   12.6%  14.2%  8.7%      9.1%      12.8%
Australian  0.8261        0.8552     3.5%        4.5%   9.8%   12.9%  3.5%      6.6%      19.8%
Diabetes    0.7368        0.7449     1.1%        6.6%   7.7%   6.4%   5.3%      3.7%      3.9%
Segment     0.9355        0.9550     2.1%        4.8%   4.6%   4.2%   3.0%      6.8%      7.3%
4.5 Time of Training Classifiers with Meta-learning
Unfortunately, meta-learning incurs a time cost: all of the decision trees have to be built, which takes time. Local is the most expensive, because it reconstructs the probability distribution for each class in every node of the decision tree. By class takes only slightly more time (for a high number of classifiers) than the case without reconstruction, because it performs the reconstruction for each class but only in the root of a tree. For about 20 classifiers there is a difference in time of one order of magnitude compared to the situation without meta-learning. Meta-learning increases the time of training, but the time of classification is almost the same. Thus, we bear the time cost while training classifiers; classification time remains small. Meta-learning is well suited to distributed computation, training classifiers on different machines, which would reduce the time needed to train them.
5 Conclusions
In privacy preserving data mining for classification, meta-learning can be used to achieve higher accuracy and to combine information from different algorithms. It is better to use only the Local and By class reconstruction types, because the remaining types may give poor results. Meta-learning gives a higher gain in accuracy for data with preserved privacy than for undistorted data. Unfortunately, meta-learning causes a significant time cost in the process of training classifiers. The higher the number of classifiers, the more time we need. The time of classification is still small. Moreover, meta-learning makes the interpretation of the created classifier harder: we have to look at all the decision trees to know the rules of classification.
6 Future Work
In future work, we plan to investigate the possibility of extending our results to the usage of various classification algorithms as the meta-learner (not only simple or weighted voting). We will check the results for the situation where every single implementation of bagging and boosting separately gives its own answer to a classifier on a higher level (contrary to the situation presented in this paper). It is also possible to pass to the classifier on the higher level not only the answers of the classifiers, but the training set or its subset. We also plan to use hierarchical classifiers (with 3 and more levels). We will try to group classifiers with, e.g., different combinations of algorithms and the same reconstruction type, and then train the classifier on the highest level on their outputs. To reduce the time of training classifiers in meta-learning, we will use distributed computation. We also plan to use the presented approach to classify a distorted test set.
References
1. Agrawal, R., Srikant, R.: Privacy-preserving data mining. In: Proc. of the ACM SIGMOD Conference on Management of Data, May 2000, pp. 439–450. ACM Press, New York (2000)
2. Andruszkiewicz, P.: Privacy preserving classification for continuous and nominal attributes. In: Proceedings of the 16th International Conference on Intelligent Information Systems (2008)
3. Lindell, Y., Pinkas, B.: Privacy preserving data mining. In: Bellare, M. (ed.) CRYPTO 2000. LNCS, vol. 1880, pp. 36–54. Springer, Heidelberg (2000)
4. Agrawal, D., Aggarwal, C.C.: On the design and quantification of privacy preserving data mining algorithms. In: PODS 2001: Proceedings of the twentieth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, pp. 247–255 (2001)
5. Du, W., Zhan, Z.: Using randomized response techniques for privacy-preserving data mining. In: Getoor, L., Senator, T.E., Domingos, P., Faloutsos, C. (eds.) KDD, pp. 505–510. ACM, New York (2003)
6. Yang, Z., Zhong, S., Wright, R.N.: Privacy-preserving classification of customer data without loss of accuracy. In: SDM (2005)
7. Zhang, N., Wang, S., Zhao, W.: A new scheme on privacy-preserving data classification. In: Grossman, R., Bayardo, R., Bennett, K.P. (eds.) KDD, pp. 374–383. ACM, New York (2005)
8. Xiong, L., Chitti, S., Liu, L.: Mining multiple private databases using a knn classifier. In: SAC 2007: Proceedings of the 2007 ACM symposium on Applied computing, pp. 435–440 (2007)
9. Andruszkiewicz, P.: Probability distribution reconstruction for nominal attributes in privacy preserving classification. In: Proceedings of the International Conference on Convergence and Hybrid Information Technology (2008)
10. Breiman, L.: Bagging predictors. Machine Learning 24(2), 123–140 (1996)
11. Freund, Y., Schapire, R.: Experiments with a new boosting algorithm. In: Proceedings of the Thirteenth International Conference on Machine Learning (ICML), pp. 148–156 (1996)
12. Chan, P.K., Stolfo, S.J.: Experiments on multi-strategy learning by meta-learning. In: Bhargava, B.K., Finin, T.W., Yesha, Y. (eds.) CIKM, pp. 314–323. ACM, New York (1993)
13. Chan, P.K., Stolfo, S.J.: On the accuracy of meta-learning for scalable data mining. J. Intell. Inf. Syst. 8(1), 5–28 (1997)
14. Chan, P.K.W.: An extensible meta-learning approach for scalable and accurate inductive learning. PhD thesis, New York, NY, USA, Sponsor: Salvatore J. Stolfo (1996)
15. Dietterich, T.G.: Approximate statistical tests for comparing supervised classification learning algorithms. Neural Computation (10), 1895–1924 (1998)
16. Shafer, J.C., Agrawal, R., Mehta, M.: Sprint: A scalable parallel classifier for data mining. In: Vijayaraman, T.M., Buchmann, A.P., Mohan, C., Sarda, N.L. (eds.) VLDB 1996, Proceedings of 22th International Conference on Very Large Data Bases, Mumbai (Bombay), September 3-6, pp. 544–555. Morgan Kaufmann, San Francisco (1996)
Importance of Data Standardization in Privacy-Preserving K-Means Clustering

Chunhua Su1, Justin Zhan2, and Kouichi Sakurai1

1 Dept. of Computer Science and Communication Engineering, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka, Fukuoka 819-0395, Japan
{su,sakurai}@itslab.csce.kyushu-u.ac.jp
2 Heinz College & Cylab Japan, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA, 15213, USA
[email protected]
Abstract. Privacy-preserving k-means clustering assumes that there are at least two parties in the secure interactive computation. However, the existing schemes do not consider data standardization, which is an important task before executing the clustering among different databases. In this paper, we point out that without data standardization, problems will arise in many applications of data mining. We also provide a solution for secure data standardization in privacy-preserving k-means clustering.

Keywords: k-means clustering, cryptographic protocol, secure approximation, data standardization.
1 Introduction
The k-means clustering algorithm is widely used in data clustering. Data clustering can be considered the most important unsupervised learning problem; it deals with finding a structure in a collection of unlabeled data. In this paper, we focus on the privacy-preserving problem of k-means clustering, which is a frequently used method in data mining. Data mining adds to clustering the complications of very large datasets with very many attributes of different types, and approximation algorithms are frequently applied to sensitive data, as in the distributed cryptographic setup of secure computation. We study data variable standardization problems in privacy preserving k-means clustering using secure approximation algorithms, focusing on a two-party data mining model.
Supported by the Research Fellowship of Japan Society for the Promotion of Science (JSPS) and Grant-in-Aid for Scientific Research of JSPS No. 2002041. Supported by Japan Society for the Promotion of Science, Grant-in-Aid for Scientific Research (B) No. 20300005.
1.1 Brief Description of K-Means Clustering
The k-means clustering algorithm is widely used in data mining to group data with similar characteristics or features together. Given n data vectors, the algorithm divides them into k groups in three steps: (1) measure the distances between the data vectors and each of the k clusters and assign each data vector to the closest cluster; (2) compute the centroid of each cluster; (3) update the clusters over and over again until the k clusters change little from the former iteration. The k-means algorithm was introduced by MacQueen in 1967 [9]. The algorithm can be described by a pseudo-program as follows:
K-means clustering algorithm
  Initialize the k-clustering C_1, ..., C_k for the n points x_i (i = 1, ..., n) to 0
  and randomly select k starting cluster points C̄_1, ..., C̄_k
  repeat
    for all data points x_i do
      Compute the centroid x̄(C_j) of each cluster;
      Assign data point x_i to cluster C_j if the distance d(x_i, C_j) is the minimum over all j.
    end for
    Calculate new means C̄_1, ..., C̄_k.
  until the difference between C̄_1, ..., C̄_k and C_1, ..., C_k is acceptably low.
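For reference, a minimal (non-private) NumPy version of this pseudocode might look as follows; variable names are illustrative and this is not the construction discussed later in the paper.

```python
# Plain k-means: assign each point to the nearest center, recompute centroids, repeat.
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]   # random starting cluster points
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)                         # closest cluster per point
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):                 # change is acceptably low
            break
        centers = new_centers
    return centers, labels

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
centers, labels = kmeans(X, k=2)
```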
1.2 Standardization Problem in Distributed K-Means Clustering
People have investigated k-means clustering from various perspectives. Many data factors which may strongly affect the performance of k-means have been identified and addressed; data standardization is one of them. It is common practice in marketing research to standardize the columns (to mean zero and unit standard deviation) of an entities-by-variables data matrix, prior to clustering the entities corresponding to the rows of that matrix. Prior to performing k-means clustering, variables must be standardized to have equal variance, to avoid obtaining clusters that are dominated by variables having the largest amounts of variation. Standardization of variables is particularly important when variables are measured on different scales. For instance, it may be unwise to combine a variable measuring income (usually measured in thousands of dollars) with a variable measuring age in a cluster analysis, because the scale of the income variable is likely to dominate the resultant clusters. However, should the variables be given equal weight? Is it reasonable to
assume that all variables included in the cluster analysis contribute equally to the cluster structure? The uncritical standardization and selection of variables that are to be used in a cluster analysis may result in poor solutions. The goal of the present research is to provide a procedure that effectively rescales variables to the same scale while preserving their inherent cluster information. Nowadays, data standardization is more and more important for data clustering in biological or chemical research areas [2].
1.3 Previous Works and Our Contributions
The k-means clustering problem is one of the functions most studied in the more general class of data-mining problems. Data-mining problems have received much attention in recent years as advances in computer technology have allowed vast amounts of information to be stored and examined. Due to the sheer volume of inputs that are often involved in data-mining problems, generic multiparty computation (MPC) protocols become infeasible in terms of communication cost. This has led to constructions of function-specific multiparty protocols that attempt to handle a specific functionality in an efficient manner, while still providing privacy to the parties [1][5][6][7][15]. However, ignoring standardization brings a correctness problem in many cases, depending on the implementation. For example, in a multi-attribute database, if the values of the variables are measured on different scales, it is likely that some of the variables will take very large values, and hence the distance between two cases on such a variable can be a big number. Other variables may be small in value, or not vary much between cases, in which case the difference in this variable between the two cases will be small. Hence the distance metrics considered above depend on the choice of units for the variables involved: the variables with high variability will dominate the metric. Therefore, we need data standardization to force the attributes to have a common value range. In this paper, we propose secure data variable standardization schemes for two-party k-means clustering. Our construction is based on homomorphic encryption. This paper is organized as follows: In Section 2, we introduce the cryptographic primitives used in our proposal. In Section 3, we give a brief review of how to do variable standardization for k-means clustering. In Section 4, we propose two protocols for privacy-preserving standardization for two-party k-means clustering. Finally, we draw a conclusion in Section 5.
2 Preliminaries

2.1 Cryptographic Primitives
Homomorphic Encryption: In our protocol, we require a homomorphic encryption scheme satisfying E(a) · E(b) = E(a + b), where E is the encryption function and · and + denote modular multiplication and addition, respectively. It also follows that E(a)^c = E(a · c) for c ∈ N. The Paillier cryptosystem [11] is a proper
scheme which has this property and is the cryptosystem of our choice to construct a secure protocol. All multiplications in this section are with a modulus N^2, where N is the Paillier composite.

Oracle-aided Protocols: Informally, consider oracle-aided protocols, where the queries are supplied by both parties. The oracle answer may be different for each party depending on its definition, and may also be probabilistic. An oracle-aided protocol is said to privately reduce g to f if it privately computes g when using the oracle functionality f. An oracle-aided protocol is a protocol augmented by two things: (1) pairs of oracle-tapes, one for each party; and (2) oracle-call steps. An oracle-call step proceeds by one party sending a special oracle-request message to the other party. Such a message is typically sent after the party has written a string, its query, to its write-only oracle-tape. In response the other party writes its own query to its write-only oracle-tape and responds to the requesting party with an oracle-call message. At this point the oracle is invoked and the result is that a string is written onto the read-only oracle-tape of each party. Note that these strings may not be the same. This pair of strings is the oracle answer. In an oracle-aided protocol, oracle-call steps are only ever made sequentially, never in parallel.

Oblivious Polynomial Evaluation (OPE): Oblivious Polynomial Evaluation (OPE) is one of the fundamental cryptographic techniques. It involves a sender and a receiver. The sender's input is a polynomial Q(x) of degree k over some field F and the receiver's input is an element z ∈ F. The receiver learns Q(z). It is quite useful for constructing protocols which enable keyword queries while providing privacy for both parties: namely, (1) hiding the queries from the database (client privacy) and (2) preventing the clients from learning anything but the results of the queries (server privacy).
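To make the homomorphic properties E(a) · E(b) = E(a + b) and E(a)^c = E(a · c) concrete, here is a toy Paillier sketch (tiny hard-coded primes, no security hardening, Python 3.8+ for the modular inverse); it is purely illustrative and not the protocol construction used in this paper.

```python
import math, random

def keygen(p, q):                                  # toy parameters; never use in practice
    n, n2 = p * q, (p * q) ** 2
    g = n + 1
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)        # lcm(p-1, q-1)
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)              # L(g^lam mod n^2)^-1 mod n
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

pub, priv = keygen(1009, 1013)
a, b, c = 12, 30, 5
n2 = pub[0] ** 2
assert decrypt(pub, priv, encrypt(pub, a) * encrypt(pub, b) % n2) == a + b   # E(a)*E(b) = E(a+b)
assert decrypt(pub, priv, pow(encrypt(pub, a), c, n2)) == a * c              # E(a)^c = E(a*c)
```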
2.2 Problem Formulations
In this paper, we assume that there are two parties, each possessing its own private database. They want to obtain the common benefit of performing clustering analysis on the joint databases. However, perhaps due to legal restrictions or business secrets, both are concerned about the privacy of their own databases. For that reason, they need a privacy-preserving system to execute the joint data variable standardization for k-means clustering analysis. Since the concern is solely that values associated with an individual entity not be released (e.g., personal or sensitive information), the techniques must focus on protecting such information.

Secure Approximation Construction: Secure approximation is quite useful in secure multi-party computation to construct an efficient and secure computation on the private inputs. We give an approximation to a deterministic function f. Feigenbaum et al. [3] give some important concepts about secure approximation between parties. We need an approximation function f̂ with respect to the target function f that does not reveal any input information of f. Let f(x) be as above and f̂(x) be its randomized approximation function.
Then f̂ is functionally t-private with respect to f if there is an efficient randomized algorithm S, called a simulator, such that for every x and 1 ≤ i_1, ..., i_t ≤ m, S((i_1, x_{i_1}), ..., (i_t, x_{i_t}), f(x)) is identically distributed to f̂(x). In this paper, we use the secure approximation technique to compute the mean of the data objects. We say that f̂ is functionally private with respect to f if there exists a probabilistic, polynomial-time algorithm S such that

$S(f(x_1, x_2)) \equiv \hat{f}(x_1, x_2)$

where ≡ denotes computational indistinguishability. We also define the accuracy of the approximation of a deterministic function f: we say that f̂ is an ε-approximation of f if, for all inputs (x_1, x_2), $|f(x_1, x_2) - \hat{f}(x_1, x_2)| < \varepsilon$.

Data Setting: We assume a multivariate database D = {d_1, ..., d_n} which consists of n objects, where each data object d_i has m attributes. So we take each object d_i as a vector d_i = (x_{i,1}, ..., x_{i,m}), where x denotes an attribute variable. Among the m attributes, we assume that there are l numeric attributes and m − l non-numeric attributes. We assume that Party A holds a database D_A = {d_1^A, ..., d_n^A} as mentioned above, and Party B holds a database D_B = {d_1^B, ..., d_n^B}.

Security in the Semi-Honest Model: We say that a real protocol that is run by the parties (in a world where no trusted party exists) is secure if no adversary can do more harm in a real execution than in an execution that takes place in the ideal world. Consider the probability space induced by the execution of π on input x = (a, b) (induced by the independent choices of the random inputs r_A, r_B). Let view_A^π(x) (resp., view_B^π(x)) denote the entire view of Alice (resp., Bob) in this execution, including her input, random input, and all messages she has received. Let output_A^π(x) (resp., output_B^π(x)) denote Alice's (resp., Bob's) output. Note that the above four random variables are defined over the same probability space. We say that π privately computes a function f if there exist probabilistic, polynomial-time simulators S_A and S_B such that:

$\{(S_A(a, f_A(x)), f_B(x))\} \equiv \{(VIEW_A^{\pi}(x), OUTPUT_B^{\pi}(x))\}$   (1)

$\{(f_A(x), S_B(b, f_B(x)))\} \equiv \{(OUTPUT_A^{\pi}(x), VIEW_B^{\pi}(x))\}$   (2)

where ≡ denotes computational indistinguishability, which means that no probabilistic polynomial-time algorithm used by an adversary can distinguish the probability distributions of the two random strings.
3 Standardization in K-Means Clustering
Standardization is used to ensure that the variables in a similarity calculation make an equal contribution to the computed similarity value. Malicious attackers will constantly look for and exploit weaknesses in computer systems in order to attack or break these data mining services. In this section, we provide two secure protocols to prevent such attacks. First of all, we give a brief review of data standardization; more details can be found in [14].
3.1 Z-Score Standardization
Letting X represent the N × V data matrix as stated previously, data transformed by the traditional z-score method will be denoted as Z^(1), where the transformation of the i-th observation measured on the j-th variable is given by

$z_{ij}^{(1)} = \frac{x_{ij} - \bar{x}_j}{\sqrt{Var(x_j)}}$   (3)
The z-score method addresses the differential scale of the original variables by transforming the variables to have unit variance; however, the z-score method places no specific restrictions on the ranges of the transformed variables. An important aspect to notice about the Z^(1) standardization is that the larger the variance of a variable, the larger the value of its divisor in the standardization process. However, from Equation 3, it was shown that (given the same range), the larger the variance of the variable, the more cluster structure is present. Thus, it appears that the Z^(1) method is penalizing variables with more cluster structure by equating the variance across variables, regardless of cluster structures. Furthermore, if the standardization procedure is thought of as a weighting scheme where each transformed variable is represented by the original variable multiplied by the reciprocal of the variable's standard deviation, the variables with the most cluster structure will be weighted least. Subsequently, standardization by Z^(1) has been shown actually to degrade true cluster recovery.
3.2 Range Standardization
On the other hand, the recommended method of standardizing variables in cluster analysis is by the range, denoted by Z^(2) and computed as

$z_{ij}^{(2)} = \frac{x_{ij}}{R(x_j)}$   (4)
where R(x_j) = max(x_j) − min(x_j). The Z^(2) standardization procedure fixes all the transformed variables to have a range of unity but does not place any restrictions on the variances of the transformed variables. However, given that the range is fixed to have a value of one, if the transformed variable is centered at zero, then min(z_j^(2)) = -0.50 and max(z_j^(2)) = 0.50, yielding a maximal value for the variance of 0.25 (assuming an equal number of observations at the upper and lower limits of Z^(2)). Although arrived at differently, this value corresponds to that provided by Milligan and Cooper (1988). The comparatively better performance (according to cluster recovery) of Z^(2) compared with Z^(1) can be attributed to the fact that the Z^(2) procedure does not constrain the variance to be equal across variables. However, the limited maximum variance allowed by the Z^(2) method limits the ability of standardizing by the range to differentially weight variables with varying degrees of cluster information.
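The two (non-private) standardizations can be written compactly as below; X is assumed to be a NumPy matrix with rows as entities and columns as variables, and the example columns are hypothetical.

```python
# Z^(1): z-score (unit variance per column); Z^(2): range standardization (unit range).
import numpy as np

def z_score(X):
    # subtract the column mean and divide by the column standard deviation
    return (X - X.mean(axis=0)) / X.std(axis=0, ddof=0)

def range_standardize(X):
    # divide each column by its range R(x_j) = max(x_j) - min(x_j)
    return X / (X.max(axis=0) - X.min(axis=0))

X = np.column_stack([np.random.uniform(20, 70, 100),           # e.g., age
                     np.random.uniform(20_000, 90_000, 100)])  # e.g., income
print(z_score(X).std(axis=0))                 # ~[1, 1]: unit variance per column
print(np.ptp(range_standardize(X), axis=0))   # ~[1, 1]: unit range per column
```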
4 Private Standardization Protocol

4.1 Privacy-Preserving Z-Score Standardization Protocol
In this protocol, we have to solve the problem of private computation of the mean and standard deviation. We assume that Party A has n_1 data entries in its database and B has n_2 data entries. Let the data held by Party A be $d^A = \sum_{i=1}^{n_1} d_i^A$ and the data held by Party B be $d^B = \sum_{i=1}^{n_2} d_i^B$. The mean is computed as

$M = \frac{d^A + d^B}{n_1 + n_2}$

so that the standard deviation is computed as

$\sigma = \sqrt{\frac{1}{n_1 + n_2}\left(\sum_{i=1}^{n_1}(d_i^A - M)^2 + \sum_{i=1}^{n_2}(d_i^B - M)^2\right)}$

Therefore, if we can compute the mean M securely, the privacy-preserving problem will be solved. Here, we use an oracle-aided protocol to compute the mean M. After that, the standardized data can be used in the following computation. The details of the secure approximation for computing the mean are given in the Protocol for Private Mean Computation below.

Therefore, we apply the secure mean computation protocol to perform the data standardization calculation for the cluster update privately. The new mean should be $M = (d_i^A + d_i^B)/(n_1 + n_2)$. It can be computed privately using the oracle-aided protocol for a private $2^{-t}$-approximation over the finite field $F_p$ proposed by Kiltz et al. [8]. We will use the oracle-aided protocol to make a secure computation. In general, an approximation requirement could be any binary relation between a deterministic real-valued function f, called the target function, and a possibly randomized real-valued function f̂, called the approximating function, that defines which functions are considered good approximations. For the secure approximation computation, we use the result proposed in [8] together with oblivious polynomial evaluation (OPE), proposed in [10], to do the clustering update computation. The input of the sender is a polynomial P of degree k over some field F. The receiver can get the value P(x) for any element x ∈ F without learning anything else about the polynomial P and without revealing to the sender any information about x. Our construction uses an OPE method based on homomorphic encryption (such as Paillier's system [11]) in the following way. We first introduce this construction in terms of a single database bin.

– The client's input is a polynomial of degree m, $P(w) = \sum_{i=0}^{m} a_i w^i$. The server's input is a value w.
– The server sends to the client homomorphic encryptions of the powers of w up to the m-th power, i.e., Enc(w), Enc(w^2), ..., Enc(w^m).
– The client uses the homomorphic properties to compute $\prod_{i=0}^{m} Enc(a_i w^i) = Enc(\sum_{i=0}^{m} a_i w^i) = Enc(P(w))$. The client sends this result back to the server.
According to the proposal of Kiltz et al. [8], we make one of the players (A or B) take the role of the sender and make use of the OPE-oracle. Our protocol clearly runs in a constant number of communication rounds between the two parties. The complexity of our protocol depends chiefly on the accuracy of the result, which corresponds to the length d of the Taylor expansion. We can compute as follows. We will assume that there are some publicly known values $N_1$ and $N_2$ such that $0 \le s_j^A + s_j^B \le 2^{N_1}$ and $0 < n_j + m_j < 2^{N_2}$.
Protocol for Private Mean Computation of $(d_i^A + d_i^B)/(n_1 + n_2)$:

Set $d = \frac{3}{2}(t + N_1 + 2)$.
1. Party A and Party B input $n_1$, $n_j$, respectively. The oracle returns additive shares $a_1^F$, $a_2^F$ of Z over $F_p$.
2. Party A and Party B input $a_1^F$ and $a_2^F$ respectively. The oracle returns additive shares $a_1^I$, $a_2^I$ of Z over the integers.
3. For $j = 2, ..., N_2 + 1$:
   – Party A computes $b_{1,j} = a_1^I / 2^j$
   – Party B computes $b_{2,j} = a_2^I / 2^j$
4. The parties locally convert their shares back into additive $F_p$ shares $c_{i,j}$, where share $c_{i,j}$ is the $F_p$ equivalent of the integer share $b_{i,j}$.
5. Party A and Party B input $n_1$ and $n_2$ respectively. The oracle returns additive shares $t_1$, $t_2$ of k over $F_p$.
6. Party A chooses $e_1$ at random from $F_p$ and defines the polynomial $R_1(X) = \sum_{i=2}^{N_2+1} c_{1,i} P_i(t_1 + X) - e_1$. Party A runs an OPE protocol with Party B so that B learns $e_2 = R_1(t_2)$ and A learns nothing.
7. Party B chooses $f_2$ at random from $F_p$ and defines the polynomial $R_2(X) = \sum_{i=2}^{N_2+1} c_{2,i} P_i(t_2 + X) - f_2$. Party B runs an OPE protocol with A so that A learns $f_1 = R_2(d_1)$ and B learns nothing.
8. Party A and Party B input $e_1 + f_1$ and $e_2 + f_2$ respectively. The oracle returns multiplicative shares $g_1$, $g_2$ of $e_1 + f_1$ and $e_2 + f_2$ over $F_p$.
9. Party A inputs $d_i^A$ and B inputs $d_i^B$. The oracle returns multiplicative shares $h_1$, $h_2$ of $s_j^A + s_j^B$ over $F_p$.
10. Party A computes $\hat{M}_1 = g_1 h_1 \cdot 2^{-N_2}$ and sends it to B, and Party B computes $\hat{M}_2 = g_2 h_2 \cdot 2^{-N_2}$.
11. Both A and B can compute the approximated output $\hat{M} = \hat{M}_1 \hat{M}_2$.

With this protocol, we can compute the approximated $\hat{M}$ with $\left|\frac{s_j^A + s_j^B}{n_j + m_j} - \hat{M}\right| \le 2^{-t}$ for the clustering update privately, where t is an approximation and security parameter.
4.2 Security Proof
The simulator S runs an internal copy of A, forwarding all messages from Z to A and vice versa.

Simulating the case that no party is corrupted: In this case, S receives a message (sid) signaling that party A and party B concluded an ideal execution with the ideal functionality or trusted third party F. The simulator S then generates a simulated transcript of the messages between the real-model parties as in the real protocol above.

Simulating the case that either one party is corrupted: Because the protocol executions for the two parties are symmetric, we only have to show the case where one party is corrupted.
– Whenever receiving m and $e_1$ from A, S generates a random $e_2$ and sends it to party B as if B received it from the OPE.
– S sends $f_1$ to A as if A received it from the OPE.
– Whenever receiving $e_1 + f_1$ from A, S sends $g_2$ to party B, as if it received it from the real protocol execution.
– Whenever receiving the share $\hat{M}_1$ from A, S generates $\hat{M}_2$ and outputs the approximated result.
4.3 Privacy-Preserving Range Standardization Protocol
Computing the range of the joint database $D_A \cup D_B$ requires the maximum and minimum values in the joint database. The major concern in this problem is how to compute the range without revealing each other's private information. In the case that one party holds the maximum value and the other holds the minimum one, there is no way to preserve privacy, because the range has to be shared after the computation. Only in the case that one party has both the maximum and the minimum values can privacy be preserved without revealing the two values. Therefore, we only focus on how to find the maximum and the minimum values in the two parties' databases without revealing the values. The protocol is executed as follows:
Privacy-Preserving Range Standardization Protocol
1. Each party finds the maximum and the minimum values in its local database.
2. The two parties compare each other's maximum values and minimum values.
3. If one party holds both the maximum value and the minimum value, it outputs the range, which is the range of the joint database $D_A \cup D_B$.
4. After getting the range of the joint database, the standardization can be executed by each party locally.
Here, we can reduce the comparison problem to Yao's millionaire problem, which is the problem of computing the predicate a ≥ b without disclosing anything more than the result to either party. This problem can be formulated as a comparison of two encrypted messages without revealing them. Any distributed additive homomorphic encryption algorithm can be employed in this ciphertext comparison. We can employ a secure solution to Yao's millionaire problem such as the scheme proposed by Peng et al. [12], which provides a correct, private, verifiable and efficient solution. Suppose two L-bit messages m_1 and m_2 encrypted in c_1 and c_2 respectively are to be compared. The main idea of the comparison is to compare F(m_1) and F(m_2), where F() is a monotonically increasing one-way function. Based on this idea, a comparison technique Com(c_1, c_2) can be designed such that Com(c_1, c_2) = 1 if m_1 > m_2; Com(c_1, c_2) = 0 if m_1 = m_2; Com(c_1, c_2) = −1 if m_1 < m_2.
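The control flow of the range protocol can be sketched as follows; the secure_compare placeholder stands in for a Com(c1, c2)-style comparison sub-protocol such as [12], and in this illustrative sketch the comparisons are done in the clear, which a real deployment must not do.

```python
# Control-flow sketch of the range standardization protocol (comparisons in the
# clear via a placeholder; names are illustrative, not the paper's implementation).
def secure_compare(a, b):
    # placeholder for a Com(c1, c2)-style sub-protocol: 1 if a > b, 0 if equal, -1 if a < b
    return (a > b) - (a < b)

def joint_range(db_a, db_b):
    # step 1: each party finds its local extremes
    max_a, min_a = max(db_a), min(db_a)
    max_b, min_b = max(db_b), min(db_b)
    # step 2: compare the extremes with the (placeholder) secure comparison
    a_has_max = secure_compare(max_a, max_b) >= 0
    a_has_min = secure_compare(min_a, min_b) <= 0
    # step 3: only if one party holds both extremes can it output the range
    if a_has_max and a_has_min:
        return max_a - min_a
    if not a_has_max and not a_has_min:
        return max_b - min_b
    return None   # extremes split between the parties: the range cannot be shared privately

# step 4 (per party, once the range is known): x -> x / R, as in Eq. (4)
print(joint_range([3.0, 8.0, 15.0], [5.0, 9.0]))   # -> 12.0 (party A holds both extremes)
```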
5 Conclusions
Privacy-preserving problems in distributed k-means clustering have gained much attention from researchers in recent years. In this paper, we have pointed out that the existing schemes can cause correctness problems when standardization is not considered in the implementation or application. To solve this problem, we proposed two privacy-preserving schemes for standardization between two parties.
References 1. Bunn, P., Ostrovsky, R.: Secure two-party k-means clustering. In: Proc. of the 14th ACM conference on Computer and communications security, pp. 486–497 (2007) 2. Chu, C.W., Holliday, J., Willett, P.: Effect of data standardization on chemical clustering and similarity searching. Journal of Chemical Information and Modeling (2008) 3. Feigenbaum, J., Ishai, Y., Malkin, T., Nissim, K., Strauss, M., Wright, R.: Secure multiparty computation of approximations. In: Proc. of 28th International Colloquium on Automata, Languages and Programming, pp. 927–938 (2001) 4. Goldreich, O., Micali, S., Wigderson, A.: How to play any mental game or a completeness theorem for protocols with honest majority. In: Proc. of the Nineteenth Annual ACM Symposium on Theory of Computing, pp. 218–229 (1987) 5. Jha, S., Kruger, L., McDaniel, P.: Privacy preserving clustering. In: de di Vimercati, S.C., Syverson, P.F., Gollmann, D. (eds.) ESORICS 2005. LNCS, vol. 3679, pp. 397–417. Springer, Heidelberg (2005) 6. Jagannathan, G., Pillaipakkamnatt, K., Wright, R.N.: A new privacy-preserving distributed k-clustering algorithm. In: Proc. of the 2006 SIAM International Conference on Data Mining, SDM (2006) 7. Jagannathan, G., Wright, R.: Privacy-preserving distributed k-means clustering over arbitrarily partitioned data. In: Proc. of the 11th International Conference on Knowledge Discovery and Data Mining, KDD (2005)
8. Kiltz, E., Leander, G., Malone-Lee, J.: Secure computation of the mean and related statistics. In: Kilian, J. (ed.) TCC 2005. LNCS, vol. 3378, pp. 283–302. Springer, Heidelberg (2005) 9. MacQueen, J.: Some methods for classification and analysis of multivariate observations. In: Proc. of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, pp. 281–297. University of California Press, Berkeley (1967) 10. Naor, M., Pinkas, B.: Oblivious transfer and polynomial evaluation. In: 31st ACM Symposium on Theory of Computing, pp. 245–254. ACM Press, New York (1999) 11. Paillier, P.: Public-key cryptosystems based on composite degree residuosity classes. In: Stern, J. (ed.) EUROCRYPT 1999. LNCS, vol. 1592, p. 223. Springer, Heidelberg (1999) 12. Peng, K., Boyd, C., Dawson, E., Lee, B.: An efficient and verifiable solution to the millionaire problem. In: Park, C.-s., Chee, S. (eds.) ICISC 2004. LNCS, vol. 3506, pp. 51–66. Springer, Heidelberg (2005) 13. Rakhlin, A., Caponnetto, A.: Stability of k-means clustering. In: Proc. of Neural Information Processing Systems Conference (2006) 14. Schaffer, C.M., Green, P.E.: An empirical comparison of variable standardization methods in cluster analysis. Multivariate Behavioral Research 31(2), 149–167 (1996) 15. Vaidya, J., Clifton, C.: Privacy-preserving k-means clustering over vertically partitioned data. In: Proc. of the 9th ACM SIGKDD Intl. Conf. on Knowledge Discovery and Data Mining, USA (2003)
Workshop Organizers' Message
Dickson K.W. Chiu 1 and Yi Zhuang 2
1 Dickson Computer Systems, Hong Kong, China
2 Zhejiang Gongshang University, China
The recent advancement of workflow technologies and the adoption of the Service-Oriented Architecture (SOA) have greatly facilitated the automation of business collaboration within and across organizations, increasing their competitiveness and responsiveness to the fast-evolving global economic environment. The widespread use of mobile technologies has further resulted in an increasing demand for the support of Mobile Business Collaboration (MBC) across multiple platforms anytime and anywhere. Examples include supply-chain logistics, group calendars, and dynamic human resources planning. As mobile devices become more powerful, the adoption of mobile computing is imminent. However, mobile business collaboration is not merely porting the software with an alternative user interface; it involves a wide range of new requirements, constraints, and technical challenges. The 1st International Workshop on Mobile Business Collaboration (MBC'09) sought scientists, engineers, educators, industry people, policy makers, decision makers, and others who had insight, vision, and understanding of the big challenges in this emerging field. We received 10 submissions, of which 5 papers were accepted, for an acceptance rate of 50%. The workshop was successfully held in conjunction with DASFAA'09 on April 21, 2009. Finally, we thank the reviewers for their insightful comments and the generous support of the DASFAA conference organizers. This workshop was sponsored by Zhejiang Gongshang University.
A Decomposition Approach with Invariant Analysis for Workflow Coordination*
Jidong Ge 1,2 and Haiyang Hu 2,3
1 Software Institute, Nanjing University, China, 210093
2 State Key Laboratory for Novel Software Technology, Nanjing University, China, 210093
3 College of Computer and Information Engineering, Zhejiang Gongshang University, China, 310018
[email protected]
Abstract. In order to coordinate different workflows from different organizations, this paper applies a model called Interaction-Oriented Petri Nets (IOPN). This model adopts the process interaction between transitions (workflow actions) from different workflows, to coordinate different workflow processes. To assure workflow process and workflow coordination can be executed correctly and completely, soundness and relaxed soundness are important properties to be considered. When the IOPN system becomes larger and larger, the state space of the coordination system becomes too large to analyze. To avoid the state space explosion problem, decomposition approach based on invariant analysis can be a complement analysis technique. IOPN model can be decomposed into a set of sequence diagrams when it is relaxed sound. Keywords: Workflow, Petri nets, Invariants, Sequence diagram.
1 Introduction Workflow technology is often applied to manage enterprise’s business processes. Under the fierce competition, the enterprises are facing the pressure to reduce the time and the cost for the market of their product and services. This requirement needs the enterprises to integrate the global resources, information and workflows. Today’s enterprises become larger and larger, and with many geographically distributed organizations. For efficiently managing their business, different organizations define their workflows themselves separately, and provide interfaces to other workflows from other organizations so that different organizations can collaborate and cooperate with each other through workflow coordination. Internet computing is a new trend for enterprise distributed computing to enhance its competence. Internet environments allow a set of heterogeneous resources both within and outside of organizations to be virtualized and form a large virtual computer or virtual organization. In Internet environments, serviceoriented computing is an important paradigm. Services are independent, distributed, *
This work was supported by 863 Program of China (2006AA01Z159, 2006AA01Z177, 2007AA01Z178, 2007AA01Z140), NSFC (60721002, 60736015, 60403014, 60603034), NSFJ (BK2006712), and the Seed Funding of Nanjing University.
cooperative components, which can be composed into larger services or applications by workflow coordination languages. Collaboration and cooperation between distributed entities are frequently becoming essential requirements. Automatic composition of workflows in Internet environments is an important challenge [9]. Workflow coordination technology can compose the existing services and build the larger services. For coordinating different workflows from different organizations, we apply a model called Interaction-Oriented Petri Nets (IOPN). As a workflow coordination model, IOPN adopts the process interactions between transitions (workflow actions) from different workflows, to coordinate different workflow processes. To assure workflow process and workflow coordination can be executed correctly and completely, soundness and relaxed soundness are important properties to be considered [1, 6, 7]. This paper presents a decomposition approach with invariant analysis to check the relaxed soundness property of IOPN in Internet environments. The rest of the paper is organized as follows. Section 2 introduces related work about workflow coordination. Section 3 presents a workflow coordination model called IOPN. Section 4 considers soundness and relaxed soundness as the correction criterion of workflow process and workflow coordination. Section 5 introduces the invariants of Petri nets and workflow nets. Section 6 proposes a decomposition approach from IOPN into sequences diagrams when the IOPN is relaxed soundness. Finally, we conclude and make an outlook to the further work in Section 7.
2 Related Work Ordinarily, workflow model has three dimensions: process dimension, case dimension and resource dimension. Focusing on the process dimension, Aalst introduces Petri net to model workflow process, and defines the workflow net (WF-net) model and soundness property as important correctness criterion for workflow process model [1]. What’s more, Aalst extends the WF-net into IOWF-net to support inter-organizational workflow modeling and applies the theory of behavioral inheritance to IOWF-net [2, 3, 4, 16]. Workflow and service composition are both important technologies for Internet based electronic commerce applications [7]. There are some approaches for service composition related to workflow technology [18]. Petri net-based algebra can be used to model service control flows, as a necessary constituent of reliable Web service composition process [13]. A meta-model of workflow views presented in [5] can support the interoperability of multiple workflows across business organizations. Petri nets and MSCs can be combined to model process interaction between autonomous computing entities [14, 15, 17]. From these related work, we can see that describing interaction is an important approach to model workflow coordination. As a collaborative workflow, inter-organizational business process modeling needs more flexibility to adapt to the change of inter-organizational business process. Soundness property [1] is too strict for IOPN to impede the flexibility of interorganizational process modeling [10]. Dehnert proposes the relaxed soundness concept [7] to increase the flexibility of inter-organizational process modeling. Relaxed soundness property ensures that each transition in the workflow net has possibility to be fired, while there are some dead markings in the reachability graph. Relaxed
soundness is defined as weaker conditions compared to soundness concept. Dehnert also presents Robustness Algorithm to transform a relaxed sound workflow net into a sound workflow net [7]. Relaxed soundness property is more practical for interorganizational process modeling than soundness property.
3 Modeling Workflow Coordination with Process Interaction As a formal modeling tool with graphical notations and good intuitions, Petri net is widely used to model the behavior of workflow systems [1, 8, 12, 19]. A Petri net is a 3-tuple PN = ( P, T , F ) , P is a finite set of places, T is a finite set of transitions, P ∩ T = φ , F = ( P × T ) ∪ (T × P) is a set of arcs. A Petri net PN = ( P, T , F ) is a WF-net iff: PN has two special places: i and o, •i = φ and o• = φ . If we add a transition t * to PN such that •t* = {o} and t * • = {i} , then the resulting extended Petri net PN * = ( P, T ∪ {t*}, F ∪ {(o, t*), (t*, i )} is strongly connected. [i] and [o] denote the initial state and final state of WF-net respectively. WF-net model based on single Petri net has powerful expression, but in the largescale application, there are many organizations with their workflows. Composition is an important approach to build large-scale model. Usually, different organizations define their workflows themselves separately, and provide interfaces to other workflows from other organizations so that different organizations can collaborate and cooperate with each other through workflow coordination in Internet environments. For coordinating different workflows from different organizations, we apply a model called Interaction-Oriented Petri Nets (IOPN). IOPN includes a set of workflow nets and their interactions by asynchronous communication between transitions. Each organization’s workflow can be modeled by each WF-net (also can be called object net), the coordination between different workflows can be described by the process interaction between transitions from different workflow nets. So, under the process interaction, the workflows from different organizations can be coordinated. IOPN provides a loosely coupled paradigm for workflow coordination in Internet environments. 3.1 Interaction-Oriented Petri Nets (IOPN) Definition 1 (IOPN, Interaction-Oriented Petri Nets) IOPN = (ON S , ρ ) (1) ON S = {ON1 , ON 2 ,..., ON n } is the finite set of Petri nets. The element in ON S is called object net. Each object net describes the workflow from different organizations separately and has interfaces to interact with other workflows from other organizations. (2) ON k = ( Pk , Tk , Fk ) is an object net with labeled with k, which is the element of ON S . ON k is a WF-net, ON k .i is the initial state and ON k .o is the final state of ON k . Each object net ON k in IOPN is peer to peer. n
(3) ρ ⊆ ⋃_{k=1}^{n} ⋃_{j=1}^{n} (Tk × Tj), with 1 ≤ k, j ≤ n and k ≠ j, is the process interaction set.
(a1, b1) ∈ ρ ∧ (a1 ∈ Tk) ∧ (b1 ∈ Tj) is an element of ρ. (a1, b1) ∈ ρ means a process interaction message from ONk to ONj; a1 is the sending action of this message
and b1 is the receiving action of this message. The actions participating in the interactions provide the interfaces to the other workflows. The process interaction can be implemented by the approach of message passing. (4) There is a constraint for the process interaction. If transition a1 is in an interaction relation (a1 , b1 ) ∈ ρ , and a1 is the sending action, then b1 is the single receiving action, denoted by b1 = ρ out (a1 ) . If transition a1 is in an interaction relation (b1 , a1 ) ∈ ρ , and a1 is the receiving action, then b1 is the single sending action, denoted by b1 = ρ in (a1 ) . □
Fig. 1. Shared place of the process interaction between the two actions
Definition 2 (Shared Place in the Process Interaction). Let (a1, b1) ∈ ρ ∧ (a1 ∈ Tk) ∧ (b1 ∈ Tj). In the Petri net semantics of the process interaction, a shared place p(a1,b1) is added (see Figure 1), together with two arcs (a1, p(a1,b1)) and (p(a1,b1), b1), so that the place p(a1,b1) connects the two transitions from a1 to b1. The total shared place set is denoted by Pshared = {p(a1,b1) | (a1, b1) ∈ ρ ∧ (a1 ∈ Tk) ∧ (b1 ∈ Tj) ∧ (k ≠ j)}. □
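A small sketch of how the shared places of Definition 2 can be materialized; the tuple encoding of places and arcs is an assumption made for illustration, not a representation prescribed by the paper.

def add_shared_places(rho):
    """rho: set of (sending, receiving) transition pairs.
    Returns the shared places and the arcs connecting them (Definition 2)."""
    shared_places, arcs = set(), set()
    for (a, b) in rho:
        p = ("p", a, b)               # the shared place p_(a,b)
        shared_places.add(p)
        arcs.add((a, p))              # arc from the sending action a to p
        arcs.add((p, b))              # arc from p to the receiving action b
    return shared_places, arcs

rho = {("a1", "b1"), ("b2", "c1"), ("c2", "b3"), ("b4", "a2"),
       ("a3", "b5"), ("a4", "c3"), ("b6", "a5"), ("c4", "a6")}
places, arcs = add_shared_places(rho)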
Fig. 2. IOPN for global workflow model with the interaction set
As an example, there are three workflows modeled by separate object nets, and the interaction set is ρ = {(a1, b1), (b2, c1), (c2, b3), (b4, a2), (a3, b5), (a4, c3), (b6, a5), (c4, a6)}; the resulting global workflow model is shown in Figure 2. After coordination with the process interactions, the workflows are composed into a larger workflow.
3.2 Firing Rules of IOPN
Definition 3 (Firing Rules and State Representation of IOPN). Let IOPN = (ONS, ρ), ONS = {ON1, ON2, ..., ONn}, ONk = (Pk, Tk, Fk). The ONk are the basic components of the IOPN system and ONk.M denotes the local state of ONk, so the global state of IOPN can be denoted by IOPN.M = ON1.M + ... + ONk.M + ... + ONn.M + Mρ, where Mρ is the marking of the shared places Pshared = {p(a1,b1) | (a1, b1) ∈ ρ} of the process interactions. ONk.i and ONk.o denote the local initial state and local final state of ONk respectively. The initial marking of IOPN is denoted by IOPN.init = ON1.i + ... + ONk.i + ... + ONn.i, and the final marking of IOPN is denoted by IOPN.final = ON1.o + ... + ONk.o + ... + ONn.o. For a transition a1 ∈ Tk, if a1 ∈ enabled(ONk.M + Mρ) then a1 ∈ enabled(IOPN.M). When firing transition a1, there are three cases:
(1) If there is neither (a1, b1) ∈ ρ nor (b1, a1) ∈ ρ, i.e., a1 has no interaction with any other transition, then IOPN.M —a1→ IOPN.M', i.e., ONk.M —a1→ ONk.M', and IOPN.M' = ON1.M + ... + ONk.M' + ... + ONn.M + Mρ.
(2) If there is a pair interaction (a1, b1) ∈ ρ, then IOPN.M —a1→ IOPN.M', i.e., ONk.M + Mρ —a1→ ONk.M' + Mρ + p(a1,b1), and IOPN.M' = ON1.M + ... + ONk.M' + ... + ONn.M + Mρ + p(a1,b1).
(3) If there is a pair interaction (b1, a1) ∈ ρ, then IOPN.M —a1→ IOPN.M', i.e., ONk.M + Mρ —a1→ ONk.M' + Mρ − p(b1,a1), and IOPN.M' = ON1.M + ... + ONk.M' + ... + ONn.M + Mρ − p(b1,a1). □
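A minimal sketch of the three firing cases above, assuming markings are encoded as multisets (collections.Counter) and the interaction set is given as two maps from sending to receiving actions and back; the encoding and the omission of the enabling check are simplifications of this sketch.

from collections import Counter

def fire(marking, t, consume, produce, rho_out=None, rho_in=None):
    """consume/produce: places read/written by t in its own object net."""
    m = Counter(marking)
    m.subtract(Counter(consume))          # local effect of firing t
    m.update(Counter(produce))
    if rho_out and t in rho_out:          # case (2): t is a sending action
        m[("p", t, rho_out[t])] += 1      # token added to the shared place
    if rho_in and t in rho_in:            # case (3): t is a receiving action
        m[("p", rho_in[t], t)] -= 1       # token consumed from the shared place
    return m                              # case (1) is the default (no interaction)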
4 Soundness Property as Coordination Correction Criterion of IOPN
The IOPN model can coordinate different workflows from different organizations in Internet environments, but an erroneous model or interaction design may prevent the workflow coordination from being executed correctly. To assure that the workflows can be coordinated correctly and completely, the soundness [1] and relaxed soundness [6, 7] properties must be considered; they serve as the coordination correctness criterion of IOPN. The soundness and relaxed soundness properties of WF-nets and IOPN have formal definitions, so they can be verified by checking algorithms [12].
Definition 4 (Soundness of WF-net) [1]. A process modeled by a WF-net PN = (P, T, F) is sound if and only if:
(1) ∀M: ([i] →* M) ⇒ (M →* [o]);
(2) ∀M: ([i] →* M ∧ M ≥ [o]) ⇒ (M = [o]);
(3) ∀t ∈ T: ∃M, M' such that [i] →* M —t→ M'. □
Essentially, the soundness property is a combination of the liveness property and the boundedness property of a Petri net. In other words, a workflow net PN is sound if and only if the extended workflow net PN* = (P, T ∪ {t*}, F ∪ {(o, t*), (t*, i)}) is live and bounded [1]. The example in Figure 1 is a sound WF-net. Soundness is a rather strict property for workflow modeling. Sometimes relaxed soundness is introduced as a weaker correctness criterion for flexible inter-organizational workflow applications [6]. With the robustness algorithm [7], a relaxed sound workflow can be transformed into a sound workflow.
Definition 5 (Sound Firing Sequence) [6]. Let (PN, [i]) be a WF-net. A firing sequence σ: [i] —σ→ M is called sound iff there exists another firing sequence σ' such that M —σ'→ [o]. □
Definition 6 (Relaxed Soundness) [6]. A process specified by a WF-net PN = (P, T, F) is relaxed sound iff every transition is an element of some sound firing sequence: for all t ∈ T there exist M, M' such that [i] →* M —t→ M' →* [o]. □
The condition of relaxed soundness means that there exist enough sound firing sequences covering all transitions of a workflow net, i.e., each transition has the possibility to occur in some sound firing sequence. Under soundness, there is no dead state in a workflow net; compared to soundness, relaxed soundness allows some dead states. With the robustness algorithm, the dead states in a relaxed sound workflow net can be removed.
Fig. 3. An example: non-sound and non-relaxed sound IOPN
If a workflow net is sound, then it must be relaxed sound; the converse does not hold. Because the structure of IOPN differs from that of a WF-net, the soundness and relaxed soundness definitions of IOPN are also slightly different. Following the relaxed soundness of WF-nets, we define the relaxed soundness property of IOPN as the coordination correctness criterion.
Definition 7 (Relaxed Soundness of IOPN). Let IOPN = (ONS, ρ), ONS = {ON1, ON2, ..., ONn}, ONk = (Pk, Tk, Fk), and T = ⋃_{k=1}^{n} Tk. IOPN is relaxed sound if and only if, for all t ∈ T, there exist IOPN.M and IOPN.M' such that IOPN.init →* IOPN.M —t→ IOPN.M' →* IOPN.final. □
Figure 3 shows an example of non-relaxed sound IOPN, because after firing a1 in ON1 , b3 will have no possibility to be fired in ON 2 , or after firing a2 in ON1 , b2 will have no possibility to be fired in ON 2 . For checking soundness and relaxed soundness properties, in some times, the state space methods based on the reachability graph of Petri net is an available analysis technique. But, when the IOPN system becomes larger and larger, the state space [12] of the workflow coordination system becomes too large to analyze. On the other hand, decomposition approach based on invariant analysis can be a complement analysis technique to avoid the state space explosion problem, which is an important advantage of invariant analysis. Based on the invariant analysis, we propose a decomposition approach from IOPN into sequence diagrams. We will introduce invariants of Petri net and workflow net, and the relations between soundness and relaxed soundness of workflow net and its invariants in Section 5.
5 Invariants of Petri nets
There are two kinds of invariants: P-invariants (place invariants) and T-invariants (transition invariants). In this paper, we mainly apply T-invariants to analyze the workflow composition.
Definition 8 (Incidence matrix, place invariants, transition invariants)
(1) A Petri net PN = (P, T, F) can be represented by an incidence matrix PN: (P × T) → {−1, 0, 1}, defined by: PN(p, t) = −1 if (p, t) ∈ F; PN(p, t) = 0 if (p, t) ∉ F ∧ (t, p) ∉ F, or (p, t) ∈ F ∧ (t, p) ∈ F; PN(p, t) = 1 if (t, p) ∈ F.
(2) A T-invariant of a net PN = ( P, T , F ) is a rational-valued solution of the equation PN ⋅ Y = 0 . The solution set is denoted by J = {J1 , J 2 ,..., J n } . In essence, a Tinvariant J k is a T-vector, as a mapping J k : T → Ζ . A T-invariant J k is called semi-positive if J k ≥ 0 and J k ≠ 0 . A T-invariant J k is called positive if ∀t ∈ T : J k (t ) ≥ 1 . < J k > denotes the set of transitions t satisfying J k (t ) > 0 .
A Petri net is called covered by T-invariants iff there exists a positive T-invariant Jk.
(3) Minimal invariants: A semi-positive T-invariant Jk is minimal if no semi-positive T-invariant Jx satisfies Jx ⊂ Jk. Every semi-positive invariant is the sum of minimal invariants [8]. If a net has a positive invariant, then every invariant is a linear combination of minimal invariants.
(4) Fundamental property of T-invariants: Let (PN, M0) be a system. The Parikh vector of a firing sequence σ is a T-invariant iff M —σ→ M, i.e., iff the occurrence of σ reproduces the marking M.
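A small numpy sketch of the incidence matrix of Definition 8 and of the T-invariant test PN · Y = 0; the list-based indexing of places and transitions and the set encoding of F are assumptions of the sketch.

import numpy as np

def incidence_matrix(places, transitions, F):
    C = np.zeros((len(places), len(transitions)), dtype=int)
    for i, p in enumerate(places):
        for j, t in enumerate(transitions):
            if (t, p) in F:
                C[i, j] += 1          # t produces a token in p
            if (p, t) in F:
                C[i, j] -= 1          # t consumes a token from p (self-loops cancel to 0)
    return C

def is_t_invariant(C, y):
    """y is a T-invariant iff C . y = 0."""
    return not np.any(C @ np.asarray(y))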
Compared to ordinary Petri nets, a workflow net has special structural restrictions, so its invariants have some special characteristics and meanings.
Definition 9 (T-Invariants of a Workflow Net) [11]. Let PN = (P, T, F) be a workflow net and PN* = (P, T ∪ {t*}, F ∪ {(o, t*), (t*, i)}) its extended workflow net. Jk is called an LMST-invariant (Legal Minimal Semi-positive T-invariant) if Jk(t*) = 1 ∧ Jk ≥ 0 and Jk is a minimal T-invariant of PN*. An LMST-invariant Jk of PN* corresponds to an actually sound execution: there exists a firing sequence σ = u1 u2 ... un t* with ux ∈ T such that [i] —u1→ M1 —u2→ M2 —u3→ ... —u(n−1)→ M(n−1) —un→ [o] —t*→ [i]. Let π(σ) be the function that records the occurrence times of each transition over the sequence; then π(σ) = Jk, and π(σ, t) = Jk(t) denotes the number of times transition t fires in the sequence σ. Obviously, a sound firing sequence from [i] to [o] is related to an LMST-invariant. In this paper, we focus on LMST-invariants for the analysis. In IOPN, ONk.J denotes the LMST-invariant set of ONk. □
Theorem 1 [7]. Let PN = (P, T, F) be a WF-net. If (PN, [i]) is relaxed sound, then PN* = (P, T ∪ {t*}, F ∪ {(o, t*), (t*, i)}) is covered by LMST-invariants. □
In workflow net (PN, [i]), a sound firing sequence is related to a LMST-invariant of PN*. If the transitions of PN are covered by the sound firing sequences, then PN* is covered by LMST-invariants.
6 Decompose IOPN into Sequence Diagrams with Invariants Analysis IOPN model can coordinate different workflows from different organizations. To assure IOPN model can be executed correctly and completely, soundness and relaxed soundness are important properties to be considered as coordination correctness criterion. In some times, the state space of the workflow coordination system becomes
larger and larger so that the state space explosion [12] becomes too severe to analyze the model. On the other hand, decomposition approach based on invariant analysis can be a complement analysis technique to avoid the state space explosion problem, which is an important advantage of invariant analysis. Based on the invariant analysis, we propose a decomposition approach from IOPN into sequence diagrams. What’s more, we will present the logical relation between the relaxed soundness of IOPN and its decomposability into sequence diagrams. An IOPN model is relaxed sound if and only if it can be decomposed into sequence diagrams. According to Algorithm 1 and Theorem 2, it can help to checking the relaxed soundness of IOPN model. MSC (Message Sequence Charts) or sequence diagram usually are a practical and graphical tool for scenario-based modeling with formal semantics [7, 15]. In the UML (Unified Modeling Languages), sequence diagram is related to the MSC model. Sequence diagrams as scenario models can be used to describe the interaction protocols and behaviors between several objects. Definition 10 (Sequence Diagram) A sequence diagram is a tuple D = (O, E , Mes, L,W ) :
(1) O is a finite set of objects;
(2) E is a finite set of events;
(3) Mes is a finite set of messages. For any message g ∈ Mes, let g! and g? represent the sending and the receiving of g respectively. Each event e ∈ E corresponds to the sending or receiving of some message g, denoted by ϕ(e) = g! or ϕ(e) = g?;
(4) L: E → O is a labelling function which maps each event e ∈ E to an object L(e) ∈ O;
(5) W is a finite set whose elements are of the form (e, e') where e and e' are in E and e ≠ e'; it represents the visual order displayed in D. □
The events in a sequence diagram must follow the partial order constraints, i.e., there is no circuit in the order among the events. For a sequence diagram, there are several possible executable sequences according to the visual order semantics [15]. The process interactions in an IOPN model can be viewed as messages of sequence diagrams, and the transitions (actions) in the interaction set of IOPN are related to the events of sequence diagrams. If IOPN is relaxed sound, then each object net is locally relaxed sound on its own. Since a relaxed sound workflow net can be covered by LMST-invariants, and a sound firing sequence is related to an LMST-invariant, each object net can be decomposed into several sound firing sequences by LMST-invariants. According to the conditions of legal interaction in Definition 11 and the interaction set of IOPN, the LMST-invariants from different object nets can be combined into several sequence diagrams. There exists a logical relation between the relaxed soundness of IOPN and its decomposability (see Theorem 2).
For a LMST-invariant J1, J1.out denotes the set of transitions sending messages to other objects and assigned with positive value in J1 invariant. J1.out |ONk denotes the set of transitions sending messages to ONk in J1.out. J1.in denotes the set of all transitions receiving messages from other objects and assigned with positive value in J1 invariant. J1.in |ONk denotes the set all transitions receiving messages from ONk in J1.in. Definition 11 (Legal Interaction between two LMST-invariants) For two LMST-invariants Ja and Jb, it is called legal interaction between Ja and Jb, if and only if J a ∈ ON k .J ∧ J b ∈ ON j .J ∧ (k ≠ j ) , ρ ( J a .out |ON j ) = J b .in |ONk and ρ ( J a .in |ON j ) = J b .out |ONk .
□
⊗    J1   J2   J3   J4   J5   J6
J1   0    0    1    0    1    0
J2   0    0    0    1    0    1
J3   1    0    0    0    1    0
J4   0    1    0    0    0    1
J5   1    0    1    0    0    0
J6   0    1    0    1    0    0
Fig. 4. The matrix describing the legal interaction relations among the LMST-invariants of IOPN
If Ja and Jb have a legal interaction, this is denoted by ⊗(Ja, Jb) = 1, otherwise ⊗(Ja, Jb) = 0. The function ρ(X) is defined as ρ(X) = {y | ∃x ∈ X: (x, y) ∈ ρ or (y, x) ∈ ρ}. For example, in Figure 2, the LMST-invariants of each object net of the IOPN are calculated as ON1.J = {J1, J2}, ON2.J = {J3, J4}, ON3.J = {J5, J6}, with J1 = [1,1,0,0,0,0] = {a1, a2}, J2 = [0,0,1,1,1,1] = {a3, a4, a5, a6}, J3 = [1,1,1,1,0,0] = {b1, b2, b3, b4}, J4 = [0,0,0,0,1,1] = {b5, b6}, J5 = [1,1,0,0] = {c1, c2}, J6 = [0,0,1,1] = {c3, c4}. Since ρ(J1.out|ON2) = J3.in|ON1 = {b1} and ρ(J1.in|ON2) = J3.out|ON1 = {b4}, we have ⊗(J1, J3) = 1. By computing the legal interaction between LMST-invariants from different object nets, we obtain a matrix describing the legal interaction relations between LMST-invariants; Figure 4 shows this matrix for the IOPN in Figure 2. According to the legal interactions between the LMST-invariants from different object nets, we can design the decomposition algorithm from IOPN into sequence diagrams. |ONk.J| denotes the number of LMST-invariants in ONk. Ja|ρ = Ja.in ∪ Ja.out denotes the transition set obtained by projecting the LMST-invariant Ja onto the interaction set ρ. In each sequence diagram Di = (Oi, Ei, Mesi, Li, Wi), Oi and Ei are the kernel elements, so the algorithm focuses on constructing Oi and Ei for each Di.
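A sketch of the legal-interaction test of Definition 11 on the running example; the dictionary encoding of ρ (sender to receiver) and of the "home" object net of each transition are assumptions of the sketch, not notation from the paper.

rho = {"a1": "b1", "b2": "c1", "c2": "b3", "b4": "a2",
       "a3": "b5", "a4": "c3", "b6": "a5", "c4": "a6"}
rho_rev = {r: s for s, r in rho.items()}                 # receiver -> sender
home = {**{t: "ON1" for t in ["a1", "a2", "a3", "a4", "a5", "a6"]},
        **{t: "ON2" for t in ["b1", "b2", "b3", "b4", "b5", "b6"]},
        **{t: "ON3" for t in ["c1", "c2", "c3", "c4"]}}

def legal(Ja, Jb, net_a, net_b):
    out_a = {t for t in Ja if t in rho and home[rho[t]] == net_b}          # Ja.out|ONj
    in_a = {t for t in Ja if t in rho_rev and home[rho_rev[t]] == net_b}   # Ja.in|ONj
    out_b = {t for t in Jb if t in rho and home[rho[t]] == net_a}          # Jb.out|ONk
    in_b = {t for t in Jb if t in rho_rev and home[rho_rev[t]] == net_a}   # Jb.in|ONk
    partner = lambda t: rho[t] if t in rho else rho_rev[t]
    return ({partner(t) for t in out_a} == in_b and
            {partner(t) for t in in_a} == out_b)

print(legal({"a1", "a2"}, {"b1", "b2", "b3", "b4"}, "ON1", "ON2"))   # True: ⊗(J1, J3) = 1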
Algorithm 1 (Decompose IOPN into Sequence Diagrams)
Let maxnum = max(∀k ∈ {1,...,n}. |ONk.J|). Select one ONp.J such that |ONp.J| = maxnum.
input: ONk.J as the set of LMST-invariants of ONk and the interaction set ρ.
output: The set of sequence diagrams denoted by DS = {D1,...,Di,...,Dmaxnum}, Di = (Oi, Ei, Mesi, Li, Wi).
for i = 1 to maxnum do
  Jx := ONp.Ji;
  Oi := {ONp}; Ei := Jx|ρ;   // put ONp.Ji|ρ into Ei as the initial value
  TISi := {Jx};              // TISi denotes the set in which every two LMST-invariants have a legal interaction
  for j = 1 to n do          // n = |ONS|, the number of object nets in IOPN
    if j ≠ p then
      for m = 1 to |ONj.J| do
        Jb := ONj.Jm;
        Bool := True;        // Bool denotes whether Jb has a legal interaction with every invariant in TISi
        for all Ja ∈ TISi do
          ONk := Ja.obj;     // Ja.obj denotes the object net from which Ja derives
          Bool := Bool ∧ (ρ(Ja.out|ONj) = Jb.in|ONk) ∧ (ρ(Ja.in|ONj) = Jb.out|ONk);
        end for
        if Bool then
          Oi := Oi ∪ {ONj};
          Ei := Ei ∪ Jb|ρ;
          TISi := TISi ∪ {Jb};
          break;
        end if
      end for
    end if
  end for
end for
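The following is a compact Python re-expression of the same greedy construction, offered only as a sketch: the invariant sets, the legal-interaction test `legal` and the projection of an invariant onto ρ are assumed to be supplied (for instance as in the earlier sketches), and only the kernel elements Oi and Ei are built.

def decompose(object_nets, invariants, legal, project):
    """invariants: dict net -> list of LMST-invariants; returns one
    (objects, events) pair per sequence diagram."""
    pivot = max(invariants, key=lambda k: len(invariants[k]))
    diagrams = []
    for Jx in invariants[pivot]:
        objects, events, tis = {pivot}, set(project(Jx)), [(Jx, pivot)]
        for net in object_nets:
            if net == pivot:
                continue
            for Jb in invariants[net]:
                # Jb must interact legally with every invariant chosen so far
                if all(legal(Ja, Jb, owner, net) for Ja, owner in tis):
                    objects.add(net)
                    events |= set(project(Jb))
                    tis.append((Jb, net))
                    break
        diagrams.append((objects, events))
    return diagrams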
The time complexity of this algorithm is O(|ONS|² · max²(|ONk.J|)). By Algorithm 1, the IOPN can be decomposed into several sequence diagrams. Moreover, we will present the logical relation between the relaxed soundness of IOPN and its decomposability into sequence diagrams. As Figure 5 shows, it is obvious that the IOPN in Figure 2 can be decomposed into two simple sequence diagrams. However, Figure 3 shows a non-relaxed sound IOPN, which cannot be decomposed into legal sequence diagrams. In Figure 3, the LMST-invariants of each object net of the IOPN are calculated as ON1.J = {J1, J2},
ON2.J = {J3}, with J1 = [1,0,1,0,1,0] = {a1, a3, a5}, J2 = [0,1,0,1,0,1] = {a2, a4, a6}, and J3 = [1,1,1,1] = {b1, b2, b3, b4}. Here ρ(J1.out|ON2) = {b1}, ρ(J2.out|ON2) = {b2}, and J3.in|ON1 = {b1, b2}, so ρ(J1.out|ON2) ≠ J3.in|ON1 and ρ(J2.out|ON2) ≠ J3.in|ON1. Therefore ⊗(J1, J3) = 0 and ⊗(J2, J3) = 0, and this IOPN model cannot be decomposed into legal sequence diagrams.
Fig. 5. The resulting two sequence diagrams decomposed from Fig. 2 by Algorithm 1
Theorem 2. IOPN Decomposed into Sequence Diagrams Let IOPN = (ON S , ρ ) , ON S = {ON1 , ON 2 ,..., ON n } be an IOPN. ∀k ∈ {1,...n}. ON k = {Pk , Tk , Fk } are all circuit-free Petri nets. IOPN is relaxed sound ⇔ IOPN can be decomposed into DS = {D1 , D2 .., Dmaxnum } , and Dx ∈ DS follows the partial order constraints. Proof. (Omitted)
7 Conclusion
To adapt to fast development, an enterprise needs technology to compose global workflow processes from different organizations and to monitor business changes automatically. This paper applies a model called IOPN (Interaction-Oriented Petri Nets) as a formalized paradigm for workflow coordination. This model adopts process interactions between workflow actions from different workflows to coordinate different workflow processes. To assure that workflow processes and workflow coordination can be executed correctly and completely, soundness and relaxed soundness are important properties to be considered as the coordination correctness criterion. When the IOPN system becomes larger and larger, the state space of the coordination system becomes too large to analyze. To avoid the state space explosion problem, a decomposition approach based on invariant analysis can be a complementary analysis technique. We propose a decomposition approach from IOPN into sequence diagrams and present the logical relation between the relaxed soundness of IOPN and its decomposability into sequence diagrams, i.e., an IOPN model is relaxed sound if and only if it can be decomposed into sequence diagrams. Together, Algorithm 1 and Theorem 2 help to check the relaxed soundness of an IOPN model.
References 1. van der Aalst, W.M.P.: The application of Petri nets to workflow management. The Journal of Circuits, Systems, and Computers 8(1), 21–66 (1998) 2. van der Aalst, W.M.P.: Interorganizational workflows: an approach based on message sequence charts and Petri nets. Systems Analysis-Modelling-Simulation 34(3), 335–367 (1999) 3. van der Aalst, W.M.P.: Loosely coupled interorganizational workflows: Modeling and analyzing workflows crossing organizational boundaries. Information & Management 37(2), 67–75 (2000) 4. van der Aalst, W.M.P.: Inheritance of interorganizational workflows to enable business-to -business E-Commerce. Electronic Commerce Research 2(3), 195–231 (2002) 5. Chiu, D.K.W., Cheung, S.C., Till, S., Li, Q., Kafeza, E.: Workflow view driven crossorganizational interoperability in a web service environment. Information Technology and Management 5(3-4), 221–250 (2004) 6. Dehnert, J.: A Methodology for Workflow Modeling, PhD thesis, Technischen Universität Berlin (2003) 7. Dehnert, J., van der Aalst, W.M.P.: Bridging the gap between business models and workflow specifications. International Journal of Cooperative Information Systems 13(3), 289– 332 (2004) 8. Desel, J., Esparza, J.: Free Choice Petri Nets. Cambridge University Press, Cambridge (1995) 9. Divitini, M., Hanachi, C., Sibertin-Blanc, C.: Inter-organizational workflows for enterprise coordination. In: Omicini, A., et al. (eds.) Coordination of Internet Agents: Models, Technologies, and Applications 2001, pp. 369–398. Springer, Heidelberg (2001) 10. Dehnert, J., Zimmermann, A.: On the suitability of correctness criteria for business process models. In: van der Aalst, W.M.P., Benatallah, B., Casati, F., Curbera, F. (eds.) BPM 2005. LNCS, vol. 3649, pp. 386–391. Springer, Heidelberg (2005) 11. Ge, J., Hu, H., Lü, J.: Invariant Analysis for the Task Refinement of Workflow Nets. In: Proceedings of IAWTIC 2006. IEEE Computer Society, Los Alamitos (2006) 12. Girault, C., Valk, R.: Petri Nets for System Engineering: A Guide to Modeling, Verification and Application. Springer, Heidelberg (2003) 13. Hamadi, R., Benatallah, B.: A Petri net-based model for web service composition. In: Schewe, K.-D., Zhou, X. (eds.) Proceedings of the 14th Australasian Database Conference (ADC 2003), pp. 191–200. Australian Computer Society (2003) 14. Heymer, S.: A semantics for MSC based on Petri net components. In: Proceedings of the 2nd International SDL and MSC Workshop for System Analysis and Modeling (SAM 2000), Verimag, Irisa, pp. 262–275 (2000) 15. ITU-TS. ITU-TS Recomendation Z.120, Message Sequence Chart 1996 (MSC 1996). Technical report, ITU-TS, Geneva (1996) 16. Kindler, E., Martens, A., Reisig, W.: Inter-operability of workflow applications: Local criteria for global soundness. In: van der Aalst, W.M.P., Desel, J., Oberweis, A. (eds.) Business Process Management. LNCS, vol. 1806, pp. 235–253. Springer, Heidelberg (2000) 17. Kluge, O.: Modelling a railway crossing with Message Sequence Charts and Petri Nets. In: Ehrig, H., Reisig, W., Rozenberg, G., Weber, H. (eds.) Petri Net Technology for Communication-Based Systems. LNCS, vol. 2472, pp. 197–218. Springer, Heidelberg (2003) 18. Milanovic, N., Malek, M.: Current solutions for web service composition. IEEE Internet Computing 8(6), 51–59 (2004) 19. Reisig, W.: An Introduction to Petri Nets. Springer, Heidelberg (1985)
An Efficient P2P Range Query Processing Approach for Multi-dimensional Uncertain Data Ye Yuan, Guoren Wang, Yongjiao Sun, Bin Wang, Xiaochun Yang, and Ge Yu College of Information Science and Engineering, Northeastern University, Shenyang 110004, China
[email protected]
Abstract. In recent years, the management of uncertain data has received much attention in a centralized database. However, to our knowledge, no work has been done on this topic in the context of Peer-to-Peer (P2P) systems, and the existing techniques of P2P range queries cannot be suitable for uncertain data. In this paper, we study the problem of answering probabilistic threshold range queries on multi-dimensional uncertain data for P2P systems. Our novel solution of the problem, PeerUN, is based on a tree structure overlay which has the optimal diameter and can support efficient routing in highly dynamic scenarios. The issues (also faced with multi-dimensional uncertain data) of existing techniques for multi-dimensional indexing over a structure P2P network are (1) they process queries efficiently at the cost of huge maintenance overhead; (2) they have low maintenance costs, but they suffer poor routing efficiency or introduce huge network overhead. PeerUN can process range queries on multi-dimensional uncertain data efficiently with low maintenance costs. PeerUN achieves this by introducing a series of novel queries processing algorithms and a cost-based optimal data replication strategy. Experimental results validate the effectiveness of the proposed approach.
1 Introduction
Recently, query processing over uncertain data has drawn much attention from the database community, due to its wide usage in many applications such as sensor network monitoring [1], object identification [2], and moving object search [3], [4]. Many real-world application data inherently contain uncertainty. For example, sensor data collected from different sites may be distorted for various reasons such as environmental factors, device failures, or battery power. Therefore, in these situations, each object can be modeled as a so-called uncertainty region [3], instead of a precise point. There have been many techniques specifically designed for uncertain databases, for queries such as NN queries [4], [14], top-k queries [12], [13] and skyline queries [15]. Tao et al. [5] were the first to give an approach for processing probabilistic threshold range queries on multi-dimensional data, but their proposed schemes are not suitable for distributed systems, e.g., P2P networks. P2P queries on uncertain data are essential when the uncertain information is distributed across the network or web. For example, in sensor map [17], the
uncertain information from the sinks (databases) distributed in different regions needs to be shared to get the users know about the whole or any local scenario of the monitored region. But to our knowledge, no work has been done on this topic. In this paper, we study the problem of answering probabilistic threshold range queries on multi-dimensional uncertain data for P2P systems. For queries on multi-dimensional data, an important aspect of the distributed applications is the distribution of the index structure. Although some techniques have been proposed to address the query processing over the distributed indexing, they have some fundamental limitations. The root peer in MX-CIF quadtrees [6] can be a bottleneck because each query has to start from the root. In order to avoid the bottleneck of root, VBI [7] makes each node in BATON tree have to keep coverage information of all its ancestors. Therefore, the cost of updating a path or reconstructing an overlay network is huge and not acceptable. DPTree [8] also suffers from the same problem met by the VBI. MCAN [11], MAAN [9] and M-Chord [10] use space filling curves to map multi-dimensional data to one dimensional data to support similarity queries. However, they inherits the scaling problems of space filling curves with respect to dimensionality. They can support point queries efficiently, but they suffer poor routing efficiency and introduce huge network overhead when processing range queries. The proposed scheme PeerUN can solve the problems by introducing a novel multi-dimensional data mapping algorithm and a cost-based optimal data replication strategy. In this paper, PeerUN based on our previous work-Phoenix [18], a universal P2P framework, is proposed to support efficient range queries on uncertain multidimension data. A partial ordering-preserving hashing algorithm is given to map the uncertain data to Phoenix, and a filtering and validating rule based on the hashing string is presented. Two efficient algorithms are proposed to answer the range queries. PeerUN also utilizes a cost-based optimal data replication strategy to optimize the querying processing algorithms.
2 Background
In this section, we will introduce our previous work, Phoenix [18], a universal P2P framework on which PeerUN is built. A definition of probabilistic threshold range queries is also presented.
2.1 Problem Definition
A probabilistic threshold range query on multi-dimensional uncertain data is defined as follows: given a prob-range query with a hyper-rectangle rq and a threshold pq ∈ [0, 1], the appearance probability Papp(o, q) of an object o is calculated as
Papp(o, q) = ∫_{o.ur ∩ rq} o.pdf(x) dx    (1)
where o.ur ∩ rq denotes the intersection of the d-dimensional uncertainty region o.ur and rq. Object o is a result if Papp(o, q) ≥ pq.
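As a hedged illustration of Equation (1), the integral can be approximated by sampling from o.pdf and counting how much probability mass falls inside rq; the uniform pdf over the uncertainty region and the Monte-Carlo estimation are assumptions of this sketch, not part of the paper's query processing.

import random

def appearance_probability(ur, rq, samples=100_000):
    """ur, rq: lists of (low, high) bounds per dimension; uniform pdf over ur."""
    hits = 0
    for _ in range(samples):
        x = [random.uniform(lo, hi) for lo, hi in ur]       # draw a point from o.pdf
        hits += all(lo <= xi <= hi for xi, (lo, hi) in zip(x, rq))
    return hits / samples                                   # object qualifies if this >= p_q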
2.2 Phoenix
Phoenix is a universal P2P framework which is based on the l-way trie tree and deBruijn graph. Phoenix is a completely distributed system. Phoenix can maintain all the properties of the static deBruijn graph such as optimal routing efficiency, and it also can adapt to the highly dynamic scenario. For more detail Phoenix, see [18], in this section, we give a brief introduction about it. A l-way trie tree with height k is a rooted tree. Each node has at most l child nodes. Each edge at same level is assigned a unique label, and each node is given a unique label. The label of a node is the concatenation of the labels along the edges on its root path. Nodes at each level form a ring structure. Fig. 1 shows an example of a 2-ary trie tree with height 4. DeBruijn graph is embedded in each ring. Fig. 3 shows the deBruijn graph embedded in the 3rd level (The level of root is 0.) of the trie tree in Fig. 1. Each peer of Phoenix is mapped to exactly one node in the trie tree. For a balanced tree, the degrees of each internal and leaf peer of Phoenix are 2l + 3 and l + 3 respectively. For an unbalanced tree, each internal peer has at least 2l + 1 neighbors and leaf peer has at least l + 1 neighbors. In Fig. 1, for peer 010, the tree neighbors are peer 01, 0100 and 0101, the ring neighbors are 001 and 011, and its deBruijn neighbors are peer 100 and 101. The routing algorithm from a peer x of Phoenix to peer y mainly consists of two steps. Firstly, the route is forwarded to peer y at the same level with x. The identifier y is the prefix y or has y as its prefix, and the forwarding process is as the deBruijn protocol. Then the route goes down or up along the tree links until to peer y. For example, in Fig. 1, the route goes from peer 010 to peer 00 along the paths: 010-100-000; 000-00.
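The neighbor sets described above can be computed directly from a peer's label; the small sketch below does this for the binary (l = 2) case of Fig. 1 and reproduces the neighbors of peer 010. The function name and the bit-string encoding are illustrative assumptions, not Phoenix's API.

def neighbors(label, alphabet="01"):
    k = len(label)
    value = int(label, 2)
    ring = {format((value - 1) % 2**k, f"0{k}b"),
            format((value + 1) % 2**k, f"0{k}b")}          # neighbors on the level ring
    debruijn = {label[1:] + c for c in alphabet}           # deBruijn shift neighbors
    tree = {label[:-1]} | {label + c for c in alphabet}    # parent and children
    return tree, ring, debruijn

print(neighbors("010"))
# ({'01', '0100', '0101'}, {'001', '011'}, {'100', '101'})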
Fig. 1. Example of a trie tree
Fig. 2. Example of queries for an uncertain data
3 Overview of System
In this section, we will give the distribution of uncertain data across Phoenix and the structure of overlay network.
3.1 Data Placement
Since the query processing algorithm of PeerUN is based on Phoenix whose peer is labeled with the trie string, each uncertain data should also be represented with the trie string, so that the data objects can be published to the related peers. To process multi-dimensional range queries, it should be kept the order-preserving data placement which is that the neighboring data in data space should also be in the adjoining peers after distributed across the network. It is impossible for a tree structural network to keep a total order-preserving data placement. However, we observe that each level of a trie tree (See Fig. 1) keeps an ordering for the trie strings (For example, trie region, [010, 101] = {010, 011, 100, 101}, keeps a trie string order). Therefore, we want to keep the labeled data orderpreserving in the trie hashing space. Without loss generality, assume that there are d dimensions e0 , e1 , ..., ed−1 and the interval of ei is [Li , Hi ] (The interval of each dimension can be approximately estimated for the application.). We define partial-order relation between multidimensional data values as follows. Definition 1. For two data objects δ1 =< u0 , u1 , · · · , ud−1 > and δ2 =< v0 , v1 , · · · , vd−1 > in d-dimensional space, δ1 δ2 if and only if, for each 0 ≤ i ≤ d − 1, ui ≤ vi . Definition 2. Assume that f is a surjection function from the d-dimensional space D to trie hashing space T . f is a d-dimensional partial order-preserving function if and only if, for any δ1 and δ2 in D, if δ1 δ2 , then f (δ1 ) f (δ2 ). After giving the partial order-preserving definition, we can use a complete trie tree to help us to design a multi-dimensional partial order-preserving hashing algorithm, called MultiHash. The height of the l-ary trie tree is H. MultiHash is designed as follows: We partition the entire multi-dimensional space < [L0 , H0 ], · · · , [Li , Hi ], · · · , [Ld−1 , Hd−1 ] > onto the trie tree along dimensions e0 , e1 , · · · , and ed−1 in a round-robin style. Each node in the trie tree represents a multi-dimensional subspace and the root node represents the entire multi-dimensional space. For any node A at the jth level of the trie tree that has l child nodes, let i denote the value of j mod d. Then, the subspace w represented by node A is evenly divided into l pieces along the ith dimension, and each of its l child nodes represents one such a piece. Based on the trie tree for multi-dimensional space, the MultiHash algorithm works as follows: For any object o with the multi-dimensional value V =< v0 , v1 , · · · , vd−1 >, V is surely in a subspace represented by a leaf node in the trie tree. Suppose the label of the leaf node is S, then the trie string S is assigned as o’s identifer. It is easy to see that the Multihash algorithm is a multi-dimensional partial order-preserving function. For example, in Fig. 1, suppose d = 2, H = 4, the interval of the two dimensions are [0, 12] and [0, 8] respectively. According to MultiHash, each dimension is evenly divided two times. As a result, the two dimensional region [0, 3], [0, 2] is assigned with string 0000; 0001 for (3, 6], (2, 4]; · · · ; 1110 for (6, 9], (4, 6]; 1111
for (9, 12], (6, 8]. The 16 disjointing regions obey the partial order-preserving definition. MultiHash is similar to kd-tree [16], and the entire data space is divided by serval hyper-planes. The difference is that, for MultiHash, each dimension is evenly divided by l−1 hyper-planes each time. Each dimension can be divided by H/l times. Therefore, the length of the final undivided edge for ith dimension 1 is (Hi − Li ) lH/l . Suppose the length is li , the final undivided subspace ω is a d-dimensional hyper-rectangle whose edge length is li , i = 0, · · · , d−1. The value of H is enough large so that ω is very small. The multi-dimensional uncertain data can be represented with an uncertain region which is a subspace consisting of a number of ω. Therefore, each uncertain data can acquire its identifier. Each uncertain data is assigned in leaf peers of Phoenix with identifiers maximum matching rule. In practice, the height of Phoenix is very small compared with H. Leaf peers of Phoenix, therefore, contain large numbers of ω. Each uncertain data relatively consists of small numbers of ω. As a result, almost each uncertain data is entirely covered by one leaf peer. However, there still exists such data object that distributes across several different leaf peers. To query such data, the query message should reach all leaf peers which contain an entire uncertain data. Suppose each data is partitioned by n leaf peers. If n is large, the messages will take much cost. To reduce the number of messages, we have the following scheme: Algorithm 1 // A is a predefined number of peers containing a partitioned uncertain data object. 1: while n > A do 2: Move all the partitioned parts of the uncertain data to their parent peers. 3: Update n. 4: end while 5: Make peers covering an entire uncertain data connect each other.
Initially, an uncertain data object is distributed across leaf peers; if the number n of such leaf peers is smaller than A, the data does not need to be moved up. If the data is moved to lower levels of the trie tree, n becomes small, and thus the network overhead decreases. The value of A can be chosen flexibly, depending mainly on the scenario of the network.
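The following is a small sketch of the MultiHash labelling described in this subsection for an l-ary trie of height H, splitting the dimensions in round-robin order; the function name and the interval encoding are assumptions of the sketch. On the running example with d = 2, H = 4 and intervals [0, 12] and [0, 8], it reproduces the labels used later (e.g., the point (7, 1) falls in the region labelled 1000).

def multi_hash(point, intervals, l=2, H=4):
    """point: d-dimensional value; intervals: list of [L_i, H_i] per dimension."""
    bounds = [list(iv) for iv in intervals]
    label = ""
    for level in range(H):
        i = level % len(point)                       # dimension split at this level
        lo, hi = bounds[i]
        width = (hi - lo) / l
        piece = min(int((point[i] - lo) / width), l - 1)
        label += str(piece)                          # one l-ary digit per level
        bounds[i] = [lo + piece * width, lo + (piece + 1) * width]
    return label

print(multi_hash((7.0, 1.0), [[0, 12], [0, 8]]))     # -> "1000"
print(multi_hash((10.0, 3.0), [[0, 12], [0, 8]]))    # -> "1011"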
3.2 Filtering and Validating Rule Based on Trie Hashing Strings
In [5], a probabilistically constrained rectangles (PCR) of an object depending on a parameter c ∈ [0, 0.5], denoted by o.pcr(c), is used to prune or validate an uncertain object. The authors use m pre-determined probabilities c = {p1 , p2 , · · · , pm }, which constitute the U -catalog, to prune or validate a data object for different sizes rq . In addition, according to [5], the query processing of any shape query region can be converted into the processing to several axis-parallel rectangles. Therefore, we only consider the hyper-rectangular query region.
To apply the filtering or validating rule of [5] to a data object in Phoenix, the rule should be described with trie hashing strings. After using MultiHash, the query region, the data object and o.pcr(pj) (1 ≤ j ≤ m) obtain their identifiers. Suppose the strings are Sq, So and Sj (1 ≤ j ≤ m), and the number of ω cells contained by Sj is numj = l^(H−Length(Sj)). Since the pj (1 ≤ j ≤ m) are in ascending order, the lengths of the Sj are in ascending order and Sj is a prefix of Sj+1. The filtering or validating rule of PeerUN (called UncertainHash) can be described as follows:
1. For pq > 1 − pm, an object o can be eliminated if Sq satisfies neither of the two conditions: (i) Sq is a prefix of S1; (ii) Sq and S1 are the same string and numq ≥ num1.
2. For pq ≤ 1 − pm, o can be pruned if Sq and Sm have no common prefix.
3. For pq ≤ 1 − 2pm, an object is guaranteed to satisfy q if Sq is a prefix of Sm, or Sq and Sm are the same string and numq ≥ numm.
4. For pq > 0.5, the validating criterion is that Sq is a prefix of the string x1 x2 ... x_((|o|−|m|)/2) and numq ≥ num_((|o|−|m|)/2), where |o| and |m| are the lengths of So and Sm respectively. (With So = x1 x2 ... x_|o|, the string x1 x2 ... x_((|o|−|m|)/2) represents the identifier of a subspace.)
5. For pq ≤ 0.5, the validating criterion is that Sq is a prefix of the string x1 x2 ... x_((|o|−|1|)/2) and numq ≥ num_((|o|−|1|)/2), where |o| and |1| are the lengths of So and S1 respectively.
As an example, in Fig. 2, o.region denotes the uncertain region of a data object, and the region o.pcr(c) is a PCR of the data object. In this example, the identifiers of q1 and o.pcr(m) are 010 and 0101. Suppose pq ≤ 1 − 2pm. Since 010 is a prefix of 0101, according to the third rule of UncertainHash the uncertain data object is guaranteed to satisfy the query q1.
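Because labels are trie strings, the UncertainHash tests reduce to simple string checks; the sketch below illustrates the two basic predicates behind the rules ("the query region covers a labelled region" and "the regions are disjoint"), reading a null common prefix as disjointness of the trie regions. The helper names are illustrative assumptions.

def covers(s_q: str, s: str) -> bool:
    """Region labelled s lies inside region labelled s_q (prefix containment)."""
    return s.startswith(s_q)

def disjoint(s_q: str, s: str) -> bool:
    """Neither label is a prefix of the other, so the trie regions are disjoint."""
    return not (s_q.startswith(s) or s.startswith(s_q))

print(covers("010", "0101"), disjoint("010", "10"))   # True True (rule 3 on the Fig. 2 example)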
3.3 Structure of Overlay Network
In order to process multi-dimensional uncertain data, Phoenix nodes can be partitioned into two classes: data nodes (or leaf nodes) and routing nodes (or internal nodes). Data nodes actually store data while routing nodes only keep routing information. An internal node e not only stores the identifiers of its neighbors, but contains m trie hashing strings Schildi (i = 1, ..., m) where Schildi is the identifier of the MBR of c1 .pcr(c), ..., cl .pcr(c) (c1 , ..., cl are l children of node e.). Schildi is used for the probabilistic pruning of the query processing algorithm (See section 4.1).
4 Range Query Processing
To process the range queries, PeerUN includes a basic range query algorithm and a load balancing range query algorithm.
4.1 Basic Range Query Processing
As discussed above, all the uncertain data has been assigned across the trie tree overlay. Once the query request reaches the peers hosting an uncertain data, UncertainHash rule can be applied to the query. The query requests can also filter data objects before they reach leaf peers by using the probabilistic filtering scheme for a subtree e. Based on the trie strings Schildi , i = 1, ..., m, we have the following filtering scheme: (1) For pq > 1 − pm , a and Schildj is null, where (2) For pq ≤ 1 − pm , a and Schildj is null, where
subtree e Sj (1 ≤ j subtree e Sj (1 ≤ j
can be pruned if the common prefix of Sq ≤ m) has the smallest length. can be pruned if the common prefix of Sq ≤ m) has the largest length.
Now we can give the range query processing algorithm. In this paper, we only consider the balanced trie tree; the load balance algorithm given in [18] is used to guarantee the balance of the tree. A requesting peer initiates a range query. Based on MultiHash, the peer can acquire the identifiers of the requested data locally. Suppose the height of Phoenix is h. We can get the identifiers of the destination peers by taking the front h log2 l bits of the identifiers of the data. The basic range query algorithm of PeerUN is given in Algorithm 2.
Algorithm 2 RangeQuery(Identifier[number])
// The identifiers of destination peers are contained in the array Identifier[number].
1: y = y1 y2 ... yk = ComPrefix(Identifier[number]);
2: Route(y, x); // the requesting peer x routes the query message to the common ancestor peer of the destination leaf peers
3: h = Length(Identifier[0]);
4: for i = k to h − 1 do
5:   for each child peer y' = y1 y2 ... yi yi+1 of y = y1 y2 ... yi do
6:     if there exists an identifier in Identifier[number] that has the prefix y' then
7:       forward the requesting message to peer y';
8:     end if
9:     perform the probabilistic filtering scheme for subtree y';
10:    if peer y' contains a part of an uncertain data object then
11:      peer y' performs UncertainHash for the uncertain data object together with the other neighbors containing the same data;
12:    end if
13:  end for
14: end for
15: Perform UncertainHash for each uncertain data object;
There are three steps to process the range query. The first step is that the requesting peer x routes the message to the common ancestor peer y of the destination leaf peers (Lines 1-2). The second step is a pruning routing algorithm (Lines 3-8). In the routing process, the requesting message goes down from peer y to the leaf peers. At each peer y′ on the downward paths, the message prunes a path by matching the identifiers of the destination peers against y′ (Lines 6-8).
The final step is to filter a subtree with the probabilistic filtering scheme (Line 9). The scheme is used to filter out a subtree in which no uncertain data satisfies the query condition. If there is partitioned data across the internal peers, the filtering or validating rule (UncertainHash) is applied to the entire data object. Finally, the message reaches the destination leaf peers containing the requested data. UncertainHash is applied to the uncertain data again, and the remaining candidate data objects are examined by computing Equation (1). As an example of the algorithm, in Fig. 1, peer 010 issues a two-dimensional range query [6, 9], [0, 2]; [9, 12], [2, 4]. According to MultiHash (see the MultiHash example given earlier), the query is labeled with 1000 and 1011, which represent [6, 9], [0, 2] and [9, 12], [2, 4] respectively. In Fig. 2, the region q_2 is the issued range query, which covers the data with prefixes 1000 and 1011. First, peer 010 routes the request to peer 10, which is the common ancestor of peers 1000 and 1011. Then the request goes down from peer 10. Suppose the request cannot be pruned by any subtree. The request reaches the leaf peers along the paths 10 − 100 − 1000 and 10 − 101 − 1011. Finally, the UncertainHash rule is applied to the query.
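As an illustration of Lines 1-3 of Algorithm 2, the following sketch (our own, assuming identifiers are bit strings and that a peer at depth d owns the length-d prefix) derives the destination peer identifiers and their common prefix from the data identifiers:

```python
import math

def destination_peers(data_ids, h, l):
    # Keep the first h*log2(l) bits of each data identifier (here l = 2, so the
    # identifiers are plain bit strings).
    bits = int(h * math.log2(l))
    return sorted({d[:bits] for d in data_ids})

def com_prefix(ids):
    # Longest common prefix of all destination identifiers (ComPrefix in Algorithm 2);
    # for lexicographically sorted strings it suffices to compare the extremes.
    first, last = min(ids), max(ids)
    i = 0
    while i < min(len(first), len(last)) and first[i] == last[i]:
        i += 1
    return first[:i]

# Example from the text: data identifiers 1000 and 1011, trie height h = 4, l = 2.
dest = destination_peers(["1000", "1011"], h=4, l=2)   # ['1000', '1011']
y = com_prefix(dest)                                   # '10', the common ancestor peer
```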
4.2 Load Balancing Range Query Processing
As shown in Algorithm 2, the requesting message is forwarded to an ancestor peer, which is usually at a low level of the trie tree. Though the algorithm can process range queries efficiently, peers at low levels suffer a high query load, which makes the low-level peers a bottleneck. The bottleneck problem may cause congestion in the tree-based network. We should therefore distribute the query load uniformly across the trie tree while still processing queries efficiently. The corresponding scheme of PeerUN is introduced in this section. Suppose a requesting peer x at level L issues a query q, and the common ancestor peer is y′. If peer y′ is at a higher level than peer x, Algorithm 2 is still used to process the query; otherwise peer x routes the message to the peers that are also at level L and lie on the downward paths of peer y′. It is easy to acquire the identifiers of these peers: we take the first b·log2(l) bits of the identifiers of the destination peers, where b is the length of x. These identifiers are stored in the array Identifier′[number]. After the request reaches the peers at level L, the message executes Algorithm 2 until it reaches the leaf peers. The query message does not go up; it is sent to the destination leaf peers along the downward paths. As a result, the peers at low levels carry very little query load, and thus the bottleneck problem is solved. The naive approach of routing a message from peer x to n peers at level L is to execute the routing algorithm proposed in [18] n times. This approach causes too many messages. In the following, we introduce a novel scheme to reduce the network overhead. The schemes in [18] guarantee that the peers at each level of Phoenix form an approximately complete deBruijn graph. Fig. 3 shows a complete deBruijn graph formed by the peers at the 3rd level of Fig. 1.
Fig. 3. A 2-ary deBruijn graph

Fig. 4. Example of the deBruijn routing tree of peer 010
For the remainder of this subsection, we consider only the deBruijn graph of one level of Phoenix. The deBruijn graph formed by the peers of Phoenix is called DLevel. As noted above, the identifiers of the destination peers at level L are stored in Identifier′[number], obtained from the MultiHash algorithm. In addition, the trie strings are also identifiers of nodes of the deBruijn graph. Therefore, Identifier′[number] contains partially order-preserving identifiers for the deBruijn graph. MultiHash guarantees that the peers with identifiers in Identifier′[number] are, as far as possible, neighbors at level L. Suppose peers u and v are the peers with the smallest and largest identifiers in Identifier′[number]. Since the peers at each level of Phoenix are arranged in ascending trie-string order, the destination peers must lie between peer u and peer v, and most of these peers are adjoining peers. Based on the topology properties of DLevel, we can design a deBruijn routing tree (DRT) for any peer x = x_1 ··· x_k, 1 ≤ k ≤ h, of DLevel. The deBruijn routing tree of the peer is formed using the following four rules: (1) The root is peer x; (2) Each peer in the DRT is a peer in DLevel; (3) For each peer in the tree, its child peers at the next level are its deBruijn neighbors in DLevel, sorted from left to right in increasing trie-string order; (4) The DRT has (k+1) levels with the root node at the 0th level. Therefore, the ith level (0 ≤ i ≤ k − 1) of the DRT contains all the peers whose identifiers have the prefix x_{i+1} ··· x_k, and the last level (the kth level) contains all the peers whose identifiers do not have x_k as the first label. Fig. 4 shows the DRT of peer 010 (see Fig. 3) for the DLevel topology. The DRT of peer 010 has four levels, and the nodes at the first and second levels have the common prefixes 10 and 0 respectively, which are suffixes of 010.
The routing to the peers in Identifier′[number] can also be viewed as a range query over [LowI, HighI], where LowI and HighI are the smallest and largest identifiers in Identifier′[number]. Based on the DRT, a pruning routing algorithm (LevelQuery) is used to perform the range queries in the DRT for all the destination peers that are in charge of the trie region [LowI, HighI]. Suppose the trie identifiers LowI and HighI have a common prefix (if they have no common prefix, we can divide [LowI, HighI] into several (at most l) sub-intervals with common prefixes and deal with each sub-interval separately); then all the destination peers are at the same level of the DRT.
Let ComI denote the longest common prefix of LowI and HighI, and ComS the longest trie identifier which is both a prefix of ComI and a suffix of the identifier of the root peer x. Suppose the length of ComS is m; then all the destination peers are nodes at the (k − m)th level of the DRT. When a peer E at the ith level of the DRT receives the query message, the identifier of E is x_{i+1} ··· x_{k−m} A. Consider any deBruijn neighbor F = x_{i+2} ··· x_{k−m} AB of peer E; peer F is at the (i + 1)th level of the DRT. By the properties of the DRT, the identifiers of F's descendants at the (k − m)th level of the DRT have the prefix AB. If the trie region [LowI, HighI] includes an identifier I that has the prefix AB, the descendants of F in the DRT contain part of the destination peers and peer E should forward the query to peer F. LevelQuery is performed until the request reaches the destination peers. Consider again the example used to explain Algorithm 2. The identifiers of the destination peers are Identifier′[number] = 100, 101. The dashed lines with arrows in Fig. 4 show an example of query paths in the DRT. In the example, we have LowI = 110 and HighI = 111. All the destination peers are at the (k − m = 3 − 0 = 3)rd level of the DRT. Finally, the peers with identifiers in Identifier′[number] begin to execute Algorithm 2.
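The forwarding test of LevelQuery reduces to simple string manipulation on trie identifiers. The sketch below is our own illustration (not the paper's algorithm), assuming binary identifiers of equal length and the usual deBruijn convention that the neighbors of x_1 x_2 ··· x_k are x_2 ··· x_k b for each symbol b:

```python
def debruijn_neighbors(ident, alphabet="01"):
    # Neighbors of x1 x2 ... xk in a binary deBruijn graph are x2 ... xk b.
    return [ident[1:] + b for b in alphabet]

def region_has_prefix(low, high, prefix):
    # True if some identifier in the trie region [low, high] starts with 'prefix'
    # (identifiers are fixed-length strings over the same alphabet).
    smallest = prefix.ljust(len(low), "0")   # smallest identifier with this prefix
    largest = prefix.ljust(len(high), "1")   # largest identifier with this prefix
    return not (largest < low or smallest > high)

def forward_targets(peer_id, accumulated, low, high):
    # One step of LevelQuery at peer E: having accumulated the string A so far,
    # forward to every deBruijn neighbor F whose extra symbol B yields a prefix
    # AB that still intersects the destination region [low, high].
    targets = []
    for nb in debruijn_neighbors(peer_id):
        ab = accumulated + nb[-1]
        if region_has_prefix(low, high, ab):
            targets.append((nb, ab))
    return targets

# Example with the DRT of peer 010 and destination region [110, 111]:
# forward_targets("010", "", "110", "111") keeps only the neighbor contributing B = "1".
```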
5 Query Optimization
As discussed in Section 1, previous works on distributed multi-dimensional indexing, such as VBI-Tree and DPTree, suffer huge maintenance costs because each node in the tree has to keep coverage information about all its ancestors. PeerUN does not have this problem, since the peers of Phoenix only store the information of their children in the trie tree. Any peer of Phoenix can execute the MultiHash algorithm locally, since the data space is evenly divided. The space-division schemes of previous works are based on the distribution of the data, which is not known in a distributed environment, and thus every node has to keep the coverage information of the nodes on its upward path. However, the filtering ability of PeerUN is weaker than that of the previous works; in other words, PeerUN introduces more network overhead than the previous works when querying the same amount of data. In this section, an optimal replication scheme is given to strengthen the filtering capability of our algorithms. All the data are kept in leaf peers; the data are replicated in the parent or ancestor peers of the corresponding leaf peers so as to reduce the cost of query processing. As a result, the query delay is also reduced. The replication strategy is tied to a specific query algorithm; in this paper, we consider the load balancing processing algorithm proposed in the last section. We consider the replication scheme in a small subtree of Phoenix, since it is difficult to know or control the state of the whole network in highly distributed P2P systems. Suppose subtree a has its root and leaf peers at the ath and kth levels of the trie tree, where k is the height of the trie tree. All the data of Phoenix can be covered by several such subtrees, and these subtrees do not have overlapping parts.
Table 1. Notations of the optimal replication strategy for the subtree a

Symbol       Description
Cost_minus   the decreased query cost
Cost_plus    the maintenance cost for replicated data
n_i          the number of decreased messages if data is replicated at the ith level
p_i          the ratio of query load over the ith level of a
b_j          the size of the jth replicated data object (in bytes)
J            the number of replicated data objects
n'_i         the number of maintaining messages for replicated data at the ith level
b'_j         the transmitted size when maintaining the jth replicated data object (in bytes)
y_ij         zero-one variable for the jth replicated data object at the ith level
The value a is determined based on the scale of the network. Each subtree can perform the optimal scheme by itself. After all the data are replicated in these subtrees, the filtering capability of PeerUN is maximally strengthened. In the following, we consider the optimal replication strategy for a subtree a. The notations used for the scheme are listed in Table 1. For tree a, we define the root to be at the lowest level 1 and the leaves at the highest level (k − a). The J data objects are kept at level k − a. As noted for the load balancing routing algorithm, most query messages travel from low levels to leaf peers to query data. If there are copies of the J data objects at levels 1 to (k − a − 1) of tree a, the messages only need to reach peers at low levels, and thus the query cost in subtree a decreases. Without loss of generality, we assume that each data object is requested with the same probability and that each peer of Phoenix queries each data object in the system with the same probability. Let y_{ij} be a zero-one variable which is equal to one if the jth data object has a copy at the ith level of tree a and zero otherwise. For the tree a, the total decreased cost of queries is:

Cost_minus = \sum_{i=1}^{k-a-1} p_i n_i \sum_{j=1}^{J} b_j y_{ij}    (2)

where

n_i = \sum_{j=i}^{k-a-1} l^j = \frac{l^{k-a} - l^i}{l-1}

p_i = m_i / m_{total} = \sum_{j=0}^{a+i-1} l^j \Big/ \sum_{i=1}^{k-a-1} m_i = \frac{l^{a+i+1} - l^{a+i} - l + 1}{l^k - l - (l-1)(k-a+1)}

with m_i = (l^{a+i} - 1)/(l - 1).
The replication scheme also introduces additional maintenance costs, which include refresh, churn and updating costs for the copies. For ease of analysis, we assume the average transmitted size of the three maintenance operations for a copy of the jth data object is b'_j. Then the total maintenance cost is

Cost_plus = \sum_{i=1}^{k-a-1} n'_i \sum_{j=1}^{J} b'_j y_{ij}    (3)

where

n'_i = l^i + \sum_{m=i+1}^{k-a-1} l^m y_{mj}(y_{mj} - y_{ij})    (4)
Any assignment of replicas to peers must satisfy the local storage constraints of each peer. For ease of analysis, we consider the storage constraint S_i of each level, which is the sum of the constraints of the peers at level i. Thus the variable y_{ij} is subject to:

\sum_{j=1}^{J} b_j y_{ij} \leq S_i,  i = 1, ..., k − a − 1    (5)
We want to maximize Cost_minus and minimize Cost_plus. This is a multi-objective optimization problem, and the two objectives are opposed. We can set up a single objective function to solve the problem, but the values of the two functions must be of the same order of magnitude. After analyzing the actual problem, we construct the following objective function:

C_total = \lambda Cost_minus - (1 - \lambda) Cost_plus    (6)
The parameter λ represents the fraction of query operations on the data, and 1 − λ is the fraction of maintenance operations. We obtain the optimal replication scheme by maximizing the value of this function. From Equations (2)–(6), we have

C_total = \sum_{i=1}^{k-a-1} \sum_{j=1}^{J} (\lambda p_i n_i b_j y_{ij} - (1 - \lambda) n'_i b'_j y_{ij})
        = \sum_{i=1}^{k-a-1} \sum_{j=1}^{J} y_{ij} (\lambda (p_i n_i b_j + n'_i b'_j) - n'_i b'_j)    (7)
We can solve the optimization problem using dynamic programming. Suppose M = \sum_{i=1}^{k-a-1} S_i. From Equation (7), we have the dynamic programming recurrences

C_1(u_1, v_1) = max{ C_1(u_1 - 1, v_1 - S_i) + C_2(u_2, v_2), C_1(u_1 - 1, v_1) },  1 \leq u_1 \leq k-a-1, 0 \leq v_1 \leq M

C_2(u_2, v_2) = max{ C_2(u_2 - 1, v_2 - b_j) - b'_j(1 - \lambda) C_3(u_3, v_3) + \lambda p_i n_i b_j, C_2(u_2 - 1, v_2) },  1 \leq u_2 \leq J, 0 \leq v_2 \leq S_i
From Equation (4), we have the dynamic programming recurrence for C_3(u_3, v_3):

C_3(u_3, v_3) = min{ C_3(u_3 - 1, v_3 - S_i) + d_{u_3}(1 - y_{ij}), C_3(u_3, v_3) },  i+1 \leq u_3 \leq k-a-1, 0 \leq v_3 \leq M
A recursive program is used to compute the dynamic programming formulas. We record the values of y_{ij} while executing the program. We omit the detailed procedure due to space limitations; the results are shown in the experimental section.
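To give a concrete feel for the optimization, the following is a simplified sketch (ours, not the authors' algorithm) that maximizes Equation (7) subject to the storage constraint (5). It treats each level independently as a 0/1 knapsack and therefore ignores the coupling between levels introduced by n'_i in Equation (4); the coefficient gain(i, j) = λ(p_i n_i b_j + n'_i b'_j) − n'_i b'_j is assumed to be precomputed.

```python
def optimal_replication(levels, J, S, b, gain):
    """Pick y[i][j] in {0, 1} maximizing sum_ij y[i][j] * gain(i, j)
    subject to sum_j b[j] * y[i][j] <= S[i] for every level i.
    Each level is solved as an independent 0/1 knapsack by dynamic programming."""
    y = [[0] * J for _ in range(levels)]
    for i in range(levels):
        cap = S[i]
        best = [0.0] * (cap + 1)                    # best[v] = best gain using storage <= v
        choice = [[False] * J for _ in range(cap + 1)]
        for j in range(J):
            g = gain(i, j)
            if g <= 0:
                continue                            # replicating object j at level i never pays off
            for v in range(cap, b[j] - 1, -1):      # descending v: each object used at most once
                if best[v - b[j]] + g > best[v]:
                    best[v] = best[v - b[j]] + g
                    choice[v] = choice[v - b[j]][:]
                    choice[v][j] = True
        for j in range(J):
            y[i][j] = 1 if choice[cap][j] else 0
    return y
```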
6 Performance Evaluation
We built a P2P simulator to evaluate the performance of PeerUN. Since the load balancing query algorithm subsumes the basic query algorithm, we only implement the load balancing query processing algorithm of PeerUN. For comparison, we implement VBI-UTree, which organizes the U-Tree [5] (storing the PCRs in leaf and intermediate entries) into the VBI-Tree [7]. For a fair comparison, the degrees of VBI-Tree and Phoenix are both set to log N, where N is the number of nodes in the system. The optimal query processing scheme, called OPT-PeerUN, is also evaluated in the experiments. All the experiments are performed on a machine with a 3 GHz Pentium IV CPU and 1 GB RAM. Since no real data sets are available, we synthetically generate uncertain objects in a d-dimensional data space U = [0, 1]^d as in [3], [4] and [5]. The uncertainty region o.ur is a circle centred at c_o with radius 0.01, 0.02 or 0.03; for each experiment, the ratios of data objects with radii 0.01, 0.02 and 0.03 are set to 40%, 30% and 30%. The centre points c_o follow a Zipfian distribution with parameter 0.8, which also determines the data distribution in the data space. The pdf of each data object is Uniform or Gaussian, with ratios of 40% and 60% in all experiments. The query probability p_q is 0.4 or 0.6, each accounting for 50% of the queries. The search region of a query is a hypercube with side length qs. To examine the updating cost for data movement, we generate the inserted data with the Zipfian method with parameter 1.0. For ease of implementation of OPT-PeerUN, we set b_j, b'_j and λ to 1, 1 and 0.7. In this simulation, the data size is 10 times the corresponding network size. The default values for network size, inserted data size, dimensionality and qs are 2^14, 10,000, 4 and 100 (i.e., 1% of the length of an axis) respectively. Based on the above settings, we examine the performance of PeerUN with different network sizes, query side lengths (qs) and dimensionalities.

Impact of network size: The first set of experiments shows the performance under different network sizes. For the five network sizes, 100, 400, 1000, 5000 and 10,000 range queries are uniformly generated respectively. The average query delays are plotted in Fig. 5(a), in which VBI-UTree has the worst query routing efficiency. This is because Phoenix has a deBruijn graph embedded, while BATON has a Chord-like graph. From Fig. 5(b), we see that PeerUN incurs a higher query cost than VBI-UTree, since the space division of PeerUN is not based on the data distribution. Optimizing the query algorithm greatly reduces the number of messages of PeerUN, so that OPT-PeerUN has the lowest query cost. For data insertion, the average updating costs along the upward path of PeerUN and VBI-UTree are O(log N) and O(N) respectively. Therefore, as shown in Fig. 5(c), PeerUN and OPT-PeerUN save a large number of updating messages compared with VBI-UTree. However, OPT-PeerUN costs slightly more than PeerUN due to the additional maintenance of replicated data objects.

Fig. 5. Performance comparison with different network sizes: (a) average query delay, (b) average query cost, (c) updating cost for data insertion

Impact of qs: In this group of experiments, 5000 queries are uniformly issued to examine the performance with varying query side lengths. As shown in Fig. 6(a) and Fig. 6(b), the average query delay and query cost both increase as qs increases. From Fig. 6(c), we can see that qs has only a small impact on the updating cost, since all the curves in this figure are stable. The relation between the curves for VBI-UTree, PeerUN and OPT-PeerUN in this set of experiments is the same as in the previous one, which conforms to our design.

Fig. 6. Performance comparison with different lengths of the query side: (a) average query delay, (b) average query cost, (c) updating cost for data insertion

Impact of dimensionality: 5000 queries are also uniformly generated to evaluate the performance with varying dimensionality. Fig. 7(a) shows that the average number of routing hops of all schemes increases only slightly, which indicates that the query delay is largely independent of the dimensionality.
Fig. 7. Performance comparison with different dimensionality: (a) average query delay, (b) average query cost, (c) updating cost for data insertion
Fig. 7(b) shows that the average query cost increases with increasing dimensionality, because the higher the dimensionality, the greater the overlap in space. The updating costs for data insertion in Fig. 7(c) are not sensitive to the varying dimensionality, although VBI-UTree increases the most.
7 Conclusions
In this paper, a novel scheme, PeerUN, is proposed to process range queries on multi-dimensional uncertain data in P2P systems. A partial order-preserving hashing algorithm is presented to map the multi-dimensional data space to the trie hashing space. PeerUN utilizes the trie hashing strings to design a filtering and validating rule for uncertain data. Based on a universal P2P framework, Phoenix, a distributed multi-dimensional indexing structure is proposed to support two efficient query processing algorithms, which make PeerUN free from the root-bottleneck problem. Furthermore, an optimal data replication strategy is given to optimize the query efficiency. Compared with previous tree-based distributed multi-dimensional indices, PeerUN has extremely low maintenance cost. PeerUN also achieves high query routing efficiency with low network overhead. Extensive experiments confirm our designs.

Acknowledgement. This work is partially supported by the Program for New Century Excellent Talents in Universities (NCET-06-0290) and the Fok Ying Tong Education Foundation Award (104027).
References
1. Faradjian, A., Gehrke, J., Bonnet, P.: GADT: a probability space ADT for representing and querying the physical world. In: Proc. of ICDE (2002)
2. Böhm, C., Pryakhin, A., Schubert, M.: The Gauss-tree: efficient object identification in databases of probabilistic feature vectors. In: Proc. of ICDE (2006)
3. Cheng, R., Kalashnikov, D., Prabhakar, S.: Querying imprecise data in moving object environments. TKDE 16(9), 1112–1127 (2004)
4. Ljosa, V., Singh, A.K.: APLA: indexing arbitrary probability distributions. In: Proc. of ICDE (2007)
5. Tao, Y., Cheng, R.: Indexing multi-dimensional uncertain data with arbitrary probability density functions. In: Proc. of VLDB (2005)
6. Tanin, E., Harwood, A., Samet, H.: Using a distributed quadtree index in peer-to-peer networks. VLDB Journal 16(2), 165–178 (2007)
7. Jagadish, H.V., Ooi, B.C.: VBI-tree: a peer-to-peer framework for supporting multidimensional indexing schemes. In: Proc. of ICDE (2006)
8. Lee, W., Li, M., Sivasubramaniam, A.: DPTree: a balanced tree based indexing framework for peer-to-peer systems. In: Proc. of ICNP (2006)
9. Cai, M., Frank, M.R., Szekely, P.A.: MAAN: a multi-attribute addressable network for grid information services. In: Proc. of GRID (2003)
10. Novak, D., Zezula, P.: M-Chord: a scalable distributed similarity search structure. In: Proc. of InfoScale (2006)
11. Falchi, F., Gennaro, C., Zezula, P.: A content-addressable network for similarity search in metric spaces. In: Moro, G., Bergamaschi, S., Joseph, S., Morin, J.-H., Ouksel, A.M. (eds.) DBISP2P 2005 and DBISP2P 2006. LNCS, vol. 4125, pp. 98–110. Springer, Heidelberg (2007)
12. Soliman, M.A., Ilyas, I.F., Chang, K.C.: Top-k query processing in uncertain databases. In: Proc. of ICDE (2007)
13. Re, C., Dalvi, N., Suciu, D.: Efficient top-k query evaluation on probabilistic data. In: Proc. of ICDE (2007)
14. Kriegel, H.P., Kunath, P., Renz, M.: Probabilistic nearest-neighbor query on uncertain objects. In: Kotagiri, R., Radha Krishna, P., Mohania, M., Nantajeewarawat, E. (eds.) DASFAA 2007. LNCS, vol. 4443, pp. 337–348. Springer, Heidelberg (2007)
15. Pei, J., Jiang, B., Lin, X., Yuan, Y.: Probabilistic skylines on uncertain data. In: Proc. of VLDB (2007)
16. Bentley, J.L.: Multidimensional binary search trees used for associative searching. Communications of the ACM 18(9), 509–517 (1975)
17. http://atom.research.microsoft.com/sensormap/
18. Yuan, Y., Guo, D., Wang, G., Liu, Y.: Beyond the lower bound: a unified and optimal P2P construction method. HKUST Technical Report (2008), http://www.cse.ust.hk/~liu/guodeke/index.html
Flexibility as a Service

W.M.P. van der Aalst(1,2), M. Adams(2), A.H.M. ter Hofstede(2), M. Pesic(1), and H. Schonenberg(1)

(1) Eindhoven University of Technology, P.O. Box 513, NL-5600 MB, Eindhoven, The Netherlands
[email protected]
(2) Queensland University of Technology, GPO Box 2434, Brisbane QLD 4001, Australia
Abstract. The lack of flexibility is often seen as an inhibitor for the successful application of workflow technology. Many researchers have proposed different ways of addressing this problem and some of these ideas have been implemented in commercial systems. However, a “one size fits all” approach is likely to fail because, depending on the situation (i.e., characteristics of processes and people involved), different types of flexibility are needed. In fact within a single process/organisation varying degrees of flexibility may be required, e.g., the front-office part of the process may require more flexibility while the back-office part requires more control. This triggers the question whether different styles of flexibility can be mixed and integrated into one system. This paper proposes the Flexibility as a Service (FAAS) approach which is inspired by the Service Oriented Architecture (SOA) and our taxonomy of flexibility. Activities in the process are linked to services. Different services may implement the corresponding activities using different workflow languages. This way different styles of modelling may be mixed and nested in any way appropriate. This paper demonstrates the FAAS approach using the Yawl, Declare, and Worklet services.
1 Introduction
Web services, and more generally Service-Oriented Architectures (SOAs), have emerged as the standard way of implementing systems in a cross-organisational setting. The basic idea of SOA is to modularise functions and expose them as services using a stack of standards. Although initially intended for automated inter-organisational processes, the technology is also used for intra-organisational processes where many activities are performed by human actors. This is illustrated by the fact that classical workflow technology is being embedded in SOA-style systems (cf. the role of BPEL). Moreover, proposals such as BPEL4People [15] illustrate the need to integrate human actors in processes, as has been done in workflow systems since the nineties [2,16,13]. Experience with workflow technology shows that it is relatively easy to support structured processes. However, processes involving people and organisations
tend to be less structured. Often, the process is not stable and changes continuously or there are many cases that require people to deviate from the predefined process [5,7,19,20]. Therefore, numerous researchers proposed ways of dealing with flexibility and change. Unfortunately, few of these ideas have been adopted by commercial parties. Moreover, it has become clear that there is no “one size fits all” solution, i.e., depending on the application, different types of flexibility are needed. In this paper we use the taxonomy presented in [23] where four types of flexibility are distinguished: (1) flexibility by design, (2) flexibility by deviation, (3) flexibility by underspecification, and (4) flexibility by change (both at type and instance levels). This taxonomy shows that different types of flexibility exist. Moreover, different paradigms may be used, i.e., even within one flexibility type there may be different mechanisms that realise different forms of flexibility. For example, flexibility by design may be achieved by adding various modelling constructs depending on the nature of the initial language (declarative/imperative, graph-based/block-structured, etc.). Therefore, it is not realistic to think that a single language is able to cover all flexibility requirements. To address the need for flexibility and to take advantage of today’s SOAs, we propose Flexibility as a Service (FAAS). The idea behind FAAS is that different types of flexibility can be arbitrarily nested using the notion of a service. Figure 1 shows the basic idea. The proposal is not to fix the choice of workflow language but to agree on the interfaces between engines supporting different languages. Activities in one workflow may be subcontracted to a subprocess in another language, i.e., activities can act as service consumers while subprocesses, people, and applications act as service providers. As Figure 1 shows, languages can be arbitrarily nested.
Fig. 1. Overview of FAAS approach: A, B, . . . , Z may use different languages. The solid arcs are used to connect a service consumer (arc source) to a service provider (arc target).
A particular case (i.e., workflow instance) triggers the invocation of a tree of services. For example, the top-level workflow is executed using language B, some of the activities in this workflow are directly executed by people and applications while other activities are outsourced, as shown by the links between activities and people or applications in Figure 1. This way some of the activities in the top-level workflow can be seen as service consumers that outsource work to service providers using potentially different workflow languages. The outsourced activities are atomic for the top-level workflow (service consumer), however, may be decomposed by the service provider. The service provider may use another workflow language, say A. However, activities expressed in language A may again refer to processes expressed in language C, etc. In fact any nesting of languages is possible as long as the interfaces match. In this paper, we will show that the FAAS approach depicted in Figure 1 is indeed possible. Moreover, we will show that the different forms of flexibility are complementary and can be merged. This will be illustrated in a setting using Yawl [3], Worklets [9], and Declare [6]. Yawl is a highly expressive language based on the workflow patterns. Using the advanced routing constructs of Yawl, flexibility by design is supported. Worklets offer flexibility by underspecification, i.e., only at run-time the appropriate subprocess is selected and/or defined. Declare is a framework that implements various declarative languages, e.g., DecSerFlow and ConDec, and supports flexibility by design, flexibility by deviation, and flexibility by change [17]. The paper is organised as follows. First, we present related work using our taxonomy of flexibility. Section 3 defines the FAAS approach and describes the characteristics of three concrete languages supporting some form of flexibility (Yawl, Worklets, and Declare). Note that also other workflow languages/systems could have been used (e.g., ADEPT [19]). However, we will show that Yawl, Worklets, and Declare provide a good coverage. Section 4 describes the implementation. Finally, we show an example that joins all three types of flexibility and conclude the paper.
2 Related Work and Taxonomy of Flexibility
Since the nineties many researchers have been working on workflow modelling, analysis, and enactment [2,16,13]. Already in [13] it was pointed out that there are different workflow processes, ranging from fully automated processes to ad-hoc processes with considerable human involvement. Nevertheless, most commercial systems focus on structured processes. One notable exception is the tool FLOWer by Pallas Athena, which supports flexibility through case handling [7]. Although their ideas have not been adopted in commercial offerings, many researchers have proposed interesting approaches to tackle flexibility [5,7,19,14,17,20]. Using the taxonomy of flexibility presented in [23], we now classify the flexibility spectrum and use this classification to discuss related work.

Flexibility by Design is the ability to incorporate alternative execution paths within a process definition at design time such that selection of the most
appropriate execution path can be made at runtime for each process instance. Consider for example a process model which allows the user to select a route; this is a form of flexibility by design. Also parallel processes are more flexible than sequential processes, e.g., a process that allows for A and B to be executed in parallel, also allows for the sequence B followed by A (and vice versa), and thus offers some form of flexibility. In this way, many of the workflow patterns [4] can be seen as constructs to facilitate flexibility by design. A constraint-based language provides a completely different starting point, i.e., anything is possible as long as it is not forbidden [6].

Flexibility by Deviation is the ability for a process instance to deviate at runtime from the execution path prescribed by the original process without altering its process definition. The deviation can only encompass changes to the execution sequence of activities in the process model for a specific process instance; it does not allow for changes in the process definition or the activities that it comprises. Many systems allow for such functionality to some degree, i.e., there are mechanisms to deviate without changing the model or any modelling efforts. The concept of case handling allows activities to be skipped and rolled back as long as people have the right authorisation [7]. Flexibility by deviation is related to exception handling as exceptions can be seen as a kind of deviation. See [10,21] for pointers to the extensive literature on this topic.

Flexibility by Underspecification is the ability to execute an incomplete process specification at runtime, i.e., one which does not contain sufficient information to allow it to be executed to completion. Note that this type of flexibility does not require the model to be changed at runtime; instead the model needs to be completed by providing a concrete realisation for the undefined parts. There are basically two types of underspecification: late binding and late modelling. Late binding means that a missing part of the model is linked to some prespecified functionality (e.g., a subprocess) at runtime. Late modelling means that at runtime new (i.e., not predefined) functionality is modelled, e.g., a subprocess is specified. Worklets allow for both late modelling and late binding [9,8]. The various approaches to underspecification are also discussed in [22].

Flexibility by Change is the ability to modify a process definition at runtime such that one or all of the currently executing process instances are migrated to a new process definition. Unlike the previous three flexibility types, the model constructed at design time is modified and one or more instances need to be transferred from the old to the new model. There are two types of flexibility by change: (1) momentary change (also known as change at the instance level or ad-hoc change): a change affecting the execution of one or more selected process instances (typically only one), and (2) evolutionary change (also known as change at the type level): a change caused by modification of the process definition, affecting all new process instances. Changing a process definition leads to all kinds of problems as indicated in [5,12,19,20]. In [12] the concept of the so-called dynamic change bug was introduced. In the context of ADEPT [19] much work has been performed on workflow change. An excellent overview of problems and related work is given in [20].
Workflow flexibility has been a popular research topic in the last 15 years. Therefore, it is only possible to mention a small proportion of the many papers in this domain here. However, the taxonomy just given provides a good overview of the different approaches, and, more importantly, it shows that there are many different, often complementary, ways of supporting flexibility. Therefore, we propose not to use a single approach but to facilitate the arbitrary nesting of the different styles using the FAAS approach.
3 Linking Different Forms of Flexibility
The idea of FAAS was already introduced using Figure 1. The following definition conceptualises the main idea. The whole of what is shown in Figure 1 is a so-called workflow orchestration, which consists of two types of services: workflow services and application services.

Definition 1. A workflow orchestration is a tuple WO = (WFS, AS, W), where
– WFS is the set of workflow services (i.e., composites of activities glued together using some "process logic"),
– AS is the set of application services (i.e., we abstract from the workflow inside such a service),
– S = WFS ∪ AS is the set of services,
– for any s ∈ WFS, we define the following functions:
  • lang(s) is the language used to implement the process,
  • act(s) is the set of activities in s,
  • logic(s) defines the process logic, i.e., the causal dependencies between the activities in act(s), expressed in lang(s), and
  • impl(s) ∈ act(s) → S defines the implementation of each activity in s,
– from the above we can distill the following wiring relation: W = {(cons, prov) ∈ WFS × S | ∃ a ∈ act(cons): impl(cons)(a) = prov}.

The arcs (i.e., the links in the wiring relation W) in Figure 1 represent service consumer/provider links. The source of a link is the service consumer while the target of a link is the service provider. The application services are the leaves in Figure 1, i.e., unlike workflow services they are considered to be black boxes and their internal structure is not revealed. They may correspond to a web service, a worklist handler which pushes work-items to workers, or some legacy application. The workflow services have the same "provider interface" but have their own internal structure. In Figure 1, several workflow services using languages A, B, ..., Z are depicted. Each workflow service s uses lang(s) to describe the workflow process. In process s, act(s) refers to the set of activities. Seen from the viewpoint of s, the activities in act(s) are black boxes that are subcontracted to other services. impl(s)(a) is the service that executes activity a for workflow service s. The solid arcs connecting activities to services in Figure 1 describe the wiring relation W. Note that a workflow orchestration WO does not need to be static, e.g., modelling activities at runtime may lead to the creation of new services.
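As an illustration only (the names and types below are ours, not part of the formal definition), the tuple WO = (WFS, AS, W) of Definition 1 can be captured by a few simple data structures; the wiring relation is derived from impl exactly as in the definition:

```python
from dataclasses import dataclass, field
from typing import Dict, Set, Union

@dataclass(frozen=True)
class ApplicationService:
    name: str                      # black box: web service, worklist handler, legacy app

@dataclass
class WorkflowService:
    name: str
    lang: str                      # lang(s): e.g. "YAWL", "Declare", "Worklets"
    activities: Set[str] = field(default_factory=set)   # act(s)
    impl: Dict[str, Union["WorkflowService", ApplicationService]] = field(default_factory=dict)
    # logic(s), the process logic expressed in lang(s), is omitted here for brevity

def wiring(wfs):
    # W = {(cons, prov) | there is an a in act(cons) with impl(cons)(a) = prov}
    return {(cons.name, prov.name) for cons in wfs for prov in cons.impl.values()}
```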
The goal of FAAS is that different languages can be mixed in any way. This requires a standard interface between the service consumer and the service provider. There are different ways of implementing such an interface. (For the implementation part of this paper, we use the so-called "Interface B" of Yawl described later.) However, the minimal requirement can be easily described using the notion of a work-item. A work-item corresponds to an activity enabled for a particular case, i.e., an activity instance. A work-item has input data and output data. The input data is information passed on from the service consumer to the service provider, and the output data is information passed on from the provider to the consumer. Moreover, there are two basic interactions: (1) check-in work-item and (2) check-out work-item. When an activity is enabled for a particular case, a work-item is checked out, i.e., data and control are passed to the service provider. In the consumer workflow the corresponding activity instance is blocked until the work-item is checked in, i.e., data and control are passed back to the service consumer. An interface supporting check-in and check-out interactions provides the bare minimum. It is also possible to extend the interface with other interactions, e.g., "Interface B" of Yawl supports cancellation. However, for the purpose of this paper this is less relevant. The important thing is that all engines used in the workflow orchestration use the same interface. This interface can be simple since we do not consider the process logic in the interface. As a proof of concept, we demonstrate how three very different languages, Yawl [3], Worklets [9], and Declare [6], can be combined. First, we discuss the characteristics of the corresponding languages and why it is interesting to combine them. In Section 4 we briefly describe the implementation.
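A minimal rendering of this consumer/provider contract (our sketch; the actual Interface B of YAWL is richer and HTTP/XML-based) only needs the two interactions described above:

```python
from abc import ABC, abstractmethod
from typing import Dict

class WorkItemProvider(ABC):
    """What a service provider (sub-engine, worklist handler, application) must offer."""

    @abstractmethod
    def check_out(self, case_id: str, activity: str, input_data: Dict) -> str:
        """Data and control pass to the provider; returns a work-item id."""

    @abstractmethod
    def check_in(self, work_item_id: str) -> Dict:
        """Completes the work-item; returns the output data to the consumer."""

class ConsumerActivity:
    """An activity in the consumer workflow that subcontracts its work."""

    def __init__(self, provider: WorkItemProvider):
        self.provider = provider

    def execute(self, case_id: str, activity: str, input_data: Dict) -> Dict:
        wi = self.provider.check_out(case_id, activity, input_data)
        return self.provider.check_in(wi)   # the activity instance is blocked until check-in
```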
3.1 A Highly Expressive Language: YAWL
The field of workflow lacks commonly accepted formal and conceptual foundations. Many approaches to workflow specification exist, both in industry and in academia, including (proposed) standards, but until recently these have not had universal acceptance. In the late nineties the Workflow Patterns Initiative (www.workflowpatterns.com) distilled patterns from the workflow control-flow specification approaches of a number of commercial systems and research prototypes [4]. These patterns, which are language-independent, can be used to gain comparative insight, can assist with workflow specification, and can serve as a basis for language definition. Yawl (Yet Another Workflow Language) [3] provides comprehensive support for these control-flow patterns and can thus be considered a highly expressive language. This support was achieved by taking Petri nets as a starting point and by observing that Petri nets have the following shortcomings in terms of control-flow pattern support:
– It is hard to capture cancellation of (parts of) a workflow;
– Dealing with various forms of synchronisation of multiple concurrent instances of the same activity is difficult;
– There is no direct support for a synchronisation concept which captures the notion of "waiting only if you have to".

Fig. 2. Yawl control-flow concepts

The graphical manifestations of the various concepts for control-flow specification in Yawl are shown in Figure 2. Yawl extends Workflow nets (see e.g. [2]) with concepts for the OR-split and the OR-join, for cancellation regions, and for multiple instance tasks. In Yawl terminology, transitions are referred to as tasks and places as conditions. As a notational abbreviation, when tasks are in a sequence they can be connected directly (without adding a connecting place). The expressiveness of Yawl allows for models that are relatively compact, as no elaborate work-arounds for certain patterns are needed. Therefore the essence of a model is relatively clear, and this facilitates subsequent adaptation should that be required. Moreover, by providing comprehensive pattern support, Yawl provides flexibility by design and tries to prevent the need for change, deviation, or underspecification.
3.2 An Approach Based on Late Binding & Modelling: Worklets
Typically workflow management systems are unable to handle unexpected or developmental change occurring in the work practices they model, even though such deviations are a common occurrence for almost all processes. Worklets provide an approach for dynamic flexibility, evolution and exception handling in workflows through the support of flexible work practices, and based, not on proprietary frameworks, but on accepted ideas of how people actually work. A worklet is in effect a small, self-contained, complete workflow process, designed to perform (or substitute for) one specific task in a larger parent process. The Worklets approach provides each task of a process instance with the ability to be associated with an extensible repertoire of actions (worklets), one of which is contextually and dynamically bound to the task at runtime. The actual worklet selection process is achieved by the evaluation of a hierarchical set of rules, called the Ripple Down Rules (RDR) [11]. An RDR Knowledge Base is a collection of simple rules of the form “if condition, then conclusion” conceptually arranged in a binary tree structure (Figure 3). The tree contains nodes that consist of a condition (a Boolean expression) and a conclusion (a reference to a worklet). During the selection, nodes are evaluated by applying the current context of the case and the task instance. After evaluation, the corresponding branch is taken for the evaluation of the next node. The right
(exception) branch is taken when the condition holds; the left (else) branch is taken when the condition is violated. If the condition holds in a terminal node, then its conclusion is taken, i.e., the associated worklet is selected. If the condition of the terminal node is violated, then for terminal nodes on a true branch the conclusion of the parent node is taken, whereas for terminal nodes on a false branch the conclusion of the node with the last satisfied condition is taken. For new exceptions, the RDR can be extended and new worklets may be added to the repertoire of a task at any time (even during process execution) as different approaches to completing the task are developed. Most importantly, the method used to handle an exception is captured by the system, so a history of the event and the method used to handle it is recorded for future instantiations. In this way, the process model undergoes a dynamic, natural evolution, thus avoiding the problems normally associated with workflow change, such as migration and version control. The Worklets paradigm provides full support for flexibility by underspecification and allows the realisation of a particular part of the process to be postponed until runtime, dependent on the context of each process instance.
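The RDR traversal just described can be summarised in a few lines of code. The sketch below is our own illustration of the selection rule (not the Worklets service implementation): each node carries a condition over the case context and a conclusion (a worklet), and the traversal remembers the conclusion of the last node whose condition was satisfied.

```python
class RDRNode:
    def __init__(self, condition, conclusion, true_branch=None, false_branch=None):
        self.condition = condition        # predicate over the case/task context
        self.conclusion = conclusion      # reference to a worklet
        self.true_branch = true_branch    # "exception" branch
        self.false_branch = false_branch  # "else" branch

def select_worklet(root, context):
    """Evaluate a Ripple Down Rule tree and return the selected worklet."""
    node, last_satisfied = root, None
    while node is not None:
        if node.condition(context):
            last_satisfied = node          # remember the last satisfied rule
            node = node.true_branch
        else:
            node = node.false_branch
    return last_satisfied.conclusion if last_satisfied else None

# Hypothetical usage: pick an additional selection method for a vacancy.
rules = RDRNode(lambda c: True, "interview_only",
                true_branch=RDRNode(lambda c: c.get("duration_months", 0) > 12,
                                    "referee_report"))
print(select_worklet(rules, {"duration_months": 24}))   # -> "referee_report"
```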
3.3 An Approach Based on Constraints: Declare
In imperative modelling approaches, which are used by most process modelling languages, allowed behaviour is defined in terms of direct causal relations between activities in process models. Opposed to this, a declarative approach offers a wide variety of relations, called constraints, that restrict behaviour. Constraint-based languages are considered to be more flexible than traditional languages because of their semantics: everything that does not violate the constraints is allowed. Regarding flexibility, imperative and declarative approaches are opposites; to offer more flexibility, imperative approaches need more execution paths to be incorporated into the model, whereas declarative approaches need the number of constraints to be reduced. Hence, a declarative approach is more suitable for loosely-structured processes, since it requires less effort to include a variety of behaviours in the design. Declare has been developed as a framework for constraint-based languages and offers most features that traditional WFMSs have, e.g., model development and verification, automated model execution and decomposition of large processes; moreover, it offers support for most flexibility types.
Fig. 3. RDR tree
Fig. 4. Declare constraints
Activities and constraints on activities are the key elements of a constraint-based model. Constraints are expressed in temporal logic [18]; currently LTL is used, but other types of logic could be used as well. By using graphical representations of constraints (called templates), users do not have to be experts in LTL. The set of templates (called a constraint-based language) can easily be extended. DecSerFlow [6] is such a language, available in Declare, and can be used for the specification of web services. Figure 4 shows some of the constraints available in DecSerFlow. The existence constraint on A expresses that activity A has to be executed exactly once. The response constraint between B and C expresses that if B is executed, then C must eventually also be executed. The precedence constraint between D and E expresses that E can only be executed if D has been executed before; in other words, D precedes E. Finally, the not co-existence constraint between F and G expresses that if F has been executed, then G cannot be executed, and vice versa.

Declare supports flexibility by deviation by offering two types of constraints: mandatory and optional constraints. Optional constraints are constraints that may be violated; mandatory constraints are constraints that may not be violated. Declare forces its users to follow all mandatory constraints and allows users to deviate by violating optional constraints. Graphically, the difference between mandatory and optional constraints is that the former are depicted by solid lines and the latter by dashed lines. Figure 4 only shows mandatory constraints. In Declare it is possible to change the process model during execution, both at type and instance level. A constraint-based model can be changed by adding constraints or activities, or by removing them. For declarative models it is straightforward to transfer instances: instances for which the current trace satisfies the constraints of the new model are mapped onto the new model. Hence the dynamic change bug, as described in [12], can be circumvented.

In this section we showed three approaches that provide flexibility. Yawl tries to prevent the need for change, deviation, or underspecification by providing a highly expressive language based on the workflow patterns. Worklets allow for flexibility by underspecification by providing an extensible repertoire of actions at runtime. Declare uses a constraint-based approach that supports flexibility by design, flexibility by deviation, and flexibility by change. Each of the approaches has its strengths and weaknesses. Therefore, the idea presented in Figure 1 and Definition 1 is appealing, as it makes it possible to combine the strengths of the different approaches and wrap them as services.
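For illustration only, the constraints of Figure 4 admit the following commonly used LTL renderings; these are our own formulations of the informal descriptions above, and the exact templates shipped with Declare/DecSerFlow may differ in detail:

```latex
\begin{align*}
\textit{exactly\_1}(A)          &\;=\; \Diamond A \;\wedge\; \Box\,(A \rightarrow \bigcirc \Box \neg A) \\
\textit{response}(B, C)         &\;=\; \Box\,(B \rightarrow \Diamond C) \\
\textit{precedence}(D, E)       &\;=\; (\neg E \;\mathcal{U}\; D) \;\vee\; \Box \neg E \\
\textit{not\_coexistence}(F, G) &\;=\; \neg(\Diamond F \,\wedge\, \Diamond G)
\end{align*}
```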
4 Implementation
The idea of FAAS has been implemented in the context of Yawl. Since the initial set-up of Yawl was based on the service-oriented architecture, the existing interfaces could be used to also include Worklets and Declare services. Figure 5 shows the overall architecture of Yawl and the way Worklets and Declare have been included to allow for the arbitrary nesting of processes as
328
W.M.P. van der Aalst et al.
illustrated by Figure 1. The Worklets and Declare services have been implemented as YAWL Custom Services [1]. Using the web-services paradigm, all external entities (users, applications, devices, organisations) are abstracted as services in YAWL. Custom services communicate with the YAWL engine using a set of pre-defined interfaces, which provide methods for object and data passing via HTTP requests and responses. All data are passed as XML; objects are marshalled into XML on each side of an interface for passing across it, then reconstructed back to objects on the other side.
Fig. 5. High-level architecture of the YAWL environment (interfaces: A - Administration, B - Processes, E - Logging, O - Org Data, R - Resourcing, W - Work Queue, X - Exception; components shown include the YAWL Process Editor, YAWL Workflow Engine, Resource Service, Worklet Service and Declare Service)
As Figure 5 shows, Interface B plays a crucial role in this architecture. YAWL can subcontract work to Custom Services. Moreover, Interface B can also be used to start new process instances in YAWL. Therefore, the same interface can be used to provide a service and to use a service. Figure 5 shows that the Worklets and Declare services are connected through Interface B. Note that the Resource Service is also connected to the engine through Interface B. This service offers work to end users through so-called worklists. Again work is subcontracted, but in this case not to another process but to a user or software program. Note that a Yawl process may contain tasks subcontracted to the Declare and/or Worklets services. In turn, a Declare process may subcontract some of its tasks to Yawl. For each Declare task subcontracted to Yawl, a process is instantiated in the Yawl engine. This process may again contain tasks subcontracted to the Declare and/or Worklets services, etc. Note that it is not
possible to directly call the Declare service from the Worklets service and vice versa, i.e., Yawl acts as the "glue" connecting the services. However, by adding dummy Yawl processes, any nesting is possible, as shown in Figure 1. Yawl is used as the intermediate layer because Interface B offers additional functionality not mentioned before. For example, custom services may elect to be notified by the engine when certain events occur in the life-cycle of nominated process instantiations (i.e., when a work-item becomes enabled, when a work-item is cancelled, or when a case completes), to signal the creation and completion of process instances and work-items, or to be notified of certain events or changes in the status of existing work-items and cases. This enables additional functionality such as exception handling and unified logging. The complete YAWL environment, together with source code and accompanying documentation, can be freely downloaded from www.yawl-system.com. The current version of Yawl includes the Worklets service. The Declare service is an optional component that can be downloaded from declare.sf.net.
5 Example
In this section the feasibility of FAAS is illustrated by means of an example inspired by the available documentation for the process of filling a vacancy at QUT (Manual of Policies and Procedures (MOPP), Chapter B4 Human Resources - Recruitment, Selection and Appointment, http://www.mopp.qut.edu.au/B/, Queensland University of Technology). The process starts with the preparation of documents and the formation of a selection panel. This is necessary for the organisational area to formally acknowledge the vacancy and to start the advertisement campaign to attract suitable applicants and fill the vacancy. Incoming applications are collected and a shortlist of potential candidates is made. From the shortlisted candidates, a potential candidate is selected based on the outcomes of an interview and one additional selection method. This selection method must be identical for all applicants for the same vacancy. Shortlisted candidates are notified and invited for an interview. In addition, they are informed about the additional selection method. Finally, the application, the interview and the additional selection method outcomes are evaluated and one potential candidate is selected, to whom the position is offered. After the candidate accepts the offer, the employment procedure is started. In the remainder of this section we illustrate the FAAS ideas by modelling the relevant parts of this process with Yawl, Worklets, and Declare. At the highest level the process can be seen as a sequence of phases, such as the advertisement phase and the selection phase. There are two types of choices at this level: the choice to advertise internally, externally, or both, and the choice whether to go through the selection procedures with a selection of the shortlisted applicants, or to cancel the process in case there are no suitable shortlisted applicants. In addition, it is unknown in advance how many incoming
applications will have to be considered. All flexibility requirements at this level can be supported by Yawl, by means of multi-instance tasks and the OR- and XOR-join. Depending on the type and duration of a vacancy, there is a range of available additional selection methods: folio, referee report, psycho-metric test, in-basket test, presentation and pre-interview presentation. The selection and execution of a particular selection procedure for a particular applicant can be expressed with a Worklet-enabled task in the Yawl net (cf. Section 3.2). Ripple Down Rules (RDR [11]) define the selection of a suitable selection procedure for the vacancy at runtime (cf. Section 3.2). The advantage of this approach is that the selection rules can progressively evolve, e.g., for exceptions new rules can be added to the tree. Figure 6(b) depicts the RDR tree for the Worklet-enabled task selection procedure of Figure 6(a). Depending on the evaluation of the rules in a specific context, a Yawl net is selected, e.g., the referee process. The selection method by referee reports is a good example of a loosely-structured process, i.e., a process with just a few execution constraints and many possible execution paths. Prior to the interview, the HR department must advise the candidate about preparation regarding the selection method and where to find important information. Also, the candidate must be asked for his/her requirements regarding access. The referee method consists of requesting a report
Fig. 6. The application procedure: (a) The high-level process as a Yawl net with the Worklet-enabled task selection procedure and its RDR tree. (b) The RDR tree in more detail, with a link to the Referee report Yawl net. (c) The contents of the Referee report net task is a Declare process.
from a referee, e.g., a former supervisor who knows the candidate professionally. The referee report can be available before or after the interview and can be provided in writing or orally. In the case of an oral report, a written summary should be made afterwards. When a referee report is requested, the selection criteria for the position should be provided. The declarative approach offers a compact way to specify the referee report process. Figure 6(c) shows the process modelled in Declare using the mandatory constraints precedence, response and not co-existence and the optional existence constraint exactly 1 (cf. Section 3.3). The not co-existence constraint allows the specification that either a written report or an oral report is received from a referee, without specifying when this should be decided. The optional constraint exactly 1 expresses that ideally the involved tasks are executed exactly once, and it allows employees from the organisational area to deviate if necessary. The example illustrates that through the FAAS approach different languages for (flexible) workflow specification can be combined, yielding a powerful solution to deal with different flexibility requirements. The synchronisation of multiple instances and the multi-choice construct can be handled by Yawl. The context-dependent selection of (sub)processes at runtime can be modelled conveniently using Worklets. For loosely-structured (sub)processes, Declare provides a convenient solution. Moreover, it is not only possible to combine, but also to nest the different approaches, e.g., a Yawl model may contain a Declare process in which a Worklet-enabled task occurs.
6 Conclusion
In this paper, we presented the Flexibility as a Service (FAAS) approach. The approach combines ideas from service-oriented computing with the need to combine different forms of flexibility. Using a taxonomy of flexibility, we showed that different forms of flexibility are possible and argued that it is not realistic to assume a single language that suits all purposes. The flexibility paradigms are fundamentally different, and it is therefore interesting to see how they can be combined without creating a new overarching and complex language. As a proof of concept we showed that Yawl, Worklets, and Declare can be composed easily using the FAAS approach. This particular choice of languages was driven by the desire to cover a large part of the flexibility spectrum. However, it is possible to include other languages. For example, it would be interesting to also include ADEPT (strong in flexibility by change, i.e., changing processes on the fly and migrating instances) and FLOWer (strong in flexibility by deviation, using the case handling concept). As indicated in the introduction, a lot of research has been conducted on flexibility, resulting in many academic prototypes. However, few of these ideas have been incorporated into commercial systems. Using the FAAS approach, it may be easier for commercial systems to offer specific types of flexibility without changing the core workflow engine.
References

1. van der Aalst, W.M.P., Aldred, L., Dumas, M., ter Hofstede, A.H.M.: Design and implementation of the YAWL system. In: Persson, A., Stirna, J. (eds.) CAiSE 2004. LNCS, vol. 3084, pp. 142–159. Springer, Heidelberg (2004)
2. van der Aalst, W.M.P., van Hee, K.M.: Workflow Management: Models, Methods, and Systems. MIT Press, Cambridge (2004)
3. van der Aalst, W.M.P., ter Hofstede, A.H.M.: YAWL: Yet Another Workflow Language. Information Systems 30(4), 245–275 (2005)
4. van der Aalst, W.M.P., ter Hofstede, A.H.M., Kiepuszewski, B., Barros, A.P.: Workflow Patterns. Distributed and Parallel Databases 14(1), 5–51 (2003)
5. van der Aalst, W.M.P., Jablonski, S.: Dealing with Workflow Change: Identification of Issues and Solutions. International Journal of Computer Systems, Science, and Engineering 15(5), 267–276 (2000)
6. van der Aalst, W.M.P., Pesic, M.: DecSerFlow: Towards a truly declarative service flow language. In: Bravetti, M., Núñez, M., Zavattaro, G. (eds.) WS-FM 2006. LNCS, vol. 4184, pp. 1–23. Springer, Heidelberg (2006)
7. van der Aalst, W.M.P., Weske, M., Grünbauer, D.: Case Handling: A New Paradigm for Business Process Support. Data and Knowledge Engineering 53(2), 129–162 (2005)
8. Adams, M., ter Hofstede, A.H.M., van der Aalst, W.M.P., Edmond, D.: Dynamic, extensible and context-aware exception handling for workflows. In: Meersman, R., Tari, Z. (eds.) OTM 2007, Part I. LNCS, vol. 4803, pp. 95–112. Springer, Heidelberg (2007)
9. Adams, M., ter Hofstede, A.H.M., Edmond, D., van der Aalst, W.M.P.: Worklets: A service-oriented implementation of dynamic flexibility in workflows. In: Meersman, R., Tari, Z. (eds.) OTM 2006. LNCS, vol. 4275, pp. 291–308. Springer, Heidelberg (2006)
10. Casati, F., Ceri, S., Paraboschi, S., Pozzi, G.: Specification and Implementation of Exceptions in Workflow Management Systems. ACM Transactions on Database Systems 24(3), 405–451 (1999)
11. Compton, P., Jansen, B.: Knowledge in context: A strategy for expert system maintenance. In: Barter, C.J., Brooks, M.J. (eds.) Canadian AI 1988. LNCS, vol. 406, pp. 292–306. Springer, Heidelberg (1990)
12. Ellis, C.A., Keddara, K., Rozenberg, G.: Dynamic change within workflow systems. In: Comstock, N., Ellis, C., Kling, R., Mylopoulos, J., Kaplan, S. (eds.) ACM SIGOIS, Proceedings of the Conference on Organizational Computing Systems, Milpitas, California, pp. 10–21. ACM Press, New York (1995)
13. Georgakopoulos, D., Hornick, M., Sheth, A.: An Overview of Workflow Management: From Process Modeling to Workflow Automation Infrastructure. Distributed and Parallel Databases 3, 119–153 (1995)
14. Heinl, P., Horn, S., Jablonski, S., Neeb, J., Stein, K., Teschke, M.: A Comprehensive Approach to Flexibility in Workflow Management Systems. In: Georgakopoulos, G., Prinz, W., Wolf, A.L. (eds.) Work Activities Coordination and Collaboration (WACC 1999), pp. 79–88. ACM Press, San Francisco (1999)
15. Kloppmann, M., Koenig, D., Leymann, F., Pfau, G., Rickayzen, A., von Riegen, C., Schmidt, P., Trickovic, I.: WS-BPEL Extension for People BPEL4People. IBM Corporation (2005), http://www-128.ibm.com/developerworks/webservices/library/specification/ws-bpel4people/
16. Leymann, F., Roller, D.: Production Workflow: Concepts and Techniques. Prentice-Hall PTR, Upper Saddle River (1999)
17. Pesic, M., Schonenberg, M.H., Sidorova, N., van der Aalst, W.M.P.: Constraint-Based Workflow Models: Change Made Easy. In: Meersman, R., Tari, Z. (eds.) OTM 2007, Part I. LNCS, vol. 4803, pp. 77–94. Springer, Heidelberg (2007)
18. Pnueli, A.: The Temporal Logic of Programs. In: Proceedings of the 18th IEEE Annual Symposium on the Foundations of Computer Science, pp. 46–57. IEEE Computer Society Press, Providence (1977)
19. Reichert, M., Dadam, P.: ADEPTflex: Supporting Dynamic Changes of Workflow without Losing Control. Journal of Intelligent Information Systems 10(2), 93–129 (1998)
20. Rinderle, S., Reichert, M., Dadam, P.: Correctness Criteria For Dynamic Changes in Workflow Systems: A Survey. Data and Knowledge Engineering 50(1), 9–34 (2004)
21. Russell, N., van der Aalst, W.M.P., ter Hofstede, A.H.M.: Workflow exception patterns. In: Dubois, E., Pohl, K. (eds.) CAiSE 2006. LNCS, vol. 4001, pp. 288–302. Springer, Heidelberg (2006)
22. Sadiq, S., Sadiq, W., Orlowska, M.: Pockets of flexibility in workflow specification. In: Kunii, H.S., Jajodia, S., Sølvberg, A. (eds.) ER 2001. LNCS, vol. 2224, pp. 513–526. Springer, Heidelberg (2001)
23. Schonenberg, M.H., Mans, R.S., Russell, N.C., Mulyar, N.A., van der Aalst, W.M.P.: Towards a Taxonomy of Process Flexibility (Extended Version). BPM Center Report BPM-07-11, BPMcenter.org (2007)
Concept Shift Detection for Frequent Itemsets from Sliding Windows over Data Streams*

Jia-Ling Koh and Ching-Yi Lin

Department of Computer Science and Information Engineering, National Taiwan Normal University, Taipei, Taiwan
[email protected]
Abstract. In a mobile business collaboration environment, frequent itemset analysis can discover the noticeable associated events and data and thereby provide important information about user behaviors. Many algorithms have been proposed for mining frequent itemsets over data streams. However, in many practical situations where the data arrival rate is very high, continuously mining the data set within a sliding window is infeasible. For such cases, we propose an approach whereby the data stream is monitored continuously to detect any occurrence of a concept shift. In this context, a "concept shift" means that a significant number of frequent itemsets in the up-to-date sliding window are different from the previously discovered frequent itemsets. Our goal is to detect notable changes of frequent itemsets according to an estimated changing rate of frequent itemsets without having to mine the frequent itemsets at every time point. Consequently, to save computing costs, the discovery of the complete set of new frequent itemsets is triggered only when a significant change is observed. The experimental results show that the proposed method detects concept shifts of frequent itemsets both effectively and efficiently.

Keywords: Frequent Itemsets, Data Streams, Change Detection.
1 Introduction

In recent times, data stream mining has received increasing attention due to the growing number of streaming applications. Among these applications, the mobile environment is a major source of data streams. Unlike mining static databases, mining data streams presents many new challenges. Since the amount of data in a data stream has no theoretical bound, it is unrealistic to keep the entire stream. Accordingly, the traditional methods of mining static datasets using multiple scans are infeasible. Additionally, a great deal of processing power is required to keep up with the high data arrival rate.

Frequent itemset mining is well recognized as a fundamental problem in important data mining tasks such as finding sequential patterns [13], [14], clustering [15], and classification [10].
* This work was partially supported by the R.O.C. N.S.C. under Contract No. 97-2221-E-003-007 and 97-2631-S-003-003.
Since the problem was first identified by Agrawal et al. [1], many efficient algorithms for mining frequent itemsets in static databases have been proposed over the last decade [5], [6]. In a mobile business collaboration environment, frequent itemset analysis discovers the noticeable associated events or data and thereby provides important information about user behaviors. Many algorithms have also been proposed for mining frequent itemsets over data streams [3]. Lossy-counting is the most popular approach among landmark-model algorithms for mining frequent itemsets over data streams [11]. Given an error tolerance parameter ε, the Lossy-counting algorithm prunes patterns whose support is less than ε from the pool of monitored patterns so that the required memory usage is reduced. Consequently, the frequency of a pattern is estimated by compensating for the maximum number of times that the pattern could have occurred before being inserted into the monitored patterns. It has been proven that no false dismissal occurs with the Lossy-counting algorithm and that the error of the estimated frequency is guaranteed not to exceed the given error tolerance parameter.

In addition to the restriction on memory requirements introduced above, time sensitivity is another important concern when mining frequent itemsets over data streams. The knowledge embedded in a data stream is likely to change quickly over time. In order to provide the most recent trend of a data stream, many algorithms have been proposed for mining frequent itemsets by adopting the sliding window model. Under the assumption that exactly one transaction arrives during each time unit, [2] defines the current sliding window to consist of the most recent w transactions in a data stream, given a window size of w. Consequently, the recently frequent itemsets are defined to be the frequent itemsets mined from the current sliding window. In addition to counting the occurrences of a pattern in the new transaction, the count of a pattern in the oldest transaction has to be removed from the maintained data structure when the window slides. In [9], it was assumed that a block of zero or more transactions enters the data stream at each time unit. Accordingly, the frequent itemsets are discovered from the most recent w blocks of transactions. For each block of transactions, the frequent itemsets in the block are found, and all possible frequent itemsets in the sliding window are collected in a PFP (Potential Frequent-itemset Pool) table. For each newly inserted pattern, the maximum number of its previous appearances is estimated. Moreover, a discounting table is constructed to provide approximate counts of the expired data items.

Detecting changes in a data stream is an important area of research and has many applications. A method for the detection and estimation of changes in data distributions was proposed in [7]. Although many studies have provided efficient techniques to support the incremental computation of frequent itemsets over data streams, it is not trivial for users to discover changes in patterns from the obtained results when the number of frequent itemsets is large. A naïve solution to this problem can be implemented in two phases: (1) using a pattern mining algorithm to discover frequent itemsets; and (2) matching the new set of frequent itemsets with the old set.
As stated in [12], in many practical situations where the data arrival rate is very high, it is infeasible to continuously mine the data set within a sliding window. For such cases, we propose an approach whereby the data stream is monitored continuously to detect any occurrence of a concept shift. In this context, "concept shift" means that a
significant number of frequent itemsets in the up-to-date sliding window are different from the previously discovered frequent itemsets. Our goal is to detect notable changes in frequent itemsets according to an estimated changing rate of frequent itemsets, without having to perform frequent itemset mining at every time point. Consequently, to save computing costs, complete mining of the new set of frequent itemsets is triggered only when a significant change is observed.

In a sliding window model, when the window slides, the previously discovered frequent itemsets which are no longer applicable within the new window can be obtained by monitoring their change in frequency over windows. However, the difficulty lies in incrementally discovering the newly generated frequent itemsets without keeping their previous counts. To handle this challenge, this paper applies the power-law distribution of itemset supports to estimate the number of newly generated frequent itemsets. An algorithm, named the Frequent itemsets Change Detection method (FCDT), is proposed to monitor the changing ratios of frequent itemsets over a data stream updated per Transaction. In addition, the proposed method is extended to the FCDB algorithm for monitoring a data stream updated per Batch. The experimental results show that FCDT and FCDB detect concept shifts of frequent itemsets both effectively and efficiently. The execution time of FCDT and FCDB is significantly less than the time required to mine the complete set of frequent itemsets over a sliding window at each time point. Furthermore, information on the frequency changes of frequent itemsets is provided by the maintained data structures.

The remaining sections of this paper are organized as follows. The problem of concept-shift detection for frequent itemsets is defined in Section 2. The proposed data structures and the monitoring method of the FCDT algorithm are introduced in Sections 3 and 4, respectively. The performance study is reported in Section 5, which shows the effectiveness and efficiency of the proposed method. Finally, in Section 6, we conclude this paper and discuss directions for our future studies.
2 Problem Definition

Let I = {i1, i2, ..., im} denote the set of items in the specific application domain, where a transaction is composed of a set of items in I. A basic block is a set of transactions arriving within a fixed unit of time. A data stream, DS = [B1, B2, ...), is a continuous sequence of basic blocks, where each basic block Bi, associated with a time identifier i, consists of a varying number of transactions {T_1^i, T_2^i, ..., T_{i_k}^i} (i_k ≥ 0). A special case
occurs when there is exactly one transaction in each basic block, which is categorized as a data stream being updated per transaction [3]. The general case is called a data stream being updated per batch. Let DS(i,j) denote the set of transactions collected from the basic blocks of DS from time i to j. Under a predefined window size w, the sliding window at time t, denoted as SWt, corresponds to DS(t−w+1, t). The number of transactions in SWt is denoted as |SWt|. Suppose t' denotes the current time; the corresponding window SWt' is named the current sliding window. An itemset (or a pattern) is a set consisting of one or more items in I, that is, a non-empty subset of I. The number of elements in an itemset p is called the length of p. If the itemset p is a subset of a transaction T, we say that T contains p. The number of transactions in SWt which contain p is named the support count of p over DS at t,
denoted as SCt(p). Given a user-specified minimum support threshold between 0 and 1, denoted as min_support, an itemset p is called a frequent itemset at t over DS if SCt(p) ≥ ⌈|SWt| × min_support⌉. Otherwise, p is an infrequent itemset at t. In the following, ⌈|SWt| × min_support⌉ is called the minimum support count threshold and is denoted as min_SC.

Given two time identifiers t and t' (t' > t), let Ft and Ft' denote the sets of frequent itemsets at time t and t', respectively. Accordingly, (Ft' − Ft) is the set of newly generated frequent itemsets at t' with respect to time t, which is denoted as Ft+(t'). Additionally, Ft−(t') is defined as the set of itemsets which are frequent at t but become infrequent at t', that is, Ft−(t') = Ft − Ft'. The changing ratio of frequent itemsets at t' with respect to t, denoted as FChanget(t'), is defined by the following equation:

FChanget(t') = (|Ft+(t')| + |Ft−(t')|) / (|Ft| + |Ft+(t')|).    (1)
The value of FChanget(t') is between 0 and 1. If Ft' is identical to Ft, the changing ratio is 0. A value of 1 for the changing ratio occurs when Ft' and Ft are disjoint. Let t denote the time identifier when frequent itemsets are discovered completely from SWt. The goal of concept shift detection is to discover the smallest t', where t' > t, such that FChanget(t') is larger than a given threshold value δ. The main issue studied in this paper is to estimate the value of FChanget(t') without needing to perform complete mining of frequent itemsets at t'.

Example 1. Table 1 shows an example of a data stream being updated per batch. Suppose min_support is set to 0.5 and the window size w is 4. In this case, SW4 = {bcd, cd, abc, bcde, bc, ab, bce, e, ae, abe} and |SW4| = 10; SW6 = {bce, e, ae, abe, ab, abcd, abd, cde} and |SW6| = 8. Accordingly, F4 = {b, c, e, bc} and F6 = {a, b, e, ab}. Besides, F4+(6) = {a, ab} and F4−(6) = {c, bc}. The changing ratio of frequent itemsets at time 6 with respect to 4, denoted as FChange4(6), is (2+2)/(4+2) = 2/3.

Table 1. An example of a data stream being updated per batch

B1: bcd, cd
B2: abc, bcde, bc, ab
B3: bce, e, ae
B4: abe
B5: ab, abcd
B6: abd, cde
B7: b
...
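As a quick numerical check of equation (1), the following sketch recomputes FChange4(6) for Example 1 directly from the two sets of frequent itemsets; the set literals simply transcribe F4 and F6 from the example.

```python
# Recomputing the changing ratio of Example 1 using equation (1).
F4 = {"b", "c", "e", "bc"}      # frequent itemsets at time 4
F6 = {"a", "b", "e", "ab"}      # frequent itemsets at time 6

F_plus = F6 - F4                # newly generated frequent itemsets: {a, ab}
F_minus = F4 - F6               # itemsets that are no longer frequent: {c, bc}

change = (len(F_plus) + len(F_minus)) / (len(F4) + len(F_plus))
print(change)                   # (2 + 2) / (4 + 2) = 0.666...
```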
3 The Pattern Monitoring Tree

Given a time identifier t, for monitoring changes in frequent patterns starting at t, a tree structure called the pattern monitoring tree (PM-tree) is constructed to maintain
both the frequent itemsets discovered in SWt and candidates of newly generated frequent itemsets as time goes by. For a node Np of a pattern p in the PM-tree, the following four fields of information are stored: (1) Np.P stores the content of the pattern p. (2) Np.count keeps the value of SCt(p) − (min_SC − 1) if p is in Ft; otherwise, a cumulated count of p starting from t is recorded. (3) Np.changing_type denotes the type of status change of p: the value 0 means that pattern p was found in Ft and remains a frequent pattern; the value 1 means that p was a frequent pattern but becomes infrequent; the value 2 means that p was an infrequent pattern but becomes frequent. (4) Np.initial_status denotes the initial status of p at time t: if p is a frequent pattern at t, a value of 1 is stored; otherwise, a value of 0 is stored.

Let t denote the time identifier when frequent itemsets are discovered completely from SWt, such as when the sliding window becomes full initially. For each pattern p in Ft, a corresponding node Np is inserted into the PM-tree with Np.P = p, Np.count = SCt(p) − (min_SC − 1), Np.initial_status = 1, and Np.changing_type = 0. To speed up the process of pattern searching, the nodes in the PM-tree are organized as a prefix tree according to their corresponding patterns.

Let t' denote the current time identifier. Since only the frequent itemsets at time t are maintained in the PM-tree, when a pattern not in Ft appears at t' (t' > t), its support count within the sliding window SWt' is unknown. However, every itemset p satisfies the following inequality:

SCt'(p) ≤ min({SCt'(Ii) | Ii ∈ p}).    (2)
Therefore, an auxiliary array, ICount, is maintained to keep the support count of each single item in the current sliding window and to estimate the upper bound of the support count of a new pattern during the monitoring process.

Table 2. An example of a data stream being updated per transaction

B1: cd    B2: bcd   B3: ac    B4: cde   B5: bcd   B6: ab    B7: cde   B8: cd
B9: ab    B10: ab   B11: cd   B12: ae   B13: bc   B14: ab   B15: bce  B16: e

Fig. 1. The constructed PM-tree and ICount table for the example. The PM-tree contains the nodes c:3:0:1, d:2:0:1, and cd:2:0:1 under the root; the ICount table records a = 5, b = 5, c = 8, d = 7, e = 3.
Example 2. Table 2 shows an example of a data stream being updated per transaction. Suppose min_support is set to 0.5 and the window size w is 12. At time 12, the sliding window becomes full initially. The frequent itemsets discovered from SW12 are c, cd, and d. The content of the constructed PM-tree and ICount is shown in Fig. 1.
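A minimal sketch of the data structures of this section is given below, assuming a dictionary-based prefix tree; the class and helper names are choices of this sketch, while the node fields follow the description above and the example values reproduce the state of Fig. 1 at time 12.

```python
from collections import Counter

class PMNode:
    """A node of the pattern monitoring tree (simplified sketch)."""
    def __init__(self, pattern, count, changing_type, initial_status):
        self.P = pattern                      # the itemset, e.g. ('c', 'd')
        self.count = count                    # SC_t(p) - (min_SC - 1) for itemsets frequent at t
        self.changing_type = changing_type    # 0: stays frequent, 1: became infrequent, 2: became frequent
        self.initial_status = initial_status  # 1 if p was frequent at time t, else 0
        self.children = {}                    # prefix-tree children, keyed by the next item

def upper_bound(pattern, icount):
    """U_SC_t'(p): the upper bound of SC_t'(p) from single-item counts (inequality (2))."""
    return min(icount[item] for item in pattern)

# State of Example 2 / Fig. 1 at time 12 (w = 12, min_support = 0.5, min_SC = 6).
icount = Counter({'a': 5, 'b': 5, 'c': 8, 'd': 7, 'e': 3})
root = PMNode((), 0, 0, 0)
root.children['c'] = PMNode(('c',), 3, 0, 1)
root.children['c'].children['d'] = PMNode(('c', 'd'), 2, 0, 1)
root.children['d'] = PMNode(('d',), 2, 0, 1)

print(upper_bound(('b', 'c'), icount))        # 5, so bc cannot yet reach min_SC = 6
```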
4 The FCDT Algorithm

In this section, based on the constructed pattern monitoring tree, the FCDT algorithm is proposed to monitor the changing ratios of frequent itemsets over a data stream updated per transaction. The FCDB algorithm is extended from FCDT to solve the same problem over a data stream updated per batch.

4.1 The Pattern Monitoring Process

The pattern monitoring process starts from the next time point after the PM-tree is constructed. Initially, all the patterns in the PM-tree belong to Ft. During the pattern monitoring process, in addition to maintaining the information on the support counts and the changing status of patterns in the PM-tree, newly arriving patterns are inserted into the tree if they become frequent according to their estimated support. However, each pattern which does not belong to Ft is removed from the PM-tree when it becomes infrequent. For estimating the changing ratio of frequent itemsets, two types of counters are used to record the numbers of status-changing patterns. The variable removenum is an integer which counts the number of patterns in Ft−(t'). Moreover, an integer array, addnum, is used to maintain the number of newly generated candidates of frequent itemsets with respect to Ft according to their increased support counts.

There are two tasks in maintaining the changing information of patterns when the window slides: the first is to append the newly inputted transaction, and the second is to remove the expired transaction. These tasks are described in detail below.

1) Appending the newly inputted block Bt' = {Tt'}

For each item contained in Tt', the corresponding entry in ICount is increased by 1. For each itemset p contained in Tt', the PM-tree is searched to find the corresponding node Np. If the node Np exists in the PM-tree, the following processing is required to update the pattern changing counters according to the value of Np.initial_status:

<1> Np.initial_status = 1: That is, p belongs to Ft. Thus, the value of Np.count is increased by 1. If Np.changing_type is 1, it means that pattern p became infrequent at an earlier stage. In this case, the newly inputted transaction makes p return to being frequent if the value of Np.count becomes equal to 1. Consequently, removenum is reduced by 1 and Np.changing_type is set to 2. If Np.changing_type is equal to 0 or 2, p remains a frequent pattern; for these two cases, the pattern changing counters remain unchanged.

<2> Np.initial_status = 0: That is, p does not belong to Ft but becomes frequent. The value of Np.count will be updated from Np.count to (Np.count + 1). Therefore,
the counter addnum[Np.count+1] is increased and addnum[Np.count] is reduced. After completing the update on addnum, the value of Np.count is increased by 1.

If a corresponding node of p does not exist in the PM-tree, p is not in Ft. The difficulty here is that the true support count of p in the sliding window is unknown, since this pattern was infrequent at the previous time points. To address this problem, the value of min({SCt'(Ik) | Ik ∈ p}), denoted as U_SCt'(p), is obtained from the counting information stored in ICount to get the upper bound of SCt'(p). Then, the support count of p is estimated to be the smaller of min_SC and U_SCt'(p). If this value is equal to min_SC, it is possible that p becomes a frequent itemset. Accordingly, the node Np of p is inserted into the PM-tree, with Np.P = p, Np.count = 1, Np.initial_status = 0 and Np.changing_type = 2. Additionally, addnum[1] is increased by 1.

2) Removing the expired block Bt'−w = {Tt'−w}

For each item contained in Tt'−w, the corresponding entry in ICount is reduced by 1. For each itemset p contained in Tt'−w, if the corresponding node Np exists in the PM-tree, the following processing is required to update the pattern changing counters according to the value of Np.initial_status:

<1> Np.initial_status = 1: That is, p is in Ft. The value of Np.count is reduced by 1. If Np.count is larger than 0, pattern p remains a frequent pattern. Otherwise, the pattern becomes infrequent when Np.count is equal to 0. Accordingly, Np.changing_type is checked to verify the previous type of status change of p. If the value of Np.changing_type is either 0 or 2, it indicates that p becomes infrequent from frequent due to the removal of Tt'−w. Thus, the counter removenum is increased and Np.changing_type is set to 1.

<2> Np.initial_status = 0: Np.count will be updated from Np.count to (Np.count − 1). If (Np.count − 1) is larger than 0, the counter addnum[Np.count] is reduced by 1 and addnum[Np.count − 1] is increased by 1. After completing the update on addnum, the value of Np.count is decreased by 1. Otherwise, if (Np.count − 1) is equal to 0, the pattern p becomes infrequent again. In addition to reducing the value of addnum[Np.count], the corresponding node Np is removed from the PM-tree. According to the downward closure property of frequent itemsets, since p becomes infrequent, all the super-patterns of p must also be infrequent. This implies that, for each descendant node of Np, denoted as Np', which corresponds to a super-pattern of p, the support count was previously over-estimated. Therefore, for such a node Np', the counter addnum[Np'.count] is reduced by 1 and the node Np' is removed from the tree to reduce the estimation error.

If a corresponding node of p does not exist in the PM-tree, the pattern has not become a candidate of frequent itemsets. In this situation, no information about the pattern p is kept or updated.

4.2 Estimation Method of the Changing Ratio

In the pattern monitoring process described in the previous section, when a pattern p is added into the PM-tree, its true frequency in the sliding window is unknown. Due
to limited information, the support count of p is estimated to be the smaller of min_SC and U_SCt'(p). However, in most cases the estimated support count is higher than the actual one. Accordingly, the amount of newly generated frequent itemsets, |Ft+(t')|, is also over-estimated. To solve this problem, the power-law relationship that appears in the distribution of itemset supports is used to adjust the value of |Ft+(t')|.

In [4], the authors identified that a power-law relationship appears in the distribution of itemset supports. Let si denote a value of the support count and fi denote the number of itemsets whose support count is equal to si. The power-law relationship is described by the following equation:

log(fi) = θ log(si) + Ω.    (3)
In other words, the number of patterns with a specific support can be estimated after the characteristics of the power-law relationship are extracted. Among the frequent itemsets at t, which are known, the number of frequent itemsets with support count si is counted and denoted as fi. By sampling (fi, si) pairs of frequent itemsets, the parameters θ and Ω of the power-law relationship are extracted by solving the corresponding linear regression problem. Accordingly, when a value k (min_SC > k ≥ 1) is assigned to si, the number of infrequent itemsets with support count k is approximately estimated. An auxiliary array PLArray is used to store these values, where PLArray[k] keeps the estimated number of patterns whose support count is equal to k.

During the pattern monitoring process, the addnum array cumulates the number of possible newly generated frequent itemsets according to their appearance frequency after time t. In other words, each pattern which contributes one count to addnum[k] appears k times after t. If it actually becomes a frequent itemset, its support count at t must be no less than (min_SC − k). Let addnum_up[k] denote the actual number of new frequent itemsets with appearance frequency k after time t. Accordingly, the value of addnum_up[1] must be no more than both addnum[1] and PLArray[min_SC − 1]. Thus, addnum_up[1] is set to be the smaller of addnum[1] and PLArray[min_SC − 1]. In addition, a pattern with support count (min_SC − 1) at t will become frequent if its count increases by 2 after t. Therefore, the value of addnum_up[2] must be no more than addnum[2] and (PLArray[min_SC − 2] + (PLArray[min_SC − 1] − addnum_up[1])). In general, the value of addnum_up[k] is obtained according to the following equation:

addnum_up[k] = min(addnum[k], PLArray[min_SC − k] + Σ_{i=1}^{k−1} d_i),    (4)

where d_i = PLArray[min_SC − i] − addnum_up[i] if PLArray[min_SC − i] > addnum_up[i], and d_i = 0 otherwise.
Consequently, |Ft+(t')| is estimated using the following equation:

|Ft+(t')| = Σ_{k=1}^{min_SC−1} addnum_up[k].    (5)

On the other hand, |Ft−(t')| is obtained directly from the value of removenum.
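The estimation of equations (3)-(5) can be sketched as follows. The least-squares fit on log-log samples and the propagation of the surplus d_i between support levels follow the description above; the sample pairs passed to the fit are hypothetical, while the addnum and PLArray values reproduce the situation of Example 3.

```python
import math

def fit_power_law(samples):
    """Least-squares fit of log(f) = theta * log(s) + omega from (f_i, s_i) pairs (equation (3))."""
    xs = [math.log(s) for f, s in samples]
    ys = [math.log(f) for f, s in samples]
    n = len(samples)
    mx, my = sum(xs) / n, sum(ys) / n
    theta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return theta, my - theta * mx

def estimate_new_frequents(addnum, pl_array, min_sc):
    """Adjust addnum with PLArray as in equation (4) and sum up |F_t^+(t')| as in equation (5)."""
    addnum_up = [0] * min_sc                 # index k = 1 .. min_sc - 1
    surplus = 0                              # cumulated d_i terms
    for k in range(1, min_sc):
        addnum_up[k] = min(addnum[k], pl_array[min_sc - k] + surplus)
        surplus += max(pl_array[min_sc - k] - addnum_up[k], 0)
    return sum(addnum_up[1:])

theta, omega = fit_power_law([(40, 6), (12, 8), (5, 11)])   # hypothetical (f_i, s_i) samples

# Example 3 at time 14 (min_SC = 6): addnum[1] = 3, the rest 0; PLArray[1..5] = 2, 6, 2, 0, 2.
addnum = [0, 3, 0, 0, 0, 0]
pl_array = [0, 2, 6, 2, 0, 2]
print(estimate_new_frequents(addnum, pl_array, 6))          # 2, matching |F12+(14)| in the text
```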
Fig. 2. The monitoring process of the example: (a) the initial state at time 12, with PM-tree nodes c:3:0:1, d:2:0:1, and cd:2:0:1, ICount = (a: 5, b: 5, c: 8, d: 7, e: 3), and PLArray = (2, 6, 2, 0, 2) for support counts 1 to 5; (b) the state after appending the transaction in B13 and removing the transaction in B1, with new nodes b:1:2:0 and bc:1:2:0, removenum = 0, and addnum[1] = 2; (c) the state after appending the transaction in B14, with new nodes a:1:2:0 and ab:1:2:0, b updated to b:2:2:0, removenum = 0, addnum[1] = 3, and addnum[2] = 1; (d) the state after removing the expired transaction in B2, with nodes c:2:0:1, d:0:1:1, cd:0:1:1, b:1:2:0, a:1:2:0, and ab:1:2:0, removenum = 2, and addnum[1] = 3.
Finally, the value of FChanget(t') is estimated using equation (1) defined in Section 2 to decide whether a concept shift of frequent itemsets occurs. If FChanget(t') is
under the given threshold value δ, the monitoring process continues. Otherwise, the task of frequent pattern mining is triggered to discover the complete set of frequent itemsets in SWt'. After resetting the monitoring environment, which includes resetting t to t', reconstructing the PM-tree, and initializing the counters addnum and removenum, a new cycle of frequent pattern monitoring begins.

Example 3. Following Example 2, the initial PM-tree is constructed as shown in Fig. 2(a). The corresponding ICount and PLArray are also shown. After that, the pattern monitoring process begins.

At time 13, after inserting transaction {bc} and removing transaction {cd}, the PM-tree and ICount table are updated as shown in Fig. 2(b). Because of the insertion of transaction {bc} in B13, the itemsets b and bc, whose support counts are estimated to be (min_SC − 1) + 1, possibly become new frequent itemsets with respect to F12. Accordingly, the value in addnum[1] is set to 2. On the other hand, no previously frequent itemset becomes infrequent due to the removal of transaction {cd} in B1.

At time 14, after inserting the transaction {ab} in B14, the itemsets a and ab are estimated to possibly become new frequent itemsets with respect to F12, and their corresponding nodes are inserted into the PM-tree. Accordingly, addnum[1] is increased by 2. Besides, for the itemset b, its support count becomes (min_SC − 1) + 2 from (min_SC − 1) + 1; thus, addnum[1] is decreased by 1 and addnum[2] is increased by 1. The resultant PM-tree, ICount table, and addnum are shown in Fig. 2(c). Finally, the expired transaction {bcd} in B2 is removed. Therefore, the itemsets {c} and {cd} become infrequent from frequent with respect to F12, which results in removenum being updated to 2. The support count of the itemset b is updated to (min_SC − 1) + 1 from (min_SC − 1) + 2. Moreover, the itemset bc becomes an infrequent itemset and is removed from the PM-tree because its support count is reduced to (min_SC − 1). The values in addnum[1] and addnum[2] are updated accordingly. Fig. 2(d) shows the result after removing the transaction in B2.

According to equation (4), addnum_up[1] = min(addnum[1], PLArray[5]) = min(3, 2) = 2. Since addnum[i] = 0 for i = 2 to 5, |F12+(14)| is estimated to be 2. This shows that the over-estimated value of addnum[1] is adjusted according to the value in PLArray[5]. Consequently, FChange12(14) is estimated to be (2+2)/(3+2) = 0.8.

4.3 Extension for Processing the General Cases

In a data stream being updated per batch, because more than one transaction arrives in a basic block, it is possible that a pattern appears both in the newly inputted block Bt' and in the expired block Bt'−w. In such a case, the support count of the pattern is unchanged. To reduce the cost of modifications on the PM-tree, another algorithm, named FCDB, is proposed to maintain a base pattern tree and a pattern changing tree. The complete set of frequent itemsets discovered at time t is stored in the base pattern tree. All the inserted or expired patterns after time t are collected in the pattern changing tree. By combining this with the information in the base pattern tree, the amount of frequent pattern changes is estimated. Moreover, in a data stream which is updated per batch, the number of transactions in a sliding window is not fixed. Therefore, an array named TranArray is constructed to maintain the number of
transactions in each basic block within the current sliding window, which is used to compute the supports of itemsets. The estimation method of the changing ratio adopted in FCDB is similar to the one used in FCDT. Due to the page limit, the processing details of FCDB are given in [8].
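A condensed sketch of the per-transaction update of Section 4.1 is shown below. It uses a flat dictionary of monitored patterns instead of a prefix tree, enumerates subsets naively, and simplifies the handling of super-pattern removal; the function and field names are choices of this sketch rather than the paper's implementation.

```python
from itertools import combinations

def subsets(transaction):
    """All non-empty itemsets contained in a transaction (fine for short transactions)."""
    items = sorted(transaction)
    for r in range(1, len(items) + 1):
        yield from combinations(items, r)

def append_transaction(tx, nodes, icount, addnum, state, min_sc):
    for item in tx:
        icount[item] = icount.get(item, 0) + 1
    for p in subsets(tx):
        node = nodes.get(p)
        if node is not None:
            if node['initial'] == 1:                      # p was frequent at time t
                node['count'] += 1
                if node['type'] == 1 and node['count'] == 1:
                    state['removenum'] -= 1               # p returns to being frequent
                    node['type'] = 2
            else:                                         # candidate new frequent itemset
                addnum[node['count'] + 1] = addnum.get(node['count'] + 1, 0) + 1
                addnum[node['count']] -= 1
                node['count'] += 1
        elif min(icount[i] for i in p) >= min_sc:         # the upper bound allows p to be frequent
            nodes[p] = {'count': 1, 'initial': 0, 'type': 2}
            addnum[1] = addnum.get(1, 0) + 1

def remove_transaction(tx, nodes, icount, addnum, state):
    for item in tx:
        icount[item] -= 1
    for p in subsets(tx):
        node = nodes.get(p)
        if node is None:
            continue
        if node['initial'] == 1:
            node['count'] -= 1
            if node['count'] == 0 and node['type'] in (0, 2):
                state['removenum'] += 1                   # p becomes infrequent
                node['type'] = 1
        elif node['count'] > 1:
            addnum[node['count']] -= 1
            addnum[node['count'] - 1] = addnum.get(node['count'] - 1, 0) + 1
            node['count'] -= 1
        else:                                             # count would drop to 0: drop p and its candidate supersets
            addnum[node['count']] -= 1
            for q in [q for q in nodes if set(p) < set(q) and nodes[q]['initial'] == 0]:
                addnum[nodes[q]['count']] -= 1
                del nodes[q]
            del nodes[p]

# Replaying time point 13 of Example 3 on the state of Fig. 2(a) (min_SC = 6):
nodes = {('c',): {'count': 3, 'initial': 1, 'type': 0},
         ('d',): {'count': 2, 'initial': 1, 'type': 0},
         ('c', 'd'): {'count': 2, 'initial': 1, 'type': 0}}
icount = {'a': 5, 'b': 5, 'c': 8, 'd': 7, 'e': 3}
addnum, state = {}, {'removenum': 0}
append_transaction({'b', 'c'}, nodes, icount, addnum, state, min_sc=6)
remove_transaction({'c', 'd'}, nodes, icount, addnum, state)
print(state['removenum'], addnum.get(1, 0))               # 0 2, matching Fig. 2(b)
```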
5 Performance Evaluation

The proposed FCDT and FCDB algorithms were implemented in the Visual C++ programming language. The experiments were performed on a 3.4 GHz Intel Pentium IV machine with 2 gigabytes of memory running Windows XP Professional. The datasets were generated using the IBM data generator [1], where each dataset simulates a data stream. Each dataset is denoted in the form TxxIxxDxx, where T, I, and D specify the average transaction length, the average length of the maximal patterns, and the total number of transactions, respectively. In the following two subsections, the accuracy of the estimated changing ratios and the execution time are measured to show the effectiveness and efficiency of the proposed FCDT and FCDB algorithms.

5.1 Experimental Results of the FCDT Algorithm

The experiments presented here used the dataset T5I4D60K to simulate a data stream with one transaction occurring at every time point. Furthermore, the following experiments were performed with min_support = 0.03 and w = 1000. The FP-growth algorithm [6] is used to discover frequent itemsets from the sliding window at every time point. Consequently, with respect to the time when the sliding window first becomes full, the true changing rates of frequent patterns at the following time points are obtained to evaluate the accuracy of the ones estimated by the FCDT algorithm. The execution time of the FCDT was compared against the performance of the FP-growth algorithm over each sliding window.

As stated in Section 4.2, to estimate the amounts of infrequent itemsets with various supports at t, the parameters of the power-law relationship are extracted by solving the linear regression problem. In order to reduce the cost of computation, only part of the (fi, si) pairs of frequent itemsets at t are selected to solve for the parameters θ and Ω in equation (3). In the first experiment, we observe how the number of selected pairs influences the accuracy of the estimated changing ratio. Let the real changing ratio be denoted as "real". A curve denoted by "min_SC ×n" represents the estimated result of the FCDT obtained by selecting samples of (fi, si) with min_SC ×n ≥ fi ≥ min_SC. When n is varied over 2, 4, 6, 8, and 10, the resulting estimated changing ratios are shown in Fig. 3(a). As indicated in the figure, when n increases, the estimated result approaches the real value. However, the results for n = 6, 8, and 10 are almost identical. Accordingly, in the following experiments, n = 6 is used as the basis for selecting (fi, si) pairs to solve for the parameters of the power-law distribution of itemset supports.

According to the results of the previous experiment, it is observed that the estimated distribution of itemset supports is over-estimated under the solved parameters. Therefore, given the selected values of fi, the standard deviation between the estimated values and the real values of si is computed to compensate for the estimation error in the estimated values of si. The obtained experimental result is shown in Fig. 3(b), where the curve is labeled "min_SC ×6-SD".
Fig. 3. The experimental results of the FCDT: (a) the changing ratios over time points estimated with samples selected by min_SC ×n for n = 2, 4, 6, 8, 10, compared with the real ratio; (b) the changing ratios estimated with min_SC ×6 and min_SC ×6-SD compared with the real ratio; (c) the changing ratio estimated by the FCDT compared with the real ratio; (d) the cumulated execution time (in seconds) of the FCDT compared with that of FP-Growth.
For simulating the occurrence of changes in frequent itemsets, two data sets, TD1 and TD2, consisting of 1000 transactions, are generated individually. Let the initial sliding window contain the data of TD1. After frequent itemsets in the initial window are discovered, the pattern monitoring process begins. At the beginning, from time point 1 to 1000, the data in TD1 is inputted. From time point 1001 to 2000, 90% of the data comes from TD1 and 10% of the data comes from TD2. From time point 2001 to 3000, half of the data comes from TD1 and the other half comes from TD2. It is indicated in Fig. 3(c) that the trend of estimated changing ratios is consistent with the trend of their real values. The average error of the estimated values for changing ratios is 0.025.
Fig. 4. The experimental results of the FCDB: (a) the changing ratios per block estimated with min_SC ×6 and min_SC ×6-SD compared with the real ratio; (b) the changing ratio per block estimated by the FCDB compared with the real ratio; (c) the execution time (in seconds) of the FCDB compared with that of FP-Growth per block.
To evaluate the efficiency of the FCDT, the cumulated execution times of the FCDT and FP-Growth at every 100 time points are measured, as shown in Fig. 3(d). The time required by the FCDT is almost 1/100 of the cost required by FP-Growth to discover the complete set of frequent itemsets.

5.2 Experimental Results of the FCDB Algorithm

In this set of experiments, the dataset T5I4D60K is used to simulate a data stream with each basic block consisting of 1000 transactions. Furthermore, when running the FCDB algorithm, the parameters min_support and w are set to 0.02 and 20, respectively. By comparing with the real changing ratio obtained from the mining results of FP-growth at each time point, the accuracy of the value estimated by the FCDB algorithm is observed. The execution times of the FCDB and FP-growth are also compared.

Fig. 4(a) shows the estimated changing ratios of the FCDB, where min_SC ×6 is adopted to select samples for solving the parameters θ and Ω in equation (3). The curve labeled "min_SC ×6-SD" indicates that the estimated changing ratio approaches its real value after using the standard deviation of the errors to compensate for the estimation error in the estimated values of si.

Let the threshold of the changing ratio δ equal 0.2. The FCDB triggers FP-growth to discover a new set of frequent itemsets when the changing ratio is larger than 0.2; in other words, when a concept shift of frequent patterns is indicated. As shown in Fig. 4(b), a concept shift occurs at the 27th block. Accordingly, after the set of frequent itemsets is updated, a new cycle of detection begins. It is also shown in Fig. 4(b) that the trend of the estimated changing ratios is consistent with the trend of their real values. The average error of the estimated ratios is 0.024. When no concept shift occurs, only the costs of maintaining the monitoring structures and computing the estimated changing ratio are required. Therefore, the execution time of the FCDB is between 0.5 and 1.5 seconds in most cases, which is about one half of that required to perform FP-Growth at each time point.
6 Conclusion

This paper proposed an approach whereby a data stream is monitored continuously to detect any occurrence of a concept shift. To avoid mining frequent itemsets at every time point, the proposed data structures are maintained incrementally to monitor the changes of frequent itemsets. Additionally, the property of the power-law distribution of itemset supports is applied to estimate the approximate amount of newly generated frequent itemsets. The mining algorithm is required to discover a complete set of new frequent itemsets only when a significant change is observed. Two algorithms, the FCDT and the FCDB, are proposed to monitor the changing ratios of frequent itemsets over a data stream based on a sliding window updated per transaction and per batch, respectively. The experimental results show that the FCDT and the FCDB detect concept shifts of frequent itemsets both effectively and efficiently. The execution time of the FCDT and the FCDB is significantly less
than the time required to mine frequent itemsets over a sliding window at each time point. Providing a more concise data structure for monitoring the changes of frequent itemsets is a good direction for our future studies.
References

1. Agrawal, R., Srikant, R.: Fast Algorithms for Mining Association Rules in Large Databases. In: VLDB (1994)
2. Chang, J.H., Lee, W.S.: A Sliding Window Method for Finding Recently Frequent Itemsets over Online Data Streams. Journal of Information Science and Engineering 20(4) (2004)
3. Cheng, J., Ke, Y., Ng, W.: A Survey on Algorithms for Mining Frequent Itemsets over Data Streams. Knowledge and Information Systems 16(1), 1–27 (2008)
4. Chuang, K.-T., Huang, J.-L., Chen, M.-S.: On exploring the power-law relationship in the itemset support distribution. In: Ioannidis, Y., Scholl, M.H., Schmidt, J.W., Matthes, F., Hatzopoulos, M., Böhm, K., Kemper, A., Grust, T., Böhm, C. (eds.) EDBT 2006. LNCS, vol. 3896, pp. 682–699. Springer, Heidelberg (2006)
5. Goethals, B., Zaki, M.: FIMI 2003, Frequent Itemset Mining Implementations. In: Proc. of the ICDM 2003 Workshop on Frequent Itemset Mining Implementations (2003)
6. Han, J., Pei, J., Yin, Y.: Mining Frequent Patterns without Candidate Generation. In: SIGMOD (2000)
7. Kifer, D., Ben-David, S., Gehrke, J.: Detecting Change in Data Streams. In: Proceedings of the 30th International Conference on Very Large Data Bases (2004)
8. Lin, C.-Y., Koh, J.-L.: Frequent Patterns Change Detection over Data Streams. Technical Report, Department of Information and Computer Education, NTNU (2007)
9. Lin, C.H., Chiu, D.Y., Wu, Y.H., Chen, A.L.P.: Mining Frequent Itemsets from Data Streams with a Time-Sensitive Sliding Window. In: Proc. of SIAM Intl. Conference on Data Mining (2005)
10. Liu, B., Hsu, W., Ma, Y.: Integrating Classification and Association Rule Mining. In: KDD (1998)
11. Manku, G.S., Motwani, R.: Approximate Frequency Counts over Data Streams. In: Proc. of the 28th International Conference on Very Large Data Bases (2002)
12. Mozafari, B., Thakkar, H., Zaniolo, C.: Verifying and Mining Frequent Patterns from Large Windows over Data Streams. In: Proc. of ICDM (2008)
13. Pei, J., Han, J., Mortazavi-Asl, B., Wang, J., Pinto, H., Chen, Q., Dayal, U., Hsu, M.: Mining Sequential Patterns by Pattern-Growth: The PrefixSpan Approach. IEEE TKDE 16(11), 1424–1440 (2004)
14. Srikant, R., Agrawal, R.: Mining Sequential Patterns: Generalizations and Performance Improvements. In: EDBT (1996)
15. Wang, H., Yang, J., Wang, W., Yu, P.S.: Clustering by Pattern Similarity in Large Datasets. In: Proc. of SIGMOD (2002)
A Framework for Mining Stochastic Model of Business Process in Mobile Environments

Haiyang Hu1,2, Bo Xie1, JiDong Ge2,3, Yi Zhuang1, and Hua Hu1,4

1 College of Computer and Information Engineering, Zhejiang Gongshang University, Hangzhou 310018
[email protected]
2 State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China, 210089
3 School of Software Technology, Nanjing University, Nanjing, China, 210089
4 Hangzhou Dianzi University, Zhejiang, China, 310018
Abstract. Using portable terminals to do mobile business management is a trend in modern business process systems. Because mobile wireless environments are dynamic and open, the logs of a business process system are often incomplete and noisy. In this situation, to model and analyze the behavior and performance of a business process system efficiently, this paper proposes a framework that supports mining and generating a formal stochastic model from the incomplete and noisy logs of the business system. Its architecture is presented, and the maximum likelihood estimation technique for tasks with missing labels and the association analysis of task relations are discussed in detail.
1 Introduction

Today, equipped with different types of battery-powered mobile terminals (MT), such as PDAs (Personal Digital Assistants), mobile phones, and hand-held computers, people can access the Internet wirelessly from anywhere at any time. In this environment, business process modeling and analysis are important to support mobile business management. As designing a business process model is a complicated and time-consuming task, process mining technology can help create the workflow process model from the execution logs of business management systems by means of a mining algorithm.

In doing process mining, current approaches often assume that perfect information can be obtained from the logs: (i) the log must be complete, i.e., the log should contain all the behavior of the system; (ii) there is no noise in the log, i.e., everything registered in the log is correct. However, in mobile wireless environments, logs are rarely complete or noise free. Therefore, process mining tools should have the capability to handle this situation.

A key problem of process mining is to design an efficient mining algorithm [2] [5], which can generate a business process model from a set of business process logs.
However, existing mining algorithms do not address how to mine a corresponding formal stochastic model from the execution log. As a result, they lose the capability to analyze the performance of the business management system. In this paper, different from current works, we propose an enhanced framework for mining a formal stochastic model, which is named FMAM (Framework for Mining stochastic Model). In mobile environments, the log is usually incomplete and often contains noise, such as missing task labels; our FMAM takes this into account and has the capability to deal with this situation.

The rest of this paper is organized as follows. Section 2 introduces the related work on process mining for business systems. Section 3 presents the architecture of FMAM. Section 4 presents the details of the methods used in the framework. Section 5 concludes the paper.
2 Related Works

There are several different approaches to process mining [1-7][9-12]. Cook and Wolf described three approaches to process discovery: neural networks, a purely algorithmic approach, and a Markov approach. In [9], Cook and Wolf provided a measurement to quantify discrepancies between a process model and the actual behavior as registered using event-based data. Herbst and Karagiannis [11] use stochastic task graphs as an intermediate representation and generate a workflow model described in the ADONIS modeling language to address the issue of process mining in the context of workflow management. In recent work, some mining algorithms [2], [3], [4] were provided based on workflow nets [1].

In general, the goal of process mining is to find a workflow model on the basis of a workflow log. Suppose there is an original workflow net WF1 which generates a set of complete logs based on enough executions; the set of workflow logs generated by WF1 is denoted by W1. Based on the workflow logs W1 and using a mining algorithm (as a miner function), one can construct a WF-net WF2 = Miner(W1). Not all kinds of workflow nets can be rediscovered by a mining algorithm. Aalst explored a subclass of WF-nets called Structured Workflow Nets, which can be rediscovered by the α algorithm. Among the existing algorithms, the α algorithm can mine structured workflow nets without short loops [13] [14], and α+ and β were extended to mine structured workflow nets with short loops. The α algorithm presented by Aalst focuses on structured workflow nets. In [15], a heuristic approach using rather simple metrics is used to construct so-called "dependency/frequency tables" and "dependency/frequency graphs". This is used as input for the α algorithm; as a result, it is possible to tackle the problem of noise. In the α algorithm, the transition set is derived from the alphabet of activities; on the other hand, it can be viewed as a place vector set. Therefore, the essence of the mining algorithm is to calculate the place vector set from the workflow log. For the α algorithm, there is a phenomenon that the workflow log of the resulting workflow net is not equal to the input workflow log.

However, to the best of our knowledge, there is no work on mining the stochastic model of a business process system from a set of incomplete, noisy logs. As this class
of formal models is very important for evaluating the performance of the process system, this paper presents a framework for mining the stochastic model while dealing with logs containing incomplete and noisy data.
3 The Mining Framework

As shown in Figure 1, the objectives of our FMAM are to provide accurate generation of the formal stochastic model based on the log and performance analysis for the whole process. Therefore, the design of our platform focuses on mining the causal relations between the tasks and generating the formal model while omitting the negative influence brought by a noisy log. We briefly introduce the main modules as follows.
Fig. 1. The mining framework. The log is first processed by MLE analysis for the log containing missing labels and by association analysis for task relations; these feed the establishment of the relation matrix for tasks and the statistics analysis for time and resources related with tasks, followed by the generation of the formal stochastic model, the fitness test of the formal model and log, and finally the performance analysis based on the model.
MLE (Maximum Likelihood Estimation) analysis on the log containing missing labels. For each missing label, we fix the data by adding a set of stochastic labels. These added labels come from the set of all label names contained in the log. We relate an additional weight w_i to each added label d_i, and the tuple <d_i, w_i> is called a fractional sample. By taking the maximum likelihood of each tuple into account, the missing label can be found.

Association analysis on task relations. To generate the formal model of a business process, the relations among the tasks must first be defined. We adopt an association analysis approach based on the cosine measurement. This cosine-based approach can avoid the negative influence of an incomplete log that records only partial behavior of the business process.

Establishment of the relation matrix for the business process. We present a set of relations Ω = {ΩCA, ΩCH, ΩPA, ΩOT}, where ΩCA, ΩCH, ΩPA and ΩOT denote the causal, choice, parallel, and other relations, respectively. These relations are mutually exclusive and complementary; they partition the relations among the tasks.

Statistics analysis on the properties related to each task. We adopt a statistical approach to analyze the properties related to the tasks. The properties include the execution time, the resources used, and the organizers of the tasks.
Generation of a formal stochastic model. With the objective of analyzing the performance of the business process, we translate the relation matrix and the properties of each task into a stochastic Petri net (SPN), and use the rules of SPN to measure the performance of the business process.

Fitness test of the log and its SPN model. We also test the extent to which the log traces can be associated with execution paths specified by the SPN model, which is called fitness. By replaying the log in the SPN model, we count the number of tasks that can be executed successfully and the tasks that fail to execute, and the fitness can then be calculated.
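As a rough illustration of the fitness test by log replay described above, the sketch below abstracts the SPN model as a set of allowed direct-follows pairs and counts the tasks that can and cannot be replayed; this abstraction, the toy log, and the names used are assumptions of the sketch, not the actual SPN replay of the framework.

```python
# A simplified sketch of the fitness test by log replay: the model is abstracted
# as a set of allowed direct-follows pairs plus a set of start tasks (illustrative only).

def replay_fitness(log, allowed_follows, start_tasks):
    executed, failed = 0, 0
    for trace in log:
        previous = None
        for task in trace:
            ok = (task in start_tasks) if previous is None else ((previous, task) in allowed_follows)
            if ok:
                executed += 1
            else:
                failed += 1
            previous = task
    return executed / (executed + failed) if executed + failed else 0.0

model = {("A", "B"), ("B", "C"), ("B", "D")}
log = [["A", "B", "C"], ["A", "B", "D"], ["A", "C"]]   # the last trace deviates
print(replay_fitness(log, model, start_tasks={"A"}))    # 7/8 = 0.875
```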
4 Mining Model from Incomplete and Noisy Log

In this section, we present the details of the MLE analysis and the association analysis, which are important for mining a formal stochastic model from an incomplete log with noise. Suppose M_L denotes a missing label occurring in a process instance D contained in the log L. We take into account all the values {m_L} that M_L may take. For each m_L added into D, it is related by a weight w_mL. Then, D after fixing the set {m_L} is denoted as (D, M_L = m_L)[w_mL], where w_mL = P(M_L = m_L | D, θ_i).
Next, the value of w_mL is used to update θ_{i+1}:

θ_{i+1} = s^t_{ijk} / Σ_{k=1}^{r_i} s^t_{ijk},   if Σ_{k=1}^{r_i} s^t_{ijk} > 0;
θ_{i+1} = 1 / r_i,   otherwise.
The original value of θ_i, θ_0, is determined by the frequency with which the task occurs in the log, and s^t_{ijk} is the sum of the weights of all the samples that satisfy the two conditions M_i = k and π(M_i) = j. As the value of θ_i converges, the name of the missing label is determined.

In mobile wireless environments, the log is often incomplete and contains noise. To mine the relations among the tasks in the log, a cosine-based approach is adopted:

cosine(Ω(t1, t2)) = P(t1 Ω t2) / √(P(t1) × P(t2)),

where P(t1) denotes the probability that task t1 occurs in the log, and P(t1 Ω t2) denotes the probability that the relation Ω holds between t1 and t2. A set of relations Ω = {ΩCA, ΩCH, ΩPA, ΩOT} is presented to specify the task relations mutually exclusively and complementarily, where ΩCA, ΩCH, ΩPA and ΩOT denote the causal, choice, parallel, and other relations, respectively.
The relation t1ΩCAt2 is determined by the formula: Ω CA = {(t1 , t 2 ) | P (t1 ) ≥ δ ∧ P (t 2 ) ≥ δ ∧ cos ine( P (t1 → t 2 )) ≈ 1} ;
For a set of tasks { ti } holding
∩ π (ti ) = c ∧∑ ΩCA (c, ti ) ≈ 1 , and for any
t1 , t 2 ∈ {ti } , it
i
i
holds that P(t1 ∈ D ∧ t2 ∈ D) ≈ 0 , then t1ΩCH t2 . To omit the influence of noise, it needs holding that P(c → t1 ) ≥ δ ∧ P(c → t 2 ) ≥ δ , where δ ∈ [0,1) is a probability threshold. For a set of tasks { ti } holding
∩ π (ti ) = c ∧∑ ΩCA (c, ti ) ≈ 1 , i
and for any t1 ,
i
t 2 ∈ {ti } , it holds that P (t1 ∈ D ∧ t 2 ∈ D) ≈ 1 , then t1Ω PAt 2 .
Finally, Ω_OT is determined by Ω \ (Ω_CA ∪ Ω_CH ∪ Ω_PA). Based on the MLE analysis and the association analysis, we can establish a relation matrix for the tasks and generate the formal stochastic model as an SPN based on this matrix.
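A small sketch of how the four relation classes might be assigned from pairwise statistics estimated on a log; the thresholds and the particular probability tests are illustrative choices rather than the paper's algorithm:

def classify_relations(traces, tasks, delta=0.1, high=0.9, low=0.1):
    """Partition task pairs into causal (CA), choice (CH), parallel (PA)
    and other (OT) relations, following the mutually exclusive scheme
    sketched in the text. Threshold values are illustrative."""
    n = len(traces)
    def p(t):            # P(task occurs in a trace)
        return sum(t in tr for tr in traces) / n
    def p_both(a, b):    # P(both tasks occur in the same trace)
        return sum((a in tr and b in tr) for tr in traces) / n
    def p_follows(a, b): # P(a is directly followed by b somewhere in a trace)
        hits = sum(any(x == a and y == b for x, y in zip(tr, tr[1:]))
                   for tr in traces)
        return hits / n

    relations = {}
    for a in tasks:
        for b in tasks:
            if a == b:
                continue
            if p(a) >= delta and p(b) >= delta and p_follows(a, b) >= high:
                relations[(a, b)] = "CA"
            elif p_both(a, b) <= low:
                relations[(a, b)] = "CH"
            elif p_both(a, b) >= high:
                relations[(a, b)] = "PA"
            else:
                relations[(a, b)] = "OT"
    return relations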
5 Conclusion

In mobile wireless environments, equipped with new technologies such as Bluetooth, WLAN, and mobile computing devices, organizational business processes can be recorded in detail. Because of the dynamics of mobile wireless environments, the logs of business processes in such environments are often incomplete and noisy, which hinders the correct mining of a formal model. In this paper, we presented a framework for mining a stochastic model of a business process. By MLE analysis of the tasks with missing labels and association analysis of the relations between tasks, the framework supports mining an accurate formal model from an incomplete, noisy log. In the future, we will develop the corresponding algorithms and mining software tools. We will also apply our mining tools in applications involving patients and medical equipment.
Acknowledgement. This work is partially supported by the National Natural Science Foundation of China under Grant No. 60873022, the Natural Science Foundation of Zhejiang Province of China under Grant No. Y1080148, the Key Program of Science and Technology of Zhejiang Province under Grant Nos. 2008C13082 and 2008C23034, and the Open Fund provided by the State Key Laboratory for Novel Software Technology of Nanjing University.
Workshop Organizers' Message

Wei Wang1 and Baihua Zheng2

1 University of New South Wales, Australia
2 Singapore Management University, Singapore
The DASFAA PhD workshop will be held in Brisbane, Australia on 22 April 2009, in conjunction with the DASFAA 2009 conference. The workshop is the first in its series and aims to bring together PhD students working on topics related to the DASFAA conference series. It will offer PhD students the opportunity to present, discuss, and receive feedback on their research in a constructive and international atmosphere. The workshop has attracted six submissions. The submissions are highly diversified, coming from Australia, Canada, Japan, and Mexico. All submissions were peer reviewed by three program committee members. The program committee selected three (short) papers for inclusion in the proceedings. The accepted papers cover important research topics and novel applications in databases, data mining, and web services. The workshop would not be successful without the help of many organizations and individuals. First, we would like to thank the program committee members for evaluating the assigned papers in a timely and professional manner. Next, the tremendous efforts put forward by members of the organization committee of DASFAA 2009 in accommodating and supporting the workshops are appreciated. Of course, without the support of the authors and their submissions, the workshop would not have been possible.
Encryption over Semi-trusted Database

Hasan Kadhem1, Toshiyuki Amagasa1,2, and Hiroyuki Kitagawa1,2

1 Graduate School of Systems and Information Engineering
2 Center for Computational Sciences
University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan
[email protected], {amagasa,kitagawa}@cs.tsukuba.ac.jp

Abstract. Encryption is a well-established technology for protecting sensitive data, but developing a database encryption strategy must take many factors into consideration. In the case of semi-trusted databases, where the database contents are shared among many parties, neither server-based encryption (the server encrypts all data) nor client-based encryption (the client encrypts all data) is sufficient. In this paper we propose Mixed Cryptography Database (MCDB), a novel framework to encrypt semi-trusted databases over untrusted networks in a mixed form using many keys owned by different parties.

Keywords: Database cryptography, server-based encryption, client-based encryption, semi-trusted database, mixed cryptography.
1 Introduction
Most database security architectures rely on a traditional security model in which access to the database management system (DBMS) is strictly controlled while the data within the database is stored in plaintext. However, access control, while important and necessary, is often insufficient. The data is not only unprotected from attackers who gain access to the raw database files by bypassing the database access control mechanism; it is also unprotected from authorized personnel, thereby creating threats to privacy and security. Referring to Davida et al. [4], encryption is a well-studied technique to protect sensitive data, so that the data remains protected even when a database is successfully attacked or stolen. But developing a database encryption strategy must take many factors into consideration. Who will encrypt the data? Where will the data encryption be done? How is the data transferred? The answers to these questions affect database security. We can see that there are three approaches to database servers where encryption takes place: first, the trusted database server, where the creator, or owner, of the data operates a database server which processes queries and signs the results; second, the untrusted server, where the owner's database is stored at the service provider [7] (Database as a Service). The final model, which is the subject of our research, is what we call the semi-trusted server, where the database is shared among many parties. Here, part of the data is stored as trusted (server data) while other parts are considered untrusted (client data).
Fig. 1. Semi-trusted database model overview (clients, a trusted third party, and servers 1..n sharing the DB)
1.1 Semi-trusted Database
We view a semi-trusted database system as shown in Figure 1. This system consists of three sides: the client, a Trusted Third Party (TTP), and the server. The database on the server side contains both the client data and the server data. Specifically, server data and client data are stored as different attributes in the same relation. In addition, the client may have data on many servers. In this case, we need an intermediate party to manage the data shared between clients and servers, such as data used to join databases on different servers. The intermediate party could be a trusted third party or a public organization. This system could be used in many applications over the Web, such as e-government, e-commerce, and e-banking, where databases hold critical and sensitive information transferred over untrusted networks like the Internet. In e-government services, for example, the client might make query requests to many servers in different ministries and organizations. Another example is retrieving patient records from many hospital and insurance company servers. Here, the problem is to ensure that confidentiality, privacy, and integrity are achieved for a semi-trusted database. Consequently, we must protect databases from unauthorized disclosures and unauthorized modifications that may result from inside and outside attacks. In our research, we study these security issues of databases shared between many parties from a cryptographic perspective.
2 Related Work
Much previous work reported in the literature deals with database cryptography on a trusted server [4,9,6,2,8]. These studies examined the encryption and decryption of fields within records using the Chinese Remainder Theorem (CRT) [4], symmetric encryption [6], or asymmetric encryption [2]. Other studies go further theoretically to show how to integrate modern cryptography techniques into a Relational Database Management System (RDBMS) [9,8]. Davida et al. [4] studied a cryptography system on the server side based on the Chinese Remainder Theorem (CRT). The system has sub-keys that allow the encryption and decryption of fields within records. Using symmetric encryption, Ge and Zdonik [6] propose a database encryption scheme called FCE for column stores in data warehouses with a trusted server. Recent years have seen extensive database cryptography research [7,1,10,11,5,3] conducted in the area of Database as a Service (DAS). The idea behind this research is to encrypt data so that it becomes accessible only to the client, because
the data belongs to the client and is stored on an untrusted server. Bouganim and Pucheral [1] propose the Chip-Secured Data Access solution (C-SDA), which enforces data confidentiality and controls personal privileges with a client-based security component acting as a mediator between the client and an encrypted database. Database encryption operations in previous database cryptography research have been performed using either server or client key(s) with different algorithms. Using server-based encryption or client-based encryption alone is not sufficient to encrypt semi-trusted databases where data is shared among many parties. Simply put, if the data is encrypted using server key(s) and the administrator has the authority to use these key(s), then the whole system becomes vulnerable. The Database Administrator (DBA) or any other employee who has access to the key(s) can access all data in plaintext. On the other hand, if each client encrypts his data using his own key, then many problems might appear, such as how organizations can use the data. In both cases, the most important issue is the loss of a key used in the cryptography process. A lost key means all the data is lost. This paper introduces a new cryptography database framework that can deal with multi-tiered environments and can be used over the Web. We propose Mixed Cryptography Database (MCDB), a novel framework to encrypt semi-trusted databases in a mixed form using many keys owned by different parties.
3 Database Encryption
The first step in MCDB is column-based data classification. In a semi-trusted database, the data are classified according to the data owner into client, server, and trusted third party data. Trusted third party data are used to join many databases on one or many servers, such as an ID number. These data also include data that may be used by servers without client permission. Such a classification is needed to draw a full map of the encryption process. Our proposed system adds encryption/decryption layers accumulatively while data moves from/to the database. The purpose of such a design is to implement encrypted storage satisfying database security. In addition, it makes the data usable for the server side, and makes it possible for the database to be joined to other databases by the coordinator (trusted third party). Following convention, E denotes the encryption function and D denotes the decryption function. The general approach to encryption is at the field level; that is, sensitive fields need to be encrypted to protect the information from inside or outside attack. Data is encrypted using a symmetric encryption algorithm. After the data classification process, each side encrypts the data it owns.

Definition 1. For each database schema S(R_1, ..., R_n), where R_1 to R_n are relations created on the database server, the name of each R_i is encrypted using the server secret key SE_S.

Definition 2. For each relation schema R(F_1, ..., F_l, F_{l+1}, ..., F_m, F_{m+1}, ..., F_n) in a relational database, where each field F_i (1 ≤ i < l) is classified as trusted third party data, each field F_j (l ≤ j < m) is classified as client data, and each field F_k (m ≤ k < n) is classified as server data, we store an encrypted relation R^E = (F_1^T, ..., F_l^T, F_{l+1}^C, ..., F_m^C, F_{m+1}^S, ..., F_n^S), where F_i^T (1 ≤ i < l) in the encrypted relation R^E stores the encrypted value of F_i in relation R using the trusted third party secret key SE_T, viz. F_i^T = E_{SE_T}(F_i); F_j^C (l ≤ j < m) stores the encrypted value of F_j using the client secret key SE_C, viz. F_j^C = E_{SE_C}(F_j); and F_k^S (m ≤ k < n) stores the encrypted value of F_k using the server secret key SE_S, viz. F_k^S = E_{SE_S}(F_k). In a real database, the fields may appear in a different order, but we use this order to simplify the notation. Field names are encrypted as follows: the names of F_i and F_j are encrypted using the trusted third party secret key SE_T; the name of F_k is encrypted using the server secret key SE_S. With many clients, it is impossible to encrypt the field names of client data using a client secret key. Therefore, the field names of all data classified as client data are encrypted using the same key, namely the trusted third party secret key.

Definition 3. Metadata rules are updated at the client, the trusted third party, and the server whenever a change happens to the relation schema R on the server side.

In our framework, the database is encrypted using many keys owned by different parties, and different parties might use different encryption algorithms. So even if an inside or outside attacker obtains one key, it is difficult to decrypt all the data in the database.
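To make the mixed-key idea of Definitions 1 and 2 concrete, here is a minimal sketch in which each field of a row is encrypted with the key of the party that owns it; the toy XOR cipher, column names, and classification map are purely illustrative (not MCDB's actual algorithm or API) and provide no real security:

from itertools import cycle

def toy_encrypt(key: bytes, value: str) -> bytes:
    """Illustrative XOR 'cipher' standing in for a real symmetric algorithm."""
    return bytes(b ^ k for b, k in zip(value.encode(), cycle(key)))

# Hypothetical column classification for one relation (T = trusted third
# party data, C = client data, S = server data), as in Definition 2.
classification = {"patient_id": "T", "diagnosis": "C", "billing_code": "S"}
keys = {"T": b"ttp-secret", "C": b"client-secret", "S": b"server-secret"}

def encrypt_row(row: dict) -> dict:
    """Encrypt each field with the secret key of the party that owns it."""
    return {col: toy_encrypt(keys[classification[col]], val)
            for col, val in row.items()}

print(encrypt_row({"patient_id": "P-042", "diagnosis": "flu",
                   "billing_code": "B77"}))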
Fig. 2. MCDB query management system (query rewriter and result analyzer components, each with their own metadata, at the client, the trusted third party, and the server, operating over the encrypted database)
4 Query Processing
The query management system in MCDB, shown in Figure 2, consists of two components. The first component is the Query Management Agent (QMA). The main objective of the QMA is to prepare the query request to be executed over the encrypted database on the server side by encrypting, decrypting, and rewriting the query parts. The second component is the Result Analyzer (RA). The objective of the RA is to decrypt the query result to make it readable for the client. In addition, the RA uses the client public key (PU_C) to add an additional asymmetric encryption layer to the result before it is transmitted to other parties; in this way only the client can decrypt it using the client private key (PR_C). The two components use metadata to perform the query process. The metadata contains the minimum data required to rewrite and analyze the query, consisting of the field name, type, owner, and table name. Each party has different metadata according to the data classification.
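A rough sketch of the metadata-driven rewriting step, in which plaintext table and field names in a query are replaced by their encrypted counterparts; the metadata structure, encrypted names, and query handling are illustrative assumptions, not the actual QMA implementation:

# Hypothetical metadata: plaintext name -> (owner, encrypted name)
metadata = {
    "patient":    ("S", "x91fa"),   # table name, encrypted with the server key
    "patient_id": ("T", "c03b7"),   # field encrypted with the TTP key
    "diagnosis":  ("C", "9d2e1"),   # field encrypted with the client key
}

def rewrite_query(select_fields, table, predicates):
    """Rewrite a simple SELECT into its encrypted form using the metadata."""
    enc = lambda name: metadata[name][1]
    cols = ", ".join(enc(f) for f in select_fields)
    where = " AND ".join(f"{enc(f)} = ?" for f, _ in predicates)
    params = [v for _, v in predicates]   # predicate values still need encryption
    return f"SELECT {cols} FROM {enc(table)} WHERE {where}", params

print(rewrite_query(["diagnosis"], "patient", [("patient_id", "P-042")]))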
5 Future Plan
We plan to build on the research presented in this paper through the following tasks. First, we will implement a prototype as a proof of concept and obtain empirical data for efficiency and security analysis. Second, we will focus on developing an encryption algorithm and a query processing architecture that ensure security in the semi-trusted database with reasonable performance. Third, we will investigate in detail the issues of joining different encrypted databases and of integrating access control methods with the mixed encrypted database.
References

1. Bouganim, L., Pucheral, P.: Chip-secured data access: confidential data on untrusted servers. In: VLDB 2002: Proceedings of the 28th International Conference on Very Large Data Bases, pp. 131–142. VLDB Endowment (2002)
2. Chang, C.-C., Chan, C.-W.: A database record encryption scheme using the RSA public key cryptosystem and its master keys. In: ICCNMC 2003: Proceedings of the 2003 International Conference on Computer Networks and Mobile Computing, p. 345. IEEE Computer Society, Washington (2003)
3. Damiani, E., De Capitani di Vimercati, S., Jajodia, S., Paraboschi, S., Samarati, P.: Balancing confidentiality and efficiency in untrusted relational DBMSs. In: CCS 2003: Proceedings of the 10th ACM Conference on Computer and Communications Security, pp. 93–102. ACM, New York (2003)
4. Davida, G.I., Wells, D.L., Kam, J.B.: A database encryption system with subkeys. ACM Trans. Database Syst. 6(2), 312–328 (1981)
5. De Capitani di Vimercati, S., Foresti, S., Jajodia, S., Paraboschi, S., Samarati, P.: A data outsourcing architecture combining cryptography and access control. In: CSAW 2007: Proceedings of the 2007 ACM Workshop on Computer Security Architecture, pp. 63–69. ACM, New York (2007)
6. Ge, T., Zdonik, S.: Fast, secure encryption for indexing in a column-oriented DBMS. In: ICDE 2007: IEEE 23rd International Conference on Data Engineering, pp. 676–685 (2007)
7. Hacigumus, H., Mehrotra, S., Iyer, B.: Providing database as a service. In: Proceedings of the International Conference on Data Engineering, p. 29 (2002)
8. Iyer, B., Mehrotra, S., Mykletun, E., Tsudik, G., Wu, Y.: A framework for efficient storage security in RDBMS. In: Bertino, E., Christodoulakis, S., Plexousakis, D., Christophides, V., Koubarakis, M., Böhm, K., Ferrari, E. (eds.) EDBT 2004. LNCS, vol. 2992, pp. 147–164. Springer, Heidelberg (2004)
9. He, J., Wang, M.: Cryptography and relational database management systems. In: IDEAS 2001: Proceedings of the International Database Engineering & Applications Symposium, pp. 273–284. IEEE Computer Society, Washington (2001)
10. Mykletun, E., Tsudik, G.: Incorporating a secure coprocessor in the Database-as-a-Service model. In: IWIA 2005: Proceedings of the Innovative Architecture for Future Generation High-Performance Processors and Systems, pp. 38–44. IEEE Computer Society, Washington (2005)
11. Wong, W.K., Cheung, D.W., Hung, E., Kao, B., Mamoulis, N.: Security in outsourcing of association rule mining. In: VLDB 2007: Proceedings of the 33rd International Conference on Very Large Data Bases, pp. 111–122. VLDB Endowment (2007)
Integration of Domain Knowledge for Outlier Detection in High Dimensional Space

Sakshi Babbar (PhD student)
School of Information Technologies, The University of Sydney
Abstract. The role of outlier or anomaly detection is to discover unusual and rare patterns in data. However, while the notion of "rare" can be quantified, what is "unusual" is highly subjective and domain dependent. The focus of this proposal is to determine whether logic models such as Probabilistic Relational Models, Default Reasoning, and Ontologies can be used to integrate domain knowledge into the outlier discovery process.
1 Introduction

Outlier detection is a data mining technique which focuses on discovering unusual patterns. An outlier, in its simplest terminology, can be defined as an observation in a domain that is markedly different from other observations in the same domain. The term outlier detection refers to the problem of finding those observations, that is, patterns in the data that do not conform to an expected behaviour. Outliers are often interesting patterns which, if found, can raise alarms indicating that something unexpected is occurring in the process which has generated the data. As a consequence, outlier detection is one of the categories of knowledge discovery. The real cause of an outlier's occurrence is usually unknown to users or analysts. Sometimes the outlier can be a flawed value resulting from the poor quality of the dataset. In this case, no useful contextual information is conveyed by the outlier value. However, it is also possible that an outlier represents correct, though exceptional, information. In this case, understanding the outlier will potentially provide new information. The identification of outliers is important both for improving the quality of the original data and for providing additional unknown knowledge from the data. However, while in the first case the correct action is to remove the outlier, in the second case a proper analysis is required so as to understand why such anomalies appear in the domain and what their causes are. Outlier detection deals with and answers questions that are undoubtedly an important research direction. Outliers in data translate to significant actionable information in a wide variety of application domains, for example, fraud detection for credit cards, military surveillance for enemy activities, intrusion detection for cyber security, and many more [3]. The key challenge in outlier detection is that it involves exploring the unseen space and finding patterns that do not conform to the expected normal behavior.
A straightforward approach would be to define a region representing normal behavior and declare any observation in the data which does not belong to this normal region as an outlier. However, this simple approach is rather challenging, since defining a normal region which encompasses every possible normal behavior is not a trivial task.

1.1 Motivations

Some salient points that I believe are important but that current outlier detection techniques do not address: (1) When a point is declared an outlier, an explanation is required to determine in which context it is an outlier. Such an explanation can provide a better understanding and interpretation of the data. The existing literature is very limited in explaining the outlier-ness of a point in context. (2) From an analyst's point of view, it is often more important to know what behaviour, characteristics, and values make a point different from the rest of the data than just to discover the outlier. (3) Do the outliers found by existing methods make sense from the application point of view?

1.2 Problem Statement

In the literature on outlier detection, most of the emphasis has been on proposing new definitions of outliers and then designing algorithms to discover them. Simple distance computation can identify that some anomalies are present, but it could happen that the found anomalies are not outliers when seen under specific domain knowledge. For such problems, therefore, in addition to the simple values of the attributes, information from domain experts is required to explain the outliers. The basic idea is to use the domain knowledge in order to discern between the observations that are real outliers and those that are not. An example will help to better illustrate the problem. Assume data points plotted on the two-dimensional attributes Income (X-axis) and Expenditure (Y-axis), as given in Figure 1(a). By viewing this graph we would say that points O1 and O2 are outliers. However, from the domain's perspective, neither O1 nor O2 is an outlier. If a knowledge base is available to support finding outliers, then a justification with reasoning (a person having a high income can have high expenditure) can be given as to why these points are not outliers. In this graphical representation, the important outliers which make sense but do not seem to be outliers visually are points O3, O4 and O5. From the domain point of view, the points O3, O4 and O5 are outliers because a person with a low income cannot spend more than he/she earns. The above-mentioned example illustrates the basic question that needs to be addressed by integrating domain-specific knowledge into outlier detection algorithms. The task will not be easy, and my work will address the next three questions: How can context knowledge be introduced? What are the possible approaches? And which one is the best option? Before proceeding to the answers to these questions, I present below a brief literature review of the main outlier detection techniques, highlighting their weaknesses and strengths.
2 Background

Fig. 1. (a) Objects in two-dimensional space, where Low (L), Average (Avg) and High (H) are the stages of the attributes and points O1–O5 are outliers; (b) different outliers that will be reported depending upon the definition

There is a broad range of outlier detection techniques that can be categorized in different ways. To start with, statistical methods develop statistical models from the given data and then apply a statistical inference test to determine whether an instance is likely to have been generated from the model. Instances that have a low probability of belonging to the statistical model are declared outliers [5]. Referring to Figure 1(b), a statistical model can find point P1 and possibly point P3 as outliers but cannot detect points P2 and P4, because these methods generally depend upon the statistical mean to calculate the distance and find outliers. In the example, the value of point P2 corresponds to the mean, so it would not be detected by these methods. Moreover, these methods cannot be used with data of high dimensionality. Distance-based methods use standard distance measures for finding outliers based on the nearest neighbours of an object and overcome a few shortcomings of statistical methods [7,8]. Referring to Figure 1(b) again, these methods would label points P1 and P2 as outliers because they cannot fulfil the condition of having a required number of neighbours nearer than a certain distance threshold. However, as points P3 and P4 can fulfil this condition, they would not be declared outliers. The problem is due to the fact that these methods take into account the global data distribution rather than local isolation with respect to the neighbourhood. This makes them perform poorly when the dataset has regions of varying densities. Moreover, they have high computational complexity, because they need to compute the distance to all the points in the database to find the k nearest neighbours. The density-based methods solve the problem of finding outliers which are both isolated from the whole data set (global) and isolated from the local neighbourhood (local). Algorithms exist in the literature that capture how isolated an object is with respect to its surrounding neighbourhood rather than the whole data set [2]. As a consequence, such techniques can declare points P1, P2, P3 and P4 as outliers (Figure 1(b)). Again, the performance of these approaches relies greatly on a distance measure, which is very challenging when the data are complex, e.g., graphs and sequences. Clustering-based approaches make a very simple assumption for finding outliers: normal data points belong to large and dense clusters, while outliers either do not belong to any cluster or form very small clusters [3]. However, such techniques are highly dependent upon the effectiveness of clustering algorithms, which in turn depend on a suitable metric for clustering. This might result in outliers being assigned to large clusters and therefore considered normal. These methods would also be able to detect the four points as outliers.
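To make the distance-based notion above concrete, the following sketch scores each point by the distance to its k-th nearest neighbour and reports the top-scoring point; the choice of k and the sample data are illustrative and this is not any specific algorithm from the cited works:

import math

def knn_outlier_scores(points, k=3):
    """Score each point by the distance to its k-th nearest neighbour."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    scores = []
    for p in points:
        d = sorted(dist(p, q) for q in points if q is not p)
        scores.append(d[k - 1] if len(d) >= k else float("inf"))
    return scores

# Example: income/expenditure-style points with one isolated observation
data = [(1, 1), (1.2, 0.9), (0.9, 1.1), (1.1, 1.0), (5, 5)]
scores = knn_outlier_scores(data, k=2)
print(sorted(zip(scores, data), reverse=True)[0])  # most outlying point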
In order to get "real" outliers that make sense from the domain's point of view, we need to integrate the domain's knowledge into the discovery process. We could visualize the domain-specific knowledge as a model, where the data points violating it are reported as "real" outliers. Such a model could be used for reasoning about an entity.
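As a toy illustration of this idea (anticipating the income/expenditure example of Section 1.2), the sketch below encodes the domain rule "expenditure should not greatly exceed income" as a simple conditional table and flags points that violate it; the probabilities and threshold are invented for illustration and do not come from any real model:

# Hypothetical conditional probabilities P(expenditure level | income level),
# standing in for knowledge elicited from a domain expert or learned by a PRM.
p_exp_given_income = {
    "low":  {"low": 0.7, "avg": 0.25, "high": 0.05},
    "avg":  {"low": 0.2, "avg": 0.6,  "high": 0.2},
    "high": {"low": 0.1, "avg": 0.3,  "high": 0.6},
}

def is_domain_outlier(income, expenditure, threshold=0.1):
    """Flag a point whose expenditure level is improbable given its income."""
    return p_exp_given_income[income][expenditure] < threshold

# O3-O5 style points (low income, high expenditure) are flagged,
# while O1/O2 style points (high income, high expenditure) are not.
print(is_domain_outlier("low", "high"))   # True  -> outlier
print(is_domain_outlier("high", "high"))  # False -> normal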
3 Introducing Context Knowledge to Outlier Detection

The best way of including context knowledge in outlier detection is something that still needs to be determined. A preliminary literature survey suggests that one of the following techniques could be used: Probabilistic Relational Models, logic programming/default reasoning, and ontologies. My work will consist of determining which one is the most appropriate among the above-mentioned approaches and then designing an outlier detection methodology that includes context knowledge. I will also define a real application domain to test the proposed methodology.

3.1 Probabilistic Relational Model (PRM)

This model works on the principle of objects, classes, and relations. The notion of individuals, their properties, and the relations between them provides a meaningful framework for reasoning in diverse domains [4]. It is motivated primarily by the concepts of relational databases and is equivalent to the standard vocabulary and semantics of relational logic. It includes a probability model that specifies a joint distribution over all possible worlds. Thus, it implicitly specifies the probability of any event. The underlying probability model is similar to Bayesian networks. From the outlier detection point of view, a probabilistic relational model can be considered as a model constructed by experts and by data, in terms of a set of relationships that seek to determine the required behaviour. This model encodes the probabilistic relationships among the variables of interest. The underlying statistical principles of this model yield several advantages, including the capability of encoding interdependencies between variables and of predicting events, as well as the ability to incorporate both prior knowledge and data. If the specification is comprehensive, the model could be used to predict abnormal patterns. In most real-world applications the information is stored in databases. Databases can be used to generate PRMs and, as a consequence, the use of PRMs to introduce context knowledge into outlier detection could be easily applicable to many existing domains. The example above (Figure 1(a)) can be used to assess whether or not a given transaction is fraudulent using PRMs. Once the relational skeleton is designed (in this case, Expenditure is a child node of Income), the PRM encodes the probabilities for every possible state between these two nodes. An important property of these probabilities is that they can answer the chances of any event happening. Referring to the figure again, the PRM can tell what the probability of an event is when income is low and expenditure is high. Once we have the probability of such a query, a threshold can be applied to check whether that transaction is fraudulent or not. With this conclusion, the points O3, O4 and O5 can be declared outliers. Another feature of PRMs, which can be used to find meaningful outliers, is that they can explain the reasoning behind dependent events. Say the statement "a person has high expenditure" seems
suspicious; but since expenditure depends upon income, if income is high then the same statement is no longer suspicious and instead becomes normal. This is the reason why a PRM will not find points O1 and O2 as outliers.

3.2 Default Reasoning

Default logics are used to describe the regular behavior and normal properties of the domain elements. For example, "high income implies high expenditure" is normal behavior. Some work has been done in this context; the authors proposed that the input of an outlier detection problem be a set of rules representing context knowledge and a set of observations [1]. They suggest that the fact that an individual is an outlier has to be witnessed by an associated set of observations. The major emphasis of the authors is on theoretical issues, and they believe that finding outliers is quite complex using this methodology. However, I think that building a knowledge base for any domain and finding effective search algorithms to trace anomalies will be very challenging.

3.3 Ontologies

An ontology is a formal explicit description of concepts in a domain of discourse (classes), the properties of each concept describing its various features and attributes (slots), and restrictions on slots (facets). An ontology together with a set of individual instances of classes constitutes a knowledge base. Ontology development for any domain is an iterative process which involves evaluating and debugging it by using it in applications. We can find in the literature many approaches that use ontologies for clustering [6]. As mentioned, clustering is one of the techniques used for outlier detection and, as a consequence, ontologies could be a third option for introducing context knowledge into outlier detection.
4 Conclusions

In this paper, I have proposed a framework for integrating domain knowledge into the outlier detection process. The core idea is to investigate the use of Probabilistic Relational Models (PRMs), Default Reasoning, and Ontologies in the outlier discovery process. The advantage of this approach is that it will reduce the number of false positives and increase the credibility of the use of outlier detection for knowledge discovery.
References

1. Angiulli, F., Ben-Eliyahu-Zohary, R., Palopoli, L.: Outlier detection using default reasoning. Artif. Intell. 172(16-17), 1837–1872 (2008)
2. Breunig, M.M., Kriegel, H.-P., Ng, R.T., Sander, J.: LOF: Identifying density-based local outliers. In: Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, Dallas, Texas, USA, pp. 93–104. ACM, New York (2000)
3. Chandola, V., Banerjee, A., Kumar, V.: Outlier detection: A survey. Technical report, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.108.8502
4. Koller, D., Getoor, L., Friedman, N.: Probabilistic relational models
5. Hawkins, D.: Identification of Outliers. Chapman and Hall, London (1980)
6. Li, Z., Wen, J., Hu, X.: Ontology-based clustering for improving genomic IR. In: Twentieth IEEE International Symposium on Computer-Based Medical Systems, pp. 225–230 (2007)
7. Knorr, E.M., Ng, R.T., Tucakov, V.: Distance-based outliers: algorithms and applications. The VLDB Journal 8(3-4), 237–253 (2000)
8. Bay, S.D., Schwabacher, M.: Mining distance-based outliers in near linear time with randomization and a simple pruning rule. In: Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Washington, D.C., USA (2003)
Towards a Spreadsheet-Based Service Composition Framework

Dat Dac Hoang, Boualem Benatallah, and Hye-young Paik

School of Computer Science & Engineering, The University of New South Wales, Sydney NSW 2052, Australia
{ddhoang,boualem,hpaik}@cse.unsw.edu.au
Abstract. The spreadsheet is a popular productivity tool. Recently, it has gained attention as a potential service integration environment targeted towards end-users. We discuss four possible ways of implementing web service composition in a spreadsheet environment, highlighting their significance and limitations. We then propose our approach, which aims to address the need for service composition while maintaining all of the spreadsheet characteristics users are familiar with. We briefly discuss the progress so far as well as future plans.
1 Introduction
The spreadsheet is used by millions of people on a daily basis thanks to its intuitive and easy-to-use data management and manipulation capabilities [4]. However, spreadsheets are not connected to the world of service-oriented computing (SOC), which promises to bring thousands of novel and easily reusable software components to users. Let us illustrate the benefits of integrating spreadsheets with SOC through a simple scenario (inspired by the scenario provided in [2]): Mary, a student, is learning Spanish. To study a word, she uses a text-to-speech service (TTS), a translation service (TSL) and a spell-checking service (SPL). Instead of invoking these services separately and manually linking the results, she opens a spreadsheet program, inputs an English word, and writes formulas to link these services so that the completed formulas automatically suggest spelling corrections, pronounce the word in English, translate the word to Spanish, and finally pronounce the word in Spanish. Obviously, the ability to quickly compose such services in a spreadsheet enables her to create new applications which suit her needs and save her time, while interacting with a tool she is already familiar with. There have been some efforts from the academic and industry communities to facilitate the integration between spreadsheets and SOC. However, they are either too simple and do not support sophisticated composition logic, or too complicated and require a high learning curve. Clearly, there is still much work left to do in order to create a spreadsheet service composition model which can support end-users with both its intuitiveness and its comprehensiveness. To understand
the state of the art in this area, we present four main approaches to service composition in spreadsheets. We then propose our approach, which extends the conventional spreadsheet paradigm to address the need for service composition while maintaining the simplicity and declarative nature of spreadsheets. The remainder of this paper is organized as follows. The survey is presented in Section 2. In Section 3, we put forward our approach and briefly discuss the research plan. A conclusion and discussion are given in Section 4.
2 A Brief Survey of Current Approaches

In this section, we present the current approaches to service composition in spreadsheets.

2.1 Implicit Service Composition
The idea of implicit service composition comes from the fact that compositions between services are made using the natural data flow created by cell dependencies in a spreadsheet. For example, a very simple composition can be implemented by linking one service's output into another service's input fields (e.g., in our reference scenario, the translation service takes the output of the spelling service as one of its input parameters). StrikeIron SOA Express for Excel [6] is a typical example of this approach. By carefully analyzing the data flows created by cell dependencies, we can show that the spreadsheet programming paradigm naturally supports the core service composition patterns, namely: sequence, parallel split, synchronization, exclusive choice and simple merge. This approach is easy to learn as it conforms to the conventional spreadsheet programming metaphors (i.e., cell referencing, cell formulas, etc.). However, the main limitations of this approach are: (i) The composition relies mainly on services that operate in a request-response manner (i.e., it expects a service to always return output); it does not consider other types of services which do not operate in such a manner (e.g., one-way interaction services). (ii) The execution order of the composite application relies on the data flow created by cell referencing. There is no explicit control-flow construct specified in the application, thus it is less suitable for scenarios which require complex execution logic.
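A minimal sketch of implicit, data-flow-driven composition: cells hold either values or formulas over other cells, and evaluation follows cell dependencies; the service stubs (spell, translate, speak) mirror the reference scenario and are placeholders rather than real web-service calls:

# Placeholder 'services' standing in for the SPL, TSL and TTS calls.
def spell(word):       return word.strip().lower()
def translate(word):   return {"hello": "hola"}.get(word, word)
def speak(word, lang): return f"[audio:{lang}:{word}]"

# Cells: a plain value, or a function of other cells (the 'formula').
cells = {
    "A1": "Hello ",
    "B1": lambda c: spell(c("A1")),           # =SPELL(A1)
    "C1": lambda c: translate(c("B1")),       # =TRANSLATE(B1)
    "D1": lambda c: speak(c("B1"), "en"),     # =SPEAK(B1,"en")
    "E1": lambda c: speak(c("C1"), "es"),     # =SPEAK(C1,"es")
}

def get(ref):
    """Evaluate a cell, recursively following its dependencies."""
    v = cells[ref]
    return v(get) if callable(v) else v

print(get("E1"))   # -> [audio:es:hola]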
2.2 Event-Based Service Composition
In event-based service composition, the flow of service execution is determined by user actions (e.g., button clicks, key presses, etc.) or interactions with other services (i.e., the completion of an action in one service may cause an action in another service). Autonomic Task Manager for Administrators (also known as A1) [1] is a typical example of this approach. Each cell in A1's spreadsheet is an arbitrary Java object. Each object is associated with a listener which is designed to manage events. The listener explicitly triggers the execution of a code block based on a condition. For example, the following code snippet shows
how a listener construct on() is used to trigger cell B2 to increase its value by 1 when a button in cell B1 is pressed:

B1 := Button();
B2 := 0;
B3 : on(B1) B2 = B2 + 1;

By using events, users can express all basic composition patterns and some more complicated patterns (e.g., iteration). However, event-based service composition requires the user to have programming knowledge (e.g., the notion of events, expressions, assignment commands, etc.), and the conventional spreadsheet paradigm itself needs to be extended to handle more complicated event conditions such as aggregating events.
2.3 Service Composition through Shared Process Data
This composition approach utilizes not only the data required/produced by services (i.e., input and output) but also the meta-data generated during the execution of services (e.g., service A can have the status "initiated", "processing" or "completed"). The composition process simply tests for the existence of these data in order to decide whether a service B following service A can be started. In this approach, a shared data repository is maintained by the composition framework. Every piece of data (e.g., the input/output of a service, or the execution status of a service) is read from/written to the repository so that it can be shared amongst the services. AMICO:CALC [2] is a typical example of this approach. AMICO:CALC uses variables (which encapsulate services' data structures) to abstract the heterogeneity of services. Users are provided with various predefined functions (e.g., Amico_Read, Amico_Write, etc.) to get data from and post data to services. In [2], the authors demonstrated that AMICO:CALC supports the basic composition patterns. The shared data repository allows users to express the composition logic using a combination of low-level operations such as Read and Write. However, the main limitation of this approach is that it requires an external repository to store the shared data. Users need to be aware of the presence of these data in order to compose services, and thus the learning curve is increased (i.e., it is not a natural spreadsheet paradigm).
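A rough sketch of coordination through a shared repository: services publish their status and outputs to a common store, and a follower service starts only when the data it tests for is present; the repository API, key names, and status values are illustrative assumptions, not AMICO:CALC's actual functions:

# A trivial in-memory stand-in for the shared data repository.
repository = {}

def repo_write(key, value):
    repository[key] = value

def repo_read(key, default=None):
    return repository.get(key, default)

def service_a(word):
    repo_write("A.status", "processing")
    repo_write("A.output", word.lower())      # pretend this is real work
    repo_write("A.status", "completed")

def service_b():
    # B runs only if A has published a "completed" status and an output.
    if repo_read("A.status") == "completed":
        result = repo_read("A.output") + "!"
        repo_write("B.output", result)
        return result
    return None

service_a("Hola")
print(service_b())   # -> hola!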
2.4 Explicit Service Composition
Explicit service composition allows users to explicitly define the execution order of a composite service. This can be done by defining flow-control constructs (e.g., sequence, if-then-else, for-loop). Husky [5] is a typical example of this approach. The composition logic in Husky is defined by transforming the spatial organization of services' events into their ordering in time. Time progresses from left to right and from top to bottom in cell blocks. A set of adjacent cells makes a sequence of events, while empty cells divide the workspace into temporally independent event sequences. By introducing the notion of time, Husky explicitly introduces operators such as iteration, if-then control, etc. By using pre-defined constructs, this approach gives users more of the benefits of generality and extensibility of a language. However, it may break the philosophy of spreadsheet programming and increase the learning curve.
3 Proposed Solution
Through the survey, we identified that the following elements are required to support a complete service composition environment in spreadsheets:

– A component model, which provides a unified abstraction of heterogeneous services that will be used to build a composite service.
– A component coordination model, which is a programming model and language through which users integrate components to create a composite service.
– Adapters and wrappers, used to map/convert heterogeneous sources (e.g., WSDL services, REST-based services, etc.) into the component model.
– An extended type system, which is used to extend the data types supported in spreadsheet cells (i.e., not only simple data such as text and numbers but also complex data such as objects, services, etc.).

The scope of our study is the design of the component and component coordination models. Our framework assumes solutions developed by others for the wrappers [2] and extended types [3].

Component model. A component model provides the building blocks for service composition. We adopt the model proposed in [7], which is mainly designed for presentation service components, for use in our component coordination model. We introduce service execution states into [7] to support explicit service composition in spreadsheets. Our model is characterized by the following elements:

– Service execution state. This element is used to represent the status of service execution. We define two states: idle and running. The state of one service can determine the behaviour of other services.
– Data. Data can represent an input to a service or the result of a service. It can be a simple data type (e.g., text, number, etc.), a complex data type (e.g., XML document, object, etc.) or null (i.e., the service invocation does not return any data).
– Events. There are two categories of events: service execution state changes (e.g., from idle to running) and user actions (e.g., the user updating a formula).
– Operations. These are the invocation methods provided by the component.

Component coordination model. Our coordination model uses the implicit service composition approach because we believe that (i) it has the lowest learning curve, which is the most important feature of end-user development, and (ii) it preserves the conventional spreadsheet programming metaphor. Basically, in the "implicit" approach, the control flow is based on the data flow in the composition. However, consider a scenario where there is no data dependency between services, or where we want the execution order of the composite service to be different from the control flow created by the data flow. For example, in our reference scenario, Mary cannot swap the execution order of the TTS service for English with the TTS service for Spanish because they already have data dependencies with the TSL and SPL services. Our coordination model allows the user to explicitly modify the control flow generated by the data flow by using simple "before" and "after"
constructs, without affecting the existing data flow. For this, we propose an algorithm which checks that the modified control flow does not break the existing data dependency constraints. The current spreadsheet evaluation engine relies only on the data flow. We implement a new evaluation engine which evaluates the formulas (i.e., the composite service) based not only on the data flow but also on the control flow modified by the user.

Implementation. We intend to implement our framework as an extension to Microsoft Excel, since it is arguably the most widely used spreadsheet application. This decision proves to be both challenging, since it puts constraints on the framework we came up with, and rewarding, since it ensures a higher rate of user acceptance. The language of choice for implementation is C#, using the Visual Studio Tools for Office (VSTO) toolkit, since VSTO lets programmers build an object representation of the Excel cell content. The evaluation engine will use JScript.NET, since this standardized language is integrated within the .NET framework.
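A small sketch of the kind of check described above: the data flow is modeled as a dependency graph, a user-specified "before" constraint is added as an extra ordering edge, and the combined graph is rejected if it becomes cyclic (i.e., the constraint would contradict an existing data dependency); the graph encoding and the cycle test are illustrative choices, not the authors' algorithm:

def has_cycle(edges):
    """Detect a cycle in a directed graph given as (from, to) pairs."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, [])
    WHITE, GREY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}
    def visit(n):
        color[n] = GREY
        for m in graph[n]:
            if color[m] == GREY or (color[m] == WHITE and visit(m)):
                return True
        color[n] = BLACK
        return False
    return any(color[n] == WHITE and visit(n) for n in graph)

# Data-flow edges from the reference scenario:
# SPL feeds TSL and the English TTS; TSL feeds the Spanish TTS.
data_flow = [("SPL", "TSL"), ("TSL", "TTS_es"), ("SPL", "TTS_en")]

def accept_constraint(before, after):
    """Accept a user 'before/after' constraint only if it keeps the graph acyclic."""
    return not has_cycle(data_flow + [(before, after)])

print(accept_constraint("TTS_en", "TTS_es"))   # True: no data dependency violated
print(accept_constraint("TTS_es", "SPL"))      # False: contradicts the data flow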
4 Conclusion and Future Works
In this paper, we make the following contributions: (i) we investigate and evaluate four possible ways of implementing composition in spreadsheets; (ii) we identify the elements of an ideal service composition framework and propose a component and component coordination model which will contribute to building intuitive, yet comprehensive, spreadsheets for service composition. The first author has completed 5 semesters of his PhD candidature and expects to submit his thesis by mid-2010. The PhD workshop will give him an opportunity to discuss the topic with first-class researchers in the area. The following issues are left as future work: (i) the framework needs to be used and evaluated by end-users with respect to usability; (ii) the ability to test and debug composite programs in the framework; (iii) the trade-off between usability and the completeness of the composition patterns supported in the framework.
References

1. Kandogan, E., Haber, E., Barrett, R., Cypher, A., Maglio, P., Zhao, H.: A1: end-user programming for web-based system administration. In: UIST 2005, USA (2005)
2. Obrenovic, Z., Gasevic, D.: End-user service computing: Spreadsheets as a service composition tool. IEEE Transactions on Services Computing (2008)
3. Saint-Paul, R., Benatallah, B., Vayssière, J.: Data services in your spreadsheet! In: EDBT 2008, pp. 690–694. ACM, New York (2008)
4. Scaffidi, C., Shaw, M., Myers, B.: Estimating the numbers of end users and end user programmers. In: VLHCC 2005, USA, pp. 207–214 (2005)
5. Srbljic, S., Zuzak, I., Skrobo, D.: Husky (2007), http://www.husky.fer.hr/
6. StrikeIron: StrikeIron web services for Excel, http://www2.strikeiron.com/Products/LiveDataForExcel.aspx
7. Yu, J., Benatallah, B., Saint-Paul, R., Casati, F., Daniel, F., Matera, M.: A framework for rapid integration of presentation components. In: WWW 2007, USA, pp. 923–932 (2007)
Author Index
Adams, M. 319
Amagasa, Toshiyuki 358
Andruszkiewicz, Piotr 261
Aref, Walid G. 95
Babbar, Sakshi 363
Bača, Radim 6
Bächle, Sebastian 64
Barker, Ken 246
Belohlavek, Radim 137
Benatallah, Boualem 369
Bernecker, Thomas 122
Bressan, Stéphane 4, 216
Caballero, Ismael 152
Calero, Coral 152
Chilingaryan, Suren 21
Chiu, Dickson K.W. 289
Delis, Alex 95
Deng, Ke 95
Dyreson, Curtis 35
Fu, Ada Wai-Chee 215
Gardner, Henry 112
Ge, JiDong 290, 349
Härder, Theo 64
Hoang, Dat Dac 369
Hu, Haiyang 290, 349
Hu, Hua 349
Jantan, Aman 216
Jin, Hao 35
Jurnawan, Winly 97
Kadhem, Hasan 358
Kitagawa, Hiroyuki 358
Koh, Jia-Ling 334
Krátký, Michal 3, 6
Kriegel, Hans-Peter 122
Laranjeiro, Nuno 182
Larson, J. Walter 112
Lin, Ching-Yi 334
Liu, Qing 95
Loukides, Grigorios 231
Madeira, Henrique 182
Mlynkova, Irena 3
Paik, Hye-young 369
Pardede, Eric 3
Pesic, M. 319
Piattini, Mario 152
Price, Rosanne 167
Pun, Sampson 246
Renz, Matthias 122
Röhm, Uwe 97
Sadiq, Shazia 95
Sakr, Sherif 49
Sakurai, Kouichi 276
Schmidt, Karsten 64
Schonenberg, H. 319
Serrano, Manuel 152
Shanks, Graeme 167
Shao, Jianhua 231
Strnad, Pavel 79
Su, Chunhua 276
Sun, Yongjiao 303
ter Hofstede, A.H.M. 319
Tziatzios, Achilles 231
Valenta, Michal 79
van der Aalst, W.M.P. 319
Verbo, Eugenio 152
Vieira, Marco 182
Vychodil, Vilem 137
Wang, Bin 197, 303
Wang, Guoren 197, 303
Wang, Wei 357
Wong, Raymond Chi-Wing 215
Wood, Ian 112
Xie, Bo 349
Xie, Long 197
Xu, Kai 95
Yang, Xiaochun 95, 303
Yu, Ge 303
Yuan, Ye 303
Zare-Mirakabad, Mohammad-Reza 216
Zhan, Justin 276
Zheng, Baihua 357
Zhou, Xiaofang 95
Zhuang, Yi 289, 349
Zuefle, Andreas 122