Lecture Notes in Artificial Intelligence Edited by J. G. Carbonell and J. Siekmann
Subseries of Lecture Notes in Computer Science
2797
Berlin Heidelberg New York Hong Kong London Milan Paris Tokyo
Osmar R. Zaïane, Simeon J. Simoff, Chabane Djeraba (Eds.)
Mining Multimedia and Complex Data
KDD Workshop MDM/KDD 2002
PAKDD Workshop KDMCD 2002
Revised Papers
Series Editors

Jaime G. Carbonell, Carnegie Mellon University, Pittsburgh, PA, USA
Jörg Siekmann, University of Saarland, Saarbrücken, Germany

Volume Editors

Osmar R. Zaïane
University of Alberta, Department of Computing Science
Edmonton, Alberta, T6G 2E8, Canada
E-mail: [email protected]

Simeon J. Simoff
University of Technology, Sydney
Institute for Information and Communication Technologies
Faculty of Information Technology
Broadway, P.O. Box 123, NSW 2007, Australia
E-mail: [email protected]

Chabane Djeraba
LIFL – UMR CNRS 8022, University of Science and Technology of Lille
59655 Villeneuve d'Ascq Cédex, France
E-mail: [email protected]

Cataloging-in-Publication Data applied for

A catalog record for this book is available from the Library of Congress.

Bibliographic information published by Die Deutsche Bibliothek: Die Deutsche Bibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data is available on the Internet.

CR Subject Classification (1998): I.2, H.2.8, H.3, H.4, K.4, C.2

ISSN 0302-9743
ISBN 3-540-20305-2 Springer-Verlag Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law.

Springer-Verlag Berlin Heidelberg New York, a member of BertelsmannSpringer Science+Business Media GmbH
www.springeronline.com

© Springer-Verlag Berlin Heidelberg 2003
Printed in Germany

Typesetting: Camera-ready by author, data conversion by PTP-Berlin GmbH
Printed on acid-free paper
SPIN: 10931905 06/3142 5 4 3 2 1 0
Preface
1 Workshop Theme
Digital multimedia differs from previous forms of combined media in that the bits representing text, images, animations, audio, video and other signals can be treated as data by computer programs. Although this data is diverse in its underlying models and formats, it is synchronized and integrated, and hence can be treated as integral data records. Such records can be found in a number of areas of human endeavour. Modern medicine generates huge amounts of such digital data. Another example is architectural design and the related architecture, engineering and construction (AEC) industry. Virtual communities (in the broad sense of the word, which includes any community mediated by digital technologies) are another example where the generated data constitutes an integral data record. Such data may include member profiles, the content generated by the virtual community, and communication data in different formats, including e-mail, chat records, SMS messages and videoconferencing records. Not all multimedia data is so diverse. An example of less diverse data, but data that is larger in terms of the collected amount, is that generated by video surveillance systems, where each integral data record roughly consists of a set of time-stamped images – the video frames. In any case, a collection of such integral data records constitutes a multimedia data set. The challenge of extracting meaningful patterns from such data sets has led to research and development in the area of multimedia data mining. This is a challenging field due to the unstructured nature of multimedia data. Such ubiquitous data is useful, if not essential, in many applications. Multimedia databases are widespread and multimedia data sets are extremely large. There are tools for managing and searching within such collections, but the need for tools that extract hidden useful knowledge embedded within multimedia data is becoming critical for many decision-making applications. The tools needed today are tools for discovering relationships between data items or segments within images, classifying images based on their content, extracting patterns from sound, categorizing speech and music, recognizing and tracking objects in video streams, and discovering relations between different multimedia components and across media objects.
2 Multimedia Workshop Series
This book is the result of two workshops: Multimedia Data Mining (MDM/KDD 2002), held in conjunction with ACM SIGKDD 2002 in Edmonton, Canada, in July 2002, and Knowledge Discovery from Multimedia and Complex Data (KDMCD 2002), held in conjunction with PAKDD 2002 in Taipei, Taiwan, in May 2002. These two workshops brought together cross-disciplinary experts in the analysis of digital multimedia content, multimedia databases, spatial data analysis, and the analysis of data in collaborative virtual environments, together with knowledge engineers and domain experts from different applied disciplines related to multimedia data mining. The book covers a variety of topics that come under the umbrella of multimedia data mining and mining complex data: mining spatial multimedia data; mining audio data and multimedia support; mining image and video data; frameworks for multimedia mining; multimedia mining for information retrieval; and applications of multimedia mining. These workshops continued the series of successful multimedia workshops held in conjunction with the KDD conference in 2000 and 2001. The multimedia workshop series has been attended by industry as well as academia, and attendees have shown a sustained interest in this area.
3 Papers
The book attempts to address the above-mentioned issues, looking at specific problems in pattern extraction from image data, sound, and video; at multimedia representations and formats that can assist multimedia data mining; and at advanced architectures for multimedia data mining systems. The papers in the book are not presented in a specific order; this reflects the fact that multimedia mining is at the confluence of many disciplines, often tackling different subjects and problems from different angles at the same time. The papers provide an interesting coverage of different issues and some technical solutions.

In "Subjective Interpretation of Complex Data: Requirements for Supporting Kansei Mining Process," Bianchi-Berthouze and Hayashi continue Bianchi-Berthouze's work on modeling visual impressions from the point of view of multimedia data mining. The new work describes a data warehouse for the mining of multimedia information, whose unique characteristic is its ability to store multiple hierarchical descriptions of the multimedia data. Such a characteristic is necessary to allow mining not only at different levels of abstraction but also according to multiple interpretations of the content. The proposed framework could be generalized to support the analysis of any type of complex data that relates to subjective cognitive processes, whose content interpretation is greatly variable.

In "Multimedia Data Mining Framework for Raw Video Sequences," Oh, Lee, Kote and Bandi present a general framework for real-time video data mining from raw videos (e.g., traffic videos, surveillance videos). The framework focuses on motion as a feature, and on how to compute and represent it for further processing; a multilevel hierarchical procedure clusters segments according to category and motion.

In "Object Detection for Hierarchical Image Classification," Khan and Wang discuss the indexing of images according to meanings rather than the objects that appear in them. The authors propose a solution to the problem of creating a meaning-based index structure through the design of a concept-based model using domain-dependent ontologies. Aiming at accurate identification of object boundaries, they propose an automatic, scalable object-boundary detection algorithm based on edge detection and region growing, and an efficient merging algorithm that joins adjacent regions using an adjacency graph to avoid over-segmentation. They implemented a basic system for the classification of images in the sports domain in order to illustrate the effectiveness of the algorithm.

In "Mining High-Level User Concepts with Multiple Instance Learning and Relevance Feedback for Content-Based Image Retrieval," Huang, Chen, Shyu and Zhang propose a framework that incorporates multiple instance learning into user relevance feedback to discover users' concept patterns. These patterns reflect where the user's region of interest is located and how the local feature vector of that region maps to the user's high-level concept. This underlying mapping is progressively discovered through the proposed feedback and learning procedure. The role of the user in the retrieval system is to guide the mining process according to his or her focus of attention.

In "Associative Classifiers for Medical Images," Antonie, Zaïane and Coman present a new approach to learning a classification model. Their approach discovers association rules from a training set of images, under the constraint that each rule includes a class label as its consequent. Their association-rule-based classifier was tested on a real dataset of medical images. The approach, which includes a significant (and important) preprocessing phase, showed promising results when classifying real mammograms for breast cancer detection.

In "An Innovative Concept for Image Information Mining," Datcu and Seidel introduce the concept of "image information mining" and discuss a system that implements it. The approach is based on modeling the causalities that link the image-signal contents to the objects and structures of interest to the users. It consists of first extracting image features using a library of algorithms; then unsupervised grouping into a large number of clusters, data reduction by parametric modeling of the clusters, and supervised learning of user semantics, the level at which, instead of being programmed, the system is trained using a set of examples.

The paper "Multimedia Data Mining Using P-Trees," by Perrizo, Jockheck, Perera, Ren, Wu and Zhang, focuses on a data structure that provides an efficient, lossless, data-mining-ready representation of the data. The Peano Count Tree (P-tree) provides an efficient way to store and mine sets of images and related data. The authors show the effectiveness of this structure in the context of multimedia mining.

In "Scale Space Exploration for Mining Image Information Content," Ciucu, Heas, Datcu and Tilton describe an application of a scale-space clustering algorithm to the exploration of image information content. The clustering considers the feature space as a thermodynamical ensemble and groups the data by minimizing the free energy, with the temperature as a scale parameter. They analyze the information extracted by the grouping and propose an information representation structure that enables exploration of the image content. This structure is a tree in the scale space showing how the clusters merge.
In "Videoviews: A Content-Based Video Description Schema and Database Navigation Tool," Guler and Pushee introduce a unified framework for a comprehensive video description scheme and present a browsing and manipulation tool for video data mining. The proposed description scheme is based on the structure and the semantics of the video, incorporating scene, camera, object and behaviour information pertaining to a large class of video data. The navigator provides a means for visual data mining of multimedia data: an intuitive presentation, interactive manipulation, the ability to visualize the information and data from a number of perspectives, and the ability to annotate and correlate the data in the video database.

In "The Community of Multimedia Agents," Wei, Petrushin and Gershman present work devoted to creating an open environment for developing, testing, learning and prototyping multimedia content analysis and annotation methods. Each method is represented as an agent that can communicate with the other agents registered in the environment using templates based on the descriptors and description schemes of the emerging MPEG-7 standard. This environment enables researchers to compare the performance of different agents and to combine them into more powerful and robust system prototypes.

"Multimedia Mining of Collaborative Virtual Workspaces: An Integrative Framework for Extracting and Integrating Collaborative Process Knowledge" by Simoff and Biuk-Aghai extends a presentation given at the MDM/KDD 2001 workshop. The authors focus on the knowledge discovery phase that follows the data mining itself, namely the integration of discovered knowledge and knowledge transfer. They present a framework for this integration and transfer, and show its use in a particular domain: collaborative virtual workspaces.

In "STIFF: A Forecasting Framework for Spatio-Temporal Data," Li, Dunham and Xiao cope with complex data containing both spatial and temporal characteristics. They propose a framework that uses neural networks to discover hidden spatial correlations and stochastic time series to capture temporal information per location. This model is used for prediction; they show how it can predict water-flow-rate fluctuations in rivers.

In "Mining Propositional Knowledge Bases to Discover Multi-level Rules," Richards and Malik describe a technique for recognizing knowledge and discovering higher-level concepts in a knowledge base. The technique allows the knowledge to be explored at, and across, any of the levels of abstraction, providing a much richer picture of the knowledge and a better understanding of the domain.

In "Meta-classification: Combining Multimodal Classifiers," Lin and Hauptmann present a combination framework called "meta-classification," which models the problem of combining classifiers as a classification problem in itself. They apply the technique to a wearable "experience collection" system, which unobtrusively records the wearer's conversation, recognizes the face of the dialogue partner, and remembers his/her voice. When the system sees the same person's face or hears the same voice, it can then use a summary of the last conversation to remind the wearer. To identify a person correctly from a mixture of audio and video streams, classification judgments from multiple modalities must be effectively combined.
In "Partition Cardinality Estimation in Image Repositories," Fernandez and Djeraba deal with the problem of automatically identifying the number of clusters to be discovered when clustering large repositories of artefacts, such as images. They present an approach that automatically estimates the best partition cardinality (the best number of clusters) in the context of content-based access to image repositories, and suggest a method that drastically reduces the number of iterations needed to extract the best number of clusters.

In "A Framework for Customizable Sports Video Management and Retrieval," Tjondronegoro, Chen and Pham propose a framework for a customizable video management system that can detect the type of video to be indexed. The system manages user preferences and usage history so as to support specific requirements. The authors show how the extracted key segments can be summarized using standard MPEG-7 descriptions in a hierarchical scheme.

In "Style Recognition Using Keyword Analysis," Lorensuhewa, Pham and Geva are interested in supervised classification where the learning sample may be insufficient. They present a framework for augmenting expert knowledge with knowledge extracted from multimedia sources such as text and images, and show how this framework can be applied effectively.
4 Conclusion
The discussions at the workshops revised the scope of multimedia data mining outlined during the previous workshops of the MDM/KDD series, clearly identifying the need to approach multimedia data as a "single unit" rather than ignoring some layers in favor of others. The authors acknowledged the high potential of multimedia data mining methods in the medical domain and in the design and creative industries. There was agreement that research and development in multimedia mining should be extended to collaborative virtual environments, 3D virtual reality systems, the music domain, and e-business technologies. The papers show that many researchers and developers in the areas of multimedia information systems and digital media are turning to data mining for techniques that can improve indexing and retrieval in digital media. There is a consensus that multimedia data mining is emerging as a distinct area of research and development. Work in the area is expected to focus on algorithms and methods for mining images, sound, and video streams. The authors identified a need for: (i) the development and application of specific methods, techniques, and tools for multimedia data mining; and (ii) frameworks that provide a consistent methodology for multimedia data analysis and for the integration of discovered knowledge back into the system where it can be utilized.
5 Acknowledgements
We would like to acknowledge the Program Committee members of MDM/KDD 2002 and KDMCD 2002 who invested their time in carefully reviewing papers for this volume: Frederic Andres (NII, Japan), Marie-Aude Aufaure (INRIA, France), Bruno Bachimont (INA, France), Nadia Bianchi-Berthouze (University of Aizu, Japan), Nozha Boujemaa (INRIA, France), Terry Caelli (University of Alberta, Canada), Liming Chen (ECL, France), Claude Chrisment (University of Toulouse, France), Chitra Dorai (IBM, USA), Alex Duffy (University of Strathclyde, UK), William Grosky (Wayne State University, USA), Howard J. Hamilton (University of Regina, Canada), Jiawei Han (University of Illinois at Urbana-Champaign, USA), Mohand-Saïd Hacid (Claude Bernard University, France), Alexander G. Hauptmann (Carnegie Mellon University, USA), Wynne Hsu (National University of Singapore, Singapore), Odej Kao (Technical University of Clausthal, Germany), Paul Kennedy (University of Technology, Sydney, Australia), Latifur Khan (University of Texas, USA), Inna Kolyshkina (PricewaterhouseCoopers, Australia), Nabil Layaida (INRIA Rhône-Alpes, France), Brian Lovell (University of Queensland, Australia), Mike Maybury (MITRE Corporation, USA), Gholamreza Nakhaeizadeh (DaimlerChrysler, Germany), Mario Nascimento (University of Alberta, Canada), Ole Nielsen (Australian National University, Australia), Monique Noirhomme-Fraiture (FUNDP, Belgium), Vincent Oria (New Jersey Institute of Technology, USA), Jian Pei (SUNY Buffalo, USA), Valery A. Petrushin (Accenture, USA), Jean-Marie Pinon (INSA, France), Mohamed Quafafou (IAAI, France), Zbigniew Rass (UNC Charlotte, USA), Simone Santini (San Diego Supercomputer Center, USA), Florence Sedes (University of Toulouse, France), Pramod Singh (University of Technology, Sydney, Australia), Dong Thi Bich Thuy (University of Hô Chi Minh City, Vietnam), and Duminda Wijesekera (George Mason University, USA). We would also like to thank others who contributed to the Multimedia Workshop series, including the original PC members who reviewed the first set of workshop papers. We are grateful to the SIGKDD and PAKDD organizing committees. Finally, we would like to thank the many participants who brought their ideas, research and enthusiasm to the workshops and proposed many new directions for multimedia mining research.
June 2003
Osmar R. Zaïane, Simeon J. Simoff, Chabane Djeraba
Table of Contents
Subjective Interpretation of Complex Data: Requirements for Supporting Kansei Mining Process . . . . . 1
Nadia Bianchi-Berthouze, Tomofumi Hayashi

Multimedia Data Mining Framework for Raw Video Sequences . . . . . 18
JungHwan Oh, JeongKyu Lee, Sanjaykumar Kote, Babitha Bandi

Object Detection for Hierarchical Image Classification . . . . . 36
Latifur Khan, Lei Wang

Mining High-Level User Concepts with Multiple Instance Learning and Relevance Feedback for Content-Based Image Retrieval . . . . . 50
Xin Huang, Shu-Ching Chen, Mei-Ling Shyu, Chengcui Zhang

Associative Classifiers for Medical Images . . . . . 68
Maria-Luiza Antonie, Osmar R. Zaïane, Alexandru Coman

An Innovative Concept for Image Information Mining . . . . . 84
Mihai Datcu, Klaus Seidel

Multimedia Data Mining Using P-Trees . . . . . 100
William Perrizo, William Jockheck, Amal Perera, Dongmei Ren, Weihua Wu, Yi Zhang

Scale Space Exploration for Mining Image Information Content . . . . . 118
Mariana Ciucu, Patrick Heas, Mihai Datcu, James C. Tilton

Videoviews: A Content Based Video Description Schema and Database Navigation Tool . . . . . 134
Sadiye Guler, Ian Pushee

The Community of Multimedia Agents . . . . . 149
Gang Wei, Valery A. Petrushin, Anatole V. Gershman

Multimedia Mining of Collaborative Virtual Workspaces: An Integrative Framework for Extracting and Integrating Collaborative Process Knowledge . . . . . 164
Simeon J. Simoff, Robert P. Biuk-Aghai

STIFF: A Forecasting Framework for Spatio-Temporal Data . . . . . 183
Zhigang Li, Margaret H. Dunham, Yongqiao Xiao

Mining Propositional Knowledge Bases to Discover Multi-level Rules . . . . . 199
Debbie Richards, Usama Malik

Meta-classification: Combining Multimodal Classifiers . . . . . 217
Wei-Hao Lin, Alexander Hauptmann

Partition Cardinality Estimation in Image Repositories . . . . . 232
Gregory Fernandez, Chabane Djeraba

A Framework for Customizable Sports Video Management and Retrieval . . . . . 248
Dian Tjondronegoro, Yi-Ping Phoebe Chen, Binh Pham

Style Recognition Using Keyword Analysis . . . . . 266
Aruna Lorensuhewa, Binh Pham, Shlomo Geva

Author Index . . . . . 281
Subjective Interpretation of Complex Data: Requirements for Supporting Kansei Mining Process

Nadia Bianchi-Berthouze (1) and Tomofumi Hayashi (2)

(1) Database Systems Lab, University of Aizu, Aizu Wakamatsu, 965-8580, Japan
[email protected]
(2) Japan Advanced Institute of Science and Technology, Nomi Gun, Ishikawa-Ken, Japan
[email protected]
Abstract. Today's technology makes it possible to easily access huge amounts of complex data. As a consequence, techniques are needed for accessing the semantics of such data and supporting the user in selecting relevant information. While meta-languages such as XML have been proposed, they are not suitable for complex data such as images, video, sound or other non-verbal channels of communication, because such data have very subjective semantics, i.e., semantics whose interpretation varies over time and between subjects. Yet, providing access to subjective semantics is becoming critical with the significant increase in interactive systems such as web-based systems or socially interactive robots. In this work, we attempt to identify the requirements for providing access to the subjective semantics of complex data. In particular, we focus on how to support the analysis of those dimensions that give rise to multiple subjective interpretations of the data. We propose a data warehouse as a support for the mining process involved. A unique characteristic of the data warehouse lies in its ability to store multiple hierarchical descriptions of the multimedia data.
1 Introduction
While it sometimes appears as if verbal communication is the primary means of communication in human interaction, the fact is that various channels are used to convey messages, support them, and stress nuances. A large part of the communication consists in sending and interpreting the subjective content of those messages, with subjective information defined as information that can be conveyed and interpreted differently according to the context and the personalities of the sender and listener. To interpret such non-verbal messages, humans rely on implicit models which they adapt through interaction, using their own personal model. Personal models are adapted to meet the different nuances or different interpretations due to other personalities and/or other cultures.
Fig. 1. The interpretation of this posture can change significantly even if only one single feature is modified. For example, if the body is slightly bent forward, it can convey desperation, while if it is slightly bent backwards, it can convey fear.
While it has been argued that basic emotions are similar worldwide [1], their nuances, as well as a larger range of emotions and subjective information, are quite variable and affected by external factors such as mood and goal, and by more global ones such as culture. In this paper, we mainly focus on messages that reflect the affective state of the speaker (where speaker is taken in the generic sense of expressing information through any channel) or the affective state that can be induced by external channels. Why are these messages subjective? The reason is the complexity and richness of content that humans learn to select and filter (see Fig. 1). When looking at another human performing a gesture, what are the relevant features that allow the observer to understand the affective state being conveyed? Is it the movement of each limb, or the speed and amplitude? Similarly, when looking at a postcard sent by a friend, what are the image features that the sender wanted us to focus on? Even when this information is not explicitly stated, humans generally perform well at this task. And indeed, this capability is exploited in fields other than computer science. In artistic studies such as dance [2], choreography, and cinematography [3], experts have studied how to use body expressions, voice, and scenery to communicate emotions and/or to create an atmosphere; designers and advertising companies have used external media, such as images and shapes, to trigger emotional messages.
However, as far as we know, these studies have not been formalized, because they rely primarily on common sense and naive models [4]. In order to have computers handle such information, we need to understand the factors involved. The paper is organized as follows. First, we introduce the approaches used in modeling subjective communication. Then, we discuss the issues that should be addressed to support the modeling of subjective interpretations of complex data. Finally, we propose and describe a framework that allows the analysis of such data at various levels of abstraction and aggregation, and that facilitates the identification of their relevant dimensions.
2 Subjective Communication: A Source of Complex Data
Psychological and physiological studies [1] [5] [6] have shown that basic emotions appear to be found universally across human societies. In addition, they seem to correspond to prototypical emotional states, with particular manifestations associated with them (facial expressions, behavioral tendencies, physiological patterns) and adaptive, survival-related roles that evolved through their value in dealing with situations which are, or were, fundamental in life [7]. Accordingly, various researchers, in computer science in particular, have proposed mechanisms to capture the signals that enable the recognition (and/or simulation) of affective states. Scientists have recently addressed the issue from various perspectives in order to create computational models that would allow a computer or a robot to convey subjective messages and (mainly) to recognize and react to them. The channels of communication used in those studies can be divided into two types: physical channels, e.g., body gestures and facial expressions, and external channels, e.g., images and music.
2.1 Physically Mediated Communication
The affective computing community focuses on giving computers the human-like ability to recognize emotional cues and to use them when interacting with their users. Representative of robotic realizations of this paradigm is Kismet [8], an expressive anthropomorphic robot that engages people in natural and expressive face-to-face interaction. Such studies explicitly exploit the empathy of human caretakers for the human-like characteristics of the system, and try to identify the causes of this empathy. The human-like characteristics are generally achieved by pre-codifying emotional expressions. Naturally, a difficult extension of those studies is to have systems learn, and adapt to, their caretakers' emotional language. Because of the increasing need to provide services through robotic aid systems and through systems that adapt to the affective state of their users, researchers have been studying the various personal modalities of interaction between humans. In [9], humans exploit facial expressions and utterances to convey their states of mind to robots and vice versa. Other studies have dealt with the more general issue of body language.
In [10], emotions conveyed by dance movements are modeled according to two major factors: the personal space and the external space. In the personal space, each emotional movement is characterized in terms of its amplitude, speed, softness, etc.; in the external space, the characterization is based on the direction of the movements. In [11], affective states are recognized by detecting physical changes in the body through various sensors, e.g., heart beat and body temperature. Strictly speaking, even if these changes do not reflect the way in which humans interact, they have been shown to be very powerful indicators of affective states, and they could prove very useful in environments such as hospitals or educational organizations. Applications of these studies have already appeared, e.g., a car by Toyota [12] which measures physical changes (e.g., temperature) in the user to decide which song to play, or to signal to other cars the possibly nervous or sleepy state of its driver.
2.2 Externally Mediated Communication
Recently, the kansei engineering community has focused on multimedia information retrieval systems based on subjective criteria. The systems developed attempt to capture the subjective characteristics of their users, and exploit them to carry out a task on the user's behalf. The focus is on the affective content carried by each external modality considered. Various web-based image search engines [13] [14], art appreciation systems [15] [16], and design support systems [17] [18] [19] have been proposed to allow the retrieval of information on the basis of the subjective impression being conveyed (kansei in Japanese). An airline advertiser could, for example, query a search engine to "retrieve images of airplanes conveying an impression of comfort, quietness and kindness". The search engine would query the database using models of these impression words, i.e. mathematical functions that map the low-level features characterizing multimedia information (an image in this case) onto the word used to label the resulting subjective impression. These models are generally tailored to the subjectivity of each person or group of persons sharing a similar profile. Overall, even if these studies have shown the feasibility and importance of subjective communication, and led to some commercial applications, they have also highlighted the high degree of fuzziness and variability of subjective information. These characteristics are one of the reasons why it is difficult to develop recognition systems able to identify affective states and to adapt to various personalities and situations.
3 Issues in Modeling Subjective Complex Data
While techniques for creating and adapting subjective user models have been widely explored, the analysis of the multiple interpretations and multiple descriptions of affective signals/messages has been largely ignored. Images, for example, allow for multiple interpretations of their content because of the attention and selection mechanisms [20] used by the human brain to filter information (see Fig. 2).
Fig. 2. Our brain uses attention and selection mechanisms to interpret images. The impression conveyed by these images changes according to the focus of attention of the user (we refer to the color versions of these images). For example: a) in the left image, the blue sea might convey an impression of freedom, while the dark wall of leaves may convey the opposite impression; b) in the right image, a romantic impression is caused by the degrading nuances of the red-yellow hues on a dark background, while a peaceful impression is caused by the slight lightening of dark regions, without reference to their hues (colors).
The resulting visual impression changes according to where the attention of the observer is focused, and according to which features are filtered. Filtering and selection mechanisms are triggered by external factors such as mood, experience, goal, etc. The set of signal features involved in recognizing the conveyed emotion is very large and high-dimensional. In [21], two computational systems, based on image processing and neural network learning modules, learn different emotional messages through interaction with users, i.e. the facial expressions of the users as well as the visual impressions of images selected by users. These systems attempt to overcome the complexity of the modeling process by exploiting relevance and externalization feedback from users, who actively participate in the process. But those mechanisms are not always sufficient to handle the high dimensionality of the information. In the two case studies above, the number of muscles, their continuous scale of tension and their possible interaction with other facial features, as well as the colors of the images, their textures and shapes, and their interactions, make the formalization of emotional cues a very hard task. In [22], body gestures are studied to identify the features relevant to creating motions in avatars that are perceived as natural. Such experiments have shown the large number of features involved and the difficulty of formalizing emotion-signal rules.
Even when a connectionist approach is used, the simplification and/or reduction of the data results in poor performance of the models. In other words, by avoiding a deeper analysis of the data involved, the resulting modeling approach cannot account for the complexity and variability of users' subjective interpretations of information. As a consequence, the results obtained so far have not been very encouraging. We suggest that, to improve the performance of subjective user models, information should be analyzed according to:
– multiple interpretations, i.e. subjective perceptions of the information contained in multimedia data;
– the dynamic selection of the features that are salient to a certain subjective impression;
– the fuzziness of, and limits in, the meanings of the words used to label a given subjective experience;
and that a computational environment capable of handling multiple interpretation levels should facilitate this analysis.
[Fig. 3 diagram: the FeedbackConsistency fact table (UserID, WordID, FeedbackID, Session, Variability, Inconsistency) and the KUM_Tuning fact table (ModelID, UserID, WordID, MS_StructureID, Accuracy, ConceptLevel) share the dimension tables Profile (UserID, Gender, Age, Nationality, Studies, Hobbies, Job, WebEngines), Impression (WordID, Nuances, Opposite), FeedbackSet (ImageID, ROI_ID, AgreementValue), KUM (ModelID, MS_FeedbackSetID), MS_FeedbackSet (MS_FeedbackSetID, ImageID, MS_StructureID), MS_Structure (MS_StructureID, Concept, Value), and Image (ImageID, Image, IndexWord).]
Fig. 3. An example of a fact constellation schema to support kansei data mining. It represents the set of data involved in the analysis of the feedback and in the creation and adaptation of Kansei User Models (KUMs). It consists of 2 fact tables: FeedbackConsistency fact table and KUM-Tuning fact table.
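As an illustrative sketch only (the table and attribute names are taken from Fig. 3, while the data types and key constraints are our assumptions, not the schema actually used in the system), the constellation could be declared in SQL along the following lines:

-- dimension tables shared by the two fact tables
CREATE TABLE Profile (
    userID      varchar PRIMARY KEY,
    gender      varchar,
    age         integer,
    nationality varchar,
    studies     varchar,
    hobbies     varchar,
    job         varchar,
    webEngines  varchar
);
CREATE TABLE Impression (
    wordID   varchar PRIMARY KEY,
    nuances  varchar,
    opposite varchar
);
-- one row per user feedback on an impression word
CREATE TABLE FeedbackConsistency (
    userID        varchar REFERENCES Profile,
    wordID        varchar REFERENCES Impression,
    feedbackID    varchar,
    session       integer,
    variability   double precision,
    inconsistency varchar
);
-- one row per tuning step of a Kansei User Model
CREATE TABLE KUM_Tuning (
    modelID        varchar,
    userID         varchar REFERENCES Profile,
    wordID         varchar REFERENCES Impression,
    MS_StructureID varchar,
    accuracy       double precision,
    conceptLevel   integer
);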
In this paper, we propose a hierarchical data model to store and access multiple hierarchical descriptions of complex data at different levels of abstraction.
This is a necessary requirement so as not to limit the possible emergent perspectives on these data and hence facilitate the creation of better computational models. Our framework is described here in the context of the modeling of the mapping between image features and subjective visual impressions.
4 A Framework to Support Complex Subjective Data Analysis
Today's multimedia search engines base the refinement of their retrieval criteria only on relevance feedback [23]. When the user's query is based on subjective criteria, as in the case of impression keywords (e.g., "landscape images that convey a feeling of quietness"), the use of relevance feedback leads to a very low improvement in the systems' performance [24]. In order to create more powerful models that map the features (e.g. color) of an image to the impression it conveys, [21] proposed to integrate the user into the refinement process of such models. The role of the user is to externalize the rationale behind his/her subjective experience. Such externalization processes, supported by digital annotations, aim at identifying the focus of attention and the filtering mechanisms to be used when mapping a certain image to an emotional label. The information present in an image is in fact filtered and aggregated [20] in various ways by our brain. Our state of mind, goal and/or past experiences direct our attention to some aspects of an image, at different levels of abstraction [26] [27]. For example, different areas of the sea landscape shown in Fig. 2 (left) convey different impressions: the blue sea might convey an impression of freedom while the dark wall of leaves may convey the opposite impression. In the second picture of the same figure, a romantic impression is caused by the degrading nuances of the red-yellow hues on a dark background (we refer to the color version of this image), while a peaceful impression is caused by the slight lightening of dark regions, without reference to their hues (colors). Thus, one of the aims of the mining process is to determine the focus of attention in an image and the filtered features and concepts that caused the impression reported in a feedback. The user's digital annotations, such as the identification and labeling of regions in the image, can facilitate the mining process. The user feedback hence becomes complex information in itself and, as such, should be analyzed to shed light on the image characteristics for which the user is indirectly looking. Therefore, even if tools to create computational models have already been proposed [25], there is still a need for a framework to facilitate such a complex analysis and to support the tuning of the algorithms involved. The fact constellation schema shown in Fig. 3 represents an example of the data involved in the analysis of this form of feedback and in the creation and adaptation of computational models for subjective image retrieval. It presents two fact tables: the FeedbackConsistency fact table and the KUM-Tuning fact table. The FeedbackConsistency fact table is described in terms of the userID, the WordID, and the feedbackID collected during image retrieval sessions. The user profile contains information that can affect the user's subjectivity, such as nationality, age, studies, gender, goal, etc.
[Fig. 4 diagram: a tree for the Tone MS_Structure rooted at Color; tonality concepts such as Bright (grouping Sunny and Fresh), Strong, Dull (grouping Dynamic and Mature) and Pale (grouping Soft and Tender) aggregate leaf color features such as Bright-Red, Bright-Yellow, Bright-Purple, Pale-Red and Pale-Purple.]
Fig. 4. Hierarchical aggregation of color features into higher-level tonality concepts.
The wordID identifies the impression word used in the user query, e.g. "quietness". A user feedback consists of an image retrieved by the search engine for the wordID, some user-selected and annotated regions of interest (ROIs) of the image that highlight his/her focus of attention, and finally his/her dis/agreement with the search engine on the evaluation of the image (e.g. "this image is not quiet"). The FeedbackConsistency fact table supports the analysis of the variability of the user's subjectivity, i.e. the variability of his/her image evaluations. This analysis aims at detecting different patterns of inconsistency. We identify three types of inconsistency:
– intrinsic, derived from a different evaluation of the same selected information (temporal evolution in the observer);
– true, derived from the misuse of a word in conveying a subjective experience; and
– attentional, derived from a different reading of the information by the selection mechanism.
The detection of inconsistencies is necessary to drive the mining process and the tuning of the computational models, i.e. the mapping functions between image features and emotional labels. We call these models Kansei User Models (KUMs), where the Japanese word kansei refers to emotions, feelings and other subjective cognitive processes. The KUM-Tuning fact table is described in terms of userID, wordID and SignatureID. A signature (S) describes the content of an image, or of a portion of it (an ROI), according to its color, texture and shape characteristics [28]. This fact table aims at detecting the set of low-level features that are a partial cause of the visual impression the images convey to the user.
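Returning to the FeedbackConsistency table, an intrinsic inconsistency (the first type above) could be surfaced by a query in the following spirit. This is an illustrative sketch, not code from the system: it assumes the attribute names of Fig. 3 and a hypothetical feedbackID column linking FeedbackConsistency to FeedbackSet.

-- hypothetical sketch: pairs of sessions in which the same user judged
-- the same image differently for the same impression word
SELECT f1.userID, f1.wordID, s1.imageID,
       f1.session AS earlier_session, f2.session AS later_session
FROM FeedbackConsistency f1
JOIN FeedbackSet s1 ON s1.feedbackID = f1.feedbackID
JOIN FeedbackConsistency f2
  ON f2.userID = f1.userID AND f2.wordID = f1.wordID
JOIN FeedbackSet s2
  ON s2.feedbackID = f2.feedbackID AND s2.imageID = s1.imageID
WHERE f1.session < f2.session
  AND s1.agreementValue <> s2.agreementValue;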
4.1 MultiSignature: A Hierarchical Meta Schema
To support the identification of the relevant features, we propose to see the low-level features as leaves of hierarchical meta-schemas. These meta-schemas allow the creation and management of various interpretations of image content.
Mining Task: comparison of consistency in two different hierarchical signatures of the user's feedback for the word "quiet":

USE database KanseiFeedback
MINE COMPARISON AS "ToneSignature"
    IN RELEVANCE to judgment, dimension
    WHERE S.type = 'Tone'
VERSUS "HueSignature"
    WHERE S.type = 'Hue'
ANALYZE count%
FROM signature S
WHERE (S.userId = 'Akko' AND S.objectiveLabel = 'landscape'
       AND S.kanseiLabel = 'quiet')
DISPLAY AS table

Fig. 5. An example of a mining activity on the set of signatures of images classified as "quiet" by the user "Akko".
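The task in Fig. 5 is expressed in a DMQL-style mining language; PostgreSQL has no MINE COMPARISON construct. Purely as an illustration of the underlying data selection (table and column names as in Fig. 5; the judgment attribute is an assumption on our part), it could be approximated in plain SQL as:

-- illustrative approximation of the data selection behind Fig. 5:
-- gather Akko's "quiet" landscape feedback, per signature type
SELECT S.type, S.judgment, count(*) AS n
FROM signature S
WHERE S.userId = 'Akko'
  AND S.objectiveLabel = 'landscape'
  AND S.kanseiLabel = 'quiet'
  AND S.type IN ('Tone', 'Hue')
GROUP BY S.type, S.judgment;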
Figure 4 shows an example of the aggregation of low-level features: a hierarchical set of concepts for the tone signature of images. In this particular structure, the color dimensions (leaves) are aggregated into higher-level concepts that combine the feeling conveyed by the tonality aspects of a color with its hue. Using this hierarchy, regions with bright red, purple and yellow colors can be labeled with the concept "sunny regions", while regions with pale tonality of the same hues are labeled as "soft regions". Currently, the schemas are created beforehand using knowledge acquired experimentally or from previous studies in design, color science and psychology. As a future step, we aim to understand how to infer such structures, directly or partially, from user annotations. A typical mining activity on the collected feedback could be to identify the level of image description that results in the highest similarity between images previously classified by the user as conveying the same visual impression. Figure 5 shows an example of a mining activity aimed at comparing the signatures of "quiet" images according to two different hierarchical structure levels. In the first case, we used the tonality information of the image contents without differentiating between hues. In the second case, the color content of the image was described according to the second level of the hierarchical structure of Fig. 4, thus combining the hue and tonality information of the color. We chose the two levels of description by analyzing the variability of the values of the color features in the training sets of nuances of the word "quiet". Such training sets were collected through experiments in which users were asked to group images previously judged as "quiet" by nuances of that impression. Using such signatures, we performed an automatic clustering of the set of images
Fig. 6. Results of the clustering performed on 2 different signature structures of the same set of "quiet" images. The left screenshot shows the result of the clustering performed on the basis of the tonality features, while the right screenshot shows the result of the clustering performed on the basis of the color features. The clusters computed on tonality features seem to reflect better the various nuances of the user's perception of "quietness".
using the Clique algorithm described in [29]. Figure 6 shows the results of the clustering process. The screenshot on the left corresponds to the result of the clustering performed on the tonality signatures of the images, while the screenshot on the right shows the clusters constructed from the color signatures. In each screenshot, a row represents one cluster. The clusters based on tonality features (Fig. 6, left) seem to reflect better the user's perception of "quietness": the first row of the left screenshot could be associated with an impression of "silence"; the second row could be associated with a feeling of "relaxation" and "free time"; the third row represents "romantic" and "peaceful" feelings. The last row consists of outliers. On the other hand, by applying the clustering algorithm to the color features, we obtained a higher number of clusters that were difficult to associate with a specific nuance of "quietness". Finally, even if, by applying a clustering algorithm, we could identify various uses of the impression word "quiet", the number of clusters and their cardinality could be very high, particularly in the case of the outliers. More interesting results could be obtained by applying the clustering algorithms in cascade at the different levels of image description.
5 DBMS Server and Meta-schema Implementation
To support a deeper analysis of the user feedback and improve image retrieval performance, we developed KITE, the KanseI managemenT Environment, whose architecture is shown in Fig. 7. It consists of 3 main modules highlighted in gray: a DBMS server, a software interface and a set of data conversion utilities.
Fig. 7. The KITE (KanseI managemenT Environment) architecture is composed of 3 main modules: a DBMS server, a software interface and a set of data conversion utilities. The conversion utilities receive the feedback from the search engine (e.g. KDIME) and convert them into the hierarchical structures of the DBMS-server. The mining applications can access the DBMS server by calling the functions offered by the software library.
The DBMS server serves as a data warehouse to store the user profiles and the user feedback collected over time from a search engine (called KDIME [13] in the figure). The software interface creates a bridge between the kansei mining applications and the DBMS server by offering a set of functions that facilitate access to the data. User profiles and user feedback are loaded into the DBMS server using the data conversion utilities, which integrate and convert the external data into the DBMS server data model. In this section, we describe in detail the implementation of the model to store multiple hierarchical signatures (MS_Structure), and we skip the straightforward implementation of the data model for the other data involved, e.g. the user profile. We implemented our data model in SQL on PostgreSQL [30], an Object-Relational Database Management System (OR-DBMS). To allow the storage of multiple hierarchical signatures, we adopted the following structure: each node of a hierarchy is characterized by a name, a depth and a value. The name indicates not only the concept but also the parent names of the concept. For example, the node "soft" is labeled "Tone.Pale.Soft", where "Tone" identifies the type of hierarchical structure. This structure also enables the storage of hierarchies whose depth is not predefined. Signatures can have values of different types; for example, the signature describing the percentage of a color in an image will have values of type float, while shape signatures describing the main shapes detected in an image can use the polygonal type (a set of points).
CREATE TABLE MS_Structure (
    imageID   varchar NOT NULL,
    hierarchy varchar NOT NULL,
    nodePath  varchar NOT NULL,
    valueType varchar NOT NULL
);
CREATE TABLE floatValue (
    value double precision
) INHERITS (MS_Structure);
CREATE TABLE textValue (
    value text
) INHERITS (MS_Structure);
.......
Fig. 8. SQL expression for the definition of the tables containing the hierarchical structure of the signature and the signature values.
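As a usage sketch (ours, not code from KITE), a leaf concept value, such as the first row of Table 1 below, can be inserted into the typed child table, and a signature can then be read back at a chosen abstraction level by filtering on the depth of nodePath:

-- store one leaf concept of the Tone hierarchy for image img456
INSERT INTO floatValue (imageID, hierarchy, nodePath, valueType, value)
VALUES ('img456', 'Tone', 'Color.Bright.Sunny.BrightRed', 'float', 0.09);

-- read the Tone signature of img456 at abstraction level 2,
-- i.e. nodePaths with exactly two components, such as 'Color.Bright'
SELECT nodePath, value
FROM floatValue
WHERE imageID = 'img456'
  AND hierarchy = 'Tone'
  AND array_length(string_to_array(nodePath, '.'), 1) = 2;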
To allow the storage of signature values of different types, we used the inheritance mechanism offered by PostgreSQL. We created two kinds of tables to store the signatures: the MS_Structure table and the <type>Value tables; they are created by the SQL expressions given in Fig. 8. The data model of the multi-signature is shown in Fig. 9. The signature values are stored in a <type>Value table according to the type of the values. Hence, each <type>Value table has an attribute "value", to store the value of a concept, as well as all the attributes of the MS_Structure table. Table 1 shows an example of a <type>Value table containing the color signatures of images. Each row of this table corresponds to a node in the hierarchical structure of Fig. 4 for the signature of the image "img456". The second attribute of the table indicates the hierarchical structure used, and the third attribute indicates the complete path of the concept in the hierarchy. The attribute "value" indicates the percentage of pixels in the image that reflect the corresponding color concept. This data model does not require the predefinition of all the types of hierarchical structures that can be stored in the DBMS server; in fact, it allows the introduction at run-time of any new hierarchical structure and the related image signatures. This is an important requirement for our environment, since the visual data mining process could very well request the analysis of new types of aggregation of the low-level features of images.
[Fig. 9 diagram: the base table MS_STRUCTURE (imageID, structureType, nodeName, valueType) is inherited by one table per value type (floatVALUE, textVALUE, ..., polygonalVALUE), each adding a value attribute.]
Fig. 9. Inheritance model for storing multi-type signatures.
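Two consequences of this inheritance design are worth sketching (standard PostgreSQL behavior, shown here with the column names of Fig. 8 as our own example): a query against the parent table transparently scans all typed child tables, and a new value type can be introduced at run-time without touching the existing schema:

-- a new value type can be added at run-time
CREATE TABLE polygonalValue (
    value polygon
) INHERITS (MS_Structure);

-- scans MS_Structure together with every <type>Value child table
SELECT imageID, nodePath FROM MS_Structure;

-- restricts the scan to the parent table alone
SELECT imageID, nodePath FROM ONLY MS_Structure;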
5.1 The Software Interface and KUM Evaluation
To allow easy access to the data, KITE contains a software interface that lets external applications access the DBMS server independently of the DBMS's model and language. The library consists of two packages: one contains the DBMS-access tools, the other is a collection of data-access tools. The software interface module is installed on each client machine and communicates with the DBMS server through TCP/IP, allowing distributed computation. The DBMS-access tools provide the functions for creating the TCP/IP connection between a client application and the DBMS module, as well as the data-conversion functions between DBMS primitive data structures and Java primitive data structures. The data-access tools supply the functions for accessing the data in the DBMS server; in particular, they enable access to data at different abstraction levels by exploiting the hierarchical descriptions of the multi-signatures. The interface module is implemented in Java in order to offer architecture independence. The following code shows an example of access and analysis of the data stored in the DBMS server of KITE. This application uses the library functions to connect to the DBMS server and to extract the signatures of the images that the user "Akko" has classified as "quiet". The retrieved signatures describe these images according to the color concepts in the second level of the hierarchical structure shown in Fig. 4. A clustering algorithm is applied to these signatures in order to detect patterns/clusters that relate to nuances of the impression word "quiet" (e.g. peaceful, silent, etc.) (see Fig. 6). Their detection could facilitate the creation of KUMs for such an impression word [31]. Selected samples from the clusters are used to create KUMs for the impression word "quiet". The KUMs are created by applying the back-propagation algorithm to the signatures of the extracted samples.
Table 1. Example of a <double>Value table. Values in the value column indicate the percentage of the corresponding color features present in the image. The attribute valueType has been left out for readability.

imageID  hierarchy  nodePath                          value
img456   Tone       Color.Bright.Sunny.BrightRed      0.09
img456   Tone       Color.Bright.Sunny.BrightYellow   0.1
img456   Tone       Color.Bright.Sunny.BrightPurple   0.02
img456   Tone       Color.Bright.Sunny                0.4
img456   Tone       Color.Bright.Fresh.BrightCyan     0.01
img456   Tone       Color.Bright.Fresh.BrightGreen    0.1
img456   Tone       Color.Bright.Fresh.White          0.03
img456   Tone       Color.Bright.Fresh                0.15
img456   Tone       Color.Bright                      0.4
.....    .....      .....                             .....
// enclosing class and main signature added for valid Java
public class KumMiningExample {
    static pgConnection con;

    public static void main(String[] args) {
        // settings for the selection of the data to mine
        String username = "Akko";
        String word = "quiet";
        String MS_Structure = "Tone";
        int depth = 2;

        // open connection to the DBMS server of KITE
        String ser = "jdbc:postgresql://hali/kitedb?user=hayashi";
        con = new pgConnection(ser);
        con.createConnection();

        // selection of the data to mine
        clUser usr = getUserByID(con, username);

        // retrieve the image signatures aggregated according
        // to abstraction level 2 of the Tone hierarchy
        String[] images = usr.getImageID(con, word);
        double[][] values = usr.getSignatureTS(con, images, MS_Structure, depth);

        // clustering of the image signatures
        Cluster cl = new Cluster(values);
        boolean check = cl.performClustering();

        // extraction of samples from the clusters and creation of the KUM
        values = cl.stratifiedSampling();
        KUM kum = new KUM(MS_Structure, depth);
        kum.NNtraining(values);
        // ..................
    }
}
With similar code, we could create KUMs by using different levels of the MS Structure. Table 2 shows the comparison of the learning error for 5 different impression words, after having applied the sampling and learning process to the 2nd level (tonality signatures) and to the last level (color signatures) of the same set of images. While the errors are not very different from one type of feature to the other, we can observe that in most cases the KUMs created on the tone signature perform better than the ones created on the color signatures. However, in the case of the word ”warm”, the color seems to be more important than tone. Table 2. Comparison of the training and testing error obtained using 2 different signatures for the same set of images. The training is performed using neural networks fed by the image signatures and trained by the back-propagation algorithm. The numbers indicates the errors obtained by comparing the user evaluation with the system prediction.
Words                     Tone Signatures Set           Color Signatures Set
                          Training Set   Testing Set    Training Set   Testing Set
Atsui (warm)              0.0200         0.0452         0.0188         0.0398
Bukimina (scary)          0.0124         0.0329         0.0177         0.0427
Kakkoii (cool)            0.0331         0.0513         0.0230         0.0402
Miwakuteki (enchanting)   0.0270         0.0504         0.0204         0.0361
Shizuka (quiet)           0.0240         0.0327         0.0462         0.0567

6 Conclusion
The proposed framework supports the management of multi-descriptions of complex data. Multi-descriptions are necessary to investigate the various interpretations that humans usually perform on complex data. This is an important issue when interpreting data that relate to human subjective cognitive processes. Naturally, complex data are not limited to multimedia data; for example, another field of application for such a framework is the analysis of body language in a multi-modal communication environment. In this case, describing a body gesture through a hierarchical representation of body parts or of their features could help the analysis of relevant features with respect to the message being sent or interpreted. From our first experimental results, we could observe the variability between subjects in evaluating the same body gesture: user feedback spanned from observations of the overall movements to specific observations on the movement of a single limb. In addition, we found that the context the observer associated with the expression was affected by some of the elements of the movement. While the hierarchical structure can support the definition of strategies for data analysis, we believe that it could be too limiting in the aggregation process because it does not accurately reflect human aggregation mechanisms. Other aggregation structures should now be explored.
References
1. Ekman, P.: An Argument for Basic Emotions. Cognition and Emotion, Vol. 6, N. 3/4 (1992) 169–200
2. von Laban, R.: The Mastery of Movement. Princeton (1988)
3. Arijon, D.: Grammar of the Film Language. Silman James (1991)
4. Bianchi, N., Bottoni, P., Mussio, P., Rezzonico, G., Strepparava, M.G.: Participatory Interface Design: From Naive Models to Systems. International Conference on Human Computer Interaction (1997)
5. Ekman, P., Friesen, W.V.: Facial Action Coding System. Consulting Psychologists Press, Palo Alto, CA (1976)
6. Schiphorst, T., Fels, S.: Affect Space: Semantics of Caress. Communication of Art, Science, Technology (CAST 01) (2001) 285–287
7. Canamero, L.D., Fredslund, J.: I Show You How I Like You: Human-Robot Interaction through Emotional Expression and Tactile Stimulation. Dept. of Computer Science Technical Report DAIMI PB 544, University of Aarhus, Denmark (2000)
8. Breazeal, C.: Early Experiments Using Motivations to Regulate Human-Robot Interaction. In: Canamero, D. (ed.), Emotional and Intelligent: The Tangled Knot of Cognition. Papers from the 1998 AAAI Fall Symposium. AAAI Technical Report FS-98-03. AAAI Press, Menlo Park, CA (1998)
9. Tojo, T., Matsusaka, Y., Ishi, T., Kobayashi, T.: A Conversational Robot Using Facial and Body Expressions. In: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (2000) 858–863
10. Camurri, A., Hashimoto, S., Ricchetti, M., Ricci, A., Suzuki, K., Trocca, R., Volpe, G.: EyesWeb – Toward Gesture and Affect Recognition in Dance/Music Interactive Systems. Computer Music Journal, Vol. 24, N. 1, MIT Press (2000) 57–69
11. Picard, R.W.: Affective Computing. MIT Press, Cambridge (1997)
12. http://www.cardesignnews.com/autoshows/2001/tokyo/preview/toyota-pod/ (2001)
13. Inder, R., Bianchi-Berthouze, N., Kato, T.: K-DIME: A Software Framework for Kansei Filtering of Internet Material. Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Vol. 6, Tokyo, Japan (1999) 241–246
14. Yoshida, K., Kato, T., Yanoru, T.: A Study of Database Systems with Kansei Information. IEEE International Conference on Systems, Man and Cybernetics '99, Vol. 6, Tokyo, Japan (1999) 253–256
15. Hattori, R., Fujiyoshi, M., Iida, M.: An Education System on WWW for Study of Color Impression of Art Paintings Applied NetCatalog. IEEE International Conference on Systems, Man and Cybernetics '99, Vol. 6, Tokyo, Japan (1999) 218–223
16. Dorai, C., Venkatesh, S. (eds.): Media Computing: Computational Aesthetics. Kluwer Academic Publishers (2002)
17. Imai, T., Yamauchi, K., Ishi, N.: Color Coordination System on Case Based Reasoning System Using Neural Networks. IEEE International Conference on Systems, Man and Cybernetics '99, Vol. 6, Tokyo, Japan (1999) 224–229
18. Lee, S., Harada, A.: A Design Approach by Objective and Subjective Evaluation of Kansei Information. Proceedings of the International Workshop on Robot and Human Communication, IEEE Press, Hamamatsu, Japan (1998) 327–332
19. Shibata, T., Kato, T.: "Kansei" Image Retrieval System for Street Landscape. Discrimination and Graphical Parameters Based on Correlation of Two Images. IEEE International Conference on Systems, Man and Cybernetics '99, Vol. 6, Tokyo, Japan (1999) 247–252
20. Pashler, H.: Attention and Visual Perception: Analyzing Divided Attention. In: Kosslyn, S.M., Osherson, D.N. (eds.), International Journal of Visual Cognition, N. 2, MIT Press (1996) 71–100
21. Bianchi-Berthouze, N., Lisetti, C.: Modeling Multimodal Expression of Users' Affective Subjective Experience. International Journal on User Modeling and User-Adapted Interaction, Vol. 12, N. 1 (2002) 49–84
22. Nakata, T.: Generation of Whole-Body Expressive Movement Based on Somatic Theories. In: Proceedings of the Second International Workshop on Epigenetic Robotics (2002) 105–114
23. Rui, Y., Huang, T.S., Ortega, M., Mehrotra, S.: Relevance Feedback: A Power Tool in Interactive Content-Based Image Retrieval. IEEE Transactions on Circuits and Systems for Video Technology, Vol. 8, N. 5 (1998) 644–655
24. Smeulders, A., Worring, M., Santini, S., Gupta, A., Jain, R.: Content-Based Image Retrieval at the End of the Early Years. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, N. 12 (2000) 10–12
25. Bianchi-Berthouze, N.: Mining Multimedia Subjective Feedback. International Journal of Information Systems, Kluwer Academic Publishers (2002)
26. Jaimes, A., Chang, S.F.: A Conceptual Framework for Indexing Visual Information at Multiple Levels. Internet Imaging 2000, IS&T/SPIE, San Jose, CA (2000)
27. Timpf, S.: Abstraction, Levels of Detail, and Hierarchies in Map Series. In: Freksa, C., Mark, D.M. (eds.), Proceedings of the International Conference on Spatial Information Theory: Cognitive and Computational Foundations of Geographic Information Science (COSIT'99), Lecture Notes in Computer Science 1661, Springer, Berlin-Heidelberg (1999) 125–140
28. Bianchi-Berthouze, N., Berthouze, L.: Exploring Kansei in Multimedia Information. International Journal on Kansei Engineering, Vol. 2, N. 1 (2001) 1–10
29. Agrawal, R., Gehrke, J., Gunopulos, D., Raghavan, P.: Automatic Subspace Clustering of High Dimensional Data for Data Mining Applications. Proceedings of the ACM SIGMOD International Conference on Management of Data, Seattle, Washington (1998)
30. PostgreSQL: http://www.postgresql.org/ (2003)
31. Kobayashi, S.: Colorist: A Practical Handbook for Personal and Professional Use. Kodansha Press (1998)
Multimedia Data Mining Framework for Raw Video Sequences

JungHwan Oh, JeongKyu Lee, Sanjaykumar Kote, and Babitha Bandi

Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX 76019-0015, U.S.A.
{oh, jelee, kote, bandi}@cse.uta.edu
Abstract. We extend our previous work [1] on a general framework for video data mining to further address issues such as how to mine video data, in other words, how to extract previously unknown knowledge and detect interesting patterns. In our previous work, we developed techniques to segment the incoming raw video stream into meaningful pieces and to extract and represent a feature (i.e., motion) for characterizing the segmented pieces. We extend this work as follows. To extract motions, we use an accumulation of quantized pixel differences among all frames in a video segment. As a result, the accumulated motions of a segment are represented as a two dimensional matrix, from which we can obtain a very accurate amount of motion in a segment. Further, we develop a way to capture the location of the motions occurring in a segment, using the same matrix generated for the calculation of the amount. We study how to cluster the segmented pieces using the features (the amount and the location of motions) we extract via this matrix. We investigate an algorithm to find whether a segment contains normal or abnormal events, by clustering and modeling normal events, which occur most of the time. In addition to deciding normal or abnormal, the algorithm computes a Degree of Abnormality of a segment, which represents how distant a given segment is from the existing segments associated with normal events. Our experimental studies indicate that the proposed techniques are promising.

Keywords: Multimedia Data Mining, Video Segmentation, Motion Extraction, Video Data Clustering
1 Introduction
Data mining, which is defined as the process of extracting previously unknown knowledge and detecting interesting patterns from a massive set of data, has been a very active research area. As a result, several commercial products and research prototypes are available nowadays. However, most of these have focused on corporate data, typically in alpha-numeric databases. Multimedia data mining has been performed for different types of multimedia data: image, audio, and video. An example of image data mining is the CONQUEST
[2] system that combines satellite data with geophysical data to discover patterns in global climate change. The SKICAT system [3] integrates techniques for image processing and data classification in order to identify 'sky objects' captured in a very large satellite picture set. The MultiMediaMiner project [4,5,6] has constructed many image understanding, indexing, and mining techniques for digital media. Some advanced techniques can be found in mining knowledge from spatial [7] and geographical [8] databases. An example of video and audio data mining can be found in the Mining Cinematic Knowledge project [9], which creates a movie mining system by examining the suitability of existing concepts in data mining to multimedia, where the semantic content is time sensitive and constructed by fusing data obtained from component streams. A project [10,11] analyzing broadcast news programs has also been reported; the authors developed techniques and tools to provide news video annotation, indexing, and relevant information retrieval along with domain knowledge of the news programs. A data mining framework for audiovisual interaction has been presented [12] to learn the synchronous pattern between two channels and apply it to the speech-driven lip motion of a facial animation system. Another example is a system [13] focusing on echocardiogram video data management that exploits semantic querying through object state transition data modeling and an indexing scheme. We can also find some multimedia data mining frameworks [14,15,16] for traffic monitoring systems. The EasyLiving [17,18] and HAL [19] projects are developing smart spaces that can monitor, predict, and assist the activities of their occupants by using ubiquitous tools that facilitate everyday activities. As mentioned above, there have been some efforts in video data mining for movies, medical videos, and traffic videos. Among them, the development of complex video surveillance systems [20] and traffic monitoring systems [15,16,21,22,23] has recently captured the interest of both the research and industrial worlds, due to the growing availability of cheap sensors and processors and the increasing safety and security concerns. As mentioned in the literature [14], the common approach in these works is that the objects (i.e., person, car, airplane, etc.) are extracted from video sequences and modeled by specific domain knowledge; then the behavior of those objects is monitored (tracked) to find any abnormal situations. What is missing in these efforts is, first, how to index and cluster these unstructured and enormous video data for real-time processing, and second, how to mine them, in other words, how to extract previously unknown knowledge and detect interesting patterns. In this paper, we extend our previous work [1] on a general framework for video data mining to further address the issues discussed above. In our previous work, we developed techniques to segment the incoming video stream into meaningful pieces and to extract and represent a feature (i.e., motion) for characterizing the segmented pieces. We extend this work as follows.
– To extract motions, we use an accumulation of quantized pixel differences among all frames in a segment [1,24]. As a result, the accumulated motions of a segment are represented as a two dimensional matrix. In this way, we can
obtain an accurate measure of the amount of motion in a segment. Further, we develop a way to capture the location of the motions occurring in a segment, using the same matrix generated for the calculation of the amount.
– We study how to cluster these segmented pieces using the features (the amount and the location of motions) extracted above.
– We investigate an algorithm to find whether a segment contains normal or abnormal events, by clustering and modeling the normal events, which occur most of the time. In addition to deciding normal or abnormal, the algorithm computes a Degree of Abnormality (Ψ) of a segment, which represents how distant a given segment is from the existing segments containing normal events.

The main contributions of the proposed work can be summarized as follows.
– The proposed technique to compute motions is very cost-effective because an expensive computation (i.e., optical flow) is not necessary. The matrices representing motions show not only the amounts but also the exact locations of the motions. Therefore, we obtain more accurate and richer information about the motion content of a segment. Because the motions are represented as a matrix, comparison among segments is very efficient and scalable.
– Many studies [25,26,27,28] have tried to find abnormal events by modeling the abnormal events themselves. Most of them define some specific abnormal event and try to detect it in video sequences. However, the same abnormal event can occur in many different ways, and it is not possible to predict and model all abnormal events. To find abnormality, our approach uses the normal events, which occur every day and are easy to obtain. We do not have to model any abnormal event separately. Therefore, unlike the others, our approach can be used on any video surveillance sequence to distinguish normal and abnormal events.

The remainder of this paper is organized as follows. In Section 2, to make the paper self-contained, we briefly describe the video segmentation technique relevant to this paper, which was proposed in our previous work [1,24]. How to capture the amount and the location of motions occurring in a segment, how to cluster the segmented pieces, and how to model and detect normal events are discussed in Section 3. The experimental results are discussed in Section 4. Finally, we give our concluding remarks in Section 5.
2 Incoming Video Segmentation
In this section, we briefly discuss the details of the technique in our previous work [1] to group the incoming frames into semantically homogeneous pieces by real-time processing (we call these pieces 'segments' for convenience). To find segment boundaries, instead of comparing two consecutive frames (Figure 1(a)), which is the most common way to detect shot boundaries [29,30,31,32,33], we compare each frame with a background frame, as shown in Figure 1(b). A background frame is defined as a frame with only non-moving components.
Since we can assume that the camera remains stationary for our application, a background frame can be a frame of the stationary components in the image. We manually select a background frame using a similar approach as in [14,21,34]. The solid graph at the top of Figure 2 shows the color histogram difference of the background with each frame in the sequence. The differences are magnified so that segment boundaries can be found more clearly. The algorithm to decompose a video sequence into meaningful pieces (segments) is summarized as follows. Step 1 is preprocessing performed off-line, and Steps 2 through 5 are performed by on-line real-time processing. Note that since this segmentation algorithm is generic, the frame comparison can be done by any technique using color histograms, pixel matching, or edge change ratios. We chose a simple pixel-matching technique for illustration purposes.
Fig. 1. Frame Comparison Strategies
– Step 1: A background frame (F_B) is extracted from a given sequence as preprocessing, and the color space of each frame is quantized (e.g., from 256 to 64 or 32 colors) to reduce noise (false detection of motion, i.e., differences detected as motion that are not actually motion).
– Step 2: Each frame (F_k) arriving at the system is also quantized at the same rate used to quantize the background in the previous step.
– Step 3: Compare all the corresponding (same position) pixels of the two frames (background and each frame). Compute the difference (D_k) between the background (F_B) and each frame (F_k) as follows. Assume that the size of a frame is c × r pixels. Note that the value of D_k is always between zero and one.

$$D_k = \frac{\text{total number of pixels whose colors differ}}{c \times r} \qquad (1)$$

– Step 4: Classify D_k into 10 different categories based on its value, and assign the corresponding category number (C_k) to the frame k. We use 10 categories for illustration purposes, but this value can be changed according to the contents of the video. The classification is stated below.
Fig. 2. Two Frame Comparison Strategies. (The plot shows frame differences by color histogram over frames 0–500; the two curves are the difference with the background and the difference between two consecutive frames.)
• Category 0: D_k < 0.1
• Category 1: 0.1 ≤ D_k < 0.2
• Category 2: 0.2 ≤ D_k < 0.3
• Category 3: 0.3 ≤ D_k < 0.4
• Category 4: 0.4 ≤ D_k < 0.5
• Category 5: 0.5 ≤ D_k < 0.6
• Category 6: 0.6 ≤ D_k < 0.7
• Category 7: 0.7 ≤ D_k < 0.8
• Category 8: 0.8 ≤ D_k < 0.9
• Category 9: D_k ≥ 0.9
– Step 5: For real-time on-line processing, a temporary table such as Table 1 is maintained. To do this, and to build a hierarchical structure from a sequence, compare C_k with C_{k−1}; in other words, compare the category number of the current frame with that of the previous frame. We can build a hierarchical structure from a sequence based on these categories, which are not independent of each other: we consider that the lower categories contain the higher categories, as shown in Figure 3. For example, if a segment A of Category 1 starts with frame a and ends with frame b, and a segment B of Category 2 starts with frame c and ends with frame d, then it is possible that a < c < d < b. In our hierarchical segmentation, therefore, finding segment boundaries means finding category boundaries, in which we find a starting frame (S_i) and an ending frame (E_i) for each category i. The following algorithm shows how to find these boundaries.
Table 1. Segmentation Table
Fig. 3. Relationships (Containments) among Categories
• If C_{k−1} = C_k, then no segment boundary occurs, so continue with the next frame.
• Else if C_{k−1} < C_k, then S_{C_k} = k, S_{C_k − 1} = k, ..., S_{C_{k−1} + 1} = k. In other words, the starting frames of categories C_k through C_{k−1} + 1 are k.
• Else, in other words if C_{k−1} > C_k, then E_{C_{k−1}} = k − 1, E_{C_{k−1} − 1} = k − 1, ..., E_{C_k + 1} = k − 1. The ending frames of categories C_{k−1} through C_k + 1 are k − 1.
• If the length of a segment is less than a certain threshold value (β), we ignore this segment since it is too short to carry any semantic content. In general, this value β is one second; in other words, we assume that the minimum length of a segment is one second.
A compact sketch of this boundary bookkeeping is given below.
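The following Java sketch illustrates one possible reading of Steps 3 through 5; it is a minimal sketch, not code from the paper, and the class and member names (SegmentBoundaryFinder, frameDifference, processFrame) are our own illustrative assumptions. Frames are assumed to be already quantized to integer color indices.

public class SegmentBoundaryFinder {
    static final int CATEGORIES = 10;
    int[] start = new int[CATEGORIES];  // S_i: starting frame of category i
    int[] end = new int[CATEGORIES];    // E_i: ending frame of category i
    int prevCategory = 0;               // C_{k-1}

    // D_k (Eq. 1): fraction of pixels whose quantized colors differ
    // from the background frame
    double frameDifference(int[][] background, int[][] frame, int c, int r) {
        int differing = 0;
        for (int y = 0; y < r; y++)
            for (int x = 0; x < c; x++)
                if (background[y][x] != frame[y][x]) differing++;
        return (double) differing / (c * r);
    }

    // Steps 4-5: categorize D_k and update the category boundaries
    void processFrame(int k, double dk) {
        int ck = Math.min((int) (dk * 10), 9);   // category C_k
        if (ck > prevCategory)                    // higher categories open at k
            for (int i = prevCategory + 1; i <= ck; i++) start[i] = k;
        else if (ck < prevCategory)               // higher categories close at k-1
            for (int i = ck + 1; i <= prevCategory; i++) end[i] = k - 1;
        prevCategory = ck;
    }
}

Segments shorter than the threshold β (one second's worth of frames) would then be discarded in a post-processing pass.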
3 New Proposed Techniques
In this section, we propose new techniques to capture the amount and the location of the motions occurring in a segment, to cluster the segmented pieces, and to model and detect normal events.

3.1 Motion Feature Extraction
We describe how to extract and represent motions from each segment decomposed from a video sequence as discussed in the previous section. In our previous works [24,35], we developed a technique for the automatic measurement of the overall motion not only in two consecutive frames but also in an entire shot, which is a collection of frames. In this section, we extend this technique to extract the motion from a segment and represent it in a comparable form. We compute a Total Motion Matrix (TMM), which is considered the overall motion of a segment and is represented as a two dimensional matrix. For comparison purposes among
segments with different lengths (in terms of number of frames), we also compute an Average Motion Matrix (AMM), and the corresponding Total Motion (TM) and Average Motion (AM). The TMM, AMM, TM, and AM for a segment with n frames are computed using the following algorithm (Steps 1 through 5). We assume that the frame size is c × r pixels.
– Step 1: The color space of each frame is quantized (e.g., from 256 to 64 or 32 colors) to reduce unwanted noise (false detection of motion, i.e., differences detected as motion that are not actually motion).
– Step 2: An empty two dimensional matrix TMM (whose size (c × r) is the same as that of a frame) for a segment S is created as follows, with all its items initialized to zero.
$$TMM_S = \begin{pmatrix} t_{11} & t_{12} & t_{13} & \cdots & t_{1c} \\ t_{21} & t_{22} & t_{23} & \cdots & t_{2c} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ t_{r1} & t_{r2} & t_{r3} & \cdots & t_{rc} \end{pmatrix} \qquad (2)$$

And AMM_S, the matrix whose items are the corresponding averages, is computed as follows:

$$AMM_S = \begin{pmatrix} t_{11}/n & t_{12}/n & t_{13}/n & \cdots & t_{1c}/n \\ t_{21}/n & t_{22}/n & t_{23}/n & \cdots & t_{2c}/n \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ t_{r1}/n & t_{r2}/n & t_{r3}/n & \cdots & t_{rc}/n \end{pmatrix} \qquad (3)$$
– Step 3: Compare all the corresponding quantized pixels in the same position of each frame and the background frame. If they have different colors, increase the matrix value (t_{ij}) in the corresponding position by one (this value may be larger according to other conditions); otherwise, it remains the same.
– Step 4: Step 3 is repeated until all n frames in a shot have been compared with the background frame.
– Step 5: Using the above TMM_S and AMM_S, we compute the motion features TM_S and AM_S as follows:

$$TM_S = \sum_{i=1}^{r} \sum_{j=1}^{c} t_{ij}, \qquad AM_S = \sum_{i=1}^{r} \sum_{j=1}^{c} \frac{t_{ij}}{n} \qquad (4)$$
As seen in these formulae, TM is the sum of all items in the TMM, and we consider this the total motion in a segment. In other words, TM can indicate the amount of motion in a segment. However, TM depends not only on the amount of motion but also on the length of a segment: the TM of a long segment with little motion can be equal to the TM of a short segment with a lot of motion. To distinguish these, we simply use AM, which is an average of TM.
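As a minimal illustration of Steps 1 through 5, the following Java sketch accumulates the TMM over the frames of a segment and derives TM and AM from it; the method names are our own, and frames are assumed to be pre-quantized integer color matrices.

// Steps 3-4: accumulate quantized pixel differences over all n frames
int[][] totalMotionMatrix(int[][][] frames, int[][] background, int c, int r) {
    int[][] tmm = new int[r][c];  // all items initialized to zero (Step 2)
    for (int[][] frame : frames)
        for (int i = 0; i < r; i++)
            for (int j = 0; j < c; j++)
                if (frame[i][j] != background[i][j]) tmm[i][j]++;
    return tmm;
}

// TM (Eq. 4): the sum of all items of the TMM
double totalMotion(int[][] tmm) {
    double tm = 0;
    for (int[] row : tmm)
        for (int t : row) tm += t;
    return tm;
}

// AM: the average of TM over the n frames of the segment
double averageMotion(int[][] tmm, int n) {
    return totalMotion(tmm) / n;
}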
3.2 Location of Motion
Comparing segments only by the amount of motion (i.e., AM) would not give very accurate results because it ignores the locality, i.e., where the motions occur. We introduce a technique to capture locality information without using partitioning, which is described as follows. In the proposed technique, the locality information of the AMM can be captured by two one dimensional matrices, which are the summation of the column values and the summation of the row values of the AMM. These two arrays are called the Summation of Columns (SC) and the Summation of Rows (SR) to indicate their actual meanings. The following equations show how to compute SC_A and SR_A from AMM_A:

$$SC_A = \left( \sum_{i=1}^{r} a_{i1}, \; \sum_{i=1}^{r} a_{i2}, \; \ldots, \; \sum_{i=1}^{r} a_{ic} \right)$$

$$SR_A = \left( \sum_{j=1}^{c} a_{1j}, \; \sum_{j=1}^{c} a_{2j}, \; \ldots, \; \sum_{j=1}^{c} a_{rj} \right)$$
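A direct computation of the two arrays might look as follows in Java (a sketch; the AMM is assumed to be given as a matrix of doubles):

// SC: for each column, sum the AMM values over all rows
double[] summationOfColumns(double[][] amm, int c, int r) {
    double[] sc = new double[c];
    for (int i = 0; i < r; i++)
        for (int j = 0; j < c; j++)
            sc[j] += amm[i][j];
    return sc;
}

// SR: for each row, sum the AMM values over all columns
double[] summationOfRows(double[][] amm, int c, int r) {
    double[] sr = new double[r];
    for (int i = 0; i < r; i++)
        for (int j = 0; j < c; j++)
            sr[i] += amm[i][j];
    return sr;
}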
To visualize the computed TMM (or AMM), we can convert it into an image, which is called a Total Motion Matrix Image (TMMI) for a TMM (an Average Motion Matrix Image (AMMI) for an AMM). As an example, let us convert a TMM whose maximum value is m into a 256 gray scale image (an AMM can be converted in the same way). If m is greater than 256, m and the other values are scaled down to fit into 256; otherwise, they are scaled up. The value zero, however, remains unchanged. An empty image with the same size as the TMM is created as the TMMI, and the corresponding value of the TMM is assigned as a pixel value: for example, a white pixel for the matrix value zero, which means no motion, and a black pixel for the matrix value 256, which means maximum motion in a given shot. Assuming the TMMI is a 256 gray scale image, each pixel value can be computed as follows after the scaling:

Each Pixel Value = 256 − Corresponding Matrix Value

Figure 4 shows some visualization examples of AMMIs, SCs, and SRs, illustrating how SC and SR can capture where the motions occur. The two SRs in Figure 4(a) are the same, which means that the vertical locations of the two motions are the same. Similarly, Figure 4(b) shows by the SCs that the horizontal locations of the two motions are the same. Figure 4(c) shows the combination of the two, i.e., both horizontal and vertical location changes.
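The conversion to a gray scale image can be sketched as follows in Java (assuming an 8-bit gray scale with 255 as the maximum usable value; the method name is illustrative):

// Convert a TMM with maximum value m into a gray-scale TMMI:
// zero (no motion) maps to white, the maximum value maps to black
int[][] toMotionImage(int[][] tmm, int m) {
    int r = tmm.length, c = tmm[0].length;
    int[][] tmmi = new int[r][c];
    for (int i = 0; i < r; i++)
        for (int j = 0; j < c; j++) {
            int scaled = (m == 0) ? 0 : tmm[i][j] * 255 / m;  // scale up or down
            tmmi[i][j] = 255 - scaled;  // pixel value = max gray - matrix value
        }
    return tmmi;
}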
3.3 Clustering of Segments
In our clustering, we employ a multi-level hierarchical clustering approach to group segments in terms of the category and the motion of the segments. The algorithm is implemented in a top-down fashion, where the feature category is utilized at the top level; in other words, we group segments into k1 clusters according
Fig. 4. Comparisons of Locations of Motions
to the categories. For convenience, we call this feature the Top Feature. Each cluster is then clustered again into k2 groups based on the motion (AM) extracted in the previous section, which is called the Bottom Feature. We will consider more features (i.e., SC and SR) for the clustering in the future. For this multi-level clustering, we adopted the K-Means algorithm and the cluster validity method studied by Ngo et al. [36], since K-Means is the most frequently used clustering algorithm due to its simplicity and efficiency. It is employed to cluster segments at each level of the hierarchy independently. The K-Means algorithm is implemented as follows.
– Step 1: The initial centroids are selected in the following way:
1. Given v d-dimensional feature vectors, divide the d dimensions into k subspaces of size ρ = d/k. These subspaces are indexed by [1, 2, 3, ..., ρ], [ρ + 1, ρ + 2, ..., 2ρ], ..., [(k − 1)ρ + 1, (k − 1)ρ + 2, (k − 1)ρ + 3, ..., kρ].
2. In each subspace j of [(j − 1)ρ + 1, ..., jρ], associate a value f_{ij} with each feature vector F_i by $f_{ij} = \sum_{d=(j-1)\rho+1}^{j\rho} F_i(d)$.
3. Choose the initial cluster centroids µ_1, µ_2, ..., µ_k by $\mu_j = \arg\max_{F_i,\, 1 \le i \le v} f_{ij}$.

[...]

for (int y = 0; y < imageHeight; y++) {
    for (int x = 0; x < imageWidth; x++) {
        if (MO(x, y).I > T_I  OR  MO(x, y).h > T_H  OR  MO(x, y).s > T_S)
            Pixel[x][y] is an edge pixel;
        else
            Pixel[x][y] is a region pixel;
    }
}

After edge detection, all image pixels are divided into two sets: the edge pixel set (EPS) and the region pixel set (RPS). We then move on to the region growing calculations.

3.2 Region Growing

The detected edges cut the image into a set of regions. We pick a pixel from the RPS randomly as a seed for a new region, Ri. During the growing of Ri, all pixels in this region are moved out of the RPS and assigned to this newborn region. After this region is fully grown, if the RPS is not empty, the algorithm simply picks another pixel randomly as the seed for another new region. This process continues until all pixels in the RPS are placed in a set of regions.
Fig. 2. Region growing
The growth of the regions must satisfy certain criteria; if the criteria cannot be satisfied, the growth in the given direction is stopped. A. Trémeau et al. introduced three criteria for region growing: one local homogeneity criterion (LHC) and two average homogeneity criteria (AHC) [19]. We define p as the pixel to be processed, R as the set of pixels in the current region (possibly not fully grown), and V as the subset of pixels from the current region that are neighbors of p. The LHC states that the color differences between p and its neighbors in R are sufficiently small. AHC1 states that the color difference between p and the mean of the colors in V is sufficiently small. AHC2 states that the color difference between p and the mean of the colors in
R is sufficiently small. Each of the three criteria must be satisfied for p to be merged into R. A region grows as follows. At first, the seed pixel is the only pixel that the region R has. Pixels of R fall into two categories: boundary pixels (BP) and inner pixels (IP). A pixel is a boundary pixel if at least one of its 8 neighbor pixels is not in the region to which it belongs; on the other hand, a pixel is an inner pixel if all of its 8 neighbor pixels are in the region to which it belongs. At the beginning, the seed pixel is the only boundary pixel of the region. Next, we check the availability of the 8 neighbor pixels of this boundary pixel. A pixel is available only when it is contained in the RPS, which means the pixel is not an edge pixel and has not yet been assigned to some other region. If any of these pixels is available and satisfies the criteria, the pixel qualifies to be a member of R. After the addition of a pixel into region R, it becomes a new boundary pixel of the region, and the inner and boundary pixels of the region must be updated accordingly. For example, in Fig. 2, after adding pixel A into region R, A becomes a new boundary (red) pixel, pixel C becomes a current neighbor (yellow) pixel of boundary pixel A, and pixel B is no longer a boundary pixel and becomes an inner (blue) pixel. Based on these two characteristics, we keep checking and updating boundary pixels until the region stops growing. Then we can say the region is fully grown. The pseudo code is as follows.
int i = 0;
while (RPS is not empty) {
    i++;
    pick a pixel from RPS randomly as a seed and assign it to a new set Ri;
    for each boundary pixel r of Ri {
        for each neighbor pixel n of r that is not in BP and IP {
            if (LHC and AHC are satisfied for n) {
                move the pixel n from RPS to Ri;
                update RPS and Ri;
            }
        }
    }
}

3.3 Merging Adjacent Regions

We still encounter several shortcomings. First, it is possible to obtain noise regions that are not true regions. Second, it is still possible to cut one object into several sub-regions even when it has a uniform color; for example, a basketball could be divided into several sub-regions due to its black lines. Intuitively, these two problems can be solved by merging adjacent regions. First, we need to construct a region adjacency graph (RAG) based on the regions [19]. In a RAG, each vertex represents a sub-region, and an edge connects the two vertices that stand for two adjacent regions (shown in Fig. 3). The edges are weighted by the color difference between the two regions. (A possible representation is sketched below.)
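One possible in-memory representation of the RAG, sketched in Java (the class and field names are our own assumptions):

import java.util.*;

class RAG {
    // vertex = region index; each edge carries the color difference as its weight
    Map<Integer, Map<Integer, Double>> adj = new HashMap<>();

    void addEdge(int regionA, int regionB, double colorDiff) {
        adj.computeIfAbsent(regionA, k -> new HashMap<>()).put(regionB, colorDiff);
        adj.computeIfAbsent(regionB, k -> new HashMap<>()).put(regionA, colorDiff);
    }

    boolean adjacent(int a, int b) {
        return adj.getOrDefault(a, Collections.emptyMap()).containsKey(b);
    }
}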
Fig. 3. Region Adjacency Graph
To construct the RAG, we have to know whether any two given regions are adjacent or not. The following two approaches can be used.
Fig. 4. Examples of Adjacent Regions Detection
Minimum Bounding Rectangle Technique (MBRT): In this approach, a minimum bounding rectangle is constructed for each region [24]. Two regions are considered adjacent to each other if their minimum bounding rectangles overlap. The minimum bounding rectangle of a region not only encompasses the region but may also surround some other regions, which may contribute false positives (regions that are not truly adjacent).

Matrix Oriented Technique (MOT): Here we keep a two dimensional matrix where each cell corresponds to a pixel, and the content of the cell is the index of the region to which the pixel belongs. Note that edge pixels receive special treatment: −1 is used as their region index. To find adjacent regions, we simply scan the matrix row by row and column by column. For example, in Fig. 4, each gray pixel labeled −1 is an edge pixel; the other pixels are region pixels, and the number indicates the region to which the pixel belongs. When we scan through the matrix row by row and column by column, and the region index changes from a to b (say), we can say that region a is adjacent to region b. For example, when we scan the first row in Fig. 4(a), we find that regions 5 and 3 are adjacent to each other. When we scan the seventh column in Fig. 4(a), we
find that regions 3 and 2 are adjacent. This method is easy to implement, and its computational complexity is O(n). On the other hand, the MOT has a shortcoming: in some special cases, it may wrongly declare regions adjacent. For example, when we scan the fifth row of the matrix in Fig. 4(b), regions 2 and 3 are declared adjacent even though these two regions are separated by six edge pixels. This raises the issue of what the maximum number of edge pixels is that may separate two regions while still declaring them adjacent. This threshold depends on the edge detection result and the region size scale. With regard to the first problem (i.e., noise regions), we first identify noise regions based on the adjacency graph: if a region contains only a small number of pixels, we declare it a noise region and merge it into the neighbor region with the smallest color difference. With regard to the second problem (i.e., over-segmentation into sub-regions), we merge adjacent regions by using a modified minimum spanning tree algorithm (MMSTA). In the MMSTA, a threshold tw is defined, and the tree is constructed under an additional constraint: the weight of each edge in the tree must fall below tw. All regions in the tree compose an object, because the color difference between a region and all its neighbor regions in the tree falls below tw. Pseudo code for merging adjacent regions is shown below.
calculate the average color value for each Ri;
construct a RAG;
define Tw;
sort all edges;
while (there are still edges and vertices not added to the tree) {
    for each edge, in order, test whether it creates a cycle in the
    tree we have built thus far, or whether its weight is more than Tw;
    if so, discard it;
    else, add it to the tree;
}
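The same procedure can be sketched in Java using a union-find structure in place of an explicit tree; this is an illustrative implementation of the thresholded Kruskal-style merge, not code from the paper.

import java.util.*;

class RegionMerger {
    int[] parent;  // union-find: regions in one component form one object

    RegionMerger(int numRegions) {
        parent = new int[numRegions];
        for (int i = 0; i < numRegions; i++) parent[i] = i;
    }

    int find(int x) { return parent[x] == x ? x : (parent[x] = find(parent[x])); }

    // edges = {regionA, regionB, weight}; edges heavier than tw are discarded,
    // and edges that would close a cycle (same component) are skipped
    void merge(List<double[]> edges, double tw) {
        edges.sort(Comparator.comparingDouble(e -> e[2]));
        for (double[] e : edges) {
            if (e[2] > tw) break;  // sorted, so all remaining edges are heavier
            int a = find((int) e[0]), b = find((int) e[1]);
            if (a != b) parent[a] = b;  // join the two regions into one object
        }
    }
}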
4 Experimental Results

The object detection was tested using sample images found on the Internet. Here, due to space limitations, we report results for only three images, which contain objects of varying degrees of complexity. Figs. 5, 6, and 7 show these three images and display the partition results. For each image, the original test image and the edge detection results are shown first; then all major detected objects are displayed. As shown in Fig. 5, in the first image we detected the six major objects. In the second image, objects are coarsely classified. In the third image, objects are correctly identified. As discussed above in Section 2, the use of hue values in region growing
makes our method shading and highlight invariant. This is illustrated in the segmentation result in Fig. 7. Note that the blue object in this image has a highlight region on its surface, and that our method segments this object correctly.
Fig. 5. Image Segmentation Results (a) Original Image (b) Boundary Detection Result (c) – (h) Six Major Objects
As we discussed in Section 3.1, boundary detection is based upon intensity, hue, and saturation differences among regions of an image. However, we noticed in the experiments that, for most color images, the boundary detection performance would
not be improved by the use of hue and saturation differences. Thus, the boundary detection results reported here are based on intensity differences only (Fig. 5 – Fig. 7). In Fig. 5, the color layout is simple, and our segmentation approach performs well in this case.
Fig. 6. Image Segmentation Results (a) Original image. (b) Boundary detection result (c) – (f) Major detected objects.
The color distribution of Fig. 6 is more complex than that of the former image. Basically, regions having similar colors are clustered together to form objects. We noticed that the variability of the boundary detection threshold may affect the final segmentation results, because region growing is based on the detected boundary. Furthermore, we also noticed that even if we cluster regions having similar colors together, the retrieved
objects can still sometimes lack semantic meaning. This is one of the shortcomings of all existing image segmentation methods, including our approach.
Fig. 7. Image Segmentation Results (a) Original image. (b) Boundary detection result. (c) - (d) Major detected objects
As shown in Fig. 7, the color layout is simple in this case; however, the color on the surface of the blue cylinder is not homogeneous, as some highlight regions exist. As we discussed in Section 2, to solve this problem we do not have to calculate the color vector angle as S. Wesolkowski did [21,22,23]. Region growing in our approach is based on hue value differences, which makes our approach highlight and shading invariant. As shown in Fig. 7(c), even though the highlight region exists, we can still retrieve the blue cylinder as a single object. Of course, there is a tradeoff for using hue values in region growing, because a hue value is not always equivalent to the color humans perceive. Due to changes in intensity and saturation, two colors that look very different can have very similar hue values; as a result, we may merge two regions having different colors but similar hue values. Fig. 5(f) illustrates this: the hair region looks black, which is very different from the face region, yet the hue values of these two regions are very similar.
5 Conclusion and Future Works

The success of the ontology-based image classification model depends entirely on the detection of object boundaries. We have proposed an automatic, scalable object boundary detection algorithm based on edge detection and region growing techniques. We have also proposed an efficient merging algorithm that joins adjacent regions using an adjacency graph, to avoid over-segmentation of regions. To illustrate the effectiveness of our algorithm in automatic image classification, we implemented a very basic system aimed at the classification of images in the sports domain. By identifying objects in images, we have shown that our approach works well when objects in images have varying degrees of complex organization along with shading and highlights. We would like to extend the work in the following directions. First, we would like to do more experiments on object similarity. Next, we will automatically update the weights of objects appearing in images.
References
[1] R. Barber, W. Equitz, C. Faloutsos, M. Fickner, W. Niblack, D. Petkovic, and P. Yanker, "Query by Content for Large On-Line Image Collections", IEEE Journal, 1995.
[2] C. Breen, L. Khan, Arun Kumar, and Lei Wang, "Ontology-based Image Classification Using Neural Networks," to appear in SPIE, Boston, MA, July 2002.
[3] L. H. Chen and S. Chang, "Learning Algorithms and Applications of Principal Component Analysis", in Image Processing and Pattern Recognition, Chapter 1, C. T. Leondes (ed.), Academic Press, 1998.
[4] Y. Deng, B. S. Manjunath, and H. Shin, "Color Image Segmentation", in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 1999.
[5] C. Frankel, M. J. Swain, and V. Athitsos, "WebSeer: An Image Search Engine for the World Wide Web," University of Chicago Technical Report TR-96-14, July 31, 1996.
[6] Y. Gong and H. J. Zhang, "An Effective Method for Detecting Regions of Given Colors and the Features of the Region Surfaces", in Proc. of Symposium on Electronic Imaging Science and Technology: Image and Video Processing II, pp. 274–285, San Jose, CA, February 1994, IS&T/SPIE.
[7] N. Ito, Y. Shimazu, T. Yokoyama, and Y. Matushita, "Fuzzy Logic Based Non-Parametric Color Image Segmentation with Optional Block Processing", in Proc. of ACM, 1995.
[8] A. K. Jain, "Fundamentals of Digital Image Processing", Prentice Hall, Englewood Cliffs, NJ, 1989.
[9] L. Khan and Lei Wang, "Automatic Ontology Derivation Using Clustering for Image Classification", in Proc. of 8th International Workshop on Multimedia Information Systems, Tempe, Arizona, USA, Oct. 2002, pp. 56–65.
[10] W. Niblack, R. Barber, W. Equitz, M. Flickner, E. Glasman, D. Petkovic, P. Yanker, C. Faloutsos, and G. Taubin, "The QBIC Project: Querying Images by Content Using Color, Texture, and Shape", in Proc. of Storage and Retrieval for Image and Video Databases, Volume 1908, pp. 173–187, Bellingham, WA, 1993.
[11] A. Pentland, R. W. Picard, and S. Sclaroff, "Photobook: Tools for Content-Based Manipulation of Image Databases", in Proc. of Storage and Retrieval for Image and Video Databases II, Volume 2185, pp. 34–47, Bellingham, WA, 1994.
[12] N. Row and B. Frew, "Automatic Classification of Objects in Captioned Depictive Photographs for Retrieval", in Intelligent Multimedia Information Retrieval, Chapter 7, M. Maybury (ed.), AAAI Press, 1997.
[13] K. Schluns and A. Koschan, "Global and Local Highlight Analysis in Color Images", in CGIP00 [8], pp. 147–151.
[14] A. F. Smeaton and A. Quigley, "Experiments on Using Semantic Distances between Words in Image Caption Retrieval," in Proc. of the Nineteenth Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 1995.
[15] J. R. Smith and S. F. Chang, "Automated Binary Texture Feature Sets for Image Retrieval", in Proc. of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2241–2244, Atlanta, GA, 1996.
[16] J. R. Smith and S. F. Chang, "Tools and Techniques for Color Image Retrieval", in Proc. of the Symposium on Electronic Imaging: Science and Technology, Storage and Retrieval for Image and Video Databases IV, pp. 426–437, San Jose, CA, 1996.
[17] M. J. Swain and D. H. Ballard, "Color Indexing", International Journal of Computer Vision, 7(1), pp. 11–32, 1991.
[18] D. Tseng and C. Chang, "Color Segmentation Using Perceptual Attributes", in Proc. of the 11th International Conference on Pattern Recognition, pp. 228–231, Amsterdam, Holland, September 1992, IAPR, IEEE.
[19] A. Trémeau and P. Colantoni, "Regions Adjacency Graph Applied to Color Image Segmentation," IEEE Transactions on Image Processing, 1998.
[20] Lei Wang, Latifur Khan, and Casey Breen, "Object Boundary Detection for Ontology-based Image Classification", Third International Workshop on Multimedia Data Mining, Edmonton, Alberta, Canada, July 2002.
[21] S. Wesolkowski, M. E. Jernigan, and R. D. Dony, "Global Color Image Segmentation Strategies: Euclidean Distance vs. Vector Angle," in Y.-H. Hu, J. Larsen, E. Wilson, and S. Douglas (eds.), Neural Networks for Signal Processing IX, IEEE Press, Piscataway, NJ, 1999, pp. 419–428.
[22] S. Wesolkowski, "Color Image Edge Detection and Segmentation: A Comparison of the Vector Angle and the Euclidean Distance Color Similarity Measures", Master's thesis, Systems Design Engineering, University of Waterloo, Canada, 1999.
[23] S. Wesolkowski, S. Tominaga, and R. D. Dony, "Shading and Highlight Invariant Color Image Segmentation Using the MPC Algorithm," SPIE Color Imaging: Device-Independent Color, Color Hardcopy, and Graphic Arts VI, San Jose, USA, January 2001, pp. 229–240.
[24] S. Wong and W. Leow, "Color Segmentation and Figure-Ground Segregation of Natural Images," in Proc. Int. Conf. on Image Processing (ICIP 2000), Volume 2, pp. 120–123, 2000.
Mining High-Level User Concepts with Multiple Instance Learning and Relevance Feedback for Content-Based Image Retrieval

Xin Huang1, Shu-Ching Chen1, Mei-Ling Shyu2, and Chengcui Zhang1

1 Distributed Multimedia Information System Laboratory, School of Computer Science, Florida International University, Miami, FL 33199, USA
{xhuan001, chens, czhang02}@cs.fiu.edu
http://dmis.cs.fiu.edu/
2 Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL 33124, USA
[email protected]
Abstract. Understanding and learning the subjective aspect of humans in Content-Based Image Retrieval has been an active research field during the past few years. However, how to effectively discover users' concept patterns when multiple visual features exist in the retrieval system still remains a big issue. In this book chapter, we propose a multimedia data mining framework that incorporates Multiple Instance Learning into user relevance feedback in a seamless way to discover the concept patterns of users, especially the region of an image the user is most interested in and how to map the local feature vector of that region to the user's high-level concept patterns. This underlying mapping can be progressively discovered through the feedback and learning procedure. The role the user plays in the retrieval system is to guide the mining process toward his/her own focus of attention. The retrieval performance is tested to show the feasibility and effectiveness of the proposed multimedia data mining framework.
1 Introduction
The availability of today's digital devices and techniques offers people more opportunities than ever to create their own digital images. Moreover, the Internet has become the major platform for obtaining, distributing, and exchanging digital image data. The rapid increase in the amount of image data and the inefficiency of traditional text-based image retrieval have created great demand for new approaches to image retrieval. As a consequence of such fast growth of digital image databases, the development of efficient search mechanisms has become more and more important. Content-Based Image Retrieval (CBIR) has emerged and
Shu-Ching Chen gratefully acknowledges the support received from the NSF through grants CDA-9711582 and EIA-0220562 at Florida International University.
is dedicated to tackling such difficulties. CBIR is an active research area in which image retrieval queries are based on the content of the multimedia data. Recently, many efforts have been made in CBIR to personalize the retrieval engine. A significant problem in CBIR is the gap between semantic concepts and low-level image features. The subjectivity of human perception of visual content plays an important role in CBIR systems. Very often, the retrieval results are not satisfactory, especially when the level of satisfaction is closely related to the user's subjectivity. For example, given a query image with a tiger lying on grass, one user may want to retrieve those images with tiger objects in them, while another user may find the green grass background more interesting. User subjectivity in image retrieval is a very complex issue and difficult to explain. Therefore, a CBIR system needs the capability to discover the users' concept patterns and adapt to them. The relevance feedback (RF) technique has been proposed and applied with the aim of discovering the users' concept patterns by bridging the gap between semantic concepts and low-level image features, as in [6,14,16]. In this book chapter, a multimedia data mining framework is proposed that can dynamically discover the concept patterns of a specific user to allow the retrieval of images by the user's most interested region. The discovering and adapting processes aim to find the mapping between the local low-level features of the images and the concept patterns of the user with respect to how he/she perceives the images. In particular, the user's interest in specific regions can be discovered from the images. The proposed multimedia data mining framework seamlessly integrates several data mining techniques. First, it takes advantage of user relevance feedback during the retrieval process: the users interact with the system by choosing positive and negative samples from the retrieved images based on their own concepts, and this feedback is then fed into the retrieval system and triggers the modification of the query criteria to best match the users' concepts [17]. Second, in order to identify the user's most interested region within the image, the Multiple Instance Learning and neural network techniques are integrated into the query refining process. The Multiple Instance Learning technique was originally used for the categorization of molecules in the context of drug design, where each molecule (bag) is represented by a bag of possible conformations (instances). Under the Multiple Instance Learning scenario in image retrieval, each image is viewed as a bag of image regions (instances). In effect, the user feedback guides the system's mining through the positive and negative examples by Multiple Instance Learning, and tells the system to shift its focus of attention to the region of interest. The neural network technology is applied to map the low-level image features to the user's concepts. The parameters in the neural network are dynamically updated according to the user relevance feedback during the whole retrieval process to best represent the user's concepts; in this sense, it is similar to the re-weighting techniques in the RF approach. The remainder of this chapter is organized as follows. Section 2 briefly introduces the background and related work in Relevance Feedback and Multiple Instance Learning.
An overview of our proposed multimedia data mining framework is given in Section 3. Section 4 introduces the details of the Multiple
Instance Learning and neural network techniques used in our framework. The proposed multimedia data mining framework for content-based image retrieval using user feedback and Multiple Instance Learning is described in Section 5. The experimental results are analyzed in Section 6. Section 7 gives the conclusion and future work.
2 Background and Related Work

2.1 Relevance Feedback
While many research efforts have established the base of CBIR, most of them relatively ignore two distinct characteristics of CBIR systems: (1) the gap between high-level concepts and low-level features, and (2) the subjectivity of human perception of visual content. To overcome these shortcomings, the concept of relevance feedback (RF) associated with CBIR was proposed in [15]. Relevance feedback is an interactive process in which the user judges the quality of the retrieval performed by the system by marking those images that he/she perceives as truly relevant among the images retrieved by the system. This information is then used to refine the original query. The process iterates until a satisfactory result is obtained for the user. In the past few years, the RF approach to image retrieval has been an active research field, and this powerful technique has proved successful in many application areas. Various ad hoc parameter estimation techniques have been proposed for the RF approaches. Most RF techniques in CBIR are based on the popular vector model [3,15,18] used in information retrieval [8]. The RF techniques do not require a user to provide accurate initial queries, but rather estimate the user's ideal query by using positive and negative examples (training samples) provided by the user. The fundamental goal of these techniques is to estimate the ideal query parameters (both the query vectors and the associated weights) accurately and robustly. Most of the previous RF research is based on low-level image features such as color, texture, and shape, and can be classified into two approaches: query point movement and re-weighting techniques [8]. The basic idea of query point movement is quite straightforward: it tries to move the estimate of the "ideal query point" towards positive example points and away from negative example points specified by the user according to his/her subjective judgments. Rocchio's formula [14] is the most frequently used technique to iteratively update the estimate of the "ideal query point". The re-weighting techniques, however, take the user's query example as the fixed "ideal query point" and attempt to estimate the best similarity metric by adjusting the weight associated with each low-level image feature [1,5,15]. The basic idea is to give larger weights to more important dimensions and smaller weights to unimportant ones.
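As a concrete illustration of query point movement, the classical Rocchio formula updates the query vector Q from the sets of relevant images D_R and non-relevant images D_N marked by the user (this is the standard textbook form; the exact weights used by any particular system may differ):

$$Q' = \alpha Q + \beta \frac{1}{|D_R|} \sum_{d \in D_R} d \; - \; \gamma \frac{1}{|D_N|} \sum_{d \in D_N} d$$

where α, β, and γ are tunable constants controlling the influence of the original query, the positive examples, and the negative examples, respectively.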
2.2 Multiple Instance Learning
The Multiple Instance Learning problem is a special kind of supervised machine learning problem, which is getting more attention recently in the field of machine
learning, and has been applied to many applications such as drug activity prediction, stock prediction, natural scene image classification, and content-based image retrieval. Unlike standard supervised machine learning, where each object in the training examples is labeled and the problem is to learn a hypothesis that can predict the labels of unseen objects accurately, in the scenario of Multiple Instance Learning the labels of the individual objects in the training data are not available; instead, the labeled unit is a set of objects. A set of objects is called a bag, and an individual object in a bag is called an instance. In other words, in Multiple Instance Learning a training example is a labeled bag, and the labels of the instances are unknown, although each instance is actually associated with a label. The goal of learning is to obtain from the training examples a hypothesis that generates labels for unseen bags and instances. In this sense, the Multiple Instance Learning problem can be regarded as a special kind of supervised machine learning problem under incomplete labeling information. In the domain of Multiple Instance Learning, there are two kinds of labels, namely Positive and Negative; the label of an instance is either Positive or Negative. A bag is labeled Positive if and only if it has one or more Positive instances, and is labeled Negative if and only if all its instances are Negative. The Multiple Instance Learning technique was originally used in the context of drug activity prediction. In this domain, the input object is a molecule, and the observed result is whether the molecule binds to a target "binding site" or not: if a molecule binds to the target "binding site", we label it Positive; otherwise we label it Negative. A molecule has a lot of alternative conformations, and only one or a few of the different conformations of each molecule (bag) actually bind to the binding site and produce the observed result, while the others typically have no effect on the binding. Unfortunately, the binding activity of a specific molecule conformation cannot be directly observed; only the binding activity of the molecule as a whole can be observed. Therefore, the binding activity prediction problem is a Multiple Instance Learning problem: each bag is a molecule, and the instances of a bag (molecule) are the alternative conformations of the molecule [7]. The applications of Multiple Instance Learning related to the topic of this chapter are natural scene image classification and content-based image retrieval. In the first application, a natural scene image usually contains many different semantic regions, and its semantic category is usually determined by only one or a few regions in the image; there may be some regions that do not fit the semantic meaning of the category. For example, consider an image that contains a wide river and a hill beside it. This image can be classified into the "river" category because of the existence of the river; in this case, the hill has nothing to do with the classification. If the image classification system can discover this kind of fact and consider only the features of the river object when learning the classifier, better performance can be achieved than by using the features of the whole image. Based on that basic idea, Maron et al. applied Multiple Instance Learning to natural scene image classification [10].
In their approach, each image is represented by a bag and the regions (subimages) in the image correspond to
the instances in the bag. An image is labeled Positive if it somehow contains the concept of a specific semantic category (i.e., one of its regions contains the concept); otherwise it is labeled Negative. From the labeled training images, the concept can be learned by Multiple Instance Learning, and the learned concept can be used for scene classification. With the same idea as in natural scene image classification, Multiple Instance Learning can also be applied to CBIR. In CBIR, the user expresses the visual concept he/she is interested in by submitting to the system a query image example representing the concept. It is often the case that only one or a few regions in the query example represent that concept, and the other objects are unrelated to it. Considering each object as an instance and the image as a bag, Multiple Instance Learning can discover the objects really related to the user's concept. By filtering out the unrelated objects (which can be considered "noise") and applying only the related objects in the query process, we can expect better query performance. Based on this idea, both [20] and [22] applied Multiple Instance Learning in CBIR. In addition to the applications of Multiple Instance Learning, a lot of research has been done on Multiple Instance Learning algorithms. Dietterich et al. [7] represented the target concept by an axis-parallel rectangle (APR) in the n-dimensional feature space and presented several Multiple Instance Learning algorithms for learning the axis-parallel rectangles. In [2], the authors proposed the MULTIINST algorithm for Multiple Instance Learning, which is also an APR-based method. In [10], the concept of Diversity Density was introduced, and a two-step gradient ascent with multiple starting points was applied to find the maximum Diversity Density. Based on Diversity Density, the EM-DD algorithm was proposed [21]; this algorithm assumes that each bag has a representative instance, which is treated as a missing value, and then the EM (Expectation-Maximization) method and the Quasi-Newton method are used to learn the representative instances and maximize the Diversity Density simultaneously. [13] also used the EM method for Multiple Instance Regression. Jun Wang et al. [19] explored lazy learning approaches in Multiple Instance Learning and developed two kNN-based algorithms: Citation-kNN and Bayesian-kNN. In [23], the authors tried to solve the Multiple Instance Learning problem with decision trees and decision rules. Jan Ramon et al. [12] proposed the Multiple Instance Neural Network.
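For reference, the Diversity Density of a candidate concept point t in the feature space is commonly formulated with a noisy-or model over the positive bags B_i^+ and negative bags B_j^- (our paraphrase of the usual formulation, not a quotation from [10]):

$$DD(t) = \prod_i \Pr(t \mid B_i^{+}) \prod_j \Pr(t \mid B_j^{-}), \qquad \Pr(t \mid B_i^{+}) = 1 - \prod_k \left( 1 - e^{-\lVert B_{ik}^{+} - t \rVert^2} \right)$$

with $\Pr(t \mid B_j^{-}) = \prod_k ( 1 - e^{-\lVert B_{jk}^{-} - t \rVert^2} )$; the learned concept is the point t that maximizes DD(t).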
3 Overview of the Proposed Multimedia Data Mining Framework
In this chapter, one of the main goals is to map the original visual feature space into a space that better describes the user-desired high-level concepts. In other words, we try to discover the specific concept patterns of an individual user via user feedback and Multiple Instance Learning. For this purpose, we introduce a multiple instance feedback model that accounts for the various concepts/responses of the user. In our method, we assume the user searches for images close to the query image and responds to a series of machine queries by declaring the positive
and negative sample images among the displayed images. After obtaining the user's relevance feedback, Multiple Instance Learning is applied to capture, simultaneously, the objects the user is really interested in and the mapping between the low-level features and the high-level concept. Each new query is chosen to match the user's expectation more closely given the previous user responses. In our multimedia data mining framework, the Multiple Instance Learning algorithm is a key part: it determines the performance of the framework to a significant degree. To meet this requirement, an open Multiple Instance Learning framework is designed, where an "open" framework means that different sub-algorithms may be plugged into the learning framework for different applications. Hence, it provides the opportunity to select a suitable sub-algorithm for a specific application to get the best performance within a reasonable scope. In our multimedia data mining framework, the multi-layer feed-forward neural network and the back-propagation algorithm are plugged into the Multiple Instance Learning framework. Compared with traditional RF techniques, our method differs in the following two aspects:
1. It is based on the assumption that users are usually more interested in one specific region than in the other regions of the query image. To the best of our knowledge, however, recent efforts in RF techniques are based on the global image properties of the query image. In order to produce higher precision, we use the segmentation method proposed in [4] to segment an image into regions that roughly correspond to objects, which makes it possible for the retrieval system to discover the region of most interest to a specific user based on his/her feedback.
2. In many cases, what the user is really interested in is just one object in the query image (example). However, the user's feedback is on the whole image. How to effectively identify the object the user is most interested in and to precisely capture the user's high-level concepts based on his/her feedback on the whole image has not received much attention yet. In this chapter, the Multiple Instance Learning method is applied to discover the user's region of interest and then mine the user's high-level concepts. By doing so, not only can the region of interest be discovered, but the ideal query point of that query image can also be approached within several iterations.
Compared with other Multiple Instance Learning methods used in CBIR, our methodology has the following advantages: 1) Instead of manually dividing each picture into many overlapping regions [20], we adopt the image segmentation method of [4] to partition the images in a more natural way; 2) In other Multiple Instance Learning based image retrieval systems such as [22], it is not very clear how the user interacts with the CBIR system to provide the training images and the associated labeling information for Multiple Instance Learning. In our framework, user feedback is used in the image retrieval process, which makes the process more efficient and precise. It is more efficient since it is easy for the user to find some positive samples among the initially retrieved results. It is more precise since, among the retrieved images, the user can select the negative samples based on his/her subjective perception. The reason is that the selected
negative ones have features/contents similar to the query image but a different focus of attention from the user's point of view. By selecting them as negative samples, the system can better distinguish the real needs of the user from the "noisy" or unrelated information via Multiple Instance Learning. As a result, the system can discover which feature vector, related to a region in each image, best represents the user's concept; furthermore, it can determine which dimensions of the feature vector are important by adaptively reweighting them through the neural network technique.
4 The Proposed Multiple Instance Learning Framework
In a traditional supervised learning scenario, each object in the training set has a label associated with it. Supervised learning can be viewed as a search for a function that maps an object to its label with the best approximation to the real unknown mapping function, which can be described as follows:

Definition 1. Given an object space Ω, a label space Ψ, a set of objects O = {O_i | O_i ∈ Ω} and their associated labels L = {L_i | L_i ∈ Ψ}, the problem of supervised learning is to find a mapping function f̂ : Ω → Ψ such that f̂ is the best approximation of the real unknown function f.

Unlike traditional supervised learning, in Multiple Instance Learning the label of an individual object is unknown; only the label of a set of objects is available. An individual object is called an instance, and a set of instances with an associated label is called a bag. Specifically, in image retrieval there are only two kinds of labels, namely Positive and Negative. A bag is labeled Positive if the bag has at least one positive instance, and is labeled Negative if and only if all its instances are negative. The Multiple Instance Learning problem is to learn a function mapping an instance to a label (either Positive or Negative) with the best approximation to the unknown real mapping function, which can be defined as follows:
Definition 2. Given an object space Φ, a label space Ψ = {1 (Positive), 0 (Negative)}, a set of n bags B = {B_i | B_i ∈ P(Φ), i = 1...n}, where P(Φ) is the power set of Φ, and their associated labels L = {L_i | L_i ∈ Ψ}, the problem of Multiple Instance Learning is to find a mapping function f̂ : Φ → Ψ such that f̂ is the best approximation of the real unknown function f.

4.1 Problem Definition
Let T = ⟨B, L⟩ denote a training set, where B = {B_i, i = 1, ..., n} is the set of n bags in the training set, L = {L_i, i = 1, ..., n} is the set of labels of B, and L_i is the label of B_i. A bag B_i contains m_i instances, denoted by I_ij (j = 1, ..., m_i). The function f is the real unknown mapping function that maps an instance to its label, and f_MIL denotes the function that maps a bag to its
label. In Multiple Instance Learning, a bag is labeled Positive if at least one of its instances is Positive; otherwise, it has a Negative label. Hence, the relationship between the functions f and f_MIL can be described as in Figure 1.
Fig. 1. Relationship between functions f and f_MIL
As can be seen from this figure, the function f maps each instance I_ij in bag B_i to its label l_ij. The label L_i of bag B_i is the maximum of the labels of all its instances, which means $L_i = f_{MIL}(B_i) = \max_j\{l_{ij}\} = \max_j\{f(I_{ij})\}$. Multiple Instance Learning is to find a mapping function f̂ with the best approximation to f given a training set B = {B_i} and their corresponding labels L = {L_i, i = 1...n}. The corresponding approximation of f_MIL is $\hat{f}_{MIL}(B_i) = \max_j\{\hat{f}(I_{ij})\}$.
In our framework, the Minimum Square Error (MSE) criterion is adopted, i.e., we try to find the function f̂ that minimizes

$$SE = \sum_{i=1}^{n} \left( L_i - \hat{f}_{MIL}(B_i) \right)^2 = \sum_{i=1}^{n} \left( L_i - \max_j\{\hat{f}(I_{ij})\} \right)^2 \tag{1}$$
Let γ = {γ_k, k = 1, ..., N} denote the N parameters of the function f̂. The Multiple Instance Learning problem is then transformed into the following unconstrained optimization problem:

$$\hat{\gamma} = \arg\min_{\gamma} \sum_{i=1}^{n} \left( L_i - \max_j\{\hat{f}(I_{ij})\} \right)^2 \tag{2}$$
One class of unconstrained optimization methods is gradient search, which includes the steepest descent method, the Newton method, the Quasi-Newton method, and the back-propagation (BP) learning method in the multi-layer feed-forward neural network. To apply these gradient-based methods, the derivative of the target optimization function needs to be calculated.
In our Multiple Instance Learning framework, we need to calculate the derivative of the function $E = \left( L_i - \max_j\{\hat{f}(I_{ij})\} \right)^2$. In order to do that, the derivative of the max function needs to be calculated first.

4.2 Differentiation of the max Function
As mentioned in [9], the differentiation of the max function results in a 'pointer' that specifies the source of the maximum. Let

$$y = \max(x_1, x_2, ..., x_n) = \sum_{i=1}^{n} x_i \prod_{j \neq i} U(x_i - x_j) \tag{3}$$

where $U(\cdot)$ is a unit step function, i.e., $U(x) = 1$ if $x > 0$ and $U(x) = 0$ if $x \leq 0$. The derivative of the max function can then be written as:

$$\frac{\partial y}{\partial x_i} = \prod_{j \neq i} U(x_i - x_j) = \begin{cases} 1 & \text{if } x_i \text{ is the maximum} \\ 0 & \text{otherwise} \end{cases} \tag{4}$$

4.3 Differentiation of the Target Optimization Function
Equation (4) provides a way to differentiate the max function. In order to use a gradient-based search method to solve Equation (2), we need to further calculate the derivative of the function $E = \left( L_i - \max_j\{\hat{f}(I_{ij})\} \right)^2$ with respect to the parameters γ = {γ_k} of f̂. The first partial derivative is as follows:

$$\frac{\partial E}{\partial \gamma_k} = \frac{\partial \left( L_i - \max_j\{\hat{f}(I_{ij})\} \right)^2}{\partial \gamma_k} \tag{5}$$

$$= 2 \times \left( \max_j\{\hat{f}(I_{ij})\} - L_i \right) \times \frac{\partial \max_j\{\hat{f}(I_{ij})\}}{\partial \gamma_k} \tag{6}$$

$$= 2 \times \left( \max_j\{\hat{f}(I_{ij})\} - L_i \right) \times \sum_{j=1}^{m_i} \frac{\partial \max_j\{\hat{f}(I_{ij})\}}{\partial \hat{f}(I_{ij})} \times \frac{\partial \hat{f}(I_{ij})}{\partial \gamma_k} \tag{7}$$

Suppose the s-th instance of bag B_i has the maximum value, i.e., $\hat{f}(I_{is}) = \max_j\{\hat{f}(I_{ij})\}$. According to Equation (4), Equation (5) can be written as:

$$\frac{\partial E}{\partial \gamma_k} = 2 \times \left( \hat{f}(I_{is}) - L_i \right) \times \sum_{j=1}^{m_i} \frac{\partial \max_j\{\hat{f}(I_{ij})\}}{\partial \hat{f}(I_{ij})} \times \frac{\partial \hat{f}(I_{ij})}{\partial \gamma_k} \tag{8}$$

$$= 2 \times \left( \hat{f}(I_{is}) - L_i \right) \times \frac{\partial \hat{f}(I_{is})}{\partial \gamma_k} = \frac{\partial \left( L_i - \hat{f}(I_{is}) \right)^2}{\partial \gamma_k} \tag{9}$$
Furthermore, the n-th derivative of the target optimization function E can be written as:

$$\frac{\partial^n E}{\partial \gamma_k^n} = \frac{\partial^n \left( L_i - \max_j\{\hat{f}(I_{ij})\} \right)^2}{\partial \gamma_k^n} = \frac{\partial^n \left( L_i - \hat{f}(I_{is}) \right)^2}{\partial \gamma_k^n} \tag{10}$$

and the mixed partial derivative of the function E can be written as:
$$\frac{\partial^{\left(\sum_k n_k\right)} E}{\prod_k \partial \gamma_k^{n_k}} = \frac{\partial^{\left(\sum_k n_k\right)} \left( L_i - \max_j\{\hat{f}(I_{ij})\} \right)^2}{\prod_k \partial \gamma_k^{n_k}} = \frac{\partial^{\left(\sum_k n_k\right)} \left( L_i - \hat{f}(I_{is}) \right)^2}{\prod_k \partial \gamma_k^{n_k}} \tag{11}$$

4.4 Multiple Instance Learning to Traditional Supervised Learning
Similar to the analysis of the Multiple Instance Learning problem in Section 4.1, the traditional supervised learning problem can also be converted to an unconstrained optimization problem, as shown in Equation (12):

$$\hat{\gamma} = \arg\min_{\gamma} \sum_{i=1}^{n} \left( L_i - \hat{f}(O_i) \right)^2 \tag{12}$$
The partial derivative and mixed partial derivative of the function $\left( L_i - \hat{f}(O_i) \right)^2$ are shown in Equations (13) and (14), respectively:

$$\frac{\partial^n \left( L_i - \hat{f}(O_i) \right)^2}{\partial \gamma_k^n} \tag{13}$$

$$\frac{\partial^{\left(\sum_k n_k\right)} \left( L_i - \hat{f}(O_i) \right)^2}{\prod_k \partial \gamma_k^{n_k}} \tag{14}$$
Notice that Equation (13) is the same as the right side of Equation (10), and Equation (14) is the same as the right side of Equation (11), except that O_i in Equations (13) and (14) represents an object while I_is in Equations (10) and (11) represents the instance with the maximum label in bag B_i. This similarity provides us with an easy way to transform Multiple Instance Learning into traditional supervised learning. The steps of the transformation are as follows:
1. For each bag B_i (i = 1, ..., n) in the training set, calculate the label of each instance I_ij belonging to it.
2. Select the instance with the maximum label in each bag B_i. Let I_is denote the instance with the maximum label in bag B_i.
3. Construct a set of objects {O_i} (i = 1, ..., n) using all the instances I_is, where O_i = I_is.
4. For each object O_i, construct a label L_Oi that is actually the label of bag B_i.
5. The Multiple Instance Learning problem with input ⟨{B_i}, {L_i}⟩ is thus converted to the traditional supervised learning problem with input ⟨{O_i}, {L_Oi}⟩.
After this transformation, the gradient-based search methods used in traditional supervised learning, such as the steepest descent method, can be applied to Multiple Instance Learning. Despite this transformation, there still exists a major difference between Multiple Instance Learning and traditional supervised learning. In traditional supervised learning, the training set is static and usually does not change during the learning procedure. In the transformed version of Multiple Instance Learning, however, the training set may change during learning: the instance with the maximum label in each bag may change as the approximated function f̂ is updated, and therefore the training set constructed by the above transformation may change as well. In spite of this dynamic characteristic of the training set, the fundamental learning method remains the same. The following pseudocode describes our Multiple Instance Learning framework.
Algorithm: MIL(B, L)
Input: B = {B_i, i = 1, ..., n} is the set of n bags in the training set and L = {L_i, i = 1, ..., n} is the set of labels, where L_i is the label of bag B_i.
Output: γ = {γ_k, k = 1, ..., N} is the set of parameters of the mapping function f̂, where N is the number of parameters.
(1) Set initial values of the parameters γ_k in γ.
(2) If the termination criterion has not been met, go to Step 3; else return the parameter set γ of function f̂. /* The termination criterion can be based on the MSE or on the number of iterations. */
(3) Transform Multiple Instance Learning to traditional supervised learning using the method described in this section.
(4) Apply the gradient-based search method of traditional supervised learning to update the parameters in γ.
(5) Go to Step 2.
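The framework above is straightforward to express in code. The following is a minimal Python sketch of the MIL(B, L) loop under the assumption that f̂ is a simple differentiable scorer (a logistic model standing in for the neural network used later in the chapter); all names, the initialization, and the termination constant are illustrative, not taken from the original implementation.

import numpy as np

def mil_train(bags, labels, n_features, lr=0.1, n_iter=1000):
    # Sketch of the MIL framework: repeatedly convert the MIL problem
    # to a supervised one by picking the max-scoring instance per bag
    # (steps 1-5 of the transformation), then take a gradient step on
    # the squared error of Equation (2).
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=n_features)   # parameters gamma
    b = 0.0

    def f_hat(x):                                # instance-level score in [0, 1]
        return 1.0 / (1.0 + np.exp(-(x @ w + b)))

    for _ in range(n_iter):
        total_sq_err = 0.0
        for bag, L in zip(bags, labels):         # bag: (m_i, n_features) array
            scores = f_hat(bag)
            s = int(np.argmax(scores))           # representative instance I_is
            y = scores[s]
            err = y - L
            total_sq_err += err * err
            # gradient of (L_i - f_hat(I_is))^2 w.r.t. w and b (Equation (9))
            grad = 2 * err * y * (1 - y)
            w -= lr * grad * bag[s]
            b -= lr * grad
        if total_sq_err < 1e-3:                  # simple MSE-based termination
            break
    return w, b

Note that, as discussed above, the representative instance I_is may change between iterations as the scorer is updated; the code re-selects it in every pass, which is exactly the dynamic training set behavior of the transformed problem.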
Obviously, the convergence of our Multiple Instance Learning framework depends on which gradient-based search method is applied at Step 4; it has the same convergence properties as the method applied.
5 Image Retrieval Using Relevance Feedback and Multiple Instance Learning
In a CBIR system, the most common interaction mode is 'Query-by-Example': the user submits a query example (image) and the CBIR system retrieves from the image database the images most similar to the query image. In many cases, however, when a user submits a query image, he/she is only interested in one region of the image. The image retrieval system proposed by Blobworld [4] first segments each image into a number of regions and then allows the user to specify the region of interest on the segmented query image. Unlike the Blobworld system, we use the user's feedback and Multiple Instance Learning to automatically capture the region the user is interested in during the query refinement process. Another advantage of our method is that the underlying mapping between the local visual feature vector of that region and the user's high-level concept can be progressively discovered through the feedback and learning procedure. To apply Multiple Instance Learning to CBIR, a necessary step before the actual image retrieval is to acquire a set of images as training examples, which are used to learn the user's target concept. In our method, the first set of training examples is obtained from the user's feedback on the initial retrieval results, and the user's target concept is then refined iteratively during the interactive retrieval process. It is assumed that the user is only interested in one region of an image. In other words, there exists a function f ∈ F : S → Ψ that can roughly map a region of an image to the user's concept, where S denotes the image feature vector space of the regions and Ψ = {1 (Positive), 0 (Negative)}; Positive means that the feature vector representing the region meets the user's concept and Negative means it does not. An image is Positive if one or more regions in the image meet the user's concept, and Negative if none of its regions does. Therefore, in the Multiple Instance Learning scenario, an image can be viewed as a bag and its regions as the instances of the bag. During the image retrieval procedure, the user's feedback provides the labels (Positive or Negative) for the retrieved images, and the labels are assigned to individual images, not to individual regions. Thus, the image retrieval task can be viewed as a Multiple Instance Learning task aiming to discover the mapping function f and thus to mine the user's high-level concept from the low-level features. At the beginning of retrieval, the user submits only a query image and no training examples are available yet, which means the learning method is not applicable at this stage. Hence, a metric based on color histogram comparison is applied to measure the similarity of two images. For each pixel color, the two most significant bits of each R, G, B color component are extracted to compose a 6-bit color code [11]. The 6-bit code provides 64 bins: each image can be converted to a histogram with 64 bins and therefore represented by a point in the 64-dimensional feature space. The Manhattan distance between two points is used as the measure of dissimilarity between the two images they represent. Assume the color histograms of image A and image B are represented by the two 64-dimensional vectors (a_1, a_2, ..., a_64)
and (b_1, b_2, ..., b_64) respectively. The dissimilarity (difference) between images A and B is defined as

$$D(A, B) = \sum_{i=1}^{64} |a_i - b_i| \tag{15}$$
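As a hedged illustration of this initial metric, the sketch below computes the 64-bin histogram from the 6-bit color code and the Manhattan distance of Equation (15). It assumes images are given as H × W × 3 uint8 RGB arrays, and the histogram normalization is our assumption (it makes images of different sizes comparable); neither detail is specified in the original text.

import numpy as np

def color_histogram_64(image):
    # 64-bin histogram from the 6-bit color code: the two most
    # significant bits of each R, G, B component.
    img = image.astype(np.uint16)
    codes = ((img[..., 0] >> 6) << 4) | ((img[..., 1] >> 6) << 2) | (img[..., 2] >> 6)
    hist = np.bincount(codes.ravel(), minlength=64)
    return hist / hist.sum()          # normalization is an assumption

def dissimilarity(img_a, img_b):
    # Manhattan (L1) distance between the 64-bin histograms, Equation (15).
    return np.abs(color_histogram_64(img_a) - color_histogram_64(img_b)).sum()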
Upon the first round of retrieving the "most similar" images according to Equation (15), the user can give feedback by labeling each retrieved image as Positive or Negative. Based on this feedback, a set of training examples {B+, B−} can be constructed, where B+ consists of all the positive bags (i.e., the images the user labels Positive) and B− consists of all the negative bags (i.e., the images the user labels Negative). Given the training examples {B+, B−}, our Multiple Instance Learning framework can be applied to discover the mapping function f in a progressive way and thus mine the user's high-level concept. Feedback and learning are performed iteratively; during this process, the captured high-level concept is refined until the user is satisfied, at which point the query process can be terminated by the user.
6 Experiments and Results
We created our own image repository using images from the Corel image library. There are 2,500 images collected from various categories for testing purposes.
6.1 Image Processing Techniques
To apply Multiple Instance Learning to mining users' concept patterns, we assume that the user is only interested in a specific region of the query image. Therefore, we first need to perform image segmentation. The automatic segmentation method proposed in the Blobworld system [4] is used in our system. The joint distribution of the color, texture, and location features is modeled using a mixture of Gaussians. The Expectation-Maximization (EM) method is used to estimate the parameters of the Gaussian mixture model, and the Minimum Description Length (MDL) principle is then applied to select the best number of components in the model. The color, texture, shape, and location characteristics of each region are extracted after image segmentation; thus, each region is represented by a low-level feature vector. In our experiments, we used three texture features, three color features, and two shape features as the representation of an image region. Therefore, for each bag (image), the number of instances (regions) is the number of regions within that image, and each instance has eight features.
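The segmentation itself follows [4]; as a rough illustration of the model-selection idea, the sketch below fits Gaussian mixtures to per-pixel features with scikit-learn and picks the number of components by BIC, a criterion closely related to MDL. This is an approximation for exposition only, not the authors' implementation, and the feature layout and component range are assumptions.

import numpy as np
from sklearn.mixture import GaussianMixture

def segment_regions(pixel_features, max_components=6):
    # pixel_features: (n_pixels, d) array of per-pixel color/texture/
    # position features. Fit GMMs with a growing number of components
    # and keep the one with the lowest BIC (a stand-in for MDL).
    best_gmm, best_bic = None, np.inf
    for k in range(2, max_components + 1):
        gmm = GaussianMixture(n_components=k, covariance_type='full',
                              random_state=0).fit(pixel_features)
        bic = gmm.bic(pixel_features)
        if bic < best_bic:
            best_gmm, best_bic = gmm, bic
    return best_gmm.predict(pixel_features)   # region label per pixel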
6.2 Neural Network Techniques
In our experiments, a three-layer feed-forward neural network is used as the function f̂ to map an image region (represented by the eight low-level texture, color, and shape features) to the user's high-level concept. By taking the three-layer feed-forward neural network as the mapping function f̂ and the back-propagation (BP) learning algorithm as the gradient-based search method in our Multiple Instance Learning framework, the neural network parameters, i.e., the weights of all connections and the biases of the neurons, are the parameters in γ that we want to learn (search). Specifically, the input layer has eight neurons, each corresponding to one low-level image feature. The output layer has only one neuron, and its output indicates the extent to which an image region meets the user's concept. The number of neurons in the hidden layer is experimentally set to eight. The biases of all neurons are set to zero, and the activation function used in the neurons is the sigmoid function. The BP learning method is applied with a learning rate of 0.1 and no momentum. The initial weights of the connections in the network are randomly set to relatively small values. The termination condition of the BP algorithm is based on $|MSE^{(k)} - MSE^{(k-1)}| < \alpha \times MSE^{(k-1)}$, where $MSE^{(k)}$ denotes the MSE at the k-th iteration and α is a small constant; in our experiments, α is set to 0.005.
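For concreteness, the following sketch shows an 8-8-1 sigmoid network trained by plain back-propagation with learning rate 0.1, no momentum, and the MSE-based termination rule above (α = 0.005). The initialization range and the per-example update style are our assumptions; the biases are fixed at zero, as the text prescribes.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ThreeLayerNet:
    # 8-8-1 feed-forward network, sigmoid activations, zero (fixed) biases.
    def __init__(self, n_in=8, n_hidden=8, lr=0.1, alpha=0.005, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.uniform(-0.05, 0.05, (n_in, n_hidden))
        self.W2 = rng.uniform(-0.05, 0.05, (n_hidden, 1))
        self.lr, self.alpha = lr, alpha

    def forward(self, x):
        self.h = sigmoid(x @ self.W1)
        return sigmoid(self.h @ self.W2)[0]

    def backward(self, x, target):
        # One back-propagation step; returns the squared error.
        y = self.forward(x)
        delta_out = (y - target) * y * (1 - y)
        delta_hid = delta_out * self.W2[:, 0] * self.h * (1 - self.h)
        self.W2[:, 0] -= self.lr * delta_out * self.h
        self.W1 -= self.lr * np.outer(x, delta_hid)
        return (y - target) ** 2

def train(net, X, T, max_epochs=10000):
    prev_mse = np.inf
    for _ in range(max_epochs):
        mse = np.mean([net.backward(x, t) for x, t in zip(X, T)])
        if abs(mse - prev_mse) < net.alpha * prev_mse:  # termination rule
            break
        prev_mse = mse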
6.3 CBIR System Description
Based on the proposed framework, we have constructed a content-based image retrieval system. Figure 2 shows the interface of this system. As can be seen from the figure, the query image is the image at the top-left corner. The user can press the 'Get' button to select the query image and the 'Query' button to perform a query. The query results are listed from top left to bottom right in decreasing order of their similarity to the query image. The user can use the pull-down list under an image to input his/her feedback on that image (Negative or Positive). After the feedback, the user can carry out the next query. The user's concept is thus learned by the system in a progressive way through the user feedback, and the refined query returns a new collection of matching images to the user.
6.4 Performance Analysis by a Query Example
In this section, a query example is presented to illustrate how our CBIR system works. As shown in Figure 2, the query example is at the top-left corner. There is one tiger on gray ground in the query image. Assume the tiger object (not the background) is what the user is really interested in. Figure 2 also shows the initial retrieval results using the simple color-histogram-based image similarity metric of Equation (15). As can be seen from this figure, many retrieved images contain no tiger object. The reason they are considered more similar to the query image is that they are similar in terms of the color distribution over the whole image. However, what the user really needs are the images with a tiger object in them. By integrating the user's feedback
with Multiple Instance Learning, the proposed CBIR system can solve the above problem, since the user can provide relevance feedback to the system by labeling each image as Positive or Negative. This feedback information is then fed into the Multiple Instance Learning method to discover the user's real interest and thus capture the user's high-level concept. Figure 3 shows the query results after 4 iterations of user feedback. Many more images containing a tiger object are successfully retrieved by the system, and almost all of them rank higher than the other retrieved images. Another interesting result is that some images, such as those containing a horse on a green lawn, are retrieved although they are quite different from the query image in terms of the color distribution over the whole image. The reason is that the horse object is similar to the tiger object in the query image in terms of the color distribution on those objects. On the other hand, irrelevant images with a similar color distribution over the whole image, such as the building image and the cave image, are filtered out during the feedback and learning procedure. This example therefore illustrates that our proposed framework is effective in identifying the user's specific intention and thus mining the user's high-level concepts. A number of experiments have been conducted on our CBIR system. Usually, it converges after 4 or 5 iterations of user feedback. Also, in many cases, the region of the query image the user is most interested in can be discovered, and therefore the query performance can be improved.
Fig. 2. The interface of the proposed CBIR system and initial query results
Fig. 3. The query results of our CBIR system after 4 iterations of user feedback
7 Conclusions
In this book chapter, we presented a multimedia data mining framework to discover a user's high-level concepts from low-level image features using Relevance Feedback and Multiple Instance Learning. Relevance Feedback provides a way to capture the subjectivity of the user's high-level visual concepts, and Multiple Instance Learning enables the automatic learning of these concepts. In particular, Multiple Instance Learning can capture the user's specific interest in some region of an image and thus discover the user's high-level concepts more precisely. In order to test the performance of the proposed framework, a content-based image retrieval (CBIR) system using Relevance Feedback and Multiple Instance Learning was developed and several experiments were conducted. The experimental results demonstrate the effectiveness of our framework.
References
1. Aksoy, S. and Haralick, R.M.: A Weighted Distance Approach to Relevance Feedback. Proceedings of the International Conference on Pattern Recognition, (2000) 812–815.
2. Auer, P.: On Learning From Multi-instance Examples: Empirical Evaluation of a Theoretical Approach. Proceedings of the 14th International Conference on Machine Learning, (San Francisco, CA), (1997) 21–29.
3. Buckley, C., Singhal, A., and Mitra, M.: New Retrieval Approaches Using SMART: TREC4. Text Retrieval Conference, sponsored by the National Institute of Standards and Technology and the Advanced Research Projects Agency, (1995) 25–48.
4. Carson, C., Belongie, S., Greenspan, H., and Malik, J.: Blobworld: Image Segmentation Using Expectation-Maximization and Its Application to Image Querying. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(8) (2002) 1026–1038.
5. Chang, C.-H. and Hsu, C.-C.: Enabling Concept-Based Relevance Feedback for Information Retrieval on the WWW. IEEE Transactions on Knowledge and Data Engineering, 11 (1999) 595–609.
6. Cox, I.J., Minka, T.P., Papathomas, T.V., and Yianilos, P.N.: The Bayesian Image Retrieval System, PicHunter: Theory, Implementation, and Psychophysical Experiments. IEEE Transactions on Image Processing – special issue on digital libraries, 9(1) (2000) 20–37.
7. Dietterich, T.G., Lathrop, R.H., and Lozano-Perez, T.: Solving the Multiple-Instance Problem with Axis-Parallel Rectangles. Artificial Intelligence Journal, 89 (1997) 31–71.
8. Ishikawa, Y., Subramanya, R., and Faloutsos, C.: MindReader: Querying Databases Through Multiple Examples. Proceedings of the 24th International Conference on Very Large Databases, (1998).
9. Marks II, R.J., Oh, S., Arabshahi, P., Caudell, T.P., Choi, J.J., and Song, B.G.: Steepest Descent Adaptation of Min-Max Fuzzy If-Then Rules. Proceedings of the IEEE/INNS International Conference on Neural Networks, Beijing, China, 3 (1992) 471–477.
10. Maron, O. and Lozano-Perez, T.: A Framework for Multiple-Instance Learning. In: Advances in Neural Information Processing Systems 10, Cambridge, MA, MIT Press, (1998).
11. Nagasaka, A. and Tanaka, Y.: Automatic Video Indexing and Full Video Search for Object Appearances. IFIP Trans. Visual Database Systems II, (1992) 113–127.
12. Ramon, J. and De Raedt, L.: Multi-Instance Neural Networks. Proceedings of the ICML 2000 Workshop on Attribute-Value and Relational Learning, (2000).
13. Ray, S. and Page, D.: Multiple-Instance Regression. Proceedings of the 18th International Conference on Machine Learning, (San Francisco, CA), (2001) 425–432.
14. Rocchio, J.J.: Relevance Feedback in Information Retrieval. In: The SMART System – Experiments in Automatic Document Processing, Englewood Cliffs, NJ: Prentice Hall Inc., (1971) 313–323.
15. Rui, Y., Huang, T.S., and Mehrotra, S.: Content-Based Image Retrieval with Relevance Feedback in MARS. Proceedings of the 1997 International Conference on Image Processing, (1997) 815–818.
16. Rui, Y., Huang, T.S., Ortega, M., and Mehrotra, S.: Relevance Feedback: A Power Tool in Interactive Content-Based Image Retrieval. IEEE Transactions on Circuits and Systems for Video Technology, Special Issue on Segmentation, Description, and Retrieval of Video Content, 8(5) (1998) 644–655.
17. Rui, Y. and Huang, T.S.: Optimizing Learning in Image Retrieval. Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, (2000) 236–243.
18. Salton, G. and McGill, M.J.: Introduction to Modern Information Retrieval. McGraw-Hill Book Company, (1983).
19. Wang, J. and Zucker, J.-D.: Solving the Multiple-Instance Learning Problem: A Lazy Learning Approach. Proceedings of the 17th International Conference on Machine Learning, (2000) 1119–1125.
20. Yang, C. and Lozano-Perez, T.: Image Database Retrieval with Multiple-Instance Learning Techniques. Proceedings of the 16th International Conference on Data Engineering, (2000) 233–243.
21. Zhang, Q. and Goldman, S.A.: EM-DD: An Improved Multiple-Instance Learning Technique. Advances in Neural Information Processing Systems (NIPS 2002), to be published.
22. Zhang, Q., Goldman, S.A., Yu, W., and Fritts, J.: Content-Based Image Retrieval Using Multiple-Instance Learning. Proceedings of the 19th International Conference on Machine Learning, (2002).
23. Zucker, J.-D. and Chevaleyre, Y.: Solving Multiple-Instance and Multiple-Part Learning Problems with Decision Trees and Decision Rules. Application to the Mutagenesis Problem. Proceedings of the 14th Biennial Conference of the Canadian Society for Computational Studies of Intelligence, AI 2001, (2001) 204–214.
Associative Classifiers for Medical Images

Maria-Luiza Antonie, Osmar R. Zaïane, and Alexandru Coman

Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada
{luiza, zaiane, acoman}@cs.ualberta.ca
Abstract. This paper presents two classification systems for medical images based on association rule mining. Each proposed system consists of a pre-processing phase, a phase for mining the resulting transactional database, and a final phase that organizes the resulting association rules into a classification model. The experimental results show that the method performs well, reaching over 80% in accuracy. Moreover, this paper illustrates how important the data cleaning phase is in building an accurate data mining architecture for image classification.
1 Introduction
Association rule mining is one of the most important tasks in data mining, and it has been extensively studied and applied to market basket analysis. In addition, building computer-aided systems to assist medical staff in medical care facilities is becoming of high importance and priority for many researchers. This paper describes the use of association rule mining in an automatic medical image classification process. It presents two classification methods for medical images, both based on association rule mining and tested on real datasets in an application for classifying medical images. This work is a significant extension and improvement of the system and algorithm we developed and presented in [2]. The novelty lies in the data cleaning and data transformation techniques as well as in the algorithm used to discover the association rules. This work illustrates the importance of data cleaning in applying data mining techniques in the context of image content mining. The incidence of breast cancer in women, especially in developed countries, has increased significantly in recent years. The etiologies of this disease are not clear and neither are the reasons for the increased number of cases. Currently there are no methods to prevent breast cancer, which is why early detection is a very important factor in cancer treatment and allows a high survival rate to be reached. Mammography is considered the most reliable method for early detection of cancer. Due to the high volume of mammograms to be read by physicians, the accuracy rate tends to decrease, and automatic
Maria-Luiza Antonie was partially supported by the Alberta Ingenuity Fund and Osmar R. Zaïane was partially funded by NSERC, Canada
reading of digital mammograms becomes highly desirable. It has been proven that double reading of mammograms (consecutive reading by two physicians or radiologists) increases the accuracy, but at high cost. That is why computer-aided diagnosis systems are necessary to assist medical staff in achieving high efficiency and effectiveness. The methods proposed in this paper classify digital mammograms into three categories: normal, benign, and malign. The normal ones are those characterizing a healthy patient; the benign ones are mammograms showing a tumor, but one not formed by cancerous cells; and the malign ones are mammograms taken from patients with cancerous tumors. Generally, most errors occur when a radiologist must decide between benign and malign tumors. Mammography reading alone cannot prove that a suspicious area is malignant or benign; to decide, tissue has to be removed for examination using breast biopsy techniques. A false positive detection causes an unnecessary biopsy, and statistics show that only 20-30% of breast biopsy cases are proven cancerous. In a false negative detection, an actual tumor remains undetected, which could lead to higher costs or even to the loss of a patient's life. Digital mammograms are among the most difficult medical images to read due to their low contrast and the differences between types of tissue. Important visual clues of breast cancer include preliminary signs of masses and calcification clusters. Unfortunately, at the early stages of breast cancer these signs are very subtle and varied in appearance, making diagnosis difficult and challenging even for specialists. This is the main reason for the development of classification systems to assist specialists in medical institutions. Since the amount of data that physicians and radiologists must deal with has increased significantly, a great deal of research has been done in the field of medical image classification. Despite all this effort, there is still no widely used method to classify medical images, because this medical domain requires high accuracy. Also, misclassifications have different consequences: false negatives could lead to death, while false positives have a high cost and could cause detrimental effects on patients. For automatic medical image classification, the rate of false negatives has to be very low, if not zero. It is important to mention that manual classification of medical images by professionals is also prone to errors, and its accuracy is far from perfect. Another important factor that influences the success of automatic classification methods is working in a team with medical specialists, which is desirable but often not achievable. The consequences of errors in detection or classification are costly. In addition, the existing tumors are of different types; they have different shapes and some of them have the characteristics of normal tissue. All these factors make the decisions based on such images even more difficult. Different methods have been used to classify and detect anomalies in medical images, such as wavelets [4,15], fractal theory [8], and statistical methods [6], and most of them used features extracted using image processing techniques [13]. In addition, some other methods presented in the literature are based on fuzzy set theory [3], Markov models [7] and neural networks [5,9].
Most of the computer-aided methods proved to be powerful tools that could assist medical staff in hospitals and lead to better results in diagnosing a patient.
The remainder of the paper is organized as follows. Section 2 describes the feature extraction phase as well as the cleaning phase. The following section presents the new association rule-based method used to build the classification system. Section 4 describes how the classification system is built using the association rules mined. Section 5 introduces the data collection used and the experimental results obtained, while in the last section we summarize our work and discuss some future work directions.
2 Data Cleaning and Feature Extraction
This section summarizes the techniques used to enhance the mammograms as well as the features that were extracted from the images. The result of this phase is a transactional database to be mined in the next step of our system. Indeed, we model the images with a set of transactions, each transaction representing one image with the visual features extracted as well as other given characteristics, along with the class label.

2.1 Pre-processing Phase
Since real-life data is often incomplete, noisy, and inconsistent, pre-processing becomes a necessity [12]. Two pre-processing techniques, namely Data Cleaning and Data Transformation, were applied to the image collection. Data Cleaning is the process of cleaning the data by removing noise, outliers, etc. that could mislead the actual mining process. In our case, the images were very large (a typical size was 1024 × 1024), and almost 50% of each image was background containing a lot of noise. In addition, these images were scanned under different illumination conditions, so some images appeared too bright and some too dark. The first step toward noise removal was pruning the images with the crop operation from image processing, which cuts off the unwanted portions of the image. Thus, we eliminated almost all the background information and most of the noise. An example of cropping that eliminates the artefacts and the black background is given in Figure 1 (a-b). Since the resulting images had different sizes, the x and y coordinates were normalized to values between 0 and 255. The cropping operation was done automatically by sweeping horizontally through the image. The next pre-processing step was to apply image enhancement techniques. Image enhancement helps in the qualitative improvement of the image with respect to a specific application [10], and can be done either in the spatial domain or in the frequency domain. Here we work in the spatial domain and deal directly with the image plane itself. In order to diminish the effect of over-brightness or over-darkness in the images, and at the same time accentuate the image features, we applied the widely used Histogram Equalization method. The noise removal step was necessary before this enhancement because, otherwise, the enhancement would also amplify the noise. Histogram Equalization increases the contrast range in an image by increasing the dynamic range of grey levels [10]. Figure 1 (c) shows an example of histogram equalization after cropping.
Fig. 1. Pre-processing phase on an example image: (a) original image; (b) crop operation; (c) histogram equalization
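As an illustration of these two pre-processing steps, the sketch below crops the dark background of an 8-bit grayscale mammogram and applies global histogram equalization; the intensity threshold used for cropping is an assumption made for the example, not a value from the paper.

import numpy as np

def crop_breast_region(img, threshold=10):
    # Keep the bounding box of pixels brighter than a small threshold,
    # discarding the near-black background (a simple sweep, as in the text).
    mask = img > threshold
    rows, cols = np.any(mask, axis=1), np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return img[r0:r1 + 1, c0:c1 + 1]

def equalize_histogram(img):
    # Global histogram equalization for an 8-bit grayscale image:
    # remap intensities through the normalized cumulative histogram.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 / (cdf.max() - cdf.min())
    return cdf.astype(np.uint8)[img]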
2.2 Feature Extraction
The feature extraction phase is needed in order to create the transactional database to be mined. The extracted features were organized in a database, which is the input to the mining phase of the classification system. The extracted features are four statistical parameters (mean, variance, skewness, and kurtosis), the mean over the histogram, and the peak of the histogram. The general formula for the statistical moments is:

$$M_n = \frac{\sum (x - \bar{x})^n}{N} \tag{1}$$

where N is the number of data points and n is the order of the moment. The skewness can be defined as:

$$Sk = \frac{1}{N} \sum \left( \frac{x - \bar{x}}{\sigma} \right)^3 \tag{2}$$

and the kurtosis as:

$$kurt = \frac{1}{N} \sum \left( \frac{x - \bar{x}}{\sigma} \right)^4 - 3 \tag{3}$$

where σ is the standard deviation.
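A direct implementation of these six features might look as follows; the sketch assumes 8-bit grayscale windows and follows Equations (1)-(3) literally.

import numpy as np

def region_features(window):
    # Mean, variance, skewness, kurtosis (Equations (1)-(3)) plus the
    # mean over the histogram and the histogram peak for one window.
    x = window.ravel().astype(float)
    n, mean = x.size, x.mean()
    var = ((x - mean) ** 2).sum() / n
    sigma = np.sqrt(var)
    skew = (((x - mean) / sigma) ** 3).sum() / n
    kurt = (((x - mean) / sigma) ** 4).sum() / n - 3
    hist = np.bincount(window.ravel(), minlength=256)
    hist_mean = (np.arange(256) * hist).sum() / hist.sum()
    peak = int(hist.argmax())
    return [mean, var, skew, kurt, hist_mean, peak]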
2.3 Transactional Database Organization
All the features presented above were computed over smaller windows of the original image. The original image was split into four parts, as shown in Figure 2, for a better localization of the region of interest. In addition, the extracted features were discretized over intervals before building the transactional data set.
Fig. 2. Mammography division into four quadrants (NW, NE, SW, SE)
We propose two database organizations in this paper. In the first, the features of all quadrants were kept regardless of whether they characterized normal or cancerous tissue; in addition, some other descriptors from the original database were attached, such as breast position, type of tissue, etc. In the second organization, once all the features were extracted, the transactional database to be mined was built as follows: for normal images, all the extracted features were attached to the corresponding transaction, while for images characterizing an abnormal mammogram only the features extracted from the abnormal quadrants were attached (e.g., for the mammogram presented in Figure 2, only the features extracted for the NE quadrant, which contains the tumor, were attached; if the mammogram were a normal one, the features extracted for all four quadrants would have been attached). In the second organization, in addition to selecting quadrants with tumors from abnormal mammograms, we also dropped the additional descriptors from the database, because some of them may not be available in other datasets, while others (e.g., breast position) proved to mislead the classification process.
3 Association Rule Based Classification

3.1 Association Rules
Association rule mining has been extensively investigated in the data mining literature. Many efficient algorithms have been proposed, the most popular being apriori [1] and FP-Tree growth [11]. Association rule mining typically aims at discovering associations between items in a transactional database. Given a set of transactions D = {T1 , .., Tn } and a set of items I = {i1 , .., im } such that any transaction T in D is a set of items in I, an association rule is an implication A → B where the antecedent A and the consequent B are subsets of a transaction T in D, and A and B have no common items. For the association rule to be strong, the conditional probability of B given A has to be higher than a threshold called
minimum confidence. Association rule mining is normally a two-step process: in the first step, frequent item-sets are discovered (i.e., item-sets whose support is no less than a minimum support), and in the second step association rules are derived from the frequent item-sets. In our approach, we used the apriori algorithm to discover association rules among the features extracted from the mammography database and the category to which each mammogram belongs. We constrained the association rules such that the antecedent of a rule is composed of a conjunction of features from the mammogram, while the consequent of the rule is always the category to which the mammogram belongs. In other words, a rule describes frequent sets of features per category (normal, benign, and malign), based on the apriori association rule discovery algorithm. We developed two associative classifiers, as described in the following sections.

3.2 Association Rule-Based Classification with All Categories
This section introduces the rule generation phase of building an associative classifier when the rules are extracted from the entire training set at once. In this approach (Figure 3), all the transactions in the database form a single training collection, and the rules generated are de facto the classifier.
Fig. 3. Classifier for all categories
The following algorithm presents step by step the process of discovering association rules when the training set is mined at once.

Algorithm: ARC-AC – Find association rules on the training set of the data collection
Input: A set of objects O1 of the form Oi: {cat1, cat2, ..., catm, f1, f2, ..., fn}, where cati is a category attached to the object and fj are the selected features of the object; a minimum support threshold σ.
Output: A set of association rules of the form f1 ∧ f2 ∧ ... ∧ fn ⇒ cati, where cati is a category and fj is a feature.
Method:
(1) C0 ← {Candidate categories and their support}
(2) F0 ← {Frequent categories and their support}
(3) C1 ← {Candidate 1-itemsets and their support}
(4) F1 ← {Frequent 1-itemsets and their support}
(5) C2 ← {candidate pairs (cat, f) such that (cat, f) ∈ O1 and cat ∈ F0 and f ∈ F1}
(6) foreach object o in O1 do {
(7)   foreach c = (cat, f) in C2 do {
(8)     c.support ← c.support + Count(c, o)
(9)   }
(10) }
(11) F2 ← {c ∈ C2 | c.support > σ}
(12) O2 ← FilterTable(O1, F2)
(13) for (i ← 3; Fi−1 ≠ ∅; i ← i + 1) do {
(14)   Ci ← (Fi−1 ⋈ F2) /* ∀c ∈ Ci, c has only one category */
(15)   Ci ← Ci − {c | some (i−1)-item-subset of c ∉ Fi−1}
(16)   Oi ← FilterTable(Oi−1, Fi−1)
(17)   foreach object o in Oi do {
(18)     foreach c in Ci do {
(19)       c.support ← c.support + Count(c, o)
(20)     }
(21)   }
(22)   Fi ← {c ∈ Ci | c.support > σ}
(23) }
(24) Sets ← ∪i {c ∈ Fi | i > 1}
(25) R ← ∅
(26) foreach itemset I in Sets do {
(27)   R ← R + {f ⇒ c | f ∪ c ∈ I ∧ f is an itemset ∧ c ∈ C0}
(28) }
The ARC-AC algorithm generates the strong rules when the entire collection is mined at once. In steps (1-10), the frequent 2-itemsets are generated by joining the frequent categories and the frequent 1-itemsets; step (11) retains only those that exceed the minimum support threshold. In steps (13-23), all the k-frequent itemsets are discovered, as in the apriori algorithm. The last four steps represent the actual association rule generation stage. With both algorithms, ARC-AC and ARC-BC (presented next), the document space is reduced in each iteration by eliminating the transactions that do not contain any of the frequent itemsets; this is done by the FilterTable(Oi−1, Fi−1) function. A sketch of this constrained rule generation is given below.
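The sketch mines rules of the required shape, features ⇒ category, from (category, feature-set) transactions. For brevity it replaces the apriori candidate generation with brute-force enumeration of short itemsets, so it is only practical for the short, discretized transactions used here; all names and the length bound are illustrative.

from itertools import combinations
from collections import Counter

def arc_ac_rules(transactions, min_sup, min_conf, max_len=3):
    # transactions: list of (category, frozenset_of_features).
    # Only rules of the form 'feature-set => category' are produced,
    # mirroring the rule shape constraint of ARC-AC.
    n = len(transactions)
    feat_count, pair_count = Counter(), Counter()
    for cat, feats in transactions:
        for size in range(1, max_len + 1):
            for combo in combinations(sorted(feats), size):
                feat_count[combo] += 1
                pair_count[(combo, cat)] += 1
    rules = []
    for (combo, cat), sup in pair_count.items():
        if sup / n >= min_sup:
            conf = sup / feat_count[combo]
            if conf >= min_conf:
                rules.append((combo, cat, sup / n, conf))
    # sort by confidence, then support (descending)
    return sorted(rules, key=lambda r: (-r[3], -r[2]))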
3.3 Association Rule Based Classification by Category
This section introduces the second classification method (ARC-BC = association rule based classification by category) that we propose to apply to the image
data collection. It mines the data set by class instead of mining the entire data set at once. This algorithm was first proposed for text classification in [16]. The transactional database consists of transactions built as follows: if an object Oi is assigned to a set of categories C = {c1, c2, ..., cm} and, after the preprocessing phase, the set of features F = {f1, f2, ..., fn} is retained, the transaction Oi: {c1, c2, ..., cm, f1, f2, ..., fn} is used to model the object, and the association rules are discovered from these transactions. In this approach (Figure 4), each class is considered a separate training collection, and association rule mining is applied to it. In this case, the transactions that model the training images are simplified to Oi: {C, f1, f2, ..., fn}, where C is the category considered.
Fig. 4. Classifier per category (ARC-BC): association rules are mined for each category separately, and the combined rule sets put new images in the correct class
In our algorithm we use a constraint so that only the rules that can be used further for classification are generated. In other words, given the transaction model described above, we are interested in rules of the form O′ ⇒ ci, where O′ ⊆ O and ci ∈ C. To discover these interesting rules efficiently, we push the rule shape constraint into the candidate generation phase of the apriori algorithm in order to retain only the suitable candidate itemsets. Moreover, in the phase of rule generation from all the frequent k-itemsets, we use the rule shape constraint again to prune those rules that are of no use in our classification. In the ARC-BC algorithm (shown in Figure 5, with a sketch below), step (2) generates the frequent 1-itemsets. In steps (3-13), all the k-frequent itemsets are generated and merged with the category in C1. Steps (16-18) generate the association rules.
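A minimal sketch of the per-category variant follows. Counting supports within each class while computing confidence against the whole collection is our reading of the support normalization mentioned later in the experiments, not a verbatim transcription of the algorithm.

from collections import Counter, defaultdict
from itertools import combinations

def arc_bc_rules(transactions, min_sup, min_conf, max_len=3):
    # Frequent feature sets are mined per category (support counted
    # within the class); each frequent set I yields a candidate rule
    # I => category, with confidence measured on the full collection.
    global_count = Counter()
    for _, feats in transactions:
        for size in range(1, max_len + 1):
            for combo in combinations(sorted(feats), size):
                global_count[combo] += 1
    by_cat = defaultdict(list)
    for cat, feats in transactions:
        by_cat[cat].append(feats)
    rules = []
    for cat, subset in by_cat.items():
        local = Counter()
        for feats in subset:
            for size in range(1, max_len + 1):
                for combo in combinations(sorted(feats), size):
                    local[combo] += 1
        for combo, sup in local.items():
            if sup / len(subset) >= min_sup:
                conf = sup / global_count[combo]
                if conf >= min_conf:
                    rules.append((combo, cat, sup / len(subset), conf))
    return rules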
4 Building the Classifier
This section describes how the classification system is built and how a new image is classified using this system. We first present a number of pruning techniques that were considered during our experiments, and then describe the process of classifying a new image.
Algorithm: ARC-BC – Find association rules on the training set of the transactional database when the collection is divided into subsets by category
Input: A set of objects (O) of the form Oi: {ci, f1, f2, ..., fn}, where ci is the category attached to the object and fj are the selected features of the object; a minimum support threshold σ; a minimum confidence threshold.
Output: A set of association rules of the form f1 ∧ f2 ∧ ... ∧ fn ⇒ ci, where ci is the category and fj is a feature.
Method:
(1) C1 ← {Candidate 1 term-sets and their support}
(2) F1 ← {Frequent 1 term-sets and their support}
(3) for (i ← 2; Fi−1 ≠ ∅; i ← i + 1) do {
(4)   Ci ← (Fi−1 ⋈ Fi−1)
(5)   Ci ← Ci − {c | some (i−1)-item-subset of c ∉ Fi−1}
(6)   Oi ← FilterTable(Oi−1, Fi−1)
(7)   foreach object o in Oi do {
(8)     foreach c in Ci do {
(9)       c.support ← c.support + Count(c, o)
(10)    }
(11)  }
(12)  Fi ← {c ∈ Ci | c.support > σ}
(13) }
(14) Sets ← ∪i {c ∈ Fi | i > 1}
(15) R ← ∅
(16) foreach itemset I in Sets do {
(17)   R ← R + {I ⇒ Cat}
(18) }
Fig. 5. ARC-BC algorithm
4.1 Pruning Techniques
The number of rules generated in the association rule mining phase can be very large, which raises two issues. The first is that a huge number of rules may contain noisy information that would mislead the classification process. The second is that a huge set of rules would extend the classification time, which can be an important problem in applications where fast responses are required. In addition, in a medical application it is reasonable to present a small number of rules to the medical staff for further study and manual tuning; when the set of rules is too large, it becomes unrealistic to sift through it manually for editing. The pruning methods we employ in this project are the following: eliminate the specific rules, keeping only those that are general and have high confidence, and prune rules that could introduce errors at the classification stage. The following definitions introduce the notions used in this subsection.

Definition 1. Given two rules T1 ⇒ C and T2 ⇒ C, we say that the first rule is a general rule if T1 ⊆ T2.
The first step of this process is to order the set of rules, according to the following definition.

Definition 2. Given two rules R1 and R2, R1 is ranked higher than R2 if:
(1) R1 has higher confidence than R2;
(2) the confidences are equal, but supp(R1) exceeds supp(R2);
(3) both the confidences and the supports are equal, but R1 has fewer attributes in its left-hand side than R2.

With the set of association rules sorted, the goal is to select a subset that builds an efficient and effective classifier. In our approach, we attempt to select a high-quality subset of rules by keeping those rules that are general and have high confidence. The algorithm for building this set of rules is described below.

Algorithm: Pruning the low-ranked specific association rules
Input: The set of association rules found in the association rule mining phase (S)
Output: The set of rules used in the classification process
Method:
(1) sort the rules according to Definition 2
(2) foreach rule in the set S do {
(2.1)   find all the rules that are more specific;
(2.2)   prune those that have lower confidence;
(3) }

The next pruning method employed is to eliminate conflicting rules, i.e., rules that for the same characteristics point to different categories. For example, two rules T1 ⇒ C1 and T1 ⇒ C2 are conflicting, since they could introduce errors. Since we are interested in single-class classification, all such duplicate or conflicting rules are eliminated.
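The ordering and the two pruning steps can be sketched as follows, for rules represented as (antecedent, category, support, confidence) tuples; this is an illustrative reading of Definitions 1 and 2, not the authors' code.

def rank_key(rule):
    # Ordering from Definition 2: higher confidence first, then higher
    # support, then fewer attributes in the antecedent.
    antecedent, _category, support, confidence = rule
    return (-confidence, -support, len(antecedent))

def prune_rules(rules):
    kept, seen = [], {}
    for rule in sorted(rules, key=rank_key):
        ante, cat = frozenset(rule[0]), rule[1]
        # (1) a higher-ranked, more general rule with the same consequent
        #     (Definition 1) already covers this one -> drop it
        if any(frozenset(k[0]) < ante and k[1] == cat for k in kept):
            continue
        # (2) conflicting rule: same antecedent, different category;
        #     only the highest-ranked occurrence is kept
        if ante in seen:
            continue
        seen[ante] = cat
        kept.append(rule)
    return kept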
4.2 Classifying a New Image
The set of rules selected after the pruning phase represents the actual classifier, and this categorizer is used to predict the classes of new objects. Given a new image, the classification process searches this set of rules to find the class that most closely matches the object presented for categorization. This subsection discusses the approach for labeling new objects based on the set of association rules that forms the classifier. One solution for classifying a new object is to attach to the new image the class that has the most rules matching it, or the class associated with the first rule that applies to the object. Given an object to classify, the features are extracted and a transaction is created as discussed in Section 2. The features in the object yield a list of applicable rules within the limit given by the confidence threshold. If the applicable rules are grouped by the category in their consequent part and the groups are ordered by the sum of the rules' confidences, the top-ranked group indicates the most significant category to attach to the object being classified.
The next algorithm describes the classification of a new image.

Algorithm: Classification of a new image (I)
Input: A new image to be classified; the associative classifier (ARC); the confidence threshold conf.t.
Output: The category attached to the new image.
Method:
(1) foreach rule R in ARC (the sorted set of rules) do {
(2)   if R matches I then R.count++ and keep R;
(3)   if R.count==1 then first.conf=R.conf;
(4)   else if (R.conf > first.conf − conf.t) R.count++ and keep R;
(5)   else exit;
(6) }
(7) Let S be the set of rules that match I
(8) Divide S into subsets by category: S1, S2, ..., Sn
(9) foreach subset S1, S2, ..., Sn do {
(10)   sum the confidences of the rules in Sk
(11)   put the new image in the class that has the highest confidence sum
(12) }
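A compact Python rendering of this procedure, for rules represented as (antecedent, category, support, confidence) tuples sorted as in Definition 2, might look as follows; the tuple representation and the set-based matching are assumed for illustration.

def classify(image_features, classifier, conf_t):
    # image_features: set of discretized feature items for the new image.
    # classifier: rules sorted by rank_key (Definition 2).
    matched, first_conf = [], None
    for ante, cat, _sup, conf in classifier:
        if set(ante) <= image_features:
            if first_conf is None:
                first_conf = conf              # confidence of the first match
            elif conf <= first_conf - conf_t:
                break                          # remaining matches are too weak
            matched.append((cat, conf))
    if not matched:
        return None                            # no applicable rule
    sums = {}
    for cat, conf in matched:                  # group by category and sum
        sums[cat] = sums.get(cat, 0.0) + conf
    return max(sums, key=sums.get)             # class with highest sum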
Experimental Results
This section introduces the data collection that we used and the experimental results obtained using the two classification methods proposed.

5.1 Mammography Collection
The data collection used in our experiments was taken from the Mammographic Image Analysis Society (MIAS) [14]. Its corpus consists of 322 images belonging to three categories: normal, benign, and malign. There are 208 normal images, 63 benign, and 51 malign, the latter two being considered abnormal. In addition, the abnormal cases are further divided into six categories: microcalcification, circumscribed masses, spiculated masses, ill-defined masses, architectural distortion, and asymmetry. All the images also include the locations of any abnormalities that may be present. The existing data in the collection consists of the location of the abnormality (the centre of a circle surrounding the tumor), its radius, the breast position (left or right), the type of breast tissue (fatty, fatty-glandular, or dense), and the tumor type if one exists (benign or malign). All the mammograms are in medio-lateral oblique view. We selected this dataset because it is freely available, and because it is a commonly used database for mammography categorization, which allows us to compare our method with other published work. We divided the dataset into ten splits to perform the experiments. For each split, we selected about 90% of the dataset for training and the rest for testing, i.e., 288 images in the training set and 34 images in the testing set.
5.2 Experimental Results – Organization 1
In the training phase, the ARC-AC algorithm was applied to the training data and the association rules were extracted. The support was set to 10% and the confidence to 0%. The reason for choosing 0% for the confidence is that the database has more normal cases (about 70%); the 0% confidence threshold allows us to use the confidence of the rule in the tuning phase of the classifier. In the classification phase, the low and high thresholds of confidence are set such that the maximum recognition rate is reached. The success rate for the association rule classifier was 69.11% on average. The results for the ten splits of the database are presented in Table 1. One noticeable advantage of the association-rule-based classifier is the time required for training, which is very low compared to other methods such as neural networks.

Table 1. Success ratios for the 10 splits with the association-rule-based classifier with all categories (ARC-AC)

Database split | Success ratio (percentage)
1       | 67.647
2       | 79.412
3       | 67.647
4       | 61.765
5       | 64.706
6       | 64.706
7       | 64.706
8       | 64.706
9       | 67.647
10      | 88.235
Average | 69.11
Given this data organization, some experiments were performed with the ARC-BC algorithm as well, but the results obtained were unsatisfactory.
5.3 Experimental Results – Organization 2
We have tested our classification approach with ten different splits of the dataset. For Table 2, presented below, the association rules were discovered with a starting minimum support of 25% and a minimum confidence of 50%. The actual support with which the database is mined is computed in an adaptive way. Starting with the given minimum support, the dataset is mined and a set of association rules is found. These rules are ordered and used as a classifier, which is then tested on the training set. When the accuracy on the training set is higher than a given accuracy threshold, the mining process is stopped; otherwise the support is decreased (σ = σ − 1) and the process is continued. As a result, different classes are mined at different supports.
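The adaptive-support loop can be sketched as follows. Here mine_rules and train_accuracy are placeholders for the association-rule miner and the evaluation on the training set; both names, and the exact stopping details, are assumptions.

```python
def adaptive_mining(train_set, start_support, min_conf, acc_threshold,
                    mine_rules, train_accuracy):
    """Lower the minimum support one point at a time (sigma = sigma - 1)
    until the classifier reaches the accuracy threshold on the training set."""
    sigma = start_support
    while True:
        rules = sorted(mine_rules(train_set, sigma, min_conf), key=rank_key)
        if train_accuracy(rules, train_set) >= acc_threshold or sigma <= 1:
            return rules, sigma
        sigma -= 1   # decrease the support and continue mining
```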
The parameters in the tests with the results below are: minimum support 25%, minimum confidence 50%, and accuracy threshold 95%.

Table 2. Classification accuracy over the 10 splits using ARC-BC

        1st rule          ordered           cut rules         remove specific
Split   #rules accuracy   #rules accuracy   #rules accuracy   #rules accuracy
1       22     76.67      1121   80.00      856    76.67      51     60.00
2       18     86.67      974    93.33      755    90.00      48     86.67
3       22     83.33      823    86.67      656    86.67      50     76.67
4       22     63.33      1101   76.67      842    66.67      51     53.33
5       33     56.67      1893   70.00      1235   70.00      63     50.00
6       16     66.67      1180   76.67      958    73.33      51     63.33
7       30     66.67      1372   83.33      1055   73.33      58     53.33
8       26     66.67      1386   76.67      1089   80.00      57     46.67
9       20     66.67      1353   76.67      1130   76.67      52     60.00
10      18     76.67      895    83.33      702    80.00      51     76.67
avg(%)  22.7   71.02      1209.8 80.33      927.8  77.33      53.2   62.67
Classification in the first two columns of Table 2 is done by assigning the image to the category attached to the first rule (the one with the highest confidence) that applies to the test image (see Table 2, columns under '1st rule'). However, pruning techniques are employed beforehand so that a high-quality set of rules is selected. The pruning technique used in this case is a modified version of database coverage (i.e., selecting a set of rules that classifies most transactions present in the training set). Given a set of rules, the main idea is to find the best rules that make a good distinction between the classes. The given set of rules is ordered. Take one rule at a time and classify the training set for each class. If the consequent of the rule indicates class ci, keep that rule only if it correctly classifies some objects in the ci training set and does not classify any in the other classes. The transactions that were classified are removed from the training set. The next columns in Table 2 are results of classification that uses the most powerful class in the set of rules. The difference is as follows: in the first two columns, the set of rules that forms the classifier is the set of rules extracted at the mining stage, ordered according to the confidence and support of the rules (support was normalized so that the ordering is possible even when the association rules are found by category) (see Table 2, columns under 'ordered'); in the next two columns, after the rules were ordered, the conflicting rules (see Section 4.1) were removed (see Table 2, columns under 'cut rules'); in the last two columns (see Table 2, columns under 'remove specific'), the specific rules were removed from the ordered set if they had lower confidence (see Section 4.1).
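A sketch of the modified database-coverage pruning described above; the (items, label) representation of training transactions is an assumption for illustration.

```python
def database_coverage(ordered_rules, training_set):
    # training_set: list of (items, label) pairs (hypothetical representation)
    kept, remaining = [], list(training_set)
    for r in ordered_rules:
        covered = [t for t in remaining if r.antecedent <= t[0]]
        # keep the rule only if it correctly classifies some objects of its
        # own class and classifies no objects of the other classes
        if covered and all(label == r.consequent for _, label in covered):
            kept.append(r)
            remaining = [t for t in remaining if t not in covered]
    return kept
```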
We also present precision/recall graphs in Figure 6 to show that both the false positives and the false negatives are very small for normal cases, which means that for abnormal images we have a very small number of false negatives, which is very desirable in medical image classification. The formulae for recall and precision are given below:

R = TP / (TP + FN)    (4)

P = TP / (TP + FP)    (5)

The terms used to express precision and recall are given in the contingency table (Table 3), where TP stands for true positives, FP for false positives, FN for false negatives and TN for true negatives. From the graphs presented in Figure 6 one can observe that the values of both precision and recall for normal cases are very high. In addition, we can notice from equations (4) and (5) that the values of FP and FN tend to zero when precision and recall tend to 100%. Thus, the false positives and in particular the false negatives are almost null with our approach.
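For completeness, equations (4) and (5) in code form:

```python
def recall_precision(tp, fp, fn):
    # R = TP / (TP + FN), equation (4); P = TP / (TP + FP), equation (5)
    return tp / (tp + fn), tp / (tp + fp)
```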
Fig. 6. (a) Precision over the ten splits; (b) Recall over the ten splits
Table 3. Contingency table for category cat

                               human assignments
                               Yes    No
classifier assignments   Yes   TP     FP
                         No    FN     TN
In Table 4 the classification is done using the association rules obtained when mining the entire dataset at once with the second organization. In the first two columns, the set of rules that forms the classifier is the set of rules extracted at the mining stage, ordered according to the confidence and support of the rules
(see Table 4, columns under 'ordered'); in the next two columns, after the rules were ordered, the conflicting rules (see Section 4.1) were removed (see Table 4, columns under 'cut rules').

Table 4. Classification accuracy over the 10 splits using ARC-AC [2]

        ordered           cut rules
Split   #rules accuracy   #rules accuracy
1       6967   53.33      6090   53.33
2       5633   86.67      4772   86.67
3       5223   76.67      4379   76.67
4       6882   53.33      5938   53.33
5       7783   50.00      6878   50.00
6       7779   60.00      6889   60.00
7       7120   46.67      6209   46.67
8       7241   43.33      6364   43.33
9       7870   53.33      6969   53.33
10      5806   76.67      4980   76.67
avg(%)  6830.4 60.00      5946.8 60.00
As observed from the two tables presented above, the accuracy reached when ARC-BC is used is higher than that obtained when the training set was mined at once with ARC-AC. However, the accuracy reached in [2] with ARC-AC was actually higher than in this case (69.11%). These results demonstrate the importance of choosing the right data cleaning technique and data organization in building an effective and efficient data mining system. Not only in accuracy does ARC-BC outperform ARC-AC, but in time measurements as well (41.315 seconds versus 199.325 seconds for training and testing over all ten splits). All tests were performed on an AMD Athlon 1.8 GHz.
6 Conclusions
In this paper we presented two classification methods applied to medical image classification. Both classification systems are based on association rule mining. In addition, we demonstrated how important the cleaning phase is in building a classification system. The evaluation of the system was carried out on the MIAS [14] dataset, and the experimental results show that the accuracy of the system reaches 80.33% and that the false negatives and false positives tend towards zero in more than half the splits. Although the results seem promising when an associative classifier is used, there are some future research directions to be studied. A collaboration with medical staff would be very interesting in order to evaluate the performance of our system. In addition, the extraction of different features or a different database organization could lead to improved results.
References
1. R. Agrawal, T. Imielinski, and A. Swami. Mining association rules between sets of items in large databases. In Proc. 1993 ACM-SIGMOD Int. Conf. Management of Data, pages 207-216, Washington, D.C., May 1993.
2. Maria-Luiza Antonie, Osmar R. Zaïane, and Alexandru Coman. Application of data mining techniques for medical image classification. In Proc. of the Second Intl. Workshop on Multimedia Data Mining (MDM/KDD'2001), in conjunction with the Seventh ACM SIGKDD, pages 94-101, San Francisco, USA, 2001.
3. D. Brazokovic and M. Neskovic. Mammogram screening using multiresolution-based image segmentation. International Journal of Pattern Recognition and Artificial Intelligence, 7(6):1437-1460, 1993.
4. C. Chen and G. Lee. Image segmentation using multiresolution wavelet analysis and expectation-maximization (EM) algorithm for digital mammography. International Journal of Imaging Systems and Technology, 8(5):491-504, 1997.
5. A. Dhawan et al. Radial-basis-function-based classification of mammographic microcalcifications using texture features. In Proc. of the 17th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, volume 1, pages 535-536, 1995.
6. H. Chan et al. Computerized analysis of mammographic microcalcifications in morphological and feature spaces. Medical Physics, 25(10):2007-2019, 1998.
7. H. Li et al. Markov random field for tumor detection in digital mammography. IEEE Trans. Medical Imaging, 14(3):565-576, 1995.
8. H. Li et al. Fractal modeling and segmentation for the enhancement of microcalcifications in digital mammograms. IEEE Trans. Medical Imaging, 16(6):785-798, 1997.
9. I. Christoyianni et al. Fast detection of masses in computer-aided mammography. IEEE Signal Processing Magazine, pages 54-64, 2000.
10. Rafael C. Gonzalez and Richard E. Woods. Digital Image Processing. Addison-Wesley, second edition, 1993.
11. J. Han, J. Pei, and Y. Yin. Mining frequent patterns without candidate generation. In ACM-SIGMOD, Dallas, 2000.
12. Jiawei Han and Micheline Kamber. Data Mining: Concepts and Techniques. Morgan Kaufmann, 2001.
13. S. Lai, X. Li, and W. Bischof. On techniques for detecting circumscribed masses in mammograms. IEEE Trans. Medical Imaging, pages 377-386, 1989.
14. http://www.wiau.man.ac.uk/services/MIAS/MIASweb.html
15. T. Wang and N. Karayiannis. Detection of microcalcification in digital mammograms using wavelets. IEEE Trans. Medical Imaging, pages 498-509, 1998.
16. Osmar R. Zaïane and Maria-Luiza Antonie. Classifying text documents by associating terms with text categories. In Proc. of the Thirteenth Australasian Database Conference (ADC'02), pages 215-222, Melbourne, Australia, 2002.
An Innovative Concept for Image Information Mining

Mihai Datcu (1) and Klaus Seidel (2)

(1) Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), Oberpfaffenhofen, D-82234 Wessling, Germany
[email protected]
(2) Remote Sensing Group, Computer Vision Lab (CVL), ETHZ, CH-8092 Zürich, Switzerland
[email protected]
Abstract. Information mining opens new perspectives and a huge potential for information extraction from large volumes of heterogeneous images and for the correlation of this information with the goals of applications. We present a new concept and system for image information mining, based on modelling the causalities which link the image-signal contents to the objects and structures of interest to the users. The basic idea is to split the information representation into four steps:
1. image feature extraction using a library of algorithms, so as to obtain a quasi-complete signal description;
2. unsupervised grouping in a large number of clusters, to be suitable for a large set of tasks;
3. data reduction by parametric modelling of the clusters;
4. supervised learning of user semantics, that is, the level where, instead of being programmed, the system is trained by a set of examples; thus the links from image contents to the users are created.
The record of the sequence of links is a knowledge acquisition process: the system memorizes the user hypotheses. Step 4 is a man-machine dialogue; the information exchange is done using advanced visualization tools. The system learns what the users need. The system is presently prototyped for inclusion in a new generation of intelligent satellite ground segment systems and value-adding tools in the area of geoinformation; several applications in medicine and biometrics are also foreseen.
1 Introduction

Image archives are heterogeneous, huge data repositories; they are high-complexity sources of valuable information, e.g., the Earth Observation data archives contain millions of optical, radar and other types of images and data. The exploration of their content is not an easy task. Among the promising methods proposed in recent years are the methods of data and information mining. However, accessing the image information content involves highly complex problems, arising primarily from the huge volume of data, the rich information content, and the subjectivity of the user interpretation. The present article analyzes Image Information Mining methods
seen as an information transmission problem: the source of information is an image archive; the receiver is the community of users. Data and information mining are exploratory processes focusing on the techniques for analyzing and combining raw data and detecting patterns and regularities within the data set. The success of the exploratory information search depends on the capacity to capture and describe the full complexity of the data. Thus we use a concept integrating multiple methods: information theory, stochastic modelling, Bayesian inference, and machine learning. Information theory deals with encoding data in order to transmit it correctly and efficiently. The theory of stochastic processes and machine learning deal with estimating models of data and predicting future observations. There is a relationship between these fields: the most compact encoding of the data is by the probabilistic model that describes it best; thus there is a fundamental link between information and probabilistic models. This link is basic to the implementation of optimal algorithms for information extraction and the detection of causalities, and to the design of information systems implementing image information mining functions. The article presents and analyzes several methods for mining the information content of large image repositories, and demonstrates image mining functions such as search by example, search by data model, exploration in scale space and image complexity, knowledge acquisition, and adaptation to the user conjecture.
2 From Content Based Image Retrieval to Mining the Image Information

The continuous expansion of multimedia into all sectors of activity faces us with a double explosion:
• the number of image data sets
• the data size and information variability of each image.
For example, with a digital camera we can acquire 10 Gb of images during a 3-week holiday; a satellite sensor can acquire 100 Gb per day. It has been known for many years that classical image file text annotation is prohibitive for large databases. The last decade is marked by important research efforts in the development of Content Based Image Retrieval (CBIR) concepts and systems [11]. Images in an archive are searched by their visual similarity with respect to color, texture or shape characteristics. As image size and information content kept growing, CBIR was no longer satisfactory, and Region Based Information Retrieval (RBIR) was developed [11]. Each image is segmented, and individual objects are indexed by primitive attributes like color, texture and shape. Thus, RBIR is a solution to deal with the variability of image content. However, both CBIR and RBIR have been computer-centered approaches, i.e., the concepts could only marginally, or not at all, adapt to the user needs. Consequently, image retrieval systems have been equipped with relevance feedback functions [1]. The systems are designed to search for images similar to the user conjecture.
The algorithms are based on analyses of the probabilities of an image relative to the search target; a feedback mechanism which takes this into account is introduced. Another interesting approach was developed based on a learning algorithm to select and combine feature groupings and to allow users to give positive and negative examples. The method refines the user interaction and enhances the quality of the queries [8]. Both previously mentioned concepts are first steps to include the user in the search loop: they are information mining concepts. Also, these are methods in the trend of designing human-centered systems.
3 Images and Image Information

Compared with data mining, the field of Image Information Mining reaches much higher complexity, resulting from:
• the huge volume of data (Tb to Pb)
• the variability and heterogeneity of the image data (diversity of sensors, time or conditions of acquisition, etc.)
• the image content, whose meaning is often subjective, depending on the user's interest
• the large range of user interests, semantics and contextual (semiotic) understanding.
In general, by image we understand a picture, thus relating it to (human) visual perception and understanding. A picture is characterized by its primitive features, such as color, texture and shape at different scales. Its perception and understanding is in the form of symbols and semantics in a certain semiotic context [12]. However, the concept of image goes beyond the pictorial understanding. Images are multidimensional signals, like computer tomography, hyperspectral images or results of simulations. They are communicated to users via 2-dimensional visual projections. Those images can contain quantitative, objective information, as acquired by an instrument. In Fig. 1 an example is presented for the visualization of a Digital Elevation Model (DEM) data set in comparison with a color-rendered satellite image of the same Alpine region. The visual information in the DEM image is not easy to read: the information on terrain elevation is contained in the image samples. The color image, however, shows the complexity of pictorial information. In the perspective of image information mining, both types of images, pictorial and multidimensional signals, give rise to the same problem. Their understanding depends on the accuracy of:
• information content modelling
• modelling the user's understanding.
Fig. 1. Quantitative versus pictorial information. Top: Visualization of a Digital Elevation Model (DEM) data set of Davos, Switzerland. The information on terrain height is contained in the pixel intensity; the information is quantitative and is not rich in visual meaning. Bottom: Satellite image (Landsat TM) of the same area. The information is pictorial; the aggregation of colors, textures and geometrical objects at different scales makes it possible to understand the scenery of an alpine ski resort.
Thus, image information mining can be seen as a communication task. The source of information is the large heterogeneous image archive; the receiver is the community of users. The accuracy of the communication, i.e., the success of finding the information needed as exploration results, depends on the accuracy of the previously assumed levels of modelling.
4 Information Mining: Concept and System

We developed a theoretical concept for image information representation and adaptation to the user conjecture [2,3,4,6,7]. A quasi-complete description of the image content is obtained by utilization of a library of models. The feature extraction is equivalent to splitting the image content into different information channels.
Fig. 2. The hierarchical representation of the image information content, and the causalities to correlate the user conjecture to the image content. The key elements are: the quasicomplete image signal description by extraction of the elementary features, the data reduction by clustering, thus inducing also a measure of some similarity over the feature space, the utilization of the cluster models as elements of an abstract vocabulary which in an interactive learning process enables learning of the semantics of the target and the user conjecture.
An unsupervised clustering is done for each information channel as an information encoding and data reduction operation. Then, during the operation of the system, an interactive learning process allows the user to create links, i.e., to discover conditions, between the low-level signal description and the target of the user. The image features reflect the physical parameters of the imaged scene; thus, assuming the availability of certain models, the scene parameters can be extracted. For example, color and image texture carry information about the structure of object surfaces. However, in the case of modelling high-complexity signals, a large number of sources coexist within the same system, so multiple candidate models are needed to describe the information sources in the image. Also, to reduce complexity, to capture the class structure, to discover causalities and to provide computational advantages, the models are likely to be analyzed hierarchically. The hierarchical information representation is presented below and depicted in Fig. 2:
• Image data: the information is contained in the samples of the raw data. It is the lowest level of information representation.
• Image features: the performance of information extraction depends critically on the descriptive or predictive accuracy of the probabilistic model employed. Accurate modelling typically requires high-dimensional and multi-scale modelling. For nonstationary sources, accuracy also depends on adaptation to local characteristics. For a quasi-complete characterization of the image content, information is extracted in the form of parameters characterizing: color or spectral properties, texture as interactions among spatially distributed samples, and the geometrical attributes of image objects.
• Meta features: the estimation of the image features requires the assumption of some data models. The type of model used, its evidence and complexity, plays the role of
meta information, i.e., it describes the quality of the extracted parameters. From a data aggregation perspective, a meta feature is an indicator of information commensurability, e.g., texture features estimated using a co-occurrence matrix are not comparable with parameters of Markov random fields. The meta features have semantic value.
• Cluster model: the signal features have n-dimensional representations. Due to observation noise or model approximations, the feature space is not occupied homogeneously. Thus, another level of information abstraction is the type of feature grouping, i.e., the cluster models and the associated parameters. The obtained clusters represent information only for each category of the features.
• Semantic representation: it is known that the distinction between the perception of information as signals and symbols is generally not dependent on the form in which the information is presented, but rather on the conjecture in which it is perceived, i.e., upon the hypotheses and expectations of the user. Augmentation of data with meaning requires a higher level of abstraction. The extracted information, represented in the form of classes, is fused in a supervised learning process. Prior information in the form of training data sets or expert knowledge is used to create semantic categories by associations to different information classes. Thus, the observations are labelled and the contextual meaning is defined.
In order to implement the hierarchical representation of the image information content, the data are pre-processed. First, the image features are extracted for different image scales. Then the image features are clustered, and a signal content index is created using the cluster description, the scale information and the type of stochastic model assumed for the image parameters. A Bayesian learning algorithm allows a user to visualize and to encapsulate interactively his prior knowledge of certain image structures and to generate a supervised classification in the joint space of clusters, scales, and model types. The index of each image pixel is encoded by the spatial correspondence of the class information.
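One way to picture the resulting per-pixel index is as a record linking all levels of the hierarchy. The sketch below and its field names are illustrative, not the system's actual schema.

```python
from dataclasses import dataclass
from typing import Optional, Dict, Tuple

@dataclass
class PixelIndexEntry:
    image_id: str
    position: Tuple[int, int]          # (row, col) in the image
    clusters: Dict[str, int]           # feature channel -> cluster id
    scale: int                         # scale of the feature estimate
    model_type: str                    # meta feature: model used for estimation
    semantic_label: Optional[str] = None   # attached during interactive learning
```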
Fig. 3. The system architecture. The server is shown in yellow, the client in violet.
The user is enabled to attach his meaning to similar structures occurring in different images, thus adding a label in the archive inventory. This label is further used to specify queries. The hierarchical information, meta-information, associations and semantic labels are stored and managed by a Data Base Management System. The system is implemented in a server-client architecture, as presented in Fig. 3. This concept was implemented and successfully demonstrated with an on-line experimental system; see http://isis.dlr.de/mining. The novel mining functions presently provided by the system are presented below.
Fig. 4. Semantic CBIR on Synthetic Aperture Radar X-SAR SRL images of Switzerland. Top: Result of a semantic query: discovering settlements. The images were automatically analyzed at ingestion into the archive, and a catalogue entry was created for all images containing built-up areas. Bottom: Each image has the result of the classification attached; the regions marked in red correspond to villages and cities, so the result of the query is the list of images, augmented with the expected semantic image content.
Fig. 5. The geographical locations of the images obtained as the result of a semantic query (Fig. 4).
4.1 Semantic Content Based Image Retrieval
Following an automatic processing at data ingestion, or in a semi-automatic manner using an interactive learning process, the system can create links between the concept level and the image data and cluster levels. The user is enabled to specify semantic queries at the concept level, and the system returns all images with the specified content, together with a classification of the individual images. An example is given in Fig. 4. In the case of Earth Observation, the geographical location is also used as meta-information, allowing the user to find the location of the intensity images, as indicated in Fig. 5.

4.2 Mining Driven by Primitive Signal Features

Mining driven by primitive signal features, such as spectral signatures or structural patterns, is enabled by the exploration of the links between the cluster and image data levels. Examples of spectral and textural signature mining are depicted in Fig. 6. The spectral mining is an example of physical, quantitative model exploration. For the Landsat-TM images used in the example, only 6 spectral bands were selected.

4.3 Mining Information Theoretical Measures

In the exploration of large image archives with rich information content, it is important to group the data according to various objective information measures. This helps the users to orient themselves within the search process. One important characteristic is the scale at which relevant information is concentrated. We used a multiscale stochastic process for automatic scale detection and segmentation [9,10]. An example is shown in Fig. 7.
Fig. 6. Example of image information content extraction in a Landsat TM image of Switzerland. Left: Spectral image content, in red, obtained by the correlation of a specified cluster model with the pixel position in the image. Right: Texture image content obtained in a similar manner; however, the textural information characterizes structures, so the resulting classification has connected areas. The information is indexed, enabling the discovery of all images with similar spectral or textural properties.
The exploration of image archives by scale is a process which implicitly uses a priori knowledge assumed by the user: the ratio of the image resolution and the size of the objects he is searching for. The complexity of the images is another information theoretical measure used to rank images. The complexity is defined as the Kullback-Leibler divergence between the cluster level and the image data level.
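A sketch of such a complexity measure, assuming both levels are summarized as histograms over the same bins; the distributions actually used by the system are not detailed here.

```python
import numpy as np

def kl_complexity(image_hist, cluster_hist):
    """Kullback-Leibler divergence D(p || q) between the image-data level (p)
    and the cluster-model level (q), both given as non-negative histograms."""
    p = np.asarray(image_hist, float) + 1e-12
    q = np.asarray(cluster_hist, float) + 1e-12
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))
```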
Fig. 7. Scale selection in aerial photography. Left: structures corresponding to a fine scale. Right: in the same image, structures corresponding to a rougher scale. The scale of structures in images is a fundamental descriptor, both in relation to the visual interpretation and, objectively, in relation to the resolution of the sensor. The parameters of a multiscale random field are used to automatically detect the relevant scales. The information is indexed, enabling the discovery of all images with structures at similar scales.
Fig. 8. Image complexity. Top: Example of images of low (left) and high (right) spectral complexity. Bottom: Example of images of low (left) and high (right) structural complexity. The complexity of the images was measured as the Kullback-Leibler entropy at the classification and clustering levels in the information hierarchy. The low-complexity images are poor in information content; high-complexity images show more "activity", thus giving a better chance to discover "interesting" structures or objects. The complexity values are indexed, enabling the discovery of all images with similar behavior.
The complexity depends on the quality and type of model used. In Fig. 8, examples of ranking images according to their spectral and textural complexity are presented.

4.4 Mining by Interactive Learning

Interactive learning is the process of discovering the links between the user interest (target), the image content in terms of describing models, and the images containing the assumed structure [3,7]. Mining by interactive learning and probabilistic search is the process which exploits all the levels of the hierarchical information representation (Fig. 2) in order to discover the links between the user interest and the image content. The user interest can be a target, i.e., a scene object or scene structure, in this case being specified from the beginning as a precise semantic category. However, as in a typical data mining task, the user interest can be broader, like a specific class of ontologies, thus
including possible groups of semantic classes. At the beginning of the mining process, the user does not know whether any of the search targets is included in the data. That generally results in searching for similar objects or classes of categories. The user can define semantic objects (or groups of objects and structures) by an interactive learning function which is based on an information fusion and classification process. The fused information is the statistical model of the clusters induced for each feature space separately. The clustering process is equivalent to data coding, e.g., vector quantization. The feature space is partitioned into regions grouping similar image features; thus each cluster represents an abstract symbol of the messages contained in the data. The clustering is performed over the feature space of all images and is accompanied by an index to the image generating the feature and to the corresponding image regions. Thus, during a supervised classification, the cluster models are fused, resulting at the user site in a semantic message about the target of the search process, and at the data site in pointers indicating which images contain the target objects, together with the associated image classification. For a better definition of the search target, the interactive training process is implemented as a Bayesian decision based on positive and negative examples. As a result of the training process, the images in the archive are ordered by statistical measures of belief (a posteriori probability, the extent of regions having high belief to correspond to the searched target); they are also ordered in sequences corresponding to images likely to contain structures given as positive examples and images likely to contain structures trained as negative examples. The user can continue the training, selecting images from one of the sequences. This is a general learning strategy based on positive and negative examples in a man-machine dialogue using a visual man-machine interface. In a first step, interactive learning uses a Bayesian network to create the links between the concept and cluster levels. During interactive learning, the image data (quicklooks) are used to give examples and to index the spatial positions of the target structures. In a second step, also using a Bayesian approach, a probabilistic search over the image space is performed. At this stage, the links between the concept level and the cluster and image data levels are created. The learning process uses positive and negative examples, both from the user and from the machine; it is a man-machine dialog. In Fig. 9 an example is presented for the exploration of different models (texture at various scales and spectral signatures) to discover different semantic objects in the data. The results of the probabilistic search are depicted in Fig. 10 for the cases indicated in Fig. 9.
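The ranking by belief can be illustrated with a small sketch. It assumes a simple two-hypothesis model over cluster labels; the system's actual Bayesian network is richer, and all names here are placeholders.

```python
def rank_by_belief(images, positive, negative, prior=0.5):
    """Order archive images by a posteriori belief from positive and negative
    examples. images: dict image_id -> list of cluster ids in the image;
    positive/negative: sets of cluster ids taken from the user's examples."""
    def match(clusters, examples):
        # smoothed fraction of the image's clusters seen in the example set
        return (sum(c in examples for c in clusters) + 1) / (len(clusters) + 2)
    belief = {}
    for img, clusters in images.items():
        lp = match(clusters, positive) * prior
        ln = match(clusters, negative) * (1.0 - prior)
        belief[img] = lp / (lp + ln)   # posterior probability of "target"
    return sorted(belief, key=belief.get, reverse=True)
```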
Fig. 9. Interactive training. Top: Interactive training using the fusion of spectral and textural information at the finest image scale; the target semantics is "meadow". Bottom: On the same image, interactive training using the fusion of texture information estimated for scales 1:2 and 1:3; the target semantics is "mountain". Interactive learning is an information mining process which enables adaptation to the user conjecture. It is a pure exploratory function based on learning, fusion, and classification processes, using the pre-extracted image primitive attributes and allowing an open, very large semantic space. The user-defined target is generalized over the entire image archive, thus allowing further exploration.
Fig. 10. Probabilistic search. Top: the result of the probabilistic search for images containing "meadow". Bottom: the result of the probabilistic search for images containing "mountains". Both query results correspond to the interactive training as defined in Fig. 9.
4.5 Knowledge Driven Image Information Mining and User Conjecture

During the interactive learning and probabilistic search, the database management system (DBMS) holds a record of:
• the user semantics
• the combination of models able to explain the user's target
• the classification of the target structure in each individual image
• a set of statistical and information-theoretical measures of the goodness of the learning process.
This information and these associations represent a body of knowledge, either discovered or learned from the various system users, and are further used for other mining tasks. This acquired and learned information and knowledge is itself an object of mining, e.g., the grouping of semantic levels, relevance feedback, and joint grouping between the semantic space and the statistical or information-theoretical measures of the goodness of the learning process.

4.6 Applications of Image Information Mining Concepts

Part of the concepts and methods have been materialized in a pre-operational prototype system: KIM - Knowledge driven image Information Mining [13]. The system has been implemented by Advanced Computer Systems (ACS) in Rome, Italy, under a European Space Agency (ESA) contract. The KIM system has been evaluated by the European Union Satellite Center (EU SC) in Madrid, Spain, and the Nansen Environmental and Remote Sensing Center (NERSC) in Bergen, Norway. The evaluation was performed using data sets covering almost all the territory of Mozambique and a part of Nepal. The data set consists of co-registered multispectral Landsat-5 data and Synthetic Aperture Radar (ERS-1) images, and also Landsat-7 and Ikonos data. Presently KIM is used for a variety of applications using high-resolution optical multispectral, hyperspectral and polarimetric Synthetic Aperture Radar data, and also medical images. This technology has reached a sufficient level of maturity for integration into commercial products, as has been shown here for a variety of remote-sensing applications. This opens new perspectives and offers huge potential for correlating the information extracted from remote sensing images with the goals of specific applications. These technologies shift the focus from data to information, meeting user needs, promoting scientific investigations, and supporting the growth of the value-adding industry and service providers by permitting the provision of new services based on information and knowledge. This will profoundly affect developments in fields like space exploration, industrial processes, exploitation of resources, media, etc. The KIM prototype demonstrates that "the results of advanced and very high complex algorithms for feature extraction can be made available to a large and diverse user community". The users can access the image information content based on their specific background knowledge and can interactively store the meta-information.
5 Conclusions

We have developed a new concept for image information mining. We regard the mining process as a communication task, from a user-centered perspective.
The hierarchy of information representation, in conjunction with the quasi-complete image content description, enables the implementation of a large variety of mining functions. The concept was demonstrated for a variety of Earth Observation data. Further work is being performed on the development of intelligent satellite ground segment systems and value-adding tools. However, its potential is broader; other fields of application are possible, such as medical imagery, biometrics, etc. The proposed concept is far from being fully exploited. Ongoing theoretical development is presently deepening the study of image complexity. In the case of high-heterogeneity observations, the complexity and the curse of dimensionality are two key issues which can hinder the interpretation. Therefore, as an alternative to "interpretation", we propose an exploratory methodology approached from an information-theoretical perspective in a Bayesian frame. Another direction is the analysis of cluster models from the perspective of an "objective" semantic approach, aiming at the elaboration of methods to understand the nature of the feature space. A further direction of application of the developed methodology is the mining of temporal series of images, considering the integration of spatio-temporal signal analysis. Even the concept of learning the user conjecture was to some extent demonstrated. Difficult problems are under further research, such as developing image grammars and the representation of image content in different contextual environments. This is a semantic problem which can arise between different users when they define or describe the same structures differently, requiring the primitive attributes, features, domains, values, or causalities to be translated. A number of challenging tasks, mainly in the design of multidimensional DBMS, man-machine interfaces, and distributed information systems, will probably be approached soon.

Acknowledgements. The project has been supported by the Swiss Federal Institute of Technology (ETH) Research Foundation, Advanced Query and Retrieval Techniques for Remote Sensing Image Archives (Grant: RSIA 0-20255-96). The authors would like to thank Michael Schröder and Hubert Rehrauer for converting the concept into algorithms and setting up the Multi-Mission Demonstrator (MMDEMO).
References
[1] I.J. Cox, M.L. Miller, S.M. Omohundro, and P.N. Yianilos, 1996, PicHunter: Bayesian relevance feedback for image retrieval, Proc. Int. Conf. on Pattern Recognition, Vienna, Austria.
[2] M. Datcu, K. Seidel, and M. Walessa, 1998, Spatial information retrieval from remote sensing images: Part I. Information theoretical perspective, IEEE Trans. on Geoscience and Remote Sensing, Vol. 36, pp. 1431-1445.
[3] M. Datcu, K. Seidel, and G. Schwarz, 1999, Elaboration of advanced tools for information retrieval and the design of a new generation of remote sensing ground segment systems, in I. Kanellopoulos, editor, Machine Vision in Remote Sensing, Springer, pp. 199-212.
[4] M. Datcu and K. Seidel, 1999, Bayesian methods: applications in information aggregation and data mining, International Archives of Photogrammetry and Remote Sensing, Vol. 32, Part 7-4-3 W6, pp. 68-73.
[5] M. Datcu, K. Seidel, S. D'Elia, and P.G. Marchetti, 2002, Knowledge-driven information-mining in remote sensing image archives, ESA Bulletin No. 110, pp. 26-33.
[6] M. Schröder, H. Rehrauer, K. Seidel, and M. Datcu, 1998, Spatial information retrieval from remote sensing images: Part II. Gibbs Markov random fields, IEEE Trans. on Geoscience and Remote Sensing, Vol. 36, pp. 1446-1455.
[7] M. Schröder, H. Rehrauer, K. Seidel, and M. Datcu, 2000, Interactive learning and probabilistic retrieval in remote sensing image archives, IEEE Trans. on Geoscience and Remote Sensing, Vol. 38, pp. 2288-2298.
[8] T.P. Minka and R.W. Picard, 1997, Interactive learning with a society of models, Pattern Recognition, Vol. 30, pp. 565-581.
[9] H. Rehrauer, K. Seidel, and M. Datcu, 1999, Multi-scale indices for content-based image retrieval, in Proc. of the 1999 IEEE International Geoscience and Remote Sensing Symposium IGARSS'99, Volume V, pp. 2377-2379.
[10] H. Rehrauer and M. Datcu, 2000, Selecting scales for texture models, in Texture Analysis in Machine Vision, ed. M.K. Pietikäinen, Series in Machine Perception and Artificial Intelligence, Vol. 40, World Scientific.
[11] C.R. Veltkamp, H. Burkhardt, and H.-P. Kriegel (eds.), 2001, State-of-the-Art in Content-Based Image and Video Retrieval, Kluwer.
[12] Ji Zhang, Wynne Hsu, and Mong Li Lee, 2001, Image mining: issues, frameworks and techniques, in Proceedings of the Second International Workshop on Multimedia Data Mining (MDM/KDD'2001), San Francisco, CA, USA, August 2001.
[13] M. Datcu, K. Seidel, S. D'Elia, and P.G. Marchetti, 2002, Knowledge driven information mining in remote sensing image archives, ESA Bulletin 110, pp. 26-33.
Multimedia Data Mining Using P-Trees

William Perrizo, William Jockheck, Amal Perera, Dongmei Ren, Weihua Wu and [...]

[...] [ ]. It includes the construction of multimedia data cubes, which facilitate multiple dimensional analysis, and the mining of multiple kinds of knowledge, including summarization, classification and association [ ]. The common characteristic in many data mining applications, including many multimedia data mining applications, is that first specific features of the data are captured as feature vectors or tuples in tables or relations, and then tuple mined [ ][ ]. There are some examples of multimedia data mining systems. IBM's Query by Image Content and MIT's Photobook extract image features such as color histograms,
hues, intensities and shape descriptors, as well as quantities measuring texture. Once these features have been extracted, each image in the database is then thought of as a point in this multidimensional feature space (one of the coordinates might, for the sake of simplicity, correspond to the overall intensity of red pixels, and so on). Another example is MultiMediaMiner [ ], a system prototype for multimedia data mining applied to multidimensional databases, using attribute-oriented induction, multilevel association analysis, statistical data analysis and machine learning for mining different kinds of rules. The system contains four major components: an image excavator for the extraction of images and videos from multimedia repositories, a processor for the extraction of image features and the storage of precomputed data, a user interface, and a search kernel for matching queries with the image and video features in the database.

Video/Audio Data Mining. The high dimensionality of the feature space and the size of the multimedia data sets make meaningful multimedia data summarization a challenging problem [ ]. Peano trees (P-trees) provide a common structure for these highly dimensional feature vectors. Video/audio data mining, and other multimedia data mining, often involves preliminary feature extraction. In order to get high accuracy for classification and clustering, good features are selected that can capture the temporal and spectral structures of the multimedia data. These pertinent data are formed into relations, or possibly time-series relations. Each tuple describes specific features of a frame [ ]. As shown in Figure 1, we transform the relations into P-trees, which provide a good foundation for the data mining process.
Fig. 1. Process of video/audio multimedia data mining
For example, performing face recognition from video sequences involves first extracting specific face geometry attributes (e.g., the relative position of the nose, eyes,
chin bones, chin, etc.) and then forming a tuple of those geometric attributes. Faces are identified by comparing face-geometric features with those stored in a database for known individuals. Partial matches allow recognition even if there are glasses, beards, weight changes, etc. There are many applications of face recognition technology, including surveillance, digital library indexing, secure computer logon, and border crossing, airport and banking security [ ].

Voice biometrics is an example of audio mining [ ]. It relies on human speech, one of the primary modalities in human-to-human communication, and provides a non-intrusive method for authentication. By extracting appropriate features from a person's voice and forming a vector or tuple of these features to represent the voiceprint, the uniqueness of the physiology of the vocal tract and the articulator properties can be captured to a high degree and used very effectively for recognizing the identity of the person.

Text Mining. Text mining can find useful information in unstructured textual information such as letters, emails and technical documents. These kinds of unstructured textual documents are not ready for data mining [ ]. Text mining generally involves the following two phases:
1. Preparation phase: document representation.
2. Processing phase: clustering or classification.
In order to apply data mining algorithms to text data, a weighted feature vector is typically used to describe a document. These feature vectors contain a list of the main themes, keywords or word stems, along with a numeric weight indicating the relative importance of the theme or term to the document as a whole [ ]. The feature vectors are usually highly dimensional but sparsely populated [ ]. P-trees are well suited for representing such feature vector sets. After the mapping of documents to feature vector tables or relations, document classification can be performed in either of two ways: tuple clustering or tuple classification.
Multimedia Summary
In summary, the key point of this discussion is that a large volume of multimedia data is typically preprocessed into some sort of representation in high-dimension feature spaces. These feature spaces usually take the form of tables or relations. The data mining of multimedia data then becomes a matter of row or tuple mining (clustering or classification) of the feature tables or relations. This paper proposes a new approach to the storage and processing of the feature spaces. In the next section of this paper, we describe a technology for storing and mining multimedia feature spaces efficiently and accurately.
Peano Count Trees (P-Trees)
In this section we discuss a data structure, called the Peano Count Tree or P-tree, its algebra and properties. First, we note again that in most multimedia data mining applications, feature extraction is used to convert raw multimedia data to relational or tabular feature vector form, and then the vectors are data mined. The P-tree data structure is designed for just such a data mining setting. P-trees provide a lossless, compressed, data-mining-ready representation of tabular data [ ]. Given a relational table with ordered rows, the data can be organized in different formats; BSQ, BIL and BIP are three typical formats. The Band Sequential (BSQ) format is similar to the relational format except that each attribute (band) is stored as a separate file, using a consistent tuple ordering. Thematic Mapper (TM) satellite images are in BSQ format. For images, the Band Interleaved by Line (BIL) format stores the data in line-major order, i.e., the first row of all bands, followed by the second row of all bands, and so on. SPOT images, which come from French satellite platforms, are in BIL format. Band Interleaved by Pixel (BIP) is a pixel-major format; standard TIFF images are in BIP format.
Fig. 2. BSQ, BIP, BIL and bSQ formats for a 2-band 2×2 image
We use a generalization of the BSQ format, called bit Sequential (bSQ), to organize any relational dataset with numerical values [ ]. We split each attribute into separate files, one for each bit position. There are several reasons why we use the bSQ format. First, different bits make different contributions to the values; in some applications, the high-order bits alone provide the necessary information. Second, the bSQ format facilitates the representation of a precision hierarchy. Third, the bSQ format facilitates compression. P-trees are quadrant-wise (polytant-wise in dimensions other than two), Peano-order, run-length compressed representations of each bSQ file. Fast P-tree operations, especially the fast AND operation, provide for efficient data mining. In Figure 2 we give a very simple illustrative example with only two bands in a scene having only four pixels (two rows and two columns). Both decimal and binary reflectance values are given. We can see the difference between the BSQ, BIL, BIP and bSQ formats.
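As an illustration, a minimal sketch of the bSQ split, assuming 8-bit unsigned attribute values; numpy is used for brevity, and this is not the authors' implementation.

```python
import numpy as np

def bsq_bitplanes(band, nbits=8):
    """Split one attribute (band) into bSQ bit files: one bit plane per bit
    position, bit 1 being the most significant. `band` is a 2-D array of
    unsigned integers; returns a list of 0/1 arrays."""
    a = np.asarray(band, dtype=np.uint32)
    return [((a >> (nbits - 1 - j)) & 1).astype(np.uint8) for j in range(nbits)]
```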
Basic P-Trees
In this subsection we assume the relation is the pixel relation of an image, so that there is a natural notion of rows and columns. However, for arbitrary relations, the row order can be considered Peano order (in 1-D, 2-D, 3-D, etc.) to achieve the very same result. [...] P-trees are somewhat similar in construction to other data structures in the literature, e.g., Quadtrees [ ][ ] and HH-codes [ ]. Given an 8×8 bSQ file (one bit of one band), its P-tree is as shown in Figure 3.
Fig. 3. P-tree for an 8×8 bSQ file
In this example, the root count is the number of 1's in the entire image. The numbers at the next level are the 1-bit counts for the four major quadrants, in raster order. Since the first and last of these quadrants are composed entirely of 1-bits (called pure-1 quadrants) and 0-bits (called pure-0 quadrants), respectively, sub-trees are not needed and these branches terminate. This pattern is continued recursively, using the Peano or Z-ordering (recursive raster ordering) of the four sub-quadrants at each new level. Eventually, every branch terminates, since at the "leaf" level all quadrants are pure. If we were to expand all sub-trees (including those for pure quadrants), then the leaf sequence would be the Peano ordering of the image. The Peano ordering of the original image is called the Peano Sequence; thus we use the name Peano Count Tree for the tree structure above. The fan-out of a P-tree need not be fixed at four; it can be any power of 4 (effectively skipping levels in the tree). Also, the fan-out at any one level need not coincide with the fan-out at another level; the fan-out can be chosen to maximize compression, for example. We use PTree-r-i-l to indicate the fan-out pattern, where r is the fan-out of the root node, i is the fan-out of all internal nodes between the root (level L) and the lowest levels, and l is the fan-out of all nodes at the level just above the leaves. We have implemented three such fan-out patterns.
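A minimal sketch of basic P-tree construction; the dict encoding is illustrative only, not the paper's storage format.

```python
import numpy as np

def build_pct(bits):
    """Build a Peano Count Tree for a 2^k x 2^k array of 0/1 `bits`: each node
    stores its quadrant's 1-bit count; pure quadrants have no children."""
    a = np.asarray(bits)
    n = a.shape[0]
    count = int(a.sum())
    if count == 0 or count == n * n:      # pure-0 or pure-1: branch terminates
        return {"count": count}
    h = n // 2                            # recurse on the four sub-quadrants
    quads = [a[:h, :h], a[:h, h:], a[h:, :h], a[h:, h:]]   # Peano (Z) order
    return {"count": count, "children": [build_pct(q) for q in quads]}
```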
Fig. 4. PM-tree
Fig. 5. P1-tree and P0-tree
Definition 1. A basic P-tree Pi,j is a P-tree for the j-th bit of the i-th band. The complement of a basic P-tree Pi,j is denoted Pi,j′ (the complement operation is explained below). For each band (assuming 8-bit data values, though the model applies to data of any number of bits), there are eight basic P-trees, one for each bit position. We will call these P-trees the basic P-trees of the spatial data set. We will use the notation Pb,i to denote the basic P-tree for band b and bit position i. There are always 8n basic P-trees for n bands. P-trees have the following features:
• P-trees contain counts for every quadrant.
• The P-tree for a sub-quadrant is the sub-tree rooted at that sub-quadrant.
• A P-tree leaf sequence (depth-first) is a partial run-length compressed version of the original bit band.
• Basic P-trees can be combined to reproduce the original data (P-trees are lossless representations).
• P-trees can produce upper and lower bounds on quadrant counts.
• P-trees can be used to smooth data by bottom-up quadrant purification (bottom-up replacement of non-pure or mixed counts with their closest pure counts).
P-trees can be generated quite quickly and can be viewed as a "data mining ready" and lossless format for storing spatial or any relational data.
P-Tree Variations
A variation of the P-tree data structure, the Peano Mask Tree (PM-tree, or PMT), is a similar structure in which masks rather than counts are used. In a PM-tree, we use a 3-value logic to represent pure-1, pure-0 and mixed quadrants (1 denotes pure-1, 0 denotes pure-0 and m denotes mixed). The PM-tree for the previous example is given in Fig. 4. For a PMT, we only need to store the purity information at each level, so a PMT requires less storage compared to a PCT; the storage requirement per node is constant for a PMT. A PCT has the advantage of being able to provide the bit count without traversing the tree, which is an advantage in certain situations (described later in the paper). Both of these representations are lossless. Since a PM-tree is just an alternative implementation of a Peano Count tree, we will use the term "P-tree" to cover both Peano Count trees and Peano Mask trees.
Other useful variations include the P1-tree and the P0-tree. They are examples of a class of P-trees called Predicate Trees. Given any quadrant predicate (a condition that is either true or false with respect to each quadrant), we use a 1 bit to indicate true and a 0 bit to indicate false for each quadrant at each level. An example of the P1-tree (the predicate is pure-1) and the P0-tree is given in Figure 5. The predicate can also be not-pure-1 (the NP1-tree), not-pure-0 (the NP0-tree), etc.
The logical P-tree algebra includes COMPLEMENT, AND and OR. The complement of a basic P-tree can be constructed directly from the P-tree by simply complementing the counts at each level (subtracting each count from the pure-1 count at that level), as shown in the example of Figure 6. Note that the complement of a P-tree provides the 0-bit counts for each quadrant. P-tree AND and OR operations are also illustrated in Figure 6.
Fig. 6. P-tree algebra: COMPLEMENT, AND and OR
AND is the most important operation; the OR operation can be implemented in a very similar way. Below we discuss various options for implementing P-tree ANDing.
Level-Wise P-Tree ANDing
ANDing is a very important and frequently used operation for P-trees. There are several ways to perform P-tree ANDing. First, let us look at a simple way: we can perform the ANDing level by level, starting from the root level. Table 1 gives the rules for performing P-tree ANDing. Operand 1 and Operand 2 are two P-trees (or sub-trees) with roots X1 and X2, respectively. Using PM-trees, X1 and X2 could be any value among 1, 0 and m (the 3-value logic representing pure-1, pure-0 and mixed quadrants). For example, ANDing a pure-1 P-tree with any P-tree results in that second operand, while ANDing a pure-0 P-tree with any P-tree results in the pure-0 P-tree. It is possible that ANDing two m's results in a pure-0 quadrant, if their four sub-quadrant ANDs all result in pure-0 quadrants.

Table 1. P-tree AND rules
Operand 1              Operand 2              Result
1                      Subtree with root X2   Subtree with root X2
0                      Subtree with root X2   0
Subtree with root X1   1                      Subtree with root X1
Subtree with root X1   0                      0
m                      m                      0 if the four sub-quadrants result in 0; otherwise m
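The table translates almost directly into code. The sketch below assumes a PM-tree represented as either the string '1', the string '0', or a 4-tuple of child trees for a mixed node; this encoding is ours, chosen for brevity.

```python
# Sketch of level-wise ANDing on PM-trees, following the rules in the table.

def pm_and(t1, t2):
    if t1 == '0' or t2 == '0':
        return '0'                 # pure-0 absorbs everything
    if t1 == '1':
        return t2                  # pure-1 yields the other operand
    if t2 == '1':
        return t1
    kids = tuple(pm_and(a, b) for a, b in zip(t1, t2))   # m AND m
    return '0' if all(k == '0' for k in kids) else kids

t1 = ('1', '0', '1', ('1', '0', '0', '0'))
t2 = ('1', '1', '0', '1')
print(pm_and(t1, t2))   # ('1', '0', '0', ('1', '0', '0', '0'))
```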
P-Tree ANDing Using Pure-1 Paths
A more efficient way to do P-tree ANDing is to store only the basic P-trees and then generate the value and tuple P-tree root counts "on-the-fly" as needed. In the following algorithm, we will assume P-trees are coded in a compact, depth-first ordering of the paths to each pure-1 quadrant. We use the hierarchical quadrant id (Qid) scheme shown in the figure below to identify quadrants. At each level we append a sub-quadrant id number (0 means upper left, 1 upper right, 2 lower left, 3 lower right).
Fig.: Quadrant id (Qid).
For a spatial data set with 2^n rows and 2^n columns, there is a mapping from raster coordinates (x, y) to Peano coordinates (called quadrant ids, or Qids). If x and y are expressed as n-bit strings, x1x2...xn and y1y2...yn, then the mapping is (x, y) = (x1x2...xn, y1y2...yn) → (x1y1, x2y2, ..., xnyn). Thus, in an 8 by 8 image, the Qid of a pixel is obtained by interleaving the bits of its raster coordinates; the figure below shows the worked example (the specific pixel values are lost in the source).
In the example of the figure, each path is represented by the sequence of quadrants in Peano order, beginning just below the root. Since a quadrant will be pure-1 in the result only if it is pure-1 in both (all) operands, the AND can be done simply by scanning the operands and outputting matching pure-1 paths.
The AND operation is effectively the pixel-wise AND of bits from bSQ files or their complement files. However, since such files can contain hundreds of millions of bits, shortcut methods are needed. Implementations of these methods have been done which allow the performance of an n-way AND of Tiff-image P-trees of millions of pixels in a few milliseconds. We discuss such methods later in the paper. The process of converting data to P-trees, though a one-time process, can also be time consuming unless special methods are used. Our methods can convert even a large TM satellite image (tens of millions of pixels) to its basic P-trees in just a few seconds using a high performance PC. This is a one-time process.
Fig.: P-tree AND using pure-1 paths, with panels for P-tree-1, P-tree-2 and the AND result with its list of result paths (path listings not recoverable from the source).
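The Qid mapping and the pure-1-path AND just described can be sketched as follows. The representation (a sorted list of pure-1 quadrant paths, each a tuple of quadrant ids) is an assumption consistent with the depth-first coding described above, not the authors' exact file layout.

```python
# Sketch: raster (x, y) -> Qid by bit interleaving, and AND of P-trees
# stored as sorted lists of pure-1 quadrant paths.

def qid(x, y, n):
    """Interleave the n-bit strings of x and y into a quadrant-id path."""
    return tuple(((y >> (n - 1 - i)) & 1) * 2 + ((x >> (n - 1 - i)) & 1)
                 for i in range(n))

def and_pure1_paths(paths1, paths2):
    """A quadrant is pure-1 in the result iff it lies inside a pure-1
    quadrant of every operand, so output the deeper path whenever one
    operand's path is a prefix of the other's. Inputs must be sorted."""
    out, i, j = [], 0, 0
    while i < len(paths1) and j < len(paths2):
        p, q = paths1[i], paths2[j]
        if p == q[:len(p)]:      # p covers q: q is pure-1 in the result
            out.append(q); j += 1
        elif q == p[:len(q)]:    # q covers p
            out.append(p); i += 1
        elif p < q:
            i += 1
        else:
            j += 1
    return out

p1 = [(0,), (1, 1), (3, 0)]
p2 = [(0, 2), (1,), (2,), (3, 0)]
print(and_pure1_paths(p1, p2))   # [(0, 2), (1, 1), (3, 0)]
```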
Value, Tuple, Interval and Cube P-Trees
By performing the AND operation on the appropriate subset of the basic P-trees and their complements, we can construct P-trees for values with more than one bit.

Definition 2. A value P-tree Pi(v) is the P-tree of value v at band i. Value v can be expressed in 1-bit up to 8-bit precision. Value P-trees can be constructed by ANDing basic P-trees or their complements. For example, the value P-tree Pi(110) gives the count of pixels with band-i bit 1 equal to 1, bit 2 equal to 1 and bit 3 equal to 0, i.e., with band-i value in the range [192, 224): Pi(110) = Pi,1 AND Pi,2 AND Pi,3′.

P-trees can also represent data for any value combination, even entire tuples.

Definition 3. A tuple P-tree P(v1, v2, ..., vn) is the P-tree of value vi at band i, for all i from 1 to n: P(v1, v2, ..., vn) = P1(v1) AND P2(v2) AND ... AND Pn(vn). If value vj is not given, it means it could be any value in band j. For example, a tuple P-tree with values given only for some bands stands for those values in those bands and any value in any other band.

Definition 4. An interval P-tree Pi(v1, v2) is the P-tree for values in the interval [v1, v2] of band i. Thus we have Pi(v1, v2) = OR Pi(v), for all v in [v1, v2].

Definition 5. A cube P-tree P([v11, v12], [v21, v22], ..., [vN1, vN2]) is the P-tree for values in the interval [vi1, vi2] of band i, for all i from 1 to N.

Any value P-tree and tuple P-tree can be constructed by performing AND on basic P-trees and their complements. Interval and cube P-trees can be constructed by combining AND and OR operations of basic P-trees (see the figure below). All the P-tree operations, including the basic operations AND, OR and COMPLEMENT and other operations such as XOR, can be performed on any kinds of P-trees defined above.
Fig.: Basic, Value, Tuple, Interval and Cube P-trees (basic P-trees are combined by AND/OR into value P-trees; value P-trees by AND into tuple P-trees and by OR into interval P-trees; interval P-trees by AND and OR into cube P-trees).
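A minimal sketch of Definitions 2 and 3, assuming `p_and` and `complement` implement the algebra described above (they are assumed helpers, not a published API):

```python
# Sketch: constructing value and tuple P-trees from basic P-trees.
# band_ptrees[i][j] is the basic P-tree for bit j of band i.

def value_ptree(band_ptrees, i, v, nbits=8):
    """P_i(v): AND P_i,j when bit j of v is 1, else its complement P_i,j'."""
    result = None
    for j in range(nbits):
        bit = (v >> (nbits - 1 - j)) & 1
        p = band_ptrees[i][j] if bit else complement(band_ptrees[i][j])
        result = p if result is None else p_and(result, p)
    return result

def tuple_ptree(band_ptrees, values, nbits=8):
    """P(v1,...,vn): AND the value P-trees of all bands where a value is
    given; None means 'any value in this band'."""
    parts = [value_ptree(band_ptrees, i, v, nbits)
             for i, v in enumerate(values) if v is not None]
    result = parts[0]
    for p in parts[1:]:
        result = p_and(result, p)
    return result

# e.g. the 3-bit prefix 110 of Definition 2 is value_ptree(bp, i, 0b110,
# nbits=3), counting pixels with band-i values in [192, 224) for 8-bit data.
```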
Properties of P-Trees
In this section we will discuss the good properties of P-trees. We will use the following notations:
p(x, y) is the pixel with coordinates (x, y). V(x, y, i) is the value for band i of the pixel p(x, y). b(x, y, i, j) is the j-th bit of V(x, y, i) (bits are numbered from left to right; b(x, y, i, 1) is the leftmost bit). Indices: x is the column (x-coordinate), y the row (y-coordinate), i the band, j the bit. For any P-trees P, P1 and P2: P1 ∧ P2 denotes P1 AND P2, P1 | P2 denotes P1 OR P2, P1 ⊕ P2 denotes P1 XOR P2, and P′ denotes the COMPLEMENT of P. Pi,j is the basic P-tree for bit j of band i; Pi(v) is the value P-tree for value v of band i; Pi(v1, v2) is the interval P-tree for the interval [v1, v2] of band i. rc(P) is the root count of P-tree P. P1 means the pure-1 tree and P0 means the pure-0 tree. N is the number of pixels in the image or space under consideration.

Lemma 1. For two P-trees P1 and P2, rc(P1 | P2) = 0 ⇒ rc(P1) = 0 and rc(P2) = 0. More strictly, rc(P1 | P2) = 0 if and only if rc(P1) = 0 and rc(P2) = 0.
Proof (by contradiction). Let rc(P1) ≠ 0. Then for some pixels there are 1s in P1, and for those pixels there must be 1s in P1 | P2, i.e., rc(P1 | P2) ≠ 0. But we assumed rc(P1 | P2) = 0; therefore rc(P1) = 0. Similarly we can prove that rc(P2) = 0. The proof of the converse, rc(P1) = 0 and rc(P2) = 0 ⇒ rc(P1 | P2) = 0, is trivial.

The following properties follow immediately from the definitions (Lemma 2; proofs are immediate):
a) rc(P1) = 0 or rc(P2) = 0 ⇒ rc(P1 ∧ P2) = 0
b) rc(P1) = 0 and rc(P2) = 0 ⇒ rc(P1 | P2) = 0
c) rc(P0) = 0
d) rc(P1) = N
e) P ∧ P1 = P
f) P ∧ P0 = P0
g) P | P1 = P1
h) P | P0 = P
i) P ∧ P = P
j) P | P = P
Lemma 3. v1 ≠ v2 ⇒ rc{Pi(v1) ∧ Pi(v2)} = 0, for any band i.
Proof. Pi(v1) represents all pixels having value v1 for band i. If v1 ≠ v2, no pixel can have both v1 and v2 for the same band. Therefore, if there is a 1 in Pi(v1) for any pixel, there must be a 0 in Pi(v2) for that pixel, and vice versa. Hence rc{Pi(v1) ∧ Pi(v2)} = 0.

Lemma 4. rc(P1 | P2) = rc(P1) + rc(P2) − rc(P1 ∧ P2).
Proof. Let the number of pixels for which there are 1s in P1 and 0s in P2 be n1, the number of pixels for which there are 0s in P1 and 1s in P2 be n2, and the number of pixels for which there are 1s in both P1 and P2 be n3. Now, rc(P1) = n1 + n3, rc(P2) = n2 + n3, rc(P1 ∧ P2) = n3, and rc(P1 | P2) = n1 + n2 + n3 = (n1 + n3) + (n2 + n3) − n3 = rc(P1) + rc(P2) − rc(P1 ∧ P2).

Theorem 1. rc{Pi(v1) | Pi(v2)} = rc{Pi(v1)} + rc{Pi(v2)}, where v1 ≠ v2.
Proof. rc{Pi(v1) | Pi(v2)} = rc{Pi(v1)} + rc{Pi(v2)} − rc{Pi(v1) ∧ Pi(v2)} (Lemma 4). If v1 ≠ v2, rc{Pi(v1) ∧ Pi(v2)} = 0 (Lemma 3). Hence rc{Pi(v1) | Pi(v2)} = rc{Pi(v1)} + rc{Pi(v2)}.
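Lemma 4 is the inclusion-exclusion identity on root counts; since a root count is just the number of 1s in the underlying bit band, it can be checked directly on flat bit vectors, as this small sketch does:

```python
# Sketch: checking the root-count identity of Lemma 4 on flat bit vectors,
# where rc is simply the number of 1s.

def rc(bits):
    return sum(bits)

def check_or_identity(b1, b2):
    b_and = [x & y for x, y in zip(b1, b2)]
    b_or = [x | y for x, y in zip(b1, b2)]
    assert rc(b_or) == rc(b1) + rc(b2) - rc(b_and)

check_or_identity([1, 0, 1, 1, 0, 0, 1, 0],
                  [0, 0, 1, 1, 1, 0, 0, 0])
```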
Data Mining Techniques Using P-Trees
The P-tree technology has been employed with a large number of data mining techniques; these include the following: […]. It is interesting to note that a P-tree based solution topped the evaluation for one of the two categories at the KDD-Cup data mining competition for 2002 […].
Bayesian Classifiers
A Bayesian classifier is a statistical classifier, which uses Bayes' theorem to predict class membership as a conditional probability that a given data sample falls into a particular class. The complexity of computing the conditional probability values can become prohibitive for most multimedia applications with a large attribute space. Bayesian belief networks relax many constraints and use information about the domain to build a conditional probability table. Naïve Bayesian classification is a lazy classifier: computational cost is reduced through the naïve assumption of class conditional independence, used to calculate the conditional probabilities when required. Bayesian belief networks require building time and domain knowledge, whereas the naïve approach loses accuracy if the assumption is not valid. The P-tree data structure allows us to compute the Bayesian probability values efficiently, without the naïve assumption, by building P-trees for the training data. Calculation of probability values requires a set of P-tree AND operations that will yield the respective counts for a given pattern. Bayesian classification with P-trees has been used successfully on remotely sensed image data to predict yield in precision agriculture […]. In […], to avoid situations where the required pattern does not exist in the training data, the naïve assumption is partially employed with a band-based approach. In […], to completely eliminate the naïve assumption and so increase accuracy, a bit-based Bayesian classification is used instead of the band-based approach. In both approaches, information gain is used as a guide to determine the course of progress. It is interesting to note that information gain can itself be calculated with a set of P-tree AND operations […].
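A minimal sketch of the core idea: the conditional probabilities reduce to ratios of root counts. Here `rc_tuple(pattern, cls)` stands for the root count of the tuple P-tree of the attribute pattern ANDed with the class P-tree; it is an assumed helper, not a published API.

```python
# Sketch: class posteriors from P-tree root counts, without the naive
# assumption.

def posterior(rc_tuple, pattern, class_counts, n_train):
    """P(c | pattern) ~ P(pattern | c) * P(c), all from root counts."""
    scores = {}
    for c, class_count in class_counts.items():
        joint = rc_tuple(pattern, c)      # count of (pattern AND class c)
        prior = class_count / n_train
        likelihood = joint / class_count if class_count else 0.0
        scores[c] = likelihood * prior
    total = sum(scores.values()) or 1.0
    return {c: s / total for c, s in scores.items()}
```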
Association Rule Mining (ARM)
Association rule mining, originally proposed for market basket data, has potential applications in many areas. Extracting interesting patterns and rules from datasets composed of images and associated data can be of importance. However, in most cases the data sizes are too large to be mined in a reasonable amount of time using existing algorithms. Experimental results showed that using P-tree techniques in an efficient association rule mining algorithm, P-ARM, gives significant improvement compared with the FP-growth and Apriori algorithms […].
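The reason P-trees help here is that itemset support is just a root count. A sketch, assuming `p_and` and `rc` implement the algebra described earlier:

```python
# Sketch: P-ARM-style support counting. An itemset is a set of band/value
# conditions, each represented by a value or interval P-tree; its support
# is the root count of their AND over the total number of transactions.

def support(ptrees, n_pixels):
    result = ptrees[0]
    for p in ptrees[1:]:
        result = p_and(result, p)
    return rc(result) / n_pixels

def is_frequent(ptrees, n_pixels, minsup):
    return support(ptrees, n_pixels) >= minsup
```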
KNN and Closed-KNN Classifiers
For spatial data streams, most classifiers typically have a very high cost associated with building a new classifier each time new data arrives. On the other hand, k-nearest neighbor (KNN) classification is a very good choice, since no residual classifier needs to be built ahead of time. KNN is extremely simple to implement and lends itself to a wide variety of variations. The construction of the neighborhood is the highest cost. Instead of examining individual data points to find the nearest neighbors, by using P-tree technology we rely on the expansion of the neighborhood and find a closed-KNN set, which does not have to be reconstructed. Experiments show closed-KNN yields higher classification accuracy and significantly higher speed [18].
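The expansion idea can be sketched in one band: widen an interval around the target value until the interval P-tree's root count reaches k. `interval_rc(lo, hi)` is an assumed helper returning rc(Pi(lo, hi)); the published method expands neighborhoods over all bands, which this one-band sketch omits.

```python
# Sketch of closed-KNN neighborhood expansion in a single band.

def closed_knn_interval(interval_rc, v, k, vmax=255):
    r = 0
    while interval_rc(max(0, v - r), min(vmax, v + r)) < k and r < vmax:
        r += 1   # expand the neighborhood one step on each side
    return max(0, v - r), min(vmax, v + r)   # the closed-KNN neighborhood
```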
P-Tree Data Mining Performance
Based on the experimental work discussed above, incorporation of P-tree technology into data mining applications has consistently improved performance. The data-mining-ready structure of the P-tree has demonstrated its potential for improving performance on multimedia data. Many types of data show continuity in dimensions that are not themselves used as data mining attributes. Spatial data that is mined independently of location will consist of large areas of similar attribute values. Data streams and many types of multimedia data, such as videos, show a similar continuity in their temporal dimension. The P-tree data structure uses these continuities to compress data efficiently while allowing it to be used in computations. Individual bits of the mining-relevant attributes are represented in separate P-trees. Counts of attribute values or attribute ranges can efficiently be calculated by an AND operation on all relevant P-trees. These AND operations can be efficiently implemented using a regular structure that compresses entire quadrants while making use of pre-computed counts that are kept at intermediate levels of the tree structure.
Implementation Issues and Performance
P-tree performance is discussed with respect to storage and execution time for the AND operation. The amount of internal memory required for each P-tree structure is related to the respective size of the P-tree file stored in secondary storage. The creation and storing of P-trees is a one-time process. To make a generalized P-tree structure, the following file structure is proposed (see the table below) for storing basic P-trees.
Table: P-tree file structure
Field   Format code   Fan-out   # of levels   Root count   Length of the body   Body of the P-tree
Size    1 byte        … bytes   1 byte        … bytes      … bytes              variable
Format code: the format code identifies the format of the P-tree (PCT, PMT, etc.). Fan-out: this field contains the fan-out information of the P-tree. Fan-out information is required to traverse the P-tree in performing various P-tree operations. The fan-out is decided at creation time; in the case of different fan-outs at different levels, it is used as an identifier per level. # of levels: the number of levels in the P-tree. Root count: the root count, i.e. the number of 1s in the P-tree. Though we can calculate the root count of a P-tree on the fly from the P-tree data, these few bytes of space can save computation time when we only need the root count of a P-tree, to take advantage of the properties described in the previous section. The root count of a P-tree can be computed at the time of construction with very little extra cost. Length of the body: the size of the P-tree file in bytes, excluding the header. The size of the P-tree varies due to the level of compression in the data. To allocate memory dynamically for the P-trees, it is better to know the required memory size before reading the data from disk. This will also be an indicator of the data distribution, which can be used to estimate AND time in advance for the given search space. Body of the P-tree: contains the stream of bytes representing the P-tree.
We only store the basic P-trees for each data set. All other P-trees (value P-trees and tuple P-trees) are created on the fly when required. This results in a considerable saving of space. The figures below give the storage requirements for various formats of data (TIFF, SPOT and a TM scene) using various formats of P-trees (PCT or PMT) with different fan-out patterns. A fan-out pattern f1-f2-f3 indicates a fan-out of f1 for the root level, f2 for the leaf level and f3 for all the other levels. The variation in size is due to the different levels of compression for each bit in the image. It is important to note that the P-tree is a lossless representation of the original data. Different representations have an effect on the computation of the P-tree operators; the performance of the processor against memory access should be taken into consideration when selecting a representation.
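A minimal sketch of the proposed header using Python's struct module. The exact field widths are partly lost in the source, so the sizes below (1, 2, 1, 4, 4 bytes) are an assumption consistent with the table.

```python
# Sketch: packing/unpacking the proposed P-tree file header.
import struct

HEADER_FMT = ">BHBII"  # format code, fan-out, #levels, root count, body length

def pack_ptree_file(fmt_code, fanout, levels, root_count, body):
    return struct.pack(HEADER_FMT, fmt_code, fanout, levels,
                       root_count, len(body)) + body

def unpack_ptree_file(blob):
    size = struct.calcsize(HEADER_FMT)
    fmt_code, fanout, levels, root_count, body_len = struct.unpack(
        HEADER_FMT, blob[:size])
    return fmt_code, fanout, levels, root_count, blob[size:size + body_len]
```

Storing the root count in the header is what lets an application answer count-only queries (Lemmas 1–4 above) without touching the body at all.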
Fig.: Comparison of file size for different bits of a band of a TIFF image (file size in KB against bit number, for PC-tree and PM-tree).
Fig.: Comparison of file size for different bits of a band of a SPOT image (file size in KB against bit number, for PC-tree and PM-tree).
Fig.: Comparison of file size for different bits of a band of a TM image (file size in KB against bit number, for PC-tree and PM-tree).
The efficiency of data mining with the P-tree data structure relies on the time required for the basic P-tree operators. The AND operation on basic P-trees can be done in milliseconds for an image file of millions of pixels on a Beowulf cluster of dual Pentium processors (the exact clock rate, RAM and pixel figures are lost in the source). Experimental results also show that the AND operation is scalable with respect to data size and the number of attribute bits. The figure below shows the time required to perform the P-tree AND operation. In the left panel, the AND operation is done on eight different P-trees to produce counts for all possible values in each band, and the average is used. In the right panel, an image file of several million pixels is used to compute the AND time.
Fig.: (left) Time (ms) to perform the AND operation for different data sizes (million pixels); (right) time (ms) to perform the AND operation for different numbers of attribute bits.
The P-tree data structure provides an opportunity to use high performance parallel and distributed computing, independent of the data mining technique. A properly designed P-tree API for data capturing and P-tree manipulation provides the capability to experiment with many different data mining techniques on large data sets without having to be concerned about distributed computing. For distributed computing with P-trees, the most common approach is to use a quadrant-based partition, i.e. a horizontal partition. In this approach, the P-tree operations on each partition can be accumulated to produce the global count. A vertical partition can also be used, with a slight increase in communication cost; in this approach, P-tree operations on partially created value P-trees from each partition will produce the global count. Both these approaches can be used to mine distributed multimedia data by converting the data into P-trees and storing it at the data source, if required. The particular data mining algorithm will be able to pull the required counts through a high-speed dedicated network or the Internet. If latency (delay) is high, this approach may restrict the type of algorithms to those suiting batched count requests from the P-trees.
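For the horizontal (quadrant-based) partition, global counts are literally sums of per-site counts, as this sketch shows; the per-site callable is an assumed interface standing in for whatever remote-count mechanism is used.

```python
# Sketch: accumulating root counts over horizontally partitioned data.
# Each site evaluates the same AND expression over its own quadrant and
# returns the local root count; the global count is their plain sum.

def global_root_count(sites, and_expression):
    return sum(site(and_expression) for site in sites)
```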
Related Work
Concepts related to the P-tree data structure include the quadtree [1], [2], [3] and its variants (e.g. point quadtrees and region quadtrees), and HH-codes [4].
Quadtrees decompose the universe by means of iso-oriented hyperplanes. These partitions do not have to be of equal size, although that is often the case. The decomposition into subspaces is usually continued until the number of objects in each partition is below a given threshold. Quadtrees have many variants, such as point quadtrees and region quadtrees.
HH-codes, or Helical Hyperspatial Codes, are binary representations of the Riemannian diagonal. The binary division of the diagonal forms the node point from which eight sub-cubes are formed. Each sub-cube has its own diagonal, generating new sub-cubes. These cubes are formed by interlacing one-dimensional values encoded as HH bit codes. When sorted, they cluster in groups along the diagonal. The clusters are ordered in a helical pattern, thus the name Helical Hyperspatial.
The similarity among P-trees, quadtrees and HH-codes is that they are all quadrant based. The difference is that P-trees focus on the count. P-trees are not indexes; rather, they are representations of the datasets themselves. P-trees are particularly useful for data mining because they contain the aggregate information needed for data mining.
Conclusion
This paper reviews some of the issues in multimedia data mining and concludes that one of the major issues is the sheer size of the resulting feature spaces extracted from raw data. Deciding how to efficiently store and process this high-volume, high-dimensional data will play a major role in the success of a multimedia data mining project. This paper proposes the use of a compressed, data-mining-ready data
structure to solve the problem. To that end, the Peano Count Tree (or P-tree) and its algebra and properties were presented. The P-tree structure can be viewed as a data-mining-ready structure that facilitates efficient data mining […]. Previous work has demonstrated that by using the P-tree technology, data mining techniques can be performed efficiently while operating directly from a compressed data store.
References
1. V. Gaede, O. Gunther: Multidimensional Access Methods. Computing Surveys.
2. H. Samet: Design and Analysis of Spatial Data Structures. Addison-Wesley.
3. R. A. Finkel, J. L. Bentley: Quad trees: a data structure for retrieval on composite keys. Acta Informatica.
4. HH-codes. Available at http://www.statkart.no/ (full URL garbled in the source).
5. W. Perrizo, Qin Ding, Qiang Ding, A. Roy: Deriving High Confidence Rules from Spatial Data using Peano Count Trees. Springer-Verlag, LNCS 2118, July 2001.
6. Jochen Doerre, Peter Gerstl, Roland Seiffert: Text Mining: Finding Nuggets in Mountains of Textual Data. KDD, San Diego, CA, USA.
7. D. Sullivan: Need for Text Mining in Business Intelligence. DM Review, Dec.
8. O. R. Zaiane, J. Han, Z. Li, S. Chee, J. Chiang: MultiMediaMiner: Prototype for MultiMedia Data Mining. ACM Conference on Management of Data, June 1998.
9. A. Denton, Qiang Ding, W. Perrizo, Qin Ding: Efficient Hierarchical Clustering Using P-trees. Intl. Conference on Computer Applications in Industry and Engineering, San Diego, Nov. 2002.
10. U. Fayyad, G. Piatesky-Shapiro, P. Smyth: The KDD process for extracting useful knowledge from volumes of data. Communications of the ACM, Nov. 1996.
11. W. Baker, A. Evans, L. Jordan, S. Pethe: User Verification System. Workshop on Programming Languages and Systems, Pace University, April.
12. Chabane Djeraba, Henri Briand: Temporal and Interactive Relations in a Multimedia Database System. ECMAST.
13. Simeon J. Simoff, Osmar R. Zaïane: Multimedia data mining. KDD.
14. Osmar R. Zaïane, Jiawei Han, Ze-Nian Li, Jean Hou: Mining Multimedia Data. CASCON: Meeting of Minds.
15. Qiang Ding, Qin Ding, W. Perrizo: Decision Tree Classification of Spatial Data Streams Using P-trees. ACM Symposium on Applied Computing, Madrid, March 2002.
16. Qin Ding, Qiang Ding, W. Perrizo: Association Rule Mining on RSI Using P-trees. PAKDD, Springer-Verlag, LNAI 2336, May 2002.
17. Mohamed Hossain: Bayesian Classification using P-Tree. Master of Science Thesis, North Dakota State University, December.
18. M. Khan, Q. Ding, W. Perrizo: K-nearest Neighbor Classification on Spatial Data Streams Using P-trees. PAKDD, Springer-Verlag, LNAI 2336, May 2002.
19. W. Valdivia-Granda, W. Perrizo, F. Larson, E. Deckard: P-trees and ARM for gene expression profiling of DNA microarrays. Intl. Conference on Bioinformatics.
20. A. S. Perera, M. H. Serazi, W. Perrizo: Performance Improvement for Bayesian Classification with P-Trees. Computer Applications in Industry and Engineering, San Diego, Nov. 2002.
21. A. Perera, A. Denton, P. Kotala, W. Jockheck, W. Valdivia-Granda, W. Perrizo: P-tree Classification of Yeast Gene Deletion Data. SIGKDD Explorations, Dec. 2002.

Multimedia Mining of Collaborative Virtual Workspaces

S.J. Simoff and R.P. Biuk-Aghai

Each of these approaches is based on different philosophical assumptions and manages the relationships between activities in a different way; however, the overlap and convergence of research and development in the area has resulted in what is known as collaborative virtual workspaces (CVW). The requirements for a supporting technical infrastructure addressing cross-discipline concerns vary across different domains of application; e.g., even within the paradigm of virtual design studios […], a CVW in architectural design will differ from a CVW for a software engineering design; hence there are numerous approaches for
designing CVWs. These approaches use different ways of formalising the design requirements towards the workspace. Common design strategies are based on the utilisation of experts' knowledge in a top-down analysis cycle. The underlying technology spans a broad range of distributed systems, from desktop groupware systems (for examples see Fig. 1) to text-based and 3D virtual worlds (for examples see Fig. 2; for an excellent taxonomy of the latter see […]). Despite the variety of the requirements addressing cross-discipline concerns and the differences in the final outcome, the designs of collaborative virtual workspaces that support knowledge-intensive work processes have several key concepts in common that provide background to the integration of the data mining technologies:
• embedding humans in the CVW, in other words, representing people as entities which interact and generate data about people's behaviour in the CVW. These representations span from the so-called "characters" in workspaces using text-based virtual worlds to the "avatars" in the 3D virtual worlds […];
• ontology of the CVW, in other words, providing some way of defining the topology and semantics of the workspace, separating and handling different information within the units of this structure, and providing a reference system for orientation and navigation. These structures span from variations of the "room" metaphor, both in desktop-style places (e.g. Fig. 1a) and text-based virtual worlds (e.g. Fig. 2a), to the "squares of land" in 3D virtual worlds (e.g. Fig. 2b);
• a set of feasible activities that can be performed in the CVW. This set defines the functionality of the CVW design: to what extent the environment under consideration can be used for conducting collaborative projects in a particular domain.
The implementation of each of these concepts allows control of the data generated by the CVW. Being a result of the activities performed in the CVW, the data can contain significant information about the actual behaviour, work style, and information organization and processing by the people who collaborate via the CVW. Further, the three concepts are discussed in this context.
a. Earlier approaches (TeamWave1)
b. Recent approaches (Groove2)
Fig. 1. Examples of collaborative virtual workspaces, based on desktop-style groupware
a. Text-based virtual world with Web frontend
b. 3D virtual world
Fig. 2. Examples of collaborative virtual workspaces, based on virtual worlds technology
The preliminary information is not always sufficient for establishing successful work. Mining behavioural data is one way to extract information about the functioning of groups of individuals and to discover patterns of collaboration based on project communication between them. The framework proposed in this chapter allows incorporating and reusing extracted knowledge when configuring groups in new projects.
1 http://www.teamwave.com
2 http://www.groove.net
1.2 Ontology of the CVW

The ways of structuring the information and activity in a virtual workspace depend on a number of factors, including the ontology (what kind of 'place' the underlying environment is), the purpose of the environment, the embedded functionality, the preferable communication and collaboration mode [2], and the underlying technologies and their integration [4]. For example, the Virtual Campus3 (Faculty of Architecture, University of Sydney) shown in Fig. 2a is organised according to the ontology of a university campus. The workspace is structured in terms of "rooms", "levels" and "buildings", which follows the ontology of building design. The reference system and the topology of the workspace are based on the purpose of the "buildings" and the "rooms" in them. This ontology defines the partition of the workspace [5].
The structure of a virtual workspace usually evolves according to the needs of a project. One way to approach this problem is to create "design prototypes" according to the ontology of the environment. In our example, a prototype of a faculty building can be a "building" with four "levels": "Classrooms" with rooms for each subject, "Offices" with rooms for staff members, "Library" with rooms that keep information from past subjects, and a "Common level" which can accommodate general-purpose meeting rooms, practice rooms and other functional areas in the workspace. The ontology in this case offers a consistent conceptual shell able to accommodate structural extensions within the constraints of the ontology. Fig. 3 illustrates the topology of a design environment predefined by the model (ontology) of the design process. Each "doorway" leads to a room (workspace) which contains the relevant information for each design stage. The model used is derived from the existing models of the design process in the research literature (e.g. […]). Further, such a schema can be used as a prototypical workspace. However, such a top-down approach does not capture the knowledge from the actual use of the virtual environment: which parts of it were used more intensively, what the "neighbouring" relations are (e.g. co-visited rooms) and other relations (document contents during different phases, etc.).
The ontology of the CVW also provides the semantics for the analysis and mining of collected data. For example, the topology of the Virtual Campus provides a way of partitioning and integrating collected data and background knowledge about the relations between the partitions. Such knowledge complements the data collected from the actual utilisation and evolution of the CVW structure: which parts of the CVW configuration were used more intensively, what the "neighbouring" relations are (e.g. co-visited rooms, shared documents across different areas of the workspace belonging to the same project set of activities, relations between non-neighbouring areas). Mining such data can reveal such relations. Discovered knowledge can be reflected in variations of the workspace structures that constitute the "design prototypes", resulting in a library of such prototypes. When it comes to addressing the requirements of a new project workspace, these prototypes can provide reusable building blocks.
3 http://www.arch.usyd.edu.au:7778
Fig. 3. Pre-defined topology of a collaborative virtual environment for design projects
1.3 A Set of Feasible Activities

The ontology of the virtual workspace provides substantial a priori knowledge not only about the navigation, but also about the set of feasible activities in such a workspace. Usually the initial set of activities is derived from the design requirements. The design of the virtual workspace is focused on the "arrangement" of the workplace in a way that will support computer-mediated collaboration between geographically dispersed participants, whether this be an educational, research or business collaboration. Such requirements are usually expressed in terms of activities. The same set of activities is transferred into sequences and combinations of actions, and information structures that may vary across different underlying technologies. For example, a virtual workspace for participatory design of data mining bots for electronic markets will include the same activities (e.g. specifying bot requirements, development and selection of bot algorithms, updating bot functionality, evaluation of the bots); however, the implementation of the actions that support these activities (e.g. uploading documents, navigating through related information in the workspace, structuring communication transcripts between participants) can be different (e.g. the implementation of the project virtual workspace in a text-based virtual world, like the Virtual Campus, will differ from the project virtual workspace implemented in a light
groupware environment like Groove or LiveNet4). The set of actions can be viewed as the log generated at the application level.
The ontology of the workspace, combined with the underlying models of the technology supporting the virtual workspace, provides the background knowledge for the analysis of computer-mediated collaborative activities and the discovery and interpretation of patterns of collaborative behaviour. The overall activity structure, and hence the structure of the virtual workspace, usually evolves significantly during the project run to accommodate steps and processes unexpected during the initial design. In the long term, mining such data provides a potential for designing pro-active prototypes supporting different types of projects. Discovered action sets and workspace structures form pro-active design prototypes, resulting in a library of such prototypes and their reuse according to the requirements of a new project. Below we present a framework for integrating multimedia data mining in the design of collaborative virtual workspaces in a way that facilitates not only the data collection and analysis, but also the application and integration of discovered knowledge. We illustrate some aspects of the application of the framework for monitoring and extracting knowledge from collaborative activities and incorporating that knowledge back into the collaborative environment.
2 The "Space-Data-Memory" Framework for Knowledge Discovery and Deployment in Collaborative Virtual Environments

Fig. 4 presents the general framework for knowledge discovery, transfer and utilisation in virtual environments, which embeds the knowledge discovery process in the cycle of design and utilisation of the collaborative virtual workspace for supporting knowledge-intensive activities. Its primary goals are:
• To influence the design of CVWs so as to provide the data necessary for mining and analysis of data about project activities.
• To utilise extracted knowledge and transfer it back into the design and use of CVWs.
• To facilitate the research in computer-mediated collaboration and computer support of collaborative knowledge-intensive processes.
The framework includes four major groups of inter-woven components:
• virtual workspace, which may consist of one or more integrated virtual environments;
• collaboration data, which includes the integrated multimedia data generated at the virtual workspace during the development of a project;
• knowledge discovery, which incorporates not only the data mining stage, but also the knowledge representation;
• organisational memory, which is the 'feedback' component in the framework.
4 http://livenet.it.uts.edu.au
                     Organisational memory           Virtual workspace       Collaboration data
Conceptual level     Ontology and terminology        Domain understanding    Data understanding
Structural level     Topologies and configurations   Workspace design        Data modeling
Collaboration level  Collaboration understanding     Workspace utilisation   Data collection
Knowledge discovery: Knowledge representation ← Pattern discovery ← Data mining
Fig. 4. The “Space-Data-Memory” framework for knowledge discovery, transfer and utilisation in virtual environments for supporting knowledge-intensive activities
The framework views the arrangement of the collaboration data set and the design of the collaborative virtual workspace as complementary and parallel activities. Such amalgamation offers better control over the data collection and, to some extent, can reduce and even eliminate the data preprocessing stage in the knowledge discovery process. Knowledge obtained from collaboration data extracted from the virtual workspace is a further contributor to the design of the CVW. A number of related research efforts are underway in the direction of controlled data collection, carried out mainly in the field of e-commerce and Web data mining [7]. The three components appearing in the upper part of Fig. 4 are presented at different levels of abstraction, namely the conceptual, structural and collaboration levels. Below, we discuss the components of the framework in more detail.

2.1 Virtual Workspace

CVWs are the support systems within which collaboration is carried out. CVWs are becoming increasingly a part of professional practice. Such environments aim to support certain work practices, hence their design and configuration are domain-specific. For each domain, an understanding of the domain-dependent requirements for the CVW has to be obtained. On the conceptual level this activity identifies the concepts to be supported by the underlying environment: the structuring metaphor employed, navigation facilities, representation of people and their abilities, artefacts and tools provided in the environment, etc. On the structural level, this initial step is followed by the actual design of the CVW, when the relationship between the identified concepts is established and their detail is elaborated. Once designed (and implemented), the CVW is utilised by its users on the collaboration level.
2.2 Collaboration Data

The activities related to CVWs are paralleled by those related to collaboration data. Within the framework presented in Fig. 4, we use the label 'collaboration data' to emphasise that it incorporates data about the activities (respectively, actions) performed at the CVW, and the documents and other artefacts (like presentations and communication transcripts) that have been generated as a result of collaborative activities. These data facilitate knowledge discovery within the domain of collaboration, regardless of whether they are of direct use within the CVW. Traditionally, technologies that support computer-mediated collaboration [1] did not provide any particular support for data collection aimed at knowledge discovery. Data was seen as an internal aspect of the system, and only the data required for system diagnostics was recorded and maintained. The presented framework emphasises the need for data understanding, modelling and collection that can enable knowledge discovery in the collaboration domain, and therefore within this framework collaboration data is treated separately from the environment that supports the virtual workspace.
On the conceptual level, domain understanding within the virtual workspace sphere and data understanding within the sphere of collaboration data are mutually complementary: once domain understanding identifies a concept to be supported, data understanding identifies the necessary data elements to be recorded. Such data elements include fields of log files, the different multimedia documents and file formats involved in collaboration, the transcripts from synchronous communication, the thread structure and content of asynchronous communication (e.g. discussion (bulletin) board transcripts), audio/video communications, and the integrating data descriptions, which define the relations between the different types of media data. On the structural level, during the design of the workspace, data modelling identifies details of and relationships among the collaboration concepts and data. Finally, on the collaboration level, the CVW is utilised in an actual project, which generates the collaboration data. These data are collected for subsequent data mining. At this level, the data is modelled for data warehousing (for more details about data warehousing technologies see [8,9]).

2.3 Knowledge Discovery

The knowledge discovery in this framework differs slightly from what can be called "the classical schema" [10]—the selection and data pre-processing stages are implicitly embedded in the activities related to collaboration data. Therefore, collected data is expected to be ready for the application of data mining methods and techniques. As a further difference from the classical knowledge discovery schema, a step of knowledge representation is explicitly included at the end. Its purpose is to map discovered knowledge back into the CVW's representation.
Knowledge discovery in this framework aims to produce a better understanding of computer-mediated collaboration, and to enable the usage of discovered knowledge to improve structural features of the workspace configuration and media content when new projects are conducted in the environment. For example, a particular structure of a workspace implies certain navigation behaviour. Through the analysis of the structure
of the virtual workspace and behavioural sequences, one can collect templates of structures of workspaces, implemented using particular underlying environments and technologies, and reuse those templates in designing virtual workspaces. Collecting data about actual navigation within the environment can provide a source for discovering traversal patterns, which can provide indicators for improving the topology (structuring) of the environment during the project run. Other possibilities for improvement of the virtual workspace exist according to particular collaboration and business process needs. This is something difficult to know ahead of time. The development of environments supporting virtual workspaces follows emergent and adaptive strategies, rather than predefined topologies. In both cases, some necessary indicators for improvement of the structure are required. Both the collaboration data and knowledge discovery activities utilise the developments in the CRISP-DM methodology [11].

2.4 Organisational Memory

In the times of the computer-mediated knowledge economy, the informal knowledge of 'doing things' is the daily currency. This asset is usually poorly preserved and managed after a project is completed, as it automatically compiles into a collective team expertise. Over the past decade, the CSCW5 community and related areas have taken a keen interest in the notion and different realisations of organisational memory (OM) [12-14] as a possibility to preserve informal knowledge. Groupware tools tend to make informal knowledge explicit, but on their own they generally do not create a coherent organisational memory, as they do not provide an effective index or structure to the mass of collected data and do not provide techniques for extracting information and knowledge out of that data. On the other hand, attempts to build stand-alone organisational memory systems have had little success because they relied on humans documenting their activities, providing them with one or another form of hypertext to capture the thinking and learning outcomes.
In the context of capturing the informal knowledge about computer-mediated collaboration, this suggests that there is value in retaining and later drawing on historical records of virtual collaboration. Such records may be referenced when setting out on a new distributed project, to "see how others have done it", and perhaps to reuse and re-enact parts of those collaboration instances. Unlike conventional work settings, where details of collaboration have to be collected manually through effort-intensive and sometimes intrusive methods, the design of CVWs according to the proposed framework offers the capability to automatically record a great amount of detail about collaboration activities conducted in the CVW, when work is predominantly or entirely carried out virtually.
While much work in organisational memory concerns itself with the content of collaboration, or the declarative memory, little work has been done on harnessing the procedural memory, or knowledge about how work has been carried out. The importance of utilising this aspect of organisational memory in groupware systems has been pointed out relatively early in [14], and again more recently within the context of virtual team effectiveness [15]. The framework presented in Fig. 4 makes
5 Computer Supported Collaborative Work
the procedural portion of organisational memory an integral part of collaboration support in the virtual workspace environment by maintaining knowledge extracted from collaboration environments and making it available within the CVW. On the collaboration level, this knowledge relates to an understanding of collaborative activities and the process of virtual collaboration. For example, it can identify what main types of activities were conducted within a virtual workspace, how the activities were carried out over time, what differences exist in the activity of different people within the environment, etc. This knowledge can be utilised within the environment itself, leading for instance to an adaptation of the environment to the evolving collaborative process and/or its interface in order to facilitate the execution of predominant activities. It can also serve as a management and control instrument, which is of particular value when collaboration is conducted in a purely virtual mode and traditional management methods are severely limited. On the structural level, various representations of the virtual workspace topology are maintained. The structural patterns, discovered across a number of workspaces, are deposited in the organisational memory in the form of different topologies and configurations available for reuse. This knowledge can be applied already during the project run, for instance to rearrange the topology of the virtual workspace if its current arrangement encumbers work activities. The utilisation of CVWs may, over time, also lead to the emergence of new concepts, or an application of existing concepts in ways that were not previously anticipated. These are deposited on the conceptual level as modifications to the ontology of the CVW, which will influence the development of related underlying technologies and environments. An example of this is where an environment lacks a certain feature, but where users discover workarounds that, though cumbersome, allow the feature to be supported. Discovery of such cases can be of use in the development of the next version of the underlying environment to explicitly support such a feature.
3 Technological Support of the Framework

In this section, we discuss one possible implementation of the framework elements, illustrated with an example of its application. The framework has been implemented as part of the LiveNet underlying technology. LiveNet is a groupware technology for CVW support, which has been developed at the Collaborative Systems Laboratory, University of Technology, Sydney [16]. LiveNet has been selected for testing the application of the framework as its environment offers not only support for some elements of the framework, but also a development environment for the components that are not implemented, in particular the data mining support.

3.1 Virtual Workspaces and Organisational Memory

LiveNet supports mainly asynchronous collaboration of distributed groups of people, i.e. different-time, different-place interactions, although its design does not limit it to only this mode of collaboration. A central server is accessed across the network through one of several client interfaces, most commonly through a Web interface, as
illustrated in Fig.5a). The environment is built around a particular ontology [16] (a simplified version of this ontology is shown in Fig.5b).
a. Workspace access via the Web interface to LiveNet
b. Simplified ontology of LiveNet
Fig. 5. LiveNet underlying model and one of the interfaces.
LiveNet provides virtual workspaces, which bring together people, artefacts (e.g. documents), communication channels, awareness facilities, and a collection of tools, all tied together through a configurable governance structure. In terms of the ontology, workspaces contain roles, occupied by participants (i.e. actual people), who perform actions. Some actions may operate on document artefacts; others may be interactions with other workspace participants through discussions. Most workspace elements such as documents, discussions and participants may be shared between workspaces. Thus, LiveNet workspaces are not just stand-alone entities like in some collaboration systems, but nodes in a network of inter-connected collaboration spaces. Neither are structures of workspaces in LiveNet static—once created, a workspace can be dynamically adapted to evolve together with the collaboration carried out in it,
while likewise entire "ecologies" of inter-connected workspaces can co-evolve. The structure of such a network of workspaces and the variety of artefacts that each workspace can accommodate offer a structure that supports organisational memory.

3.2 Collaboration Data Support

Data about workspaces in LiveNet captures two aspects: a database maintains the current state of all workspace elements (documents, roles, participants, etc.), while log files record all user actions carried out in the system over time. Although the vast majority of users interact with LiveNet through a web interface, the log records captured by the LiveNet server are on a semantically much higher level than those in the corresponding web access log. While a web log includes IP addresses, document names, timestamps and http request types, the LiveNet log records information in terms of LiveNet's conceptual model. Thus, every record includes the name of the workspace and its owner, the name of the participant carrying out the action, his/her role name, the LiveNet server command requested, etc. This allows analysis to exploit metadata available in the application and to capture higher-level actions than a mere web log does [17].
a. Selection of the data source
b. Selection of the data mining function
Fig. 6. The implementation of the data mining component of the framework.
3.3 Knowledge Discovery Support

The initial implementation of the data mining support in the LiveNet environment is based on the WEKA data mining class library [18], with extensions for text and image data preprocessing and mining. The data mining tool can be attached to a workspace. The input data can come from LiveNet internal log data records, from files that are artifacts in the workspace, or from external files. At present, the data mining algorithms include techniques for clustering, classification and association mining. The interfaces are illustrated in Fig. 6.
4 Extracting Knowledge from Collaborative Activities

This section presents an example of the application of the framework, which illustrates how knowledge about collaboration can be extracted and integrated in the CVW. The emphasis is not on the data mining methods (indeed, the illustration uses simple descriptive statistics), but on the "contribution" of each component of the framework. The analysis in this example allows utilising the application data about higher-level actions and relating them to the activities performed in the CVW. The goal of the data mining exercise was to discover styles of group collaboration in terms of workspace topology and sets of activities.
The analysis we carried out focused primarily on the log of collaboration actions, and to a lesser extent on the LiveNet workspace database. It involved pre-processing of the log, visualisation of workspace data, and actual data mining. The pre-processing step normalises session numbers, aggregates lower-level events into higher-level actions, and calculates session summaries. In this context, a session is the sequence of actions carried out by a user from login to logout time. Data preprocessing is considered part of collaboration data collection and is usually performed automatically. The data used originated from students and instructors of a number of courses at the University of Technology, Sydney, who used the LiveNet system both to coordinate their work and to set up workspaces as part of the students' assignments. The data covers a three-month period, with a total of 571,319 log records. These records were aggregated into 178,488 higher-level actions in a total of 24,628 sessions involving 721 workspaces and 513 users.

4.1 Extracting Knowledge about Workspace Structuring

This knowledge discovery process is an example of visual data mining, operating with visualisations of the input data. These visualisations aim at discovering certain relationships within and between workspaces. This particularly aids exploratory analysis, when the purpose is to get an understanding of the structure of the data and patterns in it. We selected data originating from students of one course who used LiveNet during the mentioned period. There were a total of 187 student users, organised into 50 groups of 3-5 people, whose use accounted for about 20% of the above-mentioned log data. Initial visualisation focused on networks of workspaces, to discover how individual student groups partitioned their work in terms of distinct workspaces, and to what extent these workspaces were linked to one another.
This exploratory analysis revealed two distinct patterns: the majority of users preferred to use just one workspace to organise all their course work (such as posting drafts of assignment documents, discussing work distribution and problems, etc.). This workspace tended to contain a relatively large number of objects, i.e. to have a high absolute workspace density. We label such groups as centralisers. To a certain extent, this mode corresponds to the single-task collaboration mentioned earlier. On the other hand, a few groups tended to partition their work across a collection of connected workspaces, usually with a separate workspace for each major course assignment. These workspaces tended to contain fewer objects (having a lower absolute workspace density) than the ones of the centralisers. We label these groups as partitioners.
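A minimal sketch of this labelling, assuming a workspace is represented by its set of objects; the fixed threshold is illustrative, since in the study the distinction emerged from visual exploration rather than a fixed cut-off:

```python
# Sketch: absolute workspace density and centraliser/partitioner labelling.

def workspace_density(workspace):
    return len(workspace["objects"])      # absolute density: object count

def classify_group(workspaces, density_threshold=30):
    densities = [workspace_density(w) for w in workspaces]
    if len(workspaces) == 1 and densities[0] >= density_threshold:
        return "centraliser"   # one dense workspace for everything
    if len(workspaces) > 1:
        return "partitioner"   # work split across connected workspaces
    return "undetermined"
```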
The partitioners' collaboration style corresponds to the multi-task collaboration (for a discussion of collaboration styles see [2]).
Fig. 7 shows a map of LiveNet workspaces with colours (levels of grey in the printed version) highlighting absolute workspace density: lighter colour (lighter grey level) indicating lower density, darker colour (darker grey level) indicating higher density. Branching out from the central node at the top are networks of workspaces for three groups. Nodes represent workspaces, edges represent hierarchical relationships between workspaces. What the map reveals is that the group on the right, Team40, has a very high density in the workspace used for facilitating its work (the workspace Team40_Master). Moreover, it uses only one workspace for this purpose. Thus, the right group is a typical example of a centraliser. On the other hand, workspaces in the group at the centre have a much lower density. Out of the eight workspaces in this group, six are used for facilitating aspects of the group's work. This is indicative of a partitioner group.
There are plausible explanations for both the centraliser and partitioner cases. Both approaches have their own advantages:
• in the centraliser case, the advantage is the convenience of not having to create multiple workspaces or switch between them, and of having everything available to all participants in a single location;
• in the partitioner case, the advantage is increased clarity, structuring according to task, and consequently reduced cognitive load in the case of multi-task collaboration.
Fig. 7. Workspace densities of three different groups.
Furthermore, some groups may bring certain preferences as to the way to organise their work into workspaces and enact these preferences in the way they structure their virtual workspace environment. Such preferences are recognised during knowledge discovery phase, and stored in the organisational memory (i.e. the case base of workspace topologies). These preferences can be matched during the design of a new CVW, thus helping to offer support that is more adequate to collaborative groups with diverse working styles.
4.2 Extracting Knowledge from Feasible Actions

A further area we investigated focused on identifying the actions different groups mainly carried out within LiveNet, in the context of the activities they performed in the projects. LiveNet Release 3, in which these projects have been conducted, offers about 80 different actions. The majority of student groups used only about half of those. The major actions carried out relate to the main elements of the LiveNet ontology: workspaces, roles, participants, documents, and discussions. The taxonomy of the major application level actions is presented in Fig. 8.
While all groups had been given the same task—to prepare a number of assignments and to set up a collaborative virtual workspace to support a given process—the way they implemented this task varied markedly. This was evident in a number of indicators measured and analysed, including intensity of use, number of workspaces created, number and length of sessions, number of actions per session, etc. (the session-level indicators come from the log pre-processing sketched below). One area of our analysis focused on the proportional distribution of main actions. This revealed that strong differences existed among different groups. We illustrate this difference with two examples, presented in Fig. 9 and Fig. 10. Fig. 9 shows action distributions among the major application level actions of the taxonomy in Fig. 8 for one group whose distribution of actions was fairly even across categories: the five major action categories did not vary greatly (with the exception of the "Participant" category) and none of them exceeded 0.29 (29%) of the total amount of actions (the circle size corresponds to the proportion of the total). Fig. 10, on the other hand, shows a highly uneven distribution of actions in another group, where one action category (role) strongly dominates with 0.56 (56%) of the total, and two other action categories (document and discussion) barely register.
Such a difference may be explained when considering that Group 1 (Fig. 9) had a total of 627 sessions consisting of a total of 7446 actions, while Group 50 (Fig. 10) had only 36 sessions and 633 actions. Not only did Group 1 use the collaborative virtual workspace much more intensively, but they also made much broader use of the CVW to facilitate their own work (as manifested in the solid proportion of actions in the document and discussion categories). Thus the skew in action distribution towards role-related actions on the part of Group 50 is caused by the under-utilisation of other LiveNet features, not by a high absolute number of actions related to roles (in absolute terms, Group 1 carried out 431 role-related actions, while Group 50 carried out only 142 such actions). It should be noted that the choice of these two groups for illustration was not coincidental: Group 1 was the best-performing group in the course, while Group 50 was the worst-performing group, as measured by the marks obtained for their assignments in the course. The situation was comparable in other similarly scoring groups. When such cases are identified and included in the organisational memory as part of a record of collaboration, they can be of use in evaluating virtual work. This can be particularly useful for virtual teams that never meet face-to-face, where conventional management methods for project monitoring and control are severely limited or absent. The organisational memory in such cases can be utilised as a management instrument.
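As referenced above, the session-level indicators rest on the pre-processing described in Sect. 4. A minimal sketch of that step; the record layout is illustrative, since LiveNet's actual log schema is richer (workspace, owner, role, server command, etc.):

```python
# Sketch: grouping log records into sessions (login..logout per user)
# and computing simple session summaries.

def sessionise(records):
    """records: iterable of (user, timestamp, action), sorted by timestamp."""
    open_sessions, sessions = {}, []
    for user, ts, action in records:
        if action == "login":
            open_sessions[user] = {"user": user, "start": ts, "actions": []}
        elif action == "logout" and user in open_sessions:
            s = open_sessions.pop(user)
            s["end"], s["n_actions"] = ts, len(s["actions"])
            sessions.append(s)
        elif user in open_sessions:
            open_sessions[user]["actions"].append(action)
    return sessions
```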
Fig. 8. Taxonomy of major application level LiveNet actions.
Fig. 9. Relatively even distribution of actions in Group 1.
Fig. 10. Highly uneven distribution of actions in Group 50.
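The proportional distributions plotted in Figs. 9 and 10 are straightforward to compute once each application-level action is mapped to a major category of the taxonomy in Fig. 8; that mapping is an assumption here, as the paper does not list it in full:

```python
# Sketch: proportional distribution of major action categories per group.
from collections import Counter

def action_distribution(actions, action_category):
    """actions: list of action names; action_category: name -> category."""
    counts = Counter(action_category[a] for a in actions)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

# A skew like Group 50's would show one category near 0.56 while the
# 'document' and 'discussion' categories stay close to zero.
```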
The presented illustration of knowledge extraction from feasible actions has already yielded interesting results through the application of very simple data analysis methods. The application of clustering methods assists in identifying patterns of behaviour in different user groups, which then assists in categorising the utilisation of different collaboration spaces used for different purposes. Clustering is also used to aid in the construction of "group profiles", allowing personalisation of the support towards the design of new individual workspaces.
4.3 Integrating Extracted Knowledge in the Organisational Memory

An important part of the framework is the way knowledge is returned to the environment. An example of such feedback is the collection of, and generalisation over, workspace graph structures. The procedure is to some extent similar to building a case base of workspace configurations, with case indexing and retrieval based on matching graph structures. A new collaborative process is formalised into a graph structure using concepts from a modified form of the soft systems methodology, with activities, roles and artefacts as node types and particular rules for the connections between nodes (for example, a participant and an artefact cannot have a direct connection). The formal representation is usually the result of a high-level (i.e. not detailed) description of the process. This representation is matched against the graph representations of stored workspace configurations, and retrieved cases provide the initial configuration for further adaptation. The framework also allows feedback from the organisational memory towards modification of the knowledge representation schema used for representing and incorporating discovered knowledge. A detailed discussion of the issues related to modifying the knowledge representation schema, however, is beyond the scope of this chapter.
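As a rough illustration of graph-based case matching under these node-type rules, one could compare workspace graphs by the set of (node type, node type) edge pairs they contain. This is a simplified stand-in for the chapter's graph matching, with hypothetical node names throughout.

```python
def signature(node_types, edges):
    """Set of (type, type) pairs for a workspace graph, where node
    types are 'activity', 'role' or 'artefact'."""
    return {(node_types[a], node_types[b]) for a, b in edges}

def similarity(sig_a, sig_b):
    """Jaccard overlap between two graph signatures -- a crude proxy
    for full graph matching, used here only to rank stored cases."""
    return len(sig_a & sig_b) / len(sig_a | sig_b)

# Hypothetical new process and one stored workspace configuration
new = signature({"design": "activity", "lead": "role", "spec": "artefact"},
                [("lead", "design"), ("design", "spec")])
case = signature({"review": "activity", "editor": "role", "doc": "artefact"},
                 [("editor", "review"), ("review", "doc")])
print(similarity(new, case))  # 1.0: identical type-level structure
```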
5 Conclusions

This chapter presented a framework that integrates multimedia data mining technologies with technologies that support collaborative virtual workspaces. Such workspaces are becoming an intrinsic part of professional practice in global businesses. CVWs have the potential to collect data about collaborative activities and the information they process. Unfortunately, the design of earlier environments paid little attention to data collection, so the application of multimedia data mining methods had to cope with data collected for purposes different from the goals of the data mining exercise (for example, a collaborative system server log usually used for correct recovery after a server failure). Consequently, earlier applications of multimedia data mining methods in CVEs focused mainly on the analysis of communication transcripts, whether recorded in synchronous collaborative sessions or on a discussion board in asynchronous mode, and on the analysis of project document contents. The framework presented in this chapter integrates data mining technologies with CVWs at the early design stages of the virtual environment. A key issue at the design stage is the selection of the data that should be recorded. These records are complementary to the standard web server logs and can be at different levels of granularity; they include activity log data, the dynamics of media usage, and changes in workspace content. Careful design and analysis of the activity log data can lead to improvements in the structure of the space and to tuning the set of feasible actions with respect to the purpose of the environment. The applicability of the framework has been tested and demonstrated on a real environment.
The example presented in the chapter illustrated the application of data mining and visualisation techniques within the framework for discovering patterns in collaboration data and utilising the discoveries. By combining CVW and multimedia data mining technology, the proposed framework leads to the development of more coherent and consistent CVWs. CVWs with embedded multimedia data mining capabilities will allow the discovery of ad hoc and emergent processes for which no initial design has catered. Workspace configurations can be retained in a library of reusable process templates. The efficient utilisation of discovered knowledge within the environment where the data was collected, and the transfer of such knowledge to other environments, are areas for further research.

Acknowledgments. This work has been supported by the Australian Research Council and the University of Technology, Sydney.
STIFF: A Forecasting Framework for Spatio-Temporal Data

Zhigang Li1, Margaret H. Dunham1, and Yongqiao Xiao2

1 Dept. of Computer Science and Engineering, Southern Methodist University, Dallas, TX 75275-0122, USA {zgli,mhd}@engr.smu.edu
2 Dept. of Math and Computer Science, Georgia College & State University, Milledgeville, GA 31061, USA {yxiao}@gcsu.edu
Abstract. Spatiotemporal forecasting has been drawing more and more attention from academic researchers and industrial practitioners for its promising applicability to complex data containing both spatial and temporal characteristics. To meet this increasing demand we propose STIFF (SpatioTemporal Integrated Forecasting Framework) in this paper. Following a divide-and-conquer methodology, it 1) first constructs a stochastic time series model to capture the temporal characteristics of each spatially separated location, 2) then builds an artificial neural network to discover the hidden spatial correlation among all locations, and 3) finally combines the individual temporal and spatial predictions using statistical regression to obtain the overall integrated forecast. After the framework description, a real-world case study in a British river catchment with a complicated hydrological situation and abrupt water flow rate fluctuations is presented for illustration purposes. The effectiveness of the framework is shown by enhanced forecasting accuracy and more balanced behaviour.
1 Introduction
Spatiotemporal forecasting has developed from individual spatial or temporal forecasting [11] and has gained vast attention recently for its promising performance in handling complex data, in which not only spatial but also temporal characteristics must be taken into account. Although technological improvements in sophisticated scientific observation instruments, such as satellites and infrared-ray detectors, have made spatiotemporal observation, instant transmission and regeneration manipulable and affordable, forecasting based upon historical data can still furnish us with a valuable empirical understanding of past events and a powerful insight into the future, in spite
This material is based upon work supported by the National Science Foundation under Grant No. IIS-9820841.
of the fact that the juxtaposition of space and time makes the whole issue much more complicated and challenging. In accordance with this quickly growing demand, an increasing number of papers, journals, books, forums and conferences have been dedicated to improving people's understanding of spatiotemporal correlation and exploring its application in a variety of fields, such as river hydrology [6], biological pattern formation [16], agricultural estimation [13], housing price research [10], rainfall distribution [9], livestock waste monitoring [5], fishery output prediction [18], hotel pickup ratio [12] and so on. By no means do all of them deal only with forecasting: some address the spatiotemporal problem from different viewpoints, such as simulation, regeneration and pattern recognition, which in return benefit the forecasting issue quite a bit. In this work we utilize both statistical analysis strategies and data mining techniques to build a new framework, namely STIFF (SpatioTemporal Integrated Forecasting Framework), in an attempt to better answer this challenging problem and overcome some limitations of previous work. The rest of the paper is organized as follows. Section 2 reviews and evaluates past spatiotemporal forecasting and other related work. Following this retrospective, Section 3 gives a complete description of the methodology of STIFF. A practical case study with seven gauging stations scattered in a catchment is then presented to crystalize the framework in Section 4. Section 5 concludes the paper with some remarks.
2 Overview of Past Work
Before presenting details, let us first take a close look at what others have done in this field. One thing worth keeping in mind, and one of the most challenging issues in spatiotemporal forecasting, is how to deal with spatiotemporal data appropriately, as both the spatial and the temporal characteristics of the data have an important influence on the accuracy of the forecast. Intuitively speaking, time is one-dimensional and constantly flows forward, which makes it relatively easy to measure and assess. Space, in contrast, is three-dimensional and may change over time. Moreover, spatial correlation cannot be determined solely by distance in space. For instance, the probability of flooding at a certain site along a river does not depend only on the precipitation at the site, but also on heavy rain in the upper tributaries, plus other influential factors, like the saturation of the soil, the forest coverage percentage, the cross-section area, the gradient of the river bed and the turning angle of the river. It is even possible that there are other factors not thoroughly understood or simply hard to measure and assess, especially in more difficult spatiotemporal application fields, like social and economic studies. To address this problem, people have conducted many beneficial trials and proposed useful models, most of which can be found in the bibliography [14]; some typical ones are presented in [6,12,5,10,9,13]. Their work is not strictly limited
to spatiotemporal forecasting, but is somewhat comprehensive and covers many aspects of spatiotemporal application. What is common to these models is that they consider the spatiotemporal relationship simultaneously and try to find a reasonable way to combine the temporal and spatial characteristics as well as possible. In [9], a Hidden Markov Model (HMM) was employed to simulate the monthly rainfall quantity before it was disaggregated from a top-level 400 km × 400 km area into 32 bottom-level 12.5 km × 12.5 km areas. Unfortunately, the discrepancy between the simulated rainfall and the actual one was somewhat pronounced, which undermined the subsequent forecasting. Pokrajac et al. [13] simply assumed that a current event was only affected by its immediate temporal predecessor, in order to avoid laborious computation, and built their test upon a synthetic dataset. Such an imposed restriction is most likely unacceptable, since data collected in the real world often shows some kind of periodicity. For example, when sea level is recorded every month, the current level will most probably look somewhat similar to the one 12 months ago, based upon the one-year cycle; with the assumption applied, some very important information regarding the seasonal cycle is definitely lost. Cressie et al. built their model upon spatial statistics analysis where the temporal characteristic was condensed into a "three-day area of influence" [5]. But the problem with their approach was "the large variation of the predicted values" [5] given just a little modification of the input data when the model was used for real forecasting. Although this kind of oversensitivity was attributed to the low sampling density in space and time, it is rational to suspect that spatial statistics, assisted by a straightforward time-lag assumption, is not sophisticated enough for reliable forecasting. As a counterpart to the above spatial-statistics-based models, the work in [6,10,9] was mainly based upon time series analysis, a well-established statistical tool that has proven very useful in spatiotemporal forecasting. To incorporate the spatial correlation, the authors introduced a simple neighboring distance matrix: if two locations were within a pre-defined threshold distance, the corresponding entry in the neighboring matrix was set to 1; otherwise it remained 0. Such a simple neighboring matrix helped merge the spatial characteristic and facilitated the computation. But, as discussed before, a simplification like this may miss non-distance influencing factors that are likely to contribute to the event in their own ways. Generally speaking, most of the above-mentioned works follow a kind of divide-and-conquer approach, since so far a well-founded and uniformly accepted methodology to integrate space and time has been far from available. Some look on the overall forecasting as the combination of separate spatial and temporal sub-forecasts, loosely dividing the whole process into parts: they first solve the forecasting concerning space and time separately according to their special characteristics, then proceed to dig out the overall optimal answer via a kind of integration. Such an idea is embodied in [10,9]. Others prefer to pay more attention to currently mature analysis tools, e.g., time series or spatial
statistics, and extend them to the spatiotemporal problem. For example, spatial statistics concepts were extended to take the time dimension into account in [5,13]. In contrast, Deutsch et al. in [6,12] incorporated spatial correlation into multivariate time series analysis with the help of a neighboring distance matrix.
3 Framework Specification
To facilitate the forecasting discussion, we define the spatiotemporal problem as follows. Frequently used symbols are first summarized in Table 1.
Table 1. Summary of symbols

Symbol                                     Meaning
Δ                                          set of n + 1 locations
δi (i = 0, 1, ..., n)                      spatially separated locations in Δ
Π                                          set of time series data scattered over all locations
πi (i = 0, 1, ..., n)                      the ith time series, located at δi
l (= |πi|)                                 length of the time series data
πij (i = 0, 1, ..., n)(j = 1, ..., l)      the jth observation in time series πi
s                                          number of steps to forecast ahead
Definition 1. Given the collection of locations Δ, the whole available time series data Π and the lookahead steps s, a spatiotemporal forecasting problem asks to find a mapping relationship f, defined as

f : \{\Delta, \Pi, l, s\} \to \{\hat{\pi}_{0(l+1)}, \hat{\pi}_{0(l+2)}, \ldots, \hat{\pi}_{0(l+s)}\},    (1)
which is as good as possible. Without loss of generality, δ0 is assumed to be the only target location where the spatiotemporal forecasting will be conducted. Although not stated explicitly, most previous spatiotemporal forecasting techniques follow this same definition of the forecasting problem. But most of them lack an appropriately balanced examination of both spatial and temporal characteristics, bearing either too-stringent assumptions or oversimplifications. Motivated to overcome these limitations and find a better way to carry out the forecasting, in this section we present STIFF, a spatiotemporal forecasting framework that combines statistical analysis and data mining. STIFF consists of several functional components, and its working methodology is outlined in the following algorithm. The latter three steps are further explained in the coming subsections.
STIFF Algorithm
Step 1. To ensure the applicability of STIFF, the forecasting problem may have to be restated in terms of the above definition, such as determining the target location δ0 and its spatially separated siblings δ1, δ2, ..., δn. If needed, clustering or classification techniques, like K-Means and PAM [8], can be employed for this purpose.
Step 2. For each location δi ∈ Δ, build a time series model TSi that provides the necessary temporal forecasting capability at that location. The forecast from model TS0 is recorded as fT.
Step 3. Based upon the spatial correlation of all locations δi (i ≠ 0) with δ0, an artificial neural network is constructed to capture the spatial influence over the target location δ0. After the neural network has been appropriately trained, the forecast from each time series model TSi (i ≠ 0) is fed into the network, whose output is taken to be the spatially-influenced forecast, recorded as fS.
Step 4. The two forecasts, fT and fS, are combined by an appropriate statistical regression mechanism, giving the final overall spatiotemporal forecast.

Given such a framework, there are a number of criteria by which to verify its correctness and effectiveness. For example, it is possible to argue for the statistical sufficiency of capturing both spatial and temporal characteristics, or, from a more empirical perspective, to show better overall forecasting performance. Given the application-oriented nature of spatiotemporal forecasting, we propose the following error measurement,

\mathrm{NARE} = \frac{\sum_{t=1}^{s} |\hat{\pi}_{0(l+t)} - \pi_{0(l+t)}|}{\sum_{t=1}^{s} |\pi_{0(l+t)}|},    (2)

denoted the Normalized Absolute Ratio Error. NARE is employed in this paper as the performance criterion for its easy understanding and simple mathematical calculation. A very similar error measure, in which the absolute value operation was replaced with a square by the authors' preference, was used in [15], where neural network techniques were applied to Nile river flood control.
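A direct implementation of the NARE measure in Equation (2) might look as follows; the numbers are toy values only.

```python
def nare(actual, forecast):
    """Normalized Absolute Ratio Error over the s look-ahead steps
    (Equation 2): sum of absolute errors over sum of absolute actuals."""
    num = sum(abs(f - a) for a, f in zip(actual, forecast))
    den = sum(abs(a) for a in actual)
    return num / den

print(nare([1.2, 1.5, 1.8], [1.1, 1.6, 1.7]))  # 0.3 / 4.5 = 0.0667
```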
3.1 Build Time Series Model for Each Location
By concentrating on each location and building a time series model for it individually, the spatial influence from its siblings can be temporarily shielded, which facilitates working on the temporal characteristic. A moderate coverage of time series analysis would take a full book of hundreds of pages; therefore in this section we focus on the major concepts and supply only the necessary background, instead of a detailed exposition. Interested readers are referred to the corresponding references [19,3].
What is especially important to our work is the so-called MSARIMA (Multiplicative Seasonal AutoRegressive Integrated Moving Average) model [19,3], developed by Box et al. Denoted MSARIMA(p, d, q) × (P, D, Q)_s in shorthand, it usually has the following mathematical representation:

\Phi_P(B^s)\,\phi_p(B)\,(1-B)^d\,(1-B^s)^D\,\dot{Z}_t = \theta_q(B)\,\Theta_Q(B^s)\,a_t    (3)

where

B : the backshift operator, e.g., B Z_t = Z_{t-1}
D : number of seasonal differencings required
d : number of nonseasonal differencings required
s : seasonal period, e.g., based upon a one-year cycle, quarterly data has s = 4
Φ_P(B^s) : seasonal autoregressive polynomial
φ_p(B) : nonseasonal autoregressive polynomial
Θ_Q(B^s) : seasonal moving average polynomial
θ_q(B) : nonseasonal moving average polynomial
Ż_t : original or transformed data; a transformation is sometimes needed to stabilize the mean or the variance so that the data can be better fitted by the model
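For concreteness, a seasonal ARIMA model of this multiplicative form can be fitted with, for example, the statsmodels SARIMAX class. The series and the orders below are synthetic placeholders, not the orders identified for the case-study data; the actual orders come out of the identification procedure described next.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic series with a weekly cycle (illustrative only)
rng = np.random.default_rng(0)
t = np.arange(400)
flow = 2.0 + 0.5 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 0.1, 400)

model = SARIMAX(flow, order=(1, 0, 1), seasonal_order=(1, 0, 1, 7))
fitted = model.fit(disp=False)
print(fitted.forecast(steps=30))  # 30-step-ahead temporal forecast f_T
```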
Data transformation may turn out to be necessary if the original time series shows obvious nonstationary evidence [19]. The most frequently used transformations include square root, natural log, reciprocal, etc. For more general cases, the Box-Cox transformation family can be found in [3]:

y^{(\lambda)} = \begin{cases} \dfrac{y^{\lambda}-1}{\lambda} & (\lambda \neq 0) \\ \log y & (\lambda = 0) \end{cases}    (4)

Finding the right transformation parameter λ thus becomes important. Aiming at this problem, Box and Cox proposed a large-sample maximum-likelihood approach [2]. Wei then suggested a simpler one [19] to avoid the clumsy computation. His key idea is to empirically find the λ that minimizes the sum

S(\lambda) = \sum_{t=1}^{n} \left(Z_t(\lambda) - \hat{\mu}\right)^2    (5)
where \hat{\mu} is the mean value, equal to \frac{1}{n}\sum_{t=1}^{n} Z_t(\lambda), and Z_t(\lambda) is the transformed time series observation. However, there is a potential flaw in Wei's approach: the criterion relies only upon the overall absolute variance between each observation and the mean value, and does not take the relative ratio into account. This point is illustrated in the following example.

Example 1. Suppose we have a very small set of observations with only 3 records, and there are two different transformations that convert the data into {0.5, 0.8, 1.1} and {101, 103, 105} respectively, with means
of \hat{\mu}_1 = 0.8 and \hat{\mu}_2 = 103. Following Equation (5), because S_2 > S_1 it appears that the first transformation is preferred. But it is actually the latter one that makes the data more stable and is what we are looking for. To correct this drawback, we suggest a better approach for determining λ: the relative variance is taken into account, and the sum is normalized by the squared observations, as shown in the following equation:

S(\lambda) = \frac{\sum_{t=1}^{n} (Z_t(\lambda) - \hat{\mu})^2}{\sum_{t=1}^{n} (Z_t(\lambda))^2}.    (6)

The λ that minimizes S(λ) will be used.

Model construction after data transformation can be quite diversified, but roughly speaking it is normally divided into the following three subsequent stages [19,3]: model identification, parameter estimation and diagnostic checking.
– In the model identification phase, tentative models are identified: the seasonal and nonseasonal autoregressive and moving average degrees, as well as the necessary differencing numbers, are determined, namely the values of D, d, P, p, Q, q. To do that, the autocorrelation and partial autocorrelation function (ACF and PACF) plots are examined carefully. Other supplementary criteria may include the extended sample autocorrelation function (ESACF) [19], the generalized partial autocorrelation coefficient (GPAC) [21] and so on.
– In the parameter estimation phase, all other coefficients, mainly those in the polynomials, are estimated based upon the previously identified degrees. For this purpose, statistical software such as SAS or SCA is employed to facilitate the estimation process instead of programming from scratch.
– After the estimation, the third phase, diagnostic checking, provides a monitoring approach to prevent the creation of an inadequate model. It also works as an approach to choose the best one from all candidate models. A basic assumption for adequacy here is that the residual from the fitted model should be white noise (WN). Other popular criteria are also available, such as Akaike's Information Criterion (AIC) and Schwarz's Bayesian Criterion (SBC). If the estimated model does not satisfy the adequacy requirement and all candidate models fail, the construction process has to be repeated with other possible parameters.
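Returning to the transformation step, here is a minimal sketch of the λ selection based on the normalized criterion of Equation (6), assuming a positive-valued series and a simple grid search; the data and grid are illustrative.

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transformation family (Equation 4); y must be positive."""
    if abs(lam) < 1e-8:
        return np.log(y)
    return (y ** lam - 1) / lam

def normalized_s(y, lam):
    """Criterion of Equation (6): relative, not absolute, variance."""
    z = box_cox(y, lam)
    return np.sum((z - z.mean()) ** 2) / np.sum(z ** 2)

y = np.array([0.8, 1.4, 2.3, 1.1, 3.6, 2.0])     # toy positive series
grid = np.linspace(-2, 2, 81)
best = min(grid, key=lambda lam: normalized_s(y, lam))
print(best)  # the lambda minimizing S(lambda)
```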
3.2 Infer the Spatial Influence over the Target Location
Compared with temporal model building, figuring out the spatial influence over the target location δ0 is much more challenging. As pointed out before, although the term "spatial" is used here, by no means can the spatial influence be determined solely by distance in space. One work that ignores this fact is presented in [5], where the forecast showed a large variation given just a very slight input modification. To bypass the difficulty of measuring the spatial influence precisely, and to make the spatial model more stable and reliable, we introduce ANNs (Artificial
Neural Networks) to this problem. First appearing nearly sixty years ago, ANNs have been widely used in a variety of application fields in recent years [8]. Simply speaking, an ANN is a non-linear technology used to learn hidden patterns and relationships automatically. It is especially useful in "situations where the underlying physical relationships are not fully understood", as stated in [7]. By following the thinking pattern of biological brains and working without fully relying on people's interpretation and specification, an ANN can do almost everything conventional mathematical and statistical models can do, most of the time with the promise of better empirical performance, higher speed and better fault-tolerance, since there is no regular equation into which the data has to be squeezed. Other ANN-based examples can easily be found in the literature, such as [15,17,4]. Unfortunately, there is no existing method guaranteeing an optimal network, not to mention a universal one applicable everywhere. It is a case-by-case issue that calls for human expertise and experience. Furthermore, there are many dazzling choices, from the very common back-propagation network to fancier ones, like radial basis function networks, time delay networks and recurrent networks, which often leave people at a loss to make a good decision. But keep in mind that parsimony is an important criterion in evaluating overall performance, as in time series analysis: superfluous complication perhaps adds beauty, but more likely comes at the expense of laborious computation and leads to tricky problems like overfitting. For the purpose of illustration, a methodology for creating the neural network is outlined below. The example that follows materializes this construction process using the problem studied in the case study.
– Firstly, a simple three-layered (input layer, hidden layer and output layer), fully-connected back-propagation neural network is assumed. Normally one hidden layer is sufficient to perform the necessary transformation connecting the input and the output.
– Secondly, we assign |Δ| − 1 neurons, each of which corresponds to a non-target location δi (i ≠ 0), to the input layer, and only one, denoting the target location δ0, to the output layer.
– Thirdly, we must determine the exact structure of the hidden layer. This includes the number of neurons in the hidden layer, the links connecting different layers, and the weights of these links. As the structure of the hidden layer depends on the spatial organization of each specific problem, we use experimentation, conducted on the training data provided with the problem, to determine the best one.
– Moreover, in order to streamline the neural network, particular temporal and spatial characteristics of each specific problem can be utilized to help design the structure, such as pruning unnecessary neurons or links, assigning appropriate start-up weights to each link, etc.

Example 2. Let us jump ahead a little and take a look at how the above methodology is used to determine the neural network structure. Figure 2 shows
the structure of the river catchment area that is used in the case study of this paper. According to Figure 2, the network has a 6 − X − 1 structure: there are 6 input neurons, an unknown number (X) of neurons in the hidden layer, and one neuron in the output layer. Without loss of generality, a fully-connected back-propagation structure is assumed. To determine X, we vary the number of neurons in the hidden layer from a sparse 3 to a dense 12 in order to find an optimal one. It turns out that 6, 7 and 8 have almost the same best performance during the training stage; as a result, 6 is chosen for its simplest structure. Moreover, whenever possible, spatial and temporal information about the catchment (and its data) is taken into account to help streamline the overall structure. In this case, since there are exactly 6 neurons in the hidden layer, we associate each of them with a non-target location. Location 28011, for example, is only under the spatial influence of locations 28023, 28043 and 28070; the links to its hidden-layer neuron from sites 28048 and 28055 are regarded as unnecessary and can be removed for simplification. With appropriate training, the performance of the condensed network is almost as good as the original one, but with only 16 links in total, 26 fewer than its fully-connected counterpart. Thus the condensed network with 6 neurons in the hidden layer is finally chosen as the one used to find the spatial forecast fS.
28023
28043
28070
28023
28043 28010
28011
28011
28055
28055
28048
28048
Fig. 1. The neural network structure based upon the catchment in Figure 2
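As an illustration only, a fully-connected 6-6-1 back-propagation network of the kind described in Example 2 could be set up as below with scikit-learn. Note that scikit-learn does not support pruning individual links, so the condensed 16-link variant would need a lower-level toolkit; the training data here are random placeholders, not the catchment readings.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# X: one column per non-target station (6 inputs); y: flow at 28010.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 6))
y = X @ rng.normal(size=6) + rng.normal(0, 0.1, 1000)

# Fully connected 6-6-1 back-propagation network
net = MLPRegressor(hidden_layer_sizes=(6,), max_iter=2000, random_state=0)
net.fit(X, y)
print(net.predict(X[:5]))  # spatially-influenced forecasts f_S
```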
No doubt the above methodology is not the only choice and may not be the best; it serves only as a simple demonstration of neural network construction. There are quite a few alternative approaches, and normally the complexity will vary proportionately with the problem itself.
3.3 Generate the Overall Spatiotemporal Forecasting
So far there are two forecasted values associated with δ0: one carries the temporal forecast from the time series model, fT, and the other bears the spatial one from the artificial neural network, fS. Our current goal is to generate an optimal overall spatiotemporal forecast.
When working with the combination of multiple classifiers [20], Woods et al. noticed that there are mainly two approaches: fusion and dynamic selection. If we look at forecasting as a special type of classification, which maps each class into a real number, either of the two approaches could be taken directly. But since fT and fS each serve the forecasting from a different viewpoint, it is inappropriate to neglect either of them. To keep the combination easily understood and as simple as possible, the following popular linear regression is suggested:

f_{\mathrm{overall}} = x_1 \times f_T + x_2 \times f_S + \mathrm{Regression\ Constant},    (7)

where the regression coefficients x1 and x2 and the regression constant have to be estimated beforehand. Of course, more advanced nonlinear regression is worth careful consideration in case a higher accuracy is desired or the simple form fails to meet the demand.
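Estimating x1, x2 and the regression constant of Equation (7) by ordinary least squares might look as follows; the forecast and actual values are toy numbers.

```python
import numpy as np

def fit_combiner(f_T, f_S, actual):
    """Least-squares estimate of x1, x2 and the constant in Equation (7),
    using the portion of the training data reserved for this purpose."""
    A = np.column_stack([f_T, f_S, np.ones(len(actual))])
    coef, *_ = np.linalg.lstsq(A, actual, rcond=None)
    return coef  # (x1, x2, regression constant)

f_T = np.array([1.2, 1.9, 2.4, 1.6])    # temporal forecasts (toy values)
f_S = np.array([1.0, 2.1, 2.2, 1.5])    # spatial forecasts
actual = np.array([1.1, 2.0, 2.3, 1.55])
x1, x2, c = fit_combiner(f_T, f_S, actual)
print(x1 * f_T + x2 * f_S + c)           # overall forecasts
```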
4 A Real-World Case Study
In this section a practical spatiotemporal forecasting exercise is presented to solidify the idea and show how accurate the overall forecast can be. This is only an illustrative proof of concept; more detailed work is underway to study the comprehensive performance of the general approach. The case study is based upon data kindly provided by the NRFA (British National River Flow Archive) [1]. The upper Derwent catchment, containing seven gauging stations denoted by solid black circles, is shown in Figure 2. The gauging station 28010, lying lowest in the catchment, is where the forecasting will be carried out.
Fig. 2. The catchment of the upper Derwent river
In other words, it is the target location δ0 in terms of the problem definition. We are going to forecast the water flow rate (m³/s) at
station 28010. First of all, a plot of a portion of the time series data, from 01/01/73 to 02/04/74, at gauging station 28010 is given in Figure 3 (space limitations make it unrealistic to provide plots of all the data); it clearly depicts the abrupt data changes universally present at all gauging stations, most probably caused by reservoir interventions as well as a somewhat lopsided rainfall distribution over the catchment. After determining the time window within which data is available at all gauging stations, we choose the data between 01/01/73 and 09/30/75 as the training set and forecast from 10/01/75 with 30, 60 and 120 days ahead respectively.
Fig. 3. Time series plot at gauging station 28010 in upper Derwent catchment
The building process is carried out following the framework specification, after the data has been appropriately transformed according to the suggested method (see Equation 6). In the model construction, two minor problems deserve attention.

Time series data are shifted before being fed into the neural network. Due to the different distances between gauging stations and the variable flowing speed of the water, a time lag is present between the readings of different stations, as it takes time for water to flow from one station to another. In order to maximize the correlation between data from different stations, and so better train the neural network, a shifting operation is first introduced to find the likely time lag (against the data from the target location δ0, gauging station 28010). Intuitively speaking, for data coming from each non-target location, an appropriate time lag is picked when it minimizes the absolute difference sum, defined as \sum_{k=1}^{l} |\pi_{i(k+\mathrm{time\_lag})} - \pi_{0k}| (i ≠ 0). In order not to chop off too many observations, which could undermine the neural network training, we suggest limiting the shifting with an upper bound, such as the 100 used in our work. Moreover, for the sake of simplicity, the time series data at the target location is fixed, while data from other
locations can only be moved in one direction, arbitrarily set towards the past.

Regression coefficients are estimated. Since a linear regression (see Equation 7) is employed to merge the time series model fT and the neural network model fS (to generate the overall forecast), all the regression coefficients concerned have to be appropriately estimated before STIFF is put into application. Therefore, during the initial construction of fT and fS we cannot use all the training data, and a certain portion of it has to be reserved for coefficient estimation once fT and fS are both ready. A common practice is to keep the latest training data for coefficient estimation. Then, when all regression coefficients are available, fT and fS are updated accordingly: they may be rebuilt from scratch based upon the entire training data, or, if less labor is preferred, just modified incrementally, in the way discussed in [3,8], with the new training data they did not utilize the first time. Reserving an appropriate percentage of the training data for regression coefficient estimation is therefore very important. If the reservation is too small, the accuracy of the coefficient estimation will be affected unfavorably. On the contrary, if more than necessary is put aside, fT and fS, which are updated accordingly later, will differ significantly from what they were when just constructed. A rule of thumb is to hold between 5% and 10% of the training data. In our case study we keep 70 observations for regression coefficient estimation, almost 7% of the training data.

As opposed to fusing fT and fS together as in STIFF, dynamic selection of either fT or fS is another reasonable alternative, as indicated in the work of Woods et al. [20]. The key point of dynamic selection is to choose each time (after the forecasted observation is available) the forecaster, either fT or fS, that is closer to the real observation, and use it as the next forecaster. In other words, whichever is better at the current time is used for the next, although it may turn bad at the next time. Such a forecasting approach, denoted CDS, is employed in the following comparison. For comparison purposes, we also construct, to the best of our knowledge, two other more common forecasting models, a pure time series model (CTS) and a pure artificial neural network (CANN), based upon data from gauging station 28010 only, and compare them with STIFF side by side in the case study. The resulting forecasting plots and error summary are shown in Figures 4, 5 and 6 and Table 2 respectively. It is straightforward to see that STIFF outperforms both CTS and CANN, with not only a smaller error rate but also more balanced behavior between over-estimation and under-estimation. As far as CDS is concerned, although it is slightly better than CANN and CTS, it still lags behind STIFF.

Another advantage of this hybrid approach is that some stringent assumptions and over-simplifications that had to be complied with in previous work can be easily released. For instance, the introduction of time series makes it needless to keep the assumption in [13] that the current event is only affected by its immediate temporal predecessor.
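Returning to the shifting operation described above, a minimal sketch: try every lag up to the upper bound and keep the one minimizing the absolute difference sum. The series here are invented toy flows, not the actual NRFA readings.

```python
def best_lag(target, other, max_lag=100):
    """Shift `other` towards the past by 0..max_lag steps and keep the
    lag minimizing the sum of absolute differences against `target`."""
    def cost(lag):
        n = len(target) - lag
        return sum(abs(other[k + lag] - target[k]) for k in range(n))
    return min(range(max_lag + 1), key=cost)

target = [1.0, 1.3, 2.7, 3.2, 2.1, 1.4, 1.1, 1.0]   # station 28010 (toy)
other  = [0.8, 0.9, 1.0, 1.3, 2.7, 3.2, 2.1, 1.4]   # same pattern, 2 steps later
print(best_lag(target, other, max_lag=4))            # -> 2
```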
Fig. 4. 30 days forecasting from different models
Fig. 5. 60 days forecasting from different models
This makes STIFF more appropriate when working with data that bears obvious periodicity, like daily temperature, monthly rainfall, yearly sales amounts, etc. Likewise, the introduction of the artificial neural network rules out the need for the much simpler, single-purpose neighboring distance matrix [6,12], which may be misleading in some situations, because only absolute physical distance is considered. For instance, under the neighboring distance matrix method, since the distance from gauging station 28055 to 28010 is much shorter than that from 28011 to 28010, station 28055 should have a larger entry in the matrix than station 28011. But actually it does not deserve one: a simple statistical calculation shows that the data correlation between 28011 and 28010 is much greater than that between 28055 and 28010. Why? A further study of the catchment discloses that gauging station 28011 lies on the major upper stream of 28010, and more than 70% of the water that passes 28010 comes from 28011. This discrepancy further suggests, as mentioned before, that focusing
Fig. 6. 120 days forecasting from different models
on distance only is normally not sophisticated enough for more complicated situations. We further checked the stability of STIFF when a small portion of the input data is changed: we intentionally and randomly modified the input data a little and re-ran the whole forecasting, but no over-sensitive forecasting phenomenon, as present in [5], ever showed up.

Table 2. Comparison of lookahead forecasting for different models. Groups are for 30, 60 and 120 days respectively

Look-ahead days  Model  NARE     Over-estimation number  Under-estimation number
30               CANN   0.16723  27                      3
30               CTS    0.16603  24                      6
30               CDS    0.15063  23                      7
30               STIFF  0.05558  17                      13
60               CANN   0.14717  42                      18
60               CTS    0.19518  52                      8
60               CDS    0.11397  43                      17
60               STIFF  0.08792  35                      25
120              CANN   0.19247  42                      78
120              CTS    0.23261  52                      68
120              CDS    0.18129  43                      77
120              STIFF  0.14249  36                      84
5 Conclusive Remarks
In this paper we have presented STIFF (SpatioTemporal Integrated Forecasting Framework), which can be used for forecasting based upon complex spatiotemporal data. It makes up for the deficiencies of previous work and loosens some of its stringent assumptions and excessive simplifications. Time series analysis
is incorporated to capture the temporal correlations, and the artificial neural network technique is employed to discover the hidden and deeply entangled spatial relationships. The two approaches are then combined via regression to generate the overall forecast. A real-world application of STIFF to water flow rate forecasting in a catchment is presented to illustrate the effectiveness of the framework. STIFF works very well and outperforms the plain time series model, the artificial neural network, and the dynamic selection between the two, especially over the short-term one- or two-month period; for example, the forecasting error, measured in NARE, is generally about 10 percentage points lower. At the same time, it also has more balanced forecasting behavior. For the longer, middle-term four-month period the benefit is also somewhat marked, although the accuracy gradually degrades. Considering that the catchment in the case study is dominated by quite a few other influential factors, like reservoir activities, human interventions and lopsided rainfall, its performance is rather satisfying. Unfortunately, STIFF requires much human expertise and skill in time series model and neural network construction, which turns out to be its major drawback. We are also exploring the possibility of incorporating other more sophisticated techniques, like intervention analysis of time series [3], the latest artificial neural networks and so on. All of these drawbacks and alternative options will be further studied in our future work.
References
1. British National River Flow Archive. http://www.nerc-wallingford.ac.uk/ih/nrfa/index.htm.
2. G. E. P. Box and D. R. Cox. An analysis of transformations. Journal of the Royal Statistical Society, 1964.
3. George E. P. Box, Gwilym M. Jenkins, and Gregory C. Reinsel. Time Series Analysis: Forecasting and Control. Prentice-Hall, 1994.
4. Sung-Bae Cho. Neural-network classifiers for recognizing totally unconstrained handwritten numerals. IEEE Transactions on Neural Networks, 1997.
5. N. Cressie and J. J. Majure. Spatio-temporal statistical modeling of livestock waste in streams. Journal of Agricultural, Biological and Environmental Statistics, 1997.
6. S. J. Deutsch and J. A. Ramos. Space-time modeling of vector hydrologic sequences. Water Resources Bulletin, 1986.
7. M. Dougherty, Linda See, S. Corne, and S. Openshaw. Some initial experiments with neural network models of flood forecasting on the River Ouse. In Proceedings of the 2nd Annual Conference on GeoComputation, 1997.
8. Margaret H. Dunham. Data Mining Introductory and Advanced Topics. Prentice Hall, 2002.
9. C. Jothityangkoon, M. Sivapalan, and N. R. Viney. Tests of a space-time model of daily rainfall in southwestern Australia based on nonhomogeneous random cascades. Water Resources Research, 2000.
10. R. Pace Kelly, R. Barry, J. Clapp, and M. Rodriquez. Spatiotemporal autoregressive models of neighborhood effects. Journal of Real Estate Economics, 1998.
11. Philip E. Pfeifer and Stuart J. Deutsch. A three-stage iterative procedure for space-time modelling. Technometrics, 1980.
12. Phillip E. Pfeifer and Stuart J. Deutsch. A STARIMA model-building procedure with application to description and regional forecasting. Journal of Forecasting, 1990.
13. D. Pokrajac and Z. Obradovic. Improved spatial-temporal forecasting through modeling of spatial residuals in recent history. In First SIAM International Conference on Data Mining (SDM'2001), 2001.
14. John F. Roddick, Kathleen Hornsby, and Myra Spiliopoulou. An updated bibliography of temporal, spatial and spatio-temporal data mining research. In International Workshop on Temporal, Spatial and Spatio-Temporal Data Mining, TSDM 2000, 2000.
15. S. I. Shaheen, A. F. Atiya, S. M. El Shoura, and M. S. El Sherif. A comparison between neural network forecasting techniques. IEEE Transactions on Neural Networks, 1999.
16. G. D. Singh, M. A. J. Chaplain, and John McLachlan. On Growth and Form: Spatio-Temporal Pattern Formation in Biology. John Wiley & Sons, 1999.
17. N. Srinivasa and N. Ahuja. A topological and temporal correlator network for spatiotemporal pattern learning, recognition, and recall. IEEE Transactions on Neural Networks, 1999.
18. D. S. Stoffer. Estimation and identification of space-time ARMAX models in the presence of missing data. Journal of the American Statistical Association, 1986.
19. William W. S. Wei. Time Series Analysis: Univariate and Multivariate Methods. Addison Wesley, 1990.
20. Kevin Woods, W. Philip Kegelmeyer, and Kevin Bowyer. Combination of multiple classifiers using local accuracy estimates. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997.
21. W. A. Woodward and H. L. Gray. On the relationship between the S array and the Box-Jenkins method of ARMA model identification. Journal of the American Statistical Association, 1981.
Mining Propositional Knowledge Bases to Discover Multi-level Rules

Debbie Richards and Usama Malik

Department of Computing, Centre for Language Technology, Division of Information and Communication Sciences, Macquarie University, Sydney, Australia
[email protected]
Abstract. This paper explores how knowledge in the form of propositions in an expert system can be used as input into data mining. The output is multi-level knowledge which can be used to provide structure, suggest interesting concepts, improve understanding and support querying of the original knowledge. Appropriate algorithms for mining knowledge must take into account the peculiar features of knowledge which distinguish it from data. The most obvious and problematic distinction is that only one of each rule exists. This paper introduces the possible benefits of mining knowledge and describes a technique for reorganizing knowledge and discovering higher-level concepts in the knowledge base. The input rules may have been acquired manually (we describe a simple technique known as Ripple Down Rules for this purpose) or automatically using an existing data mining technique. In either case, once the knowledge exists in propositional form, Formal Concept Analysis is applied to the rules to develop an abstraction hierarchy from which multi-level rules can be extracted. The user is able to explore the knowledge at and across any of the levels of abstraction, providing a much richer picture of the knowledge and understanding of the domain.
1 Introduction
Discovery of knowledge from databases or other data sources has been the focus of most knowledge discovery research. This emphasis is natural, since much data is readily available and in need of further manipulation, summarisation and interpretation. This paper reports work being conducted to reorganize and discover higher-level knowledge from rule bases rather than databases. A benefit of starting with knowledge rather than data is that the key features in the data have already been identified. However, knowledge has some peculiarities, such as smaller itemsets which are unique and thus do not lend themselves to frequency-based algorithms. Similar to research into finding multi-level association rules from data (e.g. [6]), we are motivated to support querying across levels of abstraction, which is not possible when the rules are based merely on the primitive attribute-values used in the original data. Another reason for mining
rules is the identification of "interesting" patterns or concepts amongst the many patterns that may emerge. Interestingness can be considered the extent to which a rule's support deviates from its predicted behaviour [11]. Our view of interestingness is related to our understanding of a concept as a set of objects and their attributes. Although our approach is currently focused on outputting classification rules rather than association rules, like [19] we define an interesting (or most informative) rule to be one that is non-redundant with minimal antecedents and maximal consequents. According to [19], a redundant rule is one that conveys the same or less general information within the same support and confidence levels. The experiments we report do not employ support or confidence measures, as we only have one instance of each example. However, based on the work of [5], we are currently exploring whether accuracy/usage statistics can be used to supplement our approach to identify the "interesting" concepts. In our approach, redundancy exists where we have a repeated concept or where a concept belongs to the same branch, that is, where there is a parent-child relationship. These notions can be better understood after our description of the knowledge representation and mining technique we use. Starting with knowledge can be seen as a problem in itself, given the difficulties associated with acquiring knowledge, particularly where the technique requires a model to be articulated by the domain expert. In this work we use a technique which does not rely on the difficult task of model specification in order to capture data. This paper offers an approach to knowledge discovery that takes knowledge rather than data as input, and produces structured multi-level knowledge rather than single-level knowledge as output. Ripple-down rules (RDR) [2] are used for manual and incremental acquisition of rules from cases at a rate of approximately one rule per minute. The rules are then mined using a set-theoretical technique known as Formal Concept Analysis (FCA) [31][33] to automatically generate a concept lattice. The approach is applicable beyond RDR and can be used on any propositional knowledge-based representation. It should be highly attractive to other data mining techniques that produce association rules which are low-level and based simply on the attribute-value pairs found in the original data. Firstly, we introduce the goals of and motivation for the project. In Sections 2 and 3 we describe RDR and FCA briefly. In Section 4 we describe the experiments we have performed using knowledge bases from the domain of chemical pathology. In Section 5 we discuss our results. Related research and concluding remarks appear in Section 6.
2 Introducing Ripple Down Rules
Ripple-down rules are based on a situated view of cognition that sees knowledge as always evolving and dependent on its context; thus maintainability is paramount. The approach is designed for incremental acquisition of knowledge motivated by cases, which ground the knowledge, and an exception structure which limits the effect of changes, unlike standard production rules [15] that may
require significant review and revision of the entire KB each time a new rule is added. RDR have found commercial success in a number of domains. The work we report here uses some of the commercial KB that have been developed in the domain of clinical pathology. Given that we had a knowledge representation and acquisition technique that provided simple, incremental and validated development of KB, our motivation was not to uncover knowledge from data but from the rules we were rapidly acquiring. One of the strengths of RDR is that minimal analysis of the domain is required: KA can begin once cases are set up or found. Since the knowledge acquired uses the primitive terms in the cases, we were interested in finding higher-level concepts to give us further insights into the cases and rules. For example, if we have four rules which state:

IF has author=yes, has publisher=yes, has page numbers=yes, has ISSN=yes, has volume number=yes THEN artifact=journal
IF has author=yes, has publisher=yes, has page numbers=yes, has ISBN=yes THEN artifact=book
IF has author=yes, has publisher=yes, has page numbers=yes, has ISBN=yes THEN artifact=monograph
IF has author=yes, has publisher=yes, has page numbers=yes, has ISBN=yes, part of series=yes THEN artifact=serial monograph
we can find the set intersections to suggest that perhaps a monograph and a book are the same (and perhaps remove one of them, or differentiate the monograph by adding an attribute such as "one off publication=yes", making the monograph a type of book) and that a serial monograph is a type of book. We can find the intersection of the journal, book, monograph and serial monograph concepts to create a new concept and show this higher-level concept to an oracle (probably human), from whom we could solicit the name 'publication' to add to the background knowledge stored by our system. Wherever this pattern is found it can be identified with the abstract term. This strategy fits with our incremental and user-manageable KA technique. The result is a new higher-level concept (rule):
IF has author=yes, has publisher=yes, has page numbers=yes THEN artifact=publication
In this way the higher-level concepts can suggest intermediate rules. The domain expert may find such abstractions more natural and quicker to use when specifying rules. In addition to the KA task, multiple levels of abstraction are useful for exploration of the knowledge. We support this via the concept lattice, which provides a conceptual hierarchy that can be browsed; an example of a lattice is given in Section 5. Pilot studies we have conducted [21] have shown that the concept lattice is valuable for gaining a deeper understanding of the domain and supports explanation and learning.
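To illustrate, the intersection-based reasoning on the four bibliographic rules above can be reproduced with plain set operations; the encoding of rules as attribute sets is ours, not the paper's implementation.

```python
from functools import reduce

rules = {
    "journal": {"has author", "has publisher", "has page numbers",
                "has ISSN", "has volume number"},
    "book": {"has author", "has publisher", "has page numbers", "has ISBN"},
    "monograph": {"has author", "has publisher", "has page numbers",
                  "has ISBN"},
    "serial monograph": {"has author", "has publisher", "has page numbers",
                         "has ISBN", "part of series"},
}

# Identical antecedents suggest merging (book = monograph); the shared
# core across all four suggests a higher-level concept ('publication').
shared = reduce(set.intersection, rules.values())
print(shared)  # {'has author', 'has publisher', 'has page numbers'}
print(rules["book"] == rules["monograph"])  # True
```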
Fig. 1. A partial MCRDR KB for the Glucose Domain

CL1 = [Report][Glucose Comment]
CL2 = [Glucose Comment][The mean blood glucose has been calculated from the HbA1c result. Glucose has not been measured on this specimen.]
CL3 = [Glucose Comment][Mean blood glucose is not calculated when the HbA1c is below 6.0%.]
CL4 = [Glucose Comment][The mean blood glucose has been calculated from the HbA1c result. It is an index of the average blood glucose. Glucose has not been measured on this specimen.]
CO1 = (HbA1c ≥ 6.0)
CO2 = (Diabetic=Y)
CO3 = (HbA1c ≤ 5.9)
CO4 = (NoComment=Y)
Additionally, we wanted to improve the compactness and comprehensibility of the KB by removing redundant knowledge. RDR uses a rule-based exception structure for knowledge representation, and cases both to drive knowledge acquisition and to allow automatic validation of new rules. Exception rules are
used to locally patch a rule that gives a wrong conclusion for a case. The case that prompts a new rule to be added is known as the rule's cornerstone case. The case thus provides the context in which the knowledge applies. Rules are only ever added, never deleted. As shown in Fig. 1, new rules may be modifications/exceptions to a previous rule. Studies have shown that RDR KB are comparable in size to KB developed using machine learning algorithms such as C4.5 and Induct [17]. Manually acquired KB are even more compact as it appears, not surprisingly, that experts are good at selecting which features in a case justify a particular classification. The partial Multiple Classification RDR (MCRDR) [13] KB shown in Fig. 1 demonstrates the incremental nature of development and the possibility of redundancy. The example is taken from a real Glucose KB, referred to as GA, and includes the use of unconditional stopping rules, which are used to override previous classifications with a null conclusion. Although the full KB has 25 rules, Fig. 1 shows only those rules which have exceptions, to demonstrate the MCRDR structure. In Section 5 we show how these 25 rules have been reduced to 8 rules and how they can be displayed as a concept lattice. This example is real and demonstrates the redundancy that can exist in an MCRDR KB. If cases had been seen in a different order, the structure and amount of redundancy would differ. Redundancy and scattered classes occur because rules are added as cases arise and because the expert is free to choose any string as a rule condition (providing it covers the current case and does not cover cornerstone cases) or conclusion. Thus two different strings "representing" the same condition or conclusion will be regarded as different. This makes understanding of the knowledge and the task of pattern identification even harder, as concepts which are the same or similar (e.g. classes CL2 and CL4 in the legend in Fig. 1) will be spread across possibly thousands of rules. By finding intersections of shared rule conditions using FCA we can automatically suggest classes to be merged or further differentiated. This paper focuses primarily on the discovery of multi-level knowledge. The goal of KB reorganisation is discussed briefly in Section 5. Previous work on reorganisation is described in [23] [28].
3 Introducing Formal Concept Analysis
Formal Concept Analysis is a theory of data analysis and knowledge processing based on the ideas of a formal context and formal concepts. It provides methods to visualize data in the form of concept lattices. A formal context κ := (G, M, I) consists of a set of formal objects (G), a set of formal attributes (M) and a binary relation (I) between G and M; (g, m) ∈ I means that object g ∈ G is in relation I with attribute m ∈ M. A formal context is usually represented by a cross table, as shown in Fig. 3. In our use of FCA we treat each rule as an object comprising a set of attributes based on the rule conditions. Fig. 2 shows the flat rules of the Glucose KB given in Fig. 1. They have been flattened using the process described in Section 4.1. In the process, rules 4 and 8 have been replaced by rules 16 and 17. Conclusion CL2 is no longer given as it is unconditionally overridden by CL4 in
rules 20 and 21. The default rule, Rule 2, which concludes CL1, has not been included as it is not considered potentially interesting. In other propositional knowledge representations that do not use exceptions the rules can be mapped directly to the formal context. Fig. 3 shows the formal context based on the flat rules in Fig. 2. The set of objects, G, are the rules = {10, 16, 17, 20, 21, 22, 23, 24} and the attributes, M, are the rule conditions = {NoComment=Y, HbA1c ≤ 5.9, HbA1c ≥ 6.0, Diabetic=Y}. For a set X ⊆ G of objects we define:

X′ := {m ∈ M | (g, m) ∈ I for all g ∈ X}

For a set Z ⊆ M of attributes we define:

Z′ := {g ∈ G | (g, m) ∈ I for all m ∈ Z}

Rule  Code    Conclusion  Conditions
10    %00008  CL3         (HbA1c ≤ 5.9)
16    %00011  STOP        (NoComment=Y); (Diabetic=Y); (HbA1c ≥ 6.0)
17    %00011  STOP        (NoComment=Y); (HbA1c ≥ 6.0)
20    %00012  CL4         (Diabetic=Y); (HbA1c ≥ 6.0)
21    %00012  CL4         (HbA1c ≥ 6.0)
22    %00013  NOT CL4     (NoComment=Y); (Diabetic=Y); (HbA1c ≥ 6.0)
23    %00013  NOT CL4     (NoComment=Y); (HbA1c ≥ 6.0)
24    %00014  NOT CL3     (NoComment=Y); (HbA1c ≤ 5.9)
Fig. 2. Extract of the flat rules from the Glucose KB. Rules 4 and 8 have been replaced by rules 16 and 17. NoComment=Y means that the comment is not to be reported to the client. Diabetic=Y means that the patient is a diabetic.
Rule No     NoComment=Y   HbA1c ≤ 5.9   HbA1c ≥ 6.0   Diabetic=Y
10-CL3                        ×
16-STOP         ×                            ×             ×
17-STOP         ×                            ×
20-CL4                                       ×             ×
21-CL4                                       ×
22-Not CL4      ×                            ×             ×
23-Not CL4      ×                            ×
24-Not CL3      ×             ×

Fig. 3. Crosstable for the Glucose KB in Fig. 1
A formal concept of the context (G, M, I) is a pair (X, Z) with X ⊆ G and Z ⊆ M such that X′ = Z and Z′ = X. X is called the extent and Z the intent of the concept (X, Z), i.e. Z consists of those attributes which apply to all objects in X, and all objects in X have each attribute in Z. Formal concepts for the formal context in Fig. 3 include:
1. ({20}, {HbA1c ≥ 6.0, Diabetic=Y})
2. ({16, 17, 22, 23, 24}, {NoComment=Y})
3. ({16, 22}, {HbA1c ≥ 6.0, Diabetic=Y, NoComment=Y})

Concept 1 is called an object concept as its extent consists of only one object. Similarly, concept 2 is an attribute concept, generated by a single attribute. Once we specify examples, FCA can be used to generate conceptual structures. The usual approach is to first construct a context table and then apply an algorithm to generate all concepts. The best-known algorithm for this purpose is Ganter's algorithm [10]. While we investigated some incremental algorithms (e.g. [16]), our results are based on Ganter's algorithm.
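To make the definitions concrete, the following minimal Python sketch enumerates all formal concepts of the Fig. 3 context by brute force over object sets (illustrative only; the algorithms discussed in Section 4.3 do this far more efficiently):

```python
# Brute-force enumeration of the formal concepts of the Fig. 3 context.
from itertools import combinations

G = ["10", "16", "17", "20", "21", "22", "23", "24"]
M = ["NoComment=Y", "HbA1c<=5.9", "HbA1c>=6.0", "Diabetic=Y"]
I = {  # incidence relation: rule -> its conditions (from Fig. 2)
    "10": {"HbA1c<=5.9"},
    "16": {"NoComment=Y", "HbA1c>=6.0", "Diabetic=Y"},
    "17": {"NoComment=Y", "HbA1c>=6.0"},
    "20": {"HbA1c>=6.0", "Diabetic=Y"},
    "21": {"HbA1c>=6.0"},
    "22": {"NoComment=Y", "HbA1c>=6.0", "Diabetic=Y"},
    "23": {"NoComment=Y", "HbA1c>=6.0"},
    "24": {"NoComment=Y", "HbA1c<=5.9"},
}

def objects_prime(X):  # X': attributes shared by every object in X
    return set(M) if not X else set.intersection(*(I[g] for g in X))

def attrs_prime(Z):    # Z': objects having every attribute in Z
    return {g for g in G if Z <= I[g]}

# (X, Z) is a formal concept iff X' = Z and Z' = X.
concepts = set()
for r in range(len(G) + 1):
    for X in combinations(G, r):
        Z = objects_prime(set(X))
        concepts.add((frozenset(attrs_prime(Z)), frozenset(Z)))
for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), sorted(intent))
```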
4 The Experiments
The input data to our experiments were 13 deployed KB developed using the commercial version of MCRDR from the domain of chemical pathology. We had a number of KB for different subdomains such as glucose, biochemistry, haematology, and microbiology. The KB ranged in size from 3 rules to 319 rules. The number of attributes ranged from 8 to 182. The largest RDR KB currently has over 7,000 rules. We did not have access to this KB in this study but will apply our approach to it when it becomes available. Other related studies [23] have conducted experiments that made use of the values in the cornerstone cases. However, cases were not made available for these experiments. Scalability is a concern, particularly when it comes to the generation, display and navigation of large graphs. The handling of a small number of concepts is much simpler than the handling of many concepts. For example, to find interesting concepts when we have a small lattice the following procedure can be followed. First, remove the MCRDR structure by flattening the rules and remove unconditional stopping rules (this is Step 1, described next). The diagram after Step 1 is shown in Fig. 4. From this diagram we can see a pocket of potentially interesting concepts (concept numbers 3–10 and 21) where certain concepts are shared. This subset was shown in the partial MCRDR in Fig. 1 and is used in the next section to show how we identified higher-level concepts as well as restructured the KB. This interesting pocket is easy to identify in the small lattice in Fig. 4. However, trying to identify interesting concepts through the lattice for larger KB is difficult due to the problems of displaying and navigating around large graphs. Instead of relying on visual identification we can achieve the same results by following the process outlined below. Generating multi-level knowledge involves the following four-step process, as depicted in Fig. 5:
Step 1: Parse the KB file and convert the decision list structure into a binary decision table (i.e. a crosstable or formal context). Since initial experiments produced too many meaningless concepts, we have included pruning rules in this step.
Step 2: Prepare a formal context with rules as objects and the conditions on these rules as attributes.
Step 3: Generate concepts using the algorithm in Section 4.3.
Fig. 4. Concept lattice for the glucose domain after stopping rules have been removed
Step 4: Prune uninteresting concepts and output the remainder in a suitable format for the expert to analyze.
Fig. 5. The Knowledge Generation Process
4.1 Step 1: Parsing
There are two objectives of parsing:
• To remove the exception structure of the RDR tree so that we can generate a crosstable for FCA.
• To prune unnecessary rules so that the number of concepts generated is reduced.
We introduce the following grammar used in our parsing step. Each rule starts with the keyword RuleID followed by the rule number and ends with the keyword rule. The syntax is:

RuleID [Rule Number]
remove Parent ID / add Conclusion / modify [Parent ID] Conclusion
(condition Condition)*
rule

where * means that we can have zero or more conditions, each on a new line. The algorithm we used for parsing the file is given in Fig. 6.
The parsing rules we used for processing the data had to consider the type of rule (modify, add or remove) and whether the rule included conditions (conditional or unconditional). The combination of these two characteristics resulted in the following set of rules:
• Unconditional modify - use the conclusion from this node and pick up all conditions from all parents; delete the parent.
• Conditional modify - use the conclusion and conditions from this node and pick up all conditions from all parents; no change to the parent.
• Unconditional add - this rule will always fire; typically used for a default head rule.
• Conditional add - use the conclusion and conditions from this node. No parents exist.
• Unconditional remove - flag the parent rule as deleted so that it is not included in concept generation.
• Conditional remove - use the negation of the parent's conclusion; pick up conditions from this node and from all predecessors (parents).
Remove rules are known as stopping rules, as shown in Fig. 1. Stopping rules have a NULL conclusion. RDR KB tend to have a large number of stopping rules. This does not have a noticeable effect on inferencing, but it does impact the usefulness of some explanations (rule traces). Stopping rules also tend to generate many (probably uninteresting) concepts. Handling of stopping rules is problematic as the semantics are unclear: in some cases a NULL conclusion means that the rule was an error, while in other cases it implies that if we have the set of conditions on that pathway then we do not have the conclusion of the parent. Since there is no encoding in the KB to distinguish interesting from uninteresting stopping rules, we decided to discard all unconditional stopping rules. Conditional stopping rules took on the negated conclusion of the parent. By the end of this phase we have a list of rules, each with an associated set of attributes and a conclusion. Next we prepare the formal context.
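A minimal sketch of these six flattening rules, assuming a simple rule-node structure (all names here are ours and illustrative, not the actual parser):

```python
# Minimal sketch of the six flattening rules described above.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Rule:
    rid: int
    kind: str                      # "add", "modify" or "remove"
    conclusion: Optional[str]      # None encodes a stopping (NULL) rule
    conditions: List[str] = field(default_factory=list)
    parent: Optional["Rule"] = None
    deleted: bool = False

def inherited_conditions(rule):
    """Conditions picked up from all predecessors (parents)."""
    conds, node = [], rule.parent
    while node is not None:
        conds = node.conditions + conds
        node = node.parent
    return conds

def flatten(rule):
    """Apply the parsing rules; return (conditions, conclusion) or None."""
    if rule.kind == "modify" and not rule.conditions:   # unconditional modify
        rule.parent.deleted = True
        return inherited_conditions(rule), rule.conclusion
    if rule.kind == "modify":                           # conditional modify
        return inherited_conditions(rule) + rule.conditions, rule.conclusion
    if rule.kind == "add":                              # (un)conditional add
        return rule.conditions, rule.conclusion
    if not rule.conditions:                             # unconditional remove
        rule.parent.deleted = True                      # discard entirely
        return None
    # conditional remove: negate the parent's conclusion
    return (inherited_conditions(rule) + rule.conditions,
            "NOT " + rule.parent.conclusion)
```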
4.2 Step 2: Generating a Formal Context
A simple technique for creating a crosstable from MCRDR KB is to treat each rule as an object and the rule conditions as attributes. Each condition is actually an attribute-value pair, but our naive approach produces similar results to Ganter's approach to conceptual scaling and the handling of multi-valued contexts [8][9]. Each row in the crosstable corresponds to a rule in the MCRDR KB. Each object/row is identified by its rule number and the conclusion or conclusion code. Each column in the crosstable, except for the first column which contains the object id, corresponds to a rule condition. We put a cross where a particular rule has the condition; otherwise we leave the cell empty. The context table is then used to generate concepts, as in the following sketch.
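A minimal sketch of this step, treating the flat rules as objects and their conditions as attributes (the three rules shown are an illustrative subset):

```python
# Sketch of Step 2: build a crosstable from flat rules.
flat_rules = {            # object id (rule number + conclusion) -> conditions
    "10-CL3":    ["HbA1c<=5.9"],
    "21-CL4":    ["HbA1c>=6.0"],
    "24-NotCL3": ["NoComment=Y", "HbA1c<=5.9"],
}

attributes = sorted({c for conds in flat_rules.values() for c in conds})
print("Rule No".ljust(12), *attributes)
for rid, conds in flat_rules.items():
    marks = ["x" if a in conds else "." for a in attributes]
    print(rid.ljust(12), *marks)
```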
4.3 Step 3: Generating Concepts
From Section 3 the generation of new concepts can be simply seen as the result of finding the intersections of objects and their attributes. In these experiments we used Ganter’s algorithm NEXTCONCEPT [10] to generate concepts.
Fig. 6. Parsing Algorithm used in Step 1
It computes all concepts L(G, M, I) from the context (G, M, I) in lectical order in time O(|G|² · |M| · |L(G, M, I)|) [16]. We filter concepts as we generate them; however, we could also leave filtering as a separate final step, as outlined below.
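A compact sketch of a NextClosure-style enumeration, as we understand Ganter's algorithm, run on a tiny illustrative context:

```python
# NextClosure-style enumeration of all intents in lectical order.
M = ["NoComment=Y", "HbA1c<=5.9", "HbA1c>=6.0", "Diabetic=Y"]
I = {  # small illustrative context: rule -> conditions
    "10": {"HbA1c<=5.9"},
    "17": {"NoComment=Y", "HbA1c>=6.0"},
    "21": {"HbA1c>=6.0"},
    "24": {"NoComment=Y", "HbA1c<=5.9"},
}
G = list(I)

def closure(B):
    """B'' : apply both derivation operators to an attribute set."""
    X = [g for g in G if B <= I[g]]
    return frozenset(M) if not X else frozenset(set.intersection(*(I[g] for g in X)))

def next_closure(A):
    """Return the lectically next intent after A, or None when finished."""
    A = set(A)
    for i in range(len(M) - 1, -1, -1):
        if M[i] in A:
            A.discard(M[i])
        else:
            B = closure(A | {M[i]})
            if not any(M.index(n) < i for n in B - A):
                return B
    return None

intent = closure(set())
while intent is not None:
    print(sorted(intent))
    intent = next_closure(intent)
```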
4.4 Step 4: Filtering Concepts
Two criteria are employed to filter concepts:
• First, we do not want single-extent concepts (object concepts), because they only tell us that one rule has these attributes, something the expert knows already.
• Since we are interested in cross-comparisons of rules in different branches, we prune concepts that consist of rules in a parent-child relation, i.e. belonging to the same branch.
Finally, we output the concepts along with the conclusions of the rules so that the expert can analyze them.
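A minimal sketch of these two filters, assuming each rule's parent in the original MCRDR structure is known (illustrative):

```python
# Sketch of Step 4: keep a concept only if it passes both filters.
def is_interesting(extent, parent_of):
    """extent: set of rule ids; parent_of: rule id -> parent rule id."""
    if len(extent) <= 1:            # filter 1: drop object concepts
        return False
    def on_same_branch(a, b):       # is b an ancestor of (or equal to) a?
        while a is not None:
            if a == b:
                return True
            a = parent_of.get(a)
        return False
    rules = list(extent)            # filter 2: drop single-branch concepts
    return not all(on_same_branch(a, b) or on_same_branch(b, a)
                   for a in rules for b in rules if a != b)
```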
5 Results
Results for five of the KB are given in Table 1. While we conducted our experiments on all 13 KB, for space we have randomly selected five of different sizes, each covering different subdomains within chemical pathology. The first column shows the KB Id. The second column shows the original number of rules in the KB. The third column shows the number of rules output from Step 1 and the fourth the number of concepts generated in Step 3. The reduction from Step 1 is due to the removal of stopping rules. Step 2 is simply a formatting stage. The fifth column shows how many concepts from Step 3 passed through the filter for human review in Step 4. The sixth column shows the number of concepts that meet our "interesting" concept criteria of minimal antecedent and maximal consequent to convey equivalent information. Finally, we show the percentage of interesting concepts to original concepts from Step 3. We can see a substantial reduction in the original size of the KB. A more compact and restructured KB can be produced from the concept output of Step 4. The interesting concepts can be used as the initial concepts, to further encourage an optimal KB organization. The smaller set of interesting concepts can be explored using a concept lattice. Without pruning and identification of the interesting concepts, the lattices were too large to be comprehensible. In Fig. 7 we see the remaining 8 concepts after pruning the GA Glucose KB. An introduction to how lattices are generated and used is given in [24]. The filtered concepts from the Glucose KB are shown in the concept lattice in Fig. 7. To read the lattice, the attributes and objects belonging to a concept are reached by ascending and descending paths, respectively. Thus we can see that concepts 2, 3, 4 and 8 share the attribute NoComment=Y (i.e. don't report this comment) and therefore result in NULL conclusions. The graph shows in concept 5 that only once HbA1c is ≥ 6.0 is the mean blood glucose calculated from the HbA1c result. The %00012 conclusion notes that the glucose measure has not
Table 1. Results after each step

KB    #Rules Input  #Rules from Step 1  #Concepts Step 3  #Concepts Step 4  #Interesting concepts  %pruned
ANA   107           55                  90                42                19                     0.79
GA    25            20                  21                8                 3                      0.85
GTT   19            14                  23                13                7                      0.70
GLU   319           154                 616               416               310                    0.49
IRON  90            59                  147               84                57                     0.61
Fig. 7. Concept lattice for selected conclusions in the glucose domain
CONCLUSION CODES
%00008 (CL3) = Mean blood glucose is not calculated when HbA1c is below 6.0
%00011 (STOP) = NOT (The mean blood glucose has been calculated from the HbA1c result. Glucose has not been measured on this specimen)
%00012 (CL4) = The mean blood glucose has been calculated from the HbA1c result. It is an index of the average blood glucose. Glucose has not been measured on this specimen.
%00013 (NOT CL4) = NOT (%00012)
%00014 (NOT CL3) = NOT (%00008)
been measured, which may suggest further testing to the referring doctor. From the graph it appears that we could greatly prune our concepts by disregarding rules with the NoComment=Y condition. It also appears that rule 20 at concept 6 could be redundant as it is subsumed by rule 21 in concept 5. By filtering
Fig. 8. Lattice after restructure
stopping rules and rules subsumed by a parent in the original MCRDR KB, we removed the repetition and redundancy found in Figs. 1 and 7. The reorganised rules are shown in Fig. 8. Conclusion CL2 has been replaced by CL4. The 10 rules in Fig. 1 (not including the root/default rule) have been replaced with 3 rules. Fig. 8 is obviously less complex than the original structure and even simplifies the output from our process shown in Fig. 7. However, for the larger KB our process still produced too many concepts. Table 1 does not reveal a direct relationship between the percentage pruned and the size of the KB. To understand what might affect the final number of concepts we considered the different types of rules that we found in each KB. In Fig. 9 some statistics of the GA, GLU and IRON KBs (3 KB of different sizes chosen at random) are given. All data have been converted into a percentage of the original number of rules input as given in Table 1. From the chart we compare the effect of unconditional removes or modifies on the number of rules after parsing. Since there are so few in each KB it is not possible to draw any conclusions; if there were many, one would expect a corresponding decrease in the number of rules after parsing. There does appear to be a relationship between the number of conditional removes and modifies and a decrease in the number of rules after parsing, i.e. more conditional removes and/or modifies result in fewer rules output. All KB are smaller than their original size after parsing. One (1) on the y-axis signifies 100 percent of the original number of rules. From this small sample it appears that, despite the reduction in size from parsing, the number of concepts generated from the parsed rules is greater for KB with more conditional removes and modifies. Our strategy to ignore parent-child concepts does not seem to have been terribly useful since there were only 0, 2 and 1 such concepts in the GA, GLU, and IRON KBs, respectively. The relationship between the number of object concepts (one object shared by many attributes) and the final number of concepts output is not clear, but since these are primitive concepts
Fig. 9. Column graph showing the relationship between the number of rules input, various characteristics of the input data and the final number of concepts output
it is obvious they are not new or interesting. Our decision to prune attribute concepts (i.e. only one attribute shared by many objects), however, does seem to substantially reduce the number of filtered concepts. The sample confirms some of our intuitions but also compels us to consider further pruning strategies. We particularly need to combine similar strings or attribute values. We plan to use clustering techniques for this and will use our nearest neighbour algorithm in conjunction with FCA. Initial work which manually mapped similar attributes resulted in even more rules sharing conditions (as expected) but also in many more concepts (not what we wanted). Our current focus is on getting access to usage statistics so that we can apply confidence and support measures to further prune concepts. Once we have a small number of concepts, say 5–20 per KB, we will meet with domain experts who will assess whether the concepts are in fact interesting.
6 Related Work and Conclusion
Like most techniques for multi-level mining of rules (e.g. [6]) our approach using FCA involves pruning or filtering of rules. There is, however, no transformation resulting in loss of information in those rules that are kept. Essentially, filtering implements the semantics of the KB given by the domain expert. That is, only objects (rules) identified for removal are ignored in the generation of the formal context and formal concepts. Our approach is like the use of restricted contexts [8]. Since there is simply the creation of additional concepts without the alteration of the original primitive concepts there is high fidelity between the operational or performance knowledge and the explanation knowledge. This also means there is no problem concerning ontological commitment. Even in the final step where we prune the output for human review by taking concepts higher in
the lattice, the lower-level concepts are merely hidden from view and may be included if desired. Before we adopted FCA for higher-concept generation we explored the use of Rough Set Theory [20], since that technique also does not rely on frequencies of cases. In fact, duplicate examples are discarded as a first step in the development of reducts (rules). However, we found that since rules have already discarded most (if not all) irrelevant attributes, we lost too much information and classification accuracy when the rules output were run against the test sets [21] [23]. Like [6], the concept hierarchies we produce are dynamically adjusted. Lattices can be generated for each query. Alternatively, algorithms for the incremental update of lattices can be used which modify only those parts of the concept lattice affected by the new query or changes in the KB itself. Using FCA, higher-level knowledge is not only discovered but also structured. The structure contains valuable knowledge which is often not known or easily articulated by experts but which adds clarity and improves understanding. The abstraction hierarchy generated by FCA can be viewed as an ontology in that it represents a shared conceptualization [12] of a particular domain. Our work which combines multiple knowledge bases to develop a shared knowledge repository [22] fits particularly well with the notion of a shared conceptualization, in addition to supporting queries across concept hierarchies. [18] has surveyed a number of approaches to learning ontologies. He divides the approaches broadly into two main categories depending on whether they involve machine learning (ML) or manual construction. Omelayenko decides that the combination of both techniques will offer the best results by combining the speed of machine learning with the accuracy of a human. We are inclined to agree with his conclusion. [14] offer an algorithm that uses two parameters, support and confidence, for a rule to semi-automatically develop an ontology from text. As Omelayenko points out, ML results in flat homogeneous structures, often in propositional form. He cites work from the RDR group [27], which was seeded by our FCA work, which learns different relationships between classifications (subsumption in marginal cases, mutual exclusivity and similarity) and then develops a taxonomic hierarchy between the classes. Our goal to develop an approach that supports multiple levels of abstraction is shared by the work of [29]. They are looking at the seamless integration of knowledge bases and databases. The ParkaDB approach supports the development of high and multi-level classification rules. An ontology (in the form of a concept hierarchy) together with frequency counts are used to determine which concepts should be included in a rule. The ontology provides the background knowledge to guide the discovery process. A number of similar approaches that use concept taxonomies (e.g. [26]) have also been developed, but these are based on traditional relational database technology and require transformation to a generalized table as part of the preprocessing, which can result in over-generalisation. ParkaDB does not require such preprocessing but supports dynamic generalization of data without over-generalisation. Our approach is different to all of these in that we do not use a concept hierarchy to develop rules. Instead we use rules to develop a concept hierarchy which may lead to higher-level rules being uncovered.
Thus we avoid the substantial effort required in first developing the hierarchies and the difficult task of validating them. Given
any string or substring and using set intersection, term subsumption and lectical ordering, we are able to find all combinations using that string. Some concepts will include attributes and/or objects at different levels of abstraction; for example, the objects publication and book will appear in individual and shared concepts. The combination of multi-level concepts supports queries at multiple levels. Rules are sometimes used as background knowledge [1] [30] to guide the discovery of knowledge from data. Some approaches use templates to guide the mining process. [25] specifies abstract forms of rules which are used in metaqueries. To some extent the RDR approach is concurrently data and knowledge based in that it combines the use of case and rule-based reasoning. A key difference between our approach and typical KDD is that we do not rely on large amounts of data or large numbers of cases for rule development. Where data is plentiful, ML algorithms will develop rules more quickly and possibly with higher accuracy than manually developed KB. Even better results will be achieved if the chosen set of cases is well-structured and representative of stereotypical cases [7]. However, where large numbers of classified cases do not exist, or where we want to provide an incremental way of developing and maintaining rules, the technique offered by RDR works very well in building compact KB that mature quickly (i.e. they cover the domain with small error rates). In support of this claim, the first commercial system we developed went into production with only 200 rules and grew online over a four-year period to over 2000 rules [4]. The knowledge captured first covered one or two pathology domains, but the number of domains covered increased over that time. Mining rules is offered in this paper as a way of extracting added value from an organization's knowledge sources. The complete approach using RDR and FCA could be applied where limited and/or unclassified cases exist and where incremental discovery is appropriate. Where large amounts of data exist, machine learning or data mining techniques can be applied to develop a first round of primitive rules which are then used as input by FCA to develop an abstraction hierarchy from which multi-level rules can be extracted. Alternatively, FCA can be used directly on the cases to discover implications. However, KA using FCA is not incremental, and the review of implications and offering of counterexamples used in the approach may be more demanding on the user than the RDR technique [32]. Whatever the starting point, FCA may be used to explore knowledge at and across different levels of abstraction to provide a much richer picture of the knowledge and understanding of the domain.
Acknowledgements. The authors would like to thank Pacific Knowledge Systems (PKS) for access to the research knowledge bases. Particular thanks to Les Lazarus and Michael Harries from PKS for their time and assistance.
References

1. Cai, Y., Cercone, N. and Han, J. (1991) Attribute-Oriented Induction in Relational Databases. In Knowledge Discovery in Databases, AAAI/MIT Press.
2. Compton, P. and Jansen, R. (1990) A Philosophical Basis for Knowledge Acquisition. Knowledge Acquisition 2:241–257.
3. Deogun, J., Raghavan, V. and Sever, H. (1998) Association Mining and Formal Concept Analysis. In Proceedings of the Sixth International Workshop on Rough Sets, Data Mining and Granular Computing, Vol. 1: 335–338.
4. Edwards, G., Compton, P., Malor, R., Srinivasan, A. and Lazarus, L. (1993) PEIRS: A Pathologist Maintained Expert System for the Interpretation of Chemical Pathology Reports. Pathology 25: 27–34.
5. Edwards, G., Kang, B., Preston, P. and Compton, P. (1995) Prudent Expert Systems with Credentials: Managing the Expertise of Decision Support Systems. Int. Journal of Biomedical Computing 40:125–132.
6. Fortin, S., Liu, L. and Goebel, R. (1996) Multi-Level Association Rule Mining: An Object-Oriented Approach Based on Dynamic Hierarchies. Technical Report TR 96–15, Dept. of Computing Science, University of Alberta.
7. Gaines, B. R. (1989) An Ounce of Knowledge is Worth a Ton of Data: Quantitative Studies of the Trade Off Between Expertise and Data Based on Statistically Well-Founded Empirical Induction. Proceedings of the 6th International Workshop on Machine Learning, Morgan Kaufmann, San Mateo, California, 156–159.
8. Ganter, B. (1988) Composition and Decomposition of Data. In H. Bock (ed.), Classification and Related Methods of Data Analysis, North-Holland, Amsterdam, 561–566.
9. Ganter, B. and Wille, R. (1989) Conceptual Scaling. In F. Roberts (ed.), Applications of Combinatorics and Graph Theory to the Biological Sciences, Springer, New York, 139–167.
10. Ganter, B. and Wille, R. (1999) Formal Concept Analysis: Mathematical Foundations. Springer, Berlin.
11. Graaf, de J. M., Kosters, W. A. and Witteman, J. J. (2000) Interesting Association Rules in Multiple Taxonomies. Proceedings of the 12th Belgium-Netherlands Artificial Intelligence Conference.
12. Gruber, T. R. (1993) Toward Principles for the Design of Ontologies Used for Knowledge Sharing. Knowledge Systems Laboratory, Stanford University.
13. Kang, B., Compton, P. and Preston, P. (1995) Multiple Classification Ripple Down Rules: Evaluation and Possibilities. Proc. 9th Banff Knowledge Acquisition for Knowledge Based Systems Workshop, Banff, Feb 26–March 3 1995, Vol. 1: 17.1–17.20.
14. Kietz, J.-U., Maedche, A. and Volz, R. (2000) A Method for Semi-Automatic Ontology Acquisition from a Corporate Intranet. Workshop "Ontologies and Text", co-located with EKAW'2000, Juan-les-Pins, French Riviera, October 2–6, 2000.
15. Li, X. (1991) What's So Bad about Rule-Based Programming? IEEE Software, September 1991, 103–105.
16. Lindig, C. (2000) Fast Concept Analysis. In G. Stumme (ed.) Working with Conceptual Structures: Contributions to ICCS'2000, Shaker-Verlag, Aachen, 2000, 152–161.
17. Mansuri, Y., Kim, J.G., Compton, P. and Sammut, C. (1991) A Comparison of a Manual Knowledge Acquisition Method and an Inductive Learning Method. Australian Workshop on Knowledge Acquisition for Knowledge Based Systems, Pokolbin (1991), 114–132.
18. Omelayenko, B. (2001) Learning of Ontologies for the Web: the Analysis of Existent Approaches. In: Proceedings of the International Workshop on Web Dynamics, held in conj. with the 8th International Conference on Database Theory (ICDT'01), London, UK, 3 January 2001.
19. Pasquier, N. (2000) Mining Association Rules using Formal Concept Analysis. In Stumme, G. (ed.) Working with Conceptual Structures, Proceedings of ICCS'2000, Springer, 259–264.
20. Pawlak, Z. (1991) Rough Sets: Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers, Dordrecht.
21. Richards, D. (1998) Using AI to Resolve the Republican Debate. In Slaney, J. (ed.) Poster Proceedings of the Eleventh Australian Joint Artificial Intelligence Conf. AI'98, 13–17 July 1998, Griffith Uni., Nathan Campus, Brisbane, Australia, 121–133.
22. Richards, D. (2000) Reconciling Conflicting Sources of Expertise: A Framework and Illustration. In Compton, P., Hoffmann, A., Motoda, H. and Yamaguchi, T. (eds) Proceedings of the 6th Pacific Knowledge Acquisition Workshop, Sydney, December 11–13, 2000, 275–296.
23. Richards, D., Chellen, V. and Compton, P. (1996) The Reuse of Ripple Down Rule Knowledge Bases: Using Machine Learning to Remove Repetition. In Compton, P., Mizoguchi, R., Motoda, H. and Menzies, T. (eds) Proceedings of the Pacific Knowledge Acquisition Workshop PKAW'96, October 23–25 1996, Coogee, Australia, 293–312.
24. Richards, D. and Compton, P. (1997) Combining Formal Concept Analysis and Ripple Down Rules to Support the Reuse of Knowledge. Proceedings of Software Engineering Knowledge Engineering SEKE'97, Madrid, 18–20 June 1997, 177–184.
25. Shen, W., Ong, K., Mitbander, B. and Zaniolo, C. (1995) Metaqueries for Data Mining. In Fayyad, U., Piatetsky-Shapiro, G., Smyth, P. and Uthurusamy, R. (eds) Advances in Knowledge Discovery and Data Mining, AAAI/MIT Press.
26. Singh, L., Scheuermann, P. and Chen, B. (1997) Generating Association Rules from Semi-Structured Documents Using an Extended Concept Hierarchy. In Proceedings of the 6th International Conference on Information and Knowledge Management, Las Vegas, Nevada, USA, 1997, 193–200.
27. Suryanto, H. and Compton, P. (2000) Learning Classification Taxonomies from a Classification Knowledge Based System. In Staab, S., Maedche, A., Nedellec, C. and Wiemer-Hastings, P. (eds) Proceedings of the Workshop on Ontology Learning, 14th European Conference on Artificial Intelligence (ECAI'00), August 20–25, Berlin.
28. Suryanto, H., Richards, D. and Compton, P. (1999) The Automatic Compression of Multiple Classification Ripple Down Rules. The Third International Conference on Knowledge Based Intelligent Information Engineering Systems (KES'99), August 31st–September 1st 1999, Adelaide.
29. Taylor, M., Stoffel, K. and Hendler, J. (1997) Ontology-based Induction of High Level Classification Rules. In SIGMOD Data Mining and Knowledge Discovery Workshop Proceedings, Tucson, Arizona, 1997.
30. Walker, A. (1980) On Retrieval from a Small Version of a Large Database. In VLDB Conference Proceedings.
31. Wille, R. (1982) Restructuring Lattice Theory: An Approach Based on Hierarchies of Concepts. In I. Rival (ed.), Ordered Sets, Reidel, Dordrecht–Boston, 445–470.
32. Wille, R. (1989) Knowledge Acquisition by Methods of Formal Concept Analysis. In E. Diday (ed.) Data Analysis, Learning Symbolic and Numeric Knowledge, Nova Science Pub., New York, 365–380.
33. Wille, R. (1992) Concept Lattices and Conceptual Knowledge Systems. Computers Math. Applic. (23) 6–9: 493–515.
Meta-classification: Combining Multimodal Classifiers

Wei-Hao Lin and Alexander Hauptmann
Language Technologies Institute, School of Computer Science, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213, U.S.A.
{whlin, alex}@cs.cmu.edu

Abstract. Combining multiple classifiers is of particular interest in multimedia applications. Each modality in multimedia data can be analyzed individually, and combining multiple pieces of evidence can usually improve classification accuracy. However, most combination strategies used in previous studies implement some ad hoc designs, and ignore the varying "expertise" of specialized individual modality classifiers in recognizing a category under particular circumstances. In this paper we present a combination framework called "meta-classification", which models the problem of combining classifiers as a classification problem itself. We apply the technique in a wearable "experience collection" system, which unobtrusively records the wearer's conversation, recognizes the face of the dialogue partner, and remembers his/her voice. When the system sees the same person's face or hears the same voice, it can then use a summary of the last conversation to remind the wearer. To identify a person correctly from a mixture of audio and video streams, classification judgments from multiple modalities must be effectively combined. Experimental results show that combining different face recognizers and speaker identification aspects using the meta-classification strategy can dramatically improve classification accuracy, and is more effective than a fixed probability-based strategy. Other work in labeling weather news broadcasts showed that meta-classification is a general framework that can be applied, without much modification, to any application that needs to combine multiple classifiers.
1 Introduction

Classification is an important form of knowledge extraction, and can help make key decisions. Multimedia classification is different from single-mode classification like text document classification, because multimedia collections are composed of visual, aural, textual and other aspects of the data. To be successful, one needs to integrate techniques from several fields such as computer vision, pattern recognition, speech recognition, and machine learning. Visual classifiers for fingerprints [8], face recognition [17] [18] and iris matching [14], and audio classifiers for person identification [20], have been successfully applied to many domains like security surveillance and smart environments [16]. Our research on multimedia classifier combinations is situated in the broader context of an ambitious research project, "Capturing and Remembering Human Experiences". This larger project aims, among other things, to develop a system that allows
people to capture and later search through a complete record of their personal experiences. It assumes that within ten years technology will be in place for creating a continuously recorded, digital, high-fidelity record of one's whole life in video form [5]. Wearable, personal digital memory system units will record audio, video, location and electronic communications. This digital human memory system seeks to fulfill the vision of Vannevar Bush's personal Memex [1], capturing and remembering whatever is seen and heard, and quickly returning any item on request. While our vision outlines a research program expected to last for many years, we have reduced certain aspects of this vision into an operational personal memory prototype that remembers the faces and voices associated with a conversation and can retrieve snippets of that conversation when confronted with the same face and voice. The system currently combines face detection and recognition with speaker identification, audio recording and analysis. The face recognition and speaker identification enable the storing of the audio conversation associated with the face and the voice. Audio analysis and speech recognition compact the conversation, retrieving only important phrases. All of this happens unobtrusively, somewhat like an intelligent assistant who whispers relevant personal background information to you when you meet someone you do not quite remember. One key component in the aforementioned prototype is the combination of multimodal classifiers such that the system can identify the unknown person as soon as a voice and/or a face are recognized. Combining multiple classifiers can improve classification accuracy when the classifiers are not random guessers and are complementary to each other [6]. In tasks like person identification, classifiers built on different modalities use distinct features to classify samples, and thus they seldom make correlated mistakes. In addition to feature extraction and classifier design, the strategy of combining evidence from an ensemble of classifiers appears to benefit classification accuracy. One straightforward way to exploit multimodalities is to concatenate features from each modality to make a single, long feature vector. This approach is doomed to suffer from the "curse of dimensionality," where there are too many dimensions to search through or to exploit effectively through machine learning approaches. Concatenating the multidimensional features simply does not scale up. Instead of combining features, another approach is to build a classifier on each modality independently, and then to combine their results to make the final decision. Using an ensemble of multimedia classifiers has been explored in identity verification studies [4][11], which demonstrated the effectiveness of linearly combining three multimodal classifiers. Majority voting, where each classifier casts one vote towards the overall outcome, and linear interpolation [7] are among the most common ways of combining classifiers. However, these methods are considered ad hoc, and totally ignore the relative quality and expertise among classifiers. The other problem is that the weights of the classifiers are assigned either equally (summing all probabilities) or empirically (taking the maximal or minimal probability). To address these problems, we propose a new combination framework called meta-classification, which makes the final decision by re-classifying the judgments from an ensemble of classifiers.
Experimental results presented later show that meta-classification is more effective than probability-based combination strategies. This paper is organized as follows: the basic system for collecting and retrieving digital human memory in the context of our research project for remembering
human experience is described in Section 2. The multimedia classifiers are detailed in Section 3. Section 4 explains meta-classification, which is our proposed new combination strategy. Experimental results are given in Section 5 and conclusions presented in Section 6.
Fig. 1. The wearable experience collection and retrieval unit, consisting of camera (outlined in the bold square), omni-directional and lapel microphones (outlined in the light square) and a laptop computer embedded in a vest. Earphones allow feedback from the computer. User control is managed through spoken commands or a wearable mouse.
2 Personal Memory System

We have designed and implemented a preliminary prototype (as shown in Figure 1) of a personal memory system that functions like the previously described intelligent assistant. There are currently two modes of system operation: personal conversation collection and remembering the previous conversation. The basic hardware components of the system consist of a wearable miniature digital video camera, two microphones (one cardioid close-talking lapel microphone and one omni-directional), and headphones to receive user output, all attached to a laptop-type computer for data processing. The software modules in the system include a speaker identification module, a speech recognition module, a face detection and recognition module, a database, and an interface control manager. The basic architecture
of the system, as outlined in Figure 2, shows the interface control manager selecting between the acquisition and the retrieval module as needed, according to a command issued by the wearer either through speech or a wearable mouse [24].
Fig. 2. The basic architecture of the Personal Memory System
2.1 Personal Memory Collection

The system works by detecting the face of the person the user is talking to in the video, and listening to the conversation from both the close-talking (wearer) audio track and the omni-directional (dialogue partner) audio track. An overview of the system for memory collection is shown in Figure 3.
Fig. 3. Process for Personal Memory Collection
The close-talking audio is processed by a speech recognition system to produce a rough, approximate transcript. The omni-directional audio stream is processed through a speaker identification module. An encoded representation of the face and
the voice characteristics of the current dialog partner, and the raw audio of the current conversation are all stored in a database. The next time the system sees or hears the same person (by detecting a face/voice and matching it to the stored faces/voices in the database), it can retrieve and play back the audio from the last conversation. The audio is further processed through audio analysis for silence removal and emphasis detection, and can optionally be filtered based on the keywords in the speech recognition transcript. This data enables the system to efficiently retrieve and replay only the person names and the major issues that were mentioned in the conversation.

2.2 Personal Memory Retrieval

In the retrieval (remembering) mode, the system immediately searches for a face in the video stream and performs speaker identification on the omni-directional audio stream. Once a face is detected, the face and speaker characteristics will be matched to the instances of faces and speaker characteristics stored in the memory database. The scores of the face and speaker matches are combined using the meta-classification strategy. When a match with a sufficiently high score is found, the system will return a brief audio summary of the last conversation with the person. Figure 4 shows the process of personal memory retrieval.
Fig. 4. Processing for Personal Memory Retrieval
3 The Multimedia Classifiers

3.1 Face Recognition

Extensive work in face detection has been done at CMU by Rowley [17][18][19]. This approach modeled the statistics of appearance implicitly using an artificial neural network. Currently we use Schneiderman's approach [22], which applies statistical
modeling to capture the many variations in facial appearance. We learn the statistics of both object appearance and "non-object" appearance using a product of histograms. Each histogram represents the joint statistics of a subset of wavelet coefficients and their position on the object. Our approach is to use many such histograms representing a wide variety of visual attributes. The detector then applies a set of models that each describes the statistical behavior of a group of wavelet coefficients. We apply the 'eigenface' [22] approach to recognize faces. There have also been several commercial systems offering face detection and identification, such as Visionics [25]. In our implementation we have been using both the Visionics FaceIt toolkit for face detection and matching, as well as the Schneiderman face detector and 'eigenfaces' for matching similar faces. Eigenfaces treat a face image as a two-dimensional N by N array of intensity values. From a set of training images, a set of eigenvectors can be derived that constitute the eigenfaces. Every unknown new face is mapped into this eigenvector subspace and we calculate the distance between faces through corresponding points within the subspace [27].

3.2 Speaker Identification

Speaker identification is accomplished through our own implementation of Gaussian Mixture Models (GMM) as described by Gish [20]. GMM have been proven effective in speaker identification tasks in a large database of over 2000 speakers [28][20]. Prior to classification, Mel-Frequency Cepstral Coefficient (MFCC) features are extracted from the audio channel. For training, regions of audio are labeled with a speaker code, and then modeled in their respective class (speaker). Once training models have been generated, the system must classify novel audio sections. The process begins by segmenting the audio channel into one-second, overlapping regions and computing the GMM. The resulting model is compared to existing trained models using a maximum likelihood distance function. Based on the comparisons to each class, a decision is made as to the classification of the data into speech, noise, a known speaker, etc. The speaker identification system also uses the fundamental pitch frequency to eliminate false alarms. Generally, about four seconds of speech are required to achieve reliable speaker identification, under benign environmental conditions.
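A hedged sketch of this pipeline, with librosa and scikit-learn standing in for the authors' own implementation (both libraries are our assumptions), scoring each segment against per-speaker GMMs by maximum average log-likelihood:

```python
# Hedged sketch of GMM-based speaker identification over MFCC features.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(wav_path, sr=16000):
    y, sr = librosa.load(wav_path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T  # frames x coeffs

def train_speaker_models(labelled_audio, n_components=8):
    """labelled_audio: dict of speaker -> list of wav paths."""
    models = {}
    for speaker, paths in labelled_audio.items():
        feats = np.vstack([mfcc_features(p) for p in paths])
        models[speaker] = GaussianMixture(n_components).fit(feats)
    return models

def identify(segment_feats, models):
    """Classify one ~1-second segment by maximum average log-likelihood."""
    scores = {s: m.score(segment_feats) for s, m in models.items()}
    return max(scores, key=scores.get)
```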
4 Combining Multiple Classifiers

We combine classifiers in the hope that classification accuracy can be improved by taking multiple 'experts' into consideration. The other motivation is that instead of spending much time and effort building a highly accurate and specialized classifier, we would like to quickly build several weaker and more generic classifiers, and combine them using a combining strategy. In this section, we first review a probability-based framework for combining classifiers, followed by our proposed meta-classification approach.
4.1 Probability-Based Framework

Kittler et al. proposed a probability-based framework [11] to explain various combination strategies. Assume there are p classes {w1,…,wp} and k classifiers to be combined, and that the feature vector observed by the ith classifier is xi, i = 1,…,k. Without any further information, we select the class wj with the highest posterior probability given these features, i.e. P(wj | x1,…,xk). By Bayes' theorem, we have

P(wj | x1,…,xk) = P(wj) p(x1,…,xk | wj) / p(x1,…,xk)

If we assume the feature vector from each modality is conditionally independent given the class, the decision rule is Equation 1: classify (x1,…,xk) into category wj if

P(wj) ∏_{i=1}^{k} p(xi | wj) = max_{l=1,…,p} P(wl) ∏_{i=1}^{k} p(xi | wl)    (1)
In terms of the posterior probabilities from each classifier, i.e. P(wi | xi), Equation 1 can be rewritten as Equation 2. This decision rule is called the Product rule because the final decision is based on the product of the probabilities of all classifiers: classify (x1,…,xk) into category wj if

P(wj)^{−(k−1)} ∏_{i=1}^{k} P(wj | xi) = max_{l=1,…,p} P(wl)^{−(k−1)} ∏_{i=1}^{k} P(wl | xi)    (2)
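A minimal NumPy sketch of the Product rule of Equation 2, together with the Sum rule of Equation 4 derived just below (the array shapes here are illustrative):

```python
# Sketch of the Product rule (Eq. 2) and Sum rule (Eq. 4).
import numpy as np

def product_rule(posteriors, priors):
    """posteriors: k x p array of P(w_l | x_i); priors: length-p array."""
    k = posteriors.shape[0]
    scores = priors ** (1 - k) * posteriors.prod(axis=0)
    return int(np.argmax(scores))

def sum_rule(posteriors, priors):
    k = posteriors.shape[0]
    scores = (1 - k) * priors + posteriors.sum(axis=0)
    return int(np.argmax(scores))
```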
If we further assume equal prior class probabilities, the decision rule reduces to Equation 3: classify (x1,…,xk) into category wj if

∏_{i=1}^{k} P(wj | xi) = max_{l=1,…,p} ∏_{i=1}^{k} P(wl | xi)    (3)
If we further assume that the posterior probabilities differ little from the prior probabilities, i.e. very weak classifiers, we obtain the following decision rule by rewriting Equation 2 and ignoring second- and higher-order terms: classify (x1,…,xk) into category wj if

(1 − k) P(wj) + ∑_{i=1}^{k} P(wj | xi) = max_{l=1,…,p} [ (1 − k) P(wl) + ∑_{i=1}^{k} P(wl | xi) ]    (4)
Equation 4 is called the Sum rule because we sum up the posterior probabilities produced by each classifier. It has been shown to be more stable and effective than the Product rule [11] in spite of its strong assumption. Furthermore, we can approximate the Product or Sum rule by their upper or lower bounds, such as the minimum, mean, or median of the posterior probabilities. Therefore, most pre-determined, fixed combining strategies can be incorporated in this probability-based framework.

4.2 Meta-classification Framework

The main idea of meta-classification is to model the problem of combining classifiers as a classification problem itself, and then train a meta-classifier to map the judgment made
by each classifier for each class to the final decision. In this section, we first introduce how to synthesize asynchronous judgments from multimedia classifiers into a new feature vector, which is then fed into the meta-classifier. Then, we describe the method of training such a meta-classifier.

4.2.1 Synthesizing Asynchronous Classification Judgements

Multimedia classifiers often make judgments and produce classification output at different rates due to the asynchronous nature of multimedia. For example, the speaker identification module needs a longer time frame to make reliable estimates than the face recognition module does, while the latter can make a judgment as soon as a single image is acquired. Consequently, the classification judgments from multimodal classifiers will be fed into the meta-classifier asynchronously, and a method of appropriately synchronizing them is needed.
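A minimal sketch of this synthesis step, formalized in the paragraphs that follow (the data structures are ours; in a real-time setting each classifier's most recent judgment at or before t is taken as the closest one):

```python
# Sketch of synthesizing asynchronous judgments into one feature vector.
import bisect

def synthesize(streams, t):
    """streams: per classifier, a (times, vectors) pair with times sorted.
    Concatenates each classifier's judgment closest to (at or before) t."""
    feature = []
    for times, vectors in streams:
        i = max(bisect.bisect_right(times, t) - 1, 0)
        feature.extend(vectors[i])
    return feature

# The worked example discussed below: three classifiers, two classes.
face  = ([5.0],   [(10, 2)])
voice = ([4.375], [(0.8, 0.1)])
third = ([3.0],   [(60, 50)])
print(synthesize([face, voice, third], 5.0))
# -> [10, 2, 0.8, 0.1, 60, 50]
```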
Fig. 5. The figure illustrates how asynchronous features are synthesized. The horizontal lines are time lines, and each marker (circle or diamond) stands for a time point when a judgment is made. Face recognition makes decisions more quickly and has less time delay between markers, while speaker identification is slower and has a larger lag. The synthesized feature (circle with x) is the combination of the features from the two modalities that are the closest to the current time point.
Denote by xji the degree of confidence that the example belongs to class i according to classifier j. Depending on the nature of the classifier, xji can be a similarity score or a posterior probability of the class given the data. A multimedia classifier j generates a judgment vector xj once it finishes analyzing input from the audio or video stream. The judgment vector xj is (xj1, xj2, …, xjk), where k is the number of classes, i.e. the number of people in the current pool of digital human memory. Given r classifiers, at a time point t at which any classifier makes a judgment, a new feature vector xsyn(t) = (x1(t′), x2(t′), …, xr(t′)) is formed, where t′ is the time point closest to t at which the corresponding classifier made a judgment. In other words, whenever a classifier makes a judgment, a new feature vector combines every classifier's judgment by concatenating
classification vectors generated at the time point nearest to the current time, as shown in Figure 5. Suppose we have three classifiers and two classes. At the fifth second, the first classifier makes a classification judgment x1(5) = (10, 2). The most recent judgments made by the other two classifiers are x2(4.375) = (0.8, 0.1) and x3(3) = (60, 50), at 4.375 and 3 seconds, respectively. Therefore, the synthesized feature vector at the time point of 5 seconds is xsyn(5) = (10, 2, 0.8, 0.1, 60, 50), and the meta-classifier learns and classifies in this new, 6-dimensional feature space. The synthesis method is based on the assumption that between t and t′ there is no dramatic change with respect to the last judgment, which usually holds true in a real-time application like remembering conversations. When the user meets a person and attempts to remember the person based on voice or facial features, he or she usually continues to look at or listen to the person who is to be identified, which means the multimedia classifiers will produce the same judgments over the short period.

4.2.2 Meta-classification

Meta-classification re-classifies the classification judgments made by the classifiers. We treat the judgment from each classifier for each class as a feature, and then build another classifier, i.e. a meta-classifier, to make the final decision, as illustrated in Figure 6. Consider that there are two "deaf" face recognition experts and one "blind" speaker identification expert residing in our system. Once the system detects an unknown person approaching the user, or the user actively triggers the recognition mode, each expert starts to make a judgment based on the input from the corresponding modality. Instead of making the final decision by voting, or by summing up probabilities and then picking the most promising one, we present their decisions together to another judge, i.e. the meta-classifier, who decides the identity of the person in the current video or audio stream.
Fig. 6. Meta-classification Combining Strategy
Formally speaking, assume there are p classes and k classifiers, and the judgment from the ith classifier for the jth class given an unknown pattern is oi,j. The ith classifier outputs its judgment as a feature vector oi = (oi,1,…,oi,p), and we combine these feature vectors into a long feature vector m = (o1,…,ok). The meta-classifier takes the feature vector m and makes the final decision. The meta-classifier can be any kind of classifier, and we chose SVM as our meta-classifier. Note that we have to build p SVM classifiers, one for each class, because SVM is an inherently binary classifier.
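A hedged sketch of this construction, with scikit-learn's linear SVM standing in for the authors' SVMlight setup (the cost-ratio parameter anticipates the one-against-all weighting used in the experiments; all names are illustrative):

```python
# Sketch of the SVM-based meta-classifier: one binary SVM per class over
# the concatenated judgment vectors m = (o_1, ..., o_k).
import numpy as np
from sklearn.svm import LinearSVC

def train_meta_classifiers(m_vectors, labels, classes, pos_cost=22):
    """m_vectors: n x (k*p) array; labels: class id per row."""
    labels = np.asarray(labels)
    metas = {}
    for c in classes:
        y = (labels == c).astype(int)
        # weight positive examples more heavily (one-against-all training)
        metas[c] = LinearSVC(class_weight={1: pos_cost, 0: 1}).fit(m_vectors, y)
    return metas

def classify(m, metas):
    """Pick the class whose meta-classifier gives the largest decision value."""
    scores = {c: svm.decision_function([m])[0] for c, svm in metas.items()}
    return max(scores, key=scores.get)
```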
4.2.3 Support Vector Machine

SVM has recently received much attention in the machine learning community. Initially proposed as a binary classification method, SVM has not only been carefully motivated by statistical learning theory [25], but has also been successfully applied to numerous domains, including object detection [15], handwritten digit recognition [21], and text categorization [10]. The basic idea of SVM is illustrated in Figure 7.
Fig. 7. The margin and decision boundaries of SVM in the two-dimensional feature space. The bold line shows the optimal decision boundary with the widest margins (dashed lines) separating the two classes, as opposed to the thin line, which separates the two classes with a smaller margin.
While there are many linear decision boundaries that can separate the two classes (circles and squares) in Figure 7, SVM will select the bold solid line over the thin solid line because the margin of the bold line is larger. If we push the bold solid line out until we reach the data points of each class, the distance between the two dashed lines is the margin. Intuitively, a decision boundary with a large margin suggests a lower classification error when new data arrives. The data points (circles and squares) on the margin boundary, which are drawn in bold outlines, are called the support vectors. The goal of training SVM is to find the decision boundary with the maximal margin. Consider a binary classification problem with linearly separable data (xi, yi), where yi is the label of the feature vector xi, and the value is binary, either +1 or −1. For positive data xi with yi = +1, there exist w and b such that w·xi + b ≥ +1. Similarly, for negative data xi with yi = −1, we have w·xi + b ≤ −1. The margin between these two supporting planes is 2/||w||. The task of maximizing the margin can be formulated as a quadratic program with constraints w·xi + b ≥ +1 for positive data and w·xi + b ≤ −1 for negative data. There exist many optimization methods to solve Quadratic Programming (QP). Since the QP problem is convex, we are guaranteed to find the global optimum. Moreover, the margin is decided only by the support vectors and has no direct relationship to the dimensionality of the complete data. More rigorous introductions to SVM can be found in standard texts [3][25]. Our SVM implementation is based on SVMlight [9] with a linear kernel, solving the following QP:

min_{w,b,ξ} (1/2) w·w + C ∑_{i=1}^{l} ξi
subject to yi (w·Θ(xi) + b) ≥ 1 − ξi, i = 1,…,l

where Θ(·) is the function that maps xi into a higher-dimensional space.
The SVM-based meta-classifier makes its binary decision by classifying the combined feature vectors, and we build one such meta-classifier for each class. Unlike other combination schemes that require each classifier to output its judgment on the same scale, here the feature vectors can consist of similarity scores or probabilities without any restriction.
4.2.4 SVM-Based Meta-classifier
The advantage of applying meta-classification is two-fold. First, compared with probability-based combination strategies, meta-classification observes more information when making the final decision: from the meta-classifier's point of view, the product rule only observes a feature vector m' = (o1,j, o2,j, …, ok,j), which is a subset of m. Second, an SVM-based meta-classifier automatically learns the weights for the different classifiers, while the product rule treats all classifiers with equal weights, even though not all classifiers are equally robust across all classes.
Moreover, using a binary classifier such as SVM as a meta-classifier encourages reliance on local experts. It is possible that one classifier is an expert at recognizing a specific class but not all classes. For example, suppose one of the user's friends was first met in a very noisy environment, resulting in poor-quality voice data for training speaker identification but leaving the visual features of the face intact. Meta-classification can learn this pattern from the synthesized feature vectors. Therefore, when the user meets this friend again, the face recognition module should be certain about the identity of the friend while the speaker identification module is likely to be confused. A plain linear combination strategy behaves erratically in this circumstance because the information that speaker identification is unreliable for this person is ignored entirely. The product rule and other types of meta-classifiers such as artificial neural networks have difficulty incorporating this idea of "local" experts.
The meta-classification strategy can be applied to other classifiers with little effort. Any existing multimedia classifier can easily be plugged into the framework and combined with other classifiers to generate synthesized feature vectors, and meta-classification training proceeds in the same way. It does not matter whether a classifier is probability-based or similarity-based; both probabilities and similarity scores can be combined into the feature vector.
4.2.5 Evaluation Window
To exploit the continuous nature of audio and video input in a context-aware application, we can make the classification decision not only by combining multimodal classifiers, but also by accumulating classification judgments over time. The idea is that the classifier becomes more confident if the same person is recognized repeatedly in a short period, assuming the wearer keeps looking at or listening to the same individual. The length of time over which classification judgments are accumulated is the size of the evaluation window. The decision rule chooses subject j as the final decision such that

∑t=i..i+w sj(t) = maxk ∑t=i..i+w sk(t)
where sk(t) is the judgment the classifier made for the kth class at time t, i is the starting time at which we begin to accumulate judgments, and w is the size of the window. We expect a larger window to perform better, because more evidence accumulates over time, but at the expense of quick response.
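A sketch of this decision rule (the judgment matrix below is illustrative):

```python
import numpy as np

def window_decision(s, i, w):
    """Decision rule of Sect. 4.2.5 (sketch): s[t][k] is the judgment for
    class k at time step t; accumulate judgments from t = i to t = i + w
    and return the class with the largest sum."""
    window = np.asarray(s[i:i + w + 1])   # rows: time steps, cols: classes
    return window.sum(axis=0).argmax()

# Illustrative judgments for 3 classes over 4 time steps.
s = [[0.2, 0.5, 0.3],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2],
     [0.3, 0.5, 0.2]]
print(window_decision(s, i=0, w=3))  # class 1 accumulates the most evidence
```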
5 Experiments
5.1 Data and Procedure
We collected two conversations for each of 22 people wearing our prototype memory capture unit. Each conversation is at least 20 seconds long and was analyzed for faces and speaker audio characteristics as described before. The lighting conditions and background were purposely made different between the two conversations to make the experiment more realistic, especially for face recognition. The first conversation of each pair served as the training example for the multimedia classifiers as well as the meta-classifier, while the second conversation was used as the query, or retrieval prompt, to "remember" the first conversation, i.e. as the testing sample. Each multimedia classifier makes a judgment based on information in the mixed audio and video stream, and the judgments of all classifiers are synthesized into new feature vectors for the meta-classifier. In the testing phase, each classifier makes its decision first, and the combined feature vector is sent to the meta-classifier to make the final decision. Conversation retrieval was considered successful only when the combining strategy correctly identified the person in question.
Since the SVM-based meta-classifier is a binary classifier, we have to train one meta-classifier for each of the 22 people. The training data consisted of the synthesized feature vectors from the given person and the feature vectors from the other 21 people, i.e. one-against-all training. To account for the discrepancy between the numbers of positive and negative examples, the cost of misclassifying a positive training example as negative was set to 22 times the cost incurred in the reverse situation. There were 345 testing feature vectors for the 22 classes; 345 is not a multiple of 22 because the number of feature vectors generated from each person was not the same. If one of the multimedia classifiers had a hard time making a judgment at a time slot, there was no combined feature vector at that time point. On average, the face classifier made a judgment every 1/6 second, and the voice classifier made one judgment every second.
5.2 Results
We used the average rank as the evaluation metric for the retrieval task, i.e. on average, at what rank the correct conversation was found. The better the classifier or combining strategy performs, the closer its average rank is to one. The results of our experiment are shown in Table 1: the Schneiderman/Eigenface detection/recognition method retrieved the correct conversation at an average rank of 3.42 out of the 22 possible conversation candidates, and the Visionics face recognition system found the correct conversation at rank 3.33. Speaker identification proved to be much worse than the face recognizers. The overall results are far from perfect, suggesting that people recognition in noisy environments is not a trivial problem. The meta-classifier combining the face classifiers and speaker identification achieved an average rank of 2.61. This result not only significantly outperforms the individual multimedia classifiers, but also outperforms the Sum rule, a probability-based combining strategy.
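The average-rank metric can be computed as in the following sketch; the score matrix and query labels are hypothetical, and rank 1 means the correct conversation was returned first:

```python
import numpy as np

def average_rank(scores, correct):
    """scores: (n_queries, n_candidates) array, higher = more likely;
    correct: index of the true conversation for each query.
    Returns the mean 1-based rank at which the truth was retrieved."""
    order = (-scores).argsort(axis=1)              # best candidate first
    ranks = [int(np.where(order[q] == correct[q])[0][0]) + 1
             for q in range(scores.shape[0])]
    return float(np.mean(ranks))

scores = np.array([[0.9, 0.2, 0.5], [0.1, 0.7, 0.6]])
print(average_rank(scores, correct=[0, 2]))        # (1 + 2) / 2 = 1.5
```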
Table 1. Experiment results from individual classifiers and the combinations of classifiers demonstrate the advantage of the SVM-based meta-classifier.
Retrieval Method                         | Average Rank
Schneiderman + Normalized Eigenfaces     | 3.42
Visionics Face Recognition               | 3.33
Speaker Identification by similarity     | 3.92
Speaker Identification by pitch          | 6.22
Summing every classifier                 | 3.87
SVM-based Meta-classifier                | 2.61
We also evaluated the effect of the window size [13], and plot the combination strategies versus the window size in Figure 8. We expected that with an increasing evaluation window the performance should improve, because the classifier becomes more confident about its decision by observing more samples over time. Since classifiers from different modalities make decisions at different paces, the plot in Figure 8 has two x-axes: the one above the plot, with window sizes up to 25, is for the slow speaker identification, and the bottom one, with window sizes up to 500, is for the fast face recognition and the combination strategies. Interestingly, the two audio classifiers do not improve with increasing window size, suggesting that the speaker identification modules have stable, but not very accurate, performance within a one-second audio sample. Their performance could not improve unless we expanded the sample size, which is undesirable in context-aware applications that need to respond quickly. In short, the evaluation window cannot improve speaker identification due to the intrinsic limits of speech. In contrast, the face recognition module and the combination strategies improve with increasing window size. Note that after a window size of 250 (about 15 seconds), the meta-classifier achieved perfect performance. Moreover, the curve of the meta-classifier combining strategy declines very quickly over the first several window sizes, showing that the strategy is effective at combining multimodal classifiers to make the best classification judgment. If quick response is desirable and the user is willing to tolerate some misclassification error, the meta-classification strategy achieves an average rank of two at a window size of 20 (about five seconds), which means that on average the user finds the correct conversation within the first two results.
6 Conclusions
The novelty of our research is the meta-classification strategy for combining multimedia classifiers. Based on the experimental results in the task of identifying the same person through audio and video signals, meta-classification is shown to be much more effective than single classifiers as well as probability-based strategies. SVM has strong generalization power thanks to the idea of maximal margins, and performs well in practice. Moreover, meta-classification using SVM can take full advantage of all the information within the judgments made by multiple classifiers, and eliminates the burden of assigning weights to individual classifiers.
[Figure 8 plot: average rank (y-axis) versus window size (x-axis) for Pitch, Acoustic Similarity, Sum Rule, Face Recognition, and Meta-classification.]
Fig. 8. Experiment results of manipulating the size of the evaluation window show the rapid improvement rate of the SVM-based meta-classification.
We have also successfully experimented with this meta-classification approach in a different context: learning to classify video segments of broadcast news as weather reports [12]. Again, the meta-classifier combination strategy proved superior to individual classifiers based on transcripts or images alone, and outperformed other ad hoc combination strategies. The meta-classification combination strategy can easily be applied to other situations that need to combine multiple classifiers to improve classification accuracy.
Acknowledgements. This material is based in part on work supported by the National Science Foundation under Agreement No. IIS-0121641 and Agreement No. IIS-0105219. This work was also supported in part by the Advanced Research and Development Activity (ARDA) under contract number MDA908-00-C-0037.
References
1. V. Bush, "As we may think," Atlantic Monthly, Vol. 176, No. 1, July 1945, pp. 101–108.
2. C. J. C. Burges, "A Tutorial on Support Vector Machines for Pattern Recognition," Knowledge Discovery and Data Mining, Vol. 2, No. 2, 1998.
3. Cristianini, N. and Shawe-Taylor, J. An Introduction to Support Vector Machines and other Kernel-based Learning Methods. Cambridge University Press, 2000.
4. R. W. Frischholz and U. Dieckmann, "BioID: A Multimodal Biometric Identification System," IEEE Computer, Feb. 2000, pp. 64–68.
5. J. Gray, "What next? A few remaining problems in Information Technology," ACM Federated Computing Research Conference, Atlanta, GA, May 1999.
6. Hansen, L. and Salamon, P., "Neural Network Ensembles," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 12, No. 10, 1990, pp. 993–1001.
7. S. Hashem and B. Schmeiser, "Improving Model Accuracy Using Optimal Linear Combinations of Trained Neural Networks," IEEE Transactions on Neural Networks, Vol. 6, No. 3, May 1995.
8. A. K. Jain, L. Hong, S. Pankanti, and R. Bolle, "An Identity-Authentication System Using Fingerprints," in Proceedings of EuroSpeech, Los Alamitos, CA, IEEE CS Press.
9. T. Joachims, "Making Large-Scale SVM Learning Practical," in Advances in Kernel Methods: Support Vector Learning, B. Schölkopf, C. Burges, and A. Smola (eds.), MIT Press, 1999.
10. Joachims, T. "Text Categorization with Support Vector Machines: Learning with Many Relevant Features." In Proceedings of the European Conference on Machine Learning, 1998.
11. J. Kittler, M. Hatef, R. P. W. Duin, and J. Matas, "On Combining Classifiers," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 3, Mar. 1998.
12. Lin, W.-H. and Hauptmann, A., "News Video Classification Using SVM-based Multimodal Classifiers and Combination Strategies," ACM Multimedia, Juan-les-Pins, France, Dec. 2002.
13. Lin, W.-H., Jin, R., and Hauptmann, A. G., "Triggering Memories of Conversations using Multimodal Classifiers," AAAI Workshop on Intelligent Situation-Aware Media and Presentation, Edmonton, Alberta, Canada, July 2002.
14. M. Negin, T. A. Chmielewski, M. Salganicoff, T. A. Camus, U. M. C. von Seelen, P. L. Venetianer, and G. G. Zhang, "An Iris Biometric System for Public and Personal Use," IEEE Computer, Feb. 2000.
15. Papageorgiou, C., Oren, M., and Poggio, T. "A General Framework for Object Detection." In Proceedings of the International Conference on Computer Vision, 1998.
16. Pentland, A. and Choudhury, T. "Face Recognition for Smart Environments," IEEE Computer, Feb. 2000, pp. 50–55.
17. H. A. Rowley, S. Baluja, and T. Kanade, "Neural Network-Based Face Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, Jan. 1998.
18. H. A. Rowley, S. Baluja, and T. Kanade, "Human Face Detection in Visual Scenes," Carnegie Mellon University Technical Report CMU-CS-95-158R, Pittsburgh, PA.
19. H. A. Rowley, S. Baluja, and T. Kanade, "Rotation Invariant Neural Network-Based Face Detection," IEEE CVPR, Santa Barbara, 1998.
20. M. Schmidt, J. Golden, and H. Gish, "GMM Sample Statistic Log-Likelihoods for Text-Independent Speaker Recognition," Eurospeech, Rhodes, Greece, Sep. 1997.
21. Schölkopf, B., Sung, K., Burges, C., Girosi, F., Niyogi, P., Poggio, T., and Vapnik, V., "Comparing Support Vector Machines with Gaussian Kernels to Radial Basis Function Classifiers," IEEE Transactions on Signal Processing, Vol. 45, No. 11, Nov. 1997.
22. H. Schneiderman and T. Kanade, "Probabilistic Modeling of Local Appearance and Spatial Relationships for Object Recognition," IEEE CVPR, Santa Barbara, 1998.
23. Turk, M. and Pentland, A. "Eigenfaces for Recognition." Journal of Cognitive Neuroscience, Vol. 3, No. 1, 1991, pp. 71–86.
24. Twiddler, Handykey Corporation, http://www.handykey.com
25. Vapnik, V. N. The Nature of Statistical Learning Theory, 2nd ed., Springer, New York, 2000.

Partition Cardinality Estimation in Image Repositories
G. Fernandez and C. Djeraba

We would need millions of iterations to extract the best-cardinality partition; this is too time-consuming for any clustering method [Mcq] [Vin] [Did], being on the order of O(n^k). That is why we propose a solution that significantly reduces the number of iterations.
The second one is the clustering confidence measure, the Global Variance Ratio Criterion, which considers both inter-cluster and intra-cluster measures: it assesses the confidence of the whole partition as well as of individual clusters. We showed in our previous experiments [Fer] that this measure presents interesting results. We also showed that the confidence of the results depends not only on the clustering algorithm and the cluster confidence measure, but also on the data item descriptors. In our previous experiments, data item descriptors were represented by two Wavelet CDF (Cohen-Daubechies-Feauveau) variants [Coh]; the first presented better confidence of clustering results than the second because it is less sensitive to noise. In this paper, image descriptors are represented by Fourier descriptors. The difference between Wavelet descriptors and Fourier descriptors is justified by the fact that Wavelet descriptors contribute to detecting clusters
with arbitrary frontiers: they are insensitive to noise [She], and the multi-resolution property of Wavelet descriptors contributes to effectively identifying clusters with arbitrary frontiers. Fourier descriptors, for their part, make the description robust to translation, rotation and scale transformations of the whole image. This means that Wavelet descriptors are more suitable for describing local regions of an image, whereas Fourier descriptors are more suitable for describing the whole image in a compact way and with little noise. The comparison between the two descriptors is not within the scope of this paper; in any case, the proposed approach does not depend on a specific image descriptor.
The paper is composed of the following sections. The second section presents related works. The third section highlights the ingredients of the approach: image descriptors, cluster confidence measure and clustering method. The fourth section presents the partition cardinality estimation approach and the experimental results.
2 Related Works
Partitioning methods decompose a collection of images into a given number of disjoint clusters which are optimal in terms of some predefined criterion functions. Typical methods in this category include k-Means clustering [Wil], probabilistic clustering [Krz], Gaussian Mixture Models, etc. A common characteristic of these methods is that they all require the user to provide the number of clusters comprising the data corpus. However, in real applications, this requirement is rather difficult to satisfy when given unknown image repositories without any prior knowledge about them.
Determining the best partition cardinality in data requires both an algorithm that can search for the correct number and a criterion that is capable of recognizing the correct number of clusters. The simplest algorithm is to run an
existing algorithm for a fixed number of clusters in a loop and to select the best solution according to some criterion. This brute-force search is guaranteed to work, but it is also the slowest and most impractical method.
There have been research efforts that strive to provide model selection capability to the above methods; however, to our knowledge few of them have been applied to image data sets. That is why we present related works independently of their application domain. A stepwise clustering algorithm for reducing the workload required by brute force has been proposed [Kar]. The idea is to utilize the previous solution as a starting point when solving the next clustering problem with a different number of clusters. A stopping criterion is applied to estimate the potential improvement of the algorithm and to stop the iteration when the estimated further improvement stays below a predefined threshold value. This approach is the Dynamic Local Search, which solves the number and location of the clusters jointly. The algorithm uses a set of basic operations such as cluster addition, removal and swapping.
Among the other methods that solve the number and location of the clusters jointly, competitive agglomeration [Fri] decreases the number of clusters until there are no clusters smaller than a predefined threshold value. The drawback is that the threshold value must be determined experimentally.
The more difficult problem is the choice of the evaluation function to be minimized. For a given application the criterion can be based on the principle of Minimum Description Length; otherwise the criterion must be based on certain assumptions of data normalization and spherical clusters. Basically, the function should correlate with high inter-cluster distance and low intra-cluster diversity. In the case of binary data, Stochastic Complexity [Gyl] has been applied.
X-means, proposed in [Pel], is an extension of K-means with the added functionality of estimating the number of clusters to generate. The Bayesian Information Criterion is employed to determine whether to split a cluster or not: the split is conducted when the information gain for splitting a cluster is greater than the gain for keeping it. The question is how a data item is split. This approach may be interesting for simple descriptors; however, it is not certain that it works well with highly complex descriptors like Wavelet or Fourier.
[Fra] describes the use of an approximated Bayes factor, computed with the expectation-maximization algorithm, to compare statistical models of cluster data and simultaneously choose both the desired number of clusters and the clustering technique.
[Mil 85] compared thirty clustering measures, including the Global Variance Ratio Criterion, in the context of hierarchical clustering in order to find the best number of clusters. All of them were applied to the same data sets, counting how many times an index gives the right partition cardinality. The Global Variance Ratio Criterion [Cal 74] presented the best results in all cases; that is why we consider it in our approach. The Global Variance Ratio Criterion maximizes the normalized ratio of between- to within-cluster distances as a means of choosing the optimal number of clusters.
Other methods based on cluster dispersion or cluster sums of squared distances have been proposed, including the approaches of [Har], [Krz] and [Tib], who proposed the 'Gap statistic' for determining the optimal number of clusters. The method computes the within-cluster dispersion for increasing values of k and compares the change in these values against a reference null distribution, exploring both a uniform reference distribution over the range of each feature and a uniform reference in the principal component orientation. Cluster stability has also been proposed as a criterion for determining the structure of data. Building on previous work in stability measurement [Smi] and cluster comparison, a stability-based method for finding the optimal number of clusters has been proposed. The technique samples a space of clusterings for each choice of partition cardinality and uses a clustering similarity metric to generate a distribution of stability values; this distribution is then used to choose the most stable clustering.
The rival penalized competitive learning algorithm [Xu] has demonstrated very good results in finding the cluster number; however, there is still no appropriate theory developed for it [Voz] [Ros]. In mixture-model cluster analysis, the sample data are viewed as a mixture of two or more normal (Gaussian) distributions in varying proportions, and the clusters are analysed by means of the mixture distribution. The likelihood approach to the fitting of mixture models has been utilized extensively [Red], and the Bayesian-Kullback approach has also been applied [Wil]. However, for a relatively small set of samples, the maximum likelihood method with the expectation-maximization algorithm for estimating mixture model parameters will not adequately reflect the characteristics of the cluster structure, and the selected cluster number will be incorrect. To solve this problem for small sample sets, a theory for data smoothing is developed in [Smi], considering non-parametric density estimation and the smoothing factor.
In conclusion, there is no completely satisfactory method for determining the number of population clusters for any type of cluster analysis; the determination of the appropriate cluster number remains one of the most difficult problems in cluster analysis. There are many parameters (e.g. data item descriptors, volume of information, confidence of clustering, clustering algorithm, application domain) to consider before affirming that one method is better than another, and each method has its advantages and disadvantages, as presented above. In this paper, we investigate an approach that reduces the number of iterations of the loop that evaluates different numbers of clusters. Our approach follows an internal process, in which the method that computes the best partition cardinality is based on the same observations that are used to create the clustering. It is therefore not an external approach, because it is not based on measures of agreement between partitions. (To assess the ability of a clustering algorithm to recover true cluster labels, it is necessary to define a measure of agreement between two partitions, as we will see below.)
3 Approach Ingredients
Three ingredients are necessary to understand the approach that estimates the partition cardinality: image descriptors, the cluster confidence measure, and the clustering method.
3.1 Image Descriptors
The image is an important aspect of human visual perception, and the challenge is to represent it in an accurate and compact way. The approach implements a powerful image content representation based on a mathematical model, the Fourier model [Zah 72]. The Fourier model has very interesting advantages: the image can be reconstructed from the Fourier features; it has a mathematical description rather than a heuristic one; and, finally, the model makes the description robust to translation, rotation and scale transformations. An important contribution of our representation is our extension of the Fourier model to take the matching process, and particularly the similarity measure, into account in the categorizing procedure. In this extension, we consider image(t) to be composed of two functions, x(t) and y(t), so that image(t) = (x(t), y(t)); x(t) represents the grey levels along the x axis, and y(t) the grey levels along the y axis. t indexes the image signal, t = 0, …, N-1, where N is the period of the function and equals the length of the normalized image. We thus have two series of coefficients, S(an, bn) and S(cn, dn), that represent the Fourier coefficients of x(t) and y(t), respectively.
x(t) = a0 + ∑k=1..N (ak cos(2πkt/N) + bk sin(2πkt/N))
y(t) = c0 + ∑k=1..N (ck cos(2πkt/N) + dk sin(2πkt/N))

with

ak = (1/N) ∑t=0..N-1 x(t) cos(2πkt/N),  bk = (1/N) ∑t=0..N-1 x(t) sin(2πkt/N)
ck = (1/N) ∑t=0..N-1 y(t) cos(2πkt/N),  dk = (1/N) ∑t=0..N-1 y(t) sin(2πkt/N)

We consider only eleven Fourier coefficients, selecting the lowest frequencies of the sub-band k ∈ [0, 10]. We chose eleven Fourier descriptors because we noticed in our previous experiments [Bou 98] that eleven descriptors are enough to reconstruct the image with only very light modifications; more than eleven descriptors make little difference to the stability of the image representation. In this extension, we modify the similarity measure (Euclidean distance) to take into account the coefficients of the two signals x(t) and y(t). Fourier descriptors represent any complex image with only a few parameters: for N harmonics, we have 2 + 4N coefficients. Two images are similar even if they differ by a geometric transformation such as rotation, scaling or translation: translation, rotation and scale have no effect on the moduli of the Fourier coefficients (up to a multiplicative factor K for scale).
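As an illustration, the following sketch computes such a truncated description with NumPy, using the complex FFT form of the coefficients rather than the (ak, bk) pairs above; the signals are hypothetical:

```python
import numpy as np

def fourier_descriptor(x, y, harmonics=11):
    """Truncated Fourier description of image(t) = (x(t), y(t)):
    keep the lowest `harmonics` frequencies of each signal."""
    fx = np.fft.fft(x) / len(x)
    fy = np.fft.fft(y) / len(y)
    return np.concatenate([fx[:harmonics], fy[:harmonics]])

def similarity(d1, d2):
    """Euclidean distance over coefficient moduli; translation and
    rotation of the whole image leave the moduli essentially unchanged."""
    return np.linalg.norm(np.abs(d1) - np.abs(d2))

t = np.arange(256)
x = np.cos(2 * np.pi * t / 256) + 0.1 * np.cos(6 * np.pi * t / 256)
y = np.sin(2 * np.pi * t / 256)
d = fourier_descriptor(x, y)
print(similarity(d, d))  # 0.0 for identical images
```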
3.2 Cluster Confidence Measure
We consider the categorization confidence measure "Global Variance Ratio Criterion", inspired by [Cal 74]. It accounts for both inter-category and intra-category distortion, and it is simple to implement. It computes the confidence of the whole set of categories as well as of individual categories; the confidence thus combines two levels of measurement (global and local) to make the computation of the categorization confidence more accurate. The global variance ratio confidence has never been used in partition-based categorization. It has, however, been tested in the context of hierarchical categorization [Mil 85], in which thirty measures were examined in order to find the best number of categories: all of them were applied to test data sets, counting how many times an index gives the right partition cardinality, and the global variance ratio confidence presented the best results in all cases. The legitimate question is: the confidence is a good measure for hierarchical methods, but is it also good for partition methods? We think that the global variance ratio is also good for partition methods, because it operates on the category confidence measure and not within the categorization algorithm itself, which is what differs between hierarchical and partition-based categorization.
Global variance ratio criterion (k clusters) = [Beta(B)/(k-1)] / [Beta(W)/(n-k)]

where n is the cardinality of the data set, k is the number of clusters, Beta(B)/(k-1) is the variance between categories, Beta(W)/(n-k) is the dispersion within clusters, and x̄ denotes the gravity centre of a cluster. Writing x̄ for the gravity centre of the whole data set, x̄l for the gravity centre of cluster Cl, and expanding the between-cluster sum as the total sum of squares minus the within-cluster sum, the criterion becomes

Global variance ratio criterion(k) = [(n-k) (∑i=1..n |xi - x̄|² - ∑l=1..k ∑xj∈Cl |xj - x̄l|²)] / [(k-1) ∑l=1..k ∑xj∈Cl |xj - x̄l|²]
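A sketch of this criterion in NumPy (it coincides with the variance ratio index of [Cal 74]; the data and labels below are illustrative):

```python
import numpy as np

def global_variance_ratio(X, labels, k):
    """VRC = (B / (k-1)) / (W / (n-k)), where B is the between-cluster
    and W the within-cluster sum of squared distances."""
    n = len(X)
    centre = X.mean(axis=0)                  # gravity centre of the data
    W = B = 0.0
    for c in range(k):
        members = X[labels == c]
        centroid = members.mean(axis=0)      # gravity centre of cluster c
        W += ((members - centroid) ** 2).sum()
        B += len(members) * ((centroid - centre) ** 2).sum()
    return (B / (k - 1)) / (W / (n - k))

X = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
labels = np.array([0, 0, 1, 1])
print(global_variance_ratio(X, labels, k=2))  # large for well-separated clusters
```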
3.3 Clustering Method
The clustering procedure is a variant of k-medoids [Vin]. Since the earliest versions, many variants have been developed, and some of them are described in [Jai]. The particularity of our categorization procedure rests on two points. The first point concerns the use of two levels of clustering: the first level reduces the distortion of the clustering, and the second improves the entropy of the categorization. Both points are detailed in [Dje]. Images are thus grouped into clusters such that the similarity between different clusters is minimized while the similarity within each cluster is maximized.
The clustering algorithm is based on the two following ideas. Considering a set of centroids, we assign each image descriptor to the closest centroid in order to minimize the distortion. When this temporary classification is completed, we compute
for each class a particular vector that minimizes the distortion. A new partition, characterized by a lower distortion, is thus achieved.
In theory, this iterative procedure is repeated until the ratio between the distortions of two successive iterations becomes smaller than a given threshold (a local minimum of the distortion is reached). However, to speed up the clustering process, we consider in our experiments one iteration for distortion and one iteration for entropy. If no initial partition (initial cluster centroids) is specified, the iterative procedure starts from the optimization of the initial data set containing one centroid; this partition is then successively split and optimized to obtain the desired number of centroids.
The algorithm is a sub-optimal training algorithm whose great advantage is to speed up the search through the partition. The centroids of the partition are organized in a balanced binary tree, reducing the search time from K (for k-medoids) to log K, where K is the number of centroids in the partition. This organization of centroids in a tree leads to a slight modification of the training algorithm: the optimization of the clustering is no longer performed on the whole clustering space but rather on each subspace corresponding to the leaves of the tree. This sub-optimal method results in worse, but still acceptable, distortion performance. Once the partition has been computed, the images can be mapped to the discrete space of the centroids.
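A simplified sketch of one such iteration follows (a k-means-style assign/update step; the authors' actual procedure uses medoids, an additional entropy pass, and a centroid tree, none of which are modeled here):

```python
import numpy as np

def cluster_step(X, centroids):
    """One distortion-reducing iteration: assign every descriptor to its
    closest centroid, then recompute each centroid from its members."""
    # Assignment: squared distance from every point to every centroid.
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = d.argmin(axis=1)
    # Update: the vector minimizing within-cluster distortion is the mean;
    # keep the old centroid if a cluster ends up empty.
    new_centroids = []
    for c in range(len(centroids)):
        members = X[labels == c]
        new_centroids.append(members.mean(axis=0) if len(members)
                             else centroids[c])
    return labels, np.array(new_centroids)

rng = np.random.default_rng(1)
X = rng.random((50, 22))            # hypothetical image descriptors
centroids = X[rng.choice(50, size=4, replace=False)]
labels, centroids = cluster_step(X, centroids)
```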
4 Partition Cardinality Estimation
The procedure is run several times from different starting states corresponding to different numbers of clusters, and the best cluster-number configuration obtained from these runs is used as the output result. For N images in the data set, the possible number of clusters belongs to [1, N]. We consider a sequence v = (1, 2, 3, 4, …, N) of the potential cardinalities. The main idea of the procedure is to avoid testing all values of v, and to limit the test to a subsequence v' of v for which cardinality(v') is much smaller than cardinality(v).
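Since the text breaks off here, the following sketch is only one plausible reading of the reduced search: evaluate a coarse subsequence of candidate cardinalities with the confidence criterion of Sect. 3.2 and keep the best. The sampling scheme and helper names are assumptions, not the authors' algorithm:

```python
import numpy as np

def best_cardinality(X, candidates, cluster, criterion):
    """Evaluate only a subsequence v' of candidate cluster numbers and
    return the one maximizing the clustering confidence criterion."""
    best_k, best_score = None, -np.inf
    for k in candidates:
        labels = cluster(X, k)             # any partitioning method
        score = criterion(X, labels, k)    # e.g. global_variance_ratio
        if score > best_score:
            best_k, best_score = k, score
    return best_k

# e.g. test v' = (2, 4, 8, 16, 32) instead of every k in [1, N]:
# best_k = best_cardinality(X, [2, 4, 8, 16, 32],
#                           my_cluster, global_variance_ratio)
```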