Advances in Intelligent and Soft Computing, Volume 67
Editor-in-Chief: J. Kacprzyk
Advances in Intelligent and Soft Computing
Editor-in-Chief
Prof. Janusz Kacprzyk
Systems Research Institute
Polish Academy of Sciences
ul. Newelska 6
01-447 Warsaw, Poland
E-mail: [email protected]

Further volumes of this series can be found on our homepage: springer.com

Vol. 53. E. Corchado, R. Zunino, P. Gastaldo, Á. Herrero (Eds.): Proceedings of the International Workshop on Computational Intelligence in Security for Information Systems CISIS 2008, 2009, ISBN 978-3-540-88180-3
Vol. 54. B.-y. Cao, C.-y. Zhang, T.-f. Li (Eds.): Fuzzy Information and Engineering, 2009, ISBN 978-3-540-88913-7
Vol. 55. Y. Demazeau, J. Pavón, J.M. Corchado, J. Bajo (Eds.): 7th International Conference on Practical Applications of Agents and Multi-Agent Systems (PAAMS 2009), 2009, ISBN 978-3-642-00486-5
Vol. 56. H. Wang, Y. Shen, T. Huang, Z. Zeng (Eds.): The Sixth International Symposium on Neural Networks (ISNN 2009), 2009, ISBN 978-3-642-01215-0
Vol. 57. M. Kurzynski, M. Wozniak (Eds.): Computer Recognition Systems 3, 2009, ISBN 978-3-540-93904-7
Vol. 58. J. Mehnen, A. Tiwari, M. Köppen, A. Saad (Eds.): Applications of Soft Computing, 2009, ISBN 978-3-540-89618-0
Vol. 59. K.A. Cyran, S. Kozielski, J.F. Peters, U. Stańczyk, A. Wakulicz-Deja (Eds.): Man-Machine Interactions, 2009, ISBN 978-3-642-00562-6
Vol. 60. Z.S. Hippe, J.L. Kulikowski (Eds.): Human-Computer Systems Interaction, 2009, ISBN 978-3-642-03201-1
Vol. 61. W. Yu, E.N. Sanchez (Eds.): Advances in Computational Intelligence, 2009, ISBN 978-3-642-03155-7
Vol. 62. B. Cao, T.-F. Li, C.-Y. Zhang (Eds.): Fuzzy Information and Engineering Volume 2, 2009, ISBN 978-3-642-03663-7
Vol. 63. Á. Herrero, P. Gastaldo, R. Zunino, E. Corchado (Eds.): Computational Intelligence in Security for Information Systems, 2009, ISBN 978-3-642-04090-0
Vol. 64. E. Tkacz, A. Kapczynski (Eds.): Internet – Technical Development and Applications, 2009, ISBN 978-3-642-05018-3
Vol. 65. E. Kącki, M. Rudnicki, J. Stempczyńska (Eds.): Computers in Medical Activity, 2009, ISBN 978-3-642-04461-8
Vol. 66. G.Q. Huang, K.L. Mak, P.G. Maropoulos (Eds.): Proceedings of the 6th CIRP-Sponsored International Conference on Digital Enterprise Technology, 2009, ISBN 978-3-642-10429-9
Vol. 67. V. Snášel, P.S. Szczepaniak, A. Abraham, J. Kacprzyk (Eds.): Advances in Intelligent Web Mastering - 2, 2010, ISBN 978-3-642-10686-6
Václav Snášel, Piotr S. Szczepaniak, Ajith Abraham, and Janusz Kacprzyk (Eds.)
Advances in Intelligent Web Mastering - 2 Proceedings of the 6th Atlantic Web Intelligence Conference - AWIC’2009, Prague, Czech Republic, September, 2009
Editors

Václav Snášel
Technical University Ostrava
Dept. Computer Science
Tř. 17. listopadu 15
708 33 Ostrava, Czech Republic
E-mail: [email protected]

Dr. Ajith Abraham
Machine Intelligence Research Labs (MIR)
Scientific Network for Innovation & Research Excellence
P.O. Box 2259
Auburn, WA 98071-2259, USA
E-mail: [email protected]

Prof. Piotr S. Szczepaniak
Technical University of Łódź
Inst. Computer Science
ul. Wólczańska 215
93-005 Łódź, Poland
E-mail: [email protected]

Prof. Dr. Janusz Kacprzyk
Systems Research Institute
Polish Academy of Sciences (PAN)
ul. Newelska 6
01-447 Warszawa, Poland
E-mail: [email protected]
ISBN 978-3-642-10686-6
e-ISBN 978-3-642-10687-3
DOI 10.1007/978-3-642-10687-3 Advances in Intelligent and Soft Computing
ISSN 1867-5662
Library of Congress Control Number: 2009941785
© 2010 Springer-Verlag Berlin Heidelberg
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable for prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Typeset & Cover Design: Scientific Publishing Services Pvt. Ltd., Chennai, India. Printed in acid-free paper 543210 springer.com
Preface
The Atlantic Web Intelligence Conferences bring together scientists, engineers, computer users, and students to exchange and share their experiences, new ideas, and research results about all aspects (theory, applications and tools) of intelligent methods applied to Web based systems, and to discuss the practical challenges encountered and the solutions adopted. Previous AWIC events were held in Spain – 2003, Mexico – 2004, Poland – 2005, Israel – 2006 and France – 2007. This year, the 6th Atlantic Web Intelligence Conference (AWIC 2009) was held during September 9-11, 2009, at the Faculty of Mathematics and Physics of the Charles University in the historical city center of Prague, Czech Republic. AWIC 2009 was organized by the Machine Intelligence Research Labs (MIR Labs – http://www.mirlabs.org), VŠB-Technical University of Ostrava, Czech Republic, and the Institute of Computer Science, Academy of Sciences of the Czech Republic. The conference attracted submissions from several parts of the world and each paper was reviewed by three or more reviewers. We are sure that the diversity of the topics dealt with in these papers, the quality of their contents, and the three keynote speakers (Peter Vojtáš, Charles University in Prague, Czech Republic; George Feuerlicht, Department of Information Technologies, University of Economics, Prague, Czech Republic & Faculty of Engineering and Information Technology, UTS, Australia; and Juan D. Velásquez, University of Chile, Chile) make for an exciting conference. We would like to express our sincere gratitude to the program committee, the local organizing committee and the numerous referees who helped us evaluate the papers and make AWIC 2009 a very successful scientific event.
Grateful appreciation is expressed to Professor Janusz Kacprzyk, Editor of the series publishing this book, as well as to the Springer team for their excellent work. We hope that every reader will find this book interesting and inspiring.

Prague, September 2009
Václav Snášel
Piotr Szczepaniak
Ajith Abraham
Jaroslav Pokorný
Dušan Húsek
Katarzyna Węgrzyn-Wolska
Contents

Part I: Invited Lectures

1 Web Semantization – Design and Principles ... 3
  Jan Dědek, Alan Eckhardt, Peter Vojtáš
2 Next Generation SOA: Can SOA Survive Cloud Computing? ... 19
  George Feuerlicht
3 A Dynamic Stochastic Model Applied to the Analysis of the Web User Behavior ... 31
  Pablo E. Román, Juan D. Velásquez

Part II: Regular Papers

4 Using Wavelet and Multi-wavelet Transforms for Web Information Retrieval of Czech Language ... 43
  Shawki A. Al-Dubaee, Václav Snášel, Jan Platoš
5 An Extensible Open-Source Framework for Social Network Analysis ... 53
  Michal Barla, Mária Bieliková
6 Automatic Web Document Restructuring Based on Visual Information Analysis ... 61
  Radek Burget
7 Reasoning about Weighted Semantic User Profiles through Collective Confidence Analysis: A Fuzzy Evaluation ... 71
  Nima Dokoohaki, Mihhail Matskin
8 Estimation of Boolean Factor Analysis Performance by Informational Gain ... 83
  Alexander Frolov, Dusan Husek, Pavel Polyakov
9 The Usage of Genetic Algorithm in Clustering and Routing in Wireless Sensor Networks ... 95
  Ehsan Heidari, Ali Movaghar, Mehran Mahramian
10 Automatic Topic Learning for Personalized Re-ordering of Web Search Results ... 105
  Orland Hoeber, Chris Massie
11 Providing Private Recommendations on Personal Social Networks ... 117
  Cihan Kaleli, Huseyin Polat
12 Differential Evolution for Scheduling Independent Tasks on Heterogeneous Distributed Environments ... 127
  Pavel Krömer, Ajith Abraham, Václav Snášel, Jan Platoš, Hesam Izakian
13 Visual Similarity of Web Pages ... 135
  Miloš Kudělka, Yasufumi Takama, Václav Snášel, Karel Klos, Jaroslav Pokorný
14 Burst Moment Estimation for Information Propagation ... 147
  Tomas Kuzar, Pavol Navrat
15 Search in Documents Based on Topical Development ... 155
  Jan Martinovič, Václav Snášel, Jiří Dvorský, Pavla Dráždilová
16 Mining Overall Sentiment in Large Sets of Opinions ... 167
  Pavol Navrat, Anna Bou Ezzeddine, Lukas Slizik
17 Opinion Mining through Structural Data Analysis Using Neuronal Group Learning ... 175
  Michal Pryczek, Piotr S. Szczepaniak
18 Rough Set Based Concept Extraction Paradigm for Document Ranking ... 187
  Shailendra Singh, Santosh Kumar Ray, Bhagwati P. Joshi
19 Knowledge Modeling for Enhanced Information Retrieval and Visualization ... 199
  Maria Sokhn, Elena Mugellini, Omar Abou Khaled
20 Entity Extraction from the Web with WebKnox ... 209
  David Urbansky, Marius Feldmann, James A. Thom, Alexander Schill
21 Order-Oriented Reasoning in Description Logics ... 219
  Veronika Vaneková, Peter Vojtáš
22 Moderated Class-Membership Interchange in Iterative Multi-relational Graph Classifier ... 229
  Peter Vojtek, Mária Bieliková

Author Index ... 239
Index ... 241
Chapter 1
Web Semantization – Design and Principles
Jan Dědek, Alan Eckhardt, and Peter Vojtáš
Abstract. Web Semantization is a concept we introduce in this paper. We understand Web Semantization as an automated process of increasing the degree of semantic content on the web. Part of the content of the web is further usable; semantic content (usually annotated) is more suitable for machine processing. The idea is supported by models, methods, prototypes and experiments with a web repository, automated annotation tools producing third party semantic annotations, a semantic repository serving as a sample of the semantized web, and a proposal of an intelligent software agent. We are working on a proof of concept that even today it is possible to develop a semantic search engine designed for software agents. Keywords: Semantic Web, Semantic Annotation, Web Information Extraction, User Preferences.
1.1 Introduction

In their Scientific American 2001 article [3], Tim Berners-Lee, James Hendler and Ora Lassila unveiled a nascent vision of the semantic web: a highly interconnected network of data that could be easily accessed and understood by a desktop or handheld machine. They painted a future of intelligent software agents that would "answer to a particular question without our having to search for information or pore through results" (quoted from [15]). Lee Feigenbaum, Ivan Herman, Tonya Hongsermeier, Eric Neumann and Susie Stephens in their Scientific American 2007 article [15] conclude that "Grand visions rarely progress exactly as planned, but the Semantic Web is indeed emerging and is making online information more useful as

Jan Dědek · Alan Eckhardt · Peter Vojtáš
Department of Software Engineering, Charles University, Prague, Czech Republic,
e-mail: {dedek,eckhardt,vojtas}@ksi.mff.cuni.cz
Institute of Computer Science, Academy of Sciences of the Czech Republic
ever". L. Feigenbaum et al. support their claim with the success of semantic web technology in drug discovery and health care (and several further applications). These are mainly corporate applications with data annotated by humans. Ben Adida, when bridging the clickable and the Semantic Web with RDFa [1], also assumes human (assisted) activity in the annotation of newly created web resources. But what to do with the content of the web of today, or with pages published without annotations? The content of the web of today is too valuable to be lost for emerging semantic web applications. We are looking for a solution for how to make it accessible in a semantic manner. In this paper we would like to address the problem of semantization (enrichment) of current web content as an automated process of third party annotation, making at least a part of today's web more suitable for machine processing and hence enabling intelligent tools for searching and recommending things on the web [2] to use it. Our main idea is to fill a semantic repository with information that is automatically extracted from the web and make it available to software agents. We give a proof of concept that this idea is realizable and we give results of several experiments in this direction. Our web crawler (see Fig. 1.1) downloads a part of the web to the web repository (Web Store). Resources with semantic content can be uploaded directly to the semantic repository (Semantic Store). Extractor 1 (classifier) extracts those parts of the Web Store which are suitable for further semantic enrichment (we are able to enrich only a part of the resources). More semantic content is created by several extractors and annotators in several phases. The emphasis of this paper is on the automation and the usage of such extracted/enriched data.
[Figure: the Web Crawler fills the Web Store with HTML pages; Extractor 1 (classifier), Extractor 2 (linguistic) and Extractor 3 (semantic) successively produce new semantic content, which is added to the growing Semantic Store.]
Fig. 1.1 The process of semantization of the Web
The paper is structured as follows. The next section describes the main ideas introduced in this paper. Then two special sections devoted to the domain independent annotation (Sect. 1.3) and the domain dependent annotation (Sect. 1.4) follow. Section 1.5 describes our user agent. Our experiments are presented in Sect. 1.6 and at the end, just before the conclusion, there is a section about related work.
1.2 Idea of Web Semantization

1.2.1 Web Repository

The first idea is the idea of a web repository; in detail, it develops as follows. Semantic enrichment is in fact a data mining task (although a special one) – to add to web documents a piece of knowledge which is obvious for human perception but not for a machine. That means to annotate data by concepts from an ontology, which is the same as to map instances to an ontology. Such a data mining task will be easier to solve when there is a sort of repetition (modulo some similarity). We decided to enrich only textual content for the present, no multimedia (this substantially reduces the size of the information we have to store). Especially we restrict ourselves to pages with content consisting either dominantly of natural language sentences (let us call them textual pages) or containing a large number of table cells (let us call them tabular pages). Of course this division need not be disjoint, and will not cover the whole web. The Web repository is a temporal repository of Web documents crawled by a crawler. The repository supports document metadata, e.g. modification and creation dates, domain name, ratio of HTML code to text, content type, language, grammatical sentences etc. It keeps track of all changes in a document and simplifies access to and further processing of Web documents. We are experimenting with the Web crawler Egothor1 2.x and its Web repository. We have filled this repository with several terabytes of the textual part of the Czech web (domain *.cz), and it has greatly simplified access to this data.

1.2.2 Domain (In)Dependent Annotation

The second idea is to split the annotation process into two parts, one domain independent intermediate annotation and a second domain dependent, user directed annotation. Both should be automated, with some initial human assisted learning. The first part of the learning could require the assistance of a highly skilled expert; the second (probably faster) part should be doable by a user with average computer literacy. The first, domain independent annotation will serve as an intermediate annotation supporting further processing. This will run once on the whole suitable data. Hence the initial necessary human assistance in training can be done by a highly skilled expert (indeed it can be a longer process).

1 http://www.egothor.org/
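As an illustration of the textual/tabular distinction above, the following Python sketch classifies a page from two crude signals – a rough sentence count and the number of table cells. The thresholds and heuristics are assumptions made only for this example, not the classifier actually used with the Egothor repository.

    import re
    from html.parser import HTMLParser

    class PageStats(HTMLParser):
        """Collect simple counts used to decide between 'textual' and 'tabular' pages."""
        def __init__(self):
            super().__init__()
            self.cells = 0           # number of <td>/<th> elements seen
            self.text_parts = []     # visible text fragments

        def handle_starttag(self, tag, attrs):
            if tag in ("td", "th"):
                self.cells += 1

        def handle_data(self, data):
            self.text_parts.append(data)

    def classify_page(html, min_sentences=10, min_cells=20):
        """Return 'textual', 'tabular', 'both' or 'other' for a raw HTML page."""
        stats = PageStats()
        stats.feed(html)
        text = " ".join(stats.text_parts)
        # crude sentence count: runs of text terminated by ., ! or ?
        sentences = len(re.findall(r"[^.!?]+[.!?](?:\s|$)", text))
        textual = sentences >= min_sentences
        tabular = stats.cells >= min_cells
        if textual and tabular:
            return "both"
        return "textual" if textual else ("tabular" if tabular else "other")

A real classifier would of course use the richer repository metadata (language, HTML-to-text ratio, etc.); the point here is only that the decision can be driven by a few cheap page-level features.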
The second, domain dependent annotation part can consist of a large number of tasks with different domains and ontologies. It should be possible to execute it very fast and, if possible, with the assistance of a user with average computer skills. We can afford this because we have the intermediate annotation. The domain independent intermediate annotation can be done with respect to general ontologies. The first ontology is the general PDT tectogrammatical structure [19] (it is not exactly an ontology written in an ontology language, but could be considered this way), which captures the semantic meaning of a grammatical sentence. This is a task for computational linguistics; we make use of their achievements, which were originally focused on machine translation. Of course the tectogrammatical structure is not the only option for this purpose. English, for example, can be parsed in many different ways (most often according to some kind of grammar). All the other possibilities are applicable, but in our current solution we make use of the tree structure of the annotations. In this paper we will present our experience with the Czech language and the tectogrammatical structure that we have used for domain independent intermediate annotation of pages dominantly consisting of grammatical sentences. For structured survey or product summary pages (we call them "tabular pages") we assume that their structure is often similar and the common structure can help us to detect data regions and data records and possibly also attributes and values from detailed product pages. Here annotation tools will also be trained by humans – nevertheless only once for the annotation of the whole repository. We can see an interesting inversion in the use of similarity and repetition, depicted in Fig. 1.2. While for tabular pages we use similarities in the intermediate phase, for textual pages we use similarities in the domain dependent phase, where similar sentences often occur.
Type of annotation     | Tabular pages              | Textual pages
Intermediate general   | Uses similarities          | Does not use similarities
Domain dependent       | Does not use similarities  | Uses similarities

Fig. 1.2 Use of similarity in annotation approaches
Domain (task) dependent (on demand) annotation concerns only pages previously annotated by the general ontologies. This makes the second annotation faster and easier. The assistance of a human is assumed here for each new domain and ontology. For textual pages, repetitions make it possible to learn a mapping from structured tectogrammatical instances to an ontology. This domain (task) dependent task can be avoided by a collaborative filtering method, assuming there is enough user acquaintance.
1.2.3 Semantic Repository

The third important idea is to design a semantic repository. It should store all the semantic data extracted by the extraction tools and be accessed through a semantic search
engine. Of course many different problems (e.g. querying and scalability) are connected with this, but they are at least partially solved nowadays. Let us mention the work of our colleagues [11] that is available for use. The semantic repository should also contain some sort of uncertainty annotation besides the above-mentioned ontologies. The main reason is that the annotation process is error-prone, and in the future we can have different alternative annotation tools, so we can aggregate results. This aspect is not further described in the paper but can be found with many details in [7] and [12].
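To make the idea of a semantic repository with uncertainty annotation concrete, here is a minimal sketch using rdflib. The namespace, the ex:confidence property and the reification-based modelling are illustrative assumptions, not the infrastructure of [11] or the uncertainty model of [7, 12].

    from rdflib import BNode, Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF, XSD

    EX = Namespace("http://example.org/semweb#")   # hypothetical vocabulary

    def add_annotation(graph, subject, prop, value, confidence, source):
        """Store an extracted statement together with its confidence and provenance."""
        graph.add((subject, prop, value))
        # attach the uncertainty to a reified copy of the statement
        st = BNode()
        graph.add((st, RDF.type, RDF.Statement))
        graph.add((st, RDF.subject, subject))
        graph.add((st, RDF.predicate, prop))
        graph.add((st, RDF.object, value))
        graph.add((st, EX.confidence, Literal(confidence, datatype=XSD.decimal)))
        graph.add((st, EX.extractedBy, Literal(source)))

    g = Graph()
    offer = URIRef("http://example.org/offer/123")
    add_annotation(g, offer, EX.price, Literal(10000), confidence=0.8, source="Extractor 3")
    print(g.serialize(format="turtle"))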
1.2.4 User Software Agent

The last, but also important, idea is to design at least one software agent which will give evidence that the semantization really improved the usability of the Web. Besides using annotated data it should also contain some user preference dependent search capabilities. The process of a user agent searching and making use of the semantic search engine is represented in Fig. 1.3. Our main focus in this paper is on the agent and on the extractors.
[Figure: the agent sends a query to the Semantic Search Engine, which retrieves semantic data from the Semantic Store; the agent returns recommendations to the user based on user preferences and feedback.]
1.3 Intermediate Domain Independent Automated Annotation Web information extraction is often able to extract valuable data. We would like to couple the extraction process with the consecutive semantic annotation (initially human trained; later it should operate automatically). In this chapter we would like to describe details of our idea to split this process in two parts - domain independent and domain dependent.
1.3.1 Extraction Based on Structural Similarity First approach for domain independent intermediate information extraction and semantic annotation is to use the structural similarity in web pages containing large number of table cells and for each cell a link to detailed pages. This is often presented in web shops and on pages that presents more than one object (product offer). Each object is presented in a similar way and this fact can be exploited.
As web pages of web shops are intended for human usage, creators have to make their comprehension easier. Several years of experience with web shops has converged to a more or less similar design fashion. There are often cumulative pages with many products in the form of a table with cells containing a brief description and a link to a page with details about each particular product. There exist many extraction tools that can process web pages like this. The most complete recent surveys can be found in [4] and also in [18]. We come with a similar but also different solution, divided into domain dependent and domain independent phases. See below for details.
[Figure: a summary web page and its corresponding DOM tree contain a data region with data records (car offer 1, car offer 2), each with a link to a detail web page listing attribute labels and values.]
Fig. 1.4 Structural similarity in web pages
Our main idea (illustrated in Fig. 1.4) is to use a DOM tree representation of the summary web page and, by breadth first search, encounter similar subtrees. The similarity of these subtrees is used to determine the data region – a place where all the objects are stored. It is represented as a node in the DOM tree; underneath it there are the similar sub-trees, which are called data records. Our ontology engineering part is based on a bottom-up approach of building ontologies. The first classes are 'tabular page', 'data region' and 'data record', which are connected with the properties 'hasDataRegion' and 'hasDataRecord'. Also, between a data record and its detail page we have the property 'hasDetailPage'. Note that these concepts and properties have a soft computing nature which can be modeled by a fuzzy ontology. For instance, being an instance has a degree of membership depending on the precision of our algorithm (dependent on the similarity measures used to detect data regions and/or records). Using some heuristics we can detect which resources are described on the page (a tabular page describes rdfs:Resources). To detect possible attributes and even their values we explore similarities between the content of a data record and the corresponding content of a detailed page. The main idea is to find the same pieces of text in the data record and in the detail page. This occurs often, because a brief summary of the object (e.g. a product)
is present in the data record. Somewhere near the attribute values, the names of the attributes are located in the detail page. These names of attributes can be extracted. The extraction of attribute names is easy because on every detail page the names will be the same. The names of attributes can be used in our low-level ontology as names of properties – each object will be represented by the URL of the detail page and a set of properties that consists of the attribute values found on the detail page. This idea is a modification of our previous implementation from [12]. Here we decided to split the extraction process into the domain independent and the domain dependent part (see Sect. 1.4.1), so the generally applicable algorithm described here is separated from the domain dependent parts that can be supported with a domain and an extraction ontology (see Sect. 1.4.1).
Algorithm 1.1 Finding data regions in DOM
 1: function BFSfindDR(LevelNodes)
 2:   NextLevelNodes ← ∅
 3:   regions ← ∅
 4:   for all Node ∈ LevelNodes do
 5:     regions ← regions ∪ identDataRegions(normalized(Node.children))
 6:     NextLevelNodes ← NextLevelNodes ∪ {c ∈ Node.children | c ∉ regions}
 7:   end for
 8:   if NextLevelNodes ≠ ∅ then
 9:     return regions ∪ BFSfindDR(NextLevelNodes)
10:   else
11:     return regions
12:   end if
13: end function
Algorithm 1.1 describes the procedure for finding data regions. The input of the function is the root element of the DOM tree of a web page. The function BFSfindDR is recursive; each run processes one level of the tree. The algorithm proceeds across each node of the input LevelNodes (4) and tries to identify if some of the node's children represent a data region (5). If so, those children are excluded from further processing (6). The nodes that are not in any data region are added to NextLevelNodes. Finally, if there are some nodes in NextLevelNodes, the function is called recursively. Data regions found on the page form the output of the function.
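The helpers normalized and identDataRegions are left unspecified above; the following Python sketch shows one plausible realization – an assumption for illustration, not the authors' implementation – based on comparing shallow tag signatures of adjacent sibling subtrees.

    from dataclasses import dataclass, field

    @dataclass
    class DomNode:
        tag: str
        children: list = field(default_factory=list)

    def normalized(node):
        """Shallow signature of a subtree: its tag plus the tags of its children."""
        return (node.tag, tuple(child.tag for child in node.children))

    def ident_data_regions(siblings, min_records=2):
        """Group adjacent siblings with identical signatures into data regions."""
        regions, run = [], []
        for node in siblings:
            if run and normalized(node) == normalized(run[-1]):
                run.append(node)
            else:
                if len(run) >= min_records:
                    regions.append(run)
                run = [node]
        if len(run) >= min_records:
            regions.append(run)
        return regions                     # each region is a list of data-record subtrees

    def bfs_find_dr(level_nodes):
        """Breadth-first search for data regions, mirroring Algorithm 1.1."""
        regions, next_level = [], []
        for node in level_nodes:
            found = ident_data_regions(node.children)
            regions.extend(found)
            in_region = {id(n) for region in found for n in region}
            next_level.extend(c for c in node.children if id(c) not in in_region)
        if next_level:
            regions.extend(bfs_find_dr(next_level))
        return regions

    page = DomNode("body", [DomNode("table", [DomNode("tr", [DomNode("td")]),
                                              DomNode("tr", [DomNode("td")]),
                                              DomNode("caption")])])
    print(len(bfs_find_dr([page])))        # 1 data region with two data records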
1.3.2 A Method Based on Czech Linguistics

Our approach to Web information extraction from textual pages is based on Czech linguistics and NLP tools. We use a chain of linguistic analyzers [16, 6, 17] that processes the text presented on a web page and produces linguistic (syntactic) trees corresponding to particular sentences. These trees serve as the basis of our semantic extraction.
Unlike the usual approaches to the description of English syntax, the Czech syntactic descriptions are dependency-based, which means that every edge of a syntactic tree captures the relation of dependency between a governor and its dependent node. Especially the tectogrammatical (deep syntactic) level of representation [19] is closer to the meaning of the sentence. The tectogrammatical trees (an example of such a tree is in Fig. 1.5) have a very convenient property of containing just the type of information we need for our purpose (extraction of semantic information), namely the information about the inner participants of verbs – actor, patient, addressee etc. So far this linguistic tool does not work with the context of the sentence and hence does not exploit a possible frequent occurrence of similar sentences.
Fig. 1.5 A tectogrammatical tree of sentence: “Two men died on the spot in the demolished Trabant . . . ”
1.4 Domain Dependent Automated Annotation

The second phase of our extraction-annotation process is domain dependent. It makes use of the previous intermediate general (domain independent) annotation. Our goal is to make this process as fast and as easy as possible, i.e. to be trained very fast (possibly at query time) and precisely by any user with average computer skills.
1.4.1 Extraction and Annotation Based on Extraction Ontology

A domain ontology is the basis for an extraction ontology. The extraction ontology [14] extends the domain ontology by adding some additional information about how the instances of a class are presented on web pages. We used regular expressions to determine the values of the properties. These regular expressions are matched against relevant pieces of text found on the web page in the previous general annotation phase. The regular expressions are not dependent on an extraction tool, so the extraction ontology is general – it contains generally applicable information, which can be shared and imported into a particular tool from a variety of locations. So far the creation of an extraction ontology has to be done by a very experienced user and has to be verified on a set of web pages. In the future we plan to invest more effort and soft computing techniques in automating this part. In this paper we do not deal with learning of the extraction ontology.
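A minimal sketch of how such regular expressions could be attached to ontology properties and matched against the pre-annotated text fragments follows; the property names and patterns are illustrative for a used-car domain and are not taken from [14].

    import re

    # an extraction-ontology fragment: ontology property -> regular expression
    # (the property names and patterns are illustrative assumptions)
    EXTRACTION_PATTERNS = {
        "price":   re.compile(r"(\d[\d\s.,]*)\s*(?:Kč|CZK|EUR|€)", re.IGNORECASE),
        "mileage": re.compile(r"(\d[\d\s.,]*)\s*km\b", re.IGNORECASE),
        "year":    re.compile(r"\b(19[89]\d|20[0-2]\d)\b"),
    }

    def extract_properties(text_fragments):
        """Match each property's pattern against text found in the general annotation phase."""
        instance = {}
        for prop, pattern in EXTRACTION_PATTERNS.items():
            for fragment in text_fragments:
                match = pattern.search(fragment)
                if match:
                    instance[prop] = match.group(1).strip()
                    break
        return instance

    print(extract_properties(["Škoda Octavia, r.v. 2005", "145 000 km", "Cena: 139 000 Kč"]))
    # -> {'price': '139 000', 'mileage': '145 000', 'year': '2005'}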
1.4.2 Linguistic-Based Domain Dependent Annotation

Assume we have pages annotated by a linguistic annotator and we have a domain ontology. The extraction method we have used is based on extraction rules. An example of such an extraction rule is in Fig. 1.6 (on the left side). These rules represent common structural patterns that occur in sentences (more precisely, in the corresponding trees) with the same or similar meaning. Mapping the extraction rules to the concepts of the target ontology enables the semantic extraction. An example of such a mapping is demonstrated in Fig. 1.6. This method is not limited to the Czech language and can be used with any structured linguistic representation. We have published this method in [9]; more details can be found there.
[Figure: an extraction rule – a tree pattern over tectogrammatical nodes constrained by t_lemma and m/tag values – mapped to an ontology class Incident with properties actionManner (String), actionType (String), negation (Boolean) and hasParticipant (Instance), and a class Participant with properties participantType (String) and participantQuantity (Integer).]
Fig. 1.6 An example of extraction rule and its mapping to ontology
We experimented with obtaining extraction rules in two ways:
1. Rules and mappings were designed manually (like the rule in Fig. 1.6).
2. Rules and mappings were learned using Inductive Logic Programming (ILP) methods (see the following section).
Finally, having this mapping, we can extract instances of the target ontology from the answer corresponding to an extraction rule. This answer is given by the semantic repository, and the instances of the ontology are also stored there.
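As a schematic illustration of how a rule of the kind shown in Fig. 1.6 could be matched against tectogrammatical nodes and mapped to ontology properties, consider the following sketch; the node attributes, the rule and the mapping are simplified assumptions, not the authors' actual rule language.

    from dataclasses import dataclass, field

    @dataclass
    class TNode:                          # simplified tectogrammatical node
        t_lemma: str
        functor: str
        children: list = field(default_factory=list)

    # a rule: constraints on the matched node plus a mapping of child functors
    # to ontology properties (both simplified for illustration)
    RULE = {
        "match": {"functor": "pred"},                    # match predicate nodes
        "map":   {"act": "hasParticipant",               # actor   -> participant
                  "pat": "hasParticipant",               # patient -> participant
                  "mann": "actionManner"},               # manner  -> actionManner
    }

    def apply_rule(node, rule):
        """Return ontology property/value pairs extracted from one matching node."""
        if any(getattr(node, attr) != value for attr, value in rule["match"].items()):
            return {}
        extracted = {"actionType": node.t_lemma}
        for child in node.children:
            prop = rule["map"].get(child.functor)
            if prop == "hasParticipant":
                extracted.setdefault(prop, []).append(child.t_lemma)
            elif prop:
                extracted[prop] = child.t_lemma
        return extracted

    sentence = TNode("zemrit", "pred", [TNode("muz", "act"), TNode("misto", "loc")])
    print(apply_rule(sentence, RULE))     # {'actionType': 'zemrit', 'hasParticipant': ['muz']}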
1.4.3 Using Inductive Logic Programming (ILP)

ILP [20] is a method for generating rules that describe some properties of data. ILP takes three kinds of input:
• Positive examples E+ – objects that have the desired property.
• Negative examples E- – objects that do not have the desired property.
• Background knowledge – facts about the domain (so far we do not consider background knowledge in the form of rules).
ILP tries to generate rules such that all positive examples and no negative example satisfy them. It uses concepts from the background knowledge in the body of the rules. The main advantage of ILP is that it can be trained to mine arbitrary data described in predicate logic – the process of learning depends only on the positive and the negative examples and on the amount of information we have about them. The more information we have, the more precise the learning will be. Thanks to the fact that ILP learns from positive and negative examples, we can propose an assisted learning procedure for learning and also tuning the extractor (described in the previous section). The process is such that the extractor gets pure textual or semantically (pre)annotated data from a web page and presents it to the user. The user then annotates the data or only corrects it, saying which parts are well and which are badly annotated. The extractor can be taught and tuned to the desired goal in this way. Of course the user has to understand the semantics of the annotations – so the user has to be an expert in the sense of understanding the target ontology. Yet this ontology can be quite simple in the case we are interested only in a specific kind of information, e.g. the number of people injured during a car accident, as in our experiments (see the next section) and motivation. We discovered that we can use a straightforward transformation of linguistic trees (see an example in Fig. 1.5) to predicates of ILP (an example of the transformation is in Fig. 1.7), and the learning procedure responds with significant results, which are presented in the next section. In Fig. 1.5 we can see the relationship between particular words of a sentence and nodes of the tectogrammatical tree – the highlighted node of the tree corresponds with the word 'two' in the sentence (also highlighted). This relationship allows a propagation of the information from the user (who annotates just the text of the sentence) to the ILP learning procedure.
%%% Two men died on the spot in the demolished Trabant – a senior 82
%%% years old and another man, who's ...
%%%%%%%% Nodes %%%%%%%%
tree_root(node0_0).
node(node0_0).
id(node0_0, t_jihomoravsky49640_txt_001_p1s4).
node(node0_1).
t_lemma(node0_1, zemrit).
functor(node0_1, pred).
gram_sempos(node0_1, v).
node(node0_2).
t_lemma(node0_2, x_perspron).
functor(node0_2, act).
gram_sempos(node0_2, n_pron_def_pers).
...
%%%%%%%% Edges %%%%%%%%
edge(node0_0, node0_1).
edge(node0_1, node0_2).
edge(node0_1, node0_3).
...
edge(node0_34, node0_35).
%%%%%%%% Injury %%%%%%%%
injured(t_jihomoravsky49640_txt_001_p1s4).
Fig. 1.7 Sample from prolog representation of a sentence
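One possible way – assumed for illustration, not the authors' exact tool – to serialize a parsed tree into Prolog facts of the shape shown in Fig. 1.7:

    def tree_to_prolog(sentence_id, root):
        """Serialize a tree into Prolog facts of the shape used in Fig. 1.7.
        Nodes are dicts with keys: name, t_lemma, functor, gram_sempos, children."""
        facts = [f"tree_root({root['name']}).",
                 f"node({root['name']}).",
                 f"id({root['name']}, {sentence_id})."]
        edges = []

        def visit(node):
            for child in node.get("children", []):
                facts.append(f"node({child['name']}).")
                for attr in ("t_lemma", "functor", "gram_sempos"):
                    if attr in child:
                        facts.append(f"{attr}({child['name']}, {child[attr]}).")
                edges.append(f"edge({node['name']}, {child['name']}).")
                visit(child)

        visit(root)
        return "\n".join(facts + edges)

    tree = {"name": "node0_0", "children": [
        {"name": "node0_1", "t_lemma": "zemrit", "functor": "pred",
         "gram_sempos": "v", "children": []}]}
    print(tree_to_prolog("t_example_p1s1", tree))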
1.5 User Agent

One of the aspects of our proposed agent is to compare different offers (e.g. used car offers) based on their attributes. We call this a "decathlon effect", or in economic terminology utilizing possibly "conflicting objectives". The user has some preferences on a car offer's attributes; the user wants low price and low consumption. These preferences are called objectives. We can use fuzzy logic notation and to each attribute A_i associate its fuzzy function f_i : D_{A_i} → [0, 1], which normalizes the domain of the attribute (the higher the fuzzy value, the higher the user preference). Some of these objectives are conflicting, e.g. wanting low price and low consumption. There is no clear winner; some of the car offers may be incomparable – an Audi A4 has a high price but also lower consumption; on the other hand a Tatra T613 may have a lower price but also higher consumption. Regarding the two objectives "low price" and "low consumption", these two cars are incomparable. This undecidability is solved by an aggregation function (in economics called a utility function), often denoted by @. This function takes the preferences on attributes and combines them into one global score of the car offer as a whole. In this way, the agent can order all car offers by their score. The aggregation function may be a weighted average, for example:

@(Price, Consumption) = (3 · Price + 1 · Consumption) / 4
where Price and Consumption are the degrees to which the car offer satisfies the user criteria. We have spent much time on the research of user modelling; these ideas are discussed in detail, for example, in the recent [13]. Another phenomenon of our approach is that we take into account the (un)certainty of correct extraction. In our system we have for example two different
[Figure: a rating vs. reliability chart with two offers of the same Audi A4 – offer A (1.000 €, 10.000 km) with a high rating but low reliability, and offer B (10.000 €, 1.000 km) with a lower rating but high reliability.]
Fig. 1.8 An example of reliable and unreliable rating
pieces of information about the same offer of an Audi A4. Let us denote "the first Audi A4" as A and the second as B (see Fig. 1.8). A has a much higher rating than B. However, A also has much lower reliability. The reliability of the rating of A and B is determined by the extractors that made the annotation, or by later user feedback finding the first extractor mistaken. This may be caused, for example, by an error of the extractor that switched the price (10.000 Euro) with the distance travelled (1.000 km). Thus A has a much better price than B and consequently also a better rating. We use a second order utility function, trained to a specific user profile, to combine the preference degree and the reliability.
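The scoring described in this section can be illustrated by the following sketch; the fuzzy normalization functions, the weights and the product used to combine rating with reliability are assumptions made for the example, not the trained user profile.

    def cheap(price, worst=300000):
        """Fuzzy objective 'low price': 1.0 for a free car, 0.0 at the assumed worst price."""
        return max(0.0, 1.0 - price / worst)

    def frugal(consumption, worst=15.0):
        """Fuzzy objective 'low consumption' (litres per 100 km)."""
        return max(0.0, 1.0 - consumption / worst)

    def rating(offer):
        """The weighted average @ from the text: (3 * Price + 1 * Consumption) / 4."""
        return (3 * cheap(offer["price"]) + 1 * frugal(offer["consumption"])) / 4

    def score(offer):
        """Second-order combination of rating and extraction reliability (here a product)."""
        return rating(offer) * offer.get("reliability", 1.0)

    offers = [
        {"name": "A (Audi A4)", "price": 25000,  "consumption": 7.0, "reliability": 0.2},
        {"name": "B (Audi A4)", "price": 250000, "consumption": 7.0, "reliability": 0.9},
    ]
    for offer in sorted(offers, key=score, reverse=True):
        print(offer["name"], round(rating(offer), 3), round(score(offer), 3))

With these assumed numbers the more reliable offer B overtakes the higher-rated but unreliable offer A, which is the effect the second order utility is meant to capture.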
1.6 Experiments

Our experiment was to enable our agent to access semantic information from the web. We tried to combine information from car offers with information about car accidents. We wanted to find out the number of injured persons during car accidents and match them to the offers according to the car make and model. We have used firemen reports; some of them were about car accidents, some were not. Each report was split into sentences and each sentence was linguistically analyzed and transformed into a tectogrammatical tree, as described above. These trees were transformed to a set of Prolog facts (see Fig. 1.7). Sentences which talk about an injury during a car accident were manually tagged by the predicate injured(X), where X is the ID of the sentence. Those sentences that do not talk about injured persons during a car accident were tagged as :-injured(X), which represents a negative example. This tagging can be done even by a user inexperienced in linguistics and ILP, but the user has to understand the semantics of the information he is tagging (it usually means that the user has to understand the target ontology). These tagged sentences were the input for ILP; we have used 22 sentences as positive examples and 13 sentences as negative examples. We used Progol [21] as the ILP software. The rules ILP found are shown in Fig. 1.9. The first four rules are overfitted to specific sentences. Only the last two represent generally applicable rules. But they do make sense – 'zranit' means 'to hurt' and
injured(A) :- id(B,A), id(B,t_57770_txt_001_p5s2).
injured(A) :- id(B,A), id(B,t_60375_txt_001_p1s6).
injured(A) :- id(B,A), id(B,t_57870_txt_001_p8s2).
injured(A) :- id(B,A), id(B,t_57870_txt_001_p1s1).

injured(A) :- id(B,A), edge(B,C), edge(C,D), t_lemma(D,zranit).
injured(A) :- id(B,A), edge(B,C), edge(C,D), t_lemma(D,nehoda).

Fig. 1.9 Rules mined by ILP
‘nehoda’ means ‘an accident’ in Czech. These two rules mean that the root element is connected either to a noun that means an accident or to a verb that means to hurt. We tested these rules on the set of 15 positive and 23 negative examples. The result accuracy was 86.84%. Figure 1.10 shows a 4-fold table where rows depict classification by the rule set and columns classification by the user. Results are promising, nevertheless tagging is done by unskilled user for whom the tagging is usually tedious.
          A    ¬A
    P    11     1
   ¬P     4    22

Fig. 1.10 Results on the test set
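The reported accuracy can be checked directly from the four-fold table: 11 + 22 of the 38 test sentences are classified in agreement with the user.

    # four-fold table from Fig. 1.10: rows = rule set (P, not P), columns = user (A, not A)
    tp, fp = 11, 1     # rule set says P:     11 agree with the user, 1 does not
    fn, tn = 4, 22     # rule set says not P: 4 were actually positive, 22 agree

    accuracy = (tp + tn) / (tp + fp + fn + tn)
    print(f"accuracy = {accuracy:.2%}")    # 86.84%, matching the value reported above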
In this case we learned a set of rules that identifies relevant sentences – roots of relevant tectogrammatical trees. We have made some additional experiments (some of them were published in [8]) that seem to lead to our goal. There is still a long way to go, but we understand these results as a proof of concept that ILP can be used for finding different kinds of information present in the nodes and structure of linguistic trees. For example, we extracted the number of injured people from relevant trees when we modified the training data. And there is still the possibility to design extraction rules manually if the learning method fails.
1.7 Related Work Of course the idea of exploitation of information extraction tools for the creation of the semantic web is not entirely novel. Above all we have to mention the very recognized work of IBM Almaden Research Center: the SemTag [10]. This work demonstrated that an automated semantic annotation can be applied in a large scale. In their experiment they annotated about 264 million web pages and generated about 434 millions of semantic tags. They also provided the annotations as a Semantic Label Bureau – a HTTP server providing annotations for web documents of 3rd parties.
The architecture and component infrastructure of the Seeker system from [10] is a great inspiration to us. In this paper we concentrate more on the richness and complexity of semantic annotations and on an overview of the personalized software user agent making use of the annotations. Another similar system is the KIM platform2 (lately extended in [22]) – another good example of a large-scale automated semantic annotation system. Similarly to SemTag, the KIM platform is oriented to the recognition of named entities and a limited amount of key-phrases. These annotations allow a very interesting analysis of co-occurrence and ranking of entities, together with basic useful semantic search and visualization of semantic data. The interesting idea of "the self-annotating web" was proposed in [5]. The idea is to use globally available Web data and structures to semantically annotate – or at least facilitate annotation of – local resources. It can be seen as an inversion of ours (because we intend to make the semantic annotations globally available), but in fact the method proposed in [5] can be integrated into an automated semantic annotation system with many benefits on both sides. To conclude this section we refer to two survey papers, on Web Information Extraction Systems [4] and Semantic Annotation Platforms [23], which describe many other systems that are compatible with the idea of Web Semantization and can also be exploited in the process of gradual semantization of the web. As we mentioned above, domain dependent IE systems need initial learning for each new task. We take this into account and propose to use the learned systems only for suitable web pages selected by a prompt classifier.
1.8 Conclusions and Further Work

In this paper we have developed the idea of web semantization, which can help to arch over the gap between the Web of today and the Semantic Web. We have presented a proof of concept that even today it is possible to develop a semantic search engine designed for software agents. The particular contributions consist of two sorts of automated third party annotation of existing web resources, the idea of a semantic repository serving as a sample of the semantized web with an extension of the ontology model for additional annotation (e.g. uncertainty annotation), and a proposal of an intelligent user agent. Future work goes in two directions. The first is the integration of specific parts of the system, the automation of the whole process, and extensive experiments with a larger number of web pages and different users. The second is improving special parts of the system – either making them more precise or making them more automatic (able to be trained by a less qualified user).

Acknowledgements. This work was partially supported by Czech projects: IS-1ET100300517, GACR-201/09/H057, GAUK 31009 and MSM-0021620838.

2 http://www.ontotext.com/kim
References

1. Adida, B.: Bridging the clickable and semantic webs with RDFa. ERCIM News - Special: The Future Web 72, 24–25 (2008), http://ercim-news.ercim.org/content/view/334/528/
2. Berners-Lee, T.: The web of things. ERCIM News - Special: The Future Web 72, 3 (2008), http://ercim-news.ercim.org/content/view/343/533/
3. Berners-Lee, T., Hendler, J., Lassila, O.: The semantic web, a new form of web content that is meaningful to computers will unleash a revolution of new possibilities. Scientific American 284(5), 34–43 (2001)
4. Chang, C.H., Kayed, M., Girgis, M.R., Shaalan, K.F.: A survey of web information extraction systems. IEEE Transactions on Knowledge and Data Engineering 18(10), 1411–1428 (2006), http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1683775
5. Cimiano, P., Handschuh, S., Staab, S.: Towards the self-annotating web. In: WWW 2004: Proceedings of the 13th international conference on World Wide Web, pp. 462–471. ACM, New York (2004), http://doi.acm.org/10.1145/988672.988735
6. Collins, M., Hajič, J., Brill, E., Ramshaw, L., Tillmann, C.: A Statistical Parser of Czech. In: Proceedings of the 37th ACL Conference, pp. 505–512. University of Maryland, College Park (1999)
7. Dědek, J., Eckhardt, A., Galamboš, L., Vojtáš, P.: Discussion on uncertainty ontology for annotation and reasoning (a position paper). In: da Costa, P.C.G. (ed.) URSW 2008 - Uncertainty Reasoning for the Semantic Web. The 7th International Semantic Web Conference, vol. 4 (2008)
8. Dědek, J., Eckhardt, A., Vojtáš, P.: Experiments with czech linguistic data and ILP. In: Železný, F., Lavrač, N. (eds.) ILP 2008 - Inductive Logic Programming (Late Breaking Papers), pp. 20–25. Action M, Prague (2008)
9. Dědek, J., Vojtáš, P.: Computing aggregations from linguistic web resources: a case study in czech republic sector/traffic accidents. In: Dini, C. (ed.) Second International Conference on Advanced Engineering Computing and Applications in Sciences, pp. 7–12. IEEE Computer Society, Los Alamitos (2008)
10. Dill, S., Eiron, N., Gibson, D., Gruhl, D., Guha, R., Jhingran, A., Kanungo, T., Rajagopalan, S., Tomkins, A., Tomlin, J.A., Zien, J.Y.: Semtag and seeker: bootstrapping the semantic web via automated semantic annotation. In: WWW 2003: Proceedings of the 12th international conference on World Wide Web, pp. 178–186. ACM, New York (2003), http://doi.acm.org/10.1145/775152.775178
11. Dokulil, J., Tykal, J., Yaghob, J., Zavoral, F.: Semantic web infrastructure. In: Kellenberger, P. (ed.) First IEEE International Conference on Semantic Computing, pp. 209–215. IEEE Computer Society, Los Alamitos (2007)
12. Eckhardt, A., Horváth, T., Maruščák, D., Novotný, R., Vojtáš, P.: Uncertainty Issues and Algorithms in Automating Process Connecting Web and User. In: da Costa, P.C.G., d'Amato, C., Fanizzi, N., Laskey, K.B., Laskey, K.J., Lukasiewicz, T., Nickles, M., Pool, M. (eds.) URSW 2005 - 2007. LNCS (LNAI), vol. 5327, pp. 207–223. Springer, Heidelberg (2008)
13. Eckhardt, A., Horváth, T., Vojtáš, P.: Learning different user profile annotated rules for fuzzy preference top-k querying. In: Prade, H., Subrahmanian, V.S. (eds.) SUM 2007. LNCS (LNAI), vol. 4772, pp. 116–130. Springer, Heidelberg (2007)
14. Embley, D.W., Tao, C., Liddle, S.W.: Automatically extracting ontologically specified data from HTML tables of unknown structure. In: Spaccapietra, S., March, S.T., Kambayashi, Y. (eds.) ER 2002. LNCS, vol. 2503, pp. 322–337. Springer, Heidelberg (2002)
15. Feigenbaum, L., Herman, I., Hongsermeier, T., Neumann, E., Stephens, S.: The semantic web in action. Scientific American 297, 90–97 (2007), http://thefigtrees.net/lee/sw/sciam/semantic-web-in-action
16. Hajič, J.: Morphological Tagging: Data vs. Dictionaries. In: Proceedings of the 6th Applied Natural Language Processing and the 1st NAACL Conference, Seattle, Washington, pp. 94–101 (2000)
17. Klimeš, V.: Transformation-based tectogrammatical analysis of czech. In: Sojka, P., Kopeček, I., Pala, K. (eds.) TSD 2006. LNCS (LNAI), vol. 4188, pp. 135–142. Springer, Heidelberg (2006)
18. Liu, B.: Web Data Mining. Springer, Heidelberg (2007), http://dx.doi.org/10.1007/978-3-540-37882-2
19. Mikulová, M., Bémová, A., Hajič, J., Hajičová, E., Havelka, J., Kolářová, V., Kučová, L., Lopatková, M., Pajas, P., Panevová, J., Razímová, M., Sgall, P., Štěpánek, J., Urešová, Z., Veselá, K., Žabokrtský, Z.: Annotation on the tectogrammatical level in the Prague Dependency Treebank. Annotation manual. Tech. Rep. 30, ÚFAL MFF UK, Prague, Czech Rep. (2006)
20. Muggleton, S.: Inductive logic programming. New Generation Comput. 8(4), 295 (1991)
21. Muggleton, S.: Learning from positive data. In: Muggleton, S. (ed.) ILP 1996. LNCS, vol. 1314, pp. 358–376. Springer, Heidelberg (1997)
22. Popov, B., Kiryakov, A., Kitchukov, I., Angelov, K., Kozhuharov, D.: Co-occurrence and ranking of entities based on semantic annotation. Int. J. Metadata Semant. Ontologies 3(1), 21–36 (2008), http://dx.doi.org/10.1504/IJMSO.2008.021203
23. Reeve, L., Han, H.: Survey of semantic annotation platforms. In: SAC 2005: Proceedings of the 2005 ACM symposium on Applied computing, pp. 1634–1638. ACM, New York (2005), http://doi.acm.org/10.1145/1066677.1067049
Chapter 2
Next Generation SOA: Can SOA Survive Cloud Computing? George Feuerlicht
Abstract. SOA has been widely adopted as the architecture of choice to address the requirements of modern organizations, but there are recent indications that many companies are not willing to make the substantial investment required for the transition to SOA in the current economic climate. The recent emergence of Cloud Computing is accelerating the trend of delivering enterprise applications and IT infrastructure in the form of externally provided services. The convergence of Cloud Computing, SaaS and Web 2.0 is redefining the very basis on which the computer industry has operated for decades, challenging some of the key SOA assumptions and principles. In this paper we discuss the challenges that Cloud Computing presents to established concepts in enterprise computing in general, and consider the specific architectural challenges and their implications for the future development of SOA. Keywords: SOA, SaaS, Cloud Computing, Web 2.0.
2.1 Introduction

Service-Oriented Architecture (SOA) has been widely promoted as the architecture capable of addressing the business needs of modern organizations in a cost-effective and timely manner. Perceived SOA benefits include flexibility and agility, improved alignment between business processes and the supporting IT infrastructure, easy integration, and in particular, reduced TCO (Total Cost of Ownership). Many organizations view SOA as an architectural solution to the problem of integration of legacy applications implemented as separate silos of mutually incompatible systems. Such enterprise computing environments are characterized by excessive heterogeneity, duplication of functionality, poor hardware utilization, and consequently a high cost of ownership.

George Feuerlicht
Department of Information Technology, Prague University of Economics, Czech Republic
e-mail: [email protected]
Faculty of Engineering and Information Technology, UTS, Australia
Although the precise impact and level of adoption of SOA is difficult to ascertain, it was estimated by Gartner Research in September 2008 that most European and North American companies have either already adopted SOA or are planning to adopt SOA over the next 12 months [1]. However, the same report concluded that many companies are not willing to make the substantial investment required for the transition to SOA in the current economic climate. Disillusionment about SOA is also evident among some experts, who are beginning to argue that SOA has failed to deliver on its promise [2]. Part of the problem relates to the widely differing definitions of SOA; SOA is regarded by many as a universal solution for IT ills, and this expectation is impossible to meet in practice. SOA is best described as a set of architectural concepts and principles (an architectural style) that include systems development methods, techniques and related technologies that enable the implementation of service-oriented enterprise applications. SOA applications are compositions of coarse-grained, loosely coupled, autonomous services that operate without the need to maintain state between service invocations. However, while such architectural principles are essentially sound and well-suited to today's dynamic computing environment, an architecture based on such principles addresses only a part of the problem that organizations face when implementing IT solutions. Ultimately, enterprise applications must be implemented using hardware and software technologies provided by IT vendors and developed by IT professionals (on-premise) within individual end-user organizations. The success of such projects depends on numerous factors including the stability of the technology platform, the correctness of the analysis, and not least the level of expertise and skills of IT architects, application designers and developers. Numerous studies indicate that the success rate of IT projects is unacceptably low. For example, a recent CHAOS Summary 2009 report by The Standish Group, based on a survey of 400 organizations, concludes that only 32 percent of IT projects were considered successful (having been completed on time, on budget and with the required features and functions), 24 percent of IT projects were considered failures (having been cancelled before they were completed, or having been delivered but never used), and the rest (44 percent) were considered challenged (they were finished late, over budget, or with fewer than the required features and functions) [3]. It can be argued that end-user organizations, such as banks, airlines, government departments, and various other types of businesses, are not well-equipped to manage the development, customization and operation of complex enterprise applications and to maintain the operation of the underlying IT infrastructure. IT is not a core competency for such organizations and often represents a substantial drain on resources that could be better utilized on core business activities. In the drive to reduce costs during the present economic downturn, organizations are actively investigating alternatives to on-premise IT solutions. Cloud Computing subscription models that include SaaS (Software as a Service), PaaS (Platform as a Service) and IaaS (Infrastructure as a Service) are becoming viable alternatives that are particularly attractive to SMEs (Small and Medium Enterprises) as little or no upfront capital expenditure is required.
However, the adoption of these service models introduces its own set of problems as organizations need to be able to effectively
integrate and manage externally sourced services, potentially from a range of different providers, and to incorporate these services into their existing enterprise IT architecture. This situation is further complicated by the adoption of Web 2.0 (i.e. social networking platforms, Wikis, Blogs, mashups, Web APIs, etc.), which is changing the character of enterprise applications. The emerging enterprise computing scenario involves organizations interacting with multiple external providers of various types of IT services, as well as with partner organizations within the context of supply chains and other types of collaborative scenarios, as illustrated in Fig. 2.1. As the adoption of Cloud Computing accelerates, it is likely that most enterprise computing needs will be addressed by external providers of software and infrastructure services, and internally organizations will be mainly concerned with service integration and with managing the interface with service providers. This will allow focus on core business processes and the implementation of supporting applications that can deliver competitive advantage, freeing the end-user organization from the burden of having to develop and maintain large-scale IT systems.
Fig. 2.1 New enterprise computing environment
Although the concept of SOA has been evolving over time, it remains focused on intra-enterprise computing and does not fully address the challenges associated with externally provided IT services. To be effective in today's enterprise computing environment, the scope of SOA needs to be extended to encompass different types of service models, including SaaS, IaaS, PaaS and Web 2.0 services. These important service models have been evolving in parallel with SOA, and have now reached a stage of maturity and adoption that makes the SaaS, IaaS, and PaaS platforms essential components of many enterprise computing environments. To remain relevant in the future, SOA must provide a comprehensive architectural framework for effective integration and management of IT resources that unifies internally (on-premise) and externally sourced services. At the same time SOA needs to be sufficiently light-weight to avoid time-consuming and costly architectural projects. Another important function of SOA is to assist organizations in managing the transition from traditional to services-based enterprise computing environments. In this paper we first describe Cloud Computing and its constituent models SaaS, IaaS, and PaaS (Sect. 2.2), and the Web 2.0 Internet computing platform (Sect. 2.3). We then consider the impact of wide adoption of Cloud Computing and Web 2.0 on
the future directions for SOA and argue that SOA must be extended to incorporate both Cloud Computing and Web 2.0 to enable organizations to take advantage of this new computing environment and remain relevant to enterprise computing in the future, Sect. 2.4.
2.2 Cloud Computing Cloud Computing involves making computing, data storage, and software services available via the Internet, effectively transforming the Internet into a vast computing platform. A key characteristic of Cloud Computing is that services can be rented on a pay-as-you-use basis, enabling client organizations to adjust the usage of IT resources according to their present needs. This so-called elasticity (i.e. up or down scalability) and the ability to rent a resource for a limited duration of time allow client organizations to manage their IT expenditure much more effectively than in situations where they directly own these resources. While Cloud Computing is a relatively recent concept popularized by companies such as IBM, Amazon.com, Google and Salesforce.com [4], the underlying technological solutions that include cluster computing [5], Grid computing [6], Utility Computing [7] and virtualization [8] have evolved over the last decade. In this section we discuss the SaaS (Sect. 2.2.1), IaaS (Sect. 2.2.2), and PaaS (Sect. 2.2.3) models that are generally considered as the constituent models under the Cloud Computing umbrella. While there are clearly major differences between these models, they are all part of the same outsourcing trend that is leading to the removal of IT from end-user organizations. To appreciate its impact, it is important to understand Cloud Computing in the historical context of the evolution of enterprise computing, Fig. 2.2. It is interesting to note that software services started with Data Processing Bureaus in the 1960s that provided services such as payroll processing to organizations that could not afford computers and did not have in-house computing skills. As computers became more affordable, organizations started to expand on-premise computing facilities and develop their own applications. This trend continued with the purchase of various types of packaged applications, culminating in the wide-ranging adoption of ERP (Enterprise Resource Planning) systems in the late 1990s. As the costs of maintaining on-premise IT infrastructure escalated, end-user organizations began to consider various types of outsourcing solutions, including the ASP (Application Service Provider) model, attempting to reverse the trend of expansion of on-premise computing. While the initial attempts to provide software services at the beginning of this century were not particularly successful, the present generation of IT service providers uses highly sophisticated technologies to deliver reliable and cost-effective solutions to their users. Technological advances, in particular fast and reliable Internet communications and the emergence of commodity hardware combined with virtualization technologies, have created a situation where software and infrastructure services can be delivered from remote data centers at a fraction of the cost of on-premise solutions.
Fig. 2.2 Evolution of enterprise computing
2.2.1 Software Services Using the SaaS model, enterprise applications are not installed on the client’s premises, but are made available as software services from a shared data centre hosted by the service provider, typically the software vendor. The SaaS model has its origins in the Application Service Provider (ASP) model that emerged in the late 1990s. The early ASP providers were not able to establish a viable business model, failing to deliver major cost savings to their customers, which resulted in poor acceptance of the ASP model by the marketplace. A significant factor in the failure of the ASP model was the lack of a suitable technological infrastructure for hosting complex enterprise applications in a scalable and secure manner, and poor (or non-existent) customization and integration facilities. Delivering SaaS applications to a large heterogeneous user base requires a specialized application architecture; the so-called multi-tenant architecture is widely used in order to support large user populations in a scalable manner [9]. Multi-tenant architecture relies on resource virtualization to enable sharing of physical and logical resources among different end-user organizations, while at the same time ensuring security and confidentiality of information. Storage and server virtualization lead to significantly increased utilization of hardware resources. Using database virtualization, different end-user organizations are allocated logical (virtual) database instances that coexist within one physical database. This significantly reduces the administration overheads associated with maintaining the database (i.e. database backup and recovery, etc.), and at the same time provides a secure environment for individual user organizations. The SaaS business model generates ongoing revenues for the service provider in the form of subscription fees from each client organization. The service provider benefits from economies of scale as the incremental cost associated with adding a new user is relatively low. The customers, on the other hand, benefit from minimizing (or eliminating altogether) up-front costs associated with hardware and software acquisition and from the predictability of on-going costs, and at the same time avoid having to hire technical support staff and other IT specialists. Recent studies indicate a significant increase in the adoption of SaaS [10], and according to analyst predictions SaaS will outperform more traditional IT alternatives as a consequence of the current economic crisis [11]. It is becoming apparent
that the benefits of SaaS go well beyond cost-effective software delivery, enabling organizations to transform their business processes [12]. Adopting SaaS does not always result in outsourcing the entire IT infrastructure, so that the dependence on the external service provider can be reduced, for example by maintaining the database in-house. A number of SaaS variants are possible; for example, the client can host the software internally in a data centre within the client organization, with the service provider maintaining the applications from a remote location. Equally, it is possible for the client organization to host their internally developed applications externally on the provider’s hardware infrastructure. Another, more recent SaaS model involves the client organization self-hosting their own software in an internal data centre and providing software services to their partners or subsidiaries in remote locations. Various combinations of these models are also possible, including situations where the service provider and the client participate in the development and delivery of software services in the form of a joint venture, sharing the benefits and the risks. Typically, SaaS services involve the delivery of complete enterprise applications such as Salesforce CRM applications [13] or Google Apps [14]. However, it is also possible to provide lower granularity services in the form of Web Services (or REST services) that implement a specific business function (e.g. airline flight booking). The combination of recent business and technological developments has created a situation where delivery of enterprise applications in the form of SaaS services becomes both technically possible and economically compelling. The SaaS model provides a viable alternative to the traditional on-premise approach for many application types today, and is likely to become the dominant method for delivery of enterprise applications in the future. It can be argued that the traditional on-premise model is unsustainable, and that in the future most enterprise applications will be delivered in the form of reliable and secure application services over the Internet or using private networks.
2.2.2 Infrastructure Services Cloud Computing infrastructure services address the need for a scalable, cost-effective IT infrastructure. Recently, a number of providers including Amazon.com, Google.com, and Yahoo.com have introduced Cloud Computing infrastructure services. The range of infrastructure services includes storage services, e.g. Amazon Simple Storage Service (Amazon S3), compute services, e.g. Amazon Elastic Compute Cloud (Amazon EC2), and database management services, e.g. Amazon SimpleDB [15]. Cloud Computing infrastructure services rely on highly scalable virtualized resources that use statistical multiplexing to significantly increase resource utilization from the low levels prevailing in a typical data center, estimated at between 5% and 20%, to above 85%. According to some estimates, multiplexing leads to cost reduction by a factor of 5-7, with additional cost savings achievable by placing Cloud Computing data centers in geographical locations chosen to minimize the cost of electricity,
labor, property, and taxes. As a result, Cloud Computing services can be offered at relatively low cost; for example, Amazon EC2 offers a 1.0-GHz x86 ISA instance for 10 cents per hour, and Amazon S3 storage charges range from $0.12 to $0.15 per gigabyte-month [16].
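As a rough illustration of this pay-as-you-use cost model, the following sketch computes a monthly bill from the per-hour and per-gigabyte-month rates quoted above; the instance count, running hours and storage volume are hypothetical values chosen for the example, not figures from the cited report.

```python
# Hypothetical monthly cost under the 2009 prices quoted in the text:
# $0.10 per instance-hour (EC2) and $0.15 per gigabyte-month (S3, upper bound).
EC2_RATE_PER_HOUR = 0.10
S3_RATE_PER_GB_MONTH = 0.15

instances = 3              # assumed number of small instances
hours_per_month = 24 * 30
storage_gb = 500           # assumed data volume

compute_cost = instances * hours_per_month * EC2_RATE_PER_HOUR
storage_cost = storage_gb * S3_RATE_PER_GB_MONTH

print(f"Compute: ${compute_cost:.2f}/month, storage: ${storage_cost:.2f}/month, "
      f"total: ${compute_cost + storage_cost:.2f}/month")
# -> Compute: $216.00/month, storage: $75.00/month, total: $291.00/month
```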
2.2.3 Platform Services The final component of the Cloud Computing approach is platform services. Platform services address the need for a reliable, secure and scalable development environment that can be rapidly configured without extensive technical expertise. An example of a Cloud Computing platform service is Google App Engine [17]. Google App Engine for Java [18] is a Cloud Computing platform for Java developers that makes it possible to rapidly develop and deploy Java and J2EE applications to the cloud without having to provision servers or configure databases and server-side technologies. Google App Engine offers services that include authentication, authorization, data persistence and task scheduling. The support for standard technologies such as Java Servlets, Java Server Pages (JSP), Java Data Objects (JDO), Java Persistence API (JPA), etc. allows developers to reuse their skills, making it relatively easy to port existing applications to the cloud [19].
2.3 WEB 2.0 Another important trend impacting SOA today is the emerging Web 2.0 environment [20]. Web 2.0 is a set of economic, social and technology trends that collectively form the basis for the next generation of Internet computing. Web 2.0 involves extensive user participation and benefits from the network effect as the number of participants reaches tens of millions [20]. The Web 2.0 platform relies on light-weight programming models based on AJAX technologies (i.e. JavaScript and related technologies) [21] and the REST interaction style (i.e. XML over HTTP) [22], avoiding the complexities of SOAP Web Services. An important characteristic of the Web 2.0 environment is extensive user participation and social networking, facilitated by popular social networking platforms such as Myspace (http://www.myspace.com/) and Facebook (http://www.facebook.com/). Web 2.0 is characterized by specialized, publicly available data sources (e.g. mapping data, weather information, etc.) accessible via light-weight APIs that can be used to create value-added services in the form of mashups [23]. Mashups create Web-based resources that include content and application functionality through reuse and composition of existing resources. Web 2.0 and related technologies are redefining the Web as a programmatic environment with thousands of different APIs externalized by companies such as Amazon.com, Google, and eBay, facilitating the construction of new types of Internet-scale applications and transforming the Web into a platform for collaborative development and information sharing.
Next generation SOA needs to incorporate architectural features that allow the integration of Web 2.0 applications with more traditional types of application services. More specifically, the SOA architectural framework needs to provide facilities to integrate SaaS applications with mashups, taking advantage of the extensive array of services available in the form of programmable Web APIs. For example, a Web 2.0 travel application typically uses the Google mapping service to show the location of hotels and other amenities, and could be directly integrated with Sabre Web Services [24] based on the Open Travel Alliance specification [25], as illustrated in Fig. 2.3. The travel agent could then integrate this application with a (SaaS) CRM application provided by Salesforce.com. While similar applications do exist in practice (e.g. http://travel.travelocity.com/, http://www.expedia.com/, etc.), they are typically hard-coded rather than implemented in the context of a comprehensive architectural framework.
Fig. 2.3 Web 2.0 Travel Example Scenario
2.4 Conclusions Most large organizations today manage highly complex and heterogeneous enterprise computing environments characterized by the coexistence of several generations of architectural approaches and related technologies. As a result, enterprise computing is associated with a high cost of ownership and low return on investment. This situation has led some observers to conclude that information technology does not provide competitive advantage to organizations that make substantial investments in IT, and in some cases can detract from the core business in which the organization is engaged [26, 7]. As argued in this paper, adoption of SOA by organizations cannot provide the full answer to the above concerns, as SOA is at present primarily focused on internal (i.e. on-premise) enterprise systems and does not satisfactorily address situations where infrastructure and enterprise applications are sourced externally in the form of SaaS and virtualized infrastructure services. The basic underlying SOA principles and assumptions need to be re-evaluated to address the challenges of computing environments that involve externally provided software and infrastructure services.
SOA, Cloud Computing, and Web 2.0 have important synergies and share a common service-based model. These technology trends are likely to converge into a single architectural framework at some point in the future, resulting in the unification of the intra-enterprise and inter-enterprise computing environments and creating new opportunities for enterprise applications and associated business models. This new architectural framework will need to address a number of important issues that include the integration of external and internal services, optimization, management and governance of complex service environments, design and modeling of internal and external services, and alignment of architectural and organizational service models. Some progressive organizations are already exploiting the synergies between SaaS and SOA to maximize outsourcing and minimize IT costs. For example, the low-cost Australian airline Jetstar (www.jetstar.com) is a fast growing airline in the process of building a franchise network across Asia. Jetstar is aggressively pursuing an outsourcing strategy based on SOA [27]. Most Jetstar business functions are outsourced, including renting aircraft engines by the hour of usage, in-flight catering, overseas ground handling, etc. The payroll system is a SaaS solution, and IT infrastructure and support are fully outsourced. Virtualization plays a major role within Jetstar and enables fast deployment of check-in applications on mobile check-in units in regional airports using a Citrix-based thin-client solution. The entire IT operation relies on a small team of IT professionals who are primarily responsible for managing relationships with individual service providers. As the number of organizations moving towards the service-based model increases, it will become essential to provide architectural support for such enterprise computing environments. Acknowledgements. This research has been partially supported by GAČR (Grant Agency, Czech Republic) grant No. 201/06/0175 - Enhancement of the Enterprise ICT Management Model.
References 1. Sholler, D.: SOA User Survey: Adoption Trends and Characteristics, Gartner, September 26, p. 32 (2008) 2. Manes, A.T.: SOA is Dead; Long Live Services (2009), http://apsblog.burtongroup.com/2009/01/soa-is-dead-long-live-services.html (cited July 7, 2009) 3. Levinson, M.: Recession Causes Rising IT Project Failure Rates (2009), http://www.cio.com/article/495306/Recession_Causes_Rising_IT_Project_Failure_Rates (cited July 7, 2009) 4. Buyya, R., Yeo, C.S., Venugopal, S.: Market-Oriented Cloud Computing: Vision, Hype, and Reality for Delivering IT Services as Computing Utilities. In: Proceedings of the 2008 10th IEEE International Conference on High Performance Computing and Communications. IEEE Computer Society, Los Alamitos (2008)
5. Oguchi, M.: Research works on cluster computing and storage area network. In: Proceedings of the 3rd International Conference on Ubiquitous Information Management and Communication. ACM, Suwon (2009) 6. Yu, J., Buyya, R.: A taxonomy of scientific workflow systems for grid computing. SIGMOD Rec. 34(3), 44–49 (2005) 7. Carr, N.G.: The End of Corporate Computing. MIT Sloan Management Review 46(3), 67–73 (2005) 8. Fong, L., Steinder, M.: Duality of virtualization: simplification and complexity. SIGOPS Oper. Syst. Rev. 42(1), 96–97 (2008) 9. Feuerlicht, G., Voříšek, J.: Pre-requisites for successful adoption of the ASP model by user organization. In: The Proceedings of the Thirteenth International Conference on Information Systems Development: Methods and Tools, Theory and Practice, ISD 2004, Vilnius, Lithuania, September 9-11 (2004) 10. ACA Research, Software-as-a-Service (SaaS) in Australia: Is it the Next Big Thing? (2007), http://www.anthonywatson.net.au/files/SAAS%20White%20Paper%20FINAL.PDF (cited March 4, 2008) 11. IDC Predictions 2009: An Economic Pressure Cooker Will Accelerate the IT Industry Transformation, 215519, December 2008, IDC (2008) 12. Voříšek, J., Feuerlicht, G.: Trends in the Delivery and Utilization of Enterprise ICT. In: The Proceedings of the 17th Annual IRMA International Conference, Washington DC, USA, May 21-24 (2006) 13. Financial Insights IDC, On-Demand Customer Knowledge Management in Financial Services (2008), http://www.salesforce.com/assets/pdf/analysts/idc_ondemand_financial_services.pdf (cited March 4, 2008) 14. Google Apps, http://www.google.com/apps/ (cited July 7, 2009) 15. Amazon.com (2009), http://aws.amazon.com/ (cited July 7, 2009) 16. Armbrust, M., et al.: Above the Clouds: A Berkeley View of Cloud Computing. University of California, Berkeley, Tech. Rep. (2009), http://scholar.google.com.au/scholar?q=above+the+clouds:+A+Berkeley&hl=en&lr=&output=search (cited July 7, 2009) 17. Google App Engine, http://code.google.com/appengine/ (cited July 7, 2009) 18. Google App Engine for Java, http://code.google.com/appengine/docs/java/overview.html (cited July 7, 2009) 19. Sun Microsystems, Java EE at a Glance (2007), http://java.sun.com/javaee/ (cited December 7, 2007) 20. O’Reilly – What Is Web 2.0 (2008), http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html?page=1 (cited January 30, 2008) 21. Johnson, D., White, A., Charland, A.: Enterprise Ajax, 1st edn. Prentice Hall, Englewood Cliffs (2008) 22. Fielding, R.T.: Architectural Styles and the Design of Network-based Software Architectures (2002), http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm (cited April 17, 2008)
23. Govardhan, S., Feuerlicht, G.: Itinerary Planner: A Mashup Case Study. In: Di Nitto, E., Ripeanu, M. (eds.) ICSOC 2007. LNCS, vol. 4907, pp. 3–14. Springer, Heidelberg (2009) 24. Sabre, Sabre Holdings :: World Leader in Travel Distribution, Merchandising, and Retailing Airline Products (2007), http://www.sabre-holdings.com/ (cited December 10, 2007) 25. OTA, The Open Travel Alliance (2009), http://www.opentravel.org/ (cited March 20, 2009) 26. Carr, N.G.: IT Doesn’t Matter. Harvard Business Review 81(5) (2003) 27. Tame, S.: Implementing IT and SOA into the Jetstar Business. In: SOA 2008 IQPC Conference 2008, Sydney, Australia (2008)
Chapter 3
A Dynamic Stochastic Model Applied to the Analysis of the Web User Behavior Pablo E. Román and Juan D. Velásquez
Abstract. We present a dynamical model of web user behavior based on the mathematical theory of psychological behavior of Usher and McClelland and the random utility model of McFadden. We adapt this probabilistic model to the decision-making process that each web user follows when deciding which link to browse next. These stochastic models have been extensively tested in a variety of neurophysiological studies, and we base our research on them. The adapted model describes, in probability, the time course of a web user's decision to follow a particular hyperlink or to leave the web site. The model has parameters to be fitted based on historical user sessions and on the web site structure and content. The advantage of this point of view is that the web user model is independent of the web site, so it can predict changes in web usage resulting from changes to the web site. Keywords: Web User Behavior, Stochastic Process, Random Utility Model, TF-IDF, Stochastic Equation, Stochastic Simulation, Web User Session, Logit, Web User Text Preferences.
3.1 Introduction Web mining techniques have been used by e-business for decades, with average improvements in web site performance for capturing web users and increasing sales [9]. Web usage mining [20] relates to general-purpose machine learning algorithms adapted to Web data for capturing web users' patterns of behavior. The extracted patterns are then used for predicting web user preferences and for constructing recommender systems that increase sales [7]. Machine learning relies on generic mathematical models that have no relation to the intrinsic way in which
web users perform decision making while browsing; instead, such models have to fit the observed behavior even for robotic users such as web crawler systems. Generic algorithms also have the drawback of being less successful than very specific machine learning algorithms adapted to specific structures [8]. In this research we aim to go further in the model specification, proposing a time-dependent dynamic model of web user browsing that captures the statistical behavior of an ensemble of web users. We base our dynamic model of web user behavior on related studies in Neurophysiology and Economics. Those disciplines have as a central objective the study of human behavior and have by now reached a mature state in the mathematical description of how humans make decisions in different contexts. Models like the LCA stochastic process and the Random Utility Model have a long history of experimental validation (about 40 years) [10, 17, 21, 19, 13]. We assume that those well-proven theories of human behavior can be applied and adapted to describe web user behavior, producing a much more structured and specific machine learning model. The rest of this paper is organized as follows. Section 3.2 presents some attempts to build specific models of user dynamics and related research from other fields of study. Section 3.3 describes our proposal for a stochastic model of web user dynamics. Section 3.4 describes the working data set and the preprocessing. Section 3.5 presents the parameter adjustment of the model, the methodology used and the results obtained. Section 3.6 presents the evaluation of the model once fitted with data. Section 3.7 provides some conclusions and suggests future research.
3.2 Related Work We present the LCA and the Random Utility Model, which we directly adapt into a dynamic stochastic model of web user browsing as shown in Sect. 3.3. We also present some other attempts to model user behavior.
3.2.1 The Leaky Competing Accumulator Model (LCA) In the field of mathematical psychology, the Leaky Competing Accumulator Model describes the neurophysiology of decision making in the brain [23]. It corresponds to a time description of the subject's neural activity in specific zones of the brain. The theory associates to each possible decision i ∈ C a region in the brain with a neuronal activity level Yi ∈ [0, 1]; when a region i reaches an activity value equal to a given threshold (equal to one), the subject takes the decision i. The neural region activity levels Yi are time dependent and their dynamics are stochastic, as shown in the stochastic equation (3.1). The coefficients are interpreted as follows: κ is a dissipative term, λ relates to competitive inhibition, Ii is the supporting evidence, and σ² is the variance of the white noise term. The function f(·) is the inhibition signal response from other neurons, usually modeled as a sigmoid (nearly linear) or linear (f(x) = x).

$$dY_i = -\kappa Y_i\,dt - \lambda \sum_{j \neq i,\; j \in C} f(Y_j)\,dt + I_i\,dt + \sigma\,dW_i \qquad (3.1)$$
The variation of the neural activity receives contributions from four terms: a dissipative term (leaky) proportional to the activity itself, an inhibition term (competing) related to the neural competition, which modulates each activity with a function f(), a term that indicates the level of supporting evidence, and a Gaussian noise signal. The neural activities Yi have a reflecting barrier at the origin, as signals cannot be negative, and the first hitting time problem gives the probabilities P(i, τ) that the subject chooses the alternative i at time τ. This model has been evaluated with real subjects and shows good agreement with experiment [23]. Other diffusion models have been proposed but have been proved equivalent to the LCA [4].
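To make the decision mechanism concrete, the following sketch simulates Eq. (3.1) with an Euler-Maruyama discretization, a linear inhibition f(x) = x, a reflecting barrier at the origin and a unit decision threshold. The parameter values and evidence levels are illustrative assumptions for the example, not the values fitted later in this paper.

```python
import numpy as np

def lca_decision(I, kappa=0.1, lam=0.05, sigma=0.1, h=0.1, max_steps=100_000, rng=None):
    """Simulate the LCA race of Eq. (3.1) until one accumulator reaches the threshold 1.

    I: supporting-evidence term for each alternative.
    Returns (index of the chosen alternative, decision time tau)."""
    rng = np.random.default_rng() if rng is None else rng
    I = np.asarray(I, dtype=float)
    Y = np.zeros_like(I)
    for step in range(1, max_steps + 1):
        inhibition = lam * (Y.sum() - Y)          # linear f: sum over the competing units
        noise = sigma * rng.normal(0.0, np.sqrt(h), size=Y.shape)
        Y += (-kappa * Y - inhibition + I) * h + noise
        Y = np.maximum(Y, 0.0)                    # reflecting barrier at the origin
        if Y.max() >= 1.0:                        # first hitting time of the threshold
            return int(Y.argmax()), step * h
    return int(Y.argmax()), max_steps * h         # no decision reached within the horizon

# Example: three alternatives, the second with the strongest supporting evidence.
choice, tau = lca_decision([0.2, 0.5, 0.3])
```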
3.2.2 The Random Utility Model Discrete choice has been studied in economics to describe the demand for discrete goods where consumers are considered rational (utility maximizers). Arising first from psychological studies [11] and later applied to economics [13], the model considers the consumer utility as a random variable, giving rise to a probability distribution over the perceived demand for goods. The utility maximization problem with discrete random variables results in a class of extremum probability distributions; the most widely used model is the Logit model, Eq. (3.2), whose probabilities are adjusted using the well-known logistic regression [15]. The Logit extremum probability distribution of a choice i considers that every possible choice j has a consumer utility Vj.

$$P(i) = \frac{e^{V_i}}{\sum_{j \in C} e^{V_j}} \qquad (3.2)$$
The Logit model has been successfully applied to modeling users seeking information on hypertext systems [16], leading to better adaptive systems. In this work, we use the Logit model to estimate the supporting-evidence coefficients of the LCA model (Sect. 3.2.1), taking as utility function the text similarity perceived by users, as shown in Sect. 3.3.
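As a small worked example of Eq. (3.2), with purely illustrative utility values rather than data from this study, consider three alternatives with utilities $V_1 = 1.0$, $V_2 = 0.5$ and $V_3 = 0.0$:

$$P(1) = \frac{e^{1.0}}{e^{1.0} + e^{0.5} + e^{0.0}} \approx \frac{2.72}{5.37} \approx 0.51, \qquad P(2) \approx 0.31, \qquad P(3) \approx 0.19.$$

The highest-utility alternative is therefore the most likely, but not the certain, choice.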
3.2.3 Web Related Models There are several other attempts to model the dynamics of web users; one of the most famous comes from the random surfer model [3], whose first application was in the PageRank algorithm [18]. This model has had a great reputation since it was first applied in the Google search engine. It considers web users without any intelligence, because they choose with equal probability any link on the page that they are looking at. This stochastic model also includes the probability of restarting the process by recommencing the navigation at another (equiprobable) page, forming an ergodic Markov chain. This simple stochastic process gives a stationary probability for the random surfer to be on a page, where higher probabilities are associated with the most important pages. This provides a ranking of pages used in web search engines. Further refinements of
this idea are used in [22], where web users are considered as flows over the network of hyperlinks, satisfying flow conservation on the network.
3.3 The Proposed Stochastic Model for Web User Behavior We consider that web users browse pages on a web site selecting each hyperlink according to the LCA stochastic equation of Sect. 3.2.1. The hyperlink structure is completed with a special link from every page on the web site to the outside, with probability φ, which helps to model when a web user leaves the web site. The advantage of representing web user behavior by stochastic equations comes from the easy implementation of a simulation framework for solving the probabilistic behavior. We identify the variables Yj as representing the time-dependent levels of the possible choices for link j on a specific page, and the decision making comes from the stopping time problem, resulting in the time τ taken for the choice and the selected link i. Important parameters of the LCA stochastic equation (3.1) are the Ij values, which are interpreted as the supporting evidence from other sensorial brain areas. We model those levels using the Logit and the vector space model [12], considering users as if they were driven by text preferences. When a web user enters a web site, he comes with a predefined objective represented by a vector μ of TF-IDF values [12] that defines his own text preference. Web user text preferences are then measured by a utility function corresponding to the cosine text similarity between μ and the text content of each link Lj that he reads. The web user selects links that have maximal utility, i.e. whose text is most similar (cosine near to 1) to the user text preference μ. In this sense the supporting evidence values Ij are considered proportional (with factor β) to the probability of selecting the link j, as in Eq. (3.2). When the index j corresponds to the outside link we set Ij = φ, and we scale all the other probabilities by (1 − φ), completing the choice framework with the probability of leaving the web site.

$$V_i = \sum_k \frac{\mu_k L_{ik}}{\|\mu\|\,\|L_i\|} \qquad (3.3)$$

$$I_i = \beta\,\frac{e^{V_i}}{Z}\,, \qquad Z = \sum_i e^{V_i} \qquad (3.4)$$

In this way (Eq. (3.4)), the LCA equation (3.1) depends on an exogenous text preference vector μ representing a specific user and on the scalar parameters (κ, λ, σ, β).
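A short sketch of Eqs. (3.3) and (3.4) follows: the cosine similarities between the user preference vector μ and the TF-IDF vectors of the link texts are turned into evidence levels through the Logit weighting, with the outside ("leave the site") option receiving the probability φ. The vectors and the values of β and φ below are illustrative assumptions, and the handling of the outside option follows one reading of the description above.

```python
import numpy as np

def evidence_levels(mu, link_vectors, beta=0.05, phi=0.1):
    """Supporting-evidence vector I for the links on a page plus the leaving option.

    mu: TF-IDF text preference vector of the user.
    link_vectors: one TF-IDF vector per link on the current page (Eq. 3.3 uses mu_k * L_ik).
    Returns an array with one entry per link and a final entry for leaving the site."""
    mu = np.asarray(mu, dtype=float)
    L = np.asarray(link_vectors, dtype=float)
    # V_i: cosine similarity between mu and each link text vector (Eq. 3.3).
    V = L @ mu / (np.linalg.norm(L, axis=1) * np.linalg.norm(mu) + 1e-12)
    # Logit weights (Eq. 3.4), scaled by beta; (1 - phi) is kept for the links and
    # phi is assigned to the outside option.
    weights = np.exp(V) / np.exp(V).sum()
    I_links = beta * (1.0 - phi) * weights
    return np.append(I_links, beta * phi)

# Example with a 4-term vocabulary and two links on the page.
mu = [0.9, 0.1, 0.0, 0.3]
links = [[0.8, 0.0, 0.1, 0.2],   # link 1: close to the user's preference
         [0.0, 0.7, 0.6, 0.0]]   # link 2: mostly off-topic
I = evidence_levels(mu, links)
```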
3.4 Web Data Pre-processing We used a data set consisting of web user sessions extracted from the university departmental web site http://www.dii.uchile.cl/ from May 24 to June 24 of 2009. We used a tracking cookie [2] for registering each action of the visiting web
users. This is the most exact method for extracting sessions; in order to ensure privacy, sessions were identified by an anonymous random id, and few JavaScript methods were used, for universal browser compatibility. This resulted in 39,428 individual sessions corresponding to 1,135 pages and 15,795 links.
Fig. 3.1 The distribution of the maximal keyword TF-IDF over pages
The site contains the corporate site, project sub-sites, specific graduate program web sites, research sub-sites, personal web pages and the web mail; not all of the web site access was tracked, only the corporate and graduate program sub-sites for which the corresponding permission was granted. Session time was also recorded, showing narrow variations in the time spent on each page, with 89% of the visits spending less than 60 seconds on a page and an average of 14 seconds. The site was also crawled using a program based on the websphinx library [14], obtaining 13,328 stemmed keywords. Although the web site was updated periodically, we consider that the changes performed over a month are minimal, so the vector space model remains constant over this period of time. Figure 3.1 shows the distribution of the maximal TF-IDF keyword value, from which it can be observed that about 10 percent of the keywords carry the most representative values.
3.5 Stochastic Model Parameter Setting and Simulation We explore the mean user behavior, represented by a μ vector consisting of the most important words' TF-IDF weight components. We construct the μ text preference vector by taking the 90% most significant word components of the web site, as shown in Fig. 3.1. This user follows links according to the statistical rule defined in Sect. 3.3, although in real life there exist different users k with different μk. We are interested in testing the average web user against the observed average behavior in order to check model stability. An important input is the probability of leaving the web site, φ; for this purpose we fit the distribution φ(L) of the session length L, assumed equal to the probability to
leave the site after L steps, which is well known to be approximately a power law [6]. We fitted the observed visits to our web site with R² = 0.9725, obtaining φ(L) = 1.6091 L^{−2.6399}. Other data needed for the simulation is the distribution of starting pages, which we obtained empirically from the session data frequencies. An important part of the model is the text identification of a link. We approximate the text representation of a link by assuming that the web master has performed a good design of the web site, so that the words surrounding a link correspond to the text of the page pointed to by the link. In this case a good approximation of the text weights is the TF-IDF weight vector of the pointed page; this assumption greatly simplifies the parsing task, since identifying the text "surrounding" a link is a rather complex notion. Finally, for the scalar parameters (κ = 0.05; λ = 0.05; σ = 0.05; β = 0.05) we used values proportional to those explored in psychological studies [5]. The numerical resolution of the stochastic equations used the Euler method [1], with the time discretization performed using an increment of h = 0.1, as shown in Eq. (3.5). The term Δ_h W_i ∼ N(0, h) is Gaussian noise with variance h. The term a() is identified with the dt terms of the LCA equation, considering the function f() as linear, and the term b() is the noise amplitude σ. Several other tests were performed, but these choices gave the best outcomes.

$$Y_i^h(t) = Y_i^h(t-h) + a(t-h, Y^h)\,h + b(t-h, Y^h)\,\Delta_h W_i \qquad (3.5)$$
The overall algorithm for simulating a web user session consists of the following steps (a code sketch of the session loop is given below):
1. Initialize the user text preference vector μ.
2. Select a random initial page j according to the observed empirical distribution of first session pages.
3. Initialize the vector Y = 0, with the same dimension as the number of links on page j plus one additional component for the option of leaving the site. Set the session size variable s = 0.
4. Calculate the evidence vector I (same dimension as Y) using the similarity measure between μ and the text vector Li associated with each link i, as shown in Eq. (3.4). In this case we approximate Li by the TF-IDF values of the pointed page.
5. Perform one iteration step of the LCA equation on Y using the Euler method with time step h, as shown in Eq. (3.5).
6. If all components Yi remain under the threshold 1, increment s = s + 1 and return to the previous step; if one or more components become negative, reset them to 0.
7. Otherwise (threshold reached), select the coordinate j with the maximum value Yj as the new current page and return to step 3, provided the user is not leaving the site (j is not the leaving-site option).
8. If the user is leaving the site, i.e. j corresponds to the outside label, then end the session and store the result in the database.
This procedure should be executed in parallel on several threads in order to ensure statistical convergence.
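The steps above can be tied together in a compact simulation loop. The sketch below reuses the lca_decision and evidence_levels helpers from the earlier sketches; the start-page distribution, the page graph, the link TF-IDF vectors and the leaving-probability function are placeholders that would be taken from the preprocessed data of Sect. 3.4, and the way φ(L) enters the evidence vector is a simplification of the procedure described above.

```python
import numpy as np

def simulate_session(start_pages, start_probs, page_links, link_tfidf, mu,
                     leave_prob, max_pages=100, rng=None):
    """Simulate one web user session following the steps of Sect. 3.5.

    start_pages / start_probs: empirical distribution of first session pages.
    page_links[p]: list of pages reachable from page p.
    link_tfidf[q]: TF-IDF vector approximating the text of the link pointing to page q.
    mu: text preference vector of the simulated user.
    leave_prob(L): probability of leaving the site after L pages, e.g. derived from
                   the fitted power law phi(L) = 1.6091 * L**(-2.6399), clipped to [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    page = rng.choice(start_pages, p=start_probs)
    session = [page]
    while len(session) < max_pages:
        links = page_links[page]
        if not links:
            break
        phi = leave_prob(len(session))
        I = evidence_levels(mu, [link_tfidf[q] for q in links], phi=phi)
        choice, _tau = lca_decision(I)        # race between the links and the exit option
        if choice == len(links):              # the last component is the leaving option
            break
        page = links[choice]
        session.append(page)
    return session
```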
3.6 Results and Discussion We processed the data stored on a MySQL 5.0.1 database server, with the simulator implemented in Java running 40 threads. It takes about 4 hours to process 40,000 simulated users browsing the web site on a 2 GHz dual-core processor. Model stability is tested by checking whether the simulation follows the same distribution of session length as the empirical one. Surprisingly, the "mean web user" fits the empirical distribution of session length with good precision, within a relative error of 8.1%. The distribution error remains more or less constant for sessions of size less than 15 pages, with about 0.3% error (Fig. 3.2). This is not entirely unexpected, since we ensured that the leaving-site probability was chosen to be the same as the empirical one, but several other parameter variations that we tried did not achieve such a degree of adjustment, so this is a non-trivial result. Figure 3.3 shows the overall page visit frequencies of the simulated behavior, which match the average of the experimental behavior to within about 5%, but with about 50% of the variance. This large variance indicates a range that needs to be fitted by a more realistic simulation using a distribution of users with different text preferences μ. The distribution of errors in this case does not fit a Gaussian distribution well, because the errors are very concentrated on the mean user behavior, so a non-parametric distribution estimation should be performed to fit this distribution and simulate the ensemble. This methodology allows us to fit the observed behavior of web users, which enables experimentation with changes to the web site structure and content while observing the resulting changes in web user navigation.
Fig. 3.2 The distribution of session length: empirical (squares) vs. simulated (triangles), in log scale
Fig. 3.3 The distribution of visits per page, simulated vs. experimental sessions
3.7 Conclusion and Future Research We conclude that this method is a plausible way to simulate the navigational behavior of web users that accords with the statistical behavior of real sessions. Beyond a fitting procedure, its advantage will follow if, after a few changes to the site structure and content, it can predict the resulting browsing behavior. This possibility is ongoing research, which includes changes to the Logit model, more precise text-link associations and non-parametric adjustment of the web user μ distribution. Acknowledgements. This work has been partially supported by a National Doctoral Grant from Conicyt Chile and by the Chilean Millennium Institute of Complex Engineering Systems.
References 1. Asmussen, S., Glynn, P.: Stochastic Simulation: Algorithms and Analysis. Springer, Heidelberg (2007) 2. Berendt, B., Mobasher, B., Spiliopoulou, M., Wiltshire, J.: Measuring the accuracy of sessionizers for web usage analysis. In: Proc. of the Workshop on Web Mining, First SIAM Internat. Conf. on Data Mining, pp. 7–14 (2001) 3. Blum, A., Chan, T.H.H., Rwebangira, M.R.: A random-surfer web-graph model. In: Proceedings of the eighth Workshop on Algorithm Engineering and Experiments and the third Workshop on Analytic Algorithmics and Combinatorics, pp. 238–246. Society for Industrial and Applied Mathematics (2006) 4. Bogacz, R., Brown, E., Moehlis, J., Holmes, P., Cohen, J.D.: The physics of optimal decision making: A formal analysis of models of performance in two-alternative forced choice tasks. Psychological Review 113(4), 700–765 (2006)
5. Bogacz, R., Usher, M., Zhang, J., McClelland, J.: Extending a biologically inspired model of choice: multi-alternatives, nonlinearity and value-based multidimensional choice. Philosophical Transactions of the Royal Society B 362(1485), 1655–1670 (2007) 6. Huberman, B., Pirolli, P., Pitkow, J., Lukose, R.M.: Strong regularities in world wide web surfing. Science 280(5360), 95–97 (1998) 7. Velásquez, J.D., Palade, V.: Adaptive web sites: A knowledge extraction from web data approach. IOS Press, Amsterdam (2008) 8. Kashima, H., Koyanagi, T.: Kernels for semi-structured data. In: ICML 2002: Proceedings of the Nineteenth International Conference on Machine Learning, pp. 291–298. Morgan Kaufmann Publishers Inc., San Francisco (2002) 9. Kosala, R., Blockeel, H.: Web mining research: A survey. SIGKDD Explorations: Newsletter of the Special Interest Group (SIG) on Knowledge Discovery and Data Mining 1(2), 1–15 (2000) 10. Laming, D.R.J.: Information theory of choice reaction time. Wiley, Chichester (1968) 11. Luce, R., Suppes, P.: Preference, utility and subjective probability. In: Luce, Bush, Galanter (eds.) Handbook of Mathematical Psychology III, pp. 249–410. Wiley, Chichester (1965) 12. Manning, C.D., Schutze, H.: Foundations of Statistical Natural Language Processing. The MIT Press, Cambridge (1999) 13. McFadden, D.: Conditional logit analysis of qualitative choice behavior. In: Zarembka (ed.) Frontiers in Econometrics. Academic Press, London (1973) 14. Miller, R.C., Bharat, K.: Sphinx: A framework for creating personal, site-specific web crawlers. In: Proceedings of the Seventh International World Wide Web Conference (WWW7), pp. 119–130 (1998) 15. Newey, W., McFadden, D.: Large sample estimation and hypothesis testing. In: Handbook of Econometrics, vol. 4, pp. 2111–2245. North Holland, Amsterdam (1994) 16. Pirolli, P.: Powers of 10: Modeling complex information-seeking systems at multiple scales. Computer 42, 33–40 (2009) 17. Ratcliff, R.: A theory of memory retrieval. Psychological Review (83), 59–108 (1978) 18. Brin, S., Page, L.: The anatomy of a large-scale hypertextual web search engine. In: Computer Networks and ISDN Systems, pp. 107–117 (1998) 19. Schall, J.D.: Neural basis of deciding, choosing and acting. Nature Reviews Neuroscience 2(1), 33–42 (2001) 20. Srivastava, J., Cooley, R., Deshpande, M., Tan, P.: Web usage mining: Discovery and applications of usage patterns from web data. SIGKDD Explorations 2(1), 12–23 (2000) 21. Stone, M.: Models for choice reaction time. Psychometrika (25), 251–260 (1960) 22. Tomlin, J.A.: A new paradigm for ranking pages on the world wide web. In: Proceedings of the 12th International World Wide Web Conference (WWW 2003), Budapest, Hungary, May 20-24 (2003) 23. Usher, M., McClelland, J.: The time course of perceptual choice: The leaky, competing accumulator model. Psychological Review 108(3), 550–592 (2001)
Chapter 4
Using Wavelet and Multi-wavelet Transforms for Web Information Retrieval of Czech Language Shawki A. Al-Dubaee, Václav Snášel, and Jan Platoš
Abstract. In this paper, we apply our novel approaches based on wavelet and multiwavelet transforms to Web information retrieval of the Czech language [1, 2, 3]. We investigate the influence of wavelet and multiwavelet transforms on feature extraction and on the information retrieval ability of the model, and address the problem of selecting the optimum wavelet transform for a sentence query entered in any language by an Internet user. The empirical results show that the multiwavelet transform performs more accurate retrieval than the scalar wavelet transform. The multiwavelet transform shows an aptitude for representing one language or domain of the multilingual world regardless of the type, script, word order, direction of writing and difficult font problems of the language. This work is a step towards a multilingual search engine. Keywords: Wavelet Transforms, Multiwavelet Transforms, Web Information Retrieval, Multilingual, Search Engine.
4.1 Introduction The era of the high technology revolution impacts all aspects of our life. The Internet is the most important example; it contains a large repository of multilingual documents and web sites spread across cyberspace. One of the powerful properties of the Internet is that it is a multilingual world. Internet users can communicate effectively when they agree to use a common language. They may speak their own language, but they have to use a single common language to communicate on the Internet. In other words, the main problem is information access on the Web, which is also called Web information retrieval, for several reasons: the Web is dynamic, heterogeneous,
unlabeled, and a comprehensive coverage of every important topic is impossible because of the high-dimensional and time-varying nature of the Web. Nevertheless, the digital divide still exists. One of the important goals of UNESCO is “to achieve worldwide access to e-contents in all languages, improve the linguistic capabilities of users and create and develop tools for multilingual access to the Internet”. Due to the above fact, we should have an adequate solution to the problem of multilingualism on the Internet. The foremost reason is that we are in a multilingual and multicultural world. Anyone in the world has the full right to multilingualism and universal access to cyberspace. The wavelet domain and its applications have grown very rapidly over the last two decades, and wavelet transforms have become widely applied in the areas of pattern recognition, signal and image recognition, compression and denoising [8, 9, 10, 11]. This is due to the fact that the wavelet transform has a good time-scale (time-frequency or multi-resolution analysis) localization property, having fine frequency and coarse time resolution at lower frequencies and coarse frequency and fine time resolution at higher frequencies. Therefore, it is suitable for indexing, compression, information retrieval, clustering detection, and multi-resolution analysis of time-varying and non-stationary signals. Wavelets (the term wavelet translates from ondelettes in French to English, meaning a small wave [12]) are an alternative that overcomes the shortcomings of the Fourier Transform (FT) and the Short Time Fourier Transform (STFT). The FT has only frequency resolution and no time resolution, which makes it unsuitable for dealing with non-stationary and non-periodic signals. In an effort to correct this insufficiency, the STFT adopted a single window to represent time and frequency resolution. However, its precision is limited and the same window of time is used for all frequencies. A wavelet representation is much like a musical score, where the location of the notes tells when the tones occur and what their frequencies are [11]. In [1, 2], we applied a new direction for the wavelet transform to multilingual Web information retrieval. The novel method converts the whole sentence query entered by the Internet user to a signal by using its Unicode representation. The main reason for converting the sentence query entered by the Internet user to the Unicode standard is that Unicode is an international standard for representing the characters used in a plurality of languages. It provides a unique numeric code for each character, regardless of language, platform, and program in the world. Furthermore, it is popularly used on the Internet and in any computer operating system, device, text visual representation and writing system in the whole world. In [4, 5, 6, 7], Mitra, Acharya and Thuillard have considered the wavelet transform as a new tool of signal processing in Soft Computing (SC). In [6], M. Thuillard proposed adaptive multiresolution analysis and wavelet-based search methods within the framework of Markov theory. In [7], M. Thuillard also proposed two methods, namely the Markov-based and wavelet-based multiresolution methods, for search and optimization problems. In [3], we addressed the problem of selecting the optimum wavelet function within the wavelet transform for 16 sentence queries in 14 languages (English, Spanish, Danish, Dutch, German, Greek, Portuguese, French, Italian, Russian, Arabic,
Chinese (Simplified and Traditional), Japanese and Korean (CJK)) belonging to five language families, and the results showed that the multiwavelet transform with one level gives suitable results for all of these languages, instead of the scalar wavelet transform. Furthermore, since the multiwavelet is part of the body of wavelet transforms, we suggested that it be considered a new tool in SC as well. The main difference between the wavelet and multiwavelet transforms is that the first has a single scaling and mother wavelet function, while the second has several scaling and mother wavelet functions. The main field of application of the multiwavelet and wavelet transforms is signals and images. The study and application of the multiwavelet transform is increasing very rapidly, and it has proven to perform better than the scalar wavelet [13, 14, 15, 16, 17]. In this paper, we investigate our previous suggestion to solve the problem of selecting the optimum wavelet transform for a sentence query entered in any language by an Internet user. Therefore, we apply our novel methods based on wavelet and multiwavelet transforms to the Czech language. It is estimated that over 65 percent of the total global online population is non-English speaking. Moreover, the population of non-English speaking Internet users is growing much faster than that of English speaking users. Asia, Africa, the Middle East and Latin America are the areas with the fastest growing online populations. Therefore, the study of multilingual web information retrieval has become an interesting and challenging research problem in the multilingual world. Furthermore, it is important to be able to use one's own language for searching for the desired information. There is a real need to find a new tool that makes a multilingual search engine feasible. The rest of this paper is organized as follows. Section 4.2 discusses the preliminaries related to the multiwavelet transform. In Sect. 4.3, the methodologies pertaining to this work are described. Section 4.4 contains results and discussions. Finally, Sect. 4.5 presents the conclusions.
4.2 An Overview of the Multiwavelet Transform The wavelet transform has a scalar scaling and mother wavelet function, whereas multiwavelets have two or more scaling and wavelet functions, as shown in Figs. 4.1 and 4.2. The multiwavelet basis is generated by r scaling functions φ1(t), φ2(t), . . . , φr(t) and r wavelet functions ψ1(t), ψ2(t), . . . , ψr(t). Here r denotes the multiplicity in the vector setting, with r ≥ 2. The multiscaling functions φ(t) = [φ1(t), φ2(t), . . . , φr(t)]^T satisfy the matrix dilation equation [16, 17]:

$$\phi(t) = \sqrt{2}\sum_{k=0}^{M-1} H_k\,\phi(2t-k) \qquad (4.1)$$
Similarly, the multiwavelets ψ(t) = [ψ1(t), ψ2(t), . . . , ψr(t)]^T satisfy the matrix dilation equation

$$\psi(t) = \sqrt{2}\sum_{k=0}^{M-1} G_k\,\phi(2t-k) \qquad (4.2)$$
where the coefficients Hk and Gk, 0 ≤ k ≤ M − 1, are r × r matrices instead of scalars; they are called the matrix low-pass and matrix high-pass filters, respectively [16, 17]. As an example, we give the GHM (Geronimo, Hardin, Massopust) system [18, 19]:

$$H_0 = \begin{pmatrix} \frac{3}{5\sqrt{2}} & \frac{4}{5} \\ -\frac{1}{20} & -\frac{3}{10\sqrt{2}} \end{pmatrix}, \quad H_1 = \begin{pmatrix} \frac{3}{5\sqrt{2}} & 0 \\ \frac{9}{20} & \frac{1}{\sqrt{2}} \end{pmatrix}, \quad H_2 = \begin{pmatrix} 0 & 0 \\ \frac{9}{20} & -\frac{3}{10\sqrt{2}} \end{pmatrix}, \quad H_3 = \begin{pmatrix} 0 & 0 \\ -\frac{1}{20} & 0 \end{pmatrix}$$

$$G_0 = \begin{pmatrix} -\frac{1}{20} & -\frac{3}{10\sqrt{2}} \\ \frac{1}{10\sqrt{2}} & \frac{3}{10} \end{pmatrix}, \quad G_1 = \begin{pmatrix} \frac{9}{20} & -\frac{1}{\sqrt{2}} \\ -\frac{9}{10\sqrt{2}} & 0 \end{pmatrix}, \quad G_2 = \begin{pmatrix} \frac{9}{20} & -\frac{3}{10\sqrt{2}} \\ \frac{9}{10\sqrt{2}} & -\frac{3}{10} \end{pmatrix}, \quad G_3 = \begin{pmatrix} -\frac{1}{20} & 0 \\ -\frac{1}{10\sqrt{2}} & 0 \end{pmatrix}$$
We have used these parameter filters in this work. The original signal f(t), f ∈ L²(R), where L²(R) is the space of all square-integrable functions defined on the real line R, is given by the expansion in the multiscaling functions (φ1(t), φ2(t)) and multiwavelets (ψ1(t), ψ2(t)), exemplified in Fig. 4.2:

$$f(t) = \sum_{k=0}^{M-1} C_{j_0}(k)\, 2^{j_0/2}\, \phi(2^{j_0} t - k) + \sum_{k}\sum_{j \ge j_0} D_j(k)\, 2^{j/2}\, \psi(2^{j} t - k) \qquad (4.3)$$
where C_j(k) = [c1(k), c2(k), . . . , cr(k)]^T and D_j(k) = [d1(k), d2(k), . . . , dr(k)]^T are the coefficients of the multiscaling and multiwavelet functions, respectively, and the input signal is discrete. Based on Eqs. (4.1) and (4.2), using the low-pass (H) and high-pass (G) filters, the forward and inverse discrete multiwavelet transform (DMWT) can be calculated recursively by [16, 17]

$$C_{j-1}(t) = \sqrt{2}\sum_{k} H(k)\, C_j(2t-k) \qquad (4.4)$$

$$D_{j-1}(t) = \sqrt{2}\sum_{k} G(k)\, C_j(2t-k) \qquad (4.5)$$
where the input data signal should be arranged as a square matrix. The low-pass (H) and high-pass (G) filters (the parameter filters given above) are convolved with the input signal, and each data stream is then down-sampled by a factor of two, as depicted in Fig. 4.3.
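To make the filtering step concrete, the following sketch performs one analysis step of a multiplicity-2 multiwavelet transform in the form of Eqs. (4.4) and (4.5): the vector-valued input stream is convolved with the matrix filters and down-sampled by two. The filter taps H and G are passed in as parameters (e.g. the GHM matrices of Sect. 4.2); the periodic boundary handling and the omission of the GHM pre-filtering step are simplifying assumptions of this sketch.

```python
import numpy as np

def dmwt_analysis_step(V, H, G):
    """One analysis step of a multiplicity-2 DMWT (Eqs. 4.4-4.5).

    V: vector-valued input stream of shape (N, 2), N even.
    H, G: lists of 2x2 matrix filter taps (low pass and high pass).
    Returns approximation C and detail D coefficients, each of shape (N // 2, 2)."""
    V = np.asarray(V, dtype=float)
    N = V.shape[0]
    C = np.zeros((N // 2, 2))
    D = np.zeros((N // 2, 2))
    for k in range(N // 2):
        for m, (Hm, Gm) in enumerate(zip(H, G)):
            v = V[(2 * k + m) % N]                 # periodic extension at the boundary
            C[k] += np.sqrt(2) * (np.asarray(Hm) @ v)
            D[k] += np.sqrt(2) * (np.asarray(Gm) @ v)
    return C, D
```

In a complete implementation the scalar input signal would first be vectorized (for GHM, typically through a pre-filtering step) before this analysis step is applied, and the corresponding synthesis step would invert it.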
Fig. 4.1 Some wavelet families
Fig. 4.2 GHM pair of scaling and mother functions of multiwavelets
Fig. 4.3 Flowchart of the proposed information retrieval method (initialize the query variables; select the language; convert the sentence query to a signal; compute the decomposition with the DWT or DMWT; compute the reconstruction of the query with the DWT or DMWT; compare the decomposition and reconstruction; repeat while more languages remain)
4.3 Methodologies The main objective of the proposed method is to allow the Internet user to easily access the required information using his or her own language, or any language. This paper extends our previous work [1, 2, 3]. In this work, we apply our novel methods to the case where an Internet user poses a long sentence query in the Czech language. The entered sentence, Ministr zahraničí a vicepremiér Jan Kohout připomněl, že ústava nedává termín
is taken from the website of the Novinky.cz newspaper [20]. The English translation of this sentence is: Foreign Minister and Vice Prime Minister Jan Kohout pointed out that the Constitution does not give a deadline
In the current paper, we assume that a Czech Internet user poses the previous sentence query. The reason for selecting this sentence query is to appraise the accuracy of sentence query reconstruction (retrieval) by the FWT and the DMWT. In addition, we want to investigate the Czech language, which belongs to the Indo-European family, in more depth, to try to solve the problem of selecting the optimum wavelet functions of the wavelet transform, and to treat one language or domain as a step towards a multilingual web search engine. As shown in Fig. 4.3, the type of language and the decomposition level are selected, and the FWT or DMWT is applied to the entered sentence query after converting the query to a signal using the Unicode standard. The Unicode standard provides a unique numeric code for each character, regardless of language, platform, and program in the world. After that, the approximation and detail coefficients are obtained using the decomposition of the two transforms. The first transform is the wavelet transform, using forty-one wavelet functions (mother wavelets) of six families, namely Haar (haar), Daubechies (db2-10), Biorthogonal (bior1.1-6.8 and rbior1.1-6.8), Coiflets (coif1-5), Symlets (sym1-10), and Dmey (dmey). The second transform is the discrete multiwavelet transform (DMWT), using the GHM filter, applied to the entered sentence query. Then, the reconstructions of the two transforms are computed separately. Finally, the average accuracies of the reconstructions of the two transforms are obtained by comparing the decomposition and reconstruction of every character and space of the entered sentence query for the two transforms. These processes can be applied to the sentence query entered in any language.
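A minimal sketch of the text-to-signal conversion and of the scalar-wavelet (FWT) part of this pipeline is shown below. It assumes the PyWavelets package purely for illustration (the implementation used by the authors is not specified here), and it omits the multiwavelet branch; the accuracy measure is the per-character comparison described above.

```python
import pywt  # PyWavelets; an assumed implementation choice, not stated in the original work

def sentence_to_signal(sentence):
    """Convert a query sentence to a numeric signal via its Unicode code points."""
    return [ord(ch) for ch in sentence]

def signal_to_sentence(signal):
    return "".join(chr(int(round(x))) for x in signal)

def retrieval_accuracy(original, wavelet="haar", level=1):
    """Decompose and reconstruct the query with a scalar wavelet and compare character-wise."""
    signal = sentence_to_signal(original)
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    restored = signal_to_sentence(pywt.waverec(coeffs, wavelet)[: len(signal)])
    matches = sum(a == b for a, b in zip(original, restored))
    return restored, 100.0 * matches / len(original)

query = "Ministr zahraničí a vicepremiér Jan Kohout připomněl, že ústava nedává termín"
restored, accuracy = retrieval_accuracy(query, wavelet="bior3.1", level=1)
```

Note that with full floating-point coefficients the reconstruction is essentially exact for any wavelet, so the accuracy differences reported in Tables 4.1 to 4.3 come from the authors' own processing of the coefficients, which this sketch does not reproduce.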
4.4 Results and Discussions From our previous papers, we know that the wavelet function, the type of language and the sentence query of the language should have good similarities. This means that the wavelet function should have good similarities not only with the type of language, but also with the sentence query of the same language. We have found this to hold for the Czech language as well.
The main result is that we can treat the query entered by the Internet user in any language as a signal within the discrete wavelet transform or the multiwavelet transform, without treating it as an image, an icon or via another method. The scalar wavelet transform, the Fast Wavelet Transform (FWT), has no standard method for selecting a wavelet function for signals and languages. Some criteria have been proposed for selecting a wavelet; one of them is that the wavelet and the signal should have good similarities. We have noticed that the main problem of sentence reconstruction with the scalar wavelet transform in the Czech language is that it reconstructs either words or shapes different from the words entered, or the sentence is reconstructed without spaces between words, especially with the FWT. As depicted in Table 4.2, there are differences in obtaining perfect retrieval with a single filter from one level to four levels for the Haar, Sym1, Bior1.x, Bior2.x and Bior3.x filters for this language. The filter Bior3.1 gives better results than the other filters at all levels. Therefore, we have selected one level as a good level to obtain speed-up in processing, low storage size, and quality of retrieval for this language. Moreover, Table 4.3 shows a detailed analysis of the FWT with four levels for the Czech language. In our previous work, we used the multiwavelet to solve the problem of selecting the optimum wavelet function within the wavelet transform, based on 16 sentence queries in 14 languages (English, Spanish, Danish, Dutch, German, Greek, Portuguese, French, Italian, Russian, Arabic, Chinese (Simplified and Traditional), Japanese and Korean (CJK)) from five language families [3]. The DMWT can simultaneously possess the good properties of orthogonality, symmetry, high approximation order and short support, which is not possible for the scalar wavelet transform. The results of the discrete multiwavelet transform also show that its retrieval of the information is 100% perfect for the Czech language, as shown in Table 4.1. The accuracy of the DMWT reconstructions is estimated by the average percentage reconstruction of one decomposition level on the long sentence query entered in the Czech language. This means that the multiwavelet has already proven to be suitable for multilingual web information retrieval, no matter the type, script, word order, direction of writing or difficult font problems of the language. This means that we can apply all the properties of the DMWT with the GHM parameter filters to text treated as a signal.
Table 4.1 The sentence query of "Ministr zahraničí a vicepremiér Jan Kohout připomněl, že ústava nedává termín," with one level decomposition of FWT and DMWT

Filter       Reconstruction                                                                 Accuracy
R(Haar)      Ministr zahraničí a vicepremiér JanKohoutpřipomněl, že ústava nedává termín,   97%
R(Bior3.1)   Ministr zahrani a vicepremir Jan Kohout pipomnl,že ústava nedává termín,       99%
R(DMWT)      Ministr zahrani a vicepremir Jan Kohout pipomnl, e stava nedv termn,           100%
Table 4.2 Influence of decomposition and reconstruction of wavelet functions to four levels on the Czech language

Filter     Level 1 (%)   Level 2 (%)   Level 3 (%)   Level 4 (%)
Haar           97            97           100           100
Db2            50            51            59            60
Db3            47            45            41            46
Db4            55            62            58            54
Db5            44            46            45            45
Db6            59            50            50            49
Db7            47            45            44            44
Db8            45            45            44            41
Db9            49            53            58            58
Db10           53            60            59            62
Coif1          47            47            46            46
Coif2          49            56            62            64
Coif3          47            54            54            53
Coif4          55            46            50            47
Coif5          49            50            53            53
Sym1           97            97           100           100
Sym2           50            51            59            60
Sym3           47            45            41            46
Sym4           47            51            45            46
Sym5           53            51            54            56
Sym6           47            50            40            41
Sym7           51            51            58            54
Sym8           50            50            54            49
Sym9           56            54            46            50
Sym10          41            44            50            42
Bior1.1        97            97           100           100
Bior1.3        94            95            99            99
Bior1.5        96            97            97            97
Bior2.2        97            99            99           100
Bior2.4        89            90            90            96
Bior2.6        95            99           100           100
Bior2.8        83            95            99           100
Bior3.1        99           100           100           100
Bior3.3        94            97            99           100
Bior3.5        94            95           100           100
Bior3.7        90           100           100           100
Bior3.9        95            97           100           100
Bior4.4        50            58            63            63
Bior5.5        47            39            42            44
Bior6.8        49            55            63            63
Dmey           46            41            39            41
In brief, DMWT solves the main problem of selecting the optimum wavelet function within the wavelet transform for a sentence query entered in the Czech language by an Internet user, as shown in Tables 4.1 to 4.3. The aptitude of the wavelet or multiwavelet transform to represent one language (domain) of the multilingual world is promising.
Table 4.3 The sentence query of "Ministr zahraničí a vicepremiér Jan Kohout připomněl, že ústava nedává termín," with one level decomposition of FWT – example with a few filters

Type      Level 1 reconstruction                                                            Accuracy
Haar      Ministr zahraničí a vicepremiér JanKohoutpřipomněl, že ústava nedává termín,      97
Db2       Že rtaua Mhmirsr yhqaniČ a uhceoqemi q Jm Kngots oRipnmm Ek, med u tdrl m,        50
Coif2     Že stúa Mhmhrsq yhraniČ a vhcdoremh q Jm Kngnus oRipnmm Ek, mec u tdrl n,         49
Bior2.4   Ministrzahranič a vicepremiérJan Kohout připomměk, žd ústava nedává tdrmín,       89

Note: Non-printable characters were eliminated.
4.5 Conclusions This paper highlights that applying the wavelet and multiwavelet transforms to web information retrieval is fruitful and promising. In the experiments conducted, the multiwavelet transform with the GHM parameters was evaluated as a solution to the problem of selecting the optimum wavelet function within the wavelet transform for multilingual web information retrieval. One long sentence of the Czech language was tested against the results of 41 filters belonging to six filter families. The results show that the multiwavelet transform with one decomposition level gives suitable results for all of these languages. In addition, the results show the satisfactory performance of the applied multiwavelet transform. In sum, the multiwavelet transform has proved its aptitude for multilingual web information retrieval. We can use it regardless of the type, script, word order, direction of writing and difficult font problems of these languages. By this property, the power distribution over the multiwavelet domain describes its multilingualism. As a result, we expect the wavelet or multiwavelet transform to become a multilingualism tool on the Internet and a new tool for solving the problem of multilingual question answering systems and other applications in the future. Furthermore, a further improvement of the proposed multiwavelet approach should minimize the computational cost so as to bring it close to real-time processing.
References
1. Al-Dubaee, S.A., Ahmad, N.: New Direction of Wavelet Transform in Multilingual Web Information Retrieval. In: The 5th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD 2008), Jinan, China, vol. 4, pp. 198–202. IEEE Computer Society Press, Los Alamitos (2008)
2. Al-Dubaee, S.A., Ahmad, N.: The Bior 3.1 Wavelet Transform in Multilingual Web Information Retrieval. In: The 2008 World Congress in Computer Science, Computer Engineering, and Applied Computing (WORLDCOMP 2008), Las Vegas, USA, vol. 2, pp. 707–713 (2008)
3. Al-Dubaee, S.A., Ahmad, N.: A Novel Multilingual Web Information Retrieval Method Using Multiwavelet Transform. Ubiquitous Computing and Communication (UBICC) Journal 3(3) (July 2008)
4. Mitra, S., Acharya, T.: Data Mining: Multimedia, Soft Computing, and Bioinformatics. J. Wiley & Sons Inc., India (2003)
5. Thuillard, M.: Wavelets in Soft Computing. World Scientific Series in Robotics and Intelligent Systems, vol. 25. World Scientific, Singapore (2001)
6. Thuillard, M.: Adaptive Multiresolution and Wavelet Based Search Methods. International Journal of Intelligent Systems 19, 303–313 (2004)
7. Thuillard, M.: Adaptive Multiresolution Search: How to Beat Brute Force. Elsevier International Journal of Approximate Reasoning 35, 233–238 (2004)
8. Li, T., et al.: A Survey on Wavelet Application in Data Mining. SIGKDD Explorations 4(2), 49–68 (2002)
9. Graps, A.: An Introduction to Wavelets. IEEE Computational Sciences and Engineering 2(2), 50–61 (1995)
10. Phan, F., Tzamkou, M., Sideman, S.: Speaker Identification Using Neural Network and Wavelet. IEEE Engineering in Medicine and Biology Magazine, 92–101 (February 2000)
11. Narayanaswami, R., Pang, J.: Multiresolution Analysis as an Approach for Tool Path Planning in NC Machining. Elsevier Computer Aided Design 35(2), 167–178 (2003)
12. Burrus, C.S., Gopinath, R.A., Guo, H.: Introduction to Wavelets and Wavelet Transforms. Prentice Hall Inc., Englewood Cliffs (1998)
13. Mallat, S.: A Theory for Multiresolution Signal Decomposition: The Wavelet Representation. IEEE Trans. on Pattern Analysis and Machine Intell. 11(7), 674–693 (1989)
14. Strang, G., Nguyen, T.: Wavelets and Filter Banks. Cambridge Press, Wellesley (1996)
15. Strela, V., et al.: The Application of Multiwavelet Filter Banks to Signal and Image Processing. IEEE Trans. on Image Processing 8(4), 548–563 (1999)
16. Strela, V.: Multiwavelets: Theory and Applications. Ph.D. Thesis, Department of Mathematics, Massachusetts Institute of Technology (June 1996)
17. Fritz, K.: Wavelets and Multiwavelets. Chapman & Hall/CRC, Boca Raton (2003)
18. Plonka, G., Strela, V.: Construction of Multi-scaling Functions with Approximation and Symmetry. SIAM J. Math. Anal. 29, 482–510 (1998)
19. Pan, J., Jiaoa, L., Fanga, Y.: Construction of Orthogonal Multiwavelets with Short Sequence. Elsevier Signal Processing 81, 2609–2614 (2001)
20. Novinky.cz electronic newspaper, http://www.novinky.cz/domaci/168391-nechejme-klukyv-bruselu-trochu-znervoznet-rikaji-odpurci-lisabonu.html (cited March 11, 2009)
Chapter 5
An Extensible Open-Source Framework for Social Network Analysis Michal Barla and Mária Bieliková
Abstract. Online communities that form social networks have become extremely important in many tasks related to information processing, presentation and navigation, especially in the context of web-based information systems. Web-based information systems employing communities could benefit from the classical studies of human social interactions – social network analysis. In this paper, we present an extensible open-source Java-based framework for social network analysis which can be used either as a stand-alone application with its own GUI or as a library within third-party software projects developed in Java. We provide not only a standalone desktop application but the whole framework, which allows anyone to incorporate the results of social network analysis into their own project, possibly boosting its functionality, enhancing its results, etc. Keywords: Social Network Analysis, Framework, Open-source, Workflow.
Michal Barla · Mária Bieliková, Institute of Informatics and Software Engineering, Faculty of Informatics and Information Technologies, SUT in Bratislava, Bratislava, Slovakia, e-mail: {barla,bielik}@fiit.stuba.sk

5.1 Introduction Online communities have become very trendy. Apart from their impact on our everyday lives, with all the blogs, wikis, tagged content, social portals and other Web 2.0 features, they are also very popular in the research community. As an example, we can take the recent research in personalization and adaptivity of web applications, with many papers leveraging the fact that an online user belongs to a certain community. An example of a system which employs community-based personalization is the Community-based Conference Navigator [7]. It uses social navigation support to help conference attendees to plan their attendance at the most appropriate sessions and
make sure that the most important papers are not overlooked. The system provides an additional adaptive layer over the conference planning system, which allows the conference attendees to browse the conference program and plan their schedule. The Community-based Conference Navigator tracks different activities of the community (such as paper scheduling) and allows users to add comments to papers. All activities result in updates of the community profile, which accumulates over time the "wisdom" of the community, used in the adaptation of the original system. The selection of the community is done manually by each user. If the user does not find a suitable community, she is allowed to create a new one. Moreover, the user can switch between communities at any time during the usage of the system, which instantly switches to the annotations of a different community. However, it seems that the user can act only as a member of one community at a time, so all actions contribute only to one community profile. In reality, many people act as "bridges" between different communities, so it would not be easy for them to choose strictly only one of them. In other words, web-based information systems employing communities could benefit from the results of classical studies of human social interactions – social network analysis. Its goal is to determine the functional roles of individuals (e.g., leaders, followers, "popular people", early adopters) in a social network and to diagnose network-wide conditions or states. So, in the case of the Community-based Conference Navigator, a paper scheduled by somebody who is considered an authority should be considered more important. We realized that much of the research, similarly to [7], stays at the group level and ignores information encoded in individual, one-to-one relationships, which can be extracted by applying social network analysis. One possible reason for this situation can be the lack of proper software tools and development kits that would facilitate the employment of social network analysis by developers and researchers. This is our motivation for designing and developing Mitandao, open-source software for social network analysis which can be used either as a stand-alone application or as a library providing analytical services to other applications via an API. These services could be either classical metrics of social network analysis, such as metrics for determining centrality (node degree, betweenness and closeness centrality, etc.), or more recent methods of link analysis [4] such as PageRank. Moreover, we designed Mitandao as an extensible framework, where everyone can add his or her own modules for network analysis.
5.2 Related Work As we already mentioned, social networks are gaining attention in various fields of research. Social navigation on the web [6] is an efficient method of guiding users to the content they might be interested in. Social networks are also used to overcome the initialization stage of recommender systems [8] or to solve the cold-start problem in user modeling [1, 11]. Another problem which could benefit from the information stored within social networks is the disambiguation of person names, e.g., to
normalize a publications database [10] or for the purpose of automated information extraction from the web [3]. All the aforementioned research areas could benefit from a reusable, interoperable and extensible social network analyzer. Weka [14] is an open-source collection of machine learning and data mining algorithms written in Java. It is possible to use it as a standalone application or to integrate some of its parts into another software project. It provides basic support for distributed computing, which allows the user to execute different setups of self-standing algorithms simultaneously. Since Weka is open-source software, it is possible to add a plugin (a new algorithm) to it or to change an existing one according to the user's needs. However, Weka is primarily targeted at data mining and its use for social network analysis is not straightforward. Ucinet [5] is commercial software for social network analysis developed by Analytic Technologies. It comes with a variety of functions to analyze the network (centrality measures, subgroup identification, role analysis, etc.) and multiple ways of visualizing it. The main drawback is that the software can be used only "as is". You need to have your data ready, import them into Ucinet, execute the chosen algorithm and observe the results. There is no way of modifying existing algorithms or adding your own to the process. Moreover, Ucinet does not provide any kind of API which would allow other software to use its services. InFlow (orgnet.com/inflow3.html) is another commercial tool used to analyze social and organizational networks. Similarly to Ucinet, it contains a set of algorithms to analyze the structure and dynamics of a given network. What distinguishes InFlow is the "What-If" feature: every time the network changes, InFlow automatically re-runs the selected analyses and displays the results. This allows for efficient simulations, especially useful when designing an organizational structure within a company. Pajek [2] is a program for the analysis and visualization of large networks developed at the University of Ljubljana. It is free for non-commercial use. It provides many algorithms and has powerful drawing and layout algorithms. Unfortunately, similarly to Ucinet and InFlow, there is no way of calling Pajek's logic from another program. VisuaLyzer (mdlogix.com/solutions/) from Mdlogix is another commercial tool for analyzing and visualizing graphs. It has a clean and usable user interface which provides access to an interesting set of algorithms and visualization options. Moreover, it allows the user to create and use new types of attributes for nodes and edges and is capable of using these attributes in algorithms, query functions for selecting nodes, etc. Similarly to the previously analyzed software tools, it does not provide any API which would allow its features to be used outside of the graphical user interface. Overall, we can see that the majority of existing software solutions do not cover our requirements for an extensible and interoperable social network analyzer, such as the possibility of being used automatically, by means of APIs. Second, their architecture and closed-source licensing do not allow ordinary users to add new functionality or change the existing one. The only exception is Weka, which is, on the other hand, too complex for the task of social network analysis.
5.3 Mitandao: Functionality and Architecture All the functionality of the Mitandao framework is concentrated in the execution of workflows. Each workflow consists of the following stages:
1. import/use current graph
2. filtering
3. analysis
4. export/visualization
Workflows can be chained together, which allows for the efficient application of any possible filter/analysis combination (e.g., we can compute the betweenness centrality of every node in the first workflow and apply a filter on nodes based on this value in the second workflow, taking the graph from the first workflow as an input). Along with graph editing functions and undo/redo functions based on on-demand generated checkpoints, Mitandao provides all the necessary methods for a successful dynamic analysis of any kind of social network. Mitandao was designed with simplicity and extensibility in mind. We completely separated the functionality of Mitandao from its GUI in order to allow anyone to use it as a library in his or her own project. Moreover, the Mitandao library provides only the core functionality required to execute a workflow along with some supporting tools, while the actual logic is separated into pluggable modules. We recognize four types of modules corresponding to the four stages of a workflow:
1. Input module – used to load a graph (social network) into the working environment;
2. Filter module – used to remove nodes from the loaded graph according to specific conditions;
3. Algorithm module – performing the actual analysis (i.e., computation of metrics) of the loaded social network;
4. Output module – used to store the current graph in an external data store.
Every module takes a graph as an input and returns another graph which is passed to the subsequent module or displayed on the screen. The input graph can be empty, which is often the case for Input modules, which actually "create" the graph for further processing. Not every module chained in a workflow needs to perform changes on the graph – the output graph can be equal to the input graph, e.g., when an Output module does not change the graph itself but dumps it to a file or database as a side effect. For the internal graph representation we chose the widely used JUNG framework (Java Universal Network/Graph Framework, jung.sourceforge.net) for graph modeling and analysis. Use of the same graph structure ensures a high level of interoperability and usability of our framework. Moreover, we could take advantage of the JUNG visualization framework and painlessly integrate the JUNG implementations of various analysis algorithms. JUNG is able to store custom data (e.g., analysis results) in the form of key–value pairs. To standardize and unify the way of storing various data in the graph, we took the idea of the JUNG labeler (a structure which applies a set of labels to the graph
to which it is attached) and derived two labelers (for vertices and edges) which are to be used by all modules to store and manipulate custom graph data. The library consists of the following components (see Fig. 5.1):
• Core component – apart from definitions of basic components (such as the workflow) and exceptions which can occur during the analysis, it contains the entry interface of the whole library. The core contains a ParameterReader module, responsible for setting up individual modules according to the parameters chosen within the GUI, and a ClassLoader module, which looks up available modules on pre-defined classpaths.
• Graph component – provides the basic utilities to manipulate graphs, such as combining two graphs into one (union), copying data from one graph to another, storing custom data in the graph, and additional converters.
• Modules component – contains the interfaces of workflow modules and a ModuleManager for accessing all loaded modules. It also contains an implementation of two basic input and output modules working with checkpoints (complete dump/recovery of a graph to/from a file).
• UI Framework – every module author has the option to annotate (using Java annotations) the module parameters which require user input (e.g., an input module might require the name of the input file). The UI Framework uses such annotations to generate the GUI for setting these values and ensures proper setting of module variables via setter methods. Alternatively, the module author can provide his own GUI for setting the module parameters. In such a case, the UI Framework just passes a map of user-supplied values to the module.
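As an illustration of this module-and-workflow design, the sketch below shows how the four module types and the workflow execution could look in Java. The interface and class names are hypothetical and written for this description only; they do not reproduce the actual Mitandao API, and the placeholder Graph interface stands in for the JUNG graph structure used by the real framework.

```java
import java.util.ArrayList;
import java.util.List;

// Placeholder for the JUNG-based graph structure used internally by the framework.
interface Graph { }

// Every module takes a graph and returns a graph for the next stage
// (input, filter, algorithm and output modules all share this contract).
interface GraphModule {
    Graph process(Graph input);
}

class Workflow {
    private final List<GraphModule> stages = new ArrayList<>();

    Workflow add(GraphModule module) {
        stages.add(module);
        return this;
    }

    // Executes all stages in order; an empty graph may be passed when no input module is used.
    Graph run(Graph initial) {
        Graph current = initial;
        for (GraphModule module : stages) {
            current = module.process(current);
        }
        return current;
    }
}
```

Chaining workflows then simply means feeding the graph returned by one run() call into the next workflow, as described above for the betweenness-then-filter example.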
Fig. 5.1 Architecture of the Mitandao library (components: Core, Graph, Modules and UI Framework, with subcomponents ClassLoader, ParameterReader, Exceptions, Labellers, Convertors, Interfaces, Manager, NullModules and Checkpoint)
5.4 Mitandao: Modules All the functionality of Mitandao is provided through modules combined into workflows. We decided to support various types of inputs, such as Pajek and GraphML input files, and we also provide means to import data from a relational database, with either a flat table structure or a structure using a join table (e.g., for a bipartite graph where relationships are modeled via another entity). After the analysis, when needed, the social network (graph) can be saved (exported) into GraphML or Pajek files. After loading a social network into the working environment, Mitandao provides modules for its optional filtering. For example, we can remove isolates (nodes with no connections at all) or use a more complex filter which allows the user to set up multiple conditions on a particular attribute of the nodes. The analysis modules are responsible for adding new properties to the social network, its nodes and the connections between them. The currently implemented analysis modules are mainly wrappers on top of the JUNG library, providing the user with an easy way to compute various attributes of the nodes such as betweenness centrality or degree distribution. Every stage of the workflow (input, filter, analysis, output) is optional and a workflow can be executed without it. So, if an input module is not used to read a graph from external storage, the subsequent stage receives an empty graph to work with. Similarly, the user can choose not to use any output module and to process the resulting graph by other means.
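The following sketch illustrates the kind of filtering described above (removal of isolates). It is written against a plain adjacency-map representation rather than the JUNG structures used by Mitandao, and the class name and representation are assumptions made for illustration only.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class IsolateFilter {

    // Removes all nodes that have no incident edges.
    static Map<String, Set<String>> removeIsolates(Map<String, Set<String>> adjacency) {
        Map<String, Set<String>> filtered = new HashMap<>();
        for (Map.Entry<String, Set<String>> entry : adjacency.entrySet()) {
            if (!entry.getValue().isEmpty()) {
                filtered.put(entry.getKey(), new HashSet<>(entry.getValue()));
            }
        }
        return filtered;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> graph = new HashMap<>();
        graph.put("alice", new HashSet<>(Set.of("bob")));
        graph.put("bob", new HashSet<>(Set.of("alice")));
        graph.put("carol", new HashSet<>());                 // isolate, will be dropped
        System.out.println(removeIsolates(graph).keySet());  // prints alice and bob only
    }
}
```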
5.5 Mitandao as a Standalone Application The Mitandao application is a Java-based application built on top of the Mitandao library. It provides two basic ways of setting up a workflow:
• a classical tabbed wizard with a pre-defined simple linear analysis consisting of one input, one optional filtering algorithm, one optional analytical algorithm and one output (with optional graph visualization at the end). Each step is presented in one tab containing all the controls necessary to choose the appropriate module and to set up its parameters;
• a graphical wizard, which allows the creation of advanced analyses with multiple input, filtering, analytical and output stages. Moreover, it allows creating forks and joins in the workflow, thus performing multiple branches of the analysis or of some of its stages.
When a graph is displayed (Fig. 5.2), the user can explore it freely by zooming in and out and moving the graph. The user can drag nodes to another position, explore the properties of nodes and edges, and add or delete nodes and edges. It is also possible to omit the input module stage and start the analysis with an empty graph, fill it interactively with nodes and edges and assign various attributes to them.
Fig. 5.2 Screenshot of the main Mitandao window. The left part shows the current graph, while the panel on the right contains controls to select nodes or edges and to examine the attributes of the selected items.
Then, the user can store this manually created graph for future reference or use it as an input for a new workflow. Node and edge selection in the displayed graph can be done either by manually clicking the nodes to be selected or by using pre-defined selectors, which allow selecting all nodes within a defined distance from an already selected node, selecting nodes with a certain degree, or selecting nodes and edges according to the values of their attributes.
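A distance-based selector of the kind mentioned above is essentially a bounded breadth-first search, as the sketch below shows. The adjacency-map representation and method names are assumptions made for illustration, not the application's actual code.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

public class DistanceSelector {

    // Collects every node reachable from 'start' within 'maxDistance' hops.
    static Set<String> selectWithinDistance(Map<String, List<String>> adjacency,
                                            String start, int maxDistance) {
        Set<String> selected = new HashSet<>();
        Map<String, Integer> distance = new HashMap<>();
        Queue<String> queue = new ArrayDeque<>();
        distance.put(start, 0);
        queue.add(start);
        while (!queue.isEmpty()) {
            String node = queue.poll();
            int d = distance.get(node);
            selected.add(node);
            if (d == maxDistance) continue;                 // do not expand beyond the limit
            for (String neighbour : adjacency.getOrDefault(node, List.of())) {
                if (!distance.containsKey(neighbour)) {
                    distance.put(neighbour, d + 1);
                    queue.add(neighbour);
                }
            }
        }
        return selected;
    }
}
```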
5.6 Conclusions In this paper, we described Mitandao, an extensible open-source social network analysis framework and an application built on top of it. The most important point is that we provide not only a standalone desktop application but the whole framework, which allows anyone to incorporate the results of social network analysis into their own project, possibly boosting its functionality, enhancing its results, etc. We have already started to incorporate the results of social network analysis coming from the Mitandao framework into our research. It is suitable mostly for tools realizing web search and navigation [9, 12, 13]. In [1], we proposed a method for initializing the user model of a new user by leveraging his or her social connections to other users. Our approach requires that every link between two users is assigned a weight and every user (node in the network) is assigned a rank. All these values come as results of the automated invocation of analysis workflows within our Mitandao framework. The framework (source code, javadoc documentation and usage guides) is available at mitandao.sourceforge.net.
Acknowledgements. This work was partially supported by the Cultural and Educational Grant Agency of the Slovak Republic, grant No. KEGA 3/5187/07 and by the Scientific Grant Agency of the Slovak Republic, grant No. VG1/0508/09. We wish to thank students Lucia Jastrzembská, Tomáš Jelínek, Katka Kostková, Luboš Omelina and Tomáš Konečný for their invaluable contribution to the project.
References
1. Barla, M.: Leveraging Social Networks for Cold Start User Modeling Problem Solving. In: Bieliková, M. (ed.) IIT.SRC 2008: Student Research Conference, pp. 150–157 (2008)
2. Batagelj, V., Mrvar, A.: Pajek – Analysis and Visualization of Large Networks. In: Mutzel, P., Jünger, M., Leipert, S. (eds.) GD 2001. LNCS, vol. 2265, pp. 8–11. Springer, Heidelberg (2002)
3. Bekkerman, R., McCallum, A.: Disambiguating Web Appearances of People in a Social Network. In: Ellis, A., Hagino, T. (eds.) WWW 2005, pp. 463–470. ACM, New York (2005)
4. Bing, L.: Web Data Mining: Exploring Hyperlinks, Contents, and Usage Data. Springer, Heidelberg (2007)
5. Borgatti, S., Everett, M., Freeman, L.: UCINET 6.0. Analytic Technologies (2008)
6. Brusilovsky, P.: Social Information Access: The Other Side of the Social Web. In: Geffert, V., Karhumäki, J., Bertoni, A., Preneel, B., Návrat, P., Bieliková, M. (eds.) SOFSEM 2008. LNCS, vol. 4910, pp. 5–22. Springer, Heidelberg (2008)
7. Farzan, R., Brusilovsky, P.: Community-based Conference Navigator. In: Vassileva, J., et al. (eds.) Socium: Adaptation and Personalisation in Social Systems: Groups, Teams, Communities. Workshop held at UM 2007, pp. 30–39 (2007)
8. Massa, P., Bhattacharjee, B.: Using Trust in Recommender Systems: An Experimental Analysis. In: 2nd Int. Conference on Trust Management (2004)
9. Návrat, P., Taraba, T., Bou Ezzeddine, A., Chudá, D.: Context Search Enhanced by Readability Index. In: IFIP 20th World Computer Congress, TC 12: IFIP AI 2008. IFIP, vol. 276, pp. 373–382. Springer Science+Business Media (2008)
10. Reuther, P., et al.: Managing the Quality of Person Names in DBLP. In: Gonzalo, J., Thanos, C., Verdejo, M.F., Carrasco, R.C. (eds.) ECDL 2006. LNCS, vol. 4172, pp. 508–511. Springer, Heidelberg (2006)
11. Tsiriga, V., Virvou, M.: A Framework for the Initialization of Student Models in Web-based Intelligent Tutoring Systems. User Model. User-Adapt. Interact. 14(4), 289–316 (2004)
12. Tvarožek, M.: Personalized Navigation in the Semantic Web. In: Wade, V.P., Ashman, H., Smyth, B. (eds.) AH 2006. LNCS, vol. 4018, pp. 467–472. Springer, Heidelberg (2006)
13. Tvarožek, M., Barla, M., Frivolt, G., Tomša, M., Bieliková, M.: Improving Search in the Semantic Web via Integrated Personalized Faceted and Visual Navigation. In: Geffert, V., Karhumäki, J., Bertoni, A., Preneel, B., Návrat, P., Bieliková, M. (eds.) SOFSEM 2008. LNCS, vol. 4910, pp. 778–789. Springer, Heidelberg (2008)
14. Witten, I.H., Frank, E.: Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations, 2nd edn. Morgan Kaufmann, San Francisco (2005)
Chapter 6
Automatic Web Document Restructuring Based on Visual Information Analysis Radek Burget
Abstract. Many documents available on the current web have quite a complex structure that allows presenting various kinds of information. Apart from the main content, the documents usually contain headers and footers, navigation sections and other types of additional information. For many applications, such as document indexing or browsing on special devices, it is desirable that the main document information precede the additional information in the underlying HTML code. In this paper, we propose a method of document preprocessing that automatically restructures the document code according to this criterion. Our method is based on the analysis of the rendered document. A page segmentation algorithm is used for detecting the basic blocks on the page and the relevance of the individual parts is estimated from the visual properties of the text content. Keywords: Document Restructuring, Page Analysis, Page Segmentation, Block Importance.
Radek Burget, Faculty of Information Technology, Brno University of Technology, e-mail: [email protected]

6.1 Introduction The current World Wide Web contains a huge amount of documents that are primarily viewed using a web browser. Many of the available documents have quite a complex structure as they contain various types of information. Apart from the main content that should be presented to the reader, it is necessary to include additional information sections in the documents, such as navigation sections, related content, headers and footers, etc. This is characteristic, for example, of electronic newspaper websites, where the published article is usually surrounded by many kinds of additional information.
This complexity of web documents has certain drawbacks for the accessibility of the document content. From the user's point of view, it complicates browsing the documents on alternative devices such as mobile phones, PDAs or the special voice readers and tactile devices used by blind users. From the information processing point of view, it complicates indexing, classification and other operations on the documents that have been developed for processing simple text documents [10]. The technology currently used for web publishing (mainly the HTML and CSS languages) does not provide a standard mechanism for annotating the purpose of the individual content parts of a document. The only information provided by the authors is intended for human users only, and it has the form of visual hints that include using a certain content layout and visual style. Therefore, using the visual information for detecting the document structure is an interesting challenge. In this paper, we propose a method for the automatic restructuring of HTML documents based on the analysis of their visual layout and style. The key idea of our approach is to use the visual information provided to the ordinary user for improving the structure of the underlying code. For detecting the basic information blocks in the page, we use a special page segmentation algorithm with several features specific to this task. Subsequently, the importance of the discovered blocks is estimated and the blocks are reordered according to the above mentioned criteria. The resulting documents are very simple in their visual style; however, their structure is more suitable mainly for automatic processing and for non-visual access methods.
6.2 Related Work Our work is related to the areas of visual page segmentation and information block discovery in documents. Page segmentation has often been investigated in the context of document transformation to a structured format (mainly for PDF documents or OCR), where the XY-Cut approach is usually used [7]. HTML documents are specific from this point of view since a more structured content description is available in the form of the document code. Therefore, in the case of HTML documents, many segmentation methods are based on DOM tree analysis [4, 5]. However, the direct analysis of the HTML code must always be based on certain assumptions about HTML usage that are often not fulfilled in reality. Advanced methods of page segmentation work with a visual representation of the rendered document, such as the VIPS method [3]. The area of information block detection overlaps with page segmentation to some extent. Most of the existing approaches are again based on the analysis of the HTML code tree (e.g. [4, 5, 6, 9]). Some approaches work with a visual page representation obtained by page segmentation. The work published by Yu et al. [10] is based on a visual page segmentation using the VIPS algorithm. Similarly, the work of Song et al. [8] is based on visual block classification. Our page segmentation algorithm is based on our previous work [2] and it shares some features (mainly the separator detection) with the VIPS algorithm. In comparison to the existing methods, our algorithm has several new features, mainly the font
style comparison and the logical block analysis based on heading detection. We consider the page segmentation itself and its application to document restructuring to be the main contribution of this paper.
6.3 Method Overview The proposed document restructuring method is summarized in Fig. 6.1. It is based on the analysis of the visual appearance of the document.
Fig. 6.1 Code restructuring method overview (document rendering turns the document code into a box tree; page segmentation produces a layout model; segment order detection produces an ordered list of areas; HTML code generation produces the resulting code)
First, the document is rendered and a tree of page elements (boxes) is obtained. The details of the document rendering are contained in Sect. 6.4. As the next step, the rendered page is processed by a specific page segmentation algorithm. The aim of this step is to detect the logical segments that can be considered the standalone units in the page. This is the key algorithm of the whole process and therefore, the corresponding Sect. 6.5 forms the major part of this paper. Once the segments are detected, we re-order them as described in Sect. 6.6. Finally, we generate a simple HTML code from the resulting page model. This phase is described in Sect. 6.7.
6.4 HTML Document Rendering For rendering the web page, we have implemented our own layout engine called CSSBox (http://cssbox.sourceforge.net/). The task of the layout engine is to process an HTML code tree represented by a DOM, load all the corresponding CSS style sheets and, subsequently, determine the positions and visual features of all the document elements on the resulting page. The way of determining these features is given by the CSS specification [1]. The result of the rendering process is (in the CSSBox terminology) a tree of boxes. By a box, we understand a rectangular area on the resulting page with a given position and size that contains an arbitrary part of the document content. For
each element of the HTML code that is not explicitly marked as hidden, a corresponding element box is created. Moreover, a text box is created for each continuous text string that should be displayed in the document, and a replaced box is created for each image or other object in the page. The way of organizing the obtained boxes into a tree structure corresponds to the CSS containing-box definition [1], i.e. for each box, its containing box forms its parent node in the resulting tree.
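A simplified picture of this box-tree model is sketched below. The class names are illustrative only and do not reproduce the actual CSSBox API; they merely show the three kinds of boxes (element, text, replaced) with position, size and nesting.

```java
import java.util.ArrayList;
import java.util.List;

// Base class: a rectangular area with a position and size on the rendered page.
abstract class Box {
    int x, y, width, height;
    Box parent;                                   // the containing box
    final List<Box> children = new ArrayList<>();

    void add(Box child) {
        child.parent = this;
        children.add(child);
    }
}

class ElementBox extends Box {                    // created for each non-hidden HTML element
    String tagName;
}

class TextBox extends Box {                       // created for each continuous text string
    String text;
    float fontSize;
    boolean bold, italic;
}

class ReplacedBox extends Box {                   // created for images and other objects
    String sourceUrl;
}
```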
6.5 Page Segmentation The task of page segmentation is to discover the basic visual blocks on the page. Our method is based on our previous work [2]; however, certain modifications have been proposed in order to meet the specific requirements of the code restructuring task. The essential requirement is that the basic logical blocks remain consistent. It is necessary to avoid breaking a logical block (for example a published article) into several parts and changing their order arbitrarily. In other words, the code restructuring may change the order of the logical blocks but it should not affect their contents. In order to meet these criteria, we have proposed a page segmentation algorithm with several extensions in comparison to the similar existing algorithms. In addition to the content layout and separators, our algorithm involves the analysis of the content style and the detection of logical blocks based on the detection of headings in the content. The input of the page segmentation is the tree of boxes obtained from the rendering engine. The whole segmentation process consists of the following steps:
1. Consistent visual area detection. We detect the blocks of content with a consistent style in the rendered page, such as paragraphs of text or headings. During this operation, we consider the style of the individual text boxes and several kinds of visual separators that can be used for separating the detected areas.
2. Heading detection phase. Based on the font properties of the detected areas, we guess the areas that may correspond to the headings in the page. We consider the headings to be important for denoting the important blocks in the page.
3. Logical block detection phase. The logical units consisting of multiple blocks are discovered. In our approach, we use the detected headings as starting points and we further analyze the content layout to discover the logical block bounds.
In the following sections, we will briefly describe the individual phases.
6.5.1 Consistent Visual Area Detection This phase of page segmentation provides a basic analysis of the tree of boxes obtained from the rendering engine. The goal of this phase is to discover connected
blocks of text in the page with a consistent style that can later be interpreted as paragraphs, headings or other parts of the document contents. In the underlying HTML code, and subsequently in the rendered tree of boxes, these areas may be represented by one or more elements or boxes. Therefore, during the visual area detection, we need to discover clusters of text boxes in the box tree with certain properties. In our method, we consider the mutual positions of the boxes in the page, visual separators of several kinds and the visual style of the contained text. First, for each box in the box tree, we represent the mutual positions of its child boxes in the page using a grid, as illustrated in Fig. 6.2. This representation allows us to quickly determine which boxes are adjacent to each other. Then, we start with the leaf nodes of the tree and for each parent node, we perform the following steps:
1. Line detection – we group together the text boxes that are placed on the same line in the grid.
2. Separator detection – we detect the separators that may be formed either by whitespace rectangles with a width or height exceeding a certain limit, or by border lines that can be created at the box borders by the corresponding CSS properties.
3. Consistent area detection – we group together the boxes in the grid that are not separated by any of the detected separators and, at the same time, have a consistent text style. We say that two boxes have a consistent text style if the difference in font size, weight and style between these boxes is under a certain threshold.
Fig. 6.2 Representing mutual positions of child boxes using a grid
As a result, for each non-leaf node b in the box tree, we obtain clusters of its child boxes that represent the text areas of a consistent style. We will call these clusters the visual areas. Since the same algorithm is applied at all the tree levels, the box b itself may form a part of another visual area at an upper level of the box tree. As a result, we obtain a parallel tree of visual areas in addition to the tree of boxes. The root area of this tree corresponds to the whole rendered page.
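The line-detection step above can be illustrated with a short sketch that groups sibling text boxes into lines by vertical overlap; this is a simplification of the grid-based test used by the method, and the field names mirror the illustrative box model sketched earlier rather than the actual implementation.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class LineDetector {

    static class TextBox {
        int y, height;
        float fontSize;
        TextBox(int y, int height, float fontSize) { this.y = y; this.height = height; this.fontSize = fontSize; }
    }

    // Groups boxes into lines: a box joins the previous line if it vertically overlaps its first member.
    static List<List<TextBox>> detectLines(List<TextBox> boxes) {
        List<TextBox> sorted = new ArrayList<>(boxes);
        sorted.sort(Comparator.comparingInt((TextBox b) -> b.y));
        List<List<TextBox>> lines = new ArrayList<>();
        for (TextBox box : sorted) {
            List<TextBox> last = lines.isEmpty() ? null : lines.get(lines.size() - 1);
            if (last != null && box.y < last.get(0).y + last.get(0).height) {
                last.add(box);
            } else {
                List<TextBox> line = new ArrayList<>();
                line.add(box);
                lines.add(line);
            }
        }
        return lines;
    }

    // Two boxes have a consistent text style if their font sizes differ by less than a threshold.
    static boolean consistentStyle(TextBox a, TextBox b, float threshold) {
        return Math.abs(a.fontSize - b.fontSize) < threshold;
    }
}
```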
6.5.2 Heading Detection Some of the consistent blocks detected in the previous phase correspond to document headings. The information about the headings is important for preserving
the consistency of the restructured document because it is used for determining the logical block bounds as described further in Sect. 6.5.3. For the heading detection, we use quite a simple heuristic based on the font size. For each visual area in the tree of visual areas created in the previous step, we compute the average font size of its text content. For the leaf areas, it is the average font size of the contained text boxes. For the remaining areas, it is the average font size of their child areas. Then, the average font size of the root area corresponds to the average font size of the whole page. Our heading detection method is based on the simple assumption that the font size used for headings is significantly larger than the average font size. The detection process is the following:
• Let f0 be the average font size of the root visual area. We define a font size threshold ft relative to the average font size of the root area. In our experiments, the value ft = 1.5 f0 turned out to work well.
• We choose all the leaf areas with an average font size greater than or equal to ft.
• Some of the headings may consist of multiple lines. These headings are represented by non-leaf visual areas containing a set of leaf areas with the same font size. In this case, we choose the parent area instead of the child areas.
This simple heuristic allows us to mark selected areas in the tree of visual areas as "likely to represent headings".
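The sketch below illustrates this heuristic: average font sizes are computed bottom-up over the tree of visual areas, and leaf areas whose average font size reaches 1.5 times the page average are marked as likely headings. The VisualArea class is an illustrative stand-in for the method's data structure, and the multi-line promotion of parent areas is omitted for brevity.

```java
import java.util.ArrayList;
import java.util.List;

public class HeadingDetector {

    static class VisualArea {
        float ownFontSize;                              // used by leaf areas only
        boolean likelyHeading;
        final List<VisualArea> children = new ArrayList<>();

        float averageFontSize() {
            if (children.isEmpty()) return ownFontSize;
            float sum = 0;
            for (VisualArea child : children) sum += child.averageFontSize();
            return sum / children.size();
        }
    }

    // Marks leaf areas whose average font size reaches ft = 1.5 * f0 (the page average).
    static void markHeadings(VisualArea root) {
        float threshold = 1.5f * root.averageFontSize();
        mark(root, threshold);
    }

    private static void mark(VisualArea area, float threshold) {
        if (area.children.isEmpty()) {
            area.likelyHeading = area.averageFontSize() >= threshold;
        } else {
            for (VisualArea child : area.children) mark(child, threshold);
        }
    }
}
```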
6.5.3 Logical Block Detection From the page segmentation phase, we have obtained a tree of visual areas at the paragraph level. For the page restructuring, it is necessary to determine the bounds of the larger logical blocks in the page, such as the articles. These blocks should remain continuous during the page restructuring. This task roughly corresponds to article detection in the page. However, we do not insist on determining the exact bounds of the article. For the document restructuring, it is sufficient to find logically consistent areas. Moreover, in order to avoid the loss of consistency of the content, we prefer a smaller number of larger areas. For example, when a document contains several articles below each other, we consider the whole area containing the articles a single logical block. In our approach, we use the detected headings as the starting points of the logical block detection. This corresponds to the observation that human users also use the headings in a similar way. Moreover, we assume that the page contents may be organized in multiple columns. Therefore, for each heading detected, we try to find a column in the page that contains this heading. Subsequently, the logical block is detected within this column. We assume that the logical block is formed mostly by a consistent flow of text that is aligned to the left or justified. Therefore, we search for a sequence of aligned visual areas that correspond to text lines or paragraphs below and above each
heading. During this process, the following visual area configurations are considered to indicate the possible end of the logical block (loss of consistency):
• a visual area that would require expanding the bounds to the left (it exceeds the expected bounds of the logical block on the left side);
• multiple visual areas on a line, which indicate the end of the sequence of simple lines.
When the frequency of these conditions exceeds a certain threshold, we have found the end of the logical block. Usually, there is a longer text placed below the heading and possibly significantly less text placed above the heading (for example a date or subtitle). Therefore, the threshold may be set differently for searching above and below the heading.
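The boundary search can be pictured as a simple scan, as in the sketch below: the lines below (or above) a heading are walked until the number of inconsistent configurations exceeds a threshold. The types and the counting scheme are assumptions, since the text does not give the exact thresholds or counting rule.

```java
import java.util.List;

public class LogicalBlockScanner {

    static class AreaLine {
        boolean expandsLeft;        // exceeds the expected left bound of the block
        boolean multipleAreas;      // several visual areas placed on one line
    }

    // Returns the index of the first line after the heading that ends the logical block.
    static int findBlockEnd(List<AreaLine> linesBelowHeading, int maxViolations) {
        int violations = 0;
        for (int i = 0; i < linesBelowHeading.size(); i++) {
            AreaLine line = linesBelowHeading.get(i);
            if (line.expandsLeft || line.multipleAreas) violations++;
            if (violations > maxViolations) return i;   // end of the block reached
        }
        return linesBelowHeading.size();                // block extends to the last scanned line
    }
}
```

A smaller maxViolations would be used when scanning above the heading, reflecting the observation that little text (a date or subtitle) is expected there.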
6.5.4 Segmentation Result As a result of the segmentation process described above, we obtain a tree of visual areas and the information about the recognized headings and logical blocks formed by these visual areas. In comparison to the source HTML code, the obtained visual area tree represents the page organization in a manner that is much closer to the human perception of the rendered page and it allows using the visual features of the content for its restructuring.
6.6 Content Restructuring The aim of the content restructuring is to move the more important visual blocks to the beginning of the document. At the same time, we want to preserve the order of the remaining areas in order not to break the consistency of the contents. Once the logical blocks have been detected as described in Sect. 6.5.3, the task is to sort these blocks according to their expected importance for the user. The basic idea behind the importance evaluation is that the important parts of the document should be denoted with some heading, and the importance of the heading is visually expressed by the used font size. Therefore, we prefer putting the logical blocks containing large headings before the remaining visual blocks. In the tree of areas, we define the average importance of a visual area as the average of the importance of all its descendants (contained areas) that have an importance greater than zero. For the leaf areas, their importance corresponds to the font size if the area is a heading, or 0 otherwise. The code restructuring then consists of sorting the child elements of all the visual areas according to their average importance while preserving the order of the areas with equal average importance. However, in order to avoid breaking the logical blocks, the order of areas within the detected logical blocks is left untouched during the restructuring.
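A compact sketch of this reordering rule follows: importance is the font size for heading leaves and 0 otherwise, the average importance of an inner area is computed recursively over its positive-importance children (an approximation of the descendant-based definition above), and children are stably sorted by decreasing importance, except inside detected logical blocks, whose internal order is preserved. The class and field names are illustrative, not the paper's code.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ImportanceSorter {

    static class VisualArea {
        float fontSize;
        boolean heading;
        boolean logicalBlock;                 // detected logical block: keep children untouched
        final List<VisualArea> children = new ArrayList<>();

        double averageImportance() {
            if (children.isEmpty()) return heading ? fontSize : 0.0;
            double sum = 0;
            int count = 0;
            for (VisualArea child : children) {
                double imp = child.averageImportance();
                if (imp > 0) { sum += imp; count++; }
            }
            return count == 0 ? 0.0 : sum / count;
        }
    }

    static void reorder(VisualArea area) {
        if (area.logicalBlock) return;        // never change the order inside a logical block
        Comparator<VisualArea> byImportance =
                Comparator.comparingDouble(VisualArea::averageImportance);
        // List.sort is stable, so areas with equal importance keep their original order.
        area.children.sort(byImportance.reversed());
        for (VisualArea child : area.children) reorder(child);
    }
}
```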
Fig. 6.3 Web document before and after restructuring (the remaining page content is placed further on the page)
6.7 HTML Code Generation The resulting HTML code is generated from the tree of visual areas. We traverse the whole tree in a pre-order manner. During the traversal, for each leaf visual area, we create an element in the resulting HTML code. Similarly, for each text box associated with this visual area, we create another nested element with the appropriate text content and the style attribute set according to the font style and colors of the box. As a result, we obtain HTML code with only a simple visual style; however, its content is re-ordered as described in the previous section. Figure 6.3 shows an example document and the result after restructuring.
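The generation pass can be sketched as follows. The text does not name the specific HTML elements that are emitted, so generic div and span elements are assumed here; the VisualArea and TextBox types are the illustrative ones used in the earlier sketches.

```java
import java.util.ArrayList;
import java.util.List;

public class HtmlGenerator {

    static class TextBox { String text; String style; }
    static class VisualArea {
        final List<VisualArea> children = new ArrayList<>();
        final List<TextBox> textBoxes = new ArrayList<>();
    }

    static String generate(VisualArea root) {
        StringBuilder html = new StringBuilder();
        emit(root, html);
        return html.toString();
    }

    // Pre-order traversal: every leaf area becomes one block element containing
    // one styled inline element per associated text box.
    private static void emit(VisualArea area, StringBuilder html) {
        if (area.children.isEmpty()) {
            html.append("<div>");
            for (TextBox box : area.textBoxes) {
                html.append("<span style=\"").append(box.style).append("\">")
                    .append(box.text).append("</span>");
            }
            html.append("</div>\n");
        } else {
            for (VisualArea child : area.children) emit(child, html);
        }
    }
}
```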
6.8 Experimental Evaluation We have implemented the described methods in the Java environment. We have created a tool that displays the segmented page identified by a given URL and produces the restructured HTML document. Moreover, the resulting tree of visual areas produced by the page segmentation is stored in an XML file. Additionally, we have created an annotation tool that reads and displays the segmented page stored in the XML file and allows the important blocks in the page to be annotated manually. We have tested the method on a set of documents from various online news portals and article sources. The list of websites and the results are shown in Table 6.1. From each source, we have taken 10 random documents from different sections of the website. These sample documents have been segmented and, using our annotation tool, we have marked the areas related to the topic. In most cases, this was the text of the published article or articles. Subsequently, we have compared our selection with the result of the automatic restructuring and we have checked the following properties of the resulting documents:
Table 6.1 Results of experimental testing on selected websites

Website            Consistent blocks (%)   Content beginning (%)
cnn.com            100                     30
reuters.com        100                     100
usatoday.com       10                      100
foxnews.com        100                     100
newsday.com        100                     100
bostonherald.com   100                     100
news.cnet.com      100                     30
elmundo.es         90                      100
lefigaro.fr        100                     0
latribune.fr       100                     0
spiegel.de         60                      100
zeit.de            100                     90
idnes.cz           100                     100
lupa.cz            100                     100
wikipedia.org      100                     100
• The contents of the selected logical blocks (mainly the text of the articles) should remain consistent, i.e. the content is complete and corresponds to the original article. The "Consistent blocks" column in Table 6.1 contains the percentage of the logical blocks that remained consistent for the individual sources.
• The text of the article or articles, including the headings, should be placed immediately at the beginning of the document. We have admitted a few lines, such as a date or subtitle, before the first heading. The "Content beginning" column in Table 6.1 shows the percentage of documents that have met this criterion.
During our experiments, we have identified the following problems that caused the method to fail in some cases:
• There are some auxiliary headings that are larger than the main article heading. The articles remain consistent; however, they are incorrectly ordered (cnn.com).
• There is a main heading provided on each page (usually containing the website name) that is recognized to be more important than the individual article headings. Again, the articles remain consistent; however, the document is considered to be a single logical block and therefore it is not reordered at all (cnn.com, news.cnet.com, lefigaro.fr, latribune.fr).
• The logical block detection fails. In this case, the main article correctly starts at the beginning of the document; however, it is broken into parts that are mixed with the remaining contents (usatoday.com, spiegel.de).
Quite surprisingly, although the heuristic is quite simple, the heading detection phase gives reliable results for all the documents.
6.9 Conclusions We have proposed an algorithm for web document restructuring based on visual information analysis. The most important part of our approach is a page segmentation method with features that address the specific requirements of this task. The results show that the method works reliably for most of the tested websites. An advantage of the presented method is that it is based on the analysis of the rendered document and therefore it is not dependent on the HTML and CSS implementation details that may be very different among websites. On the other hand, we use very simple heuristics, mainly for the logical block detection and the importance estimation, which are the most important cause of the erroneous results. The improvement of these heuristics by considering more visual features, or their replacement with a classification method, will be the objective of our further research. Since the method involves rendering the whole document, it can be viewed as complex and time-consuming. However, for applications such as viewing the documents on portable devices or by other alternative means, the method allows the important content to be accessed more efficiently. Acknowledgements. This research was supported by the Research Plan No. MSM 0021630528 – Security-Oriented Research in Information Technology.
References
1. Bos, B., Lie, H.W., Lilley, C., Jacobs, I.: Cascading Style Sheets, level 2, CSS2 Specification. The World Wide Web Consortium (1998)
2. Burget, R.: Automatic document structure detection for data integration. In: Abramowicz, W. (ed.) BIS 2007. LNCS, vol. 4439, pp. 394–400. Springer, Heidelberg (2007)
3. Cai, D., Yu, S., Wen, J.R., Ma, W.Y.: VIPS: a Vision-based Page Segmentation Algorithm. Microsoft Research (2003)
4. Gupta, S., Kaiser, G., Neistadt, D., Grimm, P.: DOM-based content extraction of HTML documents. In: WWW 2003: Proceedings of the 12th Web Conference, pp. 207–214 (2003)
5. Kovacevic, M., Diligenti, M., Gori, M., Milutinovic, V.: Recognition of common areas in a web page using visual information: a possible application in a page classification. In: ICDM 2002, p. 250. IEEE Computer Society, Washington (2002)
6. Lin, S.H., Ho, J.M.: Discovering informative content blocks from web documents. In: KDD 2002: Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 588–593. ACM Press, New York (2002)
7. Meunier, J.L.: Optimized XY-cut for determining a page reading order. In: ICDAR 2005, pp. 347–351 (2005)
8. Song, R., Liu, H., Wen, J.R., Ma, W.Y.: Learning block importance models for web pages. In: WWW 2004: Proceedings of the 13th International Conference on World Wide Web, pp. 203–211. ACM Press, New York (2004)
9. Yi, L., Liu, B., Li, X.: Eliminating noisy information in web pages for data mining. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, New York (2003)
10. Yu, S., Cai, D., Wen, J.R., Ma, W.Y.: Improving Pseudo-Relevance Feedback in Web Information Retrieval Using Web Page Segmentation. Microsoft Research (2002)
Chapter 7
Reasoning about Weighted Semantic User Profiles through Collective Confidence Analysis: A Fuzzy Evaluation Nima Dokoohaki and Mihhail Matskin
Abstract. User profiles are widely utilized to alleviate the increasing problem of so-called information overload. Many important issues of the Semantic Web, such as trust, privacy, matching and ranking, have a certain degree of vagueness and involve truth degrees that one needs to represent and reason about. In this setting, profiles tend to be useful and allow the incorporation of these uncertain attributes, in the form of weights, into profiled materials. In order to interpret and reason about these uncertain values, we have constructed a fuzzy confidence model through which these values can be collectively analyzed and interpreted as the collective experience confidence of users. We analyze this model within a scenario comprising weighted user profiles of a semantically enabled cultural heritage knowledge platform. Initial simulation results have shown the benefits of our mechanism for alleviating the problem of sparse and empty profiles. Keywords: Confidence, Fuzzy Inference, Semantic User Profiles, Personalization, Reasoning, Uncertainty Evaluation.
Nima Dokoohaki, Department of Electronics, Computer and Software Systems, School of Information and Communications Technology, Department of Information and Computer Science, Royal Institute of Technology (KTH), Stockholm, Sweden, e-mail: [email protected]
Mihhail Matskin, Department of Information and Computer Science, Norwegian University of Science and Technology (NTNU), Trondheim, Norway, e-mail: [email protected]

7.1 Introduction The increasing overload of information scattered across heterogeneous information ecosystems and landscapes has increased the importance of user profiling. Profiling is seen as a facilitator and enabler of personalization. Personalization is a
methodology used for filtering information on the user's behalf. As a result, profiles are increasingly implemented and utilized to allow intelligent information systems to disseminate selected and filtered information to individual users or groups of users, based on gathered personal information stored in their respective profiles. Reasoning about uncertain knowledge is increasingly important. There has been a strong emphasis on the problem of reasoning in the face of uncertainty on the Semantic Web [1]. Fuzzy logic [2] has become an important focus area for the Semantic Web research community [3]. While strong attention has been given to presenting fuzzy ontological concepts [3] and reasoning about them [4], how to process and infer the uncertain degrees and truth ranges is still of interest in many fields. Within the profiling domain, certain concepts such as trust, privacy and ranking carry vague and uncertain semantics. While ontological fuzzy languages can be used to present these concepts, analyzing the fuzzy degrees of each of these notions, as well as processing them, is of interest to us. We have proposed a profile format [5] in which we consider trust, privacy and rank as weights on the items the user has visited, stored in profiled records. We record values for each of these three weights, creating a multi-weighted profile structure. We have used RDF as the language for the presentation of profiled information. Since the profile is used both to reflect the interests of the user and to store the previous experiences of the user, we create a hybrid notion of a user profile. Confidence is defined as the state of being certain. As the certainty of an experience is affected by situation-dependent measures of usage, we can consider these weights as parameters affecting usage confidence. As a result, we can take each of the weight-triple values and process them to model and evaluate the confidence of the user during the profiled experience. To this end we take a fuzzy approach, through which we process the three weight values of the profiled records and infer confidence values. We demonstrate our model in the context of the SMARTMUSEUM scenario [6], a physical exhibition of art in which users interact with a personalized ubiquitous knowledge platform that uses profiling to provide users with the information services of their choice. The organization of the paper is as follows: following the background study in Sect. 7.2, our framework is presented in Sect. 7.3 and a simulation of our framework is presented in Sect. 7.4, while we conclude and present future work in Sect. 7.5.
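To make the idea of mapping a (trust, privacy, rank) weight triple to a confidence value concrete, the sketch below shows a toy fuzzy estimator using triangular membership functions and a weighted-centroid defuzzification. It is illustrative only: the actual model of this paper follows the frameworks adopted from the cited literature, and all membership functions, rule consequents and names below are assumptions.

```java
public class FuzzyConfidenceSketch {

    // Triangular membership function with peak at b on the support (a, c).
    static double triangular(double x, double a, double b, double c) {
        if (x <= a || x >= c) return 0.0;
        return x < b ? (x - a) / (b - a) : (c - x) / (c - b);
    }

    static double low(double x)    { return triangular(x, -0.5, 0.0, 0.5); }
    static double medium(double x) { return triangular(x,  0.0, 0.5, 1.0); }
    static double high(double x)   { return triangular(x,  0.5, 1.0, 1.5); }

    // Combines the three weights (assumed to lie in [0,1]) into one confidence estimate.
    static double confidence(double trust, double privacy, double rank) {
        double avg = (trust + privacy + rank) / 3.0;       // collective view of the weight triple
        double lowDeg = low(avg), medDeg = medium(avg), highDeg = high(avg);
        // Assumed rule consequents: low -> 0.2, medium -> 0.5, high -> 0.9
        double numerator = lowDeg * 0.2 + medDeg * 0.5 + highDeg * 0.9;
        double denominator = lowDeg + medDeg + highDeg;
        return denominator == 0 ? 0.0 : numerator / denominator;
    }

    public static void main(String[] args) {
        System.out.printf("confidence = %.2f%n", confidence(0.8, 0.6, 0.9));
    }
}
```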
7.2 Background User profiling has its roots in human studies. A user profile is defined as a gathering of raw personal material about the user, according to Koch [7]. User profiles gather and present cognitive skills, abilities, preferences and interaction histories with the system [8]. According to Gauch et al. [8], user profiling is either knowledge-based or behavior-based. Knowledge-based approaches construct static models of users and match users to the closest model. Behavior-based methods consider the behavior as a modeling base, commonly by utilizing machine-learning techniques [9] to discover useful patterns in the behavior. Behavioral gathering and logging is used
in order to obtain the data necessary to detect and extract usage patterns, according to Kobsa [10]. Personalization systems are based on user profiles, according to Gauch et al. [8]. One category of personalization techniques is based on the cognitive patterns (such as interests, preferences, likes, dislikes, and goals) a user has. These methods are known as filtering and recommendation techniques [10]. They filter resources based on features (mostly metadata) extracted and gathered from a resource, or according to the ratings (generally weights) of users with similar profiles, according to Weibelzahl [11]. Ontologies, at the heart of Semantic Web technologies, are used to formalize domain concepts, which allows describing constraints for the generation or selection of resource contents belonging to the domain the user is keen towards; they are also used to formalize the user model or profile ontology that helps decide which resources should be adapted (for instance, shown or not shown) to the user. Ontologies along with reasoning create a formalization that boosts personalization decision-making mechanisms, according to Dolog et al. [12, 13]. Ontological user profiles are becoming widely adopted. For instance, within the domain of digital cultural heritage, the CHIP project is a significant stakeholder. Considerable research attention has been paid to semantically formalizing the user domain [14], as well as to personalization of information retrieval. Hybrid ontological user models are used to learn, gather, store and use personal user data, according to which semantically-enriched artworks are recommended during both online and on-site visits to the exhibition. We have considered utilizing hybrid user models [5], which incorporate a semantic presentation of personal information about users as well as the notions of trust, privacy and ranking for items the user has interest towards, in the form of weight descriptors. Fuzzy logic has been considered as a means for mining, learning and improving user profiles [15, 16]. Fuzzy notions of trust [17, 18, 19, 20], privacy [21] and ranking [22] have been proposed. In the context of e-commerce multi-agent settings, a fuzzy framework for evaluating and inferring trustworthiness values of the opinions of agents has been proposed by Schmidt et al. [19]. Agents state their evaluations about a particular (trustee) agent, the agent being evaluated, with respect to the agent initiating the transaction (truster). We have adopted and adapted this framework to our problem. At the same time, we have adopted the privacy approach proposed by Zhang et al. [21] for privacy and ranking, while for trust evaluation we have taken the approach proposed by Schmidt et al. [19]. In addition, uncertain notions of confidence modeling have been proposed [20]. In the context of PGP key-chaining, Kohlas et al. [20] propose a naive approach to confidence evaluation based on uncertain evidence. Considered as a notion close to trust and belief, confidence is modeled as an important element in the fuzzy inference mechanism.
7.3 Fuzzy Confidence Framework In this section we present our approach for modeling and evaluating overall confidence from the three-weight descriptors of trust, privacy and rank, assigned to semantic user profiles. We refer to the inferred resulting values as overall confidence
of the users. Before presenting the process, we first describe the presentation format for the profiled records containing the values of trust, privacy and rank, and the motivation for using them. We limit the application of the profiles to the scenario to which we intend to apply our framework.
7.3.1 Presenting Profiled Weight Descriptors In addition to interest capturing (known as the traditional approach in profiling), we assign extra weights for capturing trust, privacy and rank to user and customer profiles. As an example, in the SMARTMUSEUM [6] case, the three weight descriptors (privacy, trust and rank) are gathered in the form of sensor data, given directly by users from the mobile devices they carry during the exhibition, or are unobtrusively gathered without their explicit consent from environmental sensors such as RFID tags, GPS location services and wireless networks. These weights represent the perception of users with respect to their experience, in our case an exhibition and visit of art and cultural artifacts [5, 6]. Rank or score presents the amount of interest a user has with respect to his/her visit. Privacy presents the secrecy of users with respect to the disclosure of their personal information. Trust describes the self-assurance of the experience of users. As a result, the user has the ability to tell the system how secret and how likable his/her current experience is. The main motivation for attaching these weights to profiled items is that using these extra weights we can alter and perhaps improve the behavior of the system. If the services provided by the platform can be seen as the system's behavior sets, these weights could alter the system's behaviors to an extent that the system provides better and improved services. Raw values of privacy, trust and ranking are gathered during the exhibition visit from the interactive interfaces implemented on smart handheld devices. The software interface depicts the three values in the form of a scale which the user can change in the preferences section. During the visit, experience data (mainly the items visited and the weight values assigned) are retrieved from the handheld devices by SMARTMUSEUM servers and are stored in user profiles. The structure of the profiles [9] has been specified to be flexible and generic enough to accommodate ontological (RDF triples) data about visited artifacts, the context of the visit and the weight descriptors. As an example, a profiled slice of this kind
conveys the following semantics: in rainy weather (context), at a certain date (20081210), an anonymous user (subject) visited (predicate) the artwork Saint Jerome Writing (object), liked it very much (rank value = 0.80), moderately trusts his/her own experience (trust value = 0.60) and has average secrecy (privacy value = 0.50).
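The original RDF listing of such a slice is not reproduced in this chapter, so the following sketch only illustrates how a weighted record of this kind could be encoded as a reified RDF statement; the namespace and property names are hypothetical and do not come from the SMARTMUSEUM schema.

```python
# Hypothetical encoding of the profiled slice above; the vocabulary is illustrative only.
from rdflib import Graph, Literal, Namespace, BNode, RDF, XSD

UP = Namespace("http://example.org/weighted-profile#")  # assumed, not the project's schema

g = Graph()
record = BNode()

# Reified visit statement: anonymous user (subject) visited (predicate) the artwork (object).
g.add((record, RDF.type, RDF.Statement))
g.add((record, RDF.subject, UP.AnonymousUser))
g.add((record, RDF.predicate, UP.visited))
g.add((record, RDF.object, Literal("Saint Jerome Writing")))

# Context of the visit and the three weight descriptors described in the text.
g.add((record, UP.context, Literal("rainy weather")))
g.add((record, UP.date, Literal("20081210")))
g.add((record, UP.rank, Literal(0.80, datatype=XSD.double)))
g.add((record, UP.trust, Literal(0.60, datatype=XSD.double)))
g.add((record, UP.privacy, Literal(0.50, datatype=XSD.double)))

print(g.serialize(format="turtle"))
```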
7.3.2 Fuzzy Confidence Modeling Process In order to evaluate the overall confidence of a user, we extract the weight values (privacy, trust and rank) from the user profiles described previously and process them accordingly. The process is made of two main phases: pre-processing and post-processing. The following steps are taken in order to evaluate the overall confidence of the user. In the pre-processing phase, the first step involves the application of weighting methodologies to the raw values. This step includes fuzzification of each of the weight descriptors. We take a different approach per weight descriptor, depending on its usage and semantics. The second step involves defining membership functions: per each fuzzy-weight input we define a membership function which translates the linguistic fuzzy rules and axioms into fuzzy numbers and values, as members of fuzzy sets. The final step involves the application of fuzzy rules: the rules, which are defined and embedded in the fuzzy rule base, are applied to the fuzzified and weighted sets. In the post-processing phase, the first step involves feeding the input values to the membership functions, which creates fuzzy sets. The second step involves applying a defuzzification methodology to the fuzzy sets. In the following sections, we describe each step in more detail.
7.3.2.1 Fuzzification Phase
In this phase, each weight value is taken separately and converted into fuzzy values which are afterwards fed into the fuzzy inference engine. Weight Fuzzification (Secrecy, Self-Reliance and Opinion Weighting). As stated, privacy in our model gives users the ability to specify the secrecy of their experience. This allows the system to treat their personal data according to their choice. In a similar fashion, an uncertain privacy model is introduced by Zhang et al. [21], in which privacy is defined as a role that allows the user to manage personal information disclosure to persons and technologies with respect to their privacy preferences. This motivated us to adopt this approach for our own problem. With respect to self-reliance, trust in our approach allows users to describe whether they can rely on their own experience. To model such a form of uncertain trust, we have adopted the approach proposed by Schmidt et al. [19]. The agent-oriented approach undertaken allows modeling trust as a weighted factor. In the case of rating, we take the same approach as for privacy [21], with the major difference that the importance (sensitivity) of a certain item has a direct relationship with the resulting weighted rating and, most importantly, the raw rating has a direct effect on the resulting weighted rating value; for instance, the more important an item is, the higher the rating of a user is. As the focus of this work is on confidence values, we refer the reader to [19, 21] for a detailed description of the formulas used for the calculation of the weighted values.
Defining and Applying Membership Functions and Fuzzy Rules. Before the fuzzy values are fed to the inference engine, membership functions should be formed and values need to be grouped according to the degree of membership of each input parameter. Existing membership functions for fuzzy values include exponential, sigmoid, trapezoid, Gaussian, and bell-shaped functions [3]. We take the simple approach of a triangular shape [2, 3] with our three fuzzified sets (for trust, privacy and rank). Fuzzy rules [2] allow the combination and specification of the output model of the inference engine. We utilize fuzzy rules in our approach to characterize the confidence output model. We would like to describe the degree of a user's confidence with respect to the scores he/she has assigned as weights for trust, privacy and rank. An example of a rule in our confidence model could be: If the (Fuzzy) Trust Value is High AND the (Fuzzy) Ranking Value is High AND the (Fuzzy) Privacy Value is High Then Confidence is High. In this case the AND operator narrows down the output result of the rule, as it represents a conjunction of membership functions. Finally, membership functions translate the fuzzy rules into fuzzy numbers. Fuzzy numbers are then used as input to the fuzzy expert system.
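As a minimal sketch of this step, the snippet below implements triangular membership functions for three linguistic terms and one conjunctive rule; the triangle breakpoints and the use of the minimum for the AND operator are common choices assumed here, not values taken from the paper.

```python
# Minimal sketch of fuzzification with triangular membership functions on [0, 1].
def triangular(x, a, b, c):
    """Triangular membership: rises from a to the peak b, then falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(value):
    """Map a crisp weight in [0, 1] to degrees of membership in three fuzzy sets."""
    return {
        "low":     triangular(value, -0.5, 0.0, 0.5),
        "average": triangular(value,  0.0, 0.5, 1.0),
        "high":    triangular(value,  0.5, 1.0, 1.5),
    }

def rule_high_confidence(trust, privacy, rank):
    """IF trust is High AND privacy is High AND rank is High THEN confidence is High.
    The AND operator is taken here as the minimum of the membership degrees."""
    return min(fuzzify(trust)["high"], fuzzify(privacy)["high"], fuzzify(rank)["high"])

print(rule_high_confidence(0.60, 0.50, 0.80))  # firing strength of the 'high' rule
```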
7.3.2.2 Defuzzification Phase
For the defuzzification methodology several approaches exist, including the center of gravity method, the center of area method, the mean of maxima method, the first of maximum method, the last of maximum method [2], the bisector of area method, and the root-sum-square method. Root-sum-square is considered here as the main method. Other methods could be considered and evaluated as well, but for the sake of simplicity we only consider this approach in this paper. In the defuzzification phase, the calculated membership function results are taken, grouped according to the fuzzy rules, raised to the power of two, and summed following the consequent side of each asserted fuzzy rule. Let FR_{m,i} denote the firing strength of the i-th fuzzy rule whose consequent carries label m. Following the defuzzification approach, formula (7.1) is used to combine the values:

FR_m = \sum_i (FR_{m,i})^2   (7.1)

Let m = ['-', '0', '+'], which represents the labels used to group the rule sets. This allows us to distinguish between rules describing "negative (low), neutral and positive (high)" confidence outcomes. Now that we have defined the outcomes, we can apply weights and scale the weighted output. For instance, if positive confidence outcomes are more in favor of our approach, then more weight can be given to positive confidence rather than to negative or neutral. Here W_0 represents the neutral weight, W_+ the positive weight, and W_- the negative weight. Adopted from Schmidt et al. [19], formula (7.2) allows us to create a weighted overall confidence output:
C(U_x) = \frac{FR_- W_- + FR_0 W_0 + FR_+ W_+}{FR_- + FR_0 + FR_+}   (7.2)
where C represents the evaluated confidence and U_x represents the user being evaluated. We can derive a Collective Confidence Factor (CCF), in which we consider the confidence degrees of other users with respect to the same information item and process the confidence value of a user with respect to an item while bearing in mind the overall derived confidence. Let us define a View as the inferred confidence of a user with respect to a certain item. If we consider a user's self-view as internal, we can define an Internal View, while other users' views can be seen as External Views. Under this assumption we can assign weights to the average confidence of the others and to the user's own confidence. Adapted from the OTV (Overall Trustworthiness Value) proposed by Schmidt et al. [19], we formulate the CCF using formula (7.3):

CCF(U_x) = W_{IView}\,C(U_x) + W_{EView}\,\frac{1}{l}\sum_{i=1}^{l}\frac{FR_- W_- + FR_0 W_0 + FR_+ W_+}{FR_- + FR_0 + FR_+}   (7.3)

where CCF is the Collective Confidence Factor, U_x is the user being evaluated, C(U_x) is the confidence of the user with respect to the item being viewed, W_{IView} represents the internal view weight, W_{EView} represents the external view weight, and l represents the total number of other users whose confidence values are considered (the fraction inside the sum is evaluated with the rule strengths of the i-th such user). At this stage we can scale the resulting values on a specific confidence scale and expand the range of the resulting CCF values.
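A compact sketch of Eqs. (7.1)-(7.3) follows; the rule strengths in the usage example are made up, and the default outcome weights reuse the values reported for the simulation in Sect. 7.4.

```python
# Sketch of the defuzzification and collective-confidence computations (Eqs. 7.1-7.3).
def group_strength(firing_strengths):
    """Combine the rules sharing one outcome label by summing their squared strengths (Eq. 7.1)."""
    return sum(f ** 2 for f in firing_strengths)

def confidence(fr_neg, fr_neu, fr_pos, weights=(0.5, 0.05, 1.0)):
    """Weighted overall confidence C(U_x) of Eq. (7.2); weights are (W-, W0, W+)."""
    w_neg, w_neu, w_pos = weights
    den = fr_neg + fr_neu + fr_pos
    return (fr_neg * w_neg + fr_neu * w_neu + fr_pos * w_pos) / den if den else 0.0

def collective_confidence(c_user, c_others, w_internal=0.5, w_external=0.5):
    """Collective Confidence Factor of Eq. (7.3): the user's own confidence blended
    with the average confidence of the other users over the same item."""
    external = sum(c_others) / len(c_others) if c_others else 0.0
    return w_internal * c_user + w_external * external

# Hypothetical rule strengths for one user, and the confidences of two other users.
fr_neg = group_strength([0.1])
fr_neu = group_strength([0.3, 0.2])
fr_pos = group_strength([0.7])
c_x = confidence(fr_neg, fr_neu, fr_pos)
print(c_x, collective_confidence(c_x, c_others=[0.4, 0.8]))
```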
7.4 SMARTMUSEUM Simulation In an experimental evaluation based on a SMARTMUSEUM setting, we simulated 100 weighted user profiles. The weight values are intentionally sparse; the weights assigned to profile slices contain blank values in order to reflect real-world scenarios where users do not provide much input data to the system or sensors are faulty. We considered the artifacts of two physical museums as the items being experienced by the exhibition visitors. In order to apply our model and demonstrate it in the context of the laid-out scenario, we follow the steps described previously in Sect. 7.3.2. In the first step we fuzzify all raw input values for the three weight descriptors at hand. We consider three main qualifiers for the preferred outcome. Figure 7.1 depicts an excerpt of the raw (crisp) and fuzzified (weighted) trust values of 10 simulated users; the simulated raw values are intentionally sparse, as depicted with broken lines in the left diagram. We have considered "high" confidence, "neutral" confidence and "low" confidence:
If Trust, Privacy and Rank are all High, Confidence is High.
If Trust, Privacy and Rank are all Average, Confidence is Neutral.
If Trust, Privacy and Rank are all Low, Confidence is Low.
Fig. 7.1 Linear presentation of crisp trust values (left) and weighted trust values (right)
Now that the fuzzy sets are formed, we apply the defuzzification methodology. This allows us to filter negative results, in case negative values were available; in our scenario all input values are positive. As the fuzzy sets are grouped based on the preferred output, we can scale the output and gain more flexibility using weights. We can define weights for each type of output. Weight degrees are taken from the range [0, 1]. We have given the maximum weight to positive values, while neutral values are considered more important than low values. Now we are able to evaluate the confidence. Figure 7.2 depicts the resulting confidence evaluation per user/item in our scenario. Now that confidence values are derived, we can infer the CCF for the users we have evaluated so far. More flexibility can be gained by giving weights to the internal and external Views of the user information that we have processed. Since we do not have any preference between the processed user's own view (internal view, W_IView) and the other users' views (external view, W_EView), we assign equal weight to both views. Confidence values and Collective Confidence Factors are depicted in Fig. 7.2. Results were generated for 100 user profiles. For the confidence evaluation the weight set is W = [W_+ = 1, W_0 = 0.05, W_- = 0.5], while for the Collective Confidence Factor the weight set is W = [W_IView = 0.5, W_EView = 0.5]. The horizontal axis represents users, while the vertical axis plots the confidence degree distribution. We tried to generate values that represent real-world user-inserted values: in many cases one or two of the values, and in one or two exceptional cases all three values, were kept empty. This reflects the sparsity problem of training data for profiling (in general, personalization) services such as recommendation or matchmaking. Such a problem hinders the performance of personalization services by creating the infamous "cold-start" problem. By comparing the input (raw) values with the resulting confidence degrees, we realize that the results are not uniform, which is justifiable with respect to the different preferences or interests of users. In certain cases the values have improved, while in many cases the values have not changed. We observed that empty values in many situations have not changed, and this is mainly because of the naive rules considered, where we weigh positive and neutral outcomes higher than low outcomes. A simple approach could be considered to address empty or zero values, by using an offset for the trust fuzzification.
Fig. 7.2 Stacked linear presentation of (top) confidence and (bottom) collective confidence
In a comparison between pure confidence values and collective confidence factors, however, we realize that considering collective opinions while evaluating the confidence of an individual user over a certain item can give improved results. As seen in Fig. 7.2, the CCF values are more uniformly distributed over the diagram in comparison to the pure confidence values. The uniformity in the distribution of the CCF values comes from the quantification of the others' confidence while calculating one's own confidence. The other reason can be seen in the flexibility gained by incorporating separate weights for the View of the user being evaluated and the collective Views of the other users. As a result, we can use CCF values instead of the classic, pure confidence values for boosting personalization services. All in all, we have managed to replace all empty values with a single value (although zero), and at least sparsity is alleviated with respect to that.
7.5 Conclusion and Future Work We have introduced a fuzzy approach to modeling and analyzing confidence based on the weights assigned to the profiled information of users stored in semantic profiles. In our approach the weights can be processed through a fuzzy reasoner to create a weighted outcome based on the factors affecting the context of calculation. We have tested our approach with simulation data from a real-world scenario, where visitors of a visual art exhibition experience the personalized services of distributed knowledge platforms. We have introduced a classic and a collective notion of confidence whose values could be used to improve the quality of adaptive personalized services or allow us to detect similar individual or group behavioral patterns. As future work, we will use the resulting confidence degrees to improve the personalization services provided by the knowledge platform, such as recommendation and matchmaking. We would also like to see how the collective notion can be used to enable group-based services such as group recommendations. Acknowledgements. This work has been done within the FP7-216923 EU IST funded SMARTMUSEUM project. The overall objective of the project is to develop a platform for innovative services enhancing on-site personalized access to digital cultural heritage through adaptive and privacy preserving user profiling.
References
1. Huang, Z., van Harmelen, F., Teije, A.T.: Reasoning with Inconsistent Ontologies. In: International Joint Conference on Artificial Intelligence, vol. 19, pp. 454–459. Lawrence Erlbaum Associates Ltd., USA (2005)
2. Klir, G.J., Yuan, B.: Fuzzy Sets, Fuzzy Logic, and Fuzzy Systems: Selected Papers by L.A. Zadeh. World Scientific Publishing Co., Inc., River Edge (1996)
3. Sanchez, E.: Fuzzy Logic and the Semantic Web. Elsevier, Amsterdam (2006)
4. Li, Y., Xu, B., Lu, J., Kang, D.: Reasoning with Fuzzy Ontologies. Contexts and Ontologies: Theory, Practice and Applications (2006)
5. Dokoohaki, N., Matskin, M.: Personalizing human interaction through hybrid ontological profiling: Cultural heritage case study. In: Ronchetti, M. (ed.) 1st Workshop on Semantic Web Applications and Human Aspects (SWAHA 2008), in conjunction with the Asian Semantic Web Conference 2008, pp. 133–140. AIT e-press (2008)
6. EU IST FP7 SMARTMUSEUM project: http://www.smartmuseum.eu (last accessed June 2009)
7. Koch, N.: Software Engineering for Adaptive Hypermedia Systems. PhD thesis, Ludwig-Maximilians University, Munich, Germany (2005)
8. Gauch, S., Speretta, M., Chandramouli, A., Micarelli, A.: User Profiles for Personalized Information Access. In: Brusilovsky, P., Kobsa, A., Nejdl, W. (eds.) Adaptive Web 2007. LNCS, vol. 4321, pp. 54–89. Springer, Heidelberg (2007)
9. Bloedorn, E., Mason, G., Mani, I., MacMillan, T.R.: Machine Learning of User Profiles: Representational Issues. In: Proceedings of the National Conference on Artificial Intelligence, vol. 13, pp. 433–438. John Wiley and Sons Ltd., Chichester (1996)
10. Kobsa, A.: User Modeling: Recent Work, Prospects and Hazards. In: Schneider-Hufschmidt, M. (ed.) (1993)
11. Weibelzahl, S.: Evaluation of Adaptive Systems. PhD thesis, University of Trier (2003)
12. Dolog, P., Nejdl, W.: Challenges and Benefits of the Semantic Web for User Modelling. In: Proceedings of Adaptive Hypermedia (2003)
13. Dolog, P., Nejdl, W.: Semantic Web Technologies for the Adaptive Web. In: Brusilovsky, P., Kobsa, A., Nejdl, W. (eds.) Adaptive Web 2007. LNCS, vol. 4321, pp. 697–719. Springer, Heidelberg (2007)
14. Wang, Y., Aroyo, L., Stash, N., Rutledge, L.: Interactive User Modeling for Personalized Access to Museum Collections: The Rijksmuseum Case Study. In: Proc. 11th User Modeling Conference, Greece (2007)
15. Nasraoui, O., Frigui, H., Krishnapuram, R., Joshi, A.: Extracting Web User Profiles Using Relational Competitive Fuzzy Clustering. International Journal on Artificial Intelligence Tools 9(4), 509–526 (2000)
16. Martin-Bautista, M.J., Kraft, D.H., Vila, M.A., Chen, J., Cruz, J.: User profiles and fuzzy logic for web retrieval issues. Soft Computing 6(5), 365–372 (2002)
17. Nefti, S., Meziane, F., Kasiran, K.: A Fuzzy Trust Model for E-Commerce. In: E-Commerce Technology, CEC (2005)
18. Falcone, R., Pezzulo, G., Castelfranchi, C.: A Fuzzy Approach to a Belief-Based Trust Computation. In: Falcone, R., Barber, S.K., Korba, L., Singh, M.P. (eds.) AAMAS 2002. LNCS (LNAI), vol. 2631, pp. 73–86. Springer, Heidelberg (2003)
19. Schmidt, S., Steele, R., Dillon, T.S., Chang, E.: Fuzzy trust evaluation and credibility development in multi-agent systems. Journal of Applied Soft Computing (2007)
20. Kohlas, R., Maurer, U.: Confidence Valuation in a Public-Key Infrastructure Based on Uncertain Evidence, pp. 93–112. Springer, Heidelberg (2000)
21. Zhang, Q., Qi, Y., Zhao, J., Hou, D., Niu, Y.: Fuzzy privacy decision for context-aware access personal information. Wuhan University Journal of Natural Sciences (2007)
22. Raj, P.A., Kumar, D.N.: Ranking alternatives with fuzzy weights using maximizing set and minimizing set. Fuzzy Sets and Systems (1999)
Chapter 8
Estimation of Boolean Factor Analysis Performance by Informational Gain Alexander Frolov, Dusan Husek, and Pavel Polyakov
Abstract. To evaluate the soundness of multidimensional binary signal analysis based on Boolean factor analysis theory, and mainly of its neural network implementation, a universal measure is proposed: the informational gain. This measure is derived using classical information theory results. The efficiency of the neural-network-based Boolean factor analysis method is demonstrated using this measure, both when applied to Bars Problem benchmark data and to real textual data. It is shown that when applied to the well defined Bars Problem data, Boolean factor analysis provides an informational gain close to its maximum, i.e. the latent structure of the test image data is revealed with maximal accuracy. For real textual data of scientific origin, the informational gain provided by the method happened to be much higher compared to that based on the human experts' proposal. Keywords: Boolean Factor Analysis, Informational Gain, Hopfield-like Network.
Alexander Frolov
Dept. of Mathematical Neurobiology of Learning, Institute for Higher Nervous Activity and Neurophysiology, Moscow, Russia
e-mail: [email protected]
Dusan Husek
Dept. of Nonlinear Systems, Institute of Computer Science, Academy of Sciences, Prague, Czech Republic
e-mail: [email protected]
Pavel Polyakov
Dept. of Optical Memory, Scientific-Research Institute for System Studies, Moscow, Russia
e-mail: [email protected]
8.1 Introduction
Formulating a general model for Boolean factor analysis (BFA), we follow the ideas of Barlow [3, 4, 5], Marr [14, 15], Foldiak [8], and others who assumed that such
analysis should be one of the main brain functions. For example, Foldiak stated: "According to Barlow [5] objects (and also features, concepts or anything that deserves a name) are collections of highly correlated properties. For instance, the properties furry, shorter than a meter, has tail, moves, animal, barks, etc. are highly correlated, i.e. the combination of these properties is much more frequent than it would be if they were independent (the probability of the conjunction is higher than the product of individual probabilities of the component features). It is these non-independent, redundant features, the suspicious coincidences that define objects, features, concepts, categories, and these are what we should be detecting. While components of objects can be highly correlated, objects are relatively independent of one another. The goal of the sensory system might be to detect these redundant features and to form a representation in which these redundancies are reduced and the independent features and objects are represented explicitly [3, 4, 17]". In our formulation the object's properties are attributes, combinations of highly correlated attributes are factors, and the current combination of objects given by their attributes is a scene. Thus every scene is defined by a binary vector X whose dimensionality N is equal to the total number of attributes in the current context. Every component of X takes the value 'One' or 'Zero' depending on the appearance of the corresponding attribute in the scene. Each factor f_i, i = 1, ..., L (L is the total number of factors in a given context) is a binary vector of dimensionality N in which the entries with value 'One' correspond to highly correlated attributes appearing in the scene when the object characterized by the factor appears. Although the probability of an object's attribute to appear in a scene together with the object is high, it is not necessarily equal to 1. Sometimes an attribute may vanish, as the attribute "has a tail" for the object "dog" in the above example. We take this property of objects into account by introducing the probabilities p_ij, i = 1, ..., L, j = 1, ..., N, which are assumed to be high for the attributes constituting the factor, while for the other attributes we put them to be 0. As in the usual linear factor analysis, we suppose that in addition to common factors every scene also contains some binary "specific factor" or "noise" ξ, which is characterized by a set of probabilities q_j that its j-th component takes 1. As a result, in general the vector X can be presented in the form

X = \bigvee_{i=1}^{L} S_i f_i \vee \xi   (8.1)
where S is the vector of factor scores of dimensionality L, f_i is the corrupted version of the factor appearing in the current case, and ξ is the specific factor. We assume that factors appear in scenes (i.e., the corresponding score entries S_i take the value 'One' representing factor appearance) independently with probabilities π_i, i = 1, ..., L. Boolean factors can be interpreted as causes that produce sets of observable consequences [12]. They can also be interpreted as representatives of classes, and their corrupted versions as representatives of individuals belonging to the classes. In this case the presentation of a scene in the form (8.1) implies some hierarchy: the factor scores define which classes the individuals presented in the scene belong to, and the currently presented corrupted versions of the factors define the individuals themselves.
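As an illustration of the generative model of Eq. (8.1), the following sketch samples synthetic scenes; the function and parameter names are ours, and the corruption is simplified to a single survival probability p for all factor attributes and a single noise probability q.

```python
# Minimal sketch of sampling scenes from the model of Eq. (8.1).
import numpy as np

def generate_scenes(factors, pi, p, q, n_scenes, seed=None):
    """factors: (L, N) binary factor loadings; pi: (L,) appearance probabilities;
    p: probability that a factor attribute survives; q: specific-factor noise probability."""
    rng = np.random.default_rng(seed)
    L, N = factors.shape
    scenes = np.zeros((n_scenes, N), dtype=np.int8)
    scores = np.zeros((n_scenes, L), dtype=np.int8)
    for m in range(n_scenes):
        present = rng.random(L) < pi                                 # factor scores S_i
        corrupted = factors[present] * (rng.random((present.sum(), N)) < p)
        noise = rng.random(N) < q                                    # specific factor xi
        scenes[m] = np.logical_or(corrupted.any(axis=0), noise)      # Boolean superposition
        scores[m] = present
    return scenes, scores
```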
In spite of the fact that the binary form of data is typical for many fields of our daily life, including social science, marketing, zoology, genetics and medicine, the methods of Boolean factor analysis are only weakly developed. In [9] we proposed a general method of BFA that is based on a Hopfield-like attractor neural network (BFANN). The method exploits the well known property of a Hopfield network to create attractors of the network dynamics from tightly connected neurons. Since the neurons representing a factor are activated simultaneously each time the factor appears in the patterns of the learning set, while neurons representing different factors are activated simultaneously only by chance, the Hebbian learning rule connects the neurons of a factor more tightly than other neurons. Hence the factors can be revealed as attractors of the network dynamics. Here we suggest a general measure of BFA performance based on the classical theory of information. This measure is supposed to be a universal tool for comparing the efficiencies of different methods, as well as for estimating whether the presentation of the original data by Eq. (8.1) is advantageous or not, i.e. whether the method can reveal the existence of a latent structure in the original high-dimensional signal in the form of the proposed model. We demonstrate the efficiency of BFANN on the example of the so-called "Bars Problem" (BP), which is the model benchmark task for methods revealing independent objects in complex scenes. It was a challenge for us to test our method on this task because the BP is a demonstrative and well investigated example of Boolean factorization. Another example considered in the present paper is BFA of textual data. To demonstrate the method's efficiency we used the data of several conferences on Neural Networks available on the WWW [1, 2]. The organization of the remainder of this article is as follows. In Sect. 8.2 we define the novel measure of BFA quality, the informational gain. In Sect. 8.3 we define and analyze the BP task. In Sect. 8.4 we consider the more general case, when BFANN is applied to scientific textual data. Finally, in Sect. 8.5, we discuss our analytical and numerical results.
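The Hebbian intuition behind BFANN described above can be sketched as follows; this is only an illustration of how co-activated components become tightly connected, not the exact learning rule or attractor search procedure of the method in [9].

```python
# Illustrative covariance-style Hebbian learning step for a Hopfield-like network.
import numpy as np

def hebbian_weights(X):
    """X: (M, N) binary learning set. Components that tend to be active together
    obtain strong mutual connections, so groups of them can emerge as attractors."""
    X = np.asarray(X, dtype=float)
    centered = X - X.mean(axis=0)            # subtract component activation rates
    W = centered.T @ centered / X.shape[0]   # co-activation (covariance) strengths
    np.fill_diagonal(W, 0.0)                 # no self-connections
    return W
```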
8.2 Informational Gain If the latent structure of the signal is ignored, then storing the j-th component of a vector X requires h_{0j} = h(p_j) bits of information, where h(x) = -x log_2 x - (1-x) log_2(1-x) is the Shannon function and

p_j = 1 - (1 - q_j) \prod_{i=1}^{L} (1 - \pi_i p_{ij})   (8.2)

is the probability of the j-th component to take 'One'. Storing M vectors X requires

H_0 = M N h_0   (8.3)

bits of information, where h_0 = \sum_{j=1}^{N} h_{0j}/N.
If the factor structure of the signal space is taken into account, then storing the j-th component of a vector X requires

h_{1j} = \prod_{i=1}^{L}(1-\pi_i)\left[h(q_j) + \sum_{i=1}^{L}\frac{\pi_i}{1-\pi_i}\,h\big((1-q_j)(1-p_{ij})\big) + \sum_{k>i}\frac{\pi_i\pi_k}{(1-\pi_i)(1-\pi_k)}\,h\big((1-q_j)(1-p_{ij})(1-p_{kj})\big) + \cdots + \prod_{i=1}^{L}\frac{\pi_i}{1-\pi_i}\,h\Big((1-q_j)\prod_{i=1}^{L}(1-p_{ij})\Big)\right]   (8.4)

bits of information. Storing M vectors X requires

H_1 = H'_1 + H''_1 = M N h_1   (8.5)
bits of information, where h_1 = h'_1 + h''_1, h'_1 = \sum_{j=1}^{N} h_{1j}/N and h''_1 = \sum_{i=1}^{L} h(\pi_i)/N. The first term on the right side of Eq. (8.5) defines the information required to store the vectors X provided that their factor scores are known (the noise), and the second one defines the information needed to store the factor scores themselves. In general, it is also necessary to add the information concerning the factors themselves. To store them one needs H_f = \sum_{i=1}^{L} N h(r_i) bits of information, where r_i is the fraction of 'Ones' in the i-th factor. However, the number of scenes M is supposed to be much larger than the number of factors L. In this case one can neglect the information H_f because it is much smaller compared with H_1. The informational gain is determined by the difference between H_0, Eq. (8.3), and H_1, Eq. (8.5). The relative informational gain we define as

G = (H_0 - H_1)/H_0 = (h_0 - h_1)/h_0.   (8.6)
8
Estimation of Boolean Factor Analysis Performance by Informational Gain
G = 1 − h1/h0 = 1 −
Lh(C/L) = 0.9. Nh(exp(−Cr))
87
(8.7)
It is worth noting that in contrast to the ordinary linear factor analysis, the solution obtained by BFA is reasonable even when the number of factors exceeds the dimensionality of the signal space. Let us suppose that C/L 1 and Cr 1 to demonstrate this. Then according to Eq. (8.7) G 1−
ln(eL/C) Nr ln(e/Cr)
(we used here that h(x) xlog2 (e/x) for x 1). Thus independently of the relation between L and N large informational gain can be achieved only due to increasing N. On the contrary, application of BFA can be sometimes inexpedient even in the case when L N. Let us assume that as before C/L 1 but Cr 1. Then G = 1−
C ln(eL/C) N exp(−Cr)(Cr + 1)
and independently of the relation between L and N, negative informational gain can be achieved only due to increasing C. In the extremity of large C almost all components of vectors X take ’Ones’ and the meaninglessness of BFA for such signals is evident even without any evaluation of informational gain. As shown in Fig. 8.1, negative informational gain can be also achieved due to increase of noise: both decrease of p and increase of q. In the first case, informational gain becomes negative due to decreasing fraction of ’Ones’ in vectors X and consequent decreasing h0 , in the second case - due to dominance of noisy ’Ones’. In all considered cases negative informational gain corresponds to intuitive evidence of the senseless of BFA. This gives the basis for treating informational gain as a general quantitative index of Boolean factor analysis application success and as a measure of profit from this. Let factor scores be calculated by any method. Then probabilities pi j and q j can be easily calculated from Eq. (8.2) which defines q j and equation L
pi j = 1 − (1 − q j) [(1 − pi j )/(1 − πi pi j )] ∏(1 − πi pi j ) i=1
= 1 − (1 − p j)(1 − pi j )/(1 − πi pi j ) where pi j is the probability of j-th component to take ’One’ when i-th factor was observed in the scene. This results in pi, j = (pi j − p j )/(1 − p − πi(1 − pi j )).
(8.8)
The probabilities π_i can evidently be estimated as the frequencies of factor appearance in the learning set, and p_j and \bar{p}_{ij} can be estimated as the frequencies of the j-th component taking 'One' overall and during the presence of the i-th factor detected by S_i.
Fig. 8.1 Relative informational gain G in dependence on factor corruption. p – probability that factor’s component takes ’One’, q – probability that a component of signal space takes ’One’ due to noise. Thick solid lines — all factors are revealed, thin solid lines — one factor is missed, dashed lines — two factors are missed
Note that for the estimation of q_j and p_ij only the factor scores are required. The factor loadings can then be obtained by threshold binarization of p_ij. Figure 8.1 demonstrates how the informational gain changes when BFA is not perfect. Thin lines give the informational gain when one of the factors is missed. As a result, the dimensionality of the vectors S is reduced by one. Since the absence of the missed factor does not influence the frequencies of appearance of the other factors π_i or the frequencies \bar{p}_{ij} and p_j (the last frequency is directly defined by the observed signals and does not depend on any manipulation of them), missing one factor results only in an increase of q_j and a respective decrease of the informational gain. Thus an imperfect BFA results in a decrease of the informational gain. This gives the basis for treating it as a measure of quality for comparing different methods. The relative informational gain can vary in the range from -∞ to 1. A positive value of G means that the encoding of the signals by means of factors is more effective compared to the original encoding. If the value of G is close to zero or negative, then the factors emerged incorrectly, or factor analysis based on the BFA model is meaningless, because the model is inadequate to the internal structure of the analyzed data.
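For small problems the quantities of Sect. 8.2 can be computed directly; the sketch below enumerates all factor subsets for the exact h'_1 term of Eq. (8.4), which is only feasible for small L, and assumes the probabilities have already been estimated (e.g. via Eqs. (8.2) and (8.8)).

```python
# Sketch of the relative informational gain G of Eq. (8.6) for small L.
from itertools import combinations
from math import log2, prod

def shannon(x):
    """Shannon function h(x)."""
    return 0.0 if x <= 0.0 or x >= 1.0 else -x * log2(x) - (1 - x) * log2(1 - x)

def relative_gain(pi, p, q):
    """pi: factor appearance probabilities (length L); p: L x N nested list of
    loading probabilities p_ij; q: noise probabilities (length N)."""
    L, N = len(p), len(q)
    # h_0 from Eqs. (8.2)-(8.3)
    p_j = [1 - (1 - q[j]) * prod(1 - pi[i] * p[i][j] for i in range(L)) for j in range(N)]
    h0 = sum(shannon(x) for x in p_j) / N
    # h_1 = h'_1 + h''_1 from Eqs. (8.4)-(8.5); h'_1 is an expectation over all
    # subsets of simultaneously present factors, hence exponential in L.
    h1 = sum(shannon(x) for x in pi) / N
    for j in range(N):
        for k in range(L + 1):
            for subset in combinations(range(L), k):
                weight = prod(pi[i] if i in subset else 1 - pi[i] for i in range(L))
                p_one = 1 - (1 - q[j]) * prod(1 - p[i][j] for i in subset)
                h1 += weight * shannon(p_one) / N
    return (h0 - h1) / h0

# Small usage example with two factors over four attributes (made-up numbers).
print(relative_gain(pi=[0.3, 0.2],
                    p=[[0.9, 0.9, 0.0, 0.0], [0.0, 0.0, 0.9, 0.9]],
                    q=[0.05] * 4))
```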
8.3 Bars Problem In this task [8] each scene is an n × n pixel image containing several of the 2n possible (one-pixel-wide) horizontal and vertical uncorrupted bars. Pixels belonging and not belonging to a bar take the values 1 and 0, respectively. Bars are chosen for scenes with equal probabilities. At the intersection of a vertical and a horizontal bar the pixel takes the value 1. The Boolean summation of pixels belonging to different bars simulates the occlusion of objects. The goal of the task is to learn all bars as individual objects on the basis of a learning set containing M images. In terms of BFA, bars are factors and each image is a Boolean superposition of factors.
Fig. 8.2 Relative informational gain G in dependence on the size of the learning set M. Thick solid line — “ideal” curve (see text), thin solid line — BFANN, dashed line — theoretical gain
The factor scores take values 1 or 0 depending on whether the corresponding bars appear in the image. Thus each image can be presented in the form (8.1), that is, BP is just a special case of BFA. In the original setting of the BP [8], n = 8, L = 16, and the bars appear in the images with equal probabilities π_i = C/L = 1/8, i.e. C = 2. Figure 8.2 shows the relative informational gain depending on the number M of images submitted to the input. The dashed line marks the theoretical maximal gain G calculated by Eqs. (8.3), (8.5) and (8.6), i.e. the informational advantage that would be obtained if the probabilities π_i = 1/8, p_ij ∈ {0, 1} and q_j = 0 were found with absolute precision. This gain does not depend on M, the size of the learning set. The thick line marks an "ideal" gain, which is calculated with p_ij obtained by Eq. (8.8) when all scores S_i are given exactly, π_i are estimated as the frequencies of factor appearance in the learning set, and p_j and \bar{p}_{ij} as the frequencies of the j-th component taking 'One' overall and during the presence of the i-th factor detected by S_i. The thin line marks the gain calculated from the scores S_i obtained by BFANN. The last two graphs were obtained by averaging over 50 different sets of images. Obviously, the larger the input set of patterns (cases), the more information on the possible combinations of bars is available, so an increase in the number of patterns increases the quality of factor allocation. Already for M = 300 the method gives results close to the ideal. In the next experiment the ability of BFANN to solve the task when a large number of factors is mixed in the signals is evaluated. For this, sets of images consisting of vertical or horizontal one-pixel-wide bars were randomly created, but contrary to the previous setting the number of bars was 32, the probability of bar appearance varied from 2/32 to 8/32, and N = 16 × 16. Figure 8.3 shows the informational gain obtained by BFANN, normalized by the "ideal" gain, depending on the number of images M for C = 2, 4, 6, 8. Results for both the BFANN and the "ideal" cases are obtained by averaging over 50 sets of images. It is evident that BFANN shows good results independently of the signal complexity. This is a great advantage of BFANN compared with other methods of BFA [16, 12].
Fig. 8.3 Informational gain obtained by BFANN in dependence on the size of the learning set M and signal complexity C. Informational gain obtained by BFANN is normalized by an “ideal” gain
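A sketch of generating bars-problem scenes under the setting just described is given below; the parameter defaults follow the original setting (n = 8, C = 2), while the generator itself is our own illustrative code, not the authors' implementation.

```python
# Sketch of the bars-problem benchmark: 2n one-pixel-wide bars combined by Boolean OR.
import numpy as np

def bars_factors(n=8):
    """Return the 2n bar factors as binary vectors of length n*n."""
    factors = []
    for r in range(n):                        # horizontal bars
        img = np.zeros((n, n), dtype=np.int8)
        img[r, :] = 1
        factors.append(img.ravel())
    for c in range(n):                        # vertical bars
        img = np.zeros((n, n), dtype=np.int8)
        img[:, c] = 1
        factors.append(img.ravel())
    return np.array(factors)

def bars_scenes(n=8, C=2, M=400, seed=None):
    """Each bar appears independently with probability C/L; occlusion is modeled by OR."""
    rng = np.random.default_rng(seed)
    factors = bars_factors(n)
    L = factors.shape[0]
    scores = (rng.random((M, L)) < C / L).astype(np.int8)
    scenes = (scores @ factors > 0).astype(np.int8)    # Boolean superposition of the bars
    return scenes, scores
```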
8.4 Textual Documents As a source of textual databases we used the papers published in the proceedings of the IJCNN conferences held in 2003 and 2004 [1] and in the proceedings of the Russian conference on Neuroinformatics held in 2004 and 2005 [2]. The sizes of the considered databases amounted to M = 1042 and M = 189 articles, respectively. After filtering out stop-words and rare words, i.e. those that appeared in less than 3 percent of the articles, the sizes of the dictionaries were N = 3044 and N = 1716 words. The article length, i.e. the number of different words used, varied from 14 to 573 (mean 280, mean 847 before filtering) in the English articles, and from 16 to 514 (mean 184, mean 312 before filtering) in the Russian articles. The documents were represented as vectors, see e.g. [6]. For the IJCNN conferences, BFANN revealed twelve factors, presented in Table 8.1. By the specificity of their words the factors could easily be recognized as corresponding to the topics Neurobiology, Classification, Optimization, Probability, Hardware, Genetic Algorithms, Image Processing, Multilayer Networks, Dynamic Stability, Self-organizing Mapping, and Source Separation. The twelfth factor looks strange: it contains the abbreviations "Fig.N" printed without a space and "IEEE Trans". We found that this factor was created due to the fact that articles from 2003 contained the misprint "Fig.N" without a space two times more frequently than articles from 2004. The term "IEEE Trans" was bound to the terms "Fig.N" due to the fact that the PDF format of the articles from 2003, in contrast to 2004, included the printing of "IEEE Trans" at the end of each page. The appearance of this factor stresses the fact that our method is based on pure statistics; however, the statistics correspond to the nature of the textual database: articles on the same topic tend to contain the same set of words. That is why all the other topics are quite reasonable. We excluded the last factor from further consideration because it is meaningless. Table 8.1 demonstrates that the largest portion of the articles is related to the topic "Multilayer Networks". On average one article contains 2.2 factors.
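The preprocessing described above can be sketched as follows; tokenization and stop-word removal are assumed to have been done beforehand, and the 3-percent threshold is the only filtering step reproduced here.

```python
# Sketch of building a binary document-term matrix with rare-word filtering.
def binary_term_matrix(tokenized_docs, min_doc_fraction=0.03):
    """tokenized_docs: list of lists of words (stop-words already removed)."""
    doc_sets = [set(tokens) for tokens in tokenized_docs]
    counts = {}
    for words in doc_sets:
        for w in words:
            counts[w] = counts.get(w, 0) + 1
    # keep only words appearing in at least min_doc_fraction of the articles
    vocab = sorted(w for w, c in counts.items() if c / len(doc_sets) >= min_doc_fraction)
    index = {w: i for i, w in enumerate(vocab)}
    X = [[0] * len(vocab) for _ in doc_sets]
    for m, words in enumerate(doc_sets):
        for w in words:
            if w in index:
                X[m][index[w]] = 1
    return vocab, X
```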
Table 8.1 Ten top significant terms for factors found in the IJCNN database

No | Factor length | Related articles | Top Words
1 | 33 | 203 | cortex, excitatory, inhibitory, stimulus, spike, synapse, brain, neuronal, sensory, cell
2 | 42 | 299 | classifier, support vector machines, hyperplane, svms, validation, database, label, repository, machine learning, margin
3 | 21 | 333 | gradient, descent, convergence, approximate, guarantee, derivative, formulate, iteration, cost, satisfy
4 | 19 | 183 | estimation, observed, density, variance, Gaussian, mixture, statistical, assumption, likelihood, probability
5 | 21 | 100 | vlsi, voltage, circuit, gate, transistor, cmos, hardware, block, chip, clock
6 | 22 | 75 | mutation, crossover, chromosome, fitness, genetic, population, evolutionary, generation, parent, selection
7 | 14 | 233 | pixel, image, camera, color, object, recognize, extraction, eye, vision, horizontal, vertical
8 | 15 | 450 | multilayer, hidden, perceptron, approximation, testing, backpropagation, estimation, validation, epochs, regression
9 | 12 | 159 | inequality, guarantee, proof, theorem, stability, constraint, bound, convergence, satisfy, derivative
10 | 13 | 199 | self-organizing maps, som, cluster, winner, unsupervised, neighborhood, Euclidean, competitive, dimension, group
11 | 12 | 62 | independent component analysis, blind, mixing, bss, separation, source, mixture, independent, signal processing, speech
12 | 12 | 102 | fig.4, fig.3, fig.5, fig.6, fig.7, fig.2, fig.1, fig.8, fig.9, ieee trans
As for the IJCNN, in the case of Neuroinformatics BFANN revealed twelve factors, shown in Table 8.2. The words corresponding to the factors are translated into English. We related these factors to the topics: Neurobiology, Multilayer Networks, Image Processing, Classification, Optimization, Intellectual Systems, Genetic Algorithms, Recurrent Networks, Mathematics, Intellectual Agents, Time Series, and Clustering. On average one article contained 1.9 factors. Seven topics in Neuroinformatics coincide with those in the IJCNN database (namely Multilayer Networks, Image Processing, Classification, Optimization, Genetic Algorithms, Intellectual Agents, Clustering). The topic "Mathematics" is more general than the topic "Probability" in the IJCNN. The topics "Hardware" and "Dynamic Stability" are absent from Neuroinformatics, while the topics "Intellectual Systems", "Recurrent Networks" and "Time Series" are absent from IJCNN. We cannot be sure that articles on these topics were completely absent from IJCNN; we only know that their presence was too weak to create the factors. On the other hand, the presence of the corresponding factors (as attractors of the network dynamics) in Neuroinformatics could be explained by the fact that it contained a much smaller number of articles and hence factor extraction is less reliable.
Table 8.2 Ten top significant terms for factors found in the Neuroinformatics database

No | Factor length | Related articles | Top Words
1 | 16 | 41 | physiology, nervous, excitatory, synaptical, inhibitory, activation, membrane, stimulus, brain, cortex
2 | 22 | 39 | optimization, hidden, iteration, backpropagation, layer, minimization, neural network, perceptron, sampling, weight
3 | 18 | 29 | brightness, orientation, undertone, two-dimensional, image, radial, vision, uniform, pixel, area
4 | 14 | 32 | recognition, multilayer, classification, class, sampling, perceptron, practical, recommendation, stage, member
5 | 13 | 24 | iteration, convergence, gradient, perceptron, multilayer, stop, optimum, optimization, testing, minimization
6 | 13 | 26 | organisation, apparatus, objective laws, hierarchy, mechanism, intellectual, development, language, conception, understanding
7 | 12 | 19 | selection, independent, stop, genetic, population, sampling, mutation, optimization, criterium, efficiency
8 | 14 | 46 | vector, zero, number, equal, associative, iteration, cycle, perceptron, rule, change
9 | 15 | 30 | geometry, discrete, measurement, plane, differentiation, form, physical, mathematical, boundary, integral
10 | 13 | 24 | intelligence, need, search, selection, presence, operation, mapping, quality, probability, adaptation
11 | 11 | 38 | rule, statistics, finance, vector, knowledge, prognosis, random, prediction, probability, expectation
12 | 11 | 15 | clustering, Kohonen, separation, distribution, center, noise, statistics, partition, distance, selection
The absence of the topic "Hardware" in Neuroinformatics could obviously be explained by the current poor state of microelectronics in Russia. The relative informational gain obtained by BFANN for Neuroinformatics amounted to 0.15, which means that the presentation of these textual data in the form of the proposed model is quite reasonable. It is interesting that the grouping of the articles across scientific sections produced by the Program Committees provided an informational gain of only 0.04.
8.5 Discussion Due to the proliferation of information in textual databases, and especially on the Internet, the issue of minimizing the search space with the proper selection of a keyword list becomes more and more important. Unsupervised word clustering (providing a kind of thesaurus) is a standard approach to performing this task [10]. Based only on word statistics, it helps in overcoming the diversity of synonyms
used by authors of different expertise and background. There are many methods for unsupervised word clustering [13], and some of them are based on the neural network approach [10], [11]. Our challenge was to treat this problem with the general method of BFA of [9], which is based on a Hopfield-like neural network. The idea of the method is to exploit the well-known property of the network to create attractors of the network dynamics in tightly connected groups of neurons. We supposed, first, that each topic is characterized by a set of specific words (often called concepts [7]) which appear in relevant documents coherently, and secondly, that different concepts are present in documents in random combinations. If the textual database is used as a learning set for the Hopfield-like network and the appearance of each word in a document is associated with the activity of the corresponding neuron, then Hebbian learning provides tight connections between words belonging to the same concept, since they appear coherently, and weak connections between words belonging to different concepts, since they appear simultaneously only coincidentally. Hence, concepts can be revealed as the attractors of the network dynamics. In our general formalism [9], a group of coherently activated components of the learning set is called a factor. Thus, concepts are factors of textual data. To evaluate the soundness of the BFA of a multidimensional binary signal, we proposed a universal measure derived from classical information theory results: the informational gain. The efficiency of BFANN was demonstrated using this measure, both when applied to the BP benchmark data and to real scientific textual data. For the well defined BP data, BFANN provides an informational gain close to the maximum achieved when the latent structure of the test images is revealed with maximal accuracy. For the real textual data of scientific origin, the informational gain provided by the method happened to be much higher compared to that based on the human experts' grouping, even though it remained below the value obtained for the BP case. Acknowledgements. This work was partially supported by grants AV0Z10300504, GA CR 205/09/1079, 1M0567 and RFBR 05-07-90049.
References
1. http://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=27487
2. http://www.niisi.ru/iont/ni
3. Barlow, H.: Single units and sensation: a neuron doctrine for perceptual psychology? Perception 1, 371–394 (1972)
4. Barlow, H.B.: Possible principles underlying the transformations of sensory messages. In: Rosenblith, W.A. (ed.) Sensory Communication, pp. 217–234. MIT Press, Cambridge (1961)
5. Barlow, H.B.: Cerebral cortex as model builder. In: Rose, D., Dodson, V.G. (eds.) Models of the Visual Cortex, pp. 37–46. Wiley, Chichester (1985)
6. Berry, M.W., Browne, M.: Understanding Search Engines: Mathematical Modeling and Text Retrieval. SIAM, Philadelphia (1999)
7. Farkas, J.: Documents, concepts and neural networks. In: Proceedings of CASCON 1993, Toronto, Ontario, vol. 2, pp. 1021–1031 (1993)
8. Földiák, P.: Forming sparse representations by local anti-Hebbian learning. Biological Cybernetics 64(2), 165–170 (1990)
9. Frolov, A.A., Husek, D., Muraviev, I.P., Polyakov, P.Y.: Boolean factor analysis by attractor neural network. IEEE Transactions on Neural Networks 18(3), 698–707 (2007)
10. Hodge, V.J., Austin, J.: Hierarchical word clustering - automatic thesaurus generation. Neurocomputing 48, 819–846 (2002)
11. Kohonen, T.: Self-organization of very large document collections: state of the art. In: Proceedings of ICANN 1998, London, England, vol. 1, pp. 65–74 (1998)
12. Lücke, J., Sahani, M.: Maximal Causes for Non-linear Component Extraction. The Journal of Machine Learning Research 9, 1227–1267 (2008)
13. Manning, C.D., Schütze, H.: Foundations of Statistical Natural Language Processing. MIT Press, Cambridge (1999)
14. Marr, D.: A Theory for Cerebral Neocortex. Proceedings of the Royal Society of London, Series B, Biological Sciences 176(1043), 161–234 (1970)
15. Marr, D.: Simple Memory: A Theory for Archicortex. Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences 262(841), 23–81 (1971)
16. Polyakov, P.Y., Husek, D., Frolov, A.A.: Comparison of two neural network approaches to Boolean matrix factorization. In: Proc. First International Conference on Networked Digital Technologies (NDT 2009), Ostrava, Czech Republic, pp. 316–321 (2009)
17. Watanabe, S.: Pattern Recognition: Human and Mechanical. Wiley, New York (1985)
Chapter 9
The Usage of Genetic Algorithm in Clustering and Routing in Wireless Sensor Networks Ehsan Heidari, Ali Movaghar, and Mehran Mahramian
Abstract. Wireless sensor networks consist of a large number of sensor nodes with limited energy, distributed over a limited geographic area. One of the important issues in these networks is increasing the longevity of the network. In a wireless sensor network, a long communication distance between the sensors and the sink uses a lot of energy and decreases the longevity of the network. Since clustering can decrease the energy requirements of wireless sensor networks, in this article we pursue an intelligent technique for forming and managing clusters. By clustering the wireless sensor network and using a genetic algorithm we can decrease the communication distance to some extent, so the longevity of the network increases. The simulation results, obtained with MATLAB, show that the suggested algorithm can find a suitable solution very fast. Keywords: Wireless Sensor Networks, Longevity of Network, Communication Distance, Clustering, Genetic Algorithms.
Ehsan Heidari
Islamic Azad University (Doroud branch)
Ali Movaghar
Department of Computer Engineering, Sharif University of Technology, Tehran, Iran
Mehran Mahramian
Informatics Services Corporate, Tehran, Iran
9.1 Introduction
Recent advances in micro-electro-mechanical systems (MEMS) technology, wireless communications, and digital electronics have enabled the development of low-cost, low-power and multifunctional sensor nodes that are small in size and communicate over short distances [1], [14], [8]. These tiny sensor nodes, which consist of sensing, data processing, and communicating components, leverage the idea of sensor networks based on the collaborative effort of a large number of nodes. A wireless sensor network usually has a base station that can have a radio link with the other sensor
The data of each sensor node is sent to the base station, either directly or via other nodes. The base station then gathers and processes all the information for parameters such as temperature, pressure, and humidity, so that the real value of each parameter can be estimated. The design of wireless sensor networks is subject to constraints such as the small size, low weight, limited energy, and low price of the sensors [4]. Among these factors, the amount and manner of energy consumption is especially important. Communication protocols play an important role in the efficiency and lifetime of wireless sensor networks [2], so designing energy-efficient protocols is a necessity. Such protocols not only decrease the total energy consumed in the network but also distribute the energy load evenly among the nodes, thereby increasing the network lifetime. Among the available protocols, hierarchical protocols achieve substantial energy savings [15]. In these protocols the network is divided into several clusters, and in each cluster one node is designated as the cluster head. The tasks of the cluster heads are gathering the data sent by the nodes of the cluster, removing redundant data, aggregating the data, and forwarding the result to the sink. In these protocols, the choice of cluster heads and the data aggregation strongly influence the scalability and lifetime of the network. Up to now, several clustering protocols such as LEACH [7], [6], TEEN [12], APTEEN [11], DBS [3], EPMPAC [10], FTPASC [9] and SOP [16] have been proposed for wireless sensor networks. In this article, we assume that the sensor network is static, that the sensors are distributed uniformly in the environment far from the sink, and that every node is able to act as a cluster head. The position of each sensor can be obtained via GPS. Clustering a network so as to keep the total communication distance to a minimum is an NP-hard problem. A genetic algorithm is an efficient search algorithm that imitates the process of natural evolution, and genetic algorithms have been applied successfully to many NP-hard problems such as optimization and the Traveling Salesman Problem (TSP). In this article, we suggest a genetic algorithm for clustering the sensor nodes and selecting a minimal set of cluster heads. In this way the total communication distance and the energy consumption decrease effectively, and the network lifetime increases. The rest of the article is organized as follows. Previous clustering methods are briefly reviewed in Sect. 9.2. Sect. 9.3 introduces some basic concepts of genetic algorithms. Sect. 9.4 presents the suggested method, Sect. 9.5 discusses the simulation results, and Sect. 9.6 concludes.
9.2 Related Work Heinzelman et al. [6], [7] describe the LEACH protocol, a hierarchical, self-organized, cluster-based approach for monitoring applications. The data collection
area is randomly divided into several clusters, where the number of clusters is predetermined. Based on time division multiple access (TDMA), the sensor nodes transmit data to the cluster heads, which aggregate and transmit the data to the base station. A new set of cluster heads is chosen after specific time intervals. A node can be re-elected only after all the remaining candidates have been elected. All of the above algorithms assume a fixed number of clusters. When the number of clusters is not known in advance, a much harder problem must be solved. Our approach uses a genetic algorithm to determine both the number and the location of the cluster heads, and at the same time minimizes the communication distance in the sensor network.
9.3 Genetic Algorithms Genetic algorithms [5, 13] are adaptive methods that may be used to solve search and optimization problems. They are based on the genetic processes of biological organisms: according to the principles of natural selection and survival of the fittest, natural populations evolve over many generations. By mimicking this process, genetic algorithms are able to evolve solutions to real-world problems, provided the problems have been suitably encoded.
9.4 Our Genetic Algorithm The base station uses a genetic algorithm to create energy-efficient clusters for a given number of transmissions. Each node is represented as one bit of a chromosome: head and member nodes are represented as 1s and 0s, respectively. A population consists of several chromosomes, and the best chromosome is used to generate the next population. Based on survival of the fittest, the population is transformed into the next generation. Initially, each fitness parameter is assigned an arbitrary weight; after every generation, the fittest chromosome is evaluated and the weights of the fitness parameters are updated accordingly. The outcome of the genetic algorithm identifies suitable clusters for the network. The base station broadcasts the complete network details to the sensor nodes. These broadcast messages include the query execution plan, the number of cluster heads, the members associated with each cluster head, and the number of transmissions for this configuration. All the sensor nodes receive the packets broadcast by the base station and the clusters are created accordingly; this completes the cluster formation phase, which is followed by the data transfer phase.
9.4.1 Problem Representation Finding appropriate cluster heads is critically important for minimizing the distance. We use a binary representation in which each bit corresponds to one sensor node.
“1” means that the corresponding sensor is a cluster-head; otherwise, it is a regular node. The initial population consists of randomly generated individuals, and the genetic algorithm is used to select the cluster-heads, as sketched below.
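As a rough illustration of this encoding, the following Python sketch generates a random initial population; the node count and population size are taken from the experimental settings in Table 9.1, and all function names are ours rather than the authors':

```python
import random

NUM_NODES = 200   # N, total sensor nodes (from the experiments in Sect. 9.5)
POP_SIZE = 100    # population size (from Table 9.1)

def random_chromosome(num_nodes):
    """One bit per node: 1 = cluster-head, 0 = regular node."""
    return [random.randint(0, 1) for _ in range(num_nodes)]

def random_population(pop_size, num_nodes):
    """Initial population of randomly generated individuals."""
    return [random_chromosome(num_nodes) for _ in range(pop_size)]

population = random_population(POP_SIZE, NUM_NODES)
```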
9.4.2 Crossover and Mutation Used in This Article
9.4.2.1 Crossover
In this article, we use one-point crossover. If a regular node becomes a cluster-head after crossover, all other regular nodes check whether they are nearer to this new cluster-head and, if so, switch their membership to it; the new head itself is detached from its previous head. If a cluster-head becomes a regular node, all of its members must find new cluster-heads. Every node in the network is therefore either a cluster-head or a member of a cluster-head, as shown in Fig. 9.1; a sketch of this operator is given after the figure.
First chromosome:  11001 01100000110   →   Offspring: 11001 10110001110
Second chromosome: 10010 10110001110   →   Offspring: 10010 01100000110
Fig. 9.1 A single-point crossover
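The following sketch illustrates the operator described above; the membership-repair step (reassigning each regular node to its nearest cluster-head) is our reading of the text, and the `positions` argument (node coordinates) is a hypothetical input:

```python
import math
import random

def one_point_crossover(parent1, parent2):
    """Swap the tails of two chromosomes at a random cut point (cf. Fig. 9.1)."""
    cut = random.randint(1, len(parent1) - 1)
    return parent1[:cut] + parent2[cut:], parent2[:cut] + parent1[cut:]

def repair_membership(chromosome, positions):
    """After crossover, attach every regular node to its nearest cluster-head."""
    heads = [i for i, bit in enumerate(chromosome) if bit == 1]
    return {i: min(heads, key=lambda h: math.dist(positions[i], positions[h]))
            for i, bit in enumerate(chromosome) if bit == 0 and heads}
```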
9.4.2.2 Mutation
The mutation operator is applied to each bit of an individual with a probability equal to the mutation rate. When applied, a bit whose value is 0 is mutated into 1 and vice versa, as shown in Fig. 9.2; a code sketch follows the figure.
Original offspring: 1100110110001110
Mutated offspring:  1100110010001110
Fig. 9.2 An example of mutation
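A corresponding sketch of the bit-flip mutation; the mutation rate of 0.004 is taken from Table 9.1, and the function name is ours:

```python
import random

def mutate(chromosome, mutation_rate=0.004):
    """Flip each bit independently with probability mutation_rate (cf. Fig. 9.2)."""
    return [1 - bit if random.random() < mutation_rate else bit
            for bit in chromosome]
```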
9.4.3 Selection The selection process determines which of the chromosomes from the current population will mate (crossover) to create new chromosomes. These new chromosomes join the existing population, and this combined population is the basis for the next selection. Individuals (chromosomes) with better fitness values have better chances of being selected. There are several selection methods, such as:
“Roulette-Wheel” selection, “Rank” selection, “Steady state” selection, and “Tournament” selection. In steady-state selection, which is used in this article, chromosomes with higher fitness than the others are selected to produce new offspring; the chromosomes with the lowest fitness are then removed and replaced by the new offspring. A sketch of this scheme is given below.
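A hedged sketch of one steady-state generation as we read it, reusing the crossover and mutation sketches above; the choice to replace exactly the two least-fit chromosomes is our simplification, not a detail stated in the text:

```python
def steady_state_step(population, fitness_fn):
    """One steady-state generation: the two fittest chromosomes mate, and their
    offspring replace the two least-fit chromosomes in the population."""
    ranked = sorted(population, key=fitness_fn, reverse=True)
    child1, child2 = one_point_crossover(ranked[0], ranked[1])
    return ranked[:-2] + [mutate(child1), mutate(child2)]
```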
9.4.4 Fitness Parameters The total transmission distance is the main factor we need to minimize. In addition, the number of cluster heads can factor into the function: given the same distance, fewer cluster heads result in greater energy efficiency.
• The total transmission distance to sink: the total transmission distance, TD, is the sum of the distances from all sensor nodes to the sink.
• Cluster distance (regular nodes to cluster head, cluster head to sink): the cluster distance, RCSD, is the sum of the distances from the member nodes to their cluster head and of the distances from the cluster heads to the sink.
• Transfer energy (E): the transfer energy, E, represents the energy consumed to transfer the aggregated message from the cluster to the sink. For a cluster with m member nodes, the cluster transfer energy is defined as

E = \sum_{i=1}^{m} TE_{CH_i} + (m \cdot E_R) + TE_{CH_s}    (9.1)

The first term of Eq. (9.1), the sum of TE_{CH_i} over the m member nodes, is the energy consumed to transmit messages from the m member nodes to the cluster head. The second term, m · E_R, is the energy consumed by the cluster head to receive the m messages from the member nodes. The third term, TE_{CH_s}, is the energy needed to transmit from the cluster head to the sink. We denote the number of cluster heads by TCH and the total number of nodes by N.
9.4.5 Fitness Function The energy used for conveying the message from the cluster to the sink and the transmission distance are the main quantities that we need to minimize. In addition, we can include the number of clusters in our function, since it affects the energy in the same way as the transmission distance: cluster heads consume more energy than the other nodes. The chromosome fitness F is therefore a function of all the above fitness parameters, defined as in Eq. (9.2):

F = \frac{1}{E} + (TD - RCSD) + (N - TCH)    (9.2)
Here E is the energy required for sending the information from the clusters to the sink, TD is the total distance from all the nodes to the sink, RCSD is the sum of the distances from the regular nodes to their cluster heads plus the distances from the cluster heads to the sink, and N and TCH are the total number of nodes and the number of cluster heads, respectively. For a given network, N and TD are fixed, whereas E, TCH, and RCSD vary with the clustering. Lower energy consumption, a shorter transmission distance, or fewer cluster heads all lead to a higher fitness value. The first part of Eq. (9.2) rewards reduced energy consumption, the second part rewards a reduced transmission distance, and the last part rewards a reduced number of cluster heads. Our genetic algorithm tries to maximize the fitness value in order to find a good solution. A sketch of this computation follows.
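A sketch of the fitness evaluation of Eq. (9.2) under simplifying assumptions of our own: transmit energy is modelled as proportional to distance, receive energy as a constant per message, and all helper names and parameters are hypothetical:

```python
import math

def fitness(chromosome, positions, sink, e_tx=1.0, e_rx=0.5):
    """Eq. (9.2): F = 1/E + (TD - RCSD) + (N - TCH)."""
    n = len(chromosome)
    heads = [i for i, bit in enumerate(chromosome) if bit == 1]
    if not heads:
        return 0.0                          # no cluster heads: treat as an invalid individual
    # TD: total distance from every node to the sink.
    td = sum(math.dist(positions[i], sink) for i in range(n))
    rcsd, energy = 0.0, 0.0
    for h in heads:
        members = [i for i, bit in enumerate(chromosome) if bit == 0
                   and min(heads, key=lambda j: math.dist(positions[i], positions[j])) == h]
        d_members = sum(math.dist(positions[i], positions[h]) for i in members)
        d_head = math.dist(positions[h], sink)
        rcsd += d_members + d_head
        # Eq. (9.1), with transmit energy assumed proportional to distance.
        energy += e_tx * d_members + e_rx * len(members) + e_tx * d_head
    return 1.0 / max(energy, 1e-9) + (td - rcsd) + (n - len(heads))
```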
9.5 Genetic Algorithm Experiments In this section, the performance of the suggested method is evaluated using MATLAB, and the clustering produced by the LEACH algorithm is compared with that of the suggested method. In our experiment we assume that the base station is located at some distance outside the wireless sensor network area. The number of sensor nodes in this experiment is 200. The parameters used in the experiment are shown in Table 9.1.

Table 9.1 The parameters used in this experiment

Parameter        Value
N                200
Population size  100
Selection type   Steady state
Crossover type   One point
Crossover rate   0.68
Mutation rate    0.004
Figure 9.3 shows the number of live nodes in the network over 1100 rounds. As the figure shows, the sensor nodes in our method stay alive longer than the sensor nodes in LEACH. Figure 9.4 shows the total energy consumed in the network during the simulation time. Based on the simulation results, more than 18 percent of the energy consumption is saved.
Fig. 9.3 The number of live nodes
Fig. 9.4 The whole consumed energy in a network
Figures 9.5 and 9.6 show the minimum distance and the number of cluster heads, respectively. They show that, after about 130 generations, our method finds a suitable clustering solution very quickly. Fig. 9.5 The minimum distance
Fig. 9.6 The number of cluster heads
9.6 Conclusion In this article, a method for clustering the network based on a genetic algorithm was introduced. The basis of this method is the intelligent clustering of the network sensors in a way that decreases the communication distance. The experimental results in this article show that the suggested method is an efficient solution to the problem of choosing the clusters and their cluster heads: a suitable number of cluster heads is determined by the genetic algorithm through a suitable fitness function. Here all the nodes of the network were assumed to be fixed, but the method can be adapted to networks with mobile nodes as well. Other learning techniques could also be used for determining the clusters.
References 1. Akyildiz, I.F., Su, W., Sankarasubramaniam, Y., Cayirci, E.: Wireless sensor networks: a survey. Journal of Computer Networks, 38, 393–422 (2002) 2. Al-Karaki, J.N., Kamal, A.E.: Routing Techniques in Wireless Sensor Networks: A Survey. IEEE Journal of Wireless Communications 11(6), 6–28 (2004) 3. Amini, N., Fazeli, M., Miremadi, S.G., Manzuri, M.T.: Distance-Based Segmentation: An Energy-Efficient Clustering Hierarchy for Wireless Microsensor Networks. In: Proc. of the 5th Annual Conf. on Communication Networks and Services Research (CNSR 2007), Fredericton, Canada, May 2007, pp. 18–25 (2007) 4. Calhoun, B.H., Daly, D.C., Verma, N., Finchelstein, D.F., Wentzloff, D.D., Wang, A., Cho, S., Chandrakasan, A.P.: Design Considerations for Ultra-Low Energy Wireless Microsensor Nodes. IEEE Trans. on Computers 54(6), 727–740 (2005) 5. Rogers, A.: CM2408 – Symbolic Al Lecture 8 – Introduction to Genetic Algorithms (December 2002)
6. Heinzelman, W.R., Chandrakasan, A.P., Balakrishnan, H.: An Application-Specific Protocol Architecture for Wireless Microsensor Networks. IEEE Trans. on Wireless Communications 1(4), 660–670 (2002) 7. Heinzelman, W.R., Chandrakasan, A.P., Balakrishnan, H.: Energy-Efficient Communication Protocol for Wireless Microsensor Networks. In: Proc. of the 33rd IEEE Int. Conf. on System Sciences, Honolulu, USA, January 2000, pp. 1–10 (2000) 8. Katz, R.H., Kahn, J.M., Pister, K.S.J.: Mobile Networking for Smart Dust. In: Proc. of the 5th Annual ACM/IEEE Int. Conf. on Mobile Computing and Networking (MobiCom 1999), Seattle, USA, August 1999, pp. 350–355 (1999) 9. Khadivi, A., Shiva, M.: FTPASC: A Fault Tolerant Power Aware Protocol with Static Clustering for Wireless Sensor Networks. In: Proc. of IEEE Int. Conf. on Wireless and Mobile Computing, Networking and Communications, Montreal, Canada, June 2006, pp. 397–401 (2006) 10. Khadivi, A., Shiva, M., Yazdani, N.: EPMPAC: an efficient power management protocol with adaptive clustering for wireless sensor networks. In: Proc. of Int. Conf. on Wireless Communications, Networking and Mobile Computing, China, Sepember 2005, pp. 1154–1157 (2005) 11. Manjeshwar, A., Agarwal, D.P.: APTEEN: A Hybrid Protocol for Efficient Routing and Comprehensive Information Retrieval in Wireless Sensor Networks. In: Proc. of the IEEE IPDPS, Fort Lauderdale, USA, April 2002, pp. 195–202 (2002) 12. Manjeshwar, A., Agarwal, D.P.: TEEN: A Routing Protocol for Enhanced Efficiency in Wireless Sensor Networks. In: Proc. of the IEEE IPDPS, San Francisco, USA, April 2001, pp. 23–26 (2001) 13. Dianati, M., Song, I., Treiber, M.: An Introduction to Genetic Algorithms and Evolution Strategies, University of Waterloo, Canada (2002) 14. Min, R., Bhardwaj, M., Cho, S., Shih, E., Sinha, A., Wang, A., Chandrakasan, A.: Low Power Wireless Sensor Networks. In: Proc. of Internation Conf. on VLSI Design, Bangalore, India, January 2001, pp. 205–210 (2001) 15. Younis, O., Krunz, M., Ramasubramanian, S.: Node Clustering in Wireless Sensor Networks: Recent Developments and Deployment Challenges. IEEE Network (special issue on wireless sensor networking) 20(3), 20–25 (2006) 16. Subramanian, L., Katz, R.H.: An Architecture for Building Self Configurable Systems. In: Proc. of IEEE/ACM Workshop on Mobile Ad Hoc Networking and Computing, Boston, USA, August 2000, pp. 63–73 (2000)
Chapter 10
Automatic Topic Learning for Personalized Re-ordering of Web Search Results Orland Hoeber and Chris Massie
Abstract. The fundamental idea behind personalization is to first learn something about the users of a system, and then use this information to support their future activities. When effective algorithms can be developed to learn user preferences, and when the methods for supporting future actions are achievable, personalization can be very effective. However, personalization is difficult in domains where tracking users, learning their preferences, and affecting their future actions is not obvious. In this paper, we introduce a novel method for providing personalized re-ordering of Web search results, based on allowing the searcher to maintain distinct search topics. Search results viewed during the search process are monitored, allowing the system to automatically learn about the users’ current interests. The results of an evaluation study show improvements in the precision of the top 10 and 20 documents in the personalized search results after selecting as few as two relevant documents. Keywords: Machine Learning, Web Search, Personalization.
10.1 Introduction One potential problem with current Web search technologies is that the results of a search often do not consider the current interests, needs, and preferences of the searcher. The searcher's opportunity to affect the outcome of a search occurs only as they craft the query. The results for the same query submitted by two different people are the same, regardless of the differences between these people and what they were actually seeking. This paper describes a method for automatically capturing information about the current interests of individual searchers, using this information to generate a personalized re-ordering of the search results. This solution is implemented in a prototype system called miSearch.
Orland Hoeber · Chris Massie: Department of Computer Science, Memorial University, St. John's, NL, A1B 3X5, Canada; e-mail: {hoeber,massiec}@cs.mun.ca
When modern information retrieval systems fail, in most cases it is due to difficulties with the system understanding an aspect of the topic being searched [2]. Clearly, the short queries that are common in searching the Web [7, 16] provide very little information upon which the search engine can base its results. The solution that has been employed by the major Web search engines is to return a large set of search results and let the users decide what is relevant and what is not. Our goal in this research is to capture additional information about what users think is relevant to their active search goals, and subsequently use this to re-order the search results. This work is inspired by the traditional information retrieval approach to relevance feedback [15], as well as the concept of “information scent” [13]. Personalization within the context of this research is defined as “the task of making Web-based information systems adaptive to the needs and interests of individual users” [12]. This definition highlights the two fundamental difficulties in personalization: how do we capture the interests of users in a non-obtrusive manner; and how do we adapt the system such that these interests are promoted and supported. With respect to miSearch, the first of these difficulties is addressed through automatic topic learning; the second is addressed through the personalized re-ordering of Web search results. A novel aspect of this work is the support it provides for users to create and maintain multiple search topics, such that the interests the searcher shows in one topic does not adversely affect their interests in other topics.
10.2 Related Work Others have explored methods for personalization within the domain of Web search, including work from the top search providers, as well as in the academic literature. The Google search engine currently includes a personalization component that automatically learns searcher preferences through their search activities. The outcome is that searchers who have logged into the system are provided with a combination of personalized search results and recommendations [8]. Researchers at Yahoo! have investigated the use of data mining techniques on both the query and click data stored in their search engine logs [18]. The primary purpose in their work was to assess the potential for personalization. Although they found that it took a few hundred queries for distinct topics to become apparent, repeated site clicks were shown to be useful in identifying special interest topics. Ahn et al. [1] developed a system directed at the exploratory search activities of expert searchers. Users can create and maintain notes about their search activities, from which a vector-based task model is automatically generated. The searcher may choose to view the search results sorted by relevance to the query, relevance to the task model, or relevance to both the query and the task model. Other features include the representation of the task model as a tag cloud, the creation of personalized snippets, and the highlighting of important terms in the snippets (using an approach similar to that in [5]). The utility of the proposed method was demonstrated via an in-depth user study. Ma et al. [10] developed a method that maps user interests (from documents such as resumes) to categories in the Open Directory Project (ODP) [11]. These
categories are then used to generate text classifiers, which are employed as part of the search process. When a user conducts a Web search, the full textual contents of the documents are retrieved and classified with respect to the categories in which the users have shown interest. The authors found the system to work well when seeking a small set of documents. Sugiyama et al. [17] captured both long-term and short-term preferences based on the user’s Web browsing activities. Gaps in the interest profiles are automatically filled based on matches to similar users. Clusters of all the user profiles on the system are generated; when conducting a search, the results are re-sorted based on their similarity to the clusters most similar to the searcher’s profile. The authors found the system to be quite effective once sufficient information was gained to train the preference models. A common theme among these Web search personalization methods is the use of complex techniques to capture the searcher’s interests, and subsequently personalize the search results. In many cases, this move towards more complexity is necessitated by the single personalized profile maintained for each user. However, since searchers will commonly seek information on numerous topics that may have little relationship to one another, we suggest that a single profile is not appropriate. The method employed in our research (and implemented in miSearch) allows the searchers to maintain multiple topics of interest, choosing the appropriate one based on their current search activities. As a result, we are able to employ much simpler methods for capturing, inferring, and storing user interest in these topics, along with personalizing the order of the search results. The details of our approach are provided in the following sections.
10.3 Multiple Search Topics Since people who search the Web have the potential to be seeking information on many different topics (sometimes simultaneously), creating a personalized model of their interests as a single collection of information may not be very effective. In some cases, a searcher may show particular interest in documents that contain a certain term, whereas in other cases, the same searcher may find all the documents that use this term irrelevant. While it may be possible to deduce when the searcher has changed their search interests from one topic to another, a more accurate method is to have the user explicitly indicate their current topic of interest as an initial step in the search process. Such topics will form high-level concepts that provide a basis for collecting information about the searcher's preferences (as described in Sect. 10.4), and guide the subsequent personalized re-ordering of the search results (as described in Sect. 10.5). When using miSearch, at any time during the search process a new topic can be created by the user. Similarly, the user may choose to switch to a previously created topic whenever they like. Since this process is not normally performed as part of a Web search, the goal is to make it as unobtrusive as possible. As such, we collect
minimal information when creating new topics of interest, and allow the searcher to switch topics with just a simple selection from the topic list.
10.4 Automatic Topic Learning When presented with a list of potentially relevant documents (e.g., a list of Web search results), searchers use many different methods for choosing which documents to view. Some scan the titles of the documents; others carefully read and consider the title and snippet; still others consider the source URL of the document. Regardless of what information searchers use, when they choose specific documents to view, there must be “something” in the information they considered that gave them a cue that the document might be relevant. The goal of the automatic topic learning process is to capture this “information scent” [13]. As users of miSearch select documents to view from the search results list, the system automatically monitors this activity, learning the preferences of each user with respect to their currently selected topic. Rather than sending users directly to the target documents when links in the search results lists are clicked, the system temporarily re-directs users to an intermediate URL which performs the automatic topic learning based on the details of the search result that was clicked. The system then re-directs the Web browsers to the target documents. This process occurs quickly enough so as to not introduce any noticeable delay between when a search result is clicked and when the target document begins to load. The automatic topic learning algorithm uses a vector-based representation of the topic, with each dimension in the vector representing a unique term that appeared in the title, snippet, or URL of the search result clicked by the searcher. Selecting to view documents provides positive evidence of the potential relevance of the terms used to describe those documents; the topic profile is incrementally updated based on this evidence of relevance. The algorithm takes as input the title, snippet, and URL of the clicked search result, as well as the searcher’s currently selected topic of interest. The outcome of the algorithm is an update to the topic profile vector stored in the database. The steps of the algorithm are as follows: 1. Load the topic profile vector from the database. 2. Combine the title, snippet, and URL together into a document descriptor string. 3. Split the document descriptor string into individual terms based on non-word characters. 4. Remove all terms that appear in the stop-words list and words that are shorter than three characters. 5. Stem the terms using Porter’s stemming algorithm [14]. 6. Generate a document vector that represents the frequency of occurrence of each unique stem. 7. Add the document vector to the topic profile vector using vector addition. 8. Save the updated topic profile vector to the database.
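A minimal sketch of steps 2–7 above, assuming a simple non-word tokenizer, an illustrative stop-word list, and the NLTK implementation of Porter's stemmer as a stand-in for the unspecified one used in miSearch; loading and saving the profile (steps 1 and 8) are omitted, and all names are ours:

```python
import re
from collections import Counter

from nltk.stem import PorterStemmer   # assumed stand-in for the stemmer used in miSearch

STOP_WORDS = {"the", "and", "for", "with", "that", "this"}   # illustrative only
stemmer = PorterStemmer()

def document_vector(title, snippet, url):
    """Steps 2-6: build a stem-frequency vector for one clicked search result."""
    descriptor = " ".join([title, snippet, url])
    terms = re.split(r"\W+", descriptor.lower())
    terms = [t for t in terms if len(t) >= 3 and t not in STOP_WORDS]
    return Counter(stemmer.stem(t) for t in terms)

def update_topic_profile(profile, title, snippet, url):
    """Step 7: add the document vector to the topic profile by vector addition."""
    profile.update(document_vector(title, snippet, url))
    return profile
```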
10.5 Personalized Re-ordering of Web Search Results Once a topic profile vector has been generated, it is possible to use this information to re-order the Web search results. The goal of this re-ordering is to move those documents from the current search results list that are most similar to the topic profile to the top of the list. The premise is that the title, snippet, and URL of relevant search results will be similar to previously selected documents (as modeled in the topic profile vector). The algorithm for re-ordering the search results receives as input the title, snippet, and URL of each document in the search results list, along with the current search topic selected by the searcher. The steps of the algorithm are as follows: 1. Load the topic profile vector from the database. 2. For each document in the search results list: a. Combine the title, snippet, and URL together into a document descriptor string. b. Split the document descriptor string into individual terms based on non-word characters. c. Remove all terms that appear in the stop-words list and words that are shorter than three characters. d. Stem the terms using Porter’s stemming algorithm [14]. e. Generate a document vector that represents the frequency of occurrence of each unique stem. f. Calculate the similarity between the document vector and the topic profile vector using Pearson’s product-moment correlation coefficient [4]. g. Save the value of the similarity measure with the document. 3. Re-sort the search results list in descending order based on the similarity measure.
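A sketch of the re-ordering step above, reusing the `document_vector` helper from the previous sketch and SciPy's Pearson correlation as a stand-in for the unspecified implementation in miSearch; all names are ours:

```python
from scipy.stats import pearsonr

def similarity(profile, doc_vector):
    """Pearson's product-moment correlation over the union of observed stems."""
    stems = sorted(set(profile) | set(doc_vector))
    x = [profile.get(s, 0) for s in stems]
    y = [doc_vector.get(s, 0) for s in stems]
    if len(stems) < 2 or len(set(x)) < 2 or len(set(y)) < 2:
        return 0.0                      # correlation is undefined for constant vectors
    r, _ = pearsonr(x, y)
    return r

def personalize(results, profile):
    """Re-sort (title, snippet, url) tuples by similarity to the topic profile."""
    scored = [(similarity(profile, document_vector(t, s, u)), (t, s, u))
              for t, s, u in results]
    return [doc for _, doc in sorted(scored, key=lambda pair: pair[0], reverse=True)]
```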
While it would be possible to re-apply the personalized re-sorting technique as each document is viewed (and the topic profile is updated), it has been shown that such instant update strategies are not well-received by users, even when they provide more accurate results [3]. Clearly, usability issues arise when the search results are re-ordered interactively as a user selects to view a document and directs their attention away from the search results list. Instead, miSearch performs the personalized re-ordering of the search results only as each page of search results is loaded, or when users select new topics or re-select the current topic.
10.6 User’s Model of Search The user’s model of search when using miSearch is altered slightly from the normal Web search procedures. In particular, users must first login, and subsequently select (or create) a topic prior to initiating a search. The login feature allows the system to keep track of multiple simultaneous users; the topic selection supports the personalization based on multiple topic profiles. The remaining process of evaluating
the search results list and selecting potentially relevant documents to view remains unchanged.
The features described in the paper have been implemented in miSearch. The system currently uses the search results provided by the Yahoo! API [19], displaying fifty search results per page in order to provide a reasonable number of search results to personalize. Figure 10.1 shows a screenshot of the system. A public beta-version is currently available at http://uxlab.cs.mun.ca/miSearch/; readers of this paper are invited to create accounts and use the system for their Web search needs.
Fig. 10.1 A screenshot of the miSearch system. Note the personalized order of the search results based on previous selection of relevant documents.
10.7 Evaluation In order to measure the effectiveness of the Web search personalization methods described in this paper, twelve queries were selected from the TREC 2005 Hard Track (http://trec.nist.gov/data/t14_hard.html) as the basis for the evaluation. In general, the queries in this collection represent topics that are somewhat ambiguous, resulting in search results that contain a mix of relevant and non-relevant documents. Queries were chosen to provide a range of ambiguity. The selected queries and a brief description of the information need are listed in Table 10.1.
Table 10.1 Queries selected from the TREC 2005 Hard Track for the evaluation of miSearch

ID   Query                               Description
310  “radio waves and brain cancer”      Evidence that radio waves from radio towers or car phones affect brain cancer occurrence.
322  “international art crime”           Isolate instances of fraud or embezzlement in the international art trade.
325  “cult lifestyles”                   Describe a cult by name and identify the cult members' activities in their everyday life.
354  “journalist risks”                  Identify instances where a journalist has been put at risk (e.g., killed, arrested or taken hostage) in the performance of his work.
363  “transportation tunnel disasters”   What disasters have occurred in tunnels used for transportation?
367  “piracy”                            What modern instances have there been of old fashioned piracy, the boarding or taking control of boats?
372  “native american casino”            Identify documents that discuss the growth of Native American casino gambling.
378  “euro opposition”                   Identify documents that discuss opposition to the introduction of the euro, the european currency.
397  “automobile recalls”                Identify documents that discuss the reasons for automobile recalls.
408  “tropical storms”                   What tropical storms (hurricanes and typhoons) have caused significant property damage and loss of life?
625  “arrests bombing wtc”               Identify documents that provide information on the arrest and/or conviction of the bombers of the World Trade Center (WTC) in February 1993.
639  “consumer on-line shopping”         What factors contributed to the growth of consumer on-line shopping?
For each of the queries, the top 50 search results provided by the Yahoo! API were retrieved and cached. The two authors of this paper, along with a third colleague, independently assigned relevance scores on a four-point relevance scale to each search result. Only the information provided by the search engine (title, snippet, and URL) was considered when assigning relevance scores. The possibility that a relevant document may not appear relevant in the search results list, or vice versa, is beyond the scope of this research. Discussions and consensus among the three evaluators resulted in ground truth relevance scores for each of the 50 search results produced for the twelve test queries. In order to determine the quality of a particular ordering of the search results, the precision metric was used. Precision is defined as the ratio of relevant documents retrieved to the total number of documents retrieved. For the purposes of this study, we considered any document assigned a score of 3 or 4 on the 4-point relevance scale as “relevant”. Precision was measured at two different intervals within the search results set: P10 which measures the precision among the first 10 documents, and P20 which measures the precision among the first 20 documents. While it would be possible to measure the precision over larger sets of documents, the opportunity for improvements diminishes as we approach the size of search results set used in this evaluation. Note that while it is common in information retrieval research to also
use the recall metric (ratio of relevant documents retrieved to the total relevant documents in the collection), the calculation of this metric with respect to Web search is not feasible due to the immense size of the collection (billions of documents) [9].
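A small helper illustrating how the P10 and P20 measurements described above can be computed from the ground-truth scores, assuming the 4-point scale with scores of 3 or 4 counted as relevant; the example scores are purely illustrative:

```python
def precision_at_k(relevance_scores, k, threshold=3):
    """Fraction of the first k results whose score is at or above the threshold."""
    top_k = relevance_scores[:k]
    return sum(1 for score in top_k if score >= threshold) / k

# Example: scores for the first 20 results of one query (illustrative values).
scores = [4, 2, 3, 1, 4, 3, 2, 1, 3, 4, 2, 2, 3, 1, 4, 2, 1, 3, 2, 4]
p10 = precision_at_k(scores, 10)   # 0.6
p20 = precision_at_k(scores, 20)   # 0.5
```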
10.7.1 Hypotheses Within this evaluation method, we use the precision achieved by the original order of the search results (as retrieved using the Yahoo! API) as the baseline performance measure. The two experimental conditions represent the performance of the system after selecting the first two relevant documents, and after selecting the first four relevant documents. Using the two levels of precision measurement discussed in the previous section (P10 and P20), we arrive at four hypotheses:
H1: After selecting the first 2 relevant documents, there will be an increase in the precision among the first 10 documents in the re-ordered search results list.
H2: After selecting the first 2 relevant documents, there will be an increase in the precision among the first 20 documents in the re-ordered search results list.
H3: After selecting the first 4 relevant documents, there will be an increase in the precision among the first 10 documents in the re-ordered search results list.
H4: After selecting the first 4 relevant documents, there will be an increase in the precision among the first 20 documents in the re-ordered search results list.
10.7.2 Results In order to determine whether the measurements from this experiment support or refute the hypotheses, we calculated the percent improvement (or deterioration) from the baseline measurements to the measurements after selecting two and four relevant documents. For all four cases under consideration, a statistically significant improvement was measured, as reported in Table 10.2. Significance was determined using ANOVA tests at a significance level of α = 0.05. Based on this statistical analysis, we conclude that H1, H2, H3, and H4 are all valid. As expected, the measurements also improve between selecting two and four relevant documents (H1 to H3, and H2 to H4). The decrease in precision between P10 and P20 is also to be expected, since as we consider a larger set of documents for relevance, the chance of non-relevant documents being included increases due to the limited number of documents available (e.g., 50 in these experiments). Our selection of test queries was intentionally chosen to provide a range of ambiguous queries. Since positive improvement was not discovered in all cases, it is worthwhile to consider the success of the technique with respect to each individual query. Figure 10.2 depicts the percent improvements over the baseline performance at both precision levels. In most cases, a significant increase in performance was found. However, in a few cases, the precision decreased as a result of the personalization.
Table 10.2 Average percent improvement over baseline precision measurements. Statistical significance is verified with ANOVA tests.

Precision  2 Relevant Documents Selected           4 Relevant Documents Selected
P10        H1: 89% (F(1, 23) = 16.36, p < 0.01)    H3: 128% (F(1, 23) = 15.20, p < 0.01)
P20        H2: 40% (F(1, 23) = 9.64, p < 0.01)     H4: 52% (F(1, 23) = 16.35, p < 0.01)
Fig. 10.2 The percent improvement over the baseline precision for each of the test queries, sorted by the degree of improvement after selecting two relevant documents: (a) percent improvement at P10; (b) percent improvement at P20.
Upon further analysis, we discovered that in all cases where there was a decrease in the precision scores with respect to the baseline (“automobile recalls” and “arrests bombing wtc” at the P10 level, and “journalist risks” at the P20 level), the baseline precision (from the original order of the search results) was already high (i.e., 0.6
or higher). The measured precision scores are provided in Fig. 10.3. Clearly, in the cases where the precision measurements are already high, the ability to make improvements via personalization is limited. A logical conclusion from this is that personalization is of more value when the performance of the underlying search engine is poor, and of less value when the underlying search engine can properly match the user's query to the relevant documents.
Fig. 10.3 The measured precision values for each of the test queries, sorted by the baseline precision (i.e., the original search results order): (a) measured precision values at P10; (b) measured precision values at P20.
10.8 Conclusions and Future Work This paper describes the key features of miSearch, a novel Web search personalization system based on automatically learning searchers' interests in explicitly identified search topics. A vector-based model is used for the automatic learning of the topic profiles, supporting the calculation of similarity measures between the topic profiles and the documents in the search results set. These similarity measures are used to provide a personalized re-ordering of the search results set. An evaluation using a set of difficult queries showed that a substantial improvement over the original order of the search results can be obtained, even after choosing to view as few as two relevant documents. We attribute this success to the methods for allowing searchers to maintain multiple distinct search topics upon which to base the personalized re-ordering. This results in less noise during the automatic topic learning, producing a cleaner modeling of the searcher's interests in the topics. Although the results reported in this paper have shown the methods used in miSearch to be very effective, we believe there is room for further improvement. We are currently investigating methods for re-weighting the contributions to the topic profile vectors during their construction, resulting in a dampening effect and the ability for the topics to model a user's changing understanding of their information need (i.e., topic drift). Analysis of the techniques over a much larger collection of difficult search tasks, and under conditions where the searchers might incorrectly select non-relevant documents to view, is needed to determine the robustness of the methods used in miSearch. In addition, user evaluations are in the planning stages, which will allow us to determine the willingness of searchers to pre-select topics during their search process. A longitudinal study will allow us to evaluate the value of the personalization methods in real-world search settings [6]. Acknowledgements. This research has been made possible by the first author's Start-Up Grant provided by the Faculty of Science at Memorial University, as well as the first author's Discovery Grant provided by the Natural Science and Engineering Research Council of Canada (NSERC). The authors would like to thank Aaron Hewlett for assisting with the software development, and Matthew Follett for assisting with the relevance score judgements.
References 1. Ahn, J., Brusiloviksy, P., He, D., Grady, J., Li, Q.: Personalized web exploration with task models. In: Proceedings of the World Wide Web Conference, pp. 1–10 (2008) 2. Buckley, C.: Why current IR engines fail. In: Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 584–585 (2004) 3. He, D., Brusiloviksy, P., Grady, J., Li, Q., Ahn, J.: How up-to-date should it be? the value of instant profiling and adaptation in information filtering. In: Proceedings of the IEEE/WIC/ACM International Conference on Web Intelligence, pp. 699–705 (2007) 4. Hinkle, D.E., Wiersma, W., Jurs, S.G.: Applied Statistics for the Behavioural Sciences. Houghton Mifflin Company (1994)
5. Hoeber, O.: Exploring Web search results by visually specifying utility functions. In: Proceedings of the IEEE/WIC/ACM International Conference on Web Intelligence, pp. 650–654 (2007) 6. Hoeber, O.: User evaluation methods for visual Web search interfaces. In: Proceedings of the International Conference on Information Visualization (2009) 7. Jansen, B.J., Pooch, U.: A review of Web searching studies and a framework for future research. Journal of the American Society for Information Science and Technology 52(3), 235–246 (2001) 8. Kamvar, S., Mayer, M.: Personally speaking (2007), http://googleblog.blogspot.com/2007/02/ personally-speaking.html 9. Kobayashi, M., Takeda, K.: Information retrieval on the Web. ACM Computing Surveys 32(2), 114–173 (2000) 10. Ma, Z., Pant, G., Sheng, O.R.L.: Interest-based personalized search. ACM Transactions on Information Systems 25(1) (2007) 11. Netscape: Open directory project (2008), http://www.dmoz.org/ 12. Pierrakos, D., Paliouras, G., Papatheodorou, C., Spyropoulos, C.: Web usage mining as a tool for personalization: A survey. User Modeling and User-Adapted Interaction 13(4), 311–372 (2003) 13. Pirolli, P., Card, S.: Information foraging. Psychological Review 106(4), 643–675 (1999) 14. Porter, M.: An algorithm for suffix stripping. Program 14(3), 130–137 (1980) 15. van Rijsbergen, C.J.: Information Retrieval, Butterworths (1979) 16. Spink, A., Wolfram, D., Jansen, B.J., Saracevic, T.: Searching the Web: The public and their queries. Journal of the American Society for Information Science and Technology 52(3), 226–234 (2001) 17. Sugiyama, K., Hatano, K., Yoshikawa, M.: Adaptive Web search based on user profile construction without any effort from users. In: Proceedings of the World Wide Web Conference, pp. 675–684 (2004) 18. Wedig, S., Madani, O.: A large-scale analysis of query logs for assessing personalization opportunities. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 742–747 (2006) 19. Yahoo: Yahoo! developer network: Yahoo! search Web services (2008), http://developer.yahoo.com/search
Chapter 11
Providing Private Recommendations on Personal Social Networks Cihan Kaleli and Huseyin Polat
Abstract. Personal social networks have recently been used to offer recommendations. Due to privacy concerns, privacy protection while generating accurate referrals is imperative. Since accuracy and privacy are conflicting goals, providing accurate predictions with privacy is challenging. We investigate generating personal social networks-based referrals without greatly jeopardizing users' privacy, using randomization techniques. We perform real data-based trials to evaluate the overall performance of our proposed schemes, and we analyze the schemes in terms of privacy and efficiency. Our schemes make it possible to generate accurate recommendations on social networks efficiently while preserving users' privacy. Keywords: Collaborative Filtering, Personal Social Networks, Privacy, Recommendations, Randomization.
11.1 Introduction Collaborative filtering (CF) is a recent technique for filtering and recommendation purposes. Many users are able to utilize CF techniques to obtain recommendations about many of their daily activities with the help of a group of users. CF systems work by collecting ratings for items and matching together users who share the same tastes or interests. CF schemes predict how well a user, referred to as the active user (a), will like an item based on the preferences of similar users. The goal is to offer predictions with decent accuracy efficiently; users usually want to get accurate referrals within a limited time during an online interaction. Various techniques, such as clustering and classification, have been used to achieve these goals.
Cihan Kaleli · Huseyin Polat: Department of Computer Engineering, Anadolu University, Eskisehir 26470, Turkey; e-mail: {ckaleli,polath}@anadolu.edu.tr
Personal social networks are among such methods and can be applied to CF [1]. Once users have friends, their friends' friends, and so on, they can form a social network over the Internet, and it is assumed that the connected friends have common interests. CF systems, including social networks-based ones, have various advantages. However, they fail to protect users' privacy and pose several privacy risks to individuals. Data collected for CF can be used for unsolicited marketing, government surveillance, and user profiling; it can also be misused or transferred to other parties [4]. Since data are valuable assets, they can be used for malicious purposes. Therefore, due to privacy risks, users might decide to give false data or refuse to provide data at all. Without enough data, it is almost impossible to generate recommendations. Even when sufficient data are available, they may contain false entries if privacy measures are not provided, and accurate referrals cannot be produced from such data. It is more likely that truthful and sufficient data will be collected if privacy measures are provided. We propose using randomization techniques to protect users' privacy while still providing accurate recommendations on masked data using social networks-based CF schemes. We perform real data-based experiments and analyze our schemes.
11.2 Related Work Privacy-preserving collaborative filtering (PPCF) has been receiving increasing attention. In [5, 6], Canny proposes privacy-preserving schemes for CF in which users control their private data and a community of users can compute a public aggregate of their data. In [10], the authors employ randomized perturbation techniques to offer predictions for a single item with privacy. Polat and Du [11] study how to produce predictions while preserving users' privacy by using randomized response techniques. Their schemes are based on binary ratings, where the system predicts whether an item will be liked or disliked by active users. Parameswaran [9] presents a data obfuscation technique and designs and implements a privacy-preserving shared CF framework using the data obfuscation algorithm. Berkovsky et al. [2] investigate how a decentralized approach to the storage of users' profiles could mitigate some of the privacy concerns of CF. In [13], the authors discuss how to provide top-N recommendations on distributed data while preserving data owners' privacy. Kaleli and Polat [7] investigate how to achieve naïve Bayesian classifier (NBC)-based CF tasks on partitioned data with privacy, and demonstrate that NBC-based CF services on distributed data can be offered with decent accuracy while preserving privacy. Our study differs from the aforementioned ones in several aspects. We investigate how to provide predictions on social networks constructed over the Internet while preserving users' privacy. Each user has control over her own data; there is no need for a central database, and recommendations can be generated in a distributed manner. Unlike partitioned data-based schemes, where ratings are distributed between two parties, each user in the social network keeps her own data.
11.3 Personal Social Networks-Based Predictions with Privacy Without privacy concerns, users in a social network can get referrals based on their friends' data, their friends of friends' data, and so on up to level six, using the algorithm proposed in [1]. With privacy concerns, however, users do not want to reveal their item ratings to their friends. Each user in a network wants to guard her ratings against the friends with whom she has a direct link. It is challenging to give a clear-cut privacy definition; however, we can define it as follows: protecting each user's ratings and rated and/or unrated items against her friends. Revealing the ratings of different products causes different privacy risks. Furthermore, it might be even more damaging to disclose which items have been bought and which have not. Therefore, privacy protection methods should prevent users' friends from deriving their ratings and their rated and/or unrated items. Once a personal social network is constructed, each user wants to hide her data from her friend in the upper level and from her friends in the lower level; in short, users want to hide data from those with whom they have direct links. In a network like MSN or ICQ messenger, users interact only with their friends. When a wants to get referrals for some items, she also needs to hide her ratings and her rated and/or unrated items. Users can create friendships using MSN or ICQ messenger: one user sends an invitation to another, who can accept or decline it; if she accepts, they become friends. As users continue doing this, a social network is obtained, where each user is a node and friendship is represented by a direct link [1]. Users have ratings for items they have bought before. Ben-Shimon et al. [1] propose offering referrals on each user's personal social network, which is a snapshot of the entire network presenting each user's relations with her related friends up to the level of six. The network is constructed in the form of a social tree for each user a who is looking for referrals, using a breadth-first search algorithm. The distance between users a and u_j, d(a, u_j), is computed by traversing the tree. The tree is viewed as a set of users described by X(a, l) = {u_j | ∀j, u_j ∈ social tree of user a up to level l}. The list of recommendations for a is based on her friends' ratings, where the list is a vector of items sorted according to rank, computed as follows:

rank(a, q) = \sum_{u_j \in X(a,l)} k^{-d(a,u_j)} \cdot r(u_j, q)    (11.1)
where q is the target item for which the rank is sought, r(u_j, q) is the rating of user u_j for q, and k is the social network's attenuation coefficient, set at 2. Once a's social network is constructed, she can get recommendations based on it; note that a is looking for a ranked list, not numerical predictions. Therefore, ranks can be computed from z-scores instead of raw ratings, and each user computes the z-scores of her ratings. Since a user does not know the mean rating and the standard deviation of her friends' ratings, it becomes more difficult to derive true ratings from disguised z-scores. To find the ranked list without violating users' privacy, we follow these steps:
1. a first sends a query to her neighbors, stating that she is looking for recommendations for M_a items. Since revealing unrated items poses privacy risks, a prevents her neighbors from learning her unrated items, as follows: after selecting the M items for which she wants referrals, she chooses β_a uniformly randomly or selectively over (1, 100) and uniformly randomly selects β_a % of her remaining cells (M_d items), including rated and unrated ones. She then asks for referrals for M_a items, where M_a = M + M_d. Since the time to get referrals increases with increasing β_a, and β_a also affects privacy, a can decide β_a based on the desired levels of privacy and performance. a also adds a counter (c) to her query and sets c to 0.
2. Each friend of a first checks c to see whether they are at the sixth level or not. If c is less than five, they forward the query received from a to their friends.
3. Each user in subsequent levels again checks c. Unless it is five, they forward the query to their friends in the lower level while increasing c by 1.
4. When c = 5, the users are at level six. Since each user at level six sends their z-scores for the M_a items to their friend at level five, they disguise their private data to prevent that friend from deriving it. For this purpose, they do the following:
a. Each user computes default z-scores (z_d values) online, as explained in Sect. 11.3.1. Each user u chooses a δ_u value either selectively or uniformly randomly over (1, 100).
b. Users then selectively or uniformly randomly pick δ_u % of the z_d values of the M_a items.
c. They add them to their own z-scores of the M_a items, multiply the sum by 1/2^6, and send the final aggregate values to their friend at the higher level while decreasing c.
5. Users at level five cannot learn the true z-scores due to the added z_d values. They decrease c by 1, multiply their corresponding z-scores by 1/2^5, add the result to the values obtained from their friends, and send the aggregate results and c to their friend at the higher level.
6. Users in subsequent levels decrease c by 1, multiply their z-scores by 1/k^l, where l is their level, add the result to the values obtained from their friends in the lower level, and finally send the aggregate values to their friend at the higher level.
7. Once a gets the aggregate values from all of her neighbors at the lower level, she sums them and obtains the final aggregate values for the M_a items. She then takes the ranks for the M items of interest and sorts them in descending order, obtaining the ranked list.
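For clarity, the following sketch computes the rank aggregation of Eq. (11.1) centrally, with z-scores standing in for ratings as in the privacy-preserving variant; in the protocol above this sum is of course accumulated level by level, and the example values are illustrative only:

```python
def rank_for_item(tree, k=2):
    """Eq. (11.1): sum of k^(-d(a, u_j)) * z(u_j, q) over users u_j in a's social tree.

    tree: list of (distance_from_a, z_score_for_the_target_item) pairs, one per user
          in the active user's personal social network (levels 1..6).
    """
    return sum((k ** -d) * z for d, z in tree)

# Illustrative values: three friends at level 1, two friends-of-friends at level 2.
tree = [(1, 0.8), (1, -0.2), (1, 1.1), (2, 0.5), (2, 1.4)]
print(rank_for_item(tree))   # 1.325
```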
11.3.1 Privacy-Preserving z_d Computation Protocol To protect their privacy while generating referrals, users need to employ non-personalized default z-scores (z_d values). Therefore, each user computes z_d values based on the z-scores collected from her friends, as follows:
1. Each neighbor computes the z-scores of her ratings and masks them, as explained below.
2. Each neighbor sends the masked data to user u, the friend who wants to compute the z_d values.
3. After collecting her friends' masked z-scores, user u stores them in an m_u × j matrix, where m_u is the number of her neighbors and j is the number of products.
4. The computations so far are conducted off-line. To add more randomness and to prevent others from deriving data across multiple scenarios, when the user is at level six she computes the z_d values online for each referral computation, as follows:
a. She chooses a γ_u value over (1, 100), either selectively or uniformly randomly.
b. For each item, she selects a θ_uj value uniformly randomly over (1, γ_u).
c. She finally chooses, uniformly randomly or selectively, θ_uj % of the masked z-scores for each item j to compute the z_d values.
Users might perturb their data differently due to varying privacy concerns: data type, data value, and data sensitivity differ between users. As shown by Polat and Du [12], it is still possible to estimate aggregate values with decent accuracy from variably masked data. Since inconsistent data masking based on randomization improves privacy, users disguise their data variably, as follows:
1. Users decide on the random number distribution (uniform or Gaussian) and the random numbers' standard deviation (σ). They also decide on the number of rated and/or unrated cells to be masked.
2. They generate random numbers for the cells to be disguised using a uniform or Gaussian distribution with mean (μ) equal to 0 and standard deviation σ.
3. They finally add the random numbers to the corresponding cells.
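A sketch of this variable masking step, assuming NumPy for the random draws; the fraction of masked cells and the σ value are user-specific choices, as described above, and the function name is ours:

```python
import numpy as np

def mask_z_scores(z_scores, fraction=0.5, sigma=1.0, rng=None):
    """Add zero-mean Gaussian noise to a randomly chosen subset of cells."""
    rng = rng or np.random.default_rng()
    masked = np.array(z_scores, dtype=float)
    n_cells = len(masked)
    n_masked = int(fraction * n_cells)
    idx = rng.choice(n_cells, size=n_masked, replace=False)
    masked[idx] += rng.normal(loc=0.0, scale=sigma, size=n_masked)
    return masked
```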
11.4 Experiments We performed various trials based on real data sets in order to evaluate the overall performance of our proposed schemes. We performed different experiments to show how various factors like γ and δ values affect accuracy. Since the effects of the random number distribution, the level of perturbation, and the total number of users (n) in a personal social network have been studied before, we did not run experiments to show their effects. As explained by Polat and Du [12], uniform and Gaussian distributions give very similar results, accuracy improves with increasing n values, and it decreases with an increasing level of perturbation. We used Jester (http://goldberg.berkeley.edu/jester-data/) in our trials, which is a web-based joke referral system. It has 100 jokes and records of 17,988 users. The continuous ratings in Jester range from -10 to +10. Although the MovieLens and EachMovie data sets are also available for CF purposes, our results based on Jester can be generalized. To measure accuracy, we used the R-measure [3], which is widely used to assess the accuracy of ranked lists and which can be defined as follows for a:
Ra = ∑i [ max(zai − zn, 0) / 2^((i−1)/(α−1)) ]    (11.2)
where i is the index of items a liked in the recommendations list, α is the viewing half-life, which is set at 5, and zn is neutral z-score. The grade is then divided by Rmax , the maximal grade when all items the user likes appear at the top of the list. We used randomly selected 3,280 users who rated at least 50 items from Jester to generate social networks. We randomly selected 500 users as test users, where we found the ranked lists for 30 randomly selected test items for each a. We constructed personal social networks for each a. We assumed that each user approximately has m friends and we set m at 3. Users at level seven disguise their z-scores and empty cells using random numbers generated using Gaussian distribution with σ = 1. The results for γ and δ being 0 show the outcomes without any privacy concerns. We first performed experiments to show how varying γ values affect accuracy. We changed γ values from 0 to 100, found the ranked lists for each a and γ values, and displayed the final outcomes in Table 11.1. Users at level six compute zd s from masked data they received. The amount of data to be used for zd s computation varies with γ values. As seen from Table 11.1, accuracy slightly improves with increasing γ values. Although data disguising makes accuracy worse, since more data (data of users at level seven) is involved, accuracy slightly improves. However, performance degrades with privacy measures because zd s are found online. Table 11.1 Accuracy with Varying γ Values
γ          0         30        60        100
R-Measure  56.1858   56.8587   56.8592   56.8615
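For illustration, Eq. (11.2) can be computed with a few lines of code. The sketch below is our own, assuming the active user's z-scores of the recommended items are given in list order and that the reported accuracy is the percentage R/Rmax; neither the function names nor the normalization choice are prescribed by the original text.

def r_measure(ranked_z, z_neutral, alpha=5):
    """Rank score of Eq. (11.2): ranked_z holds the active user's z-scores of
    the recommended items in list order (index i = 1, 2, ...)."""
    return sum(max(z - z_neutral, 0.0) / 2 ** ((i - 1) / (alpha - 1))
               for i, z in enumerate(ranked_z, start=1))

def normalized_r(ranked_z, z_neutral, alpha=5):
    """Divide by R_max, approximated here by placing the liked items on top."""
    r = r_measure(ranked_z, z_neutral, alpha)
    r_max = r_measure(sorted(ranked_z, reverse=True), z_neutral, alpha)
    return 100.0 * r / r_max if r_max > 0 else 0.0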
We also performed experiments to show how varying δ values affect accuracy, where we set γ at 60. Users at level six decide, based on their δ values, the amount of zd s to be used to hide their true z-scores. Therefore, we changed δ values from 0 to 100 to show how accuracy varies, found the ranked lists for each a and δ value, and display the final outcomes in Table 11.2. As seen from Table 11.2, the quality of recommendations slightly improves with increasing δ values due to the increasing amount of data. The positive effects of the increasing amount of data outweigh the negative effects of data masking. Although accuracy somewhat improves,

Table 11.2 Accuracy with Varying δ Values
δ          0         30        60        100
R-Measure  56.1858   56.4092   56.6483   56.8592
privacy-preserving measures make performance worse. Therefore, we still provide accurate recommendations with privacy, while sacrificing some performance.
11.5 Summary: Privacy, Accuracy, and Performance Issues We can enhance privacy using various improvements, as follows:
1. Since a is looking for a sorted rank list, not numerical ratings, each user in a's social network generates a random number and adds it to all of her z-scores of the Ma items. The order between ranks remains the same because the same random number is added. Users selectively or uniformly randomly choose random numbers and are free to add them or not.
2. Users can encrypt their z-scores with a's public key employing homomorphic encryption schemes. The homomorphic encryption property is Ek(x) ∗ Ek(y) = Ek(x + y), and an example is the system proposed by Paillier [8]. A useful property of homomorphic encryption schemes is that an addition operation can be performed on the encrypted data without decrypting it. Users then send encrypted values to their friends at the higher levels. Users at intermediate levels can compute the sum of the encrypted values. a gets encrypted aggregates from her friends, decrypts them, and sums them to find the final aggregate values. She then finds the sorted list, as explained before. When each user sends their encrypted data to their friends at the higher levels, such friends are not able to decrypt the encrypted values unless they collaborate with a to learn a's private key.
3. We assume that one user's friend at the higher level and her friends at the lower level are not able to collaborate, because there is no direct link between them. To derive data about user u at level l, her friend at level l−1 must collaborate with all of user u's friends at level l+1. This is practically impossible; in other words, in a social network, a friend of user u cannot know all of user u's friends. Yet, if this is the case, user u at level l adds zd s of some of her uniformly randomly or selectively chosen Ma items to the true z-scores of the Ma items, as explained before.
Our proposed schemes can be used in such a way as to achieve the required privacy, accuracy, and performance levels. With increasing randomness, accuracy decreases, as expected. Using uniform or Gaussian distributions achieves similar results. With increasing σ values, accuracy diminishes. Although zd s are non-personalized values, they might not represent true default z-scores. Therefore, appending zd s might make accuracy worse. On the other hand, with an increasing amount of data used to find zd s, accuracy improves since the effects of the random numbers diminish. Our schemes are flexible enough to achieve the required levels of accuracy, privacy, and performance. Our proposed schemes are secure for the following reasons: Since z-scores are masked rather than true ratings, and each user's mean vote and standard deviation of ratings are known only to that user, it becomes difficult to derive true ratings from masked z-scores. Inconsistent data masking improves privacy because users need to learn the random number distribution, the level of perturbation, and the amount of masked data to derive data. Due to randomly or selectively chosen γ and δ values
and the z-scores and zd s randomly selected based on such values, users cannot obtain true z-scores. Performance can be discussed in terms of additional communication, computation, and storage costs. The extra storage cost is small because each user u only needs space to save the data used to find zd s in an mu × j matrix. Although privacy measures do not cause additional online communication costs in terms of the number of communications, the amount of data to be sent online increases and our schemes introduce extra off-line communication costs, which are not critical. The schemes also introduce extra online computation costs due to the zd s computation; but since the users perform these computations simultaneously, they are not critical.
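As a concrete illustration of the homomorphic-encryption enhancement mentioned in Sect. 11.5 (item 2), the following toy sketch implements Paillier-style additive encryption with deliberately tiny primes and the common simplification g = n + 1. It only demonstrates the property Ek(x) ∗ Ek(y) = Ek(x + y); it is not a secure implementation, and real-valued z-scores would additionally require a fixed-point integer encoding.

import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p=293, q=433):                    # tiny primes, illustration only
    n = p * q
    lam = lcm(p - 1, q - 1)
    n2 = n * n
    g = n + 1                                # common simplification
    x = pow(g, lam, n2)
    mu = pow((x - 1) // n, -1, n)            # modular inverse (Python >= 3.8)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

pub, priv = keygen()
c = (encrypt(pub, 7) * encrypt(pub, 5)) % (pub[0] ** 2)   # E(7)*E(5) = E(12)
assert decrypt(pub, priv, c) == 12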
11.6 Conclusion and Future Work We showed that users in a social network are able to obtain referrals with decent accuracy, without violating their privacy, using personal social networks. Although privacy, accuracy, and performance are conflicting goals, our proposed schemes make it possible for users to find an equilibrium among them. Our results based on real data trials show that we can still provide accurate referrals even if privacy-protecting measures are in place. We will study how to improve privacy without sacrificing too much efficiency. Since numerical and binary ratings require different techniques for masking, we will also study providing private predictions on binary ratings.
References 1. Ben-Shimon, D., Tsikinovsky, A., Rokach, L., et al.: Recommender system from personal social networks. Adv. Soft Comput. 43, 47–55 (2007) 2. Berkovsky, S., Busetta, P., Eytani, Y., et al.: Collaborative filtering over distributed environment. In: Proc. of the DASUM Workshop, Edinburg, UK (2005) 3. Breese, J.S., Heckerman, D., Kadie, C.: Empirical analysis of predictive algorithms for collaborative filtering. In: Proc. of the UAI 1998, Madison, WI, USA (1998) 4. Cranor, L.F.: I didn’t buy it for myself’ privacy and E-commerce personalization. In: Proc. of the ACM Workshop on Privacy in the Electronic Society, Washington, DC, USA (2003) 5. Canny, J.: Collaborative filtering with privacy. In: Proc. of the IEEE Symposium on Security and Privacy, Oakland, CA, USA (2002) 6. Canny, J.: Collaborative filtering with privacy via factor analysis. In: Proc. of the ACM SIGIR 2002, Tampere, Finland (2002) 7. Kaleli, C., Polat, H.: Providing naive Bayesian classifier-based private recommendations on partitioned data. In: Kok, J.N., Koronacki, J., Lopez de Mantaras, R., Matwin, S., Mladeniˇc, D., Skowron, A. (eds.) PKDD 2007. LNCS (LNAI), vol. 4702, pp. 515–522. Springer, Heidelberg (2007) 8. Paillier, P.: Public-key cryptosystems based on composite degree residue classes. In: Stern, J. (ed.) EUROCRYPT 1999. LNCS, vol. 1592. Springer, Heidelberg (1999) 9. Parameswaran. R.: A robust data obfuscation approach for privacy-preserving collaborative filtering. Georgia Institute of Technology, Atlanta, GA, USA (2006)
10. Polat, H., Du, W.: Privacy-preserving collaborative filtering. Int. J. Electron. Commer. 9(4), 9–36 (2005) 11. Polat, H., Du, W.: Achieving private recommendations using randomized response techniques. In: Ng, W.-K., Kitsuregawa, M., Li, J., Chang, K. (eds.) PAKDD 2006. LNCS (LNAI), vol. 3918, pp. 637–646. Springer, Heidelberg (2006) 12. Polat, H., Du, W.: Effects of inconsistently masked data using RPT on CF with privacy. In: Proc. of the ACM SAC ECT, Seoul, Korea (2007) 13. Polat, H., Du, W.: Privacy-preserving top-N recommendation on distributed data. J. of the Am. Soc. for Inf. Sci. and Technol. 59(7), 1093–1108 (2008)
Chapter 12
Differential Evolution for Scheduling Independent Tasks on Heterogeneous Distributed Environments Pavel Krömer, Ajith Abraham, Václav Snášel, Jan Platoš, and Hesam Izakian
Abstract. Scheduling is one of the core steps to efficiently exploit the capabilities of heterogeneous distributed computing systems and is an NP-complete problem. Therefore using meta-heuristic algorithms is a suitable approach in order to cope with its difficulty. In this paper we apply an efficient meta-heuristic method, the differential evolution, to the independent tasks scheduling problem and compare its efficiency with other popular methods for minimizing makespan and flowtime in heterogeneous distributed computing systems. Keywords: Scheduling, Differential Evolution.
Pavel Krömer · Václav Snášel · Jan Platoš Department of Computer Science, VŠB - Technical University of Ostrava, 17. listopadu 15, 708 33, Ostrava-Poruba, Czech Republic e-mail: {pavel.kromer.fei,jan.platos.fei,vaclav.snasel}@vsb.cz
Ajith Abraham Norwegian Center of Excellence, Center of Excellence for Quantifiable Quality of Service, Norwegian University of Science and Technology, Trondheim, Norway e-mail: [email protected]
Hesam Izakian Islamic Azad University, Ramsar Branch, Ramsar, Iran e-mail: [email protected]
12.1 Introduction Grid computing and distributed computing, dealing with large-scale and complex computing problems, are a hot topic in computer science and research. Mixed-machine heterogeneous computing (HC) environments utilize a distributed suite
of different machines, interconnected by a computer network, to perform different computationally intensive applications that have diverse requirements [1, 2]. Miscellaneous resources ought to be orchestrated to perform a number of tasks in parallel or to solve complex tasks atomized into a variety of independent subtasks [8]. Proper scheduling of the tasks on the available resources is one of the main challenges of a mixed-machine HC environment. To exploit the different capabilities of a suite of heterogeneous resources, a resource management system (RMS) allocates the resources to the tasks, and the tasks are ordered for execution on the resources. In a given time interval, a number of tasks are received by the RMS of an HC environment. Task scheduling is the mapping of a set of tasks to a set of resources in order to efficiently exploit the capabilities of the environment. It has been shown that an optimal mapping of computational tasks to available machines in an HC suite is an NP-complete problem [3] and, as such, it is a subject of various heuristic and meta-heuristic algorithms. The heuristics applied to the task scheduling problem include the min-min heuristic, the max-min heuristic, the longest job to fastest resource - shortest job to fastest resource heuristic, the sufferage heuristic, the work queue heuristic and the min-max heuristic. The meta-heuristics applied to the task scheduling problem include hybrid ant colony optimization [7], simulated annealing [9] and genetic algorithms [5, 4]. The meta-heuristic algorithms usually operate with a population of prospective problem solutions - task schedules - that are evolved in order to obtain an improved schedule, optimized according to some criteria. The initial solutions optimized by a meta-heuristic algorithm are generated either randomly or using some heuristic method. In this paper, we apply a powerful population-based meta-heuristic algorithm - differential evolution - to the task scheduling problem and compare its results with selected existing algorithms.
12.2 Differential Evolution Differential evolution (DE) is a reliable, versatile and easy-to-use stochastic evolutionary optimization algorithm [6]. DE is a population-based optimizer that evolves real-encoded vectors representing the solutions to a given problem. The real-valued nature of the population vectors differentiates DE notably from GAs, which were designed to evolve solutions encoded into binary or finite alphabets. DE starts with an initial population of N real-valued vectors. The vectors are initialized with real values either randomly or so that they are evenly spread over the problem domain; the latter initialization leads to better results of the optimization process [6]. During the optimization, DE generates new vectors that are perturbations of existing population vectors. The algorithm perturbs vectors with the scaled difference of two randomly selected population vectors: it adds the scaled random vector difference to a third randomly selected population vector to produce a so-called trial vector. The trial vector competes with the member of the current population with the same index. If the trial vector represents a better solution than the population vector, it takes its place in the population [6]. Differential evolution is parametrized by two parameters [6]: the scale factor F ∈ (0, 1+) controls the rate at which the population evolves, and the crossover probability C ∈ [0, 1] determines the ratio of coordinates that are transferred to the trial vector from its opponent. The number of vectors in the population is also an important parameter of the algorithm. The outline of DE is shown in Algorithm 12.1. There are more variants of differential evolution; they differ mostly in the way new vectors are generated.
Algorithm 12.1 A summary of Differential Evolution
to the trial vector from its opponent. The number of vectors in the population is also an important parameter of the population. The outline of DE is shown in Algorithm 12.1. There are more variants of differential evolution. They differ mostly by the way new vectors are generated. Algorithm 12.1 A summary of Differential Evolution 1: 2: 3: 4: 5:
6: 7: 8: 9: 10:
Initialize the population P consisting of M vectors Evaluate an objective function ranking the vectors in the population Create new population: for all i ∈ {1 . . . M} do Create a trial vector vti = v1r + F · (v2r − v3r ), where F ∈ [0, 1] is a parameter and 1 vr , v2r , v3r are three random vectors from the population P. This step is in DE called mutation. Validate the range of coordinates of vti . Optionally adjust coordinates of vti so, that vti is valid solution to given problem. Perform uniform crossover. Select randomly one point (coordinate) l in vti . With probability 1 −C let vti [m] = vi [m] for each m ∈ {1, . . . , N} such that m = l. Evaluate the trial vector. If the trial vector vti represent a better solution than population vector vi , replace vi in P by vti . end for Check termination criteria; if not satisfied go back to step 3
12.3 Differential Evolution for Scheduling Optimization An HC environment is composed of computing resources where these resources can be a single PC, a cluster of workstations or a supercomputer. Let T = {T1 , T2 , . . . , Tn } denote the set of tasks that is in a specific time interval submitted to RMS. Assume the tasks are independent of each other with no intertask data dependencies and preemption is not allowed (the tasks cannot change the resource they have been assigned to). Also assume at the time of receiving these tasks by RMS, m machines M = {M1 , M2 , . . . , Mm } are within the HC environment. For our purpose, scheduling is done on machine level and it is assumed that each machine uses First-Come, First-Served (FCFS) method for performing the received tasks. We assume that each machine in HC environment can estimate how much time is required to perform each task. In [2] Expected Time to Compute (ETC) matrix is used to estimate the required time for executing a task in a machine. An ETC matrix is a n × m matrix in which n is the number of tasks and m is the number of machines. One row of the ETC matrix contains the estimated execution time for a given task on each machine. Similarly one column of the ETC matrix consists of the estimated execution time of a given machine for each task. Thus, for an arbitrary task T j and an arbitrary machine Mi , [ETC] j,i is the estimated execution time of T j on Mi . In ETC model we take the usual assumption that we know the computing capacity of each resource, an estimation or prediction of the computational needs of each job, and the load of prior work of each resource. Optimum makespan (metatask execution time) and flowtime of a set of jobs can be defined as:
makespan = min_{S∈Sched} { max_{j∈Jobs} Fj }    (12.1)

flowtime = min_{S∈Sched} { ∑_{j∈Jobs} Fj }    (12.2)
where Sched is the set of all possible schedules, Jobs stands for the set of all jobs and Fj represents the time in which job j finalizes. Assume that [C]j,i (j = 1, 2, . . . , n, i = 1, 2, . . . , m) is the completion time for performing the j-th task on the i-th machine and Wi (i = 1, 2, . . . , m) is the previous workload of Mi; then ∑j Cj,i + Wi is the time required for Mi to complete the tasks assigned to it. According to the aforementioned definitions, makespan and flowtime can be evaluated using Eq. (12.3) and Eq. (12.4), respectively:

makespan = max_{i∈{1,...,m}} ( ∑j Cj,i + Wi )    (12.3)

flowtime = ∑_{i=1}^{m} Ci    (12.4)

where Ci denotes the completion time of machine Mi.
12.3.1 Schedule Encoding A schedule of n independent tasks executed on m machines can be naturally expressed as a string of n integers S = (s1, s2, . . . , sn) subject to si ∈ {1, . . . , m}. The value at the i-th position in S represents the machine on which the i-th job is scheduled in schedule S. Since differential evolution uses real vectors for problem encoding, real coordinates must be used instead of discrete machine numbers. The real-encoded DE vector is translated to the schedule representation by truncation of its elements.
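The decoding step can be sketched as follows, assuming each vector element is kept in the range [1, m + 1) so that truncation yields a machine number between 1 and m; the helper name is our own.

def decode_schedule(vector, m):
    """Translate a real-encoded DE vector into a schedule: the i-th element
    gives the machine (1..m) that executes the i-th task."""
    return [min(int(x), m) for x in vector]   # truncation, clipped to m

# e.g. decode_schedule([1.7, 3.2, 2.9999], m=3) -> [1, 3, 2]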
12.3.2 Schedule Evaluation Assume a schedule S from the set of all possible schedules Sched. For the purpose of differential evolution, we define a fitness function fit(S) : Sched → R that evaluates each schedule:

fit(S) = λ · makespan(S) + (1 − λ) · flowtime(S) / m    (12.5)
The function f it(S) is a sum of two objectives, the makespan of schedule S and flowtime of schedule S divided by number of machines m to keep both objectives in approximately the same magnitude. The influence of makespan and flowtime in f it(S) is parametrized by the variable λ . The same schedule evaluation was used also in [4]. In this paper, we compute the flowtime and makespan using a
binary schedule matrix B(S) : Sched → {0, 1}^{m×n}, which is constructed as follows: for an n × m ETC matrix that describes the estimated execution times of n jobs on m machines, the m × n schedule matrix B(S) has a 1 in the i-th row and j-th column iff task j is scheduled for execution on machine i; otherwise, B(S)i,j is equal to 0. Then flowtime(S) : Sched → R and makespan(S) : Sched → R can be defined with the help of matrix multiplication as:

makespan(S) = max_{j∈{1,...,m}} [B(S) · ETC]j,j    (12.6)

flowtime(S) = ∑_{j∈{1,...,m}} [B(S) · ETC]j,j    (12.7)

Less formally, makespan equals the maximal value on the main diagonal of B(S) · ETC and flowtime equals the sum of all elements on the main diagonal of B(S) · ETC.
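A minimal sketch of the schedule evaluation of Eqs. (12.5)-(12.7) is given below. For brevity it computes the diagonal of B(S) · ETC directly from the schedule instead of performing the matrix multiplication, and it omits the previous workloads Wi; the names and the data layout (etc[task][machine]) are our own assumptions.

def machine_completion_times(schedule, etc, m):
    """Completion time of every machine, i.e. the diagonal of B(S)*ETC."""
    totals = [0.0] * m
    for task, machine in enumerate(schedule):      # machines numbered 1..m
        totals[machine - 1] += etc[task][machine - 1]
    return totals

def fitness(schedule, etc, m, lam=0.5):
    """fit(S) = lam * makespan(S) + (1 - lam) * flowtime(S) / m, Eq. (12.5)."""
    totals = machine_completion_times(schedule, etc, m)
    makespan = max(totals)
    flowtime = sum(totals)
    return lam * makespan + (1 - lam) * flowtime / m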
12.4 Experiments We have implemented differential evolution for scheduling independent tasks on heterogeneous distributed environments. The differential evolution algorithm was implemented in its classic variant, referred to as DE/rand/1/bin [6]. To evaluate the performance of DE for minimizing makespan and flowtime, we have used the benchmark proposed in [2]. The simulation model is based on an expected time to compute (ETC) matrix for 512 jobs and 16 machines. The instances of the benchmark are classified into 12 different types of ETC matrices according to the following properties [2]:
• task heterogeneity – represents the amount of variance among the execution times of tasks for a given machine
• machine heterogeneity – represents the variation among the execution times for a given task across all the machines
• consistency – an ETC matrix is said to be consistent whenever a machine Mj that executes any task Ti faster than machine Mk executes all tasks faster than machine Mk
• inconsistency – machine Mj may be faster than machine Mk for some tasks and slower for others
The DE algorithm was used with the parameters summarized in Table 12.1. The parameters were set after brief initial tuning. The factor λ was set to 0.5 to have an equal contribution of makespan and mean flowtime to the fitness value. Table 12.2 and Table 12.3 compare the makespan and flowtime of experimental schedule optimization by differential evolution with the results of some heuristic algorithms that were developed specially for scheduling independent tasks on heterogeneous distributed environments. The algorithms were [2]: the work queue heuristic, max-min heuristic, sufferage heuristic, min-min heuristic, and min-max heuristic.
Table 12.1 A summary of DE parameters

Parameter                   Value
Population size             20
Terminating generation      100000
Probability of crossover    C = 0.9
Scaling factor              F = 0.1
Makespan / flowtime ratio   λ = 0.5
Table 12.2 Makespan achieved by different algorithms

ETC     work queue   max-min     sufferage   min-min     min-max     DE best     DE avg
l-l-c   7332         6753        5461        5468        5310        7151        7303.2
l-l-s   8258         5947        3443        3599        3327        4479        4582.2
l-l-i   9099         4998        2577        2734        2523        3127        3203.0
l-h-c   473353       400222      333413      279651      273467      451815      457741.0
l-h-s   647404       314048      163846      157307      146953      212828      220334.0
l-h-i   836701       232419      121738      113944      102543      141635      152186.0
h-l-c   203180       203684      170663      164490      164134      212175      220142.2
h-l-s   251980       169782      105661      106322      103321      141176      142405.2
h-l-i   283553       153992      77753       82936       77873       99413       100307.0
h-h-c   13717654     11637786    9228550     8145395     7878374     13325802    13595908.0
h-h-s   18977807     9097358     4922677     4701249     4368071     6138124     6545734.0
h-h-i   23286178     7016532     3366693     3573987     2989993     4418167     454678.0
Table 12.3 Flowtime achieved by different algorithms

ETC     work queue   max-min      sufferage    min-min      min-max      DE best      DE avg
l-l-c   108843       108014       86643        80354        84717        85422        891272.4
l-l-s   127639       95091        54075        51399        52935        53675        53964.4
l-l-i   140764       79882        40235        39605        39679        43941        44846.2
l-h-c   7235486      6400684      5271246      3918515      4357089      3783520      3788428.0
l-h-s   10028494     5017831      2568300      2118116      2323396      2277816      2383501.0
l-h-i   12422991     3710963      1641220      1577886      1589574      1890529      1935355.4
h-l-c   3043653      3257403      2693264      2480404      2613333      2699241      2765402.2
h-l-s   3776731      2714227      1657537      1565877      1640408      1597594      1625219.6
h-l-i   4382650      2462485      1230495      1214038      1205625      1359241      1380342.0
h-h-c   203118678    185988129    145482572    115162284    125659590    100921177    104753227.0
h-h-s   282014637    145337260    76238739     63516912     69472441     67874790     70281581.0
h-h-i   352446704    112145666    47237165     45696141     46118709     57808847     58216428.0
Each ETC matrix was named using the pattern x-y-z, where x describes task heterogeneity (high or low), y describes machine heterogeneity (high or low) and z describes the type of consistency (inconsistent, consistent or semiconsistent). For each ETC matrix, the best result is the shortest time. As shown in Table 12.2, DE with the current setting cannot compete with domain-specific heuristics when optimizing makespan. It ranks fifth, and its results are usually better than those of the work queue and max-min heuristics, but worse than those of the sufferage, min-min and min-max heuristics. Table 12.3 shows the flowtime of the optimized schedules. In this case, DE reached the best value for two of the experimental matrices (l-h-c and h-h-c). Also in other cases, DE delivered quite competitive results. Obviously, the used setting of the scheduling DE was better suited to the optimization of flowtime.
12.5 Conclusions This paper presents an algorithm for scheduling independent tasks on heterogeneous distributed environments based on differential evolution. The algorithm was implemented, and the experimental results suggest that it can deliver competitive results. Without any tuning, the algorithm managed to optimize schedules for some ETC matrices so that the flowtime was the best. The presented algorithm has a number of parameters, including C, F and λ. The comparison of the results of selected domain-specific heuristics and DE has shown that DE, as a general metaheuristic, cannot compete with domain-specific heuristics in most cases. More likely, the scheduling heuristics might be used to generate the initial population for DE in order to achieve better results. Fine tuning of the DE parameters and the optimization of an initial population generated by scheduling heuristics are the subject of our future work.
References 1. Ali, S., Braun, T., Siegel, H., Maciejewski, A.: Heterogeneous computing (2002), citeseer.ist.psu.edu/ali02heterogeneous.html 2. Braun, T.D., Siegel, H.J., Beck, N., Boloni, L.L., Maheswaran, M., Reuther, A.I., Robertson, J.P., Theys, M.D., Yao, B., Hensgen, D., Freund, R.F.: A comparison of eleven static heuristics for mapping a class of independent tasks onto heterogeneous distributed computing systems (2001) 3. Fernandez-Baca, D.: Allocating modules to processors in a distributed system. IEEE Trans. Softw. Eng. 15(11), 1427–1436 (1989), http://dx.doi.org/10.1109/32.41334 4. Carretero, J., Xhafa, F., Abraham, A.: Genetic Algorithm Based Schedulers for Grid Computing Systems. International Journal of Innovative Computing, Information and Control 3 (2007) 5. Page, A.J., Naughton, T.J.: Framework for task scheduling in heterogeneous distributed computing using genetic algorithms. Artificial Intelligence Review 24, 137–146 (2004)
6. Price, K.V., Storn, R.M., Lampinen, J.A.: Differential Evolution A Practical Approach to Global Optimization. Natural Computing Series. Springer, Berlin (2005), http://www.springer.com/west/home/computer/foundations? SGWID=4-156-22-32104365-0&teaserId=68063&CENTER ID=69103 7. Ritchie, G., Levine, J.: A hybrid ant algorithm for scheduling independent jobs in heterogeneous computing environments. In: Proceedings of the 23rd Workshop of the UK Planning and Scheduling Special Interest Group (2004) 8. Tracy, M.M., Braun, T.D., Siegel, H.J.: High-performance mixed-machine heterogeneous computing. In: 6th Euromicro Workshop on Parallel and Distributed Processing, pp. 3–9 (1998) 9. YarKhan, A., Dongarra, J.: Experiments with scheduling using simulated annealing in a grid environment. In: Parashar, M. (ed.) GRID 2002. LNCS, vol. 2536, pp. 232–242. Springer, Heidelberg (2002)
Chapter 13
Visual Similarity of Web Pages Miloš Kudělka, Yasufumi Takama, Václav Snášel, Karel Klos, and Jaroslav Pokorný
Abstract. In this paper we introduce an experiment with two methods for evaluating the similarity of Web pages. The results of these methods can be used in different ways for reordering and clustering a set of Web pages. Both of these methods belong to the field of Web content mining. The first method is purely focused on the visual similarity of Web pages: it segments Web pages and compares their layouts based on image processing and graph matching. The second method is based on detecting objects that reflect the user's point of view on the Web page; the similarity of Web pages is measured as an object match on the analyzed Web pages. Keywords: Web Mining, Multimedia, Semantics of the Images, Automatic Understanding.
Miloš Kudělka Department of Computer Science, VŠB - Technical University of Ostrava, 17. listopadu 15, 708 33, Ostrava-Poruba, Czech Republic e-mail: [email protected]
Yasufumi Takama Tokyo Metropolitan University, Japan e-mail: [email protected]
Václav Snášel · Karel Klos Department of Computer Science, VŠB - Technical University of Ostrava, 17. listopadu 15, 708 33, Ostrava-Poruba, Czech Republic e-mail: [email protected],[email protected]
Jaroslav Pokorný Charles University, Czech Republic e-mail: [email protected]
13.1 Introduction Research on how to assist users with orientation in a huge number of Web pages is very extensive. Its origin dates back to the expansion of the Internet into a broader
awareness; initially it applied almost exclusively to the academic sphere, while today we cannot imagine any area that this phenomenon does not apply to. Since a Web page is perceived in the usual sense, as HTML code which represents the graphic and text form of a page, the analysis of Web pages can be focused on different characteristics. The analysis is usually focused on information that is used for user searching and navigation in a large set of Web pages or for data mining [8, 9]. This information is used for further processing, particularly in database processing. There are also approaches that focus on the graphic appearance or the technical structure of a Web page; graphics is also used to detect page similarity. The methods used for the purpose of this paper belong to the field of Web content mining. Generally, Web data mining is the usage of data mining technology on the Web. Specifically, this means finding and extracting information from sources that relate to the user's interaction with Web pages. In 2002, several challenges arose for developing an intelligent Web [8]:
1. mining Web search-engine data,
2. analyzing Web link structures,
3. classifying Web documents automatically,
4. mining Web page semantic structures and page contents,
5. mining Web dynamics,
6. building a multilayered, multidimensional Web.
Most of these issues are currently relevant. In this paper three areas of Web data mining are addressed [11]: Web content mining, Web structure mining, and Web usage mining. Web content mining describes the discovery of useful information from Web contents, data, and documents. The Web content data consists of unstructured data such as free texts, semi-structured data such as HTML documents, and a more structured data such as data in the tables or databases generated from HTML pages. The goal of Web content mining is to assist or to improve the methods for finding or filtering information to users [15]. The rest of the paper is organized as follows. Related works are mentioned in Sect. 13.2. Section 13.3 describes various approaches to similarity of Web pages. In Sect. 13.4 we describe experiments with two methods evaluating the similarities of Web pages and summarize some results. Finally, we conclude in Sect. 13.5 with an outline of future work.
13.2 Recent Methods There are a lot of methods dealing with Web content mining which are used for the analysis of Web page structure and Web page segmentation [14]. The methods can be classified in several ways. One classification is based on the role which the DOM tree plays and distinguishes DOM-based and visual-layout-based methods [2, 4]. The first category is especially focused on the detection of relevant subtrees in the DOM tree. The second category uses the DOM only to a limited extent (or not at all) and captures the structure of a page from the point of view of its external appearance. Another classification can be based on human participation in the analysis [14].
There are supervised (wrapper induction, e.g. [15, 5]) and unsupervised (automatic extraction) methods. Yet another approach is based on our knowledge of the Web page structure [26, 27]; it includes template-dependent and template-independent methods. Currently, the trend is evolving towards automatic, template-independent and visual-layout-based approaches. The following methods are considered to be typical approaches. A method for mining data records in a Web page automatically is presented in [16]. The algorithm is called MDR (Mining Data Records in Web pages). The MDR method is based on two observations. The first one is that a group of data records containing descriptions of a set of similar objects is typically presented in a particular region of a page and formatted using similar HTML tags (subtree patterns). The second one is that a group of similar data records is placed in a specific region; this region is reflected in the tag tree by the fact that the records are found under one parent node (forest-tree pattern). Several approaches follow the MDR method (e.g. [18, 20]). An improvement of the MDR method called DEPTA (Data Extraction based on Partial Tree Alignment) is presented in [24, 25]. VIPS (VIsion-based Page Segmentation) [2, 3] is an algorithm to extract the content structure of a Web page. In the VIPS approach, a Web page is understood as a finite set of objects or sub-Web-pages, none of which overlap. Each object can be recursively viewed as a sub-Web-page and has a subsidiary content structure. A page contains a finite set of visual separators, including horizontal and vertical separators; adjacent visual blocks are separated by these separators. For each visual block, the degree of coherence is defined to measure how coherent it is. Every node, especially a leaf node, is likely to convey a semantic meaning for building a higher semantics via the hierarchy. In the VIPS algorithm, the vision-based content structure of a page is deduced by combining the DOM structure and the visual cues. The segmentation process works on the DOM structure and visual information, such as position, background colour, font size, font weight, etc. Some approaches that follow VIPS are [23, 7, 28]. The function-based Object Model (FOM) is presented in [6]. This approach is based on understanding the authors' intention, not on understanding the semantics of Web pages. Every object on a Website serves certain functions (Basic and Specific Functions) which reflect the authors' intention towards the purpose of the object. The FOM model for Website understanding is based on this consideration. FOM includes two complementary parts: the basic FOM (BFOM) based on the basic functional properties of an object, and the specific FOM (SFOM) based on the category of an object. It is supposed that by combining BFOM and SFOM, a thorough understanding of the authors' intention for a Web site can be obtained.
13.3 Similarity of Web Pages The similarity of Web pages based on their content (there are also methods based on link analysis) can be evaluated with different methods and the results of these
Fig. 13.1 Classification of methods dealing with the similarity of Web pages. The first group includes methods which are aimed at semantic content of Web pages and as a typical example methods from the field of Information Retrieval (IR) can be mentioned. In IR, the similarity is based on terms and the advantages of a vector space model are often used [10]. The second group contains methods working with patterns, semantic blocks or genre of Web pages [13]. The third is a group including methods evaluating visual similarity of Web pages [19].
methods can be used in various ways. There are different views on how to measure the similarity of Web pages; it depends on the target of the method that is used to analyze the Web pages. The methods can be roughly divided into three groups (see Fig. 13.1). From the user point of view, this classification is based on the level of understanding of the natural language in which a Web page is written.
• The methods from the first group mentioned above must understand the language of the analyzed Web page in order to be used successfully; otherwise these methods are limited by one's knowledge of the language of the Web page. The semantic content of a Chinese or Arabic Web page is not understandable for an English speaker.
• The second group contains methods that require only a limited knowledge of the given language. A native English speaker can recognize a Chinese or Arabic product Web page.
• The third group contains methods that are independent of knowledge of the given language. These methods are not aimed at the semantic content of Web pages but compare the visual similarities of Web pages.
A common feature of many similarity functions is that they are symmetrical. In [21], it is illustrated by examples and verified by experiments that similarity is not always symmetrical in real life. If we focus on the features of two objects, then for the calculation of similarity it is necessary to work with those features which are common to both objects, and then with those features which only one or the other object has. As a result, it is possible to understand similarity as non-symmetric and represent it using the function (13.1). Sim(x, y) = F(X ∩ Y, X − Y, Y − X) The similarity of x and y is expressed as a function F of three arguments:
1. X ∩ Y: the features that are common to both objects x and y.
2. X − Y: the features that belong to x but not to y.
3. Y − X: the features that belong to y but not to x.
(13.1)
We can apply this concept to a Web page using Example 13.1. Example 13.1. Let us take Web page W1, which is a typical simple product page with Price information, Purchase possibility and Technical features. Then let us take a Web page W2, which in addition to the option to buy the product also offers a Product catalogue, Customer reviews, a Discussion, and a Special offer. Web page W2 contains everything that an abstract description of Web page W1 contains. If, therefore, a user focuses on the individual features of page W1, then the user can find these features on page W2. Of course, since page W2 has some features in addition, we cannot say that it is the same page. Then,
1. if the user looks at the similarity from the point of view of page W1, the following question can be formulated: "How similar is page W2 to page W1?" If the user is satisfied with the information he obtains on page W1, then surely the user responds "very similar" (this is a vague statement, which is difficult to measure).
2. if the user looks at the similarity from the point of view of page W2, the following question can be formulated: "How similar is page W1 to page W2?" Here the response is more complicated. Obviously, it depends on which features the user is focused on. If he/she is focused on the Customer reviews or the Discussion, then he/she probably responds "a little". Even if the user focuses on the page as a whole, the answer will probably not be "very similar" in the sense of the response to the previous question.
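The non-symmetric view of Eq. (13.1) can be illustrated with a small sketch. The weighting below is a Tversky-style instantiation of F chosen by us purely for illustration; it is not the concrete function used by either of the examined methods.

def similarity_to(reference, candidate, missing_w=1.0, extra_w=0.2):
    """Non-symmetric similarity in the spirit of Eq. (13.1): features of the
    reference page that the candidate lacks are penalized heavily, extra
    features of the candidate only lightly, so the measure is asymmetric."""
    R, C = set(reference), set(candidate)
    common = len(R & C)
    missing = len(R - C)      # reference features not found on the candidate
    extra = len(C - R)        # candidate features the reference does not have
    denom = common + missing_w * missing + extra_w * extra
    return common / denom if denom else 0.0

# Example 13.1: W1 has three features, W2 has the same three plus four more.
w1 = {"price", "purchase", "technical features"}
w2 = w1 | {"catalogue", "reviews", "discussion", "special offer"}
print(similarity_to(w1, w2))   # "how similar is W2 to W1?" -> about 0.79
print(similarity_to(w2, w1))   # "how similar is W1 to W2?" -> about 0.43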
13.3.1 Visual Analysis Visual analysis does not work with the semantic content of Web pages but is based on visual features that can be found on Web pages. As mentioned above, methods from this field can be successfully used if no other methods capable of understanding certain Web pages can be found. The results of the visual analysis can also be used to improve other methods based on source files. In this field, there are visual segmentation methods [3, 1, 19]. It can be said that two Web pages are similar from the point of view of visual similarity if they contain the same or similar visual features or visual objects. The term visual feature can be defined as follows. Visual features are objects on a Web page which are
1. independent in their content,
2. identifiable in terms of labelling a certain physical part of the Web page,
3. and able to be classified according to their purpose (text, image, . . . ).
What is more, these objects should carry some information to a user and, as a whole, perform certain functions. An object generally performs one or more of these basic functions (for more information see [6]):
• informative: An object may provide some basic semantic content to users.
• navigational: An object may have a hyperlink to guide users to another object
• decorative: An object may serve to beautify a page.
• interactive: An object may have an interface for users to communicate with the system.
To exemplify the method, we used an ordinary Web page in Example 13.2. Example 13.2. A user sees a Web page (see Fig. 13.2a) with unknown content because he/she does not speak the language of this Web page. Although the user is not able to understand the content of this Web page, he/she can identify visual features (objects) there and describe their functions. In Fig. 13.2a, the visual features (objects) are highlighted and four examples are marked. The first example is an object which provides an interactive function according to the classification mentioned above; it is an example of an input form for sending information. The second example is a picture that provides a decorative function. A navigational function is provided by the third example, which is a set of links. The last marked example is a text paragraph providing an informative function.
Fig. 13.2 Web page with unknown content; the user can identify visual features only. Visual features (objects) on the Web page (a) and Named object types (b).
13.4 Experiments This experiment was aimed at comparing the results of two methods that evaluate the similarities of Web pages in different ways. For this purpose, a set of one hundred Web pages was manually selected so that they are typical of the domains they belong to. After that, every Web page was analyzed by each of the methods.
13.4.1 Visual Similarity Comparison for Web Pages This method analyses a Web page layout. The aim of the analysis is to break the Web page image into segments. Each segment is labelled as either text, image or a
mixture. In this method, a page image is divided into several regions based on edge detection. The process of layout analysis is summarized in the following steps [17]:
1. Translate page images from the RGB to the YCbCr colour space.
2. Obtain an edge image from the Y image.
3. Obtain the initial region set S1 by connecting neighbouring edges.
4. Obtain the second region set S2 by merging small regions in S1, assuming those are character regions.
5. Label each region in S2 as either text or image.
6. Merge neighbouring regions in S2. If the merged regions have different content types, the resultant region is labelled "mixture".
In step 5, the type of content in a region is estimated as either text or image, based on the distribution of edges in it. If edges appear uniformly spaced in a horizontal direction, the corresponding region is labelled as "text"; otherwise, the region is labelled as "image". The last step assumes that the remaining small regions are items such as icons and buttons, which give a visual impression as a group, sometimes along with a neighbouring text region. The visual similarity between Web pages is calculated based on the results of the page layout analysis. The set of regions obtained from a Web page is represented as a complete graph, to which a graph matching algorithm is applied. There are attribute functions of nodes and edges, which are defined depending on the application. In general, the attribute functions are defined as follows:
μ(v) = {xv, yv, Sxv, Syv, Tpv},  Tp = {text, image, mixture},
ν(e) = √( (xf − xt)^2 + (yf − yt)^2 )    (13.2)
where (xv, yv) denotes the coordinates of the region μ(v), Sxv and Syv are its width and height (in pixels), and Tpv is its content type, respectively. (xf, yf) and (xt, yt) are the coordinates of the endpoints of edge e. The regions are classified into three types, which are derived from the following observations.
• Most of the text regions on a top page are used for the menu, and further detailed segmentation is not required from the viewpoint of visual similarity.
• An image of large size, such as a logo or a product photo, tends to be located at the head of a top page, and there is less relationship between the appearance of the images and the category of business.
• Small images such as buttons and icons are used along with text.
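To make the graph representation concrete, the sketch below builds the attributed complete graph of Eq. (13.2) from a list of detected regions and then uses a simple greedy matching as a stand-in for a full graph-matching algorithm; the region format, the tolerance and the matching rule are our own illustrative assumptions, not those of the original method.

from math import hypot
from itertools import combinations

def region_graph(regions):
    """Each region is (x, y, width, height, type) with type in
    {'text', 'image', 'mixture'}; edges carry Euclidean distances (Eq. 13.2)."""
    nodes = list(regions)
    edges = {(i, j): hypot(nodes[i][0] - nodes[j][0], nodes[i][1] - nodes[j][1])
             for i, j in combinations(range(len(nodes)), 2)}
    return nodes, edges

def layout_similarity(regions_a, regions_b, pos_tolerance=50):
    """Greedy stand-in for graph matching: a region of page A matches a region
    of page B if their content types agree and their positions are close."""
    nodes_a, _ = region_graph(regions_a)
    nodes_b, _ = region_graph(regions_b)
    unmatched_b = list(nodes_b)
    matched = 0
    for (xa, ya, wa, ha, ta) in nodes_a:
        for rb in unmatched_b:
            xb, yb, wb, hb, tb = rb
            if ta == tb and hypot(xa - xb, ya - yb) <= pos_tolerance:
                matched += 1
                unmatched_b.remove(rb)
                break
    return matched / max(len(nodes_a), len(nodes_b), 1)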
13.4.2 Pattrio Method The approach of Pattrio [12] was inspired by the use of Web design patterns [22] for the analysis of Web page content. Detailed technical information is needed to recognize pattern instances. That is why a catalogue describing those repeated Web page
parts which Pattrio manages to detect on Web pages was created. For the purpose of this method, these Web page parts are called Named Web Objects. This term is defined as follows. Definition 13.1. A Named Web Object (Named object) is a part of a Web page,
• whose intention is general and which is repeated frequently,
• which can be named intelligibly and more or less unambiguously, so that the name is understandable for the Web page user.
A Named object can, but does not have to, strictly relate to the structure of a Web page in a technical sense. For example, it is not necessarily represented by one subtree in the DOM tree of the page or by one block in the sense of the visual layout of the page. Rather, it can be represented by one or more segments of a page, which form it together (see Fig. 13.2b):
1. Information is displayed in one place. An example can be Price information or a Product catalogue item.
2. Information is displayed in repeating segments. An example can be a Discussion or a Product catalogue.
3. Information is displayed in several different segments. An example can be Technical features or a Review.
Pattrio calculates, for each Named object found on a Web page, a score from the interval ⟨0, 1⟩. Some inaccuracy of the method has to be reckoned with, but a nonzero score indicates a considerable likelihood of finding the object on the page. Assume that a certain value t is given as a threshold which implies the probable existence of the feature (of the found Named object) on the Web page. Then the vector v1 representing this page can be divided into two vectors v1A and v1B. The components of vector v1A are those components of vector v1 whose value is greater than or equal to t. The components of vector v1B are those components of vector v1 whose value is less than t. Then it can be said that, in this case, vector v1A represents the features which are crucial for vector v1 (the Web page has these features), and vector v1B represents the features which are not essential (the page does not have them). If we want to measure the similarity of another vector v2 against vector v1, it can similarly be divided into two vectors v2A and v2B. Vectors v2A and v2B contain those components of vector v2 which correspond to the components of vectors v1A and v1B of vector v1, respectively. For the calculation of similarity, the similarity of vectors viA then has a different weight than the similarity of vectors viB. The reasons are the following:
1. The value depends on the dimension of the vectors viA and viB. For example, if vector v1 has dimension 10, vector viA has dimension 1, and vector viB has dimension 9, then the importance of one feature against nine features is smaller.
2. The weight of the features that are greater than the threshold should be higher for the comparison.
In the experiments, we worked with the formula (13.3):

Sim(v1, v2) = ( wt · Dim(v1A) · SimC(v1A, v2A) + Dim(v1B) · SimC(v1B, v2B) ) / ( wt · Dim(v1A) + Dim(v1B) )    (13.3)

where SimC is the cosine measure and wt is the weight of the similarity of the vectors representing the features higher than the threshold in the vector towards which the similarity is measured. Obviously, Sim(v1, v2) ∈ ⟨0, 1⟩.
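A direct transcription of Eq. (13.3) is sketched below, assuming each page is represented by a vector of Named-object scores from ⟨0, 1⟩; the threshold t and the weight wt used here are illustrative values, not those used in the experiments.

from math import sqrt

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def pattrio_similarity(v1, v2, t=0.5, wt=2.0):
    """Non-symmetric similarity of v2 towards v1 (Eq. 13.3): the components of
    v1 above threshold t form v1A (weighted by wt), the rest form v1B."""
    A = [i for i, x in enumerate(v1) if x >= t]
    B = [i for i, x in enumerate(v1) if x < t]
    v1A, v2A = [v1[i] for i in A], [v2[i] for i in A]
    v1B, v2B = [v1[i] for i in B], [v2[i] for i in B]
    num = wt * len(A) * cosine(v1A, v2A) + len(B) * cosine(v1B, v2B)
    den = wt * len(A) + len(B)
    return num / den if den else 0.0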
13.4.3 Review of Results The two methods are based on different points of view on how to measure the similarities of Web pages. Pattrio works with Named Web Objects, which are, among other things, described by keywords, so this method recognizes that pages have generally the same content. The visual similarity comparison method segments the pages and compares the found regions; according to their position and content, it decides whether the pages are similar. We cannot expect the results to be the same. As expected, the methods did not give the same results for many pairs of Web pages, so some disagreements between the examined methods were found during this experiment. Two of them are described in Figs. 13.3 and 13.4.
Fig. 13.3 Review Web pages. The Web pages are similar from the Pattrio point of view but not similar according to the method of visual similarity comparison. The pages contain a review of a book and of a television, respectively. Pattrio found that there is a lot of text with small pictures of the reviewed items, ratings of these items and menu bars. On the other hand, the pages were evaluated as dissimilar by the method of visual comparison.
Fig. 13.4 Review and e-shop Web pages. From the Pattrio point of view, these pages are not similar because, as mentioned above, one of them is a review page and the other is a typical shopping page. Pattrio recognized long textual information about a movie without any opportunity to buy it or to obtain price information, whereas on the e-shop page Pattrio found price information about the goods as well as the opportunity to buy them and other signs of an e-shop Web page. The method of visual similarity comparison analyzed the pages with the result that they are similar. This method found several regions which have the same content type at similar positions, and because it does not look at the semantic meaning of these regions, the Web pages were identified as similar from its point of view.
13.5 Conclusion Currently, there are problems with evaluating Web page similarity only according to semantic content, especially with regard to the user point of view. Two product Web pages can be mentioned as an example of the limitations of recent methods: one of these product Web pages can offer roller skates and the second one a DVD. If methods aiming at semantic content are used to analyze these Web pages, they will evaluate them as dissimilar. Both of these Web pages present a product, so the question is whether these Web pages are dissimilar according to user expectation or only according to semantic content. Another problem is how to deal with the large number of different natural languages of Web pages. The examined methods could be useful in solving these challenges. First, they try to look at Web pages from the user point of view. Second, the method of Visual Similarity Comparison is totally independent of the natural language of Web pages, and Pattrio works with a relatively small dictionary of keywords, so its language independence depends on the number of available dictionaries. Further work in the field of Web page similarity is important because current methods do not have satisfactory results. The results of our experiments with the mentioned methods, which were carried out for the purpose of this paper, show
that they could have a big potential for use in combination with methods based on IR. This means that they could be used to improve the results of currently used methods based on semantic content. Another interesting field to investigate is the non-symmetry of the similarity functions of Web pages used by the examined methods. As mentioned above, similarity is not always symmetrical in practical life, and we believe it would be useful to consider how to deal with the problems of non-symmetry of Web page similarity. Acknowledgements. This work is supported by the Grant Agency of the Czech Republic, Grant No. 201/09/0683.
References 1. Burget, R.: Layout Based Information Extraction from HTML Documents. In: Conference on Document Analysis and Recognition (ICDAR 2007), Curitiba, Paran´a, Brazil, pp. 624–628 (2007) 2. Cai, D., Yu, S., Wen, J.-R., Ma, W.-Y.: Extracting Content Structure for Web Pages based on Visual Representation. In: Asia Pacific Web Conference, pp. 406–417 (2003) 3. Cai, D., Yu, S., Wen, J.-R., Ma, W.-Y.: VIPS: a vision based page segmentation algorithm. Microsoft Technical Report, MSR-TR-2003-79 (2003) 4. Cosulschi, M., Constantinescu, N., Gabroveanu, M.: Classification and comparison of information structures from a Web page. The Annals of the University of Craiova 31, 109–121 (2004) 5. Chang, C., Lui, S.: IEPAD: information extraction based on pattern discovery. In: Proc. of the 10th Int. World Wide Web, WWW 2001, pp. 681–688. ACM, New York (2001) 6. Chen, J., Zhou, B., Shi, J., Zhang, H., Fengwu, Q.: Function-based object model towards Website adaptation World Wide Web, Hong Kong, May 01-05, pp. 587–596 (2001) 7. Chibane, I., Doan, B.L.: A Web page topic segmentation algorithm based on visual criteria and content layout. In: Proc. of SIGIR, pp. 817–818 (2007) 8. Han, J., Chang, K.: Data Mining for Web Intelligence. Computer 35(11), 64–70 (2002) 9. Han, J., Kamber, M.: Data mining: concepts and techniques. Morgan Kaufmann Publishers Inc., San Francisco 10. Kobayashi, M., Takeda, K.: Information retrieval on the Web. ACM Computing Surveys (CSUR) 32(2), 144–173 (2000) 11. Kosala, K., Blockeel, H.: Web Mining Research: A Survey. SIGKDD Explorations 2(1), 1–15 (2000) 12. Kudelka, M., Snasel, V., Lehecka, O., El-Qawasmeh, E.: Semantic Analysis of Web Pages Using Web Patterns. In: IEEE/WIC/ACM International Conference on Web Intelligence, Hong Kong, pp. 329–333 (2006) 13. Kudelka, M., Snasel, V., Lehecka, O., El-Qawasmeh, E., Pokorny, J.: Web Pages Reordering and Clustering Based on Web Patterns. In: Geffert, V., Karhum¨aki, J., Bertoni, A., Preneel, B., N´avrat, P., Bielikov´a, M. (eds.) SOFSEM 2008. LNCS, vol. 4910, pp. 731–742. Springer, Heidelberg (2008) 14. Liu, B.: Web content mining (tutorial). In: Proc. of the 14th Int. World Wide Web (2005) 15. Liu, B., Chang, K.C.-C.: Editorial: Special Issue on Web Content Mining. IGKDD Explorer Newsletter 6(2), 1–4 (2004)
16. Liu, B., Grossman, R., Zhai, Y.: Mining data records in Web pages. In: KDD 2003, pp. 601-606 (2003) 17. Mitsuhashi, N., Yamaguchi, T., Takama., Y.: Layout analysis for calculation of Web page similarity as image. In: Int. Symp. on Advanced Intelligent Systems, pp. 142–145 (2003) 18. Simon, K., Lausen, G.: ViPER: augmenting automatic information extraction with visual perceptions. In: CIKM 2005, pp. 381–388. ACM, New York (2005) 19. Takama, Y., Mitsuhashi, N.: Visual Similarity Comparison for Web Page Retrieval. In: Proc. of IEEE/WIC/ACM Web Intelligence (WI 2005), pp. 301–304 (2005) 20. Tseng, Y.-F., Kao, H.-K.: The Mining and Extraction of Primary Informative Blocks and Data Objects from Systematic Web Pages. In: Web Intelligence (WI 2006), pp. 370–373 (2006) 21. Tversky, A.: Features of Similarity. Psychological Review 84, 327–352 (1977) 22. Van Welie, M.: Pattern in Interaction Design, http://www.welie.com, (access 2008-08-31) 23. Xiang, P., Yang, X., Shi, Y.: Effective Page Segmentation Combining Pattern Analysis and Visual Separators for Browsing on Small Screens. In: WI 2006, pp. 831–840 (2006) 24. Zhai, Y., Liu, B.: Web data extraction based on partial tree alignment. In: World Wide Web, WWW 2005, Chiba, Japan, May 10 - 14, pp. 76–85. ACM, New York (2005) 25. Zhai, Y., Liu, B.: Structured Data Extraction from the Web Based on Partial Tree Alignment. IEEE Transact. on Knowl. and Data Eng. 18(12), 1614–1628 (2006) 26. Zheng, S., Song, R., Wen, J.-R.: Template-independent news extraction in based on visual consistency. In: Proc. of AAAI 2007, pp. 1507–1511 (2007) 27. Zhu, J., Nie, Z., Wen, J., Zhang, B., Ma, W.: Simultaneous record detection and attribute labeling in Web data extraction. In: Proc. of the 12th ACM SIGKDD international Conference on Knowledge Discovery and Data Mining, KDD 2006, Philadelphia, PA, USA, August 20-23, pp. 494–503. ACM, New York (2006) 28. Zhu, J., Zhang, B., Nie, Z., Wen, J.R., Hon, H.W.: Webpage understanding: an integrated approach. In: Knowledge Discovery in Data, San Jose, California, USA, pp. 903–912 (2007)
Chapter 14
Burst Moment Estimation for Information Propagation Tomas Kuzar and Pavol Navrat
Abstract. In this article we concentrate on the timing aspect of information propagation on the social web. The aim of the information producer is to transfer information to a broad audience via the social web. The producer needs to identify interesting content and publish it on the social web at the right time; publishing at the right moment can increase the potential for the information to spread. We describe the process of identifying interesting content and present a model for estimating the right moment of publishing. Our estimation is based on mining the producer's web usage, tracking web topics and identifying events. Keywords: Social Web, Web Content Mining, Web Usage Mining.
14.1 Introduction
The rapid development of web technologies, especially Web 2.0 and social web applications, has brought new means of user-web interaction: wikis, blogs and social network sites. New tools for user collaboration and information sharing have flooded the web with a huge amount of unstructured or semi-structured information, so we need to cope with the question of organizing web content effectively and making it easily accessible. The amount of published content has increased enormously and the web user has become flooded by information; it is therefore necessary to develop methods for effective publishing and organizing of web content. Effective marketing should reflect the relationship between the company and the customer. We suppose that adding social media to the marketing process will bring added value to the company and to the brand. Web marketing costs money and energy, which is why it needs to be done effectively.
Tomas Kuzar · Pavol Navrat
Faculty of Informatics and Information Technologies, Slovak University of Technology
e-mail: {kuzar,navrat}@fiit.stuba.sk
The company plays the role of producer of the information. The portals and sites of producers usually have no significant user traffic. The idea is to propagate interesting web content that currently attracts little traffic. Such a situation occurs frequently in the social web space – interesting and important content stays unexplored. Different publications discuss the problem of information propagation, viral propagation and propagation based on influencers. We consider the aspect of timing very important in web information propagation. Our timing model is based on the web usage of the producer's portal and on monitoring the situation in the social web space. Mediators can be used to support information diffusion, and the conditions for information diffusion change over time. We therefore designed a model for optimal Burst Moment Estimation (BME). Technology has given users many possibilities to generate content. Tools like crawlers and RSS can easily be used for automatic gathering of web information. We use RSS to collect the text streams. In principle, we are able to process various streams: newspaper articles, forum contributions, blogs or statuses of social network users. We started our explorations on an online newspaper stream.
14.2 Related Work
Our research was motivated mainly by the problem of social web marketing and its viral effects. The authors of [4, 5, 6, 7] analyzed the viral aspect of information propagation from multiple points of view: pathway identification, influencers, latency, prediction, etc. Studies of viral effects are often closely related to topic identification and topic mining. The problem of topic detection and tracking (TDT) is not a new phenomenon, but with social media technologies many models need to be re-evaluated and rescaled. Related work on TDT focuses mainly on historical tracking; we want to concentrate on the problem of prediction. Therefore TDT needs to be extended so that it can process data from various sources and cope with rapid changes in published content. Another important part of our research builds on the problem of topic mining presented in [8]. The Internet and the web space are flooded by documents and information; that article discusses the problem of automatically uncovering how ideas spread through a collection over time. Most publications present explorations where documents were connected via hyperlinks. The authors of [1] proved that content-based connections can be used instead of hyperlink connections. The web space can be understood as a huge source of text streams. The authors of [2] analyzed the problem of text stream synchronization. We also need to discuss the problem of topic extraction and topic mining. The model in [3] consists of two parts: topic identification and topic intensity identification. The first part addresses the problem of topic identification, which has already been discussed in many previous publications; the second part addresses the importance of the topic – topic intensity.
We provide a web usage analysis of the producer's portal. Problems related to web usage mining, like bot identification or session identification, have been studied in various research projects [9, 10]. We studied not just link clicks but also the content represented by these links. The problem of content mining has been studied by many authors [11, 12]. Many publications have studied information paths or the viral effects of propagation.
14.3 Model
We designed a representation of the web information flow as depicted in Fig. 14.1. We focused on the timing aspect of the producer's information propagation and designed a model for representing producers and mediators.
Fig. 14.1 Information flow model
Producers are business and private web portals with low web traffic. They publish information in order to propagate a product, a service or a brand. Evaluation of data from both sources is performed by an analysis engine on the producer's site: the analysis engine analyzes the producer's traffic and gathers information from mediators. Mediators are portals based on Web 2.0 technologies with a huge amount of user-generated content. Mediators have high traffic and a high number of page visits and are therefore useful for information propagation based on viral effects [6]. Consumers are end users and receivers of the information. They access various blogging, bookmarking or video sharing portals. End users can also access information provided by mediators through Social Network Sites (SNS) such as Facebook or LinkedIn. We need to decide when to post the information on the mediator's portal. We assume that publishing at the right moment will increase the viral effect of information propagation.
The producer publishes the information as an article on his portal. If the information in the article is interesting or important for users, the number of requests for the article will rapidly increase; we need to model the article and the request count. Information on the site of mediators is published by various users and reflects current topics or events. We use four dimensions to model producers and mediators:
1. Concept dimension (c) – vector for the semantic representation of articles
2. Time dimension (t) – timestamp
3. Stream dimension (s) – type and name of the stream the text comes from
4. Event dimension (e) – mainly retrospective detection of repeatable events
We have chosen two basic metrics: usage (u) and occurrence (o). On the producer's site we represent the usage of semantic concepts as couples (c, u); on the site of mediators we represent the occurrence of the concepts as couples (c, o).
14.3.1 Producer Characteristics
Producer characteristics are based on a quantitative analysis of the visited concepts and terms. We represent the producer's document as a vector of couples:
p = ((c1, u1), (c2, u2), ..., (cn, un))    (14.1)
We model the occurrence of the most important concepts in time together with the related event, and model the producer P as a set of triplets:
P = {(t, e, p)}    (14.2)
By mining the producer's set we can find candidates for publishing; we present our explorations in Sect. 14.4. A rapid change in the producer's usage characteristic vector p indicates a candidate topic for publishing at that time. We assume that a topic interesting for a small group of users and related to an expectable event has the potential to address a broader audience on the social web.
14.3.2 Mediator Characteristics
Mediator characteristics are based on a quantitative analysis of the published concepts and terms. We represent the mediator's document as a vector of couples:
m = ((c1, o1), (c2, o2), ..., (cn, on))    (14.3)
We model the mediator as a set of tetrads:
M = {(t, e, s, m)}    (14.4)
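To make the data model concrete, the sketch below encodes the producer set P of triplets (t, e, p) and the mediator set M of tetrads (t, e, s, m) as plain Python structures. The field names, types and toy values are illustrative assumptions made for this sketch, not part of the original system.

```python
from dataclasses import dataclass
from typing import Dict, List

Couples = Dict[str, float]  # concept -> usage u (producer) or occurrence o (mediator)

@dataclass
class ProducerRecord:        # one triplet (t, e, p) from P
    t: str                   # timestamp
    e: str                   # event label, e.g. "summer" or "Easter"
    p: Couples               # usage of semantic concepts, couples (c, u)

@dataclass
class MediatorRecord:        # one tetrad (t, e, s, m) from M
    t: str
    e: str
    s: str                   # stream, e.g. "online-newspaper"
    m: Couples               # occurrence of semantic concepts, couples (c, o)

# Illustrative records only; all field values are made up for the example.
P: List[ProducerRecord] = [ProducerRecord("2008-06-15", "summer", {"summer": 0.42, "exam": 0.10})]
M: List[MediatorRecord] = [MediatorRecord("2008-06-15", "summer", "news-rss", {"summer": 0.31, "politics": 0.25})]
```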
14.3.3 Burst Moment Estimation
The algorithm for Burst Moment Estimation evaluates the relation between producer and mediator. We need to model a function F that estimates the burst moment; F can return one of two values – wait (w) or burst (b):
F(P, M) = {w, b}    (14.5)
For simplification, our model represents the relationship between the producer's usage characteristic vector and the mediator's occurrence characteristic vector with three states:
H(p, m) = {related, unrelated, silence}    (14.6)
The states – silence, related topic and unrelated topic – express the relation between the semantic vectors. Silence denotes the state without an important general topic on the mediator. How H influences F will be the object of further exploration.
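A minimal sketch of this decision step follows. It assumes cosine similarity between the producer usage vector p and the mediator occurrence vector m, two illustrative thresholds for the three states of H, and one possible mapping of H onto the wait/burst output of F. Since the chapter explicitly leaves how H influences F to further work, the mapping here is only an assumption.

```python
import math
from typing import Dict

def cosine(p: Dict[str, float], m: Dict[str, float]) -> float:
    dot = sum(p[c] * m[c] for c in set(p) & set(m))
    norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in m.values()))
    return dot / norm if norm else 0.0

def H(p: Dict[str, float], m: Dict[str, float],
      silence_thr: float = 0.05, related_thr: float = 0.3) -> str:
    """Three-state relation (14.6) between the producer and mediator vectors."""
    if not m or max(m.values()) < silence_thr:   # no important general topic on the mediator
        return "silence"
    return "related" if cosine(p, m) >= related_thr else "unrelated"

def F(p: Dict[str, float], m: Dict[str, float]) -> str:
    """Burst Moment Estimation (14.5): returns 'burst' or 'wait'."""
    state = H(p, m)
    # Assumed mapping: burst when the candidate topic is related to what mediators
    # currently discuss, or when mediators are silent and the topic faces little competition.
    return "burst" if state in ("related", "silence") else "wait"

print(F({"summer": 0.42}, {"summer": 0.31, "politics": 0.25}))  # -> 'burst'
```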
14.4 Experimental Results
We focused our experiments on web usage mining of the producer's web site. Web usage mining is a well-known method for gathering web usage information; we understand it as a basic step for determining the most important parts of the producer's web site – the events and topics that increase traffic. Data preprocessing is based on session identification and on the semantic representation of the web pages, as described in many publications [9, 10]. We analyzed the web server logs of a student university portal between March 2008 and March 2009 and identified 100,000 unique users, 326,041 user sessions and 629 requested pages. Measurement is important for the evaluation of our model. Several quantitative indicators of web usage can be chosen – page visits, session lengths or clicks per session. In addition, we use a few qualitative characteristics – semantic analysis and text mining methods for clustering. Figure 14.2 shows the average daily distribution of the session count: traffic is low during the night and much higher during the day. Figure 14.3 shows the session length distribution during the day – night sessions are shorter than day sessions. In Sect. 14.3 we presented the models of the mediator and of the producer; we experimentally validated the semantic representation of the producer. Figure 14.4 presents the count of user sessions during one year. The summer season has the lowest traffic; we consider summer to be an event which causes lower web traffic. Figure 14.5 presents the number of sessions containing the concept 'summer' during one year; the peak of page requests containing this concept can be observed during the summer season. These correlations between event, semantic
Fig. 14.2 Session count distribution
Fig. 14.3 Session length distribution
Fig. 14.4 One year access count
concept request numbers and producer web site traffic need to be learned, generalized and used for Burst Moment Estimation. Other examples are depicted in Fig. 14.6, which shows the time access characteristics of the concepts 'fasting' and 'discussion'. Easter can be considered an annual event
Fig. 14.5 Year access of concept ’summer’
Fig. 14.6 Year usage of concepts ’fasting’ and ’discussion’
that causes an increase of web site traffic around Easter time. The increase in accesses of the concept 'discussion' is related to the start of a series of discussions with a very popular Slovak journalist. The experiments show that user activity is related to external, predictable events – summer, Christmas, Easter or an interesting discussion. We conclude that the producer needs to mine all interesting events and their consequences to determine the topic and the time slot for effective publishing on the social web of mediators.
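The kind of event-concept correlation described above can be computed directly from preprocessed session data. The sketch below counts, per calendar month, the sessions whose requested pages contain a given concept; the session tuple layout is an assumption made only for illustration.

```python
from collections import Counter
from datetime import datetime
from typing import Iterable, List, Tuple

# A session is assumed here to be (timestamp, concepts of the requested pages).
Session = Tuple[datetime, List[str]]

def monthly_concept_sessions(sessions: Iterable[Session], concept: str) -> Counter:
    """Count sessions containing `concept`, grouped by (year, month)."""
    counts: Counter = Counter()
    for ts, concepts in sessions:
        if concept in concepts:
            counts[(ts.year, ts.month)] += 1
    return counts

sessions = [
    (datetime(2008, 7, 3), ["summer", "timetable"]),
    (datetime(2008, 7, 9), ["summer"]),
    (datetime(2008, 12, 5), ["exam", "christmas"]),
]
print(monthly_concept_sessions(sessions, "summer"))  # Counter({(2008, 7): 2})
```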
14.5 Conclusions and Future Work
Our aim is to achieve effective marketing on the social web. The issue of Burst Moment Estimation can be looked at more generally: any information or topic (opinions, comments, bookmarks, videos) interesting to a small group can potentially be interesting to a broader audience. We analyzed the problem of optimal timing of information propagation.
We focused on news stream data. With proper topic detection methods, our model is applicable to various data streams: online forums, statuses on social networking sites, bookmarking sites, etc. In the future we will analyze the situation at the mediator using topic mining methods and will adopt various methods for impact evaluation. Acknowledgements. This work was partially supported by the Scientific Grant Agency of Republic of Slovakia, grant No. VEGA 1/0508/09.
References 1. Shaparenko, B.: Information Genealogy: Uncovering the Flow of Ideas in NonHyperlinked Document Databases. In: International Conference on Knowledge Discovery and Data Mining, pp. 619–628. ACM, New York (2007) 2. Wang, X.: Mining Common Topics from Multiple Asynchronous Text Streams. In: Proceedings of the Second ACM International Conference on Web Search and Data Mining, pp. 182–201. ACM, NY (2009) 3. Krause, A.: Data Association for Topic Intensity Tracking. In: Proceedings of the 23rd international conference on Machine learning, pp. 497–504. ACM, NY (2006) 4. Ma, H.: Mining Social Networks Using Heat Diffusion Processes for Marketing Candidates Selection. In: Proceeding of the 17th ACM conference on Information and knowledge management, pp. 233–242. ACM, NY (2008) 5. Hartline, J.: Optimal Marketing Strategies over Social Networks. In: Proceeding of the 17th international conference on World Wide Web, pp. 189–198. ACM, NY (2008) 6. Leskovec, J.: The Dynamics of Viral Marketing. In: Proceedings of the 7th ACM conference on Electronic commerce, pp. 228–257. ACM, NY (2007) 7. Richardson, M.: Mining Knowledge-Sharing Sites for Viral Marketing. In: Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 61–70. ACM, New York (2002) 8. Chung, S.: Dynamic Topic Mining from News Stream Data, pp. 653–670. Springer, Heidelberg (2004) 9. Tan, P., Kumar, V.: Web usage mining based on probabilistic latent semantic analysis. In: Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 197–205. ACM, New York (2004) 10. Batista, P., Silva, M.J.: Mining Web Access Logs of an On-line Newspaper. In: Proceedings of the 5th WSEAS International Conference on Artificial Intelligence, Knowledge Engineering and Data Bases (2002) 11. Chen, Y.: Advertising Keyword Suggestion Based on Concept Hierarchy. In: Proceedings of the international conference on Web search and web data mining, pp. 251–260. ACM, New York (2008) 12. Xue, X.: Distributional Features for Text Categorization, pp. 428–442. IEEE, NJ (2009)
Chapter 15
Search in Documents Based on Topical Development Jan Martinoviˇc, V´aclav Sn´asˇel, Jiˇr´ı Dvorsk´y, and Pavla Dr´azˇ dilov´a
Abstract. An important service for systems providing access to information is the organization of returned search results. Vector model search results may be represented by a sphere in an n-dimensional space: a query represents the center of this sphere, whose size is determined by its radius or by the number of documents it contains. The goal of searching is to have all documents relevant to a query present within this sphere. It is known that not all relevant documents lie in this sphere, which is why various methods for improving search results, typically implemented by expanding the original query, have been developed. Our goal is to utilize knowledge of document similarity contained in textual databases to obtain a larger number of relevant documents while minimizing the number of irrelevant ones retrieved. In this article we define the concept of a k-path (topical development). For reordering vector query results, we propose the SORT-EACH algorithm, which uses the aforementioned methods for acquiring topical development. Keywords: Topical Development, Clustering, Information Retrieval.
15.1 Introduction
There are many systems used for searching collections of textual documents. These systems are based on the vector model, probability models and other models for document representation, queries, rules and procedures [2, 19]. All of these systems have a number of limitations; incomplete lists of relevant documents in the search results rank among the most basic of them.
Jan Martinovič · Václav Snášel · Jiří Dvorský · Pavla Dráždilová
Department of Computer Science, VSB - Technical University of Ostrava, 17. listopadu 15, 708 33, Ostrava-Poruba, Czech Republic
e-mail: {jan.martinovic,vaclav.snasel,jiri.dvorsky}@vsb.cz, [email protected]
An important service for systems providing access to information is the organization of returned search results. Conventional IR systems rank retrieved documents based on their similarity to a given query [4]. Other systems present graphic illustrations based on mutually similar documents [18, 21, 9], specific attribute relations [11, 20] or samples of terms distributed in the query [7]. Vector model search results may be represented by a sphere in an n-dimensional space. A query represents the center of this sphere, whose size is determined by its radius (range query) or by the number of documents it contains (NN-query). The goal of searching is to have all documents relevant to a query present within this sphere. It is known that not all relevant documents are present in this sphere, which is why various methods for improving search results, implemented on the basis of expanding the original query, have been developed [12, 3]. Our goal is to utilize knowledge of document similarity contained in textual databases to obtain a larger number of relevant documents while minimizing the number of irrelevant ones retrieved [14, 16, 15].
15.2 Issues with Metric Searching
The distance between two documents x and y is a function δ(x, y) : X × X → R (where X is the set of all documents) for which the following conditions apply:
δ(x, x) = 0    (15.1)
δ(x, y) ≥ 0    (15.2)
δ(x, y) = δ(y, x)    (15.3)
A distance further requires the validity of the triangle inequality, which holds when every triad x, y and z satisfies the following condition:
δ(x, z) ≤ δ(x, y) + δ(y, z)    (15.4)
The set X and the function δ form a metric space [1], which we denote (X, δ).
15.2.1 ε-ball and ε-k-ball
Definition 15.1. For given x ∈ X and ε ∈ R+ (where R+ = {x ∈ R | x ≥ 0}), the set B(x, ε) = {y ∈ X; δ(x, y) ≤ ε} is called the ball with radius ε, or ε-ball, centered at the point x.
Definition 15.2. For given x ∈ X, ε ∈ R+ and k ∈ N+, the set Bk(x, ε) = {y ∈ X; x1, ..., xk ∈ X, x = x1, y = xk, ∑_{l=1}^{k−1} δ(xl, xl+1) ≤ ε} is called the k-ball with radius ε, or ε-k-ball, centered at the point x.
The ε-k-ball is an equivalent of the ε-ball in a metric space. It is easy to show that:
B(x, ε) = Bk(x, ε)    (15.5)
Formally, this means that the endpoint of any k-step path of total length at most ε belongs to the ε-ball.
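The two definitions can be expressed directly in code. The sketch below tests membership in an ε-ball with a single distance call and membership in an ε-k-ball by searching for some chain of at most k documents from x to y whose total step length does not exceed ε; the brute-force chain search is meant only to illustrate the definition, not to be efficient.

```python
from itertools import permutations
from typing import Callable, Sequence, TypeVar

T = TypeVar("T")

def in_eps_ball(delta: Callable[[T, T], float], x: T, y: T, eps: float) -> bool:
    """Definition 15.1: y lies in B(x, eps) iff delta(x, y) <= eps."""
    return delta(x, y) <= eps

def in_eps_k_ball(delta: Callable[[T, T], float], docs: Sequence[T],
                  x: T, y: T, eps: float, k: int) -> bool:
    """Definition 15.2: some chain x = x1, ..., xk' = y (k' <= k) has total length <= eps."""
    if x == y:
        return True
    others = [d for d in docs if d not in (x, y)]
    for length in range(2, k + 1):                       # number of nodes on the chain
        for middle in permutations(others, length - 2):
            path = (x, *middle, y)
            if sum(delta(a, b) for a, b in zip(path, path[1:])) <= eps:
                return True
    return False
```

In a metric space the two tests accept exactly the same documents, which is the content of identity (15.5); with a non-metric dissimilarity, in_eps_k_ball may accept documents that in_eps_ball rejects.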
Fig. 15.1 Balls in metric space: (a) ε-ball, (b) ε-k-ball
Figure 15.1(a) represents the ε-ball well known in the vector model; the extension to the ε-k-ball is shown in Fig. 15.1(b). Figure 15.2 illustrates the back-transformation from an ε-k-ball to an ε-ball: we are able to construct a triangle between two different points, and the hypotenuse can replace the two legs of such a triangle, so the condition of the triangle inequality is satisfied.
Fig. 15.2 ε-k-ball to ε-ball transformation
15.2.2 ε-k-ball and Similarity
A similarity s(x, y) between documents x and y is a function s(x, y) : X × X → R which satisfies the following conditions:
s(x, x) = 1    (15.6)
s(x, y) ≥ 0    (15.7)
s(x, y) = s(y, x)    (15.8)
If a non-metric is used, the triangle inequality is violated and the identity (15.5) generally does not hold. We performed some experiments with a non-metric which satisfies the condition of the ε-k-ball; this is shown in an illustrative example below. In this way, we were able to find documents which could not be found in a metric space.
Fig. 15.3 The result in sphere dissimilarity distances (ε = 0.6)
There are two ε-k-balls in Fig. 15.3. The first one consists of the documents d0, d1 and d2; the second one contains the documents d0, d1 and d3. The ε-ball is centered at the document d0. Only the document d1 can be reached using a conventional vector model ε-ball with ε = 0.6.
15.3 Topical Development of a Given Document
In the preceding paragraphs, we defined the ε-k-ball and its behavior in a space that does not maintain the triangle inequality. Now, we define the concept of a k-path, for which the term topical development will be used.
Definition 15.3. For a given x ∈ X and k ∈ N+, the set Bk(x) = {y ∈ X; x1, ..., xk ∈ X, x = x1, y = xk} is called the k-path centered at the point x.
We can view topical development as a path leading away from the initial document, through similar documents, towards other documents pertaining to it. We can illustrate this path in a vector space, where documents form nodes and the edges between these nodes are weighted by their similarity. If this path satisfies the conditions of a k-path, we can say that it is a proper representation of topical development. Several methods based on IR theory can be devised for topical development; we will now present these methods.
15.3.1 Topical Development with NN-query
The methods TOPIC-NN and TOPIC-NN2 stem from the NN-query. Their disadvantage is that the NN-query must often be evaluated while searching for topical development; for the TOPIC-NN2 method, with a topical development of length k, we must evaluate the NN-query k − 1 times. This disadvantage can be eliminated by pre-calculating the query in the indexing phase; with large values of k, however, space requirements become a major factor. We define the TOPIC-NN method as an NN-query whose query is the document for which we want to find the topical development. The ordering of the topical development is given by the similarity of the documents to the assigned document. An example of this method is shown in Fig. 15.4(a), where the searched document is d0 and the topical development for this document is created in the following order: d0, d1, d2, d3.
Fig. 15.4 Examples of the TOPIC methods: (a) TOPIC-NN, (b) TOPIC-NN2
The principle of TOPIC-NN2 is as follows: we start with the document di for which we search for topical development. For this document, we find the most similar document dk and add it to the resulting topical development. Then we repeat this process, but instead of using the original document di we use the document dk. The expansion is completed either when the required number of documents in the development has been reached, or when the next closest document does not exist. An example of this method is illustrated in Fig. 15.4(b).
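A compact sketch of both variants is given below. It assumes that a document similarity matrix is available as a dictionary of dictionaries (an implementation detail chosen here, not prescribed by the chapter): TOPIC-NN ranks all documents by similarity to the starting document, while TOPIC-NN2 greedily hops to the nearest not-yet-visited document.

```python
from typing import Dict, List

Sim = Dict[str, Dict[str, float]]   # sim[a][b] = similarity of documents a and b

def topic_nn(sim: Sim, d0: str, k: int) -> List[str]:
    """TOPIC-NN: d0 followed by the k-1 documents most similar to d0."""
    ranked = sorted((d for d in sim[d0] if d != d0), key=lambda d: sim[d0][d], reverse=True)
    return [d0] + ranked[:k - 1]

def topic_nn2(sim: Sim, d0: str, k: int) -> List[str]:
    """TOPIC-NN2: repeatedly step to the most similar not-yet-visited document."""
    path, current = [d0], d0
    while len(path) < k:
        candidates = {d: s for d, s in sim[current].items() if d not in path}
        if not candidates:
            break                      # the next closest document does not exist
        current = max(candidates, key=candidates.get)
        path.append(current)
    return path
```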
15.3.2 Topical Development Using a Cluster Analysis
Now we face the question of how to search for topical development effectively. One possible approach is to use cluster partitioning [10, 6]. The method we now present carries out the main part of the calculation during the document indexing phase, which enables fast searching.
The reason we chose cluster partitioning for determining topical development is its ability to create groups of similar documents. From the available clustering methods we chose hierarchical agglomerative clustering, whose results can be presented using a dendrogram. The steps for the automation of topical development are as follows:
1. Index the text collection into the IR system.
2. Create the document similarity matrix C.
3. Perform hierarchical agglomerative clustering on the similarity matrix C.
4. Answer topical development queries using the algorithm for acquiring topical development.
For acquiring topical development from hierarchical clustering, we define the algorithm TOPIC-CA, which uses the number of documents in the development as a limit.
Definition 15.4. The TOPIC-CA algorithm (see Algorithm 15.1) for acquiring topical development is defined with the aid of a dendrogram DTree as the list ST = TOPIC_CA(dq), where dq is the node in the dendrogram for which we want to generate a topical development.
Algorithm 15.1. Algorithm TOPIC-CA – function TOPIC_CA
function TOPIC_CA(node ∈ DTree ∪ null)
    L ← empty list
    if node ≠ null then
        AddNodeToEnd(L, node)
        while node ≠ null do
            sibling ← Sibling(node)
            L ← Sub(sibling, L)
            node ← Parent(node)
        end while
    end if
    return L
end function
The advantage of using this algorithm for acquiring topical development is its low time and space requirement during querying. For searching topical development, we need a dendrogram with a pre-calculated similarity for each individual node. The disadvantage is the time required to create the dendrogram; however, the hierarchical clustering is computed during the creation of the textual database, so users entering queries into the IR system are not affected by this factor.
15.4 Improving Vector Query Results
The result of a query in the vector model is a sorted collection of documents. Sorting is done according to the similarity coefficient between the query and each individual document saved in the
Algorithm 15.2. Algorithm TOPIC-CA – function Sub
function Sub(node ∈ DTree ∪ null, list L)
    if node = null then
        return L
    end if
    sibling ← Sibling(node)
    if node ∈ leaf nodes of DTree then
        AddNodeToEnd(L, node)
    else if sibling ≠ null then
        siblingLeft ← LeftChild(sibling)
        siblingRight ← RightChild(sibling)
        simLeft ← Sim(node, siblingLeft)
        simRight ← Sim(node, siblingRight)
        if simRight ≤ simLeft then
            L ← Sub(siblingLeft, L)
            L ← Sub(siblingRight, L)
        else
            L ← Sub(siblingRight, L)
            L ← Sub(siblingLeft, L)
        end if
    end if
    return L
end function
Algorithm 15.3. Algorithm TOPIC-CA – function Sim. It calculates the proximity of cluster n1 to a descendant of a neighboring cluster n2 in the hierarchy
function Sim(n1 ∈ DTree ∪ null, n2 ∈ DTree ∪ null)
    if n1 = null ∨ n2 = null then
        return 0
    end if
    cn1 ← centroid created from all leaf nodes in n1
    cn2 ← centroid created from all leaf nodes in n2
    sim ← similarity between cn1 and cn2
    return sim
end function
text database. The objective of this section is to introduce the SORT-EACH method for changing this ordering based on the information we have about topical development; in this way we want to improve vector query results. From a geometrical perspective, this change in ordering means bringing relevant documents into closer proximity and distancing irrelevant documents.
Our SORT-EACH algorithm [5, 16] reorders the documents retrieved by the vector query so that documents belonging to the same topical development follow one another (see Algorithm 15.4).
Algorithm 15.4. SORT-EACH algorithm
1. Perform a vector query and label the retrieved documents CVM, ordered by the vector model query ranks.
2. Label the resulting ordered list of documents CSorted and determine the number of documents it should contain, count.
3. Determine level as the number of documents in a topical development.
4. Do the sorting via Algorithm 15.5.
5. The resulting list is CSorted.
The goal of this algorithm is to change the ordering of query results to bring similar documents closer to one another in the ordering. The algorithm starts with the document that is most similar to the query; this document is added directly to the newly created ordering. The second document from the original ordering is placed right after it, since there is no other place for it to be sorted. Then we gradually take documents increasingly distant from the query and sort them according to their relevance to the given topic. Our goal was to avoid returning a set of clusters to users for subsequent selection (see [23, 22, 17, 25]); we wanted to keep the results in the form of a list that gradually distances the documents from the query (a somewhat similar approach is [13]) while respecting the information about the initial ranking of all documents.
Algorithm 15.5. Sorting middle of the SORT-EACH algorithm
First items in lists have index 1
CVM[1] add to the beginning of the ordered list CSorted
CVM[2] add to the end of the ordered list CSorted
for i ← 3, count do
    DV ← CVM[i]
    CTopic ← topical development for DV and level (TOPIC-CA)
    for j ← 2, |CTopic| do    (skip DV – on index 1)
        if CTopic[j] ∈ CSorted then
            DV add to CSorted behind CTopic[j]
            break
        end if
    end for
    if DV ∉ CSorted then
        DV add to the back of CSorted
    end if
end for
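For readers who prefer an executable form, the reordering loop of Algorithms 15.4–15.5 can be sketched as follows. The callback `topical_development` stands for any of the TOPIC-* methods returning a development of `level` documents; note that list positions are 0-based here, unlike in the 1-based pseudocode.

```python
from typing import Callable, List

def sort_each(vm_ranked: List[str],
              topical_development: Callable[[str, int], List[str]],
              count: int, level: int) -> List[str]:
    """Reorder the top `count` vector-model results using topical developments."""
    sorted_list = vm_ranked[:2]                      # CVM[1] and CVM[2] keep their places
    for dv in vm_ranked[2:count]:
        topic = topical_development(dv, level)
        placed = False
        for anchor in topic[1:]:                     # skip dv itself (first element)
            if anchor in sorted_list:
                sorted_list.insert(sorted_list.index(anchor) + 1, dv)
                placed = True
                break
        if not placed:
            sorted_list.append(dv)
    return sorted_list

# Example (names from the earlier sketch, assumed available):
# sort_each(vector_results, lambda d, lvl: topic_nn2(sim, d, lvl), count=100, level=10)
```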
15.5 Experimental Results
Now we illustrate the tests that showed improved vector model query results when using the SORT-EACH algorithm. We used the TREC-9 Filtering Track Collection for testing topical development. This collection consists of 283,335 English abstracts from medicine (their approximate size is 235 MB); it can be downloaded from http://trec.nist.gov/data/t9_filtering.html. The collection also contains 63 queries. Indexing the collection extracted 117,135 terms. While testing, we gradually adjusted the parameter k, determining the number of documents in a topical development, and the parameter l, the number of documents for which the SORT-EACH algorithm is carried out. We compared three algorithms for topical development: TOPIC-NN, TOPIC-NN2 and TOPIC-CA. In all tests, we proceeded as follows:
1. We carry out all queries prepared for a given collection using the vector model.
2. SORT-EACH, with an appropriately selected algorithm for topical development and settings of the parameters k and l, is used for reassembling the results of individual queries.
3. We calculate the F-measure for individual results [24, 8]; the F-measure is calculated with β = 1.
4. We calculate the average F-measure values.
5. We calculate the P-R curve for each individual set of queries.
There were a total of 63 queries in the collection, for which we gradually calculated vector queries and their reassembly. Test results can be viewed in Table 15.1. The two rows marked with an asterisk are depicted in Fig. 15.5 and Fig. 15.6, respectively.

Table 15.1 Average F-measure. Vector model average F-measure = 16.19107.

   k   l for SORT-EACH   TOPIC-NN   TOPIC-NN2   TOPIC-CA
   5        50           16.43540   16.46800    16.52930
   5       100           17.39482   17.45359    17.16512
   5       150           17.26270   17.31879    17.18616
   5       200           17.27485   17.34002    17.21730
  10        50           16.60054   16.63277    16.62706
  10       100           17.64491   17.65865    17.32622 *
  10       150           17.71311   17.71629    17.64486
  10       200           17.83688   17.84139    17.63589
  30        50           16.55724   16.58858    16.70111
  30       100           17.26095   17.40597    17.56266 *
  30       150           17.30852   17.35623    17.90581
  30       200           17.13975   17.17328    17.81069
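Step 3 of the procedure above uses the standard F-measure with β = 1. A small helper in the spirit of [24, 8] might look as follows; the set-based result/judgement format is an assumption made for the sketch.

```python
def f_measure(retrieved: set, relevant: set, beta: float = 1.0) -> float:
    """F-measure of a single query result (beta = 1 gives the harmonic mean of P and R)."""
    hits = len(retrieved & relevant)
    if not retrieved or not relevant or hits == 0:
        return 0.0
    precision, recall = hits / len(retrieved), hits / len(relevant)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Average over all queries, as a percentage (result/judgement containers are assumed inputs):
# avg_f = 100 * sum(f_measure(res[q], rel[q]) for q in queries) / len(queries)
```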
(Two panels plotting F-measure against the count of documents for TOPIC-NN, TOPIC-NN2, TOPIC-CA and the vector model.)
Fig. 15.5 F-measure for (a) k = 10 and l = 100 and for (b) k = 30 and l = 100
(Two panels plotting precision against recall for TOPIC-NN, TOPIC-NN2, TOPIC-CA and the vector model.)
Fig. 15.6 P-R curve for (a) k = 10 and l = 100 and for (b) k = 30 and l = 100
15.6 Conclusion
Our objective was to take advantage of the similarity among documents contained in text. This knowledge was used for moving relevant documents higher up in the ordered lists of documents returned by IR systems. From a geometrical perspective, this change means bringing relevant documents closer together and distancing irrelevant documents. We defined the concept of a k-path, for which we use the term "topical development". A simplified view of topical development is a path away from an initial document, through similar documents, on toward other documents pertinent to them. To acquire topical development in textual databases, we have proposed three methods: TOPIC-NN, TOPIC-NN2 and TOPIC-CA. For reordering vector query results, we proposed the SORT-EACH algorithm, which uses the aforementioned methods for acquiring topical development. From the results it can be seen that TOPIC-CA was not always the best algorithm for topical development; however, in situations where we selected smaller k parameters, TOPIC-NN was the better algorithm.
Acknowledgements. This work is partially supported by Grant of Grant Agency of Czech Republic No. 205/09/1079.
References 1. Armstrong, M.A.: Basic Topology (Undergraduate Texts in Mathematics). Springer, Heidelberg (1997) 2. Berry, M.: Survey of Text Mining: Clustering, Classification, and Retrieval. Springer, Heidelberg (2003) 3. Carpineto, C., de Mori, R., Romano, G., Bigi, B.: An information-theoretic approach to automatic query expansion. ACM Transactions on Information Systems 19(1), 1–27 (2001), http://doi.acm.org/10.1145/366836.366860 4. Chalmers, M., Chitson, P.: Bead: explorations in information visualization. In: SIGIR 1992: Proceedings of the 15th annual international ACM SIGIR conference on Research and development in information retrieval, pp. 330–337. ACM, New York (1992), http://dx.doi.org/10.1145/133160.133215 5. Dvorsk´y, J., Martinoviˇc, J., Sn´asˇel, V.: Query expansion and evolution of topic in inˇ a R´ ˇ ıcˇ ka, Czech formation retrieval systems. In: DATESO, pp. 117–127. Desn´a – Cern´ Republic (2004) 6. Gan, G., Ma, C., Wu, J.: Data Clustering: Theory, Algorithms, and Applications. ASASIAM Series on Statistics and Applied Probability. SIAM, Philadelphia (2007) 7. Hearst, M.A.: Tilebars: Visualization of term distribution information in full text information access. In: Proceedings of the Conference on Human Factors in Computing Systems, CHI 1995 (1995), http://citeseer.ist.psu.edu/hearst95tilebars.html 8. Ishioka, T.: Evaluation of criteria on information retrieval. Systems and Computers in Japan 35(6), 42–49 (2004), http://dx.doi.org/10.1002/scj.v35:6 9. Jacobs, D.W., Weinshall, D., Gdalyahu, Y.: Classification with nonmetric distances: image retrieval and class representation. IEEE Transactions on Pattern Analysis and Machine Intelligence 22(6), 583–600 (2000), http://dx.doi.org/10.1109/34.862197 10. Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM Computing Surveys 31(3), 264–323 (1999), citeseer.ist.psu.edu/jain99data.html 11. Korfhage, R.R.: To see, or not to see - is that the query? In: SIGIR 1991: Proceedings of the 14th annual international ACM SIGIR conference on Research and development in information retrieval, pp. 134–141. ACM Press, New York (1991), http://dx.doi.org/10.1145/122860.122873 12. Kowalski, G.J., Maybury, M.T.: Information Storage and Retrieval Systems Theory and Implementation, 2nd edn. The Information Retrieval Series, vol. 8. Springer, Norwell (2000) 13. Leuski, A.: Evaluating document clustering for interactive information retrieval. In: CIKM, pp. 33–40 (2001), http://citeseer.ist.psu.edu/leuski01evaluating.html 14. Martinoviˇc, J., Gajdoˇs, P., Sn´asˇ el, V.: Similarity in information retrieval. In: 7th Computer Information Systems and Industrial Management Applications, 2008. CISIM 2008, pp. 145–150. IEEE, Los Alamitos (2008) 15. Martinoviˇc, J.: Evolution of topic in information retrieval systems. In: WOFEX, Ostrava, Czech Republic (2004)
16. Martinoviˇc, J., Gajdoˇs, P.: Vector model improvement by FCA and topic evolution. In: ˇ a R´ ˇ ıcˇ ka, Czech Republic (2005) DATESO, pp. 46–57. Desn´a – Cern´ 17. Osinski, S., Weiss, D.: Carrot2: Design of a flexible and efficient web information retrieval framework. In: Szczepaniak, P.S., Kacprzyk, J., Niewiadomski, A. (eds.) AWIC 2005. LNCS (LNAI), vol. 3528, pp. 439–444. Springer, Heidelberg (2005) 18. Salton, G.: Automatic Text Processing. Addison-Wesley, Reading (1989) 19. Salton, G., Buckley, C.: Term-weighting approaches in automatic text retrieval. Information Processing and Management 24(5), 513–523 (1988), http://dx.doi.org/10.1016/0306-4573(88)90021-0 20. Spoerri, A.: Infocrystal: a visual tool for information retrieval & management. In: CIKM 1993: Proceedings of the second international conference on Information and knowledge management, pp. 11–20. ACM, New York (1993), http://doi.acm.org/10.1145/170088.170095 21. Thompson, R.H., Croft, W.B.: Support for browsing in an intelligent text retrieval system. Int. J. Man-Mach. Stud. 30(6), 639–668 (1989) 22. http://demo.carrot2.org/demo-stable/main (1.8.2008) 23. http://vivisimo.com (1.8.2008) 24. Van Rijsbergen, C.J.: Information Retrieval, 2nd edn., Department of Computer Science, University of Glasgow (1979) 25. Zamir, O., Etzioni, O.: Grouper: a dynamic clustering interface to web search results. Computer Networks 31(11-16), 1361–1374 (1999), http://dx.doi.org/10.1016/S1389-1286(99)00054-7
Chapter 16
Mining Overall Sentiment in Large Sets of Opinions Pavol Navrat, Anna Bou Ezzeddine, and Lukas Slizik
Abstract. Nowadays e-commerce is becoming more and more popular and widespread. Being on the web opens many possibilities for both customers and service providers or goods merchants, hence there is great potential and opportunity for all e-shops. For example, many e-shops provide a discussion forum for every product they sell. Besides the primary role of any discussion forum, i.e. to facilitate discussions among customers, the reader can benefit from the opinions expressed there. However, if there are too many comments on the forum, we have the problem of extracting the essential overall opinion from it, and at least some kind of partial automation is desirable. One approach is to perform opinion mining. In this work, we elaborate opinion mining for a problem setting where there is a need to extract the overall sentiments expressed by discussion forum participants, i.e. whether an opinion represents a positive or negative attitude. We propose a method of mining overall sentiments and design a fully automated system that provides intelligent processing of a large amount of reviews and opinions. Keywords: Web Mining, Semantics of the Images, Automatic Understanding.
16.1 Introduction
Almost everyone feels a need to share his opinion with others. Users can express their opinion in practically every possible field. Every opinion has its polarity, i.e. an opinion represents a positive or negative attitude. Users usually comment
Pavol Navrat · Anna Bou Ezzeddine · Lukas Slizik
Faculty of Informatics and Information Technologies, Slovak University of Technology in Bratislava
e-mail: {navrat,ezzeddine}@fiit.stuba.sk, [email protected]
on only some specific characteristics of the product. Users don't post comments on the whole product; they pick several product features, e.g. the capacity of the hard drive, the battery life of the digital camera or the screen resolution of the monitor [1, 2]. Therefore the first step is to identify those characteristics. The second step lies in identifying opinion words: an opinion word is an adjective that evaluates the quality of a particular characteristic. Finally, the last step is to determine the orientation of all identified opinion words in the text and thus evaluate the overall quality of the product [5]. One fact has to be emphasized, and that is the subjective nature of the whole opinion mining field. It is sometimes not even possible for two different human beings to settle on one common opinion; e.g. the first person identifies some product characteristics, but the second person tags different words as product characteristics. The problem is that both could be right, because it is only a matter of their opinion, which can be subjective. Therefore it is very difficult to design a fully automated system that will score excellent results in precision and recall [4, 6, 8]. When creating this system, we were inspired by the work of [2]. The authors of that work propose to use the tool WordNet in order to determine the semantic orientation of specific words. They suggest manually creating a set of opinion words; according to them this set consists of the 30 most frequently used adjectives in English. The problem is that this basic set of words is not given. In our system design, we implemented our own algorithm for determining the semantic orientation of opinion words.
16.2 Corpus
The Opinion Mining System is a complex system, and the process of opinion mining consists of three basic steps:
1. product characteristics mining
2. opinion words mining
3. semantic orientation determination
In the process of creating and testing the system, several corpuses with collected opinions, reviews and comments had to be created. Amazon is America's largest online retailer. It provides the opportunity for users to post a comment on every product the company sells, therefore its web pages offer a great source for opinion mining. The behavior of the Opinion Mining System was tested on four corpuses; each of them was created by downloading all available posts from the Amazon website. Table 16.1 shows an overview of the created corpuses. The first column gives the name of the product, the second column says how many posts were downloaded, the third one contains the number of words in the whole corpus and the fourth column gives the average number of words per post.
Table 16.1 Corpuses overview

  Product name                    Number of posts   Number of words   Average number of words per post
  Apple iPod touch 16GB                874               117346                   134
  Canon EOS 300d                       425                61355                   144
  Nikon Wireless Remote Control        582                39344                    68
  Pearl Jam album Ten                  742               103971                   140
16.3 Product Characteristics Mining
This step is based on POS (part-of-speech) tagging, assuming that only nouns can be product characteristics. POS tagging is a process that identifies the part of speech of every word in the text; in other words, it sorts words into categories, e.g. proper nouns, comparative adjectives, cardinal numbers and so on. In this work the Java-based library LingPipe, which provides part-of-speech tagging, was used. In this way we were able to identify all nouns in the selected corpus and thus identify product characteristics. The Opinion Mining System gradually marked the 0.1%, 0.2%, ..., 0.9%, 1% most frequent nouns in the corpus as product characteristics. This is shown in Fig. 16.1 and Fig. 16.2. Figure 16.1 shows the results of product characteristics identification for the product Canon EOS 300D; the x-axis represents the 0.1%, 0.2%, ..., 0.9%, 1% most frequent nouns in the corpus, and the precision, recall and F1 statistics measure how successfully the system identified product characteristics. Figure 16.2 then shows the results for the product Apple iPod touch 16GB. A sketch of this step is given below.
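The original system used the Java library LingPipe for tagging; the sketch below reproduces the idea with NLTK as a stand-in (our assumption, not the authors' implementation): tag the posts, keep the nouns, and mark the top fraction of the most frequent ones as candidate product characteristics. Taking the fraction over distinct nouns is also our reading of the percentage setting.

```python
from collections import Counter
from typing import List

import nltk  # assumes the tokenizer and POS tagger models have been downloaded

def candidate_characteristics(posts: List[str], fraction: float = 0.01) -> List[str]:
    """Mark the top `fraction` of the most frequent nouns as candidate characteristics."""
    noun_counts: Counter = Counter()
    for post in posts:
        for word, tag in nltk.pos_tag(nltk.word_tokenize(post.lower())):
            if tag.startswith("NN"):           # NN, NNS, NNP, NNPS
                noun_counts[word] += 1
    top_n = max(1, int(len(noun_counts) * fraction))
    return [word for word, _ in noun_counts.most_common(top_n)]

# e.g. candidate_characteristics(canon_posts, fraction=0.005)  # the 0.5% setting
```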
Fig. 16.1 Product characteristics identification for Canon EOS 300D
Fig. 16.2 Product characteristics identification for Apple iPod touch 16GB
16.4 Opinion Words Mining
In this phase, product characteristics have already been identified. The next step is to assign particular opinion words to every product characteristic. As mentioned before, opinion words are adjectives that occur in the same sentence as the product characteristic. The problem lies in the fact that with this approach we might also retrieve adjectives that are in the same sentence as the characteristic but can hardly be considered opinion words. Therefore a more intelligent treatment has to be involved. This is the second phase of our approach: the Opinion Mining System tries to determine the polarity of each adjective that was marked as an opinion word, and those opinion words whose polarity cannot be determined are removed from the set of opinion words, because their polarity is neutral and therefore they do not evaluate the quality of the particular product characteristic. Table 16.2 shows the results of opinion words mining.

Table 16.2 Results for opinion words mining

  Product   Positive opinion words   Negative opinion words   Neutral words   Opinion words together   Precision of opinion words mining
  Nikon              56                       16                   130                 202                        36%
  Canon              96                       29                   283                 408                        30%
The first column gives the name of the product, i.e. of the tested corpus. The second column gives the number of positive opinion words identified in the selected corpus. The third and fourth columns give the number of negative and the number
of neutral words identified in the whole corpus. The fifth column sums these values and the last one shows the precision of opinion words mining. These results are not good enough, so we present a semantic orientation determination method to increase the precision of opinion word identification.
16.5 Semantic Orientation Determination
The basic idea behind determining the semantic orientation of words, i.e. whether an adjective represents a positive or negative opinion, lies in the identification of its synonyms and antonyms. Synonyms are words with the same or similar meaning, so we assume that synonyms always have the same semantic orientation. On the other hand, antonyms are words with the opposite meaning, therefore the orientation of a particular opinion word and of its antonym has to be different. This situation is illustrated in Fig. 16.3. It is obvious that the words on the left represent something positive: effective and right are synonyms of good and have the same meaning. The words on the right represent something negative: defective and spoiled are synonyms of bad and have the same meaning as the word bad. The words good and bad are connected by the dashed line, which means they are antonyms with different meanings. In this situation all we need to know is the orientation of one of the words; the orientation of the others can be determined from its synonyms and antonyms. There are various online dictionaries, e.g. WordNet [3, 7]. This dictionary stores information about synonyms and antonyms, but it does not include the semantic orientation of a word. Using this kind of dictionary, we know that the words effective, good and right are in the same set, but we cannot figure out their orientation; all we know is that this set has the opposite orientation to the set containing the words bad, spoiled and defective.
Fig. 16.3 Synonyms and antonyms
Our proposed algorithm works with a predefined set of frequent adjectives, each of which has a tag that specifies its semantic orientation. Let this set be S. We also need a label for the set of words whose orientation we do not know; let this set be X. The algorithm loops incrementally through all the words stored in the set X, i.e. through all the words with unknown orientation. If the set S contains
the current word, its orientation is already known, so the word is removed from the set X. If the set S does not contain the current word but contains a synonym of it, we set the orientation of the current word to be the same as the orientation of its synonym. The last case is that the current word is not in the set S and neither are its synonyms, but S contains at least one of its antonyms; in that case we set the orientation of the current word to be the opposite of the orientation of its antonym. If none of these conditions holds, the current word remains in the set X and its orientation may be determined in one of the next iterations. Words whose orientation cannot be determined are declared neutral (without orientation, e.g. external, digital, ...). Table 16.3 shows the results for two corpuses. The first column gives the number of words whose orientation was determined correctly, the second one the number of incorrectly determined orientations. The overall precision was calculated as the ratio between correctly classified words and all classified words.
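A sketch of this propagation using NLTK's WordNet interface is shown below. Accessing WordNet via NLTK is our assumption; the chapter only states that a WordNet-like dictionary of synonyms and antonyms is used. The seed dictionary plays the role of S (orientation encoded as +1/-1), the unknown set plays the role of X, and whatever remains unresolved is declared neutral.

```python
from nltk.corpus import wordnet as wn   # assumes the WordNet corpus is installed

def related_words(word: str):
    """Collect WordNet synonyms and antonyms of an adjective."""
    syns, ants = set(), set()
    for synset in wn.synsets(word, pos=wn.ADJ):
        for lemma in synset.lemmas():
            syns.add(lemma.name().lower())
            for ant in lemma.antonyms():
                ants.add(ant.name().lower())
    syns.discard(word)
    return syns, ants

def propagate_orientation(seed: dict, unknown: set):
    """seed: word -> +1/-1; returns (extended orientations, words left neutral)."""
    oriented = dict(seed)
    remaining = set(unknown) - set(oriented)
    changed = True
    while changed and remaining:
        changed = False
        for word in list(remaining):
            syns, ants = related_words(word)
            known_syn = next((s for s in syns if s in oriented), None)
            known_ant = next((a for a in ants if a in oriented), None)
            if known_syn is not None:
                oriented[word] = oriented[known_syn]      # same orientation as a synonym
            elif known_ant is not None:
                oriented[word] = -oriented[known_ant]     # opposite of an antonym
            else:
                continue                                  # try again in a later iteration
            remaining.remove(word)
            changed = True
    return oriented, remaining                            # `remaining` ends up neutral
```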
Table 16.3 Precision of semantic orientation determination

  Product   Number of correctly classified   Number of incorrectly classified   Precision
  Nikon                 130                                21                       86%
  Canon                 149                                99                       60%
16.6 Conclusion
The designed Opinion Mining System is a complex system that consists of several modules. It is practically impossible to evaluate the functionality of the whole system in just one test, therefore several tests were designed. The first one tested the precision and recall of the product characteristics mining process (Figs. 16.1, 16.2). The second test evaluates the semantic orientation of opinion words and is divided into two phases. After the first phase the system showed very low accuracy, approximately 35% (Table 16.2). In the second phase the algorithm for determining the semantic orientation was applied, after which the accuracy of the system increased considerably (Table 16.3). The difference in accuracy between the first and the second phase shows the importance and success of the algorithm for determining the semantic orientation. Text Mining can, in a certain sense, be understood as a subset of Data Mining, because it also processes a large amount of data in order to retrieve information from it afterwards. The main difference consists in the fact that Text Mining works with text, which can be represented in different ways. We can get from Text Mining to Opinion Mining by additional classification: an opinion can also be considered a text, with the difference that it is an evaluative text that expresses the opinion of a person about something, for example a product. In opinion interpretation we should take into account the polarity (negative,
neutral, positive) and the strength of the evaluation (a strongly or weakly negative or positive opinion). An opinion can be represented in different ways. Nowadays the Internet contains a large number of reviews that show the opinions of many users in practically every field. A problem can arise from the unlimited opportunities related to publication on the Internet; as a consequence, it would be appropriate in the future to also consider the credibility of the author who published the opinion. Acknowledgements. This work was partially supported by the Scientific Grant Agency of Republic of Slovakia, grant No. VEGA 1/0508/09.
References 1. Hu, M., Liu, B.: Mining and Summarizing Customer Reviews. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Seattle, Washington, USA, August 22-25 (2004) 2. Hu, M., Liu, B.: Mining Opinion Features in Customer Reviews. In: Proceedings of Nineteenth National Conference on Artificial Intelligence, San Jose, USA (July 2004) 3. WordNet - large lexical database of English, http://wordnet.princeton.edu/perl/webwn 4. Larose, D.T.: Discovering Knowledge in Data. Wiley, Chichester (2005) 5. Felici, G.: Data Mining and Knowledge Discovery Approaches Based on Rule Induction Techniques (Massive Computing). Springer, Heidelberg (2006) 6. Larose, D.T.: Data Mining Methods and Models. Wiley, Chichester (2006) 7. van Gemert, J.: Text Mining Tools on the Internet: An overview (2000) 8. Liu, B., Lee, W.S., Yu, P.S., Li, X.: Partially Supervised Classification of Text Documents. In: Proceedings of the Nineteenth International Conference on Machine Learning (ICML 2002), Sydney, Australia (July 2002)
Chapter 17
Opinion Mining through Structural Data Analysis Using Neuronal Group Learning Michał Pryczek and Piotr S. Szczepaniak
Abstract. Opinion Mining (OM) and Sentiment Analysis problems lie at the conjunction of such fields as Information Retrieval and Computational Linguistics. As the problems are semantically oriented, the solution must be looked for not in the data as such, but in its meaning, considering complex (both internal and external) domain-specific context relations. This paper presents Opinion Mining as a specific definition of a structural pattern recognition problem. Neuronal Group Learning, earlier presented as a general structural data analysis tool, is specialised to infer annotations from natural language text. Keywords: Opinion Mining, Neural Networks, Structural Pattern Analysis.
17.1 Introduction
Opinion Mining [18], also known as sentiment analysis or opinion extraction, is a relatively new discipline of research lying between Information Retrieval and Computational Linguistics. It can in short be described as applying machine learning methods for the automatic recognition of subjectivity-related aspects of text. However, this task is significantly harder than most other information retrieval tasks, as the answers are buried deep inside the meaning of words, phrases and sentences, rather than in words as such or their layout. Semantically oriented text processing has been of special interest in recent years due to multiple potential fields of application, especially on the Web. Imagine a situation in which it would be easy to search through the web not only for information about a product, but for specific opinions about it, or even to summarise the general opinion about it. The scope of application is not limited to searching or automated
Michał Pryczek · Piotr S. Szczepaniak
Institute of Information Technology, Technical University of Lodz, Wolczanska 215, 90-924 Lodz, Poland
e-mail: [email protected]
information retrieval, but covers such issues as automated survey analysis or automated market research. This brings crucial external knowledge and context-related issues to the foreground. Usually a method architect concentrates on the transformation of the input data into some kind of feature space, which is later processed using common machine learning techniques. A proper feature space definition becomes increasingly complex, especially as the machine learning method utilised later (e.g. a classifier inference technique) is usually domain blind. In sentiment analysis, however, external knowledge cannot easily be coded into feature premises. It is enough to say that the human approach to textual information varies depending on its source: newspaper articles are treated differently than letters from friends or blog posts. In such cases, cultural and language contexts are needed for meaningful processing of a text; these can be referred to as external context. Semantically oriented text processing also emphasises another type of context – internal context. Statistical word occurrence analysis, often used in natural language processing, is almost useless here. Not only does it matter what words are used, but it is also important how they are placed in a sentence. Lexical and morphological analysis become increasingly important, as every detail can matter from the semantic point of view. Various recent achievements of computational linguistics have to be utilised to achieve any improvement, and the results are still unsatisfactory. Section 17.2 briefly presents Opinion Mining and its most general and fundamental task, which is later redefined as a structural pattern recognition problem that can be simplified. In Sect. 17.3, Neuronal Group Learning [12] is presented as a method that, by applying a self-organising neural-network adaptation process, can be used as a structural pattern recognition method. This method is further elaborated, and details of opinion description inference using the Adaptive Active Hypercontour [15, 16, 17] algorithm are given in Sect. 17.4.
17.2 Opinion Mining Opinion Mining, or Sentiment Analysis, is one of the relatively recent Information Retrieval tasks, which aims to identify private states [14, 18] of agents based on natural language expressions. It can be generally described as finding not only what an agent directly expressed or what was said about him, but what he meant or, more specifically, what his real attitudes were. In this paper we concentrate on text-oriented Opinion Mining, which we will consider as text annotating. Several kinds of objects (annotation types) have been identified in the OM task, including: • agents, which can be considered sources of private states and speech events or targets of other agents' attitudes; • direct subjective expressions, holding direct mentions of expressed private states;
• words or phrases indirectly expressing subjectivity; • objective speech events, which do not express private states (neither directly nor indirectly). Each annotation can be (and usually is) additionally described by a respective set of attributes in a frame-like manner, e.g. the polarity and intensity of a subjective expression. From the point of view of this article, the most important attributes are: • agent annotation: 1. id - document- or universe-unique identifier of an agent 2. text anchor - the part of the text to which the annotation refers • direct subjective and expressing-subjectivity annotations: 1. agent-source - the agent whose private state is expressed 2. agent-target - the attitude target 3. text anchor (see above) • objective speech event annotations: 1. agent-source - the agent whose private state is expressed 2. agent-target - the attitude target 3. text anchor (see above) It is worth mentioning that the other attributes are no less important than the ones listed here; indeed, increasing effort has been put into their recognition in recent years, including, but not limited to, automatic recognition of the polarity or intensity of words and phrases [2], which has brought important progress in the field.
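To make the annotation scheme concrete, the following minimal sketch (our own illustrative code, not part of the original MPQA tooling; the record and field names are assumptions) shows how these annotation types and their most important attributes could be represented as frame-like records.

from dataclasses import dataclass
from typing import FrozenSet, Optional

TextAnchor = FrozenSet[int]          # set of token indices in the document

@dataclass
class AgentAnnotation:
    id: str                          # document- or universe-unique agent identifier
    anchor: TextAnchor               # part of the text the annotation refers to

@dataclass
class SubjectiveAnnotation:          # direct subjective / expressing-subjectivity
    agent_source: str                # agent whose private state is expressed
    agent_target: Optional[str]      # attitude target (may be absent)
    anchor: TextAnchor

@dataclass
class ObjectiveSpeechEvent:
    agent_source: str
    agent_target: Optional[str]
    anchor: TextAnchor

# Example: agent "w1" anchored to tokens 4 and 5 of the document.
writer = AgentAnnotation(id="w1", anchor=frozenset({4, 5}))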
17.2.1 Pattern Recognition Task There are various approaches to Opinion Mining in the literature. Sometimes its definition is biased towards a specific application, which may lead to misunderstandings. The most general, and thus most complex, task definition, namely complete document annotating, can be inferred from the field of Computational Linguistics. Let a textual document d be considered as a finite sequence of symbols (tokens) from an alphabet A. The definition of A can vary depending on the natural language processing tools used to prepare documents. In the simplest example, it can be a set of characters. However, it is generally preferred to be a set of words or fixed phrases extended by a set of punctuation marks, which carries more high-level knowledge and better reflects human perception of a text. Additionally, let Id ⊂ N be the set of indices of symbols in d. Definition 17.1. Complete annotating of document d is defined as finding the set of annotations Kd anchored in d together with all their attributes.
Being the most general definition, it is usually simplified according to one's needs and aims. In particular, Choi et al. [6] concentrate on identifying opinion sources, Kim and Hovy [11] extract tuples (opinion, source, target), Bethard et al. [3] search for sentences holding opinions, and Breck et al. [4] recognise subjective speech events only. In all these cases, as in the presented paper, text anchors play a special role. As recognition algorithms operate on textual data directly (possibly equipped with some additional external knowledge), it is natural for the process to be anchor oriented. Nevertheless, this can be considered a limitation due to the facts presented below. Stateless approach This is the most commonly used approach, in which all annotations are identified separately. This is also the easiest one from the machine learning point of view. We will here limit our considerations to text anchor and annotation type recognition. Let Θ and Φ = 2^{Id} be the set of annotation types and the set of all index sets in the processed document, respectively. The automatic document annotation task is then equivalent to identifying a relation ξ ⊆ Φ × Θ. In such a simplification two annotations of the same type cannot have the same text anchors, which usually is not a problem. Such a task can be solved by constructing a classifier c_d : Φ × Θ → {0, 1}:

c_d(ϕ, τ) = 1 if (ϕ, τ) ∈ ξ, and 0 otherwise    (17.1)

Various equivalents, which are more suitable for classifier model definitions, are possible. It has been observed that such single-stage classification is error prone, due to difficulties in identifying proper features to ensure differentiation between active and non-active text anchors. One of the compatible methods is the I-O-B model, common in the Information Retrieval field, in which the beginning of an anchor is first found, and then all acceptable subsequent symbols are classified to be put either inside or outside the anchor. Sequence approach The biggest disadvantage of the stateless approach is that it focuses on finding a single annotation at a time. However, annotations are closely connected to each other with respect to various relations. Thorough information about the other annotations in a document is seen as crucial for the correct identification of all annotations as well as for reducing false positive detections. The aim is to classify all index sets in one step. Such an approach can be computationally demanding, due to the exponential size of the set. However, the natural chain ordering of symbols together with an I-O-B-like approach makes it easy to apply probabilistic models of sequence labelling, e.g. Conditional Random Fields.
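As an illustration of the I-O-B idea mentioned above, the following minimal sketch (our own illustrative code, not the authors' implementation) decodes a per-token I/O/B tag sequence into text anchors, i.e. sets of token indices.

from typing import List, FrozenSet

def iob_to_anchors(tags: List[str]) -> List[FrozenSet[int]]:
    """Decode I-O-B tags into text anchors (sets of token indices).

    'B' starts a new anchor, 'I' continues the current one,
    'O' marks tokens outside any anchor.
    """
    anchors, current = [], []
    for i, tag in enumerate(tags):
        if tag == "B":                     # a new anchor begins here
            if current:
                anchors.append(frozenset(current))
            current = [i]
        elif tag == "I" and current:       # continue the open anchor
            current.append(i)
        else:                              # 'O' (or a stray 'I') closes it
            if current:
                anchors.append(frozenset(current))
            current = []
    if current:
        anchors.append(frozenset(current))
    return anchors

# "John praised the new camera" -> anchors over token 1 and over tokens 3-4
print(iob_to_anchors(["O", "B", "O", "B", "I"]))  # [frozenset({1}), frozenset({3, 4})]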
17.2.2 Contextual Remarks and Graph Annotation Description Even though practically very useful, all these attempts aim at finding each annotation separately, leaving attribute recognition to a potential subsequent process. Such an approach neglects important contextual relations between sentences and parts of the text. The first example is the problem of agent continuity in a text or corpus. More importantly, the relations between annotations and their attributes are ignored. The actual aim of the complete task can be captured in a directed graph with labelled edges and vertices. The set of vertices V consists of annotations and active text anchors (V ⊂ K ∪ 2^{Id}). The edges of this graph represent different types of relations, most notably anchoring, agent generalisation (used to maintain agent continuity) and source/target relations. It may also be desirable that, instead of text anchors, the symbols themselves become elements of the description graph (a minimal sketch of such a graph is given after the list below). Inference of such a graph can be decomposed into several processes of simpler structure: • anchor recognition • type recognition • attribute recognition (including relations) Each of these could be solved separately (using the techniques presented above). However, single-step structural pattern recognition seems to be a more reasonable approach. Although it is generally much more complex, both logically and computationally, such an approach has a few advantages. 1. All aspects of the graph are inferred considering the contextual dependencies among them. 2. Expert knowledge can be used effectively to ease solution space searching. 3. Annotations lacking a text anchor can be meaningfully included in the model.
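The following minimal sketch (illustrative only; the vertex and edge label names are our assumption, not the authors' schema) shows one simple way such a labelled description graph could be stored.

from collections import defaultdict

class DescriptionGraph:
    """Directed graph with labelled vertices and labelled edges."""

    def __init__(self):
        self.vertex_labels = {}                 # vertex id -> label (annotation type or "anchor")
        self.edges = defaultdict(list)          # vertex id -> list of (target id, edge label)

    def add_vertex(self, vid, label):
        self.vertex_labels[vid] = label

    def add_edge(self, src, dst, label):
        self.edges[src].append((dst, label))

g = DescriptionGraph()
g.add_vertex("anchor_1", "anchor")              # active text anchor
g.add_vertex("agent_w", "agent")                # agent annotation (e.g. the writer)
g.add_vertex("dse_1", "direct_subjective")      # direct subjective expression
g.add_edge("dse_1", "anchor_1", "anchored_in")  # anchoring relation
g.add_edge("dse_1", "agent_w", "source")        # source relation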
17.3 Neuronal Groups and Interrelations In this section, the utilisation of Neuronal Group Learning is proposed as a method of classical and structural classifier inference especially designed for graph pattern recognition. The dominant use of neural networks is function approximation. After the learning phase, the network is used "as is". Such an approach concentrates the knowledge and work of the problem expert on input and output space definitions as well as on the transformations to and from these spaces.
17.3.1 Neurophysiological Background: Neuronal Group Selection Theory (NGST) The 20th century brought rapid development of neuroscience, which resulted in increasing interest in brain organisation and the physiology of cognitive processes. Neural
Darwinism [7] is one of the contemporary theories concerning the processes of learning and development of the brain, especially the mechanisms of achieving complex functional capabilities. The Neuronal Group Selection Theory [7] states that the brain is dynamically organised into (sub)networks, the structure and function of which arise from interaction with the environment. These networks, consisting of thousands of strongly interconnected neurons, are called neuronal groups and are considered functional units. Edelman considers three phases/processes that influence neural map development. 1. Building the primary repertoire of neuronal groups. This phase is partially genetically determined and takes place mostly during foetal life and early infancy. As a result, a major role in the formation of primary functional units (the primary neural repertoire) from the best performing group prototypes is played by self-generated activity and, consequently, self-afferent information. 2. Together with the development of the neural system, an increasing quantity and variability of information needs to be processed. As a result, more complex functional structures called the secondary neural repertoire are formed. This process takes place mostly in postnatal life and is more ontogenetic, as it is based on individual experiences which drive neuronal group selection. 3. Temporal correlations between various groups lead to the creation of dynamically emerging neural maps (effective functional units of the brain), whose re-entrant interconnections maintain spatio-temporal continuity in response to re-entrant signals. For a more detailed and intuitive example, please refer to Mijna Hadders-Algra's article [9] presenting human motor development in the light of NGST. Lateral Feedback The lateral feedback mechanism, together with other facts concerning the formation of the cortical map [10], was another original inspiration of the presented model. Positive and negative influence (increasing and decreasing neuron activation) can be observed, whose properties lead to the formation of bubbles, seen as simple functional units.
17.3.2 Neuronal Group Concept Let V be a set of neurons and S denote a set of neuronal groups (considered as a set of group labels). In the concept presented, every neuron n ∈ V has an associated set of groups to which it belongs, denoted here as a function l : V → 2^S, also referred to as a labelling of neurons. Limitations on l's codomain can be imposed to simplify the model or tune it to specific needs. One of the most useful is enforcing mutual exclusion of groups: ∀n∈V |l(n)| = 1. In this paper, it is desired to introduce or infer groups of different kinds, playing different roles in algorithms that can utilise information coming from the grouping of neurons. As a result, the mutual exclusion condition should
be imposed only on subsets of S, in cases of contradicting group-distinguishing policies. In general we can consider three possible approaches to group identification: 1. predefined groups - contexts identified by an expert/engineer in which the interrelations of neurons should be considered. 2. supervised group identification - in which there exists an external premise/knowledge that may be used to distinguish groups. This is similar to a supervised classification task. 3. unsupervised group identification - possibly the most interesting approach: an entirely unsupervised knowledge inference.
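A minimal sketch of such a neuron labelling, with mutual exclusion enforced only within a chosen subset of the groups (our own illustrative code; the group names are arbitrary), is given below.

# Labelling l : V -> 2^S as a dictionary from neuron ids to sets of group labels.
labelling = {
    "n1": {"detector", "anchor_group_A"},
    "n2": {"detector", "anchor_group_B"},
    "n3": {"anchor_group_A"},
}

def mutually_exclusive(labelling, policy_groups):
    """Check that each neuron carries at most one label from `policy_groups`.

    Mutual exclusion is imposed only on this subset of S, so labels outside
    of it (e.g. 'detector') may freely co-occur.
    """
    return all(len(groups & policy_groups) <= 1 for groups in labelling.values())

print(mutually_exclusive(labelling, {"anchor_group_A", "anchor_group_B"}))  # True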
17.3.3 Neuronal Interrelations and Their Descriptions The most natural way to describe relations between neurons is by using graphs of different kinds. Even the network architecture can be described as a labelled directed graph. Topological neighbourhood is another example of an often analysed relation type (see the Growing Neural Gas (GNG) algorithm [8]). Going further, let us consider a set G of available graph interrelation descriptions. Various restrictions can be imposed on the structure of these graphs depending on the needs and their meaning. Similarly to the three approaches to group identification, graph descriptions can be defined according to analogous premises.
17.3.4 Dualism of the Inference Process During the network inference phase two processes take place in parallel. 1. Network adaptation, which consists of the adaptation of neurons' somatic parameters, network reorganisation, etc. 2. Inference of interrelation descriptions and neuron labelling, which covers the management of label and edge sets. Although using the word "duality" in this context may be seen as a misuse, it intuitively describes the character of the relations between the inference of the neural network and the two other parallel processes mentioned. What is most interesting is that neuronal groups and interrelation graphs can be an integral part of the final result. Their inference can be seen as a kind of structural pattern recognition process.
17.3.5 Neuronal Group Learning and Opinion Mining First of all, let sensor neurons be considered, where each neuron or simple neuronal group is responsible for the recognition of one text anchor. The same labelling mechanism is used both for group constitution and for description inference. However, in the
first case, the groups are semi-supervised, as we know their general meaning, but not their count. The other ones are strictly predefined by the annotation scheme. Conforming to the description also enforces the creation of multiple, semi-supervised graph descriptions, which complicates strict formalization, but does not increase the logical complexity of the task.
17.4 Description Inference The main difference between self-organising neural networks and probabilistic methods like CRFs is the iterative nature of computing the final solution. Conditional Random Fields use dynamic programming to search through the entire solution space (if possible) and as such they always find an optimal answer in view of the inferred model parameters. The iterative nature of neural network adaptation using Neuronal Group Learning leads to a heuristic search model. To organise the process, the Adaptive Active Hypercontour meta-algorithm is proposed, which will now be briefly presented to build a frame for further considerations.
17.4.1 Adaptive Energy Oriented Object Inference Although the AAH algorithm has been specially developed for classifier inference, its scope of application is much wider and includes the presented problem. Though misleading, the name will be kept in the paper for reference. Let H_d denote the description hypothesis space; it will be referred to as H, and in this case it is the space of possible descriptions of a given document d. Let C_H be the space of inferred objects (in this case specialized NGL-enabled neural networks over the document d space). As the graph description inferred during network adaptation is considered a solution, a distinction is made between the inferred object c ∈ C_H, referred to as a hypothesis, and the inferred description h(c) realized by c, which will be referred to as a description hypothesis. Let κ_p be a task-oriented object inference function parametrised by p, and let C^κ_H be the actual object space (the domain of κ), such that ∀c∈C^κ_H : h(c) ∈ H. AAH solves object inference through an optimisation process, whose target function E is commonly called energy. E can be considered as a function H → R, but it is usually desired that it operates on C^κ_H directly, taking into account various aspects of the classifier parameters. The Adaptive Active Hypercontour algorithm consists of the following steps [13, 12]: 1. Initialization: during this phase the choice of the initial classifier model and the subordinate inference method (κ) is made. An initial classifier hypothesis c_0 ∈ C^κ_H is then generated and its energy is computed. 2. α-phase: This step consists mainly of the invocation of the subordinate algorithm, which generates a new classifier hypothesis: c_{i+1} = κ_p(c_i)
3. Estimation of the energy of c_{i+1}. 4. β-phase: The subordinate algorithm (κ) is very often an optimisation process itself. However, its target function is usually compatible with, but different from, E. In this phase any change to the current process may be performed to improve the state of the E optimisation, such as: • an unwarranted change of the current hypothesis, usually κ- and E-oriented, like restoring the previous classifier (in case of a significant optimisation regression) or a structural change that bypasses κ's shortcomings. • κ reparametrisation or replacement with a different algorithm. • a change of the classifier model. It is quite common that expert knowledge describes the shape of the result rather than the steps leading to its achievement. Such knowledge is very hard to take into account during input data transformation. The energy function and the optimisation process can, however, be used as a censor, and thus can utilise such knowledge to drive the inference process, or at least ease it by limiting C_H.
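A minimal sketch of the AAH meta-algorithm loop described above follows (our own illustrative code; the energy function, the subordinate algorithm κ, the stopping rule and the β-phase policy are placeholders, not the authors' implementation).

import copy

def aah(initial_hypothesis, kappa, energy, n_iterations=100):
    """Skeleton of the Adaptive Active Hypercontour meta-algorithm.

    kappa:  subordinate inference algorithm, maps a hypothesis to a new one (alpha-phase).
    energy: target function E to be minimised.
    """
    best = current = initial_hypothesis          # 1. initialisation
    best_e = current_e = energy(current)
    for _ in range(n_iterations):
        candidate = kappa(current)               # 2. alpha-phase
        candidate_e = energy(candidate)          # 3. energy estimation
        if candidate_e <= current_e:             # 4. beta-phase (simplified):
            current, current_e = candidate, candidate_e
        else:                                    #    restore the best hypothesis so far
            current = copy.deepcopy(best)        #    on optimisation regression
            current_e = best_e
        if current_e < best_e:
            best, best_e = current, current_e
    return best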
17.4.2 Iterative Document Annotating The AAH algorithm can easily be used to formulate a cascade of κ to adapt to various use cases. For clarity, let us differentiate the graph descriptions that appear as a part of the inferred annotating (referred to as high-level ones) from others that may be present during the adaptation phase, mostly used to build functional units responsible for anchor detection. The same division is made for neuronal groups (labelling). In opinion annotating the following aspects should be considered. Given that the previous assumption about the role of neurons holds, two processes must be implemented. 1. Adaptation of the neuron somatic description, which can: • change a text anchor; • change neural group (detector/primary group) settings. 2. Inferred description modification, including: • vertex relabelling; • new annotation vertex generation; • attribute inference; • attribute graphs (labelled edges).
There are no direct presumptions as to whether description inference should be put into the α or the β phase. However, given that the text anchors are set, it may be easier to infer these descriptions using an external method after numerous anchor adaptations, rather than adapting them with every, even the smallest, change. The high-level analysis is usually much more computationally expensive and as such it may render the presented method infeasible.
17.5 Evaluation Data and Methodology When considering the complete annotation problem, not only may an effective method be regarded as a research goal; the methodology for evaluating prediction results also has important shortcomings. During the research, the Multi-Perspective Question Answering (MPQA) [18] corpus is used. It is a collection of 535 news articles in English collected from various data sources between June 2001 and May 2002. The majority of the documents concern 10 topics. Unclassified documents form a separate category (misc). The documents have been manually annotated by human experts. It is worth mentioning that the same markup language (namely WWC) has been successfully adopted in the similar I-CAB corpus (in the Italian language) [1]. Following the commonly adopted approach, the first 35 documents are used as a tuning set for the method. The remaining 500 are divided into k groups in order to enable the k-fold cross validation technique to be used (usually k = 5).
17.5.1 Text Processing Steps and Feature Extraction To enable high-level text processing and meaningful feature extraction, the sentences in the documents are first parsed using the Charniak parser [5] with its tree-bank grammar. This enriches sentences with syntactic information, which is needed for higher-level feature extraction. Sentence tokenisation is inferred from the Charniak parser result, which also performs simple normalisation of tokens. In general, features are extracted both for tokens and for fragments of documents. The length of fragments is limited to the end of the current sentence, as there are no annotations crossing sentences. Three types of features are considered in our work: 1. lexical features, which are binary or fuzzy token identification features; 2. morpho-syntactic features, the extraction of which is based on the Charniak parser output; for each syntactic category 3 features are generated, indicating whether the token starts, ends or is inside a specific node; 3. external features (all others, e.g. whether the token starts with a capital letter). For sentence fragments, histograms are generated based on the lexical features, whereas the morpho-syntactic features remain the same except that they are computed for fragments rather than single tokens.
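The following minimal sketch (our own illustrative code; the parser interface is omitted and the feature names are assumptions) shows token-level features of the three kinds listed above.

def token_features(tokens, index, syntactic_spans):
    """Features for tokens[index].

    syntactic_spans: list of (category, start, end) node spans taken from a
    constituency parse (end exclusive); how they are produced is not shown here.
    """
    tok = tokens[index]
    feats = {f"lex={tok.lower()}": 1.0}                    # lexical (token identity)
    feats["ext_capitalised"] = float(tok[:1].isupper())    # external feature
    for cat, start, end in syntactic_spans:                # morpho-syntactic features
        if start <= index < end:
            feats[f"{cat}_starts"] = float(index == start)
            feats[f"{cat}_ends"] = float(index == end - 1)
            feats[f"{cat}_inside"] = 1.0
    return feats

tokens = ["John", "praised", "the", "camera"]
spans = [("NP", 0, 1), ("VP", 1, 4), ("NP", 2, 4)]
print(token_features(tokens, 3, spans))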
17.5.2 Results Evaluation One of the important limitations of this research is the lack of a meaningful evaluation methodology. This problem can easily be bypassed by imposing restrictions on the problem formulation, or by using only the information about text anchors. This fairly
often makes it possible to use the evaluation methodology known from the Information Retrieval field, namely precision, recall and F-measure for exact or inexact matches. However, an important part of the results is not taken into account in this process, which calls for new evaluation methods in this field. Empirical results and their analysis will be presented during the conference.
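For reference, a minimal sketch of exact-match precision, recall and F-measure over predicted and gold (anchor, type) pairs is given below (our own illustrative code).

def exact_match_prf(predicted, gold):
    """Precision, recall and F1 for exact matches of (anchor, type) pairs."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = {(frozenset({3, 4}), "direct_subjective"), (frozenset({7}), "agent")}
pred = {(frozenset({3, 4}), "direct_subjective"), (frozenset({9}), "agent")}
print(exact_match_prf(pred, gold))  # (0.5, 0.5, 0.5)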
17.6 Conclusions Despite being deeply studied in recent years, semantic-oriented text analysis, including Opinion Mining, still awaits more sophisticated processing techniques that would enable its meaningful application. Even though some of its sub-tasks seem to be computationally solved, the field lacks a synthetic approach. This paper elaborates on some aspects of the Information Retrieval model used for the automated opinion-mining task. The benefits of utilising the structural pattern recognition approach are discussed. The application of a new computational model, the presented Neuronal Group Learning paradigm, shows this problem in a new light, creating a promising research perspective. However, the presented complete annotation task calls for an entirely new research and result evaluation methodology that needs to be developed for more meaningful analysis.
References 1. Esuli, A., Sebastiani, F., Urciuoli, I.: Annotating expressions of opinion and emotion in the Italian content annotation bank. In: Proceedings of the Sixth International Language Resources and Evaluation (LREC 2008), Marrakech, Morocco (2008) 2. Argamon, S., Bloom, K., Esuli, A., Sebastiani, F.: Automatically determining attitude type and force for sentiment analysis. In: Proceedings of the 3rd Language & Technology Conference (LTC 2007), Poznań, PL, pp. 369–373 (2007) 3. Bethard, S., Yu, H., Thornton, A., Hatzivassiloglou, V., Jurafsky, D.: Automatic extraction of opinion propositions and their holders. In: Proceedings of the AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications, Stanford, USA (2004) 4. Breck, E., Choi, Y., Cardie, C.: Identifying expressions of opinion in context. In: Veloso, M.M. (ed.) Proceedings of the 20th International Joint Conference on Artificial Intelligence, Hyderabad, India, January 6-12, pp. 2683–2688. AAAI Press, Menlo Park (2007) 5. Charniak, E.: A maximum-entropy-inspired parser. In: Proceedings of 6th Applied Natural Language Processing Conference, Seattle, Washington, USA, April 29 – May 4, pp. 132–139 (2000) 6. Choi, Y., Cardie, C., Riloff, E., Patwardhan, S.: Identifying sources of opinions with conditional random fields and extraction patterns. In: Proceedings of Human Language Technology Conference and Conference on Empirical Methods in NLP, Vancouver, British Columbia, Canada, October 6-8. The Association for Computational Linguistics (2005) 7. Edelman, G.M.: Neural Darwinism: The Theory of Neuronal Group Selection. Basic Books, New York (1987)
8. Fritzke, B.: A growing neural gas network learns topologies. In: NIPS, vol. 7, pp. 625–632. MIT Press, Cambridge (1994) 9. Hadders-Algra, M.: The neuronal group selection theory: a framework to explain variation in normal motor development. Developmental Medicine & Child Neurology 42, 566–572 (2000) 10. Haykin, S.: Neural Networks: A Comprehensive Foundation. Macmillan College Publishing Company, New York (1994) 11. Kim, S.M., Hovy, E.: Extracting opinions, opinion holders, and topics expressed in online news media text. In: COLING-ACL, Sydney, Australia (2006) 12. Pryczek, M.: Neuronal groups and interrelations. In: Proceedings of the International Multiconference on Computer Science and Information Technology, Wisła, Poland, October 20-22, vol. 3, pp. 221–227 (2008) 13. Pryczek, M.: Supervised object classification using adaptive active hypercontours with growing neural gas representation. Journal of Applied Computer Science 16(2), 69–80 (2008) 14. Quirk, R., Greenbaum, S., Leech, G., Svartvik, J.: A comprehensive grammar of the English language. Longman, London (1985) 15. Szczepaniak, P.S., Tomczyk, A., Pryczek, M.: Supervised web document classification using discrete transforms, active hypercontours and expert knowledge. In: Zhong, N., Liu, J., Yao, Y., Wu, J., Lu, S., Li, K. (eds.) Web Intelligence Meets Brain Informatics. LNCS (LNAI), vol. 4845, pp. 305–323. Springer, Heidelberg (2007) 16. Tomczyk, A.: Active hypercontours and contextual classification. In: Proceedings of 5th International Conference on Intelligent Systems Design and Applications (ISDA), pp. 256–261. IEEE Computer Society Press, Los Alamitos (2005) 17. Tomczyk, A., Szczepaniak, P.S., Pryczek, M.: Active contours as knowledge discovery methods. In: Corruble, V., Takeda, M., Suzuki, E. (eds.) DS 2007. LNCS (LNAI), vol. 4755, pp. 209–218. Springer, Heidelberg (2007) 18. Wiebe, J., Wilson, T., Cardie, C.: Annotating expressions of opinions and emotions in language. Language Resources and Evaluation 39(2), 165–210 (2005)
Chapter 18
Rough Set Based Concept Extraction Paradigm for Document Ranking Shailendra Singh, Santosh Kumar Ray, and Bhagwati P. Joshi
Abstract. On the World Wide Web, the open-domain Question Answering System is one of the emerging information retrieval systems, which are becoming popular day by day as a means to get succinct relevant answers in response to users' questions. In this paper, we address a rough set based method for document ranking, which is one of the major tasks in the representation of retrieved results and directly contributes towards the accuracy of a retrieval system. Rough sets are widely used for document categorization, vocabulary reduction, and other information retrieval problems. We propose a computationally efficient rough set based method for ranking documents. The distinctive point of the proposed algorithm is that it puts more emphasis on the presence and position of concept combinations instead of term frequencies. We have experimented over a set of standard questions collected from TREC, Wordbook, and WorldFactBook using Google and our proposed method, and found a 16% improvement in document ranking performance. Further, we have compared our method with the online Question Answering System AnswerBus and observed a 38% improvement in ranking relevant documents on top ranks. We conducted more experiments to judge the effectiveness of the information retrieval system and found satisfactory performance results. Keywords: Rough sets, Document Ranking, Concept Extraction, and Question Answering System. Shailendra Singh Samsung India Software Centre, Noida, India e-mail: [email protected] Santosh Kumar Ray Birla Institute of Technology, Muscat, Oman e-mail: [email protected] Bhagwati P. Joshi Birla Institute of Technology, Noida, India e-mail: [email protected]
18.1 Introduction Today, the World Wide Web has become the chief source of information for everyone, from general users to researchers, for fulfilling their information needs. A recently published article [1] says that the number of web pages on the Internet has increased tremendously, crossing the 1 trillion count in 2008, whereas it was only 200 billion in 2006 as reported in [17]. Therefore, the management of this huge volume of data is a big challenge for search engines like Google, which help users to retrieve information from the Web. However, these search engines return a list of web pages for the user query, and most of the time the retrieved web pages do not provide precise information and include irrelevant information in the top ranked results. This prompts researchers to look for an alternative information retrieval system that can provide answers to user queries in succinct form. The Question Answering System (QAS) is one of the efficient information retrieval systems that has attracted the attention of researchers and users equally. QASs, unlike other retrieval systems, combine information retrieval and information extraction techniques to present precise answers to questions posed in a natural language. A typical QAS consists of three distinct phases: Question Processing, Document Processing, and Answer Processing. The question processing phase classifies user questions, derives expected answer types, extracts keywords, and reformulates the question into multiple similar-meaning questions. Reformulation of a query into similar-meaning queries is also known as query expansion, and it boosts the recall of the information retrieval system. The document processing phase retrieves the documents and ranks them using ranking algorithms. In the answer processing phase, the system identifies the candidate answer sentences, ranks them, and finally presents the answers using information extraction techniques. It is obvious that the ranking algorithm is very important for improving the precision of an information retrieval system. This paper focuses on the document processing phase to enhance the precision of the QAS. The proposed rough set based algorithm puts more emphasis on the presence of concept combinations rather than on the presence of individual terms in the document. It also considers the fact that the titles and subtitles of documents tend to describe the essence of the document, and the presence of concept combinations in these locations increases the relevance of the document. In this paper, Sect. 18.2 describes previous research work while Sect. 18.3 briefly introduces the rough set basics relevant to this paper. Section 18.4 explains the detailed methodology of the proposed rough set based document ranking algorithm. We present our observations and results in Sect. 18.5. In the last section, we state our conclusion and future directions for building next generation QASs.
18.2 Related Work Ranking of documents has always been an important part of information retrieval systems, specifically for systems which use the World Wide Web
as a knowledge base. There are a number of proposed document ranking models such as the extended Boolean model [11], the Vector space model [7], and the Relevance model [4]. These models are largely dependent on query term frequency, document length, etc. to rank the documents, and these methods are computationally fast. However, they ignore the linguistic features and the semantics of the query as well as of the documents, which adversely affects their retrieval performance. References [16] and [10] propose conceptual models which map a set of words and phrases to concepts and exploit their conceptual structures for retrieval. Reference [13] proposes an ontology hierarchy based approach for automatic topic identification which can be extended to automatic text categorization. These models are complicated but retrieve more precise information in comparison with other statistical models. However, these methods are not able to handle the imprecise information necessary to fulfill users' needs. Therefore, rough set based methods [12, 2] were proposed for document classification to handle imprecise and vague information. Reference [5] proposes automatic classification of WWW bookmarks based on rough sets, while [18] proposes an extension of the document frequency metric using rough sets. They have used the indiscernibility relation for text categorization. In this paper, we propose a document ranking method which uses an extension of their research work.
18.3 Introduction to Rough Sets Rough set theory [8, 6] was proposed by Z. Pawlak in 1982. It is an extension of classical set theory which deals with the vagueness and uncertainty present in a data set. It has been successfully applied in classification and knowledge discovery from incomplete knowledge databases. In rough set theory, an information system is defined as a 4-tuple (U, A, Va, f), where U = {x1, . . . , xn} is a non-empty finite set of objects called the universe, A = {a1, . . . , ak} is a non-empty finite set of attributes, Va is the value set of ai ∈ A, and f : U × A → Va is an information function. For an information system I = (U, A, Va, f), a relation IND(B) for any B ⊆ A is defined as

IND(B) = {(x, y) ∈ U × U | ∀b ∈ B, f(x, b) = f(y, b)}

The relation IND(B) is called the B-indiscernibility relation. If (x, y) ∈ IND(B), then objects x and y are indiscernible with respect to the attribute set B. The relation IND(B) partitions the universe U into equivalence classes. Let us consider an example information system as shown in Table 18.1. This information system consists of five documents represented by D1 to D5 and five attributes defined as Year, Alaska, Purchased, Bought, and Answer. The first four attributes are conditional attributes and the last one is called the decision attribute. The numbers in Table 18.1 indicate the frequency of the conditional attribute terms in the documents.
Table 18.1 Rough set based information system

      Year   Alaska   Purchased   Bought   Answer
D1     2       1         0          1        Y
D2     2       2         1          0        N
D3     2       2         0          1        N
D4     2       1         0          0        N
D5     1       0         0          0        N
In this information system, the attribute set B = {Year, Alaska, Purchased} divides the universe U = {D1, D2, D3, D4, D5} into four equivalence classes {{D1, D4}, {D2}, {D3}, {D5}}. Here documents D1 and D4 are indiscernible with respect to the attribute set B. The indiscernibility relation is an important concept of rough set theory and is widely used for text categorization and vocabulary reduction. The equivalence classes obtained from the indiscernibility relation are used to define set approximations.
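A minimal sketch of the B-indiscernibility partition for the example in Table 18.1 (our own illustrative code) is shown below.

from collections import defaultdict

# Conditional attribute values from Table 18.1: (Year, Alaska, Purchased, Bought)
docs = {
    "D1": {"Year": 2, "Alaska": 1, "Purchased": 0, "Bought": 1},
    "D2": {"Year": 2, "Alaska": 2, "Purchased": 1, "Bought": 0},
    "D3": {"Year": 2, "Alaska": 2, "Purchased": 0, "Bought": 1},
    "D4": {"Year": 2, "Alaska": 1, "Purchased": 0, "Bought": 0},
    "D5": {"Year": 1, "Alaska": 0, "Purchased": 0, "Bought": 0},
}

def indiscernibility_classes(objects, attributes):
    """Partition the universe into equivalence classes of IND(B)."""
    classes = defaultdict(set)
    for name, values in objects.items():
        key = tuple(values[a] for a in attributes)   # objects with equal values on B
        classes[key].add(name)                       # fall into the same class
    return list(classes.values())

B = ["Year", "Alaska", "Purchased"]
print(indiscernibility_classes(docs, B))  # [{'D1', 'D4'}, {'D2'}, {'D3'}, {'D5'}]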
18.4 Rough Set Based Document Ranking Method Document retrieval is one of the crucial phases of an information retrieval system. A good document ranking algorithm can help the information retrieval system to bring more relevant documents to the top even though these documents may not contain the keywords in the user's question. The proposed method does not consider term frequencies for ranking the retrieved documents, as algorithms based on term frequencies tend to be biased towards longer documents. The proposed algorithm expands the user query, selects the relevant features from the set of documents returned by search engines, and ranks the extracted concept combinations according to their relevance to the user's question. Finally, the algorithm re-ranks the documents based on the position of the concept combinations in the set of documents.
18.4.1 Concept Combination Ranking Algorithm In this section, we describe an algorithm that uses the indiscernibility relation of rough set theory to rank the concept combinations. The basic idea is based on the algorithm discussed in [18], which uses document frequency to extract the
important features from a set of documents and categorizes them on the basis of their features (terms). We extend their algorithm to rank the concept combinations obtained from the query expansion phase. The underlying intuition is that a document is more relevant if it contains a combination of concepts together rather than individual concepts. Let us assume that the user query contains concepts C1, C2, . . . , Cn and the query is expanded using the query expansion algorithm proposed in [9]. The key concepts in the expanded queries are then grouped into concept combinations using the Cartesian product and ranked according to the knowledge quantity contained in them. The complete algorithm for ranking the concept combinations is described below, see Algorithm 18.1.
Algorithm 18.1. Concept Extraction(Q, D)
Input: User query (Q) and set of documents (D)
Output: Ranked concepts list (Gr)
Step 1: Extract key concepts C1, C2, . . . , Cn from the query.
Step 2: Expand the query using the query expansion algorithm [9]. The resulting query is C1 ∪ C2 ∪ . . . ∪ Cn where Ci = Ci1 ∪ Ci2 ∪ . . . ∪ Cik and Cij indicates the jth semantically related word to concept Ci.
Step 3: Let G = C1 × C2 × · · · × Cn where × indicates the Cartesian product.
Step 4: Define an information system I = (U, A, V, f), where U = {Di | Di ∈ D}, A = {Gi | Gi ∈ G}, V is the domain of values of Gi, and f is an information function (U, A) → V such that:

f(Di, Gi) = 1 if all concepts in Gi are present in Di, and 0 otherwise

Step 5: Determine the "Knowledge Quantity" (KQ) of Gi using Eq. (18.1)

KQi = m(n − m)    (18.1)

where n and m represent the cardinality of D and the number of documents in which concept group Gi occurs, respectively.
Step 6: Repeat Step 5 for all Gi.
Step 7: Sort G according to "Knowledge Quantity" and return Gr (sorted G).
Step 8: END
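A minimal sketch of this concept-combination ranking (our own illustrative code; the query expansion step is stubbed out, documents are given as plain lowercase strings, and sorting in descending KQ order is an assumption) could look as follows.

from itertools import product

def concept_extraction(expanded_concepts, documents):
    """Rank concept combinations by Knowledge Quantity KQ = m * (n - m).

    expanded_concepts: list of lists, one list of semantically related words per
    key concept (the output of the query expansion step, not implemented here).
    documents: mapping of document id -> lowercased document text.
    """
    n = len(documents)
    groups = list(product(*expanded_concepts))           # Step 3: Cartesian product
    ranked = []
    for group in groups:
        m = sum(all(c in text for c in group)            # f(D_i, G_i) = 1 iff all
                for text in documents.values())          # concepts occur in D_i
        ranked.append((group, m * (n - m)))              # Eq. (18.1)
    ranked.sort(key=lambda pair: pair[1], reverse=True)
    return ranked

docs = {"D1": "alaska was purchased in 1867", "D2": "alaska is cold", "D3": "the louvre"}
expanded = [["alaska"], ["purchased", "bought"]]
print(concept_extraction(expanded, docs))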
18.4.2 Document Ranking Algorithm The proposed document ranking algorithm takes the ranked concept combinations as discussed in Sect. 18.4.1 and searches the document set for these concept combinations. The algorithm gives weight to the most descriptive concepts of the document, which are used to define the title or subtitles. Secondly, in the answer extraction phase, we consider sentences which contain a larger number of concepts to be more relevant. Algorithm 18.2 (Document Ranking) describes the proposed document ranking algorithm.
Algorithm 18.2. Document Ranking(Q, D)
Input: User query (Q) and set of documents (D)
Output: Ranked documents list (Dr)
Step 1: Run the Concept Extraction(Q, D) algorithm to get the ranked list of concept groups.
Step 2: For each document Di ∈ D and concept group Gj, compute the document score (Wi1) using Eq. (18.2):

Wi1 = 1 + W0 ∑_{1≤j≤p and Gj⊂Di} (p − rj)/p    (18.2)

where p is the cardinality of the set G (Step 3 in Algorithm 18.1) and rj is the rank of Gj obtained in Step 1. W0 is the initial weight assigned to each document.
Step 3: For each document Di ∈ D and concept group Gj, re-compute the document score (Wi2) using Eq. (18.3):

Wi2 = Wi1 + k1 Wi1 ∑_{t⊆Gj and t is in one sentence} ats/bj + k2 Wi1 ∑_{t⊆Gj and t is in one subtitle} ath/bj + k3 Wi1 ∑_{t⊆Gj and t is in title} att/bj    (18.3)

Here k1, k2, and k3 are constants indicating the weights assigned to the occurrence of a concept combination in sentences, sub-titles, and titles within the documents. ats, ath and att are the cardinalities of the subset t in sentences, sub-titles, and the title respectively. bj is the cardinality of Gj.
Step 4: Rank the document set according to the document scores obtained in Step 3.
Step 5: END
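A heavily simplified sketch of this scoring (our own illustrative code, not the authors' implementation: the subsets t of Eq. (18.3) are reduced to single concepts, the cardinalities ats, ath and att are treated as 0/1 indicators, and the values of W0 and k1–k3 are arbitrary) is given below.

def document_ranking(ranked_groups, documents, w0=1.0, k1=1.0, k2=2.0, k3=3.0):
    """Simplified scoring following Eqs. (18.2) and (18.3).

    ranked_groups: concept groups sorted by Knowledge Quantity (rank 1 = best).
    documents: doc id -> dict with 'title', 'subtitles' (list) and 'sentences' (list).
    """
    p = len(ranked_groups)
    scores = {}
    for doc_id, doc in documents.items():
        full_text = " ".join([doc["title"]] + doc["subtitles"] + doc["sentences"])
        w1 = 1.0 + w0 * sum((p - rank) / p
                            for rank, group in enumerate(ranked_groups, start=1)
                            if all(c in full_text for c in group))   # Eq. (18.2)
        bonus = 0.0
        for group in ranked_groups:                                  # Eq. (18.3), with the
            b = len(group)                                           # subsets t reduced to
            for c in group:                                          # single concepts
                bonus += k1 * w1 * any(c in s for s in doc["sentences"]) / b
                bonus += k2 * w1 * any(c in s for s in doc["subtitles"]) / b
                bonus += k3 * w1 * (c in doc["title"]) / b
        scores[doc_id] = w1 + bonus
    return sorted(scores, key=scores.get, reverse=True)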
18.5 Experiments and Results 18.5.1 Data Collection To test the efficiency of the proposed algorithm, we have chosen a set of 300 questions compiled using TREC [14], Wordbook [15], WorldFactbook [3] and other standard resources. Further, we have collected a set of 25 documents for each of the 300 questions separately. Thus, our knowledge base is a collection of 7500 documents, including duplicates. Our present knowledge base contains the correct answers for 295 questions, while the answers for the remaining 5 questions are not included because they were not present in the reference knowledge base.
18.5.2 Comparative Performance Measurement In this section we present our results and observations. We carried out experiments over the set of collected questions and their corresponding documents.
In the first phase, we fed the original questions into Google; we were able to retrieve satisfactory answers for 258 questions in the top 10 documents. Further, we expanded the original questions using Step 2 of the Concept Extraction algorithm (Algorithm 18.1), as explained in Sect. 18.4.1. These expanded questions were fed into Google and we found that a total of 276 questions were answered correctly in the top 10 documents returned by Google for each question. The percentage of correct answers increased from 86% to 92% as a result of query expansion. As this paper is about document ranking, we present our results in the context of the expanded questions. After re-ranking the document set using the Document Ranking algorithm, we were able to get the correct answers of only 270 questions in the first 10 documents for each question. However, the average number of documents containing correct answers in the top 10 documents increased from 3.7 to 4.3. This indicates an improvement of 16% in document retrieval. We also observed an increased number of correct answers in the top ranked documents. There were 94 questions whose answers were present in at least 5 documents out of the top 10 documents using Google, but using the proposed algorithm this count increased to 142. Our proposed algorithm shows an improvement of 16% in the top ranked results. These results reflect that the Document Ranking algorithm brings more relevant documents to higher ranks. In another observation, initially there were 40 questions with precision 1, but after re-ranking this increased to 82 questions. We have summarized these results in Table 18.2.

Table 18.2 Comparative performance analysis

S.N  Performance Parameters                                            With Google   With Proposed Approach
1    No. of questions whose answers are present in at least one
     of the top 10 documents                                               276               270
2    No. of questions whose answers were present in at least 5
     documents (out of first 10 documents)                                  94               142
3    Average no. of documents containing correct answers
     (out of first 10 documents)                                           3.7               4.3
4    Number of questions with answer in the first document                 160               175
5    Average rank of the document containing first correct answer         2.9               2.6
Results of the experiments for the first 50 questions are shown in Fig. 18.1. As seen from the figure, the number of documents containing correct answers is higher compared to the original retrieval. Thus, our algorithm helps the information retrieval system to improve its precision, which is more explicit in Fig. 18.2. Figure 18.2 can be derived from Fig. 18.1 by using the formula for precision calculation. However, there were 98 questions where the number of documents containing correct answers exceeded 10. In such cases the precision calculation is a little different.
Fig. 18.1 Number of documents containing correct answers
Fig. 18.2 Precision graph
In such cases, for the precision calculation, we assume that there were only 10 documents containing correct answers. Further, we show the rank of the document containing the first correct answer in Fig. 18.3. The rank of the first document with a correct answer was the same for Google and our algorithm in 36% of cases (mostly rank 1, and hence there was no scope for improvement). In 34% of cases, the algorithm improved the rank of the first document containing a correct answer, while the rank declined for 24% of the questions. Thus, it is clear from Fig. 18.3 that the algorithm improves the rank of relevant documents.
Fig. 18.3 Documents’ rank containing first correct answer
We tested our algorithm with the online Question Answering System AnswerBus. AnswerBus supports queries in multiple languages and has consistently performed well in TREC evaluations. In Fig. 18.4, we show the ranks of the first document with a correct answer for the first fifty questions fed into AnswerBus. Initially, the average rank of the first document containing the correct answer was 2.2, and it improved to 1.76 when we re-ranked the documents returned by AnswerBus using the proposed algorithm. In 38% of the questions the rank of the document containing the correct answer improved, while it went down for 12% of the questions. In 50% of cases, either AnswerBus could not return a document containing the correct answer or the first document itself already contained the correct answer; hence, in both cases there was no scope for improvement. We also compared the performance of the proposed algorithm by computing the Mean Reciprocal Rank (MRR) for all questions. The MRR is calculated using Eq. (18.4):

MRR = ∑_i 1/i    (18.4)

Here, i is the rank of a document containing the correct answer. For example, if a question has correct answers in documents number 2, 4, and 5 then its MRR will be
Fig. 18.4 Document’s rank containing first correct answer for AnswerBus
Fig. 18.5 Mean reciprocal rank before and after the ranking
0.95 (0.5 + 0.25 + 0.2). We have computed the MRR for all the questions before and after the application of the algorithm. The MRR scores for the first 50 questions are shown in Fig. 18.5. In 50% of the questions the MRR score increased using the proposed algorithm, which clearly indicates an improvement in the performance of the ranking system. In 16% of cases the MRR score was reduced, while in 26% of cases none of the documents returned by AnswerBus contained the correct answer; in such cases the MRR score was zero.
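A minimal sketch of this MRR computation as defined in Eq. (18.4) follows (our own illustrative code); note that, as in the example above, it sums the reciprocal ranks of all documents containing a correct answer.

def mrr(answer_ranks):
    """Sum of reciprocal ranks of documents containing a correct answer (Eq. 18.4)."""
    return sum(1.0 / rank for rank in answer_ranks)

# Correct answers found in documents ranked 2, 4 and 5:
print(mrr([2, 4, 5]))  # 0.95  (0.5 + 0.25 + 0.2)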
18.6 Conclusion and Future Scope In this paper, we have presented two algorithms to rank documents conceptually. Our first algorithm ranks concept combinations of the documents, which is useful for finding more conceptually relevant answers. Further, the second algorithm ranks the retrieved documents using the position of the concept combinations, which improves the precision of the information retrieval system. Though this algorithm uses modern semantic tools such as rough sets and ontologies, it is a simple and computationally efficient method. We have experimented with 7500 documents retrieved using Google as well as with the online QAS AnswerBus to judge the effectiveness of the proposed method.
References 1. Alpert, J., Hajaj, N.: We Knew the Web was Big (2008), http://googleblog.blogspot.com/2008/07/we-knew-web-was-big.html 2. Bao, Y., Aoyama, S., Yamada, K., Ishii, N., Du, X.: A Rough Set Based Hybrid Method to Text Categorization. In: Second international conference on web information systems engineering (WISE 2001), vol. 1, pp. 254–261. IEEE Computer Society, Washington (2001)
3. CIA the World Factbook, https://www.cia.gov/library/publications/the-world-factbook/ 4. Crestani, F., Lalmas, M., Rijsbergen, J., Campbell, L.: Is This Document Relevant? . . . Probably. A Survey of Probabilistic Models in Information Retrieval. ACM Computing Surveys 30(4), 528–552 (1998) 5. Jensen, R., Shen, Q.: A Rough Set-Aided System for Sorting WWW Bookmarks. In: Zhong, N., Yao, Y., Ohsuga, S., Liu, J. (eds.) WI 2001. LNCS (LNAI), vol. 2198, pp. 95–105. Springer, Heidelberg (2001) 6. Komorowski, J.: Rough Set: A Tutorial, folli.loria.fr/cds/1999/library/pdf/skowron.pdf 7. Lee, D.L., Chuang, H., Seamons, K.: Document Ranking and the Vector Space Model. IEEE Software 14(2), 67–75 (1997) 8. Pawlak, Z.: Rough Sets. International Journal of Computer and Information Science 11(5), 341–356 (1982) 9. Ray, S.K., Singh, S., Joshi, B.P.: Question Answering Systems Performance Evaluation – To Construct an Effective Conceptual Query Based on Ontologies and WordNet. In: Proceedings of the 5th Workshop on Semantic Web Applications and Perspectives, Rome, Italy, December 15-17. CEUR Workshop Proceedings (2008) ISSN 1613-0073 10. Rocha, C., Schwabe, D., Poggi de Aragão, M.: A Hybrid Approach for Searching in the Semantic Web. In: 13th International Conference on World Wide Web, pp. 374–383. ACM, New York (2004) 11. Salton, G., Fox, E.A., Wu, H.: Extended Boolean Information Retrieval. Communications of the ACM 26(11), 1022–1036 (1983) 12. Singh, S., Dey, L.: A Rough-Fuzzy Document Grading System for Customized Text Information Retrieval. Information Processing and Management: an International Journal 41(2), 195–216 (2005) 13. Tiun, S., Abdullah, R., Kong, T.E.: Automatic Topic Identification Using Ontology Hierarchy. In: Gelbukh, A. (ed.) CICLing 2001. LNCS, vol. 2004, pp. 444–453. Springer, Heidelberg (2001) 14. Text Retrieval Conference, http://trec.nist.gov/ 15. The World Book, http://www.worldbook.com/ 16. Vallet, D., Fernández, M., Castells, P.: An Ontology-Based Information Retrieval Model. In: Gómez-Pérez, A., Euzenat, J. (eds.) ESWC 2005. LNCS, vol. 3532, pp. 455–470. Springer, Heidelberg (2005) 17. Wirken, D.: The Google Goal Of Indexing 100 Billion Web Pages (2006), www.sitepronews.com/archives/2006/sep/20.html 18. Xu, Y., Wang, B., Li, J.T., Jing, H.: An Extended Document Frequency Metric for Feature Selection in Text Categorization. In: Li, H., Liu, T., Ma, W.-Y., Sakai, T., Wong, K.-F., Zhou, G. (eds.) AIRS 2008. LNCS, vol. 4993, pp. 71–82. Springer, Heidelberg (2008)
Chapter 19
Knowledge Modeling for Enhanced Information Retrieval and Visualization Maria Sokhn, Elena Mugellini, and Omar Abou Khaled
Abstract. The advent of technologies in information retrieval driven by users' requests calls for an effort to annotate multimedia content. The exponential growth of digital resources makes the use of manual solutions unfeasible. In recent years domain-specific ontologies (abstract models of some domain) have been adopted to enhance content-based annotation and retrieval of these digital resources. In this paper we present an ontology that models the implicit and explicit knowledge and information conveyed within a conference life cycle. Our model, presented as a High-level modEL for cOnference (HELO), is used on the one hand to perform high-level annotation of video recordings of a conference, allowing granular search facilities and complex queries, and on the other hand to enhance knowledge retrieval and visualization. As a proof of concept, HELO has been integrated into CALIMERA (Conference Advanced Level Information ManagEment & RetrievAl), a framework we developed that handles conference information. Keywords: Knowledge Modeling, Knowledge Visualization, Ontologies, Multimedia Information Retrieval.
19.1 Introduction While recording technology is becoming easier to use and more affordable due to technological developments, an increasing number of conferences and scientific events are now being recorded. However, the resulting multimedia data, such as video recordings1, lacks semantic content annotations, thus giving rise to
Maria Sokhn · Elena Mugellini · Omar Abou Khaled University of Applied Sciences of Fribourg, Boulevard Perolles, 80, 1700, Fribourg e-mail: {maria.sokhn,elena.mugellini,omar.aboukhaled}@hefr.ch 1
In the rest of the paper “video recordings” refers to the recorded talks within a conference. Each conference may have several sessions. Each session is composed of one or multiple talks.
the so-called semantic gap, which, as defined by [18] (other works present similar definitions [19, 11, 20]), is "the lack of coincidence between the information that one can extract from the visual data and the interpretation that the same data has for a user in a given situation". Video data presents challenges to information retrieval due to the dynamic nature of the medium and to the lack of advanced and precise methods that handle non-textual features [21]. The information implicitly conveyed in this large available digital content is only accessible today if considerable effort is made to analyze, extract and create useful semantic annotations. In recent years domain-specific ontologies (being abstract models of some domains) have been adopted to enhance content-based annotation and retrieval of these digital resources [22, 21, 14]. In fact, using ontologies to describe multimedia resources provides ways to define well structured concepts and concept relationships, easing the burdensome tasks of annotation and retrieval [2]. As argued in [3], the use of ontologies improves both the automatic annotation and the retrieval process. Indeed, using concepts to annotate data maintains efficient annotation by avoiding inconsistent combinations of annotations and by allowing annotation using generic concepts in cases of uncertainty about the data description. As for retrieval, the improvement comes from offering users consistent results related to their query, based on the defined concepts of the domain ontology and their relationships. In addition to the annotation and retrieval improvement, we believe that the ontology may considerably enhance browsing, navigation and data accessibility when used to visualize the resulting information. In this paper we present HELO, a High-level modEL for cOnference, which describes and structures the implicit and explicit information and knowledge conveyed within a conference life cycle. HELO aims at bridging the semantic gap, providing users with efficient and granular annotation and search facilities allowing for complex queries based on semantic criteria such as: "Find a sequence of a recorded talk, in 2007, in Prague, where a colleague of professor Y talked, after a coffee break, about image indexing and presented a demo of his work", "Find a recorded talk of the speaker who wrote the paper Z", "Find a recorded talk of a professor expert in semantic web explaining the benefit of ontology use", etc. Such queries require resources to be semantically annotated based on defined concepts related to the conference domain. Our work also highlights the importance and benefit of using the conference ontology model in visualizing the query results. The remaining part of the paper is structured as follows: In Sect. 19.2, we present related works. In Sect. 19.3, we detail the conference model HELO. In Sect. 19.4, we briefly describe the framework in which HELO is integrated and we detail the conference-model-based annotation tool and the ontology-based visualization interface. Finally, in Sect. 19.5 we conclude and present future work.
19.2 Related Works Several works and projects have been designed to handle multimedia information retrieval similar to the ones described in [4, 12, 11]. Some of them, such as PHAROS
[5], are dedicated to domain-independent multimedia information retrieval. Others, such as COALA [7] for TV news retrieval, ConKMeL [8] for e-learning knowledge management, and soccer video annotation [2], have been designed for a broad domain (news, sport, e-learning) of the video data set. Our work is focused on the scientific conference domain. Our goal is to facilitate and enhance the retrieval and replay of video conference recordings. Many projects have been carried out to model conferences, for instance the AKT Reference Ontology [27], the Conference ontology by Jen Golbeck [28], the eBiquity Conference Ontology [29], the ESWC2006 ontology [15], and the ISWC2006 ontology, which are, to the best of our knowledge, the most expressive ontologies describing a conference [14]. According to a detailed analysis within the ESWC and ISWC metadata project, existing ontologies lack the expressiveness required [14], and therefore more expressive ontologies were developed: (i) ESWC2006 has 6 top-level classes: Artefact, Call, Event, Place, Role, Sponsorship. In contrast to other ontologies, it explicitly models relationships between people, roles, and events. (ii) ISWC2006, largely influenced by the ESWC2006 ontology, takes into account several issues that needed to be addressed, such as restructuring some concepts, building on existing standards, and making use of OWL features. Still, these ontologies lack some expressiveness towards the multimedia data generated from a conference, such as, and principally, the video recordings of talks within a conference. The HELO model described in the following section addresses this issue, in addition to several restructurings of the concepts designed in ESWC2006 and ISWC2006.
19.3 High-Level modEL for cOnference: HELO HELO stands for High-level modEL for cOnference. It is an ontological model that describes and structures the implicit and explicit information conveyed within a conference life cycle. Every conference has a life cycle through which information is conveyed. This information includes: the video recordings of the different talks that took place within a conference, the information extracted from their content (video segmentation, keywords, topics, etc.), the talk presentation files (ppt, pdf, etc.), the speaker and audience information (name, affiliation, publications, etc.), the administrative information (conference planning, logistics, etc.), the related demos, the related events, etc. HELO is based on the effort made in the ESWC2006 and ISWC2006 project (enhanced in 2007 as well) and it integrates several other concepts extracted from ontologies related to the conference domain. Figure 19.1 presents a graphical overview of HELO. As shown in this figure, the model is a two-layer concept: (i) a user-oriented annotation layer and (ii) a user-oriented query & visualization layer, both related with an is-part-of relation. As argued earlier, semantic annotations should be provided for existing data resources. Therefore, the user-oriented annotation layer offers granular concepts allowing users to manually or semi-manually annotate conference information, more precisely video recordings. This layer is composed of the following 6 top classes:
Fig. 19.1 HELO overview
Group, Person, OrganisedEvent, Location, MultimediaDocument and Topic. Other important concepts have been defined as subclasses or object and data properties such as the concept of Role, Expertise, VideoRecordings, etc.. The major changes, structuring and reorganizing compared to the above listed ontologies, are the introduction of the MultimediaDocument concept which describes all the media conveyed within a conference. More specifically, the VideoRecording concept which describes the video recordings of the different talks, the Expertise concept which describes the expertise level per domain, and finally the Topic concept which describes the thematics that may occur in scientific conferences, is based on ACM taxonomy [6] guiding the users in their annotation. As said earlier HELO aims not only to help the users in the annotation process but also to enhance knowledge and information querying and visualization as well. Therefore, HELO integrates the concept of Scope that models the conference through 8 different views referred in our work as Scope (Fig. 19.1 User oriented query & visualization layer). Each scope is a concept that may have one or several other concepts from the user oriented annotation layer to which it is linked. By that, the knowledge conveyed within a conference is on the one hand described according to several concepts that model most of the details within a conference (User oriented annotation layer) and on the other hand it is searched using a user oriented and simplified method. In order to provide a common retrieval framework for different users we proposed different scopes corresponding to a set of interpretations of the audiovisual content based on users’ retrieval needs. In order to identify these different scopes, we studied a set of queries (such as the ones listed in the introduction) put forward by different users. Based on that we have identified 8 Scopes which correspond to the most common retrieval activities of users in a scientific conference environment. The user oriented query & visualization layer is then composed of: • PersonScope includes information about people involved in a conference e.g. names, roles, affiliation. It allows users to make queries such as: find the videorecording of the talk where a colleague of the chairman made a presentation.
• LocationScope contains information about the conference location e.g. continent, city, room. It allows users to make queries such as: find the video-recording of the talk that took place in the building A during the conference session Y.
• TemporalScope concerns conference planning e.g. starting time, parallel sessions, breaks. It allows users to make queries such as: find the video-recording of the talk that took place in the afternoon in parallel to the talk B.
• TypeScope lists several categories of conferences e.g. workshop. It allows users to make queries such as: find a talk given in the academic lecture Y.
• MediaScope gathers all the media information linked to a talk e.g. talk video-recording, presentation document, papers, books. It allows users to make queries such as: find the video-recording segment of the talk related to this paper.
• ThematicScope affiliates a conference to a domain, topic, related events e.g. video-recording indexing, biology. It allows users to make queries such as: find the video-recording part of the talk related to data mining.
• CommunityScope defines communities such as laboratories, research groups, conference committees e.g. AWIC program committee. It allows users to make queries such as: find the video-recording of the talk where a professor from a French university who is in the AWIC program committee made a presentation.
• EventScope describes the events related to a conference event. It allows users to make queries such as: find the video-recording of the talk related to the inauguration of the LHC.
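Although the paper does not publish HELO's concrete vocabulary, the scope-based queries above can be pictured as SPARQL queries over the RDF/OWL annotations. The following sketch, assuming the rdflib package and entirely hypothetical class and property names in a helo: namespace (helo:Talk, helo:speaker, helo:colleagueOf, helo:hasRole, helo:recordedIn), shows roughly how a PersonScope query ("find the video-recording of the talk where a colleague of the chairman made a presentation") might be expressed.

```python
from rdflib import Graph

# All names in the helo: namespace are hypothetical -- the paper does not give
# HELO's URIs; this only illustrates the kind of query a scope enables.
PERSON_SCOPE_QUERY = """
PREFIX helo: <http://example.org/helo#>
SELECT ?video WHERE {
    ?talk    a                 helo:Talk ;
             helo:speaker      ?speaker ;
             helo:recordedIn   ?video .
    ?speaker helo:colleagueOf  ?chair .
    ?chair   helo:hasRole      helo:Chairman .
}
"""

g = Graph()                          # would be loaded with HELO annotations
for row in g.query(PERSON_SCOPE_QUERY):
    print(row.video)                 # empty graph here, so nothing is printed
```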
19.4 HELO in Use
HELO has been integrated within the CALIMERA framework detailed in [1]. CALIMERA stands for Conference Advanced Level Information ManagEment and RetrievAl. It has been designed to handle conference information, aiming to facilitate the retrieval of video-recordings of the talks within a conference. The CALIMERA framework provides a solution for two main tasks: the knowledge and information management of conference video-recordings as well as their retrieval. Figure 19.2 outlines the global view of the framework (a detailed view is presented in [1]), which is composed of the following modules:
Tools manager: CALIMERA is a tool-independent framework. The tool manager provides the possibility to integrate tools that may be used for data and meta-data management, query & visualization, or both (Fig. 19.2). For a proof of concept we have integrated four principal tools: INDICO [24] to manage the administrative information of a conference, SMAC [23] to record conference talks, CALISEMA (detailed in Sect. 19.4.1) to annotate video recordings and INVENIO [25] to search conference data.
Data & metadata management module (Fig. 19.2): consists of handling the conference high-level information, such as recording talks, segmenting video-recordings, annotating video-recording segments, managing the context information of these talks, etc.
Fig. 19.2 CALIMERA overview
Data & Metadata storage (Fig. 19.2): integrates existing data and metadata formats such as MPEG-7, which is one of the most widely used standards for multimedia description, and RDF & OWL, which are more semantically oriented standards that integrate high-level semantic descriptions. Query & visualization module (Fig. 19.2): queries the data & metadata storage in order to return the video recordings or the set of video-recording sequences of the talks the users are seeking. As shown in Fig. 19.2, the integrated modules are based on the conference model HELO. The following sections present the use of HELO in video-recordings annotation and visualization.
19.4.1 HELO Based Annotation
Since manual annotation is a laborious task, we developed a video annotation tool named CALISEMA to facilitate the video annotation process. CALISEMA has been developed in collaboration with the University of Athena. It integrates an algorithm manager that allows users to choose the segmentation algorithm they want to apply to their video, such as slide-change segmentation for the video recordings of the talks or shot-change detection for other types of video (a demo video, a scientific experiment video, etc.). CALISEMA also has a segment manager that helps users handle (delete, add, or merge) existing segments. Each video segment, represented by a key-frame, or group of segments can be annotated in CALISEMA using the HELO ontology. The description file is afterwards exported to MPEG-7, to OWL, or to both formats. Figure 19.3 shows the CALISEMA interface. The bottom part (1) shows the key-frames of the video recording of the talk; each key-frame corresponds to a slide in the talk presentation. In the left part (2) we view the video sequence corresponding to the chosen slide. Each sequence can be annotated in different ways (top-right part (3)), such as ontology based annotation, whose parameters are shown in the bottom-right part (4).
Fig. 19.3 Slide change based video segmentation, HELO annotation
19.4.2 HELO Based Visualization
The HELO model is based on the study of the retrieval needs of the users, especially on the analysis of the users' queries in a scientific conference environment. Figure 19.4 shows a basic example of submitting a complex query such as “Find
Fig. 19.4 HELO based visualization
Fig. 19.5 Temporal Scope view
a talk recording sequence of the workshop held in 2007, in Geneva, where a colleague of Thomas Barron talked about dark matter and presented a demo of his work. The speaker wrote the paper “introduction a la physique des particules””. Using HELO for visualization provides the users with an interface where the descriptions are grouped based on their use, offering the users the ability to explore the information content in an interactive way (Figs. 19.5 and 19.6 show respectively the visualization by “Temporal” and by “Person”). In fact, multimedia content description based on a model that structures the characteristics and the relationships of a set of scopes helps the user apply a simple and structured retrieval method. This will help them improve the query formulation by expressing their requirements in
Fig. 19.6 Person Scope view -left side-, Person Scope view zoom in -right side-
a more precise way (Fig. 19.4). These features become essential in multimedia retrieval, whose complex content is hard to search using “traditional” keyword-based searching. Figure 19.5 (top left) shows the different visualization options, which are based on the defined Scopes: PersonScope, LocationScope, TemporalScope, TypeScope, MediaScope, ThematicScope, CommunityScope, EventScope.
19.5 Conclusion and Future Works
In this paper we have presented HELO, a conference model that structures the implicit and explicit information and knowledge conveyed along a conference life cycle. HELO aims at building efficient content-based video annotation, allowing users to submit complex queries in order to retrieve conference video recordings. The resulting data is presented to the user based on the HELO Scopes, enhancing browsing, navigation and data accessibility. This helps users to search and browse by visualizing the resulting information, which is expressed via a conference model that follows their viewpoints, requirements and behavior. HELO is integrated into the CALIMERA framework, designed to handle conference information and to allow users to express complex queries. In the future, our goal is to enhance the HELO based visualization beyond its current basic stage and to elaborate the evaluation of the entire framework.
References 1. Sokhn, M., Mugellini, E., Abou Khaled, O.: Knowledge management framework for conference video-recording retrieval. In: The 21st International Conference on Software Engineering and Knowledge Engineering, Boston, USA (July 2009) 2. Bertini, M., Del Bimbo, A., Torniai, C.: Soccer Video Annotation Using Ontologies Extended with Visual Prototypes Content-Based Multimedia Indexing. In: International Workshop. CBMI 2007, June 2007, pp. 25–27 (2007) 3. Hare, J.S., Sinclair, P.A.S., Lewis, P.H., Martinez, K., Enser, P.G.B., Sandom, C.J.: Bridging the semantic gap in multimedia information retrieval - top-down and bottom-up approaches. In: Proc. 3rd European Semantic Web Conference, Budva, Montenegro (June 2006) 4. An advanced software framework for enabling the integrated development of semanticbased information, content, and knowledge (ick) management systems, http://www.vikef.net/ 5. Platform for searching of audiovisual resources across online spaces, http://www.pharos-audiovisualsearch.eu/ 6. Top-Level Categories for the ACM Taxonomy, www.computer.org/mc/keywords/keywords.htm 7. Fatemi, N.: PhD thesis, A semantic views model for audiovisual indexing and retrieval, Ecole polytechnique federale de Lausanne (2003) 8. Huang, W., Mille, A., ODea, M.: Conkmel: A contextual knowledge management framework to support multimedia e-learning (August 2006)
9. Bertini, M., Bimbo, A.D., Pala, P.: Content based indexing and retrieval of TV news. Pattern Recognit. Lett. 22, 503–516 (2001) 10. Snoek, C.G.M., Worring, M.: Multimodal Video indexing: a Review of the State-of-theart. Multimedia tools and applications 25(1), 5–35 (2005) 11. Smeulders, A.W.M., Worring, M., Santini, S., Gupta, A., Jain, R.: Content-based image retrieval. IEEE Trans. Pattern Anal., Machine Intell. 23(9), 1349–1380 (2001) 12. Kazman, R., Al-Halimi, R., Hunt, R., Mantei, W.: Four Paradigms for Indexing Video Conferences. IEEE Multimedia arch. 3, 63–73 (1996) 13. Martin, T., Boucher, A., Ogier, J.M.: Multimedia scenario extraction and content indexing. In: CBMI Proceedings, France, pp. 204–211 (2007) 14. Moller, K., Health, T., Handschuh, S., Domingue, J.: The ESWC and ISWC Metadata Projects. In: Aberer, K., Choi, K.-S., Noy, N., Allemang, D., Lee, K.-I., Nixon, L.J.B., Golbeck, J., Mika, P., Maynard, D., Mizoguchi, R., Schreiber, G., Cudr´e-Mauroux, P. (eds.) ASWC 2007 and ISWC 2007. LNCS, vol. 4825, pp. 802–815. Springer, Heidelberg (2007) 15. Semantic Web Technologies ESWC 2006 (2006), http://www.eswc2006.org/technologies/ontology-content/ 2006-09-21.html 16. FOAF Vocabulary Specification, http://xmlns.com/foaf/0.1 17. Basic Geo (WGS84 lat/long) Vocabulary, http://www.w3.org/2003/01/geo/ 18. Smeulders, A.W.M., Worring, M., Santini, S., Gupta, A., Jain, R.: Content-based image retrieval at the end of the early years. IEEE Transactions on Pattern Analysis and Machine Intelligence (2000) 19. Liu, Y., Zhang, D., Lu, G., Ma, W.Y.: A survey of content-based image retrieval with high-level semantics (January 2007) 20. Graves, A.P.: Iconic indexing for video search. PhD thesis, Queen Mary, University of London (2006) 21. Bagdanov, A.D., Bertini, M., Del Bimbo, A., Serra, G., Torniai, C.: Semantic annotation and retrieval of video events using multimedia ontologies. In: ICSC, pp. 713–720 (2007) 22. Hollink, L., Worring, M.: Building a visual ontology for video retrieval. In: Proceedings of the 13th annual ACM international conference on Multimedia, Hilton, Singapore, November 06-11 (2005) 23. SMAC, Smart Multimedia Archiving for Conference, http://smac.hefr.ch/ 24. INDICO, Web application used for scheduling and organizing events, http://indico.cern.ch/ 25. INVENIO, A set of tools for building and managing an autonomous digital library, http://cdsweb.cern.ch/ 26. CERN, European organization for nuclear research, http://www.cern.ch/ 27. AKT conference ontology, http://www.aktors.org/publications/ontology/ 28. Conference ontology by Jen Golbeck, http://www.mindswap.org/˜golbeck/web/www04photo.owl/ 29. eBiquity Conference Ontology, http://ebiquity.umbc.edu/ontology/
Chapter 20
Entity Extraction from the Web with WebKnox David Urbansky, Marius Feldmann, James A. Thom, and Alexander Schill
Abstract. This paper describes a system for entity extraction from the web. The system uses three different extraction techniques which are tightly coupled with mechanisms for retrieving entity-rich web pages. The main contributions of this paper are a new entity retrieval approach, a comparison of different extraction techniques and a more precise entity extraction algorithm. The presented approach allows domain-independent information to be extracted from the web with only minimal human effort. Keywords: Information Extraction, Web Mining, Ontologies.
20.1 Introduction
The amount of freely available data on the web is huge and it increases every day at a rapid pace. Web sites often host high-quality information about products such as CDs or books, travel destinations, people and so forth. No single human can have an overview of all that distributed information and thus cannot get the entire picture easily. For example, a user wants to find out about new mobile phones, that is, he or she wants to know which mobile phones are available, what features they have, what other people think about them, who sells them, etc. Today, a user might know a good review site and can go there directly. However, chances are low that this site offers all the information the user is looking for, so he or she needs to use a search engine and gather the desired information in a time-consuming manner.
David Urbansky · Marius Feldmann · Alexander Schill, University of Technology Dresden, e-mail: {david.urbansky,marius.feldmann}@tu-dresden.de, [email protected]
James A. Thom, RMIT, e-mail: [email protected]
In this paper, we present the system “WebKnox” (Web Knowledge eXtraction). Our main contributions are a new entity retrieval and extraction approach (Focused Crawl Extraction), a more precise algorithm for entity extraction using seeds (XPath Wrapper Inductor), and the fact that we require only minimal user input in the form of an ontology. The remainder of this paper is structured as follows. Firstly, we give background information on tasks and techniques for information extraction from the web. Secondly, we give an overview of three state-of-the-art information extraction systems. Thirdly, we introduce the architecture of WebKnox and elaborate on the retrieval and extraction techniques that are unique to it. Fourthly, we evaluate the main components from the architecture section, before the paper concludes with a summary and outlook.
20.2 Background and Related Work In contrast to information retrieval (IR), in which the task is to find relevant information for a given query and to rank the results, information extraction (IE) is the process of extracting information to a given target structure such as a template or an ontology. The IE tasks are defined by an input (such as an HTML page) and an output (such as a populated database) [2]. There are several main tasks in web information extraction such as Entity Extraction, which is the task of discovering new instances of a concept, Fact Extraction, which is the task of finding values for given attributes of a given entity, and Concept/Attribute Extraction which is the task of discovering new concepts and attributes. Information can be extracted using a variety of different techniques such as natural language processing, pattern learning [3], wrapper induction [2], visual-based [5], and semantic web based [6] approaches. Very common techniques are pattern learning and wrapper induction, in which free or semi-structured text is analyzed for repeating patterns which can then be generalized and used to find and extract new instances of entities or facts. There is a great variety of systems that extract information from the web, but many work only in certain domains or require considerable user involvement. In this section, we review three significant systems for domain-independent entity extraction from the web. KnowItAll [4] is a domain-independent, unsupervised system that automatically extracts entities and facts from the web. KnowItAll is redundancy-based which means it relies on the assumption that a fact or entity occurs many times on the web. The system’s strength is finding new entities for a given class using pattern learning and wrapper induction. KnowItAll uses a set of domain-independent patterns and queries a search engine with these patterns. The input for KnowItAll is a set of concepts, attributes, and relations. The number of entities that can be found in those relations is however limited, for example very obscure actors are unlikely to appear in the given patterns. Therefore, KnowItAll has also a list extractor that uses seed entities to find structured lists of entities. KnowItAll queries a search engine
with a set of known entities, constructs a wrapper for each returned web page and tries to extract more entities that are in the same format as the seeds. TextRunner’s [1] only input is a corpus of web pages to enable the extraction of information from these pages. This is realized in three steps for every sentence read: The noun phrases of the sentence are tagged, nouns that are not too far away from each other are put into a candidate tuple set and the tuples are analyzed and classified as true or false using a self-supervised classifier [9]. Opposed to KnowItAll, TextRunner does not use extractions from lists and is limited to the entities that can be found in free text patterns. Set Expander for Any Language (SEAL) [8] is a language-independent system that aims to extract entities from the web after having analysed a few seeds. It operates in three main stages, at first it fetches web pages from a search engine by querying it with seeds. After that, all prefixes and suffixes around the seeds are constructed and wrappers are built and finally, SEAL uses a graph algorithm to rank and assess the extracted entities. The discussed state-of-the-art systems can be extended by using a combination of entity extraction from patterns and from structure. The next section describes the design of WebKnox and particularly elevates the novelties in entity extraction, which are the focused crawl entity extraction technique, the XPath Wrapper Inductor for seed extraction, and that we use a semantic foundation to describe the concepts and attributes the system should be searching for.
20.3 Architecture of WebKnox WebKnox is divided into two main extraction processes. First of all, the entity extraction process gets a set of predefined concept names as input (for example “Movie”, “Mobile Phone“ or “Country”) and then queries a multi-purpose search engine such as Google to retrieve pages with possible entity occurrences. After the knowledge base contains some entities, the fact extraction process reads those entity names and also gets predefined information about the attributes that are searched for the entity’s concept. The process then queries a search engine again, tries to extract facts, assesses the extractions and writes the results back to the knowledge base. The focus of the remainder of this paper is set on the entity extraction process, detailed information about the fact extraction process can be found in [7].
20.3.1 Entity Extraction Techniques Figure 20.1 shows the entity extraction process realized by WebKnox. The system uses the following three techniques to extract entities from the web: Phrase Extraction, Focused Crawl Extraction and Seed Extraction. All three techniques have in common that they get the concept names from the knowledge ontology as an input and that they query a general purpose search engine such as Google to retrieve pages that are likely to have entities for the searched concepts. The Phrase
Fig. 20.1 Overview of the entity extraction process
Extraction extracts entities with a relatively high precision, but it does not discover many instances. The Focused Crawl Extraction is then used to find more entities by searching for explicit listings of entities. The Seed Extraction can only be used after another technique, since it relies on previously extracted entities as seeds. This way, WebKnox eliminates the human involvement otherwise needed to specify seeds for this extraction process.
20.3.1.1 Phrase Extraction
The Phrase Extraction (PE) technique borrows its basic idea from the KnowItAll system [4] and queries a search engine with phrases that are likely to link a concept to several entities of that concept. The following queries are used by the PE technique, where CONCEPT is the plural name of a concept: “CONCEPTs such as”, “CONCEPTs like”, “CONCEPTs including”. The quotes are part of the query, that is, only exact matches are sought. For each concept, all queries are instantiated with the concept name and sent to a search engine. Each query's phrase is then searched for in the returned pages, and proper nouns after the phrase are extracted as entities.
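A minimal sketch of this technique is shown below. The query templates are the ones listed above; the proper-noun detection is only approximated by a regular expression, since the paper does not spell out how WebKnox recognizes proper nouns, and the example page text is made up for illustration.

```python
import re

PHRASES = ["{c}s such as", "{c}s like", "{c}s including"]

def build_queries(concept):
    # Quoted phrases for exact-match search engine queries, e.g. "movies such as"
    return ['"%s"' % p.format(c=concept.lower()) for p in PHRASES]

def extract_entities(page_text, concept):
    """Capitalized word sequences that directly follow one of the phrases."""
    proper_noun = r"((?:[A-Z][\w&-]*)(?:\s+[A-Z][\w&-]*)*)"
    entities = set()
    for phrase in (p.format(c=concept.lower()) for p in PHRASES):
        for m in re.finditer(re.escape(phrase) + r"\s+" + proper_noun, page_text):
            entities.add(m.group(1))
    return entities

print(build_queries("movie"))
print(extract_entities("Classic movies such as Blade Runner were shown.", "movie"))
```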
20.3.1.2 Focused Crawl Extraction
The Focused Crawl Extraction (FCE) technique of WebKnox uses a retrieval-extraction combination that is novel in entity extraction systems and is therefore explained in more detail. FCE queries a search engine with generic queries that aim to find pages with lists of
entities on them (a list is an enumeration of entities and does not necessarily mean that the entities are encoded using the HTML LI tag). The focused crawler tries to detect a list on the page and, furthermore, it tries to find signs of a pagination. Pagination is often used to limit the results for one page and permits the web site visitor to view the data page by page. If a pagination is detected, the focused crawler starts on the first page it finds, tries to detect a list and then extracts entities from the list. The focused crawler processes pages that are retrieved with the following queries, which have been shown to return web pages with entity lists: “list of CONCEPTs”, “CONCEPT list”, “index of CONCEPTs”, “CONCEPT index” and “browse CONCEPTs”. In each query, CONCEPT is the name of the concept. These queries explicitly aim to find pages that claim to have a list of entities on them. Another helpful synonym is “index”, which can be queried equivalently to the list queries. The “browse” keyword aims to find pages that allow a user to browse entities of a concept and thus possibly lead to paginated pages with lists. For each page that is returned by one of the queries, WebKnox tries to detect a list, that is, list detection aims to find the XPath that points to all entities of a list. This is a complicated task since lists can be encoded in a great variety of ways and often it is not even clear whether something should be considered as a list or not. For the list detection algorithm a list must have the following features. (1) The entries of the list are encoded in a very similar way (format and structure), (2) there are at least 10 entries in the list, and (3) the list is in the content area of the web page. Finding the correct path can rarely be achieved by looking at the page's DOM tree only; hence, the content of a list is also analyzed in order to find the sought list. The list detection algorithm explained here makes use of many heuristics which were determined by analyzing a wide variety of list pages. Entities are expected to be in one of the following tags: LI, TD, H2, H3, H4, H5, H6, A, I, DIV, STRONG and SPAN. The list detector does not construct an XPath to every tag that can be used in HTML, since some tags are not made for having text content, such as TABLE, TR, SELECT, etc. Every constructed XPath addresses exactly one node; in order to find the XPath that addresses all entities, that is, multiple nodes, the indices of the nodes in the XPath are removed. An XPath with its indices removed is called a stripped XPath. After the indices are removed, all XPath instances that lead to the same stripped XPath can be counted and sorted by the number of occurrences. It is assumed that the stripped XPath with the most occurrences on a web page is one that encodes a list, since list entries are most often encoded the same way. The list detector favors longer XPaths over shorter ones because it is assumed that the deeper the hierarchy, the more precise the node text will be, so less noise around an entity is extracted. As explained at the beginning of this section, lists must have certain features in order to be detected and used for entity extraction. Most of the time, lists are used to present something other than entities, for example links to blog entries, numeric values such as prices and so forth. Therefore, heuristics have to be employed in order to find out whether the detected XPath really addresses a list of entities that should be extracted.
Through analysis of many entity lists on web pages, we created
criteria for a valid entity list. A detected list is only used for extraction if all of the following criteria are fulfilled.
1. Less than 15% of the entries in the list are entirely numeric values.
2. Less than 50% of the entries in the list are completely capitalized.
3. On average, each entity consists of not more than 12 words.
4. Less than 10% of the entries in the list are duplicates.
5. Less than 10% of the entries in the list are missing.
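The stripped-XPath counting and a few of these criteria can be sketched in a few lines. The following Python sketch uses lxml and deliberately simplifies the algorithm: the content-area check, the preference for longer XPaths, and criteria 2 and 5 are omitted, so it illustrates the idea rather than reproducing WebKnox's list detector.

```python
import re
from collections import Counter
from lxml import html

CANDIDATE_TAGS = {"li", "td", "h2", "h3", "h4", "h5", "h6",
                  "a", "i", "div", "strong", "span"}

def strip_indices(xpath):
    # /html/body/ul/li[3]/a  ->  /html/body/ul/li/a
    return re.sub(r"\[\d+\]", "", xpath)

def detect_entity_list(page_source):
    """Return (stripped_xpath, entries) for the most frequent stripped XPath
    on the page, or None if the simplified validity criteria reject it."""
    tree = html.fromstring(page_source)
    root = tree.getroottree()
    counts = Counter()
    for el in tree.iter():
        if el.tag in CANDIDATE_TAGS and el.text and el.text.strip():
            counts[strip_indices(root.getpath(el))] += 1
    if not counts:
        return None
    xpath, _ = counts.most_common(1)[0]
    entries = [e.text.strip() for e in tree.xpath(xpath) if e.text and e.text.strip()]
    if len(entries) < 10:                                          # at least 10 entries
        return None
    numeric = sum(e.replace(",", "").replace(".", "").isdigit() for e in entries)
    if numeric / len(entries) >= 0.15:                             # criterion 1
        return None
    if len(set(entries)) / len(entries) <= 0.90:                   # criterion 4
        return None
    if sum(len(e.split()) for e in entries) / len(entries) > 12:   # criterion 3
        return None
    return xpath, entries
```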
One of the necessary features for list detection was that a list has to appear in the content area of the web page. Often the navigation or footer of a web page contains many entries that might look like a list and thus must be filtered out. The list detector tries to find all elements on a web page that are not page-specific content by comparing the page to a sibling web page. A sibling web page is found by analyzing all links from the target web page, that is, all contents of the href attribute of the A tag. The link to the URL with the highest similarity to the URL of the target web page is taken as the sibling URL pointing to the sibling page. The similarity between two URLs is calculated as the number of characters, from left to right, that the two URLs have in common. This way it is likely to retrieve a sibling web page that has a similar structure to the target web page. All stripped XPaths to the content tags are constructed for the target and the sibling page. If the content overlap is over 70%, the XPath is considered to point to a non-content area and is dismissed. The pagination detector recognizes two main types of pagination, namely uppercase letters and a series of numbers. Since pagination elements are almost always links, the pagination detector constructs all XPaths to A tags and adds those to a candidate set. The indices for the elements A, TR, TD, P, SPAN, LI are removed, since those elements are more often used to encode pagination lists. The highest ranked XPath is then taken as the XPath addressing the pagination links.
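The common-prefix URL similarity used to pick the sibling page is easy to make concrete; the sketch below uses made-up example URLs.

```python
def common_prefix_length(url_a, url_b):
    # Number of leading characters the two URLs share, read left to right.
    n = 0
    for x, y in zip(url_a, url_b):
        if x != y:
            break
        n += 1
    return n

def pick_sibling_url(target_url, linked_urls):
    # The linked URL most similar to the target is taken as the sibling page.
    candidates = [u for u in linked_urls if u != target_url]
    return max(candidates, key=lambda u: common_prefix_length(u, target_url),
               default=None)

print(pick_sibling_url(
    "http://example.org/movies/list?page=2",
    ["http://example.org/movies/list?page=3", "http://example.org/about"],
))
```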
20.3.1.3 Seed Extraction
The Seed Extraction (SE) technique aims to implicitly find pages with lists of entities by querying the search engine with seeds. The retrieval using seeds has excessively been done by the KnowItAll system [4]. WebKnox uses automatically obtained entities (extracted by PE or FCE) as seeds for the seed extraction technique, thus, no human intervention is necessary. The XPath Wrapper Inductor (XWI) aims to find the XPaths that point to the seeds and to generalize it so that all entities that are encoded in the same way as the seeds are addressed and can be extracted. XWI needs at least two seeds in order to find such a generalized XPath. It then works as follows. For each seed, all XPaths that point to the seed occurrences on a web page are constructed. After that, a generalized XPath is searched by comparing each index for each element of the XPath. If the index changes, the index is deleted and thus the number of elements the XPath is addressing is increased. Figure 20.2 shows that process for a simple example and two seeds. In a) and b) the XPath to Seed1 and Seed2 are marked with green rectangles in the DOM tree respectively. Both
XPaths are the same but only the last index is different, indicating that more nodes with the same structure exist. The index is deleted and in Fig. 20.2c) all siblings of the two seeds are addressed by the generalized XPath. If one or more seeds appear in different elements on the web page, the stripped XPath with the highest count is taken.
Fig. 20.2 a) and b) show the XPath for two seeds; in c) the XPath addresses all elements of the same structure as Seed1 and Seed2
XWI also makes minimalistic use of prefixes and suffixes around the seeds. This is necessary because the generalized XPath addresses the complete tag content, but sometimes there is information around the seed that should not be extracted. To avoid extracting unwanted parts before and after the seed, a two-character prefix and suffix are constructed around the seed instances. The complete wrapper for this web page now consists of the generalized XPath and the small prefix and suffix. Applying this wrapper will extract new entities without any noise around them. The extraction results of the XWI are also checked for uniformity to ensure that fewer incorrect lists of entities are extracted. XWI aims for high precision and would rather extract nothing from a web page than extract entities with low precision.
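The core generalization step of XWI can be sketched as follows: build the absolute XPath to each seed occurrence and drop the indices at the positions where the two paths disagree. The sketch below uses lxml; prefix/suffix handling and the uniformity check are omitted, and the example page is invented for illustration.

```python
import re
from lxml import html

def seed_xpath(tree, seed):
    """Absolute XPath of the first element whose text contains the seed."""
    root = tree.getroottree()
    for el in tree.iter():
        if isinstance(el.tag, str) and el.text and seed in el.text:
            return root.getpath(el)
    return None

def generalize(xpath_a, xpath_b):
    """Drop the indices at positions where the two seed XPaths disagree."""
    strip = lambda part: re.sub(r"\[\d+\]", "", part)
    a_parts, b_parts = xpath_a.split("/"), xpath_b.split("/")
    if list(map(strip, a_parts)) != list(map(strip, b_parts)):
        return None   # structurally different paths -> no common wrapper
    return "/".join(pa if pa == pb else strip(pa)
                    for pa, pb in zip(a_parts, b_parts))

page = ("<html><body><h1>Cars</h1><ul>"
        "<li>Audi A3</li><li>Ford Focus</li><li>Skoda Octavia</li>"
        "</ul></body></html>")
tree = html.fromstring(page)
wrapper = generalize(seed_xpath(tree, "Audi A3"), seed_xpath(tree, "Ford Focus"))
print(wrapper)                                  # /html/body/ul/li
print([el.text for el in tree.xpath(wrapper)])  # all three list entries
```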
20.4 Evaluation In this section, the entity extraction techniques are evaluated. First of all, the difficult part of list and pagination detection for the FCE technique is assessed, secondly the new XWI is compared to another state-of-the-art Wrapper Inductor and, finally, the entity extraction techniques are compared across all concepts. WebKnox extracted over 420,000 entities from about 24,000 web pages in the following 10 concepts from the web: Movie, Actor, Mobile Phone, Notebook, Car, City, Country, Song, Musician and Sport. List and Pagination Detection The list detector has been tested on 60 web pages across 10 concepts. The web pages have been found by querying Google with the concept name and the term “list” (and
variations). Pages with actual lists and without lists have been chosen for evaluation. The correct XPath was assigned for about 55% of the pages. The list detector aims however to reject a page if the list is not likely to be a list of entities so that the actual accuracy of accepted list pages is about 77%, that is, from 23% of the pages that are analyzed with the list detector, wrong lists are detected. The pagination detection has been tested on over 70 web pages in 10 concepts. Web pages with pagination and without pagination were chosen for evaluation. The pagination XPath was assigned with an accuracy of about 90%. Wrapper Inductor Comparison We compare WebKnox’s XPath Wrapper Inductor (XWI) to the entity extractor from the SEAL system [8], since it is a state-of-the-art algorithm that also outperformed the Google Sets algorithm. We implemented the SEAL algorithm as described in [8] and call it Affix Wrapper Inductor (AWI) for simplicity. For the evaluation 40 list pages across 20 different concepts were collected by querying the Google search engine with the terms “list of CONCEPTs” or “CONCEPTs list”. The first pages that really had a list of the sought concept present, were taken for evaluation. For each web page, two seeds were chosen randomly and the precision and recall of both wrapper inductors were determined. Precision and recall were then averaged over the 40 test pages. Figure 20.3 shows the two wrapper inductors in comparison. The two red bars on the top show the precision (pr) and recall (re) for the Affix Wrapper Inductor, whereas the two blue bars in the bottom depict the precision and recall of the XPath Wrapper Inductor. We can see that the AWI has a slightly higher recall than the XWI. The XWI however aims for high precision which is achieved at about 98.94%. The F1-Score for the AWI is about 67% while the XWI reaches about 88%. A paired t-test of the F1-Scores for both wrapper inductors showed that the higher score for the XWI is statistically significant with p being 0.034.
Fig. 20.3 Precision and recall of the AWI and the XWI
Entity Extraction Techniques Figure 20.4 shows a comparison of 12 queries for the three extraction techniques. For the evaluation, a random sample of maximum 100 entities per query has been evaluated. The red bars in the figure show the precision (left y-axis) of the query averaged over 10 concepts, the blue bars depict the number of extractions, for each query in a logarithmic scale (right y-axis). The green bars next to the red and blue bars visualize the standard deviation. The horizontal red and blue lines show the
Fig. 20.4 Comparison of the three entity extraction techniques and their queries
weighted average precision and average number of extractions per extraction technique. We can see that the phrase extraction technique (first three queries) has the highest average precision but leads to the fewest extractions. The “list of” query from the focused crawl extraction technique has the highest average precision with about 65% and extracted over 1,300 entities per concept on average. All seeded queries lead to the largest number of extractions, but with the lowest average precision compared to the two other extraction techniques. It is important to note that all queries were very concept dependent, which means that for some concepts they worked exceptionally well whereas they did not reach a high precision for others. The high standard deviation for precision and number of extraction values shows this dependency.
20.5 Conclusion and Future Work In this paper we have shown the architectural overview of the web information extraction system WebKnox. The system improves the seed extraction by using an XPath Wrapper Inductor that has shown to be more precise than a wrapper inductor that only uses affixes. We compared different extraction and retrieval techniques and showed their strengths and weaknesses. In future work we will research different information sources and their impact on entity and fact extraction quality. For example, sources such as REST web services and RDF markup provide information in a more structured form and thus simplify
the extraction. We will find different sources and investigate how to generically extract information from them while avoiding human intervention. Furthermore, we will study how to assess the extracted entities to ensure a high precision of the information in the knowledge base.
References 1. Banko, M., Cafarella, M.J., Soderland, S., Broadhead, M., Etzioni, O.: Open Information Extraction from the Web. In: Proceedings of the 20th International Joint Conference on Artificial Intelligence, pp. 2670–2676 (2007) 2. Chang, C.H., Kayed, M., Girgis, M.R., Shaalan, K.F.: A Survey of Web Information Extraction Systems. IEEE Transactions on Knowledge and Data Engineering 18(10), 1411– 1428 (2006) 3. Downey, D., Etzioni, O., Soderland, S., Weld, D.S.: Learning Text Patterns for Web Information Extraction and Assessment. In: AAAI 2004 Workshop on Adaptive Text Extraction and Mining (2004) 4. Etzioni, O., Cafarella, M., Downey, D., Popescu, A.M., Shaked, T., Soderland, S., Weld, D.S., Yates, A.: Unsupervised named-entity extraction from the Web: An experimental study. Artificial Intelligence 165(1), 91–134 (2005) 5. Gatterbauer, W., Bohunsky, P., Herzog, M., Kr¨upl, B., Pollak, B.: Towards domainindependent information extraction from web tables. In: Proceedings of the 16th international conference on World Wide Web, pp. 71–80. ACM, New York (2007) 6. Popov, B., Kiryakov, A., Manov, D., Kirilov, A., Ognyanoff, D., Goranov, M.: Towards semantic web information extraction. In: Workshop on Human Language Technology for the Semantic Web and Web Services (2003) 7. Urbansky, D., Thom, J.A., Feldmann, M.: WebKnox: Web Knowledge Extraction. In: Proceedings of the Thirteenth Australasian Document Computing Symposium, pp. 27–34 (2008) 8. Wang, R.C., Cohen, W.W.: Language-Independent Set Expansion of Named Entities Using the Web. In: The 2007 IEEE International Conference on Data Mining, pp. 342–350 (2007) 9. Yates, A.: Information Extraction from the Web: Techniques and Applications. Ph.D. thesis, University of Washington, Computer Science and Engineering (2007)
Chapter 21
Order-Oriented Reasoning in Description Logics
Veronika Vaneková and Peter Vojtáš
Veronika Vaneková, Institute of Computer Science, Pavol Jozef Šafárik University, Košice, Slovakia, e-mail: [email protected]
Peter Vojtáš, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic, e-mail: [email protected]
Abstract. User preference is often a source of uncertainty in web search. We propose an order-oriented description logic especially suited to represent user preference. Concepts are interpreted as preorders of individuals from the domain. We redefine reasoning tasks to reflect the order-oriented approach and we present an algorithm for the instance order problem in the description logic without aggregation. Furthermore, we describe top-k retrieval, which allows us to find the best objects with respect to user preference concepts. Keywords: Preference, Preorder, Description Logic.
21.1 Introduction
Description logics (DLs) are a well-established field of research. Thanks to the thoroughly investigated tradeoff between expressive power and computational complexity, we have a broad family of DLs with different constructors. Combinations of DLs with other formalisms, mainly fuzzy logic, provide very interesting results. Fuzzy sets are commonly used to deal with various types of uncertainty. However, in many applications the fuzzy membership value or score itself does not play any role - only the order of individuals implied by the score matters. In some applications (e.g. Google search) the actual score is hidden from the user. This is the starting point of our present research. We are introducing an order description logic o-E L (D), where data (roles) are crisp and concepts are interpreted as orders on instances. This DL
closely corresponds with fuzzified E L with crisp roles and aggregation [1]. Order concepts leave out fuzzy membership degrees, but keep the ordering induced by them. DL o-E L (D) addresses a specific problem when the knowledge base has a simple structure (we restrict ourselves to E L constructors), but contains large number of individuals. This logic is designed especially to represent the notion of user preference. We represent these preferences as special concepts, interpreted as preorders of objects from the domain. Users often want to find only top-k best objects, ordered by their specific preferences. A modified top-k algorithm [3] is supported in our DL for user-dependent search of k best objects. This paper first describes the syntax and semantics of o-E L (D) together with illustration examples (Sect. 21.2). Then it proceeds to analyze reasoning problems for order-oriented DL (Sect. 21.3). Two of the aforementioned reasoning problems, namely instance order problem and top-k retrieval, are addressed in Sects. 21.4 and 21.5. We mention some related papers in Sect. 21.6 and finally, Sect. 21.7 concludes.
21.2 Description Logic o-E L (D) with Concept Instance Ordering
To illustrate our idea of the order-oriented approach, let us begin with an example. Imagine a user who wants to buy some used car. His preferences are usually vague (e.g. low mileage) and they can even be conflicting (e.g. a cheap, fast and economical car). In fact, every such preference depends on one attribute (e.g. “cheap” depends on price) and generates an ordering of all possible cars according to this attribute. The main problem is to aggregate all (possibly conflicting) orders into one final order, where the first element is a car that meets the user's preferences best. In o-E L (D), we represent every user preference with a concept, such as good price. Every concept orders individuals (objects) according to some attribute. Attributes are represented with roles, as usual in DLs. We use conjunction or aggregation to combine multiple preference concepts. This section presents all the necessary theory for order concepts and then proceeds with the running example. Complex concepts in o-E L (D) are constructed as usual; the allowed constructors include concept conjunction C≤ ⊓ D≤, existential quantification ∃R.C≤, and existential quantification with a concrete domain predicate ∃f.P. Moreover, we use the non-standard constructors aggregation @U(C≤1, . . . , C≤m) and the top-k constructor top-k(C≤). The knowledge base consists of a TBox T with concept definitions and an ABox A with assertions concerning individuals. Typical TBox definitions are C≤ ≡ D≤. The ABox contains concept assertions (a1, a2) : C≤ and role assertions (a, c) : R. Ordering concepts C≤ are interpreted as non-strict preorders of the domain, C≤^J ⊆ Δ^J × Δ^J. If (a, b) ∈ C≤^J, then a belongs to the concept C≤ less than b (or equally). If C≤ is a concept representing user preference, we say that b is preferred to a (or that they are equally preferred) according to C≤.
A preorder is a reflexive and transitive relation. It is different from a partial order, which is reflexive, transitive and antisymmetric. In case of a preorder, there can be two individuals x, y that are not identical despite being equal in the sense (x, y) ∈ C≤^J and (y, x) ∈ C≤^J. We call such individuals indiscernible according to attribute C. A preorder is total if ∀a, b ∈ Δ^J : (a, b) ∈ C≤^J ∨ (b, a) ∈ C≤^J (one or both inequalities hold for every pair of elements from the domain). A partial preorder can be induced in practice from user inputs like “individual a is better than individual b with respect to C” or from a sample set of individuals rated by the user. Inductive learning of user preference from such a rated set of individuals means finding a linear extension of the partial order and thus being able to compare any pair of individuals. From a logical point of view, we interpret both concepts and roles as binary predicates. Interpretations of the standard concept constructors are defined in Table 21.1. Here C, D denote concepts, R a role, f a concrete role and P a concrete predicate (see below). Two additional non-standard constructors (the aggregation and the top-k constructor) are defined in the subsequent text.
Table 21.1 Interpretations of o-E L (D) concepts
Syntax      Semantics
⊤≤          Δ^J × Δ^J
C≤ ⊓ D≤     {(a1, a2) | (a1, a2) ∈ C≤^J ∧ (a1, a2) ∈ D≤^J}
∃R.C≤       {(a1, a2) | ∀c1 (a1, c1) ∈ R^J ∃c2 (a2, c2) ∈ R^J : (c1, c2) ∈ C≤^J}
∃f.P        {(a1, a2) | ∀c1 (a1, c1) ∈ f^J ∃c2 (a2, c2) ∈ f^J : P(c1) ≤ P(c2)}
The concrete domain used in the definition is D = (Δ_D, Pred(D)), where Δ_D = ℝ and Pred(D) contains unary fuzzy predicates like lta,b(x) (left trapezoidal membership function), rta,b(x) (right trapezoidal), trza,b,c,d(x) (trapezoidal) and inva,b,c,d(x) (inverse trapezoidal) with one variable x and parameters a, b, c, d (originally defined in [9], see also [1]). Concrete roles are interpreted as f^J : Δ^J −→ Δ_D and concrete predicates have a fixed interpretation P : Δ_D −→ [0, 1]. Therefore the constructor ∃f.P from Table 21.1 generates a total preorder. The top concept ⊤≤ is interpreted as the complete relation Δ^J × Δ^J, where all individuals are equally preferred. Concept conjunction C≤ ⊓ D≤ often produces a partial preorder, even if C≤ and D≤ are total preorders. Sometimes it is more convenient to use aggregation @U instead of concept conjunction, especially when we consider a conjunction of more than two concepts. For every user-dependent aggregation @U with arity m and for every m-tuple of total order concepts C≤j, the aggregation @U(C≤1, . . . , C≤m)^J ⊆ Δ^J × Δ^J is a total order. If (a, b) ∈ C≤j^J for every j = 1, . . . , m, then (a, b) ∈ @U(C≤1, . . . , C≤m)^J. The definition of the aggregation is inspired by the rules of Formula 1 auto racing: the first eight drivers gain points according to the point table (10, 8, 6, 5, 4, 3, 2, 1), regardless of their exact time, speed or lead. The final order is determined by summing up all
points. In case of DL o-E L (D), every individual a gains points for its position in the concepts C≤1, . . . , C≤m. The individual with the highest sum of points will be first in the aggregated concept. The difference is that we allow ties to occur and thus the result is a non-strict preorder. First of all, it is necessary to define the level of an instance a in the interpretation of a concept C≤. It is the biggest possible length of a sequence such that the first element is a and every following element is strictly greater than its predecessor:

level(a, C, J) = max{ l ∈ ℕ | ∃ b1, . . . , bl ∈ Δ^J ∀ i ∈ {1, . . . , l−1} : (bi, bi+1) ∈ C^J ∧ (bi+1, bi) ∉ C^J ∧ b1 = a }

Next we define a scoring table for the aggregation. A scoring table is an arbitrary finite, strictly decreasing sequence score@U = (score1, . . . , scoren), like (10, 8, 6, 5, 4, 3, 2, 1). The differences between adjacent elements are also decreasing, but not strictly. All following elements are equal to 0. The pair (a, b) belongs to the aggregation @U(C≤1, . . . , C≤m)^J if

∑_{j=1}^{m} score_{level(a, C≤j, J)} ≤ ∑_{j=1}^{m} score_{level(b, C≤j, J)}
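For finite interpretations represented as sets of pairs, the level function and the scoring-table aggregation can be sketched directly; the example individuals and preorders below are invented for illustration, and the chain in level is counted as in the definition above.

```python
def strictly_less(pre, a, b):
    # a is strictly less preferred than b in the preorder `pre` (a set of pairs)
    return (a, b) in pre and (b, a) not in pre

def level(a, pre, domain):
    # Length of the longest chain starting at a whose elements are strictly increasing
    return 1 + max((level(b, pre, domain) for b in domain if strictly_less(pre, a, b)),
                   default=0)

def aggregate(preorders, domain, score=(10, 8, 6, 5, 4, 3, 2, 1)):
    """Scoring-table aggregation @U: each individual earns points for its level in
    every input preorder; (a, b) is in the result iff a's total <= b's total."""
    points = lambda lvl: score[lvl - 1] if lvl <= len(score) else 0
    total = {x: sum(points(level(x, p, domain)) for p in preorders) for x in domain}
    return {(a, b) for a in domain for b in domain if total[a] <= total[b]}

# Two total preorders over three individuals ((a, b) means a is at most as preferred as b).
dom = {"x1", "x2", "x3"}
refl = {(x, x) for x in dom}
by_price   = refl | {("x3", "x2"), ("x2", "x1"), ("x3", "x1")}   # x1 best, then x2, then x3
by_quality = refl | {("x2", "x3"), ("x3", "x1"), ("x2", "x1")}   # x1 best, then x3, then x2
agg = aggregate([by_price, by_quality], dom, score=(3, 2, 1))
print(("x2", "x1") in agg, ("x1", "x2") in agg)   # True False: x1 strictly preferred
print(("x2", "x3") in agg, ("x3", "x2") in agg)   # True True: x2 and x3 are tied
```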
It is also straightforward to define top-k queries. Let Ca = {c ∈ Δ^J | (a, c) ∈ C≤^J ∧ (c, a) ∉ C≤^J} be the set of individuals strictly greater than a in the ordering concept C≤. Then (a, b) ∈ top-k(C≤)^J iff:
1. (a, b) ∈ C≤^J, or
2. |Ca| ≥ k.
If C≤ is a total preorder, then top-k(C≤) is also total. The top-k constructor preserves the original order of individuals for the first k individuals and possible additional ties on the kth position (condition 1), while all the following individuals share the lowest level in the ordering (condition 2). Finding top-k objects is originally a non-deterministic process: if there are any ties on the kth position, we can choose any objects from this position and return exactly k results [4, 3]. We changed this aspect of top-k retrieval to obtain a deterministic definition of this constructor.
Example 21.1. ABox A contains the following assertions about used car sales. We have concrete roles has price and has horsepower and a concept car with both instances indiscernible (because both individuals are cars).
(Audi A3, Ford Focus) : car
(Ford Focus, Audi A3) : car
(Audi A3, 7900) : has price
(Audi A3, 110) : has horsepower
(Ford Focus, 9100) : has price
(Ford Focus, 145) : has horsepower
The TBox contains definitions of new preference concepts specific to user U1: good priceU1, good horsepowerU1 and good carU1.
good priceU1 ≡ ∃has price.lt7000,9000
good horsepowerU1 ≡ ∃has horsepower.rt100,150
good carU1 ≡ good horsepowerU1 ⊓ good priceU1 ⊓ car
Every model J of the knowledge base will satisfy the following:
(Ford Focus, Audi A3) : good priceU1
(Audi A3, Ford Focus) : good horsepowerU1
The latter assertion is satisfied because we know that ∀c1 (Audi A3, c1) ∈ R^J ∃c2 ∈ Δ^J (Ford Focus, c2) ∈ R^J : (c1, c2) ∈ rt100,150. We have only one possibility, c1 = 110 and c2 = 145, and indeed (110, 145) ∈ rt100,150. Note that neither the first tuple (Audi A3, Ford Focus) nor the second tuple (Ford Focus, Audi A3) belongs to good carU1^J in every model J. This is caused by the ambiguity in concept conjunctions. The interpretation of the concept good carU1 is a partial order, which can be extended both ways. This shows the necessity to use aggregations instead of concept conjunctions. Let us define the scoring table for the aggregation @U1 as (3, 2, 1) and the concept good carU1 as @U1(good priceU1, good horsepowerU1). The first place in the concept good horsepowerU1 belongs to the individual Audi A3, while Ford Focus is first in the concept good priceU1. The result is that both individuals gain five points in total (three for the first place and two for the second place) and every interpretation must satisfy both:
(Ford Focus, Audi A3) : good carU1
(Audi A3, Ford Focus) : good carU1
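The two preference concepts of this example can be computed mechanically. The sketch below assumes the usual left-shoulder and right-shoulder shapes for lt and rt (the paper's exact parameterization is given in [9] and is not reproduced here), and uses the fact that for functional concrete roles the ∃f.P semantics of Table 21.1 reduces to comparing predicate values.

```python
def lt(a, b):
    """Left-shoulder ('left trapezoidal') membership: 1 up to a, falling to 0 at b."""
    return lambda x: 1.0 if x <= a else 0.0 if x >= b else (b - x) / (b - a)

def rt(a, b):
    """Right-shoulder membership: 0 up to a, rising to 1 at b."""
    return lambda x: 0.0 if x <= a else 1.0 if x >= b else (x - a) / (b - a)

# Concrete role values taken from the ABox of Example 21.1
has_price      = {"Audi A3": 7900, "Ford Focus": 9100}
has_horsepower = {"Audi A3": 110,  "Ford Focus": 145}

def exists_f_P(f_values, P, domain):
    """Preorder induced by the constructor ∃f.P; with functional concrete roles
    this reduces to: (a, b) is in the concept iff P(f(a)) <= P(f(b))."""
    return {(a, b) for a in domain for b in domain if P(f_values[a]) <= P(f_values[b])}

cars = {"Audi A3", "Ford Focus"}
good_price      = exists_f_P(has_price, lt(7000, 9000), cars)
good_horsepower = exists_f_P(has_horsepower, rt(100, 150), cars)
print(("Ford Focus", "Audi A3") in good_price)       # True: Audi A3 has the better price
print(("Audi A3", "Ford Focus") in good_horsepower)  # True: Ford Focus has more horsepower
```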
21.3 Reasoning Problems in o-E L (D)
The most important aspect of description logics is reasoning - finding implicit consequences of the definitions and assertions from the knowledge base. Some reasoning problems have to be redefined when we deal with user preference concepts. In this section we suggest several order-oriented modifications of the classical reasoning problems. The subsumption problem, C ⊑K D, means checking whether concept D is more general than C with respect to the knowledge base K. Concepts in classical DLs are interpreted as subsets of the domain; thus C ⊑K D if C^I ⊆ D^I for every model I of K. In case of the description logic o-E L (D), all concepts are interpreted as preorders. It is also possible to check the subset relationship C≤^J ⊆ D≤^J. It would mean that the preorder D≤ contains all tuples from C≤ and possibly some additional tuples. If C≤^J = {(x1, x2), (x2, x3), (x1, x3)} then adding a tuple (x3, x4) increases the number
of equivalence classes (levels), but adding (x3, x1) or (x2, x1) makes some elements equally preferred and thus decreases the number of levels. Considering order-oriented subsumption, it would be necessary to treat these two cases separately. In addition to the classical equivalence (C≤^J ⊆ D≤^J ∧ D≤^J ⊆ C≤^J) we can check the correlation of two total preorders. We use the Kendall tau correlation coefficient [5]. If C≤^J, D≤^J are two preorders of the same domain, their tau correlation is defined as

τ(C≤^J, D≤^J) = 2(c − d) / (n(n − 1)),

where c is the number of concordant pairs (i.e. such that they have the same order in both concepts, (a, b) ∈ C^J ∧ (a, b) ∈ D^J), d is the number of discordant pairs (such that they have different order in the concepts C^J, D^J) and n is the number of elements of the domain. The result can range from -1 to 1. Identical orders have correlation 1 and reversed orders have the result -1. In case of partial orders, this formula can be reformulated as

τ(C≤^J, D≤^J) = 2 · |C≤^J ∩ D≤^J| / |C≤^J ∪ D≤^J| − 1.
Both formulations assume that the number of pairs in both posets is finite. It is possible to use the solution from paper [6], which suggests a fuzzy measure to determine the correlation of two potentially infinite posets. Another interesting reasoning problem is concept satisfiability. In classical DLs a concept is satisfiable if it has some non-empty interpretation (i.e. its definition is not contradictory). In fuzzy DLs we can check satisfiability to some fuzzy degree. The ordering DL is a different case: because of the reflexivity condition, every preorder will be non-empty. The closest related problem is how many levels C≤^J can have at most. We can also define the instance level problem: individual a is an instance of concept C≤ in level n if level(a, C, J) ≥ n for every model J. The order-oriented retrieval problem is similar: to find all instances of concept C≤ in level n. Top-k retrieval means finding the best k individuals, regardless of how many levels they span. Another closely related problem is the instance order problem: a is preferred less than or equal to b in concept C≤ if (a, b) ∈ C≤^J for every model J.
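The tau coefficient defined above can be computed directly for finite total preorders represented as sets of pairs; the sketch below skips pairs tied in either preorder, a detail the text leaves unspecified, and uses invented individuals.

```python
from itertools import combinations

def kendall_tau(pre_c, pre_d, domain):
    """Kendall tau for two total preorders over the same finite domain."""
    def direction(pre, a, b):
        below, above = (a, b) in pre, (b, a) in pre
        if below and above:
            return 0            # a and b are indiscernible
        if below:
            return -1           # a is less preferred than b
        if above:
            return 1
        return 0                # incomparable; cannot happen for total preorders
    c = d = 0
    for a, b in combinations(sorted(domain), 2):
        da, db = direction(pre_c, a, b), direction(pre_d, a, b)
        if da == 0 or db == 0:
            continue            # tied pair: skipped in this sketch
        if da == db:
            c += 1
        else:
            d += 1
    n = len(domain)
    return 2 * (c - d) / (n * (n - 1))

dom = {"x1", "x2", "x3"}
refl = {(x, x) for x in dom}
chain    = refl | {("x1", "x2"), ("x2", "x3"), ("x1", "x3")}
reversed_chain = refl | {("x3", "x2"), ("x2", "x1"), ("x3", "x1")}
print(kendall_tau(chain, chain, dom), kendall_tau(chain, reversed_chain, dom))  # 1.0 -1.0
```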
21.4 Reasoning Algorithm for Instance Order Problem
Some reasoning algorithms for o-E L can be designed by modifying structural or tableau approaches from classical DLs. There is one more complication – the use of aggregation functions may easily cause undecidability of reasoning. Paper [7] proves decidability only for E L (D) with atomic negation and a concrete domain D with the functions min, max, sum. Although aggregation functions are more suitable to represent user preferences, we replace them with concept conjunctions for the purpose of reasoning. We also leave out the top-k constructor, which will be used only in top-k queries described in the next section. In this section we focus on the instance order problem – to find out whether (a, b) ∈ C≤^J holds for every model J of the knowledge base. At the beginning we construct the expansion and the transitive closure of the ABox A:
1. add (a, b) : C (the assertion that we want to check)
2. replace all complex concept names in A with their definitions
3. if (a, b) : C≤ ∈ A and (b, c) : C≤ ∈ A and (a, c) : C≤ ∉ A, then add (a, c) : C≤
Afterwards we decompose A by applying rules. Every rule replaces some ABox assertion with new assertions or constraints. New assertions are allowed to contain variables instead of individual names. Constraints have the form (x, y) : P, where x, y can be values from the concrete domain or variables and P is a concrete predicate. The rule (R∃) can be used for both abstract and concrete roles.
(R⊓) REPLACE (a1, a2) : (C≤ ⊓ D≤)^J WITH (a1, a2) : C≤^J AND (a1, a2) : D≤^J
(R∃) IF ∃c1 (a1, c1) ∈ R^J ∀c2 (a2, c2) ∈ R^J : (c2, c1) ∈ C≤^J THEN REPLACE (a1, a2) ∈ (∃R.C≤)^J WITH (a2, x) ∈ R^J AND (c1, x) ∈ C≤^J for all c1 : (c2, c1) ∈ C≤^J, where x is a new variable
After no more rules can be applied on the ABox A, we end up with constraints like (x1, x2) : P and assertions of the form (x3, x4) : C≤ and (x5, x6) : R. As the next step we try to substitute individuals and concrete values for the variables so that all constraints and assertions are fulfilled. For each variable x we create a set of candidate values CVx = {y ∈ Δ^J ∪ Δ_D : (a, x) : R ∈ A ∧ (a, y) : R ∈ A}. We have to choose one value from each candidate set such that no predicate assertion (x, y) : P is violated. If there is such a substitution, then (a, b) ∈ C≤^J for every model J. If
not, (a, b) ∈ C≤^J may hold only for some models. A substitution can be found with the bMIP (bounded mixed integer programming) method described in [9].
Example 21.2. Let us revise the example from the previous section. The TBox and ABox remain the same and we want to find out if (Audi A3, Ford Focus) ∈ good carU1^J. Of course, good carU1 is replaced by its definition. First we apply rule (R⊓) twice and gain:
(Audi A3, Ford Focus) : car
(Audi A3, Ford Focus) : ∃has price.lt7000,9000
(Audi A3, Ford Focus) : ∃has horsepower.rt100,150
Then we replace the last two assertions using (R∃):
(Audi A3, Ford Focus) : car
(Audi A3, x1) : has price
(Ford Focus, x2) : has price
(Audi A3, x3) : has horsepower
(Ford Focus, x4) : has horsepower
(x1, x2) : lt7000,9000
(x3, x4) : rt100,150
No more rules can be applied and we look for a substitution for x1, x2, x3 and x4. Let us start with variable x1. It must be connected with Audi A3 by the role has price, and therefore the only candidate value is 7900. The constraint (x1, x2) : lt7000,9000 cannot be checked yet. We substitute the value 9100 for x2 the same way. But (7900, 9100) :
lt_{7000,9000} is not true. There is no other option to substitute x1 and x2. We conclude that (Audi A3, Ford Focus) does not belong to good car_{U1}^J in every interpretation.
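To make the substitution step of the example concrete, the following is a minimal sketch of a brute-force search over the candidate value sets. It is not the bMIP method of [9]; the shoulder-shaped membership functions, the reading of a predicate assertion (v1, v2) : P as "v1 is preferred at most as much as v2", and the horsepower values are our own illustrative assumptions.

```python
from itertools import product

def lt_membership(x, a=7000.0, b=9000.0):
    # Left-shoulder fuzzy set lt_{a,b}: degree 1 below a, 0 above b, linear in between (assumed shape).
    return 1.0 if x <= a else 0.0 if x >= b else (b - x) / (b - a)

def rt_membership(x, a=100.0, b=150.0):
    # Right-shoulder fuzzy set rt_{a,b}: degree 0 below a, 1 above b, linear in between (assumed shape).
    return 0.0 if x <= a else 1.0 if x >= b else (x - a) / (b - a)

# Concrete predicates (v1, v2) : P, read here as "v1 is preferred at most as much as v2".
def lt_pref(v1, v2):
    return lt_membership(v1) <= lt_membership(v2)

def rt_pref(v1, v2):
    return rt_membership(v1) <= rt_membership(v2)

# Candidate value sets CV_x collected from role assertions (horsepower values are hypothetical).
candidates = {"x1": [7900.0], "x2": [9100.0], "x3": [102.0], "x4": [100.0]}
constraints = [("x1", "x2", lt_pref), ("x3", "x4", rt_pref)]

def find_substitution(candidates, constraints):
    """Return a variable substitution satisfying all constraints, or None if none exists."""
    names = list(candidates)
    for values in product(*(candidates[n] for n in names)):
        sub = dict(zip(names, values))
        if all(pred(sub[x], sub[y]) for x, y, pred in constraints):
            return sub
    return None

# lt_membership(7900) = 0.55 > lt_membership(9100) = 0.0, so (7900, 9100) : lt_{7000,9000} fails
# and no substitution exists, matching the conclusion of Example 21.2.
print(find_substitution(candidates, constraints))
```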
21.5 Top-k Retrieval Problem

Description logic o-EL(D) is suitable to support top-k queries. We use a modification of Fagin's threshold algorithm [4, 3]. This modification allows for various user preferences of the type lt_{a,b}, rt_{a,b}, trz_{a,b,c,d}, inv_{a,b,c,d} and aggregation functions (instead of conjunctions). It has been implemented as a part of a user-dependent search system and described in our previous work [15]. This implementation of top-k uses lists of individuals ordered by partial preference (like good price_{U1} from the example). These lists can be traversed from the highest to the lowest value or vice versa. By combining both directions, we can simulate all types of preferences. For example, in the case of a trapezoidal fuzzy set, we start traversing values in both directions from the middle and we merge two non-increasing lists into one non-increasing list. Thus we gain a list of objects ordered from most preferred to least preferred. (The original threshold algorithm considered only one fixed ordering for each attribute.) For a top-k query like top-k(@U(C≤1, ..., C≤m)), we need m ordered lists of individuals L1, ..., Lm, one for each preference concept. The lists are prepared in advance, during the preprocessing stage. Partial preorders have to be extended to total preorders. Ties can be traversed in arbitrary order. The top-k algorithm uses sorted access to the lists, reads attribute values and calculates preference values. It is easy to see that the same individual will have different positions in different lists, so some attribute values will be unknown in the meantime. After reading all attribute values of some individual a, the top-k algorithm calculates the aggregation function to determine the overall preference value of the object. A threshold value is determined to see if any individual with some unknown values has a chance to be more preferred than a. If not, a is written to the output. The algorithm stops when the number of results reaches k and all individuals preferred on the same level as the kth result have been returned.
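As an illustration of the query answering described above, here is a minimal sketch of a Fagin-style threshold algorithm over m preference lists with a monotone aggregation function. It is a simplification: it assumes random access to the per-attribute preference values (unlike the variant of [3]), ignores tie handling and the level semantics of the full method in [15], and all names, parameters and the 0.0 lower bound for exhausted lists are our own assumptions.

```python
import heapq

def top_k(lists, scores, agg, k):
    """
    lists:  m lists of (individual, preference) pairs, each sorted non-increasingly
            (e.g. a trapezoidal preference already merged into one non-increasing list).
    scores: scores[i][x] -> preference of individual x in attribute i (random access).
    agg:    monotone aggregation function over m preference values in [0, 1].
    Returns the k individuals with the highest aggregated preference.
    """
    m = len(lists)
    seen = set()
    best = []                      # min-heap of (aggregated preference, individual)
    depth = 0
    while True:
        last = [0.0] * m           # last value read from each list (0.0 once a list is exhausted)
        for i in range(m):
            if depth < len(lists[i]):
                x, v = lists[i][depth]
                last[i] = v
                if x not in seen:
                    seen.add(x)
                    total = agg([scores[j].get(x, 0.0) for j in range(m)])
                    heapq.heappush(best, (total, x))
                    if len(best) > k:
                        heapq.heappop(best)
        depth += 1
        threshold = agg(last)      # by monotonicity, an upper bound for any unseen individual
        if len(best) == k and best[0][0] >= threshold:
            break
        if all(depth >= len(l) for l in lists):
            break
    return sorted(best, reverse=True)
```

A call such as top_k([price_list, horsepower_list], scores, lambda vs: sum(vs) / len(vs), 5) would, under these hypothetical inputs, return the five individuals with the best average preference.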
21.6 Related Work

There is considerable effort concerning the connection of simpler description logics with the top-k algorithm [12, 13, 14]. Top-k assumes lists of objects ordered by various attributes, so it is suitable to deal with DL knowledge bases, where the information about one object is split into several concept and role assertions. The notion of instance ordering within description logics appeared in [10]. This paper defines the crisp DL ALCQ(D) with special ordering descriptions that can be used to index and search a knowledge base. Paper [11] presents ALC_fc, a fuzzy DL with comparison concept constructors. Individuals can be divided into different classes according to comparisons of various fuzzy membership degrees. Thus it is possible to define e.g. a concept of very cheap cars (with the fuzzy degree of "cheap"
over some specified value), or cars that are more economical than strong. However, all of the mentioned papers use classical (crisp or fuzzy) concept interpretation. To the best of our knowledge, there is no other work concerning interpreting concepts as preorders. We already studied EL(D) with fuzzified concepts in [1]. We suggested the shift towards the ordering approach, but the paper did not specify the details of o-EL(D), nor the relationship between scoring and ordering description logic.
21.7 Conclusion

We further elaborated the idea of order-oriented description logic published in [1]. It is suitable to use a subset of o-EL(D) (with concept constructors ∃R.C≤, ∃f.P and C≤ ⊓ D≤) for reasoning tasks like the instance order problem presented in Sect. 21.3. Aggregation and the top-k constructor are used in top-k queries to specify the overall preference concept. We show that the connection of description logic and individual ordering is possible and allows both reasoning and query answering. DL o-EL(D) forms a theoretical background for a user-dependent search system [2]. The choice of EL is in compliance with current W3C efforts. OWL2 contains several restricted versions of the language, called profiles [8], among them OWL2 EL, which corresponds with EL++. Order-oriented DL can be used as a modular ontology to support user preference in other EL ontologies.

Acknowledgements. Partially supported by Czech projects 1ET100300517, MSM0021620838 and Slovak project VEGA 1/0131/09.
References

1. Vaneková, V., Vojtáš, P.: A Description Logic with Concept Instance Ordering and Top-k Restriction. In: Kiyoki, Y., Tokuda, T., Jaakkola, H., Chen, X., Yoshida, N. (eds.) Information Modelling and Knowledge Bases XX. Frontiers in Artificial Intelligence and Applications, vol. 190. IOS Press, Amsterdam (2009)
2. Gurský, P., Vaneková, V., Pribolová, J.: Fuzzy User Preference Model for Top-k Search. In: Proceedings of IEEE World Congress on Computational Intelligence (WCCI), Hong Kong (2008)
3. Gurský, P., Vojtáš, P.: On top-k search with no random access using small memory. In: Atzeni, P., Caplinskas, A., Jaakkola, H. (eds.) ADBIS 2008. LNCS, vol. 5207, pp. 97–111. Springer, Heidelberg (2008)
4. Fagin, R., Lotem, A., Naor, M.: Optimal aggregation algorithms for middleware. In: PODS 2001: Proceedings of the twentieth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems (2001)
5. Kendall, M., Gibbons, J.D.: Rank Correlation Methods. A Charles Griffin Title (1990)
6. Michel, Ch.: Poset Representation and Similarity Comparisons of Systems in IR. In: Proceedings of the 26th Conference ACM SIGIR Workshop on Mathematical/Formal Methods in IR (2003)
7. Baader, F., Sattler, U.: Description logics with aggregates and concrete domains. Information Systems 28(8) (2003)
8. Patel-Schneider, P.F., Hayes, P., Horrocks, I. (eds.): OWL 2 Web Ontology Language: Profiles. W3C Recommendation (2008), http://www.w3.org/TR/owl2-profiles
9. Straccia, U.: Fuzzy ALC with Fuzzy Concrete Domains. In: Proceedings of the 2005 International Workshop on Description Logics (DL 2005), CEUR Workshop Proceedings, vol. 147 (2005)
10. Pound, J., Stanchev, L., Toman, D., Weddell, G.E.: On Ordering and Indexing Metadata for the Semantic Web. In: Proceedings of the 21st International Workshop on Description Logics (DL 2008), CEUR Workshop Proceedings, vol. 353 (2008)
11. Kang, D., Xu, B., Lu, J., Li, Y.: On Ordering Descriptions in a Description Logic. In: Proceedings of the 2006 International Workshop on Description Logics (DL 2006), CEUR Workshop Proceedings, vol. 189 (2006)
12. Ragone, A., Straccia, U., Bobillo, F., Di Noia, T., Di Sciascio, E.: Fuzzy Description Logics for Bilateral Matchmaking in e-Marketplaces. In: Proceedings of the 21st International Workshop on Description Logics, DL 2008 (2008)
13. Straccia, U.: Towards Top-k Query Answering in Description Logics: the case of DL-Lite. In: Fisher, M., van der Hoek, W., Konev, B., Lisitsa, A. (eds.) JELIA 2006. LNCS (LNAI), vol. 4160, pp. 439–451. Springer, Heidelberg (2006)
14. Lukasiewicz, T., Straccia, U.: Top-k Retrieval in Description Logic Programs under Vagueness for the Semantic Web. In: Prade, H., Subrahmanian, V.S. (eds.) SUM 2007. LNCS (LNAI), vol. 4772, pp. 16–30. Springer, Heidelberg (2007)
15. Gurský, P., Horváth, T., Jirásek, J., Novotný, R., Pribolová, J., Vaneková, V., Vojtáš, P.: Knowledge Processing for Web Search – An Integrated Model and Experiments. Scalable Computing: Practice and Experience 9, 51–59 (2008)
Chapter 22
Moderated Class–Membership Interchange in Iterative Multi–relational Graph Classifier
Peter Vojtek and Mária Bieliková
Abstract. Organizing information resources into classes helps significantly when searching massive volumes of online documents available through the Web or other information sources such as electronic mail, digital libraries and corporate databases. Existing classification methods are often based only on the document's own content, i.e. its attributes. Considering relations in the web document space brings better results. We adopt multi–relational classification that interconnects attribute–based classifiers with iterative optimization based on relational heterogeneous graph structures, while different types of instances and various relation types can be classified together. We establish a moderated class–membership spreading mechanism in multi–relational graphs and compare the impact of various levels of regulation in a collective inference classifier. The experiments based on large–scale graphs originating in the MAPEKUS research project data set (web portals of scientific libraries) demonstrate that moderated class–membership spreading significantly increases the accuracy of the relational classifier (up to 10%) and protects instances with a heterophilic neighborhood from being misclassified.

Keywords: Relational Classification, Graph, Homophily.
22.1 Introduction

Classification is an established data mining method useful in automated document grouping and specifically in populating directories of web pages, e.g., Google Directory1. The increasing complexity and structure of data on the Web revealed limitations of traditional attribute–based (content) classification based solely on the data objects' own content. In search for advanced methods capable of exploiting the structure of

Peter Vojtek · Mária Bieliková
Faculty of Informatics and Information Technologies, Slovak University of Technology
e-mail: {pvojtek,bielik}@fiit.stuba.sk
1 http://www.google.com/dirhp
Fig. 22.1 Graph with multiple object types and various relations between them, domain of scientific publications
interconnected data instances more intensively, single–relational classification [7, 5] originated as a more efficient alternative to content classification. Multi–relational classifiers are a successor of single–relational methods, designed to uncover and take advantage of broader dependencies present in the data. The multiplicity of the classifier lies both in the nature of the data instances and in the relations between them. Direct classification of heterogeneous web objects such as search queries and web pages, or classification of scientific publications together with the associated authors and keywords, presents areas of interest where the multi–relational approach takes advantage over single–relational and content–based methods. Similarly, domains with social interaction between instances (usually people) are good candidates for multi–relational classifiers, e.g., broker fraud detection [9], tax frauds or user preference gathering. Our work is focused on a graph–based classifier; Fig. 22.1 illustrates an example of a multi–relational graph representation of data from the domain of scientific web portals. Data of a similar nature are used in the experimental evaluation presented in this paper, using the MAPEKUS dataset2 created within a research project on personalization of large information spaces in the domain of digital libraries [1].
2 MAPEKUS dataset: http://mapekus.fiit.stuba.sk/
Three object types are covered: Publications, Authors and Keywords. Vertices P1 and P2 are instances of object type Publication connected with the intra–relation references. Similarly, instances of Authors are intra–related through the isCollaboratorOf relation type. Two types of inter–relations are present in the graph; isAuthorOf connects Authors and Publications, and hasKeyword associates instances of Publications and Keywords. A typical classification task suitable for the presented data network is to determine publications interested in class Hardware. Concerning attributes of a publication exclusively can induce some results; however, augmenting the classifier with neighboring
publications, authors or keywords can provide the classifier with more useful information, assuming homophily3 between related instances. Additionally, it is feasible to determine authors interested in Hardware as well, without the need for supplementary classification or subgraph extraction. Iterative Reinforcement Categorization (IRC), introduced by Xue et al. [11], is one of the first methods able to perform classification in such a multi-relational graph structure directly, without the need for subgraph extraction, preserving and providing to the classifier the whole context the data is situated in. Beside the advantages of such a non–weakening approach, our initial experiments exhibited the following handicap: the performance of the multi–relational classifier is heavily affected by the structure of the graph and its accuracy gain is not always positive (Sect. 22.3.1 provides empirical evidence of even negative influence of the non–moderated multi–relational classifier when compared to a content–based classifier). To deal with this problem, the graph structure can be investigated and readjusted in some way. However, such inspection is domain specific and can be time and resource demanding (we discuss this issue further in Sect. 22.3.2). A better solution is to use a mechanism that automatically analyzes the quality of relations in the network, sustains helpful connections between the data instances and inhibits the influence of non-beneficial relations, i.e. establishes a technique to moderate information spreading in the graph. We propose a universal and effective method to moderate information exchange between classified instances based on the class–membership of each classified instance. The multi–relational IRC method adopted in this work encapsulates domain specific knowledge into the statistical parameter class–membership, which refers to the probability that an instance belongs to a certain class. Such a domain independent moderation of information spreading adjusts classifier performance, decreases the risk of improper object re-classification, and provides a mechanism to deal with and take advantage of even very weak homophily. The rest of this paper has the following structure: Sect. 22.2 describes the core of the multi–relational classifier with class–membership spreading moderation. Experimental evaluation aimed at finding optimal parameters of the classifier, a comparison of the content classifier, the original IRC classifier and the moderated method, and an analysis of dataset homophily is in Sect. 22.3, using the freely available MAPEKUS data set incorporating networks of electronic publications, authors, keywords, etc., from electronic publication web portals. Next, Sect. 22.4 contains an overview of related work and Sect. 22.5 concludes the paper and points out some issues requiring further work.
3 Assumption of homophily – related (neighboring) instances are more likely to share similarities (e.g., the same class) than non–related instances [7].
22.2 Principles of Moderated IRC Method

This section describes our proposal of the moderated IRC classifier in detail. The method consists of the following steps:
1. class–membership initialization using a content–based classifier;
2. a single iteration step of class–membership optimization exploiting the graph structure of interconnected classified instances. Moderation of information interchange is applied in this step; the class–membership of each vertex is inspected, and only when the class–membership is evaluated as beneficial to the overall classifier performance can the vertex provide class–membership information to its neighbors;
3. sequential iteration steps converging into a fixed graph state, succeeded by the final assignment of classes to instances.

In the pre–classification step only local features of each instance (object in the graph) are taken into account (e.g., each publication is pre–classified according to the text of the publication); this step is de facto content classification where each instance is assigned a fuzzy class–membership. The method to be used can vary (e.g., Naive Bayes, decision trees [6]). If only one object type is assigned a training class–membership and the other object types are subsidiary, only the leading object type instances x1, x2, ..., xn ∈ X are pre–classified.

Class–membership absorption

The following preconditions are already arranged: real class–membership of Xtrain instances, initial class–membership of each instance in Xtest4, auxiliary instances of the remaining types (denoted as belonging to set Y disregarding their type) and relations between all instances. In the current step each object from Xtest and Y absorbs the class–memberships of neighboring objects and recomputes its own membership. Two types of neighborhood are used; trainNeigh(ni) returns the set of neighboring instances from Xtrain of an instance ni (ni can be either from X or Y) and testNeigh(ni) refers to neighbors from Xtest and Y. Usually only the closest instance neighborhood is taken into account (i.e. only instances directly connected via edges). For each object ni ∈ Xtest ∪ Y and each class cj ∈ C, a class–membership p(cj | ni) determines the odds that ni will be labeled with class cj.
p(cj | ni) = λ1 · p_self(cj | ni)
           + λ2 · [ Σ_{xz ∈ trainNeigh(ni)} w(ni, xz) · p(cj | xz) ] / sizeOf(trainNeigh(ni))
           + λ3 · [ Σ_{nz ∈ testNeigh(ni)} w(ni, nz) · p(cj | nz) ] / sizeOf(testNeigh(ni))          (22.1)

where the second term averages the contributions of neighbors from Xtrain and the third term those of neighbors from Xtest ∪ Y.
4 Note that the real class–membership of the testing instances is also known and is stored in order to compute the performance of the classifier.
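A direct transcription of Eq. (22.1) for one instance and one class might look like the following sketch; the data structures (lists of neighbor memberships, a weight dictionary) are our own assumptions, and an empty neighborhood is simply assumed to contribute zero.

```python
def absorb(p_self, train_neigh, test_neigh, weights, lambdas=(1/3, 1/3, 1/3)):
    """
    One application of Eq. (22.1) for a single instance n_i and class c_j.
    p_self:      content-based membership p_self(c_j | n_i)
    train_neigh: list of (neighbor id, p(c_j | neighbor)) for trainNeigh(n_i)
    test_neigh:  list of (neighbor id, p(c_j | neighbor)) for testNeigh(n_i)
    weights:     dict mapping neighbor id -> edge weight w(n_i, n_z)
    lambdas:     (lambda1, lambda2, lambda3); set equally to 1/3 in the experiments
    """
    l1, l2, l3 = lambdas

    def weighted_average(neigh):
        if not neigh:                                  # assumption: no neighbors -> zero contribution
            return 0.0
        total = sum(weights.get(nz, 1.0) * p for nz, p in neigh)
        return total / len(neigh)                      # sizeOf(...) in Eq. (22.1)

    return l1 * p_self + l2 * weighted_average(train_neigh) + l3 * weighted_average(test_neigh)
```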
Moderation of class–membership spreading

The membership computed in Eq. 22.1 can be harmful; an instance ni affiliated to each class with the same probability (e.g., binary classification with p(c+|ni) = 0.5 and p(c−|ni) = 0.5) can provide meaningless information to neighboring instances, or even worse, can affect their class–membership negatively. Our assumption is that an instance provides the most useful information to its neighbors when its class–membership belongs with high probability to the positive or the negative class, i.e. p(c+|ni) → 1.0 or p(c−|ni) → 1.0. An eligible solution to improve Eq. 22.1 is to accept information only from instances with a well–formed membership, i.e. to compute the entropy of a node's class–membership,

H(n) = − Σ_{i=1}^{z} p(ci | n) log p(ci | n),

and compare it with a specified moderation threshold; depending on the comparison, information from the node is either accepted or ignored in the information exchange.

Cycles of iteration and final assignment

Class–membership adjustment is an iterative process; the probabilities p_t(cj | ni) gathered in iteration t are utilized to compute the class–membership in iteration t + 1. If Q_t is the membership probability matrix between all objects n ∈ Xtest ∪ Y and all classes ci ∈ C in iteration t, the absorption and spreading of information ends when the difference ||Q_{t+1} − Q_t|| is smaller than some predefined δ. After the iterative spreading is terminated, the final class of each instance ni is arg max_{cj} p(cj | ni).
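The moderation test and the iteration loop could be sketched as follows. This is only one possible reading of the moderation threshold mod (here compared against the largest class-membership value, so that mod = 0.5 disables moderation for binary classification and mod = 1.0 admits only training instances); an equivalent test could put an upper bound on the entropy H(n). The max-norm used for the stopping test is also an assumption, since the norm ||Q_{t+1} − Q_t|| is not fixed above.

```python
import math

def entropy(memberships):
    """H(n) = -sum_i p(c_i | n) * log p(c_i | n) over the class-membership vector."""
    return -sum(p * math.log(p) for p in memberships if p > 0.0)

def may_spread(memberships, mod):
    """Moderation test: spread class-membership only from nodes with a well-formed membership."""
    return max(memberships) >= mod

def iterate(Q, update_step, delta=1e-4, max_iter=100):
    """
    Q:           dict instance -> list of class-membership probabilities
    update_step: function computing Q_{t+1} from Q_t (applying Eq. (22.1) with moderation)
    Stops when the max-norm difference between successive matrices drops below delta,
    then assigns each instance the class with the highest membership.
    """
    for _ in range(max_iter):
        Q_next = update_step(Q)
        diff = max(abs(a - b) for n in Q for a, b in zip(Q[n], Q_next[n]))
        Q = Q_next
        if diff < delta:
            break
    return {n: max(range(len(probs)), key=lambda j: probs[j]) for n, probs in Q.items()}
```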
22.3 Experimental Evaluation and Discussion

In the following experiments, designed to evaluate the effect of the moderation mechanism, we use the MAPEKUS dataset with instances obtained from the ACM (Association for Computing Machinery) portal5, similar to the graph presented in Fig. 22.1.
5 ACM: http://www.acm.org/dl
Three instance types are treated: the leading type Publication, which is primarily classified according to the ACM classification, and two subsidiary instance types, Author and Keyword. Two inter–relation types occur in the data: isAuthorOf and hasKeyword. The weight of each relation edge is set to w(ni, nj) = 1.0. The size of the graph used in our experiments is the following: 4 000 publication instances, 7 600 keywords and 9 700 authors, in total 21 300 unique instances with 35 000 edges. Accuracy gain is observed and evaluated as an indicator of classifier quality. The term accuracy gain expresses the contrast between the accuracy of the content–based classifier and the multi–relational classifier on the same data sample, e.g., when the content classifier achieves accuracy = 80% and the multi–relational classifier attains accuracy = 90%, the accuracy gain is +10%. We adopt the Naive Bayes method as the basal content–based classifier; preclassification is based on the text of publications' abstracts. Vectorization of
abstract text is preceded by stemming and stop–word removal. The provided statistics are averaged over 200 runs.
22.3.1 Moderation Threshold and Accuracy Gain

The moderation parameter established in Sect. 22.2 is introduced with the aim of boosting classifier accuracy. We performed a series of experiments where the moderation threshold (labeled as mod) is set to values between 0.5 and 1.0. mod = 0.5 corresponds to the original non–moderated IRC classifier, with class–membership spreading without constraints. Increasing the value of moderation corresponds to stronger control of class–membership interchange between neighboring instances. Setting the threshold to mod = 1.0 implies that only objects with a well–formed class–membership can spread their values; such a condition is satisfied only by instances from the training set Xtrain, as only these are exclusively truly positive (i.e. p(c+|ni) = 1.0 and p(c−|ni) = 0.0) or truly negative (p(c+|ni) = 0.0 and p(c−|ni) = 1.0).
Fig. 22.2 Influence of moderation threshold on accuracy gain, different classes of ACM
The experiment is carried out with three different top–level classes from ACM (General literature, Software and Data); for each value of mod and each class, all relations present in the dataset are involved. Parameters λ1, λ2 and λ3 were set equally to 1/3, denoting the same weight of all components in Formula 22.1. Figure 22.2 refers to the results of the experiment. The x–axis displays various values of the moderation threshold, the y–axis indicates accuracy gain. All three classes exhibit similar behaviour of the classifier. The stronger the moderation is, the higher is the accuracy gain. This trend reaches its maximum when mod is between 0.85 and 0.95. The decrease of accuracy gain at mod = 1.0 demonstrates the importance of instances of the testing set to the overall accuracy gain (these instances are eliminated from class–membership spreading in the strongly moderated case when
mod = 1.0). This experiment successfully demonstrated the importance of the moderation threshold. The non–moderated multi–relational classifier (corresponding to mod = 0.5 in Fig. 22.2) achieves inadequate, or even negative, accuracy gain (−1.4% for class Data).
22.3.2 Analysis of Relation Quality

In the previous experiment the entire graph with Publication, Author and Keyword instances and the isAuthorOf and hasKeyword relation types was employed. However, the impact of these two relation types on the classifier performance can be different when they are considered independently, as they can exhibit a different degree of homophily between the instances they connect. In the current experiment we investigate the quality of these relation types, comparing accuracy gain for three networks:

• graph with publications and keywords (hasKeyword relation type);
• graph with publications and authors (isAuthorOf);
• joint graph with publications, authors and keywords (isAuthorOf + hasKeyword).

The initial conditions are the following: the moderation threshold is fixed to mod = 0.9 and class Software is considered. The pre–experimental hypothesis is that classifier performance will be highest when both relation types are included in the graph, as most information is provided to the classifier in this configuration. Figure 22.3 displays the result of the experiment. The curve of accuracy gain for mod ∈ [0.5, 0.9) indicates our initial hypothesis is wrong; the accuracy gain in this interval for the graph with relation type isAuthorOf outperforms the graph with both types of relation (isAuthorOf and hasKeyword). The hypothesis is satisfied only for the graph with relation type hasKeyword, which is outperformed by the richer graph with both isAuthorOf and hasKeyword relation types. The loss of accuracy gain for the isAuthorOf + hasKeyword graph is induced by the different character of the relation types, mainly the neighborhood quality of the associated vertices (Author associated with isAuthorOf and Keyword associated with hasKeyword). Authors are more likely to have neighboring publications holding the same class–orientation (i.e. positive or negative examples of a class). A well–formed neighborhood (when all neighboring vertices of a vertex are exclusively positive or negative) corresponds to a positive–to–negative ratio of 1 : 0 or 0 : 1. As much as 89% of Author vertices and only 42% of Keyword vertices fall into this range. A positive–to–negative ratio of 1 : 1 (heterophilic neighborhood) is present in 3% of Author vertices and 43% of Keyword vertices. These statistics exhibit a significant difference between the concerned relation types. Revisiting Fig. 22.3 shows that our original hypothesis6 is correct for mod ∈ (0.9, 1.0], when the accuracy gain for the graph with both relation types predominates – this improvement is stimulated by the positive influence of strong moderation, eliminating most of the heterophilic vertices from both the Keyword and the Author set.
6 Classifier performance will be highest when both relation types are included.
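The neighborhood statistics quoted above can be computed with a short routine of the following kind; the graph and label encodings are our own assumptions, and the exact counting used in the paper may differ (e.g. in how unlabeled neighbors are treated).

```python
def neighborhood_quality(neighbors, labels):
    """
    neighbors: dict subsidiary vertex (Author/Keyword) -> iterable of publication ids
    labels:    dict publication id -> True (positive example) / False (negative example)
    Returns the fraction of vertices with a well-formed neighborhood (ratio 1:0 or 0:1)
    and the fraction with a perfectly heterophilic neighborhood (ratio 1:1).
    """
    well_formed = heterophilic = counted = 0
    for vertex, neigh in neighbors.items():
        labelled = [labels[n] for n in neigh if n in labels]
        if not labelled:
            continue                 # assumption: vertices without labelled neighbors are skipped
        counted += 1
        pos = sum(labelled)
        neg = len(labelled) - pos
        if pos == 0 or neg == 0:
            well_formed += 1
        elif pos == neg:
            heterophilic += 1
    if counted == 0:
        return 0.0, 0.0
    return well_formed / counted, heterophilic / counted
```

Multiplying the returned fractions by 100 gives percentages comparable to the 89%/3% (Author) and 42%/43% (Keyword) figures reported above.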
Fig. 22.3 Classifier performance influenced by relation type
22.4 Related Work

Users searching the Web commonly deal with information overload. Classifiers are frequently employed in Web search as they can automatically conceptualize and schematize the concerned information, which is the base of data indexing. One of the first methods which applied a single–relational classifier to organize hypertext documents connected via hyperlinks was designed by Chakrabarti et al. [3]. Classifiers designed to uncover the majority of information present in multi–relational data structures appeared in the past few years. The moderated iterative multi–relational classification described in this work is an extension of IRC (Iterative Reinforcement Categorization) designed by Xue et al. [11]; the experimental evaluation of this method was based on classification of web pages together with user sessions and their search queries. This branch of multi–relational classifiers uses a graph representation of data. Similar to IRC is Relational Ensemble Classification (REC) [10]. The main difference between IRC and REC is in the graph processing phase; the IRC method iteratively spreads class–membership between intra– and inter–related objects, while the REC method requires the construction of homogeneous subgraphs (each subgraph has a single object type and a single relation type). After the iterative class–membership spreading ends, the results are compiled together using ensemble classification. Moderation of class–membership spreading is the task of determining the proper amount of disseminated information. A similar problem is in the scope of Galstyan et al. [4], where a single–relational binary classifier with a three–state epidemic model is utilized. In a broader sense, information dissemination in graphs is not limited to classification tasks; one of the universal information diffusion methods employed in web search is activation spreading [2].
22.5 Conclusions and Further Work

Multi–relational classification is a recently established but powerful data mining technique gaining attention in hard classification problems such as classification of instances
with sparse or missing attributes, where attribute–based classification cannot succeed. Employing multi–relational data structures and the corresponding methods brings satisfactory results in such circumstances. The collective inference method called Iterative Reinforcement Categorization (IRC) is enhanced with a moderated class–membership spreading mechanism in this paper in order to efficiently deal with varying homophily of related data instances. Experimental evaluation based on data from scientific web portals validates the assumption that class–membership spreading requires moderation of the diffused amount of information, improving classifier accuracy by up to 10%. In addition, moderated class–membership spreading provides an efficient, robust and universal mechanism to deal with the different quality of relation types. Evaluation and comparison of different classifiers is usually performed using a dataset originating from an existing information space, e.g., WebKB7 is a mirror of the Web. Our future work will focus on involving class–membership moderation in other relational classifiers employing the collective inference mechanism [8]. We also plan to investigate the influence of different shapes of the moderation function.

Acknowledgements. This work was partially supported by the Slovak Research and Development Agency under the contract No. APVV-0391-06, the Cultural and Educational Grant Agency of the Slovak Republic, grant No. KEGA 3/5187/07 and by the Scientific Grant Agency of Slovak Republic, grant No. VG1/0508/09.
References

1. Bieliková, M., Frivolt, G., Suchal, J., Veselý, R., Vojtek, P., Vozár, O.: Creation, population and preprocessing of experimental data sets for evaluation of applications for the semantic web. In: Geffert, V., Karhumäki, J., Bertoni, A., Preneel, B., Návrat, P., Bieliková, M. (eds.) SOFSEM 2008. LNCS, vol. 4910, pp. 684–695. Springer, Heidelberg (2008)
2. Ceglowski, M., Coburn, A., Cuadrado, J.: Semantic search of unstructured data using contextual network graphs (2003)
3. Chakrabarti, S., Dom, B.E., Indyk, P.: Enhanced hypertext categorization using hyperlinks. In: Haas, L.M., Tiwary, A. (eds.) Proceedings of SIGMOD 1998, ACM International Conference on Management of Data, pp. 307–318. ACM Press, New York (1998)
4. Galstyan, A., Cohen, P.R.: Iterative relational classification through three-state epidemic dynamics. In: Mehrotra, S., Zeng, D.D., Chen, H., Thuraisingham, B., Wang, F.-Y. (eds.) ISI 2006. LNCS, vol. 3975, pp. 83–92. Springer, Heidelberg (2006)
5. Jensen, D., Neville, J., Gallagher, B.: Why collective inference improves relational classification. In: KDD 2004: Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 593–598. ACM Press, New York (2004)
6. Liu, B.: Web Data Mining: Exploring Hyperlinks, Contents, and Usage Data, pp. 55–115. Springer, Heidelberg (2006)
7. Macskassy, S., Provost, F.: A simple relational classifier. In: Workshop on Multi-Relational Data Mining in conjunction with KDD 2003. ACM Press, New York (2003)
7 WebKB: http://www.cs.cmu.edu/~webkb/
8. Macskassy, S.A., Provost, F.: Classification in networked data: A toolkit and a univariate case study. J. Mach. Learn. Res. 8, 935–983 (2007)
9. Neville, J.: Statistical models and analysis techniques for learning in relational data. Ph.D. thesis, University of Massachusetts Amherst (2006)
10. Preisach, C., Schmidt-Thieme, L.: Relational ensemble classification. In: ICDM 2006: Proceedings of the Sixth International Conference on Data Mining, pp. 499–509. IEEE Computer Society, Washington (2006)
11. Xue, G., Yu, Y., Shen, D., Yang, Q., Zeng, H., Chen, Z.: Reinforcing web-object categorization through interrelationships. Data Min. Knowl. Discov. 12(2-3), 229–248 (2006)
Author Index
Abraham, Ajith 127
Al-Dubaee, Shawki A. 43
Barla, Michal 53
Bieliková, Mária 53, 229
Burget, Radek 61
Dědek, Jan 3
Dokoohaki, Nima 71
Dráždilová, Pavla 155
Dvorský, Jiří 155
Eckhardt, Alan 3
Ezzeddine, Anna Bou 167
Feldmann, Marius 209
Feuerlicht, George 19
Frolov, Alexander 83
Heidari, Ehsan 95
Hoeber, Orland 105
Husek, Dusan 83
Izakian, Hesam 127
Joshi, Bhagwati P. 187
Kaleli, Cihan 117
Khaled, Omar Abou 199
Klos, Karel 135
Krömer, Pavel 127
Kudělka, Miloš 135
Kuzar, Tomas 147
Mahramian, Mehran 95
Martinovič, Jan 155
Massie, Chris 105
Matskin, Mihhail 71
Movaghar, Ali 95
Mugellini, Elena 199
Navrat, Pavol 147, 167
Platoš, Jan 43, 127
Pokorný, Jaroslav 135
Polat, Huseyin 117
Polyakov, Pavel 83
Pryczek, Michal 175
Ray, Santosh Kumar 187
Román, Pablo E. 31
Schill, Alexander 209
Singh, Shailendra 187
Slizik, Lukas 167
Snášel, Václav 43, 127, 135, 155
Sokhn, Maria 199
Szczepaniak, Piotr S. 175
Takama, Yasufumi 135
Thom, James A. 209
Urbansky, David 209
Vaneková, Veronika 219
Velásquez, Juan D. 31
Vojtáš, Peter 3, 219
Vojtek, Peter 229
Index
automatic understanding 117, 135, 167
block importance 61
boolean factor analysis 83
cloud computing 19
clustering 95, 155
communication distance 95
concept extraction 187
confidence 71
description logic 219
differential evolution 127
document
  ranking, 187
  restructuring, 61
framework 53
fuzzy inference 71
genetic algorithms 95
graph 229
homophily 229
Hopfield-like network 83
information
  extraction, 209
  retrieval, 155
informational gain 83
knowledge
  modeling, 199
  visualization, 199
logit 31
longevity of network 95
machine learning 105
multilingual 43
multimedia 117, 135
  information retrieval, 199
multiwavelet transforms 43
neural networks 175
ontologies 199, 209
open-source 53
opinion mining 175
page
  analysis, 61
  segmentation, 61
personalization 71, 105
preference 219
preorder 219
question answering system 187
random utility model 31
reasoning 71
relational classification 229
rough sets 187
SaaS 19
scheduling 127
search engine 43
semantic
  annotation, 3
  user profiles, 71
  web, 3
semantics of the images 117, 135, 167
SOA 19
social networks
  analysis, 53
  web, 147
stochastic
  equation, 31
  process, 31
  simulation, 31
structural pattern analysis 175
topical development 155
uncertainty evaluation 71
user preferences 3
wavelet transforms 43
web
  content mining, 147
  information extraction, 3
  information retrieval, 43
  mining, 117, 135, 167, 209
  search, 105
  usage mining, 147
  user behavior, 31
  user session, 31
  user text preferences, 31
  Web 2.0, 19
wireless sensor networks 95
workflow 53