ADVANCES IN INTELLIGENT IT
Frontiers in Artificial Intelligence and Applications (FAIA) covers all aspects of theoretical and applied artificial intelligence research in the form of monographs, doctoral dissertations, textbooks, handbooks and proceedings volumes. The FAIA series contains several sub-series, including “Information Modelling and Knowledge Bases” and “Knowledge-Based Intelligent Engineering Systems”. It also includes the biannual ECAI, the European Conference on Artificial Intelligence, proceedings volumes, and other ECCAI – the European Coordinating Committee on Artificial Intelligence – sponsored publications. An editorial panel of internationally well-known scholars is appointed to provide a high-quality selection.

Series Editors: J. Breuker, R. Dieng, N. Guarino, J.N. Kok, J. Liu, R. López de Mántaras, R. Mizoguchi, M. Musen and N. Zhong
Volume 138

Recently published in this series:

Vol. 137. P. Hassanaly et al. (Eds.), Cooperative Systems Design – Seamless Integration of Artifacts and Conversations – Enhanced Concepts of Infrastructure for Communication
Vol. 136. Y. Kiyoki et al. (Eds.), Information Modelling and Knowledge Bases XVII
Vol. 135. H. Czap et al. (Eds.), Self-Organization and Autonomic Informatics (I)
Vol. 134. M.-F. Moens and P. Spyns (Eds.), Legal Knowledge and Information Systems – JURIX 2005: The Eighteenth Annual Conference
Vol. 133. C.-K. Looi et al. (Eds.), Towards Sustainable and Scalable Educational Innovations Informed by the Learning Sciences – Sharing Good Practices of Research, Experimentation and Innovation
Vol. 132. K. Nakamatsu and J.M. Abe (Eds.), Advances in Logic Based Intelligent Systems – Selected Papers of LAPTEC 2005
Vol. 131. B. López et al. (Eds.), Artificial Intelligence Research and Development
Vol. 130. K. Zieliński and T. Szmuc (Eds.), Software Engineering: Evolution and Emerging Technologies
Vol. 129. H. Fujita and M. Mejri (Eds.), New Trends in Software Methodologies, Tools and Techniques – Proceedings of the fourth SoMeT_W05
Vol. 128. J. Zhou et al. (Eds.), Applied Public Key Infrastructure – 4th International Workshop: IWAP 2005
Vol. 127. P. Ritrovato et al. (Eds.), Towards the Learning Grid – Advances in Human Learning Services
Vol. 126. J. Cruz, Constraint Reasoning for Differential Models
ISSN 0922-6389
Advances in Intelligent IT Active Media Technology 2006
Edited by
Yuefeng Li Queensland University of Technology, Australia
Mark Looi Queensland University of Technology, Australia
and
Ning Zhong Maebashi Institute of Technology, Japan
Amsterdam • Berlin • Oxford • Tokyo • Washington, DC
© 2006 The authors. All rights reserved.
No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without prior written permission from the publisher.

ISBN 1-58603-615-7
Library of Congress Control Number: 2006924779

Publisher
IOS Press
Nieuwe Hemweg 6B
1013 BG Amsterdam
Netherlands
fax: +31 20 687 0019
e-mail: [email protected]

Distributor in the UK and Ireland
Gazelle Books Services Ltd.
White Cross Mills
Hightown
Lancaster LA1 4XS
United Kingdom
fax: +44 1524 63232
e-mail: [email protected]

Distributor in the USA and Canada
IOS Press, Inc.
4502 Rachael Manor Drive
Fairfax, VA 22032
USA
fax: +1 703 323 3668
e-mail: [email protected]
LEGAL NOTICE The publisher is not responsible for the use which might be made of the following information. PRINTED IN THE NETHERLANDS
Advances in Intelligent IT Y. Li et al. (Eds.) IOS Press, 2006 © 2006 The authors. All rights reserved.
Preface

In the great digital era, we are witnessing many rapid scientific and technological developments in human-centered, seamless computing environments, interfaces, devices, and systems, with applications ranging from business and communication to entertainment and learning. These developments are collectively best characterized as Active Media Technology (AMT), a new area of intelligent information technology and computer science that emphasizes the proactive, seamless roles of interfaces and systems as well as new media in all aspects of digital life. An AMT-based computer system offers services that enable the rapid design, implementation, deployment and support of customized solutions.

The first International Conference on Active Media Technology (AMT01) was held in Hong Kong in 2001, the second International Conference on Active Media Technology (AMT03) was held in Chongqing, China on May 29–31, 2003, and the third International Conference on Active Media Technology (AMT05) was held in Kagawa, Japan in May 2005. The 4th International Conference on Active Media Technology (AMT06) follows the success of AMT01, AMT03 and AMT05. AMT06 is the leading international conference focusing on Active Media Technology. It aims to bring together researchers from diverse areas, such as Web intelligence, data mining, intelligent agents, smart information use, networking and intelligent interfaces. It also encourages collaborative research in these areas to provide the best services for enabling the rapid design, implementation, deployment and support of customized solutions. The conference includes the following topics:

• Active Computer Systems and Intelligent Interfaces
• Adaptive Web Systems and Information Foraging Agents
• Web Mining, Wisdom Web and Web Intelligence
• E-Commerce and Web Services
• Data Mining, Ontology Mining and Data Reasoning
• Network, Mobile and Wireless Security
• Entertainment and Social Applications of Active Media
• Agent-Based Software Engineering and Multi-Agent Systems
• Digital City and Digital Interactivity
• Machine Learning and Human-Centred Robotics
• Multi-Modal Processing, Detection, Recognition, and Expression Analysis
• Personalized, Pervasive, and Ubiquitous Systems and their Interfaces
• Smart Digital Media
• Evaluation of Active Media and AMT Based Systems
AMT06 is sponsored by the IEEE Systems, Man, and Cybernetics Society and Queensland University of Technology. It attracted 123 submissions from 19 countries and regions: Algeria, Australia, China, Canada, England, Finland, France, Hong Kong, India, Japan, Korea, New Zealand, Pakistan, Poland, Republic of Korea, Taiwan,
United Arab Emirates, United Kingdom, and United States of America. The review process was rigorous: each paper was reviewed by at least two reviewers, and most were reviewed by three. The Program Committee accepted 39 regular papers (the approximate acceptance rate is 32%), 33 short papers (the approximate acceptance rate is 39%) and 9 industry/demonstration papers. We would like to thank the members of the Program Committee, the Organizing Committee and the reviewers who contributed to the success of this conference.

Yuefeng Li, Mark Looi and Ning Zhong
17 March 2006
Organization

AMT06 was hosted by Queensland University of Technology.

Conference Chair: Mark Looi, Queensland University of Technology, Australia
Conference Co-Chair: Ning Zhong, Maebashi Institute of Technology, Japan
Program Chair: Yuefeng Li, Queensland University of Technology, Australia
Local Organizing Chair: Yue Xu, Queensland University of Technology, Australia
Industry/Demo Chair: Raymond Lau, City University of Hong Kong
Publicity Chair: Guoyin Wang, Chongqing University of Posts and Telecommunications, China
Social Event Chair: Shlomo Geva, Queensland University of Technology, Australia
Steering Committee:
Jiming Liu, University of Windsor, Canada
Toyoaki Nishida, Kyoto University, Japan
Ning Zhong, Maebashi Institute of Technology, Japan

Local Organizing Committee:
Jinhai Cai, Queensland University of Technology, Australia
Vicky Liu, Queensland University of Technology, Australia
Leonie Simpson, Queensland University of Technology, Australia
On Wong, Queensland University of Technology, Australia
Jan Wilcox, Queensland University of Technology, Australia
Jinglan Zhang, Queensland University of Technology, Australia

Program Committee:
Jiyuan An, Deakin University, Australia
David Billington, Griffith University, Australia
Kankana Chakrabarty, University of New England, Australia
Zheng Chen, Microsoft Research Asia, China
Young Choi, University of Tasmania, Australia
Shang Gao, Deakin University, Australia
Xiaoying Gao, Victoria University, New Zealand
James Harland, RMIT University, Australia
Qing He, Chinese Academy of Science, China
Masahito Hirakawa, Shimane University, Japan
Weijia Jia, City University of Hong Kong
Jesse Jin, The University of Newcastle, Australia
Rajiv Khosla, La Trobe University, Australia
Yasuhiko Kitamura, Kwansei Gakuin University, Japan
Yue-Sun Kuo, Institute of Information Science, Academia Sinica, Taiwan
Raymond Lau, City University of Hong Kong
Chunping Li, Tsinghua University, China
Jiuyong Li, University of Southern Queensland, Australia
Wei Li, Central Queensland University, Australia
Xiaodong Li, RMIT University, Australia
Yan Li, University of Southern Queensland, Australia
Fei Liu, La Trobe University, Australia
Jiming Liu, University of Windsor, Canada
Wanquan Liu, Curtin University of Technology, Australia
Hongen Lu, La Trobe University, Australia
Frederic Maire, Queensland University of Technology, Australia
Jun Munemori, Wakayama University, Japan
Richi Nayak, Queensland University of Technology, Australia
Wee keong Ng, Nanyang Technological University, Singapore
Yoshihiro Okada, Kyushu University, Japan
Sunju Park, Yonsei University, Seoul, Korea
Terry R Payne, Southampton University, UK
Binh Pham, Queensland University of Technology, Australia
Mikhail Prokopenko, CSIRO ICT Centre, Sydney, Australia
Paul Roe, Queensland University of Technology, Australia
Atul Sajjanhar, Deakin University, Australia
Eugene Santos Jr., Dartmouth College, USA
Ichiro Satoh, National Institute of Informatics, Japan
Hideyuki Sawada, Kagawa University, Japan
Zhongzhi Shi, Chinese Academy of Science, China
Timothy Shih, Tamkang University, Taiwan
Carles Sierra, CSIC-Spanish Scientific Research Council, Spain
Jackie Silcock, Deakin University, Australia
Dawei Song, Open University, UK
Hiroyuki Tarumi, Kagawa University, Japan
Nipon Theera-Umpon, Chiang Mai University, Thailand
Eric Cheung Choy Tsang, Hong Kong Polytechnic University, China
Kuniaki Uehara, Kobe University, Japan
Guoyin Wang, Chongqing University of Posts and Telecommunications, China
Hui Wang, University of Ulster, UK
Kewen Wang, Griffith University, Australia
Tomio Watanabe, Okayama Prefectural University, Japan
Peng Wen, The University of South Queensland, Australia
Yue Xu, Queensland University of Technology, Australia
Yiyu Yao, University of Regina, Canada
Dit-Yan Yeung, Hong Kong University of Science and Technology, Hong Kong
Tetsuya Yoshida, Hokkaido University, Japan
Shui Yu, Deakin University, Australia
Mengjie Zhang, Victoria University, New Zealand
Minjie Zhang, University of Wollongong, Australia
Shichao Zhang, University of Technology, Sydney, Australia
Zili Zhang, Deakin University, Australia
Shaochun Zhong, Northeast Normal University, China
Xuehai Zhou, University of Science and Technology of China, China
Ce Zhu, Nanyang Technological University, Singapore
Non-Program Committee Reviewers
Jia Hu
John King
Xiaohui Tao (Daniel)
Sheng Tang Wu (Sam)
Wanzhong Yang
Xujuan Zhou (Susan)
Keynotes
Active Media Technologies (AMT) from the Standpoint of the Wisdom Web

Jiming Liu
Professor and Director of School of Computer Science
University of Windsor, Windsor, Ontario, Canada

As advocated in [1,4,7], the next generation of the Web Intelligence (WI) technologies [5,6,7] will aim at enabling users to go beyond the existing online information search and knowledge query functionalities and to gain, from the Web, practical wisdoms of living, working, and playing. The paradigm of Wisdom Web based computing will provide not only a medium for seamless knowledge and experience sharing, but also a supply of self-organized resources for driving sustainable knowledge creation and scientific or social development/evolution [1].

This talk will look into the challenges of future Active Media Technologies (AMT) from the standpoint of the Wisdom Web. We envision that the future AMT environments will be best characterized as communities of autonomous entities (i.e., Wisdom Web agents) that establish and maintain a vast collection of socially or scientifically functional/behavioral networks. The dynamic flows of services, such as information and knowledge exchanges following some predefined protocols, will allow for the dynamic formation, reformation, and consolidation of such networks. As a result, networks of common practice or shared markets will emerge. The process of the dynamic interactions among the agents is a complex one, in which many types of interesting complex emergent behaviors can be induced and observed [2]. In this talk, we will discuss not only the dynamics of formation and growth of the networks, but even more importantly the dynamics of the network functions with reference to certain goal-directed criteria. In order to support such a Wisdom Web, it has been suggested that a grid¹-like computing infrastructure with intelligent service agencies is needed, where these agencies can autonomously interact, self-organize, learn, and evolve their course of actions and identities [1,3,4].

1. The Wisdom Web Challenges

Generally speaking, the Wisdom Web encompasses the systems, environments, and activities (1) that are empowered through the global or regional connectivity of computing resources as well as distribution of data, knowledge, and contextual presence, and (2) that are specifically dedicated to enable human beings to gain practical wisdoms throughout their professional and personal activities, such as running a business and living with a certain lifestyle.

¹ The notion of grid here will not be limited to the technological sense of grid that is currently adopted in the Internet for data, computation, and knowledge sharing.
The best way to define the Wisdom Web is by stating what the Wisdom Web must be capable of operationally performing and demonstrating, i.e., by providing an operational definition. Towards this end, we identify at least three aspects in which the Wisdom Web must do well:

1. Discovering the best means and the best ends: The Wisdom Web needs to discover: What are the goals and sub-goals that a user is trying to attain? What will be the best strategy? What will be the course of actions for implementation?

Remark 1. (Practical sense of Wisdom) In its literal terms, wisdom refers to the knowledge of the best means and the best ends. Here, we shall focus on practical and operationalizable means, such as the steps and tasks to be involved, for achieving some practical goals, such as performing and/or adding values to a process, computational (e.g., scientific research and discovery) or non-computational (e.g., advice seeking), in which certain desired states and utilities should be gained and some constraints be satisfied.

2. Mobilizing distributed resources: The Wisdom Web needs to determine: What resources are relevant? How can distributed resources be coordinated and streamlined? What are the cost-effective ways to optimally utilize them? What are the dynamics of resource utilization?

Remark 2. (Resources on the Wisdom Web) Here by resources it is meant not only computational resources, such as memory space and processing power, but also data and knowledge resources, such as distributed scientific or business data and some application-specific domain knowledge.

3. Enriching social interaction: The Wisdom Web needs to understand: What is the new form of social interaction to emerge in work, life, and play? How are certain forms of social norms, values, beliefs, as well as commonsense knowledge to be promoted and shared? How can a social community be sustained?

Remark 3. (Social dimension of the Wisdom Web) This objective concerns the social dimension of the Wisdom Web. It regards the Wisdom Web beyond the forms of media, online systems, or environments. It advocates that the Wisdom Web should become the integral part of human-level, community-level communications, which is aimed to support the social sharing and evolution of experiences and wisdoms.
2. A Grand Intellectual Undertaking Developing the Wisdom Web presents a grand opportunity of research and development in computer science and information technology (IT). It redefines the fundamental roles and practical impacts of AI and IT. Here by AI, we meant both early AI, such as knowledge representation, reasoning, and planning, and contemporary AI, such as knowledge discovery, autonomy oriented computing, and social intelligence. The realm of IT encompasses advanced information technologies and their applications, such as sensor networks, mobile and ubiquitous computing, computational/data/knowledge grids, and social networks. In many ways, developing the Wisdom Web will broaden the horizon of AI research: (1) The marriage of AI and IT identifies many new problems and challenges that we
have never seen or been able to solve before, offering us new opportunities to think about and test new AI paradigms; (2) the Wisdom Web provides a rich domain for applying and demonstrating some existing AI techniques and solutions that have been, for the past four to five decades, involving mostly solving toy problems.

In order to ultimately perform and demonstrate the three aspects as highlighted in Section 1, the Wisdom Web presents not only an engineering challenge, but also a scientific endeavor that requires new theories and paradigms for computing and interacting with humans in a human way. This is most likely to employ as well as further extend theories in sociology, ecology, economics, and physics, among others:

1. Sociology has much to offer as to understanding how people reach consensus and form new opinions or social norms, how the roles and functions of individuals change over time in a dynamically fast evolving digital society, how the virtual worlds become real (e.g., games), and how the real world becomes virtual (e.g., business and organizations).

2. Ecology will be extended to account for the new ‘food chain’ in the digital world, with respect to how digital trends (e.g., chips, palm computers, servers, agents, networks, etc.) will evolve, how they are related to each other as well as to other technologies (e.g., embedded nano-systems), and what are their developmental stages and lifecycles.

3. Economics will enable us to look into such issues as how to measure, exchange, distribute, share, and grow the values and ownerships of digital commodities in the society.

4. Physics will play an important role as to empirically measure various regularities emergent from the digital society and to discover the laws for explaining such phenomena as phase transitions and self-organized criticality.
Acknowledgements I would like to thank Dr. Yuefeng Li and other organizers of AMT 2006 for their kind invitation and organization. Special thanks go to my research collaborators, Profs. Ning Zhong and Y. Y. Yao, as well as the members of my research team. Also, I would like to acknowledge the support of the following research grants: Hong Kong Research Grant Council (RGC) Central Allocation Grant (HKBU 2/03/C) and Earmarked Research Grants (HKBU 2121/03E)(HKBU 2040/02E).
References [1] Liu, J. (2003). Web Intelligence (WI): What makes Wisdom Web? Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI-03), Acapulco, Mexico, Aug. 9-15, 2003, Morgan Kaufmann Publishers, pp. 1596-1601 (An invited plenary talk). [2] Liu, J., Jin, X., and Tsui, K.C. (2005). Autonomy Oriented Computing (AOC): From Problem Solving to Complex Systems Modeling, Kluwer Academic Publishers/Springer. [3] Liu, J. and Yao, C. (2004). Rational competition and cooperation in ubiquitous agent communities, Knowledge-Based Systems, 17, 5-6, pp. 189-200.
[4] Liu, J., Zhong, N., Yao, Y. Y., and Ras, Z. W. (2003). The Wisdom Web: New challenges for Web Intelligence (WI), Journal of Intelligent Information Systems, Kluwer Academic Publishers, 20, 1. [5] Yao, Y.Y., Zhong, N., Liu, J., and Ohsuga, S. (2001). Web Intelligence (WI): Research challenges and trends in the new information age, Web Intelligence: Research and Development, LNAI 2198, Springer, pp. 1-17. [6] Zhong, N., Liu, J., and Yao, Y. Y. (2002). In search of the Wisdom Web, IEEE Computer, 35, pp. 27-31. [7] Zhong, N., Liu, J., and Yao, Y. Y. (2003). (Eds.), Web Intelligence, Springer.
Anytime learning and classification for online applications

Geoffrey I. Webb 1
Monash University, Clayton, Vic. 3800, Australia

Abstract. Many online applications of machine learning require fast classification and hence utilize efficient classifiers such as naïve Bayes. However, outside periods of peak computational load, additional computational resources will often be available. Anytime classification can use whatever computational resources may be available at classification time to improve the accuracy of the classifications made.

Keywords. Machine Learning; Anytime Algorithms; Anytime Learning; Anytime Classification
1. Introduction Efficient and effective predictive inference is critical to the success of many online computer systems including such diverse applications as web search, information retrieval, SPAM filtering, cache management, fraud detection, fault detection, network intrusion detection, fault diagnosis and user modeling. In such online contexts predictions must usually be delivered very quickly. In addition to strong time constraints, the memory that may be devoted to the prediction task is often also very limited, as there may be numerous simultaneous sessions being run on a single computer. Time and memory constraints may vary depending upon the computational load in the application environment at the time that prediction is required. These requirements are a major factor behind the widespread deployment of Naïve Bayes (NB) [1]. NB has negligible time and space complexity. Hence its deployment minimizes the risk of exceeding any given time and space constraints. However, NB is known to frequently deliver less accurate predictions than more complex techniques [2,3,4,5,6,7,8, 9,10,11,12,13,14,15,16,17]. In consequence, when NB is more efficient than the operational time and space constraints require, it is likely that the predictions that are delivered are less accurate than could be obtained with better utilization of the available computational resources. This results in such undesirable outcomes as useful web pages not being identified; relevant information being overlooked or irrelevant information being retrieved; SPAM going undetected or non-SPAM email being incorrectly blocked; data access being needlessly slow due to inefficient cache management; failures to detect or false alarms for fraud, faults and network intrusions; incorrect fault diagnosis; and inaccurate user models. 1 Correspondence
to: Geoffrey I. Webb, Faculty of Information Technology, PO Box 75, Monash University, Clayton, Vic. 3800. Tel.: +61 3 990 53296; Fax: +61 3 990 55146; E-mail:
[email protected]
Anytime Averaged One-Dependence Estimators [18] seek to address these issues by utilizing any additional time available at classification time to improve upon the predictions of NB. Thus, under peak load the system will use minimal resources for classification by employing NB, but when additional resources are available more accurate classification will be performed.
2. Naïve Bayes

Assume k classes c_1 . . . c_k; m attributes; and a training set of t objects, T = ⟨y_1, x_1⟩ . . . ⟨y_t, x_t⟩, where each y_i ∈ {c_1 . . . c_k} and each x_i is a vector of attribute values ⟨x_1 . . . x_m⟩. A learner λ is applied to T at training time to generate a model λ(T). A new vector of attribute values x is presented at classification time, together with the model, to a classifier φ, which produces a classification φ(λ(T), x) ∈ {c_1 . . . c_k}. NB uses Bayes formula

    P(y | x) = P(y) P(x | y) / P(x)    (1)

together with the attribute independence assumption,

    P(x | y) = ∏_{i=1}^{n} P(x_i | y),    (2)

where P(y) and P(x_i | y) are estimated from the frequency of the relevant terms in T, usually with smoothing for sampling error such as the Laplace correction. Optimal classification can be achieved by selecting argmax_y(P(y | x)) and hence, using the above assumptions, NB classifies using

    argmax_y ( P̂(y) ∏_{i=1}^{n} P̂(x_i | y) / P̂(x) )    (3)

    = argmax_y ( P̂(y) ∏_{i=1}^{n} P̂(x_i | y) )    (4)

where P̂(·) represents an estimate of P(·).
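As a concrete reading of equations (1)–(4), the following is a minimal sketch of frequency-based NB training and classification with a Laplace correction. It is illustrative only, not the implementation used in the work described here; the class and variable names are assumptions.

from collections import defaultdict

class NaiveBayes:
    """Minimal categorical naive Bayes, classifying by equation (4)."""

    def fit(self, X, y):
        self.n = len(y)
        self.class_count = defaultdict(int)   # frequency of each class in T
        self.value_count = defaultdict(int)   # frequency of each (class, attribute, value)
        self.attr_values = defaultdict(set)   # distinct values seen per attribute
        for xs, c in zip(X, y):
            self.class_count[c] += 1
            for i, v in enumerate(xs):
                self.value_count[(c, i, v)] += 1
                self.attr_values[i].add(v)
        return self

    def score(self, xs, c):
        # Laplace-corrected estimate of P(c) * prod_i P(x_i | c), as in equation (4)
        p = (self.class_count[c] + 1) / (self.n + len(self.class_count))
        for i, v in enumerate(xs):
            p *= (self.value_count[(c, i, v)] + 1) / (self.class_count[c] + len(self.attr_values[i]))
        return p

    def predict(self, xs):
        # argmax over the classes seen in training; P(x) is omitted, as in (3) -> (4)
        return max(self.class_count, key=lambda c: self.score(xs, c))

# Example with toy data:
# NaiveBayes().fit([["sunny", "hot"], ["rain", "mild"]], ["no", "yes"]).predict(["rain", "hot"])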
3. Averaged One-Dependence Estimators

NB is computationally efficient and delivers surprisingly strong classification performance given its simplicity. However, where the attribute independence assumption (2) is violated it may prove suboptimal, and there is considerable evidence that this occurs in practice [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17]. Averaged One-Dependence Estimators (AODE) [14] seek to alleviate the problems associated with the attribute independence assumption by using one-dependence estimators [10], by which all attributes are assumed independent given both the class and one other attribute. AODE avoids the high computational overheads associated with other one-dependence estimators [3,4,10] by averaging over all of a limited class of one-dependence estimators. It is justified by the observation that for any x_i,

    P(y | x) = P(x_i, y) P(x | x_i, y) / P(x).    (5)

It follows that for any I ⊆ {1 . . . m},

    P(y | x) = ( ∑_{i∈I} P(x_i, y) P(x | x_i, y) / P(x) ) / |I|    (6)

where |I| denotes the number of elements in I. By assuming all attributes are independent of each other given x_i and y we can derive

    P(x | x_i, y) = ∏_{j=1}^{m} P(x_j | x_i, y).    (7)
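As a hedged illustration of equations (5)–(7), and not code from the cited work, the following sketch computes a single super-parent estimate and averages a set of them as in equation (6). The joint frequency tables pair[(c, i, v)] and triple[(c, i, v, j, w)], the training-set size n, and the choice of parents are assumed to have been prepared at training time (as dictionaries that return 0 for unseen keys); smoothing is omitted for brevity.

def super_parent_score(xs, c, i, pair, triple, n):
    # P(x_i, c) * prod_j P(x_j | x_i, c), combining equations (5) and (7); raw frequencies,
    # which a real implementation would smooth (e.g. with a Laplace correction)
    v = xs[i]
    if pair[(c, i, v)] == 0:
        return 0.0                      # parent value unseen with this class
    p = pair[(c, i, v)] / n
    for j, w in enumerate(xs):
        p *= triple[(c, i, v, j, w)] / pair[(c, i, v)]
    return p

def averaged_score(xs, c, parents, pair, triple, n):
    # average over the selected super-parents, as in equation (6)
    scores = [super_parent_score(xs, c, i, pair, triple, n) for i in parents]
    return sum(scores) / len(scores) if scores else 0.0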
By using (7) within (5) we obtain a super-parent one-dependence estimator (SPODE), a one-dependence estimator for which there is a single attribute such that all other attributes depend upon only it and the class. AODE uses (7) within (6), resulting in a new one-dependence estimator that forms an ensemble of all SPODEs that satisfy a minimum support constraint and averages their conditional probability estimates. In the current research, the minimum support constraint specifies that all and only SPODEs for which the parent attribute value x_i occurs in T are used.

4. Anytime Averaged One-Dependence Estimators

Anytime Averaged One-Dependence Estimators (AAODE) is based on the observation that for any I ⊆ {1 . . . m},

    P(y | x) = ( P(y) P(x | y) + ∑_{i∈I} P(x_i, y) P(x | x_i, y) ) / ( (|I| + 1) × P(x) )    (8)

which justifies averaging NB together with the SPODEs in AODE. As the equality holds for any set of SPODEs, AAODE first calculates NB and then as many SPODEs as time permits. The resulting conditional probability estimates are then averaged and the class with the highest conditional probability selected. This leaves the problem of how the SPODEs should be ordered, as it will clearly be beneficial to order the SPODEs so that the most accurate are evaluated first. Our initial technique ordered them on the support for the parent x_i, such that the values that occur most frequently are evaluated first [18]. Figure 1 reproduces a learning curve that shows the monotonic decrease in error as more SPODEs are evaluated. The y-axis is the average standardized ten-fold cross-validation error over all 37 datasets used by the researchers. The error is standardized by dividing it by the ten-fold cross-validation error of NB on the same data. The x-axis represents the number of SPODEs evaluated, with 1 representing NB and no SPODEs, 2 representing NB and 1 SPODE, and so on. As different datasets have different numbers of attributes, different points on the x-axis represent averages over different numbers of datasets, and this accounts for the jagged upward spikes which occur when some easier datasets fall out of the assessment.
Figure 1. Learning curve for AAODE, reproduced from [18] (x-axis: number of sub-models; y-axis: average error / NB error)
Figure 2. The importance of stopping criteria, reproduced from [19] (left panel: LOO.Stop vs. LOO.NoStop; right panel: FSA.Stop vs. FSA.NoStop; x-axis: ensemble size; y-axis: error ratio against NB)
Subsequent research has shown that this ordering scheme is not particularly effective and that substantially better performance can be obtained when the SPODEs are ordered using their leave-one-out classification accuracy on the training data [19]. This research also shows that even greater accuracy can be obtained if the leave-one-out accuracy of entire ensembles of SPODEs is considered, but at the cost of a considerable increase in training time. A second important issue that arises once a strong ordering scheme is used is that evaluating the most accurate SPODEs first implies that the least accurate SPODEs will be evaluated last. In this context there is the potential for the addition of the final SPODEs to the classification ensemble to increase error. To prevent this, it is important to have an effective stopping criterion by which it is decided when to stop adding SPODEs. Figure 2 reproduces an illustration of the effectiveness of stopping criteria for ordering by each of leave-one-out evaluation of individual SPODEs (LOO) and leave-one-out evaluation of ensembles of SPODEs (FSA). In each case the lines indicate the standardized error averaged over all datasets, the one labeled ‘Stop’ showing performance when a stopping criterion is employed and ‘NoStop’ showing performance when one is not. As can be seen, the use of a stopping criterion greatly improves performance.
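Pulling these pieces together, the following hedged sketch shows the shape of an anytime classification loop in the spirit of equation (8): the NB estimate is computed first, and super-parent estimates are folded in, in a precomputed order (for example by leave-one-out accuracy), until a stopping point or the time budget is reached. The callables nb_score and spode_scores, and the precomputed order and stop_after, are assumptions rather than the published interface.

import time

def anytime_classify(xs, classes, nb_score, spode_scores, order, stop_after, budget_seconds):
    deadline = time.monotonic() + budget_seconds
    totals = {c: nb_score(xs, c) for c in classes}     # the NB term of equation (8)
    used = 0
    for i in order:                                    # e.g. ordered by leave-one-out accuracy
        if used >= stop_after or time.monotonic() >= deadline:
            break                                      # stopping criterion or out of time
        for c in classes:
            totals[c] += spode_scores[i](xs, c)        # add one super-parent estimate
        used += 1
    # dividing every total by (used + 1) rescales all classes equally, so the argmax is unchanged
    return max(classes, key=lambda c: totals[c])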
5. Conclusion Anytime classification has the potential to improve the performance of online classification systems by utilizing otherwise idle computational resources to improve classification
performance. Preliminary techniques based on the AODE learning algorithm demonstrate considerable promise and show that it is possible to use additional classification time to improve upon the classification accuracy of NB.
Acknowledgements I am grateful to Ying Yang for comments on a draft of this paper and to her and my other colleagues Zijian Zheng, Janice Boughton, Zhihai Wang, Kevin Korb, Kai Ming Ting and Fei Zheng who have all contributed to the research program described herein.
References [1] David D. Lewis. Naive Bayes at forty: The independence assumption in information retrieval. In ECML-98: Proc. Tenth European Conf. Machine Learning, pages 4–15, Berlin, April 1998. Springer. [2] Jesus Cerquides and Ramon Lopez de Mantaras. Robust Bayesian linear classifier ensembles. In Proc. 16th European Conf. Machine Learning (ECML-05), pages 72–83, 2005. [3] Nir Friedman, Dan Geiger, and Moises Goldszmidt. Bayesian network classifiers. Machine Learning, 29(2):131–163, 1997. [4] E. Keogh and M. Pazzani. Learning augmented Bayesian classifiers: A comparison of distribution-based and classification-based approaches. In Proc. Int. Workshop on Artificial Intelligence and Statistics, pages 225–230, 1999. [5] Ron Kohavi. Scaling up the accuracy of naive-Bayes classifiers: A decision-tree hybrid. In Proc. Second ACM SIGKDD Int. Conf. Knowledge Discovery and Data Mining (KDD-96), pages 202–207, Portland, Or, 1996. [6] Igor Kononenko. Semi-naive Bayesian classifier. In Proc. Sixth European Working Session on Learning, pages 206–219, Berlin, 1991. Springer-Verlag. [7] Pat Langley. Induction of recursive Bayesian classifiers. In Proc. 1993 European Conf. Machine Learning, pages 153–164, Berlin, 1993. Springer-Verlag. [8] Pat Langley and Stephanie Sage. Induction of selective Bayesian classifiers. In Proc. Tenth Conf. Uncertainty in Artificial Intelligence, pages 399–406. Morgan Kaufmann, 1994. [9] Michael J. Pazzani. Constructive induction of Cartesian product attributes. In ISIS: Information, Statistics and Induction in Science, pages 66–77, Singapore, August 1996. World Scientific. [10] M Sahami. Learning limited dependence Bayesian classifiers. In Proc. Second Int. Conf. Knowledge Discovery and Data Mining, pages 334–338, Menlo Park, CA, 1996. AAAI Press. [11] Moninder Singh and Gregory M. Provan. Efficient learning of selective Bayesian network classifiers. In Proc. Thirteenth Int. Conf. Machine Learning, pages 453–461, San Francisco, 1996. Morgan Kaufmann. [12] Geoffrey I. Webb and Michael J. Pazzani. Adjusted probability naive Bayesian induction. In Proc. Eleventh Australian Joint Conf. Artificial Intelligence, pages 285–295, Berlin, 1998. Springer. [13] Geoffrey I. Webb. Candidate elimination criteria for Lazy Bayesian Rules. In Proc. Fourteenth Australian Joint Conf. Artificial Intelligence, pages 545–556, Berlin, December 2001. Springer. [14] Geoffrey I. Webb, Janice Boughton, and Zhihai Wang. Not so naive Bayes: Averaged onedependence estimators. Machine Learning, 58(1):5–24, 2005. [15] Zhipeng Xie, Wynne Hsu, Zongtian Liu, and Mong Li Lee. SNNB: A selective neighborhood based naive Bayes for lazy learning. In Ming-Syan Chen, Philip S. Yu, and Bing Liu, editors,
Advances in Knowledge Discovery and Data Mining, Proc. PAKDD 2002, pages 104–114, Berlin, May 2002. Springer.
[16] Zijian Zheng and Geoffrey I. Webb. Lazy learning of Bayesian Rules. Machine Learning, 41(1):53–84, 2000.
[17] Zijian Zheng, Geoffrey I. Webb, and Kai Ming Ting. Lazy Bayesian Rules: A lazy semi-naive Bayesian learning technique competitive to boosting decision trees. In Proc. Sixteenth Int. Conf. Machine Learning (ICML-99), pages 493–502. Morgan Kaufmann, 1999.
[18] Geoffrey I. Webb, Ying Yang, and Janice Boughton. Learning for anytime classification. Submitted for publication, 2006.
[19] Ying Yang, Geoffrey I. Webb, Kevin Korb, and Kai Ming Ting. Effectively ordering and ensembling probabilistic estimators for anytime classification. Submitted for publication, 2006.
Domain-Driven Data Mining: Methodologies and Applications

Chengqi ZHANG and Longbing CAO
Faculty of Information Technology, University of Technology, Sydney, Australia
{chengqi, lbcao}@it.uts.edu.au

Abstract. The aim and objective of data mining is to discover actionable knowledge of main interest to real user needs, which is one of the Grand Challenges in KDD. Most extant data mining is a data-driven trial-and-error process. Patterns discovered via predefined models in such a process are often of limited interest to constraint-based real business. In order to work out patterns that are really interesting and actionable in the real world, pattern discovery is more likely to be a domain-driven, human-machine-cooperated process. This talk proposes a practical data mining methodology named “domain-driven data mining”. Its main ideas include a Domain-Driven In-Depth Pattern Discovery framework (DDID-PD), constraint-based mining, in-depth mining, human-cooperated mining and loop-closed mining. Guided by this methodology, we demonstrate some of our work in identifying useful correlations in real stock markets, for instance, discovering optimal trading rules from existing rule classes, and mining trading rule-stock correlations in stock exchange data. The results have attracted strong interest from both traders and researchers in stock markets. This shows that the methodology has potential for guiding the deep mining of patterns interesting to real business.
1 Introduction
The extant data mining is a data-driven trial-and-error process [2] in which data mining algorithms extract patterns from converted data via predefined models based on experts' hypotheses. Data mining is presumed to be an automated process that produces automatic algorithms and tools without human involvement or the capability to adapt to external environment constraints. However, real-world data mining targets actionable knowledge discovery, which can afford important grounds to business decision makers for performing appropriate actions. In the panel discussions of SIGKDD 2002 and 2003 [2, 8], this was highlighted by panelists as one of the Grand Challenges for extant and future data mining. Real-world data mining, for instance financial data mining in capital markets, is highly constraint-based [9] and domain-oriented. Constraints involve technical, business, economic and social aspects in the process of developing and deploying actionable knowledge. Domain orientation involves aspects such as business requirements and objectives, user preference, domain and background knowledge, user involvement, business-driven measurement and evaluation, and so on. In actionable knowledge discovery from data embedded in a constrained environment, it is essential to slough off the superficial and capture the essential information from the data mining. However, this is a non-trivial task. While many methodologies have been studied, they either view data mining as an automated process, or deal with real-world constraints in a case-by-case manner. Our experience in developing human-machine-cooperation intelligent systems [3] and lessons learned
in financial data mining [11] show that the involvement of domain knowledge and experts, the consideration of constraints, and in-depth pattern development are essential for filtering subtle concerns while capturing incisive issues. Combining these aspects, a sleek DM (data mining) methodology can be developed to find the distilled core of a problem. These are our motivations for developing a practical domain-driven data mining methodology, referred to as domain-driven in-depth pattern discovery (DDID-PD). It can guide the process of real-world data analysis and preparation, the selection of features, the design and fine-tuning of algorithms, and the evaluation and refinement of mining results in a manner more effective and workable for real-world data mining.
2 Domain-driven data mining methodologies
The domain-driven data mining methodologies [6,7] consist of main components such as a domain-driven data mining process framework (DDID-PD), a knowledge actionability measure, constraint-based mining, in-depth mining, human-cooperated mining, loop-closed mining, and interactive and parallel data mining supports. DDID-PD takes I3D (namely Interactive, In-depth, Iterative and Domain-specific) as the real-world KDD basics. I3D means that the discovery of actionable knowledge is an iteratively interactive in-depth pattern discovery process in a domain-specific context. I3D is further embodied through (i) constraint-based mining, (ii) human-cooperated mining, (iii) in-depth mining, (iv) loop-closed mining, (v) knowledge actionability, (vi) questionnaire-based interviews, and (vii) interactive and parallel system supports.

Constraint-based mining. Mining in a constraint-based context requires effectively extracting and transforming domain-specific datasets with advice from domain experts and their knowledge. The constrained environment is composed of domain constraints, data constraints, human-related constraints, interestingness constraints, and deployment constraints.

Human-cooperated mining. In this framework, data mining and domain experts complement each other in regard to in-depth granularity through an interactive interface. The involvement of domain experts and their knowledge can assist in developing highly effective domain-specific data mining techniques and reduce the complexity of the knowledge producing process in the real world.

In-depth mining. In-depth pattern mining discovers more interesting and actionable patterns from a domain-specific perspective.

Loop-closed mining. A system following the DDID-PD framework can embed effective supports for domain knowledge and experts' feedback, and refines the lifecycle of data mining in an iterative manner.

Knowledge actionability. The resulting patterns identified are evaluated in terms of both technical interestingness and business interestingness, from both objective and subjective perspectives.

Questionnaire-based interview.
A series of questionnaire-based interviews may be required to support the DDID-PD-oriented knowledge discovery process at each step.

Interactive and parallel system supports. System supports such as agent-based intelligent user interfaces [12], ontology-based knowledge representation and discovery [5], a knowledge management portal, and parallel data mining supports such as processor-memory-based caching are useful for domain-driven knowledge discovery in a constrained environment and for the involvement of domain experts and knowledge in the process.

Further information about the DDID-PD process model and its main components can be found in our following work [4,6,7].
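As a rough, purely illustrative reading of the knowledge actionability component above (the field names and thresholds are assumptions, not a formula prescribed by the methodology), a mined pattern could be retained only when it clears both a technical and a business interestingness threshold set with domain experts:

def filter_actionable(mined_patterns, tech_threshold=0.7, biz_threshold=0.6):
    # keep patterns that are interesting both technically (objective measures such as
    # confidence or lift) and to the business (subjective scores elicited from experts)
    return [p for p in mined_patterns
            if p["technical_interestingness"] >= tech_threshold
            and p["business_interestingness"] >= biz_threshold]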
3 Domain-driven data mining applications
Domain-driven data mining is useful for complicated real-world data mining applications such as financial data mining [10]. Such applications are practical but challenging in the business world. Taking pair relationship mining in the ASX market as an instance, the algorithms must search across the more than 1000 companies listed in this small market. To effectively and efficiently discover trading or market surveillance evidence of main interest to market trading or surveillance, the market microstructure, dynamics, domain knowledge and the trader/regulator's preferences must be sufficiently considered.

In stock data mining, we utilize the domain-driven data mining framework DDID-PD to mine in-depth pairs of stocks, stocks and trading rules, and foreign currencies using real data. The correlation pattern mining in the stock order stream targets patterns that are interesting and actionable for stock traders. In-depth correlation mining in stock market data is aimed at finding correlations between stocks, searching correlated patterns from existing trading rules developed by financial experts in order to develop more actionable trading rules, and discovering correlated relations between trading rules and stocks.

Some of the main business problems in domain-driven data mining include:
• stock correlation mining from a collection of stocks in one or multiple markets (a minimal sketch is given after these lists);
• trading rule-stock pairs mining from a collection of stocks and a set of trading rules;
• currency pairs mining from multiple foreign currencies; and
• pairs mining between stocks and market dynamics, etc.

Some of the necessary technical issues consist of:
• high-dimension reductions to generate a small quantity of data or rule representatives from a huge data set or rule combinations, using genetic algorithms, fuzzy genetic algorithms and parallel genetic algorithms;
• human-machine-cooperated interactive refinement and parallel system supports to refine the search space and evidence candidate space based on domain-specific knowledge and objectives; and
• in-depth evidence discovery through developing effective algorithms and actionability measures to obtain the pairs of interest to trading.
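As a generic, hedged illustration of the first business problem above, stock correlation mining, pairwise correlations between return series could be screened as follows. This is not the authors' algorithm; the constraint argument merely stands in for the domain and trader/regulator constraints the methodology emphasizes, and statistics.correlation requires Python 3.10 or later.

from itertools import combinations
from statistics import correlation

def correlated_pairs(returns, threshold=0.8, constraint=lambda a, b: True):
    # returns: dict mapping stock code -> list of period returns of equal length
    # yields pairs whose Pearson correlation magnitude reaches the threshold and
    # that satisfy a caller-supplied domain constraint (e.g. sector, liquidity)
    for a, b in combinations(sorted(returns), 2):
        if not constraint(a, b):
            continue
        r = correlation(returns[a], returns[b])
        if abs(r) >= threshold:
            yield a, b, r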
4 Conclusions
Actionable knowledge discovery is taken as one of the Grand Challenges of KDD in the next 10 years. Research on this issue may change the existing situation, in which a great number of rules are mined while few of them are interesting to business, and promote the wide deployment of data mining in business. This paper has proposed domain-driven data mining, a new data mining methodology on top of data-driven data mining, to deal with the complexities faced by the extant data mining methodologies and applications. Domain-driven data mining consists of a domain-driven actionable knowledge discovery framework, referred to as Domain-Driven In-Depth Pattern Discovery (DDID-PD), and a series of domain-oriented components. It provides a systematic overview of the issues in discovering actionable knowledge, and advocates the methodology of mining actionable knowledge in a constraint-based context through human-mining system cooperation in a loop-closed iterative refinement manner. The main phases and components of the DDID-PD include almost all phases of CRISP-DM [13], but with some significant differences. For instance, (i) some new essential components, such as constraint mining, in-depth mining, and the involvement of domain experts and knowledge, are taken into the lifecycle of KDD for consideration, and (ii) in the DDID-PD, the normal steps of CRISP-DM are enhanced by dynamic cooperation with domain experts and the consideration of constraints and domain knowledge. These differences play key roles in improving existing knowledge discovery in a more realistic and reliable way.
References
[1] Aggarwal, C., Towards effective and interpretable data mining by visual interaction, ACM SIGKDD Explorations Newsletter, 3(2):11-22, 2002.
[2] Ankerst, M., Report on the SIGKDD-2002 panel the perfect data mining tool: interactive or automated? ACM SIGKDD Explorations Newsletter, 4(2):110-111, 2002.
[3] Cao, R. Dai. Human-Computer Cooperated Intelligent Information System Based on Multi-Agents, ACTA AUTOMATICA SINICA, 29(1):86-94, 2003, China (in English).
[4] Cao, L., et al., Domain-driven in-depth pattern discovery: a practical perspective. Proceeding of AusDM, 101-114, 2005.
[5] Cao, L., et al., Ontology-Based Integration of Business Intelligence. Int. J. on Web Intelligence and Agent Systems, 4(4), 2006.
[6] Cao, L., Zhang, C. Domain-driven actionable knowledge discovery in the real world, PAKDD2006, LNAI 3918, 821-830, 2006.
[7] Cao, L., Zhang, C. Domain-driven data mining: a practical methodology, International Journal of Data Warehousing and Mining, 2006.
[8] Fayyad, U., Shapiro G., Uthurusamy R., Summary from the KDD-03 panel – Data mining: the next 10 years. ACM SIGKDD Explorations Newsletter, 5(2):191-196, 2003.
[9] Han, J., Towards Human-Centered, Constraint-Based, Multi-Dimensional Data Mining. An invited talk at Univ. Minnesota, Minneapolis, Minnesota, Nov. 1999.
[10] Kovalerchuk, B. and Vityaev, E. Data Mining in Finance: Advances in Relational and Hybrid Methods, Kluwer, 2000.
[11] Lin, L., Cao, L. Mining In-Depth Patterns in Stock Market, Int. J. Intelligent System Technologies and Applications, 2006.
[12] C. Zhang, Z. Zhang, L. Cao. Agents and Data Mining: Mutual Enhancement by Integration, LNCS 3505, 50-61, Springer, 2005.
[13] http://www.crisp-dm.org
Regular Papers
Web Service Request Transformatter

Wei-Chun Chang (a), Wei-Cheng Teo (b) and Cheng-Chang Oh Yang (b)
(a) Aviation Management, CAFA, Taiwan
(b) Computer Science, National Cheng Kung University
[email protected]
Abstract. A tool designed to transform Web service descriptions is described. The fundamentals of service-oriented architecture rely heavily on the eXtensible Markup Language (XML) to describe the interfaces of web services and the access protocol. An automated process to transform user requests into an XML-based representation can facilitate service composition. We propose a tool that collects user requirements and transforms them into an XML-based document automatically. The tool is illustrated through a domain-specific task flow.

Keywords. Internet workflow, service request, Web service, task management, XML
Introduction

The usage of the Web has evolved from a simple communication medium to a sophisticated platform for remote application execution. The process starts from the input of users' needs over the Internet. Users' requests for programmatic application functionality over the Internet are called Web services (WS) [1]. WS uses a service-oriented architecture (SOA), which is a component model that inter-relates different functional units of an application. As the trends keep evolving, organizations are deploying more Web service components over the Internet. Currently, the techniques used to support the Web service environment are all based on XML, e.g. tModel, the Web Service Description Language (WSDL) and the Simple Object Access Protocol (SOAP) [1]. Service requesters can access WS components through SOAP, and service providers can dispatch their WS components using WSDL as the interface for communication information. Accordingly, XML has become the most important interface language supporting the prevalence of the next generation of Web applications.

The first stage of requesting WS is to collect and transform users' requests into an XML-based format. What is the most convenient way for general users to request WS over the Internet? It certainly is natural language description: it is easy to speak and to listen to. However, on the engineers' and computers' side, the most commonly used representations of WS information are XML-based techniques, e.g. SOAP and WSDL. Obviously, there is a gap between users' requests (presented in natural language) and the representation of service requests (presented in XML). Several research projects have tried to solve the problem by using tools to capture users' requirements and transform them into XML-based documents. However, these models have some shortcomings that urgently need to be studied in order to improve their performance, since the process is long and labor-intensive.
For instance, in WebDG [2] and IRS-II [3], users are expected to be able to describe their demands and procedures in an XML-based format, and these systems do not provide an effective user interface to assist users through the design procedure. As a result, it is very difficult to transform users' demands into formatted workflows. In workflow construction, WebDG and Model Driven Service Composition [4] do not reuse workflows in WS composition, and the models do not demarcate workflows according to their abstraction level. These problems are the key factors hindering the construction of an automated framework for WS composition. There is an urgent need for an automation tool that can capture and transform users' requirements into an XML-based task sequence. This leads to the main objective of this paper: to design an automation tool that can transform a task flow into an XML representation format used for the rest of the WS composition process.
1. Internet Workflow

The starting point of utilizing WSs is the requirements requested from customers. According to the definition from the Workflow Management Coalition (WfMC), a workflow is defined as follows [5]:

“The automation of a business process, in whole or part, during which documents, information or tasks are passed from one participant to another for action, according to a set of procedural rules.”

Based on this definition, the related information regarding each task in the process is provided in order to complete the process. Workflow management was originally confined to the automation of business processes [6]. The scope of workflow management has broadened due to the dependency of business processes on the Internet platform. The concept of workflow can be virtually extended to all areas, from science to entertainment. The main concern of the management is to provide the means to improve the quality of services, flexibility, and complex services. Services requested over the Internet are regarded as Internet-centric applications requiring some form of workflow management. A service request from a user has similar properties (e.g. task information, parameters, and corresponding programs) to a working process intended to satisfy users' needs. Therefore, a service request can be regarded as a workflow. Generally, we can apply use-case or scenario-based methods to elicit and represent the requirements. As a result, a workflow is generated as the service request (input) in the SOA. Service requests can be treated as Internet workflows that require corresponding software components to complete the task sequence requested by users. A service request can be further decomposed into a sequence of tasks. Each task requires a corresponding WS component to provide the necessary functionality. A concrete example of applying the concept of workflow to a service request is given as follows. First, a service request regarding a travel package reservation is collected from a user.
“Tom is going to make a travel package reservation over the Internet. The package includes airline, hotel, car rental, and resort reservations.”

The service request can be further formalized into a set of tasks (see Figure 1). The service request is modeled by a use case diagram; four tasks are identified in order to carry out the functionalities of the original service request.

Figure 1. Use case diagram for travel package request
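The paper does not give the exact XML schema produced by the tool, so the element and attribute names below are assumptions; this hedged sketch only illustrates how the four tasks of the travel-package request could be serialized as an XML task sequence.

import xml.etree.ElementTree as ET

# Hypothetical task sequence for the travel-package request; tag names are illustrative.
request = ET.Element("serviceRequest", name="travel_package_reservation", user="Tom")
for task in ("airline_reservation", "hotel_reservation", "car_rental", "resort_reservation"):
    ET.SubElement(request, "task", name=task)

# Serialize the tree to an XML string; a file could be written with ET.ElementTree(request).write(...)
print(ET.tostring(request, encoding="unicode"))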
2. The Architecture Design

The objective of the tool development is to transform users' requests into an XML-based format. The architecture of the tool design is illustrated by a package diagram in Figure 2. The flow of transformation starts from a Graphical User Interface (GUI) that accepts request input from users. The inputs are grouped into different sets that are supported by the domain knowledge presented in Figure 2. After collecting the inputs, the tool transforms them into an XML-based format (e.g. nodes in an XML representation) and then presents them in the Web-based interface. The XML-based format can then be managed (i.e. add, delete, modify and query) through the package “NodeManagement”. In the tool architecture, we adopted a Set Representation [7] and a Depth First Search (DFS) algorithm [7] as the fundamentals to build and maintain the tree representation. They are described in the following sections.

Figure 2. The package diagram of service request transformation tool

2.1. Set Representation

The main idea of Set Representation is to construct user requests as multiple sets and their relationships. The reason for using this data structure is to construct a tree that represents the structure of formatted service requests in an XML-based representation. The data structure can maintain useful information regarding the parent-children nodes in the tree. The definition of the Set Representation is as follows [7].

Def: A graph G consists of a set V, called the vertices of G, and, for all v ∈ V, a subset Av of V, called the set of vertices adjacent to v.
W.-C. Chang et al. / Web Service Request Transformatter
2.2. Depth First Search (DFS) algorithm While trying to construct a graphical tree to represent the transformation of service requests, the Set Representation will encounter certain level of difficulty to achieve the goal. Therefore, we adopted the DFS algorithm in order to traverse a tree structure in a Web-based interface and to enable tree manipulation. The DFS algorithm is listed as follows. DFS (Vertex V) Visited [V] = T For each W adjacent to V If (!Visited[W]) DFS (W) The algorithm is a generalization of preorder traversal. It starts from a Vertex V, and recursively traverse all vertices adjacent to V. The visiting time vertices is of traversing all tree . O ( E ), where E = Θ(V ) Figure 3. A screen shot of the tool interface for multiple 3. Experiment and Case Study XML files input To illustrate the effects of tool usage in real world cases, we first tested the tool for the XML-based content manipulation. One or multiple XML files are input and managed; the tool provides a Web-based interface to present a graphical tree and manage nodes inside. Secondly, we entered a task sequence from the command and control domain to test the tool architecture for the format transformation. The implementations of the two tests are described in the following sections. 3.1. XML File Manipulation The first test is to input one or multiple XML files and present the tree structure in the Web-based interface. The tool provides operations to merge Figure 4. XML tree structure multiple XML files into an XML file (see Figure 3) representation and manipulation and to manage (e.g. add, delete, modify, and query) functions (in grey box) nodes in the tree structure representation (see Figure 4). The tool successfully read and managed the content of an XML file and presented it in the Web-based interface.
3.2. A Specific Domain Case Study

The second experiment tests the tool by entering a scenario as a service request. Each role in the scenario can be mapped to a service component in a system. The input service request use case is described as follows. The generic command and control task cycle has five steps, as defined in [8]:

Recognize/Interpret an event – Analyze situation – Plan response – Decide – Action

A working scenario will normally require several phases (i.e., one task cycle per phase) to accomplish. For each step, a human actor and a technology are involved in the job execution. An example is listed as follows: in phase one, the Tactical Picture Supervisor (TPS) uses Radar #1 (RAD1) to monitor the situation…

Figure 5. A user-friendly input interface for workflow

The tool provides a user-friendly input interface (see Figure 5) that allows users to enter a working scenario. The case study models the response of a warship to an air-launched missile threat. The scenario consists of five phases, with each phase containing five (task, agent, technology) tuples. We used the first task, "monitor", as the test case to illustrate the functionality of the design. The preset data items are provided by domain experts who have the ability to analyze and construct the knowledge for handling the events. In the diagrams (see Figures 5 and 6), the first phase is input and an XML file is generated. The resulting XML representation is also presented on the Web-based interface for further manipulation if users request it. The content of the generated XML file is shown in Figure 6.
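As an illustration of the kind of XML-based format such a phase could be transformed into, the short Python sketch below serializes one (task, agent, technology) tuple of phase one with the standard ElementTree module. The element names and nesting are our own assumptions; the actual schema is defined by the tool's domain knowledge.

    import xml.etree.ElementTree as ET

    # Build a hypothetical node tree for the first tuple of phase one.
    phase = ET.Element("phase", id="1")
    step = ET.SubElement(phase, "step")
    ET.SubElement(step, "task").text = "monitor"
    ET.SubElement(step, "agent").text = "Tactical Picture Supervisor"
    ET.SubElement(step, "technology").text = "RAD1"

    print(ET.tostring(phase, encoding="unicode"))
    # e.g. <phase id="1"><step><task>monitor</task>...</step></phase>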
Figure 6. XML file creation and graphical tree representation

4. Conclusion and Discussion
In summary, the field of Web services has attracted a lot of research attention due to the pervasiveness of the Internet. Related techniques
have focused on building processes to compose Web services automatically. A novel transformation tool is proposed in this research. The tool is designed to transform users' requests into an XML-based representation. The main contribution is to provide a preprocessing tool for the automation of Web service composition. Several functionalities for processing the XML-based format are provided. Firstly, the tool provides operations to merge and manage existing XML files. The operations are implemented on a Web-based interface, which is also used to present the tree structure of an XML document representing a service request. Secondly, a Web-based interface is developed to transform users' requests into the XML-based format. As a result, an XML file is created and its tree structure is presented on the Web-based interface for further manipulation. The generated XML file can be used for Web service composition. Although we have demonstrated the functionalities provided by the tool to transform users' requests into an XML-based format, pre-conditions still need to be met for the transformation. Domain knowledge construction is the key factor in building the tool. So far, we depend on experts to provide the domain information needed to transform users' input through the Web-based interface. Requests from different domains require different sets of domain knowledge to support them. Generalizing the design of the tool architecture to accommodate multiple domains is a potential area that can improve the request transformation. Another issue is the input format. In the current solution, encoded components from real-world cases are used as the input from users. The communication gap between users and the automation tool is narrow but not totally eliminated. A potential solution is to provide a transformation mechanism that allows users to input their requests as natural language descriptions (i.e., in scenario format); the tool should then process the transformation automatically. The information extraction technique proposed by Cowie and Wilks [9] could be used to elicit component information from the narrative scenarios and to organize scenario task sequences using generic task templates. That will be the next step in the development of our tool.

References
[1] World Wide Web Consortium, http://www.w3.org/, 2004.
[2] B. Medjahed, A. Bouguettaya, and A. K. Elmagarmid, "Composing Web Services on the Semantic Web," The VLDB Journal, vol. 12, pp. 333-351, 2003.
[3] E. Motta, J. Domingue, L. Cabral, and M. Gaspari, "IRS-II: A Framework and Infrastructure for Semantic Web Services," presented at the 2nd International Semantic Web Conference, 2003.
[4] B. Orriens, J. Yang, and M. P. Papazoglou, "Model Driven Service Composition," Lecture Notes in Computer Science, vol. 2910, pp. 75-90, 2003.
[5] Workflow Management Coalition, "Workflow Management Coalition Terminology and Glossary," Workflow Management Coalition, Hampshire, UK, Feb. 1999.
[6] D. C. Marinescu, Internet Based Workflow Management: Towards a Semantic Web, 1st ed., Wiley-Interscience, 2002.
[7] E. Horowitz, S. Sahni, and S. Anderson-Freed, Fundamentals of Data Structures in C, W. H. Freeman, 2002.
[8] A. G. Sutcliffe, "Requirements engineering for complex collaborative systems," presented at RE01, 5th IEEE International Symposium on Requirements Engineering, Toronto, Canada, 2001.
[9] J. Cowie and Y. Wilks, "Information extraction," in Handbook of Natural Language Processing, R. Dale, H. Moisl, and H. Somers, Eds. New York: Marcel Dekker, 2000.
A Head-Tracker Based on the Lucas-Kanade Optical Flow Algorithm

Frank LOEWENICH and Frederic MAIRE
School of Software Engineering and Data Communications, IT Faculty, Queensland University of Technology, 2 George Street, GPO Box 2434, Brisbane Q 4001, Australia
Abstract. Technology is advancing at a rapid pace, automating many everyday chores, changing the way we perform work and providing various forms of entertainment. Makers of technology, however, often do not consider the needs of the disabled in the design of their products by, for example, failing to provide alternative means of interaction with their devices. The use of computers presents a challenge to many disabled users who are not able to see graphical user interfaces, use a mouse or keyboard, or otherwise interact with standard computers. This paper introduces a head-tracker based on a modified Lucas-Kanade optical-flow algorithm for tracking head movements, eliminating the need to locate and track specific facial features. The implementation presents an alternative to the traditional mouse input device.

Keywords. Graphical user interface, auditory interaction, multi-modal interfaces, blind, visual impairment, audio, human-computer interaction, interface models, rehabilitation engineering, users with special needs, disability
Introduction

It is highly desirable to provide the disabled with an easy means of access to a standard PC that does not require specialized equipment. In particular, the access should allow easy navigation of the graphical user interface in a fashion similar to that experienced by non-disabled users. In this paper we introduce a graphical user interface navigation utility, similar in functionality to the traditional mouse pointing device, for the Windows XP operating system. Key factors driving development in the area of accessibility for users with disabilities are demographic changes and the ageing population, legislation introduced by governments, including those of the United States of America (USA) and the European Union (EU), as well as the realization of the increasing diversity of people requiring access to information technology. Our implementation of a head-tracking algorithm based on the Lucas-Kanade optical flow algorithm (henceforth referred to as LK-Tracker), as described by Vámossy, Tóth and Hirschberg (2004), avoids any specialized hardware. The system
instead relies on a common web camera, making the technology easily available to all computer users at low cost. The remainder of this paper is organized as follows: Section 1 discusses the assistive technologies available to users with a disability; Sections 2 and 3 describe our proposed system; Sections 4 and 5 provide in-depth explanations of the Haar Classifier Cascade and Lucas-Kanade algorithms, respectively; and Section 6 concludes the paper.
1. Assistive Technologies Available Today

A majority of persons with disabilities can now lead more independent lives in their communities, attend regular schools, and pursue professional careers more than ever before. Assistive technology providers have shifted their focus from viewing people with disabilities as requiring treatment and intervention to a view of the person with a disability and the minimization of obstacles to living in the community and participating in the workforce. Assistive technologies have been an important key to successful community participation. However, the rate of assistive technology non-use, abandonment and discontinuance remains high, with the average being about one third of all devices provided to consumers (Scherer, 1998, 2000, 2002; Taylor and Francis Group, 2002; World Health Organization, 2001).

ABLEDATA (2005), the assistive technology product database sponsored by the Institute on Disability and Rehabilitation Research, U.S. Department of Education, reports approximately 22,000 current products from over 2,000 different companies available to users with a disability. However, the vast majority (>95%) of the products listed are specialized hardware devices aimed at specific disabilities. These are expensive, often hard for the disabled person to handle, and only available from specialized vendors. Software solutions comprise less than 5% of the total available products and are typically aimed at specific disabilities, usually screen readers for the blind or visually impaired and learning solutions for intellectually disabled computer users.

Traditionally, software and hardware tools for visually disabled users have essentially been implemented as text-based interfaces, which return information in the form of voice synthesis or Braille in combination with keyboard control. Studies have shown the importance of preserving the intrinsic spatial constraints of GUIs (Stephanidis, 2001), while using auditory feedback to create a virtual representation of the graphical interface and its components, such as windows and icons (Tominaga & Yonekura, 1999; Mynatt & Edwards, 1992).

Ramstein et al. (1996) report that assistive technologies enabling human-computer interaction for users with a disability often require some additional hardware, which is either worn or manually operated by the user. This additional requirement adds expense and inconvenience for the user. LK-Tracker aims to use off-the-shelf hardware to interpret the user's movements to interact with the Windows XP graphical user interface.
2. The Proposed System

Development of the LK-Tracker arose out of the need to provide the disabled, who may not be able to use traditional interaction devices, with a suitable method to control the
mouse pointer in Windows XP. Tracking head movements with a web camera provides a simple solution to this problem. The system utilizes off-the-shelf hardware and software components in order to make it available to the greatest possible number of users. To further reduce implementation costs, the system utilizes software components freely available in Windows XP. The .Net Framework was chosen as the development platform, as it offers a good level of interoperability with Windows XP system components (Microsoft Corporation, 2005). We also rely on the OpenCV library from Intel Corporation (2001) for feature detection and tracking.
Figure 1. The integrated framework of our Head-Tracker
3. System Implementation

A major obstacle that had to be overcome was the integration of the OpenCV library (unmanaged code) into a managed (.Net) project. Unfortunately, direct integration of the OpenCV library was not possible. OpenCV lacks the object-oriented approach and organization of C# and the .Net Framework, and the two feature incompatible models for exception handling, making complete integration almost impossible. To address this problem, we created an interface to the necessary OpenCV library functions (using the .Net Framework interoperability functionality of Platform Invoke, or PInvoke), which was compiled into a C# wrapper class.

The LK-Tracker uses a static web camera. Identification of the head is achieved using a Haar Classifier Cascade algorithm (Viola & Jones, 2001). This
algorithm has the ability to detect faces in an image. Once the user's face has been detected, the position and size of a rectangular area encompassing the face are extracted. This area is used by the Lucas-Kanade optical-flow algorithm to determine significant facial features suitable for tracking, which are highlighted in the frame with green dots (see the figures of Section 4). Tracking may be initiated at any time by the user, at which point the significant features, as determined by the Lucas-Kanade algorithm, are locked into place (prior to tracking activation they are subject to change, due to subject movements and changes in lighting conditions) and marked with red dots. The red dot coordinates from the first frame are compared to those in subsequent frames to determine head movements.
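For readers who wish to experiment with the same pipeline without the C#/.Net wrapper, the sketch below shows an equivalent flow using OpenCV's Python bindings: Haar-cascade face detection, feature selection restricted to the face rectangle, and pyramidal Lucas-Kanade tracking across frames. It is an illustrative approximation of the steps described above, not the authors' code; the cascade file name and camera index are assumptions, and it assumes a face is found in the first frame.

    import cv2

    cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
    cam = cv2.VideoCapture(0)

    ok, frame = cam.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Detect the face and restrict feature selection to its bounding rectangle.
    x, y, w, h = cascade.detectMultiScale(prev_gray, scaleFactor=1.3, minNeighbors=5)[0]
    mask = cv2.rectangle(prev_gray * 0, (x, y), (x + w, y + h), 255, -1)
    features = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                       qualityLevel=0.01, minDistance=5, mask=mask)

    while True:
        ok, frame = cam.read()
        if not ok or features is None or len(features) == 0:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Pyramidal Lucas-Kanade: track the locked features into the new frame.
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, features, None)
        good_new = new_pts[status.flatten() == 1].reshape(-1, 2)
        good_old = features[status.flatten() == 1].reshape(-1, 2)
        if len(good_new) == 0:
            break
        dx = float((good_new[:, 0] - good_old[:, 0]).mean())  # horizontal head movement
        dy = float((good_new[:, 1] - good_old[:, 1]).mean())  # vertical head movement
        # dx, dy could be mapped onto mouse pointer movements here.
        prev_gray, features = gray, good_new.reshape(-1, 1, 2)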
4. Haar Classifier Cascade: A Robust Face Detection Algorithm

The face detection algorithm employed by the LK-Tracker was first described by Viola & Jones in 2001. It offers a robust framework for rapid visual detection of frontal faces in grey scale images. Working with grey scale images provides a significant performance boost compared to other approaches involving color information. The algorithm classifies images based on Haar-like features rather than pixel data. These features are selected using AdaBoost. Successively more complex classifiers are put in a cascade structure, which dramatically increases the speed of the detector by focusing attention on promising regions of the image.
Figure 2. Head detection using the Haar Classifier Cascade algorithm. Accurate results are demonstrated in various lighting conditions. Green dots mark significant features identified by the Lucas-Kanade algorithm.
5. Lucas-Kanade Optical Flow: A Reliable Feature Tracking Algorithm

The optical tracking component uses the pyramidal implementation of the Lucas-Kanade optical flow algorithm, which first identifies and then tracks features in an image. These features are pixels whose spatial gradient matrices have a large enough
minimum eigenvalue. The iterative implementation of the Lucas-Kanade optical flow computation provides sufficient local tracking accuracy.
Figure 3. Feature extraction using modified Lucas-Kanade algorithm. The algorithm first identifies regions of interest in the image (subject to constraints) and tracks these regions in subsequent frames.
Figure 4. Demonstrating the low-light tracking capability of the Lucas-Kanade algorithm
6. Conclusion

While about two-thirds of the disabled computer users taking part in a recent study by the Joseph Rowntree Foundation (2004) had aids or equipment adaptations available to access a computer, almost half were experiencing problems using them. Others did not have the aids available that they felt they needed, and still others did not know what they would need to effectively access a computer and the range of services and information they could take advantage of. According to the U.S. Department of Education (2005), specially adapted hardware and software for the blind and visually impaired has always been expensive, and unfortunately this trend is continuing, if not worsening. Our implementation of an alternative to the traditional mouse pointer provides a low-cost option for users with a disability. The system has proven to be robust in various lighting conditions and, combined with its capability to exclude distractions such as busy backgrounds, promises to be useful in a number of real-world situations. Furthermore, the modular architecture of the system allows for ready integration into any number of projects requiring a head-tracking component.
References

Intel Corporation. (2001). Open Source Computer Vision Library Reference Manual. Retrieved July 14, 2005, from http://developer.intel.com
Joseph Rowntree Foundation. (2004). Findings: Does the Internet open up opportunities for disabled people? Retrieved May 24, 2005, from http://www.jrf.org.uk/knowledge/findings/socialcare/pdf/524.pdf
Microsoft Corporation. (2005). .Net Framework Developer's Guide. [Electronic version]. Microsoft Developer Network (MSDN), January 2005.
Mynatt, E. D., and Edwards, W. K. (1992). Mapping GUIs to auditory interfaces. [Electronic version]. Proceedings of the 5th annual ACM symposium on User interface software and technology (pp. 61-70). California: Symposium on User Interface Software and Technology.
Ramstein, C., Martial, O., Dufresne, A., Carignan, M., Chasse, P., and Mabilleau, P. (1996). Touching and hearing GUI's: design issues for the PC-Access system. [Electronic version]. In Proceedings of the second annual ACM conference on Assistive technologies (pp. 2-9). Vancouver: ACM SIGCAPH Conference on Assistive Technologies.
Scherer, M. J. (2000). Living in the State of Stuck: How Technology Impacts the Lives of People with Disabilities, Third Edition. Cambridge, MA. Retrieved June 7, 2005, from http://www.brooklinebooks.com/disabilities/disindex.htm
Scherer, M. J. (1998). Matching Person & Technology Model and Accompanying Assessment Instruments. Webster, NY. Retrieved June 7, 2005, from http://members.aol.com/impt97/mpt.html
Scherer, M. J. (Ed.). (2002). Assistive Technology: Matching Device and Consumer for Successful Rehabilitation. Washington, DC. Retrieved June 7, 2005, from http://www.apa.org/books/431667a.html
Stephanidis, C. (2001). User interfaces for all: concepts, methods and tools. Mahwah, NJ: Lawrence Erlbaum.
Taylor and Francis Group. (2002). Special Issue on Assistive Technology: Disability & Rehabilitation. Retrieved June 7, 2005, from http://www.tandf.co.uk/
Tominaga, H., and Yonekura, T. (1999). A Proposal of an Auditory Interface for the Virtual Three-Dimensional Space. [Electronic version]. Systems and Computers in Japan, 30(11), 77-84.
U.S. Department of Education. (2005). ABLEDATA: Assistive Technology Information: Online Database. Retrieved May 24, 2005, from http://www.abledata.com
Vámossy, Z., Tóth, A., and Hirschberg, P. (2004). PAL Based Localization Using Pyramidal Lucas-Kanade Feature Tracker. In Proceedings of the Symposium on Intelligent Systems, SISY 2004 (pp. 223-231). Subotica, Serbia and Montenegro.
Viola, P., and Jones, M. J. (2003). Robust Real-Time Face Detection. International Journal of Computer Vision, 57(2), 137-154. Kluwer Academic Publishers.
World Health Organization. (2001). International Classification of Functioning, Disability and Health. Geneva, Switzerland: Author. Retrieved June 7, 2005, from http://www.who.int/inf-pr-1999/en/note9919.html
A Fair Peer Selection Algorithm for an Ecommerce-Oriented Distributed Recommender System

Li-Tung Weng, Yue Xu, Yuefeng Li and Richi Nayak
School of Software Engineering and Data Communications, Queensland University of Technology, QLD 4001, Australia
Abstract. Most of the existing recommender systems operate within a single organization, and very often they do not have sufficient resources to generate quality recommendations. It would therefore be beneficial if the recommender systems of different organizations could cooperate, sharing their resources and recommendations. In this paper, we propose a preliminary design of a distributed recommender system that consists of multiple recommender systems from different organizations. Moreover, a peer selection algorithm is presented that allows a recommender system peer to select a set of other peers to cooperate with. The proposed selection mechanism not only ensures a high degree of user satisfaction with the generated recommendations; it also ensures that every peer is fairly treated and studied. The paper further points out how the proposed distributed recommender system and the peer selection algorithm can provide a solution to the problem of lacking resources (e.g. the cold-start problem) and enable recommender systems to provide recommendations with better novelty and quality to users.
Keywords. Recommender System, Multiagent System, Distributed System
1. Introduction

The receipt of undesirable or non-relevant information is generally referred to as information overload. Nowadays, due to the advance of Internet technology and the World Wide Web (WWW), the issue of information overload has become increasingly serious. Significant research effort is being invested in building support tools that ensure the right information is delivered to the right people at the right time. Recommender systems are one of the recent inventions intended to help humans deal with this information explosion by giving information recommendations according to their information needs [1-3]. Recommender systems are considered beneficial to commercial organizations by increasing cross-sales, building customer loyalty and making personalized promotions [1]. Consequently, a big challenge in developing modern recommender systems is to connect recommender systems with marketers [1] as well as to provide adaptability to different business models. Most of the existing recommender systems are implemented for one organization. Generally, a single organization may not possess sufficient information or data for analysis to give its customers precise and high-quality recommendations (it
is often called the “cold-start problem”) [5]. Therefore, it can be beneficial if organizations share their resources (i.e. product and customer databases) and recommendations across organizational boundaries (i.e. apply recommender systems at an inter-organizational level); more importantly, great business value might be generated during the resource sharing process among the organizations. In this paper, we describe a preliminary design of a distributed recommender system model that enables recommender systems in different organizations to share their recommendations, and we also present the algorithm we employ to solve the peer selection problem in the proposed distributed recommender system.
2. Related Work

Several studies have attempted to implement recommender systems in a decentralized fashion. Wei [6] has proposed a multi-agent based recommender system in which a recommender system is considered as a marketplace consisting of one auctioneer agent and multiple bidder agents. Each bidder agent is a recommendation algorithm capable of generating recommendations independently, and within the marketplace these bidder agents compete with each other to have their recommendations shortlisted. The task of the auctioneer agent is to incorporate the bids of the bidder agents and generate the most suitable result for the users. Essentially, Wei's approach is a hybridized recommender system designed on the concept of a multiagent system. Even though Wei's approach takes the concept of decentralized decision making into consideration, it is not a truly distributed recommender system, because it only works within a single organization. Vidal [7], on the other hand, proposed a protocol for a distributed recommender system in which agents share and exchange their preferences over a finite set of documents. Vidal's research focuses on how the agents can cooperate in order to maximize the efficiency and profitability of their interaction. However, in Vidal's paper [7] the relationship between the recommenders' and users' preferences has not been considered, and the ecommerce-related aspects of the proposed model are also absent.
3. System Design and Structure

The proposed distributed recommender system consists of multiple recommender systems (or recommender peers) belonging to different organizations. When any of these recommender peers receives a request from a user, it not only generates recommendations from its own resources, but also consults (and interacts with) other recommender peers for suggestions in order to improve the quality of its recommendation to the user. The interaction of the proposed system is modeled with a multi-agent based approach; as such, each recommender peer is considered a self-interested agent competing with the others within the system. The interaction protocol used in the system is inherited from the contract net protocol, which is well known in the field of MAS [11, 12]. Based on the protocol, each recommender peer plays two different roles: contractor agent and manager agent. When a recommender peer makes a request for recommendations from other peers, it is considered a manager agent. On the other
hand, the recommender peer that receives requests for recommendations and provides recommendations to other peers is considered a contractor agent. The roles of the manager agent and the contractor agent are depicted in Figure 1.
Figure 1. Interaction in the proposed System
The communication steps involved in the interaction are indicated by the numbers in Figure 1 and explained as follows:
1. The user sends a request for recommendations.
2. Select suitable peer recommenders based on the user's request and profile.
3. Make requests to the selected peers for recommendations.
4. Each contractor agent generates recommendations based on the request.
5. Send the recommendations back to the manager agent.
6. Synthesize and merge the recommendations from the contractor agents.
7. Produce recommendations for the user.
8. The user supplies ratings for the recommendations.
9. Based on the user ratings, evaluate the performance of the peers and update the peers' profiles.
10. Feed back and reward the peers based on their performance on the task.
11. Based on the feedback, update the manager agent's profile.

From Figure 1, it can be seen that when a recommender peer is requested to make a recommendation for a user, it acts as a manager agent. In the role of a manager agent, the recommender first generates a strategy about how and what to recommend to the user based on the user's profile and request, then chooses a set of recommender peers (in this context, they act as contractor agents) based on the strategy and the profiles of the peer recommenders, and makes requests for recommendations to these selected peers. When the selected contractor agents receive the requests, they construct and return their recommendations based on the requests received and the manager agent's profile. After the manager agent receives the recommendations returned by the contractor agents, it merges them (together with the recommendations made by itself) and returns the result to the user. Depending on the recommendations received from the manager agent, the user may explicitly or implicitly give feedback or ratings about the recommendations to the manager agent. After receiving the user's feedback, the manager agent evaluates the performance of each selected contractor agent, updates the profile of each contractor agent, and then constructs feedback and rewards for the contractor agents. Finally, the contractor agents update the manager agent's profile based on the given rewards and feedback.
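The compressed Python sketch below mirrors the manager-agent side of these steps. It is purely illustrative: Peer, the selection of the first k peers and the naive merging rule are stand-ins for the peer selection, merging and profile-update mechanisms discussed later, not the authors' implementation.

    class Peer:
        def __init__(self, name):
            self.name = name
            self.performance = []            # past feedback scores (the peer profile)

        def recommend(self, request):        # a real peer would run its own engine
            return [f"{self.name}:{request}:item{i}" for i in range(2)]

    def handle_request(request, peers, k=2):
        contractors = peers[:k]                                           # steps 2-3
        proposals = {p.name: p.recommend(request) for p in contractors}   # steps 4-5
        merged = [item for items in proposals.values() for item in items] # step 6
        return contractors, merged                                        # step 7

    def process_feedback(contractors, ratings):
        for p in contractors:                                             # steps 8-11
            p.performance.append(sum(ratings[p.name]) / len(ratings[p.name]))

    peers = [Peer("shopA"), Peer("shopB"), Peer("shopC")]
    chosen, recs = handle_request("travel books", peers)
    process_feedback(chosen, {"shopA": [0.8, 0.6], "shopB": [0.4, 0.9]})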
In order to carry out the proposed interaction described above, several tasks such as peer selection, recommendation generation, recommendation merging, and peer profile evaluation and update are involved. Due to space limits, this paper focuses only on the peer selection task. A method for selecting recommender peers is described in the next section.
4. Peer Selection

Peer selection is an essential part of the proposed distributed recommender system. As mentioned earlier, there might be a large number of recommender peers within a distributed recommender system network. It is both ineffective and unnecessary for a peer to interact with all other peers each time it needs to make a recommendation to the users. A peer agent (i.e. the manager agent) selects a subset of peer agents to make recommendations rather than consulting all other peers. The manager agent makes the selection based on its trustworthiness towards each of the other peers. Trustworthiness in this context indicates the degree of the manager agent's belief that a contractor agent will return a recommendation that will be highly rated by the users. In this paper, we consider the manager agent's trust in a contractor agent to depend on both the manager agent's selection frequency for the contractor agent and the contractor agent's past performance. In other words, if a manager agent's trust in a contractor agent is high, this means the contractor agent has stably generated good recommendations for the manager agent many times.

Let A be the set of all agents, A = {a_1, …, a_n}, let a_m ∈ A denote the manager agent, and let CA = A \ {a_m} be the set of all possible contractor agents. The manager agent keeps a record of the number of times, denoted RC_mi, that it has selected the contractor agent a_i ∈ CA. Suppose that the total number of times that a_m has made recommendation requests to other peers is N; then the frequency of a_i being selected by a_m is denoted f_mi = RC_mi / N. The current performance of a contractor agent a_i with respect to the manager agent is denoted P_mi, where a_i ∈ CA and 0 ≤ P_mi ≤ 1. The actual computation of P_mi can be complicated; in this paper we simplify P_mi to the average of the past user feedback on the contractor's recommendations. Thus, the manager agent's trust in a contractor agent a_i is computed by the formula below:

$trust_m(a_i) = P_{mi} \cdot \beta + f_{mi} \cdot (1 - \beta)$    (1)

In (1), β is the weight factor for adjusting the importance of the performance and of the degree of knowledge (the selection frequency) in the computation of the trustworthiness, where 0 ≤ β ≤ 1. Based on formula (1), a vector T_m = [trust_m(a_i) | a_i ∈ CA] is also defined to represent the manager agent's trust in all of its contractor agents.

The trust measured by formula (1) is based on the peers' past performance and selection history. There are two flaws in using only formula (1) to select peers. Firstly, the selection will be constrained to the few peers that have performed well before, and therefore the degree of novelty will be constrained too. Secondly, in the distributed recommender system all of the recommender peers operate independently
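As a purely illustrative example with invented numbers: with β = 0.7, a contractor whose average past feedback is P_mi = 0.9 but which the manager has selected in only f_mi = 0.1 of its requests obtains trust_m(a_i) = 0.9 × 0.7 + 0.1 × 0.3 = 0.66, while a moderately performing but frequently selected peer with P_mi = 0.6 and f_mi = 0.5 obtains 0.6 × 0.7 + 0.5 × 0.3 = 0.57; a large β therefore favours proven performance over mere familiarity.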
and their performance can improve or deteriorate (e.g. by changing their database or recommendation algorithm) over time without notifying other peers. Therefore, it is possible that some peers who did not perform very well before have since improved, but still never get selected. The design goal of the proposed peer selection algorithm is to ensure that all contractor agents have equal opportunities to be learnt about and trusted, in order to avoid possible suboptimal performance. Thus, it is important that the diversity of the manager agent's trust in its contractor agents be controlled within a certain level. The standard deviation of T_m is employed in this paper to measure this diversity. When the standard deviation is high, the manager agent has very uneven knowledge of and trust in the contractor agents, and the generated recommendations are likely to be suboptimal. Hence, it is important for the manager agent to improve its trust in the less trusted contractor agents. To let the manager agent learn to trust these distrusted contractor agents, it increases their selection frequency in a manner that does not overly degrade the quality of the resulting recommendations. In the proposed algorithm, two thresholds are defined: min_knowledge_threshold and max_diversity_threshold. The min_knowledge_threshold is used to determine whether the manager agent already possesses a certain level of knowledge about the other peers, so that it can take advantage of sharing recommendations with them to produce better recommendations rather than making recommendations on its own. The max_diversity_threshold is used to decide whether the manager agent's trust in the contractor agents has reached a level of diversity at which the resulting recommendations are very likely to be suboptimal. The proposed algorithm is formally
described below.

Algorithm contractor_selection
Let min_knowledge_threshold be the minimum number of interactions that the manager agent should have with all its contractor agents, where min_knowledge_threshold > 0;
let k be the maximum number of contractor agents to be selected each time;
let max_diversity_threshold be the maximum diversity of the manager agent's trust in its contractor agents.

    1.  for agent a_m, total_interactions = Σ_{a_i ∈ CA} RC_mi
    2.  if total_interactions < min_knowledge_threshold then
    3.      initial_performance_learning()
    4.      return CA
    5.  otherwise
    6.      ω = sdv(T_m)
    7.      if ω < max_diversity_threshold then
    8.          SCA = CA sorted by T_m
    9.          return the first k contractor agents from SCA
    10.     otherwise
    11.         CA′ = {a_i ∈ CA | trust_m(a_i) is below the mean of T_m}
    12.         SCA′ = CA′ sorted by P_m
    13.         if |SCA′| ≤ k then
    14.             return SCA′
    15.         otherwise
    16.             return the first k contractor agents from SCA′
The algorithm contractor_selection shows that the selection of contractor agents is based not only on trustworthiness; we also take the balance of trustworthiness into consideration in order to achieve globally optimal performance of the interaction. The diversity of the trustworthiness is measured by the standard deviation of T_m, which is denoted ω in the algorithm. In the case of ω < max_diversity_threshold, the manager agent simply selects the k most trusted contractor agents in order to maximally satisfy the target user's information needs. However, if the diversity of the trustworthiness is too high (ω ≥ max_diversity_threshold), the manager agent will try to select contractor agents with lower trust in order to improve its relationship with them. The selection among these distrusted contractor agents is based on their performance, in order to maintain a certain level of user satisfaction. Lines 2-4 of the algorithm cover the case where the manager agent does not yet have enough interactions with the contractor agents and therefore cannot trust any of them. In this situation, the manager agent only trusts itself and makes recommendations to the users by itself. However, it still requests recommendations from all the contractor agents, and by comparing the user's feedback on its own recommendations with that on the contractor agents' recommendations, it can evaluate the performance of these contractor agents initially. This operation is denoted initial_performance_learning(), the details of which are not discussed in this paper.
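A minimal Python sketch of contractor_selection, following the description above, is given below. The data structures (dictionaries keyed by peer identifier) and the use of the mean trust as the cut-off for "distrusted" peers are our own assumptions, not the authors' code.

    import statistics

    def contractor_selection(trust, performance, selections, k,
                             min_knowledge_threshold, max_diversity_threshold):
        # trust[a], performance[a]: current trust / average feedback for peer a
        # selections[a]: number of times peer a has been chosen (RC_mi)
        contractors = list(trust)
        if sum(selections.values()) < min_knowledge_threshold:
            # lines 2-4: too little history, so consult every peer and learn
            return contractors
        diversity = statistics.pstdev(trust.values())
        if diversity < max_diversity_threshold:
            # lines 7-9: trust is balanced enough, exploit the k most trusted peers
            return sorted(contractors, key=trust.get, reverse=True)[:k]
        # lines 10-16: trust too uneven; favour below-average-trust peers, ranked
        # by past performance so user satisfaction is not degraded too much
        mean_trust = statistics.mean(trust.values())
        distrusted = [a for a in contractors if trust[a] < mean_trust]
        ranked = sorted(distrusted, key=performance.get, reverse=True)
        return ranked if len(ranked) <= k else ranked[:k]

    trust = {"peerA": 0.80, "peerB": 0.30, "peerC": 0.75}
    performance = {"peerA": 0.9, "peerB": 0.6, "peerC": 0.7}
    selections = {"peerA": 10, "peerB": 2, "peerC": 8}
    print(contractor_selection(trust, performance, selections, k=2,
                               min_knowledge_threshold=5,
                               max_diversity_threshold=0.3))   # ['peerA', 'peerC']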
5. Conclusion

In this paper we present a distributed recommender system model that is based on the contract net protocol [11, 12]. Within the proposed model, recommender system peers from different organizations are considered self-interested peers. In each round of recommendation making, the peer that receives the recommendation request from the user is regarded as the manager agent, and the other peers that receive requests from the manager agent are considered contractor agents (a recommender peer can play both the manager and contractor roles, but within one recommendation-making round it can only play one role; a recommendation-making round consists of one manager agent and multiple contractor agents). Moreover, in our proposed system, the recommender peers study and learn about each other based on the shared recommendations. This model is particularly useful in the real ecommerce domain, because the autonomy and privacy required by the businesses are ensured. Additionally, the protocol is designed so that the system not only generates recommendations that satisfy users' information needs, but also allows recommender peers to gather knowledge about each other effectively and efficiently through tasks such as peer selection, result merging and peer rewarding. In conclusion, based on effectively and intelligently sharing recommendations among multiple independent organizations, we believe the proposed distributed recommender system can provide better recommendation accuracy and novelty than a standard recommender system, as well as solve the cold-start problem.
References

[1] J. B. Schafer, J. A. Konstan, and J. Riedl, "E-Commerce Recommendation Applications," Journal of Data Mining and Knowledge Discovery, vol. 5, pp. 115-152, 2000.
[2] B. M. Sarwar, G. Karypis, J. A. Konstan, and J. Riedl, "Analysis of recommendation algorithms for e-commerce," presented at the 2nd ACM Conference on Electronic Commerce, Minneapolis, Minnesota, United States, 2000.
[3] G. Linden, B. Smith, and J. York, "Amazon.com recommendations: item-to-item collaborative filtering," IEEE Internet Computing, vol. 7, pp. 76-80, 2003.
[4] Y. Z. Wei, L. Moreau, and N. R. Jennings, "Recommender systems: a market-based design," presented at the 2nd International Joint Conference on Autonomous Agents and Multiagent Systems, Melbourne, Australia, 2003.
[5] A. I. Schein, A. Popescul, L. H. Ungar, and D. M. Pennock, "Methods and Metrics for Cold-start Recommendations," presented at the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Tampere, Finland, 2002.
[6] Y. Z. Wei, L. Moreau, and N. R. Jennings, "A market-based approach to recommender systems," ACM Transactions on Information Systems, vol. 23, pp. 227-266, 2005.
[7] J. M. Vidal, "A Protocol for a Distributed Recommender System," in Trusting Agents for Trusting Electronic Societies, R. Falcone, S. Barber, J. Sabater, and M. Singh, Eds. Springer, 2005.
[8] S. M. Bohte, E. Gerding, and H. L. Poutré, "Market-based recommendation: Agents that compete for consumer attention," ACM Transactions on Internet Technology, New York, USA, 2004.
[9] F. Brandt and T. Sandholm, "Decentralized Voting with Unconditional Privacy," presented at the International Joint Conference on Autonomous Agents and Multi-Agent Systems, Utrecht, Netherlands, 2005.
[10] Y. Z. Wei, L. Moreau, and N. R. Jennings, "Recommender Systems: A Market-Based Design," presented at the 2nd International Joint Conference on Autonomous Agents and Multiagent Systems, Melbourne, Australia, 2003.
[11] M. Wooldridge, An Introduction to Multiagent Systems. London, 2002.
[12] G. Weiss, Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence. London, England, 1999, p. 619.
[13] L.-T. Weng, Y. Xu, and Y. Li, "An Improvement to Collaborative Filtering for Recommender Systems," presented at the International Conference on Computational Intelligence for Modelling, Control and Automation, Vienna, Austria, 2005.
Intelligent Decision Making with the Semantic Web

Kevin CURRAN and Gary GUMBLETON
School of Computing and Intelligent Systems, University of Ulster, Magee Campus, Londonderry, BT47 3QL, UK

Abstract. There has been a move to annotate Web content with explicit meaning so that machines will be able to make better use of it and thus be better able to assist Web users, leading to a Semantic Web. To date, the possibilities for delivering personalized digital media experiences have been limited; however, more and more companies are entering the marketplace to offer personal digital media delivery. These companies provide products and services that offer an instant solution for digital media deployments. This paper presents a prototype which demonstrates the power of the Semantic Web, where users can connect to an existing database of media files (e.g. political interviews, news snippets, football manager pre-match interviews, etc.) and retrieve a selection of media clips. In the case of a football manager giving an interview before a game, it would be possible to also associate his interview with that of another premiership manager. Therefore, whenever a user requests an interview clip, the other clip could also be offered to the user.

Keywords. Semantic web, XML, WWW, web personalization
1. Introduction

The Semantic Web (SW) is a vision of the Web in which information is linked up more efficiently, in such a way that machines can more easily process it. Tim Berners-Lee, inventor of the World Wide Web (WWW), is credited with creating the Semantic Web. Currently a large number of people at various academic institutes across the world are working on improving and extending the system to make this goal come true. They are doing this by creating applications for the Semantic Web, producing publications and creating languages with which the Semantic Web can be published. It is generating such interest not just because Tim Berners-Lee is advocating it, but also because it aims to solve the largest problem faced by the Web at present: information is hidden away in HTML documents, which are easy for humans to extract information from but difficult for machines. The question has to be asked, however: how will it work? As already mentioned, there is a large group of people working on this, coming up with different solutions for representing the data and storing it. But there is a general consensus that it should be built out of the current technology of the Web, in general using Universal Resource Identifiers (URIs) and the eXtensible Markup Language (XML). Digital media is fast becoming an integral part of the way companies sell products, support customers, and communicate with employees on a day-to-day basis. To date, the possibilities for delivering personalized digital media experiences have been limited;
however, more and more companies are entering the marketplace to offer personal digital media delivery. These companies provide products and services that offer an instant solution for digital media deployments. Their products enable providers of audio and video content to develop innovative solutions that enable easy access. Their solution and services divisions enable corporate organizations to reap the benefit of streaming technologies and to seamlessly integrate these with traditional corporate information assets. One idea that requires further investigation is a service where users can connect to an existing database of media files (e.g. political interviews, news snippets, football manager pre-match interviews, etc.) and retrieve a selection of media clips. Imagine the case of Sir Alex Ferguson giving an interview before a game. One could then associate his interview with that of another premiership manager (say Arsene Wenger). Therefore, whenever a user requests one of these interviews, the other one could also be offered to the user by a service such as the one just described. This paper expands upon this idea.
2. The Semantic Web

The Semantic Web is not a completely new form of the WWW; instead, it is an extension of the current Web aimed at overcoming some of its disadvantages. In particular, it is concerned with making the Web more readable for computers so that they are able to interpret it better and, as a result, better able to assist us. Tim Berners-Lee, Director of the World Wide Web Consortium (W3C), states that "The Semantic Web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation" [1]. Knowledge representations (KR) are needed for the Semantic Web to function; the computers and software agents using it need access to structured information and inference rules in order to perform reasoning. The rules and information must also be powerful enough to describe complex terms. Languages like the eXtensible Markup Language (XML) and the Resource Description Framework (RDF) are already in place and helping to make this happen, along with newer languages such as DAML and OIL, which are described in more detail later on.

Ontologies are documents or files that formally define the relationships between terms. On the Web the most commonly used type of ontology is a taxonomy (subclass-superclass) hierarchy, though ontologies are not limited to this form. These taxonomies work on classes and describe the relationships between them together with their inference rules, and they play a vital part in the Semantic Web. For example, if a salary class is associated with a currency class and the currency class is then associated with a country class, the inference rules could say that if an employee gets paid in British pounds then they work in the UK. One could also have different ontologies pointing to each other, so an ontology for "person" could point to someone else's ontology that describes the same thing using different terminology; this would increase the scope of the inference rules and make them more reliable. Agents are the programs that will gather the contents of the Semantic Web, process them and exchange them with other agents. Of course, for agents to exchange information with each other there will have to be some degree of proof between them that the information they gathered is true. Digital signatures and proofs can address this.

XML allows authors of documents to create their own mark-up language, where the meaning of the
information is placed in the document. The information placed into the document is organized into "elements", which are encapsulated by start and end tags. Tag names are the words inside the start and end tags of elements; for example, dataBaseFootball is an example tag name in Figure 1. Elements can contain attributes, other elements (giving the document a hierarchical structure), or a combination of both. These tags tell the computer that (in the case of Figure 1) "Ferguson" is a "Manager", but they do not tell the computer what a "Manager" is. Since XML cannot express the meaning of the tags, this can cause problems for machine processing, as most processing applications require tag sets whose meanings have been agreed to some standard or convention.
Figure 1: Example XML structure
Figure 2: Person structure represented differently
To help with this, the "document type definition" (DTD) was created, allowing a grammar to be defined. DTDs specify elements, the context of elements, and which attributes of elements can be changed. Although DTDs allow syntax to be specified for XML documents, the semantics are still implicit: a human infers the meaning of a DTD element from the name given to it, from a comment in the DTD, or from a description in a separate document. This makes it easy to exchange XML documents between people on a small scale, as they can get together beforehand and design DTDs that meet their combined needs. But it runs into problems at scale, for example when you want to integrate your DTD with similar ones from multiple sources. One of these problems is that the same idea or structure can be represented in different ways (see Figure 2).

2.1. Resource Description Framework (RDF)

Though XML is good at letting you invent tags, it can have problems with scalability. For example, the order in which elements appear in XML is often significant, so keeping the correct order of data items on something as extensive as the Web could
prove impractical. To help solve this, RDF was developed by a number of different metadata communities under the umbrella of the W3C, with the aim of developing a flexible architecture for supporting metadata on the Web. Its history goes back to 1995, when the W3C developed PICS (Platform for Internet Content Selection), a mechanism for communicating ratings of Web pages from a server to a client (mainly with the aim of telling the client whether a particular Web page was or was not suitable for children) by using metadata. For whatever reason, PICS did not take off, but it was clear that other metadata communities could use some of the infrastructure that had been developed. So the W3C created a working group to bring together the requirements of several different metadata groups, and in 1998 it released its recommendations. In essence, RDF is a method to express and process a series of simple assertions, such as "Ora Lassila created this page (Home/Lassila)". This is called an RDF statement; illustrated, it looks like Figure 3, comprising nodes, labeled arcs and values. It consists of three parts: a subject (Resource), a predicate (Property) and an object (Literal), with their corresponding values.
Figure 3: An example RDF diagram (Triple) [2]
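The statement in Figure 3 can be written in a few lines with the Python rdflib library; this is a sketch for illustration only (the paper's own implementation uses Java, and the predicate URI below is an assumed example).

    from rdflib import Graph, Literal, URIRef

    g = Graph()
    subject = URIRef("http://www.w3.org/Home/Lassila")            # Resource
    predicate = URIRef("http://description.org/schema/Creator")   # Property (assumed URI)
    obj = Literal("Ora Lassila")                                   # Literal value

    g.add((subject, predicate, obj))      # one triple = one RDF statement
    print(g.serialize(format="xml"))      # the same statement rendered as RDF/XML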
RDF also provides a model for describing resources. The basic concept is that an object (a resource) is described through a collection of properties called an RDF Description, each of which consists of a property type and a value, as long as the object has a unique URI. In RDF, values may be text strings, numbers and so on, but they may also be other resources, which can themselves have properties of their own. RDF Schema is needed for the creation of controlled, sharable and extensible vocabularies. It extends RDF with a larger reserved vocabulary and more complex semantic constraints, allowing users to create schemas of classes and properties using RDF. RDF uses XML Namespaces in order to avoid confusion between two separate definitions of the same term, which could have conflicting meanings. RDF can easily be marked up in XML. RDF was developed at about the same time as XML, with the aim of providing a language for modeling semi-structured metadata and enabling knowledge management systems, and it has proved successful because of its simplicity. However, as the scope of RDF has expanded to include things like the SW, the limitations of RDF Schema have become clear: it lacks support for data typing and a consistent way of expressing enumerations, among other facilities. In response, the DAML (DARPA Agent Mark-up Language) effort was set up and joined forces with OIL (Ontology Inference Layer), another group working in the same area that was using constructs from frame-based AI (Artificial Intelligence), to provide more sophisticated classification. This resulted in a language able to express far more sophisticated classifications and properties of resources than RDFS.
3. A Semantic Web Application

The ontology is written in RDF using RDF VCards, because RDF VCards provide a well-defined framework for developing an ontology. To input the RDF file, the user is first given the choice of selecting the default file or, if they wish, their own file. Next, the following code is used to create the resource and add properties to it:

    // Create a resource for the person's URI and attach the VCard properties:
    // full name, birthday, and a structured name with given and family parts.
    Resource userName = model0.createResource(personURI)
        .addProperty(VCARD.FN, fullName)
        .addProperty(VCARD.BDAY, bday)
        .addProperty(VCARD.N, model0.createResource()
            .addProperty(VCARD.Given, givenName)
            .addProperty(VCARD.Family, familyName));
A resource is created and set, in this case, to the user's Web page, but as a URI can point to anything, this resource could be anything from an HTML page to an MPEG clip. After that, the properties of the resource (such things as name and age) are added with the .addProperty() method, which is passed two parameters: the first tells the system what type of property is being passed (for the purposes of this paper they are all of the VCard type) and the second is the actual value. All persons in the RDF file can be retrieved, together with the URL of the Web page containing their information, as follows:

    String queryString = "SELECT ?x, ?fname WHERE (?x, <http://www.w3.org/2001/vcard-rdf/3.0#FN>, ?fname)";

Here we want to select every URI and full name from the model that has an RDF VCard full name (FN) property. The variables for the required information are introduced with a preceding question mark "?". The query, once constructed, is passed to the query class, and once the query has been executed the results are returned. An example of searching for people by age is:

    System.out.println("Enter the lower age limit of the person you are looking for");
    int lAge = com326.readInt();
    String queryString = "SELECT ?x WHERE (?x, <info:age>, ?age)"
        + " AND ?age >= " + lAge
        + " USING info FOR ";
This returns the URIs from the model that have a value for age (age is not part of the RDF VCard specification, but extra properties can be added at will) where the age property matches or is greater than the variable "lAge" entered by the user. There are two main options for loading the file: the first is to load the default file, and the second is to load the user's own RDF VCard file. After loading, the user is presented with the main menu:
    *****************************************
    *            RDF Query System           *
    *****************************************
    * 1.  Build Your Own Ontology           *
    * 2.  Display Everybody                 *
    * 3.  Search For people by Location     *
    * 4.  Search For people by Age          *
    * 5.  Search For people by Position     *
    * 6.  Search For people by Team         *
    * 7.  Search for people by Full Name    *
    * 8.  Search for people by Surname      *
    * 9.  Search for people by Forename     *
    * 10. Exit                              *
    *****************************************
To successfully create their own ontology, the user must first select option "1" from the menu and then enter the relevant details as instructed, after which they are presented with the completed RDF VCard:

    Enter your web address : http://www.infm.ulst.ac.uk/~Gary
    Enter your first name : Gary
    Enter your Surname : Gumbleton
    Enter your Year of Birth : 1976
    Enter your Month of Birth : 9
    Your VCard is as follows:
    Gary Gumbleton
    1976-9-6
    Gary Gumbleton
4. Conclusion

The Semantic Web is a vision of what the Web of the future will be, offering a Web which is designed for navigation not just by humans but also by machines, where information will not be hidden away in text documents but will be structured in a manner that makes the discovery of documents and facts far easier. To support the metadata describing the resources, the authors of the Semantic Web proposed the use of ontologies, with the aim of providing the "semantics" for the Semantic Web. This paper has presented a system which demonstrates the Semantic Web in action.

References
[1] Berners-Lee, T., Hendler, J., Lassila, O. (2001). The Semantic Web. Scientific American, May 2001.
[2] http://www.w3.org/TR/1999/REC-rdf-syntax-19990222/
Relevance Assessment of Topic Ontology

Xujuan Zhou (a), Yuefeng Li (a), Yue Xu (a) and Raymond Lau (b)
(a) School of Software Engineering and Data Communications, Queensland University of Technology, Australia
(b) Department of Information Systems, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong SAR
Abstract. In traditional Information Retrieval (IR), user profiles are often represented by keyword/concept space vectors or by some predefined categories. Unfortunately, this data is often inadequately or incompletely interpreted. An ontology-based user profile is a newer approach. This method is able to provide richer semantic information to facilitate information retrieval processes, and it has become an important means for semantic-based information search and retrieval. Some ontology-based user profile models have been developed over the past few years. The increasing usage of this method raises the issue of effective relevance measurement for the evaluation of ontologies. In practice, it is crucial to find a good relevance assessment algorithm for measuring the quality of ontologies. Representing the user profile by a relevant topic ontology, this paper presents a new method capable of measuring the user profile more objectively, which hence has great potential to enhance IR processes.

Keywords: topic ontology, user profiles, relevance assessment, information retrieval
1. Introduction

A user profile can be an important source of metadata for IR processes. Traditionally, user profiles are often represented by keyword/concept space vectors or by some predefined categories ([5]; [12]). This method lacks an understanding of the data semantics and often leads to inadequate or incomplete interpretation. A newer approach is to use an ontology. An ontology is an explicit and formal specification of a conceptualization; it is often used to help people interact with computers, and it can provide rich semantics to facilitate IR processes by matching users' expectations of retrieval results. However, when using an ontology to represent a user profile for the IR process, a new problem arises: how can we assess the relevance of ontologies for obtaining the right information to satisfy web users' information needs? To solve this
problem, this paper focuses on studying two aspects of ontological relevance assessment for user profiles: specificity and exhaustivity.
2. Construction of Topic Ontology

An ontology is an arrangement of concepts that represents a view of the world [1]. Within an ontology, the concepts are interconnected by semantic relationships. Several researchers have built ontology-based user profiles. Kim and Chan [2] developed a user interest hierarchy using a form of hierarchical clustering on a set of Web pages visited by a user. Trajkova and Gauch [16] set up a user profile by using the vector space model to represent Web pages and to classify documents into the best-matching concepts according to a pre-defined ontology. In general, they used hierarchical relationships and did not take into account non-hierarchical relationships between the concepts within an ontology. In this paper, the user profile is constructed from the topics of a user's interest. By using the ontological approach, the user profile includes the topics' semantic relationships. Hence, this type of user profile is called a topic ontology. It is assumed that the topic ontology is constructed from some primitive objects (e.g., terms). It consists of primitive classes and compound classes. The primitive classes are the smallest concepts that cannot be assembled from other classes; however, they may be inherited by some derived concepts or their children. The compound classes can be constructed from a set of primitive classes. A base backbone and a top backbone are employed to connect patterns to each other. The base is used when constructing the linkage between primitive classes, while the top is used when constructing the linkages between compound classes. The process of constructing the topic ontology consists of two phases: 1) the base backbone construction; 2) the top backbone construction. The procedure of topic ontology construction was developed by Li ([8]; [9]). It can be summarized as follows:
1) A set of terms (i.e., keywords), T = {t1, t2,…, tn}; the term frequency tf(d,t) is defined as the number of occurrences of t in d for a given document d and term t.
2) A set of term frequency pairs, P = {(t,f) | t ∈ T, f = tf(t,d) > 0}. Here, P is referred to as a pattern. Let termset(P) = {t | (t,f) ∈ P} be the termset of P. Given a pattern P = {(t1, f1), (t2, f2),…,(tr, fr)}, its normal form {(t1, w1), (t2, w2),…,(tr, wr)} can be determined by the equation:
    w_i = f_i / ∑_{j=1}^{r} f_j,  for all 1 ≤ i ≤ r    (1)
3) support(P) is used to describe the extent to which the pattern is discussed in the training set: the greater the support, the more important the pattern.
4) The topic ontology is represented by O. It consists of a set of patterns, O = {P1,…, Pn}. There are some relations between patterns: if P1 is a subset of P2, then a "part-of" relationship holds between these two patterns; if P1 ∩ P2 ≠ ∅, then an intersect relationship exists between P1 and P2; if P1 = P2, then an "is-a" relationship exists between P1 and P2, and these two patterns should be composed to generate a new pattern, P1 ⊕ P2.
support(P1 ⊕ P2) = support(P1) + support(P2), where ⊕ is a composition operator (see [8] or [9]). The support can be normalized by support: O → [0,1], such that

    support(P) = support(P) / ∑_{Pj ∈ O} support(Pj)    (2)
5) Using some existing algorithms based on the bottom-up approach (see [10] or [11]), the hierarchy of all keywords in T can be obtained. Here, T consists of a set of clusters, Θ, where each cluster in Θ is represented as a term. Θ ⊆ T is called the set of primitive objects. Then all compound classes can be constructed from some primitive ones using the OntoMining algorithm ([8], [9]). The correlation in the top backbone of the ontology is represented by an association set from O to Θ according to [9], where β is a mapping which satisfies β : O → 2^(Θ×[0,1]) − {∅} such that β(P) = {(t1, w1), (t2, w2),…,(tr, wr)} ⊆ Θ × [0,1], and β(P) is P's normal form. An association set here maps a pattern to a termset and provides a term weight distribution for the terms in the termset.
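To make the pattern operations above concrete, the following is a minimal Python sketch (an editorial illustration, not taken from the paper; all names and the example data are assumptions) of how patterns, their normal forms (Eq. 1) and normalized supports (Eq. 2) might be computed:

    def normal_form(pattern):
        # pattern: list of (term, frequency) pairs, e.g. [("GERMAN", 2), ("VW", 2)]
        total = sum(f for _, f in pattern)
        return [(t, f / total) for t, f in pattern]       # weights sum to 1 (Eq. 1)

    def normalize_supports(raw):
        # raw: dict mapping pattern id -> raw support value
        total = sum(raw.values())
        return {pid: s / total for pid, s in raw.items()}  # Eq. 2

    patterns = {"P1": [("GERMAN", 2), ("VW", 2)],
                "P2": [("US", 2), ("ECONOM", 1), ("ESPIONAG", 1)]}
    beta = {pid: normal_form(p) for pid, p in patterns.items()}   # the association set
    support = normalize_supports({"P1": 1, "P2": 1})              # each pattern's share

Here normal_form produces exactly the weight distribution used by the association set, and normalize_supports makes the pattern supports sum to one.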
3. Relevance Assessment Study

Edge-based and node-based methods are two ways to make use of the hierarchical ontology structure. Relevance is measured by the semantic similarity between concepts in an ontology. The traditional edge-based approach calculates the distance/edge length between nodes ([3]; [14]): the shorter the distance from one node to the other, the more similar they are. The shortest or the average distance may be used when there are multiple paths. One of the main limitations of this approach is that it makes an inaccurate assumption in taxonomies exhibiting variable link densities: it assumes that nodes and links are uniformly distributed in an ontology. The newer node-based approaches [13] typically use information content measures or information on object-part relationships to determine conceptual similarity. Resnik's model is based on such a technique. The similarity between concepts is determined by the extent to which they share information: the more information two concepts share in common, the more similar they are. To assess relevance in an ontology, the approach presented here is significantly different from the edge-based and node-based approaches. This method focuses on two aspects of an ontology, i.e., specificity and exhaustivity. The development of this assessment method was inspired by Dempster-Shafer (D-S) theory, a mathematical theory of evidence.

3.1. Specificity and Exhaustivity

Specificity and exhaustivity can be defined as follows. Specificity (spe for short) describes the extent to which the pattern (or topic) focuses on what users want.
Exhaustivity (exh for short) describes the extent to which the pattern (or topic) discusses what users want. The numerical functions for measuring specificity and exhaustivity are:

    spe: 2^Θ → [0, 1], such that spe(A) = ∑_{P∈O, termset(P) ⊆ A} support(P)    (3)

    exh: 2^Θ → [0, 1], such that exh(A) = ∑_{P∈O, termset(P) ∩ A ≠ ∅} support(P)    (4)
for all A ⊆ Θ. According to Shafer [15], D-S theory is based on two ideas: obtaining degrees of belief for one question from subjective probabilities for a related question, and Dempster's rule for combining such degrees of belief when they are based on independent items of evidence. There are three important functions in D-S theory: the basic probability assignment function (bpa or m), the Belief function (Bel), and the Plausibility function (Pl). The specificity of pattern P is related to a belief function and the exhaustivity of pattern P is related to a plausibility function, respectively. According to Eqs. (3) and (4), the specificity of pattern P is expressed by all its sub-patterns and its exhaustivity is expressed by all patterns that overlap with it. A probability function prm derived from a given association set β is:
    prm(t) = ∑_{P∈O, (t,w)∈β(P)} support(P) × w    (5)
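A small Python sketch (illustrative only, not part of the original paper; the data representation is an assumption) shows how Eqs. (3)–(5) might be evaluated, with termsets represented as Python sets and the association set beta as lists of (term, weight) pairs:

    def spe(A, support, termsets):
        # Eq. (3): supports of patterns whose termset is contained in A
        return sum(s for pid, s in support.items() if termsets[pid] <= A)

    def exh(A, support, termsets):
        # Eq. (4): supports of patterns whose termset overlaps A
        return sum(s for pid, s in support.items() if termsets[pid] & A)

    def prm(t, support, beta):
        # Eq. (5): term probability induced by the association set beta
        return sum(support[pid] * w for pid, pairs in beta.items()
                   for term, w in pairs if term == t)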
3.2. Relevance Assessment Method

Web users may have different information search intentions. A user may be interested in more focused information, and her search goal is to find accurate information. In such a situation, patterns in an ontology that have a higher spe(Pi) carry more detailed information and more specific meaning, and can be considered more relevant patterns. The relevance function should be:
    relevance_spe(Pi) = spe(Pi) × prm(t)    (6)

In other cases a user may wish to find more general information. In such a case, patterns in an ontology that have a greater exh(Pi) carry more broad-spectrum information and can be considered more relevant patterns. The relevance function should be:

    relevance_exh(Pi) = exh(Pi) × prm(t)    (7)

Depending on the user's search goal, the system may choose a different method to assess the relevance of a pattern. If a user looks for a specific topic, Eq. (6) may be used; if a user only browses some general information, Eq. (7) may be selected.

3.3. Algorithm

In order to evaluate relevance theoretically, a document d is called logically relevant if there exists P ∈ O such that termset(P) ⊆ d. [P] is used to denote the covering set of P, which includes all documents d such that termset(P) ⊆ d. A method or an algorithm is called complete if it can retrieve all logically relevant documents ∪_{P∈O} [P]. To facilitate pruning patterns which have weak relevance, such as patterns that have no subset or with Pi ∩ Pj = ∅ for all Pi, Pj ∈ O, it is assumed that
every pattern has equal support: support(Pi) = 1/n, where n = |O|. According to this assumption and Eqs. (3) and (4), the minimal specificity and exhaustivity can be defined as min_spe = (1+α)/n and min_exh = (3+β)/n, where 0 < α and 0 < β. If spe(Pi) > min_spe or exh(Pi) > min_exh, then Pi will be kept in the set of patterns; otherwise Pi will be pruned. The relevance of each remaining pattern is computed using Eq. (6) or Eq. (7) according to the user's information search goal. After obtaining all patterns' relevance we can find the minimal relevance; this minimal relevance is the threshold. A document is relevant if its relevance value is greater than the threshold. The relevance of each document d in the testing set is evaluated as relevance(d) = ∑_{t∈Θ} prm(t) τ(t,d), where τ(t,d) = 1 if t ∈ d and τ(t,d) = 0 otherwise.
Two algorithms were developed, called ObtainingIntention and RelevanceAssessment. The purpose of the first algorithm is to prune patterns which have weak relevance and to find the thresholds in accordance with the user's information search intention. The second algorithm finds all the relevant documents in the testing set.

Algorithm ObtainingIntention (Θ, O, β, threshold_spe, threshold_exh)
/** Input parameters: Θ, O, β. Output parameters: threshold_spe, threshold_exh.
    α and β are experimental coefficients, with 0 < α and 0 < β **/
i)   …
     for each pattern Pi ∈ O {
         if (spe(Pi) > min_spe or exh(Pi) > min_exh) O = O;
         else O = O − {Pi};
         relevance_spe(Pi) = spe(Pi) × prm(t);
         relevance_exh(Pi) = exh(Pi) × prm(t);
     } // end for loop
iv)  // find the minimal relevance for all patterns in the set of patterns O
     threshold_spe = min_{Pi ∈ O} (relevance_spe(Pi));
     threshold_exh = min_{Pi ∈ O} (relevance_exh(Pi));
Algorithm RelevanceAssessment (threshold_spe, threshold_exh, docs, rel)
/** Input parameters: threshold_spe, threshold_exh. Output parameters: docs, rel,
    which accommodate document–relevance pairs and relevant documents, respectively **/
i)   rel = ∅; docs = ∅;                         // let rel and docs be empty
ii)  for each d {                               // for all documents in the testing set
         relevance(d) = ∑_{t∈Θ} prm(t) τ(t,d);
         docs = docs ∪ {(d, relevance(d))};
         if (a user is searching for a specific topic) threshold = threshold_spe;
         else threshold = threshold_exh;
         if (relevance(d) ≥ threshold) rel = rel ∪ {d};
     } // end for loop
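As an illustration only (this code is not from the paper, the helper names are assumptions, and Eq. (6)/(7) are interpreted here as summing prm over a pattern's terms), the two algorithms might be sketched in Python, reusing spe, exh and prm from the earlier sketch:

    def obtaining_intention(support, termsets, beta, alpha=0.5, beta_coef=0.5):
        # prune weakly relevant patterns and derive the two thresholds
        n = len(support)
        min_spe, min_exh = (1 + alpha) / n, (3 + beta_coef) / n
        kept = {pid for pid in support
                if spe(termsets[pid], support, termsets) > min_spe
                or exh(termsets[pid], support, termsets) > min_exh}
        if not kept:
            return 0.0, 0.0
        rel_spe = {pid: spe(termsets[pid], support, termsets) *
                        sum(prm(t, support, beta) for t in termsets[pid])
                   for pid in kept}
        rel_exh = {pid: exh(termsets[pid], support, termsets) *
                        sum(prm(t, support, beta) for t in termsets[pid])
                   for pid in kept}
        return min(rel_spe.values()), min(rel_exh.values())

    def relevance_assessment(threshold, docs_terms, support, beta, theta):
        # score every document and keep those at or above the chosen threshold
        docs, rel = {}, set()
        for d, terms in docs_terms.items():
            docs[d] = sum(prm(t, support, beta) for t in theta if t in terms)
            if docs[d] >= threshold:
                rel.add(d)
        return docs, rel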
3.4. Case Study

Topic number 101 of the TREC (Text Retrieval Conference, see http://trec.nist.gov/) 2002 filtering track was utilized to get a set of positive documents for the topic ontology construction. Let O = {P1, P2, P3, P4, P5, P6}, Θ = {GERMAN, VW, US, ECONOM, BILL, ESPIONAG, MAN}, and n = |O| = 6. By utilizing the ontology mining and evolution algorithms ([8]; [9]), the topic ontology will be built as follows:

Table 1. An association set from O to Θ

Pattern ID   support   β
P1           1/6       {(GERMAN, 1/2), (VW, 1/2)}
P2           1/6       {(US, 1/2), (ECONOM, 1/4), (ESPIONAG, 1/4)}
P7           1/3       {(US, 1/4), (BILL, 1/4), (ECONOM, 1/4), (ESPIONAG, 1/4)}
P8           1/3       {(GERMAN, 1/3), (MAN, 2/9), (VW, 2/9), (ESPIONAG, 2/9)}

* where P7 and P8 are generated using the composition ⊕: P7 = P3 ⊕ P4 and P8 = P5 ⊕ P6.
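As a quick illustration of Eqs. (2) and (5) on this data (the arithmetic below is an editorial check, not reproduced from the paper): the normalized supports in Table 1 sum to 1 (1/6 + 1/6 + 1/3 + 1/3 = 1), and the term probability of GERMAN, which occurs in β(P1) with weight 1/2 and in β(P8) with weight 1/3, is prm(GERMAN) = (1/6)(1/2) + (1/3)(1/3) = 7/36 ≈ 0.19.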
The next step is computing min_spe, min_exh, spe and exh to decide which patterns will be pruned: min_spe = (1+α)/6 and min_exh = (3+β)/6, where 0 < α …

… n ≥ t honest parties, which is a contradiction. So V ∑_S stat_i = V ∑_S stat_i′.
6. Conclusion
Asynchronous multi-party contract signing protocols have received attention in recent years as a compromise between efficient protocols and protocols avoiding a third party as a bottleneck of security. A new protocol for multi-party contract signing in a completely asynchronous network is presented that makes use of cryptography, specifically of verifiable signature sharing. These cryptographic protocols have practical and provably secure implementations in the "random oracle" model. The resulting asynchronous multi-party contract signing protocol is both practical and theoretically nearly optimal, because it tolerates the maximum number of corrupted parties and runs in a constant expected number of rounds. The correctness of the protocol is proved in theory.
Location Based Predictive Handoff Algorithm for Mobile Networks

Kevin CURRAN, Gary CULLEN
Faculty of Engineering, University of Ulster, Magee Campus
{kj.curran, cullen-g2}@ulster.ac.uk

Abstract
The proliferation of wireless network technologies has led to an explosion in the deployment of WiFi communications solutions. This propagation of wireless systems, coupled with users' growing familiarity with and frequency of use of these systems, has led to an increase in user expectation, whereby more and more mobile users demand the same Quality of Service (QoS) that they were accustomed to as fixed wired network users. As a result of the widespread deployment of Wi-Fi hotspots, together with the expected increase in user demands for unlimited bandwidth and unrestrained network access, pressure has been placed on these wireless networks to respond to these user expectations. Most notable in this area are the complexities involved in providing the mobile user with seamless network connectivity while traversing a mobile cellular network infrastructure. Fundamental to these issues are the latency and packet loss that occur during the handoff process when a user moves between cells: a handoff is required in order to maintain connectivity to the network while ensuring minimum disruption to ongoing sessions. This paper presents a handoff algorithm for streaming media on mobile networks that anticipates the handover procedure that occurs when a mobile device roams from one cell in the network to another. It proceeds to buffer packets from the anticipated 'new' access point and, upon successful entry into the correctly predicted target cell, promptly 'picks up' the stream with minimal losses.

Keywords – Handover, Handoff, Streaming Media, Cellular Networks, Mobile IP.
1 Introduction

Cellular systems have grown in importance over the last number of years, and this trend is set to continue as the number of users increases. The introduction of mobile computing devices such as laptops, mobile phones and personal digital assistants (PDAs), and the growth of wireless communication, are increasing the demand for mobile access (Ramjee, 2000). A central component of the overall functionality of mobile networks is the handover mechanism, which guarantees user mobility in the network: the user can move around whilst keeping their connection to the network alive. When the mobile moves from the coverage area of one cell to the coverage area of another, a handoff occurs: a new connection to/from the target cell has to be 'set up' and the connection with the old cell has to be 'torn down'. Aside from being essential, these handovers have a common negative impact on connectivity; they decrease the overall system performance due to the signalling load caused by rerouting the ongoing connection to the new cell the mobile is entering. The foundation of handoff research began in cellular Global System for Mobile Communication (GSM) networking [1]. Research in handoff optimization techniques is mainly driven by the needs of real-time video and voice-over-IP (VoIP) data transmissions. These streaming data transmissions are time-sensitive, QoS-guaranteed applications, and the 'Holy Grail' of this research activity is to provide a lossless and low-latency handoff mechanism. The internet initially rode on
the mostly analogue infrastructure of the old telecommunications networks, and technological advancements in both realms have improved the service provided to both types of users. Moreover, we find that even an amalgamation of technologies can occur, for example British Telecom's (BT's) movement into the VoIP market. Although both types of transmissions have specific characteristics that are fundamental to their respective technologies and protocols, there is an inherent overlap in some of these. One of the primary technological intersections is the similarity in the configuration of both cellular based network infrastructures. The intersecting of the cells within these networks identifies the handoff areas for each communications mode. For this reason this paper investigates the research carried out into the development of predictive handoff algorithms for both types of networks in order to unearth a more comprehensive solution for the Mobile IP handoff problem.
2 Mobile Handoff
The portable components of WLANs (Wireless Local Area Networks) are the mobile terminals that access services provided by servers which sit on the wired side of the network. These wireless LANs are connected to the wired network through layer-2 (Data Link Layer) Access Points (APs) and layer-3 (Network Layer) routers. PDA vendors are starting to incorporate IEEE 802.11b technology in today's PDAs. A notable drawback of the technology is that its physical coverage is limited to approximately 160 metres [2]. By overlapping individual cells of a wireless network the total coverage area can be amplified to provide a wider catchment zone for users. Here each cell has its own unique AP. These APs act like bridges which connect the wireless segment of the network to the wired segment. While wireless devices (or Mobile Nodes, MNs) roam around the network they connect to the cells' unique APs according to the strength of the signal arriving from the APs. For example, if an MN is moving into the overlapping area between two cells it will connect to the AP with the highest signal strength. It is the device driver in the wireless Network Interface Card (NIC) that measures the signal strength from cell to cell. Wireless NICs can run in one of two modes: Access Point/Infrastructure mode, which allows for the cellular based network mentioned above, or ad hoc/Peer-to-Peer mode, which is used for connections between mobile nodes. Mobile Internet Protocol (Mobile IP) provides enhancements to the wired IP protocol which give Mobile Nodes (MNs) the ability to move across wireless subnets without losing their connectivity to the network. This allows users to continue accessing the services on servers on the wired network irrespective of any change in the AP through which they gain network connectivity. The TCP protocol, a layer-4 protocol operating on the Transport Layer, was not modified to look after the specific issues regarding the transportation of data across a wireless network. This has led to some connectivity issues during data transfer; most notable among these issues is the fact that TCP was fundamentally designed to run on a wired network. Therefore, whenever TCP has a problem receiving acknowledgements of packets received from a node, it treats the problem as a network congestion issue (which is the most common wired network problem). It then proceeds to implement a back-off algorithm to ease the congestion on the 'perceived' wired link; from a wireless perspective, the most common problem regarding acknowledgements is that the MN has most probably lost connectivity, so when TCP backs off it actually exacerbates the problem. The issues regarding TCP on mobile networks are
slightly beyond the remit of this paper, and are discussed in detail in [3]. Mobile IP uses home and foreign agents which run on the wired side of the network; these Mobile Agents (MAs) broadcast advertisements out onto the WLAN. A generic wired and wireless network topology with which Mobile IP operates is shown in Figure 1.
Figure 1: Wireless Networks
When an MN moves from one subnet to another (foreign) subnet, it will receive Mobile IP advertisements from that subnet's FA. The MN then uses these advertisements to send a registration request to the target subnet's Foreign Agent. Authentication is then carried out on the MN and a tunnel is set up between the FA and the Home Agent (HA). This Home Agent (HA) acts as a proxy for the MN, redirecting all the MN's packets through the tunnel. Packets transmitted by the MN are initially received by the FA, which then redirects them through the tunnel to the HA, which then routes them to their required destination. This process of switching from one MA to another as an MN migrates across neighbouring wireless IP subnets is known as the Mobile IP handoff. Just before the handoff completes, while the MN is already in the new cell, the MN loses connectivity to the wired network. The length of this loss of connectivity is therefore critical to any time-sensitive applications that the MN might be receiving from servers on the wired network. This is the essence of the case for the solution to this handoff latency proposed in this paper. The effects on other, less time-critical applications such as web browsing or file transfer are less significant as regards functionality, but as mentioned earlier real-time applications such as streaming video or audio can be dramatically affected, especially regarding QoS guarantees as well as end user results. It is expected that the number of deployed WLANs (Wireless Local Area Networks) will increase in the not too distant future to accommodate plans for extending traditional wired networks with IEEE 802.11 cells in areas of high user density and relatively high mobility. Furthermore, the number of wireless users has increased dramatically in previous years and continues to grow at a considerable rate [4]. Thus, in order to maintain an acceptable level of QoS (in terms of bandwidth) the size of the WLAN coverage areas will become smaller [5], which in turn means that the rate of handoffs will increase significantly due to the smaller size of each cell within the WLAN. With the current state of technology, a Mobile Node (MN) would rely entirely on the strength of RF signals in order to determine next-cell coverage availability. Handoff procedures, and particularly those needed for handoff across heterogeneous networks, incur significant delay. As with any communication delivery mode, this degrades performance in terms of interrupting data transfer. Therefore, the performance degradation will have a noticeable effect from the user's perspective. The problem described here is a result of the limitations of the traditional handoff decision
method, which lacks awareness of the MN's position, velocity, and trajectory. This information is essential in order to prevent inefficient handoffs to discovered cells; therefore an approach that combines location information with the direction and speed of the MN with respect to the required cell is necessary in order to alleviate this issue and prevent its performance-degrading effects.
3 A Predictive Location Based Handoff Mechanism

The objective of this research study is to analyse the practicality and benefits of augmenting the traditional RF-based handoff mechanism with location based information. This is achieved by incorporating location-based evaluation methods to develop a robust location-assisted handoff decision algorithm for implementing handoff predictions for data intensive real-time streaming media transfer over wireless local area networks. The underlying hypothesis is that this can eliminate, or at least reduce, the latency and packet loss of handoffs that result in performance degradation and wasting of resources during the handoff procedure. The network on which the proposed solution was implemented is located on the college campus of Letterkenny Institute of Technology (LYIT). The location based information was parsed from a Garmin Etrex 12-channel handheld GPS (Global Positioning System) device. The servers on the wired network on which the server applications (Server A and Server B) reside have the static IP addresses [192.168.72.54] and [192.168.72.64] respectively. The client application, running on a Siemens laptop, receives the data streams from the servers, initially without the aid of the location based information being employed to pre-empt the handoff process. The server applications send a continuous stream of data to the client; the client initiates the connection to the server application on Server A (via Access Point A), and as the mobile device moves into the handoff area, the application detects this through the location-based information received from the GPS device. When the defined threshold for the handover is reached (i.e. the optimum coordinates have been reached by the MN), the client initiates a handover by dialling the remote server application Server B via the new AP (Access Point B). The number of packets received through the Server A application is measured, and the client application keeps track of the number of packets it should be receiving (after the handoff begins) from Server A via a counter thread in the client application. When the client application restores its connection to the WLAN and accesses Server B via Access Point B, a measurement of the number of packets dropped during the handover procedure can be ascertained.
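A minimal Python sketch of the kind of location-based trigger described above (purely illustrative; the boundary polygon, coordinates and client API are assumptions, not taken from the implementation):

    def in_handoff_zone(lat, lon, boundary):
        # ray-casting point-in-polygon test against the surveyed handoff boundary
        inside = False
        n = len(boundary)
        for i in range(n):
            (lat1, lon1), (lat2, lon2) = boundary[i], boundary[(i + 1) % n]
            if (lon1 > lon) != (lon2 > lon):
                if lat < (lat2 - lat1) * (lon - lon1) / (lon2 - lon1) + lat1:
                    inside = not inside
        return inside

    def on_gps_reading(lat, lon, boundary, client):
        # pre-empt the handoff: start buffering from the predicted access point
        if in_handoff_zone(lat, lon, boundary):
            client.connect_and_buffer("AccessPointB")   # hypothetical client API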
4 Testing and Analysis
This testing was carried out at a very early stage of the project; here the definitive boundaries of the overlap of Cell A and Cell B were determined. The traceroute and netstat utilities in Windows were used to ascertain which AP the MN was connected to, and georeferencing information was then recorded to establish the outer boundary of the overlapping cells. Physical markings were positioned on the ground to depict this area. This information was gathered on different days to allow for any discrepancies in the day-to-day results. After analysing the collective information, the following coordinates were found to most accurately depict the boundary: (1) N 54° 57.132, W 007° 43.254 – (2) N 54° 57.141, W 007° 43.252 – (3) N 54° 57.156, W 007°
43.264 – (4) N 54° 57.142, W 007° 43.262 – (5) N 54° 57.135, W 007° 43.258. These geographical coordinate measurements were then coded into the system to provide the system's intelligence in predicting the handoffs between the cells. The information received from the Garmin GPS device delivered different measures of accuracy. These varied across the day-to-day results that were accumulated, offering between 3 feet and 17 feet of accuracy depending on the conditions and the number of tracking satellites that were available. The accuracy of DGPS depends significantly on the number of satellites that can be tracked; the number of these satellites in the sky above the area being monitored defines how accurate the information is. An accuracy of 3 feet was accomplished during readings taken early in the morning between the hours of 9 and 11, when the Garmin device showed it was tracking 8 satellites. On other occasions, chiefly for measurements taken in the early evening or late afternoon, fewer satellites could be tracked, sometimes as few as 3 (although the Garmin manufacturers promote the fact that it can operate accurately in these conditions), which only offered accuracy of between 16 and 29 feet. It was for this reason that the system tests took place at 9.30 am, which provided geometric accuracy of between 3 and 8 feet during the experiment. The testing of the algorithmic solution provided a quantitative analysis of the system; the results of this analysis can be seen in Figure 2.
Figure 2 – Packet Loss Results
It should be noted, however, that the tests were carried out in optimal conditions, with no other users on the network at the time of testing. Future testing in a live environment will be carried out at a later date, but this is expected to have little effect on the initial results achieved. The testing involved 10 separate trials, whereby the client application, connected to Server A via Access Point A, moved across the network into the coverage area of Cell B. The network then initiated the handoff, where the client re-established the connection to Server A via the new cell, Cell B. A packet count of the number of lost packets was then calculated by the client device and this result was stored; the same experiment was then carried out using the handoff algorithm in conjunction with the client and server applications. These tests were then performed in the opposite direction (from Cell B to Cell A) and, as can be seen in Figure 2, also proved successful. The velocity of the MN travelling between
the cells was kept to a standard throughout, using the speed information from the Garmin device as a guide. The experiments provide concrete evidence of the merits of the location-based solution, offering a 48% reduction in packet loss compared to the traditional method. Considering the data intensive nature of streaming media, these results are even more significant. The average dropped packet rate of the traditional method was 13.5, demonstrating an average packet loss increase of 92% against the algorithm; this reinforces the benefits of the algorithmic solution in providing a smoother transition between cells during the handoff procedure.
5 Conclusion
This research has shown how location based information can be used to optimise the handoff process in wireless networks. In conclusion, it is believed that georeferencing techniques will provide valuable information to improve the intelligence of handover procedures in cellular networks. However, some of the benefits offered by other techniques cannot be completely discounted; an amalgamation of these approaches could offer a more comprehensive solution. For example, the introduction of wireless hot-spots in areas such as metro stations (where GPS cannot be received) could work well with a system along the lines of the route planner mentioned earlier. Here the position of the train at any one time could be calculated from the train's itinerary for that day, thereby predicting the cells that it is going to interact with before the handoff occurs. With more and more advances in the area of ubiquitous computing, an augmented system that gains its intelligence from both GPS and a user's appointment application may not be that far away.
References

1. Mouly, M. (1992). The GSM System for Mobile Communications, Palaiseau, France, 1992.
2. Erceg, V. (2001). Channel Models for Fixed Wireless Applications, IEEE 802.11 Broadband Wireless Access Working Group, IEEE 802.16.3c-01/29r4, July 16.
3. Holland, G. and Vaidya, N. H. (1999). Analysis of TCP performance over mobile ad hoc networks, Proceedings of ACM MobiCom'99, Aug.
4. Yunker, J. (2004). How Business Travellers are Shaking up the Telecoms Industry, in The Wireless Road Warrior, pp. 23-29, April.
5. Chen, J. C. (2001). Measured Performance of 5-GHz 802.11a Wireless LAN
Designing Peer-To-Peer Agent Auctions Using Object-Process Methodology

Zoheir Ezziane 1
College of Information Technology, Dubai University College, Dubai, UAE
Abstract. Peer-to-Peer (P2P) architecture is one of the most interesting topics in distributed systems and other related areas (e.g., AI, databases, etc.). Basic P2P applications have implemented only limited aspects of a real P2P environment. Meanwhile, autonomous agents, a fast-growing technology, appear to be a good candidate for many complex and dynamic problems. This paper proposes a flexible design for the creation of agent auctions in a distributed environment. Interactions between agents occur through a P2P communication protocol, reducing the role of the centralized auction process to that of an auction initiator which informs agents when a general equilibrium is reached. Agents and auctions are designed using the object-process methodology (OPM), which represents a comprehensive approach to system evolution that incorporates the static-structural and dynamic-procedural aspects of a system into a single unifying model. OPM includes a clear and concise set of symbols that form a language enabling the expression of the system's building blocks and how they relate to each other.

Keywords. Distributed agents, autonomous bidding agents, agent economies, e-auctions, multi-agent system design, object-process methodology
1. Introduction

Although Internet auctions have been used successfully [1], current auction applications still do not fully exploit the distributed nature of the Internet. They are based on centralized systems that make decisions about the auction process. One of the risks that arises when clients rely on a central server is that information centralization makes the server an attractive target for hackers, which exposes the entire system to the risk of abuse. Meanwhile, peer-to-peer (P2P) systems adopt a network-based computing style that neither excludes nor inherently depends on centralized control points. The machines that make up such systems can communicate directly between themselves without using central servers. In systems such as Freenet [2] and Napster [3], each user can be either a producer or a consumer of resources such as files or processor time slots.
1 P.O. Box 14143, Dubai, United Arab Emirates; E-mail: [email protected]
Artificial systems such as an agent-based auction system require development and support efforts throughout their entire lifecycle. Systematic specification, analysis, design, and implementation of new systems and products are becoming even more challenging and demanding, as the contradicting requirements of shorter time-to-market, rising quality, and lower cost are on the rise. These trends call for a comprehensive methodology, capable of tackling the mounting challenges that the evolution of new systems poses. Object-process methodology (OPM) is a system development and specification approach that combines the major system aspects (function, structure, and behavior) into an integrated single model. This methodology is used here to design P2P agent auctions. In this article, a proposed P2P agent auction design using OPM is described. The main motivation for investigating such a model comes from P2P's potential to remove the servers' centralized control and bottleneck effects. Moreover, this model also adopts a relatively recent modeling approach to designing agent auctions.
2. Background

2.1 Object-Process Methodology

OPM takes a fresh look at modeling complex systems that comprise humans, physical objects, and information [4]. It is an integrated approach to the study and the development of systems in general and information systems in particular. OPM has been applied in such diverse areas as modeling electronic commerce transactions [5], web applications [6] and agent design [7].
2.2. Peer-to-Peer Agent Auctions

The auction mechanism, by which self-interested traders are able to settle on a mutually agreed price for a commodity, is a key demonstration of the concept of autonomous agents working together without outside control [8,9]. Ogston and Vassiliadis [10] introduced P2P auctioning and compared a central auction with simple learning agents. Ogston et al. [11] also examined a method of clustering within a fully decentralized multi-agent system; they grouped agents with similar objectives or data, as is done in traditional clustering. Liu et al. [12] study risk-averse auction agents.
2.3. Current Design Models

Current object-oriented (OO) techniques suffer from three major inter-related problems: the encapsulation problem, the complexity management problem, and the model multiplicity problem. They do not have a mechanism for specifying stand-alone processes that are not owned by a certain object, which runs counter to the encapsulation principle. Moreover, the encapsulation principle eliminates the dynamic aspect of the system. Thus, while being a useful programming convention, this unnecessary encapsulation constraint has been a source of endless confusion. The complexity management problem is related to the way OO methods deal with the complexity that emerges from splitting systems into various models such as structure (the object/class model) and dynamics (Statecharts). Therefore, when the complexity of the system increases, no tools are available to describe the entire system.
3. P2P Agent Auction Design Using OPM

An auction takes place between an agent (the auctioneer) and a set of agents (the bidders). The objective of the auction is for the auctioneer to allocate the good to one of the bidders. Usually, the auctioneer would like to maximize the price at which the good is allocated, while the bidders want to minimize it. Achieving the auctioneer's goal is realized through some rules of encounter. Figure 1 shows a tentative design of a centralized agent auction system.
Figure 1. Tentative design for a centralized agent auction system

Winner determination is an auction protocol. First-price is used to allocate the good to the agent that bids the most, while under second-price the good is allocated at the second-highest bid. The other auction protocol specifies how bids are announced: if the bids are known and made available to all agents in the system, then the auction is open-cry, otherwise it is sealed-bid. A central system works well in the case of suitable and well-demarcated domains, such as a book and music store. However, for a large heterogeneous marketplace with many participants, several complexity difficulties arise. This is due to the amount of relevant information that has to be tracked and processed by the auctioning system in the form of relevant up-to-date knowledge, for example the buyer's interest in different products, service, quality and price. Thus the central approach involves a heavy computational complexity for information processing as well as serious objections from participants. Figure 2 depicts an abstract design of a P2P agent auction. The P2P exchange represents the environment that helps buyer agents and seller agents to communicate in order to complete a specific transaction with very little involvement from the auctioning process.
Figure 2. P2P Agent Auction
3.1 General Equilibrium Market Mechanisms

Economic theory states that there is a computable equilibrium price for a given commodity market based on the best price at which each trader in that market is willing to buy or sell. This equilibrium price is the price at which the largest number of trades will be made within the market [13]. Consider a pure exchange economy with k commodities and with a column price vector p taking values in R^k with components p1,…,pk. Each buyer agent i has a utility function u_i(x^i) which encodes its preferences over different consumption bundles x^i = [x^i_1, …, x^i_k], where x^i_g is buyer i's allocation of good g. Each buyer i also has an initial endowment w^i = [w^i_1, …, w^i_k], where w^i_g is his endowment of commodity g. Let x = (x^1, …, x^n) denote a specific allocation of all goods to all buyer agents. x is feasible if and only if ∑_{i=1}^{n} x^i ≤ ∑_{i=1}^{n} w^i. Denote the induced aggregate excess demand function s(p), which is a row vector with components s_1(p), …, s_k(p); in turn s_j(p) is a seller agent vector j with components s_j1,…, s_jk, where s_jg is the amount of good g that seller agent j sells, and its profit is p·s_j. Walrasian equilibrium refers to a situation where each buyer receives a bundle maximizing his utility and where the price vector p* is such that the final allocation is feasible; (p*, x*, s*) is a general (Walrasian) equilibrium if:

1. Each buyer agent i maximizes its preferences given the prices: x^{i*} = arg max u_i(x^i) and ∑_{i=1}^{n} x^i(p*) ≤ ∑_{i=1}^{n} w^i
2. Markets clear: s*(p*) = ∑_{i=1}^{n} (x^{i*}(p*) − w^i)
3. Each seller agent j maximizes its profits given the prices: s_j* = arg max p*·s_j

There are a few interesting corollaries that are based on the definition of equilibrium and the following Walras' Law: for all p, p·s(p) = ∑_{i=1}^{n} [p·x^i(p) − p·w^i] = 0.

Corollary 1: If demand equals supply in k−1 markets and if p_k > 0, then demand equals supply in the kth market.
Proof: If p_k > 0 and demand did not equal supply in the kth market, Walras' Law would be violated.

Corollary 2: If the price vector p* is an equilibrium price and s_j(p*) < 0 then p_j* = 0; if some good is in excess supply from seller agents in equilibrium, then it must be free.
Proof: In equilibrium s(p*) ≤ 0. Hence, since p_k ≥ 0, p*·s(p*) is a sum of non-positive terms. If s_j(p*) < 0 and p_j* > 0, then p*·s(p*) < 0, which also violates Walras' Law.

General equilibrium solutions have some interesting properties such as Pareto efficiency, coalitional stability and existence [14]. In this paper, the general equilibrium constraints reside in the auctioning process, and once the equilibrium is found, it will be broadcast to all participating agents.
3.2 P2P Search for General Equilibrium

There are centralized and decentralized strategies which can be used to search for a general equilibrium. Here a decentralized P2P agent auction is designed through the OPM paradigm. Figure 3 shows a zoom-in of the seller agent reading broadcast data from the auction and then searching for a maximum profit. All bids that maximize the profit for the seller agent will be announced to the auction. The agent will keep looking for such profit-maximizing bids until a general equilibrium is announced by the auctioning process.
Figure 3. Seller Agent
On the other hand, the buyer agent depicted in Figure 4 receives broadcast data from the auction and from different seller agents via the P2P exchange, then searches for the best tradeoffs after dealing with many seller agents. All bids that maximize the profit for the buyer agent will be announced to the auction. This process stops when the auctioning process informs the buyer agents that a general equilibrium has been reached.
Figure 4. Buyer Agent
Figure 5 shows a zoom-in of the auctioning process and its interactions with seller agents, buyer agents and the P2P exchange. This process initially sets a constant price for all goods, and then broadcasts those prices to all buyer agents and seller agents. Once the seller agents send new prices for their goods, the auction forwards them to the buyer agents. Consequently, the buyer agents and seller agents try to find the maximum profit independently of each other. Once the general equilibrium has been reached, the auction will inform its agents.
Figure 5. Auctioning Process
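A highly simplified Python sketch of this interaction loop (an editorial illustration of one possible tâtonnement-style price adjustment, not the OPM design itself; all class, method and parameter names are assumptions):

    class Auction:
        # acts only as initiator and equilibrium announcer, as in the P2P design
        def __init__(self, goods, start_price=1.0, tol=1e-3):
            self.prices = {g: start_price for g in goods}
            self.tol = tol

        def run(self, buyers, sellers, max_rounds=1000):
            for _ in range(max_rounds):
                demand = {g: sum(b.demand(g, self.prices) for b in buyers) for g in self.prices}
                supply = {g: sum(s.supply(g, self.prices) for s in sellers) for g in self.prices}
                excess = {g: demand[g] - supply[g] for g in self.prices}
                if all(abs(e) < self.tol for e in excess.values()):
                    return self.prices                     # general equilibrium reached
                for g in self.prices:                      # nudge each price toward balance
                    self.prices[g] = max(0.0, self.prices[g] + 0.01 * excess[g])
            return self.prices

Buyer and seller agents are assumed to expose demand() and supply() methods that they compute independently from the broadcast prices, mirroring the division of labour described above.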
4. Concluding Remarks and Future Work

This paper emphasizes a different framework (OPM) for designing P2P agent auctions. OPM is able to represent all the important interactions in a system, and is widely used in various fields. In the proposed design, auctions are conducted in a distributed manner, through a peer-to-peer communication protocol among several agents. What has been accomplished in this paper is a simple model used to design P2P agent auction architectures. Moreover, this work will probably contribute to clarifying design issues related to distributed agent auctions, and consequently lead not only to localizing any bottleneck but also to tackling more complex issues which were not easily detectable in centralized auction systems. As a future extension to this research, an implementation is planned to integrate the current system with the web services environment, in which agents find out about each other using XML-based discovery mechanisms, and exchange messages using XML and SOAP.
References

[1] H.G. Lee, Do electronic marketplaces lower the price of goods? Communications of the ACM, 41(1) (1998), 73-80.
[2] A. Langley, Freenet. In: A. Oram (ed.), Peer-to-peer: Harnessing the Power of Disruptive Technologies, O'Reilly and Assoc. (2001), 123-132.
[3] C. Shirky, Listening to Napster. In: A. Oram (ed.), Peer-to-peer: Harnessing the Power of Disruptive Technologies, O'Reilly and Assoc. (2001), 21-37.
[4] D. Dori, Object-Process Methodology: A Holistic Systems Paradigm, New York: Springer, 2002.
[5] D. Dori, Object-Process Methodology Applied to Modeling Credit Card Transactions, Journal of Database Management, 12(1) (2001), 2-12.
[6] I. Reinhartz-Berger and D. Dori, OPM/Web: Object-Process Methodology for Developing Web Applications, Annals of Software Engineering, 13 (2002), 141-161.
[7] Z. Ezziane, Object-Process Methodology Applied to Agent Design, 6th Int'l Conf. on Enterprise Information Systems, Porto, Portugal, April 13-17 (2004), 455-462.
[8] C. Sierra, Agent-Mediated Electronic Commerce, Autonomous Agents and Multi-Agent Systems, 9(3) (2004), 285-301.
[9] P. McBurney, R.M. Van Eijk, S. Parsons, and L. Amgoud, A Dialog Game Protocol for Agent Purchase Negotiations, Autonomous Agents and Multi-Agent Systems, 7(3) (2003), 235-273.
[10] E. Ogston and S. Vassiliadis, A Peer-to-Peer Agent Auction, 1st Int'l Joint Conf. on Autonomous Agents and Multiagent Systems: Part 1, Bologna, Italy (2002), 151-159.
[11] E. Ogston, B. Overeinder, M. Van Steen and F. Brazier, A Method for Decentralized Clustering in Large Multi-Agent Systems, 2nd Int'l Joint Conf. on Autonomous Agents and Multiagent Systems, Melbourne, Australia (2003), 789-796.
[12] Y. Liu, R. Goodwin and S. Koenig, Risk-Averse Auction Agents, 2nd Int'l Joint Conf. on Autonomous Agents and Multiagent Systems, Melbourne, Australia (2003), 353-360.
[13] T.W. Sandholm, Distributed Rational Decision Making. In: G. Weiss (ed.), Multiagent Systems, MIT Press, Cambridge, 1999, 201-258.
[14] A. Mas-Colell, M. Whinston and J.R. Green, Microeconomic Theory, Oxford, 1995.
Rough Set Model for Constraint-based Multi-dimensional Association Rule Mining

Wanzhong Yang a, Yuefeng Li b, Yue Xu b and Hang Liu c
a School of Software Engineering & Data Communications, Queensland University of Technology, Brisbane, QLD 4001, Australia; E-mail: {[email protected]}
b School of Software Engineering & Data Communications, Queensland University of Technology, Brisbane, QLD 4001, Australia; E-mail: {y2.li, yue.xu}@qut.edu.au
c X-bas Business Management Services; E-mail: {[email protected]}
Abstract. This paper presents a rough set model for constraint-based multi-dimensional association rule mining. It first overviews the progress in constraint-based multi-dimensional association rule mining. It then applies the constraints on the rough set model. To set up a decision table, it adopts user voting and thresholds on condition granules and decision granules. Finally it employs the extended random sets to generate interesting rules. It shows that this rough set model will effectively improve the quality of association rule mining by greatly reducing the attributes in the vertical direction and clearly clustering the records in the horizontal direction. To describe the association among the attributes, it constructs an ontology and presents the new concept of an association table. The construction of a tuple in an association table indicates the relationship among different levels of the ontology towards decision support.
Keywords. Data mining, Rough sets, Multi-dimensional association rules.
1. Introduction

Association rule mining plays an important role in supporting decision making in data mining. An association rule can be represented in the form X ⇒ Y, where X and Y are frequent itemsets and X ∩ Y = ∅ [1]. This formula represents the association between two itemsets. The support and the confidence are two parameters used to measure association rules. The support refers to the percentage of transactions that contain both X and Y among all transactions. The confidence is the percentage of transactions that contain Y among the transactions that contain X. To filter out the interesting data, a threshold needs to be set for each parameter. Strong association rules must satisfy both the minimum support and the minimum confidence. Constraint-based mining approaches have received more attention recently. These can obviously reduce the overlap of information and improve effectiveness and efficiency. There are 5 different types of constraints. Anti-monotone and succinct
constraints were first used in the CAP mining algorithm by Ng et al. in 1998 [4]. Leung et al. focused on succinct constraints and proposed the FPS algorithm in an FP-tree based framework in 2002 [5]. Pei et al. addressed the notion of convertible constraints and enabled deep mining in the FP-growth algorithm [6]. Monotone and anti-monotone constraints need a trade-off to reconcile their effectiveness in pruning. Bucila proposed the DualMiner algorithm to prune the search space with both kinds of constraints in 2002 [7]. Moreover, Wang et al. proposed a Divide-and-Approximate algorithm in 2005, which divides the search space and approximates the given constraints with each of the two kinds of constraints [8]. Anti-monotone constraints play a leading role in constraint-based association rule mining in most situations.

Most previous constraint-based association rule approaches focus on single-dimensional attributes. Lee et al. proposed the E-CFG and GE-CFG algorithms for mining multi-dimensional association rules based on multi-dimensional constraints and an FP-growth approach in 2005 [9]. Lee et al. handled both algorithms with anti-monotone constraints. The purpose of both algorithms is first to find the most popular brand and price range of products; secondly, the approaches answer what kinds of products will be sold together for acquiring marketing gain. The products are categorized by cost and price. For example, the products that cost $40 and are priced at $50 are the most frequent items. In this way we can find the most popular brands in this category. To reach this purpose, the algorithms first classify products with the same cost and price values into items in the product table. One transaction can involve different items. The multi-dimensional constraints include a single constraint against multiple dimensions and a conjunction and/or disjunction of multiple sub-constraints. Both algorithms include three phases, namely frequency checking, constraint checking and conditional FP-tree construction. A feature of both algorithms is that they classify the attributes only in the horizontal direction. The last phase of both algorithms is based on the conditional FP-tree construction, so a large amount of calculation is still required to solve the problems in market basket analysis. Furthermore, constraint-based multi-dimensional association rule mining produces many useless association rules.

To improve the quality of association rule mining, in this paper we present a constraint-based rough set model. It can not only cluster the records in the horizontal direction but also reduce the attributes in the vertical direction for the transaction database. It employs the extended random sets on the decision table and generates interesting rules. This obviously improves efficiency and effectiveness in market basket analysis. Moreover, the study of the vertical direction will find the dependency among attributes. The current studies only focus on the items at the bottom level; they do not consider the categories at the upper level and cannot reflect the dependency on the ontology. We apply the experiment results on the product ontology and present the new concept of an association table. Using the ontology, we will provide a global view of the data in the database.

The rest of this paper is organized as follows. We set up the user specified constraint and user voting on the pre-processed transactions in Sect. 2. In Sect. 3 we construct a decision table.
The extended random set is employed to generate interesting rules. The experiment is described in Sect. 4. Through the analysis of the experiment we present the new concept of an association table. Finally, we come up with conclusions in Sect. 5.
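For reference, a brief Python sketch (an editorial illustration only; the transactions and item names are invented) of how the support and confidence of a rule X ⇒ Y are computed over a transaction set:

    def support(itemset, transactions):
        # fraction of transactions containing every item of the itemset
        return sum(1 for t in transactions if itemset <= t) / len(transactions)

    def confidence(X, Y, transactions):
        # fraction of transactions containing X that also contain Y
        return support(X | Y, transactions) / support(X, transactions)

    transactions = [{"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"}]
    print(support({"bread", "milk"}, transactions))      # 2/3
    print(confidence({"bread"}, {"milk"}, transactions))  # 2/3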
2. User Specified Constraints

In a multiple store environment, we want to see not only which products are the most popular, but also their profits. To suit these requirements, let each product be an item in a multiple store environment. We sort all items in the transaction database by frequency in descending order and set up a threshold to choose the m most frequent items. Assume that P = {p1, p2, ..., pn} is a large set of data items, where each item can be viewed as an object in general. We use itemID to identify each unique item. Each item has multi-dimensional attributes, such as name, quantity, price, cost, etc. In a transaction database T, let tid be the identifier of each transaction and Pt a subset of all items P. Each transaction is represented by t = ⟨tid, Pt⟩, where Pt ⊆ P. The record of each transaction is thus made up of an identifier and an itemset.

To satisfy users' requirements and generate interesting rules among a large number of items, we need to set up user specified constraints. In this paper we set up the constraint to find a suitable profit range. This constraint classifies the itemsets of the products into two different groups in a rough set model. First we select an anti-monotone constraint and concentrate on the profit percentage. Assume that J * cost < price, with J > 1.0, is a constraint, where J is a threshold; for example, let J = 1.80. This constraint separates the m most frequent items into two sets, namely a condition set and a decision set. We transfer all transactions into the bitmap of a pre-processed transaction table in Figure 1. In the condition set {Item1, …, Items}, the items satisfy the constraint. The decision set {Items+1, …, Itemm} consists of the remaining items among the m most frequent items. We can then study the association between highly profitable products and less profitable products. However, there are still many items in each set.

         |  J * cost < price (J = 1.8)   |  J * cost >= price (J = 1.8)
  TID    |  Item1    ...    Items        |  Items+1    ...    Itemm
  1      |    1      0...     1          |     0       0...     0
  2      |    0      0...     0          |     1       0...     0
  ...    |
  N      |    0      1...     1          |     0       0...     0
Figure 1. The bitmap of pre-processed transactions

To find the most valuable information, at the second stage we need to remove the less important items. We set up two parameters to measure the items in each set: the first is the frequency of the items in all transactions, and the second is the profit percentage. In the condition set, we choose the top k items for each parameter in descending order, which gives two groups of items for the condition set. The itemset {item10, item38, item3, item39, item41, …, item242, …, item148, …} is in descending order of item frequency, and the itemset {item242, item148, …, item10, item38, …, item41, …, item3, …} is in descending order of profit percentage. From these two itemsets, we adopt user voting to choose the most suitable items for the condition set; during this voting we focus on the frequency of the items. The itemset {Item41, Item3, Item38, Item10} is selected as the condition set, which can be
represented as {a1, a2, …, ak}. To acquire the most valuable items for a decision set, we use the same two parameters and choose items from the transactions related to the condition set; at the user-voting step, however, we focus on low profit. We then select l items {ak+1, …, ak+l} as the decision set; in this example there are 6 items in the decision set. From the pre-processed transaction database we can then form a data pre-processing table with the attributes of both the condition set and the decision set.
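To make this pre-processing step concrete, here is a minimal sketch, assuming a toy schema in which a product table exposes cost and price fields and each transaction is a set of item identifiers; the function and field names are illustrative, not part of the paper.

```python
# Minimal sketch of the Sect. 2 pre-processing, assuming a toy schema:
# products: itemID -> {"cost": ..., "price": ...}; transactions: tid -> set of itemIDs.
from collections import Counter

def build_bitmap(transactions, products, m=300, J=1.8):
    # Keep the m most frequent items.
    freq = Counter(item for items in transactions.values() for item in items)
    top_items = [item for item, _ in freq.most_common(m)]

    # Split them by the anti-monotone profit constraint J * cost < price.
    condition_set = [i for i in top_items if J * products[i]["cost"] < products[i]["price"]]
    decision_set = [i for i in top_items if i not in condition_set]

    columns = condition_set + decision_set
    # One bit row per transaction, as in Figure 1.
    bitmap = {tid: [1 if col in items else 0 for col in columns]
              for tid, items in transactions.items()}
    return columns, condition_set, decision_set, bitmap
```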
3. Constructing Decision Tables

Let U be a non-empty finite set of objects (a set of records), and A be a set of attributes (or fields). We call a pair S = (U, A) an information table if there is a function for every attribute a ∈ A such that a: U → Va, where Va is the set of all values of a. We call Va the domain of a. Let B be a subset of A. B determines a binary relation I(B) on U such that (x, y) ∈ I(B) if and only if a(x) = a(y) for every a ∈ B, where a(x) denotes the value of attribute a for element x ∈ U. It is easy to prove that I(B) is an equivalence relation, and the family of all equivalence classes of I(B), that is, the partition determined by B, is denoted by U/I(B) or simply by U/B. The classes in U/B are referred to as B-granules or B-elementary sets. The class which contains x is called the B-granule induced by x, and is denoted by B(x). A user may use some attributes of a database, and we can divide the user-used attributes into two groups: condition attributes and decision attributes. We call the triple (U, C, D) a decision table of (U, A) if C ∩ D = ∅ and C ∪ D ⊆ A, where (U, C, D) is a set of classes and each class is the representative of a group of records.

  Class   a1   a2   a3   a4     N
    1      1    0    0    0    239
    2      0    1    0    0    200
    3      0    1    1    0     16
    4      0    1    0    1     11
    5      0    0    1    0      6
    6      0    0    0    1    147
Figure 2. A condition set

In this example, all attributes are based on the data pre-processing table. We cluster records with the same values into the same class. For example, there are 4 attributes in the condition set and 6 classes in the data pre-processing table in Figure 2. Let C = {a1, a2, a3, a4}. To acquire condition granules, we need to set up a threshold for the frequency of the items in the condition set; let the threshold be 100. After removing the 3 classes of items with low frequency, all remaining classes are empty in attribute a3, so we remove attribute a3. Then C = {a1, a2, a4} and only 3 classes are included in the condition granules. Similarly, for the decision set, there are 6 attributes and 13 classes related to the condition granules. We also set the threshold to 10 for the
frequency of the items. One attribute and four classes are then removed, leaving 5 attributes and 9 classes for the decision granules. Compared to Pawlak's method, this method can reduce the attributes in both condition and decision granules. In particular, the generation of decision granules depends on the processed condition granules, where the uninteresting attributes and classes are removed by the threshold.

  Class   a1   a2   a4   a5   a6   a7   a8   a9     N
    1      1    0    0    0    0    0    1    0     24
    2      1    0    0    0    0    1    0    0     11
    3      1    0    0    0    0    1    1    0     36
    4      1    0    0    1    0    0    1    0     11
    5      1    0    0    1    0    1    0    0     41
    6      1    0    0    1    0    1    1    0    112
    7      0    1    0    0    0    0    0    1    108
    8      0    1    0    0    1    0    0    1     19
    9      0    0    1    0    1    0    0    0    129
   10      0    0    1    0    1    0    0    1     17
Figure 3. A decision table

With the above constraints and thresholds, the decision table in Figure 3 is generated by combining the condition set and the decision set. We can also get the set of condition granules, U/C = {{1,2,3,4,5,6}, {7,8}, {9,10}}, and the set of decision granules, U/D = {{1}, {2}, {3}, {4}, {5}, {6}, {7}, {8,10}, {9}}, respectively. We then employ an extended random set to generate interesting rules (see [10] for details).
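To illustrate how the granules are obtained, the following sketch groups decision-table rows whose chosen attributes have identical values; the row layout mirrors Figure 3, and the data structure is an assumption made for the example.

```python
# Minimal sketch: group decision-table rows into condition and decision granules.
# Each row is (class_id, {attribute: bit}, count); attribute names follow Figure 3.
from collections import defaultdict

def granules(rows, attributes):
    groups = defaultdict(list)
    for class_id, bits, _count in rows:
        key = tuple(bits[a] for a in attributes)   # equal values -> same granule
        groups[key].append(class_id)
    return list(groups.values())

rows = [
    (1, {"a1": 1, "a2": 0, "a4": 0, "a5": 0, "a6": 0, "a7": 0, "a8": 1, "a9": 0}, 24),
    (2, {"a1": 1, "a2": 0, "a4": 0, "a5": 0, "a6": 0, "a7": 1, "a8": 0, "a9": 0}, 11),
    # ... remaining rows of Figure 3 ...
    (9, {"a1": 0, "a2": 0, "a4": 1, "a5": 0, "a6": 1, "a7": 0, "a8": 0, "a9": 0}, 129),
]

U_C = granules(rows, ["a1", "a2", "a4"])              # condition granules
U_D = granules(rows, ["a5", "a6", "a7", "a8", "a9"])  # decision granules
```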
4. Experiment

Compared with Pawlak's method, this rough set model greatly improves efficiency and effectiveness. We simulate the data in a multiple store environment and generate a transaction database which includes over 26,000 transaction records. In the product table there are over 5000 different products. We choose the 300 most frequent products as frequent items and set up the constraint as 1.8 * cost < price. During the user voting, we concentrate on the frequency of the items for the condition granules and on the low profit of the items for the decision granules. Finally we choose 3 items as condition attributes and 5 items as decision attributes. This model can effectively reduce the attributes in a decision table for a large dataset in the vertical direction, for both condition and decision granules. In the horizontal direction, this model clusters records with the same values into the same class, which improves the quality of the generated interesting rules. From the view of complexity, we avoid using the frequent pattern tree [3] and Apriori [2] algorithms. From the experiment result, we can find the dependency between high profit products and low profit products: in the market, some high profit products (over 80% profit) are sold together with low profit (less than 20%) products. In other words, some low profit products are very important for marketing gain.
[Figure 4 shows a product ontology rooted at Products, with departments Fruit & Vegetable (d1), Nuts/Snacks (d2), Confectionery (d3) and Service Deli (d4); commodities Seasonal Accessories (c1), Nuts Mix (c2), PIC MIX (c3), Salad (c4) and Chicken (c5); and items Potato Brushed, Shallots Bunch, Orange, Pears Pack, PIC MIX 6 PCS, MRS Crocket Salad, STEGGLES Family Half Chicken and 5 Pieces For $2 All Flavours.]
Figure 4. An ontology

To study the dependency among the products further, we construct an ontology of products for the above example, as shown in Figure 4. We explore the relationship from the bottom level to the top level using market basket analysis. From the above rough set model, we can determine the associations: (1) the item potato brushed (high profit) is often sold with three other items, namely oranges, pears and shallots; (2) the item MRS Crocket salad is often sold with the item PIC MIX 6 pieces; (3) the item STEGGLES family chicken is often sold with the item 5 pieces for $2 (all flavours).

  Class   a1         a2         a4         a5         a6         a7         a8         a9           N
    1     (1,c1,d1)  -          -          -          -          -          (1,c1,d1)  -           24
    2     (1,c1,d1)  -          -          -          -          (1,c1,d1)  -          -           11
    3     (1,c1,d1)  -          -          -          -          (1,c1,d1)  (1,c1,d1)  -           36
    4     (1,c1,d1)  -          -          (1,c1,d1)  -          -          (1,c1,d1)  -           11
    5     (1,c1,d1)  -          -          (1,c1,d1)  -          (1,c1,d1)  -          -           41
    6     (1,c1,d1)  -          -          (1,c1,d1)  -          (1,c1,d1)  (1,c1,d1)  -          112
    7     -          (1,c4,d4)  -          -          -          -          -          (1,c3,d3)  108
    8     -          (1,c4,d4)  -          -          (1,c2,d2)  -          -          (1,c3,d3)   19
    9     -          -          (1,c5,d4)  -          (1,c2,d2)  -          -          -          129
   10     -          -          (1,c5,d4)  -          (1,c2,d2)  -          -          (1,c3,d3)   17

Figure 5. An association table
To reflect the relations among different commodities and departments, we present an association table in Figure 5. In the association table, a tuple is the basic cell of the table. A tuple is made up of three parameters, such as (1, c, d). The first parameter is the same as the usual bit in the bitmap, the second parameter refers to the third-level commodity, and the last parameter refers to the second-level department. From these parameters, we can see the relations among different items; in particular, an association may occur between different levels. Moreover, in the ontology there are n-2 levels between the root and the bottom level, so we can trace the relationship of items from the bottom level to the top level. To acquire the best effect of the association mining, it is important to set the tree in the ontology to a suitable width and depth. We will discuss this further in future research.
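As a rough illustration of how such tuples could be assembled, the sketch below replaces each 1-bit of a decision-table row with a (1, commodity, department) tuple looked up from the ontology; the attribute-to-(commodity, department) mapping is read off Figure 5, and the data structures are illustrative rather than the paper's actual implementation.

```python
# Minimal sketch: turn a decision-table row into association-table tuples.
# `ontology` maps each attribute's item to its (commodity, department), following Figure 5.
ontology = {
    "a1": ("c1", "d1"), "a2": ("c4", "d4"), "a4": ("c5", "d4"),
    "a5": ("c1", "d1"), "a6": ("c2", "d2"), "a7": ("c1", "d1"),
    "a8": ("c1", "d1"), "a9": ("c3", "d3"),
}

def association_row(bits):
    """Replace every 1-bit by the tuple (1, commodity, department)."""
    return {a: (1, *ontology[a]) for a, bit in bits.items() if bit == 1}

# Class 8 of Figure 3: a2, a6 and a9 are set.
row8 = {"a1": 0, "a2": 1, "a4": 0, "a5": 0, "a6": 1, "a7": 0, "a8": 0, "a9": 1}
print(association_row(row8))   # {'a2': (1, 'c4', 'd4'), 'a6': (1, 'c2', 'd2'), 'a9': (1, 'c3', 'd3')}
```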
5. Conclusion

In this paper, we have enhanced granular computing for interpreting association rules by data pre-processing. The main contribution of this paper is to apply the anti-monotone constraint to the rough set model for multi-dimensional association rule mining. To set up a decision table, user voting and thresholds are adopted for both condition granules and decision granules. This rough set model can not only cluster records in the horizontal direction, but also reduce the attributes in the vertical direction. Compared with Pawlak's method, it greatly improves the quality of interesting rule mining in both efficiency and effectiveness. To describe the relationship further, we construct an ontology of the products and present the concept of an association table. The construction of a tuple in an association table discloses the relationship among different levels of the ontology.

References
[1] R. Agrawal, T. Imielinski, A. Swami, Mining Association Rules between Sets of Items in Large Databases, in Proceedings of ACM-SIGMOD, 1993, pp. 207-216.
[2] R. Agrawal, R. Srikant, Fast Algorithms for Mining Association Rules, in Proceedings of the International Conference on Very Large Data Bases, 1994, pp. 487-499.
[3] J. Han and M. Kamber, Data Mining: Concepts and Techniques, Morgan Kaufmann Publishers, 2000.
[4] R.T. Ng, L.V.S. Lakshmanan, J. Han, A. Pang, Exploratory Mining and Pruning Optimizations of Constrained Association Rules, in Proceedings of ACM-SIGMOD, 1998, pp. 13-24.
[5] C.K.-S. Leung, L.V.S. Lakshmanan, R.T. Ng, Exploiting Succinct Constraints using FP-trees, in ACM SIGKDD Explorations, 2002, pp. 40-49.
[6] J. Pei, J. Han, L.V.S. Lakshmanan, Mining Frequent Itemsets with Convertible Constraints, in Proceedings of the 17th International Conference on Data Engineering, 2001, pp. 433-442.
[7] C. Bucila, J. Gehrke, D. Kifer, W. White, DualMiner: A Dual-Pruning Algorithm for Itemsets with Constraints, in Proceedings of ACM-SIGKDD, 2002, pp. 42-51.
[8] K. Wang, Y. Jiang, J.X. Yu, G. Dong, J. Han, Divide-and-Approximate: A Novel Constraint Push Strategy for Iceberg Cube Mining, IEEE Transactions on Knowledge and Data Engineering, 2005, pp. 354-368.
[9] A.J.T. Lee, W. Lin, C. Wang, Mining Association Rules with Multi-dimensional Constraints, The Journal of Systems and Software, 2006, pp. 79-92.
[10] Y. Li and N. Zhong, Interpretations of Association Rules by Granular Computing, ICDM, 2003, pp. 593-596.
[11] Z. Pawlak, In pursuit of patterns in data reasoning from data, the rough set way, 3rd International Conference on Rough Sets and Current Trends in Computing, USA, 2002, pp. 1-9.
Kernel-Based Multi-Imputation for Missing Data*
Shichao ZHANG 1, Yongsong QIN 2, Xiaofeng ZHU 2, Jilian ZHANG 2, Chengqi ZHANG 1
1 Faculty of Information Technology, University of Technology Sydney
2 Department of Computer Science, Guangxi Normal University, China
Abstract. A Kernel-Based Nonparametric Multiple imputation method is proposed under MAR (Missing at Random) and MCAR (Missing Completely at Random) missing mechanisms in nonparametric regression settings. We experimentally evaluate our approach, and demonstrate that our imputation performs better than the well-known NORM algorithm. Keywords. multiple imputation, missing data, kernel function, nonparametric
1. Introduction

Missing or incomplete data is a very important problem in many fields of research, such as active media technology, opinion polls, market research surveys, mail enquiries, medical studies, and other scientific experiments. Missing data imputation is a challenging issue in machine learning and data mining (Zhang et al. 2004; Batista & Monard 2003). Many missing data analysis techniques are single-imputation methods, in which missing values are filled in by a plausible estimate such as the mean or median of that variable over the other participants. However, single imputation cannot provide valid standard errors and confidence intervals, since it ignores the uncertainty implicit in the fact that the imputed values are not the actual values. Recently, much research on missing data analysis has focused on multi-imputation techniques to address the issues in single imputation (Faris et al. 2002; Little et al. 1987; Schaffer 2002; Taylor et al. 2002; Zhang 2004). Little et al. (1987) proposed a multiple imputation procedure to replace each missing value with a set of plausible values that represent the uncertainty about the right value to impute; the multiply-imputed data sets are then analyzed using a standard procedure for complete data and the results from these analyses are combined. In this paper, a kernel-based nonparametric multiple imputation (KBNM) method is proposed under MAR (the missingness of Y depends mainly on X) and MCAR (the probability of missing a value is the same for all variables). The rest of this paper is organized as follows. Our kernel-based multiple imputation method is described in Section 2. Section 3 presents a series of experimental results on simulation models and a real-world dataset (from UCI) to compare the performance of our KBNM approach with NORM. Conclusions are given in Section 4.
This work is partially supported by Australian large ARC grants (DP0449535, DP0559536 and DP0667060), a China NSF major research Program (60496327), and a China NSF grant (60463003). Corresponding author: Shichao Zhang, Faculty of Information Technology, University of Technology Sydney PO Box 123, Broadway NSW 2007, Australia; E-mail:
[email protected].
2. KBNM Method

The theoretical underpinnings of multi-imputation are Bayesian. The central idea is to fill in the missing values by drawing from the posterior predictive distribution of the missing data given the observed data. The procedure is independently repeated M times; each filled-in dataset is analyzed separately and the results are combined following well-established rules. Rubin's multiple imputation is a three-step method for handling complex missing data.

At the first step, m (> 1) completed data sets are created by imputing the unobserved data m times using m independent draws from an imputation model, which is constructed to reasonably approximate the true distributional relationship between the unobserved data and the available information, and thus reduce the potentially very serious nonrespondent bias due to systematic differences between the observed data and the unobserved ones. In this article, we use kernel-based nonparametric imputation to impute the nonrespondents (missing values) and obtain m 'complete' data sets. Let X be a d-dimensional vector of factors and Y be a response variable influenced by X. Suppose that the pairs (Xi, Yi) satisfy the model Yi = m(Xi) + εi, where Y has nonrespondents, m(·) is an unknown function, and the Xi are i.i.d. (independent and identically distributed) random variables which are all observed. Let r = Σ_{i=1}^n δi and m = n − r (n is the sample size), and denote the sets of respondents and nonrespondents by sr and sm, respectively. Let the imputed values be

Yi(R) = m̂n(Xi) + εi*,   i ∈ sm,

where {εi*} is a simple random sample of size m drawn with replacement from {Yj − m̂n(Xj), j ∈ sr}, and m̂n is estimated from the completely observed pairs (Xi, Yi) by

m̂n(x) = Σ_{i=1}^n δi Yi K((x − Xi)/h) / (Σ_{i=1}^n δi K((x − Xi)/h) + n^(−2)).

Note that δi = 0 if Yi is missing and δi = 1 otherwise; h = hn is a bandwidth sequence that decreases toward 0 as the sample size n increases toward ∞; the term n^(−2) is introduced to keep the denominator from becoming zero; and K(·) is a symmetric probability density function, called the kernel function. In practice there is no significant difference between kernel functions, and we use the Gaussian kernel (the standard normal density K(u) = (2π)^(−1/2) exp(−u²/2)) in our experiments.

At the second step, m complete-data analyses are performed by treating each completed data set as a real complete data set, so standard complete-data procedures and software can be used directly. At the last step, the results from the m complete-data analyses are combined in a simple, appropriate way to obtain the so-called repeated-imputation inference, which properly takes into account the uncertainty in the imputed values. We use two methods of analyzing the m complete data sets to assess the performance of KBNM. Suppose that our primary interest lies in a scalar Q (in this article, we specify Q as the mean of the response variable), and that m complete datasets under the nonparametric regression model are obtained. In our first method, we construct a 100(1−α)% interval estimate for Q based on Rubin (1987; 1999), where α is the significance level; we set
α = 0.05 throughout the paper (other values of α can be chosen in practice). The other method is the RE (Relative Efficiency): the efficiency of the finite-m imputation estimator, relative to the fully efficient estimator based on an infinite number of imputations, is, in units of variance, approximately a function of m and λ (for λ and the formula, please see (Yuan 2001)). Table 1 shows the relative efficiencies for different values of m and λ based on the formula in (Yuan 2001). For cases with little missing information, only a small number of imputations is necessary for the MI analysis; for example, the RE is 0.9662 with only 10 repetitions even when the missing rate reaches 70%. Due to lack of space, the number of repetitions in our experiments is 10 throughout the paper.

Table 1. Relative efficiencies (RE) with different values of m and λ

  m     λ = 10%   20%      30%      50%      70%
  3     0.9677    0.9375   0.9091   0.8571   0.8108
  10    0.9901    0.9901   0.9852   0.9756   0.9662
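For concreteness, here is a minimal sketch of one kernel-based imputation draw under the setting above, with a one-dimensional X for simplicity; the Gaussian kernel and the residual resampling follow Section 2, while the array layout, seeding and bandwidth handling are illustrative assumptions. Repeating the draw m times yields the m completed data sets.

```python
# Minimal sketch of one kernel-based nonparametric imputation draw (Sect. 2).
import numpy as np

def kernel_impute_once(x, y, delta, h, seed=0):
    """x, y are 1-D arrays; delta[i] = 1 if y[i] is observed, 0 if it is missing."""
    rng = np.random.default_rng(seed)
    n = len(x)
    obs, mis = np.flatnonzero(delta == 1), np.flatnonzero(delta == 0)
    K = lambda u: np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)   # Gaussian kernel

    def m_hat(x0):
        # Nadaraya-Watson estimate from the respondents; n^-2 keeps the denominator non-zero.
        w = K((x0 - x[obs]) / h)
        return np.sum(w * y[obs]) / (np.sum(w) + n**-2)

    residuals = y[obs] - np.array([m_hat(xi) for xi in x[obs]])
    y_filled = y.copy()
    # Imputed value = fitted value plus a residual resampled with replacement.
    eps = rng.choice(residuals, size=len(mis), replace=True)
    y_filled[mis] = np.array([m_hat(xi) for xi in x[mis]]) + eps
    return y_filled
```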
3. Experiments

In order to show the effectiveness of the proposed method, extensive experiments were conducted on simulation models as well as on a real dataset, using a DELL Workstation PWS650 with 2 GB main memory, a 2.6 GHz CPU, and Windows 2000. In our experiments, we evaluate the performance of the proposed method in making inference for the mean (Q) of the response variable. We compare the performance of NORM (Schafer 1999) and our KBNM in terms of their coverage probabilities (CP) and average lengths of confidence intervals (AL) based on the constructed confidence intervals, and we also compare the two methods in terms of relative efficiency (RE) against Table 1. NORM is a Windows 95/98/NT program for multiple imputation (MI) of incomplete multivariate data downloaded from Schafer (1999). It creates multiple imputations by an algorithm called data augmentation (DA), a special kind of Markov chain Monte Carlo (MCMC) technique. NORM is not designed to replace well-established statistical packages like SAS or SPSS and does not perform statistical analyses (e.g. linear or logistic regression).

3.1. Simulations

In general, we may select any nonlinear model whose variables can be any attributes in the simulated experiment. In this paper, we use the nonlinear model y = x1² + sin(x2) + ε, with xi (i = 1, 2) drawn from the normal distribution N(1,1) and ε from N(0,1). The following two cases of response probabilities under the MAR (P(δ=1 | Y, X) = P(δ=1 | X)) and MCAR assumptions are considered:

Case 1 (MAR): P1(x) = P(δ=1 | X1=x1, X2=x2) = 0.8 + 0.2·|x1 − 1|·|x2 − 1| if |x1 − 1|·|x2 − 1| ≤ 1, and 0.95 elsewhere.

Case 2 (MCAR): P(x) = P(δ=1 | X1=x1, X2=x2) = 0.9 for all x1, x2.
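A small sketch of how such simulation data could be generated under the two missingness mechanisms is given below; the sample size and the vectorized form are illustrative choices, not the paper's code.

```python
# Minimal sketch: simulate y = x1^2 + sin(x2) + eps with MAR or MCAR missingness (Sect. 3.1).
import numpy as np

def simulate(n=1000, mechanism="MAR", rng=np.random.default_rng(0)):
    x1, x2 = rng.normal(1, 1, n), rng.normal(1, 1, n)
    y = x1**2 + np.sin(x2) + rng.normal(0, 1, n)

    if mechanism == "MAR":       # Case 1: response probability depends on x1, x2 only
        p = np.where(np.abs(x1 - 1) * np.abs(x2 - 1) <= 1,
                     0.8 + 0.2 * np.abs(x1 - 1) * np.abs(x2 - 1),
                     0.95)
    else:                        # Case 2 (MCAR): constant response probability
        p = np.full(n, 0.9)

    delta = rng.binomial(1, p)   # delta = 1 means y is observed
    y_obs = np.where(delta == 1, y, np.nan)
    return x1, x2, y_obs, delta
```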
Figures (1) and (3) present the CP and AL (coverage probability and average length of intervals) based on NORM and KBNM with various bandwidths h = Cn^(−1/5), a missing rate of 10% and 10 repetitions; Figures (2) and (4) present the same experiment with a missing rate of 70%. We notice that the CP/AL of NORM is not related to the value of C and remains the same across the various C in the figures.
[Figures (1)–(4): plots of CP and AL for NORM and KBNM over a range of C; (1) and (3) use a 10% missing rate, (2) and (4) a 70% missing rate.]
Figures (1) to (4) reveal the following results. When the missing rate is relatively small (for example, 10%), the confidence intervals based on KBNM under both MCAR and MAR perform almost uniformly better than those based on NORM for the various C, as shown in Figures (1) and (3): the CPs based on KBNM are closer to the nominal level 95% than those based on NORM, and the ALs based on KBNM are shorter than those based on NORM. For most values of C, these advantages of KBNM over NORM are significant. When the missing rate is relatively large (for example, 70%), the confidence intervals based on KBNM under both MCAR and MAR still perform better, but not as significantly as at the 10% missing rate. By choosing an appropriate C, we see that the performance of KBNM is significantly better than that of NORM under different response rates and missing mechanisms; we will consider the choice of C in future work. We also compare the performance of KBNM and NORM in terms of RE (Relative Efficiency). Table 2 shows the relative efficiency for different values of m (repetitions) and λ, where 10% and 70% are the missing rates.

Table 2. The comparison of RE among KBNM under MCAR and MAR, and NORM

        λ = 10%                        λ = 70%
  m     MCAR     MAR      NORM         MCAR     MAR      NORM
  3     0.9993   0.9948   0.9909       0.9899   0.9829   0.9550
  10    0.9999   0.9994   0.9952       0.9998   0.9993   0.9792
Comparing Table 2 with the theoretical values in Table 1, we can see that both KBNM and NORM perform better than the theoretical values, and that KBNM is a little better than NORM.

3.2. Application to the Abalone dataset from UCI

In order to show the effectiveness of our proposed method (KBNM) in making inference for the mean of a population, we conducted experiments on the real dataset Abalone from UCI (Blake & Merz 1998). It contains 4177 instances in total and 9 attributes per instance, with no missing values. These attributes are used to predict the age of abalone; obviously, the relation between the age and these attributes is MAR, but we also ran experiments on the dataset under MCAR since we want to show the differences among the three. We select the other attributes (except 'sex', which is nominal) to predict the age of the abalone. In this paper, we randomly select 1000 instances from the 4177 because NORM can handle at most 2000 instances. We apply the MCAR and MAR missing mechanisms to Y at missing rates of 10% and 70%, and the proposed nonparametric method is then used to fill in the missing values of Y, with 10 repetitions. Figures (5) and (7) present the CP and AL based on NORM and KBNM with various bandwidths h = Cn^(−1/5) and a missing rate of 10%; Figures (6) and (8) present the performance with a missing rate of 70%. Table 3 shows the relative efficiencies for different values of m (repetitions) and λ. Because real-world data do not fit ideal statistical distributions exactly and contain noise that distorts the distribution, the performance of KBNM fluctuates a little more than in the previous simulation study, but we can see that, for appropriate C, KBNM performs better than NORM, similar to the earlier findings.
[Figures (5)–(8): plots of CP and AL for NORM and KBNM on the Abalone data over a range of C; (5) and (7) use a 10% missing rate, (6) and (8) a 70% missing rate.]
Table 3. The comparison of RE among KBNM under MCAR and MAR, and NORM

        λ = 10%                        λ = 70%
  m     MCAR     MAR      NORM         MCAR     MAR      NORM
  3     0.9983   0.9984   0.9969       0.9905   0.9926   0.9519
  10    0.9997   0.9997   0.9991       0.9956   0.9959   0.9858
4. Summary

We have designed a Kernel-Based Nonparametric Multiple imputation (KBNM) algorithm to impute incomplete datasets under the MAR and MCAR assumptions. We have experimentally evaluated the performance of KBNM and NORM using a simulation dataset and a real dataset, measuring performance in terms of the confidence intervals and the relative efficiencies of the different imputation methods. The results show that KBNM performs much better than NORM in terms of coverage probability and average length of the confidence intervals, while their relative efficiencies are similarly good.

References

[1] Barnard, J. & Rubin, D. (1999). Small-Sample Degrees of Freedom with Multiple Imputation. Biometrika, 86: 948-955.
[2] Batista, G. and Monard, M. (2003). An Analysis of Four Missing Data Treatment Methods for Supervised Learning. Applied Artificial Intelligence, 17(5-6): 519-533.
[3] Blake, C. and Merz, C. (1998). UCI Repository of machine learning databases. [http://www.ics.uci.edu/~mlearn/MLRepository.html] Irvine, CA: University of California, Department of Information and Computer Science.
[4] Faris, P., et al. (2002). Multiple imputation versus data enhancement for dealing with missing data in observational health care outcome analyses. Journal of Clinical Epidemiology, 55: 184-191.
[5] Kahl, F., Heyden, A. and Quan, L. (2001). Minimal Projective Reconstruction Including Missing Data. IEEE Trans. Pattern Anal. Mach. Intell., 23(4): 418-424.
[6] Gessert, G. (1991). Handling Missing Data by Using Stored Truth Values. SIGMOD Record, 20(3): 30-42.
[7] Lakshminarayan, K., Harp, S., Goldman, R. and Samad, T. (1996). Imputation of Missing Data Using Machine Learning Techniques. KDD-1996: 140-145.
[8] Little, R.J.A. and Rubin, D.B. (1987). Statistical Analysis with Missing Data. New York: John Wiley & Sons, Inc.
[9] Schafer, J. (1997). Analysis of incomplete multivariate data. 1st ed. London: Chapman and Hall.
[10] Schafer, J. (1999). NORM: Multiple imputation of incomplete multivariate data under a normal model. Version 2. Available: http://www.stat.psu.edu/~jls/misoftwa.html.
[11] Schaffer, J. (2002). Dealing with Missing Data. Res. Lett. Inf. Math. Sci., 3: 153-160.
[12] Zhang, S., Zhang, C. and Yang, Q. (2004). Information Enhancement for Data Mining. IEEE Intelligent Systems, 19(2): 12-13.
[13] Taylor, J., Murray, S. & Hsu, C. (2002). Survival estimation and testing via multiple imputation. Statistics & Probability, 58: 221-232.
[14] Yuan, Y.C. (2001). Multiple imputation for missing data: concepts and new development, SAS/STAT 8.2. (see http://www.sas.com/statistics) SAS Institute Inc, Cary, NC.
Efficient Frequent Itemsets Mining by Sampling Yanchang ZHAO, Chengqi ZHANG and Shichao ZHANG Faculty of Information Technology, University of Technology, Sydney, Australia
Abstract. As the first stage of discovering association rules, frequent itemsets mining is an important and challenging task for large databases. Sampling provides an efficient way to get approximate answers in much shorter time. Based on the characteristics of frequent itemset counting, a new bound for sampling is proposed, with which fewer samples are necessary to achieve the required accuracy, so efficiency is much improved over traditional Chernoff bounds.
1. Introduction

Frequent itemset counting, the key phase in discovering association rules [1], is I/O intensive because it requires multiple scans of the database. Sampling is one of the methods used to reduce I/O costs and improve the efficiency of mining frequent itemsets. Mannila et al. proposed using sampling to improve the efficiency of finding association rules in 1994, and their analysis and experiments show that "small samples are usually quite good for finding covering sets" [2]. Toivonen proposed using a small sample to find all association rules and then verifying the results with the rest of the database [3]. Zaki et al. also used sampling to reduce I/O costs and speed up the mining of association rules [4]. All three of these methods used Chernoff bounds to derive the theoretical sample size. Zhang et al. proposed using the central limit theorem to calculate the sample size [5], which reduces the sample size by about half compared to that derived from Chernoff bounds. Chen et al. proposed a two-phase sampling-based algorithm for discovering association rules in large databases [6]: first a large initial sample is used to estimate the supports, and then these estimated supports are used to form a small final sample. Parthasarathy proposed the use of progressive sampling to determine the required sample size for association rule mining [7].

The key point of sampling lies in how to compute a sufficient sample size that guarantees the error bound, and Chernoff bounds are the most widely used. With Chernoff bounds, one method is to set the limit of error to an additive value and the other is to use a multiplicative error bound. Generally speaking, the former is more efficient than the latter, because fewer samples are required for an additive error bound. However, for very frequent itemsets a large error bound is acceptable, so that fewer samples are necessary, and a multiplicative error bound is better. On the other hand, for very infrequent itemsets a multiplicative error bound is too small and requires a large sample, so an additive error bound is preferred. In this paper, we propose a novel hybrid error bound that combines the above two error bounds. Given the same accuracy, the proposed method requires fewer samples than the two traditional methods; our analysis shows that the sample size with our method is about 1/10 of that of the traditional bounds.
2. Preliminaries

2.1. Frequent Itemsets and Association Rules

Let I = {i1, i2, …, im} be a set of m distinct items. D is a set of variable-length transactions over I, and each transaction is a subset of I. An association rule is an implication A → B, where A, B ⊂ I and A ∩ B = ∅. For an itemset A ⊂ I, its support supp(A) is the fraction of transactions in D containing A. A is a frequent itemset iff supp(A) is no less than a user-specified threshold. The support of a rule A → B is supp(A ∪ B) and its confidence is defined as supp(A ∪ B)/supp(A). Association rule mining is to find all rules with support and confidence no less than the user-specified thresholds. The procedure of mining association rules can be broken into two phases: mining frequent itemsets and then deriving association rules from the frequent itemsets. The first phase requires multiple scans of the dataset, which is I/O expensive because the dataset is usually too large to be loaded into main memory. One possible solution to this problem is sampling; the key point lies in how many samples should be drawn to assure the required accuracy [2-7], and the most widely used theoretical bounds are Chernoff bounds.
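As a small illustration of these definitions, the sketch below computes support and confidence over a toy transaction list; the transactions are invented for the example.

```python
# Minimal sketch: support and confidence as defined in Sect. 2.1.
transactions = [{"a", "b", "c"}, {"a", "c"}, {"a", "d"}, {"b", "c", "e"}]

def supp(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(A, B):
    return supp(A | B) / supp(A)

print(supp({"a", "c"}))            # 0.5
print(confidence({"a"}, {"c"}))    # supp({a,c}) / supp({a}) = 0.5 / 0.75
```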
2.2. Chernoff Bounds and Central Limit Theorem

Let X1, …, Xn be independent Poisson trials with Pr(Xi = 1) = pi. Let X = Σ_{i=1}^n Xi and μ = E[X]. For any 0 < γ ≤ 1, the following multiplicative Chernoff bounds hold [8]:

Pr(X ≥ (1 + γ)μ) ≤ e^(−μγ²/3) and Pr(X ≤ (1 − γ)μ) ≤ e^(−μγ²/2), so

Pr(|X − μ| ≥ γμ) ≤ 2e^(−μγ²/3).    (1)

The following additive Chernoff bounds hold [9]:

Pr(X ≥ μ + γ) ≤ e^(−2nγ²) and Pr(X ≤ μ − γ) ≤ e^(−2nγ²), so

Pr(|X − μ| ≥ γ) ≤ 2e^(−2nγ²).    (2)
For frequent itemset counting, assume that the sample size is n and that the error bound is ε. Let s and ŝ denote respectively the actual and estimated support of an itemset. Set X = nŝ, μ = ns and γ = ε, and the following can be derived from Formula (1):

Pr(|ŝ − s| ≥ εs) ≤ 2e^(−nsε²/3).    (3)

By setting 2e^(−nsε²/3) ≤ δ, we can get

n ≥ (3/(sε²)) ln(2/δ).    (4)
Similarly, from Formula (2) we can get

n ≥ (1/(2ε²)) ln(2/δ).    (5)

This means that if the sample size meets the requirement given in inequation (4) (or (5)), the difference between the estimated support ŝ and the actual support s will be no greater than εs (or ε) with probability at least 1 − δ. The above Chernoff bounds are widely used to calculate the sample size, but they are likely to overestimate the necessary sample size. Formula (4) is used by Zaki et al. in [4] to compute the sample size, and it is suggested to use the user-specified minimum support threshold for s. As an alternative to Chernoff bounds, the central limit theorem was used by Zhang et al. [5] to derive the sufficient sample size n ≥ Z²_{δ/2} / (4ε²), where Z_{δ/2} is the Z value above which the area under the standard normal curve is δ/2.
3. Efficient Sampling for Mining Frequent Itemsets
3.1. Basic Idea of Hybrid Bounds

Chernoff bounds are the most widely used theoretical bounds for computing the sample size. There are two ways to use Chernoff bounds: one is to set an additive error bound (Figure 1a) and the other is to set a multiplicative error bound (Figure 1b). The former is stronger than the latter, but both usually overestimate the sample size. For an itemset with large support, it is unnecessary to require very high accuracy: if its support is much larger than the support threshold, the accuracy requirement can be relaxed without any risk of mistakenly judging it as infrequent, and the larger the support, the more the accuracy requirement can be relaxed. In such cases, a multiplicative error bound seems the better choice. On the other hand, for an itemset with very small support, it is not necessary to require the error of the estimated support to be less than a certain percentage, say 10%, of its support, because it will probably not be judged frequent even if the error is 50%. For this kind of itemset, a multiplicative error bound is not a good choice, since it is stricter than necessary for judging whether the itemset is frequent. For example, assume that the support threshold is 0.1. For an itemset whose support is 0.2, even when the error bound of the estimation is 0.09, that is, 45% of its support, the itemset can still be correctly discovered as frequent. For an itemset with larger support, say 0.3, the error bound can be even larger, say 0.19, which is 63% of its support. However, for an itemset whose support is 0.001, even a 100% error bound is stricter than necessary, since an error bound of 0.09 is good enough to judge whether it is frequent. Based on the above analysis, we propose a new method for computing the theoretical bound of sampling which allows multiplicative error bounds for itemsets with large supports and additive error bounds for itemsets with small supports. With the proposed method, the theoretical bound on the sample size can be much reduced with the same accuracy of frequent itemset mining. The proposed idea is shown in Figure 1. The solid lines in the center show the actual support; the dotted lines in Figure 1a show the additive error bounds, the dash-dotted lines in Figure 1b show the multiplicative error bounds, and the thick solid lines in Figure 1c show the proposed hybrid error bounds, which are derived by combining the two traditional bounds.
[Figure 1 plots estimated support against support in three panels: (a) additive error bounds (±ε1), (b) multiplicative error bounds (±ε2·s), and (c) the hybrid error bounds, which switch from additive to multiplicative at s = τ.]

Figure 1. Error Bounds
Let s and ŝ be the actual and estimated supports of an itemset, respectively. ε1 and ε2 are respectively the additive and multiplicative error bounds, and δ is the probability bound. The proposed error bound is

Pr[|ŝ − s| ≥ ε1] ≤ δ     for 0 ≤ s ≤ τ,
Pr[|ŝ − s| ≥ ε2·s] ≤ δ   for τ < s ≤ 1,

where τ = ε1/ε2.    (6)
3.2. Theoretical Bounds

By setting ε1 = εs in inequation (3), the following can be derived:

Pr(|ŝ − s| ≥ ε1) ≤ 2e^(−nε1²/(3s)).    (7)

By setting the right-hand side to be no greater than δ, we can get

n ≥ (3s/ε1²) ln(2/δ).    (8)
It is clear from Formula (8) that, given an additive error bound ε1, the necessary sample size becomes larger as the support increases. However, for a given multiplicative error bound, it becomes smaller with the increase of support, as shown by Formula (4). From Formulae (3), (6) and (7), set ε1 = ε2·τ, that is, τ = ε1/ε2; then

Pr[|ŝ − s| ≥ ε1] ≤ 2e^(−nε1²/(3s)) ≤ 2e^(−nε1²/(3τ))                        for 0 ≤ s ≤ τ,
Pr[|ŝ − s| ≥ ε2·s] ≤ 2e^(−nsε2²/3) ≤ 2e^(−nτε2²/3) = 2e^(−nε1²/(3τ))       for τ < s ≤ 1.    (9)

From the above two inequations, it is clear that for all 0 ≤ s ≤ 1 the probability is no greater than 2e^(−nε1²/(3τ)). Setting this probability to be no greater than δ, we can get
n ≥ (3τ/ε1²) ln(2/δ) = (3/(ε1·ε2)) ln(2/δ).    (10)

Table 1. Sufficient Sample Sizes

  ε       δ        Multiplicative      Additive          Central Limit    Hybrid
                   Chernoff Bounds     Chernoff Bounds   Theorem          Chernoff Bounds
  0.01    0.01        15,894,953          26,492            16,587            3,179
  0.01    0.001       22,802,708          38,005            27,069            4,561
  0.01    0.0001      29,710,463          49,518            37,842            5,943
  0.001   0.01     1,589,495,210       2,649,159         1,658,687          317,900
  0.001   0.001    2,280,270,738       3,800,452         2,706,848          456,055
  0.001   0.0001   2,971,046,266       4,951,744         3,784,193          594,210
By comparing Formula (10) with (5), we can see that the proposed hybrid bound is stronger than the additive Chernoff bound when τ ≤ 1/6. For frequent itemset mining, the frequency is usually low, so this requirement is easy to meet. Table 1 gives the sufficient sample sizes derived from the different bounds. For the multiplicative Chernoff bounds, the minimum support threshold is set to 0.01 in inequation (4); for the hybrid Chernoff bounds, τ = ε1/ε2 is set to 0.02. From the table, it is clear that the hybrid Chernoff bounds are much stronger than the other three bounds.
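For illustration, the sketch below evaluates the four sample-size formulas compared in Table 1, using s = 0.01 for the multiplicative bound and τ = 0.02 for the hybrid bound as stated above; the use of SciPy for the normal quantile is an implementation choice, not something prescribed by the paper.

```python
# Minimal sketch: sample sizes from formulas (4), (5), (10) and the CLT-based bound.
from math import ceil, log
from scipy.stats import norm

def sample_sizes(eps, delta, s_min=0.01, tau=0.02):
    mult = 3 / (s_min * eps**2) * log(2 / delta)          # multiplicative Chernoff, formula (4)
    addi = 1 / (2 * eps**2) * log(2 / delta)              # additive Chernoff, formula (5)
    clt = norm.ppf(1 - delta / 2) ** 2 / (4 * eps**2)     # central limit theorem bound
    hybrid = 3 * tau / eps**2 * log(2 / delta)            # hybrid bound, formula (10), eps1 = eps
    return {k: ceil(v) for k, v in
            {"multiplicative": mult, "additive": addi, "CLT": clt, "hybrid": hybrid}.items()}

print(sample_sizes(0.01, 0.01))   # roughly the first row of Table 1
```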
4. Experiments
We use the sampling method from [5] in our experiments, and the IBM Market-Basket Data Generator [1] is used to generate test datasets. First the sample size n is computed according to Formula (10), and then n samples are drawn randomly from the transactional dataset. What follows is fast Apriori [10], which discovers frequent itemsets from the sample set. The experiments were conducted on a PC with 256 MB memory and an Intel Pentium 4 1.6 GHz CPU running Windows 2000. One dataset used in our experiments is T10.I4.D100K, obtained by setting the number of transactions to 100,000, the average transaction size to 10 and the average maximal potentially large itemset size to 4. Another two synthetic datasets are T10.I4.D1M and T20.I6.D1M. The fourth dataset is a retail dataset of 88,162 transactions [11]. The precision and recall of the frequent itemsets are used to measure effectiveness. We run each method 20 times on each dataset, and the average precision, recall and running time of the different methods are shown in Table 2. For all datasets, ε is set to 0.01 and δ is also set to 0.01. The support threshold is set to 0.003, 0.003 and 0.005 for datasets T10.I4.D100K, T10.I4.D1M and T20.I6.D1M respectively, and to 0.01 for the retail data. For the first three datasets, the precision and recall with our hybrid bounds are nearly as good as those with the central limit theorem and the additive Chernoff bounds, while the running time is much less because fewer samples are necessary. For the retail data, the precision and recall with the hybrid bounds are lower than those with the additive Chernoff bounds and the central limit theorem. Experiments on several other datasets also show that the proposed bound is more effective on synthetic data than on real data; a possible reason is that synthetic data are more randomly distributed, whereas real data are somewhat skewed. How to improve the effectiveness of sampling on real data is an open problem and will be included in our future work.
Table 2. Experimental Results

                 Additive Chernoff Bounds   Central Limit Theorem   Hybrid Chernoff Bounds   Original Data
  Dataset        R     P     t(s)           R     P     t(s)        R     P     t(s)         t(s)
  T10.I4.D100K   0.99  0.99  2.3            0.98  0.98  2.0         0.95  0.97  1.6          5.1
  T10.I4.D1M     0.99  0.99  2.9            0.99  0.98  2.4         0.97  0.98  2.1          58
  T20.I6.D1M     0.99  0.99  1.7            0.98  0.99  1.1         0.98  0.96  0.4          59
  Retail         0.96  0.93  0.45           0.93  0.90  0.29        0.89  0.85  0.10         1.3
5. Conclusions
We proposed a new hybrid theoretical bound on the sample size for frequent itemset discovery. By combining the additive error bound and the multiplicative error bound, the proposed bound makes the theoretical sample size much smaller than that of traditional Chernoff bounds; theoretical analysis shows that the sample size is about an order of magnitude smaller. Our experiments validate the effectiveness of the proposed bound. In future work, we will design more efficient methods for discovering association rules based on the proposed bound.
Acknowledgement
This work is partially supported by Australian large ARC grants (DP0449535, DP0559536 and DP0667060), a China NSF major research Program (60496327), and a China NSF grant (60463003).
References

[1] Agrawal, R. and Srikant, R. Fast algorithms for mining association rules. In the 20th International Conference on Very Large Data Bases (VLDB'94), 1994, Santiago, Chile.
[2] Mannila, H., Toivonen, H. and Verkamo, A.I. Efficient Algorithms for Discovering Association Rules. In AAAI Workshop on Knowledge Discovery in Databases, 1994, Seattle, Washington, US.
[3] Toivonen, H. Sampling large databases for association rules. In 22nd International Conference on Very Large Databases (VLDB'96), 1996, Mumbay, India: Morgan Kaufmann.
[4] Zaki, M.J., et al. Evaluation of sampling for data mining of association rules. In 7th Intl. Wkshp. Research Issues in Data Engg., 1997.
[5] Zhang, C., Zhang, S. and Webb, G.I. Identifying approximate itemsets of interest in large databases. Applied Intelligence, 2003, 18: 91-104.
[6] Chen, B., Haas, P. and Scheuermann, P. A new two-phase sampling based algorithm for discovering association rules. In SIGKDD'02, 2002, Edmonton, Alberta, Canada.
[7] Parthasarathy, S. Efficient progressive sampling for association rules. In the IEEE International Conference on Data Mining, 2002.
[8] Mitzenmacher, M. and Upfal, E. Probability and computing: randomized algorithms and probabilistic analysis. Cambridge University Press, Cambridge, 2005.
[9] Heap, D. The bounds that tie. http://www.cs.toronto.edu/~heap/Misc/bounds.ps.
[10] Bodon, F. A fast APRIORI implementation. In Proceedings of the IEEE ICDM Workshop on Frequent Itemset Mining Implementations (FIMI'03), 2003.
[11] Brijs, T., et al. The use of association rules for product assortment decisions: a case study. In the 5th Int. Conf. on Knowledge Discovery and Data Mining, 1999, San Diego, USA.
Missing or Absent? A Question in Cost-sensitive Decision Tree
Zhenxing Qin1, Shichao Zhang and Chengqi Zhang
Faculty of Information Technology, University of Technology, Sydney
PO Box 123, Broadway, Sydney, NSW 2007, Australia
{zqin, zhangsc, chengqi}@it.uts.edu.au
Abstract. One common source of error in data is the existence of missing-value fields. Imputation has been a widely used technique in the preprocessing phase of data mining, in which missing values are replaced by estimated values. Previous work tries to recover the "original" values according to specific criteria, such as statistical measures. However, in the domain of cost-sensitive learning, minimal overall cost is the most important issue, i.e. a value which can minimize the total cost is preferred over the "best" value in the common sense. For example, in medical domains some data fields are usually left absent because the known information is enough for a decision. In this paper, we propose a new method to study the problem of "missing or absent values?" in the domain of cost-sensitive learning. Experimental results show improvements when missing and absent data are distinguished in cost-sensitive decision trees. Keywords: Induction, knowledge acquisition, machine learning.
1. Introduction

1.1. Missing fields in data sets

It is well known that "garbage in, garbage out." This principle is particularly significant in data mining, where the objective is to find interesting patterns in a large amount of data. Obviously, if the data is noisy and full of errors, the patterns we mine will have little or no value. Hence, data cleaning is a very important preprocessing step before serious mining can begin. Currently, there are three approaches to dealing with missing fields: marking, filtering and imputation. The marking method marks all unknown values with a special symbol, usually called a null value, which means the value exists but is not recorded. It does nothing further about the missing fields and leaves the data imperfections to the data mining algorithms; many machine learning algorithms, such as C4.5, are robust enough to handle such special values.

1 Corresponding Author: Zhenxing Qin is with the Faculty of Information Technology at University of Technology Sydney, PO Box 123, Broadway, Sydney, NSW 2007, Australia; zqin@it.uts.edu.au.
Filtering simply discards the data instances with missing fields and uses only the remaining data for mining. This often results in a substantial decrease in the sample size available for the analysis, which is not satisfactory as it may waste a lot of data. In particular, it assumes that data are "missing at random"; otherwise it may lead to biased estimates of the true patterns in the data. Imputation, currently the most common approach, assigns values to the missing fields based on some criteria. Many criteria have been proposed for various domains, such as statistical estimation, correlation analysis, and association among attributes.

1.2. Cost-sensitive learning

Traditionally, inductive learning builds classifiers to minimize the expected number of errors (also known as the 0/1 loss). Some inductive learning techniques, such as decision tree algorithms and naive Bayes, have met great success in building classification models with the aim of minimizing classification errors, and some practical systems, such as CART and C4.5, have been commonly used in decision-making applications. As an extension, much previous inductive learning research has also considered how to minimize the costs of classification errors, such as the cost of a false positive (FP) and the cost of a false negative (FN) in binary classification tasks. Cost-sensitive learning is motivated by the imbalance in misclassification errors, which is usually measured with a cost matrix. Misclassification error is not the only error in classification problems; a number of different types of classification errors are listed in [2], and the costs of the different types of errors are often very different. More recently, researchers have begun to consider both test and misclassification costs [4,7]. Ling, Yang, Wang and Zhang [1] proposed a new method for building and testing decision trees that minimizes the sum of the misclassification cost and the test cost; the objective is to minimize the expected total cost of tests and misclassifications. Table 2 shows a cost list for obtaining unknown attribute values, and Figure 1 is an example tree from [1]:
[Fig. 1 shows a decision tree whose root tests A6 (230:102); its branches lead to leaves P 107:0, P 108:0 and N 11:100, and to an internal node testing A1 (4:2) whose branches lead to leaves N 0:1, N 0:1, P 2:0 and P 2:0.]

Fig. 1. A decision tree built from the Ecoli dataset (costs are set as in Table 2).
  A1   A2   A3   A4   A5   A6   FP/FN
  50   50   50   50   50   20   600/800

Table 2. Test cost settings and misclassification cost matrix for the Ecoli dataset.
1.3. Motivation

Previous work on imputation tries to fix all the missing values and recover the "original" values according to specific criteria, such as statistical measures. However, in cost-sensitive learning domains such as medical diagnosis, a doctor usually needs only part of a patient's medical information to make a diagnosis, which means the other information is left "absent". Previous work [18] shows that such absent values are also useful for learning the doctor's diagnostic knowledge. This raises the question of whether we need to assign a value to a missing field that is actually an "absent" value. On the other hand, minimal overall cost is the most important issue, i.e. a value which can minimize the total cost is preferred over the "best" value in the common sense.

Let us look at an example of credit card applications in Figure 2: a decision tree is built from a training set with the data structure in table 1. At the root node there are 200 good and 300 bad customers, noted as (200:300). From the tree, we can see that the attribute "Criminal Record" is of no use in the left branch of the root node, i.e. low-income customers lead directly to a rejection. That means "Criminal Record" may be absent in a practical business application, and such absent data is also useful [18]. For the real "missing" data, existing imputation methods are still not enough for cost-sensitive learning. Assume the training set contains 10 percent of instances with missing values in attribute "Criminal Record", like instance V1 in table 1. From the point of view of previous imputation methods, we might intend to choose "No" for instances like V1, because we have already used the existing information to split the instance space, and the value distribution of attribute "Criminal Record" is 1/3 for "Yes" and 2/3 for "No". However, choosing "No" here means V1 would be regarded as a good customer even though his "Criminal Record" is unknown. According to the cost matrix in table 2, misclassifying a "bad" customer as "good" costs five times as much, so choosing "No" would totally mislead the cost-sensitive learner. This leads us to propose a cost-based estimation technique to decide whether the unknown values are missing or absent, and to select the value which minimizes the misclassification cost for the missing fields. A number of experiments have been carried out; the results show that our proposed method is able to predict the correct values for the missing fields more accurately.

The rest of the paper is organized as follows. Section 2 reviews the related work; Section 3 introduces the patching algorithm based on cost evaluation and discusses the new issues and properties of patching missing values in cost-sensitive learning; Section 4 presents our experimental results; Section 5 draws conclusions and discusses future work.
[Fig. 2 shows a decision tree whose root tests Income (300:200): the Low branch leads to Bad (245:0), the High branch leads to Good (5:100), and the Middle branch leads to a Criminal Record test (50:100) whose Yes branch is Bad (40:10) and whose No branch is Good (10:90).]

Fig. 2. Example of a decision tree for a credit card application
2. Previous work

The missing values problem is an old task in the data processing domain [10, 11]. Currently, to avoid wasting data through case-wise deletion of missing values, many imputation methods have been proposed to estimate the missing values from the existing data in databases [12, 13]. Previous work tries to recover the "original" values according to specific criteria, such as statistical measures: for example, replacing a missing value with an average value according to the Bayes formula, or choosing the most probable value according to statistics; association rules have also been used to predict missing values [13]. The performance of such estimation is measured by the overall prediction precision, which means every prediction error counts the same in the total error number (like 0/1 loss). In the domain of cost-sensitive learning, the cost matrix is a very important information source and the performance is measured by the overall cost, so a value which can minimize the total cost is preferred over the "best" value in the common sense. Most cost-sensitive learners are extended from traditional learners and usually have their own routines to handle marked missing values. A common way is to regard the null value as a normal value; C4.5 takes another way by dynamically choosing a normal value to replace the missing value according to the probabilities of all the attribute's values [8]. A new decision tree that minimizes the sum of misclassification cost and test cost is proposed in [1], and all unknown values (it uses "?" for the unknown value) are treated as a special "value": the examples with unknown attribute values are not grouped together as a leaf or used to build a sub-tree; instead, they are "gathered" inside the node that represents that attribute. Generally speaking, the standard approach to coping with imperfect data is to delegate the burden to the theory builder [14]. Without an overall evaluation mechanism, each algorithm must institute its own noise-handling routine to ensure its robustness, because noise is unavoidably introduced while dealing with missing values, duplicating the effort required
even when the same data set is used in each case. Especially in the case of multiple missing values in the same instance, biased views will be introduced by those internal routines for missing values. This leads us to propose a cost-sensitive estimation method, extending the traditional 0/1-loss imputation methods by involving the cost matrix. The method builds a dynamic test-cost-sensitive decision tree for each example with missing values: it first builds a tree from the attributes with known values, which shrinks the instance space the example belongs to, and then uses the cost matrix to evaluate all possible values of the attributes with unknown values. A value with minimal overall cost is selected to patch the missing field.
3. Evaluating and patching up missing values

Instead of using a single tree as above, we adopt a lazy decision tree algorithm for evaluating and patching unknown values. This dynamic tree-building strategy utilizes as much information in the known attributes as possible [18]. More specifically, for each instance under patching, a lazy decision tree is built from all of the training examples using only those attributes whose values are known in the example. Given a test example with known and unknown attributes, we first reassign the cost of the known attributes to $0 while the cost of the unknown attributes remains unchanged. For example, suppose that there are 3 attributes and their costs are $30, $40, and $60 respectively. If in a test example the second attribute value is missing, then the new test costs would be reset to $0, $40, and $0 respectively. Then a cost-sensitive tree is built using [1]. If an attribute with an unknown value is selected in the tree, it means the attribute is important for decision making, and we need to patch it up; otherwise, the unknown value is marked as an "absent" value that is trivial for decision making. We expect this small change to reduce the total misclassification and test cost.

  Attribute    A1   A2   A3      A4   A5   A6      Class
  Value         6    2   ? (1)    2    2   ? (3)     P
  Test cost     0    0   50       0    0   20

Table 3. An example with unknown values and new test costs
An example instance with unknown values is shown in Table 3. Since A1, A2, A4, and A5 are the only known attributes, a new decision tree is built as in Fig. 4. Once an attribute is chosen to split a node, the subspace is decomposed into smaller subspaces and the other attributes with unknown values are evaluated recursively; this is suitable for patching an instance with multiple missing values. In this example, only A6 is selected into the tree, which means A3 is trivial for decision making and should be marked as "absent". We then explore the tree: A5's value is 2, so the instance goes down the second branch and meets the node for the unknown-value attribute A6. This means we need to consider patching A6.
We then begin to evaluate the children of the internal node A6 and decide which branch is most appropriate for the missing value of A6. As mentioned before, each branch leads to a smaller sub-space. With a specific cost matrix, all the misclassification costs can be calculated, and all the children are then sorted by their cost from left to right. For example, the branch (A6 = 2) has 1 negative instance and is labeled positive, so the overall misclassification cost here is MC1 = N*FP / (N+P) = 1 * 800 / 16 = 50. MC1 is minimal compared to the other three possible values, so value 2 is selected to patch the missing value.
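The sketch below illustrates this selection step: for each candidate value of the unknown attribute, it computes the expected misclassification cost of the corresponding leaf and picks the cheapest one. The leaf statistics follow the A6 example in Fig. 4, the cost of a misclassified negative is taken as 800 to match the worked computation above, and the data structures are hypothetical.

```python
# Minimal sketch: choose the value of an unknown attribute that minimizes
# the expected misclassification cost of the branch it would fall into.
def branch_cost(pos, neg, cost_neg_as_pos=800, cost_pos_as_neg=600):
    """Expected misclassification cost of a leaf labeled with its majority class."""
    total = pos + neg
    if pos >= neg:                               # leaf labeled P: negatives are misclassified
        return neg * cost_neg_as_pos / total
    return pos * cost_pos_as_neg / total         # leaf labeled N: positives are misclassified

def patch_value(leaves):
    costs = {v: branch_cost(p, n) for v, (p, n) in leaves.items()}
    return min(costs, key=costs.get), costs

# Leaves under the A6 node of Fig. 4: candidate value -> (positive count, negative count).
a6_leaves = {2: (15, 1), 4: (2, 21), 5: (2, 19), 6: (2, 11)}
print(patch_value(a6_leaves))   # value 2 is cheapest: 1 * 800 / 16 = 50
```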
Fig. 4. The decision tree built on the known attributes, extended from Figure 2: the root splits on A5, and its second branch leads to the unknown-value node A6 with branches for values 2, 4, 5 and 6, each shown with its class distribution.
4. Experiments

We conducted experiments on five real-world datasets [3] and compared our patching method and the special value method in [1] against C4.5. These datasets were chosen because they have at least some discrete attributes, a binary class, and a good number of examples. The numerical attributes are discretized first using the minimal entropy method [15], as our algorithm can currently only deal with discrete attributes. The datasets are listed in Table 5. The cost matrix in Table 2 is used for all of the datasets.

Dataset      No. of attributes   No. of examples   Class distribution (P/N)
ECOLI               6                  332                 230/102
Breast              9                  683                 444/239
Heart               8                  161                  98/163
Thyroid            24                 2000                1762/238
Australia          15                  653                 296/357

Table 5. Datasets used in the experiments.
Fig. 5. Comparison of the average cost of the three tree-building strategies (C4.5, Special Value and Patching) on the five datasets (Ecoli, Breast, Heart, Thyroid and Australia).
First, we randomly select 5% of the values in the dataset D to be missing; the resulting dataset is denoted D'. Using our patching method, D' is evaluated and patched, and the patched dataset is denoted D''. We then run the tree-building algorithm in [1] on both D' and D''. The performance of the two methods is also compared with C4.5 run on D'. We compare the average cost (total cost divided by the number of test examples) of the patching method and the special value method against C4.5 on all five datasets. The results are shown in Figure 5. From Figure 5, we can see that the patching method outperforms the other two in total cost. This means that building decision trees on the patched dataset gives a better overall performance than building on the dataset with imperfections.
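The evaluation loop can be sketched as below; the helper names (inject_missing, average_cost, predict) are our own placeholders for the steps described in the text, and we assume, as in cost-sensitive decision trees [1], that the total cost sums misclassification and test costs:

```python
import random

def inject_missing(dataset, fraction=0.05, seed=0):
    """Randomly mark a fraction of attribute values as missing (None)."""
    rng = random.Random(seed)
    d = [dict(row) for row in dataset]
    cells = [(i, a) for i, row in enumerate(d) for a in row if a != "Class"]
    for i, a in rng.sample(cells, int(fraction * len(cells))):
        d[i][a] = None
    return d

def average_cost(tree, test_set, fp_cost, fn_cost, test_costs, predict):
    """Average cost = (misclassification cost + test cost) / #test examples."""
    total = 0.0
    for row in test_set:
        label, used = predict(tree, row)      # predicted class, attributes tested
        total += sum(test_costs[a] for a in used)
        if label == "P" and row["Class"] == "N":
            total += fp_cost
        elif label == "N" and row["Class"] == "P":
            total += fn_cost
    return total / len(test_set)
```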
Fig. 6. Influence of the cost matrix on our patching method: average cost for FP:FN ratios from 800:800 to 800:200.
To study the influence of the cost matrix on the choice of values, we ran the patching with different cost matrices (FP:FN from 800:800 to 800:200). The result is shown in Figure 6. From Figure 6, we can see that the cost matrix significantly influences the patching performance, and that our method works better with skewed costs. We conclude that the cost matrix is very important for patching missing values in cost-sensitive learning.

5. Conclusions and Future Work

In this paper, we presented a simple and novel method to evaluate and patch unknown values in cost-sensitive learning. Aiming to minimize the misclassification cost, the value with
the least risk of high cost is selected to fill the missing fields. We proposed a new cost-based estimation criterion for value selection, in which only the "important" values are imputed and the others are marked as "absent". Our experiments show that the new patching algorithm with the cost-based estimation criterion dramatically outperforms the common filtering and completing methods. In the future, we plan to test the patching method on other cost-sensitive learners, such as Naive Bayes. Pruning can also be introduced into our tree-building algorithm to avoid over-fitting the data.
References
[1] Charles Ling, Qiang Yang, Jianning Wang and Shichao Zhang (2004), Decision Trees with Minimal Costs. In: Proceedings of the 21st International Conference on Machine Learning, Banff, Alberta, Canada, July 4-8, 2004.
[2] Turney, P. D. (2000), Types of cost in inductive concept learning. Workshop on Cost-Sensitive Learning at the Seventeenth International Conference on Machine Learning, Stanford University, California.
[3] Blake, C. L., and Merz, C. J. (1998), UCI Repository of machine learning databases (see http://www.ics.uci.edu/~mlearn/MLRepository.html). Irvine, CA: University of California, Department of Information and Computer Science.
[4] Turney, P. D. (1995), Cost-sensitive classification: Empirical evaluation of a hybrid genetic decision tree induction algorithm. Journal of Artificial Intelligence Research, 2: 369-409, 1995.
[5] Mitchell, T. M. (1997), Machine Learning. McGraw-Hill.
[6] Zubek, V. B., Dietterich, T. G. (2002), Pruning Improves Heuristic Search for Cost-Sensitive Learning. In: Proceedings of the Nineteenth International Conference on Machine Learning, pp. 27-34, Sydney, Australia.
[7] Greiner, R., Grove, A. J., and Roth, D. (2002), Learning cost-sensitive active classifiers. Artificial Intelligence, 139(2): 137-174, 2002.
[8] Quinlan, J. R. (1993), C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, California, 1993.
[9] Breiman, L., Friedman, J. H., Olshen, R. A., and Stone, C. J. (1984), Classification and Regression Trees. Wadsworth, Monterey, California, 1984.
[10] R. J. A. Little and D. B. Rubin, Statistical Analysis with Missing Data. Wiley Series in Probability and Mathematical Statistics, John Wiley and Sons, USA, 1987.
[11] J. R. Quinlan, Unknown Attribute Values in Induction. In: Segre, A. M. (ed.), Proc. of the Sixth Int'l Workshop on Machine Learning, pp. 164-168, Morgan Kaufmann, Los Altos, USA, 1989.
[12] W. Z. Liu and A. P. White, Techniques for dealing with missing values in classification. In: Second Int'l Symposium on Intelligent Data Analysis, London, 1997.
[13] A. Ragel and B. Cremilleux (1999), "MVC, a preprocessing method to deal with missing values." Knowledge-Based Systems, pp. 285-291.
[14] Choh Man Teng, Polishing blemishes: Issues in data correction. IEEE Intelligent Systems, March/April 2004, pp. 34-39.
[15] U. M. Fayyad and K. B. Irani, Multi-interval discretization of continuous-valued attributes for classification learning. In: Proceedings of the 13th International Joint Conference on Artificial Intelligence (IJCAI-93), pp. 1022-1027. Morgan Kaufmann, 1993.
[16] Z. Qin, C. Zhang and S. Zhang, Cost-sensitive Decision Trees with Multiple Cost Scales. In: Proceedings of the 17th Australian Joint Conference on Artificial Intelligence (AI 2004), Cairns, Queensland, Australia, 6-10 December, 2004.
[17] Zhenxing Qin, Chengqi Zhang and Shichao Zhang, Dynamic Test-sensitive Decision Trees with Multiple Cost Scales. In: Proceedings of FSKD 2005, Changsha, China, August 2005: 402-405.
[18] Shichao Zhang, Zhenxing Qin, Charles Ling and Shengli Sheng, "Missing is Useful": Missing Values in Cost-sensitive Decision Trees. IEEE Transactions on Knowledge and Data Engineering, Vol. 17, No. 12 (2005): 1689-1693.
KIDS table: Interactive tabletop Display Systems Kunio SAKAMOTO, Yuko IWAZE Interdisciplinary Faculty of Science and Engineering, Shimane University
Abstract. Conventional information displays provide us with images, but these images are generally two-dimensional (2D). If images are displayed in three-dimensional (3D) space, they float in the air at a distance from the desk, and users can directly touch and interact with them, much as kids play with clay. In addition, many applications of 3D imaging can be proposed. The authors have researched 3D displays and their applications, and have developed new applications and a prototype interactive tabletop holographic display system. These systems consist of an object recognition system and a spatial imaging system. In this paper, we describe a recognition system using RFID tags and a tabletop display system using a conventional CRT display and holographic technology.
1. Introduction

A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems, and have previously proposed 3D displays that use a slit as a parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying an active image [1], [2], [3]. The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The authors describe an interactive tabletop 3D display system in which the observer can view virtual images when a special object is put on the display table. The key technologies of this system are the spatial imaging display and the object recognition system.
2. System Concept

Figure 1 shows the concept of an interactive tabletop 3D display system. When kids put objects on the table, the display system gives users virtual 3D images floating in the air, and the observers can touch these floating images. The system is interactive: the virtual images move automatically according to circumstances, and the users and the floating images act on each other when a user touches a virtual image.
Figure 1. Interactive tabletop 3D display: real objects on the table top produce virtual images floating in the air (moving, interactive, touchable and practical).
Thus, when the user puts an object on the table, the system displays a virtual image with the same color and shape as that object. Moreover, users can indicate attributes such as active motions and functions. For example, when the user puts down a card on which a clock is drawn, the system displays the virtual image of a clock that keeps exact time. In this case, the function is to display the clock and the motion is to rotate the hour and minute hands precisely.
Figure 2. Display systems that require special glasses: (a) the polarized glasses display and (b) the LC shutter glasses display (trial models SUB-01D, SUB-01E, SUB-02D and SUB-02E), showing how the left/right eye images or the User 1/User 2 images are separated in each state.

Figure 3. Appearance of the Type 1 display (glasses and monitor).

Figure 4. Example of displaying dual views: (a) displaying dual images, (b) the image observed by User 1, (c) the image observed by User 2.
3. Tabletop imaging by generic displays

Simple and generic display systems are built around a conventional glasses-based displaying method; these systems require viewers to wear special glasses to see 3D images. As shown in Figure 2, the trial Type 1 display has two developments: Model SUB-01 utilizes polarized glasses, and Model SUB-02 uses liquid crystal (LC) shutter glasses. The orthogonal polarizing method and the time-sharing method realize the image separation for the left and right eyes. When the user views the left and right images with the corresponding eyes, these display systems provide stereoscopic images for a single user. Another use is that these display systems can provide different images to other users around the table. When User 2 sits down opposite User 1, each user must view a different image so as not to perceive an upside-down image. Figure 3 shows the appearance of the trial Type 1 (Model SUB-02) system.
Figure 4 shows images observed on the prototype display using the liquid crystal shutter glasses. When an observer who is not wearing the special glasses views the screen of the CRT display, both the User 1 and User 2 images are observed, as shown in Figure 4(a). The liquid crystal shutter glasses separate these images by time sharing, so each observer views only the image for User 1 or User 2, as shown in Figures 4(b) and (c).
4. Spatial imaging by hologram

4.1 Type of hologram

Many kinds of artistic holograms have been developed, for example the rainbow hologram, the shadowgram and so on. The Lippmann hologram, the color hologram and the holographic stereogram are also kinds of holograms. The tabletop display described in this paper needs images floating in the air, so the hologram must reconstruct a real image using conjugate illumination, as shown in Figure 5. Our hologram is made with a PFG-01 holographic silver halide plate and exposed by a 10 mW He-Ne laser. The PFG-01 recording medium is sensitive to red only, and the reconstruction wavelength is generally 635 nm (red only). In reconstruction with white light, color-smeared images are observed because white light contains components of various wavelengths. Thus, a transmission hologram can reconstruct a monochrome or rainbow-colored image.
Figure 5. Two-step recording optical layout: (a) holographic mastering with object and reference beams on the conical holographic recording material, (b) transfer process using the conjugate beam, (c) reconstruction process, in which the hologram plate reconstructs the real image for the observer.
4.2 Recording and reconstruction process

The two-step recording method is the typical way to obtain a floating holographic image. It involves two steps: holographic mastering and the transferring process. In the first step, a master hologram is recorded on conical hologram film; in the second step, an image hologram (transfer hologram) is made by holographic transferring. A master hologram is recorded using the optical layout shown in Figure 5. As shown in Figure 5(a), the 3D information of the objects is recorded on the holographic material. The shape of this material must be conical or hemispherical so that the material can shade the objects. Image holograms are made with the conventional holographic transferring method
as shown in Figure 5(b). The reconstruction process is shown in Figure 5(c): the transfer hologram reconstructs the floating image in the air.
Figure 6. Multiple recording process: (a) first exposure and (b) second exposure, each recording a light object onto the holographic recording material with object and reference beams.

Figure 7. Reconstructing the multiple-recorded hologram with the conjugate beam: the other recorded images are reconstructed together with the target image.

Figure 8. Reconstructing only the target image: a shade with a hole in its center is placed in front of the hologram to cut off the needless images.
4.3 Multiple recording

The interactive tabletop display system shows many different images according to the ID object put on the table. Hence, the hologram needs to record and display multiple images. Wavelength and angle selectivity are natural attributes of a hologram: a hologram can reflect or diffract a narrow band of wavelengths with high efficiency, and there is a duality between the spectral bandwidth and the angular bandwidth of a hologram. A multiple recording technique allows each image of an angle-multiplexed hologram to be reconstructed at the same angle at which it was recorded, simply by varying the illumination angle. Figure 6 shows the recording process for two floating images. When this hologram is illuminated, the target image appears at the center of the hologram medium, as shown in Figure 7; however, the other images are also reconstructed together with the target. To cut the needless images off, a shade with a hole in its center is located at a distance from the hologram plate, as shown in Figure 8. Thus, only the target image appears in the air. Wavelength selectivity likewise allows the target image to be extracted from the multiplexed hologram at the same wavelength at which it was recorded, by varying the wavelength of the illumination. Three images are recorded on the holographic material using different wavelengths (for example three colors: red, green and blue). A Lippmann color hologram using a full-color-sensitive material reconstructs a true color image when it is illuminated with white light. Moreover, a Lippmann hologram using a monochrome-sensitive material (for example, red only) reconstructs only a one-color (red) image even if a white
light illuminates its medium. Hence, the target image is selectively reconstructed when light of the appropriate color illuminates the multiplexed hologram made by the multiple recording.
5. Object recognition

The user puts objects and cards on the interactive table. The system recognizes the attributes (color, shape, motions, functions, etc.) of these objects and cards through their unique identification. Each object and card has an identifier attached to its back; our trial system utilizes RFID tags as the identifiers. A Radio Frequency Identification (RFID) tag is a small silicon chip that carries an ID and a radio frequency function, and such tags are attached to goods for applications such as transportation tickets and luggage management. RFID technologies deliver wireless communication with small tags. Figure 9 shows the RFID board for reading tags together with the RFID tags. The RFID board is a USB-based controller for radio frequency identification applications. It has a read range of approximately 3 inches for the three types of RFID tags used: the disc tag, the credit-card-sized tag and the keyfob tag. The tags are read-only devices with unique IDs.
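A minimal sketch of how a read tag ID could be mapped to the attributes of the corresponding object is given below; the table contents, tag IDs and function name are illustrative assumptions, not part of the authors' implementation:

```python
# Hypothetical lookup table from RFID tag IDs to object attributes.
TAG_ATTRIBUTES = {
    "04A1B2C3": {"object": "clock", "color": "white",
                 "function": "show time", "motion": "rotate hands"},
    "04D4E5F6": {"object": "goldfish (KINGYO)", "color": "red",
                 "function": "swim", "motion": "wander on the table"},
}

def recognize(tag_id):
    """Return the attributes of the object carrying this tag, if known."""
    return TAG_ATTRIBUTES.get(tag_id)

print(recognize("04A1B2C3"))
```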
Figure 9. RFID reader and cards with embedded tags.

Figure 10. Demonstration of the AQUARIUM with example models.
6. Working model

6.1 Applications of tabletop display

Figure 10 shows an image of "the interactive digital AQUARIUM" using the prototype system Model SUB-02D shown in Figure 3. In this figure, a KINGYO (KINGYO means goldfish in English) is displayed as an example when the user puts a model of the KINGYO on the display table. This prototype system requires the observer to wear special glasses such as the liquid crystal shutter glasses. To remove this inconvenience, the authors utilize a hologram for displaying spatial images.

6.2 Interactive hologram

Holograms are illuminated by laser or white light. Laser (or laser diode, LD) illumination generally reconstructs a monochrome holographic image (for example, red only). However, it is possible to reconstruct color images using three color lasers (red,
green and blue). The tabletop hologram described in this paper selectively reconstructs a floating image by controlling the angle or wavelength of the illuminating light. White light illumination can also be used: the viewing angle and the color of a hologram image, for example a rainbow hologram, change according to the incident angle of the light. This effect is very useful for building an interactive holographic display system. According to the ID tags, the angle of the incident beam changes, and the hologram then interactively reconstructs spatial images. Figure 11 shows the light control system. As shown in Figure 11, the floating image is selectively reconstructed by controlling the angle of the illuminating light. In these trial systems, all lights are lasers of the same color (red only) or white lights. It is also possible to use lasers of various colors to interactively change the color of a reconstructed image. Figure 12 shows the prototype holographic display system.
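The control loop implied by Figure 11 (tag ID read over USB, illumination switched through an RS-232C I/O board) might look roughly like the sketch below; the serial commands, port name and tag-to-light mapping are purely our assumptions:

```python
import serial  # pyserial; the RS-232C I/O board protocol below is hypothetical

# Assumed mapping from tag IDs to the illumination channel whose incident
# angle (or wavelength) reconstructs the matching recorded image.
TAG_TO_LIGHT = {"04A1B2C3": 1, "04D4E5F6": 2}

def switch_light(board, channel):
    """Turn on one illumination channel and turn the others off."""
    for ch in (1, 2, 3):
        cmd = "ON" if ch == channel else "OFF"
        board.write(f"{cmd} {ch}\r\n".encode("ascii"))

board = serial.Serial("COM1", 9600, timeout=1)
tag_id = "04A1B2C3"            # would come from the USB RFID reader
switch_light(board, TAG_TO_LIGHT.get(tag_id, 1))
```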
Figure 11. System configuration: the PC workstation reads the RFID tags via a USB tag reader and controls the hologram illumination light through an RS-232C I/O control board.

Figure 12. Appearance of the tabletop hologram system and reconstructed images: (a) tabletop display system, (b) reconstructed image.
7. Conclusions

An interactive tabletop 3D display system has been described. The display provides virtual images when the user puts an object on the display table; the objects put on the table are recognized by RFID tags, and the system and its users can act on each other.

Acknowledgments

This research is partially supported by the "Grant-in-Aid for Young Scientists (B)" #17700115 from the Ministry of Education, Culture, Sports, Science and Technology Japan (MEXT), by the "Feasibility Study" and "Science and Technology Incubation in Advanced Regions" programs of the Japan Science and Technology Agency (JST), and also by the Mazda Foundation's Research Grant.
References
[1] K. Sakamoto, R. Kimura, M. Takaki, Parallax Polarizer Barrier Stereoscopic 3D Display Systems, Proc. of the 2005 International Conference on Active Media Technology, pp. 469-474, 2005.
[2] K. Sakamoto, M. Takaki, R. Kimura, Lenticular 3D display using double polarizer slits, Proc. of 11th International Conference on Virtual Systems and Multimedia, pp. 123-126, 2005.
[3] K. Sakamoto, M. Takaki, M. Nishida, "Parallax Barrier 3D Reflection Display Using Holographic Screen", Proc. of 12th International Display Workshops, 3D3-3, 2005.
Generating Believable Personality-Rich Story Characters Using Body Languages Wen-Poh SU, Binh PHAM and Aster WARDHANI Faculty of Information Technology, Queensland University of Technology GPO Box 24334, Brisbane, QLD, Australia
[email protected], {B.Pham, A.Wardhani}@qut.edu.au
Abstract. We present an approach using story scripts and action descriptions in a form similar to the content description of storyboards to predict specific personality and emotional states. By constructing a hierarchical fuzzy rule-based system we facilitate the personality and emotion control of the body language of dynamic story characters. Our ultimate goal is to facilitate the high-level control of synthetic characters. Keywords. Nonverbal behaviour, body language, storyboard, personality, emotion, fuzzy logic, story character animation
Introduction

Building life-like or believable synthetic agents can mean achieving different levels of believability: moving, acting and reacting believability. Moving believability means that characters embody human figure animation, locomotion and path finding. Acting believability indicates a character with more styled behaviours, including the use of gaze, posture, gesture, interpersonal space and other details. Reacting believability means a character with better communication capabilities, driven by a complex internal set of motivations, goals, emotions and personalities. These levels relate to a variety of research areas such as emotion and personality, non-verbal communication, natural language processing and speech recognition [1]. Interpersonal communication is characterized not only by verbal but also by non-verbal cues. Non-verbal signals include facial expression, gaze, gesture, posture, spatial behaviour, non-verbal vocalizations and other aspects of appearance. Several researchers have built animated embodied conversational agents that synthesize speech with animated hand gestures [2]. Most of these works focus on gestures performed by the hands and arms; however, other bodily signals are also used during communication. In this paper, we focus on the body postures and gestures underlying the acting and reacting believability of story characters. Our research maps high-level personality and emotion factors to the overall body language of a synthetic character used in story simulation. We focus on how to analyse the personality and emotion of a story character from specific context information given in a form similar to the content description of storyboards. Artists recognize these principles and properly apply them in drawing, animating, acting or
writing. However, the question is how a computational synthetic agent could interpret story scripts that lack the hand-sketched graphics of a storyboard. In our previous work [3], we constructed a hierarchical fuzzy rule-based system that facilitates reasoning about a character's movement and maps it to a reasonable personality type and emotional state. We adopt the Abridged Big Five Circumplex Model (AB5C) of personality from psychological study [4] as the basis for a computational model, and we simplify and analyse 32 personality types for simulating animated story characters. Our Personality and Emotion (P & E) Engine takes advantage of relevant knowledge described by psychologists and by researchers of storytelling, non-verbal communication and human movement. In narrative character design, the personality and emotion categories of a story character are derived from the story plot during the authoring process. Personality and emotion both influence the behaviour decisions of a story character, e.g. to confront danger or to withdraw from it. Moreover, personality consistently influences the threshold of each emotional intensity. Thus, we devise a schema for this hierarchical relationship between personality, emotion and behaviour. Personality and emotional states are mapped onto the body's movements, which gives storytelling players/designers an effective way to control synthetic characters through high-level personality and emotion controlling mechanisms. The P & E engine provides a consistent and predictable mapping from story character types and behaviour types to appropriate posture motions. In this paper, we extend our P & E engine to support story input and script analysis for generating the body language of a narrative agent. This extension allows us to integrate the textual and non-verbal aspects of a synthetic story character to accomplish communication goals. A story designer or player devises descriptive storylines as input to the P & E engine and fine-tunes the resulting character performance. We utilize the semantics of the action descriptions to facilitate the generation of character movements, and we analyse what body language a synthetic character should convey corresponding to the narrative context. As a result, users can explicitly modify or fine-tune the personality and emotion values to change the feelings, drives and motivations of the story characters, which will affect their behaviour through body posture expression.
1. Modelling Believable Characters from Story Scripts

The acting believability of a synthetic character indicates a character with more styled behaviours, which derive from psychological factors. Personality, emotion, social relationships and behavioural capabilities are the fundamentals for providing high-level directives to an autonomous character architecture. Can we analyse story scripts to obtain these fundamental factors and provide sufficient information for character performance? In order to model an autonomous story character that can perform believable acting behaviour, we identify the following challenges. In a story, action and body language are generally described together to accomplish communication goals. For instance, a girl might be upset by an approaching stranger, in accordance with her personality and with her emotion upon being lost. She may cross her arms as a non-verbal expression of defensiveness, rejection and irritation. These types of non-vocal expressions cannot be decoded by speech or dialogue recognition processes.
How can we derive and differentiate the varieties of dialogue, description and action data from context information? How can we provide sufficient action descriptions of a story character in text for reasoning? How can we create a context database (a "semantic" action plan) for the interpretation of body language? Body language varies with diverse personality types and emotional states. Every person has distinctive ways of standing, walking, sitting and gesturing based on their personality type and emotional state. Personality influences the patterns of our body language and gives us distinctiveness. How does body language derive from personality types? How is emotion related to body movements?

2. Mapping Personality, Emotion and Behaviour to Body Language

Consider what happens when we meet a stranger. How can we tell from the posture of someone who stops us to ask the way whether he is lost or has designs on our wallet? Naturally, we want to know whether this person is threatening, aggressive, sympathetic or deceptive. Most of these judgements derive from the variety of personalities: the accumulated body signals reveal the inclination of a personality. For instance, a person displaying more liking behaviour can be predicted to be high in openness and extraversion. Therefore, we decode the text into the meaning of body language to acquire a possible personality type. In devising a dynamic story character, a simple way to simulate the personalities of the characters is through the contrast of movements; we therefore map the personality and emotion effects to the characters' behaviour. In Lamb's theory [5], the kinesphere of the human body is divided into three shaping zones: horizontally, vertically and sagittally oriented. The kinesphere represents the metaphor of people having an orbiting vapour trail around their own body centre through the way they shape their movement. To facilitate analysis in a body movement scoring system for psychological study, Bull [6] divided body movements into four main areas: i) head, ii) trunk, iii) upper limbs, iv) lower limbs. We map these theories together to facilitate the modelling process of a synthetic character. In Table 1, we summarize the relationship between body language, the kinesphere posture principles and the four main body areas.

Table 1. Some Examples of Body Language and the Four Main Areas of Body Movement

Body area      Body language                    Kinesphere zone
Head           Face up / Chin down              Vertical
Upper limbs    Open posture / Closed posture    Horizontal
Trunk          Lean backward / Lean forward     Sagittal
Lower limbs    Open posture / Closed posture    Horizontal
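A toy encoding of Table 1 as a lookup from observed body language to its kinesphere zone (purely illustrative; not the authors' data structure) could be:

```python
# (body area, body language) -> kinesphere zone, following Table 1.
KINESPHERE_ZONE = {
    ("head", "face up"): "vertical",
    ("head", "chin down"): "vertical",
    ("upper limbs", "open posture"): "horizontal",
    ("upper limbs", "closed posture"): "horizontal",
    ("trunk", "lean backward"): "sagittal",
    ("trunk", "lean forward"): "sagittal",
    ("lower limbs", "open posture"): "horizontal",
    ("lower limbs", "closed posture"): "horizontal",
}

print(KINESPHERE_ZONE[("trunk", "lean forward")])   # sagittal
```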
3. System Architecture

Our system consists of four major parts: story script input, the P&E (Personality and Emotion) engine, the animation and graphics engine, and the display, as shown in Figure 1. In
the story input, we analyse how to generate behaviour from a story script and map the meaning of body language to the appropriate psychological factors; we utilize adjectives and adverbs to describe the body language. The P&E engine provides automatic high-level control of a story character's type and emotional state: patterns of personality and emotion based on psychological theories are mapped to computational linguistic variables by a hierarchical fuzzy system. The animation and graphics engine interprets the values from the P&E engine for the movement of the animated characters. It receives the outputs of the P&E engine and generates a mixed posture animation using linear algorithms, and it is also responsible for maintaining the geometric model and for controlling the rendering process and the display. Finally, the results are displayed as animated sequences for visualization purposes.
Figure 1. System overview: story context (adjectives and adverbs), the meaning of body language, the P&E engine, the action, gesture and posture selection stage, and the actor.
3.1. Story Input Module

Researchers have transcribed scripts and actions from a moment of a film [7] to support natural language generation and action; however, non-verbal communication and psychological ingredients were not taken into account. We study believable agents that use non-verbal communication derived from psychological models (personality and emotional state) to influence their behaviours. In order to provide sufficient action descriptions of a story character in text for reasoning, we devise a story structure that adopts the idea of the storyboard from film production. A storyboard combines images, text and audio explanation to give a basic overview of the content and functionality of a scenario: it expresses what pictures will be seen, when and for how long, and what audio and text will accompany the images. We devise six basic dialogue scenarios for computational purposes. Our basic dialogues consist of three segments: standing alone, meeting the man, and following/not following the man, abbreviated in Table 2. From this scenario, we develop more than a hundred body postures and gestures for selection in a scene graph. We describe the action detail in text to replace hand sketches and to convey whether someone is anxious, hesitant, gentle, keen, enthusiastic, and so on. For instance, the girl stares at the stranger and keeps her distance; she nibbles her fingernails with one hand while grabbing her suitcase tightly in the other. This shows the girl is sceptical, anxious, impolite, careful, introverted and nervous about facing the stranger. We devise a gestural and postural lexicon look-up table. The cumulative meanings of the body language can be used to predict a likely personality type and emotional state. By analysing the descriptive meaning of the body language and the story scripts, we are able to collect the adjectives and adverbs used; by accumulating all the descriptive keywords, we are able to evaluate the maximum frequency of keywords and the times of emotional transitions. From the foregoing example, the personality of the girl can be supposed to be low (L) in openness (O), extraversion (E) and agreeableness (A), and high (H) in
conscientiousness (C) and neuroticism (N). Accordingly, the emotional state can be slightly in fear.

Table 2. An Abbreviated Proposed Scenario

Scene 1. Script (script supervisor): <Ally> Ally is carrying her suitcase. She is lost in a bus terminal of an airport. A passing man offers her help. Actions (directed by director): She remains in her original position and folds her arms in front. She puts down her suitcase. She looks around. Time: 1'30".
Scene 2. Script (script supervisor): Can I help you? Actions (directed by director): He is in an open posture and nods. Time: 1'.
3.2. P & E Engine

The P&E Engine consists of hierarchical MIMO Fuzzy Logic Controllers (FLCs): a personality FLC module and six emotion FLC modules that provide the visual presentation for the animation engine [3]. The personality FLC module is devised for reasoning about the type of the story character and its behaviours. The character and behaviour types produced by the personality FLC module are coupled with the emotion selection to become the input variables of the emotion FLC modules. The six emotions the FLC modules handle are happy, surprise, angry, sad, disgust and fear. The output variables of the emotion FLCs are horizontal, vertical and sagittal. The output of the story script module can be used to predict a possible personality type, which is then extracted to become an input value of the P & E Engine. The result is in the form of LHLLH (OCEAN respectively), as in the following example description: (C-H, E-L) means careful, cautious, punctual, formal and thrifty; (E-L, N-H) means lonely, weak and cowardly. The meanings of these body languages are stored as adjectives or adverbs mapping to our 32 personality combinations. Moreover, the emotion transition is another output of the story scripts: we derive the emotion from the numbered scenes, and the emotion of each scene is updated in the P & E engine. The time factor is considered as well, along with the mood changes, to provide a reference for the animation and graphics engine. This process can assist authoring by giving the personality descriptions as a guideline and fine-tuning mechanism; an author or story designer can evaluate the results and decide to modulate them to enhance the performance.

3.3 Animation and Graphics Engine

Human motions and our animation engine are implemented using Maya as the visualization environment. The Maya Embedded Language (MEL) provides commands and functions for expressions that drive Maya's interface and preset functions, and it allows easy creation of custom graphical user interfaces and procedures. The animation and graphics engine receives the possible postural values for the horizontal, vertical and sagittal orientations. The movement of the character remains within a consistent postural threshold, and his/her possible habitual gestures are limited, so as to display the specific personality of the character. If the results fall short of the story designer's expectations, the input values of the P & E engine can be refined for better performance. Figure 2 displays some of the gestures and postures corresponding to the unfolding story. We physically constrain the movements of an animated model and collect a set
of data for each motion interpolation. By fitting curves to the data, we create an equation for each motion, for instance $LegSpread = 0.194 * ($H)^2 - 0.5 * ($H) + 0.23, where $H means "Horizontal", varying between -1 (enclosing), 0 (neutral) and 1 (spreading).
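The curve-fitting step can be illustrated as follows; the sampled (H, LegSpread) values are invented for the sketch (chosen to be consistent with the quoted equation), and this is not the authors' MEL code:

```python
import numpy as np

# Hypothetical samples of the horizontal shaping value H and the measured
# leg spread of the constrained model at that value.
H = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
leg_spread = np.array([0.92, 0.53, 0.23, 0.03, -0.08])

# Fit a quadratic, as in the $LegSpread equation quoted in the text.
a, b, c = np.polyfit(H, leg_spread, 2)
print(f"LegSpread = {a:.3f}*H^2 + {b:.3f}*H + {c:.3f}")

# Evaluate the fitted curve for a neutral-to-spreading posture.
print(np.polyval([a, b, c], 0.75))
```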
Figure 2. (a) Ally scratches her head. (b) She leans forward to the man. (c) She stands with arms akimbo. (d) She folds her arms and leans backward.
4. Conclusion and Future Work

Our system provides a fine-tuning mechanism for the subtlety of posture and gesture behaviour that results from personality. This can benefit the design process and the construction of dramatic effects. Our P & E engine has great potential in animation production, although facial expressions of a story character are not included in the scope of this research. The system is still limited by the variety of motions and path goals. Our work can be extended to improve reacting believability, by coping with a complex internal set of motivations, and to serve the interactive entertainment industry. In the emotion module, the system can be re-scaled by appending a cartoon-type FLC and defining rules for mixing emotional combinations (e.g. a happy-and-surprise FLC). We continue working on the interaction of synthetic characters: how does an agent perceive the body language of the other interlocutor and respond to him/her based on his/her own personality? There are also issues related to status, power and deception, which commonly occur in stories; these should be evaluated as well to support narrative simulations. Further formal evaluations will have to be carried out.

References
1. Gratch, J., Rickel, J., Andre, E., et al., Creating Interactive Virtual Humans: Some Assembly Required. IEEE Intelligent Systems, 2002: p. 54-63.
2. Stone, M., DeCarlo, D., et al., Speaking with Hands: Creating Animated Conversational Characters from Recordings of Human Performance. In SIGGRAPH 2004: ACM Transactions on Graphics.
3. Su, W., Pham, B., and Wardhani, A., High-level Control Posture of Story Characters Based on Personality and Emotion. In IE05, The Second Australasian Interactive Entertainment Conference. 2005. Sydney: ACM Digital Library.
4. De Raad, B., The Big Five Personality Factors, The Psycholexical Approach to Personality. 2000: Gottingen, Netherlands: Hogrefe & Huber.
5. Lamb, W. and Watson, E., Body Code: The Meaning in Movement. 1979, Princeton, New Jersey: Princeton Book Company.
6. Bull, P.E., Posture and Gesture. International Series in Experimental Social Psychology, Vol. 16. 1987: Pergamon Press.
7. Loyall, A.B. and Bates, J., Personality-Rich Believable Agents That Use Language. In the First International Conference on Autonomous Agents. 1997. Marina del Rey, California.
Simulated Artificial Human Vision: The Effects of Spatial Resolution and Frame Rate on Mobility
Jason Dowling a,1, Wageeh Boles a and Anthony Maeder b
a Queensland University of Technology, Brisbane, Australia
b E-Health Research Centre, CSIRO ICT Centre, Brisbane, Australia

Abstract. Electrical stimulation of the human visual system can result in the perception of blobs of light, known as phosphenes. Artificial Human Vision (AHV, or visual prosthesis) systems use this method to provide a visual substitute for the blind. This paper reports on our experiments involving normally sighted participants using a portable AHV simulation. A Virtual Reality Head Mounted Display is used to present the phosphene simulation, and custom software converts images captured from a head mounted USB camera into a DirectX based phosphene simulation. The effects of frame rate (1, 2 and 4 FPS) and phosphene spatial resolution (16x12 and 32x24) on participants' Percentage of Preferred Walking Speed (PPWS) and mobility errors were assessed during repeated trials on an artificial indoor mobility course. Results indicate that spatial resolution is a significant factor in reducing contact with obstacles and in following a path without veering; however, the phosphene display frame rate is a better predictor of a person's preferred walking speed. These findings support the development of an adaptive display which could provide a faster display with reduced spatial resolution when a person is walking comfortably, and a slower display with higher resolution when a person has stopped moving.

Keywords. visual prosthesis, blind mobility, artificial human vision, image processing
1. Introduction

1.1. Blind mobility

An important usability requirement for an Artificial Human Vision (AHV) system is the ability to move safely and confidently. Widely used mobility measures for the blind and visually impaired are the Percentage of Preferred Walking Speed (PPWS) and a count of mobility incidents, generally defined as contacts with obstacles (for example, [6] and [7]). PPWS requires a measure of a person's Preferred Walking Speed (PWS), which is generally obtained by an instructor guiding a participant over a known distance and dividing the distance by the time taken. Walking efficiency can then be calculated

1 Correspondence to: Jason Dowling, S1102, 2 George St, Brisbane, Queensland, Australia 4001. Tel.: +61 738 641 608; Fax: +61 738 641 516; E-mail: jason.dowling@qut.edu.au
Figure 1. Phosphenes displayed for grey level pixels in reduced resolution images
as a percentage of the PWS [7]: PPWS = (SMC / PWS) × 100, where SMC = distance / time. The PPWS can be used as a between-participants measure to compare different walking speeds, in addition to assessing mobility changes in a single participant.

1.2. Artificial Human Vision (AHV)

AHV involves the delivery of electrical impulses to a component of the visual pathway, where they may be perceived as phosphenes, or points of light. Currently four locations for stimulation are being investigated: behind the retina (subretinal), in front of the retina (epiretinal), the optic nerve and the visual cortex (using intra and surface electrodes) [3]. A typical AHV system involves a head-mounted camera, an image processing unit, a transmitter/receiver, a stimulator unit and an electrode array. The number of perceived phosphenes is constrained by the number of electrodes; therefore image processing techniques are required to reduce the spatial resolution of captured images. There are also limits to the rate at which phosphenes can be presented: for example, the only commercially available cortical device is limited to one frame per second (FPS) (Dobelle (2000)). Due to the difficulty in obtaining experimental participants with an implanted AHV device, a number of simulation studies have been conducted with normally sighted subjects, for example [1], [2], [4], [5] and [8]. However, there is little published research on the effects of image processing on AHV mobility performance. A focus of current AHV research is to increase the number of implantable electrodes and thereby the perceived spatial resolution; however, the effect of frame rate on mobility for an AHV display has not been explored. The current study investigates the effect of display frame rate (1, 2 and 4 FPS) and spatial resolution (32x24 and 16x12 phosphenes) on the frequency of mobility errors and on PPWS, measured on an indoor artificial mobility course.
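For concreteness, the PPWS calculation above amounts to the following (the example numbers are invented):

```python
def ppws(course_distance_m, course_time_s, preferred_walking_speed_ms):
    """Percentage of Preferred Walking Speed: speed on the mobility course
    (SMC = distance / time) expressed as a percentage of the PWS."""
    smc = course_distance_m / course_time_s
    return smc / preferred_walking_speed_ms * 100

# A participant with a guided PWS of 1.2 m/s who covers the 30 m course
# in 60 s walks at 0.5 m/s, i.e. about 42% of their preferred speed.
print(round(ppws(30, 60, 1.2), 1))   # 41.7
```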
2. Method

2.1. Simulation Hardware

An i-O Display Systems i-glasses PC/SVGA Head Mounted Display (HMD) was used in this study, powered from an external lithium polymer battery. The HMD screen was 25 mm from the wearer's eyes. A Swann Netmate USB camera was attached, at eye level, to the front of the HMD. This camera was powered from the USB port of a Toshiba Tecra laptop (1.6 GHz Centrino processor). To block out external light, a custom shroud made from block-out curtain material was attached to the HMD.
2.2. Simulation Software The main requirement for our AHV simulation software was to convert input from the camera into an on-screen phosphene display. Our simulation reduces the resolution of captured images from 160x120 RGB colour to 32x24 or 16x12 eight grey-level simulated phosphenes. Our simulation, written in Microsoft Visual C++ 6.0, uses the Microsoft Video for Windows library to capture incoming video images. These images are subsampled (using the mean grey level of contributing pixels) to a lower resolution image, which is then converted to 8 grey levels. To simulate a perceived electrode response the low resolution image is displayed as a phosphene array using the DirectDraw component of Microsoft DirectX. Figure 1 shows the mapping between image grey levels and the different phosphene representations. Each phosphene was generated from an original 40 pixel wide circle, filled with the matching grey level, and blurred with a Gaussian filter (r=10). Examples of the simulation display are shown in Figures 2 to 4.
Figure 2. Original 160x120 pixel captured image
Figure 3. Original image reduced to 32x24 phosphenes
Figure 4. Original image reduced to 16x12 phosphenes
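The image reduction described in Section 2.2 (mean-based subsampling followed by quantisation to 8 grey levels) can be sketched in a few lines; this is our own illustration, not the authors' C++/DirectX code:

```python
import numpy as np

def to_phosphene_grid(gray_image, out_h=24, out_w=32, levels=8):
    """Subsample a grey image (e.g. 120x160) to out_h x out_w cells using the
    mean grey level of each cell, then quantise to the given number of levels."""
    h, w = gray_image.shape
    bh, bw = h // out_h, w // out_w
    cells = gray_image[:out_h * bh, :out_w * bw].reshape(out_h, bh, out_w, bw)
    means = cells.mean(axis=(1, 3))
    return np.floor(means / 256.0 * levels).clip(0, levels - 1).astype(int)

frame = np.random.randint(0, 256, (120, 160))   # stand-in for a camera frame
print(to_phosphene_grid(frame).shape)            # (24, 32)
```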
2.3. Mobility course To assess mobility performance, an indoor mobility course (Figure 5) was constructed within a 30x40m laboratory at the School of Civil Engineering, Queensland University of Technology. The course consisted of a winding path, approximately 1m wide and 30m long. Path boundaries were marked with 48mm black duct tape. The floor of the course consisted of concrete (generally light grey with a 3m2 section painted white from a previous study). Grey office partitions were placed on either side of the path to reduce visual clutter and to prevent participants from confusing the neighboring path with the current path. Eight obstacles, painted in different shades of matt grey, were placed through the course (see Figure 6). Two of the obstacles were suspended from the ceiling to a height of 1.2 m. All obstacles along the path were made from empty packing boxes (450x410x300mm). A straight, unobstructed, 10m section of the course was used to measure the Preferred Walking Speed (PWS) of each participant. 2.4. Participants Ten female and 50 male volunteers were recruited from staff and students at different faculties at the Queensland University of Technology. Four participants were aged between 0-20 years; 32 were aged between 20-30; 12 were between 30-40; 9 were between 40-50; 2 were between 50-60; and 1 participant was aged over 60 years. All participants had normal or corrected to normal vision.
Figure 5. Map of the artificial mobility course built for this study
Figure 6. Different types of grey shading on each obstacle shown in Figure 5
2.5. Questionnaire

Details of gender, age and whether the participant was wearing glasses or contact lenses were collected from a questionnaire. In addition, participants were asked how many times (if any) they had used an immersive Virtual Reality environment.

2.6. Procedure

Each participant was randomly allocated to a frame rate and display type level and commenced their first trial from one of the two course start locations (marked 'A' or 'B' in Figure 5). One hour was allocated for testing each individual. Study participants were met in a corridor outside the lab, asked to read a consent sheet and to fill out the questionnaire. The simulation headgear was then explained and fitted before the participant was led into the lab. Each participant was then allowed two minutes to familiarise themselves with the display. The guided PWS was then recorded over 10 m. After this the participant was led to the trial starting location ('A' or 'B') and the first mobility trial was conducted. Participants were offered a short break before the second trial. Finally, the PWS was measured for a second time. During the mobility trials, a single experimenter recorded walking speed, obstacle contacts, the number of times participants were told they were walking backwards, and the number of times participants veered outside the path boundary.
3. Results

A summary of the mobility results is provided in Table 1. No participants reported nausea during the experiment, although two required a break between trials. The initial and final measurements of Preferred Walking Speed (PWS) were significantly correlated (r=0.67, p, abc should be considered before pqr. This therefore shows that the sequence needs to be scanned in a chronological way.

2.2. Related Works

Research has been dedicated to the automated discovery of periodic patterns in time-series data [9,10,11]. But as the search is focused on periodic patterns only, no interaction
is proposed with acyclic pattern discovery. Hence, although they offer interesting descriptions of time-series data, they cannot be used to solve the combinatorial problem presented in the previous paragraph. In our approach, on the other hand, the periodic pattern paradigm, closely articulated with the acyclic pattern mining process, ensures the compactness of the results. A simpler solution to the combinatorial problem consists in forbidding overlapping between patterns [1]. But this heuristic presupposes that time-series data are segmented into a one-dimensional series of successive segments. Not all time-series data fulfill this requirement: musical sequences, in particular, may sometimes be composed of a multi-levelled hierarchy of structures (as in Figure 4, for instance).

2.3. General and Specific Cycles

The integration of the concept of cyclic pattern into the multidimensional musical space requires a generalization of the specificity relations, defined in the previous section, to cyclic patterns. A cyclic pattern C is considered more specific than another cyclic pattern D when the sequence of descriptions of pattern D is included in the sequence of descriptions of pattern C. As for acyclic patterns, in order to avoid combinatorial explosion and to improve the compactness of the representation, cyclic patterns need to be filtered using the closure heuristic, i.e., only closed cyclic patterns should be selected. As seen previously, the different possible patterns are considered in a chronological way, and new general patterns are constructed through generalization, and specific patterns through specialization [8].
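As a toy illustration of the specificity relation just defined (this is our own reading of "the sequence of description of pattern D is included in the sequence of description of pattern C"), a contiguous-inclusion check over description sequences, allowing rotations for cyclic patterns, could be written as:

```python
def is_more_specific(c_desc, d_desc, cyclic=True):
    """Return True if pattern C (description sequence c_desc) is more
    specific than pattern D: D's description occurs contiguously in C,
    considering every rotation of C when the patterns are cyclic."""
    n, m = len(c_desc), len(d_desc)
    if m > n:
        return False
    doubled = list(c_desc) + (list(c_desc) if cyclic else [])
    return any(doubled[i:i + m] == list(d_desc)
               for i in range(n if cyclic else n - m + 1))

# Melodic-interval descriptions (illustrative values only).
C = [+1, +1, 0, -1]          # more specific cyclic pattern
D = [-1, +1]                 # included once C is rotated
print(is_more_specific(C, D))   # True
```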
3. Results

This model has been written in Common Lisp and is now being rewritten in C. It has been tested with musical sequences taken from several musical genres (classical music, pop, jazz, etc.) [8]. Thanks to the complete management of combinatorial redundancy presented throughout the paper, the model shows significant improvement with respect to previous research, in which combinatorial explosion was avoided through fuzzy and global averaging [8]. Figure 4 shows, for instance, a comparative analysis of the Ode to Joy: while the most recent approach [12] globally segments the piece into unlabeled segments (that have to be labeled in a second step of the analysis), our model immediately offers a detailed analysis, including long melodico-rhythmic phrases and shorter, more frequent motives. Currently, the model reaches an average precision of 70%.
melo: +1 +1 0 -1 -1 -1 -1 +1 +1 0 -1 0 +1
rhyt: 2 1 1 1 1 1 1 2 1 1 1.5 .5 2
melo: +1 +1 0 -1 -1 -1 -1 +1 +1 -1 -1 0
rhyt: 2 1 1 1 1 1 1 2 1 1 1.5 .5
Figure 4. Analysis of the Ode to Joy, from Beethoven's Ninth Symphony: segmentation curve given by [12] (above), and patterns found by our model (below), described by their melodic intervals (melo) and rhythmic values (rhyt).
4. Future Works

The approach described in this paper is limited to the detection of repeated monodic patterns. Music in general is polyphonic: simultaneous notes form chords and parallel voices. We are currently developing algorithms that construct, from polyphonies, syntagmatic chains representing distinct monodic streams. These chains may be intertwined, forming complex graphs along which the pattern discovery algorithm will be applied. The automated discovery of repeated patterns can be applied to the automated indexing of musical content in music databases. This approach may later be generalized to audio databases, once robust and general tools for the automated transcription of musical sound into symbolic scores become available. A new kind of similarity distance between musical pieces may be defined, based on these pattern descriptions, offering new ways of browsing inside a music database using a pattern-based similarity distance. The pattern mining framework proposed in this paper can be applied to the analysis of any type of discrete time series (texts, multimedia documents, genomes, etc.). Indeed, the robustness of the results is the consequence of a thorough study of the general characteristics of the pattern mining paradigm, leading to a close articulation between acyclic and cyclic pattern mining; these performances are therefore not restricted to the musical domain. The development of a domain-independent version of the algorithm is under investigation.

Acknowledgements

This work has been supported by the Academy of Finland (Project No. 102253).

References
[1] Y. Tanaka, K. Iwamoto and K. Uehara, Discovery of Time-Series Motif from Multi-Dimensional Data Based on MDL Principle, Machine Learning, 58 (2005), 269–300.
[2] J. Lin, E. Keogh, S. Lonardi and P. Patel, Finding Motifs in Time Series, Intl. Conf. Knowledge Discovery and Data Mining, 2002.
[3] N. Ruwet, Methods of Analysis in Musicology, Music Analysis 6 (1987), 11–36.
[4] F. Lerdahl and R. Jackendoff, A Generative Theory of Tonal Music, The M.I.T. Press, 1983.
[5] M. Zaki, Efficient algorithms for mining closed itemsets and their lattice structure, IEEE Transactions on Knowledge and Data Engineering, 17 (2005), 462–478.
[6] R. Agrawal and R. Srikant, Mining Sequential Patterns, Intl. Conf. Data Engineering, 1995.
[7] B. Ganter and R. Wille, Formal Concept Analysis: Mathematical Foundations, Springer-Verlag, 1999.
[8] O. Lartillot, Multi-Dimensional Motivic Pattern Extension Founded on Adaptive Redundancy Filtering, Journal of New Music Research 34 (2005), 375–393.
[9] J. Han, G. Dong and Y. Yin, Efficient Mining of Partial Periodic Patterns in Time Series Database, Intl. Conf. Data Engineering, 1999.
[10] S. Ma and J. Hellerstein, Mining partially periodic event patterns with unknown periods, Intl. Conf. Data Engineering, 2001.
[11] J. Yang, W. Wang and P.S. Yu, InfoMiner+: Mining Partial Periodic Patterns with Gap Penalties, IEEE Intl. Conf. Data Mining, 2002.
[12] E. Cambouropoulos and C. Tsougras, Influence of Musical Similarity on Melodic Segmentation: Representations and Algorithms, Intl. Conf. Sound and Music Computing, 2004.
Weblog-based Egocentric Mapping with Track Backs on Spatial Relations
Masatoshi Arikawa, Toru Hayashi and Kaoru Sezaki
Center for Spatial Information Science, the University of Tokyo

Abstract. This paper proposes a new framework for creating maps through the collaborative posting of articles that include spatial information, using the weblog's communication methods such as track back pings and RSS retrieval. In daily life, each person uses real space differently; therefore, maps, which are visualizations of spatial descriptions, should essentially be created and customized for each person. To realize this concept of a map, we adopt new methods in which each weblog article has a map of a place the user knows in detail. The methods enable users first to create a variety of personal maps from the articles they write, and then to automatically generate links connecting articles across personal maps through places the maps have in common, so that users can exchange information about their interests using track back pings and RSS retrieval. We call the proposed framework, which manages both the weblog and mapping services, weblog-based egocentric mapping, or blog ego map. We also discuss the process and configuration of our prototype system, which develops a wide-area map by collecting multiple egocentric local maps, departing from the traditional map-making process.

Keywords. Weblog, Mapping, Egocentrism, Track back, RSS, Spatial relations
1. Introduction With the spread of Internet, the enormous information space has been realized these days. Many documents such as news, company information and public organization information are being circulated on the Web. Now, there is a trend that a huge number of people post his or her personal writing on the Web. The representative software being used is the weblog which was extended from a bulletin board system and is mainly used for writing diary. Accordingly, as the content personalization further proceeds on the Web, a large amount of writings about personal experiences and ideas will be available on the Web. Considering the present situation in the weblog, almost all articles end up only posted without being read and received track back by other users. Even though an article is not so valuable as one unit, it may become worthy by linking relevant articles using track back or RSS. We can say the structure of weblog is immature yet. In the near future, information from individuals will increase on the Web, and therefore it is firstly important that a weblog reader can efficiently look for information from masses of information. Secondly, to make weblog more useful for both writers and readers, there will be more demands for improving weblog by connecting relevant articles written by different users. Among the methods to achieve them, the simplest way is that all articles are collected and sorted by time when an article is posted. By listing articles sorted by the time, weblog increases the likelihood by which users can easily
search other related articles. In this paper, we focus on information about location included in weblog to collect and connect articles using several methods. It is true that all articles do not necessarily have associations with location, but introducing an idea of location to weblog enhances usability of weblog. The advantages of the framework that articles are attached to maps on Internet are to produce wide area map intended for article viewers, and to enrich personal spatial descriptions for article writers.
2. Personal Maps and Their Collaboration
As the weblog is based on the concept of personalization on the Web, flickr [1] has a mechanism similar to the weblog. The major difference is their primal element: the primal element of flickr is a photograph, as a substitute for an article in weblogs. In flickr, a user can also use track back and RSS functions between photographs, like articles of a weblog. Furthermore, flickr adopts a communicative method that puts an emphasis on direct links between users. Users can have their own album servers and use inter-links over the distributed album servers of flickr. Livedoor Map [2] has been providing a map service for using location information in weblog articles, and may be the best weblog-based map service in Japan so far. The map server of Livedoor Map is centralized and helps users link an article to a position on a map (Fig. 1). Articles are connected only indirectly, via the central map server. On the other hand, a better framework would let all users have their own personal map servers that collaborate closely with their weblogs. In this paper we call this ideal framework of integrating weblog and map services a weblog-based egocentric mapping service, or blog ego mapping.
Figure 1: Centralized and distributed digital content sharing servers connected with the weblog (flickr: inter-links over distributed album servers; Livedoor Map: indirect links through the central server)
Our proposed blog ego mapping introduces maps into all weblog. Users of blog ego mapping can create their personalized or favorite maps, called ego maps, of his or her interests. Then, the users post articles with map symbols which are labeled on a map. Articles of a specific local area are gathered with a self-made map of the place. Therefore, besides all articles are collected on a local map, several maps of the same place could be created and divided in different weblog by themes of articles on itself according to users’ preference. Therefore, even if maps are designed for the same place, they show different contents from totally different views. Furthermore, we compare our blog ego mapping with a former weblog mapping, especially Livedoor Map Weblog. When one map in a center server is adopted like Livedoor Map weblog, it prevents users from describing rich spatial information because users have no other choice, and
the available map may have too disrelated information and uniform descriptions. In a blog ego mapping, a target place of map is based on a user’s favorite arbitrary scale. As observed earlier, one shared map in the former weblog mapping service limits the potentiality for spatial description although weblog originally specialize in personal information.
3. Capability of Representing Places by Object Identifiers
An existing map can be drawn using simple symbols such as points, lines and surfaces. In former weblog mapping services, the place description of an article on a map is represented by a point value of latitude and longitude, so an article has to be represented as a point symbol on a map. In addition, when several articles are closely related in content, the related articles appear as separate points scattered around one point on the map. A further problem is that the point values of many articles are almost identical, making it difficult to tell adjacent places apart. To solve these problems, we introduce a new method of managing the relationship between a map and an article in blog ego mapping. Instead of uniting an article with a map by the values of longitude and latitude, the symbols that form part of the map, such as lines and surfaces, have their own object identifier, or o-id, which combines an article with a map. An o-id is equivalent to an article-id consisting of a symbol-type attribute, a map-id and an extent on the map. With o-ids, managing spatial articles suits human spatial recognition better than latitude and longitude, because symbols on a map directly denote features or events in the real world. For example, when a user wants to add information to an article that has already been posted, the user posts the information to the object on the map, not to a latitude and longitude, and the information thus gathers under the object.
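To make the o-id idea concrete, the following sketch (Python, with illustrative names; none of this is taken from the authors' implementation) shows how an article could be attached to a map symbol through an o-id instead of a bare latitude/longitude pair:

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class ObjectId:
    # o-id: a symbol-type attribute, a map-id and an extent on that map
    symbol_type: str                                  # "point", "line" or "surface"
    map_id: str                                       # the ego map the symbol belongs to
    extent: Tuple[float, float, float, float]         # bounding box of the symbol on the map

@dataclass
class Article:
    article_id: str
    title: str
    body: str
    oid: ObjectId                                     # attached to a map object, not to coordinates

# a follow-up posting about the same real-world feature reuses the same o-id,
# so the new information gathers under that object
station = ObjectId("point", "my_ego_map", (139.76, 35.68, 139.77, 35.69))
first = Article("a1", "Ticket gate moved", "...", station)
update = Article("a2", "New gate opened", "...", station)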
4. Place and Information Sharing with Setting Ports between Personal Maps The blog ego mapping enables users to freely generate a link between two distributed personal maps or ego maps which have a registered common place. We call the function of connecting ego maps place and information sharing or pi-sharing. pi-sharing enables a map to catch up changes of the real world in short time and create collaborative maps by users. As a result of defining a common place for pi-sharing between multiple ego maps, articles of the common place will be linked between ego maps one another. To realize the mechanism of pi-sharing, we use both sending a track back ping and reading RSS of the weblog. When a user A wants to have a pi-sharing with another user B’s article of the weblog in Fig 2(a), the user A reads articles as a RSS of the user B’s weblog article specifying the position of user A’s ego map at first. Then, the user A’s ego map includes map symbols and its articles’ contents from the user B’s weblog in the user A’s ego map in Fig 2(b). Then, if the user A adds an article to the article after reading RSS, the user A’s article automatically sends a track back ping to the original user B’s article referred by the user A in Fig 2(c). Then considering blog ego map as map space, the mechanism of indirect track back in map space would be as follows (Fig. 2 (d)).
In Fig. 2 (c), the user A sends track back ping to the user B’s article bp. Then the user B’s ego map becomes map space and lists article URL urlk of the user A because article URL urlk has been imported the user B’s ego map. The user C also registers a pi-sharing to read the user B’s ego map in his or her own ego map. In this case, because the RSS of the user B’ ego map includes the user A’s article URL urlk, the user C’s ego map knows the urlk. Then the user C’s ego map lists the urlk as an indirect track back. Next, the user C notices the newly updated urlk. If the user C wants to see the content of urlk precisely, the user C is able to reach the user A’s ego map through his or her own ego map. The user C’s ego map does not directly receive a track back ping from the user A, but the user C is dynamically related with the user A’s ego map by indirect track back. This is one of the differences between the existing weblog and our blog ego map. As a conclusion, ego maps read by other ego maps play a role of map space as a core to connect ego maps in which users have the same interests. By joining a common place on ego maps by many users, the entire networks of ego maps are going to be complex. In pi-sharing, a blog ego mapping has a mechanism that the first publisher of an article automatically gathers all related articles referring the article. That results in the weblog development by others spontaneously. Referred articles by RSS gather all referring articles’ information. If an ego map with valuable articles is referred by many other users, the ego map collects related articles of the many other users by many track back pings.
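As a rough illustration of the pi-sharing mechanism just described, the toy model below (Python; class and method names are invented for the example and are not part of the authors' system) mimics how reading RSS and sending track back pings lets an article reach ego maps that are only indirectly related:

from dataclasses import dataclass, field
from typing import Dict, List, Set

@dataclass
class EgoMap:
    owner: str
    articles: Dict[str, str] = field(default_factory=dict)    # article URL -> content
    listed: Set[str] = field(default_factory=set)             # URLs known to this map
    readers: List["EgoMap"] = field(default_factory=list)     # maps that read this map's RSS

    def read_rss(self, other: "EgoMap") -> None:
        # pi-sharing: register a common place and import the other map's articles
        self.listed |= set(other.articles) | other.listed
        other.readers.append(self)

    def post(self, url: str, content: str, trackback_to: "EgoMap" = None) -> None:
        self.articles[url] = content
        if trackback_to is not None:
            # direct track back: the referenced map lists the new URL;
            # its readers pick the URL up as an indirect track back
            trackback_to.listed.add(url)
            for reader in trackback_to.readers:
                reader.listed.add(url)

# user C reads user B's map; user A sends a track back to B; C learns of A's article indirectly
map_a, map_b, map_c = EgoMap("A"), EgoMap("B"), EgoMap("C")
map_c.read_rss(map_b)
map_a.post("http://a.example/article1", "about the shared place", trackback_to=map_b)
assert "http://a.example/article1" in map_c.listed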
(a) Generating a port of place sharing by User A
(b) Reading RSS from User B's ego map
Figure 2. Information sharing between two ego maps using RSS and track back (part 1)
(c) Track back by an additional article
(d) Indirect track back among blog ego maps Figure 2. Information sharing between two ego maps using RSS and track back (part 2)
5. Prototype system
We have implemented a prototype system, called blog ego map, based on the above framework as a plug-in module of Movable Type [7], one of the most popular weblog publishing platforms. Fig. 3 shows the visual interface of the top window of blog ego map. If a symbol on a map is selected, a pop-up window appears with its title, content and track back URL. The upper right list (#5 in Fig. 3) of the top window shows the personal maps this weblog owner has set. The upper-middle list (#6 in Fig. 3) shows the sharing target places of other users' blog ego maps. The two lower lists (#7 and #8 in Fig. 3) show the track back destinations to which track back pings were sent from this weblog. Alongside blog ego map, the standard character-based weblog keeps the same information as an article or a map (equivalent to a "category" in Movable Type) whenever a user of blog ego map creates a new personal map or a new article on his or her own map. As usage examples, we assume that blog ego map serves not only personal use, such as travel records or remarks on restaurants with their locations, but also social use, such as gathering and sharing information on dangerous spots or damage from natural disasters, with their locations, through direct links between ego maps.
Figure 3. Top window of our proposed blog ego map
6. Conclusions and Future Works
This paper proposed a new framework for creating collaborative maps in a weblog system. With this framework, we improved the map-making method by visualizing and joining personal knowledge that has so far been ignored. In addition, we presented a new communication method for location-based weblogs that applies the weblog's track back ping and RSS to make weblog articles more valuable. As future work, we will explore location-based communication that extends the proposed weblog-based egocentric mapping services to mobile phones in a ubiquitous computing environment. By combining the Web space with the real space, this framework could offer new functions that make users' daily lives more convenient. If a user of weblog-based egocentric mapping can post an article whose position is automatically obtained from a GPS receiver embedded in a mobile phone, then other users who share the same common place are automatically informed of the new article.
Acknowledgements
This research was supported in part by a Grant-in-Aid for Scientific Research on Priority Area "Informatics Studies for the Foundation of IT Evolution" from the Ministry of Education, Culture, Sports, Science and Technology of Japan.
References
[1] flickr.
[2] Livedoor Map Blog.
[3] Hatena Diary.
[4] Ba-log.
[5] Tatsuhiko Miyagawa, Naoya Ito, BLOG HACKS, O'Reilly Japan, 2004.
[6] Global Base Project.
[7] Movable Type.
Spatial Label Sharing and Development among Photographs
Hideyuki Fujita and Masatoshi Arikawa
Center for Spatial Information Science, the University of Tokyo
Abstract. This paper proposes a new framework for sharing text labels among photographs. Digital photographs are usually organized by text labeling in order to search and browse a collection; however, labeling many photographs costs users much effort. Meanwhile, many kinds of spatial sensors, including GPS, have become popular, and we assume there will be many photographs with spatial metadata such as the geographic coordinates of the camera's position and direction. We therefore propose a label sharing system for such photographs. The system (i) stores label data placed on photographs by users, and (ii) computes the geographic locations of the real-world objects pointed to by the stored labels. When a photograph requires label data, the system (iii) selects and shares all label data judged to be visible in the photograph by considering the photograph's field of view (FOV). In this framework, label texts are placed at appropriate positions on photographs, which makes photographs clickable when the label data have URLs.
Keywords. Spatial photographs, Spatial label, Mapping, Data sharing, Data development
1. Introduction Digital photos are usually organized by text labeling in order to search and browse a collection. However, labeling many photos costs users much effort. On the other hand, our target photos are enhanced with spatial metadata such as geographic coordinates where they were taken and the directions toward where they focused on. These metadata are generated by spatial sensors such as GPS and gyrocompasses, and embedded in headers of image files in Exif format, which is a widely used standard for storage of information about cameras, images and shooting conditions within JPEG and TIFF image files. The aim of this paper is sharing text labels among such photos. A simple method is propagating label data based on geographic proximity of positions where photos were taken. However, even if two photos were taken from the same position, they may show different directions, and quite different scenes, and a text label for one photo may not be appropriate to the other. The reason for the problem is that used geographic coordinates are positions of cameras, and they are not positions of objects taken in photos and pointed by labels. We therefore propagate labels based on whether their pointing objects are seen in photos. For that purpose, we compute geographic locations of objects pointed by those labels at first, and give those labels to each photo based on the spatial relationships between geographic locations of labels and field of view of the photo. We have implemented this process as label sharing system. The system shares label data by the following process: 1. Storing label data submitted by users 2. Computing geographic locations of objects in the real world pointed by labels
When photos require label data:
3. Retrieving the set of label data seen in those photos, and giving the set to them.
Label data acquired from the system are placed on photos, which enables us to access Web pages by clicking labels on photos when the labels have URLs. The remainder of this paper is organized as follows. The target data are defined in Section 2. The label propagating method and the label geocoding method are explained in Sections 3 and 4. The implemented prototype system is introduced in Section 5, and some concluding remarks are given in Section 6.
2. Spatial Photograph and Spatial Label
We call the target data of our research a spatial photograph: a photo enhanced with the geographic coordinates of the camera's location, its direction, and its view angle in the real world. A spatial photograph is defined as

p = (image, fov, L) : spatial photograph

where image is the photo image, fov is the spatial metadata of the photo, and L = {l1, l2, l3, ...} is a set of spatial label data li. A spatial label is an enhanced text label for the photo, defined later in this section. The fov of a spatial photo is defined as

fov = (S, V, θw, θh) : field of view

where S = (XS, YS, ZS) is the viewpoint, i.e. the position of the camera at the time the photo was taken; V = (XV, YV, ZV) is the view direction, i.e. the direction on which the photo is focused; and θw and θh are the view angles. A spatial label is an enhanced text label for spatial photos. In our implementation, when a user attaches a spatial label to a spatial photo, he or she specifies a point on the photo by clicking on it and inputs the label text. For example, a user labels a photo showing Tokyo Tower by clicking a point around Tokyo Tower on the photo and inputting "Tokyo Tower." A spatial label can also have the geographic coordinates of the location of the object it points to in the real world. A spatial label l is defined as

l = (text, url, M, m) : spatial label

where text is the label text, i.e. the name of the geographic object pointed to by the label, such as "Tokyo Tower"; url is a URL on the Web; M = (XM, YM, ZM) is the geographic coordinates of the pointed object; and m = (xm, ym) is the pixel coordinates of the pointed object on the photo. Null values are permitted for M or m. Figures 1 and 2 respectively show the geometric models of a spatial photo and a spatial label.
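A minimal data model corresponding to these definitions might look as follows (Python; field names mirror the notation above, everything else is an illustrative assumption):

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class SpatialLabel:
    text: str                                        # e.g. "Tokyo Tower"
    url: Optional[str] = None                        # optional link behind the label
    M: Optional[Tuple[float, float, float]] = None   # geographic location (X_M, Y_M, Z_M)
    m: Optional[Tuple[float, float]] = None          # pixel location (x_m, y_m) on the photo

@dataclass
class FieldOfView:
    S: Tuple[float, float, float]                    # viewpoint (X_S, Y_S, Z_S)
    V: Tuple[float, float, float]                    # view direction (X_V, Y_V, Z_V)
    theta_w: float                                   # horizontal view angle
    theta_h: float                                   # vertical view angle

@dataclass
class SpatialPhotograph:
    image: str                                       # path to the image data
    fov: FieldOfView
    L: List[SpatialLabel]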
Figure 1. Spatial Photograph
Figure 2. Spatial Label
3. Propagating and Mapping Spatial Label Data
To propagate spatial labels among spatial photos, we search for the set of labels seen in each photo. We judge that the object pointed to by a label is seen in a photo when the geographic coordinates of the spatial label are contained in the view angle of the spatial photo. For example, in Figure 3, spatial labels l2 and l3 are retrieved as appropriate.
Figure 3. Search for Spatial Labels in View Angle
Searched spatial labels are displayed on spatial photos. In order to display spatial labels having only geographic coordinates, we compute their pixel coordinates on the spatial photos as follows. The relationship between geographic coordinates Q = (X, Y, Z) and image coordinates q = (x, y) is expressed by the so-called collinearity equations [3]:

x = c · [a1(X − XS) + a4(Y − YS) + a7(Z − ZS)] / [a3(X − XS) + a6(Y − YS) + a9(Z − ZS)]   (1)

y = c · [a2(X − XS) + a5(Y − YS) + a8(Z − ZS)] / [a3(X − XS) + a6(Y − YS) + a9(Z − ZS)]   (2)

S = (XS, YS, ZS) expresses the coordinates of the viewpoint. a1, a2, ..., a9 are the elements of a rotation matrix which rotates the view direction to the Z-axis of the absolute coordinate system; they are computed from the values of the view direction. c is the focal length, computed from the view angle θw and the pixel width w of the photo image:

c = w / (2 tan(θw / 2))   (3)
For spatial photos, x and y in expressions (1) and (2) can be computed when X, Y and Z are given. By substituting the geographic location of a label, M = (XM, YM, ZM), for (X, Y, Z), the pixel coordinates of the label, m = (xm, ym), are therefore obtained as (x, y). Figure 4 shows an example of a photo on which multiple spatial labels are mapped when all values substituted into (1) and (2) are three-dimensional. When one of those values is two-dimensional, labels are displayed as in Figure 5.
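The mapping step can be sketched as follows (Python with NumPy). The rotation matrix R whose elements are a1...a9 is taken as given, since its construction from the view direction is not reproduced here; the row-major interpretation of a1...a9 is an assumption made so that the sketch is consistent with equations (6)-(12) below.

import numpy as np

def focal_length(theta_w: float, width_px: float) -> float:
    # equation (3): c = w / (2 tan(theta_w / 2))
    return width_px / (2.0 * np.tan(theta_w / 2.0))

def project_label(M, S, R, c):
    # Pixel coordinates of a geographic point M seen from viewpoint S.
    # R is the 3x3 matrix [[a1,a2,a3],[a4,a5,a6],[a7,a8,a9]] relating camera
    # coordinates (x, y, c) to the world offset (X-XS, Y-YS, Z-ZS).
    d = np.asarray(M, dtype=float) - np.asarray(S, dtype=float)
    cam = R.T @ d                        # proportional to (x, y, c)
    return c * cam[0] / cam[2], c * cam[1] / cam[2]

# example: a label 100 m in front of the camera projects near the image centre
R = np.eye(3)
c = focal_length(np.radians(60.0), 640)
x, y = project_label(M=(1.0, 2.0, 100.0), S=(0.0, 0.0, 0.0), R=R, c=c)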
Figure 4. Display Labels on a Spatial Photograph
Figure 5. Display Labels under a Spatial Photograph
4. Geocoding Spatial Label Data
When a user creates spatial label data by clicking and specifying positions on spatial photos, the propagating method of Section 3 cannot be applied to them, since geographic locations are not attached in that process. We therefore compute the geographic locations of such spatial labels by using multiple labels pointing to the same object in the real world.

4.1 Label Geocoding using Two Labels
For a spatial label placed on a spatial photo, the direction from the photo's viewpoint toward the label's geographic location is known. When two labels placed on different spatial photos point to the same geographic object, we compute the geographic coordinates of those labels as the crossing point of the two half lines from the photos' viewpoints toward the label's geographic location. In explaining the process, we use the following values:

l1 = (text1, url1, M1, m1), l2 = (text2, url2, M2, m2) : spatial labels
p1 = (image1, fov1, L1), l1 ∈ L1; p2 = (image2, fov2, L2), l2 ∈ L2 : spatial photographs
fov1 = (S1, V1, θw1, θh1), fov2 = (S2, V2, θw2, θh2) : fields of view

l1 and l2 are spatial labels on the different spatial photos p1 and p2. Only the labels' geographic locations M1 and M2 are unknown; the aim is to compute M1 (= M2). Figure 6 shows the spatial relationships among these values. In our implementation, labels are judged to point to the same object if their label texts are the same:

text1 = text2   (4)
Figure 6. Label Geocoding from Two Labels
We use the function "Direction", which takes the spatial metadata fov of a spatial photo and the pixel coordinates of a point a = (x, y) on the photo, and returns a direction vector Va along the line from the photo's viewpoint toward the geographic location of the point a:

Va = Direction(fov, a)   (5)

We now explain the function. Equations (1) and (2) can be transformed as follows:

X − XS = (Z − ZS) · (a1 x + a2 y + a3 c) / (a7 x + a8 y + a9 c)   (6)

Y − YS = (Z − ZS) · (a4 x + a5 y + a6 c) / (a7 x + a8 y + a9 c)   (7)

We define the functions f1(x, y), f2(x, y), f3(x, y) as follows:

f1(x, y) = (a1 x + a2 y + a3 c) / (a7 x + a8 y + a9 c)   (8)

f2(x, y) = (a4 x + a5 y + a6 c) / (a7 x + a8 y + a9 c)   (9)

f3(x, y) = Z − ZS   (10)

Using these functions, (6) and (7) become:

X − XS = f1(x, y) · f3(x, y)   (11)

Y − YS = f2(x, y) · f3(x, y)   (12)

By substituting the label's photo location m = (xm, ym) for (x, y) in (10), (11) and (12), the direction vector Vm from the viewpoint toward the geographic location of the object pointed to by the label is computed as follows.
Vm = (X − XS, Y − YS, Z − ZS)^T   (13)
   = (f1(xm, ym)·f3(xm, ym), f2(xm, ym)·f3(xm, ym), f3(xm, ym))^T   (14)

Using the function "Direction", the direction vector from the viewpoint of photo p1 toward the geographic location of label l1, and that from the viewpoint of photo p2 toward the geographic location of label l2, are represented as follows:

Vm1 = Direction(p1.fov1, l1.m1)   (15)
Vm2 = Direction(p2.fov2, l2.m2)   (16)

The labels' geographic locations M1 and M2 are represented as points on the half lines from viewpoint S1 along direction Vm1 and from S2 along Vm2:

M1 = S1 + k1 Vm1   (17)
M2 = S2 + k2 Vm2   (18)

where k1 and k2 are real numbers. M1 and M2 are then the crossing point of these half lines:

M1 = M2   (19)

However, it is unrealistic that two half lines cross exactly in three-dimensional space, which means the simultaneous equations (17), (18) and (19) cannot be solved. We therefore use the following equations (20), (21) and (22) instead:

M1' = S1 + k1' Vm1   (20)
M2' = S2 + k2' Vm2   (21)

(M1'.X, M1'.Y, (M1'.Z + M2'.Z)/2)^T = M1 = M2 = (M2'.X, M2'.Y, (M1'.Z + M2'.Z)/2)^T   (22)

M1' and M2' are the points on the half lines from viewpoint S1 along Vm1 and from S2 along Vm2 whose x and y coordinates are respectively equal. M1 and M2 are computed as the same point, whose z coordinate is the mid point of those of M1' and M2'.
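A small numerical sketch of this two-label geocoding (Python with NumPy; the linear solve for the shared x-y point is one way to realize equations (20)-(22), not code taken from the paper):

import numpy as np

def geocode_two_labels(S1, V1, S2, V2):
    # Points M1' = S1 + k1*V1 and M2' = S2 + k2*V2 with equal x and y coordinates;
    # the shared location takes the mid point of their z coordinates.
    S1, V1, S2, V2 = (np.asarray(a, dtype=float) for a in (S1, V1, S2, V2))
    A = np.array([[V1[0], -V2[0]],
                  [V1[1], -V2[1]]])
    k1, k2 = np.linalg.solve(A, S2[:2] - S1[:2])   # fails if the rays are parallel in x-y
    M1p, M2p = S1 + k1 * V1, S2 + k2 * V2
    return np.array([M1p[0], M1p[1], 0.5 * (M1p[2] + M2p[2])])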
4.2 Label Geocoding using Multiple Labels
When three or more labels on different spatial photos have the same label text, as shown in Figure 7, the half lines from the viewpoints toward the label's geographic location do not cross at a single point. We therefore compute the crossing points of all pairs of half lines by the method of Section 4.1, and compute the geographic location of the label as the mean of those crossing points. Specifically, we denote the label data pointing to the same object as l1, l2, l3, .... They all have the same label text:

li.text = lj.text,  1 ≤ i, j ≤ n, i ≠ j   (23)

Using Mij, the geographic location computed from labels li and lj, the geographic location shared by all the labels, l.M (= l1.M = l2.M = ...), is computed as

l.M = ( Σ_{1 ≤ i, j ≤ n, i ≠ j} Mij ) / nC2   (24)
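Reusing the pairwise routine sketched above, the averaging of equation (24) over all photo pairs can be written as:

from itertools import combinations
import numpy as np

def geocode_many_labels(views):
    # views: list of (S, V) pairs, one per photo whose label has the same text
    estimates = [geocode_two_labels(S1, V1, S2, V2)
                 for (S1, V1), (S2, V2) in combinations(views, 2)]
    return np.mean(estimates, axis=0)          # mean over the C(n, 2) pairwise estimates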
Figure 7. Label Geocoding using the Same Label on Multiple Photos
5. Prototype System
Figure 8 shows the graphical user interface of our prototype system. Spatial photos are mapped as arrows from their viewpoints toward their view directions. The system realizes the following functions: (a) reading and writing spatial metadata of spatial photos, (b) visualizing spatial photos on a map as photo vectors, (c) adding, removing, and displaying spatial labels on maps and spatial photos, (d) searching for photos by label texts, (e) giving a set of spatial labels to a selected spatial photo, (f) geocoding spatial labels, (g) propagating spatial label data among all spatial photos.
Figure 8. Prototype System
6. Conclusion
We proposed a new framework for sharing text labels among photos that have spatial metadata, such as the geographic coordinates of the camera's position and direction, and implemented a prototype system based on the framework. In our framework, label data for spatial photos are shared by computing their geographic locations. It is a framework that improves the reusability of label data by transforming them into spatial data.
References
[1] Fujita H., Arikawa M.: Photo Vector Field Model for Mapping Photographs, Maps, and Cyberspaces to Each Other, The 2005 Int'l Conf. on Active Media Technology (2005), 495-496.
[2] M. Naaman, A. Paepcke, and H. G. Molina: From Where To What: Metadata Sharing for Digital Photographs with Geographic Coordinates, 10th Int'l Conf. on Cooperative Information Systems (2003).
[3] Murai S.: Spatial Information Engineering (in Japanese), Japan Association of Surveyors (2001), 175.
Speech Recognition Interface Using Dual Channel Post-Filter Based on Transfer Function Estimation for Noise Reduction
Heungkyu Lee 1) and June Kim 2)
1) Dept. of Computer and Electronic Engineering, Korea Univ.
2) Dept. of Information and Communication Engineering, Seokyeong Univ.
Abstract. This paper proposes an accurate transfer function estimation method to improve the performance of a dual channel post-filter for noise reduction in a speech recognition interface. When Wiener-filter-based post-filtering is applied, the cross-power spectrum representing the cross-correlation of the input noises, which have high coherence, is computed in estimating the transfer function. A least mean square algorithm is applied to equalize the noise characteristics, under the assumption that the respective channels have different noise characteristics. This enhances the noise reduction rate because it allows the general assumption that the noise characteristics of the respective channels are the same to be satisfied. Experimental evaluation is done using real car noisy speech databases and shows better performance than previous work.
1. Introduction
The speech recognition interface is a user-friendly interface, especially as an active media technology. Recently, speech recognition has achieved stable, good accuracy under clean environmental conditions, but it is still not stable in background noise environments. One of the main causes of degraded recognition accuracy is additive noise under various environmental conditions. Various background noises, channel distortion, and the manner of utterance, such as loudness and distance talking, are factors that reduce the accuracy of speech recognition. Thus, many noise removal, speech enhancement [1][2], and feature compensation algorithms have been applied to enhance recognition performance. However, these do not satisfy users' requirements under varying background conditions. For example, a one-channel noise suppression algorithm that uses silence regions to estimate the noise distribution is weak against non-stationary noises. In addition, a two-channel noise suppression algorithm that assumes primary and reference inputs is weak in realistic conditions, because the noise source enters both channels almost equally, with only slightly different magnitudes [3]. This degrades performance because the two channels are modeled separately with speech and noise characteristics. In a diffuse noise field, two-channel noise suppression algorithms do not give satisfactory results [4]. Thus, a post-filtering algorithm is required to cope with these problems.
In previous works [3][4], the post-filtering methods of two-channel noise suppression algorithms assume that the noise signals captured by the two channels are uncorrelated and have the same noise power spectrum. However, this assumption does not hold in real car noise environments because the distance between the two microphones is small. Thus, a transfer function estimation method for robust post-filtering is proposed to remove musical noise cleanly. To do this, we perform the following two steps, as shown in Figure 1. When Wiener-filter-based post-filtering is applied, the cross-power spectrum representing the cross-correlation of the input noises, which have high coherence, is computed in estimating the transfer function. An LMS (Least Mean Square) algorithm is applied to equalize the noise characteristics, under the assumption that the respective channels have different noise characteristics. This enhances the noise reduction rate because it allows the general assumption that the noise characteristics of the respective channels are the same to be satisfied.
Figure 1. Noise reduction using post-filtering based on dual channel microphones
2. Dual Channel based Post-Filtering
As a post-filter, a Wiener filter can be applied, which has the following transfer function:

H(ω) = Φ_SY(ω) / Φ_YY(ω)   (1)

where Φ_SY is the cross-power spectrum between the clean speech signal and the noise-suppressed signal obtained by the delay-and-sum beam-former, and Φ_YY is the auto-power spectrum of the noise-suppressed beam-former output. If we assume that there is no correlation between the speech and noise signals, nor between the noise signals, equation (1) can be rewritten as:

H(ω) = Φ_SY / Φ_YY = Φ_SS / (Φ_SS + Φ_NN)   (2)

If there is no correlation between the noise signals, the power spectrum Φ_SS of the clean speech signal can be computed from the signals of the two microphones. Using this, the estimated frequency response of the Wiener filter is:

H(ω) = Φ_Y1Y2 / Φ_YY   (3)

For post-filtering to work properly, the following three assumptions should be satisfied. First, there is no correlation between the speech and noise signals. Second, there is no correlation between the noise-corrupted input signals of the two microphones. Finally, the power spectra of the noise-corrupted input signals of the two microphones are the same. However, specific conditions such as a real in-car environment do not satisfy these three assumptions. Figure 2 shows the coherence of the noise-corrupted input signals of the two microphones in a real car environment, computed over the first 10 frames. It shows that the correlation between the noise-corrupted input signals of the two microphones is very high.
Figure 2. Coherence of two microphones on the real car noise environment.
As shown in Figure 2, we cannot ignore the correlation of the two input signals, nor can we assume that their power spectra are the same. These factors are the main reasons for the degraded performance of post-filtering. Thus, we drop these assumptions and derive new equations that take these factors into account.
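Before turning to the proposed estimator, the conventional dual-channel Wiener post-filter of equation (3) can be sketched per frequency bin as follows (Python with NumPy). This is only an illustration: the recursive smoothing constant, the use of the real part of the cross-spectrum, and the gain clipping are common implementation choices, not details taken from the paper.

import numpy as np

def dual_channel_postfilter(Y1, Y2, Ybf, alpha=0.8):
    # Y1, Y2: STFTs of the two microphones, Ybf: STFT of the delay-and-sum
    # beamformer output; all arrays are (frames x bins), complex.
    phi_12 = np.zeros(Y1.shape[1])
    phi_yy = np.zeros(Y1.shape[1])
    out = np.empty_like(Ybf)
    for t in range(Y1.shape[0]):
        phi_12 = alpha * phi_12 + (1 - alpha) * np.real(Y1[t] * np.conj(Y2[t]))
        phi_yy = alpha * phi_yy + (1 - alpha) * np.abs(Ybf[t]) ** 2
        H = np.clip(phi_12 / np.maximum(phi_yy, 1e-12), 0.0, 1.0)   # equation (3)
        out[t] = H * Ybf[t]
    return out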
3. Transfer Function Estimation based Post-Filtering
To improve the performance of post-filtering, we consider the coherence of the two channel inputs [4]. Once the coherence is taken into account, non-correlation of the noisy inputs obtained from the two channel microphones is no longer assumed. Secondly, we consider that the power spectra of the respective input channels are different. The power spectra for transfer function estimation can thus be written as:

Φ_Y1Y1 = Φ_SS + Φ_N1N1 = Φ_SS + Φ_NN   (4)

Φ_N2N2 = W^T Φ_N1N1 = W^T Φ_NN   (5)

Φ_Y2Y2 = Φ_SS + Φ_N2N2 = Φ_SS + W^T Φ_NN   (6)

Φ_Y1Y2 = Φ_SS + Γ_N1N2 (W^T)^{1/2} Φ_NN   (7)

Φ_SSij = [ Φ_YiYj − (1/2) Γ_NiNj (W^T)^{1/2} (Φ_YiYi + Φ_YjYj) ] / [ 1 − Γ_NiNj (W^T)^{1/2} ]   (8)

where W is the weight function of the LMS adaptive filter that adjusts the parameters; it captures the coherence of the two channel input noises. Γ_N1N2 is the complex coherence function of the two channel inputs, given by

Γ_N1N2 = Φ_N1N2 / (Φ_N1N1 Φ_N2N2)^{1/2}   (9)

Thus, the proposed frequency response of the Wiener-filter-based post-filter using two channel microphones is

H_proposed = Φ_SSij / ( (1/2) Σ_{i'=1}^{2} Φ_Yi'Yi' )   (10)

The variable Φ_SSij is computed by equation (8). Meanwhile, it is known that real in-car noise is concentrated in the low-frequency region [6]. Thus, we first remove the low-frequency noise components using a high-pass filter with a cut-off frequency of 240 Hz.
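A per-bin sketch of the proposed gain, assuming the reconstruction of equations (8)-(10) given above (Python with NumPy; the exact placement of the weight W inside the coherence term follows that reconstruction and should be checked against the original paper):

import numpy as np

def noise_coherence(phi_n1n2, phi_n1n1, phi_n2n2):
    # equation (9): complex coherence of the two noise channels
    return phi_n1n2 / np.sqrt(np.maximum(phi_n1n1 * phi_n2n2, 1e-12))

def proposed_postfilter_gain(phi_y1y1, phi_y2y2, phi_y1y2, gamma, w):
    # gamma: noise coherence from equation (9); w: LMS weight equalising the
    # two channels' noise power spectra (W^T in the text)
    g = np.real(gamma) * np.sqrt(w)
    phi_ss = (np.real(phi_y1y2) - 0.5 * g * (phi_y1y1 + phi_y2y2)) / np.maximum(1.0 - g, 1e-3)
    H = phi_ss / np.maximum(0.5 * (phi_y1y1 + phi_y2y2), 1e-12)     # equation (10)
    return np.clip(H, 0.0, 1.0)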
4. Experimental Evaluations
For the experimental evaluation, the SiTEC Car 01 database is used. This database is composed of real car noisy speech utterances recorded with 8 channel microphones; the sampling rate is 8 kHz, 16-bit PCM. For testing, 1,000 utterances of 10 men and 10 women are used. We used just the 2 channels located at the sun-visor center and to its left; the distance between the two microphones is 30 cm. First, we applied the LMS algorithm and then analyzed Φ_N1N1 and Φ_N2N2 of the two channels. As shown in Figure 3, the noise power spectra of the two channels become similar after applying the LMS algorithm. This satisfies the general assumption of post-filtering that the noisy signals of the two channels have similar power spectra, once the noise components are adjusted by the LMS algorithm. From this simulation, the noise reduction rate NR can be computed by equation (11); the experimental result is shown in Table 1.
NR = 10 log( Σ_{t∈Tn} Ŝ(t)² / Σ_{t∈Tn} y_ch1(t)² )   (11)
Figure 3. Improved noise power spectrum using LMS algorithm.
where the numerator sums the noise sample data of the first and last 300 ms of the estimated speech signal. The denominator is computed from the channel 1 utterances in the SiTEC database, which were recorded with a head-set microphone in a real car and are nearly clean.

Table 1. NR and mean SNR comparison
            McCowan [4]    Proposed Method
NR          -13.33 dB      -18.43 dB
Mean SNR    13.63 dB       20.33 dB

Table 2. Experimental evaluation of speech recognition accuracy
              Baseline   Spectral Subtraction   DASB    McCowan [4]   Proposed Method
Accuracy (%)  88.94      86.99                  89.42   91.38         92.76
WER           -          -17.63                 4.34    22.06         34.54
Figure 4. Enhanced speech signal (middle: McCowan system output; bottom: proposed method).
As a second evaluation measure, the mean SNR is evaluated, as shown in Table 1 and Figure 4. The noise interval is computed by the same method as in equation (11), and the speech interval is computed from the remaining intervals, excluding the noise interval. From Table 1, we see that the mean SNR of the proposed method is 6.7 dB higher than that of McCowan's method. This shows that compensating the power spectra of the noises obtained from the two channels provides a robust noise reduction rate. In addition, the improved transfer function estimation provides an accurate estimate of the noise cross-correlation. For comparison, the post-filtering method of McCowan [4] is evaluated. High-pass filtering and delay-and-sum beam-forming (DASB) are applied equally to McCowan's method and the proposed method. McCowan's method applies the cross-correlation of the noises obtained from the two channels, and post-filtering. The proposed method also applies the cross-correlation of the noises obtained from the two channels and, in addition, compensates the power spectra of the noises of the two channels, which have different characteristics. From the experimental results, the proposed method obtains a noise reduction improvement of 5.1 dB. Finally, we evaluated speech recognition accuracy using the same SiTEC speech database, as shown in Table 2. The baseline is set to channel 4.
5. Summary and Conclusions
This paper proposed a transfer function estimation method that improves the performance of post-filtering by accurately estimating the noise cross-correlation for a robust speech interface. The method is based on two-channel post-filtering and is derived from the fact that the two input channels have different characteristics and magnitudes. We therefore modeled the complex coherence of the noise field more accurately and estimated the frequency response. The experimental evaluation showed a robust noise reduction rate and an SNR improvement.

Acknowledgment
This work was supported by grant No. A17-11-02 from the Korea Institute of Industrial Technology Evaluation & Planning Foundation.

References
[1] Israel C, "Analysis of two-channel generalized sidelobe canceller (GSC) with post-filtering," IEEE Trans. on Speech and Audio Processing, Vol. 11, No. 6, November 2003.
[2] Sharon G, and Israel C, "Speech enhancement based on the general transfer function GSC and postfiltering," IEEE Trans. on Speech and Audio Processing, Vol. 12, No. 6, November 2004.
[3] Zelinski, R., "A microphone array with adaptive post-filtering for noise reduction in reverberant rooms," ICASSP-88, Vol. 5, pp. 2578-2581, 1988.
[4] McCowan, I., Bourlard, H., "Microphone Array Post-Filter Based on Noise Field Coherence," IEEE Transactions on Speech and Audio Processing, Vol. 11, No. 6, pp. 709-716, 2003.
[5] Meyer, J., Simmer, K. U., "Multi-channel speech enhancement in a car environment using Wiener filtering and spectral subtraction," ICASSP-97, Vol. 2, pp. 1167-1170, 1997.
[6] Sungjoo, A., Hanseok K., "Background noise reduction via dual-channel scheme for speech recognition in vehicular environment," IEEE Transactions on Consumer Electronics, Vol. 51, No. 1, pp. 22-27, 2005.
Gabor Filter Based Efficient Thermal and Visual Face Recognition Using Fusion Architectures
Jahanzeb Ahmad, Usman Ali, Syed Naveed Hussain Shah
COMSATS Institute of Information Technology, Abbottabad, Pakistan
[email protected], [email protected], [email protected]
Abstract. A fusion architecture for an efficient visual and thermal face recognition biometric system is presented in this paper. Both data fusion and decision fusion are employed in the architecture to improve on the performance of the individual modalities. A Gabor filter technique is used to extract and match features from the input image and the database images. To our knowledge this is the first visual, thermal and fused-data (fusion of visual and thermal data) face recognition fusion system that utilizes Gabor filters for feature extraction. We have achieved an accuracy above 98%. The paper also discusses the performance issues of memory and response time and outlines directions for a fast and efficient recognition system.
Keywords. Gabor Filter, Face Recognition, Decision Fusion, Data Fusion
Introduction
A face recognition system must recognize a face from a novel image despite the luminance variations between images of the same face. A common approach to overcoming image variations caused by changes in illumination conditions in visual images is, nowadays, to use thermal or infrared images. The choice of infrared makes the system less dependent on external light sources and more robust with respect to incident angle and light variation. Some commonly used face recognition techniques are Principal Component Analysis (PCA) [1], Independent Component Analysis (ICA) [2], Linear Discriminant Analysis (LDA) [3], Elastic Bunch Graph Matching (EBGM) [4] and the Gabor filter technique [5]. Thermal imagery is an emerging field of interest for different applications. Previous work shows that the well-known face recognition techniques mentioned above can be successfully applied to IR images. In [6][7] it is shown that these recognition techniques applied to IR imagery have in some cases performed even better than on visible imagery. In [8], an image data set acquired by Socolinsky et al. was used to study multimodal IR and visible face recognition using the Identix FaceIt algorithm [9]. X. Chen et al. in [11], which extends their work in [10], showed by time-lapse recognition experiments that (1) PCA-based recognition using visible
images performed better than PCA-based recognition using IR images, (2) FaceIt-based [9] recognition using visible images outperformed either PCA-based recognition on visible or PCA-based recognition on IR, and (3) the combination of PCA-based recognition on visible and PCA-based recognition on IR outperformed FaceIt on visible images. This shows that, even with a standard public-domain recognition engine, multi-modal IR and visible recognition has the potential to improve performance over the current commercially available state of the art. A face recognition method in the infrared spectrum is presented in [12], where Gabor filtering is used as a spectral analysis tool. The authors used the Equinox facial database [13], the most extensive infrared facial database, and provided a comparison with the eigenfaces method for infrared images [14]. In [15], illumination invariant face recognition using thermal infrared imagery is discussed and tested with the eigenfaces method and the ARENA algorithm [16]. For a detailed review of current advances in visual and thermal face recognition, refer to [17]. In [18], several multisensor data fusion architectures are presented to create a color night vision capability for target designation, learning, and search, based on Fuzzy ARTMAP neural network operations. Fusion of visual and thermal signatures for robust face recognition is explained in [19] and outperforms the individual visual and thermal face recognizers; both data and decision fusion are discussed, and FaceIt recognition is used for testing the fusion on the Equinox face database [13]. At the NASA Langley Research Center (LaRC), a real-time DSP implementation on data from infrared and visible sensors has been developed [20] to assist pilots flying through adverse weather conditions. In this paper we present a fusion architecture for the fusion of thermal, visual and fused face images for a better biometric face recognition system, which utilizes Gabor filters. The paper is organized as follows: Section 1 discusses data fusion, Section 2 Gabor filtering for face recognition, and Section 3 decision fusion; Section 4 presents the overall system architecture. Finally, the results and conclusions are discussed.
Figure 1: Some sample visual and thermal Images from database
Figure 2: Fused-data (result of data fusion of visual and thermal images)
1. Data Fusion
Data fusion combines the information of both sources (visual and thermal) to produce a new, more informative image for recognition, which yields more accurate results. The Equinox facial database [13], the most extensive infrared facial database publicly available at the moment, was used for testing. The Equinox database has a good mix of subject images with accessories (e.g., glasses) as well as expressions of happiness, anger, and surprise, which account for pose variation. Figure 1 shows some examples from this database. The visual and thermal images are combined using the equation proposed in [18]:
F(x, y) = Fw · T(x, y) + (1 − Fw) · V(x, y)
where F(x, y) is the fused image, T(x, y) and V(x, y) are the thermal and visual images respectively, and Fw is the fusion weight, with a value between 0 and 1. The fused image F(x, y) is passed to the face recognition system as input. A few fused images are shown in Figure 2.

2. Gabor Filter based Face Recognition

2.1. Face Skin Detection
Face detection in cluttered images is very hard, owing to changes in the environment, lighting effects, facial expressions and different poses of the face. There are different face detection techniques, for example estimation of illumination from skin color [8], face detection using mixtures of linear subspaces [9], and a fast and accurate face detector for indexation of face images [10]. Skin color based detection [11] is implemented in the proposed architecture because of its simplicity, though it is less efficient. The skin color range, tested for a wide range of skin colors, is
( (r>95) & (g>40) & (b>20) & ((max(r,g,b) - min(r,g,b)) > 15) & (abs(r-g)>15) & (r>g) & (r>b) )
where r, g and b are the red, green and blue components of the pixel respectively.

2.2. Feature point calculation
Physiological studies found simple cells in the human visual cortex that are selectively tuned to orientation as well as to spatial frequency, and it was suggested that the response of a simple cell could be approximated by 2-D Gabor filters [19]. One of the most successful recognition methods is based on graph matching of coefficients, which has disadvantages due to its matching complexity. Escobar and Javier [20] proposed a model in which they manually located the feature points and then calculated the Gabor jet, which describes the behavior of the image around that point. Since automated face recognition cannot locate feature points manually, we calculate the Gabor filter response at every point to capture the behavior of the image around that point. A filter response at any point can be calculated by convolving the filter kernel with the image at that point. For a point (X, Y), the filter response, denoted R, is defined as
R(X, Y, θ, λ) = Σ_{x=−X}^{N−X−1} Σ_{y=−Y}^{M−Y−1} I(X + x, Y + y) · f(x, y, θ, λ)

f(x, y, θ, λ, σX, σY) = exp( −0.5 ( R1² / σX² + R2² / σY² ) )

R1 = x cos(θ) + y sin(θ),   R2 = −x sin(θ) + y cos(θ)

θ = kπ / n,   k = 1, 2, ..., n
where σX and σY are the standard deviations of the Gaussian envelope along the x and y dimensions respectively, and λ, θ and n are the wavelength, the orientation and the number of orientations respectively. I(x, y) denotes the N×M image. When we apply all Gabor filters at multiple frequencies and orientations at a specific point, we get the filter response for that point. We chose four orientations and a constant wavelength because feature points are relatively insensitive to the Gabor kernel wavelength, while they vary significantly across different orientations [5]. We use a constant λ = 2·1.414 and σX = σY = λ/2.

2.3. Feature Point selection
Mostly the eyes, nose, mouth and corners of the lips are taken as feature points. However, in our implementation we do not fix the feature points, because of the varying facial characteristics of different faces such as dimples, moles, etc.; the human mind also uses these characteristics for face recognition. Within each window of size S×T, we choose as a feature point the point around which the response of the Gabor filter kernels is maximal, where

S = N / W  and  T = M / W

where N is the number of columns, M the number of rows and W the number of windows. The feature point located in any window can be evaluated as
R_f(x0, y0) = max_{(x, y) ∈ C} ( R_j(x, y) )
where R_j is the response of the image to the j-th Gabor filter and C is a window. The window size is one of the important constraints of our implemented model: it should be small enough to capture all important facial feature points, but large enough that no redundancy occurs. Feature responses are obtained by applying the above method to all windows.

2.4. Feature vector generation
Feature vectors are generated at the feature points as discussed in the previous sections. The p-th feature vector of the i-th reference face is defined as
v_{i,p} = { x_p, y_p, R_{i,j}(x_p, y_p) }

where j is the number of responses. A feature vector thus contains the responses together with the location information.
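The feature extraction of Sections 2.2-2.4 can be sketched as follows (Python with NumPy/SciPy). The cosine carrier of the Gabor kernel, the kernel size, and the interpretation of W as the number of windows per image dimension are assumptions made for this illustration; only the Gaussian envelope, the orientations θ = kπ/n and the per-window maximum are given explicitly above.

import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, lam, sigma, size=9):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r1 = x * np.cos(theta) + y * np.sin(theta)
    r2 = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-0.5 * (r1 ** 2 + r2 ** 2) / sigma ** 2)
    return envelope * np.cos(2 * np.pi * r1 / lam)      # carrier term is assumed

def feature_vectors(image, n_orient=4, lam=2 * np.sqrt(2), windows=4):
    # one feature point per window: the pixel with the strongest response over
    # all orientations; each vector stores the location and all responses there
    sigma = lam / 2.0
    resp = np.stack([convolve2d(image, gabor_kernel(k * np.pi / n_orient, lam, sigma), mode="same")
                     for k in range(1, n_orient + 1)])
    M, N = image.shape
    s, t = M // windows, N // windows
    vectors = []
    for wy in range(windows):
        for wx in range(windows):
            win = np.abs(resp[:, wy * s:(wy + 1) * s, wx * t:(wx + 1) * t]).max(axis=0)
            dy, dx = np.unravel_index(np.argmax(win), win.shape)
            y0, x0 = wy * s + dy, wx * t + dx
            vectors.append(np.concatenate(([x0, y0], resp[:, y0, x0])))
    return vectors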
2.5. Similarity calculation
The degree of similarity is calculated between the input image and all the images in the database. The similarity between the features of the input image and any image from the database is calculated as
S_i(p, j) = Σ_l |v_{i,p}(l)| · |v_{i,j}(l)| / ( Σ_l |v_{i,p}(l)|² · Σ_l |v_{i,j}(l)|² )^{1/2}
where S_i(p, j) represents the similarity of the j-th feature vector of the input face, v_{i,j}, to the p-th feature vector of the i-th reference face, v_{i,p}, and l indexes the vector elements. We choose the greatest similarity value (i.e. the one nearest to one) between a feature vector of the input image and all the feature vectors of an image from the database, as it determines the highest degree of similarity between two feature vectors.
D(Ti, I) = min[ |S(i,1) − 1|, |S(i,2) − 1|, ..., |S(i, z−1) − 1|, |S(i, z) − 1| ]
where D(Ti, I) is the difference of the i-th feature vector of the input image T with respect to all the feature vectors of image I from the database, and z = n × W. Finally, we calculate the overall difference D(T, I) between an input image and an image from the database using the following equation:
D(T, I) = (1/z) Σ_{i=1}^{z} D(Ti, I)
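The matching stage can be sketched as below (Python with NumPy). The square root in the normalisation follows the reconstruction of the similarity formula above, chosen so that identical vectors give a similarity of one; it is an assumption rather than a detail confirmed by the extracted text.

import numpy as np

def similarity(v_p, v_j):
    num = np.sum(np.abs(v_p) * np.abs(v_j))
    den = np.sqrt(np.sum(np.abs(v_p) ** 2) * np.sum(np.abs(v_j) ** 2))
    return num / den

def overall_difference(input_vectors, reference_vectors):
    # D(T, I): average over the input's feature vectors of the smallest |S - 1|
    diffs = [min(abs(similarity(v_in, v_ref) - 1.0) for v_ref in reference_vectors)
             for v_in in input_vectors]
    return float(np.mean(diffs))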
3. Decision Fusion
In [11] and [21] the sum rule is used, as it outperforms other classifier combination schemes. We use the more flexible decision fusion proposed in [19] for combining the visual, thermal and data-fused face recognition decisions. The matching score of the fusion (MF) is derived from the individual scores of the visual recognition module (MV), the thermal recognition module (MT) and the data-fused recognition module (MDf). Decision fusion is the weighted sum of MV, MT and MDf:
MF = wv·MV + wT·MT + wDf·MDf
where wv, wT and wDf denote the weight factors for the matching scores of the visual, thermal and data-fused face recognition modules. In this paper, wv = wT = wDf = 0.3333.

4. Architecture of system
The proposed architecture consists of four main processing modules for each of the visual, thermal and fused face recognition paths, namely (a) feature value calculation, (b) feature vector selection, (c) similarity calculation and (d) decision fusion, plus a few preprocessing and storage modules. Figure 3 shows the block diagram of the architecture of the system.
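Applied at identification time, the decision-fusion step of Section 3 amounts to the small routine below (Python; the callable-based interface is an illustrative choice, not the authors' API):

def fuse_scores(m_v, m_t, m_df, w_v=1/3, w_t=1/3, w_df=1/3):
    # weighted sum of the visual, thermal and data-fused matching scores
    return w_v * m_v + w_t * m_t + w_df * m_df

def identify(probe, gallery, visual, thermal, fused):
    # visual, thermal, fused: callables returning a difference score D(T, I)
    # for the probe against a gallery entry; the smallest fused score wins
    return min(gallery, key=lambda g: fuse_scores(visual(probe, g),
                                                  thermal(probe, g),
                                                  fused(probe, g)))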
5. Accuracy Results
Experiments were performed on 24 candidates of the Equinox facial database [13]. A few visual and thermal images are shown in Figure 1. Three images (thermal, visual, and fused) of each candidate with frontal illumination are used. In our experiments we set the value of Fw to 0.5. Accuracy results for two different resolutions are provided in Table 1.
Figure 3: Data and Decision Fusion Face Recognition Architecture

Table 1: Accuracy Results
Input type                        Accuracy at 21x30   Accuracy at 37x49
Thermal                           38%                 38.6%
Visual                            63%                 74.38%
Decision Fusion                   86%                 97%
Data Fusion                       96%                 97.4%
Both Data and Decision Fusion     97.11%              98.6%
6. Conclusions
Since recognition algorithms are well developed for visual images, in this paper a well-developed Gabor filter is applied individually to visual, thermal and data-fused images, and the final decision is made by fusing their results for efficient face recognition. Because only the test visual and thermal images are stored in memory, the memory
quirement is very less. Only feature vectors achieved by applying Gabor filter on the fused-data are stored in database hence further reducing the memory requirement. Though, data and decision fusion architecture for face recognition system has thrice the computational cost as that of the individual computational cost of visual or thermal or fused-data face recognition systems, but the accuracy of designed fusion architecture is more then the individual visual/thermal/fused-data face recognition, and we are able to achieve more then 98% accuracy results. 7. Future Work As the computational cost of the proposed system is almost thrice that of individual visual, thermal and data fused face recognition cost, therefore, this issue must be resolved before the system can be implemented in hardware. We are currently working on making a parallel distributed system design so that the above mentioned draw back can be removed. References [1] M. Turk et al “Eigenfaces for Recognition, Journal of Cognitive Neurosicence”, Vol. 3, No. 1,1991. [2] M.S. Bartlett et al “Face Recognition by Independent Component Analysis, IEEE Trans. on Neural Networks”, Vol. 13, No. 6, November 2002, pp. 1450-1464. [3] P.N. Belhumeur et al, “Eigenfaces vs. Fisherfaces: Recognition using Class Specific Linear Projection”, 4th European Conference on Computer Vision, ECCV'96, 15-18 April 1996, Cambridge, UK, pp. 45-58 [4] L. Wiskott, et al, “Face Recognition by Elastic Bunch Graph Matching”, Chapter 11 in Intelligent Biometric Techniques in Fingerprint and Face Recognition, eds. L.C. Jain et al., CRC Press, 1999, pp. 355-396. [5] Creed F. Jones III, “Color Face Recognition using Quaternionic Gabor Filters”, 15, January2003. [6] D. A. Socolinsky and A. Selinger, “A comparative analysis of face recognition performance with visible and thermal infrared imagery,” in International Conference on Pattern Recognition, August 2002. [7] A. Selinger and D. A. Socolinsky, “Appearance-based facial recognition using visible and thermal imagery: a comparative study,” Technical Report,Equinox corporation, 2001. [8] B. Abidi, “Performance comparison of visual and thermal signatures for face recognition,” in The Biometric Consortium Conference, 2003. [9] http://www.indentix.com [10] X. Chen et al, “Pca-based face recognition in infrared imagery: Baseline and comparative studies,” IEEE InternationalWorkshop on Analysis and Modeling of Faces and Gestures, pp. 127– 134, 2003. [11] X. Chen et al, “Visible-light and Infrared Face Recognition” Proc. of the of Workshop on Multimodal User Authentication, pages 48-55, Dec 2003, Santa Barbara, CA USA. [12] P. Buddharaju et al, “Face Recognition in the Thermal Infrared Spectrum”, In the Proc. of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. [13] Equinox: Face database. www.equinoxsensors.com/products/HID.html (2004) [14] Cutler, R.: Face recognition using infrared images and eigenfaces. cs.umd.edu/rgc/face/face.htm (1996) [15] D. A. Socolinsky, et al, “Illumination Invariant Face Recognition Using Thermal Infrared Imagery”, In the Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition(CVPR 2001). Volume 1., Kauai, Hawaii, United States (2001) 527–534 [16] Terence Sim, et al, “High-Performance Memorybased Face Recognition for Visitor Identification,” in Proceedings of IEEE Conf. Face and Gesture Recognition, Grenoble, 2000. [17] S. 
Wrapping Software Agents into Web Services Manish Chhabra and Hongen Lu Department of Computer Science and Computer Engineering, La Trobe University, Bundoora, Melbourne, VIC 3086, Australia
[email protected],
[email protected]
Abstract. The World Wide Web is evolving from a sea of information into a service-oriented marketplace. Web Service (WS) is one of the fastest growing areas of information technology in recent years. To meet the increasing demand for a vast range of Web Services, an engineering approach is much needed. In this paper we propose a methodology to wrap software agents with Web Services. This method takes advantage of widely accepted agent development tools in order to speed up the development of WS, and it also broadens the accessibility of agents. Our case study shows that the information provided by Web Services can be significantly improved and made more dynamic by using the autonomous, goal-directed behavior of software agents. Keywords. Agent, Web Service, Agent Tools, Service Engineering
1. Introduction
The World Wide Web is evolving from a sea of information to a service-oriented marketplace, and Web Service (WS) technology is the next wave of Internet computing. WS is one of the fastest growing areas of information technology in recent years. Web Services expose business processes over the Internet and promise more business opportunities by providing a common protocol that web applications can use to communicate with each other over the web. Web services are described in XML and communicate over the existing HTTP infrastructure using SOAP. The foundation of Web Services is laid by three significant standards: SOAP (Simple Object Access Protocol) describes the message format; WSDL (Web Services Definition Language) gives self-describing interfaces of Web Services; and UDDI (Universal Description, Discovery, and Integration) provides the means to locate appropriate web services. Publicizing Web Services is also done using UDDI. While WS is revolutionizing the World Wide Web by making it a more dynamic environment, there is still an increasing need to apprehend its current capability before exploiting its full potential [3]. With the help of software agents (SA) wrapped with WS, the information provided by WS can be made more efficient and more dynamic in nature. Software agents are significant in many domains of computer science. Although there is no precise definition of SA, the present consensus describes them as computer systems situated in some environment and capable of flexible autonomous action in order to meet their design objectives [1]. Another benefit of wrapping agents into web services is that developers can easily and efficiently reuse widely accepted
existing agent-based systems as WS. We emphasize using existing agent development tools, which speed up the development of such a design.
2. Software Agent and Web Service
WS technology represents a fundamental shift in the way web applications will be developed for e-business. There will still be web pages, HTML and JavaScript authoring, but many e-business applications will rely more and more on programmatic interfaces that tie Internet-based applications together at a fundamental level. Web Services Description Language (WSDL) is used to describe the inputs, outputs, and semantics of remote services. UDDI is an open specification that enables vendors to offer their business services and assists potential buyers in searching for these services. Service providers register their services with a universal service registry, from where the requester can search for the desired service and its description. The service requester then invokes the service using SOAP as the communication protocol. SOAP is designed to be a lightweight method for exchanging information between distributed systems.
Figure 1. Web Service Model (Relation between Service Provider and Requester)
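To make the invoke step concrete, the sketch below sends a SOAP request from a service requester using Java's standard SAAJ API (javax.xml.soap); the namespace, operation name and endpoint URL are invented for illustration and are not part of the model above.

    import javax.xml.namespace.QName;
    import javax.xml.soap.*;

    public class SoapInvokeSketch {
        public static void main(String[] args) throws Exception {
            // Build a SOAP message whose body carries one (hypothetical) operation.
            MessageFactory messageFactory = MessageFactory.newInstance();
            SOAPMessage request = messageFactory.createMessage();
            SOAPBody body = request.getSOAPBody();
            QName operation = new QName("http://example.org/loan", "check", "loan"); // assumed namespace
            SOAPElement checkElement = body.addChildElement(operation);
            checkElement.addChildElement("customerId").addTextNode("12345");
            request.saveChanges();

            // Invoke the service located through the registry (endpoint URL is assumed).
            SOAPConnection connection = SOAPConnectionFactory.newInstance().createConnection();
            SOAPMessage response = connection.call(request, "http://example.org/LoanApplicationService");
            connection.close();

            // Read the result carried back in the response body.
            System.out.println(response.getSOAPBody().getTextContent());
        }
    }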
Software agents provide a prominent way to improve on the Web Service standards through the benefits they bring to the users of Web Services. Agents take input from the environment based on changes in the environment and perform actions based on that input. The BDI (Belief-Desire-Intention) model described in [4] is the most popular model for designing agent-based systems. Beliefs represent an agent's local knowledge base, that is, the agent's information about the world, which may be incomplete or incorrect; Desires represent what an agent is trying to achieve, that is, states of affairs that the agent would like to bring about, and they should be consistent with one another; Intentions are the currently adopted plans. Plans are predetermined sequences of actions that can accomplish specified tasks. Java-based agents are very popular, as they are easy to develop, and many tools and techniques exist that extend the basic standard. There are many agent frameworks, languages and protocols available today. JACK is one such tool; it provides an environment for building, running and integrating commercial-grade multi-agent systems using a component-based approach [5]. The JACK agent language is a programming language that extends Java with agent-oriented concepts. JACK source
code is first compiled into regular Java code before being executed. Agents in JACK model reasoning behavior according to the theoretical BDI model.
3. Wrapping Agent with Web Service
In this section we present a methodology to wrap software agents with Web Services. In this method, agents are not only used for creating, storing, and providing information, as is done by a WS itself; they can also be used for continuous monitoring of changes in the information provided by the WS, and can put together the information gathered from different sources in a more resourceful manner. Figure 2 illustrates Agent A wrapped partially with WS A, which is realized with the help of an adapter. Agent A provides its inherent features to the WS and is able to communicate with other in-house agents, such as Agent B in Figure 2, using an agent communication language like KQML. The figure also shows that an agent's communication is not limited to other software agents; agents can also gather information from external Web Services via SOAP messaging.
Figure 2. Software Agent Wrapped with WS
This design also fulfils the requirement of customization: service users are able to fit the service to their individual requirements and expectations, because the processing can now be done within the service provider's vicinity. Hence, the wrapping results in more competent Web Services. The following five steps implement our proposed wrapping methodology:
1) Define the adapter interface. This adapter is used to communicate between the agent and the WS. The method bodies of this adapter interface will be implemented in the agent. They generate the events which in turn invoke the agent's plans associated with these events.

    public interface LoanInterface {
        void check(/* data parameters */);
    }
2) In this step we define the events that the agent will handle. These events are generated by the adapter methods for the agent to process.

    event LoanApplication extends Event {
        #posted as check(/* data members */)
        {
            /* Every event has a plan associated with it */
        }
    }
3) Next is the business process, which comes into play when the events are generated. This is the plan of the agent that is associated with the events posted in the environment.

    plan DoCreditCheck extends Plan {
        #handles event LoanApplication la;
        body()
        {
            // connect to the database and do the credit check
        }
    }
4) Implement the agent and, within the agent, give the method bodies defined in the adapter interface in step 1.

    agent CreditCheckAgent extends Agent implements LoanInterface {
        #handles event LoanApplication;
        #uses plan DoCreditCheck;
        #posts event LoanApplication la;

        CreditCheckAgent(String name) {
            super(name);
        }

        public void check(/* data members */) {
            postEvent(la.check(/* data members */));
        }
    }
5) Finally, we create the WS; the agents are invoked either when a request is made to the WS or in an asynchronous manner. The calls to the agents are made through the adapter.
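As an illustration of step 5, the following sketch shows what the Web Service side could look like using JAX-WS annotations. The class name, operation and parameter list are assumptions made for illustration, not the implementation used in the paper; in particular, a concrete check(String, double) signature is assumed for the adapter interface of step 1.

    import javax.jws.WebMethod;
    import javax.jws.WebService;

    // Hypothetical endpoint that exposes the loan check as a Web Service and
    // delegates the work to the wrapped agent through the adapter interface.
    @WebService
    public class LoanApplicationService {

        // The JACK agent from step 4 implements the adapter interface of step 1.
        private final LoanInterface adapter = new CreditCheckAgent("aCC");

        @WebMethod
        public String submitApplication(String customerName, double amount) {
            // Calling the adapter posts a LoanApplication event inside the agent,
            // which asynchronously triggers the DoCreditCheck plan.
            adapter.check(customerName, amount);
            return "Application received for " + customerName;
        }
    }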
4. A Case Study in Financial Service Domain
This is a case study of financial services provided by a bank to process loan applications and report the outcome to customers. The Loan Application Service (LAS) is an online service that lets users submit their loan applications to the bank. The loan processing events are managed by the four agents shown in Figure 3. To achieve this functionality, the agent Loan Processor (aLP) is wrapped with the service using the methodology described in the previous section. The agent, with its customary capabilities, is able to communicate with other in-house agents that help verify the credit history of the customer and find out current interest rates. Since the interest rates may vary depending on external factors such as the Federal Reserve bank rate, the agents will need to access external services.
Figure 3. Online Loan Application Service
The agents form a system to process loan applications. The agent Credit Checker (aCC) checks the credit history of users by communicating with an external service. The agent Interest Calculator (aIC) computes current interest rates after retrieving information from other sources, such as Federal Reserve bank rates. The Accounts Manager agent (aAM) stores the applicant details in the database if the applicant is not an existing customer. The agent Loan Processor (aLP) communicates with these agents, and based on the information obtained the application is accepted or rejected. Using the proposed methodology the whole system can be made online. The adapter is implemented by aLP and wraps the agent with the online loan application web service. The methods defined by the adapter take the customer details as parameters and post the event for the agent to start processing this data. The set of actions that need to be performed using this data is defined as the plan of the agent. LAS, which presents the whole system online, defines the method that receives all the customer details and passes them to the agent through the adapter.
5. Conclusion
Currently Web Services are developed in an ad hoc manner. To meet the increasing demand for Web Services, an engineering approach is much needed. In this paper we propose an agent-oriented methodology for Web Service engineering, in which software agents are the building blocks for Web Services. Steps are given and described on how to build Web Services. Wrapping software agents with Web Services can yield a more efficient way of providing information through Web Services, which become capable of responding to frequent changes in the environment. We use the autonomous and goal-directed behavior of software agents to provide such information. The paper also presents the idea of using agents to provide information that is retrieved from a combination of several independent Web Services. The case study gives a clearer understanding of this behavior. We also demonstrate how the approach can be implemented using existing tools and technology. The wrapping methodology is easy to follow and effective for building web-based systems systematically.
References
[1] M. Wooldridge and N.R. Jennings, Intelligent Agents: Theory and Practice, The Knowledge Engineering Review 10(2) (1995), 115-152.
[2] J.P. Thomas, M. Thomas and G. Ghinea, Modeling of Web Services Flow, Proceedings of the IEEE International Conference on E-Commerce (CEC'03).
[3] S. Pradhan and H. Lu, Using SOAP Messaging to Coordinate Business Transactions in Multi-Agent Financial Web Services, Volume 1, iiWAS05, pp. 109-120, September 19-21, 2005.
[4] M. Winikoff, Simplifying Agent Concepts, SRI International, AIC, Tuesday 5th June 2001.
[5] The Agent Oriented Software Group, WWW page: http://www.agent-software.com/shared/products/.
[6] T. Finin and J. Weber, Specification of KQML Agent-Communication Language, June 15th, 1993, The DARPA Knowledge Sharing Initiative External Interfaces Working Group.
[7] C.-H. Jo, G. Chen and J. Choi, A New Approach to BDI Agent-Based Modeling, 2004 ACM Symposium on Applied Computing, SAC'04, March 14-17, 2004.
[8] Book Review: J. Foster, M. Porter, Dreamtech Software Inc., N. Wear and B. Hablutzel, Developing Web Services with Java APIs for XML using JWSDP, 2002.
Intelligent Elevator Control by Application of Computer Vision Vagram VARELJIAN, Ju Jia ZOU School of Engineering, University of Western Sydney, Locked Bag 1797, Penrith South DC, NSW 1797, Australia
Abstract. An age that bears witness to a proliferation of ever-taller buildings brings with it a need for more efficient elevator services. This paper presents a scheme that improves elevator efficiency and passenger comfort by reducing waiting and travel times. The proposed system utilises a background subtraction process to estimate the extent of crowding inside the elevator car, allowing a full elevator to bypass hall calls. Furthermore, using a similar detection technique, the system is capable of estimating the number of patrons waiting on every elevator floor. This information can be relayed to an elevator controller, improving traffic flow by setting higher priorities for busy floors. Keywords. Computer Vision, Elevator Control, Background Subtraction
1. Introduction
Increasing land prices and limited urban space fuel a growing trend for constructing perpetually taller structures. Skyscrapers have become customary in major cities around the world, providing living and working space for millions of people. During peak periods, problems may arise due to congestion of human traffic corridors [1]. Time and energy are wasted when full elevators make redundant stops.
1.1 Current System Limitations
To improve performance, some elevator control systems determine the extent of crowding inside the lift and in hall floor areas. This task is generally achieved by using pressure sensors [2]. However, this approach does not work in all scenarios, because weight is often a poor predictor of occupied floor surface area. For example, a person might enter the lift with a lightweight pram, which takes up a large area. Such a control implementation would mistakenly estimate that the elevator is relatively empty and continue to inefficiently attend to hall calls. Furthermore, this system would require existing elevators to be upgraded, involving construction work and high costs, during which time the elevator would be out of service. Accurate traffic flow forecasting is required to make correct elevator dispatching policies. Modern elevator control systems improve efficiency by estimating traffic flow, utilizing fuzzy logic, genetic algorithms and neural networks to predict
passenger numbers [2]-[5]. However, elevator traffic flow is complex and variable in time, making it hard to predict accurately.
1.2 Improving the System by Means of Computer Vision
Historically, there has been considerable improvement in elevator controllers; they have evolved from basic single call control modules to advanced artificial intelligence systems. Modern computer vision technology can further improve the efficiency of elevators, allowing the group controller to:
• Bypass floor calls if the elevator is full.
• Cancel nuisance 'ghost' requests if the lift is empty.
• Set a higher priority for busy elevator floors.
Utilising existing security cameras and subtracting real-time frames from a saved background image, a computer vision system is able to determine the level of crowding in the elevator car. Detection of a full elevator would inform the controller to ignore any further hall calls, preventing the car from attempting to pick up new passengers. The controller can cancel any car calls that are registered inside an empty elevator, for example if a child has pressed all the buttons in the elevator and run away. Furthermore, the frame differencing technique can be applied to discriminate between numerous levels of activity, allowing the controller to set various priorities for different floors. For example, a floor with 6 people waiting would be given a higher priority than a floor with one person. This approach directly measures the amount of activity instead of attempting to predict traffic flow.
2. Proposed System
2.1 Filtering
Due to camera limitations, high frequency noise artifacts manifest throughout the input video. In order to obtain accurate results, the noise problem must be addressed. Hence, the initial step of the proposed method utilises Gaussian filtering to reduce the noise in the input video stream.
2.2 Background Modeling
Background modeling is a representation of the background and its associated statistics [6]. Background modeling refers to the maintenance operations performed on the background image in order to maximise the effectiveness of the frame differencing process. Median background image modeling is founded on the assumption that the brightness intensity of every individual background pixel varies autonomously, according to a normal distribution [7]. In order to create a background model, pixel-wise mean values are computed during an initial training phase. It is essential that foreground objects are absent during this phase, as their presence would taint the formation of an accurate background
image. The training phase involves accumulating a number (N) of frames (F(x,y)) to determine a sum of pixel values (S(x,y)) and a sum of squares (Sq(x,y)) for every pixel location, as illustrated in equations (1) and (2).

    S(x,y) = Σ_{n=1}^{N} F_n(x,y)                                    (1)

    Sq(x,y) = Σ_{n=1}^{N} F_n(x,y)²                                  (2)
The next step involves calculating the mean background image, as depicted by Eq. (3), where N is equivalent to the value used in Eqs. (1) and (2). The derived mean (μb(x, y)) is used as a background image for the background subtraction process.
    μ_b(x,y) = (1/N) · S(x,y)                                        (3)
Consequently, the standard deviation σ(x,y) is calculated using the process illustrated in Eq. (4), where Sq, S and N are the constants obtained previously.
    σ(x,y) = √( Sq(x,y)/N − (S(x,y)/N)² )                            (4)
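A minimal sketch of this training phase, written in plain Java over grayscale frames held as 2-D double arrays (the frame source and image size are assumptions):

    // Sketch of the background-model training phase (Eqs. (1)-(4)): accumulate
    // per-pixel sums and sums of squares over N foreground-free frames, then
    // derive the mean background image and the per-pixel standard deviation.
    public class BackgroundModel {
        final double[][] mean;   // mu_b(x,y)
        final double[][] stdDev; // sigma(x,y)

        public BackgroundModel(double[][][] frames) { // frames[n][x][y]
            int n = frames.length, w = frames[0].length, h = frames[0][0].length;
            double[][] s = new double[w][h];   // S(x,y)
            double[][] sq = new double[w][h];  // Sq(x,y)
            for (double[][] frame : frames) {
                for (int x = 0; x < w; x++) {
                    for (int y = 0; y < h; y++) {
                        s[x][y] += frame[x][y];
                        sq[x][y] += frame[x][y] * frame[x][y];
                    }
                }
            }
            mean = new double[w][h];
            stdDev = new double[w][h];
            for (int x = 0; x < w; x++) {
                for (int y = 0; y < h; y++) {
                    mean[x][y] = s[x][y] / n;                          // Eq. (3)
                    double variance = sq[x][y] / n - mean[x][y] * mean[x][y];
                    stdDev[x][y] = Math.sqrt(Math.max(variance, 0.0)); // Eq. (4)
                }
            }
        }
    }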
Real-time background image segmentation [8] can be used to compensate for variations in lighting, for example if an elevator features a window. This system would be able to incorporate static changes in the background, for example, graffiti on the elevator wall. 2.3 Background Subtraction Every image consists of pixels belonging to one of two categories: figure or background. Figure pixels are believed to be part of the projection of an object of interest, such as a person (the occupied elevator space), whilst background pixels correspond to the object's surroundings. The estimation whether a pixel is figure or background is based on the fact that moving objects tend to be located where pixel intensities have recently experienced significant change [9]. Moments after elevator door closure a centre-mounted camera captures lift proceedings, where each frame is labelled as a current image (Ic (x, y)). Each derived pixel from the current image (Ic(x, y)) is subtracted from its corresponding pixel in the median background image (μb (x, y)). If their difference is greater than a threshold intensity amount, then the pixel is part of the binary threshold image (Ith(x, y)). The following pixel acceptance test is used to construct the threshold image:
    I_th(x,y) = 1 if |μ_b(x,y) − I_c(x,y)| > k · σ(x,y), and 0 otherwise        (5)
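Continuing the sketch above, the pixel acceptance test of Eq. (5) could be applied to a current frame as follows (BackgroundModel is the training sketch from Section 2.2; the choice of k is discussed below):

    // Sketch of the background-subtraction step (Eq. (5)): a pixel is marked as
    // foreground (1) when it differs from the mean background by more than k
    // standard deviations, and as background (0) otherwise.
    public class BackgroundSubtractor {
        public static int[][] threshold(double[][] current, BackgroundModel model, double k) {
            int w = current.length, h = current[0].length;
            int[][] ith = new int[w][h]; // binary threshold image I_th(x,y)
            for (int x = 0; x < w; x++) {
                for (int y = 0; y < h; y++) {
                    double diff = Math.abs(model.mean[x][y] - current[x][y]);
                    ith[x][y] = (diff > k * model.stdDev[x][y]) ? 1 : 0;
                }
            }
            return ith;
        }
    }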
The sensitivity of the background subtraction process can be varied by altering the threshold value; this is achieved by multiplying the standard deviation by a constant k. It is generally known that optimum background subtraction results are obtained when the constant k is set to 3 [10].
2.4 Division of Threshold Image
Human behavioural model analysis can be applied to the detection process if the system is able to pinpoint the region where change has transpired. To achieve this task, the derived threshold image is divided into smaller regions; each individual section is designated to represent the area an average person would occupy in the elevator.
2.5 Calculation of Occupied Space
Given that the threshold image is binary, a pixel can be categorised as belonging to either a section of the foreground or the background. If the pixel value is black (Pb), it belongs to the foreground image; in other words, (Pb) is categorised as part of the occupied elevator space. If the pixel value is white (Pw), it is recognised as the image background, belonging to the empty elevator space. Since the original image was previously divided into smaller regions, every pixel P(x,y) is catalogued as belonging to a specific region, based on its (x,y) coordinates. An overall sum (Fs) of foreground pixels (Pb) contained within each region is recorded. The magnitude of the sum (Fs) represents the 'crowding level' for each region. A larger sum represents a greater occupied area, as demonstrated in Table 1. A region's 'crowding level' can be classified as empty, light, medium or full, depending on the value of the foreground sum (Fs). For example, if the region contains 0 to 100 black pixels it is classified as empty, if the region contains 101 to 700 black pixels it is classified as light, and so on. In addition, every area classification is assigned a numerical weighting factor (Wƒ), as listed in Table 1. Regions adjoining the door are assigned an extra weighting point when they are occupied. Assigning extra Wƒ points is based on the assumption that people prefer to stay away from the doors in most circumstances, unless the elevator is completely full. The Wƒ points are used to compute the elevator's 'crowding' status. An elevator is considered full when the total number of points is greater than 25 and no empty or light areas are registered in any region. These parameters would need to be configured independently for every system, taking into account elevator size and the level of passenger comfort required.

Table 1. Region status classification and point value allocation.

    Sum Range             Region Status classification    Weighting points (Wƒ)
    0 < Fs ≤ 100          Empty                           0
    100 < Fs ≤ 700        Light                           1
    700 < Fs ≤ 1500       Medium                          2
    Fs > 1500             Full                            3
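A sketch of the bookkeeping in Sections 2.4-2.5 and Table 1 is given below: each region's foreground sum Fs is mapped to a status and weighting points, and the 'full elevator' rule is applied. The region layout and the handling of the door-adjacency bonus are simplified assumptions.

    // Sketch of region classification (Table 1) and the crowding decision: the
    // elevator is declared full when total Wf exceeds 25 and no region is
    // classified as empty or light.
    public class CrowdingEstimator {
        static int weightFor(int fs) {          // Table 1
            if (fs <= 100) return 0;            // empty
            if (fs <= 700) return 1;            // light
            if (fs <= 1500) return 2;           // medium
            return 3;                           // full
        }

        // regionSums[i] is the foreground sum Fs of region i; doorRegion[i] marks
        // regions adjoining the door, which get one extra point when occupied.
        public static boolean isElevatorFull(int[] regionSums, boolean[] doorRegion) {
            int totalPoints = 0;
            boolean anyEmptyOrLight = false;
            for (int i = 0; i < regionSums.length; i++) {
                int w = weightFor(regionSums[i]);
                if (w <= 1) anyEmptyOrLight = true;
                if (w > 0 && doorRegion[i]) w += 1; // door-region bonus
                totalPoints += w;
            }
            return totalPoints > 25 && !anyEmptyOrLight;
        }
    }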
3. Experimental Results
A scale model lift was constructed utilising realistic textures obtained from an elevator; it was illuminated by diffused fluorescent lighting. Proceedings were captured by a 101K-pixel webcam. The obtained images closely resembled those of a security system in a large elevator, as illustrated in Figure 1(a). The highlighted rectangle represents the region of interest (ROI) that was used in the following test.
Figure 1. (a) Selection of ROI; (b) and (c) object detection output, image pairs featuring threshold and current frames respectively.
An array of various figurines, featuring different shapes and colours, was positioned inside the elevator in order to assess the system's ability to detect foreign objects. Figures 1(b) and 1(c) illustrate four output image pairs, where each pair incorporates a current and a threshold image. Beginning with the empty elevator, each subsequent frame characterised the new object, its presence and location represented by the black rendered region in the corresponding threshold image. Objects were added until the elevator became full; the results of this procedure are graphically illustrated in Figure 2.
Figure 2. (a) 'Block Activity': magnitude of the foreground sum Fs (pixels) for each region (regions 1-9) against time (frames). (b) 'Data used to determine available area status': progression of the counts of empty, light, medium and full regions and of the total Wƒ points (with a polynomial fit) against time (frames) as objects are added.
Figure 2(a) illustrates the foreground sum (Fs) of black pixels for every region. As more objects are added, the number of black pixels in the threshold image increases. The data from this graph was used to designate block status. For example, if the amplitude exceeded 1500 pixels, the region was deemed full. Figure 2(b) was used to determine the crowding status for the whole elevator. The number of empty regions begins with a count of 10, whilst all other block counts begin at 0. As time passes and more objects are added to the elevator, the number of empty blocks descends to zero and the number of full blocks approaches 10. It can be observed that the weighting factor (Wƒ) increased fairly linearly, as shown by a polynomial line of best fit. This was due to the fact that objects were added periodically. When the total weighting factor (Wƒ) was greater than 25 and the number of empty and light blocks was reduced to zero, the system registered the elevator as full.
4. Conclusion
This paper proposed a computer vision system to improve elevator efficiency and passenger comfort. The performance of the system was demonstrated using a scale model elevator. The experimental results showed that a system utilising a background subtraction process correctly estimated crowding inside the elevator, preventing a full lift from attempting to pick up new passengers. Using a similar process, the system can be used to detect the level of activity on every elevator floor, allowing a controller to improve traffic flow by setting higher priorities for busy levels. The system can be easily implemented with existing elevator configurations, permitting a controller to perform functions that were previously unavailable.
5. References
[1] N. Takashi, 'Complex behavior of elevators in peak traffic', Physica A: Statistical Mechanics and its Applications, Vol. 326, Aug. 2003, pp. 556-566.
[2] T.K. Khinag, K. Marzuki & R. Yuslof, 'Intelligent elevator control by ordinal structure fuzzy logic algorithm', Proc. International Conference on Control, Automation, Robotics & Vision, Dec. 1996.
[3] P. Cortés, J. Larrañeta & L. Onieva, 'Genetic algorithm for controllers in elevator groups: analysis and simulation during lunchpeak traffic', Applied Soft Computing, Vol. 4, May 2004, pp. 159-174.
[4] Z. Dewen, J. Li, Z. Yuwen, S. Guanghui and H. Kai, 'Modern elevator group supervisory control systems and neural networks technique', Proc. IEEE International Conference on Intelligent Processing Systems, 1997, pp. 528-532.
[5] F. Luo, Y.G. Xu & J.Z. Cao, 'Elevator Traffic Flow Prediction with Least Squares Support Vector Machines', Proc. IEEE International Conference on Machine Learning and Cybernetics, Vol. 7, Aug. 2005, pp. 4266-4270.
[6] K. Toyama, J. Krumm, B. Brumitt & B. Meyers, 'Wallflower: principles and practice of background maintenance', Proc. IEEE International Conference on Computer Vision, Vol. 1, Sept. 1999, pp. 255-261.
[7] 'Open Source Computer Vision Library Reference Manual', Intel Corporation, 2001, p. 29.
[8] D. Butler, S. Sridharan & V.M. Bove, 'Real-time adaptive background segmentation', Proc. International Conference on Multimedia and Expo, Vol. 3, Jul. 2003, pp. 341-314.
[9] S. Ribaric, G. Adrinek & S. Segvic, 'Real-time active visual tracking system', Proc. IEEE Mediterranean Electrotechnical Conference, Vol. 1, May 2004, pp. 231-234.
[10] I. Haritaoglu, D. Harwood & L.S. Davis, 'W4: Real-time Surveillance of People and their Activities', IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, Aug. 2000, pp. 809-830.
Brain Injury Detection and Monitoring through fMRI Time Series Data Mining SUMRALL, Jeffrey and CHAKRAVARTHY, Ramya and CHAUDRY, Maryam
Abstract – The fundamental purpose of functional Magnetic Resonance Imaging (fMRI) based detection and monitoring is to provide tangible benchmarks for neuronal activity via hemodynamic response characteristics. Voxel intensity maps are calculated from normalized images; the maps are then stored as a single dataset in order to gain knowledge from their collective analysis. At this point data mining techniques are employed to control for certain levels of treatment as well as demographic conditions. The model presented involves three stages: data collection, preprocessing, and data mining/statistical analysis. The general idea of the proposed fMRI based detection and monitoring system is to leverage high precision hidden knowledge from medical images captured for other purposes.
Keywords - data mining, fMRI, decision support, bioinformatics, statistical analysis
1. Introduction
Bioinformatics encompasses a broad range of healthcare oriented techniques for analyzing biological data. Medical imaging and genomic sequencing are two of the major focal points of this rapidly growing industry. In the last few decades, advances in molecular biology, digital imaging techniques and increased computational abilities have afforded rapid advances in this field of study [18]. For example, the Human Genome Project, begun in 1990, was a 13-year effort coordinated by the U.S. Department of Energy and the National Institutes of Health and was designed as a springboard for American genomics research. Popular sequence databases, such as GenBank and EMBL, have been growing at exponential rates due to this effort [18]. This deluge of information has necessitated the careful storage, organization and indexing of sequence information. Information science has been applied to biology to produce the field called bioinformatics. Bioinformatics is loosely defined as the application of computing techniques to the study of biology and in particular biology research [20]. Due to its noninvasive nature, modern imaging technology has emerged as a preferred method of diagnosing medical conditions, and huge amounts of imaging data are being collected daily. fMRI has shown considerable promise in the medical community as the preferred imaging technique for investigating brain function, primarily because it does not use x-ray radiation. Non-imaging techniques such as lesion studies, drug manipulations, and recording of electrical activity, combined with fMRI data, provide the most comprehensive picture of brain activity, while each individual technique is central to modern neuroscience [19]. Specifically, the case study presented focuses on functional Magnetic Resonance Imaging BOLD response analysis as a technique for measuring the level of brain damage in brain injury patients. For example,
once a patient arrives in the emergency room with some degree of brain injury, his or her vital signs must be stabilized first. After the stabilization period, the Brain Injury Detection System (BIDS) can analyze the fMRI scan of the patient and determine which areas of the brain require treatment. If the patient's brain injuries are so slight that they are challenging for the radiologist to pinpoint, the decision support system will be able to more accurately diagnose potential brain damage invisible to the human eye. In essence, it stands to give medical personnel an automated second opinion for diagnosis.
2. Framework The Brain Injury Detection System BIDS development cycle consists of four distinct stages to achieve standardization. Figure 1 depicts a graphic interpretation of the BIDS development cycle.
Figure 1. BIDS Stage Diagram
2.1. Data Collection
The first stage involves collection and archiving of raw images. These images will not be from a single source. Depending on the different scanner brands commercially available on the market, there might be some differences in the raw images collected. Image acquisition settings are generally TE = 60 ms, TR = 3 s, flip angle = 90° on a 1.5 T Magnetic Resonance Imaging System. Slice thickness is usually set at 5 mm but can be as thin as 3 mm [2].
2.2. Preprocessing Preprocessing is achieved by using a standard neuroscience analysis toolkit called Statistical Parametric Mapping (SPM). During preprocessing images will go through a
series of steps to derive a common comparative index. The first step in this preprocessing sequence is realignment.
2.2.1. Realignment
Head motion during the fMRI scans will create variances during analysis. Realignment is the process of making a mathematical adjustment to correct or minimize these variances, thereby improving accuracy. Figure 2 shows the amount of translation and rotation correction needed on the X, Y and Z axes, i.e. the mathematical adjustments required to minimize the variances.
Figure 2. Output of Realignment using SPM
2.2.2. Co-registration
Co-registration is the process of matching images of the same subject with either different or the same modalities. Basically, the functional image is matched to the structural image during this phase. Co-registration also refers to the registration of the position of corresponding features, enabling us to compare the intensity at those corresponding positions [8]. Co-registration in terms of this study refers to the transformation that can relate the position of features in one image or coordinate space with the position of the corresponding features in another image or coordinate space [8]. It is particularly important when fMRI scans are obtained from echo-planar imaging techniques, due to the very low frequency per point in the phase encoding direction [5]. Carrying out this pre-processing stage provides a better spatial image for normalization.
Figure 3. Output of Co-registration using SPM2
2.2.3. Slice Timing
In extreme cases as much as 90% of the variance in fMRI time-series can be accounted for by the effects of movement after realignment [6]. These movement-related components are caused by movement effects that cannot be modeled using a linear affine model. These nonlinear effects include: (i) subject movement between slice acquisition, (ii) interpolation artifacts [7], (iii) nonlinear distortion due to magnetic field inhomogeneities [1] and (iv) spin-excitation history effects [6]. The latter can be pronounced if the TR approaches T1, making the current signal a function of movement history. Slice timing correction is therefore very important in reducing the amount of variance existing in the images. During this process, an attempt is made to correct the differences in acquisition time between slices during sequential imaging. The significance of this process is to ensure that the data for any given volume is sampled at the same time, which is referred to as temporal realignment. On the contrary, under certain comparative conditions, e.g. comparing different trial types at the same voxel, temporal realignment might not prove beneficial to the experiment. This process can be done before or after realignment but prior to normalization.
2.2.4. Spatial Normalization
The normalization process enables comparisons across subjects [9]. It is therefore a crucial process for all research that requires cross-subject comparison. Normalization starts with the creation of parameters for analysis. Images are loaded into the Statistical Parametric Mapping (SPM) application, where normalization and interpolations are carried out. Parameter accuracy can be verified by looking at the cross hair at different brain locations to ensure an accurate match between the two images (template image and subject's image), paying close attention to the edges of the brain. Voxel size is also determined during this process. Normalization consists of
two steps: first the determination of an optimum 12-parameter (translations, rotations, zooms & shears) affine transformation (from an image into a template), followed by a nonlinear estimation of deformations. These parameters can be used to reslice other images coregistered to the image from which the parameters were determined.
Figure 4. Output of Spatial Normalization from SPM2
2.2.5. Spatial Smoothing
Spatial smoothing involves the application of a Gaussian kernel, which smoothes each voxel's intensity into a weighted average incorporating neighboring voxel intensities. According to the central limit theorem, the smoothing process will alleviate errors and improve the validity of inference associated with the parametric test [5].
2.2.6. Segmentation
Image segmentation or partitioning is the task of distinguishing objects in an image from the image background. This phase segments the MRI image(s) into grey matter (GM), white matter (WM), CSF (cerebro-spinal fluid) and other [14]. GM, WM and CSF represent the three classified tissue types of a healthy brain. Clustering algorithms are used to partition fMRI images into different tissue types. Statistical segmentation can then be achieved by modeling the histogram of the observations as a mixture of the distributions of the different classes into which we want to segment the image [14].
2.3. Data Mining/Statistical Analysis
fMRI data analysis involves assessing brain activation characteristics based on the Blood Oxygen Level Dependent (BOLD) response to a radio frequency induced magnetic field on the human brain. Activation levels have a direct linear relationship to the BOLD response. During the scan series multiple images are captured over a pre-specified time interval. A typical scan series will capture 128 separate three-dimensional images,
which can amount to a large quantity of generated data very quickly. However, there are distinctions within the captured image dataset: some images are captured during a positive stimulus state and others during a null stimulus state. The stimulus timing pattern for the time series is left to the individual investigator to design. The present research considers a case study of four patients with different demographics in terms of sex and age. The voxel intensities extracted from the BOLD fMRI data show a quasi-Gaussian distribution (Fig. 5). The mean and variance of the voxel intensity distributions are calculated, as these define the voxel intensity plot and hence the fMRI response. Since it is expected that the fMRI response for patients with different demographics will be different, the parameters of the voxel intensity distribution should also be statistically different. For this case study, two female patients (aged 21 and 30) and two male patients (aged 20 and 30) have been considered. To confirm or reject the hypothesis, ANOVA has been performed on these patients using SAS®. In the first run, the two female patients were compared with each other to check for a difference in distribution with a difference in age. In the second run, the two male patients were compared to check for a difference in distribution with a difference in age. In the final run, the pooled voxel intensities for the two males were compared to those of the two females to check for a difference in distribution with a difference in sex. All patients are healthy, right-handed individuals.
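For reference, the quantity reported by such a comparison is the one-way ANOVA F statistic; the sketch below computes it in plain Java for groups of voxel intensities. The input values are placeholders, and in the study itself the test was run in SAS.

    // Sketch of a one-way ANOVA: F = (between-group mean square) / (within-group mean square).
    public class OneWayAnova {
        public static double fStatistic(double[][] groups) {
            int k = groups.length;
            int n = 0;
            double grandSum = 0.0;
            for (double[] g : groups) {
                n += g.length;
                for (double v : g) grandSum += v;
            }
            double grandMean = grandSum / n;

            double ssBetween = 0.0, ssWithin = 0.0;
            for (double[] g : groups) {
                double mean = 0.0;
                for (double v : g) mean += v;
                mean /= g.length;
                ssBetween += g.length * (mean - grandMean) * (mean - grandMean);
                for (double v : g) ssWithin += (v - mean) * (v - mean);
            }
            double msBetween = ssBetween / (k - 1); // model mean square
            double msWithin = ssWithin / (n - k);   // error mean square
            return msBetween / msWithin;            // F value
        }

        public static void main(String[] args) {
            // Placeholder voxel-intensity samples for two patients (not real data).
            double[][] groups = { {0.91, 0.87, 0.95, 0.90}, {0.82, 0.85, 0.80, 0.88} };
            System.out.println("F = " + fStatistic(groups));
        }
    }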
Figure 5. Voxel Intensity Distribution
3. Results and Conclusion 3.1. Comparison for age of patient The voxel intensities of 128 images per patient were compared to those of the other patient of the same sex. The ANOVA procedure on the data showed that the two classes of patients are significantly different. A. COMPARISON OF VOXEL INTENSITIES FOR TWO MALES AGED 20 AND 30
    Source              DF      Sum of Squares      Mean Square      F Value      Pr > F
    Model               1       85.8482526          85.8482526       168.82
t0, for ∀ε > 0, ∃t0 > 0.
Theorem 3.1 The extracted FIA is stable under the above extraction algorithm.
Proof: According to the above algorithm (4) for extraction of the FIA and Definition 3.1, the conclusion follows.
Figure 2. Stability of FIA.
4. Simulation results
In order to simplify the simulation, the input vectors x(t) are taken to be two-dimensional. The simulation time T is 100 seconds. The simulation results are shown in Fig. 2. As can be seen from Fig. 2, the simulation results indicate that the extracted FIA is indeed stable.
5. Conclusions
Some questions remain to be solved in the future. How should the structure of the networks be improved so that the learned FIA is more stable? It is also an open question whether higher-order networks induce more advanced and more stable automata. Solving these problems will require further joint investigation by many researchers.
Introduction to Interactive Document Search with Multi-Class Diverse Density Chalita HIRANSOOG a and Lucien MURRAY-PITTS b a The University of Edinburgh, Scotland, U.K. b Celoxica Japan KK, Japan Abstract. An interactive document search is introduced. It is a document search that allows a user to report to the search engine their opinion on how relevant each document is. The search engine then uses this information to narrow down the list of documents given to the user as a result of the search. It is proposed that there are three types of keywords that the search engine needs to uncover: general keywords, hidden keywords, and negative keywords. The problem is then redefined as a Multi-Class Multiple-Instance learning problem, and the Multi-Class Diverse Density method developed as part of this research is used to find the solution. Keywords. interactive document search, multiple-instance learning, multi-class multiple-instance learning, multi-class diverse density
1. Introduction
The majority of research on document search engines concentrates on getting a perfect search. Such search methods list documents in order of relevance using information from sources such as citation information, instead of getting further help from the user to direct the search. By the nature of document search, the search is driven by keywords the user has in mind, which can sometimes be ambiguous even to the user until after they have seen a number of documents. This paper sets out an interactive document search method that allows a user to report to the search engine their opinion on how relevant each document is. The search then uses this information to narrow down the list of documents given back to the user for further consideration. This interactive document search task has characteristics that match those of the multi-class multiple-instance learning framework discussed in Section 2. The Multi-Class Diverse Density method is proposed in Section 3. Section 4 describes the problem of interactive document search in detail, followed by the experiment and conclusion in Sections 5 and 6 respectively.
2. Multi-Class Multiple-Instance Learning Framework
The multi-class framework is proposed as an extension to the original multiple-instance learning framework [1]. Each example presented to a learning algorithm, or learner, is a set (or bag) of instances, but not all of these instances are responsible for the label of the example. In the original framework, a bag will be labelled positive if at least one
instance in the bag is positive, while a bag will be labelled negative if all the instances in the bag are negative. In the multi-class framework, by comparison, a bag will be labelled according to the instance that belongs to the most influential class. Consider Figure 1: the solution to the multi-class problem is at least two points in a feature space. Point A is the point underlying the positive class, while point B is for the negative class. The order of influence depends on the class label of a bag that contains both A and B. The class of A is more influential than the class of B if such a bag is labelled with the class of A, and vice versa. This structure also applies to problems with more than two classes.

Figure 1. A two-dimensional feature space diagram describing the multi-class multiple-instance learning problem. '+1' represents an instance in positive bag number one, while '-1' represents an instance in negative bag number one, and so on.
3. Multi-Class Diverse Density
3.1. Model
The Multi-Class Diverse Density method is developed as an extension of the Diverse Density method [2] for solving the multi-class multiple-instance learning problem. The Multi-Class Diverse Density of concept t being a target concept underlying a particular class r, DDm(t^r), is the probability that t is a target concept given the training examples: DDm(t^r) = Pr(t | B_1^1, …, B_n^1, B_1^2, …, B_m^2, …, B_1^p, …, B_o^p), where B_o^p is bag number o of a p-class bag. Referring to a similar proof as in [2] using Bayes' rule, where Pr(t|B_i) = Pr(t|B_i1, B_i2, …, B_ij) if bag i has j instances,
    DDm(t^r) = (const) · ∏_{1≤i≤n} Pr(t|B_i^1) · ∏_{1≤i≤m} Pr(t|B_i^2) · … · ∏_{1≤i≤o} Pr(t|B_i^p)        (1)
The effect of joint instances J (instances that exist both in at least one bag of class p and in at least one bag of one of the other classes) is then introduced into the probability model. Intuitively, Pr(t|B_i^r) is high if t is close to at least one of the instances in an r-class bag. Further, where class p is other than class r, the multiplication of Pr(t|B_i^p) is Pr(t|J) and is low if t is close to at least one of the joint instances (J) existing in this p-class bag. Equations (2)-(3) are the revised all-or-nothing density estimator and Equations (4)-(5) are the revised version of the noisy-or model.

    Pr(t|B_i^r) = 1 if ∃j such that B_ij^r ∈ t, and 0 otherwise        (2)

    Pr(t|J) = 0.5^(nc−2) · F if ∃j such that J_j ∈ t, and 1 otherwise        (3)

    Pr(t|B_i^r) = 1 − ∏_{1≤j≤p} (1 − Pr(B_ij^r ∈ c_t))        (4)

    Pr(t|J) = ∏_{1≤j≤p} (1 − Pr(J_j ∈ c_t)) + 0.5^(nc−2) · F · |∏_{1≤j≤p} (1 − Pr(J_j ∈ c_t))|        (5)
where j is the j-th instance of a class-r bag i or of the joint instances, p is the total number of instances in bag i or of joint instances, nc is the number of classes whose bags contain concept t, and the fraction F is to be ≤ 0.5^(total number of classes − 1). Pr(B_ij^r ∈ c_t) is the probability of a particular instance being the target concept, and Pr(J_j ∈ c_t) is that of a particular joint instance. Both use a Gaussian-like distribution of the form Pr(B_ij^r (or J_j) ∈ c_t) = exp(−Σ_{1≤k≤l} (B_ijk^r (or J_jk) − c_tk)²). With this model, the addition of DDm(t^r) (ADD-DDm) over every class can be used as an indicator of the order of influence of the underlying concepts.
3.2. Experiment on Artificial Data
In this experiment, one target concept underlying the positive class and another underlying the negative class are randomly selected from an artificial data set of range {100,100}. The order of influence of the underlying concepts is also randomly selected. Based on a similar experiment in [2], the learning algorithm learns from 10 bags of instances, 5 for each type of bag label, with 50 instances in each bag. The algorithm uses the noisy-or estimator model. One example of the learning tasks was to learn that concept {30,15} was underlying positive-class bags and had more influence than concept {57,68}, which was underlying negative-class bags. Apart from concept {30,15} with ADD-DDm = 1 and concept {57,68} with ADD-DDm = 0.5, the ADD-DDm of other concepts in the concentration area (4 around {30,15} and 8 around {57,68}) was only between 0.003 and 0.0149, while the ADD-DDm of the rest was very close to 0. The very high peaks of ADD-DDm occurred only at the underlying concepts, and the height of each peak accorded with the order of influence.
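To make the estimator concrete, the sketch below evaluates the noisy-or probability of Eq. (4), together with the Gaussian-like instance model, for one candidate concept and one bag. It is a simplified illustration with toy values, not the authors' implementation.

    // Sketch of the noisy-or estimate of Eq. (4) with the Gaussian-like instance
    // model Pr(B_ij in c_t) = exp(-sum_k (B_ijk - c_tk)^2).
    public class DiverseDensitySketch {

        static double instanceProbability(double[] instance, double[] concept) {
            double sq = 0.0;
            for (int k = 0; k < concept.length; k++) {
                double d = instance[k] - concept[k];
                sq += d * d;
            }
            return Math.exp(-sq);
        }

        // Pr(t | B_i^r) = 1 - prod_j (1 - Pr(B_ij^r in c_t))
        static double noisyOr(double[][] bag, double[] concept) {
            double prod = 1.0;
            for (double[] instance : bag) {
                prod *= 1.0 - instanceProbability(instance, concept);
            }
            return 1.0 - prod;
        }

        public static void main(String[] args) {
            double[] concept = {30, 15};                              // candidate concept t
            double[][] positiveBag = { {30, 15}, {57, 68}, {2, 90} }; // toy bag of instances
            System.out.println("Pr(t|bag) = " + noisyOr(positiveBag, concept));
        }
    }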
4. Interactive Document Search as a Multi-Class Multiple-Instance Learning Problem
The interaction back and forth between a user and a search engine fits nicely with the multiple-instance learning framework described in Section 2, if each document is considered as a bag of instances and each word within a document is considered as an instance of that bag. Each document will be labelled into a different class by the user depending on how close its content is to the user's search idea. Each document is also a collection of many words, of which only a portion constitute its label. In other words, keywords of each of the three types mentioned below underlie different classes of documents. Because all these keywords, and not just one type, must be discovered for the search engine to be able to refine its search, the problem becomes a multi-class one.
• General keywords: normally supplied by a user.
• Hidden keywords: considered as a part of, or a subset of, another keyword. For example, if 'a learning algorithm' is a keyword, any learning algorithm such as Neural Networks, Reinforcement Learning and the like becomes a hidden keyword, because a user could be interested in a document that describes at least one of these methods.
• Negative keywords: if they appear in a document, that document is removed from the user's desired list. For example, after reading through some documents for a while, a user may realise that they are only interested in documents about Neural Networks among all other algorithms; any learning algorithm other than Neural Networks should then become a negative keyword. Most search engines do not pay attention to this type of keyword, when in fact it could narrow down the search dramatically.
5. Experiment and Results
5.1. Bag Generator
The bag generator produces a list of words and word frequencies as they appear in each document. The generator has a library of articles, pronouns, prepositions and other grammatical words that do not affect the main idea of a sentence; those words are removed from the list by the generator. The generator also maintains another library in which each new word encountered by the generator is assigned a unique identification number. This unique identification number (UID) is used as an instance in a bag, similar to what was carried out on the artificial data in Section 3.2 (a sketch of this step is given at the end of this section).
5.2. Experiment
The initial list of documents is 50 randomly selected documents listed during a search using the engine from the http://citeseer.ist.psu.edu website with 'learning' as the keyword. Only the abstract of each document is considered for the purpose of this experiment. The task is for multi-class diverse density to discover that Neural Network is a hidden keyword while Reinforcement Learning is a negative keyword. From the 50 documents, 10
documents are randomly selected and labelled based on the two keywords above. The learning algorithm learns from these 10 documents and outputs a more refined list, taken from the first 50 documents, based on the new keywords learned. Another 10 documents are randomly selected from this second list and the procedure is repeated until the list cannot get any smaller. The whole experiment is repeated ten times. It was found that it took an average of 3 iterations before the list could not get any smaller, and each time the list was, on average, 50% smaller than the previous list.
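The bag-generation step of Section 5.1 might look like the following sketch; the stop-word list is a tiny placeholder rather than the authors' library.

    import java.util.*;

    // Sketch of the bag generator: strip grammatical words, give each new word a
    // unique identification number (UID) and record the word frequency for one
    // document (one bag).
    public class BagGenerator {
        private final Set<String> stopWords =
                new HashSet<>(Arrays.asList("a", "an", "the", "of", "in", "and", "to", "is", "for"));
        private final Map<String, Integer> uidLibrary = new HashMap<>(); // word -> UID

        public Map<Integer, Integer> generateBag(String documentText) {
            Map<Integer, Integer> bag = new HashMap<>(); // UID -> frequency
            for (String token : documentText.toLowerCase().split("[^a-z]+")) {
                if (token.isEmpty() || stopWords.contains(token)) {
                    continue; // skip grammatical words and empty tokens
                }
                Integer uid = uidLibrary.get(token);
                if (uid == null) {
                    uid = uidLibrary.size() + 1; // first encounter: assign a new UID
                    uidLibrary.put(token, uid);
                }
                bag.merge(uid, 1, Integer::sum); // count the word frequency
            }
            return bag;
        }

        public static void main(String[] args) {
            BagGenerator generator = new BagGenerator();
            System.out.println(generator.generateBag(
                    "Learning to learn: a survey of reinforcement learning methods"));
        }
    }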
6. Conclusion
This paper is a proof of concept that an interactive document search task can be achieved using Multi-Class Diverse Density. As further work, word frequency will be incorporated into the calculation of Multi-Class Diverse Density.
Acknowledgements Special thanks to Lucien Murray-Pitts for the development of Bag Generator.
References [1] T.G. Dietterich, R.H. Lathrop and T. Lozano-Perez, Solving the multiple-instance problem with axis-parallel rectangles, Artificial Intelligence 98(1-2) (1997), 31–71. [2] O. Maron and T. Lozano-Perez, A framework for multiple-instance learning, Advances in Neural Information Processing Systems 10 (1998), 570–576.
A New Method for Classifying Customer Purchasing Power Junzhong Ji 1, Fan Chen and Chunnian Liu Beijing Municipal Key Laboratory of Multimedia and Intelligent Software Technology Abstract. Personalized recommendations in an intelligent B2C portal are an important topic in Web Intelligence. Aiming at the identification of customer purchasing power, this paper proposes a classification method based on Ant Colony Optimization (ACO), in which an artificial ant constructs a classification rule in light of virtual pheromone associated with attribute-value pairs. The paper then illustrates the method with a case study and discusses its application in personalized recommendation. Keywords. Personalized recommendation, Ant Colony Optimization, Customer purchasing power
1. Introduction
Web Intelligence (WI) is an emerging research field that involves versatile practical applications employing Artificial Intelligence (AI) and advanced Information Technology (IT) on the Web and Internet [1,2]. In light of the idea of the World Wide Wisdom Web [3], personalization is one of the major capabilities of the new generation of WI. Moreover, personalized demands have increased rapidly along with the development of retail web sites in E-commerce. Thus, personalized recommendation for an intelligent B2C portal is an important research topic in WI. In [4], McCarthy defined personalization as the ability to customize each individual customer's experience on the web. This viewpoint makes clear that knowledge discovery in customer information and behaviors is very important for every personalized recommendation system. Consequently, more and more new methods are appearing to find and express customers' behavior patterns, by which commerce intelligence can be implemented. We emphasized this problem in our previous work [9]. The Ant Algorithm (AA) is an approach to stochastic optimization derived from bionics; it simulates the natural behaviors of social insects such as ants and bees. Although each ant performs only its own simple tasks, a colony of ants is capable of solving complex combinatorial optimization problems through cooperation and information transfer. Hence, the ant algorithm is also called an Ant Colony Optimization (ACO) algorithm. Owing to its good qualities, ACO has been applied to various classical combinatorial optimization problems [5]. Recently, there have been a few new explorations of using ant colony algorithms in data mining and web mining [6,7,8]. This paper is the first attempt to apply an ant algorithm to personalized commodity recommendations.
1 College of Computer Science and Technology, Beijing University of Technology. Tel.: +86 010 673910891076; Beijing 100022, China E-mail:
[email protected].
2. Optimization Mechanism of an Ant Colony Algorithm
Ant algorithms (AA) are based on the behaviour of real ants, which are able to find the shortest path between a food source and their nest without any external guidance. Even when the environment changes, ants can establish a new shortest path adapted to the change. Extensive research has shown that ants communicate through a specific medium called pheromone: each moving ant lays some pheromone on its path, marking the path with a trail of this substance. Pheromone gradually evaporates over time, so the more ants follow a given trail, the more attractive that trail becomes to other ants. As a result, pheromone accumulates faster on the shorter path as the number of ants increases. Because ants prefer to follow trails with larger amounts of pheromone, eventually all ants converge on the shortest path. This collective behaviour is a form of swarm intelligence, and on the basis of this mechanism ACO has become one of the best-known methods for solving global optimization problems.
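To make the mechanism concrete, the short Python sketch below is an editorial illustration only (it is not part of the original paper, and all parameter values are invented): ants choose between two trails with probability proportional to pheromone, the shorter path is reinforced more per traversal, and evaporation removes unreinforced pheromone.

import random

def simulate_two_paths(steps=200, lengths=(1.0, 2.0), evaporation=0.1, deposit=1.0):
    """Toy illustration of ACO path selection: the shorter path accumulates more pheromone."""
    pheromone = [1.0, 1.0]                       # initial trail strength on each path
    for _ in range(steps):
        # an ant picks a path with probability proportional to its pheromone
        r = random.random() * sum(pheromone)
        path = 0 if r < pheromone[0] else 1
        # shorter paths are completed more often per unit time, so they receive
        # proportionally more pheromone per simulated step (deposit / length)
        pheromone[path] += deposit / lengths[path]
        # evaporation keeps old, unreinforced trails from dominating forever
        pheromone = [p * (1.0 - evaporation) for p in pheromone]
    return pheromone

print(simulate_two_paths())   # pheromone on path 0 (the shorter one) is typically the larger value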
3. Customer Purchasing Power Classification Based on ACO
Personalized recommendation typically involves identifying each customer's potential purchasing power. Many factors may reflect purchasing power, such as income, occupation, age, gender, city of residence and recent consumption; these factors reflect the customer's underlying economic characteristics. When a customer enrols at a retail web portal, some basic economic information can be obtained and kept in the portal's database. In Ref. [9] we gave a combination of utility functions whose value can be used to evaluate a customer's purchasing power. This paper proposes a customer classification method that uses ACO to discover classification rules [6]. Each customer is classified according to rules of the form
IF <factor1 AND factor2 AND ...> THEN <class>.
Each factor is a condition of the form <attribute = value>, where the value belongs to the attribute's domain. For example, although customers may live in many different cities, the domain of the city attribute can be normalized to the set {big, medium, small}. The customer's class describes his purchasing power and takes a value in the class set {high, medium, low}. To discover classification rules, the algorithm simulates the process by which ants look for food. The major steps of the customer classification method are rule construction, rule pruning and pheromone updating. Based on these steps, we design an algorithm called CC-ANT for customer classification, shown below. CC-ANT borrows the ideas of the ant colony mechanism, but employs ants to search for customer classification rules instead of food.
4. A Case Study
4.1. Learning Rules
This section gives an example to illustrate CC-ANT.
Algorithm CC-ANT
  Initialization: set values for all parameters;
    TrainingSet = {economic information of all known customers};
    ResultRuleSet = [ ];
  Do
    Initialize all trails with the same amount of pheromone;
    Repeat
      1. Rule construction: construct a classification rule by adding one factor at a time;
      2. Rule pruning: prune the just-constructed rule;
      3. Pheromone updating: update the pheromone of all trails (increase or decrease);
    Until (stopping criteria)
    Choose the best rule out of all rules constructed by all ants in this iteration;
    Put the best rule into ResultRuleSet;
    TrainingSet = TrainingSet − {cases correctly covered by the best rule};
  While (stopping condition is not met)
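As an editorial illustration only (not the authors' implementation), the Python sketch below mirrors the structure of CC-ANT above: the factor-selection probabilities, the quality measure and the stopping conditions are simplified placeholders, and customers are represented as dictionaries such as {'income': 'low', 'living city': 'small', 'gender': 'male', 'class': 'low'}.

import random
from collections import Counter

def covered(rule, cases):
    """Cases satisfying every <attribute = value> factor of the rule."""
    return [c for c in cases if all(c[a] == v for a, v in rule.items())]

def quality(rule, label, cases):
    """Fraction of covered cases whose class equals the rule consequent."""
    cov = covered(rule, cases)
    return 0.0 if not cov else sum(c["class"] == label for c in cov) / len(cov)

def cc_ant(cases, attributes, n_ants=10, n_iter=15, evaporation=0.1, min_cases=3):
    """Simplified CC-ANT: returns a list of (factors, class) rules."""
    result_rules, training = [], list(cases)
    while len(training) >= min_cases:                       # simplified outer stopping test
        # one pheromone value per <attribute = value> trail
        pher = {(a, v): 1.0 for a in attributes for v in {c[a] for c in training}}
        best_rule, best_label, best_q = None, None, -1.0
        for _ in range(n_iter):
            for _ in range(n_ants):
                # 1. rule construction: add one factor at a time, choosing each
                #    attribute's value with probability proportional to pheromone
                rule = {}
                for a in random.sample(list(attributes), len(attributes)):
                    values = [v for (a2, v) in pher if a2 == a]
                    v = random.choices(values, weights=[pher[(a, v)] for v in values])[0]
                    if covered({**rule, a: v}, training):   # keep the rule satisfiable
                        rule[a] = v
                label = Counter(c["class"] for c in covered(rule, training)).most_common(1)[0][0]
                # 2. rule pruning: drop factors that do not reduce rule quality
                for a in list(rule):
                    shorter = {k: w for k, w in rule.items() if k != a}
                    if shorter and quality(shorter, label, training) >= quality(rule, label, training):
                        rule = shorter
                q = quality(rule, label, training)
                # 3. pheromone updating: evaporate all trails, reinforce the used ones
                for key in pher:
                    pher[key] *= (1.0 - evaporation)
                for a, v in rule.items():
                    pher[(a, v)] += q
                if q > best_q:
                    best_rule, best_label, best_q = dict(rule), label, q
        result_rules.append((best_rule, best_label))
        training = [c for c in training
                    if not (all(c[a] == v for a, v in best_rule.items())
                            and c["class"] == best_label)]
    return result_rules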
Figure 1. A rule tree learned by the CC-ANT (the tree grows from Start = { } and branches on income, living city and gender, ending in the classes high, medium and low).
We assume that the attributes gender, income and living city take values in the sets {male, female}, {high, medium, low} and {large, middle, small}, respectively. For real-world data, customer attributes can be standardized into these value domains. If CC-ANT learns the results shown in Fig. 1, we obtain the following customer classification rules:
IF income=low THEN class=low
IF income=high AND gender=female THEN class=high
IF income=medium AND living city=medium AND gender=female THEN class=medium
IF income=medium AND living city=small AND gender=male THEN class=low
The discovered rules are simple and intuitive, so the knowledge is easy for merchants to interpret and use.
4.2. Rule Application in a Personalized Recommendation
The customer classification rules learned by CC-ANT can be applied in a personalized recommendation system. Using these rules, we can predict the class of every new online customer and then decide which price range of commodities to recommend in each case. That is, the system purposefully provides the customer
with appropriately priced commodities according to his class. This reflects personalized recommendation from the viewpoint of potential purchasing power.
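The hedged sketch below (our illustration; the price bands and the default class are hypothetical, while the rule list is the one read off Fig. 1 above) shows how a recommendation front end could match a new customer against the learned rules.

# Rules in the order learned (first match wins), taken from the list above.
RULES = [
    ({"income": "low"}, "low"),
    ({"income": "high", "gender": "female"}, "high"),
    ({"income": "medium", "living city": "medium", "gender": "female"}, "medium"),
    ({"income": "medium", "living city": "small", "gender": "male"}, "low"),
]

# Hypothetical mapping from purchasing-power class to a recommended price band.
PRICE_BAND = {"high": (200, 1000), "medium": (50, 200), "low": (0, 50)}

def classify(customer, default="medium"):
    """Return the purchasing-power class given by the first matching rule."""
    for factors, label in RULES:
        if all(customer.get(a) == v for a, v in factors.items()):
            return label
    return default          # the learned rules do not cover every attribute combination

def recommend_band(customer):
    return PRICE_BAND[classify(customer)]

print(recommend_band({"income": "high", "gender": "female", "living city": "big"}))   # (200, 1000)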
5. Conclusions
In this paper we proposed a new classification approach for customer purchasing power, which learns classification rules from data with CC-ANT. By means of a case study, we presented the learned rules and their application in personalized recommendation. To the best of our knowledge, this is the first attempt to apply ACO to the classification of customer purchasing power in WI. The research therefore enriches the technologies of WI, and our preliminary investigation shows that the classification method is practicable. Our future work includes finding suitable real-world data from a web site to verify the effectiveness and practicability of the method more thoroughly, and comparing it with other known methods on various performance measures.
Acknowledgements This work is supported by the NSFC major research program: “Basic Theory and Core Techniques of Non-Canonical Knowledge” (60496322, 60496327), the Institute of Beijing Educational Committee research program : “Uncertain Knowledge Representation and Reasoning in Web Intelligence”(KM200610005020).
References
[1] N. Zhong, J. Liu, Y.Y. Yao and S. Ohsuga, Web Intelligence (WI), Proc. 24th IEEE Computer Society International Computer Software and Applications Conference (2000), 469–470.
[2] N. Zhong, J. Liu and Y.Y. Yao (eds.), Web Intelligence, Springer (2003).
[3] J. Liu, Web Intelligence (WI): What Makes Wisdom Web?, Proc. IJCAI'03 (2003), 1596–1601.
[4] J.F. McCarthy, The virtual world gets physical: Perspectives on personalization, IEEE Internet Computing 5(6) (2001), 48–53.
[5] M. Dorigo, V. Maniezzo and A. Colorni, The ant system: Optimization by a colony of cooperating agents, IEEE Transactions on Systems, Man, and Cybernetics 26(1) (1996), 29–41.
[6] R.S. Parpinelli, H.S. Lopes and A.A. Freitas, Data mining with an Ant Colony Optimization algorithm, IEEE Transactions on Evolutionary Computation, special issue on Ant Colony Algorithms, 6(4) (2002), 321–332.
[7] A. Abraham and V. Ramos, Web usage mining using artificial ant colony clustering and genetic programming, Proc. Congress on Evolutionary Computation, IEEE Press (2003), 1384–1391.
[8] N. Holden and A.A. Freitas, Web page classification with an ant colony algorithm, Parallel Problem Solving from Nature – 8th International Conference, Springer Verlag, LNCS (2004), 1092–1102.
[9] J.Z. Ji, C.N. Liu, Z.Q. Sha and N. Zhong, Personalized recommendation based on a multilevel customer model, International Journal of Pattern Recognition and Artificial Intelligence 19(7) (2005), 895–916.
Employ MultiAgent to Make BPM Proactive
Qing Mao, Lei Yuan and Jian Huang
Dept. of Electrical Information Engineering, Xiangfan University, Xiangfan, Hubei Province, 441053, P.R. China
E-mail: [email protected], [email protected], [email protected]
Abstract. Process-orientation and agent-orientation are both emerging views of software, each with its own advantages and disadvantages. A striking contrast is that process-oriented BPM is passive while agent-oriented MAS is proactive. This paper therefore employs MultiAgent technology to eliminate the passivity of BPM and make BPM proactive. It gives the architecture of the proposed MultiAgent-employed BPM and the design of the MultiAgents. This work is important because it serves as a bridge from existing, static views of BPM to future, agent-based, dynamic BPM.
Keywords. MultiAgent, BPEL4WS, proactive, Business Process Management
1. Introduction
Moving from the Information Age to the Process Age, the past five years have witnessed great progress in the theory and practice of BPM. At the same time, the intelligent agent represents a distinct, agent-oriented category of software. This paper details the design and development of a proactive BPM (Business Process Management) system that employs a MultiAgent System (MAS). The impact of this work is broad, as it touches many existing and emerging technologies such as BPM systems, Web services, Internet agents, application integration and XML-based coordination media.
A BPM System (BPMS) treats the process as an abstract data type [2], so that business processes can be managed in a new, holistic manner; it has an architecture and a paradigm of its own, which regards Web services and the Service-Oriented Architecture (SOA) as an underlying technical operating system. For BPM, the significance of the Web services specifications is that they provide the first truly workable architecture for building complex, interoperable computing processes over the Internet [3]. As a result, BPM gains advantages such as transparency and application independence. Unfortunately, Web services are passive: they know only about themselves, possess no meta-level awareness, are not designed to utilize or understand ontologies, and are not capable of intentional communication, autonomous action or deliberately cooperative behaviour [4]. In turn, this passivity of Web services makes BPM passive.
Consequently, it is necessary to make BPM proactive by eliminating this deficiency. Software agents are good candidates because, in comparison with Web services, they possess almost all of the capabilities that Web services lack. Furthermore, it is possible to integrate agents into BPM frameworks. This paper first discusses the architecture of the MultiAgent-employed BPM system, then presents the design of the BPM agents in detail, and finally concludes with a discussion of the benefits of this approach.
2. Architecture Design
The applicability of MAS to workflow enactment, and the notion of using passive Web services as externally defined behaviours of proactive agents, have been noted previously, for example in [6], [7], [8] and [9]. Building on these works, this paper goes further and employs MAS within BPM. The main idea is as follows: on one hand, the execution of BPM essentially comes down to the execution of Web services; on the other hand, according to the works noted above, agents can be integrated with Web services. Therefore, this paper proposes an architecture for a MultiAgent-employed BPM system, shown in Figure 1.
Figure 1. The architecture of the MultiAgent-employed BPM system (components: projects of BPM, schema editor, mapper, rules composer, BPEL4WS, orchestration designer, Web services, MultiAgent services, Xindice).
Further, Figure 2 shows the detailed design of the employed MultiAgent services. In application integration, the components are independently executing applications that are integrated through the asynchronous exchange of data and control, whereas Web services are passive entities that do not execute until called. Therefore we can wrap them in proactive agents that possess their own thread of control. The agents are then integrated to enact the BPM, coordinated through a shared data space and the asynchronous exchange of messages. Here we envision that the role of an agent is to search for ways to optimize the BPM in which it is engaged. In detail, agents can be viewed as independent applications that provide services to one another through loosely coupled, asynchronous message exchange. They are able to take advantage of the non-blocking nature of their messaging by overlapping other processing with their communicative acts. Accordingly, the agents can use their autonomy to decide what work to perform: for example, finding other service partners that provide better quality of service, or learning from interaction histories with existing partners so as to maximize the utility of future interactions. [10]
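The idea of wrapping a passive, call-and-return Web service in a proactive agent that owns its thread of control and exchanges messages asynchronously can be sketched generically as follows. This is a Python illustration with threads and queues, not the JADE/WSAG implementation referred to below; the service call, agent names and case identifiers are placeholders.

import threading
import queue
import time

def invoke_web_service(request):
    """Placeholder for a synchronous (passive) Web service call."""
    return {"result": "processed " + str(request["payload"])}

class ProcessAgent(threading.Thread):
    """Proactive proxy: owns its own thread and communicates asynchronously."""

    def __init__(self, name, shared_space):
        super().__init__(name=name, daemon=True)
        self.inbox = queue.Queue()            # asynchronous, non-blocking message exchange
        self.shared_space = shared_space      # stands in for the shared XML data space
        self.partners = []

    def send(self, partner, message):
        partner.inbox.put(message)            # caller does not wait for the receiver

    def run(self):
        while True:
            msg = self.inbox.get()                          # wait for the next request
            response = invoke_web_service(msg)              # wrap the passive service
            self.shared_space.setdefault(msg["case"], {})[self.name] = response
            for p in self.partners:                         # hand work to downstream partners
                self.send(p, {"case": msg["case"], "payload": response["result"]})

shared = {}
credit, shipping = ProcessAgent("credit-check", shared), ProcessAgent("shipping", shared)
credit.partners = [shipping]
credit.start(); shipping.start()
credit.inbox.put({"case": "order-1", "payload": "new order"})
time.sleep(0.2); print(shared)   # both agents have written their contribution to the case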
Figure 2. MultiAgent services of BPM (the Web Services Agent Gateway connects target agents to the distributed BPM agents).
Besides, this architecture is built on open standards for increased interoperability, including the primary Web service standards SOAP, WSDL and UDDI and, in the agent space, the FIPA standards [11]. The following software components were used: JADE [12] as a FIPA-compliant agent development environment; the Web Service Agent Gateway (WSAG) [13] as a bridge between synchronous Web service calls and asynchronous agent messaging; and Xindice [14], a networked, native XML database used as the coordination medium.
3. Agent design
There are two types of agents that enact the BPM: target agents and distributed BPM agents. A target agent interfaces the distributed BPM agents to the WSAG, while the latter are the proactive proxies for the passive Web services they represent.
3.1. Target Agent
Figure 3 illustrates the structure of a target agent. Target agents receive messages both from the WSAG and from other distributed BPM agents; the two distinct execution paths in Figure 3 denote this. The boxes on an execution path simply indicate that some processing is occurring, while the two squiggly lines denote a "layer fold" used to indicate the interaction of the target agent with the middle agents. [10] The steps are: (a) a BPM enactment request arrives from the WSAG; (b) a collection is created for the case and the request message is stored in Xindice; (c) the DF (Directory Facilitator) is used to locate outgoing partner roles and the MTS (Message Transport Service) to deliver ACL Request message(s); (d) a DWFA (Distributed Workflow Agent) message is received indicating case completion; (e) the BPM response is retrieved from Xindice; (f) the MTS is used to send the response to the gateway agent.
Figure 3. Diagram of a Target Agent.
Figure 4. Diagram of a Distributed BPM Agent.
3.2. Distributed BPM Agents Figure 4 reflects the implementation of the distributed BPM agents, in which the dashed rounded rectangle is a placeholder symbol for a passive component. The distributed BPM agents share the same code base; they are simply instantiated with different BPM partner information. This is consistent with the fact that the primary distinction between these agents is the Web service they represent. [10] (a) slot to hold run-time assignment of Web service; (b) request for Web service invocation; (c) case management and Xindice access for building Web service request message; (d) dynamic bind and invocation of the Web service; (e) Xindice storage of SOAP request/response pair; (f) Utilize DF to locate outgoing partner roles and MTS to deliver ACL Request message(s).
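Steps (a)–(f) can be paraphrased in plain code. In the sketch below (our illustration, not the JADE implementation), a dictionary stands in for Xindice, a simple registry stands in for the DF, direct method calls stand in for the MTS, and the Web service invocation, endpoints and role names are invented placeholders.

case_store = {}        # per-case collection, standing in for Xindice (steps (c) and (e))
directory = {}         # role name -> agents, standing in for the Directory Facilitator

def call_web_service(endpoint, soap_request):
    """Placeholder for the dynamically bound Web service invocation of step (d)."""
    return {"body": {"status": "done", "echo": soap_request["input"]}}

class DistributedBPMAgent:
    def __init__(self, role, endpoint, partner_roles):
        self.role = role
        self.endpoint = endpoint              # (a) run-time assignment of the Web service
        self.partner_roles = partner_roles
        directory.setdefault(role, []).append(self)

    def on_request(self, case_id, payload):   # (b) request for a Web service invocation
        # (c) case management: build the service request from what is already stored
        case = case_store.setdefault(case_id, {})
        soap_request = {"case": case_id, "input": payload, "context": dict(case)}
        # (d) invoke the Web service assigned to this agent
        soap_response = call_web_service(self.endpoint, soap_request)
        # (e) store the request/response pair under the case collection
        case[self.role] = {"request": soap_request, "response": soap_response}
        # (f) locate the outgoing partner roles and deliver the next request
        for role in self.partner_roles:
            for partner in directory.get(role, []):
                partner.on_request(case_id, soap_response["body"]["status"])

check = DistributedBPMAgent("credit-check", "http://example.org/credit", ["shipping"])
ship = DistributedBPMAgent("shipping", "http://example.org/ship", [])
check.on_request("order-7", "new purchase order")
print(sorted(case_store["order-7"]))   # ['credit-check', 'shipping']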
4. Conclusion
By integrating agents into BPM frameworks in a natural way, the work described in this paper opens up a new avenue of research on BPM. In this way we eliminate the passivity of BPM and thus make BPM proactive. The following advantages can be expected: managing business process enactment in an environment in which resources fluctuate at runtime, handling exceptions in a context-dependent manner, provisioning problem-solving resources according to prevailing circumstances, and allowing loose coupling between inter-organisational business activities.
References
[1] XML Cover Pages, Business Process Execution Language for Web Services (BPEL4WS), xml.coverpages.org/bpel4ws.html.
[2] P. Fingar, Business Process Management: The Third Wave, www.bpm3.com.
[3] BizTalk Server 2004 Architecture White Paper, www.microsoft.com/biztalk/techinfo/whitepapers.
[4] M.N. Huhns, Agents as Web services, IEEE Internet Computing 6(4) (2002), 93–95.
[5] W3C Web Services Architecture Working Group, www.w3.org/TR/2004/NOTE-ws-arch-20040211/.
[6] P.A. Buhler and J.M. Vidal, Enacting BPEL4WS specified workflows with MultiAgent Systems, www.jmvidal.cse.sc.edu/papers/buhler04a.pdf.
[7] M.P. Singh and M.N. Huhns, MultiAgent systems for workflow, International Journal of Intelligent Systems in Accounting, Finance and Management 8 (1999), 105–117.
[8] P. Buhler and J.M. Vidal, Integrating agent services into BPEL4WS defined workflows, USC CSE TR-2004-003, jmvidal.cse.sc.edu/papers/buhlertr04a.pdf.
[9] D. Greenwood and M. Calisti, Engineering Web service – agent integration, www.whitestein.com/resources/papers/ieeesmc04.pdf.
[10] P.A. Buhler and J.M. Vidal, Enacting BPEL4WS specified workflows with MultiAgent Systems, jmvidal.cse.sc.edu/papers/buhler04a.pdf.
[11] The Foundation for Intelligent Physical Agents, www.fipa.org.
[12] Telecom Italia Lab, JADE (Java Agent DEvelopment Framework), sharon.cselt.it/projects/jade/.
[13] Whitestein Information Technology Group AG, Web Services Agent Integration project, wsai.sourceforge.net/index.html.
[14] The Apache XML Project, Xindice homepage, xml.apache.org/xindice.
A Kind of Instruction-Learning Model Division Method Based on Information Technology
Yongjiang ZHONG, Shaochun ZHONG, Jiaying TONG and Yamei ZHANG
The Institute of Ideal Information Technology, Northeast Normal University, Changchun, 130024, China
Abstract. Information technology now plays an important role in elementary and middle school education, but one obstacle is insufficient research on teaching and learning styles based on information technology. This article summarizes existing teaching and learning models and proposes a new method for analysing teaching and learning styles based on the use of information technology, incorporating three dimensions: network usage, cooperation style and degree of independence. The results help to improve the planning of modern teaching and learning.
Keywords. Instruction-learning model, Information technology, Media technology, Network technology
Introduction
With the vigorous development of information technology and the ongoing reform of instruction, we must continually advance educational innovation and make full use of modern science and technology to raise the level of educational modernization. With the arrival of the information age and the networked millennium, information technology has become a powerful driving force for renewing educational ideas and transforming education. Using network and information technology, we can create a brand-new educational pattern that takes the student as the centre of teaching.
1. Problems of the Existing Methods of Instruction Model Division Based on the Network
1.1. An Existing Method of Instruction Model Division
Liu Rude proposed that, whether in a traditional setting or in an information-technology environment, instruction models can be roughly divided along two dimensions: acceptance learning versus inquiry learning, and cooperative learning versus personalized learning [1].
These four learning models are introduced separately below:
• Acceptance learning model
• Inquiry learning model [3]
• Cooperative learning model: the discussion learning model and the assisted learning model
• Self-directed (personalized) learning model
As we can see, the four models above are divided along two dimensions of the student's learning — cooperation and inquiry — and reflect well the way of learning advocated by the new curriculum standards. However, this division does not take the learning conditions and the environment into account. The instruction process is sometimes dynamic and fuzzy with respect to any particular model, and it can be very difficult to say concretely which model is being used. The division is therefore absolute and linear, and it makes the instruction model appear absolute and static.
1.2. Another Existing Classification of Instruction Models
According to the form of organization of the distance instruction process and the management of the students' learning process in the network environment, the popular instruction models can be classified as follows:
• Distance instruction model based mainly on collective learning
• Distance instruction model based mainly on individual learning
• Distance instruction model based on group learning
From the above analysis we can see that the existing Internet-based classifications of instruction models have several shortcomings: there are many division methods, and they are complex, chaotic and unclear; the existing divisions lie in isolated two-dimensional planes and are absolute and static; the instruction models do not reflect continuity and fuzziness; some division methods consider only the advantages of the Internet environment and ignore traditional instruction models and methods; and the essential elements of the instruction system — in particular the learning conditions and the learning environment — are not considered comprehensively.
2. A Kind of Instruction-Learning Model Division Method Based on Information Technology
Every instruction model is established on some learning theory. By studying many traditional instruction models, using information technology methods and considering the characteristics of the essential elements of the educational system, we can create new ways of dividing instruction models. Figure 1 shows our classification of instruction-learning models along three dimensions: independence, cooperation and web technology.
We first specify the meaning of the three coordinates, shown in Figure 1.
Figure 1. Three-dimensional chart of the teaching-learning model.
• Active learning: the degree of the student's independence in the learning process; 0 means the student is weakest and the teacher leads strongly, 1 means the student is strongest and the teacher leads weakly.
• Cooperation: the degree of the students' cooperation in the learning process; 0 means the students cooperate weakly, 1 means they cooperate strongly.
• Web technology: the degree to which the student applies network technology in the learning process; 0 means the weakest application of the network, 1 means the strongest. [2]
We divide the entire cube into eight parts; the accompanying table lists the main model contained in each part. From the chart we can see that the lower four parts correspond to models already thoroughly studied in traditional curriculum and instruction research, while the upper four need the guidance of the new curriculum and new instruction theory. If the teacher lets the students solve problems independently in the learning process, the point representing a lesson moves somewhat to the right; otherwise it moves to the left.[3] If the teacher uses information technology in the instruction process, the point moves upward; otherwise it moves downward. If the teacher applies network technology very well in the instruction process, the point falls into model 5. The positions of other instruction-learning models are not exemplified here. The division above solves the problem that instruction-learning models were complex and lacked a unified classification standard. It enables us to see clearly the learning process of each model, the difficulties it involves, how to bring the advantages of network technology into play, and which learning content and disciplines it suits.
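Since each coordinate in Figure 1 is normalized to [0, 1], deciding which of the eight parts of the cube a given lesson falls into amounts to thresholding each dimension. The small sketch below is only our illustration, and the numbering of the parts is an assumption, since the table itself is not reproduced here.

def cube_part(independence, cooperation, web_technology, threshold=0.5):
    """Map a lesson scored on the three axes of Figure 1 to one of the 8 parts.

    The numbering is illustrative only: parts 0-3 form the lower half of the
    cube (little use of network technology), parts 4-7 the upper half.
    """
    i = 1 if independence >= threshold else 0
    c = 1 if cooperation >= threshold else 0
    w = 1 if web_technology >= threshold else 0
    return i + 2 * c + 4 * w

# A teacher-led but highly cooperative lesson taught mostly over the network:
print(cube_part(independence=0.2, cooperation=0.9, web_technology=0.8))   # -> 6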
Whatever instruction-learning model is used, the teacher should let the students solve problems independently as far as possible, have them complete tasks cooperatively, and bring the advantages of network technology into play.
3. Development Tendency
Future educational development will depend even more on network technology and computer technology. The division method of instruction-learning models presented in this article brings out the advantages of information technology. The future development of instruction models will be inseparable from the integration of information technology with the curriculum; it reflects the dynamic development of the instruction process, and we will continue to study the integration of these instruction models with concrete disciplines (for example language, English, mathematics and physics). This integration of educational models will certainly contribute to the development of elementary education in our country's elementary and middle schools.[4]
References
[1] Yongjun Jing, Shaochun Zhong and Xiaochun Cheng, Teaching models and supporting platforms in modern distance education, CACS2003, Luton, September 2003.
[2] Xiaochun Cheng, Teaching model using computer network, ACE UK Journal 11 (2004), ISSN 1476-5837, 42–45.
[3] Y. Li, X. Cheng, X. Liu, S. Zhong and X. Yu, An analogy-based memory model, Proceedings of the IEEE SMC UK-RI Chapter Conference 2004 on Intelligent Cybernetic Systems, Londonderry, U.K., September 7–8, 2004.
[4] Y. Li, X. Cheng, Z. Li, D. Ouyang, M. Cheng, X. Yu and S. Zhong, Semantic organization to enhance active learning based on distributed multimedia instruction resources.
The Application Research of Learning Based on Websites
Jiaying TONG, Shaochun ZHONG, Xiying WANG and Yao WANG
The Institute of Ideal Information Technology, Northeast Normal University, Changchun, 130024, China
Abstract. This article introduces the understanding of topic learning websites in China and the technology used to build them. It compares the similarities and differences between China and other countries in the current state of research on and application of learning based on websites. The article also points out the development tendency of learning based on websites and the insufficiencies of topic learning websites in China.
Keywords. E-learning, Learning based on websites, Topic learning websites, Active media.
1. Introduction
With the development of computer multimedia technology and network technology, the education domain has undergone a process of informatization. The network breaks the limits of time and space; it gives learners a more flexible way of studying and is changing our life, our society and, at the same time, our way of learning. E-learning is a flexible approach for learners: it speeds up informatization, drives the development of network education, and improves the efficiency and quality of study. These developments and changes have an important impact on Active Media Technology (AMT), a new area of intelligent information technology and computer science.
2. The Understanding of Learning Based on Websites in China
In recent years, studies of topic learning websites in China have sprung up like bamboo shoots after a spring rain, yet the concept of a topic learning website has many explanations and no uniform definition. We introduce several explanations here:
• Youru Xie pointed out: "Topic learning websites are learning-resource websites that, in a network environment, carry one or several learning topics related to a certain course or to several courses." [1]
• Kedong Li pointed out: "Topic learning websites are instruction websites constructed on the learning model of 'topic exploring –
website development'. With the support of a topic learning website, teachers and students construct the content and overall structure of a website on one special topic, organize and consult the related resources in the course of study, and create a website endowed with the constructed topic knowledge. On the basis of this website, teachers and students carry out expanding, investigative learning and make the knowledge and content of the website more and more complete." [2]
In addition, we consider a topic learning website to be a website centred on a certain field that develops communicative learning between students and between teachers and students, and that provides resources supporting learning, mutual help and assessment between teachers and students, given sufficient hardware and technical support and maintenance by staff.
3. The Circumstances and Development of Learning Based on Websites in China
According to our analysis of the material, topic learning websites in China generally include several parts, for example topic knowledge, a topic resource storehouse, topic learning application tools and topic learning appraisal. They are made up of modules such as "knowledge and concepts", "practice and operation", "self-appraisal" and "consultation and discussion", and they offer rich auxiliary learning resources such as "search of related specialized material" and "upload and download". The website environment, based on Web technology, is a huge hypermedia information resource storehouse integrating text, images, animation, sound, video and so on. The overall system uses the Browser/Server pattern: the front end uses ASP, JSP and HTML documents developed by the administrators, and the back end uses a SQL Server 2000 database server. ASP integrates the Web pages with the underlying databases, which lets learners browse the study content flexibly from their own machines with a browser, free of the limits of time, place and installed test software.[3]
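The browser/server pattern described above (ASP/JSP pages over a SQL Server 2000 database in the original sites) can be illustrated with a minimal stand-in using Python's standard library and SQLite. This is only a sketch of the same pattern, not the authors' implementation; the table, content and port are invented.

import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for the topic resource storehouse (SQL Server 2000 in the original sites).
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE resources (topic TEXT, title TEXT, body TEXT)")
db.execute("INSERT INTO resources VALUES ('physics', 'Ohm''s law', 'V = I * R')")
db.commit()

class TopicHandler(BaseHTTPRequestHandler):
    """Back end: generates a page for the browser from the underlying database."""
    def do_GET(self):
        topic = self.path.strip("/") or "physics"
        rows = db.execute("SELECT title, body FROM resources WHERE topic = ?",
                          (topic,)).fetchall()
        page = "<html><body>" + "".join(
            "<h2>%s</h2><p>%s</p>" % (t, b) for t, b in rows) + "</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(page.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), TopicHandler).serve_forever()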
4. Application Circumstances of Learning Based on Websites in Other Countries
4.1. The Application Circumstances of Learning Based on Websites in the USA
It has been reported that the number of people learning through websites in the USA grows by 300 percent every year. In 1999, over 7000 Americans obtained knowledge and work skills from E-learning, and staff in over 60% of enterprises received training through E-learning. As E-learning flourishes it will bring a new revolution in the education field, and more and more people learn via the Internet. In the USA there are a great number of learning websites, commonly taking the form of distance learning online and distance education
online. After learning, learners may acquire degrees from various universities; these are comprehensive websites. It is said that the majority of the proprietors of American learning websites are enterprises or institutions, and only a small part are individuals.
4.2. The Application Circumstances of Learning Based on Websites in the UK
We take English learning websites as an example, such as http://www.elearningcentre.co.uk/. In the E-Learning Centre's Information section one finds a large bank of links to E-learning resources focusing on E-learning in the workplace, on professional development, and on further and higher education. We also introduce the structure of the site http://www.quiz-tree.com, which lies in the "School e-Learning Showcase", one of the branches of the Showcase. Learners can use the quizzes not only to test their knowledge but also to prepare for their next exam or simply to learn something new. Learners may take each quiz as many times as they want, and there is no need to sign up. Many of the quizzes include sounds, which makes learning more interesting. To start a quiz, the learner simply clicks on the topic to explore.
4.3. The Application Circumstances of Learning Based on Websites in Canada
The navigation of learning sites in Canada is visual and interesting. The classification of the learning sites is very clear, and the resources involve plenty of interrelated knowledge. The learner's personal character and initiative are taken into account, and the evaluation of the learner's learning situation is timely. However, the organizational formats are limited, and the supporting service system and the international format are not comprehensive.
5. The Differences in Learning Based on Websites between China and Other Countries
Domestic sites were constructed later than overseas ones, but the development speed of many domestic sites is now much faster. The number of overseas sites is smaller than that of domestic ones, but their formats are richer. Regarding learning resources, most resources on these websites come from students' daily life and social practice, with the remainder coming from the curriculum. Regarding the integration of resources, fewer than half of the domestic sites integrate resources on a large scale. As for the services supporting topic-based study, the technical help and the contextual help provided by domestic topic websites are no better than those of overseas ones; the service of overseas sites is very thorough, vivid and visual, and while some domestic websites provide no help at all, this phenomenon does not exist on overseas sites. As far as interaction is concerned, both China and other countries do well, offering e-mail, forums, BBS and so on, which provide a large space for learners to communicate with each other.
Regarding the form in which website activities are organized, domestic sites rely mainly on knowledge presentation (81.82% of them) and contain few designs for exploratory learning activities. Overseas learning websites do better: their styles of organization are varied and colourful, and their designs are imaginative and reasonable, which cultivates the students' ability for active research and gives them more opportunity and freedom. Regarding the assessment offered by learning websites, domestic sites provide relatively little assessment, though in more varied forms; overseas websites pay more attention to assessment, but its forms are not as varied as on domestic sites.
6. Conclusion and Future Work
From the above comparative analysis we can see that the construction and study of topic learning websites in China are still imperfect. The comparison between China and other countries shows mainly the following insufficiencies:
• The number of topic learning websites constructed in China is large, but their quality is not high.
• In the construction of resources for Chinese topic learning websites, the integration of resources is poor.
• The study support services of Chinese topic learning websites are incomplete and not vivid; they generally have no blog.
• The forms of activity organization on Chinese topic learning websites are monotonous, and the activity designs are not rich.
• Chinese topic learning websites are generally free, and they are mostly organized by groups.
With the diligent efforts of educators and researchers, we believe that the construction of learning websites will develop extensively. Lifelong education will become a trend and learning websites will become more popular.
References
[1] Youru Xie and Rui Yin, The teaching design of the topic learning websites, Electrical Education Research Journal, January 2003.
[2] Juan Huang and Kedong Li, The development research of the topic websites and the mentality and methods about carrying on the investigation study, China Electrical Education, May 2003.
[3] Yongping Jiang and Xudu, Set up "Two Main Model" Based Environment Providing Web Resource and Construct Website for Subject Learning, Beijing University publishing company, December 2003.
Demo/Industry Papers
Finger Pointing Using a Single Camera
Hironobu Nakayama, Rieko Kimura and Kunio Sakamoto 1
Interdisciplinary Faculty of Science and Engineering, Shimane University
Abstract. A finger pointing system is described that can specify the position of a pointed target. A 3-D position is generally detected and measured using stereo viewing: a pair of stereo images allows us to obtain 3-D information about the object. However, it is possible to estimate the position of the fingertip from a single image, without stereo images. The length of the arm is approximately calculated from the stature and the shoulder width, and the 3-D position of the fingertip is then estimated using a geometric model of the human body. We have developed a prototype finger pointing system using a single camera and a personal computer. This paper proposes the method of measuring the position of the finger using a single camera.
1. Measuring the 3D position of fingertips using a single camera
Figure 1 is a brief illustration of the operations involved in finger pointing. In Figure 1, let (X, Y, Z) be the space coordinates and let the SX–SY coordinate system be the image plane of the camera. Assume that the points A, B, C and D are the interest points on the body and that the points A', B', C' and D' on the image plane are the corresponding points. When the focal length of the camera is f and the screen coordinates of the points are A'(sxA, syA), B'(sxB, syB), C'(sxC, syC) and D'(sxD, syD), the world coordinates of these points are given by (−sxA, −syA, f), (−sxB, −syB, f), (−sxC, −syC, f) and (−sxD, −syD, f), respectively. Assuming that the height of the camera is h (> 0), we can find the point of intersection, C, of the line C'C: X = sxC · tC, Y = syC · tC, Z = −f · tC (tC is a variable) with the plane Y = −h. Hence we have C(−sxC · h/syC, −h, f · h/syC) as the point of intersection, and we obtain the distance between user and camera, z = −f · h/syC (> 0). Assuming that the points A, B and C lie on the same plane Z = −z, we can find the point of intersection, A, of the line A'A: X = sxA · tA, Y = syA · tA, Z = −f · tA (tA is a variable) with the plane Z = −z. Hence we have A(sxA · z/f, syA · z/f, −z) as the point of intersection, and similarly B(sxB · z/f, syB · z/f, −z) as the intersection of the line B'B with the same plane. We therefore take |yA − yC| as the stature. Ergonomics research tells us that the horizontal distance between the two outstretched fingertips is equal to the stature. Since the distance between the shoulders is 2·|xA − xB|, we obtain the length of the arm (the distance BD between shoulder and fingertip): R = |yA − yC|/2 − |xA − xB| (> 0). We then find the point of intersection, D, of the line D'D: X = sxD · t, Y = syD · t, Z = −f · t (t is a variable) with the spherical surface (X − xB)² + (Y − yB)² + (Z + z)² = R².
1 Correspondence to: K. Sakamoto, 1060 Nishikawatsu, Matsue, Shimane 690-8504, Japan. Tel./Fax: +81 852 32 6473; E-mail: [email protected].
Figure 1. Measuring position of fingertips.
Figure 2. Measuring the position of finger pointing.
For the intersection with the spherical surface at D we obtain
(sxD² + syD² + f²)t² + 2(−sxD · xB − syD · yB − f · z)t + (xB² + yB² + z² − R²) = 0.   (1)
The standard form of a quadratic is αt² + 2βt + γ = 0; here α = sxD² + syD² + f² (> 0), β = −sxD · xB − syD · yB − f · z, and γ = xB² + yB² + z² − R². The solutions are given by the quadratic formula t = (−β ± √(β² − αγ))/α. Generally z > R, so we may assume γ > 0; moreover, since the point of intersection must exist and we require t > 0, we have β < 0. The quadratic formula gives two solutions, but the smaller one is correct because the point D must be nearer to the coordinate origin O. Hence we obtain tD = (−β − √(β² − αγ))/α and the position of the fingertip D(sxD · tD, syD · tD, −f · tD).
2. Measuring the 3D position of finger pointing using a single camera
Figure 2 is a brief illustration of the user pointing at the floor. In Figure 2, let the point E be the midpoint of both eyes and let E' on the image plane be the corresponding point. The screen coordinate of this point is E'(sxE, syE), so its world coordinate is (−sxE, −syE, f). Assume that the point E lies on the plane Z = −z. We find the point of intersection, E, of the line E'E: X = sxE · tE, Y = syE · tE, Z = −f · tE (tE is a variable) with the plane Z = −z, which gives E(sxE · z/f, syE · z/f, −z). When the user indicates a point G on the floor, we find the intersection of the line through E and D, parametrized as X = (xD − xE) · tG + xD, Y = (yD − yE) · tG + yD, Z = (zD + z) · tG + zD (tG is a variable), with the plane Y = −h. This gives the parameter tG = −(h + yD)/(yD − yE) and the point of intersection G. Moreover, when the camera can capture the point G, let G' on the image plane be the corresponding point. We then calculate the intersection of the line GG': X = −xG · t, Y = −yG · t, Z = −zG · t (t is a variable) with the plane Z = f, which gives G'(−xG · f/zG, −yG · f/zG) on the image plane. This finger recognition and measuring system realizes finger pointing such that the user can draw a picture on the floor in a virtual space. We evaluated the positions specified by the prototype system and confirmed that it achieves the performance required for practical use.
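The two derivations above translate directly into code. The sketch below is our own illustration: it follows the paper's notation and steps (distance z from the foot point C', stature and arm length from A' and B', the quadratic for the fingertip D, and the intersection of the line ED with the floor for the pointed position G). The numerical values in the example call are invented.

import math

def fingertip_and_target(f, h, sxA, syA, sxB, syB, sxC, syC, sxD, syD, sxE, syE):
    """Sections 1 and 2 in code; all symbols follow the paper's notation."""
    # distance between user and camera, from the foot point C' (requires syC < 0)
    z = -f * h / syC
    # head top A and shoulder B on the plane Z = -z; the foot point C has yC = -h
    xA, yA = sxA * z / f, syA * z / f
    xB, yB = sxB * z / f, syB * z / f
    stature = abs(yA - (-h))
    R = stature / 2.0 - abs(xA - xB)          # arm length from the ergonomics rule
    # fingertip D: intersection of the line D'D with the sphere of radius R around B
    alpha = sxD ** 2 + syD ** 2 + f ** 2
    beta = -sxD * xB - syD * yB - f * z
    gamma = xB ** 2 + yB ** 2 + z ** 2 - R ** 2
    tD = (-beta - math.sqrt(beta ** 2 - alpha * gamma)) / alpha
    D = (sxD * tD, syD * tD, -f * tD)
    # eye midpoint E on the plane Z = -z
    E = (sxE * z / f, syE * z / f, -z)
    # pointed position G: the line through E and D intersected with the floor Y = -h
    tG = -(h + D[1]) / (D[1] - E[1])
    G = ((D[0] - E[0]) * tG + D[0], -h, (D[2] - E[2]) * tG + D[2])
    return D, G

D, G = fingertip_and_target(f=500, h=150, sxA=0, syA=41.7, sxB=36.7, syB=-8.3,
                            sxC=0, syC=-250, sxD=60, syD=-60, sxE=0, syE=25)
print(D, G)   # D lies at arm's length from the shoulder; G lies on the floor plane Y = -h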
Virtual Theremin: Cyber music instrument by human motion capture
Shoto TANEJI, Takae TAWADA and Kunio SAKAMOTO
Interdisciplinary Faculty of Science and Engineering, Shimane University
Abstract. We developed a prototype music instrument system based on motion capture, in which the performer plays music in a virtual space. The virtual theremin system consists of image capture, measurement of the performer's position, detection and recognition of motions, and sound synthesis on a personal computer. In this paper we describe the method of 3D measurement with a single camera for tracking the user's position, and the cyber music instrument system.
1. Introduction
The original theremin is a musical instrument that uses electronic circuits to produce audible tones, with the unusual property of being controlled by the performer's hand motions near its antennas, without the hands touching the instrument. One hand controls the theremin's pitch and the other controls the volume. In our cyber music instrument system, the player plays the instrument in a virtual space and controls the frequency and amplitude of the sound with both hands, just as with the theremin.
Figure 1. Illustration of captured camera image (detection regions for the near and far cases).
Figure 2. Measuring the distance to the performer.
2. Recognizing the position of the user by measuring the distance from the camera
Figure 1 shows images captured while a user performs in front of the camera. The size of the user's image changes with the distance between user and camera, so the detection regions for the motions must be adjusted accordingly, as shown in Figure 1. We therefore recognize the position of the performer by measuring the distance from the camera. When the user
is standing on the floor, it is possible to measure the distance using a single camera, as shown in Figure 2. Figure 2 gives a brief illustration of the geometry of the captured image. In Figure 2, the pinhole camera is fixed and looks down with a tilt angle α with respect to the ground plane. The point H(0, h) is the pinhole of the camera. The segment AB is the image plane and the point C is its centre. The interest point G is the position where the user's foot touches the floor, and the point G' on the image plane is the corresponding point. Assume that ∠AHC = ∠BHC = θ, i.e. the angle of view is 2θ, and that CH = a (the focal length) and CG' = b. We calculate the x-coordinate of the point G. In Figure 2, the vectors HC, CG' and HG' are given by HC = (−a·cos α, a·sin α), CG' = (b·sin α, b·cos α) and HG' = HC + CG' = (b·sin α − a·cos α, a·sin α + b·cos α). The line G'G through the point H is therefore
G'G:  y = [(a·sin α + b·cos α)/(b·sin α − a·cos α)]·x + h.   (1)
Intersecting the line G'G with the line y = 0 gives the x-coordinate of the point G:
x = h·(a·cos α − b·sin α)/(a·sin α + b·cos α).   (2)
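Equation (2) is easy to evaluate once the camera height h, the focal length a, the image offset b of the foot point and the tilt angle α are known. The short sketch below is our own check of the formula with invented parameter values; note that b = 0 reduces to h/tan α, the point where the optical axis meets the floor.

import math

def distance_to_performer(h, a, b, alpha):
    """Equation (2): x = h*(a*cos(alpha) - b*sin(alpha)) / (a*sin(alpha) + b*cos(alpha))."""
    return h * (a * math.cos(alpha) - b * math.sin(alpha)) / (
        a * math.sin(alpha) + b * math.cos(alpha))

h, a, alpha = 2.0, 0.01, math.radians(25)                 # invented camera parameters
print(distance_to_performer(h, a, b=0.0, alpha=alpha))    # = h / tan(alpha), about 4.29 m
print(distance_to_performer(h, a, b=0.004, alpha=alpha))  # offset foot point: nearer, about 1.88 m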
Figure 3. Performer controls instrument.
3. Virtual theremin system
Figure 3 shows the layout of the virtual theremin. The performer's right hand is used to control the musical scale. Sounds are generally characterized by pitch, i.e. the frequency of the sound; for example, middle C in equal temperament is 261.6 Hz. The left hand is used to control the volume. The regions for scale and volume are determined by the position of the face in the captured scene. Figure 4 shows the result of extracting the body, face and hands from the captured camera image; the extracted body is used to measure the distance from the camera to the performer and to extract the face and hands. Figure 5 shows the screen of the personal computer while the "virtual theremin" application was running.
Figure 4. Result of detecting body, face and hands.
Figure 5. Demonstration of the "virtual theremin" software (on-screen readout: distance 293 cm, volume level 4, scale C, 261.6 Hz).
Ehipasiko: A Content-based Image Indexing and Retrieval System
Shyh Wei TENG and Kai Ming TING
Gippsland School of Information Technology, Monash University, Australia
Email: {shyh.wei.teng, kaiming.teng}@infotech.monash.edu.au
Abstract: Presently, retrieving images from a digital library requires retrieval techniques different from those used to retrieve text documents. In this paper, we demonstrate the possibility of converting the contents of images into text, which enables us to utilise text-based retrieval techniques for image retrieval. The potential advantages and applications of this approach are also illustrated.
Keywords: digital library, image indexing and retrieval, text retrieval
Introduction
In the current digital era, huge amounts of digital information are stored in digital libraries to facilitate business, entertainment and educational activities [1,2]. The digital information kept in a digital library may comprise text, images and videos. For this information to be useful, effective retrieval techniques must be available to extract the information users want from the digital libraries. To date, most Digital Asset Management (DAM) systems use different retrieval techniques for different media [1-6], owing to the media's different characteristics. For text documents, traditional text retrieval techniques can use keywords from the documents' contents to retrieve documents relevant to the user. For images, however, because text annotation is usually absent and manual annotation is difficult [1], image retrieval techniques usually rely on the images' low-level features, such as the colour, shape and texture of objects, as the basis on which users retrieve images. This approach is commonly known as Content-Based Image Retrieval (CBIR). In this paper we demonstrate a different approach to image retrieval, in which the images' low-level features are first converted to text, so that text retrieval techniques can then be used to facilitate image retrieval. To demonstrate this approach we have built the first version of such an image retrieval system, called Ehipasiko (http://ep.gscit.monash.edu.au). With Ehipasiko we will demonstrate the retrieval effectiveness of our approach when a query-by-example is issued. We will also illustrate that the approach can easily be applied to various image features, such as colour and shape, and demonstrate the ease of combining different features (e.g. text with colour or shape) when issuing a query.
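The core idea — converting low-level image features into text so that ordinary text retrieval machinery can index them — can be illustrated in a few lines. The sketch below is our own simplification, not the Ehipasiko implementation: it quantizes raw pixel colours into pseudo-words and ranks images by shared words, so that standard keyword search (and mixing with real text annotations) becomes possible.

from collections import Counter

def colour_words(pixels, bins_per_channel=4):
    """Turn raw RGB pixels into pseudo-words such as 'colour_3_1_0'."""
    step = 256 // bins_per_channel
    return ["colour_%d_%d_%d" % (r // step, g // step, b // step) for r, g, b in pixels]

def index_image(index, image_id, pixels, annotations=()):
    """Store the image as a bag of pseudo-words plus any real text annotations."""
    index[image_id] = Counter(colour_words(pixels)) + Counter(annotations)

def query(index, query_words):
    """Rank images by how many of the query words (pseudo or real) they share."""
    scores = {img: sum(min(bag[w], 1) for w in set(query_words))
              for img, bag in index.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

index = {}
index_image(index, "sunset.jpg", [(250, 120, 30)] * 100, annotations=["beach"])
index_image(index, "forest.jpg", [(20, 180, 40)] * 100)
# query-by-example: an orange image patch combined with the keyword 'beach'
print(query(index, colour_words([(248, 118, 28)]) + ["beach"]))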
1. Potential Advantages of Our Approach
The proposed approach to CBIR problems has the potential to provide much greater flexibility in the form and specificity of queries. Current practical CBIR systems require users to provide a complete image as the query. This type of query is constrained by the availability of the image, and the available image may contain many features the user does not want; in other words, the user cannot specify the query precisely. The proposed solution enables a user to submit an arbitrarily shaped extract of an image (e.g. a human face cut out of an image that contains many other objects in the background), which allows users to specify their needs more precisely. Current CBIR systems either cannot provide the same flexibility or provide only a limited subset of the proposed query mechanism. In addition, the improved query mechanism can possibly be used to annotate images automatically, if annotations are found to be a useful addition to CBIR. The combined result of using mature text-based techniques and the flexible query mechanism will bring the common acceptance and widespread use of CBIR systems that is lacking at the moment.
2. Applications
Our proposed approach to CBIR can be incorporated into any DAM system. Examples of current applications in various sectors that require such image retrieval functionality are listed as follows:
• Commercial: photo sale systems (e.g. Lonely Planet Images [7]); Internet search engines (e.g. Google [8])
• Legal: trademark registration offices
• Entertainment and recreational: personal digital photo albums; online photo sharing (e.g. Kodak EasyShare Gallery [9])
References
[1] S. Deb, Multimedia Systems and Content-Based Image Retrieval, Idea Group Publishing, Hershey, PA, 2004.
[2] C. Djeraba, Multimedia Mining: A Highway to Intelligent Multimedia Documents, Kluwer Academic Publishers, Boston, 2003.
[3] M.L. Kherfi, D. Ziou and A. Bernardi, Image retrieval from the World Wide Web: Issues, techniques, and systems, ACM Computing Surveys (CSUR) 36(1), March 2004.
[4] G. Qiu and K. Lam, Frequency layered color indexing for content-based image retrieval, IEEE Transactions on Image Processing 12(1) (January 2003), 102–113.
[5] C.H. Yeh and C.J. Kuo, Content-based image retrieval through compressed indices based on vector quantized images, Optical Engineering, 2005.
[6] S.W. Teng, Image Indexing and Retrieval based on Vector Quantization, Ph.D. Thesis, Monash University, July 2003.
[7] www.lonelyplanetimages.com
[8] www.google.com
[9] www.kodakgallery.com
Fusion of Fuzzy Automata
Qinge Wu 1, Tuo Wang, Yongxuan Huang and Jisheng Li
School of Electronic and Information Engineering, Xi'an Jiao Tong University, Xi'an, Shaanxi, 710049, P.R. China
Abstract. The fusion of various fuzzy automata (FA) for solving complicated problems is discussed. We describe a fusion method that encodes any FA into a fuzzy recurrent neural network architecture. The state transition matrix, the model transition probability matrix and the noise matrix are all fuzzy and will be given. The algorithm for encoding FAs in second-order recurrent neural networks is shown.
Keywords. Fuzzy automata, fusion, recurrent neural networks
1. Network architecture of fusion The extraction algorithm of various fuzzy automata has been discussed by using the neural networks in detail. To the extracted FA, its stability has been also described. Then, why has the FA been discussed? What application is it in real world? For better solving some complicated problems in fuzzy information processing, some information fusions are performed well by FA. The hierarchy of fuzzy automata (FA) is discussed completely. Here, we augment the network architecture used for encoding FA’s by using Bayes rules (Fig.1). The algorithm for encoding FA’s in second-order recurrent neural networks is shown as follows: The architecture consists of two parts: recurrent state neurons encode the state transitions of the deterministic acceptor and an output layer, i.e., the device is composed of two parts, a second-order recurrent neural networks and an output layer. These recurrent state neurons are connected to a distinct output neuron that computes overall estimate value and overall covariance of state and final membership degree. The recurrent state neurons are some filter or smoother or predictor that computes mixing estimate and mixing covariance. A complete recursion of the fusion[1] algorithm here makes use of Kalman filters. The recurrent neural network is formed of N recurrent hidden neurons, labeled Sjt , j = 0, · · · , N − 1, L input neurons, labeled Ikt , k = 0, · · · , L − 1 with N 2 ∗ L weights, ωijk , i = 0, · · · , N − 1, associated to the links of these neurons. At the output layer, we have M output neurons, OU Tp ∈ {0, 1}, p = 0, · · · , M − 1 attached to the 1 Qing
E Wu received the B.Sc. degree in mathematics from BeiJing normal University, Beijing, P.R.China, and the M.Sc. degree in applied Mathematics from University of Electronic Science and technology of China, ChengDu, Sichuan, P.R.China. She is currently a Ph.D. candidate with the school of Electronic and Information Engineering of Xi’an Jiao Tong University, Xi’an, Shaanxi, P.R.China. Her main research interests include spacecraft designing and image processing; E-mail:
[email protected].
Q. Wu et al. / Fusion of Fuzzy Automata
439
Figure 1. Architecture of recurrent neural network for state estimate fusion of FA.
Figure 2. Target curve and tracking.
The O_p denotes the network output neuron and the string membership, where p = 0, 1, ..., M−1.
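As a concrete illustration of the second-order encoding just described, the following is a minimal sketch in Python. The toy two-state automaton, its transition membership degrees and the sigmoid steepness H are assumptions chosen for illustration; this is not the authors' fusion network, which additionally involves the Kalman-filter based mixing described above.

```python
import numpy as np

def encode_fa(transitions, n_states, n_symbols, H=8.0):
    """Build second-order weights w[j, i, k] so that neuron S_j turns on when the
    automaton moves to state j from state i on input symbol k.
    transitions[(i, k)] = (j, mu) gives the target state and the fuzzy membership
    degree of the transition (illustrative encoding)."""
    w = -H * np.ones((n_states, n_states, n_symbols))   # default: inhibit
    for (i, k), (j, mu) in transitions.items():
        w[j, i, k] = H * mu                              # excite the target state
    return w

def run(w, start_state, symbols, n_symbols):
    n_states = w.shape[0]
    s = np.zeros(n_states)
    s[start_state] = 1.0                                 # one-hot start state
    for k in symbols:
        x = np.zeros(n_symbols)
        x[k] = 1.0
        # second-order update: S_j(t+1) = g( sum_{i,k} w[j,i,k] * S_i(t) * I_k(t) )
        net = np.einsum('jik,i,k->j', w, s, x)
        s = 1.0 / (1.0 + np.exp(-net))                   # sigmoid activation
    return s                                             # fuzzy state memberships

# toy FA: two states, two symbols; membership degrees on each transition
trans = {(0, 0): (0, 0.9), (0, 1): (1, 0.7), (1, 0): (1, 0.8), (1, 1): (0, 0.6)}
print(run(encode_fa(trans, 2, 2), start_state=0, symbols=[1, 0], n_symbols=2))
```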
2. Simulation results The fusion [1] result for the FA state is shown in Fig. 2. From Fig. 2 it can be seen that the target tracking by the FA is relatively good: the tracking curve of the FA essentially coincides with the true target track. This indicates that state fusion of FA makes more complete use of the available information than a single state estimate does. This research will drive further development of the theory and applications of the fuzzy automata hierarchy.
References [1] X. Rong Li, Vesselin P. Jilkov, Survey of Maneuvering Target Tracking – Part I: Dynamic Models, IEEE Transactions on Aerospace and Electronic Systems 39(4) (2003), 1–58.
Virtual Draw: Drawing system in the 3D virtual space Kunio SAKAMOTO, Hironobu NAKAYAMA, Shoto TANEJI Interdisciplinary Faculty of Science and Engineering, Shimane University
Abstract. We have developed interaction media systems in a 3D virtual space. In these systems, an artist draws a picture or a performer plays a show using a virtual character such as a puppet. The interactive virtual media system consists of image capture, measurement of the performer's position, detection and recognition of motions, and synthesis of the video image on a personal computer. In this paper, we describe two applications of the 3D virtual space: virtual drawing and superimposing a CG character.
1. Virtual drawing system using finger pointing Figure 1 shows an illustration of the virtual drawing system. In this system, the finger-pointed points are superimposed on the captured scene when the user indicates a point on the floor, as shown in figure 2. Hence the user can draw line art in the virtual space. The virtual drawing system consists of a Windows PC connected to a USB camera. The drawing system software performs the video capture, the detection and recognition of regions of interest for motion capture, the measurement for estimating the position of finger pointing, and the interaction itself. We used a method that measures the position of the fingertip and the finger-pointing direction using a single camera, without stereo-pair images. This finger recognition and measuring system realizes finger pointing such that the user can draw a picture on the floor of the virtual space. We evaluated the positions specified with the prototype system and confirmed the performance required for practical use.
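One common way to recover a floor point from a single calibrated camera is to intersect the pixel's viewing ray with the floor plane. The sketch below illustrates this idea; the camera intrinsics and pose used in the example are invented, and this is not necessarily the authors' exact measuring method.

```python
import numpy as np

def pixel_to_floor(u, v, K, R, t):
    """Intersect the viewing ray of pixel (u, v) with the floor plane z = 0.
    K: 3x3 camera intrinsics; R, t: world-to-camera rotation and translation.
    Returns the 3D point on the floor in world coordinates."""
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction in camera frame
    d_world = R.T @ d_cam                              # ray direction in world frame
    c_world = -R.T @ t                                 # camera centre in world frame
    s = -c_world[2] / d_world[2]                       # solve c_z + s * d_z = 0
    return c_world + s * d_world

# toy example: camera 1.5 m above the floor, looking straight down
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R = np.array([[1.0, 0, 0], [0, -1.0, 0], [0, 0, -1.0]])   # camera z axis points down
t = -R @ np.array([0.0, 0.0, 1.5])                        # t = -R * camera_centre
print(pixel_to_floor(400, 300, K, R, t))                  # point on the floor (z = 0)
```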
Figure 1. Interaction in a virtual space.
Figure 2. Drawing on the floor in the captured scene.
2. Virtual drawing system using the 3D position of the detected fingertip We get the 3D position of the fingertip using the single-camera measuring method. The performer plots points in the virtual space utilizing the result of measuring the fingertip's position, and the traces of the fingertip's 3D position represent a line drawing. Figure 3 shows the floating image of a line drawing in the air when the user draws a circle in the virtual space. As the performer acts and moves fingers in front of the camera, the line drawing is superimposed on the captured camera image.
Figure 3. Drawing a picture in the air.
3. “Virtual puppet”: additional application software superimposing a CG character We incorporated an entertaining application into the virtual drawing system; the authors call this additional software the “virtual puppet.” The “virtual puppet” software superimposes a 3D computer graphics character image on the captured camera scene. In this system, the location of the human face is detected and its head pose is estimated continuously. To estimate the head pose, the software recognizes the positions of the performer's eyes by detecting the eye regions within the extracted face area. By analyzing the detected face region and the positions of the eyes, we can estimate the head pose, i.e., the slant angle describing how far the head is turned right or left. Moreover, we estimate the distance from the camera to the performer from the results of 3D position estimation using the single-camera measuring method. Figure 4 shows the results of detecting the face and eyes and estimating the head pose. In this superimposing
character system, the 3D CG character is synthesized into the captured scene using the results of the face position and head pose estimation. Figure 5 shows the superimposed scene produced by the prototype system.
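The paper does not give its pose formula, so the following sketch only illustrates one plausible way to derive a roll angle and an approximate slant (yaw) angle from detected eye positions inside a face bounding box; the geometry and the sample values are assumptions for illustration.

```python
import math

def head_pose_from_eyes(face_box, left_eye, right_eye):
    """Estimate roll (eye-line tilt) and an approximate yaw (left/right turn)
    from eye positions inside a detected face bounding box.
    face_box = (x1, y1, x2, y2); eyes are (x, y) pixel positions."""
    x1, y1, x2, y2 = face_box
    face_cx = (x1 + x2) / 2.0
    half_w = (x2 - x1) / 2.0

    # roll: angle of the line joining the two eyes
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    roll = math.degrees(math.atan2(dy, dx))

    # yaw: horizontal shift of the eye midpoint from the face centre,
    # mapped to an angle (small-angle approximation; purely illustrative)
    eye_mid_x = (left_eye[0] + right_eye[0]) / 2.0
    ratio = max(-1.0, min(1.0, (eye_mid_x - face_cx) / half_w))
    yaw = math.degrees(math.asin(ratio))
    return roll, yaw

# values in the style of Figure 4 ("Area: (189,72)(285,192)")
print(head_pose_from_eyes((189, 72, 285, 192), (215, 115), (258, 112)))
```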
Figure 4. Detected eyes and head pose (camera image, face position & head pose, superimposed scene; example output: Area (189,72)–(285,192), Pose 13.02 deg.).
Figure 5. "Virtual puppet": superimposing a virtual character.
Bip: Towards a Highly Configurable Business Intelligence Framework Ping Tang Bip Technology Ltd. Room 201, Blissful Building, 247 Doeux Road, Central, Hong Kong Email:
[email protected]
Abstract. The Bip intelligent business solution is a foundation for business systems. It is a total solution for enterprise or SME business application implementation. This business process building block offers an easy-to-use, practical tool to customize business needs in a short time, usually in the hands of the user. In line with the latest application development practice, a user-maintained rule-based application engine with easily customizable screen flows forms the basis of the building block. Topped with the latest business intelligence capability and an enhanced graphic interface, the framework offers a good landing base for a shift away from long-standing programming platforms.
1. Business Problems In the business world there is a constant need to react to the market, whether the change relates to product combinations, target segments, business processes, marketing strategy, internal organization or external regulatory compliance. Traditional customer relationship management (CRM) software offers a platform to integrate the various systems and gives front-line staff the best position to run the business and the management team the means to manage and project business growth. Looking back at CRM implementation experience, there have been more failures than successes in previous years, and SME businesses have been slow to enter the area: the required customization effort is commonly at least 3 to 4 times the software license cost, the expertise needed to maintain the system is expensive, and system integration services are usually still required after implementation. Implemented workflows and processes are usually difficult to modify, and users find the system complicated to use as more and more varieties of processes are implemented.
2. Rationale of the System Design With years of information systems implementation and maintenance experience, as developers we understand what we want from a building block or platform: it must support business needs while offering an easily customizable solution that can react to the usual last-minute change requests before implementation. Nowadays people tend to be drawn to building new technology and to ignore the key practical needs of the business world. A business solution must resolve the business problem and enhance investment returns while maintaining a low total cost of ownership, instead of racing with technology. A customer-centric system is a basic must-have for a successful business solution. The solution must be user-friendly, to both developers and business users, and able to integrate with the most common business communication tools, e.g. Office and e-mail. Business process additions or modifications must be able to be put in place very quickly, with an objective of achieving at least a 50% productivity gain over common system implementation experience. This can be achieved by a faster prototype building process with the user team, whereby the prototype work can be fully reused in production and business objects offer easy reusability. All the commonly required functions are available out-of-the-box, despite the fact that different businesses have different entities and relationships. Record search, segmentation and selection-list exports to common office tools, like MS Excel, are basic features that need little customization. Business entities and relationships must be easily maintained in the form of meta-data to support enterprise data sharing and a real-time business intelligence model. The system also needs to provide data mining and OLAP analytics capability so that management and operation managers can perform the necessary on-floor control and results review, together with predictive trend analysis for all levels of business users based on a discovery mining model. In a mature implementation, the business intelligence features will be extended to front-line staff to serve customers with a fully specialized tool based on segmented needs and consolidated customer profiling.
3. General System Architecture The architecture comprises: screen construction tools to assist prototype building (including easy building of screens and process flows); meta-data driven technology as the core of the system; and a rule-based engine to control business processes and reporting. The selection of J2EE and JWSDP (Java Web Services Developer Pack) is focused purely on the openness and availability of Java-based resources, the abundance of advanced functions, and the ready availability of front-end capabilities for proof-of-concept. The database platform is an RDBMS; for instance, the Bip intelligent solution works on MySQL, Oracle and MSSQL. Microsoft .NET may also be a platform of choice during implementation tool selection.
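To illustrate the meta-data driven, rule-based idea in its simplest form, the sketch below keeps field-validation rules as data rather than program code. It is a generic Python sketch with invented rule and field names, not Bip's actual J2EE implementation.

```python
# Field-validation rules kept as data ("meta-data") rather than in program code,
# so business users can change behaviour without touching the application.
RULES = [
    {"entity": "customer", "field": "age",   "check": "range", "min": 18, "max": 120},
    {"entity": "customer", "field": "email", "check": "required"},
    {"entity": "order",    "field": "total", "check": "range", "min": 0},
]

CHECKS = {
    "required": lambda value, rule: value not in (None, ""),
    "range":    lambda value, rule: (rule.get("min", float("-inf")) <= value
                                     <= rule.get("max", float("inf"))),
}

def validate(entity, record):
    """Apply every rule registered for `entity` and return the list of violations."""
    errors = []
    for rule in RULES:
        if rule["entity"] != entity:
            continue
        value = record.get(rule["field"])
        if not CHECKS[rule["check"]](value, rule):
            errors.append(f"{rule['field']} fails {rule['check']} check")
    return errors

print(validate("customer", {"age": 15, "email": ""}))
# -> ['age fails range check', 'email fails required check']
```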
4. System Features
• Meta-data driven technology – ease of maintenance (no reliance on programming code)
• Logical data model centrally maintained in system meta-data tables
• Business rules captured in rule engines, centrally maintained in meta-data tables
• Screen flows and screen properties configurable on change of attributes
• Implementation of the business model in the presentation layer, bound by business rules
• Business rule engine configures business workflow (field validation, on-screen business processes, data value triggers, field-level audit trail, selective rule-based alerts)
• Simple script engine with controlled workflow
• Unicode language support for screen implementation and batch job processing
• Service-oriented architecture – external services supported through standard programming openings, with method meta-data linked to business processes
• Basic business needs defined as services for ease of reuse
• Data queries made simple – a selection or combination of fields on screens can be applied as a query key without programming modification
• Productivity enhancement with streamlined workflow and configurable processing
• User-defined selection list implementation in single and multiple levels
• Data-driven look-up tables easily available
• Role-based security control
• User-configurable screen layout, such as column display sequence, color and font preference
• User-defined or system-initiated alerts coupled with a follow-up workflow
• Business process management and activity-based costing
• Case management and escalation rules
• Intelligent report query per screen
• Developer tool for screen construction to allow easy prototyping of business needs
• Meta-data centrally controlled using graphic tools
• Developer-assisted job scheduler with dependency control
• Real-time summary reports
• Easy business intelligence integration
• Straightforward data factory building to support customer centricity
• Data mining, pattern learning and predictive modeling support
• Per-field audit trail for ease of compliance
• Flexibly designed real-time access to other platforms via JDBC or ODBC
• Real-time charting
• Scorecard analysis and PC-phone PIM integration
5. Specific Applications The business implementation of this building block can be quickly deployed in the CRM sector for SME businesses. The main differentiation of this product from other market products is its ease of implementation and ongoing maintenance. The business investment for a project can easily be visualized. Since the building block can fit virtually any business flow, with its rule-based engine, escalation flow controller, work scheduler and secretarial alert feature, the system can easily be generalized to the government, finance and banking sectors. The eventual goal is to set up a one-stop business solution for all industries for customer-centric applications, which will include customer relationship management, business process management, activity-based costing, customer value analysis, data analytics, predictive modeling and interactive responsive marketing. The field-level audit trail offers businesses a major capability to support the increasing compliance requirements imposed by governments across all industries.
Knowledge Discovery from Digital Text Documents Sheng-Tang WU School of Software Engineering and Data Communications, Queensland University of Technology, Brisbane, Australia
Abstract. Effective knowledge discovery depends on a precise text representation mechanism for a knowledge base. There exist many approaches to representing a set of digital text documents; however, most of them are keyword-based approaches with tf-idf-like term weighting schemes. In this demo, a pattern-based prototype model has been developed in the field of knowledge discovery from digital text documents.
Introduction Sequential pattern mining from a large amount of data has become one of the major tasks in the data mining field. Many methods have been proposed to find sequential patterns in digital text documents efficiently. However, little of this literature mentions how to use the mined patterns or how to apply this technique to knowledge discovery. From a text mining point of view, the main process is to generate a text representative for a set of relevant documents, to be used to describe the concept of these documents. The format of these features varies and includes bags of words, terms, n-grams, phrases, or any of their combinations. Sequential patterns found as features in text documents can then be applied to several applications such as classification, categorization and information filtering.
1. Problem statement In this work, the frequent sequential pattern mining method was adopted. The basic definitions of sequences used in this work are as follows. A sequence α = ⟨a1, a2, …, an⟩ is a sub-sequence of another sequence β = ⟨b1, b2, …, bm⟩, denoted α ⊑ β, if there exist integers 1 ≤ i1 < i2 < … < in ≤ m such that a1 = b_i1, a2 = b_i2, …, an = b_in. The problem of mining sequential patterns is to find the complete set of sub-sequences, from a set of sequences, whose support is greater than a user-predefined threshold min_sup. The concept of closed sequential patterns was adopted, and pattern taxonomies were generated using PTM [2]. Figure 1 illustrates a sample of pattern taxonomies; the patterns within the dashed box will be kept, since they are closed sequential patterns, and the others will be pruned.
Figure 1. A sample of pattern taxonomy with sub-sequence relationship among mined sequential patterns (nodes: ⟨A⟩:3, ⟨B⟩:3, ⟨C⟩:4, ⟨A,B⟩:2, ⟨A,C⟩:2, ⟨B,C⟩:3, ⟨A,B,C⟩:2).
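To make the sub-sequence relation and the min_sup threshold concrete, here is a naive illustrative sketch in Python. The toy document set and threshold are invented, and the brute-force enumeration is for illustration only; PTM itself uses a far more efficient mining strategy and prunes non-closed patterns.

```python
from itertools import permutations

def is_subsequence(alpha, beta):
    """True if alpha = <a1, ..., an> is a sub-sequence of beta = <b1, ..., bm>."""
    it = iter(beta)
    return all(any(a == b for b in it) for a in alpha)

def support(pattern, sequences):
    """Fraction of sequences that contain `pattern` as a sub-sequence."""
    return sum(is_subsequence(pattern, s) for s in sequences) / len(sequences)

def frequent_patterns(sequences, min_sup, max_len=3):
    """Naive enumeration of frequent sequential patterns over distinct items.
    Exponential and for illustration only -- real sequential-pattern miners
    such as PTM avoid this blow-up."""
    items = sorted({t for s in sequences for t in s})
    found = {}
    for n in range(1, max_len + 1):
        for cand in permutations(items, n):        # candidate ordered patterns
            sup = support(cand, sequences)
            if sup >= min_sup:
                found[cand] = sup
    return found

docs = [("A", "B", "C"), ("A", "C", "B"), ("C", "A", "B"), ("B", "C")]
print(frequent_patterns(docs, min_sup=0.5))
```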
2. Application Once all of the pattern taxonomies have been discovered from digital text documents, we can apply the pattern deploying method PD [1] to merge all taxonomies into a concept space, i.e. a text representative. As a key process of knowledge discovery on text documents, this representative can then be used by IR- and KDD-related applications such as information filtering, data classification, categorization, clustering and Web content mining. We have applied PTM and PD to implement information filtering tasks based on a real-world document collection, Reuters Corpus Volume 1 [3]. RCV1 is the latest among several data collections, and it contains a reasonable number of documents with relevance judgments for benchmarking. Although another collection, Reuters-21578, is currently the most widely used dataset for text categorization research, it is likely to be superseded by RCV1 over the next few years. With respect to the representation of document content, some research works have used phrases rather than individual words; however, the effectiveness of the text mining systems was not improved significantly. One possible reason is that phrase-based patterns have low document frequency. Therefore, we present a novel concept for mining text documents for sequential patterns and use these mined patterns to implement knowledge discovery tasks.
References [1] WU, S.-T., LI, Y., and XU, Y., "An Effective Deploying Algorithm for using Pattern-Taxonomy," The 7th International Conference on Information Integration and Web-based Applications & Services (iiWAS2005), Kuala Lumpur, Malaysia, pp. 1013-1022, 2005. [2] WU, S.-T., LI, Y., XU, Y., PHAM, B., and CHEN, P., "Automatic Pattern-Taxonomy Extraction for Web Mining," The 2004 IEEE/WIC/ACM International Conference on Web Intelligence (WI 2004), Beijing, China, pp.242-248, 2004. [3] Reuters Corpus Volume 1, http://about.reuters.com/researchandstandards/corpus/
Associate A User’s Goal: Exhaustivity and Specificity Information Retrieval Using Ontology Xiaohui Tao School of Software Engineering and Data Communications Queensland University of Technology, Australia
[email protected]
Abstract. In information retrieval it is difficult to extract accurate information to satisfy a user's information need. Based on the user's goal, we categorize searches into two groups, information search and navigational search, and propose a method that uses an ontology to extract the specific or general context for a given query and performs the search using it. An IR system using the method can be more efficient, as it performs the search in association with the user's particular goal.
Keywords. User's goal, ontology, exhaustivity, specificity, information retrieval
1. Introduction In information retrieval it is difficult to extract accurate information to satisfy a user's information need. A great difficulty is that we cannot read the user's mind to learn what he/she really wants. Sometimes a user knows the exact information need, submits a query which he/she thinks best represents that need, and then performs a search. We call this kind of search an information search, as the user's goal is to obtain the information that the query represents. Sometimes a user may not have an exact idea of what they want and just submits a query to find it in order to undertake another, more precise search, or the user may simply want to get access to an online resource [1,3,5]. We call this a navigational search, as the user's final goal is not the search results themselves but an intermediate step in the information retrieval process. If we know a user's goal, we may be able to serve better results to the user. We argue that this can be achieved by using an ontology to extract different levels (specificity or exhaustivity) of context for a given query and retrieving based on the user's goal. The results indicate that performing a specificity search for the goal of information search gives results with better precision, whereas performing an exhaustivity search for navigational search gives results with better recall. 2. Method A well-known feature of an ontology is that a node on the upper bound contains a more general concept, and covers a broader semantic area, than nodes on the lower bound, which hold more specific concepts [2,4,6].
Figure 1. Example of Ontology
Figure 1 illustrates this feature using a simple ontology. The specific concept "Java" holds an is-a relation to the general concept "Programming language", and likewise "Programming language" to "Software". By common sense we know that "Programming language" covers multiple languages rather than just "Java", and besides "Programming language", "Application" also holds an is-a relation to "Software". If we search using the concept "Programming language", we may obtain results covering the concepts "Java", "C++", etc. However, if we search using just the concept "Java", we obtain results only about "Java" and not "C++", because the semantic area is restricted by the more specific concept. Based on this feature, we can perform a search using the given query's context extracted from the ontology, depending on the user's goal. If a user wants an information search, we can perform a specificity retrieval, which uses the query's context extracted from the concepts on the lower bound of the ontology. For example, for the query "programming language as java", we can use the context extracted from the specific concept "Java". Because the user wants an information search and already has an exact idea of the information need, we should serve results that are as precise as possible; "Java" is more specific than "Programming language" and so yields more precise results. Whereas, if the user wants a navigational search (e.g. the user wants another programming language that is like Java, but cannot recall its name and simply wants to find it, for the same example query), we can perform an exhaustivity retrieval, which uses the query's context extracted from the concepts on the upper bound of the ontology. This time we prefer the general concept "Programming language" to "Java", because "Programming language" covers a broader semantic area, and the user will get more exhaustive results relevant to "programming language" to jog his/her memory. Specificity and exhaustivity information retrieval have different focuses, and end with different levels of precision and recall. A specificity retrieval uses more specific context to perform the search; its results have better precision, but trade off recall. An exhaustivity retrieval uses more general context to perform the search; its results have better recall, but some precision may be sacrificed.
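A minimal sketch of this idea in Python follows. The toy is-a ontology and the two expansion functions are illustrative assumptions, not the system described in the paper.

```python
# toy is-a ontology: child concept -> parent concept
IS_A = {
    "Java": "Programming language",
    "C++": "Programming language",
    "Programming language": "Software",
    "Application": "Software",
}

def ancestors(concept):
    """Walk up the is-a links: the more general concepts (exhaustivity)."""
    out = []
    while concept in IS_A:
        concept = IS_A[concept]
        out.append(concept)
    return out

def descendants(concept):
    """Walk down the is-a links: the more specific concepts (specificity)."""
    children = [c for c, p in IS_A.items() if p == concept]
    out = list(children)
    for c in children:
        out.extend(descendants(c))
    return out

def query_context(concept, goal):
    """Pick the query context depending on the user's goal."""
    if goal == "information":       # specificity retrieval: narrower, more precise
        return [concept] + descendants(concept)
    if goal == "navigational":      # exhaustivity retrieval: broader, more recall
        return [concept] + ancestors(concept)
    return [concept]

print(query_context("Programming language", "information"))  # adds 'Java', 'C++'
print(query_context("Java", "navigational"))                 # adds 'Programming language', 'Software'
```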
3. Conclusion We categorize a user's search goals into two groups, information search and navigational search, associate the goal with a method that uses an ontology to extract the specific or general context for the given query, and then present the user with different search results based on the goal. An information retrieval system using this approach can be more efficient, as the retrieval is associated with the user's particular need.
References [1] A. Broder: A taxonomy of web search, SIGIR Forum, ACM Press, 36 (2002), 3–10. [2] J. Gonzalo, F. Verdejo, I. Chugur, & J. Cigarran: Indexing with WordNet synsets can improve text retrieval, ACL/COLING Workshop on Usage of WordNet for Natural Language Processing, 1998. [3] U. Lee, Z. Liu, & J. Cho: Automatic identification of user goals in Web search, In Proceedings of the 14th international conference on World Wide Web, ACM Press, (2005), 391–400. [4] G. A. Miller: WordNet: a lexical database for English. Commun. ACM, ACM Press, (1995), 38, 39–41. [5] D. E. Rose, & D. Levinson: Understanding user goals in web search, In Proceedings of the 13th international conference on World Wide Web, ACM Press, (2004), 13–19. [6] K. M. Sim: Web agents with a three-stage information filtering approach, In Proceedings of International Conference on Cyberworlds 2003, (2003), 266–273.
Large Scale Analysis of Search Engine Content John D. King School of Software Engineering and Data Communications Queensland University of Technology, Australia
[email protected]
Abstract. We mine a large taxonomic dataset for subject classification rules. We then use these rules to perform an extensive analysis of the subject matter of the largest general purpose internet search engines in use today.
Keywords. Search engine selection, data mining, hierarchical classification
1. Introduction In recent history it has become impossible for one person to have full knowledge of every domain of human endeavour. The previous way of accessing and discovering information was to manually search a set of books or journals. However, the introduction of search engines has forever changed the way people access and discover information. By using a search engine, people can find information about almost any subject in seconds, and as more material becomes electronically available the influence of search engines will continue to grow. However, little is known about the content of the largest general purpose search engines. We introduce a new world-knowledge-assisted method which we use for the classification of large search engines, even those which contain many billions of documents.1 Table 1 shows the search engines used in this work. The search engines were compared across hundreds of subjects, and the similarities and differences between the engines were analysed. As far as the author is aware, this is the first time a study of this size and scope has been carried out.

Table 1. The Search Engines Used In This Paper
Title                    Abbreviation   URL
Altavista                AV             http://www.altavista.com/
America Online Search    AOL            http://search.aol.com/
Ask Jeeves               ASK            http://webk.ask.com/
Google                   Google         http://www.google.com/
MSN Search               MSN            http://search.msn.com/
Teoma                    Teoma          http://www.teoma.com/
WiseNut                  Wisenut        http://www.wisenut.com/
Yahoo Search             Yahoo          http://www.yahoo.com/

1 At present Google lists its index size as just over 8 billion pages.
2. Method We generate subject classification rules from a large, human-expert-classified training set. The training set is a large collection of expert-classified documents across many different subjects, which are parsed and added to a database. For each subject a set of classification terms is selected using statistical analysis. These terms should preferably be subject-specific (occurring within few or no other subjects) and should occur frequently within the subject and infrequently in other subjects. It is difficult to decide which terms to select, as there are many possible terms to describe a subject. Many terms may not occur in common English dictionaries yet are still valuable for classification. Many of these terms are technical and subject-specific, such as conference names, acronyms and names of specialist technology. Some examples from computing are RMI, SMIL, and XSLT.2 Few standard English dictionaries include these terms, yet if any of these acronyms occur in a document it is likely that the document covers a subject related to computing. For each subject we use statistical analysis to extract a set of terms that are most representative of the subject. The statistical analysis includes finding patterns between subject nodes and terms, which are used to extract classification terms. Once the subject classification terms are extracted, we use them to query the search engines. From the number of results returned for each query we are able to find the representation of each subject in each search engine.
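The following is an illustrative sketch of the two steps just described: scoring candidate terms by how concentrated they are in one subject, then estimating a search engine's subject representation from per-term hit counts. The scoring formula, the toy training data and the stubbed hit-count function are assumptions for illustration, not the statistical analysis actually used in the paper.

```python
from collections import Counter

def select_terms(docs_by_subject, top_k=3):
    """Pick terms that occur often inside a subject and rarely in other subjects."""
    subject_tf = {s: Counter(t for d in docs for t in d)
                  for s, docs in docs_by_subject.items()}
    total_tf = Counter()
    for tf in subject_tf.values():
        total_tf.update(tf)
    terms = {}
    for s, tf in subject_tf.items():
        # score = frequency in subject * fraction of all occurrences inside this subject
        scored = {t: n * (n / total_tf[t]) for t, n in tf.items()}
        terms[s] = [t for t, _ in sorted(scored.items(), key=lambda x: -x[1])[:top_k]]
    return terms

def subject_representation(terms_by_subject, hit_count):
    """Estimate each subject's share of an engine's index from query hit counts.
    `hit_count(term)` would wrap a real search-engine query; here it is a stub."""
    raw = {s: sum(hit_count(t) for t in ts) for s, ts in terms_by_subject.items()}
    total = sum(raw.values()) or 1
    return {s: n / total for s, n in raw.items()}

docs = {"computing": [["rmi", "xslt", "java"], ["smil", "xslt"]],
        "religion":  [["liturgy", "psalm"], ["psalm", "gospel"]]}
terms = select_terms(docs)
fake_hits = {"xslt": 900, "rmi": 400, "java": 5000, "liturgy": 300, "psalm": 700, "gospel": 800}
print(subject_representation(terms, lambda t: fake_hits.get(t, 0)))
```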
3. Results The subject distributions of the search engines were analysed, and it was found that some search engines have a bias towards the sciences and others towards the arts. The analysis also showed that Teoma and ASK use the same index for their results. Each search engine was compared to Google's index; the search engine most similar to Google was AOL (which uses a different version of Google's index), and the search engine most different from Google was WiseNut. Figure 1 shows the content distributions of the search engines for each of the top-level subject groupings. We show results only for the ten highest-level subjects because there is not enough space to show the results for the hundreds of lower-level subjects.
4. Conclusion We have developed a new search engine classification method which is highly scalable and easily distributed. The method shows the subject matter of each search engine. We use this method to analyse the subject matter of the largest internet search engines in use today.
2 Remote Method Invocation, Synchronized Multimedia Integration Language, Extensible Stylesheet Language Transformation.
Figure 1. Normalised Distribution of Search Engine Content. Panels: (a) Generalities; (b) Philosophy & Psychology; (c) Religion; (d) Social Sciences; (e) Language; (f) Natural sciences & mathematics; (g) Technology (Applied sciences); (h) The Arts; (i) Literature & rhetoric; (j) Geography & history.
Author Index
Ahmad, J.  168
Ali, U.  168
An, J.  211, 302
Arikawa, M.  150, 156
Boles, W.  138
Cao, L.  13, 396
Cao, Y.  229
Chakrabarty, K.  356
Chakravarthy, R.  188
Chang, W.-C.  19
Chaudry, M.  188
Che, H.  335
Chen, F.  413
Chen, P.  293
Chen, Y.-P.P.  211, 302, 384
Chhabra, M.  176
Cirstea, M.  366
Cui, F.  307
Cullen, G.  86
Curran, K.  38, 86, 380
Dan, D.  196
Denham, C.  73
Dib, L.  400
Dowling, J.  138
Dusinski, D.  311
Eisenstadt, M.  73
Ezziane, Z.  92
Feng, D.  79, 327
Fujita, H.  156
Gao, X.  52
Gopal, T.V.  372
Guessoum, Z.  400
Gumbleton, G.  38
Ha, S.H.  217
Han, K.-R.  285
Haugaasen, M.  323
Hayashi, T.  150
Hiransoog, C.  408
Hou, P.  307
Hu, J.  58
Huang, J.  417
Huang, Y.  404, 438
Iwaze, Y.  126
Ji, D.  79, 327
Ji, J.  413
Jing, Y.  298
Kang, B.S.  217
Karthik, M.R.  360
Kevitt, P.M.C.  380
Kim, J.  162, 236
Kimura, R.  431
King, J.D.  451
Kwon, O.  236
Lartillot, O.  144
Laskri, M.T.  400
Lau, R.  44
Lee, H.  162, 236
Lee, K.-M.  285
Li, Chunping  249
Li, Chunsheng  205
Li, Jian  298
Li, Jisheng  404, 438
Li, L.  268
Li, X.  298
Li, Yan  255
Li, Youping  289
Li, Yuefeng  v, 31, 44, 99
Li, Z.  340
Liang, C.  331
Liao, L.-j.  229
Liao, X.  331
Liu, B.  340
Liu, Chang  243
Liu, Chunnian  413
Liu, H.  99
Liu, Jiming  3
Liu, Ju  392
Liu, X.  392
Liu, Z.  289
Lizhi, X.  196
Loewenich, F.  25
Looi, M.  v
Lu, H.  176
Lunney, T.  380
Ma, J.  289
Maeder, A.  138
Maire, F.  25
Mao, Q.  417
Menaka, S.  346
Mitchell, C.J.  366
Mitchell, S.  281
Morrison, D.  384
Murray-Pitts, L.  408
Nakayama, H.  431, 440
Nanda, S.  356
Nayak, R.  31, 67, 323
Ni, J.  396
Nishida, T.  277
Nunn, D.J.E.  366
Pavlovski, C.J.  281
Pham, B.  132
Ponnusamy, R.  372
Qin, Y.  106
Qin, Z.  118
Ryu, J.  285
Sajjanhar, A.  384
Sakamoto, K.  126, 277, 431, 433, 440
Saravana Kumar, C.P.  360
Sezaki, K.  150
Shah, S.N.H.  168
Shi, Y.  58
Smith, B.  281
Smyth, E.  380
Song, D.  73
Sridharan, D.  346
Su, W.-P.  132
Subramaniam, C.  211
Sumrall, J.  188
Sun, B.-K.  285
Szczepaniak, P.S.  311
Tan, S.C.  223
Taneji, S.  433, 440
Tang, P.  443
Tao, X.  448
Tawada, T.  433
Teng, S.W.  436
Teo, W.-C.  19
Ting, K.M.  436
Tong, J.  421, 425
Vaidyanathan, S.  372
Vareljian, V.  182
Wang, G.  293
Wang, H.  243
Wang, T.  404, 438
Wang, X.  425
Wang, Y.  425
Wardhani, A.  132
Webb, G.I.  7
Wegrzyn-Wolska, K.  317
Wen, P.  255
Weng, L.-T.  31
Wong, O.  223
Wu, B.  268
Wu, Qinge  404, 438
Wu, Qiong  307
Wu, S.-T.  446
Xiao, X.  196
Xie, Q.  335
Xing, L.  289
Xu, R.  281
Xu, Y.  31, 44, 99
Yang, C.-C.O.  19
Yang, Wanzhong  99
Yang, Wenchuan  307, 331
Yang, Yong  293
Yang, Yun  268
Yu, H.  335
Yu, Shuang  249
Yu, Shui  340
Yuan, L.  417
Yuan, Y.-X.  388
Zhang, C.  13, 106, 112, 118, 352, 396
Zhang, J.  106
Zhang, K.  229
Zhang, M.  52
Zhang, S.  106, 112, 118
Zhang, Y.  421
Zhang, Z.  261
Zhao, C.  331
Zhao, Y.  112
Zhong, N.  v, 58
Zhong, S.  298, 392, 421, 425
Zhong, Y.  392, 421
Zhou, H.  352
Zhou, Xueyuan  249
Zhou, Xujuan  44
Zhou, Y.  261
Zhu, J.  73
Zhu, L.  229
Zhu, X.  106
Zou, J.J.  182