Lecture Notes in Computer Science Commenced Publication in 1973 Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board David Hutchison Lancaster University, UK Takeo Kanade Carnegie Mellon University, Pittsburgh, PA, USA Josef Kittler University of Surrey, Guildford, UK Jon M. Kleinberg Cornell University, Ithaca, NY, USA Friedemann Mattern ETH Zurich, Switzerland John C. Mitchell Stanford University, CA, USA Moni Naor Weizmann Institute of Science, Rehovot, Israel Oscar Nierstrasz University of Bern, Switzerland C. Pandu Rangan Indian Institute of Technology, Madras, India Bernhard Steffen University of Dortmund, Germany Madhu Sudan Massachusetts Institute of Technology, MA, USA Demetri Terzopoulos University of California, Los Angeles, CA, USA Doug Tygar University of California, Berkeley, CA, USA Moshe Y. Vardi Rice University, Houston, TX, USA Gerhard Weikum Max-Planck Institute of Computer Science, Saarbruecken, Germany
4842
George Bebis Richard Boyle Bahram Parvin Darko Koracin Nikos Paragios Syeda-Mahmood Tanveer Tao Ju Zicheng Liu Sabine Coquillart Carolina Cruz-Neira Torsten Müller Tom Malzbender (Eds.)
Advances in Visual Computing Third International Symposium, ISVC 2007 Lake Tahoe, NV, USA, November 26-28, 2007 Proceedings, Part II
Volume Editors
George Bebis, E-mail: [email protected]
Richard Boyle, E-mail: [email protected]
Bahram Parvin, E-mail: [email protected]
Darko Koracin, E-mail: [email protected]
Nikos Paragios, E-mail: [email protected]
Syeda-Mahmood Tanveer, E-mail: [email protected]
Tao Ju, E-mail: [email protected]
Zicheng Liu, E-mail: [email protected]
Sabine Coquillart, E-mail: [email protected]
Carolina Cruz-Neira, E-mail: [email protected]
Torsten Müller, E-mail: [email protected]
Tom Malzbender, E-mail: [email protected]
Library of Congress Control Number: 2007939401
CR Subject Classification (1998): I.4, I.5, I.2.10, I.3.3, I.3.5, I.3.7, I.2.6, F.2.2
LNCS Sublibrary: SL 6 – Image Processing, Computer Vision, Pattern Recognition, and Graphics
ISSN 0302-9743
ISBN-10 3-540-76855-6 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-76855-5 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. Springer is a part of Springer Science+Business Media springer.com © Springer-Verlag Berlin Heidelberg 2007 Printed in Germany Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper SPIN: 12193011 06/3180 543210
Preface
It is with great pleasure that we welcome you to the Proceedings of the 3rd International Symposium on Visual Computing (ISVC 2007) held in Lake Tahoe, Nevada/California. ISVC offers a common umbrella for the four main areas of visual computing including vision, graphics, visualization, and virtual reality. Its goal is to provide a forum for researchers, scientists, engineers and practitioners throughout the world to present their latest research findings, ideas, developments, and applications in the broader area of visual computing.

This year, the program consisted of 14 oral sessions, 1 poster session, 6 special tracks, and 6 keynote presentations. Following a very successful ISVC 2006, the response to the call for papers was almost equally strong; we received over 270 submissions for the main symposium from which we accepted 77 papers for oral presentation and 42 papers for poster presentation. Special track papers were solicited separately through the Organizing and Program Committees of each track. A total of 32 papers were accepted for oral presentation and 5 papers for poster presentation in the special tracks.

All papers were reviewed with an emphasis on their potential to contribute to the state of the art in the field. Selection criteria included accuracy and originality of ideas, clarity and significance of results, and presentation quality. The review process was quite rigorous, involving two to three independent blind reviews followed by several days of discussion. During the discussion period we tried to correct anomalies and errors that might have existed in the initial reviews. Despite our efforts, we recognize that some papers worthy of inclusion may not have been included in the program. We offer our sincere apologies to authors whose contributions might have been overlooked.

We wish to thank everybody who submitted their work to ISVC 2007 for review. It was because of their contributions that we succeeded in having a technical program of high scientific quality. In particular, we would like to thank the ISVC 2007 Area Chairs, the organizing institutions (UNR, DRI, LBNL, and NASA Ames), the industrial sponsors (Intel, DigitalPersona, Equinox, Ford, Siemens, Hewlett Packard, MERL, UtopiaCompression), the International Program Committee, the special track organizers and their Program Committees, the keynote speakers, the reviewers, and especially the authors that contributed their work to the symposium. In particular, we would like to thank Siemens,
Hewlett Packard, and MERL, who kindly offered three “best paper awards” this year.

September 2007
ISVC 2007 Steering Committee and Area Chairs
Organization
ISVC 2007 Steering Committee Bebis George, University of Nevada, Reno, USA Boyle Richard, NASA Ames Research Center, USA Parvin Bahram, Lawrence Berkeley National Laboratory, USA Koracin Darko, Desert Research Institute, USA
ISVC 2007 Area Chairs Computer Vision Paragios Nikos, Ecole Centrale de Paris, France Syeda-Mahmood Tanveer, IBM Almaden, USA Computer Graphics Ju Tao, Washington University, USA Liu Zicheng, Microsoft Research, USA Virtual Reality Coquillart Sabine, INRIA, France Cruz-Neira Carolina, Louisiana Immersive Technologies Enterprise, USA Visualization Müller Torsten, Simon Fraser University, Canada Malzbender Tom, Hewlett Packard Labs, USA Publicity Li Wenjing, STI Medical Systems, USA Local Arrangements Veropoulos Kostas, Desert Research Institute, USA Publications Wang Junxian, UtopiaCompression, USA
ISVC 2007 Keynote Speakers Mathieu Desbrun, California Institute of Technology, USA Kwan-Liu Ma, University of California, Davis, USA John Tsotsos, York University, Canada Mubarak Shah, University of Central Florida, USA Dimitris Metaxas, Rutgers University, USA Fatih Porikli, MERL, USA
ISVC 2007 International Program Committee Computer Vision (Area 1) Abidi Besma, University of Tennessee, USA Aggarwal J. K.,University of Texas, Austin, USA Agouris Peggy, George Mason University, USA Anagnostopoulos George, Florida Institute of Technology, USA Argyros Antonis, University of Crete , Greece Asari Vijayan, Old Dominion University, USA Basu Anup, University of Alberta, Canada Bebis George, University of Nevada at Reno, USA Belyaev Alexander, Max-Planck-Institut fuer Informatik, Germany Bhatia Sanjiv, University of Missouri-St. Louis, USA Bioucas Jose, Instituto Superior T´ecnico, Lisbon, Portugal Birchfield Stan, Clemson University, USA Boon Goh Wooi, Nanyang Technological University, Singapore Bourbakis Nikolaos, Wright State University, USA Brimkov Valentin, State University of New York, USA Cavallaro Andrea, Queen Mary, University of London, UK Chellappa Rama, University of Maryland, USA Chen Danny, University of Notre Dame, USA Darbon Jerome, LRDE EPITA, France Davis James, Ohio State University, USA Debrunner Christian, Colorado School of Mines, USA Duan Ye, University of Missouri-Columbia, USA El-Gammal Ahmed, University of New Jersey, USA Eng How Lung, Institute for Infocomm Research, Singapore Erol Ali, Ocali Information Technology, Turkey Fan Guoliang, Oklahoma State University, USA Foresti GianLuca, University of Udine, Italy Gandhi Tarak, University of California at San Diego, USA Georgescu Bogdan, Siemens, USA Hammoud Riad, Delphi Corporation, USA Harville Michael, Hewlett Packard Labs, USA He Xiangjian, University of Technology, Australia
Jacobs David, University of Maryland, USA Kamberov George, Stevens Institute of Technology, USA Kamberova Gerda, Hofstra University, USA Kakadiaris Ioannis, University of Houston, USA Kisacanin Branislav, Texas Instruments, USA Klette Reinhard, Auckland University, New Zealand Kollias Stefanos, National Technical University of Athens, Greece Komodakis Nikos, Ecole Centrale de Paris, France Kuno Yoshinori, Saitama University, Japan Lee Seong-Whan, Korea University, Korea Leung Valerie, Kingston University, UK Li Wenjing, STI Medical Systems, USA Liu Jianzhuang, The Chinese University of Hong Kong, Hong Kong Ma Yunqian, Honeywell Labs, USA Maeder Anthony, CSIRO ICT Centre, Australia Maltoni Davide, University of Bologna, Italy Maybank Steve, Birkbeck College, UK Medioni Gerard, University of Southern California, USA Metaxas Dimitris, Rutgers University, USA Miller Ron, Ford Motor Company, USA Mirmehdi Majid, Bristol University, UK Monekosso Dorothy, Kingston University, UK Mueller Klaus, SUNY Stony Brook, USA Mulligan Jeff, NASA Ames Research Center, USA Nait-Charif Hammadi, Bournemouth University, UK Nefian Ara, Intel, USA Nicolescu Mircea, University of Nevada, Reno, USA Nixon Mark, University of Southampton, UK Nolle Lars, The Nottingham Trent University, UK Ntalianis Klimis, National Technical University of Athens, Greece Pantic Maja, Imperial College, UK Papadourakis George, Technological Education Institute, Greece Papanikolopoulos Nikolaos, University of Minnesota, USA Parvin Bahram, Lawrence Berkeley National Lab, USA Pati Peeta Basa, Indian Institute of Science, India Patras Ioannis, Queen Mary University, London, UK Petrakis Euripides, Technical University of Crete, Greece Peyronnet Sylvain, LRDE/EPITA, France Pitas Ioannis, University of Thessaloniki, Greece Porikli Fatih, MERL, USA Prabhakar Salil, DigitalPersona Inc., USA Qian Gang, Arizona State University, USA Regazzoni Carlo, University of Genoa, Italy Remagnino Paolo, Kingston University, UK Ribeiro Eraldo, Florida Institute of Technology, USA
Ross Arun, West Virginia University, USA Schaefer Gerald, Aston University, UK Shi Pengcheng, The Hong Kong University of Science and Technology, Hong Kong Salgian Andrea, The College of New Jersey, USA Samir Tamer, Ingersoll Rand Security Technologies, USA Sarti Augusto, DEI, Politecnico di Milano, Italy Scalzo Fabien, University of Nevada, Reno, USA Shah Mubarak, University of Central Florida, USA Singh Rahul, San Francisco State University, USA Skurikhin Alexei, Los Alamos National Laboratory, USA Sturm Peter, INRIA Rhône-Alpes, France Su Chung-Yen, National Taiwan Normal University, Taiwan Sugihara Kokichi, University of Tokyo, Japan Sun Zehang, eTreppid Technologies, USA Teoh Eam Khwang, Nanyang Technological University, Singapore Thiran Jean-Philippe, EPFL, Switzerland Tobin Kenneth, Oak Ridge National Laboratory, USA Triesch Jochen, Frankfurt Institute for Advanced Studies, Germany Tsechpenakis Gabriel, University of Miami, USA Tsotsos John, York University, Canada Tubaro Stefano, DEI, Politecnico di Milano, Italy Velastin Sergio, Kingston University London, UK Veropoulos Kostas, Desert Research Institute, USA Verri Alessandro, Università di Genova, Italy Wang Song, University of South Carolina, USA Wang Junxian, UtopiaCompression, USA Wang Yunhong, Chinese Academy of Sciences, China Webster Michael, University of Nevada, Reno, USA Wolff Larry, Equinox Corporation, USA Wong Kenneth, University of Hong Kong, Hong Kong Xiang Tao, Queen Mary, University of London, UK Xu Meihe, University of California at Los Angeles, USA Yau Wei-Yun, Institute for Infocomm Research, Singapore Yeasin Mohammed, Memphis University, USA Yin Lijun, SUNY at Binghamton, USA Yuan Chunrong, University of Tuebingen, Germany Zhang Yan, Delphi Corporation, USA
Computer Graphics (Area 2) Arns Laura, Purdue University, USA Baciu George, Hong Kong PolyU, Hong Kong Barneva Reneta, State University of New York, USA Bartoli Vilanova Anna, Eindhoven University of Technology, Netherlands
Belyaev Alexander, Max-Planck-Institut fuer Informatik, Germany Bilalis Nicholas, Technical University of Crete, Greece Bohez Erik, Asian Institute of Technology, Thailand Bouatouch Kadi, University of Rennes I, IRISA, France Brady Rachael, Duke University, USA Brimkov Valentin, State University of New York, USA Brown Ross, Queensland University of Technology, Australia Cheng Irene, University of Alberta, Canada Choi Min, University of Colorado at Denver, USA Cremer Jim, University of Iowa, USA Crosa Pere Brunet, Universitat Polit`ecnica de Catalunya, Spain Damiand Guillaume, SIC Laboratory, France Dingliana John, Trinity College, Ireland Fiorio Christophe, LIRMM, France Floriani Leila De, University of Maryland, USA Gaither Kelly, University of Texas at Austin, USA Geiger Christian, Duesseldorf University of Applied Sciences, Germany Grller Eduard, Vienna University of Technology, Austria Gu David, State University of New York at Stony Brook, USA Hadwiger Helmut Markus, VRVis Research Center, Austria Haller Michael, Upper Austria University of Applied Sciences, Austria Hamza-Lup Felix, Armstrong Atlantic State University, USA Hernandez Jose Tiberio, Universidad de los Andes, Colombia Hinkenjan Andre, Bonn-Rhein-Sieg University of Applied Sciences, Germany Huang Zhiyong, Institute for Infocomm Research, Singapore Julier Simon J., University College London, UK Kakadiaris Ioannis, University of Houston, USA Kamberov George, Stevens Institute of Technology, USA Klosowski James, IBM T.J. Watson Research Center, USA Kobbelt Leif, RWTH Aachen, Germany Lee Seungyong, Pohang Univ. of Sci. and Tech. (POSTECH), Korea Lok Benjamin, University of Florida, USA Loviscach Jorn, University of Applied Sciences, Bremen, Germany Martin Ralph, Cardiff University, UK Meenakshisundaram Gopi, University of California-Irvine, USA Mendoza Cesar, NaturalMotion Ltd., USA Metaxas Dimitris, Rutgers University, USA Monroe Laura, Los Alamos National Lab, USA Nait-Charif Hammadi, University of Dundee, Scotland Noma Tsukasa, Kyushu Institute of Technology, Japan Oliveira Manuel M., Univ. Fed. do Rio Grande do Sul, Brazil Pajarola Renato, University of Zurich, Switzerland Palanque Philippe, University of Paul Sabatier, France Pascucci Valerio, Lawrence Livermore National Laboratory, USA Pattanaik Sumanta, University of Central Florida, USA
Peters Jorg, University of Florida, USA Qin Hong, State University of New York at Stony Brook, USA Renner Gabor, Computer and Automation Research Institute, Hungary Sapidis Nickolas, Aegean University, Greece Sarfraz Muhammad, King Fahd University of Petroleum and Minerals, Saudi Arabia Schaefer Scott, Texas A&M University, USA Sequin Carlo, University of California-Berkeley, USA Shamir Arik, The Interdisciplinary Center, Herzliya, Israel Silva Claudio, University of Utah, USA Snoeyink Jack, University of North Carolina at Chapel Hill, USA Sourin Alexei, Nanyang Technological University, Singapore Teschner Matthias, University of Freiburg, Germany Umlauf Georg, University of Kaiserslautern, Germany Vinacua Alvar, Universitat Polit`ecnica de Catalunya, Spain Wald Ingo, University of Utah, USA Weinkauf Tino, ZIB Berlin, Germany Wylie Brian, Sandia National Laboratory, USA Ye Duan, University of Missouri-Columbia, USA Yin Lijun, Binghamton University, USA Yuan Xiaoru, University of Minnesota, USA
Virtual Reality (Area 3) Alcañiz Mariano, Technical University of Valencia, Spain Arns Laura, Purdue University, USA Behringer Reinhold, Leeds Metropolitan University, UK Benes Bedrich, Purdue University, USA Bilalis Nicholas, Technical University of Crete, Greece Blach Roland, Fraunhofer Institute for Industrial Engineering, Germany Boyle Richard, NASA Ames Research Center, USA Brega Jos Remo Ferreira, UNIVEM, PPGCC, Brazil Brown Ross, Queensland University of Technology, Australia Chen Jian, Brown University, USA Cheng Irene, University of Alberta, Canada Craig Alan, NCSA University of Illinois at Urbana-Champaign, USA Cremer Jim, University of Iowa, USA Crosa Pere Brunet, Universitat Politècnica de Catalunya, Spain Encarnacao L. Miguel, Imedia Labs, USA Figueroa Pablo, Universidad de los Andes, Colombia Froehlich Bernd, University of Weimar, Germany Geiger Christian, Duesseldorf University of Applied Sciences, Germany Gupta Satyandra K., University of Maryland, USA Haller Michael, FH Hagenberg, Austria Hamza-Lup Felix, Armstrong Atlantic State University, USA
Harders Matthias, ETH Zuerich, Switzerland Hinkenjan Andre, Bonn-Rhein-Sieg University of Applied Sciences, Germany Julier Simon J., University College London, UK Klosowski James, IBM T.J. Watson Research Center, USA Liere Robert van, CWI, Netherlands Lindt Irma, Fraunhofer FIT, Germany Lok Benjamin, University of Florida, USA Molineros Jose, Teledyne Scientific and Imaging, USA Monroe Laura, Los Alamos National Lab, USA Muller Stefan, University of Koblenz, Germany Paelke Volker, Leibniz Universit¨ at Hannover, Germany Peli Eli, Harvard University, USA Qian Gang, Arizona State University, USA Reiners Dirk, University of Louisiana, USA Rizzo Albert, University of Southern California, USA Rodello Ildeberto, UNIVEM, PPGCC, Brazil Rolland Jannick, University of Central Florida, USA Santhanam Anand, MD Anderson Cancer Center Orlando, USA Sapidis Nickolas, Aegean University, Greece Schmalstieg Dieter, Graz University of Technology, Austria Sourin Alexei, Nanyang Technological University, Singapore Srikanth Manohar, Indian Institute of Science, India Stefani Oliver, COAT-Basel, Switzerland Varsamidis Thomas, University of Wales, UK Wald Ingo, University of Utah, USA Yu Ka Chun, Denver Museum of Nature and Science, USA Yuan Chunrong, University of Tuebingen, Germany Zachmann Gabriel, Clausthal University, Germany Zyda Michael, University of Southern California, USA
Visualization (Area 4) Apperley Mark, University of Waikato, New Zealand Arns Laura, Purdue University, USA Avila Lisa, Kitware, USA Balázs Csébfalvi, Budapest University of Technology and Economics, Hungary Bartoli Anna Vilanova, Eindhoven University of Technology, Netherlands Bilalis Nicholas, Technical University of Crete, Greece Brodlie Ken, University of Leeds, UK Brown Ross, Queensland University of Technology, Australia Chen Jian, Brown University, USA Cheng Irene, University of Alberta, Canada Crosa Pere Brunet, Universitat Politècnica de Catalunya, Spain Doleisch Helmut, VRVis Research Center, Austria Duan Ye, University of Missouri-Columbia, USA
Encarnasao James Miguel, Imedia Labs, USA Ertl Thomas, University of Stuttgart, Germany Floriani Leila De, University of Maryland, USA Fujishiro Issei, Tohoku University, Japan Geiger Christian, Duesseldorf University of Applied Sciences, Germany Gröller Eduard, Vienna University of Technology, Austria Goebel Randy, University of Alberta, Canada Hadwiger Helmut Markus, VRVis Research Center, Austria Hamza-Lup Felix, Armstrong Atlantic State University, USA Julier Simon J., University College London, UK Koracin Darko, Desert Research Institute, USA Liere Robert van, CWI, Netherlands Lim Ik Soo, University of Wales, UK Ma Kwan-Liu, University of California-Davis, USA Maeder Anthony, CSIRO ICT Centre, Australia Malpica Jose, Alcala University, Spain Masutani Yoshitaka, The University of Tokyo Hospital, Japan Melançon Guy, INRIA Futurs and CNRS UMR 5506 LIRMM, France Monroe Laura, Los Alamos National Lab, USA Mueller Klaus, SUNY Stony Brook, USA Paelke Volker, Leibniz Universität Hannover, Germany Preim Bernhard, Otto-von-Guericke University, Germany Rabin Robert, University of Wisconsin at Madison, USA Rhyne Theresa-Marie, North Carolina State University, USA Rolland Jannick, University of Central Florida, USA Santhanam Anand, MD Anderson Cancer Center Orlando, USA Scheuermann Gerik, University of Leipzig, Germany Shen Han-Wei, Ohio State University, USA Silva Claudio, University of Utah, USA Snoeyink Jack, University of North Carolina at Chapel Hill, USA Sourin Alexei, Nanyang Technological University, Singapore Theisel Holger, Max-Planck Institut für Informatik, Germany Thiele Olaf, University of Mannheim, Germany Tory Melanie, University of Victoria, Canada Umlauf Georg, University of Kaiserslautern, Germany Viegas Fernanda, IBM, USA Viola Ivan, University of Bergen, Norway Wald Ingo, University of Utah, USA Wylie Brian, Sandia National Laboratory, USA Yeasin Mohammed, Memphis University, USA Yuan Xiaoru, University of Minnesota, USA Zachmann Gabriel, Clausthal University, Germany Zhukov Leonid, Caltech, USA
ISVC 2007 Special Tracks Intelligent Algorithms for Smart Monitoring of Complex Environments Organizers Paolo Remagnino, DIRC, Kingston University, UK How-Lung Eng, IIR, Singapore Guoliang Fan, Oklahoma State University, USA Yunqian Ma, Honeywell Labs, USA Dorothy Monekosso, DIRC, Kingston University, UK Yau Wei Yun, IIR, Singapore
Object Recognition Organizers Andrea Salgian, The College of New Jersey, USA Fabien Scalzo, University of Nevada, Reno, USA Program Committee Boris Epshtein, The Weizmann Institute of Science, Israel Bastian Leibe, ETH Zurich, Switzerland Bogdan Matei, Sarnoff Corporation, USA Raphael Maree, Université de Liège, Belgium Randal Nelson, University of Rochester, USA Justus Piater, Université de Liège, Belgium Nicu Sebe, University of Amsterdam, Netherlands Bill Triggs, INRIA, France Tinne Tuytelaars, Katholieke Universiteit Leuven, Belgium
Image Databases Organizers Sanjiv K. Bhatia, University of Missouri-St. Louis, USA Ashok Samal, University of Missouri-St. Louis, USA Bedrich Benes, Purdue University, USA Sharlee Climer, Washington University in St. Louis, USA
Algorithms for the Understanding of Dynamics in Complex and Cluttered Scenes Organizers Paolo Remagnino, DIRC, Kingston University, UK Fatih Porikli, MERL, USA Larry Davis, University of Maryland, USA Massimo Piccardi, University of Technology Sydney, Australia Program Committee Rita Cucchiara, University of Modena, Italy Gian Luca Foresti, University of Udine, Italy Yoshinori Kuno, Saitama University, Japan Mohan Trivedi, University of California, San Diego, USA Andrea Prati, University of Modena, Italy Carlo Regazzoni, University of Genoa, Italy Graeme Jones, Kingston University, UK Steve Maybank, Birkbeck University of London, UK Ram Nevatia, University of Southern California, USA Sergio Velastin, Kingston University, UK Monique Thonnat, INRIA, Sophia Antipolis, France Tieniu Tan, National Lab of Pattern Recognition, China James Ferryman, Reading University, UK Andrea Cavallaro, Queen Mary, University of London, UK Klaus Diepold, University of Technology in Munich, Germany
Medical Data Analysis Organizers Irene Cheng, University of Alberta, Canada Guido Cortelazzo, University of Padova, Italy Kostas Daniilidis, University of Pennsylvania, USA Pablo Figueroa, Universidad de los Andes, Colombia Tom Malzbender, Hewlett Packard Lab., USA Mrinal Mandal, University of Alberta, Canada Lijun Yin, SUNY at Binghamton, USA Karel Zuiderveld, Vital Images Inc., USA Program Committee Walter Bischof, University of Alberta, Canada Anup Basu, University of Alberta, Canada Paul Major, University of Alberta, Canada
Tarek El-Bialy, University of Alberta, Canada Jana Carlos Flores, University of Alberta, Canada Randy Goebel, University of Alberta, Canada David Hatcher, DDI Central Corp., USA Shoo Lee, iCARE, Capital Health, Canada Jianbo Shi, University of Pennsylvania, USA Garnette Sutherland, University of Calgary, Canada
Soft Computing in Image Processing and Computer Vision Organizers Gerald Schaefer, Nottingham Trent University, UK Mike Nachtegael, Ghent University, Belgium Lars Nolle, Nottingham Trent University, UK Etienne Kerre, Ghent University, Belgium
Additional Reviewers Leandro A. F. Fernandes Erik Murphy-Chutorian Tarak Gandhi Florian Mannuss Daniel Patel Mark Keck
Organizing Institutions and Sponsors
Table of Contents – Part II
Motion and Tracking II Visible and Infrared Sensors Fusion by Matching Feature Points of Foreground Blobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Pier-Luc St-Onge and Guillaume-Alexandre Bilodeau
1
Multiple Combined Constraints for Optical Flow Estimation. . . . . . . . . . . Ahmed Fahad and Tim Morris
11
Combining Models of Pose and Dynamics for Human Motion Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Roman Filipovych and Eraldo Ribeiro
21
Optical Flow and Total Least Squares Solution for Multi-scale Data in an Over-Determined System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Homa Fashandi, Reza Fazel-Rezai, and Stephen Pistorius
33
A Hardware-Friendly Adaptive Tensor Based Optical Flow Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhao-Yi Wei, Dah-Jye Lee, and Brent E. Nelson
43
Segmentation/Feature Extraction/Classification Image Segmentation That Optimizes Global Homogeneity in a Variational Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Wei Wang and Ronald Chung
52
Image and Volume Segmentation by Water Flow . . . . . . . . . . . . . . . . . . . . . Xin U. Liu and Mark S. Nixon
62
A Novel Hierarchical Technique for Range Segmentation of Large Building Exteriors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Reyhaneh Hesami, Alireza Bab-Hadiashar, and Reza Hosseinnezhad
75
Lip Contour Segmentation Using Kernel Methods and Level Sets . . . . . . . A. Khan, W. Christmas, and J. Kittler
86
A Robust Two Level Classification Algorithm for Text Localization in Documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . R. Kandan, Nirup Kumar Reddy, K.R. Arvind, and A.G. Ramakrishnan Image Classification from Small Sample, with Distance Learning and Feature Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Daphna Weinshall and Lior Zamir
96
106
ST1: Intelligent Algorithms for Smart Monitoring of Complex Environments Comparison of Techniques for Mitigating the Effects of Illumination Variations on the Appearance of Human Targets . . . . . . . . . . . . . . . . . . . . . C. Madden, M. Piccardi, and S. Zuffi
116
Scene Context Modeling for Foreground Detection from a Scene in Remote Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Liyuan Li, Xinguo Yu, and Weimin Huang
128
Recognition of Household Objects by Service Robots Through Interactive and Autonomous Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Al Mansur, Katsutoshi Sakata, and Yoshinori Kuno
140
Motion Projection for Floating Object Detection . . . . . . . . . . . . . . . . . . . . . Zhao-Yi Wei, Dah-Jye Lee, David Jilk, and Robert Schoenberger
152
Real-Time Subspace-Based Background Modeling Using Multi-channel Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bohyung Han and Ramesh Jain
162
A Vision-Based Architecture for Intent Recognition . . . . . . . . . . . . . . . . . . Alireza Tavakkoli, Richard Kelley, Christopher King, Mircea Nicolescu, Monica Nicolescu, and George Bebis
173
Shape/Recognition Combinatorial Shape Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ralf Juengling and Melanie Mitchell
183
Rotation-Invariant Texture Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Javier A. Montoya-Zegarra, João P. Papa, Neucimar J. Leite, Ricardo da Silva Torres, and Alexandre X. Falcão
193
A New Set of Normalized Geometric Moments Based on Schlick’s Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ramakrishnan Mukundan
205
Shape Evolution Driven by a Perceptually Motivated Measure . . . . . . . . . Sergej Lewin, Xiaoyi Jiang, and Achim Clausing
214
The Global-Local Transformation for Invariant Shape Representation . . . Konstantinos A. Raftopoulos and Stefanos D. Kollias
224
A Vision System for Recognizing Objects in Complex Real Images . . . . . Mohammad Reza Daliri, Walter Vanzella, and Vincent Torre
234
ST3: Image Databases RISE-SIMR: A Robust Image Search Engine for Satellite Image Matching and Retrieval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sanjiv K. Bhatia, Ashok Samal, and Prasanth Vadlamani
245
Content-Based Image Retrieval Using Shape and Depth from an Engineering Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Amit Jain, Ramanathan Muthuganapathy, and Karthik Ramani
255
Automatic Image Representation for Content-Based Access to Personal Photo Album . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Edoardo Ardizzone, Marco La Cascia, and Filippo Vella
265
Geographic Image Retrieval Using Interest Point Descriptors . . . . . . . . . . Shawn Newsam and Yang Yang
275
ST6: Soft Computing in Image Processing and Computer Vision Feed Forward Genetic Image Network: Toward Efficient Automatic Construction of Image Processing Algorithm . . . . . . . . . . . . . . . . . . . . . . . . Shinichi Shirakawa and Tomoharu Nagao
287
Neural Networks for Exudate Detection in Retinal Images . . . . . . . . . . . . . Gerald Schaefer and Edmond Leung
298
Kernel Fusion for Image Classification Using Fuzzy Structural Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Emanuel Aldea, Geoffroy Fouquier, Jamal Atif, and Isabelle Bloch
307
A Genetic Approach to Training Support Vector Data Descriptors for Background Modeling in Video Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Alireza Tavakkoli, Amol Ambardekar, Mircea Nicolescu, and Sushil Louis
318
Video Sequence Querying Using Clustering of Objects’ Appearance Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yunqian Ma, Ben Miller, and Isaac Cohen
328
Learning to Recognize Complex Actions Using Conditional Random Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Christopher I. Connolly
340
Poster Intrinsic Images by Fisher Linear Discriminant . . . . . . . . . . . . . . . . . . . . . . Qiang He and Chee-Hung Henry Chu
349
Shape-from-Shading Algorithm for Oblique Light Source . . . . . . . . . . . . . . Osamu Ikeda
357
Pedestrian Tracking from a Moving Host Using Corner Points . . . . . . . . . Mirko Meuter, Dennis Müller, Stefan Müller-Schneiders, Uri Iurgel, Su-Birm Park, and Anton Kummert
367
3D Reconstruction and Pose Determination of the Cutting Tool from a Single View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xi Zhang, Xiaodong Tian, Kazuo Yamazaki, and Makoto Fujishima
377
Playfield and Ball Detection in Soccer Video . . . . . . . . . . . . . . . . . . . . . . . . Junqing Yu, Yang Tang, Zhifang Wang, and Lejiang Shi
387
Single-View Matching Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Klas Nordberg
397
A 3D Face Recognition Algorithm Based on Nonuniform Re-sampling Correspondence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yanfeng Sun, Jun Wang, and Baocai Yin
407
A Novel Approach for Storm Detection Based on 3-D Radar Image Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Lei Han, Hong-Qing Wang, Li-Feng Zhao, and Sheng-Xue Fu
417
A New Approach for Vehicle Detection in Congested Traffic Scenes Based on Strong Shadow Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ehsan Adeli Mosabbeb, Maryam Sadeghi, and Mahmoud Fathy
427
A Robust Method for Near Infrared Face Recognition Based on Extended Local Binary Pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Di Huang, Yunhong Wang, and Yiding Wang
437
Surface Signature-Based Method for Modeling and Recognizing Free-Form Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H.B. Darbandi, M.R. Ito, and J. Little
447
Integrating Vision and Language: Semantic Description of Traffic Events from Image Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Takashi Hirano, Shogo Yoneyama, Yasuhiro Okada, and Yukio Kosugi Rule-Based Multiple Object Tracking for Traffic Surveillance Using Collaborative Background Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xiaoyuan Su, Taghi M. Khoshgoftaar, Xingquan Zhu, and Andres Folleco
459
469
A Novel Approach for Iris Recognition Using Local Edge Patterns . . . . . . Jen-Chun Lee, Ping S. Huang, Chien-Ping Chang, and Te-Ming Tu
479
Automated Trimmed Iterative Closest Point Algorithm . . . . . . . . . . . . . . . R. Synave, P. Desbarats, and S. Gueorguieva
489
Classification of High Resolution Satellite Images Using Texture from the Panchromatic Band . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . María C. Alonso, María A. Sanz, and José A. Malpica
499
Deriving a Priori Co-occurrence Probability Estimates for Object Recognition from Social Networks and Text Processing . . . . . . . . . . . . . . . Guillaume Pitel, Christophe Millet, and Gregory Grefenstette
509
3D Face Reconstruction Under Imperfect Tracking Circumstances Using Shape Model Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H. Fang and N.P. Costen
519
A Combined Statistical-Structural Strategy for Alphanumeric Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N. Thome and A. Vacavant
529
The Multiplicative Path Toward Prior-Shape Guided Active Contour for Object Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Wei Wang and Ronald Chung
539
On Shape-Mediated Enrolment in Ear Biometrics . . . . . . . . . . . . . . . . . . . . Banafshe Arbab-Zavar and Mark S. Nixon
549
Determining Efficient Scan-Patterns for 3-D Object Recognition Using Spin Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Stephan Matzka, Yvan R. Petillot, and Andrew M. Wallace
559
A Comparison of Fast Level Set-Like Algorithms for Image Segmentation in Fluorescence Microscopy . . . . . . . . . . . . . . . . . . . . . . . . . . . Martin Maška, Jan Hubený, David Svoboda, and Michal Kozubek
571
Texture-Based Objects Recognition for Vehicle Environment Perception Using a Multiband Camera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yousun Kang, Kiyosumi Kidono, Yoshikatsu Kimura, and Yoshiki Ninomiya
582
Object Tracking Via Uncertainty Minimization . . . . . . . . . . . . . . . . . . . . . . Albert Akhriev
592
Detection of a Speaker in Video by Combined Analysis of Speech Sound and Mouth Movement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Osamu Ikeda
602
Extraction of Cartographic Features from a High Resolution Satellite Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . José A. Malpica, Juan B. Mena, and Francisco J. González-Matesanz
611
Expression Mimicking: From 2D Monocular Sequences to 3D Animations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Charlotte Ghys, Maxime Taron, Nikos Paragios, Nikos Komodakis, and Bénédicte Bascle Object Recognition: A Focused Vision Based Approach . . . . . . . . . . . . . . . Noel Trujillo, Roland Chapuis, Frederic Chausse, and Michel Naranjo
621
631
A Robust Image Segmentation Model Based on Integrated Square Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Shuisheng Xie, Jundong Liu, Darlene Berryman, Edward List, Charles Smith, and Hima Chebrolu
643
Measuring Effective Data Visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ying Zhu
652
Automatic Inspection of Tobacco Leaves Based on MRF Image Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yinhui Zhang, Yunsheng Zhang, Zifen He, and Xiangyang Tang
662
A Mesh Meaningful Segmentation Algorithm Using Skeleton and Minima-Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhi-Quan Cheng, Kai Xu, Bao Li, Yan-Zhen Wang, Gang Dang, and Shi-Yao Jin
671
Fast kd-Tree Construction for 3D-rendering Algorithms Like Ray Tracing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sajid Hussain and Håkan Grahn
681
Phase Space Rendering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . André Hinkenjann and Thorsten Roth
691
Automatic Extraction of a Quadrilateral Network of NURBS Patches from Range Data Using Evolutionary Strategies . . . . . . . . . . . . . . . . . . . . . John William Branch, Flavio Prieto, and Pierre Boulanger
701
ChipViz : Visualizing Memory Chip Test Data . . . . . . . . . . . . . . . . . . . . . . . Amit P. Sawant, Ravi Raina, and Christopher G. Healey
711
Enhanced Visual Experience and Archival Reusability in Personalized Search Based on Modified Spider Graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . Dhruba J. Baishya
721
Probe-It! Visualization Support for Provenance . . . . . . . . . . . . . . . . . . . . . . Nicholas Del Rio and Paulo Pinheiro da Silva
732
Portable Projection-Based AR System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jihyun Oh, Byung-Kuk Seo, Moon-Hyun Lee, Hanhoon Park, and Jong-Il Park
742
Adaptive Chrominance Correction for a Projector Considering Image and Screen Color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sun Hee Park, Sejung Yang, and Byung-Uk Lee
751
Easying MR Development with Eclipse and InTml . . . . . . . . . . . . . . . . . . . Pablo Figueroa and Camilo Florez
760
Unsupervised Intrusion Detection Using Color Images . . . . . . . . . . . . . . . . Grant Cermak and Karl Keyzer
770
Pose Sampling for Efficient Model-Based Recognition . . . . . . . . . . . . . . . . . Clark F. Olson
781
Video Segmentation for Markerless Motion Capture in Unconstrained Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Martin Côté, Pierre Payeur, and Gilles Comeau
791
Hardware-Accelerated Volume Rendering for Real-Time Medical Data Visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Rui Shen and Pierre Boulanger
801
Fuzzy Morphology for Edge Detection and Segmentation . . . . . . . . . . . . . . Atif Bin Mansoor, Ajmal S. Mian, Adil Khan, and Shoab A. Khan
811
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
823
Visible and Infrared Sensors Fusion by Matching Feature Points of Foreground Blobs

Pier-Luc St-Onge and Guillaume-Alexandre Bilodeau

LITIV, Department of Computer Engineering, École Polytechnique de Montréal, P.O. Box 6079, Station Centre-ville, Montréal (Québec), Canada, H3C 3A7
{pier-luc.st-onge,guillaume-alexandre.bilodeau}@polymtl.ca
Abstract. Foreground blobs in a mixed stereo pair of videos (visible and infrared sensors) allow a coarse evaluation of the distances between each blob and the uncalibrated cameras. The main goals of this work are to find common feature points in each type of image and to create pairs of corresponding points in order to obtain coarse positioning of blobs in space. Feature points are found by two methods: the skeleton and the Discrete Curve Evolution (DCE) algorithm. For each method, a feature-based algorithm creates the pairs of points. Blob pairing can help to create those pairs of points. Finally, a RANSAC algorithm filters all pairs of points in order to respect the epipolar geometrical constraints. The median horizontal disparities for each pair of blobs are evaluated with two different ground truths. In most cases, the nearest blob is detected and disparities are as accurate as the background subtraction allows.
1 Introduction
By watching a calibrated rectified stereo pair1, a human visual system is able to reconstruct the scene in 3D. But if we use a camera for the visible spectrum and another for the infrared spectrum, images from both cameras have different colors and textures, and the human visual system is just not able to reconstruct the scene. Fortunately, a computer can do it with the proper image processing.

In recent years, infrared cameras have become more affordable than ever [1]. By being more accessible, they are used in applications like video surveillance. In fact, while there could be some occlusions or false alarms in the visible spectrum, the infrared spectrum could possibly resolve these problems [2], [3]. While this is not the goal of the present research, it illustrates one of the advantages of combining visible and infrared information. Another advantage is to exploit the two cameras to enable the evaluation of which person is farther and which person is closer. In a context of video surveillance, this could be very useful to identify and prevent crimes or strange behaviors.

In the literature, some algorithms can create dense disparity maps from two visible rectified stereo images [4]. Some other algorithms use uncalibrated infrared stereo images to create a sparse or dense disparity map [1], [5]. Unfortunately, these methods are not useful, as the corresponding points are featured with different values in infrared and visible images. As an example, the Phase Congruency algorithm [6] may find reliable feature points in any type of images, but the pairing by correlation product fails to find enough good pairs of points in infrared and visible images. So, color invariant features are needed. Shah et al. [7] have used many types of moments to describe feature points of both types of images. While this may work, their method also requires that the stereo pair has minimal disparity, so it is not very useful for the current research.

In the case of our research, both cameras are uncalibrated2. Nevertheless, they have to look approximately at the same vanishing point3, and they must have about the same up vector and field of view. The cameras are separated by at least half a meter to get significant differences between disparities. By convention for the rest of the paper, the left and right images are the visible and infrared images respectively. All images are scaled to a resolution of 640x480 after a correction of the lens distortion. The foreground blobs are obtained by a Temporal Averaging algorithm like the one in [8]. Shadows are eliminated as much as possible in the left image. Finally, blobs smaller than 256 pixels of area are deleted. The remaining blobs are the input of our method.

The proposed method finds two types of color invariant feature points in each foreground blob: from an approximated skeleton (section 2) and from a simplified implementation of the Discrete Curve Evolution process (section 3). Then, feature points are further described in order to allow the pairing with a correspondence matrix comparing all possible pairs of feature points of the same type (sections 2.2 and 3.2). These are the main contributions of the paper. The outliers are filtered with two successive algorithms presented in section 4: the blob pairs determined with the pairs of points, and a specialized RANSAC algorithm. The calculation of all blobs' disparity from all remaining pairs of points is defined in section 5. Section 6 contains the experiment details and the results. Finally, the paper ends with our conclusions and suggested improvements.

1 There is a line-by-line correspondence in rectified stereo pairs.
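To make the preprocessing concrete, here is a minimal Python sketch of the blob-size filter (the 256-pixel area threshold above), assuming OpenCV and NumPy; the temporal-averaging background subtractor [8] is treated as a black box that produces the binary foreground mask, and the function name is ours, not part of the original implementation.

```python
import cv2
import numpy as np

def filter_small_blobs(foreground_mask, min_area=256):
    """Keep only foreground blobs with an area of at least `min_area` pixels.

    `foreground_mask` is a binary uint8 image (0 = background, 255 = foreground),
    for example the output of a temporal-averaging background subtractor [8].
    """
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(
        foreground_mask, connectivity=8)
    cleaned = np.zeros_like(foreground_mask)
    for label in range(1, num_labels):  # label 0 is the background
        if stats[label, cv2.CC_STAT_AREA] >= min_area:
            cleaned[labels == label] = 255
    return cleaned
```

The same cleaned mask is then split into individual blobs that feed the feature point extraction of sections 2 and 3.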
2 The Skeleton
2.1 Feature Points
In a skeleton interpreted as a tree, the feature points are the vertices that are not connected by two edges (see figure 1). Here are the steps to extract the feature points in each foreground blob:

1. In the blob, delete all holes that are smaller than N pixels of area;
2. Approximate the contour of the blob with the Douglas-Peucker algorithm4 [9]. This algorithm has only one parameter: the accuracy (in pixels);
3. Draw the approximated blob in white on a black binary image;
4. Apply a distance transform [10] on the white region of the binary image;
5. With the convolution product, apply the filter $\begin{pmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{pmatrix}$ on the distance transform result;
6. Apply a threshold to get the first feature points;
7. Use Prim's algorithm [11] to get the minimum spanning tree of the complete graph made of all first feature points;
8. In the minimum spanning tree, keep all vertices that do not have two edges.

2 The internal and external parameters of the stereo system are unknown.
3 In practice, the cameras must look at the same farthest object in the scene.
4 This algorithm, like many others, is already implemented in OpenCV.
By default, the accuracy parameter of the Douglas-Peucker algorithm is 2.5 pixels, the parameter N is about 250, and the threshold is about 4. Figure 1 gives an example of the feature points found in each image.
Fig. 1. Two skeletons in the left and right images. The dark grey dots are all points found after the thresholding. The big light grey points are the feature points. The black lines bind the feature points to their neighbors.
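As an illustration only, the sketch below implements steps 2 to 8 above with OpenCV and NumPy; step 1 (hole filling) is assumed to have been applied to the mask beforehand, the OpenCV 4 findContours signature and Euclidean edge weights for the spanning tree are assumed, and all names are ours.

```python
import cv2
import numpy as np

def skeleton_feature_points(blob_mask, accuracy=2.5, threshold=4.0):
    """Steps 2-8 of the skeleton-based feature point extraction.

    `blob_mask` is a binary uint8 image containing one blob whose small holes
    (step 1) are assumed to have been filled.  Returns the retained feature
    points (x, y) and the edges of the minimum spanning tree.
    """
    # Steps 2-3: Douglas-Peucker approximation of the contour, then redraw it filled.
    contours, _ = cv2.findContours(blob_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    approx = cv2.approxPolyDP(max(contours, key=cv2.contourArea), accuracy, True)
    canvas = np.zeros_like(blob_mask)
    cv2.drawContours(canvas, [approx], -1, 255, -1)

    # Step 4: distance transform of the approximated blob.
    dist = cv2.distanceTransform(canvas, cv2.DIST_L2, 3)

    # Step 5: convolution with the 3x3 kernel of the paper.
    kernel = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], np.float32)
    response = cv2.filter2D(dist, cv2.CV_32F, kernel)

    # Step 6: threshold to obtain the first feature points (x, y).
    ys, xs = np.nonzero(response > threshold)
    points = np.column_stack([xs, ys]).astype(float)
    n = len(points)
    if n < 2:
        return points, []

    # Step 7: Prim's algorithm on the complete graph of the candidate points
    # (Euclidean distances assumed as edge weights).
    in_tree = np.zeros(n, dtype=bool)
    best = np.linalg.norm(points - points[0], axis=1)  # cost to reach the tree
    parent = np.zeros(n, dtype=int)
    in_tree[0] = True
    edges = []
    for _ in range(n - 1):
        j = int(np.argmin(np.where(in_tree, np.inf, best)))
        in_tree[j] = True
        edges.append((int(parent[j]), j))
        d = np.linalg.norm(points - points[j], axis=1)
        closer = (~in_tree) & (d < best)
        best[closer] = d[closer]
        parent[closer] = j

    # Step 8: keep only the vertices whose degree in the tree is not 2.
    degree = np.bincount(np.array(edges).ravel(), minlength=n)
    return points[degree != 2], edges
```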
2.2 Feature Points Description
All found feature points must be described in order to make pairs of similar points from the stereo images. The pairs are determined with a correspondence matrix containing the totalScores of each possible pair of feature points. Of course, the corresponding points are the ones that have the highest totalScores. The totalScore metric is the sum of four individual metrics that we have defined:

$totalScore = scoreDist + scoreEucl + scoreEdge + scoreAngl$ .   (1)
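Before describing the four metrics, here is one plausible way of turning such a correspondence matrix into pairs of points; the greedy, one-to-one selection is our own assumption, since the text only states that the pairs with the highest totalScores are kept.

```python
import numpy as np

def select_point_pairs(score_matrix):
    """Greedy, one-to-one pairing from a matrix of totalScore values.

    score_matrix[i, j] is the totalScore of left point i with right point j.
    The best-first, use-each-point-once strategy is an assumption on our part.
    """
    scores = np.array(score_matrix, dtype=float)
    pairs = []
    while scores.size and np.isfinite(scores).any():
        i, j = np.unravel_index(int(np.argmax(scores)), scores.shape)
        pairs.append((int(i), int(j)))
        scores[i, :] = -np.inf  # a left point is paired at most once
        scores[:, j] = -np.inf  # a right point is paired at most once
    return pairs
```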
Here is the description of each metric:

– By normalizing the distance-transformed blobs to values from zero to one5, each feature point is described according to its relative distance rd from itself to its blob's contour. So, the first metric is

$scoreDist = -|rd_l - rd_r|$ ,   (2)

where $rd_l$ and $rd_r$ are the relative distances of the points in a candidate pair. Of course, $l$ means left image, and $r$ means right image.
– Given that the two images have the same size, it is possible to evaluate the distance between the two points. The second metric is the sigmoid

$scoreEucl = \dfrac{1}{1 + e^{-3 + 6d/D}}$ ,   (3)

where $d$ is the Euclidean distance in pixels between the two points, and $D$ is the length in pixels of the diagonal of the images.
– In the minimum spanning tree, each point in the pair has one or many edges. The numbers of edges of the left and right points are $nb_l$ and $nb_r$ respectively. So, the third metric is

$scoreEdge = \begin{cases} 2 & \text{if } (nb_l \geq 3) = (nb_r \geq 3) \\ 0 & \text{if not} \end{cases}$ .   (4)

– For each point in a candidate pair, all the point's edges are oriented from the point to its neighbor vertex in its minimum spanning tree. Then, the left point's oriented edges are compared to the right point's oriented edges in a correspondence matrix. In this matrix, the score is the cosine of the angle between two compared edges. The larger the cosine, the better the two edges correspond. With $P_e$ the set of corresponding edges, the fourth metric for the candidate pair of feature points is

$scoreAngl = \dfrac{\sum_{p \in P_e} \cos\theta_p}{\max(nb_l, nb_r)}$ ,   (5)

where $\theta_p$ is the angle between the two edges in the pair $p$.

5 All values in the distance-transformed image are scaled linearly to values from zero to one.
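Putting equations (1) to (5) together, a candidate pair could be scored as in the sketch below. The inputs are assumed to be precomputed (normalized distance-transform values, point coordinates, unit vectors of the tree edges leaving each point), and the greedy cosine-based edge correspondence is only our reading of the matrix described for equation (5).

```python
import numpy as np

def total_score(rd_l, rd_r, p_l, p_r, edges_l, edges_r, diag):
    """Equations (1)-(5) for one candidate pair of skeleton feature points.

    rd_l, rd_r       : normalized distance-transform values of the two points
    p_l, p_r         : (x, y) image coordinates of the two points
    edges_l, edges_r : unit vectors of the tree edges leaving each point
    diag             : D, the length in pixels of the image diagonal
    """
    # (2) agreement of the relative distances to the blob contour
    score_dist = -abs(rd_l - rd_r)

    # (3) sigmoid of the Euclidean distance between the two image positions
    d = float(np.linalg.norm(np.asarray(p_l, float) - np.asarray(p_r, float)))
    score_eucl = 1.0 / (1.0 + np.exp(-3.0 + 6.0 * d / diag))

    # (4) both points are junctions (>= 3 edges), or neither is
    nb_l, nb_r = len(edges_l), len(edges_r)
    score_edge = 2.0 if (nb_l >= 3) == (nb_r >= 3) else 0.0

    # (5) mean cosine over corresponding oriented edges; the greedy
    #     cosine-maximizing correspondence below is our own reading.
    score_angl = 0.0
    if nb_l and nb_r:
        cos = np.array([[float(np.dot(a, b)) for b in edges_r] for a in edges_l])
        total = 0.0
        for _ in range(min(nb_l, nb_r)):
            i, j = np.unravel_index(int(np.argmax(cos)), cos.shape)
            total += cos[i, j]
            cos[i, :] = -np.inf
            cos[:, j] = -np.inf
        score_angl = total / max(nb_l, nb_r)

    return score_dist + score_eucl + score_edge + score_angl  # (1)
```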
3 Discrete Curve Evolution

3.1 Feature Points
We use the Discrete Curve Evolution (DCE) algorithm described in [12] to keep the most significant points of the contour of each foreground blob. At each iteration of the DCE, instead of removing the complete set $V_{min}(P^i)$ from $Vertices(P^i)$ (see [12]), we remove only one vertex $v \in V_{min}(P^i)$. In this way, we can control how many vertices we want to keep in the final contour. We did not implement the topology-preserving DCE process, because we wanted the code to be as simple and fast as possible. The final contours usually have no bad loops. The external contours end with 33 vertices at most (by default). The internal contours for holes larger than 256 pixels of area end with 13 vertices at most (by default). Figure 2 gives an example of the feature points found in each image.
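A minimal sketch of this simplified evolution, using the relevance measure K of equation (7) in section 3.2, might look as follows; the closed-polygon handling and the fixed vertex budget are our own assumptions.

```python
import numpy as np

def dce_simplify(polygon, max_vertices=33):
    """Simplified Discrete Curve Evolution: repeatedly delete the single vertex
    with the lowest relevance measure K (equation (7)) until at most
    `max_vertices` remain.  `polygon` is an (N, 2) array of contour points,
    interpreted as a closed polygon.  This does not preserve topology, as in
    the paper's implementation."""
    pts = [np.asarray(p, dtype=float) for p in polygon]

    def relevance(prev, cur, nxt):
        v1, v2 = cur - prev, nxt - cur
        l1, l2 = np.linalg.norm(v1), np.linalg.norm(v2)
        if l1 == 0.0 or l2 == 0.0:
            return 0.0
        # beta is the turn (external) angle at the current vertex
        cos_beta = np.clip(np.dot(v1, v2) / (l1 * l2), -1.0, 1.0)
        beta = np.arccos(cos_beta)
        return beta * l1 * l2 / (l1 + l2)  # K(beta, l1, l2), equation (7)

    while len(pts) > max_vertices:
        n = len(pts)
        ks = [relevance(pts[(i - 1) % n], pts[i], pts[(i + 1) % n]) for i in range(n)]
        del pts[int(np.argmin(ks))]  # remove only one vertex per iteration
    return np.array(pts)
```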
Fig. 2. Simplified contour of blobs in the left and right images. The black dots are all points found by our DCE implementation.
3.2 Feature Points Description
For the DCE algorithm, the totalScore metric is defined differently:

$totalScore = scoreK + scoreEucl + scoreAngl$ .   (6)

As defined in [12], the relevance measure is

$K(\beta, l_1, l_2) = \dfrac{\beta l_1 l_2}{l_1 + l_2}$ ,   (7)

where $\beta$ is the external angle, and $l_i$ is the length of the $i$th edge. The metric scoreK is defined as:

$scoreK = \begin{cases} \dfrac{-|K_l - K_r|}{\sqrt{K_l^2 + K_r^2}} & \text{if } K_l^2 + K_r^2 > 0 \\ 1 & \text{if not} \end{cases}$   (8)

where $K_l$ and $K_r$ are the relevance measures of the left and right points respectively. The metrics scoreEucl and scoreAngl are the same as in section 2.2.
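These formulas translate directly into a few lines; scoreEucl and scoreAngl are assumed to be computed as in section 2.2, and the function name is ours.

```python
import math

def dce_total_score(k_l, k_r, score_eucl, score_angl):
    """Equations (6)-(8) for a candidate pair of DCE feature points."""
    denom = math.sqrt(k_l ** 2 + k_r ** 2)
    score_k = -abs(k_l - k_r) / denom if denom > 0.0 else 1.0  # (8)
    return score_k + score_eucl + score_angl                    # (6)
```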
4 Filtering the Pairs of Points

4.1 Blob Pairs Filter with Centroids
Until now, the previous methods only match points by comparing all points of the left image to the points of the right image. Of course, there are outliers that match points from two foreground blobs that do not correspond. In order to avoid these outliers, we classify each pair of points into bins representing all possible pairs of corresponding blobs; this is another correspondence matrix, and the score is the number of pairs of points. Then, it is quite easy to identify the most probable pairs of blobs. Finally, those pairs of blobs are used to compute the pairs of points again. In fact, not only is there a correspondence matrix of totalScores for each pair of blobs, but we can add another metric to both totalScores in sections 2.2 and 3.2. This is done by these steps:

1. Align the left and right blobs according to their centroid;
2. Compute the new displacement $\tilde{d}$ between the left and right points;
3. Align the left and right blobs in order to get the minimal bounding box that contains them;
4. If the horizontal displacement of $\tilde{d}$, in absolute value, is less than a quarter of the width of the minimal bounding box, then add 1.0 to the totalScore.
5. If the vertical displacement of $\tilde{d}$, in absolute value, is less than a quarter of the height of the minimal bounding box, then add 1.0 to the totalScore.

By using this additional metric, we are able to get rid of some outliers that may link, for example, the head of a person to his foot (see figure 4). This is why the adaptive thresholds for $\tilde{d}$ are a quarter of the maximal width and height of the two corresponding blobs. Finally, $\tilde{d}$ is not used in equation 3.
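The sketch below illustrates both parts of this filter: the voting that selects the most probable blob pairs, and the centroid-based bonus of steps 1 to 5. The tie-breaking rule and the interpretation of $\tilde{d}$ as a centroid-aligned difference are our own assumptions.

```python
import numpy as np

def most_probable_blob_pairs(point_pairs, blob_of_left, blob_of_right):
    """Vote every pair of points into a (left blob, right blob) bin and keep,
    for each left blob, the right blob with the most votes (our reading of
    'the most probable pairs of blobs')."""
    votes = {}
    for i, j in point_pairs:
        key = (blob_of_left[i], blob_of_right[j])
        votes[key] = votes.get(key, 0) + 1
    best = {}
    for (bl, br), count in votes.items():
        if bl not in best or count > votes[(bl, best[bl])]:
            best[bl] = br
    return list(best.items())

def centroid_bonus(p_l, p_r, centroid_l, centroid_r, box_w, box_h):
    """Steps 1-5: +1.0 per axis when the centroid-aligned displacement d~
    stays within a quarter of the paired blobs' minimal bounding box
    (box_w x box_h)."""
    d_tilde = (np.asarray(p_r, float) - np.asarray(centroid_r, float)) \
            - (np.asarray(p_l, float) - np.asarray(centroid_l, float))
    bonus = 0.0
    if abs(d_tilde[0]) < box_w / 4.0:
        bonus += 1.0
    if abs(d_tilde[1]) < box_h / 4.0:
        bonus += 1.0
    return bonus
```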
4.2 Epipolar Geometrical Constraint
According to [1], the epipolar geometrical constraint is represented by equation 9, where $F$ is the fundamental matrix, $\mathbf{x}_l = (x_l, y_l, 1)^T$ is a left point and $\mathbf{x}_r = (x_r, y_r, 1)^T$ is the corresponding right point.

$\mathbf{x}_r^T F \mathbf{x}_l = 0$   (9)
Even after applying the blob pairs filter, there are still outliers. It is almost impossible to find a fundamental matrix $F$ such that equation 9 stays true for all pairs of points. Fortunately, a specialized RANSAC algorithm6 can eliminate those outliers while accepting matrix products that are not exactly zero [13]. The remaining pairs of points can be used to compute the blobs' disparity in order to evaluate the relative distance between the blobs and the cameras.
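Following footnote 6, this filtering can be sketched with OpenCV's RANSAC-based fundamental matrix estimation; the wrapper and its guards are ours.

```python
import cv2
import numpy as np

def ransac_filter(left_pts, right_pts, max_dist=2.0, confidence=0.9):
    """Keep only the point pairs consistent with one fundamental matrix
    (equation (9)), using OpenCV's RANSAC estimator with the parameters of
    footnote 6."""
    left_pts = np.asarray(left_pts, dtype=np.float64)
    right_pts = np.asarray(right_pts, dtype=np.float64)
    if len(left_pts) < 8:  # not enough pairs for a stable estimate
        return left_pts, right_pts
    # With (points1, points2) = (left, right), OpenCV enforces x_r^T F x_l ~ 0.
    F, mask = cv2.findFundamentalMat(left_pts, right_pts, cv2.FM_RANSAC,
                                     max_dist, confidence)
    if F is None or mask is None:
        return left_pts, right_pts
    inliers = mask.ravel().astype(bool)
    return left_pts[inliers], right_pts[inliers]
```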
5 Blobs' Disparity
For each known pair of blobs, all its pairs of feature points are grouped together. Within a group of pairs of points, one must compute the median horizontal displacement. At this stage, there are still some outliers. This is why it is preferable to use the median instead of the mean value. Because the up vector and the field of view of both cameras are about the same, it is more interesting to focus on the horizontal displacement instead of the complete vectorial displacement. Finally, the absolute value of the median horizontal displacement is the blob pair's disparity value. Figure 3 is an example of a final render.

6 We used the implementation in OpenCV. The default maximum distance is 2.0, and the default confidence level is 0.9.
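The disparity computation itself reduces to a median, as in this short sketch (function name ours):

```python
import numpy as np

def blob_pair_disparity(left_pts, right_pts):
    """Disparity of a blob pair: absolute value of the median horizontal
    displacement over its filtered pairs of feature points (section 5)."""
    dx = np.asarray(right_pts, dtype=float)[:, 0] - np.asarray(left_pts, dtype=float)[:, 0]
    return float(abs(np.median(dx)))
```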
Fig. 3. A final output with all filtered pairs of points. Darker blobs are farther than pale blobs.
6 Experimentation

6.1 Experiments
The first test evaluates the effect of the filters described in section 4. Then, the other tests evaluate the pairs of points with two types of ground truths:

– The first type contains identification number(s) (ID) for each significant blob at every fourth frame of each video. For each pair of stereo images, the corresponding blobs have the same ID. If one significant blob is merged with another blob, the final foreground blob gets two IDs. Then, if this blob separates, the two resultant blobs get the proper unique ID. If two people are merged in one blob (identified with two IDs) in one image and if they are separated in the other image (one ID for each blob), this makes two pairs of blobs. With this type of ground truth, we are able to classify the pairs of blobs found with our correspondence matrix (see section 4.1): valid pairs, bad pairs (false positives) and missing pairs (false negatives).
– The second type contains information about the distance between each blob and the left camera. The information is a relative distance without any unit. This type of ground truth is only used to know which blob is farther than the other: it is not yet used to know how accurate the disparities computed in section 5 are. The metric is then computed with the following steps:

1. Put each blob's disparity in the vector $\alpha$;
2. Put each blob's ground truth relative distance in the vector $\beta$;
3. Do the scalar product $\gamma_1 = -\alpha \cdot \beta$;
4. Sort the values in $\alpha$ and in $\beta$, and redo the scalar product $\gamma_2 = -\alpha \cdot \beta$;
5. The metric is then $\kappa = \gamma_1 - \gamma_2$.
The negative sign for the $\gamma_i$ values is because the disparity values are high when the blobs are close to the camera. If $\kappa$ is zero, then the order of the blobs is perfect. If $\kappa$ is negative, then it only tells us that our program has the wrong estimation of which blob is closer. Finally, all tests are done using the thresholds and parameters proposed earlier in the paper. The sources are two videos of 210 frames, but we have tested only one stereo pair out of every four (53 stereo pairs, including the first one). The videos are recorded from a Sony DFWSX910 visible camera and a FLIR A40V infrared camera. These cameras are synchronized at 7.5 fps.
6.2 Results
Figure 4 shows the effects of applying the blob pairs filter and the epipolar geometrical constraint. It can be noted that the blob pairs filter helps to avoid coarse pairing errors. While the RANSAC algorithm filters many outliers, it also gets rid of some inliers: we still have to find the right parameters for this algorithm. Anyway, many of the outliers are caused by the unreliable background subtraction. With a better background subtractor, we expect to get better results. For the following results, both filters are used.
Fig. 4. Visual results when merging all found feature points from the skeleton and the DCE algorithms. In a), no filter is used. In b) and c), the blob pairs filter is activated. Only in c), the RANSAC algorithm is activated. In all three cases, our method properly detects that the two people are not close to each other.
Table 1 shows the results with the first type of ground truth. Most of the valid blob pairs have been found, so they are usable in the blob pairs filter. The bad or missing blob pairs are caused by the imperfect background subtraction (figure 5a) or by merged blobs (figure 5c). The skeleton algorithm makes slightly fewer blob pair errors than the DCE algorithm, probably because the DCE algorithm is more sensitive to the noise in the contour of blobs in the visible images. All these errors could be resolved by a better background subtraction or by a segmentation like in [7]. For the second type of ground truth, we have used each algorithm (skeleton and DCE) alone and then together. For each of these three cases, we got only two wrong orderings over all 53 stereo pairs. All three cases have failed on the
same stereo pair, when both actors are at about the same distance from both cameras. The second error of each case occurs in a different stereo pair, and is due to their respective outliers (see figure 5b) and the random behavior of the RANSAC algorithm. This is why more tests have to be done to evaluate the accuracy of the disparities. Nevertheless, the examples of figure 4 show that our method is already able to tell that the two people are not close to each other, and we are confident that the program is generally able to find which blob is the closest to the cameras.

Table 1. For each algorithm, the total number of found, valid, bad and missing pairs of blobs

Algorithm   Found   Valid   Bad   Missing
Skeleton    60      58      2     4
DCE         62      57      5     5
Fig. 5. Visual results of some anomalies. In a), the left actor's blob was cut in two during the background subtraction; the DCE algorithm and the blob pairs filter retained the smallest piece of the blob. In b), too many outliers caused a wrong ordering. In c), the merged blobs caused a missing blob pair.
7 Conclusion
In this work, we address the problem of stereo vision with a visible camera and an infrared camera. We have proposed two methods to find feature points from the foreground blobs: the skeleton and the Discrete Curve Evolution process. The pairs of points have been chosen with a set of metrics for each type of feature point. We have successfully filtered some outliers with the blob pairs filter. While the RANSAC algorithm needs to be adjusted, it already removes some outliers. Even if the proposed method is not as precise as a dense 3D reconstruction, we are able to identify which blob is the closest to the cameras. In future work, we want to use better background subtraction algorithms. We also want to test all possible parameters to assess the performance of the proposed methods. Of course, other types of feature points will be implemented and tested. Finally, the accuracy of blob disparities has to be tested with a third type of ground truth.
Acknowledgments
We would like to thank the Canadian Foundation for Innovation (CFI) and the Fonds québécois de la recherche sur la nature et les technologies (FQRNT) for their support with a grant and a scholarship respectively.
References 1. Hajebi, K., Zelek, J.S.: Sparse disparity map from uncalibrated infrared stereo images. In: CRV 2006. Proceedings of the 3rd Canadian Conference on Computer and Robot Vision, pp. 17–17 (2006) 2. Jones, G.D., Hodgetts, M.A., Allsop, R.E., Sumpter, N., Vicencio-Silva, M.A.: A novel approach for surveillance using visual and thermal images, 9/1–919 (2001) 3. Ju, H., Bhanu, B.: Detecting moving humans using color and infrared video. In: MFI 2003. Multisensor Fusion and Integration for Intelligent Systems, pp. 228–233 (2003) 4. Zitnick, C.L., Kanade, T.: A cooperative algorithm for stereo matching and occlusion detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 22(7), 675–684 (2000) 5. Hajebi, K., Zelek, J.S.: Dense surface from infrared stereo. In: WACV 2007. Workshop on Applications of Computer Vision, pp. 21–21 (2007) 6. Kovesi, P.: Phase congruency: A low-level image invariant. Psychological Research 64(2), 136–148 (2000) 7. Shah, S., Aggarwal, J.K., Eledath, J., Ghosh, J.: Multisensor integration for scene classification: an experiment in human form detection, vol. 2, pp. 199–202 (1997) 8. Shoushtarian, B., Bez, H.E.: A practical adaptive approach for dynamic background subtraction using an invariant colour model and object tracking. Pattern Recognition Letters 26(1), 5–26 (2005) 9. Douglas, D.H., Peucker, T.K.: Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Cartographica: The International Journal for Geographic Information and Geovisualization 10(2), 112–122 (1973) 10. Borgefors, G.: Distance transformations in digital images. Computer Vision, Graphics, and Image Processing 34(3), 344–371 (1986) 11. Cheriton, D., Tarjan, R.E.: Finding minimum spanning trees. SIAM Journal on Computing 5(4), 724–742 (1976) 12. Latecki, L.J., Lakamper, R.: Polygon Evolution by Vertex Deletion. In: Nielsen, M., Johansen, P., Olsen, O.F., Weickert, J. (eds.) Scale-Space Theories in Computer Vision. LNCS, vol. 1682, Springer, Heidelberg (1999) 13. Fischler, M.A., Bolles, R.C.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24(6), 381–395 (1981)
Multiple Combined Constraints for Optical Flow Estimation
Ahmed Fahad and Tim Morris
School of Computer Science, The University of Manchester, Manchester, M13 9PL, UK
[email protected],
[email protected]
Abstract. Several approaches to optical flow estimation use differential methods to model changes in image brightness over time. In computer vision it is often desirable to over-constrain the problem to more precisely determine the solution and enforce robustness. In this paper, two new solutions for optical flow computation are proposed which are based on combining brightness and gradient constraints using more than one quadratic constraint embedded in a robust statistical function. Applying the same set of differential equations to different quadratic error functions produces different results. The two techniques combine the advantages of different constraints to achieve the best results. Experimental comparisons of estimation errors against well-known synthetic ground-truthed test sequences showed good qualitative performance.
1 Introduction

Many methods for computing optical flow have been proposed over the years, concentrating on the accuracy and density of velocity estimates. Variational optic flow methods are among the best performing and best understood methods. The energy functional can be designed in such a way that it preserves motion boundaries, is robust under noise or invariant to illumination changes, handles large displacements and adapts to occlusion or mismatches. Differential methods allow optic flow estimation based on computing spatial and temporal image derivatives and can be classified into techniques that minimize local or global energy functions. Both methods attempt to overcome the ill-posedness due to the aperture problem, where computing the optic flow component is only possible in the direction of the intensity gradient, i.e. normal to image edges. Therefore, differential methods use smoothing techniques and smoothness assumptions. Local techniques use spatial constancy assumptions, e.g. in the case of the Lucas-Kanade method [1]. Global techniques supplement the optic flow constraint with a regularizing smoothness, e.g. Horn and Schunck [2]. Combining the different smoothing effects of local and global methods provides the robustness of local methods with the full density of global methods [3]. Extending the optic flow brightness constancy assumption by a gradient constancy assumption yields excellent results at the expense of extra computation [4, 5, 6]
compared to the combined local-global method. Other constancy assumptions are possible, such as the constancy of the Hessian or the Laplacian [7], that reveal certain advantages in comparison to the brightness constancy assumption. The data term of the energy function usually embeds the constancy assumption in a quadratic norm that is nonlinear with respect to motion components. Since this causes problems when minimizing the energy function, first order Taylor expansion is often used to linearize the expression and make solution easier. However, the linear approximation is accurate only if the image gradient changes linearly along the displacement and fails in the presence of large displacements. Coarse-to-fine strategies or multiresolution approaches [3, 8, 9] overcome this limitation by incrementally refining the optic flow from a coarse scale to a finer scale. The coarse scale motion is used to warp the original sequence before going to the next finer level, resulting in a hierarchy of small displacement problems at each level. The final result is obtained by a summation of all motion increments and is more accurate. To compute optical flow robustly, outliers should be penalized less severely. Therefore, robust statistical formulation makes the recovered flow field less sensitive to assumption violations by replacing the quadratic penalizer by nonquadratic penalizers [3, 7, 10, 11]. Although this leads to nonlinear optimization problems, they give better results at locations with flow discontinuities. Currently, the most popular regularizers of optical flow estimation are the variational based isotropic or anisotropic smoothness operators where the latter considers discontinuities at the motion boundaries [12]. Finally, spatiotemporal approaches use the information of an additional dimension to consider spatiotemporal smoothness operators by simply replacing 2D Gaussian convolution with 3D spatiotemporal Gaussian convolution. Spatiotemporal versions of the Horn and Schunck and Combined Local Global (CLG) methods have been proposed [3, 13] where the energy functions are extended by including temporal data (3D). Thus the minimization is understood in a spatiotemporal way over more than two frames. In general, experiments showed that extending the energy function with temporal data obtaining a spatiotemporal formulation give better results than spatial ones [13] because of the additional denoising in the temporal dimension. Over-constraining the optical flow problem allows more precise determination of a solution. Moreover, it uses redundant information to enforce robustness with respect to measurement noise. Constraints can be obtained using several approaches by either applying the same equation to multiple points or defining multiple constraints for each image point [14]. The latter can be obtained by applying a set of differential equations [15] or applying the same set of equations to different functions which are related to image brightness. In this paper, section 2 discusses the main optical flow constraints involved in local and global optic flow approaches and the use of higher image derivatives. Section 3 proposes a novel variational approach that integrates the gradient constraint into different error functions which can be minimized with numerical methods. Based on the results, we extend in section 4 the method with brightness constraint to allow for more competition between the constraints and the use of multiple robust functions. 
Then section 5 presents multiresolution techniques that implement warping to correct the original sequence before going to the next finer level by creating a sophisticated hierarchy of equations with excellent error reduction properties. Qualitative experiments using a number of synthetic and real image sequences are presented in section 6, and a summary concludes the paper.
2 Optical Flow Constraints

In this section we derive our variational model for the combined constancy assumptions. Since its formulation is essentially based on the variational optic flow methods, we start by giving a short review. Let us consider some image sequence I(x,y,t), where (x,y) denotes the location within a rectangular image domain Ω and t ∈ [0,T] denotes time. Many differential methods for optic flow are based on the optical flow constraint (OFC) assumption that the brightness values of image objects do not change over time, i.e. constancy of the brightness value:

I(x + u, y + v, t + 1) = I(x, y, t),
I_x u + I_y v + I_t = 0,    (1)
where the displacement u(x,y,t), v(x,y,t) is called the optic flow. For small displacements, we may perform a first order Taylor expansion yielding the optic flow constraint, where subscripts denote partial derivatives. In order to cope with the aperture problem (the single OFC equation is not sufficient to uniquely compute the two unknowns u and v), Lucas and Kanade [1] assumed that the unknown optic flow vector is constant within some neighbourhood of size ρ and introduced an equation that can be solved if the system matrix is invertible. Unlike the Lucas-Kanade method which is local, Horn and Schunck [2] introduced a global method that adds a regularization constraint to the energy function to determine the unknown u and v,

E_{HS}(u,v) = \int_\Omega \left( (I_x u + I_y v + I_t)^2 + \alpha (|\nabla u|^2 + |\nabla v|^2) \right) dx\,dy    (2)
α > 0 is a regularization parameter. At locations where no reliable flow estimate is possible, i.e. with the gradient |\nabla I| ≈ 0, the regularizer |\nabla u|^2 + |\nabla v|^2 fills in information from the neighbourhood, which results in completely dense flow fields. The CLG method [3] complements the advantages of both approaches, benefiting from the robustness of local methods with the density of global approaches:

E_{CLG}(u,v) = \int_\Omega \left( K_\rho * (I_x u + I_y v + I_t)^2 + \alpha (|\nabla u|^2 + |\nabla v|^2) \right) dx\,dy    (3)
The method also proved robust with respect to parameter variations. Variational methods require the minimization of an energy functional, which, from the discretization of the Euler-Lagrange equations, results in linear or non-linear systems of equations. The large sparse system of equations is minimized iteratively using Gauss-Seidel or Successive Over Relaxation SOR methods. Although the brightness constancy assumption works well, it cannot deal with either local or global changes in illumination. Other constancy assumptions such as the gradient constancy assumption (which assumes the spatial gradients of an image sequence to be constant during motion) are applied [5, 6]. A global change in illumination affects the brightness values of an image by either shifting or scaling or both. Shifting the brightness will not change the gradient; scaling affects the magnitude of the gradient vector but not its direction.
I_{xx} u + I_{xy} v + I_{xt} = 0,
I_{yx} u + I_{yy} v + I_{yt} = 0.    (4)
It is to be noted that while the grey value and gradient constancy assumptions yield good results, other higher order constancy assumptions may be applied; one choice including second order derivatives is the Hessian. Not all constancy assumptions based on derivatives perform equally well, nor are they equally well-suited to estimating different types of motion.
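To make the iterative minimization used throughout this section concrete, the sketch below implements the classical Horn-Schunck update for an energy of the form of Eq. (2); the derivative filters, the value of α and the number of iterations are illustrative choices, not values taken from this paper.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I1, I2, alpha=100.0, n_iter=200):
    """Minimal Horn-Schunck solver for an energy of the form of Eq. (2).

    I1, I2: consecutive single-channel frames.
    """
    I1 = I1.astype(float)
    I2 = I2.astype(float)
    kx = np.array([[-1.0, 1.0], [-1.0, 1.0]]) * 0.25
    ky = np.array([[-1.0, -1.0], [1.0, 1.0]]) * 0.25
    Ix = convolve(I1, kx) + convolve(I2, kx)     # spatial derivatives averaged
    Iy = convolve(I1, ky) + convolve(I2, ky)     # over the two frames
    It = convolve(I2, np.full((2, 2), 0.25)) + convolve(I1, np.full((2, 2), -0.25))
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]], dtype=float) / 12.0
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        # Jacobi-style update from the Euler-Lagrange equations of Eq. (2)
        common = (Ix * u_bar + Iy * v_bar + It) / (alpha + Ix**2 + Iy**2)
        u = u_bar - Ix * common
        v = v_bar - Iy * common
    return u, v
```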
3 Variant Gradient Constraint Formulation

The gradient constraint produces two equations with the two unknowns of the optical flow in the spatial domain. Reformulating equation 3 using the gradient constraint produces two energy functions, equations (5) and (6) (we only show the data constraint without the regularization for brevity). One function (eq. (5)) penalizes each data constraint with a quadratic error, and the other one (eq. (6)) penalizes both data constraints with one quadratic error,

E_{data}(u,v) = \int_\Omega (I_{xx} u + I_{xy} v + I_{xt})^2 + (I_{yx} u + I_{yy} v + I_{yt})^2 \,dx\,dy    (5)

E_{data}(u,v) = \int_\Omega ((I_{xx} + I_{yx}) u + (I_{xy} + I_{yy}) v + I_{xt} + I_{yt})^2 \,dx\,dy    (6)
Table 1 shows the results of using both equations to process the Yosemite sequence, which has ground truth. The results show that using equation (6) generates results with a smaller average angular error (AAE, defined below by equation 14). However, re-formulating the equations by replacing the quadratic error terms with nonquadratic penalizers, where outliers are penalized less severely, produced different results. The reformulated expressions are equations (7) and (8), with their AAE results presented in Table 1.
E_{data}(u,v) = \int_\Omega \psi_1\!\left( (I_{xx} u + I_{xy} v + I_{xt})^2 + (I_{yx} u + I_{yy} v + I_{yt})^2 \right) dx\,dy    (7)

E_{data}(u,v) = \int_\Omega \psi_1\!\left( ((I_{xx} + I_{yx}) u + (I_{xy} + I_{yy}) v + I_{xt} + I_{yt})^2 \right) dx\,dy    (8)
where \psi_1(s^2) and \psi_2(s^2) are nonquadratic penalizers. Using these to process the Yosemite sequence gave the results presented in Table 1. We notice that equation (7) gave a lower AAE than equation (8). Furthermore, we analyzed the optical flow generated for each pixel, looking for smaller angular errors. Figure 1(a) shows two different colours, light and dark gray. Light gray regions represent where equation (7) generated a smaller
angular error than equation (8), and vice versa for dark regions. Obviously, the two energy functions produce different results and neither of them has produced a smaller angular error for all pixels. An explanation for this behaviour can be given as follows. Let us denote by e_1 the error from the x-derivative (I_x) constancy constraint and by e_2 the error from the y-derivative (I_y) constancy. At locations where e_1 and e_2 are quite small, the smoothing constraint becomes more important, making flow fields more regularized, which can be noticed in cloudy regions of the sequence having very small gradients. Since e_1^2 + e_2^2 \le (e_1 + e_2)^2 whenever e_1 and e_2 have the same sign, using one quadratic error gives higher error values, making the regularization less important in cloudy regions with very small gradients. On the other hand, balancing between e_1 and e_2 in equation (7) allows both gradient constancy errors to compete for the best flow field. Hence, it would be interesting to construct a hybrid data constraint that constitutes the best features of the two constraints. Therefore, we propose a new energy function that combines the gradient constraint equations using more than one penalizer, equation (9),

E_{data}(u,v) = \int_\Omega \psi_1\!\left( (I_{xx} u + I_{xy} v + I_{xt})^2 + (I_{yx} u + I_{yy} v + I_{yt})^2 \right) + \int_\Omega \psi_1\!\left( ((I_{xx} + I_{yx}) u + (I_{xy} + I_{yy}) v + I_{xt} + I_{yt})^2 \right) dx\,dy    (9)
The new energy function benefits from balancing between the two penalizers.

Table 1. Comparison between the different combinations of brightness and gradient data constraints used in the energy functions, applied to the cloudy Yosemite sequence. The table shows the average angular error between the ground truth and the computed direction of motion.

Frame   Eq. (5)   Eq. (6)   Eq. (7)   Eq. (8)   Eq. (10)   Eq. (11)
6       6.98      6.71      5.86      6.14      5.54       6.04
7       6.43      6.24      5.35      5.67      5.26       5.62
8       6.48      6.28      5.35      5.67      5.26       5.62
9       6.85      6.65      5.46      5.77      5.37       5.72
10      6.56      6.37      5.55      5.87      5.46       5.82
4 A Combined Brightness-Gradient Constraint

The previous approaches used only the gradient constancy assumptions. Brox et al. [5] introduced competition between the brightness constancy and the gradient constancy that produced the best optical flow results by delaying the linearization of constraints. They applied methods from robust statistics where outliers are penalized less severely:

E_{data}(u,v) = \int_\Omega \psi_1\!\left( (I(x+w, t+1) - I(x,t))^2 + \gamma (\nabla I(x+w, t+1) - \nabla I(x,t))^2 \right) dx\,dy    (10)
Reformulating equation (10) by penalizing both brightness and gradient constancy constraints using one quadratic error and using robust statistics, we find

E_{data}(u,v) = \int_\Omega \psi_1\!\left( (I(x+w, t+1) - I(x,t) + \gamma \nabla I(x+w, t+1) - \gamma \nabla I(x,t))^2 \right) dx\,dy    (11)
Table 1 shows the results of applying both equations to process the Yosemite sequence and comparing the results to the ground truth. The results show a smaller average angular error (AAE) for equation (10), keeping all other parameters constant. However, figure 1 shows two different regions where small angular errors are found, and neither method produced a globally smaller angular error. Therefore, to benefit from the advantages of both energy functions, we propose a hybrid data constraint that combines both brightness and gradient constancy constraints,

E(u,v) = \int_\Omega \psi_1\!\left( BC^2 + \gamma\, GCx^2 + \gamma\, GCy^2 + (BC + \gamma\, GCx + \gamma\, GCy)^2 \right) + \int_\Omega \alpha\, \psi_2\!\left( |\nabla u|^2 + |\nabla v|^2 \right) dx\,dy    (12)

BC = I_x u + I_y v + I_t,  GCx = I_{xx} u + I_{xy} v + I_{xt},  GCy = I_{yx} u + I_{yy} v + I_{yt}
5 Nonquadratic Implementation and Multiresolution Approach At locations with flow discontinuities, the constancy assumption model for motion analysis may not be capable of determining the optical flow uniquely, especially in homogenous areas or where there are outliers caused by noise, occlusions, or other violations of the constancy assumption. Moreover, the smoothness assumption does not respect discontinuities in the flow field. In order to capture locally non-smooth motion [3, 10] and data outliers [6, 12], it is possible to replace the quadratic penalizer with a nonquadratic penalizer in the energy function. We use a function proposed by [3] where the bias that a particular measurement has on the solution is proportional to the derivative of the penalizer function ψ i (s 2 ) = 2βi 1 + s 2 / βi2 , where the βi arescaling parameters. The energy functional now is not solved using linear optimization methods and is regarded as a nonlinear optimization problem.
Fig. 1. (a) Light gray areas are those where equation (7) gives the smaller angular error, and dark areas those where equation (8) does. (b) The same comparison for equations (10) and (11).
0 = \mathrm{div}\!\left( \psi_2'(|\nabla u|^2 + |\nabla v|^2)\, \nabla u \right) - \frac{1}{\alpha}\, \psi_1'\!\left( BC^2 + \gamma GCx^2 + \gamma GCy^2 + (BC + \gamma GCx + \gamma GCy)^2 \right) (J_{11} u + J_{12} v + J_{13}),

0 = \mathrm{div}\!\left( \psi_2'(|\nabla u|^2 + |\nabla v|^2)\, \nabla v \right) - \frac{1}{\alpha}\, \psi_1'\!\left( BC^2 + \gamma GCx^2 + \gamma GCy^2 + (BC + \gamma GCx + \gamma GCy)^2 \right) (J_{21} u + J_{22} v + J_{23}).    (13)

where

J_{11} = I_x^2 + I_{xx}^2 + I_{yx}^2 + (I_x + I_{xx} + I_{yx})^2
J_{12} = J_{21} = I_x I_y + I_{xx} I_{xy} + I_{yx} I_{yy} + (I_x + I_{xx} + I_{yx})(I_y + I_{xy} + I_{yy})
J_{22} = I_y^2 + I_{xy}^2 + I_{yy}^2 + (I_y + I_{xy} + I_{yy})^2
J_{13} = I_x I_t + I_{xx} I_{xt} + I_{yx} I_{yt} + (I_x + I_{xx} + I_{yx})(I_t + I_{xt} + I_{yt})
J_{23} = I_y I_t + I_{xy} I_{xt} + I_{yy} I_{yt} + (I_y + I_{xy} + I_{yy})(I_t + I_{xt} + I_{yt})
Since we consider linearizing the constancy assumptions, multiscale focusing or multiresolution strategies are required for large motions. These techniques incrementally compute the optic flow based on a sophisticated coarse-to-fine strategy [3, 9, 11], in which the coarsest scale solution is successively refined. The coarse-scale motion is used to warp the original sequence before continuing to the next finer level. This is different from merely using the estimated flow at the coarse level as initialization for the next finer level, which only speeds up the convergence. The final displacement is the sum of all the motion displacements computed at each level, the so-called motion increments.
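The warping strategy can be organized as in the generic sketch below; this is not the authors' implementation, and any small-displacement solver (for example, a linearized solver for Eq. (12)) can be plugged in as solve_increment.

```python
import cv2
import numpy as np

def coarse_to_fine_flow(I1, I2, solve_increment, levels=4, scale=0.5):
    """Coarse-to-fine flow estimation with warping between pyramid levels.

    I1, I2: single-channel images; solve_increment(I1, I2w) returns a small
    motion increment (du, dv) between I1 and the warped second frame I2w.
    """
    pyr1, pyr2 = [I1.astype(np.float32)], [I2.astype(np.float32)]
    for _ in range(levels - 1):
        pyr1.append(cv2.resize(pyr1[-1], None, fx=scale, fy=scale))
        pyr2.append(cv2.resize(pyr2[-1], None, fx=scale, fy=scale))
    h, w = pyr1[-1].shape
    u = np.zeros((h, w), np.float32)
    v = np.zeros((h, w), np.float32)
    for lvl in range(levels - 1, -1, -1):
        h, w = pyr1[lvl].shape
        if lvl != levels - 1:
            # upsample the current estimate and rescale the displacements
            u = cv2.resize(u, (w, h)) / scale
            v = cv2.resize(v, (w, h)) / scale
        # warp the second frame towards the first with the current flow
        xx, yy = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))
        I2w = cv2.remap(pyr2[lvl], xx + u, yy + v, cv2.INTER_LINEAR)
        du, dv = solve_increment(pyr1[lvl], I2w)   # small motion increment
        u, v = u + du, v + dv                      # sum of the increments
    return u, v
```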
6 Experiments

In this section, we present the results of testing our algorithm on a variety of synthetic image sequences. Comparisons of our results with the previously reported results are made to demonstrate the performance of our algorithm. We evaluate the
algorithm using the Yosemite sequence with cloudy sky obtained from ftp://ftp.csd.uwo.ca/pub/vision. We compare the average angular error defined by Barron et al. [8] with respect to the ground truth flow field,

\text{Angular Error} = \arccos\!\left( \frac{u_c u_e + v_c v_e + 1}{\sqrt{(u_c^2 + v_c^2 + 1)(u_e^2 + v_e^2 + 1)}} \right)    (14)
where (u_c, v_c) denotes the correct flow and (u_e, v_e) the estimated flow. Table 2 shows the results of the combined gradient constraint using two quadratic errors in equation (9), which gives better results than using only the gradient constraint with one quadratic error as in Table 1. The reason for this is that the gradient constancy is invariant under brightness changes. This behaviour agrees with the theoretical consideration of Papenberg et al. [7], where higher order terms are superior in areas where illumination changes (the sky region). We fix all the parameters for all energy functions except for the robust function parameters β1 and β2, which take values in the range 1×10^-4 to 5×10^-2.

Table 2. Average angular errors computed for the cloudy Yosemite sequence for five different frames using two flow computations. Std. Dev.: standard deviation of the AAE.

Yosemite#   Eq. (9)   Std. Dev.   Eq. (12)   Std. Dev.
6           5.36      7.30        5.28       7.30
7           4.85      6.76        4.73       6.73
8           4.85      6.76        4.72       6.73
9           4.95      6.74        4.83       6.71
10          5.05      6.78        4.93       6.75
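A short sketch of computing the AAE of Eq. (14) over a dense flow field (the clipping only guards against rounding slightly outside [-1, 1]):

```python
import numpy as np

def average_angular_error(uc, vc, ue, ve):
    """Average angular error (Eq. 14) in degrees, plus its standard deviation.

    uc, vc: ground-truth flow components; ue, ve: estimated components
    (arrays of identical shape).
    """
    num = uc * ue + vc * ve + 1.0
    den = np.sqrt((uc**2 + vc**2 + 1.0) * (ue**2 + ve**2 + 1.0))
    ang = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    return float(ang.mean()), float(ang.std())
```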
Table 3. Average angular errors for the Yosemite (with clouds) and Office sequences under varying amounts of added noise. While integrating the combined local-global approach into the data functions ameliorates the effect of noise, the noise still affects the gradient constraint.

            Yosemite with Cloud      Office
σn          AAE       STD            AAE      STD
0           4.82°     6.73°          3.20°    4.20°
5           5.51°     7.74°          3.80°    4.39°
10          6.89°     8.63°          4.77°    4.86°
20          11.59°    11.26°         7.09°    6.42°
In another experiment we studied the effect of noise on the robustness of the equations. We added white Gaussian noise to the synthetic Yosemite and Office sequence (www.cs.otago.ac.nz/research/vision/). We can observe in Table 3 that the flow computations using the method suffered a little when severely degraded by Gaussian noise since the functional contains higher order terms. Therefore, in the future, we intend to delay the linearization of the data constraints to avoid such degradation.
Fig. 2. (a) Frame 8 of the Yosemite sequence with clouds. (b) Ground truth between frame 8 and 9 for the sequence with clouds. (c) Computed flow field by equation 12 for the sequence with clouds. (d) Frame 8 of the Office sequence. (e) Ground truth (f) Computed flow field by equation 12.
7 Conclusion

In this paper, we have introduced a combination of constancy assumptions for optic flow computation under local-global smoothing effects. The new energy functional's data term contains brightness and gradient constancy assumptions. While each of these concepts has previously proved its use, we have shown that their combination into different quadratic errors and under smoothing regimes delivers better results. A full multigrid strategy avoids local minima and helps improve the convergence of the solution to a global minimum. We further showed that the improved results come at the price of being more sensitive to noise.
References 1. Lucas, B.D., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: Proceedings of Seventh International Joint Conference on Artificial Intelligence, pp. 674–679 (1981) 2. Horn, B.K.P., Shunck, B.G.: Determining optical flow. Artificial Intelligence 17, 185–203 (1981) 3. Bruhn, A.e., Weickert, J.: Lucas/kanade meets horn/schunck: Combining local and global optic flow methods. International Journal of Computer Vision 16, 211–231 (2005) 4. Uras, S., Girosi, F., Verri, A., Torre, V.: A computational approach to motion perception. Biological Cybernetics 60, 79–87 (1988) 5. Brox, T., Bruhn, A.e., Papenberg, N., Weickert, J.: High accuracy optical flow estimation based on a theory for warping. In: Pajdla, T., Matas, J(G.) (eds.) ECCV 2004. LNCS, vol. 3024, pp. 25–36. Springer, Heidelberg (2004) 6. Rabcewicz, A.: Clg method for optical flow estimation based on gradient constancy assumption. In: International Conference on PDE-Based Image Processing and Related Inverse Problems, pp. 57–66. Springer, Heidelberg (2005) 7. Papenberg, N., Bruhn, A., Brox, T., Didas, S., Weickert, J.: Highly accurate optic flow computation with theoretically justified warping. International Journal of Computer Vision 67, 141–158 (2006) 8. Odobez, J.M., Bouthemy, P.: Robust multiresolution estimation of parametric motion models. International Journal of Visual Communication and Image Representation 6, 348– 365 (1995) 9. Memin, E., Perez, P.: Dense estimation and object-based segmentation of the optical flow with robust techniques. IEEE Transactions on Image Processing 7, 703–719 (1998) 10. Bab-Hadiashar, A., Suter, D.: Robust optic flow computation. International Journal of Computer Vision 29, 59–77 (1998) 11. Black, M.J., Anandan, P.: The robust estimation of multiple motions: Parametric and piecewise-smooth flow fields. Computer Vision and Image Understanding 63, 75–104 (1996) 12. Xiao, J., Cheng, H., Sawhney, H., Rao, C., Isnardi, M.: Bilateral filtering-based optical flow estimation with occlusion detection. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3951, pp. 211–224. Springer, Heidelberg (2006) 13. Barron, J.L., Fleet, D.J., Beauchemin, S.S.: Performance of optical flow techniques. International Journal of Computer Vision 12, 43–77 (1994) 14. Tistarelli, M.: Multiple constraints to compute optical flow. IEEE Transactions on Pattern Analysis and Machine Intelligence 18, 1243–1250 (1996) 15. Bimbo, A.D., Nesi, P., Sanz, J.L.C.: Optical flow computation using extended constraints. IEEE Transactions on Image Processing 5, 720–732 (1996)
Combining Models of Pose and Dynamics for Human Motion Recognition
Roman Filipovych and Eraldo Ribeiro
Computer Vision and Bio-Inspired Computing Laboratory, Department of Computer Sciences, Florida Institute of Technology, Melbourne, FL 32901, USA
{rfilipov,eribeiro}@fit.edu
http://www.cs.fit.edu/~eribeiro
Abstract. We present a novel method for human motion recognition. A video sequence is represented with a sparse set of spatial and spatial-temporal features by extracting static and dynamic interest points. Our model learns a set of poses along with the dynamics of the sequence. Pose models and the model of motion dynamics are represented as constellations of static and dynamic parts, respectively. On top of the layer of individual models we build a higher-level model that can be described as a “constellation of constellation models”. This model encodes the spatial-temporal relationships between the dynamics of the motion and the appearance of individual poses. We test the model on a publicly available action dataset and demonstrate that our new method performs well on classification tasks. We also perform additional experiments to show how the classification performance can be improved by increasing the number of pose models in our framework.
1 Introduction
Recognizing human actions from videos is of relevance to both the scientific and industrial communities. Humans usually perform actions by means of a number of articulated complex motions. Consequently, creating effective computational models for human motion representation is a crucial but challenging task required by all action recognition algorithms. Despite significant efforts by the computer vision community, action recognition is still an open problem. In general, approaches to human motion recognition work by analyzing the dynamic information of image sequences. Recently, the use of spatial-temporal features has been demonstrated to be an effective tool for motion recognition [18,16]. Additionally, the importance of static information [17] combined with recent advances in probabilistic constellation models [15] has also been demonstrated. In this paper, we focus ourselves on the problem of learning representational models for human motion recognition. More specifically, we propose a Bayesian probabilistic framework that allows for integrating both static and dynamic information. Here, our main contribution is to present a principled solution to the
human motion recognition problem by combining data of different nature within a single probabilistic integration framework while allowing for computationally efficient learning and inference algorithms. This is accomplished by combining constellation models “tuned” to recognize specific human poses with a constellation model of the motion’s spatial-temporal data to form a single human motion model. Our resulting method can be characterized as a “constellation of constellation models” that combines pose recognition and motion dynamics recognition into a single framework. We demonstrate the effectiveness of the proposed method on a series of motion classification experiments along with a comparison with a recently published motion recognition approach. The results show that our model offers promising classification performance on an established human action dataset. The remainder of this paper is organized as follows. In Section 2, we comment on the literature related to the problem addressed in this paper. Section 3 describes the details of our action recognition framework. In Section 4, we describe experimental results of our method on an established human action database. Finally, Section 5 presents our conclusions and directions for future investigation.
2 Related Work
Approaches to human motion recognition can be grouped into data-driven and model-based methods. Data-driven approaches operate directly on the data. For example, Dollar et al. [7] perform action classification in the space of extracted spatial-temporal features using a support vector machine classifier. Leo et al. [13] describe an unsupervised clustering algorithm for motion classification based on histograms of binary silhouette’s horizontal and vertical projections. These methods are computationally efficient and achieve good classification performance. However, data-driven methods may be inadequate in most realistic scenarios, primarily because local image features are typically highly ambiguous. On the other hand, model-based approaches explicitly include higher-level knowledge about the data by means of a previously learned model. Despite their computational and mathematical elegance, the performance of model-based approaches strongly depends on both the choice of the model and the availability of prior information about the data at hand. Additionally, in the absence of prior information about the models’ structure, the learning task is often intractable. Graphical models represent a suitable solution to this problem as they allow for efficient learning and inference techniques while simultaneously providing a span of models with rich descriptive power. For example, Boiman and Irani [3] propose a graphical Bayesian model for motion anomaly detection. The method describes the motion data using hidden variables that correspond to hidden ensemble in a database of spatial-temporal patches. Niebles et al. [14] create a generative graphical model where action category labels are present as latent variables.
Recently, there has been considerable development in part-based classification methods that model the spatial arrangement of object parts [9,8,5]. These methods are inspired by the original ideas proposed by Fischler and Elschlager [11]. For example, Fergus et al. [9] proposed a fully-connected part-based probabilistic model for object categorization. The approach is based on the constellation model proposed in [4]. Unfortunately, the complexity of the model learning and inference often increases drastically as the number of parts increases. A solution to this problem is to select model structures that allow for both optimal classification performance and tractable learning and inference. In this paper, we propose a method that combines a set of partial motion models into a global model of human motion. Each partial model is a constellation model [4] of a relatively simple yet descriptive structure that describes either the motion dynamics or a specific pose of the motion cycle. The partial models are combined within a Bayesian framework to form a final human motion model that encodes spatial-temporal relationships between the partial models. Here, models are learned from a set of labeled training examples. The key difference between ours and other approaches such as the one described in [15] is that in our framework poses and motion dynamics are modeled explicitly.
3 Our Method
In this section, we present our human motion recognition framework. The goal of our approach is twofold. First, we aim at combining the static information provided by the pose images with the video's spatial-temporal information to obtain an integrated human motion model. Secondly, we will use this integration model for the classification of human motion sequences. We accomplish these goals by approaching the human motion recognition problem as a probabilistic inference task. An overview of our approach is illustrated in Figure 1. Next, we introduce our probabilistic integration framework followed by a description of the learning and classification procedures.

3.1 Integrating Human Pose and Motion Dynamics
We commence by defining the main components of our model. A video sequence V of human motion can be considered to be the variation of a specific human pose as a function of time. Let P = {P1 , . . . , PK } represent a discrete set of K poses sampled from the space of all possible poses that are representative of a specific human motion type, where K is a small number, usually much smaller than the number of frames in the video sequence. Let M represent the spatial-temporal information extracted from the video sequence. This information describes temporal variations in the image frames, and can be obtained from measurements such as optical flow and spatial-temporal features. Additionally, let X represent simultaneously a particular spatial-temporal configuration of pose and human motion dynamics.
Fig. 1. Diagram of our approach. An illustrative example of “dynamics two-pose” model. Temporally-aligned extracted motion cycles form the training set (a). For each partial pose model a set of frames corresponding to the same time instance are selected (b, c). The images are preprocessed and the interest subregions are extracted. In the case of dynamics partial model, the spatial-temporal features are extracted using the detector from [7] (d). The partial models’ parameters are estimated independently. These models are then combined to form a global model (e). (Note: The extracted and learned subregions displayed in the chart do not present the actual subregions, as in our implementation the dimensionality of the input subregions is reduced using PCA).
Probabilistically, the likelihood of observing a particular video sequence given that a human motion is at some spatial-temporal location can be represented by the distribution p(V|X). From Bayes' theorem, we obtain:

p(X|V) \propto p(V|X)\, p(X) \propto \underbrace{p(P|X)}_{\text{poses appearance}}\; \underbrace{p(M|X)}_{\text{dynamics appearance}}\; \underbrace{p(X)}_{\text{spatial-temporal configuration}}    (1)
We further assume that the appearance of both pose and dynamics are statistically independent. This assumption allows us to factorize the likelihood function in Equation 1 into two components. Accordingly, we introduce the variables P and M to indicate that the human motion information in the video is represented by a set of static poses and dynamic information, respectively. Our integration model of pose and motion dynamics is inspired by the part-based object factorization suggested by Felzenszwalb and Huttenlocher [8]. The underlying idea in our factorization is that the spatial-temporal arrangement of parts can be encoded into the prior probability distribution while the likelihood distribution encodes the appearance. In this paper, we focus ourselves on the combination of human motion dynamics with both the appearance and the spatial-temporal configuration of the pose models.

Spatial-Temporal Prior Model. The prior distribution in Equation 1 is described as follows. We begin by assuming that each pose P_i from P can be subdivided into a number of non-overlapping subregions such that P_i = {(a_1^{(i)}, x_1^{(i)}), ..., (a_{N_{P_i}}^{(i)}, x_{N_{P_i}}^{(i)})}, where the components of each pair (a_j^{(i)}, x_j^{(i)}) represent the local appearance a and the spatial-temporal location x of the subregion j for the model of pose P_i, respectively. Here, N_{P_i} is the total number of subregions for the pose P_i. While a pose conveys only two-dimensional spatial information, the temporal position of the pose in the video sequence serves as the temporal coordinate of the parts' locations. Similarly, the dynamic information required by our model can be represented by a sparse set of spatial-temporal features [7,12]. Accordingly, let M = {(a_1^{(M)}, x_1^{(M)}), ..., (a_{N_M}^{(M)}, x_{N_M}^{(M)})} be a set of spatial-temporal interest features, where N_M is the number of features in M. The pose models and the dynamics model are the partial models used in our integration framework. The creation of these models is described next. For simplicity, we model both pose and dynamic information using directed acyclic star graphs. This is similar to the part-based object model suggested by Fergus et al. [10]. Here, a particular vertex is assigned to be a landmark vertex (a_r^{(i)}, x_r^{(i)}) for the pose P_i. A similar landmark vertex assignment is done for the dynamics model, (a_r^{(M)}, x_r^{(M)}). The remaining vertices within each model are conditioned on the corresponding landmark vertex. Figure 1(b) and Figure 1(c) show examples of the partial model graphs for pose, while Figure 1(d) shows a graph of the partial model for the motion dynamics. Finally, we build another structural layer on top of the pose models and the motion dynamics model. In this layer, the spatial locations of the partial models are the locations of the
corresponding landmark image subregions. The global structural layer is built by conditioning the landmark vertices of the pose model graphs on the landmark vertex of the dynamics model graph. In this way, we obtain a multi-layered tree-structured model, which is the global model of human motion used in our method. The graph in Figure 1(e) illustrates our partial models' integration concept. Here, the arrows in the graph indicate the conditional dependence between the connected vertices. Accordingly, the joint distribution for the partial models' spatial interaction can be derived from the graphical model shown in Figure 1(e), and is given by:

p(X) = p(x^{(M)}) \prod_{P_i \in P} p(x^{(i)} | x^{(M)})    (2)

where x^{(i)} is the spatial-temporal configuration of the pose P_i, and x^{(M)} is the spatial-temporal configuration of the dynamics model. The probability distributions that compose Equation 2 are:

p(x^{(M)}) = p(x_r^{(M)}) \prod_{j \neq r} p(x_j^{(M)} | x_r^{(M)})    (3)

p(x^{(i)} | x^{(M)}) = p(x_r^{(i)} | x_r^{(M)}) \prod_{j \neq r} p(x_j^{(i)} | x_r^{(i)})    (4)

It should be noted that the dependence between the partial models is based solely on their spatial-temporal configuration within the global model. This follows from our assumption that the partial models are statistically independent with respect to their appearance. Next, we describe the appearance likelihood component of Equation 1 for both pose and motion dynamics.

Appearance Model. Under the appearance independence assumption, the appearance likelihood of the pose P_i can be written as the product of the probabilities of its subregions (i.e., parts):

p(P_i | X) = \prod_{j}^{N_{P_i}} p(a_j^{(i)} | x_j^{(i)})    (5)

Similarly, in our motion dynamics model, the appearance likelihood is given by:

p(M | X) = \prod_{j}^{N_M} p(a_j^{(M)} | x_j^{(M)})    (6)

As a result, the likelihood term in Equation 1 becomes:

p(V | X) = p(P | X)\, p(M | X) = \prod_{i}^{K} \prod_{j}^{N_{P_i}} p(a_j^{(i)} | x_j^{(i)}) \times \prod_{j}^{N_M} p(a_j^{(M)} | x_j^{(M)})    (7)
Next, we describe the parameters estimation step (i.e., learning) of our model as well as the motion classification procedure.
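As a sketch of how the tree-structured prior of Equations (2)-(4) can be evaluated when the conditional densities are taken to be Gaussians on the child-minus-parent offsets (an assumption made here for illustration; only the landmark-level terms are shown, and the within-pose factors of Equation (4) follow the same pattern):

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_prior(x_dyn_root, x_dyn_parts, x_pose_roots, params):
    """Log of p(X) for the star-structured model (landmark-level terms only).

    params is a hypothetical container with Gaussian parameters:
    root_mean/root_cov for the dynamics landmark, plus lists dyn_edges and
    pose_edges of (mean_offset, cov) pairs for the conditioned vertices.
    """
    logp = multivariate_normal.logpdf(x_dyn_root, params.root_mean, params.root_cov)
    # dynamics parts conditioned on the dynamics landmark (Eq. 3)
    for x_j, (mu, cov) in zip(x_dyn_parts, params.dyn_edges):
        logp += multivariate_normal.logpdf(x_j - x_dyn_root, mu, cov)
    # pose landmarks conditioned on the dynamics landmark (first factor of Eq. 4)
    for x_r, (mu, cov) in zip(x_pose_roots, params.pose_edges):
        logp += multivariate_normal.logpdf(x_r - x_dyn_root, mu, cov)
    return logp
```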
3.2 Learning and Classification of Human Motions
Learning. In the learning stage, the parameters of our model are estimated from a set of training video sequences. The factorization in Equation 2 and Equation 7 allows for the learning process to be performed in a modular fashion given a set of training videos {V 1 , . . . , V L }. We restrict each of the training videos to contain exactly one full motion cycle (e.g., two steps of the walking motion). Additionally, we temporally align motion cycles extracted from the training sequences such that they start and finish with the same pose. To obtain the pose training data, we first normalize the length of the sequences to be within the [0, 1] time interval. Then, we extract the corresponding frames from the normalized sequences for a specific time instant. Consequently, in the case of periodic motion, the frames corresponding to the time instants 0 and 1 will contain the same pose translated in time (and, for some motions, also in 2D space). Figure 2 shows an example of the aligned walking cycles. In the figure, the frames correspond to the normalized time slices t = 0, t = 0.25, t = 0.5, t = 0.75, and t = 1. The aligned sequences serve as the input to the dynamic model learning algorithm. The learning procedure is divided into two main steps.
Fig. 2. Examples of segmented and normalized walking sequences, and background images. Frames for normalized time slices: t = 0, t = 0.25, t = 0.5, t = 0.75 and t = 1.
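A minimal sketch of selecting pose training frames at the normalized time instants described above, assuming each training cycle has already been segmented into a list of frames:

```python
def frames_at_normalized_times(cycle_frames, times=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Return the frames of one motion cycle closest to the normalized times.

    cycle_frames: list of frames covering exactly one motion cycle,
    so that index 0 corresponds to t = 0 and the last index to t = 1.
    """
    n = len(cycle_frames)
    indices = [int(round(t * (n - 1))) for t in times]
    return [cycle_frames[i] for i in indices]
```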
First, the algorithm estimates the parameters for each of the partial models. Secondly, the parameters representing the spatial-temporal configuration of the global model are determined. The learning steps of our algorithm are detailed next. For simplicity, the probabilities in our model are represented by Gaussian densities. Learning step 1 - Learning the parameters of the partial models. In this step, the parameters of the partial models for pose and motion dynamics are estimated. We begin by modeling the probabilities of subregion locations in Equation 2 using Gaussian joint probability distributions. Fortunately, it can be shown that the conditional distributions relating independent Gaussian distributions are also Gaussian [1]. As a result, the conditional densities in Equations 3 and 4
take a particularly simple form. Further details on Gaussian joint distributions can be found in [1]. For a pose model, we commence by extracting a set of subregions centered at the locations provided by the interest point detector. The method requires two types of input. The first one is a set of positive training images (i.e., images containing the target pose) and a set of negative training images (i.e., background images). We associate a 3D location with every extracted subregion. Here, the location is represented by the x- and y-coordinates of the subregion in the pose image, and the additional t-coordinate is the frame-position of the pose image in the input sequence. Unlike some other approaches [2], our method does not require the pose to be segmented from the image. We adopt the learning process described by Crandall and Huttenlocher [6]. However, in our work, we consider the spatial-temporal configuration of parts rather than only the spatial configuration. In essence, their method uses a clustering technique to estimate the initial appearances of the possible parts. Then, an initial spatial model is created and the optimal number of parts is determined. An EM-based procedure is used to simultaneously refine the initial estimates of the appearances and the spatial parameters. In our method, we use an EM approach to simultaneously estimate the parameters of the distributions p(x_j^{(i)} | x_r^{(i)}) in Equation 4 and the pose appearance in Equation 5. We estimate the parameters of the dynamics model in a similar fashion as in the case of the pose models. We proceed by extracting a set of spatial-temporal interest points using the detector described in [7]. We again use the learning method from [6] to estimate the parameters of the distributions p(x_j^{(M)} | x_r^{(M)}) in Equation 4 and the dynamics appearance in Equation 6.

Learning step 2 - Estimating the parameters of the global model. The goal of this step is to estimate the parameters of the distributions that govern the relationships between the partial models. More specifically, we aim at estimating the parameters of the distributions p(x_r^{(i)} | x_r^{(M)}) in Equation 4. Given the original training data instances for each partial model (i.e., extracted frames for the pose models and aligned sequences for the dynamics model), we compute the most likely location for each data type by maximizing the likelihood of the pose models:

\hat{x}^{(i)} = \arg\max_{x} p(x^{(i)} | P_i)    (8)

and the dynamics model:

\hat{x}^{(M)} = \arg\max_{x} p(x^{(M)} | M)    (9)
Once the maximum likelihood locations have been evaluated for every partial model and its corresponding data instances, we can directly estimate the parameters of the distributions p(x_r^{(i)} | x_r^{(M)}) in Equation 4. These distributions govern the spatial-temporal interaction between the partial models.
Classification. The problem of recognizing a human motion in a video sequence can then be posed as follows. We seek the spatial-temporal location in the video sequence that maximizes the posterior probability of the location of the motion given a set of partial models, as given in (1):

\hat{X} = \arg\max_{X} p(X | V)    (10)
It is worth pointing out that, in the case of the tree-structured Bayesian network, the model is equivalent to a Markov Random Field (MRF) model in which the potential functions are the conditional probability densities. An efficient inference algorithm for such a graph structure was studied by Felzenszwalb and Huttenlocher [8]. The algorithm allows exact inference to be performed in a reasonable time if the number of partial models is small.
4 Experimental Results
The goal of our experiments is to demonstrate the potential of our method for the classification of human motion. To accomplish this goal, we tested our model on the human action dataset from [2]. This database contains nine action classes performed by nine different subjects. Figure 3 shows a sample of video frames for each motion analyzed in our experiments. Additionally, we compared our results with the results reported by Niebles and Fei Fei in [15]. Finally, we provide some preliminary experimental results on the effect of including additional partial models into the global modeling proposed in this paper. Video Data Preparation. We begin by pre-processing the video data. Since our method is view-dependent, we reflect frames of some sequences with respect to the y-axis, such that the direction of motion is the same in all sequences (e.g., a subject is always walking from right to left). In our implementation, when learning a pose model, we employ the Harris corner detector to obtain static interest point locations for pose images. We limit the number of interest points to be 20 for every pose image. A Gaussian smoothed edge-map of the pose images is obtained from which we extract square patches centered at the detected locations. The features required to create the dynamics model were obtained by means of the spatial-temporal interest point detector described in [7]. In all cases the dimensionality of the data was reduced using PCA. However, the appearance of the background in the sequences from [2] is very similar. This appearance similarity tends to induce a bias in the learning process. To address this issue, we created background data for the dynamics model from portions of the sequences in which no subject was present. On the other hand, corresponding frames served as the background data for the pose learning module. A sample of the static frames extracted from the background sequences is shown in Figure 2. In the results that follow, the maximum number of learned parts of each partial model is set to four. This is done by selecting only up to four most descriptive parts using the descriptiveness evaluation procedure as described in [6].
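A sketch of this data preparation step is given below. The paper does not specify the edge detector, the patch size or the PCA dimensionality, so the Canny detector and the numeric values used here are illustrative assumptions.

```python
import cv2
import numpy as np

def pose_training_patches(gray, n_points=20, patch=15, n_components=10):
    """Interest points, edge-map patches and PCA projection for one pose image.

    gray: 8-bit grayscale pose image.
    """
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=n_points, qualityLevel=0.01,
                                  minDistance=5, useHarrisDetector=True)
    if pts is None:
        return np.empty((0, n_components), np.float32)
    # Gaussian-smoothed edge map from which square subregions are extracted
    edges = cv2.GaussianBlur(cv2.Canny(gray, 50, 150), (5, 5), 1.0)
    half = patch // 2
    patches = []
    for x, y in pts.reshape(-1, 2).astype(int):
        if half <= y < edges.shape[0] - half and half <= x < edges.shape[1] - half:
            roi = edges[y - half:y + half + 1, x - half:x + half + 1]
            patches.append(roi.astype(np.float32).ravel())
    if not patches:
        return np.empty((0, n_components), np.float32)
    X = np.array(patches)
    # dimensionality reduction of the patch vectors with PCA (via SVD)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return (X - mean) @ Vt[:n_components].T
```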
Fig. 3. Human action dataset: Example frames from video sequences in the dataset from [2]. The images correspond to frames extracted at t = 0 (a), and t = 0.25 (b).
Classification. We compared our results with the results reported by Niebles and Fei-Fei [15]. Similarly, we adopted a leave-one-out scheme for evaluation, by taking videos of one subject as testing data and using sequences of the remaining subjects for training. The segmentation of sequences is not required by our method. Additionally, only the best match for each model is considered when making the labeling decision. For a given motion type we selected one motion cycle from every original training sequence. The set of the segmented motion cycles is the training set of our method. The partial model learning algorithm is EM-based and strongly depends on correct initialization. To reduce the effect of incorrect initialization, we removed one of the training sequences from the training set and assigned it to be a validation sequence. The learning algorithm was repeated five times and the model for which the posterior probability calculated on the validation sequence was the highest was retained. In our experiments, we also investigated the effect of including additional partial models into the global model within our framework. First, we built a “dynamics one-pose” model that combines the motion dynamics model and a single pose model. The classification results obtained with this model were compared to results produced by a “dynamics two-pose” model that combines the motion dynamics model and two pose models. For the “dynamics one-pose” model the pose was extracted at t = 0. For the “dynamics two-pose” model, the poses were extracted at t = 0 and t = 0.25, respectively. Figure 3 shows a sample of pose images that correspond to these time instants. The confusion tables generated by our classification results are shown in Figure 4. When classifying sequences, the “dynamics one-pose” model correctly classifies 70.4% of the testing videos. The method mostly misclassified those sequences for which the pose is similar at the given time instant. This is the case for pose images for the “pjump”, “jack”, and “side” actions (Figure 3(a)). On the other hand, with the “dynamics two-pose” model our system was able to correctly classify 79.0% of the test sequences. This is superior to the 72.8% classification rate in [15].
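The validation-based restart strategy described above can be organized as in the following sketch; learn_fn and posterior_fn stand in for the (unspecified) EM learning and inference routines.

```python
def train_action_model(motion_cycles, learn_fn, posterior_fn, n_restarts=5):
    """Hold out one cycle for validation, restart EM learning several times
    and keep the model with the highest posterior on the held-out cycle."""
    validation, training = motion_cycles[0], motion_cycles[1:]
    best_model, best_score = None, float("-inf")
    for _ in range(n_restarts):          # EM is sensitive to initialization
        model = learn_fn(training)
        score = posterior_fn(model, validation)
        if score > best_score:
            best_model, best_score = model, score
    return best_model
```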
Fig. 4. Confusion matrices for the “dynamics one-pose” model (a) and the “dynamics two-pose” model (b). The “dynamics one-pose” model correctly classifies 70.4% of the test sequences. The “dynamics two-pose” model correctly classifies 79.0% of the sequences.
5 Conclusions
In this paper, we presented a novel principled solution to the problem of recognizing human motion in videos. Our method works by combining data of different nature within a single probabilistic integration framework. More specifically, we demonstrated how partial models of individual static poses can be combined with partial models of the video's motion dynamics to achieve motion classification. We demonstrated the effectiveness of the proposed method on a series of motion classification experiments using a well-known motion database. We also provided a comparison with a recently published motion recognition approach. Our results demonstrate that our method offers promising classification performance. Future directions of investigation include studying the possibility of automatically selecting poses that would lead to optimal recognition performance. One possible way forward is to use boosting to improve the selection of optimal poses and dynamics information.

Acknowledgments. This research was supported by the U.S. Office of Naval Research under contract N00014-05-1-0764.
References 1. Bishop, C.M.: Pattern Recognition and Machine Learning (Information Science and Statistics), Secaucus, NJ, USA. Springer, Heidelberg (2006) 2. Blank, M., Gorelick, L., Shechtman, E., Irani, M., Basri, R.: Actions as space-time shapes. Int. Conference on Computer Vision, 1395–1402 (2005) 3. Boiman, O., Irani, M.: Detecting irregularities in images and in video. In: Conf. on Computer Vision and Pattern Recognition, pp. 462–469 (2005)
4. Burl, M.C., Weber, M., Perona, P.: A probabilistic approach to object recognition using local photometry and global geometry. In: Burkhardt, H., Neumann, B. (eds.) ECCV 1998. LNCS, vol. 1407, pp. 628–641. Springer, Heidelberg (1998)
5. Carneiro, G., Lowe, D.: Sparse flexible models of local features. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3953, pp. 29–43. Springer, Heidelberg (2006)
6. Crandall, D.J., Huttenlocher, D.P.: Weakly supervised learning of part-based spatial models for visual object recognition. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3951, pp. 16–29. Springer, Heidelberg (2006)
7. Dollár, P., Rabaud, V., Cottrell, G., Belongie, S.: Behavior recognition via sparse spatio-temporal features. In: VS-PETS (October 2005)
8. Felzenszwalb, P.F., Huttenlocher, D.P.: Pictorial structures for object recognition. Int. J. Comput. Vision 61(1), 55–79 (2005)
9. Fergus, R., Perona, P., Zisserman, A.: Weakly supervised scale-invariant learning of models for visual recognition. Int. J. Comput. Vision 71(3), 273–303 (2007)
10. Fergus, R., Perona, P., Zisserman, A.: A sparse object category model for efficient learning and exhaustive recognition. In: CVPR 2005. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (June 2005)
11. Fischler, M., Elschlager, R.: The representation and matching of pictorial structures. IEEE Transactions on Computers 22, 67–92 (1977)
12. Laptev, I., Lindeberg, T.: Space-time interest points. In: IEEE Int. Conf. on Computer Vision, Nice, France (October 2003)
13. Leo, M., D’Orazio, T., Gnoni, I., Spagnolo, P., Distante, A.: Complex human activity recognition for monitoring wide outdoor environments. In: ICPR 2004. Proceedings of the Pattern Recognition, 17th International Conference, vol. 4, pp. 913–916. IEEE Computer Society Press, Los Alamitos (2004)
14. Niebles, J., Wang, H., Fei-Fei, L.: Unsupervised learning of human action categories using spatial-temporal words. In: BMVC 2006. British Machine Vision Conference, p. 1249 (2006)
15. Niebles, J.C., Fei-Fei, L.: A hierarchical model of shape and appearance for human action classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, USA (July 2007)
16. Schuldt, C., Laptev, I., Caputo, B.: Recognizing human actions: A local SVM approach. In: ICPR 2004. Proceedings of the Pattern Recognition, 17th International Conference, vol. 3, pp. 32–36. IEEE Computer Society Press, Los Alamitos (2004)
17. Wang, Y., Jiang, H., Drew, M.S., Li, Z.-N., Mori, G.: Unsupervised discovery of action classes. In: CVPR 2006. Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1654–1661. IEEE Computer Society Press, Los Alamitos (2006)
18. Wong, S.-F., Kim, T.-K., Cipolla, R.: Learning motion categories using both semantic and structural information. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, USA (June 2007)
Optical Flow and Total Least Squares Solution for Multi-scale Data in an Over-Determined System

Homa Fashandi¹, Reza Fazel-Rezai¹, and Stephen Pistorius²,³

¹ Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, MB, Canada R3T 5V6 {Fashandi,Fazel}@ee.umanitoba.ca
² Departments of Radiology & Physics and Astronomy, University of Manitoba, Winnipeg, MB, Canada R3T 2N2
³ Medical Physics, CancerCare Manitoba, Winnipeg, MB, Canada R3E 0V9
[email protected]
Abstract. In this paper, we introduce a new technique to estimate optical flow fields based on wavelet decomposition. In order to block error propagation between layers of the multi-resolution image pyramid, we consider the information from all pyramid levels at once. We add a homogeneous smoothness constraint to the system of optical flow constraints to obtain smooth motion fields. Since there are approximations on both sides of our over-determined equation system, a total least squares method is used as the minimization technique. The method was tested on several standard sequences in the field, as well as on megavoltage images taken by linear accelerator devices, and showed promising results.
1 Introduction

Optical flow is defined as “the distribution of apparent velocities of movements of brightness patterns in an image” [1]. It is an estimate of the 2D projection of a moving 3D environment. A wide range of applications use optical flow, including scene interpretation algorithms, recognizing camera motion, determining the number of moving objects, characterizing their motion, recovering camera ego-motion, and performing motion segmentation. A comprehensive survey of different optical flow techniques is presented in [2]. Methods are broadly classified into four different categories: gradient-based, correlation-based, phase-based and spatiotemporal energy-based methods. Gradient-based techniques utilize spatial and temporal derivatives of the image. The most salient work in this category has been presented by Horn and Schunck [1]. Correlation-based approaches are similar to block matching techniques. For each pixel in the image, a block is opened, the best match in the previous frame is found, and the amount of motion is calculated. The best match is determined by maximizing or minimizing a similarity measure [3]. Phase-based approaches define velocity in terms of the phase behavior of band-pass filter outputs [4]. Spatiotemporal energy-based methods use physiologically based models for motion extraction. Oriented spatiotemporal subband filters are applied to the image and their outputs measure the motion energy [5]. A broader classification of optical flow techniques is also introduced in [6].
Optical flow methods are also classified into local approaches and global methods. Local techniques, which are more robust to noise, optimize some form of local energy objective [1], while global methods, which provide flow fields with full density, optimize a global energy function [7]. In this paper, we have developed a gradient-based technique to enable us to evaluate real-time patient positioning errors in radiation therapy. Initially, a multi-resolution image pyramid is created using wavelet decomposition. For each block in the original image, information at all coarser levels is incorporated to establish an over-determined system of linear equations based on optical flow constraints. A homogeneous smoothness constraint at the finest level is added to the system, and finally a total least squares method is used to solve the system. The paper is organized as follows. Section 2 develops the technical background required before we describe our proposed method in Section 3. The experimental results are shown in Section 4, while in Section 5 conclusions and future work are presented.
2 Technical Background

A brief definition of optical flow and of regularization techniques is presented in Section 2.1. Since this paper estimates optical flow using wavelets, Section 2.2 briefly describes the salient work on wavelet-based optical flow techniques. The affine motion model is discussed in Section 2.3, while in Section 2.4 a method for solving over-determined systems of linear equations is presented.

2.1 Optical Flow

There are three major techniques to estimate displacement vectors from successive image frames: block matching, the pel-recursive method and optical flow [8]. Optical flow provides relatively more accurate results [8]. Optical flow algorithms estimate the motion vector for each pixel in an image. Most optical flow techniques assume intensity conservation between images, which implies that the intensity values of pixels do not change in an image sequence. Under this assumption we can write
f(x + u, y + v, t + 1) = f(x, y, t) \qquad (1)
where (u, v) is the optical flow vector at point (x, y). If a first-order Taylor series is used to expand equation 1, the result is the optical flow constraint:
f_x u + f_y v + f_t = 0 \qquad (2)
where f_x, f_y and f_t are the partial derivatives in x, y and t, respectively. To estimate the two unknowns u and v, we need to solve equation 2. This single equation does not contain enough information to be solved uniquely. This is called the aperture problem, as only the flow component parallel to ∇f = (f_x, f_y), where the image gradient is not zero, can be computed. In general, we need an additional constraint to solve the problem; variational methods consider smooth or piecewise smooth optical flow fields [9] while minimizing some form of energy function E:
E(u, v) = \int_{\Omega\times[0,T]} \mathrm{OPC}(u, v, \nabla_3 f) + \alpha\, S(\nabla_3 u, \nabla_3 v, \nabla_3 f)\; dx\, dy\, dt \qquad (3)
where Ω is the spatial neighbourhood, OPC is the optical flow constraint (equation 2), S is the smoothness function, ∇_3 = (∂_x, ∂_y, ∂_t)^T is the spatiotemporal gradient operator, and α is a regularisation parameter which determines the contribution of smoothness to the optical flow estimation. Regularization has a strong influence on the result, and an overview of regularization techniques can be found in [6], [10]. We used the regularization technique introduced by Horn and Schunck, which utilizes the Laplacian of u and v as a smoothing constraint [1]. In this method, the smoothness function is expressed as:

S(\nabla_3 u, \nabla_3 v, \nabla_3 f) = \|\nabla u\|^2 + \|\nabla v\|^2 \qquad (4)
2.2 Wavelet-Based Optical Flow Techniques
Time aliasing, where large displacements cannot be recovered at a fine scale, is a common problem in optical flow techniques. To overcome this, a multi-resolution approach, where optical flow fields are estimated at the coarsest level and then propagated to the finer levels to estimate more details, has been used intensively [12]. Since the wavelet transform has a built-in multi-scale structure, it has attracted attention for estimating motion fields [11][12][13][14]. Wu et al. proposed a method [11] which minimizes the following function to recover u and v:
E = \sum_{x,y} \left[ I(x + u(x,y),\, y + v(x,y)) - I_0(x,y) \right]^2 \qquad (5)
Motion vectors u and v are approximated using two-dimensional basis functions which are themselves tensor products of one-dimensional basis functions. Each motion field is expressed by linear combinations of the coarsest-scale spline function and of horizontal, vertical and diagonal wavelets at finer levels. The problem is then converted to estimating coefficient vectors [11]. Chen et al. used wavelets to represent u and v as well as image-related operators such as the gradient operator [14]. To overcome the aperture problem, Bernard used wavelets to calculate the inner product of the optical flow constraint with S different vectors to establish a solvable system of equations [13]. The final form of the system is as follows:

\langle I, \partial\psi^1_w/\partial x\rangle\, u(w) + \langle I, \partial\psi^1_w/\partial y\rangle\, v(w) = \frac{\partial}{\partial t}\langle I, \psi^1_w\rangle
\quad\vdots
\langle I, \partial\psi^N_w/\partial x\rangle\, u(w) + \langle I, \partial\psi^N_w/\partial y\rangle\, v(w) = \frac{\partial}{\partial t}\langle I, \psi^N_w\rangle \qquad (6)
where the (ψ^n)_{n=1,...,N} are defined in L^2(ℝ^2), centered around (0, 0), with different frequency contents, and ψ^n_w(x) = ψ^n(x − w). Typically 3 or 4 equations are used to solve the system for u and v. Liu et al. proposed a framework for wavelet-based optical flow estimation to overcome the problem of error propagation in multi-level pyramids [12]. Typically, optical flow is first estimated at a coarse level and then propagated to a finer level as a warp function to morph an image. The algorithm is then applied to the warped version of an image and its reference. When the coarse-level estimates contain large errors that cannot be corrected at finer levels, the result contains large errors. This typically happens when regions of low texture become flat at coarser levels. To solve the error propagation problem, Liu et al. suggested considering the optical flow constraint at all levels of the multi-resolution pyramid simultaneously and solving the system of equations at once. We will discuss this method further in Section 3.

2.3 Affine Motion Model for Optical Flow
As indicated in equation 1, only translational motion is considered when estimating optical flow. Since true motion occurs in 3-dimensional space and is then projected into the 2-dimensional image space, a better assumption for motion is an affine motion model. This model was first introduced to the field of optical flow estimation in [15]. In this model, u and v are defined as follows:

u = p_1 x + p_2 y + p_3
v = p_4 x + p_5 y + p_6 \qquad (7)
where the six parameters p_1 to p_6 represent the affine motion model: p_1, p_2, p_4 and p_5 are the linear affine parameters, and p_3 and p_6 are the translation parameters. Considering the affine motion model, equation 2 can be written in vector form as below:
(f_x, f_y) \begin{pmatrix} x & y & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & x & y & 1 \end{pmatrix} (p_1\; p_2\; p_3\; p_4\; p_5\; p_6)^T - f_t = 0 \qquad (8)
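For illustration only, the following sketch builds one row of this linear system for a single pixel, given precomputed derivatives; the function name and the optional scale argument (used later for the power-of-2 correction of equation 15) are assumptions, not part of the paper.

import numpy as np

def affine_ofc_row(fx, fy, ft, x, y, scale=1.0):
    """One row of the over-determined system A p = b (cf. equation 8).

    fx, fy, ft are the spatial and temporal derivatives at pixel (x, y);
    scale = 1.0 reproduces equation 8 itself.
    """
    a = np.array([fx * x, fx * y, fx * scale,
                  fy * x, fy * y, fy * scale])
    b = ft  # here f_t approximates f2(x, y) - f1(x, y)
    return a, b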
2.4 Over-Determined System of Linear Equations and Total Least Squares
Regardless of the method used to solve the aperture problem, we obtain a system of linear equations to estimate the motion fields. One example of this is shown in equation 8. There are only 6 unknown parameters, so it could be expected that 6 equations are enough to estimate the unknowns. However, since we are using approximations and erroneous, noisy data, 6 equations will not give the best solution, and generally more equations are used to solve this problem [16]. An over-determined system of linear equations is written as

A x = b \qquad (9)
where A ∈ ℝ^{m×n} is a data matrix, b ∈ ℝ^m is an observation vector, and m > n. In our case n equals 6. In classical least squares (LS), the measurements in A are considered error free and all errors are confined to b. In solving equation 9 for optical flow, both A and b contain errors due to noise, approximations and occlusion. Van Huffel and Vandewalle proposed a method called total least squares (TLS) which outperforms LS when both the data and observation matrices are erroneous [17]. They define the multidimensional TLS problem as follows. Let AX = B be an over-determined set of m equations with n×d unknowns X (m > n). TLS minimizes

\min_{[\hat{A};\hat{B}] \in \mathbb{R}^{m\times(n+d)}} \left\| [A; B] - [\hat{A}; \hat{B}] \right\|_F \quad \text{subject to } \hat{B} \subseteq R(\hat{A}) \qquad (10)

where \|M\|_F is the Frobenius norm of M. Once a minimizing [\hat{A}; \hat{B}] is found, any X satisfying \hat{A}X = \hat{B} is a TLS solution and [\Delta\hat{A}; \Delta\hat{B}] = [A; B] - [\hat{A}; \hat{B}] is the corresponding TLS correction. The singular value decomposition (SVD) of the matrix [A; B] is

[A; B] = U \Sigma V^T \qquad (11)

such that

\Sigma = \begin{pmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{pmatrix} = \mathrm{diag}(\sigma_1, \ldots, \sigma_{n+t}) \in \mathbb{R}^{m\times(n+d)}, \quad t = \min\{m-n, d\},
\Sigma_1 = \mathrm{diag}(\sigma_1, \ldots, \sigma_n) \in \mathbb{R}^{n\times n}, \quad \Sigma_2 = \mathrm{diag}(\sigma_{n+1}, \ldots, \sigma_{n+t}) \in \mathbb{R}^{(m-n)\times d},
\sigma_1 \ge \ldots \ge \sigma_{n+t} \ge 0 \qquad (12)

If \sigma'_n > \sigma_{n+1} = \ldots = \sigma_{n+d}, where \sigma'_n is the nth singular value of A, the solution to equation 10 is

\hat{X} = (A^T A - \sigma_{n+1}^2 I)^{-1} A^T B \qquad (13)
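A minimal sketch of this closed form is given below; it assumes the genericity condition σ'_n > σ_{n+1} holds and does not handle the degenerate (non-unique) cases discussed by Van Huffel and Vandewalle.

import numpy as np

def tls_solve(A, B):
    """Total least squares solution of A X ~ B via the SVD of [A; B] (eq. 13)."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    if B.ndim == 1:
        B = B[:, None]
    n = A.shape[1]
    # Singular values of the augmented matrix [A; B] (equations 11-12).
    sigma = np.linalg.svd(np.hstack([A, B]), compute_uv=False)
    sigma_n1 = sigma[n]                              # sigma_{n+1}
    # Closed-form TLS estimate (equation 13).
    X = np.linalg.solve(A.T @ A - sigma_n1 ** 2 * np.eye(n), A.T @ B)
    return X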
3 Our Proposed Method

Our proposed method is inspired by the work of Liu et al. [12] and of Bab Hadiashar and Suter [16]. Liu et al. considered information at all levels of resolution simultaneously to block error propagation [12]. Bab Hadiashar and Suter compared different kinds of least squares techniques to find a robust method for optical flow determination [16]. The novelty of our work is to consider a smoothness constraint in addition to information at all resolution levels. We also use TLS to solve this over-determined system. The overall structure of the proposed method is depicted in Fig. 1. The first step is to use a two-dimensional wavelet transform to construct an image pyramid. At each level of decomposition, approximation, vertical, horizontal and diagonal channels are obtained. As in equation 8, vertical and horizontal derivatives of the image are used to obtain the optical flow constraint. These derivatives are calculated in the wavelet decomposition pyramid. Fig. 2 shows the schematic form of the two-dimensional wavelet decomposition.
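As a minimal sketch of one such decomposition step (using the Haar wavelet as an illustrative choice, not necessarily the wavelet used by the authors), repeated application to the approximation channel builds the pyramid of Fig. 2:

import numpy as np

def haar_decompose(img):
    """One level of a 2-D Haar wavelet decomposition (a minimal sketch).

    Returns the approximation (A), horizontal (H), vertical (V) and
    diagonal (D) channels; a production implementation would normally
    use a dedicated wavelet library.
    """
    img = np.asarray(img, dtype=float)
    # Trim to even dimensions so 2x2 blocks tile the image exactly.
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2]
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    A = (a + b + c + d) / 4.0   # approximation
    H = (a + b - c - d) / 4.0   # horizontal detail
    V = (a - b + c - d) / 4.0   # vertical detail
    D = (a - b - c + d) / 4.0   # diagonal detail
    return A, H, V, D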
We reconstructed equation 3 so that the translational motion is replaced by the affine motion model. TLS is used to minimize the objective function E:

E(p_1, \ldots, p_6) = \int_{\Omega\times[0,T]} \sum_{l=0}^{L} \mathrm{OPC}(p_1, \ldots, p_6, \nabla_3 f_l) + \alpha\, S(\nabla_3 p_1, \ldots, \nabla_3 p_6, \nabla_3 f_0)\; dx\, dy\, dt \qquad (14)
where f_l is the image approximation at level l of the multi-resolution pyramid; l = 0 is the bottom of the pyramid and l = L is the coarsest level.
Fig. 1. Overall structure of our proposed method. The image pyramid is constructed by wavelet decomposition. Optical flow constraints are obtained for pixels in blocks of 2^{L−m+1}×2^{L−m+1} at level m, where L is the total number of levels. In addition to all of these equations, a smoothness constraint is also considered for the finest level. The over-determined system of linear equations A p = B is solved using the TLS method. [p_1, …, p_6] are the affine motion parameters.
Fig. 2. Two level wavelet decomposition. A: approximation channel, H: horizontal channel, V: vertical channel, D: diagonal channel.
The TLS solution to equation 14 is given by the system shown in equation 15. Here the optical flow parameters are considered similar within a small neighborhood. For each pixel in the image, or for each 2^{L+1} pixels in the image, the system in equation 15 is computed. Since u and v become half of their value when moving from one level to the next coarser one, we add power-of-2 corrections in the spatial matrices in system 15. The last two equations in system 15 are responsible for homogeneous smoothness: α is the contribution of smoothness in the minimization process, while ε_1 and ε_2 are almost zero.
(f^0_{1x}(x_i, y_i),\, f^0_{1y}(x_i, y_i)) \begin{pmatrix} x_i & y_i & 2^L & 0 & 0 & 0 \\ 0 & 0 & 0 & x_i & y_i & 2^L \end{pmatrix} \vec{p} = f^0_2(x_i, y_i) - f^0_1(x_i, y_i), \quad i = 1, \ldots, 2^{L+1}
\quad\vdots
(f^L_{1x}(x_i, y_i),\, f^L_{1y}(x_i, y_i)) \begin{pmatrix} x_i & y_i & 2^0 & 0 & 0 & 0 \\ 0 & 0 & 0 & x_i & y_i & 2^0 \end{pmatrix} \vec{p} = f^L_2(x_i, y_i) - f^L_1(x_i, y_i), \quad i = 1, \ldots, 2^2
\alpha\, \partial\vec{p}/\partial x = \varepsilon_1
\alpha\, \partial\vec{p}/\partial y = \varepsilon_2 \qquad (15)
4 Experimental Results

There are several test sequences for optical flow estimation. They are categorized as real and synthetic sequences. To obtain the error measure for a method, the true optical flow of a sequence is needed. Normally in this field, the error is measured by the angular error between the space-time direction vectors (v_act, 1) and (v, 1):
e = \arccos\left[ \frac{(v_{act}, 1)\cdot(v, 1)}{\sqrt{1 + \|v_{act}\|^2}\,\sqrt{1 + \|v\|^2}} \right] \qquad (16)
where v_act is the actual 2-D motion field and v is the estimated motion field. It is usual to consider a confidence measure for the estimated optical flow to enhance the result. We obtain a reliability measure for our method based on the TLS solution, where the confidence measure is

R = 1 - \frac{\sigma_{n+1}^2}{\sum_i (b_i - \bar{b})^2} \qquad (17)
where σ_{n+1} is the same as defined in equation 13 and the b_i are the observation vector elements. This measure should be close to one [16]. To test the system, we used the Yosemite^1, Hamburg taxi^2 and Vc-box^3 sequences. The Yosemite sequence is a challenging one because it includes a range of velocities, occluding edges between the mountains and at the horizon, and severe aliasing in the lower portion of the image [2]. Two frames of this sequence, the true optical flow and our estimated optical flow are shown in Fig. 3(a-d). We found most errors in the cloud region. We set R = 0.8 as a threshold and removed the motion fields below the threshold. Our overall error is 12.3 degrees with a standard deviation of 9.35 degrees. By using higher thresholds we could obtain a better result, but at the same time we lose optical flow density. Therefore, there is a tradeoff between density and a lower error rate.
^1 http://www.cs.brown.edu/~black/images.html
^2 http://i21www.ira.uka.de/image_sequences/
^3 http://of-eval.sourceforge.net/
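For reference, a minimal sketch of these two measures (equations 16 and 17) is given below; the names and array layout are illustrative assumptions, not part of the paper.

import numpy as np

def angular_error(v_est, v_act):
    """Average angular error (equation 16) in degrees; inputs have shape (N, 2)."""
    v_est = np.asarray(v_est, dtype=float)
    v_act = np.asarray(v_act, dtype=float)
    num = (v_est * v_act).sum(axis=1) + 1.0            # (v,1).(v_act,1)
    den = np.sqrt(1.0 + (v_est ** 2).sum(axis=1)) * \
          np.sqrt(1.0 + (v_act ** 2).sum(axis=1))
    ang = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    return ang.mean()

def reliability(sigma_n1, b):
    """Confidence measure of equation 17 for one TLS system (a sketch)."""
    b = np.asarray(b, dtype=float)
    return 1.0 - sigma_n1 ** 2 / np.sum((b - b.mean()) ** 2)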
Fig. 3. (a-b) Yosemite image sequence, frames 9 and 10, (c) true optical flow, (d) estimated flow field with threshold set to 0.8, (e-f) frames 16 and 17 of the Vc-box sequence, (g) true motion field, (h) estimated motion field with threshold equal to 0.8, (i) reference megavoltage image, (j) megavoltage image with a 2-degree rotation relative to the image shown in (i), (k) estimated optical flow, (l) estimated optical flow with threshold set to 0.8, (m) reference megavoltage image, (n) megavoltage image with 2 mm translational motion to the right with respect to the image shown in (m), (o) estimated optical flow, (p) threshold set to 0.8, (q-r) Hamburg taxi sequence, frames 5 and 6, (s) estimated optical flow
Images in Fig. 3(e-g) show two frames of the Vc-box sequence and the true and estimated optical flow. The sequence involves zooming out from a view of a cardboard box [18]. In addition to the standard sequences, we also applied our method to megavoltage images taken on a linear accelerator. These images are taken with high-energy X-rays (in the megavolt range), and are of relatively poor contrast and resolution. We used these images because we wanted to identify whether the method is capable of recognizing
small changes in noisy and low resolution images. As shown in Fig. 3 (i-p) our system successfully recognized rotation and translational motions. These images contain illumination changes which caused errors especially in image boundaries. Finally Hamburg taxi sequence and the estimated optical flow are shown in Fig. 3(q-s). This sequence shows three moving cars, the taxi turning, a car moving from left to right at the bottom left and another car driving right to left at the bottom right of the sequence. Estimated optical flow shows these motions correctly.
5 Summary and Conclusions

Optical flow is an estimate of the 2D projection of a moving 3D environment. Optical flow is used by a wide variety of applications, including scene interpretation, recognizing camera movements, determining the number of moving objects, etc. In this paper, we construct an over-determined system of linear equations to estimate optical flow. This system contains information from the neighborhood of each pixel at different resolutions. A multi-resolution scheme is used in estimating optical flow in order to recover larger motions. Most techniques use a multi-resolution pyramid to propagate motion fields from the coarser levels to the finer ones. In this paper, we were inspired by the idea presented in [12]: we gathered all the information at the different levels to block error propagation. A two-dimensional wavelet transform is used to construct the multi-resolution pyramid. In addition to the optical flow constraints, we propose to add a smoothness constraint to the system. Since numerical approximations are used in calculating the image gradients and the smoothness constraint, and given the noise in the images, there are perturbations on both sides of our system. We propose using the total least squares method instead of traditional least squares as a robust method to solve the system and estimate the optical flow vectors. In comparison to the summary of different techniques applied to the Yosemite sequence reported in [12], our results are promising.
Acknowledgement Financial support from CancerCare Manitoba and MITACS is gratefully acknowledged.
References 1. Horn, B.K.P., Schunck, B.G.: Determining Optical Flow. Artificial Intelligence 17, 185– 204 (1981) 2. Barron, J.L., Fleet, D.J., Beauchemin, S.S.: Performance of Optical Flow Techniques. International Journal of Computer Vision 12, 43–77 (1994) 3. Anandan, P.: A Computational Framework and an Algorithm for the Measurement of Visual Motion. International Journal of Computer Vision 2, 283–310 (1989) 4. Gautama, T., Van Hulle, M.: A Phase-based Approach to the Estimation of the Optical Flow Field. IEEE Transactions on Neural Networks 13(5), 1127–1136 (2000) 5. Simoncelli, E.P., Adelson, E.H.: Computing Optical flow Distributions Using Spatiotemporal Filters, Technical Report 165, M.I.T Media Lab Vision and Modeling (1991)
42
H. Fashandi, R. Fazel-Rezai, and S. Pistorius
6. Bruhn, A., Weickert, J.: Lucas/Kanade Meets Horn/Schunck. International Journal of Computer Vision 16(3), 211–231 (2005) 7. Lucas, B., Kanade, T.: An Iterative Image Registration Technique with an Application to Stereo Vision. In: Proceeding Seven International Joint Conference on Artificial Intelligence, Vancouver, pp. 674–679 (1981) 8. Shi, Y.Q., Sun, H.: Image and Video Compression for Multimedia Engineering: Fundamentals, Algorithms and Standards. CRC Press, New York (2000) 9. Bruhn, A., Weickert, J.: A Confidence Measure for Variational Optic Flow Methods. In: Geometric Properties for Incomplete Data, pp. 283–297. Springer, Heidelberg (2006) 10. Weickert, J., Schnorr, C.: A Theoritical Framework for Convex Regularizers in PDE based Computation of Image Motion. International Journal of Computer Vision 45, 245–264 (2001) 11. Wu, Y.T., Kanade, T., Li, C.C., Cohn, J.: Optical Flow Estimation Using Wavelet Motion Model. International Journal of Computer Vision 38(2), 129–152 (2000) 12. Liu, H., Chellappa, R., Rosenfeld, A.: Fast Two Frame Multi-scale Dense Optical Flow Estimation Using Discrete Wavelet Filters. Journal of Optical Society of America 20(8), 1505–1515 (2003) 13. Bernard Ch.: Wavelet and Ill-posed Problems: Optic Flow Estimation and Scattered Data Interpolation, PhD Thesis, Ecole Polytechnique (1999) 14. Chen, L.F., Liao, H.Y., Lin, J.C.: Wavelet Based Optical Flow Estimation. IEEE Transactions on Circuits and Systems for Video Technology 12(1), 1–12 (2002) 15. Liu, H., Hong, T.H., Herman, M., Chellappa, R.: A General Motion Model and Spatio Temporal Filters for Computing Optical Flow. International Journal of Computer Vision 22(2), 141–172 (1997) 16. Bab Hadiashar, A., Suter, D.: Robust Optical Flow Computation. International Journal of Computer Vision 29(1), 59–77 (1998) 17. Van Huffel, S., Vandewalle, J.: The Total Least Squares Problem Computational Aspects and Analysis, SIAM (1991) ISBN 0-89871-275-0 18. McCane, B.: On Benchmarking Optical Flow. Computer Vision and Image Understanding 84, 126–143 (2001)
A Hardware-Friendly Adaptive Tensor Based Optical Flow Algorithm

Zhao-Yi Wei, Dah-Jye Lee, and Brent E. Nelson

Department of Electrical and Computer Engineering, Brigham Young University, Provo, UT, USA
Abstract. A tensor-based optical flow algorithm is presented in this paper. This algorithm uses a cost function that is an indication of tensor certainty to adaptively adjust weights for tensor computation. By incorporating a good initial value and an efficient search strategy, this algorithm is able to determine optimal weights in a small number of iterations. The weighting mask for the tensor computation is decomposed into rings to simplify a 2D weighting into 1D. The devised algorithm is well-suited for real-time implementation using a pipelined hardware structure and can thus be used to achieve real-time optical flow computation. This paper presents simulation results of the algorithm in software, and the results are compared with our previous work to show its effectiveness. It is shown that the proposed new algorithm automatically achieves equivalent accuracy to that previously achieved via manual tuning of the weights.
1 Introduction

Tensor based optical flow algorithms can produce dense and accurate optical flow fields [1]-[5] for motion detection and estimation. Tensors provide a closed form representation of the local brightness pattern. Using special-purpose hardware to compute optical flow has previously shown its advantages over traditional software-based implementations of optical flow algorithms [6]-[10]. Depending on the application, a fast optical flow algorithm with adequate accuracy can be more useful in practice than a complex and slow algorithm with higher “theoretical accuracy”. This is because the brightness constancy assumption is better satisfied when processing at high frame rates. A small processing unit with special processing hardware can be more useful for many real-time applications than a general purpose PC. However, mapping from software to hardware is not trivial, and performance will deteriorate if hardware implementation considerations are not taken into account carefully. That is, it is necessary to devise the algorithm with hardware in mind from the outset. In [6] a tensor-based optical flow algorithm was implemented on an FPGA that was able to process 640×480 images at 64 frames per second. This computation contained two weighting processes, and the weights were determined offline by finding the setting which gave the best accuracy compared against the ground truth. However, this method was not optimal in practice. In this paper, a new algorithm is proposed that uses a cost function to adaptively determine the optimal weights. An efficient scheme is
devised to determine the optimal weights faster and more accurately. To better fit the hardware structure, an efficient tensor computation method is proposed. Experimental results are presented in Section 3 to show the effectiveness of the algorithm.
2 Algorithm

2.1 Tensor-Based Algorithm Overview

Given an image sequence g(x), a structure tensor T(x) can be computed at each pixel to incorporate the local brightness pattern. The structure tensor T(x) is a 3×3 symmetric positive semidefinite matrix. The structure tensor [4] is defined as
T = \sum_i c_i O_i = \begin{pmatrix} t_1 & t_4 & t_5 \\ t_4 & t_2 & t_6 \\ t_5 & t_6 & t_3 \end{pmatrix} \qquad (1)
where ci are weights for averaging the outer products O of the averaged gradient. O is calculated as
O = \overline{\nabla g}(\mathbf{x})\, \overline{\nabla g}(\mathbf{x})^T = \begin{pmatrix} o_1 & o_4 & o_5 \\ o_4 & o_2 & o_6 \\ o_5 & o_6 & o_3 \end{pmatrix} \qquad (2)
where the averaged gradient is
\overline{\nabla g}(\mathbf{x}) = \sum_i w_i \nabla g(\mathbf{x}_i) = (\bar{g}_x(\mathbf{x}), \bar{g}_y(\mathbf{x}), \bar{g}_z(\mathbf{x}))^T \qquad (3)
and w_i are weights for averaging the gradient ∇g(x). From (1)-(3), we can see that the tensor is calculated by first smoothing the gradient and then weighting the outer products of the smoothed gradient. The weights c_i and w_i are critical to the performance of the algorithm. The optical flow v = (v_x, v_y)^T can be extended to a 3D spatio-temporal vector \tilde{v} = (v^T, 1)^T = (v_x, v_y, 1)^T. According to the brightness constancy assumption, \tilde{v}^T T \tilde{v} = 0 if there is no rotational movement or noise in the neighborhood [1]-[2]. In the presence of rotation and noise, \tilde{v}^T T \tilde{v} will not be zero, and v can be determined by minimizing \tilde{v}^T T \tilde{v} instead. The optical flow can then be solved as
v_x = \frac{t_6 t_4 - t_5 t_2}{t_1 t_2 - t_4^2}, \qquad v_y = \frac{t_5 t_4 - t_6 t_1}{t_1 t_2 - t_4^2} \qquad (4)
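As an illustration only (assuming the tensor layout of equation 1 and the reconstructed form of equation 4 above), the following sketch builds a weighted tensor from precomputed averaged gradients over a neighbourhood and recovers the flow:

import numpy as np

def structure_tensor(gx, gy, gz, weights):
    """Weighted sum of gradient outer products (equation 1), for one pixel.

    gx, gy, gz and weights are 1-D arrays over the pixel's neighbourhood.
    """
    weights = np.asarray(weights, dtype=float)
    grads = np.stack([gx, gy, gz], axis=1)                 # shape (N, 3)
    return (weights[:, None, None] *
            grads[:, :, None] * grads[:, None, :]).sum(axis=0)

def flow_from_tensor(T):
    """Recover (vx, vy) from a 3x3 structure tensor via equation 4."""
    t1, t2 = T[0, 0], T[1, 1]
    t4, t5, t6 = T[0, 1], T[0, 2], T[1, 2]
    det = t1 * t2 - t4 ** 2
    vx = (t6 * t4 - t5 * t2) / det
    vy = (t5 * t4 - t6 * t1) / det
    return vx, vy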
2.2 Cost Function

Using a tensor to represent orientation has a notable advantage over scalars and vectors. Tensors not only estimate the orientation but also include the certainty of the
estimation. In [4], local brightness patterns were divided into different cases and eigenvalues of the structure tensor in each case were analyzed. Three measures: total coherency measure, spatial coherency measure, and corner measure were defined. They were functions of the eigenvalues of structure tensor. Middendorf et al. [11] used eigenvalue analysis to divide the optical flow field into five categories for motion segmentation. Kühne et al. [12] derived a coherence measure based on eigenvalues. This measure was integrated into an active contour model for segmentation of moving objects. In [3], a corner measure was applied to adaptively adjust the Gaussian window function to improve tensor accuracy. All of the above work first calculated the eigenvalues of the structure tensor and then devised measures using certain combinations of these eigenvalues. The main challenge of using these measures in hardware is that eigenvalues are difficult to obtain. In this paper, the following cost function [1] is used to indicate the certainty
c_t(T) = \frac{t_3 - \mathbf{t}^T \tilde{T}^{-1} \mathbf{t}}{\mathrm{trace}(T)}, \quad \text{where } T = \begin{pmatrix} \tilde{T} & \mathbf{t} \\ \mathbf{t}^T & t_3 \end{pmatrix} \qquad (5)
Here \tilde{T} is a 2×2 symmetric matrix and its inverse can be easily computed. c_t is the minimum value of \tilde{v}^T T \tilde{v} [1] normalized by the trace of the tensor, and it indicates the variation along direction v. Therefore, the smaller c_t is, the more reliable the tensor will be, and vice versa.

2.3 Finding Optimal Weights

Cost function c_t is not directly related to the w_i in (3). It is difficult to decouple c_i and w_i and to adjust w_i by evaluating the cost function. Cost function c_t in (5) directly indicates the performance of the weights c_i in (1) if we assume the w_i are fixed. Our algorithm adjusts c_i while using fixed w_i. The support of w_i is chosen to be relatively small to prevent “over-smoothing”. The reason for applying two small weighting masks instead of one big mask is that it is more economical to implement such a scheme in hardware. The c_i are given by the Gaussian function. The shape of the Gaussian function can be characterized by its standard deviation σ. The problem can be interpreted as “finding the optimal standard deviation σ for c_i such that the cost function c_t(T) is minimized”. Liu et al. [3] proposed an adaptive standard-deviation updating algorithm. Starting from a small value, the standard deviation σ was increased by Δσ if the confidence measure did not reach the threshold. The support size was then adjusted according to the standard deviation σ. This process was repeated until the confidence measure reached the desired value or a set number of iterations was reached. In this paper, an efficient and accurate algorithm for updating the standard deviation is proposed. It differs from the adaptive algorithm in [3] in four major aspects: 1) it uses a different confidence measure; 2) it has a fixed support size; 3) it uses a better initial value; 4) it uses a different searching strategy.
The first aspect was discussed above in Section 2.2. The support size is fixed because changing the support size usually requires changes in the hardware structure as opposed to changing weights. During the weighting process, the mask is moved sequentially along a certain direction, say from left to right and then from top to bottom. Two adjacent pixels have most of the masked region overlapped. We can expect that the brightness patterns of the two masked regions are very similar. Therefore, instead of using one initial value for every pixel, we use the standard deviation σ f ( x − 1, y ) resolved from the previous pixel to initialize the standard deviation σ 1 ( x, y ) of the current pixel. The standard deviation is set to zero for the first pixel of each row. There are two advantages of this method: 1) It dramatically decreases the number of iterations; 2) It increases the dynamic range of the standard deviation and can handle signals in a wider range. The pseudocode of the algorithm used for searching and for building the tensor for pixel (x, y) is shown in Fig. 1. The threshold for cost function is t c and the iteration limit is t iter , σ i ( x, y ) is the standard deviation for pixel (x, y) at the ith iteration. By using this scheme, for each iteration a better initial value is given, the distance to the ideal standard deviation is shortened, and the number of iterations is decreased. The tensor can be computed using (1) or the method introduced in the subsection 2.4. There are two modes for updating the standard deviation; increasing mode and decreasing mode. One way to decide which mode should be taken is to compute tensors and cost functions using both modes and choose the one with smaller cost. The iteration will be terminated under one of three conditions: 1) The cost decreases below a pre-set threshold; 2) A maximum iteration limit is reached; 3) The cost increases compared to the last iteration.
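A hedged sketch of this search is given below: the cost of equation 5 plus the three termination conditions. The build_tensor callback is a hypothetical stand-in for the ring-based tensor computation of Section 2.4, and the mode-selection probe is a simplification of the authors' rule.

import numpy as np

def tensor_cost(T):
    """Cost c_t of equation 5: minimum of v~^T T v~ normalized by trace(T)."""
    T_tilde, t_vec = T[:2, :2], T[:2, 2]
    return (T[2, 2] - t_vec @ np.linalg.solve(T_tilde, t_vec)) / np.trace(T)

def adaptive_sigma(build_tensor, sigma0, d_sigma=0.5, t_c=1e-3, t_iter=10,
                   s_min=1.0, s_max=8.0):
    """Adjust sigma (Fig. 1) until c_t drops below t_c or iterations run out."""
    sigma = float(np.clip(sigma0, s_min, s_max))
    cost = tensor_cost(build_tensor(sigma))
    if cost < t_c:
        return sigma
    # Choose the mode whose first step lowers the cost more.
    step = d_sigma if tensor_cost(build_tensor(sigma + d_sigma)) <= \
                      tensor_cost(build_tensor(sigma - d_sigma)) else -d_sigma
    for _ in range(t_iter):
        new_sigma = float(np.clip(sigma + step, s_min, s_max))
        new_cost = tensor_cost(build_tensor(new_sigma))
        if new_cost >= cost:          # cost increased: keep previous sigma
            break
        sigma, cost = new_sigma, new_cost
        if cost < t_c:
            break
    return sigma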
2.4 Tensor Computation

If the weighting process in (1) is directly implemented in hardware, then for a (2n+1)×(2n+1) mask all (2n+1)^2 values in the mask need to be stored in hardware registers at each iteration in order to compute the tensor in the next iteration. With an increase of mask size, the number of hardware registers increases quadratically. We propose a method to implement the weighting process efficiently. First, the weighting mask is divided into concentric rings centered at the center of the mask. The weights on each ring are similar because their distances to the center of the mask are roughly the same. Therefore, the parameters c_{j,k} and outer products O_{j,k} of the averaged gradient on the jth ring R_j can be replaced by \bar{c}_j and \bar{O}_j as shown below.
T = \sum_i c_i O_i = \sum_j \sum_k c_{j,k} O_{j,k} \approx \sum_j \Big( \bar{c}_j \sum_k O_{j,k} \Big) = \sum_j m_j \bar{c}_j \bar{O}_j \qquad (6)

where m_j is the number of the weights on ring R_j.
initialize σ_1(x, y);
compute tensor T_1 using σ_1(x, y);
compute c_t as in (5);
if c_t is smaller than t_c
    set the final standard deviation σ_f(x, y) = σ_1(x, y);
    return T_1;
end if;
judge which mode will be taken;
for i = 1 : t_iter
    if it is increasing mode
        σ_{i+1}(x, y) = σ_i(x, y) + Δσ;
    else
        σ_{i+1}(x, y) = σ_i(x, y) − Δσ;
    end if;
    compute tensor T_{i+1} using σ_{i+1}(x, y);
    compute c_t as in (5);
    if c_t is smaller than t_c
        σ_f(x, y) = σ_{i+1}(x, y);
        return T_{i+1};
    else if c_t is increasing
        σ_f(x, y) = σ_i(x, y);
        return T_i;
    end if;
end for;
σ_f(x, y) = σ_{i+1}(x, y);
return T_{i+1};

Fig. 1. Algorithm Pseudocode
A (2n+1)×(2n+1) weighting mask can thus be divided into 2n rings. If we denote by c(x, y) the parameter at (x, y) on the mask and by S the summation of the absolute values of the x and y coordinates (which has a range of [0, 2n]), the division into rings can be formulated as follows:
R_1 = \{c(x, y) : S(x, y) = 0\} \qquad (7)

R_j = \{c(x, y) : S(x, y) = j - 1,\; x = 0 \text{ or } y = 0\} \cup \{c(x, y) : S(x, y) = j,\; x \neq 0, y \neq 0\},
\quad j = 2, 3, \ldots, 2n; \quad x, y \in [-n, n]; \quad x, y \in \mathbb{Z} \qquad (8)

D_j = \{(x, y) : c(x, y) \in R_j\}, \quad j = 1, 2, \ldots, 2n \qquad (9)
\bar{c}_j is calculated as

\bar{c}_j(\sigma) = \frac{1}{2\pi\sigma}\, e^{-\frac{d^2}{2\sigma^2}}, \quad \text{where } d^2 = \frac{1}{m_j} \sum_{(x,y)\in D_j} (x^2 + y^2) \qquad (10)
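The sketch below illustrates this ring construction (equations 7-9) and the per-ring weights of equation 10, keeping the 1/(2πσ) normalization of the reconstructed formula; it is an illustration, not the hardware implementation.

import numpy as np

def ring_decomposition(n):
    """Partition a (2n+1)x(2n+1) mask into the 2n rings of equations 7-9.

    Returns a label image where value j marks ring R_j (j = 1..2n).
    """
    ys, xs = np.mgrid[-n:n + 1, -n:n + 1]
    s = np.abs(xs) + np.abs(ys)
    # Points on the axes (x = 0 or y = 0) with S = j-1 join ring R_j;
    # off-axis points with S = j join ring R_j.
    return np.where((xs == 0) | (ys == 0), s + 1, s)

def ring_weights(ring, sigma):
    """Per-ring Gaussian weights c_j(sigma) of equation 10."""
    n = ring.shape[0] // 2
    ys, xs = np.mgrid[-n:n + 1, -n:n + 1]
    r2 = xs ** 2 + ys ** 2
    weights = {}
    for j in range(1, 2 * n + 1):
        d2 = r2[ring == j].mean()          # (1/m_j) * sum of (x^2 + y^2)
        weights[j] = np.exp(-d2 / (2 * sigma ** 2)) / (2 * np.pi * sigma)
    return weights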
The distribution of rings of a 7×7 mask is shown in Fig. 2 as an example. Values of S are shown in each grid cell of the image, and locations belonging to the same ring are connected by a dashed line in the figure. When the standard deviation is changed from σ_1 to σ_2, a new tensor can be calculated as

T = \sum_j \frac{\bar{c}_j(\sigma_2)}{\bar{c}_j(\sigma_1)}\, m_j \bar{O}_j \qquad (11)
As a result of using this simplified tensor computation, the required number of hardware registers increases linearly instead of quadratically with increasing mask size. In this paper, spatial smoothing instead of spatio-temporal smoothing is used due to the difficulty of implementing temporal smoothing in hardware. Nevertheless, this computation method works for temporal smoothing as well. Although the accuracy of the ring approximation decreases as the mask size increases, acceptable accuracy is obtained with the current settings as shown in the experiment. Using a large mask size is not necessary.
Fig. 2. The distribution of rings of a 7 by 7 mask
3 Experimental Results

The proposed algorithm was simulated in MATLAB to evaluate its performance. The algorithm was tested on the Yosemite sequence and the Flower Garden sequence. The results for the Yosemite sequence are shown in Table 1. The first weighting mask in the proposed algorithm is a 5×5 mask whose parameters are ones, and
Δσ = 0.5, t_c = 0.001, and t_iter = 10. Two limits, σ_max = 8 and σ_min = 1, were set for the standard deviation σ to make sure it is within a reasonable range.

Table 1. Experiment data on the Yosemite sequence

n    ACUR1     ACUR2     ACUR3     ACUR4      AVG ITER1    AVG ITER2
2    11.73°    11.73°    11.89°    22.34°     2.31         4.12
3     9.12°     9.12°     9.13°    22.69°     2.08         4.04
4     7.65°     7.65°     7.56°    22.93°     1.85         3.97
5     6.75°     6.77°     6.66°    23.22°     1.66         3.90
The accuracy of these results is measured in angular error, as shown in columns 2-5, where n is the half-size of the second mask (column 1). ACUR1 is the accuracy of the proposed algorithm, and ACUR2 is the accuracy of the same algorithm without the simplified tensor computation. ACUR3 is the highest accuracy obtained in the work of [6] by tweaking the different standard deviations manually, and ACUR4 is the lowest accuracy of that algorithm. AVG ITER1 represents the average iterations per pixel using the proposed algorithm. AVG ITER2 represents the average iterations per pixel using the searching scheme in [3]. The proposed algorithm uses a simplified tensor computation method which reduces the number of hardware registers required for implementation. Comparing the accuracy with (ACUR1) and without (ACUR2) the simplified tensor computation method, we conclude that the proposed simplification does not affect the accuracy. ACUR3 and ACUR4 are obtained by setting the first weighting mask in the algorithm of [6] the same as that in the proposed algorithm and increasing the standard deviation for the second weighting process from 0.5 to 8 in steps of 0.5. ACUR1 is very close to the highest accuracy (ACUR3) and much better than the lowest (ACUR4). This demonstrates the effectiveness of the proposed weight-searching strategy. The average number of iterations required for the proposed searching strategy is compared to that of [3] using AVG ITER1 and AVG ITER2, respectively. The 8th frame of the Yosemite sequence and the 10th frame of the Garden sequence, together with their optical flow fields obtained using the proposed algorithm, are shown in Fig. 3 (n = 5 and the other settings are the same as above).
4 Conclusions

Using the proposed adaptive algorithm, the optimal parameters for the weighting process can be determined. The accuracy is close to, or even better than, the best results obtained by manually tweaking the parameters. Also, using the proposed efficient tensor computation method, the number of iterations required is reduced by around 2×, and the computation can be better pipelined in hardware. Importantly, this optimization is also effective in software implementations of optical flow. Our next step is to implement the proposed algorithm on an FPGA.
Fig. 3. Image sequences and optical flow fields
Acknowledgment This work was supported in part by David and Deborah Huber.
References 1. Farnebäck, G.: Very high accuracy velocity estimation using orientation tensors, parametric motion, and simultaneous segmentation of the motion field. In: Proc. ICCV, vol. 1, pp. 77–80 (2001) 2. Farnebäck, G.: Fast and accurate motion estimation using orientation tensors and parametric motion models. In: Proc. ICPR., vol. 1, pp. 135–139 (2000) 3. Liu, H., Chellappa, R., Rosenfeld, A.: Accurate dense optical flow estimation using adaptive structure tensors and a parametric model. IEEE Trans. Image Processing 12, 1170– 1180 (2003) 4. Haussecker, H., Spies, H.: Handbook of Computer Vision and Application, vol. 2, ch. 13, Academic, New York (1999) 5. Wang, H., Ma, K.: Structure tensor-based motion field classification and optical flow estimation. In: Proc. ICICS-PCM, vol. 1, pp. 66–70 (2003) 6. Previous publication hidden for blind review 7. Correia, M., Campilho, A.: Real-time implementation of an optical flow algorithm. In: Proc. ICIP, vol. 4, pp. 247–250 (2002) 8. Zuloaga, A., Martín, J.L, Ezquerra, J.: Hardware architecture for optical flow estimation in real time. In: Proc. ICIP, vol. 3, pp. 972–976 (1998) 9. Martín, J.L., Zuloaga, A., Cuadrado, C., Lázaro, J., Bidarte, U.: Hardware implementation of optical flow constraint equation using FPGAs. In: Computer Vision and Image Understanding, vol. 98, pp. 462–490 (2005)
10. Díaz, J., Ros, E., Pelayo, F., Ortigosa, E.M., Mota, S.: FPGA-based real-time optical-flow system. IEEE Trans. Circuits and Systems for Video Technology 16(2), 274–279 (2006) 11. Middendorf, M., Nagel, H.–H.: Estimation and interpretation of discontinuities in optical flow fields. In: Proc. ICCV, vol. 1, pp. 178–183 (2001) 12. Kühne, G., Weickert, J., Schuster, O., Richter, S.: A tensor-driven active contour model for moving object segmentation. In: Proc. ICIP, vol. 2, pp. 73–76 (2001)
Image Segmentation That Optimizes Global Homogeneity in a Variational Framework

Wei Wang and Ronald Chung

Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong, China {wangwei,rchung}@mae.cuhk.edu.hk
Abstract. A two-phase segmentation mechanism is described that allows a global homogeneity-related measure to be optimized in a level-set formulation. The mechanism has uniform treatment toward texture, gray level, and color boundaries. Intensities or colors of the image are first coarsely quantized into a number of classes. Then a class map is formed by having each pixel labeled with the class identity its gray or color level is associated with. With this class map, for any segmented region, it can be determined which pixels inside the region belong to which classes, and it can even be calculated how spread-out each of such classes is inside the region. The average spread-size of the classes in the region, in comparison with the size of the region, then constitutes a good measure in evaluating how homogeneous the region is. With the measure, the segmentation problem can be formulated as the optimization of the average homogeneity of the segmented regions. This work contributes chiefly by expressing the above optimization functional in such a way that allows it to be encoded in a variational formulation and that the solution can be reached by the deformation of an active contour. In addition, to solve the problem of multiple optima, this work incorporates an additional geodesic term into the functional of the optimization to maintain the active contour’s mobility at even adverse condition of the deformation process. Experimental results on synthetic and real images are presented to demonstrate the performance of the mechanism.
1 Introduction
Image segmentation is an important problem in image processing and computer vision, with applications encompassing pattern recognition, image understanding, and data transmission. It is to divide an image into disjoint collections of pixels so that each collection represents a surface or object. Much of the previous work is based on the postulate that pixels in the same collection are more or less homogeneous in their brightness or texture information, and pixels from adjacent collections are not.
Approaches for gray level or color image segmentation could be categorized as boundary based [1], [2] and region based [3], [4], [5], [6], [7]. In general, methods such as [4], [7], which are based upon region features, or more precisely the statistical characteristics of the intensity distribution in the image, are more robust than the boundary based ones. This is because a region by nature must have a closed boundary, whereas bridging isolated edge elements in the boundary based methods to form closed boundaries is often a nontrivial task, and it is especially so if the image is of weak contrast. However, in the region-based approach, if a split-and-merge or similar mechanism that works pixel-by-pixel is used for separating the regions, the resultant region boundaries are often shaggy due to the local nature of decision-making in the mechanism. Another problem of the split-and-merge or similar mechanism is that, because of the absence of the notion of boundary, introducing explicit encouragement of smoothness in the segmentation boundary is difficult. Co-presence of texture regions in the image adds further complexity to the segmentation problem. Either specific methods for texture segmentation [8], [9] are designed to augment the original process, or specific discriminants [4], [10] to describe the texture's characteristics are needed. For images with both gray level (including color) regions and texture regions, multi-channel information [8], [6] or features derived from Gabor filters [9], etc., are generally required to realize segmentation. To our knowledge, Deng's work [11] is one of the few that address uniform treatment of gray level, color, and texture segmentation. The work describes a segmentation method that is based on intensity (gray level or color vector) classes, but can also handle texture regions with no different treatment. Intensities or colors of the image are first quantized into a few major classes, and each pixel is labeled with the class number its intensity or color is associated with. A homogeneity-related measure, which is about the ratio of the region size to the average class spread, is then defined. By the use of a mechanism that minimizes the measure, the image can be divided into segments that contain uniformly distributed classes. In [11], the measure is used first to outline points that are located around the center of the desired regions, and some seed regions are formed around those points. The regions are then grown and merged under a heuristic mechanism that aims at minimizing the above measure along the way. Definable for the entire image domain, the homogeneity-related measure is however only used in Deng's work as a local measure in a split-and-merge-like mechanism. More precisely, the measure is used as a local operator to indicate the possibility of a pixel being on the boundary of a region or in the middle of it. Seed regions' selection, region growing, region merging, and the design of the stopping condition constitute the core of the mechanism. There is no encouragement of smoothness in the segmentation boundary, nor is the mechanism formulated in an analytical fashion. In this work, we propose a mechanism that allows a global form of the homogeneity-related measure to be optimized in a variational framework. More precisely, the measure is optimized under the level set formulation, as the
deformation of an active contour in the image domain. The measure is defined not locally but upon the entire image, thus having the heuristic part of the design greatly simplified. The mechanism inherits the property of having uniform treatment toward gray level, color, and texture regions in an image. Ever since the level set method was introduced by Osher and Sethian [12], it has been widely used in active contour based image segmentation schemes [2], [7], [3], [4], [5], [8] for its many advantages, including flexibility and ease of change of region topology at any stage of the boundary evolution process. In particular, it allows a smoothness constraint to be introduced explicitly on the segmentation boundaries. The variational mechanism could also simplify the evolution process by having fewer parameters to deal with. As a first attempt to directly minimize the homogeneity-related measure in a global way, this work is restricted to two-phase segmentation (segmentation of foreground and background in an image) only. In such a case, only one level set function needs to be employed. The measure's minimization is tackled by solving the corresponding Euler-Lagrange function, and realized by iteratively evolving the related PDEs to the steady state. In addition, to deal with the problem of multiple minima, we introduce a geodesic term into the functional of the minimization. The term is to maintain mobility of the active contour even under adverse initialization from the user or adverse conditions of the evolution process. The proposed solution has the following assumptions: segments in the image contain uniformly distributed classes, and the two phases can be separated by distinguishable patterns formed by the classes. The rest of the paper is organized as follows. In Section 2, the homogeneity-related measure adopted in this work is outlined. In Section 3, we show how the measure can be expressed in the level set formulation for minimization. In Section 4, experimental results with performance comparison with those of other mechanisms are presented. In Section 5 we draw the conclusion and indicate possible future work.
2 Segmentation Criterion
As in the work of Deng et al. [11], intensities or colors of the input image are first coarsely quantized into a few classes. For a gray level image, the intensity values are normalized according to approximate prior knowledge of the number of classes in the image. Then each pixel is labeled with the nearest class number its gray level or color is associated with, forming a class map which is of the same size as the original image. For color images, a more sophisticated method can be used, like that in [13]. In our experiments, generally 10∼20 classes were used to process each image. With the above process, a class map is attained that has each pixel attached with a particular class label. The class map then replaces the original image in the subsequent segmentation process. Notice that it is not required that the labeling be perfect; the subsequent segmentation process has tolerance toward incorrect labels that are often inevitable in a step so preliminary.
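As a rough sketch of this quantization step for the gray-level case (the class count and the uniform spacing are illustrative assumptions, not the paper's exact procedure):

import numpy as np

def class_map(gray, n_classes=16):
    """Coarsely quantize a gray-level image into class labels (a sketch).

    Each pixel is labeled with the nearest of n_classes evenly spaced
    intensity levels, producing the class map used by the criterion below.
    Color images would instead use a quantizer such as the one in [13].
    """
    gray = np.asarray(gray, dtype=float)
    lo, hi = gray.min(), gray.max()
    norm = (gray - lo) / max(hi - lo, 1e-12)          # normalize to [0, 1]
    labels = np.rint(norm * (n_classes - 1)).astype(int)
    return labels                                     # values in 0..n_classes-1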
The segmentation criterion, or the optimization measure, is then defined over the class map. Below we first briefly review the criterion J̄ used in [11].

2.1 Deng's Criterion
Let Z = {z : z = (x, y)} be the set of all N pixels in the class map, and let m be the center of the set, that is, m = (1/N) Σ_{z∈Z} z. Suppose Z has been labeled as consisting of n classes {Z_p : p = 1, ..., n}, and m_p is the center of the N_p data points in class Z_p, that is, m_p = (1/N_p) Σ_{z∈Z_p} z. Notice that the n class-groups are not necessarily disjoint in the image domain: there could be significant overlap between some of them. If the image is rather homogeneous (e.g., like a checker board), all class-groups (i.e., the two classes of black pixels and white pixels respectively) span more or less the entire image; otherwise the class-groups have less overlap. Then two moments S_T and S_W can be defined as

S_T = \sum_{z\in Z} \|z - m\|^2 \quad \text{and} \quad S_W = \sum_{p=1}^{n} \sum_{z\in Z_p} \|z - m_p\|^2.

While S_T describes the spread of the image from the image center, or equivalently speaking the size of the image, S_W describes the average spread of the class groups. Again, for a homogeneous image, S_T and S_W are of about the same value; for an inhomogeneous image, S_W is generally much smaller than S_T.

A measure J can then be defined as J = (S_T − S_W)/S_W, which compares the image size with the average class spread. A small J value indicates a homogeneous distribution of classes in the class map, and a large J value indicates that the class-groups are distinctly separated. Now suppose the class map is segmented into a number of regions (generally each region could contain several classes, and the members of one class could be distributed over multiple regions). Denote the J value on each region k as J_k (in other words, J is calculated over each region instead of the whole map), and define J̄ as the weighted average of all the J_k's: J̄ = (1/N) Σ_k M_k J_k, where M_k is the number of pixels in region k. Generally a good segmentation that comprises homogeneous regions will have a small J̄ value.
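For concreteness, here is a minimal sketch of how J̄ could be computed from a class map and a candidate segmentation (array names and layout are illustrative):

import numpy as np

def j_bar(class_map, seg_labels):
    """Weighted average inhomogeneity J-bar of a segmentation (a sketch).

    class_map and seg_labels are integer arrays of the same shape; for each
    segmented region, J = (S_T - S_W) / S_W is computed from the spatial
    spread of the whole region (S_T) and of its classes (S_W).
    """
    ys, xs = np.indices(class_map.shape)
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    cmap, seg = class_map.ravel(), seg_labels.ravel()
    N = coords.shape[0]
    j_sum = 0.0
    for k in np.unique(seg):
        in_k = seg == k
        zk = coords[in_k]
        s_t = ((zk - zk.mean(axis=0)) ** 2).sum()
        s_w = 0.0
        for p in np.unique(cmap[in_k]):
            zp = zk[cmap[in_k] == p]
            s_w += ((zp - zp.mean(axis=0)) ** 2).sum()
        if s_w > 0:
            j_sum += in_k.sum() * (s_t - s_w) / s_w
    return j_sum / N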
2.2 Our Criterion
J̄ can be used as a measure to drive the segmentation process. We shall refer to it as the inhomogeneity measure, as the higher its value is, the more inhomogeneous the segmented regions are, and we desire a segmentation with a small value of it. However, it is difficult to minimize J̄ globally over the entire image domain, and in the work [11] it is used only as a local operator. In contrast, we prefer a more global way of utilizing J̄. Specifically, we derive J̄'s first variation and adopt a gradient descent method to find the minimum of the measure. As a first effort in this direction, this work sets the number of segments to two and is restricted to the two-phase segmentation problem. In such a case, only one level set function is needed to embed all segmentation boundaries.
Let φ(z, t) represent the level set function defined on the class map Z with time parameter t. The zero level contour separates the class map into interior and exterior regions. The J̄ equation can then be reformulated as

J̄(φ(z, t)) = \frac{M_i}{N} J_i(φ(z, t)) + \frac{M_o}{N} J_o(φ(z, t)),

where M_i is the number of pixels in the interior region and M_o is the number of pixels in the exterior region. The J value defined on the interior region is J_i = (S_{T_i} − S_{W_i})/S_{W_i}, and the J value defined on the exterior region is J_o = (S_{T_o} − S_{W_o})/S_{W_o}.

Straight minimization of the global inhomogeneity measure does not always derive the optimal segmentation; there is the notion of multiple local minima. In particular, a region with a radially symmetric distribution of classes could also have a small J̄, resulting in immobility of the active contour. Fig. 1 shows an example. The pixels in the image can be readily clustered into three classes by intensity values, and a segmentation result as shown in Fig. 1(c) is obviously preferred. Under such a segmentation, the J̄ value will be nearly zero, which is a global minimum of the measure J̄. However, if the active contour moves to a location such as that shown in (b), the J̄ value will also be nearly zero. This indicates the multiple-minimum property of the measure J̄.
Fig. 1. Two different segmentation results on image (a) are shown in (b) and (c), both with small J̄ value
To tackle the problem of multiple minima, we introduce a geodesic measure L_R + νA_R [2] into the functional of the optimization process, where ν is a weight parameter. The first term L_R measures the weighted contour length, and the second term A_R measures the weighted area of the interior region enclosed by the contour. Details of the terms will be elaborated in the next section. With this additional term to minimize, our mechanism encourages not only homogeneous regions in the segmentation result, but also compact regions, i.e., region boundaries that preferably fall on edges in the class map. In summary, our mechanism minimizes the measure J̄ + μ(L_R + νA_R) in the level-set formulated segmentation process, where μ is a weight parameter.
3 Curve Evolution Scheme
Assume that there are n class-groups Z_p (p = 1, ..., n) in the class map Z that is defined on the image domain Ω. A level set function φ(z, t) is defined on Ω, in which z represents a point (x, y) in Ω and t is the time parameter. The two
phases in Ω are separated by the zero level contour of the function φ. With a derivation similar to that in the work [4], we minimize the functional J̄ with respect to t using the evolution equation derived from the function φ.

To indicate the interior and exterior regions separated by the zero level contour, a Heaviside function H is defined on the function φ: H(φ) = 1 if φ ≥ 0, and H(φ) = 0 if φ < 0. Its derivative is defined by the one-dimensional Dirac measure δ(φ) = dH(φ)/dφ. Also, we define H_p on each class Z_p (p = 1, 2, ..., n) as: H_p(z) = 1 if z ∈ Z_p, and H_p(z) = 0 if z ∉ Z_p. Then we can express J_i and J_o as:

    J_i = S_Ti/S_Wi − 1 = [ ∫_Ω ‖z − m_i‖² H(φ) dz ] / [ Σ_{p=1..n} ∫_Ω ‖z − m_pi‖² H(φ) H_p(z) dz ] − 1    (1)

    J_o = S_To/S_Wo − 1 = [ ∫_Ω ‖z − m_o‖² (1 − H(φ)) dz ] / [ Σ_{p=1..n} ∫_Ω ‖z − m_po‖² (1 − H(φ)) H_p(z) dz ] − 1    (2)

where m_i is the center of the interior region, m_o the center of the exterior region, m_pi the center of the members of class Z_p in the interior region, and m_po the center of the members of class Z_p in the exterior region. In addition, J̄ can be expressed as

    J̄(φ) = (∫_Ω H(φ) dz / N) J_i(φ) + (1 − ∫_Ω H(φ) dz / N) J_o(φ)    (3)

It can be seen that J̄ is a functional defined on m_i, m_o, the m_pi's, the m_po's (expressed as m's later) and φ. First, keeping φ fixed and minimizing J̄ with respect to the m's, we can derive that:

    m_i = ∫_Ω z H(φ) dz / ∫_Ω H(φ) dz,    m_o = ∫_Ω z (1 − H(φ)) dz / ∫_Ω (1 − H(φ)) dz,
    m_pi = ∫_Ω z H(φ) H_p(z) dz / ∫_Ω H(φ) H_p(z) dz,    m_po = ∫_Ω z (1 − H(φ)) H_p(z) dz / ∫_Ω (1 − H(φ)) H_p(z) dz,    (p = 1, 2, ..., n).

Then, from the Euler-Lagrange equation of the functional (3), we can derive the gradient flow for J̄:

    ∂φ/∂t = −δ(φ) [ (∫_Ω H(φ) dz / N) (S_Ti/S_Wi) ( ‖z − m_i‖²/S_Ti − Σ_{p=1..n} ‖z − m_pi‖² H_p(z) / S_Wi )
                    + (1 − ∫_Ω H(φ) dz / N) (S_To/S_Wo) ( ‖z − m_o‖²/S_To − Σ_{p=1..n} ‖z − m_po‖² H_p(z) / S_Wo )
                    + (1/N)(J_i − J_o) ]    (4)

As to the geodesic measure L_R + νA_R, which is equal to ∫_Ω g δ(φ)|∇φ| + ν ∫_Ω g H(φ) in the level set formulation (g is the edge indicator function defined on the J image), its gradient flow is

    ∂φ/∂t = δ(φ) [ div( g ∇φ/|∇φ| ) − ν g ]    (5)
with boundary condition ∂φ/∂n = 0, where n denotes the normal to the image boundary ∂Ω, and ν is a constant used to speed up the contour's convergence.

Therefore, the combined flow which minimizes the functional J̄ + μ(L_R + νA_R) is the μ-weighted summation of Equations (4) and (5). It can be observed that there are only two weight parameters, μ and ν. Thus, through the variational formulation the rather complex design in the heuristic approach can be much simplified.

In the numerical implementation, the general form of the edge indicator function g [2] on image I is g = 1 / (1 + |∇G_σ * I|^p), (p ≥ 1). In this work, to allow texture structure to be processed under the same framework, we use the J value instead of the gradient magnitude |∇G_σ * I| in the g function. The J value is calculated at each pixel in the class map Z over a circular neighborhood of diameter 9 pixels, by the equation J = (S_T − S_W)/S_W. The larger the J value, the more likely the pixel is near a region boundary.
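For illustration, the following is a schematic NumPy sketch of one descent step on J̄ + μ(L_R + νA_R). The smoothed Heaviside/Dirac approximations, the array names and the way the class map is supplied (one indicator image per class) are assumptions of this sketch rather than details given by the authors.

import numpy as np

def evolve_once(phi, coords, class_onehot, g, mu=0.1, nu=0.1, dt=0.1, eps=1.0):
    # One schematic descent step on J_bar + mu*(L_R + nu*A_R).
    # phi: level set, coords: (rows, cols, 2) pixel positions,
    # class_onehot: (n, rows, cols) indicators H_p, g: edge indicator image.
    H = 0.5 * (1 + (2 / np.pi) * np.arctan(phi / eps))    # smoothed Heaviside
    delta = (eps / np.pi) / (eps ** 2 + phi ** 2)         # smoothed Dirac measure
    N = phi.size

    def center_and_spread(w):
        m = (coords * w[..., None]).sum((0, 1)) / (w.sum() + 1e-12)
        return m, (((coords - m) ** 2).sum(-1) * w).sum()

    def region_terms(w):
        m, s_t = center_and_spread(w)
        s_w, per_class = 1e-12, 0.0
        for hp in class_onehot:
            mp, sp = center_and_spread(w * hp)
            s_w += sp
            per_class = per_class + ((coords - mp) ** 2).sum(-1) * hp
        J = s_t / s_w - 1
        var = (s_t / s_w) * (((coords - m) ** 2).sum(-1) / s_t - per_class / s_w)
        return J, var                                     # pointwise part of eq. (4)

    Ji, vi = region_terms(H)
    Jo, vo = region_terms(1 - H)
    a = H.sum() / N
    dJ = -(a * vi + (1 - a) * vo + (Ji - Jo) / N)         # gradient flow of J_bar

    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-12
    div = np.gradient(g * gx / norm, axis=1) + np.gradient(g * gy / norm, axis=0)
    return phi + dt * delta * (dJ + mu * (div - nu * g))  # eq. (4) plus mu * eq. (5)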
4 Experimental Results
We present experimental results on synthetic and real images to illustrate the performance of the proposed method. In all examples, we set μ = 0.1, ν = 0.1, Δt = 0.1. The initial contour was set as a rectangle as shown in Fig. 2(a), and the level set function φ was re-initialized every 20 iterations. In fact, the example in Fig. 1 already shows one set of experimental results: Fig. 1(b) shows the initial location of the contour, and (c) shows the final location of the contour.
Fig. 2(a) shows an image consisting of a textured region and a homogeneous gray region. The white rectangle indicates the location of the initial zero level contour of the function φ. Fig. 2(b) shows the final contour derived by our method with class number n = 10. In comparison with the result derived by the JSEG method [11] as shown in (c), our method derives a more accurate and smoother contour. Experiments under the setting μ = 0 and ν = 0 also derived the same segmentation result as in (b), showing that the geodesic measure is not always needed in our method. Experiments nonetheless show that the presence of the geodesic term did not have a negative impact on the result. Fig. 2(d) shows the result derived for the corrupted image with SNR = 40 dB and n = 10; (e) shows the result derived for the same corrupted image but with the class number n increased to 20. Fig. 2(f) shows the result derived for the corrupted image with SNR = 26 dB and n = 20. The results show that the proposed mechanism is robust to the increase of noise, and that the same result can be obtained over a rather wide range of the preset number of classes.
Fig. 3 shows an image consisting of two textured regions with different patterns. Shown in Fig. 3(a) is the original image with the initial contour superimposed. The result of our method is shown in (b). In comparison with the result of the JSEG method as shown in (c), the boundary from our method is of higher accuracy and smoothness.
Fig. 4 illustrates how the segmentation algorithm performed on two images that comprise Brodatz textures [14]. For the images shown in Fig. 4, the class number was set to n = 10. The first column shows the original images, the second
Fig. 2. Segmentation result for a synthetic image from the proposed method: (a) initial contour, (b) final contour, (c) result derived from the JSEG method for comparison, (d) final contour (of the proposed method) under SNR = 40 dB, class number n = 10, (e) SNR = 40 dB, n = 20, (f) SNR = 26 dB, n = 20
Fig. 3. Segmentation results for an image of two textured regions. (a) initial contour; (b) final contour by our method; (c) final contour by JSEG method.
Fig. 4. Segmentation results for two images that comprise Brodatz textures. (a),(d): original images. (b),(e): segmentation results by our method. (c),(f): segmentation results by the JSEG method.
column shows the results derived by our method, and the third column shows the results derived by the JSEG method for comparison. It can be seen that our method derives smoother and more precise region boundaries.
Fig. 5 illustrates how the segmentation algorithm performed on a number of real images. For all images shown in Fig. 5, the class number is set as n = 10. The first row shows the original images, the second row shows the results from our method, and the third row shows the results from the JSEG method for comparison. It seems that on the two color images the boundaries from the JSEG method are smoother than ours, but upon close examination and comparison with the original images it can be observed that our method actually derives more precise region boundaries. On the second image, the JSEG method had difficulty in achieving the two-phase segmentation; if the merging parameter was increased further, all segments would be merged together. On the third image, our method again derived a more reasonable segmentation result, although still with certain errors. We ascribe the better performance of our method to the use of the J measure in a more global form.
Fig. 5. Segmentation result on a number of real images. (a),(b),(c): original image. (d),(e),(f): segmentation results by our method. (g),(h),(i): segmentation results by the JSEG method.
5 Conclusion and Future Work
A two-phase segmentation mechanism that allows a global homogeneity-related measure to be optimized in a level-set formulation has been presented. The mechanism has uniform treatment toward texture, gray level, and color segmentations, and allows them to co-exist in the image. By allowing the measure to be minimized under a variational method, the mechanism is made much simpler than the heuristic split-and-merge mechanism. In addition, smoothness can be explicitly encouraged in the segmentation boundary, and the homogeneity-related measure can be applied to the entire image domain and be made global. Future work will be on extending the work to multiple-phase segmentation.
References
1. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: Active contour models. Int'l J. Computer Vision 1(4), 321–331 (1988)
2. Caselles, V., Kimmel, R., Sapiro, G.: Geodesic active contours. Int'l J. Computer Vision 22(1), 61–79 (1997)
3. Samson, C., Blanc-Feraud, L., Aubert, G., Zerubia, J.: A level set model for image classification. Int'l J. Computer Vision 40(3), 187–197 (2000)
4. Chan, T.F., Vese, L.A.: Active contours without edges. IEEE Trans. Image Processing 10, 266–277 (2001)
5. Chan, T.F., Vese, L.A.: A level set algorithm for minimizing the Mumford-Shah functional in image processing. In: Proc. of 1st IEEE Workshop on Variational and Level Set Methods in Computer Vision, pp. 161–168 (2001)
6. Zhu, S.C., Yuille, A.: Region competition: Unifying snakes, region growing, and Bayes/MDL for multiband image segmentation. IEEE Trans. Pattern Analysis and Machine Intelligence 18(9), 884–900 (1996)
7. Yezzi, A., Tsai, A., Willsky, A.: A statistical approach to snakes for bimodal and trimodal imagery. In: IEEE Int'l Conf. Computer Vision-II, pp. 898–903 (1999)
8. Chan, T.F., Sandberg, B.Y., Vese, L.A.: Active contours without edges for vector-valued images. J. Visual Communication and Image Representation 11(2), 130–141 (2000)
9. Sagiv, C., Sochen, N.A., Zeevi, Y.Y.: Integrated active contours for texture segmentation. IEEE Trans. Image Processing 1(1), 1–19 (2004)
10. Sumengen, B., Manjunath, B.: Graph partitioning active contours for image segmentation. IEEE Trans. Pattern Analysis and Machine Intelligence 28(4), 509–521 (2006)
11. Deng, Y., Manjunath, B., Shin, H.: Color image segmentation. In: IEEE Comput. Soc. Conf. Computer Vision and Pattern Recognition-II, pp. 23–25 (1999)
12. Osher, S., Sethian, J.: Fronts propagating with curvature dependent speed: Algorithms based on Hamilton-Jacobi formulations. J. Comput. Phys. 79, 12–49 (1988)
13. Deng, Y., Kenney, C., Moore, M.S., Manjunath, B.: Peer group filtering and perceptual color image quantization. In: IEEE Int'l Symposium on Circuits and Systems-IV, pp. 21–24 (1999)
14. Brodatz, P.: Textures: A Photographic Album for Artists and Designers. Dover, New York (1966)
Image and Volume Segmentation by Water Flow
Xin U. Liu and Mark S. Nixon
ISIS group, School of ECS, University of Southampton, Southampton, UK
Abstract. A general framework for image segmentation is presented in this paper, based on the paradigm of water flow. The major water flow attributes like water pressure, surface tension and capillary force are defined in the context of force field generation and make the model adaptable to topological and geometrical changes. A flow-stopping image functional combining edge- and region-based forces is introduced to produce capability for both range and accuracy. The method is assessed qualitatively and quantitatively on synthetic and natural images. It is shown that the new approach can segment objects with complex shapes or weak-contrasted boundaries, and has good immunity to noise. The operator is also extended to 3-D, and is successfully applied to medical volume segmentation.
1 Introduction

Image segmentation is a fundamental task. For example, in retinal images, vessel structures can provide useful information like vessel width, tortuosity and abnormal branching, which are helpful in medical diagnoses. However, natural images often comprise topologically and/or geometrically complex shapes, like the vessels. The complexity and variability of features, together with image imperfections such as intensity inhomogeneities and imaging noise which cause the boundaries of the considered features to be discontinuous or indistinct, make the task very challenging.
Many methods have been proposed in medical image segmentation. Active contours or snakes [1] are one of the most powerful established techniques. An active contour is essentially a parameterized curve which evolves from an initial position to the object's boundary so that a specified energy functional can be minimized. The methods achieve desirable features including inherent connectivity and smoothness that counteract object boundary irregularities and image noise, so they provide an attractive solution to image segmentation. However, there are still many limitations. Classical parametric snakes use edge information and need good initialization for a correct convergence. Moreover, they cannot handle topological and geometrical changes like object splitting or merging and boundary concavities.
Many methods have been proposed to overcome these problems. Balloon models [2], distance potentials [3], and gradient vector flow (GVF) [4] have been developed to solve the problems of initialization and concave boundary detection. Snake energy functionals using region statistics or likelihood information have also been proposed [5, 6]. A common premise is to increase the capture range of the external forces to guide the curve towards the boundaries. For complex topology detection, several authors have proposed adaptive methods like the T-snake [7]
based on repeated sampling of the evolving contour on an affine grid. Geometric active contours [8, 9] have also been developed where the planar curve is represented as a level set of an appropriate 2-D surface. They work on a fixed grid and can automatically handle topological and geometrical changes. However, many methods solve only one problem whilst introducing new difficulties. Balloon models introduce an inflation force so that it can "pull" or "push" the curve to the target boundary, but the force cannot be too strong otherwise "weak" edges would be overwhelmed. Region-based energy can give a large basin of attraction and can converge even when explicit edges do not exist, but it cannot yield as good localization of the contour near the boundaries as can edge-based methods. Level set methods can detect complex shapes well but also increase the complexity since a surface is evolved rather than a curve.
Instead of model-based methods, some authors proposed morphological watershed-based region growing techniques [10, 11]. The approach is based on the fact that smooth surfaces can be decomposed into hills and valleys by studying critical points and their gradient. Considering pixel properties (intensity or gradient) as elevation and then simulating rainfall on the landscape, rain water will flow from areas of high altitude along lines of steepest descent to arrive at some regional minimal height. The catchment basins are defined as the draining areas of its regional minima and their boundaries can then be used in object extraction. Though assuming water collection, the method does not use the features of water itself and focuses on the image's geographical features. The non-linearity arising from issues like finding steepest descent lines between two points makes the method complicated. Moreover, the region growing framework often yields irregular boundaries, over-segmentation and small holes.
Unlike the mathematical models introduced above, we propose a physical model focusing on water itself rather than the landscape of images. Water is chosen because features like fluidity and surface tension can lead to topological adaptability and geometrical flexibility, as well as contour smoothness. We completely redefined the basis of our previous water-flow based segmentation approaches [12, 13] by adopting the force field theory which has been used in feature extraction [14]. The method shows decent segmentation performance in quantitative and qualitative assessments. Further, the nature of the physical analogy makes the working principles and parameters easy and explicit to interpret. The 3-D extension is also more natural and straightforward than mathematical models like T-surfaces [7].
2 Methodology

Water flow is a compromise between several factors: the position of the leading front of a water flow depends on pressure, surface tension, and adhesion (if any). There are some other natural properties like turbulence and viscosity, which are ignored here. Image edges and some other characteristics that can be used to distinguish objects are treated as the "walls" terminating the flow. The final static shape of the water should give the related object's contour.
Some physical principles are first introduced. The flow velocity is determined by the total flow driving force and the flow resistance. The relationship between the flow velocity v, the flow resistance R and the total driving force F is given by:
    v = F_D / (A · R)    (1)
where A is the cross-sectional area of the flowing water and is set to unity here. F_D comprises the pressure, surface tension and adhesion. The flow is mainly driven by the pressure pointing outwards. The surface tension, which is the attractive force between water surface elements, can form a water film to bridge gaps in object boundaries. The adhesion, which is defined as the attractive force from image edges to the water surface, can assist water in flowing inside narrow branches.
For the image analogy, one pixel in the image is considered to be one basic water element. An adaptive water source is assumed at the starting point(s) so that the water can keep flowing until stasis, where flow ceases. The image is now separated into dry and flooded areas by the water. Only elements at water contours are adjacent to dry regions, so only contour elements are of interest in the implementation. The implementation of the flow process for one contour element is shown by the flowchart in figure 1. Applying the same procedure to all the contour elements forms one complete flow iteration.
As shown by figure 1, the flow process is separated into two stages – the acceleration stage and the flow stage. In the first stage, the considered element achieves an initial flow velocity determined by the driving force F_D and resistance R. Then we examine the movements at possible flow directions pointing from the considered contour element toward adjacent dry points one by one. For the direction i, the component velocity scalar v_i is calculated. If v_i > 0, the process then progresses to the next stage, where the element is assumed to be flowing to the dry position related to the direction i and some image force is acting on it. To reconcile the flow velocity with the image force and hence conduct the movement decision process, dynamical formulae are used. The movement decision is made according to the sign of J, given by
    J = m v_i² / 2 + F_i S    (2)
where S and m are defined as the fixed flow distance in one iterative step and the water element mass, respectively. In this equation, F_i is the scalar image force in direction i. It is defined to be positive if consistent with i, and negative if opposite. J ≥ 0 means that the initial kinetic energy exceeds the resistant work produced by F_i over S, and thus the contour element is able to flow to the target position in direction i. The definitions and calculations of the factors and parameters introduced above then need to be clarified. In this paper, the force field theory is embodied into the water flow model to define the flow driving force F_D.
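As an illustration of the decision procedure in Fig. 1, the sketch below processes a single contour element in plain Python. The routines driving_force, resistance and image_force stand in for the definitions of equations (3), (4) and (9) given later, and all names are assumptions of this sketch rather than the authors' implementation.

import numpy as np

def try_flow(element, dry_neighbours, driving_force, resistance, image_force, lam=1.0):
    # Two-stage flow test for one water contour element (schematic).
    F_D = driving_force(element)                 # resultant driving force vector
    v = F_D / resistance(element)                # flow velocity, eq. (1) with A = 1
    accepted = []
    for target in dry_neighbours:                # candidate flow directions
        d = np.asarray(target, float) - np.asarray(element, float)
        d /= np.linalg.norm(d)
        v_i = float(np.dot(v, d))                # component velocity, eq. (5)
        if v_i <= 0:
            continue                             # acceleration stage failed
        F_i = image_force(element, target, d)    # scalar image force along d
        if lam * v_i ** 2 + F_i >= 0:            # simplified energy test, eq. (10)
            accepted.append(target)              # element can flood this dry pixel
    return accepted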
Fig. 1. The flowchart of implementing the flow process for one water contour element
2.1 Force Field, Water Driving Force, and Flow Velocity

In this new water flow model, each water element is treated as a particle exhibiting attraction or repulsion to other ones, depending on whether or not it is on the contour/surface. The image pixels at the dry areas are considered as particles exhibiting attractive forces to water contour elements. Now both the water elements and the dry area image pixels are assumed to be arrays of mutually attracted or repelled particles acting as the sources of Gaussian force fields. Gauss's law is used as a generalization of the inverse square law which defines the gravitational and/or electrostatic force fields. Denoting the mass value of the pixel with position vector r_j as L(r_j), we can define the total attractive force at r_j from other points within the area W as

    F_D(r_j) = Σ_{k∈W, k≠j} L(r_k) (r_j − r_k) / |r_j − r_k|³    (3)
Equation (3) can be directly adopted into the framework of the water flow model, provided the mass values of different kinds of elements are properly defined. Here the magnitude of a water element is set to 1, and that of a dry image pixel is set to be the edge strength at that point (an approximation of the probability that the considered pixel is an edge point). The mass values of water contour elements and image pixels should be set positive, and those of the interior water elements should be negative because equation (3) is for attractive forces.
From equation (1), the flow velocity is inversely proportional to the resistance of the water (the cross-sectional area A has been set to 1). In a physical model, the resistance is decided by the water viscosity, the flow channel, temperature, etc. Since this is an image analogy which offers great freedom in the selection of parameter definitions, we can relate the resistance definition to certain image attributes. For instance, in retinal vessel detection, if the vessels have relatively low intensity, we can define the resistance to be proportional to the intensity of the pixel. Further, if we derive the resistance from the edge information, the process will become adaptive. That is, when the edge response is strong, the resistance should be large and the flow velocity should be weakened. Thereby, even if the driving force set by the user is too "strong", the resistance will lower its influence at edge positions, and the problem in balloon models [2], where strong driving forces may overwhelm "weak" edges, can be suppressed. Therefore, such a definition is adopted here. The flow resistance R at an arbitrary position (u, v) is defined as a function of the corresponding edge strength:
    R = exp{k · E(u, v)}    (4)
where E is the edge strength matrix and the positive parameter k controls the rate of fall of the exponential curve. If we assign a higher value to k, the resistance will be more sensitive to the edge strength, while a lower k will lead to less sensitivity. Substituting equations (3) and (4) into equation (1), the resultant flow velocity can be calculated. From figure 1, we can see that each possible flow direction is examined separately, so the component velocity in the considered direction, v_i, needs to be computed:
    v_i = v · cos γ    (5)
where γ is the angle between the flow direction i and the resultant velocity direction.
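A minimal NumPy sketch of equations (3)-(5) is given below; the vectorised form and the helper names are our assumptions, and the particle mass values are supplied by the caller as described in the text (positive for contour elements and dry pixels, negative for interior water elements).

import numpy as np

def driving_force(r_j, positions, masses):
    # Total force at r_j from eq. (3); positions: (K, 2), masses: (K,).
    d = r_j - positions                                   # (r_j - r_k) terms
    dist3 = np.linalg.norm(d, axis=1) ** 3 + 1e-12        # |r_j - r_k|^3
    return (masses[:, None] * d / dist3[:, None]).sum(axis=0)

def flow_resistance(edge_strength, k=50.0):
    # Edge-adaptive resistance of eq. (4): R = exp(k * E(u, v)).
    return np.exp(k * edge_strength)

def component_velocity(F_D, R, direction):
    # Resultant velocity (eq. (1) with A = 1) and its projection on a unit
    # flow direction (eq. (5)).
    v = F_D / R
    return v, float(np.dot(v, direction))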
2.2 Image Forces

If v_i ≤ 0, the contour element will not flow in the corresponding direction i. Otherwise, the movement decision given by equation (2) should be carried out, during which the image force is needed. The gradient of an edge response map is often defined as the potential force in active contour methods since it gives rise to vectors pointing to the edge lines [3]. This is also used here. The force is large only in the immediate vicinity of edges and always points towards them. The second property means that the forces at the two sides of an edge have opposite directions. Thus it will attract water elements onto edges and prevent overflow. The potential force scalar acting on the contour element starting from position (x_c, y_c) and flowing toward target position (x_t, y_t) is given by:
    F_P,i = |∇E(x_t, y_t)| cos β    (6)
where ∇E is the gradient of the edge map and β is the angle between the gradient and the direction i pointing from (x_c, y_c) to (x_t, y_t). The gradient of edges at the target position, rather than that at the considered water contour position, is defined as the potential force because the image force is presumed to act only during the second stage of flow, where the element has left the contour and is moving to the target position.
The forces defined above work well as long as the gradient of edges pointing to the boundary is correct and meaningful. However, as with corners, the gradient can sometimes provide useless or even incorrect information. Unlike the method used in the inflation force [2] and T-snakes [7], where the evolution is turned off when the intensity is bigger than some threshold, we propose a pixel-wise regional statistics based image force. The statistics of the regions inside and outside the contour are considered respectively and thus yield a new image force:

    F_S,i = − (n_int / (n_int + 1)) (I(x_t, y_t) − μ_int)² + (n_ext / (n_ext − 1)) (I(x_t, y_t) − μ_ext)²    (7)
where the subscripts "int" and "ext" denote the inner and outer parts of the water, respectively; μ and n are the mean intensity and the number of pixels of each area, respectively; and I is the original image. The equation is deduced from the Mumford-Shah functional [6]:

    F_1(C) + F_2(C) = ∫_inside(C) |I(x, y) − μ_int|² + ∫_outside(C) |I(x, y) − μ_ext|²    (8)
where C is the closed evolving curve. If we assume C_0 is the real boundary of the object in the image, then when C fits C_0 the term achieves its minimum. Instead of globally minimizing the term as in [6], we obtain equation (7) by looking at the change of the total sum given by a single movement of one water element. If an image pixel is flooded by water, the statistics of the two areas (water and non-water) change as given by equation (7). The derivation has been shown in [13, 14].
Edge-based forces provide a good localization of the contour near the real boundaries but have limited capture range, whilst region-based forces have a large basin of attraction and relatively low detection accuracy. A convex combination method is chosen to unify the two functionals:
    F_i = α F_P,i + (1 − α) F_S,i    (9)
where all terms are scalar quantities, and α (0 ≤ α ≤ 1) is determined by the user to control the balance between them.
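The two force terms and their combination might be coded as in the sketch below; the water_mask and grad_E inputs and the indexing by a (row, col) tuple are assumptions of this illustration, not the authors' implementation.

import numpy as np

def potential_force(grad_E, target, direction):
    # Edge potential force of eq. (6): |grad E(x_t, y_t)| * cos(beta), which for
    # a unit flow direction equals the dot product of the gradient with it.
    return float(np.dot(grad_E[target], direction))      # grad_E: (rows, cols, 2)

def statistical_force(I, target, water_mask):
    # Region-statistics force of eq. (7), from the current water mask.
    inside, outside = I[water_mask], I[~water_mask]
    n_int, n_ext = inside.size, outside.size
    x = I[target]
    return (-(n_int / (n_int + 1)) * (x - inside.mean()) ** 2
            + (n_ext / (n_ext - 1)) * (x - outside.mean()) ** 2)

def image_force(grad_E, I, water_mask, target, direction, alpha=0.5):
    # Convex combination of eq. (9): F_i = alpha*F_P,i + (1 - alpha)*F_S,i.
    return (alpha * potential_force(grad_E, target, direction)
            + (1 - alpha) * statistical_force(I, target, water_mask))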
2.3 Final Movement Decision

If the scalar image force is not less than zero, then J given by equation (2) must be positive (because the initial velocity v_i needs to be positive to pass the previous decision process, as shown in figure 1). Since only the sign of J is needed in this final decision-making step, the exact value of J need not be calculated in this case, and the element will be able to flow to the target position. If the scalar image force, however, is negative (a resistant force), equation (2) must be evaluated to see if the kinetic energy is sufficient to overcome the resistant force. As the exact value of J is still unnecessary to compute, equation (2) can be simplified to
    J = λ v_i² + F_i    (10)
where λ is a regularization parameter set by the user that controls the tradeoff between the two energy terms. It can be considered as the combination of the mass m and the displacement S. Its value reflects the smoothing of image noise; for example, more noise requires a larger λ. The sign of J from equation (10) then determines whether the considered element can flow to the target position in direction i.
2.4 Three Dimensional Water Flow Model

The extension of the water flow model to 3-D is very straightforward and natural because the physical water flow process is itself three dimensional. Again, assume one voxel of the volume matrix represents one basic water element, and define the water elements adjacent to dry areas as surface elements under a certain connectivity (here, 26-connectivity is chosen). Now the forces acting on surface elements are of interest. The implementation process is exactly the same as the one shown in figure 1. The difference is that the factors discussed above now need to be extended to 3-D. Equation (3) is again used to calculate the total driving forces, given that the position vectors r are three dimensional. The definition of flow resistance is also unchanged, provided that the edge/gradient operator used is extended to 3-D. By simply defining the image force functions given by equations (6), (7) and hence (9) in Ω ⊂ R³, the same equations can then be used to calculate the 3-D force functionals.
3 Experimental Results

The new technique is applied to both synthetic and real images, and is evaluated both qualitatively and quantitatively.

3.1 Synthetic Images
First, the goodness of the analogy to water flow is examined. Figure 2 shows water flowing in a tube-like object with and without adhesive force. The evolution is initialized at the left end of the pipe.
Fig. 2. Imitating water flowing inside a tube-like course with a narrow branch: (a) without adhesion, (b) with adhesion, (c) without adhesion, (d) with adhesion
We can see that the water stops at the interior side of the step edge and the front forms a shape which is similar to that observed on naturally flowing water. Figures 2(a) and (b) have slightly different front shapes – the two edges of the water flow faster due to the effect of adhesive forces. The adhesion also helps flow into narrow branches, as indicated in figure 2(d). Without the adhesive force, the surface tension will bridge the entrance of the very narrow branch and thus the water cannot enter it, as shown in figure 2(c).
Introducing the region-based force functional enables the operator to detect objects with weak boundaries, as shown in figure 3. The region-based force will stop the flow even if there is no marked edge response. The segmentation result here is mainly determined by the value of α.
To assess the immunity to noise, a quantitative performance evaluation is also performed. The level set method based on regional statistics [6] is chosen for comparison. The test image is generated so that the ground truth segmentation result can be compared. The shape of the considered object is designed as a circle with a boundary concavity to increase the detection difficulty. Different levels of Gaussian and impulsive noise are added. The mean square error (MSE) is used to measure the performance under noise:

    MSE = ( Σ_{k=1..I_D} d_k² ) / max(I_D, I_I)    (11)
where I_I and I_D are the number of ideal and detected contour points, respectively, and d_k is the distance between the k-th detected contour point and the nearest ideal point.
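The MSE of equation (11) can be evaluated directly from the two point sets; the brute-force nearest-point search below is our own sketch.

import numpy as np

def contour_mse(detected, ideal):
    # Eq. (11): squared distance from each detected contour point to its
    # nearest ideal point, normalised by max(I_D, I_I). Inputs: (N, 2) arrays.
    detected, ideal = np.asarray(detected, float), np.asarray(ideal, float)
    d = np.linalg.norm(detected[:, None, :] - ideal[None, :, :], axis=2).min(axis=1)
    return (d ** 2).sum() / max(len(detected), len(ideal))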
Fig. 3. Segmentation of the object with weak-contrasted boundaries (α=0)
The quantitative results are shown in figures 4(a) and (b). For both types of noise, the performance of the water flow model is markedly better than the level set operator, especially when the noise contamination is severe (SNR less than 10 dB). The performance superiority of the water operator under noisy conditions is further illustrated qualitatively by the segmentation results for Gaussian noise (SNR: 13.69) and impulsive noise (SNR: 11.81); see figures 4(c) to (f). This robustness to noise is desirable for many practical applications like medical image segmentation.
Fig. 4. Quantitative evaluation and detection examples for the level set method (LS) and the water flow operator (WF); left for Gaussian noise and right for impulsive noise: (c) LS (MSE: 0.83), (d) WF (MSE: 0.13), (e) LS (MSE: 1.02), (f) WF (MSE: 0.31)
3.2 Natural Images
Natural images with complex shape and topology are also assessed. Figure 5 shows the results for the image of a river delta with different parameters, where the river is the target object. It is suited to performance evaluation since gaps and "weak" edges exist in the image. One example is the upper part of the river, where boundaries are blurred and irregular. There are also inhomogeneous areas inside the river, which are small islands and have lower intensity. Our water flow based operator can overcome these problems. As shown in figure 5(a), a reasonably accurate and detailed contour of the river is detected. At the upper area, the very weak boundaries are also detected. This is achieved by using a high value of k in equation (4) that gives the operator a high sensitivity to edges. The contour is relatively smooth by virtue of surface tension. The fluidity leading to topological adaptability is shown well by successful flow into the branches at the lower area. Most of them are detected, with failures at only several narrow branches. The barriers are caused either by natural irregularities inside them or by noise.
Fig. 5. Water-flow detection results for the river delta photo with different parameters: (a) α=0.5, λ=1, k=50; (b) α=0.5, λ=1, k=0. Increased λ reduces the significance of image forces, and smaller k makes the flow less sensitive to edges; therefore the level of detail detected is lower in (b)
Different initializations inside the river were tried, and with the same parameters chosen the results are almost the same, as expected. The operator is insensitive to the source positions. By changing the parameters, however, some alternative results can be achieved. For example, figure 5(b) shows a segmentation of the whole basin of the river. It is analogous to a flood from the river: the water floods the original channels and stops at the relatively high regions. This shows the possibility of achieving different levels of detail just by altering some parameters.
The new water flow model is also applied to segment the complex and variable anatomical features in medical images, which typically have limited quality and are often contaminated by noise. Figure 6 presents example results for several MR images. The water sources are all set inside the object of interest and the parameters are chosen as k=20, α=0.5, λ=1. The resultant contours are relatively smooth by virtue of surface
Fig. 6. Segmentation results in real medical images: a) brain in a sagittal MR image, b) carotid artery in a carotid MRA image, and c) grey/white matter interface in an MR brain image slice
tension. The operator can find weak-contrasted boundaries, as shown by figure 6(a), where the indistinct interface between the brain and the spine is detected. This is achieved by combining a high value of k, which gives the operator a high sensitivity to edge response, with the region-based forces. The fluidity of water leads to both topological adaptability and geometrical flexibility, and the capillary force assists in detecting narrow tube-like features. Figures 6(b) and (c) illustrate this – the complex structures and irregular branches are segmented successfully.
Retinal vessel segmentation plays a vital role in medical imaging since it is needed in many diagnoses, such as diabetic retinopathy and hypertension. The irregular and complex shape of vessels requires the vessel detector to be free of topology and geometry constraints. Furthermore, digital eye fundus images often have problems like low resolution, bad quality and imaging noise. The water flow model is a natural choice. Figure 7 shows the segmentation results. Multiple initializations/water sources are set inside the vessel
Fig. 7. Segmenting vessels in retinal images with low resolution and quality (k=50, α=0.5, λ=1)
structures to alleviate the problems caused by gaps in the vessels. In figure 7(a), multiple flows of water merged, leading to a single vessel structure. In figure 7(b), some water flows merged and some remained separate. This can be improved by post-processing such as gap-linking techniques.
Fig. 8. An example of the MRI volume segmentation by 3-D water flow analogy: a) the water flow model segments the lateral ventricles of brain; b) – c) cross-sections of the results
3.3 Medical Images Volume Segmentation
The 3-D water flow model is expected to have comparable performance in volume segmentation. We have applied our 3-D water model to a variety of medical images so as to segment anatomical structures with complex shapes and topologies. Figure 8
presents a typical example where the model is applied to a 181×217×181 MR image volume of a human brain. The water source is set inside the lateral ventricles and the parameters are set at k=5, α=0.5, λ=1. The operator detects most parts of the lateral ventricles. Two cross-sections of the fitted model are also shown in figure 8.
4 Conclusions

This paper introduces a new general framework for image segmentation based on a paradigm of water flow. The operator successfully realizes the key attributes of the flow process under the structure of force field generation. The resistance given by images is defined by a combination of object boundary and regional information. The problems of boundary concavities and topological changes are settled whilst the attractive feature of snakes, the smoothness of the evolving contour, is retained. These points are confirmed by the results on synthetic and real images. Good noise immunity is demonstrated both quantitatively and qualitatively. Besides, the complexity of the algorithm is relatively low. Therefore the method is expected to be of potential use in practical areas like medical imaging and remote sensing, where target objects are often complex shapes corrupted by noise. A 3-D version of the operator is also defined and implemented, and is applied to medical volume segmentation. The algorithm here uses simple edge potential forces. In the future we seek to embody more refined edge detectors [15] or new force functionals like GVF [4] into the water flow based framework.
References
[1] Kass, M., Witkin, A., Terzopoulos, D.: Snakes: Active contour models. Int'l J. of Comp. Vision 1(4), 321–331 (1988)
[2] Cohen, L.D.: On active contour models and balloons. CVGIP: Image Understanding 53(2), 211–218 (1991)
[3] Cohen, L.D., Cohen, I.: Finite element methods for active contour models and balloons for 2-D and 3-D images. IEEE Trans. PAMI 15, 1131–1147 (1993)
[4] Xu, C., Prince, J.L.: Snakes, shapes, and gradient vector flow. IEEE Trans. Image Processing 7(3), 359–369 (1998)
[5] Figueiredo, M., Leitao, J.: Bayesian estimation of ventricular contours in angiographic images. IEEE Trans. Medical Imaging 11, 416–429 (1992)
[6] Chan, T.F., Vese, L.A.: Active contours without edges. IEEE Trans. Image Processing 10, 266–276 (2001)
[7] McInerney, T., Terzopoulos, D.: T-snakes: topologically adaptive snakes. Medical Image Analysis 4, 73–91 (2000)
[8] Caselles, V., Kimmel, R., Sapiro, G.: Geodesic active contours. Int'l Journal of Computer Vision 22(1), 61–79 (1997)
[9] Malladi, R., et al.: Shape modeling with front propagation: A level set approach. IEEE Trans. PAMI 17, 158–174 (1995)
[10] Vincent, L., Soille, P.: Watersheds in digital space: an efficient algorithm based on immersion simulations. IEEE Trans. PAMI 13, 583–598 (1991)
[11] Bleau, A., Leon, L.J.: Watershed-based segmentation and region merging. Computer Vision and Image Understanding 77, 317–370 (2000)
[12] Liu, X.U., Nixon, M.S.: Water Flow Based Complex Feature Extraction. In: Blanc-Talon, J., Philips, W., Popescu, D., Scheunders, P. (eds.) ACIVS 2006. LNCS, vol. 4179, pp. 833–845. Springer, Heidelberg (2006)
[13] Liu, X.U., Nixon, M.S.: Water flow based vessel detection in retinal images. In: Proceedings of Int'l Conference on Visual Information Engineering, pp. 345–350 (2006)
[14] Hurley, D.J., Nixon, M.S., Carter, J.N.: Force field feature extraction for ear biometrics. Computer Vision and Image Understanding 98, 491–512 (2005)
[15] Evans, A.N., Liu, X.U.: A morphological gradient approach to color edge detection. IEEE Trans. Image Processing 15(6), 1454–1463 (2006)
A Novel Hierarchical Technique for Range Segmentation of Large Building Exteriors
Reyhaneh Hesami, Alireza Bab-Hadiashar, and Reza Hosseinnezhad
Faculty of Engineering and Industrial Sciences, Swinburne University of Technology, VIC 3127, Australia
{rhesami,abab-hadiashar,rhosseinnezhad}@swin.edu.au
Abstract. Complex multiple structures, high uncertainty due to the existence of moving objects, and significant disparity in the size of features are the main issues associated with processing range data of outdoor scenes. The existing range segmentation techniques have been commonly developed for laboratory-sized objects or simple architectural building features. In this paper, the main problems related to the geometrical segmentation of large and significant buildings are studied. A robust and accurate range segmentation approach is also devised to extract very fine geometric details of building exteriors. It uses a hierarchical model-based range segmentation strategy and employs a high breakdown point robust estimator to deal with the existing discrepancies in size and sampling rates of various features of large outdoor objects. The proposed range segmentation algorithm facilitates automatic generation of fine 3D models of the environment. The computational advantages and segmentation capabilities of the proposed method are shown using real range data of large building exteriors.
1 Introduction

During the last decade, large scale 3D measurement technology has been significantly advanced. Accurate dense range data of outdoor objects, up to a few hundred meters in size, can now be produced in minutes. As a result, production of automated urban models of whole buildings is emerging as one of the viable applications of 3D data. In particular, capturing the fine architectural details embedded in the façade of important buildings has found new significance for developing realistic virtual reality tours of monuments, computer games, etc. The size and complexity of buildings, unavoidable existence of moving objects (as shown in Figure 1(a)), unpredictable change of environmental conditions and existence of sharp contrasts between the levels of details in different parts of large buildings (as shown in Figure 1(b)) pose significant challenges for the existing computer vision techniques.
A number of approaches for the segmentation of dense range images of outdoor scenes have been developed during the last few years. A common approach, particularly for segmenting building exteriors, is to use architectural features. Attributes such as vanishing points [1], parallelism of walls and orthogonality of edges [2] are employed to extract linear features of buildings. Another common approach is to consider the 3D dataset as a collection of pre-defined classes of segments [3-6]. In this approach, a learning method is employed to find the various
instances of different classes of objects such as ground, vegetation, buildings, shrubs, etc. Although the above techniques are generally able to extract large segments of various buildings, they are not designed to detect the fine details particularly seen on the façades of important buildings. Moreover, these techniques rely on the existence of distinguished features embedded in the scene, whose availability is application dependent. In addition, due to the fact that they often employ edge detection or region growing techniques for segmentation, they require significant post-processing to overcome the occlusion problem. To our knowledge, automatic extraction of fine details of buildings of interest from range data is yet to be addressed satisfactorily.
Fig. 1. a) Intensity and range images of the Royal Exhibition Building- Melbourne, Australia. Cars, passengers and vegetation are obstacles that couldn’t have been avoided at the time of data collection. Moving objects appear as straight lines in the range image. b) Large disparity in size of various features is highlighted (the Shrine of Remembrance – Melbourne, Australia).
Our main goal has been to develop a computationally feasible technique capable of extracting all possible geometric details embedded in 3D data of the exteriors of large buildings. The proposed range segmentation algorithm indeed facilitates automatic generation of fine 3D models of the environment. To overcome the problems associated with measurement uncertainties and structural complexities of range data of outdoor scenes, we have developed a hierarchical (parametric) model-based robust range segmentation algorithm. The segmentation strategy has been adopted from the techniques presented in [7, 8], while a hierarchical approach, involving sequential usage of a robust estimator, is used to significantly reduce the computational cost and increase the level of accuracy in segmentation of fine details. In this paper, we first introduce and analyze the main characteristics of outdoor scenes (building exteriors, in particular) that complicate the geometric segmentation task. We then outline our new parametric robust hierarchical scheme that is specifically designed to address those problems. Our experiments with real data are presented in Sec. 4, where we demonstrate that this method is able to segment range data of buildings of significance, containing substantial disparities in size and noise.
2 Characteristics of Range Data of Building Exteriors

The application domain of outdoor range segmentation is an open environment and there are many factors that significantly influence the measurement processes. In particular, there are several main issues that complicate the outdoor range data
segmentation task. In the following subsections, we explain those problems in detail and show ways by which they can be addressed.

2.1 Disparity in Size

In range images of large structures, the difference between the shape and size of objects of interest can be significant. For example, as shown in Figure 1(b), while walls of a building may contain as many as 30% of all data points, surfaces associated with roof decoration may only contain 1% of all data. Since distant and small structures have a small number of data samples, a segmentation algorithm that relies on a minimum size for structures may not be able to extract all possible structures.
To show the effect of disparity in size on the segmentation process, we have designed and conducted a simulation experiment as shown in Figure 2(a). The scene in this experiment represents 3D synthetic data containing two parallel planar structures. The large plane contains 1600 data points while the size of the small plane is varied from 16 to 1600 data points. Data of both structures are generated using square-shape regular grids corrupted by additive Gaussian noise N(0, 0.1). Around 80 uniformly distributed wrong measurements (representing 5% of the whole population), imitating the effect of gross outliers, have also been added to the mix. A robust estimator (MSSE [7]) is then applied to segment this data.
Fig. 2. a) Sample of synthetic data used to demonstrate effect of disparity in size for large-scale range data segmentation. b) Plot of percentage of success in segmenting both small and large structures with a robust estimator versus the ratio of the size of small structure to the size of whole population for different values of K.
The proposed hierarchical robust segmentation technique aims to overcome this issue by performing segmentation at different scales and hence recovering small structures without the interference of the larger ones. The above experiment was repeated 100 times for every value of K (the proportion of the size of the smallest data group that would be considered a structure – here, varied from 2% to 12% of the whole population), and the successes of the robust estimator in separating both planes were recorded as shown in Figure 2(b). This figure indicates that successful segmentation of all possible structures of interest greatly depends on the size of the embedded structure. Structures containing less than 20% of all data are less likely to be segmented as a separate structure.
2.2 Existence of Very Fine Details

Modern 3D laser scanners are able to capture high-resolution dense geometric data points of building exteriors in a short period of time. As a result, outdoor range datasets are rich in detail. Moreover, many buildings, in addition to their main structure, contain different architectural details such as columns, statues and staircases. Generation of a simplified model of such a building is feasible with existing techniques – for instance, see [9]. However, extraction of fine details of ornamental buildings has remained a challenging task that can only be performed either manually [10] or with a huge amount of computation. The cost associated with either of those methods increases rapidly with modest increases in the desired level of interest.
In order to analyze this phenomenon, we consider the cost associated with a RANSAC [12] type robust estimation approach. Most robust estimators use a search method such as random sampling to solve the optimization problem. As shown by Fischler and Bolles [11], if ε is the fraction of data points not belonging to the segment one tries to find by random sampling and P is the probability of having at least one "good sample" (a sample belonging to the segment of interest) in the p-dimensional parameter space, the minimum number of random samples required for having at least one good sample (which by itself is far from satisfying the sufficiency condition [12]) is calculated by:

    m = log(1 − P) / log[1 − (1 − ε)^p]    (1)
Number of required random samples
For instance, if the size of the smallest structure of interest in a multi-structure scenario is 1% (ε=0.99) of all data population, then more than 2 million random samples are required to find the structure of interest 90% of times. Figure 3 shows the rapid change in the number of required random samples when high level of details is desired. 4
x 10
4
P =0.99 P =0.95 P =0.90
3 2 1 0
0
0.05 0.1 0.15 0.2 0.25 0.3 0.35 Minim um relative s iz e of the s m alles t desired detail
0.4
Fig. 3. Plot representing the number of required random samples versus the minimum relative size of the smallest desired detail, for different probabilities of success in 3D. The number of required samples increases enormously when the size of the desired structure goes below 10%.
It is important to note that in practice, the outlier ratio ε is not known a priori and has to be assumed to be fairly large to guarantee that small structures aren’t overlooked. Therefore, for successful segmentation of small patches, ε values very close to one must be chosen which would result in prohibitive computation as exemplified above.
2.3 High Uncertainty Due to Construction Errors

In our experiments with range data of building exteriors, we have found that segmentation of a building exterior is highly affected by the construction accuracy of the modeled building. Construction errors in large buildings are generally unavoidable and their scales are significant compared to the accuracy of the 3D measurement systems (a few centimeters and a few millimeters, respectively). In particular, the effect of construction error becomes a significant issue when different parts of one structure are located apart. In such circumstances, and depending on the level of construction error, model-based range segmentation algorithms may no longer be able to detect coplanar surfaces as single structures.
To investigate the effect of construction error on the segmentation process, we have performed a simulation experiment as shown in Figure 4(a). The scene represents three dimensional synthetic data containing coplanar surfaces of similar size; each contains a total of 500 data points (5 meter wide planes separated by 5 to 15 meters). Data of both planar surfaces are generated using rectangular (and regular) grids corrupted by additive Gaussian noise N(0, 0.1). The construction errors are then modeled by moving one surface parallel to the other in depth by different amounts (μ times the scale of noise). A number of randomly distributed gross outliers (around 30% of the population, representing wrong measurements or miscellaneous building parts) have also been added to the set, and a robust estimator (MSSE) is applied to segment this data.
Fig. 4. a) Sample of the simulation data used for segmentation analysis of distant co-planar surfaces b) Likelihood of detecting co-planar surfaces as one segment vs. distance of structures for cases where K=0.1
The above experiment was repeated 100 times for different values of μ (the construction error in depth) ranging from 0.5 to 3 times the measurement error (here, 10 mm). The number of times that the robust estimator successfully labeled both patches as a single plane was recorded and is shown in Figure 4(b). The plot shows that successful segmentation of coplanar surfaces directly depends on the distance separating those coplanar structures and the amount of construction error. Coplanar structures separated by more than twice their dimension are not likely to be segmented as coplanar, and the situation worsens as the construction error increases.
3 Hierarchical Robust Segmentation Scheme (HRS)

As was mentioned previously, robust estimation is the tool we have chosen to address the aforementioned complexity and uncertainty issues associated with segmentation of range images. A suitable robust estimator needs to have a breakdown point much higher than the 50% offered by traditional robust estimators like M-estimators [13] and LMedS [14]. Over time, a number of very high breakdown robust estimators have been specifically developed for computer vision applications (i.e. RANSAC [11], MSSE [7], PbM [15] and most recently HBM [16]), but those have only been used for applications involving laboratory-sized objects.
To deal with the problems explained in Sections 2.1 and 2.2, a single global approach is unlikely to be sufficient. The Hierarchical Robust Segmentation (HRS) technique presented here is designed to overcome the issues associated with range segmentation of building exteriors. In this approach, a robust range segmentation strategy is applied at different stages, significantly reducing the overall computation requirements to a level achievable by ordinary computers (see Tables 1 and 2 for a comparison). Moreover, we assume that most of the structural and decorative parts of large building exteriors are either planar (due to the ease of their construction) or can be approximated by small planar patches. This allows us to use the highly effective model-based approach and take advantage of existing robust segmentation techniques [7]. However, for applications where nonlinear forms are of importance, our proposed hierarchical framework can be extended to include model selection strategies similar to those introduced in [8].
The proposed algorithm starts by specifying a user-defined input to the robust estimator (a threshold K, which is the size of the smallest region that can be regarded as a separate region). Without this constraint, the segmentation task becomes a philosophical question, as every 3 points in the dataset can in theory represent a planar patch. We also assume that the scale of typical measurement error of the rangefinder is either readily available or can be measured from the data. The outline of our proposed range data segmentation algorithm is as follows:

Range data pre-processing – In this stage, first, those data points whose associated depths are not valid (due to the limitations of the laser rangefinder used for measuring the depth) are eliminated. These points are usually marked by the range scanner with an out-of-range number. Data of outdoor man-made objects captured by laser technology is contaminated by noise due to the "mixed-pixel" effect and moving objects. To reduce these effects, a median filter (here, 5 × 5) is then applied to the entire valid range data.

Robust range segmentation – A robust segmentation algorithm is applied to the entire data. This algorithm is initially tuned to extract a preliminary collection of coarse/large segments. The remaining data are marked as outliers and stored for further processing. In this work we have chosen to use the Modified Selective Statistical Estimator (MSSE) [7], because it is straightforward and has the least finite sample bias in comparison with other popular robust estimators [17]. However, other highly robust estimators could also be used in this step and would be expected to produce similar results. This estimator is explained in Section 3.1.
Surface fit – We then fit a planar surface to the data of each coarse/large segment (from the previous stage) and calculate the scale of noise.

Hierarchy criterion – If the calculated scale is greater than the noise scale of the measurement unit, we consider this segment a coarse segment and apply the robust segmentation algorithm to it once again. Otherwise, it is labeled as a large segment. Where applicable, this step is repeated to extract all possible details embedded in the data.

Outlier segmentation – Those data marked as outliers in the first segmentation stage are not discarded, since the majority of such points may belong to small structures. We again apply a finer robust segmentation algorithm to these data points. Smaller structures are normally detected at this stage.

3.1 Robust Estimation and Segmentation Using MSSE

As mentioned earlier, in our experiments we have used the MSSE [7] to perform segmentation at every level. Although describing the full detail of this technique is outside the scope of this paper, we briefly outline how the method is implemented. MSSE uses random sampling to generate a number of candidate fits, ranks these candidates by their least K-th order residuals, and estimates the scale from the best preliminary fit. The algorithm then classifies inliers and outliers using this scale estimate. The important steps of MSSE are as follows:
– A value of K is set (by the user) as the lower limit of the size of populations one is interested in.
– A localised data group inside the data space, in which all the pixels lie on a flat plane, is found using random sampling.
– A planar model with the least K-th order squared residuals is selected from the planar models fitted to those samples.
– For the accepted model, starting from n = K, the unbiased estimate of the scale of noise is calculated using the smallest n residuals:

\sigma_n^2 = \frac{\sum_{j=1}^{n} r_j^2}{n - p}    (2)
where r_j is the j-th smallest residual and p is the number of parameters in the model. Those points whose squared residual is greater than a threshold multiple T of the scale of noise (T is specified based on the level of significance in the normal distribution) are rejected. Equivalently, the transition from inlier to outlier occurs when:

\frac{\sigma_{n+1}^2}{\sigma_n^2} > 1 + \frac{T^2 - 1}{n - p + 1}    (3)
A new segment containing all the inliers to this fit regardless of their geometrical location is generated. As a result, the algorithm has the advantage of detecting and
resolving occlusion while segmenting the data. The above tasks are iteratively performed until the number of remaining data points becomes less than the size of the smallest possible region in the considered application.
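To make the scale-based inlier/outlier transition of Eqs. (2) and (3) concrete, the following Python sketch estimates the noise scale from the K smallest residuals of a candidate planar fit and grows the inlier set until the MSSE stopping condition fires. It is a minimal illustration only; the random-sampling search for candidate planes is omitted, and the parameter names (`residuals`, `K`, `T`, `p`) are our own assumptions, not the authors' MATLAB implementation.

```python
import numpy as np

def msse_inliers(residuals, K, T=2.5, p=3):
    """Grow the inlier set of a candidate fit using the MSSE criterion.

    residuals : 1-D array of residuals of all points to the candidate plane
    K         : size of the smallest admissible structure (in points)
    T         : threshold in units of the noise scale (significance level)
    p         : number of model parameters (3 for a plane z = ax + by + c)
    Returns the number of inliers and the estimated noise scale.
    """
    r2 = np.sort(np.asarray(residuals) ** 2)   # smallest squared residuals first
    n = K
    sigma2 = r2[:n].sum() / (n - p)            # unbiased scale from K residuals, Eq. (2)
    while n < len(r2):
        sigma2_next = r2[:n + 1].sum() / (n + 1 - p)
        # Transition test of Eq. (3): stop when adding the next point
        # inflates the scale more than a T-sigma inlier could.
        if sigma2_next / sigma2 > 1.0 + (T ** 2 - 1.0) / (n - p + 1):
            break
        sigma2, n = sigma2_next, n + 1
    return n, np.sqrt(sigma2)
```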
4 Experimental Results

We have conducted a number of experiments to determine the functionality of the proposed algorithm, and one is detailed here. The range data of this experiment were captured from the front view of a historical building, the Shrine of Remembrance (Melbourne, Australia), by a Riegl LMS-Z210 laser rangescanner. The exterior of this building is highly structured, with many large planar objects such as walls, doors and roofs, and smaller planar objects such as stairs. Part of the building also contains small, decorative structures. The building is shown in Figure 5(a) (left). The scanned range image of the building is sampled on a 250 × 382 grid and contains almost 10^5 data points. The angular resolution of the scanner was set to 0.1 degree, and its measurement error is typically 10 mm. The proposed algorithms have all been implemented in MATLAB, and the original range image and the results of the first and last stages of the segmentation strategy are shown in Figure 5(b) and (c). To highlight the high accuracy of the proposed segmentation algorithm, the decorative part of the front exterior and its segmentation outcome are magnified in Figure 5(d). The quality of these outcomes and the computation required to achieve them are elaborated below.

4.1 Quantitative Evaluation of Results

To highlight the advantages of the proposed hierarchical strategy, we have compared the results of the direct and step-wise implementations of the robust range segmentation technique using a range image of a typical building. Tables 1 and 2 summarize the outcomes of each approach for different values of K (the proportional size of the smallest data group that would be considered a structure). The results for the direct approach (Table 1) show that as the value of K is decreased to extract more details (finer structures), the required computational cost increases to levels that would be considered impractical for most vision applications. At the same time, the value of σ (estimated noise scale) for each segment also decreases, indicating that the segmentation becomes more accurate for smaller values of K. The oversegmentation problem, however, worsens when K is fairly small. Table 2 shows the results of the hierarchical approach to the robust range segmentation of the Shrine of Remembrance (see Figure 5), which requires a three-level pyramid. At the first level, the algorithm focuses on the large/coarse segments (i.e. structures that contain at least 20% of the whole population) and separates the data into 4 parts. Stages two and three further refine those parts into smaller/finer segments, and a very accurate segmentation is achieved at the final stage. The table also shows that our hierarchical approach drastically decreases the required computation time (3 minutes versus 13 hours) while taking full advantage of the high accuracy that the MSSE can produce.
Table 1. Outcomes of direct implementation of the robust range segmentation algorithm for the Shrine of Remembrance with different values of K (size of the smallest structure)

K     No. of Samples   Segmentation Time (in seconds)   No. of Segments   Segmentation Quality
0.3   84               28                               2                 No fine details detected
0.2   286              36                               3                 No fine details detected
0.1   2,301            150                              8                 No fine details detected
0.08  4,496            398                              10                Moderate number of fine details
0.05  18,419           1,661                            15                Moderate number of fine details
0.02  287,821          44,822 (~13 hours)               29                Most details are detected
0.01  2,302,583        –                                –                 Stopped due to the computational limitations
Table 2. Outcomes of hierarchical implementation of the robust segmentation algorithm for the Shrine of Remembrance

Hierarchy   K      Number and quality of generated segments
First       0.2    Segment I, Segment II, Segment III and the remainder (outliers); σ >> 0.01
Second      0.15   Segment I → Segments I-1 and I-2; Segment II → Segments II-1 to II-7 (σ < 0.01); Segment III → Segments III-1 to III-7 (σ < 0.01); remainder (σ > 0.01) → Segments O-1 to O-9 (σ < 0.01) and Segment O-10 (σ > 0.01)
Third       0.4    Segment I-1 → Segments I-1.1 to I-1.4 (σ < 0.01); Segment O-10 → Segments O-10.1 to O-10.3 (σ < 0.01)

Segmentation time = 191 (s)
5 Conclusion

In this paper, we have studied the main issues associated with the geometric segmentation of large buildings. A novel method is proposed to efficiently partition range data of such objects into planar patches using a hierarchical segmentation approach. The algorithm extracts detailed planar patches from large and complex datasets at very modest computational cost. In the case study presented here, it took around 190 seconds to segment 28 planar patches embedded in the range data set of a decorative monument; the same task takes around 13 hours when a similar robust segmentation technique is applied directly to the dataset. Our experimental results also show that the outcomes of the proposed method are more accurate and less prone to the usual over- or under-segmentation issues.
Fig. 5. (a) Intensity (left) and range (right) image of the front side of the Shrine of Remembrance. (b) Resulting segmentation of the first level of the hierarchy. (c) Final result of the hierarchical segmentation. (d) All possible details (planar and decorative) of the head of the building are successfully segmented.
References
[1] Stamos, I., Allen, P.: Geometry and Texture Recovery of Scenes of Large Scale. Computer Vision and Image Understanding (CVIU) 88(2), 94–118 (2002)
[2] Cantzler, H., Fisher, R.B., Devy, M.: Improving Architectural 3D Reconstruction by Plane and Edge Constraining. In: BMVC, Britain, pp. 43–52 (2002)
[3] Zhao, H., Shibasaki, R.: A Vehicle-Borne Urban 3D Acquisition System Using Single-Row Laser Range Scanners. Systems, Man and Cybernetics, Part B 33(4), 658–666 (2003)
[4] Anguelov, D., Taskar, B., Chatalbashev, V., Koller, D., Gupta, D., Heitz, G., Ng, A.: Discriminative Learning of Markov Random Fields for Segmentation of 3D Scan Data. In: CVPR, CA, USA, pp. 169–176 (2005)
[5] Triebel, R., Kersting, K., Burgard, W.: Robust 3D Scan Point Classification Using Associative Markov Networks. In: ICRA, Florida, USA, pp. 2603–2608 (2006)
[6] Lakaemper, R., Latecki, L.J.: Using Extended EM to Segment Planar Structures in 3D. In: ICPR, Hong Kong, pp. 1077–1082 (2006)
[7] Bab-Hadiashar, A., Suter, D.: Robust Segmentation of Visual Data Using Ranked Unbiased Scale Estimate. Robotica 17, 649–660 (1999)
[8] Bab-Hadiashar, A., Gheissari, N.: Range Image Segmentation Using Surface Selection Criterion. IEEE Trans. on Image Processing 15(7), 2006–2018 (2006)
[9] Stamos, I., Yu, G., Wolberg, G., Zokai, S.: 3D Modelling Using Planar Segments and Mesh Elements. In: 3DPVT, University of North Carolina, Chapel Hill, pp. 599–606 (2006)
[10] Allen, P.K., Stamos, I., Troccoli, A., Smith, B., Leordeanu, M., Hsu, Y.C.: 3D Modeling of Historic Sites Using Range and Image Data. In: ICRA, Taiwan, pp. 145–150 (2003)
[11] Fischler, M.A., Bolles, R.C.: Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Communications of the ACM 24(6), 381–393 (1981)
[12] Meer, P.: Robust Techniques for Computer Vision. In: Emerging Topics in Computer Vision, pp. 107–190. Prentice Hall, Englewood Cliffs (2004)
[13] Huber, P.J.: Robust Statistics. Wiley, New York (1981)
[14] Rousseeuw, P.J.: Least Median of Squares Regression. Journal of the American Statistical Association 79, 871–880 (1984)
[15] Chen, H., Meer, P.: Robust Regression with Projection Based M-Estimators. In: ICCV, Nice, France, pp. 878–885 (2003)
[16] Hoseinnezhad, R., Bab-Hadiashar, A.: A Novel High Breakdown M-estimator for Visual Data Segmentation. In: ICCV, Rio de Janeiro, Brazil (October 2007)
[17] Hoseinnezhad, R., Bab-Hadiashar, A., Suter, D.: Finite Sample Bias of Robust Scale Estimators in Computer Vision Problems. In: ISVC, Nevada, USA, pp. 445–454 (November 2006)
Lip Contour Segmentation Using Kernel Methods and Level Sets
A. Khan, W. Christmas, and J. Kittler
Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, GU2 7XH, UK
{a.khan,w.christmas,j.kittler}@surrey.ac.uk
Abstract. This paper proposes a novel method for segmenting lips from face images or video sequences. A non-linear learning method in the form of an SVM classifier is trained to recognise lip colour over a variety of faces. The pixel-level information that the trained classifier outputs is integrated effectively by minimising an energy functional using level set methods, which yields the lip contour(s). The method works over a wide variety of face types, and can elegantly deal with both the case where the subjects’ mouths are open and the mouth contour is prominent, and with the closed mouth case where the mouth contour is not visible.
1 Introduction
The ability to extract a person's lips from an image or image sequence has some useful and interesting applications. Bi-modal speech recognition is one: according to [1], speech perception and understanding in humans is not just an auditory activity, but rather also employs visual information conveyed through the associated lip and mouth movement of the speaker. Even in machine-based speech understanding, visual cues extracted from lip shape can serve as a complementary source of information that can help bolster the speech recognition process when combined with the auditory information. The appearance of the lips also conveys valuable information about a person's facial expression. Applications include expression/emotion recognition and character-driven animation. This paper proposes a novel method of segmenting/tracking lip shapes in images/videos of a person's face. Lip extraction has generally been performed using some sort of flexible lip shape representation, such as deformable templates [2], active contours [3] or active shape models [4]. Generally, in a sequence consisting of a person speaking into the camera, it will be observed that the shape of the lips exhibits some dominant modes of variation. Therefore some of these methods try to impose a strong prior on the set of possible shapes that these contours or templates can take. This can be achieved, for instance, by using a parametric shape model with a limited degree of freedom that can model these variations with sufficient accuracy, or by learning the shape statistics and constraining shape deformations accordingly [5],[6].
Another issue to be considered is how to model both the inner and outer lip contour, given that the mouth of a speaking person opens and closes, affecting the visibility and prominence of the inner lip boundary. Methods differ in whether they model only the outer boundary of the lip or both the inner and outer contours, or whether the lip model is switched depending on whether the mouth is ascertained to be open or closed[7]. In the method presented here, the level set method is used to represent the lip shape. Level sets provide an energy minimisation framework that has been used extensively to model shapes and object boundaries in the computer vision community. This method facilitates incorporation of region information at an image pixel level while still enforcing global notions such as boundary smoothness. An additional benefit of this method in this particular application, is that because of its unique behaviour with regard to topological properties of shapes, the problem of modelling both the inner and outer lip contour is elegantly solved. To drive the shape fitting process to the actual in-image lips, some sort of colour and/or gradient information (of the lip region and lip boundaries respectively) is used. When gradient-based features are used, the segmentation process needs to be supported by a strong shape model, as the contrast between the lip region and the skin often tends to be weak. When using colour information, the assumption is that in a face image the colour distribution of lips is sufficiently different from everything else (primarily skin, but also other facial features and hair). This assumption can be relaxed a bit (as in this paper) by assuming a non-negligible degree of overlap between the colour distributions of lip and skin. Methods that rely on learning to distinguish lip and non-lip regions make the implicit assumption that over a large set of face images, the differences between the lip and non-lip feature observations can be characterised over the different races or skin types. The rest of this paper is organised as follows: Section 2 gives a detailed description of the approach used in this work. Section 3 presents some visual results, and numerical characterisation of their quality. The paper is drawn to conclusion in section 4.
2 The Method
The primary task to be achieved is the segmentation of the lip shape in a face image. For shape segmentation, the level set approach is a popular variant of active contour methods in the computer vision community, and is the one used in this work. In one of its basic formulations, level sets facilitate the segmentation of an image or image sequence into two regions, separated by a boundary over which some smoothness constraint has been exerted, such that the homogeneity of some property in each region is maximised. The property used depends on the particular application, and may be one, or a combination, of various cues (such as intensity, texture, optical flow, etc.), but is basically a scalar function defined over the image domain. To define this function, the method presented in this paper takes a machine-learning approach; a support vector machine classifier,
suitably trained over a set of colour-based features, defines at a pixel-by-pixel level some notion of the likelihood of the class label (whether lip or non-lip). This effectively allows the conversion of a multidimensional feature vector (based on RGB colour components) to a scalar quantity which represents a "soft" decision regarding whether a particular pixel looks like lip or non-lip. The level set method allows this information to be integrated at a more global level, yielding an entire lip region rather than a pixel-level classification. A practical overview of the method is as follows: a number of example face images with the lip region demarcated are used to generate a training set of feature vectors and class labels (lip, non-lip), which are fed as training examples to an SVM (support vector machine) classifier. Once the classifier has been trained, a new instance of a face image for lip extraction can be presented to the system, and the SVM output is treated as a function over which an appropriate energy minimisation problem is defined. This problem is solved by means of gradient descent of the level set function, and its solution generally yields the lip contour(s). The details of the method are given in the next section.
2.1 Choice of Classification Method
It is observed that, in general, it is not possible to linearly separate skin and lip colours when colour is represented in RGB. Several non-linear colour spaces (i.e. obtained as non-linear transformations of the RGB space) have been proposed that purport to give better results for distinguishing lip and skin pixels [8]. This problem becomes even more serious when considering the distribution of skin and lip colour over the range of races and skin types, and it may well be impossible to distinguish adequately between these two classes using any particular colour space. Given the non-linear nature of the classification problem, in this method we chose to employ a support vector machine-based classifier with a radial basis kernel. The first advantage of using SVMs is that, for colour-based features, we can justifiably dispense with the task of having to find a new colour space to represent the data, as the non-linearity built into the SVM classifier should be able to cope. Hence RGB can be retained as the colour space. Further, it is possible to cope with variations of skin and lip colour, and their correlation with race or skin type, by including the necessary features that bring out these characteristics. Another important characteristic of the SVM is that the classification process yields not a hard binary decision of which class a particular observation falls in, but rather a decision function that is a continuous function of the feature variables. This characteristic makes the SVM output usable by the next stage of the process, namely the level set evolution, and will be discussed in more detail later. The training set was composed of tightly cropped face images (although it is acceptable to have a fair bit of background, as long as the skin region dominates the image area) which were subjectively chosen to be representative of the majority of the images that would be used to test the system.
A 6-dimensional feature vector was chosen for classification: three of its components were the R, G and B intensities of the pixel, and the other three were the medians of the individual R, G and B components of the training image, referred to from here onwards as the "dominant colour" of the image. It is this dominant image colour that brings out the differences between the various types of faces. Although, strictly speaking, colour elements cannot be ordered (because of their multidimensionality), it is quite reasonable to assume that colours that are perceptually near have small Euclidean distances between them in the RGB space, and hence taking the median of the individual colour components does in fact yield a representative colour of the face (i.e. skin colour). Simple tests performed on the images bear this out. Once the classifier has been trained, a test image is fed to the system, which outputs the value of the SVM decision function for each pixel of the image. The sign of the decision function denotes the class label assigned to it by the binary classifier, and in many SVM-based applications that is all the information that is used. However, the absolute value of the decision function (which is a real number) is actually a measure of the confidence of the classification decision, i.e. a larger absolute value indicates a higher likelihood that the classification is correct. "Borderline" cases have values close to zero. The decision function map will be used to define the lip segmentation energy functional.
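As an illustration of the six-dimensional feature described above (per-pixel R, G, B plus the per-image median R, G, B "dominant colour"), the sketch below builds the feature map for one image and evaluates a trained SVM decision function at every pixel. The function names (`pixel_features`, `decision_map`) and the use of scikit-learn are our own choices for the sketch, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def pixel_features(img):
    """img: H x W x 3 uint8 RGB image -> (H*W) x 6 feature matrix."""
    h, w, _ = img.shape
    rgb = img.reshape(-1, 3).astype(float)
    dominant = np.median(rgb, axis=0)          # per-image median R, G, B ("dominant colour")
    dom = np.tile(dominant, (h * w, 1))
    return np.hstack([rgb, dom])               # 6-D feature per pixel

def decision_map(clf, img):
    """Signed SVM decision value per pixel (positive ~ lip, negative ~ non-lip)."""
    h, w, _ = img.shape
    f = clf.decision_function(pixel_features(img))
    return f.reshape(h, w)

# Training sketch: X is an N x 6 matrix of pixel features from hand-labelled faces,
# y is +1 for lip pixels and -1 for non-lip pixels.
# clf = SVC(kernel="rbf").fit(X, y)
```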
2.2 The Energy Functional and Its Level Set-Based Minimisation
To extract and represent the lip shape, a level-set shape model[9] is used. Level sets provide an implicit representation of shape that is free of parametrisation (and the problems associated with it). For instance, they can handle the sharp discontinuities in the shape, such as at the corners of the mouth where the curvature of the lip shape is quite high. Also, because the level sets automatically handle shape and regions that are multiply connected or possess holes, in this particular problem they present an elegant means of dealing with multiple hypotheses (i.e. if there are other regions in the image that look like lip) as well as dealing with the appearance and disappearance of the inner lips contour as the speaker’s mouth opens and closes. Suppose the decision function returns a positive result for the lip class and a negative result for the non-lip class. In order to define an energy functional over the decision function map output by the SVM classifier, the following simple fact is used: in the decision function map, the lip region will contain a large proportion of positive valued numbers (ideally 100% - although if that were the case, the SVM-based classifier would be enough to extract the lip shape and any further steps would be unnecessary) whereas the non-lip pixels would consist of (again, mostly) negative values. What needs to be determined now is the boundary that separates these two regions. Let f be the decision function on Ω, a subset of R2 (corresponding to the image support). Suppose C represents this boundary, ins(C) and out(C) represent the
(possibly multiply-connected) regions on either side of this boundary. Consider the following energy functional:

E = -\left( \int_{\mathrm{ins}(C)} f(x)\,dx \right)^{2} - \left( \int_{\mathrm{out}(C)} f(x)\,dx \right)^{2} + \mu\,\mathrm{length}(C)    (1)
The first two terms depend upon the decision function map, and the last term penalises the length of the boundary separating the two regions. If we momentarily ignore the contribution of the last term, the minimisation of the above functional becomes trivial: C simply corresponds to the "boundary" separating the binary thresholded version of the decision function (with a threshold of zero). The all-important boundary length term imposes the requirement of smoothness on the solution, which is where the level set method comes into play. This allows the lip boundaries to be smooth in shape and "whole" (i.e. even if they contain small regions whose pixels have been wrongly classified as skin pixels). To recast this energy minimisation problem into the level set framework, we introduce the level set surface function φ(x) : R² → R. The contour C is represented by the (one-dimensional) zero level set of this function, C = {x : φ(x) = 0}, and the inside and outside are represented by the following two-dimensional sets: ins(C) = {x : φ(x) > 0} and out(C) = {x : φ(x) < 0}. We also introduce the Heaviside function, defined as follows:

H(\phi) = \begin{cases} 1, & \phi > 0 \\ 0, & \phi \le 0 \end{cases}    (2)
and this allows us to write the level set version of (1) as follows:

E(\phi) = -\left( \int_{\Omega} f\,H(\phi)\,dx \right)^{2} - \left( \int_{\Omega} f\,(1 - H(\phi))\,dx \right)^{2} + \mu \int_{\Omega} \delta(\phi)\,|\nabla\phi|\,dx    (3)

Here, δ(φ) is the distributional derivative of the Heaviside function. The last term is equal to the length of the set φ(x) = 0 and is a consequence of the co-area formula [10]. The level set evolution equation obtained for the gradient descent of (3) is given by

\frac{\partial\phi}{\partial t} = \left[ \beta f + \mu\,\mathrm{div}\!\left( \frac{\nabla\phi}{|\nabla\phi|} \right) \right] |\nabla\phi|    (4)

where \beta = \int_{\Omega} f(x)\,dx. The discrete version of (4) over the image grid is solved using well-known methods for solving partial differential equations.
3 Some Results
The method was tested on face images from the XM2VTS database. The training set for the SVM classifier consisted of 8 images chosen from this database, and the performance of the method was tested on 122 images (not including the ones in the training set).
Fig. 1. Face image (left), SVM decision function map (centre) and final lip segmentation (right)

Fig. 2. Lip segmentation in open-mouth cases
The centre image in Figure 1 provides a two-colour visual representation of the SVM decision map; red indicates that the corresponding pixel has been classified as a lip pixel, while blue indicates non-lip classification. The intensity of the colour is proportional to the reliability of the classification; brighter colours represent pixels more likely to belong to their respectively assigned classes. From this representation it can be observed that even within the true lip region, there are sub-regions that the classifier is incorrect for, or uncertain about (for example, the dark region of the upper-left portion of the lip). However, since the level-set minimisation process (through the boundary-smoothness term) takes a broader, region-level look instead of focusing on individual pixel values, the lip region has been correctly segmented.
Fig. 3. Some examples of good results
As the results indicate, the classifier is able to distil sufficient information from the training process to allow it to capture the lip region over a wide variety of faces. From Figure 2, the advantages of using level set techniques should also be apparent: the mouth contour has been located in the examples where the subjects have opened their mouths. The contours are also able to deal with the high curvature at the corners of the mouth.

3.1 Quantitative Measurement of the Results
To numerically define a notion of the quality of the results, a reasonable approach is to find a measure that matches the shape of the extracted region against the ground truth. This was done as follows: let G denote the lip region as demarcated in the ground truth, and E the lip region extracted by the method, considered as sets inside the image region (which can be taken to be the "universal set"). If | · | denotes the region area, then the quality of the extracted shape when matched against the ground-truth shape is defined as follows:

q(E, G) = \frac{|E \cap G|}{|E \cup G|}    (5)
The measure q is simply the ratio between the overlapping area and the total area of the two shapes. q(·, ·) is symmetric with respect to its arguments and lies in the range [0, 1], where 1 represents a perfect match and 0 represents a total mismatch. The histogram indicates that the majority of results have a quality measure greater than 0.65 and visually correspond to a "good" segmentation result. Results with values in the intermediate range are of reasonable quality towards the upper end of this range, while at the lower end lie images that were partially segmented, where the level-set evolution process got stuck in a local minimum. There are a small number of completely failed segmentations, and these have a quality measure of less than 0.1.
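The overlap measure of Eq. (5) is what is now commonly called intersection-over-union; a direct implementation on binary masks, with names of our own choosing, is:

```python
import numpy as np

def segmentation_quality(extracted, ground_truth):
    """Eq. (5): |E ∩ G| / |E ∪ G| for two boolean masks of equal shape."""
    e = np.asarray(extracted, dtype=bool)
    g = np.asarray(ground_truth, dtype=bool)
    union = np.logical_or(e, g).sum()
    return np.logical_and(e, g).sum() / union if union else 1.0
```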
Fig. 4. Histogram distribution of the image quality
3.2 Discussion
It may be noticed from the results that at times some portion of the face other than the lips may also be captured. Since only colour features have been used, this is inevitable, as other parts of the face may have colouring similar to the lips. However, due to the topological property of level sets, these regions tend to be disjoint from the true lip contours, and can be treated as cases where there are multiple hypotheses regarding the lip region. This is not necessarily bad; it may in fact be advantageous if, for instance, the image/video had several faces, so the method would work without having prior knowledge of their number. It is also observed that whether or not the method discovers multiple lip-like regions is strongly dependent on the shape and position of the initial level set contour. Gradient descent energy minimisation gives local solutions which depend on the initial estimate of the object whose energy is being minimised. As might be expected, by initialising closer to the expected lip region, better results are obtained (the results are usually "better" in the sense that a smaller number of lip-like candidates are extracted). By employing another process that considers higher level
information such as lip shape, position and size, the true lip contour can be chosen from the set of lip candidates. In a tracking application, it is expected that this would have to be done only once (if at all) after which the segmented lip boundary from every frame would serve as a good initial estimate for the next frame, and due to the local nature of the minimisation, lip-like regions away from the lip region would not be found again. With regard to the few instances of complete failures such as the leftmost image in figure 4, it may be attributed to the fact that the skin tone in these cases was quite different from any of the eight images used in the training set.
4 Conclusion
In this paper, we have presented a new method to segment lips in face images. The method employs learning to deal with difficulties in the separability of lip and skin colour. Contour-based segmentation using level set methods ensures that the lip is segmented as a whole shape, and further deals elegantly with the problem of segmentation of the mouth contour.

4.1 Future Work
It would be interesting to adapt the method proposed in this work to track a speaker’s lips in a video sequence. Although the SVM classifier was found to be the bottle-neck in terms of speed of the method, this problem could be alleviated in the tracking scenario, by pre-computing the classifier’s decision function once over a range of feature vector values and then employing a look-up table to construct the decision function map for each frame. In an environment with controlled lighting conditions, this seems like quite a feasible thing to do, as the skin colour components would not be expected to vary much throughout the sequence (and the variation in the median features would be even less). Alternative methods to speed up the SVM include reducing its complexity by means of selectively reducing the training set, or by simplifying the decision function itself[11][12]. In the level set minimisation phase, the contour extracted at each frame would serve as an excellent approximation for the next frame, and convergence could be achieved within a few iterations. Another improvement that might be considered is the incorporation of shape prior information in the level set framework[13].
References
1. McGurk, H., MacDonald, J.W.: Hearing lips and seeing voices. Nature 264 (1976)
2. Hennecke, M., Prasad, K., Stork, D.: Using deformable templates to infer visual speech dynamics. In: 9th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, pp. 578–582 (1994)
3. Kaucic, R., Blake, A.: Accurate, real-time, unadorned lip tracking. In: ICCV 1998: Proceedings of the Sixth International Conference on Computer Vision, p. 370. IEEE Computer Society, Washington, DC, USA (1998)
4. Luettin, J., Thacker, N.A., Beet, S.W.: Visual speech recognition using active shape models and hidden Markov models. In: IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 1996), vol. 2, pp. 817–820 (1996)
5. Liévin, M., Delmas, P., Coulon, P., Luthon, F., Fristot, V.: Automatic lip tracking: Bayesian segmentation and active contours in a cooperative scheme. In: ICMCS, vol. 1, pp. 691–696 (1999)
6. Bregler, C., Omohundro, S.M.: Nonlinear manifold learning for visual speech recognition. In: ICCV 1995: Proceedings of the Fifth International Conference on Computer Vision, p. 494. IEEE Computer Society, Washington, DC, USA (1995)
7. Tian, Y.-L., Kanade, T., Cohn, J.: Robust lip tracking by combining shape, color and motion. In: Proceedings of the 4th Asian Conference on Computer Vision (ACCV 2000) (2000)
8. Liévin, M., Luthon, F.: Nonlinear color space and spatiotemporal MRF for hierarchical segmentation of face features in video. IEEE Transactions on Image Processing 13, 63–71 (2004)
9. Osher, S., Fedkiw, R.: Level Set Methods and Dynamic Implicit Surfaces. Springer, Heidelberg (2003)
10. Morgan, F.: Geometric Measure Theory: A Beginner's Guide, 3rd edn. Academic Press, London (2000)
11. Bakir, G.H., Bottou, L., Weston, J.: Breaking SVM complexity with cross-training. In: NIPS (2004)
12. Burges, C.J.C.: Simplified support vector decision rules. In: International Conference on Machine Learning, pp. 71–77 (1996)
13. Chan, T., Zhu, W.: Level set based shape prior segmentation. In: CVPR 2005: Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 1164–1170. IEEE Computer Society, Washington, DC, USA (2005)
A Robust Two Level Classification Algorithm for Text Localization in Documents
R. Kandan, Nirup Kumar Reddy, K.R. Arvind, and A.G. Ramakrishnan
MILE Laboratory, Electrical Engineering Department, Indian Institute of Science, Bangalore 560 012, INDIA
[email protected],
[email protected],
[email protected],
[email protected]
Abstract. This paper describes a two level classification algorithm to discriminate the handwritten elements from the printed text in a printed document. The proposed technique is independent of size, slant, orientation, translation and other variations in handwritten text. At the first level of classification, we use two classifiers and present a comparison between the nearest neighbour classifier and Support Vector Machines(SVM) classifier to localize the handwritten text. The features that are extracted from the document are seven invariant central moments and based on these features, we classify the text as hand-written. At the second level, we use Delaunay triangulation to reclassify the misclassified elements. When Delaunay triangulation is imposed on the centroid points of the connected components, we extract features based on the triangles and reclassify the misclassified elements. We remove the noise components in the document as part of the pre-processing step.
1 Introduction
Most document images invariably consist of a mixture of machine printed elements such as logos, text and barcodes, and handwritten elements such as addresses or name texts, signatures and markings. From established methods of machine-printed and handwritten character recognition it is understood that the two tasks require quite different methods. Hence a necessary preprocessing step for an OCR is the separation of machine-printed and handwritten elements. Imade et al. [1] have described a method to segment a Japanese document into machine-printed Kanji and Kana, handwritten Kanji and Kana, photographs and printed images. They extracted the gradient and luminance histograms of the document image and used a feed-forward neural network in their system. Kuhnke et al. [2] developed a method for the distinction between machine-printed and handwritten character images using directional and symmetrical features as the input of a neural network. Guo and Ma [3] have proposed a scheme which combines the statistical variations in projection profiles with hidden Markov models (HMMs) to separate the handwritten material from the machine printed
text. Fan et al. [4] have proposed a scheme for the classification of machine-printed and handwritten texts. They used spatial features and character block layout variance as the prime features in their approach. They have also claimed that this technique could be applied to English or Chinese document images. Pal and Chaudhuri [5] have used horizontal projection profiles for separating printed and handwritten lines in Bangla script. In this paper we use a set of seven 2D invariant moments that are insensitive to translation, scale, mirroring and rotation as the features for distinguishing the printed and handwritten elements. We then use Delaunay triangulation to reassign the labels assigned to the elements. We find that the accuracy achieved is around 87.85% using the nearest neighbour classifier, and 93.22% with the SVM classifier.
2 System Description
The entire system is divided into three stages. The first stage is the preprocessing stage, in which the document is cleaned of all the noise components present such as spurious dots and lines. In the second stage we extract the features based on invariant moments for classification of the elements into printed or handwritten. This classification is done with the Nearest neighbor and SVM classifiers separately. Finally, in the third stage we use Delaunay Triangulation to reassign the class labels assigned to the elements. The flowchart of the entire system is as shown in Figure 1. Let us look at the three stages in detail.
Fig. 1. Flowchart showing the Algorithm for Text Localization
2.1 Preprocessing
In this stage we remove noise elements such as dots and lines from the document image, as described below (a sketch of this filtering step follows the list).
– Apply connected component analysis (CCA).
– Obtain the bounding boxes of the connected components. We then find the area Ai of each connected element, as well as the minimum area Amin and maximum area Amax over the entire document.
– If one of the following conditions is met, the component is considered noise and is removed:
  • (Ai − Amin)/(Amax − Amin) < T1, a threshold value (for the test documents it is set at 0.002);
  • the aspect ratio indicates a horizontal or vertical line;
  • the height or width of the connected component is less than a threshold value, in which case it is a spurious element or a dot.
The thresholds used have been chosen empirically.
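A possible rendering of the pre-processing rules above, using OpenCV connected-component analysis, is sketched below; the parameter names (`T1`, `min_dim`, `max_aspect`) and the exact form of the aspect-ratio test follow our reading of the description and are not the authors' code.

```python
import cv2
import numpy as np

def remove_noise_components(binary, T1=0.002, min_dim=3, max_aspect=20.0):
    """Drop dots, thin lines and speckle from a binarised document image.

    binary: uint8 image with foreground = 255.  Returns a cleaned copy.
    """
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    if n <= 1:
        return binary.copy()
    areas = stats[1:, cv2.CC_STAT_AREA].astype(float)
    a_min, a_max = areas.min(), areas.max()
    keep = np.zeros_like(binary)
    for i in range(1, n):
        w = stats[i, cv2.CC_STAT_WIDTH]
        h = stats[i, cv2.CC_STAT_HEIGHT]
        a = float(stats[i, cv2.CC_STAT_AREA])
        too_small = (a - a_min) / (a_max - a_min + 1e-9) < T1     # relative-area test
        line_like = max(w, h) / max(1.0, min(w, h)) > max_aspect  # horizontal/vertical lines
        tiny = w < min_dim or h < min_dim                         # dots and specks
        if not (too_small or line_like or tiny):
            keep[labels == i] = 255
    return keep
```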
2.2 Feature Extraction
We use features derived from the invariant moments technique, which evaluates seven distributed parameters of an image element. The invariant moments (IMs) are well known to be invariant under translation, scaling, rotation and reflection [6,7]. They are measures of the pixel distribution around the centre of gravity of the character and allow the global character shape information to be captured. In the present work, the moment invariants are evaluated using central moments of the image function f(x,y) up to third order. The discrete representation of the central moments is

\mu_{pq} = \sum_{x}\sum_{y} (x - \bar{x})^p (y - \bar{y})^q f(x, y)    (1)

for p, q = 0, 1, 2, \ldots, where \bar{x} and \bar{y} are evaluated from the geometrical moments M_{pq} as follows:

\bar{x} = M_{10}/M_{00}, \quad \bar{y} = M_{01}/M_{00}    (2)

M_{pq} = \sum_{x}\sum_{y} x^p y^q f(x, y)    (3)
The central moments \mu_{00}, \mu_{10}, \mu_{01}, \mu_{11}, \mu_{20}, \mu_{02}, \mu_{30}, \mu_{03}, \mu_{21}, \mu_{12} of order up to 3 are calculated. A further normalization for variations in scale is implemented using the formula

\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}}    (4)

From the central moments, the following values are calculated:
\phi_1 = \eta_{20} + \eta_{02}
\phi_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2
\phi_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2
\phi_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2
\phi_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]
\phi_6 = (\eta_{20} - \eta_{02})[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})
\phi_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]
Fig. 2. (a) Input document; (b) image after the handwritten elements are classified using the NN classifier
where φ7 is a skew invariant to distinguish mirror images. In the above, φ1 and φ2 are second order moments and φ3 through φ7 are third order moments. φ1 (the sum of the second order moments) may be thought of as the spread of the pattern, whereas φ2 may be interpreted as the "slenderness" of the pattern. The third order moments φ3 through φ7 do not have any direct physical meaning but include the spatial frequencies and ranges of the image element.
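For reference, the seven invariants above are the classical Hu moments, which OpenCV can compute directly from a component's binary mask, as in the short sketch below (the helper name and input convention are our own assumptions).

```python
import cv2
import numpy as np

def component_hu_features(component_mask):
    """Seven Hu invariant moments of a binary connected-component mask."""
    m = cv2.moments(component_mask.astype(np.uint8), binaryImage=True)
    return cv2.HuMoments(m).flatten()          # [phi_1, ..., phi_7]
```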
2.3 Classification
The following classifiers are used to localise the handwritten text regions: (i) nearest neighbour classifier, and (ii) support vector machines. The feature vector is considered to be a point in the feature space, and the training data is a distribution of points in the feature space. For a test block, we extract the feature vector and then compute the Euclidean distances to each of the points of the training data. Using the nearest neighbour principle, we assign to the test block the class label of the training vector with the minimum distance from the test vector. Figures 2(a) and 2(b) depict the input image from which the handwritten elements are to be separated and the output after classification using the nearest neighbour classifier. In Figure 2(b), all elements that have a class value of two, which represents handwritten text, are marked by a magenta bounding box. We trained an SVM classifier with a radial basis function (RBF) kernel [8] with the invariant moments as the features. The SVM finds a linear separating hyperplane with the maximal margin in a higher dimensional space, where C > 0 is the penalty parameter of the error term. The RBF kernel used is given below:

K(x_i, x_j) = \exp(-\gamma \|x_i - x_j\|^2), \quad \gamma > 0    (5)
The kernel parameters were chosen using five-fold cross-validation over all the training data samples. The best parameters were found to be C = 256 and γ = 0.0625, operating at an efficiency of 96%. At a lower computational cost, i.e. by reducing the value of C to 32, we achieve an efficiency of 95% under cross-validation, as shown in Figure 3. Figure 4 depicts the image in which the handwritten elements are separated using SVMs.
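A five-fold grid search over the stated ranges (γ from 1 down to 2⁻¹⁰, C from 1 up to 2¹⁰) can be reproduced with scikit-learn as sketched below; the exact grid spacing and the library choice are our assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def tune_rbf_svm(X, y):
    """Five-fold cross-validation over (C, gamma) for an RBF-kernel SVM.

    X: N x 7 matrix of invariant-moment features; y: labels (1 = printed, 2 = handwritten).
    """
    grid = {
        "C": 2.0 ** np.arange(0, 11),        # 1 ... 2^10
        "gamma": 2.0 ** np.arange(-10, 1),   # 2^-10 ... 1
    }
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
    search.fit(X, y)
    return search.best_estimator_, search.best_params_
```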
2.4 Reclassification
We now use Delaunay triangulation [9] to reclassify the elements which are misclassified. We briefly give the definition of Delaunay triangulation: the Delaunay triangulation of a set of non-degenerate vertices V is defined as the unique triangulation with empty circumcircles, i.e. no vertex lies inside the circumscribing circle of any Delaunay triangle:

DT(V) = \{ (p_i, p_j, p_k) \in V^3 : B(p_i, p_j, p_k) \cap V \setminus \{p_i, p_j, p_k\} = \emptyset \}    (6)
where B(p_i, p_j, p_k) is the circle circumscribed by the three vertices p_i, p_j, p_k that form a Delaunay triangle. The Delaunay triangulation carried out on printed-text and handwritten-text regions has the following properties:
– The lengths of the sides of most triangles in a printed text region are more similar to one another than those in handwritten text.
– Triangles in printed text tend to have their longest (and similar-length) sides linking point pairs between two adjacent text lines, above which is also printed text.
– The heights of the triangles in a printed text region are uniform.
The above features are extracted after applying the Delaunay triangulation, and a threshold based on comparison with the neighbouring points is set to reclassify the text as machine printed or handwritten. If a particular element and its neighbouring elements have similar features, but the element in question is labelled differently, then it is assigned the label of its neighbours. We carry out the reclassification as per the following algorithm (a sketch of this pass is given after the list):
– First, the Delaunay triangulation is computed on the centroid points of the document, as shown in Figure 5(a), in which the NN classifier is used, and Figure 5(b), in which the SVM is used as the classifier.
– Now, consider a centroid point P(x,y). A number of triangles originate from P; thus P is associated with other centroid points P1, P2, ..., Pn by the Delaunay triangles. All such points P1, P2, ..., Pn connected to P are said to be the neighbouring points of P.
– After obtaining these neighbouring points, we compare the label of each point with the label of the centroid point P(x,y). If the label is not the same, we find the degree of similarity of the triangles, i.e. the difference between the height of the element defined by the neighbouring centroid point and the height of the element defined by the reference point P(x,y). If the difference is less than 7 pixels, we increment a count indicating similarity in height.
– If more than 50% of the neighbouring points have similar height (i.e. a height difference of less than 7 pixels), we reassign the label (1 becomes 2, or 2 becomes 1). Let Ci be the count and Ni the total number of neighbouring points of centroid point i; if (Ci / Ni) × 100 > 50, the label of i is reassigned, otherwise the label remains the same. This is then done for all the centroid points.
– This means that if the centroid point P has different text from its neighbouring points, the height difference will be greater than 7 pixels. However, if the centroid point P has similar text to its neighbouring points but is misclassified, we compare the height feature of the triangles and compute the difference in height. If this feature is the same (i.e. the difference in height is less than 7 pixels) for more than 50% of the neighbouring points, then the point P(x,y) is given the label of the neighbouring points and is re-classified. If the point P has the same label as its neighbours, the above steps are not required and the algorithm proceeds to the next point.
Figures 6(a) and 6(b) show the document with the handwritten elements localized within magenta coloured bounding boxes after the labels are reclassified.
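The reclassification pass described above can be prototyped with SciPy's Delaunay triangulation as below; the 7-pixel height tolerance and the 50% vote follow the description, while the data layout (arrays of centroids, heights and labels) is an assumption of the sketch, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import Delaunay

def reclassify_labels(centroids, heights, labels, height_tol=7, vote=0.5):
    """Flip a component's label when most of its Delaunay neighbours of similar
    height carry the other label.

    centroids: N x 2 array of component centroids
    heights:   length-N array of bounding-box heights
    labels:    length-N integer array with 1 = printed, 2 = handwritten
    """
    tri = Delaunay(centroids)
    # Build neighbour sets from the triangle list.
    neighbours = [set() for _ in range(len(centroids))]
    for a, b, c in tri.simplices:
        for u, v in ((a, b), (b, c), (c, a)):
            neighbours[u].add(v)
            neighbours[v].add(u)
    new_labels = np.array(labels).copy()
    for i, nbrs in enumerate(neighbours):
        if not nbrs:
            continue
        # Count neighbours of similar height that carry a different label.
        differing = [j for j in nbrs
                     if abs(heights[j] - heights[i]) < height_tol and labels[j] != labels[i]]
        if len(differing) / len(nbrs) > vote:
            new_labels[i] = 3 - labels[i]      # swap 1 <-> 2
    return new_labels
```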
Fig. 3. Graph showing the cross-validation results, where γ is varied from 1 to 2^{-10}, C is varied from 1 to 2^{+10}, and the best pairs of (γ, C) are chosen
Fig. 4. Image after the Handwritten elements are classified using SVM
Fig. 5. (a) Image after the NN classification showing the Delaunay triangulation plot; (b) image after the SVM classification showing the Delaunay triangulation plot
3 Experimental Results

3.1 Data Description

The training data are extracted from over 500 documents which contain predominantly machine printed elements. The handwritten elements are composed of text that is both cursive and block handwriting, besides signatures, dates and addresses. Our test data consist of 150 English document images, scanned at 200 dpi and stored in 1-bit depth monochrome format. These documents contain handwritten elements, signatures, logos and other such items along with free-flowing text paragraphs.
3.2 Accuracy Calculation
Table 1 shows the classification accuracy using the proposed method. Using the nearest neighbour classifier, we find that the number of misclassified printed text elements is higher, which reduces the overall accuracy after the second stage of classification. The SVM classifier shows better accuracy, and the misclassified elements are fewer in number. There were a total of 1,678 handwritten elements in the test documents; the nearest neighbour classifier classified 1,475 elements correctly, while the SVM classified 1,565 elements correctly.
Table 1. Classification accuracy of handwritten text on the test data of 150 documents

Localization of handwritten text                               Correctly classified   Misclassified elements   % Accuracy
Using nearest neighbour at the first stage of classification   1,475                  672                      87.85%
Using SVM at the first stage of classification                 1,565                  156                      93.22%
Fig. 6. (a) Handwritten elements re-classified after the NN classifier, shown in bounding boxes; (b) handwritten elements re-classified after the SVM classifier, shown in bounding boxes
4 Conclusion

In this paper we have presented a novel method for the extraction of handwritten text from the machine printed text in documents. After the first level of classification it is found that a considerable amount of printed text is misclassified, and hence we employ Delaunay triangulation to reclassify the text. This is found to give higher efficiency and more accurate results. This two-level classification improves the overall accuracy of localization of handwritten text compared with a single level of feature extraction and classification. This method
can be used to extract handwritten components from noisy documents and hence is a robust algorithm. The misclassification of printed text has been greatly reduced by the use of SVM as opposed to the nearest neighbour classifier. However, this method fails to localize accurately when the handwritten elements have a structure similar to machine printed text, e.g. block letters. Also, in some cases where there is a continuous run of handwritten text, only certain parts of the handwritten text are separated. This can be addressed by optimal selection of more features at the first level of classification, which can then be passed to the Delaunay triangulation stage.
References
1. Imade, S., Tatsuta, S., Wada, T.: Segmentation and Classification for Mixed Text/Image Documents Using Neural Network. In: Proceedings of 2nd ICDAR, pp. 930–934 (1993)
2. Kuhnke, K., Simonicini, L., Kovacs-V, Z.: A System for Machine-Written and Hand-Written Character Distinction. In: Proceedings of 3rd ICDAR, pp. 811–814 (1995)
3. Guo, J.K., Ma, M.Y.: Separating handwritten material from machine printed text using hidden Markov models. In: Proceedings of the International Conference on Document Analysis and Recognition, pp. 439–443 (2001)
4. Fan, K., Wang, L., Tu, Y.: Classification of Machine-Printed and Hand-Written Texts Using Character Block Layout Variance. Pattern Recognition 31(9), 1275–1284 (1998)
5. Pal, U., Chaudhuri, B.: Automatic Separation of Machine-Printed and Hand-Written Text Lines. In: Proceedings of 5th ICDAR, Bangalore, India, pp. 645–648 (1999)
6. Ramteke, R.J., Mehrotra, S.C.: Feature Extraction Based on Invariant Moments for Handwriting Recognition. In: Proc. of 2nd IEEE Int. Conf. on Cybernetics and Intelligent Systems (CIS 2006), Bangkok (June 2006)
7. Gonzalez, R.C., Woods, R.E.: Digital Image Processing. Pearson Education (2002)
8. Chang, C.-C., Lin, C.-J.: LIBSVM: a library for support vector machines. Software (2001), available at http://www.csie.ntu.edu.tw/~cjlin/libsvm
9. Davoine, F., et al.: Fractal image compression based on Delaunay triangulation and vector quantization. IEEE Trans. Image Processing 5(2), 338–346 (1996)
Image Classification from Small Sample, with Distance Learning and Feature Selection
Daphna Weinshall and Lior Zamir
School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem, Israel 91904
Abstract. Small sample is an acute problem in many application domains, which may be partially addressed by feature selection or dimensionality reduction. For the purpose of distance learning, we describe a method for feature selection using equivalence constraints between pairs of datapoints. The method is based on L1 regularization and optimization. Feature selection is then incorporated into an existing nonparametric method for distance learning, which is based on the boosting of constrained generative models. Thus the final algorithm employs dynamical feature selection, where features are selected anew in each boosting iteration based on the weighted training data. We tested our algorithm on the classification of facial images, using two public domain databases. We show the results of extensive experiments where our method performed much better than a number of competing methods, including the original boosting-based distance learning method and two commonly used Mahalanobis metrics. Keyword: Feature Selection, Distance Learning, Small Sample, L1 regularization.
1 Introduction
A distance (or inverse similarity) function, defined for every pair of datapoints, is a useful way to describe data. It is also a useful way to transfer knowledge between related classes, and thus address the problems of small sample and even one-shot learning (with only one example per new class). Distances can be directly used for unsupervised clustering - as spectral methods do, for example - or for supervised classification - as in nearest neighbor classification. An important special case is the family of kernel functions, which can also be used to enhance a variety of kernel-based classification algorithms (such as kernel-SVM). Here we are interested in the problem of learning a distance function from a small sample, when the training data is a set of equivalence constraints on pairs of datapoints, indicating whether the pair originated from the same or different sources. The problem of small sample is ubiquitous in application domains such as computer vision and bioinformatics, where data points may be initially represented in some high dimensional feature space, while the number of training examples is typically much smaller than the feature space dimensionality. It may lead to
problems of over-fitting and poor generalization. One common way to address this problem is to reduce dimensionality, or dramatically prune the dataspace via feature selection. We focus here on the second avenue of feature selection. We describe below a method for feature selection based on equivalence constraints, and incorporate the method into an existing non-parametric method for distance learning [10]. Our feature selection method, described in Section 2.1, relies on the use of the L1 norm in the evaluation of the cost function. There has been much recent work on the use of concave cost functions in order to achieve sparse signal decomposition [6], and L1 is probably the simplest such norm. Feature selection with L1 regularization has been studied before, as in the lasso method [17], but see also [19,13]. Unlike these methods, here we use both L1 optimization and regularization, i.e., the loss function defined over the training data of equivalence constraints is also defined in terms of the L1 norm. This choice leads to a sparse solution over the set of constraints. Thus, in a typical solution some constraints are fully satisfied, while others may deviate largely from the target value; this seems desirable given the discrete nature of equivalence constraints. Distance learning from equivalence constraints has been studied extensively in recent years. Much of this work focused on the learning of the (linear) Mahalanobis metric, as in [15,5,4,3,9], where feature selection is often done implicitly or explicitly as part of the learning procedure. We use here a more powerful non-parametric distance function learning algorithm [10] based on boosting. The weak learner in this algorithm computes, in a semi-supervised manner, a generative constrained Gaussian Mixture Model [14] to describe the data. But here lies the problem with small sample: in each iteration, the GMM algorithm can only work in a rather low dimensional space, as it must estimate a number of covariance matrices defined over this space. Currently, the problem is solved by projecting the data initially into a low-dimensional space where the weak learner has a chance of working properly. With a very small sample, this one-time dimensionality reduction may be catastrophic and lead to poor results. We therefore propose to compute the dimensionality reduction afresh in each iteration of the boosting method; see Section 2.2. In Section 3 we describe extensive experimental results on facial image classification, where a very significant improvement is obtained. In our experiments we used a single pair from each class of objects (each individual face), which is an instance of 'one shot learning': how to learn a classifier from one example of a new class. Distance learning offers one way to approach this problem. Our results show that our algorithm performs better than a number of alternative distance learning methods. Other approaches have been developed recently in the context of object and class recognition, see for example [12,2,7,16,11]. Also note that in our method, we use feature selection to enhance the performance of the weak learner in each boosting iteration, as in [18]. In a very different approach, boosting is used in [1] to select embeddings in the construction of a discriminative classifier.
2 Distance Learning Algorithm
As noted in the introduction, we use a non-parametric distance learning method [10] based on boosting, where in each iteration a generative Gaussian Mixture Model is constructed using a set of weighted equivalence constraints. This weak learner estimates a number of covariance matrices from the training sample. Thus, with small sample it must be used in a low dimensional space, which can be obtained via dimensionality reduction or feature selection (or both), see Section 2.1. Our final algorithm selects features dynamically - different features in each boosting iteration, as described in Section 2.2.

2.1 Feature Selection with L1 Optimization and Regularization
Notations. Given a set of data points, equivalence constraints over pairs of points consist of positive constraints - denoting points which come from the same source, and negative constraints - denoting points from different sources. Let p1 and p2 denote two 1×n data points. Let Δ+ = p1 − p2 denote the vector difference between two positively constrained points, Δ− = p1 − p2 denote the difference between two negatively constrained points, and WΔ+, WΔ− denote the weight of constraint Δ+ and Δ− respectively. Let A denote an n × n diagonal matrix whose i-th diagonal element is denoted Ai, N denote the number of features to be selected, and μN denote a regularization parameter which controls the sparseness of A as determined by N.

Problem Formulation. Feature selection is obtained using L1 regularization and optimization, which favors sparsity in both the feature selection matrix A and the set of satisfied constraints. The latter property implies a certain "slack" in the constraint satisfaction, where some constraints are fully satisfied while others behave like outliers; this seems desirable, given the discrete nature of equivalence constraints. Thus we define an optimization problem which will be solved (in its most general form) by linear programming. We define two variants of the optimization problem, with some different characteristics (as will be discussed shortly):

LP1: Linear Program version 1

F = \min_A \sum_{\Delta^+} W_{\Delta^+} \|\Delta^+ \cdot A\|_1 \;-\; \sum_{\Delta^-} W_{\Delta^-} \|\Delta^- \cdot A\|_1 \;+\; \mu_N \sum_i |A_i|
    \text{s.t. } A = \mathrm{diag}[A_i], \; 0 \le A_i \le 1    (1)
Given the constraints on A_i, the following derivation holds:

F = \min_A \sum_{\Delta^+} W_{\Delta^+} \sum_{i=1}^{n} |\Delta^+_i| A_i \;-\; \sum_{\Delta^-} W_{\Delta^-} \sum_{i=1}^{n} |\Delta^-_i| A_i \;+\; \mu_N \sum_{i=1}^{n} A_i
  = \min_A \sum_{i=1}^{n} A_i \Big( \sum_{\Delta^+} W_{\Delta^+} |\Delta^+_i| - \sum_{\Delta^-} W_{\Delta^-} |\Delta^-_i| + \mu_N \Big)
  = \min_A \sum_{i=1}^{n} A_i w_i, \quad \text{s.t. } 0 \le A_i \le 1    (2)

where w_i = \sum_{\Delta^+} W_{\Delta^+} |\Delta^+_i| - \sum_{\Delta^-} W_{\Delta^-} |\Delta^-_i| + \mu_N.

Clearly minimizing (2) gives a solution where A_i = 0 if w_i > 0, and A_i = 1 if w_i < 0. The regularization parameter μN thus determines exactly how many coordinates A_i will have the solution 1, or in other words, exactly how many features will be selected. Given that we want to select N features, the optimal solution is obtained by sorting the coefficients w_i, and then setting A_i = 1 for the N smallest coordinates, and A_i = 0 otherwise.

LP2: Linear Program version 2

F = \min_A \sum_{\Delta^+} W_{\Delta^+} \|\Delta^+ \cdot A\|_1 \;+\; \sum_{\Delta^-} W_{\Delta^-} \big| 1 - \|\Delta^- \cdot A\|_1 \big| \;+\; \mu_N \Big| N - \sum_i A_i \Big|
    \text{s.t. } A = \mathrm{diag}[A_i], \; 0 \le A_i \le 1    (3)
This defines a linear programming optimization problem, where Ai > 0 implies that feature i is selected (possibly with weight Ai).

Parameters: The only free parameter in both versions is the number of features to be selected. Weights are given by the boosting mechanism, and μN is either determined uniquely by N (in version 1, following the above derivation), or is set to a very large constant (in version 2).

RND: Random feature selection. To evaluate the significance of the way features are selected, we tested a third method, whereby N features are randomly selected. This would typically lead to very poor performance in the original high dimensional feature space, and it was therefore preceded by PCA dimensionality reduction, followed by projecting the data onto the M most significant principal components. This gave reasonable performance, which critically depended on the value of M (see Fig. 6). In later comparisons this value was chosen empirically, so as to allow optimal performance for this method.
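To make the LP1 selection rule concrete, here is a minimal NumPy sketch of the closed-form solution derived in (2): compute the per-feature coefficients w_i and keep the N features with the smallest values. The function and variable names (lp1_select_features, pos_diffs, neg_diffs) are ours, not the paper's, and the toy data at the end is purely illustrative.

```python
import numpy as np

def lp1_select_features(pos_diffs, pos_w, neg_diffs, neg_w, n_select, mu_n=0.0):
    """Closed-form LP1 feature selection.

    pos_diffs: array (P, n) of differences p1 - p2 for positive constraints
    pos_w:     array (P,)   of constraint weights W_{Delta+}
    neg_diffs: array (M, n) of differences for negative constraints
    neg_w:     array (M,)   of constraint weights W_{Delta-}
    n_select:  number of features N to keep
    mu_n:      regularization constant (shifts all w_i equally here)
    """
    # w_i = sum_+ W |Delta+_i| - sum_- W |Delta-_i| + mu_N  (per feature i)
    w = (pos_w[:, None] * np.abs(pos_diffs)).sum(axis=0) \
        - (neg_w[:, None] * np.abs(neg_diffs)).sum(axis=0) + mu_n
    # Optimal A: A_i = 1 for the N smallest coefficients, 0 otherwise.
    selected = np.argsort(w)[:n_select]
    a_diag = np.zeros(w.shape[0])
    a_diag[selected] = 1.0
    return selected, a_diag

# Toy usage with random constraints (10-dimensional data, keep 3 features).
rng = np.random.default_rng(0)
pos = rng.normal(scale=0.1, size=(5, 10))   # small differences within a class
neg = rng.normal(scale=1.0, size=(8, 10))   # large differences across classes
sel, A = lp1_select_features(pos, np.ones(5) / 5, neg, np.ones(8) / 8, 3)
print(sel)
```

Note that, exactly as the derivation states, the only computation is a sort; no linear program needs to be solved for this variant.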
2.2 The Final Distance Learning Algorithm
The final algorithm is shown in Alg. 1, where modifications from the original distBoost algorithm [10] are highlighted in boldface. In the algorithm's description, we denote by W^t_{i1 i2} the weight W_{Δ+/−} at iteration t of the constraint Δ+/− = (p_{i1} − p_{i2}).
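For readers who prefer code to pseudocode, the following is a hedged sketch of the boosting-side bookkeeping (steps 4-7 of Algorithm 1 below): computing r_t, the hypothesis weight alpha_t, and the weight update for labeled and unlabeled pairs. The weak hypothesis values are assumed to be given as a precomputed matrix; the encoding of unlabeled pairs as y = 0 and all names are our own conventions, not the authors'.

```python
import numpy as np

def boosting_round(W, y, h_tilde, lam=0.3):
    """One distBoost-style weight update (steps 4-7 of Algorithm 1).

    W:       (n, n) weights over pairs of points at iteration t
    y:       (n, n) pair labels: +1 (same class), -1 (different), 0 encodes y_i = *
    h_tilde: (n, n) weak hypothesis values in [0, 1] for every pair
    lam:     decay-rate tradeoff for unlabeled pairs
    Returns (W_new, per-point weights, alpha_t), or None if r_t <= 0
    (i.e. the current hypothesis is rejected).
    """
    labeled = y != 0
    r_t = np.sum(W[labeled] * y[labeled] * h_tilde[labeled])
    if r_t <= 0:
        return None                      # reject the current hypothesis
    alpha_t = 0.5 * np.log((1 + r_t) / (1 - r_t))
    W_new = np.where(labeled,
                     W * np.exp(-alpha_t * y * h_tilde),   # labeled pairs
                     W * np.exp(-lam * alpha_t))           # unlabeled pairs decay
    W_new /= W_new.sum()                 # step 7: normalize over all pairs
    w_points = W_new.sum(axis=1)         # step 8: translate to per-point weights
    return W_new, w_points, alpha_t
```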
3 Experimental Results

3.1 Methods
Data and methodology: We used two public domain datasets:
Algorithm 1. Distance learning with feature selection

Input:
  Data points: (p_1, ..., p_n), p_k ∈ R^n
  A set of equivalence constraints: (p_{i1}, p_{i2}, y_i), where y_i ∈ {−1, 1}
  Unlabeled pairs of points: (p_{i1}, p_{i2}, y_i = ∗), implicitly defined by all unconstrained pairs of points

Initialize W^1_{i1 i2} = 1/n^2 for i1, i2 = 1, ..., n (weights over pairs of points) and w_k = 1/n for k = 1, ..., n (weights over data points).

For t = 1, ..., T:
  1. Given the original data vectors in R^n, obtain a lower dimensional description in R^D as follows:
     - To balance the effect of positive and negative constraints, normalize the weights such that \sum_{\Delta^+=(p_{i1}-p_{i2})} W^t_{i1 i2} = 1 and \sum_{\Delta^-=(p_{i1}-p_{i2})} W^t_{i1 i2} = 1.
     - Select N ≥ D features using one of the methods described in Section 2.1.
     - Given only the selected features, reduce the data dimensionality to D with PCA, and obtain x_i = G_t(p_i).
  2. As in [14], fit a constrained GMM (weak learner) on the weighted data points x_i in X = R^D using the equivalence constraints.
  3. As in [10], generate a weak hypothesis function \tilde{h}_t : X × X → [0, 1] and define a weak distance function as h_t(x_i, x_j) = (1/2)(1 − \tilde{h}_t(x_i, x_j)) ∈ [0, 1].
  4. Compute r_t = \sum_{(x_{i1}, x_{i2}, y_i = ±1)} W^t_{i1 i2} \, y_i \, \tilde{h}_t(x_{i1}, x_{i2}), only over labeled pairs. Accept the current hypothesis only if r_t > 0.
  5. Choose the hypothesis weight α_t = (1/2) ln((1 + r_t)/(1 − r_t)).
  6. Update the weights of all points in X × X as follows:
     W^{t+1}_{i1 i2} = W^t_{i1 i2} exp(−α_t y_i \tilde{h}_t(x_{i1}, x_{i2}))  if y_i ∈ {−1, 1}
     W^{t+1}_{i1 i2} = W^t_{i1 i2} exp(−λ α_t)                               if y_i = ∗
     where λ is a tradeoff parameter that determines the decay rate of the unlabeled points in the boosting process.
  7. Normalize: W^{t+1}_{i1 i2} = W^{t+1}_{i1 i2} / \sum_{i1,i2=1}^{n} W^{t+1}_{i1 i2}.
  8. Translate the weights from X × X to R^n: w^{t+1}_k = \sum_j W^{t+1}_{kj}.

Output: A final distance function D(p_i, p_j) = \sum_{t=1}^{T} α_t h_t(G_t(p_i), G_t(p_j)).
(i) The class of faces from the Caltech dataset (Faces 1999 from http://www.vision.caltech.edu/archive.html), which contains images of individuals with different lighting, expressions and backgrounds. (ii) The YaleB dataset [8], which contains images of individuals under different illumination conditions. Examples are shown in Fig. 1. Each image was represented by its pixel gray-values, 28 × 28 in the Caltech dataset and 128 × 112 in the YaleB dataset. 19 classes (or different individuals) were used in both datasets, with
Fig. 1. Left: three pictures of the same individual from the YaleB database, which contains 1920 frontal face images of 30 individuals taken under different lighting conditions. Right: two images from the Caltech dataset.
training data composed of two examples randomly sampled from each class in each experiment. This generated 19 positive equivalence constraints, and a larger number of negative constraints. Our experiments indicated that it was sufficient to use a subset of roughly 19 negative constraints (equal to the number of positive constraints) to achieve the best performance. The test set included 20 new random images from each of the 19 classes.

Performance - evaluation and measures: Performance of each algorithm was evaluated on the test set only, using the Equal Error Rate (EER) of the ROC curve. This curve was obtained by parameterizing the distance threshold t: points at distance lower than t were declared 'positive' (or same class), while others were declared 'negative' (or different class). The EER is the point where the miss (false negative) rate equals the false positive rate (a computational sketch of this measure is given at the end of this subsection). In some experiments we used the learnt distance to perform clustering of the test datapoints with the Ward agglomerative clustering algorithm. We then measured performance using the F_{1/2} = 2PR/(P+R) score, where P denotes precision rate and R denotes recall rate. Another performance measure used k-nearest-neighbor classification, where for each point the k nearest points are retrieved, and classification is done in agreement with the majority.

Algorithms: We evaluated three variants of Algorithm 1, using one of the three selection methods described in Section 2.1; accordingly, they are denoted below as LP1, LP2 and RND. For comparison we used the following distance measures:

1. Euclidean metric in the original feature space.
2. RCA [15] - a Mahalanobis metric learning algorithm.
3. DMC [5] - a Mahalanobis metric learning algorithm.
4. Euclidean FS - the Euclidean metric in the lower dimensional space, obtained using feature selection as described in Section 2.1.
5. DB-PCA - the original distBoost algorithm, where dimensionality is initially reduced by projecting the data onto the D largest principal components.

Algorithm parameters: We used 150 boosting rounds for all methods (LP1, LP2, RND, DB-PCA). The number of features N to be selected in each iteration was 80 for the Caltech dataset, and 20 for the YaleB dataset. The dimension D subsequently used by the weak learner was chosen to be the maximal possible.
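The Equal Error Rate evaluation described above can be sketched as follows; this is an illustrative implementation, not the authors' evaluation code, and it simply sweeps the observed distances as candidate thresholds.

```python
import numpy as np

def equal_error_rate(distances, same_class):
    """Equal Error Rate of the ROC obtained by thresholding pair distances.

    distances:  array of pairwise distances D(p_i, p_j) on the test set
    same_class: boolean array, True if the pair comes from the same class
    """
    d = np.asarray(distances, dtype=float)
    pos = np.asarray(same_class, dtype=bool)
    best_gap, eer = 1.0, None
    for t in np.unique(d):
        miss = np.mean(d[pos] > t)       # false negative rate at threshold t
        fa = np.mean(d[~pos] <= t)       # false positive rate at threshold t
        if abs(miss - fa) < best_gap:    # keep the threshold where the two rates meet
            best_gap, eer = abs(miss - fa), 0.5 * (miss + fa)
    return eer
```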
Fig. 2. Results using the Caltech facial image dataset. Left: the ROC curve of eight distance measures. Right: summary of the Equal Error Rate of the ROC curves (left) with standard error (ste) bars, for all algorithms.
Fig. 3. Results using the YaleB facial image dataset, including clustering results (with the Ward algorithm) measured by F_{1/2} on the left, and the summary of the Equal Error Rate of the ROC curves for all algorithms on the right. Ste bars are also shown.
3.2 Results
The performance of the various algorithms in the different experiments is summarized in Figs. 2, 3. Fig. 4 provides visualization of the features (or pixels) selected by the algorithm, while Fig. 5 shows the dependency of the results on the number of features which have been selected. Fig. 6 shows the behavior of the different algorithms as a function of the boosting iteration. Finally, Fig. 7 shows performance evaluation on totally new classes, including faces of individuals that the algorithms have never seen before.

3.3 Discussion
The results in Figs. 2, 3 clearly show that feature selection improves performance significantly. (Note that although the errors may appear relatively high, given the difficulty of the task - with so few training examples - these are state-of-the-art results.) Moreover, Fig. 7 shows that this advantage is maintained even when
Fig. 4. Visualization of the feature selection process using the YaleB dataset. All the features selected by the algorithm up to a certain iteration are shown in white. From left to right, we show iteration 4, 20, and 150 respectively.
Fig. 5. Caltech facial image dataset. Left: the fraction of features (out of 784 image pixels) used by the hypotheses of the LP1 and LP2 methods as a function of the boosting iteration number. Right: EER scores as a function of the fraction of the total number of features selected by the LP1 and LP2 methods.
Fig. 6. Caltech facial image dataset: Left: EER scores as a function of the number of iterations. Right: Clustering performance of method RND as a function of the number of top principal components chosen for representation of the data (M ).
presented with completely novel classes (a learning to learn scenario), which is one of the important motivations for learning distance functions instead of classifiers. When comparing the two main feature selection variants - LP1 and LP2 (see definition in Section 2.1), we see comparable performance. But recall that LP1 has a tremendous computational advantage, since its only computation involves sorting and it does not need to solve a linear programming problem. Thus LP1, with slightly better performance, is clearly preferable. The third
Fig. 7. Results on YaleB when tested on 10 totally novel classes - individuals that the algorithms have never seen before. Left: F_{1/2} clustering scores. Right: K-nearest-neighbor classification scores.
variant - RND - usually performs less well overall. More importantly, its good performance critically depends on the number of features used for the random sampling (see Fig. 6-right), a parameter that is unknown a priori, which makes this variant even less appealing. Neither LP1 nor LP2 uses all of the original features. This is most clearly seen in the results with the YaleB dataset, where only 12% of the initial features are used, as visualized in Fig. 4. This may give them yet another significant advantage over alternative distance learning methods in some applications, where features are expensive to compute. In this respect, the advantage of the LP2 variant is more pronounced, as shown in Fig. 5. Finally, all the results above are reported for very small samples - with 2 examples from each of 19 classes. The relative advantage of the feature selection variants disappears when the sample size increases.
4 Summary
We described a distance learning method which is based on boosting combined with feature selection, using equivalence constraints on pairs of datapoints. In a facial image classification task and with very few training examples, the new method performs much better than the alternative algorithms we have tried. The underlying reason may be that feature selection combined with boosting allows the distance learning algorithm to look at more features in the data, while being able to estimate only a small number of parameters in each round. Thus, within this very difficult domain of image classification from very small sample (or learning to learn), our algorithm achieves the goal of advancing the state-of-the-art.
References 1. Athitsos, V., Alon, J., Sclaroff, S., Kollios, G.: BoostMap: A method for efficient approximate similarity rankings. In: Proc. CVPR (2004) 2. Bart, E., Ullman, S.: Cross-generalization: learning novel classes from a single example by feature replacement. In: Proc. CVPR, pp. 672–679 (2005)
3. Bilenko, M., Basu, S., Mooney, R.J.: Integrating constraints and metric learning in semi-supervised clustering. In: ACM International Conference Proceeding Series (2004) 4. Chang, H., Yeung, D.Y.: Locally linear metric adaptation for semi-supervised clustering. In: ACM International Conference Proceeding Series (2004) 5. De Bie, T., Momma, M., Cristianini, N.: Efficiently learning the metric with sideinformation. In: Gavald´ a, R., Jantke, K.P., Takimoto, E. (eds.) ALT 2003. LNCS (LNAI), vol. 2842, pp. 175–189. Springer, Heidelberg (2003) 6. Donoho, D.L., Elad, M.: Optimally sparse representation in general (nonorthogonal) dictionaries via l1 minimization. Proceedings of the National Academy of Sciences 100(5), 2197–2202 (2003) 7. Ferencz, A., Learned-Miller, E., Malik, J.: Building a classification cascade for visual identification from one example. In: Proc. ICCV, pp. 286–293 (2005) 8. Georghiades, A.S., Belhumeur, P.N., Kriegman, D.J.: From few to many: generative models for recognition under variable pose and illumination. In: IEEE Int. Conf. on Automatic Face and Gesture Recognition, pp. 277–284 (2000) 9. Goldberger, J., Roweis, S., Hinton, G., Salakhutdinov, R.: Neighbourhood components analysis. Advances in Neural Information Processing Systems, 17 (2005) 10. Hertz, T., Bar-Hillel, A., Weinshall, D.: Boosting margin based distance functions for clustering. In: ICML (2004) 11. Hertz, T., Bar-Hillel, A., Weinshall, D.: Learning a kernel function for classification with small training samples. In: ICML (2006) 12. Li, F.F., Fergus, R., Perona, P.: One-shot learning of object categories. IEEE PAMI 28(4), 594–611 (2006) 13. Ng, A.Y.: Feature selection, L1 vs. L2 regularization, and rotational invariance. In: ACM International Conference Proceeding Series (2004) 14. Shental, N., Bar-Hillel, A., Hertz, T., Weinshall, D.: Computing gaussian mixture models with em using equivalence constraints. In: NIPS (2003) 15. Shental, N., Hertz, T., Weinshall, D., Pavel, M.: Adjustment learning and relevant component analysis. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) ECCV 2002. LNCS, vol. 2353, Springer, Heidelberg (2002) 16. Sudderth, E.B., Torralba, A., Freeman, W.T., Willsky, A.S.: Learning hierarchical models of scenes, objects, and parts. In: Proc. ICCV (2005) 17. Tibshirani, R.: Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society. Series B (Methodological) 58(1), 267–288 (1996) 18. Tsymbal, A., Puuronen, S., Skrypnyk, I.: Ensemble feature selection with dynamic integration of classifiers. In: Int. ICSC-CIMA (2001) 19. Zheng, A.X., Jordan, M.I., Liblit, B., Aiken, A.: Statistical debugging of sampled programs. Advances in Neural Information Processing Systems 17 (2003)
Comparison of Techniques for Mitigating the Effects of Illumination Variations on the Appearance of Human Targets

C. Madden¹, M. Piccardi¹, and S. Zuffi²
¹ University of Technology, Sydney, Australia
² ITC-CNR, Milano, Italy
Abstract. Several techniques have been proposed to date to build colour invariants between camera views with varying illumination conditions. In this paper, we propose to improve colour invariance by using data-dependent techniques. To this aim, we compare the effectiveness of histogram stretching, illumination filtration, full histogram equalisation and controlled histogram equalisation in a video surveillance domain. All such techniques have limited computational requirements and are therefore suitable for real time implementation. Controlled histogram equalisation is a modified histogram equalisation operating under the influence of a control parameter [1]. Our empirical comparison looks at the ability of these techniques to make the global colour appearance of single human targets more matchable under illumination changes, whilst still discriminating between different people. Tests are conducted on the appearance of individuals from two camera views with greatly differing illumination conditions and invariance is evaluated through a similarity measure based upon colour histograms. In general, our results indicate that these techniques improve colour invariance; amongst them, full and controlled equalisation consistently showed the best performance.
1 Introduction
Applications in the computer vision field that extract information about humans interacting with their environment are built upon the exploitation of appearance, shape and motion cues in videos. Appearance (i.e. colour-based) features are increasingly being used because cheaper, higher resolution cameras of good pixel quality are available. However, significant problems still affect the reliable use of appearance features for the analysis of humans in videos, such as the variations in illumination and the articulated nature of humans’ geometry. The goal of this paper is to improve the invariance of appearance features such as colour histograms for the global object. This is different from local colour invariants such as CSIFT [2] that describe the object’s colours only in a limited spatial neighbourhood. The improvement of colour invariance is investigated through the comparison of data-dependent techniques that compensate for illumination changes. The evaluation of the illumination invariance of these techniques is based upon measuring their ability to remain invariant for a single person under
Fig. 1. Sample people of interest and their red histograms under differing illumination
different illumination, whilst retaining a high degree of discrimination between different individuals. The colour of an object in a camera view is not the intrinsic colour of the object itself, but rather a view-dependent measurement of the light reflected from the object, and the camera sensitivity to that light [3]. By recording the camera response to different wavelengths of light, the colour sensitivity can be estimated and exploited for model-based or empirical camera characterisation [4], [5]; however, illumination provides a more difficult challenge. Approaches to compensating for illumination changes are broadly classified by Finlayson et al. [3] into colour invariance, which seeks transformations of the colours that are illumination independent, and colour constancy, which seeks to estimate the illumination of the scene to extract the intrinsic colours of objects. Whilst accurate models of the illumination of the scene could extract the intrinsic colours of objects, the implementation of this technique is very difficult. In previous work Javed et al. [6,7] propose to estimate the intensity transfer functions between camera pairs during an initial training phase. Such functions are estimated by displaying common targets to the two cameras under a significant range of illumination conditions, and modelling correspondences in the targets' colour histograms. However, the authors' assumptions in [6,7] that objects are planar, radiance is diffuse and illumination the same throughout the field of view do not hold in real life. Illumination varies at pixel-level resolution and has first-order effects on appearance. Weiss [8] proposed a method to estimate illumination from a sequence of frames of the same scene. Though the method works well for static objects such as the background scene, it cannot accurately predict the illumination over 3D moving targets, especially highly articulated ones such as people. Moreover, in these applications
segmentation is always affected by a certain degree of error and this adds to the effects of illumination variations and pose changes. Figure 1 shows examples of two such people of interest automatically segmented from the background, and how their red channel colour appearance may alter under differing illumination conditions. Approaches to colour invariance have had greater success in mitigating the effects of illumination, which Finlayson et al. [3] suggest occurs because, although the RGB values change, the rank ordering of the responses of each sensor is preserved. This implies that the values for a particular colour channel, such as R, will change from illumination source A to source B; however the ordering of those values will remain invariant, as shown in Figure 1. This observation holds for what we assume to be typical lighting in human environments, which largely consists of natural sunlight, fluorescent lighting, or incandescent lighting. Other lighting sources are sometimes used, but are rarely used in open common spaces where surveillance occurs, so they are outside the scope of this investigation. A range of techniques are used to provide colours that are invariant to illumination, with the most common being chromaticity spaces. Chromaticity can be simply derived from the RGB space using the following transformation:

r = R/(R+G+B), \quad g = G/(R+G+B), \quad b = B/(R+G+B)    (1)
This chromaticity vector (r,g,b) has only two independent co-ordinates and is defined such that it is invariant to the intensity of an illumination source. Changes to the illumination will scale the RGB values by a factor s as (sR,sG,sB), leaving r,g,b invariant. If the illumination source changes in spectral output, say from a white fluorescent source to a yellow incandescent source, then a single scale factor is not sufficient to compensate for such a change. A second diagonal space has also been proposed where each sensor response in the R, G, or B channels can be independently derived. This model allows for a shift in illumination intensity as well as a possible shift in the frequency spectrum for that illumination. The response could be modeled using the grey-world representation [9] by using:

R' = R/R_{ave}, \quad G' = G/G_{ave}, \quad B' = B/B_{ave}    (2)
where Rave , Gave , and Bave denote the means of all the R, G, B values respectively across an entire image. These common techniques are useful for providing measurements that are invariant to illumination to a degree; however they have difficulty in adequately compensating for the multiple illumination sources that could also be time varying in the case of natural sunlight. These multiple illumination sources also have complicated interplay with the complex 3D surfaces of moving objects, where the effect of illumination in the background, or portions of the background may vary from its effect upon foreground objects. These chromaticity techniques cannot identify the difference in intrinsic black and white surfaces or differing shades of
grey. Moreover, the model in (1) is unstable for dark colours. A different approach to colour invariance considers techniques that perform colour normalisation in a data-dependent fashion, such as histogram equalisation. These techniques do not seek to model the illumination conditions and could therefore be more suitable for the uncontrolled scenarios that are predominant in video surveillance. We propose to compare various techniques that transform the RGB data of the object to make the same object more similar under varying illumination conditions, whilst still allowing for the discrimination of differing colours without requiring either training or other assumed scene knowledge. The four techniques compared are illumination filtration [10] (Section 2), histogram stretching of the object histograms (Section 3), and histogram equalisation of the object histograms in both a full mode and a novel controlled equalisation mode [3,1] (Section 4). These techniques have limited computational requirements and are therefore immediately suitable for real time implementation. These methods are compared by applying the techniques at varying parameter levels to a set of 15 tracks acquired from four different individuals within two cameras under differing illumination conditions. After the mitigation techniques have been applied, colour histograms of the object are extracted and compared as described in Section 5. The histograms used are computed in the joint RGB space so as to retain the correlation between colour components. Since the joint RGB space has many possible different values, colours are mapped onto sparse histograms (Major Colour Representations, or MCRs), i.e. histograms retaining only those bins having a non-negligible bin count [1]. Then, the similarity of any two histograms is measured based on the Kolmogorov divergence with equal priors [11]. This allows for a comparison of the effects of the mitigation through the comparison of the similarities of the MCR histograms. The results of the similarity measurements of the appearance between 120 track pairs from matching and non-matching individuals are compared in Section 6. This provides a discussion of the ability of these techniques to improve the invariance of the appearance under different illumination, whilst still retaining discrimination over different individuals.
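For reference, the two classical normalisations recalled in (1) and (2) can be written in a few lines of NumPy. This is a generic sketch, not code from the paper; the guard against zero-sum pixels is our addition, reflecting the instability for dark colours noted above.

```python
import numpy as np

def chromaticity(img):
    """Per-pixel chromaticity (r, g, b) of Eq. (1); img is an (H, W, 3) RGB array."""
    rgb = img.astype(float)
    s = rgb.sum(axis=2, keepdims=True)
    s[s == 0] = 1.0                      # avoid division by zero on black pixels
    return rgb / s

def grey_world(img):
    """Grey-world normalisation of Eq. (2): divide each channel by its image mean."""
    rgb = img.astype(float)
    means = rgb.reshape(-1, 3).mean(axis=0)
    return rgb / means
```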
2 Illumination Filtration
This section outlines a technique of homomorphic filtering of the illumination effects from the image based upon the method described by Toth et al. [10]. This technique assumes objects consist of Lambertian surfaces and that illumination only changes slowly over space in the image. Toth et al. [10] suggest that this low frequency component can be filtered out by converting values to a logarithmic scale and then applying a high pass filter, leaving the mid to high frequency details which in practice relate to the reflectance component of the image. The intensity of the illumination on the surface of the object in the τ-th frame in an image sequence can be modelled as:

y_τ(k) = i_τ(k) · r_τ(k)    (3)
where k is the pixel index in the image, i is the illumination component and r is the reflective component in the image y. If the reflectance component r can be separated from the illumination component i, then it can be used as an illumination invariant representation of the appearance. The slow rate of change of illumination over the space of the image means that it will consist of low frequency components of the image, whilst the reflectance model will consist significantly of mid to high frequency components. Applying the logarithm to (3) transforms the multiplicative relationship between y, i, and r into an additive one: log (yτ (k)) = log (iτ (k)) + log (rτ (k))
(4)
A high pass filter kernel can then be applied to remove the low frequency illumination component i. An exponentiation of the filtered image therefore contains the illumination invariant image consisting of the reflectance information. The parameters of the Gaussian filter applied to remove the illumination relate to the filter size, standard deviation, and a weighting parameter which controls the amount of filtration applied. These parameters are given in this order when the filtration results are presented in Table 1 in Section 6.
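A minimal sketch of this homomorphic filtration, assuming scipy's Gaussian filter as the low-pass estimate of the illumination; the explicit filter-size parameter of the paper is folded into the sigma of gaussian_filter here, and the weighting parameter is interpreted as the fraction of the low-frequency component removed. These are our assumptions, not the authors' exact implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def homomorphic_filter(channel, sigma=2.0, weight=0.5):
    """Illumination filtration in the spirit of Eqs. (3)-(4).

    channel: 2-D array of one colour channel
    sigma:   standard deviation of the Gaussian estimate of the illumination
    weight:  how much of the low-frequency (illumination) component to remove
    """
    log_y = np.log(channel.astype(float) + 1.0)   # log turns i*r into log i + log r
    low = gaussian_filter(log_y, sigma)           # slowly varying illumination estimate
    high = log_y - weight * low                   # keep mostly the reflectance part
    return np.exp(high) - 1.0
```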
Fig. 2. Individuals' R values before and after illumination filtration
3 Histogram Stretching
This section outlines the use of histogram stretching to perform the illumination transformation. This method stretches the object's histogram separately for each of the RGB components to allow for changes in the illumination spectrum. Stretching the histogram should make it appear more similar across a range of illumination conditions without explicitly modelling those illumination sources. It also preserves the rank ordering of the histogram, which Finlayson et al. [3] suggest adds to the success of many of the colour invariance techniques. This technique is demonstrated in Figure 3.
Fig. 3. Histogram stretching of the individual's pixels
The key points for histogram stretching are the selection of the upper and lower limits of the output histogram, and the upper and lower limits of the input histogram for each colour channel. Histogram stretching then performs a linear mapping of the input to output values. We maximise the spread of the histogram by choosing the upper and lower limits of the stretched output to be 255 and 0 respectively. We choose the upper and lower limits of the object histogram based upon a single parameter a which denotes the percentage amount of histogram tails to be ignored. The removal of these tail components of the histogram aims to reduce the amount of noise in the input histogram. It is calculated by cumulating the count in each histogram bin from either end until the percentage a is reached. If we denote the lower input limit as b and the upper input limit as c, then the output r' of the stretching for any given input value r in that channel can be calculated as:

r' = (r − b) \cdot 255 / (c − b)    (5)

This stretching transformation is performed upon each object pixel to generate a new object image which should have a higher tolerance to illumination changes without requiring either training or other assumed scene knowledge. This stretching provides a linear transformation of values so they lie across the entire histogram, whilst still retaining a similar shape to the original object component. The results of the stretching are presented in Table 1 in Section 6 for a range of a values to explore the effect of changing the amount of the histogram that is ignored.
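A sketch of this per-channel stretching, using percentiles as a shortcut for the cumulative tail trimming described above; the parameter names are ours.

```python
import numpy as np

def stretch_channel(values, a=1.0):
    """Histogram stretching of Eq. (5) for one colour channel of an object.

    values: 1-D array of pixel values (0..255) belonging to the object
    a:      percentage of each histogram tail to ignore when picking b and c
    """
    v = np.asarray(values, dtype=float)
    b = np.percentile(v, a)              # lower input limit (a% tail removed)
    c = np.percentile(v, 100.0 - a)      # upper input limit
    if c <= b:
        return v.copy()                  # degenerate histogram: leave unchanged
    out = (v - b) * (255.0 / (c - b))
    return np.clip(out, 0, 255)
```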
4 Histogram Equalisation
This section outlines the use of equalisation to perform a data-dependent transformation of an individual’s histogram. This method differs from histogram stretching as it can provide a non-linear transformation. First this section
explains the application of histogram equalisation as proposed by Finlayson et al. [3], before defining the 'controlled equalisation' as we proposed in [1]. Histogram equalisation, also denoted here as full equalisation, aims to spread a given histogram across the entire bandwidth in order to equalise as far as possible the histogram values in the frequency domain. This operation is data-dependent and inherently non-linear as shown in Figure 4; however it retains the rank order of the colours within the histogram. The equalisation process is applied separately in each of the R, G and B colour components to remap their values according to the following transformation functions:

T_r(i) = \frac{255}{N} \sum_{j=0}^{i} p_r(j)    (6)
T_g(i) = \frac{255}{N} \sum_{j=0}^{i} p_g(j)    (7)
T_b(i) = \frac{255}{N} \sum_{j=0}^{i} p_b(j)    (8)
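In code, full equalisation of one colour channel of the segmented object can be sketched as follows; this is a direct transcription of (6)-(8) under the assumption that the histogram is built from the channel's own integer values, with the function name being ours.

```python
import numpy as np

def equalise_channel(values):
    """Full histogram equalisation, Eqs. (6)-(8), applied to one colour channel.

    values: 1-D array of integer pixel values in [0, 255] from the object.
    Returns the remapped values; the transform T(i) = (255/N) * sum_{j<=i} p(j)
    is built from the channel's own histogram, so it is data-dependent.
    """
    v = np.asarray(values, dtype=int)
    hist = np.bincount(v, minlength=256).astype(float)   # p(j), raw counts
    T = 255.0 * np.cumsum(hist) / v.size                 # cumulative transform
    return T[v]
```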
We also introduce a 'controlled equalisation' as described in [1]. This process is based upon equalising a combination of the object pixels and an amount of pre-equalised pixels that is a proportion k of the object size. These pre-equalised pixels effectively 'control' the amount of equalisation such that the pixels are spread to a limited degree within the spectrum instead of being spread fully. Thus, although an object should become more matchable under a range of illumination conditions, it is still likely to retain a higher degree of discrimination from objects of differing intrinsic colour. This technique is demonstrated at varying parameter levels in Figure 5 below.
Fig. 4. Full equalisation of the individual's pixels
Fig. 5. Controlled equalisation of the individual's pixels with varying k values
This equalisation can be formally described by designating the set of N pixels in a generic object as A, and calling B a second set of kN pixels which are perfectly equalised in their R, G, and B components. Note that the parameter k designates the proportionality of the amount of equalised pixels to the amount of pixels in A. From their union A∪B, the cumulative histograms of the R, G, and B components, p_r(i), p_g(i), and p_b(i) for i = 0 ... 255, are computed. A histogram equalisation of the individual colour channels is then derived as shown in (9)-(11):
T_r(i) = \frac{255}{(1+k)N} \sum_{j=0}^{i} p_r(j)    (9)
T_g(i) = \frac{255}{(1+k)N} \sum_{j=0}^{i} p_g(j)    (10)
T_b(i) = \frac{255}{(1+k)N} \sum_{j=0}^{i} p_b(j)    (11)
These intensity transforms can then be applied to re-map the R, G, and B components in the object’s pixels providing the ’controlled equalisation’. The parameter k can be used to control the amount of pre-equalised pixels used, which in turn controls the spread of the object histogram. The results of this technique presented in Table 1 in Section 6 are for a range of k values to explore the effect of changing this parameter.
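A sketch of controlled equalisation for one channel, following (9)-(11): the kN perfectly equalised control pixels contribute a flat histogram, which is simply added to the object's histogram before the cumulative transform is built. The flat-histogram shortcut is our reading of the construction, not the authors' code.

```python
import numpy as np

def controlled_equalise_channel(values, k=1.0):
    """Controlled equalisation, Eqs. (9)-(11), for one colour channel.

    values: 1-D array of integer pixel values in [0, 255] from the object (N pixels)
    k:      proportion of perfectly equalised 'control' pixels added (k*N of them)
    k = 0 reduces to full equalisation; large k leaves the histogram nearly unchanged.
    """
    v = np.asarray(values, dtype=int)
    n = v.size
    hist = np.bincount(v, minlength=256).astype(float)
    # k*N pre-equalised pixels contribute a flat histogram of k*N/256 per bin.
    hist += k * n / 256.0
    T = 255.0 * np.cumsum(hist) / ((1.0 + k) * n)
    return T[v]
```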
5 Comparison of Techniques Which Mitigate Changes in Illumination
This section outlines the experiment used to compare the techniques that mitigate the effects of changes in illumination upon object appearance. The goal of the experiment is to evaluate the effectiveness of the various techniques by measuring similarities between colour histograms computed over objects extracted from video surveillance videos. To this aim, we used 15 tracks obtained from 4 individuals across 2 cameras with illumination from both natural sunlight and artificial sources. The experiment proceeds over several stages of processing whose details are provided in the following. The first stage of processing is to automatically extract the objects from the background in each frame of the videos. We have utilised an adaptive mixture model based upon that derived by Wren et al. [12] that quickly provides reasonably segmented objects. All the objects extracted along the frame sequence from a single individual are then manually collected into a single track so as to ensure correct data association. In the second stage, for each object and in each frame, one of the mitigation techniques is applied in turn and the values of the object's pixels remapped accordingly. In the third stage, the MCR histogram of the object's appearance is computed as described in [1]. An MCR histogram consists of a sparse set of bins mapping the pixels' colour values. Each bin in the RGB space is of spherical shape and has the same radius under a normalised colour distance. The number of such bins is not bounded a priori and the position of their centroids is optimised through a k-means procedure. The MCR histogram is a 3-D, nonparametric representation of an object's colours. In the fourth stage of processing, tracks are considered in pairs. One frame from each track is taken and a similarity measurement, S_f, is computed between their two MCRs based on the Kolmogorov divergence. In a similar way, S_f values are computed for all other possible frame combinations from the two tracks and averaged so as to achieve a similarity measurement at the track level, S_t. A number of track pairs are built for both the cases of two different people (non-match case, or H0 hypothesis) and a single person (match case, or H1 hypothesis) and all S_t values are computed and stored for the two cases. In the fifth stage, the distributions of the S_t values for each of the two hypotheses, H0 and H1, are statistically modelled by optimally fitting a Gaussian distribution on each. In this way, the two distributions, p_H0(S_t) and p_H1(S_t), are simply described by their expected values, μ_H0 and μ_H1, and their standard deviations, σ_H0 and σ_H1. The Gaussian assumption seems to model the data well, with σ_H0 significantly larger than σ_H1 (the dispersion of similarity values for different objects is obviously greater than that for different views of the same object). The performance evaluation for the different mitigation techniques is then performed by computing the false alarm rate and the missed detection rate directly from p(H0) and p(H1), assuming H0 and H1 have equal priors. We derive the similarity value, S_t^{th}, for which p_H0(S_t) = p_H1(S_t) as:
S_t^{th} = \frac{-b - \sqrt{b^2 - 4ac}}{2a}    (12)

with a = σ_1^2 − σ_2^2, b = 2(μ_1 σ_2^2 − μ_2 σ_1^2), and c = σ_1^2 μ_2^2 − σ_2^2 μ_1^2 + 2 σ_1^2 σ_2^2 \ln(σ_2/σ_1).
The false alarm rate, PFA, is then given by the tail of p_H0(S_t) lying below p_H1(S_t) (S_t ≥ S_t^{th}), and the missed detection rate, PMD, by the tail of p_H1(S_t) lying below p_H0(S_t) (S_t ≤ S_t^{th}). By identifying the matching errors from the estimated statistical distributions, the desirable property for the most effective technique is that it will provide the best possible trade-off between false alarm and missed detection rates. The results of the effectiveness of the various illumination mitigation techniques are reported in Table 1.
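The threshold of (12) and the resulting theoretical error rates can be computed as below. Which Gaussian plays the role of index 1 versus 2 in (12) is not fully spelled out above, so this sketch side-steps the issue by taking the quadratic root that lies between the two means; the function name and the example values at the end are illustrative only.

```python
import numpy as np
from scipy.stats import norm

def threshold_and_errors(mu_match, s_match, mu_non, s_non):
    """Crossing point of the two Gaussian similarity models (cf. Eq. 12) and the
    resulting theoretical error rates, assuming equal priors.

    (mu_match, s_match): mean/std of matching-track similarities, p_H1(S_t)
    (mu_non,   s_non):   mean/std of non-matching similarities,   p_H0(S_t)
    """
    a = s_match**2 - s_non**2
    b = 2 * (mu_match * s_non**2 - mu_non * s_match**2)
    c = (s_match**2 * mu_non**2 - s_non**2 * mu_match**2
         + 2 * s_match**2 * s_non**2 * np.log(s_non / s_match))
    # Pick the quadratic root that lies between the two means.
    roots = np.roots([a, b, c])
    lo, hi = sorted([mu_non, mu_match])
    st_th = next(r.real for r in roots if lo <= r.real <= hi)
    pfa = 1.0 - norm.cdf(st_th, loc=mu_non, scale=s_non)    # non-match scored above threshold
    pmd = norm.cdf(st_th, loc=mu_match, scale=s_match)      # match scored below threshold
    return st_th, pfa, pmd

# Purely illustrative numbers, not taken from the paper's fitted models.
print(threshold_and_errors(0.85, 0.05, 0.25, 0.15))
```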
6 Results
This section discusses the results from the comparison of these data-dependent, rank-preserving techniques which mitigate the effect of illumination changes upon appearance. They are observed for a range of their parameters, including the case of no attempt at mitigation (i.e. leaving the colour values unaltered). These results are based upon the similarity values obtained from 50 matching and 70 non-matching track pairs and are reported in Table 1. The results investigate the effectiveness of the compared mitigation techniques by comparing their estimated PFA, PMD and total error rate.

Table 1. Results of similarity measurements for matching and non-matching tracks

Method   Parameter  Matching mean  Matching std  Non-Matching mean  Non-Matching std  PMD %  PFA %  Total %
None     -          0.8498         0.0103        0.2230             0.1202            0.68   13.87  14.55
Equal    Full       0.9095         0.0020        0.2522             0.1437            0.07    8.83   8.90
Equal    0.5        0.9080         0.0018        0.2637             0.1552            0.06    9.91   9.97
Equal    1          0.9116         0.0012        0.2680             0.1648            0.03    9.42   9.45
Equal    2          0.9135         0.0014        0.2726             0.0980            0.04   10.24  10.28
Stretch  0.1%       0.7648         0.0133        0.2070             0.0980            1.32   16.73  18.05
Stretch  1%         0.7452         0.0146        0.1967             0.0891            1.64   16.55  18.19
Stretch  5%         0.7279         0.0145        0.1738             0.0714            1.74   13.06  14.80
Filter   5 1 0.5    0.8496         0.0086        0.2294             0.1449            0.53   13.82  14.35
Filter   7 2 0.4    0.8511         0.0074        0.2262             0.1417            0.43   12.59  13.02
Filter   7 3 0.5    0.8592         0.0072        0.2322             0.1481            0.41   12.98  13.39
Filter   7 2 0.6    0.8632         0.0069        0.2377             0.1544            0.38   13.39  13.77

The results in Table 1 clearly demonstrate that applying histogram stretching actually seems to reduce effectiveness rather than improve upon the case
of no mitigation attempt (first row). This occurs through both an undesirable reduction in the similarity of matching objects and an increase in the similarity of differing objects. The application of illumination filtration is suggested to remove highlights upon objects, as well as compensating for general illumination. The results show that varying the filter parameters to increase the size and variation of the Gaussian filter produces some improvement in both the matching of similar colours and that of differing colours with respect to no mitigation attempt. The results for the colour equalisation techniques show the best improvement in matching scores, indicating that, as Finlayson et al. suggest [3], they are capable of providing colours that are more illumination invariant. Compared to stretching, equalisation reallocates the histogram bins to compress the dynamic range of bins with low bin counts and expand that of bins with high bin counts. This non-linear reallocation seems to better compensate for the appearance changes occurring under illumination variations. Whilst full equalisation produces the best overall error rate, this is only marginally lower than that of controlled equalisation, which, in turn, produces the best similarity between matching objects. Picking the "best" technique between these two would require one to define costs for both a false alarm and a missed detection. Such costs significantly depend on the actual application.
7 Conclusions
Many techniques have been suggested in the literature to compensate for the effects of variable illumination over an object's appearance. In a scenario of video surveillance, explicitly estimating the illumination over 3-D deformable moving targets such as humans simply proves impractical. For this reason, in this paper we have discussed and compared various data-dependent, rank-preserving techniques in an attempt at improving the invariance of a person's appearance across camera views without exploiting any scene knowledge. Results show that some of these techniques can significantly mitigate the effects of illumination variations, almost halving the matching error rate. Therefore, their use seems strongly beneficial for these applications. The histogram stretching technique actually diminishes the similarity of matching objects and increases that of differing objects, and its use is therefore counterproductive. The illumination filtration technique alone provides a marginal improvement in the similarity of an object's appearance under illumination changes, possibly due to its removal of illumination highlights on the object. The equalisation of an individual's colour histograms provides a significant improvement in appearance similarity under differing illumination. Whilst full equalisation produces the best overall error rate, controlled equalisation produces the best similarity between matching objects. Either technique may suit different surveillance applications depending on their error costs. For instance, as discussed in [1], tracking people across a network of disjoint camera views requires the highest possible detection rate in order to avoid costly manual revisions; however false detections are easier to correct. This suggests the use of controlled equalisation as the technique of choice for this scenario.
Acknowledgements
This research is supported by the Australian Research Council under the ARC Discovery Project Grant Scheme 2004 - DP0452657.
References 1. Madden, C., Cheng, E.D., Piccardi, M.: Tracking people across disjoint camera views by an illumination-tolerant appearance representation. Machine Vision and Applications 18, 233–247 (2007) 2. Abdel-Hakim, A.E., Farag, A.A.: Csift: A sift descriptor with color invariant characteristics. International Conference on Computer Vision and Pattern Recognition 2, 1978–1983 (2006) 3. Finlayson, G., Hordley, S., Schaefer, G., Tian, G.Y.: Illuminant and device invariant colour using histogram equalisation. Pattern Recognition 38, 179–190 (2005) 4. Barnard, K., Funt, B.: Camera characterization for color research. Color Research and Application 27, 153–164 (2002) 5. Bala, R.: Device characterization. In: Sharma, G. (ed.) Digital Color Imaging Handbook, CRC Press, Boca Raton, USA (2003) 6. Javed, O., Rasheed, Z., Shafique, K., Shah, M.: Tracking across multiple cameras with disjoint views. IEEE Conference on Computer Vision and Pattern Recognition 2, 26–33 (2005) 7. Javed, O., Rasheed, Z., Shafique, K., Shah, M.: Tracking across multiple cameras with disjoint views. International Conference on Computer Vision 2, 952–957 (2003) 8. Weiss, Y.: Deriving intrinsic images from image sequences. International Conference on Computer Vision 2, 68–75 (2001) 9. Barnard, K., Funt, B., Cardei, V.: A comparison of computational colour constancy algorithms; part one: Methodology and experiments with synthesized data. IEEE Transactions in Image Processing 11, 972–984 (2002) 10. Toth, D., Aach, T., Metzler, V.: Bayesian spatiotemporal motion detection under varying illumination. In: European Signal Processing Conference pp. 2081–2084 (2000) 11. Zhou, S.K., Chellapa, R.: From sample similarity to ensemble similarity: probabilistic distance measures in reproducing kernel hilbert space. IEEE Transactions on Pattern Analysis And Machine Intelligence 28, 917–929 (2006) 12. Wren, C., Azarbayejani, A., Darrell, T., Pentland, A.P.: Pfinder: real-time tracking of the human body. IEEE Transactions on Pattern Analysis And Machine Intelligence 19(7), 780–785 (1997)
Scene Context Modeling for Foreground Detection from a Scene in Remote Monitoring

Liyuan Li, Xinguo Yu, and Weimin Huang
Institute for Infocomm Research, Singapore
{lyli,xinguo,wmhuang}@i2r.a-star.edu.sg
Abstract. In this paper, foreground detection is performed by scene interpretation. A natural scene in different illumination conditions is characterized by scene context which contains spatial and appearance representations. The spatial representation is obtained in two steps. First, the large homogenous regions in each sample image are extracted using local and global dominant color histograms (DCH). Then, the latent semantic regions of the scene are generated by combining the coincident regions in the segmented images. The appearance representation is learned by the probabilistic latent semantic analysis (PLSA) model with local DCH visual words. The scene context is then applied to interpret incoming images from the scene. For a new image, its global appearance is first recognized and then the pixels are labelled under the constraint of the scene appearance. The proposed method has been tested on various scenes under different weather conditions and very promising results have been obtained.
1 Introduction
In visual surveillance applications, usually a fixed camera is used to monitor a scene. Background subtraction techniques are widely used for foreground detection in such cases [1], [2], [3], [4]. In background subtraction, an accurate pixel-level, temporal background model of the empty scene at the exact time should be built and maintained. To adapt to scene variations in different weather conditions, the background model should be updated in real-time from the incoming sequence of images. If the frame rate is low or the scene is frequently crowded, existing methods will fail to follow the background changes. In the applications for remote monitoring, the images from a camera are transported through the internet or a wireless network. The interval between two consecutive frames can be over half a minute. In this case, existing background subtraction methods are not applicable since the inter-frame changes of the scene may be quite significant, e.g. from sunny to cloudy, and the captured images may always contain foreground objects in the scene. Existing background subtraction techniques are bottom-up processes. They do not have any knowledge about how the scene will appear in a certain weather condition. Their knowledge about the scene is based on the previous observations of the scene at pixel-level over a short period of time, e.g. several minutes. Is it
possible to learn global and long-term context knowledge of a scene and then apply it to interpret an incoming image from any time in a day? In this paper, we propose a novel approach to perform top-down background subtraction based on scene interpretation. With learned scene context, our method first predicts the global illumination condition of the incoming image and then labels each pixel according to the knowledge of background appearance under that illumination condition. The scene context consists of spatial and appearance representations. The spatial representation is the layout of the latent semantic regions (i.e. large homogenous regions) in the scene, and the appearance representation captures the correlation between global scene appearances under various weather conditions and the visual words from the latent semantic regions. As indicated in [5,6], color is very efficient for characterizing large homogenous regions. In this paper, the Dominant Color Histogram (DCH) of a grid is used as a visual word to characterize the local features. To achieve an effective dictionary of visual words for scene appearances, the latent semantic regions with compact DCHs in a scene are extracted in two steps. First, we integrate the bottom-up DCH-based grid growing and top-down DCH-based classification to extract the large homogenous regions in each sample image. Then, we generate the latent semantic regions of the scene by combining the coincident regions from all the sample images captured under different illumination conditions. Using the model of probabilistic latent semantic analysis (PLSA), the appearance representation is learned from a few samples of typical appearance categories of the scene. The scene context is then used for image interpretation. For an incoming image, the illumination condition is first recognized and then the posterior labeling of the image is generated according to the corresponding appearance of the scene. The result provides both detected foreground objects and a global-level description of the scene. The rest of this paper is organized as follows. Section 2 presents the method for extracting the latent semantic regions of a scene. Section 3 describes the learning of the appearance representation of a scene. In Section 4, the scene context is applied to scene appearance recognition and image labeling. Section 5 reports the experimental results. Section 6 concludes the paper.
2 Generation of Spatial Representation
The latent semantic regions in a scene are associated with large background entities. They are large regions of homogenous colors or regular texture patterns. To adapt to possible uneven illuminations in a single image, we generate the latent semantic regions for a scene from a few sample images under different lighting conditions.

2.1 Extraction of Large Homogenous Regions
Here, we want to extract large homogenous regions from an image, rather than segment the whole image. General purpose methods for image segmentation
(e.g. mean-shift clustering [7] or normalized cuts [8]) segment the whole image into smooth regions. They may generate too many small clutters. To guarantee a compact dictionary of DCH words for a scene and good correlations between DCH words and latent semantic topics, a specific-purpose method to extract large homogenous regions is proposed. The large homogenous regions in an image are extracted in two steps. The first step is a bottom-up DCH-based grid growing process which extracts the connected large homogenous regions iteratively, and the second step is a top-down DCH-based classification process which classifies each pixel into a global homogenous region. The top-down process can merge together the spatially separated parts of a semantic region, e.g. the grass lands divided by a road.

Let I(x) be an image. It is first divided into non-overlapping grids, where the size of a grid is K × K pixels. By scanning the ith grid g_i, a dominant color histogram can be obtained as DCH(g_i) = {h_i(c_i^k) : k = 1, ..., N_i}. The difference between any two dominant colors is larger than a small threshold ε. As suggested in [6], the color difference

d(c_1, c_2) = 1 − \frac{2 \langle c_1, c_2 \rangle}{\|c_1\|^2 + \|c_2\|^2}    (1)

is employed, where ⟨·,·⟩ denotes the dot product. The histogram values h_i(c_i^k) are sorted in descending order. The first N_i components which satisfy \sum_{k=0}^{N_i} h_i(c_i^k) ≥ 0.95 W_g H_g are used. Since the DCH is very efficient, very few dominant colors are enough to cover more than 95% of the colors in a grid. The analysis in the appendix indicates that the use of (1) can result in a compact DCH for homogenous regions. As described in [6], the likelihood of grid g_i to grid g_j can be computed as

L(g_i | g_j) = \frac{\sum_{k=1}^{N_j} \min\big( h_j(c_j^k), \; \sum_{n=1}^{N_i} δ(c_j^k, c_i^n) h_i(c_i^n) \big)}{\sum_{k=1}^{N_j} h_j(c_j^k)}

where δ(c_j^k, c_i^n) = 1 if d(c_j^k, c_i^n) < ε. The similarity between g_i and g_j can be defined as

S_{ij} = \min\{ L(g_i | g_j), L(g_j | g_i) \}    (2)

Using (2), the number of similar grids to the ith grid can be computed as P_i = \sum_j μ_T(S_{ij}), where μ_T(·) is a thresholding function on T.

The process of grid growing is performed repeatedly. If a grid belongs to a large homogenous region, it will have many similar grids in the image. Hence, at the beginning of each iteration, a grid which has the maximum P_i value and does not belong to any previously extracted region is selected as the seed. The region is expanded to 4-connecting grids until no grid along the region boundary can be merged. For each region generated by grid growing, a dominant color histogram can be obtained by scanning the region. The DCH for the ith region R_i can be expressed as DCH(R_i) = {h_i(c_{R_i}^k) : k = 1, ..., N_i}. Usually less than 20 dominant colors are enough for a large homogenous region.
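A small sketch of the DCH machinery above - the colour distance (1), the grid likelihood L(g_i|g_j) and the similarity S_ij of (2) - assuming a DCH is stored as a list of (colour, count) pairs. The colour-match threshold eps = 0.05 is a placeholder value of ours; the paper only calls it a small threshold.

```python
import numpy as np

def color_dist(c1, c2):
    """Normalised colour distance of Eq. (1)."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    return 1.0 - 2.0 * np.dot(c1, c2) / (np.dot(c1, c1) + np.dot(c2, c2))

def likelihood(dch_i, dch_j, eps=0.05):
    """L(g_i | g_j): how well grid j's dominant colours are covered by grid i's.

    A DCH is a list of (colour, count) pairs; eps is the colour-match threshold.
    """
    num, den = 0.0, 0.0
    for cj, hj in dch_j:
        covered = sum(hi for ci, hi in dch_i if color_dist(cj, ci) < eps)
        num += min(hj, covered)
        den += hj
    return num / den if den > 0 else 0.0

def similarity(dch_i, dch_j, eps=0.05):
    """S_ij of Eq. (2): symmetric similarity between two grids."""
    return min(likelihood(dch_i, dch_j, eps), likelihood(dch_j, dch_i, eps))
```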
In the second step, the global homogenous regions are generated one-by-one. The probability of a pixel x belonging to the ith region R_i is computed as

P(x | R_i) = \frac{1}{K^2} \sum_{s \in w(x)} \max_{k \in [1, N_i]} δ(I(s), c_{R_i}^k)    (3)
where w(x) is a small window of size K × K pixels centered at x. In practice, P(x|R_i) is the proportion of the pixels within the neighborhood window which look like they come from the region R_i. The global homogenous regions are generated from the largest to the smallest regions. For the ith region, if a pixel x does not belong to the previously generated regions and P(x|R_i) is larger than the threshold T, it is classified as a point of the region. The pixels not belonging to any global homogenous region are regarded as clutter.
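Eq. (3) can be sketched as follows; the handling of image borders (windows clipped at the boundary) is our simplification, and color_dist is repeated from the previous sketch so the snippet stays self-contained.

```python
import numpy as np

def color_dist(c1, c2):
    """Normalised colour distance of Eq. (1)."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    return 1.0 - 2.0 * np.dot(c1, c2) / (np.dot(c1, c1) + np.dot(c2, c2))

def region_probability(img, region_colors, x, y, K=7, eps=0.05):
    """P(x | R_i) of Eq. (3): fraction of pixels in the K x K window around (x, y)
    whose colour matches one of the region's dominant colours.

    img:           (H, W, 3) float image
    region_colors: list of dominant colours c_{R_i}^k of the region
    """
    h = K // 2
    window = img[max(0, y - h):y + h + 1, max(0, x - h):x + h + 1].reshape(-1, 3)
    def matches(pixel):
        return any(color_dist(pixel, c) < eps for c in region_colors)
    # At borders the clipped window is used instead of the full K*K neighbourhood.
    return sum(matches(p) for p in window) / float(window.shape[0])
```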
Fig. 1. Two examples of large homogenous region extraction on road scenes from Sowerby dataset. In each row from left to right are the image, the extracted result, and the ground truth.
The proposed method generates good results for natural scenes. Two examples of our results on the Sowerby dataset are displayed in Figure 1. In the segmented images, large homogenous regions are labelled with colors. The black regions are the clutter areas with large local color variations. Compared with the ground truths, it can be seen that our method performs quite well in extracting the large homogenous regions corresponding to semantic regions such as sky, road surfaces, and vegetation. The images from the Sowerby dataset are resized to 384×267 pixels. In this paper, the grid size is 7 × 7 (K = 7) and the threshold value is T = 0.7.
2.2 Generation of Latent Semantic Regions
It is well known that automatic segmentation of a single image is a fragile and error-prone process. In this paper, we propose to generate the latent semantic regions by combining multiple segmentations of the large homogenous regions from different images of the same scene. This results in a reliable segmentation of the latent semantic regions of the scene from imperfect segmentations of sample images. This idea is illustrated by the diagram
Fig. 2. Generation of latent semantic regions by combining the segmented large homogenous regions from a few sample images (sample appearance panels: sunny, cloudy, sun & cloud, shadows, evening; combination on coincidence yields the latent semantic regions)
in Figure 2. In the figure, it can be seen that when large shadows are cast on the road surface, the surface is segmented into separate parts, but by combining the segments from all the sample images we have a good chance of obtaining a good segmentation of the road surface. The segmented homogenous regions associated with a latent semantic region in different sample images will overlap each other. Suppose we have N_s segmented sample images for a scene, and in the ith image there are J_i large homogenous regions {R_{ij} : j = 1, ..., J_i}. The spatial coincidence between two regions from two different images, e.g. R_{ij} and R_{kl}, can be described by the ratio of the spatial intersection to the spatial union of the two regions, i.e.

c_{ij}^{kl} = \frac{|R_{ij} \cap R_{kl}|}{|R_{ij} \cup R_{kl}|}

A well-segmented region associated with a latent semantic region will get strong support from the coincident regions in the other images. The strength of such support for a region R_{ij} can be defined as

e_{ij} = \frac{\sum_{k=1, k \neq i}^{N_s} \sum_{l=1}^{J_k} c_{ij}^{kl} \mu_{T_a}(c_{ij}^{kl})}{\sum_{k=1, k \neq i}^{N_s} \sum_{l=1}^{J_k} \mu_{T_a}(c_{ij}^{kl})}    (4)

where T_a is a threshold used to filter out less related regions. An iterative process is performed to generate the latent semantic regions. In each iteration, the region of maximum strength is selected, and a new latent semantic region is generated according to the coincidence of the other regions with the selected region. Suppose it is the nth iteration and the region R_{ij} is selected. The probability of a pixel x belonging to the new latent semantic region can be calculated as

A_n(x) = \frac{1}{N_s} \sum_{k=1}^{N_s} \sum_{l=1}^{J_k} R_{kl}(x) \mu_{T_a}(c_{ij}^{kl})    (5)
To avoid a region associated with the nth latent semantic region being selected again, the strength of each region in the sample images is updated. First, a temporary region R_n for the new latent semantic region is generated by thresholding A_n(x) at T_a. Then, for a region R_{kl}, the coincidence of R_{kl} with R_n is computed as a_{kl}^n = \frac{|R_{kl} \cap R_n|}{|R_{kl}|}. The strength of R_{kl} is updated as

e_{kl}^{n+1} = (1 - a_{kl}^n) e_{kl}^n    (6)
Operation (6) suppresses the strength significantly if the region is largely coincident with R_n. The final latent semantic regions are generated according to the probability values as I_{LSR}(x) = l if l = \arg\max_n A_n(x) and A_l(x) > T_a; otherwise I_{LSR}(x) = 0, indicating that the pixel is in clutter. T_a = 0.4 is used in this paper.
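A schematic implementation of the combination procedure of Eqs. (4)–(6) could look like the following (Python/NumPy); regions are represented as boolean masks, and the stopping criterion (iterating until all strengths fall below a small value) is our assumption since the text does not state one explicitly:

```python
import numpy as np

def coincidence(mask_a, mask_b):
    """Intersection-over-union ratio c between two boolean region masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / float(union) if union else 0.0

def combine_latent_regions(regions, Ta=0.4, min_strength=0.05):
    """regions[i][j] is the boolean mask of region R_ij in sample image i.
    Returns one probability map A_n(x) per generated latent semantic region."""
    Ns = len(regions)
    items = [(i, j, regions[i][j]) for i in range(Ns) for j in range(len(regions[i]))]

    # Eq. (4): strength e_ij, averaged over coincident regions in the *other* images
    strengths = {}
    for i, j, mask in items:
        cs = [coincidence(mask, m2) for k, l, m2 in items if k != i]
        related = [c for c in cs if c > Ta]
        strengths[(i, j)] = sum(related) / len(related) if related else 0.0

    latent_maps = []
    while max(strengths.values()) > min_strength:
        si, sj = max(strengths, key=strengths.get)
        seed = regions[si][sj]
        # Eq. (5): average the coincident masks into a probability map A_n(x)
        A = np.zeros(seed.shape, dtype=float)
        for k, l, m2 in items:
            if coincidence(seed, m2) > Ta:
                A += m2.astype(float)
        A /= Ns
        latent_maps.append(A)
        # Eq. (6): suppress regions that are largely covered by the new latent region
        Rn = A > Ta
        for k, l, m2 in items:
            a = np.logical_and(m2, Rn).sum() / float(max(m2.sum(), 1))
            strengths[(k, l)] *= (1.0 - a)
        strengths[(si, sj)] = 0.0   # the seed itself cannot be selected again
    return latent_maps
```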
3 Learning Appearance Representation
Probabilistic Latent Semantic Analysis (PLSA) has been used successfully to learn local features for image retrieval and specific object localization [9], [10], [11]. In this paper, we use it to learn appearance features of the same scene. Given the spatial representation and a few samples of typical appearance categories of a scene, the PLSA model [12] is used to capture the co-occurrence between scene appearances and the visual words from the latent semantic regions. To avoid ambiguity in learning the latent concepts, we constrain the latent semantic regions to be the latent topics. In this way, building the PLSA model becomes a straightforward process. The correspondence between the original terms used in the text literature and the terms in this paper is as follows: (a) document d_i: scene appearance category; (b) aspect z_k: latent semantic region; (c) word v_j: visual word.
The bag of visual words is constructed from the dominant visual words of all the sample images. Again, the DCH of a grid (K × K pixels) is employed as a visual word. To obtain abundant visual words, nearly half-overlapping grids are used, i.e., the center distance between two adjacent grids is K/2. If the similarity between two visual words v_i and v_j is larger than the threshold T (i.e., S_{ij} > T as in Sec. 2), they are considered the same word. The histogram of the visual words is sorted, and the first N_w words which cover more than 95% of all the words from all the sample images form the bag of visual words.
PLSA associates the scene appearance categories with the visual words according to the co-occurrence counts. Let n(v_j, z_k, d_i) be the number of times the visual word v_j occurs in the latent semantic region z_k in the image of appearance category d_i. The conditional probabilities P(v_j | z_k, d_i) and P(v_j | d_i) can be obtained as

P(v_j | z_k, d_i) = \frac{n(v_j, z_k, d_i)}{\sum_j n(v_j, z_k, d_i)}, \qquad P(v_j | d_i) = \sum_k P(v_j | z_k, d_i)    (7)
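For illustration, once the visual words and latent regions are fixed, the co-occurrence modelling of Eq. (7) reduces to simple normalized count tables; a sketch follows (Python/NumPy, with array shapes chosen by us and the normalization following our reconstruction of Eq. (7)):

```python
import numpy as np

def plsa_tables(counts):
    """counts[i, k, j] = n(v_j, z_k, d_i): occurrences of visual word j in
    latent semantic region k for the sample image of appearance category i.
    Returns P(v_j | z_k, d_i) and P(v_j | d_i) following our reading of Eq. (7)."""
    counts = counts.astype(float)
    denom = counts.sum(axis=2, keepdims=True)                 # normalize over words j
    p_word_given_region = counts / np.maximum(denom, 1e-12)
    p_word_given_category = p_word_given_region.sum(axis=1)   # marginalize regions k
    return p_word_given_region, p_word_given_category

# Illustrative shapes: 6 appearance categories, 4 latent regions, 410 visual words
rng = np.random.default_rng(0)
n = rng.integers(0, 20, size=(6, 4, 410))
P_vzd, P_vd = plsa_tables(n)
```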
4 Scene Interpretation
Exploiting scene context, image interpretation becomes a top-down process from global-level scene appearance recognition to local-level pixel labeling. Let I_n(x) be a new image from the scene and {v_t : t = 1, ..., N_g} be the visual words extracted at the grid positions of I_n(x). The likelihood of I_n(x) being of category d_i can be computed as

P(d_i | I_n(x)) = \frac{1}{N_g} \sum_{t=1}^{N_g} P(d_i | v_t) = \frac{1}{N_g} \sum_{t=1}^{N_g} \frac{P(v_t | d_i)}{\sum_l P(v_t | d_l)}
Then the appearance of I_n(x) can be recognized as

i = \arg\max_l \{ P(d_l | I_n(x)) \}    (8)

The recognized appearance category provides information about the illumination condition of the scene in the new image, and it provides global constraints on the local visual appearances. With this appearance context, local image labeling becomes a problem of posterior estimation. Let x be the center of the tth grid and w(x) be a window of (K/2) × (K/2) pixels centered at x. Such windows do not overlap each other and cover the image completely except at the image boundary. The spatial prior of w(x) belonging to the kth latent semantic region, P(z_k | w(x)), can be evaluated as the proportion of pixels of z_k in the window w(x). Then the posterior probability of w(x) being a patch of z_k in the new image can be expressed as

P(z_k | v_t, d_i, w(x)) = \frac{P(v_t | z_k, d_i) P(z_k | w(x))}{\sum_l P(v_t | z_l, d_i) P(z_l | w(x))}    (9)
To spatially smooth the probabilities, a 2D Gaussian filter is applied over the 8-connected neighborhood of each grid. Let \bar{P}(z_k | v_t, d_i, w(x)) be the smoothed posterior probability. The pixels in the window (i.e., y ∈ w(x)) are labelled as

L(y) = \begin{cases} k, & \text{if } k = \arg\max_l \bar{P}(z_l | v_t, d_i, w(x)) \text{ and } \bar{P}(z_k | v_t, d_i, w(x)) > 0.5; \\ 0, & \text{else if } k = \arg\max_l P(z_l | w(x)) \text{ and } P(z_k | w(x)) < 0.5; \\ 255, & \text{otherwise.} \end{cases}    (10)

The label "k" means the patch is part of the latent semantic region z_k, "0" means the patch is in clutter, and "255" indicates a patch of foreground or anomalous change within a latent semantic region. The context-based image interpretation provides not only the detected foreground pixels but also where and when the foreground is detected. From the result, we can infer how many people are on the road, whether there are vehicles on the grass land, whether there is possible damage to the buildings, etc. It also provides rich information about human activities and traffic states under different weather conditions and in different periods of the day.
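Putting Eqs. (8)–(10) together, the top-down interpretation of a new frame can be sketched as follows (Python/NumPy); it assumes the P(v|z,d) and P(v|d) tables shaped as in the previous sketch, and the Gaussian smoothing over the grid lattice is shown with scipy.ndimage.gaussian_filter as one possible realization:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def recognize_appearance(word_ids, P_vd):
    """Eq. (8): pick the category d_i maximizing the average posterior
    P(d_i | v_t) over the Ng visual words of the new image."""
    P = P_vd[:, word_ids]                                   # (categories, Ng)
    post = P / np.maximum(P.sum(axis=0, keepdims=True), 1e-12)
    return int(np.argmax(post.mean(axis=1)))

def label_grids(word_grid, cat, P_vzd, prior_grid):
    """Eqs. (9)-(10) on the grid lattice.
    word_grid[r, c]   : visual-word index of grid (r, c)
    prior_grid[k,r,c] : spatial prior P(z_k | w(x)) for each latent region
    Returns a label map: k+1 = region patch, 0 = clutter, 255 = foreground."""
    K, R, C = prior_grid.shape
    post = np.zeros_like(prior_grid)
    for k in range(K):                                      # Eq. (9), unnormalized
        post[k] = P_vzd[cat, k, word_grid] * prior_grid[k]
    post /= np.maximum(post.sum(axis=0, keepdims=True), 1e-12)
    post_s = np.stack([gaussian_filter(post[k], sigma=1.0) for k in range(K)])

    labels = np.full((R, C), 255, dtype=np.uint8)           # default: foreground
    best, best_p = post_s.argmax(axis=0), post_s.max(axis=0)
    labels[best_p > 0.5] = best[best_p > 0.5] + 1           # confident region patch
    prior_best = prior_grid.max(axis=0)
    labels[(best_p <= 0.5) & (prior_best < 0.5)] = 0        # clutter by spatial prior
    return labels
```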
5 Experiments and Evaluation
A new test image database was built for this work. Each set of images in the database comes from the same scene over more than one month. The images were captured from natural outdoor scenes in different weather conditions, e.g., rainy, cloudy, cloud and sun, and sunny days. All the images are resized to 320×240 pixels. Some examples and evaluations on two difficult datasets are presented in this paper; the complete results can be found on our website. The examples are displayed in the same format in Figures 3 and 4. In the first row, the leftmost image shows the generated latent semantic regions of the scene and the rest are the sample
images of the appearance categories. In this paper, only one sample image is used for each appearance category, and the sample images are selected from days other than those used for testing. Sixteen examples and the corresponding labeling results are displayed in the remaining four rows, where the text above an original image indicates the date and time the image was captured, and the text above a labeled image indicates the recognized appearance category for that image. In the labeled images, black indicates clutter regions, gray levels of different brightness represent the labelled latent semantic regions, and white indicates foreground objects or anomalous changes within the latent semantic regions.
The first dataset contains 120 images from a live webcam which monitors an entrance of a university. The images are encoded as JPEG files with a moderate compression rate. Images from 5 days of different weather conditions are selected for evaluation. The spatial representation of the scene is composed of 4 latent semantic regions, i.e., the pavement surface, the grass land, the concrete surfaces of the building, and the ceiling of the entrance. The glass surfaces of the building are classified as background clutter since the visual features from that region are unstable. The appearance representation is learned from samples of 6 appearance categories, i.e., "rainy", "cloudy", "partly cloudy", "sunny", "sun reflection", and "sunset". For this scene, a bag of 410 visual words is generated to capture the appearance characteristics of the latent semantic regions. The images from this dataset are shown in Figure 3. The 6 middle columns show examples of the appearance categories indicated above them in the first row. The examples in the left column show scene appearances between "cloudy" and "rainy". In the right column, two more examples from the most difficult category, "sun reflection", are displayed; in these examples, the person in the shadow almost blends into the shadow. From the labeled images, it can be seen that, with the recognized appearance categories, the latent regions are labelled accurately while the foreground objects are detected, even though the differences in scene appearance under different illumination conditions are very large. In the strong-sunshine appearances, the boundary around the shadow is not learned as scene context since the features around the boundary do not belong to the dominant visual words from the latent regions.
The second dataset is composed of 103 images from a campus scene, i.e., a plaza in a university which is often crowded during the day. The images of the scene are captured by a webcam and encoded as JPEG files with a high compression rate. The images constituting the dataset were collected from 5 days of different weather. As shown in Figure 4, the spatial context of the scene contains 2 latent semantic regions, i.e., the brick surfaces of the ground and the building, and the sky. The grass lands and trees are classified as clutter since their colors are not stable in the highly compressed images. 7 appearance categories are used for this scene, i.e., "rain cloudy", "cloudy", "morning", "sun & cloud", "sunny morning", "sunny noon", and "sunny afternoon". This is because there are great variations among the images from day to day due to the weak brightness adaptability of the webcam. For example, on gentle sunny days the scene appears as in the "morning" sample under warm illumination, but on some days of strong
Fig. 3. Examples of image interpretation for the scene of an entrance of a university. Columns (left to right): spatial context, rainy, cloudy, partly cloudy, sunny, sun reflection, sunset; each example is annotated with its capture date/time and the recognized appearance category.
sunshine the scene appears under cold illumination, as in the last three samples. Due to the great variations of scene appearance, a large bag of 1026 visual words is generated from the samples of the appearance categories. From the 16 examples of this dataset shown in Figure 4, it can be seen that because the global appearance category is correctly recognized, the image can be labelled quite well with good discrimination of the foreground objects. On a day of gentle sunshine, there is little change in the scene over the course of the day. However, on a day of strong sunshine, the scene appearances in the morning, at noon, and in the afternoon are different. Because the images come from a webcam, when the sunshine is very strong the light sensitivity for objects under shadows becomes very low, and this is made worse by high-rate compression. Therefore, more labeling errors are found for objects under dark shadows, as shown in the examples from sunny days.
The evaluation results on the two datasets are shown in Table 1. The Correct Labeling Rate (CLR) is the percentage of correctly labelled pixels of the latent semantic regions, and the Foreground Detection Rate (FDR) is the percentage of detected foreground pixels of persons and vehicles with respect to the manually labelled ground truth. High CLR and FDR values together mean that the errors of both under-segmentation and over-segmentation are low. This indicates that the scene context characterizes the appearance of the scene accurately at both the global and local levels. Considering the great variations of scene appearance in different weather conditions and the poor quality of the test images coming from the Internet, the result is very encouraging.
Since the dominant colors in a DCH are sorted in descending order, if two DCHs do not match, the comparison can be terminated after only one or a few of the top elements are evaluated. Hence, very efficient algorithms can be implemented. We have
Fig. 4. Examples of image interpretation for the images from a plaza in a university. Columns (left to right): spatial context, rain cloudy, cloudy, gentle-sunshiny, sun & cloud, sunny morning, sunny noon, sunny afternoon; each example is annotated with its capture date/time and the recognized appearance category.

Table 1. Evaluation results on two scenes. (LSR: latent semantic region)

Evaluation on LSRs                Entrance  Plaza  Average
Correct Labeling Rate (CLR)        0.943    0.870   0.907
Foreground Detection Rate (FDR)    0.786    0.729   0.758
tested on datasets from over 10 scenes. Usually fewer than 200 visual words are generated for a scene. When 410 visual words are generated, labeling a new image of 320×240 pixels takes 6.5 s on a DELL laptop with a 1.7 GHz Pentium M. In some tests on images of 160×120 pixels, our method takes about 1 s per image, which is fast enough to process images from web broadcasting. Currently, the sample images are selected manually; for each scene, fewer than 10 sample images are enough for one or two months. The processing time is still a drawback compared with conventional background subtraction techniques. Optimized implementation and organization of the visual words may help speed up the program.
6 Conclusions
We have proposed a novel approach to detect foreground objects through image interpretation based on scene context. The scene context consists of the spatial and appearance representations. A novel method is proposed to extract large homogenous regions in a scene from a few sample images under different illuminations, which includes the DCH-based segmentation of single images and the combination of multiple segmentations from multiple images. From the spatial representation, an effective and efficient representation of scene appearances
combining DCH words and the PLSA model is achieved. Once the scene context has been learned, it can be used to perform top-down background subtraction around the clock for months. Experiments show promising results on both foreground detection and scene interpretation. This study shows that it is possible to use global scene context to perform background subtraction without the need for a pixel-level background model, which may not be available when some related background parts are always occluded or when short-term previous observations cannot keep up with the scene variation. The next problem is how to automatically select the sample images for different illumination or weather conditions. We plan to investigate this by analyzing the global changes over the latent semantic regions.
References
1. Wren, C., Azarbayejani, A., Darrell, T., Pentland, A.: Pfinder: Real-time tracking of the human body. IEEE Trans. Pattern Anal. Mach. Intell. 19, 780–785 (1997)
2. Haritaoglu, I., Harwood, D., Davis, L.: W4: Real-time surveillance of people and their activities. IEEE Trans. Pattern Anal. Mach. Intell. 22, 809–830 (2000)
3. Stauffer, C., Grimson, W.: Learning patterns of activity using real-time tracking. IEEE Trans. Pattern Anal. Mach. Intell. 22, 747–757 (2000)
4. Li, L., Huang, W., Gu, I., Tian, Q.: Statistical modeling of complex background for foreground object detection. IEEE Trans. Image Processing 13, 1459–1472 (2004)
5. Konishi, S., Yuille, A.: Statistical cues for domain specific image segmentation with performance analysis. In: IEEE CVPR, pp. 291–301 (2000)
6. Li, L., Luo, R., Huang, W., Eng, H.L.: Context-controlled adaptive background subtraction. In: IEEE Workshop on PETS, pp. 31–38 (2006)
7. Comaniciu, D., Meer, P.: Mean shift: A robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Mach. Intell. 24, 603–619 (2002)
8. Shi, J., Malik, J.: Normalized cuts and image segmentation. In: IEEE CVPR, pp. 731–743 (1997)
9. Li, F., Perona, P.: A Bayesian hierarchical model for learning natural scene categories. In: IEEE CVPR, vol. 2, pp. 524–531 (2005)
10. Quelhas, P., Monay, F., Odobez, J.M., Gatica-Perez, D., Tuytelaars, T., Van Gool, L.: Modeling scenes with local descriptors and latent aspects. In: IEEE ICCV, vol. 1, pp. 883–890 (2005)
11. Sivic, J., Russell, B., Efros, A., Zisserman, A., Freeman, W.: Discovering objects and their location in images. In: IEEE ICCV (2005)
12. Hofmann, T.: Unsupervised learning by probabilistic latent semantic analysis. Machine Learning 42, 177–196 (2001)
13. Alexander, D., Buxton, B.: Statistical modeling of colour data. Int'l J. Computer Vision 44, 87–109 (2001)
Appendix: Advantages of Color Distance (1)
The advantage of the color distance (1) over the Euclidean distance can be illustrated by the following analysis. These advantages suggest that a compact DCH
for homogenous regions can be generated by using (1), since it characterizes well the color variations of homogenous objects in natural scenes [13].
Robustness to variation of brightness: Let c_1 and c_2 be the RGB colors from the same object under different illuminations. According to the imaging color model, c_2 = k c_1, where k describes the proportion of the illumination variation. From Eq. (1), the color distance is

d(c_1, c_2) = 1 - \frac{2k\|c_1\|^2}{(1 + k^2)\|c_1\|^2} = \frac{(1 - k)^2}{1 + k^2}    (11)
The Euclidean distance between c_1 and c_2 is

d_e(c_1, c_2) = \|c_1 - c_2\| = (1 - k)\|c_1\|    (12)
It can be seen that the color distance d(c_1, c_2) depends only on the proportion of the illumination change k, whereas the Euclidean distance d_e(c_1, c_2) depends not only on k but also on the brightness of the original color. The distance (11) is small when k is around 1 and increases quickly when k is far from 1. This means the distance measure is robust to color variations of the same object but sensitive to colors with a large difference in brightness, which are likely to come from different objects.
Sensitivity to variation of chromaticity: If two colors c_1 and c_2 are only chromatically different, then \|c_1\| = \|c_2\|, and the difference is determined by the angle θ between the color vectors. From Eq. (1), the distance is

d(c_1, c_2) = 1 - \frac{2\|c_1\|\|c_2\|\cos\theta}{\|c_1\|^2 + \|c_2\|^2} = 1 - \cos\theta    (13)

The Euclidean distance between c_1 and c_2 is

d_e(c_1, c_2) = \|c_1 - c_2\| = \sqrt{2}\|c_1\|\sqrt{1 - \cos\theta}    (14)
Again, the distance d(c_1, c_2) depends only on the angle θ, but the Euclidean distance depends not only on the angle θ but also on the brightness of the colors. In addition, the distance (13) is small when θ is close to 0 and increases quickly when θ becomes large. This also indicates that the distance measure is robust to color variations of the same object but sensitive to colors from different objects.
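As a quick numerical check of these properties, the following snippet (Python/NumPy; a verification sketch, not part of the original paper) compares the two distances under a pure brightness change and a pure chromaticity change:

```python
import numpy as np

def d_color(c1, c2):
    """Color distance of Eq. (1)."""
    return 1.0 - 2.0 * np.dot(c1, c2) / (np.dot(c1, c1) + np.dot(c2, c2))

def d_euclid(c1, c2):
    return np.linalg.norm(c1 - c2)

c_dark, c_light = np.array([40.0, 30.0, 20.0]), np.array([200.0, 150.0, 100.0])
k = 0.8                                        # pure brightness change: c2 = k * c1
print(d_color(c_dark, k * c_dark), d_color(c_light, k * c_light))    # both (1-k)^2/(1+k^2)
print(d_euclid(c_dark, k * c_dark), d_euclid(c_light, k * c_light))  # grows with brightness

# pure chromaticity change: equal norms, rotated direction
c1 = np.array([1.0, 0.0, 0.0]) * 100.0
c2 = np.array([np.cos(0.3), np.sin(0.3), 0.0]) * 100.0
print(d_color(c1, c2), 1.0 - np.cos(0.3))      # Eq. (13): both equal 1 - cos(theta)
```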
Recognition of Household Objects by Service Robots Through Interactive and Autonomous Methods
Al Mansur, Katsutoshi Sakata, and Yoshinori Kuno
Graduate School of Science and Engineering, Saitama University, 255 Shimo-Okubo, Sakura-ku, Saitama-shi, Saitama 338-8570, Japan
{mansur,sakata,kuno}@cv.ics.saitama-u.ac.jp
Abstract. Service robots need to be able to recognize and identify objects located within complex backgrounds. Since no single method may work in every situation, several methods need to be combined. However, there are several cases in which autonomous recognition methods fail. We propose several types of interactive recognition methods for those cases, each of which is invoked when the autonomous methods fail in a different situation. We propose four types of interactive methods so that the robot can recognize the current situation and initiate the appropriate interaction with the user. Moreover, we propose the grammar and sentence patterns for the instructions used by the user. We also propose an interactive learning process which can be used to learn or improve an object model through failures.
1 Introduction
Service robots have attracted the attention of researchers for their potential use with handicapped and elderly people. We are developing a service robot that can identify a specific or a general class of household objects requested by the user. The robot receives instructions through the user's speech, and should be able to carry out two tasks: 1) detect a specific object (e.g., "coke can"), and 2) detect a class of objects (e.g., "can"). The robot needs to possess a vision system that can locate various objects in complex backgrounds in order to carry out the two tasks mentioned above. There is no single object recognition method that can work equally well on the various types of objects and backgrounds perceived by a service robot. Rather, it must rely on multiple methods and should be able to select the appropriate one depending on the characteristics of the object. An autonomous recognition system for service robots using multiple methods was initially proposed in [14] and extended in [15]. However, as the recognition rate of the autonomous methods is not 100%, it is desirable to improve the recognition performance by incorporating user interaction. In the interactive method, the robot communicates with a human user, and the user guides it to recognize the object through short, 'user-friendly' conversation. In our application, the user is conceived of as a physically handicapped person
who can speak clearly. Thus it should not be difficult for this person to interact with the robot to help it locate the requested object. The service robot learns through failure and continuously improves its model of an object whenever it makes a mistake; the user helps it in this learning process. The autonomous methods may fail in different situations, and each situation needs to be treated separately if we wish to incorporate interactive recognition. In this paper we analyze these situations, construct sentence patterns and structures, propose a vocabulary, and implement the interactive method. We also propose an interactive learning system which the robot can use to learn through failure and improve its object models. In Section 2 we discuss the implementation and performance of the autonomous method. The interactive method and interactive learning are presented in Sections 3 and 4, respectively. Finally, we conclude the paper in Section 5.
2 Autonomous Object Recognition
Here we briefly describe the autonomous object recognition method. Details are given in [15].
2.1 Object Categorization
Objects encountered by service robots can be described by their color, shape, and texture. By 'texture' we mean the pattern (not necessarily regular and periodic) within the object contour; for example, in our notation, the label on a bottle is its texture. We use three features for recognition: intensity, Gabor features, and color. We split the objects into five categories depending on their characteristics. Textureless simple-shape objects are named category 1; shape features are needed to recognize such objects, and Kernel PCA (KPCA) in conjunction with a Support Vector Machine (SVM) can be used in this case. In category 2, some objects have textures, although these textures do not characterize them and the texture contents of different members of the class are not the same; some members of these classes have a texture-free body. As a result, we need to use information about their shapes in order to describe them. Using SIFT, any specific textured object of this category can be recognized. To recognize a texture-free specific object or a class of this category we use KPCA+SVM; since these objects are shape-based, we should use Gabor features because they work well on objects with different textures. Category 3 and 4 objects have similar textures, and texture is required for their recognition; examples of this type include fruits (e.g., pineapple) and computer keyboards. A KPCA+SVM based method works on this type of object. However, in our experiments, we found that the intensity feature works better than or as well as the Gabor feature for some objects of this type, while for other objects of this type the Gabor feature obtains a better recognition rate. Many texture classification methods [11, 12, 13] use Gabor filters for feature extraction. Robust feature extraction using Gabor filters requires a large set of Gabor filters of various scales and orientations. This makes
the computation huge. In this respect, the intensity feature is desirable due to its simplicity and speed. Objects with similar textures for which the grayscale (intensity) feature gives satisfactory performance are named category 3 objects. The Gabor feature works better on other types of objects with similar texture; these are designated category 4 objects. Category 5 objects have similar color histograms, and we use a combination of color and intensity features for their recognition.
2.2 Methods
Four different methods have been integrated into the autonomous recognition system. As shown below, different methods are employed for different object categories according to the object characteristics.
Method 1: Used for recognition of a specific object from category 3 or category 4, and a specific (textured) object from category 2.
Method 2: Used for recognition of a specific object or class from category 1, a specific (texture-free) object or class from category 2, and a class from category 4.
Method 3: Used for class recognition of category 3 objects.
Method 4: Used for class recognition of category 5 objects.
We use SIFT following [1] in method 1. In method 2, we apply a battery of Gabor filters to each of the training and test images (grayscale) to extract edges oriented in different directions; the dimensionality of these Gabor feature vectors is reduced by KPCA [3, 4, 5], and the reduced features are used to train an SVM classifier (a sketch of this pipeline is given after this paragraph). In method 3, KPCA features are derived from the intensity images and then a support vector classifier is trained. In method 4, an SVM classifier is built using color features; here, another intensity-based SVM classifier is trained (as in method 3) and used to reduce the false positives of the first classifier. Details of these four methods and the algorithm for automatically selecting one of them are given in [15]. When only one object per class is available, the robot uses method 1 for the recognition of a textured object and a color histogram for plain, texture-free objects.
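A minimal sketch of the method-2 pipeline (Gabor features → KPCA → SVM) might look as follows; the filter-bank parameters, the coarse subsampling, and the use of scikit-learn/OpenCV are our assumptions for illustration, not the authors' implementation details:

```python
import cv2
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

def gabor_features(gray, ksize=15, scales=(4.0, 8.0), orientations=4):
    """Stack the responses of a small Gabor filter bank into one feature vector."""
    feats = []
    for sigma in scales:
        for i in range(orientations):
            theta = i * np.pi / orientations
            kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, 10.0, 0.5)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
            feats.append(cv2.resize(resp, (16, 16)).ravel())   # coarse subsampling
    return np.concatenate(feats)

def train_method2(images, labels, n_components=20):
    """images: grayscale arrays; labels: 1 = object class, 0 = background.
    n_components must not exceed the number of training images."""
    X = np.array([gabor_features(im) for im in images])
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=1e-4)
    Z = kpca.fit_transform(X)
    clf = SVC(kernel="rbf", C=10.0).fit(Z, labels)
    return kpca, clf

def predict_method2(kpca, clf, image):
    return clf.predict(kpca.transform(gabor_features(image)[None, :]))[0]
```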
2.3 Experimental Results
First, we evaluated the autonomous object recognition techniques using objects from the Caltech database (available at www.vision.caltech.edu). We got satisfactory recognition performance for different categories when appropriate methods were used. These results have been shown and class recognition performances of three methods have been compared in [15]. In [15], method 2 has been compared with Serre’s work [2] and it was found that method 2 is about sixty times faster than Serre’s method for the same recognition rate. Next, we performed experiments with daily objects placed in home scenes as shown in Figure 1. These results confirmed that our methods can recognize objects in our application domain with reasonable success rates.
Fig. 1. (a)-(b) Class recognition results: (a) cup noodles (b) apple (c)-(d) Specific object recognition results: (c) cup (d) cup noodle
3 Interactive Object Recognition
We are implementing our algorithms on our experimental robot Robovie-R Ver.2 [6]. This 57 kg robot is equipped with three cameras (two pan-tilt and one omnidirectional), wireless LAN, various sensors, and two 2.8 GHz Pentium 4 processors. Our service robot has access to only a few variants of a given class of objects, so its training set is usually small. In spite of the small training set we achieved a reasonable recognition rate. However, the recognition methods are not 100% accurate, and it is desirable to improve the robot's vision in any feasible way. In our application the robot user is assumed to be a physically disabled person who is able to speak. The robot is designed to bring him or her an object upon request. When the robot fails to find the object, it may ask the user to assist it using short, 'user-friendly' conversation. We have already developed some interactive object-recognition methods [7,8,9]; these methods were designed for the recognition of simple single-color objects against a plain background. Here we extend these works to complex objects in complex backgrounds.
3.1 Grammar and Sentence Pattern
In order to implement interactive object recognition, robots have to understand the user’s instruction. We have developed the following method. Instructions are grouped into nine categories. In order to build a sentence pattern, words or phrases must be selected from the vocabulary list. Some words are marked as optional. We limited the vocabulary list to avoid ambiguity during speech recognition. The user must follow the sentence structure (Table 1) and choose the words from the registered word list (Table 2) for the corresponding vocabulary type to successfully initiate the command. Optional words, though not required, provide more natural speech. For example, the user can say, “Get me a noodle.” This satisfies the grammar of ‘Object ordering: class’ and it uses the vocabulary from Phrase 1 and Object Name. Likewise, the user could also say, “May I have the Nescafe (brand name) Coffee jar?” Words not appearing in the vocabulary list may not be used. The vocabularies are listed in Table 2. Language processing presented here is not state of the art. We developed it for checking
Table 1. Grammar

Purpose | Sentence structure | Example
Feedback | Feedback | Yes/No
Object ordering: class | Phrase 1 + a/an + Object Name | Get an apple.
Object ordering: specific | Phrase 1 + (the) + (Specifier/color) + Object Name (at least one 'the' or 'Specifier/color' is required) | Get my cup.
Positional information 1 | Verb 1 + Positional adjective/Preposition 1 + (Article) + Specifier/color + Object Name | Look at the left of Seafood noodle.
Positional information 2 | Verb 1 + Positional adjective/Preposition 1 + that + (Object Name) | Look behind that.
Positional information 3 | Verb 1 + Preposition 2 + (Article) + (Specifier/color) + Object Name + (and) + (Article) + (Specifier) + Object Name | Look between Pepsi can and tea bottle.
Instruction to point | (Phrase 2) + Verb 2 | Please show me.
Instruction to find | (Phrase 3) + Verb 3 + (Article) + Specifier/color + Object Name | Can you find the wooron tea bottle?
Object description with single color | Color | Red
Table 2. Vocabulary

Type | Registered words
Feedback | Yes, No
Phrase 1 | May I have, Can I have, Can I get, (Please) get (me), (Please) Bring, I'd like, I would like, Give (me)
Phrase 2 | Please, Could you (please), Can you (please)
Phrase 3 | Could you, Can you
Verb 1 | (Please) look (at/to), (Please) check
Verb 2 | Show (me), Point
Verb 3 | Find, See
Specifier | My, Coke, [brand name], etc.
Positional adjective | Left, Right
Preposition 1 | Front, Behind, Top, Bottom
Preposition 2 | Between
Object name | Noodles, Cup, Jar, Bottle, Coffee jar, etc.
Color | Red, Green, Yellow, etc.
the effectiveness of the interactive object-recognition technique. At present, user instruction is given through a keyboard and the robot response is generated by text to speech. We will use the results developed by researchers on natural language understanding in the future.
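To illustrate how such a restricted grammar can be matched, a toy pattern matcher over the registered vocabulary might look like this (Python; the pattern encoding, the abbreviated word lists, and the category names are our own simplification of Tables 1 and 2, not the system's actual parser):

```python
import re

# A few registered word groups from Table 2 (abbreviated)
VOCAB = {
    "PHRASE1": r"(may i have|can i (have|get)|(please )?get( me)?|(please )?bring|give( me)?)",
    "VERB1":   r"((please )?look( at| to)?|(please )?check)",
    "PREP2":   r"between",
    "OBJECT":  r"(noodles?|cup|jar|bottle|coffee jar|sugar jar|tea bottle)",
    "SPEC":    r"(my|coke|wooron|red|green|yellow)",
}

# Simplified sentence patterns from Table 1 (purpose -> regular expression)
PATTERNS = {
    "order_class":    rf"^{VOCAB['PHRASE1']} (a|an) {VOCAB['OBJECT']}$",
    "order_specific": rf"^{VOCAB['PHRASE1']} (the )?({VOCAB['SPEC']} )?{VOCAB['OBJECT']}$",
    "position_3":     rf"^{VOCAB['VERB1']} {VOCAB['PREP2']} .+ and .+$",
}

def parse_instruction(text):
    """Return the purpose category of a user utterance, or None if it is not covered."""
    t = text.lower().strip().rstrip("?.!").strip()
    for purpose, pattern in PATTERNS.items():
        if re.match(pattern, t):
            return purpose
    return None

print(parse_instruction("Get a noodle"))                 # -> order_class
print(parse_instruction("May I have the coffee jar?"))   # -> order_specific
```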
3.2 Interaction in Different Situations
Results of autonomous object recognition can be classified as shown in Figure 2. Interactive recognition is initiated by the robot when the autonomous methods fail in different situations. To start the appropriate dialog it is necessary for the robot to know which case applies. Cases 1 to 3 are identified by the number of object(s) found. Case 4 arises when the user asks the robot to get an object which is unknown to the robot, i.e., the robot does not have any model of that object. We assume that if the object model exists in the robot's database and the object is in the field of view (FOV), the robot can detect it. Moreover, if the search scene is out of the robot's FOV, the user corrects the robot's FOV through a pointing gesture; such a method is developed in [16]. As a result, when the robot cannot find any object, it assumes that the object is occluded and identifies the situation as case 3-1. If the robot cannot detect any of the ordered objects after interaction with the user in case 3-1, the object model in the robot is considered not applicable in this situation. This happens when the object name is in the robot's database but the object experienced by the robot is too different from the object model. For example, if the robot's model of 'apple' includes only red apples and there is a green apple in the scene, the robot may not be able to detect it. Thus, such a situation turns into case 4. Below, we discuss the different cases in detail.
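The case selection described above can be summarized in a small dispatcher (Python; a schematic of the control flow only, with hypothetical function and object names for the detection and dialogue steps):

```python
def handle_request(object_name, database, detector, dialogue):
    """Route an object request to the interaction case described in the text."""
    if object_name not in database:
        return dialogue.case4_ask_for_attribute(object_name)    # unknown object model
    detections = detector.find(object_name)
    if len(detections) == 1:
        return dialogue.case1_confirm(detections[0])            # single candidate
    if len(detections) > 1:
        return dialogue.case2_disambiguate(detections)          # point at each in turn
    # nothing found: assume occlusion first (case 3-1); if interaction still
    # fails, fall back to treating the model as inapplicable (case 4)
    found = dialogue.case3_use_reference_object(object_name)
    return found if found else dialogue.case4_ask_for_attribute(object_name)
```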
Fig. 2. Outcomes of autonomous object recognition
Case 1. One instance of the required object is found
In this case the robot detects one object, which may be a true positive or a false positive. To confirm the accuracy of the detection, the robot points to the detected object and seeks the user's confirmation. If the result is wrong, this turns into case 3. A sample conversation for this case is given below.
Conversation:
User: Get a coffee jar.
Robot: I found this. (Robot points to the object.)
Robot: Is this correct?
User: Yes.
No further user interaction is required since the robot has successfully recognized the desired object.
Case 2. More than one object is found
This is the case when the robot detects more than one object. The recognition result is one of the following: (1) all are true positives, (2) only one is a true positive, or (3) all are false positives. The robot selects the particular true positive or eliminates the false positives by pointing at the objects sequentially and asking the user about each pointed object. If none of the found objects is a true positive, the situation turns into case 3. A sample conversation is given below:
Conversation (see Figure 3(a)-(b)):
User: Get a coffee jar.
Robot: I found two.
User: Show me.
(Robot points to the upper object.)
Robot: Is this correct?
User: Yes.
(Next the robot points to the lower object.)
Robot: Is this correct?
User: No.
Analysis: Here "Get (Phrase 1) a coffee jar (Object name)" follows the sentence structure of 'Object ordering: class'. Since there is more than one coffee jar, the user says 'a coffee jar'. The robot uses the Gabor-feature-based KPCA+SVM and finds two objects, one of which is a creamer jar. The robot is informed of the false positive through user interaction, rejects the object found in the lower bounding box, and only the correct object remains. The robot updates its model of 'coffee jar' by including the image of the false positive in the negative training set. If the robot makes the same mistake again, it adds one more instance of that image, which gives more weight to that particular image.
Fig. 3. (a) Two objects have been found where the lower one is false positive (creamer) (b) False positive is removed through user interaction (c) The required object is not found due to occlusion (d) Object is found through user interaction
Case 3-1. No object found due to occlusion
This case arises when the required object model is in the robot's database and the object is in the FOV, but the robot cannot find it. The robot assumes that the failure is due to occlusion. The robot may overcome the occlusion by moving to the left or right. The robot
estimates the moving direction from the positional information about the required object obtained from the user. A sample conversation is given below:
Conversation (see Figure 3(c)-(d)): The required object is the sugar jar occluded by the big tea bottle.
User: Get the sugar jar.
Robot: I could not find it. Tell me one object near it.
User: Can you find the wooron tea bottle?
Robot: Yes.
User: Look behind that. (The robot moves left.)
Robot: I found one.
Analysis: Here the user wants the sugar jar; there is only one sugar jar in the house, so the order is specific and follows the appropriate sentence structure. The sugar jar is plain and only one example is available, so the robot uses the color histogram for recognition. Since the object is behind the wooron tea bottle, the robot cannot find it and informs the user. The user helps the robot by using a reference object (the wooron tea bottle), which is easy to find as it has many SIFT keypoints. The robot uses SIFT to locate it first and then moves to the left to obtain the view 'behind the wooron tea bottle' and find the required sugar jar. Since moving to the right would take the robot away from the tea bottle, it does not go that way. When the user says 'that', the robot understands 'wooron tea bottle' since it is mentioned in the previous sentence. If the robot cannot find the desired object using reference objects, the method described in case 4 may be used.
Case 3-2. No object found, as the object is not in the FOV
The robot does not know whether the object is in the FOV or not. The user initiates the dialog and directs the robot by pointing gesture in this case. Details can be found in [16].
Case 4. No object found, as the object model is not in the database
In this case, the robot cannot find the object even though the object is in the robot's field of view. The robot needs to obtain some visual attribute of the object from the user to recognize it. The user may use many such attributes to describe an object. In our previous interactive object-recognition methods [7,8,9], the robot asks the user about visual attributes, and we only considered simple objects such as single-color objects. However, in the real world objects are usually more complex; for example, most objects have parts of multiple colors. Still, humans seem to describe such complex objects in a simple way. An investigation of how humans describe the appearance of objects is given in [10]. The results are summarized as follows: (1) 51% of the studied people described the studied household objects by color; the rest used shape (25.3%), pattern (9.8%), size (6.7%), attached things (4.1%), and material (3.1%). (2) Among the people who described the objects by color only, 80.8% used only one color and the remaining 19.2% used
two or more colors. (3) Among the people who described the objects by only one color, 82% used the base color or major color to describe an object. The results indicate that we need a vision process to detect a multi-color object even though the user mentions only one color for the object. If any one color occupies most of the area, this is the color that the user uses to describe the object. If there are multiple different color regions, the results in [10] indicate that humans mention the color of the base part of the object. We calculate the convex hulls of the color regions and select the color with the largest filled convex-hull area as the base color (a code sketch of this step is given at the end of this subsection). We illustrate the process in Figure 4. Here a glass-shaped container of 'ramen snack' is considered. There are two dominant colors in this object: red and yellow. We extracted both colors and computed the number of pixels of each color. In the 'Red' area there are 6712 pixels and in the 'Yellow' area there are 6661 pixels; both colors cover almost equal areas. Then, the convex hull is computed and filled for 'Red' as shown in Figure 4(f) and for 'Yellow' as shown in Figure 4(i). In this case, as the 'Red' area is larger, 'Red' is considered the base color of the object and is used to describe it.
Conversation (see Figure 5):
User: Get the 'ramen snack'.
Robot: I do not know 'ramen snack'. Tell me its color.
User: Red.
Robot: I found one.
User: Show me.
(Robot points to the found object.)
Robot: Is this correct?
User: Yes.
Analysis: Here the user wants the 'ramen snack' (Figure 5). Since this object model was not in the robot's database, it says "I do not know 'ramen snack'". Then it asks the user to describe the object by color. Using the mentioned color (red), the robot finds the object using the procedure described in Figure 4. The robot then adds this object to the training dataset. Next time it uses method 1 to
Fig. 4. Base-part color detection (a) ramen snack (b) red area (c) yellow area (d) point set of red area (e) convex hull (f) area of red region (g) point set of yellow area (h) convex hull (i) area of yellow region
Fig. 5. (a) Object is not found (b) Object is found using color information through interaction
recognize this specific object, as the object is textured. If there were more than one red object in the scene, we could use a reference object as in case 2 to eliminate the unwanted objects. Also, if the unknown object is occluded, it is possible to use the interaction of case 3-1.
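A rough sketch of the base-color selection by filled convex-hull area (Python with OpenCV/NumPy; the crude color-segmentation step, the tolerance, and the example color values are our simplifying assumptions):

```python
import cv2
import numpy as np

def base_color(image_bgr, candidate_colors, tol=40):
    """Select the base color as the candidate whose pixels span the largest
    filled convex-hull area (cf. Fig. 4).
    candidate_colors: dict name -> BGR triple, e.g. the object's dominant colors."""
    best_name, best_area = None, -1
    for name, bgr in candidate_colors.items():
        # crude segmentation: pixels within `tol` of the candidate color
        diff = np.linalg.norm(image_bgr.astype(np.int32) - np.array(bgr), axis=2)
        mask = (diff < tol).astype(np.uint8)
        pts = cv2.findNonZero(mask)
        if pts is None or len(pts) < 3:
            continue
        hull = cv2.convexHull(pts)
        filled = np.zeros(mask.shape, np.uint8)
        cv2.drawContours(filled, [hull], -1, 1, thickness=-1)   # fill the hull
        area = int(filled.sum())
        if area > best_area:
            best_name, best_area = name, area
    return best_name

# Hypothetical usage for the 'ramen snack' example
# base_color(obj_img, {"red": (40, 40, 200), "yellow": (40, 200, 220)})
```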
4 Learning Through Failure
In interactive object recognition, we have to ensure that an interaction that took place earlier for a particular object is not repeated by the robot. Therefore, the robot should learn from failures. We have developed a simple method of interactive learning; Figure 6 shows the flow. When the system cannot detect a requested object, it uses interaction with the user to detect it. After successful detection, the system updates the model of the object by adding the image of the detected object. In our experiments, we noticed that the inclusion of even a single representative image in the training set can improve the recognition
Fig. 6. Interactive learning
Fig. 7. (a) Detected object along with false positive (b) No result of false positive after inclusion of the previous false positive image in the negative training set
results significantly. In Figure 7 we demonstrate the effectiveness of updating the object model upon failure through user interaction. Here, the user requests the robot to get a coffee jar, but the robot detects two objects, one of which is a false positive (Figure 7(a)). The robot learns which one is correct and which is wrong through interaction with the user. The robot then includes the false-positive image in the negative training set and updates the model. Now the robot can detect the object without any false positive, as shown in Figure 7(b). In this experiment, the number of training images is 30 for the positive set and 80 for the negative set.
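The learning-through-failure loop of Figure 6 can be summarized in a few lines (Python; `retrain` stands for whichever of methods 1–4 is in use, and the class and method names are illustrative only, not the authors' implementation):

```python
class ObjectModel:
    """Keeps positive/negative training images and retrains after each failure."""
    def __init__(self, positives, negatives, retrain):
        self.pos, self.neg, self.retrain = list(positives), list(negatives), retrain
        self.classifier = retrain(self.pos, self.neg)

    def confirm(self, detection_img, is_correct):
        """Update the model from user feedback, as in the Fig. 7 example."""
        if is_correct:
            self.pos.append(detection_img)    # reinforce the confirmed detection
        else:
            self.neg.append(detection_img)    # repeated mistakes add extra weight
        self.classifier = self.retrain(self.pos, self.neg)
```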
5 Conclusions
To make a service robot's vision system work well in various situations, we have integrated interactive recognition methods with the autonomous methods. We classified the failures of the autonomous recognition methods into four situations and employed different types of interactions in these situations so that the robot can recover from failure. In this paper, we have analyzed these situations, constructed the vocabulary, grammar, and sentence patterns for the interactive methods, and implemented the interactive methods. We also developed an interactive learning system so that the robot can learn through recognition failures and create or improve an object model. Further study on interactive object recognition and learning from failures is left for future work.
References
1. Lowe, D.: Distinctive Image Features from Scale-invariant Keypoints. International Journal of Computer Vision 60, 91–110 (2004)
2. Serre, T., Wolf, L., Poggio, T.: A New Biologically Motivated Framework for Robust Object Recognition. AI Memo 2004-026, CBCL Memo 243, MIT (2004)
3. Li, S.Z., et al.: Kernel Machine Based Learning for Multi-View Face Detection and Pose Estimation. In: Eighth International Conference on Computer Vision, pp. 674–679 (2001)
4. Liu, C.: Gabor-Based Kernel PCA with Fractional Power Polynomial Models for Face Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 26, 572–581 (2004)
5. Schölkopf, B., Smola, A.J., Muller, K.-R.: Nonlinear Component Analysis as a Kernel Eigenvalue Problem. Neural Computation 10, 1299–1319 (1998)
6. Intelligent Robotics and Communication Laboratories, http://www.irc.atr.jp/index.html
7. Hossain, M.A., Kurnia, R., Nakamura, A., Kuno, Y.: Interactive Object Recognition through Hypothesis Generation and Confirmation. IEICE Transactions on Information and Systems E89-D, 2197–2206 (2006)
8. Hossain, M.A., Kurnia, R., Nakamura, A., Kuno, Y.: Interactive Object Recognition System for a Helper Robot Using Photometric Invariance. IEICE Transactions on Information and Systems E88-D, 2500–2508 (2005)
9. Kurnia, R., Hossain, M.A., Nakamura, A., Kuno, Y.: Generation of Efficient and User-friendly Queries for Helper Robots to Detect Target Objects. Advanced Robotics 20, 499–517 (2006)
10. Sakata, K., Kuno, Y.: Detection of Objects Based on Research of Human Expression for Objects. In: Symposium on Sensing Via Image Information CD-ROM (in Japanese) (2007)
11. Dunn, D., Higgins, W.E.: Optimal Gabor Filters for Texture Segmentation. IEEE Transactions on Image Processing 4, 947–964 (1995)
12. Jain, A.K., Farrokhnia, F.: Unsupervised Texture Segmentation Using Gabor Filters. Pattern Recognition 24, 1167–1186 (1991)
13. Manjunath, B.S., Ma, W.Y.: Texture Features for Browsing and Retrieval of Image Data. IEEE Transactions on Pattern Analysis and Machine Intelligence 18, 837–842 (1996)
14. Mansur, A., Hossain, M.A., Kuno, Y.: Integration of Multiple Methods for Class and Specific Object Recognition. In: International Symposium on Visual Computing, Part I, pp. 841–849 (2006)
15. Mansur, A., Kuno, Y.: Integration of Multiple Methods for Robust Object Recognition. Accepted in: SICE Annual Conference, Kagawa, Japan (September 2007)
16. Hanafiah, Z.M., Yamazaki, C., Nakamura, A., Kuno, Y.: Human-Robot Speech Interface Understanding Inexplicit Utterances Using Vision. In: International Conference for Human-Computer Interaction, 1321–1324/CD-ROM Disc2 2p1321.pdf (2004)
Motion Projection for Floating Object Detection
Zhao-Yi Wei (1), Dah-Jye Lee (1), David Jilk (2), and Robert Schoenberger (3)
(1) Dept. of Electrical and Computer Eng., Brigham Young University, Provo, UT, USA
(2) eCortex, Inc., Boulder, CO, USA
(3) Symmetron, LLC, a division of ManTech International Corp., Fairfax, VA, USA
Abstract. Floating mines are a significant threat to the safety of ships in theatres of military or terrorist conflict. Automating mine detection is difficult due to the unpredictable environment and the high requirements for robustness and accuracy. In this paper, a floating mine detection algorithm using motion analysis methods is proposed. The algorithm aims to locate suspicious regions in the scene using contrast and motion information, specifically regions that exhibit certain predefined motion patterns. Throughput of the algorithm is improved with a parallel pipelined data flow. Moreover, this data flow enables further computational performance improvements through special hardware such as field programmable gate arrays (FPGAs) or graphics processing units (GPUs). Experimental results show that this algorithm is able to detect mine regions in the video with a reasonable false positive rate and a minimal false negative rate.
1 Introduction
Of the 18 U.S. ships damaged by air and naval weapons since 1950, 14 were damaged by mines [1]. In the early 1980s, the U.S. Navy began development of new mine countermeasures (MCM) forces [2]. This included two classes of mine warfare ships: the Avenger class and the Osprey class. These ships were equipped with sonar and video systems, cable cutters, and a mine-detonating device that can be released and detonated by remote control [2-3]. In the 1990s, predecessors of the U.S. Navy's DDG-1000 program began the process of developing a family of advanced-technology multi-mission surface combatants. The successful detection and classification of floating objects, especially mines, will be essential to the security of these ships. Moreover, to decrease the on-board manpower, partial or extensive automation of tedious, full-attention tasks such as floating mine detection and classification is important. Sonar and video systems [4-5] can be applied to detect floating mines. Unfortunately, the wide variation in surface mine and other target signatures, combined with ship motion and the greater impact in shallow water of surface acoustic interactions, reverberation, bubble fields, mixed salinity and currents, organic matter and debris, high amounts of clutter due to bottom features, and other phenomena on the performance of sonar systems, limits the ability of both the sonar system and its operator to detect and classify floating objects with a sufficiently high probability of detection and low probability of false alarm. Further, unlike sonar, a video system is passive and thus does not have a negative impact on the environment. A video-based floating mine detection system thus has advantages over sonar.
The detection system as a whole receives input in the form of RGB as well as infrared (IR) video streams, and identifies floating objects of interest. Results reported in this paper use recorded video as input. All video clips are approximately 30 to 60 seconds long, taken from a fixed onshore position over six days. Resolution of all clips is 640x480 pixels. Although the RGB and IR data maintain a consistent relationship to each other, they are not perfectly aligned and are therefore processed independently by the same processing algorithm. Most clips contain at least one floating object and most floating objects in these clips are either mine-like or buoy-like. Different clips have different zooming characteristics. Weather and lighting conditions vary significantly over the course of video acquisition. The structure of the whole system is shown in Figure 1. A mine detection module is located at the front of the system, extracting suspicious regions and higher level descriptors such as region area, shape, and motion from the raw video. These data are fed into the mine recognition module for further classification. The output of the mine recognition module is post-processed to obtain a final identification result. This paper focuses on mine detection: other processing modules are outside the scope of this paper.
Fig. 1. System structure overview
We propose a robust and accurate floating mine detection algorithm. This algorithm uses contrast and motion as the two key features to distinguish floating objects from the background. Due to application-specific requirements, robustness means producing a low false negative rate under all possible environmental conditions. In other words, the mine detection module should not miss any suspicious regions in the scene. Accuracy means a low false alarm rate with the prerequisite of low false negatives. In order to avoid the limitations of traditional motion estimation algorithms, an efficient and accurate motion correspondence approach is proposed, which extracts regions with certain motion patterns. Experiments show our mine detection algorithm does an excellent job of identifying objects of interest, and it is generally successful at excluding sunlight glint and other very short-lived distractors in any conditions. The paper is organized as follows: In section 2, the overall structure of the detection algorithm and the reasons for such structure are discussed. In section 3, the formulation of each step in the framework is introduced. Intermediate results along the data flow of the pipeline are shown as well. Experimental results are shown in Section 4. Conclusions and future work are presented in Section 5.
2 Algorithm Overview
The objective of the mine detection module is to identify candidate target locations in the scene. A "multi-scale" algorithm that uses both contrast detection and motion estimation is developed to handle objects of different sizes. Small regions of high intensity contrast are initially selected as candidate regions. Relative motion of these regions is estimated and analyzed, and regions which do not match the specified motion characteristics are excluded from further processing. Morphological operations and temporal smoothing are performed on the motion analysis results, and outputs from different image scales are combined to reach a final decision, resulting in a list of candidate mine regions for each frame. Other information such as region size, shape, and motion is also generated for the subsequent recognition module. Mine recognition is a pattern recognition problem. Using the features provided by the detection module, recognition has two objectives: first, to remove false positives, particularly waves, from the set of candidate regions; and second, to identify the category of each remaining target object. Although mine recognition is not the focus of this paper, a robust and accurate pre-processing algorithm for detection can provide fewer false positives and better feature data to the recognition algorithms, improving the results and computational performance of the system as a whole.
3 Floating Mine Detection Algorithm 3.1 Problem Description and Analysis The objective of the mine detection algorithm is to identify regions of interest in video of ocean scenes under a variety of circumstances, including variable sea and lighting conditions, variable target shapes and colors, and distractors such as waves, sun glare, and debris in the scene. The algorithm must have a miss rate very close to zero, and within that constraint should attempt to minimize the false positive rate. Reducing false positives further is the goal of the subsequent processing modules. The approach used in this paper relies on contrast and motion. A few example video frames will illustrate the principles involved. In Fig. 2 (a), the water background is a relatively uniform region, while the mine, highlighted manually in a box, is slightly darker. In other cases, mines could be much darker or brighter than the background, or even a different color. It should be noted that if there is very little contrast between the mine and the background, it would be difficult for a human or any algorithm to detect it. Generally, we assume that either the IR or RGB video will provide enough contrast between the background and the object for detection to be feasible. Based on the above observation, candidate pixels can be selected by identifying those with a certain minimum amount of contrast to the average of a small surrounding region. We determined that grayscale intensity contrast is adequate, thus avoiding the extra computational cost for color processing. Contrast information is far more reliable than simple intensity thresholding because of large variations in lighting and mine brightness under anticipated conditions.
However, contrast information alone is not sufficient for candidate pixel selection. In Fig. 2 (b), the regions highlighted in boxes contain pixels with higher grayscale intensity than the immediate background regions. This contrast is caused by moving waves, so these regions should not be classified as mines. Other distractors such as sun glint and non-floating objects may also exhibit significant intensity contrast with respect to the immediate background. Many of these distractors can be eliminated by using motion information. In particular, we observe that mines and distractors have distinct motion patterns. In Fig. 2 (c), all highlighted regions are true mines except the rightmost one, which is actually a bird flying across the scene from right to left. It cannot be distinguished from the targets in a single frame, but by observing its motion over multiple frames it is clearly not a floating object. With the above analysis, a mine detection algorithm was designed as shown in Fig. 3. The components of the algorithm are discussed individually in the following subsections.
Fig. 2. (a) uniform background with a dark object, (b) distractors with brighter grayscale intensity, and (c) dark objects and one distractor with different motion pattern
Fig. 3. Mine detection algorithm diagram
3.2 Processing Module Formulation 3.2.1 Multi-scale Scheme As shown in Fig. 3, a multi-scale scheme is used in the algorithm, for two reasons. First, mine size varies and is unknown. The multi-scale scheme accommodates
different mine sizes. Second, a multi-scale scheme is highly efficient for computing image attributes. Input video is first down-sampled to several different scales. A Gaussian filter is applied to the image to avoid aliasing, then it is down-sampled by a power of two. In our tests, the image was only down-sampled once (by a factor of two). Subsequent processing stages, which are applied to each of these scales, are: candidate pixel selection, motion estimation/analysis, and spatial-temporal smoothing. Results from the different scales can be fused together to reach a final decision. In the current implementation, results from different scales are retained and displayed for comparison. The improvement obtained by applying the multi-scale scheme will be shown in the next subsection. 3.2.2 Candidate Pixel Selection We denote the video at scale i as Vi(x, y, t) where x, y, and t are the spatial-temporal coordinates, such that t=1, 2,···2n+1 and i=1, 2,···N. n represents the number of frames before and after the middle frame that are needed for processing. As will be mentioned in the next section, 2n+1 temporally sequential frames are used to calculate the point motion correspondence for the middle frame, i.e. the n+1th frame. The temporal window of interest moves from frame to frame along the temporal axis. As discussed before, the mine should have sufficiently different intensity from its background. For candidate pixel selection, only the center frame Vi(x, y, n+1) is needed. The center frame is divided into “blocks” of 80×80 pixels out of a total of 640×480 pixels. We denote the intensity value at one pixel (x, y) as p(x, y), and the mean intensity and standard deviation of the current block Bk as bk and σk. The value of the candidate mask at (x, y) can be calculated as
$$C(x, y) = \begin{cases} 1, & \text{if } \left| p(x, y) - b_k \right| \ge n_\sigma \cdot \sigma_k \\ 0, & \text{otherwise} \end{cases} \qquad (1)$$
where nσ controls the number of initial candidates that will be selected. If the value of nσ is high, only pixels with intensity values very different from their background are selected; a mine could be missed if it has an intensity value very close to the background. If nσ is low, pixels with intensity values close to the background will be successfully selected, but at the cost of increasing false positives. Another consideration is the block size. Our assumption is that the mine is small compared to the block size and that its intensity value differs from the average intensity of the pixels in the block. If the mine is large and occupies most of the block, the average intensity of the block will be close to the average mine intensity. In this case, it is very likely that the pixels of the mine will not be selected as candidates. The multi-scale scheme is helpful here as well. Fig. 4 (a) shows an original frame from a video clip, with the corresponding detected candidate pixels shown in Fig. 4 (b). Besides pixels in the mine region, some false positive pixels are detected. These false positives can be removed by motion analysis. Fig. 5 (a) shows an original frame from a video clip, and its candidate pixels are shown in Fig. 5 (b). In this case the mine size is close to the block size, biasing the average intensity of the block and hence reducing the contrast between the mine and the average intensity value. Figs. 4 (b) and 5 (b) are obtained at the full image scale.
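A minimal sketch of the block-based candidate selection of Eq. (1), assuming a single grayscale frame stored as a NumPy array; the 80×80 block size follows the text, while the default value of n_sigma is a placeholder to be tuned as discussed above.

```python
import numpy as np

def candidate_mask(frame, block=80, n_sigma=2.0):
    """Candidate pixel selection (Eq. 1): flag pixels whose grayscale
    intensity differs from the mean of their 80x80 block by more than
    n_sigma standard deviations of that block."""
    h, w = frame.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            blk = frame[by:by + block, bx:bx + block].astype(np.float64)
            b_mean, b_std = blk.mean(), blk.std()
            mask[by:by + block, bx:bx + block] = (
                np.abs(blk - b_mean) >= n_sigma * b_std).astype(np.uint8)
    return mask
```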
Fig. 4. (a) A video frame with a small mine, (b) candidate pixel selection result, and (c) mine detection result using motion projection
Fig. 5. (a) A video frame with a large mine, (b) candidate pixel selection result using full image scale, and (c) candidate pixel selection result using multiple scales
Fig. 5 (b) shows the drawback of processing at just one scale. Fig. 5 (c) shows the fusion result of candidate selection with multiple scales. 3.2.3 Motion Estimation/Analysis The purpose of motion estimation is to discriminate the mines from other distractors with distinct motion patterns. Existing motion estimation algorithms can be roughly divided into two broad categories: block matching and optical flow. Block matching algorithms [8-9] attempt to match a block of pixels to adjacent blocks, centered within a certain distance of the original, in the subsequent frame or frames. The size of the block to be matched and the search radius and strategy are critical to the performance of the algorithm. However, for the problem we are addressing here, the target size is not predictable, so there will be no single optimal block size. Moreover, for very small or distant targets, there is little texture in the region of interest, reducing the performance of block matching even where the block size is appropriate. Optical flow algorithms [10-11] are based on the brightness constancy assumption. They need strong regularity of brightness to suppress noise and obtain accurate results. However, these algorithms will fail in scenes like Fig. 2 (b): the motion of small objects will be severely distorted because of the lack of such regularity. In this case, point correspondence algorithms, which are suitable for calculating small region motion, could be applied. Point correspondence algorithms normally define a motion model and then optimize the model to obtain the motion correspondence [12] from two or more frames. Extensive effort has gone into enhancing the accuracy of correspondence algorithms by improving the motion model and
optimization techniques, at the cost of processing speed. Unfortunately, this high computational cost is not acceptable in a real-time application. For a more complete survey of point correspondence algorithms, refer to [12]. In this paper, we propose a motion correspondence algorithm that we call “motion projection,” to estimate motion efficiently and with high accuracy. This algorithm makes two simple assumptions: the first is the brightness constancy assumption; the second is the constancy of motion across frames. Consider a pixel of interest V(x, y, t) and a square block of size N×N centered at this pixel, where N = 2n+1 and n is half the window size. As shown in Fig. 6, this window and the windows centered at the same pixel in the preceding and subsequent n frames can be stacked upon each other to form two N×N×(n+1) volumes, called the forward and backward volumes, respectively. From the center pixel in the middle frame to each pixel in the end frame (the (t+n)-th frame in the forward volume and the (t−n)-th frame in the backward volume), there are N² trajectories, denoted $\{D^{F}_{1}, D^{F}_{2}, \dots, D^{F}_{N^2}\}$ and $\{D^{B}_{1}, D^{B}_{2}, \dots, D^{B}_{N^2}\}$. Each trajectory is the set of n+1 pixels that line up in the spatio-temporal domain. The true motion trajectories for the forward and backward volumes can be formulated as
$$D^{F}_{\mathrm{true}} = \Big\{ D^{F}_{k} \;\Big|\; \sigma(D^{F}_{k}) = \min_{j}\,\sigma(D^{F}_{j}),\; j = 1, 2, \dots, N^{2} \Big\} \qquad (2)$$
$$D^{B}_{\mathrm{true}} = \Big\{ D^{B}_{k} \;\Big|\; \sigma(D^{B}_{k}) = \min_{j}\,\sigma(D^{B}_{j}),\; j = 1, 2, \dots, N^{2} \Big\} \qquad (3)$$
where $\sigma(D^{B}_{j})$ is the variance along trajectory $D^{B}_{j}$. One difficulty is that the intersections of a trajectory with each frame may not fall on exact grid points. To handle this, we could select the intensity value of the grid point closest to the intersection, or use interpolation techniques to generate the value. In this paper we use the former method to simplify the design. The goal of motion estimation is to detect regions with consistent motion, i.e., constant velocity over the period of the projection. This process filters out glare or glint, which is a common distractor in this application. An individual glare element has a very short visibility cycle and also shows large changes in brightness; thus its motion trajectory does not stay constant, and it can be distinguished from mines. Wave elements exhibit consistent motion and cannot be isolated from targets by that factor alone. However, they also exhibit more generally horizontal movement, whereas floating objects, including mines, exhibit primarily vertical motion. In the current version of the algorithm, a minor bias is given to vertical motion in the filtering of targets. This distinction is ripe for further research, as wave motion will be generally consistent across the field of view and over different waves, and will differ from that of mines. Figs. 4 (c) and 5 (c) show the motion estimation/analysis results for the clips shown in Figs. 4 and 5. Although some pixels on the target surface are filtered out, many false positives in the image are also removed.
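The forward-volume search can be sketched as follows, assuming a grayscale frame stack `frames` of shape (T, H, W); the backward volume is handled symmetrically by stepping through frames t, t−1, …, t−n. This is an illustrative reading of Eqs. (2)-(3) with nearest-grid-point sampling, not the authors' exact implementation.

```python
import numpy as np

def motion_projection(frames, x, y, t, n):
    """Search the forward volume of Fig. 6: among the (2n+1)^2 straight-line
    trajectories from pixel (x, y) at frame t to every pixel of the window
    at frame t+n, return the displacement whose intensity variance along the
    trajectory is minimal (Eq. 2), together with that variance."""
    best_disp, best_var = None, np.inf
    T, H, W = frames.shape
    for dy in range(-n, n + 1):          # displacement reached after n frames
        for dx in range(-n, n + 1):
            samples = []
            for k in range(n + 1):       # frames t, t+1, ..., t+n
                # nearest-grid-point sample along the straight trajectory
                yy = int(round(y + dy * k / float(n))) if n else y
                xx = int(round(x + dx * k / float(n))) if n else x
                if 0 <= yy < H and 0 <= xx < W and t + k < T:
                    samples.append(float(frames[t + k, yy, xx]))
            if len(samples) < n + 1:     # trajectory left the image
                continue
            var = np.var(samples)
            if var < best_var:
                best_disp, best_var = (dx, dy), var
    return best_disp, best_var
```

A low minimum variance indicates a consistent (constant-velocity) trajectory; the sign of the vertical component of the winning displacement can then be used to apply the vertical-motion bias described above.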
Fig. 6. (a) Forward motion projection volume and (b) backward motion projection volume
3.2.4 Spatial-Temporal Smoothing In some cases, the motion estimation/analysis result contains noise and non-mine distractors even after the previous filtering, for example from wave motion. Based on the observed motion difference between object and background, spatio-temporal smoothing is carried out to retain only the motion regions that are consistent in the spatio-temporal domain. Filtering is performed by simply counting the number of pixels exhibiting consistent motion from the motion estimation and analysis, and removing pixels that have a score below a threshold. Currently, the threshold is one-half of the mask size; for example, if the mask is an 11x11x11 volume, there must be at least five pixels with consistent motion to pass the filter. 3.3 Computational Cost and Algorithm Architecture As discussed earlier, the entire computation described above is concatenated and pipelined. Also, some of the earlier modules break the data into blocks which can be processed in parallel. The goal of this design is to use simple but reliable processing at each step to improve overall performance. Further, modules with a smaller computational cost (e.g., candidate pixel selection) are positioned at the beginning of the pipeline, so that computationally expensive modules (e.g., motion estimation/analysis) are applied only to a selected subset of pixels. A similar architecture is applied in [7], where the mine detection algorithm could be denoted as “Front-End Analysis” and the subsequent recognition and post-processing stages are called “High-level Analysis”. Currently the entire computation process requires approximately 5-7 seconds per frame on a dual-core Pentium PC in the Matlab development environment. Some parts of the code, such as motion estimation/analysis, are optimized using C code interfaced via the “mex” framework. Because of its modular, pipelined design, the algorithm could also be easily implemented for higher performance using hardware accelerators such as DSPs or GPUs, or directly in hardware (such as FPGAs) to achieve high-speed processing at or near the camera frame rate. Further, its simple design enables further improvement of results by incorporating additional modules where appropriate.
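As a rough illustration of the counting filter described above, the sketch below box-filters a binary consistent-motion volume over an 11×11×11 spatio-temporal neighborhood and keeps pixels with at least five consistent neighbors; the use of SciPy here is an implementation convenience of this sketch, not part of the original Matlab/mex system.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_motion_mask(motion_mask, size=11):
    """Spatio-temporal smoothing: motion_mask is a binary (T, H, W) array of
    pixels with consistent motion.  A pixel survives only if the number of
    consistent pixels in its size x size x size neighbourhood reaches
    one-half of the mask size (e.g. at least 5 for an 11x11x11 volume)."""
    counts = np.rint(
        uniform_filter(motion_mask.astype(np.float64), size=size) * size ** 3)
    return counts >= size // 2
```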
4 Experiment The objective of this algorithm is to detect all possible mine candidates and provide a good candidate list for further processing such as high-level motion analysis and object recognition. Due to the nature of the application and the fact that this algorithm is a pre-processing component of the overall solution, false positives are more tolerable than false negatives. Thus, in the candidate pixel selection module, nσ is set to a small value so that virtually every visible non-uniform region is detected using Equation (1). Although this approach generates a large number of false positives, many are filtered out in the subsequent motion estimation/analysis module as well as in the spatio-temporal smoothing module. From the binary mask indicating mine pixel locations, we treat each connected set of pixels as a single object. The output of the overall algorithm is a set of centroids, bounding boxes, and pixel counts for each of these potential mine objects. Currently, the proposed algorithm achieves its objective well on most videos. The miss rate is 0% for objects up to 1000 m, 3% at 1500 m, and 11% at 2000 m among 680 videos. Figure 7 shows a few examples of mine detection results, including two IR video frames. Detected mines are highlighted in small blue boxes.
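One possible way to turn the binary mask into the reported per-object output (centroid, bounding box, pixel count) is standard connected-component labeling, sketched below; the function and field names are illustrative only.

```python
import numpy as np
from scipy.ndimage import label, find_objects, center_of_mass

def extract_objects(mine_mask):
    """Group connected candidate pixels into objects and report, for each,
    its centroid, bounding box and pixel count."""
    labels, n = label(mine_mask)
    objects = []
    for idx, slc in enumerate(find_objects(labels), start=1):
        region = (labels == idx)
        cy, cx = center_of_mass(region)
        objects.append({'centroid': (cx, cy),
                        'bbox': (slc[1].start, slc[0].start,   # x0, y0
                                 slc[1].stop, slc[0].stop),    # x1, y1
                        'pixels': int(region.sum())})
    return objects
```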
Fig. 7. Mine detection result examples
5 Conclusions and Future Work In this paper, a robust and accurate mine detection algorithm is proposed, representing the pre-processing stage of a larger identification system. This algorithm uses contrast to select candidate regions. Motion information is used to filter out candidate regions with certain motion patterns, such as horizontal motion, as well as regions with inconsistent motion. Results are spatio-temporally smoothed to remove noise. In order to accommodate different mine sizes, a multi-scale scheme is applied. Experimental results demonstrate its effectiveness and promise.
Future work will include deploying higher level vision techniques to recognize mines based on the information from the mine detection algorithm. This is expected to further lower the false positive rate. The mine detection algorithm can also be optimized using special hardware to achieve real-time computing.
Acknowledgments This material is based upon work supported by the Naval Sea Systems Command under Contract No. N65538-07-M-0042. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Naval Sea Systems Command, nor of their respective companies or BYU.
References 1. Skinners, D.: Mine Countermeasures (MCM) Sensor Technology Drivers. In: SPIE Proceedings Detection Technologies for Mines and Minelike Targets, vol. 2496 (1995) 2. http://peos.crane.navy.mil/mine/default.htm 3. http://www.cmwc.navy.mil/default.aspx 4. Zimmerman, C., Coolidge, M.: The Forgotten Threat of Attack by Sea: Using 3D Sonar to Detect Terrorist Swimmers and Mines. In: IEEE Conference on Technologies for Homeland Security (2002) 5. Dobeck, G.J., Hyland, J.: Sea mine detection and classification using side-looking sonars. In: SPIE Proceedings Annual International Symposium on Aerospace/Defense Sensing, Simulation and Control, pp. 442–453 (1995) 6. Chen, Y., Nguyen, T.Q.: Sea Mine Detection Based on Multiresolution Analysis and Noise Whitening. Technical Report (1999) 7. Burt, P.J.: A Pyramid-based Front-end Processor for Dynamic Vision Applications. Proceedings of the IEEE 90, 1188–1200 (2002) 8. Huang, Y., Chen, C., Tsai, C., Shen, C., Chen, L.: Survey on Block Matching Motion Estimation Algorithms and Architectures with New Results. The Journal of VLSI Signal Processing 42, 297–320 (2006) 9. Love, N.S., Kamath, C.: An Empirical Study of Block Matching Techniques for the Detection of Moving Objects. LLNL Technical Report UCRL-TR-218038 (2006) 10. Horn, B.K.P., Schunck, B.G.: Determining Optical Flow. Artificial Intelligence 17, 185– 203 (1981) 11. Lucas, B.D., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: Proceedings of Image Understanding Workshop, pp. 121–130 (1981) 12. Shafique, K., Shah, M.: A Noniterative Greedy Algorithm for Multiframe Point Correspondence. IEEE Trans. on PAMI 27, 51–65 (2005)
Real-Time Subspace-Based Background Modeling Using Multi-channel Data
Bohyung Han¹ and Ramesh Jain¹,²
¹ School of Information and Computer Sciences, University of California, Irvine, CA 92697, USA
² Calit2, University of California, Irvine, CA 92697, USA
{bhhan,jain}@ics.uci.edu
Abstract. Background modeling and subtraction using subspaces is attractive in real-time computer vision applications due to its low computational cost. However, the application of this method is mostly limited to gray-scale images, since the integration of multi-channel data is not straightforward; it involves a much higher-dimensional space and makes the data harder to manage in general. We propose an efficient background modeling and subtraction algorithm using 2-Dimensional Principal Component Analysis (2DPCA) [1], where multi-channel data are naturally integrated in the eigenbackground framework [2] with no additional dimensionality. It is shown that the principal components in 2DPCA are computed efficiently by transformation to standard PCA. We also propose an incremental algorithm to update the eigenvectors in order to handle temporal variations of the background. The proposed algorithm is applied to 3-channel (RGB) and 4-channel (RGB+IR) data, and compared with standard subspace-based as well as pixel-wise density-based methods.
1 Introduction
Background modeling and subtraction is an important preprocessing step for high-level computer vision tasks, and the design and implementation of a fast and robust algorithm is critical to the performance of the entire system. However, many existing algorithms require a significant amount of processing time by themselves, so they may not be appropriate for real-time applications. We propose a fast algorithm to model and update the background based on a subspace method for multi-channel data. The most popular approach for background modeling is the pixel-wise density-based method. In [3], the background at each pixel is modeled by a Gaussian distribution, which has the serious limitation that the density function is uni-modal and static. To handle multi-modal backgrounds, a Gaussian mixture model is employed [4,5,6], but it is not flexible enough since the number of Gaussian components is either fixed or updated in an ad-hoc manner. For more accurate background modeling, adaptive Kernel Density Estimation (KDE) has been proposed [7,8], but its huge computational cost and memory requirement remain critical issues. To alleviate the large memory requirement, a novel density estimation technique based
on mean-shift mode finding is introduced [9], but it still suffers from high computational cost. These methods are based on pixel-wise density estimation, so the computational cost is high, especially when large images are involved. Also, the background models are based on the data observed at the same pixel in the image, and spatial information is usually ignored. Therefore, they sometimes lose accuracy or model an unnecessarily complex background. On the other hand, Principal Component Analysis (PCA) is often used for background modeling [10,2]. In this framework, 2D images are vectorized and collected to obtain principal components during the training phase. In the test phase, vectorized images are first projected onto the trained subspace and the background is reconstructed using the projected image and the principal components. The advantage of this method lies in its simplicity; it is typically very fast compared with pixel-wise density-based methods and well suited to real-time applications. However, it is not straightforward to integrate multi-channel data such as RGB color images without a significant increase of dimensionality, which is not desirable due to the additional computational complexity and the curse of dimensionality. Recently, 2-Dimensional Principal Component Analysis (2DPCA) [1] was proposed for face recognition. The main advantage of 2DPCA is dimensionality reduction; instead of vectorizing image data, it utilizes the original two-dimensional structure of images. So, the curse of dimensionality problem is alleviated and spatial structures of visual features are considered. Most of all, the computational cost and memory requirement are reduced significantly, and this algorithm is more appropriate for real-time applications. We focus on a subspace-based background modeling algorithm using 2DPCA, and our contribution is summarized below.
– We propose a background subtraction (BGS) algorithm for multi-channel data using 2DPCA with no additional dimensionality.
– The computation of initial principal components in 2DPCA is performed efficiently by transformation to standard PCA, and incremental update of the subspace in the 2DPCA framework is employed to handle dynamic background.
– The threshold for background subtraction is determined automatically by statistical analysis.
– The proposed algorithm is naturally applied to sensor fusion for background subtraction; in addition to RGB color images, an IR image is integrated to model the background and detect shadow.
We also implemented other background modeling and subtraction algorithms, such as the pixel-wise density-based [6] and standard eigenbackground [2] methods, and compare their performance with our algorithm. This paper is organized as follows. In Section 2, the 2DPCA algorithm is discussed and analyzed, and Section 3 describes the background modeling technique with multi-channel data based on 2DPCA. The performance of our method is evaluated in Section 4.
2 2DPCA
In this section, we review and analyze the original 2DPCA algorithm introduced in [1,11].
2.1 2DPCA: Review
The basic idea of 2DPCA is to project an m × n image matrix A onto an n-dimensional unitary column vector x to produce an m-dimensional projected feature vector of image A by the linear transformation
$$\mathbf{y} = A\mathbf{x}. \qquad (1)$$
Let the covariance matrix of the projected feature vectors be $C_x$. The maximization of $\mathrm{tr}(C_x)$, where tr(·) denotes the trace of a matrix, is the criterion to determine the optimal projection vector x. The formal derivation of $\mathrm{tr}(C_x)$ is
$$\mathrm{tr}(C_x) = \frac{1}{M}\sum_{i=1}^{M}(\mathbf{y}_i-\bar{\mathbf{y}})^{\top}(\mathbf{y}_i-\bar{\mathbf{y}}) = \frac{1}{M}\sum_{i=1}^{M}(A_i\mathbf{x}-\bar{A}\mathbf{x})^{\top}(A_i\mathbf{x}-\bar{A}\mathbf{x}) = \frac{1}{M}\sum_{i=1}^{M}\mathbf{x}^{\top}(A_i-\bar{A})^{\top}(A_i-\bar{A})\,\mathbf{x} = \mathbf{x}^{\top} C_A\, \mathbf{x}, \qquad (2)$$
where $\bar{\mathbf{y}}$ and $\bar{A}$ are the means of the projected feature vectors $\mathbf{y}_1, \mathbf{y}_2, \dots, \mathbf{y}_M$ and the original image matrices $A_1, A_2, \dots, A_M$, respectively, and $C_A$ is the image covariance (scatter) matrix. Therefore, maximizing $\mathrm{tr}(C_x)$ is equivalent to solving for the eigenvectors of $C_A$ with the largest eigenvalues. When k eigenvectors are selected, the original image A is represented by the m × k feature matrix $Y_{1:k}$, given by
$$Y_{1:k} = A\,X_{1:k}, \qquad (3)$$
where each column of the n × k matrix $X_{1:k}$ corresponds to one of the k principal components of $C_A$.
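For concreteness, a small sketch of the 2DPCA training and projection steps of Eqs. (1)-(3), assuming a stack of M grayscale m × n images; the eigen-decomposition routine and variable names are our own choices, not taken from [1].

```python
import numpy as np

def fit_2dpca(images, k):
    """2DPCA: build the n x n image covariance (scatter) matrix C_A from a
    stack of m x n images and keep its k leading eigenvectors X_{1:k}."""
    A = np.asarray(images, dtype=np.float64)         # shape (M, m, n)
    centered = A - A.mean(axis=0)
    # C_A = 1/M * sum_i (A_i - A_bar)^T (A_i - A_bar)
    C_A = np.einsum('imj,imk->jk', centered, centered) / A.shape[0]
    evals, evecs = np.linalg.eigh(C_A)               # eigenvalues ascending
    return evecs[:, ::-1][:, :k]                     # k leading eigenvectors

def project_2dpca(image, X):
    """Feature matrix Y_{1:k} = A X_{1:k} (Eq. 3)."""
    return image @ X
```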
2.2 Analysis of 2DPCA
Instead of vectorizing the 2-dimensional image, 2DPCA uses the original image matrix; dimensionality and computational cost are significantly reduced. The data representation and reconstruction ability of 2DPCA has been evaluated in face recognition problems [1,11,12], where 2DPCA is reported to be superior to standard PCA.
Fig. 1. Example of poor background image reconstruction by 2DPCA: (a) training image, (b) original test image, (c) reconstructed background
However, it is worthwhile to investigate further the computation of the image covariance matrix. Suppose that the rows of image matrix $A_i$ are the row vectors $\mathbf{a}_{i,1}, \mathbf{a}_{i,2}, \dots, \mathbf{a}_{i,m}$, and that $\bar{A} = 0$ without loss of generality. Then, the image covariance matrix $C_A$ can be re-written as
$$C_A = \frac{1}{M}\sum_{i=1}^{M} A_i^{\top} A_i = \frac{1}{M}\sum_{i=1}^{M}\sum_{j=1}^{m} \mathbf{a}_{i,j}^{\top}\,\mathbf{a}_{i,j}. \qquad (4)$$
Eq. (4) means that 2DPCA is equivalent to block-based PCA, in which each block corresponds to a row of the image. This is also discussed in [13]. The image covariance matrix is computed as the sum of outer products of each row vector with itself, and the information gathered from different rows is blindly combined into the matrix. This factor may not affect image representation and reconstruction much in face recognition, but it causes a significant problem in background subtraction. Figure 1 demonstrates an example of poor background reconstruction by 2DPCA due to the reason described here.
3 Multi-channel Background Modeling by 2DPCA
In this section, we present how 2DPCA is applied to background modeling for multi-channel data without a dimensionality increase. A method for the efficient computation of the initial principal components is introduced, and an adaptive technique to determine the threshold for background subtraction is proposed. Incremental subspace learning to handle dynamic background is also discussed.
3.1 Data Representation and Initial Principal Components
As mentioned earlier, although 2DPCA shows successful results in face recognition, its 2D image representation has a critical limitation for background subtraction, as illustrated in Figure 1. Therefore, we keep a vectorized representation
of the image, but 2DPCA is utilized to deal with multi-channel image data efficiently. Note that vectorizing the entire d-channel image would cause a significant increase of dimensionality and additional computational complexity for initial and incremental PCA. Let $A_i$ (i = 1, . . . , M) be a d-channel m × n image, which means $A_i$ is an m × n × d matrix. The original 3D matrix is converted to a d × mn matrix in which the data of each channel are vectorized and placed in a row. Typically, mn is a very large number and the direct computation of the image covariance matrix is very expensive. Although eigenvectors can be derived from the image covariance of the transposed matrix $A_i^{\top}$, only d eigenvectors are available in this case; a method to compute a sufficient number of eigenvectors efficiently is required, which is described below. Suppose that $A = (A_1^{\top}\ A_2^{\top}\ \dots\ A_M^{\top})$, where A is an mn × dM matrix. Then, $C_A$ in Eq. (4) is computed by a simple matrix operation instead of the summation of M matrices:
$$C_A = \frac{1}{M}\, A A^{\top}. \qquad (5)$$
However, the size of $C_A$ is still mn × mn and prohibitively large. Fortunately, the eigenvectors of $AA^{\top}$ can be obtained from the eigenvectors of $A^{\top}A$, since
$$A^{\top}A\,\hat{\mathbf{u}} = \lambda\,\hat{\mathbf{u}} \;\Longrightarrow\; AA^{\top}A\,\hat{\mathbf{u}} = \lambda\,A\hat{\mathbf{u}} \;\Longrightarrow\; AA^{\top}(A\hat{\mathbf{u}}) = \lambda\,(A\hat{\mathbf{u}}) \;\Longrightarrow\; AA^{\top}\mathbf{u} = \lambda\,\mathbf{u}, \qquad (6)$$
where $\hat{\mathbf{u}}$ and $\lambda$ are an eigenvector and eigenvalue of $A^{\top}A$, respectively, and $\mathbf{u} = A\hat{\mathbf{u}}$ is an eigenvector of $AA^{\top}$ (u needs to be normalized). Therefore, we can obtain the eigenvectors of $AA^{\top}$ by the eigen-decomposition of a dM × dM matrix instead of an mn × mn matrix (mn ≫ dM). Denote by $\hat{U}_{1:k} = (\hat{\mathbf{u}}_1\ \hat{\mathbf{u}}_2\ \dots\ \hat{\mathbf{u}}_k)$ the eigenvectors associated with the k largest eigenvalues of $A^{\top}A$; the eigenvectors of our interest are then $U_{1:k} = (\mathbf{u}_1\ \mathbf{u}_2\ \dots\ \mathbf{u}_k)$, an mn × k matrix. Also, the diagonal matrix of associated eigenvalues is $\Sigma_{k\times k}$.
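The construction above can be sketched as follows for M training frames of shape (m, n, d); the mean is assumed to be zero (or subtracted beforehand), following the zero-mean convention of Sect. 2.2, and the helper name is hypothetical.

```python
import numpy as np

def train_background_subspace(frames, k):
    """Initial principal components for the multi-channel eigenbackground
    (Eqs. 5-6): the eigenvectors of the mn x mn covariance are recovered
    from the small dM x dM matrix A^T A."""
    F = np.asarray(frames, dtype=np.float64)              # (M, m, n, d)
    M, m, n, d = F.shape
    # each frame -> d x mn (channels in rows); stack transposes side by side
    A = F.reshape(M, m * n, d).transpose(1, 0, 2).reshape(m * n, M * d)
    small = A.T @ A / M                                   # dM x dM
    evals, evecs = np.linalg.eigh(small)
    order = np.argsort(evals)[::-1][:k]                   # k largest
    U = A @ evecs[:, order]                               # map back: u = A u_hat
    U /= np.linalg.norm(U, axis=0)                        # u needs to be normalized
    return U                                              # mn x k
```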
3.2 Background Subtraction
Denote by $B = (\mathbf{b}_1\ \mathbf{b}_2\ \dots\ \mathbf{b}_{mn})$ a d × mn test image for background subtraction, where $\mathbf{b}_i$ (i = 1, . . . , mn) is the d-dimensional data vector extracted from the d channels at the i-th pixel. The reconstructed background $\hat{B} = (\hat{\mathbf{b}}_1\ \hat{\mathbf{b}}_2\ \dots\ \hat{\mathbf{b}}_{mn})$ from the original image B is given by
$$\hat{B} = B\, U_{1:k}\, U_{1:k}^{\top}. \qquad (7)$$
A foreground pixel is detected by evaluating the difference between the original and reconstructed image as
$$FG(i) = \begin{cases} 1 & \text{if } \|\mathbf{b}_i - \hat{\mathbf{b}}_i\| > \xi(i) \\ 0 & \text{otherwise}, \end{cases} \qquad (8)$$
where FG(i) is the foreground mask and ξ(i) is the threshold for the i-th pixel in the vectorized image. The threshold is critical to the performance of background subtraction; we propose a simple method to determine the threshold of each pixel and update it based on temporal variations of the incoming data. From the training sequence, the variation of each pixel is modeled by a Gaussian distribution, whose mean and variance are obtained from the distance between the original and reconstructed images (if the training data are not reliable due to noise, foreground regions, etc., robust estimation techniques such as M-estimators or RANSAC may be required for the parameter estimation). Denote by m(i) and σ(i) the mean and standard deviation of the distance at the i-th pixel, respectively. The threshold ξ at the i-th pixel is given by
$$\xi(i) = \max\left(\xi_{\min},\, \min\left(\xi_{\max},\, m(i) + \kappa\,\sigma(i)\right)\right), \qquad (9)$$
where $\xi_{\min}$, $\xi_{\max}$ and κ are constants. The threshold is also updated during the background subtraction process; the new distance for the pixel is added to the current distribution incrementally. The incremental updates of the mean and variance are given by
$$m_{\mathrm{new}}(i) = (1-\alpha)\, m(i) + \alpha\, \|\mathbf{b}_i - \hat{\mathbf{b}}_i\|, \qquad (10)$$
$$\sigma^2_{\mathrm{new}}(i) = (1-\alpha)\, \sigma^2(i) + \alpha(1-\alpha)\left(\|\mathbf{b}_i - \hat{\mathbf{b}}_i\| - m(i)\right)^2, \qquad (11)$$
where α is a forgetting factor (learning rate).
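A compact sketch of Eqs. (7)-(11), assuming the per-pixel mean and standard deviation of the reconstruction distance have already been estimated from training frames; the learning rate alpha is an assumed value, while kappa, xi_min and xi_max use the settings reported later in Sect. 4.1.

```python
import numpy as np

def subtract_background(B, U, mean_d, std_d, kappa=5.0, xi_min=30.0,
                        xi_max=60.0, alpha=0.05):
    """Subspace background subtraction with an adaptive per-pixel threshold.
    B: d x mn test image, U: mn x k eigenvector matrix, mean_d / std_d:
    per-pixel mean and std of the reconstruction distance."""
    B_hat = B @ U @ U.T                                    # Eq. (7)
    dist = np.linalg.norm(B - B_hat, axis=0)               # ||b_i - b_hat_i||
    xi = np.clip(mean_d + kappa * std_d, xi_min, xi_max)   # Eq. (9)
    fg = dist > xi                                         # Eq. (8)
    mean_new = (1 - alpha) * mean_d + alpha * dist                              # Eq. (10)
    var_new = (1 - alpha) * std_d ** 2 + alpha * (1 - alpha) * (dist - mean_d) ** 2  # Eq. (11)
    return fg, B_hat, mean_new, np.sqrt(var_new)
```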
3.3 Weighted Incremental 2DPCA
Background changes over time and new observations should be integrated into the existing model. However, it is not straightforward to update the background without corruption from foreground regions and/or noise. In our work, a weighted mean of the original and reconstructed images is used for incremental subspace learning, where the weight of a pixel is determined by the confidence of its foreground (or background) classification. In other words, the actual data used for the incremental 2DPCA is given by
$$\tilde{\mathbf{b}}_i = (1 - r(i))\,\hat{\mathbf{b}}_i + r(i)\,\mathbf{b}_i, \qquad (12)$$
where $r(i) = \exp\!\left(-\rho\,\frac{\|\hat{\mathbf{b}}_i - \mathbf{b}_i\|^2}{\xi(i)^2}\right)$ and ρ is a constant. By this strategy, confident background information is quickly adapted but suspicious information is integrated into the model very slowly.
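A short sketch of the blending rule of Eq. (12); B and B_hat are the d × mn original and reconstructed images, xi the per-pixel threshold, and the value of rho is a placeholder.

```python
import numpy as np

def blend_for_update(B, B_hat, xi, rho=1.0):
    """Weighted data for incremental learning (Eq. 12): well-reconstructed
    pixels (likely background) adopt the new observation quickly, while
    pixels with large reconstruction error keep the reconstruction."""
    err2 = np.sum((B_hat - B) ** 2, axis=0)      # ||b_hat_i - b_i||^2 per pixel
    r = np.exp(-rho * err2 / xi ** 2)            # confidence of being background
    return (1.0 - r) * B_hat + r * B             # b_tilde, shape d x mn
```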
There are several algorithms for Incremental PCA (IPCA) [14,15,16], among which we adopt the algorithm proposed in [14]. Suppose that a collection of K new data, derived from Eq. (12), is $\tilde{B} = (\tilde{B}_1\ \tilde{B}_2\ \dots\ \tilde{B}_K)$. Then, the updated image mean ($M'_A$) and covariance ($C'_A$) matrix are given by
$$M'_A = (1-\beta)\,M_A + \beta\,M_{\tilde{B}}, \qquad (13)$$
$$C'_A = (1-\beta)\,C_A + \frac{\beta}{M}\,\tilde{B}\tilde{B}^{\top} + \beta(1-\beta)\,(M_A - M_{\tilde{B}})(M_A - M_{\tilde{B}})^{\top}, \qquad (14)$$
where $C_A \approx U_{1:k}\,\Sigma_{k\times k}\,U_{1:k}^{\top}$, $M_A$ and $M_{\tilde{B}}$ are the means of the existing and new data, and β is the learning rate. The k largest eigenvalues of $C'_A$ are obtained by the decomposition
$$C'_A \approx U'_{1:k}\,\Sigma'_{k\times k}\,{U'}_{1:k}^{\top}, \qquad (15)$$
where $U'_{1:k}$ is the new mn × k eigenvector matrix and $\Sigma'_{k\times k}$ is the k × k diagonal matrix of associated eigenvalues. However, the direct computation of $C'_A$ and its eigen-decomposition are not desirable (or even impossible) in real-time systems, and the eigenvectors and associated eigenvalues should be obtained by an indirect method, in which $C'_A$ is decomposed as
$$C'_A = (U_{1:k}|E)\,D\,(U_{1:k}|E)^{\top} = (U_{1:k}|E)\,R_{(k+l)\times k}\,\Sigma'_{k\times k}\,R_{(k+l)\times k}^{\top}\,(U_{1:k}|E)^{\top}, \qquad (16)$$
where $D = R_{(k+l)\times k}\,\Sigma'_{k\times k}\,R_{(k+l)\times k}^{\top}$, and E (mn × (k + l)) is a set of orthonormal basis vectors for the new data and $(M_A - M_{\tilde{B}})$, orthogonal to the original eigenspace. Then, the decomposition in Eq. (15) can be solved via the decomposition of the much smaller matrix D, given by
$$D = (U_{1:k}|E)^{\top}\,C'_A\,(U_{1:k}|E), \qquad (17)$$
where $C'_A$ from Eq. (14) is plugged in to obtain D efficiently. Note that we only need the eigenvalues and eigenvectors from the previous decomposition, in addition to the incoming data, for the subspace update. More details about the incremental learning algorithm and its computational complexity are described in [14].
4 Experiments
In this section, we present the performance of our background subtraction technique in comparison with other algorithms — eigenbackground [2] and pixel-wise density-based modeling [6]. Our method is applied to RGB color image sequence and extended to sensor fusion — RGB color and IR — for background modeling and subtraction.
Fig. 2. Comparison of background subtraction algorithms: (a) original image, (b) eigenbackground, (c) GMM, (d) our method; (top) fountain sequence, (bottom) subway sequence
Fig. 3. BGS with fixed thresholds: (a) reconstructed image, (b) ξ = 30, (c) ξ = 45, (d) ξ = 60
4.1 RGB Color Image
We first test our background subtraction technique on the fountain and subway sequences, where significant pixel-wise noise and/or structural variations of the background are observed. The background model is trained using the first 50 empty frames, and only 5 principal components are selected. The subspace is updated at every time step by the method proposed in Section 3.3. The average processing speed is 8 to 9 frames/sec on a standard PC with a dual-core 2.2GHz CPU and 2GB memory when the image size is 320 × 240 and the algorithm is implemented in Matlab. The standard eigenbackground [2] and a pixel-wise Gaussian mixture model (GMM) with up to three components [6] are also implemented. The thresholds of the standard eigenbackground and the GMM are determined manually to optimize the background subtraction results for both sequences with the constructed models. The threshold of our method is determined automatically, where the parameters in Eq. (9) are set as follows: ξmin = 30, ξmax = 60, and κ = 5. The results are illustrated in Figure 2, where we observe that the false alarm rate is lower than that of [2,6] with a similar detection rate. Although [6] shows good performance when a separate optimal threshold is given to each sequence, it is difficult to find a single threshold that works well for both sequences. Also, note that the pixel-wise density-based modeling method is computationally more expensive than our method.
Fig. 4. BGS with global illumination variations: (a) sample training images; (b) BGS results at t = 277 (left) and t = 2091 (right). Average intensities of the sample training images are around 110, 111, 120, and 133, respectively.
To evaluate our automatic threshold setting technique, three different values (30, 45, and 60) are selected for the global threshold ξ and applied to background subtraction for the fountain sequence. The results for the same frame used in Figure 2 are illustrated in Figure 3; the threshold determined by the proposed method is clearly better than (a) and (b), and comparable to (c) considering true positives and false negatives. The variation of global illumination is successfully modeled by our method in the campus sequence, where the scene gradually becomes brighter and there are some foreground objects in the training images. Sample training images and background subtraction results are demonstrated in Figure 4.
4.2 Sensor Fusion for Background Subtraction
In addition to RGB color images, our method is naturally applied to background modeling and subtraction for data captured by multiple sensors. Background subtraction based on 2DPCA using RGB and RGB+IR images is performed, and some results are illustrated in Figure 5. Since much larger images (640 × 480) are used in this experiment, standard PCA for multi-channel data may not be a desirable solution due to the high dimensionality. As shown in Figure 5, our method with RGB+IR shows much better performance than with RGB only. The combination of IR with RGB is especially advantageous since there is no shadow for foreground objects in the IR image, so shadow can be easily detected; a pixel with a low reconstruction error in the IR image but a high overall error is considered to be shadow and classified as background. Also, at t = 150 and t = 350, moving vehicles behind trees are clearly detected by our method (inside the ellipse), while modeling with the RGB feature only hardly detects those objects.
Fig. 5. 2DPCA BGS with RGB+IR data at (a) t = 150, (b) t = 250, (c) t = 350: (row 1) RGB image, (row 2) IR image, (row 3) BGS with RGB, (row 4) BGS with RGB+IR
5 Conclusion
We proposed a background modeling and subtraction algorithm for multi-channel data using 2DPCA. The initial subspace for the background model is obtained efficiently by converting the 2DPCA problem into standard PCA, and the subspace is updated incrementally based on new data, which is obtained from a combination of the incoming data and the reconstructed background. We also proposed a method to automatically determine the threshold for background subtraction. Our algorithm is implemented and compared with the standard eigenbackground and a pixel-wise density-based method, and applied to sensor-fusion background subtraction.
References 1. Yang, J., Zhang, D., Frangi, A.F., Yang, J.Y.: Two-dimensional pca: A new approach to appearance-based face representation and recognition. IEEE Trans. Pattern Anal. Machine Intell. 26, 131–137 (2004) 2. Oliver, N.M., Rosario, B., Pentland, A.: A bayesian computer vision system for modeling human interactions. IEEE Trans. Pattern Anal. Machine Intell. 22, 831– 843 (2000) 3. Wren, C., Azarbayejani, A., Darrell, T., Pentland, A.: Pfinder: Real-time tracking of the human body. IEEE Trans. Pattern Anal. Machine Intell. 19, 780–785 (1997) 4. Friedman, N., Russell, S.: Image segmenation in video sequences: A probabilistic approach. In: Proc. Thirteenth Conf. Uncertainty in Artificial Intell (UAI) (1997) 5. Lee, D.: Effective gaussian mixture learning for video background subtraction. IEEE Trans. Pattern Anal. Machine Intell. 27, 827–832 (2005) 6. Stauffer, C., Grimson, W.: Adaptive background mixture models for real-time tracking. In: Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Fort Collins, CO, pp. 246–252 (1999) 7. Elgammal, A., Harwood, D., Davis, L.: Non-parametric model for background subtraction. In: Vernon, D. (ed.) ECCV 2000. LNCS, vol. 1843, pp. 751–767. Springer, Heidelberg (2000) 8. Elgammal, A., Duraiswami, R., Harwood, D., Davis, L.: Background and foreground modeling using nonparametric kernel density estimation for visual surveillance. Proceedings of IEEE 90, 1151–1163 (2002) 9. Han, B., Comaniciu, D., Davis, L.: Sequential kernel density approximation through mode propagation: Applications to background modeling. In: Asian Conference on Computer Vision, Jeju Island, Korea (2004) 10. Torre, F.D.L., Black, M.: A framework for robust subspace learning. Intl. J. of Computer Vision 54, 117–142 (2003) 11. Kong, H., Wang, L., Teoh, E.K., Li, X., Wang, J.G., Venkateswarlu, R.: Generalized 2d principal component analysis for face image representation and recognition. Neural Networks: Special Issue 5–6, 585–594 (2005) 12. Xu, A., Jin, X., Jiang, Y., Guo, P.: Complete two-dimensional pca for face recognition. In: Int. Conf. Pattern Recognition, Hong Kong, pp. 459–466 (2006) 13. Wang, L., Wang, X., Zhang, X., Feng, J.: The equivalence of two-dimensional pca to line-based pca. Pattern Recognition Letters 26, 57–60 (2005) 14. Hall, P., Marshall, D., Martin, R.: Merging and splitting eigenspace models. IEEE Trans. Pattern Anal. Machine Intell. 22, 1042–1048 (2000) 15. Weng, J., Zhang, Y., Hwang, W.: Candid covariance-free incremental principal component analysis. IEEE Trans. Pattern Anal. Machine Intell. 25, 1034–1040 (2003) 16. Levy, A., Lindenbaum, M.: Sequential karhunen-loeve basis extraction and its application to images. IEEE Trans. Image Process. 9, 1371–1374 (2000)
A Vision-Based Architecture for Intent Recognition
Alireza Tavakkoli, Richard Kelley, Christopher King, Mircea Nicolescu, Monica Nicolescu, and George Bebis
Department of Computer Science and Engineering, University of Nevada, Reno, USA
{tavakkol,rkelley,cjking,mircea,monica,bebis}@cse.unr.edu
Abstract. Understanding intent is an important aspect of communication among people and is an essential component of the human cognitive system. This capability is particularly relevant for situations that involve collaboration among multiple agents or detection of situations that can pose a particular threat. We propose an approach that allows a physical robot to detect the intentions of others based on experience acquired through its own sensory-motor abilities. It uses this experience while taking the perspective of the agent whose intent should be recognized. The robot’s capability to observe and analyze the current scene employs a novel vision-based technique for target detection and tracking, using a non-parametric recursive modeling approach. Our intent recognition method uses a novel formulation of Hidden Markov Models (HMM’s) designed to model a robot’s experience and its interaction with the world while performing various actions.
1 Introduction
The ability to understand the intent of others is critical for the success of communication and collaboration between people. The general principle of understanding intentions that we propose in this work is inspired by psychological evidence of a Theory of Mind [1], which states that people have a mechanism for representing, predicting and interpreting each other's actions. This mechanism, based on taking the perspective of others [2], gives people the ability to infer the intentions and goals that underlie action [3]. We base our work on these findings and take an approach that uses the observer's own learned experience to detect the intentions of the agent or agents it observes. When matched with our own past experiences, these sensory observations become indicative of what our intentions would be in the same situation. The proposed system models the interactions with the world, acquired from visual information. This information is used in a novel formulation of Hidden Markov Models (HMMs) adapted to suit our needs. The distinguishing feature of our HMMs is that they model not only transitions between discrete states, but also the way in which the parameters encoding the goals of an activity change
during its performance. This novel formulation of the HMM representation allows for recognition of the agents’ intent well before the actions are finalized. Our approach is composed of two modules: the Vision module and the HMM module. The vision module performs low-level processing on video frames such as detection and tracking of objects of interest. The detected objects are further processed in the vision module and their 3D positions, distances, and angles are generated. This mid-level information is finally used in the HMM module to perform the two main stages: the activity modeling and the intent recognition. During the first stage the robot learns corresponding HMM’s for each activity it should later recognize. During the intent recognition phase the robot, now an observer, is equipped with the trained HMMs and monitors other agent(s)’ performance by evaluating the changes of the same goal parameters, from the perspective of the observed agents. A significant advantage of the proposed HMM module is that unlike typical approaches to HMMs, which are restricted to be used in the same (training) environment, our models are general and can be transferred to different domains. The remainder of the paper is structured as follows: Section 2 describes the visual capabilities we developed for this work (vision module). Section 3 summarizes related work in activity modeling and recognition and inferring intent, and presents our novel architecture for understanding intent using HMM’s, Section 4 describes our results, and Section 5 summarizes our paper.
2 Vision-Based Perceptual Capabilities
We provide a set of vision-based perceptual capabilities for our system that facilitate the modeling and recognition of actions carried out by the agents. Specifically, we are interested in the detection and tracking of relevant entities, and the estimation of their 3D positions with respect to the observer. As the appearance of these agents is generally not known a priori, the only visual cue that can be used for attracting the robot's attention toward them is image motion. Our approach makes significant use of more efficient and reliable techniques traditionally used in real-time surveillance applications, based on background/foreground modeling, structured as follows:
• During the activity modeling stage, the robot is moving while performing various activities. The appearance models of the other mobile agents, necessary for tracking, are built in a separate, prior process where the static robot observes each agent that will be used for action learning.
• During the intent recognition stage, we assume that the camera is static while the robot observes the actions carried out by the other agents. The static camera allows the use of a foreground-background segmentation technique in order to build the models on-line, and to improve the tracking speed.
2.1 Detection and Tracking
For tracking we use a standard kernel-based approach [4], where the appearance model for each detected region is represented by a histogram-based color
Fig. 1. Model evolution after 10 frames (left), 50 frames (middle) and 100 frames (right)
distribution. The detection is achieved by building a representation of the scene background and comparing the new image frames with this representation. Because of inherent changes in the background, such as fluctuations in monitors and fluorescent lights, waving flags and trees, water surfaces, etc., the background may not be completely stationary. In the presence of these types of backgrounds, referred to as quasi-stationary, more complex background modeling techniques are required. In parametric background modeling methods, the model is assumed to follow a specific distribution whose parameters must be determined. Mixtures of Gaussians are used in [5]. A Bayesian framework that incorporates spectral and spatio-temporal features to characterize the background is also proposed in [6]. As opposed to this trend, one of the most successful approaches in background modeling [7] proposes a non-parametric model. The background representation is obtained by estimating the probability density function of each pixel using a kernel density estimation technique. The Background Model. In this work, we use non-parametric modeling, which estimates the density directly from the data, without any assumptions about the underlying distribution. This avoids having to choose a specific model (that may be incorrect or too restricting) and estimating its distribution parameters. It also addresses the problem of background multi-modality, leading to significant robustness in the case of quasi-stationary backgrounds. In order to preserve the benefits of non-parametric modeling while addressing its limitations, we propose a recursive modeling scheme. Our approach for background modeling employs a recursive formulation, where the background model θt(x) is continuously updated according to equation (1):
$$\hat{\theta}_t(x) = (1-\beta_t)\times\theta_{t-1}(x) + \alpha_t\times H_\Delta(x - x_t), \qquad \sum_x \theta_t(x) = 1. \qquad (1)$$
The model θt (x) corresponds to a probability density function (distinct for each pixel), defined over the range of possible intensity (or color) values x. After being updated, the model is normalized according to equation (1), so that the function takes values in [0,1], representing the probability for a value x at that pixel to be part of the background. This recursive process takes into consideration the model at the previous image frame, and updates it by using a kernel function (e.g., a Gaussian) HΔ (x) centered at the new pixel value xt .
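A minimal per-pixel sketch of the recursive update in equation (1), assuming 8-bit intensities and a Gaussian kernel H_Δ; the kernel bandwidth is an assumed parameter, and the scheduling of α_t and β_t (discussed next) is left to the caller.

```python
import numpy as np

def update_pixel_model(theta_prev, x_t, alpha_t, beta_t, bins=256, bandwidth=3.0):
    """One recursive update of the per-pixel background density: decay the
    previous model, add a Gaussian kernel centred at the new intensity x_t,
    and renormalise so that the model sums to one."""
    support = np.arange(bins, dtype=np.float64)
    kernel = np.exp(-0.5 * ((support - x_t) / bandwidth) ** 2)
    theta = (1.0 - beta_t) * theta_prev + alpha_t * kernel
    return theta / theta.sum()
```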
Fig. 2. Convergence speed
Fig. 3. Recovery speed from sudden global changes
In order to allow for an effective adaptation to changes in the background, we use a scheduled learning approach by introducing the learning rate αt and forgetting rate βt as weights for the two components in equation (1). The learning and forgetting rates are adjusted online, depending on the variance observed in the past model values. This schedule makes the adaptive learning process converge faster without compromising the stability and memory requirements of the system, while successfully handling both gradual and sudden changes in the background independently at each pixel. Discussion and Results. Fig. 1 shows the updating process using our proposed recursive modeling technique. It can be seen that the trained model (solid line) converges to the actual one (dashed line) as new samples are introduced. The actual model is the probability density function of a randomly generated sample population and the trained model is generated by using the recursive formula presented in equation (1). Fig. 2 illustrates the convergence speed of our approach with scheduled learning, compared to constant learning and kernel density estimation with constant window size. Fig. 3 compares the same approaches in terms of recovery speed after sudden illumination changes (three different lights switched off in sequence). Results on several challenging sequences are illustrated in Fig. 4, showing that the proposed methodology is robust to noise, gradual illumination changes, or natural scene variations, such as locally fluctuating intensity values due to monitor flicker (a), waves (b), moving tree branches (c), rain (d), or water motion (e). The ability to correctly model the background even when there are moving objects in every frame is shown in Fig. 4(f). Quantitative estimation. The performance of our method is evaluated quantitatively on randomly selected samples from different video sequences, taken from [6]. The value used is the similarity measure between two regions A and B, defined as $S = \frac{A \cap B}{A \cup B}$, where region A corresponds to the detected foreground and B is the actual foreground mask.
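The similarity measure S can be computed directly from two binary masks, for example:

```python
import numpy as np

def similarity(detected, truth):
    """S = |A intersect B| / |A union B| for two binary foreground masks."""
    A = detected.astype(bool)
    B = truth.astype(bool)
    union = np.logical_or(A, B).sum()
    return np.logical_and(A, B).sum() / union if union else 1.0
```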
Fig. 4. Background modeling and foreground detection in the presence of quasi-stationary backgrounds

Table 1. Quantitative evaluation and comparison. The sequences are Meeting Room (MR), Lobby (LB), Campus (CAM), Side Walk (SW), Water Surface (WS) and Fountain (FT), from left to right.

Videos                       MR     LB     CAM    SW     WS     FT     Avg
Proposed                     0.92   0.87   0.75   0.72   0.89   0.87   0.84
Statistical Modeling [6]     0.91   0.71   0.69   0.57   0.85   0.67   0.74
Mixture of Gaussians [5]     0.44   0.42   0.48   0.36   0.54   0.66   0.49
This measure is monotonically increasing with the similarity of the two masks, with values between 0 and 1. Table 1 shows the similarity measure for several video sequences where ground truth was available, as analyzed by our method, the mixture of Gaussians [5], and the statistical modeling of [6]. It can be seen that the proposed approach clearly outperforms the others, while also producing more consistent results over a wide range of environments. We also emphasize that in the proposed method the thresholds are estimated automatically (and independently at each pixel), and no prior assumption is needed on the background model. The scheduled learning scheme achieves a high convergence speed and a fast recovery from expired models, allowing for successful modeling even for non-empty backgrounds (when there are moving objects in every frame). Its adaptive localized classification leads to automatic training for different scene types and for different locations within the same scene.
2.2 Estimation of 3D Positions
We employ the robot-mounted laser rangefinder for estimating the 3D positions of detected agents with respect to the observing robot. For each such agent, its
position is obtained by examining the distance profile from the rangefinder in the direction where the foreground object has been detected by the camera. In order to determine the direction (in camera coordinates) through a pixel, the intrinsic camera parameters are first obtained with an off-line calibration process. For the intent recognition stage (once the 3D position of each agent is known with respect to the camera) a simple change of coordinates allows the observing robot to take the perspective of any participating agent. This is done in order to map its current observations to those acquired during the action learning stage.
3 General Architecture for Intent Understanding
HMM’s are powerful tools for modeling processes that involve temporal sequences and have been successfully used in applications involving speech and sound. Recently, HMMs have been used for activity understanding. They display a significant potential for their use in activity modeling and inferring intent. While some of the existing approaches allude to the potential of using HMMs to learn the user’s intentions, these systems fall short of this goal: the approach allows detecting that some goal has been achieved only after observing its occurrence. However, for tight collaborative scenarios or for detection of potentially threatening situations, it is of particular importance to detect the intentions before the goals of such actions have actually been achieved. An application of HMMs that is closer to our work is that of detecting abnormal activity. The methods used to achieve this goal typically rely on detecting inconsistencies between the observed activity and a set of pre-existing activity models [8]. Intent recognition has also been addressed from the perspective of intent inference and plan recognition for collaborative dialog [9], but these methods use explicit information in order to infer intentional goals. Our robotic domain relies entirely on implicit cues that come from a robot’s sensory capabilities, and thus requires different mechanisms for detecting intent. 3.1
Novel HMM Formulation
Hidden Markov Models have found greatest use in problems that have inherent temporality, to represent processes that have a time-extended evolution. The main contribution of our approach consists in choosing a different method for constructing the model. This HMM formulation models an agent’s interaction with the world while performing an activity through the way in which parameters that encode the task goals are changing. With this representation, the visible states reliably encode the changes in task goal parameters while the hidden states represent the hidden underlying intent of the performed actions. The reason for choosing the activity goals as the parameters monitored by the HMM is that goals carry intentional meanings. Activity Modeling. During this stage, the robot uses its experience of performing various activities to train corresponding HMM’s. The robot is equipped
with a basis set of behaviors and controllers that allow it to execute these tasks. We use a schema-based representation of behaviors, similar to that described in [10]. We experimented with Following, Meeting, Passing By, Picking Up and Dropping Off an object. While executing these activities, the robot monitors the changes in the corresponding behaviors' goals. For a meeting activity, the angle and distance to the other person are parameters relevant to the goal. The robot's observable symbol alphabet models all possible combinations of changes that can occur: increasing (++), decreasing (−−), constant (==), or unknown (?). The underlying intent of actions is encoded in the HMMs' hidden states. Repeated execution of a given activity provides the data used to estimate the model transition and observation probabilities aij and bjk using the Baum-Welch algorithm [11]. During the training stage, the observed, visible states are computed by the observer from its own perspective. Intent Recognition. The recognition problem consists of inferring, for each observed agent, the intent of the actions they most likely perform, based on the previously trained HMM's. The observer robot monitors the behavior of all agents of interest with respect to other agents or locations. Since the observer is now external to the scene, the features need to be computed from the observed agents' perspective rather than from the observer's own point of view. These observations consist of monitoring the same goal parameters that have been used in training the HMM. For each agent and for all HMM's, the robot computes the likelihood that the sequence of observations has been produced by each model, using the Forward Algorithm [12]. To detect the most probable state that represents the intent of an agent, we consider the intentional state emitted only by the model with the highest probability. For that model, we then use the Viterbi Algorithm [13] to detect the most probable sequence of hidden states.
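A small sketch of the recognition step: each trained activity model is scored with the Forward Algorithm and the best-scoring model is reported. The dictionary-based model layout (pi, A, B) is our own illustrative convention, not the authors' data structure.

```python
import numpy as np

def forward_likelihood(pi, A, B, obs):
    """Forward algorithm: P(obs | model) for an HMM with initial
    distribution pi, transition matrix A (a_ij) and observation matrix B
    (b_jk), given a sequence of observation-symbol indices.
    (For long sequences a scaled or log-space version avoids underflow.)"""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

def recognize_intent(models, obs):
    """Pick the trained activity model that best explains the observed
    sequence of goal-parameter changes (e.g. '++', '--', '==', '?')."""
    scores = {name: forward_likelihood(m['pi'], m['A'], m['B'], obs)
              for name, m in models.items()}
    return max(scores, key=scores.get), scores
```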
4 Experimental Results
To validate our approach we performed experiments with a Pioneer 2DX mobile robot equipped with an onboard computer, a laser rangefinder, and a PTZ Sony camera. The experiments consisted of two stages: the activity modeling phase and the intent recognition phase. During activity modeling, the robot was equipped with controllers for following, meeting, and passing by a person, and performed several runs of each of the three activities. The observations gathered from these trials were used to train the HMMs. The goal parameters monitored in order to compute the observable symbols are the distance and angle to the human, from the robot's perspective. During intent recognition, the robot acted as an observer of activities performed by two people in five different scenarios, which included following, meeting, passing by, and two additional scenarios in which the users switched repeatedly between these three activities. We exposed the robot to different viewpoints of the activities to show the robustness of the system to varying environmental conditions.
Fig. 5. Intent recognition for different activities: (a) Follow, (b) Meet, (c) Pass by

Fig. 6. Model probabilities during two follow scenarios

Fig. 7. Model probabilities of two people during the meet scenario

Fig. 8. Model probabilities of two people during the passing by scenario
The goal of the two complex scenarios is to demonstrate the ability of the system to infer a change in intent as soon as it occurs. Fig. 5 shows snapshots of the detection and intent recognition for two runs of each scenario from different viewpoints. The blue and red bars correspond to the blue- and red-tracked agents, respectively, and their lengths represent the cumulative likelihoods of the models up to that point in time. Fig. 6 through Fig. 8 show that the robot is able to infer the correct intent for the following, meeting, and passing by scenarios: the probability of the correct model rapidly exceeds that of the other models, which have very low likelihoods. For the following scenarios (Fig. 6), we only present the intent of the person who is performing the action. For the other scenarios (Fig. 7 and Fig. 8), we show the intent of both people involved in the activities: the robot is able to detect that both have similar intentions, either related to meeting or to passing by. During the complex scenarios the system was capable of adapting quickly to changes in people's activities and of detecting the correct intentional state of the agents, as shown in Fig. 10.
Fig. 9. Model probabilities during: (a) drop off and (b) pick up scenarios
Fig. 10. Results from complex scenarios: (a) Red and blue pass by, (b) Blue follows red

Table 2. Quantitative evaluation

Scenario      Follow   Meet                Pass                Drop off   Pick up
              Both     Agent 1   Agent 2   Agent 1   Agent 2   Both       Both
Avg. ED [%]   2.465    4.12      49.85     0         0         11.53      0
Avg. CD [%]   97.535   95.88     66.82     100       100       90.38      100
After these experiments were performed we added two new activity models to the robot's set of capabilities, for picking up and dropping off objects. Fig. 9 presents the model probabilities of the drop off and pick up activities, respectively. To provide a quantitative evaluation of our method we analyze the accuracy rate, early detection, and correct duration, measures typically used with HMMs [14]:
Accuracy rate = the ratio of the number of observation sequences for which the winning intentional state or activity matches the ground truth to the total number of test sequences.
Early detection (ED) = t*/T, where T is the observation length and t* = min{t | Pr(winning intentional activity) is highest from time t to T}.
Correct duration (CD) = C/T, where C is the total time during which the intentional state with the highest probability matches the ground truth.
For reliable recognition, the system should have a high accuracy rate, a small value for early detection, and a high correct duration. The accuracy rate of our system is 100%: all intent recognition scenarios have been correctly identified. Table 2 shows the early detection and the correct duration for these experiments. The worst results occurred when inferring agent 2's intent during the meeting scenarios. From our analysis of the data we observed that this result is due to small variations in computing the observable symbols from agent 2's perspective and to the high similarity between meeting and passing by.
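For reference, the following is one way the ED and CD measures could be computed from a single run; this is a sketch under our own assumptions (a constant ground-truth intent per sequence and a per-time-step record of the model with the highest probability), not the authors' evaluation code.

def early_detection_and_correct_duration(winning, truth):
    """winning[t]: intent with the highest model probability at time step t;
    truth: the ground-truth intent, assumed constant over the sequence.
    Returns (ED, CD) as fractions of the sequence length T."""
    T = len(winning)
    # Correct duration: fraction of time steps where the winner matches ground truth.
    cd = sum(1 for w in winning if w == truth) / T
    # Early detection: earliest t such that the winner equals the ground truth
    # continuously from t until the end of the sequence.
    t_star = T
    for t in range(T - 1, -1, -1):
        if winning[t] == truth:
            t_star = t
        else:
            break
    ed = t_star / T
    return ed, cd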
5 Conclusion and Future Work
In this paper, we proposed an approach for detecting intent from visual information. We developed a vision-based technique for target detection and tracking that uses a new non-parametric recursive modeling approach. We proposed a novel formulation of Hidden Markov Models (HMMs) to encode a robot's experiences and its interactions with the world when performing various actions. These models are used, through perspective taking, to infer the intent of other agents before their actions are finalized. This is in contrast to other activity recognition approaches, which only detect an activity after it is completed. We validated this architecture with an embedded robot detecting the intent of people performing multiple activities. We are working on expanding the robot's repertoire of activities to more complex navigation scenarios.
References
1. Premack, D., Woodruff, G.: Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences 1, 515–526 (1978)
2. Gopnick, A., Moore, A.: Changing your views: how understanding visual perception can lead to a new theory of mind. In: Children's Early Understanding of Mind, pp. 157–181 (1994)
3. Baldwin, D., Baird, J.: Discerning intentions in dynamic human action. Trends in Cognitive Sciences 5, 171–178 (2001)
4. Comaniciu, D., Ramesh, V., Meer, P.: Kernel-based object tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence 25, 564–577 (2003)
5. Stauffer, C., Grimson, W.: Learning patterns of activity using real-time tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence 22, 747–757 (2000)
6. Li, L., Huang, W., Gu, I., Tian, Q.: Statistical modeling of complex backgrounds for foreground object detection. IEEE Transactions on Image Processing 13, 1459–1472 (2004)
7. Elgammal, A., Duraiswami, R., Harwood, D., Davis, L.: Background and foreground modeling using nonparametric kernel density estimation for visual surveillance. Proceedings of the IEEE 90, 1151–1163 (2002)
8. Duong, T., Bui, H., Phung, D., Venkatesh, S.: Activity recognition and abnormality detection with the switching hidden semi-Markov model. In: IEEE Intl. Conference on Computer Vision and Pattern Recognition (2005)
9. Grosz, B.J., Sidner, C.L.: Plans for discourse. In: Intentions in Communication, pp. 417–444 (1990)
10. Arkin, R.C.: Behavior-Based Robotics. MIT Press (1998)
11. Baum, L.E., Petrie, T., Soules, G., Weiss, N.: A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Ann. Math. Statist. 41, 164–171 (1970)
12. Rabiner, L.R.: A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE 77, 257–286 (1989)
13. Forney Jr., G.D.: The Viterbi algorithm. Proceedings of the IEEE 61, 268–278 (1973)
14. Nguyen, N., Phung, D., Venkatesh, S., Bui, H.: Learning and detecting activities from movement trajectories using the hierarchical hidden Markov model. In: IEEE Intl. Conference on Computer Vision and Pattern Recognition, pp. 955–960 (2005)
Combinatorial Shape Decomposition
Ralf Juengling and Melanie Mitchell
Department of Computer Science, Portland State University,
P.O. Box 751, Portland, Oregon 97207-0751
Abstract. We formulate decomposition of two-dimensional shapes as a combinatorial optimization problem and present a dynamic programming algorithm that solves it.
1 Introduction
Identifying a shape's components can be essential for object recognition, object completion, and shape matching, among other computer vision tasks [1]. In this paper we present a novel shape-decomposition algorithm, aimed at capturing some of the heuristics used by humans when parsing shapes. In 1984, Hoffman and Richards [2] proposed the minima rule, a simple heuristic for making straight-line cuts that decompose a given shape (or silhouette): given a silhouette such as the one in Fig. 1(a), the end-points of cuts should be negative minima of curvature of its bounding contour (Fig. 1(b)). Note that this rule does not specify which pairs of these points should be connected to make cuts. Fig. 1(c) gives one possible set of cuts connecting negative minima. Later, Singh, Seyranian, and Hoffman [3] proposed an additional simple heuristic, supported by results of psychophysics experiments on human subjects, called the short-cut rule: if there are several competing cuts, select the one with the shortest length. For example, in Fig. 1(c), most people would prefer the cuts shown over a cut between the two topmost black dots, which would be significantly longer. Singh et al. make clear that the minima and short-cut rules are not the only heuristics needed for shape decomposition; other possible heuristics could involve local symmetries or good continuation, or rely on prior knowledge about the shape's category. However, the two simple heuristics seem to explain many of the experimental results on people. In this paper we propose an efficient algorithm for shape decomposition under which these two heuristics are approximately satisfied, without having to compute boundary curvature. Given a polygonal description of a silhouette, our algorithm computes the constrained Delaunay triangulation of the shape and chooses among the interior edges of this triangulation an optimum set of cuts by solving a corresponding combinatorial optimization problem.
Fig. 1. (a) Silhouette. (b) Black dots mark points at which curvature has a negative minimum. (c) Three possible cuts based on these points.
Fig. 2. Making a cut means breaking a polygon into two polygons. Here the cut is made between two concave vertices A and B. Because α = α1 + α2 and α < 2π the number of concave vertices never increases through a cut.
2 Finding Cuts by Combinatorial Optimization
We assume a polygonal description of the shape and require that a cut (1) connects two polygon vertices, and (2) does not cross a polygon edge. It follows that there is only a finite number of possible cuts for any given polygon. Our strategy is to define an objective function over the set of possible cuts, C, and to select the subset of cuts that minimizes the objective function. This is a combinatorial optimization problem with a number of possible solutions exponential in |C|. We introduce a third constraint on the set of possible cuts in Section 2.1 to make the problem amenable to a solution by dynamic programming. Curvature is not available in our polygonal framework and we need to adapt the minima rule. As in Latecki and Lakaemper [4], vertices with a concave angle play the role of boundary points of negative curvature (we measure the inside angle and call vertices with an angle greater than π concave; see Fig. 2). As Fig. 2 illustrates, placing a cut amounts to breaking a polygon in two. Our objective function favors a decomposition into convex parts by penalizing concave vertices. It is basically of the form Σ_k f(θ_k), where θ_k ranges over all angles of a given partition or cut set. For the example in Fig. 2, the difference between the objective function values of the empty cut set (left) and the set {AB} (right) is therefore

f(α) − f(α1) − f(α2) + f(β) − f(β1) − f(β2)    (1)
Thus f should be such that this difference is positive, i.e., the cut lowers the objective, when the cut AB is considered desirable. We will resume discussing the objective function below in Section 2.2.
Fig. 3. Different triangulations of a dog-shaped polygon: A minmax length triangulation (left) minimizes the maximum edge length, the minimum weight triangulation (middle) minimizes total edge length, and the constrained Delaunay triangulation (right) minimizes the maximum triangle circumcircle.
2.1 The Set of Possible Cuts
If a set of cuts for the polygon in Fig. 2 includes the cut AB, then it cannot simultaneously include the cut CD because AB and CD cross. On the other hand, if it were understood that possible cuts never cross, then it is enough to know all other cuts ending in either A or B to decide whether AB should be included to improve a tentative cut set. This insight is the key to our dynamic programming optimization algorithm (Section 2.4). We therefore pose as a third requirement on C, the set of possible cuts or chords, that no two elements in C cross. This also means that we are excluding a number of possible cuts outright when choosing C for a given shape. For the shape in Fig. 2, for example, we have to decide between AB and CD, among others. Any maximum set of chords obeying our third requirement corresponds to a triangulation of the shape polygon, a well-studied subject in computational geometry [5]. For C we need to choose a triangulation that contains most of the desired cuts. Since by the short-cut rule we prefer shorter cuts over longer ones, the minmax edge length or the minimum weight triangulation [6] ought to be good candidates (cf. Fig. 3). In addition we consider the constrained Delaunay triangulation (CDT, Fig. 3 right). The CDT optimizes several criteria (e.g., it maximizes the minimum angle and minimizes the maximum triangle circumcircle). While it tends to yield short chords as well, it is in general not optimal with respect to length criteria [6]. However, we find that chords of the CDT match our intuitive notion of “possible cut” best. This has to do with the defining property of the CDT, that every circumcircle is an empty circle [5]: If a sequence of polygons converges to a silhouette then the empty circles of the respective CDTs converge to maximum inscribed circles of the silhouette, and hence, in the limit, the chords of the CDT connect boundary points in local symmetry [7]. This observation corresponds to a third rule stated by Singh et al. [3], that a cut ought to cross an axis of local symmetry.
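As an illustration of this step, the sketch below builds the chord set C from the constrained Delaunay triangulation of a polygon. It is our own sketch, not the authors' code, and it assumes the Python `triangle` package (a wrapper around Shewchuk's Triangle); the package name, the 'p' flag, and the returned dictionary keys are assumptions about that library.

import numpy as np
import triangle  # assumed: the Python wrapper around Shewchuk's Triangle

def candidate_chords(vertices):
    """vertices: (n, 2) array of polygon vertices in boundary order.
    Returns the interior edges of the constrained Delaunay triangulation
    as pairs of vertex indices; these are the possible cuts C."""
    n = len(vertices)
    boundary = [(i, (i + 1) % n) for i in range(n)]          # polygon edges as constraints
    cdt = triangle.triangulate({'vertices': np.asarray(vertices, dtype=float),
                                'segments': np.asarray(boundary)}, 'p')
    boundary_set = {frozenset(e) for e in boundary}
    chords = set()
    for tri in cdt['triangles']:
        a, b, c = (int(v) for v in tri)
        for edge in ((a, b), (b, c), (c, a)):
            e = frozenset(edge)
            if e not in boundary_set:
                chords.add(e)                                 # interior edge = candidate cut
    return sorted(tuple(sorted(e)) for e in chords)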
iAB   iAC   FA(iAB, iAC)
0     0     f(α1 + α2 + α3)
0     1     (lAC/lA) f(α1) + (lAC/lA) f(α2 + α3)
1     0     (lAB/lA) f(α1 + α2) + (lAB/lA) f(α3)
1     1     (lAC/lA) f(α1) + (lAB lAC/lA²) f(α2) + (lAB/lA) f(α3)

Fig. 4. Left: Term FA; lAB denotes the length of chord AB, lA is the length of the shortest chord incident to A. Right: Chords (dashed) and corner angles incident to A.
2.2 The Objective Function
We now define a function E that determines whether one set of cuts is "better" than another. To that end we introduce a binary indicator variable ic for every chord c ∈ C and use the notation E(ic | C) to indicate that E is a function of the |C| variables ic, c ∈ C. The assignment ic = 1 means that chord c is a cut, ic = 0 means c is not a cut. A set of assignments to all ic is called a configuration.

E(ic | C) = Σ_{v ∈ V} Fv(ic | Cv)    (2)
Function E is the sum of |V| terms, V being the set of polygon vertices. For every v ∈ V we write Cv for the set of chords incident to v (every chord is incident to two vertices). Each term Fv in Equation (2) is itself a sum of the form Σ_k wk f(αk), where {αk} are the angles of the part corners incident to v and {wk} are weights, which we will discuss shortly. The number of angles depends on the configuration and ranges between 1 and |Cv| + 1. For example, assume there are two chords, AB and AC, incident to vertex A (Fig. 4, right). Then there are four possible configurations of CA (Fig. 4, left). With the configuration iAB = iAC = 0 (no cuts incident to A), the value of FA depends only on the interior angle of the polygon at A. With iAB ≠ iAC (one cut) it depends on the angles of the two corners separated by the cut and on the relative length of the cut, and with iAB = iAC = 1 it depends on three angles and two relative cut lengths. Thus the f-terms in a sum Fv are weighted by the relative lengths of the cuts involved (the lengths are normalized by the length of the shortest chord incident to v). This weighting scheme is again motivated by the short-cut rule.
We finally turn to the function f, which has to be defined on the interval (0, 2π). We derive its qualitative form from three principles:
1. Cuts should remove concave angles, except minor ones.
2. A convex polygon should never be partitioned.
3. Cuts that create an angle π or close to π are preferable.
From the second principle it follows that f should be non-increasing and have non-negative curvature in the range (0, π]. From the third principle it follows that f should have a minimum at π. We are free to choose f(π) = 0, as adding a constant to E does not affect the ranking of the configurations.
Fig. 5. Left: Plot of f(α) over α with γ0 = π/8. Right: The chord (dashed) should become a cut only if γ > γ0.
From the first principle it follows that f(α1 + α2) > f(α1) + f(α2) when α1 + α2 > π + γ0, where γ0 is some small angle by which we realize the tolerance for minor concavities. To derive a constraint on f related to this tolerance parameter, we consider the situation depicted in Fig. 5, right. The protrusion should be separated by a cut if and only if γ > γ0. With f(π) = 0 it follows that f(π + γ) > f(γ) when γ > γ0 and f(π + γ) < f(γ) when γ < γ0. The following simple function meets all the stated constraints; it is plotted in Fig. 5, left.

f(α) = (α − π)/(γ0 − π)         if α < π
f(α) = ((α − π)/γ0)²            if π ≤ α < π + γ0
f(α) = (2/γ0)(α − π) − 1        if π + γ0 ≤ α        (3)
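A direct transcription of Eq. (3) follows (our own sketch; the default γ0 = π/8 follows the value used in Fig. 5):

import numpy as np

def f(alpha, gamma0=np.pi / 8):
    """Corner cost of Eq. (3): zero at pi, small for convex angles,
    growing rapidly for concave angles beyond the tolerance gamma0."""
    if alpha < np.pi:
        return (alpha - np.pi) / (gamma0 - np.pi)
    if alpha < np.pi + gamma0:
        return ((alpha - np.pi) / gamma0) ** 2
    return 2.0 * (alpha - np.pi) / gamma0 - 1.0

# Sanity checks: f(pi) = 0, and with gamma0 = pi/8 the value at 2*pi is 15,
# matching the range shown in the plot of Fig. 5.
assert abs(f(np.pi)) < 1e-12 and abs(f(2 * np.pi) - 15.0) < 1e-9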
2.3 Robustness to Similarity Transforms
The objective function Eq. (2) is invariant under rotation, translation and scaling as it depends only on angles between edges and chords, and on ratios of chord lengths. However, this invariance is irrelevant if the process by which the shape contour is obtained is not also invariant or at least robust to these transforms. We therefore take the output of a contour tracing algorithm and simplify it with Lowe's algorithm [8] to obtain a polygonal description robust to the named transformations. We next add polygon vertices so that the Euclidean distance between two adjacent vertices is bounded from above by some value r (r = 8 is used for all following results). This step is to ensure that the set of chords is dense in the sense that there are chords close to the cuts that would be obtained in the continuous limit r → 0 (Fig. 6).
2.4 Minimizing the Objective Function
We briefly discuss two algorithms for minimizing the objective function, a dynamic programming algorithm which yields an optimal solution, and a greedy algorithm which finds a good but not always optimal solution.
Fig. 6. Simplified polygon obtained with Lowe's algorithm (left). Polygons obtained from the first by regular resampling with parameter r = 8 (middle) and r = 3 (right), respectively. Polygon vertices are indicated by points; in the right polygon, points overlap and appear as a continuous line. Each polygon is shown with its best cut set.
The Dynamic Programming Algorithm: First observe that the number of arguments of each term Fv in the objective function Eq. (2) is much smaller than the number of arguments of E, |Cv| ≪ |C|. Second, each variable ic is an argument of only two terms. Whether ic = 0 or ic = 1 in the optimal configuration is conditional on the optimal arguments of these two terms only. For example, the optimal configuration for the variables in {ic | c ∈ CA ∪ CB − {AB}} determines the optimal value for iAB. The dynamic programming or "variable elimination" approach consists of reducing the number of variables in the objective function one by one. We write En for the objective function obtained after n elimination steps. The functions En are related in that the optimal configuration for E is also optimal for all En. A variable ic that is an argument of En is eliminated by replacing the two terms of En that ic appears in by a single new term. The new term is defined as the minimum, over the values of the eliminated variable, of the sum of the replaced terms. For example, assume iAB appears in the two terms F(ic | CF) and G(ic | CG) of En. We eliminate iAB by replacing F and G by

H(ic | CF ∪ CG − {AB}) = min_{iAB} [ F(ic | CF) + G(ic | CG) ]    (4)
in En to obtain En+1 . The process of iteratively eliminating variables eventually yields a function Em of only one variable. Call the variables eliminated in the successive steps i1 , i2 , ..., im−1 , and the remaining variable im . The value om that minimizes Em is the optimal assignment to im . The value that minimizes Em−1 with im = om is the optimal assignment to im−1 . Optimal assignments to variables im−2 , im−3 ,...i1 are obtained in the same way. The Greedy Algorithm: The greedy algorithm works as follows. Starting with the empty set as current cut set it repeatedly finds a single cut that, when added to the current cut set, reduces the objective function value by the largest amount. The algorithm terminates when no such cut can be found.
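The greedy procedure can be stated in a few lines; this is our own sketch, in which delta_E(c, cuts) is assumed to return the change in the objective E caused by adding chord c to the current cut set (by the locality observation above, it only depends on the chords incident to c's endpoints):

def greedy_cuts(chords, delta_E):
    """Greedy minimization: repeatedly add the single chord whose inclusion
    lowers the objective the most; stop when no chord improves it."""
    cuts = set()
    remaining = set(chords)
    while remaining:
        best_chord, best_gain = None, 0.0
        for c in remaining:
            gain = delta_E(c, cuts)          # negative gain means E decreases
            if gain < best_gain:
                best_chord, best_gain = c, gain
        if best_chord is None:               # no remaining chord reduces E
            break
        cuts.add(best_chord)
        remaining.remove(best_chord)
    return cuts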
Fig. 7. An example in which the greedy algorithm gives an unsatisfactory result (left; optimal cut set on the right)
This algorithm often yields the optimal or a close-to-optimal solution, and it has the advantage of being faster and simpler than our original algorithm. However, it sometimes makes inappropriate cuts, as illustrated in Fig. 7.
2.5 Computational Complexity
To summarize, our algorithm first ensures the sampling of the silhouette is dense enough; the number of vertices n is proportional to the silhouette length. Second, C, the set of possible cuts, is computed in O(n log n) time [6], and |C| is linear in n. Third, the objective function with |C| variables is dynamically created. Fourth, a configuration is found which minimizes the objective function. The objective function is represented by a set of tables, one table for each term in Eq. (2). The complexity of the third step is governed by the number of tables (n) and by the size of the tables, which is |Cv| · 2^|Cv|. Thus, if we use d to denote the maximum degree of the CDT computed in the second step, then the third step is O(n(d − 2)2^(d−2)) = O(nd 2^d) in time and space. Evaluation of the objective function may be implemented to cost O(n) if the argument is a configuration, or O(c) if the argument is a list of the c variables with non-zero assignment. We use the second variant. With the greedy algorithm we first evaluate the objective function for all singleton cut sets (all but one variable values are zero) and store the corresponding variables in a heap with the objective function value as key (this takes O(n log n) time). We then take variables from the heap until the current configuration cannot be improved upon (m times, say), updating the heap in each iteration (O(md + d log n), as at most 2(d − 2) variables are affected). Thus, the greedy algorithm takes O(n log n + nd 2^d + md log n) time, which is O(n log n) if d is bounded. For the optimal algorithm the situation is more complicated because the computational cost of computing the tables corresponding to new terms as in Eq. (4) hinges on the number |CF ∪ CG|. How big these numbers get depends crucially on the order in which the variables are eliminated, and finding a variable ordering that minimizes the total table size is known as the "secondary optimization problem" in nonserial dynamic programming [9]. We lack the space to discuss it here, but point out that, when the shape is simple (it has no holes),
the structure of the objective function (2) is described by an "interaction graph" [9] that is chordal. In this case an optimal elimination ordering can be found in O(n + |C|) = O(n) time [10]. The computational complexity of the optimal algorithm then is O(n log n + nD 2^D), where D is the maximum number of variables of a single term that occurs in the course of the variable elimination process.
3 Results
Fig. 8 presents results of the global optimization and the greedy algorithm, respectively. For most shapes both cut sets are very similar, due to the fact that cut sets tend to be sparse subsets of C (every cut that does not share a vertex with another cut is selected by both algorithms). However, the results show that the more complicated optimal algorithm can yield more appropriate results. For instance, all teeth of the cockscomb were separated by the optimal algorithm because cutting all teeth results in a relatively smooth part corresponding to the cock's head. Many of the resulting parts seem too small (e.g., the parts of the lizard shape). This is because "part significance" is not reflected in the objective function. We believe that the best approach would be to determine part significance in subsequent processing and prune the cut set, instead of incorporating part significance directly into the objective function. Occasionally neither of our algorithms selects a desired cut, as the elephant example shows: the leftmost leg is not separated from the torso. This is because the leg's left and right boundary curves are both too smooth and do not feature a "concave enough" vertex.
4 Related Work
We briefly discuss the relation to other algorithms that assume a polygonal shape description as input. The algorithm by Latecki and Lakaemper [4] also chooses cuts that connect vertices of the input polygon. These vertices are found as the most stable vertices in a process of "discrete curve evolution". While Latecki and Lakaemper's method selects good vertices, it does not pair them according to proximity and sometimes yields implausibly long cuts. Rosin's approach is perhaps closest in spirit to ours [11]. It also defines cuts as interior edges connecting polygon vertices and optimizes an objective function that favors a decomposition into convex parts. Unlike our algorithm, which chooses from C, Rosin's algorithm considers all possible cuts and needs to identify crossing cuts during optimization. The computational cost depends exponentially on the number of cuts; searching for an optimal decomposition is only feasible if the number of cuts is very small (most decompositions presented in [11] contain only one or two cuts). Prasad recently proposed selecting cuts from the chords of the CDT of a shape [12]. He defines a function on the set of chords that evaluates the degree of overlap of adjacent circumcircles ("approximate co-circularity"), and selects chords with
Fig. 8. Example shapes with optimal cut sets (top figure in each row) juxtaposed with the corresponding result of the greedy algorithm (bottom figure in each row)
low co-circularity score, resulting in good correspondence to the minima and short-cut rules. However, the connection between these rules and his co-circularity measure is unclear. A quantitative evaluation and comparison with the other cited approaches is very desirable, but to our knowledge no established benchmark for shape decomposition currently exists. We plan to propose evaluation measures and to conduct a comparative study in future work.
Acknowledgements
This work was supported by a grant to MM from the J. S. McDonnell Foundation. We used J. R. Shewchuk's Triangle code [13], Y. LeCun and L. Bottou's Lush programming environment, and obtained the shape data for our experiments from R. Lakaemper.
References
1. Zhu, S.C., Yuille, A.L.: Forms: a flexible object recognition and modelling system. International Journal of Computer Vision 20, 187–212 (1996)
2. Hoffman, D.D., Richards, W.A.: Parts of recognition. Cognition 18, 65–96 (1985)
3. Singh, M., Seyranian, G., Hoffman, D.: Parsing silhouettes: the short-cut rule. Perception and Psychophysics 61, 636–660 (1999)
4. Latecki, L.J., Lakaemper, R.: Convexity rule for shape decomposition based on discrete contour evolution. Computer Vision and Image Understanding 73, 441–454 (1999)
5. Goodman, J.E., O'Rourke, J. (eds.): Handbook of Discrete and Computational Geometry. CRC Press, Boca Raton, USA (1997)
6. Bern, M., Eppstein, D.: Mesh generation and optimal triangulation. In: Hwang, F.K., Du, D.Z. (eds.) Computing in Euclidean Geometry. World Scientific, Singapore (1992)
7. Brady, M., Asada, H.: Smoothed local symmetries and their implementation. International Journal of Robotics Research 3, 36–61 (1984)
8. Lowe, D.G.: Three-dimensional object recognition from single two-dimensional images. Artificial Intelligence 31, 355–395 (1987)
9. Bertele, U., Brioschi, F.: Nonserial Dynamic Programming. Mathematics in Science and Engineering, vol. 91. Academic Press, London (1972)
10. Rose, D.J., Tarjan, R.E., Lueker, G.S.: Algorithmic aspects of vertex elimination on graphs. SIAM Journal on Computing 5, 266–283 (1976)
11. Rosin, P.L.: Shape partitioning by convexity. IEEE Transactions on Systems, Man, and Cybernetics, Part A 30, 202–210 (2000)
12. Prasad, L.: Rectification of the chordal axis transform and a new criterion for shape decomposition. In: Andrès, E., Damiand, G., Lienhardt, P. (eds.) DGCI 2005. LNCS, vol. 3429, pp. 263–275. Springer, Heidelberg (2005)
13. Shewchuk, J.R.: Triangle: engineering a 2D quality mesh generator and Delaunay triangulator. In: Lin, M.C., Manocha, D. (eds.) FCRC-WS 1996 and WACG 1996. LNCS, vol. 1148, pp. 203–222. Springer, Heidelberg (1996)
Rotation-Invariant Texture Recognition
Javier A. Montoya-Zegarra¹,², João P. Papa², Neucimar J. Leite², Ricardo da Silva Torres², and Alexandre X. Falcão²
¹ Computer Engineering Department, Faculty of Engineering, San Pablo Catholic University, Av. Salaverry 301, Vallecito, Arequipa, Peru
² Institute of Computing, State University of Campinas, Av. Albert Einstein 1216, Campinas, São Paulo, Brazil
{jmontoyaz,papa.joaopaulo}@gmail.com
Abstract. This paper proposes a new texture classification system, which is distinguished by: (1) a new rotation-invariant image descriptor based on Steerable Pyramid Decomposition, and (2) by a novel multi-class recognition method based on Optimum Path Forest. By combining the discriminating power of our image descriptor and classifier, our system uses small size feature vectors to characterize texture images without compromising overall classification rates. State-of-the-art recognition results are further presented on the Brodatz dataset. High classification rates demonstrate the superiority of the proposed method.
1 Introduction
In the last years, several image recognition systems have been proposed in the literature as a result of many research efforts [1, 2]. Although those approaches have achieved high classification rates, most of them have not been widely evaluated on texture image databases. Traditionally, texture images are characterized by: (1) small inter-class variations, i.e., textures belonging to different classes may appear quite similar, especially in terms of their global patterns (coarseness, smoothness, etc.), and (2) the presence of image distortions such as rotations. In this sense, texture pattern recognition is still an open task. The next challenge in texture classification should be, therefore, to achieve rotation-invariant feature representations for non-controlled environments. To address some of these limitations, this work proposes a new texture classification method, which is characterized by: (1) a new texture image descriptor based on Steerable Pyramid Decomposition, which encodes the relevant texture information in small size feature vectors, including rotation-invariant characterization, and (2) a novel multi-class object recognition method based on the Optimum Path Forest classifier [3]. Roughly speaking, a Steerable Pyramid is a method by which images are decomposed into a set of multi-scale, multi-orientation image subbands, where the basis functions are directional derivative operators [4]. Our motivation for using Steerable Pyramids is that, unlike in other image decomposition methods, the feature coefficients are less affected by image distortions. Furthermore, the Optimum Path Forest classifier is a recent approach that handles non-separable classes without the need for boosting procedures to increase its performance, thus resulting in a faster and more accurate classifier for object recognition. By combining the discriminating
power of our image descriptor and classifier, our system uses small size feature vectors to characterize texture images without compromising overall classification rates. In this way, texture classification applications where data storage capacity is a limitation are further facilitated. The outline of this paper is as follows. In the next section, we briefly review the fundamentals of the Steerable Pyramid Decomposition. Section 3 describes how texture images are characterized to obtain rotation-invariant representations. Section 4 introduces the Optimum Path Forest classifier method. The experimental setup conducted in our study is presented in Section 5. In Section 6, experimental results on several datasets are given and used to demonstrate the recognition accuracy improvement of our approach. Comparisons with other texture feature representations and classifiers are further discussed. Finally, some conclusions are drawn in Section 7.
2 Steerable Pyramid Decomposition
The Steerable Pyramid Decomposition is a linear multi-resolution image decomposition method, by which an image is subdivided into a collection of subbands localized at different scales and orientations [4]. Using a high- and a low-pass filter (H0, L0), the input image is initially decomposed into two subbands: a high- and a low-pass subband, respectively. Further, the low-pass subband is decomposed into K oriented band-pass portions B0, ..., BK−1, and into a low-pass subband L1. The decomposition is done recursively by subsampling the lower low-pass subband (LS) by a factor of 2 along the rows and columns. Each recursive step captures different directional information at a given scale. Considering the polar separability of the filters in the Fourier domain, the first low- and high-pass filters are defined as [5]:

L0(r, θ) = L(r/2, θ)    H0(r, θ) = H(r/2, θ)    (1)

where r, θ are the polar frequency coordinates. L, H are raised-cosine low- and high-pass transfer functions:

L(r, θ) = 2                               for r ≤ π/4
L(r, θ) = 2 cos((π/2) log2(4r/π))         for π/4 < r < π/2
L(r, θ) = 0                               for r ≥ π/2        (2)

The K directional band-pass filters used in the iterative stages are

Bk(r, θ) = H(r, θ) Gk(θ),    k ∈ [0, K − 1]    (3)

with radial and angular parts defined as:

H(r, θ) = 1                               for r ≥ π/2
H(r, θ) = cos((π/2) log2(2r/π))           for π/4 < r < π/2
H(r, θ) = 0                               for r ≤ π/4        (4)

Gk(θ) = αK [cos(θ − πk/K)]^(K−1)          if |θ − πk/K| < π/2
Gk(θ) = 0                                 otherwise           (5)

where αK = 2^(K−1) (K − 1)! / sqrt(K [2(K − 1)]!).
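To make Eqs. (2), (4), and (5) concrete, the sketch below (our own NumPy code, not code from the paper) evaluates the radial and angular filter components on a frequency grid; the grid construction details are our own assumptions.

import math
import numpy as np

def radial_lowpass(r):
    """Raised-cosine low-pass L(r) of Eq. (2)."""
    out = np.zeros_like(r)
    out[r <= np.pi / 4] = 2.0
    band = (r > np.pi / 4) & (r < np.pi / 2)
    out[band] = 2.0 * np.cos(np.pi / 2 * np.log2(4.0 * r[band] / np.pi))
    return out

def radial_highpass(r):
    """Raised-cosine high-pass H(r) of Eq. (4)."""
    out = np.zeros_like(r)
    out[r >= np.pi / 2] = 1.0
    band = (r > np.pi / 4) & (r < np.pi / 2)
    out[band] = np.cos(np.pi / 2 * np.log2(2.0 * r[band] / np.pi))
    return out

def angular_window(theta, k, K):
    """Directional window G_k(theta) of Eq. (5)."""
    alpha_K = 2.0 ** (K - 1) * math.factorial(K - 1) / math.sqrt(K * math.factorial(2 * (K - 1)))
    d = np.mod(theta - np.pi * k / K + np.pi, 2.0 * np.pi) - np.pi   # wrap to (-pi, pi]
    out = np.zeros_like(theta)
    mask = np.abs(d) < np.pi / 2
    out[mask] = alpha_K * np.cos(d[mask]) ** (K - 1)
    return out

def bandpass_filter(rows, cols, k, K):
    """Frequency response B_k(r, theta) = H(r) G_k(theta) on a rows x cols grid (Eq. (3))."""
    wy = 2.0 * np.pi * np.fft.fftfreq(rows)
    wx = 2.0 * np.pi * np.fft.fftfreq(cols)
    WY, WX = np.meshgrid(wy, wx, indexing='ij')
    r = np.hypot(WX, WY)
    theta = np.arctan2(WY, WX)
    return radial_highpass(r) * angular_window(theta, k, K)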
3 Texture Feature Representation
This section describes the proposed modification of the Steerable Pyramid Decomposition to obtain rotation-invariant representations, used to characterize the texture images.

3.1 Texture Representation
Roughly speaking, texture images can be seen as a set of basic repetitive primitives characterized by their spatial homogeneity [6]. By applying statistical measures, this information is extracted and used to capture the relevant image content into feature vectors. More precisely, by considering the presence of homogeneous regions in texture images, we use the mean (μmn) and standard deviation (σmn) of the energy distribution of the filtered images (Smn). Given an image I(x, y), its Steerable Pyramid Decomposition is defined as:

Smn(x, y) = Σ_{x1} Σ_{y1} I(x1, y1) Bmn(x − x1, y − y1)    (6)

where Bmn denotes the directional band-pass filters at stage m = 0, 1, ..., S − 1 and orientation n = 0, 1, ..., K − 1. The energy distribution E(m, n) of the filtered images at scale m and orientation n is defined as:

E(m, n) = Σ_x Σ_y |Smn(x, y)|    (7)

Additionally, the mean (μmn) and standard deviation (σmn) of the energy distributions are found as follows:

μmn = E(m, n) / (MN)    σmn = sqrt( (1/(MN)) Σ_x Σ_y (|Smn(x, y)| − μmn)² )    (8)

The corresponding feature vector (f) is defined by using the mean and standard deviation as feature elements. It is denoted as:

f = [μ00, σ00, μ01, σ01, ..., μS−1 K−1, σS−1 K−1]    (9)
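A minimal sketch of Eqs. (7)-(9), assuming the band-pass responses Smn are already available as arrays (our own code, with illustrative names):

import numpy as np

def texture_feature_vector(subbands):
    """subbands[m][n]: response S_mn of the band-pass filter at scale m and
    orientation n. Returns f = [mu_00, sigma_00, mu_01, sigma_01, ...] (Eq. (9))."""
    features = []
    for scale in subbands:               # m = 0, ..., S-1
        for s_mn in scale:               # n = 0, ..., K-1
            mag = np.abs(s_mn)
            mu = mag.mean()              # E(m, n) / (M N), Eqs. (7)-(8)
            sigma = mag.std()            # standard deviation of |S_mn| around mu
            features.extend([mu, sigma])
    return np.array(features)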
3.2 Rotation-Invariant Representation
Rotation-invariant representation is achieved by computing the dominant orientation of the texture images, followed by feature alignment. The dominant orientation (DO) is defined as the orientation with the highest total energy across the different scales considered during image decomposition. It is computed by finding the highest accumulated energy over the K different orientations considered during image decomposition:

DOi = max{ E0^(R), E1^(R), ..., EK−1^(R) }    (10)

where i is the index where the dominant orientation is found, and:

En^(R) = Σ_{m=0}^{S−1} E(m, n),    n = 0, 1, ..., K − 1.    (11)
Note that each En^(R) covers a set of filtered images at different scales but at the same orientation. Finally, rotation invariance is obtained by circularly shifting the feature elements within the same scales, so that the first elements at each scale correspond to the dominant orientation. This process is based on the assumption that, in order to classify textures, they should be rotated so that their dominant directions are the same. Further, it has been proved that image rotation in the spatial domain is equivalent to a circular shift of feature vector elements [7].
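A sketch of this alignment step (our own code; it assumes the feature vector is ordered scale-by-scale as in Eq. (9) and that the per-subband energies E(m, n) are available):

import numpy as np

def align_to_dominant_orientation(features, energies, S, K):
    """features: length 2*S*K vector [mu_00, sigma_00, ..., mu_{S-1,K-1}, sigma_{S-1,K-1}];
    energies: (S, K) array with energies[m, n] = E(m, n).
    The dominant orientation maximizes the energy accumulated over scales
    (Eq. (11)); feature elements are circularly shifted within each scale so
    that the dominant orientation comes first."""
    total_per_orientation = np.asarray(energies).sum(axis=0)   # E_n^(R)
    dominant = int(total_per_orientation.argmax())             # Eq. (10)
    f = np.asarray(features).reshape(S, K, 2)                  # (scale, orientation, [mu, sigma])
    return np.roll(f, -dominant, axis=1).reshape(-1)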
4 Texture Feature Recognition
This section presents a new approach to pattern recognition called OPF (Optimum Path Forest), which has been demonstrated to be generally more efficient than Artificial Neural Networks and Support Vector Machines [3]. The OPF approach works by modeling the patterns as nodes of a graph in the feature space, where every pair of nodes is connected by an arc (complete graph). This classifier creates a discrete optimal partition of the feature space such that any unknown sample can be classified according to this partition. This partition is an optimum path forest computed in ℝⁿ by the image foresting transform (IFT) algorithm [8]. Let Z1, Z2, and Z3 be training, evaluation, and test sets with |Z1|, |Z2|, and |Z3| samples, such as feature vectors. Let λ(s) be the function that assigns the correct label i, i = 1, 2, ..., c, from class i to any sample s ∈ Z1 ∪ Z2 ∪ Z3. Z1 and Z2 are labeled sets used for the design of the classifier, and the unseen set Z3 is used to compute the final accuracy of the classifier. Let S ⊂ Z1 be a set of prototypes of all classes (i.e., key samples that best represent the classes). Let v be an algorithm which extracts n attributes (texture properties) from any sample s ∈ Z1 ∪ Z2 ∪ Z3 and returns a vector v(s) ∈ ℝⁿ. The distance d(s, t) between two samples, s and t, is the one between their feature vectors v(s) and v(t) (e.g., Euclidean or any valid metric). Let (Z1, A) be a complete graph whose nodes are the samples in Z1. We define a path as a sequence of distinct samples π = ⟨s1, s2, ..., sk⟩, where (si, si+1) ∈ A for 1 ≤ i ≤ k − 1. A path is said to be trivial if π = ⟨s1⟩. We assign to each path π a cost f(π) given by a path-cost function f. A path π is said to be optimum if f(π) ≤ f(π′) for any other path π′, where π and π′ end at the same sample sk. We also denote by π · ⟨s, t⟩
the concatenation of a path π with terminus at s and an arc (s, t). The OPF algorithm uses the path-cost function fmax, because of its theoretical properties for estimating optimum prototypes:

fmax(⟨s⟩) = 0 if s ∈ S, +∞ otherwise
fmax(π · ⟨s, t⟩) = max{fmax(π), d(s, t)}    (12)
We can observe that fmax (π) computes the maximum distance between adjacent samples in π, when π is not a trivial path.
The OPF algorithm assigns one optimum path P*(s) from S to every sample s ∈ Z1, forming an optimum path forest P (a function with no cycles which assigns to each s ∈ Z1\S its predecessor P(s) in P*(s), or a marker nil when s ∈ S). Let R(s) ∈ S be the root of P*(s), which can be reached from P(s). The OPF algorithm computes for each s ∈ Z1 the cost C(s) of P*(s), the label L(s) = λ(R(s)), and the predecessor P(s), as follows.

Algorithm 1 – OPF ALGORITHM
INPUT: A λ-labeled training set Z1, prototypes S ⊂ Z1, and the pair (v, d) for feature vector and distance computations.
OUTPUT: Optimum path forest P, cost map C, and label map L.
AUXILIARY: Priority queue Q and cost variable cst.

1.  For each s ∈ Z1\S, set C(s) ← +∞.
2.  For each s ∈ S, do
3.      C(s) ← 0, P(s) ← nil, L(s) ← λ(s), and insert s in Q.
4.  While Q is not empty, do
5.      Remove from Q a sample s such that C(s) is minimum.
6.      For each t ∈ Z1 such that t ≠ s and C(t) > C(s), do
7.          Compute cst ← max{C(s), d(s, t)}.
8.          If cst < C(t), then
9.              If t ∈ Q, then remove t from Q.
10.             P(t) ← s, L(t) ← L(s), C(t) ← cst, and insert t in Q.
Lines 1–3 initialize maps and insert prototypes in Q. The main loop computes an optimum path from S to every sample s in a non-decreasing order of cost (Lines 4–10). At each iteration, a path of minimum cost C(s) is obtained in P when we remove its last node s from Q (Line 5). Lines 8–10 evaluate whether the path that reaches an adjacent node t through s is cheaper than the current path with terminus t, and update the position of t in Q, C(t), L(t), and P(t) accordingly. The label L(s) may be different from λ(s), leading to classification errors in Z1. The training finds prototypes with no classification errors in Z1. The OPF algorithm works in two phases, training and classification (test), as follows.

4.1 Training Phase
We say that S* is an optimum set of prototypes when Algorithm 1 propagates the labels L(s) = λ(s) for every s ∈ Z1. Set S* can be found by exploiting the theoretical relation between the Minimum Spanning Tree (MST) [9] and the optimum path tree for fmax. The training essentially consists of finding S* and an OPF classifier rooted at S*. By computing an MST in the complete graph (Z1, A), we obtain a connected acyclic graph whose nodes are all samples in Z1 and whose arcs are undirected and weighted by the distance d between the adjacent sample feature vectors. This spanning tree is optimum in the sense that the sum of its arc weights is minimum as compared to any other spanning tree in the complete graph. In the MST, every pair of samples is connected by a single path, which is optimum according to fmax. That is, for any given sample s ∈ Z1,
it is possible to direct the arcs of the MST such that the result will be an optimum path tree P for fmax rooted at s. The optimum prototypes are the closest elements in the MST with different labels in Z1. By removing the arcs between different classes, their adjacent samples become prototypes in S*, and Algorithm 1 can compute an optimum path forest with no classification errors in Z1 without data overfitting [10].

4.2 Classification
For any sample t ∈ Z3, the OPF considers all arcs connecting t with samples s ∈ Z1, as though t were part of the graph. Considering all possible paths from S* to t, we wish to find the optimum path P*(t) from S* and label t with the class λ(R(t)) of its most strongly connected prototype R(t) ∈ S*. This path can be identified incrementally by evaluating the optimum cost C(t) as

C(t) = min{ max{C(s), d(s, t)} }, ∀s ∈ Z1.    (13)
Let the node s* ∈ Z1 be the one that satisfies the above equation (i.e., the predecessor P(t) in the optimum path P*(t)). Given that L(s*) = λ(R(t)), the classification simply assigns L(s*) as the class of t. An error occurs when L(s*) ≠ λ(t).

4.3 Learning Algorithm
The performance of the OPF classifier improves when the closest samples from different classes are included in Z1, because the method finds prototypes that work as sentinels at the frontier between classes. The learning algorithm first replaces irrelevant samples of Z1 by errors in Z2, and secondly replaces other samples of Z2 by irrelevant samples of Z1. In both cases, the algorithm never lets a sample of Z2 return to Z1. The learning algorithm essentially tries to identify these prototypes from a few iterations of classification over Z2. In order to define irrelevant samples, the OPF algorithm can identify all samples in Z1 that participated in the classification of any node t ∈ Z2. The OPF considers all right and wrong classifications in Z2. When t ∈ Z2 is correctly/incorrectly classified, we add one to the number of right/wrong classifications of every sample r ∈ Z1 in the optimum path P*(t) from R(t) ∈ S* to s*. In this way, the learning algorithm outputs the final projected classifier, which can now be used to predict the labels of Z3.
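The following compact sketch summarizes training (prototype estimation from the MST followed by Algorithm 1) and classification (Eq. (13)). It is our own simplified NumPy version with Euclidean distances, not the authors' implementation, and it omits the learning procedure over Z2.

import numpy as np
from heapq import heapify, heappush, heappop

def mst_prototypes(X, y):
    """Prim's MST on the complete graph; endpoints of edges linking different
    classes become prototypes (the set S*)."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    in_tree = np.zeros(n, dtype=bool)
    best = np.full(n, np.inf)
    parent = np.full(n, -1)
    best[0] = 0.0
    prototypes = set()
    for _ in range(n):
        u = int(np.argmin(np.where(in_tree, np.inf, best)))
        in_tree[u] = True
        if parent[u] >= 0 and y[u] != y[parent[u]]:
            prototypes.update((u, int(parent[u])))
        closer = (~in_tree) & (dist[u] < best)
        best[closer] = dist[u][closer]
        parent[closer] = u
    return prototypes, dist

def opf_train(X, y):
    """Algorithm 1 with the path-cost function f_max."""
    S, dist = mst_prototypes(X, y)
    n = len(X)
    C = np.full(n, np.inf)
    L = np.array(y)
    for s in S:
        C[s] = 0.0
    queue = [(0.0, s) for s in S]
    heapify(queue)
    closed = np.zeros(n, dtype=bool)
    while queue:
        cs, s = heappop(queue)
        if closed[s]:
            continue
        closed[s] = True
        for t in range(n):
            if t != s and C[t] > cs:
                cst = max(cs, dist[s, t])      # f_max of the extended path
                if cst < C[t]:
                    C[t], L[t] = cst, L[s]
                    heappush(queue, (cst, t))
    return X, C, L

def opf_classify(train_X, C, L, x):
    """Eq. (13): the class of the training sample minimizing max{C(s), d(s, x)}."""
    d = np.linalg.norm(train_X - x, axis=1)
    return L[int(np.argmin(np.maximum(C, d)))]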
5 Experimental Setup
5.1 Datasets
To evaluate the accuracy of our approach, we selected thirteen texture images obtained from the standard Brodatz database. Before being digitized, each of the 512 × 512 texture images was rotated at different degrees [11]. Figure 1 displays the non-rotated version of each of the texture images. From this database, three different image datasets were generated: non-distorted, rotated-set A, and rotated-set B. The non-distorted image dataset was constructed just
Fig. 1. Texture images from the Brodatz dataset used in our experiments. From left to right, and from top to bottom, they include: Bark, Brick, Bubbles, Grass, Leather, Pigskin, Raffia, Sand, Straw, Water, Weave, Wood, and Wool.
from the original input textures (i.e., texture patterns at 0 degrees). Each texture image was partitioned into sixteen 128 × 128 non-overlapping subimages. Thus, this dataset comprises 208 (13 × 16) different images. Furthermore, images belonging to this dataset will be used in the learning stage of our classifier. The second image dataset is referred to as rotated image dataset A, and was generated by selecting the four 128 × 128 innermost subimages from texture images at 0, 30, 60, and 120 degrees. A total number of 208 images was generated (13 × 4 × 4). Finally, in the rotated image dataset B, we selected the four 128 × 128 innermost subimages of the rotated image textures (512 × 512) at 0, 30, 60, 90, 120, 150, and 200 degrees. This led to 364 (13 × 4 × 7) dataset images. Rotated image dataset A, as well as rotated image dataset B, will be used for recognition purposes.

5.2 Classification Evaluation
In our experiments, the accuracy was measured by taking into account the possibility of having classes with different cardinalities. Let N(i), i = 1, 2, ..., c, be the number of samples in each class i and N = N(1) ∪ N(2) ... ∪ N(c) be the whole dataset. We define the partial errors ei,1 and ei,2 as follows:

ei,1 = FP(i) / (|N| − |N(i)|)    and    ei,2 = FN(i) / |N(i)|,    i = 1, ..., c.    (14)
where FP(i) and FN(i) denote the false positives and false negatives, respectively. That is, FP(i) represents the number of samples of other classes that were classified as being from class i, and FN(i) represents the number of samples in class i that were misclassified. The errors ei,1 and ei,2 are then used to define the accumulated partial error of class i as:

E(i) = ei,1 + ei,2.    (15)

Finally, the resulting classification accuracy L is found as:

L = (2c − Σ_{i=1}^{c} E(i)) / (2c) = 1 − Σ_{i=1}^{c} E(i) / (2c).    (16)
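A direct transcription of Eqs. (14)-(16) follows (our own sketch; class labels are assumed to be 0, ..., c−1):

import numpy as np

def balanced_accuracy(y_true, y_pred, c):
    """Classification accuracy L of Eq. (16), robust to unbalanced classes."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    N = len(y_true)
    E_total = 0.0
    for i in range(c):
        Ni = np.sum(y_true == i)
        FP = np.sum((y_pred == i) & (y_true != i))   # other classes labeled as i
        FN = np.sum((y_pred != i) & (y_true == i))   # class-i samples misclassified
        E_total += FP / (N - Ni) + FN / Ni           # e_{i,1} + e_{i,2}, Eqs. (14)-(15)
    return 1.0 - E_total / (2 * c)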
6 Experimental Results
To demonstrate the discriminating power of our proposed method for recognizing rotated texture patterns, we conducted two series of experiments. In the first series of experiments (Subsection 6.1), we evaluated the effectiveness of the proposed rotation-invariant representation against two other approaches: the conventional Pyramid Decomposition [12] and a recent proposal based on Gabor Wavelets [13]. The second series of experiments (Subsection 6.2) was used to evaluate the recognition accuracy of the novel OPF multi-class classifier. Note that both series of experiments were conducted using rotated-sets A and B. The rotated-set A was used to analyze the recognition accuracy of our method in the presence of few rotated versions of texture patterns. In addition, experiments conducted on the rotated-set B consider several rotated versions of different texture patterns. In both series of experiments, we used Steerable Pyramids with different decomposition levels (S = 2, 3, 4) at several orientations (K = 4, 5, 6, 7, 8). Our experiments agree with [14] in that the most relevant textural information in images is contained in the first two levels of decomposition, since little recognition improvement is achieved by varying the number of scales during image decomposition. Therefore, we focus our discussions on image decompositions having (S = 2, 3) scales. The dimensionality of the feature vectors depends on the number of scales (S) and on the number of orientations (O) considered during image decomposition and is computed as 2 × O × S. Furthermore, an important motivation in our study was to use small size feature vectors, in order (1) to show that the recognition accuracy of our approach is not compromised, and (2) to facilitate texture recognition applications where data storage capacity is a limitation.

6.1 Effectiveness of the Rotation Invariance Representation
To analyze the texture characterization capabilities of our method against the conventional Pyramid Decomposition [12] and the Gabor Wavelets [13], we used Gaussian-kernel Support Vector Machines (SVMs) as texture classification mechanisms. Note that the SVM parameters were optimized using cross-validation (we used the well-known LIBSVM package [15]). Figure 2 compares the recognition accuracy obtained by those three methods on the rotated-set A, whereas Figure 3 depicts the recognition accuracy obtained on the rotated-set B. From both figures, it can be seen that our image descriptor outperforms the other two approaches, regardless of the number of scales or orientations used for extracting the feature vectors. In the case of the rotated-set A, the highest classification accuracies achieved by our method were obtained by using 7 orientations, which corresponds to image rotations in steps of 25.71°. Those accuracies are respectively 100% and 97.31% for two and three decomposition levels (s = 2, 3). The corresponding classification accuracies obtained by the Gabor Wavelets are 90.36% and 93.90% (s = 2, 3; o = 7), whereas for the conventional Steerable Pyramid those accuracies are 89.67% and 90.36%. Furthermore, the accuracies using o = 6, 7, 8 orientations are very close to each other. Therefore, s = 2
and o = 6 are the most appropriate parameter combinations for our rotation-invariant image descriptor, and at the same time, low-dimensionality feature vectors are obtained. In the case of the rotated-set B, the highest classification accuracies achieved by our descriptor were again obtained by using 7 orientations. Classification rates of 95.86% and 95.73% correspond respectively to feature vectors with s = 2, 3 scales and o = 7 orientations. Further, it is found that both Gabor Wavelets and the conventional Steerable Pyramid Decomposition present lower classification rates, being respectively 91.05% and 95.35% for the first method, and 84.22% and 84.23% for the second one. As in the results obtained on rotated-set A, we can notice that the achieved classification accuracies are very close to each other when using o = 6, 7, or 8 orientations. From these results, we can reinforce that the most appropriate parameter settings for our descriptor are s = 2 scales and o = 6 orientations. Furthermore, from the bar graphs shown in Figures 2 and 3, the best performance obtained by the rotation-invariant Gabor method is as good as that of our descriptor. However, this rate is obtained at s = 3 scales, whereas the proposed descriptor achieves the same performance using only s = 2 scales. In this sense, an important advantage of our method is its high performance rate with small size feature vectors. It is worth mentioning that in our experiments the OPF classifier was at least 10 times faster than the SVM classifier (due to the lack of space, we do not present a detailed study of the execution time).
Fig. 2. Classification accuracy comparison using the SVM classifier obtained on rotated-set A using (S = 2, 3) scales with (K = 4, 5, 6, 7, 8) orientations for Gabor Wavelets, conventional Steerable Pyramid decomposition, and our method

Fig. 3. Classification accuracy comparison using the SVM classifier obtained on rotated-set B using (S = 2, 3) scales with (K = 4, 5, 6, 7, 8) orientations for Gabor Wavelets, conventional Steerable Pyramid decomposition, and our method
6.2 Effectiveness of the Multiclass Recognition Method
In the previous subsection, we showed that our proposed rotation-invariant image descriptor outperforms the other two methods. Therefore, our objective now is to show the recognition improvement of our classifier over the SVM approach. It can be seen
Fig. 4. Recognition accuracy comparison of the OPF and SVM classifiers on rotated-set A using (S = 2, 3) scales with (K = 4, 5, 6, 7, 8) orientations for our rotation-invariant image descriptor

Fig. 5. Recognition accuracy comparison of the OPF and SVM classifiers on rotated-set B using (S = 2, 3) scales with (K = 4, 5, 6, 7, 8) orientations for our rotation-invariant image descriptor
from Figure 4 that, for almost all feature extraction configurations, the recognition rates of the OPF classifier are higher than those of the SVM classifier. However, the latter method presents the same recognition rates as the OPF classifier when using s = 2 scales and o = 6, 7 orientations. In the case of the image rotated-set B, our classifier yields better recognition rates for all feature extraction configurations (see Figure 5). Given that the most appropriate parameter settings for our descriptor were found to be s = 2 scales and o = 6 orientations, it is worth mentioning that, with this configuration, the recognition accuracy obtained by the OPF classifier is 98.49%, compared with the corresponding accuracy of 95.48% obtained by the SVM classifier.

6.3 Results Summarization
A summary of our experimental results is provided in Tables 1 and 2. Table 1 compares, for each dataset, the mean recognition rates obtained by the three texture image descriptors using different scales (s = 2, 3) and different orientations (o = 4, 5, 6, 7, 8). In this set of experiments, we used Gaussian-kernel Support Vector Machines (SVMs) as texture classification mechanisms. From our results, it can be noticed that our texture image descriptor performs better regardless of the dataset used or the image decomposition parameters considered during feature extraction (number of scales and orientations).

Table 1. Mean recognition rates for the three different texture image descriptors using Gaussian-kernel Support Vector Machines as classifiers

Rotated Image Dataset   Scales (s) and orientations (o)   Proposed Image Descriptor   Conventional Steerable Pyramid Decomposition   Gabor Wavelets
A                       (s=2; o=4,5,6,7,8)                98.89%                      93.19%                                         93.19%
A                       (s=3; o=4,5,6,7,8)                98.61%                      92.36%                                         97.92%
B                       (s=2; o=4,5,6,7,8)                97.35%                      85.30%                                         91.29%
B                       (s=3; o=4,5,6,7,8)                96.74%                      85.76%                                         96.67%
Table 2. Mean recognition rates for the proposed rotation-invariant texture image descriptor using both OPF and SVM classifiers
Rotated image dataset | Scales (s) and orientations (o) | Proposed Image Descriptor using OPF | Proposed Image Descriptor using SVM
A | s=2; o=4,5,6,7,8 | 98.89% | 95.89%
A | s=3; o=4,5,6,7,8 | 98.61% | 97.99%
B | s=2; o=4,5,6,7,8 | 97.35% | 92.30%
B | s=3; o=4,5,6,7,8 | 96.74% | 96.70%
The second series of experiments is summarized in Table 2. As can be seen, the OPF classifier improves on the recognition accuracies obtained by the SVM classifier in all of our experiments. Taken together, these results show that our proposed recognition system performs better than the previously mentioned approaches, which represent state-of-the-art methods.
7 Conclusions

In this paper a novel texture classification system was proposed. Its main features include: (1) a new rotation-invariant image descriptor, and (2) a novel multi-class recognition method based on Optimum Path Forest. The proposed image descriptor exploits the discriminability properties of the Steerable Pyramid Decomposition for texture characterization. To obtain rotation invariance, the dominant orientation of the input textures is found, and the feature elements are aligned according to this value. In this way a more reliable feature extraction process can be performed, since corresponding feature elements of distinct feature vectors coincide with images at the same orientations. In addition, our system adopted a novel approach for pattern classification based on Optimum Path Forest, which finds prototypes with zero classification errors in the training sets and learns from the errors in evaluation sets. By combining the discriminating power of our image descriptor and classifier, our system uses small feature vectors to characterize texture images without compromising overall classification rates, making it well suited for real-time applications. Furthermore, we have demonstrated state-of-the-art results on two image datasets derived from the standard Brodatz database. For the first image dataset, our method obtained a mean classification rate of 98.89%, in comparison with a mean accuracy of 93.19% obtained by both the conventional Steerable Pyramid decomposition [12] and Gabor Wavelets [13]. For the second image dataset, our method achieved a mean classification rate of 97.35%, whereas the other two methods obtained classification rates of 85.30% and 91.29%, respectively. On the other hand, the OPF multi-class classifier outperformed the SVM on both datasets. It is a promising new graph-based tool for pattern recognition, which differs from traditional approaches in that it does not use the notion of feature space geometry, and therefore achieves better results on overlapped databases. Future work will include extending this method to scale-invariant texture recognition.
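The circular alignment step mentioned above can be sketched as follows. This is an illustrative Python fragment, not the authors' implementation; the (scale × orientation) feature layout and the function name are assumptions made for the example.

```python
import numpy as np

def align_to_dominant_orientation(features):
    """Circularly shift orientation-indexed features so that the dominant
    (highest-energy) orientation becomes the first bin.

    `features` is assumed to be an (S, K) array holding one energy value per
    scale s and orientation k, as produced by a steerable pyramid or Gabor
    decomposition (hypothetical layout used for this sketch).
    """
    features = np.asarray(features, dtype=float)
    # Dominant orientation: the column with the largest total energy over scales.
    dominant = int(np.argmax(features.sum(axis=0)))
    # Shift every scale's orientation bins so the dominant one is bin 0.
    return np.roll(features, -dominant, axis=1)

# Example: two scales, six orientations; the dominant orientation (index 2)
# is moved to the front, so a rotation of the texture by a multiple of the
# orientation step leaves the descriptor unchanged.
f = np.array([[0.1, 0.2, 0.9, 0.3, 0.1, 0.2],
              [0.2, 0.1, 0.8, 0.2, 0.1, 0.1]])
print(align_to_dominant_orientation(f))
```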
Acknowledgments. This work is partially supported by Microsoft Escience and Tablet PC Technology and Higher Education projects, CNPq (Proc. 134990/2005-6, 477039/2006-5 and 311309/2006-2), FAPESP, CAPES, and FAEPEX.
References

1. Haykin, S.: Neural networks: a comprehensive foundation. Prentice-Hall, Englewood Cliffs (1994)
2. Boser, B., Guyon, I., Vapnik, V.: A training algorithm for optimal margin classifiers. In: Proc. 5th Workshop on Computational Learning Theory, pp. 144–152 (1992)
3. Papa, J.P., Falcão, A.X., Miranda, P.A.V., Suzuki, C.T.N., Mascarenhas, N.D.A.: A new pattern classifier based on optimum path forest. Technical Report IC-07-13, Institute of Computing, State University of Campinas (2007), available at http://www.dcc.unicamp.br/ic-tr-ftp/2007/07-13.ps.gz
4. Freeman, W.T., Adelson, E.H.: The design and use of steerable filters. IEEE Trans. Pattern Anal. Mach. Intell. 13, 891–906 (1991)
5. Portilla, J., Simoncelli, E.P.: A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision 40, 49–70 (2000)
6. Bimbo, A.D.: Visual information retrieval. Morgan Kaufmann, San Francisco (1999)
7. Zhang, D., Wong, A., Indrawan, M., Lu, G.: Content based image retrieval using gabor texture features. In: PCM 2000: Proc. of 1st IEEE Pacific-Rim Conf. on Multimedia, pp. 392–395 (2000)
8. Falcão, A., Stolfi, J., Lotufo, R.: The image foresting transform: theory, algorithms, and applications. IEEE Trans. Pattern Anal. Mach. Intell. 26, 19–29 (2004)
9. Cormen, T., Leiserson, C., Rivest, R.: Introduction to Algorithms. MIT Press (1990)
10. Cousty, J., Bertrand, G., Najman, L., Couprie, M.: Watersheds, minimum spanning forests, and the drop of water principle. École Supérieure d'Ingénieurs (2007)
11. University of Southern California, Signal and Image Processing Institute: Rotated texture database (accessed March 1, 2007), http://sipi.usc.edu/services/database/Database.html
12. Simoncelli, E.P., Freeman, W.T.: The steerable pyramid: A flexible architecture for multiscale derivative computation. Proceedings of IEEE ICIP 13, 891–906 (1995)
13. Arivazhagan, S., Ganesan, L., Priyal, S.P.: Texture classification using gabor wavelets based rotation invariant features. Pattern Recognition Letters 27, 1976–1982 (2006)
14. Do, M.N., Vetterli, M.: Wavelet-based texture retrieval using generalized gaussian density and kullback-leibler distance. IEEE Transactions on Image Processing 11, 146–158 (2002)
15. Chang, C.C., Lin, C.J.: LIBSVM: a library for support vector machines. Software (2001), available at http://www.csie.ntu.edu.tw/~cjlin/libsvm
A New Set of Normalized Geometric Moments Based on Schlick's Approximation

Ramakrishnan Mukundan

Department of Computer Science and Software Engineering, University of Canterbury, Christchurch, New Zealand
Abstract. Schlick’s approximation of the term xp is used primarily to reduce the complexity of specular lighting calculations in graphics applications. Since moment functions have a kernel defined using a monomial xpyp, the same approximation could be effectively used in the computation of normalized geometric moments and invariants. This paper outlines a framework for computing moments of various orders of an image using a simplified kernel, and shows the advantages provided by the approximating function through a series of experimental results.
1 Introduction

Geometric moments and moment invariants have been extensively used as feature descriptors in several pattern recognition and vision related applications [1],[2],[3]. Even though many other types of complex and orthogonal moments have also been developed in order to get improved feature representation capability in a moment set, geometric moments continue to be popular as they have a simple structure that can be easily implemented. A number of fast algorithms for the computation of geometric moments can therefore be found in the literature [4],[5], including VLSI implementations [6]. Geometric moments are usually normalized or scaled to reduce the dynamic range of values. The most common applications of geometric moments are pattern recognition, template matching, image classification, and pose estimation. This paper proposes a completely new approach to the computation of normalized geometric moments and their invariants using Schlick's approximation. The approximation of the term $x^p$ by a simple fraction of polynomial functions, when x has a value less than 1, is found to be useful in lighting calculations involving specular reflections [7],[8]. Lighting involves a huge amount of computation in many graphics applications with a large polygon count. Schlick's approximation is used in such cases to get better frame rates by significantly reducing the number of multiplications in the specular component. Since geometric moments are also based on monomials of the form $x^p y^q$, we can use Schlick's approximation to obtain a computationally simpler representation of the kernel, and a correspondingly fast algorithm to compute the invariants. In this paper we give the general formulation of normalised geometric moments and their invariant functions, and demonstrate how Schlick's approximation
could be used effectively in this framework. Most importantly Schlick’s approximation finds application in pattern recognition and image classification where normalized invariants are used as feature descriptors. The organisation of this paper is as follows. The next section gives an overview of geometric moments and introduces normalized translation invariants. Schlick’s approximating functions and their extensions that are suitable for defining normalized moments are given in Section 3. Experimental results showing the invariant characteristics of normalized moments using Schlick’s approximation, as well as the reduction in computational complexity with the proposed method are provided in Section 4. Conclusions and possible extensions of the work presented in this paper are given in Section 5.
2 Geometric Moments and Invariants

The two-dimensional geometric moments of an image intensity distribution I(x, y) are defined as follows [3]:

$$m_{pq} = \int_{x=0}^{\infty} \int_{y=0}^{\infty} x^p\, y^q\, I(x, y)\, dx\, dy \qquad (1)$$
where p, q are positive integers, and $m_{pq}$ denotes a moment of order p + q. For an image of size M×N pixels (0 ≤ x < M, 0 ≤ y < N), the integrals are replaced by summations:

$$m_{pq} = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} x^p\, y^q\, I(x, y) \qquad (2)$$
The coordinates of the image centroid can be computed from the first-order moments as follows:

$$\bar{x} = \frac{m_{10}}{m_{00}}, \qquad \bar{y} = \frac{m_{01}}{m_{00}} \qquad (3)$$
The central moments defined below are used for translation-invariant recognition of images.
$$\mu_{pq} = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} (x - \bar{x})^p\, (y - \bar{y})^q\, I(x, y) \qquad (4)$$
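Eqs. (2)–(4) translate directly into a few lines of code. The following Python sketch (an illustration, not part of the original paper) computes geometric, centroid and central moments by direct summation, assuming the image is stored as a NumPy array indexed as I[x, y].

```python
import numpy as np

def geometric_moment(I, p, q):
    """Discrete geometric moment m_pq of Eq. (2) by direct summation."""
    M, N = I.shape                       # image of size M x N, I[x, y]
    x = np.arange(M).reshape(-1, 1)
    y = np.arange(N).reshape(1, -1)
    return np.sum((x ** p) * (y ** q) * I)

def centroid(I):
    """Image centroid from the first-order moments, Eq. (3)."""
    m00 = geometric_moment(I, 0, 0)
    return geometric_moment(I, 1, 0) / m00, geometric_moment(I, 0, 1) / m00

def central_moment(I, p, q):
    """Central moment mu_pq of Eq. (4)."""
    M, N = I.shape
    xbar, ybar = centroid(I)
    x = np.arange(M).reshape(-1, 1) - xbar
    y = np.arange(N).reshape(1, -1) - ybar
    return np.sum((x ** p) * (y ** q) * I)
```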
From the above equation, it is clear that central moments evaluated at high orders tend to become numerically unstable due to the monomial structure of the kernel functions. Image coordinate values are usually normalized to a value less than 1 by dividing by the image size, in order to eliminate this problem. With coordinate normalization, moment values tend to zero instead of infinity as the moment order increases. Normalization also helps in reducing the sensitivity of moment functions to image noise. We use the following definition for normalized translation invariants:

$$\hat{\mu}_{pq} = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} (\hat{x})^p\, (\hat{y})^q\, I(x, y) \qquad (5)$$
where

$$\hat{x} = \frac{1}{M}(x - \bar{x}), \qquad \hat{y} = \frac{1}{N}(y - \bar{y}) \qquad (6)$$
Both the above terms can take values in the range [−1, 1]. The kernel of Eq.(5) involves product terms that can add to the complexity of a moment computation process, especially when the values of p and q are large. But this function can be easily approximated by a simple fraction containing two polynomials and only linear terms, at the same time preserving the desired invariant properties. This approximation can drastically reduce the amount of computations required for high order invariants.
3 Schlick’s Approximation The Schlick’s approximation of an exponential function xp , where x ∈ [0,1), is given by [7] S p ( x) =
x , x + p − xp
= 1,
if p = 0.
p ∈ [1,∞), (7)
The above formula is commonly used in computer graphics to approximate the term $(\cos\phi)^f$ that appears in the traditional Phong illumination model for specular reflections [8] with a simple function that does not require exponentiation. Interestingly, $S_p(x)$ represents the ratio of a linear interpolation between 0 and 1 (given by x in the numerator) to a linear interpolation between p and 1 (given by $p(1-x) + x$ in the denominator), with x as the interpolation parameter. Thus $S_p(x)$ has a value ranging from 0 to 1 for all p. Fig. 1 gives a plot of the variation of $x^p$ with respect to $x \in [0, 1)$ for different values of p. Fig. 2 shows that the function in (7) closely approximates the values of $x^p$. It is also clear from the figure that Schlick's approximation provides a slower convergence to zero for x < 1 as the value of p is increased.
Fig. 1. A graph showing the values of the monomial $x^p$ for various values of p
Fig. 2. A graph showing the approximations of $x^p$ using Schlick's functions, for different values of p
We note here that the normalized coordinate variables used in moment computation, as given in Eq. (6), have an extended range [−1, 1]. We correspondingly extend Schlick's approximation function in Eq. (7) to $x \in (-1, 1)$ as follows:

$$S_p(x) = \frac{x}{x + p - xp}, \quad x \ge 0,\ p \in [1, \infty); \qquad S_p(x) = \frac{(-1)^p\, x}{x - p - xp}, \quad x < 0,\ p \in [1, \infty); \qquad S_p(x) = 1, \quad \text{if } p = 0. \qquad (8)$$
The translation invariants given in Eq. (5) can now be rewritten, based on the above definition, as follows:

$$\hat{\mu}_{pq} = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} S_p(\hat{x})\, S_q(\hat{y})\, I(x, y) \qquad (9)$$
In the following, we analyse the effect of approximation on accuracy and invariant characteristics of the above functions.
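As an illustration of Eqs. (8) and (9), the following Python sketch evaluates the extended Schlick kernel and the corresponding approximated translation invariants; the function names and the NumPy layout are assumptions of this example rather than the author's code.

```python
import numpy as np

def schlick(x, p):
    """Extended Schlick approximation S_p(x) of Eq. (8) for x in (-1, 1)."""
    x = np.asarray(x, dtype=float)
    if p == 0:
        return np.ones_like(x)
    pos = x / (x + p - x * p)                  # branch for x >= 0
    neg = ((-1) ** p) * x / (x - p - x * p)    # branch for x < 0
    return np.where(x >= 0, pos, neg)

def schlick_moment(I, p, q):
    """Normalized translation invariant of Eq. (9): the Schlick kernel is
    used in place of the monomials of Eq. (5)."""
    M, N = I.shape
    m00 = I.sum()
    xbar = (np.arange(M).reshape(-1, 1) * I).sum() / m00
    ybar = (np.arange(N).reshape(1, -1) * I).sum() / m00
    xhat = (np.arange(M) - xbar) / M           # Eq. (6), values in [-1, 1]
    yhat = (np.arange(N) - ybar) / N
    return np.sum(schlick(xhat, p)[:, None] * schlick(yhat, q)[None, :] * I)
```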
4 Experimental Results

In this section we present the results of a comparative analysis of the two methods for moment computation based on Eqs. (5) and (9). Fig. 3 shows a binary image of size 200×200 pixels used in the experiments. For this image, the centroid (Eq. (3)) has coordinates (106.4, 105), with the origin of the image coordinate system located at the bottom-left corner.
Fig. 3. Binary image of an aircraft model used for experimental evaluation of the proposed method
The values of $\hat{\mu}_{p0}$ computed using Eqs. (5) and (9) for the above image are shown in Fig. 4. The values obtained using Schlick's approximation match the original values closely when p is odd. Higher-order moments tend to become zero because of coordinate normalization, but Schlick's approximation yields an offset when p is even, and this offset can be useful in a feature vector used for pattern matching.
Fig. 4. Comparison of central moments with and without Schlick's approximation
This offset also appears for the functions $\hat{\mu}_{pp}$, but it converges to zero as the value of p is increased. The primary advantage of a moment function based on Schlick's approximation is the large reduction in the number of multiplications required for the monomial kernel. The computational time required for evaluating moments of order p was analysed with the value of p ranging from 0 to 100 (Fig. 5). The computations were performed on a 2.8 GHz Intel Pentium 4 CPU with 1 GB of RAM. Figure 5 compares the CPU times with the values of $\hat{x}^p$, $\hat{y}^q$ in Eq. (5) computed using (i) repeated multiplication, (ii) the built-in "pow()" function and (iii) Schlick's approximation (Eq. (8)). In all three cases, moments were computed using the straightforward direct-sum method. The computational time could be further reduced by using the separability property of the kernel functions; this property is not affected by Schlick's approximation. Moment functions that are invariant with respect to rotational transformations were derived by Hu using the theory of algebraic invariants. If we use the translation invariants given in Eqs. (5) and (9) in Hu's algebraic expressions, then we get moments that are both translation and rotation invariant.
Fig. 5. Comparison of CPU times with the moment kernel computed using the exponential function and Schlick's approximation
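A rough timing harness in the spirit of Fig. 5 might look as follows. This is only a sketch; since NumPy vectorizes both kernels, the relative timings will not necessarily match the paper's measurements, which were obtained with the direct-sum method, and the sample values are purely illustrative.

```python
import time
import numpy as np

def schlick_kernel(x, p):
    """Extended Schlick kernel of Eq. (8) for x in (-1, 1), p >= 1."""
    return np.where(x >= 0, x / (x + p - x * p),
                    ((-1) ** p) * x / (x - p - x * p))

def power_kernel(x, p):
    """Exact monomial kernel x**p (integer p)."""
    return x ** p

def time_kernel(fn, x, p, repeats=200):
    t0 = time.perf_counter()
    for _ in range(repeats):
        fn(x, p)
    return time.perf_counter() - t0

# 40,000 normalized coordinate samples, roughly the pixel count of the
# 200x200 test image used above.
x = np.linspace(-0.99, 0.99, 200 * 200)
for p in (10, 50, 100):
    print(p, time_kernel(power_kernel, x, p), time_kernel(schlick_kernel, x, p))
```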
obtained using Schlick’s approximation can be utilized to form invariants, but we have to take note of the errors introduced by the approximation. There is clearly a trade-off between the approximation errors and the computational advantage that Schlick’s approximation provides in the calculation of invariants. As an example, we consider a fourth-order invariant given by
$$K_4 = \hat{\mu}_{40} + 2\hat{\mu}_{22} + \hat{\mu}_{04} \qquad (10)$$
The plots in Fig. 6 can be used to verify the invariant characteristics of the above function, computed with and without Schlick’s approximation. It may also be noted that the error introduced by Schlick’s approximation reduces with order (Fig. 4), and higher order invariants tend to become more accurate when compared with invariants formed with monomial functions. The translation invariant characteristics are not found to be affected by Schlick’s approximation. This is because the parameter in Schlick’s function is computed after translating the image to the centroid, as given in Eq. (6).
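Given a moment routine such as the sketch above, the invariant of Eq. (10) is a one-liner (illustrative only; `moment_fn` stands for either the exact normalized moments of Eq. (5) or their Schlick-based counterparts, e.g. `schlick_moment` from the earlier sketch).

```python
def k4(I, moment_fn):
    """Fourth-order invariant of Eq. (10) evaluated with a supplied
    normalized-moment routine moment_fn(I, p, q)."""
    return moment_fn(I, 4, 0) + 2 * moment_fn(I, 2, 2) + moment_fn(I, 0, 4)
```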
Fig. 6. Comparison of moment invariants computed using original moments and Schlick’s approximation
5 Conclusion

This paper has shown that Schlick's approximation of the function $x^p$ can be used to compute normalized geometric moments and invariants for applications in pattern recognition. The approximation replaces the power functions in the kernel of the moment definition by simple rational polynomials, resulting in a significant reduction in the amount of computation. The paper has used an extended version of Schlick's approximation to suit the range [−1, +1] of the normalized coordinates. Experimental
results have shown that while translation-invariant characteristics can be accurately preserved in central moments computed using Schlick's approximation, rotation invariants exhibit a larger introduced error in both range and magnitude. This error, however, is found to be within an acceptable range for image classification and recognition. The computational advantage provided by Schlick's approximation far outweighs the problems associated with an increase in the variance of moment values. The paper has thus introduced a mathematical tool used in the field of computer graphics to the image analysis area. Schlick's approximation could be used in conjunction with other methods for fast computation of moments. Further work in this area could be directed towards an analysis of approximation errors in the presence of image noise, and the derivation of generalized affine invariants using Schlick's approximation.
References

1. Ghorbel, F., Derrode, S., Dhahbi, S.: Reconstructing with Geometric Moments. In: Proc. Int. Conf. on Machine Intelligence: ACIDCA-ICMI 2005 (2005)
2. Li, B.: High-Order Moment Computation of Gray Level Images. IEEE Trans. on Image Processing 4, 502–505 (1995)
3. Mukundan, R., Ramakrishnan, K.R.: Moment Functions in Image Analysis – Theory and Applications. World Scientific, Singapore (1998)
4. Yang, L.R., Albregtsen, F.: Fast and Exact Computation of Cartesian Geometric Moments Using Discrete Green's Theorem. Pattern Recognition 29, 1061–1073 (1996)
5. Martinez, J., Thomas, F.: Efficient Computation of Local Geometric Moments. IEEE Trans. on Image Processing 11, 1102–1111 (2002)
6. Liu, W., Chen, S.S., Cavin, R.: A Bit-Serial VLSI Architecture for Generating Moments in Real Time. IEEE Trans. on Systems, Man and Cybernetics 23, 539–546 (1993)
7. Schlick, C.: A Fast Alternative to Phong's Specular Model. In: Heckbert, P.S. (ed.) Graphics Gems IV, pp. 385–387. Elsevier Science, Amsterdam (1994)
8. Shirley, P., Smits, B., Hu, H., Lafortune, E.: A Practitioner's Assessment of Light Reflection Models. In: Proc. 5th Pacific Conf. on Computer Graphics and Applications, pp. 40–49 (1997)
Shape Evolution Driven by a Perceptually Motivated Measure

Sergej Lewin, Xiaoyi Jiang, and Achim Clausing

Department of Mathematics and Computer Science, University of Münster, Einsteinstrasse 62, D-48149 Münster, Germany
{slewin,xjiang,cl}@math.uni-muenster.de
Abstract. In this paper we introduce a novel concept of shape evolution based on a semi-global shape width measure. It is perceptually motivated and helps us distinguish between different shape parts of varying importance. This measure can be integrated into an adaptive σ function in a flexible manner and used to achieve a shape evolution which can be controlled by the relative importance of shape parts (without detecting these parts explicitly). For instance, we can start to smooth out fine details while not blurring the large shape parts, or vice versa. This shape-preserving property cannot be achieved by the popular Gaussian smoothing (evolution based on geometric heat flow) and related variants, whose behavior is controlled by the local curvature alone. Experimental results demonstrate the behavior and power of this new shape evolution scheme.
1 Introduction
Research in psychology has shown that shapes are perceived by their parts [2,11]. Each shape part has a different importance for the perception of the complete shape. This can be shown with the following simple example. When considering a human's shape, we can say that the torso has greater importance for the perception than the arms. This chain can be continued. The arms are more important than the hands and so on. This perception chain is called the perceptional hierarchy, which can be represented by a perceptional tree, where the main part of a shape is the root of the tree. Certainly, the relationship between object parts is not always so clear as in this human example. But for a lot of shapes this observation applies and leads to the idea of multi-scale shape representations. Such multi-scale shape representations can be obtained by shape evolution (continuous deformation to smoother ones). Generally, it is required that this deformation process should be causal, in the sense that it maintains a hierarchical structure of geometric features without introducing any new ones. In addition it is desirable that the deformation process helps reveal the large global features. Especially, we want long and thin parts to be removed faster than large and rounder parts while the latter ones' shape should not be blurred too much [4]. In the next section we show that typical shape evolution methods cannot fully satisfy all these requirements. The reason for their unsatisfactory behavior is that
they are driven by local shape properties only and thus cannot guarantee any behavior of inherently global nature. In this paper we incorporate a perceptually motivated measure (to be introduced in Section 3) into the shape evolution process, which then depends both on local and semi-global shape properties. We call a measure semi-global if it is not fully global, i.e. a function of the whole shape. Instead, it is determined by a relevant subpart of the shape. A continuous and a discrete version of shape evolution schemes are introduced in Section 4 and 5, respectively. Experimental results demonstrate the ability of our approach to smooth out fine details while maintaining the geometry of larger structures. Further discussion and future work will be given in Section 6 to conclude the paper.
2 Shape Evolution Methods
The most popular approach to shape smoothing is Gaussian smoothing by convolution with the Gaussian kernel function. Let C(u) = (x(u), y(u)) with u ∈ [0, 1] be the contour of the shape parametrized by the normalized arc length parameter u. The convolution process can then be defined as the curve sequence:

$$C_\sigma(u) = (x_\sigma(u), y_\sigma(u)), \quad \sigma \ge 0$$
$$x_\sigma(u) = x(u) \otimes g(u, \sigma), \quad x_0(u) = x(u)$$
$$y_\sigma(u) = y(u) \otimes g(u, \sigma), \quad y_0(u) = y(u)$$

where $g(u, \sigma) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-u^2/2\sigma^2}$ is the Gaussian kernel function and the value of σ determines the degree of smoothing. This definition has the drawback that the parameter u of the curve $C_\sigma(u)$ with σ > 0 is in general not the normalized arc length parameter. Hence, we will use the arc length evolution:
$$C_\sigma(\Psi) = (x_\sigma(\Psi), y_\sigma(\Psi)), \quad \sigma \ge 0$$
$$x_\sigma(\Psi) = x(\Psi) \otimes g(\Psi, \sigma), \qquad y_\sigma(\Psi) = y(\Psi) \otimes g(\Psi, \sigma)$$

where $\Psi(w, \sigma_0)$ with any $\sigma_0 \ge 0$ is a continuous and monotonic function of w. In particular, Ψ remains the normalized arc length parameter as the curve evolves [9]. Generally, it is difficult to compute the function Ψ analytically. In the discrete case, however, this arc length evolution can be computed by a resampling technique. First, Gaussian smoothing with a small σ value (= 1.0) is performed. Then the resulting curve is resampled with a constant sampling rate. These two steps are repeated to achieve smoothing with an arbitrary σ value [9]; a sketch of this iteration is given below. Figure 1 (upper row) shows an example of Gaussian smoothing. Since the value of σ is equal for all curve points, the shape is smoothed uniformly. The shape parts are treated independently of their importance for perception; fine parts are smoothed out while the main shape changes accordingly. The behavior of Gaussian smoothing can also be explained by means of the corresponding geometric heat flow [10]:
$$\frac{\partial C}{\partial t} = \kappa N$$
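The smooth-and-resample iteration mentioned above can be sketched as follows; this is an illustrative Python fragment (using SciPy's 1-D Gaussian filter with periodic boundary handling), not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def resample_closed(points, n):
    """Resample a closed contour to n points, equally spaced in arc length."""
    pts = np.vstack([points, points[:1]])                 # close the curve
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])           # cumulative arc length
    t = np.linspace(0.0, s[-1], n, endpoint=False)
    x = np.interp(t, s, pts[:, 0])
    y = np.interp(t, s, pts[:, 1])
    return np.column_stack([x, y])

def arc_length_evolution(points, iterations, sigma=1.0):
    """Iterate small Gaussian smoothing and constant-rate resampling."""
    c = np.asarray(points, dtype=float)
    n = len(c)
    for _ in range(iterations):
        c = np.column_stack([
            gaussian_filter1d(c[:, 0], sigma, mode="wrap"),   # periodic boundary
            gaussian_filter1d(c[:, 1], sigma, mode="wrap"),
        ])
        c = resample_closed(c, n)
    return c
```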
Fig. 1. Shape evolution with Gaussian smoothing (upper row) and with our approach (bottom row). (a),(e): initial shape; (b)–(d), (f)–(h): shapes after 10, 30, and 50 iterations, respectively.
At each point the curve moves along the normal direction N with a velocity equal to the curvature. The curve points with positive curvature move inwards and the points with negative curvature outwards. It becomes clear that the only driving force is the purely local property of curvature, which as such cannot deliver truly global behavior. Gaussian smoothing can be extended by generalizing the heat flow to
$$\frac{\partial C}{\partial t} = G(\kappa) N$$
where G(κ) is some function of curvature. Several variants are known [4,10], but they all share the same property of having only the local driving force curvature. Besides Gaussian smoothing and other continuous shape evolution methods, there are also discrete ones. The approach in [8], for instance, represents a shape by a polygon. The polygon vertices are successively removed in a certain order, which is determined by a vertex cost function. In each step of shape evolution the vertex with the smallest cost is deleted and the costs of the remaining vertices are then updated. Each evolution step thus simplifies the shape, and the evolution process terminates when the shape polygon becomes convex. The cost function for vertex v is defined as follows:
$$c(v) = \frac{\beta(s_v^1, s_v^2)\, l(s_v^1)\, l(s_v^2)}{l(s_v^1) + l(s_v^2)}$$
where $s_v^1$ and $s_v^2$ are the polygon sides incident to v, $\beta(s_v^1, s_v^2)$ the turn angle of these sides, and $l(s_v^1)$, $l(s_v^2)$ the side lengths.
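The following Python sketch illustrates this cost function and a greedy removal loop. It is only an approximation of the scheme in [8]: for simplicity it stops after a fixed number of vertices rather than testing for convexity, and the costs are recomputed from scratch in each step.

```python
import numpy as np

def vertex_cost(prev, v, nxt):
    """Relevance cost of vertex v with incident sides (prev, v) and (v, nxt)."""
    a, b = v - prev, nxt - v
    la, lb = np.linalg.norm(a), np.linalg.norm(b)
    cross = a[0] * b[1] - a[1] * b[0]
    beta = abs(np.arctan2(cross, np.dot(a, b)))     # turn angle of the two sides
    return beta * la * lb / (la + lb)

def discrete_evolution(polygon, keep):
    """Greedily remove the cheapest vertex until `keep` vertices remain."""
    pts = [np.asarray(p, dtype=float) for p in polygon]
    while len(pts) > keep:
        n = len(pts)
        costs = [vertex_cost(pts[i - 1], pts[i], pts[(i + 1) % n]) for i in range(n)]
        del pts[int(np.argmin(costs))]
    return np.array(pts)
```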
3 A Semi-global Measure: Shape Width
We intend to develop a shape evolution approach which produces the results shown in Figure 1 (bottom row), where the fine details are successively smoothed out while keeping the main shape (almost) unchanged. Here the fine details are associated with long and thin parts, which should be removed faster than large and rounder parts [4]. This consideration leads to a natural extension of Gaussian smoothing by a context-sensitive function $\sigma(sw_p)$, where $sw_p$ is a measure of shape width at point p. Each point will thus receive a different degree of smoothing driven by the shape width. When the shape contour is parametrized by the normalized arc length, the shape width can be described by a function sw : [0, 1] → ℝ. In [6,12] each contour point is paired with another contour point on the opposite side by an optimization procedure. Given such a pair of points, half of their distance may be considered as the shape width. This measure is clearly a global one, giving some hint of the width of the local shape subpart. However, it is not continuous and there may be substantial jumps when moving from one subpart to another. The authors of [7] proposed another definition of local shape width. At curve point p it is defined as the distance between p and the next point on the opposite side of the shape curve along the normal of p. Again, this shape width measure is not a continuous one. In [7] this problem is alleviated, but not totally solved, by smoothing the sequence of shape width values. Although the measures directly or indirectly emerging from these previous works satisfy our requirement to some degree, we decided not to apply them in our work because of their discontinuity. Instead, we seek a continuous shape width measure.

3.1 Definition of Shape Width
To be context-sensitive, the shape width value at any contour point must be dependent on its contour environment (both the neighbors and some opposite subpart). Obviously, the local curvature and its functions as used in [4,10] insufficiently cover the environmental information. We resort to the original concept of medial axis according to Blum [3]. A circle is said to be maximal in a planar object if it is contained in the object but not a proper subset of any other circle contained in the object. The medial axis is the locus of the centers of all maximal circles. The radius function of the medial axis is then a continuous function whose value at each point is equal to the radius of the maximal circle at that point. We take a view from the contour and define the shape width by the radius of the maximal inscribed circle at each contour point. This consideration results in the following definition of shape width.

Definition 3.1. Let C(u) = (x(u), y(u)) be a closed smooth curve with normalized arc length parameter u ∈ [0, 1] that represents the shape contour. The shape width $sw_p$ at point p is defined as the radius of the maximal inscribed circle at p.
Accordingly, the shape width function sw : [0, 1] → ℝ, parametrized by the normalized arc length parameter u, is defined as
$$sw(u) = sw_p \quad \text{with } p = C(u),\ u \in [0, 1].$$
Since the radius function along the medial axis is continuous and the shape curve is assumed to be smooth, the shape width function is continuous as well.

3.2 Properties of Shape Width
The shape width defined above is a semi-global feature of contour points. It depends on the part of the shape around the point itself and the opposite side, but in general not on the whole shape. Indeed, two quantities are associated with each contour point: the curvature and the shape width. If the curvature at a contour point is large, then the curvature radius is small and the maximal inscribed circle will not touch the opposite side. In this case the shape width is totally bounded by the local curvature radius. On the other hand, the shape width of a point with small curvature is determined by a maximal inscribed circle touching the opposite side. Here the local curvature does not play any role and it is the part of the shape around the point itself and the opposite side which specifies the shape width. The most important properties of shape features are translation, rotation and scale invariance. Obviously, the shape width is invariant to translation and rotation. However, it is not invariant to shape scaling. Scale invariance can be achieved by normalizing the shape width function; the normalization is performed by dividing the shape width values by their maximum. The shape width has an intuitive interpretation. Thin and elongated shape parts are associated with small shape width values, while larger shape parts tend to have higher values. The scaling normalization above does not change the relative importance of shape parts in any way. This interpretation allows us to include the shape width into the shape evolution process to achieve more global control of the evolution. Another interesting property of the shape width is its invariance to a certain kind of shape articulation. The corresponding shape width values do not change under articulation, except at some connection positions between articulated parts. Hence, the shape width may be an appropriate feature for matching articulated shapes.

3.3 Shape Width Computation in Discrete Case
In our current implementation a simple method based on a Voronoi diagram is used. The shape is represented by a polygon with a set of vertices S. The Voronoi diagram decomposes the 2D plane in regions around each polygon vertex p ∈ S such that each point within the region around p is closer to p than to any other vertex in S (see Fig. 2). The medial axis can be approximated by the Voronoi
Fig. 2. Discrete computation of shape width. The figure shows a part of the Voronoi diagram. The shaded part corresponds to the shape interior. $v_1, \ldots, v_5$ are vertices of the Voronoi region around sample point p. The shape width at p is $sw_p = \max_{1 \le i \le 5} \mathrm{dist}(v_i, p) = \mathrm{dist}(v_1, p)$.
vertices and edges within a shape polygon. The approximation gets better as the sample density increases [5]. When the sample density is high enough, there is for each polygon vertex p ∈ S at least one Voronoi vertex within the polygon which lies on the boundary of the Voronoi region around p. Since the Voronoi diagram is dual to the Delaunay triangulation, and in particular the Voronoi vertices are the centers of circumcircles of Delaunay triangles, the Voronoi vertices within the shape polygon approximately correspond to the centers of inscribed circles. So the shape width of a polygon vertex p can be computed as follows: let $V_p$ be the Voronoi region of the polygon vertex p and $v_1, \ldots, v_n$ the Voronoi vertices of $V_p$ which lie within the shape polygon. The shape width at p is then
$$sw_p = \max_{1 \le i \le n} \mathrm{dist}(v_i, p)$$
where dist(v, p) is the Euclidean distance between Voronoi vertex v and the polygon vertex p, see Fig. 2. Fig. 3 shows the discrete shape width function for a horse shape. Due to the simple computation procedure the shape width curve is not that smooth. A smoothing operation may be used here, but our experience indicates that there is no real need. All our experimental results reported in the later sections are computed without this option. Any good-quality skeleton computation algorithm, e.g. [1], will help further. Then, an optimization by means of dynamic programming will produce smoother shape width values.
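For reference, the Voronoi-based computation described above can be sketched with SciPy as follows; the point-in-polygon test and the final scale normalization are simplifications assumed for this example.

```python
import numpy as np
from scipy.spatial import Voronoi
from matplotlib.path import Path

def shape_width(polygon):
    """Shape width at every polygon vertex via the Voronoi diagram."""
    pts = np.asarray(polygon, dtype=float)
    vor = Voronoi(pts)
    inside = Path(pts)                 # point-in-polygon test for Voronoi vertices
    sw = np.zeros(len(pts))
    for i, region_idx in enumerate(vor.point_region):
        # Voronoi vertices of this point's region that lie inside the shape.
        region = [v for v in vor.regions[region_idx]
                  if v != -1 and inside.contains_point(vor.vertices[v])]
        if region:
            sw[i] = max(np.linalg.norm(vor.vertices[v] - pts[i]) for v in region)
    # Scale normalization by the maximum value, as described in Sect. 3.2.
    return sw / sw.max() if sw.max() > 0 else sw
```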
Fig. 3. Discrete shape width: (a) shape width in discrete case, (b) shape width function
4 Continuous Shape Evolution Driven by Shape Width
The shape width has the advantageous property of being both local and global. To give Gaussian smoothing and other variants dependent on local curvature only [4,10] a flair of global shape sensitivity, we integrate the shape width into the Gaussian kernel function, which then becomes a function of both the evolution parameter t and the shape width. The shape width $sw_t(u)$ changes during the evolution, where $sw_0(u) = sw(u)$ is the initial shape width function. So we can define the shape width based evolution as follows:
$$x_t(\Psi) = x_{t-1}(\Psi) \otimes g(\Psi, \sigma(sw_t(\Psi)))$$
$$y_t(\Psi) = y_{t-1}(\Psi) \otimes g(\Psi, \sigma(sw_t(\Psi)))$$
with $(x_0(u), y_0(u)) = (x(u), y(u))$ being the initial shape. That is, each contour point $(x_t(\Psi), y_t(\Psi))$ is assigned a specific value $\sigma(sw_t(\Psi))$ to achieve an adaptive smoothing behavior. The shape width based evolution can be computed in a similar way as Gaussian smoothing. First, the curve is sampled with a constant sampling rate. For each curve point a Gaussian filter based on an adaptive σ value, which depends on the shape width at that point, is determined. Then the curve is convolved with the adaptive Gaussian filter and the resulting curve is resampled. The shape width function is computed again and the previous steps are iterated. The evolution terminates when the curve becomes convex. In principle the adaptive σ function can be an arbitrary function σ(sw(u)); it is up to the user to specify σ() in accordance with the particular need in a practical situation. One possibility is to smooth out the fine details quickly while maintaining the main shape to a large extent. This can be achieved by a class of monotonically descending functions σ(sw(u)), for instance
$$\sigma(u) = a - b \cdot sw(u), \quad a, b > 0,\ a \ge b$$
Fig. 4. Shape evolution with Gaussian smoothing (upper row) and with our approach (lower row). (a),(g): initial shape, (b)-(f),(h-l) shapes after 10, 20, 50, 70, and 100 iterations, respectively.
The condition a ≥ b is required to guarantee non-negative σ values. The shape evolution in Fig. 1 is computed for a = b = 1. Compared with Gaussian smoothing, we see that small details tend to disappear in the evolution process. However, Gaussian smoothing affects the other parts as well; the points move inwards or outwards depending on the sign of the local curvature. In our approach, on the other hand, the points with large shape width values receive small adaptive σ values, which strongly reduce the smoothing effect locally and thus prevent shape blurring in these parts. Note that if we continue our shape evolution in Fig. 1 (bottom row), a convex shape will result. The speed of convergence can be controlled by the adaptive σ function above; it can be increased as well as decreased by choosing appropriate values for the parameters a and b. Another sequence of shape evolution for a star-shaped contour is shown in Fig. 4. Gaussian smoothing tends to fuse the two leaf apexes at the bottom, so that the evolved leaf does not show good symmetry. In our approach this symmetry property is well maintained. There are other possibilities for integrating the shape width into the evolution process. For instance, we may want to smooth out the large details first while maintaining the fine details (the opposite of what we have done so far). This effect can easily be achieved by a class of monotonically ascending functions σ(sw(u)), for instance
$$\sigma(u) = a \cdot sw(u), \quad a > 0$$
Fig. 5 shows such an example of shape evolution for a = 1. Given the semi-global nature of the shape width, we obtain a flexible tool for controlling the shape evolution behavior. All the experimental results shown above clearly demonstrate the power of this approach.
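One possible (and deliberately simplified) realization of the adaptive smoothing step is sketched below: each contour point is averaged with Gaussian weights whose width is its own σ = a − b·sw value. The per-point filter design is an assumption of this sketch; the resampling, shape-width recomputation and convexity test of the full evolution loop are omitted.

```python
import numpy as np

def adaptive_evolution_step(points, sw, a=1.0, b=1.0):
    """One evolution step with a per-point sigma driven by the shape width."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    sigmas = np.clip(a - b * np.asarray(sw), 1e-3, None)   # non-negative sigmas
    out = np.empty_like(pts)
    idx = np.arange(n)
    for i in range(n):
        # Circular index distance along the closed contour.
        d = np.minimum(np.abs(idx - i), n - np.abs(idx - i))
        # Gaussian weights with the point's own sigma.
        w = np.exp(-0.5 * (d / sigmas[i]) ** 2)
        w /= w.sum()
        out[i] = (w[:, None] * pts).sum(axis=0)
    return out
```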
Fig. 5. Shape evolution with monotonically ascending functions σ(sw(u))
Fig. 6. Discrete shape evolution with shape width
5 Discrete Shape Evolution Driven by Shape Width
The discrete shape evolution described in [8] successively removes the polygon vertices in an order determined by a vertex cost function. We can develop a similar shape evolution scheme by means of a cost function dependent on the shape width. For instance, the vertex with the smallest shape width is deleted in each evolution step. This way the shape successively becomes simpler and the evolution process terminates when it becomes convex. Fig. 6 shows the evolution procedure of a horse by gradually deleting points. Obviously, narrow shape parts are deleted first. At the same time the main part of the shape remains unchanged until the narrow parts are removed from the shape.
6 Conclusions and Future Work
In this paper we have introduced the novel concept of shape evolution based on a semi-global shape width measure. It is perceptually motivated and helps us distinguish between parts of varying importance (without detecting these parts explicitly). We have seen that this measure can be integrated into an adaptive σ function in a flexible manner. For instance, we can start to smooth out fine details while not blurring the large shape parts, or vice versa. This shape-preserving property cannot be achieved by the popular Gaussian smoothing and related variants, whose behavior is controlled by the local curvature alone. The behavior of our approach can be interpreted by means of a partial differential equation
$$\frac{\partial C}{\partial t} = H(\kappa, sw)\, N$$
where the movement control term H(κ, sw) is an extension of many methods known from the literature with G(κ).
Our work can be extended in various directions. In our experiments the shape width has been computed by a simple method based on the Voronoi diagram. Any good-quality skeleton computation algorithm, e.g. [1], will help further. Then, an optimization by means of dynamic programming will produce smoother shape width values. Currently, we have formulated the shape evolution in the space of 2D shapes directly. Alternatively, a reformulation in terms of a partial differential equation and a corresponding implementation in the context of level set methods is of interest. Shape decomposition and retrieval of articulated shapes are two further topics worthy of investigation.
References

1. Bai, X., Latecki, L.J., Liu, W.-Y.: Skeleton pruning by contour partitioning with discrete curve evolution. IEEE Transactions on Pattern Analysis and Machine Intelligence 29(3), 449–462 (2007)
2. Biederman, I.: Recognition-by-components: A theory of human image understanding. Psychological Review 94(2), 115–147 (1987)
3. Blum, H.: A transformation for extracting new descriptors of form. In: Wathen-Dunn, W. (ed.) Models for the Perception of Speech and Visual Form, pp. 362–380. MIT Press, Cambridge (1967)
4. Cao, F.: Geometric Curve Evolution and Image Processing. Springer, Heidelberg (2003)
5. Fabbri, R., Estrozi, L.F., Costa, L.D.F.: On Voronoi diagrams and medial axes. Journal of Mathematical Imaging and Vision 17, 27–40 (2002)
6. Geiger, D., Liu, T.-L., Kohn, R.V.: Representation and self-similarity of shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence 25(1), 86–99 (2003)
7. Jiang, X., Lewin, S.: An approach to perceptual shape matching. In: Bres, S., Laurini, R. (eds.) VISUAL 2005. LNCS, vol. 3736, pp. 109–120. Springer, Heidelberg (2006)
8. Latecki, L.J., Lakämper, R.: Convexity rule for shape decomposition based on discrete contour evolution. Computer Vision and Image Understanding 73, 441–454 (1999)
9. Mokhtarian, F., Bober, M.: Curvature Scale Space Representation: Theory, Applications & MPEG-7 Standardisation. Kluwer Academic Publishers, Dordrecht (2003)
10. Sapiro, G.: Geometric Partial Differential Equations and Image Analysis. Cambridge University Press, Cambridge (2001)
11. Siddiqi, K., Kimia, B.B.: Parts of visual form: Computational aspects. IEEE Transactions on Pattern Analysis and Machine Intelligence 17(3), 239–251 (1995)
12. Zhu, S.-C.: Embedding gestalt laws in Markov random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence 21(11), 1170–1187 (1999)
The Global-Local Transformation for Invariant Shape Representation

Konstantinos A. Raftopoulos and Stefanos D. Kollias

National Technical University of Athens, Electrical Engineering Building, 1st Floor, Room 1.1.23, Iroon Polytexneiou 9, 15780 Athens, Greece
[email protected]
Abstract. We present the Global-Local (GL) transformation for closed planar curves. With this new transformation we can represent shape by means of two-dimensional manifolds (surfaces) embedded into the unit cube. We explore some useful properties of the transform space and we demonstrate its high expressive power. We justify the high potential of the resulting invariant shape representations in object recognition by providing experimental results using the Kimia silhouette database. Keywords: Global-Local transformation, Shape representation, Shape recognition.
1 Introduction
The proposed Global-Local (GL) transformation is a metric embedding, invariant to affine transformations, rotation, scaling and mirroring; it is therefore a shape representation technique, and its significance lies in transforming local to global descriptors in a way that one can compensate for the other in improving the robustness and the recognition capability of the overall representation. While, for example, global shape descriptors are relatively robust to noise, they have limited usefulness with respect to occlusion and/or missing parts; on the other hand, methods that emphasise local properties are more robust to occlusion but are not robust to noise and deformation. The need for hybrid representations, together with the fact that direct analytical representations of shape lack "meaning" (since the numerical description they provide about a given shape has no direct human perceptual interpretation), motivates the introduction of the GL transformation. The compatibility between the shape representation and human perception is an important issue, since a "meaningful" shape representation permits intelligent interaction between the algorithm and the human operator. The Fourier representation, for example, provides an excellent global shape descriptor, but the transformed shape is described by means of frequency coefficients that have no direct perceptual interpretation. Representations, on the other hand, that emphasise local shape properties like tangent, curvature etc., or the "landmark" based representations, even though they provide a more "meaningful" description of the
local points, fail to represent the shape as a whole. Measuring the curvature of a curve at a given point, for example, gives us no information about the relative displacement of this part of the curve with respect to the rest of the curve. The GL-transformation is novel not only in the way it encodes local into global descriptors and vice versa, but also in the way it encodes morphological shape features into the boundary points, combining in this way the expressive power of morphological features with the high performance of boundary methods. The rest of the paper is structured as follows. We first examine similar and related work before we present the GL-transformation. Next we demonstrate the Global-Local properties as we explore the view-area integrals and the extreme points in the transform space. Using a dynamic programming approach, we then describe an algorithm for calculating the distance between curves, and using the Kimia silhouette database we demonstrate experimental results showing the high recognition capability of our representation.
2 Related Work
Several 1-D boundary representations of shape have been presented in the literature. Bennet and McDonald [1] used a tangent angle versus arc length function. Chang, Hwang, and Buehrer [2] constructed the distance function from the centroid to feature points. Zahn and Roskies [3] used the Fourier transform of the tangent angle vs. arc length shape boundary representation. Bengtsson and Eklundh [4] presented a hierarchical method where the shape boundary is represented by a polygonal approximation. Ikebe and Miyamoto [5] wrote an overview of spline applications for shape design, representation, and restoration. Witkin [6] proposed scale-space filtering and Babaud et al. [7] proved the uniqueness of the Gaussian kernel for scale-space filtering. Asada and Brady [8] introduced a representation called the curvature primal sketch. Mokhtarian and Mackworth [9,10] applied the scale-space approach to the description of planar shapes using the shape boundary. Blum [11,12] proposed the medial axis transform (MAT). Maragos and Schafer [13] used mathematical morphology to extract skeleton subsets for efficient coding of binary images. Robert Osada, Thomas Funkhouser, Bernard Chazelle and David Dobkin [14] computed shape signatures for arbitrary 3D polygonal models by representing the object as a shape distribution sampled from a shape function measuring global geometric properties. K. Siddiqi and B.B. Kimia used a shock graph for shape recognition, and later P. J. Giblin and B. B. Kimia [15] improved it to reconstruct the shape boundary from its skeleton. T. B. Sebastian, P. N. Klein and B. B. Kimia [16] demonstrated state-of-the-art recognition by editing the shock graphs. In this work the authors show that recognition based on shock graphs is improved by incorporating shape boundary information along with the skeleton symmetry. Their definition of the various types of shocks is similar to the dilations and restrictions defined by our critical points, but our method is a boundary representation and our advantage is in keeping the total ordering of the boundary points, being therefore
at the same time able to apply dynamic programming techniques for optimal correspondence between the boundary points of different shapes. In the sequel we introduce a new shape transformation technique that is inherently invariant to affine transformations and at the same time combines many of the desirable properties for shape representation. In contrast to Fourier representations, it retains full metric and location information in the transform space and, even though it is a boundary representation, it lends many of the useful properties of 2D global descriptors. While meaningful shape descriptors have traditionally been extracted by global space methods, such as those based on the medial axis transform, the new representation, although a boundary method, manages to encode global morphological descriptors into the boundary values, combining in this way the high performance of boundary methods with the expressive power of global space methods.
3 The Proposed Transformation
The proposed transformation is a metric embedding independent of the coordinate system used to parametrise the curve. From a starting point on the curve we first construct a view function as the boundary distance versus the plane distance between the starting point and another point that runs the whole curve. We then let the starting point also run the whole curve and, by combining the corresponding views, we construct a surface embedded into the unit cube. The curve can be reconstructed up to affine transformations from only three of the views, but the set of all views that generate the GL-surface contains all the possible re-parameterisations of the curve in a highly expressive and redundant way. Local distortions on the boundary translate to global deformation of the whole transform space; therefore local curvature on the initial curve can be encoded into the global area of corresponding planes perpendicular to the surface in the transform space. At the same time, the extreme points (points of local maxima and minima) in the transform space indicate segments on the shape boundary that form morphological regions, namely dilations and restrictions of the shape of the curve; local descriptors in the transform surface therefore indicate global shape features of the initial curve. By relaxing the conditions of the extreme points to ε-critical points we manage not only to identify but also to measure these morphological formations, in a way that constructs a boundary method assigning values of morphological significance to the boundary points. The morphological regions defined by the GL-transformation are similar to the fourth-order shocks mentioned above, but our advantage is that we are able to identify such regions without having to calculate the skeleton of the shape, since our method keeps the boundary correspondence of the points even though it extracts morphological features. In the following we formally define these concepts for the general case of m-dimensional curves, where the results for planar curves are obtained for m = 2. Let [0, λ] ⊂ ℝ and α : [0, λ] → ℝ^m be a continuous, 1-1, infinitely differentiable curve in ℝ^m. Let also α(0) = α(λ), so that α is a closed curve in ℝ^m. Since α
Fig. 1. A closed planar curve and the corresponding GL surface
is continuous, 1-1 and onto, α([0, λ]) ⊂ ℝ^m is an isomorphism and imposes the total ordering of [0, λ] ⊂ ℝ into Im_α([0, λ]) ⊂ ℝ^m. We consider ℝ^m endowed with the usual metric derived from the usual norm $\|\cdot\|_m$; Im_α([0, λ]) := I_α is the trajectory of the closed curve α and its diameter in ℝ^m is
$$\Delta = \sup_{x, y \in I_\alpha} \|x - y\|_m$$
We also consider the trajectory endowed with the usual metric derived from the usual norm $\|\cdot\|_1 = |\cdot|$ (the absolute value) on its isomorphic [0, λ]. We now let $x_0$ be a point on the trajectory of the curve α and consider the function $v'_{x_0}$ defined as
$$v'_{x_0} : I_\alpha \to [0, \Delta], \qquad x \mapsto v'_{x_0}(x) := \|x - x_0\|_m .$$
$v'_{x_0}$ is $d_m$ restricted to the trajectory of α, fixed at $x_0$, and continuous as such. We define $v_{x_0}$, the composition of α and $v'_{x_0}$, as follows:

Definition 1. $v_{x_0} := v'_{x_0} \circ \alpha$, therefore $v_{x_0} : [0, \lambda] \to [0, \Delta]$, $x \mapsto v_{x_0}(x) := v'_{x_0}(\alpha(x))$.

We call $v_{x_0}$ the view of the trajectory $I_\alpha$ from $x_0$. $v_{x_0}$ is well defined by the choice of $x_0$. It is also continuous, as the composition of the continuous curve α and the continuous metric $v'_{x_0}$. Let t ∈ [0, λ] and α(t) = x.
228
K.A. Raftopoulos and S.D. Kollias
This surface we call the GlobaLocal surface of the closed curve α and it is clear that Sα ⊂ [0, λ] × [0, λ] × [0, Δ].
4
The Global-Local Properties
The GL-transformation just introduced processes significant properties for shape representation. As a metric embedding is independent of the coordinate system therefor invariant of affine transformations. Furthermore, local measures of the transformed curve become global measures on the GL surface while local measures on the GL-surface correspond to global shape descriptors on the shape of the curve. In particular the local curvature on the transformed curve can be estimated by area integrals in the transform space whereas points of local extrema on the GL-surface correspond to restrictions and dilations of the transformed shape. This correspondence of local and global descriptors through the GL-transformation is significant since it can be used in the case of the curvature towards better resistance to noise and in the case of restrictions and dilations towards meaningful global morphometric encoding of the shape boundary. 4.1
Encoding Curvature into Global Descriptors of the GL Surface
Here we show that in the GL-surface the local curvature of the shape boundary has been encoded to global descriptors and it can be estimated by calculating the area enclosed by the view functions in the GL-surface. We consider C ∞ [0, λ] (the infinite times differentiable real functions defined on [0, λ]) endowed with the metric df defined by the norm .f (norm of the operator) defined as: λ |f (t)|dt .f : C ∞ [0, λ] → , f → f f := 0
∞
The space C [0, λ] becomes a metric space with reference to the metric of the operator just introduced. Let also φ a function defined on [0, λ] and taking values in R as follows: φ : [0, λ] → R: t → φ(t) := vα(t) f = vx f The function φ maps a point x = α(t) on the curve to the area that is enclosed by the view function vα(t) and the x x axis. From the Liphsitz condition satisfied by vα(t) and the strong continuity of φ we get that the curvature of the function φ(t) is a bounded estimation of the curvature of the initial curve α(t). This result is significant since in the presence of noise where the curvature of the initial curve cannot be estimated without smoothing operations, the function φ of the view areas in the transform space can be used to estimate the curvature of the noisy curve. The resulting from φ representation we will call the viewarea representation. In figure 2 the initial shape (a) together with the viewarea representation of the curve (d) is shown. Despite the noisy shape (a) the
The Global-Local Transformation for Invariant Shape Representation
229
Fig. 2. A shape from the Kimia database (a), the GL surface (b), the GL-critical point outline (c) and the corresponding view-area representation (d)
magnitude of the curvature of the shape in (a) can be estimated by the curvature of the representation in (d).
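A hedged sketch of the view-area representation defined above: given the discrete GL surface from the previous sketch, the area under each view function is approximated by a Riemann sum. The function name and the step size are illustrative assumptions.

```python
def view_area_representation(S, dt):
    """phi[i]: area under the i-th view function (row i of the discrete GL
    surface S), approximated by a Riemann sum with parameter step dt."""
    return S.sum(axis=1) * dt

# e.g. phi = view_area_representation(S, dt=lam / S.shape[0]) for a curve of length lam
```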
4.2 Encoding Global Morphometric Curve Descriptors into the Local Curvature of the GL-Surface
In this section we demonstrate how local measures on the GL-surface can be used to identify and measure morphological regions of the transformed shape. The significance of this approach is that, while keeping the total ordering of the boundary points, we encode morphological descriptors of the curve that are traditionally extracted by morphological rather than boundary methods. This gives us the ability to combine the expressive power of morphological descriptors with the performance advantage of boundary representations, towards successful high-performance recognition. We now show that the local extrema of the GL-surface identify morphological regions, namely dilations and restrictions, on the shape of the curve. We define these critical points and, by identifying regions around the extreme points, we provide a way to measure these morphological regions through the GL-measurements. Let $\alpha \in C_c^\infty([0,\lambda], \mathbb{R}^m)$ be a closed, continuous, infinitely differentiable curve in $\mathbb{R}^m$, $v_x(t)$ the view functions, $S_\alpha(x, y)$ the GL surface and
$$H_\alpha = \begin{vmatrix} \frac{\partial^2 S_\alpha}{\partial x^2} & \frac{\partial^2 S_\alpha}{\partial x \partial y} \\ \frac{\partial^2 S_\alpha}{\partial y \partial x} & \frac{\partial^2 S_\alpha}{\partial y^2} \end{vmatrix}$$
the Hessian of $S_\alpha$. Let also $t_1, t_2 \in [0, \lambda]$ and $x_1 = \alpha(t_1)$, $x_2 = \alpha(t_2)$, $x_1, x_2 \in I_\alpha$.
Definition 3. The point $(t_1, t_2)$ is called $\epsilon$-critical iff
$$\left|\frac{\partial S_\alpha}{\partial t_1}(t_1, t_2)\right| \le \epsilon, \quad \epsilon \ge 0$$
and
$$\left|\frac{\partial S_\alpha}{\partial t_2}(t_1, t_2)\right| \le \epsilon, \quad \epsilon \ge 0$$
and $H_\alpha(t_1, t_2) > 0$.
Here we have used relaxed conditions, necessary for points of local extrema on the GL surface. The critical points according to our definition are points of local extrema together with points in a range around them on the GL surface, the range depending on $\epsilon$. Now, if $(t_1, t_2)$ is an $\epsilon$-critical point, then the pair $(x_1, x_2)$ with $x_1 = \alpha(t_1)$ and $x_2 = \alpha(t_2)$ is also critical and defines a morphological region on the curve, namely a dilation or a restriction on the shape of the curve. In Figure 3 a shape with one restriction and two dilations is shown, while in Figure 2 the initial shape (a), the GL-surface (b) and the pairs of critical points along the boundary (c) are shown for two shapes. We see therefore that local measures on the GL-surface define global shape properties for the shape of the curve. To provide measurements of the morphological regions, we will use the size of the region of $\epsilon$-critical points around the local extrema.
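One possible numerical reading of Definition 3 (the paper gives no algorithm, so the finite-difference scheme below is our assumption): approximate the two partial derivatives and the Hessian determinant of the discrete GL surface with periodic central differences, and keep the grid points where both partials are at most epsilon and the determinant is positive.

```python
import numpy as np

def epsilon_critical_points(S, eps):
    """Boolean mask of epsilon-critical points of a discrete GL surface S.
    Periodic central differences are used, since both parameters live on a
    closed curve."""
    d1 = (np.roll(S, -1, axis=0) - np.roll(S, 1, axis=0)) / 2.0
    d2 = (np.roll(S, -1, axis=1) - np.roll(S, 1, axis=1)) / 2.0
    d11 = np.roll(S, -1, axis=0) - 2.0 * S + np.roll(S, 1, axis=0)
    d22 = np.roll(S, -1, axis=1) - 2.0 * S + np.roll(S, 1, axis=1)
    d12 = (np.roll(d1, -1, axis=1) - np.roll(d1, 1, axis=1)) / 2.0
    hess_det = d11 * d22 - d12 ** 2
    return (np.abs(d1) <= eps) & (np.abs(d2) <= eps) & (hess_det > 0)
```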
Fig. 3. A shape with one restriction and two dilations
Definition 4. Let $C_\epsilon(\alpha)$ be the set of all the $\epsilon$-critical points of the curve $\alpha$ and $x$ a point on the curve. The GL measure of the curve at the point $x$ is:
$$GL_\alpha(x) = \int_{(x,y)\in C_\epsilon(\alpha)} k(x, y)\,\|x - y\|_m\, dy$$
where $k(x, y)$ is a kernel function ensuring that $\|x - y\|_m$ counts in the calculation of the GL measure only if the line connecting $x$ and $y$ lies in the interior of
the curve. This is necessary because we want the GL measure to convey perceptual compatibility, by measuring the local convex formations of the boundary segments. The significance of the GL measure lies in the fact that global morphometric properties of the contour are measured and assigned to the boundary of the curve. As discussed before, the correspondence between formations of the shape and the boundary segments that contribute to these formations is important, since the total ordering of the boundary points keeps a low complexity in finding the optimal correspondence (distance) between two curves with dynamic programming techniques. In the next section we present experimental results showing that this correspondence through the GL measure is successful in shape recognition.
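A rough sketch of the GL measure for a discretized boundary. The visibility kernel k(x, y) is approximated here by testing whether the midpoint of the chord between x and y lies inside the polygon, a simplification of the condition that the whole connecting line lies in the interior; this choice, like the function names, is an assumption of ours.

```python
import numpy as np

def point_in_polygon(p, poly):
    """Even-odd ray-casting test; `poly` is an (N, 2) array of boundary points."""
    x, y = p
    inside, j = False, len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        if (yi > y) != (yj > y):
            if x < xi + (y - yi) * (xj - xi) / (yj - yi):
                inside = not inside
        j = i
    return inside

def gl_measure(points, critical_mask):
    """GL measure at every boundary point: distances ||x - y|| summed over the
    epsilon-critical pairs whose chord midpoint falls inside the shape (a crude
    stand-in for the visibility kernel k(x, y))."""
    n = len(points)
    gl = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if critical_mask[i, j]:
                mid = 0.5 * (points[i] + points[j])
                if point_in_polygon(mid, points):
                    gl[i] += np.linalg.norm(points[i] - points[j])
    return gl
```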
5
Experimental Results
In this section we use a dynamic programming approach similar to [17] to measure the distance between shapes from the Kimia silhouette database, using the GL measure for calculating the pairwise correspondence. Let $C_1$ and $C_2$ be two curves. Given a correspondence $c$ between the points of $C_1$ and $C_2$ such that $c(C_1(t_1)) = C_2(t_2)$, and $C$ the set of all such correspondences, we want to measure the distance between the two curves by minimising the energy functional:
$$E(C_1, C_2, c) = \int_{x \in C_1} |GL_{C_1}(x) - GL_{C_2}(c(x))|\, dx$$
with respect to the correspondence $c$. The GL distance between the two curves $C_1$ and $C_2$ will therefore be given by:
$$\|C_1 - C_2\|_{GL} = \arg\min_{c \in C} E(C_1, C_2, c)$$
We discretize the curves by sampling 100 equispaced points on each. We then find an initial correspondence between a pair of points on the two curves by estimating the diameter of the area enclosed by the two shapes. An exhaustive search for the best initial correspondence is also possible if the results through the diameter are not accurate. After the correspondence of the first pair of points, at each step all the possible correspondences between the pairs of points of the two curves are examined; the restriction of the total ordering of the points on both curves keeps the complexity feasible, while the method finds the optimum correspondence. In the table of Figure 4 we see 18 shapes from the Kimia database, including shapes from the classes of fish, mammals, rabbits, men, hands, planes, tools and sea creatures. For each shape we can see the three best matches in the database, labelled as 1st, 2nd and 3rd. These are the three shapes among all the 99 shapes in the database that have the smallest GL distance from the initial image. We see that for all the tested images the three best matches are always correct. Since the presented shapes have different characteristics, the experimental results demonstrate that the GL-measure is a new representation that achieves significant results in general shape recognition.
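The order-preserving matching described above can be sketched with a standard dynamic-time-warping style dynamic program over the two sequences of GL measures, minimizing the discretized energy E; the exhaustive search over cyclic starting points stands in for the diameter-based initialization. The paper follows [17], so the exact recursion there may differ from this illustrative version.

```python
import numpy as np

def dtw_cost(a, b):
    """Order-preserving alignment cost between two 1-D descriptor sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = abs(a[i - 1] - b[j - 1]) + min(D[i - 1, j],
                                                     D[i, j - 1],
                                                     D[i - 1, j - 1])
    return D[n, m]

def gl_distance(gl1, gl2):
    """Best order-preserving match over all cyclic starting points of curve 2."""
    gl2 = np.asarray(gl2, dtype=float)
    return min(dtw_cost(gl1, np.roll(gl2, -s)) for s in range(len(gl2)))
```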
Fig. 4. Shapes from the Kimia database arranged in the order of their three closest counterparts
References 1. Bennet, J.R., McDonald, J.S.: On the measurement of curvature in a quantized environment. IEEE Transactions on Computers 24, 803–820 (1975) 2. Chang, C.C., Hwang, S.M., Buehrer, D.J.: A shape recognition scheme based on relative distances of feature points from the centroid. Pattern Recognition 24, 1053– 1063 (1991) 3. Zahn, C., Roskies, R.: Fourier descriptors for plane closed curves. Computer Graphics and Image Processing 21, 269–281 (1972)
4. Bengtsson, A., Eklundh, J.: Shape representation by multiscale contour approximation. IEEE Transactions on PAMI 13, 85–93 (1991) 5. Ikebe, Y., Miyamoto, S.: Shape design, representation, and restoration with splines. In: Fu, K., Kunii, T. (eds.) Picture Engineering. LNCS, pp. 75–95. Springer, Heidelberg (1982) 6. Witkin, A.P.: Scale-space filtering. In: Proceedings of the 8th Int’l Joint Conference on Artificial Intelligence, pp. 1019–1022 (1983) 7. Babaud, J., Witkin, A., Baudin, M., Duda, R.: Uniqueness of the gaussian kernel for scale-space filtering. IEEE Transactions on PAMI 8, 26–33 (1986) 8. Asada, H., Brady, M.: The curvature primal sketch. IEEE Transactions on PAMI 8, 2–14 (1986) 9. Mokhtarian, F., Mackworth, A.K.: Scale-based description and recognition of planar curves and two-dimensional shapes. IEEE Transactions on PAMI 8, 34–43 (1986) 10. Mokhtarian, F., Mackworth, A.K.: A theory of multiscale, curvature-based shape representation for planar curves. IEEE Transactions on PAMI 14, 789–805 (1992) 11. Blum, H.: A transformation for extracting new descriptors of shape. Models for the Perception of Speech and Visual Forms. MIT Press, Cambridge (1967) 12. Blum, H.: Biological shape and visual science. Journal of Theoretical Biology 38, 205–287 (1973) 13. Maragos, P., Schafer, R.: Morphological skeleton representation and coding of binary images. IEEE Transactions on ASSP 34, 1228–1244 (1986) 14. Osada, R., Funkhouser, T., Chazelle, B., Dobkin, D.: Shape distributions. ACM Trans. Graph. 21, 807–832 (2002) 15. Giblin, P.J., Kimia, B.B.: On the intrinsic reconstruction of shape from its symmetries. IEEE Transactions on PAMI 25, 895–911 (2003) 16. Sebastian, T.B., Klein, P.N., Kimia, B.B.: Recognition of shapes by editing their shock graphs. IEEE Transactions on PAMI 26, 550–571 (2004) 17. Sebastian, T.B., Klein, P.N., Kimia, B.B.: On aligning curves. IEEE Transactions on PAMI 25, 116–125 (2003)
A Vision System for Recognizing Objects in Complex Real Images
Mohammad Reza Daliri1,2, Walter Vanzella1, and Vincent Torre1
1 SISSA, Via Beirut 2-4, 34014 Trieste, Italy
2 ICTP Programme for Training and Research in Italian Laboratories, International Center for Theoretical Physics, Strada Costiera 11, 34014 Trieste, Italy
{daliri,vanzella,torre}@sissa.it
Abstract. A new system for object recognition in complex natural images is proposed here. The proposed system is based on two modules: image segmentation and region categorization. Original images g(x,y) are first regularized by using a self-adaptive implementation of the Mumford-Shah functional, so that the two parameters α and γ controlling the smoothness and fidelity automatically adapt to the local scale and contrast. From the regularized image u(x,y), a piece-wise constant image sN(x,y) representing a segmentation of the original image g(x,y) is obtained. The obtained segmentation is a collection of different regions or silhouettes which must be categorized. Categorization is based on the detection of perceptual landmarks, which are scale invariant. These landmarks and the parts between them are transformed into a symbolic representation. Shapes are mapped into symbol sequences and a database of shapes is mapped into a set of symbol sequences. Categorization is obtained by using support vector machines. The Kimia silhouette database is used for training, and complex natural images from the Martin database and a collection of images extracted from the web are used for testing the proposed system. The proposed system is able to correctly recognize birds, mammals and fish in several of these cluttered images.
1 Introduction A major goal of computer vision is to make machines able to recognize and categorize objects as humans are able to do very quickly [1] even in very cluttered and complex images. There are several reasons that make this problem so difficult. The first reason is related to the uncertainty about the level of categorization in which recognition should be done. Based on the research made by cognitive scientists [2], there are several levels at which categorization is performed. Another reason is the natural variability within various classes. Moreover, the characterization should be invariant to rotation, scale, translation and to certain deformations. Objects have several properties that can be used for recognition, like shape, color, texture, brightness. Each of these cues can be used for classifying objects. Biederman [3] suggested that edge-based representations mediate real-time object recognition. In his view, surface characteristics such as color and texture can be used for defining edges and can provide cues for visual search, but they play only a secondary role in G. Bebis et al. (Eds.): ISVC 2007, Part II, LNCS 4842, pp. 234–244, 2007. © Springer-Verlag Berlin Heidelberg 2007
the real-time recognition. There are two major approaches for shape-based object recognition: 1) boundary-based, that uses contour information [4], [5], [6], [7], [8], and 2) holistic-based representation, requiring more general information about the shape [9], [10]. In this paper we address the issue of categorizing objects in natural cluttered images by developing a vision system based of two modules: the first module is a bottom-up segmentation of natural images and the second module is a top-down recognition paradigm. In the first module, images are segmented in a collection of regions or silhouettes with a constant grey level. In the second module these regions are categorized and some of them are recognized as relevant objects, such as birds, mammals and fish. Segmentation of the image (Vanzella & Torre 2006 [12]) is addressed by using a modification of the classical regularization of Mumford and Shah based on the minimization of the functional (1):
$$\Phi(u, k) = \gamma \int_{\Omega/k} |\nabla u|^2\, dx + \alpha H(k) + \int_{\Omega/k} |u - g|^2\, dx \qquad (1)$$
By making α and γ self-adaptive to the local features of the image it is possible to obtain a regularized image where, to some extent, noise has been removed without losing the fine details (Vanzella, Pellegrino & Torre 2004 [11]). A segmentation of the original image g(x,y) is easily obtained from the regularized image u(x,y). Indeed, segmentation is obtained by approximating u(x,y) with a piece-wise constant function s(x,y) which is the union of a collection of shapes or silhouettes. Categorization of silhouettes is based on the extraction of the perceptually relevant landmarks obtained from their contours. Each silhouette is transformed into a symbolic representation, where each shape is mapped to a string of symbols. The present manuscript is organized as follows: Section 2 describes the segmentation of the original images. Localization, extraction of landmarks and their symbolic representation are investigated in Section 3. Section 4 describes the feature space composed by string kernels. In Section 5 geometrical invariant features are described. Experimental results on silhouettes from the Kimia database [13] and on cluttered images from the Martin database [14] and some images extracted from the web are presented in Section 6.
2 Bottom-Up Module: Image Segmentation
The segmentation of natural and complex images g(x,y) proposed here consists of two distinct steps. The original image g(x,y) is first regularized with the self-adaptive regularization [12], where the parameters α and γ controlling smoothness and fidelity of the regularized image u(x,y) adapt to the local scale and contrast. u(x,y) is a piece-wise smooth function approximating g(x,y), from which a piece-wise constant function s(x,y) is obtained. This piece-wise constant function s(x,y) is composed by the union of N regions of constant grey level and, to make the dependence on N explicit, the notation sN(x,y) is used. Given the original image g(x,y), the segmentation sN(x,y) can be considered satisfactory if:
* the number of distinct regions N is small (2)
* the approximation error $RMSE = \sqrt{\frac{1}{N_p}\|g - s\|^2}$ is small (3)
where $N_p$ is the total number of pixels of g(x,y). Occasionally N and RMSE are both small and a satisfactory segmentation is obtained. Often, however, the two conditions (2) and (3) cannot be simultaneously met and a compromise must be found. In this case a good segmentation sN(x,y) must minimize the function:
$$f(N) = RMSE + \beta N \qquad (4)$$
where β controls the relative weight of the two terms RMSE and N. Given a segmentation sN(x,y), the function (3) is computed. In order to reduce the number of distinct regions from N to N-1, the two neighbouring regions leading to the largest decrease of (4) are merged. The merging is stopped when the minimum of f(N) is reached. A possible choice for β is:
$$\beta = 1 / RMSE^n \qquad (5)$$
With this choice, when RMSE is small the minimization of (4) leads to a strong reduction of N, but when RMSE increases it is not convenient, in order to minimize (4), to reduce N. As shown in [12], the exponent n controls the grain of the final segmentation minimizing (4). With a value of n equal to 1, the number of regions in the final segmentation varies between 10 and 30 and can be considered a coarse segmentation. With n equal to 1.5 and to 2, more regions are obtained in the final segmentation and therefore it is possible to obtain a medium and a fine segmentation. By using a value of n equal to 0.5, the number of final regions is around 10.
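A simplified sketch of the greedy merging that minimizes f(N) = RMSE + βN (Eq. 4). It assumes an initial integer label image and approximates each region by its mean grey level; the bookkeeping in the actual system, which starts from the regularized image u(x,y), may differ, and the function names are our own.

```python
import numpy as np

def merge_regions(g, labels, beta):
    """Greedy merging that minimizes f(N) = RMSE + beta * N (Eq. 4), where the
    segmentation assigns each region its mean grey level."""
    g = g.astype(float)
    npix = g.size
    stats = {int(i): (g[labels == i].sum(), (g[labels == i] ** 2).sum(),
                      int((labels == i).sum()))
             for i in np.unique(labels)}

    def sse(s, s2, n):                       # squared error of a region vs. its mean
        return s2 - s * s / n

    def adjacent_pairs(lab):                 # 4-neighbour region adjacencies
        pairs = set()
        for a, b in ((lab[:, :-1], lab[:, 1:]), (lab[:-1, :], lab[1:, :])):
            m = a != b
            pairs |= {tuple(sorted((int(p), int(q)))) for p, q in zip(a[m], b[m])}
        return pairs

    while len(stats) > 1:
        total = sum(sse(*v) for v in stats.values())
        f_now = np.sqrt(total / npix) + beta * len(stats)
        best = None
        for i, j in adjacent_pairs(labels):
            si, s2i, ni = stats[i]
            sj, s2j, nj = stats[j]
            merged = total - sse(si, s2i, ni) - sse(sj, s2j, nj) \
                     + sse(si + sj, s2i + s2j, ni + nj)
            f_new = np.sqrt(merged / npix) + beta * (len(stats) - 1)
            if best is None or f_new < best[0]:
                best = (f_new, i, j)
        if best is None or best[0] >= f_now:   # minimum of f(N) reached
            break
        _, i, j = best
        si, s2i, ni = stats.pop(i)
        sj, s2j, nj = stats[j]
        stats[j] = (si + sj, s2i + s2j, ni + nj)
        labels = np.where(labels == i, j, labels)
    return labels
```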
3 Curvature Computation and Its Symbolic Representation
In this section we describe how to compute the curvature of each region obtained from the segmentation described in the previous section. The contour of each region is represented by the edge chain (x(j), y(j)), j = 1,...,N, where N is the chain or contour length (see Fig. 1B). Curvature is computed at a single scale and therefore the proposed method is less sophisticated and precise than multi-scale approaches such as [15], but it is computationally simpler. Having recovered the edge chain from the region contour, the next step is finding the gradient of the contour at the optimal scale. The optimal scale was extracted with the Lindeberg formula [16] and the gradient of the original shape Si at this scale was computed by a simple 2-D Gaussian filtering in the X and Y directions in the image plane. From the obtained gradient the tangent vector (Tx, Ty) is easily computed. The curvature κ of a planar curve at a point P on the curve is:
$$\kappa = \frac{\partial T}{\partial s} \qquad (6)$$
where $s$ is the arc length. The curvature is computed as:
$$\kappa = |\kappa| \cdot \mathrm{Sign}(\kappa) \qquad (7)$$
C1-L1-A1-L1-A5-L1-A1-L1-A5-L3-A1-L1-A1-L1-C2-L3-C2-L2-C2-L2-A1-L2
Fig. 1. A) Examples of silhouettes from Kimia database B) the contour of one shape from the Kimia database. Numbers indicate some of the landmarks with an arbitrary starting-point in the contour C) Raw and smoothed curvature profile. D) Angle and curve part representation based on the maxima and peaks information of the curvature representation (red arrow shows the starting-point for the symbolic representation and red circles are curve part). E) Symbolic representation for the bird shape.
where $|\kappa|$ is the modulus of the curvature and $\mathrm{Sign}(\kappa)$ is its sign. The computed κ is often noisy and it is useful to smooth regions of low curvature while leaving regions of high curvature unaltered. Therefore a non-linear filtering was used. The local square curvature is computed as:
$$\kappa^2(n) = \frac{1}{2\sigma_1 + 1} \sum_{i=-\sigma_1}^{\sigma_1} \kappa^2(n + i) \qquad (8)$$
and non-linear filtering was performed by convolving κ with a one-dimensional Gaussian function, where the scale of the filter is:
$$\sigma_2(n) = \frac{\hat{\kappa}}{\kappa^2(n)} \qquad (9)$$
Using the value $\hat{\kappa} = 0.02$, a robust and perceptually relevant representation of shape curvature is obtained, where local maxima (negative and positive peaks) are easily identified (see Fig. 1C). Now, the local maxima (negative and positive peaks) of the curvature are detected and identified as landmarks in the original 2-D contours (see Fig. 1D). Having obtained the curvature and having extracted the relevant landmarks, each silhouette is transformed into a symbolic representation to be used for categorization (see Fig. 1E). Firstly, angles close to 180 degrees are removed. By using a "dictionary" of angles, curves and straight lines, the obtained curvature is transformed into a string
of symbols. Features detected as corners are quantized so that angles have either 45, 90 or 135 degrees. These angles can have either a positive or a negative value of curvature. A total of 6 different corners is obtained, labeled A1, A2, ... up to A6. Curve parts have an average curvature between that of straight lines and that of sharp angles, which can be identified by setting a threshold. Curves are labeled either as concave (C1) or convex (C2), according to the sign of their average curvature. Pieces of the contour of silhouettes linking two corners (or curves) are labeled in three ways: L1 if the piece is a straight line, L2 if it is not a straight line (and not a curve) but has an average positive curvature, and L3 if, on average, it has a negative curvature.
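The non-linear curvature filtering of Eqs. (8)-(9) can be sketched as follows; the window half-width sigma1, the cap on the kernel support and the periodic indexing are our own assumptions, since the text does not fix them.

```python
import numpy as np

def adaptive_smooth_curvature(kappa, sigma1=5, kappa_hat=0.02):
    """Non-linear curvature smoothing: the local mean squared curvature (Eq. 8)
    sets a per-sample Gaussian scale sigma2 = kappa_hat / kappa^2(n) (Eq. 9), so
    flat stretches are smoothed strongly while sharp corners are barely touched."""
    kappa = np.asarray(kappa, dtype=float)
    n = len(kappa)
    win = np.arange(-sigma1, sigma1 + 1)
    # Eq. (8): mean of the squared curvature over a (2*sigma1 + 1) periodic window.
    k2 = np.array([np.mean(kappa[(i + win) % n] ** 2) for i in range(n)])
    smoothed = np.empty(n)
    for i in range(n):
        sigma2 = kappa_hat / max(k2[i], 1e-12)          # Eq. (9)
        half = min(max(1, int(3 * sigma2)), n // 2)     # cap the kernel support
        offs = np.arange(-half, half + 1)
        w = np.exp(-0.5 * (offs / sigma2) ** 2)
        smoothed[i] = np.sum(w / w.sum() * kappa[(i + offs) % n])
    return smoothed
```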
4 Top-Down Module: Learning in the Feature Space
In our approach, shape categorization becomes similar to text categorization, where each string of symbols can be either a sentence or a separate document. A standard approach [17] to text categorization uses the classical text representation [18], mapping each document into a high-dimensional feature vector, where each entry of the vector represents the presence or the absence of a feature. Our approach makes use of specific kernels [19]. This specific kernel, named string kernel, maps strings, i.e., the symbolic representation of the contour obtained in the previous section, into a feature space. In this high-dimensional feature space all shapes have the same size. This transformation provides the desired rotational invariance and therefore the categorization system is also invariant to the initial symbol of the string describing the shape. The feature space in this case is composed by the set of all substrings of maximum length L of k symbols. In agreement with a procedure used for text classification [19], the distance, and therefore the similarity, between two shapes is obtained by computing the inner product between their representations in the feature space. Their inner product is computed by making use of kernel functions [19], which compute the inner product by implicitly mapping shapes to the feature space. In essence, this inner product measures the common substrings of the symbolic representations of the two shapes: if their inner product is high, the two shapes are similar. Substrings do not need to be contiguous, and the degree of contiguity of one substring determines its weight in the inner product. Each substring is weighted according to its frequency of appearance and its degree of compactness, measured by a decay factor λ in (0,1) [19]. To create the feature space we need to search for all possible substrings composed of k symbols, which in our case are the 11 symbols introduced in Section 3. For each substring there is a weight in the feature space given by the sum of all occurrences of that substring, considering the decay factor for non-contiguity. After creating the invariant feature space, we need to use a classifier to find the best hyper-planes between the different classes. Support Vector Machines (SVM) are a very successful class of statistical learning methods in high-dimensional spaces [20]. For classification, SVMs operate by finding a hyper-surface in the space of possible inputs. In their simplest version they learn a separation hyper-plane between two sets of points, the positive examples and the negative examples, in order to maximize the margin, i.e., the distance between the hyper-plane and the closest point. Intuitively, this makes the classification correct for testing data that is near, but not identical, to the training data. Further information can be found in [21], [22].
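For illustration, here is a bag-of-substrings feature map with a normalized inner product over the symbol strings of Section 3. It uses contiguous k-grams only; the actual system uses the gap-weighted string kernel with decay factor λ from [19], whose dynamic program is more involved. All names and the example strings below are hypothetical.

```python
from collections import Counter
from math import sqrt

def kgram_features(symbols, k_max=3):
    """Counts of all contiguous substrings of length 1..k_max of a symbol list."""
    feats = Counter()
    for k in range(1, k_max + 1):
        for i in range(len(symbols) - k + 1):
            feats[tuple(symbols[i:i + k])] += 1
    return feats

def kernel(f1, f2):
    """Normalized inner product (cosine similarity) between two feature maps."""
    dot = sum(v * f2.get(s, 0) for s, v in f1.items())
    na = sqrt(sum(v * v for v in f1.values()))
    nb = sqrt(sum(v * v for v in f2.values()))
    return dot / (na * nb) if na and nb else 0.0

shape_a = "C1 L1 A1 L1 A5 L1 A1".split()
shape_b = "C1 L1 A1 L2 A5 L1 A1".split()
print(kernel(kgram_features(shape_a), kgram_features(shape_b)))
```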
Table 1. Some of geometrical invariant features
Geometric Feature: Definition
Compactness or Circularity: (Perimeter^2) / (4 * pi * Area of the shape)
Elongation: Major Axis Length / Minor Axis Length
Roughness: Perimeter / Convex Perimeter
Solidity: Number of pixels in the convex hull / number of shape pixels
Rectangularity: Number of pixels in the bounding box / number of shape pixels
Normalized Major Axis Length (to Perimeter): the length of the major axis of the ellipse that has the same second moments as the region
Normalized Minor Axis Length (to Perimeter): the length of the minor axis of the ellipse that has the same second moments as the region
Normalized Equivalent Diameter (to Perimeter): the diameter of a circle with the same area as the region
Eccentricity: the ratio of the distance between the foci of the ellipse and its major axis length
5 Adding Geometrical Features
Besides the high-dimensional feature space described in the previous section, a set of geometrical properties for each shape was measured. They consist of 16 different numbers that are normalized so as to be invariant to rotation and size transformations. Table 1 illustrates these geometrical features. For further information we refer readers to [23].
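A sketch of a few Table 1 descriptors computed from a binary silhouette, assuming scikit-image's regionprops API is available (an assumption on our part; the paper does not say how the features were implemented).

```python
import numpy as np
from skimage import measure

def geometric_features(mask):
    """A few of the Table 1 descriptors for a binary silhouette `mask`."""
    props = measure.regionprops(mask.astype(int))[0]
    bbox_area = (props.bbox[2] - props.bbox[0]) * (props.bbox[3] - props.bbox[1])
    return {
        "compactness": props.perimeter ** 2 / (4.0 * np.pi * props.area),
        "elongation": props.major_axis_length / max(props.minor_axis_length, 1e-9),
        "solidity": props.convex_area / props.area,   # convex hull pixels / shape pixels, as in Table 1
        "rectangularity": bbox_area / props.area,     # bounding box pixels / shape pixels
        "eccentricity": props.eccentricity,
        "norm_equiv_diameter": props.equivalent_diameter / max(props.perimeter, 1e-9),
    }
```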
6 Experimental Results
The proposed method for object categorization was initially tested on silhouettes from the Kimia database [13], and three categories of silhouettes or shapes were chosen: birds, mammals and fish. Having tested and optimized the categorization procedure, natural images of birds, mammals and fish were analyzed. These images are good examples of real complex and cluttered images.
6.1 Categorization of Silhouettes
Kimia Database. In this section, some experimental results evaluating and comparing the proposed algorithm for shape classification are presented. A database extracted from the Kimia silhouette database [13] was used. Three different categories were considered, composed of the category of birds consisting of 94 shapes, the category of mammals consisting of 181 shapes and the category of fish consisting of 79 shapes. Some shapes of the database were rotated and resized. We used the LIBSVM [24] tools supporting multi-class classification. To test the success of our
classification, the leave-one-out cross-validation method was used [25]. Table 2 illustrates the results of combining local features extracted from the curvature and global geometrical features. We have optimized the parameters with further experiments over a larger silhouette database (the MPEG7 shape database) and obtained better results than the other methods in the literature [27].
Table 2. Classification rate for 3 different categories selected from the Kimia Database
Bird      Mammal     Fish
96.8%     97.75%     96.2%
6.2 Categorization in Real Images
Images Collected from the Web. A sample of images representing mammals, birds and fish (some of which are shown in Fig. 2a) was extracted from the web. These images were segmented using the procedure developed in [11], providing a segmentation of the initial image into a limited number of distinct regions, usually between 5 and 15, depending on the parameters used for segmentation and the image complexity. The segmentation for the corresponding images is shown in Fig. 2b. The obtained segmentation is a collection of shapes which can be analyzed by the procedure used for analyzing the Kimia database, as described in Section 6.1. Two problems arise at this stage: firstly, some shapes do not represent any object category because they are either fragments of the background or fragments of the shape to be recognized. Therefore a new category of shapes was introduced, referred to as background fragments, and examples of its members were obtained by analyzing 53 different images of animals found on the web. Secondly, the shape to be categorized can be a single region of the segmentation, as in the case shown in Fig. 2, but it can also be the union of two or even more neighboring regions. Therefore, from the segmentation of the original images, all shapes composed by a single region, by pairs of neighboring regions, and by triplets, quadruplets and quintets of neighboring regions were considered and categorized. In this way the results illustrated in Fig. 2 were obtained. 53 different images of animals were analyzed and a successful categorization was obtained for 45 of them. In 5 images a background fragment was erroneously categorized as an animal and in the other 8 images multiple erroneous categorizations occurred. By enriching the feature space and augmenting the learning set we expect to reach a successful categorization of about 90% in natural images of moderate complexity.
Martin Natural Image Database. Having tested and optimized the categorization procedure, images of birds, mammals and fish from the Martin database [26] were analyzed. These images are good examples of real complex and cluttered images. A sample of images representing mammals, birds and fish (some of which are shown in Fig. 3A) was extracted from the Martin database of complex natural images [26]. These images are usually rather cluttered and represent a good benchmark for testing computer vision algorithms. These images were segmented using the procedure developed in [11], providing a segmentation of the initial image into a limited number of distinct regions, usually between 5 and 15, depending on the parameters used for segmentation and the image complexity.
Fig. 2. A) Examples of real images. B) The segmentation obtained with the procedure of [11]. C) Categorization of regions of the segmentation shown in B according to the proposed procedure. In the first and second rows two birds were identified by one region each; in the third row a mammal was categorized by merging four regions, and finally in the fourth row a fish was categorized by merging three regions.
The segmentation for the corresponding images is shown in Fig.3B. The obtained segmentation is a collection of shapes which must be categorized and recognized. The Kimia database of silhouettes was used to learn the categorization of mammals and birds. 181 and 94 silhouettes of mammals and birds respectively were used as the input to the SVMs as described in section 4. Having learned the category of mammals and birds from Kimia database we analyze the collection of shapes obtained by the segmentation of real images of mammals and birds. Again a new category ("background fragments") was introduced and examples of its members were obtained by analyzing 50 different images of the Martin database. As described above, from the segmentation of the original images all shapes composed by a single region, by pairs of neighboring regions, triplets, quadruplets and quintuplets of neighboring regions were considered and categorized. In this way the results illustrated in Fig. 3 were obtained. 15 different images of animals from Martin database were analyzed and a successful categorization was obtained for 13 of them. In two images a background fragment was categorized as a bird and a mammal. These mistakes are expected to be eliminated by modification of the feature space, which are under progress. The processing of a single shape or silhouette requires between 45s and 55s of computing time with a PC AMD 1.1 GHz. Segmentation of cluttered real images is
Fig. 3. A) Examples of real images from Martin database. B) The segmentation obtained with the [11] procedure. C) Categorization of regions of the segmentation shown in B according to the proposed procedure. In the first row a mammal was identified merging four contiguous regions; in the second row a bird was categorized by merging two regions and finally in the third row two mammals were categorized merging three and four regions respectively.
obtained in less than 5 minutes with a PC Pentium IV 2.1 GHz for a 256*256 image size. The categorization step requires 0.55s to 0.70s (Testing phase). An implementation on a cluster of PCs is under way and it is expected to reduce the overall computing time to approximately 1 minute for each image.
7 Conclusions
In this paper a new vision system for object recognition and categorization in cluttered natural images is proposed. The system is based on a low-level segmentation sN(x,y) of the original image g(x,y) (see Section 2 and [11]) and on a high-level description of shapes (see Section 4) which integrates local information of contours (see Section 3) and global geometrical features (see Section 5). The proposed system is able to correctly categorize and recognize mammals, birds and fish in natural complex images, as shown in Section 6.
References 1. Ullman, S.: High-level Vision. MIT Press, Cambridge (1996) 2. Edelman, S.: Representation and Recognition in Vision. MIT Press, Cambridge (1999)
3. Biederman, I., Ju, G.: Surface Versus Edge-based Determinants of Visual Recognitions. Cognitive Psychology 20, 38–64 (1988) 4. Blum, H.: A Transformation for Extracting New Descriptors of Shape. In: Wathen-Dunn, W. (ed.) Models for the Perception of Speech and Visual Form, pp. 362–380. MIT Press, Cambridge (1967) 5. Sebastian, T.B., Klien, P., Kimia, B.B.: Recognition of Shapes by Editing Their Shock Graphs. IEEE Transaction on PAMI 26(5), 550–571 (2004) 6. Mokhtarian, F., Mackworth, A.: Scale-based Description and Recognition of Planar Curves and Two-dimensional Shapes. IEEE Transaction on PAMI 8(1), 34–43 (1986) 7. Belongie, S., Malik, J.: Shape Matching and Object Recognition Using Shape Contexts. IEEE Transaction on PAMI 24(24), 509–522 (2002) 8. Arbter, K., Snyder, W.E., Burhardt, H., Hirzinger, G.: Application of Affine-Invariant Fourier Descriptors to Recognition of 3-D Objects. IEEE Transaction on PAMI 12(7), 640–647 (1990) 9. Rivlin, E., Weiss, I.: Local Invariants for Recognition. IEEE Transaction on PAMI 17(3), 226–238 (1995) 10. Murase, H., Nayar, S.K.: Visual Learning and Recognition of 3-D Objects from Appearance. Int. J. Computer Vision 14(1), 5–24 (1995) 11. Vanzella, W., Pellegrino, F.A., Torre, V.: Self-adaptive Regularization. IEEE Transaction on PAMI 26(6), 804–809 (2004) 12. Vanzella, W., Torre, V.: A Versatile Segmentation Procedure. IEEE Transaction on SMC part B 36(2), 366–378 (2006) 13. Sharvit, D., Chan, J., Tek, H., Kimia, B.B.: Symmetry-based Indexing of Image Databases. J. of Visual Communication and Image Representation 9(4), 366–380 (1998) 14. Martin, D.R., Fowlkes, C., Malik, J.: Learning to Detect Natural Image Boundaries Using Local Brightness, Color, and Texture Cues. IEEE Transaction on PAMI 26(5), 530–549 (2004) 15. Dudek, G., Tsotsos, J.: Shape Representation and Recognition from Multi-scale Curvature. Computer Vision and Image Understanding 68(2), 170–189 (1997) 16. Lindeberg, T.: Edge Detection and Ridge Detection with Automatic Scale Selection. Int. J. Computer Vision 30(2), 117–154 (1998) 17. Joachims, T.: Text Categorization with Support Vector Machines: Learning with Many Relevant Features. In: Nédellec, C., Rouveirol, C. (eds.) Machine Learning: ECML-1998. LNCS, vol. 1398, pp. 137–142. Springer, Berlin (1998) 18. Salton, A.W., Yang, C.: A Vector Space Model for Automatic Indexing. Communications of the ACM 18(11), 613–620 (1975) 19. Lodhi, H., Saunders, C., Shawe-Taylor, J., Cristianini, N., Watkins, C.: Text Classification using String Kernels. J. of Machine Learning Research 2, 419–444 (2002) 20. Vapnik, V.: Statistical Learning Theory. Wiley-Interscience, New York (1998) 21. Cristianini, N., Shawe-Taylor, J.: An Introduction to Support Vector Machines. Cambridge University Press, Cambridge (2000) 22. Burges, C.J.C.: A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery 2(2), 121–167 (1998) 23. Costa, L.D.F., Junior, R.M.C.: Shape Analysis and Classification: Theory and Practice. CRC Press, Boca Raton, USA (2000) 24. Fan, R.E., Chen, P.H., Lin, C.J.: Working Set Selection using the Second Order Information for Training SVM. Technical report, Department of Computer Science, National Taiwan University (2005)
25. Kearns, M., Ron, D.: Algorithmic Stability and Sanity-check Bounds for Leave-one-out Cross-validation. Neural Computation 11, 1427–1453 (1999) 26. Martin, D.R., Fowlkes, C., Malik, J.: Learning to Detect Natural Image Boundaries using Local Brightness, Color, and Texture Cues. IEEE Transaction on PAMI 26, 530–549 (2004) 27. Daliri, M.R., Delponte, E., Verri, A., Torre, V.: Shape Categorization using String Kernels. In: Yeung, D.-Y., Kwok, J.T., Fred, A., Roli, F., de Ridder, D. (eds.) Structural, Syntactic, and Statistical Pattern Recognition. LNCS, vol. 4109, pp. 297–305. Springer, Heidelberg (2006)
RISE-SIMR: A Robust Image Search Engine for Satellite Image Matching and Retrieval
Sanjiv K. Bhatia1, Ashok Samal2, and Prasanth Vadlamani2
1 University of Missouri – St. Louis
[email protected]
2 University of Nebraska – Lincoln
{samal,pvadlama}@cse.unl.edu
Abstract. The current generation of satellite-based sensors produces a wealth of observations. The observations are recorded in different regions of electromagnetic spectrum, such as visual, infra-red, and microwave bands. The observations by themselves provide a snapshot of an area but a more interesting problem, from mining the observations for ecological or agricultural research, is to be able to correlate observations from different time instances. However, the sheer volume of data makes such correlation a daunting task. The task may be simplified in part by correlating geographical coordinates to observation but that may lead to omission of similar conditions in different regions. This paper reports on our work on an image search engine that can efficiently extract matching image segments from a database of satellite images. This engine is based on an adaptation of rise (Robust Image Search Engine) that has been used successfully in querying large databases of images. Our goal in the current work, in addition to matching different image segments, is to develop an interface that supports hybrid query mechanisms including the ones based on text, geographic, and content.
1
Introduction
The number of images continues to grow at a rapid pace due to the advances in technologies and with the advent of ever increasing high quality image capture devices. In addition, the resolution of the images continues to increase. This has led to an ever increasing search space in terms of searching for content within an image. This explosion in terms of quality and quantity of images makes image search a challenge. Although significant progress has been made in the field of data mining, research in image information mining is still in its infancy [1]. Furthermore, the ubiquity of Internet has resulted into a growing population of users searching for various pieces of information, including images [2]. More recently, there is a large volume of satellite-generated images available on the Internet. The Earth Observation Satellites easily generate images in the terabyte range every day. The satellite images are interpreted by a variety of users, from meteorologists to farmers to ecologists. And so, there is a growing need to retrieve images based on some specified criteria from the wealth of images sensed every day. G. Bebis et al. (Eds.): ISVC 2007, Part II, LNCS 4842, pp. 245–254, 2007. c Springer-Verlag Berlin Heidelberg 2007
Most of the current work in image search is based on textual annotations by a person to describe the image contents. At present, even the most popular search engine Google does not have the capability to perform content-based search to retrieve images that are similar to a given image. The text annotation method, used in text-based image retrieval (tbir) to describe images, has several drawbacks. First, databases keep changing on a day-to-day basis as new images are created every day. The new images need to be annotated continuously to make them useful for retrieval. Second, these methods require the subjectivity of human perception during indexing for annotating an image. This leads to a problem where too much responsibility is put on the end-user. It is limited in the sense that a user will use his/her context to describe an image and may miss important elements in the image. The semantic complexity of an image can lead to different descriptions, resulting in content or language mismatch [2]. A content mismatch occurs when a seemingly insignificant object or characteristic may be omitted in annotation and, later, a user searches for that omitted information. A language mismatch occurs when an object in an image can be labeled with many names. The phenomena of content and language mismatch are illustrated with Figures 1–2. Figure 1 describes how the flowers in the left image may be ignored due to less prominence in the image, even though it may be a query word entered by a user and the user might expect both the left and the right images. Figure 2 shows the more obvious language mismatch, where the flowers can be annotated by the terms “water lilies,” “flowers in a pond,” and by their biological name. These limitations of manual annotation create a need for efficient and automated retrieval systems that can minimize human intervention. The solution that is widely used to overcome these limitations is provided by content-based image retrieval (cbir) systems.
Fig. 1. Content-mismatch in images
A cbir system uses the contents or properties of an image in the form of color, shape, texture, and spatial layout to create an index for the images in the database. The query is also provided in the form of an image that is analyzed by the cbir system to determine the properties that are matched against the stored index to determine the degree of match of those images.
Fig. 2. Language mismatch
Our focus is to perform efficient and effective retrieval on images generated by satellites. These images are too complex to be annotated by a human in a meaningful manner which rules out the use of a pure tbir system. We prefer to use a system that can allow a user to formulate a query and perform retrieval in a meaningful manner, possibly using a hybrid of tbir and cbir approaches. In this paper, we describe a cbir system rise-simr – Robust Image Search Engine for Satellite Image Matching and Retrieval. rise-simr is based on our earlier general purpose image retrieval system rise that has been successfully deployed over the web [2]. rise works by looking at the color distribution in perceptual color space and creates an index for each image by dividing it in the form of a quad tree. rise works on an image as a whole by first scaling it and then, creating a quad tree from the scaled image. rise-simr, on the other hand, enhances rise by looking at different areas of an image to measure their relevance to the query, and also looks at textual data such as geographic coordinates of the area under consideration. It maintains the graphical user interface developed for rise and uses it to receive a query image and to display the results. This paper is organized as follows. Section 2 provides the details on rise that was the motivation behind rise-simr, and other related work. In Section 3, we present the design and implementation of rise-simr. This is followed by a description of experiments to show the performance of rise-simr in Section 4. The paper provides a summary and conclusion in Section 5.
2
Background and Related Work
The current generation of image database systems tend to fall into one of the two classifications: tbir and cbir. The tbir systems, for example the Google image search engine, retrieve images based on human-provided annotations and do not look into the images themselves to reason with them. The cbir systems resolve a query by matching the contents of a query image to other images in the database. cbir systems are further classified on the basis of the query interface mechanism: query-by-example and query-by-memory [3]. In query-by-example, a user
selects an example image as the query. In query-by-memory, a user selects image features, such as color, texture, shape, and spatial attributes, from his/her memory to define a query. rise-simr is a query-by-example system based on ideas from two applications: the Robust Image Search Engine (rise) [2] and the Image Characterization and Modeling System (icams) [4]. rise is a general-purpose image database application designed to organize, query, and retrieve a large set of images. rise processes each image to create a signature in the form of a quad tree, and stores this signature as a vector in a relational database. It processes a query image to build its quad tree-based signature and compares the query signature with the signatures of other images in the database to quantify their relevance to the query. rise is a cbir system and uses a query-by-example interface through a web browser. rise creates the signature based on color in perceptual space. However, color is not expected to be of much relevance in satellite images, especially if the images are falsely colored due to perception in different nonvisual regions of the electromagnetic spectrum. Therefore, we decided to use spatial autocorrelation features to compute the signature, as proposed in icams. In icams, spatial autocorrelation is described as the correlation of a variable to itself through space. If there is any systematic pattern in the spatial distribution of a variable, it is said to be spatially autocorrelated. If neighboring areas are described by similar patterns, the phenomenon is called positive correlation; different patterns in neighboring areas, such as checkerboard patterns, lead to negative correlation. Randomly distributed patterns do not exhibit spatial autocorrelation [5, 6]. icams was developed as a test-bed to evaluate the performance of various spatial-analytical methods on satellite images. It contains a number of image processing tools such as contrast stretching, edge detection, wavelet decomposition, and Fourier transform. The set of tools also includes Moran's I and Geary's C indices for spatial autocorrelation [5]. During the query, icams asks the user to define an object of interest. It then computes the query signature as the spatial and spectral characteristics of this object. icams searches a metadata table for similar object signatures which indicate the presence of matching objects. It effectively calculates the spatial autocorrelation value of a given image. icams uses a variety of techniques such as histogram matching and quad trees to create an index, and queries the database using the index. In the next section, we describe the design and implementation of rise-simr and compare it with rise and icams.
3
Retrieval of Satellite Images
rise-simr uses Moran’s I parameter which is one of the oldest indicators of spatial autocorrelation. It gives the join count statistics of differing spatial structures of the smooth and rough surfaces. It can be applied to zones or points with continuous variables associated with them. It compares the value of the variable at any one location with the values at all other locations [5, 6]. The parameter is given by
$$I = \frac{n \sum_i \sum_j w_{ij}(x_i - \bar{X})(x_j - \bar{X})}{\left(\sum_{i \ne j} w_{ij}\right) \sum_i (x_i - \bar{X})^2} \qquad (1)$$
where $n$ is the number of cases, $x_i$ is the variable value at location $i$, $\bar{X}$ is the mean of the variable $x$, and $w_{ij}$ is the weight applied to the comparison between locations $i$ and $j$. This expression gives us the signature of the image. rise-simr computes the signature of all images in the database using Moran's I parameter and stores the signatures in an Oracle database. It compares the signature of the query with the stored signatures and quantifies the comparison using the Euclidean distance. The quantification is used to rank the images in ascending order with respect to the query image. If an exact match is found, it will have the same autocorrelation value as the query image and yield a distance of 0 with respect to the query. rise-simr is based on the techniques we developed in rise. It uses the same query process as any cbir system. The query is submitted in the form of an image to rise-simr. rise-simr then processes the query image to compute its signature in the form of a Moran's I value and compares the computed signature with the signatures of images stored in the database. rise-simr quantifies the similarity between the computed and stored signatures using the Euclidean distance. rise-simr accommodates different image formats, including jpeg, gif, bmp and tiff. Using the techniques from the jpeg format, it abstracts an image into 8×8 pixel blocks and computes the average on those blocks. The average Moran's I parameter over the block, denoted by $\bar{I}$, is computed as:
$$\bar{I} = \frac{1}{64} \sum_{i=1}^{8} \sum_{j=1}^{8} I_{ij} \qquad (2)$$
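A hedged sketch of Eqs. (1)-(2). The contiguity weights w_ij (binary 4-neighbour weights here) and the exact aggregation behind Eq. (2) are not fully specified in the text, so both are assumptions; the sketch computes Moran's I for one grey-level block and averages the per-block values of an image.

```python
import numpy as np

def morans_i(block):
    """Moran's I (Eq. 1) for a 2-D grey-level block with binary 4-neighbour weights."""
    x = block.astype(float)
    dev = x - x.mean()
    num, wsum = 0.0, 0.0
    for a, b in ((dev[:, :-1], dev[:, 1:]), (dev[:-1, :], dev[1:, :])):
        num += 2.0 * np.sum(a * b)      # each neighbour pair counted in both directions
        wsum += 2.0 * a.size
    return (x.size * num) / (wsum * max(np.sum(dev ** 2), 1e-12))

def block_signature(image):
    """Per-block Moran's I over the 8x8 blocks of an image; their mean is one
    plausible reading of the block-averaged value of Eq. (2)."""
    h, w = (image.shape[0] // 8) * 8, (image.shape[1] // 8) * 8
    vals = [morans_i(image[r:r + 8, c:c + 8])
            for r in range(0, h, 8) for c in range(0, w, 8)]
    return float(np.mean(vals))
```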
rise-simr uses this approach to avoid the problem where two different images could lead to the same signature. For example, the two images in Figure 3 yield the same Moran’s I value as 0.924914. Such problems can be expensive in image
Fig. 3. Example satellite images used in rise-simr
search and can lead to false positives. Therefore, it is essential to look at the signature in different subregions of the images, and the signature computed for those subregions to refine the results of match. The regions in rise-simr, as in rise, are organized in the form of a quad tree. rise-simr builds a quad tree for each image and saves the signature at each node in the database tables. The use of quad tree makes the query process fairly simple while improving the efficiency of the process. rise-simr uses the standard spatial quad tree which implies that each internal node in the tree has exactly four child nodes [7]. This type of quad tree is also known as point-region quad tree. The leaf node of the quad tree is considered to be an 8 × 8 pixel block in the image which is the smallest addressable unit in our scheme. rise-simr also considers all the leaf nodes to be at the same level in the tree. Thus, the input image is scaled to be a power of 2 in both height and width. The internal nodes of the quad tree contain the features information or the search criteria. This criteria is used to make similarity comparisons between images at higher levels than an individual block. The internal nodes also contain information to access their four children, and represent the aggregate of information in the child nodes. We show the division of an n × n image into a quad tree structure in Figure 4.
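A minimal sketch of the quad-tree signature described above. The signature function is left as a parameter (np.mean as a runnable placeholder); in rise-simr it would be the block-averaged Moran's I from the previous sketch. The `levels` helper, used by the level-wise search sketched later, flattens the tree into one signature vector per depth. All names are our own.

```python
import numpy as np

def quadtree_signature(image, signature_fn=np.mean, leaf=8):
    """Recursive quad tree of per-window signatures; leaves are `leaf` x `leaf`
    pixel blocks. A square image with a power-of-two side is assumed, as in
    the text."""
    h, w = image.shape
    node = {"value": float(signature_fn(image)), "children": None}
    if h > leaf and w > leaf:
        h2, w2 = h // 2, w // 2
        node["children"] = [quadtree_signature(image[:h2, :w2], signature_fn, leaf),
                            quadtree_signature(image[:h2, w2:], signature_fn, leaf),
                            quadtree_signature(image[h2:, :w2], signature_fn, leaf),
                            quadtree_signature(image[h2:, w2:], signature_fn, leaf)]
    return node

def levels(node, depth=0, out=None):
    """Flatten the tree into one list of node signatures per depth."""
    out = {} if out is None else out
    out.setdefault(depth, []).append(node["value"])
    for child in node["children"] or []:
        levels(child, depth + 1, out)
    return out
```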
Fig. 4. Quad tree division of an image in rise-simr
rise-simr computes the similarity between the topmost levels of the images and the query using Euclidean distance. It allows the user to define a threshold distance such that it only considers those images for further processing whose distance from the query is less than the threshold. The process is repeated at each level in the quad tree structure, pruning the images with distance larger than the threshold from further consideration at each level. It is obvious that for the perfect match, or when the query matches an image exactly, the Euclidean distance at each successive level between the signatures is zero.
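The level-wise pruning described above, as a rough sketch: at each quad-tree level only the images within the user-defined threshold survive, and survivors are finally ranked by distance. The data layout (one signature vector per level, e.g. from the `levels` helper shown earlier) and all names are assumptions.

```python
import numpy as np

def prune_search(query_levels, database, threshold):
    """Level-wise pruning: at each quad-tree level, keep only the images whose
    per-level signature vector lies within `threshold` (Euclidean distance) of
    the query's; survivors are ranked by their distance at the deepest level."""
    candidates = list(database)
    dist = {}
    for lvl in sorted(query_levels):
        q = np.asarray(query_levels[lvl], dtype=float)
        survivors = []
        for name in candidates:
            d = float(np.linalg.norm(q - np.asarray(database[name][lvl], dtype=float)))
            if d <= threshold:
                dist[name] = d
                survivors.append(name)
        candidates = survivors
    return sorted(candidates, key=dist.get)
```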
In the next section, we present the details of our experimental set up and show the performance of rise-simr.
4
Retrieval Performance
We have successfully implemented the cbir system rise-simr as described in this paper. rise-simr is built on top of rise, adding some spatial statistics measures. rise is a general-purpose system based on query evaluation by measuring the distribution of color in perceptual space, using a quad tree structure. rise-simr uses the quad tree approach from rise but computes the signature using spatial statistics instead of the color distribution. Currently, we have implemented several spatial autocorrelation measures, including the Moran's I index. We have tested rise-simr using the icams database. The icams database contains over 1000 images. We indexed the images in rise-simr and ran some queries. We show two images from this dataset in Figure 3. Our first goal was to build a quad tree signature of Moran's I values for different images. Here, we note that the computed values differed slightly from those computed in icams. This was to be expected, since icams considers the entire image while we computed the values at different levels and averaged those values. In Figure 5, we show the values in the top three levels of the quad tree for the image in Figure 4.
Fig. 5. Moran’s I value in top three levels of quad tree
In Figures 6–9, we show the results obtained by rise-simr. Each of these figures shows an example query and the top 12 matches from the database.
5
Summary and Future Work
There is an urgent need to mine information from ever growing repositories of imagery derived from remotely sensed devices. This problem is only going to get worse as more satellites are launched and as the sensors continue to have better spatial and temporal resolution. In this paper, we have described our approach to mine information from satellite imagery using a cbir approach. Our approach is based on exploiting both the spectral information in the imagery as well as its geospatial nature. In order to provide efficiency to the computation, we use a
Fig. 6. Query 1 results
Fig. 7. Query 2 results
Fig. 8. Query 3 results
Fig. 9. Query 4 results
hierarchical approach to organize the image using a quadtree structure. This provides a way to set up match between images at different scales, which is ideal since geographic processes occur at different spatial scales. The results from this approach show that this approach has promise and needs further investigation. Our research can be extended along many dimensions. Our system is currently in early stages and has been tested using a smaller database of about 1000 images. The size of real datasets is orders of magnitude larger. We will need to address the issues of efficiency and high performance computation before the system can be deployed. Our approach now is to match a query image with images in the database. In the case of satellite images, which are geospatial in nature, there are multiple images for the same region derived at different times and with different types of sensors. Furthermore, there are many derived data products that are both raster images (e.g., vegetation index) and vector data (boundaries of lakes and buildings). Using all these imagery and data for querying is a challenging, but potentially beneficial application and needs to be explored.
References 1. Li, J., Narayanan, R.M.: Integrated spectral and spatial information mining in remote sensing imagery. IEEE Transactions on Geosciences and Remote Sensing 42, 673–685 (2004) 2. Goswami, D., Bhatia, S.K., Samal, A.: RISE: A Robust Image Search Engine. In: Pattern Recognition Research Horizons, Nova Publishers (2007) 3. van den Broek, E.L., Kisters, P.M., Vuurpijl, L.G.: Design guidelines for a contentbased image retrieval color-selection interface. In: Proceedings of the Conference on Dutch Directions in HCI, Amsterdam, Holland (2004) 4. Quattrochi, D.A., Lam, N., Qiu, H., Zhao, W.: Image characterization and modeling system (ICAMS): A geographic information system for the characterization and modeling of multiscale remote sensing data. In: Quattrochi, D.A., Goodchild, M.F. (eds.) Scale in Remote Sensing and GIS, pp. 295–307. Cambridge University Press, Cambridge (1997) 5. Emerson, C.W., Quattrochi, D.A., Lam, N.S.N.: Spatial metadata for remote sensing imagery. In: NASA’s Earth Science Technology Conference, Palo Alto, CA (2004) 6. Garrett, T.A., Marsh, T.L.: The revenue impacts of cross-border lottery shopping in the presence of spatial autocorrelation. Regional Science and Urban Economics 32, 501–519 (2002) 7. Samet, H.: The quadtree and related hierarchical data structures. ACM Computing Surveys 16, 187–260 (1984)
Content-Based Image Retrieval Using Shape and Depth from an Engineering Database Amit Jain, Ramanathan Muthuganapathy, and Karthik Ramani School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907, USA {amitj,rmuthuga,ramani}@purdue.edu
Abstract. Content based image retrieval (CBIR), a technique which uses visual contents to search images from the large scale image databases, is an active area of research for the past decade. It is increasingly evident that an image retrieval system has to be domain specific. In this paper, we present an algorithm for retrieving images with respect to a database consisting of engineering/computer-aided design (CAD) models. The algorithm uses the shape information in an image along with its 3D information. A linear approximation procedure that can capture the depth information using the idea of shape from shading has been used. Retrieval of objects is then done using a similarity measure that combines shape and the depth information. Plotted precision/recall curves show that this method is very effective for an engineering database.
1
Introduction
Content-based image retrieval (CBIR), a technique which uses visual contents to search images from large scale image databases has been an active research area for the last decade. Advances in the internet and digital imaging have resulted in an exponential increase in the volume of digital images. The need to find a desired image from a collection of databases has wide applications, such as, in crime prevention by automatic face detection, finger print, medical diagnosis, to name a few. Early techniques of image retrieval were based on the manual textual annotation of images, a cumbersome and also often a subjective task. Texts alone are not sufficient because of the fact that interpretation of what we see is hard to characterize by them. Hence, contents in an image, color, shape, and texture, started gaining prominence. Initially, image retrievals used the content from an image individually. For example, Huang and Jean [1] used a 2D C + -strings and Huang et al. [2] used the color information for indexing and its applications. Approaches using a combination of contents then started gaining prominence. Combining shape and color using various strategies such as weighting [3], histogram-based [4], kernel-based [5], or invariance-based [6] has been one of the premier combination strategies. G. Bebis et al. (Eds.): ISVC 2007, Part II, LNCS 4842, pp. 255–264, 2007. c Springer-Verlag Berlin Heidelberg 2007
An elastic energy-based approach that uses shape and texture to measure image similarity has been presented in [7]. Smith and Chang [8] presented an automated extraction of color and texture information using binary set representations. Li et al. [9] used a color histogram along with texture and spatial information. Image retrieval by segmenting the images has been the focus of a few research papers, such as [10] and [11]. A detailed overview of the literature available on CBIR can be found in [12] and [13]. A discussion on various similarity measurement techniques can be found in [14]. Even though research on image retrieval has grown exponentially, particularly in the last few years, it appears that less than 20% of it is concerned with applications or real-world systems. Though various combinations of contents and their possible descriptions have been tried, it is increasingly evident that a single system cannot cater to the needs of a general database. Hence, it is more relevant to build image retrieval systems that are specialized to domains. Also, the selection of appropriate features for CBIR and annotation systems remains largely ad hoc. In this paper, an approach for retrieving images from an engineering database is presented. As engineering objects are geometrically well defined, as opposed to natural objects, and rarely contain texture information, we use the shape (contour) of an image to capture its two-dimensional content, along with its 3D embedding information, the depth at each pixel on the contour. Shape from an image is quite a powerful representation as it characterizes the geometry of the object. However, it is normally a planar profile, and is insufficient by itself to recognize objects that are typically 3D in nature. To take the third dimension into account, other parameters such as color and/or texture have been used. In our paper, however, we propose an approach that combines shape with the depth map of the shape. The basic idea of our paper is illustrated in Fig. 1. A depth map, obtained from a depth-from-focus approach using multiple images, has been used in [15] for indexing and retrieval by segmentation. However, using depth information alone is not quite sufficient for well-defined geometric objects.
Fig. 1. Flow chart indicating the basic idea used in this paper
The rest of the paper is organized as follows. Section 2 describes the method to obtain the shape information, given an image. The method used for obtaining the 3D embedding information, i.e., the depth, is described in Section 3. A representation involving both shape and depth, along with its similarity measurements for retrieval, is described in Section 4. Retrieval results are presented and discussed in Section 5. Finally, Section 6 concludes the paper.
2 Obtaining Shape
Engineering objects are geometrically well defined, as most of them are obtained from a boolean combination of primitives. Hence, it is imperative to capture the geometry information. As the input is an image, its 2D information can be obtained by applying a contour detection algorithm. This geometry information can be termed the shape information for the particular image. Steps to obtain the contour of an image are shown in Fig. 2. The contour can be obtained by separating the object information from its background. This is done by converting the given image into a gray-scale image (Fig. 2(a)), which is then binarized (Fig. 2(b)). As contour detection algorithms are susceptible to small changes, converting to a binary image reduces this susceptibility. A simple threshold is applied to convert the gray-scale image into the binary image. This conversion can induce noise along the shape boundary; denoising using blurring techniques is then applied to remove it, which also eliminates isolated pixels and small regions. Applying the contour tracing algorithm generates the boundary shape (contours) of the object (Fig. 2(c)). A polynomial is then fit to simplify the contours, generating the contour image.
Fig. 2. Processing an input image (a) Grayscale image (b) Binarized image (c) Contour extraction
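As an illustration only (not code from the paper), the following sketch shows how such a grayscale-threshold-denoise-trace-simplify pipeline could be written with OpenCV; the threshold value, blur kernel size and polygon-approximation tolerance are arbitrary assumptions, and the polygonal simplification merely stands in for the polynomial fit mentioned above.

```python
import cv2

def extract_contour(image_path, thresh=128, blur_ksize=5, approx_eps=2.0):
    """Grayscale -> binarize -> denoise -> trace -> simplify the outer contour."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Simple global threshold separates the object from the background.
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    # Blurring-based denoising removes boundary noise, isolated pixels and small regions.
    binary = cv2.medianBlur(binary, blur_ksize)
    # Contour tracing yields the boundary shape of the object.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)
    # Polygonal simplification stands in for the polynomial fit mentioned in the text.
    simplified = cv2.approxPolyDP(contour, approx_eps, True)
    return simplified.reshape(-1, 2)          # (N, 2) array of boundary points
```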
Shape signature, a one-dimensional representation of the shape, is obtained by applying the 8-point connectivity technique on the 2D closed contour. As engineering/CAD objects have a well-defined centroid (x_c, y_c), and retrieval has been shown to be better with the central distance [16], we use it as our shape representation. The feature vector representing the central distance between a point on the contour (x, y) and the centroid (x_c, y_c) is given by

$$ V_c = (x - x_c,\; y - y_c,\; 0) \qquad (1) $$

where $x_c = \frac{1}{N}\sum_{i=0}^{N-1} x_i$, $y_c = \frac{1}{N}\sum_{i=0}^{N-1} y_i$, and N is the total number of pixels.
3 Computing Depth Map
Once the shape or the contour is obtained (as described in Section 2), its 3D information is then computed. Recovering the 3D information can be done in terms of the depth Z, the surface normal (n_x, n_y, n_z), or the surface gradient (p, q). One approach is to use several images taken under different lighting conditions, as in photometric stereo, and identify the depth by the change in illumination. However, in this paper, we use only a single image and not a set of images. Hence, the principles of shape from shading have been used to obtain the 3D embedding information. The Lambertian model, where it is assumed that an equal amount of light is reflected in every direction, is a reasonable approximation for engineering objects. In this model, the reflectance map is simplified to be independent of the viewer's direction. The important parameters in Lambertian reflectance are the albedo, which is assumed to be constant, and the illuminant direction, which can, in general, be computed. To identify the depth map (sometimes called simply the depth) of an image, we use the approach proposed by [17], where it is assumed that the lower order components in the reflectance map dominate. The linearity of the reflectance map in the depth Z has been used instead of in p and q. Discrete approximations for p and q are employed and the reflectance is linearized in Z(x, y). The following are taken from [17] and are presented here for completeness. The reflectance function for the Lambertian surface is as follows:

$$ E(x, y) = R(p, q) = \frac{1 + p p_s + q q_s}{\sqrt{1 + p^2 + q^2}\,\sqrt{1 + p_s^2 + q_s^2}} \qquad (2) $$

where E(x, y) is the gray level at pixel (x, y), $p = \frac{\partial Z}{\partial x}$, $q = \frac{\partial Z}{\partial y}$, $p_s = \frac{\cos\tau \sin\sigma}{\cos\sigma}$, $q_s = \frac{\sin\tau \sin\sigma}{\cos\sigma}$, τ is the tilt of the illuminant and σ is the slant of the illuminant. Discrete approximations of p and q are given by the following:
p=
∂Z = Z(x, y) − Z(x − 1, y), ∂x
q=
∂Z = Z(x, y) − Z(x, y − 1) ∂y
(3)
The reflectance equation can be then rewritten as 0 = f (E(x, y), Z(x, y), Z(x − 1, y), Z(x, y − 1)) = E(x, y) − R(Z(x, y) − Z(x − 1, y), Z(x, y) − Z(x, y − 1))
(4)
For a fixed point (x, y) and a given image E, a linear approximation (Taylor series expansion up through the first-order terms) of the function f about a given depth map Z^{n-1}, solved using the iterative Jacobi method, results in the following reduced form:

$$ 0 = f(Z(x, y)) = f(Z^{n-1}(x, y)) + (Z(x, y) - Z^{n-1}(x, y))\,\frac{d f(Z^{n-1}(x, y))}{d Z(x, y)} \qquad (5) $$

For Z(x, y) = Z^n(x, y), the depth map at the n-th iteration can be solved using the following:

$$ Z^n(x, y) = Z^{n-1}(x, y) + \frac{-f(Z^{n-1}(x, y))}{\frac{d f(Z^{n-1}(x, y))}{d Z(x, y)}} \qquad (6) $$

where

$$ \frac{d f(Z^{n-1}(x, y))}{d Z(x, y)} = -1 \cdot \left( \frac{p_s + q_s}{\sqrt{1 + p^2 + q^2}\,\sqrt{1 + p_s^2 + q_s^2}} - \frac{(p + q)(p p_s + q q_s + 1)}{\sqrt{(1 + p^2 + q^2)^3}\,\sqrt{1 + p_s^2 + q_s^2}} \right) $$
Figs. 3(b) and 3(d) show the depth maps of the respective images in Fig. 3(a) and Fig. 3(c). It is to be noted that only the depth values at the contour are used in this paper, even though they are calculated for the interior region as well. In general, the Lambertian model assumption by itself is probably not sufficient. A more generalized model [18] that includes diffuse and specular properties can be used for a better approximation of the depth map.
Fig. 3. Image and the depth maps
The depth map is then represented in a way similar to the shape (Equation (1)). The feature vector representing depth is given by

$$ V_d = (0,\; 0,\; Z - Z_c) \qquad (7) $$

where Z is the depth of the contour obtained from Equation (6), and Z_c denotes the third dimension of the centroid.
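A schematic sketch of the linear shape-from-shading iteration of Equations (3)-(6) is given below, assuming NumPy, an input image E normalized to [0, 1], and the settings reported later in the paper (zero initial depth, light source direction (0, 0, 1), i.e. p_s = q_s = 0, and two iterations). The wrap-around gradients at the image border and the guard against a vanishing derivative are implementation choices of this sketch, not details from the paper.

```python
import numpy as np

def linear_sfs_depth(E, ps=0.0, qs=0.0, iterations=2):
    """Linear (Tsai-Shah style) shape from shading: iteratively refine Z so the
    Lambertian reflectance of Eq. (2) matches the image irradiance E."""
    Z = np.zeros_like(E, dtype=float)                      # zero initial depth
    for _ in range(iterations):
        # Discrete gradients of Eq. (3); np.roll wraps at the border (simplification).
        p = Z - np.roll(Z, 1, axis=1)
        q = Z - np.roll(Z, 1, axis=0)
        pq = np.sqrt(1.0 + p ** 2 + q ** 2)
        pqs = np.sqrt(1.0 + ps ** 2 + qs ** 2)
        # Residual f of Eq. (4): irradiance minus Lambertian reflectance.
        f = E - (1.0 + p * ps + q * qs) / (pq * pqs)
        # Derivative df/dZ from the linearization (Eqs. 5-6).
        df_dZ = -1.0 * ((ps + qs) / (pq * pqs)
                        - (p + q) * (p * ps + q * qs + 1.0) / (pq ** 3 * pqs))
        # Jacobi update Z^n = Z^(n-1) + (-f) / (df/dZ); pixels with a vanishing
        # derivative are left unchanged (a numerical safeguard, not from the paper).
        update = np.zeros_like(Z)
        mask = np.abs(df_dZ) > 1e-8
        update[mask] = -f[mask] / df_dZ[mask]
        Z += update
    return Z
```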
4 Representation, Indexing and Retrieval

In this section, the introduced shape-depth representation is described, followed by indexing using Fourier descriptors and then a similarity measurement used for retrieval.
4.1 Shape-Depth Representation
As is evident, shape alone is not sufficient to get good retrieval. Typically, color has been combined with shape to obtain better retrieval results. As we are dealing with well-defined geometric objects, a novel strategy based on a 3D embedding has been adopted. Shape, in this paper, is combined with the corresponding estimated depth profile. Shape-Depth can be defined as I : R^2 → R^3. At each point on the contour, a vector is defined as follows:

$$ V = (x - x_c,\; y - y_c,\; Z - Z_c) \qquad (8) $$
Note that the vector in Equation (8) is of dimension three, which is quite low and hence enhances the speed of retrieval. It can be decomposed into V_c (Equation (1)) representing the shape/contour and V_d (Equation (7)) representing depth. A weighted combination of the magnitudes of the vectors V_c and V_d is used for retrieving images. The Shape-Depth representation is defined as follows:

$$ SD = \frac{w_c \cdot V_c + w_d \cdot V_d}{w_c + w_d} \qquad (9) $$
where w_c and w_d are the weights assigned to the shape-based similarity and the depth-based similarity, respectively, with w_c + w_d = 1, w_c > 0 and w_d > 0. It can be observed that V_c captures the central distance measure in the 2D domain and V_d is a similar measure on the third dimension, the depth. It should be noted that the central distance captures both local and global features of the representation. It is safe to say that the shape-depth representation lies between contour-based and region-based representations and hence it could prove to be a very useful one for retrieving objects/images.

4.2 Fourier Transform of Shape-Depth and Indexing
A primary requirement of any representation for retrieval is that it is invariant to transformations such as translation, scaling and rotation. The Fourier transform is widely used for achieving this invariance. For any 1-D signature function, its discrete Fourier transform is given by

$$ a_n = \frac{1}{N}\sum_{i=0}^{N-1} SD(i)\, \exp(-j 2\pi n i / N) \qquad (10) $$

where n = 0, 1, ..., N − 1 and SD is given by Equation (9). The coefficients a_n are usually called Fourier Descriptors (FD), denoted as FD_n. Since the shape and depth representations described in this paper are translation invariant, the corresponding FDs are also translation invariant. Rotation invariance is achieved by using only the magnitude information and ignoring the phase information. Scale normalization is achieved by dividing the
magnitude values of the FDs by FD_1. The invariant feature vector used to index SD is then given by

$$ f = \left[ \frac{|FD_2|}{|FD_1|},\; \frac{|FD_3|}{|FD_1|},\; \ldots,\; \frac{|FD_{N-1}|}{|FD_1|} \right] \qquad (11) $$
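The following sketch, under the assumption that the contour has already been resampled to a fixed number of points and that the depth values along it are available, illustrates how the shape-depth signature of Equation (9) and the invariant Fourier-descriptor index of Equations (10)-(11) could be computed with NumPy; taking the magnitudes of V_c and V_d and averaging the depth to obtain Z_c are readings of the text, not details spelled out in the paper.

```python
import numpy as np

def shape_depth_signature(contour_xy, depth, wc=0.70, wd=0.30):
    """Weighted shape-depth signature of Eq. (9).
    contour_xy: (N, 2) resampled boundary points; depth: (N,) depth at those points."""
    xc, yc = contour_xy.mean(axis=0)
    zc = depth.mean()                                             # assumed centroid depth Z_c
    vc = np.hypot(contour_xy[:, 0] - xc, contour_xy[:, 1] - yc)   # |V_c|, Eq. (1)
    vd = np.abs(depth - zc)                                       # |V_d|, Eq. (7)
    return (wc * vc + wd * vd) / (wc + wd)

def invariant_fd_index(sd):
    """Translation/rotation/scale-invariant index of Eqs. (10)-(11)."""
    fd = np.fft.fft(sd) / len(sd)          # a_n of Eq. (10)
    mags = np.abs(fd)                      # magnitudes only -> rotation invariant
    return mags[2:] / (mags[1] + 1e-12)    # divide by |FD_1| -> scale invariant
```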
4.3 Similarity Measurement
The retrieval result is not a single image but a list of images ranked by their similarities to the query image, since CBIR is not based on exact matching. For a model shape indexed by the FD feature $f_m = [f_m^1, f_m^2, \ldots, f_m^N]$ and a database shape indexed by the FD feature $f_d = [f_d^1, f_d^2, \ldots, f_d^N]$, the Euclidean distance between the two feature vectors can then be used as the similarity measurement:

$$ d = \sqrt{\sum_{i=0}^{N-1} |f_m^i - f_d^i|^2} \qquad (12) $$

where N is the total number of sampled points on the shape contour.
5 Experimental Results
For testing the proposed approach, the engineering database [19] containing 1290 images (a fairly good amount for testing) has been used. It contains multiple copies of an image, and it also has the same images in arbitrary positions and rotations. The query image is also one of the images in the database. Our framework for CBIR is built in Visual C++. Test results for some objects are shown in Figs. 4(a) to 4(e), where only the first fifteen retrieved images are shown for the query image on the right. The parameters that can influence the retrieval results in our approach are the following: the number of sampling points used to compute the FDs, the weights w_c and w_d in Equation (9), and factors such as the light direction and the number of iterations when computing the depth. There is always a tradeoff when the number of sampling points is chosen. Too large a value will give a very good result at the cost of computation. On the other hand, using a smaller number of coefficients is computationally inexpensive but may not give accurate information. Based on the experimentation conducted on the contour plots of the database, the number of coefficients was chosen to be 100. For this sampling, the values of the weights that yield better results were identified to be w_c = 0.70 and w_d = 0.30. The depth computation uses 2 iterations, initialized to zero depth, with the light source direction (0, 0, 1). Fig. 4 shows the results for test images with the above-mentioned parameters, and the plotted precision-recall curves (Fig. 5) show that the combined shape-depth representation yields better retrievals than using shape alone. In all the test results, it is to be noted that the query image is also retrieved, which indicates that the shape-depth representation is robust. They also show that objects of genus > 1 are retrieved when the query is of zero genus.
Fig. 4. Retrieval Results for some Engineering objects
Fig. 5. Precision-Recall for Shape and Shape-Depth representations
This is because of the following reasons: interior contour information is not used in this experiment, and only the depth values at the contour have been used. We believe that the retrieval results will improve when the inside region of the contour, along with the depth at the interior, is used for the shape-depth representation. We also did not carry out the experiment with various light source directions, which could affect the depth map. The main advantage of using the depth content of the image is that we can represent objects close to how they are in three-dimensional space. As we are using only a single image to compute the depth map, it will be close to the real depth only if the image is in its most informative position. As a consequence, the current approach can produce very good results for objects having symmetry, such as 2.5D objects, and also when the depth map is computed from the most informative position for general objects, as is the case in most engineering images. However, in our approach, as we depend not only on the depth but also on the shape information, we can also retrieve objects that are in a different orientation, as can be seen in Fig. 4(d), though we have not analyzed the bounds on the orientation. In future, the weights w_c and w_d could be identified dynamically based on the changes in the depth map and the shape-depth correspondence. A better representation for the obtained depth information is also being explored.
6 Conclusions
The main contribution of this paper is the idea of combining the shape (contour) obtained from contour tracing with its 3D embedding, the depth information at each point on the contour. Similarity metrics are proposed to combine the shape and depth. It is shown that this approach is effective for retrieving engineering objects. It would be interesting to investigate whether the proposed shape representation is useful in other application domains, such as protein search in molecular biology.
References

1. Huang, P., Jean, Y.: Using 2d c+-strings as spatial knowledge representation for image database systems 27, 1249–1257 (1994)
2. Huang, J., Kumar, S.R., Mitra, M., Zhu, W.J., Zabih, R.: Spatial color indexing and applications. Int. J. Comput. Vision 35, 245–268 (1999)
3. Jain, A., Vailaya, A.: Image retrieval using color and shape. Pattern Recognition 29, 1233–1244 (1996)
4. Saykol, E., Gudukbay, U., Ulusoy, O.: A histogram-based approach for object-based query-by-shape-and-color in multimedia databases. Technical Report BU-CE-0201, Bilkent University, Computer Engineering Dept (2002)
5. Caputo, B., Dorko, G.: How to combine color and shape information for 3d object recognition: kernels do the trick (2002)
6. Diplaros, A., Gevers, T., Patras, I.: Combining color and shape information for illumination-viewpoint invariant object recognition 15, 1–11 (2006)
7. Pala, S.: Image retrieval by shape and texture. PATREC: Pattern Recognition, Pergamon Press 32 (1999)
8. Smith, J.R., Chang, S.F.: Automated image retrieval using color and texture. Technical Report 414-95-20, Columbia University, Department of Electrical Engineering and Center for Telecommunications Research (1995)
9. Li, X., Chen, S.C., Shyu, M.L., Furht, B.: Image retrieval by color, texture, and spatial information. In: Proceedings of the 8th International Conference on Distributed Multimedia Systems (DMS 2002), San Francisco Bay, CA, USA, pp. 152–159 (2002)
10. Carson, C., Thomas, M., Belongie, S., Hellerstein, J.M., Malik, J.: Blobworld: A system for region-based image indexing and retrieval. In: Third International Conference on Visual Information Systems. Springer, Heidelberg (1999)
11. Shao, L., Brady, M.: Invariant salient regions based image retrieval under viewpoint and illumination variations. J. Vis. Comun. Image Represent. 17, 1256–1272 (2006)
12. Veltkamp, R., Tanase, M.: Content-based image retrieval systems: A survey. Technical Report UU-CS-2000-34, Utrecht University, Department of Computer Science (2000)
13. Datta, R., Li, J., Wang, J.Z.: Content-based image retrieval: approaches and trends of the new age. In: MIR 2005: Proceedings of the 7th ACM SIGMM International Workshop on Multimedia Information Retrieval, pp. 253–262. ACM Press, New York (2005)
14. Chang, H., Yeung, D.Y.: Kernel-based distance metric learning for content-based image retrieval. Image Vision Comput. 25, 695–703 (2007)
15. Czúni, L., Csordás, D.: Depth-based indexing and retrieval of photographic images. In: García, N., Salgado, L., Martínez, J.M. (eds.) VLBV 2003. LNCS, vol. 2849, pp. 76–83. Springer, Heidelberg (2003)
16. Zhang, D.S., Lu, G.: A comparative study on shape retrieval using fourier descriptors with different shape signatures. In: Proc. of International Conference on Intelligent Multimedia and Distance Education (ICIMADE 2001), Fargo, ND, USA, pp. 1–9 (2001)
17. Tsai, P., Shah, M.: Shape from shading using linear-approximation. IVC 12, 487–498 (1994)
18. Lee, K.M., Kuo, C.C.J.: Shape from shading with a generalized reflectance map model. Comput. Vis. Image Underst. 67, 143–160 (1997)
19. Jayanti, S., Kalyanaraman, Y., Iyer, N., Ramani, K.: Developing an engineering shape benchmark for cad models. Computer-Aided Design 38, 939–953 (2006)
Automatic Image Representation for Content-Based Access to Personal Photo Album Edoardo Ardizzone, Marco La Cascia, and Filippo Vella Dipartimento di Ingegneria Informatica, Università of Palermo, Viale delle Scienze ed. 6 - 90128 Palermo - Italy {ardizzon,lacascia,filippo.vella}@unipa.it
Abstract. The proposed work exploits methods and techniques for the automatic characterization of images for content-based access to personal photo libraries. Several techniques, even if not reliable enough to address the general problem of content-based image retrieval, have proven quite robust in a limited domain such as that of the personal photo album. In particular, starting from the observation that most personal photos depict a usually small number of people in a relatively small number of different contexts (e.g. Beach, Public Garden, Indoor, Nature, Snow, City, etc.), we propose the use of automatic techniques borrowed from the fields of computer vision and pattern recognition to index images based on who is present in the scene and on the context where the picture was taken. Experiments on a personal photo collection of about a thousand images proved that relatively simple content-based techniques lead to surprisingly good results in terms of ease of user access to the data.
1 Introduction
The digital revolution in photo and video capture brought as a consequence the capability for home users to manage the capture-storage-fruition process on their own. Increasingly integrated systems for the acquisition, processing and storage of multimedia data are now available in almost any home. Moreover, the reduced cost of digital photography compared to traditional systems brought people to increase the number of images acquired and videos captured. However, even though digital photography and personal computing are now fully integrated, few attempts have been made to use completely automatic tools to organize, store and index data for efficient and content-based image retrieval (CBIR). The main problem in CBIR is the gap between the image data and its semantic meaning. Techniques proposed in the literature for multimedia data analysis and representation range from semi-automatic to fully automatic ones. Semi-automatic techniques require a lot of human effort and in many cases are therefore not of practical use. On the other hand, fully automatic techniques, typically related to low-level features such as color histograms, texture, shape, etc., tend
to miss the semantic meaning of the data. Moreover, even human-provided keywords or textual descriptions often fail to make explicit all the relevant aspects of the data. In this paper we present fully automatic techniques aimed at finding who is in the picture and where and when the picture was shot. When the picture was shot is information that comes for free, as all digital cameras attach a timestamp to the pictures they take. The paper is organized as follows: Section 2 describes techniques for the organization, storage and content-based retrieval of personal photo collections; Section 3 explains the proposed approach for the creation of automatic content-based image representations. The details of the image processing and analysis are described in Sections 4 and 5. The browsing capabilities of the system are evaluated in Section 6. Finally, Section 7 gives some conclusions.
2 Related Works
Even though there are several tools for storing and managing personal photo collections included in current operating systems or offered as on-line services, these tools currently do not include significant facilities for content-based browsing of the collection. One of the first personal photo collection browsers was reported by Kang and Shneiderman [7]. The goal of this system was to enable non-technical users of personal photo collections to browse and search efficiently for particular images. The authors proposed a very powerful user interface but implemented very limited CBIR capabilities. Moreover, the search was heavily based on manual annotation of the data. As the objects of interest in personal photos are often people, Zhang et al. [14] addressed the problem of automated annotation of human faces in family albums. CBIR techniques and face recognition are integrated in a probabilistic framework. Based on initial training data, models of each person are built, and faces in images are often recognized correctly even in the presence of some occlusions. User interaction during the annotation process is also possible to reinforce the classifier. Experimental results on a family album of a few thousand photos showed the effectiveness of the approach. Abdel-Mottaleb and Chen [1] also studied the use of face arrangement in photo album browsing and retrieval. In particular, they defined a similarity measure based on face arrangement that can be computed automatically and is used to define clusters of photos and finally to browse the collection. A photo management application leveraging face recognition technology has also been proposed by Girgensohn et al. [5]. The authors implemented a user interface that greatly helps users in face labelling. Other semi-automatic techniques have been proposed recently. In [10] the system suggests identity labels for photos during the manual annotation phase. The system does not use face detection or face recognition techniques. Identity hypotheses are based on the time and location of each picture in the collection,
assumed known, and on previous annotations of the user. The implementation of an on-line system with capabilities for photo upload, annotation, browsing and sharing is described in [8]. Classical perceptual features, EXIF data and manual annotation are put together in a powerful GUI in an effort to help the user in browsing large collections of photos. Also in [3], a sophisticated user interface aimed at helping the user in the manual annotation of personal photos is reported but, even with the help of advanced tools, the annotation process remains tedious and time consuming. A different approach has been proposed by Graham et al. [6]. The authors propose an interesting photo browser for collections of time-stamped digital images and exploit the timing information to structure the collection and automatically generate summaries of the data. A general introduction to the problem of home photo album management can also be found in [9], where the authors analyze the needs of digital personal photo albums and present a preliminary system aimed at helping users in organizing, retrieving, viewing and sharing their images. Even though user interfaces and semi-automatic tools can significantly help the user in annotating the data, we believe fully automatic annotation has to be considered a mandatory goal, even at the cost of some wrong people identification or context labelling.
3 Basic Idea
Our work is based on the observation that, using current state-of-the-art techniques, a large number of faces can be automatically detected, rectified, resampled, cropped [2] and finally projected into a common low-dimensional face space. A few coefficients of the projection can then be used as a face descriptor. Moreover, as in the studied context most of the faces belong to a quite small set of individuals, a relatively low-dimensional face space provides enough discriminant power. The remaining part of the images (the background) can be characterized by means of low-level features that are useful in discriminating between different contexts (where). Also in this case, as the typical user is interested in a limited number of different contexts, the link between low-level features of the background and semantic labels (i.e. indoor, beach, snow, mountain, ...) can reasonably be exploited using a small training set and a supervised classification approach [12]. Each image in the collection is then represented as none, one or more points in the low-dimensional space of the faces and as a context label in the space of the locations, as explained in Section 5. The process of image representation is shown in Fig. 1. In the following sections the processing of visual information in the two chosen representation spaces is fully described.
4 People Processing
Fig. 1. Image representation for personal photo collections

As faces in personal photos appear in very different conditions of pose, illumination and expression, standard face recognition or clustering techniques tend to fail. Significant processing is needed in order to make detected faces more suited for identity-based representation. Following the lines of [2], each image to be archived in the system is searched for faces. Detected faces are then validated and rectified to a canonical pose and size. The face detector we adopted [13] is usually successful in detecting faces in a quite large range of pose, expression and illumination conditions. Some sort of face normalization is then needed. In particular, as suggested by Berg et al. [2], we try to detect five features per face (external corners of the left and right eyes, corners of the mouth and tip of the nose) and, if detection is successful, we estimate an affine transformation to rescale and align the face to a canonical position. A final crop to 100 × 80 pixels brings each face to a common reference system.

4.1 Face Rectification
The face processing is based on a first step of face validation and a second phase of rectification. Face validation is achieved by detecting a set of fiducial points covering relevant areas of the face. In this system the fiducial points are the left eye external corner, right eye external corner, tip of the nose, mouth left corner, and mouth right corner. Features are detected using five Support Vector Machines (SVMs) trained with hand-labelled data. Each detected face is tested by running each SVM over the image region to find facial features. If at least three features are detected with a high degree of confidence, an affine transformation bringing all the faces into a common reference system is estimated by least squares. Unfortunately, face detection as well as facial feature detection are error prone, so in many cases it is not possible to obtain meaningful faces from generic images. Even worse, in some cases the SVMs estimate wrong facial features with high confidence, leading to non-significant face data. A test on the faceness [11] of the rectified face could easily be used to automatically filter out non-faces. Fig. 2 shows a few examples of automatically detected and rectified faces. Note that, even though faces are somewhat distorted, the identity of the depicted people is still evident and the faces are reasonably aligned to allow for appearance-based similarity search [11].
Fig. 2. Examples of correctly rectified faces
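A minimal sketch of the rectification step, assuming the fiducial points have already been located; the canonical target positions are invented placeholders (the paper does not report them), and the least-squares affine estimate plus warp is one straightforward way to realize the alignment described above.

```python
import numpy as np
import cv2

# Hypothetical canonical positions (x, y) of the five fiducial points inside the
# 100x80 crop; the paper does not report the actual values used.
CANONICAL = np.array([[20, 30], [60, 30], [40, 48], [28, 65], [52, 65]], dtype=np.float64)

def rectify_face(image, detected_pts, canonical_pts=CANONICAL):
    """Least-squares affine alignment of a detected face to the common 100x80 frame.
    detected_pts must correspond row-by-row to canonical_pts (at least three points)."""
    src = np.asarray(detected_pts, dtype=np.float64)
    dst = np.asarray(canonical_pts, dtype=np.float64)
    # Solve [x y 1] A = [x' y'] in the least-squares sense; A is 3x2.
    X = np.hstack([src, np.ones((len(src), 1))])
    A, _, _, _ = np.linalg.lstsq(X, dst, rcond=None)
    M = A.T                                   # 2x3 affine warp matrix
    # dsize is (width, height): an 80-pixel-wide, 100-pixel-tall crop.
    return cv2.warpAffine(image, M, (80, 100))
```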
4.2 Face Representation
Once a face has been detected and successfully rectified and cropped, a 20-dimensional face descriptor is computed. The descriptor is a vector w containing the projection of the rectified and cropped face into a subspace of the global face space. In practice, the average face Ψ is subtracted from the 100 × 80 cropped and rectified face Γ_i, and the obtained image Φ is then projected onto the eigenspace to obtain w_i = e_i^T Φ. The average face Ψ and the 16 eigenimages e_i associated with the largest eigenvalues are shown in Fig. 3. The face space, as well as the average face, is learned off-line on a significant subset of the image collection and is not updated. At any time, if most of the faces present in the image collection differ significantly from the training set, it is possible to build a new face space and effortlessly recompute the projection of each detected, rectified and cropped face in the new face space. In our experience we learned the face space with about 500 images. Rebuilding the face space with 1000 images did not improve the performance of the system significantly. Other information, such as the size and the position of the originally detected face in the image or the reliability of the rectification process [2], is also stored for future use but is not exploited in the current version of the system.

Fig. 3. Average face and eigenfaces associated to the 16 largest eigenvalues shown in decreasing order left-to-right then top-to-bottom
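The eigenface learning and 20-coefficient projection just described can be sketched as follows; this is an assumption-laden illustration, not the authors' code: faces are flattened 100 × 80 crops, the face space is learned via an SVD of the centered training matrix, and each new face is described by its projection onto the top 20 eigenfaces.

```python
import numpy as np

def build_face_space(faces, k=20):
    """Learn the average face and the top-k eigenfaces from a training set.
    faces: (n, 100*80) array of rectified, cropped faces flattened to vectors."""
    psi = faces.mean(axis=0)                       # average face
    phi = faces - psi                              # centered faces
    # Principal directions via SVD of the centered data matrix.
    _, _, vt = np.linalg.svd(phi, full_matrices=False)
    eigenfaces = vt[:k]                            # (k, 100*80)
    return psi, eigenfaces

def face_descriptor(face, psi, eigenfaces):
    """20-dimensional descriptor w with components w_i = e_i^T (face - psi)."""
    return eigenfaces @ (face - psi)
```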
5 Background Processing
Each picture is processed for face detection [13] and rectification as seen in the previous section. The remaining part of the image is then processed as background (see Fig. 1). Extending the approach, more complex or additional detectors could be used to extract different kinds of objects of interest in the scene and operate a different figure/background segmentation (e.g. a detector for the entire body could be easily integrated in the system). The background is processed to capture relevant patterns able to characterize, with a coarse classification, the context of the scene depicted in the image. The representation of the background is dealt with through visual symbols or tokens usually referred to as visual terms. These terms are extracted considering that the distribution of the feature values in feature space tends to have multimodal density in the vector space and that centroids corresponding to different modes can be considered as forming a basis for data representation. For example, if A = {A_1, A_2, ..., A_M} is the set of M visual terms for the feature A, each image is represented by a vector V = (v_1, v_2, ..., v_M), where the i-th component takes into account the statistics of the term A_i in the image. The representation of the visual content can be enriched considering the spatial information conveyed by bigrams of visual terms in the chosen images [4], in a way similar to what is done in document-versus-word representations. The association of labels to background patterns, expressed as a function of visual terms, is shown in Section 5.1 and is performed with a supervised approach using a Maximal Figure of Merit (MFoM) classifier [4]. It is a classifier based on Linear Discriminant Functions (LDF) that is trained to optimize a chosen figure of merit (e.g. precision, recall, F1 measure, ...) and has been successfully employed in automatic image annotation in [4][12].

5.1 Supervised Background Labeling
The supervised association of labels to images is based on a training image set formed by the set of couples (X, Y), where X is a D-dimensional vector of values describing an image as a combination of visual terms and Y is the manually assigned label. The predefined keyword set is denoted as C = {C_j, 1 ≤ j ≤ N}, where N is the total number of keywords and C_j is the j-th keyword. In our experiments the label set is C = {Beach, PublicGarden, Indoor, Nature, Snow, City}. The LDF classifier used for the supervised classification is composed of a set of functions, as many as the number of data classes. Each function g_j is characterized by a set of parameters Λ_j that are trained in order to discriminate the positive samples from the negative samples of the j-th class. In the classification stage, each g-unit produces a score relative to its own class and the final keyword, assigned to the input image X, is chosen according to the following multiple-label decision rule:

$$ C(X) = \arg\max_{1 \le j \le N} g_j(X, \Lambda_j) \qquad (1) $$
Each g-unit competes with all the other units to assign its own label to the input image X. The unit achieving the best score is the most trustworthy and assigns its own label.

5.2 Multi-class Maximal Figure of Merit Learning
In Multi-Class Maximal Figure of Merit (MC MFoM) learning, the parameter set Λ_j for each class is estimated by optimizing a metric-oriented objective function. The continuous and differentiable objective function, embedding the model parameters, is designed to approximate a chosen performance metric (e.g. precision, recall, F1). To complete the definition of the objective function, a one-dimensional class misclassification function d_j(X, Λ) is defined to have a smoother decision rule:

$$ d_j(X; \Lambda) = -g_j(X, \Lambda) + g_j^-(X, \Lambda^-) \qquad (2) $$

where $g_j^-(X, \Lambda^-)$ is the global score of the competing g-units, defined as:

$$ g_j^-(X, \Lambda^-) = \log\left[ \frac{1}{|\bar{C}_j|} \sum_{i \in \bar{C}_j} \exp\left(g_i(X; \Lambda_i)\right)^{\eta} \right]^{\frac{1}{\eta}} \qquad (3) $$
If a sample of the j-th class is presented as input, d_j(X, Λ) is negative if the correct decision is taken; otherwise, it assumes a positive value when a wrong decision occurs. Since Eq. 2 produces results from −∞ to +∞, a class loss function l_j, with a range running from 0 to 1, is defined in Eq. 4:

$$ l_j(X; \Lambda) = \frac{1}{1 + e^{-\alpha\left(d_j(X;\Lambda) + \beta\right)}} \qquad (4) $$
where α is a positive constant that controls the size of the learning window and the learning rate, and β is a constant measuring the offset of d_j(X, Λ) from 0. Both values are empirically determined. The value of Eq. 4 simulates the error count made by the j-th image model for a given sample X. With the above definitions, the most commonly used metrics, e.g. precision, recall and F1, are approximated over the training set T and can be defined in terms of the l_j function. In the experiments the Det Error, which is a function of both the false negative and false positive error rates, has been considered. It is defined as:

$$ DetE = \sum_{1 \le j \le N} \frac{FP_j + FN_j}{2 \cdot N} \qquad (5) $$
The Det Error is minimized using a generalized probabilistic descent algorithm [4] applied to all the linear discriminant g-units, which are characterized by a function of the form g_j(X, Λ_j) = W_j · X + b_j, where the parameters W_j and b_j form the j-th concept model.
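For illustration, the scoring, misclassification and loss functions of Equations (1)-(4) might be written as below; the generalized probabilistic descent training itself is omitted, and the values of alpha, beta and eta are placeholders rather than the ones used by the authors.

```python
import numpy as np

def class_scores(X, W, b):
    """Linear discriminant scores g_j(X) = W_j . X + b_j for all N classes."""
    return W @ X + b                                   # (N,) vector of scores

def misclassification(scores, j, eta=1.0):
    """d_j of Eq. (2): own score against the log-average of competitors (Eq. 3)."""
    competitors = np.delete(scores, j)
    g_minus = np.log(np.mean(np.exp(competitors) ** eta)) / eta
    return -scores[j] + g_minus

def class_loss(d_j, alpha=1.0, beta=0.0):
    """Sigmoid loss l_j of Eq. (4), a smooth stand-in for the 0/1 error count."""
    return 1.0 / (1.0 + np.exp(-alpha * (d_j + beta)))

def predict_label(X, W, b, labels):
    """Multiple-label decision rule of Eq. (1): the best-scoring g-unit wins."""
    return labels[int(np.argmax(class_scores(X, W, b)))]
```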
6 Experimental Results
The proposed image representation for personal photo albums allows a simple and efficient organization according to semantic content. This representation makes it possible to effectively browse the entire album along the two chosen representation dimensions (who and where). As shown in Section 4, an emerging representation in the space of the normalized faces is available through the principal component analysis of the rectified faces. The axes of this representation are given by the eigenfaces of the chosen training set. Although the representation in this space allows a straightforward organization in clusters, the discovery of the structure and the browsing of a clustered space are beyond the scope of this paper. In this work we tested the efficiency of the representation scheme with face-based queries by example and label-based direct queries. In a face-based query by example, a sample face is given as input. The face is then rectified and used as a starting point in the space formed by the eigenfaces. A vector metric allows us to pick the faces that are most similar to the input face and then the images containing those faces. The result is an image set. Since the visual content of the background is described with labels learned in a supervised fashion, queries can also be oriented to the retrieval of images with the same labels. In this case photo retrieval is a simple search based on the labels associated with the visual information, as shown in Section 5. The experiments have been aimed at the evaluation of the retrieval capability of the proposed system in terms of face identification and background labelling. To evaluate the performance of the proposed system we ran a set of experiments on a real photo collection. The digital album used is a subset of a real personal collection of 1008 images taken in the last three years. The presented process for face detection and rectification led to the extraction of 331 images of rectified faces. All the rectified faces have been projected onto the truncated face space, considering 20 eigenfaces for the representation. Each face and each image were then hand labelled, respectively, with the correct identity of the person and with the correct context label for evaluation purposes. To evaluate the face identification capabilities we ran a set of queries by example. Firstly, we randomly selected 5 images in the dataset containing a single face. Then the query images were processed as seen in Section 4 for face detection, extraction and rectification. The extracted and rectified faces were used as examples for the retrieval of relevant images. The n nearest faces to the query face were then used to evaluate precision and recall. The face similarity is calculated through a distance based on the 20 coefficients of the eigenface representation. Since the most used metrics are the Euclidean and the Mahalanobis distance, the results of both are reported in the experiments. With the chosen experimental setup the Euclidean distance performed better than the Mahalanobis distance. In Fig. 4, precision vs. recall is plotted considering the results of the n nearest faces for n = 5, 10, 20, 30, 40, 50. The automatic labelling process was also evaluated by comparing the output of the classifier with the manually provided annotations. Results are shown in Table 1.
Fig. 4. Precision vs. Recall for query by example

Table 1. Accuracy of automatic image labelling

Context        Correctly classified
Beach          83%
Indoor         82%
Nature         48%
City           54%
PublicGarden   66%
Snow           59%
The analysis of the results shows that, using our representation scheme, some contexts are easier than others to detect, as they have prominent visual characteristics (for example Beach, PublicGarden, Indoor). The other contexts (Nature, Snow, City) showed slightly worse classification results.
7 Conclusions
A novel approach for the automatic representation and retrieval of images in personal photo albums has been presented. The proposed representation is based on a space for the faces based on Principal Component Analysis and a second space for the background characterization in terms of semantic labels. Experiments showed promising results in querying a personal photo album. We observed that errors are due mainly to incorrect face detection and rectification. In the future we plan to improve the robustness of the automatic image representation by developing a unifying framework to organize photos along different dimensions of content. Furthermore, other representation spaces could be added to represent recurring objects of interest different from faces, other visual features, time, text automatically extracted from the images, and so on. We expect that the use of such features in an integrated querying framework would definitely improve the browsing capabilities of the personal photo album.
References

1. Abdel-Mottaleb, M., Chen, L.: Content-based photo album management using faces' arrangement. In: IEEE Intern. Conf. on Multimedia and Expo (ICME) (2004)
2. Berg, T.L., Berg, A.C., Edwards, J., Maire, M., White, R., Teh, Y.W., Learned-Miller, E., Forsyth, D.A.: Names and faces in the news. In: Proc. of IEEE Intern. Conf. on Computer Vision and Pattern Recognition (CVPR) (2004)
3. Cui, J., Wen, F., Xiao, R., Tian, Y., Tang, X.: Easyalbum: An interactive photo annotation system based on face clustering and re-ranking. In: Proc. of ACM Conf. on Human Factors in Computing Systems (CHI) (2007)
4. Gao, S., Wang, D.-H., Lee, C.-H.: Automatic image annotation through multitopic text categorization. In: Proc. of Inter. Conf. on Acoustics, Speech and Signal Processing (ICASSP) (2006)
5. Girgensohn, A., Adcock, J., Wilcox, L.: Leveraging face recognition technology to find and organize photos. In: Proc. of ACM Intern. Workshop on Multimedia Information Retrieval (MIR) (2004)
6. Graham, A., Garcia-Molina, H., Paepcke, A., Winograd, T.: Time as essence for photo browsing through personal digital libraries. In: Proc. of ACM/IEEE Joint Conference on Digital Libraries (JCDL) (2002)
7. Kang, H., Shneiderman, B.: Visualization methods for personal photo collections: Browsing and searching in the photofinder. In: Proc. of IEEE Intern. Conf. on Multimedia and Expo (ICME) (2000)
8. Lee, B.N., Chen, W.-Y., Chang, E.Y.: A scalable service for photo annotation, sharing and search. In: Proc. of ACM Intern. Conf. on Multimedia (2006)
9. Mulhem, P., Lim, J.H., Leow, W.K., Kankanhalli, M.S.: Advances in digital home photo albums. In: Deb, S. (ed.) Multimedia Systems and Content-Based Image Retrieval, ch. IX, Idea Publishing (2004)
10. Naaman, M., Yeh, R.B., Garcia-Molina, H., Paepcke, A.: Leveraging context to resolve identity in photo albums. In: Proc. of ACM/IEEE Joint Conference on Digital Libraries (JCDL) (2005)
11. Turk, M.A., Pentland, A.P.: Face recognition using eigenfaces. In: Proc. of IEEE Intern. Conf. on Computer Vision and Pattern Recognition (CVPR) (1991)
12. Vella, F., Lee, C.-H.: Boosting of maximal figure of merit classifiers for automatic image annotation. In: Proc. of IEEE Intern. Conf. on Image Processing (ICIP) (2007)
13. Viola, P., Jones, M.: Rapid object detection using a boosted cascade of simple features. In: Proc. of IEEE Intern. Conf. on Computer Vision and Pattern Recognition (CVPR) (2001)
14. Zhang, L., Chen, L., Li, M., Zhang, H.: Automated annotation of human faces in family albums. In: Proc. of ACM Intern. Conf. on Multimedia (2003)
Geographic Image Retrieval Using Interest Point Descriptors Shawn Newsam and Yang Yang Computer Science and Engineering, University of California, Merced, CA 95344, USA snewsam, [email protected]
Abstract. We investigate image retrieval using interest point descriptors. New geographic information systems such as Google Earth and Microsoft Virtual Earth are providing increased access to remote sensed imagery. Content-based access to this data would support a much richer interaction than is currently possible. Interest point descriptors have proven surprisingly effective for a range of computer vision problems. We investigate their application to performing similarity retrieval in a ground-truth dataset manually constructed from 1-m IKONOS satellite imagery. We compare results of using quantized versus full descriptors, Euclidean versus Mahalanobis distance measures, and methods for comparing the sets of descriptors associated with query and target images.
1 Introduction
New geographic information systems such as Google Earth and Microsoft Virtual Earth are providing increased access to geographic imagery. These systems, however, only allow users to view the raw image data. Automated techniques for annotating the image content would enable much richer interaction. Solutions for land-use classification, similarity retrieval, and spatial data mining would not only serve existing needs but would also spawn novel applications. Automated remote sensed image analysis remains by-and-large an unsolved problem. There has been significant effort over the last several decades in using low-level image descriptors, such as spectral, shape and texture features, to make sense of the raw image data. While there have been noted successes for specific problems, plenty of opportunities for improvement remain. In this paper, we investigate the application of a new category of low-level image descriptors, termed interest points, to remote sensed image analysis. Interest point descriptors have enjoyed surprising success for a range of traditional computer vision problems. There has been little research, however, on applying them to remote sensed imagery. In previous work [1], we showed that a straightforward application of interest point descriptors to similarity retrieval performed comparably to state-of-the-art approaches based on global texture analysis. In this paper, we explore the interest point descriptors further. Our investigation is done in the context of similarity retrieval, which is not only a useful application but also serves as an excellent platform for evaluating a
descriptor. We investigate several methods for using interest point descriptors to perform similarity retrieval in a large dataset of geographic images. We compare the results of using quantized versus full-length descriptors, of using different descriptor-to-descriptor distance measures, and of using different methods for comparing the sets of descriptors representing the images.
2 Related Work
Content-based image retrieval (CBIR) has been an active research area in computer vision for over a decade, with IBM's Query by Image Content (QBIC) system from 1995 [2] one of the earliest successes. A variety of image descriptors have been investigated, including color, shape, texture, spatial configurations, and others. A recent survey is available in [3]. Image retrieval has been proposed as an automated method for accessing the growing collections of remote sensed imagery. As in other domains, a variety of descriptors have been investigated, including spectral [4,5], shape [6], texture [7,8,9,10], and combinations such as multi-spectral texture [11]. The recent emergence of interest point descriptors has revitalized many research areas in computer vision. A number of different techniques have been proposed which have two fundamental components in common. First, a method for finding the so-called interesting or salient locations in an image. Second, a descriptor for describing the image patches at these locations. Interest point detectors and descriptors have been shown to be robust to changes in image orientation, scale, perspective and illumination conditions as well as to occlusion, and, like global features, do not require segmentation. They are very efficient to compute, which allows them to be used in real-time applications. They have been successfully applied to problems such as image stereo pair matching, object recognition and categorization, robot localization, panorama construction, and, relevant to this work, image retrieval. Excellent comparisons of interest point detectors and descriptors can be found in [12] and [13], respectively. The application of interest point detectors and descriptors to image retrieval has focused primarily on retrieving images of the same object or scene under different conditions [14,15,16,17,18]. There has been little application to finding similar images or image regions. In particular, there has not been much investigation into using interest point descriptors to perform similarity retrieval in large collections of remote sensed imagery.
3 Interest Point Descriptors
We choose David Lowe’s Scale Invariant Feature Transform (SIFT) [19,20] as the interest point detector and descriptor. SIFT descriptors have been shown to be robust to image rotation and scale, and to be capable of matching images with geometric distortion and varied illumination. An extensive comparison with other local descriptors found that SIFT-based descriptors performed the best in an image matching task [13]. Like most interest point based analysis, there are
two components to extracting SIFT descriptors. First, a detection step locates points that are identifiable from different views. This process ideally locates the same regions in an object or scene regardless of viewpoint and illumination. Second, these locations are described by a descriptor that is distinctive yet also invariant to viewpoint and illumination. SIFT-based analysis exploits image patches that can be found and matched under different image acquisition conditions. The SIFT detection step is designed to find image regions that are salient not only spatially but also across different scales. Candidate locations are initially selected from local extrema in Difference of Gaussian (DoG) filtered images in scale space. The DoG images are derived by subtracting two Gaussian blurred images with different σ:

$$ D(x, y, \sigma) = L(x, y, k\sigma) - L(x, y, \sigma) \qquad (1) $$
where L(x, y, σ) is the image convolved with a Gaussian kernel with standard deviation σ, and k represents the different sampling intervals in scale space. Each point in the three-dimensional DoG scale space is compared with its eight spatial neighbors at the same scale, and with its nine neighbors at the adjacent higher and lower scales. The local maxima or minima are further screened for low contrast and poor localization along elongated edges. The last step of the detection process uses a histogram of gradient directions sampled around the interest point to estimate its orientation. This orientation is used to align the descriptor to make it rotation invariant. A feature descriptor is then extracted from the image patch centered at each interest point. The size of this patch is determined by the scale of the corresponding extremum in the DoG scale space. This makes the descriptor scale invariant. The feature descriptor consists of histograms of gradient directions computed over a 4x4 spatial grid. The interest point orientation estimate described above is used to align the gradient directions to make the descriptor rotation invariant. The gradient directions are quantized into eight bins so that the final feature vector has dimension 128 (4x4x8). This histogram-of-gradients descriptor can be roughly thought of as a summary of the edge information in a scale- and orientation-normalized image patch centered at the interest point.
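In practice, extracting these keypoints and 128-dimensional descriptors can be done with an off-the-shelf SIFT implementation; the sketch below assumes a recent OpenCV build in which SIFT_create is available and is not the exact tooling used by the authors.

```python
import cv2

def extract_sift(image_path):
    """Detect DoG keypoints and compute 128-D SIFT descriptors (Lowe's method)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    # keypoints carry location, scale and orientation; descriptors is an (n, 128) array
    return keypoints, descriptors
```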
4 Similarity Measures Using Full Descriptors
This section describes methods for computing the similarity between two images represented by sets of full interest point descriptors. First, we describe the comparison of single descriptors and then extend this to sets of descriptors.

4.1 Comparing Single Descriptors
SIFT descriptors are represented by 128-dimensional feature vectors. We use the standard Euclidean distance to compute the similarity between two SIFT descriptors. Let h_1 and h_2 be the feature vectors representing two SIFT descriptors. The Euclidean distance between these features is then computed as

$$ d_{Euc}(h_1, h_2) = \sqrt{(h_1 - h_2)^T (h_1 - h_2)} \qquad (2) $$
We also consider using the Mahalanobis distance to compare single descriptors. The Mahalanobis distance is equivalent to the Euclidean distance computed in a transformed feature space in which the dimensions (feature components) have uniform scale and are uncorrelated. The Mahalanobis distance between two feature vectors is computed as

$$ d_{Mah}(h_1, h_2) = \sqrt{(h_1 - h_2)^T \Sigma^{-1} (h_1 - h_2)} \qquad (3) $$

where Σ is the covariance matrix of the feature vectors.

4.2 Comparing Sets of Descriptors
Since images are represented by multiple interest point descriptors, we need a method to compute the similarity between sets of descriptors. We formulate this as a bipartite graph matching problem between a query and target graph in which the vertices are the descriptors and the edges are the distances between descriptors computed using either the Euclidean or Mahalanobis distance. We consider two different methods for making the graph assignments. In the first method, we assign each query vertex to the target vertex with the minimum distance, allowing many-to-one matches. Let the query image contain the set of m descriptors H_q = {h_{q1}, ..., h_{qm}} and the target image contain the set of n descriptors H_t = {h_{t1}, ..., h_{tn}}. Then, we define the minimum distance measure between the query and target image to be

$$ D_{min}(Q, T) = \frac{1}{m} \sum_{i=1}^{m} d_{min}(h_{qi}, T) \qquad (4) $$

where

$$ d_{min}(h_{qi}, T) = \min_{1 \le j \le n} d(h_{qi}, h_{tj}) \qquad (5) $$

and d(·, ·) is either the Euclidean or Mahalanobis distance. The factor of 1/m normalizes for the size of the query descriptor set. We also consider the optimal complete (perfect) assignment between query and target vertices. In this assignment we allow a query vertex to be assigned to at most one target vertex. In the case where there are fewer target than query vertices, we allow some of the query vertices to remain unassigned. We define the complete distance measure between the query and target image to be

$$ D_{comp}(Q, T) = \min_{f} \sum_{i=1}^{m} d(h_{qi}, h_{t f(i)}) \qquad (6) $$

where f(·) is an assignment which provides a one-to-one mapping from (1, ..., m) to (1, ..., n). Again, d(·, ·) is either the Euclidean or Mahalanobis distance. In the case where m > n, we allow m − n values not to be mapped and not contribute to the distance summation. We find the optimal mapping using the Hungarian algorithm [21], which runs in polynomial time in m and n. Finally, we normalize for the number of descriptors by dividing the distance by min(m, n).
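A compact sketch of the two set-to-set measures of Equations (4)-(6), assuming SciPy (1.4 or later for rectangular cost matrices); linear_sum_assignment implements the Hungarian-style optimal assignment and leaves the excess descriptors unmatched, as described above. For the Mahalanobis variant one would pass metric='mahalanobis' together with the inverse covariance matrix to cdist.

```python
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def d_min(Hq, Ht):
    """Minimum-distance measure of Eqs. (4)-(5): average over query descriptors
    of the distance to the closest target descriptor (many-to-one allowed)."""
    D = cdist(Hq, Ht)                       # (m, n) pairwise Euclidean distances
    return D.min(axis=1).mean()

def d_complete(Hq, Ht):
    """Complete-assignment measure of Eq. (6): optimal one-to-one matching,
    normalized by min(m, n)."""
    D = cdist(Hq, Ht)
    rows, cols = linear_sum_assignment(D)   # Hungarian algorithm
    return D[rows, cols].sum() / min(len(Hq), len(Ht))
```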
5 Similarity Measures Using Quantized Descriptors
As an alternative to using the full 128-dimensional descriptors, we investigate quantized features. Quantized features are more compact and support significantly faster similarity retrieval. Quantized interest point descriptors have proven effective in other image retrieval tasks [14]. The 128-dimensional descriptors were quantized using the k-means algorithm. The clustering was performed using features randomly sampled from a large training dataset (the full dataset was too large to cluster). The clustering results were then used to label the features in the test dataset with the ID of the closest cluster centroid. We compared k-means clustering and labeling using the Euclidean and Mahalanobis distance measures. A feature vector consisting of the counts of the quantized descriptors was used to compute the similarity between images. That is, H_quant for an image is

$$ H_{quant} = [t_0, t_1, \ldots, t_{c-1}] \qquad (7) $$

where t_i is the number of occurrences of quantized descriptors with label i and c is the number of clusters used to quantize the features. H_quant is similar to a term vector in document retrieval. The cosine distance measure has been shown to be effective for comparing documents represented by term vectors [22], so we use it here to compute the similarity between images. The similarity between a query image Q with counts [q_0, q_1, ..., q_{c-1}] and a target image T with counts [t_0, t_1, ..., t_{c-1}] is computed as

$$ D_{quant}(Q, T) = \frac{\sum_{i=0}^{c-1} q_i t_i}{\sqrt{\sum_{i=0}^{c-1} q_i^2}\, \sqrt{\sum_{j=0}^{c-1} t_j^2}} \qquad (8) $$
The cosine distance measure ranges from zero (no match) to one (perfect match). To make it compatible with the distance measures above, for which zero is a perfect match, we use one minus the cosine distance to perform similarity retrieval.
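A minimal scikit-learn sketch of this quantized representation and the resulting dissimilarity is given below, assuming Euclidean k-means; the sampled descriptors are random placeholders rather than actual SIFT data, and the Mahalanobis variant would whiten the descriptors before clustering.

import numpy as np
from sklearn.cluster import KMeans

def quantize(descriptors, kmeans):
    # Equation (7): histogram of cluster labels, i.e. a "term vector" of visual words.
    labels = kmeans.predict(descriptors)
    return np.bincount(labels, minlength=kmeans.n_clusters).astype(float)

def cosine_dissimilarity(q, t):
    # One minus the cosine measure of equation (8); zero means a perfect match.
    denom = np.linalg.norm(q) * np.linalg.norm(t)
    return (1.0 - (q @ t) / denom) if denom > 0 else 1.0

# Illustrative pipeline on random data (stand-ins for sampled descriptors).
rng = np.random.default_rng(2)
training_descriptors = rng.normal(size=(5000, 128))
kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(training_descriptors)
Hq = quantize(rng.normal(size=(60, 128)), kmeans)
Ht = quantize(rng.normal(size=(45, 128)), kmeans)
print(cosine_dissimilarity(Hq, Ht))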
6 Similarity Retrieval
The distance measures above are used to perform similarity retrieval as follows. Let Q be a query image and let $\mathcal{T}$ be a set of target images. The image $T^* \in \mathcal{T}$ most similar to Q is computed as

$T^* = \arg\min_{T \in \mathcal{T}} D(Q, T)$    (9)

where D(·, ·) is one of the image-to-image distance measures described above. Likewise, the k most similar images are those that result in the k smallest distances when compared to the query image. Retrieving the k most similar images is commonly referred to as a k-nearest neighbor (kNN) query.
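A minimal sketch of such a kNN query is given below; distance_fn stands for any of the image-to-image measures defined above.

import numpy as np

def knn_retrieval(query, targets, distance_fn, k=5):
    # Equation (9) generalised to the k most similar images: rank all targets
    # by their distance to the query and return the indices of the k smallest.
    d = np.array([distance_fn(query, t) for t in targets])
    return np.argsort(d)[:k]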
Given a ground-truth dataset, there are a number of ways to evaluate retrieval performance. One common method is to plot the precision of the retrieved set for different values of k. Precision is defined as the percent of the retrieved set that is correct and can be computed as the ratio of the number of true positives to the size of the retrieved set. It is straightforward and meaningful to compute and compare the average precision for a set of queries when the ground truth sizes are the same. (It is not straightforward to do this for precision-recall curves.) Plotting precision versus the size of the retrieved set provides a graphical evaluation of performance. A single measure of performance that not only considers whether the ground-truth items are in the top retrievals but also their ordering can be computed as follows [23]. Consider a query q with a ground-truth size of NG(q). The Rank(k) of the kth ground-truth item is defined as the position at which it is retrieved. A number K(q) ≥ NG(q) is chosen so that items with a higher rank are given a constant penalty:

$Rank(k) = \begin{cases} Rank(k), & \text{if } Rank(k) \le K(q) \\ 1.25\,K(q), & \text{if } Rank(k) > K(q) \end{cases}$    (10)

K(q) is commonly chosen to be 2NG(q). The Average Rank (AVR) for a single query q is then computed as

$AVR(q) = \frac{1}{NG(q)} \sum_{k=1}^{NG(q)} Rank(k)$    (11)
To eliminate the influence of different NG(q), the Normalized Modified Retrieval Rank (NMRR)

$NMRR(q) = \frac{AVR(q) - 0.5\,[1 + NG(q)]}{1.25\,K(q) - 0.5\,[1 + NG(q)]}$    (12)

is computed. NMRR(q) takes values between zero (indicating the whole ground truth was found) and one (indicating nothing was found), irrespective of the size of the ground truth for query q, NG(q). Finally, the Average Normalized Modified Retrieval Rank (ANMRR) can be computed for a set of NQ queries:

$ANMRR = \frac{1}{NQ} \sum_{q=1}^{NQ} NMRR(q)$    (13)
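A minimal Python sketch of equations (10)-(13) is given below; the example ranks are illustrative only.

import numpy as np

def nmrr(retrieved_ranks, ng, k=None):
    # NMRR for one query, equations (10)-(12). retrieved_ranks are the 1-based
    # positions at which the NG(q) ground-truth items were retrieved.
    k = 2 * ng if k is None else k                      # K(q) is commonly 2*NG(q)
    ranks = np.asarray(retrieved_ranks, dtype=float)
    ranks = np.where(ranks <= k, ranks, 1.25 * k)       # equation (10)
    avr = ranks.sum() / ng                              # equation (11)
    return (avr - 0.5 * (1 + ng)) / (1.25 * k - 0.5 * (1 + ng))   # equation (12)

def anmrr(per_query_ranks, per_query_ng):
    # Equation (13): mean NMRR over all queries.
    return float(np.mean([nmrr(r, ng) for r, ng in zip(per_query_ranks, per_query_ng)]))

# Example: perfect retrieval of a 4-item ground truth gives NMRR = 0.
print(nmrr([1, 2, 3, 4], ng=4))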
7 Dataset
A collection of 1-m panchromatic IKONOS satellite images was used to evaluate the retrieval methods. A ground truth consisting of ten sets of 100 64-by-64 pixel images was manually extracted from the IKONOS images for the following landuse/cover classes: aqueduct, commercial, dense residential, desert chaparral, forest, freeway, intersection, parking lot, road, and rural residential. Figure 1 shows
Fig. 1. Two examples from each of the ground-truth classes. (a) Aqueduct. (b) Commercial. (c) Dense residential. (d) Desert chaparral. (e) Forest. (f) Freeway. (g) Intersection. (h) Parking lot. (i) Road. (j) Rural residential.
Fig. 2. The interest point locations for the ground-truth images in figure 1
examples from each of these ten classes. SIFT interest point descriptors were extracted from each image as described in section 3. Figure 2 shows the locations of the detected interest points for the sample images in Figure 1. Each image contains an average of 59.1 interest points. A large set of features randomly sampled from the full IKONOS images was clustered using the k-means algorithm with both the Euclidean and Mahalanobis distance measures. The features in the 1,000 ground-truth images were labeled with the ID of the closest cluster centroid. Each ground-truth image is thus represented by the following:
– A set of full interest point descriptors.
– Quantized feature counts based on clustering using Euclidean distance.
– Quantized feature counts based on clustering using Mahalanobis distance.
8 Results
The retrieval performance of the different representations and similarity measures was evaluated by performing a comprehensive set of k-nearest neighbor similarity searches using each of the 1,000 images in the ground-truth dataset as a query. In particular, the following six methods were compared:
1. Quantized descriptors based on Euclidean clustering. Cosine distance.
2. Quantized descriptors based on Mahalanobis clustering. Cosine distance.
3. Full descriptors. Minimum distance measure using Euclidean distance.
4. Full descriptors. Minimum distance measure using Mahalanobis distance.
5. Full descriptors. Complete distance measure using Euclidean distance.
6. Full descriptors. Complete distance measure using Mahalanobis distance.
These methods are described in sections 4 and 5 and will be referred to by number in the rest of the paper. Similarity retrieval using the quantized descriptors was compared for cluster counts c ranging from 10 to 1000. The clustering was performed on 100,000 points selected at random from the large IKONOS images (a separate dataset from the ground-truth). We computed the average ANMRR over the ten ground-truth classes. This was done ten times for each value of c since the clustering process is not deterministic (it is initialized with random centroids and is applied to a random set of points). Figure 3 shows the ANMRR values for different numbers of clusters. Error bars show the first standard deviation computed over the ten trials for each c. Again, ANMRR values range from zero for all the ground-truth items retrieved in a result set the size of the ground-truth to one for none of the ground-truth items retrieved. We make two conclusions from the results in Figure 3. One, that it is better to quantize the descriptors using Euclidean kmeans clustering; and two, that the optimal number of clusters is 50. We use this optimal configuration in the remaining comparisons. Figure 4 plots precision (the percent of correct retrievals) versus result set size for the different methods. These values are the average over all 1,000 queries. Quantized descriptors are shown to outperform full descriptors for all result set sizes. The minimum distance measure is shown to outperform the complete distance measure for comparing sets of full descriptors. Finally, as above, Euclidean distance is shown to outperform Mahalanobis distance, this time when used for full descriptor-to-descriptor comparison. Table 1 lists the ANMRR values for the specific image categories. The values are the average over all 100 queries in each category. These results confirm that the quantized descriptors outperform the full descriptors on average. It is interesting to note, however, that no single method performs best for all categories. Finally, it is worth comparing the computational complexity of the different methods. On average, the 1,000 queries took approximately 2 seconds using the quantized descriptors, approximately 10 hours using the minimum distance measure for sets of full descriptors, and approximately 14 hours using the complete distance measure for sets of full descriptors. This significant difference results from the combinatorial expansion of comparing sets of descriptors and the cost
Fig. 3. Retrieval performance of descriptors quantized using k-means clustering for different numbers of clusters c. Shown for clustering with Euclidean and Mahalanobis distances. Image-to-image similarity is computed using the cosine distance measure.
Fig. 4. Retrieval performance in terms of precision versus size of result set
Table 1. Average Normalized Modified Retrieval Rank (ANMRR). Lower value is better.

Ground-truth        Method 1  Method 3  Method 4  Method 5  Method 6
Aqueduct            0.488     0.655     0.573     0.621     0.577
Commercial          0.575     0.668     0.703     0.761     0.896
Dense residential   0.432     0.412     0.795     0.670     0.959
Desert chaparral    0.015     0.002     0.062     0.003     0.493
Forest              0.166     0.131     0.764     0.338     0.940
Freeway             0.497     0.384     0.290     0.401     0.307
Intersection        0.420     0.435     0.672     0.675     0.953
Parking lot         0.314     0.361     0.526     0.301     0.617
Road                0.680     0.494     0.417     0.660     0.680
Rural residential   0.460     0.592     0.833     0.706     0.943
Average             0.405     0.413     0.563     0.514     0.736
of full descriptor-to-descriptor comparisons in the 128 dimension feature space. Conversely, comparing two images using quantized features only requires a single cosine distance computation. These timings were measured on a typical workstation. No distinction is made between using Euclidean and Mahalanobis distances since the latter is implemented by transforming the feature vectors before performing the queries.
9 Discussion
We reach the following conclusions based on the results above. Similarity retrieval using quantized interest point descriptors is more effective and significantly more efficient than using full descriptors. This is true regardless of how the sets of full descriptors for the images are matched–minimum or complete– and how the individual descriptors are compared–Euclidean or Mahalanobis. This finding is initially a bit surprising. One might expect the loss of information from quantizing the descriptors to reduce performance. However, it seems that a binary comparison between quantized descriptors is more effective than an exact (Euclidean or Mahalanobis) comparison between full descriptors. The cosine distance can be viewed as comparing sets of descriptors in which individual descriptors are matched if they are quantized to the same cluster. The exact distance between descriptors does not matter, only that they are in some sense closer to each other than they are to other descriptors. This actually agrees with how interest point descriptors are used to determine correspondences between stereo pairs [20]. It is not the exact distance between a pair of descriptors that is used to assign a point in one image to a point in another but the ratio of this distance to that of the next closest point. We showed that the optimal number of clusters used to quantize the descriptors seems to be around 50. This is lower than we expected. Other researchers [14] found that a much larger number of clusters, on the order of thousands,
performed better for matching objects in videos. While our application is different it would be interesting to investigate this further. This finding is significant as a coarser quantization supports higher scalability since it results in reduced feature representation and faster similarity comparison. We found that using the Euclidean distance to compare descriptors is better than the Mahalanobis distance. This is true for using k-means clustering to construct the quantization space. It is also true for computing individual descriptor-to-descriptor distances when comparing sets of full descriptors. This results from the distribution of the descriptors in the 128 dimension space. This again differs from the findings of other researchers [14] who used the Mahalanobis distance to cluster descriptors. It is not clear, however, if the Euclidean distance was considered or if it was just assumed that removing correlations and scale would improve the quantization induced by the clustering. We discovered that when comparing sets of full descriptors, it is better to allow many-to-one matches; that is, the minimum distance measure outperformed the complete distance measure. This agrees conceptually with the superior performance of the quantized descriptors. The cosine distance used to compare quantized descriptors “allows” multiple matches. Finally, we found that no method performed the best for all image classes. This requires additional investigation perhaps with a simpler, more homogeneous ground-truth dataset. Preliminary observations suggest that some methods are better at discriminating visually similar classes than others. In particular, the Mahalanobis distance measure seems better than the Euclidean distance measure at distinguishing the aqueduct, freeway and road classes which are very similar visually. We plan to investigate this further.
References 1. Newsam, S., Yang, Y.: Comparing global and interest point descriptors for similarity retrieval in remote sensed imagery. In: ACM International Symposium on Advances in Geographic Information Systems (ACM GIS) (2007) 2. Ashley, J., Flickner, M., Hafner, J., Lee, D., Niblack, W., Petkovic, D.: The query by image content (QBIC) system. In: ACM SIGMOD International Conference on Management of Data (1995) 3. Datta, R., Joshi, D., Li, J., Wang, J.Z.: Image retrieval: Ideas, influences, and trends of the new age. In: Penn State University Technical Report CSE 06-009 (2006) 4. Bretschneider, T., Cavet, R., Kao, O.: Retrieval of remotely sensed imagery using spectral information content. In: Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, pp. 2253–2255 (2002) 5. Bretschneider, T., Kao, O.: A retrieval system for remotely sensed imagery. In: International Conference on Imaging Science, Systems, and Technology, vol. 2, pp. 439–445 (2002) 6. Ma, A., Sethi, I.K.: Local shape association based retrieval of infrared satellite images. In: IEEE International Symposium on Multimedia (2005) 7. Li, Y., Bretschneider, T.: Semantics-based satellite image retrieval using low-level features. In: Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, vol. 7, pp. 4406–4409 (2004)
8. Hongyu, Y., Bicheng, L., Wen, C.: Remote sensing imagery retrieval based-on Gabor texture feature classification. In: International Conference on Signal Processing, pp. 733–736 (2004) 9. Manjunath, B.S., Ma, W.Y.: Texture features for browsing and retrieval of image data. IEEE Trans. on Pattern Analysis and Machine Intelligence 18, 837–842 (1996) 10. Newsam, S., Wang, L., Bhagavathy, S., Manjunath, B.S.: Using texture to analyze and manage large collections of remote sensed image and video data. Journal of Applied Optics: Information Processing 43, 210–217 (2004) 11. Newsam, S., Kamath, C.: Retrieval using texture features in high resolution multispectral satellite imagery. In: SPIE Defense and Security Symposium, Data Mining and Knowledge Discovery: Theory, Tools, and Technology VI (2004) 12. Mikolajczyk, K., Tuytelaars, T., Schmid, C., Zisserman, A., Matas, J., Schaffalitzky, F., Kadir, T., Gool, L.V.: A comparison of affine region detectors. International Journal of Computer Vision 65, 43–72 (2005) 13. Mikolajczyk, K., Schmid, C.: A performance evaluation of local descriptors. IEEE Trans. on Pattern Analysis and Machine Intelligence 27, 1615–1630 (2005) 14. Sivic, J., Zisserman, A.: Video Google: A text retrieval approach to object matching in videos. In: IEEE International Conference on Computer Vision, vol. 2, pp. 1470– 1477 (2003) 15. Schmid, C., Mohr, R.: Local grayvalue invariants for image retrieval. IEEE Trans. on Pattern Analysis and Machine Intelligence 19, 530–535 (1997) 16. Wang, J., Zha, H., Cipolla, R.: Combining interest points and edges for contentbased image retrieval. In: IEEE International Conference on Image Processing, pp. 1256–1259 (2005) 17. Wolf, C., Kropatsch, W., Bischof, H., Jolion, J.M.: Content based image retrieval using interest points and texture features. International Conference on Pattern Recognition 4, 4234 (2000) 18. Ledwich, L., Williams, S.: Reduced SIFT features for image retrieval and indoor localisation. In: Australasian Conference on Robotics and Automation (2004) 19. Lowe, D.G.: Object recognition from local scale-invariant features. In: IEEE International Conference on Computer Vision, vol. 2, pp. 1150–1157 (1999) 20. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60, 91–110 (2004) 21. Kuhn, H.W.: The Hungarian Method for the assignment problem. Naval Research Logistic Quarterly 2, 83–97 (1955) 22. Hand, D., Mannila, H., Smyth, P.: Principles of Data Mining. The MIT Press, Cambridge (2001) 23. Manjunath, B.S., Salembier, P., Sikora, T. (eds.): Introduction to MPEG7: Multimedia Content Description Interface. John Wiley & Sons, Chichester (2002)
Feed Forward Genetic Image Network: Toward Efficient Automatic Construction of Image Processing Algorithm Shinichi Shirakawa and Tomoharu Nagao Graduate School of Environment and Information Sciences, Yokohama National University, 79-7, Tokiwadai, Hodogaya-ku, Yokohama, Kanagawa, 240-8501, Japan
[email protected],
[email protected]

Abstract. A new method for the automatic construction of image transformations, Feed Forward Genetic Image Network (FFGIN), is proposed in this paper. FFGIN automatically evolves image transformations structured as feed-forward networks, which allows straightforward execution of the network-structured transformation. The genotype in FFGIN is a fixed-length representation consisting of a string that encodes the image-processing-filter ID and the connections of each node in the network. To verify the effectiveness of FFGIN, we apply it to the automatic construction of an image transformation for "pasta segmentation" and compare it with several other methods. The experimental results verify that FFGIN automatically constructs the desired image transformation. Additionally, the structure obtained by FFGIN is distinctive in that it reuses the transformed images.
1 Introduction
In image processing, it is difficult to select image processing filters that achieve the transformation from original images to their target images. The ACTIT system (Automatic Construction of Tree-structural Image Transformation) [1,2,3] has been proposed previously. ACTIT approximates an adequate image transformation from original images to their target images by a combination of several known image processing filters. ACTIT constructs tree-structured image processing filters using Genetic Programming (GP) [4,5]. Recently, an extension of ACTIT, the Genetic Image Network (GIN) [6], has been proposed. Instead of a tree representation, GIN uses an arbitrary network structure. The output images of each node in GIN change at every step, so GIN users must set the parameter "the number of steps", i.e., the step at which the output image is evaluated. This paper introduces a new method for the automatic construction of image transformations. This new method, named Feed Forward Genetic Image Network (FFGIN), uses a feed-forward network representation. FFGIN evolves a feed-forward network of image processing filters based on instance-based learning, in a similar way to ACTIT. The characteristic of FFGIN is its structure of connections of image processing filters (a feed-forward network
structure). In FFGIN, the network-structured image transformation can be executed straightforwardly, and the parameter "the number of steps" is not needed. To verify the effectiveness of FFGIN, we apply it to the automatic construction of an image transformation for "pasta segmentation". The next section of this paper gives an overview of several related works. In section 3, we describe our proposed method, Feed Forward Genetic Image Network (FFGIN). Next, in section 4, we apply the proposed method to the problem of automatic construction of image transformation and show several experimental results. Finally, in section 5, we present our conclusions and future work.
2 Related Works

2.1 Genetic Programming and Graph Based Genetic Programming
Genetic Programming (GP) [4,5] is one of the Evolutionary Computation techniques and was introduced by Koza. GP evolves computer programs, which are usually tree structures, and searches for a desired program using a Genetic Algorithm (GA). Today, many extensions and improvements of GP have been proposed. Parallel Algorithm Discovery and Orchestration (PADO) [7,8] is one of the graph-based GPs, which use a graph instead of a tree structure. PADO was applied to object recognition problems. Another graph-based GP is Parallel Distributed Genetic Programming (PDGP) [9]. In this approach, the tree is represented as a graph with function and terminal nodes located over a grid. Cartesian Genetic Programming (CGP) [10,11] was developed from a representation used for the evolution of digital circuits and represents a program as a graph. In CGP, the genotype is an integer string which denotes a list of node connections and functions. This string is mapped into the phenotype of an indexed graph. Recently, Genetic Network Programming (GNP) [12,13], which has a directed graph structure, has been proposed. GNP has been applied to generating the behavior sequences of agents and shows better performance compared to GP. In the literature on genetic approaches for neural network design or training, many kinds of methods have also been proposed [14], and the evolution of neural networks using a Genetic Algorithm (GA) has shown its effectiveness in various fields. The NeuroEvolution of Augmenting Topologies (NEAT) method for evolving neural networks has been proposed by Stanley [15]. Each genome in NEAT includes a list of connection genes, each of which refers to two node genes being connected.
2.2 Automatic Construction of Tree Structural Image Transformation (ACTIT)
ACTIT [1,2,3] constructs a tree of image processing filters, built from one-input one-output and two-input one-output filters, by using Genetic Programming (GP) to satisfy several given image examples. The individual in ACTIT
is a tree-structured image transformation. The terminal nodes of the tree are the original images and the non-terminal nodes are several kinds of image processing filters. The root node represents the output image. The user provides "Training images", and the ACTIT system constructs appropriate image processing automatically. 3D-ACTIT [2,3] is an extended method which automatically constructs various 3D image processing procedures and has been applied to medical image processing.
2.3 Genetic Image Network (GIN)
Genetic Image Network (GIN) [6] is a method for the automatic construction of image transformations. Instead of a tree representation, GIN uses an arbitrary network structure. GIN is composed of several nodes, which are well-known image processing filters with one or two inputs. The biggest difference between GIN and ACTIT is the structure of the connections between image processing filters. A network structure theoretically includes tree structure (i.e., a network can also represent a tree), so the descriptive ability of the network representation is higher than that of the tree structure. The genotype in GIN consists of a string which indicates the image processing filter type and the connections of each node. Previous work shows that GIN automatically constructs a simple structure for a complex image transformation using its network representation [6]. GIN is executed as follows. Initially, the original images are set to the "in" nodes. All nodes synchronously transform their input images and output the transformed image to their destination nodes. The output images of each node change at every step. After a predefined number of iterations (called "the number of steps"), the images at the output nodes are evaluated.
3 Feed Forward Genetic Image Network (FFGIN)

3.1 Overview
ACTIT uses Genetic Programming (GP) and has a tendency to create image processing programs of unnecessarily large size. This problem in GP is called bloat and increases the computational effort. In GIN, the output images of each node change at every step, so the user must set the parameter "the number of steps", i.e., the step at which the output image is evaluated. However, the optimum value of "the number of steps" cannot be determined in advance. To overcome these problems, we propose the Feed Forward Genetic Image Network (FFGIN), whose representation is a feed-forward network structure. It allows straightforward execution of the network-structured image transformation and does not need the parameter "the number of steps". The genotype is a string which denotes a list of image-processing-filter IDs and the connections of each node in the network. The features of FFGIN are summarized as follows:
– Feed-forward network structure of image processing filters.
– Efficient evolution of image processing programs, without bloat, through a fixed-length string genotype.
3.2 Structure of FFGIN
Feed Forward Genetic Image Network (FFGIN) automatically constructs image transformations structured as acyclic networks. Figure 1 illustrates an example of the phenotype (a feed-forward network structure) and the genotype (a string representing the phenotype) in FFGIN. Each node in FFGIN is a well-known image processing filter. In FFGIN, feedback structures are prohibited at the genotype level: each node takes its inputs either from the output of a previous node or from the input images, in a feed-forward manner. Therefore, the network-structured image transformation can be executed straightforwardly, and the parameter "the number of steps" is not needed. The main benefit of this type of representation is that it allows the implicit reuse of nodes in the network. To apply an evolutionary method, a genotype-phenotype mapping is used in the FFGIN system. This mapping is similar to that of Cartesian Genetic Programming (CGP). The feed-forward network of image processing filters is encoded in the form of a linear string. The genotype in FFGIN is a fixed-length representation and consists of a string which encodes the image-processing-filter ID and the connections of each node in the network. The number of nodes in the phenotype can vary but is bounded, as not all of the nodes encoded in the genotype have to be connected; this allows the existence of inactive nodes. The length of the genotype is fixed and equals $N_{node} \cdot (n_{in} + 1) + N_{out}$, where $N_{node}$ is the number of nodes, $n_{in}$ is the maximum number of inputs of the predefined filters, and $N_{out}$ is the number of output nodes. Because FFGIN constructs a feed-forward network of image processing filters, it can represent plural outputs, i.e., it can simultaneously construct several image transformations using only a single network structure. A minimal sketch of this encoding is given below.
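The following Python sketch illustrates one possible realisation of this genotype-phenotype mapping; it is a reconstruction from the description above rather than the authors' implementation, and the gene layout, the filters table (mapping a filter ID to a (function, arity) pair) and all names are assumptions.

def decode_and_execute(genotype, filters, inputs, n_nodes, n_in, n_out):
    # Assumed layout: for each node, one filter-ID gene followed by n_in
    # connection genes; the last n_out genes select the output nodes.
    # Index k < len(inputs) refers to input image k, otherwise to node k - len(inputs).
    n_inputs = len(inputs)
    node_genes = genotype[:n_nodes * (n_in + 1)]
    out_genes = genotype[n_nodes * (n_in + 1):]
    assert len(out_genes) == n_out

    # Mark active nodes by walking backwards from the output genes;
    # unreferenced (inactive) nodes are never executed.
    active, stack = set(), [g - n_inputs for g in out_genes if g >= n_inputs]
    while stack:
        node = stack.pop()
        if node in active:
            continue
        active.add(node)
        fid, *conns = node_genes[node * (n_in + 1):(node + 1) * (n_in + 1)]
        arity = filters[fid][1]
        stack.extend(c - n_inputs for c in conns[:arity] if c >= n_inputs)

    # Execute active nodes once, in feed-forward order (connections only point
    # to inputs or to earlier nodes, so a single forward pass suffices).
    values = list(inputs)
    for node in range(n_nodes):
        if node not in active:
            values.append(None)                 # inactive node, never computed
            continue
        fid, *conns = node_genes[node * (n_in + 1):(node + 1) * (n_in + 1)]
        func, arity = filters[fid]
        values.append(func(*(values[c] for c in conns[:arity])))
    return [values[g] for g in out_genes]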
3.3 Genetic Operators
To obtain the optimum structure of FFGIN, an evolutionary method is adopted. Since the genotype of FFGIN is a linear string, FFGIN can use a standard Genetic Algorithm (GA). In this paper we use uniform crossover and mutation as the genetic operators (a minimal sketch of both operators follows this description). The uniform crossover operator affects two individuals as follows:
– Select genes randomly, with probability equal to the crossover rate Pc for each gene.
– Swap the selected genes between the two parents to generate offspring.
The mutation operator affects one individual as follows:
– Select genes randomly, with probability equal to the mutation rate Pm for each gene.
– Randomly change the selected genes.
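A minimal Python sketch of the two operators is given below; the gene_ranges argument (the legal value range of each gene position, covering filter IDs, feed-forward connection indices and output genes) is a hypothetical helper, not part of the original description.

import random

def uniform_crossover(parent1, parent2, pc):
    # Swap each gene between the two parents with probability pc.
    child1, child2 = list(parent1), list(parent2)
    for i in range(len(child1)):
        if random.random() < pc:
            child1[i], child2[i] = child2[i], child1[i]
    return child1, child2

def mutate(genotype, pm, gene_ranges):
    # Re-draw each selected gene uniformly from its legal range.
    child = list(genotype)
    for i, (lo, hi) in enumerate(gene_ranges):
        if random.random() < pm:
            child[i] = random.randint(lo, hi)
    return child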
Fig. 1. Structure of FFGIN (phenotype) and the genotype which denotes a list of filter ID and connections
4 Experiments and Results
In this section, we apply FFGIN to the problem of automatic construction of an image transformation. Additionally, the effectiveness of FFGIN, GIN and ACTIT is compared.
4.1 Settings of Experiments
The "Training images" used in the experiments appear in Figure 2 (four training images). "Training images" consist of original images and target images. Target images are the images that the user requires after the image processing (the ideal results). This "pasta segmentation problem" was proposed as the subject of a competition at the Genetic and Evolutionary Computation Conference 2006 (GECCO 2006, http://cswww.essex.ac.uk/staff/rpoli/GECCO2006/pasta.htm). The problem consists in evolving a detection algorithm capable of separating pasta pixels from non-pasta pixels in pictures containing various kinds of (uncooked) pasta randomly placed on textured backgrounds. The problem is made harder by the varying lighting conditions and the presence, in some of the images, of "pasta noise" (i.e., small pieces of pasta representing alphanumeric characters) which must be labeled as background. All images used in the experiments are gray-scale images of 128 × 96 pixels.
Fig. 2. "Training images" used in the experiments. The number of "Training images" is four.

Table 1. Parameters of each algorithm

Parameter                        FFGIN   GIN    ACTIT
The number of generations        5000    5000   5000
Population size N                150     150    150
Crossover rate Pc                0.9     0.9    N/A
Mutation rate Pm                 0.03    0.03   0.9 (for individual)
Generation alternation model     MGG     MGG    MGG
Tournament size                  5       5      5
The maximum number of nodes      50      50     50
The number of steps              N/A     10     N/A
The number of independent runs   10      10     10
We use the mean error on the "Training images" as the fitness function. The fitness function used in the experiments is

$fitness = \frac{1}{N} \sum_{n=1}^{N} \left\{ 1 - \frac{\sum_{i=1}^{W} \sum_{j=1}^{H} |o_{ij}^{n} - t_{ij}^{n}|}{W \cdot H \cdot V_{max}} \right\}$    (1)

where $o^n$ is the transformed image and $t^n$ is its target. The numbers of pixels in the directions i and j are W and H, respectively, and the number of training image pairs is N. The range of this fitness function is [0.0, 1.0]; higher values indicate better performance. A minimal sketch of this fitness computation is given below.
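The sketch assumes 8-bit gray-scale images so that $V_{max}$ = 255; the array names are illustrative.

import numpy as np

def fitness(outputs, targets, v_max=255):
    # Equation (1): mean over the training pairs of 1 - (normalised pixel error).
    scores = []
    for o, t in zip(outputs, targets):
        o = np.asarray(o, dtype=float)
        t = np.asarray(t, dtype=float)
        h, w = o.shape
        scores.append(1.0 - np.abs(o - t).sum() / (w * h * v_max))
    return float(np.mean(scores))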
We use the Minimal Generation Gap (MGG) model as the generation alternation model. The MGG model [16,17] is a steady-state model proposed by Satoh et al. It has a desirable convergence property, maintains the diversity of the population, and shows higher performance than other conventional models in a wide range of applications. We use the MGG model in the experiments as follows (a minimal sketch of this loop is given below):
1. Set the generation counter t = 0. Generate N individuals randomly as the initial population P(t).
2. Select a set M of two parents by random sampling from the population P(t).
3. Generate a set C of m offspring by applying the crossover and mutation operators to M.
4. Select two individuals from the set M + C: the elite individual and an individual chosen by tournament selection. Replace M with these two individuals in population P(t) to obtain population P(t + 1).
5. Stop if the specified termination condition is satisfied; otherwise set t = t + 1 and go to step 2.
In the experiments we use m = 50. The parameters used by FFGIN, GIN and ACTIT are shown in Table 1; the parameters common to the three methods are identical. We prepared simple, well-known image processing filters for the experiments (27 one-input one-output filters and 11 two-input one-output filters), for instance the mean, maximum, minimum, Sobel, Laplacian, gamma correction, binarization, linear transformation, difference, logical sum, and logical product filters. FFGIN, GIN and ACTIT construct complex image processing from combinations of these filters. Results are given for 10 independent runs with the same parameter set.
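A minimal Python sketch of one MGG generation, with m = 50 and tournament size 5 as in Table 1, is given below; the crossover, mutate and evaluate arguments stand for the FFGIN operators and the fitness function above, and fitness values are recomputed rather than cached purely for brevity.

import random

def mgg_step(population, crossover, mutate, evaluate, m=50, tournament_size=5):
    # One MGG generation (steps 2-4 above): two random parents, m offspring,
    # then replace the parents with the elite and one tournament winner.
    i, j = random.sample(range(len(population)), 2)
    parents = [population[i], population[j]]
    family = list(parents)
    for _ in range(m // 2):
        c1, c2 = crossover(*parents)
        family.extend([mutate(c1), mutate(c2)])
    fitnesses = [evaluate(ind) for ind in family]           # higher is better
    elite = family[max(range(len(family)), key=fitnesses.__getitem__)]
    contenders = random.sample(range(len(family)), tournament_size)
    winner = family[max(contenders, key=fitnesses.__getitem__)]
    population[i], population[j] = elite, winner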
4.2 Results and Discussion
Figure 3 shows the output images for the "Training images" using FFGIN; the fitness of this image transformation is 0.9676. FFGIN scored very high, and the output images are extremely similar to the target images. FFGIN automatically constructed a feed-forward network-structured image transformation. Figure 4 shows the structure obtained by FFGIN (a feed-forward network of image processing filters). FFGIN constructs a structure that reuses the transformed images within its network. This structure cannot be constructed using
Fig. 3. Output images of “Training images” using FFGIN
Fig. 4. An example of obtained structure (image processing filters) by FFGIN
Fig. 5. "Test images" not used in the evolutionary process (four images), and the output images of the structure obtained by FFGIN
ACTIT. We only provide the "Training images"; FFGIN constructs the ideal image processing automatically. Next, we apply the feed-forward network of image processing filters constructed by FFGIN to "Test images". The "Test images" are not used in the evolutionary process (they are non-training images similar to the training images). The four "Test images" used in the experiments are shown in Figure 5, and the results (output images) also appear in Figure 5. From the output images, FFGIN transforms the test images
to the ideal images, extracting the "pasta" despite the varying lighting conditions and the presence of pasta noise. This shows that FFGIN automatically constructs a general image transformation through learning. Figure 6 shows the transition of the average fitness over 10 independent runs. According to these results, all algorithms constructed adequate image processing algorithms, and the performance of FFGIN is comparable with that of GIN and ACTIT. Finally, we discuss the computational time of the experiments. The results presented in this paper were generated on an Intel Core 2 Duo E6400 processor with 1 GB of memory. Table 2 compares the computational time of FFGIN, GIN and ACTIT. FFGIN is about 8 times faster than GIN and about 5 times faster than ACTIT because it evolves without bloat. FFGIN allows the reuse of nodes, and the constructed structure (image processing filters) tends to be compact; therefore the computational time decreases.

Fig. 6. The transition of fitness of FFGIN, GIN and ACTIT. Each curve is an average of 10 independent runs.

Table 2. Average computational time in seconds for each algorithm (average of 10 independent runs)

                     FFGIN   GIN     ACTIT
Computational time   5427    42620   28130
5 Conclusion and Future Work
In this paper, we proposed a new method for the automatic construction of image transformations, the Feed Forward Genetic Image Network (FFGIN), which evolves
image transformation programs structured as feed-forward networks. We applied FFGIN to evolve an image processing algorithm for "pasta segmentation" and confirmed that the optimum solution in each problem was obtained by the FFGIN system. The experimental results show that the performance of FFGIN is comparable with that of GIN and ACTIT, while its computational time is about 5 times shorter than that of ACTIT. In future work we will apply FFGIN to other image processing problems, in particular larger problems and other types of problems. Moreover, we will introduce mechanisms for simultaneously evolving numerical parameters in FFGIN.
References 1. Aoki, S., Nagao, T.: Automatic construction of tree-structural image transformation using genetic programming. In: Proceedings of the 1999 International Conference on Image Processing (ICIP 1999), Kobe, Japan, vol. 1, pp. 529–533. IEEE, Los Alamitos (1999) 2. Nakano, Y., Nagao, T.: 3D medical image processing using 3D-ACTIT; automatic construction of tree-structural image transformation. In: Proceedings of the International Workshop on Advanced Image Technology (IWAIT-2004), Singapore, pp. 529–533 (2004) 3. Nakano, Y., Nagao, T.: Automatic construction of abnormal signal extraction processing from 3D diffusion weighted image. In: Proceedings of the International Workshop on Advanced Image Technology (IWAIT-2007), Bangkok, Thailand (2007) 4. Koza, J.R.: Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, Cambridge (1992) 5. Koza, J.R.: Genetic Programming II: Automatic Discovery of Reusable Programs. MIT Press, Cambridge (1994) 6. Shirakawa, S., Nagao, T.: Genetic image network (GIN): Automatically construction of image processing algorithm. In: Proceedings of the International Workshop on Advanced Image Technology (IWAIT-2007), Bangkok, Thailand (2007) 7. Teller, A., Veloso, M.: Algorithm evolution for face recognition: What makes a picture difficult. In: International Conference on Evolutionary Computation, Perth, Australia, pp. 608–613. IEEE Press, Los Alamitos (1995) 8. Teller, A., Veloso, M.: PADO: A new learning architecture for object recognition. In: Ikeuchi, K., Veloso, M. (eds.) Symbolic Visual Learning, pp. 81–116. Oxford University Press, Oxford (1996) 9. Poli, R.: Evolution of graph-like programs with parallel distributed genetic programming. In: Proceedings of the Seventh International Conference on Genetic Algorithms, East Lansing, MI, USA, pp. 346–353. Morgan Kaufmann, San Francisco (1997) 10. Miller, J.F., Smith, S.L.: Redundancy and computational efficiency in cartesian genetic programming. IEEE Transactions on Evolutionary Computation 10, 167– 174 (2006) 11. Miller, J.F., Thomson, P.: Cartesian genetic programming. In: Poli, R., Banzhaf, W., Langdon, W.B., Miller, J., Nordin, P., Fogarty, T.C. (eds.) EuroGP 2000. LNCS, vol. 1802, pp. 121–132. Springer, Heidelberg (2000)
12. Hirasawa, K., Okubo, M., Hu, J., Murata, J.: Comparison between genetic network programming (GNP) and genetic programming (GP). In: Proceedings of the 2001 Congress on Evolutionary Computation (CEC 2001), Seoul, Korea, pp. 1276–1282. IEEE Computer Society Press, Los Alamitos (2001) 13. Eguchi, T., Hirasawa, K., Hu, J., Ota, N.: A study of evolutionary multiagent models based on symbiosis. IEEE Transactions on Systems, Man and Cybernetics Part B 36, 179–193 (2006) 14. Yao, X.: Evolving artificial neural networks. Proceedings of the IEEE 87, 1423–1447 (1999) 15. Stanley, K.O.: Efficient evolution of neural networks through complexification. Technical Report AI-TR-04-314, Ph.D. Thesis; Department of Computer Sciences, The University of Texas at Austin (2004) 16. Satoh, H., Yamamura, M., Kobayashi, S.: Minimal generation gap model for considering both exploration and exploitations. In: Proceedings of the IIZUKA 1996, pp. 494–497 (1996) 17. Kita, H., Ono, I., Kobayashi, S.: Multi-parental extension of the unimodal normal distribution crossover for real-coded genetic algorithms. In: Proceedings of the 1999 Congress on Evolutionary Computation (CEC 1999), vol. 2, pp. 1581–1587 (1999)
Neural Networks for Exudate Detection in Retinal Images Gerald Schaefer and Edmond Leung School of Engineering and Applied Science Aston University
[email protected]
Abstract. Diabetic retinopathy is a common eye disease directly associated with diabetes and one of the leading causes for blindness. One of its early indicators is the presence of exudates on the retina. In this paper we present a neural network-based approach to automatically detect exudates in retina images. A sliding windowing technique is used to extract parts of the image which are then passed to the neural net to classify whether the area is part of an exudate region or not. Principal component analysis and histogram specification are used to reduce training times and complexity of the network, and to improve the classification rate. Experimental results on an image data set with known exudate locations show good performance with a sensitivity of 94.78% and a specificity of 94.29%.
1 Introduction

Nowadays, medical images are playing a vital role in the early detection and diagnosis of diseases [1] and their importance continues to grow as new imaging techniques are incorporated into standard practises. Diabetes is characterised by the impaired use of insulin by the body to properly regulate blood sugar levels, causing high levels of glucose in the blood, and is currently affecting approximately 2% of the UK population [2]. Diabetic retinopathy is a common eye disease directly associated with diabetes and one of the leading causes of blindness [3]. When a patient is diagnosed with one of the earlier signs of retinopathy, this is typically based on the presence of microaneurysms, intraretinal haemorrhages, hard exudates or retinal oedema, depending on the progression of the disease. Problems start to become apparent when the retina is damaged by high blood glucose. Microvascular lesions become damaged and leak plasma into the retina. Hard exudates are one of the most common forms of diabetic retinopathy and are formed in groups or rings surrounding leakage of plasma, over time increasing in size and number. If exudates start to form around the macular region, used for sight focus and straight vision, this can cause problems of obstructed sight and eventually, over time, can lead to macular oedema, effectively leading to blindness. Current screening techniques used to detect diabetic retinopathy usually comprise a comprehensive eye examination. If any suspicion of diabetic retinopathy exists, the patient is referred further to an ophthalmologist, who will take and manually analyse retinal images, which constitutes a cost- and labour-intensive task.
In this paper we introduce an automated, neural network-based approach for the detection of exudates in retinal images. Images are processed using a sliding window approach to extract subregions of the image. Based on a ground truth database of images with known exudate locations, a neural network is trained to classify the regions’ central pixels into exudate and non-exudate instances. Principal component analysis is employed to reduce the complexity of the network and shorten training and classification times. Furthermore, histogram specification is applied to deal with problems of different lighting conditions that are commonly encountered in retina images. Experimental results on unseen images confirm the effectiveness of our approach which provides a sensitivity of 94.78% with a specificity of 94.29%.
2 Related Work

One of the earliest works on using a neural network to detect diabetic lesions was developed by Gardner et al. [4] and was able to identify vessels, exudates and haemorrhages in retinal images. They used 147 diabetic and 32 normal red-free images which were divided into 20x20 pixel regions and classed by a trained observer into exudates or non-exudates. Each pixel serves as a single input to a backpropagation neural network, giving a total of 400 inputs. A sensitivity of 93.1% in detecting exudates was achieved. Osareh et al. [5] proposed a detection system that works at pixel resolution rather than using region windows. Histogram specification is employed as a pre-processing step to normalise the colour across all 142 colour retinal images used in the study. Images were colour segmented based on fuzzy c-means clustering and the segmented regions classified as exudate or non-exudate using 18 features including size and colour descriptors. A two-layer perceptron neural network was trained with the 18 features serving as the input. A sensitivity of 93% and specificity of 94.1% were reported. Walter et al. [6] focussed on the use of image processing techniques to detect exudates within retinal images. First, candidate regions are found based on high variations in contrast. The contours of exudate regions are then found based on morphological reconstruction techniques. Using this approach, a sensitivity of 92.8% was achieved. Sinthanayothin et al. [7] used a recursive region growing technique that groups together similar pixels. After thresholding, a binary image was produced which was then overlaid onto the original image showing exudate regions. They report a sensitivity of 88.5% and specificity of 99.7%. Goatman et al. [8] evaluated three pre-processing techniques on retinal images, including greyworld normalisation, histogram equalisation and histogram specification, in order to reduce the variation in background colour among different images. The image set contained 18 images with diabetic lesions, and experimental results showed that histogram specification provided the best performance.
3 Neural Network Based Exudate Detection

In keeping with the traditional approach to pattern recognition systems, we explore the effects of various pre-processing methods while also making use of
dimensionality reduction techniques. The overall structure of our approach is shown in Figure 1. In order to investigate whether pre-processing of the retinal image has a positive effect, the original images are first passed through the neural network in raw form, i.e. without any pre-processing. This is then followed by histogram equalisation and histogram specification, respectively. This process is then repeated to explore the effect of feature extraction, in particular that of principal component analysis.
Fig. 1. General structure of exudate detection approach taking into account different preprocessing and feature representation methods
It shall be noted that in this study our aim was to investigate the use of a relatively "naïve" neural network as opposed to employing complex feature extraction techniques coupled with a neural net for classification. We therefore keep the pre-processing to a minimum by investigating solely two contrast enhancement techniques together with a dimensionality reduction approach to reduce the complexity of the resulting networks.

3.1 Input Features

Obviously, a neural network has never "seen" an exudate, and thus needs to know which areas of an image correspond to exudates and conversely where non-exudate
areas are located. Therefore the training set must comprise both positive data covering exudates and negative, non-exudate samples. In order to represent exudate and non-exudate regions we employ a sliding window technique similar to the one used by Gardner et al. [4]. While they focussed on using the green channel of the images, we make use of the full colour data within a retinal image. Various window sizes were considered before deciding on a final region size of 9x9 pixels, taking into account both the final size of the network inputs with respect to the feature vector and the amount of information to present to the network. Every pixel in a retinal image is characterised by three values: red, green and blue; this colour information serves as the raw input data to be fed to the network. Each window region hence comprises 243 (= 9*9*3) input values. If the central pixel of a 9x9 window is part of an exudate region, the window is added as a positive sample to the training set. On the other hand, if the centre of the window is not in an exudate region, then it is regarded as a negative sample. A separate target vector is created to aid training and, depending on whether the feature vector is positive or negative, set to either 0 or 1.

3.2 Training Data Selection

A set of 17 colour fundus images [6] serves as the basis for analysis, each image having a resolution of 640 x 480 and stored in 24-bit bitmap format. All retinal images show signs of diabetic retinopathy with candidate exudates. Duplicates of the images are provided in which exudate locations were pre-marked by specialists, and which hence serve as a ground truth on the dataset as well as training and testing data for the neural network. When using data to train a neural network, it is typically best to provide a representative subset of the data. 10 images are selected for training, from which positive and negative samples are extracted, leaving 7 images to test the network. However, in order to improve generalisation of the neural network over unseen retinal images, early stopping is utilised. A validation set of 2 images is taken from the remaining 7 images to serve for validation testing of the neural network. A 50-fold validation step is used, meaning that every 50 iterations during the training period the network is tested on the validation set, and the current error is compared with the previous error. The neural network stops training when the validation error shows a definitive rise, in order to prevent over-fitting of the training data. Some images contain more exudates than others, resulting in some images having rather limited (positive) information to extract. We obtain all exudate information from the 10 training images together with an equal number of negative samples extracted from random non-exudate retina locations. Following this procedure we end up with a training set size of about 39,000 samples. Using the above approach allows us to verify the ability of the neural network to generalise over unseen images. However, the above scheme clearly depends on which image falls into which category, considering the variation among retinal images. In order to gain more insight into the performance of the neural network we need to test the performance on each image in the dataset. Using a training set of 16 images out of the set of 17 inherently contains the maximum amount of positive information that can be obtained from the image database. As before, random
locations where exudates do not reside are taken from each image to match the size of the positive set. In order to test each image, this technique is performed 17 times, once for each image. Using this approach, the training set size varies between 39,000 and 49,000 feature vectors.

3.3 Neural Network Architecture

We adopted a three-layer perceptron architecture with one input, one hidden, and one output layer. Feature vectors in the training set are taken from 9x9 windows in RGB, giving a total of 243 neurons in the input layer. For the hidden layer, several configurations with varying numbers of hidden neurons were tested and finally a layer of 50 hidden units was selected, which provided a good trade-off between classification performance and network complexity. The output layer contains a single neuron and is attached to a non-linear logistic output function so that the output range falls between 0 and 1, i.e. corresponding to the range of the target vectors. Using standard back-propagation with gradient descent to minimise the error during training would yield slow convergence; therefore a more optimised version of the algorithm, namely a scaled conjugate gradient method [9], was adopted. When testing the trained neural network on an unseen image, 9x9 windows are passed individually through the network for each pixel, and the classification value is determined by the network decision. A thresholding scheme is used to classify whether the output corresponds to an exudate region or not. Outputs above the threshold value are classed as exudative, and the centre pixel of the window is marked on the original input image at the location where the exudate was detected.

3.4 Image Pre-processing

As it is difficult to control lighting conditions, but also due to variations in racial background and iris pigmentation, retinal images usually exhibit relatively large colour and contrast variations, both on global and local scales. We therefore evaluated two pre-processing techniques which are designed to weaken these effects, namely histogram equalisation and histogram specification. Histogram equalisation [10] alters the image histogram to have a uniform flat shape where the different intensities are equally likely. Before candidate positive and negative sets were selected, each image underwent pre-processing. Histogram equalisation was applied separately to each of the red, green and blue channels as in [11]. Histogram specification, on the other hand, involves approximating the histogram shape of an image in correspondence to the desired histogram of another image [10]. A target image was selected manually and all other images were subjected to histogram specification based on this target's (red, green and blue) histograms.

3.5 Dimensionality Reduction

Obviously, one of the disadvantages of using raw colour information is the size of the network, with 243 inputs and 50 hidden units. Since training time grows exponentially with network size, principal component analysis (PCA) is employed to reduce the number of input features [9]. A compressed sketch of the overall processing pipeline is given below.
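The following scikit-learn sketch compresses the pipeline of sections 3.1-3.5 (9x9 RGB windows, PCA, three-layer perceptron). The component count (50) and hidden-layer size (20) correspond to the histogram-specified configuration reported in the next section; scikit-learn's MLP uses the Adam optimiser rather than scaled conjugate gradients, and the image, mask and all other names are illustrative placeholders rather than the authors' data.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def extract_windows(image, mask, half=4):
    # 9x9 RGB windows; a window is labelled by whether its centre pixel is exudate.
    X, y = [], []
    rows, cols, _ = image.shape
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            X.append(image[r - half:r + half + 1, c - half:c + half + 1].ravel())
            y.append(int(mask[r, c]))
    return np.array(X, dtype=float), np.array(y)

# Small placeholder image and "expert" mask; a real run would pool windows from
# all training images and subsample the negatives to match the positives.
image = np.random.rand(96, 128, 3)
mask = np.zeros((96, 128), dtype=bool)
mask[40:50, 60:70] = True
X, y = extract_windows(image, mask)

pca = PCA(n_components=50).fit(X)                 # 50 components for specified images
net = MLPClassifier(hidden_layer_sizes=(20,), early_stopping=True, max_iter=300)
net.fit(pca.transform(X), y)
scores = net.predict_proba(pca.transform(X))[:, 1]   # per-pixel exudate scores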
When using raw colour information we employ 30 principal components which account for 97% of the variance in the training data. On the other hand, when applying histogram equalisation and histogram specification we used 90 and 50 principal components respectively, which again capture about 97% of the variance. With a reduced number of input neurons we obviously also reduce the number of units in the hidden layer, to 15, 40, and 20 for raw colour, histogram equalised, and histogram specified images respectively.

4 Experimental Results

In our first set of experiments we use 10 of the 17 images to train the neural network while testing it on the remaining 7. The effects of passing raw colour data through the neural network are tested for original images as well as histogram equalised and histogram specified images. Then, for these three sets, the effects of PCA are tested for dimensionality reduction on the dataset and hence a reduction in network complexity. In a second set of experiments we then train on 16 images in a leave-one-out fashion and obtain average classification results over all 17 images. Training is once again performed on original images, pre-processed images and PCA projections. To record the number of correctly identified exudates when a retinal image is tested, we count, for each test image, the numbers of True Positives (TP), True Negatives (TN), False Positives (FP) and False Negatives (FN). The true positive count measures the number of correctly identified exudate pixels, while a false positive is a pixel wrongly identified as part of an exudate region; true negatives and false negatives are defined similarly. In order to calculate these values, the image being tested is compared to its marked counterpart, in which exudate pixels were manually marked by an expert. We calculate the true positive rate, or sensitivity, defined as TP/(TP+FN), and the true negative rate, or specificity, defined as TN/(TN+FP), which are commonly employed for performance analysis in the medical domain. Furthermore, by varying the threshold of the output neuron we can put more emphasis on either sensitivity or specificity and also generate the ROC (receiver operating characteristic) curve, which is used extensively in medical applications [1]. The performance of a classifier can be denoted by such a curve, which plots a classifier's true positive rate against its false positive rate; the ROC curve helps in assessing our neural network's performance across different thresholds. The area under the curve (AUC) is defined as the probability that a classifier can correctly predict the class out of a pair of examples [9] and gives an indication of the overall quality of a classifier. A perfect classification model identifying all positive and negative cases correctly would yield an AUC of 1.0, whereas an AUC of 0.5 would signify random guessing. Figure 2 on the left shows the ROC results of taking samples from 10 images for the training set and testing generalisation over 5 images. Passing raw colour data from the original images into the neural network obtains healthy results, with an AUC rating of 0.911 indicating fairly good classification accuracy. Pre-processing the images with histogram equalisation before passing the data to the network suggests a higher detection rate of true positives. The AUC value is
4 Experimental Results In our first set of experiments we use 10 of the 17 images to train the neural network while testing it on the remaining 7. The effects of passing raw colour data through the neural network are tested for original images as well as histogram equalised images and histogram specified images. Then for these three sets the effects of PCA are tested for dimensionality reduction on the dataset and hence a reduction in network complexity. In a second set of experiments we then train on 16 images in a leave-onout fashion and obtain average classification results over all 17 images. Training is once again performed on original images, pre-processed images and PCA projections. To record the number of correctly identified exudates when a retinal image is tested, we count, for each test image, the numbers of True Positives (TP), True Negatives (TN), False Positives (FP) and False Negatives (FN). The true positive measures the number of correctly identified exudate pixels, while the false positive on the other hand is where a pixel is wrongly identified as an exudate region, and true negatives and false negatives are defined similarly. In order to calculate these values, the image being tested is compared to its marked counterpart, in which exudate pixels were manually marked by an expert. We calculate the true positive rate, or sensitivity, which is defined as TP/(TP+FN) and the true negative rate, or specificity, defined as TN/(TN+FP) which are commonly employed for performance analysis in the medical domain. Furthermore, by varying the threshold of the output neuron we can put more emphasis on either sensitivity or specificity and also generate the ROC (receiver operator characteristics) curve which is used extensively in medical applications [1]. The performance of a classifier can be denoted by such a curve which describes a classifier’s true positive detection against its false positive rate. The ROC curve will help in identifying our neural network performance across different thresholds. The area under the curve (AUC) is defined as the probability a classifier can correctly predict the class out of a pair of examples [9]. This gives an indication to the overall quality of a classifier. A perfect classification model identifying all positive and negative cases correctly would yield an AUC of 1.0, whereas an AUC of 0.5 would signify random guessing. Figure 2 on the left shows the ROC results of taking samples from 10 images for the training set and testing generalisation over 5 images. Passing raw colour data from the original images into the neural network seems to obtain healthy results, resulting in an AUC rating of 0.911 indicating fairly good accuracy of classification. Preprocessing the images with histogram equalisation before passing the data to the network seems to suggest a higher detection rate of true positives. The AUC value is
Fig. 2. ROC curves, based on 10-2-5 training, on raw data (left) and PCA data (right)
rated at 0.933, an improvement in accuracy over raw-data training. Using histogram specification to pre-process the data achieves an AUC of 0.928, suggesting slightly lower overall accuracy compared to histogram equalisation.

The effects of dimensionality reduction using principal component analysis on raw data are shown on the right hand side of Figure 2. Projecting the raw data into PCA space results in an AUC of 0.920 compared to 0.911. This indicates a slight improvement in overall accuracy; however, the result over all thresholds is more or less similar, as indicated by the overall shape of the curve. It is apparent that PCA successfully manages to model the same amount of information while decreasing the complexity of the network from a previous structure of 243 input units and 50 hidden units, resulting in 12150 weight calculations, down to a structure of 30 inputs and 15 hidden units, and hence 450 weights, thus improving the efficiency of training dramatically. The effects of histogram equalisation and specification both prove to increase the performance, in particular in the case of histogram specification. Applying both principal component analysis and histogram specification provides an optimal classification with a sensitivity of 92.31% and a specificity of 92.41%.

Figure 3 on the left gives ROC curves of the average sensitivity and specificity readings from all 17 images based on a leave-one-out training scenario where, in turn, the network is trained on 16 images and then tested on the remaining image. Training raw colour data on the NN presents similar behaviour compared to training with 10 images, achieving a higher AUC of 0.927, and confirms the integrity of the neural network. Comparing the effects of pre-processing on all images, histogram equalisation does indeed have a positive effect, while histogram specification achieves the highest AUC of 0.951, showing greater accuracy.

The ROC curves of the PCA networks on 16-1 training are shown on the right of Figure 3 and confirm that principal component analysis undeniably has a positive effect on exudate detection. When raw data is defined in PCA space, the accuracy of the NN improves greatly, especially when comparing the resulting AUC value of 0.957 to the highest AUC value of the previous test. Application of PCA also improves classification results based on histogram equalisation. However, it is using
Fig. 3. ROC curves, based on 16-1 training, on raw data (left) and PCA data (right)
histogram specification that truly improves the performance of the classifier, both in achieving better accuracy and detecting higher rates of positive exudates, attaining an AUC rating of 0.973. The optimal balanced sensitivity and specificity results for the best performing classifier give a sensitivity of 94.78% and a specificity of 94.29%, which compare favourably with other results in the literature.
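For illustration, the sensitivity, specificity and AUC figures quoted above can be reproduced from the network outputs and the expert markings with a simple threshold sweep; the following NumPy sketch is not the original evaluation code, and the trapezoidal AUC is computed over the swept thresholds.

import numpy as np

def roc_points(outputs, labels, thresholds=np.linspace(0.0, 1.0, 101)):
    # Sweep the output-neuron threshold and return (sensitivity, specificity)
    # pairs; labels are 1 for exudate pixels and 0 otherwise.
    outputs, labels = np.asarray(outputs), np.asarray(labels)
    points = []
    for t in thresholds:
        pred = outputs >= t
        tp = np.sum(pred & (labels == 1))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        fp = np.sum(pred & (labels == 0))
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        points.append((sens, spec))
    return points

def auc(points):
    # Trapezoidal area under the ROC curve (TPR against FPR).
    fpr = np.array([1.0 - spec for _, spec in points])
    tpr = np.array([sens for sens, _ in points])
    order = np.argsort(fpr)
    return float(np.trapz(tpr[order], fpr[order]))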
5 Conclusions

In this paper we have investigated the performance of relatively naïve neural networks for the detection of exudates. The central pixels of image regions are classified as being part of exudate or non-exudate regions by a backpropagation neural network. Colour variations are minimised through the application of histogram equalisation/specification while dimensionality reduction is performed using principal component analysis. Despite the simplicity of the setup, good classification performance is achieved, providing a sensitivity of 94.78% with a specificity of 94.29%, which compare favourably with other results in the literature.
References

1. Mayer-Base, A.: Pattern Recognition for Medical Imaging. Elsevier, USA (2004)
2. Kanski, J.J.: Clinical Ophthalmology. Reed Education and Professional Publishing, Great Britain (1999)
3. Patton, N., Aslam, T.M., MacGillivray, T., Deary, I.J., Dhillon, B., Eikelboom, R.H., Yogesan, K., Constable, I.J.: Retinal image analysis: Concepts, applications and potential. Progress in Retinal and Eye Research 25, 99–127 (2006)
4. Gardner, G.G., Keating, D., Williamson, T.H., Elliott, A.T.: Automatic detection of diabetic retinopathy using an artificial neural network: a screening tool. British Journal of Ophthalmology 80, 940–944 (1996)
5. Osareh, A., Mirmehdi, M., Thomas, B., Markham, R.: Automated identification of diabetic retinal exudates in digital colour images. British Journal of Ophthalmology 87, 1220–1223 (2003)
6. Walter, T., Klein, J.-C., Massin, P., Erginay, A.: A Contribution of Image Processing to the Diagnosis of Diabetic Retinopathy – Detection of Exudates in Color Fundus Images of the Human Retina. IEEE Trans. on Medical Imaging 21(10) (2002)
7. Sinthanayothin, C., Boyce, J., Williamson, T., Cook, H., Mensah, E., Lal, S., Usher, D.: Automated Detection of Diabetic Retinopathy on Digital Fundus Images. Diabetic Medicine 21(1), 84–90 (2004)
8. Goatman, K.A., Whitwam, A.D., Manivannan, A., Olson, J.A., Sharp, P.F.: Colour Normalisation of Retinal Images. In: Proc. Medical Image Understanding and Analysis (2003)
9. Nabney, I.T.: Netlab: Algorithms for Pattern Recognition. Springer, Great Britain (2002)
10. Gonzalez, R.C., Woods, R.E.: Digital Image Processing. Prentice-Hall, New Jersey (2002)
11. Finlayson, G., Hordley, S., Schaefer, G., Tian, G.-Y.: Illuminant and device invariant colour using histogram equalisation. Pattern Recognition 38, 179–190 (2005)
Kernel Fusion for Image Classification Using Fuzzy Structural Information

Emanuel Aldea¹, Geoffroy Fouquier¹, Jamal Atif², and Isabelle Bloch¹

¹
GET - Télécom Paris (ENST), Dept. TSI, CNRS UMR 5141 LTCI, 46 rue Barrault, 75634 Paris Cedex 13, France
[email protected]
² Unité ESPACE S140, IRD-Cayenne/UAG, Guyane Française
Abstract. Various kernel functions on graphs have been defined recently. In this article, our purpose is to assess the efficiency of a marginalized kernel for image classification using structural information. Graphs are built from image segmentations, and various types of information concerning the underlying image regions as well as the spatial relationships between them are incorporated as attributes in the graph labeling. The main contribution of this paper consists in studying the impact of fusing kernels for different attributes on the classification decision, while proposing the use of fuzzy attributes for estimating spatial relationships.
1 Introduction
Most traditional machine learning techniques are not designed to cope with structured data. Instead of changing these algorithms, an alternative approach is to go in the opposite direction and to adapt the input for classification purposes so as to reduce the structural complexity while preserving the attributes that allow assigning data to distinct classes. In the particular case of images, fundamentally different strategies have been outlined in recent years. One of them copes with images as single indivisible objects [1] and tends to use global image features, like the color histogram. Other strategies treat them as bags [2] of objects, thus taking into account primarily the vectorization of the image content. Finally, a third strategy considers images as organized sets of objects [3,4], making use of components and also of the relationships among them; our approach falls into this category. The interest of this latter model in retrieving complex structures from images is that it handles view variations and complex inference of non-rigid objects, taking into account their intrinsic variability in a spatial context.

In [5], an image classification method using marginalized kernels for graphs was presented. In a preprocessing step, images are automatically segmented and an adjacency graph is built upon the resulting neighboring regions. Intrinsic region attributes are computed. The only structural information retrieved from the image is the neighborhood relationship between regions, which is implicitly stored in the graph structure by the presence of an edge between two vertices. Once the graph is built, a marginalized kernel extension relying on the attributes
mentioned above is used to assess the similarity between two graphs and to build a classifier. In this paper, we extend this image classification method. We propose to automatically create a kernel based on more than one attribute. The presence of multiple attributes emphasizes the importance of a generic, reliable method that combines data sources in building the discriminant function of the classifier [6]. We also enrich the graph by adding more edges and more complex structural information retrieved from the image, such as topological relations or metric spatial relations [7] (distance, relative orientation). This raises specific methodological problems, that are addressed in this paper, in particular by using different kernels for each type of relation and combining them under a global optimization constraint. The framework is open to the introduction of any other features that describe image regions or relationships between them. However, we stress the importance of selecting relevant features and of finding positive definite kernels that give an intuitive similarity measure between them. The general scheme of the proposed method is illustrated in Figure 1.
Fig. 1. Block diagram. Training step: If needed, images are segmented. A graph is extracted from each image of the training database, using the corresponding label image. Then, for each graph attribute, the corresponding kernel function parameters are estimated. Finally, the kernel functions are merged. Test step: A graph is extracted from each image of the test database. The resulting graphs are compared with the graphs of the training database and classified using the learned similarity function.
The structure of this paper is as follows. First the original method is summarized in Section 2. Section 3 presents the graph structure and edge attributes. Section 4 presents how kernel fusion is used to merge different attribute kernels. Experimental results are outlined in Section 5.
2 Classification Based on Kernels for Graphs
This section briefly presents the general principle of our classification technique based on random walk kernels for graphs [5].
The image is first over-segmented using an unsupervised hierarchical process [8,9]. Then neighboring regions with close average gray levels are merged. The stopping criterion is a function of a dynamic threshold based on the differences between neighboring regions, updated at each step of the process (any other segmentation method achieving the same goal could be used as well, e.g. Markov Random Fields). An adjacency graph is constructed with all regions as vertices. In [5], only the adjacency between regions is considered, as an implicit edge attribute. The following real-valued attributes are then computed for each region: the surface in pixels, the ratio between the surface of the region and the surface of the image (relative surface), average gray level, relative (to the dynamic range of the image) gray level, perimeter, compacity and neighboring degree.

The kernel between two graphs G and G' measures the similarity, according to an attribute a, of all the possible random walk labels [10,11], weighted by their probabilities of apparition. As compared to previous frameworks that use this type of method [12], the region neighborhood has a lower importance in an image than it has in a chemical structure between its constituents, for example. The variable space used in labeling becomes continuous and multi-dimensional, and a significant part of the information migrates from the graph structure to the labeling of its constituent parts. Therefore, the similarity function for a continuous-valued attribute such as the gray level must be less discriminative than a Dirac function. For this purpose, a Gaussian kernel K_a^RBF(a_1, a_2) = exp(−||a_1 − a_2||^2 / (2σ^2)) or a triangular kernel K_a^Δ(a_1, a_2) = max(1 − ||a_1 − a_2|| / Γ, 0) is used for assessing the similarity between two numeric values a_1 and a_2 of an attribute a.

For two graphs G and G' to compare, these basic kernels allow us to evaluate the similarity k_a(h, h') between two random walks h ∈ G and h' ∈ G', by aggregating the similarity of attribute a of all vertices (resp. edges) along h and h'. In [5], an extension of the base kernel k_a(h, h') is proposed to better cope with specific image attributes. Under this framework, continuous similarity values between graph constituents (vertices, edges) are interpreted as transition probability penalties that will influence the random walks, without terminating them prematurely. Finally, the kernel between G and G' sums the similarity of all the possible random walks, weighted by their probabilities of apparition:

K_a(G, G') = Σ_h Σ_{h'} k_a(h, h') p(h|G) p(h'|G').

This function is subsequently used in a 1-norm soft margin SVM [6] for creating the image classifier.
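As a concrete illustration of the attribute kernels above, the Python sketch below implements the Gaussian and triangular kernels and a simplified walk-level aggregation; it omits the weighting by the walk probabilities p(h|G) p(h'|G') of the full marginalized kernel, and the function names are illustrative, not part of the original implementation.

import numpy as np

def gaussian_kernel(a1, a2, sigma):
    # K_a^RBF for two attribute values (scalars or vectors).
    d = np.linalg.norm(np.atleast_1d(a1) - np.atleast_1d(a2))
    return float(np.exp(-(d ** 2) / (2.0 * sigma ** 2)))

def triangular_kernel(a1, a2, gamma):
    # K_a^Delta: linearly decreasing similarity, zero beyond gamma.
    d = np.linalg.norm(np.atleast_1d(a1) - np.atleast_1d(a2))
    return max(1.0 - d / gamma, 0.0)

def walk_similarity(walk1, walk2, attr, sigma):
    # Aggregate the attribute similarity along two random walks of equal
    # length by multiplying the per-vertex kernel values.
    k = 1.0
    for v1, v2 in zip(walk1, walk2):
        k *= gaussian_kernel(attr[v1], attr[v2], sigma)
    return k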
3 Graph Representation of Images Including Spatial Relations
In addition to the region-based attributes from the original method, we propose to improve the structure of the graph (by adding some edges) and to add structural information on these edges. The original method [5] uses an adjacency graph. One way to enrich the graph is by adding structural information on the adjacency graph, i.e. no edges
are added or removed. On the other hand, the adjacency graph from the original method is too restrictive, since adjacency is a relation that is highly sensitive to the segmentation of the objects and whether it is satisfied or not may depend on one point only. Therefore, using edges carrying more than adjacency, and corresponding attributes, better reflects the structural information and improves the robustness of the representation. Thus, the resulting graph is not an adjacency graph anymore; it may even become complete if this is not a performance drawback.

In [5], only region-based features are computed. We propose some new features based on structural information, more precisely spatial relations. They are traditionally divided into topological relations and metric relations [13]. Among all the spatial relations, we choose here the most usual examples of the latter: distance and directional relative position (but the method applies to any other relation). As a topological relation, instead of the adjacency, we compute an estimation of the adjacency length between two regions. We now present each of these features.

Distance between regions. The distance between two regions R1 and R2 is computed as the minimal Euclidean distance between two points p_i ∈ R1 and q_j ∈ R2: min_{p_i ∈ R1, q_j ∈ R2} d_euclidean(p_i, q_j).

Directional relative position. Several methods have been proposed to define the directional relative position between two objects, which is an intrinsically vague notion. In particular, fuzzy methods are appropriate [14], and we choose here to represent this information using histograms of angles [15]. This allows representing all possible directional relations between two regions. If R1 and R2 are two sets of points R1 = {p_1, ..., p_n} and R2 = {q_1, ..., q_n}, the relative position between regions R1 and R2 is estimated from the relative position of each point q_j of R2 with respect to each point p_i of R1. The histogram of angles H_{R1R2} is defined as a function of the angle θ, where H_{R1R2}(θ) is the frequency of the angle θ:

H_{R1R2}(θ) = |{(p_i, q_j) ∈ R1 × R2 / ∠(i, p_i q_j) = θ}|

where ∠(i, p_i q_j) denotes the angle between a reference vector i and the vector from p_i to q_j. In order to derive a real value, we compute the center of gravity of the histogram.

Adjacency measure based on fuzzy satisfiability. Distance and orientation may not always be relevant; for instance, the distance between two regions is the same if those two regions are adjacent by only one pixel, or if a region is surrounded by another region. In the latter case, the center of gravity of the histogram of angles has no meaning. Therefore we propose to include a third feature, a topological feature that measures the adjacency length between two regions. One way to estimate this measure is to compute the matching between the portion of space "near" a reference region and the other region. This measure is
maximal in the case where the reference region is embedded into the other one, and is minimal if the two regions are far away from each other. Fuzzy representations are appropriate to model the intrinsic imprecision of several relations (such as "near") and the necessary flexibility for spatial reasoning [7]. We define the region of space in which a relation to a given object is satisfied. The membership degree of each point to this fuzzy set corresponds to the satisfaction degree of the relation at this point [7]. Note that this representation is in the image space and thus may be more easily merged with an image of a region.

The spatial relation "near" is defined as a distance relation. A distance relation can be defined as a fuzzy interval f of trapezoidal shape on R+. A fuzzy subset μ_d of the image space S can then be derived by combining f with a distance map d_R to the reference object R:

∀x ∈ S, μ_d(x) = f(d_R(x)), where d_R(x) = inf_{y ∈ R} d(x, y).

Figure 2 presents a region (a) and the fuzzy subset corresponding to "Near region 1" (d). In our experiments, the fuzzy interval f is defined with the following fixed values: 0, 0, 10, 30.
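A minimal NumPy sketch of the two metric relations above (minimal region distance and the centre of gravity of the angle histogram) is given below; region points are assumed to be given as (N, 2) coordinate arrays, the reference vector is taken to be the horizontal axis, and the brute-force pairwise computation is only meant to mirror the definitions, not the original implementation.

import numpy as np

def min_region_distance(pts1, pts2):
    # Minimal Euclidean distance between two regions (brute force over all
    # point pairs, directly following the definition above).
    diff = pts1[:, None, :] - pts2[None, :, :]
    return float(np.sqrt((diff ** 2).sum(-1)).min())

def angle_histogram_centre(pts1, pts2, bins=36):
    # Histogram of angles H_{R1R2} and its centre of gravity.
    diff = pts2[None, :, :] - pts1[:, None, :]        # vectors p_i -> q_j
    angles = np.arctan2(diff[..., 1], diff[..., 0]).ravel()
    hist, edges = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    centres = 0.5 * (edges[:-1] + edges[1:])
    return float((hist * centres).sum() / hist.sum())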
Fig. 2. (a) Region 1. (b) Region 2. (c) Region 3. (d) Fuzzy subset corresponding to “Near region 1”. (e) The same with boundary of region 2 added. (f) The same with boundary of region 3 added.
So far we have defined the portion of space in which the relation "near" a reference object is defined. The next step consists in estimating the matching between this fuzzy representation and the other region. Among all possible fuzzy measures, we choose as a criterion an M-measure of satisfiability [16] defined as:

Sat(near(R1), R2) = Σ_{x ∈ S} min(μ_near(R1)(x), μ_R2(x)) / Σ_{x ∈ S} μ_near(R1)(x)

where S denotes the spatial domain. It measures the precision of the position of the object in the region where the relation is satisfied. It is maximal if the whole object is included in the kernel of μ_near(R1). Note that the size of the region where the relation is satisfied is not restricted and could be the whole image space. If object R2 is crisp, this measure reduces to Σ_{x ∈ R2} μ_near(R1)(x) / Σ_{x ∈ S} μ_near(R1)(x), i.e. the portion of μ_near(R1) that is covered by the object.

Figure 2 presents three regions: the reference region (a), a small region adjacent to the first one (b) and a bigger region which is only partially represented (c). The fuzzy subset corresponding to "Near region 1" is illustrated in (d) and the borders of the other regions have been added in (e) and (f). The value of the
satisfiability measure between the fuzzy subset "Near region 1" and region 2 is 0.06, and for region 3 it is 0.29.

We also choose a symmetric measure, unlike the satisfiability measure: the M-measure of resemblance [16], defined as:

Res(near(R1), R2) = Σ_{x ∈ S} min(μ_near(R1)(x), μ_R2(x)) / Σ_{x ∈ S} max(μ_near(R1)(x), μ_R2(x))

This measure is maximal if the object and the relation are identical: this resemblance measure accounts for the positioning of the object and for the precision of the fuzzy set as well.
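Assuming binary region masks and SciPy's Euclidean distance transform, the fuzzy subset "near" and the two fuzzy measures can be sketched as follows; the trapezoid (0, 0, 10, 30) matches the values used above, and the function names are illustrative rather than the authors' code.

import numpy as np
from scipy.ndimage import distance_transform_edt

def near_fuzzy_subset(region_mask, c=10.0, d=30.0):
    # Trapezoidal membership (0, 0, c, d) applied to the distance map d_R(x):
    # 1 up to distance c from the region, decreasing linearly to 0 at d.
    dist = distance_transform_edt(~region_mask.astype(bool))
    return np.clip((d - dist) / (d - c), 0.0, 1.0)

def satisfiability(mu_near, other_mask):
    other = other_mask.astype(float)
    return float(np.minimum(mu_near, other).sum() / mu_near.sum())

def resemblance(mu_near, other_mask):
    other = other_mask.astype(float)
    return float(np.minimum(mu_near, other).sum() /
                 np.maximum(mu_near, other).sum())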
4 Attribute Fusion
We have presented three features corresponding to the principal spatial relations. All these features are normalized in the following. We now present how these features are incorporated in the kernel. The interest of fusion is to provide a single kernel representation for heterogeneous data, here different types of attributes. For a given graph training set, the first step of the classification task is to build the base kernel matrices {K_a1, ..., K_an} corresponding to each attribute taken into account. These matrices are basic in the sense that each of them represents a narrow view of the data. For a difficult set of images, classification in such basic feature spaces might not be efficient, because a reliable discrimination cannot be performed using only one attribute. In these cases, fusion of the information brought by each kernel is necessary. The most straightforward solution to this problem is to build a linear combination of the base kernels K = Σ_{i=1}^{n} λ_i K_ai.
Fig. 3. Fusion of attribute kernels at learning step. For each attribute, a Gaussian kernel is computed with the corresponding parameter. For each of these attribute kernels, the random walk function creates a different classifier using the graphs extracted from the training database. Finally, classifiers are merged using a linear combination.
This type of linear combination represents a compromise that allows mutual compensation among different views of the data, thus improving the classification flexibility. The problem of optimally retrieving the weight vector λ has been addressed in [6] and consists in globally optimizing, over the convex cone P of symmetric, positive semidefinite matrices P = {X ∈ R^{p×p} | X = X^T, X ⪰ 0}, the following SVM-like dual problem:
min_{λ ∈ R^n, K ∈ P}  max_{α ∈ R^m}  2 α^T e − α^T D(y) K D(y) α,   subject to      (1)

    C ≥ α ≥ 0,   trace(K) = c,   K = Σ_{i=1}^{n} λ_i K_ai,   α^T y = 0

where m is the size of the training database, e ∈ R^m is the vector whose elements are equal to 1, and D(y) ∈ R^{m×m} is the matrix whose elements are null except those on the diagonal, which are the labels (+1 or −1) of the training examples, D(y)_ii = y_i. In the problem specified above, C represents the soft margin parameter, while c ≥ 0 fixes the trace of the resulting matrix. The interest of this program is that it minimizes the cost function of the classifier with respect to both the discriminant boundary and the parameters λ_i. The output is a set of weights and a discriminant function that combines information from multiple kernel spaces. The problem can be transposed into the following quadratically constrained quadratic program [6], whose primal-dual solution indicates the optimal weights λ_i:

min_{α, t}  2 α^T e − c t,   subject to      (2)

    t ≥ (1 / trace(K_ai)) α^T D(y) K_ai D(y) α,   i = 1, ..., n
    C ≥ α ≥ 0,   α^T y = 0
We define a kernel function for each attribute, using one of the basic types mentioned above (Gaussian or triangular). Kernel parameters are selected according to the feature variability in the data. More precisely, the threshold for the discrimination function should roughly indicate the smallest distance between two feature values that would trigger a 0-similarity decision for an observer. This threshold is closely correlated to the type of the attribute and equally to the data being analyzed. For each of the attribute kernels above, we build a graph kernel that provides us with a graph similarity estimate based on a single feature of the data. Some features are more discriminative than others for a specific data set and therefore generate a better classifier. The fusion method presented above allows us to build a heterogeneous decision function that weighs each feature based on its relative relevance in the feature set through its weight λ_i, thus providing optimal performance with the given feature kernels as inputs.
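The construction of the base kernel matrices and their linear fusion can be sketched in Python as follows; graph_kernel stands for the marginalized graph kernel of Section 2 and is only a placeholder here, and the weights are assumed to have been obtained beforehand by solving program (2).

import numpy as np

def attribute_kernel_matrix(graphs, attribute, sigma, graph_kernel):
    # Gram matrix K_a over the training graphs for one attribute, using the
    # supplied graph kernel with the Gaussian attribute kernel parameter sigma.
    n = len(graphs)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            K[i, j] = K[j, i] = graph_kernel(graphs[i], graphs[j], attribute, sigma)
    return K

def fuse_kernels(kernel_list, weights):
    # Linear combination K = sum_i lambda_i K_ai with the learned weights.
    return sum(w * K for w, K in zip(np.asarray(weights, float), kernel_list))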
5 Experiments and Results
The IBSR database (Internet Brain Segmentation Repository, available at http://www.cma.mgh.harvard.edu/ibsr/) contains real clinical data and is a widely used 3D healthy brain magnetic resonance image (MRI) database. It provides 18 manually-guided
Fig. 4. Samples from IBSR database. Gray levels represent labels. (a) (b) Two slices of the axial view of the same 3D MRI volume representing both classes. (c) (d) Coronal view. (e) (f) Sagittal view.

Table 1. Identification of the slices composing the database in each view of the 3D volume, for the three possible views: axial (A), sagittal (S) and coronal (C)

View   # slices   Slices class 1   Slices class 2
A      255        121, 122, 123    126, 127, 128
S      255        121, 122, 123    126, 127, 128
C      128        58, 59, 60       64, 65, 66
expert brain segmentations, each of them being available for three different views: axial, sagittal and coronal. Each element of the database is a set of slices that cover the whole brain. The main purpose of the database is to provide a tool for evaluating the performance of segmentation algorithms. However, the fact that it is freely available and that it offers high quality segmentations makes it also useful for our experiments.

Image classification between two different views is performed with a 100% success rate for many of the attributes that we take into account; as a result, we had to build a more challenging classification problem. We try to perform classification on images belonging to the same view; each element of the database belonging to the view will provide three slices in a row for the first class, and another three for the second one. In each set of 54 images that define a class, we choose fifteen images for training (randomly), and the rest of them are used for testing the classifier. Table 1 references the index of slices that are used for defining each class, for each of the three views.

For assessing attribute similarity, we use Gaussian kernels with relatively small thresholds that render them sensitive to the differences in the labeling. Each attribute kernel is injected in a graph marginalized kernel that we use in the SVM algorithm. For the regularization parameter C of the SVM, which controls the trade-off between maximizing the margin and minimizing the L1 norm of the slack vector, we perform a grid-search with uniform resolution in log2 space: log2(C) ∈ {−5, ..., 15}. For each classification task we use N = 30 training graphs and T = 78 test graphs, both evenly divided between the two classes.

Further, fusion is performed for k multiple attributes (spatial relations and region descriptors), based on their corresponding marginalized kernels. We fix the trace constraint parameter of the fusion algorithm to c = kN and we compute the weights λ_1, ..., λ_k for the input kernels in the fusion function, by solving the
Table 2. Classification performance for different attributes (sa: satisfiability; re: resemblance; su: relative surface; co: compacity; gr: gray level). For each view and attribute, the kernel parameter (Par.) and the individual classification performance (%) are listed.

View       sa             re             su             co             gr
           Par.   %       Par.   %       Par.   %       Par.   %       Par.   %
axial      0.01   0.79    0.01   0.69    –      –       0.01   0.87    0.10   0.65
coronal    0.05   0.74    0.01   0.82    –      –       0.01   0.81    0.10   0.86
sagittal   0.05   0.85    0.01   0.95    0.01   0.96    0.01   0.91    0.10   0.81
Table 3. Classification using fusion kernels. Columns 2, 5 and 8 present the attributes used for fusion, and columns 3, 6 and 9 present the performance of the fusion kernel.

     Axial                        Coronal                      Sagittal
No.  Att.       Fusion      No.   Att.       Fusion      No.   Att.       Fusion
1    sa,su      0.92        6     re,ng      0.99        11    re,su      0.96
2    sa,co      0.90        7     sa,co      0.83        12    re,ng      0.83
3    sa,ng      0.94        8     sa,ng      0.90        13    sa,su      0.96
4    su,ng      0.97        9     ng,co      0.87        14    sa,ng      0.83
5    sa,su,ng   0.96        10    sa,ng,co   0.87        15    sa,co      0.91
                                                         16    ng,co      0.95
                                                         17    sa,ng,co   0.95
system (2) with cvx (Matlab Software for Disciplined Convex Programming, available at http://www.stanford.edu/~boyd/cvx/). Finally, the performance of the resulting kernel is tested in an SVM classifier.

In most cases, preliminary results show an improvement of the performance compared to the initial classification rates, thus proving the interest of the fusion approach for these image kernels. In lines 1, 3, 4, 6 and 16, attributes seem to provide overall complementary views of the data and therefore their individual performances are greatly topped by that of the fusion. In lines 5 and 17, triple fusion performs as well as the best possible double fusion for the given attributes, thus indicating a saturation effect, based on previous high classification scores. There are also cases (lines 10, 12 or 14) where the fusion weighs more heavily the kernel with a lower performance, thus creating an average performance interpolator. Indeed, optimizing the global convex problem does not directly guarantee a better performance on any testing sample, but gives a better statistical bound on the proportion of errors. Another important aspect that has to be taken into account is the fact that fusion increases the dimensionality of the kernel feature space, and overlearning may occur for small training sets.

The heaviest step of the algorithm is the computation of the kernel K_ai between two graphs G and G'. The computational complexity associated with this
operation is O((|G| |G'|)^3), corresponding to a few milliseconds for the images of the IBSR database and one minute for more complex graphs with 60–70 nodes.
6 Conclusion
A method for image classification based on marginalized kernels has been proposed. In particular, we show that a graph representation of the image, enriched with numerical attributes characterizing both the image regions and the spatial relations between them, associated with a fusion of the attributes, leads to improved performance. A kernel is derived for each attribute and fusion of the kernels is performed using a weighted average, in which weights are automatically estimated so as to give more importance to the most relevant attributes. Preliminary results on medical images illustrate the interest of the proposed approach. Future work aims at extending the experimental study to other and larger image databases and to more meaningful problems. From a methodological point of view, it could be interesting to investigate different types of fusion.
References

1. Chapelle, O., Haffner, P., Vapnik, V.: SVMs for histogram-based image classification. IEEE Transactions on Neural Networks, special issue on Support Vectors (1999)
2. Sivic, J., Russell, B.C., Efros, A.A., Zisserman, A., Freeman, W.T.: Discovering object categories in image collections. In: Proc. IEEE Int. Conf. on Computer Vision (2005)
3. Neuhaus, M., Bunke, H.: Edit distance based kernel functions for attributed graph matching. In: 5th IAPR TC-15 Workshop on Graph-based Representations in Pattern Recognition, Poitiers, France, pp. 352–361 (2005)
4. Neuhaus, M., Bunke, H.: A random walk kernel derived from graph edit distance. In: Yeung, D.-Y., Kwok, J.T., Fred, A., Roli, F., de Ridder, D. (eds.) Structural, Syntactic, and Statistical Pattern Recognition. LNCS, vol. 4109, pp. 191–199. Springer, Heidelberg (2006)
5. Aldea, E., Atif, J., Bloch, I.: Image Classification using Marginalized Kernels for Graphs. In: 6th IAPR-TC15 Workshop on Graph-based Representations in Pattern Recognition, GbR 2007, Alicante, Spain. LNCS, vol. 4538, pp. 103–113. Springer, Heidelberg (2007)
6. Lanckriet, G., Cristianini, N., Bartlett, P., Ghaoui, L.E., Jordan, M.I.: Learning the Kernel Matrix with Semidefinite Programming. Journal of Machine Learning Research 5, 27–72 (2004)
7. Bloch, I.: Fuzzy Spatial Relationships for Image Processing and Interpretation: A Review. Image and Vision Computing 23, 89–110 (2005)
8. Brun, L., Mokhtari, M., Meyer, F.: Hierarchical watersheds within the combinatorial pyramid framework. In: Andrès, E., Damiand, G., Lienhardt, P. (eds.) DGCI 2005. LNCS, vol. 3429, pp. 34–44. Springer, Heidelberg (2005)
9. Haris, K., Estradiadis, S.N., Maglaveras, N., Katsaggelos, A.K.: Hybrid image segmentation using watersheds and fast region merging. IEEE Transactions on Image Processing 7(12), 1684–1699 (1998)
10. Gaertner, T., Flach, P., Wrobel, S.: On graph kernels: Hardness results and efficient alternatives. In: 16th Annual Conference on Computational Learning Theory, Washington, DC, USA, pp. 129–143 (2003)
11. Kashima, H., Tsuda, K., Inokuchi, A.: Marginalized kernels between labeled graphs. In: Proc. 20th Int. Conf. on Machine Learning, pp. 321–328 (2003)
12. Mahé, P., Ueda, N., Akutsu, T., Perret, J.L., Vert, J.P.: Extensions of marginalized graph kernels. In: ICML 2004: Proc. 21st Int. Conf. on Machine Learning (2004)
13. Kuipers, B.: Modeling spatial knowledge. Cognitive Science 2, 129–153 (1978)
14. Bloch, I., Ralescu, A.: Directional Relative Position between Objects in Image Processing: A Comparison between Fuzzy Approaches. Pattern Recognition 36, 1563–1582 (2003)
15. Miyajima, K., Ralescu, A.: Spatial organization in 2D segmented images: representation and recognition of primitive spatial relations. Fuzzy Sets and Systems 65, 225–236 (1994)
16. Bouchon-Meunier, B., Rifqi, M., Bothorel, S.: Towards general measures of comparison of objects. Fuzzy Sets and Systems 84(2), 143–153 (1996)
A Genetic Approach to Training Support Vector Data Descriptors for Background Modeling in Video Data

Alireza Tavakkoli, Amol Ambardekar, Mircea Nicolescu, and Sushil Louis

Department of Computer Science and Engineering, University of Nevada, Reno, USA
{tavakkol,ambardek,mircea,louis}@cse.unr.edu
Abstract. Detecting regions of interest in video sequences is one of the most important tasks in many high level video processing applications. In this paper a novel approach based on Support Vector Data Description (SVDD) is presented. The method detects foreground regions in videos with quasi-stationary backgrounds. The SVDD is a technique used in analytically describing the data from a set of population samples. The training of Support Vector Machines (SVM's) in general, and SVDD in particular, requires a Lagrange optimization which is computationally intensive. We propose to use a genetic approach to solve the Lagrange optimization problem. The Genetic Algorithm (GA) starts from an initial guess and solves the optimization problem iteratively. Moreover, we expect to get accurate results with less cost than the Sequential Minimal Optimization (SMO) technique.
1 Introduction
Typically, in most visual surveillance systems, stationary cameras are used. However, because of inherent changes in the background itself, such as fluctuations in monitors and fluorescent lights, waving flags and trees, water surfaces, etc., the background of the video may not be completely static. In these types of backgrounds a single background frame is not useful to detect moving regions. A mixture of Gaussians (MoG) modeling technique was proposed in [1] to address the multi-modality of the underlying background. Recently, a recursive filter formulation for MoG training was proposed by Lee in [2]. In [3], El Gammal et al. proposed a non-parametric Kernel Density Estimation (KDE) method for pixel-wise background modeling without making any assumption about its probability distribution. In order to adapt the model, a sliding window is used in [4]. However, the model convergence is problematic in situations when the illumination suddenly changes. In methods that explicitly model the background, the foreground is detected by comparing each pixel model with a heuristically selected, global threshold [3], or locally trained thresholds [5].

In this paper a single-class classification approach is used to label pixels in video sequences into foreground and background classes using Support Vector
Data Description (SVDD) [6]. The SVDD is a technique used to describe the data analytically from a set of population samples [7]. This technique uses a generalized support vector learning scheme for novelty detection, when samples from outliers are not accessible by measurement [8]. Such samples in video surveillance applications are the foreground regions. These regions are not accessible in the training stage of the system.

To train the SVDD, a Lagrange optimization, i.e. a quadratic programming (QP) problem, must be solved. The most common technique to solve the QP problem is Sequential Minimal Optimization (SMO), proposed by Platt in [9]. However, the complexity of the SMO algorithm increases with the difference between the number of samples and their dimensionality. Lessmann et al. [10] proposed a model selection for Support Vector Machines (SVM) using a genetic algorithm (GA) approach. However, this technique deals with the models used to expand the SVM and addresses the classification accuracy. Liu et al. proposed a weighted SVM with a GA-based parameter selection in [11]. Their method finds reliable weights for those samples with better support and finds the SVM parameters using a GA. However, these methods do not explicitly employ GA's as a tool for training the SVM.

The main contribution of this paper is to propose an evolutionary computing approach to solve the optimization problem in training of the SVDD. Our method encodes the description of samples as the genetic structure of individuals for a GA. This chromosome is then used to generate an evolving population using the proposed algorithm. The surviving genes of the fittest individual represent the support vectors of the sample set.

The rest of the paper is organized as follows. In Section 2 we present a review of the SVDD. Section 3 and Section 4 describe the GA approach and our proposed method, respectively. In Section 5 experimental results of the proposed algorithm are provided and discussed. Finally, Section 6 concludes this paper and gives future directions for this research.
2 Support Vector Data Description
Data domain description concerns the characteristics of a data set [7] whose boundary can be used to detect novel samples (outliers). A normal data description gives a closed boundary around the data which can be represented by a hyper-sphere F(R, a). The volume of this hyper-sphere with center a and radius R should be minimized while containing all the training samples x_i. As proposed in [7], the extension to more complex distributions is straightforward using kernels. To allow the possibility of outliers in the training set, slack variables ξ_i ≥ 0 are introduced. The error function to be minimized is defined as F(R, a) = R^2 + C Σ_i ξ_i, subject to ||x_i − a||^2 ≤ R^2 + ξ_i for all i. Using Lagrange optimization, the above results in:

L = Σ_i α_i (x_i · x_i) − Σ_{i,j} α_i α_j (x_i · x_j),   ∀α_i : 0 ≤ α_i ≤ C      (1)
When a sample falls on the boundary of the hyper-sphere (or outside it), its corresponding Lagrange multiplier is α_i > 0; otherwise it is zero. It can be observed that only data points with non-zero α_i are needed in the description of the data set; therefore they are called support vectors of the description. To test a new sample y, its distance to the center of the hyper-sphere is calculated and tested against R. Given the support vectors x_i, a new test sample z_t can be classified as known/novel data using:

||z_t − a||^2 = (z_t · z_t) − 2 Σ_i α_i (z_t · x_i) + Σ_{i,j} α_i α_j (x_i · x_j)      (2)
where α_i are Lagrange multipliers and ||z_t − a|| is the distance of the new sample to the description center. If this distance is larger than R, then the sample is classified as novel.

In order to have a flexible data description, as opposed to the simple hyper-sphere discussed above, a kernel function K(x_i, x_j) = Φ(x_i) · Φ(x_j) is introduced. This kernel maps the data into a higher dimensional space, where it is described by the simple hyper-sphere boundary. Instead of a simple dot product of the training samples (x_i · x_j), the dot product is performed using a kernel function. Several kernels have been proposed in the literature [12]. Among these, the Gaussian kernel K(x_i, x_j) = exp(−||x_i − x_j||^2 / σ^2) gives a closed data description.

Using the above theory, the proposed method generates a SVDD for each pixel in the scene using its values in the past. These descriptions are then used to label each pixel in new frames as a known (background) or a novel (foreground) pixel. In the following section we present the motivations behind using a GA to solve the SVDD training problem.
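For illustration, the kernelised form of Eq. (2) with the Gaussian kernel follows directly from the support vectors and their multipliers; the Python sketch below is not the authors' implementation, and it classifies a pixel sample as novel (foreground) when its distance to the description centre exceeds R.

import numpy as np

def gaussian_k(x, y, sigma):
    return float(np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2) / sigma ** 2))

def svdd_distance2(z, support_vectors, alphas, sigma):
    # Squared distance of a test sample to the description centre in the
    # kernel-induced space: K(z,z) - 2 sum_i a_i K(z,x_i) + sum_ij a_i a_j K(x_i,x_j).
    k_zz = gaussian_k(z, z, sigma)            # equals 1 for this kernel
    cross = sum(a * gaussian_k(z, x, sigma) for a, x in zip(alphas, support_vectors))
    const = sum(ai * aj * gaussian_k(xi, xj, sigma)
                for ai, xi in zip(alphas, support_vectors)
                for aj, xj in zip(alphas, support_vectors))
    return k_zz - 2.0 * cross + const

def is_novel(z, support_vectors, alphas, sigma, radius2):
    # radius2 is R^2 for the trained description.
    return svdd_distance2(z, support_vectors, alphas, sigma) > radius2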
3 The Genetic Algorithm Approach
The theory behind using a SVDD is to find those sample points whose Lagrange multipliers α are non-zero (support vectors). In order to find support vectors for a given distribution, an optimization problem should be solved which results in all non-zero Lagrange multipliers from equation (1). We propose to use an evolutionary computing method to solve the optimization problem in training of the SVDD. An evolutionary algorithm is capable of adapting to near-optimal solutions efficiently.

For the sake of illustration we assume, without loss of generality, that the data is in 2-D, generated using a normal distribution function. As discussed in the previous section, the extension to higher dimensions and more complex distributions is straightforward. Lagrange multipliers are non-zero only for the support vectors. Support vectors describe the distribution of the data using equation (1). Solving the optimization problem with respect to a (the center of the description) results in a = Σ_i α_i x_i. Note that the Lagrange multipliers should be normalized (Σ_i α_i = 1).

In our approach the optimization problem is solved in a bottom-up fashion. We start with random initial values for the α multipliers. Given these multipliers, the
1. Generate population: α_i's according to Σ_i α_i = 1
2. For each individual in the population:
   2.1. a = Σ_i α_i x_i
   2.2. R_i = ||x_i − a||
   2.3. R = Σ_i α_i R_i
   2.4. Fitness = percentage of data covered by (R, a)
3. Perform evaluation (Σ_i α_i = 1)
4. Perform selection operation
5. Perform recombination (Σ_i α_i = 1)
6. If target rate not reached, go to 2.
7. Produce the data description: {α_i, x_i} ∀i : α_i ≠ 0
Fig. 1. The proposed algorithm
corresponding data description is generated. The data description is refined to achieve the target description iteratively according to the proposed evolutionary technique.
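A minimal NumPy sketch of the encoding and fitness evaluation of Fig. 1 is given below; a chromosome is a normalised vector of α multipliers and its fitness is the fraction of training samples covered by the decoded description. The function names are illustrative and not the authors' code.

import numpy as np

def random_population(n_individuals, n_samples, rng):
    # Each chromosome is a vector of Lagrange multipliers summing to one.
    pop = rng.random((n_individuals, n_samples))
    return pop / pop.sum(axis=1, keepdims=True)

def decode(alphas, X):
    # Turn a chromosome into a description (centre a, radius R) as in Fig. 1.
    a = alphas @ X                            # a = sum_i alpha_i x_i
    dists = np.linalg.norm(X - a, axis=1)     # R_i = ||x_i - a||
    R = float(alphas @ dists)                 # R  = sum_i alpha_i R_i
    return a, R

def fitness(alphas, X):
    # Fraction of the training samples covered by the decoded description.
    a, R = decode(alphas, X)
    return float(np.mean(np.linalg.norm(X - a, axis=1) <= R))

A full run would repeat selection, recombination and mutation on such a population (e.g. rng = np.random.default_rng()) until the target coverage rate is reached, as stated in the algorithm above.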
4 The Proposed Method
The proposed evolutionary algorithm is presented in Fig. 1. The algorithm employs a genetic approach to generate the solution iteratively, where the target solution is the smallest circle encompassing the data points. The algorithm produces the best solution as soon as the target rate is reached, or after it is run for the maximum number of generations. In the following, we discuss the representation scheme, encoding of genomes, and selection strategies of the proposed GA, respectively.

4.1 Representation and Encoding
The optimization results of an evolutionary method greatly depend on the representation scheme and encoding. A crucial issue in this context is choosing the individual chromosomes. A chromosome encodes the vital information about an individual and can be used to evaluate the fitness or the quality of that individual during the evolution. In our algorithm, the goal is to find the best data description for an arbitrary distribution of samples. Each individual in the population is represented by the sample points and their corresponding α multipliers (Fig. 2). It can be observed that a data description is uniquely represented by support vectors xi and their corresponding coefficients αi . Thus, each individual’s decoded chromosome represents a data description. Given a number of training sample points, each evolving individual in the population can be represented by a vector containing the Lagrange multipliers αi . The α values are real numbers between 0 and 1. Notice that the size of each chromosome is equal to the number of training samples. Our GA finds the
Fig. 2. Chromosome encoding and representation
Fig. 3. Evolutionary operators: (a) Crossover operation. (b) Mutation operation.
combination of Lagrange multipliers that results in the target data description. Fig. 2 shows the representation and encoding schemes used in the proposed GA.

4.2 Selection
In the proposed algorithm three different selection strategies are implemented: roulette wheel selection, rank-based selection and μλ selection. The effect of each strategy on the performance of the trained SVDD is studied.

Roulette wheel selection. This is the standard selection strategy used in the canonical GA. This strategy explores the solution space more evenly. However, its convergence to the optimum in non-smooth search spaces is problematic. Since the selection probability is proportional to the fitness, the scaling problem is inevitable in this strategy [13].

Rank-based selection. This selection strategy addresses the problem of scaling. While being similar to roulette wheel selection, in this strategy individuals are selected with respect to their rank instead of their fitness value.

μλ selection. This selection strategy also addresses the scaling problem of the canonical GA. μλ selection is considered to be an elitist selection. In this strategy the best individual from the current population always survives to the next generation.

4.3 Crossover and Mutation
Using the schema theorem, it can be shown that crossover is the most important operator in a GA. The crossover operator uses individuals in the population
based on the crossover probability and generates offspring. The exploration power of a genetic algorithm relates to the type and probability of its crossover operator. In our algorithm we use bi-parental crossover. From the selection stage we have N new individuals for a population size of N. Two parents are picked at random from the pool of individuals and are used in the crossover operation. This is done using the result of a biased coin toss (crossover probability). This operator is similar to the crossover used in the canonical GA except that it operates on a string of real values. The procedure is best summarized in Fig. 3(a). As seen in the figure, the normalization condition is violated after the crossover. To maintain this condition, both chromosomes are normalized.

Mutation explores the solution space to find better regions for optimization. A single-point mutation is used in the proposed method. This operator picks a mutating gene randomly with the mutation probability, using a biased coin toss. Fig. 3(b) depicts our proposed mutation operation. After the mutating gene is selected, a small mutation bias value δ is added or subtracted. Since mutation violates the normalization of the multipliers, the chromosome is normalized.
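The two operators can be sketched as follows, assuming NumPy's Generator API for the biased coin tosses; after either operation the chromosome is renormalised so that Σ_i α_i = 1, as required above, and the names are illustrative.

import numpy as np

def crossover(parent1, parent2, p_cross, rng):
    # Single-point crossover on the alpha strings, then renormalisation (Fig. 3a).
    c1, c2 = parent1.copy(), parent2.copy()
    if rng.random() < p_cross:
        point = rng.integers(1, len(parent1))
        c1 = np.concatenate([parent1[:point], parent2[point:]])
        c2 = np.concatenate([parent2[:point], parent1[point:]])
    return c1 / c1.sum(), c2 / c2.sum()

def mutate(chrom, p_mut, delta, rng):
    # Single-point mutation: add or subtract a small bias delta to one gene,
    # then renormalise (Fig. 3b).
    out = chrom.copy()
    if rng.random() < p_mut:
        g = rng.integers(len(out))
        out[g] = max(out[g] + rng.choice([-delta, delta]), 0.0)
    return out / out.sum()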
5 Experimental Results
We present two groups of experiments. The GA training experiments evaluate the performance of the proposed GA approach with regard to its selection strategies, population size and other significant parameters. The performance of the proposed method is compared with state-of-the-art techniques in the literature. The real data experiments compare the proposed system in detecting foreground regions in real videos to existing methods.

5.1 GA Training Experiments
A normal 2-D distribution of 100 data points is used for our experiments. As mentioned earlier, the extension to higher dimensions and more complex distributions is straightforward, using kernels. In the following experiments we used a population size of 100, 0.5 for the probability of crossover, 0.05 for the probability of mutation, 30 runs, a suppression parameter of 0.01, a mutation bias of 0.01 and 500 generations, unless otherwise stated.

Effect of population size. In this experiment the effect of population size on the convergence and performance of the proposed approach is evaluated. The GA is run for 30 runs using the rank-based selection strategy. Fig. 4(a) and (b) show the comparison of the fitness values for population sizes of 20 and 100 individuals. In each graph the solid curve is the minimum, the dashed curve is the average and the dotted curve represents the maximum fitness values over 30 runs. As can be seen, the more individuals in the population, the faster the convergence of the GA. However, increasing the population size results in more calls to the evaluation function and a decrease in speed. Our experiments showed that a population size of 50 is best in terms of speed and accuracy.
Fig. 4. (a) Population size 20. (b) Population size 100.
Fig. 5. Comparison of different strategies with respect to (a) Fitness. (b) Radius.
Fitness comparison. In this experiment we compare the performance of each of the three proposed selection strategies in terms of maximum fitness value. Another measure of performance is the radius of the description. Given the same fitness value, the smaller the description the better the performance of the system. This implies that if two different descriptors encompass the same number of sample points, the smaller one should be preferred.

The comparison of different selection strategies with respect to the fitness of individuals is shown in Fig. 5(a). In this experiment a population size of 100 is used for all GAs. Among the three selection strategies, the μλ selection converges fastest to the target rate of 90% (dashed curve). The rank-based selection reaches the target rate in about 300 generations (solid curve). However, notice that the canonical GA fell into a local optimum and never reached the target rate of 90% fitness (dotted curve).

Fig. 5(b) compares the description radius found by each of the three GAs. From the figure, the μλ selection based GA converges to the actual solution faster than the other two methods. Rank-based selection finds the best solution in about 300 generations. However, after comparing the radius found by these
Fig. 6. Comparison of training the SVDD using the proposed GA and the SMO

Table 1. Comparison of False Reject Rate for different classifiers

Method   GA-SVDD   SMO-SVDD   MoG      KDE      KNN
FRR      0.0833    0.1067     0.1400   0.1667   0.1333
methods, it is clear that μλ finds a better description than the rank-based selection. Notice that the canonical GA was not able to find the optimal solution in the allowed number of generations and fell into one of the local optima.

Comparison of the GA and the SMO. In this section we compare the results of training the SVDD by using the proposed GA and the SMO method. The data set in this experiment is a banana-shaped distribution of 200 data points in two dimensions. The distribution of data points can be seen in Fig. 6 along with the decision boundaries trained by using the GA and the SMO techniques. The GA is set to train the classifier and preserve only 8 support vectors. The SMO is used to achieve the same number of support vectors after training the SVDD. The classification boundary of the SVDD trained by the proposed GA is represented by the dashed curve. The solid curve is the decision boundary for the SVDD trained by the SMO method. As seen, the GA gives better generalization compared to the SMO.

Quantitative comparison. We define the False Reject Rate (FRR) for a quantitative evaluation. By definition, FRR is the percentage of missed targets: FRR = #Missed targets / #Samples. Table 1 shows a quantitative comparison between the proposed GA-based SVDD method and other techniques such as SVDD trained by SMO, Mixture of Gaussians (MoG), Kernel Density Estimation (KDE) and K-Nearest Neighbors (KNN). The FRR for SVDD is smaller than that of the other three techniques, which demonstrates the superiority of this classifier for novelty detection.

5.2 Real Videos
In this section, the foreground detection results of our method on real video sequences are shown and compared with existing statistical modeling techniques.
Fig. 7. Real videos: (a) Water surface sequence. (b) MoG. (c) KDE. (d) SVDD. (e) Handshake sequence. (f) KDE (g) SVDD.
Fig. 8. Foreground detection results
Comparison in the presence of irregular motion. Using the water surface video sequence, we compare the results of foreground region detection using our proposed method with a typical KDE [5] and MoG [1]. Fig. 7(b), (c), and (d) show the results of the MoG, the KDE, and the proposed SVDD technique, respectively. As can be seen, the proposed method gives a better detection.

Comparison in case of low contrast videos. Figure 7(e)-(g) shows the result of foreground detection using the proposed method on the hand shake video sequence and compares this result with that of the KDE method. As can be seen from Fig. 7(f) and 7(g), the proposed method achieves better detection rates compared to the KDE technique.

Other difficult scenarios. Fig. 8(a)-(d) shows results of the proposed foreground detection algorithm in very difficult situations. Our system accurately and robustly detects the foreground regions in all of these situations.
6 Conclusion and Future Work
Support Vector Data Description (SVDD) is an elegant technique to describe a single class of known samples, used in novelty detection. The main contribution
of this work is to design an evolutionary computing algorithm (i.e. GA) to solve the optimization problem in training of the SVDD. The proposed SVDD is used in detecting foreground regions in videos with quasi-stationary backgrounds. The technique shows robust performance and fast convergence speed. One future direction of this work is an incremental version of the proposed SVDD. This can be done by injecting new incoming data samples into the chromosome which replace the non-support vectors. The incremental SVDD will be adaptive to temporal changes in the sample set. The kernel and the classifier parameters can be encoded into the genetic materials as well. This unified framework evolves the data descriptor and the sample classifier together, thus, making the system automatic with respect to its parameters.
References

1. Stauffer, C., Grimson, W.: Learning patterns of activity using real-time tracking. IEEE Transactions on PAMI 22(8), 747–757 (2000)
2. Lee, D.: Effective gaussian mixture learning for video background subtraction. IEEE Transactions on PAMI 27(5), 827–832 (2005)
3. Elgammal, A., Duraiswami, R., Harwood, D., Davis, L.: Background and foreground modeling using nonparametric kernel density estimation for visual surveillance. Proceedings of the IEEE 90, 1151–1163 (2002)
4. Mittal, A., Paragios, N.: Motion-based background subtraction using adaptive kernel density estimation. Proceedings of CVPR 2, 302–309 (2004)
5. Tavakkoli, A., Nicolescu, M., Bebis, G.: Automatic robust background modeling using multivariate non-parametric kernel density estimation for visual surveillance. In: Bebis, G., Boyle, R., Koracin, D., Parvin, B. (eds.) ISVC 2005. LNCS, vol. 3804, pp. 363–370. Springer, Heidelberg (2005)
6. Tavakkoli, A., Nicolescu, M., Bebis, G.: A novelty detection approach for foreground region detection in videos with quasi-stationary backgrounds. In: Proceedings of the 2nd International Symposium on Visual Computing (2006)
7. Tax, D., Duin, R.: Support vector data description. Machine Learning 54(1), 45–66 (2004)
8. Schölkopf, B.: Support Vector Learning. Ph.D. Thesis, Technische Universität Berlin (1997)
9. Platt, J.: Sequential minimal optimization: A fast algorithm for training support vector machines. Microsoft Research Technical Report MSR-TR-98-14 (1998)
10. Lessmann, S., Stahlbock, R., Crone, S.F.: Genetic algorithms for support vector machine model selection. In: Proceedings of the International Joint Conference on Neural Networks, pp. 3063–3069 (2006)
11. Liu, S., Jia, C.Y., Ma, H.: A new weighted support vector machine with GA-based parameter selection. In: Proceedings of the International Conference on Machine Learning and Cybernetics, pp. 4351–4355 (2005)
12. Vapnik, V.: Statistical Learning Theory. Wiley, New York (1998)
13. DeJong, K.A.: Genetic algorithms are not function optimizers. Foundations of Genetic Algorithms, 5–17 (1993)
Video Sequence Querying Using Clustering of Objects’ Appearance Models

Yunqian Ma, Ben Miller, and Isaac Cohen
Honeywell Labs, 3660 Technology Drive, Minneapolis, MN 55418
{yunqian.ma,ben.miller,isaac.cohen}@honeywell.com

Abstract. In this paper, we present an approach for addressing the ‘query by example’ problem in video surveillance, where a user specifies an object of interest and would like the system to return some images (e.g., the top five) of that object or its trajectory by searching a large network of overlapping or non-overlapping cameras. The approach proposed is based on defining an appearance model for every detected object or trajectory in the network of cameras. The model integrates relative position, color, and texture descriptors of each detected object. We present a ‘pseudo track’ search method for querying using a single appearance model. Moreover, the availability of tracking within every camera can further improve the accuracy of such association by incorporating information from several appearance models belonging to the object’s trajectory. For this purpose, we present an automatic clustering technique allowing us to build a multi-valued appearance model from a collection of appearance models. The proposed approach does not require any geometric or colorimetric calibration of the cameras. Experiments from a mass transportation site demonstrate some promising results.
1 Introduction
In this paper, we address the problem of ‘query by example’ for video surveillance applications such as forensic analysis. The extensive use of video surveillance cameras in airports, rail stations, and other large facilities, such as casinos, has significantly increased the number of video streams that a surveillance operator has to keep track of. Due to limited resources, operators monitor a couple of video streams and very often cycle through large numbers of streams based on the importance of the monitored areas of the scene. In crowded environments, the operators are often unable to aggregate the information collected by the large number of cameras to assess the threats associated with suspicious behaviors. We address the ‘query by example’ problem in video surveillance, where a user provides an instance of an object of interest to a forensic tool and has the system return some images (e.g., the top five) of that object, or a trajectory of that object, by searching a large database of video streams collected by a network of overlapping or non-overlapping cameras. In this paper, we address the above problem of associating objects across a large network of overlapping or non-overlapping cameras. The approach proposed in this paper does not require a geometric or photometric calibration
of the network of cameras. Moreover, the topology of the network of cameras (i.e., knowledge of the relative positions of the cameras) is not required. The method implemented relies on a robust appearance model allowing it to measure the similarity of objects and regions (e.g., people, vehicles, etc.). The appearance model inferred for each moving region can be computed from a single region or from a collection or trajectory of regions belonging to the same object. The collection of appearances is provided by a moving-region tracker allowing association of multiple observations of the same object. The appearance of each moving region is encoded by a covariance matrix fusing color, texture, and position information. Searching a large corpus of objects for instances similar to the example provided by a user has been addressed in various forms in the literature using image features, appearance models, and tracking within and across cameras. [1] used a covariance-matrix-based appearance model to track objects within the field of view of a single camera. [13] proposed a fast method for computing the covariance matrix appearance model based on integral images. [7] used the Karcher mean to aggregate a collection of appearance models for people detection. There exist other appearance models for associating objects. [9] proposed a color distribution model, which is obtained by partitioning the blob into its polar representation. The angle and radius are subdivided uniformly, defining a set of regions partitioning the blob. [17] used color representations of different areas of a person as the appearance model to match objects across cameras. [15] combined color features and the estimated height and body type of the people they are tracking, requiring ground plane information. Other work focused on extracting features and local descriptors as appearance models: [8] used a SIFT-based descriptor as an appearance model for indexing purposes, while [14] used Haar wavelet features. Other authors have proposed addressing the association of objects across a network of cameras using the kinematics and a known topology of the network of sensors. For example, [16] used a particle filter to associate objects across cameras, and they need a map of transition possibilities between cameras.
2 Query Using a Single Example
In this section, we present in detail the appearance model used to describe objects. Then we present searching methods for objects represented by a single appearance model.

2.1 Appearance Model
The appearance model is computed on the detected regions. These regions can be motion detection regions, object detection regions, or fused regions from motion detection and object detection. In this paper we only consider motion detection regions. We use a covariance matrix to represent the appearance of detected regions as developed in [1]. This method is appealing because it fuses different types of features and has small dimensionality. The small dimensionality of the model is
well suited for its use in associating objects across a network of cameras monitoring a crowded environment, since it takes very little storage space. Given the large number of moving regions detected in the environments we are targeting, it is important to be able to compare models very quickly to enable very short search responses. The appearance model fuses a variety of features such as colors, textures, and relative locations. This allows it to localize color and texture properties and therefore increase the accuracy of the model. This is important for a scenario containing un-calibrated cameras in arbitrary positions and orientations (overlapping or non-overlapping cameras). In this case, object features, such as scale and color, may vary greatly between different cameras. The covariance matrix is built over a feature vector f containing spatial and appearance attributes [1]:

f(x, y) = [x, y, R(x, y), G(x, y), B(x, y), \nabla R^T(x, y), \nabla G^T(x, y), \nabla B^T(x, y)]   (1)

where R, G, and B are the color space encoding, and x and y are the coordinates of the pixel contributing to the color and gradient information. The appearance model associated with a detected region B_k is computed as the centered autocovariance matrix defined by:

C_k = \sum_{x,y} (f - \bar{f})(f - \bar{f})^T   (2)
In the above formalism we use color and gradients as appearance features. The selection of these features is important and impacts the performance of such an appearance model. The appearance model for every detected region is given by the covariance matrix defined above. Searching for occurrences of an object of interest within the field of view of a given camera or across a network of cameras requires defining a similarity measure comparing these appearance models. Given two appearance models, described by the covariance matrices C_i and C_j, we use the similarity measure defined by the sum of squared logarithms of the generalized eigenvalues, as shown in the following equation [1][3]:

\rho(C_i, C_j) = \sqrt{\sum_{k=1}^{d} \ln^2 \lambda_k(C_i, C_j)}   (3)
where \lambda_k(C_i, C_j) are the generalized eigenvalues of the models C_i and C_j.
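For concreteness, the covariance descriptor of Eqs. (1)-(2) and the dissimilarity of Eq. (3) can be sketched in a few lines. The following is an illustrative Python/NumPy sketch under our own naming, not the authors' implementation; the use of np.gradient for the image derivatives and the small regularization term eps are assumptions.

```python
# Illustrative sketch (not the authors' implementation) of the region covariance
# descriptor of Eqs. (1)-(2) and the dissimilarity of Eq. (3).
import numpy as np
from scipy.linalg import eigh

def region_covariance(region):
    """region: H x W x 3 float array (R, G, B values of one detected blob)."""
    h, w, _ = region.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = [xs, ys]                          # pixel coordinates x, y
    for c in range(3):
        chan = region[:, :, c]
        gy, gx = np.gradient(chan)            # image derivatives as a stand-in for the gradients
        feats.extend([chan, gx, gy])          # color plus its two gradient components
    F = np.stack([f.ravel() for f in feats], axis=1)   # (H*W) x 11 feature matrix
    F = F - F.mean(axis=0)                             # center the features
    return F.T @ F / (F.shape[0] - 1)                  # 11 x 11 covariance (Eq. 2)

def covariance_distance(Ci, Cj, eps=1e-6):
    """Dissimilarity from the generalized eigenvalues of (Ci, Cj) (Eq. 3)."""
    d = Ci.shape[0]
    lam = eigh(Ci + eps * np.eye(d), Cj + eps * np.eye(d), eigvals_only=True)
    return np.sqrt(np.sum(np.log(lam) ** 2))
```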
2.2 ‘Pseudo Track’ Method
The previous appearance model and similarity measure define the tools needed to associate moving regions within the field of view of a camera or across a network of cameras. One can consider these tools for building a ‘query by example’ solution allowing the operator to sift quickly over a very large number of detected moving regions. To achieve this, a search method must be developed and some possible ones will be discussed in this section.
One possibility is to use an exhaustive k-nearest neighbor search, where the user could specify different k values. Another search method could use threshold mechanisms to determine how many matching regions should be returned; however, it is difficult to set the threshold since the range of similarity distances from one model to all other models varies for different models. In the following we present a method we developed called ‘pseudo track’ search. The proposed method is based on the observation that the distance associating the same object across different cameras is usually greater than the distance observed within the same camera. This is mainly due to camera characteristics and environmental conditions (e.g., ambient light, presence of very dark and very bright regions in the scene, etc.). One can note that changes in the ambient light and scale variations contribute the most to the large range of similarity values observed in the various data sets. While a colorimetric calibration of the cameras could reduce the effect of static factors, it would not help with dynamic changes such as lighting and scale changes. As previously mentioned, the query results from the exhaustive k-nearest neighbor search on the same camera often had multiple correct matches as the top matches. The proposed ‘pseudo track’ search leverages that property and has three steps:
– The first step identifies multiple appearance models given the query model. We use a five-nearest neighbor search on all the models from the same camera from which the query model was selected.
– The second step performs k-nearest neighbor searches for each of the appearance models identified in the first step across all other cameras that are of interest to the operator. For these searches we set k to be a percentage of the total number of appearance models in the data set.
– Finally, we aggregate the multiple search results using a criterion that is based on the number of identical results found in the sets reported by the multiple queries.
In the first step we chose to use only the top five results because it was a good trade-off between adding extra information from correct matches and reducing noise added by false positive matches. The proposed ‘pseudo track’ method can reduce the variance of the results of a query. In Figure 1, we present a result from a real-world data set using the proposed ‘pseudo track’ method. We also present the results using the traditional exhaustive search method on the same object. We have four cameras in the data set, including overlapping and non-overlapping cameras. The details of the camera layout can be found in Section 4. The object of interest tagged by the operator is shown in Figure 1(a), and the motion detection region used to compute the appearance model is depicted on the right side of Figure 1(a). Figure 1(b) presents the top three search results for each of the other three cameras using the proposed ‘pseudo track’ method. Figure 1(c) presents the same using the exhaustive search method. Both results are compared to the ground truth, with the correct matches outlined in green and mismatches in red, as shown in Figures 1(b) and 1(c). Among the top nine returned results
Fig. 1. ‘Pseudo track’ search result: (a) the object selected and foreground pixels; (b) top three matches per camera using our method; (c) top three matches per camera using exhaustive search

Table 1. Comparison between the proposed ‘pseudo track’ and the exhaustive search

Method           Match in top 5   Match in top 3
exhaustive       83%              70%
‘pseudo track’   87%              83%
from three cameras, the proposed ‘pseudo track’ method matched eight of nine, while the exhaustive search only matched six of nine results. Table 1 shows the percentage of searches that contained at least one correctly matching appearance model in the top five and the top three matches across cameras, respectively. This was generated from 23 object searches from the data set described in Section 4 which were detected in multiple cameras. We found that the ‘pseudo track’ method matched 4% more searches in the top 5 results than the exhaustive search. For the same searches, if only the top 3 results are considered, the ‘pseudo track’ method returns a correct match for 13% more. This shows that the ‘pseudo track’ method is more effective than exhaustive search when the operator views fewer results per search. This property is useful in designing an efficient representation of the results of each query, since the operator will view fewer results without losing much accuracy.
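A minimal sketch of the three-step ‘pseudo track’ search described above is given below. The list-of-dictionaries data layout, the default of 1% of the models for k, and the function name are hypothetical; dist can be any pairwise measure, such as the covariance distance sketched in Section 2.1.

```python
# Hypothetical sketch of the three-step 'pseudo track' search; the data layout,
# the 1% default for k, and all names are our assumptions.
from collections import Counter

def pseudo_track_search(query_idx, models, dist, k_frac=0.01, top=3):
    """models: list of {"cam": camera_id, "C": covariance matrix} entries."""
    cam = models[query_idx]["cam"]
    same_cam = [i for i, m in enumerate(models) if m["cam"] == cam and i != query_idx]
    # Step 1: five nearest neighbours within the camera the query came from.
    seeds = sorted(same_cam, key=lambda i: dist(models[query_idx]["C"], models[i]["C"]))[:5]
    seeds.append(query_idx)
    # Step 2: k-NN search for every seed model across all other cameras.
    other = [i for i, m in enumerate(models) if m["cam"] != cam]
    k = max(1, int(k_frac * len(models)))
    votes = Counter()
    for s in seeds:
        hits = sorted(other, key=lambda i: dist(models[s]["C"], models[i]["C"]))[:k]
        votes.update(hits)
    # Step 3: aggregate by how often a candidate appears across the multiple queries.
    return [i for i, _ in votes.most_common(top)]
```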
3 Object Query Using Trajectories
In the previous section we presented the appearance model computed for each detected moving region and a querying method using only a single observation. Tracking of moving regions allows us to associate occurrences of the same
Fig. 2. Example of sequence representation: (a) original sequence of frames; (b) clustering results with valid and invalid clusters; (c) representative frame for the second cluster in (b); (d) representative frame for the third cluster in (b)
objects within the field of view of a camera [5,9,10]. Moreover, object tracking methods allow us to correct erroneous region detections and to disambiguate object positions in the presence of dynamic or static occlusions. The obtained trajectories provide a collection of motion regions that are believed to belong to the same tracked moving object in the scene. This provides a collection of object instances that should be used for inferring a more robust appearance model for each tracked region in the scene. In this section, we present a method for associating regions using the collection of appearances provided by the tracking. We focus on the effective representation of a sequence of appearance models defined by the covariance matrix. The use of the covariance matrix as the appearance model prevents us from using traditional machine learning methods, since the covariance matrix is defined over the manifold of symmetric positive definite matrices [2].

3.1 Modeling the Appearance of a Collection of Regions
A collection of appearance models corresponding to the k-th trajectory of a tracked object in the scene is represented by S^(k) = {B_i, i = 1, ..., n}, which
is composed of a finite number n of moving regions. These correspond to the regions in motion that belong to the track. Each region B_i in this set S^(k) is represented by its appearance model C_i^(k), i = 1, ..., n. In the following, we will define S^(k) = {C_i^(k), i = 1, ..., n}. The number of frames in a track may be very large, and the track might be corrupted by other moving objects in the scene. This typically occurs when the tracked object is occluded by another moving object in the scene. Corrupted tracks also occur in the case of erroneous association of objects in the video sequence. Examples of such corrupted observations are illustrated in Figure 2(a). Next we present a method for encoding the appearance model of a trajectory, or a collection of appearance models, using a multi-valued representation. Our objective is to infer from S^(k) a compact subset S^r(k) of moving regions B^r(k), where card(S^r(k)) = m and m ≪ n, that accurately represents the sequence S^(k). S^r(k) is defined by:

S^r(k) = {C_j^r(k), j = 1, ..., m}   (4)
This compact representation S^r(k) needs to contain all the key appearances present within the set S^(k). It should capture the various appearances of the person or vehicle as it moves across the scene, as well as remove outliers present in the set S^(k). We present a clustering-based method for building the representation S^r(k) for a given sequence S^(k). We propose the use of agglomerative hierarchical clustering [4] driven by the similarity measure defined by Eq. (3). The steps of our proposed sequence representation method are as follows.
– Step 1: Agglomerative clustering on a trajectory.
– Step 2: Outlier cluster detection and removal.
– Step 3: Calculation of the representative frame for each valid cluster C_j^r(k), j = 1, ..., m.
First, we use an agglomerative clustering approach to identify the number of classes present in a given track. Each region's appearance model C_i^(k), i = 1, 2, ..., n is initially placed into its own group. Therefore, we have n initial clusters, each containing only a single moving region or, equivalently, its covariance matrix. At each step, we merge the closest pair of clusters. As the proximity between clusters, we use average linkage, i.e., the average pairwise proximity (average length of edges):

proximity = average{ρ(C_i, C_j)}   (5)

where C_i and C_j are from different clusters. The obtained clusters may contain valid clusters as well as outliers due to errors in the tracking or to the presence, in the tracked moving regions, of objects occluding the tracked object. In Figure 2(b) we depict the clusters obtained in the case where we have limited the number of clusters to four. The computed clusters are validated using the number of elements belonging to each cluster. Clusters with too few elements are discarded. In practice we use a threshold
on the size of the cluster for identifying clusters corresponding to outliers. For example, in Figure 2(b), the first and fourth clusters are outliers because they contain only one region. Once the different regions belonging to a trajectory are clustered into groups of similar appearance, we obtain a set of valid groups G_1, ..., G_m, each containing a number of regions. In order to provide a compact description of the set of regions representing the trajectory of the moving object, one has to estimate a representative element for each of these clusters. A representative region B_i has to be computed for each corresponding cluster G_i. The representative region B_l is defined by the following equation:

l = \arg\min_{i = 1,\dots,n_k} \sum_{j \in \{1,\dots,n_k\},\ j \neq i} \rho(C_i, C_j)   (6)
A representative region defined by the above equation corresponds to the region that is the most similar to all the regions contained in the considered cluster. A concept similar to our representative region method is the Karcher mean [18], which is used in [7]. In Figure 2 we show the complete processing of a collection of regions belonging to a trajectory. Figure 2(a) shows the collection of regions. Figure 2(b) depicts the 4 clusters estimated and the regions contained in each cluster. Finally, Figure 2(c) and (d) depict the representative region for each of the identified clusters.
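The clustering, outlier removal, and representative-selection steps can be sketched as follows. This is an illustrative Python/SciPy sketch under our own naming; the maxclust stopping criterion and the minimum cluster size of two are assumptions, not the authors' settings, and covariance_distance refers to the earlier sketch.

```python
# Illustrative sketch of Section 3.1: average-linkage clustering of a track's
# covariance models, removal of outlier clusters, and selection of a
# representative region per cluster (Eqs. 5-6).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def summarize_track(covs, n_clusters=4, min_size=2):
    n = len(covs)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = covariance_distance(covs[i], covs[j])
    # Step 1: agglomerative clustering with average linkage (Eq. 5).
    labels = fcluster(linkage(squareform(D), method="average"),
                      t=n_clusters, criterion="maxclust")
    reps = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        if len(members) < min_size:        # Step 2: discard outlier clusters
            continue
        sub = D[np.ix_(members, members)]
        # Step 3: representative = region with minimum total distance to the cluster (Eq. 6).
        reps.append(covs[members[np.argmin(sub.sum(axis=1))]])
    return reps
```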
3.2 Matching Trajectories
Matching sequences of regions to other computed trajectories requires the definition of a similarity measure that compares the appearances of two trajectories. In the section above we defined a method for representing each sequence or trajectory S^(q) by a subset S^r(q) of a fixed size. The size of S^r(q) corresponds to the number of clusters, which is provided as a parameter by the user and is constant for all trajectories. We define a similarity measure between two trajectories S^(q) and S^(p) using the Hausdorff distance. The Hausdorff distance is the maximum distance of a set to the nearest point in the other set [6]:

d(S^(q), S^(p)) = \max_{x \in S^{r(q)}} \min_{y \in S^{r(p)}} \rho(x, y)   (7)
where ρ is the Forstner distance defined in Eq. (3). The above matching of trajectories using the Hausdorff distance allows the method to take into consideration multiple appearance models of the object being tracked. It improves the accuracy of the association of objects across non-overlapping cameras, as it allows for adaptation to changes in appearance.
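Eq. (7) itself reduces to a few lines once the representative sets are available; the following directed-Hausdorff sketch uses our own naming, and a symmetric variant would take the maximum of the two directed distances.

```python
# Sketch of Eq. (7): directed Hausdorff distance between two trajectory
# summaries (lists of representative covariance models); names are ours.
def trajectory_distance(reps_q, reps_p):
    return max(min(covariance_distance(x, y) for y in reps_p) for x in reps_q)
```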
4 Experimental Results
The proposed framework was implemented in a query by example paradigm, allowing the operator to search for objects or regions of interest across a large
Fig. 3. Sample of videos acquired by the four cameras used in the reported experiment. Three of the cameras have a partial overlap, although for two of the three cameras significant changes of scale and appearance are observed. The latter are due to variable ambient illumination. The image at the bottom right depicts the view from the fourth camera, which monitors a different region of the scene.
repository of video streams acquired by a collection of cameras monitoring a relatively crowded environment. In this paper we present results on the association of detected and tracked moving regions across four cameras in a relatively crowded environment. Figure 3 shows a sample of the four images acquired by each of these cameras. It illustrates the camera viewing angles and the scene complexity. Two of the overlapping cameras are adjacent and overlap in the middle, while the third one is perpendicular to both. This particular setup allowed us to see the effect of scale and pose change, as well as changes in illumination and ambient scene characteristics between non-overlapping cameras. In this configuration a strong variation of the ambient illumination is observed. The video sequences collected by the four cameras were first processed to compute the motion regions and their trajectories. In the first stage of this study, we considered only motion regions for associating objects across multiple cameras. In the data considered for this experiment, we had 36000 detected regions for which we computed an appearance model. The appearance model for each region was represented by the 11x11 covariance matrix defined in Equation 2 and built using the feature vector defined in Equation 1. A lookup matrix was built storing all similarity measures between all detected regions. This matrix, while very large in storage size, enables very quick queries since a
Fig. 4. An example of association of a single region across non-overlapping cameras. In this example the operator provided the probe on the left. The center and right images depict frames acquired by two video cameras where the matching regions were found by the proposed approach.
Fig. 5. An example of association of an object of interest across non-overlapping cameras using the appearance model inferred from the trajectories of the object. In this example the operator provided the probe on the left. The center and right images depict frames acquired by two video cameras where the matching regions were found by the proposed approach.
query is reduced to a lookup. More efficient representations are currently being investigated, but are beyond the scope of the present paper. The purpose of this matrix was to validate the proposed framework on relatively large data sets. This stored information is used to perform queries and report corresponding matches. An actual query is performed in a couple of seconds, depending on how many results are returned. This varies with the amount of pruning in the search algorithm and is tunable by the user. In Figure 4 we present a first result of the association of objects across non-overlapping cameras. The method used for this particular example uses only a single instance of the object of interest in the scene. The appearance model is computed using the region tagged by the operator and depicted in Figure 4.a. Figure 4.b and Figure 4.c depict frames where the object of interest
was identified with high confidence. Here we only use the appearance model of the region tagged by the operator to search for similar regions. In the above example, the performance of the proposed approach was satisfactory given the number of models in the database. However, the accuracy of the association depends heavily on the quality of the motion regions. In Figure 5 we illustrate the matching of objects across non-overlapping cameras using the appearance model inferred from the trajectories of the detected moving regions. In this example the database contained approximately 1800 trajectories. Each trajectory was modeled using a 4-cluster grouping method, representing each trajectory by 4 or fewer representative regions. Trajectories of objects were compared using the Hausdorff distance presented in Eq. 7. In Figure 5, the operator tagged the object of interest represented in Figure 5.a. The probe or template is built from the trajectory to which the tagged object belongs, and then matched to all trajectories in the database. In Figure 5.b and Figure 5.c we show the frames containing the object best matching the appearance model of the probe. Associating objects using the appearance model derived from their trajectory provides more accurate results and a faster response time, due to the smaller number of trajectories compared to the number of motion regions. In the examples above we provided only the best match for each probe or template. The user interface built around the proposed concept allows the security operator to browse quickly, and in a non-linear fashion, a large number of video streams, providing an efficient forensic tool.
5 Conclusion
In this paper we proposed a new method for the ‘query by example’ scenario in video surveillance across a network of cameras, which may contain overlapping and/or non-overlapping cameras. For querying by a single detection, we proposed the ‘pseudo track’ search method to reduce the variance of the query results. For querying by trajectory, we proposed a clustering approach and a method for selecting a representative region for each cluster. We then proposed a method to associate trajectories in the presence of occlusions and of outliers due to incorrect motion region tracking. We performed experiments using real data sets from a mass transit site that showed promising results. The proposed approach allows the user to quickly search through a large amount of video data for interesting objects in a non-sequential manner. In future work, we will improve our methods by incorporating more accurate region detection algorithms, such as people segmentation methods. Also, an efficient data representation and indexing mechanism, as developed by Nister in [8], could be used to improve querying speed. Acknowledgement. This research was funded in part by the Homeland Security Advanced Research Projects Agency of the U.S. Government under contract ONR/N0001405C0119.
References
1. Porikli, F., Tuzel, O., Meer, P.: Covariance Tracking using Model Based on Means on Riemannian Manifolds. In: Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2006)
2. Fletcher, P.T., Lu, C., Pizer, S.M., Joshi, S.: Principal Geodesic Analysis for the Study of Nonlinear Statistics of Shape. IEEE Trans. on Medical Imaging 23(8), 995–1005 (2004)
3. Forstner, W., Moonen, B.: A Metric for Covariance Matrices. Technical Report, Dept. of Geodesy and Geoinformatics, Stuttgart University (1999)
4. Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification, 2nd edn. John Wiley & Sons, New York (2001)
5. Ma, Y., Yu, Q., Cohen, I.: Multiple Hypothesis Target Tracking Using Merge and Split of Graph's Node. In: Bebis, G., et al. (eds.) ISVC 2006, pp. 783–792 (2006)
6. Rote, G.: Computing the minimum Hausdorff distance between two point sets on a line under translation. Information Processing Letters 38, 123–127 (1991)
7. Tuzel, O., Porikli, F., Meer, P.: Human Detection via Classification on Riemannian Manifolds. In: CVPR (2007)
8. Nister, D., Stewenius, H.: Scalable Recognition with a Vocabulary Tree. In: CVPR (2006)
9. Kang, J., Cohen, I., Medioni, G.: Continuous Tracking Within and across Camera Streams. In: CVPR (2003)
10. Yu, Q., Medioni, G., Cohen, I.: Multiple Target Tracking Using Spatio-Temporal Markov Chain Monte Carlo Data Association. In: CVPR (2007)
11. Christel, M.G.: Carnegie Mellon University Traditional Informedia Digital Video Retrieval System. In: CIVR (2007)
12. Ferencz, A., Learned-Miller, E.G., Malik, J.: Learning Hyper-Features for Visual Identification. In: NIPS (2004)
13. Tuzel, O., Porikli, F., Meer, P.: Region covariance: A fast descriptor for detection and classification. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3951, pp. 589–600. Springer, Heidelberg (2006)
14. Liu, T., Rosenberg, C., Rowley, H.: Clustering Billions of Images with Large Scale Nearest Neighbor Search. In: WACV (2007)
15. Park, U., Jain, A.K., Kitahara, I., Kogure, K., Hagita, N.: ViSE: Visual Search Engine Using Multiple Networked Cameras. In: ICPR (2006)
16. Leoputra, W., Tan, T., Lim, F.: Non-overlapping Distributed Tracking using Particle Filter. In: ICPR (2006)
17. Albu, A.B., Laurendeau, D., Comtois, S., et al.: MONNET: Monitoring Pedestrians with a Network of Loosely-Coupled Cameras. In: ICPR (2006)
18. Karcher, H.: Riemannian center of mass and mollifier smoothing. Communications of Pure and Applied Mathematics 30, 509–541 (1977)
Learning to Recognize Complex Actions Using Conditional Random Fields

Christopher I. Connolly
SRI International, 333 Ravenswood Avenue, Menlo Park, CA

Abstract. Surveillance systems that operate continuously generate large volumes of data. One such system is described here, continuously tracking and storing observations taken from multiple stereo systems. Automated event recognition is one way of annotating track databases for faster search and retrieval. Recognition of complex events in such data sets often requires context for successful disambiguation of apparently similar activities. Conditional random fields permit straightforward incorporation of temporal context into the event recognition task. This paper describes experiments in activity learning, using conditional random fields to learn and recognize composite events that are captured by the observation stream.

Keywords: Video Tracking, Conditional Random Fields, Learning, Event Recognition.
1 Introduction
The sheer volume of video data in surveillance applications presents challenges for search and annotation. Event recognition algorithms offer one approach to the problem of focusing attention on interesting yet tractable subsets of the video stream. Much work has been done to date on the problem of event recognition in video streams [5,1,8,13]. Recent progress in event recognition has led to the development of ontologies for describing composite events in various domains [10]. Ontologies describe composite events in terms of primitive or atomic events. For the purposes of this paper, primitive events are those that represent short-term changes (often paired to form intervals) in mover state. Composite events (as described in [10] using the VERL formalism) can be defined using a variant of first-order logic. While primitive events are usually easy to extract from the data stream (e.g., standing vs. moving), the extraction of composite events requires finding satisfying variable assignments (primitive event instances) for a logical proposition. It is of interest to know whether such instances can be learned (and whether ontologies can be defined or refined through learning). In this paper, we describe steps toward automatic annotation of video datasets using CRFs (conditional random fields [7]) to infer composite events from the raw data stream. Some success has been achieved through the use of Markov models for activity recognition [9]. The Markov assumption, however, does not easily
permit Markov models to explicitly capture contextual information. To compensate for this, hierarchical Markov and semi-Markov models have been proposed, where model layers capture sequence properties at different time scales [2]. In contrast, CRFs explicitly model conditional dependencies in the input stream, and can therefore directly capture temporal context that is not immediately available to an approach based on Markov models. Context can be crucial for disambiguating actions that are similar in appearance. CRFs have recently been applied to the problem of activity recognition in a model domain [14], to activity analysis in annotated video [16], and to video motion analysis [11]. These results are promising, and we wished to understand whether CRFs would work well using less constrained data. In contrast to the efforts cited above, no prior annotation is performed here except to provide ground truth for training, and no body-part fixturing is required. Subjects were performing natural activities in a familiar environment without regard to the sensors.¹

1.1 System Overview
The testbed for this work is a multisensor stereo tracking system. Sensors are fixed to the ceiling of an office area and can monitor high-traffic areas such as hallways and document preparation rooms. One of the sensors monitors a hallway containing a restroom and two vending machines, and this area is the environment of interest for this paper. Figure 1 shows an example of data collected from one of the system's sensors. Sensors are fully calibrated and situated within a geospatial site model of the environment. Local environment geometry can be modeled and overlaid on the sensor image, as shown in this figure. All tracks are geolocated and can be analyzed in the context of site geometry, although this data is not used for the experiments described here. Each sensor delivers stereo and video data to a small dedicated processor that performs plan-view tracking to extract samples of the movers in the scene [4]. The tracker extracts image chips, floor histograms, and position information for each sample. Floor histograms are obtained from the stereo-derived points p that are included in the mover's bounding box. For a given floor histogram bin (x, y), let S be the set of points p contained in that bin. Each histogram bin H(x, y) is the sum of the z values that fall in that bin:

H(x, y) = \sum_{p \in S} p_z   (1)
Thus at each sample in time, the floor histogram encodes information about the posture of the mover. Samples are timestamped and delivered to a central analysis and archive host, where observations can be entered into a database or otherwise saved to disk. Sample rates are typically on the order of 7 to 10 Hz.
¹ Since the sensors have been in place for many months, most people in this environment no longer notice their presence.
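A minimal sketch of the floor-histogram computation of Eq. (1) is given below, assuming the tracker supplies the mover-centered stereo points as (x, y, z) triples; the one-meter extent, bin count, and function name are our assumptions.

```python
# Minimal sketch of the floor histogram of Eq. (1); layout assumptions are ours.
import numpy as np

def floor_histogram(points, bins=16, extent=1.0):
    """points: N x 3 array of (x, y, z) stereo points inside the mover's box."""
    H, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=bins,
                             range=[[-extent / 2, extent / 2]] * 2,
                             weights=points[:, 2])   # sum of z values per bin
    return H
```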
Fig. 1. Reconstructed video of a vending machine purchase, showing ground track history and local scene geometry
2 Track Processing
Vending machine purchase events are composite, consisting of several distinct phases. The buyer must first choose the item. In many cases, prospective buyers will simply walk away if items are too expensive or if the desired item is not found. After making a decision, a buyer then needs to insert money into the machine, press the appropriate buttons, and stoop down to retrieve the selected item. Position and velocity alone are therefore insufficient for correct recognition of purchase events. Recognizing a purchase requires an analysis of position and posture over the time course of the purchase event, and the ability to detect all the required components of the event in proper order without being confused by “near misses”. To complicate matters, a recycling bin sits next to the vending machines. Stooping motions are observed for both the vending machines and the recycling bins, so this motion by itself is not sufficient to distinguish between recycling and purchasing activities. The context of the action is therefore crucial in correctly identifying purchases.
3 Conditional Random Fields
The type of CRF used here is a linear-chain conditional random field [12], which corresponds in structure to a Hidden Markov Model (HMM). Figure 2 illustrates the graphical model corresponding to an HMM. Given a sequence of
Fig. 2. HMM as a directed graph, with states as white nodes, observations as shaded nodes, and time going from left to right
Fig. 3. Linear-chain CRF as a directed graph, with states as white nodes and observations as shaded nodes. Note the links between labels and observations that are forward and backward in time.
observations x_t and labels (or states) y_t, an HMM models the distribution p(x|y), the probability of observation x_t given state y_t. It is therefore a generative model. Furthermore, the Markov property holds that the probability of a transition to state y_t depends only on the immediately prior state y_{t-1}. HMMs generally require extra machinery to consider temporal context. In contrast, conditional random fields model the conditional probability p(y|x), the likelihood of label y given the observation x. Typically, the set of labels and observations is structured as an undirected graphical model. This graphical model is not constrained to look solely at observation x_t, but can incorporate observations that are arbitrarily far away in time (see Figure 3). CRFs are discriminative, since they infer the label y from the observation sequence. CRFs are trained by optimizing p(y|x) with respect to sequences with ground truth labels.
4 CRF Features
The combination of mover position, velocity, and posture is used here for event recognition. Raw postural information is represented by a 16x16 floor histogram. The histogram is a 16x16 array that represents approximately 1 square meter of floor space centered on the mover (see Figure 4). To make training over this
Fig. 4. A single sensor observation on the left with the corresponding mover-centered postural histogram on the right
feature space more tractable, eigenposes [3] are used to reduce the histogram space to a 6-dimensional posture component space. Eigenposes were computed by selecting random floor histograms H from track positions that were evenly distributed throughout the capture volume of the sensor. A total of 48 tracks, most of which were short walks through the capture volume, were used to train this aspect of the system. This yielded approximately 1000 usable floor histograms in H. The matrix A is constructed by taking the inner products of the floor histograms:

A_{ij} = H_i \cdot H_j   (2)
After singular value decomposition of A, the first six left singular vectors u of A are used as bases for constructing six principal component eigenposes {O_k, k = 1...6} derived from the floor histograms. Each singular vector u^(k) of A serves as a set of coefficients for a linear combination of the original floor histograms H_i to get O_k:

O_k = \sum_{i=0}^{N} u_i^{(k)} H_i   (3)
The set of operators O can be applied (using an inner product) to each floor histogram in a track to compute six characteristic curves for posture change over the time course of a track. Figure 5 shows the first six normalized eigenposes obtained from the 48 tracks. After computation of the eigenpose basis, each incoming floor histogram can be represented in a six-parameter posture space for further analysis. Figure 6 shows the time course of posture space for a sample track in which the mover is walking through the capture volume. Although the eigenposes are heavily biased toward walking gaits, they are sufficient for capturing the postural changes that occur in vending machine purchases.
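The eigenpose construction of Eqs. (2)-(3) and the projection of a floor histogram onto the resulting posture space can be sketched as follows (illustrative Python/NumPy under our own naming; the flattening of the 16x16 histograms into 256-vectors is an assumption about the data layout).

```python
# Illustrative sketch of the eigenpose construction of Eqs. (2)-(3) and the
# projection of a floor histogram into posture space; names are ours.
import numpy as np

def eigenpose_basis(histograms, k=6):
    """histograms: iterable of 16x16 floor histograms H_i."""
    H = np.stack([h.ravel() for h in histograms])   # N x 256 matrix
    A = H @ H.T                                     # A_ij = H_i . H_j        (Eq. 2)
    U, _, _ = np.linalg.svd(A)                      # left singular vectors of A
    return U[:, :k].T @ H                           # O_k = sum_i u_i^(k) H_i (Eq. 3)

def posture_components(O, histogram):
    """Inner products of one floor histogram with the k eigenpose operators."""
    return O @ histogram.ravel()
```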
Fig. 5. Eigenpose basis for the first six components of posture space
CRF input features are defined using the x, y floor position, the x, y velocity, and six posture components. All features were discretized to integer values, so that floor position is expressed in tenths of meters, velocity in tenths of meters per second, and each posture component is normalized to the interval [0, 100]. Time of day is also represented as a string using 24-hour time (i.e., 17:00 is 5 PM). Thus, the input to the CRF can account for the fact that vending machine purchases are more likely to be made at certain times of the day. In the CRF template, temporal features are established with a maximum window size of 0.4 seconds (4 temporal bins). Bigram features are included to enforce label consistency across time.
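A sketch of the per-sample discretization that produces the CRF input is given below; the exact column order, the rounding, and the one-row-per-time-step text format are our assumptions rather than the exact layout used with CRF++.

```python
# Hypothetical sketch of the feature discretization feeding the CRF; the
# column order and text format are assumptions, not the authors' exact layout.
def discretize_sample(x, y, vx, vy, posture, hhmm, label=None):
    """x, y in meters; vx, vy in m/s; posture: six components in [0, 1];
    hhmm: time-of-day string such as "17:00"; label: "BUY"/"DEFAULT" for training."""
    cols = [int(round(10 * x)), int(round(10 * y)),     # position in tenths of meters
            int(round(10 * vx)), int(round(10 * vy))]   # velocity in tenths of m/s
    cols += [int(round(100 * p)) for p in posture]      # posture components in [0, 100]
    cols.append(hhmm)
    if label is not None:
        cols.append(label)
    return " ".join(str(c) for c in cols)
```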
5 Experiments
Using the discretizations and feature templates described above, tracks were transformed into state sequences and supplied as input to the CRF++ package [6]. A training set was gathered from the dataset archive and labeled, consisting of a total of 144 tracks, 20 of which represented true vending machine purchases. The remainder of the training set contained an even mixture of walks through the vending machine area, subjects that were standing (usually in conversation) but not buying, and window shopping. In addition, four tracks were non-purchase events where the subject apparently needed more money before making a successful purchase. All tracks were ground-truthed by marking events using a timeline browser. The browser allows the user to scroll the timeline to see the time course of tracks. Users can create event descriptions by defining intervals with the mouse. True vending machine purchase events were marked as “BUY” events regardless of which machine was used, while the remaining time was marked as “DEFAULT”. For these experiments, 14 “BUY” tracks were used for testing, and 35 non-BUY examples were tested, including 4 conversations, 5 loiter sequences, 3 reach actions, 4 recycle actions (where objects were tossed into the recycle bin next to the machines), and 20 normal walks through the capture volume. A sequence was marked as a “BUY” event if it contained at least one such label in the output from the CRF classifier (although in no case were fewer than 30 “BUY” labels seen in a positive sequence). All data in the test set was annotated in the same way as the training data. The regularization parameter was varied from 6 to 10, and the number of posture bins was varied from 20 to 100. The resulting true and false positive rates as functions of regularization and posture binning are shown in Figure 7. The maximum false positive rate in this set of experiments was approximately
Fig. 6. Three components of posture space taken from the track shown in Figure 4
Fig. 7. Left: False positive rate as a function of regularization parameter C and the number of posture bins. Note that values at C=10 and bins=100 are at 0. Right: True positive rate as a function of regularization parameter C and the number of posture bins.
2%, while the lowest was 0%. The maximum true positive value of 100% is observed over most values of posture binning and regularization. Reach and recycle activities tended to produce the most false positives. These actions have more features in common with vending machine purchases (hand movement and stooping posture, respectively) than activity that consisted solely of walking or standing.
6 Conclusion
Recognition of vending machine purchase events, and distinguishing these events from similar actions, such as stooping to pick up change, or placing an item into nearby recycling bins, requires analysis of the context of the action. In this
case, insertion of change, selection of an item, and retrieval of the item must all occur for a true purchase event to be identified correctly. Since our data is of limited resolution, it is inevitable that some event labelings will be incorrect. Nonetheless, within the confines of this experimental setup, good recognition results were achieved. Regularization parameters of 8 to 10 in combination with posture binning using 40 to 70 bins per component tended to produce the best results. Within this range, all vending machine purchases in the test set are correctly identified, and generally, only 1 to 3 false positives are found out of 36 true negatives. Extremes of discretization will degrade performance, so care must be taken in finding the best posture discretization for a given eigenpose basis. Other methods for dimensionality reduction [15] may improve the quality of postural features. Characterization of performance as a function of training set size is part of our ongoing work, as is tuning of the feature templates (e.g., the temporal range) for CRFs. The labor required for annotation of observations with ground truth constrained the amount of training and test data available for this study, although with time a larger corpus can be established. Conditional random fields appear to work well for complex activity recognition. The current study represents ongoing work in characterizing the recognition power of CRFs and the degree to which changing experimental conditions affect the classification competence of the method. Our initial thinking was that CRFs could naturally be applied in a hierarchical context by supplying labels for primitive events, which then supply event label likelihoods to algorithms that can infer the presence of composite events. The current paper is an outgrowth of a feasibility study that indicated CRFs can do well when applied directly to the recognition of composite events. In retrospect, this is not surprising since CRFs can take full advantage of the temporal context present in the raw data stream. A more detailed study is required to directly compare the abilities of CRFs and HMMs in recognizing event sequences of similar complexity. The author is indebted to the reviewers, who provided several helpful comments and suggestions. Thanks also to R. Bolles, L. Iocchi, C. Cowan, and J. B. Burns, who contributed to various aspects of the Sentient Environment and event recognition systems used here.
References
1. Burns, J.B.: Detecting independently moving objects and their interactions in georeferenced airborne video. In: Proceedings of the IEEE Workshop on Detection and Recognition of Events in Video, pp. 12–19. IEEE, Los Alamitos (2001)
2. Duong, T., Bui, H., Phung, D., Venkatesh, S.: Activity recognition and abnormality detection with the switching hidden semi-Markov model. In: IEEE International Conference on Computer Vision and Pattern Recognition (2005)
3. Harville, M., Li, D.: Fast, integrated person tracking and activity recognition with plan-view templates from a single stereo camera. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (June 2004)
4. Iocchi, L., Bolles, R.: Integrating plan-view tracking and color-based person models for multiple people tracking. In: International Conference on Image Processing, pp. 872–875 (2005)
5. Ivanov, Y., Stauffer, C., Bobick, A., Grimson, E.: Video surveillance of interactions. In: Proceedings of the CVPR ’99 Workshop on Visual Surveillance (1998)
6. Kudo, T.: CRF++, yet another CRF toolkit. Web Page, http://crfpp.sourceforge.net/index.html
7. Lafferty, J., McCallum, A., Pereira, F.: Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In: Proceedings of ICML 2001, pp. 282–289 (2001)
8. Medioni, G.G., Cohen, I., Bremond, F., Hongeng, S., Nevatia, R.: Event detection and analysis from video streams. IEEE Transactions on Pattern Analysis and Machine Intelligence 23(8), 873–889 (2001)
9. Moore, D.J., Essa, I.A., Hayes, M.H.: Exploiting human actions and object context for recognition tasks. In: ICCV (1), pp. 80–86 (1999)
10. Nevatia, R., Hobbs, J., Bolles, B.: An ontology for video event representation. In: Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW 2004), vol. 7, p. 119 (2004)
11. Sminchisescu, C., Kanaujia, A., Li, Z., Metaxas, D.: Conditional models for contextual human motion recognition. In: Proceedings of the International Conference on Computer Vision, ICCV 2005 (2005)
12. Sutton, C., McCallum, A.: An Introduction to Conditional Random Fields for Relational Learning, ch. 4. MIT Press, Cambridge (2006)
13. Toshev, A., Bremond, F., Thonnat, M.: An APRIORI-based method for frequent composite event discovery in videos. In: Computer Vision Systems, p. 10 (2006)
14. Vail, D.L., Veloso, M.M., Lafferty, J.D.: Conditional random fields for activity recognition. In: Proceedings of the 2007 Conference on Autonomous Agents and Multiagent Systems (2007)
15. Vasilescu, M.A.O., Terzopoulos, D.: Multilinear subspace analysis of image ensembles. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2003), pp. 93–99 (June 2003)
16. Wang, T., Li, J., Diao, Q., Hu, W., Zhang, Y., Dulong, C.: Semantic event detection using conditional random fields. In: Semantic Learning Applications in Multimedia, p. 109 (2006)
Intrinsic Images by Fisher Linear Discriminant

Qiang He¹ and Chee-Hung Henry Chu²

¹ Department of Mathematics, Computer and Information Sciences, Mississippi Valley State University, Itta Bena, MS 38941
[email protected]
² The Center for Advanced Computer Studies, University of Louisiana at Lafayette, Lafayette, LA 70504-4330
[email protected]
Abstract. Intrinsic image decomposition is useful for improving the performance of such image understanding tasks as segmentation and object recognition. We present a new intrinsic image decomposition algorithm using the Fisher Linear Discriminant, based on the assumptions of Lambertian surfaces, approximately Planckian lighting, and narrowband camera sensors. The Fisher Linear Discriminant not only makes the within-sensor data as compact as possible but also keeps the between-sensor data as separate as possible. The experimental results on real-world data show good performance of this algorithm.
1 Introduction

Shadows and variable illumination considerably limit the performance of many image understanding tasks, such as image segmentation, object tracking, and object recognition. Intrinsic image decomposition can be used as a pre-processing step in image segmentation and object recognition [3]. An observed image is the product of its reflectance image and its illumination image. Intrinsic images [1] refer to the underlying reflectance image and illumination image (also termed the shading image), which cannot be directly measured. The illumination describes what happens when light interacts with surfaces; it is the amount of light incoming to a surface. The reflectance is the ratio of the light reflected from a surface to the incoming light, and is used to measure a surface's capacity to reflect incident light. Intrinsic image decomposition separates the reflectance image and the illumination image from an observed image. Weiss [8] decomposed the intrinsic images from a sequence of images taken with a stationary camera. He assumes that the reflectance is constant and the illumination varies with time. One important natural image statistic is that derivative filter outputs are sparse [7]. By incorporating this natural image statistic as a prior, he developed a maximum-likelihood estimation solution to this problem. The work most related to ours is by Finlayson et al. [4]. They devised a method to recover reflectance images directly from a single color image. Under assumptions of
Lambertian reflectance, approximately Planckian lighting, and narrowband camera sensors, they introduced an invariant image, which is grayscale and independent of illuminant color and intensity. As a result, invariant images are free of shadows. The most important step in computing invariant images is to calibrate the angle of the “invariant direction" in a 2-dimensional log-chromaticity space. The invariant images are generated by projecting data, in the log-chromaticity space, along the invariant direction. They showed the important fact that the correct projection can be reached by minimizing entropy in the log-chromaticity space. Here, we present an intrinsic image decomposition method for a single color image under the above assumptions. We show that the invariant direction can also be obtained through the Fisher linear discriminant. In Section 2, we give a description of invariant images and explain how the Fisher linear discriminant can be used to detect the projection direction for invariant images. We present an algorithm in Section 3 that recovers shadow free images from derivative filter outputs. In Section 4, experiments show that shadow free images can be recovered through the Fisher linear discriminant. Finally, we draw our conclusion in Section 5.
2 Invariant Direction and Fisher Linear Discriminant

Under the assumptions of Lambertian reflectance, approximately Planckian lighting, and narrowband camera sensors, a grayscale invariant image can be generated that is independent of illuminant color and intensity. Each pixel of a single color image has three components, viz. red (R), green (G), and blue (B). A 2-dimensional log-chromaticity map is first created by calculating the positions of log(B/R) and
log(G/R). Points from different illuminant conditions but under the same sensor band tend to form a straight line for the following reason. Under the above-mentioned assumptions, it can be shown [4] that a color component is given by:

C = I \times S(\lambda_C) \times k_1 \lambda_C^{-5} e^{-k_2/(\lambda_C T)}   (1)

where C ∈ {R, G, B}, I is the lighting intensity, \lambda_C is the wavelength corresponding to a sensor, S(\lambda_C) is the surface reflectivity specific to a wavelength, k_1 and k_2 are constants, and T is the lighting temperature. Taking the logarithm on both sides of (1), we see that the logarithm of a color component is given by a sum of three terms:

\log C = \log I + \log(S(\lambda_C) \times k_1 \lambda_C^{-5}) - \frac{k_2}{\lambda_C T},

where the first term is independent of the sensor, the second term depends only on the reflectance, and the third term is dependent on the illumination temperature. By writing out the logarithms for each of the three color components, we have
\begin{bmatrix} \log R \\ \log G \\ \log B \end{bmatrix} = \begin{bmatrix} \log I \\ \log I \\ \log I \end{bmatrix} + \begin{bmatrix} \sigma_R \\ \sigma_G \\ \sigma_B \end{bmatrix} + \frac{1}{T} \begin{bmatrix} \phi_R \\ \phi_G \\ \phi_B \end{bmatrix}   (2)

where \sigma_C = \log(S(\lambda_C) \times k_1 \lambda_C^{-5}) and \phi_C = -\frac{k_2}{\lambda_C} for each of the components. We can eliminate the illumination terms I and T by combining the equations in (2) to obtain the linear relationship in a log-chromaticity map:

\log(B/R) = \frac{\sigma_B - \sigma_R}{\sigma_G - \sigma_R} \log(G/R) + K   (3)
where K depends on σ_C and φ_C, C ∈ {R, G, B}. The constant K determines the distance of the point from the origin. Data points from surfaces with different reflectance properties have different sets of σ_C values; hence they fall onto different lines in the log-chromaticity space. Further, those lines from different surfaces tend to be parallel, because the slopes in (3) tend to be approximately constant for different surfaces. Therefore, if we project those points onto another line orthogonal to these lines, we obtain an illumination-free image, which is called the invariant image. This is illustrated in Figure 1.
Fig. 1. Illustration of invariant image and invariant direction
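A sketch of the log-chromaticity computation and of the projection onto a candidate invariant direction is given below (illustrative Python/NumPy; the small offset guarding against zero pixel values and the function names are assumptions).

```python
# Illustrative sketch of the log-chromaticity map and the projection onto a
# candidate invariant direction; the epsilon guard and names are assumptions.
import numpy as np

def log_chromaticity(img, eps=1e-6):
    """img: H x W x 3 float RGB image -> (H*W) x 2 array of (log(G/R), log(B/R))."""
    R, G, B = [img[:, :, c].ravel() + eps for c in range(3)]
    return np.stack([np.log(G / R), np.log(B / R)], axis=1)

def invariant_image(img, direction):
    """Project the log-chromaticity data onto the (unit) invariant direction,
    i.e. the direction orthogonal to the illumination lines, to obtain a
    grayscale, illumination-free image."""
    chi = log_chromaticity(img)
    return (chi @ np.asarray(direction, dtype=float)).reshape(img.shape[:2])
```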
Computing invariant images requires that we calibrate the angle of the “invariant direction" in a log-chromaticity space. Finlayson [4] showed that the correct projection can be achieved by minimizing the entropy in the log-chromaticity space. An alternative way of recovering the invariant direction is through the Fisher linear discriminant. Minimizing the entropy guarantees that the projected data have minimum variance within a class, so that the projected data from the same sensor band are distributed as close together as possible. The Fisher linear discriminant (FLD) [2] was developed for dimensionality reduction and pattern classification. The goal of the FLD is to project the original data onto a low-dimensional space such that the between-class distance is as large as possible and the within-class distance is as small as possible. Using the FLD, we not only make the projected data from the same band as close together as possible, but also separate the projected data from different sensor bands as far as possible.
In the log-chromaticity space, there are $c \in \{1, 2, 3, \ldots\}$ classes of points computed from the observed pixel values in an image. Given these data points, we can compute the between-class scatter matrix $S_B$ and the within-class scatter matrix $S_W$. The FLD solution finds a projection $W$ that optimizes

$$J(W) = \frac{W^T S_B W}{W^T S_W W}$$

by solving for the eigenvectors corresponding to the $c-1$ largest eigenvalues of

$$S_B v = \lambda S_W v, \quad \text{or equivalently} \quad S_W^{-1} S_B v = \lambda v.$$

Since $S_B$ is symmetric, it can be diagonalized as $S_B = U \Lambda U^T$, where $U$ is an orthogonal matrix consisting of $S_B$'s orthonormal eigenvectors and $\Lambda$ is a diagonal matrix with the eigenvalues of $S_B$ as diagonal elements. Define $S_B^{1/2} = U \Lambda^{1/2} U^T$ and a new vector $z = S_B^{1/2} v$; then

$$S_B^{1/2} S_W^{-1} S_B^{1/2} z = \lambda z.$$

This becomes a standard eigenvalue problem for the symmetric, positive-definite matrix $S_B^{1/2} S_W^{-1} S_B^{1/2}$, and the original solution is recovered as $v = S_B^{-1/2} z$. Since we are concerned with a projection direction for 2-dimensional data, we compute the unit vector with the same direction as $v$, i.e., $v/\|v\|$. In theory, there can be any number of classes of data points from a single color image in the log-chromaticity space. Nevertheless, the dominance of only one or two surfaces in a scene may lead to there being only one or two classes in the 2-dimensional chromaticity space. Our method considers these cases as follows.
Case 1. When there are only two classes, we obtain the projection $v$ as $v = S_W^{-1}(m_1 - m_2)$, where $m_1$ and $m_2$ are the two class means.
Case 2. When there is only one class, the 2-dimensional data are distributed along parallel straight lines. The projected data are distributed as compactly as possible if the invariant direction is the direction of minimum variance. In this case, the eigenvector corresponding to the smaller eigenvalue of the scatter matrix of the 2-dimensional chromaticity data is the invariant projection direction. Prior to computing the FLD solution, we need to classify the original data into groups, that is, decide which class each data sample belongs to and assign it to that class. This is accomplished through a K-means
algorithm [6]. Given the number of clusters, the K-means method partitions the data iteratively until the sum of squared distances of all data points to their class centers (the total intra-cluster variance) is minimized. Other algorithms such as Gaussian mixture models might give better results; in practice, however, the K-means algorithm works well here. Figure 2 and Figure 3 illustrate the FLD for one class and three classes, respectively, where the clustering is obtained through the K-means method. We can see that satisfactory invariant images are obtained through data projection onto the invariant direction chosen by the Fisher linear discriminant.
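The clustering and projection steps above can be condensed into a short sketch. The Python/NumPy code below is only an illustration of the idea, not the authors' implementation; the K-means initialization, the iteration count, and the exact handling of the one-, two-, and multi-class cases are assumptions made for the example.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means on an (n, 2) array of log-chromaticity points."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels

def invariant_direction(X, labels):
    """Projection direction chosen by the Fisher linear discriminant."""
    classes = np.unique(labels)
    if len(classes) == 1:                      # Case 2: one class only
        w, V = np.linalg.eigh(np.cov(X.T))     # eigenvector of the smaller eigenvalue
        v = V[:, np.argmin(w)]
    elif len(classes) == 2:                    # Case 1: v = Sw^{-1} (m1 - m2)
        m1, m2 = (X[labels == c].mean(0) for c in classes)
        Sw = sum((X[labels == c] - X[labels == c].mean(0)).T
                 @ (X[labels == c] - X[labels == c].mean(0)) for c in classes)
        v = np.linalg.solve(Sw, m1 - m2)
    else:                                      # general case: leading eigenvector of Sw^{-1} Sb
        mean = X.mean(0)
        Sw = np.zeros((2, 2)); Sb = np.zeros((2, 2))
        for c in classes:
            Xc = X[labels == c]; mc = Xc.mean(0)
            Sw += (Xc - mc).T @ (Xc - mc)
            d = (mc - mean)[:, None]
            Sb += len(Xc) * d @ d.T
        w, V = np.linalg.eig(np.linalg.solve(Sw, Sb))
        v = np.real(V[:, np.argmax(np.real(w))])
    return v / np.linalg.norm(v)

# Usage sketch: X = np.column_stack([np.log(G / R), np.log(B / R)]) per pixel,
# labels = kmeans(X, k=3), d = invariant_direction(X, labels);
# the invariant grayscale value per pixel is then the projection X @ d.
```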
Fig. 2. Invariant image through FLD with one class. (a) original image from [9]. (b) Data in 2-d chromaticity space. (c) Clustering for FLD, here same as (b). (d) Invariant image from FLD.
Fig. 3. Invariant image through FLD with three classes. (a) Original image. (b) Data in 2-d chromaticity space. (c) Clustering through K-means method; different classes are shown in different colors. (d) Invariant image from FLD.
3 Recovery of Shadow Free Images

A major application of recovering an intrinsic image is to generate an image that is free of shadows. The reflectance image recovered by the method described in Section 2 does not contain illumination effects, so shadow edges are not present in it. The reflectance image can therefore be used to eliminate the shadow edges in the original image as follows. We run an edge detector on the three color components of the original image and on the reflectance image. For each color-component edge, if there is no corresponding edge in the reflectance image, we declare it a shadow edge in the original image. Edges are typically found by thresholding and linking directional derivative filter outputs. It was suggested by Weiss [8] that we can reintegrate the derivative filter outputs to generate shadow-free images. We compute the derivative filter outputs for the R, G, and B color channels individually and then set the derivative values at shadow edge positions to zero. Here we consider only a horizontal derivative filter and a vertical derivative filter, defined as $f_x = [0, 1, -1]$ and $f_y = [0, 1, -1]^T$, respectively. After we obtain the individual reflectance derivative maps, with the derivative values at shadow edge positions set to zero, we can, following Weiss [8], recover the reflectance image through the deconvolution

$$\hat{r} = g \otimes \big( f_x^r \otimes r_x + f_y^r \otimes r_y \big)$$

where $\otimes$ is the filtering operation, $f_\cdot^r$ is the reversed filter corresponding to $f_\cdot$, $r_x$ and $r_y$ are the derivative maps, and $\hat{r}$ is the estimated reflectance image. Further, $g$ satisfies the constraint

$$g \otimes \big( f_x^r \otimes f_x + f_y^r \otimes f_y \big) = \delta.$$

In practice, this can be realized through the Fourier transform. To briefly summarize, our algorithm to remove shadows consists of the following steps:
(1) Compute the 2-dimensional log-chromaticity map from the original color image.
(2) Cluster the data in the 2-dimensional log-chromaticity space using the K-means algorithm.
(3) Compute the invariant direction (projection line) using the Fisher linear discriminant.
(4) Generate the invariant image by projecting the 2-dimensional data along the invariant direction.
(5) Compute the edge maps of the graylevel images (corresponding to each color channel) of the original image and of the graylevel invariant image using Canny edge detectors.
(6) Extract those edges that appear in the original edge maps but not in the invariant-image edge map as shadow edges.
(7) Compute the derivative filter outputs for the R, G, and B color channels.
(8) Set the derivative values at shadow edge positions to zero.
(9) Reintegrate the derivative filter outputs to recover the shadow-free image.
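Steps (7)-(9) can be realized in the Fourier domain, as noted above. The sketch below is one possible NumPy realization of this pseudo-inverse reintegration, not the authors' code; the filter forms, the epsilon regularizer, and the per-channel processing are assumptions made for the example.

```python
import numpy as np

def reintegrate_channel(channel, shadow_edges, eps=1e-8):
    """Reintegrate one (log) color channel after zeroing derivatives on shadow edges.

    channel      : 2-D array, one color channel of the original image
    shadow_edges : boolean mask of detected shadow-edge pixels
    """
    fx = np.array([[0.0, 1.0, -1.0]])   # horizontal derivative filter (assumed form)
    fy = fx.T                            # vertical derivative filter

    H, W = channel.shape
    Fx = np.fft.fft2(fx, s=(H, W))       # filter spectra on the image grid
    Fy = np.fft.fft2(fy, s=(H, W))
    C = np.fft.fft2(channel)

    # derivative maps, with values at shadow-edge positions set to zero (step 8)
    rx = np.real(np.fft.ifft2(Fx * C))
    ry = np.real(np.fft.ifft2(Fy * C))
    rx[shadow_edges] = 0.0
    ry[shadow_edges] = 0.0

    # pseudo-inverse reintegration: reversed filters become complex conjugates (step 9)
    num = np.conj(Fx) * np.fft.fft2(rx) + np.conj(Fy) * np.fft.fft2(ry)
    den = np.abs(Fx) ** 2 + np.abs(Fy) ** 2 + eps
    return np.real(np.fft.ifft2(num / den))
```

The recovered channel is defined only up to an additive constant (the lost DC term); in practice this can be fixed, for example, by matching the mean of the non-shadow region to that of the original channel.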
4 Experimental Results

In the following, we show shadow removal results for two pictures in Figures 4 and 5. We can see that the shadow edges are essentially extracted correctly. After setting the derivatives at the edge positions to zero and reintegrating, we recover a shadow-free version of the original image. The shadows in the recovered images are either considerably attenuated or effectively removed. We also notice that some artifacts exist in the recovered images. As pointed out by Finlayson [4], these result from inaccurate detection of shadow edges.
Fig. 4. Shadow removal results through FLD. (a) Original image. (b) The invariant image from FLD. (c) The detected shadow edges. (d) The recovered shadow free color images.
Fig. 5. Shadow removal results through FLD. (a) Original image from [9]. (b) The invariant images from FLD. (c) The detected shadow edges. (d) The recovered shadow free color images.
5 Conclusions

In this paper, we describe an effective approach to recover the intrinsic reflectance image from a color image. The method relies on the FLD to recover the invariant direction in the log-chromaticity space. We demonstrate how to use the reflectance image to synthesize a shadow-free image by eliminating the shadow edges and reintegrating the derivative outputs to generate the output image. In our experiments, the shadow removal method is not very sensitive to the estimated invariant direction. In reality, the lines formed in the 2-dimensional log-chromaticity space by different surfaces are not exactly parallel; nevertheless, after projection through the FLD we obtain satisfactory invariant images and, subsequently, shadow-free images. In future work, we will explore how to model the log-chromaticity data with other statistical methods instead of the K-means approach, which may lead to more accurate estimation of the invariant direction.
References
[1] Barrow, H.G., Tenenbaum, J.M.: Recovering intrinsic scene characteristics from images. In: Hanson, A., Riseman, E. (eds.) Computer Vision Systems. Academic Press, London (1978)
[2] Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification, 2nd edn. Wiley-Interscience, Chichester (2000)
[3] Farid, H., Adelson, E.H.: Separating reflections from images by use of independent components analysis. Journal of the Optical Society of America 16(9), 2136–2145 (1999)
[4] Finlayson, G.D., Drew, M.S., Lu, C.: Intrinsic images by entropy minimization. In: Pajdla, T., Matas, J. (eds.) ECCV 2004. LNCS, vol. 3022, pp. 582–595. Springer, Heidelberg (2004)
[5] Funt, B.V., Drew, M.S., Brockington, M.: Recovering shading from color images. In: Sandini, G. (ed.) ECCV 1992. LNCS, vol. 588, pp. 124–132. Springer, Heidelberg (1992)
[6] MacQueen, J.B.: Some methods for classification and analysis of multivariate observations. In: Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, pp. 281–297. University of California Press, Berkeley (1967)
[7] Olshausen, B.A., Field, D.J.: Emergence of simple cell receptive field properties by learning a sparse code for natural images. Nature 381, 607–608 (1996)
[8] Weiss, Y.: Deriving intrinsic images from image sequences. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 68–75 (2001)
[9] http://www.cs.sfu.ca/ mark/ftp/Eccv04/
Shape-from-Shading Algorithm for Oblique Light Source

Osamu Ikeda

Faculty of Engineering, Takushoku University, 815-1 Tate, Hachioji, Tokyo 193-0985, Japan
Abstract. A shape-from-shading method for an oblique light source appears most applicable when the effects of the convex-concave ambiguity and of shadows are minimal. In this paper, a robust iterative relation that reconstructs shape is first constructed by applying the Jacobi iterative method to the consistency equation between the reflectance map and the image for each of four approximations of the surface normal, and by combining the resulting four relations as constraints. The relation ensures convergence, but that alone is not enough to reconstruct correct shapes at bright image parts or mathematically singular points. Next, to solve the problem, the light direction is tilted in slant angle following a criterion, and the average tilt of the resulting shape is compensated. A numerical study using synthetic Mozart images shows that the method works well for a wide range of light source directions and that it gives more correct shapes than the existing methods. Results for real images are also given, showing its usefulness more convincingly.
1 Introduction

Shape reconstruction from a single shading image has long been studied [1], giving rise to a variety of approaches: minimization [2], [3], linear [4], propagation [5], [6], and deformable-model [7] methods. However, it appears that there is not yet a method that reconstructs good shapes in a robust way. For example, the minimization approach presented by Zheng et al. extrapolates the surface normals to estimate them on the boundaries [2], possibly causing numerical instability; they stop the iteration to avoid the instability at the cost of accuracy. The approach given by Tsai et al., which expands the reflectance map linearly in depth and iteratively estimates the shape based on the consistency between the image and the reflectance map [4], becomes unstable at the brightest image parts. To avoid this, they limit the number of iterations at the sacrifice of accuracy. In the propagation approaches, the image is normalized to values less than unity [6] to avoid the complex processing needed to combine many estimated shape patches, which makes the resulting shapes inaccurate. The recently reported method using a deformable model also has a stabilization factor that plays the role of a damping term in the estimation [8]; shape accuracy tends to be sacrificed in return for stability, and the accuracy appears to depend on the initial surface adopted. Other recent methods based on viscosity solutions [9], [10], [11] assume that part of the boundary is known and that there are no shadows in the images. In practice, however, the boundary information may not be obtainable, and shadows may always exist, especially when an oblique light source is used.
In this paper, we present a robust shape-from-shading approach that reconstructs better shapes for images containing shadows than the existing methods. We use an oblique light source, taking into account both the convex-concave ambiguity and shadows, and we do not assume any initial surface. First, we construct a robust iterative relation for shape reconstruction that ensures convergence. Next, we explain why and how the light direction is optimized. Then, a numerical study is described to show the usefulness of the method.
2 Iterative Relation for Shape Reconstruction

We use the consistency between a given image $i(x,y)$ and a reflectance map $R(p,q)$. Let $P \propto (p, q, 1)^T$ be the surface normal of the object's surface $z(x,y)$, $x, y = 1, \ldots, N$, and $S \propto (S_x, S_y, S_z)^T$ be the direction of the light source. Then, for a Lambertian surface, the map normalized by the albedo is given by the scalar product of $P$ and $S$:

$$R(p,q) = \frac{p S_x + q S_y + S_z}{\sqrt{p^2 + q^2 + 1}\,\sqrt{S_x^2 + S_y^2 + S_z^2}} \qquad (1)$$

The surface normal components $p$ and $q$ are given by $-\partial z/\partial x$ and $-\partial z/\partial y$, respectively, where the negative sign is used for convenience. Here we consider four approximations for them:

$$(p(x,y), q(x,y)) = \begin{cases} \big(z(x-1,y) - z(x,y),\; z(x,y-1) - z(x,y)\big) & \text{(2a)} \\ \big(z(x,y) - z(x+1,y),\; z(x,y) - z(x,y+1)\big) & \text{(2b)} \\ \big(z(x-1,y) - z(x,y),\; z(x,y) - z(x,y+1)\big) & \text{(2c)} \\ \big(z(x,y) - z(x+1,y),\; z(x,y-1) - z(x,y)\big) & \text{(2d)} \end{cases}$$

Let the function $f_m(x,y)$ be defined by

$$f_m(x,y) \equiv J_m(x,y) - R_m(p,q), \quad m = 1, 2, 3, 4 \qquad (3)$$

where the image, normalized to unity, is shifted depending on the approximation:

$$J_m(x,y) = \begin{cases} I(x,y) & \text{for } m = 1 \\ I(x+1,y+1) & \text{for } m = 2 \\ I(x,y+1) & \text{for } m = 3 \\ I(x+1,y) & \text{for } m = 4 \end{cases} \qquad (4)$$
and the $(p,q)$ expressions in Eq. (2) are used in $R_m(p,q)$, $m = 1$ to $4$. Applying the Jacobi iterative method to the consistency between the image and the reflectance map, we obtain the following four iterative relations for the four approximations, respectively:

$$-f_{1,x,y}^{(n-1)} = \Big(\frac{\partial f_{1,x,y}}{\partial z_{x,y}}\Big)^{(n-1)}\big(z_{x,y}^{(n)} - z_{x,y}^{(n-1)}\big) + \Big(\frac{\partial f_{1,x,y}}{\partial z_{x-1,y}}\Big)^{(n-1)}\big(z_{x-1,y}^{(n)} - z_{x-1,y}^{(n-1)}\big) + \Big(\frac{\partial f_{1,x,y}}{\partial z_{x,y-1}}\Big)^{(n-1)}\big(z_{x,y-1}^{(n)} - z_{x,y-1}^{(n-1)}\big) \qquad \text{(5a)}$$

$$-f_{2,x,y}^{(n-1)} = \Big(\frac{\partial f_{2,x,y}}{\partial z_{x,y}}\Big)^{(n-1)}\big(z_{x,y}^{(n)} - z_{x,y}^{(n-1)}\big) + \Big(\frac{\partial f_{2,x,y}}{\partial z_{x+1,y}}\Big)^{(n-1)}\big(z_{x+1,y}^{(n)} - z_{x+1,y}^{(n-1)}\big) + \Big(\frac{\partial f_{2,x,y}}{\partial z_{x,y+1}}\Big)^{(n-1)}\big(z_{x,y+1}^{(n)} - z_{x,y+1}^{(n-1)}\big) \qquad \text{(5b)}$$

$$-f_{3,x,y}^{(n-1)} = \Big(\frac{\partial f_{3,x,y}}{\partial z_{x,y}}\Big)^{(n-1)}\big(z_{x,y}^{(n)} - z_{x,y}^{(n-1)}\big) + \Big(\frac{\partial f_{3,x,y}}{\partial z_{x-1,y}}\Big)^{(n-1)}\big(z_{x-1,y}^{(n)} - z_{x-1,y}^{(n-1)}\big) + \Big(\frac{\partial f_{3,x,y}}{\partial z_{x,y+1}}\Big)^{(n-1)}\big(z_{x,y+1}^{(n)} - z_{x,y+1}^{(n-1)}\big) \qquad \text{(5c)}$$

$$-f_{4,x,y}^{(n-1)} = \Big(\frac{\partial f_{4,x,y}}{\partial z_{x,y}}\Big)^{(n-1)}\big(z_{x,y}^{(n)} - z_{x,y}^{(n-1)}\big) + \Big(\frac{\partial f_{4,x,y}}{\partial z_{x+1,y}}\Big)^{(n-1)}\big(z_{x+1,y}^{(n)} - z_{x+1,y}^{(n-1)}\big) + \Big(\frac{\partial f_{4,x,y}}{\partial z_{x,y-1}}\Big)^{(n-1)}\big(z_{x,y-1}^{(n)} - z_{x,y-1}^{(n-1)}\big) \qquad \text{(5d)}$$

where $f_{m,x,y} \equiv f_m(x,y)$ and $z_{x,y} \equiv z(x,y)$. These can be rewritten in matrix form as

$$-f_m^{(n-1)} = g_m^{(n-1)}\big(z^{(n)} - z^{(n-1)}\big), \quad m = 1, \ldots, 4, \quad n = 1, 2, \ldots \qquad (6)$$

where $f_m$ and $z$ are $N^2$-element column vectors of $f_m(x,y)$ and $z(x,y)$, respectively, and $g_m$ are $N^2 \times N^2$ sparse matrices made of one to three derivatives of $f_m(x,y)$ with respect to $z(x,y)$, $z(x-1,y)$, $z(x+1,y)$, $z(x,y-1)$, or $z(x,y+1)$. The derivatives can take positive or negative values. The inverses of the four $g_m$ matrices take values in regions that differ from each other, as shown in Fig. 1. The elements of $f_m$ in these shaded regions are multiplied by those of $g_m^{-1}$, which are integrated to give the values of $g_m^{-1} f_m$.
Fig. 1. The integral operations of the form $g_m^{-1} f_m$ are carried out in the different shaded regions for the four different approximations to give depth values at $(x, y)$
We combine the four iterative relations as follows:

$$-\begin{bmatrix} f_1^{(n-1)} \\ f_2^{(n-1)} \\ f_3^{(n-1)} \\ f_4^{(n-1)} \end{bmatrix} = \begin{bmatrix} g_1^{(n-1)} \\ g_2^{(n-1)} \\ g_3^{(n-1)} \\ g_4^{(n-1)} \end{bmatrix}\big(z^{(n)} - z^{(n-1)}\big), \quad n = 1, 2, \ldots \qquad (7)$$
Using $F$ and $G$ given by

$$F^{(n)} = \Big( \big(f_1^{(n)}\big)^T \;\; \big(f_2^{(n)}\big)^T \;\; \big(f_3^{(n)}\big)^T \;\; \big(f_4^{(n)}\big)^T \Big)^T \qquad (8)$$

$$G^{(n)} = \Big( \big(g_{w1}^{(n)}\big)^T \;\; \big(g_{w2}^{(n)}\big)^T \;\; \big(g_{w3}^{(n)}\big)^T \;\; \big(g_{w4}^{(n)}\big)^T \Big)^T \qquad (9)$$

Eq. (7) is rewritten as

$$-F^{(n-1)} = G^{(n-1)}\big(z^{(n)} - z^{(n-1)}\big), \quad n = 1, 2, \ldots \qquad (10)$$
Then, following the least-squares-error procedure, the shape is reconstructed by the iterative relation

$$z^{(n)} = z^{(n-1)} - \Big\{ \big(G^{(n-1)}\big)^T G^{(n-1)} \Big\}^{-1} \Big\{ \big(G^{(n-1)}\big)^T F^{(n-1)} \Big\}, \quad n = 1, 2, \ldots \qquad (11)$$

or

$$z^{(n)} = z^{(n-1)} - \big(G_2^{(n-1)}\big)^{-1} F_2^{(n-1)}, \quad n = 1, 2, \ldots \qquad (12)$$

with $z^{(0)} = 0$ as initial values, where

$$G_2 = G^T G, \quad F_2 = G^T F \qquad (13)$$
Let us express the terms $g_m$, $f_m$ and $z$ as

$$g_m^{(n)} = \big[\, g_{m\,i,j}^{(n)} \,\big] \qquad (14)$$
$$f_m^{(n)} = \big[\, f_{m\,j}^{(n)} \,\big] \qquad (15)$$
$$z^{(n)} = \big[\, z_i^{(n)} \,\big] \qquad (16)$$

where $i$ or $j$ equals $x + Ny$. Then $G_2$ and $F_2$ are given, respectively, by

$$G_2^{(n)} = \Big[ \sum_{m=1}^{4} \sum_{k=1}^{N^2} g_{m\,k,i}^{(n)}\, g_{m\,k,j}^{(n)} \Big] \qquad (17)$$

$$F_2^{(n)} = \Big[ \sum_{m=1}^{4} \sum_{k=1}^{N^2} g_{m\,k,i}^{(n)}\, f_{m\,k}^{(n)} \Big] \qquad (18)$$

It is seen from Eq. (17) that the matrix $G_2$ is also sparse, and its eigenvalues are given by the diagonal elements as

$$\lambda(x,y) = \sum_{m=1}^{4}\Big(\frac{\partial f_m(x,y)}{\partial z(x,y)}\Big)^2 + \sum_{m=2,4}\Big(\frac{\partial f_m(x-1,y)}{\partial z(x,y)}\Big)^2 + \sum_{m=1,3}\Big(\frac{\partial f_m(x+1,y)}{\partial z(x,y)}\Big)^2 + \sum_{m=2,3}\Big(\frac{\partial f_m(x,y-1)}{\partial z(x,y)}\Big)^2 + \sum_{m=1,4}\Big(\frac{\partial f_m(x,y+1)}{\partial z(x,y)}\Big)^2, \quad 2 \le x \le N-1,\; 2 \le y \le N-1 \qquad (19)$$
The eigenvalues on the four edge lines are also given by Eq. (19), provided that we retain only those terms within the region $1 \le x \le N$ and $1 \le y \le N$. That is, they consist of five summed terms in the region $2 \le x \le N-1$ and $2 \le y \le N-1$, four such terms on the four edge lines, and three such terms at the four corners. We therefore restrict the reconstruction to the region $2 \le x \le N-1$ and $2 \le y \le N-1$ to make the reconstruction more stable and the
resulting shape more uniform. In this case we can see, by inserting Eq. (3) and the relevant expressions into Eq. (19), that nine depths, at $(x,y)$ and at the eight neighboring points, contribute to the eigenvalue at $(x,y)$. Similarly, it is seen from Eq. (18) that the elements of $F_2$ also have similar symmetric expressions. Let $z'$, $G_2'$ and $F_2'$ have the elements of $z$, $G_2$ and $F_2$, respectively, in the region $2 \le x \le N-1$ and $2 \le y \le N-1$. Then the following iterative relation holds:

$$z'^{(n)} = z'^{(n-1)} - \big(G_2'^{(n-1)}\big)^{-1} F_2'^{(n-1)}, \quad n = 1, 2, \ldots \qquad (20)$$

It is noted that the values of $(p,q)$ over the entire area are still needed, as seen from Eq. (19). We impose the following boundary value after each iteration for an object such as the Mozart bust that stands on a flat surface:

$$z(x,y) = 0 \quad \text{for } (x,y) \in R_{\mathrm{edge}} \qquad (21)$$
where Redge means the four edge lines. When an image varies over the entire region, on the other hand, we enclose the image with flat surface strips and shade them using the real light direction, on which Eq. (21) is applied.
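To make the quantities in Eqs. (1)-(4) concrete, the following NumPy sketch evaluates the reflectance map and the four residual images $f_m$ for a given depth map. It illustrates the definitions only, not the full reconstruction algorithm; the array-border handling (periodic wrap via `np.roll`) is an assumption of the example, whereas the paper restricts the reconstruction to $2 \le x, y \le N-1$.

```python
import numpy as np

def reflectance(p, q, S):
    """Lambertian reflectance map R(p, q) of Eq. (1) for light direction S = (Sx, Sy, Sz)."""
    Sx, Sy, Sz = S
    num = p * Sx + q * Sy + Sz
    den = np.sqrt(p**2 + q**2 + 1.0) * np.sqrt(Sx**2 + Sy**2 + Sz**2)
    return num / den

def residuals(I, z, S):
    """Residual images f_m = J_m - R_m, m = 1..4, of Eq. (3); I is normalized to unity."""
    zl = np.roll(z, 1, axis=1)    # z(x-1, y)
    zr = np.roll(z, -1, axis=1)   # z(x+1, y)
    zu = np.roll(z, 1, axis=0)    # z(x, y-1)
    zd = np.roll(z, -1, axis=0)   # z(x, y+1)

    pq = [
        (zl - z, zu - z),   # approximation (2a)
        (z - zr, z - zd),   # approximation (2b)
        (zl - z, z - zd),   # approximation (2c)
        (z - zr, zu - z),   # approximation (2d)
    ]
    J = [
        I,                                   # m = 1: I(x, y)
        np.roll(np.roll(I, -1, 0), -1, 1),   # m = 2: I(x+1, y+1)
        np.roll(I, -1, 0),                   # m = 3: I(x, y+1)
        np.roll(I, -1, 1),                   # m = 4: I(x+1, y)
    ]
    return [Jm - reflectance(p, q, S) for (p, q), Jm in zip(pq, J)]
```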
3 Tilting the Light Direction

The term $\partial f/\partial z$ is given from Eqs. (1) and (3) as

$$\frac{\partial f}{\partial z} = \frac{(S_x + S_y)(1 + p^2 + q^2) - (p + q)\big(p S_x + q S_y + S_z\big)}{\sqrt{S_x^2 + S_y^2 + S_z^2}\,\big(1 + p^2 + q^2\big)^{3/2}} \qquad (22)$$

Substituting the variables with

$$S_x = S_0 \cos\theta, \quad S_y = S_0 \sin\theta, \quad p = p_0 \cos\alpha, \quad q = p_0 \sin\alpha \qquad (23)$$

Eq. (22) is rewritten as

$$\frac{\partial f}{\partial z} = \frac{S_0(\cos\theta + \sin\theta)(1 + p_0^2) - p_0(\cos\alpha + \sin\alpha)\big(S_0 p_0 \cos(\theta - \alpha) + S_z\big)}{\sqrt{S_0^2 + S_z^2}\,\big(1 + p_0^2\big)^{3/2}} \qquad (24)$$

Considering the effects along the same azimuthal direction as $\theta$, we put $\alpha = \theta$ in Eq. (24):

$$\frac{\partial f}{\partial z} = \frac{(\cos\theta + \sin\theta)\,(S_0 - p_0 S_z)}{\sqrt{S_0^2 + S_z^2}\,\big(1 + p_0^2\big)^{3/2}} \qquad (25)$$
This shows that the derivative is zero at the brightest image parts, where $S_0 = p_0 S_z$ holds; as seen from Eq. (19), the corresponding eigenvalue is not zero but takes a small value. That value is significant enough to achieve convergence in the iteration, but it may not be enough to achieve smooth shape reconstruction for those parts, where a cliff-like, large depth change often results instead.
If we use a smaller value, $S_{zr}$, than the real one, $S_z$, we can make the eigenvalue significant enough to solve the problem. The manipulation, however, brings the side effects of tilting and distorting the shape. Letting the resulting surface normal be $(p_r, q_r)$ for the case of $S_{zr}$, we have the relation from Eq. (1) as

$$I = \frac{p_r S_x + q_r S_y + S_{zr}}{\sqrt{S_x^2 + S_y^2 + S_{zr}^2}\,\sqrt{1 + p_r^2 + q_r^2}} = \frac{p S_x + q S_y + S_z}{\sqrt{S_x^2 + S_y^2 + S_z^2}\,\sqrt{1 + p^2 + q^2}} \qquad (26)$$

Using the following variables in addition to those in Eq. (23):

$$p_r = p_{r0} \cos\alpha, \quad q_r = p_{r0} \sin\alpha \qquad (27)$$

Eq. (26) is rewritten as

$$I = \frac{p_{r0} S_0 \cos(\theta - \alpha) + S_{zr}}{\sqrt{S_0^2 + S_{zr}^2}\,\sqrt{1 + p_{r0}^2}} = \frac{p_0 S_0 \cos(\theta - \alpha) + S_z}{\sqrt{S_0^2 + S_z^2}\,\sqrt{1 + p_0^2}} \qquad (28)$$

Applying the Taylor series expansion of the form

$$p_r = p_0 + \Big(\frac{\partial p_r}{\partial S_{zr}}\Big)_{S_{zr} = S_z} (S_{zr} - S_z) + \cdots \qquad (29)$$

we obtain the result

$$p_r = p_0 - \frac{S_0\,(1 + p_0^2)\,\big(p_0 S_z \cos(\theta - \alpha) - S_0\big)}{(S_0^2 + S_z^2)\,\big(p_0 S_z - S_0 \cos(\theta - \alpha)\big)}\,(S_{zr} - S_z) + \cdots \qquad (30)$$

The expression for the case of $\theta = \alpha$ is given by

$$p_r = p_0 - \frac{S_0\,(1 + p_0^2)}{S_0^2 + S_z^2}\,(S_{zr} - S_z) + \cdots \qquad (31)$$

It is seen that the obtainable shape is tilted in proportion to $S_{zr} - S_z$, so that we compensate the tilt using the relation for the shape

$$z_c(x,y) = z(x,y)\Big\{ 1 - m\,\frac{x S_x + y S_y}{S_z}\,(S_{zr} - S_z) \Big\} \qquad (32)$$

Taking into account that the flat shape part is tilted with the coefficient $S_0/(S_0^2 + S_z^2)$ in Eq. (31), $m$ is given from Eqs. (31) and (32) as

$$m = \frac{S_z}{S_x^2 + S_y^2} \qquad (33)$$
We can expect that, as $S_{zr}$ is decreased from $S_z$, large depth changes around the brightest image parts first disappear, making the depth range of the reconstructed shape smaller; the range then begins to increase as the shape is tilted further. So the appropriate $S_{zr}$ may be taken as the one that minimizes the depth range, which may also correspond to minimal shape distortion:

$$\hat{S}_{zr} = \arg\min_{S_{zr}} \Big\{ \max\{z(x,y)\} - \min\{z(x,y)\} \Big\} \qquad (34)$$
Some cliff-like depth changes may still remain even if the optimal value is used. When a smoother shape is desired, we use an $S_{zr}$ smaller than the optimal one by a few degrees in slant angle. This may increase the shape distortion a little, but the resulting shape looks better.
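The selection rule of Eq. (34) amounts to a one-dimensional search over the assumed slant. The sketch below is a hedged illustration of that search; `reconstruct_shape` is a hypothetical placeholder for the iterative scheme of Section 2, and the candidate interval and grid density are assumptions, not values taken from the paper.

```python
import numpy as np

def optimal_szr(I, Sx, Sy, Sz, reconstruct_shape, n_candidates=30):
    """Pick S_zr by minimizing the depth range of the reconstructed shape (Eq. 34).

    reconstruct_shape(I, (Sx, Sy, Szr)) -> z is assumed to implement Section 2.
    """
    best_range, best_szr, best_z = np.inf, Sz, None
    for szr in np.linspace(0.5 * Sz, Sz, n_candidates):   # assumed search interval
        z = reconstruct_shape(I, (Sx, Sy, szr))
        depth_range = z.max() - z.min()
        if depth_range < best_range:
            best_range, best_szr, best_z = depth_range, szr, z
    return best_szr, best_z
```

When a smoother shape is preferred, the slant can be decreased a few degrees beyond the minimizer, as described above, and the resulting tilt compensated with Eq. (32).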
4 Computer Experiments

The Mozart object and three real images were used, as shown in Fig. 2. The David image can be regarded as largely Lambertian. Weak specular reflection components were removed from the original pepper image through manual operations. Optical polarization filters were placed in front of the light source and the camera to capture the Penguin image almost free of specular reflection components; the reflection property of the plastic doll is almost uniform. The reconstructed shape for the Mozart object is evaluated against the ground-truth shape as

$$\mathrm{error} = \min_{a,c} \Big\{ \sum_{x,y} \big| a\big(z_c(x,y) - c\big) - z_o(x,y) \big| \Big\} \Big/ \big( \max\{z_o(x,y)\} - \min\{z_o(x,y)\} \big) \qquad (35)$$
where zc is the reconstructed shape in Eq. (32) and zo is the ground-truth shape. The summation is carried out in the object region.
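The error measure of Eq. (35) compares the reconstructed shape to ground truth up to an affine ambiguity in depth. The sketch below approximates the minimization over $a$ and $c$ with an ordinary least-squares fit before taking the normalized absolute error; this substitution, and summing only over a supplied object mask, are assumptions made for illustration.

```python
import numpy as np

def shape_error(z_c, z_o, mask):
    """Normalized shape error in the spirit of Eq. (35), evaluated inside the object mask."""
    zc, zo = z_c[mask], z_o[mask]
    # fit z_o ~ a * z_c + b (with b = -a * c) in the least-squares sense
    A = np.column_stack([zc, np.ones_like(zc)])
    (a, b), *_ = np.linalg.lstsq(A, zo, rcond=None)
    err = np.abs(a * zc + b - zo).sum()
    return err / (zo.max() - zo.min())
```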
Fig. 2. Mozart object (128x128), images of pepper (90x90), David (80x80), and Penguin (90x90) used in experiment
Fig. 3(a) shows that, without the optimization of the light direction, the eigenvalues of $G_2$ take small values not just at the brightest image parts but also at the parts affected by the convex-concave ambiguity and by shadows (plotted as large values in the figure). Fig. 3(b) shows that, with the optimization, the eigenvalues of $G_2$ become more significant and, as a result, smoother, especially on the face. Fig. 4 demonstrates the usefulness of the new method. It is seen that the optimization is quite effective in reconstructing good shapes for a wide range of light directions, where the slant angles used are larger than the optimal ones by three degrees. This addition is effective in making the shape smoother at the cost of a slight increase of
Fig. 3. Comparison between the distributions of the diagonal elements of G2, (a) Sr = S = (5,5,7) and (b) S r = (5,5,5.7), where a smaller value of the element is plotted higher within the object region and the element is plotted to the lowest level corresponding to the maximal value of the element outside of the object region
(Panel rows, top to bottom: S = (5,5,5), (5,5,7), (5,5,9); the rightmost reconstructions use Sr = (5,5,4.3), (5,5,5.3), and (5,5,7.0), respectively.)
Fig. 4. Left: Images. Center: shapes reconstructed using the real light directions. Right: shapes reconstructed using the optimal slant angle plus three degrees.
Fig. 5. Left: Error comparison of two shapes reconstructed using the real Sz and optimal Szr in the sense of Eq. (34). Right: relation between the real slant angle and the optimal one of the light direction.
Fig. 6. Reconstructed shape and two views of the image-mapped shape for each of the three real images. The light directions are S = (0.766,0.642,1) and Sr = (0.766,0.642,0.6) for Pepper, S = (-0.707,0.707,1) and Sr = (-0.707,0.707,0.46) for David, and S = (10,6.75,5.5) and Sr = (10,6.75,5.1) for Penguin.
distortions. In Fig. 5, the improvement in terms of shape error is shown, together with the difference between the optimal slant angle given by Eq. (34) and the real one. The result shows that the shape error stays around 6% over a slant-angle range from 35 to 60 degrees. This result for the Mozart object is superior to any of the existing methods [7]. Shadows play the major role in deteriorating the shape when the slant angle of the light source is much larger than 45 degrees, while the convex-concave ambiguity does so when the angle is much smaller than 45 degrees. Fig. 6 shows the effects of the optimization for the three real images, where the optimal slant angle given by Eq. (34) is used for the reconstruction and the compensation of Eq. (32) is not applied. It is seen that the optimization is quite effective in reconstructing good shapes, especially for the second image. We notice some distortions in the reconstructed shape for the third image. That image was obtained by using a commercially available light ball to directly illuminate the doll, meaning that the orthographic projection assumption holds only approximately. Still, the result is good enough to show the usefulness of the new method.
5 Conclusions

A robust shape-from-shading approach for the case of a single image was presented, which reconstructs better shapes than the existing methods for a wide range of light directions. As far as convex-shaped objects are concerned, the use of an oblique light source may be the most applicable choice, limiting both the convex-concave ambiguity and the shadows in the image.
References
[1] Horn, B.K.P.: Obtaining Shape from Shading Information. In: Winston, P.H. (ed.) The Psychology of Computer Vision, pp. 115–155. McGraw-Hill, New York (1975)
[2] Zheng, Q., Chellappa, R.: Estimation of Illuminant Direction, Albedo, and Shape from Shading. IEEE Trans. PAMI 13(7), 680–702 (1991)
[3] Worthington, P.L., Hancock, E.R.: New Constraints on Data-Closeness and Needle Map Consistency for Shape-from-Shading. IEEE Trans. PAMI 21(12), 1250–1267 (1999)
[4] Tsai, P.S., Shah, M.: Shape from Shading Using Linear Approximation. J. Imaging and Vision Computing 12(8), 487–498 (1994)
[5] Bichsel, M., Pentland, A.: A Simple Algorithm for Shape from Shading. In: Proc. CVPR, pp. 459–465 (1992)
[6] Kimmel, R., Bruckstein, A.M.: Tracking Level Sets by Level Sets: A Method for Solving Shape from Shading Problem. CVIU 62(2), 47–58 (1995)
[7] Samaras, D., Metaxas, D.: Incorporating Illumination Constraints in Deformable Models for Shape from Shading and Light Direction Estimation. IEEE Trans. PAMI 25(2), 247–264 (2003)
[8] Metaxas, D., Terzopoulos, D.: Shape and Nonrigid Motion Estimation through Physics-Based Synthesis. IEEE Trans. PAMI 15(6), 580–591 (1993)
[9] Prados, E., Faugeras, O., Rouy, E.: Shape from Shading and Viscosity Solutions. In: Tistarelli, M., Bigun, J., Jain, A.K. (eds.) ECCV 2002. LNCS, vol. 2359, pp. 790–804. Springer, Heidelberg (2002)
[10] Kimmel, R., Sethian, J.A.: Optimal Algorithm for Shape from Shading. Mathematical Imaging and Vision 14(3), 237–244 (2001)
[11] Durou, J., Falcone, M., Sagona, M.: A Survey of Numerical Methods for Shape from Shading. Research Report IRIT 2004-2-R (2004)
Pedestrian Tracking from a Moving Host Using Corner Points

Mirko Meuter1, Dennis Müller2, Stefan Müller-Schneiders2, Uri Iurgel2, Su-Birm Park2, and Anton Kummert1

1 Faculty of Electrical Information and Media Engineering, University of Wuppertal, D-42119 Wuppertal, Germany
2 Delphi Delco Electronics Europe, Advanced Engineering, D-42119 Wuppertal, Germany
Abstract. We present a new camera-based algorithm to track pedestrians from a moving host using corner points. The algorithm can handle partial shape variations, and the set of point movement vectors makes it possible to estimate not only translation but also scaling. The algorithm works as follows: corner points are extracted within the bounding box where the pedestrian is detected in the current frame and within a search region in the next frame. We compare the local neighbourhood of points to find point correspondences using an improved method. The point correspondences are used to estimate the object movement with a translation-scale model. A fast iterative outlier removal strategy is employed to remove single false point matches. A correction step is presented to refine the position estimate; it uses the accumulated movement of each point over time to detect outliers that cannot be found using inter-frame motion vectors. First tests indicate a good performance of the presented tracking algorithm, which is further improved by the presented correction step.
1 Introduction
Camera-based optical tracking of objects plays an important role in visual surveillance and sensing applications, and a wide variety of methods has been developed. Simple methods for optical tracking, such as correlation, work straightforwardly by extracting a template region from a source image or edge image [CSR94]. This region is convolved with a search region in the most recent frame to find the area with the maximum match, and the position of the maximum match is used to calculate the displacement vector from the previous to the current frame. More advanced methods are based on the selection and extraction of features such as edges, contours or corner points [LF05]. Corners especially are an obvious choice for tracking: they provide accurate localization, and the image usually has a high information content in the area surrounding these points [SMB00]. There are two major methods to track objects based on feature points. One method is the one-time detection of features and their subsequent
tracking by relying on the optical flow, e.g. the Kanade-Lucas-Tomasi (KLT) tracker. Such an approach has trouble with the appearance or disappearance of feature points due to partial occlusion or aspect changes, which can lead to subsequent tracking failures [LF05]. The other method is the detection of feature points in each frame and their subsequent matching. To track an object, the points are extracted in the region where the object was detected or estimated, and additionally in a search region in the next frame. The image region in the neighbourhood of each point is compared to the neighbourhood of points in the next frame to generate a list of point matches. This method yields a set of point movement vectors that can contain outliers caused by false matches and background points together with movement vectors from the object of interest. [LF05] reported on the application of interest point tracking to estimate the pose change of a plane; they employ the RANSAC algorithm to handle the outlier problem. In [AB01], interest points were used to track vehicles from a moving host, and a robust estimator was proposed to solve the outlier problem. To our knowledge, both techniques give good results, but can have high processing-time requirements. Smith [Smi95] presented the ASSET-2 tracking system for vehicles. They estimate the point flow using a Kalman filter for each corner to reduce the outlier problem and then use a motion segmentation algorithm to assign points to the object of interest or to the background. Their example videos show good performance, but in these examples the motion is easily separable. [PGV05] presented the application of interest point tracking to pedestrians, but that application considered translational movement only, which leads to problems under scale changes. We assume that we have a detection algorithm that is able to deliver a good bounding box of the object of interest in every nth frame. In practice, we simulate this behaviour by hand-labelling our object of interest and feeding the box into the tracker as initialization. We then proceed as follows. We extract corner points for the object in a region of interest and in a search region in the next frame using the well-known Harris corner detector [HS88]. The local neighbourhood of each point in the current frame is matched against the local neighbourhood of close corner points in the next frame to find point correspondences. A modification of the method described in [Smi95] is used to generate the matching results; it has low run-time requirements, gives good results and needs fewer comparisons than a 3 × 3 SSD comparison. For movement estimation, we employ a simple model for pedestrian tracking using only translation and scale. This model seems to be sufficient for our needs and has, to our best knowledge, not previously been applied to pedestrian tracking using interest points. Outliers in the set of correspondences are rejected by iteratively estimating the model coefficients using an MMSE estimate and removing those points whose residual between the matched and the expected position exceeds a certain threshold. Even though this is not a robust estimation technique, it works in most cases for a minority of outliers and is still very time efficient.
To improve the tracking results, we propose an additional correction step that considers more than one time step. The point movement vectors are accumulated over several time steps, the movement is estimated, and the matched position is compared to the expected position in order to remove points with a large distance between the expected position and the position where the match was found. This makes it possible to additionally remove points for which only the accumulated movement over time allows a decision on whether they belong to the background or to the object of interest.
2 Corner Point Extraction
For the detection of corner points, the well-known Harris detector is used. This detector delivers stable points [Abd01] and is computationally less demanding than SIFT [Low04] and other true scale-space detectors [MS04]. We give a short overview of the required processing steps of the Harris corner detector [HS88]. If a first-order Taylor expansion is used to approximate the image brightness function $I(x,y)$ in the neighbourhood of a point, the result of a local autocorrelation $f$ for a small shift $(dx, dy)$ can be approximated according to [Der04] by

$$f(dx, dy) = \begin{pmatrix} dx & dy \end{pmatrix} \begin{pmatrix} A(x,y) & C(x,y) \\ C(x,y) & B(x,y) \end{pmatrix} \begin{pmatrix} dx \\ dy \end{pmatrix} \qquad (1)$$

with

$$A(x,y) = \Big(\frac{\partial I(x,y)}{\partial x}\Big)^2, \quad B(x,y) = \Big(\frac{\partial I(x,y)}{\partial y}\Big)^2, \quad C(x,y) = \frac{\partial I(x,y)}{\partial x}\,\frac{\partial I(x,y)}{\partial y}. \qquad (2)$$

Each term is summed over a small circular window and weighted by a Gaussian function to yield a quantity comparable to the local covariance. The eigenvalues of the resulting matrix

$$M = e^{-\frac{x^2 + y^2}{2\rho^2}} \otimes \begin{pmatrix} A(x,y) & C(x,y) \\ C(x,y) & B(x,y) \end{pmatrix} \qquad (3)$$

are a rotation-invariant indicator of whether a corner, an edge or simply a flat area is present. In a flat region, both eigenvalues are low; one high eigenvalue indicates an edge, and two high eigenvalues indicate a corner. The eigenvalues allow the following function to be formulated, in which corners appear as positive local maxima:

$$C = \alpha\beta - k \cdot (\alpha + \beta)^2 \qquad (4)$$
$$\;\; = \det(M) - k \cdot \mathrm{tr}(M)^2, \qquad (5)$$

where $\alpha$ and $\beta$ are the eigenvalues of $M$ and $k$ is an adjustable parameter. The advantage of this formulation is that an explicit eigenvalue decomposition is avoided. That a region with high eigenvalues of the first-order derivative matrix is particularly well suited for tracking is also shown in [ST94].
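For reference, a compact NumPy version of the Harris response of Eqs. (1)-(5) is sketched below. It is an illustration only; the gradient approximation, the Gaussian window width, and the value of k are assumptions, not the settings used in this work.

```python
import numpy as np

def _gauss_blur(img, sigma):
    """Separable Gaussian smoothing used as the circular weighting window."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    g = np.exp(-x**2 / (2.0 * sigma**2)); g /= g.sum()
    tmp = np.apply_along_axis(lambda v: np.convolve(v, g, mode='same'), 0, img)
    return np.apply_along_axis(lambda v: np.convolve(v, g, mode='same'), 1, tmp)

def harris_response(I, sigma=1.5, k=0.04):
    """Harris corner response det(M) - k * tr(M)^2, cf. Eq. (5)."""
    Iy, Ix = np.gradient(I.astype(float))
    A = _gauss_blur(Ix * Ix, sigma)   # Eq. (2) terms, Gaussian-weighted as in Eq. (3)
    B = _gauss_blur(Iy * Iy, sigma)
    C = _gauss_blur(Ix * Iy, sigma)
    return A * B - C * C - k * (A + B) ** 2

# Corners are taken as positive local maxima of the response,
# e.g. after non-maximum suppression and thresholding.
```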
3 Corner Matching
The proposed method to match corner points between different frames consists of several steps. We assume that the object movement between consecutive frames is sufficiently small. For this reason, only those points in the current frame that lie in the local neighbourhood of the corresponding point in the previous frame are considered as feasible matching candidates: the Euclidean distance between a point in the last frame and a matching-point candidate is evaluated, and if it exceeds a defined threshold, the candidate is rejected. The final matching is evaluated by comparing the local image content in the neighbourhood of the point in the last frame with the neighbourhood of the matching candidates. There are several approaches for this comparison in the literature. [LF05] proposes to compare the local neighbourhood using a normalised cross-correlation over a 7 × 7 pixel region, which is relatively slow because it requires the comparison of 49 pixels. [Low04] proposed orientation histograms as a local descriptor; the approach is rotation invariant, but complex to compute, and we do not require rotation invariance since we consider the roll to be zero in our sequences. [Smi95] states that using the coefficients of a first-order Taylor expansion (brightness and the first-order derivatives) at the position of the corner gives sufficient results. We have extended this method to match also the second-order derivatives. Using the second-order derivatives requires a larger window to calculate the derivatives, but introduces additional information about the local environment. In our experience, the extension yields a better matching result and is still very fast. Thus we define our matching vector as

$$m = \Big( I(x,y),\; \frac{\partial I(x,y)}{\partial x},\; \frac{\partial I(x,y)}{\partial y},\; \frac{\partial^2 I(x,y)}{2\,\partial x^2},\; \frac{\partial^2 I(x,y)}{2\,\partial y^2},\; \frac{\partial^2 I(x,y)}{\partial x\,\partial y} \Big)^t \qquad (6)$$

Since we approximate the derivatives by recursively convolving the smoothed input image with a simple derivation mask $[-1\;\; 0\;\; 1]$, the coefficients can be calculated using simple additions and subtractions, so that their computation time is negligible. Among all feasible matching candidates $\tilde{m}$, we find the matching vector $m^*$ by

$$m^* = \arg\min_{\tilde{m}} \| m - \tilde{m} \|. \qquad (7)$$
The final comparison involves only 6 values, while even a 3 × 3 SSD comparison requires more comparisons.
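A minimal sketch of this matching step follows. It builds the 6-element vector of Eq. (6) from precomputed derivative images and performs a spatially gated nearest-neighbour search as in Eq. (7); the derivative images, the distance gate, and the corner-list format (integer (x, y) pairs) are assumptions of the example.

```python
import numpy as np

def descriptors(corners, I, Ix, Iy, Ixx, Iyy, Ixy):
    """6-element matching vectors of Eq. (6) at integer corner positions (x, y)."""
    xs, ys = corners[:, 0], corners[:, 1]
    return np.stack([I[ys, xs], Ix[ys, xs], Iy[ys, xs],
                     0.5 * Ixx[ys, xs], 0.5 * Iyy[ys, xs], Ixy[ys, xs]], axis=1)

def match(corners_prev, desc_prev, corners_next, desc_next, max_dist=20.0):
    """For each previous corner, the closest descriptor among spatially close candidates."""
    matches = []
    for i, (c, d) in enumerate(zip(corners_prev, desc_prev)):
        close = np.linalg.norm(corners_next - c, axis=1) <= max_dist   # spatial gate
        if not np.any(close):
            continue
        idx = np.flatnonzero(close)
        j = idx[np.argmin(np.linalg.norm(desc_next[idx] - d, axis=1))]  # Eq. (7)
        matches.append((i, j))
    return matches
```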
4 Motion Model
The change of the bounding box between frames can be expressed in terms of translation and scale only. We assume that the motion of our pedestrians in the image plane is largely dominated by these factors and that the appearance variation is low between single frames and affects only a small subset of the
points. That was the reason why we decided to neglect these movements and to estimate the movement using translation and scale only; the final result of the estimation justifies this decision. More formally, we assume that the new coordinate of each point can be calculated as follows:

$$x_{new} = (x_{old} - t_{cx})\,s + t_{ncx} \qquad (8)$$
$$y_{new} = (y_{old} - t_{cy})\,s + t_{ncy} \qquad (9)$$

Here $(t_{cx},\, t_{cy})^t$ is the vector towards the unknown scaling centre of our point set, $(t_{ncx},\, t_{ncy})^t$ is the unknown translation vector to the new centre point, and $s$ is the scale factor relative to the centre point. If we define our estimation parameters as

$$a_1 = s \qquad (10)$$
$$a_2 = t_{ncx} - t_{cx}\,s \qquad (11)$$
$$a_3 = t_{ncy} - t_{cy}\,s, \qquad (12)$$

we can formulate the final equation system in matrix form:

$$\begin{bmatrix} x_{new} \\ y_{new} \end{bmatrix} = \begin{bmatrix} x_{old} & 1 & 0 \\ y_{old} & 0 & 1 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}. \qquad (13)$$

The estimation should find the parameters $a_1$, $a_2$ and $a_3$ that minimise the mean square error

$$\sum_i \left( \begin{bmatrix} x_{new}^i \\ y_{new}^i \end{bmatrix} - \begin{bmatrix} x_{old}^i & 1 & 0 \\ y_{old}^i & 0 & 1 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} \right)^2. \qquad (14)$$

For the estimation, we used the Singular Value Decomposition approach presented in [Fla92].
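Eq. (14) is an ordinary linear least-squares problem. One way to solve it, shown below, is NumPy's `lstsq` routine, which is itself SVD-based; this is an illustration consistent with the text, not the implementation used by the authors.

```python
import numpy as np

def estimate_motion(pts_old, pts_new):
    """Least-squares estimate of (a1, a2, a3) in the translation-scale model, Eq. (13)."""
    n = len(pts_old)
    A = np.zeros((2 * n, 3))
    b = np.empty(2 * n)
    A[0::2, 0] = pts_old[:, 0]; A[0::2, 1] = 1.0   # rows for x_new
    A[1::2, 0] = pts_old[:, 1]; A[1::2, 2] = 1.0   # rows for y_new
    b[0::2] = pts_new[:, 0]
    b[1::2] = pts_new[:, 1]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)       # SVD-based solver
    return a                                         # a = (a1, a2, a3)

def apply_motion(a, pts):
    """Predicted point positions according to Eq. (13)."""
    return np.column_stack([a[0] * pts[:, 0] + a[1], a[0] * pts[:, 1] + a[2]])
```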
5 Outlier Rejection and Bounding Box Correction
The set of movement vectors obtained by the point matching process usually contains vectors originating from false matches or background points. To reject these outliers, we use the following approach. We estimate the model coefficients in a first step using every point for which a point match was found. The expected position of each point in the set is calculated by applying the estimated coefficients to equation (13). This predicted point is compared to the point where the actual match was found. Usually the outliers caused by false matches have larger residuals than the points on the true target, since their movement cannot be explained by the model. This leads to the following outlier rejection strategy: we remove rough outliers by removing each point from the estimation set that has a residual above a certain scale threshold. Then we re-estimate the
Fig. 1. Graphical representation of the outlier rejection process: (a) first estimation, (b) first correction step, (c) second correction step
model coefficients using the remaining points. It can be seen that the estimation becomes more accurate and allows outliers to be removed that were less obvious in the previous estimation step. Reducing the scale threshold and repeating the estimation several times yields our final estimate. This procedure is not generally applicable, but we obtain good results in most cases. The technique is very fast and has a fixed number of iterations and thus a fixed run time. An example of this approach can be seen in Figure 1. Here, an ideally labelled pedestrian was selected to be tracked across two frames. The image shows the estimated point movement of all points in the estimation set in red and the residual between the estimated position and the matched position in green. It also shows the estimated bounding box, calculated by applying the model coefficients to the corner points of the box. It can be seen that each outlier removal step improves the result and that the final result yields a good point movement estimate and a good final bounding box.
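The rejection loop described above can be written compactly, as in the self-contained sketch below. The decreasing sequence of residual thresholds is an assumption for illustration; the actual threshold values and number of iterations used in this work are not specified here.

```python
import numpy as np

def _fit(pts_old, pts_new):
    # translation-scale model x' = a1*x + a2, y' = a1*y + a3 (Eq. 13), least squares
    A = np.zeros((2 * len(pts_old), 3))
    A[0::2, 0], A[0::2, 1] = pts_old[:, 0], 1.0
    A[1::2, 0], A[1::2, 2] = pts_old[:, 1], 1.0
    b = pts_new.reshape(-1)
    return np.linalg.lstsq(A, b, rcond=None)[0]

def estimate_with_rejection(pts_old, pts_new, thresholds=(8.0, 4.0, 2.0)):
    """Iteratively estimate (a1, a2, a3), discarding matches with large residuals."""
    keep = np.ones(len(pts_old), dtype=bool)
    for t in thresholds:                     # fixed number of iterations -> fixed run time
        a = _fit(pts_old[keep], pts_new[keep])
        pred = np.column_stack([a[0] * pts_old[:, 0] + a[1],
                                a[0] * pts_old[:, 1] + a[2]])
        resid = np.linalg.norm(pred - pts_new, axis=1)
        keep = resid <= t                    # residual gate applied to all matches
    return a, keep
```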
6 Outlier Removal over Time
We observed that, even with our outlier rejection approach, single background points were sometimes tracked. In these cases it was very difficult to detect on a single-frame basis that the movement vector of a background point differs from the movement of the other points; one reason for this is that we are limited to pixel accuracy. If such a point is tracked over several time steps, the accumulated movement difference that was undetectable before produces an increasing drift from the true target and degrades the estimation. That is the reason why we introduce a new correction step over time. The idea is as follows: accumulating the motion over time should make the movement difference noticeable, and this difference should allow the same outlier removal strategy as in the last section to be applied, but calculated over several frames.
We generate a set of point trajectories over several frames for the remaining points that are not considered outliers after the outlier removal step. In future time steps, each point in the set is handled as follows. If a point in this set is missing, or its match is considered an outlier according to Section 5, in one frame only, the remaining points are used to calculate the expected position of this point and fill the gap in the trajectory; otherwise, the point is removed from the set of point trajectories. The set of reliable point trajectories is used to calculate a corrected bounding box. Let us assume that we are currently in frame t. We use the saved trajectories to go n time steps back into the past to get the corresponding point set at time t − n (using only those points that have a trajectory up to time t − n) and calculate the motion vector to the matched point at frame t. We use these motion vectors to estimate the motion coefficients over these time steps using the technique presented in Section 4. The result is used to calculate the expected position of each point in the current frame. This position is again compared against the position where the matched point was found; if the distance exceeds a predefined threshold, the point is removed from the estimation set and the estimation is repeated using the remaining points only. The finally obtained coefficients are used to calculate a new, corrected bounding box position. We decided to go back at most four frames; for more frames, our model is no longer sufficient, and there are often not enough point trajectories for such a long time period.
7 Algorithm Evaluation
In order to test the performance of our modified tracking approach, we have recorded 11 video sequences with over 1400 frames showing pedestrians in several situations. We have manually labelled the pedestrian in each frame of each video sequence. Every 17th frame, such a label is used to reset the tracker's position. In all remaining frames, we use the estimated position from the tracker and compare it with the labelled ground truth. An optimal result delays, for as long as possible, the drift that can be expected from all optical trackers. We use this data set to compare the tracking results with and without the correction step described in Section 6. In order to use a test measure that is also used by other authors, we adopt the coverage test described in [KSB05]. Let the ground-truth bounding box be given by $G_t$, the estimated bounding box by $E_t$, and their areas by $|G_t|$ and $|E_t|$. Coverage is measured using recall and precision,

$$r = \frac{|G_t \cap E_t|}{|E_t|} \qquad (15)$$

$$p = \frac{|G_t \cap E_t|}{|G_t|} \qquad (16)$$

and an optimal coverage has both a high recall and a high precision. This requirement led to the definition of the F-measure as a quality measure for tracking [KSB05]:

$$F = \frac{2rp}{r + p}. \qquad (17)$$
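The coverage test can be computed directly from the two axis-aligned boxes. The short sketch below assumes boxes given as (x, y, w, h), which is an assumption about the representation, and follows the recall and precision definitions of Eqs. (15) and (16) as stated above.

```python
def f_measure(gt, est):
    """Recall, precision and F-measure (Eqs. 15-17) for boxes given as (x, y, w, h)."""
    gx, gy, gw, gh = gt
    ex, ey, ew, eh = est
    ix = max(0.0, min(gx + gw, ex + ew) - max(gx, ex))   # intersection extent in x
    iy = max(0.0, min(gy + gh, ey + eh) - max(gy, ey))   # intersection extent in y
    inter = ix * iy
    if inter == 0.0:
        return 0.0, 0.0, 0.0
    r = inter / (ew * eh)        # recall over the estimated box area, Eq. (15)
    p = inter / (gw * gh)        # precision over the ground-truth area, Eq. (16)
    return r, p, 2 * r * p / (r + p)
```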
We have plotted the mean F-measure over all videos for the tracking period starting at the point where the tracker is set to the ground-truth data. The result
Fig. 2. Results of the Estimation Algorithm with and without Correction
of the evaluation is presented in Figure 2. The overall high F-measures indicate a good performance of the tracker. The slight degradation over time is caused by the drift effect. The correction step leads to an improvement of the result, and the drift effect is delayed even more. The Harris detector requires 9 ms to detect corners in a 136 × 640 pixel strip. The matching process and the movement estimation together require only 0.5 ms on average on a 3 GHz Pentium IV PC. In practice, the performance is also very good. Examples of the estimated bounding box using the presented algorithm can be seen in Figure 3. In some sequences with crossing pedestrians, we observed tracking failures, since the majority of points in the bounding box were detected on the background. In such a case our assumptions are violated and the tracked object is lost or the bounding box size estimates are wrong (Figure 3(i)); even a robust estimation would most probably fail, and additional cues are needed. However, this evaluation gives only an impression of the optical tracking algorithm without any additional information. In a complete system that includes
Fig. 3. Examples of the estimation result in several sequences. (a)-(c) Pedestrian on sidewalk, frames 0, 8, 16. (d)-(f) Pedestrian on sidewalk, sequence 2, frames 0, 8, 16. (g)-(i) Crossing pedestrian, frames 0, 8, 16. (j)-(l) Crossing pedestrian, sequence 2, frames 0, 8, 16.
a detection algorithm and a tracking filter over time like a Kalman filter, the problem could be tackled as follows. The prediction of the filter can be used to detect and to remove stationary points and to initialise the estimation algorithm. The estimation result from the optical tracking can be fed into the tracker as additional measurement so that the final filtering result consists of a fusion of the results from the detection and the optical tracker.
8 Conclusion
We have presented a new, fast pedestrian tracking algorithm based on corner points for a moving host. The algorithm is capable of estimating the translation and the scale change of the pedestrian over time. We have presented a modified algorithm to find corner matches, and we have introduced a correction step that improves the results and delays the drift even further. An evaluation has shown that the algorithm delivers good performance in most cases. In
a complete pedestrian detection system, the image based movement extraction method presented in this paper could provide additional movement information to a system based on single frame detections and a filter based movement extraction. An intelligent fusion of this information could substantially improve the overall performance of such a system.
References
[AB01] Motamedi, S., Behrad, A., Shahrokni, A.: A robust vision-based moving target detection and tracking system (2001)
[Abd01] Abdeljaqued, Y.: Feature Point Extraction and Tracking for Video Summarization and Manipulation. PhD thesis, Universitaet Hannover (November 2001)
[CSR94] Brandt, S., Smith, C., Papanikolopoulos, N., Richards, C.: Visual tracking strategies for intelligent vehicle-highway systems. In: International Symposium on Photonics for Industrial Applications, Intelligent Vehicle Highway Systems, Boston (1994)
[Der04] Derpanis, K.: The Harris Corner Detector. Technical report, Department of Computer Science and Engineering, York University, Toronto, Ontario, Canada (October 27, 2004)
[Fla92] Press, W., Teukolsky, S., Vetterling, W., Flannery, B.: Numerical Recipes in C: The Art of Scientific Computing, 2nd edn. Cambridge University Press, Cambridge (1992)
[HS88] Harris, C., Stephens, M.: A combined corner and edge detector. In: Fourth Alvey Vision Conference, pp. 147–151, Manchester (1988)
[KSB05] Odobez, J.-M., Smith, K., Gatica-Perez, D., Ba, S.: Evaluating multi-object tracking. In: Workshop on Empirical Evaluation Methods in Computer Vision (EEMCV) (June 2005)
[LF05] Lepetit, V., Fua, P.: Monocular model-based 3d tracking of rigid objects: A survey. Foundations and Trends in Computer Graphics and Vision 1 (2005)
[Low04] Lowe, D.: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision (2004)
[MS04] Mikolajczyk, K., Schmid, C.: Scale and affine invariant interest point detectors. International Journal of Computer Vision 60 (2004)
[PGV05] Piater, J., Gabriel, P., Hayet, J., Verly, J.: Object tracking using color interest points. In: Advanced Video and Signal Based Surveillance, pp. 159–164 (September 2005)
[SMB00] Schmid, C., Mohr, R., Bauckhage, C.: Evaluation of interest point detectors. International Journal of Computer Vision 37, 151–172 (2000)
[Smi95] Smith, S.: ASSET-2: Real-time motion segmentation and tracking. Technical report, Oxford Centre for Functional Magnetic Resonance Imaging of the Brain, Department of Clinical Neurology, Oxford University, Oxford, UK (1995)
[ST94] Shi, J., Tomasi, C.: Good features to track. In: 1994 IEEE CVPR, pp. 593–600 (1994)
3D Reconstruction and Pose Determination of the Cutting Tool from a Single View

Xi Zhang1, Xiaodong Tian1, Kazuo Yamazaki1, and Makoto Fujishima2

1 IMS-Mechatronics Laboratory, University of California, Davis, California, USA
2 Mori Seiki Co., Ltd., Nagoya, Japan
Abstract. This paper addresses the problem of 3D reconstruction and orientation of the cutting tool on a machine tool after it is loaded onto the spindle. Considering reconstruction efficiency and the fact that a cutting tool is a typical surface of revolution (SOR), a method based on a single calibrated view is presented, which involves only simple perspective projection relationships. First, the position and the orientation of the cutting tool are determined from an image. Then the silhouette of the cutting tool in the image is used to generate the 3D model section by section. The designed algorithm is presented. This method is applicable to various kinds of cutting tools. Simulation and actual experiments on a machine tool verify that the method is correct, with an accuracy of less than 1 mm.
1 Introduction

Computer vision has found many applications in the manufacturing industry, offering a different way to solve problems that traditional technologies cannot handle. In our application, computer vision is introduced to quickly obtain the 3D model of a cutting tool for the purpose of collision checking on a machine tool. Collisions between machine tool components such as cutting tools, workpieces and jigs may occur during machining and result in serious damage to the machine tool. It is therefore necessary to verify the numerical control (NC) program by machining simulation in a virtual manufacturing environment and to eliminate possible collisions before actual machining. In order to do so, constructing the 3D digital models of the machining setup and the cutting tool after they are loaded onto the machine tool becomes a key problem. A method is presented in reference [1] to quickly reconstruct the machining setup once the workpieces and jigs are installed on the machine tool. A prototype system has been developed for this purpose, which is able to quickly obtain the 3D model of the entire machining setup of workpieces and jigs by an on-machine stereo vision system with object recognition technology. The 3D model of the cutting tool is then required to complete the digital model environment for simulation purposes. As far as quick on-machine 3D modeling is concerned, there are at present no appropriate methods to meet this requirement. Although tool presetters, contact gages and laser tool measurement systems are capable of measuring geometrical features of a cutting tool, they cannot capture the entire cutting tool and quickly generate its 3D
model. However, from the viewpoint of computer vision, a cutting tool is a typical surface of revolution (SOR), formed by rotating a planar curve (the scaling function) around an axis. This geometrical constraint imposed by the properties of the SOR makes it possible to solve the problem from a single view. The SOR is a subclass of the straight homogeneous generalized cylinder (SHGC) and shares all the projective properties and invariants of the SHGC [2-3]. Building on research on the recovery of SHGCs from a single view [4-7], researchers have in recent years focused on the reconstruction of SORs from a single uncalibrated view. Reference [8] gives an extensive review. Reference [9] used two imaged cross sections to perform projective reconstruction of an SOR from a single uncalibrated image. The problem of metric reconstruction of an SOR from a single uncalibrated view is addressed in [8][10-12]. The general idea is to exploit the geometric properties of the SOR to calibrate the camera and to transform the silhouette into the SOR scaling function by planar rectification. In [10-11] the silhouette is related directly to its contour generator on the surface. In [8][12] metric reconstruction of the SOR is reformulated as the problem of determining the shape of a meridian curve; the imaged meridian and imaged axis are then rectified to compute the SOR scaling function. For the application of 3D modeling of the cutting tool on a computer numerical control (CNC) machine tool, previous methods of SOR reconstruction from a single uncalibrated view are not applicable when accuracy and reliability are considered. First, previous methods need the assumptions of square pixels or of the principal point lying at the image center to calibrate the camera, and they cannot derive the correction coefficients of the camera model; this dramatically affects the reconstruction accuracy. Second, the methods using camera self-calibration determine only the 3D envelope shape of the SOR, not its absolute position and orientation; in our application, however, the absolute position and orientation of the cutting tool are just as important as its 3D shape. Third, most of the other proposed methods are verified only in simulated or laboratory environments, so their robustness and accuracy in an actual application environment are unknown. This paper presents an algorithm to reconstruct the cutting tool from a single calibrated view. Only simple perspective projection relationships are used to generate the 3D model of the cutting tool section by section along the rotation axis. Therefore, the complex calculations of planar homologies are avoided, the calculation process is stable, and the result is sufficiently accurate. Moreover, various kinds of cutting tools can be handled by the same algorithm. The algorithm to identify the position and orientation of the cutting tool is also derived. The spindle on which the cutting tool is clamped can be viewed as a cylinder; in order to obtain a Euclidean reconstruction, the diameter of the spindle is assumed to be known, which is easily obtained since the spindle is a standard component of the machine tool. Experiments verify that our method is accurate to within 1 mm. The paper outline is as follows. In Section 2, the basic idea of the approach is presented. In Section 3 the detailed algorithms are given, including the identification of the position and orientation of the cutting tool and the 3D Euclidean reconstruction.
A simulation experiment with a synthetic model and image is used to verify the validity of the method. Section 4 describes the actual experiment conducted on the machine tool to test the stability and accuracy of the proposed method; the
reconstruction results of the end mill are presented. Finally, Section 5 gives the conclusions and future work.
2 Approach

2.1 Definition of the Problem

The problem is defined as follows: given a camera whose intrinsic parameters and orientation on a machine tool are fixed, build the 3D model of the cutting tool and determine its position and orientation from a single view with respect to the machine tool coordinate frame. The 3D model here represents the maximum envelope body of the cutting tool and does not include detailed geometrical features such as flutes and cutting edges. The diameter of the spindle is known.

2.2 Basic Idea

There are two types of SOR involved in the application. One is the spindle, which can be viewed as a cylinder. The other is the cutting tool, which is clamped on the spindle. The silhouette of the spindle is used to determine the position and orientation of the rotation axis, as well as the position of the cutting tool on the rotation axis. The silhouette of the cutting tool is used to reconstruct the 3D model along the rotation axis.

The basic idea of the 3D modeling is as follows. Each pixel on the silhouette of the cutting tool, together with the camera center, defines a light ray. The light ray is tangent to the surface at a point which belongs to the contour generator of the cutting tool. The perpendicular distance between the light ray and the rotation axis of the cutting tool equals the radius of the cross section corresponding to the tangent point. In this way, the silhouette of the cutting tool is used pixel by pixel to calculate the radius of each cross section, and the 3D model of the cutting tool is then generated section by section along the rotation axis. This makes the method applicable to various kinds of cutting tools.

The proposed processing flow is: First, the camera is calibrated with respect to the machine tool coordinate frame; this problem is solved in reference [1]. Second, the position and orientation of the rotation axis, as well as the position of the cutting tool on the rotation axis, are determined. Third, the silhouette of the cutting tool is extracted from the captured image; presently, this is done manually. Finally, the 3D model of the cutting tool is generated section by section along the rotation axis.
3 Algorithm Implementation

3.1 Determination of the Position and Orientation of the Rotation Axis

Fig. 1 illustrates the perspective projection of the spindle and an image of the spindle, from which two contour lines of the spindle, L1 and L2, can be determined. There are two
Fig. 1. The image of the spindle and perspective projection of the spindle
coordinate systems: the pixel coordinate system O-XY and the camera coordinate system (CCS) o-xcyczc. For simplicity, in the following sections all vectors and points in 3D space are expressed with respect to the CCS; they can easily be transformed into the machine tool coordinate frame using the extrinsic parameters of the camera. Let the vectors L1 = (a1, b1, c1)^T and L2 = (a2, b2, c2)^T denote lines L1 and L2, respectively. Two planes π1 and π2, tangent to the spindle, are defined by the two image lines and the camera center. Let the intrinsic parameter matrix of the camera be K. The normal vectors n1 and n2 of these two planes with respect to the CCS are determined with the calibrated camera [13]:

n1 = [K 0]^T L1,  n2 = [K 0]^T L2 .    (1)
Let the direction vector of the rotation axis be s, which represents the orientation of the rotation axis and is determined by

s = n1 × n2 = (a, b, c) .    (2)
Next, a particular point P is determined, which expresses the position of the rotation axis. P is the intersection of the rotation axis with the plane π3 that passes through the camera center and is perpendicular to the rotation axis. The perpendicular distance d between the camera center and the rotation axis is

d = R / sin(β) ,    (3)

where R is the radius of the spindle and β is the half angle between planes π1 and π2, given by

cos(2β) = |n1^T n2| / (‖n1‖ ‖n2‖) .    (4)
The direction of the line oP is given by

k = n1 + n2 = (m0, n0, p0) .    (5)

The coordinates of the point P measured in the CCS are

(x0, y0, z0) = (m0 t0, n0 t0, p0 t0) ,    (6)

where

$t_0 = \sqrt{\dfrac{d^2}{m_0^2 + n_0^2 + p_0^2}}$ .    (7)
The equation of the rotation axis L relative to the CCS is therefore

Xc = P + s t,  t ∈ (−∞, ∞) .    (8)
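The computation in Eqs. (1)-(8) can be sketched as follows. This is a minimal illustration assuming numpy is available; the function and variable names are our own, and a consistent orientation of the back-projected plane normals is assumed, which the paper does not spell out.

```python
import numpy as np

def rotation_axis_from_silhouette(K, L1, L2, R):
    """Recover the spindle rotation axis from its two imaged contour lines.

    K  : 3x3 intrinsic matrix of the calibrated camera
    L1 : 3-vector (a1, b1, c1) of contour line L1 in the image
    L2 : 3-vector (a2, b2, c2) of contour line L2
    R  : known spindle radius (mm)
    Returns (P, s): a point P on the axis and its direction s, both in the CCS.
    """
    # Back-project each image line to the tangent plane through the camera
    # centre (Eq. (1)); only the plane normal is needed since the plane
    # passes through the origin of the CCS.
    n1 = K.T @ L1
    n2 = K.T @ L2
    # Axis direction: intersection direction of the two tangent planes, Eq. (2).
    s = np.cross(n1, n2)
    # Half angle between the tangent planes (Eq. (4)) and the camera-to-axis
    # distance (Eq. (3)).
    cos_2beta = np.clip(abs(n1 @ n2) / (np.linalg.norm(n1) * np.linalg.norm(n2)), -1.0, 1.0)
    beta = 0.5 * np.arccos(cos_2beta)
    d = R / np.sin(beta)
    # Direction of the perpendicular from the camera centre to the axis
    # (Eq. (5)), scaled so that its length equals d (Eqs. (6)-(7)).
    k = n1 + n2
    t0 = np.sqrt(d**2 / (k @ k))
    P = k * t0
    return P, s
```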
3.2 Determination of the Position of the Cutting Tool

Since the cutting tool is clamped on the spindle, the position of the cutting tool corresponds to the end surface of the spindle, which is a circle with known radius R; its image is an ellipse. Select a point x belonging to this ellipse in the image. This point and the camera center determine a light ray, as shown in Fig. 2. The direction vector of this light ray is

r = K^{-1} x = (m, n, p) ,    (9)
where K is the intrinsic camera parameter matrix. The ray intersects the end surface at a point E whose image is the point x. Since the ray passes through the camera center, the coordinates of E measured in the CCS are E = (m t1, n t1, p t1), where t1 is an unknown parameter to be determined. The perpendicular distance from E to the rotation axis is known to be R. The problem therefore becomes finding a point on this light ray, with direction vector r, which fulfills

$\left\| \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ a & b & c \\ mt_1 - x & nt_1 - y & pt_1 - z \end{vmatrix} \right\| = R$ ,    (10)
where i, j, k are the basis vectors of the CCS, (a, b, c) is the direction vector of the rotation axis relative to the CCS determined by (2), ‖·‖ denotes the norm of a vector, and (x, y, z) is an arbitrary point on the rotation axis. The center of the end surface is then determined with respect to the CCS by

$\mathbf{C} = \left( \begin{vmatrix} b & nt_1 - y \\ c & pt_1 - z \end{vmatrix},\ -\begin{vmatrix} a & mt_1 - x \\ c & pt_1 - z \end{vmatrix},\ \begin{vmatrix} a & mt_1 - x \\ b & nt_1 - y \end{vmatrix} \right) + \mathbf{E}$ .    (11)
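A small sketch of how Eqs. (9)-(10) can be solved for t1 in practice, assuming numpy. The quadratic below follows from expanding ‖s × (E − P)‖ = R with E = t1 r; taking the nearer positive root is our assumption and is not stated in the paper.

```python
import numpy as np

def end_surface_point(K, x_img, P, s, R):
    """Find a 3D point E on the spindle end circle that projects to image point x_img.

    x_img : homogeneous image point (3-vector) chosen on the imaged end-circle ellipse
    P, s  : a point on the rotation axis and the axis direction in the CCS
    R     : known spindle radius
    Returns E = t1 * r, the viewing-ray point whose distance to the axis equals R (Eq. (10)).
    """
    r = np.linalg.inv(K) @ x_img            # viewing-ray direction, Eq. (9)
    s = s / np.linalg.norm(s)
    # Distance from E(t) = t*r to the axis equals ||s x (t*r - P)||.
    # Writing s x (t*r - P) = t*(s x r) - (s x P) gives a quadratic in t.
    a_vec = np.cross(s, r)
    b_vec = np.cross(s, P)
    a = a_vec @ a_vec
    b = -2.0 * (a_vec @ b_vec)
    c = b_vec @ b_vec - R**2
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("ray never reaches distance R from the axis")
    roots = [(-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)]
    t1 = min(t for t in roots if t > 0)     # nearer point in front of the camera
    return t1 * r
```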
Fig. 2. Determine the end surface
Fig. 3. 3D modeling of a cutting tool
3.3 3D Reconstruction of a Cutting Tool

A model of a cutting tool is shown in Fig. 3. The diameters of the circles between cross sections I and J of the cutting tool are all different. The silhouette is extracted from the image. The image point xi and the camera center determine a light ray Li which is tangent to the cutting tool at a point Ei. The direction vector of Li is

Li = K^{-1} xi = (mi, ni, pi) .    (12)
The perpendicular distance between the line Li and the rotation axis equals the radius of the 3D circle. From the equation of the rotation axis and Li, the radius of the circle on cross section I is ri = D1/D2, where

$D_1 = \begin{vmatrix} x & y & z \\ a & b & c \\ m_i & n_i & p_i \end{vmatrix}$ ,    (13)

$D_2 = \sqrt{ \begin{vmatrix} a & b \\ m_i & n_i \end{vmatrix}^{2} + \begin{vmatrix} b & c \\ n_i & p_i \end{vmatrix}^{2} + \begin{vmatrix} c & a \\ p_i & m_i \end{vmatrix}^{2} }$ .    (14)
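Eqs. (12)-(14) amount to the standard distance between two skew lines: the viewing ray through the camera center and the rotation axis. A minimal numpy sketch, with illustrative names:

```python
import numpy as np

def section_radius(K, x_img, P, s):
    """Radius of the cutting-tool cross section touched by the viewing ray
    through silhouette pixel x_img (Eqs. (12)-(14)).

    P, s : a point on the rotation axis and the axis direction in the CCS
    Returns the perpendicular distance between the viewing ray and the axis.
    """
    Li = np.linalg.inv(K) @ x_img                   # ray direction, Eq. (12)
    # Distance between the ray through the camera centre (origin) with
    # direction Li and the axis through P with direction s.
    D1 = abs(np.linalg.det(np.vstack([P, s, Li])))  # numerator, Eq. (13)
    D2 = np.linalg.norm(np.cross(s, Li))            # denominator, Eq. (14)
    return D1 / D2
```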
Here (a, b, c) is the direction vector of the spindle relative to the CCS and (x, y, z) is an arbitrary point on the rotation axis. Since the radius of the circle on section I is known, the position of this circle is determined with the algorithm in Section 3.2. Similarly, the radii and positions of all circles between cross sections I and J can be determined from the contour line pixel by pixel. In 3D space, the 3D model of the cutting tool is then generated along the rotation axis section by section. An arbitrary point on the ellipse which is the projection of the circle at the end of the cutting tool is used to stop the modeling along the rotation axis.

3.4 Algorithm Verification

A spindle is modelled together with the worktable of a machine tool as shown in Fig. 4(a). The cylinder is perpendicular to the worktable and has a diameter of 178.05 mm. A set of camera parameters is used to produce a synthetic image as shown in Fig. 4(b). From the image, the position and orientation of the rotation axis as well as
the end surface of the spindle are calculated with the algorithm in Section 3.1, as shown in Fig. 4(c). The calculated values are compared with the reference values in Table 1. Here, P and s are the same as shown in Fig. 1; they represent the position and pose of the rotation axis, respectively. H is the distance between the center of the end surface and the worktable. All data are relative to the machine tool coordinate frame, whose origin is a corner of the worktable; the x and y axes are parallel to the edges of the worktable and the z axis is perpendicular to the worktable. Then, a smaller cylinder, which can be viewed as an end mill, is added to the 3D model of the spindle as shown in Fig. 4(d). The synthetic image is produced with the same camera parameters. We select a point on the contour line in the image and calculate the radius of the end mill with the algorithm in Section 3.3. The calculated radius of the end mill is 39.8804 mm; its real value is 40 mm, so the absolute error is 0.1196 mm. The results of the simulation experiment indicate that the 3D modeling method for the cutting tool from a single calibrated view is correct and accurate.
Fig. 4. Simulation experiment

Table 1. Result of rotation axis and end surface determination

                  P (mm)                          s (vector)              H (mm)
Calculated value  [161.7830, 102.4974, 401.1794]  [-0.0008, 0.0003, 1]    199.9473
Design value      [161.1, 102.2, 401.5453]        [0, 0, 1]               200
Absolute error    [0.6830, 0.2974, 0.3659]        [0.0008, 0.0003, 0]     0.0527
4 Experimental Verification

4.1 Vision System on the Machine Tool

A Sony CCD camera (1024×768, 1/3 inch) is mounted on a CNC machining center (model: Mori Seiki GV-503/5AX) as shown in Fig. 5(a). The camera is calibrated on
Fig. 5. Actual experiment: (a) vision system; (b) pose determination; (c) calculated cross sections
the machine tool [1]. An end mill is loaded onto the spindle, which rotates at a speed of 4000 r/min, and an image of the spindle and cutting tool is captured.

4.2 Experimental Process and Results

First, the position and orientation of the cutting tool are calculated from the image. Since the spindle is perpendicular to the worktable in this application, the direction vector of the rotation axis is known to be [0, 0, 1]. The position of the rotation axis is calculated as [150.9962, -19.2164, 401.5456] mm and the distance between the end surface and the worktable is 241.4457 mm. The reprojection of the end surface matches the actual image well, as shown in Fig. 5(b). Then, the radii and centers of ten cross sections of the cutting tool are calculated from the image. Fig. 5(c) illustrates the reprojection of one cross section according to the calculated values. The radii of the ten sections are listed in Table 2. The cross sections are used to generate the 3D model of the cutting tool with CAD software, as shown in Fig. 6, where the approximate positions of the ten sections are also marked. These cross sections are also measured with a vernier caliper as a reference; the radii of sections 4 and 5 are not convenient to measure with a vernier caliper. Table 2 shows that the 3D modeling method for the cutting tool from a single calibrated view is correct, with an accuracy of better than 1 mm.
Table 2. Results of the radius calculation for each section (units: mm)

Section      Calculated   Reference   Absolute error
Section 1    22.8044      22.25       0.5544
Section 2    22.4473      22.25       0.1973
Section 3    25.9201      25.4        0.5201
Section 4    25.5968      25.4        0.1968
Section 5    25.9805      25.575      0.4055
Section 6    21.4216      N/A         N/A
Section 7    20.4063      N/A         N/A
Section 8    24.5063      24.1        0.4063
Section 9    24.2592      24.14       0.1192
Section 10   24.9056      25.315      0.4094
Fig. 6. Cutting tool and reconstructed 3D model
5 Conclusions

This paper introduces a new method for generating the 3D model of a cutting tool with a single camera, in particular on a machine tool. The proposed method is based on a calibrated view, and the generated digital model also has an accurate absolute position and orientation with respect to the machine tool coordinate frame. The position and orientation of the cutting tool are first determined from an image; the silhouette of the cutting tool in the image is then used to generate the 3D model section by section. The corresponding algorithms are derived and presented. Simulation and actual experiments on a machine tool verify that the method is able to quickly construct the digital model with an accuracy of better than 1 mm. The method has the following advantages: 1) it obtains the 3D model of a cutting tool quickly from one view; 2) it deals with different kinds of cutting tools using the same algorithm; 3) it can be extended to other similar industrial environments. As future work, an image segmentation algorithm is being designed so that different kinds of cutting tools can be modeled in a fully automated process.
References 1. Xiao, D., Deng, H., Yamazaki, K., Mori, M., Fujishima, M.: On-machine Vision Modeling System with Object Recognition. In: ASME International Mechanical Engineering Congress, Orlando, Florida, November 6-11 (2005) 2. Ponce, J., Chelberg, D., Mann, W.B.: Invariant Properties of straight homogeneous generalized cylinders and their contours. IEEE Trans. on PAMI. 11, 951–966 (1989) 3. Abdallah, S.M.: Object Recognition via Invariance. PhD thesis, Univ. of Sydney, Australia (2000) 4. Lavest, J.M., Glachet, R., Dhome, M., Lapreste, J.T.: Modeling solids of Revolution by Monocular Vision. In: Proc. of the Conference on CVPR, pp. 690–691 (1991) 5. Sato, H., Binford, T.O.: Finding and recovering SHGC objects in an edge image. Graphics and Image Processing 57, 346–358 (1993)
6. Ulupinar, F., Nevatia, R.: Shape from contour: Straight homogeneous generalized cylinders and constant cross-section generalized cylinders. IEEE Trans. on PAMI 17, 120–135 (1995) 7. Gross, A.D., Boult, T.E.: Recovery of SHGCs from a single intensity view. IEEE Trans. on PAMI 18, 161–180 (1996) 8. Colombo, C., Bimbo, A.D., Pernici, F.: Metric 3D reconstruction and texture acquisition of surfaces of revolution from a single uncalibrated view. IEEE Trans. on PAMI 27, 99–114 (2005) 9. Utcke, S., Zisserman, A.: Projective Reconstruction of Surfaces of Revolution. In: Proc. DAGM-Symp. Mustererkennung, pp. 265–272 (2003) 10. Wong, K.-Y.K.: Structure and Motion from Silhouettes. PhD thesis, Univ. of Cambridge, U.K. (2001) 11. Wong, K.-Y.K., Mendonça, P.R.S., Cipolla, R.: Reconstruction of Surfaces of Revolution from Single Uncalibrated Views. Image and Vision Computing 22, 829–836 (2004) 12. Colombo, C., Bimbo, A.D., Pernici, F.: Uncalibrated 3D Metric Reconstruction and Flattened Texture Acquisition from a Single View of a Surface of Revolution. In: Proc. First Int. Symp. on 3D Data Processing, Visualization, and Transmission, pp. 277–284 (2002) 13. Hartley, R.I., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, Cambridge (2003)
Playfield and Ball Detection in Soccer Video

Junqing Yu1, Yang Tang2, Zhifang Wang1, and Lejiang Shi1

1 School of Computer Science & Technology, Huazhong University of Science & Technology, Wuhan 430074, China
[email protected], [email protected], [email protected]
2 Department of Development & Planning, Hubei Electric Power Company, Wuhan 430077
[email protected]
Abstract. The ball is hard to detect when it merges with field lines or players in soccer video. A trajectory-based ball detection scheme, together with an approach for playfield detection, is proposed to solve this problem. Playfield detection plays a fundamental role in the semantic analysis of soccer video. An improved Generalized Lloyd Algorithm (GLA) based method is introduced to detect the playfield. Based on the detected playfield, an improved Viterbi algorithm is utilized to detect and track the ball, and a group of selected interpolation points is fitted with the least-squares method to track the ball in the playfield. An occlusion reasoning procedure is used to further qualify undetected and false ball positions. The experimental results verify the effectiveness of the proposed scheme.

Keywords: Playfield detection, Ball detection, Soccer video.
1 Introduction

As an important domain-specific video genre, sports video has been widely studied for its tremendous commercial potential. Soccer is one of the most popular sports in the world, so automatic analysis of soccer video has become a focus of research efforts. Its possible applications cover a broad range, such as indexing, retrieval, annotation, summarization, event detection and tactics analysis. Meanwhile, because the playfield is the place where almost all events take place and the ball is the focus for the audience and players, both playfield and ball detection have always drawn much attention. There are mainly two kinds of methods to detect the playfield: parametric [1-4] and nonparametric [5, 6]. The Gaussian Mixture Model (GMM) is the most widely used parametric method; estimating the parameters and updating the model are complicated in a GMM, and the ordinary Expectation Maximization (EM) algorithm is usually used to estimate the GMM parameters. In [1], an unsupervised MAP (Maximum a Posteriori) adaptation is used to adapt the GMM to the color of the playfield in each game. Liu [4] proposes an IEM (Incremental Expectation Maximization) algorithm to update the GMM parameters so as to adapt the model to the variation of the playfield over time. Arnaud [5] characterizes the relevant area in the color space and uses a spatial coherence criterion on the selected pixels in the image. In [6], Ekin proposes an algorithm to
automatically learn the statistical dominant color of the playfield using two color spaces, a control space and a primary space, whose information is combined. As is the case with playfield detection, many classical algorithms have been proposed for ball detection and tracking [7-17]. Most of them use a Kalman filter to match the ball, some utilize the Viterbi algorithm to detect the ball [13], and Kalman-filter-based template matching [13] or the Condensation algorithm [11] is used to track it.
Fig. 1. Typical playfield and ball samples
However, the problem has not yet been fully resolved due to the following challenges: (1) the playfield may be hard to detect because of shadows caused by sunlight; (2) the ball's features, such as color, shape, and size, change with the circumstances, such as lighting and velocity (Fig. 1 demonstrates some typical playfield and ball samples in soccer video); (3) when players possess the ball, it is hard to segment the ball from the player; (4) fragments due to improper segmentation of players or field lines may be similar to the ball; (5) the ball may sometimes be occluded, and especially when the ball is near the field lines it is hard to segment because its color is similar to that of the lines. In this paper, a trajectory-based ball detection and tracking scheme with a pre-procedure of playfield detection is proposed, as illustrated in Fig. 2. The rest of the paper is organized as follows. In Section 2, an improved GLA-based playfield detection algorithm is discussed. The improved Viterbi-based ball detection algorithm is introduced in Section 3. Viterbi-based ball tracking and least-squares-based ball trajectory generation are presented in Section 4. In Section 5, trajectory-based occlusion reasoning is described and some experimental results are presented. Section 6 concludes the paper.
2 Playfield Detection

An improved GLA-based method is proposed to detect the playfield color, and Sklansky's algorithm is employed to find the convex boundary of the playfield. The playfield area can then be identified from the detected boundary.
Fig. 2. Framework of ball detection and tracking
2.1 Playfield Color Identification

Generally, the dominant color of the playfield in soccer video is green, and this characteristic can be used to find the playfield color. First, the colors in a video frame are classified into different clusters; then, the cluster containing the majority of pixels is marked as the playfield color. Every video frame is an image composed of a set of pixels, and the color vectors of the pixels are quantified by the improved GLA. Since the quantification process is a classification process, the improved GLA can be employed to classify the pixels; more details on the GLA can be found in [18]. The classification algorithm is designed as follows.

Step 1: Convert the color of the input image from RGB space to HSV space. The playfield dominant color is detected according to the hue and saturation components of each pixel, by which the effect of shadows caused by illumination can be eliminated. The formulae for converting from RGB to HSV color space can be found in [19].

Step 2: Use a clustering method to classify the pixels. At the beginning, each pixel of the input image is assigned to one cluster individually.
Step 3: Assign each pixel to its nearest cluster. The distance d between pixel j and a cluster centroid is calculated as

$d = \sqrt{S_j^2 + S^2 - 2 S_j S \cos\theta(j)}$ ,    (1)

where

$\theta(j) = \begin{cases} \Delta(j) & \text{if } \Delta(j) \le 180^{\circ} \\ 360^{\circ} - \Delta(j) & \text{otherwise} \end{cases}, \qquad \Delta(j) = |H - H_j| .$
Here S_j and H_j are the pixel's saturation and hue values, and S and H are the cluster centroid's saturation and hue values. After all pixels have been classified, each cluster centroid is recalculated as the mean of its cluster.

Step 4: Calculate the total distortion. If the change in distortion is bigger than the threshold (5%), go to Step 3; otherwise, go to Step 5.

Step 5: Calculate the distortion of each cluster. The cluster with the biggest distortion is split into two new clusters. If the number of clusters after splitting exceeds the upper limit, go to Step 6; otherwise, go to Step 3.

Step 6: Merge the clusters using an agglomerative clustering method. Calculate the mutual distances between cluster centroids to construct a table. If the minimum distance in the table is smaller than the threshold, merge the two relevant clusters, update the table, and repeat this step; otherwise, stop merging.

Step 7: Finally, the color of the cluster with the most pixels is identified as the playfield color.

The result of playfield color identification is demonstrated in Fig. 3, where (a) is the original image and (b) is the detected result; pixels that do not have the playfield color are filled with black.
Fig. 3. The detection result of playfield color: (a) original image; (b) classification result
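The clustering procedure above can be sketched as follows. This is a simplified illustration assuming OpenCV and numpy are available: it uses a fixed number of clusters and omits the split/merge refinement of Steps 5-6, so it is not the authors' exact implementation.

```python
import numpy as np
import cv2

def playfield_color_mask(bgr_img, n_clusters=8, n_iter=10):
    """Simplified GLA-style dominant-colour detection: pixels are clustered on
    (hue, saturation) with the cyclic-hue distance of Eq. (1); the largest
    cluster is taken as the playfield colour (Step 7)."""
    hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)
    h = hsv[..., 0].astype(np.float32) * 2.0     # OpenCV 8-bit hue is 0..179 -> degrees
    s = hsv[..., 1].astype(np.float32) / 255.0
    H, S = h.ravel(), s.ravel()
    idx = np.random.choice(H.size, n_clusters, replace=False)
    cH, cS = H[idx].copy(), S[idx].copy()        # initial centroids
    for _ in range(n_iter):
        # Eq. (1): law-of-cosines distance in the hue/saturation plane.
        dH = np.abs(H[:, None] - cH[None, :])
        theta = np.where(dH <= 180.0, dH, 360.0 - dH)
        d = np.sqrt(S[:, None]**2 + cS[None, :]**2
                    - 2.0 * S[:, None] * cS[None, :] * np.cos(np.radians(theta)))
        labels = np.argmin(d, axis=1)            # Step 3: nearest cluster
        for k in range(n_clusters):              # recompute centroids
            sel = labels == k
            if sel.any():
                cS[k] = S[sel].mean()
                ang = np.radians(H[sel])         # circular mean of hue
                cH[k] = np.degrees(np.arctan2(np.sin(ang).mean(),
                                              np.cos(ang).mean())) % 360.0
    field = np.bincount(labels, minlength=n_clusters).argmax()
    return (labels == field).reshape(bgr_img.shape[:2])
```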
2.2 Playfield Boundary Detection
After the playfield color has been identified, regions whose pixel count is less than a given threshold are merged into their neighboring regions through a region-growing process. However, player regions affect the playfield detection when they are located on the playfield boundary, as in the example in Fig. 4(b). Therefore, a playfield boundary detection algorithm has to be used to solve this problem. Because the playfield generally appears as a convex polygon, Sklansky's algorithm can be employed to find its convex boundary, and the playfield area can then be filled from the detected boundary. Sklansky's algorithm is discussed in detail in [20].
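A sketch of this boundary step, assuming OpenCV and numpy; cv2.convexHull is used here as a stand-in for the Sklansky procedure cited in the text, and the morphological clean-up is our own simplification of the region-growing step.

```python
import numpy as np
import cv2

def playfield_region(color_mask):
    """Fill the convex hull of the detected playfield-colour pixels to obtain
    the playfield area mask."""
    mask = color_mask.astype(np.uint8) * 255
    # Suppress small non-field regions before taking the hull.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return np.zeros_like(mask)
    pts = np.column_stack([xs, ys]).astype(np.int32)
    hull = cv2.convexHull(pts)              # convex playfield boundary
    field = np.zeros_like(mask)
    cv2.fillConvexPoly(field, hull, 255)    # filled playfield area
    return field
```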
In Fig. 4, (a) is the original image, (b) is the mask of the playfield, and (c) shows the obtained playfield boundary, marked with a red line.
(a) Original (b) the mask of playfield (c) playfield boundary
Fig. 4. The detection result of playfield boundary
The whole process of playfield detection is illustrated in Fig. 5. (a) is the original image, (b) is the identified playfield color, (c) is the detected playfield boundary and (d) is the playfield area.
Fig. 5. Playfield detection results
3 Ball Detection in Soccer Video

3.1 Ball Candidate Detection
On the green playfield there are four kinds of objects: the ball, playfield lines, players, and some noise. To find the ball candidates, the key is to filter out the non-ball objects using the ball's characteristics, which differentiate it from the other objects. The detection algorithm is as follows.

Step 1: Compute the average area A_average of all the objects on the playfield. In order to filter out the players and tiny noise, only objects whose area ranges from A_average/10 to 2A_average/3 are kept. Here, 1/10 and 2/3 are set empirically through experiments.

Step 2: Calculate the form factor, which is defined as
F = P² / (4A) .    (2)
Here P and A refer to the perimeter and area of the object. The bigger the form factor is, the more likely the object is to be a ball; therefore, objects with very small form factors can also be filtered out.

Step 3: Compute the ratio of the object area to its bounding rectangle area,
R = A_area / A_box .    (4)
If the ratio is less than 0.2, the object is filtered out. Through the above three steps, the candidate balls can be detected.
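The three filtering steps can be sketched as below, assuming OpenCV and numpy. The area thresholds follow the text; since no explicit form-factor threshold is given, the sketch only records F and R for later use in the node weights.

```python
import numpy as np
import cv2

def ball_candidates(objects_mask):
    """Filter connected components of the non-field pixels inside the playfield
    by area (Step 1), form factor (Eq. (2)) and area/bounding-box ratio (Eq. (4)).

    objects_mask : binary uint8 image of non-field pixels inside the playfield
    """
    contours, _ = cv2.findContours(objects_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    stats = [(c, cv2.contourArea(c), cv2.arcLength(c, closed=True))
             for c in contours if cv2.contourArea(c) > 0]
    if not stats:
        return []
    avg_area = np.mean([a for _, a, _ in stats])            # Step 1
    candidates = []
    for c, area, perim in stats:
        if not (avg_area / 10.0 <= area <= 2.0 * avg_area / 3.0):
            continue                                         # players / tiny noise
        form_factor = perim**2 / (4.0 * area)                # Eq. (2), Step 2
        x, y, w, h = cv2.boundingRect(c)
        box_ratio = area / float(w * h)                      # Eq. (4), Step 3
        if box_ratio < 0.2:
            continue
        candidates.append({'contour': c, 'F': form_factor, 'R': box_ratio})
    return candidates
```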
3.2 Graph Construction

A weighted graph is constructed over the ball candidates in N successive frames. In the graph, nodes denote ball candidates, and edges connect adjacent ball candidates whose Euclidean distance is smaller than a threshold. Each node is assigned a weight denoting its resemblance to the ball, and each edge is assigned a weight representing the similarity between the connected nodes. Fig. 6 depicts an example of such a weighted graph: the number inside each node is the node weight, and the number above each node is its cumulative weight, denoted cweight, which is the sum of all node and edge weights on the optimal path ending at that node. The node weight is obtained according to formula (5), while the edge weight is computed from the similarity between the two connected nodes.
weight = 0.5 * F + 0.5 * R
(5)
where F is the form factor and R is the ratio A_area / A_box.
Fig. 6. Weighted graph Illustration
Fig. 7. Ball tracking algorithm illustration
3.3 Path Selection Based on the Improved Viterbi Algorithm
An improved Viterbi algorithm is designed to select the ball path; it can be summarized in the following steps (the standard Viterbi algorithm is described in [21]).

Step 1: Calculate the cumulative weight (cweight) according to formula (6), and record the corresponding ex-node (the index i in formula (6)) for later backtracking.
Step 2: Find the node with the biggest cweight.

Step 3: Track back to obtain all the nodes on the path according to the ex-nodes recorded in Step 1.

Step 4: Check the ends of the selected path. If the path does not stretch over all N frames, paths are further selected among the preceding and succeeding candidates of the current path according to the steps above.
$cweight_j^t = \begin{cases} weight_j^t & (a) \\ \max\limits_{0 \le i < N_{t-1}} \left\{ \tau_1 \left( cweight_i^{t-1} + weight_j^t \right) + \tau_2\, weight_{edge}^{ij} \right\} & (b) \end{cases}$    (6)

In the above formula, cweight_j^t is the cumulative weight of the j-th candidate on frame t. Case (a) covers the situation in which the j-th candidate has no edge linking it to the candidates on the previous frame; otherwise case (b) applies, where N_{t-1} is the number of candidates on frame t-1, cweight_i^{t-1} is the cumulative weight of the i-th candidate on frame t-1, weight_j^t is the node weight of the j-th candidate on frame t, weight_{edge}^{ij} is the weight of the edge between the i-th candidate on frame t-1 and the j-th candidate on frame t, and τ1 and τ2 denote the proportions of the node weight and the edge weight in the cumulative weight. In our experiments, τ1 is set to 0.8 and τ2 to 0.2; the edge weight has a small proportion because the ball's contour may change greatly over frames due to changes of light and velocity. In Fig. 6, the bold lines and nodes form the optimal path.
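A sketch of the path selection defined by Eq. (6) and Steps 1-3, in plain Python. The data structures (lists of node weights and edge-weight dictionaries) are our own choice, and Step 4 (extending paths that do not span all N frames) is omitted.

```python
import numpy as np

def select_ball_path(node_w, edge_w, tau1=0.8, tau2=0.2):
    """Improved Viterbi path selection, Eq. (6).

    node_w : list over frames; node_w[t][j] is the weight of candidate j on frame t
    edge_w : list over frames; edge_w[t] is a dict {(i, j): w} with the weight of the
             edge between candidate i on frame t-1 and candidate j on frame t
             (edge_w[0] is unused; absent keys mean the candidates are not connected)
    Returns a list of selected candidate indices, one per frame (None = no node).
    """
    T = len(node_w)
    cweight = [list(node_w[0])]                  # frame 0: case (a) of Eq. (6)
    backptr = [[None] * len(node_w[0])]
    for t in range(1, T):
        cw_t, bp_t = [], []
        for j, wj in enumerate(node_w[t]):
            options = [(tau1 * (cweight[t - 1][i] + wj) + tau2 * edge_w[t][(i, j)], i)
                       for i in range(len(node_w[t - 1])) if (i, j) in edge_w[t]]
            if options:                          # case (b): best incoming edge
                best, best_i = max(options)
            else:                                # case (a): no incoming edge
                best, best_i = wj, None
            cw_t.append(best)
            bp_t.append(best_i)
        cweight.append(cw_t)
        backptr.append(bp_t)
    # Step 2: node with the largest cumulative weight; Step 3: track back.
    t_end = T - 1
    j = int(np.argmax(cweight[t_end]))
    path = [None] * T
    while j is not None and t_end >= 0:
        path[t_end] = j
        j = backptr[t_end][j]
        t_end -= 1
    return path
```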
4 Ball Tracking and Trajectory Generation

4.1 Ball Tracking
A Viterbi-based algorithm is utilized to track the ball in the successive frames. The weighted graph is constructed over the detected ball candidates of N sequential frames and initialized using the detected ball; the Viterbi-based path selection algorithm is then applied to track the ball. The algorithm is rather similar to that of ball detection, the only difference being the initialization procedure. To initialize the weighted graph, the detected ball is added to it as the unique node on the 0th frame, with a weight of 1. If the distance between a node on the 1st frame and the unique node on the 0th frame is within the threshold, an edge exists; otherwise, the node is deleted. The edge weight is assigned according to the similarity of its connected nodes. Fig. 7 illustrates the ball tracking algorithm: the unique node with weight 1 on the 0th frame represents the detected ball, and the bold nodes are the tracked balls.
4.2 Trajectory Generation
The ball trajectory can be generated from the raw data points of the detected ball; the least-squares method is used here to generate the trajectory functions. Fig. 8 describes a set of generated functions, where the x-axis denotes time and the y-axis denotes one coordinate, such as the transverse or vertical displacement of the ball. That is, if the coordinate space is three-dimensional, there are three such sets of trajectory functions. In Fig. 8, the hollow circles denote the detected ball positions, from which the curves are generated using the least-squares method, and the filled circle is the expected ball point computed from the trajectory functions. In this process we can therefore not only generate the ball trajectory, but also delete false balls and supplement missed ones.
Fig. 8. Trajectory generation of the detected balls
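A minimal sketch of the least-squares trajectory generation and of the occlusion interpolation of Section 5.1, assuming numpy. A quadratic (constant-acceleration) model is fitted to each coordinate separately; this matches the formula used later for occlusion reasoning, but treating every trajectory as quadratic is an assumption on our part.

```python
import numpy as np

def fit_trajectory(times, positions, deg=2):
    """Least-squares fit of one coordinate of the ball trajectory, as in Fig. 8.

    times     : 1-D array of frame times with a detected ball
    positions : 1-D array of the corresponding coordinate (e.g. x displacement)
    Returns a polynomial that can be evaluated at missed or occluded frames.
    """
    coeffs = np.polyfit(times, positions, deg)
    return np.poly1d(coeffs)

# Usage sketch (illustrative numbers): fit each coordinate separately and
# interpolate a frame where the ball was occluded.
# t = np.array([0, 1, 2, 3, 5, 6]); x = np.array([12.0, 14.1, 16.5, 19.2, 25.3, 28.8])
# x_of_t = fit_trajectory(t, x)
# x_at_4 = x_of_t(4.0)
```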
5 Trajectory-Based Occlusion Reasoning

5.1 Occlusion Reasoning
In soccer video, the ball is often merged with other objects, moving, possessed by players, or out of the playfield. When the ball is possessed, the player is usually tracked instead. When the ball is merged for only a few frames, its position can be calculated directly from the trajectory functions. For example, in Fig. 8 the ball cannot be detected at time tN, but its position can be calculated as X(tN) = x(t0) + v(t0)(tN − t0) + ½ a(t0)(tN − t0)², and the corresponding Y(t) and Z(t) can be calculated in the same way.

5.2 Experimental Results
Experiments have been conducted on two soccer video clips, each with more than 500 frames. The video used is from matches of the 2006 FIFA World Cup. Fig. 9
Fig. 9. Ball passing over a field line
shows some frame sequences where the ball passes over a field line; based on the interpolation function, the ball position is obtained exactly. In Fig. 10, the player is kicking the ball in sequence (a), and heading the ball in sequences (b) and (c).
Fig. 10. Ball merged with players
6 Conclusions and Future Work

A trajectory-based ball detection and tracking scheme with a pre-procedure of playfield detection has been proposed in this paper. The experimental results verify that the scheme is reasonable and can be used effectively on soccer video. With such domain-specific approaches, video analysis and semantic extraction become easier, so our framework is useful for bridging the semantic gap in soccer video. However, our current work does not handle balls outside the playfield or event understanding, which are important for automatic analysis of soccer video; future work will focus on these topics.
References 1. Barnard, Odobez, J.M.: Robust playfield segmentation using MAP adaptation. In: Proceedings of the 17th International Conference on Pattern Recognition, pp. 610–613 (2004) 2. Jiang, S., Ye, Q., Gao, W., Huang, T.: A new method to segment playfield and its applications in match analysis in sports video. In: Proceedings of the 12th ACM International Conference on Multimedia, pp. 292–295 (2004) 3. Wang, L., Zeng, B., Lin, S., Xu, G., Shum, H.Y.: Automatic extraction of semantic colors in sports video. In: Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 617–620 (2004) 4. Liu, Y., Jiang, S., Ye, Q., Gao, W., Huang, Q.: Playfield detection using adaptive GMM and its application. In: Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 421–424 (2005)
5. Troter, A.L., Mavromatis, S., Sequeira, J.: Soccer Field Detection in Video Images Using Color and Spatial Coherence. In: Proceedings of International Conference on Image Analysis and Recognition, pp. 265–272 (2004) 6. Ekin, A., Tekalp, A.M.: Robust dominant color region detection and color-based applications for sports video. In: Proceedings of International Conference on Image Processing, pp. 21–24 (2003) 7. Gong, Y., Sin, L.T., Chuan, C.H., Zhang, H., Sakauchi, M.: Automatic Parsing of TV Soccer Programs. In: Proceedings of International Conference on Multimedia Computing and Systems, pp. 167–174 (1995) 8. Seo, Y., Choi, S., Kim, H., Hong, K.: Where are the ball and players? Soccer Game Analysis with Color-based Tracking and Image Mosaic. In: Proceedings of 9th International Conference on Image Analysis and Processing, (2), pp. 196–203 (1997) 9. Ohno, Y., Miura, J., Shirai, Y.: Tracking Players and Estimation of the 3D Position of a Ball in Soccer Games. In: Proceedings of 15th International Conference on Pattern Recognition, (1), pp. 145–148 (2000) 10. D’Orazio, T., Ancona, N., Cicirelli, G., Nitti, M.: A Ball Detection Algorithm for Real Soccer Image Sequences. In: Proceedings of 16th International Conference on Pattern Recognition, pp. 201–213 (2002) 11. Yow, D., Yeo, B.L., Yeung, M., Liu, B.: Analysis and Presentation of Soccer Highlights from Digital Video. In: Proceedings of Asian Conference on Computer Vision, pp. 499– 503 (1995) 12. Tong, X., Lu, H., Liu, Q.: An Effective and Fast Soccer Ball Detection and Tracking Method. In: Proceedings of the 17th International Conference on Pattern Recognition, (4), pp. 795-798 (2004) 13. Liang, D., Liu, Y., Wang, Q., Gao, W.: A Scheme for Ball Detection and Tracking in Broadcast Soccer Video. In: Proceedings of Pacific-Rim Conference on Multimedia, (1), pp. 864–875 (2005) 14. Yu, X., Xu, C., Tian, Q., Leong, H.W.: A Ball Tracking Framework for Broadcast Soccer Video. In: Proceedings of International Conference of Multimedia and Expo, (2), pp. 273– 276 (2003) 15. Yu, X., Tian, Q., Wan, K.W.: A Novel Ball Detection Framework for Broadcast Soccer Video. In: Proceedings of International Conference of Multimedia and Expo, (2), pp. 265– 268 (2003) 16. Yu, X., Xu, C., Leong, H.W., Tian, Q., Tang, Q., Wan, K.W.: Trajectory-Based Ball Detection and Tracking with Applications to Semantic Analysis of Broadcast Soccer Video. In: Proceedings of the 11th ACM International Conference on Multimedia, pp. 11– 20 (2003) 17. Ren, J., Orwell, J., Jones, G.A.: Generating Ball Trajectory in Soccer Video Sequences. In: Workshop on Computer Vision Based Analysis in Sport Environments, Graz, Austria (2006) 18. Gersho, A., Gray, R.M.: Vector quantization and signal compression [M]. Kluwer Academic Publishers, Boston (1992) 19. Gonzalez, R.C., Wintz, P.: Digital Image Processing [M], 2nd edn. Addison-Wesley, Reading, MA (1987) 20. Sklansky, J.: Finding the convex hull of a simple polygon [J]. Pattern Recognition Letters 1(2), 79–83 (1982) 21. Rabiner, L.R.: A Tutorial on Hidden Markov Model and Selected Applications in Speech Recognition. Proceedings of the IEEE 77(2), 257–286 (1989)
Single-View Matching Constraints

Klas Nordberg

Computer Vision Laboratory, Department of Electrical Engineering, Linköping University
Abstract. A single-view matching constraint is described which represents a necessary condition that 6 points in an image must satisfy if they are the images of 6 known 3D points under an arbitrary projective transformation. Similar to the well-known matching constraints for two or more views, represented by fundamental matrices or trifocal tensors, single-view matching constraints are represented by tensors, and when they are multiplied with the homogeneous image coordinates the result vanishes if the condition is satisfied. More precisely, they are represented by 6-th order tensors on R3 which can be computed in a simple manner from the camera projection matrix and the 6 3D points. The single-view matching constraints can be used for finding correspondences between detected 2D feature points and known 3D points, e.g., on an object, which are observed from arbitrary views. Consequently, this type of constraint can be said to be a representation of 3D shape (in the form of a point set) which is invariant to projective transformations when projected onto a 2D image.
1 Introduction
Matching constraints for two, three or multiple views are a well-explored area in computer vision. The basis is the standard representation of the mapping from a 3D point to a 2D point in terms of the pin-hole camera model: yk ∼ Ck x
(1)
where Ck is the k-th camera matrix and x and yk are homogeneous representations of a 3D point and its 2D image projection in the k-th camera. The symbol ∼ represents equality up to scaling. In the two-view case, x is projected onto two camera images y1 , y2 by means of two distinct cameras C1 , C2 . This setup leads to a matching constraint of the form1 : y1T F y2 = F · (y1 ⊗ y2 ) = 0
(2)

¹ This work is made within the VISCOS project funded by the Swedish Strategic Research Foundation (SSF). The ⊗ sign denotes the outer (or tensor, or Kronecker) product between two vectors, or in general between two arrays, matrices or tensors. The result is a new array of all products of elements from the first vector with the elements of the second vector.
where F is the fundamental matrix corresponding to the two cameras, which can be computed directly from their matrices [1,2,3,4]. This constraint allows us to make a simple check of whether or not two image points can correspond to the same 3D point. It should be noted that the constraint is necessary but not sufficient for y1 and y2 to correspond to the same x. Consequently, if Equation (2) is satisfied, the point pair y1, y2 can be said to be in hypothetical correspondence, which may be confirmed or rejected by further processing, e.g., comparing local image features in or around the two points. This type of matching constraint has also been extended to the three-view case, described in terms of the trifocal tensor [5,6,7,4], and to the general multi-view case [8,9,4]. The resulting constraints can, e.g., describe necessary conditions on corresponding image points or lines in the different images. These constraints are multi-linear mappings (tensors) on the homogeneous representations of image points or lines which equal zero when the 2D points or lines correspond to the same 3D structure. Given this background on matching constraints, the idea of a single-view matching constraint may appear somewhat peculiar. Clearly, it does not make sense to ask if two distinct image points in the same image correspond to the same 3D point. The following sections, however, describe a problem which is analogous to multi-view matching constraints but formulated for points in a single image.
2 Preliminaries
Let us start the single-view case by rewriting Equation (1) slightly:

$\mathbf{y}_k \sim \mathbf{C}\,\mathbf{T}\,\mathbf{x}_k = \sum_{i=1}^{4}\sum_{j=1}^{4} \mathbf{c}_i\, T_{ij}\, x_{jk}$    (3)
where now all 2D points yk are found in the same image, corresponding to camera matrix C, and each such point is the image of some 3D point xk. Before being mapped onto the camera image, however, the 3D point is transformed by some projective transformation, represented by the 4 × 4 matrix T. To make the next step easier to see, this relation is also written out as an explicit summation over products of ci, the columns of C, Tij, the elements of T, and xjk, the elements of xk. From a mathematical point of view, yk can also be seen as the mapping C ⊗ xk applied to T:

$\mathbf{y}_k \sim (\mathbf{C} \otimes \mathbf{x}_k)\,\mathbf{T} = \sum_{i=1}^{4}\sum_{j=1}^{4} (\mathbf{c}_i\, x_{jk})\, T_{ij}$    (4)
Here, (C ⊗ xk ) can be represented as a 3 × 16 matrix and T as a 16-dimensional vector. In the following, the combination C ⊗ xk is referred to as a point-camera, denoted C(xk ); a point-camera maps projective transformations T to image points yk by combining the transformation with a 3D point xk and a camera C.
The camera C cannot see the difference between two points if they are located on a projection line which intersects the camera focal point (or camera center). In a similar manner, a point-camera cannot distinguish between transformations which move points along such projection lines, for example a uniform scaling of camera-centered coordinates. In algebraic terms this effect means that the dimensions of T can be reduced as follows. C has a 1-dimensional null space spanned by the normalized vector n̂, the homogeneous representation of the camera focal point. This means that² C = C (I − n̂ n̂ᵀ) = C Pᵀ P, where P is a 3 × 4 matrix which projects R⁴ onto the 3-dimensional space which is perpendicular to n̂. Inserted into Equations (3) and (4) this gives

yk ∼ C Pᵀ P T xk = ((C Pᵀ) ⊗ xk) (P T) = (C′ ⊗ xk) (P T)    (5)

where C′ = C Pᵀ. This leads to a modified point-camera C′(xk) = C′ ⊗ xk, a 3 × 12 matrix which maps the 12-dimensional vector P T = t to the image point yk:

yk ∼ C′(xk) t    (6)
Notice the similarity between Equation (1) and Equation (6). In the first case, yk is the image of a 3D point x as projected by a camera Ck; in the second case it is the image of a transformation t as projected by a point-camera C′(xk). Section 1 discusses necessary constraints which emerge when some cameras map an unknown 3D point to known image points in multiple views. Let us speculate about the possibility of necessary conditions which emerge when some point-cameras map an unknown transformation to known image points in a single image: a single-view matching constraint. In contrast to the usual matching constraints, a single-view matching constraint cannot be defined for only a pair of points, since two 2D points can be the image of any pair of 3D points for at least some projective transformation. As will be shown in the following section, a constraint which is a multi-linear expression in the homogeneous image coordinates is possible if six image points are included.
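A small sketch of the point-camera construction of Eqs. (5)-(6), assuming numpy and scipy; the function name and the use of scipy's null_space to build P are our own choices, not the paper's notation.

```python
import numpy as np
from scipy.linalg import null_space

def modified_point_camera(C, n_hat, x):
    """Build the modified point-camera C'(x) = C' ⊗ x of Eq. (6).

    C     : 3x4 camera matrix
    n_hat : normalised homogeneous camera centre (the null vector of C)
    x     : homogeneous 3D point (4-vector)
    Returns (Cx, P): the 3x12 point-camera and the 3x4 projector P, so that a
    projective transformation T maps to the image point as y ~ Cx @ (P @ T).ravel().
    """
    # Rows of P: an orthonormal basis of the 3-space orthogonal to n_hat,
    # so that P.T @ P = I - n_hat n_hat^T.
    P = null_space(n_hat.reshape(1, 4)).T        # 3x4
    C_prime = C @ P.T                            # 3x3 reduced camera, C' = C P^T
    Cx = np.kron(C_prime, x.reshape(1, 4))       # 3x12 point-camera, Eq. (5)
    return Cx, P
```

By construction, Cx @ (P @ T).ravel() equals C @ T @ x, since C n̂ = 0, so the point-camera reproduces the image of the transformed point.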
3 Derivation of the Single-View Matching Constraint
Let us begin by considering two image points y1, y2 which are the images of two 3D points x1, x2 according to Equation (3). The main result of the previous section is that the image points can also be written as

y1 = C′(x1) t,   y2 = C′(x2) t    (7)

where t = P T, reshaped as a 12-dimensional vector, and C′(xk) is the modified point-camera related to 3D point xk, as described in the previous section. Form the tensor or outer product of the homogeneous image coordinates, Y = y1 ⊗ y2, which combined with Equation (7) gives

Y = (C′(x1) ⊗ C′(x2)) (t ⊗ t) = C12 (t ⊗ t)    (8)

² I denotes the identity mapping of suitable size.
In this relation, we can see Y as a 9-dimensional vector (the outer product of two 3-dimensional vectors), t ⊗ t as a 144-dimensional vector (the outer product of a 12-dimensional vector with itself) and C12 = C′(x1) ⊗ C′(x2) as a 9 × 144 matrix formed as the Kronecker product of C′(x1) and C′(x2).

Before we continue, let us study the mapping C12 more carefully. It has a transpose C12ᵀ = C′(x1)ᵀ ⊗ C′(x2)ᵀ, and the range of C12ᵀ is the tensor product of the ranges of C′(x1)ᵀ and C′(x2)ᵀ, a 9-dimensional subspace of the 144-dimensional space of 2-nd order tensors on R12 (or square 12 × 12 matrices). Furthermore, C12 has a pseudo-inverse C12⁺ which satisfies C12 C12⁺ = I, where I is the identity mapping on R9, and which also has the property that C12⁺ C12 is the projection operator onto the above-mentioned range. In fact, this pseudo-inverse can be expressed in terms of the pseudo-inverses of the two point-cameras: C12⁺ = C′(x1)⁺ ⊗ C′(x2)⁺.

Let G be a 12 × 12 tensor which is anti-symmetric and which lies in the range of C12ᵀ, and define S = (C12⁺)ᵀ G. It then follows that

S · Y = Sᵀ Y = Gᵀ C12⁺ C12 (t ⊗ t)    (9)

and since G lies in the range of C12ᵀ, for which C12⁺ C12 is a projection operator, we get

S · Y = Sᵀ Y = Gᵀ (t ⊗ t) = 0    (10)

where the last equality follows from G being anti-symmetric and t ⊗ t being symmetric. Provided that G exists, we have thus proved that Sᵀ Y = 0 for all Y = y1 ⊗ y2 where y1, y2 are given in Equation (7), i.e., S is a single-view matching constraint of the type discussed in Section 2. Unfortunately, but in agreement with the discussion in the previous section, there is no G of this type except G = 0: the 9-dimensional range of C12ᵀ simply never includes a non-zero anti-symmetric tensor.

Fortunately, however, the above approach can be extended to an arbitrary order of Y. The order we need happens to be 6, i.e., we need to consider 6 different image points yk given by Equation (3) and form their outer or tensor product

Y = y1 ⊗ y2 ⊗ . . . ⊗ y6    (11)

This is a 6-th order tensor on R3 which can be reshaped as a 729-dimensional vector. The main result of this paper is: given a camera C, for any set of 6 3D points xk we can form a 6-th order tensor S on R3, a 729-dimensional vector, such that Sᵀ Y = 0 if the 6 2D points and the 6 3D points correspond, i.e., the former are the images of the latter. The following presentation sketches an existence proof for this S and discusses some general properties.

The 6-th order constraint tensor S is constructed in a similar way as the second order tensor above, with the exception that the result is, in general, non-zero. First, we construct

C1...6 = C′(x1) ⊗ . . . ⊗ C′(x6)    (12)
which represents a 3⁶ × 12⁶ matrix³. This mapping has a transpose C1...6ᵀ and a pseudo-inverse C1...6⁺ with properties that generalize from what was said in relation to C12 earlier. Second, G is defined as a 6-th order tensor on R12 which has the following properties: it lies in the range of C1...6ᵀ and it is perpendicular to any completely symmetric 6-th order tensor. A G with these properties can be found as follows. Let rij be the j-th row (j = 1, 2, 3) of C′(xi), i = 1, . . . , 6, and set

$\mathbf{G} = \sum_{ijklmn} \alpha_{ijklmn}\; \mathbf{r}_{1i} \otimes \mathbf{r}_{2j} \otimes \mathbf{r}_{3k} \otimes \mathbf{r}_{4l} \otimes \mathbf{r}_{5m} \otimes \mathbf{r}_{6n}$    (13)

This construction of G assures that it lies in the range of C1...6ᵀ. The scalars αijklmn are chosen as

αijklmn = (−1)^(i+j+k+l+m+n) det(Mijklmn)    (14)

where the 12 × 12 matrices Mijklmn are given by

Mijklmn = concatenation of all rows rpq, except r1i, r2j, . . . , r6n    (15)

This guarantees that it is also the case that G is perpendicular to any completely symmetric 6-th order tensor on R12. As before, S is computed from G as

S = (C1...6⁺)ᵀ G    (16)

but since the basis used to construct G now meets its dual basis, represented by the rows of (C1...6⁺)ᵀ, the elements of S are precisely the scalars Sijklmn = αijklmn. We have a G that satisfies both of the critical requirements previously described for the second order case and have constructed S in a similar way. With the same arguments, it then follows that

S · Y = Sᵀ Y = 0    (17)

when Y is constructed from a 6-tuple of image points which match the 6-tuple of 3D points used for constructing S.

It should be noted that the derivation of S shows that it can be computed without explicitly computing either G or C1...6⁺. Instead, it is sufficient to compute 3⁶ = 729 determinants of 12 × 12 matrices Mijklmn. However, the number of unique determinants is in fact much smaller due to symmetries in αijklmn. First, αijklmn vanishes whenever 3 or more indices are equal. Since the indices have range 1, 2, 3, it follows that αijklmn is non-zero only when the indices are permutations of (112233); there are 90 unique sets of such indices. Second, for each such set αijklmn is invariant to exchanging, e.g., the 1-s and the 2-s. There are 6 such permutations, which implies that there

³ The size of this matrix may appear discouraging at first sight, but we never need to compute or manipulate it in practice in order to derive S.
are (at most) only 90/6 = 15 distinct values of αijklmn (apart from those which are always zero). This means that the elements of S can be computed in terms of only 15 determinants of 12 × 12 matrices. An important property of S is that it vanishes completely for certain configurations of the 6 3D points xk. Preliminary results suggest that this happens whenever more than 4 of the points lie in the same plane. This property has not yet been established formally, but it appears valid based on experimental data.
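A direct (unoptimized) sketch of how S can be assembled from the six point-cameras via Eqs. (13)-(15), assuming numpy; it computes all 90 non-trivially-zero entries rather than exploiting the 15-fold symmetry noted above.

```python
import numpy as np
from itertools import product

def constraint_tensor(point_cameras):
    """Build the 6th-order constraint tensor S from the six 3x12 point-cameras
    C'(x_1), ..., C'(x_6); only index tuples that are permutations of (1,1,2,2,3,3)
    give non-zero entries.

    point_cameras : list of six 3x12 arrays
    Returns S as a numpy array of shape (3,)*6 with S[i,j,k,l,m,n] = alpha_ijklmn.
    """
    S = np.zeros((3,) * 6)
    for idx in product(range(3), repeat=6):
        if max(np.bincount(idx, minlength=3)) >= 3:
            continue                      # alpha vanishes if an index occurs 3+ times
        # M: all rows of the six point-cameras except row idx[p] of camera p, Eq. (15).
        rows = [point_cameras[p][q] for p in range(6) for q in range(3) if q != idx[p]]
        M = np.vstack(rows)               # 12 x 12
        sign = (-1) ** (sum(idx) + 6)     # (-1)^(i+j+k+l+m+n) with 1-based indices
        S[idx] = sign * np.linalg.det(M)  # Eq. (14)
    return S

# A matching 6-tuple (y1, ..., y6) of homogeneous image points then satisfies Eq. (17):
#   np.einsum('ijklmn,i,j,k,l,m,n->', S, y1, y2, y3, y4, y5, y6)  ~ 0
```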
4 Application
The single-view matching constraint derived in the previous section can be used for finding 2D-3D correspondences. For example, given a set of 6 known 3D points, typically distinct feature points on an object, and a set of 6 detected image points, we can determine whether the latter set is the image of the first after an arbitrary projective transformation followed by the camera mapping, which is assumed to be known. Notice that the points in the sets involved in this computation must be correctly ordered, i.e., 3D point xk must be in correspondence with 2D point yk for k = 1, . . . , 6. If a proper order cannot be assured, it may be necessary to test all 6! = 720 possible permutations in order to find the correct one. Furthermore, the sets of points which are considered can in general have more than 6 points; in this case, the constraint can be applied to any selection of 6 points from the set.

When using the single-view matching constraint derived here, it is important to understand that it represents a necessary but not sufficient condition for matching. To see this, notice that S maps, e.g., the points y1, . . . , y5 to the dual homogeneous coordinates of a line l such that lᵀ y6 = 0 when the constraint is satisfied. This means that any y6 located along this line satisfies the constraint, which implies that the constraint is not sufficient for matching. This observation suggests that the constraint can be used to establish initial hypothetical 2D-3D correspondences which then have to be further investigated in order to be confirmed or rejected.

4.1 Experiment
An object with 25 distinct points (corners, etc.) is observed in three different poses, as shown in Fig. 1. The points are indicated with circles, and the poses differ mainly by a 20° rotation between images 1 and 2 and between images 2 and 3. The images are 576 × 720 pixels in size, and the image coordinates of the points are determined manually with an estimated accuracy of ±1 pixel. The corresponding 3D coordinates of the 25 points relative to an object-centered system are also determined manually, with an estimated accuracy of ±0.5 mm. From the set of 3D coordinates and the 2D coordinates in the first image, a camera matrix C is estimated using the DLT method based on numerically normalized homogeneous coordinates [10,4]. The same camera matrix is used for the analysis of images 2 and 3.
Fig. 1. Images of a 3D object with a set of 25 feature points in three different poses. 25 feature points are indicated with circles.
To investigate the discrimination capacity of the single-view matching constraint, a set of 6 3D points is selected randomly and the corresponding S is computed according to Section 3. If S = 0 (which happens for certain configurations of the 3D points, see the end of Section 3), a new set is chosen until a non-zero S is obtained. The corresponding set of image coordinates is then used to form Y, Equation (11), and the matching measure Sᵀ Y is computed. Ideally, this should be zero, but since there are measurement errors in all of xk, yk and C, the result is a non-zero number which, in theory, is relatively small compared to what it becomes when S and Y are computed for non-corresponding point sets. To test this hypothesis, the matching measure is computed for 1000 sets of corresponding 2D-3D points in one image. These are then compared to 1000 sets where the 2D points are chosen randomly in the image but assured not to correspond to the 3D points. As a result, two sets of 1000 matching measures are obtained; the first should consistently be relatively small and the second should in general be large. A threshold t is chosen from a range of values, the frequency of true positives (TP) is estimated as the percentage of matching measures in the first set which are < t, and the frequency of false positives (FP) is estimated as the percentage of matching measures in the second set which are < t. This procedure is carried out for all three images. The corresponding receiver operating characteristic (ROC) curves are shown in Fig. 2. These curves are rather
Fig. 2. The ROC curves corresponding to the experiment using the standard matching measure. Frequency of true positives on the vertical axis and frequency of false positives on the horizontal axis. Different points on the curve correspond to different threshold values. The three different curves come from the three different images.
similar and show, for example, that we can choose a threshold which means that approximately 75% of all true positive cases can be detected at the expense of also getting 20% false positives. For many practical applications, however, this is not an acceptable performance. An alternative approach to detection of hypothetical 2D-3D correspondences is as follows (see also the discussion in the beginning of this section). Mathematically, S maps a set of 5 image points to a line in the image, and the single-view matching constraint is equivalent to a 6-th point lying on this line. This means that S corresponds to 6 lines in the image, represented in dual homogeneous coordinates as lk , k = 1, . . . , 6. The corresponding matching constraint is equivalent to lTk yk = 0 for k = 1, . . . , 6. With proper normalization of lk , however, lTk yk represents the 2D distance from image point k to line k. An alternative matching measure can then be defined as the largest of the 6 distances produced by lk and yk for k = 1, . . . , 6. For the same data set of 1000 true and false correspondences considered above, from each of the three images, this alternative matching measure is computed and the ROC-curves are estimated for a range of thresholds. The result is presented in Figure 3 and shows that this matching measure can provide a better discrimination between true and false positives, e.g., giving approximately 80% of all true positives at the expense of only getting 5% false positives.
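A sketch of the alternative matching measure, assuming numpy and that S is available as a (3,...,3) array: for each point, S is contracted with the other five points to obtain the line lk, and the largest normalized point-to-line distance is returned. The function name and data layout are illustrative.

```python
import numpy as np

def alternative_measure(S, ys):
    """Largest of the 6 point-to-line distances implied by S (Section 4.1).

    S  : (3,)*6 constraint tensor
    ys : list of six homogeneous image points (3-vectors, third component != 0)
    """
    letters = 'ijklmn'
    dists = []
    for k in range(6):
        # Contract S with the five points other than y_k -> dual line l_k.
        spec = ','.join(letters[p] for p in range(6) if p != k)
        operands = [ys[p] for p in range(6) if p != k]
        l = np.einsum(f'ijklmn,{spec}->{letters[k]}', S, *operands)
        y = ys[k] / ys[k][2]                      # normalise the image point
        d = abs(l @ y) / np.linalg.norm(l[:2])    # 2D distance from y_k to l_k
        dists.append(d)
    return max(dists)
```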
5 Discussion
The experiment presented in Section 4.1 shows that the single-view matching constraint is a useful tool for determining 2D-3D correspondences, in particular when the second type of matching measure is used. It should be noted that the
Fig. 3. The ROC curve corresponding to the experiment using the alternative matching measure. The three different curves come from the three different images.
data used in this experiment is rather crude: e.g., the camera matrix has been estimated using the simplest possible technique, and deviations from the pin-hole camera model, e.g., in terms of lens distortion, have not been taken into account. By estimating C with higher accuracy and correcting for geometric distortions in yk, an even better discrimination between true and false positives may be possible. However, since the single-view matching constraint does not represent a sufficient condition for matching, we must always expect to obtain false positives even with data of high accuracy. Consequently, further processing of the set of positives is necessary to reduce the number of false positives. The validity of the matching constraint has been tested only for rigid transformations of the 3D points, not for general projective transformations. That generalization appears to be of minor practical use and is likely to decrease the numerical stability of the method, since it becomes more difficult to find a suitable threshold t which discriminates between true and false positives. The fact that S is of 6-th order raises the question of whether it is possible to find a tensor of lower order which solves the same problem. Although no formal proof is given here, it is claimed that such a tensor does not exist. In fact, the tensor reported here was found after a search through different orders, trying to find an S which is perpendicular to all Y (the tensor product of the homogeneous image coordinates) when the image points are the camera projection of the same set of 3D points but for different transformations T, Equation (3). A direct consequence of the derivation of S is that it is unchanged (as a projective element) under a projective transformation which acts simultaneously on the 6 3D points from which it is computed. Consequently, it provides a 15-dimensional descriptor of 3D shape which is invariant (unchanged) with respect to projective transformations, where we interpret the 6 3D points as a basic form of 3D shape. This implies that we do not have to discuss S in the context of camera mappings, and can instead use it as a general 3D shape descriptor, e.g., assuming a canonical camera.
A method for deriving matching constraints in multiple views is presented in [8], based on the theory of the so-called Grassmannian tensor. An alternative approach with the same goal is described in [11], based on standard linear algebra. The derivation of S can be seen as a straightforward extension of the latter work, where the cameras C_k are replaced by the point-cameras C(x_k), the 3D point x is replaced by the transformation t, and the 6th-order case is considered.
References
1. Faugeras, O., Luong, Q., Maybank, S.: Camera self-calibration: theory and experiments. In: Sandini, G. (ed.) ECCV 1992. LNCS, vol. 588, pp. 321–334. Springer, Heidelberg (1992)
2. Faugeras, O.: What can be seen in three dimensions with an uncalibrated stereo rig? In: Sandini, G. (ed.) ECCV 1992. LNCS, vol. 588, pp. 563–578. Springer, Heidelberg (1992)
3. Hartley, R.: Estimation of relative camera positions for uncalibrated cameras. In: Sandini, G. (ed.) ECCV 1992. LNCS, vol. 588, pp. 579–587. Springer, Heidelberg (1992)
4. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, Cambridge (2003)
5. Shashua, A.: Trilinearity in visual recognition by alignment. In: Eklundh, J.-O. (ed.) ECCV 1994. LNCS, vol. 800, pp. 479–484. Springer, Heidelberg (1994)
6. Shashua, A.: Algebraic functions for recognition. IEEE Trans. on Pattern Analysis and Machine Intelligence 17 (1995)
7. Shashua, A., Werman, M.: On the trilinear tensor of three perspective views and its underlying geometry. In: Proceedings of International Conference on Computer Vision (1995)
8. Triggs, B.: Matching constraints and the joint image. In: Proceedings of International Conference on Computer Vision, Cambridge, MA, pp. 338–343 (1995)
9. Faugeras, O., Mourrain, B.: On the geometry and algebra of the point and line correspondences between N images. In: Proceedings of International Conference on Computer Vision, Cambridge, MA, pp. 951–956 (1995)
10. Hartley, R.I.: In defence of the 8-point algorithm. IEEE Trans. on Pattern Analysis and Machine Intelligence 19, 580–593 (1997)
11. Nordberg, K.: Point matching constraints in two and three views. In: Hamprecht, F.A., Schnörr, C., Jähne, B. (eds.) Pattern Recognition. LNCS, vol. 4713, pp. 52–61. Springer, Heidelberg (2007)
A 3D Face Recognition Algorithm Based on Nonuniform Re-sampling Correspondence Yanfeng Sun, Jun Wang, and Baocai Yin Beijing Key Laboratory of Multimedia and Intelligent Software, College of Computer Science and Technology, Beijing University of Technology, Beijing 100022 China
[email protected],
[email protected],
[email protected]
Abstract. This paper proposes an approach to face recognition using 3D face data based on Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The approach first aligns 3D faces by nonuniform mesh re-sampling based on computing face surface curves. This step aligns the 3D prototypes based on facial features, eliminates 3D face size information, and preserves the important 3D face shape information of the input face. Then 2D texture information and 3D shape information are extracted from the 3D face images for recognition. Experimental results on a 105-person 3D face data set obtained by a Cyberware 3030RGB/PS laser scanner demonstrate the performance of our algorithm.
1
Introduction
Face recognition technology has many potential applications in public security, personal identification, automated crowd surveillance and so on. Over the past 40 years many different face recognition techniques based on 2D images have been proposed. Although significant progress has been made [1], some difficult problems, such as pose and illumination, have not been solved well. The reason is that the input face image of a person is not taken under conditions similar to those of the image of the same person in the database. It has been shown [2,3] that even small variations in pose and illumination can drastically degrade the performance of 2D image based face recognition systems. Pose and illumination invariant face recognition is a challenging research area, and some approaches have been proposed. A well-known approach for achieving pose and illumination invariance is to utilize 3D face information [4]. As the technology for acquiring 3D face information becomes simpler and cheaper, the use of 3D face data becomes more common. A 3D face image contains both shape and texture information. The 3D shape information, which is lacking in 2D images, is expected to provide additional recognition features, and these features are robust against pose and illumination variations. So the use of additional 3D information is expected to improve the reliability of face recognition systems. Principal Component Analysis (PCA) is a classical data compression, feature extraction and data representation technique widely used in the area of pattern recognition. It has been one of the most successful approaches in face
recognition [5]. Its goal is to find basis vectors of an orthogonal linear space from a training set and to project a probe face image onto the linear space spanned by these basis vectors. PCA can effectively represent a probe face image (not in the training set) as a linear combination of the basis vectors. Different face images have different combination coefficients, and these coefficients are taken as the PCA features for face recognition. It has been proved that the PCA transform is optimal in the sense of minimum square error. However, according to linear object class theory, correspondence of all face features is needed. In 2D images this is satisfied by a normalization procedure which requires that the number of pixels is the same and that the face features correspond. For 3D images, the first step of PCA-based face recognition should therefore be to establish correspondence between the 3D face data. The Linear Discriminant Analysis (LDA) method finds a transform matrix such that the projected discriminatory information achieves greater between-class scatter, and consequently classification is more reliable. Researchers have demonstrated that LDA based algorithms outperform the PCA algorithm for many different tasks [6]. However, LDA has a weakness known as the Small Sample Size (SSS) problem, and many LDA extensions were proposed for face recognition to overcome it. A two-stage method has received relatively more attention: it first applies PCA for dimensionality reduction and then LDA for discriminant analysis. Having achieved reasonable success with 2D image based PCA+LDA face recognition systems in previous works, we continue this line of research with 3D face images acquired by the Cyberware 3030RGB/PS laser scanner. In this paper we present an approach for establishing correspondence between 3D face shape images. The correspondence is achieved using nonuniform mesh re-sampling, implemented by computing face surface curves and subsequently constructing the mesh and surface. This correspondence guarantees not only that facial features are aligned but also that all 3D face images have the same number of nodes. As a result, a probe 3D face image has a uniform data format with the images in the face database, which makes face recognition based on PCA and LDA feasible. The remainder of this paper is organized as follows. In Section 2, we introduce other work related to our experiments. The correspondence calculation by nonuniform re-sampling is described in Section 3. Section 4 reports the experimental results. Finally, conclusions and future work are drawn in the last section.
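As a rough illustration of the two-stage PCA+LDA pipeline discussed above, the following sketch uses scikit-learn on vectorized face data. It is not the authors' implementation: the function and parameter names are ours, and the 99% energy criterion simply mirrors the choice reported later in Section 4.2.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def pca_lda_recognize(X_train, y_train, X_probe, energy=0.99):
    """Two-stage PCA+LDA recognition sketch.

    X_train : (num_faces, 3n) vectorized 3D faces after correspondence
    y_train : subject labels of the gallery faces
    X_probe : probe faces in the same vector format
    """
    # Stage 1: PCA for dimensionality reduction, keeping ~99% of the energy
    pca = PCA(n_components=energy, svd_solver="full")
    Z_train = pca.fit_transform(X_train)
    # Stage 2: LDA on the PCA coefficients for discriminant analysis
    lda = LinearDiscriminantAnalysis()
    F_train = lda.fit_transform(Z_train, y_train)
    # Nearest-neighbour matching in the discriminant space (Euclidean distance)
    nn = KNeighborsClassifier(n_neighbors=1).fit(F_train, y_train)
    F_probe = lda.transform(pca.transform(X_probe))
    return nn.predict(F_probe)
```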
2
Related Work
A quite comprehensive and detailed review of 3D face recognition research can be found in [4]. Here we only review the correspondence strategies related to 3D face recognition based on PCA+LDA. Gu et al. [8] proposed a uniform mesh re-sampling based method to establish correspondence between the nodes of the prototypes. This method is an improvement of [7]. It is based on uniform gridded re-sampling of different 3D
faces. The method first segments each face texture image into patches and then paints iso-curves of each patch on every face uniformly. After a combination of relaxation and subdivision steps, a point-to-point correspondence with uniform topology can be acquired between each pair of faces. A disadvantage of this method is that every patch uses the same sampling strategy, which results in large-scale 3D face mesh data and is not helpful for PCA dimensionality reduction. The ICP algorithm [9] developed by Besl and McKay registers point sets by an iterative procedure and is widely used in the field of 3D object registration. When matching two facial surfaces with different shape or expression, the difference between pairs of nearest points may become large due to shape deformation, which can have a large effect when performing least-squares minimization. Thus there is a significant performance drop in 3D face recognition when expression varies between gallery and probe. The ICP procedure cannot assure that all matched 3D face surfaces have a uniform representation; it also does not provide an exact alignment of prominent facial features, only an approximation to similar geometries. In [10] Russ et al. present an approach to 3D face recognition based on PCA. In this approach, the 3D facial point correspondence is achieved by registering a 3D face to a scaled generic 3D reference face, after which a surface normal search is performed. It preserves important 3D size information by scaling a 3D reference to the alignment of facial key points. This is an improvement on [9]; the difference lies in the search strategy. Our approach nonuniformly parameterizes the original grids of different prototypes through polygonal surface re-sampling such that the original geometry is maintained. It includes face segmentation, curvature computation and re-sampling.
3
Correspondence Calculation
The face data acquired by a 3D scanner have different dimensions for different faces, while a PCA based face recognition algorithm requires exact alignment of facial features. Therefore, the correspondence of 3D face data is a key step of our face recognition approach. Our correspondence calculation is an improvement of [8].
3.1
Face Segmentation
Original face data were obtained by the Cyberware 3030RGB/PS laser scanner. The laser scans provide head structure data in a cylindrical representation relative to a vertical axis. In angular steps ϕ and vertical steps h, at a spacing of 0.75° and 0.615 mm, the device measures the radius r, along with the red, green and blue components of the surface texture R, G, B. Face segmentation divides the 3D face into many patches, each of which is a quadrangle. During segmentation, the key feature points and some edge points serve as the segmentation nodes. Face segmentation is implemented on the texture image of the scanned 3D face. First, the key feature points such as eyes, nose, mouth, ears and so on are located. Then, based on these feature points, other segmentation
points can be obtained by interpolation. Connecting these segmentation points forms a mesh, and the face is divided into 122 quadrangle patches. Finally, a mapping relation is set up to map the mesh onto the 3D face texture image. The attractive attributes of this segmentation are that the same facial feature lies in the same labeled region, the key feature points are nodes of the quadrangles, and each segmented face has the same topology structure. More details are given in [8].
3.2
Calculating Node Curvature and Patch Curvature
The re-sampling density of the nonuniform mesh is determined by the surface curvature: the larger the curvature, the denser the re-sampling. In this paper, we use the method proposed by Milroy [11] to obtain a conicoid fit at the nodes of the face, and then to get the principal curvatures, the mean curvature of each node and the patch curvature. The curvature calculation process is as follows. Step 1: Let vi be a node of the initial quadrangle mesh. The quadratic surface S passing through vi can be defined as: S(u, v) = (u, v, h(u, v)),  h(u, v) = au² + buv + cv².
(1)
Fig. 1. Local surface at a point
Here a, b and c are constants. As shown in Figure 1, P is a point on the surface S, N is the surface normal, and T1 and T2 are the principal directions (T1 corresponds to the minimum curvature and T2 to the maximum). (u, v, h) are the coordinates in the tangent plane frame with origin at vi. Step 2: Calculate the normal direction vector Ni of vi:
Ni = Σ_{j=1}^{m} [ di,j dj,j+1 / (di,j + di,j+1 + dj,j+1) ] nj,   with di,m+1 = di,1.
(2)
Here di,j is the length between vi and vj, where vj is a neighborhood point of vi; nj is the unit normal vector of the patch adjacent to vi; m is the number of patches adjacent to vi; and dj,j+1 is the length between vj (a neighborhood point of vi) and its neighboring point vj+1 (Figure 2).
Fig. 2. The normal vector of vi
Step 3: Calculate the values of the parameters a, b and c in S(u, v). After the normal direction vector Ni of vi is obtained, the local coordinate system of vi can be constructed, and the values of the neighborhood points vj in this coordinate system are known. To fit a polynomial that best approximates the surface at the point vi, we use weighted least-squares surface fitting to obtain the conicoid. Then we can obtain the values of a, b and c in S(u, v). More details can be found in [11]. Step 4: Calculate the principal curvatures k1, k2 of vertex vi:
k1 = a + c − √((a − c)² + b²)   (3)
k2 = a + c + √((a − c)² + b²)   (4)
k1 and k2 are determined by the surface. The average curvature of vi is defined as Hi = (k1 + k2)/2. Step 5: We use the mean curvature of all the points in a patch to represent the degree of curvature of this patch. It can be defined as:
ki = (Σ_{i=1}^{n} Hi) / n,   i = 1, 2, ..., 122   (5)
where n is the number of points in the patch.
After evaluating all the ki for a given 3D face, we obtain the hierarchical patches based on discrete curvature, as shown in Figure 3. Black marks the areas with the largest curvature, which need the most re-sampling iterations; red is second and yellow is third.
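A minimal sketch of Steps 3 and 4 follows, assuming the neighbor coordinates have already been expressed in the local tangent-plane frame of the node and that the least-squares weights are supplied by the caller; the function name is ours.

```python
import numpy as np

def fit_quadric_and_curvature(uvh, weights=None):
    """Fit h(u, v) = a*u^2 + b*u*v + c*v^2 to neighbor points given in the local
    tangent-plane frame of a node, then return (k1, k2, H).

    uvh     : (m, 3) array of neighbor coordinates (u, v, h) around the node
    weights : optional (m,) array of least-squares weights
    """
    u, v, h = uvh[:, 0], uvh[:, 1], uvh[:, 2]
    A = np.column_stack([u * u, u * v, v * v])        # design matrix for (a, b, c)
    if weights is not None:
        w = np.sqrt(weights)
        A, h = A * w[:, None], h * w
    a, b, c = np.linalg.lstsq(A, h, rcond=None)[0]    # (weighted) least-squares fit
    root = np.sqrt((a - c) ** 2 + b ** 2)
    k1, k2 = a + c - root, a + c + root               # principal curvatures, Eqs. (3)-(4)
    H = 0.5 * (k1 + k2)                               # mean curvature of the node
    return k1, k2, H
```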
Fig. 3. Hierarchical patches based on discrete curvature
3.3
Nonuniform Mesh Re-sampling
Mesh re-sampling is a common method for constructing a surface from primitive data. Our re-sampling is nonuniform: its mesh density is determined by the curvature of each patch. After calculating the patch curvatures, a re-sampling rule must be given; according to this rule, different patches are re-sampled a different number of times. For a patch with smaller curvature, a sparse mesh can approximate the surface shape well, so we give it fewer re-sampling iterations. For a patch with larger curvature, we use a denser mesh to approximate it and resample it more times. This rule tends to reduce the number of mesh nodes compared with uniform mesh re-sampling because it provides an improved approximation of the input face's surface geometry. However, for nonuniform mesh re-sampling to give good results, the re-sampling counts must be determined in advance. They are obtained by evaluating the mesh error between the scanned face mesh and the re-sampled face mesh, which can be determined offline in the training stage. In our experiments, we use two re-sampling settings: in method 1 the highest re-sampling count is 4 and the lowest is 2, while in method 2 the counts are 3 and 2. An example of the same nonuniform re-sampling strategy with different re-sampling counts is shown in Figure 4.
Fig. 4. Results of re-sampling. (a) Method 1 with 26058 nodes, 45184 triangles. (b) Method 2 with 13546 nodes, 21632 triangles. (c) Uniform re-sampling with 132858 nodes, 249856 triangles.
Their node numbers are 26058, 13546 and 132858, respectively. Compared with uniform re-sampling, the node numbers of the two nonuniform re-samplings decrease by 80.39% and 89.80%. The same faces reconstructed from the above meshes are shown in Figure 4. Notice that although the node number is reduced drastically, the reconstructed face shape is similar.
4
Experimental Results
In this section, the presented approach is analyzed for 3D face synthesis and recognition, and some experimental results are shown.
4.1
Database
Our analysis is performed on the 3D BJUT face database. This database consists of 3D face images of 1000 subjects acquired in 2005 and 2006. Of the 1000 subjects, 105 were acquired three or four times at different dates, forming 331 3D face images. This allows two samples of each of these subjects to be used as gallery images, which are also used for PCA training; the remaining 121 images form the probe set. All faces were without makeup, accessories and facial hair. Some of them have slight expressions such as anger, surprise or happiness. In our work, all 3D face images are brought into the same data format by the nonuniform re-sampling method. They can be represented as shape and texture vectors:
Si = (Xi1, Yi1, Zi1, ..., Xin, Yin, Zin)^T ∈ R^{3n}   (6)
Ti = (Ri1, Gi1, Bi1, ..., Rin, Gin, Bin)^T ∈ R^{3n}   (7)
where 1 ≤ i ≤ N, N is the number of faces, n is the number of nodes of the 3D faces after nonuniform re-sampling correspondence, and (Rin, Gin, Bin) are the color values of the node (Xin, Yin, Zin).
4.2
3D Face Synthesis Based on PCA Model
3D face synthesis based on the PCA model generates a new 3D face using the PCA basis images. This process is affected by the 3D face correspondence and by the PCA basis images obtained from the training sets. In this section, a face generated by the face segmentation and correspondence procedure of Section 3 is called a nonuniform re-sampling face image. The PCA parameters of a nonuniform re-sampling face image are obtained from the PCA basis images; the face generated by these parameters and the PCA model is called the synthesized image. In our experiments, the PCA training set consists of 210 3D face images and the first 91 eigenvalues occupy almost all the energy (99%). Figure 5 gives examples of synthesized face images. The left column is the input 3D face image, the middle the re-sampling face image and the right the synthesized face image. Here we consider only shape information and neglect texture information, so no texture values are put into correspondence. We consider three cases: (a) the face image is included in the training set, (b) the face image is not included in the training set, (c) a different sample of the subject is included in the training set.
4.3
Face Recognition
We examine the utility of the PCA+LDA parameters for face recognition using both the correct classification rate (CCR) and the cumulative match characteristic (CMC), which denotes the probability CMC(r) that the correct match of any given test is within the r top-scoring candidates, for 1 ≤ r ≤ N. This provides an indication of how close one is to obtaining the correct match if the rank-one match is incorrect. Obviously, the recognition rate (RR) is equal to CMC(1). In order to demonstrate the recognition performance, we use three kinds of information:
Fig. 5. The examples of reconstructed 3D face
(1) the shape information, (2) the texture information, and (3) combined shape and texture information. Shape and texture information provide two scores based on the method described above, denoted Sshape and Stexture. A combination Scomb of these two scores can be obtained as follows: Scomb = (1 − α) · Sshape + α · Stexture
(8)
where 0 ≤ α ≤ 1. Table 1 shows the correct classification rate (CCR) of the proposed PCA plus LDA methods based on the Euclidean distance classifier using shape information, texture information, and their combination. Figure 6 shows the CMC curves using the Euclidean distance classifier on shape and texture, respectively, in method 1 and method 2.

Table 1. CCR on the face database (105 persons)

                      Shape only   Texture only   Combination of texture and shape
PCA+LDA (method 1)    90.1%        95.0%          97.5% (α = 0.3)
PCA+LDA (method 2)    86.8%        92.6%          95.0% (α = 0.4)
Fig. 6. CMC curve using shape or texture information
Fig. 7. Identification accuracy based on the fusion strategy with respect to α
As shown in Figure 6, the recognition rate reaches above 95% for r ≥ 2 using only shape information in method 1, and reaches approximately 100% at r = 5 using texture information in method 1. Figure 7 describes the recognition rate based on the combination of the shape and texture strategy with respect to α. The results show that the fused information gives better performance than the individual information.
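A small sketch of the fusion rule of Eq. (8) together with a grid search over α on a validation set is given below; the score matrices, labels and step size are illustrative assumptions.

```python
import numpy as np

def fuse_scores(s_shape, s_texture, alpha):
    """Combine shape and texture matching scores as in Eq. (8)."""
    return (1.0 - alpha) * s_shape + alpha * s_texture

def best_alpha(s_shape, s_texture, labels, gallery_labels, step=0.1):
    """Grid-search alpha by rank-one accuracy on a validation set.

    s_shape, s_texture : (num_probes, num_gallery) score matrices (higher = better)
    labels             : true subject label of each probe
    gallery_labels     : subject label of each gallery entry
    """
    labels = np.asarray(labels)
    gallery_labels = np.asarray(gallery_labels)
    best = (0.0, -1.0)
    for alpha in np.arange(0.0, 1.0 + 1e-9, step):
        pred = gallery_labels[np.argmax(fuse_scores(s_shape, s_texture, alpha), axis=1)]
        acc = float(np.mean(pred == labels))
        if acc > best[1]:
            best = (alpha, acc)
    return best  # (alpha, rank-one accuracy)
```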
5
Conclusions and Future Work
We have presented a general approach for 3D face pixel-wise correspondence based on nonuniform mesh re-sampling, implemented by computing face surface curves and subsequently constructing the mesh and surface. It improves on the work in [8] in terms of dimension reduction, and improves on the correspondence performance of [7]. After aligning all the 3D faces, we proposed an approach to face recognition using 3D face shape and texture information based on PCA and LDA. The proposed algorithm is tested on the 3D BJUT face
database and achieves good performance. In the future, we will focus on the 3D face model and the problem of facial expression analysis, enlarge the test set, and improve the identification performance.
Acknowledgment. The research is supported by the National Natural Science Foundation of China (No. 60533030) and the Beijing Natural Science Foundation (No. 4061001).
References
1. Zhao, W., Chellappa, R., Rosenfeld, A.: Face recognition: a literature survey. ACM Computing Surveys 35, 399–458 (2003)
2. Moses, Y., Adini, Y., Ullman, S.: Face recognition: the problem of compensating for illumination changes. IEEE Transactions on Pattern Analysis and Machine Intelligence 19(7), 721–732 (1997)
3. Zhao, A.R.W., Chellappa, R., Phillips, P.: Face recognition: a literature survey, revised. Technical Report CS-TR4167R, UMCP (2002)
4. Bowyer, K.W., Chang, K., Flynn, P.J.: An evaluation of multi-modal 2d+3d face biometrics. IEEE PAMI 27(4), 619–624 (2005)
5. Turk, M., Pentland, A.: Eigenfaces for Recognition. Journal of Cognitive Neuroscience 3, 71–86 (1991)
6. Belhumeur, P., Hespanha, J., Kriegman, D.: Using discriminant eigenfeatures for image retrieval. IEEE PAMI 19(7), 711–720 (1997)
7. Blanz, V., Vetter, T.: Face Recognition Based on Fitting a 3D Morphable Model. IEEE Transactions on Pattern Analysis and Machine Intelligence 25(9), 1063–1074 (2003)
8. Gu, C.L., Yin, B.C., Hu, Y.L., Cheng, S.Q.: Resampling Based Method for Pixel-wise Correspondence between 3D Faces. In: Proceedings of ITCC 2004, vol. 1, pp. 614–619 (2004)
9. Besl, P.J., McKay, N.D.: A Method for Registration of 3-D Shapes. IEEE Trans. on Pattern Analysis and Machine Intelligence 14(2), 239–256 (1992)
10. Russ, T., Boehnen, C., Peters, T.: 3D Face Recognition Using 3D Alignment for PCA. In: Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1391–1398 (2006)
11. Milroy, M.J., Bradley, C., Vickers, G.W.: Segmentation of a wrap around model using an active contour. Computer Aided Design 29(4), 299–320 (1997)
A Novel Approach for Storm Detection Based on 3-D Radar Image Data Lei Han1, Hong-Qing Wang1, Li-Feng Zhao2, and Sheng-Xue Fu2 1
Department of Atmospheric Science, Peking University, 100871 Beijing, P.R. China {hanlei,hqwang}@pku.edu.cn 2 Department of Electrical Engineering, Ocean University of China, 266003 Qingdao, P.R. China {wmhp2004,dayou}@ouc.edu.cn
Abstract. The storm detection algorithm is a key element of severe weather surveillance services based on radar image data, and 3-D clustering is its fundamental part. During the clustering process, the connection area between adjacent storms may cause existing algorithms to wrongly identify them as one storm. Isolating storms from a cluster of storms is another difficulty. To overcome these difficulties, this paper introduces a novel approach which combines the strengths of erosion and dilation in a special way. First, the erosion operation is used to solve the problem of false merger. Then the dilation operation is performed when using gradually increased thresholds to detect storms, which preserves the internal structure information of sub-storms when isolating storms from a cluster of storms. The experimental results show that this method can correctly recognize adjacent storms and, when isolating storms from a cluster of storms, can also keep the internal structure of sub-storms, which benefits the subsequent tracking task.
1 Introduction
As an important part of severe weather surveillance, storm detection, tracking and forecasting based on radar image data detects existing storms, calculates their parameters such as centroid position, volume and top height, tracks the storms across successive radar images to build the motion correspondence between them, and then forecasts the evolution and movement of the storms. Its result is also an important input to other radar-based severe weather detection algorithms, such as hail event detection [1]. Among these processes, storm detection is the most fundamental, and its result directly affects the subsequent storm tracking and forecasting. In the past decades, much research has been done on automatic storm detection based on radar data [2,3,4,5,6]. Early researchers focused on 2-D radar images. Crane groups 2-D storms into "volume cells" through the vertical association of storms in successive images [3], which was a great progress that transferred the research focus from the 2-D to the 3-D case. Dixon and Wiener [7], and Johnson et al. [8] developed two well-known algorithms, TITAN and SCIT. However, both of them are apt to
wrongly identify adjacent storms as one storm, which is called false merger. Meanwhile, the TITAN algorithm is unable to identify individual storms within a cluster of storms because it uses a single reflectivity threshold. The SCIT algorithm is capable of isolating storms from a cluster of storms using several reflectivity thresholds, but it drops the results identified by the lower thresholds, which causes the loss of the internal structure information of storms. To overcome these difficulties, we introduce a novel approach to detect 3-D storms which uses erosion and dilation in a special way. The erosion operation is used first to identify adjacent storms. Then the dilation operation is performed when using gradually increased thresholds to detect storms; the dilation is stopped when the sub-storm edges touch each other. This keeps the internal structure information of sub-storms when isolating storms from a cluster of storms.
2 The Input Radar Image Data
The basic principles of Doppler radar are shown in Fig. 1 [8].
Fig. 1. Illustration of basic principles of Doppler radar
The radar scans from the lowest elevation angle and gradually increases the angle. On each elevation angle, the interval between adjacent radials is 1° and the interval between adjacent sampling points along a radial is 1 km, so the output of a scan at one elevation angle is a 2-D image. All these 2-D images constitute a 3-D image. The time interval between two successive 3-D images is typically 5–6 minutes. The remote sensing data obtained by the radar include the reflectivity factor (dBZ), radial velocity and spectrum width; the storm detection algorithm only uses reflectivity data. As the original polar coordinates are inconvenient for analysis, the reflectivity data are often interpolated onto a Cartesian grid as shown in Fig. 2 [9].
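The interpolation step could look roughly like the following sketch, which ignores earth curvature and beam propagation effects; the grid parameters and function names are assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.interpolate import griddata

def polar_to_cartesian(refl, elev_deg, az_deg, rng_km, grid_xy_km, grid_z_km):
    """Interpolate radar reflectivity from (elevation, azimuth, range) samples
    onto a regular Cartesian grid. Simplified: no earth curvature or refraction.

    refl       : reflectivity samples (dBZ), shape (n_elev, n_az, n_rng)
    elev_deg   : elevation angles; az_deg: azimuths; rng_km: ranges along a radial
    grid_xy_km : 1-D array of x and y coordinates of the target grid (e.g. 1-km spacing)
    grid_z_km  : 1-D array of height levels of the target grid
    """
    el, az, rg = np.meshgrid(np.radians(elev_deg), np.radians(az_deg),
                             rng_km, indexing="ij")
    x = rg * np.cos(el) * np.sin(az)          # east
    y = rg * np.cos(el) * np.cos(az)          # north
    z = rg * np.sin(el)                       # height above the radar
    pts = np.column_stack([x.ravel(), y.ravel(), z.ravel()])
    gx, gy, gz = np.meshgrid(grid_xy_km, grid_xy_km, grid_z_km, indexing="ij")
    cart = griddata(pts, refl.ravel(), (gx, gy, gz), method="nearest")
    return cart  # shape (len(grid_xy_km), len(grid_xy_km), len(grid_z_km))
```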
Fig. 2. Schematic layout of the 3-D image
3 The Basic Detection Algorithm and Its Shortcomings
Similar to TITAN, an experimental storm is defined as a 3-D contiguous region that exhibits reflectivity above a given threshold (Tzmin) and whose volume exceeds a threshold (Tv). The 3-D clustering technique is the fundamental part of the detection algorithm. In short, the clustering process can be divided into three stages: 1-D, 2-D and 3-D [7].
1) Cluster contiguous sequences of points (referred to as runs) in every row for which the reflectivity exceeds Tzmin. There are 15 such runs in Fig. 3.
2) Group runs that are adjacent on the same plane into 2-D storm components. This is 2-D clustering. In this example, storm 1 comprises runs 1-6; storm 2 comprises
Fig. 3. Example of storm runs - 2D case. Shading indicates grid points where the reflectivity exceeds Tzmin. Different shades indicate different storms.
runs 7, 8, and 10; storm 3 comprises runs 9, 11, 13 and 14; storm 4 contains only run 12; and storm 5, only run 15.
3) Group the 2-D storm components on different planes into 3-D storms. Then calculate the properties of each identified storm, such as the centroid, volume and projected area.
The output of this step is the set of 3-D storms, which is the final result of TITAN. However, it may suffer from false merger and is unable to isolate storms from a cluster of storms.
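The run-based clustering described above amounts to connected-component labeling of the thresholded 3-D grid; a minimal sketch with scipy.ndimage follows, where the connectivity and the threshold values are illustrative assumptions, not those of TITAN or SCIT.

```python
import numpy as np
from scipy import ndimage

def detect_storms(refl, t_zmin=40.0, t_vol=30.0, voxel_km3=1.0):
    """Basic single-threshold storm detection: label 3-D connected regions with
    reflectivity above t_zmin and keep those whose volume exceeds t_vol (km^3).

    refl : 3-D reflectivity grid (dBZ); a 6-connected neighborhood is assumed.
    """
    mask = refl >= t_zmin
    structure = ndimage.generate_binary_structure(3, 1)   # 6-connectivity
    labels, n = ndimage.label(mask, structure=structure)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = {i + 1 for i, s in enumerate(sizes) if s * voxel_km3 > t_vol}
    # Zero out components that are too small
    out = np.where(np.isin(labels, list(keep)), labels, 0)
    return out  # labeled grid: 0 = background, k = storm id
```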
4 Storm Detection Using Erosion and Dilation
Our approach applies the erosion operation to solve the problem of false merger. In order to isolate storms from a cluster of storms, a multi-threshold detection method embedded with the dilation operation is used. Finally, erosion and dilation are combined compactly.
4.1 Erosion
False merger of adjacent storms may occur when there are two or more adjacent storms in convective weather, which causes difficulties in automatic storm detection. Fig. 4 is a diagram of the false merger of two adjacent storms.
Fig. 4. Diagram of false merger of 2 adjacent storms
There is only a weak connection between the two adjacent storms in Fig. 4, which should be treated as two separate storms. However, these two storms will be identified as one storm by the TITAN algorithm, which results in "false merger". False merger directly affects the subsequent tracking task and may cause tracking and forecasting failure. The erosion operation of mathematical morphology can be used to tackle the false merger problem [9]. After transforming the detection results into a binary image, the erosion operator erodes the boundary of the storms using a structuring element [10]. In addition, as the horizontal scale of storms is much larger than the vertical scale – the height of the troposphere is only about 10 km – false merger mainly occurs in the horizontal
scale, and it is better for the vertical structure of the storms to be left untouched by the erosion operation. So the erosion of a 3-D storm is divided into two steps. First, perform the erosion operation in the 2-D horizontal plane. As the composite reflectivity (CR) image represents the horizontal structure of storms well, it is used as the 2-D horizontal plane, and the erosion operation is carried out on the CR image. Here, "composite" means that each point of the CR image holds the maximum reflectivity of the corresponding vertical column. Second, map the 2-D result back to the 3-D space. The eroded horizontal CR image needs to be mapped back to the 3-D space, and the projection is performed layer by layer. As shown in Fig. 5, all points inside the ith layer of the original storm are projected onto the eroded CR image. Those points which fall into the effective region of the eroded CR image are kept, and the other points are discarded.
Fig. 5. Mapping the 2D erosion result to 3D space
At the end of this stage, the clustering algorithm is applied again to the projected 3-D storms. A storm like the case in Fig. 4, which is falsely identified as one storm by TITAN, can then be successfully divided into two storms, as shown in Fig. 6, and the false merger problem is solved. If the storm is still identified as one storm, the erosion result is discarded and the original result is used. It should be noted that the above operations are performed on the storms one by one, rather than on all storms simultaneously.
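A minimal sketch of this two-step erosion for a single storm, using scipy.ndimage, is given below; the structuring-element size is an assumption, not a value given in the paper.

```python
import numpy as np
from scipy import ndimage

def erode_storm_horizontally(storm_mask, refl, t_zmin=40.0, struct_size=3):
    """Erode one 3-D storm in the horizontal plane via its composite reflectivity
    (CR) image and map the result back to 3-D.

    storm_mask : 3-D boolean mask of one storm, axes (x, y, z)
    refl       : 3-D reflectivity grid of the same shape (dBZ, float)
    """
    # CR image: maximum reflectivity in each vertical column, restricted to the storm
    cr = np.where(storm_mask, refl, -np.inf).max(axis=2)
    cr_mask = cr >= t_zmin
    # 2-D erosion of the CR footprint
    eroded_cr = ndimage.binary_erosion(cr_mask,
                                       structure=np.ones((struct_size, struct_size)))
    # Map back layer by layer: keep only storm points whose column survives the erosion
    eroded_3d = storm_mask & eroded_cr[:, :, None]
    return eroded_3d
```

Re-running the 3-D clustering of Section 3 on the returned mask then shows whether the storm splits into two components.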
Fig. 6. The outputs of erosion
4.2 Multi-threshold Detection Embedded with Dilation
There are usually many adjacent storms (a cluster of storms) in severe convective systems. In this case, our goal is to isolate the storms from the cluster while keeping the internal structure of the sub-storms as much as possible. TITAN is not capable of doing this and identifies the cluster of storms as a single storm (see the dashed line regions in Fig. 7).
Fig. 7. A cluster of storms is identified as a single storm
The SCIT algorithm uses a multi-threshold (30, 35, 40, 45, 50, 55, 60 dBZ) method in order to identify the storms within the cluster of storms. SCIT uses the lowest threshold to identify storms first and then increases the threshold gradually. The result of the higher reflectivity threshold (solid line regions in Fig. 7) is accepted, while that of the lower threshold (dashed line regions in Fig. 7) is abandoned. This abandonment is reasonable for cases such as Fig. 7(a), but obviously not for Fig. 7(b), because too much information is lost. The dilation operation of mathematical morphology can be used to tackle this problem: during multi-threshold detection, a dilation operation is performed on the storms obtained at the higher threshold detection stage (sub-storms). The dilation is stopped when the storm edges touch each other or touch the edges of the previous storms identified by the lower threshold (parent-storms). A similar method has been used by the latest version of TITAN, which uses only two thresholds; obviously, using two thresholds is not sufficient to isolate storms from a cluster of storms. The details of multi-threshold detection and dilation ("multi-threshold dilation" for short) are as follows:
(1) Compute the CR image of each 3-D storm. Note that the whole dilation procedure is applied to the storms one by one, not simultaneously.
(2) Identify storms using the higher threshold. In Fig. 8(a), storms A and B (dashed lines) are identified by the lower threshold, and sub-storms C, D, E and F (solid lines) are identified by the higher threshold.
(3) Ignore sub-storms that are too small (e.g., F in Fig. 8(a)). Then:
a) If there are at least two sub-storms which are larger than the area threshold (Tarea), dilate the sub-storms. The dilation is stopped when the
storm edges touch each other or touch the edges of the previous storms identified by the lower threshold. For example, storms D' and E' in Fig. 8(b) are the dilated storms.
b) If the condition in (a) is not satisfied, do not divide the parent-storm. For example, there is only one sub-storm C in parent-storm A in Fig. 8(a), so in Fig. 8(b) sub-storm C' keeps the position of parent-storm A.
(4) Map the above 2-D result back to the 3-D space.
(5) Repeat (1) to (4) using the thresholds Tzmin + 5×i, i = 1, 2, ..., N.
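Step (3a) can be sketched in 2-D as an iterative, mutually constrained dilation of the labeled sub-storms inside the parent footprint; the connectivity and the conflict handling below are assumptions about details the paper does not spell out.

```python
import numpy as np
from scipy import ndimage

def constrained_dilation(sub_labels, parent_mask, max_iter=1000):
    """Grow labeled sub-storms inside their parent storm, stopping where the
    growing regions meet each other or the parent boundary.

    sub_labels  : 2-D int array, 0 = unassigned, k > 0 = sub-storm k
    parent_mask : 2-D bool array, footprint of the parent storm
    """
    labels = sub_labels.copy()
    struct = ndimage.generate_binary_structure(2, 1)      # 4-connected dilation
    for _ in range(max_iter):
        claims = np.zeros_like(labels)
        conflict = np.zeros(labels.shape, dtype=bool)
        for k in np.unique(labels[labels > 0]):
            ring = ndimage.binary_dilation(labels == k, structure=struct)
            ring &= parent_mask & (labels == 0)            # only free parent pixels
            conflict |= ring & (claims > 0)                # touched by another sub-storm
            claims = np.where(ring & (claims == 0), k, claims)
        claims[conflict] = 0                               # contested pixels stay unassigned
        grown = np.where(claims > 0, claims, labels)
        if np.array_equal(grown, labels):                  # converged: edges touch everywhere
            break
        labels = grown
    return labels
```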
Fig. 8. Results of multi-threshold detection and dilation. (a) Sub-storms (C, D, E, F) detected by the second threshold. (b) Results after dilation of (a). (c) Results of detection using the third threshold on (b). (d) Results after dilation of (c).
Fig. 8 shows the results of detection and dilation using gradually higher thresholds. Comparing Fig. 8(a) and (d), we can see that the storms in the cluster are successfully isolated and the internal structure of the sub-storms is kept as much as possible. This is achieved by dilating the sub-storms identified by the higher thresholds during the multi-threshold detection stages. It should be noted that multi-threshold dilation can partially handle the problem of false merger, but it will fail if the distribution of reflectivity inside the storms is uniform.
4.3 Combine Erosion and Multi-threshold Dilation
The above steps are performed separately to handle the problems of false merger and of isolating storms from a cluster of storms. Now we combine them as follows. Similar to TITAN and SCIT, we use the lowest threshold (Tzmin) to identify storms in the first step. Then we perform the erosion operation on these storms; false merger is eliminated at the end of this step. Next, we use the second threshold to detect
storms with higher intensity and then dilate these sub-storms. The dilation is repeated with each higher threshold Tzmin + 5×i, i = 1, 2, ..., N. The aforementioned processes constitute the key idea of our approach and are summarized in Fig. 9.
- Apply the single-threshold detection algorithm with the lowest threshold Tzmin.
- Erode the 3-D storms obtained above.
- For i = 2, ..., N, do: identify sub-storms with the ith threshold Tzmin + 5×i, and dilate the sub-storms.
Fig. 9. Algorithm of automatic storm detection
By combining the strengths of the erosion and dilation operations, false merger is eliminated first, and the internal structure of the sub-storms is kept when isolating storms from a cluster of storms. We thus obtain more accurate detection results, which also benefits the subsequent tracking and forecasting process.
5 Experiment
There was severe convective weather in Beijing on 31 May 2005; hail hit the city several times. The Tianjin radar station observed the entire event, and we use the Tianjin radar data for the comparison between the different detection methods. Generally speaking, the reflectivity threshold for convective storms can be set between 30 and 40 dBZ [7]. In this experiment, we choose 40 dBZ as the lowest threshold because the weather was severely convective. The CR image is used to show the detection results. Fig. 10 displays two examples, at 17:35 and 18:17 on 31 May 2005. There was a cluster of storms at 17:35 which is difficult to identify: TITAN identifies it as one storm (Fig. 10(a)). Fig. 10(b) is the result of SCIT. In comparison with TITAN, SCIT does isolate storms from the cluster; but in comparison with Fig. 10(c), which is the result of our approach, it is obvious that storm 3 has been lost. As for the example at 18:17, the connection area between storm 4 and storm 5 is very weak. This is a typical case of false merger, and storm 4 and storm 5 should be identified as two storms. But SCIT discards storm 5 because it does not meet the 45 dBZ threshold; this even makes the tracking of storm 5 totally impossible because it is not detected at all. TITAN treats storm 4 and storm 5 as one storm, which causes the problem of false merger and also makes the tracking of storm 5 impossible.
Fig. 10. The outputs of TITAN (a, d), SCIT (b, e), and our approach (c, f) at 17:35 and 18:17, respectively
Fig. 10(c) and (f) show the results of our approach. False merger is successfully eliminated, and the storms within the cluster are identified well: storms 3 and 5 are detected correctly. Our approach obtains the correct results in both examples.
6 Conclusion
Accurate storm detection is the prerequisite of storm tracking and forecasting. The detection algorithm of TITAN is easy to implement but unable to identify storms within a cluster of storms. Although SCIT is capable of isolating storms from a cluster of storms, it has the deficiency that the internal structure of the storms is destroyed. Meanwhile, both of them have the problem of false merger. We introduce a novel method which combines the strengths of the erosion and dilation operations. The experimental results show that this method can successfully recognize false merger and, when isolating storms from a cluster of storms, can also keep the internal structure of the sub-storms. Both TITAN and our approach are able to run automatically once the lowest threshold (Tzmin) is set. However, Tzmin still needs to be set manually beforehand, and different Tzmin values influence the final detection result; for example, in the previous experiment, the detection result with Tzmin set to 40 dBZ is better than with 30 dBZ. The
same phenomenon is also mentioned in [1]. Therefore, how to choose Tzmin automatically and on-line is one of the future works.
References
1. Paul, J., Don, B., Rod, P.: The S2K Severe Weather Detection Algorithms and Their Performance. Weather and Forecasting 19, 43–63 (2004)
2. Wilson, J.W., Crook, N.A., Mueller: Nowcasting thunderstorms: a status report. Bull. Am. Meteorol. Soc. 79, 2079–2099 (1998)
3. Crane, R.K.: Automatic cell detection and tracking. IEEE Trans. Geosci. Electron. GE-17, 250–262 (1979)
4. Bjerkaas, C.L., Forsyth, E.E.: Real-time automated tracking of severe thunderstorms using Doppler weather radar. In: Preprints, 11th Conf. on Severe Local Storms, pp. 573–576 (1979)
5. Austin, G.L., Bellon, A.: Very-short-range forecasting of precipitation by objective extrapolation of radar and satellite data. In: Browning, K. (ed.) Nowcasting, pp. 177–190. Academic Press, London (1982)
6. Rosenfeld, D.: Objective method for analysis and tracking of convective cells as seen by radar. J. Atmos. Oceanic Technol. 4, 422–434 (1987)
7. Dixon, M., Wiener, G.: TITAN: Thunderstorm Identification, Tracking, Analysis, and Nowcasting – A Radar-based Methodology. J. Atmos. Oceanic Technol. 10, 785–797 (1993)
8. Johnson, J.T., MacKeen, P.L., Witt, A.: The Storm Cell Identification and Tracking algorithm: an enhanced WSR-88D algorithm. Weather and Forecasting 13, 263–276 (1998)
9. Dixon, M.: Automated storm identification, tracking and forecasting – A radar-based method. PhD Thesis, University of Colorado (1994)
10. Gonzalez, R.C., Woods, R.E.: Digital Image Processing. Publishing House of Electronics Industry, Beijing (2002)
A New Approach for Vehicle Detection in Congested Traffic Scenes Based on Strong Shadow Segmentation Ehsan Adeli Mosabbeb1, Maryam Sadeghi2, and Mahmoud Fathy1 1
Computer Eng. Department, Iran University of Science and Technology, Tehran, Iran 2 Computer Science Department, Simon Fraser University, Vancouver, Canada
[email protected],
[email protected],
[email protected]
Abstract. Intelligent traffic surveillance systems are assuming an increasingly important role in highway monitoring and city road management systems. Recently a novel feature was proposed to improve the accuracy of object localization and occlusion handling; it is based on the strong shadow under the vehicle in real-world traffic scenes. In this paper, we use statistical parameters of each frame to detect and segment these shadows. To demonstrate the robustness and accuracy of the proposed approach, we present results on real traffic images taken from both outdoor highways and city roads, including high congestion, noise, clutter, snow and rain, remarkable cast shadows, bad illumination conditions and occlusions.
1 Introduction
Increasing congestion on freeways has generated an interest in new vehicle detection technologies such as video image processing. Existing commercial image processing systems work well in free-flowing traffic, but they have problems with congestion, occlusion, shadows and lighting transitions. This paper addresses the problem of vehicle segmentation in traffic images including vehicle occlusion and cast shadows. Some of the related works for analyzing surveillance images are based on background subtraction methods [1, 2], and some use an extended Kalman filter [3, 4]. The early attempts to solve the occlusion problem involved simple thresholding, while later methods applied energy minimization and motion information [5, 6]. More recently, motion segmentation methods based on active contours [7] have been proposed. Object tracking as spatio-temporal boundary detection has been proposed in [8]. The advantage of part-based methods is shown in [9], and the algorithm known as Predictive Trajectory Merge-and-Split (PTMS) [10] has been developed to detect partial or complete occlusions during object motion. In [11] a new low-cost method was presented for occlusion handling that uses the strong shadow as a key feature for vehicle detection, whereas shadow detection techniques had previously been employed for shadow removal from background and foreground. The problem of shadow detection has been increasingly addressed over the past years. Shadow detection techniques can be classified into two groups: model-based and property-based techniques. Model-based techniques are designed for specific
applications, such as aerial image understanding [12] and video surveillance [13]. Luminance information is exploited in early techniques by analyzing edges [14] and texture information [15]. Luminance, chrominance and gradient density information is used in [16], and color information in [17]. A physics-based approach to distinguish material changes from shadow boundaries in chromatic still images is presented in [18]. Cavallaro et al. [19] proposed shadow-aware object-based video processing. A classification of color edges by means of photometric invariant features into shadow-geometry edges, highlight edges and material changes is proposed in [20]. Using strong shadow information as a feature for vehicle detection was initially discussed in [11]; in that first attempt, it was found that the area under a vehicle is distinctly darker than any other area on an asphalt-paved road.
2 Shadow Analysis
A cast shadow is the area projected by an object in the direction of the direct light. Shadows are characterized by two types of properties: photometric and geometric. Geometric properties depend on the type of obstruction and the position of the light source. Photometric properties determine the relation between the pixel intensity of the background under illumination and under shadow. Geometric properties need a priori information such as the object size or the direction of the light rays. We can model the geometric properties of shadow and illumination with the BDRF (Bi-Directional Reflectivity Function) (Figure 1(a)).
Fig. 1. (a) BDRF Model, (b) Camera viewpoint for feature detection
Generally we define a BDRF as R(λ, φi, θi, φv, θv), which relates incoming light in the direction (φi, θi) to outgoing light in the direction (φv, θv). The BDRF is the ratio of outgoing intensity to incoming energy:
R(λ, φi, θi, φv, θv) = Iv(λ, φi, θi, φv, θv) / Ei(φi, θi)
(1)
where the relationship between the incoming energy and the incoming intensity is Ei(φi, θi) = Ii(φi, θi) cos(θi)
(2)
Figure 1(b) shows a view of the camera position and the strong shadow pixels. In strong shadow pixels, due to the lack of a light source and the low value of the incoming energy, there is no considerable amount of outgoing intensity. Therefore the shadow pixels in the under-vehicle region have the lowest intensity among the image pixels. We can demonstrate this feature using the photometric properties [13, 21]: Lr(λ, p) = La(λ) + Lb(λ, p) + Ls(λ, p)
(3)
where La(λ), Lb(λ, p) and Ls(λ, p) are the ambient reflection term, the body reflection term and the surface reflection term, respectively, and λ is the wavelength. Lr(shadow)(λ, p) = La(λ)
(4)
Ci(x, y) = ∫ E(λ, x, y) Sci(λ, x, y) dλ
(5)
Sci(λ) ∈ {SR(λ), SG(λ), SB(λ)}
(6)
Ci(x, y)lit = ∫ α (La(λ) + Lb(λ, p) + Ls(λ, p)) Sci(λ) dλ
(7)
Ci(x, y)shadow = ∫ α (La(λ)) Sci(λ, x, y) dλ
(8)
in which C(x, y)shadow = (Rshadow, Gshadow, Bshadow). It follows that each of the three RGB color components, if positive and not zero, decreases when passing from a lit region to a shadowed one, that is Rshadow < Rlit, Gshadow < Glit, Bshadow < Blit
(9)
So this region has different spectral properties. Also, other shadows in the traffic scene retain the Ls(λ, p) term and have higher intensity than the under-vehicle shadows.
3 Our Proposed Approach
The focus of this work is on the problem of strong shadow segmentation for on-road vehicle detection and occlusion handling. The strong shadow feature was initially presented in [11]. There, contrary to previous algorithms which were designed for shadow removal in order to detect foreground objects, the shadow was used as a useful feature to detect vehicles. The detection of shadows was done by converting to gray level with different colormaps, thresholding the image, and applying some morphological operations. We propose a new method for the segmentation of strong shadow pixels that avoids the previous problems and achieves vehicle detection invariant to weather and lighting conditions on both wet and dry roads. Cast shadows are effectively removed, while strong shadows remain. Since the strong shadows underneath vehicles have various sizes in different regions of the image, due to the depth of the perspective images, accurate parameters need to be set to avoid data loss in far regions of the image; determining these critical values was the main problem in [11]. To solve this problem we present a new method that uses mean and standard deviation information. This section presents the proposed vehicle detection system, illustrated in Figure 2.
Fig. 2. Diagram of our approach: image acquisition, pre-processing, background modeling and subtraction, mean and standard deviation filtering, mapping, post-processing, and localization
3.1 Pre-processing
We have found that this new method of vehicle detection can be significantly improved by means of simple content-adapted techniques, namely brightness and contrast improvement according to the content when necessary. Adverse weather conditions cause low contrast for all of the pixels in an image. Because of the differences in contrast and brightness under different weather conditions, the contrast of the intensity image is enhanced by transforming the values using Contrast-Limited Adaptive Histogram Equalization (CLAHE). CLAHE operates on small regions in the image, called tiles, rather than on the entire image (Figure 3).
Fig. 3. (a) Real traffic scene in the rainy weather condition. (b) Histogram of original image. (c) Enhanced image. (d) Histogram of enhanced image.
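A minimal sketch of this enhancement step with OpenCV's CLAHE implementation follows; the clip limit and tile grid size are illustrative defaults, not values reported in the paper.

```python
import cv2

def enhance_frame(gray_frame, clip_limit=2.0, tile_size=8):
    """Contrast-Limited Adaptive Histogram Equalization (CLAHE) on a gray-scale frame.

    The clip limit and tile grid size are illustrative assumptions.
    """
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(tile_size, tile_size))
    return clahe.apply(gray_frame)

# Typical use:
#   frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
#   enhanced = enhance_frame(frame)
```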
3.2 Background Modeling and Subtraction
Sadeghi and Fathy [11] masked the road in each frame to determine the processing region; this was done to avoid other probable shadow-like areas in the image. Here, instead, we model the background and extract the moving objects. The strong shadow underneath a vehicle moves from frame to frame, like the vehicle itself, so in this way we omit all other probable static dark areas in the image. The background modeling used here is the one proposed by W4, a real-time visual surveillance system [22]. In order to distinguish moving pixels from stationary pixels, we first apply a pixel-wise median filter over time to several portions of the video (in this
experiment 100 frames, about 3.3 seconds). Only the stationary pixels are chosen as background pixels. Let A be a video sequence containing N consecutive frames, A^k(i, j) be the intensity of pixel (i, j) in the kth frame, and σ(i, j) and μ(i, j) be the standard deviation and median values of pixel (i, j) over all frames in A, respectively. The background model B(i, j) = [m(i, j), n(i, j), d(i, j)] (with m(i, j) the minimum intensity, n(i, j) the maximum intensity, and d(i, j) the maximum intensity difference between consecutive frames observed during the training period) is calculated as [22]:
B(i, j) = [m(i, j), n(i, j), d(i, j)] = [ min_z A^z(i, j),  max_z A^z(i, j),  max_z |A^z(i, j) − A^{z−1}(i, j)| ]
(10)
where z ranges over the frames satisfying |A^z(i, j) − μ(i, j)| ≤ 2σ(i, j)
(11)
Only stationary pixels are selected as background pixels. This is because when a moving object passes across a pixel, the intensity of that pixel decreases or increases sharply; if we then choose the median value of that pixel over time, we can model the image without any moving object, which is the background image we are looking for. After the training phase, the initial background model B(i, j) for each pixel is obtained. We convert each frame to gray-scale before feeding it into the W4 algorithm. The result is the background, which can be used in the background subtraction process.
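A simplified sketch of this training stage, following Eqs. (10) and (11), is shown below; the handling of unstable frames is an assumption and the function name is ours.

```python
import numpy as np

def train_background_model(frames):
    """W4-style background model from a sequence of gray-scale training frames.

    frames : array of shape (num_frames, height, width)
    Returns (m, n, d, median) per pixel: min, max, max interframe difference, median.
    """
    med = np.median(frames, axis=0)                    # mu(i, j)
    std = np.std(frames, axis=0)                       # sigma(i, j)
    # Keep, per pixel, only frames within 2*sigma of the median (Eq. (11))
    stable = np.abs(frames - med) <= 2.0 * std
    masked = np.where(stable, frames, np.nan)
    m = np.nanmin(masked, axis=0)                      # minimum intensity
    n = np.nanmax(masked, axis=0)                      # maximum intensity
    diff = np.abs(np.diff(frames.astype(np.float64), axis=0))
    d = np.nanmax(np.where(stable[1:], diff, np.nan), axis=0)
    return m, n, d, med
```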
Fig. 4. (a) Extracted background image. (b) A sample frame of the video sequence. (c) Background subtraction, subtracting frame image from the background. (d) Binary Image of c.
For each frame, background subtraction yields an image containing only the moving objects in that scene. As we are seeking the strong shadow under the vehicle, we subtract the frame image from the background, truncating out-of-range pixel values. In a gray-scale image, the values of the strong shadow pixels, being very dark, are likely the lowest, while the same pixels in the background have larger values; subtracting the frame image from the background image therefore makes the dark regions stand out, and the strong shadow is a part of these dark pixels. After the subtraction we convert the resulting image to a binary image. Figure 4 illustrates an example of this background modeling approach. The result of the background subtraction phase is a binary image S:
S(i, j) = 1 if A(i, j) is part of the dark region, and 0 otherwise
(12)
3.3 Filtering and Mapping
In [11] the authors converted the images to gray level with different color maps, thresholded the image, and applied some morphological processing. The intensity of the shadow pixels depends on the illumination of the image, which in turn depends on the weather conditions; therefore the thresholds are not fixed, which causes implementation difficulties. The different threshold values used for the image-to-binary conversion were a problem. Facing this problem, multi-level processing is used to get accurate results across the depth of perspective images. In this work we use the information given by the mean and standard deviation in the area around each image pixel to segment the strong shadow pixels. Rainy weather or bad illumination conditions make the color of the road pixels darker, but our results have shown satisfying outcomes. Local standard deviation (STD) and mean values of images have been widely used for pattern detection and image segmentation [23]. Here we use these two parameters to detect the strong shadow under each vehicle. A sliding square window is used as the neighborhood element; the length of this window is defined depending on the image perspective and depth. Let the window length be N. The 2D arithmetic mean (μ) and the 2D standard deviation (σ) of the neighborhood around pixel (i, j) of an image I are calculated as:
μ(i, j) = (1/N²) Σ_{k=−⌊N/2⌋}^{⌊N/2⌋} Σ_{l=−⌊N/2⌋}^{⌊N/2⌋} I(i + k, j + l)
(13)
σ(i, j) = [ (1/(N² − 1)) Σ_{k=−⌊N/2⌋}^{⌊N/2⌋} Σ_{l=−⌊N/2⌋}^{⌊N/2⌋} (I(i + k, j + l) − μ(i, j))² ]^{1/2}
(14)
The mean filter smoothes the image with respect to the pixel intensities, and the standard deviation helps to find texture information in an image. In any area of solid texture, σ(i, j) has a very low value, since the variation of the intensities in that area is small, and μ(i, j) is close to the pixel values in that area. So, in the strong shadow areas, the regions of our interest, μ(i, j) and σ(i, j) are expected to be generally low, since dark areas have low intensities and dark solid textures. Pixels with low values of both μ(i, j) and σ(i, j) are therefore likely to be parts of the strong shadow region and are marked as candidates. Figure 5 shows the scaled μ(i, j) and σ(i, j) images of the frame shown in Figure 4(b).
Fig. 5. (a) Image of Means. (b) Scaled image of Standard Deviations.
As discussed, the neighborhood window length must be adapted to the image depth, since vehicles far away in the perspective have smaller shadows; treating near and far vehicles identically loses information. We therefore define three levels in the image: vehicles close to the camera are processed with a large window, vehicles deep in the perspective with a small window, and vehicles in between with a medium-sized window. The next phase, called mapping, integrates the results of all previous steps. We first threshold the μ(i, j) and σ(i, j) matrices and map the results onto the background-subtraction image S(i, j). All pixels satisfying the following condition are dark parts of a moving vehicle and are very likely to belong to the strong shadow:
\mu(i,j) < \text{mean\_threshold}, \quad \sigma(i,j) < \text{std\_threshold}, \quad S(i,j) = 1 \qquad (15)
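A small sketch of this mapping step, combining the thresholded statistics with the mask S of Eq. (12) as described in the text above (low local mean and low local deviation); the threshold names and values are hypothetical placeholders.

```python
import numpy as np

def map_strong_shadow(mu, sigma, S, mean_threshold=40.0, std_threshold=10.0):
    """Combine the filter outputs with the background-subtraction mask.

    A pixel is kept as a strong-shadow candidate when its local mean and
    local standard deviation fall below their thresholds and it also belongs
    to the dark moving region S of Eq. (12).  The threshold names and values
    are illustrative placeholders, not the paper's calibrated settings.
    """
    return ((mu < mean_threshold) &
            (sigma < std_threshold) &
            (S == 1)).astype(np.uint8)

mu = np.array([[30.0, 80.0], [20.0, 35.0]])
sigma = np.array([[5.0, 2.0], [8.0, 30.0]])
S = np.array([[1, 1], [0, 1]], dtype=np.uint8)
print(map_strong_shadow(mu, sigma, S))   # only pixel (0, 0) passes all three tests
```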
Fig. 6. (a) Thresholded mean image. (b) Thresholded standard deviation image. (c) Result of the mapping phase.
Figure 6 shows the thresholded μ(i, j) and σ(i, j) images of Figure 4(b) and the result of mapping using Eq. (15).

3.4 Post-processing and Localization
After the mapping process, each connected component in the binary image should be counted as one vehicle. To remove undesired blobs, erosion and dilation morphological operations are applied. Because shadow sizes differ with depth, multilevel processing is used here as well: the traffic image is divided into three levels, and three different structuring-element sizes are used in the morphological operations. Using a single fixed structuring element would either lose small, far shadows or count spurious shadows as vehicles. After this step, each remaining blob is counted as one vehicle.
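One way the multilevel morphological cleanup could look, assuming three equal horizontal depth bands and illustrative structuring-element sizes; this is a sketch, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import binary_opening, label

def multilevel_cleanup(mask, sizes=(7, 5, 3)):
    """Morphological opening with a different structuring element per depth
    band, followed by blob counting.

    The image is split into three equal horizontal bands: far (top, small
    element), middle, and near the camera (bottom, large element).  The band
    split and the element sizes are illustrative assumptions.
    """
    h = mask.shape[0]
    bounds = [0, h // 3, 2 * h // 3, h]              # far, middle, near bands
    cleaned = np.zeros_like(mask)
    for band, size in zip(range(3), reversed(sizes)):
        top, bottom = bounds[band], bounds[band + 1]
        structure = np.ones((size, size), dtype=bool)
        cleaned[top:bottom] = binary_opening(mask[top:bottom], structure=structure)
    _, n_vehicles = label(cleaned)                   # each remaining blob = one vehicle
    return cleaned, n_vehicles

mask = np.zeros((90, 90), dtype=bool)
mask[70:85, 10:40] = True       # large near-camera blob: survives the opening
mask[5:7, 50:52] = True         # isolated speck in the far band: removed
_, n = multilevel_cleanup(mask)
print(n)                        # 1
```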
Fig. 7. (a) Occluded traffic scene. (b) Result of Mean filter. (c) Result of STD filter. (d) Detected vehicles.
4 Experimental Results

Experimental results and comparisons on real data demonstrate the superiority of the proposed approach, which achieved an average accuracy of 93.94% on completely novel test images. Our tests show that the approach handles occlusion in real traffic images, including highly congested, noisy, cluttered, snowy, and rainy scenes, as well as scenes with remarkable cast shadows and bad illumination. Figure 8 shows a real traffic scene with considerable occlusion.
Fig. 8. (a) Real traffic scene in rainy weather. (b) Image enhanced using contrast-limited adaptive histogram equalization (CLAHE). (c) Result of the mean filter. (d) Detected vehicles.
Fig. 9. (a) Original image containing cast shadows. (b) Background. (c) Result of Mean filter. (d) Result of the mapping phase. (e) Result of post-processing and localization.
Figure 9 illustrates a group of occluded vehicles in congested traffic and rainy weather, which are detected accurately. Finally, Figure 10 shows the result of our proposed approach under bad illumination with remarkable cast shadows; the approach ignores cast shadows precisely.
5 Conclusion and Future Work

In this work we first reviewed a recently proposed feature for vehicle detection, the strong shadow under the vehicle. To detect this region in the video sequence, the image quality is first enhanced and the background is extracted; for each frame, background subtraction and image filtering are then performed. The mean and standard deviation matrices, together with the output of the background subtraction phase, are fed into a mapping process to extract the strong-shadow regions, and a post-processing phase removes noise and undesired regions. We tested the approach on different traffic scenes, adverse weather conditions, and noisy or cluttered images; it gave accurate results while being low-cost and easy to implement, and it also ignores cast shadows on the street. In future work we plan to propose an optimal method for local enhancement with respect to weather and illumination conditions, to make the algorithm more robust, and to apply the mean and STD filters to the output of the subtraction and moving-object detection phases to reduce the time cost. Further work is being done to make the system more practical.
References

1. Gutchess, D., Trajkovics, M., Cohen-Solal: A background model initialization algorithm for video surveillance. In: Proc. of IEEE ICCV 2001, Pt. 1, pp. 744–740 (2001)
2. Javed, O., Shafique, K., Shah, M.: A hierarchical approach to robust background subtraction using color and gradient information. In: Workshop on Motion and Video Computing, pp. 22–27 (2002)
3. Veeraraghavan, H., Masoud, O., Papanikolopoulos, N.: Computer vision algorithms for intersection monitoring. IEEE Trans. Intell. Transport. Syst. 4, 78–89 (2003)
4. Jung, Y., Lee, K., Ho, Y.: Content-based event retrieval using semantic scene interpretation for automated traffic surveillance. IEEE Transactions on ITS 2, 151–163 (2001)
5. Memin, E., Perez, P.: Dense estimation and object-based segmentation of the optical flow with robust techniques. IEEE Trans. Image Process. 7(5), 703–719 (1998)
6. Chang, M., Tekalp, A., Sezan, M.: Simultaneous motion estimation and segmentation. IEEE Trans. Image Process. 6, 1326–1333 (1997)
7. Ristivojević, M., Konrad, J.: Joint space-time motion-based video segmentation and occlusion detection using multiphase level sets. In: IS&T/SPIE Symposium on Electronic Imaging, Visual Communications and Image Processing, San Jose, CA, USA, pp. 18–22 (2004)
8. Mitiche, A., El-Feghali, R., Mansouri, A.-R.: Tracking moving objects as spatio-temporal boundary detection. In: IEEE Southwest Symp. on Image Anal. Interp., pp. 110–206 (April 2002)
9. Nowak, E., Jurie, F.: Vehicle categorization: Parts for speed and accuracy. UJF – INPG, Societe Bertin - Technologies, Aix-en-Provence (2005)
10. Melo, J., Naftel, A., Bernardino, A., Santos-Victor, J.: Viewpoint independent detection of vehicle trajectories and lane geometry from uncalibrated traffic surveillance cameras. In: ICIAR Conf. on Image Analysis and Recognition, Porto, Portugal, September 29–October 1 (2004)
11. Sadeghi, M., Fathy, M.: A low-cost occlusion handling using a novel feature in congested traffic images. In: Proceedings of IEEE ITSC 2006, Toronto, pp. 522–527 (2006)
12. Huertas, A., Nevatia, R.: Detecting buildings in aerial images. Comput. Vis. Graph. Image Process. 41, 31–152 (1988)
13. Yoneyama, A., Yeh, C.H., Kuo, C.: Moving cast shadow elimination for robust vehicle extraction based on 2D joint vehicle/shadow models. In: IEEE Conf. on Advanced Video and Signal Based Surveillance, Miami, USA (July 2003)
14. Scanlan, J.M., Chabries, D.M., Christiansen, R.: A shadow detection and removal algorithm for 2-D images. In: Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP), pp. 2057–2060 (1990)
15. Adjouadj, M.: Image analysis of shadows, depressions, and upright objects in the interpretation of real world scenes. In: IEEE Int. Conf. on Pattern Recognition (ICPR), pp. 834–838 (1986)
16. Fung, G.S.K., Yung, N.H.C., Pang, G.K.H., Lai, A.H.S.: Effective moving cast shadow detection for monocular color image sequences. In: Proc. 11th ICIAP, pp. 404–409 (2001)
17. Nadimi, S., Bhanu, B.: Moving shadow detection using a physics-based approach. In: Proc. IEEE Int. Conf. Pattern Recognition, vol. 2, pp. 701–704 (2002)
18. Gershon, R., Jepson, A., Tsotsos, J.: Ambient illumination and the determination of material changes. Journal of the Optical Society of America A 3(10), 1700–1707 (1986)
19. Cavallaro, A., Salvador, E., Ebrahimi, T.: Shadow-aware object-based video processing. IEE Proc.-Vis. Image Signal Process. 152(4), 398–406 (2005)
20. Gevers, T., Stokman, H.: Classifying color edges in video into shadow-geometry, highlight, or material transitions. IEEE Trans. on Multimedia 5(2), 237–243 (2003)
21. Forsyth, D., Ponce, J.: Computer Vision: A Modern Approach. Prentice-Hall, NY (2003)
22. Haritaoglu, I., Harwood, D., Davis, L.S.: W4: Real-time surveillance of people and their activities. IEEE Trans. on Pattern Analysis and Machine Intelligence 22(8), 809–830 (2000)
23. Wolf, C., Jolion, J.-M., Chassaing, F.: Text localization, enhancement and binarization in multimedia documents. In: Proc. of the ICPR 2002, vol. 2, pp. 1037–1040 (August 2002)
A Robust Method for Near Infrared Face Recognition Based on Extended Local Binary Pattern

Di Huang¹, Yunhong Wang¹, and Yiding Wang²

¹ School of Computer Science and Engineering, Beihang University, Beijing, 100083, China
² Graduate University of Chinese Academy of Science, Beijing, 100049, China
[email protected],
[email protected]
Abstract. Face recognition is one of the most successful applications of biometric authentication. However, methods reported in the literature still suffer from problems that hold back further progress. This paper presents a robust method for face recognition under near infrared (NIR) lighting based on the Extended Local Binary Pattern (ELBP). It addresses the problems caused by illumination variation, since NIR images are insensitive to changes in ambient lighting and ELBP can extract adequate texture features from NIR images. A global feature vector is formed by combining the local feature vectors; because the global feature vectors extracted by the ELBP operator have very high dimension, a classifier is trained with the AdaBoost algorithm to select the most representative features for better performance and dimensionality reduction. Compared with the huge number of features produced by the ELBP operator, only a small subset is selected, which saves considerable computation and time. Comparison with the results of classic algorithms demonstrates the effectiveness of the proposed method. Keywords: Face recognition, Near Infrared (NIR), Extended Local Binary Pattern (ELBP), local feature vector, global feature vector, AdaBoost.
1 Introduction

As one of the most representative applications of biometrics, face recognition has attracted increasing attention over the past several decades [1]. However, a few difficult problems, such as variations in lighting, expression, and pose, still hinder further progress. Among these, lighting variation is the one that most challenges the robustness of a face recognition algorithm, and many efforts have been made to solve it. Compared with other solutions, face recognition under NIR lighting brings a new and efficient dimension and is more practical in real conditions for four main reasons. First, with active NIR lighting that provides enough NIR intensity to remove the influence of ambient lighting, NIR face recognition is robust to variations of ambient illumination. Second, compared with thermal IR face
recognition, NIR face recognition is less affected by ambient temperature and by the subject's emotional and health condition. Third, compared with 3D face recognition, NIR face recognition costs less. Finally, NIR face recognition has a wide range of applications, since it works both in daytime and at night.

Face detection and recognition using NIR images has been studied since the beginning of this century. Dowdall et al. [2] presented a NIR face detection method. Pan et al. [3] presented a NIR face recognition method using images captured in multiple spectral bands. Stan Z. Li et al. [4] recently presented illumination-invariant face recognition using NIR images based on the LBP method. However, there is still room for improving the accuracy of NIR face recognition if more applicable features can be extracted from the NIR images.

Recently, feature-based approaches have been used in face recognition for their obvious advantages: they are robust to expression, illumination, and pose variations and require little or no training data. Local Binary Pattern (LBP) is a typical feature-based method and was applied to face recognition by Timo et al. [5] soon after being proposed. However, the LBP operator only encodes the signs of the differences between neighboring pixels; the exact gray-value differences, called hidden information in this paper, which are also important for describing a face, are ignored.

Considering the above, an Extended Local Binary Pattern (ELBP) method is proposed for NIR face recognition in this paper. ELBP derives from 3DLBP, which was first proposed for 3D face recognition [6], and is modified here for NIR face recognition. With ELBP, hidden information can be extracted to obtain a more comprehensive description of the NIR face image. For each NIR image, a global texture feature is built by combining the local texture vectors extracted by the ELBP operator. As the global feature vectors usually have very high dimension, a classifier is trained with the AdaBoost algorithm to select the most representative features for better performance and dimensionality reduction.

The remainder of the paper is organized as follows. Section 2 introduces the hardware and database. Section 3 presents the ELBP method. Section 4 describes feature selection based on AdaBoost. Experiments and results are given in Section 5, followed by discussion, conclusions, and future work in Section 6.
2 Hardware and Database

Since there is no publicly available NIR database, we designed a device to obtain NIR images of good quality for face recognition. It consists of a PC, a camera, an active NIR lighting unit, and an optical sensor. The computer controls the whole device. The camera carries a special optical filter that blocks visible light, ensuring that visible light does not influence the collected images. The active NIR lighting consists of a number of diodes with a wavelength of 850 nm, which is almost invisible to human eyes. The optical sensor senses the ambient NIR intensity and controls the active NIR lighting to provide a variable NIR intensity, ensuring that the camera works under a constant and proper NIR lighting condition.
The NIR face image database contains 1200 images from 60 subjects, with a gender proportion of 1:1 and moderate pose and expression variations, captured at different times within one week. After wavelet denoising and normalization, we obtain images of the same size, 80×80 pixels. Fig. 1 shows some samples from the database.
Fig.1. Samples in database
3 Extended Local Binary Pattern Based Face Description

3.1 Extended Local Binary Pattern

The Local Binary Pattern (LBP) algorithm was first proposed by Ojala et al. [7] for texture description. Local binary patterns are fundamental properties of local image texture, and their occurrence histogram has proven to be a very powerful texture feature. LBP has been successfully used in 2D face recognition by Timo et al. [5]. Y.G. Huang [6] proposed a modified LBP named 3DLBP for 3D face recognition, which is in fact an extended LBP. Motivated by the original LBP and the 3DLBP method, we propose an Extended Local Binary Pattern (ELBP) operator to extract global texture features from NIR face images. The LBP operator used in 3DLBP is the original LBP(8, 1), which we change into the uniform-pattern LBP^{U2}(8, 1); LBP^{U2}(8, 1) brings two main advantages, robustness to rotation and dimensionality reduction. In ELBP, not only the original LBP but also the hidden gray-value differences are encoded into binary patterns.

Figure 2 illustrates the basic LBP operator. The operator assigns a label to each pixel of an image by thresholding the 3×3 neighborhood of each pixel with the center pixel value and taking the results as binary units, 0 or 1, according to their signs. The binary units are then arranged clockwise, giving a set of binary units as the local binary pattern of the pixel. Two parameters (P, R) control the number of neighbors (P) and the radius (R) of their locations.
Fig. 2. An example of basic LBP operator
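A minimal sketch of the basic LBP(8, 1) labeling and the uniform-pattern test described above; border handling and the efficient whole-image version are omitted, and this is not the authors' implementation.

```python
import numpy as np

def lbp_8_1(image, i, j):
    """Basic LBP(8, 1) code of pixel (i, j): threshold the 8 neighbours on a
    radius-1 circle against the centre and read the signs clockwise as an
    8-bit number.  Border handling is omitted in this sketch."""
    center = image[i, j]
    # the 8 neighbours listed clockwise, starting from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (di, dj) in enumerate(offsets):
        if image[i + di, j + dj] >= center:
            code |= 1 << (7 - bit)
    return code

def is_uniform(code):
    """A pattern is 'uniform' if its circular bit string has at most two
    0/1 transitions (e.g. 00000000 or 01110000, but not 01010011)."""
    bits = [(code >> k) & 1 for k in range(8)]
    transitions = sum(bits[k] != bits[(k + 1) % 8] for k in range(8))
    return transitions <= 2

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
c = lbp_8_1(patch, 1, 1)
print(format(c, '08b'), is_uniform(c))   # 10001111 True
```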
The binary pattern is further transformed into a decimal number or into a 'uniform pattern', another extension of the original LBP operator. A local binary pattern is called uniform if, considered as a circular bit string, it contains at most two bitwise transitions from 0 to 1 or vice versa. For instance, the patterns 00000000 (0 transitions) and 01110000 (2 transitions) are uniform, whereas 11001001 (4 transitions) and 01010011 (6 transitions) are not. When computing the LBP histogram, each uniform pattern gets its own bin and all non-uniform patterns share a single bin. The method is denoted LBP^{U2}(P, R).

From the process in Fig. 2 we can see that the LBP operator encodes relationships of pixels with their neighbors; these relationships are called texture features in this paper, so LBP can be seen as a type of local texture feature. In our opinion, the difference information should also be present in the texture features of points on the face images, so the LBP operator has the potential to encode hidden information in NIR face images. Moreover, encoding only the signs of the differences between neighboring pixels is not enough to describe faces, since important information kept in the gray-value differences is neglected, as also stated in [6]. See the example in Fig. 3: although I and II are two different persons, the LBP codes of their nose tips are the same, because all the points around the nose tips are 'lower' than them. Thus, when the gray values vary with the same trend at the same place in NIR face images of two different persons, LBP is inadequate to distinguish them.

As a result, the exact gray-value differences are also encoded into binary patterns, as shown in Fig. 3. According to our statistical analysis of our NIR image database, nearly 90% of the gray-value differences between points within R = 2 are smaller than 7, so three binary units are added to encode each gray-value difference between a pixel and its neighbor. The three binary units {i2 i3 i4} [6] encode the absolute value of the gray difference (GD) from 0 to 7; all |GD| ≥ 7 are assigned to 7. Combined with the sign of the difference, denoted by 0 or 1 in the leading binary unit i1 as in the original LBP, we construct a set of four binary units {i1 i2 i3 i4} for the GD between two points as follows [6]:

i_1 = \begin{cases} 1, & GD \ge 0 \\ 0, & GD < 0 \end{cases} \qquad (1)

|GD| = i_2 \cdot 2^2 + i_3 \cdot 2^1 + i_4 \cdot 2^0 \qquad (2)
To illustrate the equations above, an example is shown in Fig.3.
Fig. 3. An example of ELBP and its comparison to original LBP
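A small sketch of the encoding in Eqs. (1) and (2), turning one gray-value difference into the four ELBP binary units; the clamping of |GD| to 7 follows the statistics quoted above.

```python
def elbp_units(gd):
    """Encode one gray-value difference GD into the four ELBP binary units.

    i1 is the sign bit of Eq. (1); i2, i3, i4 encode min(|GD|, 7) as a 3-bit
    number per Eq. (2), with |GD| >= 7 clamped to 7.
    """
    i1 = 1 if gd >= 0 else 0
    mag = min(abs(gd), 7)
    i2, i3, i4 = (mag >> 2) & 1, (mag >> 1) & 1, mag & 1
    return i1, i2, i3, i4

# A neighbour 5 gray levels brighter / 12 levels darker than the centre pixel
print(elbp_units(5))    # (1, 1, 0, 1): positive sign, |GD| = 5
print(elbp_units(-12))  # (0, 1, 1, 1): negative sign, |GD| clamped to 7
```

For each neighbour, the four units feed the four layer maps P1–P4 that are histogrammed as uniform patterns in the following paragraphs.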
The four binary units are divided into four layers, and the binary units of each layer are arranged clockwise (see Fig. 3). Four decimal numbers, P1, P2, P3, and P4, are thus obtained at each pixel as its representation, which we call the Extended Local Binary Pattern (ELBP). For matching, the ELBP values are first turned into four maps according to P1, P2, P3, and P4, respectively: ELBPMap1, ELBPMap2, ELBPMap3, and ELBPMap4. The four maps are then transformed into the uniform pattern LBP^{U2}(8, 1). Finally, the histograms of local regions of the four maps are concatenated into local texture features.

3.2 Face Description with ELBP

The ELBP method introduced in the previous subsection is used for face description. The procedure uses the ELBP operator to produce local texture features ρ1, ρ2, ..., ρn and combines them into a global texture feature η describing the whole face. The NIR face image is divided into local regions, and the ELBP operator extracts a local texture feature from each region independently; the local features are then concatenated to form the global texture feature of the face. Fig. 4 shows an example of a NIR face image divided into rectangular regions.
Fig. 4. A NIR face image divided into 2×2, 4×4, 8×8 rectangular regions
The basic histogram can be extended into a spatially enhanced histogram that encodes both the appearance and the spatial relations of face regions. Once the n face regions R1, R2, ..., Rn have been determined, a histogram is computed independently for each region, and the resulting n histograms are concatenated into a spatially enhanced histogram of size n×k, where k is the length of a single ELBP histogram. The global texture feature η thus describes the face at three levels of locality: the ELBP labels contain information about the patterns at the pixel level, the labels are summed over a small region to yield information at the regional level, and the regional histograms are concatenated to build the global texture feature η of the face. Note that, unlike the examples in Fig. 4, the regions R1, R2, ..., Rn need not be rectangular, need not have the same size or shape, and need not cover the whole image; for instance, they could be circular regions located at fiducial points, and partially overlapping regions are also possible, as in [5].
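A sketch of how the spatially enhanced histogram could be assembled from one ELBP map layer, assuming the labels have already been mapped to the 59 uniform-pattern bins of LBP^{U2}(8, 1); the grid size is a parameter here, not a fixed choice of the paper.

```python
import numpy as np

def spatially_enhanced_histogram(label_map, grid=(4, 4), n_bins=59):
    """Concatenate per-region histograms of one LBP/ELBP label map.

    label_map: 2-D integer array of per-pixel pattern labels already mapped
    to uniform-pattern bins (59 bins for uniform LBP(8, 1)).  The map is cut
    into grid[0] x grid[1] rectangular regions, each region is histogrammed
    independently, and the histograms are concatenated, giving a feature of
    length grid[0] * grid[1] * n_bins.
    """
    rows = np.array_split(np.arange(label_map.shape[0]), grid[0])
    cols = np.array_split(np.arange(label_map.shape[1]), grid[1])
    feats = []
    for r in rows:
        for c in cols:
            region = label_map[np.ix_(r, c)]
            hist, _ = np.histogram(region, bins=n_bins, range=(0, n_bins))
            feats.append(hist)
    return np.concatenate(feats)

labels = np.random.randint(0, 59, size=(80, 80))   # stand-in for one ELBP map layer
eta_layer = spatially_enhanced_histogram(labels, grid=(8, 8))
print(eta_layer.shape)                             # (8 * 8 * 59,) = (3776,)
```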
4 Feature Selection Based on AdaBoost

With its successful application to face detection, boosting has shown a strong ability to solve two-class problems. To use it here, we need to convert the multi-class face recognition problem into a two-class problem. The Bayesian method proposed by Moghaddam and Pentland [8] was the top performer in the FERET96 test, and its essential idea is just what we want to use. Face recognition is basically a multi-class problem, but Moghaddam and Pentland use a statistical approach that learns the variation between different images of the same subject to form the intra-personal space, and the variation between images of different subjects to form the extra-personal space; the multi-class problem is thereby converted into a two-class problem. The estimation of the intra-personal and extra-personal distributions is based on the assumption that the intra-personal distribution is Gaussian [4]. In our method, the intra-personal and extra-personal classes are defined as follows: ηi is the global texture feature vector of a NIR face image of the subject with ID i; ηj is the global texture feature vector of another image; D(η) = ||ηi − ηj|| is the difference of the two vectors, representing the distinction between the ELBP features. If i = j, D(η) lies in the intra-personal space and is used as a positive example in training; on the contrary, if i ≠ j, D(η) lies in the extra-personal space and is used as a negative example.
As a version of the boosting algorithm, AdaBoost is used to solve the two-class problem of distinguishing the intra-personal space from the extra-personal space. AdaBoost is based on the idea that a strong classifier can be created by linearly combining a number of weak classifiers. Viola and Jones [9] used the AdaBoost algorithm to train a classifier that learns simple Haar features for face detection; their system remains one of the fastest face detection systems with competitive accuracy. A weak classifier can be a very simple threshold function h_j(x) based on a single feature f_j(x):

h_j(x) = \begin{cases} 1, & p_j f_j(x) < p_j \lambda_j \\ 0, & \text{otherwise} \end{cases} \qquad (3)
where λ_j is a threshold and p_j is a parity indicating the direction of the inequality. The threshold can be determined from the mean responses of the positive and negative samples on the j-th feature:

\lambda_j = \frac{1}{2}\left[ \frac{1}{m} \sum_{p=1}^{m} f_j(x_p \mid y_p = 1) + \frac{1}{l} \sum_{n=1}^{l} f_j(x_n \mid y_n = 0) \right] \qquad (4)
Each weak classifier is trained to select one feature from the complete set for classification; when the weak classifiers are combined, a much better performance is achieved than with any single classifier. The algorithm focuses on difficult training patterns, increasing their representation in successive training sets. In detail, each feature defines a weak classifier, and T weak classifiers are selected over T rounds to compose the final strong classifier. In each round, the space of all possible features is searched exhaustively for the best weak classifier, the one with the lowest weighted classification error; the error is then used to update the sample weights so that wrongly classified samples receive larger weights.
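A simplified sketch of one weak classifier in the spirit of Eqs. (3) and (4): the threshold is the average of the positive- and negative-class mean responses and the parity is chosen to minimize the weighted error. This illustrates a single AdaBoost round, not the authors' full training code.

```python
import numpy as np

def train_stump(responses, labels, weights):
    """Decision stump for one feature, in the spirit of Eqs. (3)-(4).

    responses: feature values f_j(x) for all training pairs
    labels:    1 for intra-personal pairs, 0 for extra-personal pairs
    weights:   current AdaBoost sample weights
    The threshold is the average of the positive- and negative-class mean
    responses; the parity p in {+1, -1} picks the inequality direction with
    the lower weighted error.  Returns (threshold, parity, weighted_error).
    """
    lam = 0.5 * (responses[labels == 1].mean() + responses[labels == 0].mean())
    best = None
    for p in (+1, -1):
        pred = (p * responses < p * lam).astype(int)
        err = np.sum(weights * (pred != labels))
        if best is None or err < best[2]:
            best = (lam, p, err)
    return best

# Toy data: intra-personal differences tend to produce smaller responses
rng = np.random.default_rng(0)
resp = np.concatenate([rng.normal(1.0, 0.3, 50), rng.normal(2.0, 0.3, 50)])
lab = np.concatenate([np.ones(50, dtype=int), np.zeros(50, dtype=int)])
w = np.full(100, 1.0 / 100)
print(train_stump(resp, lab, w))
```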
5 Experiments and Results

After ELBP feature extraction and AdaBoost feature selection, the global texture feature η is transformed into a feature vector ζ of dimension T, much lower than the original n×k. Nearest Neighbor (NN) classification is then applied to the AdaBoost-selected ELBP features instead of the final strong AdaBoost classifier: the latter is usually used for face verification, a two-class problem, whereas face recognition is a multi-class problem, so NN is more convenient. The NIR face database contains 60 subjects, each with 20 images; half are used for training AdaBoost, 5 for the gallery, and 5 as probes. All images are normalized to 80×80 pixels, and the local features extracted by the ELBP^{U2}(8, 1) operator from one region contain 59×4 = 236 bins. Therefore, if the NIR face images are divided into 10×10 rectangular regions, the global texture feature vector η contains 59×4×10×10 = 23600 bins, and if the ELBP^{U2}(16, 2) operator is used, it contains 243×4×10×10 = 97200 bins. The
dimension is too high to compute directly. To apply the AdaBoost algorithm for ELBP feature selection, 2700 intra-class global texture vector pairs are used to produce positive samples in the training process, and 17700 global texture vector pairs selected randomly from the 177000 extra-class pairs are used as negative samples. The first 10000 learned features are chosen for classification.

Table 1. The recognition rates of different algorithms

Method               Average Recognition Rate
PCA                  0.8246
LDA                  0.8977
LBP                  0.8830
ELBP                 0.9350
LBP (Best Result)    0.9167
ELBP (Best Result)   0.9574
To demonstrate the effectiveness of ELBP on the NIR face database, several experiments compare its performance with Eigenfaces (PCA), linear discriminant analysis (LDA), and the local binary pattern (LBP). The recognition results of these methods are shown in Table 1. The LBP algorithm used in the experiments is LBP^{U2}(8, 1), and the NIR images are divided into 10×10 windows. The best results of LBP and ELBP are achieved when the number of selected features is 4200 and 8600, respectively.
Fig. 5. (a) Recognition rate of the proposed method with respect to the number of selected ELBP features by AdaBoost when the number of bins is less than 5900. (b) Recognition rate of the proposed method with respect to the number of selected ELBP features by AdaBoost when the number of bins is between 6000 and 10000.
The face recognition performance of the proposed method with different numbers of ELBP features selected by AdaBoost is plotted in Fig. 5, with the LBP curve shown for comparison. Since the feature produced by the LBP^{U2}(8, 1) operator contains 5900 bins whereas the feature produced by ELBP^{U2}(8, 1) contains 23600 bins, Fig. 5 is divided
into two parts. When the number of selected bins is between 7300 and 9000, the results of ELBP are clearly better than the best result of LBP.

In addition, the ELBP method is evaluated on the FERET database and protocol, which have been widely used to evaluate face recognition algorithms and are a de facto standard in face recognition research [10]. The FERET database consists of images captured under visible light; the images are normalized to 80×80 pixels in the experiments, and to achieve a fair comparison all images are divided into 8×8 windows. The results may differ from those mentioned in [6] because of the different image divisions and LBP operators. Table 2 shows the performance of the LBP and ELBP methods, together with the best result of FERET'97 [10] as a reference. The table shows that ELBP does not perform as well as LBP, especially on fc. Statistical analysis suggests one reason: the distribution of the exact gray-value differences between a pixel and its neighbors. Only about 65% of the gray-value differences between points within R = 2 are smaller than 7, and differences above 7 are clamped to 7, so less hidden information is extracted from these images than from NIR images. The proposed ELBP method is therefore applicable to databases in which the gray-value differences of neighboring pixels do not change dramatically, such as the NIR database.

Table 2. The rank-1 recognition rates of different algorithms for the FERET probe sets

Method                     fb       fc       dup I    dup II
Best Result of FERET'97    0.96     0.82     0.59     0.52
LBP                        0.924    0.302    0.517    0.296
ELBP                       0.806    0.195    0.448    0.151
6 Conclusions

This paper proposes a novel method that uses ELBP to extract a global texture feature from NIR images; to obtain better performance and reduce dimensionality, AdaBoost is used to select the most representative features. Since NIR images are insensitive to illumination variation and ELBP extracts adequate texture features, ELBP proves more effective than the original LBP for NIR face recognition in the experiments. ELBP is also evaluated on the FERET database, which shows that the proposed method is not suited to images captured in the visible spectrum. Through statistical analysis we found that the performance of ELBP depends on the distribution of the exact gray-value differences between a pixel and its neighbors: if most of the gray-value differences can be represented in the ELBP maps instead of being clamped to a fixed value, the results are encouraging, and ELBP can be extended to describe larger gray-value differences if computation cost is not a limit. Moreover, a classifier trained with the AdaBoost algorithm selects the most representative features, avoiding the very high dimensionality; compared with the huge number of features produced by the ELBP operator, only a small part is selected, which saves computation and time significantly.
In future work, the weight of each division in the ELBP method will be trained to improve performance, and further efforts may be made to reduce computation and improve feature selection.
Acknowledgment This work was supported by Program of New Century Excellent Talents in University, National Natural Science Foundation of China (No. 60575003, No. 60332010), Joint Project supported by National Science Foundation of China and Royal Society of United Kingdom (No. 60710059), and Hi-Tech Research and Development Program of China (2006AA01Z133).
References 1. Zhao, W.Y., Chellappa, R., Phillips, P.J., Rosenfeld, A.: Face recognition: A Literature Survey. ACM Computing Surveys, 399–458 (2003) 2. Dowdall, J., Pavlidis, I., Bebis, G.: Face Detection in the Near-IR Spectrum. Image and Vision Computing 21, 565–578 (2003) 3. Pan, Z.H., Healey, G., Prasad, M., Tromberg, B.: Face Recognition in Hyperspectral Images. IEEE Trans. Pattern Analysis and Machine Intelligence 25, 1552–1560 (2003) 4. Li, S.Z., Chu, L.F., Liao, S.C., Zhang, L.: Illumination Invariant Face Recognition Using Near-Infrared Images. IEEE Trans. Pattern Analysis and Machine Intelligence 29, 627– 639 (2007) 5. Timo, A., Abdenour, H., Matti, P.: Face Recognition with Local Binary Patterns. In: Pajdla, T., Matas, J(G.) (eds.) ECCV 2004. LNCS, vol. 3021, pp. 469–481. Springer, Heidelberg (2004) 6. Huang, Y.G., Wang, Y.H., Tan, T.N.: Combining Statistics of Geometrical and Correlative Features for 3D Face Recognition. British Machine Vision Association 1, 391–395 (2006) 7. Ojala, T., Pietikäinen, M., Mäenpää, T.: Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Analysis and Machine Intelligence 24, 971–987 (2002) 8. Moghaddam, B., Pentland, A.: Beyond Eigenface: Probabilistic Matching for Face Recognition. In: IEEE International Conference on Automatic Face and Gesture Recognition, pp. 30–35 (1998) 9. Viola, P., Jones, M.: Rapid Object Detection Using a boosted cascade of simple features. In: Proc. of IEEE Conf. on Computer Vision and Pattern Recognition (2001) 10. Phillips, P.J., Moon, H., Rauss, P., Rizvi, S.A.: The FERET Evaluation Methodology for Face-Recognition Algorithms. IEEE Trans. Pattern Analysis and Machine Intelligence 22, 1090–1104 (2002)
Surface Signature-Based Method for Modeling and Recognizing Free-Form Objects

H.B. Darbandi¹, M.R. Ito¹, and J. Little²

¹ Electrical and Computer Engineering, University of British Columbia
² Computer Science, University of British Columbia
[email protected],
[email protected],
[email protected]
Abstract. In this paper we propose a new technique for modeling three-dimensional rigid objects by encoding the fluctuation of the surface and the variation of its normal around an oriented surface point as the surface expands. The surface of the object is encoded into three vectors, the surface signature at each point, and the collection of signatures is used to model and match the object. The signatures encode the curvature, symmetry, and convexity of the surface around an oriented point. This modeling technique is robust to scale, orientation, sampling resolution, noise, occlusion, and clutter.
1 Introduction

Object recognition can be divided into object representation and feature-matching procedures [1]. Three-dimensional model-based object representations use the geometry of the object as the basis for modeling: an object's geometric properties and relations are extracted and stored as models of that particular object, and during matching the same procedure is applied to the test object and its geometric properties and relations are compared against the models for identification. The main goal of all modeling techniques is to extract enough object features to enable reliable recognition during matching.

In their early work, Besl and Jain [2] characterized the surface of an object by the mean and Gaussian curvatures of its surface, and Faugeras and Hebert [3] used curvature to detect primitive features in range data. The importance of curvature in computer vision is well known, but curvatures are very sensitive to noise. Parametric surfaces such as B-splines [4] are very flexible, but different parameters and different knots (control points) can create the same surface, making them more suitable for computer graphics. EGI-based techniques such as [5] are generally global techniques and react poorly to occlusion during recognition; these modeling techniques are not unique, except for CEGI [6]. Point signatures [7] are vulnerable to surface sampling and noise and can therefore yield ambiguous representations [8]. The splash representation [9] is also sensitive to noise and occlusion. The symmetry properties of superquadrics [10] limit their ability to model free-form objects, and constructive solid geometry [11] risks ambiguity when joining volumetric primitives.
Recently, with the decreasing cost of 3D scanners, more complex methods that use patches created from dense images have been introduced, including spin images [12], surface point signatures [13], harmonic shape contexts [14], and the tensor method, which models and recognizes 3D objects in cluttered environments [15]. A good survey of different techniques can be found in [16]. Of the more recently introduced methods, the spin image is perhaps the most accurate and the easiest to implement: it handles occlusion and clutter very well and is robust to noise, though it is sensitive to the resolution and sampling of the models and the scene; an improvement to the technique [17] overcomes this sensitivity. Spin images map a 3D surface into a 2D histogram, but histograms do not encode surface geometric properties, which are essential in many computer vision applications, and 2D histograms lead to large model sizes and an inefficient matching process.

The technique proposed in this paper can be used to model a free-form object. It is robust to scale, orientation, occlusion, clutter, sampling resolution, and noise, and the resulting models are compact and much smaller than those created by spin images. Furthermore, the proposed technique encodes the curvature, symmetry, and convexity of the surface around an oriented point, which can be used in other related applications, and it encodes the surface into three discriminating vectors, making the matching process more efficient and accurate. The rest of the paper is organized as follows: Section 2 describes the proposed modeling technique, its parameters, and the data set; Section 3 presents the results and analysis of the experiments.
2 Flatness and Orientation Signatures

The basic element used to model and recognize an object in this paper is an oriented point, i.e. a point P on the surface of an object together with its normal N at P. Consider an oriented point P on the surface of an object (left-hand column of Figure 1), and assume a sphere of radius R centered on P. If S is the total area of the object circumscribed inside the sphere, and A is the projection of S onto a plane Π normal to N, then the length of the normal N is set to

\|N\| = F = \frac{A}{S}, \qquad F \le 1 \qquad (1)
Fig. 1. Modeling Technique

F specifies the flatness of the area around the oriented point P on the surface. For a flat surface F equals 1; for a curved surface F is less than one, and the more curved the surface, the lower the value of F. As R increases, the ratio A/S changes, creating a graph that is the flatness signature of the surface around P; the flatness signature of a flat area is a horizontal line. Similarly, the angle θ of the normal, which equals the average of the normals of the patches enclosed in the sphere, also deviates from its original direction N as R increases, creating another curve, the orientation signature. For a symmetric surface, the orientation signature is a horizontal line if P is placed at the symmetry point of the surface. The combination of the orientation signature and the flatness signature models the object at point P, and the collection of such signatures models the entire object. Each signature is a vector of n elements, where n is the number of intervals used to generate the signature; for each interval the radius of the sphere is set to R_i = R_{i-1} + ΔR.

To find the signature for an oriented point at a vertex, the normal N of the vertex is first calculated by averaging the normals of the surfaces around the vertex; to reduce the effect of noise, the patch normals are averaged inside a sphere with a specific radius called the base radius. Given N and the plane Π normal to N, the two vectors S = [s_1, ..., s_n] (s_i ≥ 0) and A = [a_1, ..., a_n] (a_i ≥ 0) can be calculated. The flatness signature F = [f_1, ..., f_n] (f_i ≥ 0) of the oriented point P is then calculated as F = A / (S + ε), where ε is added to each element of S to avoid division by zero. Concurrently, the orientation signature O = [o_1, ..., o_n] (o_i ≥ 0) is found. The collection of the F and O signatures models the surface of the object at the selected vertex.

The main problem with this method is that as R increases the signatures reach a relatively steady state, so their discriminative power decreases. To overcome this, we use the surface of the object circumscribed between two steps of R for signature creation, rather than accumulating the surface as R increases: instead of a single sphere, two concentric spheres are used, as shown in the middle column of Figure 1, and the area circumscribed between the two spheres of radii R1 and R2 is used for signature creation.
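A sketch of how the shell-based flatness and orientation signatures could be computed for a patch-based surface, under the simplifying assumptions stated in the comments (patch-to-shell assignment by centroid, projected area approximated as area × |n·N|); it is not the authors' implementation.

```python
import numpy as np

def shell_signatures(centroids, areas, normals, P, N, radii, eps=1e-9):
    """Flatness and orientation signatures of an oriented point (P, N).

    Simplifying assumptions: the surface is given as planar patches with
    centroids, areas and unit normals; a patch belongs to the shell that
    contains its centroid; the projected area of a patch onto the plane
    normal to N is approximated as area * |normal . N|.  radii is the
    increasing sequence R_0 < R_1 < ... defining the shells.
    """
    N = N / np.linalg.norm(N)
    dist = np.linalg.norm(centroids - P, axis=1)
    F, O = [], []
    for r0, r1 in zip(radii[:-1], radii[1:]):
        sel = (dist >= r0) & (dist < r1)
        S = areas[sel].sum()
        A = (areas[sel] * np.abs(normals[sel] @ N)).sum()
        F.append(A / (S + eps))                            # flatness in this shell
        mean_n = (areas[sel][:, None] * normals[sel]).sum(axis=0)
        norm = np.linalg.norm(mean_n)
        O.append(np.arccos(np.clip(mean_n @ N / norm, -1, 1)) if norm > 0 else 0.0)
    return np.array(F), np.array(O)

# A flat patch grid facing +z gives F = 1 and O = 0 in every non-empty shell
g = np.stack(np.meshgrid(np.linspace(-5, 5, 11), np.linspace(-5, 5, 11)), -1).reshape(-1, 2)
centroids = np.c_[g, np.zeros(len(g))]
areas = np.ones(len(g))
normals = np.tile([0.0, 0.0, 1.0], (len(g), 1))
F, O = shell_signatures(centroids, areas, normals, np.zeros(3),
                        np.array([0.0, 0.0, 1.0]), radii=np.arange(0.0, 8.0, 2.0))
print(np.round(F, 3), np.round(O, 3))
```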
Fig. 2. Filter used to smooth flatness signatures

Fig. 3. Test models (M48, M100, M605, M1156)
Experiments show that the signatures created by this method are sensitive to noise: when s_i and a_i are too small, a small fluctuation of the object surface caused by noise affects the flatness signature. To remedy this, the flatness signatures are smoothed with a Gaussian-shaped filter (Figure 2) with parameters μ = mean(S) and δ = std({s_i ∈ S : s_i < μ}).

2.1 Convex and Concave Surfaces

The method introduced so far creates identical signatures for concave and convex surfaces. To distinguish between them, the surface is divided into positive and negative patches (convex and concave, respectively) based on the direction of their normals relative to an oriented point. Consider the cross-section of surface S in the right-hand column of Figure 1, where O1 is a point on the surface and N1 is its normal, and assume we want the signatures of the surface relative to the oriented point O1. To determine the convexity or concavity of the surface at points O2 and O3 relative to O1, connect O1 to O2 and O1 to O3 to obtain the vectors O1O2 and O1O3, and find the projections T2 and T3 of N2 and N3 onto O1O2 and O1O3, respectively. If O1Ox and Tx point in the same direction, the surface at Ox is convex relative to O1; if they point in opposite directions, it is concave. For example, the surface at O2 is convex and at O3 concave relative to the oriented point O1 (a short sketch of this test is given after Section 2.2).

In addition to the flatness and orientation signatures, the total convex and concave areas at each step of signature creation are stored separately; this has no major effect on processing time, since the same procedures are used as for the flatness and orientation signatures. The convexity signature is calculated by dividing the convex area by the total area at each step of signature creation. Convexity takes values between zero and one: a value of one means the area considered at that step is completely convex, and a value of zero means it is completely concave or flat. The convexity signatures, together with the flatness and orientation signatures, model the surface around an oriented point.

2.2 Parameters of the Model

The signatures created with the proposed modeling technique depend on two parameters, the support distance and the support angle [18], which limit the modeling area of the surface around an oriented point. The support distance limits the value of R: small values of R model the local deformation around point P, and large values model the global deformation of the surface relative to P. Because of occlusion, the entire object cannot be seen from a single viewpoint; it is reasonable to assume that a patch whose normal makes an angle greater than a threshold with the orientation of P cannot be seen from the same viewpoint as P, so such patches are not used for modeling the object from that viewpoint. This threshold is called the support angle.
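A minimal sketch of the convex/concave test of Section 2.1, referenced above; the sphere example in the usage lines is purely illustrative.

```python
import numpy as np

def is_convex(P1, P2, N2):
    """Convexity of the surface at P2 relative to the oriented point P1.

    Project N2 (the normal at P2) onto the vector P1->P2: if the projection
    points the same way as P1->P2, the surface at P2 is counted as convex
    relative to P1, otherwise as concave.
    """
    v = np.asarray(P2, float) - np.asarray(P1, float)
    return float(np.dot(v, N2)) >= 0.0

# Points on a unit sphere with outward normals: convex relative to each other
P1 = np.array([0.0, 0.0, 1.0])          # oriented point on the sphere
P2 = np.array([1.0, 0.0, 0.0])          # another sphere point; its normal equals P2
print(is_convex(P1, P2, P2))            # True  (convex)
print(is_convex(P1, P2, -P2))           # False (inverted normal -> concave)
```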
2.3 Models and Settings

We chose M48 [19], M100, M605, and M1153 (a duck, a pig, a stand light, and a toy airplane) as our experimental models (Figure 3). The dimensions of these models are listed in Table 1. Normal noise with a standard deviation of 4 (2% of the maximum dimension of the model) was added to each vertex in the normal direction of the surface of M48, M100, M605, and M1153 to create M48N4, M100N4, M605N4, and M1153N4, respectively.

Table 1. Dimensions of test models

Model    Length   Height   Width   Number of Patches   Number of Vertexes
M48      97.5     90.7     200     1307                671
M100     112.2    91.1     200     1208                606
M605     100.4    200      170.9   1300                531
M1153    161.3    71.5     200     1478                743
To demonstrate the signatures' robustness to clutter, a large portion of the test object was removed: one to five random points were selected as seed patches on the surface of the test object, and surface patches amounting to up to 60% of the total surface of the object were removed (Figure 4).
Fig. 4. M48 with 20%, 40%, and 60% of the surface patches removed
2.4 Similarity Measurement

To compare the signatures of the models, we multiply the angle of the normalized cross-correlation of the signatures by their normalized Euclidean distance:

similarity = \cos^{-1}\bigl(NCC(S_A, S_B)\bigr) \times \frac{\|S_A - S_B\|}{length(S)} \qquad (2)

To compare oriented points, we use the Euclidean norm of the similarity measurements:

likeness = \bigl\| \, (O, \; \alpha F, \; \beta C) \, \bigr\| \qquad (3)

where O, F, and C are the similarity measurements for the orientation, flatness, and convexity signatures, respectively. The coefficients α and β weight the similarity measurements, and their values depend on the noise level; in our experiments both coefficients were set to one.
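A sketch of Eqs. (2) and (3), assuming the zero-mean definition of the normalized cross-correlation and taking length(S) to be the number of signature elements; both are interpretation choices, not details fixed by the paper.

```python
import numpy as np

def similarity(sa, sb):
    """Signature dissimilarity of Eq. (2): the angle of the normalized
    cross-correlation times the Euclidean distance divided by length(S)."""
    sa, sb = np.asarray(sa, float), np.asarray(sb, float)
    da, db = sa - sa.mean(), sb - sb.mean()
    ncc = np.dot(da, db) / (np.linalg.norm(da) * np.linalg.norm(db))
    return np.arccos(np.clip(ncc, -1.0, 1.0)) * np.linalg.norm(sa - sb) / len(sa)

def likeness(Fa, Fb, Oa, Ob, Ca, Cb, alpha=1.0, beta=1.0):
    """Eq. (3): Euclidean norm of the three per-signature similarities."""
    O, F, C = similarity(Oa, Ob), similarity(Fa, Fb), similarity(Ca, Cb)
    return np.linalg.norm([O, alpha * F, beta * C])

# A signature matches a noisy copy of itself better than an unrelated one
sig = np.sin(np.linspace(0, 3, 20))
noisy = sig + 0.01 * np.random.default_rng(1).normal(size=20)
other = np.cos(np.linspace(0, 3, 20))
print(similarity(sig, noisy) < similarity(sig, other))   # True (smaller = more alike)
```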
2.5 Patch Resolution

The size of the patches has no significant effect on the creation of the signatures. To illustrate the effect of patch size on surface signatures, M48 was sampled with 1307, 326, 162, and 103 patches, labeled A, B, C, and D, respectively, as shown in Figure 5.

Table 2. Likeness of the signatures of M48 (A) compared with the signatures of models M48N4, B, C, and D. μ and δ stand for the mean and standard deviation of the comparisons; results are shown in log10

likeness    μ        δ
M48N4       -1.55    -1.49
B           -2.09    -1.91
C           -1.71    -1.62
D           -1.54    -1.50
The results of the likeness comparisons of model A with the corresponding points of models B, C, D, and M48N4 are summarized in Table 2 (in log10). As the table indicates, the effect of the sampling resolution of models B, C, and D on signature creation, relative to model A, is less than or equal to the effect of the noise added to M48N4. As will be shown in Section 3, the proposed modeling technique is robust to this level of noise.
Fig. 5. M48 sampled with different sampling rates (1307, 326, 162, and 103 patches)
2.6 Noise Effect

Because of the averaging in our modeling technique, noise is suppressed. In Figure 6, the left-hand column shows sample patches with their normals, and the right-hand column shows the same patches perturbed by noise. The orientation signature of an oriented point in each interval of signature creation is calculated as

O_i = \frac{1}{m} \sum_{j=1}^{m} s_j N_j = \frac{1}{m} \sum_{j=1}^{m} a_j b_j \begin{bmatrix} u_j \\ v_j \end{bmatrix} \qquad (4)

Fig. 6. Left: patches along with their normals. Right: the same patches perturbed by noise

where s_j = a_j b_j is the area of the patch, N_j is its normal, m is the number of patches, and u_j and v_j are the components of N_j in the u and v directions. To simplify the calculations, we assume that noise displaces two adjacent vertices only in the v direction and by the same amount g_j, as shown in the right-hand column of Figure 6. Then s'_j = b_j \sqrt{a_j^2 + g_j^2} and

O'_i = \frac{1}{m} \sum_{j=1}^{m} s'_j N'_j = \frac{1}{m} \sum_{j=1}^{m} s'_j \begin{bmatrix} p_j & -q_j \\ q_j & p_j \end{bmatrix} N_j \qquad (5)

where p_j = a_j / \sqrt{a_j^2 + g_j^2}, q_j = g_j / \sqrt{a_j^2 + g_j^2}, and N_j is the normal of the patch before the noise is applied. Simplifying Equation (5),

O'_i = \frac{1}{m} \sum_{j=1}^{m} \begin{bmatrix} a_j b_j & -b_j g_j \\ b_j g_j & a_j b_j \end{bmatrix} \begin{bmatrix} u_j \\ v_j \end{bmatrix} = \frac{1}{m} \sum_{j=1}^{m} a_j b_j \begin{bmatrix} u_j \\ v_j \end{bmatrix} + \frac{1}{m} \sum_{j=1}^{m} g_j b_j \begin{bmatrix} -v_j \\ u_j \end{bmatrix}

Taking the difference between the two orientation signatures and applying the Cauchy–Schwarz inequality,

e = \bigl\| O_i - O'_i \bigr\| = \left\| \frac{1}{m} \sum_{j=1}^{m} g_j b_j \begin{bmatrix} v_j \\ -u_j \end{bmatrix} \right\| \le \frac{1}{m} \sqrt{\sum_{j=1}^{m} b_j^2} \sqrt{\sum_{j=1}^{m} g_j^2} = b \sqrt{\sum_{j=1}^{m} \frac{g_j^2}{m}} = b\delta \qquad (6)

where δ is the standard deviation of the noise and, for simplicity, it is assumed in Figure 6 that b_1 = b_2 = ... = b_m = b. As Equation (6) shows, by increasing the sampling density, and consequently decreasing b, the noise effect can be controlled. We have shown the effect of noise on an orientation signature with noise applied in one dimension; the same conclusion holds when noise is applied in all three dimensions. Furthermore, it can be shown that the effects of noise on the flatness and convexity signatures are also suppressed, although the flatness signatures are more sensitive to noise than the orientation and convexity signatures.
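A small numerical check of the bound in Eq. (6) under the stated simplifications (equal patch widths b, unit 2-D normals, noise only along v); the sampled values are arbitrary.

```python
import numpy as np

# Monte Carlo check of the bound e <= b * delta of Eq. (6) under the stated
# simplifications: equal patch widths b, unit 2-D normals, noise only along v.
rng = np.random.default_rng(42)
m, b = 200, 0.05                                    # number of patches, common width
a = rng.uniform(0.5, 1.5, m)                        # patch lengths a_j
theta = rng.uniform(0, 2 * np.pi, m)
N = np.stack([np.cos(theta), np.sin(theta)], 1)     # unit normals (u_j, v_j)
g = rng.normal(0.0, 0.02, m)                        # noise displacements g_j

O = ((a * b)[:, None] * N).sum(0) / m               # Eq. (4)
p, q = a / np.hypot(a, g), g / np.hypot(a, g)       # rotated-normal components, Eq. (5)
s_p = b * np.hypot(a, g)                            # perturbed patch areas s'_j
O_p = (s_p[:, None] * np.stack([p * N[:, 0] - q * N[:, 1],
                                q * N[:, 0] + p * N[:, 1]], 1)).sum(0) / m

e = np.linalg.norm(O - O_p)
bound = b * np.sqrt(np.mean(g ** 2))                # b * delta
print(e <= bound + 1e-12)                           # True
```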
3 Object Recognition

Likeness measurement provides a means of finding corresponding points on the surfaces of a test object and the models. To match the signatures, the flatness, orientation, and convexity signatures created from a sampled oriented point on the surface of the test object are first compared to the flatness, orientation, and convexity signatures of all models using Equation (2), and their likeness is then measured using Equation (3). Our experiments show that the signatures created from an object with the proposed modeling technique are more similar to signatures created from the same object than to signatures created from other objects. During matching, a sample point can be matched to more than one point, for two reasons: points located on symmetrical parts of an object create similar signatures, and so do similar surfaces. To counter this, the top M likeness matches within a threshold are kept for each sample point, and outliers are removed by geometric verification. A histogram of the models to which the verified matches belong is then created, and the model with the highest value in the histogram is selected as the candidate model. The candidate model is then passed to the registration process [20] for alignment; if the average alignment error between the test object and the selected model falls below a threshold value, the candidate is accepted as the matched model.

3.1 Experiments
In our experiments, six sample points were selected on the surface of the test object. The threshold value was set to 0.1%–0.2% of the total signatures, and M was set to 4. The support distance was varied from 20 to 100 in steps of 5, and the support angle was set to π/3. The average alignment error and the base radius were set to 10% and 5% of the maximum dimension of the objects, respectively. Since only one object was considered at a time, the results are either true positives or false negatives. The library consists of 69 toy models, with 56663 vectors for each of the flatness, orientation, and convexity signatures. Each experiment was conducted 50 times for each support distance, and each graph in Figure 7 is the outcome of 450 to 1800 recognition trials; the curves are smoothed with a polynomial trend of order 2.
Fig. 7. Left: Average matching results for M48N4, M100N4, and M1156N4. Right: Matching results for M605N4. Support angle set to π/3.
3.2 Results
Figure 7 shows recognition results for M48N4, M100N4, M605N4, and M1156N4, giving for each support distance the rate of positive candidate selection and of positive matches with an average alignment error of 10%. The left column of Figure 7 shows the average matching results for M48N4, M100N4, and M1156N4: positive candidate selection begins at 50% for support distance 20 and rises to 100% as the support distance increases. However, because of the threshold on alignment error, the positive match rate is very low for small support distances. Experiments indicate that increasing the alignment-error threshold or the number of sample points selected on the test object improves the results and moves them closer to the positive candidate curves.
As indicated in the right column of the figure, the matching results for M605N4 are very low compared to the matching results for other test objects. This is because M605 is a symmetrical elongated object, and its signatures are very similar to each other. The signatures selected on the surface of the test object match with multiple signatures on the surface of the model, causing the alignment process to fail for the specified threshold values. This problem can be addressed by selecting extra points on the surface of the test object after positive candidate selection for a better alignment process. Figure 7 also indicates that the positive match for this model increases for support distances above 80, because the sphere reaches the boundary of the object, and the signatures are more descriptive at the boundaries for this kind of object. Figure 8 displays the recognition results for the experiment with cluttered objects. Here we assumed that the cluttered areas had been segmented and removed, so these areas were not used for signature creation. The test object, M48N4, was cluttered randomly from 10% to 60% of its total surface, and then 6 random oriented points were selected on the uncluttered surface of the object. The signatures of the sample oriented points were matched with the signatures of the models in the library. The support angle was set to π/3. The figure shows the true positive match with 10% alignment error for occlusions of 10% and 60%. For greater clarity, the occlusions between 10% and 60%, which fall between the two curves, are not shown here. The figure also shows the maximum and minimum positive candidate selections for each support distance.
Fig. 8. Matching results for occluded test object

Fig. 9. Comparing matching results of spin images and the proposed modeling technique
3.3 Comparison
The spin images of the library models and test models were created with the same parameters. We then used the same algorithm, chose the same sample oriented points, and repeated the recognition process with the same parameters. The average results for recognizing the test objects with spin images and with the proposed modeling technique are shown in Figure 9: positive candidate selection improved by 12.5% for support distance 20 and by 0% for support distance 100, and true positive matches improved by 8.5% for support distance 20 and decreased by 1.5% for support
distance 100. The spin images used in the experiments were not compressed, and the models and point selections were exactly the same for both methods. The size of the proposed model used in our experiments is 7.5% of the size of the model used to create spin images. Spin images use Principal Component Analysis (PCA) [21] to compress the modeling data, and a similar technique can be used to pack the data in the modeling method introduced in this paper. The process of signature creation is complicated and requires considerable processing time. The time needed to model an object is O(m²), where m is the number of vertices of the model. However, the modeling process is performed offline. In our experiments, each signature for the test model (M48N4), consisting of 671 vertices, required an average of 6 seconds on a PC with a 1.8 GHz Centrino processor and 1 GB of RAM. Each matching process, which consisted of selecting 6 random points, generating their signatures, matching, and verification, required approximately 40 seconds.
Fig. 10. Library models used in the experiments
4 Conclusions
In this paper we present a general method for modeling and recognizing 3D objects. The representation we describe is simple but rich enough to model and match free-form objects effectively. The signatures obtained in the experiments show that while this technique provides an elegant method for modeling and matching objects, it also encodes the curvature, symmetry, and convexity of the surface around an orientation point, which can be used in related applications. The modeling and matching parameters can be adjusted to provide the results best suited to the particular experimental subject. However, the modeling process is more complicated and uses more CPU cycles.
The results presented in the paper indicate that the matching process involves two major parameters: the number of oriented points selected and the maximum number of top match points (M). Increasing these two parameters improves the matching results. The proposed method is not only applicable to object recognition: it can also be used to trace an object. Furthermore, the signatures created by the model can be used to find the symmetry axes of 3D objects and to spot high-curvature and flat areas on an object's surface. Our experiments show that the signatures can be grouped to create a pool of data that can be shared to model a variety of objects with a limited amount of data. This application needs further investigation.
References
1. Bennamoun, M., Mamic, G.J.: Object Recognition: Fundamentals and Case Studies. Springer, Heidelberg (2000)
2. Besl, P.J., Jain, R.: Range Image Understanding. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 430–449 (1985)
3. Faugeras, O.D., Hebert, M.: The Representation, Recognition, and Location of 3-D Objects. The International Journal of Robotics Research 5(3), 27–52 (1986)
4. Wang, J., Cohen, F.S.: Part II: 3-D Object Recognition and Shape Estimation from Image Contours Using B-Splines, Shape Invariant Matching, and Neural Network. IEEE Trans. Pattern Analysis and Machine Intelligence 16(1), 13–23 (1994)
5. Horn, B.: Extended Gaussian Image. Proceedings of the IEEE 72, 1671–1686 (1984)
6. Kang, S.B., Ikeuchi, K.: The complex EGI: A new representation for 3D pose determination. IEEE Transactions on Pattern Analysis and Machine Intelligence 15(7), 707–721 (1993)
7. Chua, C.S., Jarvis, R.: Point Signatures: A New Representation for 3D Object Recognition. Int'l J. Computer Vision 25(1), 63–85 (1997)
8. Mian, A.S., Bennamoun, M., Owens, R.A.: Automatic Correspondence for 3D Modeling: An Extensive Review. Int'l J. Shape Modeling (2005)
9. Stein, F., Medioni, G.: Structural Indexing: Efficient 3D Object Recognition of a Set of Range Views. IEEE Transactions on Pattern Analysis and Machine Intelligence 17(4), 344–359 (1995)
10. Katsoulas, D., Kosmopoulos, D.I.: Box-like Superquadric Recovery in Range Images by Fusing Region and Boundary Information. In: Proceedings of the 18th International Conference on Pattern Recognition (ICPR 2006), vol. 1, pp. 719–722 (2006)
11. Haralick, R.M., Shapiro, L.G.: Computer and Robot Vision. Addison-Wesley, Reading (1993)
12. Johnson, A.E.: Spin Image: A Representation for 3D Surface Matching. PhD Thesis, Carnegie Mellon University (1997)
13. Correa, S., Shapiro, L.: A New Signature-Based Method for Efficient 3D Object Recognition. Proc. IEEE Conf. Computer Vision and Pattern Recognition 1, 769–776 (2001)
14. Yamany, S.M., Farag, A.: Freeform Surface Registration Using Surface Signatures. Proc. Int. Conf. on Computer Vision 2, 1098–1104 (1999)
15. Mian, A.S., Bennamoun, M., Owens, R.A.: Three-Dimensional Model-Based Object Recognition and Segmentation in Cluttered Scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(10), 1584–1601 (2006)
16. Campbell, R.J., Flynn, P.J.: A Survey of Free-Form Object Representation and Recognition Techniques. Computer Vision and Image Understanding 81, 166–210 (2001)
17. Carmichael, O., Huber, D., Hebert, M.: Large Data Sets and Confusing Scenes in 3-D Surface Matching and Recognition. In: Proc. Int'l Conf. 3-D Digital Imaging and Modeling, pp. 358–367 (1999)
18. Johnson, A., Hebert, M.: Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes. IEEE Trans. Pattern Analysis and Machine Intelligence 21(5) (1999)
19. http://shape.cs.princeton.edu/benchmark
20. Nene, S.A., Nayar, S.K.: A Simple Algorithm for Nearest Neighbor Search in High Dimensions. IEEE Trans. Pattern Analysis and Machine Intelligence, 999–1003 (1997)
21. Forsyth, D.A., Ponce, J.: Computer Vision: A Modern Approach. Prentice-Hall, Englewood Cliffs (2000)
Integrating Vision and Language: Semantic Description of Traffic Events from Image Sequences Takashi Hirano1, Shogo Yoneyama1, Yasuhiro Okada1, and Yukio Kosugi2 1
Mitsubishi Electric Corporation, Information Technology R & D Center {Hirano.Takashi@eb,Shogo.Yoneyama@dn, Yasuhiro.Okada@dh}.MitsubishiElectric.co.jp 2 Tokyo Institute of Technology, Interdisciplinary Graduate School of Science and Engineering
[email protected]
Abstract. We propose an event extraction method for traffic image sequences. The method extracts moving objects and their trajectories from image sequences recorded by a stationary camera. These trajectories are mapped to a 3D virtual space, and physical parameters such as velocity and direction are estimated. Traffic events are then extracted from these trajectories and physical parameters using case-frame analysis from the field of natural language processing. Our method makes it easy to describe events and to detect both general traffic events and abnormal situations. Experimental results on an actual intersection traffic image sequence show the effectiveness of the method.
1 Introduction
There is a demand to prevent traffic accidents before they happen. In Japan, for example, driving safety support systems aimed at preventing traffic accidents are being deployed. Such a system detects cars, pedestrians, and bicycles whose positions are hard for a driver to recognize, and sends the information to the driver through the car navigation equipment. These systems require a method to automatically detect and organize traffic events from image sequences. Many studies on the detection of moving objects such as cars and pedestrians have been carried out over the years, and some of them are suitable for practical use. In recent years, research has addressed not only the detection of such moving objects but also the understanding of traffic situations. For example, H. Kollnig and H.-H. Nagel proposed a method that chooses the verb that appropriately expresses the movement of vehicles for the purpose of traffic surveillance [1][2]. However, the method extracts only simple events by judging whether the position and posture of moving objects match pre-determined specific conditions (equations). On the other hand, Herzog et al. proposed a traffic event extraction method that incorporates linguistic processing in the VITRA (VIsual TRAnslator) project [3][4]. In this method, they tried to describe the actions of pedestrians crossing a road by
sentences. Here, tedious expressions are reduced by using context, and sentences are generated continuously as the image sequence changes. However, these sentences are generated using simple templates. As pioneering research applying the semantics of natural language to image understanding, Okada et al. translated the motion of simple graphics into sentences [5]. Moreover, A. Kojima et al. applied this idea to the analysis of human behavior in indoor image sequences [6]. Our proposed method builds on these previous works. It extracts traffic events from intersection traffic image sequences based on case frame analysis, which is used in the field of natural language processing. In the field of traffic surveillance, it is necessary not only to detect general traffic events but also to detect abnormal situations. Our method can extract abnormal situations by using the knowledge data (the definitions of traffic events) as constraining knowledge. Moreover, in order to detect many types of traffic events or to set up cameras at various sites, editing the knowledge data should be easy. To address this, the proposed method describes the knowledge data as simple text.
2 Proposed Method
We describe the overview of the proposed method in Section 2.1. Section 2.2 presents the extraction of moving objects and their trajectories. The traffic event extraction algorithm based on these trajectories and the detection of abnormal situations are detailed in Sections 2.3 and 2.4.
2.1 Overview
Figure 1 shows the overview of the proposed method. Traffic image sequences are acquired from a stationary camera set up on a pole near the intersection. In the traffic image analysis, moving objects and their trajectories are detected by image processing. Finally, the semantic analysis detects traffic events from these trajectories by referring to a knowledge database. In addition to the traffic events registered in the knowledge database, traffic events that do not match the knowledge database are detected as abnormal situations.
2.2 Traffic Image Analysis
Figure 2 shows the processing flow of the traffic image analysis. Moving objects are detected in each frame by estimating a background image. Next, a trajectory is extracted by tracking a moving object between consecutive frames. The point (x, y) on the image is then converted into a point (X, Y, Z) in a 3D virtual space, and physical parameters such as acceleration and velocity are calculated. Finally, the object type (car, bicycle, pedestrian, dog) is judged from the size of the object.
(1) Estimation of background image. Tuzel et al. proposed a Bayesian learning method to capture the background statistics of a dynamic scene [7][8]. They model each pixel as a set of layered normal distributions that compete with each other. Using a recursive Bayesian
learning mechanism, the model estimates not only the mean and variance but also the probability distributions of the mean and covariance of each layer. We use this algorithm in the traffic image analysis.
(2) Tracking of moving objects. We apply an object tracking algorithm designed for low-frame-rate video in which objects have fast motion [9]. Conventional mean-shift tracking fails when the displacement of an object is large and its regions in consecutive frames do not overlap. This algorithm overcomes the problem by using multiple kernels centered at high-motion areas.
(3) Coordinate conversion and estimation of physical parameters. In the coordinate conversion processing, the domain around the intersection in the two-dimensional image is mapped into a 3D virtual space normalized to 100x100x100. From the trajectory in the normalized virtual space, the values of the physical parameters of the moving object at time t are calculated. We use the six physical parameters shown in Figure 3(b).
(4) Detection of object type. The object type (car, bicycle, person, or dog) is distinguished. Here, for simplicity, the object type is detected from the normalized size of the moving object.
Fig. 1. Overview of the proposed method
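As a concrete illustration of step (3) above, the sketch below estimates velocity, acceleration, and direction of movement from a trajectory in the normalized virtual space. It is only a plausible rendering: the paper does not give the finite-difference formulas, so the function name, the time step dt, and the restriction to the X-Y plane are assumptions made for illustration.

```python
import math

def physical_parameters(traj, dt=1.0):
    """Estimate velocity, acceleration, and heading from a trajectory.

    traj: list of (X, Y) positions in the normalized virtual space
    (the Z coordinate is ignored here for simplicity).
    """
    params = []
    for t in range(2, len(traj)):
        (x0, y0), (x1, y1), (x2, y2) = traj[t - 2], traj[t - 1], traj[t]
        v1 = math.hypot(x1 - x0, y1 - y0) / dt               # speed at t-1
        v2 = math.hypot(x2 - x1, y2 - y1) / dt               # speed at t
        accel = (v2 - v1) / dt                               # acceleration
        theta = math.degrees(math.atan2(y2 - y1, x2 - x1))   # direction of movement
        params.append({"t": t, "v": v2, "alpha": accel, "theta": theta,
                       "X": x2, "Y": y2})
    return params
```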
2.3 Semantic Analysis of Traffic Events
2.3.1 Description of Traffic Events with Case Frame Analysis
We describe traffic events based on the case grammar used in natural language processing. Case grammar was proposed by Fillmore [10]. It describes the relation between a verb and the other components (typically nouns) of a single
Fig. 2. Processing flow and sample images for traffic image analysis
proposition. In the field of machine translation, it is known that the number of sentence patterns decreases when relations are described through the verb. Table 1 shows the case names used in Fillmore's case grammar. For example, the sentence "John broke the stick" is described by case grammar as follows: [broke] Predicate verb - [John] Agent - [the stick] Object. Here, each word has selectional restrictions, expressed by the semantic category of the word (Table 2). For instance, only a human or a living thing can fill the Agent case of the verb [broke], and only a physical object can fill the Object case. We use the case names (AG, CAG, LOC) and the semantic categories (PHYSOBJ, LIVING, HUM, PHYSLOC) for traffic event detection. Ivanov and Bobick proposed a simplified grammar to describe interactions in a parking lot [11]. The advantages of using Fillmore's case grammar for describing traffic events are as follows.
(1) Since case grammar expresses the deep semantic structure used in natural language processing, it is suitable for expressing the conceptual structure extracted from nonverbal content.
(2) Events are easy to describe, and the content is understandable to the editor.
(3) The number of sentence patterns decreases when relations are described through the verb, so a concept can be expressed with a small amount of description.

Table 1. Fillmore's case grammar [10]

Case name (label)      Definition
Agent (AG)             A person or entity causing a verb's action to be performed
Counter-Agent (CAG)    The force of resistance against which a verb's action is carried out
Object (OBJ)           An entity affected directly by a transitive verb's action
Instrument (INSTR)     An inanimate entity causally involved in a verb's action
Source (SO)            The place from which something moves
Goal (GO)              The place to which something moves
Location (LOC)         The location or spatial orientation of the state or action
Time (TIME)            The time of the event
Experiencer (EXPER)    A person or thing affected by a verb's action, replacing dative

Table 2. Semantic categories of words

Semantic category   Definition          Semantic category   Definition
PHYSOBJ             Physical object     PHYSACT             Physical action
LIVING              Living thing        MENTACT             Mental action
HUM                 Human               PTRANS              Physical movement
TIME                Time                ATTRTRANS           Attribute change
PHYSLOC             Physical location   BODACT              Body action
2.3.2 Knowledge Database
The knowledge database is used to detect traffic events from the trajectories and physical parameters of moving objects. It consists of four kinds of data, chosen for extensibility and readability (Figure 3), and is generated manually in advance. The data describe the following contents:
(a) Object data: defines whether an object can fill the agent, counter-agent, or location case, and describes the object name, semantic category, and attributes (e.g., size or area).
(b) Physical parameter definition: the list of physical parameters extracted in the image analysis process.
(c) Predicate verb definition: defines the relation between the physical parameters and a predicate verb.
(d) Case frame data: defines each traffic event by a case frame.
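To make the structure of the knowledge database concrete, here is a minimal sketch in Python of how the four kinds of data might be encoded. The object attributes and semantic categories are taken from Figure 3; the dictionary layout, the lambda-based verb conditions, and the names OBJECT_DATA, PREDICATE_VERBS, and CASE_FRAMES are purely illustrative assumptions (the authors store the knowledge data as text and C code, not Python).

```python
# Object data (panel (a) of Figure 3): name, semantic category, attributes.
OBJECT_DATA = {
    "CAR":        {"category": "PHYSOBJ", "width": 5, "height": 5, "length": 10},
    "PEDESTRIAN": {"category": "HUM",     "width": 1, "height": 5, "length": 1},
    "BICYCLE":    {"category": "PHYSOBJ", "width": 1, "height": 5, "length": 5},
    "DOG":        {"category": "LIVING",  "width": 1, "height": 1, "length": 1},
    "Footway":    {"category": "PHYSLOC"},   # area in virtual space omitted
    "Roadway":    {"category": "PHYSLOC"},
}

# Predicate verb definitions (panel (c)), simplified: each verb maps the
# physical parameters of the agent (and counter-agent) to a truth value.
PREDICATE_VERBS = {
    "RUN":  lambda ag, cag=None: ag["v"] > 0,
    "STOP": lambda ag, cag=None: ag["v"] == 0,
}

# Case frame data (panel (d)): admissible semantic categories per case.
CASE_FRAMES = [
    {"verb": "RUN",  "AG": {"PHYSOBJ", "LIVING", "HUM"}, "LOC": {"PHYSLOC"}},
    {"verb": "STOP", "AG": {"PHYSOBJ", "LIVING", "HUM"}, "LOC": {"PHYSLOC"}},
]
```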
2.3.3 Case Frame Matching of Image Sequences
The event at time t is detected according to the following procedure:
(i) Select a case frame from the case frame data in Figure 3(d).
(Figure 3 comprises four panels: (a) object data giving the name, semantic category, and attributes of [CAR] (PHYSOBJ, width 5, height 5, length 10), [PEDESTRIAN] (HUM, 1x5x1), [BICYCLE] (PHYSOBJ, 1x5x5), [DOG] (LIVING, 1x1x1), and the areas [Footway] and [Roadway] (both PHYSLOC, defined by an area in the virtual space); (b) the physical parameter definitions: velocity v, acceleration α, position X, Y, Z, direction of movement θ, size W, H, and the distance of two objects dist(O1, O2); (c) the predicate verb (PREV) definitions, which express [RUN], [STOP], [ACCELERATE], [DECELERATE], [TURN RIGHT], [TURN LEFT], [COLLIDE], and [PASS] as threshold conditions on these parameters, e.g., [RUN]: v_AG > 0 for N consecutive frames, [TURN RIGHT]: θ_AG(t) − θ_AG(t−N) > 20; and (d) the case frame data, which assign the admissible semantic categories of the Agent, Counter-Agent, and Location cases to each predicate verb.)
Fig. 3. Contents of knowledge database
(ii) Calculate the relevance rate P_AG between AG and a moving object i at time t. When the semantic category of AG is the same as the semantic category of the moving object i, P_AG = 1.0. If the semantic category of the moving object i is unknown, P_AG = 0.5. Otherwise P_AG = 0.0. For instance, when the moving object i is recognized as [CAR], its semantic category is PHYSOBJ; the AG of the predicate verb [RUN] also admits PHYSOBJ, so P_AG = 1.0.
(iii) Calculate the relevance rate P_PREV between a predicate verb and the moving object i. When the physical parameters of the moving object i satisfy the corresponding equation in Figure 3(c), P_PREV = 1.0; otherwise P_PREV = 0.
(iv) Calculate the relevance rate P_LOC. Objects with the semantic category PHYSLOC are selected from the object data in Figure 3(a). When the position of the moving object i lies inside the area of one of these objects, P_LOC = 1.0; otherwise P_LOC = 0.5.
(v) When the case frame has a CAG, the relevance rate P_CAG between a moving object j (i ≠ j) and CAG is calculated in the same way as in (ii).
(vi) Finally, an evaluation value E is calculated according to Equation (1). When the relevance rates are high in all cases, E becomes 0; when a relevance rate is low, E takes a negative value.
Steps (i)-(vi) are performed for every combination of case frames and moving objects. When E > -1, the event is considered detected.
E = \delta_{PREV} \log(P_{PREV}) + \delta_{AG} \log(P_{AG}) + \delta_{CAG} \log(P_{CAG}) + \delta_{LOC} \log(P_{LOC}), \quad
\delta_C = \begin{cases} 1 & \text{if the case frame has the case } C \\ 0 & \text{otherwise} \end{cases}   (1)
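A minimal sketch of steps (ii)-(vi) and Equation (1) is given below. Only the relevance-rate values (1.0 / 0.5 / 0.0) and the log-sum come from the paper; the function signature, the verb_test callable, and the in_defined_area flag are assumptions introduced for illustration.

```python
import math

def match_case_frame(frame, verb_test, agent, counter_agent=None):
    """Evaluate one case frame for one moving object (steps (ii)-(vi), eq. (1)).

    frame: {"AG": {...categories...}, "LOC": {...}, "CAG": {...} (optional)}
    verb_test: callable returning True when the object's physical parameters
               satisfy the predicate verb condition of Figure 3(c).
    agent / counter_agent: dicts holding "category" and "in_defined_area".
    """
    def p_cat(allowed, obj):
        if obj["category"] in allowed:
            return 1.0
        return 0.5 if obj["category"] == "UNKNOWN" else 0.0

    p_ag = p_cat(frame["AG"], agent)                            # step (ii)
    p_prev = 1.0 if verb_test(agent, counter_agent) else 0.0    # step (iii)
    p_loc = 1.0 if agent.get("in_defined_area") else 0.5        # step (iv)
    terms = [p_prev, p_ag, p_loc]
    if "CAG" in frame and counter_agent is not None:            # step (v)
        terms.append(p_cat(frame["CAG"], counter_agent))

    if any(p == 0.0 for p in terms):                            # log(0): frame rejected
        return float("-inf")
    return sum(math.log(p) for p in terms)                      # step (vi): E of eq. (1)

# An event is reported when E > -1; values with -1 < E < 0 flag abnormal situations.
```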
2.4 Detection of Abnormal Situations
In traffic event analysis there is a demand to detect abnormal situations (e.g., "The car collides with the car"), and some methods have been proposed. However, most of them require rules for detecting the abnormal situations in advance, so it is difficult to detect unexpected abnormal situations. We treat the knowledge database as constraining knowledge: an event that does not agree with the constraining knowledge is detected as an abnormal situation. For instance, suppose we define the traffic event "PREV [RUN] - AG [PHYSOBJ] - LOC [PHYSLOC]" in the knowledge database. When a car jumps out of the roadway and runs on an undefined area, this yields P_LOC = 0.5. If the extracted event has a value -1 < E < 0, it is judged as an unexpected abnormal event.
3 Experimental Results
3.1 Simulation with Virtual Traffic Sequences
We applied the method to three scenarios. These scenarios consist of artificial trajectories that simulate general traffic events such as a right turn, acceleration, and passing each other. In addition, abnormal situations that are difficult to acquire in real traffic image sequences are simulated. Figure 4 shows a scenario including abnormal situations and the event extraction result. All general events were detected correctly (e.g., "A car turns right on the roadway."). Moreover, the expected abnormal situation "A bicycle collides with a bicycle on the footway." was detected correctly, and the unexpected abnormal situation "A car runs on an unidentified place" was also detected.
3.2 Results for an Intersection Traffic Image Sequence
We evaluated the accuracy of the method with intersection traffic sequence images acquired from a stationary camera set up on a footbridge about 5 m in height. The sequence contains 5409 frames, covering about 6 minutes, with 81 cars and bicycles (motorcycles) and 5 pedestrians. Table 3 shows the specification of the camera and settings. Here, the traffic image analysis
(Figure 4 shows the simulated trajectories of a car and two bicycles (time steps T0-T21) together with the per-time-step event log, e.g., "EVENT [CAR-(OID:0)] [RUN] on the [Roadway] E=0.00". At TIME 15 the car leaves the roadway and "EVENT [CAR-(OID:0)] [RUN] on the unidentified place E=-0.22" is reported, and at TIME 20 the collision of the two bicycles on the footway triggers an alert of an abnormal situation.)
Fig. 4. An example of virtual traffic sequence and traffic event extraction result
process is applied to all frame images, and traffic events are extracted once every 15 frames (once per second). The image analysis process extracted 97 moving objects, and a total of 1359 traffic events were extracted from the image sequences. The traffic event extraction accuracy of the method is 73.8% (Table 4). Most traffic event extraction errors occurred due to moving object extraction errors in the image analysis (15.2%). Figure 5 shows examples of extracted traffic events. As shown in the figure, some wrong events are extracted. For instance, an erroneous event was detected when the traffic signal changed. In another case, a wrong event was detected because several cars were extracted as one object. In addition, a wrong abnormal event, "A car collided with a car on the roadway", was detected because one car was divided into two objects.

Table 3. Specification of the image acquisition system

Camera device     DCR-TRV900 (SONY)
Scanning system   Progressive Scan
Mode              Automatic (Brightness, Shutter speed, White balance), Zoom: Off
File format       AVI (DVI compression), 15 frames / sec
Size / Color      720 x 480 pixels, 24-bit color
Table 4. Traffic event extraction results

Correct   73.8 %
Error     26.2 %  (15.2%: moving object extraction errors in the image analysis; 11.0%: verb estimation errors in the semantic analysis, e.g., [PEDESTRIAN] [RUN] -> [PEDESTRIAN] [STOP])
(Figure 5(a) shows correctly extracted traffic events, each annotated with its AG, PREV, CAG, and LOC cases, such as "The [CAR] [TURN RIGHT]s on the [Roadway] E=0.0", "The [CAR] [STOP]s on the [Roadway] E=0.0", "The [BICYCLE] [RUN]s on the [Roadway] E=0.0", "The [CAR] [TURN LEFT]s on the [Roadway] E=0.0", and "The [PEDESTRIAN] [PASS]es the [PEDESTRIAN] on the [Footway] E=0.0". Figure 5(b) shows erroneous traffic events caused by a traffic signal change, by several cars being detected as one moving object, and by a car being divided into two moving objects.)
Fig. 5. Examples of traffic event extraction results
3.3 Size of the Knowledge Database
Table 5 shows the data size of the knowledge database. Only 215 lines (text and C language) are used to describe the knowledge data of Figure 3. Since the traffic events are described by case frames, the data size is small and editing of the knowledge is easy.

Table 5. Data size and type of knowledge data

Data                                                        Size                                 Type
Object data                                                 54 lines (about 9 lines / object)    text
Physical parameter definition & Predicate verb definition   116 lines (about 14.5 lines / verb)  C language program
Case frame data                                             45 lines (about 5.6 lines / verb)    text
4 Conclusion
We have proposed a traffic event extraction method for traffic image sequences. Experiments showed good results on real traffic image sequences. However, wrong abnormal events are sometimes detected because of object extraction errors; we will improve the method to overcome this problem in the future. The detection and concise description of more complex traffic events is also future work.
References
1. Kollnig, H., Nagel, H.-H., Otte, M.: Association of motion verbs with vehicle movements extracted from dense optical flow fields. In: Eklundh, J.-O. (ed.) ECCV 1994. LNCS, vol. 801, pp. 338–347. Springer, Heidelberg (1994)
2. Nagel, H.-H.: A vision of 'vision and language' comprises action: An example from road traffic. Artificial Intelligence Review 8, 189–214 (1994)
3. Herzog, G., Wazinski, P.: Visual translator: Linking perceptions and natural language descriptions. Artificial Intelligence Review 8, 175–187 (1994)
4. Herzog, G., Rohr, K.: Integrating vision and language: Towards automatic description of human movements. In: Proc. 19th Annual German Conf. on Artificial Intelligence, pp. 257–268 (1995)
5. Okada, N.: Integrating vision, motion, and language through mind. Artificial Intelligence Review 9, 209–234 (1996)
6. Kojima, A., Tahara, N., Tamura, T., Fukunaga, K.: Natural Language Description of Human Behavior from Image Sequences. IEICE J81-D-II(8), 1867–1875 (1998) (in Japanese)
7. Porikli, F., Tuzel, O.: Bayesian Background Modeling for Foreground Detection. In: ACM International Workshop on Video Surveillance and Sensor Networks (VSSN), pp. 55–58 (November 2005)
8. Tuzel, O., Porikli, F., Meer, P.: A Bayesian Approach to Background Modeling. In: IEEE Workshop on Machine Vision for Intelligent Vehicles (MVIV), vol. 3, p. 58 (June 2005)
9. Porikli, F., Tuzel, O.: Multi-Kernel Object Tracking. In: IEEE International Conference on Multimedia and Expo (ICME), pp. 1234–1237 (2005)
10. Fillmore, C.J.: The case for case. In: Bach, E., Harms, R. (eds.) Universals in Linguistic Theory. Holt, Rinehart and Winston (1968)
11. Ivanov, Y.A., Bobick, A.F.: Recognition of Visual Activities and Interactions by Stochastic Parsing. IEEE Trans. on Pattern Analysis and Machine Intelligence 22(8), 852–872 (2000)
Rule-Based Multiple Object Tracking for Traffic Surveillance Using Collaborative Background Extraction Xiaoyuan Su, Taghi M. Khoshgoftaar, Xingquan Zhu, and Andres Folleco Computer Science and Engineering, Florida Atlantic University, Boca Raton, FL 33431
[email protected], {taghi,xqzhu,andres}@cse.fau.edu
Abstract. In order to address the challenges of occlusions and background variations, we propose a novel and effective rule-based multiple object tracking system for traffic surveillance using a collaborative background extraction algorithm. The collaborative background extraction algorithm collaboratively extracts a background from multiple independent extractions to remove spurious background pixels. The rule-based strategies are applied for thresholding, outlier removal, object consolidation, separating neighboring objects, and shadow removal. Empirical results show that our multiple object tracking system is highly accurate for traffic surveillance under occlusion conditions.
1 Introduction
Multiple object tracking (MOT) is important for visual surveillance and event classification tasks [1]. However, due to challenges such as background variation, occlusion, and object appearance variation, MOT is generally difficult. In the case of traffic surveillance, background variations in terms of illumination changes, small motions in the environment, weather, and shadow changes; occlusions caused by vehicles being overshadowed or blocked by neighboring vehicles, trees, or constructions; and vehicle appearance changes, i.e., different sizes of the same vehicle in different video frames, all contribute to inaccurate visual tracking. Among traditional visual tracking methods, feature-based tracking detects features in a video frame and searches for the same features nearby in subsequent frames. Kalman filtering [2] uses a linear function of parameters with respect to time and assumes white noise with a Gaussian distribution; however, using the Kalman filter to predict object states cannot be applied to objects in occlusion [3]. Particle filtering [4] is appealing for MOT because of its ability to maintain multiple hypotheses, but its direct application to multiple object tracking is not feasible. For traffic surveillance videos, which generally have a stationary background, it is important to segment moving vehicles from the background, either when viewing the scene from a fixed camera or after stabilization of the camera motion. With the assumption of a stationary camera, we can simply threshold the difference of intensities between the current image frame and the background image, I(x,y) − I_bg(x,y), to segment the moving objects from the background. However, due to background variations, this simple approach may not work well in general. In previous work, normal
(Gaussian) distributions, linear prediction and adaptation [5], and hysteresis thresholding [6] have been investigated to model background changes. We propose a rule-based multiple object tracking system using a collaborative background extraction algorithm for traffic surveillance, which is easy to implement and highly effective in handling occlusions through removing outliers and shadows, consolidating objects, and separating occluded vehicles. The collaborative background extraction algorithm collaboratively extracts a background from several independent extractions, which effectively removes spurious background pixels and adaptively reflects environment changes. Section 2 presents our framework for the collaborative background extraction algorithm and the rule-based multiple object tracking system for traffic surveillance. Experimental results and conclusions are given in Section 3 and Section 4.
2 Framework
Our multiple object tracking system consists of the following procedure: adaptively extracting backgrounds using collaborative background extraction, generating binary images by differencing frames with the background, applying the rule-based tracking strategies, and finally recording the features of the tracked objects.
2.1 Collaborative Background Extraction
We propose a non-Gaussian, single-threshold background extraction method called collaborative background extraction, an adaptive background extraction algorithm using a collaborative strategy. With the assumption that the background will not change significantly within a few seconds, we extract several backgrounds alternately over a short period of time, e.g., every 60 frames (2 seconds for 30 fps videos), and then integrate these backgrounds into one. By updating the background every few seconds, we adaptively model the background changes. As shown in Fig. 1, every single background extraction produces a background with spurious points in different locations (Fig. 1(a)~(d); the black points in the first four background images are intentionally marked foreground points). With the help of collaborative extraction, the final background (Fig. 1(e)) is almost impeccable. We collaboratively extract one background from four independent extractions in order to produce a reliable background from multiple single extractions. We use the average intensity value of the labeled background pixels from the four extractions, and foreground pixels (with values of 0) are automatically replaced unless none of the four extractions classifies the pixel as background (Equation 1).
bg_k(a,b) = \begin{cases} \dfrac{1}{|I|} \sum_{i \in I} bg_{k,i}(a,b), & I = \{\, i \mid bg_{k,i}(a,b) \neq 0,\; i = 1,2,3,4 \,\} \\ 0, & |I| = 0 \end{cases}   (1)
where k is the starting frame number of the background extraction, i indexes the four independent background extractions, and I is the set of extractions with a non-zero value at pixel (a,b). For example, for a pixel (a,b) with bg_{k,1}(a,b)=0, bg_{k,2}(a,b)=0, bg_{k,3}(a,b)=20, and bg_{k,4}(a,b)=30, we have |I|=2 and bg_k(a,b) = (20+30)/2 = 25.
Fig. 1. Collaborative background extraction for a traffic surveillance video. (top row) (a) background extracted from frames 1, 5, 9, …, 57; (b) background extracted from frames 2, 6, 10, …, 58; (c) background extracted from frames 3, 7, 11, …, 59; (bottom row) (d) background extracted from frames 4, 8, 12, …, 60; (e) the final background resulting from the collaborative multiple background extractions, with spurious background pixels removed.
For each individual background extraction, we use a small threshold on the intensity difference (2 in our experiments) to determine background and foreground, assigning 0 to a foreground pixel and an average intensity value to a background pixel (Equation 2). We do not threshold immediately consecutive frames; instead, we calculate the intensity difference between frames separated by a gap of four frames. This strategy is tailored to traffic videos, in which fast-moving vehicles have generally moved out of their previous locations four frames later.
bg_{k,i}(a,b) = \begin{cases} \dfrac{1}{|S|} \sum_{s \in S} \dfrac{I_s(a,b) + I_{s+4}(a,b)}{2}, & S = \{\, s \mid |I_s(a,b) - I_{s+4}(a,b)| \le 2 \,\} \\ 0, & |S| = 0 \end{cases}   (2)
where k is the starting frame number of the background extraction, i is the index of the four independent background extractions, and s ∈ [k+i-1 : 4 : k+i+51], meaning s runs from k+i-1 to k+i+51 with an increment of four in each iteration; the value 51 follows from a total of 60 consecutive frames (adjustable according to the actual video) for each single background extraction. For example, given k=1, i=4, we have s ∈ {4, 8, …, 56}, so we calculate |I_56(a,b) - I_60(a,b)| for s=56. S is the set of frame indices s for which the intensity difference at pixel (a,b) satisfies |I_s(a,b) - I_{s+4}(a,b)| ≤ 2. During the iterations over [k+i-1 : 4 : k+i+51], whenever |I_s(a,b) - I_{s+4}(a,b)| ≤ 2 holds, we take the average intensity of frames s and s+4, I_ave(s,s+4) = [I_s(a,b) + I_{s+4}(a,b)]/2. For example, if this condition holds on three occasions for k=1, i=1, at s=21, 33, and 53, then bg_{1,1}(a,b) = [I_ave(21,25) + I_ave(33,37) + I_ave(53,57)]/3. If none of the intensity differences is less than or equal to 2, then bg_{1,1}(a,b)=0. The four independent background extractions are produced by alternately thresholding the frames, e.g., the 1st single background is extracted from frames 1, 5, …, 57, and the 4th from frames 4, 8, …, 60 (Fig. 1).
The detailed collaborative background extraction algorithm is described in Fig. 2. In our implementation, we adaptively extract backgrounds every 60 frames; this parameter is adjustable according to the actual video clips.

Algorithm: Collaborative Background Extraction
  Input:  k = starting frame number; total_fm = total frame number of the video;
          FN = 60 (total consecutive frames for each independent background extraction);
          Th = 2 (threshold)
  Output: extracted background files bg_k (for each starting frame number k)
  begin
    for each starting frame k = 1, 1+FN, 1+2*FN, ... up to total_fm - FN
      compute the four independent backgrounds bg_{k,i}, i = 1..4, by Equation (2)
      integrate them into bg_k by Equation (1)
  end
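For reference, a compact NumPy sketch of Equations (1) and (2) is shown below. The frame indexing (a Python list of grayscale arrays starting at frame k) and the function names are assumptions; only the 60-frame window, the 4-frame gap, and the threshold of 2 come from the paper.

```python
import numpy as np

def single_background(frames, i, fn=60, gap=4, th=2):
    """One independent extraction (equation (2)).

    frames: list of grayscale frames (2-D float arrays) starting at frame k;
    i in 1..4 selects the interleaved subsequence.
    """
    h, w = frames[0].shape
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for s in range(i - 1, fn - gap, gap):          # s, s+4, ... within the window
        a, b = frames[s], frames[s + gap]
        stable = np.abs(a - b) <= th               # pixels deemed background
        acc[stable] += (a[stable] + b[stable]) / 2.0
        cnt[stable] += 1
    bg = np.zeros((h, w))
    np.divide(acc, cnt, out=bg, where=cnt > 0)     # stays 0 where never stable
    return bg

def collaborative_background(frames):
    """Integrate the four independent extractions (equation (1))."""
    bgs = np.stack([single_background(frames, i) for i in range(1, 5)])
    valid = bgs != 0
    n = valid.sum(axis=0)
    summed = np.where(valid, bgs, 0.0).sum(axis=0)
    return np.where(n > 0, summed / np.maximum(n, 1), 0.0)
```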
2.2 Rule-Based Multiple Object Tracking for Traffic Surveillance
After adaptively extracting backgrounds, our multiple object tracking for traffic surveillance is conducted with the following rule-based steps (see Fig. 3): (1) difference frames with the background to produce a binary image, (2) clean up the binary image
and consolidate objects using the outlier remover, hole remover, and strip removers, (3) separate neighboring vehicles and remove shadows, and (4) clean up the binary image again. We then extract and record the features of the tracked vehicles.
Differencing with Background. From the input frame I_t at time t, we create a binary image I_b by differencing its intensity values with those of the background image I_bg. We use a threshold value of 10, i.e., when |I_t(a,b) - I_bg(a,b)| > 10, I_b(a,b) = 1; otherwise, I_b(a,b) = 0.
Outlier Removal. As the objects we track are vehicles, they are expected to appear as connected blocks with solid areas (non-negligible heights and widths) in their binary images, whereas unconnected blobs much smaller than a car are considered outliers and need to be cleaned up (Fig. 4).
Hole Removal (Object Consolidation). When some portions of a vehicle have intensity values similar to the background, these portions are left as holes in the binary image after intensity differencing. Removing the holes connects the unnecessarily separated regions and consolidates the tracked objects. The hole removal algorithm is also given in Fig. 4 (the portion after the "//" notation).
Fig. 3. Steps of rule-based multiple object tracking system for traffic surveillance (top row) (a) highway traffic video I frame No. 139, (b) after differencing with the background, (c) after outlier removal and hole removal; (bottom row) (d) after object separation and shadow removals, (e) after outlier and hole removals again, (f) tracked multiple vehicles (with white squares in the geometric centers of the vehicles).
Strip Removal. For strip-shaped outliers/holes that a regular outlier/hole remover with small thresholds cannot remove (increasing the threshold may result in over-pruning), we instead use strip removers. If the pixels at the four corners of a rectangle with height h and width w centered at pixel (a,b) are all 0s (1s for hole strip removal), then each pixel in the rectangle is set to 0 (1 for hole strip removal) (see Fig. 5). To remove a horizontal strip, we set the height to a certain value and the width to 1 (the rectangle then reduces to two points), and similarly for a vertical strip; if both the width and the height are larger than 1, the strip remover removes both horizontal and vertical strips. The setting of the width/height is rule-based and adjustable.
Algorithm: Outlier / hole removal
  Input:  in_f = input binary image; thres = a threshold to determine outliers/holes
  Output: binary image with outliers/holes removed
  begin
    for each pixel in_f(a,b) in the binary image
      if (in_f(a,b) == 1)                          // in_f(a,b) == 0 for the hole remover
        search in four directions (up, down, left, right)
          until the pixel is 0 in each direction   // 1 for the hole remover
        compute the total distance to reach 0s:    // 1s for the hole remover
          total_dis = up_dis + down_dis + left_dis + right_dis
        if (total_dis <= thres)
          in_f(a,b) = 0                            // in_f(a,b) = 1 for the hole remover
  end
Fig. 4. Outlier removal algorithm and hole removal algorithm (the portion after the "//")
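A direct Python rendering of the Fig. 4 pseudocode is sketched below; the unified `target` parameter (1 for the outlier remover, 0 for the hole remover) is an assumption introduced to cover both variants in one function, and no optimization is attempted.

```python
def remove_small_blobs(img, thres, target=1):
    """Remove small blobs (target=1) or fill small holes (target=0), following Fig. 4.

    img: 2-D NumPy array of 0s and 1s.  For each pixel of the target value,
    the distances to the nearest non-target pixel in the four directions are
    summed; pixels with a small total are flipped.
    """
    out = img.copy()
    h, w = img.shape
    other = 1 - target
    for a in range(h):
        for b in range(w):
            if img[a, b] != target:
                continue
            total = 0
            for da, db in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                i, j, d = a + da, b + db, 0
                while 0 <= i < h and 0 <= j < w and img[i, j] == target:
                    i += da
                    j += db
                    d += 1
                total += d
            if total <= thres:
                out[a, b] = other
    return out
```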
Algorithm: Outlier / hole strip removal
  Input:  in_f = input binary image; h = height of a rectangle; w = width of the rectangle
  Output: binary image with strips removed
  begin
    for each pixel in_f(a,b) in the binary image
      if (in_f(a,b) == 1)                          // in_f(a,b) == 0 for the hole strip remover
        if the pixels at the four corners of the h*w rectangle centered at in_f(a,b) are all 0s   // 1s for the hole strip remover
          for each pixel (i,j) in the rectangle
            in_f(i,j) = 0                          // 1 for the hole strip remover
  end
Fig. 5. Outlier strip removal and hole strip removal (the portion after "//") algorithms
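Similarly, a hedged rendering of the Fig. 5 strip remover follows; again the `target` switch and the handling of image borders (pixels whose rectangle would fall outside the image are simply skipped) are assumptions.

```python
def remove_strips(img, h, w, target=1):
    """Strip remover of Fig. 5 on a 2-D NumPy array of 0s and 1s.

    If the four corners of the h-by-w rectangle centered at (a, b) all hold
    the opposite value, the whole rectangle is cleared (or filled).
    """
    out = img.copy()
    rows, cols = img.shape
    dh, dw = h // 2, w // 2
    other = 1 - target
    for a in range(dh, rows - dh):
        for b in range(dw, cols - dw):
            if img[a, b] != target:
                continue
            corners = (img[a - dh, b - dw], img[a - dh, b + dw],
                       img[a + dh, b - dw], img[a + dh, b + dw])
            if all(c == other for c in corners):
                out[a - dh:a + dh + 1, b - dw:b + dw + 1] = other
    return out
```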
Fig. 6. Separations of neighboring vehicles (green horizontal lines) and locations of where to start removing shadows (blue rods) and where to stop (red rods)
Shadow Removal. As our task is to track vehicles rather than shadows, we need to remove shadows effectively, especially when the shadows are longer than the widths of the vehicles. In the example of Fig. 3, we first locate where to start shadow removal (the blue rods in Fig. 6), protect the vehicles from mis-pruning (the red rods in Fig. 6), and then remove the shadows. Locating the corners where shadow removal starts and stops is rule-based: we need to detect the four kinds of inner corners illustrated in Fig. 7, where (a) is the corner for the start of leftwards shadows of vehicles, (b) is the corner for the start of leftwards shadows of vehicles outside the image (on the right border), and (c)(d) are the corners for removal protection during leftwards shadow pruning.
Fig. 7. Corner locations for shadow removal starts, removal protection stops, and vehicle separations (the corners are around the red stars in square 1 of each figure) (a) inner corner for starts of removing leftwards shadows (from vehicle), (b) inner corner for starts of removing leftwards shadows (from the right border), (c)(d) inner corners for leftwards shadow removal protection stops, (e) inner corner for vehicles separation
Except for Fig. 7(b), each of the inner corners obeys the following rules: (1) starting from the corner, the height of the shadow is smaller than the width or height of the vehicle; (2) for leftwards shadows, the corners of the starting points and stopping points for shadow removal have the three kinds of shapes illustrated in Fig. 7(a)(c)(d); (3) the square centered at the corner can be approximately composed of one background square (0s, e.g., square 1 in Fig. 7(a)) and three foreground squares (1s, e.g., squares 2, 3, 4 in Fig. 7(a)). For the third rule, we relax the constraint to allow greater flexibility: the number of background pixels in square 1 of Fig. 7 is 22%~28% (around 1/4) of the total pixels of the square covering squares 1, 2, 3, and 4, and the number of foreground pixels in the other three squares is 72%~78% (around 3/4) of the total. The size of the squares is rule-based and adjustable. The rule to locate the inner corner for leftwards shadow removal starting from the right border (Fig. 7(b)) is that, near the border, the ratio of height to width of the object in the binary image is less than 0.4 (adjustable according to the actual video). After locating the shadow corners, we remove the shadows leftwards until the shadows are removed or we meet removal protection stops (e.g., Fig. 7(c)(d)), which are vertical background strips separating the shadow from the vehicle. The above rule-based strategies are for leftwards shadow removal. When the traffic surveillance camera is horizontally placed and perpendicular to the traffic, the shadows fall into three cases: leftwards shadows, rightwards shadows, and short or no shadows. The last case is generally not considered for shadow removal. For rightwards shadows, the starts and stops of shadow removal are mirrored from those for leftwards shadows.
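The relaxed corner rule can be checked with a few lines of code; the sketch below tests one window orientation only. The half-window size n and the assumption that "square 1" is the top-left quadrant are illustrative choices (the paper distinguishes the corner shapes of Fig. 7(a)-(e) by where the background square sits, so the other shapes would be obtained by reflecting the window).

```python
def is_inner_corner(img, a, b, n):
    """Relaxed inner-corner test on a 2-D NumPy array of 0s and 1s.

    The (2n)x(2n) window centered near (a, b) is split into four n-by-n
    quadrants; the background quadrant must contain 22%-28% of the window's
    pixels as 0s and the other three quadrants 72%-78% of the pixels as 1s.
    """
    win = img[a - n:a + n, b - n:b + n]
    if win.shape != (2 * n, 2 * n):          # too close to the image border
        return False
    total = 4 * n * n
    sq1 = win[:n, :n]                        # assumed background quadrant
    bg1 = n * n - int(sq1.sum())             # 0s in square 1
    rest_fg = int(win.sum()) - int(sq1.sum())  # 1s in squares 2, 3, 4
    return (0.22 * total <= bg1 <= 0.28 * total and
            0.72 * total <= rest_fg <= 0.78 * total)
```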
Object Separation. As shadows are a major contributor to occlusion in traffic surveillance videos, we separate partially occluded vehicles from their neighbors in the binary image by creating narrow horizontal background strips (0s), with the help of the shadows. We consider only one corner shape for leftwards shadows to separate connected vehicles in the binary image (see Fig. 7(e)): the bottom of the shadow of one vehicle is adjacent to the top of another vehicle in the traffic image (see the example in Fig. 6). The rule to determine this corner is similar: it requires 22%~28% (around 1/4) 0s in the background square and 72%~78% (around 3/4) 1s in the foreground squares, in addition to the corner shape of Fig. 7(e). Object separation is conducted before shadow removal.
Feature Recording. After the above rule-based multiple object tracking strategies, it is easy to record the heights, widths, and geometric centers of each tracked vehicle. These features in consecutive video frames are used for further tracking tasks, such as track analysis, event classification, and abnormal behavior alarming. Examples of tracked vehicles are shown in Fig. 3(f) and Fig. 9.
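Feature recording itself is straightforward once the binary image has been cleaned; the sketch below uses SciPy's connected-component labelling for convenience, which is an implementation choice of ours rather than something the paper specifies.

```python
from scipy import ndimage

def record_features(binary):
    """Record width, height, and geometric center of each tracked blob.

    binary: cleaned 2-D NumPy array of 0s and 1s.
    """
    labels, n = ndimage.label(binary)
    features = []
    for k, sl in enumerate(ndimage.find_objects(labels), start=1):
        ys, xs = sl
        features.append({
            "id": k,
            "height": ys.stop - ys.start,
            "width": xs.stop - xs.start,
            "center": ((ys.start + ys.stop) // 2, (xs.start + xs.stop) // 2),
        })
    return features
```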
3 Experimental Results
We implemented our collaborative background extraction algorithm and rule-based multiple object tracking strategies for traffic surveillance and experimented on two highway traffic videos: one with heavy occlusions from shadows and frequently changing backgrounds (Video I), and another with barely any shadows and an almost fixed background (Video II) (see example frames in Fig. 8).
Fig. 8. (a) frame 84 of Video I, (b) frame 350 of Video II
We adaptively extracted backgrounds using the collaborative background extraction algorithm for both videos. We then applied all the steps of rule-based multiple object tracking to Video I, and used only the basic steps of background differencing, outlier removal, and object consolidation for Video II. A summary of the videos and the experimental results is given in Table 1. Video II has zero false alarms because it is occlusion-free and nothing else can be mistaken for a vehicle. Owing to the effectiveness of our rule-based strategies, we achieve a very low false alarm rate and a higher tracking accuracy on the heavily occluded video than on the occlusion-free video, which does not use our occlusion handling methods. Fig. 9 illustrates the situations in which our tracking system produces false alarms. The false alarms in Fig. 9(a) and Fig. 9(c) are caused by external shadows from the
Table 1. Tracking results on a heavily-occluded video (I) and an occlusion-free video (II)
Videos                     frame numbers  time duration  frame size  valid vehicles  correct tracking  hit rate  false alarm
I (occluded video)         440            29"            240*320     1684            1649              97.92%    1.37%
II (occlusion-free video)  500            33"            240*320     2088            1997              95.64%    0
right border that are not effectively removed by the rule-based tracking system; the false alarms in Fig. 9(b) are caused by the shadow of an exceptionally large vehicle and by the shadow of an external vehicle at the right border. However, in each of the neighboring frames, our tracking system achieves much better tracking accuracy (Fig. 9(d)(e)(f)). These examples indicate that our rule-based tracking system can produce inaccurate tracking where the rules do not apply; the false alarms can, however, be eliminated by adjusting the comprehensive rules customized for different surveillance environments. Adaptively tuning the rules to achieve the best tracking performance will be our future work. Our rule-based multiple object tracking system for traffic surveillance is comparable with some state-of-the-art tracking algorithms, in terms of its high hit rate (97.92%) and low false alarm rate (1.37%). For example, Bai et al. applied feature-based and appearance-based approaches for traffic tracking and reported a best result of a 9.5% false alarm rate and an 89.33% hit rate [7]. Beleznai et al. [8] implemented a fast mean shift model to track human beings and achieved a 94% hit rate but with a 37% false alarm rate, which is better than blob-based approaches [9].
Fig. 9. The example frames that our tracking system produces false alarms and the more accurately tracked vehicles in their neighboring frames of video I (the white squares are the geometric centers of the tracked vehicles). (top row) (a) frame 114, (b) frame 364, (c) frame 414; (bottom row) (d) frame 115, (e) frame 365, (f) frame 415.
4 Conclusions
Multiple object tracking is known to be a difficult task, especially in the presence of occlusions and background variations. In this paper, we collaboratively extract a background from multiple independently extracted backgrounds and effectively remove spurious background pixels. With adaptive background updates using the collaborative background extraction algorithm, multiple object tracking can be simplified and based on the binary images obtained by thresholding the video frames against the backgrounds. In the presence of occlusions, our rule-based tracking strategies effectively remove outliers, consolidate objects by removing holes, separate partially occluded objects based on occlusion features, and remove shadows correctly by using corner detection and removal protection rules. Empirical results on highway traffic surveillance videos show that our rule-based multiple object tracking system using the collaborative background extraction algorithm is very accurate. It has a higher hit rate on a heavily occluded video than on an occlusion-free video processed without our occlusion-handling strategies, and it has a very low false alarm rate. Our multiple object tracking system is comparable with some state-of-the-art tracking algorithms.
References
[1] Jung, Y.-K., Ho, Y.-S.: Multiple Object Tracking under Occlusion Conditions. In: Visual Communications and Image Processing 2000, Proceedings of SPIE, vol. 4067, pp. 1011–1020 (2000)
[2] Kalman, R.E.: A New Approach to Linear Filtering and Prediction Problems. Transactions of the ASME–Journal of Basic Engineering 82(D), 35–45 (1960)
[3] Weng, S.-K., Kuo, C.-M., Tu, S.-K.: Video object tracking using adaptive Kalman filter. Journal of Visual Communication and Image Representation 17, 1190–1208 (2006)
[4] Gustafsson, F., Gunnarsson, F., Bergman, N., Forssell, U., Jansson, J., Karlsson, R., Nordlund, P.-J.: Particle filters for positioning, navigation, and tracking. IEEE Transactions on Signal Processing 50(2), 425–437 (2002)
[5] Stauffer, C., Grimson, W.E.L.: Adaptive Background Mixture Models for Real-Time Tracking. In: IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 246–252 (1999)
[6] Kumar, P., Ranganath, S., Huang, W.: Queue based Fast Background Modelling and Fast Hysteresis Thresholding for Better Foreground Segmentation. In: Proceedings of the 4th International Conference on Information, Communications and Signal Processing and Pacific Rim Conference on Multimedia, vol. 2, pp. 743–747 (2003)
[7] Bai, L., Tompkinson, W., Wang, Y.: Computer vision techniques for traffic flow computation. Pattern Analysis and Applications 7, 365–372 (2005)
[8] Beleznai, C., Frühstück, B., Bischof, H.: Human Tracking by Fast Mean Shift Mode Seeking. Journal of Multimedia 1(1) (2006)
[9] Yang, T., Pan, Q., Li, J.: Real-time multiple objects tracking with occlusion handling in dynamic scenes. In: IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 970–975 (2005)
A Novel Approach for Iris Recognition Using Local Edge Patterns Jen-Chun Lee1, Ping S. Huang2, Chien-Ping Chang1, and Te-Ming Tu1 1
Department of Electrical and Electronic Engineering, Institute of Technology, National Defense University, Taoyuan, Taiwan 2 Department of Electronic Engineering, Ming Chuan University, Taoyuan, Taiwan {g980301,cpchang,tutm}@ccit.edu.tw
Abstract. This paper presents an effective approach for iris recognition by analyzing the iris patterns. We propose an iris classification method that divides the normalized iris image into several regions to avoid the iris image with several noise factors (eyelids and eyelashes) and reduce the error rates. In every region, effective features are extracted by the proposed method of local edge pattern (LEP) for edge and corner detection. Feature vectors are linearly combined into a two dimensional matrix that represents every iris image for further recognition. Then 2D linear discriminant analysis (2DLDA) is used to identify the person. We use two public and freely available iris image databases for evaluation, organized in training and test sets respectively. Experimental results show that the recognition rate of the two iris image databases have achieved similar performance more than 98% and the proposed method has an encouraging performance and robustness.
1 Introduction Biometrics is inherently a more reliable and capable technique to identity human's authentication by his or her own physiological or behavioral characteristics. The features used for personnel identification by current biometric applications include facial features, fingerprints, iris, palm-prints, retina, handwriting signature, DNA, gait, etc. [1], [2] and the lower error recognition rate is achieved by iris recognition [3]. In general, iris recognition approaches can be roughly divided into four main categories: phase-based approaches [4], zero-crossing representation [5], intensity variation analysis based methods [6], [7] and texture analysis [8], [9]. Daugman’s algorithm [4] adopted the 2D Gabor filters to demodulate phase information of iris. Boles and Boashash [5] proposed the zero-crossing of 1D wavelet transform to represent distinct levels of a concentric circle for an iris image. L. Ma et al. [6], [7] proposed a local intensity variation analysis-based method and adopted the GaussianHermite moments and dyadic wavelet to characterize the iris image for recognition. Wildes et al. [8] analyzed the iris texture using the Laplacian pyramids to combine features from four different resolutions. More recently, Tisse et al. [9] constructed the analytic image to demodulate the iris texture. However, there are still limited capabilities in recognizing the identity of person accurately and efficiently, and much space is needed for improving performance in a practical viewpoint. The main G. Bebis et al. (Eds.): ISVC 2007, Part II, LNCS 4842, pp. 479–488, 2007. © Springer-Verlag Berlin Heidelberg 2007
difficulty of iris recognition is that it is hard to find apparent feature points in the iris image and to maintain a high classification rate in an efficient way. In this paper, we propose a robust approach to improve the performance of human identification systems based on iris patterns. This approach is robust to noise factors such as eyelids and eyelashes. We observed that, in most cases, the noisy data are located in a few iris subparts. Our method is based on the division of the normalized iris image into several regions, followed by independent feature extraction for each region. Herein, we define the local edge pattern (LEP) operator for feature extraction. Each feature value represents a particular edge or corner pattern. The LEP operator is adopted to characterize the iris texture in each image block, and all local image blocks are used to construct a global feature vector. Finally, we employ this feature vector to represent an iris image. In order to match the feature vectors extracted from the iris images to individual classes, two-dimensional linear discriminant analysis (2DLDA) is adopted. The remainder of this paper is organized as follows. Section 2 introduces the preprocessing procedures for iris images. A detailed description of the iris classification strategy and the feature extraction methodology is given in Section 3. Section 4 and Section 5 report the iris matching method and the experimental results. Finally, Section 6 presents the conclusions.
2 Iris Image Preprocessing
The human iris is an annular portion between the pupil (inner boundary) and the sclera (outer boundary). The image preprocessing procedure that extracts the iris from the eye image operates in three steps. The first step is to locate the iris area. Then, the located iris is normalized to a rectangular window with a fixed size in order to achieve approximate scale invariance. Finally, illumination and contrast problems are removed from the normalized image by image enhancement.
2.1 Locating the Iris Area
In an iris recognition system, iris location is an essential step. Herein, we propose a method for iris location based on Thales' theorem, which states that the diameter of a circle always subtends a right angle to any point on the circle's circumference. Fig. 1 shows how Thales' theorem is applied to find the inner and outer boundaries of the iris. The iris location method is not detailed here, because it is not the focus of this paper; we sum up the main points as follows. Firstly, the basic morphological operators of dilation and erosion are used in order to obliterate the illumination influence inside the pupil. Then, a point inside the pupil is found using the method of the minimum local block mean, since the pupil area gives the minimum average gray value in the eye image. In this way, the target point P0 inside the pupil is found and its coordinates can be computed. Secondly, we rely on the target point and use the specialized boundary detection mask (SBDM) to locate three points each (P1, P2, P3 and P4, P5, P6) along the inner and outer iris boundaries, respectively. The SBDM is constructed as an a × b matrix, as shown in Fig. 2. During the processing, each pixel (x, y) in the search
range is considered as the center of SBDM and the corresponding edge intensity is calculated by
e=
∑
b −1 2 b −1 i= x− 2 x+
∑
a −1 2 a −1 j= y− 2
y+
f (i, j ) ∗ w(i, j )
(1)
where f(i, j) represents the pixel value of the image and w(i, j) is the weighting value of the SBDM. Within the search range, the boundary point appears where the edge intensity variation is largest. Finally, we apply Thales' theorem to calculate the circle parameters, namely the circle centers (Pp and Pi) and the radii (Rp and Ri).
Fig. 1. Thales' theorem is applied to find (a) the inner and (b) the outer boundary of the iris
Fig. 2. An example of the SBDM (3 × 11)
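As a rough illustration of Eq. (1), the following Python sketch builds a simple a × b step mask of the kind shown in Fig. 2 and evaluates it over candidate positions to pick the strongest boundary response. The mask layout, the candidate-generation strategy and all names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def sbdm_mask(a=3, b=11):
    """Build an a x b specialized boundary detection mask (SBDM):
    +1 on the left columns, 0 in the centre column, -1 on the right columns (cf. Fig. 2)."""
    w = np.zeros((a, b))
    half = b // 2
    w[:, :half] = 1.0
    w[:, half + 1:] = -1.0
    return w

def edge_intensity(image, x, y, w):
    """Edge intensity e of Eq. (1): correlation of the SBDM w with the image
    patch centred at pixel (x, y); x indexes columns, y indexes rows here."""
    a, b = w.shape
    patch = image[y - a // 2: y + a // 2 + 1, x - b // 2: x + b // 2 + 1]
    return float(np.sum(patch * w))

def strongest_boundary_point(image, candidates, w):
    """Among candidate (x, y) positions (e.g. along a ray leaving the pupil point P0,
    away from the image border), return the one with the largest edge-intensity magnitude."""
    return max(candidates, key=lambda p: abs(edge_intensity(image, p[0], p[1], w)))
```

A caller could, for instance, evaluate `strongest_boundary_point` along a horizontal ray leaving the detected pupil point P0 in order to obtain one of the boundary points P1–P6.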
2.2 Iris Normalization
Before using the proposed method for iris recognition, it is necessary to normalize the iris image so that the representation is common to all images, with similar dimensions. The normalization process involves unwrapping the iris and converting it into its equivalent polar coordinates. We transform the circular iris area into a rectangular block using Daugman's rubber sheet model [4]. The pupil center is considered as the reference point, and a remapping formula is used to convert the points on the Cartesian scale to the polar scale. In our experiment, the radial resolution and the angular resolution are set to 64 and 512 pixels, respectively.
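A minimal sketch of this rubber-sheet remapping is given below; the circle parameters are assumed to come from Section 2.1, and the nearest-neighbour sampling (instead of a proper interpolation) as well as all names are simplifying assumptions.

```python
import numpy as np

def unwrap_iris(eye, pupil_xy, pupil_r, iris_xy, iris_r,
                radial_res=64, angular_res=512):
    """Daugman-style rubber-sheet unwrapping: sample the annulus between the
    pupil and iris boundaries onto a radial_res x angular_res rectangle."""
    thetas = np.linspace(0.0, 2.0 * np.pi, angular_res, endpoint=False)
    radii = np.linspace(0.0, 1.0, radial_res)
    out = np.zeros((radial_res, angular_res), dtype=eye.dtype)
    for j, t in enumerate(thetas):
        # Boundary points of the pupil and iris circles along direction t.
        xp = pupil_xy[0] + pupil_r * np.cos(t)
        yp = pupil_xy[1] + pupil_r * np.sin(t)
        xi = iris_xy[0] + iris_r * np.cos(t)
        yi = iris_xy[1] + iris_r * np.sin(t)
        for i, r in enumerate(radii):
            # Linear blend between the two boundaries, nearest-neighbour sampling.
            x = int(round((1.0 - r) * xp + r * xi))
            y = int(round((1.0 - r) * yp + r * yi))
            out[i, j] = eye[y, x]
    return out
```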
2.3 Image Enhancement
After normalization, iris templates still have low contrast and non-uniform illumination problems. To eliminate the background brightness, the iris template is divided into non-overlapping 16 × 16 blocks, whose means constitute coarse estimates of the background illumination for the individual blocks. By using bicubic interpolation,
each estimated value is expanded to the size of a 16 × 16 block. Then each template block is enhanced to a uniform lighting condition by subtracting the estimated background illumination. After that, the lighting-corrected image is enhanced by histogram equalization. The enhanced template shows clearer iris texture characteristics than the original iris image, but it also contains some salt-and-pepper noise. In order to filter out this unreliable noise, we adopt a Gaussian lowpass filter for noise reduction. Figure 3 illustrates the preprocessing of the iris image.
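One possible reading of this enhancement chain, written with standard NumPy/SciPy operations, is sketched below; the rescaling applied before histogram equalization and the Gaussian σ are assumptions not specified in the text.

```python
import numpy as np
from scipy import ndimage

def enhance_template(template, block=16, sigma=1.0):
    """Background-illumination removal, histogram equalisation and Gaussian smoothing
    for a normalised iris template whose sides are multiples of `block`."""
    h, w = template.shape
    # Coarse background estimate: mean of each non-overlapping block x block region.
    coarse = template.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    # Expand each estimate back to full size with (bi)cubic interpolation.
    background = ndimage.zoom(coarse, block, order=3)
    lighting_corrected = template.astype(float) - background
    # Histogram equalisation on an 8-bit rescaling of the corrected template.
    scaled = np.uint8(255 * (lighting_corrected - lighting_corrected.min())
                      / (np.ptp(lighting_corrected) + 1e-9))
    hist = np.bincount(scaled.ravel(), minlength=256)
    cdf = hist.cumsum() / scaled.size
    equalised = np.uint8(255 * cdf[scaled])
    # Gaussian low-pass filtering to suppress the remaining salt-and-pepper-like noise.
    return ndimage.gaussian_filter(equalised.astype(float), sigma=sigma)
```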
Fig. 3. Preprocessing of the iris image: (a) the original iris image, (b) the image with the located iris area, and (c) the region of interest (ROI) from the smoothed image
3 Feature Extraction
Although all normalized iris templates have the same size and uniform illumination, eyelashes and eyelids may still be present in them. Therefore, a region of interest (ROI) is selected to remove the influence of the eyelashes and eyelids, as shown in Fig. 3(c). The features are extracted only from the upper portion of a normalized iris template (40 × 512), which provides the most discriminating information [10].
3.1 Proposed Iris Partition Methodology
After the preprocessing of the iris image, many iris images still contain noisy subparts (i.e., eyelashes and eyelids), especially in the CASIA Iris Database [11]. Therefore, we propose to divide the iris into separate regions. The aim is to ensure that some of these regions are noise-free and that the comparison with the respective enrolled regions enables accurate recognition. Through the analysis of the available iris image databases, followed by consecutive experiments, we observed that the most common types of noise (iris obstructions) are predominantly located in particular subparts of the iris. Based on these observations, the proposed iris division schema minimizes the number of regions simultaneously affected by each type of noise, as illustrated in Fig. 4. In our experiments, we divided the iris image into 32 [(40/20) × (512/32)] regions, each with a size of 20 × 32. The independent feature extraction for each region mainly prevents the noisy parts, located in some iris subparts, from corrupting the whole biometric signature.
Fig. 4. Division of the iris image into 32 regions
3.2 Local Edge Pattern
From the viewpoint of texture analysis, local spatial patterns in an iris mainly involve frequency and orientation information. Thus, we present a new texture analysis model to detect the high-frequency part of the iris texture. It provides a robust way of describing local edge patterns (LEP) in an iris texture. The LEP represents the qualitative intensity relationship between a pixel and its neighborhood. It is robust, discriminative, and computationally efficient, so it is well suited for texture analysis. We use the LEP to represent the distinctive information of iris image blocks because iris patterns can be seen as textures constituted by many minute image structures. The LEP operator is represented by 16 mask patterns, as shown in Fig. 5. We restrict the range of displacement to a local 3 × 3 window so that mainly local information is extracted. The feature values are calculated by scanning each image block with the 16 local 3 × 3 mask patterns individually and computing the sums of the intensities of the corresponding pixels. Each pixel x_ij in the image is considered as the center of the LEP, and the corresponding edge and corner intensity is represented by L_p, where p = 1, 2, …, 16. Every feature value is calculated by
L_p = Σ_{j=1}^{m} Σ_{i=1}^{n} L_p(i, j) = Σ_{j=1}^{m} Σ_{i=1}^{n} x_{ij} ∗ G_p ,    (2)
where n and m are the width and height of the image block, x_ij is the image pixel value, and G_p is one of the local edge patterns. We can employ the 16 local edge patterns to obtain the 16 feature values that represent the various edge or corner patterns in the normalized iris image. Since the LEP features use the information of two-dimensional distributions as well as directions, they can analyze an image more closely. Furthermore, the feature values of the LEP operator are invariant to translation (shift) due to the geometrical feature extraction. The LEP operator can be regarded as a set of basis functions for frequency analysis, and the feature values represent the high-frequency part of the iris image.
Fig. 5. Sixteen mask patterns of the local edge pattern (LEP) operator (3 × 3 pixels)
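Equation (2) amounts to correlating each mask with the block and summing the responses. The sketch below illustrates this for two hypothetical 3 × 3 masks standing in for the sixteen patterns of Fig. 5 (the exact masks are given only in the figure and are not reproduced here).

```python
import numpy as np
from scipy import ndimage

# Two illustrative 3x3 LEP-style masks (hypothetical stand-ins for the sixteen
# edge/corner patterns of Fig. 5).
EXAMPLE_MASKS = [
    np.array([[0, -1, 0], [0, 2, 0], [0, -1, 0]]),   # vertical-edge-like pattern
    np.array([[0, 0, -1], [0, 2, 0], [-1, 0, 0]]),   # diagonal/corner-like pattern
]

def lep_features(block, masks=EXAMPLE_MASKS):
    """Eq. (2): for each mask G_p, correlate it with the image block and sum the
    responses over the block, giving one feature value L_p per mask."""
    block = block.astype(float)
    return np.array([ndimage.correlate(block, m, mode='nearest').sum() for m in masks])
```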
3.3 Feature Vector
For the ROI of each normalized iris image I , we divided the iris image into several regions. According to the proposed feature extraction method, we extracted the
independent feature values for each of these regions. These feature values are arranged to form the 2-D vector V represented by
V = [l_11, l_12, ..., l_1q; l_21, l_22, ..., l_2q; …; l_p1, l_p2, ..., l_pq]    (3)
where p is the number of LEP features, q is the number of divided blocks of the iris image, and l_pq is the corresponding feature value. In our experiment, the total number of regions is 32 (see Fig. 4). For each region, 16 feature values are obtained. Therefore, a 2D feature matrix with the size of 16 × 32 is used to represent every iris image.
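Assembling the 2D feature matrix can then be sketched as follows, assuming an `lep_features` function such as the one above and the 40 × 512 ROI of this section; the region sizes follow the 20 × 32 division of Fig. 4, and the names are illustrative.

```python
import numpy as np

def iris_feature_matrix(roi, lep_features, region_rows=20, region_cols=32):
    """Split the 40 x 512 ROI into 2 x 16 = 32 regions of 20 x 32 pixels and stack the
    LEP feature vector of every region column-wise, giving a (n_features x 32) matrix V."""
    h, w = roi.shape
    columns = []
    for r0 in range(0, h, region_rows):
        for c0 in range(0, w, region_cols):
            region = roi[r0:r0 + region_rows, c0:c0 + region_cols]
            columns.append(lep_features(region))
    return np.stack(columns, axis=1)   # (n_features, 32); 16 x 32 with the full mask set
```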
4 Iris Matching
After feature extraction, it is important to choose a suitable similarity measure between feature vectors. To improve computational efficiency and classification accuracy, 2DLDA is first used to reduce the dimensionality of the feature vector, and then the nearest neighbor classifier is adopted for classification. Ye [12] presented 2DLDA, which employs a projection technique developed for feature extraction. 2DLDA searches for projection vectors that best discriminate different classes in terms of maximizing the ratio of the between-class to the within-class covariance matrix. Further details of 2DLDA can be found in [12]. The new feature vector f can be denoted as:
f = V W    (4)
where W is the projection matrix and V is the original feature vector derived in Section 3.3. Then, a nearest neighbor classifier is used for classification in a low-dimensional feature space:

m = arg min_{1 ≤ i ≤ c} d(f, f_i),   d(f, f_i) = Σ_j (f^j − f_i^j)²    (5)
where f and f_i are the feature vectors of an unknown sample and of the ith class, respectively, f^j and f_i^j are the jth components of the feature vector of the unknown sample and of the ith class, respectively, c is the total number of classes, and d(f, f_i) denotes the Euclidean distance measure. The feature vector f is classified into the mth class, i.e., the class with the closest mean, using the similarity measure d(f, f_i).
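A minimal sketch of the matching rule (5) is given below; the 2DLDA projection W and the per-class mean feature matrices are assumed to be available from training (e.g., computed as in [12]), and the names are assumptions.

```python
import numpy as np

def classify(f, class_means):
    """Eq. (5): assign the projected feature matrix f = V W to the class whose
    projected mean feature is closest in squared Euclidean distance."""
    distances = [np.sum((f - fi) ** 2) for fi in class_means]
    return int(np.argmin(distances))
```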
5 Experimental Results
To evaluate the effectiveness of our method for iris recognition, two publicly available iris databases, CASIA [11] and UBIRIS [13], are used as the test datasets. The first one, collected mainly from Chinese volunteers, was captured using OKI's hand-held iris sensor [11]. The second one was collected from European volunteers under natural lighting. The experiments were conducted in two modes: identification and verification. In the identification mode, the correct recognition rate
(CRR) is adopted to assess the efficacy of the algorithm. In the verification mode, the ROC curve, which depicts the relationship of the false acceptance rate (FAR) versus the false rejection rate (FRR), is used.
5.1 Iris Database
In our experiments, we used two large, publicly and freely available iris databases for biometric purposes: CASIA and UBIRIS. The CASIA iris database is a large open iris database [11], and we only use a subset for performance evaluation. Each image has a resolution of 320 × 280 in JPEG format. This database includes 1992 iris images from 249 different eyes, with 8 images each. The first three images of each class are selected to constitute the training set, and the other images of each class are used as the test set. In the experiments for the preprocessing stage, we checked the accuracy of the boundaries subjectively and obtained a success rate of 95.9% (81 images are not used) on the 1992 images. Table 1 shows the causes of the preprocessing failures. Therefore, there are 747 images for training and 1164 images for testing.
The UBIRIS iris database [13] is composed of 1205 images collected from 241 persons during September 2004, in the first session (5 for each eye). Its main characteristic is the same as that of the CASIA iris database in that it includes images with several noise factors, thus permitting the evaluation of the robustness of iris recognition methods. Each image has a resolution of 150 × 200 in JPEG format. For each iris class, we choose two samples for training and the others as test samples. A few images still failed in the preprocessing stage (34 images are not used), as shown in Table 1. Therefore, there are 482 images for training and 689 images for testing.

Table 1. Analysis of the failure of preprocessing according to the causes

Cause of failure                       CASIA   UBIRIS
(1) Occlusion by eyelids               31      6
(2) Inappropriate eye positioning      21      13
(3) Occlusion by eyelashes             23      11
(4) Noise within iris                  6       4
Total                                  81      34
In order to evaluate the recognition accuracy in both highly and less noisy environments, a large number of images from the CASIA and UBIRIS databases were selected. The experiments described below were run on a 1.8 GHz PC with 736 MB RAM using Matlab 6.5.
5.2 Results of Proposed Methods
Herein, the performance of the proposed method was evaluated using the CASIA iris database. The 2DLDA is applied for iris matching. The distribution of the matching results of our method is shown in Fig. 6. The minimal impostor matching score is 47.
We can see that the genuine and impostor comparison results are well separated by our method, although they overlap slightly. Table 2 describes the verification operating states on the two iris databases. It is clear that the CASIA iris database does not perform as well as the UBIRIS iris database. The main reason is that the texture information of the Asian subjects is much less than that of the European subjects, especially in the regions far from the pupil. Therefore, the experimental results show that the proposed iris representation is effective and that the proposed approach can extract promising features from each iris image.
Fig. 6. The distribution of matching results of our method on the CASIA database

Table 2. False match and false non-match rates of the two iris databases

False match rate    False non-match rate (UBIRIS)    False non-match rate (CASIA)
0.001 %             0.197 %                          0.343 %
0.01 %              0.055 %                          0.061 %
0.1 %               0.013 %                          0.015 %
5.3 Comparison and Discussion
Experimental results from the previous paragraphs reveal that the proposed technique is an effective scheme for feature extraction, and that 2DLDA-based matching can achieve a correct recognition rate of up to 99.53% on the UBIRIS Iris Database and 98.72% on the CASIA Iris Database after removing low-quality images as described in Section 5.1. The Fourier-wavelet feature [14] and Gaussian-Hermite moments [6] methods are well known among existing schemes for iris recognition. For comparison, we have also implemented these two iris recognition algorithms. Together with our proposed scheme, the three approaches are tested using the 241 classes of the UBIRIS Iris Database. Figure 7 displays the ROC curves of the three methods. We can see that the proposed method achieves the best performance.
Based on the previous experimental results and the corresponding analysis, we can conclude the following:
1. In general, methods based on local variation analysis perform better than texture-analysis-based methods [6], mainly because texture features are captured from the global region of the iris image. In this paper, we employ texture analysis to extract local features of the iris image and combine these local features for iris recognition. All experimental results demonstrate that the proposed method achieves high performance; in other words, the texture features extracted by our algorithm can effectively represent the iris image.
2. The proposed method achieves high accuracy and performance for iris recognition. This confirms that local sharp variations truly play an important role in iris recognition and that our intuitive observation and understanding are reasonable. It also indicates that the LEP operator can extract discriminating features suitable for iris recognition.
Fig. 7. The ROC curves (false non-match rate versus false match rate, in %) of the Fourier-wavelet feature [14], Gaussian-Hermite moments [6] and proposed methods on the UBIRIS database
6 Conclusions
Biometrics-based personal identification methods have recently gained more interest with the increasing emphasis on security. In this paper, we have presented a novel and efficient method for personal identification using two iris image datasets. We proposed a new iris classification strategy that divides the normalized iris into several regions, and developed the LEP operator for feature extraction in every region. The LEP operator, which is successfully applied to feature extraction for iris recognition, is used to represent the robust texture features of iris images. The statistical scheme of 2DLDA is employed as a classifier to compare the similarity between two iris images. Experimental results on two publicly available iris image databases, CASIA
and UBIRIS, illustrate the effectiveness of our method. The main advantage of our method is its robustness against noise and occlusions in iris images, because our algorithm needs to match only a fraction of all image blocks to authenticate a genuine user. How to define suitable global features to improve the robustness of local feature-based methods has not been well addressed before, and it should be an important issue in future work.
References 1. Yanushkevich, S.N.: Synthetic Biometrics: A Survey. In: International Joint Conference on Neural Networks, pp. 676–683 (2006) 2. Miller, B.: Vital Signs of Identity. IEEE Spectrum 31, 22–30 (1994) 3. Mansfield, T., Kelly, G., Chandler, D., Kane, J.: Biometric Product Testing Final Report. Issue 1.0, National Physical Laboratory (2001) 4. Daugman, J.: How Iris Recognition Works. IEEE Transactions on Circuits and Systems for Video Technology 14(1), 21–30 (2004) 5. Boles, W.W., Boashash, B.: A Human Identification Technique Using Images of the Iris and Wavelet Transform. IEEE Transactions on Signal Processing 46(4), 1185–1188 (1998) 6. Ma, L., Tan, T., Wang, Y., Zhang, D.: Local Intensity Variation Analysis for Iris Recognition. Pattern Recognition 37, 1287–1298 (2004) 7. Ma, L., Tan, T., Wang, Y., Zhang, D.: Efficient Iris Recognition by Characterizing Key Local Variations. IEEE Transactions on Image Processing 13(6), 739–750 (2004) 8. Wildes, R., Asmuth, J., Green, G., Hsu, S., Kolczynski, R., Matey, J., McBride, S.: A Machine-Vision System for Iris Recognition. Machine Vision and Applications 9(1), 1–8 (1996) 9. Tisse, C., Martin, L., Torres, L., Robert, M.: Person Identification Technique Using Human Iris Recognition. In: Proc. Vision Interface, pp. 294–299 (2002) 10. Sung, H., Lim, J., Park, J., Lee, Y.: Iris Recognition Using Collarette Boundary Localization. In: Proceedings of the 17th International Conference on Pattern Recognition, vol. 4, pp. 857–860 (2004) 11. Institute of Automation, Chinese Academy of Science, CASIA Iris Image Database 12. Ye, J., Janardan, R., Li, Q.: Two-Dimensional Linear Discriminant Analysis. In: Proc. Neural Information Processing Systems, pp. 1569–1576 (2005) 13. Proenca, H., Alexandre, L.A.: UBIRIS: A Noisy Iris Image Database. In: 13th International Conference on Image Analysis and Processing (ICIAP (2005) pp. 970–977 (2005), http://iris.di.ubi.pt 14. Huang, P.S., Chiang, C.S., Liang, J.R.: Iris Recognition Using Fourier-Wavelet Features. In: Kanade, T., Jain, A., Ratha, N.K. (eds.) AVBPA 2005. LNCS, vol. 3546, pp. 14–22. Springer, Heidelberg (2005)
Automated Trimmed Iterative Closest Point Algorithm
R. Synave, P. Desbarats, and S. Gueorguieva
LaBRI UMR 5800 CNRS / Université Bordeaux 1
Abstract. A novel method for automatic registration based on the Iterative Closest Point (ICP) approach is proposed. This method uses geometric bounding containers to evaluate the optimum overlap rate of the data and model point sets.
Keywords: laser scanner acquisition, automatic registration, iterative closest point, range image reconstruction, robustness, uncertain geometric feature
1 Introduction
Recent years have seen significant advances in 3D data-acquisition technologies for building detailed models of complex objects. Beyond traditional applications such as reverse engineering, part inspection and rapid prototyping [33], new virtual reality applications such as virtual museums and cultural heritage preservation [17,5,30,32], and computer aided paleoanthropology and digital morphometry [15,1,34], have emerged. The common framework of all these applications is the 3D model acquisition pipeline [4,24,16] and its automation. Following [3], the 3D data acquisition pipeline consists of four steps: scanning (data acquisition), data registration (alignment), data integration and model conversion. We are interested in laser scanning acquisition and automatic position control during data registration. Initially, a point cloud of geometric samples of the 3D object is created. In order to capture the entire object, different views are captured, corresponding to multiple range images. Next, all measurements are brought into a common reference system through pose estimation. Depending on the number of views, registration can be carried out in a pair-wise way, registering just two views, or as n-view registration with more views involved. The registration is based on the optimization of a cost function used for the estimation of the distance between corresponding portions of the measured surface. There exists a great variety of methods for correspondence definition. Starting with unstructured point set identification, shape geometry and topology features are further investigated [2,10,12,13,18,19,24] in order to estimate pose uncertainty [27,20] and, furthermore, to predict, quantify and validate the registration error [22,21]. This process often results in complex optimization problems, and in practice these methods are very time-consuming.
The goal of our research is to investigate relatively straightforward techniques to improve the iterative closest point algorithm [6,8,25] in order to achieve a controlled robustness behaviour with respect to data and measurement imprecision. The premise of this paper is that correspondence recovery and refinement can be computed using geometric bounding containers [28] for the potentially overlapping regions, as a preprocessing step of an ICP-based approach. The proposed algorithm, called Automated Trimmed ICP, is inspired by [9], with a new automatic procedure for the estimation of the optimal overlap setting parameters. Tests on both synthetic and real range images establish that the estimation of the pose registration parameters using appropriate geometric bounds avoids local minima during the optimization search. Besides, the geometric bounds are calculated in an adaptive way depending on both the scanner acquisition resolution and the minimal point-separating distance of the object, thus allowing rigorous control of the final registration error. Basic notions and the Automated Trimmed ICP algorithm framework are given in the next section. Then, results and discussion are supplied. Finally, axes for future algorithm evaluation and improvements are provided.
2 Automated Trimmed Iterative Closest Point
2.1 ICP and Trimmed ICP
Following the notations of [6,9], let us consider the "data" shape P = {p_i}_{i=1}^{N_p} that is moved to be in best registration with a "model" shape M = {m_j}_{j=1}^{N_m}, N_p = N_m. It is assumed that P and M are "roughly preregistered" and that the overlapping part of the two sets is characteristic enough to allow for unambiguous matching1. Let ξ be the minimum rate of overlapping and N_po be the number of data points that can be paired. The minimum overlap is expressed as:
N_po = ξ N_m    (1)
For "conventional" ICP, ξ = 1 and N_po = N_p. Under these assumptions, the ICP problem is to find the Euclidean transformation that brings an N_po-point subset of P into the best possible alignment with M. Using the quaternion based solution [14], the complete registration vector q is defined as q = [q_R | q_T]^t, where q_R = [q_0 q_1 q_2 q_3]^t is the unit rotation quaternion and q_T = [q_4 q_5 q_6]^t is the translation vector. The least squares quaternion operation with mean square point matching error d_ms is denoted as

(q, d_ms) = Q(P, M)    (2)

The notation q(P) is used to denote the point set P after transformation by the registration vector q. Let d be the distance metric between p_i ∈ P and M, defined as

d(p, M) = min_{m ∈ M} ‖m − p‖    (3)
1 Part should not be symmetric and "featureless".
Let y denote the closest point that yields the minimum distance, y ∈ M and d(p, y) = d(p, M). Let Y denote the resulting set of closest points, and let C be the closest point operator:

Y = C(P, M)    (4)

Given the resulting corresponding point set Y, the least squares registration is computed as

(q, d) = Q(P, Y)    (5)

The position of the data shape point set is updated as P = q(P). In terms of these notations, Besl's algorithm is formulated as follows:

Initialize: P_0 = P, q_0 = [1, 0, 0, 0, 0, 0, 0]^t, k = 0    (6)
Compute the closest points: Y_k = C(P_k, M)    (7)
Compute the registration: (q_k, d_k) = Q(P_0, Y_k)    (8)
Apply the registration: P_{k+1} = q_k(P_0)    (9)
If ((d_k − d_{k+1}) ≥ τ) then goto (7) else stop    (10)

Step (6) corresponds to the algorithm initialization. Steps (7), (8), (9) and (10) are applied until convergence within a tolerance τ, the desired precision of the registration. The mean square distance d_k is calculated as:

d_k = (1/N_p) Σ_{i=1}^{N_p} ‖y_{i,k} − p_{i,k+1}‖²    (11)

P_k = {p_{i,k}} = q_k(P_0)    (12)
Y_k = {y_{i,k}}    (13)
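For reference, a compact sketch of the basic loop (6)–(10) is given below. It substitutes the SVD (Kabsch) solution for the quaternion-based registration operator Q of [14] and uses a k-d tree for the closest-point operator C; both substitutions, and all names, are the editor's assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(data, model, tau=1e-6, max_iter=50):
    """Plain ICP: pair every data point with its closest model point, estimate the rigid
    motion by the SVD (Kabsch) solution, stop when the mean square error improves by < tau."""
    tree = cKDTree(model)
    current = data.copy()
    prev_err = np.inf
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(max_iter):
        _, idx = tree.query(current)                 # closest-point operator C
        closest = model[idx]
        err = np.mean(np.sum((closest - current) ** 2, axis=1))
        if prev_err - err < tau:
            break
        prev_err = err
        # Least-squares rigid registration Q(P, Y) via SVD.
        mu_p, mu_y = current.mean(axis=0), closest.mean(axis=0)
        H = (current - mu_p).T @ (closest - mu_y)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_y - R @ mu_p
        current = current @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, prev_err
```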
There is a great number of variations of Besl's algorithm; comparisons and quantitative studies of ICP variants can be found in [7,25,26]. As pointed out by Besl and illustrated through multiple experiments, the original algorithm is not efficient if there exist points in P that do not correspond to any points in M. Moreover, in scanner acquisition the consecutive views are only partly overlapping, and the straightforward application of Besl's algorithm invalidates the registration. Incomplete data and outliers further deteriorate the robustness. Numerous ICP variations aim at alleviating these problems [11,29,31,23]. Our approach follows [9] to cope with incomplete data and outliers. The statistical matching heuristic in use is Least Trimmed Squares (LTS). In particular, the global cost function that is minimised to calculate the optimal registration is the sum S_TS of the least N_po squared individual distances:

S_TS = Σ_{i=1}^{N_po} d²_{i:N_p},    (14)

d²_{1:N_p} ≤ d²_{2:N_p} ≤ … ≤ d²_{N_po:N_p} ≤ … ≤ d²_{N_p:N_p}    (15)
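The trimmed selection of Eqs. (14)–(15) is simply a sort-and-keep operation, as the following sketch (with assumed names) shows.

```python
import numpy as np

def trimmed_sum(sq_distances, xi):
    """LTS cost: keep only the Npo = xi * Np smallest squared residuals and return
    their sum together with the selected indices."""
    sq = np.asarray(sq_distances, dtype=float)
    n_po = max(1, int(xi * len(sq)))
    order = np.argsort(sq)[:n_po]
    return float(np.sum(sq[order])), order
```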
However, Chetverikov's algorithm, TrICP, has a basic difficulty inherent in the calculation of the overlap parameter ξ. Let e denote the trimmed mean square error:
e=
ST S Npo
(16)
By default, ξ is set to minimize the objective function Ψ (ξ). Ψ (ξ) =
e(ξ) , λ≥0 ξ 1+λ
(17)
Different experimental values for λ, ξ and Ψ are reported but, as emphasized, the processing time tends to increase when ξ exceeds the actual overlap. In fact, when highly symmetric parts occur, TrICP converges to a local minimum because there is no way to direct the trimmed-squares search towards small, informative neighbourhoods of the shape. For this reason, we propose to additionally investigate local geometric constraints and thus force the algorithm to explore chosen informative parts of the shape. More precisely, parts of interest are enclosed in bounding containers and an optimum rate of overlap is computed in accordance with the local data and model point distribution.
2.2 Automated Trimmed ICP (AuTrICP)
For our purpose, the data and model shapes correspond to different views of the object surface acquired with a laser scanner and given in the form of triangular meshes. The following notations are used: T_P = {t_i^P}_{i=1}^{N_TP} and T_M = {t_j^M}_{j=1}^{N_TM} are the two triangular meshes defining respectively the data shape P and the model shape M, and V_P and V_M are the sets of vertices of T_P and T_M. The main framework of the elaborated algorithm, called Automated Trimmed ICP (AuTrICP), follows:
1: Calculate the geometric bound ∂T_PM of the overlapping parts of T_P and T_M. The goal of this step is to locate the regions of interest where the overlap should be located. For this purpose, bounding containers σ_{t_i^P}, 1 ≤ i ≤ N_TP, and σ_{t_j^M}, 1 ≤ j ≤ N_TM, are associated to all triangles from T_P and T_M. Next, ∂T_PM is calculated as

∂T_PM = {(t_i^P, t_j^M), ϕ(σ_{t_i^P}, σ_{t_j^M}) < Δσ, 1 ≤ i ≤ N_TP, 1 ≤ j ≤ N_TM}    (18)

ϕ : σ_{t_i} × σ_{t_j} → ℝ is a distance function    (19)
Δσ ∈ ℝ is a separability threshold    (20)

We have experimented with two types of bounding containers, Ritter's bounding balls (RBB) and axis-aligned bounding boxes (AABB) [28]. In general, different geometric shapes for the bounding containers support different levels of refinement in the intersection evaluation. For example, for Ritter's bounding ball σ_i^Ritter(O_i, R_i), centered at O_i with radius R_i, we have adopted the following constraint:

ϕ(σ_i^Ritter(O_i, R_i), σ_j^Ritter(O_j, R_j)) < Δσ^Ritter ⇔ ‖O_i − O_j‖ < α(R_i + R_j), 0 ≤ α ≤ 1    (21)
2: Evaluate the intersection rate χ of ∂TP M .
Let us consider the set of vertices V_P^{∂T_PM} belonging to V_P and being in the Δσ-vicinity of V_M:

V_P^{∂T_PM} = {v : v ∈ V_P & ∃(t_i^P, t_j^M) ∈ ∂T_PM | v ∈ t_i^P}    (22)

We define χ, the rate of intersection of T_P and T_M, as the cardinal number of the set V_P^{∂T_PM}:

χ = ‖V_P^{∂T_PM}‖    (23)
3: Calculate the minimum overlap ξ and the number N_po of the data points to be paired. The parameters ξ and N_po used in the TrICP phase are set as follows (a sketch of this overlap estimation is given after the algorithm steps):

ξ = χ / ‖V_P‖    (24)

N_po = ξ × ‖V_P‖    (25)
4: Apply the TrICP-like phase.
- Initialize the number N_TrICP of TrICP-like phase iterations.
- For all v_i ∈ V_P, find the closest vertex in V_M and compute the individual squared distance d_i.
- Sort {d_i}_{i=1}^{‖V_P‖} in increasing order, select the N_po least values and calculate their sum S_TS.
- If the trimmed mean square error e is sufficiently small, then exit:

e ≤ τ    (26)

- If the number of iterations N_TrICP exceeds a maximum threshold N_MAX of the TrICP-like step, or if the change Δe of e is sufficiently small, then update Δσ and go to step 1:

N_TrICP > N_MAX or Δe ≤ Δτ    (27)

- Compute the registration that minimizes S_TS for the selected N_po pairs.
- Apply the registration to V_P.
- Reiterate the TrICP-like phase.
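A brute-force sketch of the overlap estimation of steps 1–3 with Ritter-like bounding balls is given below; the crude centroid-based ball, the vertex bookkeeping and the O(N_TP × N_TM) double loop are simplifying assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def bounding_ball(triangle):
    """Crude bounding ball of a 3 x 3 triangle array (centroid plus max vertex
    distance); a stand-in for Ritter's algorithm [28]."""
    centre = triangle.mean(axis=0)
    radius = np.max(np.linalg.norm(triangle - centre, axis=1))
    return centre, radius

def overlap_rate(tri_data, tri_model, vert_data, alpha=0.5):
    """Steps 1-3 (sketch): mark a data triangle as overlapping when its ball intersects
    some model ball under the constraint of Eq. (21), count the data vertices belonging
    to such triangles (chi), and derive xi and Npo."""
    model_balls = [bounding_ball(t) for t in tri_model]
    overlap_vertices = set()
    for tri in tri_data:
        c_i, r_i = bounding_ball(tri)
        for c_j, r_j in model_balls:
            if np.linalg.norm(c_i - c_j) < alpha * (r_i + r_j):
                overlap_vertices.update(map(tuple, np.round(tri, 6)))
                break
    chi = len(overlap_vertices)
    xi = chi / len(vert_data)
    return xi, int(xi * len(vert_data))
```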
3 Results and Discussion
The elaborated AuTrICP algorithm has been tested on both synthetic and real data. Three examples are supplied. The cow model, with a total number of vertices ‖V_M^cow‖ = 317, and the octopus model, with a total number of vertices ‖V_M^octopus‖ = 248, are synthetic object models from http://shapes.aim-at-shape.net/index.php. Both objects are simply connected but include irregular and highly segmented geometric features. The third
Perturbation  BC    ξ         N_po  S_TS      e         N_iter  t_BC   t_r    t_AuTrICP
l             AABB  0.753943  239   0.105414  0.000441  12      0.01   0.22   0.23
l             RBB   0.738170  234   0         0         11      0.04   0.21   0.25
h             AABB  0.577287  183   0.00009   0         37      0      0.75   0.75
h             RBB   0.548896  174   0         0         39      0.05   0.82   0.87

Fig. 1. Cow range image registration

Perturbation  BC    ξ         N_po  S_TS      e         N_iter  t_BC   t_r    t_AuTrICP
l             AABB  0.483871  120   0.00001   0         22      0      0.26   0.26
l             RBB   0.451613  112   0.000001  0         24      0.02   0.31   0.33
h             AABB  0.673387  167   0.002235  0.000013  29      0.01   0.41   0.42
h             RBB   0.604839  150   0.000034  0         31      0.02   0.43   0.45

Fig. 2. Octopus range image registration

BC    ξ         N_po  S_TS        e         N_iter  t_BC   t_r   t_AuTrICP
AABB  0.630937  983   286.350860  0.291303  15      0.08   6.81  6.89
RBB   0.599487  934   252.584147  0.270433  19      0.78   8.7   9.48

Fig. 3. Talus range image registration
Fig. 4. (a) Octopus unregistered data and model triangular meshes (b) Random perturbation (c) Low range perturbation l : Rx = Ry = Rz = 10◦ , Tx = Ty = Tz = 2% of bounding box diagonal dBBOX (d) High range perturbation h : Rx = Rz = 20◦ , Ry = 25◦ , Tx = 3%, Ty = Tz = 4% of dBBOX
example, a human bone acquisition (talus), with a total number of vertices ‖V_M^talus‖ = 1558, is a representative example for our acquisition pipeline. First, we investigate the algorithm behaviour on perturbed data and model shapes. In the following figures, dark objects represent data shapes and light gray objects represent model shapes. In Fig. 4, different initial states for P and M are illustrated: in Fig. 4(a) unaligned data, in Fig. 4(b) manually randomly aligned data, and in Fig. 4(c,d) data perturbed by rigid motions with specific parameters, Rx, Ry, Rz for the rotations and Tx, Ty and Tz for the translations along the axes x, y and z. We call low perturbation l a rigid motion with rotation angles of 10° and translation vectors with a length of 2% of the object bounding box diagonal. By analogy, we call high perturbation h a rigid motion with rotation angles of 20° and translation vectors with a length of 4% of the object bounding box diagonal.
Fig. 5. The first column illustrates the perturbed data and model shapes: (a) h , (d) l , (g) Random perturbation. The second column shows the ICP registrated models and the third column represents the AuTrICP registrations with the respective inputs.
Fig. 6. (a) Talus data and model unregistered triangular meshes (b) Rough registration (c) ICP registration (d) AuTrICP registration
The results of the AuTrICP performance are given in Fig. 1, Fig. 2 and Fig. 3. The first column for the synthetic objects denotes the perturbation. Next, BC refers to the type of bounding containers, AABB for the axis-aligned bounding boxes and RBB for the Ritter's bounding balls, ξ is the minimum overlap, N_po is the number of data shape points that are candidates for the trimmed-squares search, S_TS is the sum of the trimmed squared distances, e is the trimmed mean square error, N_iter is the total number of AuTrICP iterations, and t_BC, t_r and t_AuTrICP are the execution times for the minimum overlap calculation, the registration, and the total execution of AuTrICP, respectively.
Fig. 7. (a) Rough registration (b) One pass AuTrICP registration with AABB (c) Double pass AuTrICP registration with AABB (d) One pass of AuTrICP with RBB (e) Double pass AuTrICP registration with RBB
The registration precision τ is chosen as 0.1% of the model bounding box. The execution times are given in seconds and were measured on an Intel Pentium IV 3 GHz computer with 1 GB RAM. The following remarks can be made:
– The AuTrICP algorithm is stable under different perturbations and different rates of data and model shape overlap, while the visual quality is preserved, see Fig. 6 and Fig. 5.
– The minimum overlap calculation induces an insignificant increase in execution time, while the convergence of the algorithm, measured in the total number of iterations, is sped up.
– AABB containers give error bounds similar to those of RBB containers while being easier to calculate. That is, for a good overlap estimation there is no need to use complex-shaped bounding containers.
– In cases of ICP failure due to bad initial alignment, AuTrICP can readjust the overlap calculation and avoid misleading local minima, as for example in Fig. 7, where a second minimum overlap calculation gives the correct result.
4 Conclusion and Future Work
Automatic data registration in the laser acquisition pipeline is the fundamental point of our research. Exploring the ICP approach permits the elaboration of simple and fast algorithms for the alignment of 3D point sets. How one can improve the robustness of ICP in the
presence of noise and outliers while preserving its simplicity is still an open question. We propose the AuTrICP algorithm, which takes advantage of the geometric shape of the data and model point sets. In comparison with other ICP-based methods, the geometric similarity is defined in terms of an intersection rate of bounding containers for chosen informative parts of the point sets. Preliminary tests on differently perturbed data confirm that AuTrICP remains stable at a reasonable execution cost. We are further working to refine the geometric bounds for the intersection estimation. In particular, we are interested in the optimization of the registration error calculation in accordance with the data acquisition resolution.
Acknowledgements. The authors wish to thank T. Boubekeur, H. Coqueugniot, Ch. Couture, B. De la Villetanet and B. Dutailly for providing and supporting the human bone acquisitions.
References 1. Allard, T.T., Sitchon, M.L., Sawatzky, R., Hoppa, R.D.: Use of hand-held laser scanning and 3d printing for creation of a museum exhibit. In: 6th International Symposium on Virtual Reality, Archaelogy and Cultural Heritage (2005) 2. Allen, B., Curless, B., Popovic, Z.: The space of human body shapes: reconstruction and parameterization from range scans. ACM Trans. Graph. 22(3), 587–594 (2003) 3. Bernardini, F., Mittleman, J., Rushmeier, H.E., Silva, C.T., Taubin, G.: The ball-pivoting algorithm for surface reconstruction. IEEE Trans. Vis. Comput. Graph. 5(4), 349–359 (1999) 4. Bernardini, F., Rushmeier, H.E.: The 3d model acquisition pipeline. Comput. Graph. Forum 21(2), 149–172 (2002) 5. Bernardini, F., Rushmeier, H.E., Martin, I.M., Mittleman, J., Taubin, G.: Building a digital model of michelangelo’s florentine piet` a. IEEE Computer Graphics and Applications 22(1), 59–67 (2002) 6. Besl, P.J., McKay, N.D.: A method for registration of 3-d shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 239–256 (1992) 7. Bispo, E.M., Fisher, R.B.: Free-form surface matching for surface inspection. In: IMA Conference on the Mathematics of Surfaces, pp. 119–136 (1994) 8. Chen, Y., Medioni, G.G.: Object modelling by registration of multiple range images. Image Vision Comput. 10(3), 145–155 (1992) 9. Chetverikov, D., Stepanov, D., Krsek, P.: Robust euclidean alignment of 3d point sets: the trimmed iterative closest point algorithm. Image Vision Comput. 23(3), 299–309 (2005) 10. Curless, B., Levoy, M.: A volumetric method for building complex models from range images. In: SIGGRAPH 1996: Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pp. 303–312. ACM Press, New York (1996) 11. Est´epar, R.S.J., Brun, A., Westin, C.-F.: Robust generalized total least squares iterative closest point registration. In: Barillot, C., Haynor, D.R., Hellier, P. (eds.) MICCAI 2004. LNCS, vol. 3216, pp. 234–241. Springer, Heidelberg (2004) 12. Gelfand, N., Mitra, N.J., Guibas, L.J., Pottmann, H.: Robust global registration. In: Symposium on Geometry Processing, pp. 197–206 (2005) 13. Godin, G., Laurendeau, D., Bergevin, R.: A method for the registration of attributed range images. In: 3DIM, pp. 179–186 (2001)
14. Horn, B.K.P.: Closed-form solution of absolute orentation using unit quaternions. Journal of Optical Society of America 4(4), 629–642 (1987) 15. Jacq, J.J., Roux, C., Couture, C.: Segmentation morphologique g´eod´esique de surfaces. application ` a la morphom´etrie interactive des surfaces articulaires en anthropologie. In: 17`eme colloque GRETSI sur le Traitement du Signal et des Images ( September 1999) 16. Jaeggli, T., Koninckx, T.P., Van Gool, L.: Online 3d acquisition and model integration 17. Levoy, M.: The digital michelangelo project. In: 3DIM, pp. 2–13 (1999) 18. Nehab, D., Rusinkiewicz, S., Davis, J., Ramamoorthi, R.: Efficiently combining positions and normals for precise 3d geometry. ACM Trans. Graph. 24(3), 536–543 (2005) 19. Pauly, M., Mitra, N.J., Giesen, J., Gross, M.H., Guibas, L.J.: Example-based 3d scan completion. In: Symposium on Geometry Processing, pp. 23–32 (2005) 20. Pennec, X.: Registration of uncertain geometric features: Estimating the pose and its accuracy. In: Massachusetts Institute of Technology, AIM-1623, p. 20 (1998) 21. Pennec, X., Guttmann, C.R.G., Thirion, J.-P.: Feature-based registration of medical images: Estimation and validation of the pose accuracy. In: Wells, W.M., Colchester, A.C.F., Delp, S.L. (eds.) MICCAI 1998. LNCS, vol. 1496, pp. 1107–1114. Springer, Heidelberg (1998) 22. Pennec, X., Thirion, J.-P.: Validation of 3d registration methods based on points and frames. In: INRIA, RR 2470, p. 42 (1995) 23. Pulli, K.: Multiview registration for large data sets (1999) 24. Rusinkiewicz, S., Hall-Holt, O.A., Levoy, M.: Real-time 3d model acquisition. In: SIGGRAPH, pp. 438–446 (2002) 25. Rusinkiewicz, S., Levoy, M.: Efficient variants of the icp algorithm. In: 3DIM, pp. 145–152 (2001) 26. Sharp, G., Lee, S., Wehe, D.: Icp registration using invariant features (2002) 27. Stoddart, A.J., Lemke, S., Hilton, A., Renn, T.: Estimating pose uncertainty for surface registration. In: BMVC (1996) 28. Dan Sunday. Bounding containers for polygons, polyhedra, and point sets (2d and 3d) (2000), http://geometryalgorithms.com 29. Trucco, E., Fusiello, A., Roberto, V.: Robust motion and correspondence of noisy 3-d point sets with missing data. Pattern Recognition Letters 20(9), 889–898 (1999) 30. Tsakiri, M., Ioannidis, C., Carty, A.: Laser scanning issues for the geometrical recording of complex statue. In: Optical 3D Measurement Techniques VI, Zurich, pp. 214–222 (2003) 31. Turk, G., Levoy, M.: Zippered polygon meshes from range images. In: SIGGRAPH, pp. 311–318 (1994) 32. Valzano, V., Bandiera, A., Beraldin, J.A., Picard, M., El-Hakim, S., Godin, G., Paquet, E., Rioux, M.: Fusion of 3d information for efficient modeling of cultural heritage sites with objects. In: International Symposium: Cooperation to Save World’s Cultural Heritage (2005) 33. Varady, T., Martin, R.R., Cox, J.: Reverse engineering of geometric models - an introduction. Computer-Aided Design 29(4), 255–268 (1997) 34. Yan, P., Bowyer, K.W.: A fast algorithm for icp-based 3d shape biometrics. In: AUTOID 2005: Proceedings of the Fourth IEEE Workshop on Automatic Identification Advanced Technologies, pp. 213–218. IEEE Computer Society, Washington, DC, USA (2005)
Classification of High Resolution Satellite Images Using Texture from the Panchromatic Band
María C. Alonso1, María A. Sanz2, and José A. Malpica1
1 Mathematics Department, Alcalá University, Madrid, Spain
2 Mathematics Department, Politécnica University, Madrid, Spain
Abstract. Traditional classification algorithms are not suitable for feature extraction from high resolution satellite images, given the heterogeneity of the pixels of this type of imagery due to the great amount of detail. Most of this type of imagery is taken by the satellite in several bands and at different resolutions, and the method presented in this paper takes advantage of this situation by merging the information provided by the multispectral bands with that of the panchromatic band. An Ikonos image of 2 × 2 km of the university campus of Alcala has been used to obtain a classification with seven land use classes. A comparison is carried out between the traditional maximum likelihood method and the method developed here; the latter uses context information obtained from the texture of the band with the highest resolution, the panchromatic band. The results show how texture information improves the maximum likelihood classification of the multispectral bands for smooth-textured classes.
1 Introduction
Most of the information for building topographic and thematic maps comes from aerial and space images via visual interpretation. Today there are a variety of high resolution satellites and semi-automatic methods that could improve the reliability and reduce the budgets of cartographic projects. This is a main concern for some European cartographic organizations, such as EuroSDR (Spatial Data Research); its working group (Commission VI) has proposed a project, 'Change Detection', to study semi-automated classification methodologies on digital remote sensing images for updating geospatial data [1].
Today there are several commercial high resolution satellites in orbit. In September 1999, the company Space Imaging launched Ikonos. It was the first commercial high resolution satellite to go into orbit and had a spatial resolution of 1 m in the panchromatic band and 4 m in the multispectral bands. Two years later, Quickbird was launched by the company Digital Globe. Quickbird is another commercial high-resolution satellite, with a spatial resolution of 0.61 m in the panchromatic band and 2.44 m in the multispectral bands. In 2003, the company OrbImage launched OrbView, which had characteristics similar to Ikonos. New high resolution satellites will be launched in the future. This includes GeoEye 1, to be launched by the end of 2007; it will be the world's highest resolution commercial satellite to date (0.41 m in the panchromatic band and 1.6 m in the multispectral band). These satellites collect millions of square kilometers of imagery per day and can see any geographic target
from 80º S to 80º N. Heightened knowledge and advances in technology should result in a fall in prices and faster delivery of satellite imagery in the near future. This imagery is useful in several fields, including:
1. Agriculture: The images allow for the quick monitoring of crops on a weekly or quarterly basis. They allow for production forecasting and can create timely alerts regarding widespread problems in farming (e.g., pestilence or drought).
2. Land use: State and local administrations can study land use with these images. Increasing urbanization puts pressure on natural resources; decision makers must deal with a wide variety of issues and require timely information that can be provided by products such as orthorectified digital images.
3. Change analysis: Due to the satellite's frequent revisits of three days each, it is possible to perform change analysis studies using several images of the same site.
4. Irrigation: The fusion of image information with other images allows for the discovery of irrigation necessities.
5. Natural Disaster Preparedness: Using a DEM and knowing the height of the buildings in the area surveyed, high-resolution images can be useful in predicting and monitoring such natural disasters as floods. When a stereoscopic pair of images is available, it is possible to calculate building heights.
The huge amount of information provided by high resolution satellite images needs to be reduced for analysis and interpretation. This information reduction is usually achieved by classification, but traditional classification techniques for remote sensing were developed mostly for low resolution satellite imagery. These techniques are not valid for the new high resolution satellites, owing to image complexity and other special characteristics. For example, maximum likelihood generally gives a speckled appearance to the final classified image, since it is a method based only on isolated pixel information. Generally, classification algorithms do not take into account the spatial relationships of pixels in the image. This is because classification algorithms are usually created for general data, but when the data are pixels in an image, the spatial information must be taken into account. This has been done here using texture from the panchromatic band. Several authors have studied classification methods for satellite images (and specifically for Ikonos imagery) using neural networks and Hopfield nets [2], and fractal features [3]. Burbridge and Zhang [4] used classification on Ikonos imagery to study change detection. They concluded that it is possible to extract reliable change detection information pertaining to small streets and new rows of residential housing with a medium-resolution benchmark. However, the detection of change in individual houses and small buildings was beyond the scope of their study, and not possible with the resolution of their images. Han et al. [5] developed a fuzzy neural network classifier and compared it with maximum likelihood classifiers for land cover on Ikonos data. The classifier they developed was more accurate for mixed composition areas such as bare soil and dried grass; however, for asphalt or cement roads, the maximum likelihood classifier was more accurate.
In this paper, we propose a new algorithm for classifying high resolution images of the Ikonos type, using the four low resolution multispectral bands in combination with texture taken from the high resolution panchromatic band.
2 Material and Methods
2.1 Study Area
The available data is the imagery taken in June 2000 by the Ikonos satellite, and consists of a panchromatic (PAN) band and four multispectral bands (XS): 450–530 nm for the blue, 520–610 nm for the green, 640–720 nm for the red and 770–880 nm for the near infrared spectrum. The PAN and XS bands have 1 and 4 meters' resolution respectively, with 11 bits per pixel for all bands. For the purposes of our study, a subscene of 2 × 2 km centered on the Alcala Campus is used (see Figure 1 for the PAN image).
Fig. 1. The study area. Panchromatic Ikonos image of the Alcala University Campus.
2.2 Algorithms
Several algorithms may be used to assign an unknown pixel to one of a number of classes. Among the most frequently used classification algorithms is the maximum likelihood algorithm. For the Ikonos imagery, a pixel in the XS bands is represented by a vector x with 4 components. We choose m classes given by
ωi, i = 1, ..., m.    (1)
In order to know if a given pixel x belongs to class ωi, the following probabilities should be calculated:

P(ωi | x), i = 1, ..., m.    (2)

A pixel is assigned to a class if its conditional probability is the largest:

x ∈ ωi ⇔ P(ωi | x) ≥ P(ωj | x)  ∀ j ≠ i.    (3)
But the probabilities in equation (3) are unknown. If sufficient training data for each class are available, they can be used to estimate the probability distribution for the class, represented by the symbol P(x | ωi). This last probability and the one in equation (3) are related by Bayes' theorem:

P(ωi | x) = p(x | ωi) · P(ωi) / p(x)    (4)

Applying (4) to (3), the following equation is obtained:

x ∈ ωi ⇔ P(x | ωi) P(ωi) ≥ P(x | ωj) P(ωj)  ∀ j ≠ i.    (5)
Let us assume that the probability distribution for the classes P(x/ωi) is in the form of a multivariate normal distribution:
p(x | ωi) = (2π)^(−N/2) · |Σi|^(−1/2) · exp{ −(1/2) · (x − mi)^t · Σi^(−1) · (x − mi) },    (6)

where mi is the mean of each class, Σi is the covariance matrix for class i, N is the number of bands, and t stands for transpose. Among the advantages of using the Gaussian model is the mitigation of the need for large training sets to properly define the desired classes. Using (6) in (5), and after several simplifications [6], we arrive at

x ∈ ωi ⇔ ln|Σi| + (x − mi)^t · Σi^(−1) · (x − mi) ≤ ln|Σj| + (x − mj)^t · Σj^(−1) · (x − mj)  ∀ j ≠ i.    (7)
The expression in (7) is known as the general maximum likelihood decision rule. We have applied this rule in two ways:
1. The maximum likelihood algorithm is applied in the usual way for classification with the four multispectral XS bands.
2. Texture is first extracted from the PAN image, forming a new band which is added to the four XS multispectral bands; then the maximum likelihood algorithm is applied to the five-band data set.
Many measures have been used to characterize texture in remotely sensed imagery. We have chosen the co-occurrence matrix, which is applied to the PAN image. Several features can be computed from the co-occurrence matrix [7], but we chose entropy and correlation. For the two features and for each direction (0º, 45º, 90º and 135º), a layer was constructed, making eight in total. For a finite sized training set, the Hughes phenomenon can become a problem. This is the effect of declining classification
accuracy for increasing numbers of features. As more features are added, the accuracy increases for a while, then reaches a peak and begins to decrease. When the training set sizes are small or the number of bands is large, or both, the Hughes phenomenon is a problem, so we cannot keep augmenting the number of textures or directions, since the accuracy would degrade. In order to mitigate the Hughes phenomenon by selecting the features with the most discriminating power, we have applied Principal Component Analysis (PCA) to the eight layers obtained with the co-occurrence matrix.
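A simplified sketch of the texture computation and the PCA step is given below; the grey-level quantisation, the per-window processing and the function names are assumptions, and in the paper the eight layers are built over the full PAN band rather than a single window.

```python
import numpy as np

DIRECTIONS = [(0, 1), (-1, 1), (-1, 0), (-1, -1)]   # 0, 45, 90 and 135 degrees

def glcm(window, d, levels=32):
    """Grey-level co-occurrence matrix of a quantised window for pixel offset d."""
    q = (window.astype(float) / max(window.max(), 1) * (levels - 1)).astype(int)
    m = np.zeros((levels, levels))
    dy, dx = d
    h, w = q.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / max(m.sum(), 1)

def entropy_and_correlation(p):
    """Haralick entropy and correlation [7] of a normalised GLCM p."""
    eps = 1e-12
    ent = -np.sum(p * np.log(p + eps))
    i, j = np.indices(p.shape)
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    si = np.sqrt(np.sum(((i - mu_i) ** 2) * p))
    sj = np.sqrt(np.sum(((j - mu_j) ** 2) * p))
    corr = np.sum((i - mu_i) * (j - mu_j) * p) / max(si * sj, eps)
    return ent, corr

def first_principal_component(layers):
    """PCA over the eight texture layers; returns the first-component image."""
    x = np.stack([l.ravel() for l in layers], axis=1).astype(float)
    x -= x.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return (x @ vt[0]).reshape(layers[0].shape)
```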
[Fig. 2 flow chart: panchromatic 1 m input → texture layers (1 m) → first principal component (1 m); co-registered R, G, B, NIR 4 m XS bands; Method 1 (XS only) and Method 2 (XS + principal component of texture) → maximum likelihood → classified images with accuracies of 79.4% and 82.8%, respectively.]
Fig. 2. Flow Chart for the proposed algorithm
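The classification step of both methods reduces to rule (7). A sketch with equal priors, applied to pixels stacked as rows of band values (the four XS bands for Method 1, plus the texture principal component for Method 2), could look as follows; all names are illustrative assumptions.

```python
import numpy as np

def train_gaussian_classes(samples_per_class):
    """Estimate the mean vector and covariance matrix of every class from its
    training pixels (rows = pixels, columns = bands)."""
    stats = []
    for s in samples_per_class:
        s = np.asarray(s, dtype=float)
        stats.append((s.mean(axis=0), np.cov(s, rowvar=False)))
    return stats

def ml_classify(pixels, stats):
    """Rule (7) with equal priors: assign each pixel to the class minimising
    ln|Sigma_i| + (x - m_i)^t Sigma_i^{-1} (x - m_i)."""
    pixels = np.asarray(pixels, dtype=float)
    scores = []
    for mean, cov in stats:
        inv = np.linalg.inv(cov)
        diff = pixels - mean
        maha = np.einsum('ij,jk,ik->i', diff, inv, diff)   # per-pixel Mahalanobis term
        scores.append(np.log(np.linalg.det(cov)) + maha)
    return np.argmin(np.stack(scores, axis=1), axis=1)
```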
2.3 Training and Test Data
In land use classification, it is important to define the number and type of classes used. This is usually constrained by the application at hand. Classification of high resolution satellite images potentially has many applications associated with urban planning and management. In general, four categories are usually considered regarding the spectral characteristics of materials:
1. Impervious surface: consisting of roofs, roads, parking lots and other built-up materials.
2. Vegetation: deciduous and evergreen vegetation, trees and bushes, etc.
3. Soil: different types of soil.
4. Water.
Since there is no water in our data, the remaining three categories have each been subdivided into two in order to improve the classification accuracy. Therefore six classes have been defined, plus a class for shadows. Shadows are not a problem for low resolution satellite images, contrary to high resolution ones such as Ikonos, where shadows play a relevant role.

Table 1. Classes for Alcala Campus

Asphalt
Metallic roof
Evergreen vegetation
Deciduous vegetation
Clay soil
Sand soil
Shadow

Samples of ground data are necessary for training the classifier for these seven classes (Table 1), as well as for assessing the final map accuracy. A field survey was carried out on site at the time of flight. For each class in Table 1, sufficient training samples were selected, following the recommendation of [8] of a minimum of 10N samples per spectral class, where N is the number of bands. In our case, a minimum of 50 pixels for training each class was taken. Then, test samples numbering more than double the training samples were obtained from different places.
[Flow chart: class definition and training/test samples → perform classification → evaluation; if the evaluation is not satisfactory, the class definitions and samples are modified and the classification is repeated; otherwise the final classification is produced.]
Fig. 3. Classification method
Figure 3 shows the procedure followed in this paper. First, we preliminarily selected the classes, their training and test sets. Then, the histogram of each class was observed to determine the need for subclasses. Furthermore, we observed the separations of the classes using the Mahalanobis distance measure, performed the classification and studied the evaluation using a confusion matrix. Depending upon the results of these evaluations, it was sometimes necessary to modify the initial class definition, training and test sets. Once satisfied with the results, we generated a thematic map for the whole image (Figure 4).
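To make the classification step concrete, a minimal per-pixel Gaussian maximum likelihood classifier might look like the sketch below (our own illustration, not the authors' code; equal class priors are assumed and the training-sample arrays are placeholders).

```python
# Hedged sketch: Gaussian maximum likelihood classification from per-class
# training samples; the decision rule is the argmax of the class log-likelihood.
import numpy as np

def train(samples):
    """samples: dict class_name -> (n_i, n_bands) array of training pixels."""
    return {name: (x.mean(axis=0), np.cov(x, rowvar=False))
            for name, x in samples.items()}

def classify(pixels, stats):
    """pixels: (n, n_bands) array; returns the ML class label for each pixel."""
    names, scores = list(stats), []
    for name in names:
        mu, cov = stats[name]
        inv, (_, logdet) = np.linalg.inv(cov), np.linalg.slogdet(cov)
        d = pixels - mu
        maha = np.einsum("ij,jk,ik->i", d, inv, d)   # squared Mahalanobis distance
        scores.append(-0.5 * (maha + logdet))        # log-likelihood up to a constant
    return np.array(names)[np.argmax(np.stack(scores, axis=1), axis=1)]
```

The same per-class statistics can also be used to inspect class separability (e.g. pairwise Mahalanobis distances) before the final classification is run.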
3 Results
The confusion matrix [9] for the classification results of Figure 4 can be seen in Table 2.

Table 2. Confusion matrix for the maximum likelihood classification algorithm

                 evergreen  deciduous  clay soil  sand soil  asphalt  metallic roof  shadows  Total
evergreen veg.        1009          0          0          1       53              0        0   1063
deciduous veg.           0       1233          0          0        0              0        0   1233
clay soil                0          0       1186         12        0              0        0   1198
sand soil                0          0         38       1047        0              0        0   1085
asphalt                  0          1          0        161     1225            444       14   1845
metallic roof            0          0          0          0        4             71        0     75
shadows                  0          0        782          0        6              2       78    868
Total                 1009       1234       2006       1221     1288            517       92   7367
Commission %: 54/1063 (5.08), 0/1233 (0.00), 12/1198 (1.00), 38/1085 (3.50), 620/1845 (33.60), 4/75 (5.33), 790/868 (91.01)
Omission %: 0/1009 (0.00), 1/1234 (0.08), 820/2006 (40.88), 174/1221 (14.25), 63/1288 (4.89), 446/517 (86.27), 14/92 (15.22)
Overall Accuracy = (5849/7367) ~ 79.3946% and Kappa Coefficient = 0.7540.
Table 3 shows the numerical results for this thematic map in a confusion matrix.

Table 3. Confusion matrix for the proposed algorithm

                 evergreen  deciduous  clay soil  sand soil  asphalt  metallic roof  shadows  Total
evergreen veg.        1009          0        342          0       52              0        0   1403
deciduous veg.           0       1234          0          0        0              0        0   1234
clay soil                0          0       1600          0        0              0        0   1600
sand soil                0          0         14       1018        0              0        0   1032
asphalt                  0          0         50        203     1099            206       26   1584
metallic roof            0          0          0          0        4             72        0     76
shadows                  0          0          0          0      133            239       66    438
Total                 1009       1234       2006       1221     1288            517       92   7367
Commission %: 394/1403 (28.08), 0/1234 (0.00), 0/1600 (0.00), 14/1032 (1.36), 485/1584 (30.62), 4/76 (5.26), 372/438 (84.93)
Omission %: 0/1009 (0.00), 0/1234 (0.00), 406/2006 (20.24), 203/1221 (16.63), 189/1288 (14.67), 445/517 (86.07), 26/92 (28.26)
Overall Accuracy = (6098/7367) ~ 82.7745% and Kappa Coefficient = 0.7911.
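The overall accuracy and Kappa coefficient reported above can be computed directly from a confusion matrix, as in the short sketch below (our own illustration, not the authors' code); applied to the matrix of Table 2 it reproduces the reported 79.4% accuracy and 0.754 Kappa.

```python
# Overall accuracy and Cohen's kappa from a confusion matrix whose rows are
# map classes and whose columns are reference classes.
import numpy as np

def accuracy_and_kappa(cm):
    n = cm.sum()
    po = np.trace(cm) / n                                   # observed agreement
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n ** 2   # chance agreement
    return po, (po - pe) / (1 - pe)

cm = np.array([[1009, 0, 0, 1, 53, 0, 0],
               [0, 1233, 0, 0, 0, 0, 0],
               [0, 0, 1186, 12, 0, 0, 0],
               [0, 0, 38, 1047, 0, 0, 0],
               [0, 1, 0, 161, 1225, 444, 14],
               [0, 0, 0, 0, 4, 71, 0],
               [0, 0, 782, 0, 6, 2, 78]])
print(accuracy_and_kappa(cm))   # ~ (0.794, 0.754), matching Table 2
```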
[Map legend for Figs. 4 and 5: asphalt, metallic roof, evergreen veg., deciduous veg., clay soil, sand soil, shadow]
Fig. 4. Direct classification with maximum likelihood of the XS Ikonos image
Fig. 5. Classification with maximum likelihood of the XS Ikonos image + panchromatic with texture and PCA
Figure 5 shows the thematic map obtained as a result of the maximum likelihood classification using the XS bands plus a band derived from performing PCA on the eight layers of texture extracted from the PAN band. The overall accuracy and Kappa coefficient are 79.4% and 0.75 (see Table 2) for the maximum likelihood classification shown in Figure 4, and 82.8% and 0.79 (see Table 3) for the proposed method shown in Figure 5. Therefore a gain of 3.4% is obtained by considering the texture of the PAN band together with the multispectral bands. Analysis of the confusion matrices allows us to examine the differences in detail. For land use classes with a relatively fine texture the proposed method improves the classification; for example, in Table 2 the class clay soil has an omission error of 40.88%, which the proposed method reduces to 20.24%. This is a consequence of introducing texture, which removes the speckled appearance seen in the upper left corner of Figure 4. For land use classes with rough texture, such as evergreen vegetation, the proposed method degrades the classification, since the texture confuses the result; that is, the proposed method is useful for cleaning the speckle effect of the traditional maximum likelihood classification for classes with fine texture, and it will be useful when such classes are the objective of the classification. There are spectral problems that are too difficult to solve. For example, the spectral signatures of some urban materials are similar because their composition is similar, such as asphalt-based roofs and roads, and bare soil surfaces. These problems will always cause errors of commission and omission that will be difficult to eliminate. This is what happens with the classes shadow and asphalt in our study area, which are misclassified by both methods.
4 Conclusions
It can be stated that the introduction of texture information from the PAN band improves the classification results of maximum likelihood for Ikonos imagery. The main conclusion is that for land use classes with a relatively fine texture, such as clay soil, the proposed method improves the classification. For rough-texture classes, such as evergreen vegetation, the proposed method degrades the classification. Depending on the application at hand, the proposed method is useful when a smooth-textured area of land is the final objective of the classification.
Acknowledgements The authors wish to thank the Spanish MCYT for financial support; project number CGL2006-07132/BTE.
References 1. Kressler, F.P., Franzen, M., Steinnocher, K.: Segmentation based classification of aerial images and its potential to support the update of existing land use data bases. Commission VI, WG VI/4 (2005)
2. Tatem, A.J., Lewis, H.G., Atkinson, P.M., Nixon, M.S.: Multiple-class land-cover mapping at the sub-pixel scale using a Hopfield neural network. International Journal of Applied Earth Observation and Geoinformation 3(2), 184–190 (2001) 3. Ishida, H., Inainura, M.: Improvement of IKONOS Images with Estimation of Spatial Information by Neural Networks (IEEE) 2316–2318 (2001) 4. Burbridge, S., Zhang, Y.: A neural network based approach to detecting urban land cover changes using Landsat TM and IKONOS imagery. In: 2nd GRSS/ISPRS Joint Workshop on “Data Fusion and Remote Sensing over Urban Areas”, pp. 157–161 (2003) 5. Han, J., Lee, S., Chi, K., Ryu, K.: Comparison of Neuro-Fuzzy, Neural Network, and Maximum Likelihood Classifiers for Land Cover Classification using IKONOS Multispectral Data. IEEE, pp. 3471–3473 (2002) 6. Richards, J.A., Jia, X.: Remote Sensing Digital Image Analysis, 3rd edn. Springer, Berlin (1999) 7. Haralick, R.M., Shanmugam, K., Dinstein, I.: Textural features for image classification. IEEE Transactions on Systems, Man and Cybernetics 6(3), 610–621 (1973) 8. Swain, P.H., Davis, S.M.: Remote sensing: The Quantitative Approach. McGraw-Hill, New York (1978) 9. Jensen, J.R.: Introductory Digital Image Processing. Prentice-Hall, New York (1996)
Deriving a Priori Co-occurrence Probability Estimates for Object Recognition from Social Networks and Text Processing*
Guillaume Pitel, Christophe Millet, and Gregory Grefenstette
CEA-LIST, Fontenay-aux-Roses, France
Abstract. Certain components in images can be recognized with high accuracy, for example, backgrounds such as leaves, grass, snow, sky, water. These components provide the human eye with context for identifying items in the foreground. Likewise for the machine, the identification of background should help in the recognition of foreground objects. But, in this case, the computer needs explicit lists of object and background co-occurrence probabilities. We examine two ways of deriving estimates of these a priori object co-occurrence probabilities: using an online social network of people storing annotated images, FlickR; and using variations on co-occurrence frequencies in natural language text. We show that the object co-occurrence probabilities derived from both sources are very similar. The possibility of using non-image derived semantic knowledge drawn from text processing for object recognition opens up possibilities of mining a priori probabilities for a much wider class of objects than those found in manually annotated collections.
1 Introduction The worlds of language and images mix when one asks such questions as “what search terms should I use to find an image I want” or “what words can I associate with this image”. In the field of image processing, object identification seeks to automate part of this language-image connection, usually by training over already annotated images. This training involves using words that have been manually assigned to known images or image segments to allow a computer to tag new images containing similar visual characteristics with these same words [1][20]. In this article we examine whether information gathered from pure text processing (text without images) could possibly aid object identification in images. Our hypothesis is that certain real-world object-to-object associations can be learned from text processing and that these associations should be useful for improving object identification in images. Humans regularly use background characteristics as predictors for foreground object identification [5], and some backgrounds can be identified with high accuracy by image processing [14]. If the a priori probabilities of finding a given foreground object against a certain background were known, image processing methods *
This research was funded by a grant from the Fondation Jean-Luc Lagardère http://www.fondation-jeanluclagardere.com.
could use this data to choose between candidate interpretations of objects, preferring a zebra in a savanna rather than a polar bear. These association probabilities can be learned from hand-tagged images, given enough images, but the quantity of text available on the web is orders of magnitude greater than the number of hand-tagged images. For example, though there are nearly 700,000 Flickr images tagged with 'horse', there are over 37 million web pages containing the word 'horse'. Text promises to be a much richer and more readily available resource, with a greater coverage, than annotated images for finding a priori co-occurrence probabilities. If information gathered from text processing is to supplement what can be learned from hand-tagged images, we need to verify that we can learn the same type of information from both sources. In what follows, we compare object and background co-occurrence statistics from these two distinct sources, from processing non-image text found on the Web to co-occurrence data available at Flickr, the popular image-sharing social network. The annotations that Flickr users add to their pictures are not controlled, but the quantity of annotated images found there is much greater than any research testbed (30 million annotated Flickr images compared to large research testbeds, such as COREL and CLIC [13], which contain a few tens of thousands of controlled-annotation images). Our article is structured as follows: in Section 2, we show what can be gained from associating objects and background in object recognition for some sets of objects; in Section 3, we compare the co-occurrence statistics that can be derived from two independent sources: FlickR annotations and web text processing. In Section 4, we discuss results that show that both independent sources have a high level of correlation, but non-image text found on the web possesses the desirable trait of having a much wider object coverage. From this we conclude that text processing should be useful for deriving information needed to improve object recognition in image processing.
2 Contextual and Extra-Pictorial Information 2.1 Background Contextual Information for Object Identification It is widely acknowledged that contextual information helps in the visual identification of a particular object. Information about background, color, and other objects present in the scene are good guides. One is more likely to find a horse in a field or in a forest than in a library. This intuition is corroborated in both psychological studies [4] and in the performance of automatic image recognition systems [17][3]. Most uses of contextual information to-date have been focused on exploiting tagged images, often using all parts of an annotated image to train a classifier on a particular label tagging the scene. Given the right image collection, one can learn that the visual patterns for a horse co-occur more with the patterns for trees and for grass than with the patterns for books on a shelf. Beyond simple, global co-occurrence of labels, context can include the relative positions of objects in an image. In [12][9], relative spatial relationships between parts of an image are taken into account in the learning features. Including this spatial relation improves both the identification of background contexts and of foreground objects: a background composed by sand on the bottom of 1
http://wang.ist.psu.edu/docs/home.shtml
the image, water in the middle and sky at the top, can easily be identified as a beach. A picture of objects like a car often contains parts presenting the same spatial relations (here, two wheels intersecting a body, surrounding window glass). Background image identification is one task in which image processing can achieve a high accuracy. In [14] Oliva and Torralba propose a method using envelopes that achieves an accuracy of 89% over 8 different backgrounds using 250-300 training examples per category. The system described in [19] achieves a score of 77% for classification on 6 categories, with only 100 training examples. In [7], a Bayesian hierarchical model is presented that performs background identification over 13 classes, with a score of 76%. This method requires only 100 images for each category. These methods require human annotations of several properties on a large number of images, which might be considered as a hindrance in scaling up these methods.

2.2 Extra-Pictorial Information
Little research has been conducted on using information derived from text processing (text unassociated with images, that is) to help image processing. One example is the use of WordNet, a hand-built lexical resource, to filter out irrelevant indexing terms [10]. It is evident that humanly recognized meanings found in text and images are intertwined, and computational manifestations of this intermingling will play a larger role as methods of automatic annotation of images mature. Our hypothesis is that the co-occurrence of objects in images is similar to the co-occurrence of mentions of objects in text. If this is true, since textual resources are more widely available than images, and since automatically processing text is much less expensive than manually labeling images, one can imagine replacing the learning of prior probabilities from hand-tagged images with learning similar probabilities from text processing, increasing coverage and achieving the same improvement at a cheaper cost.

2.3 Potential Object Recognition Improvements
As an illustration of how a priori co-occurrence probabilities, including those computed from text, might serve to improve object identification, consider the confusion matrix, presented in Figure 1, that comes from an animal identification system using segmentation and visual signatures. The first row shows us that a zebra is sometimes recognized as a polar bear. The third row shows that lions are sometimes identified as rattlesnakes, jellyfish, penguins or polar bears. It is perfectly possible to find a rattlesnake or a lion in a desert region or in a region with sparse bushes. Contextual semantic information will certainly not help in this case. But finding a lion on a seashore, or finding a zebra on a floating ice block, is less plausible. We evaluated the potential gain of exploiting co-occurrence information about objects and backgrounds. Table 1 shows the intuitive associations between each animal and a background (taken from a short list: field, desert, water, forest, polar, savanna). Using this hand-made association table as a filter, we find that 59 out of 101 main confusions (shaded squares) shown in Figure 1 can be eliminated: if the class proposed by the identification system cannot appear in one of the possible backgrounds of the actual image class, then the proposition is rejected. For example, if the identification system classifies an object as a husky or a hare, but the background is identified as a forest, then the hare is preferred.
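The filtering rule just described can be sketched as follows (our own illustration, not the authors' code; the excerpt of the association table is hypothetical, since Table 1 is hand-built, and the background label is assumed to come from a separate background classifier).

```python
# Hedged sketch: reject candidate animal labels that are implausible for the
# background identified in the image, falling back to the raw top choice.
POSSIBLE_BACKGROUNDS = {            # illustrative excerpt of a hand-made table
    "zebra": {"savanna", "field"},
    "polar_bear": {"polar", "water"},
    "husky": {"polar", "field"},
    "hare": {"field", "forest"},
}

def filter_candidates(candidates, detected_background):
    """candidates: labels ranked by the classifier, best first."""
    kept = [c for c in candidates
            if detected_background in POSSIBLE_BACKGROUNDS.get(c, set())]
    return kept[0] if kept else candidates[0]

print(filter_candidates(["husky", "hare"], "forest"))   # -> "hare"
```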
Fig. 1. Confusion matrix obtained from multi-class SVM classification trained on 900 images (30 images each of 30 different animals) and tested on 600 images (20 images per animal). Rows represent the actual classes of the images, while columns represent the classes proposed by the classifier. The darkness of the square corresponds to the frequency of identification.
Table 1. A manually built, binary, Objects/Backgrounds co-occurrence matrix. The dots represent expected Object/Background co-occurrences in images. [Matrix layout: one row per animal (zebra, rhino, lion, giraffe, walrus, seal, polar_bear, penguin, husky, woodpecker, squirrel, wild_boar, gorilla, deer, whale, sea_urchin, jellyfish, dolphin, clownfish, scorpion, oryx, jackal, dromedary, african_elephant, rattlesnake, sheep, kangaroo, horse, hare, cow) and one column per background (field, desert, water, forest, polar, savanna); a dot marks each expected animal/background pairing.]
•
Deriving a Priori Co-occurrence Probability Estimates for Object Recognition
513
While the actual improvement one would obtain using this method depends on the sparseness of the object/background association table, the background recognition score and the background confusion matrix, this manual experiment indicates that a priori object-object (or object/background) co-occurrence probabilities should help in some object identification. These probabilities might be estimated from tags of handlabeled images (see [6] and below), or from co-occurrence of words in imageless text. We next test to see whether co-occurrence statistics from two such populations, image and text, provide similar results. We compare “general” text found on the web, with or without associated images, with “image-specific text” found on the FlickR social community web site. There is no evaluation of the accuracy of FlickR's annotations at the time we write this article, but other researchers have used the FlickR website as a training resource for automatic image annotation [18] or as a testbed for image retrieval [8].
3 Experiment 3.1 Experiment Resources We manually created two sets of words: one containing 120 object-evoking words or terms (max. 3 words) (animals) and one with 34 background-evoking words or expressions (max. 2 words). These two sets correspond to the complete setting in the experiments below. A simplified setting (using only single word expressions) was also defined containing 82 animal words and 32 background words. We have used these sets to extract co-occurrence statistics from two independent sources: FlickR and Exalead. FlickR is a social network for photography, which allows users to tag, describe and share photos with the internet community. FlickR also proposes two simple search mechanisms: search in tags or in full text descriptions. For each of these modes, usual Boolean operators are available: AND, OR, () and NOT. Exalead is a search engine over 8 billion web pages. We preferred it over other popular search engine that may have a greater coverage, because of the reliability and stability of its page count results2. Exalead allows for the usual combination of operators to be used in queries: AND, OR, ( ), NOT, but also more powerful operators such as NEAR (words must be less than 16 words away from each other), NEXT or even OPT for optional words. The NEAR operator is particularly interesting in our case, since we expected better results from more linguistically-aware counts3. Both FlickR and Exalead provide us with co-occurrence counts for the elements of our object and background sets. 3.2 Experimental Procedure We queried FlickR and Exalead for two statistics: counts of individual expressions (object or background-evoking) and counts of co-occurring expressions (object/ 2
3
See [2] and [11] for an overview of the problems related to the use of large-scale search engines to find word and term frequencies. Co-occurrence in a 16-words window certainly can not be considered as deep linguistic information, but it is still better than a simple document-based co-occurrence. We plan to use linguistically-enriched methods in future experiments.
background pairs). We submitted four different co-occurrence queries: Exalead (abbrev.: ExaDoc) where co-occurrences were at the document level, ExaleadNear (abbrev.: ExaNr) using the NEAR operator, FlickRTags (abbrev.: Ftag) where terms co-occurred in the tags associated with an image, and FlickRTexts (abbrev.: Ftxt) where terms co-occurred in FlickR image descriptions. From these statistics we aim to estimate the conditional probability of finding a particular object given the background. This conditional probability is independent of the population (see equation 1). We use this equation since we do not know the count of the population. Even though we have rough estimates of the total number of pages indexed in Exalead, or the number of photographs in FlickR, we cannot estimate the reference population when querying for pairs of words in a co-occurrence window, nor do we know the number of tagged photographs and the average number of tags per photograph, nor the number of textually described photographs.
p(object | background) = |Obj ∩ Bgd| / |Bgd| .    (1)
Since the correlation between two variables is significant only when the variables are independent, it is not possible to base our evaluation on the correlation of conditional probabilities. Indeed, in each resource we use, conditional probabilities are based on data of the same language, where word distribution is not equiproportional and is strongly determined. We considered using Pointwise Mutual Information (equation 2) in order to eliminate this dependency, but this would require knowledge of the cardinality of the population, which is not possible in our situation. Since we aim at finding the correlation between two resources, we use rank correlation. For this rank correlation to be equal to the PMI's rank correlation, we must use a function F(x,y) satisfying the inequality in equation 3. Since we are interested in the correlation of PMI for a given background, the right member of the PMI is the same in both terms of the inequality. Function F (equation 4) is the function we finally chose for the experiment, since it does not depend on the cardinality of the population.
PMI(x, y) = log( p(x, y) / (p(x) p(y)) ) = log( p(x | y) / p(x) ) = log( (|X ∩ Y| · |Ω|) / (|X| · |Y|) ) .    (2)
PMI(x₁, y) > PMI(x₂, y)  ⇔  F(x₁, y) > F(x₂, y) .    (3)
F(obj, bgd) = |Obj ∩ Bgd| / |Obj| .    (4)
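A small sketch of how F and the per-background rank correlation can be computed from raw hit counts follows (our own illustration, not the authors' scripts; the count dictionaries and their structure are assumptions, and SciPy's spearmanr is used for the rank correlation).

```python
# Hedged sketch: F(obj, bgd) = |Obj ∩ Bgd| / |Obj| from hit counts, and the
# per-background Spearman rank correlation between two sources.
# Each source is a dict: {"obj": {animal: hits}, "pair": {(animal, bgd): hits}}.
from scipy.stats import spearmanr

def f_scores(source, animals, bgd):
    return [source["pair"].get((a, bgd), 0) / max(source["obj"].get(a, 0), 1)
            for a in animals]

def background_correlation(src_a, src_b, animals, bgd):
    rho, p_value = spearmanr(f_scores(src_a, animals, bgd),
                             f_scores(src_b, animals, bgd))
    return rho, p_value
```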
4 Results
The graphical representation of the correlations between F(obj, bgd) applied on ExaleadNear against FlickRTags and FlickRTexts (respectively), for the whole set of object/background pairs in the simplified setting, is presented in Figure 2.
Fig. 2. Graphical correlations: x = F(objExaNr,bgdExaNr), y = F(objftags,bgdftags) on the left; x = F(objExaNr,bgdExaNr),y = F(objftxts,bgdftxts) on the right. Logarithmic scale on x and y axis. Each dot is painted with a 10% opacity.
Both figures appear to present a correlation (the apparently stronger correlation of ExaleadNear/FlickRTexts is mainly due to a different scale on the y axis). It is also clear that both FlickRTags and FlickRTexts present a bias in their distribution: one small group of dots appears above the main group in both settings, and one very small group appears below in FlickRTexts.

Table 2. Average Spearman's rank correlations of F between sources in the complete and simplified settings

Combination    Complete   Simplified
ExaDoc/Ftxt    0.29       0.21
ExaDoc/Ftag    0.28       0.21
ExaNr/Ftxt     0.345      0.35
ExaNr/Ftag     0.24       0.38
The average Spearman's correlation (based on the rank of F(obj, bgd) for a given background) presented in Table 2 confirms the intuitively observed correlation in the simplified setting between the ExaleadNear resource and the FlickR resources: both correlations are significant (p < 0.01). In the complete setting, only ExaleadNear/FlickRTexts is significant with p < 0.01; the others are significant with p < 0.05. Multi-word expressions present in the complete setting and removed in the simplified setting have a negative impact for the pair ExaleadNear/FlickRTags, while their impact on the other pairs is not significant. A detailed analysis of Spearman's correlation per background, presented in Table 3, shows that, as one could have expected, backgrounds that are less likely to be involved in animal-related photographs present lower correlations. For instance, "attic", "basement", "bathroom", "bedroom", "brick" and "kitchen" present correlations whose significance never has a probability less than 0.1. This might be because the animals in our list do not have a meaningful correlation with those rooms in a house.
All figures and data were obtained using the GNU R System [15] and the lattice package [16].
Table 3. Detailed Spearman's rank correlations (ρ) of pointwise mutual information between sources in the simplified setting. Numbers are formatted as: crossed-out ⇔ not significant, no formatting ⇔ p < 0.05, italic (orange in electronic version) ⇔ p < 0.01, bold (green in electronic version) ⇔ p < 0.001. p is the probability that the correlation happens by chance.

Backgrounds    ExaDoc/Ftxt  ExaDoc/Ftag  ExaNr/Ftxt  ExaNr/Ftag
attic          0.08         0.05         0.33        0.21
basement       0.15         0.12         0.32        0.30
bathroom       -0.06        0.03         0.02        0.22
beach          0.22         0.19         0.37        0.39
bedroom        -0.08        0.06         0.01        0.08
branch         0.36         0.31         0.51        0.53
brick          0.01         0.03         0.24        0.20
building       0.19         0.19         0.35        0.32
bush           0.22         0.15         0.37        0.32
cloud          0.24         0.32         0.19        0.27
desert         0.24         0.37         0.40        0.45
flower         0.20         0.29         0.22        0.42
foliage        0.33         0.13         0.46        0.31
forest         0.33         0.33         0.32        0.33
grass          0.10         0.06         0.29        0.28
grassland      0.33         0.52         0.55        0.60
ground         0.37         0.19         0.47        0.40
kitchen        -0.13        0.08         0.11        0.30
leaf           0.22         0.21         0.24        0.45
leaves         0.23         0.15         0.36        0.33
mountains      0.33         0.28         0.56        0.62
mud            0.17         0.11         0.32        0.43
road           0.48         0.39         0.51        0.49
rock           0.31         0.30         0.29        0.43
sand           0.14         -0.01        0.45        0.22
savanna        0.11         0.24         0.35        0.44
sky            0.25         0.34         0.38        0.45
snow           0.21         0.09         0.40        0.48
swamp          0.37         0.36         0.56        0.58
tree           0.30         0.32         0.40        0.48
trunk          0.29         0.19         0.38        0.41
underwater     0.31         0.30         0.56        0.52
Table 4. A list of the most common backgrounds found for some common animals using each co-occurrence statistic

Animal        Most common backgrounds (ExaleadNear)    Most common backgrounds (Flickr tags)
zebra         grassland, rock, grass, road             sand, savanna, grass, road
rhino         rock, snow, desert, road                 sand, rock, tree, mud
lion          snow, savanna, grassland, desert         rock, cloud, savanna, tree
giraffe       flower, grass, forest, mud               savanna, desert, grassland, grass
swan          road, snow, beach, rock                  forest, sky, flower, rock
seagull       beach, sky, sand, rock                   beach, sand, sky, cloud
penguin       building, snow, rock, beach              cloud, underwater, beach, snow
shark         building, underwater, sand, beach        underwater, sand, beach
woodpecker    tree, forest, swamp, rock                tree, leaf, forest, trunk
Table 4 shows the most common locations found near a random selection of animals, using statistics from both web text (ExaleadNear) and annotated images (Flickr tags).
5 Conclusion and Perspectives
Using very limited and shallow text processing tools, we have shown a significant correlation between a priori probability estimates computed from image-related tags and from general text (using 16-word-window co-occurrences). Since general text provides information about object/background conditional probabilities with much higher coverage than specialized but limited resources like FlickR and other ad hoc collections of manually tagged images, these results strongly indicate that general text processing can be used to extract and estimate a priori object/background conditional probabilities. The next step of our research will be to connect a background identification system and an object identification system with probabilities computed from general text. Afterwards, we plan to improve the text-to-a-priori-probabilities part using deeper linguistic analysis, such as named entity (city names, proper nouns) removal, part-of-speech tagging, and low-level parsing.
References 1. Barnard, K., Duygulu, P., de Freitas, N., Forsyth, D., Blei, D., Jordan, M.I.: Matching words and pictures. Journal of Machine Learning Research 3, 1107–1135 (2003) 2. Bar-Ilan, J.: Expectations versus reality - Search engine features needed for Web research at mid 2005. Int. Journal of Scientometrics, Informetrics and Bibliometrics 9(1) (2005) 3. Carbonetto, P., de Freitas, N., Barnard, K.: A Statistical model for general contextual object recognition. In: Pajdla, T., Matas, J(G.) (eds.) ECCV 2004. LNCS, vol. 3021, pp. 350–362. Springer, Heidelberg (2004) 4. Davon, D.: Forest before the trees: the precedence of global features in visual perception. Cognitive Psychology 9, 353–383 (1977) 5. De Graef, P., Christiaens, D., d’Ydewalle, G.: Perceptual effects of scene context on object identification. Psychological Research 52, 317–329 (1990) 6. Duygulu, P., Barnard, K., de Freitas, N., Forsyth, D., Jordan, M.I.: Object recognition as machine translation: Learning a lexicon for a fixed image vocabulary. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) ECCV 2002. LNCS, vol. 2353, pp. 97–112. Springer, Heidelberg (2002) 7. Fei-Fei, L., Perona, P.: A Bayesian Hierarchical Model for Learning Natural Scene Categories. In: Proc. IEEE Computer Society International Conference on Computer Vision and Pattern Recognition (2005) 8. Gonzalo, J., Karlgren, J., Clough, P.: iCLEF 2006 Overview: Searching the Flickr WWW photo-sharing repository. In: Proceedings of CLEF 2006 workshop. LNCS, vol. 4730, pp. 186–194 (2006)
9. Hoiem, D., Efros, A.A., Hebert, M.: Putting Objects in Perspective. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2137–2144 (2006) 10. Jin, Y., Khan, L., Wang, L., Awad, M.: Image annotations by combining multiple evidence & wordNet. In: Proceedings of the 13th Annual ACM international Conference on Multimedia, pp. 706–715 (2005) 11. Kilgarriff, A.: Googleology is Bad Science. Computational Linguistics 33(1), 147–151 (2007) 12. Millet, C., Bloch, I., Hede, P., Moellic, P.A.: Using relative spatial relationships to improve individual region recognition. In: Proceedings of EWIMT (2005) 13. Moëllic, P.A., Hède, P., Grefenstette, G., Millet, C.: Evaluating Content Based Image Retrieval Techniques with the One Million Images CLIC TestBed. In: Proc. World Enformatika Congress, pp. 171–174 (2005) 14. Oliva, A., Torralba, A.: Modeling the shape of the scene: a holistic representation of the spatial envelope. Int. Journal of Computer Vision 42 (2001) 15. R Development Core Team: R: A language and environment for statistical computing (2007) 16. Sarkar, D.: lattice: Lattice Graphics. R package version 0.14–17 (2007) 17. Torralba, A., Murphy, K.P., Freeman, W.T., Rubin, M.A.: Context-based vision system for place and object recognition. In: Proceedings Ninth IEEE International Conference on Computer Vision pp. 273–280 (2003) 18. Tuffield, M., Harris, S., Dupplaw, D.P., Chakravarthy, A., Brewster, C., Gibbins, N., O’Hara, K., Ciravegna, F., Sleeman, F., Shadbolt, N.R., Wilks, Y.: Image annotation with Photocopain. In: Proceedings of the Fifteenth World Wide Web Conference (2006) 19. Vogel, J., Schiele, B.: A semantic typicality measure for natural scene categorization. In: Proceedings of DAGM04 Annual Pattern Recognition Symposium (2004) 20. Wang, L., Khan, L.: Automatic image annotation and retrieval using weighted feature selection. Multimedia Tools Appl. 29(1), 55–71 (2003)
3D Face Reconstruction Under Imperfect Tracking Circumstances Using Shape Model Constraints
H. Fang and N. P. Costen
Manchester Metropolitan University, Department of Computing and Mathematics, John Dalton Building, Chester Street, Manchester M1 5GD, U.K.
[email protected],
[email protected]
Abstract. We propose a factorization structure from motion (SfM) framework which employs 3D active shape constraints for a 3D face model application. Two types of shape model, individual shape models and a generic model, are used to approximate non-linear manifold variation. When the 3D shape models are trained, they help the SfM algorithm to reconstruct the 3D face structure under noisy observation (tracking) circumstances. By minimizing two sets of errors, the reconstruction error generated by the linear transform of the shape models and projection error obtained by re-projecting the 3D shape to 2D positions, the 3D face shapes can be recovered optimally. Experimental results show that this algorithm accurately reconstructs the 3D shape of familiar and non-familiar faces from video sequences under circumstances of imperfect face tracking or noisy observations.
1 Introduction 3D face models share a number of advantages compared with 2D face models, being significantly less sensitive to a number of sources of facial variations, especially those from changes in pose. Thus a typical approach is to build 3D face models based on data obtained from 3D acquisition equipment [1,2]. However, due to the difficulty and expense of capturing data using these devices, reconstructing 3D face models from video sequences is also a useful technique. Given a monocular image sequence, Structure from Motion (SfM) has become a popular algorithm for recovering the 3D shape of objects present and moving in the scene. SfM employs geometric principles to recover camera motion and scene structure from correspondences which can be tracked across the sequence. There are a number of methods which use different techniques, such as epipolar constraints [3], trifocal tensor constraints [4] or rank constraints [5],to remove the inevitable ambiguity induced when 3D objects are projected onto a 2D plane. Across a range of SfM work, the factorization method [5] has proved its efficiency and robustness. This uses rank constraints on a measurement matrix composed of the
This research was supported by EPSRC grants EP/D056942 and EP/D054818 to N. P. Costen, T. F. Cootes, K. Lander and C. J. Taylor.
complete set of 2D positions of a group of salient points. If the objects in the scene are rigid and static, the rank of the measurement matrix is equal to 3. More recently, non-rigid 3D shape reconstruction has become a subject of intensive investigation [6,7,8]. These algorithms generally deal with the motion of non-rigid objects, such as a face undergoing expressive change. They can be regarded as a method of unifying the 3D active shape model [9] and the SfM framework. Although the SfM framework is able to recover the 3D shape if the 2D observed correspondences are perfectly matched throughout the whole sequence, the recovered shape will be irregular if the correspondence match is damaged by significant noise or if it is impossible to track some of the salient points as a consequence of the weak edge contrast typically present in faces. In these cases, 3D shape recovery by non-rigid shape recovery algorithms is further impaired because a combination of linear basis functions can fit a set of 2D observations in a more flexible manner than the traditional factorization algorithms. In our proposed algorithm, we design an improved factorization method by exploiting face shape constraints to refine the 3D face shape recovery. Unlike the general SfM framework, where the shapes of the objects are unknown, we use the face shape constraints to further smooth the recovered structure, which is initialized from noisy observations. When an initial shape has been obtained from the general SfM framework, it is fed into a set of 3D shape models and re-projected back to the 2D domain. The 3D shape is reconstructed by minimizing both the reconstruction and projected errors. The contributions of this algorithm are highlighted as: (1) the 3D shapes recovered by SfM are used to build a set of specific shape models and a generic shape model, and (2) these shape models are used as extra constraints to recover the 3D shape in the presence of noise. The rest of the paper is organized as follows: Section 2 presents the background of our work. Section 3 gives the proposed algorithm in detail. Section 4 shows the experimental results. Conclusions and future plans are drawn in Section 5.
2 Background
2.1 Factorization Method
Tomasi and Kanade [5] assume an orthographic projection model for 3D scene reconstruction. Following the orthographic model, each of the feature points in the 3D domain can be projected into the 2D domain with 6 parameters. Assuming that any translation has been removed,

\begin{pmatrix} u_{i,j} \\ v_{i,j} \end{pmatrix} = \begin{pmatrix} R_{j,11} & R_{j,12} & R_{j,13} \\ R_{j,21} & R_{j,22} & R_{j,23} \end{pmatrix} \begin{pmatrix} X_i \\ Y_i \\ Z_i \end{pmatrix}    (1)

where X_i, Y_i and Z_i represent the 3D position of the i-th salient point, R represents the rotation matrix, and u_{i,j} and v_{i,j} are the 2D projection of the i-th point for frame j. Therefore, a group of salient points can be represented by a matrix operation,
\begin{pmatrix} u_{00} & \cdots & u_{0n} \\ v_{00} & \cdots & v_{0n} \\ u_{10} & \cdots & u_{1n} \\ v_{10} & \cdots & v_{1n} \\ \vdots & & \vdots \\ u_{m0} & \cdots & u_{mn} \\ v_{m0} & \cdots & v_{mn} \end{pmatrix} = \begin{pmatrix} R_{0,11} & R_{0,12} & R_{0,13} \\ R_{0,21} & R_{0,22} & R_{0,23} \\ \vdots & & \vdots \\ R_{m,11} & R_{m,12} & R_{m,13} \\ R_{m,21} & R_{m,22} & R_{m,23} \end{pmatrix} \begin{pmatrix} X_0 & \cdots & X_n \\ Y_0 & \cdots & Y_n \\ Z_0 & \cdots & Z_n \end{pmatrix}    (2)
where m represents the number of frames and n represents the number of salient points. In general, the equation is expressed as

Q = RP + T    (3)
where R represents the rotation matrix, P is the recovered shape and T is the translation of the points in the 2D plane. Matrix Q thus has a rank of 3 and can be decomposed. However in practice, noise in the 2D point locations forces Q to be a full rank matrix, but it can still be decomposed using the SVD algorithm to identify the 3 largest orthogonal components and discarding any others [5]. While there are manifold solutions for subsequent recovering of the shape and motion, these all exploit the metric constraints to obtain a corrective matrix, fixing the solution. 2.2 Person Specific Models The modeling of facial variation has been intensively investigated over recent years. A wide range of face model methods have been proposed to analyze and synthesize faces. Popular techniques include Eigenfaces [10], the Active Shape Model (ASM) [9] and the Active Appearance Model (AAM) [11]; these have been widely used and extended to many face applications, and normally involve the construction of generic generative models based on different subjects, face poses and expressions. However, these generic models often have difficulty achieving identity-consistent parameters for novel individuals [12]. This may be a consequence of overfitting of models and difficulties in obtaining suitably balanced ensembles, although there are techniques for overcoming these problems [13,14]. A number of studies have proposed the use of person specific models, which are built to model the variation of a single person across poses and expressions. Torres et. al [15] proposes a self-eigenface approach for face recognition. This is based on the observation that the face reconstruction error from a subspace representation of the same subject should be smaller than that from subspaces of different subjects. As a result, twelve specific spaces are built by a normal Eigenface algorithm [10]. The reconstruction error is used to recognize the subject from the gallery. Liu et. al [16] also mentions a similar idea which they call an individual PCA approach. They use this method to reconstruct the individual subspaces of the optical flow, rather than the eigenface-based texture. Subsequently, they treat the residue as a feature for a face authentication system. Person specific models have a number of advantages. This has been shown to be the case for a 2D AAM [12]. They are easier to build, easier to fit and provide more accurate
models for familiar faces than a comparable generic model-based system. Even when modeling unfamiliar faces, they can work well in many cases. However, a robust face processing system must also be able to accommodate unfamiliar faces, including the construction of novel person-specific models to describe them. One technique, pursued here, is to combine an array of specific models with a single generic model. Therefore, the proposed algorithm not only extends the person specific models into a 3D morphable model but also keeps the system robust when processing unfamiliar faces.
3 Proposed Algorithm The objective of our algorithm is to recover the 3D face structure optimally by using face shape constraints when the observations of face tracking are noisy and most faces are familiar to the system. We first employ the factorization method to recover the 3D shape of all subjects in our gallery. Subsequently, both individual shape models and a generic shape model are trained and used as extra constraints when the 3D face shape needs to be recovered in noisy circumstances. The overall algorithm is shown in Figure 1.
Fig. 1. The overall data-flow of the proposed algorithm
Although the proposed algorithm shares some similar aspects with the ideas proposed in [17], it has several differences concerning building the 3D morphable model and processing the constraints. First, the 3D model is trained by recovering a number of 3D shapes using a classical SfM algorithm which makes the model more reliable and robust. Secondly, we implement the shape constraints via a model selection mechanism to combine the advantages of the person specific models and generic model. The details of the face model construction and evaluation of the shape recovery will be described in the following subsections.
3.1 3D Shape Model Construction from SfM According to [5,8], rotation constraints are sufficient to reconstruct 3D structural information in the case of a rigid-shape object, with rigid motion when the correspondences of the salient points are tracked correctly. Thus the 2D tracking points are labeled manually before building the 3D shape models; the 68 salient points, defined in [11], are used to track the face, shown in Figure 2.
Fig. 2. Manually labeled salient points, tracked across pose changes
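The factorization step that turns these tracked 2D points into a 3D shape can be sketched as follows (our own illustration, not the authors' implementation): the centred tracks are stacked into a measurement matrix, its SVD is truncated to rank 3, and motion and shape factors are read off. The metric-constraint correction that removes the remaining affine ambiguity, and the sign and scale disambiguation discussed next, are omitted.

```python
# Toy rank-3 factorization of a measurement matrix (Tomasi-Kanade style).
# tracks: array of shape (2*m_frames, n_points) holding stacked (u; v) rows.
import numpy as np

def factorize(tracks):
    # Remove the per-frame translation by centring each row.
    w = tracks - tracks.mean(axis=1, keepdims=True)
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    # Keep the three largest components; noise makes the remaining ones non-zero.
    motion = u[:, :3] * np.sqrt(s[:3])            # (2m, 3) rotation-like factor
    shape = np.sqrt(s[:3])[:, None] * vt[:3]      # (3, n) recovered structure
    return motion, shape                          # defined up to an affine ambiguity
```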
Two forms of ambiguity still exist when the 2D observations are fed into the SfM framework to recover the 3D shape. The first concerns ambiguity on the sign of the coordinates and the second on their scale. We assume the depth value of the nose tip is positive to remove the sign ambiguity, as the center of the world coordinates is set at the center of gravity on the faces in our work. We further set a fixed distance and focal length for collecting the video data so that we can ignore the scale ambiguity. After removing these ambiguities, the structure of the face can be reconstructed as shown in Figure 3. The 3D shapes of individuals, each with a range of different expressions, are used to train the person specific shape models and the generic shape model. The 3D shape is represented with a shape vector, s = (x1 , y1 , z1 , ..., xn , yn , zn )T
(4)
and PCA is applied to keep most of the shape variations with the first m eigenvectors. Assuming s is the average shape and Φ is the matrix of the first m eigenvectors, the 3D face shape s can be approximated as ˆ s = Φc + s
(5)
where c represents the coefficients of the shape in the eigenvector domain. 3.2 Structure Recovery by Shape Constraints Both person specific shape models and a generic shape model are built to provide extra constraints, smoothing the face structure obtained from the SfM algorithm. If a familiar
Fig. 3. 3D facial shape recovery based on the factorization method. All three axes are arbitrary pixel units.
face is under reconstruction, the person specific shape models play an important role in recovering the shape correctly, because both the reconstruction error and the projection error through that individual model are sufficiently small. When an unfamiliar face needs to be recovered, the specific models could lead to large shape deviation. To avoid this, a generic shape model is applied to constrain the recovery. The reconstruction error E_r is defined as the Euclidean distance between the input shape s and the reconstruction shape ŝ obtained from Equation 5. The projection error E_p is defined as the Euclidean distance between the observations and the projection of the reconstruction shape generated by the specific models:

E_r = ||s − ŝ||²    (6)

E_p = (1/F) Σ_{j=1}^{F} ||o_j − Proj(ŝ_j)||²    (7)
where F represents the number of frames, o_j is the observation in the j-th frame and Proj() represents the projection of the 3D shape ŝ onto the 2D image. If both of the errors generated by a specific model are smaller than those generated by the other specific models, the shape reconstructed by this model is regarded as the correct recovered shape. Otherwise, the shape will be reconstructed by the generic model. With this mechanism, the noisy recovered shape can be refined by the extra shape constraints. The evaluation of this mechanism is described in Section 4.
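The pieces defined in Sections 3.1 and 3.2 can be made concrete with a short sketch (our own, not the authors' code): a PCA shape model for Equation 5, the two errors of Equations 6 and 7, and a simplified version of the specific-versus-generic selection rule. Here `project_2d` is a placeholder for the camera projection, and the selection is reduced to ranking models by the sum of the two errors rather than requiring a model to win on both.

```python
# Hedged sketch: PCA shape model (Eq. 5), errors (Eqs. 6-7) and model selection.
import numpy as np

def build_shape_model(shapes, n_modes=10):
    """shapes: (n_samples, 3*n_points); returns mean shape and eigenvector matrix."""
    s_bar = shapes.mean(axis=0)
    _, _, vt = np.linalg.svd(shapes - s_bar, full_matrices=False)
    return s_bar, vt[:n_modes].T                  # Phi: (3n, m)

def reconstruct(s, s_bar, phi):
    c = phi.T @ (s - s_bar)                       # model coefficients
    return s_bar + phi @ c                        # s_hat

def model_errors(s, observations, model, project_2d):
    s_bar, phi = model
    s_hat = reconstruct(s, s_bar, phi)
    e_r = np.sum((s - s_hat) ** 2)                                   # Eq. (6)
    e_p = np.mean([np.sum((o - project_2d(s_hat, j)) ** 2)
                   for j, o in enumerate(observations)])             # Eq. (7)
    return e_r, e_p

def select_model(s, observations, specific_models, generic_model, project_2d,
                 rec_threshold):
    # Simplification of the paper's rule: rank specific models by the summed
    # errors and fall back to the generic model above a heuristic E_r threshold.
    best = min(specific_models,
               key=lambda m: sum(model_errors(s, observations, m, project_2d)))
    e_r, _ = model_errors(s, observations, best, project_2d)
    return best if e_r < rec_threshold else generic_model
```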
4 Experimental Results In order to evaluate the efficiency of the proposed algorithm, we test it on both synthetic and real data. The synthetic data is generated by adding several levels of random Gaussian noise to the manually located tracking observations, allowing specific manipulations. For evaluation of the complete system, an Active Appearance Model (AAM) is used to track the salient points and so generate real observations. Both sets of data
are assessed using Root Mean Square (RMS) error, comparing the reconstruction error with the ground truth derived from the manually labeled observations.

4.1 Synthetic Noise Level Effects
Experiments concerning the stability of the facial shapes extracted by the algorithm were conducted by adding known quantities of Gaussian noise to the 2D point-locations, before using SfM to extract the 3D configuration. Reconstruction accuracy was then assessed by measuring the RMS error between the projected and actual 3D locations of the feature points. The accuracy of the basic SfM algorithm and the advantages of regularizing the point-locations via a shape model are shown in Figure 4. This compares the RMS error for point-sets with equal levels of noise, with and without the projection of the recovered points through the generic and specific shape models. While in the case of the original SfM algorithm the RMS error increases directly with the noise level, projecting through the shape model removes any significant change. As the noise level increases, so does the advantage of projecting through the appropriate specific shape model. Note that the 'noisy tracking' zero-noise point is the reference with which the other points are compared, and thus is recorded as having an error of zero although in fact, given the error-prone manual labeling, the shape will be distorted. The minimal RMS variance of approximately 1 pixel reflects human point-placement variation.
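The noise study can be sketched as follows (our own illustration; `recover_shape` is a placeholder for the SfM recovery, optionally followed by the shape-model regularization, and the list of noise variances is an argument).

```python
# Hedged sketch: perturb the labelled 2-D points with zero-mean Gaussian noise
# of increasing variance and measure the RMS deviation of the recovered 3-D
# feature points from the ground truth.
import numpy as np

def rms_error(recovered, ground_truth):
    """Both arguments: (n_points, 3) arrays of 3-D feature locations."""
    return np.sqrt(np.mean(np.sum((recovered - ground_truth) ** 2, axis=1)))

def noise_curve(points_2d, ground_truth_3d, recover_shape, variances, seed=0):
    rng = np.random.default_rng(seed)
    curve = []
    for var in variances:
        noisy = points_2d + rng.normal(0.0, np.sqrt(var), size=points_2d.shape)
        curve.append((var, rms_error(recover_shape(noisy), ground_truth_3d)))
    return curve
```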
[Plot: average RMS error (pixels) against Gaussian noise level (variance, 0-40) for three conditions: noisy tracking, generic model, specific models.]
Fig. 4. Comparison of 3D shape recovery accuracy with several levels of noise and different noise-reduction techniques
4.2 Shape Model Selection Algorithm
The shape model used to regularize the point-locations is selected by a combination of the model reconstruction error and the error of re-projecting the 3D shape back to 2D. This needs to both correctly identify familiar individuals, and also pass unfamiliar individuals to the generic model. In Table 1, the selection mechanism is evaluated on both familiar and unfamiliar faces with several facial expressions and noise levels. Faces were rendered 'unfamiliar' by a leave-one-out procedure. It should be noted that it is not absolutely wrong to choose a specific model for an unfamiliar face (or to select the generic model for a familiar face), since the aim here is to regularize the shape, rather than identify the individual. However, such responses are recorded as negative in order to make the evaluation objective. Overall, the accuracy can be maximized by minimizing both the reconstruction error and the re-projection error and setting a heuristic threshold on the reconstruction error. However, the threshold limit can be ignored if a large number of specific models are involved. The minimum entropy optimization threshold algorithm [18] is used to set the reconstruction error threshold heuristically for the model selection mechanism. The appropriate model, either specific or generic, can be chosen properly by minimizing the entropy of the probability of classifying the familiar faces and the unfamiliar faces.

Table 1. Performance of the specific-model selection algorithm

Criteria          Familiar  Unfamiliar  Rate
Minimal Error 1   40/40     0/40        50%
Minimal Error 2   40/40     0/40        50%
Minimal Both      40/40     10/40       67.5%
Proposed          39/40     39/40       97.5%
In this table, "Error 1" represents the reconstruction error defined in Equation 6 and "Error 2" is the re-projection error defined in Equation 7. "Rate" is the percentage of cases in which the system correctly chooses a person-specific model across the different expression and pose conditions.

4.3 Real Tracking Test
To evaluate the efficiency of the entire algorithm, an Active Appearance Model [11] is used to track the salient points in 4 image sequences, allowing testing under relatively realistic tracking circumstances. When building the AAM, approximately 450 images of several different subjects are used, drawn from a range of poses across approximately 30° around fronto-parallel, which avoids the occlusion of salient points. The AAM is then used to track a subject with four different expressions in four image sequences, not included in the training set of the AAM model but included in the data set used to build the 3D specific and general shape models. The model selection mechanism chooses the most appropriate model for the shape regularization (see Figure 1). The RMS errors of the shapes recovered by SfM with the different regularization procedures, compared with the ground truth, are shown in Table 2.
Table 2. Shape recovery performance (RMS) using AAM-derived point correspondences and different regularization procedures

                             Seq 1   Seq 2   Seq 3   Seq 4
AAM Tracking                 7.23    6.86    7.21    15.46
Generic Model Constraints    5.85    6.10    5.42    14.83
Specific Model Constraints   2.18    1.64    1.89    12.37
It can be seen from these results that the reconstruction error is smaller when the extra model constraints are used in the SfM framework, although the tracking algorithm sometimes fails to find the correct position. The Specific Model condition errors for Sequences 1-3, in the region of 1.6-2.2 pixels, are comparable with those seen when veridical hand-labeled data is entered into the regularization scheme (see Figure 4), and reflect perturbations in the hand-labeled points making up the various models. Although the AAM cannot track the salient points well in Sequence 4, resulting in an RMS error of about 15 pixels for the 3D recovered shape, our proposed mechanism can still reduce the 3D recovery noise via the shape constraints.
5 Conclusions and Future Plans 3D shape reconstruction from monocular sequences is an important topic for building 3D face models. Instead of using 3D acquisition devices, the 3D data can be recovered from common video sequences. The factorization method, one of the family of SfM algorithms, is a technique for recovering the 3D shape from a 2D moving sequence. It uses rotation constraints to remove ambiguity, although these constraints are not enough to approach the correct solution when the salient points cannot be tracked perfectly. Therefore, in our proposed algorithm we put appropriate shape constraints on the noisy observations to get a refined face shape recovery. Based on experiments, we have shown that individual 3D shape models supply good constraints to reconstruct the model optimally for familiar faces. In cases where an individual is unfamiliar, a generic shape model can supply similar, but less effective constraints. The specific models provide a piecewise non-linear solution to the problem of modeling the face-variation manifold, embedded within higher-dimensioned space described by the generic model. In future work, these shape constraints will be further improved to recover the 3D face shape while the face undergoes non-rigid motion by adding generic model constraints. Specifically, constraints will be added to ensure that facial identity parameters are constant while the face shape distorts. These decomposed parameters will be derived from a face model which has been generated completely automatically.
References 1. Blanz, V., Vetter, T.: A morphable model for the synthesis of 3D faces. In: Computer Graphics. In: Proceedings of SIGGRAPH, pp. 187–194 (1999) 2. Lu, X., Jain, A.K., Colbry, D.: Matching 2.5D face scans to 3D models. IEEE Transations on Pattern Analysis and Machine Intelligence 28, 31–43 (2006)
3. Zhang, Z., Deriche, R., Faugeras, O., Luong, Q.T.: A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry. Artifical Intelligence Jounal 78, 87–119 (1995) 4. Hartley, R.: Lines and points in three views - a unified approach. In: Proceedings of the ARPA IU Workshop, vol. 2, pp. 1009–1016 (1994) 5. Tomasi, C., Kanade, T.: Shape and motion from image streams under orthography: a factorization method. International Journal of Computer Vision 9, 137–154 (1992) 6. Bregler, C., Hertzmann, A., Biermann, H.: Recovering non-rigid 3d shape from image streams. In: Proceedings of the IEEE International Conference on Computer Vision & Pattern Recognition, vol. 2, pp. 690–696 (2000) 7. Torresani, L., Hertzmann, A., Bregler, C.: Learning non-rigid 3D shape from 2D motion. In: Proceedings of Advances in Neural Information Processing Systems, pp. 1555–1562 (2003) 8. Xiao, J., Chai, J.X., Kanade, T.: A closed-form solution to non-rigid shape and motion recovery. International Journal of Computer Vision 67, 233–246 (2006) 9. Cootes, T.F., Taylor, C.J.: Active shape models. In: Proceedings of the British Machine Vision Conference, pp. 266–275 (1992) 10. Turk, M., Pentland, A.: Face recognition using eigenfaces. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–591 (1991) 11. Cootes, T.F., Edwards, G.J., Taylor, C.J.: Active appearance models. IEEE Transations on Pattern Analysis and Machine Intelligence 23, 681–685 (2001) 12. Gross, R., Matthews, I., Baker, S.: Generic vs. person specific active appearance models. Image and Vision Computing 23, 1080–1093 (2005) 13. Costen, N.P., Brown, M., Akamatsu, S.: Sparse models for gender classification. In: Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, pp. 201–206 (2004) 14. Costen, N.P., Cootes, T.F., Taylor, C.J.: Compensating for ensemble-specific effects when building facial models. Image and Vision Computing 20, 673–682 (2002) 15. Torres, L., Vila, J.: Automatic face recognition for video indexing applications. Pattern Recognition 35, 615–625 (2002) 16. Liu, X., Chen, T., Kumar, B.V.K.V.: Face authentication for multiple subjects using eigenflow. Pattern Recognition 36, 313–328 (2003) 17. Xiao, J., Baker, S., Matthews, I., Kanade, T.: Real-time combined 2d+3d active appearance models. In: Proceedings of the IEEE International Conference on Computer Vision & Pattern Recognition, vol. 2, pp. 535–542 (2004) 18. Ross, T.: Fuzzy Logic with Engineering Applications. John Wiley & Sons Ltd, Chichester (2004)
A Combined Statistical-Structural Strategy for Alphanumeric Recognition

N. Thome and A. Vacavant

LIRIS - UMR 5205, Université Lumière Lyon 2
5, avenue Pierre Mendès-France, 69676 Bron cedex, France
{nicolas.thome,antoine.vacavant}@liris.cnrs.fr
Abstract. We propose an approach dedicated to recognizing characters from binary images by a hybrid strategy. A statistical method is used to identify the global shape of each alphanumeric symbol. The recognition is managed by a Hierarchical Neural Network (HNN) that is able to deal with topological errors in the contour extraction. This strategy is extremely efficient for the majority of the classes: the recognition rate reaches about 99.5%. However, the performance decreases noticeably for 'similar' characters, e.g. '8'/'B'. In that case, we adopt a strategy that revolves around decomposing the characters into structural elements. The Reeb graph generated from the binary images and a simple polygonal approximation make it possible to capture relevant topological and geometrical features. The classification stage is carried out by a boosting algorithm.
Keywords: Fourier Descriptors, Hierarchical Neural Network, Reeb Graph, AdaBoost, Statistical-Structural Recognition
1 Introduction
We are here interested in the topic of character recognition from binary images of symbols, in the context of License Plate Recognition. The automatic identification of vehicles by the content of their license plates is an important research area that has been widely studied over the last two decades. Many industrial Car License Plate Recognition systems have emerged since the eighties, and commercial OCRs are now available at very affordable prices with impressive recognition rates (98–99%). However, these solutions implicitly make various assumptions that might be hard to fulfil in real environments, or that limit the system to some specific configurations. In practice, the development of reliable and robust systems for License Plate Recognition remains a very challenging task. Character recognition is the heart of the approach, and must be able to face various kinds of binary images corresponding to potentially degraded characters. For example, a limited resolution of the recorded characters in the image has to be managed, as well as dirty plates, screws and bolts, etc. The main challenge consists in finding a compromise between robustness and accuracy of the recognition. To fulfil this goal, we propose a hybrid strategy that combines the advantages of statistical and structural approaches.
State of the Art. The character recognition part is the heart of the LPR system. It is composed of two main steps: feature extraction and feature classification. Concerning the methodology, we can distinguish two kinds of strategies: statistical and structural approaches. In statistical approaches, the input pattern is composed of a set of N features, and the classification stage can be seen as a statistical decision process. A general review of shape descriptors is beyond the scope of the paper, and the reader can refer to [1]. For character recognition applications, the most basic feature that can be considered corresponds to the overall extracted image [2]. Semantically richer representations are commonly used: Fourier Descriptors [3], Curvature Scale Space [4], Zernike moments [5], Hu's moments [6], etc. The strength of statistical methods is their robustness to noise. However, some small differences between similar patterns may be difficult to distinguish, and therefore similar characters may be hard to recognize. On the other hand, structural approaches decompose the patterns into simpler primitives and exploit the properties of the primitives and/or their relationships. Structural methods are closer to the human recognition strategy [7], and are more powerful for recognizing similar patterns. They are, however, more sensitive to noise. There are numerous approaches to decompose a given pattern into constitutive parts, but a general state of the art goes beyond our goal. The vectorization methods constitute an example that is closely related to our purpose. These techniques can be divided into several classes, varying with the final application of the method and the background culture of the authors [8,9]. We will only focus on a few kinds of raster-to-vector methodologies, largely developed for document analysis applications. In the run-length based methods, the run-length encoding (RLE) decomposes the image into elongated pixels along one axis, from which a line adjacency graph (LAG) can be built [10]. The skeletonization and thinning methods are surely the most widely employed methods in vectorization. A survey of vectorization methods based on skeletons can be found in [11]. The aim is to compute a medial axis of the object that minimally represents its shape. The Hough transform models lines directly and is another common way to detect lines and their intersections in an image [12]. Here, no relation is established between the detected lines, and thus the topology of the recognized object cannot be deduced by this approach.
Contribution. We propose to use a hybrid approach for recognition, combining the advantages of statistical and structural methods. For the statistical recognition module (section 2.1), a hierarchical neural network analyzes the Fourier Descriptors extracted from the binary image. Similar to the system developed in [13], we notice that the recognition rate is very good for the majority of the classes, but decreases markedly for 'similar' characters. In these cases, structural approaches are better adapted to discriminate the characters. Instead of detecting existing lines in the image, as in the LAG or with the Hough transform, we choose to first represent the topology of the objects by constructing the Reeb graph [14]. We also prefer not to modify the geometry of the objects, as the skeletonization process does. Thus, we use an efficient vectorization algorithm based on discrete geometry tools (section 2.2). Section 3 illustrates
the efficiency of the approach by evaluating the performance of the statistical and structural classifiers. Finally, section 4 concludes the paper and proposes directions for future work.
2 Proposed Approach

2.1 Statistical Character Recognition
2.1.1 The Fourier Descriptors
The Fourier Descriptors provide a discriminative signature of the contour of an object [15]. The N points of the contour X(i) = {x(i), y(i)}, i ∈ {1; N}, in the plane can be considered as complex numbers. The Fourier Descriptors A(k) are determined by computing the Discrete Fourier Transform of the function X:

A(k) = \frac{1}{N} \sum_{i=0}^{N-1} X(i)\, e^{-j 2\pi k i / N}, \qquad k \in \left[ -\frac{N}{2}; \frac{N}{2} \right] \quad (1)
The coefficients A(k) represent the discrete contour of a shape in the frequency domain. The general shape of the object is represented by the lower-frequency descriptors. Thus, it is possible to keep only a subset of the lowest-frequency coefficients to extract a robust shape signature. Fourier Descriptors have been used intensively for shape matching applications and seem well suited to our optical character recognition purpose. They prove to be superior to alternative representations, such as autoregressive models [16], CSS or Zernike moments [17]. Normalization Process. Normalization is performed to provide a feature that only encodes shape. Invariance to translation and scale is obtained by resizing the binary image. Moreover, we do not require rotation invariance, in order to discriminate '6' from '9'. The set of Fourier coefficients is also dependent on the choice of the starting point. This is normalized in the frequency space. We use an approach similar to that proposed by Arbter et al. [18]. After this normalization the starting point of the reconstructed contour is approximately at angle 0 (see [19]). Figure 1 illustrates the Fourier Descriptors computation and normalization.
Fig. 1. Fourier Descriptors: initial binary images and reconstructed contours
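As a concrete illustration of Equation (1), the following Python sketch (not part of the original paper) computes a truncated Fourier Descriptor signature from a binary character image. The use of OpenCV 4 for contour extraction, the resized image size and the number of retained coefficients are assumptions; the starting-point normalization of [18] is omitted.

```python
import numpy as np
import cv2  # assumed dependency (OpenCV 4) for contour extraction

def fourier_descriptors(binary_img, n_coeffs=20, size=(64, 64)):
    """Sketch of Equation (1): DFT of the complex-valued outer contour.
    Resizing the binary image gives a rough scale normalization, as in the
    text; rotation is deliberately NOT normalized ('6' vs '9')."""
    img = cv2.resize(binary_img.astype(np.uint8), size,
                     interpolation=cv2.INTER_NEAREST)
    contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).squeeze()   # N x 2 contour points
    z = contour[:, 0] + 1j * contour[:, 1]                   # X(i) as complex numbers
    A = np.fft.fft(z) / len(z)                               # A(k) of Equation (1)
    # Keep only the lowest-frequency coefficients (positive and negative k);
    # the DC term A(0), which encodes the contour centroid, is discarded.
    return np.concatenate([A[1:n_coeffs + 1], A[-n_coeffs:]])
```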
2.1.2 The Classifier
The classification task is performed by means of a hierarchy of neural networks. The architecture is shown in Figure 2. At the first level, three different
perceptrons, which we denote Cl1, Cl2 and Cl3, are used. They are intended to recognize the external contour and the two possibly existing internal ones. The second level of the hierarchy takes as input the results provided by the three previous neural networks and carries out a final recognition, resulting in an output vector quantifying the probability of each possible character. We justify the chosen architecture in Section 2.1.2.2. The classifier Cl1 is dedicated to the outer contour analysis.
Fig. 2. Classifier architecture for OCR
It includes the 35 different alphanumeric symbols, with a single class for '0' and 'O', and is extended with topological error characters (see Figure 3). The Cl2 classifier is devoted to identifying the single inner contour of the '0', '4', 'A', 'D', 'Q', the upper contour of '8' and '9' (similar shape), and the upper contour of 'B', 'R' and 'P'. We again introduce classes with topological errors, corresponding to '8' and 'B' for which a single connected component has been extracted. Moreover, a 'default' class encodes the absence of a connected component. Finally, the Cl3 classifier analyzes the shape of the lower inner contour of '8' and '6', and the lower contour of 'B'. The second level of the hierarchy proceeds in a similar way to the first level. For a given input binary image, the classification outputs from Cl1, Cl2 and Cl3 constitute the input of the second-level neural network. 2.1.2.1 Neural Network Parameters. The internal architecture of each neural network is the following. The activation function for each neuron is a sigmoid. The fundamental parameters of the neural network are its weights, which are iteratively updated during the training scheme. Similar to an increasing number of recent works [20], we choose to develop a hybrid strategy, combining the
accuracy of the gradient-based algorithms and the capacity of Genetic Algorithms to localize global minima. The evolutionary approach is used to initialize the weights for the backpropagation algorithm. 2.1.2.2 Discussion. We justify here the proposed architecture of the classifier and, in particular, its relevance for real applications. Decoupling the training between the three first-level classifiers and the second-level one brings two major advantages. Firstly, the shape information inherent to each inner/outer contour is wholly capitalized on. For example, although '6' and 'Z' differ from each other by the presence of an inner contour, we want the recognition step to be able to discriminate them by means of the outer contour perceptron alone. Secondly, the hierarchy makes it possible to robustly learn characters with topological differences. For example, it is possible to properly identify alphanumeric symbols with missing inner contours, as illustrated in Figure 3. These situations are actually common because the binary images come from an extraction step included in a global License Plate Recognition framework, which is prone to failure.
Fig. 3. Sensitivity of the Fourier Descriptors to topological differences: initial binary images and reconstructed contours
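To make the two-level hierarchy concrete, here is a minimal sketch assuming scikit-learn MLPs as stand-ins for the perceptrons; the layer sizes are assumptions, the GA-based weight initialization of [20] is not reproduced, and the four networks are assumed to have already been trained on labelled contour descriptors.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier  # assumed stand-in for the perceptrons

# First-level classifiers: outer contour, upper inner contour, lower inner contour.
cl1 = MLPClassifier(hidden_layer_sizes=(40,), activation='logistic')   # sigmoid units
cl2 = MLPClassifier(hidden_layer_sizes=(20,), activation='logistic')
cl3 = MLPClassifier(hidden_layer_sizes=(20,), activation='logistic')
final = MLPClassifier(hidden_layer_sizes=(40,), activation='logistic')  # second level

def classify_character(fd_outer, fd_inner_up, fd_inner_low):
    """Concatenate the probability vectors of Cl1, Cl2 and Cl3 and feed them
    to the second-level network (all four networks assumed already fitted)."""
    p1 = cl1.predict_proba([fd_outer])[0]
    p2 = cl2.predict_proba([fd_inner_up])[0]
    p3 = cl3.predict_proba([fd_inner_low])[0]
    return final.predict([np.concatenate([p1, p2, p3])])[0]
```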
2.2 Structural Analysis
As presented in the previous section, we have chosen a structural approach based on discrete geometry tools to distinguish ambiguous characters in our system (e.g. '8'/'B') when the Neural Network classifier detects such a sample. We propose to describe the topology and the shape of such an example with an algorithm inspired by [21]. More exactly, this process computes the Reeb graph [14] of the binary object I and a simple polygonal structure inside I. For now, we deal with classifying '8'/'B' samples, but a similar reasoning could be applied to other cases (e.g. '2'/'Z'). First, it can be noticed that those two characters really differ in the left part of the binary image I. Thus, we have decided to treat only this part of I, denoted IL. As the letter is centered in the example, only half of I has to be considered. We also choose an order to treat the pixels of IL, e.g. from left to right and from top to bottom. Then, the algorithm Struct_Analysis() can be sketched as follows (see also Figure 4): 1. Group pixels of IL to build irregular pixels (or cells) that are as large as possible. We denote those k cells {Ci}i=1,k. This part could be compared to a Run Length
Encoding scheme, where we have decided to merge together pixels with the same height. For a 'B' example, the cells should be larger than in the '8' case. While this incremental process is performed, we build our topological and geometrical representation: 2. In the Reeb graph GIL, the nodes link connected components of IL, which are (possibly thick) arcs of this object. Considering the order we have chosen before, a 'B' starts with one component and finishes with three. So, in the general case, a 'B' example has one begin node in GIL (denoted by b in Figure 4) and three end nodes (e in this figure); a small counting sketch is given after Figure 4. 3. The polygonal structure PIL is computed by tracing the longest line segments through the arcs of IL, according to the graph GIL. They respect the extended supercover model [22], which means that those line segments lie inside the recognized object. There may be more points in PIL if IL represents an '8'.
Fig. 4. The elements computed by the algorithm Struct_Analysis(IL): the input image I, its left half IL, the cells {Ci}i=1,k with the polygonal structure PIL, and the Reeb graph GIL. The Reeb graph and the polygonal structure are depicted for "perfect" letters '8' and 'B'. Nodes b and e are the beginning and the end of arcs in IL, while s and m represent respectively splitting and merging operations on those arcs.
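As a hypothetical illustration of how the begin nodes of GIL can be counted, the following Python sketch scans IL column by column and opens a new arc whenever a vertical run of foreground pixels overlaps no run in the previous column; the scanning order and the run-overlap test are assumptions rather than the authors' exact implementation.

```python
import numpy as np

def count_begin_nodes(I_L):
    """Count arcs that *begin* while scanning I_L from left to right: a run of
    foreground pixels in a column that overlaps no run of the previous column
    opens a new arc, i.e. a begin node (b) of the Reeb graph. According to the
    text, a 'perfect' B yields a single begin node."""
    h, w = I_L.shape
    begins = 0
    prev_runs = []                          # (top, bottom) intervals of previous column
    for x in range(w):
        col = np.asarray(I_L[:, x], dtype=bool)
        runs, y = [], 0
        while y < h:                        # extract vertical runs of this column
            if col[y]:
                y0 = y
                while y < h and col[y]:
                    y += 1
                runs.append((y0, y - 1))
            else:
                y += 1
        for (top, bot) in runs:             # a run with no overlap in the previous
            if not any(top <= pb and bot >= pt for (pt, pb) in prev_runs):
                begins += 1                 # column starts a new arc
        prev_runs = runs
    return begins
```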
One could compare this process with classical Line Adjacency Graph (or LAG) based methods [10], but our final reconstruction with line segments naturally contains fewer points than in this kind of methodology. Moreover, a similar remark holds for the Reeb graph. Thus, we propose a fast algorithm that does not need post-processing and is convenient for real-time applications. For other letters, we would adapt this approach by treating another part of the binary image and thus by choosing another orientation. For example, in the '2'/'Z' case, only the top part of the images mainly differs, and we choose a top-to-bottom orientation for Struct_Analysis(). In our system, we have implemented a structural classifier based on the AdaBoost algorithm [23], since we have two classes C1 and C2 in our problem (i.e. C1 = 'B', C2 = '8').
Considering the remarks we have presented in the description of the algorithm Struct_Analysis(), let V be the feature vector defined as follows:

V^T = \begin{pmatrix} \#\{\, p = (x, y) \in P_{I_L} \,\} \\ \max_{i=1,\dots,k} S(C_i) \\ \#\{\, n = b,\; n \in G_{I_L} \,\} \end{pmatrix} \quad (2)

where S(Ci) is the surface of the cell Ci, and n is abusively denoted as a begin node in GIL. Given a constant number N of examples per class, we first train the AdaBoost classifier with 2N feature vectors concatenated in a matrix M. M is determined by a simple combinatorial algorithm based on Hill Climbing [24] and permits computation of the best classification rate over the test sample. In the next section, we illustrate the performance of our structural approach and the behavior of this optimization scheme.
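A minimal sketch of this classification stage, using scikit-learn's AdaBoost as a stand-in for the boosting implementation of [23]; the estimator count, the helper names and the way the three features are supplied are assumptions.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def feature_vector(n_polygon_points, cell_surfaces, n_begin_nodes):
    """Equation (2): number of polygon vertices in P_IL, largest cell surface,
    and number of begin nodes in the Reeb graph (e.g. from the earlier
    count_begin_nodes sketch)."""
    return np.array([n_polygon_points, max(cell_surfaces), n_begin_nodes], float)

def train_structural_classifier(X, y):
    """X: 2N feature vectors (N per class), y: labels 'B' / '8'."""
    clf = AdaBoostClassifier(n_estimators=50)   # assumed parameterization
    clf.fit(X, y)
    return clf

# Usage sketch:
# clf = train_structural_classifier(X_train, y_train)
# label = clf.predict([feature_vector(12, surfaces, 1)])[0]
```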
3 Performances
Figure 5 presents the results of the statistical recognition. The performances are evaluated using a cross-validation procedure, each classifier being trained using 50 examples for each of its output classes. In Figure 5(a), we can notice that the recognition rate is about 96%. More importantly, it must be pointed out that a large disparity exists among the different classes, the error rate for the similar characters ('8'/'B', '2'/'Z', '0'/'D'/'Q', and 'V'/'Y') being far higher than for the other classes. If we do not take these characters into account, the recognition rate reaches 99.5% (Figure 5(b)). This confirms the ability of the hierarchical neural network to accurately discriminate the majority of the classes. We insist here on the fact that the binary images to classify come from an extraction and a binarization step. Thus, topological differences after the contour extraction are common, and the hierarchical architecture makes it possible to robustly learn and recognize these variations. However, Figure 5(b) also illustrates the limitation of the Fourier Descriptors in discriminating similar patterns. Indeed, these classes only differ locally and are distinguishable by salient features, such as right angles, that are lost once only the lowest-frequency terms are retained. In Figure 6, we show an example of the behavior of our combinatorial optimization scheme during 10,000 iterations (about 2 hours). This experiment is performed over a base of about 446 '8' and 'B' images, and we have fixed N = 50. Then, the AdaBoost classifier learns the best-rate base (2N = 100 images) and runs the classification over the 336 other images. After a few iterations, the global rate is higher than the one obtained by the neural network approach (about 85%), and it increases to more than 90% after a hundred iterations (1.5 minutes). The best score (more than 93%, see Table 1) is obtained in 25 minutes in this example. A user may thus choose the quality of the structural classification and obtain a coarse result (but better than the neural network) in a few seconds, or let the algorithm run until the classification is accurate enough. In Table 1, we depict the best rates of structural classification for the same base of 'B' and '8' images.
Fig. 5. Global performances of the statistical classifier: (a) performances for all classes; (b) performances without similar patterns
Fig. 6. An example of the run of our classifier optimization algorithm during 10 000 iterations (about 2 hours). Individual ’8’ and ’B’ recognition rates are depicted in dashed lines, and the global rate in solid lines
Table 1. Recognition rates, depending on the method used and the set of images treated

             Neural Network   Structural classification
'8' images       76.1 %              92.9 %
'B' images      86.96 %             95.28 %
Global          80.61 %             93.64 %
We can notice that the recognition rate is higher with the structural classifier for both the '8' and the 'B' image sets. Over the whole image base, the rate increases by 13% and reaches more than 93%.
4 Conclusion and Future Works
We propose an original hybrid approach for Optical Character Recognition in the context of a License Plate Recognition system. A statistical method analyzes the extracted Fourier Descriptors with a Hierarchical Neural Network. Decoupling the training of the two levels of the hierarchy makes the approach robust to noise and over-fitting, and offers the capacity to properly identify alphanumeric symbols with topological errors. The statistical recognition performances are impressively good for the majority of the classes, but decrease noticeably for 'similar' characters. The lack of locality of the Fourier Descriptors and the high-frequency terms that are ignored during the feature extraction step constitute the origin of the problem. Thus, for these characters we use a structural approach that extracts discriminative local features to perform the recognition. Thanks to discrete geometrical tools, we represent important topological and geometrical features of the treated binary images. In this article, the error rate for '8'/'B' is shown to be reduced by more than 13%. A perspective for future work is to validate the structural approach for the other classes of similar patterns.
References
1. Zhang, D., Lu, G.: Review of shape representation and description techniques. Pattern Recognition 37(1) (2004)
2. Bremananth, R., Chitra, A.: A robust video based license plate recognition system. ICISIP (2005)
3. Shridhar, M., Badreldi, A.: High accuracy character recognition algorithm using Fourier and topological descriptors. Patt. Recogn. 17(5), 515–524 (1984)
4. Kopf, S., Haenselmann, T., Effelsberg, W.: Enhancing Curvature Scale Space Features for Robust Shape Classification. In: Proceedings of IEEE International Conference on Multimedia and Expo (ICME 2005), Amsterdam, The Netherlands (2005)
5. Broumandnia, A., Shanbehzadeh, J.: Fast Zernike wavelet moments for Farsi character recognition. Image Vision Comput. 25(5), 717–726 (2007)
6. Cash, G., Hatamian, A.: Optical character recognition by the method of moments. 39(3), 291–310 (1987)
7. Pavlidis, T.: 36 years on the pattern recognition front. Pattern Recognition Letters 24, 1–7 (2003)
8. Cordella, L.P., Vento, M.: Symbol recognition in documents: a collection of techniques? IJDAR 3(2), 73–88 (2000)
9. Hilaire, X., Tombre, K.: Robust and accurate vectorization of line drawings. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(6), 890–904 (2006)
10. Burge, M., Kropatsch, W.: A minimal line property preserving representation of line images. Computing 62(4), 355–368 (1999)
11. Lam, L., Lee, S.W., Suen, C.Y.: Thinning methodologies - a comprehensive survey, 61–77 (1995)
12. Kälviäinen, H., Hirvonen, P., Xu, L., Oja, E.: Probabilistic and non-probabilistic Hough transforms: overview and comparisons. Image Vision Comput. 13(4), 239–252 (1995)
13. Pan, X., Ye, X., Zhang, S.: A hybrid method for robust car plate character recognition. Engineering Applications of Artificial Intelligence, 963–972 (2005)
14. Reeb, G.: Sur les points singuliers d'une forme de Pfaff complètement intégrable ou d'une fonction numérique. Comptes Rendus de l'Académie des Séances, 847–849 (1946)
15. Peerson, E., Fu, K.: Shape discrimination using Fourier descriptors. Trans. Syst. Man, Cybern. SMC-7(3), 170–179 (1977)
16. Kauppinen, H., Seppanen, T., Pietikainen, M.: An experimental comparison of autoregressive and Fourier-based descriptors in 2D shape classification. IEEE Transactions on Pattern Analysis and Machine Intelligence 17(2), 201–207 (1995)
17. Zhang, D., Lu, G.: Content-based shape retrieval using different shape descriptors: A comparative study. ICME, 289 (2001)
18. Arbter, K., Snyder, W.E., Burhardt, H., Hirzinger, G.: Application of affine-invariant Fourier descriptors to recognition of 3-D objects. IEEE Trans. Pattern Anal. Mach. Intell. 12(7), 640–647 (1990)
19. Folkers, A., Samet, H.: Content-based image retrieval using Fourier descriptors on a logo database. In: ICPR 2002: Proceedings of the 16th International Conference on Pattern Recognition (ICPR 2002), vol. 3. IEEE Computer Society, Washington, DC, USA (2002)
20. Devi, S., Panda, S., Shyam, S., Pattnik, D., Khuntia, B.: Initializing artificial neural networks by genetic algorithm to calculate the resonant frequency of single shorting post rectangular patch antenna. In: IEEE Antennas and Propagation Society International Symposium, pp. 144–147 (2003)
21. Vacavant, A., Coeurjolly, D., Tougne, L.: Topological and geometrical reconstruction of complex objects on irregular isothetic grids. In: Kuba, A., Nyúl, L.G., Palágyi, K. (eds.) DGCI 2006. LNCS, vol. 4245, pp. 470–481. Springer, Heidelberg (2006)
22. Coeurjolly, D.: Supercover model and digital straight line recognition on irregular isothetic grids. In: Andrès, E., Damiand, G., Lienhardt, P. (eds.) DGCI 2005. LNCS, vol. 3429, pp. 311–322. Springer, Heidelberg (2005)
23. Schapire, R.: The boosting approach to machine learning: An overview (2001)
24. Selman, B., Levesque, H.J., Mitchell, D.G.: A new method for solving hard satisfiability problems. In: AAAI, pp. 440–446 (1992)
The Multiplicative Path Toward Prior-Shape Guided Active Contour for Object Detection

Wei Wang and Ronald Chung

Department of Mechanical and Automation Engineering
The Chinese University of Hong Kong, Shatin, Hong Kong, China
{wangwei,rchung}@mae.cuhk.edu.hk
Abstract. In detecting the boundary of an object in an image, if certain prior shape knowledge of the object is available, an effective approach is to have the intensity gradient information in the image and the prior shape knowledge combined to drive an active contour. While in the classical methods the two terms are almost always summed with a certain weight between them to form the optimization functional, in the method we propose they are multiplied together, so as to avoid the need for, and thus the design of, the weight parameter. We show that the object detection result of the traditional formulation can indeed be very much affected by the weight value, and that the proposed method, in which the parameter is absent, is therefore free from its influence. Experimental results on cells in real biological images, whose boundaries are blurred to very different degrees across the image by the inevitably uneven illumination, are shown to demonstrate the improvement in performance.
1 Introduction
Ever since the seminal work on active contours by Kass et al. [1], there have been a variety of active contour based methods for image segmentation, object detection and object tracking. Generally, these methods can be categorized as boundary based ones [1], [2], [3] and region based ones [4], [5], [6], [7], [8]. In the boundary based methods such as [1], [2], [3], the contour is driven by external forces to locate sites with large gradient magnitude, and meanwhile smoothed by internal forces. Since boundary based methods are sensitive to noise and the contour is generally confined to only a small neighborhood of its initial configuration, other methods based on a certain homogeneity measure of the regions enclosed by the contour have been proposed, examples of which are [5], [8]. In these methods, the mean value or the variance of the intensities in the regions inside and outside the contour are utilized to guide the active contour's evolution. However, region based methods are unsuitable for images in which regions cannot be readily distinguished by statistical characteristics.
Corresponding author.
Due to illumination or other factors, in some images the objects consist of rather inhomogeneous regions, and parts of their boundaries generally lack sharp intensity gradients. In these cases, boundary based and region based active contour methods both fail. To tackle the problem, a number of prior shape based active contour methods [9] [10] [11] [12] [13] [14] [15] have been proposed. The basic idea is to incorporate prior knowledge of the object's shape to guide the active contour's evolution: deviation from the prior shape is penalized. According to the features used, the prior shape guided active contour models can also be categorized into boundary based methods and region based ones. The methods in [9] [12] [13] [14] belong to the former category, and those in [15] [11] [10] belong to the latter. In this work, we aim to detect object boundaries in biological images. In such images, object boundaries like those of the cells shown in Fig. 1 and Fig. 2 are often blurred unevenly over their length by uneven illumination over different parts of the image. It is generally impossible for active contours to outline meaningful boundaries based upon region homogeneity or gradient features alone. However, a prior shape based active contour could be an effective approach, since prior shape knowledge of the object boundaries is available (cells, for example, often appear more or less oval in the image data). Here we propose a prior shape guided geodesic active contour model. The prior shape guided approach makes use of two terms, the intensity gradient and the distance of the contour to the prior shape, for outlining the object boundary. While in the classical prior shape guided models the two terms are almost always summed with a certain weight between them to form the optimization functional, in the method we propose they are multiplied together, so as to avoid the presence of the weight parameter. We show that the object detection result of the traditional formulation can indeed be very much affected by the value of the weight parameter, and that the proposed method, being without it, is therefore free from its influence. The remainder of the paper is organized as follows. Section 2 reviews the geodesic active contour model and previous prior shape guided geodesic active contour models. Section 3 analyzes the limitation of the previous methods and proposes a new prior shape guided geodesic active contour model. In section 4 we present experimental results on real biological images, where the cell boundaries are blurred to very different degrees over their length by the inevitably uneven illumination. Finally, we draw the conclusion and outline possible future work in section 5.
2 Review of Related Work
In this section, a brief review of previous work on the geodesic active contour model and the prior-shape guided active contour models is presented.
2.1 Geodesic Active Contour
The original active contour proposed by Kass et al. [1] is represented by a parametric (explicit) curve C(s) : [0, 1] → R^2 which evolves by locally minimizing the functional

E(C) = \int E_{ext}\, ds + \nu_1 \int |C_s|^2\, ds + \nu_2 \int |C_{ss}|^2\, ds \quad (1)

where C_s and C_ss denote the first and second derivatives of the contour position C with respect to the curve parameter s. The first term in Equation (1) is the external energy, which accounts for information from the image data. Generally it is defined as −|∇I|, in which I is the image intensity; the minimizing contour will then favor sites with large image gradient values. The last two terms – weighted by nonnegative parameters ν1 and ν2 – are the internal energy of the contour, measuring the length and the stiffness (or rigidity) of the contour respectively. The minimization of these two terms represents a preference for a contour that is compact, continuous, and smooth. Ever since the introduction of the level set technique [16], it has been intensively used in image segmentation, object detection, and tracking. In this technique, a contour in the image is represented as a level set of a continuous function defined on the image domain. Based upon such an idea, Malladi [17] designed a mechanism to embed the original 2D active contour's evolution in a 3D level set function's evolution. The model is named the geometric active contour. Caselles [3] further developed the active contour into a geodesic computation approach with the following formulation:

E(C) = \int_0^1 g(|\nabla I(C(s))|)\, |C'(s)|\, ds \quad (2)
in which the edge-detector function g is defined as

g = \frac{1}{1 + |\nabla G_\sigma * I|^p}, \qquad p \ge 1, \quad (3)

where G_σ is the Gaussian smoothing operator. In the level set formulation, Equation (2) can be represented as

E(\phi) = \int_\Omega g\, \delta(\phi)\, |\nabla \phi|\, dx \quad (4)
in which Ω is the image domain, δ is the Dirac measure and φ is the level set function which embeds the contour C. The minima of the functional correspond to smooth contours located at sites with large intensity gradient magnitude. Sometimes, a balloon force β can additionally be involved to expand or shrink the contour [3].
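A minimal sketch of the edge-detector function g of Equation (3), assuming NumPy/SciPy; the choice p = 2 and the smoothing scale are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_detector(image, sigma=1.5, p=2):
    """g = 1 / (1 + |grad(G_sigma * I)|^p), Equation (3):
    close to 0 on strong edges, close to 1 in smooth regions."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)          # gradients along rows (y) and columns (x)
    grad_mag = np.hypot(gx, gy)
    return 1.0 / (1.0 + grad_mag ** p)
```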
2.2 Prior-Shape Guided Active Contour
In the prior shape guided active contour models, the active contour’s evolution is further constrained by a prior shape contour. Generally it is realized by adding
to the optimization functional the difference between the active contour and the prior shape contour. Then the contour's evolution equation consists of a term from the geodesic measure and a term from the distance measure. As in Chen's work [9], the energy functional is generally defined as

E(\phi, \mu, R, T) = \int_\Omega \delta(\phi) \left( g + \frac{\lambda}{2}\, d^2(\mu R x + T) \right) |\nabla \phi|\, dx \quad (5)

where g is the edge-detector function, d is the distance function, λ is a nonnegative weight parameter, μ is a scale parameter, R is the rotation matrix and T is the translation vector. Thus the preferred location for the active contour is related to pixels with small g value and small d value. The pixel x on the zero level contour of the level set function is firstly transformed by μRx + T, then d measures the distance of the point μRx + T to the prior shape contour.
3 A Weight-Free Prior-Shape Based Method
In almost all the classical prior shape based methods, such as Chen's method [9], there is the following notion about the weight parameter λ. If a large λ is adopted, the contour will approach a shape close to the prior shape; however, g's effect will be inhibited, i.e., there is a higher chance for the active contour to stay inside a homogeneous region or across multiple homogeneous regions. On the contrary, if a small λ is adopted, the contour will more likely be located on edge pixels or intensity gradient extrema, at the expense, however, that the shape could be far from the prior shape. Since the magnitudes of the distance d and the edge-detector function g are generally not of the same scale range, it is nontrivial to strike the right λ.
Fig. 1. Oocyte cells of mouse
For images like that shown in Fig. 1, in which the magnitude of the intensity gradient has a large variance along the desired object's boundary, the difficulty is even greater. The image shows three oocyte cells imaged under the microscope, in which the outline of the membrane of each cell is desired. The illumination comes mainly from the lower left of the image. Hence the
lower left of every cell possesses higher intensities and suffers from a certain saturation effect, and appears more blurred than the rest of the membrane boundary. In the experiments presented later in this work, it is shown that if no prior knowledge is utilized in the detection process, both the region based and the boundary based active contour methods fail. If prior shape knowledge is to be used, one possible detection strategy is to use the prior shape and the intensity profile at the upper part of the boundary as the basis to retrieve the entire cell membrane. However, such a strategy faces a dilemma in choosing the value of the weight parameter λ. If it is desired to detect the real boundary at the upper part of the membrane, the λ value should be small; conversely, if it is desired to depend more on the prior shape to retrieve the lower part of the membrane, the λ value should be large. In the subsequent experimental section, we shall show how the results are affected by the choice of λ's value. To tackle the issue, here we introduce a prior shape guided geodesic active contour model whose formulation is different from that of the traditional ones. The major difference of the model from the traditional ones, say the one used in Chen's work [9], is that the integrand in the energy functional is the multiplication of the edge-detector function and the distance function, not their weighted sum. More precisely, the energy functional is defined as

E(\phi, \mu, R, T) = \frac{1}{2} \int_\Omega \delta(\phi)\, g\, d^2(\mu R x + T)\, |\nabla \phi|\, dx \quad (6)

in which Ω is the image domain, x is any point in the image domain, d(x) is the distance of the point x on the transformed zero level contour to the prior shape contour, φ is the level set function, and μ, R, T are the 2D similarity transformation parameters in the image space (between the detected shape and the prior shape). The gradient descent flows of the energy functional in Equation (6) are:
\frac{\partial \phi}{\partial t} = \frac{1}{2}\, \delta(\phi)\, \mathrm{div}\!\left( g\, d^2\, \frac{\nabla \phi}{|\nabla \phi|} \right) \quad (7)

with boundary condition \frac{\partial \phi}{\partial n} = 0, where n is the normal to ∂Ω, and

\frac{\partial \mu}{\partial t} = -\int_\Omega \delta(\phi)\, g\, d\, \nabla d \cdot (R x)\, |\nabla \phi|\, dx \quad (8)

\frac{\partial \theta}{\partial t} = -\int_\Omega \delta(\phi)\, g\, \mu\, d\, \nabla d \cdot \left( \frac{dR}{d\theta} x \right) |\nabla \phi|\, dx \quad (9)

\frac{\partial T}{\partial t} = -\int_\Omega \delta(\phi)\, g\, d\, \nabla d\, |\nabla \phi|\, dx \quad (10)
In the numerical processing, the energy functional is minimized by sequentially evolving the level set function φ, the scale parameter μ, the rotation parameter θ, and the translation vector T in each iteration. In a tracking task, the prior shape, R, T and μ can be obtained from the results on the previous frame.
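To make one of these updates concrete, the following hedged sketch (not the authors' code) implements a single explicit step for the translation T following Equation (10); it assumes μ = 1 and R = I, a smoothed Dirac delta of width eps, and SciPy's Euclidean distance transform for d.

```python
import numpy as np
from scipy import ndimage

def smoothed_delta(phi, eps=1.5):
    """Smoothed Dirac measure commonly used in level set implementations."""
    return (eps / np.pi) / (eps ** 2 + phi ** 2)

def translation_step(phi, g, prior_contour_mask, dt=0.01):
    """One explicit update of T from Equation (10), assuming mu = 1 and R = I.
    prior_contour_mask is a boolean image, True on the prior shape contour."""
    d = ndimage.distance_transform_edt(~prior_contour_mask)  # d(x): distance to prior contour
    dy, dx = np.gradient(d)                                  # components of grad d
    gy, gx = np.gradient(phi)
    grad_phi = np.hypot(gx, gy)
    w = smoothed_delta(phi) * g * d * grad_phi               # common integrand weight
    dT = -np.array([np.sum(w * dx), np.sum(w * dy)])         # right-hand side of (10)
    return dt * dT                                           # increment to add to T
```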
In the proposed method the weight parameter λ is eliminated by having the distance function term d and the edge-detector function term g not summed but multiplied in defining the contour's energy. The upper parts of the cell boundaries have sharp intensity gradients and in turn small g values; thus even if the d term is nonzero, the product g d^2 is still relatively small and the contour is attracted to the right locations. On the other hand, at sites like the lower parts of the cell membranes where there is no sharp intensity change, g will be near one, and the evolution of the active contour will be mostly dominated by the distance d to the prior shape. It can be seen that the proposed model utilizes the intensity gradient information and the prior shape knowledge more adaptively than the classical models. Experiments indeed show that the proposed method has much better performance on the challenging images.
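The following small numeric sketch, with purely hypothetical values not taken from the paper, illustrates the adaptivity argument: on a sharp edge the multiplicative integrand g·d²/2 stays comparatively small even when the contour is far from the prior shape, whereas the additive integrand g + (λ/2)d² of Equation (5) depends strongly on the choice of λ.

```python
# Hypothetical values of the edge-detector g and prior-shape distance d (pixels).
cases = {
    "sharp edge, far from prior shape":   (0.02, 6.0),
    "blurred boundary, near prior shape": (0.95, 1.0),
}

for lam in (0.01, 0.1):
    for name, (g, d) in cases.items():
        additive = g + 0.5 * lam * d ** 2          # integrand of Equation (5)
        multiplicative = 0.5 * g * d ** 2          # integrand of Equation (6)
        print(f"lambda={lam:<5} {name:38s} "
              f"additive={additive:6.3f}  multiplicative={multiplicative:6.3f}")
```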
4 Experiments
Here, results obtained from the various methods on the image data shown in Fig. 1 are given and compared. Fig. 2(a), (b), (c) display the zoomed-in images of the individual cells shown in Fig. 1 from right to left, where the image sizes in pixels are respectively 58×60, 55×60, and 60×64. As aforementioned, the illumination comes from the bottom-left of the image; thus the intensities over the bottom-left of the cells appear much higher, to the degree that a certain saturation effect blurs the cell boundaries there. Shown in Fig. 3 is the result derived from the region-based active contour method described in [5] (which makes use of the intensity mean value in defining homogeneity of a segmented region) on the image shown in Fig. 2(b). The left image shows the initial contour, and the right one shows the final contour. Apparently, since the region inside the cell membrane is not quite homogeneous, the region-based method does not have satisfactory performance in outlining the cell membrane. As a comparison, shown in Fig. 4 is the result derived from the geodesic active contour method with the same initial contour as the one in Fig. 3. The left image is derived with a balloon force input β = 0.4, and the right one is derived with
Fig. 2. Zoomed-in oocyte cells of mouse (a), (b), (c)
Fig. 3. Initial active contour (left) and result derived from region-based active contour (right)
another balloon force input β = 1. It can be seen that, without prior knowledge, the boundary based method also fails to outline the cell membrane desirably. Comparatively, the boundary-based method has more capacity than the region-based one in detecting a meaningful result on this particular image, since the cell membrane consists mostly of pixels with relatively strong intensity gradient. There is, however, still the challenge of guiding the contour to the most desirable configuration in the image space.
Fig. 4. Results (a), (b) derived from the geodesic active contour under different balloon forces, with the same initial contour as in the above figure
Since the shape of an oocyte cell is often close to a circle, introducing a circle as the prior shape to guide the active contour's evolution is a reasonable and simple choice. In general, the prior shape should be trained from a number of training images. But for this particular set of image data, since the prior shape is a simple one, the training process is omitted and just a perfect circle is supplied. From experiments, it can be found that even a prior shape of this simplicity worked well and exhibited tolerance to the variation of the actual shape of the object. For comparison, results derived from a classical prior-shape based method (Chen's method) on the same set of images are shown.
546
W. Wang and R. Chung
Fig. 5. Results derived from prior-shape guided active contour models. The contour initializations are shown in the first row, results derived by our method are shown in the second row, and the results by Chen's method are shown in the third row and fourth row for comparison.
Since the prior shape is a perfect circle, the rotation parameter θ can be ignored. In the experiments, the prior shape was set as a circle with a diameter equal to the smaller of the image's height and width. In each experiment of the proposed method, we set the parameters initially as μ = 1, T = [0, 0]^T, dt = 0.01. For the images shown in Fig. 2, the initial locations allocated to the active contour are shown in the first row of Fig. 5, and the final contours achieved by our method are shown in the second row. By our method, the final transformation parameters were μ = 0.74, T = [5.38, 5.70]^T (in pixels) for the first image, μ = 0.72, T = [4.00, 3.76]^T for the second image, and μ = 0.74, T = [6.47, 0.89]^T for the third image, respectively. In the third and fourth rows of Fig. 5, the results derived from Chen's method are shown separately for comparison. The active contours were evolved from the same initial contours as displayed in the first row of the figure. The results shown in the third row were derived with the weight parameter λ set to 0.01, so the contour's evolution was more influenced by the edge-indicator function; in the fourth row, the results were derived with λ set to 0.1, so the contour's evolution was more guided by the prior shape. It can be seen that if λ is set small, the guidance of the prior shape is limited, as shown by the results in the third row. Conversely, if λ is set larger, the evolution of the active contour is mainly decided by the prior shape, as displayed by the results in the fourth row. The result shown in Fig. 5(j) appears satisfactory, but those shown in (g), (k) and (l) demonstrate that the result is sensitive to the value of λ as well as to the initialization of the active contour. In comparison, results derived from our method show larger tolerance toward contour initialization and deviation of the desired shape from the prior shape. In addition, there is no notion of the weight parameter λ.
5 Conclusion and Future Work
In this work, a new prior-shape based geodesic active contour model for object detection is described. Compared with the classical methods, the model has the advantage that the challenge of setting the right balance between the edge-strength term and the prior-shape distance term in the optimization functional is eliminated. Experimental results show that the method has promising potential in detecting even object boundaries with a large variance of intensity gradient magnitude over the boundary, and that the method has large tolerance toward contour initialization and deviation of the object shape from the prior shape. In future work, we will optimize the coupled evolution equations of the prior shape and the level set function.
References
1. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: Active contour models. Int'l J. Comp. Vision 1(4), 321–331 (1988)
2. Amini, A., Weymouth, T., Jain, R.: Using dynamic programming for solving variational problems in vision. IEEE Trans. Pattern Anal. Mach. Intell. 12, 855–867 (1990)
3. Caselles, V., Kimmel, R., Sapiro, G.: Geodesic active contours. Int'l J. Computer Vision 22(1), 61–79 (1997)
4. Samson, C., Blanc-Feraud, L., Aubert, G., Zerubia, J.: A level set model for image classification. Int'l J. Comp. Vision 40(3), 187–197 (2000)
5. Chan, T.F., Vese, L.A.: Active contours without edges. IEEE Trans. Image Process. 10, 266–277 (2001)
6. Chan, T.F., Vese, L.A.: A level set algorithm for minimizing the Mumford-Shah functional in image processing. In: IEEE Workshop on Variational and Level Set Methods in Comp. Vision, pp. 161–168 (2001)
7. Zhu, S.C., Yuille, A.: Region competition: Unifying snakes, region growing, and Bayes/MDL for multiband image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 18(9), 884–900 (1996)
8. Yezzi, A., Tsai, A., Willsky, A.: A statistical approach to snakes for bimodal and trimodal imagery. In: IEEE Int'l Conf. Comp. Vision, vol. 2, pp. 898–903 (1999)
9. Chen, Y., Hemant, D., Tagare, E.: Using prior shapes in geometric active contours in a variational framework. Int'l J. Computer Vision 50(3), 315–328 (2002)
10. Tsai, A., Anthony Yezzi, J., Wells, W., Tempany, C., Tucker, D., Fan, A., Grimson, W.E.: A shape-based approach to the segmentation of medical imagery using level sets. IEEE Trans. Medical Imaging 22(2), 137–154 (2003)
11. Bresson, X., Vandergheynst, P., Thiran, J.P.: A variational model for object segmentation using boundary information and shape prior driven by the Mumford-Shah functional. Int'l J. Computer Vision 68(2), 145–162 (2006)
12. Staib, L.H., Duncan, J.S.: Boundary finding with parametrically deformable models. IEEE Trans. on Pattern Analysis and Machine Intelligence 14(11), 1061–1075 (1992)
13. Rousson, M., Paragios, N.: Shape priors for level set representations. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) ECCV 2002. LNCS, vol. 2351, pp. 78–92. Springer, Heidelberg (2002)
14. Leventon, M.E., Grimson, W.E.L., Faugeras, O.: Statistical shape influence in geodesic active contours. In: IEEE Conf. Computer Vision and Pattern Recognition, pp. 316–323 (2000)
15. Riklin-Raviv, T., Kiryati, N., Sochen, N.: Unlevel-sets: Geometry and prior-based segmentation. In: Pajdla, T., Matas, J(G.) (eds.) ECCV 2004. LNCS, vol. 3024, pp. 50–61. Springer, Heidelberg (2004)
16. Osher, S., Sethian, J.: Fronts propagating with curvature dependent speed: Algorithms based on Hamilton-Jacobi formulations. J. Comput. Phys. 79, 12–49 (1988)
17. Malladi, R., Sethian, J.A., Vemuri, B.C.: Shape modeling with front propagation: A level set approach. IEEE Trans. Pattern Anal. Mach. Intell. 17(2), 158–175 (1995)
On Shape-Mediated Enrolment in Ear Biometrics

Banafshe Arbab-Zavar and Mark S. Nixon

Department of Electronics and Computer Science, University of Southampton,
Southampton SO17 1BJ, UK
Abstract. Ears are a new biometric with a major advantage in that they appear to maintain their shape with increased age. Any automatic biometric system needs enrolment to extract the target area from the background. In ear biometrics the inputs are often human head profile images. Furthermore, ear biometrics must contend with the effects of partial occlusion, mostly caused by hair and earrings. We propose an ear enrolment algorithm based on finding the elliptical shape of the ear using a Hough Transform (HT), which confers tolerance to noise and occlusion. Robustness is improved further by enforcing some prior knowledge. We assess our enrolment on two face profile datasets, as well as under synthetic occlusion.
1 Introduction
Ears have long been considered as a potential means of personal identification, yet it is only in the last 10 years that machine vision experts started to tackle the idea of using ears as a biometric. Empirical evidence supporting the ear's uniqueness was provided in studies by Iannarelli [1], conducted over 10,000 ears. Ears have appealing properties for personal identification; they have a rich structure that appears to be consistent with age from a few months after birth. Clearly, ears are not affected by facial expressions. Images of ears can be acquired without the subject's participation, and ears are big enough to be captured from a distance. However, there exists a big obstacle - the potential occlusion by hair and earrings, which is almost certain to happen in uncontrolled environments. Up-to-date surveys of ear biometrics have recently been provided by Hurley et al. [2] and Choras [3]. Non-invasive biometrics such as gait, face and ear are often acquired with standard equipment, and they include unnecessary background information. The input samples of an ear biometric system are often head profile images with eyes, mouth, nose, hair, etc. as background. Enrolment (or registration) in an automatic biometric system is the process of detecting and isolating the area of interest. Yan et al. [4] developed an automatic ear biometric system which inputs 2D colour images and 3D range information of human head profiles. Their enrolment uses a multistage process which uses both 2D and 3D data and curvature estimation to detect the ear pit, which is then used to initialize an elliptical active contour to locate the ear outline and crop the 3D ear data. Chen et al. [5]
also use 2D and 3D face-profile images, and the ears are detected by a two-step alignment procedure which aligns edge clusters with an ear model based on the helix and anti-helix of one ear image. Alvarez et al. [6] fit an ovoid model to the ear contour. However, their algorithm is initialized with a manual estimate of the ear contour. These studies have not presented an evaluation of occlusion tolerance. Yan et al. merely suggest that their method is capable of handling small amounts of occlusion by hair. The increasing variety of biometric applications in everyday identification and authorization problems urges the development of easily applicable methods. Therefore, rather than use specialised and elaborate 3D equipment, our focus is on generic deployment via planar images derived from video cameras. The ear is largely a planar shape, and 3D capture must penetrate the intricate inner ear, which restricts deployment potential. We propose the use of the Hough Transform (HT), which can extract shapes with properties equivalent to template matching and is tolerant of noise and occlusion, to find the elliptical shape of the ears in 2D head-profile images. We also add a number of refinements and customizations to make this technique more suitable for application to ear images. We report the enrolment results on two standard databases: (i) a subset of the XM2VTS [7] face-profile database with a total of 252 images, (ii) the UND ear database [8] with 942 images. We shall describe our ear enrolment technique in section 2, describing the applied Hough Transform and our refinements which incorporate some prior knowledge into our method. In section 3 we enrol the ears of two different databases of 2D head-profile images and also evaluate our technique for partially occluded ears, followed by conclusions.
2 Ear Enrolment
The enrolment stage prepares an acquired sample for matching and classification by separating the object of interest from the background. It is an important step for any automatic biometric recognition system. Recognizing the potential of the ear to be partially occluded by hair or earrings, we aim to develop an automatic ear biometric system which can handle occlusion. This system should be able to handle occlusion at all stages, since, for example, it is impossible to recognize a half-occluded ear whose data has been discarded at enrolment as a result of incorrect detection. We propose the use of the Hough Transform, which can extract shapes with properties equivalent to template matching and is tolerant of noise and occlusion. As such, it appears an attractive choice for the initial location of the ears. Our enrolment finds the elliptical shapes in 2D face-profile images to locate the ear regions, by using a Hough transform to gather votes for putative ellipse centres in an accumulator which then goes through a refinement process eliminating some of the erroneous votes; the location of the peak in this accumulator gives the coordinates of the best matching ellipse.
2.1 Reduced Hough Transform for Ellipses
Despite its advantages, known drawbacks of the HT include the need for much memory and its high computational requirement. Hence, a reduced HT [9] appears attractive. In this, the centre is determined by using known ellipse properties: given two points whose tangents intersect, the line between the intersection point and their midpoint will pass through the centre of the ellipse. Figure 1 illustrates this with two points P1 and P2.
Fig. 1. Tangents of the two points P1 and P2 intersect at the point T. The line L which passes through the intersection point T and the midpoint M, also passes through the ellipse centre C.
For application to ear images, first a smoothed, edge-detected image is derived using the Canny operator, and this is used to vote into an accumulator for the centre of the ellipse. The accumulator is constructed by voting for the lines which pass through the putative ellipse centre. When this process is completed, the locations of the peaks in the accumulator give the coordinates of the centre of the best matching ellipse. The reduced HT for ellipses decreases the computational complexity by using the known geometrical properties of the ellipse to decompose the parameter space. The space, which would otherwise be 5D, is reduced to a 2D space, gathering evidence for putative ellipse centres whilst marginalizing over scale and orientation, which may cause ellipses of arbitrary scale and orientation to cast votes in a single cell of the accumulator. Hence, some erroneous peaks may appear in the accumulator. Thus, while this method is effective in easing the computation and memory requirements, it may impair the accuracy.
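A hedged sketch of the centre-voting step is given below; it assumes NumPy and scikit-image, random pair sampling, a Sobel-based tangent estimate and a fixed vote range, none of which are specified in the paper. For each pair of edge points, the tangent-intersection point T and the midpoint M define a line along which votes for the putative centre are cast.

```python
import numpy as np
from skimage import feature, filters

def ellipse_centre_accumulator(image, max_pairs=20000, max_dist=60, seed=0):
    """Reduced HT for ellipses: each pair of edge points votes along the line
    through their tangent intersection T and midpoint M, which passes through
    the ellipse centre (Fig. 1)."""
    grey = image.astype(float)
    edges = feature.canny(grey, sigma=2.0)
    gy, gx = filters.sobel_h(grey), filters.sobel_v(grey)     # image gradient components
    ys, xs = np.nonzero(edges)
    acc = np.zeros(image.shape)
    h, w = image.shape
    rng = np.random.default_rng(seed)
    for _ in range(max_pairs):
        i, j = rng.integers(0, len(xs), size=2)
        p1 = np.array([xs[i], ys[i]], float)
        p2 = np.array([xs[j], ys[j]], float)
        if not 2 < np.linalg.norm(p1 - p2) < max_dist:
            continue                                          # size window on the pair
        t1 = np.array([-gy[ys[i], xs[i]], gx[ys[i], xs[i]]])  # tangents are perpendicular
        t2 = np.array([-gy[ys[j], xs[j]], gx[ys[j], xs[j]]])  # to the gradient
        A = np.column_stack([t1, -t2])
        if abs(np.linalg.det(A)) < 1e-6:
            continue                                          # near-parallel tangents
        s = np.linalg.solve(A, p2 - p1)
        T = p1 + s[0] * t1                                    # tangent intersection
        M = 0.5 * (p1 + p2)                                   # chord midpoint
        d = M - T
        if np.linalg.norm(d) < 1e-6:
            continue
        d /= np.linalg.norm(d)
        for step in range(1, max_dist):                       # vote along the ray T -> M
            x, y = (M + step * d).astype(int)
            if 0 <= x < w and 0 <= y < h:
                acc[y, x] += 1
    return acc
```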
2.2 Eliminating Erroneous Votes
Applying the reduced Hough transform to a profile image of the human head results in an accumulator which usually presents a number of peaks. To detect the peak corresponding to the ear, we use various geometrical cues to eliminate some of the votes, or even some of the voting pairs of points, which do not comply with the properties of a prospective ear rim ellipse.

Vertical Ellipses. Given a non-rotated head profile image, the ear usually appears as a vertical ellipse with a vertical axis approximately twice as large as the horizontal axis. This can be described as

\frac{(x - x_0)^2}{a^2} + \frac{(y - y_0)^2}{b^2} = 1, \qquad b = 2a, \quad (1)

where equation (1) denotes a non-rotated ellipse, a and b are the ellipse sizes along each axis and (x_0, y_0) is the centre. By differentiation,

\frac{\partial y}{\partial x} = -\frac{(x - x_0)\, b^2}{(y - y_0)\, a^2}, \qquad \frac{b^2}{a^2} = 4. \quad (2)
The edge direction can be arranged to equal \frac{\partial y}{\partial x}, and thereby a vote cast for a putative ellipse centre (x_0, y_0) by a point (x, y) is allowed in the accumulator if

\left| \frac{\partial y}{\partial x} + \frac{(x - x_0)\, b^2}{(y - y_0)\, a^2} \right| = t, \qquad \frac{b^2}{a^2} = 4, \qquad t \le \text{threshold}. \quad (3)

The filter is set to ignore discrepancies less than a threshold value since: (i) some ears are slightly rotated, (ii) the exact value of b^2/a^2 for each ear is not known, (iii) edge direction information cannot be accurately calculated. Figure 2(a) shows three similar ellipses rotated and sized differently. The accumulator is depicted in figure 2(b), presenting three peaks corresponding to these three ellipses. Figure 2(c) shows the effects of filtering on this accumulator, removing the peaks corresponding to non-vertical ellipses. Note that the noise level is also reduced as a result of this filtering.

Selective Pairing. Given that a pair of edge pixels are within a size window which controls the size of the target ellipse, they will vote for putative ellipse centres along a specific line (see figure 1). Aiming to be more selective, we use the edge direction information to determine whether or not a pair of points can possibly be part of an ellipse. Choosing the pairs more carefully reduces the noise level in the accumulator, and the peak corresponding to the ear becomes more pronounced. The line l which passes through the edge point p with the same direction as the edge direction at p splits the image plane into two regions; l is tangent to any
Fig. 2. Removing the non-vertical ellipses. (a) An image of 3 ellipses: at the top left a vertical ellipse with b = 2a is shown; at the top right the same ellipse with about 45° of rotation is depicted; and at the bottom right there is a horizontal ellipse with a = 3b. (b) The accumulator corresponding to (a), presenting three peaks corresponding to the three ellipses. (c) The same accumulator after filtering: only the peak corresponding to the vertical ellipse with b = 2a remains.
putative ellipse containing p. Examples of this are depicted in figure 3, where the gray regions are regions which do not contain ellipse points. Thus, only the points in the white regions can be considered as the pairing counterparts of the point p.
Fig. 3. Recognizing the ellipse-free regions (the gray regions) to improve selectivity in edge point pairing
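Both pruning steps reduce to simple per-vote predicates on edge points. The sketch below is not the authors' code; it assumes the Canny stage also supplies an edge-direction estimate dy_dx at each edge pixel, and the threshold value and the choice of half-plane in the pairing test are illustrative assumptions.

```python
import numpy as np

B2_OVER_A2 = 4.0  # vertical ear ellipse with b = 2a (Eq. 1)

def passes_vertical_filter(x, y, x0, y0, dy_dx, threshold=0.5):
    """Eq. (3): allow a vote for centre (x0, y0) from edge point (x, y) only
    if the edge direction is consistent with a vertical ellipse (b^2/a^2 = 4).
    The threshold value here is an assumption, not taken from the paper."""
    if y == y0:
        return False                     # avoid division by zero
    t = abs(dy_dx + (x - x0) * B2_OVER_A2 / (y - y0))
    return t <= threshold

def may_pair(p, dy_dx_p, q):
    """Selective pairing: the tangent line l through p (slope dy_dx_p) splits
    the plane into two regions; q can only be a pairing counterpart of p if it
    lies in the region that can contain the ellipse (the 'white' region of
    Fig. 3). Which half-plane that is depends on the gradient polarity at p;
    this sketch simply keeps one fixed half-plane."""
    (px, py), (qx, qy) = p, q
    side = (qy - py) - dy_dx_p * (qx - px)   # signed offset from line l
    return side > 0
```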
Removing the Effects of Spectacles. Frames: The lenses of spectacles may appear as ellipses when viewed from the side, which challenges the presumption that the ear is the most prominent elliptical shape in head profile images. At times, the impairing effect of these ellipses is significant enough to lead the enrolment process to mis-localize. Accordingly, 1/3 of the front of the face is cropped to eliminate these elliptical shapes. Handles: Inspecting example accumulators generated by the reduced HT for ellipses shows that the dominant orientation of the accumulator's vote lines in a region mimics the dominant edge direction of the corresponding region in the image. For instance, the vote lines cast by ear rim edges are mostly vertical.
Fig. 4. The destructive effect of spectacle handles. The accumulator for image (a) is in (c) and it is filtered to eliminate the horizontal vote lines (d).
Given a region with numerous edge points and similar edge directions, many pairs are recognized which vote in a limited space with similar vote line orientations. This leads to a ridge of votes, and possibly a peak. The case of spectacle handles brings about a similar scenario; these handles usually produce many points in the edge-detected map (see figure 4(b)) which all have a roughly horizontal edge direction. Figure 4 shows such an edge map and the corresponding accumulator. To remove destructive effects of this type, and bearing in mind that ear ellipses are mostly vertical and thus their dominant vote lines are also vertical, we cut out the vote lines which are close to horizontal. Note that our datasets do not contain significantly rotated images, consistent with a subject looking forward.

2.3 Averaging
After these pruning stages, the ear centre is usually presented by a wide peak which has less height than expected, due to the fact that the ear is not a perfect ellipse and the edge direction information is not as accurate as desired. Occasionally, spurious single peaks also appear in the accumulator, mostly caused by curls in the hair. To remove the single peaks and find the centre of the widest peak, which represents the ear, a 30 × 30 averaging template is applied to the accumulator array.
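As an illustration only, the averaging and peak-selection step could be implemented with a uniform filter over the accumulator; the 30 × 30 window follows the text above, while the function and array names are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def locate_ear_centre(accumulator):
    """Smooth the vote accumulator with a 30 x 30 averaging template and
    return the (x, y) coordinates of the strongest remaining (widest) peak."""
    smoothed = uniform_filter(accumulator.astype(float), size=30)
    y0, x0 = np.unravel_index(np.argmax(smoothed), smoothed.shape)
    return x0, y0
```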
3 Results
We initially developed our enrolment algorithm for a database of 2D images selected from the XM2VTS [7] face-profile database. Our database consists of 252 images from 63 individuals, chosen because their ears are not obscured by hair; 4 images per individual, taken in 4 different sessions over a period of five months, ensure natural variation between the images of the same subject. We also used our algorithm to enrol the ears from the UND ear biometric database [8], which consists of 942 image pairs from 302 subjects. Each image pair comprises one 3D scan and one colour image of the profile of the human head, obtained nearly simultaneously. We only use the 2D images for enrolment. Example images from both databases are shown in figure 5. The XM2VTS subjects are imaged against
Fig. 5. Top row: examples of XM2VTS images. Bottom row: examples of UND images.
Fig. 6. The complete set of 252 enrolled ears
a blue background where the head is easily detected; diffused lighting was used to illuminate the head. On the other hand, the UND images have a more natural indoor background and illumination. In this section we present the results achieved on both databases, and we also evaluate the capability of our enrolment technique for partially occluded ears. Figure 6 shows the enrolled ears of the XM2VTS-derived, 252-image database. As shown in this figure, the enrolment process succeeds in detecting the ear region
Fig. 7. An increasingly occluded ear enrolled successfully. The detected ellipse centres are shown by the X marks.
Fig. 8. Enrolment of occluded ears
in all the 252 profile images. A 150 × 120 pixel image patch containing the ear is then cropped. We then use our algorithm to enrol the UND data. Given that the UND images include much more background information, we use both skin detection in colour images and the Canny edge operator to detect the face region of the images. The method proposed by Hsu et al. [10] was used for skin detection. Enrolment as described in section 2 was then performed on the face region of the images. For the UND data our algorithm offers a 91% enrolment success rate. The UND database does not offer as good 2D image quality as XM2VTS, partly because of the indoor lighting conditions and partly because a rotating filter was used for capturing the 2D colour images, which suffers from slight movements of the subject. As a result, the edge points of the ear rim are less accurately detected, which has an impairing effect on an evidence gathering algorithm such as the HT, which feeds on edge points.
3.1 Occlusion Test
The Hough Transform works with the edge points which vote in an accumulator. Given that adequate voting points are visible, the structure will be detected in spite of occlusion of some of these points. For this test we synthetically occlude the ear region of the face-profile images in the XM2VTS-derived database. Figure 7 shows that although the ear was increasingly occluded from the top, it has still been enrolled correctly. Figure 8 depicts the enrolment success rate on our 252-image database against increasing synthetic occlusion similar to the images in figure 7. As shown in this figure, 83% of the ears are enrolled successfully despite 40% occlusion from the top, and at 50% occlusion 66% of the ears are still correctly enrolled. Our algorithm shows promising results when used to enrol partially occluded ears.
4 Conclusions
The elliptical shape of the ear in profile images of the human head makes it possible to detect the ear by an HT, which was combined with refinements and additional constraints imposed by skin colour, rotation and the relative sizes of the ear ellipse. The resulting method achieves error-free enrolment on our 252-image database. Using a reduced HT, the method is not constrained by the high computation and memory requirements of the 5D HT for ellipses, and given the properties of the HT in handling noise and occlusion it obtains satisfactory results enrolling occluded ears. Our enrolment method can be used to enrol ears from similar head-profile images. This generalization property has been demonstrated through the use of the method for enrolling ears of the UND dataset. In our other work we have introduced a new model-based ear recognition [11] in which the ear is recognised from the parts selected via a model. Combining our enrolment with the model, we have built an automatic ear biometric system in which the objective of handling occlusion is carried through in enrolment via the HT and in feature extraction via the use of our new model, in contrast to holistic methods that describe images by their overall appearance.
References

1. Iannarelli, A.: Ear Identification. Paramount Publishing Company, Freemont, California (1989)
2. Hurley, D.J., Arbab-Zavar, B., Nixon, M.S.: The ear as a biometric. In: Jain, A., Flynn, P., Ross, A. (eds.) Handbook of Biometrics (Forthcoming, 2007)
3. Choras, M.: Image feature extraction methods for ear biometrics – a survey. In: 6th International Conference on Computer Information Systems and Industrial Management Applications (CISIM 2007), Poland, pp. 261–265 (2007)
4. Yan, P., Bowyer, K.W.: Biometric recognition using 3D ear shape. IEEE Transactions on Pattern Analysis and Machine Intelligence 29, 1297–1308 (2007)
5. Chen, H., Bhanu, B.: Human ear recognition in 3D. IEEE Transactions on Pattern Analysis and Machine Intelligence 29, 718–737 (2007)
6. Alvarez, L., Gonzalez, E., Mazorra, L.: Fitting ear contour using an ovoid model. In: Proc. of the 39th IEEE International Carnahan Conference on Security Technology, pp. 145–148 (2005)
7. Messer, K., Matas, J., Kittler, J., Luettin, J., Maitre, G.: XM2VTSDB: The extended M2VTS database. In: Proc. AVBPA, Washington D.C. (1999)
8. Yan, P., Bowyer, K.W.: Empirical evaluation of advanced ear biometrics. In: IEEE Computer Society Workshop on Empirical Evaluation Methods in Computer Vision, San Diego (2005)
9. Aguado, A.S., Montiel, E., Nixon, M.S.: On using directional information for parameter space decomposition in ellipse detection. Pattern Recognition 29, 369–381 (1996)
10. Hsu, R., Abdel-Mottaleb, M., Jain, A.: Face detection in color images. IEEE Transactions on Pattern Analysis and Machine Intelligence 24, 696–706 (2002)
11. Arbab-Zavar, B., Nixon, M.S., Hurley, D.J.: On model-based analysis of ear biometrics. In: IEEE Conference on Biometrics: Theory, Applications and Systems (BTAS 2007), Washington DC (to appear, 2007)
Determining Efficient Scan-Patterns for 3-D Object Recognition Using Spin Images

Stephan Matzka¹,², Yvan R. Petillot¹, and Andrew M. Wallace¹

¹ School of Engineering & Physical Sciences, Heriot-Watt University, Edinburgh, UK
² Institute for Applied Research, Ingolstadt University of Applied Sciences, Germany
Abstract. This paper presents a method to determine efficient scan-patterns for spin images using robust multivariate regression. A large dataset is generated using scan-patterns with random radial scanlines through an oriented point and determining the corresponding classification performance. Eight features are chosen, which are used as predictor variables for a multivariate least trimmed squares regression algorithm, achieving an adjusted coefficient of determination of R² = 0.80. The regression coefficients are then used in an exemplary cost-benefit function for an application of the proposed method.
1 Introduction
In the past few years, the idea of autonomous driving has become more and more popular among researchers and engineers. The reasons for this desire are manifold, ranging from scientific interest in autonomous mobile systems to car manufacturers wanting to provide additional comfort for their customers. The safety aspect of autonomously driving cars is also of interest, as the main cause of car accidents is known to be inattentive human drivers. However, the idea of having a car navigate autonomously is not only a matter of acting in an environment; it relies heavily on accurate knowledge about the car's environment. One way to gain this knowledge is the use of range-sensors such as radars or laser-scanners. The latter are often capable of acquiring high-resolution range-information, yet it is very time-consuming to obtain a regular set of input data. This would, for example, require the scene to be scanned line by line. In a dynamic road traffic environment this becomes problematic, as a single 3-D scan of the environment takes as long as 4-12 sec (cf. [1]). There exist measurement concepts other than obtaining a regular range-information grid, such as using Lissajous figures or deflecting a 2-D scanline, which are explained in section 2.3. Yet even these concepts do not satisfy real-time constraints if many scanlines have to be acquired in order to perform a successful classification of a scanned object. Therefore, the objective is to generate an efficient scan-pattern that acquires sparse input data for a robust classification scheme, such as spin images (see section 2). In this context, efficiency stands for an optimum cost-benefit-ratio, which is the case if a good classification result can be obtained with few scanlines.
2 Spin Images
A 3-D shape-based classification algorithm for classification of multiple objects in scenes containing clutter and occlusion was presented by Johnson and Hebert, 1999 [2]. Spin images are a shape descriptor working at data level. The method's recognition performance is evaluated in Johnson and Hebert, 1999 and has been shown to be robust. Spin images use an object-centred coordinate system, and surfaces are matched by matching individual surface points as opposed to complete surfaces. The problem of matching a complete surface is divided into the problem of matching a number of surface points, thereby reducing complexity. Clutter points will not have a correspondence on a nearby surface, while occluded points in the scene do not affect the classification, as they are rejected (cf. [2]). Each point is represented by an associated 2-D spin image, which is created by spinning a plane around an oriented point, a 3-D point with an associated surface normal. From this point, two coordinates can be defined, the first being the perpendicular distance to the surface normal, α, and the second the perpendicular distance to the tangent plane at the oriented point, β (cf. Fig. 1a).
Fig. 1. Figure a) shows an oriented point p on a 3-D surface with the point’s surface normal n and the corresponding tangent plane. Figure b) shows spin images for three oriented points on a 3-D surface mesh of a rubber duck. Source: [2].
A 2-D accumulator field indexed by α and β is then created. The lengths of the axes correspond to the support of the spin image. The respective indexes α and β are calculated for all points on the surface within the support distance, and the spin image bin at (α, β) is incremented using an interpolation method. If the bin values are seen as intensity values, the accumulator field can be viewed as an image. As 2-D matching is a well-researched field, it is possible to use proven algorithms to match the resulting spin images in an efficient manner. This approach is known from the splash image representation (cf. [3]). An example of spin images on a surface mesh is given in Fig. 1b.
2.1 Spin Image Generation
Spin image generation can be thought of as spinning a sheet around the surface normal n of an oriented point p, while binning all surface points x as they intersect the plane. Figure 1a shows a 3-D surface with an oriented point p, as well as its tangent plane and the surface normal n. The two values α and β are used as bin indexes in the spin image and represent the perpendicular distance to the surface normal through the oriented point (α) and the perpendicular distance to the tangent plane (β) respectively:

β = (n / ||n||) · (x − p)   (1)
α = √( ||x − p||² − β² )   (2)
Calculation of α and β is then performed for every measured point x. For the real values obtained, an interpolation method is used to distribute each contribution over the discrete bins.
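As a concrete illustration (not the authors' implementation), the per-point computation of Eqs. (1) and (2) and the accumulation into bins might look as follows; the bin size, image size and nearest-bin voting (instead of the interpolation mentioned above) are assumptions of this sketch.

```python
import numpy as np

def spin_image(p, n, points, bin_size, n_bins):
    """Accumulate a spin image for oriented point p with surface normal n
    from a set of 3-D surface points (Eqs. 1 and 2)."""
    n = n / np.linalg.norm(n)
    img = np.zeros((n_bins, n_bins))                       # rows: beta, cols: alpha
    for x in points:
        d = x - p
        beta = float(np.dot(n, d))                         # Eq. (1)
        alpha = np.sqrt(max(np.dot(d, d) - beta**2, 0.0))  # Eq. (2)
        i = int(beta / bin_size) + n_bins // 2             # beta may be negative
        j = int(alpha / bin_size)
        if 0 <= i < n_bins and 0 <= j < n_bins:            # support of the spin image
            img[i, j] += 1                                 # nearest-bin vote
    return img
```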
2.2 Spin Image Matching
A correlation coefficient indicates the strength and direction of a linear relationship between two variables. Johnson and Hebert, 1999 propose the widely used Pearson product-moment correlation coefficient, which is obtained by dividing the covariance of the two variables by the product of their standard deviations. The correlation coefficient R(P, Q) between the spin images P and Q of two points can be determined with Eq. 3:

R(P, Q) = ( N ΣP_iQ_i − ΣP_i ΣQ_i ) / √( (N ΣP_i² − (ΣP_i)²)(N ΣQ_i² − (ΣQ_i)²) )   (3)

The coefficient R will take values from R = −1 (wholly anti-correlated) to R = 1 (wholly correlated). R can thus be used to assess the similarity of two images. When R is high, the spin images are similar; for low or negative R values, the spin images are considered not similar. However, clutter and occlusion can add or remove data, respectively, from a spin image. Complete correlation will therefore be the exception, which necessitates assessing the level of the correlation coefficient.
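Equation (3) is simply the Pearson correlation of the flattened bin values; a minimal sketch (assuming the spin images are NumPy arrays of equal shape) is:

```python
import numpy as np

def spin_correlation(P, Q):
    """Pearson correlation R(P, Q) between two spin images (Eq. 3)."""
    p, q = P.ravel().astype(float), Q.ravel().astype(float)
    N = p.size
    num = N * np.dot(p, q) - p.sum() * q.sum()
    den = np.sqrt((N * np.dot(p, p) - p.sum() ** 2) *
                  (N * np.dot(q, q) - q.sum() ** 2))
    return num / den   # equivalently: np.corrcoef(p, q)[0, 1]
```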
2.3 Spin Image Generation with Sparse Input Data
As proposed in the introduction, only a small number of scan-lines should be obtained around an oriented point, which is depicted in Fig. 2. This concept implies a large reduction of data, which is desirable. Moreover, the density of contributing points around the oriented point – and therefore on the object – is relatively high, therefore the ratio of contributing points increases as compared to using a regular point-set.
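To emulate this sparse acquisition on a synthetic range-image (cf. Fig. 2), one can mask out all pixels not touched by a radial scanline; the following is an assumed sketch of such a mask generator, with pixel units and a one-pixel line thickness chosen purely for illustration.

```python
import numpy as np

def radial_scanline_mask(shape, centre, inclinations_deg, thickness=1.0):
    """Boolean mask of radial scanlines through `centre` (row, col) at the
    given inclination angles in degrees; True marks pixels kept as input."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    dy, dx = yy - centre[0], xx - centre[1]
    mask = np.zeros(shape, dtype=bool)
    for a in np.deg2rad(inclinations_deg):
        # perpendicular distance of each pixel from the line through centre
        dist = np.abs(dx * np.sin(a) - dy * np.cos(a))
        mask |= dist <= thickness
    return mask

# e.g. sparse = np.where(radial_scanline_mask(rng_img.shape, (cy, cx),
#                                             [0, 36, 72, 108, 144]), rng_img, np.nan)
```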
Fig. 2. Acquisition of sparse data using five radial scanlines through an oriented point (a). For synthetic range-image (a), this has been emulated by masking out pixels not touched by a scanline (b) using the remaining pixels as input information.
Possible measurement concepts using scanlines with varying inclination angles can be found both in patents – mainly omni-directional bar-code scanners – and in the literature. Blais et al., 2001 [4] describe a triangulation-based laser-scanner using Lissajous scan-patterns as opposed to obtaining a regular grid. Another concept is to deflect the scanline of a 2-D time-of-flight laser scanner in such a way that the inclination angle is variable. This concept has been realised using two mirrors, which are independently rotated by two high-resolution stepper motors (see Fig. 3 below).
Fig. 3. Figure a) shows a sketch of the deflection concept with the two mirrors rotated by two stepper motors. Figure b) depicts the deflection and inclination of an initially horizontal scanline by the two rotated mirrors.
The question that arises at this point is which scan-patterns are most suitable for the scanning process considering a spin image classification concept.
3 Scan-Patterns

3.1 Generation of Scan-Pattern Database
In order to generate a database of scan-patterns with corresponding classification performance measures, 6177 scan-patterns consisting of random radial lines
through an oriented point are used on a test-set of 40 range-images, containing five different objects of four different classes (cf. Fig. 4). The models' eight range-images in the test-set are acquired from eight viewpoints by rotating each object around its z-axis, which is the only major rotational object motion, and therefore viewpoint-change, to be expected in road traffic scenes. The classification performance is then determined by masking out pixels in a single range-image not touched by a scanline (cf. Fig. 2), using the remaining sparse range-data as input for the spin image classification algorithm. The classification was then considered successful if the classification result was correct and the matching spin images from the test-set and the classification database were acquired at the same oriented-point, which are determined using a geometrical saliency algorithm. The correspondence of oriented-points in the database and the test-set was defined manually beforehand, which could be done as all scan-patterns were applied at identical oriented-points. Narrowing the definition of correct classification was necessary, as a correct classification not based upon the correct oriented-point may well be considered an erroneous classification. This decreases the chance of correct classification by pure chance to 5%, as opposed to 25% if the correct object class would suffice.

Fig. 4. Four models used in the test-set, representing a) car, b) SUV, c) truck, and d) bunny. The fifth model (ND) is also a car and therefore belongs to the 'car' class.

Fig. 5. Distribution of classification rates distinguishing between 20 oriented-points in the database, ranging from 7.5% to 75.0%, with a median classification rate of 37.5%
The classification rates range from 7.5% to 75.0%, with a median classification rate of 37.5% (cf. Fig. 5), distinguishing between 20 oriented-points.
4 Robust Linear Regression
Considering the distribution of classification rates shown in Fig. 5, the question arises whether the classification rate is influenced by the chosen scan-pattern, and if so, which features of the scan-patterns show the highest correlation to the classification rate. It is possible to use a multivariate linear regression model with yi = Θ0 + xi1 Θ1 + · · · + xip Θp + ei (i = 1, · · · , n)
(4)
with the error ei exhibiting a normal distribution around zero [5,6]. A multivariate regression algorithm estimates the regression coefficients Θ = (Θ1, · · · , Θp) from a set of n observations of the p predictor variables xi1, · · · , xip and the response variable yi. The most popular estimation technique for Θ is the sum of least squares method, where the sum of the squared residuals is minimised. However, this method is not robust against outliers, which can be measured by the notion of the breakdown point ε*, the smallest percentage of outliers that is able to cause the estimator to return an arbitrarily large deviant value [7,5]. Using the sum of least squares method, ε* = 0, which is not robust at all. Besides replacing the square with another functional, it is possible to replace the sum of the squared errors by the median of the squared errors, called least median of squares (LMS, cf. Eq. 5):

Θ1..p = arg min ( med_i r_i² )   (5)
The LMS method is proven to possess ε* = 50% as a breakdown point, but shows a slow convergence rate of n^(−1/3). This problem can be mitigated by computing a one-step M estimator, which converges at n^(−1/2) (cf. [8]). This leads to the least trimmed squares (LTS) method, given by

Θ1..p = arg min Σ_{i=1}^{h} (r²)_{i:n}   (6)
where (r²)_{1:n} ≤ · · · ≤ (r²)_{n:n} are the ordered squared residuals [5]. This method allows for a trimming proportion α which determines the breakdown point ε* of the algorithm, as

α = 1/2 − (p − 1)/(2n)   (7)

An implementation of the FAST-LTS method [9] is included in the LIBRA library for MATLAB [10], which was used for the regression in this paper.
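For readers without MATLAB, the core idea of FAST-LTS (random elemental starts followed by concentration steps on the h smallest squared residuals) can be sketched in a few lines; this is a simplified illustration, not the LIBRA implementation, and the numbers of starts and concentration steps are arbitrary choices.

```python
import numpy as np

def lts_fit(X, y, h=None, n_starts=50, n_csteps=10, seed=0):
    """Least trimmed squares regression via random starts and concentration
    steps (a simplified FAST-LTS sketch). X: (n, p) predictors, y: (n,)
    responses; an intercept column is prepended internally."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    A = np.column_stack([np.ones(n), X])            # intercept + predictors
    h = h if h is not None else (n + p + 2) // 2    # default trimming
    best_theta, best_obj = None, np.inf
    for _ in range(n_starts):
        idx = rng.choice(n, size=p + 1, replace=False)   # elemental start
        theta, *_ = np.linalg.lstsq(A[idx], y[idx], rcond=None)
        for _ in range(n_csteps):                   # concentration steps
            r2 = (y - A @ theta) ** 2
            keep = np.argsort(r2)[:h]               # h smallest squared residuals
            theta, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
        obj = np.sort((y - A @ theta) ** 2)[:h].sum()
        if obj < best_obj:
            best_theta, best_obj = theta, obj
    return best_theta                               # [Theta_0, Theta_1..p]
```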
In order to determine the goodness of the fit of the regression, the unadjusted coefficient of determination Ru² (cf. Eq. 3) can be determined as

Ru² = cov(A, B)² / ( var(A) · var(B) )   (8)
The problem with the unadjusted Ru² measure in Eq. 8 is that it will increase with the number of used coefficients, albeit slowly. This effect can be countered using the adjusted coefficient of determination R² according to Eq. 9, which is used throughout this paper:

R² = 1 − (1 − Ru²) · (n − 1) / (n − p − 1)   (9)
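As a small illustration of Eqs. 8 and 9 (an assumed helper, not taken from the paper):

```python
import numpy as np

def adjusted_r2(y_true, y_pred, p):
    """Adjusted coefficient of determination (Eq. 9) for a regression with
    p predictor variables; the unadjusted value follows Eq. 8."""
    n = len(y_true)
    r_u2 = np.corrcoef(y_true, y_pred)[0, 1] ** 2   # cov^2 / (var * var)
    return 1.0 - (1.0 - r_u2) * (n - 1) / (n - p - 1)
```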
4.1 Feature Selection
Number of Scanlines. A major feature of the examined scan-pattern is its number of scanlines. In the database, the latter ranges from 5 to 15, and shows a strong correlation with the measured classification rates (cf. Fig. 6).
Fig. 6. Classification rate versus number of scanlines, drawn as gray crosses. Note that both the classification rate and the number of scanlines have been slightly jittered, as they are both discrete values and the frequency of the individual value pairs would otherwise be hidden. The black line shows the linear regression calculated using LTS regression.
It can be seen from Fig. 6 that the classification rate is correlated with the number of scanlines, and therefore the amount of range information, used. However, with R² = 0.432, there remains a considerable variance of the classification rate that is unaccounted for by the number of scanlines. This implies that classification results can be optimised without increasing the number of scanlines, thus rendering the scanning process more efficient. It appears feasible to gain a 40% classification rate with only five scanlines, which could otherwise only be expected using ten or more scanlines according to the regression. The number of scanlines is the first predictor variable for the multivariate regression and will be referred to as xi1.
Statistical Classification Performance of Individual Inclination Angles. The classification performance of an individual scanline's inclination angle can be assessed by examining which classification rates have been achieved when a certain inclination angle has been used. The outcome of this examination can be seen in Fig. 7.
Fig. 7. The gray crosses show the average classification performance, if the respective inclination angle has been used in the classification process. The black line shows a mean squared error approximation of the measurements with a Gaussian function.
The measurements in Fig. 7 can be approximated by a Gaussian function. For the average classification measurements in Fig. 7, a Gaussian approximation using a mean squared error method can be established (cf. Eq. 10):

y(x) = 246 / (17.2 · √(2π)) · e^( −x² / (2 · 17.2²) ) + 38.4   (10)

A reason for the displayed behaviour is that the 3-D models are only rotated around their z-axis in order to obtain various viewpoints; therefore the quality of the horizontal component of the surface normal is crucial to a correct classification, which is naturally improved by a horizontal scanline. Neither the measurements in Fig. 7 nor their Gaussian approximation in Eq. 10 can be used for regression directly, as various inclination angles are used in order to acquire range-data. Out of the functions that have been tested, the maximum classification rate xi2 of all used inclination angles (R² = 0.337) and the average classification rate xi3 of all used inclination angles (R² = 0.329) show the highest correlation to the recognition rates gained by the corresponding scan-patterns. However, the covariance between both values is 0.542, therefore we do not have two truly independent input variables.

Evenness of the Scanlines' Distribution. Examining scan-patterns that returned a high classification rate, it appeared that in the majority of cases the scanlines were evenly distributed over the range of inclination angles. Robust linear regression resulted in R² = 0.229, showing a decreasing classification rate with an increasing standard deviation xi4 of the inclination angles' differences.
Fourth Central Moment (Kurtosis) of Inclination Angles. During the search for statistical properties of the used scanlines that correlate with the classification rate, the fourth central moment of the used inclination angles xi5 – also called kurtosis – has shown a coefficient of determination of R² = 0.107 with the measured classification rate [11]. Despite a high correlation with the angle range (R² = 0.595), the fourth central moment improves the coefficient of determination if used as an additional predictor variable.

Angle Range Covered by the Scanlines. As could be expected, the angle range covered by the scanlines shows a correlation with the classification rate. The angle range is defined to be 180° reduced by the largest distance between two angles. Performing LTS regression on the angle range versus the classification rate returned R² = 0.019, which is comparatively small. However, the angle range feature xi6 has a considerable statistical impact if it is combined with the number of scanlines. In a robust multivariate regression, it increases the number of scanlines' R² = 0.432 to R² = 0.552 if both input variables are considered. This is an emergent behaviour that is sometimes seen in multivariate regression.

Median Inclination Angle of Used Scanlines. The median inclination angle of the used scanlines xi7 shows a small correlation with the classification rate, with R² = 0.010. As this feature is largely independent of the other features, it is useful to include xi7 in the multivariate regression.

Number of Scanline-Clusters. Besides the number of scanlines, the number of scanline-clusters xi8, which was acquired using a k-means clustering algorithm, has shown to be of interest. This is not so much caused by the direct correlation, which is as little as R² = 0.001, but by its connection with the number of scanlines and/or the evenness of the inclination angles' distribution. As opposed to R² = 0.450 when using only the number of scanlines (xi1) and the evenness (xi4) in a multivariate regression, a coefficient of determination of R² = 0.493 is gained with the number of scanline clusters as an additional prediction variable.
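Putting the eight predictor variables together, a feature-extraction sketch could look like the following; the Gaussian of Eq. 10 stands in for the measured per-angle performance, and the cluster count uses a simple gap-based grouping with an assumed 10° gap rather than the k-means clustering used in the paper.

```python
import numpy as np
from scipy.stats import kurtosis

def angle_rate(x):
    """Per-angle classification performance estimate (Gaussian of Eq. 10)."""
    return 246.0 / (17.2 * np.sqrt(2 * np.pi)) * np.exp(-x**2 / (2 * 17.2**2)) + 38.4

def scan_pattern_features(angles_deg):
    """Predictor variables x_i1 .. x_i8 for one scan-pattern (illustrative)."""
    a = np.sort(np.asarray(angles_deg, dtype=float))
    rates = angle_rate(a)
    gaps = np.diff(np.concatenate([a, [a[0] + 180.0]]))   # circular angle gaps
    return np.array([
        len(a),                               # x_i1 number of scanlines
        rates.max(),                          # x_i2 best single-angle rate
        rates.mean(),                         # x_i3 average single-angle rate
        gaps.std(),                           # x_i4 evenness of the distribution
        kurtosis(a),                          # x_i5 fourth central moment
        180.0 - gaps.max(),                   # x_i6 covered angle range
        np.median(a),                         # x_i7 median inclination angle
        1 + int(np.sum(np.diff(a) > 10.0)),   # x_i8 number of angle clusters
    ])
```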
4.2 Multivariate Regression
Providing LIBRA's LTS regression algorithm with the eight predictor variables and the classification rate as the response variable, the regression coefficients Θ1..p, an offset Θ0, and a coefficient of determination of R² = 0.800 (cf. Eq. 11 below) are determined:

yi = (xi1, xi2, · · · , xi8) · (Θ1, Θ2, · · · , Θ8)ᵀ + Θ0,
with (Θ1, · · · , Θ8) = (3.27, 1.23, 8.64, 0.27, 18.14, −0.22, 0.05, −0.89) and Θ0 = −364.8.   (11)
5 Generating Efficient Scan-Patterns
As could be expected, the number of scanlines has the largest impact upon the overall coefficient of determination. However, there is still an R² = 0.480 if the number of scanlines used is not considered. Therefore, it is possible to optimise the estimated classification performance of the chosen scanlines without necessarily increasing the number of scanlines. Depending upon the scanning hardware and the application, different efficient scan-pattern generation algorithms can be devised using Eq. 11 and a cost-benefit function. Parameters that – among others – have to be taken into account for the cost function are the rotational speed at which the scanline's inclination angle can be changed, possible inertia properties that complicate or disallow a change of the rotation's direction, and the time the measurement of a single scanline consumes. Also, the time available for the acquisition of the range-information can be limited to a fixed value, or might be determined dynamically, e.g. if an object moves too fast or is only visible for a certain time. On the other hand, a certain quality of the classification result might be required, in which case the scan-pattern's estimated classification rate would have to exceed a predefined value.
5.1 Exemplary Cost-Benefit Function
For the cost-function, it is necessary to analyse the system used for the acquisition of the range-information. Our exemplary scanning system shall be the scanline deflector shown in Fig. 3. The scan-frequency of the used 2-D laser-scanner is 75 Hz, or 13.3 ms per scanline. The used stepper motors have a maximum rotational speed of 360°/s, at which no inertia problem occurs due to the motors' high torque. The inclination λj of the outgoing scanline j is equal to the angular difference between the two mirrors, therefore the inclination angle's maximum rotational speed is 720°/s, or 1.39 ms per degree. The initial inclination angle λ0 may be any valid angle value. The cost-function c(nSL, λ0..nSL) – using the number of scanlines nSL, λ0, and the used inclination angles λ1..nSL as variables – can therefore be written as

c(nSL, λ0..nSL) = nSL · 13.3 ms + Σ_{j=1}^{nSL} (λj − λj−1) · 1.29 ms   (12)
The regression coefficients Θ from Eq. 11 can be used for the benefit-function b(λ0..nSL), calculating the predictor variables xip from the used inclination angles λ1..nSL:

b(λ0..nSL) = ( x1(λ0,1..j), · · · , xp(λ0,1..j) ) · ( Θ1, · · · , Θp )ᵀ + Θ0   (13)
The most efficient scan-pattern λopt in terms of a low cost-benefit ratio can now be determined by solving a minimisation problem:

λopt = arg min [ c(nSL, λ0..nSL) / b(λ0..nSL) ]   (14)

The accuracy of the benefit-function is evaluated using a set of 300 scan-patterns with five scanlines. Of the 10 scan-patterns predicted to perform best by the benefit-function, 6 are also found among the 10 best scan-patterns determined by performing a classification test (cf. Fig. 8). Moreover, the minimum classification rate of the 10 scan-patterns selected by prediction is 37.5%, which is still far better than the median classification rate of 25% when using 5 scanlines.
Fig. 8. Six efficient scan-patterns as selected by the benefit-function. For each scan-pattern, the predicted (Pred.) and measured (Meas.) classification rate is given.
Starting from λ0 = 0°, the most efficient scan-pattern would then be #202, as it would take c = 5 · 13.3 ms + (6 + 57)° · 1.29 ms/° = 148 ms to scan, resulting in a minimum cost-benefit quotient of (c/b)min = 2.79 ms/%, as opposed to c/b = 2.93 .. 3.69 ms/% for the other scan-patterns.
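A minimal sketch of this selection procedure (assuming a feature-extraction function such as the one sketched in Sect. 4.1, and using the constants of Eq. 12) could read:

```python
import numpy as np

T_SCANLINE_MS = 13.3   # measurement time per scanline (Eq. 12)
T_DEGREE_MS = 1.29     # rotation time per degree (value used in Eq. 12)

def cost(angles, lambda0=0.0):
    """Eq. (12): acquisition time of a scan-pattern in milliseconds
    (absolute angular differences, matching the worked example above)."""
    lam = np.concatenate([[lambda0], np.asarray(angles, dtype=float)])
    return len(angles) * T_SCANLINE_MS + np.abs(np.diff(lam)).sum() * T_DEGREE_MS

def benefit(angles, features, theta, theta0):
    """Eq. (13): predicted classification rate from the regression of Eq. (11);
    `features` maps inclination angles to the predictor vector x_i1 .. x_i8."""
    return float(np.dot(features(angles), theta) + theta0)

def most_efficient(candidates, features, theta, theta0, lambda0=0.0):
    """Eq. (14): candidate scan-pattern with the lowest cost-benefit ratio."""
    return min(candidates,
               key=lambda a: cost(a, lambda0) / benefit(a, features, theta, theta0))
```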
6 Conclusion and Future Work
In this paper, a method to estimate the classification rate of an arbitrary scan-pattern used as sparse input data for spin images has been presented. Using robust linear multivariate regression to determine the benefit-function of a given scan-pattern, as well as the proposed features, is original work that can be applied to a multitude of applications. The regression shows a high coefficient of determination, R² = 0.800, indicating that 80% of all variance is modelled by the regression. Yet, it may prove difficult to gain a higher coefficient of determination without involuntarily modelling noise included in the database. An exemplary cost-benefit function is presented and applied to scan-patterns with 5 scanlines in order to determine the most efficient scan-pattern with respect to its cost-benefit quotient for the given situation. Features not explicitly tested in this paper, due to the information contained in the database, are the accuracy of the normal vector used by the spin image algorithm and the influence of the saliency-based oriented-point selection, both of which will have to be addressed in future work.
Moreover, future work will include application of the simulated results to range-image data acquired with a laser scanner, using the presented measurement concept of deflecting the scanlines of a 2-D laser scanner.
References

1. Surmann, H., Lingemann, K., Nüchter, A., Hertzberg, J.: A 3D laser range finder for autonomous mobile robots. In: Proceedings of the 32nd ISR (International Symposium on Robotics), pp. 153–158 (2001)
2. Johnson, A.E., Hebert, M.: Using spin images for efficient object recognition in cluttered 3D scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence 21, 433–449 (1999)
3. Stein, F., Medioni, G.: Structural indexing: Efficient 3-D object recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 14, 125–145 (1992)
4. Blais, F., Beraldin, J.A., El-Hakim, S.F.: Real-time geometrical tracking and pose estimation using laser triangulation and photogrammetry. In: Proceedings of the 1st International Symposium on 3D Data Processing, Visualization, and Transmission, pp. 205–210 (2001)
5. Rousseeuw, P.: Least median of squares regression. Journal of the American Statistical Association 79, 871–880 (1984)
6. Rousseeuw, P., Leroy, A.: Robust Regression and Outlier Detection. Wiley-IEEE (2003)
7. Hampel, F.: A general qualitative definition of robustness. Annals of Mathematical Statistics 42, 1887–1896 (1971)
8. Bickel, P.: One-step Huber estimates in the linear model. Journal of the American Statistical Association 70, 428–434 (1975)
9. Rousseeuw, P., Van Driessen, K.: An Algorithm for Positive-Breakdown Regression Based on Concentration Steps. In: Data Analysis: Scientific Modeling and Practical Application, pp. 335–346. Springer, Heidelberg (2000)
10. Verboven, S., Hubert, M.: LIBRA: a MATLAB library for robust analysis. Chemometrics and Intelligent Laboratory Systems 75, 127–136 (2005)
11. Joanes, D.N., Gill, C.A.: Comparing measures of sample skewness and kurtosis. Journal of the Royal Statistical Society: Series D (The Statistician) 47, 183–189 (1998)
A Comparison of Fast Level Set-Like Algorithms for Image Segmentation in Fluorescence Microscopy

Martin Maška, Jan Hubený, David Svoboda, and Michal Kozubek

Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
[email protected]
Abstract. Image segmentation, one of the fundamental tasks of image processing, can be accurately solved using the level set framework. However, the computational time demands of the level set methods make them practically useless, especially for segmentation of large three-dimensional images. Many approximations have been introduced in recent years to speed up the computation of the level set methods. Although these algorithms provide favourable results, most of them were not properly tested against ground truth images. In this paper we present a comparison of three methods: the Sparse-Field method [1], Deng and Tsui's algorithm [2] and Nilsson and Heyden's algorithm [3]. Our main motivation was to compare these methods on 3D image data acquired using a fluorescence microscope, but we suppose that the presented results are also valid and applicable to other biomedical images such as CT scans, MRI or ultrasound images. We focus on a comparison of the method accuracy, speed and ability to detect several objects located close to each other for both 2D and 3D images. Furthermore, since the input data of our experiments are artificially generated, we are able to compare the obtained segmentation results with ground truth images. Keywords: image segmentation, level set method, active contours.
1 Introduction
Image segmentation is one of the fundamental and also one of the most difficult tasks of image analysis. It is the art of automatically partitioning the image into different connected regions that satisfy a certain criterion. The field of image segmentation is quite well explored due to its importance. There exists a large variety of segmentation methods like thresholding [4,5], region growing [5,6], template matching, the watershed algorithm [6], etc. Nevertheless, some real image data (e.g. low contrast and noisy biomedical data) cannot be correctly segmented by these classic methods. Therefore, new sophisticated and mathematically well-founded segmentation methods have been proposed in recent years. The active contour model [7] is one of them. The problem of image segmentation is transformed to minimization of a specially designed energy functional. An initial interface, a closed curve for 2D images or a surface for 3D images,
is deformed by various internal and external forces in time and moves towards the object boundaries, where the energy functional reaches its local minimum. Since the active contour model proposed by Kass et al. [7] provided accurate segmentation results on noisy and low contrast images, the research in this area has been very popular, extensive and rather successful. Various forms of the active contour models, the parametric [7,8], geometric [9,10] and geodesic active contour models [11] or active contours without edges [12], have been introduced in recent years. The level set method [13], a numerical implementation of the geometric active contour model, is extensively used in fluid mechanics and computer vision [14], but can also be used in biomedical image processing [15]. Here, an initial interface is represented as the zero level set of a higher-dimensional, implicit function φ. The motion of the interface is driven by the following partial differential equation: φt + F |∇φ| = 0 ,
(1)
where F is an appropriately chosen speed function that describes the motion of the interface in the direction of the normal vector. The level set method, in comparison with the implementations of the parametric active contour models, has two main advantages. The moving interface is able to naturally change its topology, and the method can work in any number of dimensions without significant modification. However, its computational time demands are unacceptable, especially for the segmentation of large 3D images. The most time consuming operations, computed over the whole computational domain in each iteration, are the updating of the implicit function φ according to the level set equation (1) and the computation of the extended speed function F [16]. To speed up the general level set framework, many approximations have been introduced in recent years; in particular, the narrow-band algorithm [10], used for segmentation of nuclei and cells with fluorescently labeled membranes [17], and the fast marching method [18], used for segmentation of cell clusters in two-dimensional bone marrow samples [19]. The fast marching method was also used for 3D reconstruction of interphase chromosomes [20,21]. Because the majority of fast approximations of the level set framework were not properly tested against ground truth images or compared to each other, we conduct a comparison of three recently published fast level set-like algorithms for image segmentation in fluorescence microscopy. Moreover, since our experiments are performed on artificial image data, we are able to compare the segmentation results with ground truth images.
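To make the cost of the dense update concrete, a naive explicit update step of Eq. (1) over the whole grid could be sketched as follows (a first-order upwind scheme on a 2D array; a 3D volume adds analogous terms). This is only an illustration of why every grid point is touched in every iteration, not any of the compared implementations.

```python
import numpy as np

def dense_level_set_step(phi, F, dt=0.1):
    """One explicit update of Eq. (1): phi <- phi - dt * F * |grad phi|,
    using a first-order upwind approximation of |grad phi| on a 2D grid."""
    dxm = phi - np.roll(phi, 1, axis=0);  dxp = np.roll(phi, -1, axis=0) - phi
    dym = phi - np.roll(phi, 1, axis=1);  dyp = np.roll(phi, -1, axis=1) - phi
    grad_plus = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                        np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
    grad_minus = np.sqrt(np.minimum(dxm, 0)**2 + np.maximum(dxp, 0)**2 +
                         np.minimum(dym, 0)**2 + np.maximum(dyp, 0)**2)
    grad = np.where(F > 0, grad_plus, grad_minus)   # upwind selection
    return phi - dt * F * grad                      # every grid point is updated
```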
2 Tested Methods
In this section, we briefly describe the main ideas of three recently introduced algorithms that are based on the level set approach. We explain the principles of the Sparse-Field method [1] (SFM), Deng and Tsui’s algorithm [2] (DTA) and Nilsson and Heyden’s algorithm [3] (NHA). These methods use certain approximations of the general level set framework to speed up its computation, but
still preserve its advantages. They are able to easily change the topology of the interface and work in any number of dimensions without significant modification. The SFM was first proposed by R. T. Whitaker [1] for 3D reconstruction from range data. Later, the same author gave another detailed description of it in [5]. This method takes the narrow-band algorithm to the extreme. The implicit function φ is updated using the equation (1) only in the grid points belonging to the interface. Subsequently, these new values of φ are propagated to the neighborhood of the interface. Since the spatial partial derivatives in (1) are usually approximated using first- and second-order finite difference schemes, only those points lying within a distance of up to two grid points from the interface are processed in one iteration. Unlike the general level set approach, the SFM need not compute the extended speed function F, because the implicit function φ is updated using (1) only in the interface points. This is another significant speed-up factor. Another fast algorithm based on the level set approach was proposed by Deng and Tsui [2]. Here, an initial interface is driven by the level set equation (1), but instead of moving the interface in a small constant time step, the point with the minimal arrival time is processed in one iteration. The implicit function φ is then updated only in a certain neighborhood of this point. As in the SFM, there is no need to compute the extended speed function F, which rapidly speeds up the iterative process. On the other hand, this method must evidently perform more iterations to reach the steady state. The third recently published algorithm was proposed by Nilsson and Heyden [3]. Although this method and its later usage in biomedical image processing [19] were presented only for 2D data, its extension to three dimensions is straightforward. As in the DTA, the evolution of an initial interface is realized in a point-by-point manner. The point of the interface with the minimal departure time is moved to the outside or inside of the boundary according to the sign of the speed function F at this point. Simultaneously, its neighborhood is updated accordingly. Furthermore, this algorithm removes the need for updating and reinitializing the signed distance function, because the implicit function φ is considered as a function that maps the set membership of each point (i.e. the points of the interface are represented by the value 0, points outside the boundary by −1 and points inside the boundary by 1). This simplification makes it possible to roughly approximate the curvature of the interface in an incremental manner, which, unlike the previous approaches where the curvature is estimated using a finite difference scheme, decreases the computational time demands of one iteration. Clearly, this algorithm must again perform more iterations than the general level set approach or the SFM to reach the steady state.
3 Input Data
The algorithm for generating artificial image data is briefly described in this section, followed by a description of the input image data sets used for our experiments in Section 4 and a few examples of their members.
The ground truth images are needed for evaluation purposes. Obviously, they are not available for real images acquired in fluorescence microscopy. Therefore, we have developed a generator of synthetic image data [22], which is able to simulate the process of 3D image data acquisition on the microscope. This simulation is intended for obtaining images of cell nuclei, especially nuclei of the HL-60 cell line, and is realized in several steps. First, simple geometric objects (e.g. spheres or ellipsoids) representing the image foreground are placed into an empty volume representing the image background (Fig. 1a). Second, the regularity of the foreground objects is reduced using the deformable model approach. The boundaries of the foreground objects are considered as an initial interface which is evolved using Nilsson and Heyden's algorithm [3]. We use randomly generated Perlin noise [23] as a speed function F. The result of these two steps (Fig. 1b) is used as a ground truth image. To represent the distribution of chromatin in cell nuclei, Perlin noise is added into the foreground voxels. After that follows a simulation of the microscope acquisition. The ground truth image is convolved with a theoretical point-spread function of the optical system, whereupon Poisson noise and hot pixels are added. Finally, the additive noise of the analog-to-digital converter is taken into account (Fig. 1c). Four sets of 8-bit grayscale images were generated for our experiments. The first data set A1 is composed of 51 three-dimensional images. Each volume contains only one cell nucleus in the center and was generated from a volume containing one ellipsoid with intensity 140 on a background with intensity 15. The size of each image is 100 × 100 × 40 voxels with a resolution of 0.123 × 0.123 × 0.200 micrometers per voxel. We used these volumes to compare the accuracy of the methods and their computational time demands. Since our experiments were also performed on 2D images, the second image data set A2 of 51 two-dimensional images was obtained as maximal projections of the 3D images from the set A1 to the xy plane. The third data set B1 is composed of 10 three-dimensional images. Each volume was obtained from three spheres with intensities 130, 135 and 140 placed into an empty volume with intensity 15. All spheres had identical radius (r = 15 voxels). Their centers formed an equilateral triangle (Fig. 2) so that the mutual distance between them was uniform and belonged to the range [1, 10] voxels. Since we wanted to preserve the position and shape of the spheres in the image, the process of deformation was not performed in this case. The size of each image is 100 × 100 × 50 voxels with a resolution of 0.1 micrometers per voxel in all axes. Like the image data set A2, the fourth image data set B2 of 10 two-dimensional images was again obtained as maximal projections of the volumes from the set B1 to the xy plane. The data sets B1 and B2 were used to investigate whether the tested methods are able to detect several objects located close to each other.
4 Experimental Results
This section presents several tests and comparisons of the tested methods. We highlight and illustrate the differences between them. We made three tests on four
artificial image data sets in order to compare the suitability of the selected methods for image segmentation in fluorescence microscopy. In our opinion, however, the presented results are also valid and applicable to other biomedical images such as CT scans, MRI or ultrasound images. We focused on the method accuracy with respect to a priori known ground truth in the first test. We compared the computational time demands in the second test. Finally, we investigated whether the methods are able to find several objects that are located close to each other. All our experiments were done on a common PC workstation (AMD Athlon 2600+, 512 MB RAM, Windows XP Professional). To ensure the objectivity of the segmentation results, we used the following form of the speed function for all compared methods:

F = gI (1 + εκ) + β · ∇P · n   (2)

Fig. 1. The maximal projections of artificially generated 3D data to xy, xz and yz planes. (a) The initial image containing one ellipsoid. (b) The ground truth image. (c) The output image of the generator.

Fig. 2. The maximal projections of some members of the image data set B1. The distances between the spheres are 1, 3, 5 and 10 voxels.
The term gI = 1 / (1 + |∇Gσ ∗ I|²) gives the local propagation speed, which depends on the position of the boundary. The symbol κ denotes curvature, which introduces smoothing of the interface; its relative impact is determined by a constant ε. The last term, β · ∇P · n, where P = |∇Gσ ∗ I|, β is a constant and n denotes the unit normal vector of the interface, attracts the moving interface towards the edges in the smoothed version of the input image I. The smoothing was performed by a Gaussian filter (σ = 1.5, radius r = 2.0).
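The image-dependent terms of Eq. (2) are straightforward to precompute; a minimal sketch (using scipy's Gaussian filter for the smoothing step; the curvature and advection terms depend on the evolving interface and are therefore omitted here) is:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_stopping_terms(image, sigma=1.5):
    """Image-dependent terms of the speed function in Eq. (2):
    g_I = 1 / (1 + |grad(G_sigma * I)|^2) and P = |grad(G_sigma * I)|."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    grads = np.gradient(smoothed)
    edge_mag = np.sqrt(sum(g**2 for g in grads))   # P = |grad(G_sigma * I)|
    g_i = 1.0 / (1.0 + edge_mag**2)
    return g_i, edge_mag
```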
4.1 Accuracy
The accuracy is one of the significant features of every algorithm. We use a correctness measure, which precisely describes the quality of the segmentation results, as the criterion of accuracy in our experiments. We compute the sensitivity and specificity measures of the segmentation results with respect to the a priori known ground truth:

Sens(f) = TP / (TP + FN),   Spec(f) = TN / (TN + FP),   (3)
where f is the segmentation result, TP (true positive) is the number of pixels correctly assigned to the image foreground, FP (false positive) is the number of pixels incorrectly assigned to the image foreground, and TN (true negative) and FN (false negative) are defined in a similar way with respect to the image background. We describe the correctness of the segmentation result with the following commonly used measure:

Cor(f) = Sens(f) · Spec(f) .   (4)

Visually correct segmentation results have 0.9 < Cor(f) ≤ 1.0. The accuracy of the methods was tested on the image data sets A1 and A2. We were concerned with two cases. First, the initial interface was placed into the image background. We used the corner points of the input image as the initial interface and tried to detect the boundary of the cell nucleus in the center. Second, the initial interface was placed into the image foreground. In this case, we used the center point of the image as the initial interface and tried again to find the boundary of the cell nucleus. To determine the values of the parameters β and ε of the speed function (2), we used the image data set A2, because the computations on a two-dimensional domain are significantly faster than on a three-dimensional one. We experimentally set these parameters by hand (see Table 1 for details) with respect to the best results of each method and subsequently used them for the processing of the data set A1. Since we computed the measure Cor(f) every k iterations (k = 10 for the SFM and k = 100 for the other methods) of each run of the compared methods, we were able to determine the optimal number of iterations after which we get the best segmentation results with respect to the previously defined correctness measure and stopped the evolution of the moving interface. The average values of Cor(f), Sens(f) and Spec(f) of the segmentation results for the image data sets A1 and A2 obtained using the tested methods are shown in Table 1.
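These measures are simple to compute from binary masks; the following is an assumed helper, not the authors' code:

```python
import numpy as np

def correctness(segmentation, ground_truth):
    """Sensitivity, specificity (Eq. 3) and correctness Cor(f) (Eq. 4) of a
    binary segmentation against a binary ground truth mask."""
    f, g = segmentation.astype(bool), ground_truth.astype(bool)
    tp = np.sum(f & g);   fn = np.sum(~f & g)
    tn = np.sum(~f & ~g); fp = np.sum(f & ~g)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec, sens * spec
```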
4.2 Computational Time Demands
We compared the computational time demands of the tested methods in the second test. The image data sets A1 and A2 were used for this purpose. Moreover, we resized these images to different sizes in order to highlight the dependence of the methods' computational time demands on the image size. The volumes
(100 × 100 × 40 voxels) of the data set A1 were resized to volumes of sizes 25 × 25 × 10, 50 × 50 × 20 and 75 × 75 × 30 voxels. Similarly, two-dimensional images of sizes 50 × 50, 200 × 200, 400 × 400 and 800 × 800 pixels were obtained from the images (100 × 100 pixels) of the data set A2. We used the corner points of the input images as the initial interface and the same values of the parameters β and ε of the speed function (2) as in the previous test (see Table 1). The stopping criterion of each run was again determined using the repeated computation of the measure Cor(f) every k iterations (see Section 4.1). The average values of the computational time demands of the tested methods are illustrated in Fig. 3.

Table 1. The average values of Cor(f), Sens(f) and Spec(f) of the segmentation results for the image data sets A1, A2 obtained using the tested methods with parameters β and ε determining the relative impacts of the curvature and the attraction force of the speed function F (see Eq. 2). The first three rows illustrate the case when the segmentation was started from the image background, while the next three rows present the segmentation results when the initial interface was evolved from the image foreground.

           Parameters               Two dimensions                 Three dimensions
Method   β           ε       Sens(f)  Spec(f)  Cor(f)      Sens(f)  Spec(f)  Cor(f)
SFM      0.002       0.04    0.9385   0.9876   0.9269      0.9379   0.9887   0.9273
DTA      0.002       0.05    0.9348   0.9884   0.9239      0.9361   0.9869   0.9238
NHA      0.004       0.50    0.9343   0.9862   0.9214      0.9358   0.9864   0.9231
SFM      15 · 10⁻⁵   0.04    0.9382   0.9882   0.9271      0.9376   0.9884   0.9267
DTA      12 · 10⁻⁵   0.05    0.9364   0.9847   0.9220      0.9357   0.9852   0.9218
NHA      25 · 10⁻⁵   0.50    0.9358   0.9856   0.9223      0.9361   0.9850   0.9221
4.3 Detection of Closely Located Objects
The ability of the compared methods to detect several objects located close to each other was investigated in the last test. We used the image data sets B1 and B2 for this purpose. We were concerned with two cases. First, the initial interface was placed into the image foreground. The evolution of the moving interface started from the center points of the spheres. Second, the initial interface was placed into the image background and represented as the corner points of the input image and its center point that was located between all three spheres. We aimed at the correct detection of three objects in both cases. The parameters β and ε of the speed function (2) were first experimentally set by hand on the data set B2 with respect to the best segmentation results and subsequently used for the other image data set B1 . The SFM was computed with β = 8 · 10−4 and ε = 0.03, DTA with β = 25 · 10−4 , ε = 0.025 and NHA with β = 14 · 10−4 , ε = 0.55 in the case of placing the initial interface into the image foreground. In the latter case, the SFM was computed with β = 14 · 10−4 and ε = 0.01, DTA with β = 16 · 10−4 , ε = 0.035 and NHA with β = 35 · 10−4
and ε = 0.55. The dependences of the number of detected objects on their mutual distance are depicted in Fig. 4.

Fig. 3. The dependences of the computational time demands of the tested methods on the size of the input image. The results for 2D images are depicted on the left and the results for 3D ones are illustrated on the right.
4.4 Discussion
The presented results of the three above-described tests are summarized and discussed in this section. We highlight and explain the differences between the segmentation results obtained using the compared methods. The values of Sens(f), Spec(f) and Cor(f) shown in Table 2 indicate that the segmentation results of the tested methods are very similar and independent of the position of the initial interface. Even though the value of the correctness measure Cor(f) of about 0.93 does not look very optimistic, the accuracy of the methods is very good. The simulation of the microscope acquisition in our generator (see Section 3), especially the addition of the Perlin noise and the subsequent convolution with the theoretical PSF of the optical system, causes small perturbations in the object boundary. In consequence, an object is a little smaller in the generator output image than the same object in the ground truth image data. The main difference between the tested methods is in their computational time demands. Though all methods provide similar segmentation results, the computation of the NHA is significantly faster than the computations of the other compared methods for both 2D and 3D images. The graphs depicted in Fig. 3 show that the computational time demand of the NHA was about 3 seconds, about 141 seconds for the DTA and about 176 seconds for the SFM on the image of size 800 × 800 pixels. The differences in the method speeds are also evident for 3D images. While the computation of the NHA took about 57 seconds on the image of size 100 × 100 × 40 voxels, the time of computation of the DTA was about 6138 seconds and about 7072 seconds for the SFM on the same image. The results illustrated in Fig. 4 indicate that if the initial interface is placed into the image background then the detection of several closely located objects cannot be correctly conducted if their mutual distance is lower than 2 voxels. It is
Fig. 4. The dependences of the number of detected objects on their mutual distance. The left column represents the segmentation results of the tested methods for the case when the initial interface was evolved from the image foreground. The right one illustrates the segmentation results when the initial interface was evolved from the image background. The results are the same for both 2D and 3D images.
because the propagation speed of the interface is compensated by the attraction forces of the object boundaries. But this problem can be successfully solved when the initial interface is placed into the image foreground. The parameters β and ε of the speed function (2) were experimentally set with respect to the best segmentation results. During the determination of their correct values, we noticed that the value of the parameter ε has not any significant effect on the segmentation results of the tested methods. However, a small change of the parameter β can cause that the segmentation process totally fails. In particular, the DTA and NHA are highly sensitive to the parameter β in the case of low contrast edges in the input image.
5 Conclusion
The comparison of the suitability of the Sparse-Field method, Deng and Tsui’s algorithm and Nilsson and Heyden’s algorithm for image segmentation in
fluorescence microscopy has been presented in this paper. We have focused on the method accuracy, computational time demands and ability to detect several objects located close to each other. Since the input image data of our experiments were artificially generated, we were able to compare the segmentation results with ground truth images. We have shown that all three methods provide accurate segmentation results and are able to detect several closely located objects. Due to their computational time demands, only the Nilsson and Heyden’s algorithm can be efficiently used for image segmentation in near real-time. We suppose that our results are also valid and applicable to other biomedical images like CT scans, MRI or ultrasound images. For the time being, several tests and comparisons of the selected methods were conducted only on artificially generated images. In the future we intend to compare the tested methods on real biomedical data. Acknowledgments. This work has been supported by the Ministry of Education of the Czech Republic (Projects No. MSM-0021622419, No. LC535 and No. 2B06052).
References 1. Whitaker, R.T.: A level-set approach to 3d reconstruction from range data. Int. J. Comput. Vision 29, 203–231 (1998) 2. Deng, J., Tsui, H.T.: A fast level set method for segmentation of low contrast noisy biomedical images. Pattern Recognition Letters 23, 161–169 (2002) 3. Nilsson, B., Heyden, A.: A fast algorithm for level set-like active contours. Pattern Recogn. Lett. 24, 1331–1337 (2003) 4. Pratt, W.K.: Digital Image Processing. Wiley, Chichester (1991) 5. Yoo, T.S.: Insight into Images: Principles and Practice for Segmentation, Registration, and Image Analysis. AK Peters Ltd (2004) 6. Soille, P.: Morphological Image Analysis, 2nd edn. Springer, Heidelberg (2004) 7. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: Active contour models. International Journal of Computer Vision 321–331 (1987) 8. Cohen, L.D.: On active contour models and balloons. CVGIP: Image Understanding 53, 211–218 (1991) 9. Caselles, V., Catt, F., Coll, T., Dibos, F.: A geometric model for active contours in image processing. Numer. Math. 66, 1–31 (1993) 10. Malladi, R., Sethian, J.A., Vemuri, B.C.: Shape modeling with front propagation: a level set approach. IEEE Transactions on Pattern Analysis and Machine Inteligence 17, 158–175 (1995) 11. Caselles, V., Kimmel, R., Sapiro, G.: Geodesic active contours. Int. J. Comput. Vision 22, 61–79 (1997) 12. Chan, T.F., Vese, L.A.: Active contours without edges. IEEE Transactions of Image Processing 10, 266–277 (2001) 13. Osher, S., Sethian, J.A.: Fronts propagating with curvature dependent speed: algorithms based on Hamilton–Jacobi formulation. Journal of Computational Physics 79, 12–49 (1988)
14. Sethian, J.A.: Level Set Methods and Fast Marching Methods:Evolving interfaces in computational geometry, fluid mechanics, computer vision, and materials science, 2nd edn. Cambridge University Press, Cambridge (1999) 15. Sarti, A., Sol´ orzano, C.O.d., Lockett, S., Malladi, R.: A geometric model for 3-D confocal image analysis. IEEE Transaction on Biomedical Engineering 47, 1600– 1609 (2000) 16. Adalsteinsson, D., Sethian, J.A.: The fast construction of extension velocities in level set methods. J. Comput. Phys. 148, 2–22 (1999) 17. Sol´ orzano, C.O.d., Malladi, R., Leli´evre, S.A., Lockett, S.J.: Segmentation of nuclei and cells using membrane related protein markers. Journal of Microscopy 201, 404– 415 (2001) 18. Sethian, J.A.: A fast marching level set method for monotonically advancing fronts. Proc. Nat’l Academy of Sciences 93, 1591–1595 (1996) 19. Nilsson, B., Heyden, A.: Segmentation of complex cell clusters in microscopic images: Application to bone marrow samples. Cytometry part A 66, 24–31 (2005) 20. Matula, P., Huben´ y, J., Kozubek, M.: Fast marching 3d reconstruction of interphase chromosomes. In: Sonka, M., Kakadiaris, I.A., Kybic, J. (eds.) CVAMIA. LNCS, vol. 3117, pp. 385–394. Springer, Heidelberg (2004) 21. Huben´ y, J., Matula, P., Matula, P., Kozubek, M.: Improved 3d reconstruction of interphase chromosomes based on nonlinear diffusion filtering. In: In Proceedings of PDE-Based Image Processing and Related Inverse Problem. Mathematics and Vizualization, pp. 163–173. Springer, Heidelberg (2006) 22. Svoboda, D., Kaˇs´ık, M., Maˇska, M., Huben´ y, J., Stejskal, S., Zimmermann, M.: On simulating 3d fluorescent microscope images. In: Computer Analysis of Images and Patterns. LNCS, vol. 4673, pp. 309–316. Springer, Heidelberg (2007) 23. Perlin, K.: An image synthesizer. In: SIGGRAPH 1985: Proceedings of the 12th annual conference on Computer graphics and interactive techniques, pp. 287–296. ACM Press, New York (1985)
Texture-Based Objects Recognition for Vehicle Environment Perception Using a Multiband Camera Yousun Kang, Kiyosumi Kidono, Yoshikatsu Kimura, and Yoshiki Ninomiya Toyota Central R&D Labs., Inc., Aichi, 480-1192 Japan {yskang,kidono,YKimura,ninomiya}@mosk.tytlabs.co.jp
Abstract. The vision-based intelligent vehicle systems for environment perception have required integration of image data acquired from multiple cameras. We developed multiband camera, which can simultaneously obtain both images of visible color and near infrared. In this paper, we present a texture-based objects recognition under road environment scene using a multiband image. The new color feature is proposed to cluster meaningful regions of a multiband image and the texture segmentation is utilized in classification of texture-based objects. Experimental results show that the proposed method effectively recognizes the texture-based objects including roads, buildings, trees, and sky, as well as faces of pedestrians. In the future, by integrating the shape-based objects recognition, which includes pedestrians, cars, and bicycles with texture-based objects recognition, the proposed system can expand into a complex scene understanding system for vehicle environment perception.
1 Introduction
The latest driver assistance systems require multiple cameras mounted on a vehicle for environment perception. Color cameras are utilized for a rear-view monitoring and a blind-corner monitoring system, which provide images of driver blind-spots [1]. Stereo cameras are needed that cope with 3D localization and geometry estimation [2], and infrared technologies are now used in driver monitoring and night-vision systems [3]. There is now a challenge to integrate the multiple cameras to form a vision-based intelligent vehicle system. We developed a multiband camera for the purpose of integrating a color camera and a near infrared camera. The developed multiband camera can simultaneously obtain both images of color and near infrared wavelengths. Therefore, an input image from a multiband camera is called a multiband image in this paper, which has four bands image consisting of a band of near infrared and three bands of color. This paper presents texture-based objects recognition with a multiband image using color based clustering scheme and texture segmentation. The shape-based objects recognition, which is contrary to texture-based objects recognition, has been the main focus for the prevention of vehicular accidents. Because, the obstacles of a vehicle are recognized by specific patterns such as pedestrians, cars, and G. Bebis et al. (Eds.): ISVC 2007, Part II, LNCS 4842, pp. 582–591, 2007. c Springer-Verlag Berlin Heidelberg 2007
bicycles. However, a more complex scene understanding system, recently introduced in [4], relies upon the recognition of not only shape-based objects but also of texture-based objects. The texture-based objects, better described by their texture features rather than the local shape, include buildings, roads, trees, and sky. They are also important objects to be recognized as a background from a moving vehicle. In addition, the texture-based objects recognition can influence 3D scene interpretation. The 3D scene interpretation system integrates scene geometry into 2D shape-based objects recognition. More recently, dynamic 3D scene analysis from a moving vehicle was presented at [5] using a SfM (Structured from Motion) module. The SfM module computes camera pose and ground plane estimation from calibrated stereo cameras. If roads, buildings and sky regions are explicitly classified as the background of an input image, the 3D geometric scene interpretation can better and more accurately improve the detection of pedestrians and obstacles. In Section 2, the near infrared and the multiband camera are introduced and its properties analyzed. New color features which combine infrared and visible color features are also proposed in this section. In Section 3, clustering algorithm and texture segmentation are applied to multiband and visible color image, Section 4 displays the experimental results.
2 Feature Extraction
2.1 Near Infrared Imaging
Before we extract a new color feature from infrared and visible color bands, it is important to understand the multiband image formation process. The infrared wavelengths on the electromagnetic spectrum roughly lie between 0.7 to 12 microns in contrast to wavelengths in the visible spectrum, which lie between 0.4 to 0.7 microns. The infrared is divided into near, far, and mid infrared, however, in this paper, we referred only to the near infrared. The statistics of far infrared images introduced in [6]. They mentioned a potential application of infrared image combined with visible spectrum such as statistics-based segmentation and categorization algorithms. In this work, we present an application of near infrared image combined with visible spectrum in color image segmentation and show the better performance of multiband image than only visible color band image’s in texture-based recognition. Near infrared is defined by water absorption, and the effect is formed by strongly reflecting off a person’s exterior layer, and foliage, such as tree leaves and grass. The uses of near infrared include computer vision field such as remote sensing of vegetation using NDVI (Normalized Difference Vegetation Index) [7], road surface (freezing or moisture) condition detection [8], illumination invariant face recognition [9], and night vision system [10]. In general, near infrared can be captured by most imaging devices with a visible spectrum. Because the imaging devices are sensitive to infrared light, many digital cameras employ infrared blockers. However, the developed multiband camera removes the infrared cut-off filter and a special color filter was
Fig. 1. The structure of the multiband camera with special color filter array (a) Conventional digital color camera (b) The proposed multiband camera
designed to simultaneously obtain infrared and visible light. The structure of a conventional digital color camera and the multiband camera is shown in Fig. 1. The Bayer filter [11], which is generally used in most of the conventional digital color camera has a pixel array of 50% green, 25% red and 25% blue patterns, and is referred to as RGBG or GRGB. The special filter of the multiband camera rearranges the 50% green pixel array into 25% green and 25% infrared cell onto a square grid of image sensors. The 25% infrared array acts as a visible light cut-off filter. Therefore, the developed multiband camera can provide a color image, an infrared image, and visible monochrome images according to each color band, depending on the requirement of the application. The specifications and more detailed information of the multiband camera are presented in [12]. 2.2
New Color Space
In this section, we derive a set of effective three dimensional color features from a multiband image. The multiband image consists of four bands, a band of infrared and three bands of visible color spectrums. To obtain meaningful clustering regions from a multiband image, the mean shift color image segmentation algorithm [13] is employed. Before proceeding to the mean shift algorithm, the color space issue has to be mentioned. Perceived color differences should correspond to Euclidean distances in the color space chosen to represent the features. The L*u*v* was especially designed to best approximate perceptually uniform color spaces. As was used in mean shift algorithm, we also utilized the three dimensional L*u*v* color space. We proposed a new method to transform four dimensional features of a multiband image to three dimensional color space, in which the visible color and near infrared are combined to create L*u*v* color space. In color image segmentation using a histogram thresholding scheme, Karhunen-Loeve (K.L.) transformation was applied for extracting effective color features, which is called Ohta [14] color space. Our proposed method is similar to Ohta color space in the way it uses
Fig. 2. The eigenvectors distributions from 500 multiband images (a) Only Visible 3 bands color space (R, G, B) (b) Visible + Infrared 4 bands color space (Ir, R, G, B)
K.L. transformation. First, we calculate the eigenvectors of the covariance matrix extracted from the four-band distribution of a multiband image. Let S be the multiband image to be segmented, and Σ be the covariance matrix of the distribution of the four bands (Ir, R, G, B) in S. Let λ_1, λ_2, λ_3, and λ_4 be the eigenvalues of Σ with λ_1 ≥ λ_2 ≥ λ_3 ≥ λ_4, and let ω_i = (ω_{Ir,i}, ω_{R,i}, ω_{G,i}, ω_{B,i})^t for i = 1, 2, 3, 4 be the eigenvectors of Σ corresponding to λ_i. The feature vectors X_i calculated by the K.L. transformation are defined as

X_i = (Ir, R, G, B) · ω_i    (‖ω_i‖ = 1, i = 1, 2, 3, 4).

Second, we also calculate the eigenvectors of the covariance matrix extracted from only the visible three bands (R, G, B) in S. Let ω'_i = (ω'_{R,i}, ω'_{G,i}, ω'_{B,i})^t for i = 1, 2, 3 be the eigenvectors of this covariance matrix corresponding to its eigenvalues λ'_i. These three-dimensional feature vectors X'_i are defined as

X'_i = (R, G, B) · ω'_i    (‖ω'_i‖ = 1, i = 1, 2, 3).

Fig. 2 shows the eigenvector distributions of 500 multiband images acquired from a multiband camera mounted on a moving vehicle. Fig. 2(a) and (b) show the distributions of ω'_i in the visible three-band color space and of ω_i in the four-band color space, respectively. The feature vectors X'_i recover the input color value (R, G, B) by multiplication with the inverse matrix of eigenvectors (ω'_i)^{-1}:

(R, G, B) = X'_i · (ω'_i)^{-1}    (i = 1, 2, 3).

New color values can also be obtained by multiplying the feature vectors X_i extracted from a multiband image by the inverse matrix (ω'_i)^{-1}. At this
Fig. 3. (a) An input image of visible color space: R, G, B (b) An infrared image in grayscale: Ir (c) A new color image extracted from infrared and visible color features: r, g, b
time, to adjust the dimension of the feature vectors between the 4-band and 3-band cases, we compress the four-dimensional feature vectors X_i to three-dimensional vectors X'_i, in which X'_1 substitutes X_1, X'_2 substitutes the mean vector of (X_2, X_3), and X'_3 substitutes X_4. Therefore, the new color features r, g, b are calculated by the equation

(r, g, b) = X'_i · (ω'_i)^{-1}    (i = 1, 2, 3),
X'_1 = X_1,  X'_2 = (X_2 + X_3)/2,  X'_3 = X_4,    0 ≤ r, g, b ≤ 255.
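To make the construction concrete, the following NumPy sketch computes the r, g, b features from a 4-band image. It is an illustrative reading of the method, not the authors' implementation; the (H, W, 4) array layout, the channel order (Ir, R, G, B) and all function names are our assumptions.

```python
import numpy as np

def new_color_features(multiband):
    """Sketch of the proposed color feature: `multiband` is an (H, W, 4) array
    with channels ordered (Ir, R, G, B); returns an (H, W, 3) array of r, g, b."""
    h, w, _ = multiband.shape
    pixels = multiband.reshape(-1, 4).astype(np.float64)   # N x 4 samples
    rgb = pixels[:, 1:]                                    # visible bands only

    # Eigenvectors of the 4-band covariance, sorted by decreasing eigenvalue
    # (columns of W4 are the omega_i).
    evals4, evecs4 = np.linalg.eigh(np.cov(pixels, rowvar=False))
    W4 = evecs4[:, np.argsort(evals4)[::-1]]

    # Eigenvectors of the visible 3-band covariance (columns are the omega'_i).
    evals3, evecs3 = np.linalg.eigh(np.cov(rgb, rowvar=False))
    W3 = evecs3[:, np.argsort(evals3)[::-1]]

    # K.L. transform of the 4-band data: X_i = (Ir, R, G, B) . omega_i
    X = pixels @ W4

    # Compression to three dimensions: X'_1 = X_1, X'_2 = (X_2 + X_3)/2, X'_3 = X_4
    Xc = np.stack([X[:, 0], 0.5 * (X[:, 1] + X[:, 2]), X[:, 3]], axis=1)

    # Map back through the inverse of the visible-band eigenvector matrix
    # and clip to the displayable range [0, 255].
    new_rgb = np.clip(Xc @ np.linalg.inv(W3), 0, 255)
    return new_rgb.reshape(h, w, 3)
```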
As seen in Fig. 3, we compared the new color feature image composed of r, g, b (c) with the original color image R, G, B (a) and the infrared image Ir (b). Since Euclidean distance in RGB color space does not correlate well with perceived change in color, we converted the r, g, b color vectors from RGB color space to L*u*v* color space for the clustering process. Using this new color feature, we can effectively cluster meaningful regions of a multiband image, since it decreases the variance of the RGB color features and increases the contribution of the infrared reflection. Accordingly, the experimental results confirmed better clustering for the new color (r, g, b) image than for the original three-dimensional color (R, G, B) image.
3 Recognition
3.1 Clustering
The mean shift algorithm [13] is a nonparametric clustering technique which does not require prior knowledge of the number of clusters, and does not constrain the shape of the clusters. An image is typically represented as a two-dimensional lattice of p-dimensional vectors (pixels), where p = 1 in the gray-level case, three for color images. The space of the lattice is known as the spatial domain, while the gray level or color information is represented in the range domain. For both domains, Euclidean metric is assumed. When the location and range vectors
Fig. 4. (a) Input images of visible color and infrared (321 × 241 pixels) (b) Clustering results using a mean shift algorithm
are concatenated in the joint spatial-range domain of dimension d = p + 2, their different nature has to be compensated by proper normalization. Thus, the multivariate kernel is defined as the product of two radially symmetric kernels, and the Euclidean metric allows a single bandwidth parameter for each domain:

K_{h_s,h_r}(x) = \frac{C}{h_s^2 h_r^p} \, k\left(\left\| \frac{x^s}{h_s} \right\|^2\right) k\left(\left\| \frac{x^r}{h_r} \right\|^2\right)    (1)
where x^s is the spatial part and x^r the range part of a feature vector, k(x) is the common profile used in both domains, h_s and h_r are the employed kernel bandwidths, and C is the corresponding normalization constant. In practice, this clustering process was accomplished using the mean shift part of the segmentation software EDISON [15]. We need to choose the bandwidth parameter h = (h_s, h_r) and the minimum region parameter M, which controls the elimination of spatial regions containing fewer than M pixels. In order to obtain approximately the same number of regions for the R, G, B and r, g, b images, the color bandwidth parameter h_r is given a different value for each, while the spatial bandwidth h_s and the minimum region parameter M are given equal values. Fig. 4 shows the clustering results for the R, G, B (a) and r, g, b (b) images. When the minimum region parameter M is 100 for both images and the bandwidth parameters of the R, G, B and r, g, b images are h_RGB = (6, 12) and h_rgb = (6, 7), the numbers of clustered regions are 42 and 40, respectively. Note that the proposed r, g, b image yields fewer but more meaningful regions than the R, G, B image.
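As a point of reference, the sketch below evaluates the joint spatial-range weight of Eq. (1) for a single neighbour, using the Epanechnikov profile k(x) = 1 − x (one common choice for mean shift); it illustrates the kernel only and is not the EDISON code used in the experiments.

```python
import numpy as np

def profile(x):
    """Epanechnikov profile k(x) = 1 - x on [0, 1], zero elsewhere."""
    return np.where((x >= 0.0) & (x <= 1.0), 1.0 - x, 0.0)

def joint_kernel_weight(x_s, x_r, h_s, h_r, C=1.0):
    """Weight K_{hs,hr}(x) of Eq. (1): x_s is the spatial part (2-vector),
    x_r the range part (p-vector, e.g. an L*u*v* color difference)."""
    x_s = np.asarray(x_s, dtype=float)
    x_r = np.asarray(x_r, dtype=float)
    p = x_r.size
    norm = C / (h_s ** 2 * h_r ** p)
    return norm * profile(x_s @ x_s / h_s ** 2) * profile(x_r @ x_r / h_r ** 2)

# Example with the bandwidths (h_s, h_r) = (6, 7) quoted for the r, g, b image:
w = joint_kernel_weight([3.0, 0.0], [2.0, 1.0, 0.5], h_s=6.0, h_r=7.0)
```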
3.2 Texture Segmentation
Through the mean-shift clustering process, the cluster regions are labeled on R, G, B and r, g, b images. In order to discriminate the labeled regions, texture
Fig. 5. Street scenes (first and second columns) and pedestrian scenes (third and fourth columns), each shown for the R, G, B and R′G′B′ images: (a) Input images of visible color and infrared (321 × 241 pixels) (b) Clustering results using a mean shift algorithm (c) Region boundaries from segmented results (d) Texture segmentation results (road-black, tree-green, building-magenta, sky-blue, face-yellow, etc.-white) (e) Classification results using clustering and texture segmentation (f) Color overlay images onto the input images
segmentation is performed and texture patterns are extracted from higher-order autocorrelation functions. The higher-order local autocorrelation function has been used in a wide range of applications [16] and has the advantages of being shift-invariant and computationally simple. The autocorrelation functions possess a unique property for even orders and can be used to assess the amount of regularity, as well as the fineness/coarseness, of a texture image [17]. The higher-order autocorrelation functions are defined by

r_x^n(a_1, a_2, \cdots, a_n) = \int_D f(x)\, f(x + a_1) \cdots f(x + a_n)\, dx,    (2)
where n denotes the order of the autocorrelation function, x is the image coordinate vector, and ai are the displacement vectors. A function f (x) stands for the image intensity on the retinal plane D [18]. We limited the order n to 2 due to computational calculations. The feature extraction module computes 35 local autocorrelation coefficients from each color band, using the mask patterns [19]. Extracted surface texture patterns from the higher-order local autocorrelation function can be discriminated by estimating a multivariate normal density [20].
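A simplified sketch of how such features can be computed is given below. It evaluates a discrete analogue of Eq. (2) for a user-supplied subset of displacement patterns up to order 2; the full descriptor uses the 35 standard mask patterns of a 3 × 3 window [19], which are not reproduced here, and the boundary handling is only approximate.

```python
import numpy as np

def local_autocorrelations(f, patterns):
    """Discrete analogue of Eq. (2) on a 2-D intensity image f.
    `patterns` is a list of displacement tuples: () gives the order-0 term,
    ((dy, dx),) an order-1 term and ((dy1, dx1), (dy2, dx2)) an order-2 term."""
    f = np.asarray(f, dtype=np.float64)
    feats = []
    for disp in patterns:
        prod = f.copy()
        for dy, dx in disp:
            # np.roll wraps at the border; a full implementation would restrict
            # the sum to pixels whose shifted positions stay inside the window.
            prod = prod * np.roll(np.roll(f, -dy, axis=0), -dx, axis=1)
        feats.append(prod.sum())
    return np.array(feats)

# A few example offsets within a 3x3 neighbourhood (a small subset of the 35
# masks), applied to one 5x5 training window as sampled in the experiments.
patterns = [(), ((0, 1),), ((1, 0),), ((0, 1), (0, -1)), ((1, 1), (-1, -1))]
window = np.random.rand(5, 5)
features = local_autocorrelations(window, patterns)
```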
4 Experimental Results
The effectiveness of the proposed method for texture-based objects recognition has been tested in two scenes. The first scene is a street scene with buildings, roads, trees, sky, etc. The second scene is another street scene but pedestrian’s faces are visible. In our experiments, we compared the multiband image to the three bands color image. Texture-based Objects Recognition. The street scenes are displayed in the first and second column of Fig. 5(a). The first column shows the visible R, G, B image and the second column shows the proposed new color feature r, g, b image with 321 × 241 size. Two images were processed with mean shift algorithm employing normal kernels defined using spatial and range resolutions such as (hs , hr , M ) = (6, 6, 20) for RGB and (hs , hr , M ) = (6, 8.5, 20) for r, g, b as shown in Fig. 5(b). From the R, G, B image and r, g, b, image, 187 regions and 184 regions from each image were obtained respectively as clustering results. In Fig. 5(c), the region boundaries are shown and we can observed that r, g, b image is more effectively divided into meaningful regions than R, G, B image. Fig. 5(d) shows the texture segmentation results extracted from higher-order local autocorrelation functions. To get the training texture patterns, we randomly sampled 2000 windows of 5 × 5 pixels from texture patterns of each class. We then made the feature vectors consist of the 105 dimensions for visible color band, (35 dimension × 3, 3 meaning R, G, B) and 140 dimension for infrared and color band (35 dimension × 4, 4 meaning Ir, R, G, B). Each object in the image was classified into six classes and assigned a color. The assigned colors are : road-black, tree-green, building-magenta, sky-blue, etc-white. The texturebased objects recognition results shown in Fig. 5(e) is obtained by a voting
process carried out between the mean shift algorithm and texture segmentation. Fig. 5(f) shows the color overlay images onto an input image, this result can be integrated to a shape-based objects recognition for scene understanding. As a result, the building, tree and sky regions of the multiband image yielded a greater recognition accuracy than the visible color image. Texture-based Face Detection. Face detection using near infrared were frequently used because facial signatures are less likely to vary due to light source. Fig. 5(a), columns three and four show a R, G, B and r, g, b image of a pedestrian scene. The mean shift parameter is (hs , hr , M ) = (6, 5.3, 50) for R, G, B and (hs , hr , M ) = (6, 7, 50) for r, g, b, and 163 regions and 164 regions are obtained respectively. The region boundaries of the r, g, b image is more clear for pedestrian’s region detection than R, G, B image as shown in Fig. 5(c). The texture-based objects and face recognition results are displayed in Fig. 5(e). As a results, the color image failed to identify the face (yellow regions) with other similar colors. However, the infrared image combined with visible spectrum is superior to the color image, because infrared took into account the facial signature reflection.
5 Conclusion
This paper presents a texture-based objects recognition using infrared images combined with visible spectrums. The new color features are proposed to cluster meaningful regions in L*u*v* color space. Texture features are extracted from higher-order local autocorrelation functions for texture segmentation. The clustered regions are classified by voting process from texture segmentation results. In the experiment, the proposed method effectively recognizes the texture-based objects as well as faces of pedestrians. By integrating the shape-based objects recognition system or 3D geometric scene interpretation system, the proposed system can be utilized by vehicles to perceive its environment and better understand complex scenes.
References 1. Ishida, S., Tanaka, J., Kondo, S., Shingyoji, M.: The method of a driver assistance system and analysis of a driver’s behavior. In: Proc. 10th World Congress on Intell. Transp. Syst. p. 3168 (2003) 2. Kimura, Y.: Stereo vision for obstacle detection. In: Proc. 13th World Congress on Intell. Transp. Syst. p. 1082 (2006) 3. Tsuji, T., Hattori, H., Watanabe, M., Nagaoka, N.: Development of night-vision system. IEEE Trans. Intell. Transp. Syst. 3(3), 203–209 (2002) 4. Serre, T., Wolf, L., Bileschi, S., Riesenhuber, M., Poggio, T.: Robust Object Recognition with Cortex-Like Mechanisms. IEEE Trans. Pattern Anal. Mach. Intell. 29(3), 411–426 (2007) 5. Leibe, B., Cornelis, N., Cornelis, K., Gool, V.L.: Dynamic 3D Scene Analysis from a Moving Vehicle. In: IEEE Conf. Computer Vision and Pattern Recognition (2007)
6. Morris, N., Avidan, S., Matusik, W., Pfister, H.: Statistics of Infrared Images. In: IEEE Conf. Computer Vision and Pattern Recognition (2007) 7. Jackson, T.J., Chenb, D., Cosha, M., Lia, F., Andersonc, M., Walthalla, C., Doriaswamya, P., Hunta, E.R.: Vegetation water content mapping using Landsat data derived normalized difference water index for corn and soybeans. Remote Sensing of Environment 92(4), 475–482 (2004) 8. Nami, M., Tsutsumi, D., Nagao, S., Doi, Y., Ishikawa, M., Atanabe, K.: Study on Detecting Freeze of Road Surface by using Near Infrared Absorption Image. Hokkaido Industrial Research Institute Technical Report, 296, 159–168 (2000) 9. Chu, R., Liao, S., Zhang, L.: Illumination Invariant Face Recognition Using NearInfrared Images. IEEE Trans. Pattern Anal. Mach. Intell. 29(4), 627–639 (2007) 10. Toyofuku, K., Iwata, Y., Hagisato, Y.: Night view system using near-infrared light. TOYOTA Technical Review 52(2) (2002) 11. Gunturk, B.K., Altunbasak, Y., Mersereau, R.: Color plane interpolation using alternating projections. IEEE Trans. Image Process. 11(9), 997–1013 (2002) 12. Kidono, K., Ninomiya, Y.: Visibility Estimation under Night-time Conditions using a Multiband Camera. In: IEEE Intell. Veh. Symposium (2007) 13. Comaniciu, D., Meer, P.: Mean shift: A robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Mach. Intell. 24(5), 603–619 (2002) 14. Ohta, Y., Kanade, T., Sakai, T.: Color Information for Region Segmentation. Computer Graphics and Image Processing 13, 222–241 (1980) 15. Christoudias, C., Georgescu, B., Meer, P.: Synergism in low-level vision. In: 16th Int. Conf. on Pattern Recognition, vol. IV, pp. 150–155 (2002) 16. Kurita, T., Otsu, N.: Texture classification by higher order local autocorrelation features. In: Proc. of Asian Conf. on Computer Vision, pp. 175–178 (1993) 17. Tuceryan, M., Jain, A.K.: Chapter 2.1: Texture Analysis. In: The Handbook of Pattern Recognition and Computer Vision, 2nd edn. World Scientific Publishing Co. Singapore (1998) 18. McLaughlin, J.A., Raviv, J.: Nth-order autocorrelations in pattern recognition. Information and Control 12, 121–142 (2000) 19. Kang, Y., Morooka, K., Nagahashi, H.: Texture classification using hierarchical linear discriminant space. IEICE Trans. Inf. & Syst. 88-D(10), 2380–2388 (2005) 20. Smith, S.P., Anil, K.J.: A test to determine the multivariate normality of a dataset. IEEE Trans. Pattern Anal. Mach. Intell. 10(5), 757–761 (1988)
Object Tracking Via Uncertainty Minimization Albert Akhriev Institute of Information Technologies, RRC “Kurchatov Institute”, 1 Kurchatov square, 123182 Moscow, Russia
[email protected]
Abstract. Color and texture provide important visual information for real-time tracking of non-rigid and partially occluded objects. Recent developments have shown the robustness and effectiveness of color based tracking algorithms, especially for tracking tasks where object shape exhibits dramatic variability. In this article we solve the problem by tracking color distributions of background and foreground (object) points simultaneously. The key feature of our approach is careful selection of histogram resolution (or kernel radius) on each frame of a sequence.
1 Introduction
The problem of object tracking through a set of sequential frames is the subject of intensive research. It is not surprising that a number of articles were devoted to the different settings of the tracking problem. In this paper we investigate the task of color object tracking given manually outlined contour in the first image of a videosequence. The goal is to track an object through a long videosequence using just a color information. Methods proposed in the literature can be formally separated on geometric, photometric (color, texture) and combined ones [6]. Photometric and combined approaches are of particular interest in recent years. The articles [1, 14, 7, 9, 8, 2, 12, 3, 10], cited in the reference list, reflect only the minor part of the whole spectrum of ideas. In this paper a version of photometric approach is presented, where we track the boundary contour of an object of interest (in spirit of works of Zhang et al. [6] and Abd-Almageed et al. [11]) rather than its bounding ellipse or rectangle [1, 9]. There are several ways how to adopt color/texture distributions for the tracking task. Some methods rely on object’s color distribution given in the first frame as an object model through a sequence. In the methods, proposed by Zhang et al. [6], Comaniciu et al. [1] and McKenna et al. [9], color distributions were allowed to adapt for changes on every new frame. Hayman et al. [12] presented the concept of online learning based on distribution analysis. Avidan [5] considered confidence map to find the most probable object’s location — the way very similar to our approach. Other authors [3,10] exploited the idea of local color/texture segmentation utilizing information obtained on the previous frame(s). The latter approach seems exceptionally promising. G. Bebis et al. (Eds.): ISVC 2007, Part II, LNCS 4842, pp. 592–601, 2007. c Springer-Verlag Berlin Heidelberg 2007
The long known concept of competition between background and foreground regions [4] was successfully applied in various computer vision algorithms that solve segmentation or classification tasks. Nevertheless, little attention has been paid to the issue of increasing certainty of segmentation. For example, a typical approach uses empirical distributions of background and foreground sets represented as histograms of fixed resolution. However, the outcome of competing for a pixel can be made less ambiguous if selected resolution is near optimal. In this paper we shall show how proper selection of histogram resolution or kernel size strengthens the certainty of dynamic background/foreground segmentation. The paper is organized as follows. Section 2 begins with brief outline of proposed tracking algorithm. Estimation of optimal histogram resolution(s) or kernel size is discussed in Sections 2.1, 2.2 and 2.3. In Section 2.4 the process of object localization on the next frame is explained. Results and conclusion remarks are summarized in Section 3.
2 Tracking Algorithm
Our tracking algorithm includes two main stages. During the first stage, location of object’s contour is predicted on the next frame It+1 from its location on the current one It . During the second stage, the contour is adjusted to fit better for background and foreground color distributions. Initial condition assumes handdrawn contour given at the moment t = 0. In this work we use the simplest predictor based on direct color correlation. Better techniques can be found in [7, 1, 14, 15]. Since distribution tracking methods are mainly insensitive to complicated object transformations, all we need is the vector of mean interframe translation Tt = (dx, dy) of an object. We begin with some notation. At each moment t on the frame It there is a contour rt of tracked object obtained as the result of previous calculations. At the moment t = 0 initial contour is given. Let us introduce the set of object or foreground points Ft and the set of background points Bt . These sets form two wide bands inside and outside of object’s boundary respectively (left picture on Figure 1). The set Ct = Ft ∪ Bt includes all points of the current frame that participate in tracking process. Let us define the set of points Nt+1 of the next frame It+1 covered by the set Ct shifted on the vector of mean interframe displacement Tt (right picture on Figure 1). It is supposed that a contour on the next frame is situated somewhere inside the set Nt+1 . A color vector c = n (R, G, B) is given at each image point. Color histograms Htf , Htb and Ht+1 are constructed on point colors of sets Ft , Bt and Nt+1 respectively. Let us denote a histogram entry for a color c as H(c), and the total sample number as |H|. In some more details, the algorithm is given as follows. First, prediction of object’s location on the next frame is performed. Second, histograms of optimal resolution are constructed. Third, we assign background and foreground probabilities to each point of the set Nt+1 according to the frequencies of occurrence of its color in the histograms Htb and Htf . These probabilities form probability map on the next frame. Fourth, given predicted object’s contour as initial estimation,
Fig. 1. Point sets that participate in tracking process between a couple of successive frames. Thick line represents object boundary at moments t and t + 1 respectively. Near-boundary points constitute background (outer) Bt and foreground (inner) Ft sets on the current frame (left picture). Joint set Ct = Ft ∪ Bt, being shifted by the vector of mean interframe displacement Tt = (dx, dy), covers the set Nt+1 on the next frame (right picture). Contour changes from frame to frame. Initially we don't know its position at moment t + 1, but we assume that it lies somewhere inside the set Nt+1.
we smoothly adjust the contour over the probability map, so that points with high foreground probability settle mainly inside the contour, whereas points with high background probability remain mainly outside of the contour. The process is then repeated on the next couple of successive frames. In other words, we use the histograms H^b_t and H^f_t of the current frame to classify points of the next frame into foreground and background ones. Careful selection of the histogram resolution on each frame of a sequence is the key feature of our approach.
2.1 Optimal Histogram Construction
The value of a color component R, G or B varies within the interval [0, 255]. We define histogram resolution as the number of bins this interval is divided into. Optimal resolutions of the color channels R_bin, G_bin and B_bin are obtained by minimization of the classification uncertainty

Uncertainty = \min_{R_{bin}, G_{bin}, B_{bin}} \sum_{c \in N_{t+1}} H^n_{t+1}(c) \cdot E\big(H^b_t(c), H^f_t(c)\big),    (1)

where H^n_{t+1}(c) is the number of points of the set N_{t+1} with color c, and E is the entropy (uncertainty measure) of the classification of a point with color c. The entropy function receives the numbers of samples with color c in the background and foreground distributions as its arguments. Summing in (1) is carried out over all entries of the histogram H^n_{t+1} (see some detailed explanations below). We shall always select equally populated sets B_t and F_t, i.e. |B_t| = |F_t| = |H^b_t| = |H^f_t|, where the operation |·| gives the number of set points or the number of histogram samples. Strictly speaking, it is possible to handle the case |B_t| ≠ |F_t|; however, this case raises a whole number of questions that need to be answered. In particular, the entropy function becomes too complicated. In this paper, point classification means the assignment of background p^b(c) and foreground p^f(c) probabilities to a point with color c on the next frame. The conventional expression for the entropy of such a classification is given as follows:
E = −p^b \log(p^b) − p^f \log(p^f),    (2)
where the empirical probabilities can be defined, for the time being, by the frequencies of occurrence in the histograms H^b_t and H^f_t respectively:

p^b(c) = H^b_t(c) / (H^b_t(c) + H^f_t(c)),    p^f(c) = H^f_t(c) / (H^b_t(c) + H^f_t(c)).    (3)

Let us consider minimization (1) in detail. Let us pick up any point v ∈ N_{t+1} on the next frame I_{t+1} with color c. For the moment we do not know whether this point is a background or a foreground one. We can only assert that the set N_{t+1} contains the object's contour on the next frame, see Figure 1. Our goal is to achieve the minimal integral uncertainty of classification on the set N_{t+1} using point colors and the current empirical distributions H^b_t and H^f_t. The distribution of the point type has a binomial nature: background or foreground. The entropy of this distribution is given by formula (2). Formula (1) defines the total entropy, where a point of the set N_{t+1} contributes to the total uncertainty proportionally to the frequency of occurrence of its color in the histogram H^n_{t+1}, i.e. as H^n_{t+1}(c). Minimization of (1) is achieved by direct enumeration of the histogram resolutions R_bin, G_bin and B_bin. Optimal resolutions provide minimal intersection of the histograms H^b_t and H^f_t while keeping high intersection between the couples H^b_t, H^n_{t+1} and H^f_t, H^n_{t+1}. We divide the interval [0, 255] of color intensity variation into 4, 8, 16, 32 and 64 parts (bins). Three channels give 5^3 = 125 partitions of the color space. The best partition minimizes (1). In every trial variant all histograms must have the same resolutions. Experiments have shown that processing of all variants is not too time consuming. Moreover, one or two texture features could be included as additional "colors" without drastic processing slowdown.
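A compact sketch of this search is given below. It scans the 5^3 = 125 bin partitions with NumPy histograms and scores each with the conventional entropy (2)-(3); the discrete entropy derived in Section 2.2 can be passed in instead. Array layouts and names are our own.

```python
import numpy as np
from itertools import product

def conventional_entropy(hb, hf):
    """Entropy (2) with empirical probabilities (3), evaluated per histogram bin."""
    n = hb + hf
    e = np.zeros_like(n, dtype=float)
    nz = n > 0
    pb = np.divide(hb, n, out=np.zeros(n.shape), where=nz)
    pf = np.divide(hf, n, out=np.zeros(n.shape), where=nz)
    for p in (pb, pf):
        m = p > 0
        e[m] -= p[m] * np.log(p[m])
    return e

def best_resolution(back, fore, nxt, entropy=conventional_entropy):
    """Enumerate bin counts per channel and minimize Eq. (1).
    back, fore, nxt are (N, 3) arrays of R, G, B colors of B_t, F_t, N_{t+1}."""
    best = None
    for bins in product([4, 8, 16, 32, 64], repeat=3):
        edges = [np.linspace(0, 256, nb + 1) for nb in bins]
        hb, _ = np.histogramdd(back, bins=edges)
        hf, _ = np.histogramdd(fore, bins=edges)
        hn, _ = np.histogramdd(nxt, bins=edges)
        uncertainty = float(np.sum(hn * entropy(hb, hf)))
        if best is None or uncertainty < best[0]:
            best = (uncertainty, bins)
    return best   # (minimal uncertainty, (R_bin, G_bin, B_bin))
```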
2.2 Taking Account of the Finiteness of the Sample Number
In practice formula (2) does not work. A simple example illustrates the essence of the problem. Suppose that for some color c the background and foreground histograms of the current frame are populated as follows: H^b_t(c) = 100 and H^f_t(c) = 200. Then a point on the next frame with the color c has background and foreground probabilities 100/(100+200) = 1/3 and 200/(100+200) = 2/3 respectively. A large number of samples guarantees proximity between empirical and actual probabilities. However, when the number of samples is 1 and 2 respectively, the same probabilities can be formally obtained, but cannot be definitely entrusted due to evidently insufficient statistics of observation. As a result, formula (2) enforces the highest histogram resolutions, because they produce less populated bins. Indeed, if there is just one sample of any type in a bin, then entropy (2) equals zero. In this section, we shall derive a new entropy that coincides with (2) if the number of samples is large, and tends to some constant value otherwise. The latter constant is the maximal entropy achieved when p^b = p^f = 1/2. In the new formulation, poor statistics of some color leads to unreliable classification by this color and we accept equiprobable outcomes. Let b = H^b_t(c) and f = H^f_t(c) be the numbers of samples that correspond to some color c. Let x ∈ [0, 1] be the unknown probability that a point with color c belongs to the background, whereas 1 − x is the respective foreground probability. Let
A be the event that b background points and f foreground points of the current frame have the color c. The binomial distribution gives the probability of the event A given x: P(A|x) = C^b_{b+f} x^b (1−x)^f. The mathematical expectation of the entropy subject to A is given by the formula

\bar{E}(A) = \int_0^1 P(x|A)\,\big(-x \log x - (1-x)\log(1-x)\big)\,dx \Big/ \int_0^1 P(x|A)\,dx.
According to the Bayes rule, P(x|A) = P(A|x)P(x)/P(A). Since nothing is a priori known about x, it is natural to assume a uniform distribution, i.e. P(x) ≡ 1, x ∈ [0, 1]. Taking into account that P(A) does not depend on x, after integration we obtain the expectation of the entropy, called henceforth the discrete entropy:

E = \frac{b+1}{b+f+2}\,\big(Z(b+f+2) - Z(b+1)\big) + \frac{f+1}{b+f+2}\,\big(Z(b+f+2) - Z(f+1)\big),    (4)

where Z(n) = \sum_{k=1}^{n} 1/k are the harmonic numbers. It is easy to see that subject to b ≫ 1 and f ≫ 1 formulas (2) and (4) practically coincide, since Z(n) = \sum_{k=1}^{n} 1/k ≈ \int_1^n dx/x = \log(n). However, when b ∼ 1 and f ∼ 1 the behavior is distinct in kind. Figure 2 illustrates the differences. For clarity, we admit real numbers b and f by making use of the continuous digamma function instead of harmonic numbers. Although fractional b and f have no physical sense, such a representation is justified in view of its continuity. As a matter of course, the smaller the number of samples b + f, the more flat the function of the discrete entropy becomes, i.e. uncertainty never decreases too much. On the contrary, as more statistics is accumulated, the closer the conventional and discrete entropies become. Note that in formula (2) the probabilities are calculated as p^b_t = b/(b+f) and p^f_t = f/(b+f).

Fig. 2. Conventional entropy (2) (thin lines) vs. discrete one (4) (thick lines). The entropies are plotted as the functions of real arguments b and f subject to b + f = const.
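For completeness, a direct implementation of the discrete entropy (4) is only a few lines; the harmonic-number form below is for integer counts, and the printed values are what formula (4) yields for the examples discussed above.

```python
import numpy as np

def harmonic(n):
    """Harmonic number Z(n) = sum_{k=1}^{n} 1/k, with Z(0) = 0."""
    return float(np.sum(1.0 / np.arange(1, int(n) + 1))) if n > 0 else 0.0

def discrete_entropy(b, f):
    """Expected classification entropy (4) for integer sample counts b and f."""
    n2 = b + f + 2
    return ((b + 1) / n2) * (harmonic(n2) - harmonic(b + 1)) \
         + ((f + 1) / n2) * (harmonic(n2) - harmonic(f + 1))

# With no samples the expectation under the uniform prior is 0.5; with many
# samples it approaches the conventional entropy (2) (about 0.636 here).
print(discrete_entropy(0, 0))      # 0.5
print(discrete_entropy(100, 200))  # ~0.63
```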
The obtained result is important for choosing the correct histogram resolutions. In fact, the higher the histogram resolutions, the better the background and foreground distributions are separated in the color space. However, resolution enlargement leads to bin shrinkage and the reduction of the number of samples in many bins to several units. Formula (2) implies proximity of the empirical (3) and actual probabilities. The latter is not true due to the limited number of samples. As a consequence, the calculated entropy does not reflect the real situation, benefiting high-resolution histograms, and the tracking process quickly degrades. Discrete
entropy works in a different way. Excessive enlargement of the histogram resolution leads to enlargement of the entropy (uncertainty) of classification, because as the number of bin samples becomes less, the discrete entropy approaches its maximum, see Figure 2. The optimal histogram resolutions, in the case of discrete entropy, are formed as a compromise between the separability of background and foreground distributions in the color space and the uncertainty of classification that arises from the finiteness of the sample number.
2.3 Kernel-Based Formulation
The proposed method can be easily reformulated in terms of the popular kernel-based approach. Let us assume that points in the color space are surrounded by small vicinities, which play the role of histogram bins. Suppose that a point p of the next frame has a color c. The following formulas give the contributions of all background (B_t) and foreground (F_t) points of the current frame to p:

b(c) = \sum_{c' \in B_t} n_b(c')\, K_r(\|c' - c\|),    f(c) = \sum_{c' \in F_t} n_f(c')\, K_r(\|c' - c\|),

where the variable c' enumerates all background and foreground colors respectively, n_b(c') is the number of background points with color c', n_f(c') is the number of foreground points with color c', and K_r is a kernel function of radius r. Since the kernel method is much slower than a histogram one, it is desirable to choose a polynomial kernel function. Imagine two balls of radius r centered at vectors c_1 and c_2 in the color space. The balls intersect if d(c_1, c_2) = \|c_1 - c_2\|_{L_2} \le 2r. The natural choice of kernel function is the ratio between the intersection volume and the ball's volume: K_r(d) = 1 + d^3/(16 r^3) - 3d/(4r) for d \le 2r; K_r(d) = 0 for d > 2r. In contrast to histogram methods, we accumulate fractional quantities (particles) here. Therefore, b and f cannot be directly substituted into the discrete entropy expression (4). Luckily, the function \varphi(a, b) = \ln\big((a + 0.520595)/(b + 0.520595)\big) gives a close estimation of the harmonic number difference Z(a) - Z(b) on the natural numbers and a close approximation of the digamma function between integers. The entropy is minimized by taking in turn all possible values of the kernel radius within some range. In our experiments the range was [2, 63]. Further improvements could be achieved by selecting a separate kernel radius per color channel.
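The kernel and the φ approximation translate directly into code; the sketch below computes the fractional counts b(c) and f(c) for one query color, with (N, 3) color arrays for B_t and F_t as an assumed data layout. Substituting phi for the harmonic-number differences in (4) then extends the discrete entropy to these fractional counts.

```python
import numpy as np

def K_r(d, r):
    """Ball-intersection kernel K_r(d) = 1 + d^3/(16 r^3) - 3d/(4r) for d <= 2r."""
    d = np.asarray(d, dtype=float)
    value = 1.0 + d ** 3 / (16.0 * r ** 3) - 3.0 * d / (4.0 * r)
    return np.where(d <= 2.0 * r, value, 0.0)

def phi(a, b):
    """ln((a + 0.520595) / (b + 0.520595)), the stand-in for Z(a) - Z(b)."""
    return np.log((a + 0.520595) / (b + 0.520595))

def fractional_counts(color, back_colors, fore_colors, r):
    """Kernel-weighted analogues of b(c) and f(c) for a query color."""
    color = np.asarray(color, dtype=float)
    b = K_r(np.linalg.norm(back_colors - color, axis=1), r).sum()
    f = K_r(np.linalg.norm(fore_colors - color, axis=1), r).sum()
    return b, f
```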
2.4 Contour Formation on the Next Frame
Suppose that the optimal histogram resolutions are already obtained by minimization of (1). Taking in turn all points of the set N_{t+1}, we substitute their colors in the histograms H^b_t, H^f_t, and calculate background and foreground probabilities at each point. The rest of the image points, which do not belong to N_{t+1}, receive the default probabilities p^b = p^f = 1/2. In the alternative formulation the optimal kernel is used in a similar way. Making use of formulas (3) we again face the same problem of insufficient statistics. With a view to getting a more realistic estimate of the mathematical expectation of the background and foreground probabilities, we formally repeat the calculus of Section 2.2. Then we obtain

\bar{p}^b(c) = (b+1)/(b+f+2),    \bar{p}^f(c) = 1 - \bar{p}^b(c) = (f+1)/(b+f+2).    (5)
Fig. 3. Left, averaged entropy profiles for "Dance" (dashed), "Stefan" (solid) and "Foreman" (dotted) sequences as the functions of kernel radius. Right, distributions of optimal kernel radius for three test sequences. Diagrams are shown up to scale, kernel radius varies along abscissa.
Fig. 4. Examples of probability maps, prepared for demonstration. The background points are drawn in light gray, while the foreground ones are drawn in dark gray.
Recall that b = H^b_t(c) and f = H^f_t(c), Section 2.2, see also Section 2.3. The last formulas practically coincide with (3) when b ≫ 1 and f ≫ 1, but behave correctly in the case of insufficient statistics of observation. In particular, if b = f = 0, then p^b(c) = p^f(c) = 1/2. Indeed, if there is no information about a color c, then it is natural to consider equiprobable classification outcomes. As a result of the above procedure we obtain a so-called probability map, see Figure 4. Every entry of the probability map keeps the background and foreground probability of a point of the next frame I_{t+1}. Using the predicted contour as the initial estimation, we smoothly adjust the contour over the probability map so that points with high foreground probability (p^f > p^b) mainly occur inside the contour, whereas points with high background probability (p^f < p^b) mainly occur outside of the contour. Synthesis of the final object contour on the next frame is accomplished by means of active contour technology [6, 4, 2], where the contour is represented as a sequence of points restricted by a smoothness constraint. We have also developed a technique that admits topology changes of a contour. Choosing one or another approach is not a matter of principal importance. Suppose that a point v lies on the current estimation of the object's contour on the next frame I_{t+1}. Let n = n(v) be a unit normal at the point v directed outside of the contour. During contour adjustment, the points on each iteration may move strictly along the normal. The following expression gives a force applied at a contour
point [11, 13]:

F = \mathrm{sign}\big(p^f(v) - p^b(v)\big) \cdot \big|p^f(v) - p^b(v)\big|^{1/q} \cdot n,

where p^f(v) and p^b(v) are the foreground and background probabilities, respectively, and the constant q shapes the control function. The choice q = 2 seems to be the best one. The values p^f(v) and p^b(v) are picked from the probability map and bilinearly interpolated if necessary. The obtained force F is further substituted into the active contour equation. It follows from the last formula that an active contour extends locally when p^f > p^b, joining a point to its inner area, and shrinks locally when p^f < p^b, returning a point to the background. A point remains untouched if p^f = p^b. The process of evolution continues until a steady state is achieved.
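The force itself is a one-liner; in the sketch below the probabilities are assumed to have been read from the probability map (with bilinear interpolation) beforehand.

```python
import numpy as np

def contour_force(p_f, p_b, normal, q=2.0):
    """Force on a contour point: positive along the outward normal when the
    foreground probability dominates, negative when the background does."""
    diff = p_f - p_b
    return np.sign(diff) * (abs(diff) ** (1.0 / q)) * np.asarray(normal, dtype=float)

# A point with p_f = 0.6, p_b = 0.4 is pushed outward by about 0.45 |n|.
F = contour_force(0.6, 0.4, normal=[0.0, 1.0])
```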
3 Experimental Results and Conclusion
Four standard videosequences were chosen for experiments “Tennis”, “Dance”, “Stefan” and “Foreman”. The choice was motivated by the following factors: (1) complicated object motions, (2) cluttered background, (3) presence of low-contrast boundaries, (4) sudden appearance of unexpected color spots on sequential frame, see Figure 5. We have applied histogram-based method in the first test (“Tennis” sequence), and kernel-based one in the others. Left picture on Figure 3 shows averaged entropy profiles as the functions of kernel radius. When object is well separated from the background (“Stefan”, “Dance”) the entropy profile is flat near minimum, otherwise we observe prominent extremum (“Foreman”). Next three diagrams on Figure 3 show distributions of optimal kernel radius. Diagrams are scaled for the best visibility. In all experiments the width of point bands, see Figure 1, was about 15 pixels. In contrast to the other methods with many settings, the band width is a single crucial parameter of our approach and this parameter is quite conservative, i.e. the value within the range [10, 15] seems suitable for most cases. We have found that kernel method (Section 2.3) does slightly better than histogram one (Section 2.1). Nevertheless, even non-optimized histogram method provides several fps on typical PC with 5000–15000 points participating in the tracking process, whereas kernel method spends 0.5–4 seconds per frame. Performance can be significantly improved using previously obtained optimal kernel size. It can be seen that the tracking process is relatively stable. Although a contour sometimes locks on color spots of the background, it constantly recovers due to active snake technique. When topology adaptive snake is divided into several separate contours we pick up the largest one and continue the process. Better results could be achieved if the holes and small color spots on the surface of an object are properly processed. This can be done by considering complex curve model, which consists of several separate parts. Experiments confirm that there exist a lot of situations where color tracking alone is unstable because of high intersection of background and foreground distributions. The common shortage of color tracking algorithms is the requirement for principal separability of those distributions. Nevertheless, the shortage does not mean uselessness of such methods. For example, background/foreground probabilities assigned to image points are reliable candidate features for a
Fig. 5. Tracking results on standard sequences, top to bottom: (1) “Tennis”, frames 1 to 67; (2) “Foreman”, frames 1 to 160; (3) “Dance”, frames 1 to 157; (4) “Stefan”, frames 1 to 266. Sequences begin from the top-left corner.
multi-cue approach like the one proposed by Hayman et al. [12]. As it was fairly mentioned by Zhang et al. [6], the next important step would be combination of geometric and photometric methods. However, it is interesting to understand the limits of accuracy achievable by color-based approach alone.
References 1. Comaniciu, D., Ramesh, V., Meer, P.: Kernel-based object tracking. IEEE Trans. Pattern Analysis and Machine Intelligence 25(5), 564–577 (2003) 2. Paragios, N., Deriche, R.: Geodesic Active Contours and Level Sets for the Detection and Tracking of Moving Objects. IEEE Trans. Pattern Analysis and Machine Intelligence 22(3), 266–280 (2000) 3. Nguyen, H.T., Worring, M., Boomgaard, R., Smeulders, A.W.M.: Tracking nonparameterized object contours in video. IEEE Trans. Image Processing 11(9), 1081– 1091 (2002) 4. Zhu, S.C., Yuille, A.: Region Competition: Unifying Snake/balloon, Region Growing and Bayes/MDL/Energy for multi-band Image Segmentation. IEEE Trans. Pattern Analysis and Machine Intelligence 18(9), 884–900 (1996) 5. Avidan, S.: Ensemble Tracking. IEEE Trans. Pattern Analysis and Machine Intelligence 29(2), 261–271 (2007) 6. Zhang, T., Freedman, D.: Improving Performance of Distribution Tracking through Background Mismatch. IEEE Trans. Pattern Analysis and Machine Intelligence 27(2), 282–287 (2005) 7. Perez, P., Hue, C., Vermaak, J., Gangnet, M.: Color-based probabilistic tracking. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) ECCV 2002. LNCS, vol. 2350, pp. 661–675. Springer, Heidelberg (2002) 8. Wu, Y., Huang, T.S.: Nonstationary Color Tracking for Vision-Based HumanComputer Interaction. IEEE Trans. Neural Networks 13(4), 948–960 (2002) 9. McKenna, S.J., Raja, Y., Gong, S.: Tracking colour objects using adaptive mixture models. Image and Vision Computing 17(3-4), 225–231 (1999) 10. Yilmaz, A., Li, X., Shah, M.: Contour-Based Object Tracking with Occlusion Handling in Video Acquired Using Mobile Cameras. IEEE Trans. Pattern Analysis and Machine Intelligence 26(11), 1531–1536 (2004) 11. Abd-Almageed, W., Smith, C.E.: Active Deformable Models Using Density Estimation. International Journal of Image and Graphics 4(4), 343–361 (2004) 12. Hayman, E., Eklundh, J.O.: Probabilistic and Voting Approaches to Cue Integration for Figure-Ground Segmentation. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) ECCV 2002. LNCS, vol. 2352, pp. 469–486. Springer, Heidelberg (2002) 13. Ivins, J., Porrill, J.: Constrained Active Region Models for Fast Tracking in Color Image Sequences. Computer Vision and Image Understanding 72(1), 54–71 (1998) 14. Nummiaro, K., Koller-Meier, E., Van Gool, L.: Object Tracking with an Adaptive Color-Based Particle Filter. In: Van Gool, L. (ed.) Pattern Recognition. LNCS, vol. 2449, pp. 353–360. Springer, Heidelberg (2002) 15. Akhriev, A., Kim, C.Y.: Contour Tracking without Prior Information. In: Proc. Int. Conf. on Digital Image Computing (DICTA 2003), Sydney, pp. 379–388 (2003)
Detection of a Speaker in Video by Combined Analysis of Speech Sound and Mouth Movement Osamu Ikeda Faculty of Engineering, Takushoku University, 815-1 Tate, Hachioji, Tokyo, 193-0985 Japan
Abstract. We present a robust method to detect and locate a speaker using a joint analysis of speech sound and video image. First, the short speech sound data is analyzed to estimate the rate of spoken syllables, and a difference image is formed using the optimal frame distance derived from the rate to detect the candidates of mouth. Then, they are tracked to positively prove that one of the candidates is the mouth; the rate of mouth movements is estimated from the brightness change profiles for the first candidate and, if both the rates agree, the three brightest parts are detected in the resulting difference image as mouth and eyes. If not, the second candidate is tracked and so on. The first-order moment of the power spectrum of the brightness change profile and the lateral shifts in the tracking are also used to check whether or not they are facial parts.
1 Introduction Research on face detection or recognition has extensively been made in recent years in such fields as image processing and computer vision [1], [2]. For example, Rowley, Baluja and Kanade proposed a neural network-based algorithm [3]. Schneiderman and Kanade developed a Naïve Bayes classifier [4]. Osuna, Freund and Girosi presented an algorithm to train Support Vector Machines [5]. And Turk and Pentland proposed to use eigenfaces [6]. Those algorithms, however, are not so fast. Viola and Jones reported a rapid object detection method [7]. Fröba, Ernst, and Küblbeck presented a real-time face detection system [8] using AdaBoost [9]. In these real-time methods the detection rates are not reported explicitly. In the field of multimedia, on the other hand, sound and texts as well as images have been used to better understand the semantic meanings of multimedia documents [10], [11]. Several improvements have been reported to enhance the accuracy of face segmentation for a short face video. They combine temporal segmentation or tracking with spatial segmentation [12], or they adopt manual segmentation [13] as a last resort. In an emerging field of surveillance, Wang and Kankanhalli detect faces based on the dynamically changing number of attention samples [14], where AdaBoost is used for face detection, and cues of movement, hues and speech are used to adaptively correct the samples. As for speech, its existence alone is their interest and no analysis is made for it. In this paper, we present a combined analysis of speech sound and video images to detect and locate a speaker. For each scene, a short speech sound is analyzed to estimate the rate of spoken syllables, and a difference image is formed using the G. Bebis et al. (Eds.): ISVC 2007, Part II, LNCS 4842, pp. 602–610, 2007. © Springer-Verlag Berlin Heidelberg 2007
optimal frame distance derived from the rate to detect candidates of the mouth. Each candidate is tracked to estimate the rate of mouth movements from the average brightness change profile, and whether or not both the rates agree is checked. If so, the three brightest parts are detected in the resulting difference image as mouth and eyes. The first-order moment of the power spectrum of the brightness change profile and the lateral shifts in the tracking are also checked to prove that they are mouth and eyes.
2 Combined Sound and Video Analysis

For each new scene, the rate of spoken syllables is estimated from the first several seconds of the video, based on the observation that each spoken syllable forms a wavelet in the sound signal. Letting s(t) be the sound signal, its envelope is detected as

e(t) = Env{s(t)}    (1)

where the maximal amplitude among 512 sequential samples is regarded as the envelope for a 48 kHz sampling frequency. The envelope is then moving-averaged,

e_s(t) = MA{e(t)}    (2)

where 512 sequential samples are averaged. Then e(t) is compared with e_s(t) to binarize the former:

w_s(t) = 1 if e(t) > e_s(t), and 0 otherwise.    (3)

A wavelet is required to have a variation of more than Max{e(t)}/C_ev to be significant:

w_s^i(t) = w_s(t) if Var_{t∈w_s}{e(t)} > Max{e(t)}/C_ev, and 0 otherwise,    (4)

where i = 1, 2, ..., N_w. The coefficient C_ev may be made larger for a higher signal-to-noise ratio; 100 is used in our experiments. Letting T be the time duration, the rate of spoken syllables is given by

R_w = N_w / T    (5)

The optimal frame distance for forming the difference image is given by

m_fd = 15 / R_w    (6)
based on the assumption that the frame rate is 30 frames per second; this allows for mouth movements up to twice as fast as the average. The difference image is given by

i_d(x, y; m_fd) = Σ_c Σ_k | i(x, y, c, k·m_fd) − i(x, y, c, (k+1)·m_fd) |    (7)

where i(x, y, c, n) is the image brightness at (x, y) in the n-th frame for color component c, and the normalization factor is omitted. It is then averaged over a sub-window of N_sw by N_sw pixels to reduce the effects of noise:

i_ds(x, y; m_fd) = E{ i_d(x, y; m_fd) }    (8)

N_sw is around 10 for the 304 by 232 pixel images used in the experiments. The difference image may reveal the facial parts of mouth and eyes, but two problems remain to be solved. First, we have to positively prove that the extracted parts are those facial parts. Second, if they are not, we have to do additional processing to find them.
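To make the sound and video analysis above concrete, the following is a minimal Python sketch of Eqs. (1)-(8), not the author's implementation: the moving-average window length, the wavelet bookkeeping and the use of absolute frame differences are assumptions made purely for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def syllable_rate(s, fs=48000, block=512, c_ev=100, ma_win=16):
    """Estimate the rate of spoken syllables R_w, Eqs. (1)-(5).

    s: mono sound signal; ma_win is an assumed smoothing window
    (the text only states that sequential samples are averaged).
    """
    n_blocks = len(s) // block
    frames = np.abs(np.asarray(s, float)[:n_blocks * block]).reshape(n_blocks, block)
    e = frames.max(axis=1)                                  # envelope, Eq. (1)
    e_s = np.convolve(e, np.ones(ma_win) / ma_win, 'same')  # moving average, Eq. (2)
    w_s = (e > e_s).astype(int)                             # binary profile, Eq. (3)
    edges = np.flatnonzero(np.diff(np.r_[0, w_s, 0]))       # start/end of each run of ones
    thr = e.max() / c_ev                                    # significance test of Eq. (4)
    n_w = sum(np.ptp(e[a:b]) > thr for a, b in zip(edges[::2], edges[1::2]))
    return n_w / (len(s) / fs)                              # R_w, Eq. (5)

def difference_image(frames, r_w, n_sw=10):
    """Accumulated difference image of Eqs. (6)-(8); frames: (N, H, W, C) floats."""
    m_fd = max(1, int(round(15.0 / r_w)))                   # optimal frame distance, Eq. (6)
    sub = frames[::m_fd]
    i_d = np.abs(np.diff(sub, axis=0)).sum(axis=(0, 3))     # Eq. (7), absolute differences assumed
    return uniform_filter(i_d, size=n_sw)                   # N_sw x N_sw averaging, Eq. (8)
```

The three brightest values of the returned map would then be taken as the mouth and eye candidates discussed next.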
Fig. 1. (a) Three brightest parts in a difference image of Eq. (7), B1, B2 and B3, as the candidates of the mouth. (b)-(d) grey regions where eyes are supposed to be, and the three brightest parts in the difference images as the result of tracking for B1, B2 and B3, respectively.
Assuming that the brightest part B1 in Fig. 1 represents the mouth, we set the surrounding 3N_sw by 2N_sw area as the mouth region R_mouth. Then, the region is tracked by laterally shifting the next frame image, which is repeated consecutively following Eq. (9):

(u_n, v_n) = argmin_{(u,v)} Σ_c Σ_{(x,y)∈R_mouth} | i(x+u, y+v, c, n) − i(x, y, c, n−1) |    (9)

where n = 2, 3, .... Using the resulting shifts, the average brightness change profile of the region is obtained as

Δi_m(n) = Σ_c Σ_{(x,y)∈R_mouth} Δi_m(x, y, c, n, 1)    (10)

where

Δi_m(x, y, c, n, dn) = | i(x + Σ_{n'=2}^{n} u_{n'}, y + Σ_{n'=2}^{n} v_{n'}, c, n) − i(x + Σ_{n'=2}^{n−dn} u_{n'}, y + Σ_{n'=2}^{n−dn} v_{n'}, c, n−dn) |    (11)

The difference image can be formed similarly to Eq. (7) as

i_dm(x, y; m_fdm) = Σ_c Σ_k Δi_m(x, y, c, k·m_fdm, m_fdm)    (12)
where m_fdm is the frame distance. It is safer to use the same frame distance as m_fd, but a unity value can often be used as well. The profile Δi_m(n) is then moving-averaged over 90/R_w frames to obtain

Δi_m,s(n) = MA{Δi_m(n)}    (13)

and the profile is made binary as

w_m(t) = 1 if Δi_m(t) > Δi_m,s(t), and 0 otherwise,    (14)

where m = 1, 2, ..., N_m. The rate of mouth movement is given by

R_m = N_m / T    (15)

If the two numbers, N_w and N_m, agree, this means that the speech sound comes from the mouth candidate. Then, we can detect the face using the three brightest parts in the resulting difference image. If the two numbers do not agree, the same process is repeated for the next brightest candidate B2 and then for B3. The three brightest parts in the grey region shown in Fig. 1 must satisfy the following criteria in addition to general conditions for faces [15]: (i) N_w and N_m agree, and the three N_m's for the three brightest parts are similar; (ii) the lateral shift patterns are similar for the three parts; (iii) the moment defined below has a significant value less than unity for each of the three parts:

m_m = Σ_{n=0}^{N/2} n |I(n)|² / Σ_{n=0}^{N/2} |I(n)|²    (16)

where N is equal to 30T, and I(n) is the discrete Fourier transform of the brightness change profile,

I(n) = Σ_{k=0}^{N−1} Δi_m(k) exp(j 2πkn/N).    (17)
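Two of the steps above lend themselves to a short sketch, shown below under stated assumptions: frames are float arrays of shape (H, W, C), the exhaustive ±radius search range in the block matching of Eq. (9) is our choice, and the profile is assumed to be sampled at 30 fps. This is illustrative only, not the author's code.

```python
import numpy as np

def track_shift(prev, cur, region, radius=8):
    """One block-matching step of Eq. (9): SAD over the mouth region R_mouth."""
    x0, y0, w, h = region
    ref = prev[y0:y0 + h, x0:x0 + w]
    best_cost, best_uv = np.inf, (0, 0)
    for v in range(-radius, radius + 1):
        for u in range(-radius, radius + 1):
            if (y0 + v < 0 or x0 + u < 0 or
                    y0 + v + h > cur.shape[0] or x0 + u + w > cur.shape[1]):
                continue                              # shifted window would leave the image
            cand = cur[y0 + v:y0 + v + h, x0 + u:x0 + u + w]
            cost = np.abs(cand - ref).sum()           # sum over pixels and color channels
            if cost < best_cost:
                best_cost, best_uv = cost, (u, v)
    return best_uv                                    # (u_n, v_n)

def spectral_moment(profile):
    """First-order moment of the power spectrum, Eqs. (16)-(17)."""
    power = np.abs(np.fft.fft(profile)) ** 2          # |I(n)|^2
    half = len(profile) // 2
    n = np.arange(half + 1)
    return float((n * power[:half + 1]).sum() / power[:half + 1].sum())
```

A facial part is expected to give a moment well below unity, whereas a background region swept by moving objects (such as the shirt collar of Video 9 discussed below) gives a much larger value.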
3 Experiments

Seventeen short videos from news programs of ABC, CNN and BBC were used in the experiments. Figure 2 shows that spoken syllables form separate wavelets, so that their number can be counted after binarizing the waveform with its moving average. The one-to-one correspondence, however, does not always hold; for example, one syllable may comprise two wavelets, or two syllables may result in one wavelet. Results for the 17 videos are summarized in Table 1, showing that the number of syllables can be estimated more accurately for longer speech data. Figure 3 shows the effect of the optimal frame distance in forming the difference image on facial part detection, where three on the lateral axis means that the three brightest parts are the mouth and two eyes, two means that the two brightest parts are the mouth and an eye but the third brightest part is not the other eye, and one means that the brightest part is the mouth but the next two brightest parts are not eyes. If the brightest part is neither mouth nor eye, the detection is regarded as a failure. It is seen that use of the optimal frame distance leads to successful results for 16 out of 17 videos; the only failure is for Video 9. The following three specific cases are worth describing. Figure 4 shows three cases where the unity frame distance does not work but the optimal frame distance reveals the facial parts in the difference image. One
Fig. 2. Speech waveform, spoken syllables, envelope, and its binary profile for part of Video 4
Fig. 3. Successful detection rates of mouth and eyes for 17 videos for difference images of unity and optimal frame distance
Fig. 4. Effects of the optimal frame distance for Videos 3, 5 and 8 of Table 1. Center: difference images of one frame distance; and right: those of the optimal frame distance.
Table 1. Results for 17 video clips, where cnn(f1) means cnn's 1st female speaker and abc(m2) abc's 2nd male speaker, for instance; T is the clip duration, N_s the number of spoken syllables, N_w the number of detected speech wavelets, and N_m the number of detected mouth moves

   Speaker     T[s]   N_s   N_s/T   N_w   N_w/T   m_fd   N_w/N_s   N_m   N_m/N_s
 1 cnn (f1)    2.24    12   5.37     16   7.16    2.1    1.33       13   1.08
 2 bbc (m1)    2.97    12   4.04     11   3.70    4.1    0.92       15   1.25
 3 cnn (f2)    3.60    17   4.72     15   4.17    3.6    0.88       13   0.76
 4 abc (f1)    4.00    17   4.25     18   4.50    3.3    1.06       21   1.24
 5 cnn (f3)    4.00    15   3.75     18   4.50    3.3    1.20       17   1.13
 6 cnn (f4)    4.04    25   6.19     20   4.95    3.0    0.80       26   1.04
 7 abc (m1)    4.17    17   4.08     19   4.56    3.3    1.12       21   1.24
 8 cnn (m1)    4.20    23   5.47     21   5.00    3.0    0.91       23   1.00
 9 abc (m2)    4.50    23   5.11     23   5.11    2.9    1.00       23   1.00
10 cnn (f5)    5.61    29   5.17     26   4.64    3.2    0.90       31   1.07
11 cnn (f6)    5.80    26   4.48     30   5.17    2.9    1.15       26   1.00
12 abc (m3)    6.81    34   5.00     36   5.29    2.8    1.06       34   1.00
13 abc (f2)   13.48    57   4.23     52   3.86    3.9    0.91       57   1.00
14 cnn (f7)   13.95    78   5.59     78   5.59    2.7    1.00       74   0.95
15 abc (m4)   17.68    80   4.52     81   4.58    3.3    1.01       79   0.99
16 abc (m5)   19.20    96   5.00     95   4.95    3.0    0.99       95   0.99
17 cnn (m2)   19.99   103   5.15    104   5.20    2.9    1.01      102   0.99
might think that the result for Video 3 is not successful due to the somewhat inappropriate positions of the three brightest parts. The tracking results of the three facial parts in Fig. 5, however, show that their lateral moves are very similar, although the brightness change profiles differ between her mouth and eyes. If we check the frames, we can see that the camera moves leftward, as shown in Fig. 6, in agreement with the lateral shifts in Fig. 5.

In the case of Video 7, the first, second and third brightest parts are the mouth, the left eye, and the shirt, as shown in Fig. 7. The tracking results show that the mouth and the eye move in a similar way, but that the shirt part does not move at all. The tracking results for the estimated right eye region are very similar to those for the left eye region, so the detection of the mouth and a single eye can be regarded as successful enough.

In the case of Video 9, two cars are running behind the reporter, taking 20 and 19 frames, or 0.67 and 0.62 sec, respectively. The image brightness changes caused by them are insignificant for the unity frame difference but serious for the optimal distance of 2.9. As a result, B1 in the difference image of the optimal frame distance in Fig. 8(a) represents a part of his shirt's collar close to the boundary with the background scene where the cars are running. The tracking pattern for B1 in Fig. 9 is unusual for facial parts, and the result in Table 2 shows that the first-order moment value of the collar is too large for a facial part. Tracking B2 reveals the three facial parts, as shown in Fig. 8(c). It is seen that the mouth and right eye correspond to B2 and B4, respectively, that their moment values in Table 2 are appropriate, and that their brightness change profiles in Fig. 9 are similar.
Fig. 5. Tracking is made for each of the three brightest parts shown on top-right in Fig. 4. From top, the difference image with the red tracking region, the brightness change profile, its binary pattern, and the lateral shifts of the tracking region.
Fig. 6. 88th, 93rd, 98th, 103rd, and 108th frames, showing that the camera moves leftward
Fig. 7. For Video 7, the third brightest image part is on his shirt, as marked white. The lateral shifts of the mouth and left eye are similar, but there are no shifts for the shirt part.
Fig. 8. (a) Four brightest parts in the difference image of the optimal frame distance for Video 9, (b) – (d) three brightest parts of the difference images obtained as the results of the tracking for B1, B2 and B3, respectively
Fig. 9. Tracking results for B1 (left), B2 (center) and B4 (right)

Table 2. Values of N_m and the moment m_m for Video 9

Image Part    N_m   m_m
B1             22   3.38
B2 (mouth)     23   0.51
B3             19   1.82
B4 (eye)       17   0.38
The numbers of mouth moves obtained by the tracking are summarized in Table 1. It is seen that the accuracy is better for longer video clips and that it is slightly better than that of the speech-based estimate.
4 Conclusions We presented a method to detect and locate a speaker using a combined analysis of speech sound and video image. The optimal frame distance is estimated from the
short speech sound data, and a difference image is formed to detect the candidates of mouth and eyes. They are tracked to positively prove what and where they are. The method may be applied to locate a speaker among persons appearing in the field of view.
References 1. Chellappa, R., Wilson, C.L., Sirohey, S.: Human and Machine Recognition of Faces: A Survey. Proc. IEEE 83, 705–740 (1995) 2. Yang, M.-H., Kriegman, D.J., Ahuja, N.: Detecting Faces in Images: A Survey. IEEE Trans. PAMI 24, 34–58 (2002) 3. Rowley, H.A., Baluja, S., Kanade, T.: Neural Network-Based Face Detection. IEEE Trans. PAMI 20, 23–38 (1998) 4. Schneiderman, H., Kanade, T.: Probabilistic Modeling of Local Appearance and Spatial Relationships for Object Recognition. CVPR, 45–51 (1998) 5. Osuna, E., Freund, R., Girosi, G.: Training Support Vector Machines: An Application to Face Detection. In: CVPR, pp. 130–136 (1997) 6. Turk, M.A., Pentland, A.P.: Eigenfaces for pattern recognition. J. Cognitive Neuroscience 3, 71–96 (1991) 7. Viola, P., Jones, M.: Rapid Object Detection Using a Boosted Cascade of Simple Features. CVPR 1, 511–518 (2001) 8. Fröba, B., Ernst, A., Küblbeck, C.: Real-Time Face Detection. In: Proc. 4 th IASTED Signal and Image Processing, pp. 497–502 (2002) 9. Freund, Y., Schapire, R.E.: A Short Introduction to Boosting. J. Jpn. Soc. Artificial Intelligence 14, 771–780 (1999) 10. Wang, Y., Liu, Z., Huang, J.: Multimedia Content Analysis. IEEE Signal Processing Magazine 17, 12–36 (2000) 11. Satoh, S., Nakamura, Y., Kanade, T.: Name-It: Naming and Detecting Faces in News Videos. IEEE Multimedia 6, 22–35 (1999) 12. Wang, D.: Unsupervised Video Segmentation Based on Watersheds and Temporal Tracking. IEEE Trans. Circuits & Systems for Video Tech. 8, 539–545 (1998) 13. Toklu, C., Tekalp, A.M., Erdem, A.T.: Simultaneous Alpha Map Generation and 2D Mesh Tracking for Multimedia Applications. ICIP 1, 113–116 (1997) 14. Wang, J., Kankanhalli, M.S.: Experience based Sampling Technique for Multimedia Analysis. Proc. ACM Multimedia, 319–322 (2003) 15. Ikeda, O.: Segmentation of Faces in Video Footage Using HSV Color for Face Detection and Image Retrieval. ICIP 3, 913–916 (2003)
Extraction of Cartographic Features from a High Resolution Satellite Image José A. Malpica, Juan B. Mena, and Francisco J. González-Matesanz Mathematics Department. Alcalá University. Madrid. Spain {josea.malpica,juan.mena}@uah.es
[email protected]
Abstract. This paper deals with how to correct distortions in high resolution satellite images and optimize cartographic feature extraction. It is shown, for an Ikonos satellite image, that subpixel accuracy can be obtained using rational functions with only a few accurate ground control points. These control points are taken as the centres of road roundabouts, tennis courts, swimming pools and other cartographic features, computed by least squares for greater precision in ground-image matching. The radiometric quality is also studied, in order to examine the Ikonos visualization capability and consequently its potential for map updating.
1 Introduction

Ikonos, launched in September 1999, was the first commercial high-resolution satellite. From an average altitude of 681 km, it takes images with 1 m resolution in the panchromatic band and 4 m in the multispectral bands. The revisit time is approximately three days. Two years later, DigitalGlobe launched Quickbird, another commercial high-resolution satellite; from an average altitude of 450 km, Quickbird takes images with a spatial resolution of 0.61 m in the panchromatic and 2.44 m in the multispectral. OrbImage put OrbView into orbit in 2003, with similar characteristics to Ikonos. New high-resolution satellites are planned for the future [1], [2]; the highest commercial resolution to date, 0.41 m for the panchromatic band and 1.64 m for the multispectral, is scheduled to be launched by the end of 2007. The economics of spacecraft systems are changing dramatically, with costs falling by at least an order of magnitude to the millions of dollars per year; due to increased competition and more advanced technology, a fall in prices and faster delivery of satellite imagery can be expected in the near future.

As time goes by, it will become more necessary to update semi-urban area layers for geographic information system (GIS) applications. It is a fact that, most of the time, the areas to be updated are flat or close to flat terrain. These images are useful in several fields such as agriculture, digital elevation model construction, natural disaster analysis, etc. In this paper, we concentrate on cartographic applications. More specifically, we are interested in addressing the following questions: Are Ikonos images useful for cartographic purposes in small urban or semi-urban areas? What are the largest scales that can be expected from these images?
An orthophoto is an aerial or space image that has been corrected to take into account the irregularities of the terrain and the deformation of the camera or sensor, so that the image can be used as a map on which distances can be measured. The need for such orthorectified imagery as backdrops for GIS and mapping applications gives some insight into the importance of studying deformations in aerial and space images [3]. Distortions can occur in a satellite image for different reasons: angular tilt, the earth's curvature, the earth's rotation, atmospheric refraction, and so forth. A recent paper by G.A. Guienko [4] outlines the main sources of geometric distortion occurring in high-resolution satellite images, so we shall not elaborate here. Since some satellite companies do not provide the physical sensor model with the image data – or because this is unavailable for other reasons – rational functions and other transformations can be valuable alternatives for performing the necessary corrections. Some authors have already dealt with Ikonos orthorectification for cartographic purposes [5], [6], the most important result of their studies being that Ikonos is a cost-effective source of orthoimagery suitable for use as digital image basemaps in local government GIS applications. The former obtained errors of about 2-3 meters, while the latter obtained 1 meter or less. Several parameters can affect the results: the type of terrain (flat or rough), the accuracy of the GCPs, the atmospheric conditions when the image was taken, etc. In this paper we study the cartographic potential of Ikonos Geo imagery for a small flat area; we analyse how, with the least expensive product (i.e., Ikonos Geo imagery), the best possible accuracy can be obtained. The rapid changes to semi-urban areas have generated a great demand for updating this type of cartography. Most of the time, the areas examined are not very large, are relatively flat and have sufficient ground control points (GCPs). In this case, the acquisition of Ikonos Geo, or other similar high-resolution satellite imagery, could be a better option (in terms of a tighter budget) than an aerial survey.
2 Georeferencing of Digital Imagery

Two steps can be differentiated in the operation of georeferencing an image. First, a two-dimensional coordinate transformation connects the digital image to the ground system. Second, an interpolation of the brightness values from the image to the grid of the ground system is made. The classic technique in remote sensing has been to use polynomial transformations. In the first step, a number of GCPs that can be identified in the image are selected. One of the simplest transformations (which is also a specific case of a polynomial transformation) is:

x = a_0 + a_1 X + a_2 Y
y = b_0 + b_1 X + b_2 Y    (1)

where (x, y) and (X, Y) represent image coordinates and ground coordinates respectively; the coefficients a_i, b_i (i = 0, 1, 2) must be calculated. Three GCPs (which are not collinear) are sufficient to determine the coefficients.
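As an illustration of Eq. (1), a minimal least-squares fit of the six affine coefficients from ground control points is sketched below; the function and variable names are ours and not part of any cited software package.

```python
import numpy as np

def fit_affine(ground, image):
    """Fit Eq. (1) by least squares.

    ground: (N, 2) array of (X, Y) map coordinates of the GCPs
    image:  (N, 2) array of the corresponding (x, y) pixel coordinates
    Returns (a0, a1, a2) and (b0, b1, b2); N >= 3 non-collinear points are needed.
    """
    A = np.column_stack([np.ones(len(ground)), ground[:, 0], ground[:, 1]])
    a, *_ = np.linalg.lstsq(A, image[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(A, image[:, 1], rcond=None)
    return a, b

def apply_affine(a, b, X, Y):
    """Map ground coordinates (X, Y) to image coordinates (x, y) with Eq. (1)."""
    return a[0] + a[1] * X + a[2] * Y, b[0] + b[1] * X + b[2] * Y
```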
Equation (1) is an affine transformation; it is also a linear model, and it preserves both straight lines and parallelism.

2.1 Rigorous Model

Obtaining three-dimensional information from satellite images acquired with CCD cameras requires mathematical models that are different from those of classic photogrammetric methods. In order to build a mathematical model for a CCD pushbroom camera, one must bear in mind several factors: the earth rotates with respect to the orbital plane of the satellite, the satellite has its own rotation, and the satellite moves around its orbit with a speed that is not constant, according to Kepler's laws. There is no closed-form formula for pinpointing the exact position of the satellite in its orbit at a specific time; calculations are iterative and are done through procedures such as the Newtonian formula. In [7], a complete model was produced in which the sensor and orbit parameters were known, and the authors concluded that it was possible to obtain a geometric precision of one-third of a pixel with stereo satellite images of medium resolution of the SPOT or MOMS-02 type. A simplified model is offered by R. Gupta and R. I. Hartley [8], who reduced the number of calculations while maintaining high precision. In general, sensor and orbit parameters are not provided by commercial vendors – at least not in the case of Ikonos imagery – and therefore the user cannot make geometric corrections by way of the rigorous mathematical models mentioned above (neither the complete nor the simplified one). An alternative is the rational functions model.

2.2 Rational Functions

The Rational Functions (RF) model is the closest to the pair of equations that can be obtained through the collinearity equations in the rigorous model, if the sensor parameters are available. Derived from the RF is the Universal Sensor Model, an extension of the RF-based model developed by the Open GIS Consortium [9]. The equation for the RFs is given by:
x = p(X, Y, Z) / q(X, Y, Z),   y = r(X, Y, Z) / s(X, Y, Z)    (2)

where x and y represent the pixel coordinates in the image and X, Y and Z the terrain coordinates in the WGS84 system; p, q, r and s are polynomials of up to the third degree. The degree of the polynomials depends on the number of control points on the terrain. With polynomials of the first degree, it is necessary to determine 12 coefficients, and it is therefore necessary to have more than 12 control points and apply the least squares technique. If the third degree is used, it is necessary to calculate 76 coefficients [10].
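The sketch below evaluates a first-degree instance of Eq. (2); fixing the constant terms of the denominators to 1 is our assumption (a common normalization before least-squares fitting), and the coefficient layout is illustrative only, since the exact parameterization depends on the convention adopted.

```python
import numpy as np

def rf_project(num_x, den_x, num_y, den_y, X, Y, Z):
    """First-degree rational functions, Eq. (2).

    Each of num_x, den_x, num_y, den_y is a length-4 coefficient vector for the
    monomials 1, X, Y, Z; den_x[0] and den_y[0] are normally fixed to 1.
    """
    basis = np.array([1.0, X, Y, Z])
    x = np.dot(num_x, basis) / np.dot(den_x, basis)
    y = np.dot(num_y, basis) / np.dot(den_y, basis)
    return x, y
```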
The models can thus be grouped into rigorous models and rational function models, the latter being either independent of the terrain or dependent on the terrain.
The coefficients can be calculated from the rigorous model when the sensor parameters are known, or from control points on the terrain. When the coefficients are supplied by the software package and calculated from the true sensor parameters, the model is said to be independent of the terrain; when the coefficients are calculated from control points on the terrain, it is considered dependent on the terrain. In [11], tests were reported to determine the precision of RF coefficients with SPOT and aerial imagery in the terrain-independent case. The main conclusion was that the differences between the rigorous model and the terrain-independent RF model are negligible. There were also tests for the terrain-dependent case. The authors in [12] showed that the technique for performing geometric corrections in SPOT and aerial images according to the terrain-independent model is quite good, but they also highlight the importance of distributing the control points well in order to obtain good results. The same authors, in another work [13], proposed a computational model that improves the stability of the RF method. In [14], a pair of stereoscopic Ikonos Geo images of San Diego was used, and the authors conclude that by using 140 control points it is possible to obtain a precision of 1 m in planimetry and 2 m in height. The results obtained by Fraser et al. [3] are particularly interesting, because they achieve 0.3-0.6 m planimetric precision and 0.5-0.9 m height precision with Ikonos Geo images, using only three to six control points and minimal information about the sensor. The model they use is a little different from the RFs; first, they apply a DLT transformation:
x = (a_1 X + a_2 Y + a_3 Z + a_4) / (a_5 X + a_6 Y + a_7 Z + 1)
y = (a_8 X + a_9 Y + a_10 Z + a_11) / (a_5 X + a_6 Y + a_7 Z + 1)    (3)
and then an affine transformation, such as in (1). Although there are 19 coefficients to be calculated, Fraser et al. manage to reduce their number thanks to the azimuth and elevation that come with the metadata file of the Geo images. Toutin and Cheng [15] have developed another model for RFs, using information from the metadata of the Geo images. This model is proprietary and is included in
Fig. 1. - Panchromatic Ikonos image area (2 km by 2 km), with distribution of GCP (red)
PCI's OrthoEngine software package; this model has been applied to Quickbird [16] with the following results: 4.0 m RMS in x and 2.1 m RMS in y when using RFs, versus 1.4 m RMS in x and 1.3 m RMS in y when applying the Toutin model.
3 Case Study

A case study with an Ikonos Geo image of the university campus in Alcalá, Spain, has been carried out. This is an area of relatively flat terrain (15 m maximum elevation difference), covering 4 km2. First, a GPS survey of the whole area was conducted (see Figure 1), with two double-frequency GPS receivers with a precision of σ = ±10 cm. Points around road roundabouts were taken, and the centres of these geometric figures were obtained by least squares. Other geometric figures were also used, such as rectangles for tennis courts or swimming pools. Similar calculations were performed
on the image: coordinates of pixels around each road roundabout were taken and least squares was used to calculate the centres; the ground-image correspondence was then established by matching the calculated centres on the ground and in the image. We measured the accuracy of the Ikonos image for the study site of the Alcalá University campus before any processing was done. Geo imagery undergoes some corrections by Space Imaging in order to supply UTM coordinates with the image. Initially, the UTM coordinates provided by Space Imaging were compared with the UTM coordinates taken with the GPS on the ground for the 35 GCPs, and the errors were within the range predicted by this company. For the Alcalá site, errors in the x direction had an average of 14 meters (maximum 18 m, minimum 5 m), and errors in the y direction had an average of 3 meters (maximum 4 m, minimum 2 m). The former presented some systematic characteristics, probably related to both ground elevation and the fact that the image was taken away from the orbit footprint; no such systematic characteristics were observed in the latter. Equations (4) for terrain elevation corrections (see Helder et al. [17] for a graphical derivation) explain why the displacements in x are greater than in y in the data provided by Space Imaging:
Δx = ΔH sin A / tan e
Δy = ΔH cos A / tan e    (4)

where ΔH is the difference between the actual terrain and the reference ellipsoid, and A and e are the azimuth and elevation, respectively, of the satellite when the image was taken. A and e are provided in the metadata file; in our case A = 264.7799º. Observe that in Eq. (4) the sine is almost 1 in absolute value, producing greater displacements in the x (easting) direction. In any case, the observed inaccuracies for the university campus are well inside the range stated by Space Imaging, which for a whole Ikonos Geo scene is, in general, 50 m with a 90% level of confidence.

Points in the image were chosen according to the following criteria:

• Individually, they are as distinguishable from the surrounding pixels as possible
• Collectively, they are distributed evenly over the whole area

In high-resolution satellite images, it is difficult to determine an exact position in the image [18]. Therefore, we have sought a 20-cm precision on the ground but only a 50 to 60-cm precision in the image. If you refer to Figure 2, you will note several points which, even though they are clear candidates for being GCPs, make it difficult to pinpoint the exact position in the image that corresponds to a place on the ground (e.g., the position of the corner of the "H" on the heliport (a) and the corner of the pavement (b)). To augment the image point accuracies, we took the lead of Fraser et al. [3] by taking points around roundabouts with GPS receivers; the central points of the ellipses corresponding to the roundabouts were calculated by least squares. Similar calculations were performed in the image, with the coordinates of the pixels around each roundabout being taken and the centres calculated. The centres on the ground and in the image were then matched. Other geometric figures were also used in the process – such as rectangles for tennis courts or swimming pools – and the centres of these geometric figures were also used as GCPs. Figure 1 illustrates the distribution of the GCPs.
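Purely as a numerical illustration of the relief displacement of Eq. (4), using the azimuth and elevation quoted in the text (A = 264.7799º, e = 65.32514º); the 10 m height difference is an arbitrary example value.

```python
import math

def relief_displacement(delta_h, azimuth_deg=264.7799, elev_deg=65.32514):
    """Eq. (4): planimetric shift caused by a height difference delta_h (metres)."""
    a, e = math.radians(azimuth_deg), math.radians(elev_deg)
    return delta_h * math.sin(a) / math.tan(e), delta_h * math.cos(a) / math.tan(e)

# relief_displacement(10.0) gives roughly (-4.6, -0.4) m: the easting (x)
# displacement dominates because |sin A| is close to 1, as noted above.
```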
Several rectifying models were tried, including polynomial, spline and RF models. Of these three, RFs are preferable to simple polynomial or spline models because RFs consider elevation. For our case study, which did not use sensor parameters, it should be noted that the image had already undergone some geometric corrections applied by Space Imaging – the nature of which is unknown to the authors – and therefore the use of RFs is almost mandatory.
Fig. 2. Examples of GCPs: a) aerial image showing the university hospital heliport; a GCP has been taken at the corner of the "H," but it is difficult to obtain the same point in the Ikonos image, as seen on the right (image b). Images c) and d) similarly show the low radiometric quality of Ikonos in comparison to the aerial image, which has 10-cm pixels. The long lines of the pavement should be used in the Ikonos image (d) to help precision: the corner should be defined as the intersection of the two long straight lines of the pavement.

Table 1. Rational Functions for two sets of coefficients (RMS and residual values in metres)

Coef   GCPs   Checks   x-RMS (GCPs)   y-RMS (GCPs)   x-RMS (Checks)   y-RMS (Checks)   Max Residual
10     21     14       0.11           0.16           0.60             0.56             1.54
 8     15     16       0.16           0.15           0.36             0.41             0.89
As an example, Table 1 presents two combinations of RF coefficients, GCPs and check points. When the check points are located inside the area covered by the GCPs, the errors are generally less than a metre. This is the case for points measured with least squares as well as for well-defined points. We found considerable errors, of 8 m and greater, when trying to extrapolate points outside the GCPs' area of influence. The values of the coefficients in the denominators of equations (3) are almost zero, which indicates that (3) could be reduced to the affine equation (1), extended to include Z. A precision of one metre can be obtained if great care is taken with the GCPs. Even though it has high geometric quality, the imagery would not be adequate for 1:5,000 scale cartography because of its low radiometric quality; some additional field work could be necessary, as will be explained later.

Following Fraser et al. [3] and Baltsavias et al. [19] and using the metadata (azimuth 264.7799º and elevation 65.32514º) from Space Imaging, only four or five GCPs were needed to obtain a precision of around 1 m. In fact, only three points are necessary, but in order to have some redundancy for the least squares method, using four or five is more convenient. As reported by these authors, if a good DEM is available, even a single GCP would be enough for the whole georeferencing. We tried this and found errors of 1.5 m, probably because our DEM was not very accurate. In summary, it can be said that very good geometric accuracy can be obtained with a minimal number of GCPs.

When updating maps from Ikonos imagery, not only the geometric accuracy is important, but also the clarity of the information content in the image. Samadzadegan et al. [20] show the deficiencies of some cartographic features extracted from Ikonos images, especially electrical or telephone lines, fences, some small bridges, some types of building, and so forth. In contrast to its high geometric quality, it is obvious that the radiometric quality of Ikonos needs some improvement, as the image included some noise and artefacts. To some extent this is understandable, given the height of the satellite and the high resolution; the integration time for a pixel footprint is only 0.166 ms – too short a time for the sensor to obtain a strong signal, as reported in Baltsavias et al. [19].
4 Conclusions

The main conclusion to be drawn here is that when the terrain to be mapped is relatively flat and the scale is 1:10,000 or smaller, the least expensive Ikonos product should be acquired, rather than the more expensive products (Precision, Ortho) or aerial imagery. With only a few control points, it is possible to obtain enough precision to update medium-scale maps. This has been shown in the case study with an Ikonos image of the Alcalá University campus where, with only five control points, a precision of around 1 m was achieved. This excellent geometric accuracy is sufficient for 1:5,000 cartography, but clearly perceptible cartographic feature information is also necessary, since some cartographic features needed at a 1:5,000 scale are not discernible in a 1 m resolution image (which will contain some noise and relatively low radiometric quality compared to its high geometric quality). Therefore, if a 1:5,000 scale is the objective of the Ikonos image acquisition, one should expect to undertake some additional field work.
The following rules or advice follow from our work:

- Take great care in measuring GCPs. It is better to have a few good points (i.e., around four or five with approximately 20 cm accuracy each, both on the ground and in the image) than many points of lower quality.
- Visit the ground after the images have been taken, to study which points are the most appropriate for the image correction.
- Use on-the-ground geometric figures (e.g., roundabouts, features that are squares or rectangles, etc.) or long lines, in order to minimise errors.
Acknowledgments The authors are thankful to Roberto Gómez-Jara, Rafael Enriquez and Marta Juanatey for field work and to Aurensa for the Ikonos imagery. Thanks are also due to the Spanish Ministry of Education and Science for financial support; project number CGL2006-07132/BTE.
References

1. Jacobsen, K.: High resolution satellite imaging systems overview. In: Proceedings of the Joint ISPRS Workshop on High Resolution Mapping from Space (on CD-ROM) (2005)
2. Demirel, A., Bayir, I.: One meter and below high resolution satellites in production. In: Proceedings of the Joint ISPRS Workshop on High Resolution Mapping from Space (on CD-ROM) (2005)
3. Fraser, C.S., Baltsavias, E., Gruen, A.: Processing of Ikonos imagery for submetre 3D positioning and building extraction. ISPRS Journal of Photogrammetry and Remote Sensing 56, 177–194 (2002)
4. Guienko, G.A.: Geometric Accuracy of Ikonos: Zoom. IEEE Transactions on Geoscience and Remote Sensing 42(1) (2004)
5. Davis, C.H., Wang, X.: Planimetric accuracy of Ikonos 1m panchromatic orthoimage products and their utility for local government GIS basemap applications. Int. J. Remote Sensing 24(22), 4267–4288 (2003)
6. Büyüksalih, G., Oruç, M., Koçak: Geometric accuracy testing of Ikonos geo-product Mono Imagery using different sensor orientation models. Turkish J. Engineering Environment. Science 27, 347–360 (2003)
7. Ebner, H., Ohlhof, T., Putz, E.: Orientation of MOMS-02/D2 and MOMS-2P imagery. International Archives of Photogrammetry and Remote Sensing 31, 67–72 (1996)
8. Gupta, R., Hartley, R.I.: Linear pushbroom cameras. IEEE Transactions on Pattern Analysis and Machine Intelligence 19(9), 963–975 (1997)
9. OGC: The Open GIS Abstract Specification vol. 7, pp. 99–107 (1999), www.opengis.org/techno/specs.htm
10. Barrett, E.B., Payton, P.M.: Geometric Invariants for Rational Polynomial Cameras. In: Applied Imagery Pattern Recognition Workshop, pp. 223–234 (2000)
11. Yang, X.: Accuracy of rational function approximation in photogrammetry. In: Proceedings of the ASPRS Annual Convention (on CD-ROM, unpaginated), Washington D.C. (2000)
12. Hu, Y., Tao, C.V.: Updating solutions of the rational function model using additional control points and enhanced photogrammetric processing. In: Proceedings of the Joint ISPRS Workshop on High Resolution Mapping from Space, Hanover, pp. 19–21 (on CD-ROM) (2001)
13. Tao, C.V., Hu, Y.: A comprehensive study on the rational function model for photogrammetric processing. Photogrammetric Engineering and Remote Sensing 67(12), 1347–1357 (2001)
14. Grodecki, J., Dial, G.: Ikonos geometric accuracy. In: Proceedings of the Joint ISPRS Workshop on High Resolution Mapping from Space 2001, Hanover, pp. 19–21 (on CD-ROM)
15. Toutin, T., Cheng, P.: Demystification of Ikonos. Earth Observation Magazine 9(7), 17–21 (2000)
16. Toutin, T., Cheng, P.: Quickbird – a milestone for high resolution mapping. Earth Observation Magazine 11(4), 14–18 (2002)
17. Helder, D., Coan, M., Patrick, K., Gaska, P.: IKONOS geometric characterization. Remote Sensing of Environment 88, 69–79 (2003)
18. Ganas, A., Lagios, E., Tzannetos, N.: An investigation into the spatial accuracy of the IKONOS 2 orthoimagery within an urban environment. International Journal of Remote Sensing 23(17), 3513–3519 (2002)
19. Baltsavias, E., Pateraki, M., Zhang, I.: Radiometric and Geometric Evaluation of Ikonos Geo Images and their use for 3D building modelling. In: Joint ISPRS Workshop "High Resolution Mapping from Space 2001", Hannover, Germany, pp. 19–21 (2001)
20. Samadzadegan, F., Sarpoulaki, M., Azizi, A., Talebzadeh, A.: Evaluation of the Potential of High Resolution Satellite Imagery (IKONOS) for Large Scale Map Revision. In: IAPRS, vol. 34 (part 4), Geospatial Theory, Processing and Applications, ISPRS Commission IV Symposium, Ottawa, Canada (July 9-12, 2002)
Expression Mimicking: From 2D Monocular Sequences to 3D Animations

Charlotte Ghys1,2, Maxime Taron1, Nikos Paragios1, Nikos Komodakis1, and Bénédicte Bascle2
1 MAS - Ecole Centrale Paris, Grande Voie des Vignes, 92295 Chatenay-Malabry, France
2 Orange - France Telecom R&D, 2 avenue Pierre Marzin, 22300 Lannion, France

Abstract. In this paper we present a novel approach for mimicking expressions in 3D from a monocular video sequence. To this end, we first construct a high resolution semantic mesh model through automatic global and local registration of low resolution range data. Such a model is represented in a compact fashion using a predefined set of control points, and animated using radial basis functions. In order to recover the 2D positions of the 3D control points in the observed sequence, we use a cascade Adaboost-driven search. The search space is reduced through the use of predictive expression modeling. The optimal configuration of the Adaboost responses is determined using combinatorial linear programming, which enforces the anthropometric nature of the model. The displacements can then be reproduced on any version of the model registered to another face. Our method does not require dense stereo estimation and can therefore produce realistic animations using any 3D model. Promising experimental results demonstrate the potential of our approach.
1 Introduction

Reproducing facial animations from images is an interesting problem at the intersection of computer vision and computer graphics. The problem consists of determining the deformation of a 3D model from 2D image(s) and is often ill-posed. The presence of occlusions or self-occlusions and the lack of depth information are the main challenges. Facial parametric surface models, with or without texture, can address them. Then, assuming the camera parameters are known, 3D animation is equivalent to pose estimation. In other words, one would like to determine the set of parameters (surface configuration) such that the projection of the model onto the image matches the observations [1]. 3D-2D matching has been well studied both from the computer vision perspective, in terms of facial extraction [2,3,4,5,6], and from the computer graphics perspective, in terms of animation [7,8]. These methods can have excellent performance, and are mostly based on dense matching between the model projection and the image. Therefore one can claim that they are computationally inefficient. We consider a more challenging problem, that is, animating a facial model using limited image-based support obtained from low cost acquisition sensors. The use of patches to describe features and the use of classifiers to extract these features from images have emerged mostly due to progress in machine learning. In [9], facial interest points are determined through training on positive and negative examples of features. This method does not take into account the expected facial geometry.
The use of active appearance models (AAM) [10] is a natural approach to introduce anthropometric constraints and has been heavily considered in this context [11]. Their training (a huge dimensional space) as well as potential convergence to local minima are important limitations of this method.

In this paper, we propose a novel method for mimicking expressions from a monocular video sequence to a 3D avatar. To this end, we construct a generic 3D model using a range database, as well as models of expressions/transitions between emotions. The expression mimicking problem is addressed through a compact facial representation (through control points), and a fast/efficient/optimal search of its geometric elements in the image. We use weak classifiers to this end, while towards the optimal configuration anthropometric constraints are combined with the responses of the classifiers using linear programming. Last, but not least, animation is done using radial basis functions. The remainder of this paper is organized as follows: in Section 2 we briefly present the static and the dynamic aspects of the model, while in Section 3 the image-to-model correspondence problem is solved. Validation is presented in Section 4, while discussion is presented in the last section.
2 Face and Expressions Models

The construction of a generic 3D face model, able to capture the variations across individuals, is a rather critical aspect of a method aiming to reproduce facial animations. Such a model should be generic enough, involve a small number of parameters (control points) and be capable of reproducing realistic facial animations. Furthermore, one should be able to easily determine the correspondences/projections of these control points in the image plane.

2.1 MPEG Semantic Mesh

Understanding facial expressions consists of estimating a number of parameters that explain the current state of the face model. Such an action requires on one hand the definition of a neutral state for the face, and on the other hand the parameters explaining the different expressions. The selection of the model is critical in this process in terms of performance quality and computational cost. A compromise between complexity and performance is to be made, aiming at a model that is able to capture the most critical deformations. Our face representation is based on the MPEG-4 standard [12], which aimed at the representation of images, text, graphics, and face and body animation. Such a model introduces the neutral state through a set of 205 Feature Points (FPs). The model is still quite complex, while the estimation of the actual positions of the FPs through image inference is quite problematic. We selected 19 critical features which form an additional level on top of a high resolution triangulated model. This selection is guided by the potential representation of expressions using geometric deformations, as well as by hardware acquisition constraints. The final model [Fig. (1.i)] consists of 19 degrees of freedom. It provides a reasonable compromise between complexity and performance. The position of each of these 19 points is easy to define on the Candide model [Fig. (1.ii)], the starting point of our model.
Fig. 1. Control Points of the Model: (i) in 2D, (ii) on the Candide model and (iii) on our model
Actually, such a simple model suffers from poor realism, and therefore towards proper use one should introduce real samples in the process and update the model to account for variations across individuals.

2.2 Automatic Construction of the Model

To determine a realistic/generic 3D face model, we first modify the Candide model to increase its resolution. We then consider the 3D RMA range dataset benchmark [13], which consists of approximately 120 facial surfaces of 30 individuals. The next step consists of registering all training examples to the same reference pose. The modified Candide model has been considered as the 'reference' pose configuration. Each example has then been registered to this model using a landmarks-enforced variant of the method proposed in [14], with thin plate splines [15] being the transformation domain. In such a context, both the source and the target shapes C_S, C_T are represented using a distance transform [16] in a bounded domain Ω:

φ_C(x) = 0 if x ∈ C_S, and d(x, S) otherwise;   φ_T(x) = 0 if x ∈ C_T, and d(x, T) otherwise    (1)

with d(x, S) being the minimum distance between x and C_S. In [14], the idea was proposed of measuring the local deformations between two shapes, once the transformation L has been determined, through a direct comparison of the quadratic norm of their distance functions:

E_dt(L) = ∫_Ω χ_α(φ_C(x)) (φ_C(x) − φ_T(x + L(x)))² dx    (2)
with χ_α being the characteristic/indicator function:

χ_α(d) = 1/(2α) if d ∈ [−α, α], and 0 otherwise.    (3)
We use a Thin Plate Spline (TPS) transformation [15] to address global registration as well as local deformations. Examples of registration between the Candide model and the training set are shown in [Fig. (2)], while the final model is obtained through averaging of the 120 locally registered examples and is shown in [Fig. (1.iii)]. One should point out that the training examples refer to faces in a neutral and static state. A system of animation should now be added to the model.
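A minimal sketch of Eqs. (1)-(3) on a 2D grid is given below, assuming the shapes are available as boolean masks; SciPy's Euclidean distance transform and a bilinear warp stand in for the actual registration pipeline, and the value of alpha is arbitrary.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, map_coordinates

def distance_map(mask):
    """Unsigned distance transform of Eq. (1): 0 on the shape, d(x, C) elsewhere.

    mask: boolean array, True on shape pixels.
    """
    return distance_transform_edt(~mask)

def registration_energy(phi_c, phi_t, disp, alpha=5.0):
    """Eq. (2) with the indicator chi_alpha of Eq. (3), summed over the grid."""
    h, w = phi_c.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    # phi_T evaluated at x + L(x); disp[..., 0] = horizontal, disp[..., 1] = vertical shift
    warped = map_coordinates(phi_t, [yy + disp[..., 1], xx + disp[..., 0]], order=1)
    chi = (np.abs(phi_c) <= alpha) / (2.0 * alpha)     # chi_alpha(phi_C(x)), Eq. (3)
    return float(np.sum(chi * (phi_c - warped) ** 2))
```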
Fig. 2. Some examples of registration between our model and the training set
2.3 Animation of the Model

To animate this model, the positions of all the vertices of the mesh should be estimated according to the movements of the FPs. To do so, we use the method proposed in [17]. This method produces localized real-time deformations, assuring smooth displacements of points. An influence area around each control point is defined according to a metric, such that every point of this area moves with the control point. To animate the mesh, the new position of the control point has to be specified, and a Radial Basis Function system computes the new locations of all vertices in the influence region:

F(x) = Σ_{i=0}^{N} c_i h(||x − x_i||)    (4)
where x is the initial position of a vertex, F(x) is its new position, the x_i are the set of anchor points surrounding the influence area together with the control point, the c_i are the associated weights, estimated in a learning phase, and h() is a radial basis function. The main limitation of this method is that it does not take into account the semantic aspect of the mesh, i.e. there is no difference in the definition of the influence area whether the control point is a corner of an eyebrow or a corner of the mouth. We therefore propose a set of influence areas for each control point of our model. The system for each feature point is a weighted sum of radial basis functions. The next aspect to be addressed is variations due to expressions.

2.4 Learning Transitions/Expressions

In the 70s, Paul Ekman and Wallace F. Friesen designed the muscle-based Facial Action Coding System (FACS) [18] to help human observers in describing facial expressions verbally. Such a system includes 44 Action Units (AUs), expressing all possible facial movements. Each of them is related to the contraction of one or several muscles, and each facial expression can be described as the activation of one or a set of AUs. An alternative to this muscle-based description of facial movements is a description based on geometric changes of the face. Such an approach is more tractable and reformulates expression understanding in a more technical fashion. To this end, several geometric models have been proposed, such as the MPEG-4 standard and its facial action parameters (FAPs) [12]. Such an animation mechanism consists of a set of feature points (FPs) associated with a set of FAPs, each corresponding to a particular facial action deforming a face model.
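Before moving on, a concrete illustration of the RBF animation of Eq. (4) in Section 2.3 is sketched below: the weights c_i are solved so that each anchor point maps exactly to its target, and every vertex of the influence area is then displaced. The Gaussian kernel and its width are our assumptions; the paper only requires some radial basis function h().

```python
import numpy as np

def rbf_kernel(r, sigma=20.0):
    return np.exp(-(r / sigma) ** 2)      # assumed Gaussian h(); any RBF could be used

def fit_rbf_weights(anchors, targets, sigma=20.0):
    """Solve for the weights c_i so that each anchor point maps to its target position."""
    r = np.linalg.norm(anchors[:, None, :] - anchors[None, :, :], axis=-1)
    return np.linalg.solve(rbf_kernel(r, sigma), targets)   # (N, 3) weights

def deform(vertices, anchors, weights, sigma=20.0):
    """F(x) of Eq. (4) evaluated for every vertex of the influence area."""
    r = np.linalg.norm(vertices[:, None, :] - anchors[None, :, :], axis=-1)
    return rbf_kernel(r, sigma) @ weights
```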
Fig. 3. The 6 basic emotions : (a) anger, (b) disgust, (c) fear, (d) joy, (e) sadness, (f) surprise
Combinations of AUs or FAPs are able to model an expression. We propose another approach, consisting in modeling transitions between expressions using an autoregressive process. Building predictive models is equivalent to expressing the state of a random variable X(t) in time as a function of the previous states of the system:

X̂(t) = G(X(t − k); k ∈ [1, p]) + η(t)    (5)

with p being the order of the model and η a noise model used to describe errors in the estimation process. In a learning step, given a set of sequences of observations and the selection of the prediction mechanism, we aim to recover the parameters of this function such that, for the training set, the observations meet the prediction. The Auto Regressive Process (ARP) makes it possible to solve the problem of predicting object positions in time and is able to model the temporal deformation of a high-dimensional vector. In the context of this paper, such a vector corresponds to the positions of the face control points (we assume no changes in depth/orientation of the face). A system modeling the transitions between expressions can be expressed as:

X̂(t) = Σ_{i=1}^{I} W_i f( Σ_{k=1}^{p} w_{i,k} X(t − k) + θ_i )    (6)
where W_i and w_{i,k}, i ∈ [1, I] and k ∈ [1, p], are the autoregressive process parameters, while f() is the identity in the case of a linear model or a smooth bounded monotonic function for a non-linear model. In the latter case the model can be seen as a neural network, and we use a back-propagation technique to estimate the parameters. Both systems were tested; the results presented in this paper are obtained using the non-linear system. These models can now be used to improve the detection of control points in images. The model is calibrated off-line using a set of landmarks and a number of individuals performing various expressions.
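The sketch below fits only the linear special case of the predictor in Eqs. (5)-(6) by least squares; the non-linear variant used in the paper replaces it with a small network trained by back-propagation. The stacking of the past states and the bias term are our choices, made only for illustration.

```python
import numpy as np

def fit_linear_arp(X, p=2):
    """X: (T, D) sequence of stacked control-point vectors; returns (p*D + 1, D) weights."""
    rows = [np.r_[X[t - p:t][::-1].ravel(), 1.0] for t in range(p, len(X))]
    W, *_ = np.linalg.lstsq(np.array(rows), X[p:], rcond=None)
    return W

def predict_next(W, history, p=2):
    """One-step prediction of X(t) from the last p observed states (most recent first)."""
    x = np.r_[np.asarray(history)[-p:][::-1].ravel(), 1.0]
    return x @ W
```

Such a one-step prediction is what restricts the search area for the control points in the next frame, as used in Section 3.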
3 3D from Monocular Images

The 3D face model and the autoregressive processes provide a natural animation framework. However, a link has to be established with the information present in the image. To this end, one should be able to extract candidate points in the image plane corresponding to the control points. Combinations of weak classifiers are the most prominent choice to address this demand.
3.1 Feature Extraction and Cascade Adaboost

Adaboost is a linear combination of T weak classifiers defined by h_t : X → {−1, 1} with error < 0.5. The training data is a set of positive and negative patch examples of the object to detect, (x_1, y_1) ... (x_n, y_n), where x_i ∈ X and y_i ∈ Y = {−1, 1}. Each example could be a vector of grey levels or filter responses (e.g., Haar Basis Functions, Gabor filters, etc.). In this paper, the data to classify are grey-scale patches with equalized histograms. A weight is assigned to each example, indicating its importance in the dataset. At each round t, the best weak classifier and its associated weight are computed, while the weights of incorrectly classified examples are increased. In this manner, the next weak classifier focuses more on those examples. The final classification is given by:

H(x) = sign( Σ_{t=1}^{T} α_t h_t(x) )    (7)
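A minimal sketch of the strong classifier of Eq. (7) is given below; the weak learners are passed in as callables, and the small cascade helper anticipates the chaining of boosted stages described next (its per-stage rejection thresholds are an assumption, not part of Eq. (7)).

```python
import numpy as np

def strong_classify(patch, weak):
    """Eq. (7): weak is a list of (h_t, alpha_t) pairs, with h_t(patch) in {-1, +1}."""
    score = sum(alpha * h(patch) for h, alpha in weak)
    return int(np.sign(score)), score

def cascade_accepts(patch, stages, thresholds):
    """Chain of boosted stages: reject as soon as one stage scores below its threshold."""
    return all(strong_classify(patch, weak)[1] >= thr
               for weak, thr in zip(stages, thresholds))
```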
Cascade Adaboost classifiers were introduced in [19]. The idea is to chain Adaboost classifiers with small numbers of weak classifiers, considering that candidates from the background are easy to eliminate at low computational cost. One can accelerate the process using a local search in the areas predicted by the non-linear autoregressive model for the potential positions of the control points. The outcome of this process is a number of potential candidate image locations for each control point of the face model. One could then take the best match (as determined by the classifier) to establish correspondences between the 3D model and the image. However, such an approach will certainly fail, due to the fact that a number of facial landmarks may be missing, or due to artifacts on the face such as wrinkles or beauty spots. Let us assume that n responses are available for each control point. Then we can define a label set with potential correspondences for each control point, and the task of finding the optimal configuration can be viewed as a discrete optimization problem. In such a context, one should recover a configuration that is consistent with the expected facial geometric characteristics.

3.2 Anthropometric Constraints and Linear Programming

Let us now consider a discrete set of labels Θ = {θ^1, ..., θ^i} corresponding to the potential image candidates for the control points, D_m = {x_m^1, ..., x_m^i}. A label assignment θ_m to a grid node m is associated with the model control point m and the image coordinate x_m^{θ_m}. Let us also consider a graph G which has the 19 FPs as nodes. One can reformulate the optimal correspondence selection problem as a discrete optimization problem, where the goal is to assign individual labels θ_m to the grid nodes. In this context, the image support term E_mo(·) can be rewritten as:

E_mo(θ) = Σ_{m∈G} g( Σ_{t=1}^{T} α_t h_t(x_m^{θ_m}) ) ≈ Σ_{m∈G} V_m(θ_m)    (8)
with g being a monotonically decreasing function. The next aspect to be addressed is the introduction of proper interactions between label assignments, which can be done through the use of anthropometric constraints E_sm(·) in the label domain.
Table 1. MPEG-4 anthropometric constraints and corresponding pair-wise potentials

(m, n)    Description                      Constraint           V_mn
(15,16)   Midpoints of the lips            Symmetry             (15.y − 16.y)^2
(5,6)     Inner corners of the eyebrows    Symmetry             (5.y − 6.y)^2
(9,10)    Midpoints of upper eyelids       x = (0.x + 2.x)/2    (9.y − 10.y)^2 and (9.x − (0.x + 2.x)/2)^2
(11,12)   Midpoints of lower eyelids       x = (0.x + 2.x)/2    (11.y − 12.y)^2 and (11.x − (0.x + 2.x)/2)^2
Even if a face is not exactly symmetric, considering the 6 basic emotions [Fig. 3], the movements of the FPs are rather symmetric, with non-critical errors. We assume that two pseudo-symmetric points move with the same intensity. Examples of symmetry correspondences and inequality constraints are presented in [Tab. (1)]. These constraints can be formulated as follows:

E_sm(θ) = Σ_{m∈G} Σ_{n∈N(m)} V_mn(θ_m, θ_n)    (9)
where N represents the neighborhood system associated with the deformation grid G, and V_mn(·,·) the pairwise distance. To address computational constraints, we have considered only pair-wise relations between control points of the model: first, corner points such as the corners of the eyes or of the mouth, the nostrils or the eyebrows. Once these were detected, we added the middle points on the eyelids and on the lips and the tip of the nose, taking into account that their x-coordinates are constrained by the previously detected points.

For optimizing the above discrete Markov Random Field, we make use of a recently proposed method called Fast-PD [20]. Instead of working directly with the discrete MRF optimization problem above, Fast-PD first reformulates that problem as an integer linear programming problem (the primal problem) and also takes the dual of the corresponding LP relaxation. Given these two problems, i.e. the primal and the dual, Fast-PD then generates a sequence of integral feasible primal solutions, as well as a sequence of dual feasible solutions. These two sequences of solutions make local improvements to each other until the primal-dual gap (i.e. the gap between the objective function of the primal and the objective function of the dual) becomes small enough. This is exactly what the next theorem, also known as the Primal-Dual Principle, states.

Primal-Dual Principle 1 (Primal-Dual principle). Consider the following pair of primal and dual linear programs:

PRIMAL: min c^T x  s.t. Ax = b, x ≥ 0        DUAL: max b^T y  s.t. A^T y ≤ c

and let x, y be integral-primal and dual feasible solutions having a primal-dual gap less than f, i.e. c^T x ≤ f · b^T y. Then x is guaranteed to be an f-approximation to the optimal integral solution x*, i.e., c^T x* ≤ c^T x ≤ f · c^T x*.
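To make the discrete labeling objective explicit, here is a small sketch that evaluates the sum of Eq. (8) and Eq. (9) for one candidate assignment; the data layout is ours, and in the paper the minimization itself is carried out with Fast-PD rather than by exhaustive scoring.

```python
def labeling_energy(labels, unary, pairwise, candidates):
    """Total MRF energy E_mo + E_sm for a given label assignment.

    labels:     labels[m] is the chosen candidate index for control point m
    unary:      unary[m][k] = g(sum_t alpha_t h_t(x_m^k)), the data term of Eq. (8)
    pairwise:   dict {(m, n): V_mn}, V_mn a function of two (x, y) points (Table 1)
    candidates: candidates[m][k] = (x, y) image coordinates of candidate k
    """
    e_mo = sum(unary[m][labels[m]] for m in range(len(labels)))          # Eq. (8)
    e_sm = sum(v(candidates[m][labels[m]], candidates[n][labels[n]])     # Eq. (9)
               for (m, n), v in pairwise.items())
    return e_mo + e_sm

# e.g. pairwise[(15, 16)] = lambda p, q: (p[1] - q[1]) ** 2   # lips symmetry row of Table 1
```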
Fig. 4. Constrained Extraction: (a) The candidates, (b) The highest score configuration, (c) The optimal configuration defined by Fast-PD
Fast-PD is a very general MRF optimization method, which can handle a very wide class of MRFs. Given the optimal configuration of the control points in the image plane [Fig. 4], the autoregressive models explaining the different expressions, and the known camera parameters, one can now transfer the internal facial movements to another person in the real 3D world.
4 Validation and Discussion

The process of transferring an expression from one person to another can be summarized as follows. First, from a stereo pair of images, the face of the person performing the expression is reconstructed. This reconstruction is used for the registration of our face model, using the method presented in Section 2.2. The registered face model is called the avatar. The feature points are detected in the first image of the source sequence. In the following images, the position of the feature points is predicted through the autoregressive process estimated for the 6 emotions, which makes it possible to restrict the search area before the extraction of the feature points. Their movements are projected onto the avatar, assuming the changes in depth are negligible. Results are presented in [Fig. 5] for joy and surprise.

In this paper, we have presented a novel, computationally efficient approach to animate a 3D face from 2D images. Our approach consists of a learning step and an image inference step. In the learning stage, we first create a generic neutral compact facial model through automatic global and local registration of 3D range data. In order to account for facial variations due to expressions, we also use time series to model sequences of expressions. Once the static and the dynamic aspects of the model have been addressed, feature extraction from images using cascade AdaBoost and anthropometric constraints through efficient linear programming determines the optimal image configuration of the model. This configuration provides natural means of animation, as well as recognition of expressions. In order to demonstrate the concept we have considered individuals who exhibit important facial geometric differences. Several extensions are feasible within the proposed framework. The automatic annotation of 3D FPs in range data should be straightforward through regression. Next, introducing 3D appearance features in the model could improve the performance of the control-point image projection extraction, while at the same time leading to more realistic representations. Then, complex models able to account for expression transitions
Fig. 5. Joy animation: (1) Observed sequence, (2) Individual 1, (3) Individual 2, (a) First frame, (b) intermediate frame, (c) last frame
like HMMs could be investigated. Last, but not least, considering a more complete set of emotions, or even better, the 46 Action Units, as defined in [18] could improve the recognition and the animation results.
References 1. Low, D.: Fitting parameterized three-dimensional models to images. PAMI, 441–450 (1991) 2. Terzopoulos, D., Waters, K.: Physically-based facial modelling, analysis, and animation. The Journal of Visualization and Computer Animation (1990)
3. Terzopoulos, D., Metaxas, D.: Dynamic 3d models with local and global deformations: Deformable superquadrics. PAMI, 703–714 (1991) 4. Vetter, T., Blanz, V.: Estimating coloured 3d face models from single images: An example based approach. In: ECCV (1988) 5. Duan, Y., Yang, L., Qin, H., Samaras, D.: Shape reconstruction from 3d and 2d data using pde-based deformable surfaces. In: Pajdla, T., Matas, J(G.) (eds.) ECCV 2004. LNCS, vol. 3021, pp. 238–251. Springer, Heidelberg (2004) 6. Ilic, S., Fua, P.: Implicit meshes for surface reconstruction. PAMI, 328–333 (2006) 7. Lee, Y., Terzopoulos, D.: Realistic modeling for facial animation. SIGGRAPH (1995) 8. Blanz, V., Vetter, T.: A Morphable Model for the Synthesis of 3d Faces. In: SIGGRAPH (1999) 9. Pantic, M., Rothkrantz, L.: Expert System for Automatic Analysis of Facial Expressions. In: Image and Vision Computing (2000) 10. Cootes, T., Edwards, G., Taylor, C.: Active appearance models. PAMI 681–685 (2001) 11. Cristinacce, D., Cootes, T.: Feature detection and tracking with constrained local models. In: BMVC (2006) 12. Forchheimer, R., Pandzic, I., et al.: MPEG-4 Facial Animation: the Standards, Implementations and Applications. John Wiley & Sons, Chichester (2002) 13. Achroy, M., Beumier, C.: (The 3d rma database) 14. Huang, X., Paragios, N., Metaxas, D.: Shape registration in implicit spaces using information theory and free form deformations. In: PAMI (2006) 15. Bookstein, F.: Principal warps: Thin-plate splines and the decomposition of deformations. In: PAMI (1989) 16. Borgefors, G.: Distance transformations in digital images. In: Computer Vision, Graphics, and Image (1986) 17. Noh, J., Fidaleo, D., Neumann, U.: Animated deformations with radial basis functions. In: ACM Symposium on Virtual Reality Software and Technology (2000) 18. Ekman, P., Friesen, W.: Facial Action Coding System. Palo Alto (1978) 19. Viola, P., Jones, M.: Robust real-time face detection. IJCV (2004) 20. Komodakis, N., Tziritas, G.: Approximate labeling via graph-cuts based on linear programming. In: PAMI (2007)
Object Recognition: A Focused Vision Based Approach Noel Trujillo, Roland Chapuis, Frederic Chausse, and Michel Naranjo Laboratoire des Sciences et Materiaux pour l’Electronique, et d’Automatique (LASMEA) 24 av. des Landais - 63177 Aubiere - France
[email protected]
Abstract. In this paper we propose a novel approach for visual object recognition. The main idea is to consider the object recognition task as an active process guided by multi-cue attentional indexes, which at the same time correspond to the object's parts. In this method, the visual attention mechanism does not correspond to a separate stage (or module) of the recognition process; on the contrary, it is inherent in the recognition strategy itself. Recognition is achieved by means of a sequential search for the object's parts: part selection depends on the current state of the recognition process. The detection of each part constrains the process state in order to reduce the search space (in the overall feature space) for future part matching. As an illustration, some results for face and pedestrian recognition are presented.
1 Introduction

1.1 Object Recognition

Thanks to the significant progress in classification methods (such as neural networks, support vector machines, AdaBoost, etc.) and in image processing tools (such as the wavelet transform and eigen-images), template matching with sliding windows has been widely used in recent years. In order to detect the objects present in an image, approaches based on these techniques scan the whole image exhaustively in location and scale [1,2,3]. To decrease the detection time, other methods propose the use of a cascade of classifiers [4,2], or add heuristics to focus the search by introducing focus indexes such as color for face detection, shadows for vehicle detection, or a region of interest given by the user [1]. In the same way, reducing the number of parameters in the feature vector [1] or taking larger steps when scanning over scale are also typical solutions. All of these solutions entail an increase in false detections (a poorly discriminant classifier) or in non-detections when searching for scale-varying objects. In order to deal with deformable objects or objects with highly variable appearance, other techniques have been proposed, such as relational matchers. Unlike purely structural approaches, in this case it is not difficult to learn and to detect the object's components (also called patches). The main disadvantage is that all the object's parts are defined by hand [5,6,7,8]. A combinatorial problem for part correspondence appears in this kind of technique. In order to decrease its impact, the search space can be limited by introducing constraints on the spatial configuration [5].
1.2 Visual Attention for Object Recognition

On the other hand, several attentional mechanisms have been proposed over the last years in order to understand biological perception or to optimize the artificial perception process. Some approaches show good performance for selecting regions or points of interest under bottom-up control [9,10,11,12] or top-down control [13,14,15,16,17,18]. For object recognition, the usual strategy is: 1. region selection provided by an attentional mechanism (by means of previously extracted saliency regions (bottom-up) or a priori context information (top-down)) [19,11,20,21,13,17,18,15], and 2. object recognition (in the region selected by the attentional mechanism) by a separate module, independent of the object and of the overall system.

1.3 Discussion

For object recognition we noticed that there is a need to guide the recognition module, by means of an attentional mechanism, in order to delimit the search space and thus avoid an exhaustive search in position and scale (one of the main problems of template matching approaches). Furthermore, there is a tendency to decompose the popular and commonly used template matching classifiers into different components (as in relational matchers): the main goal is to accelerate recognition and/or to deal with deformable objects. For this kind of approach, the explicit definition of the object's components is one of the main problems. In this proposal we try to formalize a methodology which allows us not only to overcome the problems presented above, but also to explore "progressive attention" as an alternative way of studying the object recognition problem. For us, visual attention and object recognition are governed by the same control strategy. Unlike a typical static process, visual attention becomes a dynamic process which depends on the object and on the observations provided by the object itself.
2 Methodology

Our goal is to develop a unified methodology for object recognition, seen as a visual search process, which makes it possible to optimize recognition by using visual attention indexes. It thus becomes possible to avoid an exhaustive search for the whole object in the image. Under this framework, we seek to unify the recognition control algorithm and the attentional mechanism under the same global strategy. Therefore, a particular object representation is required. The main idea is to use these "attentional indexes" (in our case the object's parts) in order to guide the recognition process towards a region of interest where the object could be present. The result is a recursive hypothesis generation/verification process, guided by a probabilistic object model. During recognition, the object model is updated in order to reduce the overall search space and thus constrain future detections. Unlike typical approaches where only the 2D geometric space is constrained (a region in the image plane), here selective attention operates at four different levels: selection of the low-level operators, selection of the operators' response intervals (an ROI in the overall feature space), selection of the scale of analysis, and region selection (windowing).
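To make the control flow concrete, the following schematic Python sketch outlines one possible reading of this generate/verify loop as described in this section and detailed in Sections 2.3-2.4. It is only an illustration: the objects `model`, `state` and `svm`, and all their methods and thresholds, are hypothetical placeholders rather than the authors' implementation.

```python
def recognize(model, image, tau_s, tau_sigma, svm):
    """Schematic recursive hypothesis generation / verification loop.
    tau_s     : branch-and-bound threshold on the log-likelihood ratio
    tau_sigma : variance threshold that triggers the final holistic SVM test
    """
    L = 0.0
    state = model.initial_state()                  # zero level (Sect. 2.3)
    while state.has_unselected_parts():
        part = state.select_most_pertinent_part()  # hypothesis generation
        roi = state.roi_of(part)                   # ROI in the full feature space
        detection = part.detect(image, roi)        # hypothesis verification
        L += state.log_likelihood_ratio(part, detection)
        if L <= tau_s:                             # branch & bound: abandon branch
            return state.backtrack()
        if detection is not None:
            state.update(part, detection)          # Kalman-style focusing
        if state.max_remaining_variance() < tau_sigma:
            return svm.decide(image, state.roi())  # final holistic decision
    return None
```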
2.1 Object Representation

When dealing with object recognition, the object representation is critical: not only must it be a good representation, but it must also be adaptable to a goal-guided vision framework. Under this framework, the typical questions "what" (what property must be searched for?), "where" (at which location and scale?), "how" (how can this property be detected?) and "who" (who is in charge of detecting this property?) are posed. The object representation must contain enough information to answer at least three of these four questions. In order to overcome the representation problem, we have chosen a hybrid representation: an appearance-based representation which integrates a set of parts describing the object's local structure. Schneiderman [2] has proposed a representation of the same type, in which the object is represented by a set of local parts. For Schneiderman, each part corresponds to a local image region (a group of pixels) which captures the local structure, taking into consideration the statistics of the parts in the object's appearance. We adopt the above definition of an object part; it is an extension of the one proposed in [2], mainly with respect to multi-cue integration.

Object components: definition of parts. In order to assign a particular location to a given part, we use a grid of M^c cells (defined below). This grid (distributed uniformly) helps capture the local structure in different regions covering the object. By spreading the set of N low-level operators inside each cell, we are able to sample the local structure characterized by all N parameters (one per operator). To obtain the local structure at multiple scales, a multi-resolution grid is used. A cell C is defined as an entity which captures the local structure by means of a set of N operators. This cell allows us to observe multiple object properties in a given region of the image. Thus, for an m-cell Cm, we define a parameter vector ςm = [ζm1, ζm2, ..., ζmN, am]^t and its associated diagonal¹ covariance matrix Σςm, with [σ²ζm1, σ²ζm2, ..., σ²ζmN, Σam] being the elements of the main diagonal. The vector am contains the coordinates of Cm, and Σam is its associated covariance matrix. In summary, we have a grid of M^c cells and N parameters within each cell. During learning (see Subsection 2.2), we will see that some cells may not be representative of the object class (e.g. image background or high intra-class variability in local structure) and can be eliminated from the grid, as can some non-pertinent parameters within each cell. The remaining M^p < M^c cells, with their respective parameters (potentially different from one cell to another), are called the object parts. Having introduced the elements required to obtain the object's parts, let us formalize the definition of a part. As stated before, in this approach an object is considered to be composed of a set of parts which are statistically related to one another. A part is denoted by Λm for m ∈ [1, M^p], where M^p is the total number of object parts. The main difference between a cell C and a part Λ is that a cell helps capture the local information in an image
¹ Assumption of statistical independence between the N operators for a given cell. Nevertheless, we maintain the dependence between the same parameter across different cells.
by means of operators (during learning), while a part corresponds to a cell characterized by the local statistics of an object (after learning). For a given m-part Λm, we define:
– A parameter vector λm = [ζm1, ζm2, ..., ζmNm, a^t_m]^t, where ζij may be any descriptor such as color, local edge orientation, etc., and am = [um, vm]^t are the coordinates giving the location of Λm in the image. This allows the system to be guided by different attentional indexes (isolated or in combination).
– A detection function λ̂m = fm(λ̄m, Σλm), associated with a given part Λm, which extracts an observation λ̂m in a region of interest (ROI) centered at λ̄m, where λ̄m and Σλm correspond respectively to the mean parameter vector and its associated covariance matrix. This ROI is defined not only in the 2D geometric space but in the overall feature space of dimension (Nm + 2).
Up to now we have only defined the object's components. Learning will yield, for each cell, a mean vector and its associated covariance matrix. By means of a detection function, recognition will use these two quantities, for each cell, in order to capture the parameter values within the corresponding cell of the object to be recognized; this detection function seeks the parameters inside an ellipse, deduced from the covariance matrix and centered on the mean vector.

2.2 Model Learning

The main goal of this stage is to extract and characterize (with a mean value and a dispersion) each of the object's parts for its later detection. Therefore, from the grid of M^c cells we extract the M^p most representative cells and their corresponding Nm parameters (Nm potentially different for each cell, with Nm ≤ N). The M^p selected cells will be the parts composing the object. Learning is divided into three stages: parameter learning, cell location learning and model reduction. A brief description is given next.

Parameter learning. Our objective is to define, for each cell Cm and from the T training samples, the mean value and the confidence interval of the parameter vector ςm: ς̄m and Σςm. To achieve this, the statistics are collected in a brute-force way: we observe what happens inside each cell (by means of the N operators Opn, n ∈ [1, N], laid out at the beginning) when the object of interest is present. Because of the assumption of statistical independence between operators (inside a given cell, but not across the set of cells), we can observe separately the response of each operator Opn in all cells Cm, m ∈ [1, M^c], for a given training example. This process is repeated for all T samples. At the end, we obtain N matrices Xn = [xn1 xn2 ... xnT], n ∈ [1, N], of size M^c × T. This allows us to compute the mean vector x̄n = [ζ̄1n, ζ̄2n, ..., ζ̄M^c n] and the covariance matrix Σxn, corresponding to the statistics of parameter n over all the cells of the grid.

Cell location. Usually, an object can be located anywhere in the image plane; it can be found at different sizes and/or under small (or strong) variations in point of view (2D
and/or 3D rotation). This makes the artificial object recognition task difficult. In order to overcome this problem, these variations can be accounted for within the object model. Therefore, depending on the nature of the training samples, we can learn the probable object location and scale (by means of a similarity transformation from a centered, size-normalized database (for details, see [22]), or directly from an annotated database). In both cases we obtain ā = [ā1, ā2, ..., āM^c], the geometrical model with mean vector ā and covariance matrix Σa. The vector am, with m ∈ [1, M^c], corresponds to the coordinates describing the location of an m-part Λm.

Model reduction. The fact that a given parameter may fail to describe a local region (for a given object class), because of high variability, poor discriminant power or the non-response of a given operator, allows us to eliminate the weak parameters from that cell. In our case, we use a very simple rule based on the correlation coefficient: we eliminate all parameters for which rij ≤ 0.95, where rij is the correlation coefficient between ζin and ζjn. Although this may reduce the model considerably, it is not the only way to reduce it. When none of the N parameters describes a local region, the corresponding cell can be eliminated from the grid (and its coordinates from the geometrical model ā = [ā1, ā2, ..., āM^c]). Typically such cells correspond to regions outside the object. In summary, after the learning stage we have a statistical object model X ∼ N(x̄, Σx), with x̄ = [λ̄1, λ̄2, ..., λ̄M^p], that contains M^p parts Λm, with m ∈ [1, M^p ≤ M^c]. Each part is a characterized cell described by a parameter vector λm = [ζm1, ζm2, ..., ζmNm, am], with Nm ≤ N, and a diagonal covariance matrix Σλm, with (σ²ζm1, σ²ζm2, ..., σ²ζmNm, Σam) being the elements of the main diagonal describing its dispersion. Σam corresponds to the covariance matrix of the m-part coordinates.

2.3 Strategy for Recognition

The need for an attentional mechanism in object recognition is strongly related to the optimization of the recognition process. This can be achieved by using attentional indexes and local observations, and by avoiding an exhaustive search for the whole object without any prior. The way the algorithm looks for the object's parts, in order to better guide the search towards the target, is completely defined by the chosen recognition strategy. In fact, the strategy proposed here results in a recursive hypothesis generation/verification process, guided by a probabilistic object model. After a given part detection (verification), since learning has memorized (through the covariance matrix) all the dependencies between parameters, the statistics of the remaining parts are refined so as to reduce the overall search space (in the overall feature space) and constrain future detections. Let us illustrate with a simple example. Let Λ1 and Λ2 be the left and right eye, respectively, of a face. Λ1 and Λ2 may have color parameters with large dispersion (e.g. from black to sky blue, and over all possible locations in the image). When Λ1 has been successfully detected, the statistical relationship learned within the covariance matrix will constrain the position
and color of Λ2: the object model is updated by the detection of Λ1, and the search for Λ2 will be focused in position and color; it is a focus of attention in the overall parameter space. In Figure 1 we show a simplified chart describing, in a general way, the proposed strategy for controlling the recognition process. This strategy integrates several stages, from object part selection to model updates (focusing), passing through intermediate stages such as part detection and the decision stage.

Fig. 1. Simplified chart describing the proposed recognition strategy

Preparation and set-up: zero level. Once learning is done, the result corresponds to the average initial object model N(x̄, Σx)0, where the subscript 0 represents the zero level of the recognition process. The initial model defines, for each part, the mean value of its parameters (with its corresponding mean location in the image) and the confidence interval (which defines, for each part, the ROI in the overall feature space).

Hypothesis generation: part selection stage. Hypothesis generation is equivalent to defining the hierarchy for selecting the object's components. In our case, the most pertinent part is the one that carries the maximum number of descriptor parameters (to increase discriminative power), with minimum variance, the maximum correlation coefficient and the lowest mean computing time (to prioritize low-resolution parts). The proposed procedure is as follows: at recognition state k we look, among all parts that have not yet been selected, for the best candidate according to the criterion above. The selected part thus corresponds to the most pertinent one for the current recognition state. We insist on the current state k because there is a strong dependence between the selection criterion and the current process state: since the model is updated at each iteration (after a valid detection), the minimum-variance part depends on the part previously detected.

Hypothesis verification: part detection stage. Region of interest (ROI). Before detecting a given object part, we have to define the ROI in the image plane where this part could potentially be located. This ROI, in the purely geometric space, can be obtained from the mean location ām of this part and
from its associated covariance matrix Σam. When searching for a part, only this region of the image is processed by the Nm operators involved.

Part detection. Before detecting an object part Λm, we first have to determine which parameters this part is composed of. Next, the ROI defined above is processed by the subset of Nm low-level operators. The decision of whether an object part has been detected or not inside the predefined ROI is made by bounding the probability density function (in our case the parameter vector is assumed to follow a normal distribution). Because of the poor discriminant power of an isolated part (mostly in the first iterations), when detecting a given part Λm, with mean parameter vector λ̄m and covariance matrix Σλm, we generally obtain Lm candidate vectors λ̂l, l ∈ [1, Lm]. A candidate λ̂l is retained if dl = (λ̄m − λ̂l) Σλm^{-1} (λ̄m − λ̂l)^t ≤ s. This relation (a Mahalanobis distance) determines whether the observed vector λ̂l falls inside the ellipsoid or not; l ∈ [1, Lm] indexes the candidates corresponding to λm, where Lm is the total number of candidates for Λm, and s is a threshold defined by the user, fixed in our case at two (a trade-off between the risk of non-detection and the ambiguity of detection). If at least one detection exists among the candidates λ̂l, the part detector returns the candidate nearest to the mean vector λ̄m. If the selected candidate does not correspond to the target object part (i.e. a false hypothesis), another candidate must be evaluated, and the process continues; if needed, it ends when all candidates have been tested.

Focusing in the overall feature space: model updates by Kalman filtering. After the detection of an m-part, we can take advantage of the observation λ̂m compatible with λm in order to constrain future detections. In the general case, the ROI of λm is deduced from the covariance matrix Σx. Because the statistical relationships between the different parameters are described by the covariance matrix, the simple detection of one part λm makes it possible not only to adjust the position of the other parts but also to sharpen the confidence intervals of the parameters describing them. Model updating is achieved by means of a degenerate Kalman filter (for details see [22]).

2.4 Decision Phase

Finding a decision rule that minimizes the error rate is quite difficult because of the recursive nature of the proposed approach. Instead, it is suitable to make a holistic observation of the overall object once the model is almost placed over the probable target. In our case, this decision is made using an SVM classifier. Even though the final decision is made by template matching, it is necessary to indicate to the algorithm when to stop the recursive process and to abandon wrong hypotheses early.

Branch & Bound. During the recognition stage, it is necessary to have an indicator of whether it is worth continuing the search for object parts. The decision criterion for object/non-object clearly depends on whether or not a given object part has been detected during the recognition process. Indeed, the fact of not having
detected a given object part could completely invalidate a given hypothesis. This dependence can be modeled in terms of a conditional probability. Thus, the problem may be posed as follows. We consider a region of analysis in which we look for an object, and in particular for an object part. Let d be the event "we have a detection compatible with the searched object part" and O the event "the target object is present in the region of analysis". By Bayes' rule, we have Pr(O|d) = Pr(d|O) Pr(O) / Pr(d), the a posteriori probability, which can be understood as the probability that the object is present in the region of analysis given that we have a detection compatible with the searched part. It is important to consider not only Pr(O|d) but also Pr(O|¬d). What is really relevant for us is to have this measure of the object's presence not just for one detection d, but for a set of detections d1, d2, ..., dk corresponding to having detected the parts Λ1, Λ2, ..., Λk, respectively. Thus, given a detection dk corresponding to Λk, where k is the current state of the recognition process, we can pose the problem as Pr(Ok|dk) = Pr(dk|Ok) Pr(Ok) / Pr(dk), where Pr(Ok) = Pr(Ok−1|dk−1). Here, Pr(Ok−1|dk−1) corresponds to the new object-presence probability after one of the object's parts was detected in the previous state. For the zero level, we can define a probability Pr(O0) = 0.5, which corresponds to equal probabilities of presence and absence of the object inside the initial region of analysis. In the same way, we can consider the object-absence probability given dk detections, obtained as Pr(¬Ok|dk) = Pr(dk|¬Ok) Pr(¬Ok) / Pr(dk). Because we are interested in the likelihood ratio between the probabilities of presence and absence of the object given a certain number of detections, we have Lk = log[Pr(Ok|dk)/Pr(¬Ok|dk)] = log[Pr(dk|Ok)/Pr(dk|¬Ok)] + log[Pr(Ok)/Pr(¬Ok)]. In our case, for a given state k, the log-likelihood ratio can therefore be computed as Lk = Lk−1 + Bk, where Bk = log[Pr(dk|Ok)/Pr(dk|¬Ok)]. Similarly, for the non-detection case we obtain Lk = Lk−1 + ¬Bk, with ¬Bk = log[Pr(¬dk|Ok)/Pr(¬dk|¬Ok)]. The calculation of Pr(dk|Ok) and Pr(dk|¬Ok) is beyond the scope of this paper. With this formulation we are able to obtain, at each iteration, a measure of the presence of the object in the region of analysis. If the presence probability becomes too small, it is convenient to stop exploring this branch and return to the previous level. Therefore, at each iteration of the recognition process we evaluate the probability of presence of the object in order to assess the pertinence of continuing the search along this branch. The branch-and-bound criterion is Lk ≤ τs, where τs is a threshold defined by the user. If this relation becomes true, the process abandons the current hypothesis and returns one level up in the search tree; a new hypothesis must then be tested.

Final decision by an SVM classifier. After a number of detections we observe that, for the remaining parts, the regions of interest (in the geometric space) are strongly reduced. In this case it can be more convenient to stop the recursive process and to
decide by observing the whole object. In this work, in order to give a final decision concerning the presence or absence of the target, we have used an SVM classifier. To summarize, the proposed decision strategy is as follows:
1. From the remaining parts (for a given resolution level), find the maximal variance σ²max among them.
2. If σ²max < τσ, observe the whole ROI and build a feature vector to be evaluated by the SVM classifier.
3. If the object is not recognized, abandon this branch and continue the process until there are no more parts to select.
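Two of the building blocks above, the Mahalanobis gating of part candidates (Section 2.3) and the branch-and-bound update Lk = Lk−1 + Bk, are simple enough to sketch directly. The code below is an illustration under our own naming conventions, not the authors' implementation; the conditional probabilities passed in are assumed to be available from training.

```python
import numpy as np

def gate_candidates(lam_bar, sigma_inv, candidates, s=2.0):
    """Keep candidates whose squared Mahalanobis distance to the part mean
    is <= s, and return them sorted so the closest one is tried first."""
    scored = [(float((c - lam_bar) @ sigma_inv @ (c - lam_bar)), c)
              for c in candidates]
    kept = [(d, c) for d, c in scored if d <= s]
    return [c for _, c in sorted(kept, key=lambda t: t[0])]

def update_llr(L_prev, detected, p_d_obj, p_d_bg):
    """One branch-and-bound step: L_k = L_{k-1} + B_k (detection) or + notB_k.
    p_d_obj = Pr(d_k | O_k), p_d_bg = Pr(d_k | not O_k)."""
    if detected:
        return L_prev + np.log(p_d_obj / p_d_bg)
    return L_prev + np.log((1.0 - p_d_obj) / (1.0 - p_d_bg))
```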
3 Testing for Face and Pedestrian Detection

In order to validate our approach, several tests have been carried out for face and pedestrian recognition. To illustrate the algorithm's evolution (see Figure 2-III), four iterations of the recognition process are shown. In Figure 2-III.a we show the position of the initial object parts obtained after learning. In the first column (Figure 2-III.b), the ellipse corresponds to the ROI where the first selected part will be searched. The second column (Figure 2-III.c) shows the part detected inside the ellipse (the one nearest to the mean vector). In the third column (Figure 2-III.d) we show the new ROI of the remaining parts once the model has been updated. We can observe that, step by step (at each level of the process state), the ROI of all the object parts shrinks after each model update (Figure 2-III.d), as the initial model adapts to the target object. Each row represents one of the four process states. In Figure 2-II, III and IV, some examples obtained from the test are shown. We can observe that the object model is placed correctly over the target object, in location and scale. We note that, thanks to the dynamic search for local parts, we are able to recognize objects varying in scale (up to a factor of 2:1) in a continuous way. Thus, to detect objects varying in size between 128 and 512 pixels, we need only three scales of analysis. For the face recognition test, we evaluated three object models: the model obtained directly from the learning stage (model 1, with around 300 parts), and two reduced models obtained by keeping only the more correlated parts (150 parts for model 2 and 50 parts for model 3). Results show acceptable recognition scores of 89.1% for model 1, 90.2% for model 2 and 92.5% for model 3. Another important result is the significant reduction in the average number of SVM classifier evaluations compared with exhaustive search methods: 8.86, 5.91 and 4.01 evaluations for models 1, 2 and 3, respectively. The average number of model updates is 46.25, 38.28 and 18 for models 1, 2 and 3, respectively. Even though there is no significant variation in the average number of model updates and SVM evaluations across the three tests, there is a strong reduction in computing time (by a factor of almost 60 between model 1 and model 3). In terms of detection quality, there is no significant variation between the different tests. Moreover, in Figure 2-V we show an example of the shift-of-attention trajectory followed during recognition. This trajectory differs from one example to another. We remark that, for some test images, the algorithm takes a long time to recognize the object (or does not recognize it at all). This is mainly due to the object
Fig. 2. Examples obtained from face and pedestrian recognition test. I) Example of the algorithm’s evolution. II),III),IV) Some results from the face and pedestrian recognition test. V) Example of a shift-of-attention trajectories during recognition.
representation. Even though the proposed object representation is well adapted to a visual search framework, it is desirable to have fewer object parts and more global descriptors, and to avoid redundancy across resolutions. Adding a quad-tree decomposition to our object representation may be useful. Concerning the branch-and-bound formulation, for almost all the test images the algorithm abandons a wrong hypothesis after at most 12 non-detected parts. For the SVM test, after only a few detections (between 8 and 20) the recursive process is stopped and a holistic observation is made. Both results are a direct consequence of focusing in the search space.
4 Conclusion

We have presented a methodology for visual object recognition based on a focused approach. We have described the object representation, the learning stage, the recognition strategy and the decision rules. The results obtained from the implementation of the proposed methodology for face and pedestrian recognition justify our approach. In fact, the number of possible candidates for a given part Λm is strongly reduced as the object's parts are detected, which entails a decrease in the combinatorics of part correspondence. Furthermore, the average number of SVM tests is strongly reduced compared with typical approaches. In this way, structural and appearance-based approaches are unified under the same framework. In summary, focusing in the overall feature space yields an important gain when looking for an object in an image. Additional results (not shown here), such as simultaneous object recognition, localization and tracking, all formalized under the same framework, encourage us to continue exploring this direction. As future work, on the one hand we must mainly improve the object representation to obtain better results; on the other hand, the proposed methodology could be used in an active vision framework by integrating a smart camera, so that intelligent data acquisition could be achieved.
References 1. Papageorgiou, C., Oren, M., Poggio, T.: A general framework for object detection. In: ICCV 1998: Proceedings of the Sixth International Conference on Computer Vision, p. 555. IEEE Computer Society Press, Washington, DC, USA (1998) 2. Schneiderman, H., Kanade, T.: Object detection using the statistics of parts. International Journal of Computer Vision 56, 151–177 (2004) 3. Serre, T., Wolf, L., Poggio, T.: A new biologically motivated framework for robust object recognition. Technical report (2004) 4. Viola, P., Jones, M.: Rapid object detection using a boosted cascade of simple features. In: Conference on computer vision and pattern recognition, vol. 1, pp. 511–518 5. Leung, T., Burl, M., Perona, P.: Finding faces in cluttered scenes using random labelled graph matching. In: fifth Intl. Conf. on Computer Vision, pp. 637–644 (1995) 6. Burl, M., Weber, M., Perona, P.: A probabilistic approach to object recognition using local photometry and global geometry. In: Burkhardt, H., Neumann, B. (eds.) ECCV 1998. LNCS, vol. 1407, pp. 628–641. Springer, Heidelberg (1998)
7. Mohan, A., Papageorgiou, C., Poggio, T.: Example-based object detection in images by components. IEEE Transactions on Pattern Analysis and Machine Intelligence 23, 349–361 (2001) 8. Felzenszwalb, P., Huttenlocher, D.: Pictorial structures for object recognition (2003) 9. Tsotsos, J., Culhane, S., Yan Key Wai, W., Lai, Y., Davis, N., Nuflo, F.: Modeling visual attention via selective tuning. Artificial intelligence 78, 507–545 (1995) 10. Takacs, B., Wechsler, H.: A dynamical and multiresolution model of visual attention and its application to facial landmark detection. Computer Vision and Image Understanding 70, 63–73 (1998) 11. Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. IEEE transactions on pattern analysis and machine intelligence 20, 1254–1259 (1998) 12. Draper, B.A., Lionelle, A.: Evaluation of selective attention under similarity transformations. Computer vision and image understanding. 100, 152–171 (2005) 13. Walther, D., Itti, L., Riesenhuber, M., Poggio, T., Koch, C.: Attentional selection for object recognition- a gentle way. In: Second IEEE International Workshop, BMCV, pp. 472–479 (2002) 14. Sun, Y., Fisher, R.: Object-based visual attention for computer vision. Informatics research report EDI-INF-RR-0213 (June 2004) 15. Frintrop, S., Rome, E.: (Simulating visual attention for object recognition) 16. Machrouh, J., Tarroux, P.: Attentional mechanisms for interactive image exploration. EURASIP Journal on Applied Signal Processing 2005, 2391–2396 (2005) 17. Murphy, M., Torralba, A., Freeman, W.: Using the forest to see the trees: a graphical model relating features, objects, and scenes. In: Advances in Neural Information Processing Systems, vol. 16, MIT Press, Cambridge (2003) 18. Ramström, O., Christensen, H.: Object detection using background context. In: Procedings of the 17th International Conference on Pattern Recognition (ICPR 2004), IEEE Computer Society Press, Los Alamitos (2004) 19. Dickinson, S., Christensen, H., Tsotsos, J., Olofsson, G.: Active object recognition integrating attention and viewpoint control. Computer vision and image understanding 67, 239–260 (1997) 20. Autio, I., Lindgren, K.: Attention-driven parts-based object detection (2004) 21. Deco, G., Schürmann, B.: A hierarchical neural system with attentional top-down enhancement of the spatial resolution for object recognition. Vision Research 40, 2845–2859 (2000) 22. Trujillo, N., Chapuis, R., Chausse, F., Blanc, C.: On road simultaneous vehicle recognition and localization by model based focused vision. In: IAPR Conference on Machine Vision Applications 2005,Tsukuba, Japan (2005)
A Robust Image Segmentation Model Based on Integrated Square Estimation Shuisheng Xie1 , Jundong Liu1 , Darlene Berryman2, Edward List2 , Charles Smith3 , and Hima Chebrolu3 1 2
School of Elec. Eng. & Comp. Sci. School of Human & Consumer Sci. Ohio University Athens OH 3 Department of Neurology University of Kentucky Lexington KY
Abstract. This paper presents a robust segmentation method based on the integrated squared error or L2 estimation (L2 E). Formulated under the Finite Gaussian Mixture (FGM) framework, the new model (FGML2E) has a strong discriminative ability in capturing the major parts of intensity distribution without being affected by outlier structures or heavy noise. Comparisons are made with two popular solutions, the EM and F CM algorithms, and the experimental results clearly show the improvement made by our model.
1 Introduction
Image segmentation, i.e. partitioning an image into homogeneous areas, is one of the most fundamental problems in a variety of applications, including but not limited to remote sensing, optical imaging, and medical image analysis. Although great strides have been made by the research community in solving various practical problems, many difficult challenges remain, especially for medical images. Poor image contrast and noise are very common for many modalities, such as ultrasound, low-Tesla MRI, PET and SPECT. Subject movement and partial volume effects in the imaging process can further deteriorate image quality by blurring tissue boundaries. It is this weakness in the current technology that leads us to propose, in this paper, a new segmentation method that is very robust to noise and outlier structures.

1.1 Related Work
A variety of approaches to image segmentation have been proposed in the literature. Pixel-based methods, especially those built on parametric pixel statistics, are widely employed. This type of method estimates the distribution profile of each class based purely on image pixel intensities, and classification is carried out according to the probability value of each individual pixel. With
respect to the form of the probability density function, finite Gaussian mixture models (FGM) [2,3,8] have been used to model the intensity distribution in many segmentation methods. The estimation of the label distribution is usually formulated in the sense of the maximum a posteriori (MAP) or maximum likelihood (ML) criterion. The Expectation-Maximization (EM) algorithm is widely employed in solving the ML estimation of the model parameters [4,14]. In [4], a prior neighborhood-consistency assumption, expressed with a Markov random field (MRF), is incorporated into the model formulation. The algorithm starts with an initial estimation step to obtain initial tissue parameters, and then a three-step EM process that updates the class labels, tissue parameters and MR bias field is conducted iteratively. The SPM brain segmentation algorithm [8] also assumes the input image conforms to a mixture of three Gaussian distributions, corresponding to Gray Matter (GM), White Matter (WM) and Cerebrospinal Fluid (CSF), respectively. Segmentation is achieved through an iterative procedure, where each iteration involves three operations: estimating the tissue class parameters from the bias-corrected image, assigning individual voxel probabilities based on the class parameters, and re-estimating and applying the bias estimation function. Upon convergence, the final values of the class probabilities give the likelihood of each voxel belonging to a certain tissue type. ML and EM are also utilized in this model, in which the E-step computes the membership probabilities and the M-step calculates the cluster and non-uniformity correction parameters. Although impressive segmentation results have been reported for [8,4], the ML (EM) estimate, known for not being robust, sometimes makes the segmentation procedure vulnerable to noise and outlier structures. With ML (EM) as the numerical parameter estimation solution, the above-mentioned algorithms all potentially suffer from inaccurate estimates, and therefore incorrect pixel labeling, if the input image's intensity distribution deviates from the assumed finite mixture. One remedy is to utilize robust estimators, such as the popular M-estimators [9]. However, robust M-estimators, which vary in the choice of the influence function [1], are all formulated with respect to a certain scale parameter whose choice is critical for success. This scale parameter often requires some prior knowledge to be set precisely, and is therefore not easy to determine.

1.2 Our Proposed Method
In this paper, we develop a new method for finite Gaussian mixture segmentation. We adopt a statistically robust measure called L2E as the fitting criterion, defined as the integrated squared difference between the true density and the assumed Gaussian mixture. This matching criterion is minimized through an iterative gradient-descent procedure that estimates the individual class parameters as well as the proportion of each class. Our algorithm works particularly well for inputs where the image contains structures whose intensity profiles overlap heavily. Compared with the popular ML (EM) measure, our model has the advantage of being able to capture target structures accurately, without being affected by outlier components. Experiments
with MicroCT mouse data sets are presented to illustrate the performance of our algorithm. Comparison is made with ML-EM and Fuzzy C-Means (FCM). The rest of the paper is organized as follows. Section 2 provides a brief introduction to L2E. The application of L2E to finite Gaussian mixture estimation, as well as its robustness properties, is described in detail in Section 3. The segmentation results obtained using our FGML2E model are presented in Section 4. Section 5 concludes the paper.
2 Integrated Square Estimation (L2E)
In [7], the L2 distance has been investigated as an estimation tool for a variety of parametric statistical models. Estimation through the minimization of the integrated square error, or L2E error, is shown to be inherently robust. Since then, several works have been published applying the L2 measure to image registration [11,12]. A brief introduction of the L2E measure is given as follows. Suppose y(x) is an unknown density function and ŷ(x|θ) is a parametric approximation of y(x). The L2E minimization estimator for θ is given as:

$\hat{\theta}_{L_2E} = \arg\min_{\theta} \int [\hat{y}(x|\theta) - y(x)]^2\,dx = \arg\min_{\theta} \int [\hat{y}^2(x|\theta) - 2\hat{y}(x|\theta)\,y(x) + y^2(x)]\,dx \qquad (1)$

Observing that y²(x) does not contain any θ term, it can be dropped from the functional minimization in equation (1). Since y(x) is a density function, ∫ ŷ(x|θ) y(x) dx can be viewed as the expectation of ŷ(x|θ), which can be approximated by the sample mean over the n data points. Putting these two considerations together, equation (1) can be rewritten as:

$\hat{\theta}_{L_2E} = \arg\min_{\theta} \Big[ \int \hat{y}^2(x|\theta)\,dx \;-\; \frac{2}{n}\sum_{i=1}^{n} \hat{y}(x_i|\theta) \Big] \qquad (2)$
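As a small illustration of criterion (2) (ours, not from the paper), the sketch below evaluates it numerically for an arbitrary parametric density: the squared-density integral is approximated by the trapezoidal rule on a dense grid, and the second term is the sample average of the model density. The helper names `model_pdf`, `theta` and `grid` are hypothetical.

```python
import numpy as np

def l2e_criterion(model_pdf, theta, x, grid):
    """Numerical evaluation of the L2E criterion of Eq. (2) for an arbitrary
    parametric density model_pdf(x, theta); grid is a dense 1-D grid covering
    the support, used to approximate int y_hat(x|theta)^2 dx."""
    y_hat_grid = model_pdf(grid, theta)
    integral = np.trapz(y_hat_grid ** 2, grid)                # first term of (2)
    sample_term = 2.0 / len(x) * model_pdf(np.asarray(x), theta).sum()
    return integral - sample_term

# Example model: a single Gaussian (closed forms exist for this case, but the
# numeric version applies to any density).
gauss = lambda x, th: (np.exp(-(x - th[0]) ** 2 / (2 * th[1] ** 2))
                       / (np.sqrt(2 * np.pi) * th[1]))
```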
3 Finite Gaussian Mixture Segmentation Using L2E (FGML2E)
Segmentation algorithms based on finite Gaussian mixtures (FGM) form a popular group of solutions in various applications, e.g. human brain analysis, animal imaging, etc. In FGM models, the pixel intensity values for each tissue are assumed to follow a Gaussian distribution, and the overall histogram is presumed to be approximable by a mixture of several Gaussians. Let φ(x|μ, σ) denote the univariate normal density; the parametric distribution assumed by FGMs is:

$y(x|\theta) = \sum_{k=1}^{K} w_k\,\phi(x|\mu_k, \sigma_k) \qquad (3)$
where θ = {w, μ, σ} is a combined vector representing the percentages, means, and standard deviations of the Gaussian components.
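For reference, a minimal sketch of evaluating the mixture density of Eq. (3) is given below (our own illustration; the array names are placeholders). It is used later only conceptually, when the histogram of an image is compared against the fitted components.

```python
import numpy as np

def fgm_density(x, w, mu, sigma):
    """Finite Gaussian mixture of Eq. (3): y(x|theta) = sum_k w_k * phi(x|mu_k, sigma_k).
    x is a 1-D array of intensities; w, mu, sigma are length-K arrays with sum(w) = 1."""
    x = np.asarray(x, dtype=float)[:, None]                    # shape (n, 1)
    phi = (np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))
           / (np.sqrt(2 * np.pi) * sigma))                     # shape (n, K)
    return phi @ np.asarray(w)                                 # shape (n,)
```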
The Expectation-Maximization (EM) algorithm, the estimator of the Maximum Likelihood (ML) measure, has been widely used in many FGM-based segmentation algorithms [4,8]. However, ML, together with EM, is inherently not robust and is potentially influenced by input outliers. In this paper, we adopt the L2E measure to develop a robust segmentation method. Our solution is still an FGM model, and the model fitting is carried out through a special case of Eqn. (2), where ŷ(x|θ) is a mixture of Gaussians. Due to the importance of two-phase (background and foreground) segmentation and the fact that multi-object segmentation can usually be tackled through a hierarchical execution of the two-phase procedure, we focus on the two-Gaussian mixture case in this paper. Based on the formula described in [13], when K = 2 the L2E functional to be minimized in Eqn. (2) can be derived as:

$L_2E(w_1, \mu_1, \sigma_1, \mu_2, \sigma_2) = \frac{w_1^2}{2\sqrt{\pi}\,\sigma_1} + \frac{(1-w_1)^2}{2\sqrt{\pi}\,\sigma_2} + \frac{2 w_1 (1-w_1)}{\sqrt{2\pi(\sigma_1^2+\sigma_2^2)}}\,\exp\!\Big(-\frac{(\mu_1-\mu_2)^2}{2(\sigma_1^2+\sigma_2^2)}\Big)$
$\qquad\qquad - \frac{2}{n} \sum_{i=1}^{n} \Big[ \frac{w_1}{\sqrt{2\pi}\,\sigma_1} \exp\!\Big(-\frac{(x_i-\mu_1)^2}{2\sigma_1^2}\Big) + \frac{1-w_1}{\sqrt{2\pi}\,\sigma_2} \exp\!\Big(-\frac{(x_i-\mu_2)^2}{2\sigma_2^2}\Big) \Big] \qquad (4)$

For discussion purposes, we also include the minimization energy for the single-mode Gaussian case (K = 1):

$L_2E(\mu, \sigma) = \frac{1}{2\sqrt{\pi}\,\sigma} - \frac{2}{n} \sum_{i=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma}\,\exp\!\Big(-\frac{(x_i-\mu)^2}{2\sigma^2}\Big) \qquad (5)$
Inherent Robustness Properties of L2 E
To demonstrate the robustness of L2 E with respect to outliers, which also serves as the motivation of our model, a group of experiments have been conducted, based on simulated data sets. The comparison is made with the EM algorithm. Fig. 1.(a) shows an experiment for a single-mode Gaussian (K = 1) case. The input data is made of a single Gaussian plus an outlier portion. The EM estimates, as shown in Fig. 1.(a), are greatly deviated from the true values, while L2 E captures the single Gaussian part very well. Fig. 1.(b) presents the results of the second experiment, in which the outlier portion has bulky overlap with the inlier part. L2 E maintains the ability to capture the major component, without being affected by the outlier. Fig. 1.(c) shows the result for a two-Gaussian mixture (K = 2) case. The data is composed of two Gaussians and one outlier portion. The result using EM demonstrates the impact of outliers on the global parameter estimation – none of the two Gaussians is estimated correctly by the EM. L2 E, to the contrary, successfully gets hold of the two major Gaussians with great accuracy. A similar experiment is shown in Fig. 1.(d), where the outlier portion is located at a higher
A Robust Image Segmentation Model
647
position of the intensity spectrum. This time, the outliers, instead of Gaussian components, got captured by the EM algorithm. Therefore if an algorithm is based on EM, the classification results would likely be far away from the true values, if this same data set is applied. L2 E still works perfectly in rejecting the outliers and producing the desired results. For the examples showed above, if extra classes are assigned, EM algorithm can manage to capture both the inliers and the outliers. However, to separate the inlier portion off the outliers usually requires some prior knowledge or certain post-classification user intervention, which certainly demands extra efforts in practice. 0.035
0.018 Histogram EM Component L2E Component
Fig. 1. Parameter estimation using the EM and the FGML2E. True distributions: (a) 0.7N (5, 2) + 0.3N (15, 2); (b) 0.7N (5, 1) + 0.3N (8, 2); (c) 0.41N (3, 2) + 0.41N (9, 1) + 0.18N (15, 1); (d) 0.41N (3, 2) + 0.41N (9, 1) + 0.18N (20, 1) (Figures are better seen on screen than in black/white print).
Comparing with the M-estimators, L2 E differs in the sense that it is not formulated with any scale parameter. This property can again be counted as an advantage of L2 E, as the success of the M-estimators is heavily dependent on the often elusive setup of the scale parameter. Another proof of the superiority of
L2E comes from ([15],[7]). The authors compared L2E with 15 other robust estimators, and L2E often came out on top, particularly with the outlier-abundant, heavy-tailed data sets.

3.2 Numerical Solution
To minimize the energy in Eqn. (4), a gradient-descent numerical solution is employed. For input images that have more than two classes, we use a hierarchical approach. The finite Gaussian mixture model using L2E (FGML2E) is first used to capture the dominant Gaussian in the histogram. We then divide the histogram into two parts, and the FGML2E fitting is continued within the partitioned histogram sections. The procedure terminates when the exact number of classes is reached.
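The sketch below illustrates one possible reading of this hierarchical scheme, under our own assumptions: scipy's general-purpose Nelder-Mead minimizer stands in for the authors' hand-coded gradient descent, the dominant Gaussian is fitted with the K = 1 energy of Eq. (5) (parametrized by log σ to keep σ positive), and the data are split at the fitted mean; the splitting rule and class-budget allocation are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def l2e_single(params, x):
    """L2E energy of Eq. (5) for one Gaussian, parametrized by (mu, log sigma)."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    dens = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    return 1.0 / (2 * np.sqrt(np.pi) * sigma) - 2.0 / x.size * dens.sum()

def hierarchical_fgml2e(x, n_classes):
    """Fit the dominant Gaussian, split the data at its mean, and recurse
    until n_classes components have been extracted (assumes non-empty splits)."""
    x = np.asarray(x, dtype=float)
    res = minimize(l2e_single, x0=[x.mean(), np.log(x.std() + 1e-6)],
                   args=(x,), method="Nelder-Mead")
    mu, sigma = res.x[0], np.exp(res.x[1])
    if n_classes == 1:
        return [(mu, sigma)]
    left, right = x[x <= mu], x[x > mu]
    k_left = max(1, n_classes // 2)
    return (hierarchical_fgml2e(left, k_left)
            + hierarchical_fgml2e(right, n_classes - k_left))
```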
4 Experimental Results
To demonstrate the improvement made by our FGML2E algorithm, a segmentation experiment was conducted with a microCT mouse image. Comparisons are made with the EM and the Fuzzy C-Means (FCM) algorithms. The quantification of regional fat in mouse models is very important in type 2 diabetes and obesity research [5]. The data set, shown in Fig. 2(a), was generated with a GE eXplore MicroCT scanner. Four structures can be distinguished by the human eye: air (background), fat, soft tissues and bones. Separating the subcutaneous fat from the soft tissues is a challenging problem, as the gray values of these two areas are very close, as can be observed in Fig. 3. The segmentation results using FGML2E, EM and FCM are shown in Fig. 2(b), 2(c) and 2(d), respectively. The predetermined number of classes, K, is set to 4 for all three models. As is evident, only FGML2E successfully captures the subcutaneous fat area; EM and FCM both blend the fat and soft tissues together and treat them as a single class. Fig. 3 explains why different outcomes are produced by FGML2E and EM. The image histogram consists of one large Gaussian-like component for the background and three smaller ones for the other tissue types. The "Fat" and "Soft Tissues" parts, shown in Fig. 3(b), overlap heavily, so the EM tends to treat them as a single Gaussian. Our FGML2E, with its strong discriminative capability, manages to capture the two components with great accuracy. If extra classes are assumed, the EM and the FCM may be able to find small structures. For the same microCT data set, if we increase the number of classes to 9 for the EM and 10 for the FCM, the "Fat" part starts to be separated out, as shown in Fig. 4. However, the segmentation result is still not as good as that of the FGML2E model with the exact number of classes (K = 4). In addition, there is no general guideline on how to set the number of classes in the EM and FCM to make the target structure stand out, so this approach (increasing the number of classes) is not always applicable in practice.
Fig. 2. Segmentation results from the mouse image using the FGML2E, the EM and the F CM with 4 components. (a) one slice of the MicroCT mouse image; (b) FGML2E result; (c) EM result; (d) FCM result. (Figures are better seen on screen than in black/white print)
Fig. 3. Components estimated by the EM and the FGML2E (a) histogram of the entire image; (b) histogram section and the fitting results near the Fat area. (Figures are better seen on screen than in black/white print)
Fig. 4. Segmentation results of the mouse image using the EM and FCM with increased number of classes. (a) EM result with 9 components; (d) FCM result with 10 components. (Figures are better seen on screen than in black/white print).
5
Conclusions
L2 E is an inherently robust minimum distance estimator. The FGML2E model we propose in this paper has many desirable properties, the most salient of which is its strong discriminative ability in capturing the major parts of the data without being affected by outlier structures or heavy noise. Comparisons have been made with two popular solutions, EM and FCM, and the results clearly show the improvement achieved by our model. Our method has great potential in practical applications where the structure of interest has low contrast against the surrounding tissues. Applying this method to other practical problems, e.g., white matter hyperintensities, is the ongoing direction of this research theme.
Acknowledgment This work was supported by the Biomolecular Innovation and Technology (BMIT) Partnership project funded by Ohio University.
References 1. Black, M., Rangarajan, A.: The outlier process: Unifying line processes and robust statistics. In: CVPR (1994) 2. Guillemaud, R., Brady, J.M.: Estimating the bias field of MR images. IEEE Trans. on Medical. Imaging 16, 238–251 (1997) 3. Held, K., et al.: Markov Random Field Segmentation of Brain MR Images. IEEE Trans. on Medical Imaging 16(6) (1997) 4. Zhang, Y., Brady, M., Smith, S.: Segmentation of brain MR images through a hidden Markov random field model and the expectation maximization algorithm. IEEE Trans. on Medical Imaging 20(1), 45–57 (2001) 5. Papademetris, X., Shkarin, P., Staib, L.H., Behar, K.L.: Regional Whole Body Fat Quantification in Mice. In: Christensen, G.E., Sonka, M. (eds.) IPMI 2005. LNCS, vol. 3565, pp. 369–380. Springer, Heidelberg (2005)
6. Pohl, K., Bouix, S., Kikinis, R., Grimson, W.: Anatomical guided segmentation with non-stationary tissue class distributions in an expectation-maximization framework. In: ISBI 2004, pp. 81–84 (2004) 7. Scott, D.W.: Parametric Statistical Modeling by Minimum Integrated Square Error. Technometrics 43(3) (2001) 8. Mechelli, A., Price, C.J., Friston, K.J., Ashburner, J.: Voxel-Based Morphometry of the Human Brain: Methods and Applications. Current Medical Imaging Reviews 105–113 (2005) 9. Zhang, Z.: Parameter Estimation Techniques: A Tutorial with Application to Conic fitting. Image and Vision Computing 25, 59–76 (1997) 10. Titterington, D.M., Makov, U.E., Smith, A.: Statistical Analysis of Finite Mixture Distributions. John Wiley, New York (1985) 11. Liu, J., Vemuri, B.C., Marroquin, J.L.: Local Frequency Representations for Robust Multimodal Image Registration. IEEE TMI 21(5), 462–469 (2002) 12. Jian, B., Vemuri, B.C.: A Robust Algorithm for Point Set Registration Using Mixture of Gaussians. In: ICCV 2005, pp. 1246–1251 (2005) 13. Wand, M., Jones, M.: Kernel Smoothing. Chapman and Hall, London (1995) 14. Wells, W.M., et al.: Adaptive segmentation of MRI data. IEEE Trans. Med. Imag. 15, 429–442 (1996) 15. Wojciechowski, W.: Robust Modeling, doctoral dissertation, RiceUniversity, Houston (2001)
Measuring Effective Data Visualization Ying Zhu Department of Computer Science Georgia State University Atlanta, Georgia, USA
[email protected]
Abstract. In this paper, we systematically examine two fundamental questions in information visualization – how to define effective visualization and how to measure it. Through a literature review, we point out that the existing definitions of effectiveness are incomplete and often inconsistent – a problem that has deeply affected the design and evaluation of visualization. There is also a lack of standards for measuring the effectiveness of visualization as well as a lack of standardized procedures. We have identified a set of basic research issues that must be addressed. Finally, we provide a more comprehensive definition of effective visualization and discuss a set of quantitative and qualitative measures. The work presented in this paper contributes to the foundational research of information visualization.
1 Introduction Information visualization research can be divided into three categories – basic or foundational work, transitional approaches to create and refine techniques, and application-driven efforts [1]. In the 2006 NIH-NSF Visualization Research Challenges Report [1], Johnson et al. pointed out that "a disproportionate amount of attention is currently devoted to incremental refinement of a narrow set of techniques." In view of this problem, a number of prominent researchers have called for more emphasis on engaging foundational problems in visualization [1-3]. For example, Jarke van Wijk [3] wrote, "If we look at the field now, many algorithms and techniques have been developed, but there are few generic concepts and theories. … methodological issues have to be studied further. This concerns questions like how to design visualizations and how to measure and evaluate the effectiveness of various solutions." In this paper, we examine two foundational problems of visualization: how to define the effectiveness of visualization, and how to measure it. First, we survey the current literature and find that, although the term "effective visualization" has been used extensively in many publications, there has not been a consistent and universally accepted definition for this term. There are different views on what "effective visualization" really means, and the existing definitions are incomplete. We examine the existing measures of effectiveness and discuss their limitations. Through this analysis, we have identified a list of basic research issues that must be addressed in order to improve the measures of effectiveness.
To address some of the issues we have identified, we propose a more comprehensive definition of effective visualization. We point out that the effectiveness of visualization depends on the interaction between visualization, data, task, and user. We examine a set of interdependent factors that influence the effectiveness of visualization and discuss approaches to measure these factors.
2 A Review of the Current Definitions of Effective Visualization Although the term "effective visualization" has been used extensively in the visualization literature, there are different views on what this term means. Some researchers take a more data-centric view and suggest that effectiveness largely depends on the correspondence between the visualization and the data it displays. Dastani [4] states that "a visualization presents the input data effectively if the intended structure of the data and the perceptual structure of the visualization coincide." Similarly, Wattenberg and Fisher [5] point out that the structure of a visualization should match the structure of the data. Tufte [6] has expressed a similar view, suggesting that an effective visualization is one that maximizes the data/ink ratio. As a result, Tufte essentially recommends that a visualization should be packed with as much data as possible [6, 7]. This view is very influential in the current practice of visualization design. However, Kosslyn [7] has challenged the use of the data/ink ratio as a primary guide for visualization design. In addition, there is no empirical evidence that links maximizing the data/ink ratio with more accurate interpretation or better task efficiency. Some researchers take a task-centric view and believe that the effectiveness of visualization is task specific. For example, Casner [8] strongly argues that a visualization should be designed for a specific task, and that effective visualizations are the ones that improve task efficiency. Bertin [9] suggests that visualization designers should have a specific task in mind when they design the visualization. In an empirical study, Nowell et al. [10] conclude that the evaluation of effective visualization should focus more on tasks than data. Amar and Stasko [11] have also criticized the "representational primacy" in current visualization research, and advocated a knowledge task-based framework for visualization design and evaluation. However, the task-centric view of visualization effectiveness has been disputed by some researchers. Tufte, for example, does not seem to believe that the designer should be specific about the tasks that a visualization is designed for [6, 7]. Tweedie [12] dismisses the task-specific argument as irrelevant because user interactions can make a visualization suitable for different tasks. So far, the empirical studies in psychology and computer-human interaction seem to support the task-centric view. Many psychological studies have shown that the effectiveness of visualizations is task specific [10, 13-16]. In particular, a number of psychological studies have shown that visualizations have little impact on task performance when task complexity is low [17-19], suggesting that the effectiveness of visualization needs to be evaluated in the context of task complexity [19]. From a psychological point of view, Scaife and Rogers [15] point out that the effectiveness of visualization depends on the interaction between the external and internal representation of information. Cleveland and McGill [20], as well as
Mackinlay [21], think that the effectiveness of visualization is about how accurately a data visualization can be interpreted. Along the same line, Tversky et al. [22] give two principles for effective visualization – the Principle of Congruence and the Principle of Apprehension. According to the Principle of Congruence, the structure and content of a visualization should correspond to the structure and content of the desired mental representation. According to the Principle of Apprehension, the structure and content of a visualization should be readily and accurately perceived and comprehended. Tversky's principles seem to be an attempt to unify different views on the effectiveness of visualization. This definition covers the mapping between visual structure and data structure as well as the efficiency and accuracy of visualization. However, the influence of task is not mentioned in this definition. Data, task, and internal representation are not the only factors that influence the effectiveness of visualization. Many psychologists also believe that the effectiveness of visualization depends on the reader's working memory capacity, domain knowledge, experience with visualization techniques, as well as their explanatory and reasoning skills [13, 14, 16, 23]. In particular, studies by Petre and Green [24] show that visualization readership skill must be learned, and that there is a clear difference between the way novice users and experienced users explore visualizations. However, the influence of domain knowledge, working and long-term memory, and visualization readership skill on the effectiveness of visualization is not well understood. In summary, there has not been a universally accepted definition of effective visualization. Most of the existing definitions are incomplete and focus on only one aspect of effectiveness. The existing research has identified a set of factors that influence the effectiveness of visualization, but these factors have not been organized in a comprehensive and coherent framework. The lack of such a theoretical framework has deeply affected the design and evaluation of visualization. First, there is no clear consensus as to what criteria should be used to guide visualization design. In many cases, it is not clear what the visualization techniques are optimized for. Are they optimized for accurate interpretation, task efficiency, data/ink ratio, or all of them? These questions are rarely addressed explicitly. Second, even when these questions are addressed by designers, they tend to focus narrowly on one or two factors. For example, Casner [8] emphasizes task efficiency, Mackinlay [21] focuses on accurate interpretation, and many other designers (consciously or unconsciously) focus on the data/ink ratio. What is needed is a systematic design and evaluation approach that considers a comprehensive set of factors.
3 A Review of the Current Measures of Effectiveness There are generally two methods to evaluate the effectiveness of visualization – heuristic evaluation [25, 26] and user studies. Heuristic evaluation is a type of discount evaluation in which visualization experts evaluate visualization designs based on certain rules and principles. For example, Bertin [9], as well as a number of other visualization designers [6, 27-30], provide many rules and examples of good visualization design. Shneiderman's "visual information-seeking mantra" and his
taxonomy for information visualization are often used as guidelines for design and heuristic evaluations [25, 31]. Amar and Stasko’s knowledge task framework [11] and Tversky’s two principles on visualization effectiveness [22] are the more recent additions to the rules and principles. Most of the heuristic evaluations generate qualitative measures of effectiveness, but there are also a number of quantitative measures. Tufte [6] uses data/ink ratio as a measure for visualization effectiveness, which is challenged by Kosslyn [7]. Wattenberg and Fisher [5] use a machine vision technique to extract the perceptual structure from a visualization and use it to match the structure of the data presented. Kosslyn [23] has developed a method for analyzing visualizations to reveal design flaws. This method requires isolating four types of constituents in a visualization, and specifying their structure and interrelations at a syntactic, semantic, and pragmatic level of analysis. This method has the potential to generate some quantitative measures. There are several limitations with the current heuristic evaluations. First, most of the rules and principles are not empirically validated – a long standing problem in the visualization field. Second, many rules and principles are abstract and vaguely defined, leading to ambiguous interpretations. There needs to be a classification or taxonomy of the heuristic rules and principles. Third, the rules and principles are often presented without a context. Under what circumstances should a rule or principle apply? This type of question is rarely investigated and discussed. Fourth, there have not been standard procedures for conducting heuristic evaluations. For example, what is the optimal data/ink ratio for a particular type of visualization? What is considered an optimal mapping between the visual structure and the perceived data structure? What are the procedures for evaluating a visualization based on the Principle of Congruence and Principle of Apprehension? Kosslyn’s method [23] is perhaps the closest to a systematic heuristic evaluation methodology, but we have not found an example of its application in visualization design and evaluation. The second evaluation approach is user study, and the most common measures of effectiveness are task completion time, error rate, and user satisfaction. A number of researchers in psychology and information visualization have measured task efficiency of visualization [8, 10, 17, 32, 33]. However, the results are mixed – visualizations do not always lead to shorter task completion time. Cleveland and McGill [20] record the subjects’ judgments of the quantitative information on graphs. Cox, et al. [13, 34] measure the number of errors made in interpreting the visualizations. Saraiya, et al. [35] propose an interesting method that evaluate bioinformatics visualization by measuring the amount and types of insight they provide and the time it takes to acquire them. Beyond error rate and task completion time, Schneiderman and Plaisant [36] recently propose a method called Multidimensional In-depth Long-term Case studies that involve documenting usage (observations, interviews, surveys, logging etc.) and expert users’ success in achieving their professional goals. While user studies are extremely useful for evaluating visualizations, measuring task completion time and error rates also have their limitations. 
First, these are largely black-box approaches that do not help explain specifically what causes the performance problem or improvement.
One way to address this issue is to establish a correlation between heuristic evaluation and user studies. Specifically, what is needed is a methodology that systematically analyzes the various factors that influence the efficiency and accuracy of visualization comprehension. Through this analysis, the evaluators attempt to predict the benefits of different visual features over non-visual representations. User studies should be designed accordingly to test these hypotheses. To address this issue, we have developed a complexity analysis methodology that systematically analyzes and quantifies a set of parameters to predict the cognitive load involved in visual information read-off and integration [37]. We have applied this approach to a number of computer security visualization programs and are currently expanding to other areas. Another major problem facing user studies today is the lack of standard benchmark databases, benchmark tasks, and benchmark measures. The user study procedures have not been standardized. As a result, the user study data are not generally comparable with each other. There has been some progress in this area. For example, the Information Visualization Benchmarks Repository [38] has been established. More importantly, fully annotated benchmark databases for major application areas of information visualization, such as computer security and bioinformatics, are needed. In addition, benchmark task specifications, standardized user study procedures, as well as baseline measures need to be developed. In summary, much foundational research needs to be done to improve the measures of effectiveness in information visualization. The major tasks include the following:
1. Develop a comprehensive definition of effective visualization.
2. Study the factors that influence the effectiveness of visualization. Identify the measures for each factor and organize them in a coherent framework.
3. Develop and refine systematic heuristic evaluation methods that generate more quantitative measures of effectiveness. (Kosslyn's method [23] is a good start. The complexity analysis method [37] that we propose is also an attempt to address this issue.) The visualization rules and principles need to be classified, organized, and empirically verified.
4. Create annotated benchmark databases, benchmark tasks, and benchmark measures for major application domains of information visualization. Standardize user study procedures and use benchmark databases for user studies so that the results are comparable across different studies.
5. Closely correlate systematic heuristic evaluation with user studies. Use the outcome of heuristic evaluations to guide user studies, and design user studies to test the hypotheses from heuristic evaluations.
In the next section, we will address the first two issues by providing a definition of effective visualization and discuss a framework for the measures of effectiveness.
4 Define and Measure the Effectiveness of Visualization We define the effectiveness of data visualization in terms of three principles: accuracy, utility, and efficiency. Under each principle, we also discuss the steps to measure the effectiveness.
Principle of Accuracy: For a visualization to be effective, the attributes of visual elements shall match the attributes of data items, and the structure of the visualization shall match the structure of the data set. The accuracy principle defines the relationship between visualization and data. Measuring the accuracy involves several steps. First, a taxonomy of visualization techniques should be developed to classify the various attributes and structures of visualization. So far there has not been a generally accepted taxonomy of visualization techniques, although many attempts have been made [9, 31, 39-43]. Further research is needed to develop a unified classification of visualization techniques. To address this issue, we have proposed a hierarchical classification of visual elements [37]. Second, a domain specific data analysis should be conducted to develop a data taxonomy that classifies data properties and data structures. There has been a number of domain independent taxonomies of data [31, 42], but domain specific data classifications are needed for better accuracy assessment. Third, once the classifications of visualization and data are developed, the visualization designers shall identify the possible mapping between visual attributes and data attributes as well as the mapping between visual structures and data structures. An accuracy score shall be assigned to each visualization-data mapping as a measure of accuracy. The accuracy score shall be determined by consulting the relevant psychological studies or visualization rules [4, 5, 9, 20, 30, 44]. The initial values of the accuracy scores may be based on intuition, but shall be continuously refined by domain specific empirical studies or newer psychological theories. Principle of utility: An effective visualization should help users achieve the goal of specific tasks. The utility principle defines the relationship between visualizations and tasks. A visualization may be designed for multiple tasks, but the tasks should be explicitly specified so that the utility and efficiency of the visualization can be measured. Measuring the utility also involves multiple steps. First, a domain task analysis should be conducted to develop a task classification. Second, an annotated benchmark database should be established, and clearly specified benchmark tasks and measurable goals should be developed. Third, the utility of the visualizations is evaluated by measuring how well they help achieve the goal of the benchmark tasks, using the benchmark data set. A utility score for each task can be calculated for the visualization based on the number of benchmark goals achieved. A baseline utility score can be calculated using non-visual display. Principle of efficiency: An effective visualization should reduce the cognitive load for a specific task over non-visual representations. The efficiency principle defines the relationship between visualizations and users. It means the visualization should be easy to learn and improve task efficiency. The common measure of efficiency is the time to complete a task [8, 10, 33]. Benchmark databases and benchmark tasks should be used in the user studies so that the results are comparable across different studies. Baseline task completion times should be recorded for non-visual displays and used as references. To address the limitation of task completion time, we have proposed a method to analyze the complexity of visualization, which serves as an indicator of the perceptual and cognitive load
involved in exploring the visualization [37]. Therefore the outcome of complexity analysis is another measure of efficiency. Eye movement tracking is often used to study a reader’s attention. But it can also be used as a measure of efficiency – frequent eye movement is a major factor that influences task performance in visual comprehension. In addition, both task completion time and eye movements can be recorded over a period of time to measure the learning curve of a visualization design. Finally, users’ subjective opinions on the accuracy, utility, and efficiency of the visualization should be collected through interviews and observations. It is also important to point out that the accuracy, utility, and efficiency of visualization are greatly influenced by users’ domain knowledge, experience with visualization, and visual-spatial capability. It remains a major challenge to measure the impact of these factors. A possible approach is to observe or record the use of visualization by experts and novices and conduct expert-novice comparisons – a method that has been used successfully in the field of psychology. Table 1 summarizes both the quantitative and qualitative measures for accuracy, utility, and efficiency. Table 1. Quantitative and qualitative measures of effectiveness
Accuracy
  Quantitative measures: measure the number of interpretation errors.
  Qualitative measures: interview; observation; expert-novice comparison.
Utility
  Quantitative measures: measure the number of achieved benchmark goals; record the number of times a visualization design is selected by users to conduct a task.
  Qualitative measures: interview; observation; expert-novice comparison.
Efficiency
  Quantitative measures: record task completion time; record eye movements; measure the learning curve.
  Qualitative measures: visualization complexity analysis; interview; observation; expert-novice comparison.
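Purely as an illustration of how the quantitative entries in Table 1 might be recorded, the sketch below computes a utility score (fraction of benchmark goals achieved) and a relative efficiency gain over a non-visual baseline; the record structure and the numbers are hypothetical and not taken from any study cited above.

```python
from dataclasses import dataclass

@dataclass
class TaskRun:                 # hypothetical log of one benchmark task run
    goals_achieved: int
    goals_total: int
    completion_time: float     # seconds, with the visualization
    baseline_time: float       # seconds, same task on a non-visual display

def utility_score(run: TaskRun) -> float:
    """Fraction of benchmark goals achieved (utility principle)."""
    return run.goals_achieved / run.goals_total

def efficiency_gain(run: TaskRun) -> float:
    """Relative reduction in completion time versus the non-visual baseline."""
    return (run.baseline_time - run.completion_time) / run.baseline_time

runs = [TaskRun(8, 10, 95.0, 140.0), TaskRun(9, 10, 80.0, 120.0)]
print(sum(utility_score(r) for r in runs) / len(runs),
      sum(efficiency_gain(r) for r in runs) / len(runs))
```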
5 Summary The research presented in this paper is an attempt to systematically analyze two foundational problems in information visualization – how to define effective visualization and how to measure it. A review of the literature shows that the current definitions of effective visualization are incomplete and often inconsistent. We have pointed out a number of basic research issues that need to be addressed. Finally, we
provide a comprehensive definition of effective visualization and present a set of quantitative and qualitative measures of effectiveness.
References 1. Johnson, C., Moorhead, R., Munzner, T., Pfister, H., Rheingans, P., Yoo, T.S.: NIH/NSF Visualization Research Challenges Report. IEEE Press (2006) 2. Johnson, C.R.: Top Scientific Visualization Research Problems. IEEE Computer Graphics & Applications 24, 13–17 (2004) 3. Wijk, J.J.v.: The Value of Visualization. In: Proceedings of IEEE Visualization Conference Minneapolis, MN, IEEE, Los Alamitos (2005) 4. Dastani, M.: The Role of Visual Perception in DataVisualization. Journal of Visual Languages and Computing 13, 601–622 (2002) 5. Wattenberg, M., Fisher, D.: Analyzing perceptual organization in information graphics. Information Visualization 3, 123–133 (2004) 6. Tufte, E.R.: The Visual Display of Quantitative Information, 2nd edn. Graphics Press (2001) 7. Kosslyn, S.M.: Graphics and Human Information Processing: A Review of Five Books. Journal of the American Statistical Association 80, 499–512 (1985) 8. Casner, S.M.: A task-analytic approach to the automated design of graphic presentation. ACM Transactions on Graphics 10, 111–151 (1991) 9. Bertin, J.: Semiology of Graphics: University of Wisconsin Press (1983) 10. Nowell, L., Schulman, R., Hix, D.: Graphical Encoding for Information Visualization: An Empirical Study. In: Proceedings of the IEEE Symposium on Information Visualization 2002 (InfoVis) (2002) 11. Amar, R.A., Stasko, J.T.: Knowledge Precepts for Design and Evaluation of Information Visualizations. IEEE Transactions on Visualization and Computer Graphics 11, 432–442 (2005) 12. Tweedie, L.: Characterizing Interactive Externalizations. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI) (1997) 13. Cox, R.: Representation construction, externalised cognition and individual differences. Learning and Instruction 9, 343–363 (1999) 14. Freedman, E.G., Shah, P.: Toward a Model of Knowledge-Based Graph Comprehension. In: Hegarty, M., Meyer, B., Narayanan, N.H. (eds.) Diagrams 2002. LNCS (LNAI), vol. 2317, pp. 59–141. Springer, Heidelberg (2002) 15. Scaife, M., Rogers, Y.: External cognition: how do graphical representations work? International Journal of Human-Computer Studies 45, 185–213 (1996) 16. Vekiri, I.: What Is the Value of Graphical Displays in Learning? Educational Psychology Review 14, 261–312 (2002) 17. Lohse, G.L.: The role of working memory on graphical information processing. Behaviour & Information Technology 16, 297–308 (1997) 18. Marcus, N., Cooper, M., Sweller, J.: Understanding Instructions. Journal of Educational Psychology 88, 49–63 (1996) 19. Sweller, J.: Visualisation and Instructional Design. In: Proceedings of the International Workshop on Dynamic Visualizations and Learning (2002) 20. Cleveland, W.S., McGill, R.: Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods. Journal of the American Statistical Association 79, 531–554 (1984)
21. Mackinlay, J.: Automating the Design of Graphical Presentations of Relational Information. ACM Transactions on Graphics 5, 110–141 (1986) 22. Tversky, B., Agrawala, M., Heiser, J., Lee, P., Hanrahan, P., Phan, D., Stolte, C., Daniel, M.-P.: Cognitive Design Principles for Automated Generation of Visualizations. In: Allen, G.L. (ed.) Applied Spatial Cognition: From Research to Cognitive Technology, Lawrence Erlbaum Associates, Mahwah (2006) 23. Kosslyn, S.M.: Understanding Charts and Graphs. Applied Cognitive Psychology 3, 185– 226 (1989) 24. Petre, M., Green, T.R.G.: Learning to Read Graphics: Some Evidence that ’Seeing’ an Information Display is an Acquired Skill. Journal of Visual Languages and Computing 4, 55–70 (1993) 25. Craft, B., Cairns, P.: Beyond guidelines: what can we learn from the visual information seeking mantra? In: Proceedings of the 9th IEEE International Conference on Information Visualization (IV) (2005) 26. Tory, M., Moller, T.: Evaluating Visualizations: Do Expert Reviews Work? IEEE Computer Graphics and Applications 25, 8–11 (2005) 27. Chambers, J.M., Cleveland, W.S., Tukey, P.A.: Graphical methods for data analysis. Duxbury Press (1983) 28. Cleveland, W.S.: Visualizing Data. Hobart Press (1993) 29. Wilkinson, L.: The Grammar of Graphics, 2nd edn. Springer, Heidelberg (2005) 30. Senay, H., Ignatius, E.: Rules and Principles of Scientific Data Visualization. In: State of the art in data visualization, SIGGRAPH Course Notes (1990) 31. Shneiderman, B.: The Eyes Have It: A Task by Data Type Taxonomy for Information Visualizations. In: Proceedings of the IEEE Conference on Visual Languages, IEEE, Los Alamitos (1996) 32. Casner, S.M., Larkin, J.H.: Cognitive Efficiency Considerations for Good Graphic Design. In: Proceedings of the Eleventh Annual Conference of the Cognitive Science Society Ann Arbor, MI (1989) 33. Lohse, G.L.: A Cognitive Model for Understanding Graphical Perception. HumanComputer Interaction 8, 353–388 (1993) 34. Cox, R., Brna, P.: Supporting the use of external representation in problem solving: the need for flexible learning environments. Journal of Artificial Intelligence in Education 6, 239–302 (1995) 35. Saraiya, P., North, C., Duca, K.: An Insight-Based Methodology for Evaluating Bioinformatics Visualizations. IEEE Transactions on Visualization and Computer Graphics 11, 443–456 (2005) 36. Shneiderman, B., Plaisant, C.: Strategies for evaluating information visualization tools: multi-dimensional in-depth long-term case studies. In: Proceedings of the AVI workshop on Beyond time and errors: novel evaluation methods for information visualization, ACM, New York (2006) 37. Zhu, Y., Suo, X., Owen, G.S.: Complexity Analysis for Information Visualization Design and Evaluation. In: Proceedings of the 3rd International Symposium on Visual Computing (ISVC). LNCS, vol. 4841, Springer, Heidelberg (2007) 38. Plaisant, C.:Information Visualization Repository (2007), http://www.cs.umd.edu/hcil/InfovisRepository/ 39. Card, S.K., Mackinlay, J.: The Structure of the Information Visualization Design Space. In: Proceedings of the IEEE Symposium on Information Visualization (InfoVis) (1997) 40. Chi, E.H.: A Taxonomy of Visualization Techniques using the Data State Reference Model. In: Proceeding of IEEE Symposium on Information Visualization (InfoVis) (2000)
41. Lohse, G.L., Biolsi, K., Walker, N., Rueter, H.H.: A Classification of Visual Representations. Communications of the ACM 37, 36–49 (1995) 42. Tory, M., Möller, T.: Rethinking Visualization: A High-Level Taxonomy. In: Proceeding of the IEEE Symposium on Information Visualization (InfoVis) (2004) 43. Wehrend, S., Lewis, C.: A Problem-oriented Classification of Visualization Techniques. In: Proceedings of the IEEE Symposium on Information Visualization (InfoVis), IEEE, Los Alamitos (1990) 44. Senay, H., Ignatius, E.: A Knowledge-Based System for Visualization Design. IEEE Computer Graphics & Applications 14, 36–47 (1994)
Automatic Inspection of Tobacco Leaves Based on MRF Image Model
Yinhui Zhang1, Yunsheng Zhang1, Zifen He1, and Xiangyang Tang2
1 Kunming University of Science and Technology, Yunnan, China
yinhui [email protected]
2 Kunming Shipbuilding Equipment Co., Ltd.
Abstract. We present a design methodology for an automatic machine vision application aimed at detecting the size ratio of tobacco leaves, which is fed back to adjust the running parameters of the manufacturing system. Firstly, the image is represented by a Markov Random Field (MRF) model consisting of a label field and an observation field. Secondly, according to Bayes' theorem, the segmentation problem is translated into Maximum a Posteriori (MAP) estimation of the label field, and the estimation problem is solved by the Iterated Conditional Modes (ICM) algorithm. Finally, we describe the setup of the inspection system and experiment with a real-time image acquired from it; the experiment shows better detection results than Otsu's segmentation method, especially in the larger leaf regions.
1
Introduction
Machine vision systems have been developed to perform industrial inspection tasks; they can replace manual visual separation and at the same time provide real-time signal feedback for on-line manufacturing process adjustment. For example, machine vision systems exist that can recognize apple defects [1], assess printing quality with an RBF neural network [2], and segment faults in fabric images using the Gabor wavelet transform [3]. However, a domain that has not hitherto been investigated using machine vision is the size-ratio inspection of tobacco leaves. The size ratio of tobacco leaves is one of the most important parameters in the tobacco manufacturing and packaging line; it is used to evaluate tobacco quality and to adjust running parameters. Nowadays, this parameter is obtained manually by an operator. Firstly, the operator performs stochastic sampling of tobacco leaves from the manufacturing line and puts them onto a vibration grading sifter, which separates the tobacco leaves into different sizes. Secondly, the tobacco leaves in each size class are weighed by the operator. Finally, the data is registered and fed back to another operator, who adjusts the manufacturing parameters according to it. The drawbacks of this inspection method are obvious. On the one hand, the measuring method is off-line and takes a relatively long time, that is, the measuring could not
synchronize with the parameter adjustment. On the other hand, the measuring process incurs unnecessary manpower costs. We present a novel design methodology for detecting the size ratio of tobacco leaves automatically with the help of machine vision devices. The segmentation accuracy of the acquired tobacco leaf image is the key to the size-ratio computation, which is used to estimate tobacco quality and for feedback control of the manufacturing system. As one of the important early vision processes, an image segmentation algorithm aims to assign a class label to each pixel of an image based on the properties of the pixel and its relationship with its neighbors. A good segmentation separates an image into simple regions with homogeneous properties, each with a different texture. Recently, Z. Kato [4] proposed a hierarchical Markovian image model for image segmentation that is optimized by a multi-temperature annealing algorithm. Mark R. Luettgen [5] presented a class of multiscale stochastic models for approximately representing Gaussian Markov random fields and illustrated the computational efficiency of this stochastic framework. Fabien Salzenstein [6] presented a comparison of recent statistical models based on fuzzy Markov random fields and chains for multispectral image segmentation, and developed a fuzzy Markov chain model that works in an unsupervised way. Qimin Luo [7] presented an unsupervised multiscale color image segmentation algorithm whose basic idea is to apply mean-shift clustering to obtain an over-segmentation and then merge regions at multiple scales according to the minimum description length criterion. The rest of the paper is organized as follows: in Section 2, the real-time image is represented by an MRF model consisting of a label field and an observation field, the segmentation problem is translated into MAP estimation of the label field, and the estimation problem is optimized by the ICM algorithm. The experimental setup and detection results are presented in Section 3. Finally, the paper is concluded in Section 4.
2
Statistical Image Segmentation Model
There are various approaches to image segmentation, such as classical watershed segmentation and region splitting and merging. Our approach consists of building statistical image models and simply selecting the most likely labeling. To obtain the most likely labeling we need to define a probability measure on the set of all possible labelings. One easily observes the regularity that neighboring pixels usually possess similar intensities, which can be expressed mathematically by Markov Random Fields (MRFs). Another reason for dealing with MRF models is the Hammersley-Clifford theorem, which allows us to define MRFs through clique potentials.
2.1 General MRF Models
MRF models in computer vision became popular with the famous paper of S. Geman and D. Geman on image restoration [8]. The field has grown rapidly
in recent years, addressing a variety of low-level image tasks such as compression, edge detection, segmentation and motion detection. We now discuss the mathematical formulation of an MRF image model. Let R = {r1, r2, ..., rM} be a set of sites and F = {Fr : r ∈ R} a set of image data (observations) on these sites. The set of all possible observations f = (fr1, fr2, ..., frM) is denoted by Φ. Furthermore, we are given another set of sites S = {s1, s2, ..., sN}; each of these sites may take a label from Λ = {0, 1, ..., L − 1}. The configuration space Ω is the set of all global discrete labelings ω = (ωs1, ωs2, ..., ωsN), ωs ∈ Λ. The two sets of sites R and S are not necessarily disjoint; they may have common parts or refer to a common set of sites. Our goal is to model the labels and observations with a joint random field (X, F) ∈ Ω × Φ [4][5]. The field X = {Xs}s∈S is called the label field and F = {Fr}r∈R is called the observation field (Fig. 1).
Fig. 1. Observation field and label field. The upper layer denotes observation field and the lower layer denotes label field.
2.2
Image Segmentation Model
A very general problem is to find the labeling ω̂ which maximizes the a posteriori probability P(ω|F); ω̂ is the MAP estimate of the label field. Bayes theorem tells us that

$$P(\omega \mid \mathcal{F}) = \frac{P(\mathcal{F} \mid \omega)\, P(\omega)}{P(\mathcal{F})} \qquad (1)$$

Actually P(F) does not depend on the labeling ω, and we make the assumption that

$$P(\mathcal{F} \mid \omega) = \prod_{s \in S} P(f_s \mid \omega_s) \qquad (2)$$

It is then easy to see that the global labeling, which we are trying to find, is given by:

$$\hat{\omega} = \arg\max_{\omega \in \Omega} \prod_{s \in S} P(f_s \mid \omega_s) \prod_{C \in \mathcal{C}} \exp\!\big(-V_C(\omega_C)\big) \qquad (3)$$
It is obvious from this expression that the a posteriori probability also derives from a MRF. The energies of cliques of order 1 directly reflect the probabilistic modeling of labels without context, which would be used for labeling the pixels independently. Let us assume that P (fs |ωs ) is Gaussian, the class λ ∈ Λ = {0, 1, . . . , L − 1} is represented by its mean value μλ and its deviation σλ . We get the following energy function:
$$U_1(\omega, \mathcal{F}) = \sum_{s \in S} \left[ \ln\!\left(\sqrt{2\pi}\,\sigma_{\omega_s}\right) + \frac{(f_s - \mu_{\omega_s})^2}{2\sigma_{\omega_s}^2} \right] \qquad (4)$$

and

$$U_2(\omega) = \sum_{C \in \mathcal{C}} V_2(\omega_C) \qquad (5)$$

where

$$V_2(\omega_C) = V_{\{s,r\}}(\omega_s, \omega_r) = \begin{cases} -\beta & \text{if } \omega_s = \omega_r \\ +\beta & \text{if } \omega_s \neq \omega_r \end{cases} \qquad (6)$$
where β is a model parameter controlling the homogeneity of the regions. As β increases, the resulting regions become more homogeneous. Clearly, we have 2L+1 parameters. They are denoted by the vector Θ [4][5]:

$$\Theta = (\vartheta_0, \vartheta_1, \ldots, \vartheta_{2L})^{T} = (\mu_0, \mu_1, \ldots, \mu_{L-1}, \sigma_0, \ldots, \sigma_{L-1}, \beta)^{T} \qquad (7)$$
If the parameters are supposed to be known, we say that the segmentation process is supervised. For supervised segmentation, we are given a set of training data (small sub-images), each of them representing a class. According to the law of large numbers, we approach the statistics of the classes by the empirical mean and empirical variance: for all λ ∈ Λ,

$$\mu_\lambda = \frac{1}{|S_\lambda|} \sum_{s \in S_\lambda} f_s \qquad (8)$$

$$\sigma_\lambda^2 = \frac{1}{|S_\lambda|} \sum_{s \in S_\lambda} (f_s - \mu_\lambda)^2 \qquad (9)$$
where Sλ is the set of pixels included in the training set of class λ. The parameter β is initialized in an ad-hoc way. Typical values are between 0.5 and 1. Fig. 2 shows the overview of the MRF image segmentation framework[6][9].
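As a short illustration (our own sketch, not the authors' code), the empirical class statistics of Eqs. (8)-(9) can be computed directly from user-selected training regions:

```python
import numpy as np

def class_statistics(f, training_masks):
    """Empirical mean and standard deviation of each class, Eqs. (8)-(9).

    f              : 2-D array of grey values
    training_masks : list of boolean arrays, one per class, marking the
                     training sub-images S_lambda
    """
    params = []
    for mask in training_masks:
        samples = f[mask]
        mu = samples.mean()                     # Eq. (8)
        sigma2 = ((samples - mu) ** 2).mean()   # Eq. (9)
        params.append((mu, np.sqrt(sigma2)))
    return params
```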
Fig. 2. Overview of the MRF image segmentation framework
3
Experimental Setup and Results
The optical image acquisition system is the key section of the tobacco leaf quality inspection framework; its role is to capture clear tobacco images on the production line at high speed. The image acquisition system is composed of a CCD camera, an optical lens and an illumination system. A TR-33 line-scan camera produced by DALSA Corporation is adopted, with 2048 pixels per line and a line rate of 11 kHz. We adopted a Nikon 50 mm normal lens, which has the merits of low distortion and high suitability to various polarizing screens. The detection resolution is 0.375 mm in the running direction and 0.15 mm in the crossing direction. A fluorescent lamp with a color temperature of 6500 K is adopted in our system. Fig. 3 shows one frame of the real-time images captured by our image acquisition system, and Fig. 4 shows its histogram. It is obvious that the histogram has multiple modes that cannot be segmented simply by Otsu's threshold method. We therefore adopt the stochastic image segmentation model presented in [4] and perform MAP estimation through the ICM algorithm. Firstly, the input vector Θ is acquired by randomly selecting four rectangular regions from the tobacco leaves and four from the background area. The mean and variance of the eight regions are shown in Tab. 1, with the averaged values displayed in the last row. The large difference between the average values of the two classes guarantees the feasibility of MAP estimation during the iterated optimization process. The MAP estimation problem is solved by Iterated Conditional Modes (ICM) [10]. The problem to be optimized is

$$\hat{x}_{\mathrm{MAP}} = \arg\min_{x} \Big\{ -\sum_{s \in S} \log p_{y_s|x_s}(y_s \mid x_s) + \beta\, t_1(x) \Big\} \qquad (10)$$

We iteratively minimize this function with respect to each pixel $x_s$:

$$\hat{x}_s = \arg\min_{x_s} \Big\{ -\log p_{y_s|x_s}(y_s \mid x_s) + \beta\, v_1(x_s \mid x_{\partial s}) \Big\} \qquad (11)$$
where ∂s denotes the neighboring points of s. The image segmented by this ICM-based MAP estimation is shown in Fig. 5. The segmentation result has both homogeneous regions and smooth edges.
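A minimal, unoptimised sketch of this ICM update is given below; it assumes the Gaussian likelihood of Eq. (4), a 4-neighbourhood, and the class parameters estimated from the training regions, and it is our own illustration rather than the implementation used in the experiments.

```python
import numpy as np

def icm_segment(y, params, beta=0.7, n_iter=10):
    """Iterated Conditional Modes for the MRF model of Section 2.

    y      : 2-D array of grey values
    params : list of (mu, sigma) per class, e.g. from class_statistics()
    beta   : homogeneity parameter (typical values 0.5-1, as noted in the text)
    """
    mus = np.array([p[0] for p in params], dtype=float)
    sgs = np.array([p[1] for p in params], dtype=float)
    # per-pixel log-likelihood of each class under the Gaussian model of Eq. (4)
    ll = (-np.log(np.sqrt(2 * np.pi) * sgs)
          - (y[..., None] - mus) ** 2 / (2 * sgs ** 2))
    x = ll.argmax(axis=2)                 # maximum-likelihood initial labelling
    H, W = y.shape
    labels = np.arange(len(params))
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                cost = -ll[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        # clique potential: +beta for each disagreeing neighbour
                        cost += beta * (labels != x[ni, nj])
                x[i, j] = cost.argmin()   # Eq. (11)
    return x
```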
Fig. 3. Acquired Real-time Image
Fig. 4. Histogram of the original image

Table 1. Input Parameters Obtained from the Original Image

Class 1
  Region 1: mean 55.42, variance 259.11
  Region 2: mean 43.98, variance 155.02
  Region 3: mean 46.49, variance 53.66
  Region 4: mean 48.34, variance 121.48
  Average:  mean 48.56, variance 147.32
Class 2
  Region 1: mean 129.85, variance 33.72
  Region 2: mean 135.35, variance 26.96
  Region 3: mean 137.69, variance 50.18
  Region 4: mean 147.13, variance 56.10
  Average:  mean 137.51, variance 41.74
However, noise in the segmented image is clearly visible in Fig. 5. For this reason we perform morphological smoothing based on opening and closing operators, usually called open-close filtering. The opening of image f by structuring element b is defined as

$$f \circ b = (f \ominus b) \oplus b \qquad (12)$$

where $\ominus$ and $\oplus$ denote the erosion and dilation operators, respectively. Similarly, the closing of f by b is defined as

$$f \bullet b = (f \oplus b) \ominus b \qquad (13)$$
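For a binary leaf mask, the open-close filtering of Eqs. (12)-(13) with the 2 × 2 structuring element of Fig. 6 can be reproduced with standard morphology routines; the snippet below is a sketch using SciPy, not the implementation used in the inspection system.

```python
import numpy as np
from scipy import ndimage

def open_close_filter(label_image, foreground_label=1):
    """Open-close smoothing of the binary leaf mask, Eqs. (12)-(13)."""
    b = np.ones((2, 2), dtype=bool)                    # structuring element of Fig. 6
    mask = (label_image == foreground_label)
    mask = ndimage.binary_opening(mask, structure=b)   # (f erode b) dilate b
    mask = ndimage.binary_closing(mask, structure=b)   # (f dilate b) erode b
    return mask
```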
Fig. 5. Segmented image by ICM algorithm
Fig. 6. Structuring element

Table 2. Segmentation Region Areas Through MRF and Otsu's Methods

Region | Otsu's area (pixels) | MRF area (pixels) | True pixels | Otsu's error rate (%) | MRF error rate (%)
1 | 4525 | 5482 | 5215 | 13.23 | 5.12
2 | 4250 | 5084 | 4953 | 14.19 | 2.64
3 | 3307 | 3880 | 3689 | 10.36 | 5.18
4 | 189 | 277 | 251 | 24.70 | 10.36
5 | 143 | 173 | 164 | 12.8 | 5.49
6 | 12 | 9 | 10 | 20 | 10
7 | 4 | 4 | 4 | 0 | 0
8 | 4 | 4 | 4 | 0 | 0
Fig. 7. Segmentation error rate of the eight regions
Fig. 7 shows the segmentation error rates of the Otsu and MRF methods. The notation R1 to R8 on the x axis represents the eight segmented regions, and the values on the y axis denote the segmentation error rates of the two methods. From this figure we can see that both methods achieve satisfactory segmentation accuracy in small regions that contain only a few pixels. However, the segmentation accuracy of the MRF method exceeds that of Otsu's once the region area exceeds 10 pixels. The higher error rate of Otsu's method is attributed to illumination non-uniformity around larger regions and to the adaptive limitations of Otsu's method.
4
Conclusion
This paper presents a design methodology for an automatic machine vision application aimed at detecting the size ratio of tobacco leaves, which is fed back to adjust the running parameters of the manufacturing system. We introduced the general MRF-based image segmentation model and the ICM algorithm used to compute the MAP estimate. The results segmented with the MRF model achieve both homogeneous regions and smooth edges. Open-close filtering using a 2 × 2 structuring element is then performed to remove the noise. Finally, the segmentation accuracy of the MRF and Otsu methods is compared, and the results show that the MRF method achieves smaller segmentation error rates, especially in the larger leaf areas. The proposed method has some advantages but also some drawbacks: the segmentation framework based on the MRF model is more complex and thus demands much more computing time. Our future work will be to construct a wavelet-domain multiscale MRF model, which would increase the detection efficiency dramatically.
References 1. Zhiqiang, W., Yang, T.: Building a rule-based machine vision system for defect inspection on apple sorting and packing lines. Expert Systems with Application 16, 307–313 (1999)
2. Tchan, J., Thompson, R.C, Manning, A.: A computational model of print-quality perception. Expert Systems with Application 17, 243–256 (1999) 3. Arivazhagan, S., Ganesan, L., bama, S.: Fault segmentation in fabric images using Gabor wavelet transform. In: Machine Vision and Applications, vol. 16, pp. 356– 363. Springer, Heidelberg (2006) 4. Kato, Z.: Multi-scale Markovian modelisation in computer vision with applications to SPOT image segmentation. PhD thesis, INRIA Sophia Antipolis, France (1994) 5. Luettgen, M.R.: Image processing with multiscale stochastic models. PhD thesis, Massachusetts Institute of Technology (1993) 6. Salzenstein, F., Collet, C.: Fuzzy markov random fields versus chains for multispectral image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(11) (2006) 7. Luo, Q., Taghi, M.: Unsupervised multiscale color image segmentation based on MDL principle. IEEE Transactions on Image Processing 15(9) (2006) 8. Geman, S., Geman, D.: Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Trans. On Pattern Analysis and Machine Intelligence 6, 721–741 (1984) 9. Bouman, C.A., Shapiro, M.: A Multiscale random field model for Bayesian image segmentation. IEEE Trans. On Image Processing 3, 162–177 (1994) 10. Besag, J.: On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society 48, 259–302 (1986)
A Mesh Meaningful Segmentation Algorithm Using Skeleton and Minima-Rule Zhi-Quan Cheng, Kai Xu, Bao Li, Yan-Zhen Wang, Gang Dang, and Shi-Yao Jin DL Laboratory, National University of Defense Technology, Changsha City, Hunan Province, P.R. China (410073)
Abstract. In this paper, a hierarchical shape decomposition algorithm is proposed that integrates the advantages of skeleton-based and minima-rule-based meaningful segmentation algorithms. The method makes use of new geometrical and topological functions of the skeleton to define initial cutting critical points, and then employs salient contours with negative minimal principal curvature values to determine natural final boundary curves among parts. Extensive experiments have been carried out on many meshes, showing that our framework provides more reasonable perceptual results than a purely skeleton-based [8] or minima-rule-based [15] algorithm. In addition, our algorithm can not only divide a mesh of any genus into a collection of genus-zero patches, but also partition level-of-detail meshes into similar parts.
1 Introduction Mesh segmentation [1][2][3] refers to partitioning a mesh into a series of disjoint elements, and it has become a key ingredient in many mesh operation methods, including texture mapping [4], shape manipulation [5][6], simplification and compression, mesh editing, mesh deformation [7], collision detection [8], and shape analysis and matching [9][10]. In particular, the process that decomposes a model into visually meaningful components is called part-type segmentation [1][2] (shape decomposition or meaningful segmentation). Current research increasingly seeks automatic procedures that can efficiently produce more natural results in keeping with human recognition and shape understanding. In particular, more advanced coherency issues should be addressed, such as pose invariance [11], handling more complex models, e.g., David and Armadillo, extracting similar parts and shapes over similar objects, and more. 1.1 Related Work The basic segmentation problem can be viewed as clustering primitive mesh elements (vertices, edges and faces) into sub-meshes, and the techniques used to compute the partition include hierarchical clustering [9], iterative clustering [4][5], spectral analysis [12], region growing [13], and other methods. A detailed survey of mesh segmentation algorithms can be found in [1][2][3]. Among existing meaningful segmentation algorithms, two types of methods, developed in parallel, have attracted the most attention.
One type, including [13][14] [15], is guided by the minima rule [16], which states that human perception usually divides an object into parts along the concave discontinuity and negative minima of curvature. Enlightened by the minima rule, the mesh’s concave features, identified as natural boundaries, are used for segmentation in the region growing watershed algorithm [13]. Due to the limitation of region growing, the technique cannot cut a part if the part boundary contains non-negative minimum curvatures. And then, Page et al. [14] have used the factors proposed in [17] to compute the salience of parts by indirect super-quadric model. To directly compute the part salience on a mesh part and avoid complex super quadric, Lee et al. [15] have experientially combined four functions (distance, normal, centricity and feature) to guide the cutting path in a reasonable way. So the nice segmentation results are completely dependent on the experimental values of the function parameters, as well as the structures of underlying manifold surface. Since directly targeting the cutting contours, the type algorithms can produce better visual effects, especially at the part borders. However, these algorithms are sensitive to surface noises and tending to incur oversegmentation problems (one instance is shown in Fig. 12.a) mostly due to local concavities. The other, including [6][8][18], is driven by curve-skeleton/skeleton [19] that is 1D topologically equivalent to the mesh. The skeleton-type algorithms don’t overpartition the mesh, but the part boundaries do not always follow natural visual perception on the surface. Consequently, it would be a better choice to combine the skeleton-based approach with the minima rule, since the minima rule alone doesn’t give satisfying results, while skeleton-based methods don’t guarantee a perceptually salient segmentation. 1.2 Overview In the paper, we want to develop a robust meaningful segmentation paradigm, aiming at integrating advantages of minima-rule-based and skeleton-based approaches. Besides this basic aim, the testing models, applied to our algorithm, would be more sophisticated models, e.g., Armadillo and David beside common Cow and Dinopet. And the new approach would also guarantee to divide an arbitrary genus mesh into a collection of patches of genus zero. Especially, the algorithm would be resolutionindependent, which means that the same segmentation is achieved at different levels of detail (i.e., Fig. 1, two Armadillo meshes in different fidelity are decomposed into similar components, although segmented separately). Consequently, we simultaneously use the skeleton and surface convex regions to perform shape decomposition. In a nutshell, the partitioning algorithm can be roughly described in two stages. First, the hierarchical skeleton (Fig. 1.e) of a mesh is computed by using a repulsive force field (Fig. 1.d) over the discretization (Fig. 1.c) of a 3D mesh (Fig. 1.a and 1.b). And for every skeleton level, the cutting critical points (the larger points in Fig. 1.e) are preliminarily identified by geometric and topological functions. Second, near each critical point, corresponding final boundary is obtained using local feature contours in valley regions. As a result, our algorithm can automatically partition a mesh into meaningful components with natural boundary among parts (Fig. 1.f and 1.g).
Fig. 1. Main steps of our mesh segmentation algorithm. (a)low level-of-detail Armadillo with 2,704 vertices (b)high level-of-detail Armadillo with 172,974 vertices (c)voxelized volume representation in 963 grids (d)the corresponding repulsive force field (e)core skeleton with cutting critical points (the larger red points) (f)corresponding segmentation of low level-ofdetail Armadillo (g)corresponding segmentation of high level-of-detail Armadillo.
The rest of the paper is structured as follows. Cutting critical points are located in the section 3 based on the core skeleton extracted in section 2, and then the cutting path completion mechanism is illustrated in section 4. Section 5 demonstrates some results and compares them with related works. Finally, section 6 makes a conclusion and gives some future researching directions.
2 Core Skeleton and Branch Priority 2.1 Core Skeleton In our approach, the skeleton of a mesh is generated by directly adapting a generalized potential field method [18], which works on the discrete volumetric representation [20] of the mesh. In [18], core skeleton is discovered using a force following algorithm on the underlying vector field, starting at each of the identified seed points. At seed points, where the force vanishes, the initial directions are determined by evaluating the eigen-values and eigen-vectors of the Jacobian. The force following process evaluates the vector (force) value at the current point and moves in the direction of the vector with a small pre-defined step (value σ , set as 0.2). Consequently, the obtained core skeleton consists of a set of points sampled by above process. Once the core skeleton of the mesh extracted, a smoothing procedure, described in detail in [21], is applied to the point-skeleton to alleviate the problem of noise. Basically, this procedure expands the fluctuant skeleton to a narrow sleeve, defined by each point’s bounding sphere with radius in the certain threshold value σ , then fines the shortest polygonal path lying in the 2σ wide sleeve. The procedure gives a polygonal approximation of the skeleton, and may be imagined as stretching a loose rubber band within a narrow sleeve. Subsequently, the position of each point in the original skeleton is fine-tuned by translating to the nearest point on the path.
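As an illustration only (not the authors' implementation), the force-following step can be sketched as follows, assuming a `field` callback that returns the interpolated repulsive force at a continuous 3D position and a starting direction obtained from the Jacobian analysis of [18]:

```python
import numpy as np

def follow_force(field, seed, direction, step=0.2, max_steps=10000):
    """Trace one skeleton branch by following the repulsive force field.

    field(p) -> 3-vector : assumed callback giving the interpolated force at p
    seed, direction      : critical point and eigen-direction (from the Jacobian)
    step                 : pre-defined step size sigma (0.2 in the text)
    """
    seed = np.asarray(seed, dtype=float)
    p = seed + step * np.asarray(direction, dtype=float)  # leave the seed first
    points = [seed.copy(), p.copy()]
    for _ in range(max_steps):
        f = field(p)
        n = np.linalg.norm(f)
        if n < 1e-9:                    # force vanishes: another critical point reached
            break
        p = p + step * f / n            # move along the normalised force direction
        points.append(p.copy())
    return np.array(points)
```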
2.2 Skeleton Branch Selection
According to the number of neighboring points, every point of the skeleton can be classified into one of three kinds: terminal nodes (one neighbor), common nodes (two neighbors) and branch nodes (more than two neighbors). In this paper, terminal points and branch points are viewed as feature points, and any subset of the skeleton bounded by feature points is called a skeleton branch. In the following sweeping process, all branches are tested. It is important to determine the order of the branches, since our approach detects the cutting critical points by measuring related geometric and topological properties, and parts that have already been separated are not taken into account in the subsequent computation of critical points. Basically, the ordering should allow small but significant components to be extracted first, so that they are not divided and absorbed into larger components in an improper way. We use three criteria to find the best branch: its type, its length and its centricity.
– The type of a branch is determined by the classification of its two end points. The type weight of a branch with two branch nodes is low (value 0), that of a branch with one terminal and one branch node is medium (value 1), and that of a branch with two terminal nodes is high (value 2).
– The centricity of a point t is defined as the average number of hops from t to all the points of the mesh's skeleton. In a mesh, let maxH represent the maximum average hop count among all the points, maxH = max_t(avgH(t)). We normalize the centricity value of a point t as C(t) = avgH(t)/maxH.
For each branch b, which is a set of points, we define its priority P(b) as its type value plus the product of the reciprocal of its length and the sum of the normalized centricities of its points, where we treat the total number of points as the branch length:
$P(b) = \mathrm{Type}(b) + \frac{1}{\mathrm{Length}(b)} \sum_{t \in b} C(t)$
After a mesh has been partitioned based on the cutting critical points in the selected branch, the current centricity values of points are no longer valid in the following segmentation. Hence, we should re-compute the centricity values after each partitioning when we want to select another branch.
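As an illustration only (the graph representation, helper names and BFS-based hop counts below are our own assumptions, not the authors' code), the priority of a branch could be computed as follows.

from collections import deque

def avg_hops(adj, t):
    # Average hop count from skeleton point t to all other points (BFS over
    # the skeleton graph given as an adjacency dictionary).
    dist = {t: 0}
    q = deque([t])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return sum(dist.values()) / max(len(dist) - 1, 1)

def branch_priority(adj, branch, type_weight):
    # P(b) = Type(b) + (1/Length(b)) * sum of normalized centricities C(t).
    max_h = max(avg_hops(adj, t) for t in adj)
    centricities = [avg_hops(adj, t) / max_h for t in branch]
    return type_weight + sum(centricities) / len(branch)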
3 Locating Cutting Critical Points
Following the principle observed by Li et al. [8], geometrical and topological properties are two important characteristics that distinguish one part from another in mesh segmentation. We adapt the space sweep method (Fig. 2), used in computational geometry, to sweep a given mesh perpendicular to the skeleton branches (represented by points). Our approach prefers to disjoin parts along concave regions. In the following, we define a new function to measure the variation of the geometrical properties and find candidate critical points that identify the corresponding salient changes, rather than adopting the function defined in [8].
Fig. 2. The schematic of space sweep [8]
Let b be a selected branch. If b is a medium-type branch, we sweep it from its terminal point p_start to the other end, the branch node p_end. Some points near the branch node are excluded from the scan, because no effective cutting critical points lie in that neighbouring region. In this paper, the nearby region is a sphere whose centre is the current branch node and whose radius is the minimal distance from that point to the nearest vertex on the surface. Otherwise, when b is a high- or low-type branch, we start to sweep from the end point p_start with the larger cross-section area. For the selected branch b, we compute the area of the cross-section at each point p on the sweeping path from p_start to p_end − 1, and then define our geometric function as follows:

$G(p) = \frac{\mathrm{AreaCS}(p+1) - \mathrm{AreaCS}(p)}{\mathrm{AreaCS}(p)}, \quad p \in [p_{start}, p_{end}-1]$
To accelerate the computation of the cross-section area at each point, we approximate it by counting the voxels intersected by the perpendicular sweeping plane of the current point. By connecting the dots of G(p), we obtain a polygonal curve. The G(p) curve has one general property: it fluctuates such that a few outburst pulses appear on a line very close to zero. Three kinds of dots are filtered out as accurately as possible; Fig. 3 shows the three kinds of sample profiles of AreaCS(p) and G(p). Based on AreaCS(p), it is obvious that Fig. 3.a denotes a salient concavity, while Fig. 3.b and 3.c show how a thin part connects to a thicker part. In the G(p) curve, if the rising edge of a pulse crosses the p axis and its trough is less than −0.15 (corresponding to Fig. 3.a or 3.c), the crossing point t is selected. In addition, the peak of a positive pulse whose value exceeds a threshold (0.4) is selected (corresponding to Fig. 3.b). The points on the skeleton corresponding to the dots t, as indicated by dashed lines, are marked as cutting critical points to divide the original mesh. The segmentation based on G(p) can handle the L-shaped object (Fig. 4.a), which is exactly the ambiguous case for the minima rule. However, it is not practical to directly treat all candidate points found by the above checking procedure (Fig. 4.b) that lie in the turning region as real critical points, since doing so would lead to over-segmentation for this type of object, as shown in Fig. 4.c. Hence, we should avoid such over-partitioning by excluding useless candidate points from the set of critical points.
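One way to read this selection rule in code is sketched below; it is our own loose interpretation, not the authors' implementation, and the thresholds are simply the values quoted above.

import numpy as np

def candidate_critical_points(area, trough_thr=-0.15, peak_thr=0.4):
    # area[p] holds the cross-section area AreaCS(p) along a branch;
    # G(p) = (AreaCS(p+1) - AreaCS(p)) / AreaCS(p).
    area = np.asarray(area, dtype=float)
    g = (area[1:] - area[:-1]) / area[:-1]
    candidates = []
    for p in range(1, len(g)):
        # rising edge crossing the p axis after a deep trough (Fig. 3.a / 3.c)
        if g[p - 1] < trough_thr and g[p] >= 0.0:
            candidates.append(p)
        # strong isolated positive peak (Fig. 3.b)
        elif g[p] > peak_thr and g[p - 1] <= 0.0:
            candidates.append(p)
    return candidates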
Fig. 3. Three kinds of sample profiles of AreaCS(p) and G(p)
Fig. 4. The segmentation of an L-shaped object. (a) the object can't be partitioned by the watershed algorithm [13] based on the minima rule (b) the corresponding profile of G(p) with candidate critical dots in red (c) the over-segmentation problem (d) the final result of our algorithm.
The exclusion is performed by checking whether three nearby candidates are located in the same spatial neighbourhood. The neighbourhood is defined as before: a detecting sphere whose centre is the current testing node and whose radius is the minimal distance from that point to the nearest vertex on the surface. If this situation occurs, the middle point is discarded and one of the two side points (preferring the Fig. 4.c type) is preserved as a real candidate. In this way the over-segmentation problem is effectively resolved, and the natural result of Fig. 4.d is obtained.
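The pruning step could look roughly like the sketch below; the data layout (index-to-position and index-to-radius mappings) is our own assumption.

import numpy as np

def prune_candidates(candidates, positions, surface_dist):
    # Discard the middle of any three consecutive candidates that fall inside
    # one detection sphere; positions[i] is the 3D skeleton point of candidate
    # i, surface_dist[i] its distance to the nearest surface vertex.
    kept = list(candidates)
    i = 1
    while i + 1 < len(kept):
        a, b, c = kept[i - 1], kept[i], kept[i + 1]
        radius = surface_dist[b]
        if (np.linalg.norm(positions[a] - positions[b]) < radius and
                np.linalg.norm(positions[c] - positions[b]) < radius):
            del kept[i]        # drop the middle candidate, keep the side ones
        else:
            i += 1
    return kept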
4 Cutting Path Completion
Common methods, such as [8], scissor a mesh with cutting planes that are perpendicular to the orientation of the related critical points. However, the critical points can only capture coarse characteristics and may not yield an exact and smooth boundary between different parts.
Fig. 5. The operational principle of our segmentation. (a) the restricting zone, built around a larger red critical point, is formed by two parallel planes that are perpendicular to the direction of the critical point and lie within a threshold distance of it. (b)(c) the feature contour, located at the Armadillo's ankle in meshes of different resolution, is used to refine the boundary.
Hence, we use the cutting critical points to find a primary position, and the ultimate boundary is refined based on the feature contours of the underlying valley surface. In this way, the skeleton-based and minima-rule-based approaches are combined consistently; the operational principle is sketched in Fig. 5. On one hand, we employ the local medial axis when partitioning a mesh: since the skeleton has already been computed and the cutting positions are marked by the critical points, we can determine the segmentation regions on the mesh's surface, enclosed by restricting zones. For example, the ankle of the Armadillo is enclosed by a restricting zone, as shown in Fig. 5.b and 5.c. A zone (Fig. 5.a) is bounded by two parallel planes whose normal is identical to the direction of the corresponding critical point; both planes keep the same distance to the critical point, given by a threshold value d:
$d = \begin{cases} 2\sigma, & \text{if } \sigma > LNG_{edge} \\ 2\,LNG_{edge}, & \text{otherwise} \end{cases}$
where σ, defined in Section 2, is the distance between adjacent skeleton points, and LNG_edge is the average edge length of the mesh. On the other hand, we prefer to divide a given mesh into disjoint parts along concave regions. Therefore, if a concave region lies in the restricting zone, we extract the cutting boundary from that region. For instance, Fig. 5.b and 5.c show that the dark blue contour located in the pink restricting zone is used to obtain a natural perceptual boundary between the foot and the leg of the Armadillo model. Similar to [22], we normalize the minimum curvature value of each mesh vertex, obtain the concave feature regions by filtering out the vertices with higher normalized values, extract contour curves from the graph structures of the regions (e.g., the blue regions in Fig. 1.a and 1.b), and complete the best curve path over the mesh in the shortest way. We refer readers to [22] for details on feature contour extraction and completion. For every feature contour, we compute its main direction by principal component analysis of its vertices, but only a feature contour whose main direction is close to the orientation of the corresponding critical point is treated as a boundary curve. The closeness is measured by the angle between them; if the separation angle is less than π/4 radians, we consider them close. Note that if there is no concave
contour located in a restricting zone, the corresponding cutting critical point is removed and no partitioning takes place.
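The direction test can be sketched as follows; this is a generic PCA-based check under our own naming, not the authors' code.

import numpy as np

def contour_main_direction(vertices):
    # Main direction of a feature contour via PCA of its vertices.
    pts = np.asarray(vertices, dtype=float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]                          # principal axis (unit vector)

def accept_contour(vertices, critical_dir, max_angle=np.pi / 4):
    # Accept a contour as a boundary curve only if its main direction is
    # within max_angle of the critical point's orientation (compared up to sign).
    d = contour_main_direction(vertices)
    c = np.asarray(critical_dir, dtype=float)
    c = c / np.linalg.norm(c)
    cos_angle = abs(np.dot(d, c))
    return np.arccos(np.clip(cos_angle, -1.0, 1.0)) <= max_angle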
5 Results and Discussion
5.1 Results
Fig. 6 demonstrates the final decomposition of the two-level David and Buddha models on a 256³ grid. As shown, our algorithm respects the segmentation that a human observer would most likely choose for these scenes, and it is resolution-independent. The voxelized resolution is an important external factor affecting the skeleton, since it defines the precision of the repulsive force field and determines the computing time and memory requirement. It is evident that a 10³ grid will yield a less accurate result than a 100³ grid. However, this does not mean that a finer resolution is always better, in view of the application requirements and the algorithm complexity. In particular, for different levels of detail of a mesh, the volume representation is always similar if the voxelized resolution is lower than 256³.
Fig. 6. Segmentation instances on a 256³ grid. (a) low-level Buddha with 10,000 vertices (b) high-level Buddha with 100,000 vertices (c) low-level David with 4,068 vertices (d) high-level David with 127,465 vertices.
5.2 Comparison and Discussion
Fig. 7 gives a comparison of the visual effect between the state-of-the-art minima-rule-based segmentation [15] and our algorithm using the core skeleton. For the Disonaur mesh, over-segmentation would definitely happen in [15] and [22] (Fig. 7.a), while the problem disappears with our approach. We then compare our results with the typical skeleton-based algorithm [8] on the Dinopet and Hand meshes in Fig. 8. It is obvious that the cutting boundaries of all parts have been improved in both models.
Fig. 7. Comparison with the minima-rule-based algorithm [15]. (a) possibly over-segmented Disonaur with 56,194 vertices generated by [15] and [22] (b) Disonaur with 56,194 vertices partitioned by our algorithm (c) Hand in [15] with 10,070 vertices (d) Hand with 5,023 vertices segmented by our approach.
Fig. 8. Comparison with the skeleton-based algorithm [8]. (a) Hand in [8] (b) a similar Hand with 1,572 vertices segmented by our approach (c) Dinopet in [8] (d) Dinopet with 4,388 vertices partitioned by ours.
6 Conclusion
In this paper, we have developed an algorithm to decompose a mesh into meaningful parts, which integrates the advantages of skeleton-based and minima-rule-based segmentation algorithms. In a nutshell, on one hand our algorithm ensures that the cuts are smooth and follow natural concave regions as much as possible; on the other hand, it uses the more robust skeleton of the mesh and is therefore not sensitive to surface noise.
References
[1] Shamir, A.: A Formulation of Boundary Mesh Segmentation. In: Proceedings of 3DPVT, pp. 82–89 (2004)
[2] Shamir, A.: Segmentation Algorithms for 3D Boundary Meshes. In: Proceedings of EuroGraphics, Tutorial (2006)
[3] Attene, M., Katz, S., Mortara, M., Patane, G., Spagnuolo, M., Tal, A.: Mesh Segmentation - a Comparative Study. In: Proceedings of SMI, pp. 14–25 (2006)
[4] Zhang, E., Mischaikow, K., Turk, G.: Feature-Based Surface Parameterization and Texture Mapping. ACM Transactions on Graphics 24(1), 1–27 (2005)
[5] Katz, S., Tal, A.: Hierarchical mesh decomposition using fuzzy clustering and cuts. ACM Transactions on Graphics 22(3), 954–961 (2003)
[6] Lien, J.M., Keyser, J., Amato, N.M.: Simultaneous shape decomposition and skeletonization. In: Proceedings of the ACM SPM, pp. 219–228 (2006)
[7] Huang, J., et al.: Subspace gradient domain mesh deformation. ACM Transactions on Graphics (Special Issue: Proceedings SIGGRAPH) 25(3), 1126–1134 (2006)
[8] Li, X., Toon, T.W., Huang, Z.: Decomposing polygon meshes for interactive applications. In: Proceedings of the ACM Symposium on Interactive 3D Graphics, pp. 35–42 (2001)
[9] Attene, M., Falcidieno, B., Spagnuolo, M.: Hierarchical Mesh Segmentation Based on Fitting Primitives. The Visual Computer 22(3), 181–193 (2006)
[10] Podolak, J., Shilane, P., Golovinskiy, A., Rusinkiewicz, S., Funkhouser, T.: A Planar-Reflective Symmetry Transform for 3D Shapes. ACM Transactions on Graphics 25(3), 549–559 (2006)
[11] Katz, S., Leifman, G., Tal, A.: Mesh Segmentation Using Feature Point and Core Extraction. The Visual Computer (Special Issue: Pacific Graphics) 21(8-10), 649–658 (2005)
[12] Liu, R., Zhang, H.: Segmentation of 3D Meshes Through Spectral Clustering. In: Proceedings of Pacific Graphics, pp. 298–305 (2004)
[13] Page, D.L., Koschan, A.F., Abidi, M.A.: Perception-based 3D triangle mesh segmentation using fast marching watersheds. In: Proceedings of IEEE Computer Vision and Pattern Recognition (CVPR), vol. II, pp. 27–32 (2003)
[14] Page, D.L., Abidi, M.A., Koschan, A.F., Zhang, Y.: Object representation using the minima rule and superquadrics for under vehicle inspection. In: Proceedings of the IEEE Latin American Conference on Robotics and Automation, pp. 91–97 (2003)
[15] Lee, Y., Lee, S., Shamir, A., Cohen-Or, D., Seidel, H.-P.: Mesh Scissoring with Minima Rule and Part Salience. Computer Aided Geometric Design 22, 444–465 (2005)
[16] Hoffman, D., Richards, W.A.: Parts of recognition. Cognition 18, 65–96 (1984)
[17] Hoffman, D., Singh, M.: Salience of visual parts. Cognition 63, 29–78 (1997)
[18] Cornea, D.N., Silver, D., Yuan, X.S., Balasubramanian, R.: Computing Hierarchical Curve-Skeletons of 3D Objects. The Visual Computer 21(11), 945–955 (2005)
[19] Cornea, D.N., Silver, D., Min, P.: Curve-Skeleton Applications. In: Proceedings of IEEE Visualization, pp. 23–28 (2005)
[20] Dachille, F., Kaufman, A.: Incremental Triangle Voxelization. In: Proceedings of Graphics Interface, pp. 205–212 (2000)
[21] Kalvin, A., Schonberg, E., Schwartz, J.T., Sharir, M.: Two Dimensional Model Based Boundary Matching Using Footprints. International Journal of Robotics Research 5(4), 38–55 (1986)
[22] Cheng, Z.-Q., Liu, H.-F., Jin, S.-Y.: The Progressive Mesh Compression based on meaningful segmentation. The Visual Computer 23(9-11), 651–660 (2007)
Fast kd-Tree Construction for 3D-Rendering Algorithms Like Ray Tracing

Sajid Hussain and Håkan Grahn
Blekinge Institute of Technology, SE-371 79 Karlskrona, Sweden
{sajid.hussain,hakan.grahn}@bth.se
http://www.bth.se/tek/paarts
Abstract. Many computer graphics rendering algorithms and techniques use ray tracing for the generation of natural and photo-realistic images. The efficiency of ray tracing algorithms depends, among other techniques, upon the data structures used in the background. kd-trees are among the most commonly used data structures for accelerating ray tracing algorithms. Data structures built with cost optimization techniques based upon the Surface Area Heuristic (SAH) are generally considered to be the best and of high quality. During the last decade, the trend has moved from off-line rendering towards real-time rendering with the introduction of high-speed computers and dedicated Graphics Processing Units (GPUs). In this situation, SAH-optimized structures have been considered too slow to allow real-time rendering of complex scenes. Our goal is to demonstrate an accelerated approach to building SAH-based data structures to be used in real-time rendering algorithms. The quality of SAH-based data structures heavily depends upon split-plane locations, and the major bottleneck of SAH techniques is the time consumed to find those optimum split locations. We present a parabolic interpolation technique combined with a golden section search criterion for predicting kd-tree split-plane locations. The resulting structure is built 30% faster with 6% quality degradation compared to a standard SAH approach for reasonably complex scenes with around 170k polygons.
1 Introduction

Almost everyone in the field of 3D computer graphics is familiar with ray tracing - a very popular rendering method for generating and synthesizing photo-realistic images. The simplicity of the algorithm makes it very attractive. However, it has very high computational demands, and a lot of research has been done over the last couple of decades in order to increase the performance of ray tracing algorithms. Different types of acceleration techniques have been proposed, like fast ray-object intersection tests, Bounding Volume Hierarchies (BVH), octrees, kd-trees, and different flavors of grids including uniform, non-uniform, recursive, and hierarchical [5], [13], [15]. kd-trees, due to their versatility and wide range of application areas, are one of the most used techniques to generate efficient data structures for fast ray tracing and are increasingly being adopted by researchers around the world. Wald [14] and Havran [5] report that kd-trees are good adaptive techniques to deal with the varying complexity of a scene
and usually perform better than, or at least comparably to, any other similar technique. kd-tree construction normally uses a fixed stopping criterion, either the depth of the tree or the number of objects in a leaf node. Since adaptivity is the main property that lets the kd-tree behave better than other similar techniques, the stopping criterion should also be adaptive for the best generation of kd-trees. In this paper, we present an approach to improve and optimize the construction of kd-trees. We are concerned with the decision of the split-plane location. Our approach is to use parabolic interpolation combined with a golden section search to reduce the amount of work done when building the kd-tree. The SAH cost function used to choose the best split location is sampled at three locations which bracket the minimum of the cost function. These three locations are determined by a golden section search. A parabolic interpolation is then used to estimate the minimum, and the algorithm iteratively converges towards the optimum. A minimum is then found for the optimum locations of the split planes (see Section 4). We have evaluated our proposed approximation against a standard SAH algorithm in two ways. First, using Matlab models, we have evaluated how well it predicts the actual split-plane locations when building the kd-trees. Second, we have implemented our model in a real ray tracer and have used five common scenes with varying geometrical complexity of up to 170k polygons. Compared to standard SAH-based kd-tree data structures, our approach is considerably faster. The demonstrated improvement ranges from 7% to 30% as the scene becomes more and more complex. On the other hand, the tree quality decreases. The rest of the paper is organized as follows. In Section 2, we discuss some previous work on the topic under investigation. Section 3 presents the basics of the kd-tree algorithm and the cost function to be minimized, followed by the theory behind parabolic interpolation and the golden section search criterion in Section 4. Section 5 evaluates the idea, followed by conclusions and future work in Section 6.
2 Related Work

Work on kd-tree construction has mainly focused on generating optimized data structures for fast ray tracing. Wald [14] and Havran [5] analyzed the kd-tree algorithm in depth and proposed a state-of-the-art O(n log n) algorithm. Chang [3], in his thesis, described the theoretical and practical aspects of ray tracing, including kd-tree cost function modeling and experimental verification. Current work by Hunt [7] and Havran [6] also aims at fast construction of kd-trees. By adaptive sub-sampling they approximate the SAH cost function by a piecewise quadratic function. There are many other implementations of the kd-tree algorithm using SIMD instructions, like Benthin [2]. Another approach is used by Popov [10], who experiments with streaming kd-tree construction and explores the benefits of parallelized streaming. Both Hunt [7] and Popov [10] demonstrate considerable improvements compared to conventional SAH-based kd-tree construction. The cost function to optimally determine the depth of the subdivision in kd-tree construction has been studied by several authors. Cleary and Wyvill [16] derive an expression that confirms that the time complexity is less dependent on the number of objects and more on the size of the objects. They calculate the probability that the
ray intersects an object as a function of the total area of the subdivision cells that (partly) contain the object. MacDonald and Booth [8] use a similar strategy but refine the method to avoid double intersection tests of the same ray with the same object. They determine the probability that a ray intersects at least one leaf cell from the set of leaves within which a particular object resides, and use a cost function to find the optimal cutting planes for kd-tree construction. A similar method was also implemented by Whang et al. [18]. kd-tree acceleration structures for modern graphics hardware have been proposed by Horn et al. [20] and Foley and Sugerman [21], who experimented with kd-tree acceleration structures for GPU ray tracers and achieved considerable improvements.
3 Basics of kd-Tree Construction

In this section, we give some background about the kd-tree algorithm, which will be the foundation for the rest of the paper. Consider a set of points in a space R^d; the kd-tree is normally built over these points. In general, kd-trees are used as a starting point for optimized initialization of k-means clustering [11] and for nearest neighbor query problems [12]. In computer graphics, and especially in ray tracing applications, kd-trees are applied over a scene S, with the points being the bounding boxes of the scene objects. The kd-tree algorithm subdivides the scene space recursively. For any given leaf node L_node of the kd-tree, a splitting plane splits the bounding box of the node into two halves, resulting in two bounding boxes, left and right. These are called child nodes, and the process is repeated until a certain criterion is met. Havran [5] reports that the adaptability of the kd-tree to the scene complexity can be influenced by choosing the best position of the splitting plane. The choice of the splitting plane is normally midway between the scene maximum and minimum along a particular coordinate axis [9], and a particular cost function is minimized. MacDonald and Booth [8] introduced the SAH for the kd-tree construction algorithm, which works on probabilities and minimizes a certain cost function. The cost function is built by firing an arbitrary ray through the kd-tree and applying some assumptions. Fig. 1 illustrates the conditional probability P(y|x) that an arbitrary fired ray hits the region y inside region x provided that it has already touched the region x. Bayes' rule can be used to calculate the conditional probability P(y|x) as
$P(y \mid x) = \frac{P(x \mid y)\,P(y)}{P(x)}$    (1)
P(x|y) is the conditional probability that the ray hits the region x provided that it has intersected y, and here P(x|y) = 1. P(x) and P(y) can be expressed in terms of areas [5].
Fig. 1. Visualization of the conditional probability P(y|x) that a ray intersects region y given that it has intersected region x
In Fig. 2, we start from the root (parent) node and assume that it contains N elements, so a ray passing the root node has to be tested for intersection with N elements. If we assume that the computational time it takes to test the ray intersection with element n ∈ N is T_n, then the overall computational cost C of the root node is

$C = \sum_{n=1}^{N} T_n$    (2)
After further division of the root node (Fig. 2), the ray intersection test costs for the left and right child nodes become C_Left and C_Right. Thus the overall new cost becomes C_Total with

$C_{Total} = C_{Trans} + C_{Right} + C_{Left}$    (3)

where C_Trans is the cost of traversing the parent (root) node. The equation can be written as

$C_{Total} = C_{Trans} + P_{Left} \sum_{i=1}^{N_{Left}} T_i + P_{Right} \sum_{j=1}^{N_{Right}} T_j$    (4)
where

$P_{Left} = \frac{A_{Left}}{A}$    (5)

and

$P_{Right} = \frac{A_{Right}}{A}$    (6)

Here A is the surface area of the root node, and A_Left and A_Right are the surface areas of the two child nodes. P_Left and P_Right are the probabilities of a ray hitting the left and right child nodes, N_Left and N_Right are the numbers of objects present in the two nodes, and T_i and T_j are the computational times for testing ray intersection with the i-th and j-th objects of the two child nodes. The kd-tree algorithm minimizes the cost function C_Total and then subdivides the child nodes recursively.
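For illustration, evaluating this cost for one candidate split might look like the sketch below; the helper names and the constant traversal and intersection costs are our own assumptions, not values from the paper.

def surface_area(box):
    # Surface area of an axis-aligned box given as ((x0, y0, z0), (x1, y1, z1)).
    (x0, y0, z0), (x1, y1, z1) = box
    dx, dy, dz = x1 - x0, y1 - y0, z1 - z0
    return 2.0 * (dx * dy + dy * dz + dz * dx)

def sah_cost(parent, left, right, n_left, n_right, c_trans=1.0, c_isect=1.5):
    # C_Total = C_Trans + P_Left * N_Left * T + P_Right * N_Right * T,
    # with P = child surface area / parent surface area (equations 3-6).
    a = surface_area(parent)
    p_left = surface_area(left) / a
    p_right = surface_area(right) / a
    return c_trans + c_isect * (p_left * n_left + p_right * n_right)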
Fig. 2. Example scene division and the corresponding nodes in the kd-tree
As shown in [10], the cost function is a function of bounded variation, as it is the difference of two monotonically varying functions C_Left and C_Right. In [10], this important property of the cost function has been exploited to increase the approximation accuracy of the cost function, and only those regions that can contain the minimum have been adaptively sampled. We have used the golden section search to find the region that could contain the minimum and combined it with parabolic interpolation to search for that minimum. In the next section, we present the mathematical foundations of the technique and simulations in Matlab.
4 Parabolic Interpolation and Golden Section Search

The technique takes advantage of the fact that a second-order polynomial usually provides a good approximation to the shape of a parabolic function. As the cost function we are dealing with is parabolic in nature, parabolic interpolation can provide a good approximation of the cost function minimum and hence of the split-plane location. The underlying idea is

$f(x) = f(x_1)\,\frac{(x-x_2)(x-x_3)}{(x_1-x_2)(x_1-x_3)} + f(x_2)\,\frac{(x-x_3)(x-x_1)}{(x_2-x_3)(x_2-x_1)} + f(x_3)\,\frac{(x-x_1)(x-x_2)}{(x_3-x_1)(x_3-x_2)}$    (7)

where x_1, x_2 and x_3 are the three values of x for which the right-hand side of equation (7) is equal to its left-hand side f(x). Since a parabola is uniquely defined by a set of three points, and since we want the minimum of this parabola, we differentiate equation (7) and set the derivative equal to zero. After some algebraic manipulation we get the minimum of the parabola through these three points as

$x_4 = x_2 - \frac{1}{2}\,\frac{(x_2-x_1)^2\,[f(x_2)-f(x_3)] - (x_2-x_3)^2\,[f(x_2)-f(x_1)]}{(x_2-x_1)\,[f(x_2)-f(x_3)] - (x_2-x_3)\,[f(x_2)-f(x_1)]}$    (8)
where x_4 is the point at which the minimum of the parabola occurs and f(x_4) is the minimum value of the parabola. The golden section search is similar in spirit to the bisection approach for locating roots of a function. We use the golden ratio to find two intermediate sampling points as the starting points for the optimization search:

$x_1 = x_L + h, \qquad x_2 = x_U - h$    (9)

where x_1, x_2 are the two intermediate points and x_L, x_U are the lower and upper bounds of the interval that contains the minimum of the cost function. h can be calculated as

$h = (\varphi - 1)(x_U - x_L)$    (10)

where φ is the golden ratio, φ ≈ 1.618. The cost function is evaluated at points x_1 and x_2, and f(x_1) is compared with f(x_2). If f(x_1) < f(x_2), then x_2, x_1 and x_U are used in equation (8) to find the minimum of the parabola.
Fig. 3. Calculation of initial guess for parabolic interpolation
If f(x_2) < f(x_1), then x_L, x_2 and x_1 are used in equation (8) to find the minimum of the parabola. This gives us a good initial guess to start our optimization through parabolic interpolation. With a good initial guess, the result converges more rapidly to the true value. Fig. 3 depicts the whole situation graphically. The two cost functions in Fig. 3 have been generated using Matlab, and care has been taken that one of them contains the minimum in the right half of the x-axis and the other in the left half. This is done to simulate the direction information of the golden section search. After x_4 has been calculated, f(x_4) is evaluated and compared with the intermediate point, which is x_1 in the (x_2, x_1, x_U) case and x_2 in the (x_L, x_2, x_1) case. If f(x_4) is smaller than the value at the intermediate point (either f(x_1) or f(x_2)), then we shift the parabola towards the right in the next iteration: the upper bound remains the same, while the lower bound is shifted to the right with a constant step size called μ, and the intermediate point (either x_1 or x_2) becomes x_4. If f(x_4) is larger than the value at the intermediate point, then we shift the parabola towards the left in the next iteration: the lower bound remains the same and we shift the upper bound to the left with the same step μ. The choice of the step size varies, and the size decreases with increasing tree depth as the number of primitives decreases and the size of the division axis also decreases. Consider a real case depicted in Fig. 4
Fig. 4. Optimum search with parabolic interpolation
and taken from [7]. After the golden section search, x_1 and x_2 are found, and it is clear from Fig. 4 that f(x_1) < f(x_2). We choose x_2, x_1 and x_U for interpolation and find x_4 from equation (8). Since f(x_4) < f(x_2), we have to move towards the right. For the next iteration, x_1 becomes x_4, x_U remains unchanged, and x_2 increases to x_2 + μ. Using μ = 10, we reach the optimized solution, indicated by a dotted line in Fig. 4. Note that for each iteration other than the first one, we need to sample the cost function at only one new location; in the second iteration, we only have to sample the cost function at x_2 + μ. In this case we need to sample the cost function at only five different locations: x_1, x_2, x_4, x_2 + μ and x_U.
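A generic, simplified sketch of this search loop is given below. It is not the authors' implementation: we recompute both golden-section points after every bracket update rather than moving a single point, and the function and parameter names are our own.

PHI = 1.618033988749895

def parabolic_min(f, xa, xb, xc):
    # Minimum of the parabola through the three samples (equation (8)).
    fa, fb, fc = f(xa), f(xb), f(xc)
    num = (xb - xa) ** 2 * (fb - fc) - (xb - xc) ** 2 * (fb - fa)
    den = (xb - xa) * (fb - fc) - (xb - xc) * (fb - fa)
    return xb - 0.5 * num / den

def approx_split(f, x_lo, x_hi, mu, iters=5):
    # Sketch of the split-plane search: two golden-section points bracket the
    # minimum, equation (8) proposes x4, and the bracket is shifted by the
    # constant step mu toward the more promising side.
    x4 = 0.5 * (x_lo + x_hi)
    for _ in range(iters):
        h = (PHI - 1.0) * (x_hi - x_lo)
        x1, x2 = x_lo + h, x_hi - h           # interior golden-section points
        if f(x1) < f(x2):                     # minimum lies toward the right
            x_mid, x4 = x1, parabolic_min(f, x2, x1, x_hi)
        else:                                 # minimum lies toward the left
            x_mid, x4 = x2, parabolic_min(f, x_lo, x2, x1)
        if f(x4) < f(x_mid):
            x_lo = x_lo + mu                  # shift the bracket to the right
        else:
            x_hi = x_hi - mu                  # shift the bracket to the left
    return x4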
5 Experimental Evaluation

To evaluate the performance of our algorithm, we have compared it with the conventional SAH construction technique and measured the time needed for building the kd-tree structure for different test scenes. We have also tried to measure the cost overhead introduced by the approximation errors. [7] uses SIMD support (SSE) and proposes to sample the cost function at more than one location during a single scan. Our algorithm likewise exhibits the property of sampling the cost function at four different locations during a single scan, namely x_1, x_2, x_2 + μ and x_U; hence, it provides a basis for SIMD support. However, we have not used this strategy in the implementation reported in this version. We implemented our algorithm in C++ and tested it on a variety of scenes (Fig. 5). The BUNNY model is taken from the Stanford 3D Scanning Repository [19]. This model was acquired with a 3D scanner and hence contains regularly distributed triangles. On the other hand, Square Flake, F15 and HAND are designed with CAD tools, and the regularity of the primitive distribution varies. Since the algorithm described above behaves well if the cost function is slowly varying, the BUNNY model is the scene most suited to our algorithm. All measurements were performed on a workstation with an Intel Core 2 CPU 2.16 GHz processor and 2 GB of RAM. We see from Table 1 that the cost increase for BUNNY is approximately 1.62% despite a 26% increase in speed. The reason is that our algorithm performs well if the primitives are close to normally distributed. Since this scene was acquired using a 3D scanner and the primitives are uniformly distributed, our technique works well in this case. On the other hand, take the example of Spheres: although its complexity is lower than that of BUNNY, the increase in cost is 2.5%, because the primitives are not uniformly distributed in this case. The Fairy scene contains more empty space, and the cost increase jumps to 5.72%. The main reason is that the algorithm performs poorly once there is a step change or abrupt change in the cost function, as discussed in [7]. Its quality degrades because parabolic interpolation exhibits oscillating behavior when interpolating step changes. To overcome this problem, we can use piecewise spline interpolation with lower-order polynomials, in this case piecewise linear splines, to minimize oscillations.
Fig. 5. Test scenes: Spheres, Mesh, F15, Hand, Fairy and Bunny

Table 1. Comparison of conventional kd-tree SAH and modified kd-tree SAH. The comparison is made in build time and kd-tree quality (expected cost).

Scene     Primitives   Conventional SAH          Modified SAH              Speedup   Cost Increase
                       Time (msec)   Exp. Cost   Time (msec)   Exp. Cost
F15       9250         80.32         45.10       73.12         46.23       9.84%     2.27%
Hand      17135        140.54        73.24       130.33        74.12       7.83%     1.20%
Mesh      56172        480.53        83.36       401.98        85.21       19.54%    2.21%
Spheres   66432        540.28        92.17       450.25        94.46       20.06%    2.51%
Bunny     69451        692.36        96.31       550.35        97.85       25.81%    1.62%
Fairy     174117       1450.4        105.29      1120.2        111.32      29.46%    5.72%
6 Conclusion and Future Work

In this paper, we have presented an algorithmic approach to improve and optimize the split-plane location search used for kd-tree construction in fast ray tracing algorithms. The major contribution is the combination of a golden section search criterion with parabolic interpolation. The golden section search provides us with a better initial guess for the sampling locations of the cost function; the better the initial guess, the faster the convergence of the algorithm. We have demonstrated in Section 4 that the algorithm reaches the optimum within two iterations in a real scenario taken from [7]. Our idea gives us two important pieces of information: first, the initial guess, and second, the direction in which we should move to reach the optimum. This information is critical for faster convergence. We have evaluated two aspects of our proposed modified model. First, we evaluated how well it predicts the actual split locations of the planes when building the kd-tree, using Matlab models. Second, we have implemented our modified model in a ray tracer and compared its performance to a standard SAH ray tracer. We have used six
different scenes with varying complexity levels. The performance improvement is small for the scenes with low complexity, as the kd-tree depth is small. As the kd-tree depth increases for the more complex scenes, the performance improvement increases, and we have successfully demonstrated an improvement of 30% for a scene complexity of around 170k polygons with less than 6% degradation of the tree quality. We expect the performance difference to increase as the complexity and the number of objects in a scene increase. Further use of the technique could be demonstrated with the combination of SIMD support and spline interpolation where the cost function is very ill-behaved or changes abruptly. We hope to increase the tree quality even more with this technique and also intend to implement another version of this algorithm on dynamic scenes with multiple objects, like Fairy in Fig. 5.
References
1. Amanatides, J., Woo, A.: A Fast Voxel Traversal Algorithm. In: Proceedings of Eurographics 1987, pp. 3–10 (August 1987)
2. Benthin, C.: Realtime Raytracing on Current CPU Architectures. PhD thesis, Saarland University (2006)
3. Chang, A.Y.: Theoretical and Experimental Aspects of Ray Shooting. PhD thesis, Polytechnic University, New York (May 2004)
4. Fussell, D.S., Subramanian, K.R.: Automatic Termination Criteria for Ray Tracing Hierarchies. In: Proceedings of Graphics Interface 1991 (GI 91), Calgary, Canada, pp. 93–100 (June 1991)
5. Havran, V.: Heuristic Ray Shooting Algorithms. PhD thesis, Czech Technical University, Prague (2001)
6. Havran, V., Herzog, R., Seidel, H.-P.: On fast construction of spatial hierarchies for ray tracing. In: Proceedings of the 2006 IEEE Symposium on Interactive Ray Tracing, pp. 71–80 (September 2006)
7. Hunt, W., Mark, W., Stoll, G.: Fast kd-tree construction with an adaptive error-bounded heuristic. In: Proceedings of the 2006 IEEE Symposium on Interactive Ray Tracing, pp. 81–88 (September 2006)
8. MacDonald, J.D., Booth, K.S.: Heuristics for ray tracing using space subdivision. The Visual Computer 6(3), 153–166 (1990)
9. Kaplan, M.: The Use of Spatial Coherence in Ray Tracing. In: ACM SIGGRAPH 1985 Course Notes 11, pp. 22–26 (July 1985)
10. Popov, S., Gunther, J., Seidel, H.-P., Slusallek, P.: Experiences with Streaming Construction of SAH KD-Trees. In: Proceedings of the 2006 IEEE Symposium on Interactive Ray Tracing, pp. 89–94 (September 2006)
11. Redmonds, S.J., Heneghan, C.: A method for initializing the K-means clustering algorithm using kd-trees. Pattern Recognition Letters 28(8), 965–973 (2007)
12. Stern, H.: Nearest Neighbor Matching Using kd-Trees. PhD thesis, Dalhousie University, Halifax, Nova Scotia (August 2002)
13. Stoll, G.: Part I: Introduction to Realtime Ray Tracing. SIGGRAPH 2005 Course on Interactive Ray Tracing (2005)
14. Wald, I.: Realtime Ray Tracing and Interactive Global Illumination. PhD thesis, Computer Graphics Group, Saarland University, Saarbrucken, Germany
15. Zara, J.: Speeding Up Ray Tracing - SW and HW Approaches. In: Proceedings of the 11th Spring Conference on Computer Graphics (SSCG 1995), Bratislava, Slovakia, pp. 1–16 (May 1995)
16. Cleary, J.G., Wyvill, G.: Analysis of an algorithm for fast ray tracing using uniform space subdivision. The Visual Computer (4), 65–83 (1988)
17. MacDonald, J.D., Booth, K.S.: Heuristics for ray tracing using space subdivision. The Visual Computer (6), 153–166 (1990)
18. Whang, K.-Y., Song, J.-W., Chang, J.-W., Kim, J.-Y., Cho, W.-S., Park, C.-M., Song, I.-Y.: An adaptive octree for efficient ray tracing. IEEE Transactions on Visualization and Computer Graphics 1(4), 343–349 (1995)
19. Stanford 3D Scanning Repository: http://graphics.stanford.edu/data/3Dscanrep/
20. Horn, D.R., Sugerman, J., Houston, M., Hanrahan, P.: Interactive k-d tree GPU raytracing. In: Symposium on Interactive 3D Graphics (I3D), pp. 167–174 (2007)
21. Foley, T., Sugerman, J.: KD-tree acceleration structures for a GPU raytracer. In: Proceedings of the ACM SIGGRAPH/EUROGRAPHICS Conference on Graphics Hardware, pp. 15–22 (2005)
Phase Space Rendering

André Hinkenjann and Thorsten Roth
University of Applied Sciences Bonn-Rhein-Sieg, Sankt Augustin, Germany
{Andre.Hinkenjann,Thorsten.Roth}@fh-brs.de

Abstract. We present our work on phase space rendering. Every radiance sample in space has a location and a direction from which it is received. These degrees of freedom make up a phase space. The rendering problem of generating a discrete image from single radiance values is reduced to reconstructing a continuous radiance function from sparse samples in its phase space. The problem of reconstruction in a sparsely sampled space is solved by utilizing scattered data interpolation (SDI) methods. We provide numerical and visual evaluations of experiments with three SDI methods.
1 Introduction

Today, global illumination methods are used in many areas, like design review and lighting simulation. Due to the simulation of energy exchange between scene objects, highly accurate lighting distributions can be calculated. However, despite recent advances, like faster data structures for ray tracing [1], advanced global illumination calculation methods usually take minutes or hours to complete. On the other hand, image-based methods, like the Lumigraph [2] or Light Field Rendering [3], are practical approaches to generate new images from given images. Their main application is the rendering of outside-in views of objects from a limited region in space (although they are not limited to such views). For a comparison of image-based rendering techniques see [4]. Work has been done on reconstructing radiosity values across patches, like [5], which explores the use of scattered data interpolation (SDI) methods. In [6], a Clough-Tocher method (cubic triangular interpolation) is applied. Bicubic Hermite interpolation is used by Bastos et al., starting from a regular as well as from a quad-tree [7] subdivision. While these approaches deal with two-dimensional spaces (the s,t-space of a patch), we employ SDI methods in higher-dimensional spaces for the task of rendering. In [8], SDI is used to estimate a continuous isotropic SBRDF from sample images of an object. In addition to the images, the geometry of the object has to be known for this estimation. For a general overview of SDI methods and applications, see [9]. We propose phase space rendering¹, a generic approach to reconstruct the radiance function at any point for any direction in a scene, especially for
¹ The phase space is a multi-dimensional space that represents every degree of freedom of a system as an axis. Every possible state of the system can be described as a point in phase space.
Fig. 1. Radiance values retrieved by sampling an image plane (left) and a five dimensional radiance phase space with irregular samples (right)
generating images. The basic idea of this work is to interpret the task of rendering an image as reconstructing a function from sparse samples using scattered data interpolation methods. The function to reconstruct is the radiance distribution $L_i = f(x, y, z, \theta, \phi)$, $f: \mathbb{R}^3 \times S^2 \rightarrow \mathbb{R}$ (ignoring time and wavelength dependencies), also called the plenoptic function [10]. Figure 1 shows some radiance samples in R³ and in phase space. Because of the sparsity of the radiance samples (images are created only at certain reference points), we employ scattered data interpolation methods. Scattered data interpolation finds its use in many fields, like medical imaging [11] or cartography.
2 Sample Generation

The basis for the reconstruction of the plenoptic function is the set of radiance samples. Samples can be generated in advance in an image-based rendering fashion. In this case, image synthesis programs can be used to generate images of a scene from different positions. It is important to note that our approach does not depend on the creation of complete images. There is no minimum number of samples required, although the number of samples clearly relates to image quality. Alternatively, one could use photographs together with additional information, like the position and direction of the camera, to fill the data structure. For details on how many samples are needed to generate a continuous representation of the radiance for plenoptic modeling, see [12]. We restricted our tests to synthetically generated images, created with the freely available software renderer POV-Ray [13]. Every sample consists of an (r, g, b) triple. For a fixed camera position (x, y, z), a number of incoming radiance (field radiance) samples from different directions (θ, φ) is recorded, according to the resolution of the image plane. That means we sample a two-dimensional subspace of the phase space. During image synthesis, the radiance samples are stored in a file that records the camera parameters, the image plane parameters and the radiance values for one or more images. For that purpose we modified POV-Ray to output these parameters. The samples are later retrieved and stored in an efficient data structure for reconstruction.
The data structure for phase space rendering has to fulfill some requirements to be useful for scattered data interpolation:

– High dimensionality: the radiance samples are five-dimensional.
– Fast nearest neighbor and range queries: our algorithms often query the data set for (k-)nearest neighbors and perform orthogonal range queries. These operations have to be efficient.
– Efficient storage: simply storing samples in a high-dimensional array results in high storage demands. An array with an axis resolution of 100 samples requires more than 9 gigabytes of memory in five dimensions.

A data structure that is suited to these requirements is the kd-tree [14]. We use a kd-tree for storing the image samples. For better storage efficiency, the kd-tree is built on demand from the samples of each image (see below).
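As an illustration only (not the authors' Java implementation), five-dimensional samples can be held in an off-the-shelf kd-tree; the sketch below uses SciPy's cKDTree, which supports the nearest-neighbor and ball queries the reconstruction step needs. The array layout and names are our own, and a ball query is used in place of a true orthogonal range query.

import numpy as np
from scipy.spatial import cKDTree

# Each sample: phase-space coordinates (x, y, z, theta, phi) plus an RGB value.
# In practice the positional and angular axes would need consistent scaling.
coords = np.random.rand(10000, 5)           # placeholder phase-space points
colors = np.random.rand(10000, 3)           # placeholder radiance values

tree = cKDTree(coords)

q = np.array([0.5, 0.5, 0.5, 0.25, 0.75])   # a new phase-space point
dists, idx = tree.query(q, k=15)            # 15 nearest samples
nearby = tree.query_ball_point(q, r=0.1)    # samples within a radius around q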
3 Radiance Reconstruction
For the reconstruction of a continuous radiance function from distributed radiance values we take advantage of SDI methods. Due to the possibly very high number of sample values, the SDI methods could consume a considerable amount of time and main memory without modifications. This is because SDI methods generally take all available sample values into account. To be able to use SDI methods, we need to restrict the domain of the interpolating kernels to a practicable size. In our case, we solved this problem by reducing the set of sample values to all those samples which lie in a certain orthogonal range around the new point in phase space. This was realized with a kd-tree into which the sample values were inserted. In addition, with a very large number of samples, the algorithm might run out of memory. Fortunately, the chosen SDI methods allow for a sequential calculation of the continuous radiance representation, keeping only part of the samples in memory. One of the best known SDI methods is Shepard's method. This method calculates a weighted average of all existing radiance values $L(p_i)$ to determine the interpolated radiance value $\tilde{L}$ for a new point q using
n
wi (q)L(pi )
(1)
i=1
The weight $w_i$ of an existing radiance value $L(p_i)$ decreases with distance from q. For the weights $w_i$, Shepard selected

$w_i(q) = \frac{\sigma_i(q)}{\sum_{i=1}^{n} \sigma_i(q)}, \qquad \sigma_i(q) = \frac{1}{d(q, p_i)^{\mu}}$    (2)
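A minimal sketch of this inverse-distance weighting for RGB radiance samples is given below (generic Shepard interpolation, not the authors' implementation; the range restriction discussed next is omitted here).

import numpy as np

def shepard(query, points, values, mu=3.0, eps=1e-12):
    # Inverse-distance-weighted interpolation (equations 1 and 2).
    # points: (n, 5) phase-space samples, values: (n, 3) RGB radiance,
    # query: (5,) new phase-space point.
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d < eps):                 # exact hit: return that sample directly
        return values[np.argmin(d)]
    sigma = 1.0 / d ** mu
    w = sigma / sigma.sum()
    return w @ values                   # weighted average per color channel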
A straightforward implementation of this approach would evaluate equation 1 for all available radiance samples (for red, green and blue) to determine the color of a single
Algorithm 1.1. Phase space rendering with Shepard's method

L̃(q) := 0;
foreach sample image si do
  if camera of si points away from desired direction or camera distance too large then
    continue;
  endif
  build kd-tree of samples in si;
  foreach point q of new image do
    a ← result of range query around q;
    L̃(q) ← L̃(q) + contribution from samples in a;
  endfch
endfch
Algorithm 1.2. Phase space rendering with nearest neighbor selection

a := new array of distances;
initialize a with max. distance;
foreach sample image si do
  build kd-tree of samples in si;
  foreach point q of new image do
    p ← nearest neighbor of q in si;
    if distance(p, q) smaller than a(q) then
      a(q) ← distance(p, q);
      L̃(q) ← L(p);
    endif
  endfch
endfch
new image pixel. For 10 sample images of resolution 500² pixels and a reconstructed image of the same resolution, this would result in more than 10¹¹ sample values to consider. As a quick solution to this problem, a two-stage reduction of samples is used (cf. Algorithm 1.1). Firstly, only those original samples that were generated from a nearby camera point and a similar camera direction (cosine of the direction vectors of the old and new camera < 0) are accepted. From the resulting set of remaining original samples, a kd-tree is built. This heuristic limits the size of the kd-tree for range queries. Secondly, during the evaluation of equation 1 only those samples in an orthogonal range around the new point q are considered.

One of the most obvious SDI methods is nearest neighbor selection: L̃(q) is simply L(p), with p being the nearest sample to q. To reduce the size of the kd-tree, the distance comparison is done image by image (cf. Algorithm 1.2).

Another SDI method is Hardy's multiquadrics [15]. Hardy's idea was to describe the interpolating function as a linear combination of basis functions f_i:
$\tilde{L}(q) = \sum_{i=1}^{n} \alpha_i f_i(d(q, p_i)), \qquad f(d) = (d^2 + r^2)^{\mu}, \; r > 0, \; \mu \neq 0$    (3)
d(q, p) is the distance between q and p. The coefficients α_i have to be determined by solving the linear equations of (3) for all sample points with the auxiliary conditions L̃(p_i) = L(p_i). With n being the number of samples, this results in a linear system of size n². It is therefore even more important to reduce the number of samples to consider in this case. This is again done in two steps. First, for all original sample images a pre-filtering as described above is done; after that, an orthogonal range query is performed to determine all original samples in a range around q. Secondly, from the result of the range queries, a kd-tree is built to accelerate the k-nearest neighbor searches for q, because we want a fixed size for the linear system of equation (3). Then L̃(q) is calculated from the remaining original samples. Algorithm 1.3 describes the approach in more detail.

Algorithm 1.3. Phase space rendering with Hardy's multiquadrics

a := empty array of sample lists;
foreach sample image si do
  if camera of si points away from desired direction or camera distance too large then
    continue;
  endif
  build kd-tree of samples in si;
  foreach point q of new image do
    a(q) ← a(q) + resulting samples of range query around q;
  endfch
endfch
knn := empty sample list;
foreach q of new image do
  build kd-tree from a(q);
  knn ← k nearest neighbors of q in a(q);
  calculate L̃(q) from samples in knn;
endfch
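A generic sketch of the multiquadric fit over the k selected neighbors is shown below; this is standard radial-basis interpolation under our own naming and default parameters, not the authors' code (one linear system is solved jointly for the three color channels).

import numpy as np

def hardy_fit(points, values, r=0.1, mu=0.5):
    # Solve for the coefficients alpha in equation (3) such that
    # L~(p_i) = L(p_i) for all k selected samples.
    # points: (k, 5) phase-space samples, values: (k, 3) RGB radiance.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    phi = (d ** 2 + r ** 2) ** mu            # k x k basis matrix
    return np.linalg.solve(phi, values)      # alphas, shape (k, 3)

def hardy_eval(query, points, alphas, r=0.1, mu=0.5):
    # Evaluate L~(q) = sum_i alpha_i * f(d(q, p_i)).
    d = np.linalg.norm(points - query, axis=1)
    phi = (d ** 2 + r ** 2) ** mu
    return phi @ alphas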
4 Results and Evaluation
To be able to evaluate the results of the methods of Section 3 from meaningful positions in a scene, a viewer application was written that provides a view of the scene together with a view of all sample images taken. The sample images are drawn as textured, semitransparent quads. Figure 2 shows an extreme example with many randomly placed sample images in a scene. The user of the viewer application can navigate freely using the keyboard and mouse. From an arbitrary point in the scene, the reconstruction process can be started. The generated images are stored as .png files. In addition, the viewer application allows the user to manipulate all relevant parameters of the SDI methods, like μ, the maximum number of samples for the k-nearest neighbor search, and so on. The viewer application and the reconstruction algorithms were implemented in Java. For the experiments, three arrangements of camera positions and directions were chosen.
Fig. 2. Viewer application showing the position and content of different sample images
Fig. 3. Test scenes "Radiosity" and "CubeSphere"
1. Inside-out view. The camera has a fixed viewpoint and is directed towards all sides of a virtual, surrounding cube.
2. Outside-in view. The camera is directed towards the midpoint of the scene and changes its standpoint. 100 images are rendered for a complete "orbit".
3. Random. The camera has a random viewpoint and a random direction. For the experiments, 500 random sample images are generated, with the camera direction always parallel to the "floor".

Two scenes were used: "Radiosity" and "CubeSphere" (cf. Fig. 3). The reconstructed images were evaluated visually, and in addition the RMS (root mean square) error was calculated, obtained by rendering reference images with POV-Ray with the corresponding parameters. The RMS error was calculated for red, green and blue separately and was averaged to obtain one error value per image.
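For illustration, the per-image error can be computed as a per-channel RMS averaged over the three channels (a small sketch; the function name is ours).

import numpy as np

def rms_error(reconstructed, reference):
    # Per-channel RMS between a reconstructed and a reference image
    # (arrays of shape (h, w, 3)), averaged over R, G and B.
    diff = reconstructed.astype(float) - reference.astype(float)
    per_channel = np.sqrt((diff ** 2).mean(axis=(0, 1)))
    return per_channel.mean()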
Table 1. Time for rendering the test scenes and averaged RMS error

Scene         DW rms   DW time [seconds]   HM rms             HM time [seconds]   NN rms   NN time [seconds]
CubeSphere1   5.49     6.32                5.59 (μ = 0.9)     26.19               6.58     74.55
CubeSphere2   12.3     21.7                53.21 (μ = 0.1)    34.11               19.51    2013.14
CubeSphere3   9.35     21.7                16.81 (μ = 0.02)   34.11               n/a      n/a
Radiosity1    8.17     6.54                8.88 (μ = 0.9)     26.31               9.73     66.58
Radiosity2    17.26    28.22               24.42 (μ = 0.1)    41.07               13.57    1622.67
Radiosity3    36.44    27.32               49.71 (μ = 0.02)   42.09               n/a      n/a

4.1 Numerical Evaluation
Table 1 shows the RMS error and calculation time of the reconstructed images for camera arrangements 1, 2 and 3 and the three SDI methods Shepard's (DW), Hardy's multiquadrics (HM) and nearest neighbor (NN). HM images were generated with μ values producing (nearly) the smallest error and a maximum of 15 nearest samples. For sample sets from random camera positions and directions we did not reconstruct images using the NN method because of the expected very large computation times. Shepard's method produced the smallest RMS error for almost all camera arrangements and scene types. It also had the smallest reconstruction times. The time needed for the reconstruction is very high for the nearest neighbor method, because no original sample images are rejected by pre-filtering during reconstruction. Shepard's method is faster than Hardy's multiquadrics, because HM needs to solve many linear systems during image generation. The next experiment shows the influence of the parameter μ on the RMS error for Shepard's method. The example of Fig. 4 shows the RMS error for varying μ for the scene "CubeSphere" and inside-out views. As a reference, the RMS error for nearest neighbor selection is shown, too. There is a clear relation between μ and numerical image quality. For μ ≈ 3 the best results were achieved. Any interpolation scheme has more or less difficulties when high-frequency signals have to be reconstructed. In the case of rendering this means that SDI
Fig. 4. Influence of parameter μ on the RMS error for Shepard’s method
Table 2. Comparison of the RMS error of SDI methods for different surface types

SDI method      RMS diffuse/specular
DW, μ = 2       12.29/15.93
DW, μ = 4       11.80/15.23
DW, μ = 8       11.85/15.24
HM, μ = 0.01    17.41/20.22
NN              16.34/21.29
Fig. 5. Comparison of reference image (left) with Shepard’s method (middle left), Hardy’s multiquadrics (middle right) and nearest neighbor selection (right)
methods should have more problems generating images from samples with highly specular surfaces (compared to diffuse surfaces). Table 2 shows a comparison of the methods for diffuse and specular surfaces for different μ values.

4.2 Visual Evaluation
All methods provide reasonable visual quality for inside-out views from constant viewpoints, tested with the "Radiosity" and "CubeSphere" scenes. Figure 5 shows the "CubeSphere" scene for an outside-in camera arrangement. It can be observed that Hardy's multiquadrics and, even more so, Shepard's method show "ghosting" when the camera position is not fixed. This is even more pronounced in the case of random camera positions. Ghosting is not visible with the nearest neighbor method, which in turn shows hard transitions during camera movements.
Fig. 6. Results for diffuse and specular surfaces with Hardy’s multiquadrics (μ = 0.01)
The influence of the surface properties can be observed in Fig. 6, where a purely diffuse and a purely specular scene are reconstructed using Hardy's multiquadrics (μ = 0.01). Both images are more or less blurry, but in the case of the specular surfaces the effect is more pronounced, because the interpolation filters out high frequencies.
5 Conclusion and Future Work
We presented a method for the continuous reconstruction of radiance values in phase space. By utilizing scattered data interpolation methods, a reconstruction can be done even in sparsely sampled areas. The initial experiments showed that our approach is suited for inside-out views, such as panoramic views. For moving cameras, a densely sampled phase space is needed, because otherwise Shepard's method and Hardy's multiquadrics show ghosting. This paper presents the first results on phase space rendering. Many ideas can be realized in future work:

Other SDI methods. Many other SDI methods exist and might yield better visual and numerical results. In [5] it was shown that natural neighbor interpolation provided good results in the context of image synthesis.

Projection to lower dimensions. Problems like nearest neighbor search and orthogonal range queries become increasingly expensive in higher dimensions, tending toward linear runtime in very high dimensions (a small illustration of such queries is given after this list). Projecting samples from phase space into a lower dimensional space would make these queries more efficient, but the quality of the result would most probably be worse. It is also an open question which subspace to use; perhaps two or more results from different projections can be combined. In three-dimensional subspaces, graphics hardware could be used to accelerate the computations.

On demand sample generation. The quality of the results presented depends on the sample density around the reconstruction point. If the sample density is too low during reconstruction, samples could be generated on demand with an image synthesis program.

Dynamic scenes. As presented, phase space rendering depends on static scenes. The simplest approach for dynamic scenes would be to update all samples after an object moves. This could be done on demand, only for those samples that remain after the pre-filtering phase described in Section 3.
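As an illustration of the nearest neighbor and range queries discussed under "Projection to lower dimensions", the following Python sketch queries hypothetical 5-D phase-space samples with SciPy's cKDTree; the dimensionality, k, and radius are assumptions chosen for illustration and are not taken from the paper.

import numpy as np
from scipy.spatial import cKDTree

# Hypothetical 5-D phase-space samples (3-D position plus two direction angles).
rng = np.random.default_rng(1)
samples = rng.random((10000, 5))

tree = cKDTree(samples)
query = np.full(5, 0.5)

dist, idx = tree.query(query, k=15)           # the 15 nearest samples
nearby = tree.query_ball_point(query, r=0.1)  # radius query around the reconstruction point

print(dist.max(), len(nearby))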
References

1. Havran, V.: Heuristic Ray Shooting Algorithms. Ph.D. thesis, Department of Computer Science and Engineering, Faculty of Electrical Engineering, Czech Technical University in Prague (2000)
2. Gortler, S.J., Grzeszczuk, R., Szeliski, R., Cohen, M.F.: The lumigraph. In: SIGGRAPH 1996: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, pp. 43–54. ACM Press, New York (1996)
3. Levoy, M., Hanrahan, P.: Light field rendering. In: SIGGRAPH 1996: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, pp. 31–42. ACM Press, New York (1996)
4. Shum, H.Y., Li, Y., Kang, S.B.: An introduction to image-based rendering. In: Integrated Image and Graphics Technologies, pp. 131–159. Kluwer Academic Publishers, Norwell, MA, USA (2004)
5. Hinkenjann, A., Pietrek, G.: Using scattered data interpolation for radiosity reconstruction. In: CGI 1998: Proceedings of the Computer Graphics International 1998, p. 715. IEEE Computer Society Press, Washington, DC, USA (1998)
6. Salesin, D., Lischinski, D., DeRose, T.: Reconstructing illumination functions with selected discontinuities. In: Third Eurographics Workshop on Rendering, Bristol, UK, pp. 99–112 (1992)
7. Bastos, R., Goslin, M., Zhang, H.: Efficient radiosity rendering using textures and bicubic reconstruction. In: Symposium on Interactive 3D Graphics, pp. 71–74 (1997)
8. Zickler, T., Enrique, S., Ramamoorthi, R., Belhumeur, P.: Image-based rendering from a sparse set of images. In: SIGGRAPH 2005: ACM SIGGRAPH 2005 Sketches, p. 147. ACM Press, New York (2005)
9. Alfeld, P.: Mathematical methods in computer aided geometric design. In: Mathematical Methods in Computer Aided Geometric Design, pp. 1–33. Academic Press Professional, Inc., San Diego, CA, USA (1989)
10. Adelson, E.H., Bergen, J.R.: The plenoptic function and the elements of early vision. In: The Plenoptic Function and the Elements of Early Vision, MIT Press, Cambridge (1991)
11. Amidror, I.: Scattered data interpolation methods for electronic imaging systems: a survey. Journal of Electronic Imaging 11(2), 157–176 (2002)
12. Chai, J.X., Chan, S.C., Shum, H.Y., Tong, X.: Plenoptic sampling. In: SIGGRAPH 2000: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, pp. 307–318. ACM Press/Addison-Wesley Publishing Co., New York (2000)
13. POV-Ray – The Persistence of Vision Raytracer (2007), http://www.povray.org [Online; accessed 30-July-2007]
14. Bentley, J.L.: Multidimensional binary search trees used for associative searching. Commun. ACM 18, 509–517 (1975)
15. Hardy, R.L.: Theory and applications of the multiquadric-biharmonic method. Computer Math. Applications 19, 1905–1915 (1990)
Automatic Extraction of a Quadrilateral Network of NURBS Patches from Range Data Using Evolutionary Strategies

John William Branch 1, Flavio Prieto 2, and Pierre Boulanger 3

1 Escuela de Sistemas, Universidad Nacional de Colombia - Sede Medellín, Colombia, [email protected]
2 Departamento de Ingeniería Eléctrica, Electrónica y Computación, Universidad Nacional de Colombia - Sede Manizales, Colombia, [email protected]
3 Department of Computing Science, University of Alberta, Canada, [email protected]
Abstract. We propose an algorithm to automatically produce a 3-D CAD model from a set of range data, based on a non-uniform rational B-spline (NURBS) surface fitting technique. Our goal is to automatically construct continuous geometric models, assuming that the topology of the surface is unknown. In the proposed algorithm, the triangulated surface is partitioned into quadrilateral patches using Morse theory. The quadrilateral regions on the mesh are then regularized using geodesic curves and B-splines to obtain an improved smooth network on which to fit NURBS surfaces. The NURBS surfaces are fitted and optimized using evolutionary strategies. In addition, the patches are smoothly joined, guaranteeing C 1 continuity. Experimental results are presented.
1 Introduction
Three-dimensional reconstruction is the process by which the 3D geometry of real-world objects is captured in computer memory from geometric sensors such as laser scanners, photogrammetric cameras, and tomography machines. The reconstructed models consist of two main kinds of information: first, the physical characteristics of the object, such as density, volume, and shape; second, its topological structure, such as the adjacency between points, surfaces, and volumes. Finding a general representation of 3D shape from 3D sensors that is useful for industrial and medical applications has proven to be a nontrivial problem. Until recently, there were no truly automated ways to perform this task, hence the flurry of surface reconstruction packages available in industry, such as Polyworks, RapidForm, and Geomagic, to name a few. Many of these software packages can perform some of the reconstruction tasks automatically, but many of them require extensive user input, especially when one deals with high-level representations such as CAD models of complex industrial parts or of natural shapes such as those found in medical applications.
There are many ways to perform this high-level surface reconstruction task from 3D sensors. In most schemes, surface reconstruction starts with the registration of the various views produced by the 3D sensor. Each view is then referenced relative to a common central coordinate system. In some systems this task is performed using data-to-data registration; other systems use external positioning devices such as mechanical arms or optical tracking systems. Following this process, the views are triangulated to create a unique, non-redundant 3D mesh. Many of the commercial packages can do this meshing process automatically. Because of sensor occlusions and limitations, the mesh produced by the triangulation process is frequently plagued with holes that need to be filled in order to create a leak-free solid model. Most commercial systems use some sort of semi-automated algorithm, but more recently new algorithms [1] based on radial basis functions were introduced to perform this task automatically. In most cases, the hole-filling process keeps the original discontinuities of the real object and generates a complete, closed triangular model. Following this process, many commercial packages require extensive manual processing to lay a network of curves on the 3D mesh on which NURBS surfaces can be fitted. This process is extremely labor intensive and requires strong experience in 3D modeling and data processing. In this paper, we present a possible solution to this problem using an automated NURBS extraction process in which the 3D mesh is converted automatically into a smooth quadrilateral network of NURBS surfaces based on Morse theory. Section 2 presents a literature review of the state of the art in automated NURBS fitting algorithms. Section 3 describes how Morse theory can be applied to determine critical points from an eigenvalue analysis of the Laplacian of the surface mesh. Section 4 describes how to regularize the curves joining the critical points using geodesic computation and B-spline fitting. Section 5 describes how to fit smooth NURBS with C 1 continuity constraints for each region using an evolutionary strategy. Section 6 presents experimental results and compares them to a well-known algorithm. We then conclude and discuss future work.
2 Literature Review
Eck and Hoppe [2] present the first complete solution to the problem of fitting a network of B-spline surfaces of arbitrary topology to scattered, unordered points. The method builds an initial parameterization, which in turn is re-parameterized to build a triangular base, which is then used to create a quadrilateral domain. In the quadrilateral domain, the B-spline patches are fitted with C 1 continuity. This method, although effective, is quite complex due to the number of steps and processes required to build the network of B-spline patches. It is also limited to B-splines, as opposed to NURBS. Krishnamurthy and Levoy [3] presented an approach to fit NURBS surface patches to a cloud of points. The method consists of building a polygonal mesh on the point set. Using this 3D mesh, a re-sampling is performed to generate a
regular mesh, on which NURBS surface patches can be fitted. The method performs poorly when dealing with complex surfaces and surfaces with holes. Another limitation is the underlying difficulty of maintaining continuity between the NURBS surface patches. Boulanger et al. [4] describe a linear approximation of continuous pieces by means of trimmed NURBS surfaces. This method generates triangular meshes which adapt to the local surface curvature. First, the surface is approximated with hierarchical quadrilaterals without considering the jagged curves. Later, jagged curves are inserted and the hierarchical quadrilaterals are triangulated. The result is a triangulation which satisfies a given tolerance. The insertion of jagged curves is improved by organizing the quadrilateral hierarchy into a quad-tree structure. The quality of the triangles is also improved by means of a Delaunay triangulation. Although this method produces good results, it is restricted to continuous surfaces and does not accurately model fine details. A different approach is presented by Yvart et al. [5], which uses triangular NURBS for fitting scattered points. Triangular NURBS do not require the point set to have a rectangular topology, although they are more complex than NURBS. Similar to the previous works, this approach requires intermediate steps in which triangular meshes are reconstructed and re-parameterized, and in which G1 continuity between patches is enforced to obtain a surface model. Dong et al. [6] describe a fundamentally new approach to the quadrangulation of manifold polygon meshes using Laplacian eigenfunctions, the natural harmonics of the surface. These surface functions distribute their extrema evenly across a mesh, and the extrema connect via gradient flow into a quadrangular base mesh. An iterative relaxation algorithm simultaneously refines this initial complex to produce a globally smooth parameterization of the surface. From this, they can construct a well-shaped quadrilateral mesh with very few extraordinary vertices. The quality of this mesh relies on the initial choice of eigenfunction, for which they describe algorithms and heuristics to efficiently and effectively select the harmonic most appropriate for the intended application.
3 Determination of Morse Critical Points Using Spectral Coding of the Laplacian Matrix
The proposed procedure estimates an initial quadrangulation of the mesh using a spectral coding scheme based on eigenvalue analysis of the Laplacian matrix of the 3D mesh. The method is similar to the one proposed by Dong et al. [7]. Initially, the quadrilateral vertices are obtained as the set of critical points of a Morse function. Discrete Morse theory [6] guarantees that, regardless of the topological complexity of the surface represented by the triangular mesh, a complete quadrilateral description of the surface is possible. Since a scalar function is required at each vertex, it has been shown in [7] that an eigenvector of the Laplacian matrix induces a Morse-Smale complex, providing a spectral coding function that can be used to determine which vertices are Morse critical points. One advantage of eigenvalue analysis over other coding
schemes is that by selecting which eigenvector is used one can directly control the number of critical points on the surface, since higher frequencies produce a higher number of critical points. The eigenvector value assigned to every vertex of the mesh is then analyzed to determine whether the vertex is a Morse critical point. In addition, by comparing it with the values in the first-ring neighborhood of every vertex, it is possible to classify the critical points as maxima, minima, or "saddle points." Once the critical points are obtained and classified, they can be connected to form a quadrilateral base of the mesh using Algorithm 1.1:

Algorithm 1.1. Building method of Morse-Smale (MS) cells.

CriticalPointsInterconnection();
begin
    Let T = {F, E, V} be the triangulation M;
    Initialize the Morse-Smale complex, M = 0;
    Initialize the set of cells and paths, P = C = 0;
    S = SaddlePointFinding(T);
    S = MultipleSaddlePointsDivision(T);
    SortByInclination(S);
    for every s ∈ S in ascending order do
        CalculateAscendingPath(P);
    end
    while there exists an intact f ∈ F do
        GrowingRegion(f, p0, p1, p2, p3);
        CreateMorseCells(C, p0, p1, p2, p3);
    end
    M = MorseCellsConnection(C);
end
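A minimal Python/NumPy sketch of the spectral coding and critical-point classification described above: it builds a dense graph Laplacian from the mesh edges, takes one eigenvector as the scalar field, and classifies a vertex from its one-ring. The dense eigensolver, the eigenvector index k, and the function names are illustrative assumptions, not the authors' code.

import numpy as np

def laplacian_eigenvector(n_vertices, edges, k=5):
    # Dense graph Laplacian of the mesh and its k-th eigenvector.
    # `edges` is an iterable of (i, j) vertex index pairs; larger k yields
    # higher frequencies and therefore more critical points.
    L = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        L[i, j] = L[j, i] = -1.0
    np.fill_diagonal(L, -L.sum(axis=1))     # diagonal = vertex degree
    evals, evecs = np.linalg.eigh(L)
    return evecs[:, k]

def classify_vertex(f, v, one_ring):
    # Morse-style classification of vertex v from the scalar field f;
    # `one_ring` is assumed to be the cyclically ordered neighbor indices.
    diffs = np.sign(f[list(one_ring)] - f[v])
    if np.all(diffs < 0):
        return "maximum"
    if np.all(diffs > 0):
        return "minimum"
    changes = sum(diffs[i] != diffs[(i + 1) % len(diffs)]
                  for i in range(len(diffs)))
    return "saddle" if changes >= 4 else "regular"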
4 Regularization of the Quadrilateral Regions
Because the surface needs to be fitted using NURBS patches, it is necessary to regularize the boundaries of the quadrilateral regions connecting the critical points on the mesh. Regularization here means that we need a fixed number of points (λ) on each boundary and that the boundary is described by a smoothing function such as a B-spline. Algorithm 1.2 is proposed to regularize the paths on the mesh joining the critical points. In this algorithm, the link between two Morse critical points on the mesh is defined by a geodesic trajectory between them; a geodesic trajectory is the minimal path joining two points on a manifold. To compute this geodesic path on the mesh, we use a Fast Marching Method (FMM) algorithm [8], which computes the minimal trajectory joining two critical points on a discrete mesh in O(n log n) time. At the end of the regularization process, a B-spline curve is fitted to the geodesic path and the curve is re-sampled with λ points to obtain a grid which is used to fit the NURBS surfaces. This is a much simpler and more robust algorithm than the one proposed by Dong, where a uniform parameterization is computed.
Algorithm 1.2. Quadrilateral path regularization algorithm.

Regularization();
begin
    1. Quadrilateral selection;
    2. Determination of common paths between regions by computing the geodesics on the mesh connecting both points;
    3. Smoothing of the geodesic path using B-spline fitting functions;
    4. Determination of common boundary points by interpolating λ points on the path;
end
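A sketch of the smoothing and resampling steps of Algorithm 1.2, assuming SciPy's splprep/splev for the B-spline fit; the smoothing factor, λ, and the toy polyline are illustrative choices and not part of the original method.

import numpy as np
from scipy.interpolate import splprep, splev

def regularize_path(path_points, lam=16, smooth=0.0):
    # Fit a cubic B-spline to a geodesic polyline (m, 3) and resample
    # `lam` points along it, giving one regular boundary of a quadrilateral.
    pts = np.asarray(path_points, dtype=float)
    tck, _ = splprep(pts.T, s=smooth, k=min(3, len(pts) - 1))
    u = np.linspace(0.0, 1.0, lam)
    return np.column_stack(splev(u, tck))

# Toy usage: a noisy 3-D polyline standing in for a geodesic between
# two Morse critical points.
t = np.linspace(0, 1, 40)
poly = np.column_stack([t, np.sin(2 * np.pi * t), 0.05 * np.random.rand(40)])
boundary = regularize_path(poly, lam=16, smooth=0.01)
print(boundary.shape)   # (16, 3)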
5 Surface Fitting Using Optimized NURBS Patches
In order to fit NURBS surfaces smoothly on the quadrilateral network, a method based on an evolutionary strategy (ES) is proposed. To fit a NURBS surface onto a grid, one needs to determine the weights of the control points of the NURBS surface without modifying the locations of the sampled points of the original surface. The main goal of this algorithm is to reduce the error between the NURBS surfaces and the data points inside the quadrilateral regions. In addition, the algorithm makes sure that the C 1 continuity condition is preserved for all optimized NURBS patches. The proposed algorithm is composed of two parts: first, an optimization of the NURBS patch parameters is performed, and second, the NURBS patch intersections are computed.

5.1 Optimization of the NURBS Patch Parameters
A NURBS surface is completely determined by its control points P_{i,j} and by its weight factors w_{i,j}. The main difficulty in fitting a NURBS surface locally lies in finding an adequate parameterization for the NURBS and in automatically choosing the number of control points and their positions. The weight factors w_{i,j} of a NURBS surface determine the degree of local influence of a control point on the surface. Generally, as in Dong [6], the weights of the control points of a NURBS surface are assigned homogeneously and are set equal to 1 in most common algorithms, reducing the NURBS to a simple B-spline surface. The reason for this simplification is that determining the control-point weights for fitting arbitrarily curved surfaces is a complex non-linear problem. This restricts NURBS fitting to a regular point set: every row must have the same number of points, making it impossible to fit the surface to a scattered, unordered point cloud. When an explicit surface function is fitted using NURBS, the following equation is normally minimized:

\delta = \sum_{l=1}^{n_p} \left( Z_l - \frac{\sum_{i=0}^{n} \sum_{j=0}^{m} N_{i,p}(u) N_{j,q}(v) w_{i,j} P_{i,j}}{\sum_{i=0}^{n} \sum_{j=0}^{m} N_{i,p}(u) N_{j,q}(v) w_{i,j}} \right)^2    (1)
where N_{i,p}(u) and N_{j,q}(v) are B-spline basis functions of degree p and q in the parametric directions u and v respectively, w_{i,j} are the weights, P_{i,j} the control points, and n_p the number of control points. If the number of knots and
their positions are fixed, as are the weights, and only the control points \{\{P_{i,j}\}_{i=1}^{n}\}_{j=1}^{m} \in \mathbb{R}^3 are considered during the minimization of Equation (1), then we have a linear least-squares problem. If the knots or the weights are considered unknown, it is necessary to solve a non-linear problem during the fitting process. In many applications the optimal position of the knots is not necessary; hence, the knot location problem is solved using heuristics. In the proposed algorithm, multiple ES of type "+" are used. These are generally denoted as (γ +, μ), where γ is the size of the population and μ is the size of the offspring. The symbol "+," indicates that there are two replacement possibilities: deterministic replacement by inclusion (type "+") or deterministic replacement by insertion (type ","). The optimization process can be described as follows. Let P = \{P_1, P_2, \ldots, P_n\} be a point set in \mathbb{R}^3 sampled from the surface of a physical object; our problem consists of minimizing

E(S') = \frac{1}{n} \sum_{i=1}^{n} d_{P_i, S'_i} < \delta    (2)
where d_{P_i,S'_i} represents the distance between a point of the set P of sampled points of the original surface S and a point on the approximated surface S'. To obtain the configuration of the surface S', E is minimized to a tolerance lower than the given δ. The search is performed by means of a (μ + λ) evolution strategy configured as follows:

– Representation criteria: Representation is performed using pairs of real vectors. A representation using triples is often used, where the last vector controls the correlation between the mutations of each component, but because of the expense of that method we decided to use only pairs.
– Treatment of non-feasible individuals: Individuals are filtered, ignoring non-feasible ones.
– Genetic operators:
  • Individual: composed of the weights of the control points belonging to the original point cloud and the parameters of mutation step adaptation. The initial values w_i, δ_i of every individual are uniformly distributed in the interval [0.01, 1.0]. This range is chosen because it is not possible to set a weight to zero.
  • Mutation: Mutation of individuals is uncorrelated, with n_σ mutation steps as established in the individual configuration, and is performed as indicated in the following equations:
\sigma'_i = \sigma_i \, e^{\,c_0 N(0,1) + c_i N_i(0,1)}, \qquad x'_i = x_i + \sigma'_i N_i(0,1)    (3)
where N(0, 1) is a normal distribution with expected value 0 and variance 1, and c_0, c_i are constants which control the size of the mutation step, i.e., the change in the mutation step σ. Once the mutation step has been updated, the mutated individual w'_i is generated.

– Selection criteria: The best individuals in each generation are selected according to the fitness function given by Equation (2).
– Replacement criteria: In ES, the replacement criterion is always deterministic, which means that the μ or γ best members are chosen. In this case, replacement by inclusion was used (type "+"), in which the μ descendants are joined with the γ parents into a single population, and from it the γ best members are taken for the new population.
– Recombination operator: Two types of recombination are applied, depending on whether object variables w_i or strategy parameters σ_i are being recombined. For the object variables, an intermediate global recombination is used:
b'_i = \frac{1}{\rho} \sum_{k=1}^{\rho} b_{k,i}    (4)
where b'_i is the new value of component i, and ρ is the number of individuals in the population. For the strategy parameters, an intermediate local recombination is used:

b'_i = u_i b_{k_1,i} + (1 - u_i) b_{k_2,i}    (5)
where b'_i is the new value of component i, and u_i is a real number distributed uniformly within the interval [0, 1].
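A compact Python/NumPy sketch of the ES update rules of Equations (2)-(5); the constants c0 and ci, the population handling, and the distance callback are illustrative assumptions rather than the authors' implementation.

import numpy as np

rng = np.random.default_rng(42)

def mutate(w, sigma, c0=0.1, ci=0.2):
    # Uncorrelated self-adaptive mutation, Eq. (3); c0 and ci are illustrative constants.
    sigma_new = sigma * np.exp(c0 * rng.normal() + ci * rng.normal(size=sigma.shape))
    w_new = w + sigma_new * rng.normal(size=w.shape)
    return np.clip(w_new, 0.01, 1.0), sigma_new   # weights are kept in [0.01, 1.0]

def recombine(parent_w, parent_sigma):
    # Intermediate global recombination of the weights, Eq. (4),
    # and intermediate local recombination of the step sizes, Eq. (5).
    w_child = parent_w.mean(axis=0)
    k1, k2 = rng.choice(len(parent_sigma), size=2, replace=False)
    u = rng.random(parent_sigma.shape[1])
    sigma_child = u * parent_sigma[k1] + (1.0 - u) * parent_sigma[k2]
    return w_child, sigma_child

def fitness(w, distance_fn):
    # Mean point-to-surface distance E(S'), Eq. (2); `distance_fn` is a user-supplied
    # callback returning d_{P_i,S'_i} for every sampled point given the weights w.
    return distance_fn(w).mean()

def plus_selection(pool, pool_fitness, gamma):
    # Deterministic "+" replacement: keep the gamma best of parents and offspring.
    order = np.argsort(pool_fitness)
    return [pool[i] for i in order[:gamma]]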
5.2 NURBS Patch Intersections
Several authors use complex schemes to guarantee continuity of normals in reconstructed models. Loop [9] proposes a continuity scheme in which three different types of patches are used for special cases: in neighborhoods with high curvature, bi-quadratic Bézier patches are used; at corners with triangular neighborhoods, cubic Bézier patches are used; and in regular zones, bi-quadratic spline patches are used. In a similar way, Eck and Hoppe [2] use a model in which bi-quadratic B-spline functions and bi-cubic Bézier functions are fused to guarantee continuity between patches. Continuity in regular cases (four patches joined at a vertex) is a solved problem [2]. However, in neighborhoods where the number of neighbors differs from 4 (v ≠ 4), continuity must be adjusted to guarantee a smooth transition of the surface between patches of the partition. C 0 continuity means that positional continuity must exist between two neighboring patches; this kind of continuity only guarantees that no holes appear at the joining boundary between two parametric surfaces. C 1 continuity means that continuity of normals must exist between two neighboring patches; it also guarantees a smooth transition between patches, offering a correct graphical representation. In this algorithm, C 1 continuity between NURBS patches is guaranteed using Peters' continuity model [10], which guarantees continuity of normals between bi-cubic spline functions. Peters proposes a regular and general model of bi-cubic NURBS functions with regular knot vectors and the same number of control points in both parametric directions. Accordingly, Peters' model was adapted by choosing general NURBS functions with the same
number of control points in both parametric directions, bi-cubic basis functions, and regular expansions in their knot vectors.
6 Experimental Results
The tests were performed using a 3.0 GHz processor with 1.0 GB of RAM, running the Microsoft Windows XP operating system. The methods were implemented using C++ and MATLAB, and the graphics programs used OpenGL 1.1. The 3D data used were digitized with a Minolta Vivid 9i. The precision of the measurements was on the order of 0.05 mm. Figure 1 shows the result of NURBS extraction on a pre-Columbian ceramic object. It was necessary to integrate 18 range images to produce a complete model, as shown in Figure 1(a). Figure 1(b) shows the registered and triangulated model of the object, which is composed of 22217 points, with an average error of 0.0254. The surface has two topological anomalies associated with occlusions, which were corrected using a local radial basis function interpolation scheme described in [11]. This technique guarantees that the newly reconstructed region blends smoothly with the rest of the mesh and also keeps the sampling density of the original mesh intact. The final model is obtained with 391 patches of optimized NURBS surfaces with a fitting error of 1.80×10−4 (see Figures 1(e), 1(f)). The reconstruction of the object took an average computing time of 21 minutes.

6.1 Comparison Between the Proposed Method and Eck and Hoppe's Method
The work by Eck and Hoppe [2] performs a similar adjustment using a network of B-spline surface patches which are iteratively refined until they achieve a
Fig. 1. Reconstruction of a pre-Columbian object using a quadrilateral network of NURBS patches: (a) initial image set; (b) registered images; (c) hole detection and analysis; (d) hole filling; (e) extracted quadrilateral regions; (f) final model obtained using NURBS patches
Fig. 2. Comparison between the proposed method and Eck and Hoppe’s method. a) triangulated model, b) 27 patches model (proposed method without optimization), c) 27 patches model (proposed method with optimization), d) 29 patches model (Eck and Hoppe’s method without optimization), e) 156 patches model (Eck and Hoppe’s method with optimization).
preset error tolerance. The optimization process performed by Eck and Hoppe reduces the error by generating new patches, which considerably increases the number of patches representing the surface. The increase in the number of patches reduces the error because the regions to be adjusted are smaller and more geometrically homogeneous. In the method proposed in this paper, the optimization process focuses on improving the fit of every patch by modifying only its parameterization (control points and weights). For this reason, the number of patches does not increase after the optimization process. The final number of patches representing each object is determined by the number of critical points obtained from the eigenvector associated with the eigenvalue selected from the solution of the Laplacian matrix, and it does not change at any stage of the process. Figure 2 shows two objects (foot and skidoo part) reported by Eck and Hoppe. The models created with the proposed method are composed of 27 and 25 patches, while Eck and Hoppe used 156 and 94 patches for the same precision. This represents reductions of 82% and 73% in the number of patches, respectively.
7 Conclusion
The methodology proposed in this paper for the automation of reverse engineering of free-form three-dimensional objects has a wide application domain, allowing surfaces to be approximated regardless of the topological complexity of the original objects. A novel method for fitting a triangular mesh using optimized NURBS patches has been proposed. This method is topologically robust and guarantees that the base complex is always a quadrilateral network of NURBS patches, which is compatible with most commercial CAD systems. This algorithm is simpler and more robust and does not require an extensive optimization of the surface parameterization as in Dong. In the proposed algorithm, the NURBS patches are optimized using multiple evolutionary strategies to estimate the optimal NURBS parameters. The resulting NURBS are then joined, guaranteeing C 1 continuity. Another advantage of
this algorithm over Dong's is that the formulation of C 1 continuity presented in this paper can be generalized: it can be applied to both regular and irregular neighborhoods, regardless of the partitioning and parameterization. In the future, we plan to explore other spectral coding functions that are more intrinsic and invariant to the way the object is embedded in 3-D space. One possible avenue is to use the eigenvalues of the curvature matrix instead of the Laplacian.
References

1. Carr, J., Beatson, R., Cherrie, J., Mitchell, T., Fright, W., McCallum, B., Evans, T.: Reconstruction and representation of 3d objects with radial basis functions. In: Proc. 25th International Conference on Computer Graphics and Interactive Techniques, Los Angeles, USA, pp. 67–76. ACM Press, New York (2001)
2. Eck, M., Hoppe, H.: Automatic reconstruction of b-spline surfaces of arbitrary topological type. In: Proc. 23rd International Conference on Computer Graphics and Interactive Techniques, pp. 325–334 (1996)
3. Krishnamurthy, V., Levoy, M.: Fitting smooth surfaces to dense polygon meshes. In: Proc. 23rd International Conference on Computer Graphics and Interactive Techniques, pp. 313–324 (1996)
4. Boulanger, P.: Triangulating NURBS surfaces, curve and surface design. Technical report, Vanderbilt University Press, Nashville, Tennessee, USA (2000)
5. Yvart, A., Hahmann, S., Bonneau, G.: Smooth adaptive fitting of 3-d models using hierarchical triangular splines. In: Proc. International Conference on Shape Modeling and Applications (SMI 2005), Boston, USA, pp. 13–22 (2005)
6. Dong, S., Bremer, P.-T., Garland, M., Pascucci, V., Hart, J.C.: Spectral surface quadrangulation. In: SIGGRAPH 2006: ACM SIGGRAPH 2006 Papers, pp. 1057–1066. ACM Press, New York (2006)
7. Dong, S., Bremer, P., Garland, M., Pascucci, V., Hart, J.: Quadrangulating a mesh using laplacian eigenvectors. Technical report, University of Illinois, USA (2005)
8. Dicker, J.: Fast Marching Methods and Level Set Methods: An Implementation. PhD thesis, Department of Computer Science, University of British Columbia (2006)
9. Loop, C.: Smooth spline surfaces over irregular meshes, Orlando, USA, pp. 303–310 (1994)
10. Peters, J.: Constructing C1 surfaces of arbitrary topology using biquadratic and bicubic splines. Designing Fair Curves and Surfaces, 277–293 (1994)
11. Branch, J.W.: Reconstruction of Free Form Objects from Range Images using a Net NURBS Patches. PhD thesis, Universidad Nacional de Colombia (2007)
ChipViz: Visualizing Memory Chip Test Data

Amit P. Sawant 1, Ravi Raina 2, and Christopher G. Healey 3

1,3 North Carolina State University, Department of Computer Science, Raleigh, NC, USA
2 Qimonda AG, 3000 Centregreen Way, Cary, NC, USA
[email protected], [email protected], [email protected]
Abstract. This paper presents a technique that allows test engineers to visually analyze and explore within memory chip test data. We represent the test results from a generation of chips along a traditional 2D grid and a spiral. We also show correspondences in the test results across multiple generations of memory chips. We use simple geometric “glyphs” that vary their spatial placement, color, and texture properties to represent the critical attribute values of a test. When shown together, the glyphs form visual patterns that support exploration, facilitate discovery of data characteristics, relationships, and highlight trends and exceptions in the test data that are often difficult to identify with existing statistical tools.
1 Introduction

One of the biggest challenges in analyzing memory test data is discovering the interrelationships between different test attributes. It is often time consuming and difficult to correctly interpret different test attributes using existing data analysis tools. With semiconductor manufacturing processes and technology changing rapidly, and test complexity increasing with every new generation of chips, it is imperative that test data analysis tools keep pace. The objective of memory testing (or, more generally, Integrated Circuit (IC) testing) is not only to isolate bad chips, but also to identify the root cause of failure, which could be either weaknesses in chip design or issues in the manufacturing processes. Thus testing also acts as a feedback loop for design and manufacturing. With time to market being absolutely critical for any IC company, it is important that this feedback is integrated into the design and manufacturing process in a timely and efficient manner. This depends on how the data is organized and, more importantly, on how well it is presented for analysis and interpretation. One possible solution is to use visualization techniques to convert this large and complex dataset into a multi-dimensional visual image that test engineers can use for exploring, discovering, comparing, validating, accounting, monitoring, identifying faults and process excursions, and studying the effects of adjusting different test parameters. Numerous efforts have been made to use test data to improve yield and optimize tests. Previous research includes developing a fault simulator to determine the fault coverage of test patterns [1]. Sang-Chul et al. have developed an automatic failure analysis system based on production data [2]. Researchers have applied data-mining techniques to optimize VLSI testing [3]. Test visualization techniques have also been used in the area
of software engineering to assist fault localization [4]. Recently Van de Goor et al. have developed methods to evaluate DRAM production test results to optimize tests and fault coverage [5]. The remainder of this paper proceeds as follows. In Section 2, we provide details on data collection. Section 3 describes our visualization technique. Sections 4 and 5 provide a few examples of visualizing memory chip test data. Finally, Section 6 discusses conclusions and future work.
2 Data Collection

The visualization process begins by collaborating with domain experts to identify the important parameters relating to yield and test optimization that they want to analyze, explore, and monitor. The test datasets for this paper are taken from Qimonda AG (http://www.qimonda.com, formerly the memory chip division of Infineon Technologies AG), the fourth largest DRAM chip design and manufacturing company in the world. Generally, memory test results are collected from manufacturing sites for analysis purposes. They consist of a spreadsheet report of the tests with the corresponding failure rates and pertinent information about the individual tests' attributes for each type of memory chip. It is then up to the test engineer to manually interpret the test results. The memory chip goes through different sets of tests called insertions. Each test typically has more than one critical attribute associated with it. Moreover, the same critical attributes may appear in multiple insertions. This leads to a complex dataset that is very large and difficult to analyze and interpret correctly. Data is viewed as a spreadsheet where rows represent individual tests and columns represent the attributes of the test. These attributes are typically related to critical timings and voltages of the chip, for example: tRP, tRCD, tWR, tRAS, Vdd, Retention, Logistics, Current, and Failure Rate. Attribute definitions are provided in the Appendix (a minimal record layout is sketched after the dataset list below). The engineers currently depend on their experience and expertise to analyze the data and deduce meaningful information. Unfortunately, this is not an efficient method, as the amount of data is huge and the interrelationships are complex enough to confuse even the most experienced engineer. The test datasets for this paper are taken from DDR2/DDR3 memory chips. In this paper, we visualized the following four datasets:

1. Low Vdd at LT: This dataset contains test results from a lot (a set of memory chips) with high failure rates at low temperature (LT) and low voltage (Vdd).
2. Retention at High Vdd and HT: This case contains a certain memory chip with high retention failure rates on certain lots at high voltage and high temperature (HT).
3. tRCD and tRP at HT/LT: This case contains high row to column address delay (tRCD) and row precharge time (tRP) failure rates across multiple critical attributes at high or low temperatures.
4. Optimized Test Data: This dataset represents a stable, high volume product in which the tests/processes are already highly optimized and exceed the required pass thresholds.
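One possible way to hold a row of the spreadsheet described above is a simple record type. This is an illustrative sketch only: the field set follows the Appendix definitions, and the values in the example are dummy placeholders, not real Qimonda measurements.

from dataclasses import dataclass

@dataclass
class TestRecord:
    # One row of the memory-test spreadsheet: a single test (insertion)
    # and its critical attributes (see the Appendix for definitions).
    test_id: int
    insertion: str
    trp: float        # row precharge time, clock cycles
    trcd: float       # row address to column address delay, clock cycles
    twr: float        # write recovery time, clock cycles
    tras: float       # row active time, clock cycles
    vdd: float        # supply voltage
    retention: float  # retention time
    current: float
    failure_rate: float

# Dummy record, for illustration only.
r = TestRecord(test_id=24, insertion="LT", trp=4, trcd=4, twr=6,
               tras=12, vdd=1.7, retention=64e-3, current=0.1,
               failure_rate=0.02)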
3 Visualization Technique

To visualize the memory test results, we adopted the following design guidelines proposed by Eick [6]: (1) ensure that the visualization is focused on the user's needs by understanding the data analysis task; (2) encode data using color and other visual characteristics; and (3) facilitate interaction by providing a direct manipulation user interface. A number of well-known techniques exist for visualizing non-spatial datasets, such as geometric projection, iconic display, hierarchical, graph-based, pixel-oriented, and dynamic techniques (or some combination thereof) [7,8]. We decided an iconic display was most relevant to our goal of visualizing a memory chip's test data. Our visualizations were designed by first constructing an object to represent a single data element. Next, the objects are positioned to produce a static visualization of the memory chip test data. Glyphs are positioned based on scalar ranking attribute(s) within a traditional 2D grid or along a linear spiral embedded in a plane.

3.1 Placement Algorithm

Glyphs representing the attribute values embedded in a dataset have to be positioned appropriately in order to create an information workspace for visual sense making. We decided to use two layout methods: a traditional 2D grid and a spiral. For the grid, a two-dimensional ordering is imposed on the data elements through user-selected scalar attributes. We chose a 2D grid layout because it is an intuitive and well-known placement algorithm. For the spiral, a one-dimensional ordering is imposed on the data elements through a single user-selected scalar attribute, or "ranking" attribute. One way to map this ordering to a 2D spatial position is to use a 2D space-filling spiral. Our algorithm is based on a technique introduced by Carlis and Konstan to display data along an Archimedean spiral [9]. We have previously used 2D grid and spiral layouts to visualize storage controller performance data [10].

3.2 Data-Feature Mapping

When we design a visualization, the properties of the dataset and the visual features used to represent its data elements must be carefully controlled to produce an effective result. Important characteristics that must be considered include [11]: (1) dimensionality (the number of attributes in the dataset); (2) the number of elements; (3) visual-feature salience (the strengths and limitations that make a feature suitable for certain types of data attributes and analysis tasks); and (4) visual interference (different visual features can interact with one another, producing visual interference; this must be controlled or eliminated to guarantee effective exploration and analysis). Perceptual knowledge of how the human visual system "sees" different properties of color and texture allows us to choose visual features that are highly salient, both in isolation and in combination [12,13,14]. We map visual features to individual data attributes in ways that draw a viewer's focus of attention to important areas in a visualization. Our glyphs support variation of spatial position, color, and texture properties, including: x-position and y-position or linear radial position, hue, luminance, height, size, and
orientation. A glyph uses the attribute values of the data element it represents to select specific values of the visual features to display. After consulting with the domain experts we identify the attributes to include in a default data-feature mapping. The most important attributes should be mapped to the most salient features. The order of importance for the visual features we used is luminance, hue, and then various texture properties [12]. ChipViz allows the user to interact with the visualizations by translating, rotating, and zooming the environment. Users can change which visual features are mapped to each attribute using click-and-drag sliders. Finally, users can select individual data elements to display a pop-up balloon that describes the exact attribute values encoded by the element.
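A minimal Python/NumPy sketch of the spiral placement and a normalization step for data-feature mapping follows; the Archimedean spiral constants and the toy attribute values are illustrative assumptions, not ChipViz internals.

import numpy as np

def spiral_positions(rank, b=1.0, spacing=0.35):
    # Place one glyph per element along an Archimedean spiral r = b * theta,
    # ordered by a scalar ranking attribute (e.g. Failure Rate): small ranks
    # land near the centre, large ranks farther out.
    order = np.argsort(rank)
    theta = spacing * np.arange(len(rank))
    r = b * theta
    xy = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
    pos = np.empty_like(xy)
    pos[order] = xy            # element i keeps its identity, only its slot moves
    return pos

def normalize(attr):
    # Map an attribute into [0, 1] before using it as luminance, size, etc.
    attr = np.asarray(attr, dtype=float)
    return (attr - attr.min()) / (np.ptp(attr) + 1e-12)

# Toy values: Failure Rate drives radial position, Vdd drives luminance.
failure_rate = np.array([0.02, 0.15, 0.07, 0.30])
vdd = np.array([1.9, 1.7, 1.8, 1.6])
print(spiral_positions(failure_rate))
print(normalize(vdd))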
4 Visualization of Single Memory Chip Test Data We selected different cases of test results taken from actual DDR2/DDR3 products, and analyzed and interpreted the results with the help of our visualization tool, ChipViz. These cases represent four typical scenarios an engineer would encounter while analyzing memory chip test results. The first case shows the analysis of a lot with high fallout at LT and low Vdd. The second case describes the situation where high fallout in Retention occurs at HT and high Vdd. In the third case, we represent a more complicated scenario where high fallout occurs at HT due to multiple critical attributes (tRCD and tRP). Finally in the fourth case, we show the test results for a stable, high volume product in which the tests/processes are already highly optimized and results exceed the required pass rates. 4.1 Visualizing Low Vdd at LT This dataset represents memory chip test results taken from a lot with high fallout. By visualizing the test results run at LT, it is evident that numerous tests have high failure rates and the overall yield is low. It is not immediately clear from the spreadsheet data what attributes are causing this high fallout, however. By using ChipViz in Figure 1, we take advantage of visualizing multi-dimensional elements. The Failure Rate of a test is directly proportional to x-position and height, so high failure tests are sorted and can be easily viewed. y-position represents the Test ID number. In addition to this spatial filtering process, we visualize additional critical attributes among the high failing tests. The engineers requested to visualize Vdd as a primary attribute, and tRAS as a secondary attribute. Vdd is redundantly mapped to luminance, hue, and size (dark to light, red to blue, and small to large for lower to higher values, respectively). tRAS is mapped to orientation (more counterclockwise twist for larger values). By displaying the primary critical attribute Vdd with the most salient visual features (luminance, hue, and size), it is immediately evident that most of the high failure tests are dark, red, and large, showing an inverse relationship between Failure Rate and Vdd. Low Vdd is a common characteristic for most of the high failure tests. This is an important piece of information as it could point to design weakness for low voltage. The same information is visualized along a spiral in Figure 1b with distance from the center of the spiral proportional to Failure Rate (i.e., farther from the center for
Fig. 1. Visualizing Low Vdd at LT, Vdd → luminance, hue, size and tRAS → orientation: (a) Failure Rate → x-position, height and Test ID → y-position; (b) Failure Rate → radial position, height
higher Failure Rates). We can easily conclude that as a data element moves away from the center of the spiral, its glyph becomes dark, red, and large, indicating the inverse relationship between Failure Rate and Vdd. Orientations varied randomly, suggesting no correspondence between tRAS and Failure Rates.

4.2 Visualizing Retention at High Vdd and HT

A second dataset is taken from a high volume memory chip tested at high temperatures. Retention at high temperatures is one critical attribute to test. For yield improvement, it is necessary to identify the top failing tests and identify their critical attributes. We can gain yield by trying to optimize these attributes. One of the most common problems for memory chips is Retention as the temperature increases. Again in Figure 2, we map the Failure Rate of the test to x-position and height, and y-position to Test ID in order to sort the data. We then map Retention to size. We can see that most of the high failure tests have large sizes, confirming a critical Retention component. By mapping Vdd to luminance and hue, we see most glyphs becoming bright and blue as we move along the x-axis, indicating that Vdd is directly proportional to Failure Rate. The high failure tests, apart from being Retention critical, also have high Vdd. This is confirmed by our spiral view. As we move away from the center of the spiral, the size of the glyphs increases, hue tends to blue, and luminance increases. As before, no patterns between tRAS (represented with orientation) and Failure Rate were visible.
Fig. 2. Visualizing Retention at High Vdd and HT, Vdd → luminance, hue, Retention → size, and tRAS → orientation: (a) Failure Rate → x-position, height and Test ID → y-position; (b) Failure Rate → radial position, height
Fig. 3. Visualizing tRCD, tRP at HT, Vdd → luminance, hue, tRP → size, and tRCD → orientation: (a) Failure Rate → x-position, height and Test ID → y-position; (b) Failure Rate → radial position, height
looking at the spreadsheet of test results, it is difficult to decipher any useful information quickly, unless an experienced engineer remembers the critical attributes of every test. As before, Failure Rate is represented by x-position and height, and y-position represents Test ID. We map tRP to size and tRCD to orientation. From Figure 3, we see two trends: tRP decreases along the x-axis (smaller glyphs) and tRCD increases along
Fig. 4. Visualizing optimized test data, Retention → hue, Vdd → luminance, tWR → size, and tRCD → orientation: (a) Failure Rate → x-position, height and Test ID → y-position; (b) Failure Rate → radial position, height
x-axis (more counterclockwise twist). We can conclude that most of the high failure tests have two critical attributes, tRP and tRCD. In this dataset there was no correspondence between Vdd (represented by color) and Failure Rate. The same information can be gleaned from the spiral visualization.

4.4 Visualizing Optimized Test Data

The final dataset contains test results from a stable, high volume product. From the spreadsheet data, we can interpret that the Failure Rates for the tests are all low and of the same order. This usually happens for a product which is in high volume with optimum yields. There is a possibility that even though the Failure Rate for the tests is low, a particular attribute is contributing significantly toward the top failing tests. With ChipViz, we can try to interpret not only the individual Failure Rate of the tests but also the different attributes of each test by sorting the data and mapping each critical attribute to different visual features. In this dataset, from Figure 4 we do not see any correlation or trend among the different tests and their various attributes. This suggests the dataset is for a product which is at a mature stage and for which the processes in the Front End (manufacturing sites) are stable and optimized.
5 Visualization of Multiple Memory Chips Test Data One additional advantage of ChipViz is that we can visualize test data for more than one product. This is extremely helpful in analyzing test results for a product family or for products of a particular technology or from a particular manufacturing site. This allows engineers to identify design, technology or process issues on a much wider scale.
Fig. 5. Visualizing three types of memory chips, Failure Rate → x-position, height and Test ID → y-position, chip type mapped to: (a) hue; (b) luminance; (c) size; (d) orientation
Given this powerful capability, viewers can increase or decrease the resolution of the data analysis on the fly. Figure 5 shows such a case for three types of memory chips visualized together. All three products belong to the same chip generation and technology, but have different physical sizes. We use Failure Rates to position each glyph, then define the chip being tested with hue (Figure 5a), luminance (Figure 5b), size (Figure 5c), or orientation (Figure 5d). The visualizations show that all three products have similar failure rates for most of the tests as expected. However, a small number of tests exhibit very different behavior across the different chips. That is, for some Test ID (i.e., rows in the visualization in Figure 5a), the failure rates for the three different chips are significantly different (e.g., there is no overlap between the three glyphs for Test ID 24). These tests are targeted for further analysis using individual chip visualizations from Section 4 to identify the common critical attributes that produce variable Failure Rates.
These examples illustrate the power and versatility of ChipViz. Our system can present complicated test results in a way that allows an engineer to decipher the results and draw conclusions efficiently. It also helps in highlighting information or relationships which are buried in the dataset.
6 Conclusions and Future Work We have successfully applied perceptual visualization techniques to represent memory chip test data. This allows our engineering colleagues to gain more understanding of the relationships between various attributes measured during their testing process. Our results help the engineers rapidly analyze large amounts of test data and identify critical attributes that result in high failure rates. Based on anecdotal observations of real chip engineers, it took between one to two minutes to interpret a visualization. Our visualization methods are not necessarily restricted to memory chip test data, and may be useful for other datasets with appropriate ranking attributes. In the future, we would like to conduct validation studies to quantify our visualization design choices, and to measure the improvement our system provides over existing analysis techniques. We also plan to extend our techniques to analyze simulation results from VLSI circuit design and verification.
References

1. Oberle, H.-D., Muhmenthaler, P.: Test pattern development and evaluation for DRAMs with fault simulator RAMSIM. In: Proceedings of the IEEE International Test Conference on Test, pp. 548–555. IEEE Computer Society, Washington, DC, USA (1991)
2. Oh, S.C., Kim, J.H., Choi, H.J., Choi, S.D., Park, K.T., Park, J.W., Lee, W.J.: Automatic failure-analysis system for high-density DRAM. In: Proceedings of the IEEE International Test Conference on TEST: The Next 25 Years, pp. 526–530. IEEE Computer Society, Washington, DC, USA (1994)
3. Fountain, T., Dietterich, T., Sudyka, B.: Mining IC test data to optimize VLSI testing. In: KDD 2000: Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 18–25. ACM Press, New York (2000), doi:10.1145/347090.347099
4. Jones, J.A., Harrold, M.J., Stasko, J.: Visualization of test information to assist fault localization. In: ICSE 2002: Proceedings of the 24th International Conference on Software Engineering, pp. 467–477. ACM Press, New York (2002)
5. van de Goor, A.J., de Neef, J.: Industrial evaluation of DRAM tests. In: DATE 1999: Proceedings of the Conference on Design, Automation and Test in Europe, p. 123. ACM Press, New York (1999)
6. Eick, S.G.: Engineering perceptually effective visualizations for abstract data. In: Scientific Visualization, Overviews, Methodologies, and Techniques, pp. 191–210. IEEE Computer Society, Washington, DC, USA (1997)
7. Keim, D.A.: Pixel-oriented database visualizations. SIGMOD Record (ACM Special Interest Group on Management of Data) 25, 35–39 (1996)
8. Foltz, M., Davis, R.: Query by attention: Visually searchable information maps. In: Proceedings of the Fifth International Conference on Information Visualisation, London, England, pp. 85–96 (2001)
9. Carlis, J.V., Konstan, J.: Interactive visualization of serial periodic data. In: ACM Symposium on User Interface Software and Technology, pp. 29–38 (1998)
10. Sawant, A.P., Vanninen, M., Healey, C.G.: PerfViz: A visualization tool for analyzing, exploring, and comparing storage controller performance data. In: Visualization and Data Analysis, vol. 6495, San Jose, CA, pp. 1–11 (2007)
11. Weigle, C., Emigh, W., Liu, G., Taylor, R., Enns, J.T., Healey, C.G.: Oriented texture slivers: A technique for local value estimation of multiple scalar fields. In: Proceedings Graphics Interface, Montréal, Canada, pp. 163–170 (2000)
12. Healey, C.G., Enns, J.T.: Large datasets at a glance: Combining textures and colors in scientific visualization. IEEE Transactions on Visualization and Computer Graphics 5, 145–167 (1999)
13. Interrante, V.: Harnessing natural textures for multivariate visualization. 20, 6–11 (2000)
14. Ware, C.: Information Visualization: Perception for Design, 2nd edn. Morgan Kaufmann Publishers, Inc., San Francisco (2004)
Appendix: Definitions

Below are definitions of the attributes included in the memory chip test datasets:

1. tRP (Row Precharge time): the number of clock cycles taken between issuing a precharge command and an active command to the same bank
2. tRCD (Row Address to Column Address Delay): the number of clock cycles taken between issuing an active command and a read/write command to the same bank
3. tWR (Write Recovery time): the number of clock cycles taken between writing data and issuing a precharge command to the same bank, required to guarantee that all data in the write buffer can be safely written to the memory core
4. tRAS (Row Active time): the number of clock cycles taken between issuing an active command and a precharge command to the same bank
5. Vdd: power supply voltage
6. Retention: the maximum time a DRAM cell can store its programmed data
7. Logistics: a value defining whether there is a handling issue of chips in the manufacturing site
8. Current: current measured under different conditions on the chip
9. LT: low temperature (degrees Celsius)
10. HT: high temperature (degrees Celsius)
11. tRC (Row Cycle time): the minimum time interval between successive active commands to the same bank, defined as tRC = tRAS + tRP
Enhanced Visual Experience and Archival Reusability in Personalized Search Based on Modified Spider Graph

Dhruba J. Baishya

Computer Science, Engineering Science & Physics, University of Michigan-Flint, Michigan, USA
[email protected]

Abstract. Academia and search engine industry followers consider personalization to be the future of search engines, a view well supported by the tremendous amount of research in this field. However, the impact of technological advancement seems to be focused on bringing more relevant results to the users, not on the way results are presented to them. User archives are useful resources which can be exploited more efficiently if reusability is promoted appropriately. In this paper, we present a theoretical framework which can sit on top of existing search technologies and deliver a visually enhanced user experience and archival reusability. The contribution of this paper is twofold: first, a visual interface for personal search engine setup, self-updating user interests, and session mapping based on a modified spider graph; and second, better archival reusability through user archival maps, session maps, interest-specific maps, and visual bookmarking.

Keywords: User-Interests, Click-through Data (CTD), Self-updating Modified Spider Graph, User Archival Maps, Session Mapping, Visual Bookmarking.
1 Introduction Academia and followers of the search engine industry consider personalization to be the future of search engines, a view well supported by the large amount of research in this field [5, 6, 7, 8, 10]. From the users' perspective, national surveys conducted in 2004 [11] show that the majority of Internet users are willing to exchange demographic and preference information for a personalized online experience. User experience is perhaps one of the most important aspects of search engines and can be defined as a combination of the user interface and its functionality. The one-dimensional hierarchical listing of result entries is still the primary mode of result representation for major commercial search engines. Many researchers have demonstrated improved user experience through an enhanced visualization approach [1, 2, 3, 4, 9]. A few of the new breed of commercial search engines have adopted a similar approach to provide a visually enriched user experience.
2 Related Work and Motivation This work is an attempt to combine information visualization with personalized search experience. Researchers have explored many information visualization G. Bebis et al. (Eds.): ISVC 2007, Part II, LNCS 4842, pp. 721–731, 2007. © Springer-Verlag Berlin Heidelberg 2007
techniques for the representation of large-scale data collections. A few noteworthy systems developed specifically for the web are xFind [1], VIEWER [2], WaveLens [3], and HotMap [4]. Moving one step further, we have commercial web search engines based on visualization techniques such as Grokker, Quintura, Kartoo and Kooltorch. All these systems provide some sort of graphical representation based on contextual relationships within the result set which allows the users to discover relevant results. However, these visualization techniques are more focused on generic search, not personalization. If we shift our focus from visualization enhancement in search engines to personalization in general, we can observe significant developments. SNAKET [5] is an open source system based on web-snippet hierarchical clustering, and it can plug on top of any search engine to bring personalization. The UCAIR [6] framework is a client-side search agent (toolbar) that performs implicit feedback based on previous queries and result re-ranking based on click-through data. CubeSVD [7] is based on the analysis of users, queries and web pages generated from click-through data. There have been many prior attempts to optimize search engines using click-through data, such as the Support Vector Machine approach of [8]. PERISCOPE [9] is an adaptive 3D visualization system for search engine results and it provides the users with multiple visualization interfaces such as holistic, analytical and hybrid interfaces. Adaptive Web Search [10] is a novel approach that combines social, personalized and real-time collaborative search. It is very interesting to observe that most visualization techniques presented here are better suited for generic search engines. On the other hand, systems and techniques specializing in personalization lack visual experience.
Fig. 1. Call for exploring personalization through visualization and vice-versa
3 Spider Graph Spider graph, also known as a Spider Chart, is a two-dimensional chart of three or more quantitative variables represented on axes starting from a single origin. Spider graphs are widely used in areas such as quality control, performance benchmarking and market research problems. Spider charts are excellent Human Resource tools and they are used for performance appraisal [12].
Fig. 2. Typical spider graphs with three, four, six and ten quantitative variables
3.1 Modified Spider Graph Substantial modifications are made to the spider graph and its properties to accomplish our objectives. Web crawlers [13, 14] visit millions of web pages every day, indexing everything they crawl. We can compare users' search behavior to these automated crawlers, although on a much smaller scale. As a first modification (Figure 3), instead of visualizing our spider graph as a cohesive "cob-web", we present it as a graph composed of N identical isosceles triangle "slices".
Fig. 3. (Left) A modified spider graph with 6 variables is composed of 6 isosceles triangles. (Right) Inverting the numerical scale on the modified spider graph
In figure 3 (Left), the numerical value of each quantitative variable increases from zero at the center of the graph to a maximum value at the vertex. In our modified spider graph we invert this numerical scale. This second modification is shown in figure 3 (Right).
Fig. 4. Axis XN represents the distribution of PageRank [15]; axis YN is the page freshness axis.
Our third and final modification is the addition of a set of axes with origin at the center of the modified spider graph, with the radial axis bisecting each isosceles triangle. This modification is illustrated in figure 4.
4 SpiderWebWalk Model 4.1 Parameters and Visual Setup User search history, user interests, user profile and click-through data are the primary ingredients for performing personalized search. Previous work on personalized search, such as personalization based on automatic mapping of known user interests onto a group of categories in the Open Directory Project [16] and automatic estimation of a user's preferences based on past click history [17], has focused on automatic user interest extraction. A modified spider graph is a two-dimensional chart of three or more quantitative variables. Each variable represents a user interest. Accordingly, SWW requires at least three user interest inputs for the initial setup. Assume user interests are given by userInterestN, where N is an integer and N ≥ 3. For our demonstration we assume that a hypothetical user has five interests: userInterest1 = Sports Cars; userInterest2 = NASCAR; userInterest3 = Sportswear; userInterest4 = Video Games; and userInterest5 = Energy Drinks. If each user interest is fed into generic search engines as search keywords, it will yield N (= 5) result sets. The corpus of result sets acts as a base source of personalized results. It is important to note that the number of retrieved pages for each user interest varies with search engines and the numbers can change with time. Assume that the result sets for each user interest are given by resultSetN, where N is an integer and N ≥ 0. For our selected set of interests, result sets obtained from Google, YahooSearch and Live Search are as follows: Table 1. Result sets for sample user interests
A visual setup interface for SWW is shown in figure 5 (Left); as the users enter their initial user interests (5 in our sample), the base setup of the modified spider graph changes and users are able to see it changing visually. The expected visual change is illustrated in figure 6. At this point it is very natural to ask, "How do you distribute and represent millions of result entries for each user interest in each isosceles triangle?". In our sample case, for the user interest "Sports Cars" Google yields 289 million results. We distribute the 289 million results into ten bands as follows: S1 = set of result entries with 9 < PageRank ≤ 10; S2 = set of result entries with 8 < PageRank ≤ 9; … S10 = set of result entries with 0 < PageRank ≤ 1. We define the size of each band as the cardinality of each set. Mathematically, BandN = | SN |; N ≤ 10. Band 1 = | S1 |;
Fig. 5. (Left) SWW initial setup interface. (Right) Band size vs. color tones.
Band 2 = | S2 |; … Band 10 = | S10 |. Relevancy bands are organized on the basis of the PageRank distribution. Ten different tones of color are used to represent the distribution of result entries in each isosceles triangle. The bands BandN are reorganized in decreasing order of cardinality (size). A color tone is assigned to each band in the order of its decreasing size. Bands with the same size are assigned the same color tone.
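As a concrete illustration of the banding and tone assignment just described, the short Python sketch below maps a list of PageRank values for one user interest into the ten bands S1–S10 and assigns color tones in decreasing order of band size; the band boundaries follow the definition above, while the function names and the ten-tone palette are our own illustrative assumptions.

```python
from collections import Counter

def band_of(pagerank):
    """Map a PageRank in (0, 10] to a band index 1..10 (band 1 = highest ranks)."""
    for k in range(1, 11):
        if 10 - k < pagerank <= 11 - k:
            return k
    raise ValueError("PageRank outside (0, 10]")

def assign_color_tones(pageranks, palette):
    """Return {band: tone}; earlier palette entries go to larger bands.

    `pageranks` is the list of PageRank values retrieved for one user interest;
    `palette` is a list of ten color tones ordered from most to least prominent.
    """
    sizes = Counter(band_of(pr) for pr in pageranks)          # |S_k| for each band
    ordered = sorted(range(1, 11), key=lambda k: -sizes[k])   # decreasing cardinality
    tones = {}
    for rank, k in enumerate(ordered):
        # Bands with equal size receive the same tone as the previous equal-sized band.
        if rank > 0 and sizes[k] == sizes[ordered[rank - 1]]:
            tones[k] = tones[ordered[rank - 1]]
        else:
            tones[k] = palette[rank]
    return tones
```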
Fig. 6. Visual Personalized Search Engine Setup for uniform distribution. For the purpose of demonstration we assume that the size of bands smoothly decreases with increasing PageRank.
In a more realistic scenario, band size may not vary uniformly with PageRank; i.e., for certain user interests a search engine may retrieve the maximum number of result pages with PageRank values between 6 and 8. Figure 7 (Left) represents the expected color tones for a non-uniform distribution.
Fig. 7. Color tone distribution - non-uniform (Left) and uniform (Right)
4.2 SpiderWebWalk Self-updating It is unrealistic to expect that users can predict or feed interests that will remain consistent for a long time. Users' interests can change, significantly or insignificantly, over a period of time. In all such scenarios SWW is structured for self-updating based on current user behavior and user preferences. User interest deletion – SWW maintains a time log of the last entry browsed from each user interest, represented by LastTimeN, where N is an integer and N ≥ 3. We define the time gap for each user interest as TimeGapN, where N is an integer and N ≥ 3, such that TimeGapN is the difference between the current time and LastTimeN. If TimeGapN exceeds a user-defined threshold value TimeThresholdN, the Nth interest is removed from the SWW graph. User interest addition – users' search queries may not be strictly limited to their user interests. SWW maintains click-through data for all search queries. For all CTDs which do not belong to any resultSetN, a log is maintained as (QueryKeywordsI, PageCounterI), where QueryKeywordsI are search keywords which do not contain any keywords from the userInterestN and PageCounterI is the number of pages browsed ("clicked") when QueryKeywordsI were used as search keywords. If PageCounterI for QueryKeywordsI exceeds a user-defined threshold value ClicksThreshold, a new userInterestM, with M > N and userInterestM = QueryKeywordsI, is created.
Fig. 8. (Left) Interest deletion, (center) initial setup and (right) interest addition
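The deletion and addition rules of Section 4.2 can be sketched as follows; this is a minimal illustration assuming an in-memory representation (the class name, field names, and the use of wall-clock seconds are assumptions, not part of the SWW specification).

```python
import time

class SpiderWebWalk:
    """Illustrative self-updating logic: interest deletion and addition."""

    def __init__(self, interests, time_threshold, clicks_threshold):
        self.last_time = {i: time.time() for i in interests}  # LastTime_N
        self.time_threshold = time_threshold                  # TimeThreshold_N (seconds)
        self.clicks_threshold = clicks_threshold              # ClicksThreshold
        self.page_counter = {}                                # QueryKeywords_I -> PageCounter_I

    def record_click(self, interest=None, query_keywords=None):
        """Log a click that either belongs to a known interest or to a free query."""
        if interest in self.last_time:
            self.last_time[interest] = time.time()
        elif query_keywords is not None:
            self.page_counter[query_keywords] = self.page_counter.get(query_keywords, 0) + 1

    def self_update(self):
        now = time.time()
        # Interest deletion: TimeGap_N = now - LastTime_N exceeds the threshold.
        for interest, last in list(self.last_time.items()):
            if now - last > self.time_threshold:
                del self.last_time[interest]
        # Interest addition: a free query clicked often enough becomes a new interest.
        for keywords, clicks in list(self.page_counter.items()):
            if clicks > self.clicks_threshold:
                self.last_time[keywords] = now
                del self.page_counter[keywords]
```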
5 Session Mapping Session mapping is the most important aspect of the SWW framework. We define session mapping as "plotting users' click-through data on a meaningful map". The session mapping definition is satisfied when click-through data are generated from keywords k which belong to the set of user interests, { userInterestN }. In all such cases CTDs resulting from keyword k will also belong to the combined set of base result sets, { resultSetN }. In our definition the meaningful map refers to our modified spider graph. The abovementioned property is used to plot ("stamp") CTDs on the modified spider graph. Plotting requires two coordinates, defined by (X_CTD-PageRank, Y_CTD-PageFreshness), where X_CTD-PageRank is the clicked page's PageRank and Y_CTD-PageFreshness is the clicked page's retrieval freshness on a scale defined by the users. PageFreshness is a
measure of indexing freshness, i.e., how recently the page was indexed by the search engine crawler. A CTD which is more "fresh" will be located closer to the X-axis. Our coordinate system for plotting CTDs is based on the assumption that the importance of a relatively older page can be amplified if its PageRank is higher. On either side of the X-axis, time units increase linearly with 0 at the origin. Figure 9 illustrates a Y-axis distribution for Band 2 (6 < PageRank ≤ 8). The Y-axis distribution on either side is shown as 0, 1, …, t, t'. Any CTD with PageRank ∈ (6, 8] and retrieval freshness measure ∈ [0, t] can be plotted within the rectangle with corners (8, t), (8, −t), (6, −t) and (6, t). Two competing CTDs, i.e., CTDs with exactly the same PageRank and freshness measure, are plotted on either side of the X-axis. If the number of such competing CTDs is more than two, they are placed on top of existing CTDs. If the freshness time measure is greater than time t, CTDs are arbitrarily placed between t and t'. If two or more CTDs have a freshness time measure greater than t, relative retrieval freshness is considered and the CTD with the higher freshness measure is plotted farther away from the X-axis.
Fig. 9. Illustrating SWW coordinates
A typical personalized search session includes searching queries from the user interest areas. Displayed result entries are skimmed by the users, usually in a random manner and mostly within the first few result pages. On the basis of skimming, the user may click some entries for detailed information. As such, a set of CTDs is generated in each search session. During session mapping, CTDs are continuously plotted on the SpiderWebWalk session map. In figure 10 a hypothetical session map is illustrated. CTDs are shown as (colored) dots ("stamps") on the SWW map. Notice that session maps are shown with and without the ClickPath, which is obtained by continuously joining CTDs on the map. Session maps provide users with extreme flexibility to move from one clicked page to another without re-entering search queries, using "Back" buttons, or skimming through text-based search history. The ClickPath acts as a navigation guide.
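A minimal sketch of the plotting rule for a single slice follows, assuming PageRank is used directly as the X coordinate and freshness (capped at the user-defined scale t) as the Y coordinate; the mirroring of competing CTDs follows the description above, while the function signature and the handling of more than two competitors are simplifications of our own.

```python
def ctd_coordinates(pagerank, freshness, t_max, seen):
    """Illustrative placement of one CTD along a slice's local axes.

    x is the PageRank coordinate, y the freshness coordinate (closer to the
    X-axis means fresher). `seen` counts CTDs already plotted at (x, y) so that
    competing CTDs alternate sides of the X-axis; `t_max` is the user-defined
    freshness scale t.
    """
    x = pagerank
    y = min(freshness, t_max)           # freshness beyond t falls in the (t, t') margin
    key = (round(x, 3), round(y, 3))
    count = seen.get(key, 0)
    seen[key] = count + 1
    side = 1 if count % 2 == 0 else -1  # mirror competing CTDs across the X-axis
    return x, side * y

# Example session: plot three click-through records for one interest slice.
seen = {}
clicks = [(7.2, 0.5), (7.2, 0.5), (4.1, 2.0)]
points = [ctd_coordinates(pr, fr, t_max=3.0, seen=seen) for pr, fr in clicks]
# points -> [(7.2, 0.5), (7.2, -0.5), (4.1, 2.0)]
```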
Fig. 10. An illustration of a typical session map on a uniform SWW graph. Left map shows session clicks and right map shows CTDs with ClickPath.
6 Archival Reusability User archives, commonly known as search history, are resourceful by-products of personalized search. In order to exploit archived data more efficiently, search engines need to promote archival reusability in a more useful manner. The SWW framework perceives archived data as a resourceful "tool" for the users. It can be a useful medium for continuous search, i.e., searching for some information over multiple search sessions; SWW allows users to quick-start a search session from any point in their search history; and SWW allows visual bookmarking or labeling of important pages on their session maps. For all repeated search queries SWW should be an efficient solution, as it would save users significant time over repeated searches.
Fig. 11. (Top) Illustration of a session map. (Bottom) A hypothetical master archival map.
6.1 User Archival and QuickStart Maps SWW is structured to archive session maps in two categories. The first category stores archival maps from each search session and in the second category the master
archival map is stored. The master archival map can be considered as the combination of all session maps. It plots and tracks CTDs from the initial setup stage. The master archival map may undergo a few changes with time due to the addition or deletion of user interests. Session maps can be given the flexibility of adding user-defined visual "name tags" and editing controls such as "favorite", "delete" and "To-Do". SWW allows users to place visual bookmarks. A typical QuickStart map could be any map from Figure 11. 6.2 Sliced and Band Maps with Map HotSpots SWW graphs are composed of N isosceles triangles, where N represents the number of current user interests. It is realistic to expect that many searchers may seek information on particular interests and that such search activity may run over several search sessions. The SWW framework facilitates interest-specific map viewing. In other words, the users can isolate map sections specific to their interest. Here a map section is indicated by an isosceles triangle. In figure 12 (Left), Sliced Maps are illustrated.
Fig. 12. (Left) Zoomed Sliced Map and (Right) Zoomed Band Map
It has been found that most clicked results are usually generated from the top 2–3 result pages. Since top results usually have higher PageRank, we may observe congestion of CTD stamps in bands with high PageRank. SWW facilitates band-specific map viewing, where users can isolate bands within a specific user interest. In figure 12 (Right), a small Band Map is illustrated.
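Viewed as data filtering, a Sliced Map keeps only the CTDs of one interest and a Band Map further restricts them to one PageRank band; the record layout in the sketch below is hypothetical.

```python
def sliced_map(ctds, interest):
    """Return only the CTDs stamped in the slice of one user interest."""
    return [c for c in ctds if c["interest"] == interest]

def band_map(ctds, interest, low, high):
    """Return CTDs of one interest whose PageRank falls in the band (low, high]."""
    return [c for c in sliced_map(ctds, interest)
            if low < c["pagerank"] <= high]

archive = [
    {"interest": "Sports Cars", "pagerank": 7.4, "url": "http://example.org/a"},
    {"interest": "NASCAR",      "pagerank": 3.1, "url": "http://example.org/b"},
]
print(band_map(archive, "Sports Cars", 6, 8))   # the congested high-PageRank band
```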
7 Conclusion This work can be summarized by the caption of figure 1, which states "Call for exploring personalization through visualization and vice-versa". The SpiderWebWalk
framework is an attempt to bridge this gap between visualization techniques and personalized search experience. The modified spider graph is the foundation of this framework.
Acknowledgement Many thanks to Dr. Shantaram for insightful discussion sessions and also for reviewing this paper.
References 1. Andrews, K., Gutl, C., Moser, J., Sabol, V.: Search result visualization with xFind. In: 2nd International Workshop on User Interfaces to Data Intensive Systems, IEEE Press, Zurich (2001) 2. Berenci, E., Carpineto, C., Giannini, V., Mizzaro, S.: Effectiveness of keyword-based display and selection of retrieval results for interactive searches. International Journal on Digital Libraries 3(3) (2000) 3. Paek, T., Dumais, S., Logan, R.: WaveLens: A new view onto internet search results. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (2004) 4. Hoeber, O., Yang, X.D.: The visual exploration of web search results using HotMap. In: 10th International Conference on Information Visualization (2006) 5. Ferragina, P., Gulli, A.: A personalized search engine based on web-snippet hierarchical clustering. In: Special Interest Tracks and Posters of the 14th International Conference on World Wide Web WWW 2005, ACM, Chiba (2005) 6. Shen, X., Tan, B., Zhai, C.: UCAIR: a personalized search toolbar. In: 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval SIGIR 2005, ACM, Salvador (2005) 7. Sun, J., Zeng, H., Liu, H., Lu, Y., Chen, Z.: CubeSVD: a novel approach to personalized Web search. In: 14th International Conference on World Wide Web WWW 2005, ACM, Chiba (2005) 8. Joachims, T.: Optimizing search engines using clickthrough data. In: 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining KDD 2002, ACM, Alberta (2002) 9. Wiza, W., Walczak, K., Cellary, W.: Periscope: a system for adaptive 3D visualization of search results. In: 9th International Conference on 3D Web Technology Web3D 2004, ACM, California (2004) 10. Dalal, M.: Personalized social & real-time collaborative search. In: 16th International Conference on World Wide Web WWW 2007, ACM, Alberta (2007) 11. ChoiceStream personalization solutions, http://www.choicestream.com/ 12. Rogers, B.: The Spider Chart: A Unique Tool for Performance Appraisal. Annual Quality Congress 49, 16–22 (1995) 13. Arasu, A., Cho, J., Garcia-Molina, H., Paepcke, A., Raghavan, S.: Searching the Web. ACM Transactions on Internet Technology (TOIT) 1(1) (2001) 14. Notess, G.: Searching the World-Wide Web: Lycos, WebCrawler and more. Online Inc., Online 19(4) (1995)
15. Page, L., Brin, S., Motwani, R., Winograd, T.: The PageRank citation ranking: Bringing order to the web. Technical report, Stanford Digital Library Technologies Project (1998) 16. Ma, Z., Pant, G., Sheng, O.R.L.: Interest-based personalized search. ACM Transactions on Information Systems (TOIS) 25(1), Article 5 (2007) 17. Qiu, F., Cho, J.: Automatic identification of user interest for personalized search. In: 15th International Conference on World Wide Web WWW 2006, ACM, Edinburgh (2006)
Probe-It! Visualization Support for Provenance Nicholas Del Rio and Paulo Pinheiro da Silva University of Texas at El Paso 500 W. University Ave, El Paso, Texas, USA
Abstract. Visualization is a technique used to facilitate the understanding of scientific results such as large data sets and maps. Provenance techniques can also aid in increasing the understanding and acceptance of scientific results by providing access to information about the sources and methods which were used to derive them. Visualization and provenance techniques, although rarely used in combination, may further increase scientists' understanding of results, since scientists may be able to use a single tool to see and evaluate result derivation processes, including any final or partial result. In this paper we introduce Probe-It!, a visualization tool for scientific provenance information that enables scientists to move the visualization focus back and forth between intermediate and final results and their provenance. To evaluate the benefits of Probe-It! in the context of maps, this paper presents a quantitative user study on how the tool was used by scientists to discriminate between quality results and results with known imperfections. The study demonstrates that only a very small percentage of the scientists tested can identify imperfections using maps without the help of knowledge provenance, and that most scientists, whether GIS experts, subject matter experts (i.e., experts on gravity data maps) or not, can identify and explain several kinds of map imperfections when using maps together with knowledge provenance visualization.
1 Introduction
In complex virtual environments like cyber-infrastructures, scientists rely on visualization tools to help them understand large amounts of data that are generated from experiments, measurements obtained by sensors, or a combination of measurements and applied derivations. Instead of tediously tracing through datasets, scientists view results condensed as a graph or map, and draw conclusions from these projected views. However, in order for scientists to fully understand and accept artifacts generated on the cyber-infrastructure, scientists may need to know which data sources and data processing services were used to derive the results and which intermediate datasets were produced during the derivation process. In fact, scientists may need to have access to provenance information, which in this paper is described as meta-information about the final results and how they were generated. Provenance information includes: provenance meta-information, which is a description of the origin of a piece of knowledge (e.g., names of organizations, people, and software agents who played some role in the G. Bebis et al. (Eds.): ISVC 2007, Part II, LNCS 4842, pp. 732–741, 2007. © Springer-Verlag Berlin Heidelberg 2007
generation of an artifact), process meta-information, which is a description of the reasoning process used to generate an answer, such as a proof or execution trace, and intermediate or partial results. Provenance visualization capabilities are expected to be more sophisticated than those required for the visualization of only final results. For example, in addition to the visualization of results, provenance visualization should include capabilities for visualizing intermediate or partial results, derivation processes, and any information regarding used sources. In this paper, we report our progress on Probe-It!, a general-purpose, provenance visualization prototype that has been used to visualize both logical proofs generated by inference engines and workflow execution traces. Additionally, the paper reports on an ongoing user study that confirms the need for provenance information in the tasks of identifying and explaining imperfections in maps generated by cyber-infrastructure applications.
2 Scientific Knowledge Provenance Visualization
Probe-It! is a browser suited to graphically rendering provenance information associated with results coming from inference engines and workflows. In this sense, Probe-It! does not actually generate content (i.e., logging or capturing provenance information); instead it is assumed that users will provide Probe-It! with end-points of existing provenance resources to be viewed. The task of presenting provenance in a useful manner is difficult in comparison to the task of collecting provenance. Because provenance associated with results from even small workflows can become large and incomprehensible as a whole, Probe-It! consists of a multitude of viewers, each suited to different elements of provenance. Decomposing provenance into smaller, more comprehensible chunks, however, raises the following questions:
1. How do scientists navigate back and forth between the visualizations of final and intermediate results (i.e., datasets and scientific artifacts such as maps) and information about the generation of such results (i.e., meta-data about the applied sources, methods, and sequencing regarding the execution of those methods)?
2. How do scientists define relevance criteria for distinct provenance information, and how can tools use relevance criteria to improve scientists' experiences during the visualization of scientific provenance?
3. How can scientists instruct tools to present scientific provenance by defining and selecting preferences?
The following sections describe how Probe-It! addresses these concerns.
2.1 Queries, Results, Justifications, and Provenance
Probe-It! consists of four primary views to accommodate the different kinds of provenance information: queries, results, justifications, and provenance, which refer to user queries or requests, final and intermediate data, descriptions of the
generation process (i.e., execution traces), and information about the sources, respectively. In a highly collaborative environment such as the cyber-infrastructure, there are often multiple applications published that provide the same or very similar function. A thorough integrative application may consider all the different ways it can generate and present results to users, placing the burden on users to discriminate between high quality and low quality results. This is no different from any question/answer application, including a typical search engine on the Web, which often uses multiple sources and presents thousands of answers back to users. The query view visually shows the links between application requests and results of that particular request. The request and each corresponding result is visualized as a node similar to the nodes in the justification view presented later. Upon accessing one of the answer nodes in the query view, Probe-It! switches over to the justification view associated with that particular result. Because users are expected to compare and contrast between different answers in order to determine the best result, all views are accessible by a menu tab, allowing users to navigate back to the query view regardless of what view is active. The results view provides graphical renderings of the final and intermediate results associated with scientific workflows. This view is captured on the right hand side of Figure 1, which presents a visualization of a gridded dataset; this view is initiated by selecting one of the justification nodes, described in the next section. Because there are many different visualizations suited for gridded data and datasets in general, the results view is composed of a set of viewers, each implementing a different visualization technique suited to the data being viewed. The framework supporting this capability is described in Section 3. The justification view, on the other hand, is a complementary view that contains all the process meta-information associated with the execution trace, such as the functions invoked by the workflow, and the sequencing associated with these invocations. Probe-It! renders this information as a directed acyclic graph (DAG). An example of a workflow execution DAG can be found on the left hand side of Figure 1, which presents the justification of a contour map. From this perspective, Web services and sources (i.e., data sinks) are presented as nodes. Nodes contain a label indicating the name of a source or invoked service, as well as a semantic description of the resulting output data. In the justification view, data flow between services is represented by edges of the DAG; the representation is such that data flows from the leaf nodes towards the root node of the DAG, which represents the final service executed in the workflow. Users can access both provenance meta-information and intermediate results of the sources and services represented by the DAG nodes. In this sense, the justification DAG serves as a medium between provenance meta-information and intermediate results. The provenance view provides information about sources and some usage information, e.g., access time, during the execution of an application or workflow. For example, upon accessing the node labeled gravity database, meta-information
Probe-It! Visualization Support for Provenance
735
Fig. 1. Probe-It! justification view
about the database, such as the contributing organizations, is displayed in another panel. Similarly, users can access information transformation nodes and view information about the algorithms used or the hosting organization.
2.2 Result Viewers and Framework Support for Visualization Techniques
Different visualizations model data from different perspectives; thus Probe-It! provides scientists with as many viewers as possible. For example, gravity datasets provided by the GIS center at the University of Texas at El Paso have three associated visualizations: a default textual view, a plot view, and an XMDV view. The default textual view is a table: the raw ASCII result from the gravity database. The location plot viewer provides a 2D plot of the gravity readings in terms of latitude and longitude. XMDV, on the other hand, provides a parallel coordinates view, a technique pioneered in the 1970's which has been applied to a diverse set of multidimensional problems [9]. Figure 2 shows a pop-up of these three visualizations in their respective viewer windows. Upon selecting a node in a justification DAG, Probe-It! is able to determine, based on a semantic description of the output data, which viewers are appropriate. This is similar to a Web browser scenario in which transmitted data is tagged with a MIME-TYPE that is associated with a particular browser plug-in. Probe-It! should be flexible enough to support a wide array of scientific conclusion formats just as Web browsers can be configured to handle any kind of data, but should also leverage any
Fig. 2. Three different viewers for gravity data sets
semantic descriptions of the data. For example, XMDV is a viewer suited to any N-dimensional data; the data rendered by XMDV need only be in a basic ASCII tabular format, as shown on the right hand side of Figure 2, with a few additional headers. Because gravity datasets are retrieved in an ASCII tabular format, XMDV can be used to visualize them. However, this kind of data is also semantically defined as being gravity point data, in which case Probe-It! is configured to invoke the more appropriate 2D spatial viewer, as shown in the center of Figure 2. The semantic capabilities provided by the Probe-It! viewer framework complement the MIME tables used in typical Web browsers, which only indicate the format or syntax of the data. In order to manage the many relationships between a kind of data and the appropriate viewer, Probe-It! relies on a MIME-like table to store these mappings. This table contains all the known data types, their semantic descriptions, and their respective renderers. Thus, the appropriateness of a particular renderer is based on both the data's format and its semantic description. The property that makes this MIME-like table so desirable for Probe-It! is its extensibility; scientists can register new mappings on request, keeping Probe-It! up-to-date with the scientists' needs.
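The MIME-like table can be pictured as a lookup keyed on both the data format and its semantic description, falling back to the format alone when no semantic match is registered. The sketch below is our own illustration of this idea; the key names, viewer names, and API are not Probe-It!'s actual implementation.

```python
# (format, semantic type) -> ordered list of suitable viewers; names are illustrative.
VIEWER_TABLE = {
    ("ascii-table", "GravityPointData"): ["2d-location-plot", "xmdv", "text-table"],
    ("ascii-table", None):               ["xmdv", "text-table"],
    ("grid",        None):               ["contour-view", "text-table"],
}

def viewers_for(data_format, semantic_type):
    """Prefer a viewer registered for the semantic type; fall back to the format alone."""
    return (VIEWER_TABLE.get((data_format, semantic_type))
            or VIEWER_TABLE.get((data_format, None), []))

def register_viewer(data_format, semantic_type, viewer):
    """Scientists can extend the table at run time, as the MIME-like table allows."""
    VIEWER_TABLE.setdefault((data_format, semantic_type), []).insert(0, viewer)

print(viewers_for("ascii-table", "GravityPointData")[0])  # -> 2d-location-plot
print(viewers_for("ascii-table", None)[0])                # -> xmdv
```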
2.3 Comparing Knowledge Provenance: Pop-Up Viewers
In many cases, scientists may need to compare the provenance associated with different results in order to decide which result best fits their needs. To facilitate such comparisons, results of a workflow can be popped out in separate windows. The pop-up capability provided by the tool is useful when comparing both final and intermediate results of different maps. Users can pop up a visualization of intermediate results associated with one map, navigate to the justification of a different map and pop up a window for the corresponding results, i.e., results of the same type, for comparison purposes. In addition to the result that is being viewed, pop-up windows contain the ID of the artifact with which they are associated. This allows users to pop up several windows without losing track of which artifact a pop-up window belongs to.
3 Underlying Technologies
3.1 Proof Markup Language (PML) and the Inference Web
Provenance browsed by Probe-It! is encoded in the Proof Markup Language (PML) [6] provided by the Inference Web [3]. Inference Web leverages PML for encoding and publishing provenance information on the web, as well as providing a set of tools and services for handling these documents. PML is an OWL [5] based language that can be used for encoding provenance. PML consists of a specification of terms for encoding collections of justifications for computationally derived results. Depending upon the application domain, users may view each justification as an informal execution trace or as a proof describing the inference steps used by an inference engine, e.g., a theorem prover or web service, to derive some conclusion. From this perspective, PML node sets (i.e., the topmost element in the language) represent the invocation of some service; the node set conclusion serves as the output of the service while the inference step represents the provenance meta-information associated with the function provided by a service or application. For example, the inference step proof elements antecedent, rule, and inference engine can be used to describe the application's inputs, function, and name, respectively. IW-Base is a repository of provenance elements that can be referenced by PML documents [4]. In order to support interoperability when sharing provenance among Inference Web tools and between Inference Web tools and other Semantic Web tools in general, provenance elements such as sources are stored and made publicly available in IW-Base. For example, PML direct assertions can be linked to provenance elements in IW-Base to indicate the agent, person, or organization responsible for the information being asserted. Since a single source can contribute to the generation of a number of different artifacts, IW-Base alleviates PML provenance loggers from always re-generating files describing sources that can otherwise be shared from the database.
3.2 PML Service Wrapper (PSW)
The PML Service Wrapper (PSW) is a general-purpose wrapper that logs knowledge provenance associated with workflow executions as a set of PML documents. Since workflows can be composed entirely of Web services, PSW logs workflows at the level of service invocations and transactions. Thus, information such as the input/output of each service and meta-information regarding the used algorithm are all logged by PSW. In a cyber-infrastructure setting, functionality or reasoning is often supported by Web services that can be considered "black boxes", hard to instrument at the source-code level to generate PML. This is the primary reason why PSW, a sort of external logger, must be deployed to intercept transactions and record events generated by services, instead of modifying the services and workflows themselves to support logging.
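The idea of an external logger that intercepts service transactions can be illustrated with a small wrapper that records the inputs, output, and metadata of each call; the record layout below is a deliberately simplified stand-in for PML, and the decorated `grid` service is a hypothetical placeholder.

```python
import datetime
import functools

provenance_log = []  # simplified stand-in for a set of PML documents

def logged_service(name, engine):
    """Wrap a service call so each invocation is recorded as a provenance node."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            provenance_log.append({
                "service": name,                              # analogous to the rule
                "engine": engine,                             # the inference engine
                "inputs": {"args": args, "kwargs": kwargs},   # the antecedents
                "conclusion": result,                         # the node-set conclusion
                "timestamp": datetime.datetime.now().isoformat(),
            })
            return result
        return wrapper
    return decorator

@logged_service(name="grid", engine="nearest-neighbor gridder")
def grid(points, spacing):
    return {"spacing": spacing, "cells": len(points) // 2}   # placeholder computation

grid([(0, 0, 9.8), (1, 0, 9.7)], spacing=0.5)
print(provenance_log[-1]["service"])  # -> grid
```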
4 Evaluation
The effectiveness of provenance visualization in the task of understanding complex artifacts was verified by a user study described below. The context of the user study is presented first, followed by a brief discussion of how provenance and visualization aided scientists in the evaluation tasks.
4.1 Gravity Map Scenario
Contour maps generated from gravity data readings serve as models from which geophysicists can identify subterranean features. In particular, geophysicists are often concerned with data anomalies, e.g., spikes and dips, because these are usually indicative of the presence of some subterranean resource such as a water table or an oil reserve. The Gravity Map scenario described in this section is based on a cyber-infrastructure application that generates such gravity contour maps from the Gravity and Magnetic Dataset Repository (http://irpsrvgis00.utep.edu/repositorywebsite/) hosted at the Regional Geospatial Service Center at the University of Texas at El Paso. In this scenario, scientists request the generation of contour maps by providing a footprint defined by latitude and longitude coordinates; this footprint specifies the 2D spatial region of the map to be created. The following sequence of tasks generates gravity data contour maps in this scenario:
1. Gather Task: gather the raw gravity dataset readings for the specified region of interest
2. Filter Task: filter the raw gravity dataset readings (remove unlikely point values)
3. Grid Task: create a uniformly distributed dataset by applying a gridding algorithm
4. Contour Task: create a contoured rendering of the uniformly distributed dataset
The gravity map scenario is thus based on a workflow in which each activity is implemented as an independent Web service. The following section describes how this scenario served as a test-bed to evaluate the effectiveness of Probe-It! in aiding scientists to both identify and explain imperfect maps generated by the gravity map workflow.
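For illustration, the four-task workflow can be read as a simple function pipeline; the plausibility bounds, the toy averaging gridder, and the contour labeling below are placeholders we invented, not the actual services behind the Gravity Map application.

```python
def gather(footprint, readings):
    """Keep raw gravity readings (lat, lon, value) inside the requested footprint."""
    (lat_min, lat_max), (lon_min, lon_max) = footprint
    return [r for r in readings
            if lat_min <= r[0] <= lat_max and lon_min <= r[1] <= lon_max]

def filter_points(points, lo=-500.0, hi=500.0):
    """Drop unlikely point values (placeholder plausibility bounds)."""
    return [p for p in points if lo <= p[2] <= hi]

def grid(points, spacing):
    """Snap scattered points onto a uniform grid by averaging per cell (toy gridder)."""
    cells = {}
    for lat, lon, v in points:
        key = (round(lat / spacing), round(lon / spacing))
        cells.setdefault(key, []).append(v)
    return {k: sum(vs) / len(vs) for k, vs in cells.items()}

def contour(gridded, interval):
    """Label each cell with its contour level (stand-in for real contour rendering)."""
    return {k: int(v // interval) * interval for k, v in gridded.items()}

readings = [(31.8, -106.4, 12.3), (31.9, -106.5, 980.0), (31.85, -106.45, 10.1)]
footprint = ((31.0, 32.0), (-107.0, -106.0))
contours = contour(grid(filter_points(gather(footprint, readings)), spacing=0.1), interval=5)
```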
4.2 Results
The premise of our work is that scientific provenance is a valuable resource that will soon become an integral aspect of all cyber-infrastructure applications. The use of provenance is still being researched and its various applications are still being explored, thus a widespread adoption of provenance has yet to take place. A previous study of ours has indicated that providing scientists with
visualizations of provenance helps them to both identify and explain map imperfections [7]. This study was composed of seven evaluation cases, all derived from the different possible errors that can arise in the gravity map scenario; each case was based on a gravity contour map that was incorrectly generated. The subjects were each asked to identify the map as either correct or with imperfections. Additionally, they were asked to explain why they identified the map as such, usually by indicating the source of error. Table 1 shows the subjects' accuracy in completing the identification and explanation tasks with a contour map that was generated using a grid spacing parameter that was too large with respect to the density of the data being mapped; this causes a loss of resolution, hiding many features present in the data. The results are presented in terms of the classification of the subjects as subject matter experts (SME), Geographic Information Systems experts (GISE), and non-experts (NE).

Table 1. Percentage of correct identifications and explanations of map imperfections introduced by the inappropriate gridding parameter [No Provenance (NP), Provenance (P)]

Experience   Correct Identifications (%)   Correct Explanations (%)
             NP        P                   NP        P
SME          50        100                 25        100
GISE         11        78                  11        78
NE           0         75                  0         75
all users    13        80                  6         80

5 Related Work
VisTrails, a provenance and data exploration system, provides an infrastructure for systematically capturing provenance related to the evolution of a workflow [1]. Provenance information managed by VisTrails refers to the modifications, or history of changes, made to a particular workflow in order to derive a new workflow; modifications include adding, deleting or replacing workflow processes. VisTrails renders this history of modifications as a tree-like structure where nodes represent a version of some workflow and edges represent the modification applied to a workflow in order to derive a new workflow. Upon accessing a particular node of the provenance tree, users of VisTrails are provided with a rendering of the scientific product which was generated as a result of the particular workflow associated with the node. In this sense, VisTrails provides excellent support for visualizing both process meta-information and intermediate results but may not provide a rich description of provenance meta-information as defined in this paper. Probe-It! is an attempt to visualize all aspects of provenance, including rich information about the sources used. MyGrid, from the e-science initiative, tracks data and process provenance of some workflow execution. The authors of MyGrid draw an analogy between the type
of provenance they record for cyber-infrastructure type applications and the kind of information that a scientist records in a notebook describing where, how and why experimental results were generated [10]. From these recordings, scientists can achieve three primary goals: (i) debugging, (ii) validity checking, and (iii) updating, which refer to situations when a result is unexpected, when a result is novel, or when a workflow component is changed, respectively. The Haystack application displays the provenance as a labeled directed graph, tailored to a specific user; only relevant provenance elements related to the role of a user are rendered in order to reduce data overloading on the screen. In this scenario, links between resources are rendered, allowing users to see the relationships between provenance elements such as inputs/outputs and applied processes and thus to reconstruct the execution trace. In contrast to graphically displaying scientific provenance, the Kepler [2] workflow design and execution tool provides an interface for querying recorded provenance associated with workflow execution via a set of predefined operators. In this case, provenance is queried, with the result of the query being some relation. Similarly, Trio, a management system for tracking data resident in databases, tracks data as it is projected and transformed by queries and operations [8]. Because of the controlled and well understood nature of a database, the lineage of some result can many times be derived from the result itself by applying an inversion of the operation that derived it. These inverse transformations of the data are stored in a special table and made available via querying capabilities.
6 Conclusions and Future Work
The research presented in this paper is based on the fundamental problem of developing tools and methods that can help scientists understand complex scientific products (e.g., datasets, reports, graphs, maps) derived from complex software systems (e.g., applications and services) deployed in distributed and heterogeneous environments such as cyber-infrastructures. We have developed Probe-It!, a tool that visualizes all aspects of provenance, to address these concerns. The work has been developed in the context of a realistic scenario based on ongoing cyber-infrastructure efforts in the field of Earth Sciences. A user study driven by this scenario verified the effectiveness of provenance visualization provided by Probe-It! in helping scientists understand complex artifacts, strengthening the notion that provenance should be maintained by all cyber-infrastructure applications and be available on demand in some useful representation. Since the effectiveness of provenance has been demonstrated, the strategy will be to present scientists with the most effective ways of browsing such information. The current evaluation approach was based on the suitability of provenance in decision making scenarios, rather than the usability of the tool itself. Usability is based on the evaluation of many dimensions including learnability, understandability, and handling ability. These aspects refer to the amount of necessary training before independent use of a system is possible, the ability of users to correctly draw conclusions from the display, and the
speed of a trained user, respectively. The next step is to develop a more formal model of how users interact with provenance visualizations in order to improve the usability of Probe-It!
References 1. Freire, J., Silva, C.T., Callahan, S.P., Santos, E., Scheidegger, C.E., Vo, H.T.: Managing Rapidly-Evolving Scientific Workflows. In: Proceedings of the International Provenance and Annotation Workshop (IPAW) (to appear) 2. Ludäscher, B., et al.: Scientific Workflow Management and the Kepler System. Concurrency and Computation: Practice & Experience (2005), Special Issue on Scientific Workflows 3. McGuinness, D.L., da Silva, P.P.: Explaining Answers from the Semantic Web. Journal of Web Semantics 1(4), 397–413 (2004) 4. McGuinness, D.L., da Silva, P.P., Chang, C.: IW-Base: Provenance Metadata Infrastructure for Explaining and Trusting Answers from the Web. Technical Report KSL-04-07, Knowledge Systems Laboratory, Stanford University (2004) 5. McGuinness, D.L., van Harmelen, F.: OWL Web Ontology Language Overview. Technical report, World Wide Web Consortium (W3C), Recommendation, February 10 (2004) 6. da Silva, P.P., McGuinness, D.L., Fikes, R.: A Proof Markup Language for Semantic Web Services. Information Systems 31(4-5), 381–395 (2006) 7. Del Rio, N., da Silva, P.P.: Identifying and Explaining Map Imperfections Through Knowledge Provenance Visualization. Technical report, The University of Texas at El Paso (June 2007) 8. Widom, J.: Trio: A System for Integrated Management of Data, Accuracy, and Lineage. In: Proceedings of the Second Biennial Conference on Innovative Data Systems Research, Asilomar, CA, pp. 262–276 (January 2005) 9. Xie, Z.: Towards Exploratory Visualization of Multivariate Streaming Data, http://davis.wpi.edu/ 10. Zhao, J., Wroe, C., Goble, C., Stevens, R., Quan, D., Greenwood, M.: Using Semantic Web Technologies for Representing E-science Provenance. In: Proceedings of the 3rd International Semantic Web Conference, pp. 92–106 (November 2004)
Portable Projection-Based AR System Jihyun Oh, Byung-Kuk Seo, Moon-Hyun Lee, Hanhoon Park, and Jong-Il Park Department of Electrical and Computer Engineering, Hanyang University, 17 Haengdang-dong, Seongdong-gu, Seoul, Korea {ohjh,nwseoweb,fly4moon,hanuni}@mr.hanyang.ac.kr,
[email protected]
Abstract. Display systems with high quality and a wide display screen can only be used at a fixed place due to their big size and heavy weight. On the other hand, mobile systems are compact but have a small display screen and thus decrease user immersion. In this paper, we resolve these drawbacks of established display systems by proposing a novel portable projection-based augmented reality (AR) system. The system uses a camera mounted on a PDA and a small projector to measure the characteristics of the screen surface. We use geometric correction and radiometric compensation techniques to project an undistorted image, as seen from the user's viewpoint, onto an arbitrary screen surface. Rather than floating point operations, we use integer operations to enhance system performance. Our proposed system supports not only mobility but also a wide display screen. The usability of our system is verified through experimental results and a user evaluation. Keywords: portable, projection, display system, augmented reality.
1 Introduction Display systems with high quality and a wide display screen, such as High Definition Television (HDTV), have rapidly become widespread. But there has been a limitation that they can only be used at a fixed place because of their big size and heavy weight. On the other hand, mobile display systems, e.g. mobile phones, Personal Digital Assistants (PDA), Portable Multimedia Players (PMP), and so on, have recently come into the spotlight as a new trend in display systems because they are quite handy to use anywhere. However, they have other limitations such as a small display screen and a lack of user immersion. In this paper, we propose a novel display system based on AR technology for resolving these drawbacks of existing display systems. The system is a portable projection-based AR system which uses a PDA and a small projector not only to support a high-resolution and wide display screen, but also to enhance the mobility of the system. A number of mobile AR systems have already been developed and applied to various applications. Wagner et al. [1] proposed an AR system for education and entertainment on the PDA platform. The AR system shows 3-D virtual information instead of AR markers on the PDA screen and enables multiple users to participate simultaneously by offering an infrared communication service. Bruns et al. [2] applied AR technology to a mobile phone that shows additional information for exhibitions in the museum. G. Bebis et al. (Eds.): ISVC 2007, Part II, LNCS 4842, pp. 742–750, 2007. © Springer-Verlag Berlin Heidelberg 2007
As another branch of AR systems, a number of projection-based AR systems have also been developed. Raskar et al. [3] proposed a projection-based AR system, called iLamp, which accurately provides visual information without geometric distortion onto an arbitrary screen. Mitsunaga et al. [4], Nayar et al. [5], and Grossberg et al. [6] resolved the problem that projection images are distorted on colorful textured screens by proposing radiometric compensation techniques. Park et al. [7] also proposed an integrated AR system that handles not only geometric and radiometric distortions, but also viewpoint-ignorant projection and uneven projection. In this paper, we try to combine the benefits of mobile-based AR systems with those of projection-based AR systems. That is, our system enables visual information to be displayed without geometric and radiometric distortion anywhere and anytime by using a PDA and a small projector. The remainder of the paper is organized as follows. Section 2 introduces our system briefly. Section 3 explains detailed techniques for implementing the system. Experimental results and a user evaluation are given in Section 4. Finally, the conclusion is drawn in Section 5.
2 Overview of the Proposed System The portable projection-based AR system is an intelligent display system that projects undistorted images onto a 3-D surface serving as an arbitrary screen in a mobile environment. In our system, all operations are performed by a PDA based on the Windows CE 5.0 platform. A camera mounted on the PDA and a portable projector are used to analyze the geometric and radiometric characteristics of the 3-D surface.
Fig. 1. System structure: (a) the portable projection-based AR system; (b) the flow of processing
The proposed system structure is illustrated in Fig. 1. The structure is divided into three processing parts. First of all, the camera and the projector are geometrically and radiometrically calibrated; this is a one-time preprocessing step. Then the
geometry and radiometry of the 3-D surface are estimated by projecting and capturing a set of patterns, as shown in Fig. 1(a). Finally, a compensation image is mathematically computed using the obtained information, and it is projected as an undistorted image onto the 3-D surface. In the following sections, we explain these techniques in detail.
3 Component Techniques 3.1 Geometric Correction Projection images are geometrically distorted unless the projection surface is planar and the projecting direction is perpendicular to the surface. In our approach, the geometric distortion is corrected by using a compensation image which is pre-warped by a homography. First of all, both the camera and the projector are calibrated in advance. The camera is calibrated using Zhang's calibration method [8]. For calibrating the projector, we used a modified version of Zhang's calibration method [7], since the projector can be considered as an inverse model of the pinhole camera. With this calibration information, the geometry of the 3-D surface is recovered by applying a binary coded structured light technique [9] and a linear triangulation method [10] (see Fig. 2). The binary coded structured light technique is used for finding correspondences between image points and points of the projected patterns. A set of patterns is successively projected and imaged by the camera. Both image points and points of the projected patterns have their own codeword, so correspondences are precisely computed from the codewords. Finally, the 3-D surface geometry is estimated by triangulation and is represented in the form of a triangular mesh. Each triangle of the 3-D surface mesh is piecewise planar, so the geometric relationships among the camera, the projector, and the 3-D surface can be defined by homographies [7, 11]. With these relationships, an arbitrary viewpoint image can be synthesized by projecting a projection image onto a viewer image plane as
x_viewpoint = K_c [R t] X    (1)

where x_viewpoint is a point of the viewpoint image, X is a point of the projection image, and K_c is the intrinsic matrix of the camera. The rotation parameter R and translation parameter t are specified by the position of the viewpoint. It is usually assumed that the viewpoint is perpendicular to the projection surface. Let H_p-v be the homography between a desired image and a viewpoint image distorted by the 3-D surface; a compensation image for a desired image x_desired without distortion in the user's viewpoint is given as

x_prewarped = H_p-v^-1 x_desired    (2)

where x_prewarped is a point of the compensation image. Note that we assume that the user's viewpoint is along the normal direction of the screen surface.
Fig. 2. The geometry of a 3-D surface is estimated by triangulation method
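A hedged sketch of the pre-warping of Eq. (2) for a single planar patch, using OpenCV in Python: the homography H_p-v is estimated from four point correspondences and its inverse is applied to the desired image. The correspondences, image size, and test pattern are invented for illustration.

```python
import numpy as np
import cv2

# Corners of one planar patch in the desired (viewer) image ...
desired_pts = np.float32([[0, 0], [639, 0], [639, 479], [0, 479]])
# ... and the corresponding projector-frame positions observed for that patch.
projector_pts = np.float32([[30, 12], [610, 40], [590, 470], [15, 455]])

# H_pv maps projector coordinates to viewer coordinates; Eq. (2) applies its inverse.
H_pv, _ = cv2.findHomography(projector_pts, desired_pts)
H_inv = np.linalg.inv(H_pv)

# A synthetic desired image stands in for the content the user should perceive.
desired = np.zeros((480, 640, 3), np.uint8)
cv2.putText(desired, "ISVC", (200, 260), cv2.FONT_HERSHEY_SIMPLEX, 3, (255, 255, 255), 5)

# The pre-warped (compensation) image is what the projector is fed.
prewarped = cv2.warpPerspective(desired, H_inv, (desired.shape[1], desired.shape[0]))
cv2.imwrite("compensation.png", prewarped)
```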
3.2 Radiometric Compensation The fundamental problem with using an arbitrary surface is that the color of the projection image depends on that of the projection surface and the ambient light. Radiometric compensation is a technique that preserves the color of the projection image by adjusting the color of the projector input image when the projection surface has colorful texture [3, 4, 5]. In this paper, we focus on distortion caused by the color of the projection surface, with the assumption that the ambient light is unchanged during system operation. With the geometric mapping between a camera image and a projector input image, the radiometric model of the pipeline from the input projector color to the measured camera color is defined as C = VP + F (3) where C is a point of the camera image, V is the color mixing matrix, P is a point of the projected image, and F is the ambient light. Generally, the relationship between image radiance and image brightness is non-linear and the spectral responses of the color channels of both devices can overlap with each other, so these factors are handled off-line in advance. Both devices' non-linear radiometric response functions are calculated as 4th-order polynomial functions [2], and the couplings between the projector and camera channels and their interactions with the spectral reflectance of the projection surface points are all captured by the matrix V [3, 4, 5]. Once these unknown parameters are estimated, the compensation image for a desired image without color distortion is obtained by
P_compensation = V^-1 (I_desired − F)    (4)
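Equation (4) applied per pixel can be sketched in a few lines of Python/NumPy, assuming the color mixing matrix V and the ambient term F have already been estimated off-line; the numerical values below are invented for illustration.

```python
import numpy as np

V = np.array([[0.82, 0.05, 0.03],     # estimated color mixing matrix (illustrative)
              [0.06, 0.75, 0.04],
              [0.02, 0.07, 0.68]])
F = np.array([12.0, 10.0, 9.0])        # ambient light term per channel (illustrative)

def compensate(I_desired):
    """Per-pixel P = V^-1 (I_desired - F), clipped to the projector's input range."""
    h, w, _ = I_desired.shape
    flat = I_desired.reshape(-1, 3).astype(np.float64) - F
    P = flat @ np.linalg.inv(V).T       # apply V^-1 to every pixel at once
    return np.clip(P, 0, 255).reshape(h, w, 3).astype(np.uint8)

I_desired = np.full((480, 640, 3), 200, np.uint8)   # the image the user should see
projector_input = compensate(I_desired)
```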
3.3 Improving Computational Time on the Mobile Platform Computational time is a critical factor when a system is designed for a mobile environment. In particular, a mobile environment usually does not have a floating point unit (FPU),
which is the part of a computer system specially designed to carry out operations on floating point numbers such as multiplication, division, and so on; computational time therefore increases for floating point operations and may affect system performance. In our method, we use integer operations in the image pre-warping step that corrects the geometric distortion and build a lookup table for pre-warping, since it is assumed that the 3-D surface is modeled once after the mobile device is fixed at a specific position. Our method remarkably enhanced system performance on the mobile platform, and the result is shown in Table 1. Table 1. Comparison of computational time on the mobile platform (486 × 536 color image)
                 computational time (sec)
                 3-D surface reconstruction   image pre-warping
float point      8                            0.265
integer point    -                            0.046
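A sketch of the lookup-table idea behind Table 1: the floating point homography math is executed once to build an integer source-coordinate table, after which every frame is pre-warped with integer indexing only. The mapping direction, resolution, and homography values are illustrative assumptions.

```python
import numpy as np

def build_prewarp_lut(H_out_to_src, width, height):
    """One-time float step: for each output pixel, store the integer source pixel to copy."""
    ys, xs = np.mgrid[0:height, 0:width]
    ones = np.ones_like(xs)
    dst = np.stack([xs, ys, ones]).reshape(3, -1).astype(np.float64)
    src = H_out_to_src @ dst
    src = (src[:2] / src[2]).round().astype(np.int32)
    np.clip(src[0], 0, width - 1, out=src[0])
    np.clip(src[1], 0, height - 1, out=src[1])
    return src.reshape(2, height, width)            # integer lookup table

def prewarp(frame, lut):
    """Per-frame warp using only integer indexing (no floating-point work)."""
    return frame[lut[1], lut[0]]

H = np.array([[1.02, 0.01, -3.0],   # illustrative mapping from output pixels to source pixels
              [0.00, 0.98,  2.0],
              [0.00, 0.00,  1.0]])
lut = build_prewarp_lut(H, 640, 480)
frame = np.random.randint(0, 256, (480, 640, 3), np.uint8)
warped = prewarp(frame, lut)
```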
4 Experimental Results and Discussions Our experimental environment was composed of a DLP projector (SAMSUNG Pocket Imager), a camera (HP mobile camera, SDIO type), and a PDA (DELL AXIM X51V) (see Fig. 4). The system was developed on the Windows Mobile 5.0 platform and used a modified version of the OpenCV library. Fig. 5 shows an example of applying our system to multimedia display. Projection images were geometrically and radiometrically distorted on a dynamic and colorful textured screen (see Fig. 5(a)). The compensation image was computed by analyzing the geometric and radiometric characteristics of the screen (see Fig. 5(b)), and then the multimedia could be displayed without distortion in the user's viewpoint (see Fig. 5(c)).
Fig. 4. Experiment environment: (a) our portable projection-based AR system, (b) the non-planar projection surface
Fig. 5. Experiment results: (a) without any processing, (b) compensation image, (c) with geometric correction and radiometric compensation
Fig. 6. Subjective evaluation and comparison (scores 0-10 for mobility, user-immersion, multi-user access, and preference, for HDTV, PMP, and the proposed portable projection-based AR system)
Fig. 7. Various display systems: (a) HDTV, (b) PMP, (c) the proposed system
To assess the usability of our proposed system in comparison with established display systems (Fig. 7), we asked fifteen volunteers to complete a questionnaire rating the systems in four areas: mobility, user-immersion with respect to display quality, multi-user access, and overall preference. Each area was rated on a ten-point scale; the results are shown in Fig. 6.
In the user evaluation, the HDTV received the highest scores in all areas except mobility, where it scored very low because of its size and weight. Volunteers gave the PMP the highest score for mobility because of its compact size, weight, and display, but several complained that its screen is too small and causes eyestrain, and the PMP received the lowest scores for multi-user access and preference. The portable projection-based AR system did not receive as high a mobility score as expected, because our current prototype is not as compact as existing portable devices such as the PMP; to the best of our knowledge, a mobile device with the capabilities of our prototype has not yet been produced. However, compact portable projection-based display devices are expected soon (see Fig. 8), which should improve the mobility of the proposed system. The proposed system received high scores for both user-immersion and multi-user access, because volunteers could watch accurate, high-resolution visual information without being restricted to a dedicated projection screen, and consequently it was preferred over the PMP. This indicates that a portable projection-based AR system offers more advantages and usability than existing mobile devices, and if compact portable projection-based display devices are commercialized, the demand for intelligent display systems such as the proposed one should grow.
Fig. 8. Example of compact Portable Projection-Based Display Device (Illustrated by Partsnic Components Co., Ltd. in Daewoo, Korea)
5 Conclusions
The proposed portable projection-based AR system improves mobility by overcoming the limitations of fixed display screens and offering a wide display area, and it projects accurately onto arbitrary, dynamic, and colorfully textured screens without distortion by performing geometric correction and radiometric compensation. The system also demonstrates that this is feasible on a mobile platform, thanks to increasingly powerful mobile processors and an implementation optimized for the mobile environment.
In this paper, we confirmed the usability of the portable projection-based AR system through a user evaluation. If a compact mobile device of this kind is produced, user satisfaction with the proposed system should increase further. We are also investigating imperceptible structured light techniques [12, 13, 14] for compensating in real time without interrupting normal operation. Finally, intelligent display systems such as ours could be applied to a variety of applications.
References
1. Wagner, D., Schmalstieg, D.: First steps towards handheld augmented reality. In: Proceedings of the 7th International Symposium on Wearable Computers (ISWC 2003), White Plains, NY, USA, pp. 127–137. IEEE Computer Society, Los Alamitos (2003)
2. Bruns, E., Brombach, B., Zeidler, T., Bimber, O.: Enabling mobile phones to support large-scale museum guidance. IEEE Multimedia (2006)
3. Raskar, R., van Baar, J., Beardsley, P., Willwacher, T., Rao, S., Forlines, C.: iLamps: Geometrically Aware and Self-Configuring Projectors. In: Proc. of SIGGRAPH, vol. 22, pp. 809–818 (2003)
4. Mitsunaga, T., Nayar, S.K.: Radiometric Self Calibration. In: Proc. of CVPR, pp. 374–380 (1999)
5. Nayar, S.K., Peri, H., Grossberg, M.D., Belhumeur, P.N.: A Projection System with Radiometric Compensation for Screen Imperfections. In: Proc. of IEEE Int. Workshop on PROCAMS (2003)
6. Grossberg, M.D., Peri, H., Nayar, S.K., Belhumeur, P.N.: Making One Object Look Like Another: Controlling Appearance Using a Projector-Camera System. In: Proc. of CVPR, pp. 452–459 (2004)
7. Park, H., Lee, M.H., Kim, S.J., Park, J.I.: Surface-Independent Direct-Projected Augmented Reality. In: Narayanan, P.J., Nayar, S.K., Shum, H.-Y. (eds.) ACCV 2006. LNCS, vol. 3852, pp. 892–901. Springer, Heidelberg (2006)
8. Zhang, Z.: Flexible Camera Calibration by Viewing a Plane from Unknown Orientation. In: Proc. of ICCV, pp. 666–673 (1999)
9. Salvi, J., Pages, J., Batlle, J.: Pattern Codification Strategies in Structured Light Systems. Pattern Recognition 37(4), 827–849 (2004)
10. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge (2003)
11. Sukthankar, R., Stockton, R., Mullin, M.: Smarter Presentations: Exploiting Homography in Camera-Projector Systems. In: Proc. of ICCV, pp. 247–253 (2001)
12. Yasumuro, Y., Imura, M., Oshiro, O., Chihara, K.: Projection-based Augmented Reality with Automated Shape Scanning. In: Proceedings of EI, pp. 555–562 (2005)
13. Raskar, R., Welch, G., Cutts, M., Stesin, L., Fuchs, H.: The Office of the Future: A Unified Approach to Image-Based Modeling and Spatially Immersive Displays. In: Proceedings of SIGGRAPH, pp. 179–188 (July 1998)
14. Park, H., Lee, M.H., Seo, B.K., Park, J.I.: Undistorted projection onto dynamic surface. In: Chang, L.-W., Lie, W.-N. (eds.) PSIVT 2006. LNCS, vol. 4319, pp. 582–590. Springer, Heidelberg (2006)
Adaptive Chrominance Correction for a Projector Considering Image and Screen Color
Sun Hee Park, Sejung Yang, and Byung-Uk Lee
Ewha W. University, Republic of Korea
Abstract. Beam projectors are widely used for large screen displays due to their high image quality at relatively low cost. Recently many works have extended the application area of beam projectors to non-ideal screens such as walls and ceilings. When the color of such a non-conventional screen is not white, the reflected color is shifted from the original, so we need to compensate for the color distortion. Past work has been limited to global gamut mapping, where the mapping function is constant over a given color screen; the representable color range is then the intersection of the gamuts of the various colored screen areas, which limits the usable gamut severely. We propose an adaptive mapping function that depends on the screen color and the input image at each pixel. The proposed method attempts to exploit the gamut at each screen position, with the constraint that the mapping function changes smoothly over the screen to avoid abrupt color variation. Experiments with natural images show that our proposed method expands the gamut and enhances the color quality significantly.
1 Introduction
With the widespread use of compact and low-cost video projectors, they are employed in various situations that deviate from the ideal environment of a perfect white screen: the wall of an office can be used as a screen [1,2,3], or the screen may not be planar [4,5]. The reflected images in these situations suffer geometric distortion or deterioration of intensity and color, so it is imperative to correct for geometric distortion and color change. When the screen surface is not white, the color of the reflected image differs from the original, and the range of reproducible color on the screen is limited by the screen color; for example, it is hard to reproduce yellow on a blue screen, since blue is the complementary color of yellow. To reduce the color shift on a non-white screen, we need gamut mapping, which maps the original image colors to other colors. Gamut mapping assigns colors of the reproduction medium to colors of the original image, with the goal of reproducing the output image with an appearance similar to the original [6]. Gamut mapping methods can be divided into color clipping and color compression. Color clipping maps colors outside the gamut to the nearest boundary of the reproduction gamut, so distinct colors outside the reproduction gamut may be mapped to the same color. Compression methods, on the other hand, compress all colors of the original gamut, so different colors are not mapped to the same color; compression may employ linear or nonlinear functions [7].
Recently, content-dependent photometric compensation using uniform chrominance transformation and space-varying luminance transformation is proposed [8]. This method can reduce the irregular luminance difference of the projected image efficiently by applying luminance transformation that is adaptive to screen reflectance. Global gamut mapping methods use spatially uniform color transformation; however, we propose to produce a compensated image by changing transformation parameters adaptively and smoothly according to the input image and the characteristics of the screen. The proposed method results in vivid color as shown in Section 3. This paper is organized as follows. In Section 2, we describe details of our proposed color compensation algorithm considering spatial characteristics of input images. Experimental results of the proposed method and analysis are presented in Section 3. Finally, conclusions and ideas for future work follow in Section 4.
2 Proposed Gamut Mapping Method
The range of color that can be reproduced on a colored screen is smaller than that of the original color space. We therefore need a mapping from the input image color to the reproduction that ensures a good correspondence of overall color appearance between original and reproduction. Conventional gamut mapping is a global method: the same mapping is applied over a screen area that may have different colors. Ashdown et al. [8] proposed spatially non-uniform luminance mapping to utilize the maximum possible brightness on a non-white screen, but they employ a global chrominance mapping that uses the common color gamut reproducible on every colored screen region. Different screen colors have different gamuts, and their intersection is small. Fig. 1 shows the color range of an input image, indicated by a dotted triangle, together with the gamuts reproducible on screen regions of different colors. Global chrominance mapping uses the common intersection of the reproducible gamuts and therefore restricts the reproducible color severely. We instead propose to utilize the gamut of each screen point, which produces better color on the colored screen. We briefly describe the global gamut mapping method of Ashdown et al. [8], since our adaptive mapping is based on it. First, the RGB
Fig. 1. Projector gamut according to the screen color. Horizontal shading represents a gamut of one screen color and vertical shading shows a gamut of another colored screen.
values of the input image are linearized to compensate for the non-linearity of the projector and converted to the device-independent CIELUV color space (L0, u0, v0). The relationship between the desired chrominance and the original chrominance is the spatially uniform linear transform shown in equation (1).
[u1, v1]^T = s [u0, v0]^T + [a, b]^T .    (1)
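For reference, equation (1) is simply a scale of the uv coordinates plus an offset; a minimal sketch of the global mapping (hypothetical function name) is:

def map_chrominance(u0, v0, s, a, b):
    """Global chrominance mapping of Eq. (1): one (s, a, b) triple for the whole screen."""
    u1 = s * u0 + a
    v1 = s * v0 + b
    return u1, v1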
Ashdown et al. defined an objective function that minimizes the error between the original gamut and the mapped gamut and applied nonlinear optimization. Once the luminance and chrominance mappings are established, we can compute the compensated image from the measured screen characteristics. Since the color compensation method of Ashdown et al. applies a global transformation to every pixel of an input image, the chrominance fitting parameters are constant over the screen even where the screen color differs: the global method compresses the input color space onto the intersection of the gamuts of the variously colored screen regions. To expand the gamut reproduced on the screen, we propose an adaptive chrominance mapping algorithm that utilizes the reproducible gamut at each screen color. The mapping of equation (1) then no longer has constant parameters; we use adaptive values for each pixel depending on the screen and image colors. However, the resulting color may show abrupt spatial variation, since the mapping parameters a and b are calculated independently for each gamut in uv color space. We therefore apply spatial Gaussian smoothing to avoid false color edges caused by parameter changes, as follows.
q(i, j) = Σ_{i'} Σ_{j'} p(i − i', j − j') g(i', j'),    (2)
where g is a Gaussian function, p represents the initially mapped image, and q is the smoothed result. Fig. 2 shows the chrominance scale parameter s of equation (1) along a horizontal line of a test image.
Fig. 2. Scale parameter s for chrominance mapping. Dashed line shows a global parameter by Ashdown et al. and solid line represents improved value from the proposed algorithm. Dotted line shows results before spatial smoothing.
As shown in the figure, the global scale parameter s of the Ashdown algorithm shrinks the mapped color space, whereas our method yields a larger value that changes smoothly over the screen; the parameter variation is not noticeable on natural images.
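A sketch of the adaptive variant is given below: instead of one (s, a, b) triple for the whole screen, per-pixel parameter maps are used and smoothed with a Gaussian, in the spirit of equation (2). This is an illustrative NumPy/SciPy sketch with assumed array names, not the authors' implementation, and the smoothing width is an arbitrary choice.

import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_chrominance_map(u0, v0, s_map, a_map, b_map, sigma=15.0):
    """Adaptive version of Eq. (1): per-pixel parameters, Gaussian-smoothed as in Eq. (2).

    u0, v0              : (H, W) input chrominance planes (CIELUV)
    s_map, a_map, b_map : (H, W) per-pixel fitting parameters, one set per screen-gamut region
    """
    # smoothing the parameter maps avoids false color edges at screen-color boundaries
    s = gaussian_filter(s_map, sigma)
    a = gaussian_filter(a_map, sigma)
    b = gaussian_filter(b_map, sigma)
    return s * u0 + a, s * v0 + b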
3 Experimental Results
In this section we compare gamut mapping algorithms. We used an InFocus LP600 projector, a Konica Minolta CS-1000 spectroradiometer, and a Canon EOS 350D camera for measurements of screen reflectance. We present an experiment on a three-colored screen, shown in Fig. 3: the left portion of the screen is yellow-green, the center is white, and the right is pink.
Fig. 3. Screen used for chrominance mapping tests
Fig. 4. Projector gamut influenced by screen colors
We projected red, green, blue, cyan, magenta, yellow, black and white test patterns to measure a gamut at each pixel on the screen. Image was captured by an EOS 350D digital camera, which has been calibrated by a CS-1000 spectroradiometer [9,10]. The
measured gamut is shown in Fig. 4, which shows three major gamuts caused by three different colors on the screen. While a global chrominance mapping utilizes the intersection of those gamuts, we attempt to use each gamut of a pixel to reduce the color change. Table 1 shows u-v color coordinate values on colored screens before and after compensation. We compute average (u, v) values for six test color patterns on pink, white and yellow-green screens. Fig. 5 shows plots of Table 1. From Fig. 5 (e) and (f) we observe that the compensated colors on colored screens are reasonably close to colors on the white screen. The performance of our proposed method is better than the global color mapping proposed by Ashdown et al. Table 1. uv values of six test colors before and after compensation
Table 2. Comparison of color error between the previous global color mapping and the proposed algorithm
Average color error: 0.50 before compensation; 0.45 after compensation with the previous (global) method; 0.21 after compensation with the proposed method
Fig. 5. Comparison of compensation methods using a test pattern with six colors (curves: desired output, light-green screen, white screen, pink screen). (a) u values of the output image before compensation, (b) v values before compensation, (c) u values after compensation with the global method, (d) v values after compensation with the global method, (e) u values after compensation with the proposed method, (f) v values after compensation with the proposed method.
Fig. 6. Screen color correction for three natural images: (a) original input image, (b) output image before compensation, (c) input image after compensation with the proposed method, (d) compensated output by global gamut mapping, (e) compensated output by the proposed algorithm
We apply the Ashdown algorithm and our gamut mapping algorithm to the uniform test patterns and compare the errors in Table 2, which reports the mean absolute color error in uv space. The proposed adaptive color compensation reduces the color error to less than half that of the global compensation algorithm. Fig. 6 shows results of our method on natural images. In the projected image without compensation (Fig. 6(b)), the center of the image is brighter while the yellow-green and pink portions of the screen appear darker, and the color difference is clearly visible, especially across the edges between screen colors. Fig. 6(d) shows the image compensated by the Ashdown method, and Fig. 6(e) the result of our proposed algorithm. The adaptive gamut mapping shows vivid color with smooth changes across the screen color boundaries.
4 Conclusions
In this paper, we presented a color compensation method that reduces color change on a non-white screen. We use adaptive gamut mapping parameters that depend on the screen color and the image color, which yields a wider gamut than a global gamut mapping algorithm, and we apply spatial smoothing to the mapping parameters to avoid sudden changes caused by screen color transitions. Quantitative error measurements and subjective observation verify that the proposed method dramatically reduces color shift. Future work includes reducing the computational complexity and further analysis of the spatial smoothing of the gamut mapping parameters.
References
1. Raskar, R., Welch, G., Cutts, M., Lake, A., Stesin, L., Fuchs, H.: The Office of the Future: A Unified Approach to Image-Based Modeling and Spatially Immersive Displays. In: Proceedings of SIGGRAPH 1998, pp. 179–188 (1998)
2. Raij, A., Gill, G., Majumder, A., Towles, H., Fuchs, H.: PixelFlex2: A Comprehensive, Automatic, Casually-Aligned Multi-Projector Display. In: Proceedings of IEEE International Workshop on Projector-Camera Systems (2003)
3. Stone, M.C.: Color and Brightness Appearance Issues in Tiled Displays. IEEE Computer Graphics & Applications, 58–66 (2001)
4. Bimber, O., Emmerling, A., Klemmer, T.: Embedded entertainment with smart projectors. IEEE Computer 38, 48–55 (2005)
5. Zollmann, S., Bimber, O.: Imperceptible Calibration for Radiometric Compensation. In: EUROGRAPHICS 2007, vol. 26(3) (2007)
6. Morovic, J., Luo, M.R.: The fundamentals of gamut mapping: A survey. Journal of Imaging Science and Technology 45(3), 283–290 (2001)
7. Herzog, P.G., Muller, M.: Gamut Mapping Using an Analytic Color Gamut Representation. In: SPIE, Device-Independent Color, Color Hard Copy, and Graphic Arts, pp. 117–128 (1997)
8. Ashdown, M., Okabe, T., Sato, I., Sato, Y.: Robust Content-Dependent Photometric Projector Compensation. In: Computer Vision and Pattern Recognition Workshop, pp. 60–67 (2006)
9. Goldman, D.B., Chen, J.H.: Vignette and Exposure Calibration and Compensation. In: ICCV, Beijing, China, pp. 899–906 (2005)
10. Debevec, P.E., Malik, J.: Recovering High Dynamic Range Radiance Maps from Photographs. In: Proceedings of SIGGRAPH, pp. 369–378 (1997)
Easying MR Development with Eclipse and InTml Pablo Figueroa and Camilo Florez Universidad de los Andes, Colombia
[email protected],
[email protected]
Abstract. This paper shows our work in progress towards an easy to use development environment for Mixed Reality (MR) Applications. We argue that development of MR applications is a collaboration between interaction designers who know about user requirements, and expert developers who know the intricacies of MR development. This collaboration should be supported by tools that aid both roles and ease their communication. We also argue that real MR development should allow easy migration from one hardware setup to another, since hardware greatly varies in these type of applications, and it is important to fit a solution to the particular user’s requirements and context. We show the foundational concepts in our work and current Integrated Development Environment (IDE) implementation. This work is based on InTml, a domain specific language for MR applications, and Eclipse, an open source, general purpose IDE.
1 Introduction
Current MR interfaces use a wide variety of form factors due to the range of available input and output devices, options in required computational power, and dissimilar solutions in subfields of MR, such as Virtual Reality (VR) and Augmented Reality (AR). MR development is complex in general, and therefore it is usually targeted to a particular combination of devices and software frameworks. However, this development style precludes MR interface exploration and prototyping, since changes in devices and interaction techniques may require drastic changes in the software infrastructure. Moreover, it is difficult to involve domain experts and interaction designers in this type of project, due to the lack of a common communication language that allows these people and MR programmers to understand each other's ideas and design concerns. We are interested in supporting MR development for multiple hardware setups, in order to facilitate rapid prototyping and application migration, among other related issues. Our approach is based on the use of the Interaction Techniques Markup Language (InTml), a domain specific language (DSL) for MR applications, which hides the intricacies of a particular implementation and allows porting and migration between different MR setups by implementing a translator to general purpose programming languages such as C++ or Java.
This paper shows our philosophy behind IDE support for MR development, current work that eases the use of InTml, and the way we support collaborative work between domain experts and programmers. We believe tool support such as the one described here could help the development of more complex and compelling MR applications, and facilitate domain experts to create their own prototypes. This paper is divided as follows: First, we give a brief overview of related work. Next, we describe our proposal for an IDE for MR applications, and finally we give conclusions and a description of future work.
2 Related Work
There are several frameworks, libraries, and languages to support MR development, usually directed to a particular subfield such as VR or AR. There are few development environments with high level, user–friendly languages (e.g. [1,2]), but they predefine interaction techniques that are either impossible or very difficult to override. Several toolkits and frameworks offer a partial and low level solution to related problems [3,4,5,6,7,8], and there are some frameworks [9,10,11,12,13] that offer integrated solutions, sometimes based on the previous partial solutions. While these solutions are the state of the art in MR development, they are still too low level for interaction designers or domain experts, since their use require deep programming skills in languages such as C++ or Java. Some authors have proposed high–level, portable ways for describing VR applications [14,15], but they have only been used on a subset of VR applications, usually Desktop VR. There are also some DSLs for MR applications [1,16], but they concentrate mostly on geometry description and interaction on PC–based environments. Other available DSLs could be very complex for non–programmers, such as [17,18] which are based on state machines and Petri Nets, respectively. There have been some attempts to define VR development methodologies supported by tools, such as the ones in [19,20,21,22], although they are usually directed to the development of isolated MR applications, and therefore do not yet address the issue of porting an application from one MR environment to another. In summary, there are several libraries and languages to support MR development, but they are either too low level for interaction designers, too limited in terms of the supported devices, or create isolated applications, which lack provisions for reuse and application migration.
3 Integrated and Scalable Development
This section presents the core assumptions behind our proposal, the concepts in an IDE for MR, what the IDE we are developing will look like, and the current status of our development.
3.1 Core Assumptions
Our approach for easying MR development is based on the following assumptions: 1. It is possible to separate MR development concerns at two levels: design concerns and programming concerns. Design concerns are related to user requirements and high level decisions, such as content geometry and expected behavior, interaction techniques, and devices. Programming concerns are related to low level elements in the solution, such as software frameworks, device setup, and device calibration. 2. A dataflow–based language is simple enough for non–programmers, and it could be used for people without degrees in Computer Science. The DSL we use is based on this paradigm, and there are similar languages in other domains [23,24] which facilitate programming in their fields. 3. An IDE could facilitate the use of the DSL concepts and the communication between interaction designers and programmers. In a similar way that current IDEs for Flash or WIMP interfaces accelerate development, an IDE for MR development will provide an easy to use environment for the creation of MR rapid prototypes. 4. A well supported MR platform should have a component library that aid designers to create their own MR applications, without minimum intervention of programmers. 3.2
Concepts
This section presents concepts about InTml, the foundational DSL language in our approach, and the issues we would like to address with an IDE supported development process. The Interaction Techniques Markup Language (InTml) [25] is a dataflowbased environment for the development of VR applications, which can be easily extended to the MR domain. The main design element is a filter, a black box with several input and output ports. Ports are typed, and they can receive zero or many tokens of information at any particular frame of execution. Filters represent either devices, behavior, or content, and they can also be hidden inside composite filters. Filters have type definitions, which are useful for documentation, error checking, and code generation into implementation frameworks. There are several factors in MR development that we would like to support with an IDE, as follows: – Development for a diversity of MR setups. Since there are and will be several options for input and output devices, it is important to support a variety of MR platforms within an IDE. It is still difficult to identify standard setups, so developers should be able to define their own device choices in any particular development.
– Support for common practices. Despite the previous factor, there are already some interaction techniques, devices and algorithms that are useful in MR applications [26], which could be useful in some situations. An IDE should be able to offer such practices, or new ones as soon as they are widely accepted. – Levels of developer expertise. It could be desirable to support development from novice developers to experts. Novices will require support for common solutions, examples, and ways to easily try their ideas, while experts will be able to create novel solutions from scratch. In any case, tools in the IDE should support these development styles. – Multidisciplinary development teams. We believe interaction designers and programmers should interact and collaborate in the development of new MR applications. They should have a common language for understanding, and they should easily observe the responsibilities and solutions that the other role is adding to the project. – Development of isolated applications or families of applications. It should be possible to develop either one application for a particular MR setup, or a family of applications for an entire set of hardware configurations. A family of application could also be seen as a way to support hardware migration in MR applications. From these requirements, and from the model that the Eclipse Platform [27] provides, we defined an IDE that supports MR development. The following subsection shows our current design, and the following shows current status of our implementation. 3.3
The Design of an IDE for MR Applications
This section shows our IDE design, with concepts shown in italics. The creation of an application involves three related concepts: a basic type system, a library of filter classes, and the application itself. The application’s type system defines common abstract types which will be used throughout the filter classes and the application. Such types should be later translated to types in the general purpose language in which the runtime engine is implemented. A library of filter classes is a set of components, useful for the application in development. An application is a set of filter class instances and constants, connected in a dataflow. Any filter instance should have a class, defined in a library. An application could be implemented in terms of filter classes from one or more libraries, as long as their type systems are compatible. Several applications could be defined on top of the same set of libraries, in which case they conform a family of applications. An application can be defined in terms of tasks, which are partial views of the dataflow. A task represents a piece of application functionality, which can be understood by itself. Several applications in a family can implement the same task in very different ways, according to their own contexts and setup. An IDE in this model should define ways to create basic types, create filter classes, create new applications by creating tasks, instantiating filter classes and connecting such instances. In order to support reuse, it should be possible to
Fig. 1. Desired View for an Eclipse–Based MR IDE
create create, import, or merge filter class libraries, and create new applications in a family. Finally, there should be a way to easily run an application, both in a simulator and in the final MR setup. In principle, editing tasks should be easily understood by IDE users, which could be achieved by a direct depiction of the domain specific model in the interface, in this case the application dataflow. Figure 1 shows how we envision our IDE. The main editor at the center will allow direct manipulation of an application dataflow. The properties window below will provide fine tuning capabilities for the selected element in the dataflow. The left pane will show the main components in the current family of applications, such as libraries, basic data types, and applications. Finally, at the right, application views will provide extra information, such as current application tasks and filters. 3.4
Current Status
Basic support for the previous concepts is generated with the Eclipse Modeling Framework (EMF) tools of the Eclipse platform [28]. Figure 2 shows how an application is composed of tasks, filters, and links; Figure 3 shows how a filter library is composed of filter classes, which in turn are composed of a name, input ports, and output ports. This UML model is read by the EMF tools in order to create a basic set of editors and views. We have integrated three basic views in order to support the creation of generic applications, libraries, and basic types, and we are in the process of adding code generation and dataflow editing capabilities. Figure 4 shows the basic types editor with its corresponding property page. This tool allows the user to define new basic types and, for each type, its name, its mapping to the space of Java classes, and its visibility (false means that the basic type is not visible from other editors).
Fig. 2. UML Diagram for Application Modeling
Fig. 3. UML Diagram for Filter Class Modeling
Fig. 4. Basic Types View
Fig. 5. Filter Class Library View
Fig. 6. Application View
Figure 5 shows the filter class library view. New filter classes can be created in this view by defining their name, input ports, and output ports; note that ports require a type, defined in the basic types resource of the same project. Finally, Figure 6 shows the application view. An application is defined in terms of tasks, filters, and links: the editor in the middle of the screen allows creation of these three entities, while the view on the right side of the screen shows just the filters. Although our tool is not ready for final users, we have informally put our development technique and concepts to the test with our BSc, MSc, and PhD students in Computing Engineering, and with students from other specialties. For example, 15 students from several specialties in an extension program in Multimedia received three hours of training in InTml. After training, they were asked to design MR applications in groups of about 4 people, in order to promote discussion and self-tutoring on the basic concepts of this new language. After a week, the designs were shown to the whole class, and each design and the overall experience were discussed. Although they developed their designs only on paper, we could observe that the concepts were understood and that our DSL allowed them to express their designs. We also observed that they prefer to learn new languages and concepts through trial and error and learning by example, which can be supported through an IDE. Some issues they made explicit were related to the restrictions they feel in IDEs for other languages, and how such restrictions helped them understand how to use such a language.
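To make the filter/port/dataflow vocabulary used above concrete, the following Python sketch models filter classes with typed ports, filter instances, and type-checked links. It is purely illustrative: it is not InTml syntax nor the actual EMF-generated model, and all names are hypothetical.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class FilterClass:
    name: str
    in_ports: Dict[str, str]       # port name -> basic type name
    out_ports: Dict[str, str]

@dataclass
class FilterInstance:
    cls: FilterClass
    name: str

@dataclass
class Application:
    filters: Dict[str, FilterInstance] = field(default_factory=dict)
    links: List[Tuple[str, str, str, str]] = field(default_factory=list)  # (src, out, dst, in)

    def link(self, src, out_port, dst, in_port):
        # a link is only legal if the port types agree, mirroring the type checks in the editor
        s, d = self.filters[src], self.filters[dst]
        assert s.cls.out_ports[out_port] == d.cls.in_ports[in_port], "port type mismatch"
        self.links.append((src, out_port, dst, in_port))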
4 Conclusions and Future Work
We have shown our design and current status of an IDE for development of MR applications, based on InTml, a high level, dataflow–based language. Such a language allows us to separate concerns related to interaction and functionality from concerns related to calibration and software integration, so designers should not be directly worried with the latter issues. As future work we plan to fully implement the expected functionality in our design, and with it show how non– programmers can use complex MR facilities in our lab.
Acknowledgements This on–going work has been funded by the IBM Faculty Awards 2006.
References 1. Web3D Consortium: Extensible 3D (X3DT M ) Graphics (2003), Home Page http://www.web3d.org/x3d.html 2. : Alice: Easy interactive 3D graphics. Carnegie Mellon University and University of Virginia (1999), http://www.alice.org 3. SGI: (2003), Iris performer home page. http://www.sgi.com/software/performer 4. Reiners, D., Voss, G.: (2007), Opensg home page, http://opensg.vrsource.org 5. Taylor, R.M., Hudson, T.C., Seeger, A., Weber, H., Juliano, J., Helser, A.T.: VRPN: A device-independent, network-transparent VR peripheral system. In: Proceedings of the ACM symposium on Virtual reality software and technology, pp. 55–61. ACM Press, New York (2001) 6. Reitmayr, G., Schmalstieg, D.: An open software architecture for virtual reality interaction. In: Proceedings of the ACM symposium on Virtual reality software and technology, pp. 47–54. ACM Press, New York (2001) 7. Kato, H.: Artoolkit. (2007), http://www.hitl.washington.edu/artoolkit/ 8. Sun Microsystems: (1997), Java 3D Home Page. http://java.sun.com/products/java-media/3D/index.html 9. VRCO: Cavelib library (2003), http://www.vrco.com/products/cavelib/cavelib.html 10. Bierbaum, A., Just, C., Hartling, P., Meinert, K., Baker, A., Cruz-Neira, C.: VR Juggler: A Virtual Platform for Virtual Reality Application Development. In: Proceedings of IEEE Virtual Reality, pp. 89–96. IEEE, Los Alamitos (2001) 11. Sense8: Virtual reality development tools. The sense8 product line(2000), http://www.sense8.com/products/index.html 12. Bowman, D.: Diverse (2000), http://www.diverse.vt.edu/ 13. : Virtools. Virtools SA (2007), http://www.virtools.com/index.asp 14. Mass´ o, J.P.M., Vanderdonckt, J., Simarro, F.M., L´ opez, P.G.: Towards virtualization of user interfaces based on usixml. In: Web3D 2005: Proceedings of the tenth international conference on 3D Web technology, pp. 169–178. ACM Press, New York (2005) 15. Dachselt, R., Hinz, M., Meiner, K.: Contigra: an XML–based architecture for component-oriented 3d applications. In: Proceeding of the seventh international conference on 3D Web technology, pp. 155–163. ACM Press, New York (2002)
16. Autodesk: Autodesk FBX (2006), http://usa.autodesk.com/adsk/servlet/index?id=6837478\&siteID=123112 17. Wingrave, C.A., Bowman, D.A.: Chasm: Bringing description and implementation of 3d interfaces. In: Proceedings of the IEEE Workshop on New Directions in 3D User Interfaces, pp. 85–88. Shaker Verlag (2005) 18. Smith, S., Duke, D.: The hybrid world of virtual environments. In: Eurographics Proceedings, vol. 18, pp. 298–307. Blackwell Publishers (1999) 19. Tanriverdi, V., Jacob, R.J.: VRID: A design model and methodology for developing virtual reality interfaces. In: Proceedings of the ACM Symposium of Virtual Reality Software and Technology, pp. 175–182. ACM Press, New York (2001) 20. Neale, H., Cobb, S., Wilson, J.: A front ended approach to the user-centred design of ves. In: Proceedings of IEEE Virtual Reality, IEEE, pp. 191–198. IEEE Computer Society Press, Los Alamitos (2002) 21. Kim, G.J.: Designing Virtual Reality Systems. The Structured Approach. Springer, Heidelberg (2005) 22. Sastry, L., Boyd, D., Wilson, M.: Design review and visualization steering using the inquisitive interaction toolkit. In: IPT/EGVE 2001: Joint 5th Immersive Projection Technology Workshop / 7th Eurographics Workshop on Virtual Environments (2001) 23. Inc, T.M.: Matlab (2007), http://www.mathworks.com/ 24. 74, C.: Max/msp (2007), http://www.mathworks.com/ 25. Figueroa, P., Green, M., Hoover, H.J.: InTml: a description language for vr applications. In: Web3D ’02: Proceeding of the seventh international conference on 3D Web technology, pp. 53–58. ACM Press, New York (2002) 26. Bowman, D., Kruijff, E., Joseph, J., LaViola, J., Poupyrev, I.: 3D User Interfaces: Theory and Practice. Addison-Wesley, Reading (2004) 27. Foundation, T.E.: Eclipse (2007), http://www.eclipse.org/ 28. Budinsky, F., Steinberg, D., Merks, E., Ellersick, R., Grose, T.J.: Eclipse Modeling Framework. Addison-Wesley, Reading (2003)
Unsupervised Intrusion Detection Using Color Images Grant Cermak and Karl Keyzer Institute of Technology University of Minnesota, Twin Cities Minneapolis, MN 55455
[email protected],
[email protected] Abstract. This paper presents a system to monitor a space and detect intruders. Specifically, the system analyzes color video to determine if an intruder entered the space. The system compares any new items in a video frame to a collection of known items (e.g. pets) in order to allow known items to enter and leave the space. Simple trip-line systems using infrared sensors normally fail when a pet wanders into the path of a sensor. This paper details an adaptation of the mean shift algorithm (described by Comaniciu et al.) in RGB color space to discern between intruders and benign environment changes. A refinement to the histogram bin function used in the tracking algorithm is presented which increases the robustness of the algorithm.
1 Introduction
Security systems are not prevalent in many residences due to cost and ease of use issues. One of the methods to address this concern is to use video recording technology and intelligent environment monitoring to detect intruders. For scenarios where the environment does not vary significantly, the problem is as simple as detecting large changes to the scene. When the environment can change significantly under normal circumstances, such as in a home where pets enter the view of the video camera, or when significant lighting changes occur, a more complex intrusion detection algorithm is required. The proposed system defines algorithms which provide intrusion detection while limiting false alerts.
2 Intrusion Detection
A successful intrusion detection system does not need to segment an image into individual items to be effective. Rather, the most important system function is notifying the user when a significant change occurs. The fundamental problem is to model the nominal state of the system and the expected variation in such a way that anomalous events are easily detected. Lighting changes and the movement of household items (such as a pet cat or dog or robotic cleaning agent) must be accounted for in the model as expected variations.
3 Problematic Approaches
Several approaches to the problem were investigated. Initially the problem was approached using edge detection based segmentation algorithms. Most of the
literature concerning edge detection and segmentation describe operations on grayscale images. Segmentation of grayscale images is problematic because the distinction between pixels is simply the difference in their intensity values. Thus pixels with completely different hues may appear very similar in a grayscale image. Therefore the initial approach to segmentation used color information to improve the process. Color images contain hue and saturation information which provide for a more effective pixel comparison. Edge detection operations often work by comparing the “distance” between colors in RGB space. This approach poses other problems. If a portion of an item is well lit while another portion of the item has a shadow cast on it, this large intensity difference will cause some segmentation algorithms to divide a single item into two distinct items. One method of mitigating this issue uses the angular difference between the vectors defined by the RGB values of two pixels.[3] Combining the RGB vector angle method with a clustering algorithm provides a more robust segmentation technique.[4] Analysis determined that segmentation of the base image, which represents the nominal state of the scene, however, is not needed to provide an effective solution to the problem. The system need not know that a certain region of the scene represents an inanimate object (such as a couch or a chair). Attempting to teach the system about classifications or taxonomies of inanimate object classes is impractical due to the limited visibility of the camera and large degree of variation in shape, size, color, and design of these objects. Therefore a robust approach to the system must not include more processing or classification than is absolutely necessary. A complicated approach will ultimately lead to failure as special case scenarios are encountered.
4 A New Approach: Real-Time Target Tracking
A robust approach to determining whether a region of an image matches a region in another image is presented by Comaniciu et al. [1]. This approach does not use a pixel-by-pixel comparison but rather compares histograms that represent the distribution of colors to determine a "degree of matching." By comparing sub-regions of an image to the corresponding sub-regions in subsequent images, this technique can be used to detect substantial changes between images. A threshold can thus be defined that allows slight variations in pixel color value to "match" from one image to the next. This approach accommodates slight shifts of the camera and slow color changes over time; lighting changes are handled by allowing gradual movements in lighting or color. In addition, regions of pixels that contain a rapidly moving object (such as a pet or robotic cleaner) can be "recognized" between images. Gradual changes in the image are handled by systematically updating the base image to a more recent image, which greatly increases robustness to lighting variation over the course of a day. This algorithm serves as the basis for comparison and recognition in the system. Fundamental questions such as "Is this region of pixels an intruder?" or "Is this region of pixels the same as the region of pixels identified as the cat in the previous image?" are answered by careful formulations of the algorithm.
4.1 Macroblock Pre-processing
In most cases, the captured scene does not exhibit significant shifts when the camera is bolted to a solid surface. However, for a robust solution in locations where small tremors can occur, or where the camera is mounted far from the monitored scene, small variations in the scene must not trigger false alerts. Macroblock pre-processing is a fast algorithm [6] that assumes there is no pixel shift in the monitored scene. The system averages the colors in individual sub-regions of the image and uses the average RGB value for comparison with the corresponding sub-region of a base image. A deficiency of this approach is that, for a slight shift in the image, colors along the edges of the scene can change dramatically as new items (previously cropped by the camera) suddenly come into view. However, the majority of frames do not contain frame shifts, so this fast algorithm provides a good indication of the regions of concern and decreases the need for additional processing under nominal monitoring conditions. Items were moved between the images in Figure 1 and Figure 2 below, and the algorithm highlights the areas of change.
Fig. 1. Base Image Macroblock
Fig. 2. New Image Macroblock
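A minimal sketch of the macroblock comparison described in Section 4.1 is shown below (NumPy). The block size and threshold are assumptions, not values from the paper.

import numpy as np

def macroblock_diff(base, current, block=16, threshold=25.0):
    """Flag macroblocks whose average RGB color moved far from the base image.

    base, current : (H, W, 3) float arrays; returns a boolean grid, one entry per macroblock.
    """
    h, w, _ = base.shape
    h, w = h - h % block, w - w % block
    def block_means(img):
        v = img[:h, :w].reshape(h // block, block, w // block, block, 3)
        return v.mean(axis=(1, 3))                        # average color per macroblock
    dist = np.linalg.norm(block_means(current) - block_means(base), axis=-1)
    return dist > threshold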
4.2 System Training
The system must store the representation of an expected intruder so it can discern between it and an unwanted intruder. Thus the system learns how the cat, dog, or robot agent appears when intruding upon the system. The system stores the expected intruder in the form of a target model (see Section 5.1). This representation contains color distribution information about the intruder such that it can be compared against potential intruders in subsequent images.
5 Target Tracking
Each region in the training image is considered a target model, for which the system calculates a distribution of pixel colors that summarizes the item. The color distribution is weighted so that pixels located closer to the center of the target model receive greater weight than peripheral pixels. This is desirable because pixels farther from the target model center are more likely to be occluded by other items in the scene.
The color distribution at the corresponding location in the new image is calculated and compared to that of the target model. The comparison operator also employs a kernel derivative, which guides the algorithm to the next "target candidate" location. If the distance between the current location and the location where the algorithm predicts the next target candidate should be is below a threshold, the algorithm has converged on the solution [1].
5.1 Item Tracking Specifics
To begin the algorithm, the color distribution of the target model is calculated. Based on its size and shape, we approximate the target model by its width in pixels and its center of mass, p0* = (x0*, y0*). Let p1, ..., pn be the set of n pixels within the target model, that is, within the range defined by the item's center of mass and its width. Let Q be a three-dimensional matrix whose axes correspond to the red, green, and blue color primaries. Let N_b be the number of bins into which each color axis is divided; for example, if we choose N_b to be 32, then Q has a total of 32^3 elements. Let k be a kernel that assigns a weight to pixels at a given distance from the center. For this system, a Gaussian kernel was implemented:
k(x) = (1 / (σ √(2π))) exp(−x² / (2σ²))
where this system chooses σ = width / 2, with width the width of the current target model. This value was chosen after several experiments showed that decreasing sigma de-weighted the peripheral pixels too strongly; with the peripheral pixels severely de-weighted, the item tracking algorithm has difficulty centering on the item it is tracking. Fig. 3 shows the kernel used in the system, with the weight distribution for a target of width 15 and sigma 7.5.
Fig. 3. Kernel Distribution (2-D Gaussian weights, width = 15, sigma = 7.5; weights on the order of 10^-3)
The color distribution for the target model can be calculated as follows:
1. Set all elements in Q to zero.
2. Calculate the normalization factor:
   C = 1 / Σ_{i=1}^{n} k(||p_i − p0*||²)
3. Finally:
   For each pixel p_i where i = 1 to n
     Determine the weight to be given to each bin location (Section 5.5)
     For each appropriate bin b, increase the value as follows:
       Q(b) = Q(b) + Weight(b) · C · k(||p_i − p0*||²)
     End For
   End For
∑ p w d( p n
p0est =
i
i =1 n
∑w d( p i
i =1
where
Q ( r , g , b) , and wi = P ( r, g , b)
* 0
i
* 0
)
2
− pi
− pi
)
2
r = red( pi ) / N b g = green( pi ) / N b b = blue( pi ) / N b
Unsupervised Intrusion Detection Using Color Images
775
An effective comparison of the two distributions is the Bhattacharyya Coefficient: Nb Nb Nb
ρ [Q, P ] = ∑∑∑ P( r, g , b)Q ( r, g , b) r =1 g =1 b =1
5.2 Item Tracking Algorithm Putting all of the steps together, the entire algorithm [1] is as follows: 1. Set the initial estimate of the item in the new image equal to the location of the target model in the previous image:
p0 = p0*
p0 and call this P 3. Compute the Bhattacharyya coefficient ρ [Q , P ]
2. Compute the color distribution at
4. Using the mean-shift vector, calculate the new estimated target location 5. Calculate the color distribution,
p0est
Q est at p0est and compute the Bhattacharyya
ρ [Q est , P ] ρ [Q est , P ] < ρ [Q , P ] ,
coefficient 6. If
Set
p0est =
1 est ( p0 + p0 ) 2
Return to step 5 Else, continue to step 7 7. If
p0est − p0 < ε Stop
Else set In step 7,
p0 = p0est and return to step 2 p0est − p0 < ε , indicates that the target candidate does not need to be
adjusted, since the estimated target location differs little from the current location. By tracking items from image to image using this algorithm, the system need not rely on the edge detection routines for this task. 5.3 Application of the Item Tracking Algorithm One useful application of the item tracking algorithm is to eliminate false alerts due to image shifts. Even with a stationary camera, each captured image may have a shift of a small number of pixels. Since the macroblock algorithm divides the image into small sections, the macroblocks are susceptible to false alerts due to image shifts. This is particularly the case in areas of large contrast within a shifted image. The method chosen to solve this issue is to allow the item tracking algorithm to search for image shifts. For each macroblock where a change is detected, a target model is created. The target is centered on the macroblock in the new image. The item
776
G. Cermak and K. Keyzer
tracking algorithm is then run for this target model in the base image. If an image shift had occurred, then the tracking algorithm will be able to find the macroblock in the base image. If the target model is not found in the base image this means that a significant change in the image occurred. A clustering algorithm is used to group the area of changed pixels in order to compare the area of change to the known set of items. Below is an example of the macroblock detection algorithm performed on an image. There are no changes between this and the base image, except for slight image shift. The macroblock algorithm detected changes along of edges of items due to the image shift. Using the algorithm which accounts for image shift all macroblock detections are removed and thus no false alerts are raised.
Fig. 4. Original Image
Fig. 5. New Image with Macroblock Detections
5.4 Clustering Algorithm When a significant change is detected by the macroblock algorithm that cannot be attributed to an image shift, the system aims to group the set of pixels that changed. The system summarizes this pixel group as a target model and then both tracks the item and compares it to a database of known items. The system implements a clustering algorithm where adjacent pixels exhibiting large color changes are grouped. [8] To determine the location to start clustering, the system evaluates the macroblocks that indicated significant changes, groups adjacent macroblocks into sets, and finds the center of the set. The center of the set defines the beginning location of the clustering algorithm. The algorithm first computes the RGB pixel differences between the base image and the current image. The algorithm adds pixels to the cluster when the RGB difference calculated exceeds a defined threshold. The motion clustering algorithm can be summarized as follows: Compute the Euclidean distance in RGB coordinates at each pixel location, ( x, y ) between the current image and the base image:
diff ( x, y ) = ( rc − rb ) 2 + ( g c − g b ) 2 + (bc − bb ) 2 where
Unsupervised Intrusion Detection Using Color Images
777
( rc , g c , bc ) are the pixel values at location ( x, y ) in the current image ( rb , gb , bb ) the pixel values at location ( x, y ) in the base image Algorithm: Let D be the maximum distance between pixels that can be clustered together Let M be the minimum RGB distance indicating significant change Let p0 be the initial pixel Cluster = {} PixelList = {p0} While PixelList is not empty Let p be the first pixel in PixelList For each pixel, p', at location (x,y) within a distance, D of p and not in Cluster If diff(x,y) ∃ M Add p' to PixelList End If End For Add p to Cluster Remove p from PixelList End While The result of the algorithm is a cluster of pixels that all experienced a large change between frames. The cluster of pixels can then be treated as a target model. The model is then compared to a database of expected targets in order to determine if that item is one of the acceptable intruders whose presence should not raise an alert. 5.5 Interpolation Technique The distribution of colors of a target is classified by allocating the pixel values to bins representing the colors. For this system, the bins were defined as follows:
5.5 Interpolation Technique
The distribution of colors of a target is classified by allocating the pixel values to bins representing the colors. For this system, the bins were defined as follows:

N_b: as previously defined, the number of bins into which each RGB axis is divided
N_p: the number of pixel values along each axis for each bin, thus N_p = 256 / N_b

Because each bin represents a range of N_p pixels along each RGB axis, some information is lost in translating the pixel values into their corresponding bins. The method described by Comaniciu et al. assumes a binning function which maps any RGB pixel value to an individual bin [1]. This method is somewhat problematic, as it must make arbitrary determinations in allocating very similar pixel values to different bins. An enhancement to this method is to interpolate between bin locations and distribute the weight of the pixel amongst a set of bins (see Fig. 6 and Fig. 7).
The proposed technique allows for a more robust matching procedure, as more partial matches will occur, allowing the tracking algorithm to have a better probability of finding the best match to a target model. The following procedure describes how to determine the distribution of weights given to the different bins based on an individual pixel:
Fig. 6. Naïve Bin Distribution

Fig. 7. Interpolation Technique

For a pixel value (r_p, g_p, b_p), compute:

r_1 = N_p · floor(r_p / N_p), g_1 = N_p · floor(g_p / N_p), b_1 = N_p · floor(b_p / N_p)
r_2 = r_1 + N_p, g_2 = g_1 + N_p, b_2 = b_1 + N_p
w_r = (r_2 − r_p) / N_p, w_g = (g_2 − g_p) / N_p, w_b = (b_2 − b_p) / N_p

Then, the weight given to each bin is:

Weight(r_1, g_1, b_1) = w_r · w_g · w_b
Weight(r_1, g_1, b_2) = w_r · w_g · (1 − w_b)
Weight(r_1, g_2, b_1) = w_r · (1 − w_g) · w_b
Weight(r_1, g_2, b_2) = w_r · (1 − w_g) · (1 − w_b)
Weight(r_2, g_1, b_1) = (1 − w_r) · w_g · w_b
Weight(r_2, g_1, b_2) = (1 − w_r) · w_g · (1 − w_b)
Weight(r_2, g_2, b_1) = (1 − w_r) · (1 − w_g) · w_b
Weight(r_2, g_2, b_2) = (1 − w_r) · (1 − w_g) · (1 − w_b)
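A direct implementation of this weight distribution is sketched below. It is our illustration; the function name is ours, and clamping the upper bin index at the last bin is our choice, since the paper does not specify how the upper edge of the histogram is handled.

```python
import math

def bin_weights(rp, gp, bp, Nb=16):
    """Distribute the weight of an RGB pixel over its 8 neighbouring bins.
    Returns a dict mapping (bin_r, bin_g, bin_b) indices to weights summing to 1."""
    Np = 256 // Nb                                            # pixel values per bin
    lows = [Np * math.floor(v / Np) for v in (rp, gp, bp)]    # r1, g1, b1
    ws = [((low + Np) - v) / Np for v, low in zip((rp, gp, bp), lows)]  # wr, wg, wb
    out = {}
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = (ws[0] if dr == 0 else 1 - ws[0]) \
                  * (ws[1] if dg == 0 else 1 - ws[1]) \
                  * (ws[2] if db == 0 else 1 - ws[2])
                idx = tuple(min(low // Np + d, Nb - 1)
                            for low, d in zip(lows, (dr, dg, db)))
                out[idx] = out.get(idx, 0.0) + w
    return out
```

The eight weights always sum to one, so no pixel mass is lost or duplicated by the interpolation.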
Fig. 6 and Fig. 7 illustrate in two dimensions how this interpolation technique works:
The X marks the pixel value, the bin numbers are shown in the centers, and the weight given to each bin is shown in the corners. Without distributing the weight, all of the pixel's weight would be given to the bin corresponding to coordinates (1,1), while the adjacent bins would not receive any weight from this pixel. As an example, Bin(1,1) receives (1.0 − 0.2) * (1.0 − 0.4) = 0.48, or 48% of the weight of the pixel.
6 Experiment A video camera (XCam2 Wireless Color Camera, XX16A) was mounted in the loft of a home surveying a living room. Images captured by the camera were transmitted wirelessly to a PC. Images of resolution 620 x 480 were captured over the course of 5 hours at 30 second intervals. Images were resized to 640 x 480 (using a letterboxing
Fig. 8. Summary
technique) then down-sampled to 320 x 240 for processing by the system. The capture rate of the camera was not configurable but was sufficient for experimentation purposes and would be significantly faster in a deployed system. The system successfully filtered out small changes in lighting conditions and detected all occurrences of intruders. The system also identified two occurrences where the window blinds were raised and one event where sunlight caused a significant lighting change on the floor (though this could be mitigated with a faster capture rate).
7 Conclusion
In this paper we presented a system to identify intruders in an environment with varying lighting conditions. We discussed several problematic approaches and the ways in which this system addresses their deficiencies, and described the algorithms that make up the system. We detailed an enhancement to the pixel-matching portion of the Real Time Tracking Using Mean Shift algorithm. Finally, we showed experimentally that the system can achieve the goal of intrusion detection.
References
1. Comaniciu, D., Ramesh, V., Meer, P.: Real Time Tracking of Non-Rigid Objects Using Mean Shift. In: IEEE Conference on Computer Vision and Pattern Recognition, June 13-15, vol. 2, pp. 142–149 (2000)
2. Swain, M.J., Ballard, D.H.: Indexing Via Color Histograms. In: Third International Conference on Computer Vision, pp. 390–393 (December 1990)
3. Dony, R.D., Wesolkowski, S.: Edge Detection on Color Images Using RGB Vector Angles. In: Proceedings of the 1999 IEEE Canadian Conference on Electrical and Computer Engineering, May 9-12, 1999, pp. 687–692 (1999)
4. Ivanov, Y., Stauffer, C., Bobick, A., Grimson, W.E.L.: Video Surveillance of Interactions. In: Second IEEE Workshop on Visual Surveillance 1999, June 26, 1999, pp. 82–89 (1999)
5. Green, B.: Canny Edge Detection Tutorial (2002), [Online]. Available: http://www.pages.drexel.edu/~weg22/can_tut.html
6. Drake, M., Hoffmann, H., Rabbah, R., Amarasinghe, S.: MPEG-2 Decoding in a Stream Programming Language. In: 20th International Parallel and Distributed Processing Symposium 2006, April 25-29, 2006, p. 10 (2006)
7. Lakshmi Ratan, A., Grimson, W.E.L.: Training Templates for Scene Classification Using a Few Examples. In: IEEE Workshop on Content-Based Access of Image and Video Libraries, Conference proceedings, June 20, 1997, pp. 90–97 (1997)
8. Bourbakis, N., Andel, R., Hall, A.: Visual Target Tracking from a Sequence of Images. In: Ninth IEEE International Conference on Tools with Artificial Intelligence 1997, Conference proceedings, November 3-8, 1997, pp. 384–391 (1997)
9. Muguira, M.R., Salton, J.R., Norvick, D.K., Schwebach: Chile Identification for Metrics in the Chile Industry. In: 2005 IEEE International Conference on Systems, Man and Cybernetics, October 10-12, 2005, vol. 4, pp. 3118–3123 (2005)
Pose Sampling for Efficient Model-Based Recognition Clark F. Olson University of Washington Bothell, Computing and Software Systems 18115 Campus Way NE, Box 358534, Bothell, WA 98011-8246 http://faculty.washington.edu/cfolson
Abstract. In model-based object recognition and pose estimation, it is common for the set of extracted image features to be much larger than the set of object model features owing to clutter in the image. However, another class of recognition problems has a large model, but only a portion of the object is visible in the image, in which a small set of features can be extracted, most of which are salient. In this case, reducing the effective complexity of the object model is more important than the image clutter. We describe techniques to accomplish this by sampling the space of object positions. A subset of the object model is considered for each sampled pose. This reduces the complexity of the method from cubic to linear in the number of extracted features. We have integrated this technique into a system for recognizing craters on planetary bodies that operates in real-time.
1 Introduction
One of the failings of model-based object recognition is that the combinatorics of feature matching often do not allow efficient algorithms. For three-dimensional object recognition using point features, three feature matches between the model and the image are necessary to determine the object pose. Unless the features are so distinctive that matching is easy, this usually implies a computational complexity that is (at least) cubic in the number of features (see, for example, [1,2,3]). Techniques using complex features [4,5,6], grouping [7,8,9], and virtual points [2] have been able to reduce this complexity in some cases, but no general method exists for such complexity reduction. Indexing can also be used to speed up recognition [10,11,12]. However, under the assumption that each feature set indexes a constant fraction of the database (owing to error and uncertainty), indexing provides a constant speedup, rather than a lower complexity [10,13]. We describe a method that improves the computational complexity for some cases. This method is valid for cases where the object model is large, but only part of it is visible in any image and at least a constant fraction of the features in the image can be expected to arise from the model. An example that is explored in this paper is the recognition of crater patterns on the surface of a planet. The basic idea in this work is to (non-randomly) sample viewpoints of the model such that one of the sampled viewpoints is guaranteed to contain the
model features viewed in any image of the object. We combine this technique with an efficient model-based object recognition algorithm [3]. When the number of samples can be constrained to be linear in the number of model features and the number of salient features in each sample can be bounded, this yields an algorithm with computational complexity that is linear in both the number of image features and the number of model features. Our pose sampling algorithm samples from a three degree-of-freedom space to determine sets of features that might be visible in order to solve full six degree-of-freedom object recognition. We do not need to sample from the full six dimensions, since the rotation around the camera axis does not change the features likely to be visible and out-of-plane rotations can usually be combined with translations in two degrees-of-freedom. Special cases may require sampling from more (or less) complex spaces. The set of samples is determined by propagating the pose covariance matrix (which allows an arbitrarily large search space) into the image space using the partial derivatives of the image coordinates with respect to the pose parameters. We can apply similar ideas to problems where the roles of the image and model are reversed. For example, if a fraction of the model is expected to appear in the image, and the image can be divided into (possibly overlapping) sets of features that can be examined separately to locate the model, then examination of these image sets can reduce the complexity of the recognition process. Section 2 discusses previous research. Section 3 describes the pose sampling idea in more detail. We use this method in conjunction with efficient pose clustering; this combination of techniques is analyzed in Section 4. The methodology is applied to crater matching in Section 5 and the paper is concluded in Section 6.
2 Related Work
Our approach has a similar underlying philosophy to aspect graphs [14,15], where a finite set of qualitatively different views of an object are determined for use in recognizing the object. This work is different in several important ways. We do not attempt to enumerate all of the quantitatively different views. It is sufficient to sample the pose space finely enough that one of the samples has significant overlap with the input image. In addition, we can compute this set of samples (or views) efficiently at run-time, rather than using a precomputed list of the possible aspects. Finally, we concentrate on recognizing objects using discrete features that can be represented as points, rather than line drawings, as is typical in aspect graph methods. Several other view-based object recognition methods have been proposed. Appearance-based methods using object views (for example, [16,17]) and those using linear combinations of views [18] operate under the assumption that an object can be represented using a finite set of views of the object. We use a similar assumption, but explicitly construct a set of views to cover the possible feature sets that could be visible.
Greenspan [19] uses a sampling technique for recognizing objects in range data. In this approach, the samples are taken within the framework of a tree search. The samples consist of locations in the sensed data that are hypothesized to arise from the presence of the object. Branches of the tree are pruned when the hypotheses become infeasible. Peters [20] builds a static structure for view-based recognition using ideas from biological vision. The system learns an object representation from a set of input views. A subset of the views is selected to represent the overall appearance by analyzing which views are similar.
3 Pose Sampling
Our methodology samples from the space of poses of the camera, since each sample corresponds to a reduced set of model features that are visible from that camera position. For each sampled pose, the set of model features most likely to be detected is determined and used in a feature matching process. Not every sample from the pose space will produce correct results. However, we can cover the pose space with samples in such a way that all portions of the model that may be visible are considered in the matching process. Success, thus, should occur during one of the trials, if it would have occurred when considering the complete set of model features at the same time. It is important to note that, even when we are considering a full six degree-of-freedom pose space, we do not need to sample from all six. Rotation around the camera axis will not change the features most likely to be visible in the image. Similarly, out-of-plane rotation and translation cause similar changes in the set of features that are likely to be visible (for moderate rotations). Therefore, unless we are considering large out-of-plane rotations, we can sample from a three-dimensional pose space (translations) to cover the necessary sets of model features to ensure recognition. For most objects, three degrees-of-freedom are sufficient. If large rotations are possible, then we should instead sample the viewing sphere (2 degrees-of-freedom) and the distance from the object. For very large objects (or those for which the distance from the camera is very small), it may not be acceptable to conflate out-of-plane rotation and translation in the sampling. In this case, a five degree-of-freedom space must be sampled. We define a grid for sampling in the translational pose space by considering the transverse motion (x and y in the camera reference frame) separately from the forward motion (z), since forward motion has a very different effect on the image than motion perpendicular to the viewing direction. Knowledge about the camera position is represented by a pose estimate p (combining a translation t for the position and a quaternion q for the orientation) and a covariance matrix C in the camera reference frame. While any bounding volume in the pose space could be used, the covariance representation lends itself well to analysis. It allows an arbitrarily large ellipsoidal search space. While our
pose representation has seven parameters (three for the translation and four for the quaternion), only six are independent. For the z component of our sampling grid, we bound the samples such that a fixed fraction of the variance is enclosed (for example, three standard deviations around the pose estimate). Within this region, samples are selected such that neighboring samples represent a scale change by a fixed value, such as √2. Each sampled z-coordinate (in the camera frame of reference) yields a new position estimate (according to the covariances with this z value) and we are left with a 6 × 6 covariance matrix in the remaining parameters. For each of these distances, we propagate the covariance matrix into the image space by determining a bounding ellipse for the image location of the object point at the center of the image for the input pose estimate. From this ellipse, we can determine the range over which to sample the transverse translations. Let p̂ be the vector [0 p]. This allows us to use quaternion multiplication to rotate the vector. We can convert a point in the global frame of reference into the camera frame using:

p = q p̂ q* + t    (1)

For a camera with focal length f, the image coordinates of a point are:

\begin{bmatrix} i_x \\ i_y \end{bmatrix} = \begin{bmatrix} f p_x / p_z \\ f p_y / p_z \end{bmatrix}    (2)
We now wish to determine how far the covariance matrix allows the location at the center of the image (according to the input pose estimate) to move within a reasonable probability. This variation is then accommodated by appropriate sampling from the camera translations. We can propagate the covariance matrix into the image coordinates using linearization by computing the partial derivatives (Jacobian) of the image coordinates with respect to the pose (Eq. 2). These partial derivatives are given in Eq. (3). The error covariance in the image space is C_i = J C_p J^T, where C_p is the covariance matrix of the remaining six parameters in the camera reference frame.

J^T = \begin{bmatrix}
\partial i_x / \partial t_x & \partial i_y / \partial t_x \\
\partial i_x / \partial t_y & \partial i_y / \partial t_y \\
\partial i_x / \partial q_0 & \partial i_y / \partial q_0 \\
\partial i_x / \partial q_1 & \partial i_y / \partial q_1 \\
\partial i_x / \partial q_2 & \partial i_y / \partial q_2 \\
\partial i_x / \partial q_3 & \partial i_y / \partial q_3
\end{bmatrix}
= 2f \begin{bmatrix}
\frac{1}{2 p_z} & 0 \\
0 & \frac{1}{2 p_z} \\
\frac{p_z(-q_3 p_y + q_2 p_z) - p_x(-q_2 p_x + q_1 p_y)}{p_z^2} & \frac{p_z(q_3 p_x - q_1 p_z) - p_y(-q_2 p_x + q_1 p_y)}{p_z^2} \\
\frac{p_z(q_2 p_y + q_3 p_z) - p_x(q_3 p_x + q_0 p_y - 2 q_1 p_z)}{p_z^2} & \frac{p_z(q_2 p_x - 2 q_1 p_y - q_0 p_z) - p_y(q_3 p_x + q_0 p_y - 2 q_1 p_z)}{p_z^2} \\
\frac{p_z(q_0 p_z + q_1 p_y - 2 q_2 p_x) + p_x(q_0 p_x - q_3 p_y + 2 q_2 p_z)}{p_z^2} & \frac{p_z(q_1 p_x + q_3 p_z) - p_y(-q_0 p_x + q_3 p_y - 2 q_2 p_z)}{p_z^2} \\
\frac{p_z(-2 q_3 p_x - q_0 p_y + q_1 p_z) - p_x(q_1 p_x + q_2 p_y)}{p_z^2} & \frac{p_z(q_0 p_x - 2 q_3 p_y + q_2 p_z) - p_y(-q_1 p_x + q_2 p_y)}{p_z^2}
\end{bmatrix}    (3)
The eigenvalues and eigenvectors of this covariance matrix indicate the shape of the area to sample from, with the eigenvectors being the axes of an ellipse and the square roots of the eigenvalues being the semi-axis lengths. We must now determine the spacing of the samples within these boundaries. Our strategy is to space the samples in a uniform grid aligned with the axes of the bounding ellipse such that the images that would be captured from neighboring samples overlap by 50 percent. This implies that, if the features are evenly distributed across the input image, one of the samples will contain a majority of the image features, even in the worst alignment with the sampling grid.
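To make the propagation and sampling step concrete, the following Python sketch (ours, not code from the paper) projects the 6 × 6 pose covariance into image space and recovers the sampling ellipse from its eigendecomposition; a finite-difference Jacobian stands in for the closed-form expression of Eq. (3), and the projection function and the three-standard-deviation bound are assumptions.

```python
import numpy as np

def image_covariance(project, pose, C_p, eps=1e-6):
    """Propagate a 6x6 pose covariance C_p into 2D image space.
    project(pose) -> (ix, iy); pose = [tx, ty, q0, q1, q2, q3].
    A finite-difference Jacobian stands in for the closed-form Eq. (3)."""
    pose = np.asarray(pose, dtype=float)
    base = np.asarray(project(pose), dtype=float)
    J = np.zeros((2, 6))
    for k in range(6):
        perturbed = pose.copy()
        perturbed[k] += eps
        J[:, k] = (np.asarray(project(perturbed), dtype=float) - base) / eps
    return J @ C_p @ J.T                      # C_i = J C_p J^T  (2x2)

def ellipse_axes(C_i, n_sigma=3.0):
    """Axes of the image-space sampling ellipse: eigenvectors give directions,
    square roots of eigenvalues (scaled by n_sigma) give semi-axis lengths."""
    vals, vecs = np.linalg.eigh(C_i)
    return vecs, n_sigma * np.sqrt(np.maximum(vals, 0.0))
```

Transverse translation samples are then laid out on a grid aligned with the returned eigenvectors and spaced so that neighboring views overlap by roughly 50 percent.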
4 Efficient Pose Clustering
Our pose sampling technique has been combined with an efficient object recognition technique [3]. This method uses random sampling within the set of image features in order to develop a pose clustering algorithm that requires O(mn³) computation time, where m is the number of features in the model and n is the number of features in the image. In previous analysis, it was assumed that some fraction of the model features must appear in the image in order for recognition to succeed. For the type of problem that we consider here, the model is large and the image covers a small portion of it. In addition, the image features are distinctive, with a significant fraction of them arising from the object model. Under these circumstances, the roles of the image and model are reversed in the analysis. We assume that at least some constant fraction of the image features arise from the model in order for recognition to succeed, but that only a small portion of the model may appear in the image. The number of model features that must appear in the image for recognition to succeed is not dependent on the size of the model. Following the analysis of [3], this implies a complexity of O(m³n) rather than O(mn³), since it is the image features that must be sampled, rather than the model features. Overall, O(m²) pairs of model features are sampled and each requires O(mn) time prior to the application of the new pose sampling techniques. The combination of pose sampling with this technique implies that the pose clustering technique must be applied multiple times (once for each of the sampled poses). This still results in improved efficiency, since the number of model features examined for each sampled pose is much smaller and the algorithm is cubic in this number. The key to efficient operation is being able to set an upper bound on the number of model features that are examined for each pose. If this can be achieved, then the complexity for each of the sampled poses is reduced to O(n), since the cubic portion is now limited by a constant. However, most sampled poses will not succeed and we must examine several of them. Since the number of model features that is examined for each pose is constant, we must examine O(m) samples in order to ensure that we have considered the entire model. The overall complexity will therefore be O(mn), if we can bound the number of model
features examined in each pose sample by a constant and if the number of pose samples that are examined is O(m). We ensure that the number of model features examined for each sampled pose is constant by selecting only those that best meet predefined criteria (i.e., those most likely to be present and detected in the image given the sampled pose). Note also that the number of sampled poses in which each model feature is considered does not grow with the size of the model. This, combined with the fact that each sample examines at least a constant number of model features (otherwise it can be discarded), implies that we examine O(m) total samples. To maintain O(mn) efficiency, we must take care in the process by which the model features are selected for each sampled pose. Either the selection must be performed offline or an efficient algorithm for selecting them must be used online. Alternatively, each model feature can be considered for examination online for each sampled pose, but the algorithm becomes O(m² + mn) in this case. In practice, this works well, since the constant on this term is small.
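The bounded per-pose selection can be illustrated with a short sketch (ours; the scoring function is hypothetical): for each sampled pose, the model features are ranked by how likely they are to be visible and detected, and only the K best are passed to pose clustering, which is what keeps the per-sample cost constant.

```python
def features_for_pose(model_features, pose, score, K=40):
    """Select a bounded set of model features for one sampled pose.
    score(feature, pose) should rate how likely the feature is to be visible
    and detected from this pose (e.g., inside the predicted image bounds and
    of a detectable apparent size)."""
    ranked = sorted(model_features, key=lambda f: score(f, pose), reverse=True)
    return ranked[:K]   # constant-size subset => bounded work per sampled pose
```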
5 Crater Matching
We have applied pose sampling to the problem of recognizing a pattern of craters on a planet (or planetoid) as seen by a spacecraft orbiting (or descending to) the planet. In this application, we are able to simplify the problem, since the altitude of the spacecraft is well known from other sensors. This allows us to reduce the number of degrees-of-freedom in the space of poses that must be sampled from three to two. In addition, we have shown that many crater match sets can be eliminated efficiently using radius and orientation information [21]. For each pose that is sampled, we extract a set of the craters in the model that are most likely to be visible from that pose by examining those that are expected to be within the image boundaries and those that are of an appropriate size to be detected in the image. A set with bounded cardinality is extracted by ranking the craters according to these criteria. Our first experiment used a crater model of the Eros asteroid that was extracted from images using a combination of manual and automatic processing at the Jet Propulsion Laboratory. See Fig. 1. Recognition was performed using a set of images collected by the Near Earth Asteroid Rendezvous (NEAR) mission [22]. Three images from this set can be seen in Fig. 1. Craters were first detected in these images using the method of Cheng et al. [23]. Results of the crater detection are shown in Fig. 1 (left column). The extracted craters, the crater model, and an inaccurate pose estimate were then input to the recognition algorithm described in this work. Figure 1 (right column) shows the locations where the visible craters in the model would appear according to the computed pose. The close alignment of the rendered craters with the craters visible in the image indicates that accurate pose estimation is achieved. Our techniques found the same poses as detected in previous work [21] on this data with improved efficiency. With pose sampling, recognition required an average of 0.13 seconds on a Sun BladeTM 100 with a 500 MHz processor, a speedup of 10.2 over the case with no sampling.
Fig. 1. Recognition of crater patterns on the Eros asteroid using images from the Near Earth Asteroid Rendezvous (NEAR) mission. (top) Rendering of a model of the craters on the Eros asteroid. (left) Craters extracted from NEAR images. (right) Recognized pose of crater model. Correctly matched craters are white. Unmatched craters are rendered in black according to the computed pose.
Fig. 2. Crater catalog extracted from Mars Odyssey data. Image courtesy of NASA/JPL/ASU.
Our second experiment examined an image of Mars captured by the THEMIS instrument [24] on the Mars Odyssey Orbiter [25]. The image in Fig. 2 shows a portion of the Mars surface with many craters. Crater detection [23] was applied to this image to create the crater model used in this experiment. Since the images in which recognition was performed for this experiment were resampled from the same image in which the crater detection was performed, these experiments are not satisfying as a measure of the efficacy of the recognition. However, our primary aim here is to demonstrate the improved efficiency of recognition, which these experiments are able to do. Recognition experiments were performed with 280 image samples that cover the image in Fig. 2. For the examples in this set, we limited the number of features to the 10 strongest craters detected in the image and the 40 craters most likely to be visible for each pose. The correct qualitative result was found in each case, indicating that the sampling does not cause us to miss correct results that would be found without sampling. Four examples of the recognition results can be seen in Fig. 3. In addition, the pose sampling techniques resulted in a speedup by a factor of 9.02, with each image requiring 24.8 seconds on average with no input pose estimate. Experiments with the data set validate that the running time increases linearly with the number of features in the object model.
6 Summary
We have examined a new technique to improve the efficiency of model-based recognition for problems where the image covers a fraction of the object model, such as occurs in crater recognition on planetary bodies. Using this technique, we (non-randomly) sample from the space of poses of the object. For each pose, we extract the features that are most likely to be both visible and detected in the image and use these in an object recognition strategy based on pose clustering.
Fig. 3. Recognition examples using Mars Odyssey data. (Correctly matched craters are white. Unmatched craters are rendered in black according to the computed pose.)
When the samples are chosen appropriately, this results in a robust recognition algorithm that is much more efficient than examining all of the model features at once. A similar technique is applicable if the object is a small part of the image and the image can be divided into regions within which the object can appear.
Acknowledgments We gratefully acknowledge funding of this work by the NASA Intelligent Systems Program. For the Eros crater model and the test images used in this work, we thank the Jet Propulsion Laboratory, the NEAR mission team, and the Mars Odyssey mission team.
References 1. Cass, T.A.: Polynomial-time geometric matching for object recognition. International Journal of Computer Vision 21, 37–61 (1997) 2. Huttenlocher, D.P., Ullman, S.: Recognizing solid objects by alignment with an image. International Journal of Computer Vision 5, 195–212 (1990) 3. Olson, C.F.: Efficient pose clustering using a randomized algorithm. International Journal of Computer Vision 23, 131–147 (1997) 4. Bolles, R.C., Cain, R.A.: Recognizing and locating partially visible objects: The local-feature-focus method. International Journal of Robotics Research 1, 57–82 (1982)
5. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60, 91–110 (2004) 6. Thompson, D.W., Mundy, J.L.: Three-dimensional model matching from an unconstrained viewpoint. In: Proceedings of the IEEE Conference on Robotics and Automation, vol. 1, pp. 208–220 (1987) 7. Havaldar, P., Medioni, G., Stein, F.: Perceptual grouping for generic recognition. International Journal of Computer Vision 20, 59–80 (1996) 8. Lowe, D.G.: Three-dimensional object recognition from single two-dimensional images. Artificial Intelligence 31, 355–395 (1987) 9. Olson, C.F.: Improving the generalized Hough transform through imperfect grouping. Image and Vision Computing 16, 627–634 (1998) 10. Clemens, D.T., Jacobs, D.W.: Space and time bounds on indexing 3-d models from 2-d images. IEEE Transactions on Pattern Analysis and Machine Intelligence 13, 1007–1017 (1991) 11. Flynn, P.J.: 3d object recognition using invariant feature indexing of interpretation tables. CVGIP: Image Understanding 55, 119–129 (1992) 12. Lamdan, Y., Schwartz, J.T., Wolfson, H.J.: Affine invariant model-based object recognition. IEEE Transactions on Robotics and Automation 6, 578–589 (1990) 13. Jacobs, D.W.: Matching 3-d models to 2-d images. International Journal of Computer Vision 21, 123–153 (1997) 14. Gigus, Z., Malik, J.: Computing the aspect graph for line drawings of polyhedral objects. IEEE Transactions on Pattern Analysis and Machine Intelligence 12, 113– 122 (1990) 15. Kriegman, D.J., Ponce, J.: Computing exact aspect graphs of curved objects: Solids of revolution. International Journal of Computer Vision 5, 119–135 (1990) 16. Murase, H., Nayar, S.K.: Visual learning and recognition of 3-d objects from appearance. International Journal of Computer Vision 14, 5–24 (1995) 17. Turk, M., Pentland, A.: Eigenfaces for recognition. Journal of Cognitive Neuroscience 3, 71–86 (1991) 18. Ullman, S., Basri, R.: Recognition by linear combinations of models. IEEE Transactions on Pattern Analysis and Machine Intelligence 13, 992–1006 (1991) 19. Greenspan, M.: The sample tree: A sequential hypothesis testing approach to 3D object recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 772–779 (1998) 20. Peters, G.: Efficient pose estimation using view-based object representation. Machine Vision and Applications 16, 59–63 (2004) 21. Olson, C.F.: Pose clustering guided by short interpretation trees. In: Proceedings of the 17th International Conference on Pattern Recognition, vol. 2, pp. 149–152 (2004) 22. http://near.jhuapl.edu/ 23. Cheng, Y., Johnson, A.E., Matthies, L.H., Olson, C.F.: Optical landmark detection and matching for spacecraft navigation. In: Proceedings of the 13th AAS/AIAA Space Flight Mechanics Meeting (2003) 24. Christensen, P.R., Gorelick, N.S., Mehall, G.L., Murray, K.C.: (THEMIS public data releases) Planetary Data System node, Arizona State University, http://themis-data.asu.edu 25. http://mars.jpl.nasa.gov/odyssey/
Video Segmentation for Markerless Motion Capture in Unconstrained Environments Martin Côté1, Pierre Payeur1, and Gilles Comeau2 1
School of Information Technology and Engineering 2 Department of Music University of Ottawa Ottawa, Ontario, Canada, K1N 6N5 {mcote,ppayeur}@site.uottawa.ca,
[email protected]
Abstract. Segmentation is a first and important step in video-based motion capture applications. A lack of constraints can make this process daunting and difficult to achieve. We propose a technique that makes use of an improved JSEG procedure in the context of markerless motion capture for performance evaluation of human beings in unconstrained environments. In the proposed algorithm, a non-parametric clustering of image data is performed in order to produce homogeneous colour-texture regions. The clusters are modified using soft-classifications and allow the J-Value segmentation to deal with smooth colour and lighting transitions. The regions are adapted using an original merging and video stack tracking algorithm.
1 Introduction
Image segmentation is often considered one of the most important low-level vision processes. It has recently been extended from colour images to video sequences with field applications in video encoding and video database indexing [1, 2]. The concept of representing video regions in terms of objects has also been introduced. An analysis of these objects can provide more insight as to the content and the semantics of a video. In this particular case, objects representing individuals could be evaluated to extract information regarding their activities. The provision of quantitative measurements for human performances using a passive vision-based system has a strong appeal for activities in the field of music and sports where performance measurements are based on human perception and experience. This type of application is often referred to as Motion Capture. Recently, there have been significant advances in the field of computer vision techniques. However, none have yet addressed the complex problem faced here without having to impose unreasonable constraints upon musicians or athletes and their environments. Many motion capture techniques using passive sensors still rely on contrasting backgrounds or on assumptions on the motion and complexity of the scene. These impositions yield an environment that is foreign to a performer, potentially compromising the integrity of his actions, leading him to behave differently than he would in a more comfortable environment. The limitations of such techniques may also obfuscate key performance markers through the application of arbitrary data
representations or manipulations. We introduce the concept of unconstrained environments, where a performer and his environment are faced with a minimum of assumptions and requirements allowing him to perform uninhibited. Two categories of segmentation techniques are explored in the application of a Motion Capture system. The first category deals with frames in a sequential manner. This field has been explored thoroughly and has too many varying approaches to list within the scope of this paper. Some of the more popular techniques can be categorized as contour-based, background modeling and region space approaches. In the case of contour-based approaches, techniques are often driven by image gradients in order to produce a delineation of important image components. One of the founding techniques, called active contours, was introduced by Kass et al. [3]. The contours are formed using an energy minimization procedure designed in such a way that its local minima are achieved when the contour corresponds to the boundary of an object. The technique was modified for video objects by Sun et al. [1] using a projective tracking algorithm but is not well suited for large non-rigid movements. In the case of background modeling techniques, one successful algorithm was introduced by Stauffer and Grimson [4]. They proposed the use of a mixture of Gaussian probability models to capture individual pixel behaviours and separate active foreground objects from low-motion background objects. Despite various improvements to this technique [5], in the case of performance evaluation, assumptions on the presence of motion cannot always be made, making the distinction between foreground and background objects complex. Finally, region-based approaches perform an analysis of the data space in order to produce a simplified grouped representation of the data. The union of these regions makes the process of segmentation and tracking much simpler. In the case of watershed algorithms [6, 7, 8], regions are formed by identifying local minima within a frame’s gradient image. More adaptive techniques achieve segmentation using an adaptation of the k-means algorithm [9]. The criterion used for the creation of regions can yield different results depending on the nature of the images processed. The second category of segmentation techniques deals with frames in sets of blocks, called video stacks, and has received an increasing amount of attention. In DeMenthon [10], video stack segmentation uses a modified Mean-Shift approach which is computationally intensive, requiring a hierarchical implementation. The hybrid approach proposed within this paper incorporates a video stack analysis with a sequential frame tracking of segmented video objects. It avoids the high computational and memory cost of volume based analysis by separating the video stream into frame windows. A combination of clustering and spatio-temporal segmentation techniques is performed on the video window in order to extract pervasive homogenous regions. The algorithm builds upon the JSEG approach introduced in [11] and extended by Wang et al. [12].
2 General Approach
The proposed technique is categorized as a region-based motion capture segmentation algorithm and uses colour-texture information to produce homogeneous regions within a set of frames that are then tracked throughout the sequence. The technique is based on Deng and Manjunath’s JSEG implementation [11] with key improvements making
it more appropriate to the context of a performer evaluation considered here. The algorithm is structured as a set of five key processes: clustering, soft-classification, J-value segmentation, merging and tracking. While many of these processes have been addressed in the original JSEG algorithm, this work proposes several improvements and introduces algorithms which have been shown to be more efficient within the harsh environments that we tested in. 2.1 Non-parametric Clustering of Images As a precursor to the actual segmentation, the video stacks must first undergo a clustering process. Originally proposed by Deng et al. [11] was a k-means based approach which assumes that the colours present within a scene follow Gaussian-like statistics. This hypothesis cannot always be guaranteed for complex scenes. Wang et al. [12] also reached this conclusion and modified the approach to use a non-parametric clustering technique called the Fast Adaptive Mean-Shift (FAMS). The FAMS algorithm introduced by Georgescu et al. [13] builds upon the original Mean-Shift technique proposed by Comaniciu et al. [14]. It is used within our approach to cluster colour distributions within a video stack without applying constraints to these distributions. Only the basic concepts of the Mean-Shift property and the FAMS algorithm are conveyed here. Given n data points x_i ∈ R^d, i = 1, …, n, associated with a bandwidth
h_i > 0, the multivariate kernel density estimator at location x is defined as:

\hat{f}_{h,k}(x) = \frac{c_{k,d}}{n h^d} \sum_{i=1}^{n} k\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)    (1)
where k(x) is a function defining the kernel profile and c_{k,d} is a normalization constant. If the derivative of the kernel profile k(x) exists, a density gradient estimator can be obtained from the gradient of the density estimator, yielding the following:
\nabla \hat{f}_{h,k}(x) = \frac{2 c_{k,d}}{n h^{d+2}} \sum_{i=1}^{n} (x_i - x)\, k'\left( \left\| \frac{x - x_i}{h} \right\|^2 \right) = \frac{2 c_{k,d}}{n h^{d+2}} \left[ \sum_{i=1}^{n} k'\left( \left\| \frac{x - x_i}{h} \right\|^2 \right) \right] \left[ \frac{\sum_{i=1}^{n} x_i\, k'\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)}{\sum_{i=1}^{n} k'\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)} - x \right]    (2)
The last term in equation (2) is called the Mean-Shift. This term, by definition, points in the direction of the maximum increase in the density surrounding a point x. By applying the mean-shift property iteratively, we converge on the mode of a given point. By associating the mode of each distribution to the data points converging to it, a non-parametric clustering of the data space is obtained. In [13], several other improvements were brought to the clustering technique. These include the use of adaptive bandwidth sizes and an optimization technique called Locality-Sensitive Hashing (LSH) that aims to speed up the clustering process. This speed-up requires lengthy pre-processing in order to obtain optimal parameters that would yield the best computation time and reduced error. In the implementation
done here, the adaptive bandwidths were omitted and the optimization parameters were manually selected. These omissions did not overly affect the clustering process but did allow for much quicker processing. The end result is an algorithm that achieves a better colour clustering in the presence of smooth colour gradients.
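For reference, the fixed-bandwidth mean-shift iteration underlying FAMS can be sketched as follows (our illustration; it uses a flat, uniform kernel window and omits the adaptive bandwidths and LSH speed-up of the full FAMS algorithm, consistent with the simplifications described above).

```python
import numpy as np

def mean_shift_mode(x, points, h, max_iter=100, tol=1e-3):
    """Iterate the mean-shift step from x until convergence to a mode.
    points: (n, d) array of data samples; h: fixed bandwidth.
    Uses a flat (uniform) kernel derivative, so the shift target is simply
    the mean of the samples falling inside the bandwidth window."""
    x = np.asarray(x, dtype=float)
    points = np.asarray(points, dtype=float)
    for _ in range(max_iter):
        d2 = np.sum((points - x) ** 2, axis=1)
        inside = points[d2 <= h * h]
        if len(inside) == 0:
            break
        new_x = inside.mean(axis=0)          # mean-shift target
        if np.linalg.norm(new_x - x) < tol:
            return new_x
        x = new_x
    return x
```

Pixels whose iterations converge to approximately the same mode are grouped into one colour cluster.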
2.2 Creation of Soft-Classification Maps
In the list of improvements to the original JSEG algorithm, Wang et al. [12] introduced the concept of soft-classification maps. These maps represent a measured membership value that a pixel has to its assigned cluster. These values allow the JSEG algorithm to soften the colour-texture edges between two similar cluster distributions. The classification maps can be created for every pixel using Bayesian probabilities. The cluster distributions in this case are represented using Gaussian statistics in order to compute the corresponding memberships. In this paper, we have opted to compute our classification maps differently. The use of Gaussian statistics to describe the clusters undermines the idea behind the use of the non-parametric FAMS algorithm. Instead, the clusters are represented using 3D normalized histograms of Lu*v* pixel intensities. The values of the histogram bins, when projected back into the image, represent the non-parametric probability, P(I_k | w_i), that pixel I_k belongs to the class w_i. This process is called a histogram back-projection [2] and allows for the creation of soft-classification maps without the need to assume particular distributions on the clusters.
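A minimal sketch of this back-projection step is given below (ours; the bin count and the L*u*v* value ranges are assumptions). It builds a normalized 3D histogram from the pixels assigned to one cluster and reads every pixel's membership back out of that histogram.

```python
import numpy as np

def backproject_membership(luv, labels, cluster_id, bins=16,
                           ranges=((0, 100), (-134, 220), (-140, 122))):
    """Soft-classification map for one cluster via histogram back-projection.
    luv:    (H, W, 3) image in L*u*v* space
    labels: (H, W) cluster label per pixel (e.g., from FAMS)
    Returns an (H, W) map of P(pixel | cluster)."""
    samples = luv[labels == cluster_id].reshape(-1, 3)
    hist, edges = np.histogramdd(samples, bins=bins, range=ranges)
    hist /= max(hist.sum(), 1.0)                   # normalize to a probability
    # map every pixel to its bin indices and read the bin value back out
    idx = [np.clip(np.digitize(luv[..., c], edges[c]) - 1, 0, bins - 1)
           for c in range(3)]
    return hist[idx[0], idx[1], idx[2]]
```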
2.3 J-Value Segmentation
JSEG is a novel segmentation technique that attempts to produce regions out of pixel-labelled images. In this case, the labels are generated by the FAMS process described earlier and represent the distribution assigned to each pixel. The first step in the segmentation is to compute a homogeneity measure of every pixel based on its neighbours. This measurement depicts the local variation in colour classification surrounding a pixel. This value, called the J-value, is presented here following the same notation as adopted by Deng et al. [11] and adapted in order to take into consideration the previously defined soft-classification maps. First, the mean position of clusters is defined as:

m = \frac{1}{N} \sum_{z \in Z} z    (3)

where m is the mean, Z is the set of all N data points within a local region around a pixel, and z = (x, y), z ∈ Z. Assuming that there are a total of C colour clusters, we can define the mean position of a particular cluster i as:

m_i = \frac{\sum_{z \in Z} z \cdot \omega_{z,i}}{\sum_{z \in Z} \omega_{z,i}}, \quad i = 1, \ldots, C    (4)
Here ω_{z,i} is the membership value taken from the soft-classification maps defined in the previous section. The introduction of this term is a modification proposed by [12] to the original JSEG technique and allows the membership values to influence the
mean position of a particular cluster. Finally, the total spatial variance of clusters is defined as:

S_T = \sum_{z \in Z} \| z - m \|^2    (5)

Similarly, the sum of all cluster variances is given as:

S_W = \sum_{i=1}^{C} S_i = \sum_{i=1}^{C} \sum_{z \in Z} \omega_{z,i} \, \| z - m_i \|^2    (6)
The term ω z,i also makes an appearance within the above variance computation and allows the same membership values to play a role within the computation of SW . The J-value of the local region is obtained based on these variances: J = ( ST − SW ) / SW (7) The original paper [11] provides examples on how a particular local cluster distribution would affect the outcome of the J-value. For a local region where clusters are distributed approximately uniformly, the J-value will remain relatively small. Inversely, should the local region consist of segregated clusters; the J-value will increase. The result of an image wide J-value computation is a gradient image corresponding to homogeneous colour-texture edges. The set of points which define the local region on which the J-value is computed is described by a circularly symmetric kernel mask. This mask is applied to every pixel in an image. The kernel size depends on the scale at which J-values are computed. At a larger scale smoother texture edges are detected while at a smaller scale, hard edges are detected. The process is iterative; once regions are determined at a large scale, the regions undergo another JSEG process in order to split them based on a smaller kernel size. Regions are created using a seed growing algorithm that amalgamates nearby pixels having a low J-value. The JSEG algorithm also allows for video segmentation by way of seed tracking. The tracking algorithm presented by Deng et al. [11] requires that all video frames be segmented at once and depends on small motion between frames. This is not practical for very large or lengthy videos; a solution to this is presented within section 2.5. JSEG also defines the term J t in order to measure spatio-temporal homogeneity. This term, computed similarly as its J counterpart, helps to indicate which pixels should be used when determining seed overlap. Only pixels with a good spatiotemporal homogeneity are considered. 2.4 Joint-Criteria Region Merging
2.4 Joint-Criteria Region Merging
Both the JSEG and modified JSEG approaches suffer from over-segmentation. The original JSEG authors proposed a simple merging algorithm that iteratively attempted to merge regions having the closest corresponding histograms. Over-segmentation being a classical problem, it has been explored extensively within other contexts [6, 7, 8]. We adopted an algorithm that uses a joint space merging criterion introduced by Hernandez et al. [6]. This technique relies not only on colour information but also on the edges between two candidates. As such, it prevents the accidental merging of regions with similar colour attributes having a strong edge in between them.
The first step in performing the merge operation is to formulate a Region Adjacency Graph (RAG) [15]. Region labels are represented by graph nodes while their similarities with adjacent regions are represented by edges. Region merges are done iteratively and invoke an update of the RAG. The similarity criterion used is based on both colour homogeneity and edge integrity. Colour homogeneity is defined as a weighted Euclidean distance between the colour means of two adjacent regions. The weight is computed based on region sizes and will favour the merging of smaller regions. The edge integrity criterion is based on the ratio of strong edge pixels and regular edge pixels found along the boundary of two adjacent regions. In order to compute this ratio, a gradient image is first created using Wang’s [8] morphological method. A threshold is found based on the median value of the gradient image. Any pixels found to have a value higher than the threshold are considered strong boundary pixels. The edge criterion will increase in the case where two regions are separated by a prominent edge. To produce a single similarity criterion, both homogeneity and edge integrity criteria must be evaluated. Since their scales are not known, Hernandez et al. [6] suggest using a rank-based procedure where the final similarity is given by:

W = α R_H + (1 − α) R_ε    (8)

Here R_H and R_ε are the respective ranks of the criteria given above for the same two adjacent regions. α is a weight parameter used to impart importance on either of the former criteria.
2.5 Region Tracking
The tracking algorithm developed for this framework combines the strengths of sequential and video stack segmentation to create a hybrid strategy to track regions. The resulting technique is described in the following sections. 2.5.1 Intra-video Stack Tracking The original JSEG algorithm allows for block segmentation with the use of a seed tracking procedure. This tracking however requires that the video be segmented in its entirety by considering all the frames at once in order to be successful and is often not feasible due to memory constraints. We propose to first separate the segmented video in a series of video stacks. The size of the stacks can be manipulated by an operator and depends on the available memory and computing power. The tracking and region determination applied within a video stack is done using the technique proposed by Deng et al. [11] and described in section 2.3. The tracking done in between stacks is described in the next section. 2.5.2 Inter-video Stack Tracking The inter-video stack tracking algorithm proposed is strongly based on region overlaps between two consecutive video frames. This means that the motions exhibited by the objects in the video must be captured with an appropriate frame rate in order to allow regions to have an overlap between frames. The tracking correspondence indicator used within this work stems from the research produced by Withers et al. [16]. In their work, the authors have tried to identify region correspondences between
frames regardless of splitting, merging and non-uniform changes. This tracking methodology lends itself well to the segmentation technique presented here. The criterion used to find a correspondence between regions of two subsequent frames depends highly on distance and pixel overlap. In this case, pixel overlap is defined as the number of pixels one region has in common with another between two frames. Withers et al. [16] define the overlap-ratio, R_{i,j}(t), as the correspondence measure between regions i and j; it is given by the following equation:

R_{i,j}(t) = V_{i,j}(t) / D_{i,j}(t)    (9)

Here the terms D_{i,j}(t) and V_{i,j}(t) are distance and overlap ratios for the intersection of regions i and j. These ratios are defined as the fraction between the distance and overlap of regions i and j, respectively, and those of the region intersection exhibiting the smallest value. Regions that may have undergone a splitting or merging will still have a very large overlap-ratio with their ancestors. By applying a threshold to the overlap-ratio, Eq. (9), final correspondence can be achieved.
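As an illustration (our sketch; it substitutes a simplified largest-overlap-fraction criterion for the normalized distance and overlap ratios of Eq. (9)), region correspondence between two consecutive frames can be decided as follows.

```python
import numpy as np

def region_correspondences(prev_labels, curr_labels, threshold=0.5):
    """Match regions between two consecutive frames by pixel overlap.
    prev_labels, curr_labels: integer region-label images of equal size.
    Returns a dict mapping current-frame region ids to the previous-frame
    region id with which they share the largest overlap fraction."""
    matches = {}
    for r in np.unique(curr_labels):
        mask = curr_labels == r
        overlapped, counts = np.unique(prev_labels[mask], return_counts=True)
        best = np.argmax(counts)
        if counts[best] / mask.sum() >= threshold:
            matches[int(r)] = int(overlapped[best])
    return matches
```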
3 Experimental Results
This section provides a comparison between the various additions proposed in this paper and the steps found in the original JSEG algorithm. It also presents sample results of the final segmentation that can be achieved. Due to the nature of the algorithm and of the improvements, it is difficult to define quantitative evaluation metrics that would apply to such segmentation methods. However, the improvements are easily discernible by comparing the clusters of pixels. All sequences were captured at a resolution of 320 x 240 and 30 fps and depict piano players performing in complex environments. In Figure 1 a clustering comparison between the original k-means and FAMS is shown. The major disadvantage with the k-means algorithm is that it requires extensive parameter tweaking in order to obtain a good clustering. In the first sequence, the k-means algorithm does not produce nearly as many clusters as FAMS, and many of the image details are lost in the over-clustering. In the second sequence, FAMS is better able to distinguish colours from various image components. The piano is described using fewer clusters. In the case of the pianist, a distinction between the left and right arms as well as the torso can be made. These improvements are in part due to FAMS’s ability to account for colour gradients, thus allowing cluster centers to differ from their actual mean. The figure also looks at the quality of the results when parameters are manually selected. FAMS requires more time to complete than its k-means counterpart, but gives better results. If an optimization of the parameter selection is performed, the overall computation time is significantly increased for a negligible difference in results.
798
M. Côté, P. Payeur, and G. Comeau
Fig. 1. K-Means and FAMS Comparison
Fig. 2. Impact of Soft-Classification
the results of two soft-classifications computed using the smallest kernel size with Gaussian distributions [12] and the proposed histogram back-projection. The softclassification of clusters results in a softening of the non-homogeneous colour-texture edges. The use of Gaussian distributions leads to an over attenuation of J-Values, while the more flexible histogram back-projection yields J-Values that do not remove key image details. The attenuation of values ultimately results in fewer seeds being generated for nearby regions having similar colour-texture properties. The results in Figure 3 clearly demonstrate a reduction in superfluous regions caused by colour gradients and lighting effects. The shaded regions are the ones selected by a human operator and are relevant to the motion capture process. In the first video where more clusters were found using FAMS, a better outline of the musician is achieved. In the second video where the number of clusters was approximately the same, the regions form better contours and identify semantic image components clearly. In particular the pianist’s torso, arms and legs can be identified more easily. The change from Gaussian to histogram based soft-classification allowed regions to better keep their distinctive shapes and conform to the semantic video content.
Video Segmentation for Markerless Motion Capture in Unconstrained Environments
799
Fig. 3. Segmentation Comparison
4 Conclusions
In this work, several improvements were achieved over the original JSEG algorithm in order to allow for a non-parametric clustering of natural scenes and to take advantage of soft-classification maps. The algorithm was also extended in order to improve the region merging process and to track key regions throughout a sequence for the purpose of creating a motion capture system. Results have shown the considerable adaptability of the technique without the need to impose constraints on either the target or its environment, thus allowing the technique to be used more efficiently in practical applications.
References 1. Sun, S., Haynor, D.R., Kim, Y.: Semiautomatic Video Object Segmentation Using VSnakes. IEEE Trans. on Circuits and Systems for Video Technology 13(1), 75–82 (2003) 2. Swain, M.J., Ballard, D.H.: Indexing Via Color Histograms. In: Proc. 3rd Intl Conf. on Computer Vision, pp. 390–393 (1990) 3. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: Active Contour Models. Intl Journal of Computer Vision 1(4), 321–331 (1987) 4. Stauffer, C., Grimson, W.E.L.: Adaptive Background Mixture Models for Real-Time Tracking. In: Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 246–252 (1999) 5. Atev, S., Masoud, O., Papanikolopoulos, N.: Practical Mixtures of Gaussians with Brightness Monitoring. In: Proc. 7th IEEE Intl Conf. on Intelligent Transportation Systems, pp. 423–428 (2004) 6. Hernandez, S.E., Barner, K.E.: Joint Region Merging Criteria for Watershed-Based Image Segmentation. In: Proc. Intl Conf. on Image Processing, vol. 2, pp. 108–111 (2000) 7. Tsai, Y.P., Lai, C.-C., Hung, Y.-P., Shih, Z.-C.: A Bayesian Approach to Video Object Segmentation via Merging 3-D Watershed Volumes. IEEE Trans. on Circuits and Systems for Video Technology 15(1), 175–180 (2005)
8. Wang, D.: Unsupervised Video Segmentation Based on Watersheds and Temporal Tracking. IEEE Trans. on Circuits and Systems for Video Technology 8(5), 539–546 (1998) 9. Chen, J., Pappas, T.N., Mojsilovic, A., Rogowitz, B.E.: Adaptive Perceptual ColorTexture Image Segmentation. IEEE Trans. on Image Processing 14(10), 1524–1536 (2005) 10. DeMenthon, D.: Spatio-Temporal Segmentation of Video by Hierarchical Mean Shift Analysis. University of Maryland, Tech. Rep. (2002) 11. Deng, Y., Manjunath, B.S.: Unsupervised Segmentation of Color-Texture Regions in Images and Video. IEEE Trans. on Pattern Analysis and Machine Intelligence 23(8), 800– 810 (2001) 12. Wang, Y., Yang, J., Ningsong, P.: Synergism in Color Image Segmentation. In: Zhang, C., W. Guesgen, H., Yeap, W.-K. (eds.) PRICAI 2004. LNCS (LNAI), vol. 3157, pp. 751– 759. Springer, Heidelberg (2004) 13. Georgescu, B., Shimshoni, I., Meer, P.: Mean Shift Based Clustering in High Dimensions: A Texture Classification Example. In: Proc. IEEE Intl Conf. on Computer Vision, pp. 456–463 (2003) 14. Comaniciu, D., Meer, P.: Robust Analysis of Feature Spaces: Color Image Segmentation. In: Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 750–755 (1997) 15. Haris, K., Efstratiadis, S.N., Maglaveras, N., Katsaggelos, A.K.: Hybrid Image Segmentation using Watershed and Fast Region Merging. IEEE Trans. on Image Processing 7(12), 1684–1699 (1998) 16. Withers, J.A., Robbins, K.A.: Tracking Cell Splits and Merges. In: Proc. IEEE Southwest Symposium on Image Analysis and Interpretation, pp. 117–122 (1996)
Hardware-Accelerated Volume Rendering for Real-Time Medical Data Visualization Rui Shen and Pierre Boulanger Department of Computing Science University of Alberta Edmonton, Alberta, Canada T6G 2E8 {rshen,pierreb}@cs.ualberta.ca
Abstract. Volumetric data rendering has become an important tool in various medical procedures, as it allows the unbiased visualization of fine details of volumetric medical data (CT, MRI, fMRI). However, due to the large amount of computation involved, the rendering time increases dramatically as the size of the data set grows. This paper presents several acceleration techniques for volume rendering using general-purpose GPUs. Some techniques enhance the rendering speed of software ray casting based on voxels’ opacity information, while the others improve traditional hardware-accelerated object-order volume rendering. Remarkable speedups are observed with the proposed GPU-based algorithm in experiments on routine medical data sets.
1 Introduction
Volume rendering deals with how a 3D volume is rendered and projected onto the view plane to form a 2D image. It has been broadly used in medical applications, such as treatment planning [1] and diagnosis [2]. Unlike surface rendering, volume rendering bypasses the intermediate geometric representation and directly renders the volumetric data set based on scalar information such as density and local gradient. This allows radiologists to visualize the fine details of medical data without prior processing such as the visualization of isosurfaces. Transfer functions are commonly employed for color mapping (including opacity mapping) to enhance the visual contrast between different materials. However, due to the large amount of computation involved, the rendering time increases dramatically as the size of the data set grows. Our objective is to provide radiologists with more efficient volume rendering tools for understanding the data produced by medical imaging modalities, such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). Hence, we introduce several techniques, including a hardware implementation using a commercial graphics processing unit (GPU), to enhance the rendering speed. This increased speed will allow radiologists to interactively analyze volumetric medical data in real-time and in stereo. According to [3], volume rendering approaches can be classified into three main categories: object-order, image-order and domain methods. Some hybrid
methods [4] [5] have been proposed in recent years, but their fundamental operations still fall into one of the three categories. The object-order approaches [6] [7] evaluate the final pixel values in a back-to-front or front-to-back fashion, i.e., the scalar values in each voxel are accumulated along the view direction. Such intuitive approaches are simple and fast, but often yield image artifacts due to the discrete selection of projected image pixel(s) [8]. This problem can be solved by using splatting [9], which distributes the contribution of one voxel into a region of image pixels. While resampling in splatting is view-dependent, shear-warp [10] alleviates the complications of resampling for arbitrary perspective views. The input volume composed of image slices is transformed to a sheared object space, where the viewing rays are perpendicular to the slices. The sheared slices are then resampled and composited from front to back to form an intermediate image, which is then warped and resampled to get the final image. Image-order volume rendering [11] [12] is also known as ray casting or ray tracing. The basic idea is that rays are cast from each pixel on the final image into the volume and the pixel values are determined by compositing the scalar values encountered along the rays with some predefined ray function. One typical optimization is early ray termination, which stops tracing a ray when the accumulated opacity along that ray reaches a user-defined threshold. Another common optimization is empty space skipping, which accelerates the traversal of empty voxels. Volume rendering can also be performed in the frequency domain using the Fourier projection-slice theorem [13]. After the volume is transformed from the spatial domain to the frequency domain, a specific 2D slice is selected and transformed back to the spatial domain to generate the final image. All of the previous three categories of methods can be partially or entirely implemented in the GPU for acceleration [14] [15] [16]. Hardware-accelerated texture mapping moves computationally intensive operations from the CPU to the GPU, which dramatically increases the rendering speed. A detailed comparison between the four most popular volume rendering techniques, i.e., ray casting, splatting, shear-warp and 3D texture hardware-based methods, can be found in [17]. Experimental results demonstrate that ray casting and splatting generate the highest quality images at the cost of rendering speed, whereas shear-warp and 3D texture mapping hardware are able to maintain an interactive frame rate at the expense of image quality. When using splatting for volume rendering, it is difficult to determine parameters such as the type and radius of the kernel, and the resolution of the footprint table, to achieve an optimal appearance of the final image [8]. In shear-warp, the memory cost is high since three copies of the volume need to be maintained. The frequency domain methods perform fast rendering, but are limited to orthographic projections and X-ray type rendering [13].
2 Software-Based Accelerated Ray Casting
Software-based ray casting produces high-quality images, but due to the huge amount of calculation, the basic algorithm suffers from poor real-time
performance. To accelerate software ray casting, there are two common acceleration techniques as mentioned in the previous section: empty space skipping and early ray termination. Empty space skipping is achieved via the use of a precomputed min-max octree structure. It can only be performed efficiently when classification is done before interpolation, i.e., when the scalar values in the volume are converted to colors before the volume is resampled. This often produces coarser results than applying interpolation first. If empty space skipping is applied with interpolation prior to classification, one additional table lookup is needed to determine whether there are non-empty voxels in the current region. Nevertheless, the major drawback of this kind of empty space skipping is that every time the transfer functions change, the data structure that encodes the empty regions or the lookup table must be updated. Early ray termination exploits the fact that when a region becomes fully opaque or is of high opacity, the space behind it cannot be seen. Therefore, ray tracing stops at the first sample point where the cumulative opacity is larger than a pre-defined threshold. The rendering speed is often far from satisfactory, even for medium-size data sets (e.g., 256³). To further accelerate rendering, one can use acceleration techniques such as β-acceleration [18]. The fundamental idea of β-acceleration is that as the pixel opacity (the β-distance) along a ray accumulates from front to back, less light travels back to the eye; therefore, fewer ray samples need to be taken without significant change to the final image quality. In other words, the sample interval along each ray becomes larger as the pixel opacity accumulates. Unlike β-acceleration, which depends on a pyramidal organization of volumetric data, here the jittered sample interval is applied directly to the data set. This reduces the computational cost of maintaining an extra data structure, especially when the transfer function changes. Instead of going up one level in the pyramid whenever the remaining pixel opacity is less than a user-defined threshold after a new sample is taken, the sample interval is modified according to a function of the accumulated pixel opacity: s = s × (1.0 + α × f )
(1)
where s denotes the length of the sample interval; α denotes the accumulated opacity; and f is a predefined jittering factor. The initial value of s is set by the user. Normally, the smaller s is, the better the image quality. For every sample point, the remaining opacity γ is compared against a user-specified threshold. If γ is less than the threshold, the current sample interval is adjusted according to Equation 1. We term this acceleration technique β′-acceleration. To further enhance the performance of software ray casting during interaction, the sample interval is automatically enlarged to maintain a high rendering speed, and once interaction stops, the sample interval is set back to normal. When multiple processors are available, the viewport is divided into several regions and each processor handles one region. The whole process is executed in the CPU and main memory. The enhanced algorithm is illustrated in the following pseudo-code. Not only is this software
approach suitable for computers with low-end graphics cards, but since parallel ray tracing is used, it is also suitable for multi-processor computers or clusters.

Accelerated Software-Based Ray Casting
  Break current viewport into N regions of equal size
  Initialize early ray termination threshold Γ
  Initialize jittering start threshold Γ′
  Initialize jittering factor f
  Initialize sample interval s
  For each region
    For every pixel in the current region
      Compute ray entry point, direction, maximum tracing distance D
      While the traced distance d < D and the remaining opacity γ > Γ
        Interpolate at current sample point
        Get opacity value α according to opacity mapping function
        If α ≠ 0
          Compute pixel color according to color mapping function
          γ = γ × (1.0 − α)
        End If
        If γ < Γ′
          s = s × (1.0 + α × f)
        End If
        d = d + s
        Compute next sample position
      End While
    End For
  End For
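The per-ray loop can also be written compactly in C++. The sketch below is an illustration only: the two samplers stand in for trilinear interpolation followed by the opacity and color transfer functions, γ is assumed to be the remaining opacity initialized to 1, and the default thresholds are the values used in Sect. 4.

#include <functional>

struct RGBA { float r = 0, g = 0, b = 0, a = 0; };

// One ray of the accelerated software ray caster: early ray termination plus
// beta'-jittering of the sample interval (Equation 1). The samplers are
// placeholders for the host renderer's interpolation and transfer functions.
RGBA castRay(const std::function<float(float)>& sampleOpacity,
             const std::function<RGBA(float)>& sampleColor,
             float D,                       // maximum tracing distance of the ray
             float s,                       // initial sample interval
             float Gamma = 0.02f,           // early ray termination threshold
             float GammaPrime = 0.6f,       // jittering start threshold
             float f = 0.1f)                // jittering factor
{
    RGBA pixel;
    float gamma = 1.0f;                     // remaining opacity along the ray
    for (float d = 0.0f; d < D && gamma > Gamma; d += s) {
        const float alpha = sampleOpacity(d);
        if (alpha > 0.0f) {
            const RGBA c = sampleColor(d);
            // Front-to-back compositing weighted by the remaining opacity.
            pixel.r += gamma * alpha * c.r;
            pixel.g += gamma * alpha * c.g;
            pixel.b += gamma * alpha * c.b;
            gamma *= (1.0f - alpha);
        }
        if (gamma < GammaPrime)
            s *= (1.0f + alpha * f);        // Equation (1): stretch the interval as opacity builds up
    }
    pixel.a = 1.0f - gamma;
    return pixel;
}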
3 GPU-Based Object-Order Volume Rendering
GPU-based object-order volume rendering has several advantages over GPU-based image-order volume rendering. First, perspective projections can be more easily implemented in object order, since only a proper scaling factor needs to be assigned to each slice based on several viewing parameters. In ray casting, the direction of each ray needs to be determined individually. Second, as pointed out in [19], GPU-based ray casting has the limitation that it can only render volumes that fit in texture memory. Since ray tracing needs to randomly access the whole volume, it is impossible to break the volume into sub-volumes and load each sub-volume only once per frame. Finally, most of the speedup from GPU-based ray casting comes from empty space skipping, and ray casting with only early ray termination shows performance close to object-order volume rendering when both are implemented in the GPU, as compared in [15]. Other implementations generate the proxy polygons that textures are mapped to in the CPU, and use the fragment shader for trilinear interpolation and texture mapping. Little work has been done to exploit the vertex shader in the
Fig. 1. The five intersection cases between a proxy plane and the volume bounding box, and the traversal order of the bounding box edges
hardware-accelerated volume rendering pipeline. Our accelerated rendering algorithm is based on the algorithm proposed by Rezk-Salama and Kolb [20], which balances the workload between the vertex shader and the fragment shader. Based on the observation of different box-plane intersection cases, the generation of proxy polygons can be moved from the CPU to the GPU. The intersection between a proxy plane and the bounding box of the volume may have five different cases, ranging from three intersection points to six, as illustrated in Figure 1. Let n · (x, y, z) = d represent a plane, where n is the normalized plane normal and d is the signed distance between the origin and the plane, and let Vi + λei,j represent the edge Ei,j from vertex Vi to Vj, where ei,j = Vj − Vi. The intersection between the plane and the edge can then be computed by

λi,j = (d − n · Vi)/(n · ei,j) if n · ei,j ≠ 0, and λi,j = −1 otherwise.   (2)

If λi,j ∈ [0, 1], then it is a valid intersection; otherwise, there is no intersection. The edges of the volume bounding box are checked following a specific order, so that the intersection points can be obtained as a sequence that forms a valid polygon. If V0 is the front vertex (the one closest to the viewpoint) and V7 is the back vertex (the one farthest from the viewpoint), then the edges are divided into six groups, as shown in Figure 1 marked with different gray levels and line styles. For a given plane Pl parallel to the viewport that does intersect with the bounding box, there is exactly one intersection point for each of the three groups (solid lines), and at most one intersection point for each of the other
three groups (dotted lines). The six intersection points P0 to P5 are computed as described in Table 1. For the other seven pairs of front and back vertices, the only extra computation is to map each vertex to the corresponding vertex in this case, which can be implemented as a simple lookup table.

Table 1. The computation of the intersection points

Point  Checked Edges           Intersection Position
P0     E0,1, E1,4 and E4,7     λi,j, where (i, j) ∈ {(0,1), (1,4), (4,7)} ∧ λi,j ∈ [0, 1]
P1     E1,5                    λ1,5, if λ1,5 ∈ [0, 1]; P0, otherwise
P2     E0,2, E2,5 and E5,7     λi,j, where (i, j) ∈ {(0,2), (2,5), (5,7)} ∧ λi,j ∈ [0, 1]
P3     E2,6                    λ2,6, if λ2,6 ∈ [0, 1]; P2, otherwise
P4     E0,3, E3,6 and E6,7     λi,j, where (i, j) ∈ {(0,3), (3,6), (6,7)} ∧ λi,j ∈ [0, 1]
P5     E3,4                    λ3,4, if λ3,4 ∈ [0, 1]; P4, otherwise
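A small C++ sketch of Equation (2) and of one group lookup from Table 1 follows; the vertex container, the edge grouping and the fallback handling are written for illustration and do not reproduce the authors' actual vertex program.

#include <array>

struct Vec3 { float x, y, z; };
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Equation (2): parameter of the intersection between the plane n.(x,y,z) = d
// and the edge Vi + lambda * (Vj - Vi); -1 marks "no intersection".
float edgeLambda(const Vec3& n, float d, const Vec3& Vi, const Vec3& Vj)
{
    const Vec3 e = sub(Vj, Vi);
    const float denom = dot(n, e);
    return (denom != 0.0f) ? (d - dot(n, Vi)) / denom : -1.0f;
}

struct EdgeRef { int i, j; };   // indices into the 8 box vertices (V0 = front vertex)

// One row of Table 1: scan the edges of a group and report the first valid lambda.
// When no edge of an optional group is hit, the caller duplicates the previous point (e.g. P1 := P0).
bool groupIntersection(const Vec3& n, float d, const std::array<Vec3, 8>& V,
                       const EdgeRef* edges, int count, float& lambda, EdgeRef& hitEdge)
{
    for (int k = 0; k < count; ++k) {
        const float l = edgeLambda(n, d, V[edges[k].i], V[edges[k].j]);
        if (l >= 0.0f && l <= 1.0f) { lambda = l; hitEdge = edges[k]; return true; }
    }
    return false;
}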
In Rezk-Salama and Kolb’s method, the coordinates of a sample point in the world coordinate system are required to be the same as the coordinates of the corresponding sample point in the texture coordinate system. However, this is not true in most cases, where the sizes of one volume are different in the two coordinate systems. The box-plane intersection test is carried out in the data coordinate system. Since typically the texture coordinates need to be normalized to the range [0, 1], a conversion of the valid intersection points’ coordinates is required. If the point Pk intersects the edge Ei,j at position λi,j, then each coordinate of the resulting texture-space intersection point Pk is obtained by

Pk.p = (Vi.p − min(Bp))/max(Bp) if ei,j.p = 0;  Pk.p = λi,j if ei,j.p > 0;  Pk.p = 1 − λi,j if ei,j.p < 0.   (3)

where p denotes either x, y or z and B denotes the volume bounding box. The coordinates of Pk are then scaled and translated in order to sample near the center of the cubic region formed by eight adjacent voxels in texture memory. To further accelerate the rendering process, we also propose another enhancement in which the sample interval is adjusted based on the size of the volume in the world coordinate system and the distance from the viewpoint to the volume. This idea of an adaptive sample interval is similar to the concept of level-of-detail (LOD) in mesh simplification. The sample interval is calculated by:

s = S × F^(max(d)/max(Bx, By, Bz))   (4)

where S denotes the constant initial sample interval; F (≥ 1) denotes the predefined interval scale factor; Bx, By and Bz denote the length of the volume
bounding box B in the x, y, and z-direction respectively; and max(d) denotes the distance between the farthest vertex of B and the view plane. Now that the proxy polygons are generated, one can then perform texture mapping. The fragment shader performs two texture lookups per fragment to attach textures onto the proxy polygons. The first texture lookup gets the scalar value associated with the sample point from a 3D texture that holds the volumetric data. The hardware does the trilinear interpolation automatically for every sample. The second texture lookup uses the scalar value to get the corresponding color from a 2D texture that encodes the transfer function. Then, the textured polygons are written into the frame buffer from back to front to produce the final image. The vertex program and the fragment program are both written in Cg, a high-level shading language developed by NVIDIA. To exploit the most powerful profile supported by a graphics card, the shader programs are compiled at runtime instead of at compile time. To accommodate graphics cards with different vertex processing capabilities, the amount of work assigned to the vertex shader varies from card to card. The more capable the programmable graphics hardware is, the larger the share of the processing load that is moved from the CPU to the vertex shader. Currently, our vertex program has variations for all the OpenGL vertex program profiles supported by the Cg compiler. The fragment program only requires basic Cg profiles to compile. Therefore, theoretically, the proposed GPU-based volume rendering program can be executed on most commodity computers with a good-quality programmable graphics card.
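Equations (3) and (4) reduce to a few lines of host code; the sketch below is illustrative only, with plain floats standing in for whatever vector types the renderer actually uses.

#include <algorithm>
#include <cmath>

// Equation (4): adaptive slice spacing from the volume extent and the distance to the view plane.
// S is the constant initial interval and F (>= 1) the interval scale factor (F = 1.15 in Sect. 4).
float adaptiveSampleInterval(float S, float F, float Bx, float By, float Bz, float maxDist)
{
    const float maxExtent = std::max(Bx, std::max(By, Bz));
    return S * std::pow(F, maxDist / maxExtent);
}

// Equation (3): one coordinate (p = x, y or z) of an intersection point mapped into texture space.
// Vi_p is the edge start vertex coordinate, e_p the edge direction component, lambda the edge
// parameter, and [minB_p, maxB_p] the bounding-box range along that axis.
float textureCoordinate(float Vi_p, float e_p, float lambda, float minB_p, float maxB_p)
{
    if (e_p == 0.0f) return (Vi_p - minB_p) / maxB_p;
    return (e_p > 0.0f) ? lambda : 1.0f - lambda;
}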
4 Results
The algorithms were tested on a dual-core 2.0 GHz computer running Windows XP with a 256 MB NVIDIA GeForce 7800 GTX graphics card. The data used for testing is a medium-size (512x512x181) CT-scan of the pelvic region. Software-based ray casting provides high quality images, but it can maintain an acceptable rendering speed only with small viewports or for small data sets, even with the proposed β′-acceleration. The rendering times using software ray casting with both early ray termination and β′-acceleration and with only early ray termination are enumerated in the first two columns of Table 2. Figure 2(a) depicts the two cases’ performance curves with respect to the viewport size. The x-axis is the size of the viewport in pixels and the y-axis is the rendering time in seconds. The dark gray line denotes the performance of the method without β′-acceleration, and the other line denotes the performance of the one with β′-acceleration. On average, software ray casting with both early ray termination (Γ = 0.02) and β′-acceleration (Γ′ = 0.6 and f = 0.1) takes 28% less time than that with only early ray termination (Γ = 0.02). The resulting images are shown in Figure 3(a)(b). There is no noticeable difference between these two images. High-quality images and interactive rendering speed are both achieved by exploiting the processing power of the GPU. The rendering times under four
Table 2. The rendering times using different acceleration techniques

Viewport Size   Rendering Time (unit: second)
(unit: pixel)   Without β′   With β′   Tex’ 1.0   Tex’ 1.15   Tex 1.0   Tex 1.15
200x200         0.172        0.125     0.031      0.016       0.015     0.015
300x300         0.422        0.281     0.031      0.016       0.015     0.015
400x400         0.688        0.484     0.031      0.016       0.015     0.015
500x500         1.078        0.750     0.047      0.031       0.031     0.015
600x600         1.641        1.125     0.047      0.031       0.031     0.016
700x700         2.078        1.562     0.062      0.047       0.047     0.031
800x800         2.704        2.187     0.062      0.047       0.047     0.031
900x900         3.391        2.469     0.078      0.062       0.062     0.047
1000x1000       4.312        3.062     0.094      0.078       0.078     0.047

(a) Ray Casting.   (b) Object-Order.
Fig. 2. The comparison of the rendering times using different acceleration techniques
different conditions are enumerated in Table 2. Tex’ 1.0 denotes no acceleration; Tex’ 1.15 denotes adaptive sample interval with interval scale factor F =1.15; Tex 1.0 denotes only with vertex shader acceleration; Tex 1.15 denotes with both acceleration techniques and F =1.15. Figure 2(b) gives a comparison of the performance curves under the four different conditions. In all cases, the rendering time increases as the viewport grows, but even for the 1000x1000 viewport the rendering times are below 0.1 second, i.e., the rendering speeds are above the psycho-physical limit of 10 Hz. With only adaptive sample interval enabled, when F =1.15, we get an average 33% speedup. With only vertex shader acceleration enabled, the algorithm’s performance is almost the same as Tex’ 1.15. With both acceleration techniques enabled, when F =1.15, an average 53% speedup is achieved with respect to the Tex’ 1.0 case and an average 28% speedup is achieved with respect to the Tex’ 1.15 case. The final images are shown in Figure 3(c)-(f), together with the images produced by software ray casting. From these images, no significant difference can be observed between the image quality of image-order methods and that of object-order methods, as long as the original data set is at high resolution.
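As a rough cross-check of the quoted 28% average saving of β′-acceleration over plain early ray termination, the per-viewport savings in the first two columns of Table 2 can simply be averaged; the snippet below is a hypothetical check, not part of the original evaluation.

#include <cstdio>

int main()
{
    // Timings from Table 2 (seconds): early ray termination only vs. with beta'-acceleration.
    const double withoutBeta[] = {0.172, 0.422, 0.688, 1.078, 1.641, 2.078, 2.704, 3.391, 4.312};
    const double withBeta[]    = {0.125, 0.281, 0.484, 0.750, 1.125, 1.562, 2.187, 2.469, 3.062};
    double sum = 0.0;
    for (int i = 0; i < 9; ++i)
        sum += (withoutBeta[i] - withBeta[i]) / withoutBeta[i];
    std::printf("average time saved: %.1f%%\n", 100.0 * sum / 9);   // prints about 28%
    return 0;
}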
(a) Without β′.  (b) With β′.  (c) Tex’ F = 1.0.  (d) Tex’ F = 1.15.  (e) Tex F = 1.0.  (f) Tex F = 1.15.
Fig. 3. Volume rendering results of a CT-scanned pelvic region
5 Conclusion
In this paper, we have presented several volume rendering acceleration techniques for medical data visualization. β′-acceleration enhances the rendering speed of software-based ray casting using voxels’ opacity information, while vertex shader proxy polygon generation and an adaptive sample interval improve the performance of traditional hardware-accelerated object-order volume rendering. Remarkable speedups are observed from experiments on average-size medical data sets. We are now working on incorporating the β′-acceleration into the GPU ray casting pipeline, which may be more efficient than our current GPU-based object-order method. Moreover, we are also exploring more efficient and effective rendering algorithms using GPU clusters to handle larger and larger data sets produced by Doppler MRI and temporal CT.
References

1. Levoy, M., Fuchs, H., Pizer, S.M., Rosenman, J., Chaney, E.L., Sherouse, G.W., Interrante, V., Kiel, J.: Volume rendering in radiation treatment planning. In: Proceedings of the first conference on Visualization in biomedical computing, pp. 22–25 (1990)
2. Hata, N., Wada, T., Chiba, T., Tsutsumi, Y., Okada, Y., Dohi, T.: Three-dimensional volume rendering of fetal MR images for the diagnosis of congenital cystic adenomatoid malformation. Academic Radiology 10, 309–312 (2003)
3. Kaufman, A.E.: Volume visualization. ACM Computing Surveys 28, 165–167 (1996)
4. Hadwiger, M., Berger, C., Hauser, H.: High-quality two-level volume rendering of segmented data sets on consumer graphics hardware. In: VIS 2003: Proceedings of the conference on Visualization 2003, pp. 301–308 (2003)
5. Mora, B., Jessel, J.P., Caubet, R.: A new object-order ray-casting algorithm. In: VIS 2002: Proceedings of the conference on Visualization 2002, pp. 203–210 (2002)
6. Drebin, R.A., Carpenter, L., Hanrahan, P.: Volume rendering. In: SIGGRAPH 1988: Proceedings of the 15th annual conference on Computer graphics and interactive techniques, pp. 65–74 (1988)
7. Upson, C., Keeler, M.: V-buffer: visible volume rendering. In: SIGGRAPH 1988: Proceedings of the 15th annual conference on Computer graphics and interactive techniques, pp. 59–64 (1988)
8. Schroeder, W., Martin, K., Lorensen, B.: The visualization toolkit: an object-oriented approach to 3D graphics, 4th edn. Pearson Education, Inc. (2006)
9. Mueller, K., Shareef, N., Huang, J., Crawfis, R.: High-quality splatting on rectilinear grids with efficient culling of occluded voxels. IEEE Transactions on Visualization and Computer Graphics 5, 116–134 (1999)
10. Lacroute, P., Levoy, M.: Fast volume rendering using a shear-warp factorization of the viewing transformation. In: SIGGRAPH 1994: Proceedings of the 21st annual conference on Computer graphics and interactive techniques, pp. 451–458 (1994)
11. Levoy, M.: Efficient ray tracing of volume data. ACM Transactions on Graphics 9, 245–261 (1990)
12. Yagel, R., Cohen, D., Kaufman, A.: Discrete ray tracing. IEEE Computer Graphics and Applications 12, 19–28 (1992)
13. Entezari, A., Scoggins, R., Möller, T., Machiraju, R.: Shading for Fourier volume rendering. In: VVS 2002: Proceedings of the 2002 IEEE symposium on Volume visualization and graphics, pp. 131–138 (2002)
14. Van Gelder, A., Kim, K.: Direct volume rendering with shading via three-dimensional textures. In: VVS 1996: Proceedings of the 1996 symposium on Volume visualization, pp. 23–30 (1996)
15. Kruger, J., Westermann, R.: Acceleration techniques for GPU-based volume rendering. In: VIS 2003: Proceedings of the 14th IEEE Visualization 2003, pp. 287–292 (2003)
16. Viola, I., Kanitsar, A., Gröller, M.E.: GPU-based frequency domain volume rendering. In: SCCG 2004: Proceedings of the 20th spring conference on Computer graphics, pp. 55–64 (2004)
17. Meißner, M., Huang, J., Bartz, D., Mueller, K., Crawfis, R.: A practical evaluation of popular volume rendering algorithms. In: VVS 2000: Proceedings of the 2000 IEEE symposium on Volume visualization, pp. 81–90 (2000)
18. Danskin, J., Hanrahan, P.: Fast algorithms for volume ray tracing. In: VVS 1992: Proceedings of the 1992 workshop on Volume visualization, pp. 91–98 (1992)
19. Scharsach, H.: Advanced GPU raycasting. In: Proceedings of CESCG 2005, pp. 69–76 (2005)
20. Rezk-Salama, C., Kolb, A.: A vertex program for efficient box-plane intersection. In: Proceedings of the 10th international fall workshop on Vision, modeling and visualization (2005)
Fuzzy Morphology for Edge Detection and Segmentation Atif Bin Mansoor, Ajmal S Mian, Adil Khan, and Shoab A Khan National University of Science and Technology, Tamiz-ud-din Road, Rawalpindi, Pakistan {atif-cae,ajmal-cae}@nust.edu.pk,
[email protected],
[email protected]
Abstract. This paper proposes a new approach for the structure-based separation of image objects using fuzzy morphology. Using set operators in a fuzzy context, we apply adaptive alpha-cut morphological processing for edge detection, image enhancement and segmentation. A Top-hat transform is first applied to the input image and the resulting image is thresholded to a binary form. The image is then thinned using the hit-or-miss transform. Finally, m-connectivity is used to keep the desired number of connected pixels. The output image is overlaid on the original for enhanced boundaries. Experiments were performed using real images of aerial views, sign boards and biological objects. A comparison with other edge enhancement techniques such as unsharp masking, Sobel and Laplacian filtering shows improved performance of the proposed technique.
1 Introduction
Image enhancement refers to improving the visibility and perception of an image for a specific application, such that its various features are improved. An enhanced image provides additional information which is not easily observable in the original image. Image enhancement is directly linked to the application for which the image is acquired, and is one of the most widely researched areas in image processing. Image enhancement is usually followed by feature detection, which can further be used in various applications like object detection, identification, tracking and classification. An effective image enhancement technique can improve the reliability and accuracy of classification. Image enhancement approaches fall into two broad categories, namely spatial domain methods and frequency domain methods [1]. Spatial domain approaches like the Log transform, Power-Law transform, contrast stretching, bit level slicing, histogram processing, and mask filtering like order statistics filters, the Laplacian filter, high-boost filtering and unsharp masking are based on direct manipulation of pixels in an image. Frequency domain processing algorithms like lowpass, highpass and homomorphic filtering are based on manipulation of the frequency contents of the image. Edge enhancement is an effective image enhancement technique. Edges form the periphery of objects in an image, and separate the foreground from the background. Accurate identification and enhancement of edges
improves the image for subsequent applications like object recognition [2], [3], object registration [4], and surface reconstruction from stereo images [5], [6]. Traditional image enhancement techniques have been based mostly on linear approaches. Non-linear approaches are now being investigated for image enhancement. An effective non-linear approach for this purpose is mathematical morphology. The word morphology represents a branch of biology, which deals with the form and structure of animals and plants. Mathematical morphology, in the same context, is a tool for obtaining image structures that are helpful in identifying region shapes, boundaries, skeletons, etc. Mathematical morphology was introduced in the late 1960s to analyze binary images from geological and biomedical data [7], [8] as well as to formalize and extend earlier or parallel work on binary pattern recognition based on cellular automata and Boolean/threshold logic [9], [10]. It was extended to gray level images [8] in the late 1970s. In the mid 1980s, mathematical morphology was brought to the mainstream of image/signal processing and related to other nonlinear filtering approaches [11], [12]. Finally, in the late 1980s and 1990s, it was generalized to arbitrary lattices [13], [14], [15]. The existing mathematical morphology literature is based on crisp set theory. Fuzzy sets, in contrast to crisp sets, can represent and process vague data, handling the concept of partial truth, i.e., values between 1 (completely true) and 0 (completely false). A gray scale image can be termed a fuzzy set in the sense that it is a fuzzy version of a binary image. Similarly, as per Bloch and Maitre [16], for pattern recognition purposes, imprecision and uncertainty can be taken into account by means of fuzzy morphology. In this paper, we propose a new approach for edge detection and image enhancement based upon fuzzy morphology. Our approach gives adaptive control through which edges of various connectivities in the image can be enhanced as required. This is done by defining the detected boundary lengths in terms of the number of connected pixels of the various objects in the image. Briefly, the algorithm proceeds as follows. First, a Top-hat transform is applied to the input image and the resulting image is thresholded to a binary image. Next, thinning of the image is performed using the hit-or-miss transform. Finally, the thinned image is processed using m-connectivity to keep only the desired number of connected pixels. The output image is overlaid on the original for enhancing boundaries. The proposed approach can be used for many applications like segmentation, object recognition and pattern matching. Experiments were performed using real images of aerial views, biological objects and text images. A comparison with other popular approaches like unsharp masking, Sobel and Laplacian filtering is also made.
2 Background

2.1 Morphological Image Processing
Morphology is a mathematical framework for the analysis of spatial structures and is based on set theory. It is a strong tool for performing many image processing tasks. Morphological sets represent important information in the description
of an image. For example, the set of all black pixels in a binary image is a complete morphological description of the image. In binary images, the sets are members of the 2-D integer space Z², where each element of a set is a tuple (2-D vector) whose coordinates are the (x, y) coordinates of a black or white pixel in the image. Gray-scale digital images can be represented as sets whose components are in Z³. In this case, two components of each element of the set refer to the coordinates of a pixel, and the third corresponds to its discrete gray-level value. Sets in higher dimensional spaces can contain other image attributes, such as colour and time varying components [1]. The major part of morphological operations can be defined as a combination of two basic operations, dilation and erosion, and non-morphological operations like the difference, sum, maximum or minimum of two images. Morphological operations also make use of a structuring element M, which can be either a set or a function that corresponds to a neighborhood-function related to the image function g(x) [17]. Further morphological operations and algorithms can be obtained by sequencing the basic operations. In general, a dilation (denoted by ⊕) is every operator that commutes with the supremum operation. On the other hand, an erosion (denoted by ⊖) is every operator that commutes with the infimum operation. There is a homomorphism between the image function g and the set B of all pixels with image function value 1. The structuring element M(x) is a function that assigns a subset of N × N to every pixel of the image function. Then dilation, an increasing transformation, is defined as

B ⊕ M = ∪x∈B M(x)   (1)

and erosion, a decreasing transformation, is defined as

B ⊖ M = {x | M(x) ⊆ B}   (2)

Similarly, opening of set B by structuring element M is defined as

B ◦ M = (B ⊖ M) ⊕ M   (3)

and closing of set B by structuring element M is defined as

B • M = (B ⊕ M) ⊖ M   (4)
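A direct, unoptimized C++ sketch of the four binary operators above follows; the flat boolean grid and the explicit structuring-element origin are illustrative assumptions (for a symmetric element such as the diamond used later, reflection of the element can be ignored).

#include <vector>

struct BinaryImage {
    int w, h;
    std::vector<bool> px;                       // row-major, true = foreground
    bool at(int x, int y) const { return x >= 0 && x < w && y >= 0 && y < h && px[y * w + x]; }
};

// dilate = true  -> Eq. (1): the translated structuring element hits the foreground somewhere.
// dilate = false -> Eq. (2): the translated structuring element fits entirely inside the foreground.
// (mcx, mcy) is the origin of the structuring element M.
BinaryImage morph(const BinaryImage& B, const BinaryImage& M, int mcx, int mcy, bool dilate)
{
    BinaryImage out{B.w, B.h, std::vector<bool>(B.px.size(), false)};
    for (int y = 0; y < B.h; ++y)
        for (int x = 0; x < B.w; ++x) {
            bool acc = !dilate;                 // OR-accumulator for dilation, AND-accumulator for erosion
            for (int v = 0; v < M.h; ++v)
                for (int u = 0; u < M.w; ++u) {
                    if (!M.at(u, v)) continue;
                    const bool hit = B.at(x + u - mcx, y + v - mcy);
                    acc = dilate ? (acc || hit) : (acc && hit);
                }
            out.px[y * B.w + x] = acc;
        }
    return out;
}

// Opening (Eq. 3) and closing (Eq. 4) as compositions of erosion and dilation.
BinaryImage opening(const BinaryImage& B, const BinaryImage& M, int cx, int cy)
{ return morph(morph(B, M, cx, cy, false), M, cx, cy, true); }
BinaryImage closing(const BinaryImage& B, const BinaryImage& M, int cx, int cy)
{ return morph(morph(B, M, cx, cy, true), M, cx, cy, false); }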
Further details about morphological operations like Opening, Closing, the Top-hat transform, the Hit-or-Miss transform, the morphological gradient and further operations based upon the use of second structuring elements can be found in [1], [8], [17].

2.2 Fuzzy Sets
In classical or crisp set theory, the boundaries of a set are precise; thus, membership is determined with complete certainty. An object is either definitely a member of the set or not a member of it. However, in reality most sets and
propositions are not so neatly characterized. For example, concepts such as experience, tallness, richness, brightness, etc. cannot be represented by classical set theory. Fuzzy sets, in turn, are capable of representing imprecise concepts. In fuzzy sets, membership is a matter of degree, i.e., the degree of membership of an object in a fuzzy set expresses the degree of compatibility of the object with the concept represented by the fuzzy set. Each fuzzy set A is defined in terms of a relevant universal set X by a membership function. The membership function assigns to each element x of X a number, A(x), in the closed unit interval [0, 1] that characterizes the degree of membership of x in A. In defining a membership function, the universal set X is always assumed to be a classical set.
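As a concrete illustration (not taken from the paper), a graded concept such as "bright pixel" over the gray-level range 0–255 can be encoded by a simple ramp membership function:

#include <algorithm>

// Membership function A(x) for the fuzzy set "bright pixel": 0 below gray level 100,
// 1 above 200, and a linear ramp in between (the breakpoints are arbitrary choices).
double brightMembership(int grayLevel)
{
    return std::clamp((grayLevel - 100) / 100.0, 0.0, 1.0);
}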
3 Related Work in Fuzzy Image Processing
Fuzzy image processing has three main stages: image fuzzification, modification of membership values, and finally image defuzzification [18]. The fuzzification and defuzzification steps are due to the unavailability of fuzzy hardware. The coding of image data (fuzzification) and decoding of the results (defuzzification) are the steps that make it possible to process images with fuzzy techniques. Rosenfeld [19] first introduced fuzzy dilation and erosion and named them shrinking and expanding, respectively. Later, Kaufmann and Gupta defined the Minkowski addition of two fuzzy sets by means of the α-cuts [20]. They are considered the pioneers of fuzzy morphology. Today, there are many fuzzy constructions with various fuzzy operations available. In [21], Nachtegael et al. describe various uses of soft computing, particularly fuzzy logic based applications. In [22], Popov applied various fuzzy morphological operators to colour images in the YCrCb colour space utilizing a centrally symmetric pyramidal structuring element. Ito and Avianto extracted tissue boundaries from ultrasonograms using fuzzy morphology [23]. Maccarone et al. used fuzzy morphology to restore and retrieve structural properties of astronomical images [24]. Wirth and Nikitenko presented a contrast enhancement algorithm based upon fuzzy morphology [25]. Großert et al. applied fuzzy morphology to detect infected regions of a leaf [26]. Strauss and Comby applied fuzzy morphology to omnidirectional catadioptric images [27]. Bloch and Saffiotti applied fuzzy morphology to deal with imprecise spatial information in autonomous robotics [28]. In our proposed scheme, we employ alpha-cut morphology with a diamond-shaped structuring element. We empirically optimized the structuring element by assigning alpha the value equal to the complement of the most frequent intensity level in the image histogram. Thus, by modifying the structuring mask weights for different images, it was possible to tune the fuzzy morphological operations to image enhancement. α-cut Morphology. α-cuts are used to easily connect fuzzy and crisp sets. Given a fuzzy set A(x), where x is an element of the universe of discourse X, assigning membership degrees from the interval [0, 1] to each element of X,
then for 0 < α < 1, the α-cut of A(x) is the set of all x ∈ X with membership degree at least as large as α: Aα = {x | A(x) ≥ α}
(5)
The union of two fuzzy sets A(x) and B(x) can be defined in terms of α-cut as: (A ∪ B)α = Aα ∪ Bα
(6)
The definition of Minkowski addition of two sets A and B requires the translate τa(X) of a (crisp) set X by a vector a, given as τa(X) = {y | y = x − a, x ∈ X}
(7)
Then, the normal Minkowski addition and subtraction are defined as [17]:

A (+)m B = {a ∈ A | τa(B) ∩ A ≠ ∅}   (8)

A (−)m B = {a ∈ A | τa(B) ⊆ A}   (9)

Kaufmann and Gupta extended these to fuzzy sets by applying the α-cut on both sets, performing the Minkowski operations and recombining them. Thus, the Minkowski addition and subtraction of two fuzzy sets are defined as:

[A(x) (+)m B(x)]α(x) = Aα(x) (+)m Bα(x)   (10)

[A(x) (−)m B(x)]α(x) = Aα(x) (−)m Bα(x)   (11)
These equations are similar to mathematical morphological dilation and erosion, and are taken as the definition of fuzzy dilation and fuzzy erosion.
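Read level by level, Equations (10)-(11) suggest a straightforward α-cut implementation: threshold the image and the structuring element at each α, apply the crisp operation of Equation (8) or (9), and keep, for every pixel, the largest α at which it survives. The grayscale dilation below is a sketch along these lines (erosion is analogous); the discretization into 256 levels and the flat array layout are assumptions.

#include <vector>

// Fuzzy dilation of image A (values in [0,1]) by structuring element Bse via alpha-cuts:
// the output at a pixel is the largest alpha whose crisp alpha-cut dilation covers that pixel.
// At a fixed alpha, the crisp Minkowski addition reduces to the local test in the inner loops.
std::vector<float> fuzzyDilate(const std::vector<float>& A, int w, int h,
                               const std::vector<float>& Bse, int bw, int bh, int bcx, int bcy,
                               int levels = 256)
{
    std::vector<float> out(A.size(), 0.0f);
    for (int l = 1; l <= levels; ++l) {
        const float alpha = static_cast<float>(l) / levels;
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                bool covered = false;
                for (int v = 0; v < bh && !covered; ++v)
                    for (int u = 0; u < bw && !covered; ++u) {
                        const int sx = x + u - bcx, sy = y + v - bcy;
                        if (sx < 0 || sx >= w || sy < 0 || sy >= h) continue;
                        // Both the structuring element and the image must belong to this alpha-cut.
                        covered = (Bse[v * bw + u] >= alpha) && (A[sy * w + sx] >= alpha);
                    }
                if (covered) out[y * w + x] = alpha;   // alpha only grows over the outer loop
            }
    }
    return out;
}

A practical implementation would restrict the loop to the α levels actually present in the image, but the brute-force form above keeps the correspondence with Equation (10) explicit.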
4 Proposed Fuzzy Morphological Filtering
Initially, a Top-hat transformation is applied on the input image using fuzzy dilation and erosion. The Top-hat transformation of an image is defined as

h = B − (B ◦ M)   (12)
where B is the input image and M is the structuring element. The transform results in enhanced detail, and is useful even in the presence of shading in the image. We used an adaptive diamond-shaped structuring element of radius 11 by calculating the super minima. The fuzzification of the structuring element was done through α-cuts. We found empirically that choosing alpha as the complement of the most frequent intensity level in the image histogram is the most appropriate choice for the structuring element. After completing the top-hat filtering, the gray scale image was converted into binary form by selecting the optimum threshold level using Otsu's method [29]. We employed thinning by the hit-or-miss transform for
shape detection. The hit-or-miss transform is a basic morphological tool for shape detection. The hit-or-miss transformation of A by B is denoted A ⊗ B. Here, B is a structuring element pair, B = (B1, B2), rather than a single element. The hit-or-miss transform is defined in terms of these two structuring elements as

A ⊗ B = (A ⊖ B1) ∩ (Ac ⊖ B2)   (13)
where Ac is the complement of A. The transform takes its name from the fact that the output image consists of all locations that match the pixels in B1 (a hit) and that have none of the pixels in B2 (a miss). The thinned set is converted to m-connectivity to eliminate multiple paths, and only connected pixels are kept. The approach is made adaptive by varying the desired number of connected pixels depending upon the ratio of the image to the desired object. The resultant image is overlaid on the original image, offering enhanced boundaries of the objects.
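A self-contained sketch of the hit-or-miss test of Equation (13), with the structuring-element pair (B1, B2) folded into one signed kernel; the iteration over rotated kernels that turns this test into thinning is omitted, and the flat array layout is an assumption.

#include <vector>

// Hit-or-miss transform (Eq. 13) on a binary image: a pixel is kept when every
// foreground probe of B1 lands on foreground and every probe of B2 lands on background.
// Kernel values: +1 = must be foreground (B1), -1 = must be background (B2), 0 = don't care.
std::vector<unsigned char> hitOrMiss(const std::vector<unsigned char>& img, int w, int h,
                                     const std::vector<int>& kernel, int kw, int kh, int cx, int cy)
{
    auto fg = [&](int x, int y) { return x >= 0 && x < w && y >= 0 && y < h && img[y * w + x] != 0; };
    std::vector<unsigned char> out(img.size(), 0);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            bool match = true;
            for (int v = 0; v < kh && match; ++v)
                for (int u = 0; u < kw && match; ++u) {
                    const int k = kernel[v * kw + u];
                    if (k == 0) continue;
                    const bool isFg = fg(x + u - cx, y + v - cy);
                    match = (k > 0) ? isFg : !isFg;
                }
            out[y * w + x] = match ? 1 : 0;
        }
    return out;
}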
5 Experimental Results
For testing the proposed scheme, we chose images from diverse areas such as aerial views, sign boards and biological organisms. Fig. 1-a shows an aerial image of a runway. The image was processed with the proposed fuzzy morphology scheme, resulting in the connected edge map shown in Fig. 1-b. In Fig. 1-c, the detected edges are overlaid on the original input image to show the enhanced image. We can see that the runway is accurately identified and unwanted edges are not detected. In Fig. 2, the proposed approach is applied to an image containing a sign board. The alphabets are correctly segmented by the proposed fuzzy morphology scheme. However, some unavoidable boundaries are also detected. Fig. 3-a shows an input image of a cell. The external shape and some internal details of the cell have been correctly detected using fuzzy morphology (see Fig. 3-b and c). To evaluate the performance of the proposed fuzzy morphology based filtering, we compared our results to some well-known existing filters, namely unsharp masking (Fig. 1-d, Fig. 2-d and Fig. 3-d), Sobel (Fig. 1-e, Fig. 2-e and Fig. 3-e), and Laplacian (Fig. 1-f, Fig. 2-f and Fig. 3-f). In all three cases, we can see that our proposed fuzzy morphology based filter outperforms the existing filters by detecting more meaningful features in the input images.
6 Conclusion
We presented an adaptive image segmentation technique utilising an alpha-cut fuzzy morphology approach to edge detection, image enhancement and segmentation. Better results were achieved than with various existing enhancement approaches like unsharp masking, Sobel and Laplacian filtering. The proposed approach can be applied in various applications related to object identification, geographical surveys, character recognition, etc. In future work, we plan to investigate quantitative measures for calculating the value of alpha.
Fig. 1. (a) Input image of an aerial view of a runway. (b) Connected edges found using fuzzy morphology. (c) Edges overlaid on input image for enhancement. (d) Applying unsharp filter. (e) Applying Sobel filter. (f) Applying Laplacian filter.
Fig. 2. (a) Input image of a sign board. (b) Connected edges (alphabets) detected using fuzzy morphology. (c) Edges overlaid on the input image for enhancement. (d) Applying unsharp filter. (e) Applying Sobel filter. (f) Applying Laplacian filter.
Fig. 3. (a) Input image of a cell. (b) Connected edges detected using fuzzy morphology. (c) Edges overlaid on the original image for enhancement. (d) Applying unsharp filter. (e) Applying Sobel filter. (f) Applying Laplacian filter.
References

1. Gonzalez, W.: Digital Image Processing, 2nd edn. Prentice Hall, Englewood Cliffs (2002)
2. Liu, H.C., Srinath, M.D.: Partial shape classification using contour matching in distance transformation. IEEE Transactions on Pattern Analysis and Machine Intelligence 12, 1072–1079 (1990)
3. Marr, D., Hildreth, E.: Theory of edge detection. Proc. R. Soc. Lond. 207, 187–217 (1980)
4. Brown, L.G.: A survey of image registration techniques. ACM Computing Surveys 24, 352–376 (1992)
5. Hoff, W., Ahuja, N.: Surfaces from stereo: Integrating feature matching, disparity estimation, and contour detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 11, 121–136 (1989)
6. Lengagne, R., Fua, P., Monga, O.: Using crest lines to guide surface reconstruction from stereo. In: IEEE International Conference on Pattern Recognition (1996)
7. Matheron, G.: Random Sets and Integral Geometry. Wiley, New York (1975)
8. Serra, J.: Image Analysis and Mathematical Morphology. Academic Press, London (1982)
9. Rosenfeld, A., Kak, A.C.: Digital Picture Processing. Academic Press, Boston (1982)
10. Preston, D.: Modern Cellular Automata. Plenum Press, New York (1984)
11. Maragos, P., Schafer, R.W.: Morphological filters. Part I: Their set-theoretic analysis and relations to linear shift-invariant filters. Part II: Their relations to median, order-statistic, and stack filters. IEEE Transactions on Pattern Analysis and Machine Intelligence (1987)
12. Maragos, P., Schafer, R.W.: Morphological systems for multidimensional signal processing. In: Trew, R.J. (ed.) Proc. of IEEE, pp. 690–710 (1990)
13. Heijmans, H.: Morphological Image Operators. Academic Press, Boston (1994)
14. Serra: Image Analysis and Mathematical Morphology. Academic Press, Boston (1988)
15. Bovik, A.: Morphological filtering for image enhancement and feature detection. In: Bovik, A. (ed.) Handbook of Image and Video Processing, pp. 135–156 (2005)
16. Bloch, I., Maitre, H.: Fuzzy mathematical morphologies: A comparative study. Pattern Recognition (1995)
17. Soille: Morphological Image Analysis: Principles and Applications. Springer, Berlin (1999)
18. Tizhoosh: Fuzzy Image Processing. Springer, Berlin (1997)
19. Rosenfeld: The fuzzy geometry of image subsets. Pattern Recognition Letters (1984)
20. Kaufmann, G.: Fuzzy Mathematical Models in Engineering and Management Science. Elsevier Science Inc., New York (1988)
21. Nachtegael, Van der Weken, Van De Ville, Kerre (eds.): Fuzzy Filters for Image Processing. Studies in Fuzziness and Soft Computing, vol. 1. Springer, Heidelberg (2004)
22. Popov: Fuzzy mathematical morphology and its applications to colour image processing. WSCG (2007)
23. Ito, A.: Tissue boundary extraction from ultrasonogram by fuzzy morphology processing. In: 20th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 20. IEEE, Los Alamitos (1998)
24. Maccarone, T., Gesu: Fuzzy mathematical morphology to analyse astronomical images. In: International Conference on Pattern Recognition (1992)
25. Wirth, N.: Applications of fuzzy morphology to contrast enhancement. In: Annual Meeting of the North American Fuzzy Information Processing Society (2005)
26. Großert, Köppen, N.: A new approach to fuzzy morphology based on fuzzy integral and its application in image processing. In: ICPR 1996, vol. 2, pp. 625–630 (1996)
27. Strauss, C.: Fuzzy morphology for omnidirectional images. In: IEEE International Conference on Image Processing, vol. 2, pp. 141–144. IEEE, Los Alamitos (2005)
28. Bloch, S.: Why robots should use fuzzy mathematical morphology. In: 1st Int. ICSC-NAISO Congress on Neuro-Fuzzy Technologies (2002)
29. Otsu, N.: A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man and Cybernetics (1979)
Author Index
Abidi, Besma I-476 Abidi, Mongi I-476 Ahmed, Abdelrehim I-531 Akhriev, Albert II-592 Aldea, Emanuel II-307 Alexandre, Lu´ıs A. I-621 Aliroteh, Meisam I-542 Alonso, Mar´ıa C. II-499 Amara, Yacine I-586 Ambardekar, Amol II-318 Andriot, Claude I-734 Antonini, Gianluca I-13 Anwander, Alfred I-341 Arbab-Zavar, Banafshe II-549 Archibald, James I-682 Ardizzone, Edoardo II-265 Arguin, Martin I-509 Arvind, K.R. II-96 Asari, Vijayan K. I-432 Atif, Jamal II-307 Azzabou, Noura I-220 Bab-Hadiashar, Alireza II-75 Bagnato, Luigi I-13 Baishya, Dhruba J. II-721 Bajcsy, Ruzena I-714 Baldwin, Doug I-321 Barry, Mark I-816 Baruffa, Giuseppe I-13 Bascle, B´en´edicte II-621 Bebis, George I-757, II-173 Berger, Matt I-769 Berryman, Darlene II-643 Besbes, Ahmed I-189 Bhatia, Sanjiv K. II-245 Bilodeau, Guillaume-Alexandre II-1 Bimber, Oliver I-363 Bittermann, Michael S. I-137 Bittner, Jiˇr´ı I-106 Bloch, Isabelle II-307 Borzin, Artyom I-442 Boucheron, Laura E. I-208 Boulanger, Pierre II-701, II-801 Boyle, D. I-393
Branch, John William II-701 Breen, David I-554 Burkhardt, Hans I-610 Cermak, Grant II-770 Chang, Chien-Ping II-479 Chapuis, Roland II-631 Charalambos, Jean Pierre I-106 Chausse, Frederic II-631 Chebrolu, Hima I-170, II-643 Chelberg, David I-170 Chen, Jingying I-498 Chen, Tsuhan I-230 Cheng, Fuhua (Frank) I-88 Cheng, Zhi-Quan II-671 Christmas, W. II-86 Chu, Chee-Hung Henry II-349 Chung, Ronald I-268, II-52, II-539 ¨ Ciftcioglu, Ozer I-137 Clausing, Achim II-214 Cohen, Isaac II-328 Collette, Cyrille I-734 Comeau, Gilles II-791 Connolly, Christopher I. II-340 Costen, N.P. II-519 Cˆ ot´e, Martin II-791 da Silva, Paulo Pinheiro II-732 da Silva Torres, Ricardo II-193 Daliri, Mohammad Reza II-234 Dang, Gang II-671 Darbandi, H.B. II-447 de Toledo, Rodrigo I-598 Del Rio, Nicholas II-732 Desbarats, P. II-489 Doemens, Guenter I-521 Donner, Ren´e I-633 Dror, Gideon I-652 Duriez, Christian I-149 Eerola, Tuomas I-403 Erus, G¨ uray I-385 Fahad, Ahmed II-11 Falc˜ ao, Alexandre X. II-193
Faloutsos, Petros I-76 Fan, Fengtao I-88 Fan, Lixin I-672 Fang, H. II-519 Farag, Aly I-531 Fashandi, Homa II-33 Fathy, Mahmoud II-427 Fazel-Rezai, Reza II-33 Figueroa, Pablo II-760 Filipovych, Roman I-662, II-21 Fleury, Gilles I-220 Florez, Camilo II-760 Folleco, Andres II-469 Fouquier, Geoffroy II-307 Fowers, Spencer I-682 Fridman, Alex I-554 Fu, Sheng-Xue II-417 Fujishima, Makoto II-377 Galasso, Fabio I-702 Gattass, Marcelo I-160, I-288 Geng, Yanfeng I-278 Ghys, Charlotte II-621 Gill, Daniel I-652 Glocker, Ben I-189 Göbbels, Gernot I-130 Göbel, Martin I-130 Gong, Xun I-488 González-Matesanz, Francisco J. II-611 Gosselin, Frédéric I-509 Grahn, Håkan II-681 Grefenstette, Gregory II-509 Grest, Daniel I-692 Grisoni, Laurent I-149 Gu, Kai-Da I-298 Gueorguieva, S. II-489
Hamann, Bernd I-331, I-341, I-351 Hammal, Zakia I-509 Han, Bohyung II-162 Han, JungHyun I-66 Han, Lei II-417 Hanbury, Allan I-633 Hanson, Andrew J. I-745, I-804 Hao, Yingming I-278 Harvey, Neal R. I-208 He, Lei I-278 He, Qiang II-349 He, Zifen II-662 Healey, Christopher G. II-711
Hennessey, Dan I-554 Hesami, Reyhaneh II-75 Hinkenjann, André I-130, II-691 Hirano, Takashi II-459 Hirose, Osamu I-310 Hlawitschka, Mario I-331, I-341 Hong, Hyunki I-238 Hosseinnezhad, Reza II-75 Huang, Di II-437 Huang, Ping S. II-479 Huang, Weimin II-128 Hubený, Jan II-571 Hussain, Sajid II-681 Hwang, Yongho I-238 Ikeda, Osamu II-357, II-602 Isler, Veysi I-792 Ito, M.R. II-447 Iurgel, Uri II-367
Jain, Amit II-255 Jain, Ramesh II-162 Jang, Han-Young I-66 Jeong, TaekSang I-66 Jiang, Xiaoyi II-214 Jilk, David II-152 Jin, Shi-Yao II-671 Jin, Yoonjong I-452 Juengling, Ralf II-183 Kaimakis, Paris I-24 Kalkan, Sinan I-692 K¨ alvi¨ ainen, Heikki I-403 Kam, Moshe I-554 Kamarainen, Joni-Kristian I-403 Kandan, R. II-96 Kang, Yousun II-582 Kelley, Richard II-173 Keyzer, Karl II-770 Khan, A. II-86 Khan, Adil II-811 Khan, Shoab A. II-811 Khoshgoftaar, Taghi M. I-248, II-469 Kidono, Kiyosumi II-582 Kimura, Yoshikatsu II-582 King, Christopher I-375, II-173 Kittler, J. II-86 Knoll, Alois I-1 Kn¨ osche, Thomas I-341 Kolano, Paul Z. I-564
Koldas, Gurkan I-792 Kollias, Stefanos D. II-224 Komodakis, Nikos I-189, II-621 Koračin, D. I-393 Koschan, Andreas I-476 Kosugi, Yukio II-459 Kozubek, Michal II-571 Krüger, Norbert I-692 Kummert, Anton II-367 Kuno, Yoshinori II-140 Kuo, C.-C. Jay I-781 Kurillo, Gregorij I-714 Kwak, Youngmin I-238 La Cascia, Marco II-265 Lai, Shuhua I-88 Laloni, Claudio I-521 Lamorey, G. I-393 Lamy, Julien I-199 Langlotz, Tobias I-363 Lasenby, Joan I-24, I-702 Lau, Rynson W.H. I-792 Lavee, Gal I-442 Leathwood, Peter I-13 Lee, Byung-Uk II-751 Lee, Dah-Jye I-682, II-43, II-152 Lee, Jaeman I-238 Lee, Jen-Chun II-479 Lee, Moon-Hyun II-742 Lehner, Burkhard I-351 Leite, Neucimar J. II-193 Leitner, Raimund I-644 Lemerle, Pierre I-734 Lensu, Lasse I-403 Leung, Edmond II-298 Levy, Bruno I-598 Lewin, Sergej II-214 Li, Bao II-671 Li, Baoxin I-258 Li, Liyuan II-128 Li, Xiaokun I-258 Lien, Jyh-Ming I-714 Lietsch, Stefan I-724 Lillywhite, Kirt I-682 Lin, Huei-Yung I-298 Lin, Yi I-56 Lipinski-Kruszka, Joanna I-179 List, Edward II-643 Little, J. II-447 Liu, Jundong I-170, II-643
Liu, Xin U. II-62 Livingston, Adam I-432 Loaiza, Manuel I-160 Lom´enie, Nicolas I-385 Louis, Sushil II-318 Luo, Chuanjiang I-278 Ma, Yunqian II-328 Madden, C. II-116 Maier, Andrea I-13 Malpica, Jos´e A. II-499, II-611 Manjunath, B.S. I-208 Mannuß, Florian I-130 Mansoor, Atif Bin II-811 Mansur, Al II-140 Marquardt, Oliver I-724 Marsault, Xavier I-586 Maˇska, Martin II-571 Matzka, Stephan II-559 McAlpine, J. I-393 McCool, Michael D. I-56 McDermott, Kyle I-757 McDonald, E. I-393 McInerney, Tim I-542 Mena, Juan B. II-611 Meng, Xiangxu I-98 Meunier, Sylvain I-586 Meuter, Mirko II-367 Mian, Ajmal S. II-811 Micaelli, Alain I-734 Miller, Ben II-328 Millet, Christophe II-509 Mitchell, Melanie II-183 Modrow, Daniel I-521 Monekosso, N. I-424 Montoya-Zegarra, Javier A. II-193 Moreno, Plinio I-464 Morris, Tim II-11 Mosabbeb, Ehsan Adeli II-427 Mukundan, Ramakrishnan II-205 M¨ uller, Dennis II-367 M¨ uller-Schneiders, Stefan II-367 Muthuganapathy, Ramanathan II-255 Nagao, Tomoharu I-310, II-287 Nakra, Teresa M. I-414 Naranjo, Michel II-631 Nataneli, Gabriele I-76 Neji, Radhou`ene I-220 Nelson, Brent E. II-43
Newsam, Shawn II-275 Nicolescu, Mircea I-375, II-173, II-318 Nicolescu, Monica I-375, II-173 Ninomiya, Yoshiki II-582 Nixon, Mark S. II-62, II-549 Noh, Sungkyu I-452 Nordberg, Klas II-397 Oh, Jihyun II-742 Okada, Yasuhiro II-459 Olson, Clark F. II-781 Owen, G. Scott I-576 Page, David I-476 Palathingal, Xavier I-375 Panin, Giorgio I-1 Papa, Jo˜ ao P. II-193 Paragios, Nikos I-189, I-220, II-621 Park, Hanhoon I-452, II-742 Park, Jong-Il I-452, II-742 Park, Su-Birm II-367 Park, Sun Hee II-751 Paul, Jean-Claude I-598 Payeur, Pierre II-791 Peng, Jingliang I-781 Petillot, Yvan R. II-559 Pfirrmann, Micheal I-414 Piccardi, M. II-116 Pilz, Florian I-692 Pistorius, Stephen II-33 Pitel, Guillaume II-509 Prieto, Flavio II-701 Proen¸ca, Hugo I-621 Pugeault, Nicolas I-692 Pylv¨ an¨ ainen, Timo I-672 Qi, Meng
I-98
Raftopoulos, Konstantinos A. II-224 Raina, Ravi II-711 Ramakrishnan, A.G. II-96 Ramani, Karthik II-255 Raposo, Alberto I-160, I-288 Raskin, Leonid I-36 Rasmussen, Christopher I-46 Reddy, Nirup Kumar II-96 Remagnino, P. I-424 Ribeiro, Eraldo I-662, II-21 Ribeiro, Pedro Canotilho I-464 Rigoll, Gerhard I-521
Ritov, Ya’acov I-652 Rivlin, Ehud I-36, I-442 Romero, Eduardo I-106 Roth, Thorsten II-691 Rudzsky, Michael I-36, I-442 Sadeghi, Maryam II-427 Sakata, Katsutoshi II-140 Salgian, Andrea I-414 Samal, Ashok II-245 Santos-Victor, Jos´e I-464 Sanz, Mar´ıa A. II-499 Sariyildiz, I. Sevil I-137 Saupin, Guillaume I-149 Sawant, Amit P. II-711 Schaefer, Gerald II-298 Scheuermann, Gerik I-331, I-341 Schoenberger, Robert II-152 Seo, Byung-Kuk II-742 Seow, Ming-Jung I-432 Shao, Te-Chin I-230 Shen, Rui II-801 Shi, Lejiang II-387 Shi, Yan I-692 Shirakawa, Shinichi II-287 Singh, Rahul I-179 Smith, Charles I-170, II-643 Song, Zhan I-268 Sorci, Matteo I-13 St-Onge, Pier-Luc II-1 Su, Xiaoyuan I-248, II-469 Sukumar, Sreenivas I-476 Summers, Ronald M. I-199 Sun, Yanfeng II-407 Sun, Yuqing I-98 Suo, Xiaoyuan I-576 Svoboda, David II-571 Synave, R. II-489 Szumilas, Lech I-633 Tang, Xiangyang II-662 Tang, Yang II-387 Taron, Maxime II-621 Tavakkoli, Alireza II-173, II-318 Teoh, Soon Tee I-118 Teynor, Alexandra I-610 Thakur, Sidharth I-745, I-804 Thiran, Jean-Philippe I-13 Thome, N. II-529 Tian, Xiaodong II-377
Tiddeman, Bernard I-498 Tippetts, Beau I-682 Tittgemeyer, Marc I-341 Torre, Vincent II-234 Trujillo, Noel II-631 Tu, Changhe I-98 Tu, Te-Ming II-479 Tziritas, Georgios I-189 Umlauf, Georg I-351
Vacavant, A. II-529 Vadlamani, Prasanth II-245 Vanzella, Walter II-234 Velastin, S.A. I-424 Vella, Filippo II-265 Veropoulos, K. I-393 Wagner, Gustavo N. I-288 Wallace, Andrew M. II-559 Wang, Be I-781 Wang, Guoyin I-488 Wang, Hong-Qing II-417 Wang, Jun II-407 Wang, Wei II-52, II-539 Wang, Yan-Zhen II-671 Wang, Yiding II-437 Wang, Yunhong II-437 Wang, Zhifang II-387 Weber, Steven I-554 Webster, Michael A. I-757 Wei, Zhao-Yi II-43, II-152 Weinshall, Daphna II-106
Wildenauer, Horst I-633 Wimmer, Michael I-106 Wood, Zo¨e I-816 Wu, Qi I-230 Xie, Shuisheng II-643 Xu, Kai II-671 Xu, L.-Q. I-424 Xu, Roger I-258 Yamazaki, Kazuo II-377 Yang, Chenglei I-98 Yang, Sejung II-751 Yang, Yang II-275 Yao, Yi I-476 Yin, Baocai II-407 Yin, Lijun I-769 Yoneyama, Shogo II-459 Yu, Junqing II-387 Yu, Xinguo II-128 Zamir, Lior II-106 Zhan, B. I-424 Zhang, Hui I-745 Zhang, Xi II-377 Zhang, Yinhui II-662 Zhang, Yunsheng II-662 Zhao, Li-Feng II-417 Zhou, Jin I-258 Zhu, Feng I-278 Zhu, Xingquan II-469 Zhu, Ying I-576, II-652 Zuffi, S. II-116