Lecture Notes in Computer Science Commenced Publication in 1973 Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Alfred Kobsa, University of California, Irvine, CA, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, University of Dortmund, Germany
Madhu Sudan, Massachusetts Institute of Technology, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Gerhard Weikum, Max-Planck Institute of Computer Science, Saarbruecken, Germany
5304
David Forsyth Philip Torr Andrew Zisserman (Eds.)
Computer Vision – ECCV 2008
10th European Conference on Computer Vision
Marseille, France, October 12-18, 2008
Proceedings, Part III
Volume Editors

David Forsyth
University of Illinois at Urbana-Champaign, Computer Science Department
3310 Siebel Hall, Urbana, IL 61801, USA
E-mail: [email protected]

Philip Torr
Oxford Brookes University, Department of Computing
Wheatley, Oxford OX33 1HX, UK
E-mail: [email protected]

Andrew Zisserman
University of Oxford, Department of Engineering Science
Parks Road, Oxford OX1 3PJ, UK
E-mail: [email protected]
Library of Congress Control Number: 2008936989

CR Subject Classification (1998): I.4, I.2.10, I.5.4, I.5, I.7.5

LNCS Sublibrary: SL 6 – Image Processing, Computer Vision, Pattern Recognition, and Graphics

ISSN 0302-9743
ISBN-10 3-540-88689-3 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-88689-1 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. Springer is a part of Springer Science+Business Media springer.com © Springer-Verlag Berlin Heidelberg 2008 Printed in Germany Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper SPIN: 12553624 06/3180 543210
Preface
Welcome to the 2008 European Conference on Computer Vision. These proceedings are the result of a great deal of hard work by many people. To produce them, a total of 871 papers were reviewed. Forty were selected for oral presentation and 203 were selected for poster presentation, yielding acceptance rates of 4.6% for oral, 23.3% for poster, and 27.9% in total.

We applied three principles. First, since we had a strong group of Area Chairs, the final decisions to accept or reject a paper rested with the Area Chair, who would be informed by reviews and could act only in consensus with another Area Chair. Second, we felt that authors were entitled to a summary that explained how the Area Chair reached a decision for a paper. Third, we were very careful to avoid conflicts of interest.

Each paper was assigned to an Area Chair by the Program Chairs, and each Area Chair received a pool of about 25 papers. The Area Chairs then identified and ranked appropriate reviewers for each paper in their pool, and a constrained optimization allocated three reviewers to each paper. We are very proud that every paper received at least three reviews. At this point, authors were able to respond to reviews. The Area Chairs then needed to reach a decision.

We used a series of procedures to ensure careful review and to avoid conflicts of interest. Program Chairs did not submit papers. The Area Chairs were divided into three groups so that no Area Chair in the group was in conflict with any paper assigned to any Area Chair in the group. Each Area Chair had a “buddy” in their group. Before the Area Chairs met, they read papers and reviews, contacted reviewers to get reactions to submissions and occasionally asked for improved or additional reviews, and prepared a rough summary statement for each of the papers in their pool. At the Area Chair meeting, groups met separately so that Area Chairs could reach a consensus with their buddies, and make initial oral/poster decisions. We met jointly so that we could review the rough program, and made final oral/poster decisions in groups. In the separate meetings, there were no conflicts. In the joint meeting, any Area Chairs with conflicts left the room when relevant papers were discussed. Decisions were published on the last day of the Area Chair meeting.

There are three more somber topics to report. First, the Program Chairs had to deal with several double submissions. Referees or Area Chairs identified potential double submissions, we checked to see if these papers met the criteria published in the call for papers, and if they did, we rejected the papers and did not make reviews available. Second, two submissions to ECCV 2008 contained open plagiarism of published works. We will pass details of these attempts to journal editors and conference chairs to make further plagiarism by the responsible parties more difficult. Third, by analysis of server logs we discovered that
there had been a successful attempt to download all submissions shortly after the deadline. We warned all authors that this had happened to ward off dangers to intellectual property rights, and to minimize the chances that an attempt at plagiarism would be successful. We were able to identify the responsible party, discussed this matter with their institutional management, and believe we resolved the issue as well as we could have. Still, it is important to be aware that no security or software system is completely safe, and papers can leak from conference submission.

We felt the review process worked well, and recommend it to the community. The process would not have worked without the efforts of many people. We thank Lyndsey Pickup, who managed the software system, author queries, Area Chair queries and general correspondence (most people associated with the conference will have exchanged e-mails with her at some point). We thank Simon Baker, Ramin Zabih and especially Jiří Matas for their wise advice on how to organize and run these meetings; the process we have described is largely their model from CVPR 2007. We thank Jiří Matas and Dan Večerka for extensive help with, and support of, the software system. We thank C. J. Taylor for the 3-from-5 optimization code. We thank the reviewers for their hard work. We thank the Area Chairs for their very hard work, and for the time and attention each gave to reading papers, reviews and summaries, and writing summaries. We thank the Organization Chairs Peter Sturm and Edmond Boyer, and the General Chair, Jean Ponce, for their help and support and their sharing of the load. Finally, we thank Nathalie Abiola, Nasser Bacha, Jacques Beigbeder, Jerome Bertsch, Joëlle Isnard and Ludovic Ricardou of ENS for administrative support during the Area Chair meeting, and Danièle Herzog and Laetitia Libralato of INRIA Rhône-Alpes for administrative support after the meeting.

August 2008
Andrew Zisserman David Forsyth Philip Torr
Organization
Conference Chair
Jean Ponce, École Normale Supérieure, France

Honorary Chair
Jan Koenderink, EEMCS, Delft University of Technology, The Netherlands

Program Chairs
David Forsyth, University of Illinois, USA
Philip Torr, Oxford Brookes University, UK
Andrew Zisserman, University of Oxford, UK

Organization Chairs
Edmond Boyer, LJK/UJF/INRIA Grenoble–Rhône-Alpes, France
Peter Sturm, INRIA Grenoble–Rhône-Alpes, France
Specialized Chairs
Frédéric Jurie, Workshops, Université de Caen, France
Frédéric Devernay, Demos, INRIA Grenoble–Rhône-Alpes, France
Edmond Boyer, Video Proc., LJK/UJF/INRIA Grenoble–Rhône-Alpes, France
James Crowley, Video Proc., INPG, France
Nikos Paragios, Tutorials, Ecole Centrale, France
Emmanuel Prados, Tutorials, INRIA Grenoble–Rhône-Alpes, France
Christophe Garcia, Industrial Liaison, France Telecom Research, France
Théo Papadopoulo, Industrial Liaison, INRIA Sophia, France
Jiří Matas, Conference Software, CTU Prague, Czech Republic
Dan Večerka, Conference Software, CTU Prague, Czech Republic
Program Chair Support
Lyndsey Pickup, University of Oxford, UK
Administration
Danièle Herzog, INRIA Grenoble–Rhône-Alpes, France
Laetitia Libralato, INRIA Grenoble–Rhône-Alpes, France
Conference Website
Elisabeth Beaujard, INRIA Grenoble–Rhône-Alpes, France
Amaël Delaunoy, INRIA Grenoble–Rhône-Alpes, France
Mauricio Diaz, INRIA Grenoble–Rhône-Alpes, France
Benjamin Petit, INRIA Grenoble–Rhône-Alpes, France

Printed Materials
Ingrid Mattioni, INRIA Grenoble–Rhône-Alpes, France
Vanessa Peregrin, INRIA Grenoble–Rhône-Alpes, France
Isabelle Rey, INRIA Grenoble–Rhône-Alpes, France
Area Chairs
Horst Bischof, Graz University of Technology, Austria
Michael Black, Brown University, USA
Andrew Blake, Microsoft Research Cambridge, UK
Stefan Carlsson, NADA/KTH, Sweden
Tim Cootes, University of Manchester, UK
Alyosha Efros, CMU, USA
Jan-Olof Eklund, KTH, Sweden
Mark Everingham, University of Leeds, UK
Pedro Felzenszwalb, University of Chicago, USA
Richard Hartley, Australian National University, Australia
Martial Hebert, CMU, USA
Aaron Hertzmann, University of Toronto, Canada
Dan Huttenlocher, Cornell University, USA
Michael Isard, Microsoft Research Silicon Valley, USA
Aleš Leonardis, University of Ljubljana, Slovenia
David Lowe, University of British Columbia, Canada
Jiří Matas, CTU Prague, Czech Republic
Joe Mundy, Brown University, USA
David Nistér, Microsoft Live Labs/Microsoft Research, USA
Tomáš Pajdla, CTU Prague, Czech Republic
Patrick Pérez, IRISA/INRIA Rennes, France
Marc Pollefeys, ETH Zürich, Switzerland
Ian Reid, University of Oxford, UK
Cordelia Schmid, INRIA Grenoble–Rhône-Alpes, France
Bernt Schiele, Darmstadt University of Technology, Germany
Christoph Schnörr, University of Mannheim, Germany
Steve Seitz, University of Washington, USA
Richard Szeliski, Microsoft Research, USA
Antonio Torralba, MIT, USA
Bill Triggs, CNRS/Laboratoire Jean Kuntzmann, France
Tinne Tuytelaars, Katholieke Universiteit Leuven, Belgium
Luc Van Gool, Katholieke Universiteit Leuven, Belgium
Yair Weiss, The Hebrew University of Jerusalem, Israel
Chris Williams, University of Edinburgh, UK
Ramin Zabih, Cornell University, USA
Conference Board
Horst Bischof, Graz University of Technology, Austria
Hans Burkhardt, University of Freiburg, Germany
Bernard Buxton, University College London, UK
Roberto Cipolla, University of Cambridge, UK
Jan-Olof Eklundh, Royal Institute of Technology, Sweden
Olivier Faugeras, INRIA, Sophia Antipolis, France
Anders Heyden, Lund University, Sweden
Aleš Leonardis, University of Ljubljana, Slovenia
Bernd Neumann, University of Hamburg, Germany
Mads Nielsen, IT University of Copenhagen, Denmark
Tomáš Pajdla, CTU Prague, Czech Republic
Giulio Sandini, University of Genoa, Italy
David Vernon, Trinity College, Ireland
Program Committee Sameer Agarwal Aseem Agarwala Jörgen Ahlberg Narendra Ahuja Yiannis Aloimonos Tal Arbel Kalle Åström Peter Auer Jonas August Shai Avidan Simon Baker Kobus Barnard Adrien Bartoli Benedicte Bascle Csaba Beleznai Peter Belhumeur Serge Belongie Moshe Ben-Ezra Alexander Berg
Tamara Berg James Bergen Marcelo Bertalmio Bir Bhanu Stan Bileschi Stan Birchfield Volker Blanz Aaron Bobick Endre Boros Terrance Boult Richard Bowden Edmond Boyer Yuri Boykov Gary Bradski Chris Bregler Thomas Breuel Gabriel Brostow Matthew Brown Michael Brown
Thomas Brox Andrés Bruhn Antoni Buades Joachim Buhmann Hans Burkhardt Andrew Calway Rodrigo Carceroni Gustavo Carneiro M. Carreira-Perpinan Tat-Jen Cham Rama Chellappa German Cheung Ondřej Chum James Clark Isaac Cohen Laurent Cohen Michael Cohen Robert Collins Dorin Comaniciu
James Coughlan David Crandall Daniel Cremers Antonio Criminisi David Cristinacce Gabriela Csurka Navneet Dalal Kristin Dana Kostas Daniilidis Larry Davis Andrew Davison Nando de Freitas Daniel DeMenthon David Demirdjian Joachim Denzler Michel Dhome Sven Dickinson Gianfranco Doretto Gyuri Dorko Pinar Duygulu Sahin Charles Dyer James Elder Irfan Essa Andras Ferencz Rob Fergus Vittorio Ferrari Sanja Fidler Mario Figueiredo Graham Finlayson Robert Fisher François Fleuret Wolfgang Förstner Charless Fowlkes Jan-Michael Frahm Friedrich Fraundorfer Bill Freeman Brendan Frey Andrea Frome Pascal Fua Yasutaka Furukawa Daniel Gatica-Perez Dariu Gavrila James Gee Guido Gerig Theo Gevers
Christopher Geyer Michael Goesele Dan Goldman Shaogang Gong Leo Grady Kristen Grauman Eric Grimson Fred Hamprecht Edwin Hancock Allen Hanson James Hays Carlos Hernández Anders Heyden Adrian Hilton David Hogg Derek Hoiem Alex Holub Anthony Hoogs Daniel Huber Alexander Ihler Michal Irani Hiroshi Ishikawa David Jacobs Bernd Jähne Hervé Jégou Ian Jermyn Nebojsa Jojic Michael Jones Frédéric Jurie Timor Kadir Fredrik Kahl Amit Kale Kenichi Kanatani Sing Bing Kang Robert Kaucic Qifa Ke Renaud Keriven Charles Kervrann Ron Kikinis Benjamin Kimia Ron Kimmel Josef Kittler Hedvig Kjellström Leif Kobbelt Pushmeet Kohli
Esther Koller-Meier Vladimir Kolmogorov Nikos Komodakis Kurt Konolige Jana Košecká Zuzana Kukelova Sanjiv Kumar Kyros Kutulakos Ivan Laptev Longin Jan Latecki Svetlana Lazebnik Erik Learned-Miller Yann Lecun Bastian Leibe Vincent Lepetit Thomas Leung Anat Levin Fei-Fei Li Hongdong Li Stephen Lin Jim Little Ce Liu Yanxi Liu Brian Lovell Simon Lucey John Maccormick Petros Maragos Aleix Martinez Iain Matthews Wojciech Matusik Bruce Maxwell Stephen Maybank Stephen McKenna Peter Meer Etienne Mémin Dimitris Metaxas Branislav Mičušík Krystian Mikolajczyk Anurag Mittal Theo Moons Greg Mori Pawan Mudigonda David Murray Srinivasa Narasimhan Randal Nelson
Ram Nevatia Jean-Marc Odobez Björn Ommer Nikos Paragios Vladimir Pavlovic Shmuel Peleg Marcello Pelillo Pietro Perona Maria Petrou Vladimir Petrovic Jonathon Phillips Matti Pietikäinen Axel Pinz Robert Pless Tom Pock Fatih Porikli Simon Prince Long Quan Ravi Ramamoorthi Deva Ramanan Anand Rangarajan Ramesh Raskar Xiaofeng Ren Jens Rittscher Rómer Rosales Bodo Rosenhahn Peter Roth Stefan Roth Volker Roth Carsten Rother Fred Rothganger Daniel Rueckert Dimitris Samaras
Radim Šára Eric Saund Silvio Savarese Daniel Scharstein Yoav Schechner Konrad Schindler Stan Sclaroff Mubarak Shah Gregory Shakhnarovich Eli Shechtman Jianbo Shi Kaleem Siddiqi Leonid Sigal Sudipta Sinha Josef Sivic Cristian Sminchișescu Anuj Srivastava Drew Steedly Gideon Stein Björn Stenger Christoph Strecha Erik Sudderth Josephine Sullivan David Suter Tomáš Svoboda Hai Tao Marshall Tappen Demetri Terzopoulos Carlo Tomasi Fernando Torre Lorenzo Torresani Emanuele Trucco David Tschumperlé
John Tsotsos Peter Tu Matthew Turk Oncel Tuzel Carole Twining Ranjith Unnikrishnan Raquel Urtasun Joost Van de Weijer Manik Varma Nuno Vasconcelos Olga Veksler Jakob Verbeek Luminita Vese Thomas Vetter René Vidal George Vogiatzis Daphna Weinshall Michael Werman Tomáš Werner Richard Wildes Lior Wolf Ying Wu Eric Xing Yaser Yacoob Ruigang Yang Stella Yu Lihi Zelnik-Manor Richard Zemel Li Zhang S. Zhou Song-Chun Zhu Todd Zickler Lawrence Zitnick
Additional Reviewers Lourdes Agapito Daniel Alexander Elli Angelopoulou Alexandru Balan Adrian Barbu Nick Barnes João Barreto Marian Bartlett Herbert Bay
Ross Beveridge V. Bhagavatula Edwin Bonilla Aeron Buchanan Michael Burl Tiberio Caetano Octavia Camps Sharat Chandran François Chaumette
Yixin Chen Dmitry Chetverikov Sharat Chikkerur Albert Chung Nicholas Costen Gabriela Oana Cula Goksel Dedeoglu Hervé Delingette Michael Donoser
Mark Drew Zoran Duric Wolfgang Einhauser Aly Farag Beat Fasel Raanan Fattal Paolo Favaro Rogerio Feris Cornelia Fermüller James Ferryman David Forsyth Jean-Sébastien Franco Mario Fritz Andrea Fusiello Meirav Galun Bogdan Georgescu A. Georghiades Georgy Gimel’farb Roland Goecke Toon Goedeme Jacob Goldberger Luis Goncalves Venu Govindaraju Helmut Grabner Michael Grabner Hayit Greenspan Etienne Grossmann Richard Harvey Sam Hasinoff Horst Haussecker Jesse Hoey Slobodan Ilic Omar Javed Qiang Ji Jiaya Jia Hailin Jin Ioannis Kakadiaris Joni-K. Kämäräinen George Kamberov Yan Ke Andreas Klaus Georg Klein Reinhard Koch Mathias Kolsch Andreas Koschan Christoph Lampert
Mike Langer Georg Langs Neil Lawrence Sang Lee Boudewijn Lelieveldt Marc Levoy Michael Lindenbaum Chengjun Liu Qingshan Liu Manolis Lourakis Ameesh Makadia Ezio Malis R. Manmatha David Martin Daniel Martinec Yasuyuki Matsushita Helmut Mayer Christopher Mei Paulo Mendonça Majid Mirmehdi Philippos Mordohai Pierre Moreels P.J. Narayanan Nassir Navab Jan Neumann Juan Carlos Niebles Ko Nishino Thomas O’Donnell Takayuki Okatani Kenji Okuma Margarita Osadchy Mustafa Ozuysal Sharath Pankanti Sylvain Paris James Philbin Jean-Philippe Pons Emmanuel Prados Zhen Qian Ariadna Quattoni Ali Rahimi Ashish Raj Visvanathan Ramesh Christopher Rasmussen Tammy Riklin-Raviv Charles Rosenberg Arun Ross
Michael Ross Szymon Rusinkiewicz Bryan Russell Sudeep Sarkar Yoichi Sato Ashutosh Saxena Florian Schroff Stephen Se Nicu Sebe Hans-Peter Seidel Steve Seitz Thomas Serre Alexander Shekhovtsov Ilan Shimshoni Michal Sofka Jan Solem Gerald Sommer Jian Sun Rahul Swaminathan Hugues Talbot Chi-Keung Tang Xiaoou Tang C.J. Taylor Jean-Philippe Thiran David Tolliver Yanghai Tsin Zhuowen Tu Vaibhav Vaish Anton van den Hengel Bram Van Ginneken Dirk Vandermeulen Alessandro Verri Hongcheng Wang Jue Wang Yizhou Wang Gregory Welch Ming-Hsuan Yang Caspi Yaron Jieping Ye Alper Yilmaz Christopher Zach Hongyuan Zha Cha Zhang Jerry Zhu Lilla Zollei
Sponsoring Institutions
Table of Contents – Part III
Matching

3D Non-rigid Surface Matching and Registration Based on Holomorphic Differentials . . . 1
Wei Zeng, Yun Zeng, Yang Wang, Xiaotian Yin, Xianfeng Gu, and Dimitris Samaras

Learning Two-View Stereo Matching . . . 15
Jianxiong Xiao, Jingni Chen, Dit-Yan Yeung, and Long Quan

SIFT Flow: Dense Correspondence across Different Scenes . . . 28
Ce Liu, Jenny Yuen, Antonio Torralba, Josef Sivic, and William T. Freeman

Learning+Features

Discriminative Sparse Image Models for Class-Specific Edge Detection and Image Interpretation . . . 43
Julien Mairal, Marius Leordeanu, Francis Bach, Martial Hebert, and Jean Ponce

Non-local Regularization of Inverse Problems . . . 57
Gabriel Peyré, Sébastien Bougleux, and Laurent Cohen

Training Hierarchical Feed-Forward Visual Recognition Models Using Transfer Learning from Pseudo-Tasks . . . 69
Amr Ahmed, Kai Yu, Wei Xu, Yihong Gong, and Eric Xing

Learning Optical Flow . . . 83
Deqing Sun, Stefan Roth, J.P. Lewis, and Michael J. Black

Poster Session III

Optimizing Binary MRFs with Higher Order Cliques . . . 98
Asem M. Ali, Aly A. Farag, and Georgy L. Gimel’farb

Multi-camera Tracking and Atypical Motion Detection with Behavioral Maps . . . 112
Jérôme Berclaz, François Fleuret, and Pascal Fua

Automatic Image Colorization Via Multimodal Predictions . . . 126
Guillaume Charpiat, Matthias Hofmann, and Bernhard Schölkopf
CSDD Features: Center-Surround Distribution Distance for Feature Extraction and Matching . . . 140
Robert T. Collins and Weina Ge

Detecting Carried Objects in Short Video Sequences . . . 154
Dima Damen and David Hogg

Constrained Maximum Likelihood Learning of Bayesian Networks for Facial Action Recognition . . . 168
Cassio P. de Campos, Yan Tong, and Qiang Ji

Robust Scale Estimation from Ensemble Inlier Sets for Random Sample Consensus Methods . . . 182
Lixin Fan and Timo Pylvänäinen

Efficient Camera Smoothing in Sequential Structure-from-Motion Using Approximate Cross-Validation . . . 196
Michela Farenzena, Adrien Bartoli, and Youcef Mezouar

Semi-automatic Motion Segmentation with Motion Layer Mosaics . . . 210
Matthieu Fradet, Patrick Pérez, and Philippe Robert

Unified Frequency Domain Analysis of Lightfield Cameras . . . 224
Todor Georgiev, Chintan Intwala, Sevkit Babakan, and Andrew Lumsdaine

Segmenting Fiber Bundles in Diffusion Tensor Images . . . 238
Alvina Goh and René Vidal

View Point Tracking of Rigid Objects Based on Shape Sub-manifolds . . . 251
Christian Gosch, Ketut Fundana, Anders Heyden, and Christoph Schnörr

Generative Image Segmentation Using Random Walks with Restart . . . 264
Tae Hoon Kim, Kyoung Mu Lee, and Sang Uk Lee

Background Subtraction on Distributions . . . 276
Teresa Ko, Stefano Soatto, and Deborah Estrin

A Statistical Confidence Measure for Optical Flows . . . 290
Claudia Kondermann, Rudolf Mester, and Christoph Garbe

Automatic Generator of Minimal Problem Solvers . . . 302
Zuzana Kukelova, Martin Bujnak, and Tomas Pajdla

A New Baseline for Image Annotation . . . 316
Ameesh Makadia, Vladimir Pavlovic, and Sanjiv Kumar

Behind the Depth Uncertainty: Resolving Ordinal Depth in SFM . . . 330
Shimiao Li and Loong-Fah Cheong
Sparse Long-Range Random Field and Its Application to Image Denoising . . . 344
Yunpeng Li and Daniel P. Huttenlocher
Output Regularized Metric Learning with Side Information . . . 358
Wei Liu, Steven C.H. Hoi, and Jianzhuang Liu

Student-t Mixture Filter for Robust, Real-Time Visual Tracking . . . 372
James Loxam and Tom Drummond

Photo and Video Quality Evaluation: Focusing on the Subject . . . 386
Yiwen Luo and Xiaoou Tang

The Bi-directional Framework for Unifying Parametric Image Alignment Approaches . . . 400
Rémi Mégret, Jean-Baptiste Authesserre, and Yannick Berthoumieu

Direct Bundle Estimation for Recovery of Shape, Reflectance Property and Light Position . . . 412
Tsuyoshi Migita, Shinsuke Ogino, and Takeshi Shakunaga

A Probabilistic Cascade of Detectors for Individual Object Recognition . . . 426
Pierre Moreels and Pietro Perona

Scale-Dependent/Invariant Local 3D Shape Descriptors for Fully Automatic Registration of Multiple Sets of Range Images . . . 440
John Novatnack and Ko Nishino

Star Shape Prior for Graph-Cut Image Segmentation . . . 454
Olga Veksler

Efficient NCC-Based Image Matching in Walsh-Hadamard Domain . . . 468
Wei-Hau Pan, Shou-Der Wei, and Shang-Hong Lai

Object Recognition by Integrating Multiple Image Segmentations . . . 481
Caroline Pantofaru, Cordelia Schmid, and Martial Hebert

A Linear Time Histogram Metric for Improved SIFT Matching . . . 495
Ofir Pele and Michael Werman

An Extended Phase Field Higher-Order Active Contour Model for Networks and Its Application to Road Network Extraction from VHR Satellite Images . . . 509
Ting Peng, Ian H. Jermyn, Véronique Prinet, and Josiane Zerubia

A Generic Neighbourhood Filtering Framework for Matrix Fields . . . 521
Luis Pizarro, Bernhard Burgeth, Stephan Didas, and Joachim Weickert
Multi-scale Improves Boundary Detection in Natural Images . . . 533
Xiaofeng Ren

Estimating 3D Trajectories of Periodic Motions from Stationary Monocular Views . . . 546
Evan Ribnick and Nikolaos Papanikolopoulos

Unsupervised Learning of Skeletons from Motion . . . 560
David A. Ross, Daniel Tarlow, and Richard S. Zemel

Multi-layered Decomposition of Recurrent Scenes . . . 574
David Russell and Shaogang Gong

SERBoost: Semi-supervised Boosting with Expectation Regularization . . . 588
Amir Saffari, Helmut Grabner, and Horst Bischof

View Synthesis for Recognizing Unseen Poses of Object Classes . . . 602
Silvio Savarese and Li Fei-Fei

Projected Texture for Object Classification . . . 616
Avinash Sharma and Anoop Namboodiri

Prior-Based Piecewise-Smooth Segmentation by Template Competitive Deformation Using Partitions of Unity . . . 628
Oudom Somphone, Benoit Mory, Sherif Makram-Ebeid, and Laurent Cohen

Vision-Based Multiple Interacting Targets Tracking Via On-Line Supervised Learning . . . 642
Xuan Song, Jinshi Cui, Hongbin Zha, and Huijing Zhao

An Incremental Learning Method for Unconstrained Gaze Estimation . . . 656
Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, and Hideki Koike

Partial Difference Equations over Graphs: Morphological Processing of Arbitrary Discrete Data . . . 668
Vinh-Thong Ta, Abderrahim Elmoataz, and Olivier Lézoray

Real-Time Shape Analysis of a Human Body in Clothing Using Time-Series Part-Labeled Volumes . . . 681
Norimichi Ukita, Ryosuke Tsuji, and Masatsugu Kidode

Kernel Codebooks for Scene Categorization . . . 696
Jan C. van Gemert, Jan-Mark Geusebroek, Cor J. Veenman, and Arnold W.M. Smeulders

Multiple Tree Models for Occlusion and Spatial Constraints in Human Pose Estimation . . . 710
Yang Wang and Greg Mori
Structuring Visual Words in 3D for Arbitrary-View Object Localization . . . 725
Jianxiong Xiao, Jingni Chen, Dit-Yan Yeung, and Long Quan
Multi-thread Parsing for Recognizing Complex Events in Videos . . . 738
Zhang Zhang, Kaiqi Huang, and Tieniu Tan

Signature-Based Document Image Retrieval . . . 752
Guangyu Zhu, Yefeng Zheng, and David Doermann

An Effective Approach to 3D Deformable Surface Tracking . . . 766
Jianke Zhu, Steven C.H. Hoi, Zenglin Xu, and Michael R. Lyu

MRFs

Belief Propagation with Directional Statistics for Solving the Shape-from-Shading Problem . . . 780
Tom S.F. Haines and Richard C. Wilson

A Convex Formulation of Continuous Multi-label Problems . . . 792
Thomas Pock, Thomas Schoenemann, Gottfried Graber, Horst Bischof, and Daniel Cremers

Beyond Loose LP-Relaxations: Optimizing MRFs by Repairing Cycles . . . 806
Nikos Komodakis and Nikos Paragios

Author Index . . . 821
3D Non-rigid Surface Matching and Registration Based on Holomorphic Differentials

Wei Zeng¹, Yun Zeng¹, Yang Wang², Xiaotian Yin¹, Xianfeng Gu¹, and Dimitris Samaras¹

¹ Stony Brook University, Stony Brook, NY 11790, USA
² Carnegie Mellon University, Pittsburgh, PA 15213, USA
Abstract. 3D surface matching is fundamental for shape registration, deformable 3D non-rigid tracking, recognition and classification. In this paper we describe a novel approach for generating an efficient and optimal combined matching from multiple boundary-constrained conformal parameterizations of multiply connected domains (i.e., genus zero open surfaces with multiple boundaries), which invariably arise from imperfect 3D data acquisition (holes, partial occlusions, and changes of pose and non-rigid deformation between scans). The optimality criterion is also used to assess how consistent each boundary is, and thus to decide whether to enforce or relax boundary constraints across the two surfaces to be matched. The linear boundary-constrained conformal parameterization is based on holomorphic differential forms, which map a surface with n boundaries conformally to a planar rectangle with (n − 2) horizontal slits, with the other two boundaries serving as constraints. The mapping is a diffeomorphism, is intrinsic to the geometry, handles an open surface with an arbitrary number of boundaries, and can be implemented as a linear system. Experimental results are given for real facial surface matching and deformable non-rigid cloth tracking, which demonstrate the efficiency of our method, especially for 3D non-rigid surfaces with significantly inconsistent boundaries.
1 Introduction
In recent decades, there has been a lot of research into surface representations for 3D surface analysis, which is a fundamental issue for many computer vision applications, such as 3D shape registration, partial scan alignment, 3D object recognition, and classification [1,2,3]. In particular, as 3D scanning technologies improve, large databases of 3D scans require automated methods for matching and registration. However, matching surfaces undergoing non-rigid deformation is still a challenging problem, especially when the data is noisy and has complicated topology. Different approaches include curvature-based representations [4,5], regional point representations [2,6], spherical harmonic representations [7,8], shape distributions [9], multi-dimensional scaling [10], local isometric mapping [11], summation invariants [12], landmark-sliding [13], physics-based deformable models [14], Free-Form Deformation (FFD) [15], and Level-Set based methods [16]. However, many surface representations that use local geometric invariants cannot guarantee global convergence and might suffer from local minima in the presence of non-rigid deformations. To address this issue, many global parameterization methods have been developed recently based on conformal geometric maps [17,18,19,20,21,22].

Fig. 1. Multiple slit mappings: (a) original surfaces; (b) conformal slit mappings to rectangular domains. The columns in (b) correspond to the mouth, left-eye and right-eye boundary conditions from left to right, inducing optimal area distortion around the mouth, the left eye and the right eye respectively. Optimal conformal parameterizations are chosen for different regions of the surface.

Although the previous methods have met with a great deal of success in both computer vision and graphics, there are three major shortcomings in conformal maps when applied to matching of real discrete data such as the output of 3D scanners: 1) complicated topology of the inputs, 2) area distortions, and 3) inconsistent boundaries. In this paper we address these three issues by introducing a novel linear algorithm for multiply connected surfaces based on holomorphic differentials.

Most existing conformal mapping methods can only handle surfaces with the simplest topology, namely genus zero surfaces with a single boundary. In reality, due to partial occlusion and noise, an arbitrary surface patch acquired by a single scan from a camera-based 3D scanner (e.g., a frontal face scan, cloth, machine parts, etc.) is a genus zero surface with an arbitrary number of holes; such surfaces are called multiply connected domains. The only previous conformal geometric method that can handle such surfaces is the Ricci Flow (RF) method [23], but RF is highly non-linear and hence much slower than linear methods. Linear methods, such as harmonic maps [18] and Least-Squares Conformal Maps (LSCMs) [20], are not guaranteed to generate diffeomorphisms between multiply connected domains; the mappings they generate can have flip-overs. In this work, we introduce a novel method to handle multiply connected domains without any restriction on the number of holes. The mapping computed by RF is globally one-to-one; our method preserves this bijectivity while being tens of times faster.

The second major disadvantage of conformal mapping is that while it guarantees no angle distortion, it may introduce large area distortions: if a large portion of the surface shrinks to a tiny area on the parameter domain, matching becomes problematic. To tackle this problem, we propose to combine multiple mappings. For a given surface, there are infinitely many conformal mappings that flatten it onto the plane, and for a given area on the surface, the area distortions induced by different mappings vary drastically. For each part of the surface, we can pick a specific conformal mapping which would enlarge this part
and shrink the remaining parts. For example, Fig. 1 shows different conformal mappings of the same face surface: three mappings enlarge the areas surrounding the mouth, the left eye and the right eye respectively. By combining the matching results, we can get a better overall result and overcome the shrinkage problem.

The third issue we address is that conventional conformal geometric methods cannot handle surfaces with unreliable boundaries. In reality, many surfaces have multiple boundaries, where matching on the boundaries is crucial. Even more important, boundaries on 2.5D scans are sensitive to motion, noise, object pose, etc., and hence methods like harmonic maps [17,18,19] fail when the requirement for a constant boundary changes. Hence we present three contributions.

The first contribution of this paper is an algorithm which combines the results of several matches based on different conformal maps of the surfaces to be matched. Although a conformal mapping has no angle distortion, it introduces highly non-uniform area distortion. By selecting multiple conformal maps, we can ensure that all parts of the surface map to areas of equivalent size, albeit not on the same map, and thus avoid aliasing problems while matching. We can then combine all these locally optimal matching results, improving overall matching accuracy.

Our second contribution is to design flexible boundary conditions for matching. Some boundaries of the scanned surfaces are inconsistent among frames whereas other boundaries are more reliable. In our method, we map reliable boundaries across the surfaces to be matched and allow the image of a boundary to slide on the target boundary. If a boundary is not reliable, we set it free. We also integrate feature constraints in the algorithm. Taking advantage of meaningful features is essential for any matching or registration method; in the case of large non-rigid deformations, matched features allow accurate description of the deformations. In the extreme case, matching can be achieved based on feature constraints only, without any boundary constraints.

Our final contribution is to introduce a novel matching method for multiply connected domains based on canonical conformal mappings in conformal geometry. All multiply connected domains can be conformally mapped to canonical planar domains, which are annuli with concentric circular slits or rectangles with horizontal slits, as shown in Fig. 2. The resulting map does not have any singularities and is a diffeomorphism, i.e., one-to-one and onto. These maps are stable, insensitive to resolution changes, and robust to noise. Hence, the original 3D surface-matching problem simplifies to a 2D image-matching problem on the conformal geometric maps, which is better understood [24,25]. This is the first time this mapping method is applied in the computer vision field. Multiply connected domains are the most difficult cases in conformal geometry. This first application of the following theorem leads to a highly efficient method handling genus zero surfaces with multiple holes with linear complexity.

Theorem 1 (Canonical Mappings of Multiply Connected Domains). The function φ effects a one-to-one conformal mapping of S onto the annulus minus n − 2 concentric arcs.
Fig. 2. Conformal mapping of multiply connected domains (best viewed in color): (a) multiply connected domain S; (b) circular slit map φ; (c) conformal texture mapping by φ; (d) parallel slit map log φ; (e) conformal texture mapping by log φ. This scan of a human face surface is a multiply connected domain with 4 boundaries. It is conformally mapped to an annulus with concentric circular slits by a circular slit mapping φ1, shown in (b). The boundaries γ0, γ1 are mapped to the outer and inner circles, while γ2, γ3 are mapped to circular slits. The parallel slit map log φ1 is shown in (d). (c) and (e) show that the circular and parallel slit mappings are conformal.
The rest of the paper is organized as follows: the theoretical background and the linear algorithm for computing the slit mapping of multiply connected domains are introduced in Sect. 2. The algorithms for 3D surface matching and reliable boundary selection are proposed in Sect. 3. Experimental results are presented in Sect. 4, and we conclude with discussion and future work in Sect. 5.
2 Algorithm for Slit Mapping
All surfaces embedded in R³ have the induced Euclidean metric g. A conformal structure is an atlas such that on each local chart the metric can be represented as g = e²ᵘ(dx² + dy²). A surface with a conformal structure is a Riemann surface; therefore, all surfaces in R³ are Riemann surfaces. A harmonic 1-form τ on a Riemann surface can be treated as a vector field with zero circulation and divergence. A holomorphic 1-form ω can be treated as a pair of harmonic 1-forms, ω = (τ1, τ2), such that τ2 can be obtained by rotating τ1 about the normal by 90°. We say τ2 is conjugate to τ1, denoted as *τ1 = τ2. It is convenient to use the complex representation of holomorphic 1-forms, ω = τ1 + √−1 τ2. All the holomorphic 1-forms form a group, which is isomorphic to the first cohomology group H¹(S, R).

A topological genus zero surface S with multiple boundaries is called a multiply connected domain. Suppose the boundary of S is a set of loops ∂S = {γ0, γ1, ..., γn}, where γ0 is the exterior boundary. Then a basis of holomorphic 1-forms can be found, ω1, ω2, ..., ωn, such that the integration of ωi along γj equals δij. Special holomorphic 1-forms can be found such that

    Im(∮γi ω) = 2π if i = 0,  −2π if i = 1,  and 0 otherwise.    (1)

Then, if we choose a base point p0 on the surface, for any point p we choose an arbitrary path γ on the surface and define a complex function φ(p) = e^(∫γ ω), which
maps the surface to an annulus. γ0 is mapped to the outer boundary, γ1 to the inner boundary, and all other boundaries are mapped to concentric circular slits. The (complex) logarithm of φ then maps the surface periodically to a rectangle, with all the circular slits mapped to horizontal slits. We call φ a circular slit mapping and log φ a horizontal slit mapping.

The algorithm for computing the slit mapping is straightforward: first we compute a set of holomorphic 1-form basis elements of the surface, {ωi}; then we find a holomorphic 1-form represented as a linear combination of the basis, ω = Σi λi ωi, such that Equation (1) holds. Details can be found in [26].
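To make the linear nature of this step concrete, the following Python sketch shows the single solve involved, assuming the boundary periods of the basis 1-forms have already been computed; the array layout (`periods[k, i]` holding the integral of ωk around γi) and all function names are illustrative choices, not part of the paper.

```python
import numpy as np

def slit_map_coefficients(periods):
    """Find real coefficients lambda with omega = sum_k lambda_k * omega_k
    satisfying Equation (1).  periods[k, i] is the complex integral of the
    basis 1-form omega_k around boundary loop gamma_i (i = 0..n).  Because
    the periods of a closed form sum to zero over all boundaries, imposing
    the conditions on gamma_1..gamma_n is enough."""
    n = periods.shape[0]                  # number of basis 1-forms
    A = periods[:, 1:].imag.T             # A[i-1, k] = Im(period of omega_k on gamma_i)
    b = np.zeros(n)
    b[0] = -2.0 * np.pi                   # gamma_1 condition; the rest are zero
    return np.linalg.solve(A, b)

def circular_slit_map(vertex_integrals):
    """phi(p) = exp(integral of omega from the base point p0 to p); the
    horizontal slit map is log(phi), i.e. the accumulated integrals themselves."""
    return np.exp(np.asarray(vertex_integrals))
```

In practice the per-vertex integrals would be accumulated edge by edge along a spanning tree of the mesh; that bookkeeping is omitted here.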
3 Algorithm for Surface Matching
Suppose S1 and S2 are two multiply connected domains; our goal is to find a one-to-one map f : S1 → S2 which is as close to an isometry as possible. Instead of matching the two surfaces in R³ directly, we find two conformal maps φk : Sk → Dk, k = 1, 2, which map the surfaces to the planar domains Dk, and then compute a planar map between the planar domains, f̃ : D1 → D2. The matching is then the composition f = φ2⁻¹ ∘ f̃ ∘ φ1, as also described in [23]. In theory, if f is isometric and the boundary conditions are set consistently, then f̃ is the identity. In our applications to face matching and cloth tracking, the mappings are close to isometries, so the planar mappings are close to the identity. This greatly simplifies the matching process.

We first explain the metric used to measure the quality of a simple matching, then how to choose good parameterizations for optimal area distortions and how to detect the consistency between boundaries. We then explain in detail the matching process based on a single parameterization, and finally the algorithm to fuse multiple matching results.
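On triangle meshes, the composition f = φ2⁻¹ ∘ f̃ ∘ φ1 amounts to pushing each vertex of S1 into D2 and interpolating the target surface barycentrically. The sketch below illustrates this step; the Delaunay point locator is only a stand-in for walking the actual faces of S2 in its parameter domain, and all argument names are ours.

```python
import numpy as np
from scipy.spatial import Delaunay

def compose_matching(uv1, planar_map, uv2, verts2):
    """f = phi_2^{-1} o f~ o phi_1 on a mesh: push every vertex of S1 through
    its parameterization uv1 and the planar map f~, locate the image in a
    triangulation of D2, and pull it back to S2 by barycentric interpolation."""
    q = planar_map(uv1)                         # f~(phi_1(v)) for each vertex of S1
    tri = Delaunay(uv2)                         # proxy for the faces of S2 in D2
    simplex = tri.find_simplex(q)               # -1 marks points outside D2
    T = tri.transform[simplex]
    b2 = np.einsum('nij,nj->ni', T[:, :2], q - T[:, 2])
    bary = np.c_[b2, 1.0 - b2.sum(axis=1)]      # barycentric coordinates
    corners = verts2[tri.simplices[simplex]]    # (N, 3, 3) triangle corners on S2
    f_of_v = (corners * bary[:, :, None]).sum(axis=1)
    f_of_v[simplex < 0] = np.nan                # vertices with no image
    return f_of_v
```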
3.1 Optimality Criterion
Since we want to combine matching results, a metric is needed to measure the quality of a match. Let S1, S2 be the surfaces and f : S1 → S2 be the match (non-rigid in general); we want our criterion to measure the distance between the match f and a rigid motion. Our proposed criterion is based on the following theorem [27]:

Theorem 2. Suppose S is a surface with conformal parameters (u, v), the Riemannian metric is represented as e^(2λ(u,v))(du² + dv²), and the mean curvature is H(u, v). Then S is determined by λ and H uniquely up to a rigid motion in R³.

Here e^(2λ) measures the area distortion and is called the conformal factor. Suppose S1, S2 have been conformally mapped to the plane with conformal factors λ1, λ2, their mean curvatures are H1 and H2, and p ∈ S1 is mapped to f(p) ∈ S2. We then define the matching energy as our optimality criterion:

    E(f) = ∫S1 |λ1(p) − λ2(f(p))|² + |H1(p) − H2(f(p))|² dp.    (2)
If the matching energy is 0, then the match must be a rigid motion; thus, the smaller the energy, the better the match. The conformal factor can be approximated in the following way. Suppose v ∈ S is a vertex and φ : S → D is a conformal map; then

    λ(v) = Σv∈f A(f) / Σv∈f A(φ(f)),

where f ranges over the faces adjacent to v, A(f) is the original area of the face f, and A(φ(f)) is the area of its planar image. The mean curvature can be approximated as H(p)n(p) = ½ Δr(p), where n(p) is the normal vector at p, r(p) is the position vector, and Δ is the Laplace–Beltrami operator, which can be approximated using the cotangent formulae [19].
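A minimal sketch of these discrete approximations follows, assuming the mesh is stored as vertex and face arrays; the cotangent Laplacian needed for the mean curvature is assumed to be available separately, and all names are illustrative.

```python
import numpy as np

def triangle_areas(verts, faces):
    """Areas of triangles; verts is (V,3) in 3D or (V,2) in the parameter
    domain, faces is an (F,3) integer array of vertex indices."""
    p0, p1, p2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    if verts.shape[1] == 2:                      # planar parameter domain
        cross = (p1 - p0)[:, 0]*(p2 - p0)[:, 1] - (p1 - p0)[:, 1]*(p2 - p0)[:, 0]
        return 0.5 * np.abs(cross)
    return 0.5 * np.linalg.norm(np.cross(p1 - p0, p2 - p0), axis=1)

def conformal_factor(verts3d, uv, faces):
    """Per-vertex area-distortion factor of Sect. 3.1: sum of adjacent 3D
    face areas divided by sum of adjacent parameter-domain face areas."""
    a3, a2 = triangle_areas(verts3d, faces), triangle_areas(uv, faces)
    num, den = np.zeros(len(verts3d)), np.zeros(len(verts3d))
    for k in range(3):
        np.add.at(num, faces[:, k], a3)
        np.add.at(den, faces[:, k], a2)
    return num / den

def matching_energy(lam1, H1, lam2_at_f, H2_at_f):
    """Discrete version of Equation (2): squared differences of conformal
    factor and mean curvature between each vertex and its image under f."""
    return float(np.sum((lam1 - lam2_at_f)**2 + (H1 - H2_at_f)**2))
```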
3.2 Choice of Conformal Parameterization
In theory, there are infinitely many conformal mappings for a given surface, but we can only afford to compute a few of them. We choose optimal parameterizations for different regions of the surface as follows. First, we partition the surface into regions using the curved-surface equivalent of a Voronoi diagram: we compute the shortest distance from each vertex to every interior boundary, choose the closest boundary, and label the vertex accordingly; all vertices with the same label form one region. The outer boundary of the scanned surface is usually noisy and inconsistent, so we do not compute a Voronoi region for it. Second, for each interior boundary γi, we choose a conformal parameterization which maps γi to the exterior boundary of the parameter domain; the region associated with γi then has the optimal area distortion in that parameterization. For example, for a human face with the eyes and mouth open, we partition the face into three regions, surrounding the left eye, the right eye and the mouth respectively. As shown in Fig. 1, the conformal parameterizations in the 2nd, 3rd and 4th columns have optimal area distortions for the mouth region, the left-eye region and the right-eye region respectively.
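The partition step can be sketched as a multi-source breadth-first search on the mesh graph, with edge counts standing in for the geodesic distances used in the paper; all names below are illustrative.

```python
from collections import deque

def label_regions_by_boundary(num_verts, edges, interior_boundaries):
    """Label each vertex by the closest interior boundary loop.
    `edges` is a list of (u, v) index pairs; `interior_boundaries` is a list
    of vertex-index lists, one per interior boundary."""
    adj = [[] for _ in range(num_verts)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    label = [-1] * num_verts
    queue = deque()
    for i, loop in enumerate(interior_boundaries):   # seed every boundary loop
        for v in loop:
            label[v] = i
            queue.append(v)
    while queue:                                     # grow all regions in parallel
        u = queue.popleft()
        for w in adj[u]:
            if label[w] == -1:
                label[w] = label[u]
                queue.append(w)
    return label
```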
3.3 Boundary Consistency Checking
Suppose Sk, k = 1, 2, are two surfaces with the same number of boundaries, ∂Sk = {γ1k, γ2k, ..., γnk}, where γik is the i-th boundary loop on Sk. We also assume that the correspondence between boundaries (i.e., mouth to mouth, left eye to left eye, etc.) is known. We choose two corresponding boundary loops and, without loss of generality, assume they are γk ∈ ∂Sk. The following procedure detects whether the two boundaries are consistent:

1. Compute a conformal mapping φk : Sk → Dk which maps γk to the outer boundary of the planar domain Dk.
2. Compute the conformal factor λk and mean curvature Hk for Sk.
3. Find an affine mapping f : D1 → D2. Measure the matching energy of Equation (2) in the neighborhood of the outer boundary of Dk; if this energy is greater than a given threshold, the boundary loops γ1 and γ2 are inconsistent, otherwise they are consistent.
According to conformal geometric theory, the image of each conformal mapping must be a rectangle, and an affine mapping can match different rectangles with minimal stretching energy. Our process detects inconsistencies by trying all boundary combinations and picking the one with the least matching energy.
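A compact sketch of steps 1–3, assuming λ and H have been sampled at corresponding points near the two candidate boundaries after the affine alignment; the sampling strategy and the threshold value are left open, and all names are illustrative.

```python
import numpy as np

def rectangle_affine(w1, h1, w2, h2):
    """Affine map between the two slit rectangles: scale each axis so that
    D1 = [0,w1]x[0,h1] is carried onto D2 = [0,w2]x[0,h2]."""
    return lambda uv: np.column_stack((uv[:, 0] * w2 / w1, uv[:, 1] * h2 / h1))

def boundaries_consistent(lam1, H1, lam2, H2, threshold):
    """Step 3 of Sect. 3.3: compare conformal factor and mean curvature at
    corresponding samples near the two outer boundaries (after the affine
    alignment) and threshold the resulting energy."""
    energy = float(np.mean((lam1 - lam2) ** 2 + (H1 - H2) ** 2))
    return energy < threshold, energy
```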
3.4 Simple Matching
The following steps describe the matching process based on a single parameterization:

1. Compute a conformal mapping φk : Sk → Dk.
2. Let the corresponding feature points on Sk be Fk = {pk1, pk2, ..., pkm}. Compute a harmonic map f̃ : D1 → D2, i.e., Δf̃ = 0, subject to the following constraints:
   (a) Feature constraints: f̃(φ1(p1i)) = φ2(p2i) for all pki ∈ Fk.
   (b) Boundary constraints: f̃(φ1(γi1)) = φ2(γi2) if γi1 ∈ ∂S1 and γi2 ∈ ∂S2 are consistent.
3. The simple matching is given by f = φ2⁻¹ ∘ f̃ ∘ φ1.

The algorithm is applied to discrete meshes. We refer readers to [18] for details of computing harmonic maps, which is equivalent to solving a Dirichlet problem with boundary conditions. The final mapping is represented as follows. Suppose v ∈ S1 is a vertex of the first mesh and f(v) is a point p ∈ S2 on the second surface, lying on a face [v1, v2, v3] ∈ S2 such that p = μ1 v1 + μ2 v2 + μ3 v3, where (μ1, μ2, μ3) are the barycentric coordinates of p in [v1, v2, v3]. Then we represent f(v) as the pair

    f(v) = ([v1, v2, v3], (μ1, μ2, μ3)),    (3)

and call it the natural representation of the matching f.

Figure 3 illustrates a simple matching. The face surfaces in the first column need to be matched. The second column shows a conformal mapping to a rectangular domain, where γ0k are mapped to the top and γ1k to the bottom. Our algorithm found that γ01 and γ02 are inconsistent, while γ11 and γ12 are consistent.
γ2
γ2
γ2
γ3
γ3
(c)
γ1 γ0 γ2
γ3
γ1
γ1 γ0
(a)
γ0
(b)
γ1
(d)
Fig. 3. Matching using boundary and feature constraints (best viewed in color). SIFT points [28] on the texture are color encoded. (a) is mapped to (b); (c) is the slit map for (b); (d) shows the simple matching result. In (d), boundary γ1 (the mouth boundary) is deemed consistent using the algorithm in Sect. 3.3 and enforced to match, whereas boundary γ0 (the outer boundary) is deemed inconsistent and left free. Hence the quality of the match is due to the combination of boundary and interior feature point constraints.
Therefore, the harmonic map aligns the feature points as well as φ1(γ11) with φ2(γ12), while φ1(γ01) remains free.
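The constrained harmonic map of step 2 reduces to one sparse linear solve per coordinate. The sketch below uses a uniform graph Laplacian as a stand-in for the cotangent weights of [18] and does not implement the sliding-boundary variant; `pinned` would collect the SIFT feature correspondences and, for each consistent boundary, the prescribed images of its vertices. All names are our own.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def constrained_harmonic_map(num_verts, edges, pinned):
    """Solve Laplace(f~) = 0 on the planar domain D1 with Dirichlet
    constraints; `pinned` maps a vertex index to its target (u, v) in D2."""
    rows, cols, vals = [], [], []
    deg = np.zeros(num_verts)
    for u, v in edges:                      # off-diagonal entries of the Laplacian
        rows += [u, v]; cols += [v, u]; vals += [-1.0, -1.0]
        deg[u] += 1.0; deg[v] += 1.0
    rows += list(range(num_verts)); cols += list(range(num_verts)); vals += list(deg)
    L = sp.csr_matrix((vals, (rows, cols)), shape=(num_verts, num_verts))

    fixed = np.array(sorted(pinned))
    free = np.setdiff1d(np.arange(num_verts), fixed)
    x = np.zeros((num_verts, 2))
    x[fixed] = np.array([pinned[i] for i in fixed])
    A = L[free][:, free].tocsc()
    rhs = -L[free][:, fixed] @ x[fixed]     # move the pinned values to the right-hand side
    for d in range(2):
        x[free, d] = spla.spsolve(A, rhs[:, d])
    return x                                # planar images f~(phi_1(v)) for all vertices
```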
3.5 Fusion of Matches
Suppose we have computed several simple matchings fi : S1 → S2, with natural representations given by Equation (3). By selecting multiple conformal maps, we can ensure that all parts of the surface map to areas of equivalent size, albeit not on the same map. We can then combine all these locally optimal matching results using the following algorithm. Given a vertex v ∈ S1:

1. Find a neighborhood of v on S1, Nk(v) = {[v1, v2, v3] ∈ S1 | d(v, vi) < k, i = 1, 2, 3}, where d(v, vi) is the number of edges on the shortest path from v to vi. In our implementation, we choose k around 4.
2. Compute the matching energy of fi restricted to N(v), denoted E(fi|N(v)). If v has no image under fi, we set E(fi|N(v)) to ∞. The energy is an integral and is computed by a conventional Monte Carlo method:
   (a) Randomly generate N sample points {p1, p2, ..., pN} in Nk(v) with uniform distribution.
   (b) Compute their images {q1, q2, ..., qN}, qj = fi(pj).
   (c) Estimate the energy as (1/N) Σj=1..N [λ(pj) − λ(qj)]² + [H(pj) − H(qj)]².
3. Set the combined map to f(v) = fk(v), where k = argminj E(fj|N(v)).

In extensive testing of the combined matching algorithm, for fine enough input meshes the combined mappings are one-to-one and onto.
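A sketch of this selection step, assuming the Monte Carlo energies have already been evaluated per vertex for every simple matching; the data layout is our own.

```python
import numpy as np

def local_energy(lam1_s, H1_s, lam2_s, H2_s):
    """Monte Carlo estimate of E(f_i | N(v)): average squared differences of
    conformal factor and mean curvature over sampled points of N_k(v)
    (lam1_s, H1_s) and their images under f_i (lam2_s, H2_s)."""
    return float(np.mean((lam1_s - lam2_s) ** 2 + (H1_s - H2_s) ** 2))

def fuse_matches(energies):
    """energies[i, v] = E(f_i | N(v)), with np.inf where vertex v has no image
    under f_i.  Returns, for each vertex, the index of the simple matching
    whose value is used in the combined map."""
    return np.argmin(np.asarray(energies), axis=0)
```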
4 Experimental Results
This work handles 3D moving deformable data with complex topologies, which are very difficult to acquire; data availability limits our experimental datasets, which are nonetheless very challenging for existing methods. We thoroughly tested our algorithms on 50 facial scans (Face1–4, each set including more than 10 faces) with different expressions and postures, and on 15 scans of deformable cloth. The face surfaces are topological three-hole annuli, and the cloth surfaces are topological disks. These surfaces are representative because of their general topologies, large distortions and very inconsistent boundaries, so these difficult experiments support the generality and effectiveness of our method.

Tracking Deformable Cloth Surfaces. We tested our algorithm for tracking deformable cloth surfaces. The cloth surfaces are captured by the 3D scanner introduced in [29]; each frame has about 10K vertices and 20K faces. The cloth surface is a quadrilateral (i.e., it has only one simple boundary). Its parallel slit mapping can be computed automatically after a double-covering preprocessing step [19], which makes the surface a topological annulus (with two boundaries).
Fig. 4. Matching deforming cloth surfaces. (a) and (c) are two cloth surfaces; (b) and (d) are their rectangular parameter domains. The computation for (c) is shown in (e–h): (e) double-covered surface, (f) circular map, (g) rectangular map, (h) half-circular map, which yields the rectangular domain (d).
Fig. 5. Cloth tracking sequence (frames A–F) with consistent texture coordinates. The consistent deformable tracking can be observed from the motion of the checkerboard texture.
Fig. 6. Non-rigid matching and registration for Face1 (A1–C1, with expression change and mouth closed, visualized by a consistent checkerboard texture)
Fig. 4 illustrates the process of computing the rectangular mapping and also demonstrates the simple matching between two frames. The tracking is based on frame-by-frame matchings. To visualize the tracking results, we put a checkerboard texture on the first frame and propagated the texture parameters to the other frames through the correspondences obtained from tracking; see Fig. 5. The checkerboard textures are consistent across frames, without oscillation effects or checker collapse, demonstrating that the matching between two frames is a diffeomorphism and that the tracking is stable and automatic.

Matching Facial Surfaces. Figures 6, 7 and 8 illustrate our experimental results on matching and registering human faces with different expressions and inconsistent boundaries, acquired by the method in [18] with greyscale texture. The feature points were computed directly with the SIFT algorithm [28] on the textures. The figures show how noisy and inconsistent the boundaries are.
Fig. 7. Non-rigid matching and registration for Face2 (A2–C2, with eye and mouth motion)
Fig. 8. Non-rigid matching and registration for Face3 (A3–C3, with mouth motion and pose change). The rotation is about 30°. The snapshots on the left and right are taken from the original and frontal views respectively. The boundaries are significantly different.
Fig. 9. Combined matching (best viewed in color): (a) the combined map, where the image of each vertex is selected from three simple matchings, with the choices color encoded; (b) the complete and zoomed-in meshes.
Figure 9 shows the combined matching, where the colors indicate the choices among the simple matchings. The selection depends on the matching energy defined in Equation (2). Intuitively, the energy measures the matching distortion; according to differential geometry, if the energy is 0 the matching must be a rigid motion, so the smaller the energy the better the matching. Table 1 demonstrates that the combined matching has the best accuracy.

Matching for Faces with Pose Change. Figure 8 shows our matching results for three faces scanned from different poses. The rotation is about 30°. Even though the exterior boundaries are very different, as can be seen on the right, the matches are consistent in the mapped textures. For surfaces with significantly inconsistent outer boundaries, we tested two kinds of simple matching conditions, enforced and relaxed boundary matching; Fig. 10 illustrates both results. We also compared our method based on holomorphic differentials (HD) with several other existing methods.
Fig. 10. Matching of significantly inconsistent boundaries of Face4
Fig. 11. Comparison of geometric mappings for multiply connected domains: (a) surfaces; (b) harmonic map; (c) LSCM; (d) Ricci flow; (e) holomorphic differentials. (b) HM stretches the boundary areas considerably. (c) LSCM generates self-intersections depending on the two prescribed feature points. (d) and (e) are one-to-one and onto.

Table 1. Matching energy: R1,2,3 are the areas around the mouth, the left eye and the right eye respectively, and Map1,2,3 are the slit maps induced by each area. Each number is the matching energy of a region under a simple matching.

Face1    R1       R2       R3       Total
Comb     0.439    0.217    0.076    0.732
Map1     0.439    18.788   21.422   40.649
Map2     13.645   0.217    13.995   27.857
Map3     24.259   8.646    0.076    32.981

Face2    R1       R2       R3       Total
Comb     0.138    0.094    0.117    0.349
Map1     0.138    37.634   29.920   67.692
Map2     35.151   0.094    14.228   49.473
Map3     34.990   24.672   0.117    59.779
Matching Accuracy. We tested the matching accuracy of three methods: Iterative Closest Point (ICP) [30], Holomorphic Differentials (HD), and Ricci Flow (RF) [23]. As a baseline, ICP is one of the most popular 3D shape matching methods and has relatively good performance. RF is the only previous method that computes matchings for multiply connected domains; it uses a single mapping, whereas our method combines multiple mappings and thus overcomes the area-distortion issue. The matching error for facial and cloth surfaces is measured by the relative Hausdorff average distance (RHAD). We matched the first frame to the others within each class and obtained the following average match errors for (ICP, RF, HD): Face1 (0.028, 0.009, 0.007), Face2 (0.021, 0.014, 0.010), Face3 (0.089, 0.020, 0.016) and Face4 (0.074, 0.015, 0.012); and Cloth (0.0258, 0.0003) for (ICP, HD). In all tested experiments, our HD method outperforms both ICP and RF.
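The paper does not spell out the RHAD formula. One plausible reading, given here purely as an illustrative sketch rather than the authors' exact metric, is the symmetric average nearest-neighbour distance normalised by the bounding-box diagonal of the reference scan.

```python
import numpy as np
from scipy.spatial import cKDTree

def relative_hausdorff_average(P, Q):
    """Assumed reading of RHAD: average nearest-neighbour distance between
    point sets P and Q (both (N,3) arrays), symmetrised and divided by the
    diagonal of P's bounding box so that the error is scale-free."""
    d_pq = cKDTree(Q).query(P)[0].mean()
    d_qp = cKDTree(P).query(Q)[0].mean()
    diag = np.linalg.norm(P.max(axis=0) - P.min(axis=0))
    return 0.5 * (d_pq + d_qp) / diag
```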
Table 2. Computational time

          Face1    Face2    Face3    Face4    Cloth
Faces     15,000   28,800   14,387   28,148   20,998
Verts     7,698    14,778   7,385    14,376   10,667

Time (s)  Face1    Face2    Face3    Face4    Cloth
HD        21       140      20       138      41
RF        610      4412     580      4236     N/A

Table 3. Performance comparison of geometric mapping methods

                      Harmonic Map       LSCM                Ricci Flow          Holo. Diff.
Is one-to-one map     No                 No                  Yes                 Yes
Time complexity       Linear             Linear              Non-linear          Linear
Boundary occlusion    Sensitive          Not sensitive       Sensitive           Not sensitive
Boundary constraint   Needed             Not needed          Needed              Not needed
Feature constraint    Not needed         Two points needed   Not needed          Needed
Resolution change     Not sensitive      Not sensitive       Not sensitive       Not sensitive
Topology limitation   Topological disk   Topological disk    Arbitrary surface   Multiply conn. domain
Efficiency. We implemented our algorithm in generic C++ on Windows XP and used conjugate gradient optimization without any linear algebra package. Table 2 reports the computational time on a laptop with a 2.00 GHz CPU and 3.00 GB of RAM. The RF method is non-linear, while our method is linear and roughly 30 times faster. Other mapping methods for computing such surfaces, like HM and LSCM, cannot guarantee a one-to-one map and may generate intersections (see Fig. 11). A comparison of the mapping methods is given in Table 3. Automaticity and Uniqueness. The method is completely automatic. The set of holomorphic differential generators is computed automatically. The choice of holomorphic 1-form for matching is also automatic: each pair of boundaries determines a unique 1-form and a unique conformal map, so there is no ambiguity. The best choice of mapping for each region on the surface is automatically determined by our algorithm as the one minimizing Equation (2). Therefore, the choice is unique and rigorous.
5 Conclusion
This work introduces novel methods for 3D surface matching based on multiple conformal mappings using holomorphic differentials (HD) for multiply connected domains. We propose a method to choose conformal parameterizations that minimize area distortion for each region and then combine the locally optimal parameterizations to cover the whole surface. An optimality criterion is designed to assess how consistent each boundary is across the two surfaces to be matched. That allows us to enforce matching between consistent boundaries while relaxing constraints between inconsistent ones. Compared with harmonic maps and LSCMs, the HD method generates a one-to-one mapping and is linear. We tested
our matching and tracking algorithm on a large number of 3D facial surfaces and deformable cloth surfaces with significantly inconsistent boundaries. The experiments demonstrated that our combined matching method achieves higher accuracy and efficiency than the Ricci flow method, which is the only previous one-to-one mapping method for multiply connected domains. In the future, we will explore linear methods for surfaces with more complicated topologies, and the matching and tracking of surfaces with inconsistent topologies.
References
1. Campbell, R.J., Flynn, P.J.: A survey of free-form object representation and recognition techniques. Computer Vision and Image Understanding 81, 166–210 (2001)
2. Ruiz-Correa, S., Shapiro, L., Meila, M.: A new paradigm for recognizing 3d object shapes from range data. In: ICCV, pp. 1126–1133 (2003)
3. Huber, D., Kapuria, A., Donamukkala, R., Hebert, M.: Parts-based 3d object classification. In: CVPR, vol. II, pp. 82–89 (June 2004)
4. Vemuri, B., Mitiche, A., Aggarwal, J.: Curvature-based representation of objects from range data. Image and Vision Computing 4, 107–114 (1986)
5. Xiao, P., Barnes, N., Caetano, T., Lieby, P.: An mrf and gaussian curvature based shape representation for shape matching. In: CVPR (2007)
6. Sun, Y., Abidi, M.: Surface matching by 3d point's fingerprint. In: ICCV, vol. II, pp. 263–269 (2001)
7. Frome, A., Huber, D., Kolluri, R., Bulow, T., Malik, J.: Recognizing objects in range data using regional point descriptors. In: Pajdla, T., Matas, J. (eds.) ECCV 2004. LNCS, vol. 3023, pp. 224–237. Springer, Heidelberg (2004)
8. Funkhouser, T., Min, P., Kazhdan, M., Chen, J., Halderman, A., Dobkin, D., Jacobs, D.: A search engine for 3d models. In: ACM TOG, pp. 83–105 (2003)
9. Osada, R., Funkhouser, T., Chazelle, B., Dobkin, D.: Shape distributions. In: ACM TOG, vol. 21, pp. 807–832 (2002)
10. Bronstein, A.M., Bronstein, M.M., Kimmel, R.: Expression-invariant representations of faces. IEEE Trans. Image Processing 16(1), 188–197 (2007)
11. Starck, J., Hilton, A.: Correspondence labelling for wide-timeframe free-form surface matching. In: ICCV (2007)
12. Lin, W.Y., Wong, K.C., Boston, N., Yu, H.H.: Fusion of summation invariants in 3d human face recognition. In: CVPR (2006)
13. Dalal, P., Munsell, B.C., Wang, S., Tang, J., Oliver, K., Ninomiya, H., Zhou, X., Fujita, H.: A fast 3d correspondence method for statistical shape modeling. In: CVPR (2007)
14. Terzopoulos, D., Witkin, A., Kass, M.: Constraints on deformable models: Recovering 3d shape and nonrigid motion. Artificial Intelligence 35, 91–123 (1988)
15. Huang, X., Paragios, N., Metaxas, D.: Establishing local correspondences towards compact representations of anatomical structures. In: Ellis, R.E., Peters, T.M. (eds.) MICCAI 2003. LNCS, vol. 2878, pp. 926–934. Springer, Heidelberg (2003)
16. Malladi, R., Sethian, J.A., Vemuri, B.C.: A fast level set based algorithm for topology-independent shape modeling. JMIV 6(2/3), 269–290 (1996)
17. Zhang, D., Hebert, M.: Harmonic maps and their applications in surface matching. In: CVPR 1999, vol. II, pp. 524–530 (1999)
18. Wang, Y., Gupta, M., Zhang, S., Wang, S., Gu, X., Samaras, D., Huang, P.: High resolution tracking of non-rigid 3d motion of densely sampled data using harmonic maps. In: ICCV 2005, vol. I, pp. 388–395 (2005)
19. Gu, X., Wang, Y., Chan, T.F., Thompson, P.M., Yau, S.T.: Genus zero surface conformal mapping and its application to brain surface mapping. TMI 23(7) (2004)
20. Levy, B., Petitjean, S., Ray, N., Maillot, J.: Least squares conformal maps for automatic texture atlas generation. In: SIGGRAPH, pp. 362–371 (2002)
21. Sharon, E., Mumford, D.: 2d-shape analysis using conformal mapping. In: CVPR 2004, vol. II, pp. 350–357 (2004)
22. Wang, S., Wang, Y., Jin, M., Gu, X.D., Samaras, D.: Conformal geometry and its applications on 3d shape matching, recognition, and stitching. PAMI 29(7), 1209–1220 (2007)
23. Gu, X., Wang, S., Kim, J., Zeng, Y., Wang, Y., Qin, H., Samaras, D.: Ricci flow for 3d shape analysis. In: ICCV (2007)
24. Lowe, D.: Distinctive image features from scale-invariant keypoints. IJCV 60(2), 91–110 (2004)
25. Athitsos, V., Alon, J., Sclaroff, S., Kollios, G.: Boostmap: A method for efficient approximate similarity rankings. In: CVPR 2004, vol. II, pp. 268–275 (2004)
26. Yin, X., Dai, J., Yau, S.T., Gu, X.: Slit map: Conformal parameterization for multiply connected surfaces. In: Geometric Modeling and Processing (2008)
27. Gu, X., Vemuri, B.C.: Matching 3d shapes using 2d conformal representations. In: Barillot, C., Haynor, D.R., Hellier, P. (eds.) MICCAI 2004. LNCS, vol. 3216, pp. 771–780. Springer, Heidelberg (2004)
28. Lowe, D.: Object recognition from local scale-invariant features. In: ICCV 1999, pp. 1150–1157 (1999)
29. Hernández, C., Vogiatzis, G., Brostow, G.J., Stenger, B., Cipolla, R.: Non-rigid photometric stereo with colored lights. In: ICCV, vol. 1 (2007)
30. Besl, P.J., McKay, N.D.: A method for registration of 3-D shapes. PAMI 14(2), 239–256 (1992)
Learning Two-View Stereo Matching

Jianxiong Xiao, Jingni Chen, Dit-Yan Yeung, and Long Quan

Department of Computer Science and Engineering
The Hong Kong University of Science and Technology
Clear Water Bay, Kowloon, Hong Kong
{csxjx,jnchen,dyyeung,quan}@cse.ust.hk
Abstract. We propose a graph-based semi-supervised symmetric matching framework that performs dense matching between two uncalibrated wide-baseline images by exploiting the results of sparse matching as labeled data. Our method utilizes multiple sources of information including the underlying manifold structure, matching preference, shapes of the surfaces in the scene, and global epipolar geometric constraints for occlusion handling. It can give inherent sub-pixel accuracy and can be implemented in a parallel fashion on a graphics processing unit (GPU). Since the graphs are directly learned from the input images without relying on extra training data, its performance is very stable and hence the method is applicable under general settings. Our algorithm is robust against outliers in the initial sparse matching due to our consideration of all matching costs simultaneously, and the provision of iterative restarts to reject outliers from the previous estimate. Some challenging experiments have been conducted to evaluate the robustness of our method.
1 Introduction

Stereo matching between images is a fundamental problem in computer vision. In this paper, we focus on matching two wide-baseline images taken from the same static scene. Unlike many previous methods which require that the input images be either calibrated [1] or rectified [2], we consider here a more challenging scenario in which the input contains two images only, without any camera information. As a consequence, our method can be used for more general applications, such as motion estimation from structure.

1.1 Related Work

Many stereo matching algorithms have been developed. Traditional stereo matching algorithms [2] were primarily designed for view pairs with a small baseline, and cannot be extended easily when the epipolar lines are not parallel. On the other hand, existing wide-baseline methods [3] depend heavily on the epipolar geometry, which has to be provided, often through off-line calibration, while other methods can only recover very sparse matching [4,5]. Although the epipolar geometry could be estimated on-line, those approaches still fail frequently for wide-baseline image pairs since the sparse matching result is fragile and the estimated fundamental matrix often fits only to some parts of the image but not
the entire image. Region growing based methods [6,7] can achieve denser matching, but may easily get trapped in local optima. Therefore the matching quality depends heavily on the result of the initial sparse matching. Also, for image pairs with quite different pixel scales, it is very difficult to achieve reasonable results due to the discrete growing. Recent research shows that learning techniques can improve the performance of matching by taking matched pairs as training data or by learning the probabilistic image prior [8] that encodes the smoothness constraint for natural images. However, for a test image pair, the information learned from other irrelevant images is very weak in the sense that it is unrelated to the test image pair. Thus the quality of the result greatly depends on the training data.

1.2 Our Approach

In this work, we explore the dense matching of uncalibrated wide-baseline images by utilizing all the local, regional and global information simultaneously in an optimization procedure. We propose a semi-supervised approach to the matching problem requiring only two input images taken from the same static scene. Since the method does not rely on any training data, it can handle images from any scene with stable performance.

We consider two data sets, $X^1$ and $X^2$, corresponding to the two input images with $n^1 = r^1 \times c^1$ pixels and $n^2 = r^2 \times c^2$ pixels, respectively. For $p = 1, 2$,
$$X^p = \left( x^p_1, x^p_2, \ldots, x^p_{(s^p-1)\times c^p + t^p}, \ldots, x^p_{n^p} \right)^T, \qquad (1)$$
where $x^p_{(s^p-1)\times c^p + t^p}$ represents the pixel located at the coordinate position $(s^p, t^p)$ in the $p$-th image space, $s^p \in \{1,\cdots,r^p\}$, and $t^p \in \{1,\cdots,c^p\}$. In this paper, we define $q = 3 - p$, meaning that $q = 1$ when $p = 2$ and $q = 2$ when $p = 1$, and let $i = (s^p - 1)\times c^p + t^p$. For each pixel $x^p_i$, we want to find a matching point located at coordinate position $(s^q, t^q)$ in the $q$-th (continuous) image space, where $s^q, t^q \in \mathbb{R}$. Hence, we can use a label vector to represent the position offset from a point in the second image to the corresponding point in the first image: $y^p_i = (v^p_i, h^p_i)^T = (s^1, t^1)^T - (s^2, t^2)^T \in \mathbb{R}^2$. In this way, our label vector representation takes real numbers for both elements, thus supporting sub-pixel matching. Let $Y^p = (y^p_1, \cdots, y^p_{n^p})^T$ be the label matrix, and $O^p = (o^p_1, \cdots, o^p_{n^p})^T$ be the corresponding visibility vector: $o^p_i \in [0,1]$ is close to 1 if the 3D point corresponding to the data point $x^p_i$ is visible in the other image, and otherwise close to 0, such as for a point in the occluded region. This notion of visibility may also be interpreted as matching confidence.

Obviously, nearby pixels are more likely to have similar label vectors. This smoothness constraint, relying on the position of the data points, can be naturally represented by a graph $G = \langle V, E \rangle$ where the node set $V$ represents the data points and the edge set $E$ represents the affinities between them. In our setting, we have two graphs $G^1 = \langle V^1, E^1 \rangle$ and $G^2 = \langle V^2, E^2 \rangle$ for the two images, where $V^1 = \{x^1_i\}$ and $V^2 = \{x^2_i\}$. Let $N(x^p_i)$ be the set of data points in the neighborhood of $x^p_i$. The affinities can be represented by two weight matrices $W^1$ and $W^2$: $w^p_{ij}$ is non-zero iff $x^p_i$ and $x^p_j$ are neighbors in $E^p$.
In recent years, matching techniques such as SIFT [4] have become powerful enough to recover some sparsely matched pairs. Now, the problem here is: given such matched pairs as labeled data $(X^1_l, Y^1_l)$, $(X^2_l, Y^2_l)$ and the affinity matrices $W^1$ and $W^2$, we want to infer the label matrices for the remaining unlabeled data $(X^1_u, Y^1_u)$, $(X^2_u, Y^2_u)$. For the sake of clarity of presentation and without loss of generality, we assume that the indices of the data points are arranged in such a way that the labeled points come before the unlabeled ones, that is, $X^p = \left( (X^p_l)^T, (X^p_u)^T \right)^T$. For computation, the index of a data point can be mapped by multiplying elementary matrices for row-switching transformations. In what follows, we formulate in Sec. 2 the matching problem under a graph-based semi-supervised label propagation framework, and solve the optimization problem via an iterative cost minimization procedure in Sec. 3. To get reliable affinity matrices for propagation, in Sec. 4 we learn $W^1$ and $W^2$ directly from the input images, which include color and depth information. The complete procedure of our algorithm is summarized in Alg. 1. More details are given in Sec. 5. Finally, extensive experimental results are presented in Sec. 6.
2 Semi-supervised Matching Framework

Semi-supervised learning on the graph representation tries to find a label matrix $\hat{Y}^p$ that is consistent with both the initial incomplete label matrix and the geometry of the data manifold induced by the graph structure. Because the incomplete labels may be noisy, the estimated label matrix $\hat{Y}^p$ for the labeled data is allowed to differ from the given label matrix $Y^p_l$. Given an estimated $\hat{Y}^p$, consistency with the initial labeling can be measured by
$$C^p_l\left(\hat{Y}^p, O^p\right) = \sum_{x^p_i \in X^p_l} o^p_i \left\| \hat{y}^p_i - y^p_i \right\|^2. \qquad (2)$$
On the other hand, consistency with the geometry of the data in the image space, which follows from the smooth manifold assumption, motivates a penalty term of the form
$$C^p_s\left(\hat{Y}^p, O^p\right) = \frac{1}{2} \sum_{x^p_i, x^p_j \in X^p} w^p_{ij}\, \phi\left(o^p_i, o^p_j\right) \left\| \hat{y}^p_i - \hat{y}^p_j \right\|^2, \qquad (3)$$
where $\phi\left(o^p_i, o^p_j\right) = \frac{1}{2}\left((o^p_i)^2 + (o^p_j)^2\right)$. When $o^p_i$ and $o^p_j$ are both close to 1, the function value is also close to 1. This means we penalize rapid changes in $\hat{Y}^p$ between points that are close to each other (as given by the similarity matrix $W^p$), and only enforce smoothness within visible regions, i.e., where $o^p$ is large.

2.1 Local Label Preference Cost

Intuitively, two points of a matched pair in the two images should have great similarity in terms of the features, since they are two observations of the same 3D point. Here, we use a similarity cost function $\rho^p_i(y)$ to represent the similarity cost between the pixel $x^p_i$ in one image and the corresponding point for the label vector $y$ in the other image space
(detailed in Subsec. 5.2). On the other hand, if $o^p_i$ is close to 0, which means that $x^p_i$ is almost invisible and the matching has low confidence, the similarity cost should not be charged. To avoid the situation where every point tends to have zero visibility in order to prevent cost charging, we introduce a penalty term $\tau^p_i$. When $o^p_i$ is close to 0, $(1 - o^p_i)\tau^p_i$ will increase. Also, $\tau^p_i$ should be different for different $x^p_i$. Textureless regions should be allowed to have lower matching confidence, that is, a small confidence penalty, and vice versa. We use a very simple difference-based confidence measure defined as follows:
$$\tau^p_i = \max_{x^p_j \in N(x^p_i)} \left\| x^p_i - x^p_j \right\|. \qquad (4)$$
Now, we can define the local cost as
$$C^p_d\left(\hat{Y}^p, O^p\right) = \sum_{x^p_i \in X^p} \left( o^p_i\, \rho^p_i(\hat{y}^p_i) + (1 - o^p_i)\, \tau^p_i \right). \qquad (5)$$
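A direct transcription of Eqs. (4) and (5) on a regular pixel grid might look like the following sketch; the image I is an H x W x 3 float array, rho is a per-pixel similarity cost and o a per-pixel visibility map, and the 4-neighborhood and all variable names are our own choices rather than details from the paper.

import numpy as np

def confidence_penalty(I):
    # tau_i = max over 4-neighbors j of ||I(i) - I(j)||  (Eq. 4)
    H, W, _ = I.shape
    tau = np.zeros((H, W))
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        J = np.roll(I, (dy, dx), axis=(0, 1))
        d = np.linalg.norm(I - J, axis=2)
        # ignore differences that wrapped around the image border
        if dy == 1:  d[0, :] = 0
        if dy == -1: d[-1, :] = 0
        if dx == 1:  d[:, 0] = 0
        if dx == -1: d[:, -1] = 0
        tau = np.maximum(tau, d)
    return tau

def local_cost(rho, o, tau):
    # C_d = sum_i ( o_i * rho_i(y_i) + (1 - o_i) * tau_i )  (Eq. 5)
    return float(np.sum(o * rho + (1.0 - o) * tau))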
2.2 Regional Surface Shape Cost

The shapes of the 3D objects' surfaces in the scene are very important cues for matching. An intuitive approach is to use some method based on two-view geometry to reconstruct the 3D surfaces. While this is a reasonable choice, it is unstable, since the structure deduced from two-view geometry is not robust, especially when the baseline is not large enough. Instead, we adopt the piecewise planar patch assumption [7]. Since two data points with a high affinity relation are more likely to have similar label vectors, we assume that the label vector of a data point can be linearly approximated by the label vectors of its neighbors, as in the manifold learning method called locally linear embedding (LLE) [9], that is,
$$y^p_i = \sum_{x^p_j \in N(x^p_i)} w^p_{ij}\, y^p_j. \qquad (6)$$
Hence, the reconstruction cost can be defined as
$$C_r(Y^p) = \sum_{x^p_i \in X^p} \Big\| y^p_i - \sum_{x^p_j \in N(x^p_i)} w^p_{ij}\, y^p_j \Big\|^2 = \left\| (I - W^p)\, Y^p \right\|_F^2. \qquad (7)$$
Let $A^p = W^p + (W^p)^T - W^p (W^p)^T$ be the adjacency matrix, $D^p$ the diagonal matrix containing the row sums of the adjacency matrix $A^p$, and $L^p = D^p - A^p$ the un-normalized graph Laplacian matrix. Because of the way $W^p$ is defined in Sec. 4, we have $D^p \approx I$. Therefore,
$$C_r(Y^p) \approx \mathrm{tr}\left( (Y^p)^T L^p\, Y^p \right) = \sum_{x^p_i, x^p_j \in X^p} a^p_{ij} \left\| y^p_i - y^p_j \right\|^2. \qquad (8)$$
This approximation induces the representation $a^p_{ij} \left\| y^p_i - y^p_j \right\|^2$, which makes the integration of the cost with visibility much easier.
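To make the approximation of Eq. (8) concrete, the reconstruction cost can be evaluated from a sparse weight matrix whose rows sum to one, as in the following sketch; SciPy's sparse module is assumed, and this is an illustration rather than the authors' implementation.

import numpy as np
import scipy.sparse as sp

def reconstruction_cost(W, Y):
    # C_r(Y) ~= tr(Y^T L Y) with A = W + W^T - W W^T, D = diag(row sums of A),
    # and L = D - A  (Eqs. 7-8).
    # W: (n, n) sparse LLE weight matrix whose rows sum to one; Y: (n, 2) dense label matrix.
    A = W + W.T - W @ W.T
    D = sp.diags(np.asarray(A.sum(axis=1)).ravel())
    L = D - A
    return float(np.trace(Y.T @ (L @ Y)))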
Now, the data points from each image lie on one 2D manifold (the image space). Except for the occluded parts, which cannot be matched, the two 2D manifolds come from the same 2D manifold of the visible surface of the 3D scene. LLE [10] is used to align the two 2D manifolds (image spaces) to one 2D manifold (the visible surface). The labeled data (known matched pairs) are accounted for by constraining the mapped coordinates of matched points to coincide. Let $X^p_c = X^p_l \cup X^p_u \cup X^q_u$, $\hat{Y}^p_c = \left( (\hat{Y}^p_l)^T, (\hat{Y}^p_u)^T, (\hat{Y}^q_u)^T \right)^T$ and $O^p_c = \left( (O^p_l)^T, (O^p_u)^T, (O^q_u)^T \right)^T$. We partition $A^p$ as
$$A^p = \begin{pmatrix} A^p_{ll} & A^p_{lu} \\ A^p_{ul} & A^p_{uu} \end{pmatrix}. \qquad (9)$$
Alignment of the manifold can be done by combining the Laplacian matrix as in [10], which is equivalent to combining the adjacency matrix:
$$A^p_c = \begin{pmatrix} A^p_{ll} + A^q_{ll} & A^p_{lu} & A^q_{lu} \\ A^p_{ul} & A^p_{uu} & 0 \\ A^q_{ul} & 0 & A^q_{uu} \end{pmatrix}. \qquad (10)$$
Imposing the cost only on the visible data points, the separate LLE cost of each graph is summed up:
$$C^p_r\left(\hat{Y}^1, \hat{Y}^2, O^1, O^2\right) = \sum_{x^p_i, x^p_j \in X^p_c} (a^p_c)_{ij}\, \phi\big((o^p_c)_i, (o^p_c)_j\big) \left\| (\hat{y}^p_c)_i - (\hat{y}^p_c)_j \right\|^2, \qquad (11)$$
where $(a^p_c)_{ij}$ is the element of $A^p_c$.

2.3 Global Epipolar Geometric Cost

In the epipolar geometry [11], the fundamental matrix $F_{12} = F_{21}^T$ encapsulates the intrinsic projective geometry between two views in the sense that, for $x^p_i$ at position $(s^p, t^p)$ in one image with matching point at position $(s^q, t^q)$ in the other image, the matching point $(s^q, t^q)$ should lie on the line $(a^p_i, b^p_i, c^p_i) = (s^p, t^p, 1)\, F_{pq}^T$. This global constraint affects every matching pair in the two images. For $x^p_i$, we define $d^p_i(y)$ to be the squared Euclidean distance, in the image space of the other image, between the corresponding epipolar line $(a^p_i, b^p_i, c^p_i)$ and the matching point $(s^q, t^q)$:
$$d^p_i(y) = \frac{\left( a^p_i s^q + b^p_i t^q + c^p_i \right)^2}{(a^p_i)^2 + (b^p_i)^2}, \qquad (12)$$
where $y = (v, h)^T = \left( s^1 - s^2, t^1 - t^2 \right)^T$. The global cost is now the sum of all squared distances:
$$C^p_g\left(\hat{Y}^p, O^p\right) = \sum_{x^p_i \in X^p} o^p_i\, d^p_i(\hat{y}^p_i). \qquad (13)$$
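The point-to-epipolar-line distance of Eq. (12) is a standard two-view quantity; a small sketch of its computation is shown below (variable names are ours).

import numpy as np

def epipolar_sq_distance(F_pq, pt_p, pt_q):
    # Squared distance (Eq. 12) between the epipolar line of pt_p, induced by the
    # fundamental matrix F_pq relating view p to view q, and the candidate
    # matching point pt_q.  Points are (s, t) image coordinates.
    a, b, c = np.array([pt_p[0], pt_p[1], 1.0]) @ F_pq.T
    s_q, t_q = pt_q
    return (a * s_q + b * t_q + c) ** 2 / (a ** 2 + b ** 2)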
2.4 Symmetric Visibility Consistency Cost

Assume that $x^p_i$ in one image is matched with $x^q_j$ in the other image. Then $x^q_j$ should also have a label vector showing its matching with $x^p_i$ in the original image. This symmetric visibility consistency constraint motivates the following visibility cost
$$C^p_v\left(O^p, \hat{Y}^q\right) = \beta \sum_{x^p_i \in X^p} \left( o^p_i - \gamma^p_i\left(\hat{Y}^q\right) \right)^2 + \frac{1}{2} \sum_{x^p_i, x^p_j \in X^p} w^p_{ij} \left( o^p_i - o^p_j \right)^2, \qquad (14)$$
where $\gamma\left(\hat{Y}^q\right)$ is a function defined on the $p$-th image space. For each $x^p_i$, its value via the $\gamma$ function indicates whether or not there exist one or more data points that match a point near $x^p_i$ from the other view according to $\hat{Y}^q$. The value at pixel $x^p_i$ is close to 0 if there is no point in the other view corresponding to a point near $x^p_i$, and otherwise close to 1. The parameter $\beta$ controls the strength of the visibility constraint. The last term enforces the smoothness of the occlusion, which encourages spatial coherence and helps to remove isolated pixels or small holes of the occlusion.

The $\gamma$ function can be computed as a voting procedure when $\hat{Y}^q$ is available in the other view. For each point $x^q_j$ at position $(s^q, t^q)$ in $X^q$ with label $y^q_j = (v^q_j, h^q_j)^T = (s^1, t^1)^T - (s^2, t^2)^T$, equivalent to being matched with a point at position $(s^p, t^p)$, we place a 2D Gaussian function $\psi(s, t)$ on the $p$-th image centered at the matched position $c_j = (s^p, t^p)^T$. Now, we get a Gaussian mixture model $\sum_{x^q_j} \psi_{c_j}(s, t)$ in the voted image space. Truncating it, we get
$$\gamma^p(s, t) = \min\Big( 1, \sum_{x^q_j \in X^q} \psi_{c_j}(s, t) \Big). \qquad (15)$$
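The voting of Eq. (15) can be implemented by splatting a small truncated Gaussian at every position predicted by the labels of the other view and clipping the accumulated votes at one; the following sketch assumes a fixed Gaussian width and support radius, which the paper does not specify.

import numpy as np

def gamma_map(matched_positions, shape, sigma=2.0, radius=6):
    # Truncated Gaussian-mixture vote of Eq. (15).  matched_positions is a list
    # of (s, t) locations in the p-th image predicted by the labels of the other
    # view; shape = (H, W) of the p-th image.  sigma and radius are assumptions.
    H, W = shape
    g = np.zeros((H, W))
    ax = np.arange(-radius, radius + 1)
    yy, xx = np.meshgrid(ax, ax, indexing="ij")
    kernel = np.exp(-(yy ** 2 + xx ** 2) / (2.0 * sigma ** 2))
    for s, t in matched_positions:
        s, t = int(round(s)), int(round(t))
        y0, y1 = max(0, s - radius), min(H, s + radius + 1)
        x0, x1 = max(0, t - radius), min(W, t + radius + 1)
        if y0 >= y1 or x0 >= x1:
            continue  # vote falls entirely outside the image
        g[y0:y1, x0:x1] += kernel[y0 - (s - radius):y1 - (s - radius),
                                  x0 - (t - radius):x1 - (t - radius)]
    return np.minimum(g, 1.0)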
Our matching framework combines all the costs described above. We now present our iterative optimization algorithm to minimize the costs.
3 Iterative MV Optimization

It is intractable to minimize the matching and visibility costs simultaneously. Therefore, our optimization procedure iterates between two steps: 1) the M-step estimates matching given visibility, and 2) the V-step estimates visibility given matching. Before each iteration, we estimate the fundamental matrix F by the normalized 8-point algorithm with RANSAC, followed by the gold standard algorithm that uses the Levenberg-Marquardt algorithm to minimize the geometric distance [11]. Then, we use F to reject the outliers from the matching result of the previous iteration and obtain a set of inliers as the initial labeled data points. The iterations stop when the cost difference between two consecutive iterations is smaller than a threshold, which means that the current matching result is already quite stable. The whole iterative optimization procedure is summarized in Alg. 1.
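As an illustration of the per-iteration outlier rejection, OpenCV's findFundamentalMat already wraps the normalized 8-point algorithm inside RANSAC; the epipolar-distance threshold below is our assumption, and the subsequent gold standard refinement is omitted.

import cv2

def reject_outliers(pts1, pts2, thresh=1.5):
    # Estimate F with RANSAC and keep only the matches whose distance to their
    # epipolar line is below `thresh` pixels.
    # pts1, pts2: (n, 2) float arrays of corresponding points.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, thresh, 0.99)
    if F is None:
        return None, pts1, pts2  # estimation failed; keep everything
    inliers = mask.ravel().astype(bool)
    return F, pts1[inliers], pts2[inliers]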
Algorithm 1. The complete procedure
1. Compute the depth and occlusion boundary images and feature vectors (Sec. 5).
2. Compute sparse matching by SIFT and the confidence penalty τ, then interpolate the results from sparse matching with depth information to obtain an initial solution (Subsec. 5.1).
3. Learn the affinity matrices $W^1$ and $W^2$ (Sec. 4).
4. while (cost change between two iterations ≥ threshold):
   (a) Estimate the fundamental matrix F, and reject outliers to get a subset as labeled data (Sec. 3),
   (b) Compute the parameters for the similarity cost function ρ and epipolar cost function d (Subsec. 5.2 and 2.3),
   (c) Estimate matching given visibility (Subsec. 3.1),
   (d) Compute the γ map (Subsec. 2.4),
   (e) Estimate visibility given matching (Subsec. 3.2).
3.1 M-Step: Estimation of Matching Given Visibility

Actually, the visibility term $C_v$ imposes two kinds of constraints on the matching $\hat{Y}$ given the visibility $O$: First, for each pixel $x^p_i$ in the $p$-th image, it should not match the invisible (occluded) points in the other image. Second, for each visible pixel in the $q$-th image, at least one pixel in the $p$-th image should match its nearby points. The first restriction is a local constraint that is easy to satisfy. However, the second constraint is a global one on the matching of all points, which is implicitly enforced in the matching process. Therefore, in this step, we approximate the visibility term by considering only the local constraint [12], which means that some possible values for a label vector, corresponding to the occluded region, have higher costs than the other possible values. This variation of the cost can be incorporated into the similarity function $\rho^p_i(y)$ in $C_d$. Let $\hat{Y} = \left( (\hat{Y}^1)^T, (\hat{Y}^2)^T \right)^T$. Summing up all the costs and considering the two images together, our cost function is
$$C_M\left(\hat{Y}\right) = \sum_{p=1,2} \left( \lambda_l C^p_l + \lambda_s C^p_s + \lambda_d C^p_d + \lambda_r C^p_r + \lambda_g C^p_g \right) + \left\|\hat{Y}\right\|^2, \qquad (16)$$
where $\|\hat{Y}\|^2$ is a small regularization term to avoid reaching degenerate situations. Fixing $O^1$ and $O^2$, cost minimization is done by setting the derivative with respect to $\hat{Y}$ to zero, since the second derivative is a positive definite matrix.

3.2 V-Step: Estimation of Visibility Given Matching

After achieving a matching, we can recompute the $\gamma$ map (Subsec. 2.4). Let $O = \left( (O^1)^T, (O^2)^T \right)^T$. Then, summing up all the costs and considering the two images together, our cost function is
$$C_V(O) = \sum_{p=1,2} \left( \lambda_l C^p_l + \lambda_s C^p_s + \lambda_d C^p_d + \lambda_r C^p_r + \lambda_g C^p_g + \lambda_v C^p_v \right) + \left\|O\right\|^2, \qquad (17)$$
where $\|O\|^2$ is a small regularization term. Now, for fixed $\hat{Y}^1$ and $\hat{Y}^2$, cost minimization is done by setting the derivative with respect to $O$ to zero, since the second derivative is a positive definite matrix. Since $W^p$ is very sparse, the coefficient matrix of the system of linear equations is also very sparse in the above two steps. We use a Gauss-Seidel solver or a conjugate gradient method on the GPU [13], which can solve in parallel a large sparse system of linear equations very efficiently. From the way $W^p$ is defined in Sec. 4 and the cost functions defined in Eq. 16 and Eq. 17, we can derive that the coefficient matrix is strictly diagonally dominant and positive definite. Hence, both Gauss-Seidel and conjugate gradient converge to the solution of the linear system with a theoretical guarantee.
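Because the coefficient matrix is symmetric positive definite, either solver applies; with SciPy, for example, one coordinate of the label (or visibility) field could be obtained as in the following CPU sketch of the linear solve only, not the GPU implementation used in the paper.

import scipy.sparse.linalg as spla

def solve_step(A, b, maxiter=1000):
    # Solve A x = b for one coordinate of the label (or visibility) field.
    # A is the sparse, symmetric positive definite coefficient matrix obtained
    # by setting the derivative of the quadratic cost to zero; b is the
    # corresponding right-hand side.
    x, info = spla.cg(A, b, maxiter=maxiter)
    if info != 0:
        raise RuntimeError("conjugate gradient did not converge (info=%d)" % info)
    return x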
4 Learning the Symmetric Affinity Matrix

We have presented our framework, which finds a solution by solving an optimization problem. Traditionally, for $W^1$ and $W^2$, we can directly define the pairwise affinity between two data points by normalizing their distance. However, as pointed out by [14], there exists no reliable approach for model selection if only very few labeled points are available, since it is very difficult to determine the optimal normalization parameters. Thus we prefer a more reliable and stable way to learn the affinity matrices.

Similar to the 3D visible surface manifold of Eq. 6 in Sec. 2.2, we make the smooth manifold and linear reconstruction assumptions for the manifold in the image space. We also assume that the label space and image space share the same local linear reconstruction weights. Then we can obtain the linear reconstruction weight matrix $W^p$ by minimizing the energy function $E_{W^p} = \sum_{x^p_i \in X^p} E_{x^p_i}$, where
$$E_{x^p_i} = \Big\| x^p_i - \sum_{x^p_j \in N(x^p_i)} w^p_{ij}\, x^p_j \Big\|^2. \qquad (18)$$
This objective function is similar to the one used in LLE [9], in which the low-dimensional coordinates are assumed to share the same linear reconstruction weights with the high-dimensional coordinates. The difference here is that we assume the sharing relation to be between the label vectors and the features [15]. Hence, the way we construct the whole graph is to first shear the whole graph into a series of overlapped linear patches and then paste them together. To avoid the undesirable contribution of negative weights, we further enforce the following constraint:
$$\sum_{x^p_j \in N(x^p_i)} w^p_{ij} = 1, \qquad w^p_{ij} \geq 0. \qquad (19)$$
From Eq. 18, $E_{x^p_i} = \sum_{x^p_j, x^p_k \in N(x^p_i)} w^p_{ij}\, G^i_{jk}\, w^p_{ik}$, where $G^i_{jk} = (x^p_i - x^p_j)^T (x^p_i - x^p_k)$. Obviously, the more similar $x^p_i$ is to $x^p_j$, the larger $w^p_{ij}$ will be. Also, $w^p_{ij}$ and $w^p_{ji}$ should be the same, since they both correspond to the affinity relation between $x^p_i$ and $x^p_j$. However, the above constraints neither enforce nor optimize for this characteristic, and the hard constraint $w^p_{ij} = w^p_{ji}$ may result in a violation of Eq. 19. Hence, we add a
soft penalty term $\sum_{ij} \left( w^p_{ij} - w^p_{ji} \right)^2$ to the objective function. Thus the reconstruction weights of each data point can be obtained by solving the following quadratic programming (QP) problem:
$$\min_{W^p} \; \sum_{x^p_i \in X^p} \sum_{x^p_j, x^p_k \in N(x^p_i)} w^p_{ij}\, G^i_{jk}\, w^p_{ik} \;+\; \kappa \sum_{ij} \left( w^p_{ij} - w^p_{ji} \right)^2 \qquad (20)$$
$$\text{s.t.} \quad \forall x^p_i \in X^p, \quad \sum_{x^p_j \in N(x^p_i)} w^p_{ij} = 1, \quad w^p_{ij} \geq 0.$$
After all the reconstruction weights are computed, two sparse matrices can be constructed by $W^p = \left( w^p_{ij} \right)$, while letting $w^p_{ii} = 0$ for all $x^p_i$. In our experiments, $W^p$ is almost symmetric and we further update it by $W^p \leftarrow \frac{1}{2}\left( (W^p)^T + W^p \right)$. Since the soft constraint has made $W^p$ similar to $(W^p)^T$, this update changes $W^p$ only slightly, and will not lead to unreasonable artifacts. To achieve a speedup, we can first partition the graph into several connected components by the depth information and super-pixel over-segmentation on the RGB image, break down the large QP problem into several smaller QP problems with one QP for each connected component, and then solve them one by one.
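The "warm start" mentioned later in Sec. 6 corresponds to the classical equality-constrained LLE weight solution (sum-to-one, but without the nonnegativity and symmetry terms); a per-point sketch is given below, with the regularization constant being our own choice. The nonnegativity constraint and the symmetry penalty of Eq. (20) would then be handled by the active set QP solver.

import numpy as np

def lle_warm_start(x_i, neighbors, reg=1e-3):
    # Closed-form LLE weights for one data point: minimize w^T G w subject to
    # sum(w) = 1, where G_jk = (x_i - x_j)^T (x_i - x_k) is the local Gram
    # matrix appearing in Eq. (20).  The regularization constant is our choice.
    Z = x_i[None, :] - neighbors          # (k, d) difference vectors
    G = Z @ Z.T                           # local Gram matrix
    G = G + reg * np.trace(G) * np.eye(len(G))
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()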
5 More Details

The feature vectors are defined as RGB color. For each image, we recover the occlusion boundaries and depth ordering in the scene. The method in [16] is used to learn to identify and label occlusion boundaries using the traditional edge and region cues together with 3D surface and depth cues. Then, from just a single image, we obtain a depth estimate and the occlusion boundaries of free-standing structures in the scene. We append the depth value to the feature vector.

5.1 Label Initialization by Depth

We use SIFT [4] and a nearest neighbor classifier to obtain an initial matching. For robustness, we perform a one-to-one cross consistency check, which matches points of the first image to the second image, and inversely matches points of the second image to the first image. Only the best matched pairs consistent in both directions are retained. To avoid errors on the occlusion boundary due to the similar colors of background and foreground, we filter the sparse matching results and reject all pairs that are too close to the occlusion boundaries. Taking the remaining pairs as seed points, with the depth information, region growing is used to achieve an initial dense matching [7]. Then, the remaining unmatched part is interpolated. Assuming that nearby pixels in the same partition lie on a planar surface, we estimate the homography transformation between two corresponding regions in the two images. With the estimated homography, the unknown regions are labeled and the occlusion regions are also estimated.
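With OpenCV, the one-to-one cross consistency check can be delegated to the brute-force matcher's crossCheck option; the sketch below assumes OpenCV 4.4 or later, where SIFT lives in the main module, and is only an illustration of this step.

import cv2

def cross_checked_sift_matches(img1, img2):
    # Sparse SIFT matches kept only when the best match agrees in both
    # directions (one-to-one cross consistency check).
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = [kp1[m.queryIdx].pt for m in matches]
    pts2 = [kp2[m.trainIdx].pt for m in matches]
    return pts1, pts2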
5.2 Computing the Similarity Cost Function

As mentioned in Sec. 2.1, the continuous-valued similarity cost function $\rho^p_i(y)$ represents the difference between the point $x^p_i$ and the matching point, characterizing how suitable it is for $x^p_i$ to have label $y = (v, h)^T$. Since our algorithm works with some labeled data in a semi-supervised manner through the consistency cost $C_l$, the local cost $C_d$ plays only a secondary role. Hence, unlike traditional unsupervised matching [12], our framework does not rely heavily on the similarity function $\rho^p_i(y)$. Therefore, for efficient computation, we just sample some values for integer combinations of $h$ and $v$ to compute $\rho^p_i(y) = \exp\left(-\frac{\|x^p_i - x^q_j\|^2}{2\sigma^2}\right)$. We normalize the largest sampled value to 1, and then fit $\rho^p_i(y)$ with a continuous and differentiable quadratic function $\rho^p_i(y) = \frac{(v - v_o)^2 + (h - h_o)^2}{2\sigma^2}$, where $(v_o, h_o)$ and $\sigma$ are the center and spread of the parabola for $x^p_i$.
6 Experiments

In all our experiments, performed on a desktop PC with an Intel Core 2 Duo E6400 CPU and an NVIDIA GeForce 8800 GTX GPU, the number of iterations is always less than 9 before stopping, and the computation time is less than 41 seconds for each image pair, excluding the time spent on estimating the depth for a single image by [16]. We set the parameters to favor $C_l$ and $C_g$ in the M-step and $C_v$ in the V-step. Since there is no ground truth for searching for good parameter values, we tune the parameters manually and then fix them for all experiments. To solve the QP problem for $W^p$, we first compute a “warm start” without involving the positivity constraints using the method in [9], and then run the active set algorithm on this “warm start”, which converges rapidly in just a few iterations. We demonstrate our algorithm on various data sets in Fig. 2, most of which have very complex shapes with similar colors that make the matching problem very challenging. Compared with [17], our method can produce more detail, as shown in Fig. 1. In the figures of the matching results, the intensity value is set to the norm of the label vector, that is, $\|y\|$, and only visible matchings with $o \geq 0.5$ are shown.
(a) Input image [3]
(b) Result from [17]
(c) Result of our method
Fig. 1. Comparison with [17]. Attention should be paid to the fine details outlined by red circles. Also, our method can correctly detect the occluded region and does not lead to block artifacts that graph cut methods typically give. Subfig. (b) is extracted from Fig. 7 of [17].
(a) City Hall Brussels
(b) Bookshelf
(c) Valbonne Church
(d) City Hall Leuven
(e) Temple
(f) Shoe
Fig. 2. Example output on various datasets. In each subfigure, the first row shows the input images and the second row shows the corresponding outputs by our method.
6.1 Application to 3-View Reconstruction

In our target application, we have no information about the camera. To produce a 3D reconstruction result, we use three images to recover the motion information. Five examples are shown in Fig. 3. The proposed method is used to compute the point correspondence between the first and second images, as well as the second and third images. Taking the second image as the bridge, we obtain feature tracks across the three views. As in [18], these feature tracks across three views are used to obtain a projective reconstruction by [19], which is metric upgraded inside a RANSAC framework, followed by bundle adjustment [11]. Note that feature tracks with too large reprojection errors are considered outliers and are not shown in the 3D reconstruction results in Fig. 3.
(a) Temple
(d) City Hall Leuven
(b) Valbonne Church
(c) City Hall Brussels
(e) Semper Statue Dresden
Fig. 3. 3D reconstruction from three views. In each subfigure, the first row contains the three input images and the second row contains two different views of the 3D reconstruction result. Points are shown without texture color for easy visualization of the reconstruction quality.
7 Conclusion

In this work, we propose a graph-based semi-supervised symmetric matching framework to perform dense matching between two uncalibrated images. Possible future extensions include a more systematic study of the parameters and an extension to multi-view stereo. Moreover, we will also pursue a full GPU implementation of our algorithm, since we suspect that the current running time is mostly spent on data communication between the CPU and the GPU.

Acknowledgements. This work was supported by research grant N-HKUST602/05 from the Research Grants Council (RGC) of Hong Kong and the National Natural Science Foundation of China (NSFC), and research grants 619006 and 619107 from the RGC of Hong Kong. We would like to thank the anonymous reviewers for constructive comments that helped improve this work. The bookshelf data set is from J. Matas; the City Hall Brussels, City Hall Leuven and Semper Statue Dresden data sets are from C. Strecha; the Valbonne Church data set is from the Oxford Visual Geometry Group, while the temple data set is from [1] and the shoe data set is from [20].
References
1. Seitz, S.M., Curless, B., Diebel, J., Scharstein, D., Szeliski, R.: A comparison and evaluation of multi-view stereo reconstruction algorithms. In: Proceedings of IEEE Conference Computer Vision and Pattern Recognition, vol. 1, pp. 519–528 (2006)
2. Scharstein, D., Szeliski, R.: A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision 47(1-3), 7–42 (2002)
3. Strecha, C., Fransens, R., Gool, L.: Wide-baseline stereo from multiple views: a probabilistic account. In: Proceedings of IEEE Conference Computer Vision and Pattern Recognition, vol. 1, pp. 552–559 (2004)
4. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60(2), 91–110 (2004)
5. Toshev, A., Shi, J., Daniilidis, K.: Image matching via saliency region correspondences. In: Proceedings of IEEE Conference Computer Vision and Pattern Recognition, pp. 1–8 (2007)
6. Lhuillier, M., Quan, L.: Match propagation for image-based modeling and rendering. IEEE Transaction on Pattern Analysis and Machine Intelligence 24(8), 1140–1146 (2002)
7. Kannala, J., Brandt, S.S.: Quasi-dense wide baseline matching using match propagation. In: Proceedings of IEEE Conference Computer Vision and Pattern Recognition, pp. 1–8 (2007)
8. Roth, S., Black, M.J.: Fields of experts: A framework for learning image priors. In: Proceedings of IEEE Conference Computer Vision and Pattern Recognition, vol. 2, pp. 860–867 (2005)
9. Roweis, S., Saul, L.: Nonlinear dimensionality reduction by locally linear embedding. Science 290, 2323–2326 (2000)
10. Ham, J., Lee, D., Saul, L.K.: Learning high dimensional correspondences from low dimensional manifolds. In: Proceedings of International Conference on Machine Learning (2003)
11. Hartley, R., Zisserman, A.: Multiple view geometry in computer vision, 2nd edn. Cambridge University Press, Cambridge (2004)
12. Sun, J., Li, Y., Kang, S.B., Shum, H.Y.: Symmetric stereo matching for occlusion handling. In: Proceedings of IEEE Conference Computer Vision and Pattern Recognition, vol. 2, pp. 399–406 (2005)
13. Krüger, J., Westermann, R.: Linear algebra operators for GPU implementation of numerical algorithms. ACM Transactions on Graphics, 908–916 (2003)
14. Zhou, D., Bousquet, O., Lal, T.N., Weston, J., Schölkopf, B.: Learning with local and global consistency. Neural Information Processing Systems 16, 321–328 (2004)
15. Wang, F., Wang, J., Zhang, C., Shen, H.: Semi-supervised classification using linear neighborhood propagation. In: Proceedings of IEEE Conference Computer Vision and Pattern Recognition, pp. 160–167 (2006)
16. Hoiem, D., Stein, A., Efros, A., Hebert, M.: Recovering occlusion boundaries from a single image. In: Proceedings of IEEE International Conference on Computer Vision (2007)
17. Tola, E., Lepetit, V., Fua, P.: A fast local descriptor for dense matching. In: Proceedings of IEEE Conference Computer Vision and Pattern Recognition, pp. 1–8 (2008)
18. Lhuillier, M., Quan, L.: A quasi-dense approach to surface reconstruction from uncalibrated images. IEEE Transaction on Pattern Analysis and Machine Intelligence 27(3), 418–433 (2005)
19. Quan, L.: Invariant of six points and projective reconstruction from three uncalibrated images. IEEE Transactions on Pattern Analysis and Machine Intelligence 17(1), 34–46 (1995)
20. Xiao, J., Chen, J., Yeung, D.Y., Quan, L.: Structuring visual words in 3D for arbitrary-view object localization. In: Proceedings of European Conference on Computer Vision (2008)
SIFT Flow: Dense Correspondence across Different Scenes

Ce Liu¹, Jenny Yuen¹, Antonio Torralba¹, Josef Sivic², and William T. Freeman¹,³

¹ Massachusetts Institute of Technology
  {celiu,jenny,torralba,billf}@csail.mit.edu
² INRIA/Ecole Normale Supérieure
  [email protected]
³ Adobe Systems
Abstract. While image registration has been studied in different areas of computer vision, aligning images depicting different scenes remains a challenging problem, closer to recognition than to image matching. Analogous to optical flow, where an image is aligned to its temporally adjacent frame, we propose SIFT flow, a method to align an image to its neighbors in a large image collection consisting of a variety of scenes. For a query image, histogram intersection on a bag-of-visual-words representation is used to find the set of nearest neighbors in the database. The SIFT flow algorithm then consists of matching densely sampled SIFT features between the two images, while preserving spatial discontinuities. The use of SIFT features allows robust matching across different scene/object appearances and the discontinuity-preserving spatial model allows matching of objects located at different parts of the scene. Experiments show that the proposed approach is able to robustly align complicated scenes with large spatial distortions. We collect a large database of videos and apply the SIFT flow algorithm to two applications: (i) motion field prediction from a single static image and (ii) motion synthesis via transfer of moving objects.
1 Introduction
Image alignment and registration is a central topic in computer vision. For example, aligning different views of the same scene has been studied for the purpose of image stitching [2] and stereo matching [3]. The considered transformations are relatively simple (e.g. parametric motion for image stitching and 1D disparity for stereo), and images to register are typically assumed to have the same pixel value after applying the geometric transformation. The image alignment problem becomes more complicated for dynamic scenes in video sequences, as is the case of optical flow estimation [4,5,6], shown in Fig. 1(1). The correspondence problem between two adjacent frames in the video
WILLOW project-team, Laboratoire d’Informatique de l’Ecole Normale Superieure, CNRS/ENS/INRIA UMR 8548.
(a) Query image
(b) Best match
(c) Best match warped to (a) (d) Displacement field
Fig. 1. Scene alignment using SIFT flow. (a) and (b) show images of similar scenes. (b) was obtained by matching (a) to a large image collection. (c) shows image (b) warped to align with (a) using the estimated dense correspondence field. (d) Visualization of pixel displacements using the color-coding scheme of [1]. Note the variation in scene appearance between (a) and (b). The visual resemblance of (a) and (c) demonstrates the quality of the scene alignment.
is often formulated as an estimation of a 2D flow field. The extra degree of freedom (from 1D in stereo to 2D in optical flow) introduces an additional level of complexity. Typical assumptions in optical flow algorithms include brightness constancy and piecewise smoothness of the pixel displacement field. Image alignment becomes even more difficult in the object recognition scenario, where the goal is to align different instances of the same object category, as illustrated in Fig. 1(2). Sophisticated object representations [7,8,9,10] have been developed to cope with the variations in objects’ shape and appearance. However, the methods still typically require objects to be salient and large, visually very similar and with limited background clutter. In this work, we are interested in a seemingly impossible task of aligning images depicting different instances of the same scene category. The two images to match may contain different object instances captured from different viewpoints, placed at different spatial locations, or imaged at different scales. In addition, some objects present in one image might be missing in the other image. Due to these issues the scene alignment problem is extremely challenging, as illustrated in Fig. 1(3) and 1(4).
Inspired by the recent progress in large image database methods [11,12,13], and the traditional optical flow estimation for temporally adjacent (and thus visually similar) frames, we create a large database so that for each query image we can retrieve a set of visually similar scenes. Next, we introduce a new alignment algorithm, dubbed SIFT flow, to align the query image to each image in the retrieved set. In the SIFT flow, a SIFT descriptor [14] is extracted at each pixel to characterize local image structures and encode contextual information. A discrete, discontinuity preserving, optical flow algorithm is used to match the SIFT descriptors between two images. The use of SIFT features allows robust matching across different scene/object appearances and the discontinuity-preserving spatial model allows matching of objects located at different parts of the scene. As illustrated in Fig. 1(3) and Fig. 1(4), the proposed alignment algorithm is able to estimate dense correspondence between images of complex scenes. We apply SIFT flow to two original applications, which both rely on finding and aligning images of similar scenes in a large collection of images or videos. The first application is motion prediction from a single static image, where a motion field is hallucinated for an input image using a large database of videos. The second application is motion transfer, where we animate a still image using object motions transferred from a similar moving scene. The rest of the paper is organized as follows: section 2 starts with introducing the concept of SIFT flow and describing the collected video database. Subsection 2.1 then describes the image representation for finding initial candidate sets of similar scenes. Subsection 2.2 details the SIFT flow alignment algorithm and subsection 2.3 shows some image alignment results. Applications of scene alignment to motion prediction and motion transfer are given in section 3.
2 Scene Alignment Using Flow
We are interested in finding dense correspondences between a query image and its nearest neighbours found in a large database of images. Ideally, if the database is large enough to contain almost every possible image in the world, the nearest neighbours would be visually similar to the query image. This motivates the following analogy with optical flow, where correspondence is sought between temporally adjacent (and thus visually similar) video frames:

    Dense sampling in time : optical flow
    Dense sampling in the space of all images : scene alignment using SIFT flow

In other words, as optical flow assumes dense sampling of the time domain to enable tracking, SIFT flow assumes dense sampling in (some portion of) the space of natural images to enable scene alignment. In order to make this analogy possible we collect a large database consisting of 102,206 frames from 731 videos. Analogous to the time domain, we define the “temporal frames” to a query image as the N nearest neighbors in this database. The SIFT flow is then established between the query image and the N nearest neighbors. These two steps will be discussed in the next two subsections.
Fig. 2. Visualization of SIFT descriptors. We compute the SIFT descriptors on a regular dense grid. For each pixel in an image (a), the descriptor is a 128-D vector. The first 16 components are shown in (b) in a 4 × 4 image grid, where each component is the output of a signed oriented filter. The SIFT descriptors are quantized into visual words in (c). In order to improve the clarity of the visualization by mapping similar cluster centers to similar colors, cluster centers have been sorted according to the first principal component of the SIFT descriptor obtained from a subset of our dataset. An alternative visualization of the continuous values of the SIFT descriptor is shown in (d). This visualization is obtained by mapping the first three principal components of each descriptor into the principal components of the RGB color space (i.e. the first component is mapped into R+G+B, the second is mapped into R-G and the third into R/2 + G/2-B). We will use (d) as our visualization of SIFT descriptors for the rest of the paper. Notice that visually similar image regions have similar colors.
2.1 Scene Matching with Histogram Intersection
We use a fast indexing technique in order to gather candidate frames that will be further aligned using the SIFT flow algorithm to match the query image. As a fast search, we use spatial histogram matching of quantized SIFT [14,15]. First, we build a dictionary of 500 visual words [16] by running K-means on 5000 SIFT descriptors randomly selected out of all the video frames in our dataset. Then, the visual words are binned using a two-level spatial pyramid [15,17]. Fig. 2 shows visualizations of the high dimensional SIFT descriptors. The similarity between two images is measured by histogram intersection. For each input image, we select the top 20 nearest neighbors. Matching is performed on all the frames from all the videos in our dataset. We then apply SIFT flow between the input image and the top 20 candidate neighbors and re-rank the neighbors based on the alignment score (described below). The frame with the best alignment score is chosen from each video. This approach is well-matched to the similarity obtained by SIFT flow (described below) as it uses the same basic features (SIFT descriptors) and spatial information is loosely represented (by means of the spatial histograms).
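Once every frame is represented as a flat spatial-pyramid histogram of visual-word counts, histogram intersection and the top-20 retrieval reduce to a few lines; the sketch below assumes precomputed histograms of equal length and is only an illustration of this step.

import numpy as np

def histogram_intersection(h1, h2):
    # Similarity between two spatial-pyramid visual-word histograms, given as
    # flat nonnegative count vectors of equal length.
    return float(np.minimum(h1, h2).sum())

def top_k_neighbors(query_hist, database_hists, k=20):
    # Indices of the k database frames most similar to the query.
    scores = np.array([histogram_intersection(query_hist, h)
                       for h in database_hists])
    return np.argsort(-scores)[:k]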
2.2 The Flow Algorithm
As shown in Fig. 1, images of distinct scenes can be drastically different in both RGB values and their gradients. In addition, the magnitude of pixel displacements between potentially corresponding objects or scene parts can be much larger than typical magnitudes of motion fields for temporal sequences. As a
result, the brightness constancy and coarse-level zero flow assumptions common in classical optical flow [4,5,6] are no longer valid. To address these issues, we modify the standard optical flow assumptions in the following way. First, we assume SIFT descriptors [14] extracted at each pixel location (instead of raw pixel values) are constant with respect to the pixel displacement field. As SIFT descriptors characterize view-invariant and brightness-independent image structures, matching SIFT descriptors allows establishing meaningful correspondences across images with significantly different image content. Second, we allow a pixel in one image to match any other pixel in the other image. In other words, the pixel displacement can be as large as the image itself. Note, however, that we still want to encourage smoothness (or spatial coherence) of the pixel displacement field by encouraging close-by pixels to have similar displacements. We formulate the correspondence search as a discrete optimization problem on the image lattice [18,19] with the following cost function
$$E(w) = \sum_p \left\| s_1(p) - s_2(p + w) \right\|_1 + \frac{1}{\sigma^2} \sum_p \left( u^2(p) + v^2(p) \right) + \sum_{(p,q)\in\varepsilon} \Big( \min\big(\alpha |u(p) - u(q)|,\, d\big) + \min\big(\alpha |v(p) - v(q)|,\, d\big) \Big), \qquad (1)$$
where $w(p) = (u(p), v(p))$ is the displacement vector at pixel location $p = (x, y)$, $s_i(p)$ is the SIFT descriptor extracted at location $p$ in image $i$, and $\varepsilon$ is the spatial neighborhood of a pixel (here a 4-neighbourhood structure is used). Parameters $\sigma = 300$, $\alpha = 0.5$ and $d = 2$ are fixed in our experiments. The optimization is performed using efficient belief propagation [22]. In the above objective function, an L1 norm is employed in the first term to account for outliers in SIFT matching, and a thresholded L1 norm is used in the third, regularization term to model discontinuities of the pixel displacement field. In contrast to the rotation-invariant robust flow regularizer used in [21], the regularization term in our model is decoupled and rotation dependent so that the computation is feasible for large displacements. Unlike [19], where a quadratic regularizer is used, the thresholded L1 regularizer in our model can preserve discontinuities. As the regularizer is decoupled for $u$ and $v$, the complexity of the message passing algorithm can be reduced from $O(L^3)$ to $O(L^2)$ using the distance transform [22], where $L$ is the size of the search window. This is a significant speedup since $L$ is large (we allow a pixel in the query image to match to an 80×80 neighborhood). We also use the bipartite message passing scheme and a multi-grid as proposed in [22]. The message passing converges in 60 iterations for a 145×105 image, which takes about 50 seconds on a quad-core Intel Xeon 2.83 GHz machine with 16 GB memory using a C++ implementation.
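For reference, the objective of Eq. (1) can be evaluated for a given integer flow field as in the sketch below; s1 and s2 are dense per-pixel SIFT descriptor arrays, the (u, v) indexing convention is our assumption, and out-of-bounds targets are simply skipped.

import numpy as np

def sift_flow_energy(s1, s2, u, v, sigma=300.0, alpha=0.5, d=2.0):
    # Evaluate E(w) of Eq. (1) for an integer flow w(p) = (u(p), v(p)).
    # s1, s2: (H, W, 128) dense SIFT descriptor arrays; u, v: (H, W) int arrays
    # holding the horizontal and vertical displacement of every pixel.
    H, W, _ = s1.shape
    data = small = 0.0
    for y in range(H):
        for x in range(W):
            ty, tx = y + v[y, x], x + u[y, x]
            if 0 <= ty < H and 0 <= tx < W:
                data += np.abs(s1[y, x] - s2[ty, tx]).sum()      # L1 data term
            small += (u[y, x] ** 2 + v[y, x] ** 2) / sigma ** 2  # small-displacement prior
    smooth = 0.0
    for f in (u, v):  # truncated L1 smoothness, decoupled in u and v
        smooth += np.minimum(alpha * np.abs(np.diff(f, axis=0)), d).sum()
        smooth += np.minimum(alpha * np.abs(np.diff(f, axis=1)), d).sum()
    return data + small + smooth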
2.3 Scene Alignment Results
We conducted several experiments to test the SIFT flow algorithm on our video database. One frame from each of the 731 videos was selected as the query image and histogram intersection matching (section 2.1) was used to find its 20 nearest
Fig. 3. SIFT flow for image pairs depicting the same scene/object. (a) shows the query image and (b) its densely extracted SIFT descriptors. (c) and (d) show the best (lowest energy) match from the database and its SIFT descriptors, respectively. (e) shows (c) warped onto (a) i.e. SIFT flow. (f) shows the warped SIFT image (d) onto (b) w.r.t. the SIFT flow. (g) shows the estimated displacement field i.e. SIFT flow with the minimum alignment energy shown to the right.
neighbors, excluding all other frames from the query video. The scene alignment algorithm (section 2.2) was then used to estimate the dense correspondence (represented as a pixel displacement field) between the query image and each of its neighbors. The best matches are the ones with the minimum energy defined by Equation (1). Alignment examples are shown in Figures 3–5. The original query image and its extracted SIFT descriptors are shown in columns (a) and (b). The minimum energy match (out of the 20 nearest neighbors) and its extracted SIFT descriptors are shown in columns (c) and (d). To investigate the quality of the pixel displacement field, we use the computed displacements to warp the best match onto the query image. The warped image and warped SIFT descriptor image are shown in columns (e) and (f). The visual similarity between (a) and (e), and (b) and (f) demonstrates the quality of the matching. Finally, the displacement field is visualized using color-coding adapted from [1] in column (g) with the minimum alignment energy shown to the right. Fig. 3 shows examples of matches between frames coming from the same video sequence. The almost perfect matching in row (1) and (2) demonstrates that SIFT flow reduces to classical optical flow when the two images are temporally adjacent frames in a video sequence. In row (3)–(5), the query and the best match are more distant within the video sequence, but the alignment algorithm can still match them reasonably well. Fig. 4 shows more challenging examples, where the two frames come from different videos while containing the same type of objects. The alignment algorithm
Fig. 4. SIFT flow computed for image pairs depicting the same scene/object category where the visual correspondence is obvious
Fig. 5. SIFT flow for challenging examples where the correspondence is not obvious
Fig. 6. Some failure examples with incorrect correspondences
Ranking from histogram intersection
Ranking from the matching score of SIFT flow
Fig. 7. Alignment typically improves ranking of the nearest neighbors. Images enclosed by the red rectangle are the top 10 nearest neighbors found by histogram intersection, displayed in a scan-line order (left to right, top to bottom). Images enclosed by the green rectangle are the top 10 nearest neighbors ranked by the minimum energy obtained by the alignment algorithm. The warped nearest neighbor image is displayed to the right of the original image. Note how the returned images are re-ranked according to the size of the depicted vehicle by matching the size of the bus in the query.
attempts to match the query image by transforming the candidate image. Note the significant changes in viewpoint between the query and the match in examples (8), (9), (11), (13), (14) and (16). Note also that some discontinuities in the flow field are caused by errors in SIFT matching. The square shaped discontinuities are a consequence of the decoupled regularizer on the horizontal and vertical components of the pixel displacement vector. Fig. 5 shows alignment results for examples with no obvious visual correspondence. Despite the lack of direct visual correspondence, the scene alignment algorithm attempts to rebuild the house (17), change the shape of the door into a circle (18) or reshuffle boats (20). Some failure cases are shown in Fig. 6. Typically, these are caused
by the lack of visually similar images in the video database. Note that, typically, alignment improves ranking of the K-nearest neighbors. This is illustrated in Fig. 7.
3 Applications
In this section we demonstrate two applications of the proposed scene matching algorithm: (1) motion field prediction from a single image using motion priors, and (2) motion synthesis via transfer of moving objects common in similar scenes.

3.1 Predicting Motion Field from a Single Image
The goal is, given a single static image, to predict what motions are plausible in the image. This is similar to the recognition problem, but instead of assigning a label to each pixel, we want to assign possible motions. We built a scene retrieval infrastructure to query still images over a database of videos containing common moving objects. The database consists of sequences depicting common events, such as cars driving through a street and kids playing in a park. Each individual frame was stored as a vector of word-quantized SIFT features, as described in section 2.1. In addition, we store the temporal motion field between every two consecutive frames of each video. We compare two approaches for predicting the motion field for the query still image. The first approach consists of directly transferring the motion of the closest video frame matched in the database. Using the SIFT-based histogram matching (section 2.1), we can retrieve very similar video frames that are roughly spatially aligned. For common events such as cars moving forward on a street, the motion prediction can be quite accurate given enough samples in the database. The second approach refines the coarse motion prediction described above using the dense correspondences obtained by the alignment algorithm (section 2.2). In particular, we compute the SIFT flow from the retrieved video frame to the query image and use the computed correspondence to warp the temporally estimated motion of the retrieved video frame. Figure 8 shows examples of predicted motion fields directly transferred from the top 5 database matches and the warped motion fields. Note that in simple cases the direct transfer is already quite accurate and the warping results in only minor refinements. While there are many improbable flow fields (e.g. a car moving upwards), each image can have multiple plausible motions: a car or a boat can move forward, in reverse, turn, or remain static. In any scene the camera motion can generate a motion field over the entire frame and objects can be moving at different velocities. Figure 9 shows an example of 5 motion fields predicted using our video database. Note that all the motion fields are different, but plausible.
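The two prediction strategies can be summarized with a short sketch. This is an illustrative re-implementation rather than the authors' code: retrieve_nearest_frames and sift_flow stand in for the histogram-based retrieval of section 2.1 and the alignment algorithm of section 2.2, and the warp simply pulls motion vectors through the estimated displacement field under an assumed (x, y) layout.

import numpy as np

def warp_by_flow(field, flow):
    # Nearest-neighbor warp of a per-pixel field by a displacement field.
    # Assumption: flow[..., 0] is the horizontal and flow[..., 1] the vertical displacement.
    h, w = field.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return field[src_y, src_x]

def predict_motion(query_image, database, retrieve_nearest_frames, sift_flow, top_n=5):
    predictions = []
    for frame in retrieve_nearest_frames(query_image, database, top_n):
        direct = frame.motion                          # approach 1: direct transfer of stored motion
        flow = sift_flow(frame.image, query_image)     # approach 2: refine with dense correspondence
        predictions.append((direct, warp_by_flow(direct, flow)))
    return predictions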
3.2 Quantitative Evaluation
Due to the inherent ambiguity of multiple plausible motions for each still image, we design the following procedure for quantitative evaluation. For each test video,
Fig. 8. Motion from a single image. The (a) original image, (b) matched frame from the video data set, (c) motion of (b), (d) warped and transferred motion field from (b), and (e) ground truth for (a). Note that the predicted motion in (d) is inferred from a single input still image, i.e. no motion signal is available to the algorithm. The predicted motion is based on the motion present in other videos with image content similar to the query image.
Fig. 9. Multiple motion field candidates. A still query image with its temporally estimated motion field (in the green frame) and multiple motion fields predicted by motion transfer from a large video database.
we randomly select a test frame and obtain a result set of top n inferred motion fields using our motion prediction method. Separately, we collect an evaluation set containing the temporally estimated motion (from video) for the test frame
(the closest to a ground truth we have) and 11 random motion fields taken from other scenes in our database, acting as distractors. We take each of the n inferred motion fields from the result set and compute their similarity (defined below) to the set of evaluation fields. The rank of the ground truth motion with respect to the random distractor motions is an indicator of how close the predicted motion is to the true motion estimated from the video sequence. Because there are many possible motions that are still realistic, we do this comparison with each of the top n motion fields within the result set and keep the highest ranking achieved. Finally, we repeat this evaluation ten times with a different randomly selected test frame for each test video and report the median of the rank score across the different trials. For this evaluation, we represent each motion field as a regular two dimensional motion grid filled with 1s where there is motion and 0s otherwise. The similarity between two motion fields is then defined as

S(M, N) \overset{def}{=} \sum_{(x,y) \in G} 1[ M(x, y) = N(x, y) ],   (2)
where M and N are two rectangular motion grids of the same size, and (x, y) is a coordinate pair within the spatial domain G of grids M and N. Figure 10a shows the normalized histogram of these rankings across 720 predicted motion fields from our video data set. Figure 10b shows the same evaluation on a subset of the data that includes 400 videos with mostly streets and cars. Notice how, for more than half of the scenes, the inferred motion field is ranked 1st, suggesting a close match to the temporally-estimated ground truth. Most other test examples are ranked within the top 5. Focusing on roads and cars gives even better results with 66% of test trials ranked 1st and even more test examples ranked within the top 5. Figure 10c shows the precision of the inferred motion (the percentage of test examples with rank 1) as a function of the size of the result set, comparing (i) direct motion field transfer (red circles) and (ii) warped motion field transfer using SIFT flow (blue stars).
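The evaluation protocol is easy to restate in code. The following snippet is only an illustration of Equation (2) and of the ranking step, assuming the motion fields have already been binarized into NumPy arrays of identical size.

import numpy as np

def similarity(M, N):
    # Equation (2): number of grid cells where the two binary motion grids agree.
    return int(np.sum(M == N))

def best_rank(inferred_fields, ground_truth, distractors):
    # Rank of the ground-truth field among the distractors, keeping the best
    # (lowest) rank achieved over the top-n inferred motion fields.
    best = len(distractors) + 1
    for P in inferred_fields:
        s_gt = similarity(P, ground_truth)
        rank = 1 + sum(similarity(P, D) > s_gt for D in distractors)
        best = min(best, rank)
    return best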
(Fig. 10 plots: (a) All Videos and (b) Street Videos show % instances vs. inference ranking (1-10); (c) shows precision vs. # top inferences considered, for direct transfer and warp.)
Fig. 10. Evaluation of motion prediction. (a) and (b) show normalized histograms of prediction rankings (result set size of 15). (c) shows the ranking precision as a function of the result set size.
(Fig. 11 match fractions shown above the images: 0.945, 0.928, 0.429, 0.255, 0.161, 0.068, 0.039, 0.011.)
Fig. 11. Motion instances where the predicted motion was not ranked closest to the ground truth. A set of random motion fields (blue) together with the predicted motion field (green, ranked 3rd). The number above each image represents the fraction of the pixels that were correctly matched by comparing the motion against the ground truth. In this case, some random motion fields appear closer to the ground truth than our prediction (green). However, our prediction also represents a plausible motion for this scene.
While histograms of ranks show that the majority of the inferred motions were ranked 1st, there is still a significant number of instances with lower rank. Figure 11 shows a false negative example, where the inferred motion field was not ranked top despite the reasonable output. Notice how the top ranked distractor fields are quite similar to our prediction, showing that, even when our prediction is not ranked 1st, we still produce realistic motion.
3.3 Motion Synthesis Via Object Transfer
We described above how to predict the direction and velocity of objects in a still image. Having a prior on what scenes look like over time also allows us to infer what objects (that might not be part of the still image) can possibly appear. For example, a car moving forward can appear in a street scene with an empty road, or a fish can start swimming in a fish tank scene. Based on this idea, we propose a method for synthesizing motions from a still image. The goal is to transfer moving objects from similar video scenes. In particular, given a still image q that is not part of any video in our database D, we identify and transfer moving objects from videos in D into q as follows:
1. Query D using the SIFT-based scene matching algorithm to retrieve the set of closest video frame matches F = {f_i | f_i is the i-th frame from a video in D} given the query image q.
2. For each frame f_i in F, we can synthesize a video sequence based on the still image q. The k-th frame of the synthesized video is generated as follows:
(a) Densely sample the motion from frame f_{i+k} to f_{i+k+1}.
(b) Construct frame q_k by transferring non-moving pixels from q and moving pixels from f_{i+k}.
(c) Apply Poisson editing [23] to blend the foreground (pixels from f_{i+k}) into the background composed of pixels from q.
Figure 12 shows examples of synthesized motions for three different scenes. Notice the variety of region sizes transferred and the seamless integration of objects into the new scenes.
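The steps above can be sketched as follows. This is not the authors' implementation: flow_fn is a placeholder for a dense motion estimator, the moving-pixel mask is obtained with an arbitrary magnitude threshold, and OpenCV's seamless cloning is used as a stand-in for the Poisson editing of [23]. Images are assumed to be 8-bit color arrays of identical size.

import cv2
import numpy as np

def synthesize_frame(q, frame, next_frame, flow_fn, motion_thresh=1.0):
    motion = flow_fn(frame, next_frame)                       # (a) dense motion from f_{i+k} to f_{i+k+1}
    moving = np.linalg.norm(motion, axis=2) > motion_thresh   # moving-pixel mask (threshold is an assumption)
    if not moving.any():
        return q.copy()
    mask = moving.astype(np.uint8) * 255                      # (b) pixels to transfer from f_{i+k}
    ys, xs = np.nonzero(moving)
    center = (int((xs.min() + xs.max()) // 2), int((ys.min() + ys.max()) // 2))
    # (c) blend the transferred foreground into the still background (Poisson-style)
    return cv2.seamlessClone(frame, q, mask, center, cv2.NORMAL_CLONE)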
Fig. 12. Motion synthesis via object transfer. Query image (a), the top video match (b), and representative frames from the synthesized sequence (c) obtained by transferring moving objects from the video to the still query image.
Some of the biggest challenges in creating realistic composites lie in estimating the correct size and orientation of the objects to introduce in the scene [24]. Our framework inherently takes care of these constraints by retrieving sequences that are visually similar to the query image. This enables the creation of realistic motion sequences from still images with a simple transfer of moving objects.
4 Conclusion
We have introduced the concept of SIFT flow and demonstrated its utility for aligning images of complex scenes. The proposed approach achieves good matching and alignment results despite significant differences in appearance and spatial layout of matched images. The goal of scene alignment is to find dense correspondence between similar structures (similar textures, similar objects) across different scenes. We believe that scene alignment techniques will be useful for various applications in both computer vision and computer graphics. We have illustrated the use of scene alignment in two original applications: (1) motion estimation from a single image and (2) video synthesis via the transfer of moving objects.
Acknowledgements
Funding for this work was provided by NGA NEGI-1582-04-0004, MURI Grant N00014-06-1-0734, NSF Career award IIS 0747120, NSF contract IIS-0413232, a National Defense Science and Engineering Graduate Fellowship, and gifts from Microsoft and Google.
References 1. Baker, S., Scharstein, D., Lewis, J., Roth, S., Black, M.J., Szeliski, R.: A database and evaluation methodology for optical flow. In: Proc. ICCV (2007) 2. Szeliski, R.: Image alignment and stiching: A tutorial. Foundations and Trends in Computer Graphics and Computer Vision 2(1) (2006) 3. Scharstein, D., Szeliski, R.: A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Intl. J. of Computer Vision 47(1), 7–42 (2002) 4. Horn, B.K.P., Schunck, B.G.: Determing optical flow. Artificial Intelligence 17, 185–203 (1981) 5. Lucas, B., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: Proceedings of the International Joint Conference on Artificial Intelligence, pp. 674–679 (1981) 6. Bruhn, A., Weickert, J., Schn¨ orr, C.: Lucas/Kanade meets Horn/Schunk: combining local and global optic flow methods. Intl. J. of Computer Vision 61(3), 211–231 (2005) 7. Belongie, S., Malik, J., Puzicha, J.: Shape context: A new descriptor for shape matching and object recognition. In: NIPS (2000) 8. Berg, A., Berg, T., Malik, J.: Shape matching and object recognition using low distortion correspondence. In: Proc. CVPR (2005) 9. Felzenszwalb, P., Huttenlocher, D.: Pictorial structures for object recognition. Intl. J. of Computer Vision 61(1) (2005) 10. Winn, J., Jojic, N.: Locus: Learning object classes with unsupervised segmentation. In: Proc. ICCV, pp. 756–763 (2005) 11. Russell, B.C., Torralba, A., Murphy, K.P., Freeman, W.T.: LabelMe: a database and web-based tool for image annotation. Intl. J. of Computer Vision 77(1-3), 157–173 (2008) 12. Hays, J., Efros, A.A.: Scene completion using millions of photographs. ACM Transactions on Graphics (SIGGRAPH 2007) 26(3) (2007) 13. Russell, B.C., Torralba, A., Liu, C., Fergus, R., Freeman, W.T.: Object recognition by scene alignment. In: NIPS (2007) 14. Lowe, D.G.: Object recognition from local scale-invariant features. In: Proc. ICCV 1999, Kerkyra, Greece, pp. 1150–1157 (1999) 15. Lazebnik, S., Schmid, C., Ponce, J.: Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In: Proc. CVPR, vol. II, pp. 2169–2178 (2006) 16. Sivic, J., Zisserman, A.: Video Google: a text retrieval approach to object matching in videos. In: Proc. ICCV (2003) 17. Grauman, K., Darrell, T.: Pyramid match kernels: Discriminative classification with sets of image features. In: Proc. ICCV (2005) 18. Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. IEEE Trans. on Pattern Analysis and Machine Intelligence 23(11), 1222–1239 (2001)
19. Shekhovtsov, A., Kovtun, I., Hlavac, V.: Efficient MRF deformation model for non-rigid image matching. In: Proc. CVPR (2007) 20. Wainwright, M., Jaakkola, T., Willsky, A.: Exact MAP estimates by (hyper)tree agreement. In: NIPS (2003) 21. Brox, T., Bruhn, A., Papenberg, N., Weickert, J.: High accuracy optical flow estimation based on a theory for warping. In: Pajdla, T., Matas, J(G.) (eds.) ECCV 2004. LNCS, vol. 3024, pp. 25–36. Springer, Heidelberg (2004) 22. Felzenszwalb, P.F., Huttenlocher, D.P.: Efficient belief propagation for early vision. Intl. J. of Computer Vision 70(1), 41–54 (2006) 23. P´erez, P., Gangnet, M., Blake, A.: Poisson image editing. ACM Trans. Graph. 22(3), 313–318 (2003) 24. Lalonde, J.F., Hoiem, D., Efros, A.A., Rother, C., Winn, J., Criminisi, A.: Photo clip art. ACM Transactions on Graphics (SIGGRAPH 2007) 26(3) (August 2007)
Discriminative Sparse Image Models for Class-Specific Edge Detection and Image Interpretation
Julien Mairal 1,4, Marius Leordeanu 2, Francis Bach 1,4, Martial Hebert 2, and Jean Ponce 3,4
1 INRIA, Paris-Rocquencourt
2 Carnegie Mellon University, Pittsburgh
3 Ecole Normale Supérieure, Paris
4 WILLOW project-team, ENS/INRIA/CNRS UMR 8548
Abstract. Sparse signal models learned from data are widely used in audio, image, and video restoration. They have recently been generalized to discriminative image understanding tasks such as texture segmentation and feature selection. This paper extends this line of research by proposing a multiscale method to minimize least-squares reconstruction errors and discriminative cost functions under ℓ0 or ℓ1 regularization constraints. It is applied to edge detection, category-based edge selection and image classification tasks. Experiments on the Berkeley edge detection benchmark and the PASCAL VOC'05 and VOC'07 datasets demonstrate the computational efficiency of our algorithm and its ability to learn local image descriptions that effectively support demanding computer vision tasks.
1 Introduction Introduced in [21], learned sparse representations have recently been the focus of much attention and have led to many state-of-the-art algorithms for various signal and image processing tasks [8,15,22]. Different frameworks have been developed, which exploit learned sparse decompositions, nonparametric dictionary learning techniques [1,9,11], convolutional neural networks [24], probabilistic models [25], each of them being applied to the learning of natural image bases. Recently, a novel discriminative approach of the dictionary learning techniques has been proposed in [14], and it has been applied on texture segmentation and category-based feature selection. In this paper, we present a method for learning overcomplete bases, which combines ideas from [1,9,11] but with a slightly better convergence speed. It is also compatible with 0 and 1 regularization sparsity constraints.1 We use this algorithm in a multiscale extension of the discriminative framework of [14] and apply it to the problem of edge detection, with raw results very close to the state of the art on the Berkeley segmentation dataset [19]. Following [23], we also learn a class-specific edge detector and show that using it as a preprocessing stage to a state-of-the-art edge-based classifier [12] can dramatically improve its performance of the latter. 1
1 The ℓ2-norm of a vector x in R^n is defined as ||x||_2 = (\sum_{i=1}^{n} x[i]^2)^{1/2}, the ℓ1-norm as ||x||_1 = \sum_{i=1}^{n} |x[i]|, and ||x||_0, the ℓ0-pseudo-norm of x, counts its number of nonzero elements.
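For concreteness, the three quantities can be computed directly; the snippet below is only an illustration and is not part of the paper.

import numpy as np

x = np.array([0.0, -1.5, 0.0, 2.0])
l2 = np.sqrt(np.sum(x ** 2))     # ||x||_2 = 2.5
l1 = np.sum(np.abs(x))           # ||x||_1 = 3.5
l0 = np.count_nonzero(x)         # ||x||_0 = 2 nonzero entries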
2 Background
Consider a signal x in R^n. We say that x admits a sparse representation over a dictionary D in R^{n×k} composed of k unit vectors (atoms) of R^n when one can find a linear combination of a few atoms from D that is "close" to the original signal x. Given an input matrix X = [x_1, ..., x_m] in R^{n×m} of m signals, learning such a dictionary can be formulated as an optimization problem over a dictionary D = [d_1, ..., d_k] in R^{n×k} and the sparse representation matrix α = [α_1, ..., α_m] in R^{k×m}, namely

min_{D,α} \sum_{l=1}^{m} ||x_l − Dα_l||_2^2   subject to ||d_j||_2^2 = 1 and φ(α_l) ≤ 0   (1)

for j = 1, ..., k and l = 1, ..., m. Here, φ(α_l) ≤ 0 is a sparsity constraint guaranteeing that only a few of the components of each vector α_l are nonzero. The most popular sparsity constraints in the literature involve the ℓ0-pseudo-norm and the ℓ1-norm (see [11] for other sparsity functions). In the first case, we simply take φ(α_l) = ||α_l||_0 − L, where L is the maximum number of nonzero coefficients allowed. In the second case, we take φ(α_l) = ||α_l||_1 − T, where T is an arbitrary parameter. It is well known in the statistics, optimization and compressed sensing communities that the ℓ1 constraint yields a sparse solution [6], but there is no analytic link between the value of T and the effective sparsity L that it yields. A number of practical methods have been developed for solving problem (1). This includes the K-SVD algorithm of Aharon et al. [1] and the method of optimal directions (MOD) of Engan et al. [9] for its ℓ0 formulation, and the algorithm of Lee et al. [11] for its ℓ1 variant. All these techniques [1,9,11] are iterative approaches designed to minimize the energy (1). After an initialization of the dictionary D in R^{n×k}, e.g., from random signals, they iterate between a sparse coding step where D is fixed and the matrix α is computed, and a dictionary update step, where D is updated with α fixed in [9,11] and variable in [1]. Given a signal x_l in R^n and a fixed dictionary D in R^{n×k}, sparse coding amounts to solving the following optimization over α_l in R^k:

min_{α_l ∈ R^k} ||x_l − Dα_l||_2^2   s.t. φ(α_l) ≤ 0.   (2)
In the 0 case, solving this problem and finding the corresponding nonzero coefficients— what we will call the sparsity pattern in the rest of this paper—is NP-hard. In the applications addressed in this paper as well as in [1,9], the dimension n of the signals is quite small and so is the sparsity factor L (typically n ≤ 1000 and L ≤ 10), which makes it reasonable to use a greedy algorithm called orthogonal matching pursuit (OMP) [17] to effectively find an approximate solution. Using the 1 formulation of sparse coding “convexifies” this problem, and the LARS-Lasso algorithm [7], can be used to find its global optimum. In general, the question of whether to prefer an 0 or 1 formulation has not been settled yet. The main advantages of the 1 norm are that it is easy to optimize and, since it is piecewise smooth, it provides more stable decompositions. In practice, with the small signals and low sparsity typical of our applications, OMP usually provides sparser solutions but is not as stable in the sense that a small variation of the input data may result in a completely different sparsity pattern. When the coefficient matrix α in Rk×m is fixed, updating the dictionary is a linear least-squares problem under quadratic constraints. For a given data matrix X in Rn×m , it can be formulated as the following optimization problem over D in Rn×k :
min_{D} \sum_{l=1}^{m} ||x_l − Dα_l||_2^2   subject to ||d_j||_2^2 = 1 for j = 1, ..., k.   (3)
This constrained optimization problem can be addressed using several methods, including, as noted in [11], gradient descent with iterative projection, or a dual version derived from its Lagrangian. This is the approach followed in the 1 formulation by the method of Lee et al. [11]. Engan et al for the MOD algorithm have chosen to solve the problem (3) without constraints and normalize the dj a posteriori since multiplying a column of D while dividing the j-th line of α by the same value does not change the energy in Eq. (1). The K-SVD of Aharon et al. [1] uses a different strategy where the columns of D are updated one at a time but only the sparsity pattern is fixed while the values of the nonzero coefficients of α are allowed to change as well. This allows for larger steps at the cost of more complex calculations in general. The approach proposed in this paper combines several of the aspects of the methods reviewed so far. In particular, as will be shown in the next section, it can handle both 0 and 1 formulations of problem (1), takes advantage of the LARS-Lasso or OMP sparse coding speed when appropriate, and enjoys fast dictionary updates steps similar to [9,11], while letting the α coefficients change for faster convergence, similar to KSVD [1]. It also generalizes to discriminative tasks in a straightforward way.
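Before moving on, here is a minimal orthogonal matching pursuit for the ℓ0 sparse coding problem (2) discussed above. This is a plain textbook OMP, not the efficient Cholesky-based implementation mentioned later in the paper, and it assumes the columns of D are unit-norm.

import numpy as np

def omp(x, D, L):
    # Greedy approximation of argmin_a ||x - D a||_2^2 subject to ||a||_0 <= L.
    residual = x.copy()
    support = []
    a = np.zeros(D.shape[1])
    for _ in range(L):
        j = int(np.argmax(np.abs(D.T @ residual)))   # atom most correlated with the residual
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        a[:] = 0.0
        a[support] = coeffs                          # re-fit active coefficients by least squares
        residual = x - D @ a
    return a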
3 A Method for Sparse Model Learning
In this section, we present how to learn reconstructive and discriminative sparse representations and a multiscale extension of the latter.
3.1 Learning Reconstructive Dictionaries
In our experiments, the MOD and K-SVD algorithms present very similar performance in terms of convergence and speed. Both algorithms suffer from using the same expensive sparse coding step, even with efficient Cholesky-based implementations. With sets of parameters commonly used in image, signal and video processing, the K-SVD dictionary update, which relies on k truncated SVDs of matrices of size roughly n × mL/k, is slower than the MOD one (one inversion of a k × k symmetric matrix) except when k is very large, but performs larger steps. The algorithm we propose here extends [9,11] and thus enjoys fast dictionary updates, while exploiting fast updates of the nonzero coefficients of α by fixing only the sparsity pattern like in the K-SVD. Note that such a trick to accelerate convergence has already been used successfully in [20], but in a different context. The overall process is outlined in Figure 1. Instead of solving a single instance of Eq. (5) with α fixed to update D, we alternate between this update, which we call partial dictionary update, and a fast update of the nonzero coefficients of α with D fixed (partial fast sparse coding). This allows us to reduce the number of calls to the expensive full sparse coding step. In the ℓ0 case, a partial dictionary update can be D = Xα^T (αα^T)^{-1}, as in the MOD. In the ℓ1 case, it can be done efficiently like in [11] by using a Newton method
Input: X = [x_1, ..., x_m] ∈ R^{n×m} (input data); k (number of atoms); φ: R^k → R (constraint on the coefficients); J (number of iterations).
Output: D ∈ R^{n×k} (dictionary); α ∈ R^{k×m} (coefficients).
Initialization: Choose randomly some x_l to initialize the columns of D.
Loop: Repeat J times:
– Sparse coding: Fix D and compute, using OMP (ℓ0) or LARS-Lasso (ℓ1), for all l = 1, ..., m,
    α_l = argmin_{α_l ∈ R^k} ||x_l − Dα_l||_2^2   s.t. φ(α_l) ≤ 0.   (4)
– Dictionary update: Repeat until convergence:
  • Partial dictionary update: Fix α and solve
    D = argmin_{D' ∈ R^{n×k}} ||X − D'α||_F^2   s.t. ||d_j||_2 = 1, for all j = 1, ..., k.   (5)
  • Partial fast sparse coding: Fix D and update the nonzero coefficients of α to minimize Eq. (4) while keeping the same sparsity pattern.
Fig. 1. Proposed algorithm for learning reconstructive dictionaries
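A compact NumPy rendering of the loop in Fig. 1 might look as follows for the ℓ0 case. It is only a sketch under simplifying assumptions: it reuses the omp routine sketched earlier, and the partial dictionary update uses an unconstrained MOD-style least-squares step followed by column normalization, one of the options mentioned in the text, rather than the exact constrained solver.

import numpy as np

def learn_dictionary(X, k, L, J=25, inner=3, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    D = X[:, rng.choice(m, size=k, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12     # unit-norm atoms
    A = np.zeros((k, m))
    for _ in range(J):
        for l in range(m):                                    # full sparse coding (OMP)
            A[:, l] = omp(X[:, l], D, L)
        for _ in range(inner):                                # dictionary update block
            # partial dictionary update (MOD-style), then renormalize the atoms
            D = X @ A.T @ np.linalg.pinv(A @ A.T)
            D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
            # partial fast sparse coding: refit nonzero coefficients with the support fixed
            for l in range(m):
                s = np.flatnonzero(A[:, l])
                if s.size:
                    A[s, l], *_ = np.linalg.lstsq(D[:, s], X[:, l], rcond=None)
    return D, A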
to solve the dual problem arising from the Lagrangian of this constrained optimization problem. In particular, it is easy to show that the optimal vector Γ of Lagrange multipliers must satisfy

Γ = argmax_{Γ ∈ R^k} [ ||X − D(Γ)α||_F^2 + \sum_{j=1}^{k} Γ_j (d_j^T d_j − 1) ],   (6)
where D(Γ) = Xα^T (αα^T + diag(Γ))^{-1}. Assuming that the sparsity pattern is fixed, partial fast sparse coding becomes much less costly than full sparse coding: given a fixed dictionary D in R^{n×k}, a signal x_l in R^n, and the active dictionary D_a in R^{n×L} composed of the atoms corresponding to the L nonzero coefficients of α_l, the vector α̃_l in R^L composed of the nonzero values of α_l can be updated as follows:
– In the ℓ0 case, α̃_l is the minimum of ||x_l − D_a α̃_l||_2^2, and the corresponding linear least-squares system is solved using a conjugate gradient method.
– In the ℓ1 case, we prevent the sign of the values in α̃_l from strictly changing, but we allow them to be zero. We denote by ε_a in {−1, 1}^L the sign of the initial α̃_l and we address the problem

min_{α̃_l} ||x_l − D_a α̃_l||_2^2   s.t. ε_a^T α̃_l = T and ε_a[j] α̃_l[j] ≥ 0 for j = 1, ..., L   (7)
using reduced projected gradient descent [13] with optimal steps. We repeat until convergence:
• Gradient computation: g = −2 D_a^T (x_l − D_a α̃_l).
• Projection of g so that \sum_j ε_a[j] g̃[j] = 0: g̃ = (I − ε_a ε_a^T / (ε_a^T ε_a)) g.
• Computation of the optimal step, which prevents the sign of the coefficients from changing: t = min( α̃_l[1]/g̃[1], ..., α̃_l[L]/g̃[L], g̃^T g / (g̃^T D_a^T D_a g̃) ).
• Steepest descent: α̃_l = α̃_l − t g̃.
• If α̃_l[j] = 0, it is removed from α̃_l.
Note that for simplicity reasons we chose not to allow a coefficient which has been removed from α̃_l to change again. Thus, this descent algorithm stops before the exact solution of Eq. (7). This approach for learning sparse representations extends [9,11], since the only difference with these algorithms is the idea of using a partial fast sparse coding to accelerate the convergence, with often less computational effort than the K-SVD. Note that it also extends the K-SVD in some sense in the ℓ0 case (since K-SVD is per se not compatible with an ℓ1 constraint). Suppose that we perform our dictionary update on a single atom, by keeping the other atoms fixed. Then, the alternating iterations between what we call partial dictionary update and partial fast sparse coding do exactly the same as the power method, which performs the truncated SVD used in the K-SVD algorithm.
3.2 Learning Discriminative Dictionaries
An effective method for learning discriminative dictionaries under ℓ0 constraints has been introduced in [14] using an energy formulation that contains both sparse reconstruction and class discrimination components, jointly optimized towards the learning of the dictionaries. Given N classes S_i of signals, i = 1, ..., N, the goal is to learn N discriminative dictionaries D_i, each of them being adapted to reconstructing a specific class better than others. As shown in [14], this yields the following optimization problem:
min_{ {D_j}_{j=1}^{N} } \sum_{i=1}^{N} \sum_{l ∈ S_i} C_i^λ( {R*(x_l, D_j)}_{j=1}^{N} ) + λγ R*(x_l, D_i),   (8)

where R*(x_l, D) ≡ min_{α_l} ||x_l − Dα_l||_2^2   s.t. φ(α_l) ≤ 0.   (9)
Here, R (xl , Di ) is the reconstruction error of the signal xl using the dictionary Di and Ciλ is a softmax discriminative cost function, which is the multiclass version of the logistic regression function. Its purpose is to make the dictionary Di better at reconstructing the signals from class Si than the dictionaries Dj for j different than i. In this equation, λ is a parameter of the cost function, and γ controls the trade-off between reconstruction and discrimination. More details on this formulation and on the choices of the parameters λ and γ are given in [14]. The optimization procedure in this case uses the same iterative (i) sparse coding and (ii) dictionary update scheme as the MOD and K-SVD. Nevertheless, due to the different nature of the energy, the dictionary update is slightly different from the MOD or K-SVD. It is implemented as a truncated Newton iteration with respect to the dictionaries. It is shown in [14] that performing this truncated Newton iteration to update the j-th dictionary Dj is equivalent to solving a problem of the form:
min_{D' ∈ R^{n×k}} \sum_{i=1}^{N} \sum_{l ∈ S_i} w_l ||x_l − D' α_{lj}||_2^2,   (10)
where the α_{lj}'s are the coefficients of the decompositions of x_l using the dictionary D_j, and the w_l's are weights coming from a local linear approximation of C_i^λ. They depend on the derivatives of C_i^λ and therefore have to be recomputed at each step of the optimization procedure. It is thus clear that this formulation can easily be adapted and generalized to the framework proposed in the previous section, allowing us to use the ℓ1 as well as the ℓ0 formulation, which might be more suitable for some applications. The partial fast sparse coding remains unchanged and the partial dictionary update becomes:
N
wl xl αTlj
N
i=1 l∈Si
Γ = arg max Γ ∈Rk
−1 wl αlj αTlj + diag(Γ )
i=1 l∈Si N
wl ||xl −
D(Γ )αl j||22
+
i=1 l∈Si
k
Γj (dTj dj − 1).
(11)
j=1
Note that interestingly, our framework allows us to update all the atoms at the same time when the K-SVD does some sequential optimization. When this is always a benefit in the reconstructive framework, the discriminative one relies on a local linear approximation of the cost function, which is linked to the weights wl introduced above. In the 0 case, while our procedure improves upon the MOD-like algorithm from [14] since it achieves faster the same reconstruction error, the more computationally expensive KSVD-like algorithm updates sequentially the atoms of the dictionary but also updates the local linear approximations of the cost function (recomputes the wl ’s) between each update of the atoms, which has proven experimentally to converge toward a better local minimum. Therefore, the choice between our new discriminative framework and the K-SVD-like dictionary update from [14] in the 0 case becomes a matter of trade-off between speed and quality of the optimization. In the 1 case, the K-SVD can not be applied, but the partial dictionary update stage of our discriminative framework can alternatively be replaced by a sequential update of the columns of the dictionary while interlacing updates of the wl ’s. The update of the j-th atom when the α are fixed should then be written: N l∈Si wl αl [j](xl − p=j αl [p]dp ) i=1 , dj = N || i=1 l∈Si wl αl [j](xl − p=j αl [p]dp )||2 which is the solution of:
2
min w − α [p]d − α [j]d s.t. ||dj ||22 = 1. x l l l p l j
dj
i=1 l∈Si p=j N
(12)
2
3.3 A New Multiscale Feature Space In this subsection, we present a multiscale extension and some improvements to the classification procedure outlined in [14], which have proven to improve noticeably the
Discriminative Sparse Image Models
49
Classifier 1
Classifier 2
Linear classifier
Classifier 3 Signal input Subsampling
Sparse coding
Fig. 2. Multiscale classifier using discriminative sparse coding. The signal input is subsampled in different signal sizes. Then, each classifier outputs N curves of reconstruction errors as functions of a sparsity constraint, one curve per dictionary. A linear classifier provides a confidence value.
performance of our classifier. Although it is presented for illustrative purposes when the signals are image patches, its scope goes beyond vision tasks and similar concepts could be applied in other situations. An important assumption, commonly and successfully used in image processing, is the existence of multiscale features in images, which we exploit using a multi-layer approach, presented in Figure 2. It allows us to work with images at different resolutions, with different sizes of patches and avoids the choice of the hyperparameters L (0 case) or T (1 case) during the testing phase. In [14], the class i0 for some patch x is taken to be i0 = arg mini=1...N R (x, Di ). However, R is a reconstruction error obtained with an arbitrary 0 or 1 constraint, which does not take into account the different characteristics of the patches. Patches with a high amount of information need indeed a lot of atoms to achieve a correct representation, and should therefore benefit from being classified with a high sparsity factor. On the other hand, some patches admit extremely sparse representations, and should be classified with a small sparsity factor. To cope with this effect, we have chosen when testing a given patch to compute many reconstruction errors with different constraints (different values of L or T ). Thanks to the nature of the OMP and LARS-Lasso, this can be done without additional computations since both these algorithms can plot the reconstruction error as a function of the given constraint value in one pass. The two curves produced by two different dictionaries on a patch can then be incorporated into a logistic regression classifier or a linear SVM [26] as feature vectors. The same idea can be used to combine the output of different classifiers, working at different resolutions and with different sizes of patches. Suppose you train P discriminative classifiers with different sizes of patches and different resolutions. Testing a signal x consists of sending x to each classifier independently, cropping and subsampling it so that its size and resolution match the classifier. Each classifier produces N curves representing the reconstruction errors using the N dictionaries and different sparsity constraints. Then, a linear classifier (logistic regression or SVM) permits to combine these outputs.
50
J. Mairal et al.
4 Combining Geometry and Local Appearance of Edges Considering the edge detection task as a pixelwise classification problem, we have applied our patch-based discriminative framework to learn a local appearance model of “patches centered on an edge pixel” against “patches centered on a non-edge pixel” and therefore have a confidence value for each pixel of being on an edge or not. An evaluation of this scheme is detailed in the experimental part. Then, once we have trained an edge detector, we propose to use this generic method for class-specific edge detection. Suppose we have N classes of images. After finding the edges of all the images, we can then learn N classifiers to discriminate “patches centered on a pixel located on an edge from an object A” against “patches centered on a pixel located on the other edges”. If this method should not be enough for recognizing an object by itself, we show in the experimental part how crucial this local analysis can be when used as a preprocessing of a global contour-based recognition framework. We now show how to use our edge detector for object recognition by combining it with the shape-based method for category recognition from [12]. Their algorithm learns the shape of a specific category in a discriminative fashion by selecting from training images the pieces of contours that are most relevant for a specific category. The method exploits the pairwise geometric relationships between simple features that include relative angle and distance information. Thus, features are selected based on how discriminative they are together, as a group, and not on an individual basis (for more details on learning these models, see [12]). After the models are learned, they are matched against contours extracted from novel images by formulating the task as a graph matching problem, where the focus is on pairwise relationships between features and not their local appearance. While the authors of [12] make the point that shape is stronger than local appearance for category recognition we want to demonstrate that there is a natural way of combining shape and local appearance that produces a significant increase in the recognition performance. While shape is indeed important for category recognition, the toughest challenge for such shape-based algorithms on difficult databases is the change in view point, which makes the use of 2D shape less powerful. Therefore, it is important to be able to help the shape recognizer by local appearance methods that are geometry independent and thus less sensitive to such changes in viewpoint. Our proposed approach of combining local appearance with shape is to first learn a class specific edge detector on pieces of contours. Next this class specific edge detector is used to filter out the irrelevant edges while learning the shape-based classifier based on [12]. Similarly, at testing time, the shape-based algorithm is applied to the contours that survive after filtering them with our class dependent edge detector. The outputs of both the shape-based classifier and the real values given by our detector are later combined for the final recognition score. This framework provides a natural way of combining the lower-level, appearance based edge detection and contour filtering with the more higher level, shape-based approach. The algorithm can be described in more detail as follows: • Contour Training: Learn a class specific edge classifier using our proposed method. 
For each image, we apply our general edge detector, then obtain pieces of contours
Discriminative Sparse Image Models
51
obtained as in [12]. Next, we train class specific detectors on such contours belonging to the positive class vs. all other classes. • Shape Training: The output of the class specific edge detector on each training image (contours with average scores less than 0.5 are removed) is given to the shape-based algorithm from [12]. Thus the shape classifier is trained on images that were first preprocessed with our class dependent contour classification. • Testing: Each testing image is first preprocessed at the individual contours level in the same way as it is done at training time. The edge classifier it is used to filter out contours that had an average score less than 0.5 (over all pixels belonging to that contour). The contours that survived are then used by the shape-based classifier, to obtain the final recognition score.
5 Experimental Validation 5.1 Sparse Coding with Learned Bases In this experiment, we show that our approach slightly improves upon [1,9,11] in terms of convergence speed. Comparing the speed of algorithms is a very delicate issue. Plotting the residual error as a function of the number of iterations would not be fair since the amount of computation per iteration is different from one algorithm to another. Computing the exact number of elementary operations (flops) for one iteration could help, but it is often extremely far from the real computation time. Therefore, we have chosen to base our measures on our careful implementations of the above algorithms. Both our implementations of OMP and LARS-Lasso are efficient parallel Choleskybased ones. All computations are done on an Intel Quad-Core 2.4Ghz processor. The comparison is reported in Figure 3 and we report the average 2 -norm of the residuals as a function of the running time of the algorithm for 100 000 patches of size 8 × 8 taken randomly from the Berkeley segmentation database, a dictionary of size k = 256, sparsity constraints L = 6 for 0 and T = 0.1 for the 1 case, which are y
0.64
0.44 0.63
0.43 0.42 0.62
0.41 0.40
0.61
0.39 x
0.38 0
10
20
x
0.60 0
10
20
30
Fig. 3. On the left, the diagram presents an 0 constrained experiment and on the right, an 1 one. The curves presented here represents reconstruction errors as a function of the computation time in seconds. Plain red curves represent our algorithm. Black dotted curves represent the MOD [9] in the 0 case and [11] in the 1 one. Blue dashed curves represent the K-SVD.
52
J. Mairal et al.
typical settings in the sparse coding literature [8]. In these experiments, our approach proves to slightly outperform those of [1,9,11]. Nevertheless, with different parameters or different implementations, results could differ due to the nature of these algorithms. Let us also note that the times we report here are far lower than any of those reported in the literature [8,11,15] for comparable tasks. 5.2 Edge Detection We have chosen to train and evaluate our multiscale discriminative framework on the Berkeley segmentation benchmark [19]. To accelerate the procedure and obtain thin edges, we first process all the images with a Canny edge detector [3] without thresholding. Then, we use the manually segmented images from the training set to classify the pixels from these Canny edges into two classes: S1 for the ones that are close to a human edge, and S2 for the others (bad Canny edges). As in [14], RGB patches are concatenated into vectors. The size k of all of our dictionaries are 256. 14 local classifiers using 7 different sizes of patches’ edges e = 5, 7, 9, 11, 15, 19, 23, and 2 resolutions (full and half) are trained independently and we perform J = 25 iterations with a sparsity constraint of L = 6, on a sample of 150 000 random patches from S1 and 150 000 patches from S2 . This maximum size of patches associated with the half-resolution version of the images allows us to capture sufficient neighborhood context around each pixel, which has proven to be crucial in [5]. A new sample of the training set is encoded using each trained dictionary and we compute the curves of the reconstruction error as function of the sparsity constraint (L = 1, 2, . . . , 15 for the 0 case), (T = 0.1, 0.2, . . . , 2.0 for the 1 case). All the curves corresponding to a given patch are concatenated into a single feature vector, which are used to train a linear logistic classifier. During the test phase, we have chosen to compute independently a confidence value per pixel on the Canny edges without doing any post-processing or spatial regularization, which by itself is also a difficult problem. Precision-recall curves are presented on Figure 4 and are compared with Pb [18], BEL [5], UCM [2] and the recent gPb [16], which was published after that this paper was accepted. Note that with no postprocessing, our generic method achieves similar performance as [5] just behind [2] in terms of F-measure (see [18,19] for its definition), although it was not specifically designed for this task. Compared to Pb, BEL and UCM, our method performs slightly better for high recalls, but is slightly behind for lower recalls, where our edges map contain many small nonmeaningful edges (noise). Recently, gPb has outperformed all of these previous methods by adding global considerations on edges. Examples of dictionaries and results are also presented on Figure 4. Interestingly, we have observed two phenomenons. First, while the dictionaries are learned on RGB color images, most of the learned atoms appear gray, which has already been observed in [15]. Second, we have often noticed color atoms composed of two complementary colors: red/cyan, green/magenta and blue/yellow. 5.3 Category-Based Edge Detection and Object Recognition In this section, we use our edge detector on all the images from Pascal VOC’05 and VOC’07 [10] and postprocess them to remove nonmeaningful edges using the same
Discriminative Sparse Image Models
53
1 gPb, F=0.70 0.9
UCM, F=0.67 BEL, F=0.66
0.8
Ours, F=0.66 Pb, F=0.65
0.7
Precision
0.6 0.5 0.4 0.3 0.2 0.1 0
0
0.1
0.2
0.3
0.4
0.5 Recall
0.6
0.7
0.8
0.9
1
Fig. 4. The curves represents the precision-recall curve for our edge detection task. The two bottom left images are two dictionaries, the left one corresponding to the class S1 ”Good edges” and the right one S2 ”Bad edges”. On the right, some images and obtained edge maps. This is a color figure. Table 1. Average multiclass recognition rates on the Pascal 2005 Dataset Algorithm Ours + [12] [12] Winn [27] Pascal 05 Dataset 96.8% 89.4% 76.9%
grouping method as [12]. Then, we train our class-specific edge detector on the training set of each dataset, using the same training set as [12] for VOC’05 and the training set of VOC’07. For each class (4 in VOC’05 and 20 in VOC’07) a one-vs-all classifier is trained using the exact same parameters as for the edge detection, which allows us to give a confidence value for each edge as being part of a specific object type. Some examples of filtered edges maps are presented in Figure 5. In our first set of recognition experiments we want to quantify the benefit of using our class-specific edge detector for object category recognition (shown in Table 1). Thus, we perform the same sets of experiments as [12] and [27], on the same training and testing image sets from Pascal 2005 dataset, on a multiclass classification task. Following the works we compare against, we also use bounding boxes (for more details on the the experiments set up see [12] and [27]). Notice (Tables 1 and 2) that by using our algorithm we are able to reduce the error rate more than 3-fold as compared with
54
J. Mairal et al.
Fig. 5. Examples of filtered edges. Each column shows the edges that survive (score ≥ 0.5) after applying different class specific edge detectors.
Table 2. Left: Confusion matrix for Pascal 2005 Dataset (using the bounding boxes). Compare this with the confusion matrix obtained when shape alone is used in [12], on the exact same set of experiments. Right: classification results (recognition performance) at equal error rate for 8 classes from our experiment using the Pascal 07 dataset. Note that this preliminary experiment is different from the official Pascal 07 benchmark [10]. Our filtering method reduces the average error rate by 33%.
Category Bikes Cars Motorbikes People
Bikes 93.0% 1.2% 1.9% 0%
Cars Motorbikes People 3.5% 1.7% 1.8% 97.7% 0.0% 1.1% 1.8% 96.3% 0% 0% 0% 100%
Category (Ours+[12]) [12] Aeroplane 71.9% 61.9% Boat 67.1% 56.4% Cat 82.6% 53.4% Cow 68.7% 59.22% Horse 76.0% 67% Motorbike 80.6% 73.6% Sheep 72.9% 58.4% Tvmonitor 87.7% 83.8%
the shape alone classifier (3.2% vs. 10.6%) and more than 7-fold when compared to the appearance based method of Winn et al. [27]. We believe that these results are very encouraging and that they demonstrate the promise of our edge classifier for higher level tasks such as object recognition.
Discriminative Sparse Image Models
55
In the second set of experiments we want to evaluate how the class-based contour classification can help the shape recognizer on a more challenging dataset, where the objects are undergoing significant changes in viewpoint and scale making their 2D shape representation less powerful. To do so, we have chosen the same experimental protocol as for VOC’05 on subset of the VOC’07 dataset composed of 8 object classes (Table 2). For each class we use the training set with the provided bounding boxes for learning both the class specific edge detector and the shape models. But to make the task more challenging, the test set consisted of the full images (no bounding boxes were used) from the official validation set provided in the Pascal 07 challenge (only the images containing one of the eight classes were kept for both testing and training). Given the difficulty of this dataset we believe that our results are very promising and demonstrate the benefit of combining lower level appearance with higher level, shape based information for object category recognition. Fusioning these low-level and shape-based classification methods, instead of using them sequentially, by using the sparse representations as learned local geometric features is part of our current effort. The results we have obtained on these preliminary experiments strongly encourage us to pursue that direction.
6 Conclusion We have presented a multiscale discriminative framework based on learned sparse representations, and have applied it to the problem of edge detection, and class-specific edge detection, which proves to greatly improve the results obtained with a contourbased classifier [12]. Our current efforts are devoted to find a way to use our local appearance model as a preprocessing step for a global recognition framework using bags of words, as we have done for a contour-based classifier. Clustering methods of locally selected patches to define interest regions is an option we are considering, which could eventually allow us to get rid of expensive sliding windows analysis for object detection [4]. Another direction we are pursuing is to use directly the coefficients α of the sparse decompositions as features. Developing a discriminative optimization framework which can use the robust 1 regularization instead of the 0 one was a first step, which should make this possible.
Acknowledgments This paper was supported in part by French grant from the INRIA associated team Thetys and the Agence Nationale de la Recherche (MGA Project).
References 1. Aharon, M., Elad, M., Bruckstein, A.M.: The K-SVD: An algorithm for designing of overcomplete dictionaries for sparse representations. IEEE Trans. SP 54(11) (2006) 2. Arbelaez, P.: Boundary extraction in natural images using ultrametric contour maps. In: POCV 2006 (2006)
56
J. Mairal et al.
3. Canny, J.F.: A computational approach to edge detection. IEEE Trans. PAMI 8(6) (1986) 4. Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: Proc. CVPR (2005) 5. Dollar, P., Tu, Z., Belongie, S.: Supervised learning of edges and object boundaries. In: Proc. CVPR (2006) 6. Donoho, D.L.: Compressive sampling. IEEE Trans. IT 52(4) (2006) 7. Efron, B., Hastie, T., Johnstone, I., Tibshirani, R.: Least angle regression. Ann. Statist. 32(2) (2004) 8. Elad, M., Aharon, M.: Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. IP 54(12) (2006) 9. Engan, K., Aase, S.O., Husoy, J.H.: Frame based signal compression using method of optimal directions (MOD). In: Proc. ISCS (1999) 10. Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman, A.: The PASCAL Visual Object Classes Challenge 2007 (VOC 2007) Results (2007) 11. Lee, H., Battle, A., Rajat, R., Ng, A.Y.: Efficient sparse coding algorithms. In: Adv. NIPS (2007) 12. Leordeanu, M., Hebert, M., Sukthankar, R.: Beyond local appearance: Category recognition from pairwise interactions of simple features. In: Proc. CVPR (2007) 13. Luenberger, D.G.: Linear and Nonlinear Programming. Addison-Wesley, Reading (1984) 14. Mairal, J., Bach, F., Ponce, J., Sapiro, G., Zisserman, A.: Discriminative learned dictionaries for local image analysis. In: Proc. CVPR (2008) 15. Mairal, J., Elad, M., Sapiro, G.: Sparse representation for color image restoration. IEEE Trans. IP 17(1) (2008) 16. Maire, M., Arbelaez, P., Fowlkes, C., Malik, J.: Using Contours to Detect and Localize Junctions in Natural Images. In: Proc. CVPR (2008) 17. Mallat, S., Zhang, Z.: Matching pursuit in a time-frequency dictionary. IEEE Trans. SP 41(12) (1993) 18. Martin, D.R., Fowlkes, C.C., Malik, J.: Learning to detect natural image boundaries using local brightness, color, and texture cues. IEEE Trans. PAMI 26(1) (2004) 19. Martin, D., Fowlkes, C., Tal, D., Malik, J.: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: Proc. ICCV (2001) 20. Moghaddam, B., Weiss, Y., Avidan, S.: Spectral bounds for sparse pca: Exact and greedy algorithms. In: Adv. NIPS (2005) 21. Olshausen, B.A., Field, D.J.: Sparse coding with an overcomplete basis set: A strategy employed by v1? Vision Research 37 (1997) 22. Peyre, G.: Non-negative sparse modeling of textures. In: Sgallari, F., Murli, A., Paragios, N. (eds.) SSVM 2007. LNCS, vol. 4485. Springer, Heidelberg (2007) 23. Prasad, M., Zisserman, A., Fitzgibbon, A., Kumar, M.P., Torr, P.: Learning class-specifc edges for object detection and segmentation. In: Kalra, P.K., Peleg, S. (eds.) ICVGIP 2006. LNCS, vol. 4338. Springer, Heidelberg (2006) 24. Ranzato, M., Huang, F., Boureau, Y., LeCun, Y.: Unsupervised learning of invariant feature hierarchies with applications to object recognition. In: Proc. CVPR (2007) 25. Roth, S., Black, M.J.: Fields of experts: A framework for learning image priors. In: Proc. CVPR (2005) 26. Shawe-Taylor, J., Cristianini, N.: Kernel Methods for Pattern Analysis (2004) 27. Winn, J., Criminisi, A., Minka, T.: Object categorization by learned universal visual dictionary. In: Proc. ICCV (2005)
Non-local Regularization of Inverse Problems Gabriel Peyr´e, S´ebastien Bougleux, and Laurent Cohen Universit´e Paris-Dauphine, CEREMADE, 75775 Paris Cedex 16, France {peyre,bougleux,cohen}@ceremade.dauphine.fr
Abstract. This article proposes a new framework to regularize linear inverse problems using the total variation on non-local graphs. This nonlocal graph allows to adapt the penalization to the geometry of the underlying function to recover. A fast algorithm computes iteratively both the solution of the regularization process and the non-local graph adapted to this solution. We show numerical applications of this method to the resolution of image processing inverse problems such as inpainting, super-resolution and compressive sampling.
1
Introduction
State of the art image denoising methods perform a non-linear filtering that is adaptive to the image content. This adaptivity enables a non-local averaging of image features, thus making use of the relevant information along an edge or a regular texture pattern. This article shows how such adaptive non-local filterings can be extended to handle general inverse problems beyond simple denoising. Adaptive Non-Local Image Processing. Traditional image processing methods use local computation over a time-frequency or a multi-scale domain [1]. These algorithms use either fixed transforms such as local DCT or wavelets, or fixed regularization spaces such as Sobolev or bounded variations, to perform image restoration. In order to better respect edges in images, several edge-aware filtering schemes have been proposed, among which Yaroslavsky’s filter [2], the bilateral filter [3], Susan filter [4] and Beltrami flow [5]. The non-local means filter [6] goes one step further by averaging pixels that can be arbitrary far away, using a similarity measure based on distance between patches. This non-local averaging shares similarities with patch-based computer graphics synthesis [7,8]. These adaptive filtering methods can be related to adaptive decompositions in dictionaries of orthogonal bases. For instance the bandlets best basis decomposition [9] re-transform the wavelet coefficients of an image in order to better capture edges. The grouplet transform of Mallat [10] does a similar retransformation but makes use of an adaptive geometric flow that is well suited to capture oriented oscillating textures [11].
This work was partially supported by ANR grant SURF-NT05-2 45825.
D. Forsyth, P. Torr, and A. Zisserman (Eds.): ECCV 2008, Part III, LNCS 5304, pp. 57–68, 2008. c Springer-Verlag Berlin Heidelberg 2008
58
G. Peyr´e, S. Bougleux, and L. Cohen
Regularization and inverse problems. Non-local filtering can be understood as a quadratic regularization based on a non-local graph, as detailed for instance in the geometric diffusion framework of Coifman et al. [12], which has been applied to non-local image denoising by Szlam et al. [13]. Denoising using quadratic penalization on image graphs is studied by Guilboa and Osher for image restoration and segmentation [14]. These quadratic regularizations can be extended to non-smooth energies such as the total variation on graphs. This has been defined over the continuous domain by Gilboa et al. [15] and over the discrete domain by Zhou and Sch¨ olkopf [16]. Elmoataz et al. [17] consider a larger class of non-smooth energies involving a p-laplacian for p < 2. Peyr´e replaces these non-linear flows on graphs by a non-iterative thresholding in a non-local spectral basis [18]. A difficult problem is to extend these graph-based regularizations to solve general inverse problems. The difficulty is that graph-based regularizations are adaptive since the graph depends on the image. To perform denoising, this graph can be directly estimated from the noisy image. To solve some specific inverse problems, the graph can also be estimated from the measurements. For instance, Kindermann et al. [19] and Buades et al. [20] perform image deblurring by using a non-local energy computed from the blurry observation. A similar strategy is used by Buades et al. [21] to perform demosaicing, where the non-local graph is estimated using an image with missing pixels. For inpainting of thin holes, Gilboa and Osher [22] compute a non-local graph directly from the image with missing pixels. These approaches are different from recent exemplar-based methods introduced to solve some inverse problems, such as super-resolution (see for instance [23,24,25]). Although these methods operate on patches in a manner similar to non-local methods, they make use of pairs of exemplar patches where one knows both the low and high resolution versions. Contributions. This paper proposes a new framework to solve general inverse problems using a non-local and non-linear regularization on graphs. Our algorithm is able to efficiently solve for a minimizer of the proposed energy by iteratively computing an adapted graph and a solution of the inverse problem. We show applications to inpainting, super-resolution and compressive sampling where this new framework improves over wavelet and total variation regularization.
2 Non-local Regularization
2.1 Inverse Problems and Regularization
Many image processing problems can be formalized as the recovery of an image f ∈ R^n from a set of p ≤ n noisy linear measurements y = Φf + ε ∈ R^p, where ε is an additive noise. The linear operator Φ typically accounts for some blurring, sub-sampling or missing pixels, so that the measured data y only captures a small portion of the original image f one wishes to recover.
In order to solve this ill-posed problem, one needs some prior knowledge on the kind of typical images one expects to restore. This prior information should help to recover the missing information. Regularization theory assumes that f has some smoothness, for instance small derivatives (linear Sobolev regularization) or bounded variations (non-linear regularization). A regularized solution f^\star of the inverse problem can be written in variational form as
f^\star = \operatorname{argmin}_{g \in \mathbb{R}^n} \; \tfrac{1}{2}\,\|y - \Phi g\|^2 + \lambda J(g),    (1)
where J is small when g is close to the smoothness model. The weight λ needs to be adapted to match the amplitude of the noise ε, which might be a non-trivial task in practical situations. Classical variational priors include:
Total variation: the bounded variation model imposes that f has a small total variation and uses
J^{tv}(g) \stackrel{\text{def.}}{=} \|g\|_{TV} = \int |\nabla_x g|\, dx.    (2)
This prior was introduced by Rudin, Osher and Fatemi [26] for denoising purposes. It has been extended to solve many inverse problems, see for instance [27].
Sparsity priors: given a frame (ψ_m)_m of R^n, one defines a sparsity-enforcing prior in this frame as
J^{spars}(g) \stackrel{\text{def.}}{=} \sum_m |\langle g, \psi_m \rangle|.    (3)
This prior was introduced by Donoho and Johnstone [28] with the orthogonal wavelet basis for denoising purposes. It has then been used to solve more general inverse problems, see for instance [29] and the references therein. It can also be used in conjunction with redundant frames instead of orthogonal bases, see for instance [30,31].
2.2 Graph-Based Regularization
Differential operators over graphs. We consider a weighted graph w that links together pixels x, y over the image domain with a weight w(x, y). This graph allows one to compute generalized discrete derivatives using the graph gradient operator
\forall x, \quad \nabla^w_x f = \big( w(x,y)\,(f(y) - f(x)) \big)_y \in \mathbb{R}^n.
This operator defines, for any pixel x, a gradient vector ∇^w_x f ∈ R^n, see [32]. The divergence operator div^w = (∇^w)^T is the adjoint of the gradient, viewed as an operator f ↦ ∇^w f ∈ R^{n×n}. For a gradient field F_x ∈ R^n, the divergence is
\operatorname{div}^w(F)(x) = \sum_y w(x,y)\,\big( F_x(y) - F_y(x) \big).
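The following minimal sketch (not the authors' code) implements the graph gradient, divergence and the graph total variation defined above and in equation (4) below, for an image flattened into a vector of length n; the graph is stored in a sparse adjacency-list form.

```python
import numpy as np

# nbr[x]: array of neighbour indices of pixel x; wgt[x]: the weights w(x, y).

def graph_gradient(f, nbr, wgt):
    """grad[x][k] = w(x, y_k) * (f[y_k] - f[x]) for every neighbour y_k of x."""
    return [w * (f[ys] - f[x]) for x, (ys, w) in enumerate(zip(nbr, wgt))]

def graph_divergence(grad, nbr, wgt, n):
    """div(F)(x) = sum_y w(x,y) * (F_x(y) - F_y(x))."""
    index = [{int(y): k for k, y in enumerate(ys)} for ys in nbr]
    div = np.zeros(n)
    for x, (ys, w) in enumerate(zip(nbr, wgt)):
        for k, y in enumerate(ys):
            Fyx = grad[y][index[y][x]] if x in index[y] else 0.0
            div[x] += w[k] * (grad[x][k] - Fyx)
    return div

def graph_tv(grad):
    """Graph total variation J_w(f) = sum_x ||grad_x f||, see equation (4) below."""
    return sum(np.linalg.norm(g) for g in grad)
```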
The total variation energy of an image, according to the graph structure given by w, is then defined as
J_w(f) = \sum_x \|\nabla^w_x f\|,    (4)
where ||·|| is the Euclidean norm over R^n. This energy was proposed by Gilboa et al. [15] in the continuous setting. It is used in the discrete setting by Zhou and Schölkopf [16] and Elmoataz et al. [17] in order to perform denoising.
Non-local graph adaptation. Given an image f ∈ R^n to process, one wishes to compute an adapted graph w(f) so that the regularization by J_w efficiently removes noise without destroying the salient features of the image. In order to do so, we use a non-local graph inspired by non-local means filtering [6], which has been used in several recent methods for denoising, see for instance [15,17]. This non-local graph is built by comparing patches around each pixel. A patch p_x(f) of size τ × τ (τ being an odd integer) around a pixel position x ∈ {0, ..., √n − 1}² is defined as
\forall t \in \{-(\tau-1)/2, \ldots, (\tau-1)/2\}^2, \qquad p_x(f)(t) \stackrel{\text{def.}}{=} f(x + t).
A patch p_x(f) is handled as a vector of size τ². Color images f of n pixels can be handled using patches of dimension 3τ². The non-local means algorithm [6] filters an image f using the following image-adapted weights w = w(f):
w(x,y) = \begin{cases} \tilde w(x,y)/Z_x & \text{if } \|x - y\| \leq \delta/2, \\ 0 & \text{otherwise,} \end{cases} \qquad \text{where} \quad \tilde w(x,y) = e^{-\frac{\|p_x(f) - p_y(f)\|^2}{2\sigma^2}},    (5)
and where the normalizing constant is Z_x = \sum_y \tilde w(x,y). The parameter δ > 0 restricts the non-locality of the method and also allows one to speed up the computation. The parameter σ controls how many patches are taken into account to perform the averaging. It is a difficult parameter to set, and ideally it should also be adapted to the noise level |ε|. In the following, we consider the mapping f ↦ w(f) as a simple way to adapt a set of non-local weights to the image f to process.
Graph-based regularization of inverse problems. We propose to use this graph total variation (4) to solve not only the denoising problem but arbitrary inverse problems such as inpainting, super-resolution and compressive sampling. Our non-local graph regularization framework tries to recover an image f from a set of noisy measurements y = Φf + ε using an adapted energy J_{w(f)}. The graph w(f) should be adapted to the image f to recover, but unfortunately one does not have this information, since only the noisy observations y are available. In order to cope with this problem, one performs an optimization over both the image f to recover and the optimal graph w(f), as follows:
f^\star = \operatorname{argmin}_{g \in \mathbb{R}^n} \; \tfrac{1}{2}\,\|y - \Phi g\|^2 + \lambda J_{w(g)}(g).    (6)
It is important to note that the functional prior J_{w(g)} depends non-linearly on the image g being recovered, through equation (5).
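As an illustration, the sketch below computes the patch-based weights of equation (5) for one pixel of a grayscale image with plain numpy. It is not the authors' implementation; the search window ||x − y|| ≤ δ/2 follows the reading of (5) adopted above, and tau, delta, sigma are the parameters discussed in the text.

```python
import numpy as np

def nonlocal_weights(img, x, tau=5, delta=15, sigma=0.05):
    """Return a dict {y: w(x, y)} for one pixel x = (row, col) of a 2D image."""
    h = tau // 2
    pad = np.pad(img, h, mode="reflect")
    patch = lambda p: pad[p[0]:p[0] + tau, p[1]:p[1] + tau]   # patch around pixel p
    px = patch(x)
    wtilde = {}
    r = delta // 2                                            # locality: ||x - y|| <= delta/2
    for d0 in range(-r, r + 1):
        for d1 in range(-r, r + 1):
            y = (x[0] + d0, x[1] + d1)
            if 0 <= y[0] < img.shape[0] and 0 <= y[1] < img.shape[1]:
                d2 = np.sum((px - patch(y)) ** 2)
                wtilde[y] = np.exp(-d2 / (2 * sigma ** 2))
    Z = sum(wtilde.values())                                  # normalization Z_x
    return {y: v / Z for y, v in wtilde.items()}
```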
2.3 Proximal Resolution of the Graph Regularization
The optimization of (6) is difficult because the energy J_{w(g)}(g) makes it non-convex. We use an iterative approximate minimization that optimizes successively the optimal graph and then the image to recover. Since J_w is a non-smooth functional, gradient descent methods are inefficient to minimize it. In order to cope with this issue, we use proximal iterations. The resulting algorithm is based on three main building blocks:
– the non-local graph adaptation procedure, equation (5), to compute a graph w(f_k) adapted to the current estimate f_k of the algorithm;
– proximal iterations to solve (6) when the graph is fixed;
– fixed point iterations to compute the proximity operator needed for the proximal iterations.
Proximal iterations. Equation (5) allows one to compute a graph w adapted to a current estimate of the algorithm. In order to solve the initial regularization (6), we suppose that the graph w is fixed, and look for a minimizer of
f^\star(w) = \operatorname{argmin}_{g \in \mathbb{R}^n} \; \tfrac{1}{2}\,\|y - \Phi g\|^2 + \lambda J_w(g).    (7)
This non-smooth convex minimization is difficult to solve because of the rank-deficient matrix Φ that couples the entries of g. In order to perform the optimization, we use iterative projections with a proximal operator. Proximal iterations replace problem (7) by a series of simpler problems. This strategy has been developed to solve general convex optimizations and has been applied to solve non-smooth convex problems in image processing [33,34]. The proximity operator of a convex functional J : R^n → R^+ is defined as
\operatorname{Prox}_J(f) = \operatorname{argmin}_{g \in \mathbb{R}^n} \; \tfrac{1}{2}\,\|f - g\|^2 + J(g).    (8)
A proximal iteration step uses the proximity operator to decrease the functional (7) one wishes to minimize
f^{(k+1)} = \operatorname{Prox}_{\frac{\lambda}{\mu} J_w}\Big( f^{(k)} + \tfrac{1}{\mu}\,\Phi^T\big( y - \Phi f^{(k)} \big) \Big).    (9)
It uses a step size μ > 0 that should be set in order to ensure that ||Φ^T Φ|| < μ. If the functional J is Lipschitz continuous (which is the case for J_w), then f^{(k)} tends to f^\star(w), which is a minimizer of (7), see for instance [33].
Computation of the proximity operator. In order to compute the proximity operator Prox_{J_w} for the functional J_w, one needs to solve the minimization (8), which is simpler than the original problem (7) since it does not involve the operator Φ anymore. To do so, we use a fixed point algorithm similar to the one of Chambolle [35]. It is based on the computation of a gradient field g(x) ∈ R^n at each pixel x:
g_{i+1}(x) = \frac{g_i(x) - \eta\, h(x)}{1 + \eta\, \|h(x)\|}, \qquad \text{with} \quad h(x) = \nabla^w_x\Big( \operatorname{div}^w(g_i) - \tfrac{\mu}{\lambda}\, f \Big).    (10)
One can then prove that if η is small enough, the iterations (10) satisfy
f - \lambda\, \operatorname{div}^w(g_i) \;\xrightarrow{\; i \to +\infty \;}\; \operatorname{Prox}_{\frac{\lambda}{\mu} J_w}(f).
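A compact sketch of these dual fixed-point iterations, reusing the graph_gradient and graph_divergence helpers sketched earlier, could look as follows (eta and the number of inner steps m are the parameters discussed below; the dual field is initialized as in Table 1).

```python
import numpy as np

def prox_graph_tv(f, nbr, wgt, lam, mu, eta=0.1, m=10):
    """Approximate Prox_{(lam/mu) J_w}(f) with m iterations of (10)."""
    n = f.size
    g = graph_gradient(f, nbr, wgt)                      # dual field g_0
    for _ in range(m):
        u = graph_divergence(g, nbr, wgt, n) - (mu / lam) * f
        h = graph_gradient(u, nbr, wgt)                  # h(x) = grad_x(div(g) - (mu/lam) f)
        g = [(gx - eta * hx) / (1.0 + eta * np.linalg.norm(hx))
             for gx, hx in zip(g, h)]
    return f - lam * graph_divergence(g, nbr, wgt, n)
```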
In numerical computations, since the graph w defined by equation (5) is relatively sparse (each pixel is connected to fewer than δ² pixels), g(x) is stored as a sparse vector.
Graph regularization algorithm. The algorithm that approximately minimizes (6) is detailed in Table 1. It proceeds by iterating the proximal mapping (9). Each computation of this proximity operator requires m inner iterations of the dual gradient descent (10). Since the proximal iterations are robust against an imperfect computation of the proximity operator, m is set to a small constant.
Table 1. Block coordinate descent algorithm to minimize approximately (6)
1. Initialization: set f^{(0)} = 0 and k ← 0.
2. Enforcing the constraints: compute \tilde f^{(k)} = f^{(k)} + \tfrac{1}{\mu}\,\Phi^T\big( y - \Phi f^{(k)} \big).
3. Update the graph: compute the non-local graph w^{(k)} = w(\tilde f^{(k)}) adapted to \tilde f^{(k)} using equation (5).
4. Compute proximal iterations: set g_0^{(k)} = \nabla^{w^{(k)}} f^{(k)}. Perform m steps of the iterations (10):
g_{i+1}^{(k)}(x) = \frac{g_i^{(k)}(x) - \eta\, h(x)}{1 + \eta\, \|h(x)\|}, \qquad \text{with} \quad h(x) = \nabla^{w^{(k)}}_x\Big( \operatorname{div}^{w^{(k)}}\big( g_i^{(k)} \big) - \tfrac{\mu}{\lambda}\, \tilde f^{(k)} \Big).
Set the new estimate f^{(k+1)} = \tilde f^{(k)} - \lambda\, \operatorname{div}^{w^{(k)}}\big( g_m^{(k)} \big).
5. Stopping criterion: while not converged, set k ← k + 1 and go back to step 2.
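The outer loop of Table 1 can be sketched as below for a generic linear operator Φ given as a pair of callables acting on flattened vectors. The helper build_nl_graph is hypothetical (it would assemble the sparse weights of equation (5), e.g. by calling nonlocal_weights at every pixel), and prox_graph_tv is the sketch given above.

```python
import numpy as np

def solve_inverse_problem(y, Phi, PhiT, shape, lam, mu, n_outer=20, m=10):
    n = int(np.prod(shape))
    f = np.zeros(n)                                       # f^(0) = 0
    for _ in range(n_outer):
        f_tilde = f + (1.0 / mu) * PhiT(y - Phi(f))       # step 2: enforce constraints
        nbr, wgt = build_nl_graph(f_tilde.reshape(shape)) # step 3: adapt the graph (hypothetical helper)
        f = prox_graph_tv(f_tilde, nbr, wgt, lam, mu, m=m)  # step 4: m inner prox iterations
    return f.reshape(shape)
```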
3 Numerical Illustration
In the numerical simulations, we consider three different regularizations:
– The total variation energy J^{tv}, defined in equation (2). An algorithm very similar to the one of Table 1 is used for this minimization, except that ∇_x is the classical gradient and that step 3 of the algorithm is not needed.
– The sparsity energy J^{spars}, defined in equation (3), using a redundant tight frame of translation-invariant wavelets (ψ_m)_m. An algorithm very similar to the one of Table 1 is used for this minimization, except that the proximal projection is computed with a soft thresholding, as detailed in [31], and that step 3 of the algorithm is not needed.
– The non-local total variation regularization J_w in an optimized graph, solved using the algorithm of Table 1. For this regularization, the parameter σ of equation (5) is fixed by hand in order to have consistent results for all experiments. The locality parameter δ of equation (5) is fixed to 15 pixels.
Both total variation and non-local total variation require approximately the same number of iterations. For these two methods, the number of inner iterations used to compute the proximity operator is set to m = 10. The non-local iteration is computationally more intensive, since the computation of the non-local weights (w(x, y))_y requires exploring δ² pixels y for each pixel x. In the three applications of Sections 3.1, 3.2 and 3.3, we use a low noise level |ε| of 0.02 ||y||. For all the proposed methods, the parameter λ is optimized in an oracle manner in order to maximize the PSNR of the recovered image f^\star,
\operatorname{PSNR}(f^\star, f) = -20 \log_{10}\big( \|f^\star - f\| / \|f\|_\infty \big).
3.1 Inpainting
Inpainting corresponds to the operation of removing pixels from an image:
(\Phi f)(x) = \begin{cases} 0 & \text{if } x \in \Omega, \\ f(x) & \text{if } x \notin \Omega, \end{cases}
where Ω ⊂ {0, ..., √n − 1}² is the region where the input data has been damaged. In this case, Φ^T = Φ, and one can take a proximity step size μ = 1, so that the proximal iteration (9) becomes
f^{(k+1)} = \operatorname{Prox}_{\lambda J}\big( \tilde f^{(k)} \big) \qquad \text{where} \quad \tilde f^{(k)}(x) = \begin{cases} f^{(k)}(x) & \text{if } x \in \Omega, \\ y(x) & \text{if } x \notin \Omega. \end{cases}
Classical methods for inpainting use partial differential equations that propagate the information from the boundary of Ω to its interior, see for instance [36,37,38,39]. Sparsity-promoting priors such as (3), in wavelet frames and local cosine bases, have been used to solve the inpainting problem [30,31]. Our method iteratively updates the optimal graph w(x, y), and each iteration of the algorithm updates all the pixels inside Ω in parallel. This is similar to the exemplar-based inpainting algorithm of [40], which uses a non-local copying of patches; however, in their framework Ω is filled progressively by propagating inward from the boundary. The anisotropic diffusion of Tschumperlé and Deriche [39] also progressively builds an adapted operator (parameterized by a tensor field), but they solve a PDE and not a regularization as we do. Figure 1 shows some numerical examples of inpainting on images where 80% of the pixels have been damaged.
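A minimal sketch of the masking operator and of the specialized proximal step above is given here; mask is a boolean image that is True on the damaged region Ω, and build_nl_graph / prox_graph_tv are the (hypothetical and sketched) helpers introduced earlier.

```python
import numpy as np

def Phi_inpaint(f, mask):
    g = f.copy()
    g[mask] = 0.0                              # damaged pixels are set to zero
    return g                                   # here Phi^T = Phi

def inpainting_step(f_k, y, mask, lam, mu=1.0, **prox_args):
    # With Phi^T = Phi and mu = 1, f~ keeps the observed data outside Omega
    # and the current estimate inside Omega.
    f_tilde = np.where(mask, f_k, y)
    nbr, wgt = build_nl_graph(f_tilde)         # hypothetical helper, equation (5)
    out = prox_graph_tv(f_tilde.ravel(), nbr, wgt, lam, mu, **prox_args)
    return out.reshape(f_k.shape)
```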
Fig. 1. Examples of inpainting where Ω occupies 80% of the pixels. Columns: input y, wavelets, TV, non-local. PSNR (wavelets / TV / non-local): 24.52 dB / 23.24 dB / 24.79 dB (first row) and 29.65 dB / 28.68 dB / 30.14 dB (second row).
The wavelets method performs better than total variation in terms of PSNR, but tends to introduce some ringing artifacts. Non-local total variation performs better in terms of PSNR and is visually more pleasing, since edges are better reconstructed.
3.2 Super-Resolution
Super-resolution corresponds to the recovery of a high-definition image from a filtered and sub-sampled image. It is usually applied to a sequence of images in a video, see the review papers [41,42]. We consider here the simpler problem of increasing the resolution of a single still image, which corresponds to the inversion of the operator
\forall f \in \mathbb{R}^n, \quad \Phi f = (f * h) \downarrow_k \qquad \text{and} \qquad \forall g \in \mathbb{R}^p, \quad \Phi^T g = (g \uparrow_k) * h,
where p = n/k², h ∈ R^n is a low-pass filter, ↓_k : R^n → R^p is the sub-sampling operator by a factor k along each axis, and ↑_k : R^p → R^n corresponds to the insertion of k − 1 zeros along the horizontal and vertical directions. Figure 2 shows some graphical results of the three tested super-resolution methods. The results are similar to those of inpainting, since our method improves over both wavelets and total variation.
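For illustration, the two operators can be sketched as follows with a separable 1D low-pass kernel h1d (assumed symmetric and of odd length, so the filtering also serves for the transpose); this is a sketch, not the authors' implementation.

```python
import numpy as np

def conv2_same(img, h1d):
    """Separable 'same' convolution of a 2D image with a 1D kernel h1d."""
    pad = len(h1d) // 2
    tmp = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, pad, mode="edge"), h1d, "valid"), 1, img)
    return np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, pad, mode="edge"), h1d, "valid"), 0, tmp)

def Phi_sr(f, h1d, k):
    return conv2_same(f, h1d)[::k, ::k]        # (f * h) sub-sampled by k

def PhiT_sr(g, h1d, k, shape):
    up = np.zeros(shape)
    up[::k, ::k] = g                           # insert k - 1 zeros along each axis
    return conv2_same(up, h1d)                 # (g upsampled) * h
```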
3.3 Compressive Sampling
Compressive sensing is a new sampling theory that uses a fixed set of linear measurements together with a non-linear reconstruction [43,44]. The sensing operator computes the projection of the data on a finite set of p vectors:
\Phi f = \big( \langle f, \varphi_i \rangle \big)_{i=0}^{p-1} \in \mathbb{R}^p,    (11)
where (\varphi_i)_{i=0}^{p-1} are the rows of Φ.
Fig. 2. Examples of image super-resolution with a down-sampling factor k = 8. The original images f are displayed on the left of Figure 3. Columns: input y, wavelets, TV, non-local. PSNR (wavelets / TV / non-local): 21.16 / 20.28 / 21.33 dB (first row), 20.23 / 19.51 / 20.53 dB (second row), 25.43 / 24.53 / 25.67 dB (third row).
Compressive sampling theory gives hypotheses on both the input signal f and the sensing vectors (ϕ_i)_i under which this non-uniform sampling process is invertible. In particular, the (ϕ_i)_i must be incoherent with the orthogonal basis (ψ_m)_m used for the sparsity prior, which is the case with high probability if they are drawn as random unit-norm vectors. Under the additional condition that f is sparse in the orthogonal basis (ψ_m)_m, that is
\#\{\, m \;:\; \langle f, \psi_m \rangle \neq 0 \,\} \leq s,
the optimization of (1) using the energy (3) leads to a recovery with a small error ||f^\star − f|| ≈ |ε| if p = O(s log(n/s)). These results extend to approximately sparse signals, such as signals that are highly compressible in an orthogonal basis. In the numerical tests, we choose the columns of Φ ∈ R^{p×n} to be independent random unit-norm vectors. Figure 3 shows examples of compressive sampling reconstructions. The results are slightly above the wavelets method and tend to be visually more pleasing.
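The following sketch (under the assumptions just stated, not the authors' code) builds such a random sensing matrix with unit-norm columns and simulates noisy measurements with ||ε|| ≈ 0.02 ||y||, as used in the experiments.

```python
import numpy as np

def sensing_matrix(p, n, rng=np.random.default_rng(0)):
    Phi = rng.standard_normal((p, n))
    return Phi / np.linalg.norm(Phi, axis=0, keepdims=True)   # unit-norm columns

def measure(f, Phi, noise_level=0.02, rng=np.random.default_rng(1)):
    y = Phi @ f.ravel()
    eps = rng.standard_normal(len(y))
    eps *= noise_level * np.linalg.norm(y) / np.linalg.norm(eps)   # ||eps|| = 0.02 ||y||
    return y + eps

# Example (illustrative sizes only): a 128x128 image sensed with p = n/8 measurements.
# Phi = sensing_matrix(128 * 128 // 8, 128 * 128); y = measure(f, Phi)
```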
Fig. 3. Examples of compressed sensing reconstruction with p = n/8. Columns: original f, wavelets, TV, non-local. PSNR (wavelets / TV / non-local): 24.91 / 26.06 / 26.13 dB (first row) and 25.33 / 24.12 / 25.55 dB (second row).
4 Conclusion and Future Work
This paper proposed a new framework for the non-local resolution of linear inverse problems. The variational minimization computes iteratively an adaptive non-local graph that enhances the geometric features of the recovered image. Numerical tests show how this method improves over some state-of-the-art methods for inpainting, super-resolution and compressive sampling. This new method also opens interesting questions concerning the optimization of the non-local graph. While this paper proposes to adapt the graph using a patch comparison principle, it is important to understand how this adaptation can be recast as a variational minimization.
References
1. Mallat, S.: A Wavelet Tour of Signal Processing. Academic Press, San Diego (1998)
2. Yaroslavsky, L.P.: Digital Picture Processing – an Introduction. Springer, Heidelberg (1985)
3. Tomasi, C., Manduchi, R.: Bilateral filtering for gray and color images. In: Int. Conf. on Computer Vision, pp. 839–846. IEEE Computer Society, Los Alamitos (1998)
4. Smith, S.M., Brady, J.M.: SUSAN – a new approach to low level image processing. International Journal of Computer Vision 23, 45–78 (1997)
5. Spira, A., Kimmel, R., Sochen, N.: A short time Beltrami kernel for smoothing images and manifolds. IEEE Trans. Image Processing 16, 1628–1636 (2007)
6. Buades, A., Coll, B., Morel, J.M.: On image denoising methods. SIAM Multiscale Modeling and Simulation 4, 490–530 (2005)
7. Efros, A.A., Leung, T.K.: Texture synthesis by non-parametric sampling. In: Int. Conf. on Computer Vision, pp. 1033–1038. IEEE Computer Society, Los Alamitos (1999)
8. Wei, L.Y., Levoy, M.: Fast texture synthesis using tree-structured vector quantization. In: SIGGRAPH 2000, pp. 479–488 (2000)
9. Le Pennec, E., Mallat, S.: Bandelet image approximation and compression. SIAM Multiscale Modeling and Simulation 4, 992–1039 (2005)
10. Mallat, S.: Geometrical grouplets. Applied and Computational Harmonic Analysis (to appear, 2008)
11. Peyré, G.: Texture processing with grouplets. Preprint Ceremade (2008)
12. Coifman, R.R., Lafon, S., Lee, A.B., Maggioni, M., Nadler, B., Warner, F., Zucker, S.W.: Geometric diffusions as a tool for harmonic analysis and structure definition of data: Diffusion maps. Proc. of the Nat. Ac. of Science 102, 7426–7431 (2005)
13. Szlam, A., Maggioni, M., Coifman, R.R.: A general framework for adaptive regularization based on diffusion processes on graphs. Journ. Mach. Learn. Res. (to appear, 2007)
14. Gilboa, G., Osher, S.: Nonlocal linear image regularization and supervised segmentation. SIAM Multiscale Modeling and Simulation 6, 595–630 (2007)
15. Gilboa, G., Darbon, J., Osher, S., Chan, T.: Nonlocal convex functionals for image regularization. UCLA CAM Report 06-57 (2006)
16. Zhou, D., Schölkopf, B.: Regularization on discrete spaces. In: German Pattern Recognition Symposium, pp. 361–368 (2005)
17. Elmoataz, A., Lezoray, O., Bougleux, S.: Nonlocal discrete regularization on weighted graphs: a framework for image and manifold processing. IEEE Tr. on Image Processing 17(7), 1047–1060 (2008)
18. Peyré, G.: Image processing with non-local spectral bases. SIAM Multiscale Modeling and Simulation (to appear, 2008)
19. Kindermann, S., Osher, S., Jones, P.W.: Deblurring and denoising of images by nonlocal functionals. SIAM Mult. Model. and Simul. 4, 1091–1115 (2005)
20. Buades, A., Coll, B., Morel, J.M.: Image enhancement by non-local reverse heat equation. Preprint CMLA 2006-22 (2006)
21. Buades, A., Coll, B., Morel, J.M., Sbert, C.: Non local demosaicing. Preprint 2007-15 (2007)
22. Gilboa, G., Osher, S.: Nonlocal operators with applications to image processing. UCLA CAM Report 07-23 (2007)
23. Datsenko, D., Elad, M.: Example-based single image super-resolution: A global MAP approach with outlier rejection. Journal of Mult. System and Sig. Proc. 18, 103–121 (2007)
24. Freeman, W.T., Jones, T.R., Pasztor, E.C.: Example-based super-resolution. IEEE Computer Graphics and Applications 22, 56–65 (2002)
25. Ebrahimi, M., Vrscay, E.: Solving the inverse problem of image zooming using 'self examples'. In: Kamel, M., Campilho, A. (eds.) ICIAR 2007. LNCS, vol. 4633, pp. 117–130. Springer, Heidelberg (2007)
26. Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D 60, 259–268 (1992)
27. Malgouyres, F., Guichard, F.: Edge direction preserving image zooming: A mathematical and numerical analysis. SIAM Journal on Numer. An. 39, 1–37 (2001)
28. Donoho, D., Johnstone, I.: Ideal spatial adaptation via wavelet shrinkage. Biometrika 81, 425–455 (1994)
29. Daubechies, I., Defrise, M., De Mol, C.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Comm. Pure Appl. Math. 57, 1413–1541 (2004)
30. Elad, M., Starck, J.L., Donoho, D., Querre, P.: Simultaneous cartoon and texture image inpainting using morphological component analysis (MCA). Journal on Applied and Computational Harmonic Analysis 19, 340–358 (2005)
31. Fadili, M., Starck, J.L., Murtagh, F.: Inpainting and zooming using sparse representations. The Computer Journal (to appear, 2006)
32. Chung, F.R.K.: Spectral Graph Theory. Regional Conference Series in Mathematics, vol. 92, pp. 1–212. American Mathematical Society (1997)
33. Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward-backward splitting. Multiscale Modeling & Simulation 4, 1168–1200 (2005)
34. Fadili, M., Starck, J.L.: Monotone operator splitting and fast solutions to inverse problems with sparse representations (preprint, 2008)
35. Chambolle, A.: An algorithm for total variation minimization and applications. Journal of Mathematical Imaging and Vision 20, 89–97 (2004)
36. Masnou, S.: Disocclusion: a variational approach using level lines. IEEE Trans. Image Processing 11, 68–76 (2002)
37. Ballester, C., Bertalmío, M., Caselles, V., Sapiro, G., Verdera, J.: Filling-in by joint interpolation of vector fields and gray levels. IEEE Trans. Image Processing 10, 1200–1211 (2001)
38. Bertalmío, M., Sapiro, G., Caselles, V., Ballester, C.: Image inpainting. In: SIGGRAPH 2000, pp. 417–424 (2000)
39. Tschumperlé, D., Deriche, R.: Vector-valued image regularization with PDEs: A common framework for different applications. IEEE Trans. Pattern Anal. Mach. Intell. 27, 506–517 (2005)
40. Criminisi, A., Pérez, P., Toyama, K.: Region filling and object removal by exemplar-based image inpainting. IEEE Trans. on Image Processing 13, 1200–1212 (2004)
41. Park, S.C., Park, M.K., Kang, M.G.: Super-resolution image reconstruction: a technical overview. IEEE Signal Processing Magazine 20, 21–36 (2003)
42. Farsiu, S., Robinson, D., Elad, M., Milanfar, P.: Advances and challenges in super-resolution. Int. Journal of Imaging Sys. and Tech. 14, 47–57 (2004)
43. Candès, E., Tao, T.: Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Trans. Information Theory 52, 5406–5425 (2006)
44. Donoho, D.: Compressed sensing. IEEE Trans. Information Theory 52, 1289–1306 (2006)
Training Hierarchical Feed-Forward Visual Recognition Models Using Transfer Learning from Pseudo-Tasks
Amr Ahmed¹,⋆, Kai Yu², Wei Xu², Yihong Gong², and Eric Xing¹
¹ School of Computer Science, Carnegie Mellon University {amahmed,epxing}@cs.cmu.edu
² NEC Labs America, 10080 N Wolfe Road, Cupertino, CA 95014 {kyu,xw,ygong}@sv.nec-labs.com
Abstract. Building visual recognition models that adapt across different domains is a challenging task for computer vision. While feature-learning machines in the form of hierarchical feed-forward models (e.g., convolutional neural networks) have shown promise in this direction, they are still difficult to train, especially when few training examples are available. In this paper, we present a framework for training hierarchical feed-forward models for visual recognition, using transfer learning from pseudo tasks. These pseudo tasks are automatically constructed from data without supervision and comprise a set of simple pattern-matching operations. We show that these pseudo tasks induce an informative inverse-Wishart prior on the functional behavior of the network, offering an effective way to incorporate useful prior knowledge into the network training. In addition to being extremely simple to implement and adaptable across different domains with little or no extra tuning, our approach achieves promising results on challenging visual recognition tasks, including object recognition, gender recognition, and ethnicity recognition.
1 Introduction
Visual recognition has proven to be a challenging task for computer vision. This difficulty stems from the large pattern variations under which an automatic recognition system must operate. Surprisingly, this task is extremely easy for humans, even when very few examples are available to the learner. This superior performance is in fact due to the expressive hierarchical representation employed by the human visual cortex. Therefore, it has been widely believed that building a robust invariant feature representation is a key step toward solving visual recognition problems. In the past years, researchers have designed various features that capture different invariant aspects of the image, to name a few: shape descriptors [21], appearance descriptors like SIFT features and their variants [16], etc. A classifier is then fed with this representation to learn the decision boundaries between the object classes. On the other hand, many efforts have been put toward building trainable vision systems in the form of hierarchical feed-forward models that learn the feature extractors and the classification model simultaneously. This approach emulates processing in the visual
⋆ Work mainly done while the author was interning at NEC Labs.
cortex and is reminiscent of the Hubel-Wiesel architecture [12]. While we concede that, given enough time and a proper understanding of a particular visual recognition problem, researchers can devise ingenious feature extractors that achieve excellent classification performance, especially when the learner is faced with few examples, we believe that it is hard to devise a single set of features that is universally suitable for all recognition problems. Therefore, it is believed that learning the features automatically via biologically inspired models will open the door to more robust methods with wider applications. In this paper, we focus on Convolutional Neural Networks (CNNs) as an example of trainable hierarchical feed-forward models [15]. CNNs have been successfully applied to a wide range of applications, including character recognition, pose estimation, face detection, and recently generic object recognition. The model is very efficient in the recognition phase because of its feed-forward nature. However, this generality and capacity to handle a wide variety of domains comes at a price: the model needs a very large number of labeled examples per class for training. To solve this problem, an approach has recently been proposed that utilizes unlabeled data in the training process [20]. Even though the method improves the performance of the model, to date, the best reported recognition accuracies of hierarchical feed-forward models on popular benchmarks like Caltech-101 are still unsatisfactory [14]. In this paper, we present a framework for training hierarchical feed-forward models by leveraging knowledge, via transfer learning, from a set of pseudo tasks which are automatically constructed from data without supervision. We show that these auxiliary tasks induce a data-dependent inverse-Wishart prior on the parameters of the model. The resulting framework is extremely simple to implement; in fact, nothing is required beyond the ability to train a hierarchical feed-forward model via backpropagation. We show the adaptability and effectiveness of our approach on various challenging benchmarks that include the standard object recognition dataset Caltech-101, gender classification, and ethnic origin recognition on the face databases FERET and FRGC [19]. Overall, our approach, with minimal across-domain extra tuning, exhibits excellent classification accuracy on all of these tasks, outperforming other feed-forward models and being comparable to other state-of-the-art methods. Our results indicate that:
– Incorporation of prior knowledge via transfer learning can boost the performance of CNNs by a large margin.
– Trainable hierarchical feed-forward models have the flexibility to handle various visual recognition tasks of a different nature with excellent performance.
2 Related Work
Various techniques have been proposed that exploit locally invariant feature descriptors, to name a few: appearance descriptors based on SIFT features and their derivatives [16], shape descriptors [21], etc. Based on these feature descriptors, a similarity measure is induced over images, either in the bag-of-words representation [8] or in a multi-resolution representation [14]. This similarity measure is then used to train a discriminative classifier. While these approaches achieve excellent performance, we
believe that it is hard to devise a single set of features that is universally suitable for all visual recognition problems. Motivated by the excellent performance and speed of the human visual recognition system, researchers have explored the possibility of learning the features automatically via hierarchical feed-forward models that emulate processing in the visual cortex. These approaches are reminiscent of multi-stage Hubel-Wiesel architectures that use alternating layers of convolutional feature detectors (simple cells) and local pooling and subsampling (complex cells) [12]. Examples of this generic architecture include [7],[22],[18], in addition to Convolutional Neural Networks (CNNs) [15] (see Fig. 2). Several approaches have been proposed to train these models. In [22] and [18] the first layer is hard-wired with Gabor filters, and then a large number of image patches are sampled from the second layer and used as the basis of the representation, which is then used to train a discriminative classifier. In CNNs, all the layers, including a final layer for classification, are jointly trained using the standard backpropagation algorithm [15]. While this approach makes CNNs powerful machines with a capacity to adapt to various tasks, it also means that a large number of training examples is required to prevent overfitting. Recently, [20] proposed a layer-wise greedy algorithm that utilizes unlabeled data for pre-training CNNs. More recently, in [13], the authors proposed to train a feed-forward model jointly with an unsupervised embedding task, which also leads to improved results. Though our work also uses unlabeled data, it differs from the previous work in its greater emphasis on leveraging prior knowledge, which suggests that it can be combined with those approaches to further enhance the training of feed-forward models in general and CNNs in particular, as we will discuss in Section 4. Finally, our work is also related to a generic transfer learning framework [2], which uses auxiliary tasks to learn a linear feature mapping. The work here is motivated differently and aims toward learning complex nonlinear visual feature maps, as we will discuss in Section 3.3. Moreover, in object recognition, transfer learning has been studied in the context of probabilistic generative models [6] and boosting [23]. In this paper our focus is on using transfer learning to train hierarchical feed-forward models by leveraging information from unlabeled data.
3 Transfer Learning
3.1 Basics
Transfer learning, also known as multi-task learning [1,5], is a mechanism that improves generalization by leveraging shared domain-specific information contained in related tasks. In the setting considered in this paper, all tasks share the same input space X, and each task m can be viewed as a function f_m that maps this space to an output space: f_m : X → Y. Intuitively, if the tasks are truly related, then there is a shared structure among all of them that can be leveraged by learning them in parallel. For example, Fig. 1-a depicts a few tasks. In this figure it is clear that input points a and b¹ have similar values across all of these tasks, and thus one can conclude that these two input points are semantically similar, and therefore should be assigned similar values
¹ Please note that the order of points along the x-axis does not necessarily encode similarity.
under other related tasks. When the input space X represents images, the inclusion of related tasks helps induce similarity measures between images that enhance the generalization of the main task being learned. The nature of this similarity measure depends on the architecture of the learning system. For instance, in a feed-forward Neural Network (NN) with one hidden layer, all tasks would share the same hidden representation (feature space) Φ(x) (see Fig. 1-b), and thus the inclusion of pseudo tasks in this architecture would hopefully constrain the model to map semantically similar points like a and b, from the input space, to nearby positions in the feature space.
3.2 Problem Formulation
Since in this paper we mainly focus on feed-forward models, we will formulate our transfer learning problem using a generic neural network learning architecture as in Fig. 1-b. Let N be the number of input examples, and assume that the main task to be learnt has index m, with training examples D_m = {(x_n, y_mn)}. A neural network has a natural architecture to tackle this learning problem by minimizing
\min_\theta \; \ell(D_m, \theta) + \gamma\, \Omega(\theta),    (1)
where \ell(D_m, \theta) amounts to an empirical loss
\min_{w_m} \; \sum_n \ell_m\big( y_{mn},\, w_m^T \Phi(x_n; \theta) \big) + \alpha\, \|w_m\|^2.
Here Ω(θ) is a regularization term on the parameters of the feature extractor Φ(x; θ) = [φ_1(x; θ), ..., φ_J(x; θ)]^T; this feature extractor, i.e. the hidden layer of the network, maps from the input space to the feature space. Moreover, \ell_m(·,·) is the cost function for the target task. Unlike the usual practice in neural networks, where the regularization on θ is similar to the one on w_m, we adopt a more informative Ω(θ) by additionally introducing Λ pseudo auxiliary tasks, each represented by learning the input-output pairs D_λ = {(x_n, y_λn)}_{n=1}^N, where y_λn = g_λ(x_n) and the g_λ are a set of real-valued functions automatically constructed from the input data. As depicted in Fig. 1-b, all the tasks share the hidden layer feature mapping. Moreover, we hypothesize that each pseudo auxiliary function g_λ(x_n) is linearly related to Φ(x_n; θ) via projection weights w_λ. The regularization term Ω(θ) then becomes
\Omega(\theta) = \min_{\{w_\lambda\}} \; \sum_\lambda \sum_n \big( y_{\lambda n} - w_\lambda^T \Phi(x_n; \theta) \big)^2 + \beta\, \|w_\lambda\|^2.    (2)
Training the network in Fig. 1-b to realize the objective function in (1) is extremely simple, because nothing beyond the standard back-propagation algorithm is needed. By constructing meaningful pseudo functions from input data, the model is equipped with extensive flexibility to incorporate our prior knowledge. Furthermore, there is no restriction on the parametric form of Φ(x; θ), which allows us to apply the learning problem (1) to more complex models (e.g., the CNN shown in Fig. 2-a). Our experiments will demonstrate that these advantages can greatly boost the performance of CNNs for visual recognition.
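For a fixed setting of the feature-extraction parameters, the inner minimization over w_λ in (2) is a ridge regression that can be solved in closed form; the following minimal numpy sketch (not the authors' code) makes the value of Ω(θ) explicit for given hidden features and pseudo-task targets.

```python
import numpy as np

def omega(Phi, Y, beta):
    """Phi: N x J matrix of Phi(x_n; theta); Y: N x Lambda matrix of y_{lambda n}."""
    N, J = Phi.shape
    # Optimal w_lambda = (Phi^T Phi + beta I)^{-1} Phi^T y_lambda, one column per pseudo task.
    W = np.linalg.solve(Phi.T @ Phi + beta * np.eye(J), Phi.T @ Y)   # J x Lambda
    residual = np.sum((Y - Phi @ W) ** 2)
    return residual + beta * np.sum(W ** 2)
```

During training, this quantity is added to the main-task loss with weight γ and its gradient with respect to θ flows through Φ via backpropagation.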
Fig. 1. Illustrating the mechanism of transfer learning. (a) Functional view: tasks represented as functional mappings share stochastic characteristics. (b) Transfer learning in neural networks: the hidden layer represents the level of sharing among all the tasks.
3.3 A Bayesian Perspective
In this section we give a Bayesian perspective on the transfer learning problem formulated in Section 3.2. While (1) and (2) are all that is needed to implement the proposed approach, the sole purpose of this section is to give more insight into the role of the pseudo tasks and to formalize the claims we made near the end of Section 3.1. In Section 3.2, we hypothesized that the pseudo tasks are realizable as a linear projection from the feature mapping layer output Φ(x; θ), that is,
y_\lambda = w_\lambda^T \Phi(x; \theta) + e,    (3)
where e ∼ N(0, β^{-1}). The intuition behind (3) is to limit the capacity of this mapping so that the constraints imposed by the pseudo tasks can only be satisfied by proper adjustments of the feature extraction layer parameters θ. To make this point clearer, consider Fig. 1-a, and consider points like a and b which are assigned similar values under many pseudo tasks. Under the restriction that the pseudo auxiliary tasks are realizable as a linear projection from the feature extraction layer output, and given an appropriate number of such pseudo tasks, the only way the NN can satisfy these requirements is to map points like a and b to nearby positions in the feature space. Therefore, the kernel induced by the NN, K(x_i, x_j; θ), via its feature mapping function Φ(·; θ), is constrained to be similar to the kernel induced by the pseudo tasks, where the degree of similarity is controlled via the parameter γ in (1). Below we make this intuition explicit. We first begin by writing the empirical loss due to the pseudo auxiliary tasks, L({D_λ}, θ, {w_λ}), where we make the dependency on {w_λ} explicit, as follows:
L(\{D_\lambda\}, \theta, \{w_\lambda\}) = \sum_\lambda \sum_n \big( y_{\lambda n} - w_\lambda^T \Phi(x_n; \theta) \big)^2 + \beta\, \|w_\lambda\|^2.    (4)
If we assume that w_λ ∼ N(0, I) and that e ∼ N(0, β^{-1}), then it is clear that (4) is the negative log-likelihood of {D_λ} under these mild Gaussian noise assumptions.
In Section 3.2, we decided to minimize this loss over {w_λ}, which gives rise to the regularizer term Ω(θ). Here, we instead take another approach and integrate out {w_λ} from (4), which results in the following fully Bayesian regularizer Ω_B(θ):
\Omega_B(\theta) = \frac{\Lambda}{2} \log \det\big( \Phi^T \Phi + \beta^{-1} I \big) + \frac{\Lambda}{2} \operatorname{tr}\Big( \big( \Phi^T \Phi + \beta^{-1} I \big)^{-1} K_\Lambda \Big),    (5)
where K_\Lambda = \frac{1}{\Lambda} \sum_{\lambda=1}^{\Lambda} K_\lambda and K_\lambda = [\, g_\lambda(x_i)\, g_\lambda(x_j) \,]_{i,j=1}^N. If we let K(θ) denote the kernel induced by the NN feature mapping layer, where K(x_i, x_j, θ) = ⟨Φ(x_i; θ), Φ(x_j; θ)⟩ + δ_{ij} β^{-1}, then (5) can be written as
\Omega_B(\theta) = \frac{\Lambda}{2} \log \det\big( K(\theta) \big) + \frac{\Lambda}{2} \operatorname{tr}\Big( K(\theta)^{-1} K_\Lambda \Big).    (6)
It is quite easy to show that (6) is equivalent to a loss term due to an inverse-Wishart prior, IW(Λ, K_Λ), placed over K(θ). Put another way, (6) is the KL-divergence between two multivariate normal distributions with zero means and covariance matrices given by K(θ) and K_Λ. Therefore, in order to minimize this loss term the learner is biased to make the kernel induced by the NN, K(θ), as similar as possible to the kernel induced by the pseudo-tasks, K_Λ, and this helps regularize the functional behavior of the network, especially when few training examples are available. In Section 3.2, we chose to use the regularizer Ω(θ) as a proxy for Ω_B(θ) for efficiency, as it is amenable to efficient integration with the online stochastic gradient descent algorithm used to train the NN, whereas Ω_B(θ) requires gradient computations over the whole pseudo auxiliary task data sets for every step of the online stochastic gradient algorithm. This decision turns out to be a sensible one, and results in excellent performance, as will be demonstrated in Section 6.
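As an illustration only, the Bayesian regularizer of equation (6) can be evaluated from the two N×N kernels as follows (a sketch under the definitions above, not the authors' code).

```python
import numpy as np

def kernel_from_features(Phi, beta):
    """K(x_i, x_j, theta) = <Phi(x_i), Phi(x_j)> + delta_ij / beta, with Phi of shape N x J."""
    return Phi @ Phi.T + (1.0 / beta) * np.eye(Phi.shape[0])

def kernel_from_pseudo_tasks(Y):
    """Y is N x Lambda with Y[n, l] = g_l(x_n); K_Lambda = (1/Lambda) sum_l g_l g_l^T."""
    return (Y @ Y.T) / Y.shape[1]

def omega_bayes(K_theta, K_pseudo, Lam):
    _, logdet = np.linalg.slogdet(K_theta)
    return 0.5 * Lam * (logdet + np.trace(np.linalg.solve(K_theta, K_pseudo)))
```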
4 Transfer Learning in CNNs
There are no constraints on the form of the feature extractor Φ(·; θ), nor on how it is parameterized given θ; therefore, our approach is applicable to any feed-forward architecture as long as Φ(·; θ) is differentiable, which is required to train the whole model via backpropagation. A popular architecture that has shown excellent performance for visual recognition is the CNN architecture, see Fig. 2-a, which is an instance of multi-stage Hubel-Wiesel architectures [12],[15]. The model includes alternating layers of convolutional feature detectors (C layers) and local pooling of feature maps using a max or an averaging operation (P layers), and a final classification layer. Detailed descriptions of CNNs can be found in [15]. Applying the transfer learning framework described in Section 3 to CNNs results in the architecture in Fig. 2-a. The pseudo tasks are extracted as described in Section 5, and the whole resulting architecture is then trained using standard backpropagation to minimize the objective function in (1).
Fig. 2. Joint training using transfer learning from pseudo-tasks
Throughout the experiments of this paper, we applied CNNs with the following architecture: (1) Input: 140×140 pixel images, including R/G/B channels and additionally two channels Dx and Dy, which are the horizontal and vertical gradients of gray intensities; (2) C1 layer: 16 filters of size 16 × 16; (3) P1 layer: max pooling over each 5 × 5 neighborhood; (4) C2 layer: 256 filters of size 6 × 6, with connections of sparsity² 0.5 between the 16 dimensions of the P1 layer and the 256 dimensions of the C2 layer; (5) P2 layer: max pooling over each 5 × 5 neighborhood; (6) output layer: full connections between the 256 × 4 × 4 P2 features and the outputs. Moreover, we used a least-squares loss for the pseudo tasks and a hinge loss for the classification tasks. Every convolution filter is a linear function followed by a sigmoid transformation (see [15] for more details). It is interesting to contrast our approach with the layer-wise training of [20]. In [20], each feature extraction layer is trained to model its input in a layer-wise fashion: the first layer is trained on the raw images and then used to produce the input to the second feature extraction layer. The whole resulting architecture is then used as a multi-layered feature extractor over labeled data, and the resulting representation is then used to feed an SVM classifier. In contrast, in our approach we jointly train the classifier and the feature extraction layers; thus the feature extraction layer training is guided by the pseudo-tasks as well as by the labeled information simultaneously. Moreover, we believe that the two approaches are orthogonal, as we might first pre-train the network using the method in [20] and then use the result as a starting point for our method. We leave this exploration for future work.
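A quick sanity check of the feature-map sizes implied by this architecture, assuming valid convolutions and non-overlapping 5×5 max pooling (our reading of the description above, not the authors' code):

```python
def conv_out(size, filt):   # valid convolution
    return size - filt + 1

def pool_out(size, pool):   # non-overlapping pooling
    return size // pool

s = 140
s = conv_out(s, 16)   # C1: 140 -> 125
s = pool_out(s, 5)    # P1: 125 -> 25
s = conv_out(s, 6)    # C2:  25 -> 20
s = pool_out(s, 5)    # P2:  20 -> 4
print(s)              # 4, so the output layer indeed sees 256 x 4 x 4 features
```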
5 Generating Pseudo Tasks
We use a set of pseudo tasks to incorporate prior knowledge into the training of recognition models. Therefore, these tasks need to be (1) automatically computable based on unlabeled images, and (2) relevant to the specific recognition task at hand; in other words, it should be highly likely that two semantically similar images are assigned similar outputs under a pseudo task. A simple approach to constructing pseudo tasks is depicted in Fig. 4. In this figure, the pseudo-task is constructed by sampling a random 2D patch and using it as a template to form a local 2D filter that operates on every training image. The value assigned to an image under this task is taken to be the maximum over the result of this 2D convolution operation. Following this method, one can construct as many pseudo-tasks as required.
² In other words, on average, each filter in C2 is connected to 8 randomly chosen dimensions (filter maps) from P1.
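A minimal sketch of the simple construction of Fig. 4 (our illustrative reading, not the authors' code): sample a random patch from the unlabeled images and assign to each image the maximum of its 2D correlation with that patch.

```python
import numpy as np

def sample_patch(images, size=16, rng=np.random.default_rng(0)):
    img = images[rng.integers(len(images))]
    r = rng.integers(img.shape[0] - size)
    c = rng.integers(img.shape[1] - size)
    return img[r:r + size, c:c + size]

def simple_pseudo_task(image, patch):
    ph, pw = patch.shape
    best = -np.inf
    for r in range(image.shape[0] - ph + 1):
        for c in range(image.shape[1] - pw + 1):
            best = max(best, float(np.sum(image[r:r + ph, c:c + pw] * patch)))
    return best   # maximum of the 2D correlation map
```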
Fig. 3. Images from: (a) Caltech-101 and (b) FRGC 2.0
Fig. 4. Simple pseudo task generation
Moreover, this construction satisfies condition (2) above, as semantically similar images are likely to have a similar appearance. Unfortunately, this simple construction is brittle with respect to scale, translation, and slight intensity variations, because it operates directly at the pixel level of the image. Below, we show how to generalize this simple approach to achieve mild local invariance with respect to scale, translation and slight intensity variations. First, we process all the images using a set of Gabor filters with 4 orientations and 16 scales. This step aims at focusing the pseudo-tasks on interesting parts of the images by using our prior knowledge in the form of a set of Gabor filters. Then a max-pooling operation, across scale and space, is employed to achieve mild scale and translation invariance. We then apply the simple method detailed above to this representation. It is interesting to note that this construction is similar in part to [22], which used random patches as the parameters of feed-forward filters that are later used as the basis of the representation. The detailed procedure is as follows, assuming each image is a 140 × 140 gray image: (1) Applying the Gabor filters results in 64 feature maps of size 104 × 104 for each image; (2) Max-pooling is performed first within each non-overlapping 4 × 4 neighborhood and then within each band of two successive scales, resulting in 32 feature maps of size 26 × 26 for each image; (3) A set of K RBF filters of size 7 × 7 with 4 orientations is then sampled and used as the parameters of the pseudo-tasks. To generate the actual values of a given pseudo-task, we first process each training image as above, and then convolve the resulting representation with this pseudo-task's RBF filter. This results in 8 feature maps of size 20 × 20; finally, max pooling is performed on the result across all the scales and within every non-overlapping 10 × 10 neighborhood, giving a 2 × 2 feature map which constitutes the value of this image under this pseudo-task. Note that in the last step, instead of using a global max-pooling operator over the whole image, we maintain some 2D spatial information by this local max operator, which means that the pseudo-tasks are 4-dimensional vector-valued functions, or equivalently, that we obtain 4K pseudo-tasks (K actual random patches, each operating on a different quadrant of the image). These pseudo-tasks encode our prior knowledge that a similarity match between an image and a spatial pattern should tolerate a small change of scale and translation as well as slight intensity variations. Thus, we can use these functions as pseudo tasks to train our recognition models. We note that the framework can generally benefit from
all kinds of pseudo task constructions that comply with our prior knowledge for the recognition task at hand. We have tried other constructions, such as using histogram features of spatial pyramids based on SIFT descriptors, and achieved a similar level of accuracy. Due to space limitations, we only report the results using the method detailed in this section.
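The following sketch illustrates steps (2)-(3) of the procedure described above, starting from precomputed Gabor responses of shape (16 scales, 4 orientations, 104, 104); rbf centers (the K random 7×7×4 patches) and sigma_rbf are assumed inputs, and the whole block is our reading of the text rather than the authors' implementation.

```python
import numpy as np

def pool_blocks(x, b):
    """Non-overlapping b x b max pooling over the last two axes."""
    h, w = x.shape[-2] // b, x.shape[-1] // b
    return x[..., :h * b, :w * b].reshape(*x.shape[:-2], h, b, w, b).max(axis=(-3, -1))

def pseudo_task_value(gabor_maps, center, sigma_rbf=1.0):
    # step (2): 4x4 spatial pooling, then pooling over bands of two successive scales
    m = pool_blocks(gabor_maps, 4)                    # (16, 4, 26, 26)
    m = m.reshape(8, 2, 4, 26, 26).max(axis=1)        # (8, 4, 26, 26)
    # step (3): RBF response to a 7x7x4 template at every location and scale band
    S, H, W = 8, 26 - 7 + 1, 26 - 7 + 1
    resp = np.empty((S, H, W))                        # 8 feature maps of size 20 x 20
    for s in range(S):
        for r in range(H):
            for c in range(W):
                d2 = np.sum((m[s, :, r:r + 7, c:c + 7] - center) ** 2)
                resp[s, r, c] = np.exp(-d2 / (2 * sigma_rbf ** 2))
    resp = resp.max(axis=0)                           # max pooling across scales
    return pool_blocks(resp, 10)                      # 2 x 2 value of this pseudo-task
```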
6 Experimental Results
To demonstrate the ability of our framework to adapt across domains with little tuning, we first fixed the architecture of the CNN as described in Section 4. Second, we fixed the number of pseudo tasks to K = 1024. To speed up the training phase, we apply PCA to reduce the resulting pseudo-task outputs to 300 dimensions. Moreover, in order to ensure that the neural network is trained with balanced outputs, we further project these 300 dimensions using a random set of 300 orthonormal bases and scale each of the response dimensions to have unitary variance.
6.1 Object Recognition
We conducted experiments on the Caltech-101 database, which contains 102 categories (including 101 object categories plus a background category) of object images, with between 31 and 800 images per category. We chose Caltech-101 because the data set is considered one of the most diverse object databases available today and, more importantly, is probably the most commonly tested benchmark in the literature on object recognition, which makes our results directly comparable with those of others. We follow the standard setting in the literature, namely, train on 15/30 images per class and test on the rest. For efficiency, we limit the number of test images to 30 per class. Note that, because some categories are very small, we may end up with fewer than 30 test images. To reduce the over-weighting of popular categories, we first compute the accuracy within each category and then compute the average over all the categories. All the experiments were randomly repeated for 5 trials.
Table 1. Categorization accuracy of different hierarchical feed-forward models on Caltech-101
Training Size            15      30
HMAX-1 [22]              35%     42%
HMAX-2 [18]              51%     56%
CNN + Pretraining [20]   –       54%
CNN                      23.9%   25.1%
CNN+Transfer             58.1%   67.2%
Table 1 shows the comparison of our results with those reported in the literature using similar hierarchical feed-forward models under the same experimental settings. The baseline method "CNN", i.e., a CNN trained without pseudo tasks, presented very poor accuracy, which is close to the phenomenon observed in [20]. The "CNN+Pretraining" approach made a significant improvement by first training an encoder-decoder architecture with unlabeled data, and then feeding the result of applying the encoder on labeled data to an SVM classifier [20]. The idea was inspired by [11], which suggested an unsupervised layer-wise training to improve the performance of deep belief networks. Our strategy "CNN+Pseudo Tasks" also improved the baseline CNN by a large margin, and achieved
the best results among hierarchical feed-forward architectures on the Caltech-101 data set. To better understand the difference made by transfer learning with pseudo tasks, we visualize the learnt first-layer filters of the CNNs in Fig. 5 (a) and (b). Due to the lack of sufficient supervision in such a high-complexity learning task, and a bit surprisingly, the plain CNN cannot learn any meaningful filters. In contrast, thanks to the additional bits of information offered by the pseudo tasks, the CNN ends up with much better filters. Our result is comparable to the state-of-the-art accuracy, i.e., 64.6% ∼ 67.6% in the case of 30 training images per class, achieved by the spatial pyramid matching (SPM) kernel based on SIFT features [14][9]. However, the feed-forward architecture of the CNN can be more efficient in the recognition phase. In our experiments, it takes on average 0.18 seconds on a PC with a 2.66 GHz CPU to process one 140 × 140 color image, including feature extraction and classification.
6.2 Gender and Ethnicity Recognition
In this section we work on gender and ethnicity recognition based on facial appearance. We use the FRGC 2.0 (Face Recognition Grand Challenge [19]) data set, which contains 568 individuals' face images under various lighting conditions and backgrounds, presenting in total 14714 face images. Besides person identities, each image is annotated with gender, age, race, as well as positions of eyes and nose. Each face image is aligned based on the location of the eyes, and normalized to have zero mean and unitary length. We note that the data set is not suitable for research on age prediction, because the majority of individuals are young students. We built models for binary gender classification and 3-class ethnicity recognition, i.e., classifying images into "white", "asian", and "other". For comparison, we implemented two state-of-the-art algorithms that both utilize holistic facial information: one is "SVM+SPM", namely, the SVM classifier using SPM kernels based on dense SIFT descriptors, as described by [14]; the other is "SVM+RBF", namely, the SVM classifier using radial basis function (RBF) kernels operating directly on the aligned face images. The second approach has demonstrated state-of-the-art accuracy for gender recognition [3,17]. We fix 114 persons' 3014 images (randomly chosen) as the testing set, and train the recognition models with various randomly selected 5%, 10%, 20%, 50%, and "All" of the remaining data, in order to examine the model's performance given different training sizes. Note that we strictly ensure that a particular individual appears only in either the test set or the training set. For each training size, we randomize the training data 5 times and report the average error rate as well as the standard deviation. The results are shown in Table 2 and Table 3.
Table 2. Error of gender recognition on the FRGC data set
Training Size   RBF+SVM       SPM+SVM       CNN           CNN+Transfer
5%              16.7 ± 2.4%   15.3 ± 2.9%   61.5 ± 7.3%   16.9 ± 2.0%
10%             13.4 ± 2.4%   12.3 ± 1.1%   17.2 ± 4.3%   7.6 ± 1.1%
20%             11.3 ± 1.0%   11.1 ± 0.6%   8.4 ± 0.5%    5.8 ± 0.3%
50%             9.1 ± 0.5%    10.3 ± 0.8%   6.6 ± 0.3%    5.1 ± 0.2%
All             8.6%          8.7%          5.9%          4.6%
Table 3. Error of ethnicity recognition on the FRGC data set
Training Size   RBF+SVM       SPM+SVM       CNN           CNN+Transfer
5%              22.9 ± 4.7%   23.7 ± 3.2%   30.0 ± 5.1%   16.0 ± 1.7%
10%             16.9 ± 2.3%   22.7 ± 3.6%   13.9 ± 2.4%   9.2 ± 0.6%
20%             14.1 ± 2.2%   18.0 ± 3.6%   10.0 ± 1.0%   7.9 ± 0.4%
50%             11.3 ± 1.0%   15.8 ± 0.7%   8.2 ± 0.6%    6.4 ± 0.3%
All             10.2%         14.1%         6.3%          6.1%
Fig. 5. First-layer filters on the B channel, learnt from both the supervised CNN and the CNN with transfer learning. Top: filters learnt from the supervised CNN. Bottom: filters learnt using transfer learning from pseudo-tasks. First column: Caltech-101 (30 examples per class); second column: FRGC-gender; third column: FRGC-race.
From Tables 2 and 3 we have the following observations: (1) The two competitor methods produced comparable results for gender classification, while for ethnicity recognition SVM+RBF is more accurate than SVM+SPM; (2) In general, the CNN models outperformed the two competitors for both gender and ethnicity recognition, especially when sufficient training data were given; (3) The CNN without transfer learning produced very poor results when only 5% of the total training data were provided; (4) "CNN+Transfer" significantly boosted the recognition accuracy in nearly all the cases. In the cases of small training sets, the improvement was dramatic. In the end, our method achieved a 4.6% error rate for gender recognition and 6.1% for ethnicity recognition. Interestingly, although CNN and "CNN+Transfer" resulted in close performances when all the training data were employed, the filters learnt by CNN+Transfer (visualized in Fig. 5.d) appear to be much smoother than those learnt by CNN (shown in Fig. 5.c)³. Moreover, as indicated by Fig. 6, we also found that "CNN+Transfer" converged much faster than CNN during the stochastic gradient training, indicating another advantage of our approach. We note that the best performances our method achieved here are not directly comparable to those reported in [10,17], because their results are based on the FERET data
³ To save space, here we only show the filters of one channel for gender and ethnicity recognition. However, the same phenomenon was observed for the filters of the other channels.
Table 4. Error of gender recognition on the FERET data set
        RBF+SVM    Boosting   CNN    CNN+Transfer
Error   6.5% [3]   5.6% [3]   2.3%   1.7%
Fig. 6. Number of errors on test data over epochs, where dashed lines are the results of CNN with transfer learning and solid lines are CNN without transfer learning: (a) gender recognition; (b) ethnicity recognition
set⁴, which contains face images under highly controlled lighting conditions and simpler backgrounds. More importantly, as recently pointed out by [3], their experiments mixed up faces of the same individuals across training and test sets, which means the results do not truly measure the generalization performance on new individuals. To make a direct comparison possible, we followed the experimental setting of [3] as closely as possible and conducted experiments on the FERET data for gender recognition, where no individual is allowed to appear in the training and test sets simultaneously. The results are summarized in Table 4, showing that "CNN+Transfer" achieved the best accuracy on the FERET data set.
6.3 A Further Understanding of Our Approach
In the previous two subsections we showed that our framework, with little tuning, can adapt across different domains with favorable performance. It is interesting to isolate the source of this success. Is it only because of the informativeness of the pseudo-tasks used? And if not, is there a simpler way of combining the information from the pseudo-tasks with its equivalent from a supervised CNN trained only on labeled data? To answer the first question: as we mentioned in Section 5, our pseudo-task construction overlaps with the systems in [22],[18]⁵; however, our results in Table 1 indicate a significant improvement over these baselines. To answer the second question, we ran an additional experiment on Caltech-101, using 30 training examples per category, to train an SVM on the features produced by the pseudo-tasks alone or on the combined features produced by the pseudo-tasks and the features from the last layer of a CNN trained via purely supervised learning. The results were 49.6% and 50.6%, respectively. This shows that the gain from using the features from a supervised CNN was minimal. On the other
⁴ Available at http://www.itl.nist.gov/iad/humanid/feret/
⁵ In fact, the system in [22] and its successor [18] have other major features like inhibition, etc.
hand, our approach, which involves joint training of the whole CNN, inherits the knowledge from the pseudo-tasks in the form of their induced kernel, as explained in Section 3.3, but is also supervised by labeled data and thus has the ability to further adapt its induced kernel, K(θ), to better suit the task at hand. Moreover, our approach results in an efficient model at prediction time. In fact, the pseudo-task extraction phase is computationally expensive, and it took around 29 times longer to process one image than a feed-forward pass over the final trained CNN. In other words, we paid some overhead in the training phase to compute these pseudo-tasks once, but created a fast, compact, and accurate model for prediction.
7 Discussion, Conclusion, and Future Work

Benefiting from a deep understanding of a problem, hand-engineered features usually demonstrate excellent performance. This success is in a large sense due to the fact that the features are learnt by the smartest computational units – the brains of researchers. In this sense, hand-crafted design and automatic learning of visual features do not differ fundamentally. An important implication of this paper is that it is generally hard to build a set of features that is universally suitable for all different tasks. For example, the SPM kernel based on SIFT is excellent for object recognition, but may not be good for gender and ethnicity recognition. Interestingly, an automatically learnable architecture like a CNN can adapt itself to a range of situations and learn significantly different features for object recognition and gender recognition (compare Fig. 5(b) and (d)). We believe that, given a sufficient amount of time, researchers can very likely come up with even better features for any visual recognition task. However, a completely trainable architecture can hopefully achieve good results for a less well-studied task with minimal human effort. In this paper, we empirically observed that training a hierarchical feedforward architecture was extremely difficult. We conjecture that the poor performance of CNN on Caltech 101 is due to the lack of training data, given the large variation of object patterns. In the tasks of gender and ethnicity recognition, where we have sufficient data, CNNs in fact produced poor results on small training sets but excellent results given enough training data (see Table 2 and Table 3). Therefore, when insufficient labeled examples are present, it is essential to use additional information to supervise the network training. We proposed using transfer learning to improve the training of hierarchical feedforward models. The approach has been implemented on CNNs and demonstrated excellent performance on a range of visual recognition tasks. Our experiments showed that transfer learning with pseudo-tasks substantially improves the quality of CNNs by incorporating useful prior knowledge. Our approach can be combined with the pretraining strategy [20,11], which remains interesting future work. Very recently, [4] showed that detecting regions of interest (ROI) can greatly boost the performance of the SPM kernel on Caltech 101. Our work is at the level of [14], which builds a classifier based on the whole image. In the future, it would be highly interesting to develop a mechanism of attention in CNNs that can automatically focus on the most interesting regions of images.
References

1. Abu-Mostafa, Y.: Learning from hints in neural networks. Journal of Complexity 6, 192–198 (1990)
2. Ando, R., Zhang, T.: A framework for learning predictive structures from multiple tasks and unlabeled data. JMLR 6, 1817–1853 (2005)
3. Baluja, S., Rowley, H.: Boosting sex identification performance. International Journal of Computer Vision (2007)
4. Bosch, A., Zisserman, A., Munoz, X.: Image classification using random forests and ferns. In: ICCV 2007 (2007)
5. Caruana, R.: Multitask learning. Machine Learning 28(1), 41–75 (1997)
6. Fei-Fei, L.: Knowledge transfer in learning to recognize visual object classes. In: International Conference on Development and Learning (ICDL) (2006)
7. Fukushima, K., Miyake, S.: Object recognition with features inspired by visual cortex. Pattern Recognition (1982)
8. Grauman, K., Darrell, T.: The pyramid match kernel: Discriminative classification with sets of image features. In: CVPR 2005 (2005)
9. Griffin, G., Holub, A., Perona, P.: Caltech 256 object category dataset. California Institute of Technology 04-1366 (2007)
10. Gutta, S., Huang, J., Jonathon, P., Wechsler, H.: Mixture of experts for classification of gender, ethnic origin, and pose of human faces. IEEE Transactions on Neural Networks (2000)
11. Hinton, G., Osindero, S., Teh, Y.-W.: A fast learning algorithm for deep belief nets. Neural Computation 18, 1527–1554 (2006)
12. Hubel, D.H., Wiesel, T.N.: Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J. Physiology 160, 106–154 (1968)
13. Weston, J., Ratle, F., Collobert, R.: Deep learning via semi-supervised embedding. In: ICML (2008)
14. Lazebnik, S., Schmid, C., Ponce, J.: Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In: CVPR 2006 (2006)
15. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11), 2278–2324 (1998)
16. Lowe, D.: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60(2) (2004)
17. Moghaddam, B., Yang, M.-H.: Learning gender with support faces. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) (2002)
18. Mutch, J., Lowe, D.G.: Multiclass object recognition with sparse, localized features. In: CVPR 2006 (2006)
19. Phillips, P.J., Flynn, P.J., Scruggs, T., Bowyer, K.W., Worek, W.: Preliminary face recognition grand challenge results. In: Proceedings of the Seventh International Conference on Automatic Face and Gesture Recognition (2006)
20. Ranzato, M., Huang, F.-J., Boureau, Y.-L., LeCun, Y.: Unsupervised learning of invariant feature hierarchies with applications to object recognition. In: CVPR 2007 (2007)
21. Belongie, S., Malik, J., Puzicha, J.: Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Anal. Mach. Intell. 24(4), 509–522 (2002)
22. Serre, T., Wolf, L., Poggio, T.: Object recognition with features inspired by visual cortex. In: CVPR 2005 (2005)
23. Torralba, A., Murphy, K., Freeman, W.: Sharing visual features for multiclass and multiview object detection. IEEE PAMI (2007)
Learning Optical Flow

Deqing Sun¹, Stefan Roth², J.P. Lewis³, and Michael J. Black¹

¹ Department of Computer Science, Brown University, Providence, RI, USA
{dqsun,black}@cs.brown.edu
² Department of Computer Science, TU Darmstadt, Darmstadt, Germany
[email protected]
³ Weta Digital Ltd., New Zealand
[email protected]
Abstract. Assumptions of brightness constancy and spatial smoothness underlie most optical flow estimation methods. In contrast to standard heuristic formulations, we learn a statistical model of both brightness constancy error and the spatial properties of optical flow using image sequences with associated ground truth flow fields. The result is a complete probabilistic model of optical flow. Specifically, the ground truth enables us to model how the assumption of brightness constancy is violated in naturalistic sequences, resulting in a probabilistic model of “brightness inconstancy”. We also generalize previous high-order constancy assumptions, such as gradient constancy, by modeling the constancy of responses to various linear filters in a high-order random field framework. These filters are free variables that can be learned from training data. Additionally we study the spatial structure of the optical flow and how motion boundaries are related to image intensity boundaries. Spatial smoothness is modeled using a Steerable Random Field, where spatial derivatives of the optical flow are steered by the image brightness structure. These models provide a statistical motivation for previous methods and enable the learning of all parameters from training data. All proposed models are quantitatively compared on the Middlebury flow dataset.
1 Introduction
We address the problem of learning models of optical flow from training data. Optical flow estimation has a long history and we argue that most methods have explored some variation of the same theme. Particularly, most techniques exploit two constraints: brightness constancy and spatial smoothness. The brightness constancy constraint (data term) is derived from the observation that surfaces usually persist over time and hence the intensity value of a small region remains the same despite its position change [1]. The spatial smoothness constraint (spatial term) comes from the observation that neighboring pixels generally belong to the same surface and so have nearly the same image motion. Despite the long history, there have been very few attempts to learn what these terms should be [2]. Recent advances [3] have made sufficiently realistic image sequences with ground truth optical flow available to finally make this practical. Here we revisit
several classic and recent optical flow methods and show how training data and machine learning methods can be used to train these models. We then go beyond previous formulations to define new versions of both the data and spatial terms. We make two primary contributions. First we exploit image intensity boundaries to improve the accuracy of optical flow near motion boundaries. The idea is based on that of Nagel and Enkelmann [4], who introduced oriented smoothness to prevent blurring of flow boundaries across image boundaries; this can be regarded as an anisotropic diffusion approach. Here we go a step further and use training data to analyze and model the statistical relationship between image and flow boundaries. Specifically we use a Steerable Random Field (SRF) [5] to model the conditional statistical relationship between the flow and the image sequence. Typically, the spatial smoothness of optical flow is expressed in terms of the image-axis-aligned partial derivatives of the flow field. Instead, we use the local image edge orientation to define a steered coordinate system for the flow derivatives and note that the flow derivatives along and across image boundaries are highly kurtotic. We then model the flow field using a Markov random field (MRF) and formulate the steered potentials using Gaussian scale mixtures (GSM) [6]. All parameters of the model are learned from examples thus providing a rigorous statistical formulation of the idea of Nagel and Enkelmann. Our second key contribution is to learn a statistical model of the data term. Numerous authors have addressed problems with the common brightness constancy assumption. Brox et al. [7], for example, extend brightness constancy to high-order constancy, such as gradient and Hessian constancy in order to minimize the effects of illumination change. Additionally, Bruhn et al. [8] show that integrating constraints within a local neighborhood improves the accuracy of dense optical flow. We generalize these two ideas and model the data term as a general high-order random field that allows the principled integration of local information. In particular, we extend the Field-of-Experts formulation [2] to the spatio-temporal domain to model temporal changes in image features. The data term is formulated as the product of a number of experts, where each expert is a non-linear function (GSM) of a linear filter response. One can view previous methods as taking these filters to be fixed: Gaussians, first derivatives, second derivatives, etc. Rather than assuming known filters, our framework allows us to learn them from training data. In summary, by using naturalistic training sequences with ground truth flow we are able to learn a complete model of optical flow that not only captures the spatial statistics of the flow field but also the statistics of brightness inconstancy and how the flow boundaries relate to the image intensity structure. The model combines and generalizes ideas from several previous methods and the resulting objective function is at once familiar and novel. We present a quantitative evaluation of the different methods using the Middlebury flow database [3] and find that the learned models outperform previous models, particularly at motion boundaries. Our analysis uses a single, simple, optimization method throughout to focus the comparison on the effects of different objective functions. The results
suggest the benefit of learning standard models and open the possibility to learn more sophisticated ones.
2 Previous Work
Horn and Schunck [9] introduced both the brightness constancy and the spatial smoothness constraints for optical flow estimation; however, their quadratic formulation assumes Gaussian statistics and is not robust to outliers caused by reflection, occlusion, motion boundaries, etc. Black and Anandan [1] introduced a robust estimation framework to deal with such outliers, but did not attempt to model the true statistics of brightness constancy errors and flow derivatives. Fermüller et al. [10] analyzed the effects of noise on the estimation of flow, but did not attempt to learn flow statistics from examples. Rather than assuming a model of brightness constancy we acknowledge that brightness can change and, instead, attempt to explicitly model the statistics of brightness inconstancy. Many authors have extended the brightness constancy assumption, either by making it more physically plausible [11,12] or by linear or non-linear pre-filtering of the images [13]. The idea of assuming constancy of first or second image derivatives to provide some invariance to lighting changes dates back to the early 1980's with the Laplacian pyramid [14] and has recently gained renewed popularity [7]. Following a related idea, Bruhn et al. [8] replaced the pixelwise brightness constancy model with a spatially smoothed one. They found that a Gaussian-weighted spatial integration of brightness constraints results in significant improvements in flow accuracy. If filtering the image is a good idea, then we ask what filters should we choose? To address this question, we formulate the problem as one of learning the filters from training examples. Most optical flow estimation methods encounter problems at motion boundaries where the assumption of spatial smoothness is violated. Observing that flow boundaries often coincide with image boundaries, Nagel and Enkelmann [4] introduced oriented smoothness to prevent blurring of optical flow across image boundaries. Alvarez et al. [15] modified the Nagel-Enkelmann approach so that less smoothing is performed close to image boundaries. The amount of smoothing along and across boundaries has been determined heuristically. Fleet et al. [16] learned a statistical model relating image edge orientation and amplitude to flow boundaries in the context of a patch-based motion discontinuity model. Black [17] proposed an MRF model that coupled edges in the flow field with edges in the brightness images. This model, however, was hand designed and tuned. We provide a probabilistic framework within which to learn the parameters of a model like that of Nagel and Enkelmann from examples. Simoncelli et al. [18] formulated an early probabilistic model of optical flow and modeled the statistics of the deviation of the estimated flow from the true flow. Black et al. [19] learned parametric models for different classes of flow (e.g. edges and bars). More recently, Roth and Black [2] modeled the spatial structure of optical flow fields using a high-order MRF, called a Field of Experts (FoE), and learned the parameters from training data. They combined their learned prior
model with a standard data term [8] and found that the FoE model improved the accuracy of optical flow estimates. While their work provides a learned prior model of optical flow, it only models the spatial statistics of the optical flow and not the data term or the relationship between flow and image brightness. Freeman et al. [20] also learned an MRF model of image motion but their training was restricted to simplified “blob world” scenes; here we use realistic scenes with more complex image and flow structure. Scharstein and Pal [21] learned a full model of stereo, formulated as a conditional random field (CRF), from training images with ground truth disparity. This model also combines spatial smoothness and brightness constancy in a learned model, but uses simple models of brightness constancy and spatially-modulated Potts models for spatial smoothness; these are likely inappropriate for optical flow.
3 Statistics of Optical Flow

3.1 Spatial Term
Roth and Black [2] studied the statistics of horizontal and vertical optical flow derivatives and found them to be heavy-tailed, which supports the intuition that optical flow fields are typically smooth, but have occasional motion discontinuities. Figure 1 (a, b (solid)) shows the marginal log-histograms of the horizontal and vertical derivatives of horizontal flow, computed from a set of 45 ground truth optical flow fields. These include four from the Middlebury "other" dataset, one from the "Yosemite" sequence, and ten of our own synthetic sequences. These synthetic sequences were generated in the same way as, and are similar to, the other Middlebury synthetic sequences (Urban and Grove); two examples are shown in Fig. 2. To generate additional training data the sequences were also flipped horizontally and vertically. The histograms are heavy-tailed with high peaks, as characterized by their high kurtosis (κ = E[(x − μ)⁴]/E[(x − μ)²]²). We go beyond previous work by also studying the steered derivatives of optical flow where the steering is obtained from the image brightness of the reference (first) frame. To obtain the steered derivatives, we first calculate the local image orientation in the reference frame using the structure tensor as described in [5]. Let (cos θ(I), sin θ(I))ᵀ and (− sin θ(I), cos θ(I))ᵀ be the eigenvectors of the structure tensor in the reference frame I, which are respectively orthogonal to and aligned with the local image orientation. Then the orthogonal and aligned derivative operators ∂O^I and ∂A^I of the optical flow are given by

∂O^I = cos θ(I) · ∂x + sin θ(I) · ∂y   and   ∂A^I = − sin θ(I) · ∂x + cos θ(I) · ∂y,   (1)

where ∂x and ∂y are the horizontal and vertical derivative operators. We approximate these using the 2 × 3 and 3 × 2 filters from [5]. Figure 1 (c, d) shows the marginal log-histograms of the steered derivatives of the horizontal flow (the vertical flow statistics are similar and are omitted here). The log-histogram of the derivative orthogonal to the local structure orientation has much broader tails than the aligned one, which confirms the intuition that large flow changes occur more frequently across the image edges.
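A short sketch of this steering computation is given below. It is only an illustration under our own assumptions (Gaussian smoothing scales, np.gradient in place of the 2 × 3 / 3 × 2 derivative filters of [5]); function names are ours, not the authors'.

```python
# Hypothetical sketch of Eq. (1): orientation from the structure tensor of the
# reference frame, then rotation of the flow derivatives into the
# orthogonal/aligned coordinate system. Smoothing scales are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def steered_flow_derivatives(u, I, sigma_grad=1.0, sigma_int=2.0):
    Iy, Ix = np.gradient(gaussian_filter(I, sigma_grad))      # image gradients
    Jxx = gaussian_filter(Ix * Ix, sigma_int)                 # structure tensor entries
    Jxy = gaussian_filter(Ix * Iy, sigma_int)
    Jyy = gaussian_filter(Iy * Iy, sigma_int)
    theta = 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy)            # dominant gradient orientation
    du_dy, du_dx = np.gradient(u)                             # axis-aligned flow derivatives
    du_orth = np.cos(theta) * du_dx + np.sin(theta) * du_dy   # across image edges
    du_alig = -np.sin(theta) * du_dx + np.cos(theta) * du_dy  # along image edges
    return du_orth, du_alig

def kurtosis(x):
    d = np.ravel(x) - np.mean(x)          # kappa = E[d^4] / E[d^2]^2, as reported in Fig. 1
    return np.mean(d**4) / np.mean(d**2)**2
```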
(a) ∂x u, κ = 420.5   (b) ∂y u, κ = 527.3   (c) ∂O^I u, κ = 340.1   (d) ∂A^I u, κ = 636.4

Fig. 1. Marginal filter response statistics (log scale) of standard derivatives (left) and derivatives steered to local image structure (right) for the horizontal flow u. The histograms are shown in solid blue; the learned experts in dashed red. κ denotes kurtosis.
These findings suggest that the steered marginal statistics provide a statistical motivation for the Nagel-Enkelmann method, which performs stronger smoothing along image edges and less orthogonal to image edges. Furthermore, the non-Gaussian nature of the histograms suggests that non-linear smoothing should be applied orthogonal to and aligned with the image edges.

3.2 Data Term
To our knowledge, there has been no formal study of the statistics of the brightness constancy error, mainly due to the lack of appropriate training data. Using ground truth optical flow fields we compute the brightness difference between pairs of training images by warping the second image in each pair toward the first using bi-linear interpolation. Figure 2 shows the marginal log-histogram of the brightness constancy error for the training set; this has heavier tails and a tighter peak than a Gaussian of the same mean and variance. The tight peak suggests that the value of a pixel in the first image is usually nearly the same as the corresponding value in the second image, while the heavy tails account for violations caused by reflection, occlusion, transparency, etc. This shows that modeling the brightness constancy error with a Gaussian, as has often been done, is inappropriate, and this also provides a statistical explanation for the robust data term used by Black and Anandan [1]. The Lorentzian used there has a similar shape as the empirical histogram in Fig. 2.
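As a concrete illustration of how such statistics can be gathered (a sketch, not the authors' code; it assumes grayscale float images and ground-truth flow in pixels):

```python
# Warp the second image toward the first with bilinear interpolation and
# histogram the brightness-constancy residual I1 - warp(I2; u, v).
import numpy as np
from scipy.ndimage import map_coordinates

def brightness_constancy_error(I1, I2, u, v):
    h, w = I1.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    warped = map_coordinates(I2, [y + v, x + u], order=1, mode='nearest')
    return I1 - warped

# err = brightness_constancy_error(I1, I2, u_gt, v_gt)
# hist, edges = np.histogram(err, bins=101, range=(-50, 50), density=True)
```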
Fig. 2. (a) Statistics of the brightness constancy error: the log-histogram (solid blue) is fit with a GSM model (dashed red). (b)–(e) Two reference (first) images and their associated flow fields from our synthetic training set.
We should also note that the shape of the error histogram will depend on the type of training images. For example, if the images have significant camera noise, this will lead to brightness changes even in the absence of any other effects. In such a case, the error histogram will have a more rounded peak depending on how much noise is present in the images. Future work should investigate adapting the data term to the statistical properties of individual sequences.
4 Modeling Optical Flow
We formulate optical flow estimation as a problem of probabilistic inference and decompose the posterior probability density of the flow field (u, v) given two successive input images I1 and I2 as

p(u, v|I1, I2; Ω) ∝ p(I2|u, v, I1; ΩD) · p(u, v|I1; ΩS),   (2)

where ΩD and ΩS are parameters of the model. Here the first (data) term describes how the second image I2 is generated from the first image I1 and the flow field, while the second (spatial) term encodes our prior knowledge of the flow fields given the first (reference) image. Note that this decomposition of the posterior is slightly different from the typical one, e.g., in [18], in which the spatial term takes the form p(u, v; ΩS). Standard approaches assume conditional independence between the flow field and the image structure, which is typically not made explicit. The advantage of our formulation is that the conditional nature of the spatial term allows for more flexible methods of flow regularization.

4.1 Spatial Term
For simplicity we assume that horizontal and vertical flow fields are independent; Roth and Black [2] showed experimentally that this is a reasonable assumption. The spatial model thus becomes

p(u, v|I1; ΩS) = p(u|I1; ΩSu) · p(v|I1; ΩSv).   (3)

To obtain our first model of spatial smoothness, we assume that the flow fields are independent of the reference image. Then the spatial term reduces to a classical optical flow prior, which can, for example, be modeled using a pairwise MRF:

pPW(u; ΩPWu) = (1/Z(ΩPWu)) Π_{(i,j)} φ(u_{i,j+1} − u_{ij}; ΩPWu) · φ(u_{i+1,j} − u_{ij}; ΩPWu),   (4)
where the difference between the flow at neighboring pixels approximates the horizontal and vertical image derivatives (see e. g., [1]). Z(ΩPWu ) here is the partition function that ensures normalization. Note that although such an MRF model is based on products of very local potential functions, it provides a global probabilistic model of the flow. Various parametric forms have been used to model the potential function φ (or its negative log): Horn and Schunck [9] used
Gaussians, the Lorentzian robust error function was used by Black and Anandan [1], and Bruhn et al. [8] assumed the Charbonnier error function. In this paper, we use the more expressive Gaussian scale mixture (GSM) model [6], i.e.,

φ(x; Ω) = Σ_{l=1..L} ωl · N(x; 0, σ²/sl),   (5)

in which Ω = {ωl | l = 1, . . . , L} are the weights of the GSM model, sl are the scales of the mixture components, and σ² is a global variance parameter. GSMs can model a wide range of distributions ranging from Gaussians to heavy-tailed ones. Here, the scales and σ² are chosen so that the empirical marginals of the flow derivatives can be represented well with such a GSM model and are not trained along with the mixture weights ωl. The particular decomposition of the posterior used here (2) allows us to model the spatial term for the flow conditioned on the measured image. For example, we can capture the oriented smoothness of the flow fields and generalize the Steerable Random Field model [5] to a steerable model of optical flow, resulting in our second model of spatial smoothness:

pSRF(u|I1; ΩSRFu) ∝ Π_{(i,j)} φ((∂O^{I1} u)_{ij}; ΩSRFu) · φ((∂A^{I1} u)_{ij}; ΩSRFu).   (6)
The steered derivatives (orthogonal and aligned) are defined as in (1); the superscript denotes that steering is determined by the reference frame I1. The potential functions are again modeled using GSMs.
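For concreteness, a minimal sketch of how a GSM potential enters the spatial energy is given below; the weights, scales, and σ² are placeholders rather than learned values, and the function name is ours.

```python
# Negative log of the GSM potential of Eq. (5), applied to pairwise flow
# differences as in the pairwise prior of Eq. (4). Parameters are placeholders.
import numpy as np

def gsm_neg_log(x, weights, scales, sigma2):
    var = sigma2 / np.asarray(scales, dtype=float)     # component variances sigma^2 / s_l
    x = np.asarray(x, dtype=float)[..., None]          # broadcast over mixture components
    comp = np.asarray(weights) * np.exp(-0.5 * x**2 / var) / np.sqrt(2 * np.pi * var)
    return -np.log(comp.sum(axis=-1) + 1e-300)

# Pairwise spatial energy of the horizontal flow u (placeholder parameters w, s, sigma2):
# E_pw = gsm_neg_log(u[:, 1:] - u[:, :-1], w, s, sigma2).sum() + \
#        gsm_neg_log(u[1:, :] - u[:-1, :], w, s, sigma2).sum()
```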
4.2 Data Term
Models of the optical flow data term typically embody the brightness constancy assumption, or more specifically model the deviations from brightness constancy. Assuming independence of the brightness error at the pixel sites, we can define a standard data term as

pBC(I2|u, v, I1; ΩBC) ∝ Π_{(i,j)} φ(I1(i, j) − I2(i + u_{ij}, j + v_{ij}); ΩBC).   (7)
As with the spatial term, various functional forms (Gaussian, robust, etc.) have been assumed for the potential φ or its negative log. We again employ a GSM representation for the potential, where the scales and global variance are determined empirically before training the model (mixture weights). Brox et al. [7] extend the brightness constancy assumption to include high-order constancy assumptions, such as gradient constancy, which may improve accuracy in the presence of changing scene illumination or shadows. We propose a further generalization of these constancy assumptions and model the constancy of responses to several general linear filters:

pFC(I2|u, v, I1; ΩFC) ∝ Π_{(i,j)} Π_k φk{(Jk1 ∗ I1)(i, j) − (Jk2 ∗ I2)(i + u_{ij}, j + v_{ij}); ΩFC},   (8)
where the Jk1 and Jk2 are linear filters. Practically, this equation implies that the second image is first filtered with Jk2, after which the filter responses are warped toward the first filtered image using the flow (u, v)¹. Note that this data term is a generalization of the Fields-of-Experts model (FoE), which has been used to model prior distributions of images [22] and optical flow [2]. Here, we generalize it to a spatio-temporal model that describes brightness (in)constancy. If we choose J11 to be the identity filter and define J12 = J11, this implements brightness constancy. Choosing the Jk1 to be derivative filters and setting Jk2 = Jk1 allows us to model gradient constancy. Thus this model generalizes the approach by Brox et al. [7]². If we choose Jk1 to be a Gaussian smoothing filter and define Jk2 = Jk1, we essentially perform pre-filtering as, for example, suggested by Bruhn et al. [8]. Even if we assume fixed filters using a combination of the above, our probabilistic formulation still allows learning the parameters of the GSM experts from data as outlined below. Consequently, we do not need to tune the trade-off weights between the brightness and gradient constancy terms by hand as in [7]. Beyond this, the appeal of using a model related to the FoE is that we do not have to fix the filters ahead of time, but instead we can learn these filters alongside the potential functions.
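The following sketch shows one way the negative log of Eq. (8) could be evaluated (an illustration under our own naming, reusing gsm_neg_log from the earlier sketch; it is not the authors' implementation):

```python
# Filter-response constancy: filter I2 with J_k2, warp the responses by the
# current flow, and penalize the residual against the filtered I1 with a GSM expert.
import numpy as np
from scipy.ndimage import convolve, map_coordinates

def filter_constancy_energy(I1, I2, u, v, filter_pairs, gsm_params):
    h, w = I1.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    energy = 0.0
    for (J1, J2), (wts, scales, sigma2) in zip(filter_pairs, gsm_params):
        r1 = convolve(I1, J1)                    # (J_k1 * I1)
        r2 = convolve(I2, J2)                    # filter the second image before warping
        r2w = map_coordinates(r2, [y + v, x + u], order=1, mode='nearest')
        energy += gsm_neg_log(r1 - r2w, wts, scales, sigma2).sum()
    return energy
```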
4.3 Learning
Our formulation enables us to train the data term and the spatial term separately, which simplifies learning. Note though, that it is also possible to turn the model into a conditional random field (CRF) and employ conditional likelihood maximization (cf. [23]); we leave this for future work. To train the pairwise spatial term pPW(u; ΩPWu), we can estimate the weights of the GSM model by either simply fitting the potentials to the empirical marginals using expectation maximization, or by using a more rigorous learning procedure, such as maximum likelihood (ML). To find the ML parameter estimate we aim to maximize the log-likelihood LPW(U; ΩPWu) of the horizontal flow components U = {u^(1), . . . , u^(t)} of the training sequences w.r.t. the model parameters ΩPWu (i.e., GSM mixture weights). Analogously, we maximize the log-likelihood of the vertical components V = {v^(1), . . . , v^(t)} w.r.t. ΩPWv. Because ML estimation in loopy graphs is generally intractable, we approximate the learning objective and use the contrastive divergence (CD) algorithm [24] to learn the parameters. To train the steerable flow model pSRF(u|I1; ΩSRF) we aim to maximize the conditional log-likelihoods LSRF(U|I1; ΩSRFu) and LSRF(V|I1; ΩSRFv) of the training flow fields given the first (reference) images I1 = {I1^(1), . . . , I1^(t)} from the training image pairs w.r.t. the model parameters ΩSRFu and ΩSRFv. To train the simple data term pD(I2|u, v, I1; ΩD) modeling brightness constancy, we can simply fit the marginals of the brightness violations using expectation maximization. This is possible, because the model assumes independence of the brightness error at the pixel sites. For the proposed generalized data term pFC(I2|u, v, I1; ΩFC) that models filter response constancy, a more complex training procedure is necessary, since the filter responses are not independent. Ideally, we would maximize the conditional likelihood LFC(I2|U, V, I1; ΩFC) of the training set of the second images I2 = {I2^(1), . . . , I2^(t)} given the training flow fields and the first images. Due to the intractability of ML estimation in these models, we use a conditional version of contrastive divergence (see e.g., [5,23]) to learn both the mixture weights of the GSM potentials as well as the filters.

¹ It is, in principle, also possible to formulate a similar model that warps the image first and then applies filters to the warped image. We did not pursue this option, as it would require the application of the filters at each iteration of the flow estimation procedure. Filtering before warping ensures that we only have to filter the image once before flow estimation.
² Formally, there is a minor difference: [7] penalizes changes in the gradient magnitude, while the proposed model penalizes changes of the flow derivatives. These are, however, equivalent in the case of Gaussian potentials.
5 Optical Flow Estimation
Given two input images, we estimate the optical flow between them by maximizing the posterior from (2). Equivalently, we minimize its negative log

E(u, v) = ED(u, v) + λES(u, v),   (9)

where ED is the negative log (i.e., energy) of the data term, ES is the negative log of the spatial term (the normalization constant is omitted in either case), and λ is an optional trade-off weight (or regularization parameter). Optimizing such energies is generally difficult, because of their non-convexity and many local optima. The non-convexity in our approach stems from the fact that the learned potentials are non-convex and from the warping-based data term used here and in other competitive methods [7]. To limit the influence of spurious local optima, we construct a series of energy functions

EC(u, v, α) = αEQ(u, v) + (1 − α)E(u, v),   (10)

where EQ is a quadratic, convex formulation of E that replaces the potential functions of E by a quadratic form and uses a different λ. Note that EQ amounts to a Gaussian MRF formulation. α ∈ [0, 1] is a control parameter that varies the convexity of the compound objective. As α changes from 1 to 0, the combined energy function in (10) changes from the quadratic formulation to the proposed non-convex one (cf. [25]). During the process, the solution at a previous convexification stage serves as the starting point for the current stage. In practice, we find that using three stages produces reasonable results. At each stage, we perform a simple local minimization of the energy. At a local minimum, it holds that

∇u EC(u, v, α) = 0   and   ∇v EC(u, v, α) = 0.   (11)

Since the energy induced by the proposed MRF formulation is spatially discrete, it is relatively straightforward to derive the gradient expressions. Setting these
to zero and linearizing them, we rearrange the results into a system of linear equations, which can be solved by a standard technique. The main difficulty in deriving the linearized gradient expressions is the linearization of the warping step. For this we follow the approach of Brox et al. [7] while using the derivative filters proposed in [8]. To estimate flow fields with large displacements, we adopt an incremental multi-resolution technique (e. g., [1,8]). As is quite standard, the optical flow estimated at a coarser level is used to warp the second image toward the first at the next finer level and the flow increment is calculated between the first image and the warped second image. The final result combines all the flow increments. At the first stage where α = 1, we use a 4-level pyramid with a downsampling factor of 0.5. At other stages, we only use a 2-level pyramid with a downsampling factor of 0.8 to make full use of the solution at the previous convexification stage.
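A schematic sketch of this optimization schedule is given below. It only illustrates the control flow (GNC stages and the incremental coarse-to-fine loop); solve_increment stands in for the linearized solver obtained from (11) and is assumed to be supplied, and the resampling details are our own choices rather than the authors'.

```python
# Graduated non-convexity over alpha plus incremental multi-resolution estimation.
import numpy as np
from scipy.ndimage import zoom

def resize_to(a, shape):
    return zoom(a, (shape[0] / a.shape[0], shape[1] / a.shape[1]), order=1)

def estimate_flow(I1, I2, solve_increment, alphas=(1.0, 0.5, 0.0)):
    H, W = I1.shape
    u, v = np.zeros((H, W)), np.zeros((H, W))
    for stage, alpha in enumerate(alphas):
        levels, factor = (4, 0.5) if stage == 0 else (2, 0.8)   # pyramids as in the text
        for lev in reversed(range(levels)):                     # coarse -> fine
            s = factor ** lev
            shape = (max(2, int(round(H * s))), max(2, int(round(W * s))))
            I1s, I2s = resize_to(I1, shape), resize_to(I2, shape)
            us = resize_to(u, shape) * (shape[1] / W)           # rescale flow magnitude
            vs = resize_to(v, shape) * (shape[0] / H)
            for _ in range(3):                                  # incremental warping steps
                du, dv = solve_increment(I1s, I2s, us, vs, alpha)
                us, vs = us + du, vs + dv
            u = resize_to(us, (H, W)) * (W / shape[1])
            v = resize_to(vs, (H, W)) * (H / shape[0])
    return u, v
```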
6 Experiments and Results

6.1 Learned Models
The spatial terms of both the pairwise model (PW) and the steerable model (SRF) were trained using contrastive divergence on 20,000 9 × 9 flow patches that were randomly cropped from the training flow fields (see above). To train the steerable model, we also supplied the corresponding 20,000 image patches (of size 15 × 15 to allow computing the structure tensor) from the reference images. The pairwise model used 5 GSM scales, and the steerable model 4 scales. The simple brightness constancy data term (BC) was trained using expectation-maximization. To train the data term that models the generalized filter response constancy (FC), the CD algorithm was run on 20,000 15 × 15 flow patches and corresponding 25 × 25 image patches, which were randomly cropped from the training data. 6-scale GSM models were used for both data terms. We investigated two different filter constancy models. The first (FFC) used 3 fixed 3 × 3 filters: a small-variance Gaussian (σ = 0.4), and horizontal and vertical derivative filters similar to [7]. The other (LFC) used six 3 × 3 filter pairs that were learned automatically. Note that the GSM potentials were learned in either case. Figure 3 shows the fixed filters from the FFC model, as well as two of the learned filters from the LFC model. Interestingly, the learned filters do not look like ordinary derivative filters, nor do they resemble the filters learned in an FoE model of natural images [22]. It is also noteworthy that even though the Jk2 are not enforced to be equal to the Jk1 during learning, they typically exhibit only subtle differences, as Fig. 3 shows. Given the non-convex nature of the learning objective, contrastive divergence is prone to finding local optima, which means that the learned filters are likely not optimal. Repeated initializations produced different-looking filters, which however performed similarly to the ones shown here. The fact that these "non-standard" filters perform better (see below) than standard ones suggests that more research on better filters for formulating optical flow data terms is warranted.
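For reference, the three fixed FFC filters could be set up as follows; only the Gaussian's σ = 0.4 is taken from the text, while the exact derivative kernels are an assumption (simple central differences).

```python
import numpy as np

def gaussian_3x3(sigma=0.4):
    r = np.arange(-1, 2, dtype=float)
    g = np.exp(-r**2 / (2.0 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()

J_gauss = gaussian_3x3(0.4)                                        # small-variance Gaussian
J_dx = np.array([[0., 0., 0.], [-0.5, 0., 0.5], [0., 0., 0.]])     # horizontal derivative (assumed)
J_dy = J_dx.T                                                      # vertical derivative (assumed)
fixed_filters = [(J_gauss, J_gauss), (J_dx, J_dx), (J_dy, J_dy)]   # J_k1 = J_k2 for the FFC model
```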
Fig. 3. Three fixed filters from the FFC model: (a) Gaussian, (b) horizontal derivative, and (c) vertical derivative. (d,e) Two of the six learned filter pairs of the LFC model and the difference between each pair (left: Jk1 , middle: Jk2 , right: Jk1 − Jk2 ).
Fig. 4. Results of the SRF-LFC model for the "Army" sequence: (a) estimated flow; (b) ground truth; (c) key
For the models for which we employed contrastive divergence, we used a hybrid Monte Carlo sampler with 30 leaps, l = 1 CD step, and a learning rate of 0.01 as proposed by [5]. The CD algorithm was run for 2000 to 10000 iterations, depending on the complexity of the model, after which the model parameters did not change significantly. Figure 1 shows the learned potential functions alongside the empirical marginals. We should note that learned potentials and marginals generally differ. This has, for example, been noted by Zhu et al. [26], and is particularly the case for the SRFs, since the derivative responses are not independent within a flow field (cf. [5]). To estimate the flow, we proceeded as described in Section 5 and performed 3 iterations of the incremental estimation at each level of the pyramid. The regularization parameter λ was optimized for each method using a small set of training sequences. For this stage we added a small amount of noise to the synthetic training sequences, which led to larger λ values and increased robustness to novel test data.

6.2 Flow Estimation Results
We evaluated all 6 proposed models using the test portion of the Middlebury optical flow benchmark [3]³. Figure 4 shows the results on one of the sequences along with the ground truth flow. Table 1 gives the average angular error (AAE)
³ Note that the Yosemite frames used for testing as part of the benchmark are not the same as those used for learning.
Fig. 5. Details of the flow results for the "Army" sequence: (a) HS [9]; (b) BA [1]; (c) PW-BC; (d) SRF-BC; (e) PW-FFC; (f) SRF-FFC; (g) PW-LFC; (h) SRF-LFC. HS = Horn & Schunck; BA = Black & Anandan; PW = pairwise; SRF = steered model; BC = brightness constancy; FFC = fixed filter response constancy; LFC = learned filter response constancy.

Table 1. Average angular error (AAE) on the Middlebury optical flow benchmark for various combinations of the proposed models

          Rank  Average  Army  Mequon  Schefflera  Wooden  Grove  Urban  Yosemite  Teddy
HS [9]    16.4  8.72     8.01  9.13    14.20       12.40   4.64   8.21   4.01      9.16
BA [1]     9.8  7.36     7.17  8.30    13.10       10.60   4.06   6.37   2.79      6.47
PW-BC     13.6  8.36     8.01  10.70   14.50        8.93   4.35   7.00   3.91      9.51
SRF-BC    10.0  7.49     6.39  10.40   14.00        8.06   4.10   6.19   3.61      7.19
PW-FFC    12.6  6.91     4.60   4.63    9.96        9.93   5.15   7.84   3.51      9.66
SRF-FFC    9.3  6.26     4.36   5.46    9.63        9.13   4.17   7.11   2.75      7.43
PW-LFC    10.9  6.06     4.61   3.92    7.56        7.77   4.76   7.50   3.90      8.43
SRF-LFC    8.6  5.81     4.26   4.81    7.87        8.02   4.24   6.57   2.71      8.02
of the models on the test sequences, as well as the results of two standard methods [1,9]. Note that the standard objectives from [1,9] were optimized using exactly the same optimization strategy as used for the learned models. This ensures fair comparison and focuses the evaluation on the model rather than the optimization method. The table also shows the average rank from the Middlebury flow benchmark, as well as the average AAE across all 8 test sequences. Table 2 shows results of the same experiments, but here the AAE is only measured near motion boundaries. From these results we can see that the steerable flow model (SRF) substantially outperforms a standard pairwise spatial term (PW), particularly also near motion discontinuities. This holds no matter what data term the respective spatial term is combined with. This can also be seen visually in Fig. 5, where the SRF results exhibit the clearest motion boundaries. Among the different data terms, the filter response constancy models (FFC & LFC) very clearly outperform the classical brightness constancy model (BC), particularly on the sequences with real images (“Army” through “Schefflera”), which are especially difficult for standard techniques, because the classical
Table 2. Average angular error (AAE) in motion boundary regions

          Average  Army   Mequon  Schefflera  Wooden  Grove  Urban  Yosemite  Teddy
PW-BC     16.68    14.70  20.70   24.30       26.90   5.40   20.70  5.26      15.50
SRF-BC    15.71    13.40  20.30   23.30       26.10   5.07   19.00  4.64      13.90
PW-FFC    16.36    12.90  17.30   20.60       27.80   6.43   24.00  5.05      16.80
SRF-FFC   15.45    12.10  17.40   20.20       27.00   5.16   22.30  4.24      15.20
PW-LFC    15.67    12.80  16.00   18.30       27.30   6.09   22.80  5.40      16.70
SRF-LFC   15.09    11.90  16.10   18.50       27.00   5.33   21.50  4.30      16.10
brightness constancy assumption does not appear to be as appropriate as for the synthetic sequences, for example because of stronger shadows. Moreover, the model with learned filters (LFC) slightly outperforms the model with fixed, standard filters (FFC), particularly in regions with strong brightness changes. This means that learning the filters seems to be fruitful, particularly for challenging, realistic sequences. Further results, including comparisons to other recent techniques, are available at http://vision.middlebury.edu/flow/.
7 Conclusions
Enabled by a database of image sequences with ground truth optical flow fields, we studied the statistics of both optical flow and brightness constancy, and formulated a fully learned probabilistic model for optical flow estimation. We extended our initial formulation by modeling the steered derivatives of optical flow, and generalized the data term to model the constancy of linear filter responses. This provided a statistical grounding for, and extension of, various previous models of optical flow, and at the same time enabled us to learn all model parameters automatically from training data. Quantitative experiments showed that both the steered model of flow as well as the generalized data term substantially improved performance. Currently a small number of training sequences are available with ground truth flow. A general purpose, learned, flow model will require a fully general training set; special purpose models, of course, are also possible. While a small training set may limit the generalization performance of a learned flow model, we believe that training the parameters of the model is preferable to hand tuning (particularly to individual sequences), which has been the dominant approach. While we have focused on the objective function, the optimization method may also play an important role [27], and some models may admit better optimization strategies than others. In addition to improved optimization, future work may consider modulating the steered flow model by the strength of the image gradient similar to [4], learning a model that adds spatial integration to the proposed filter-response constancy constraints and thus extends [8], extending the learned filter model beyond two frames, automatically adapting the model to the properties of each sequence, and learning an explicit model of occlusions and disocclusions.
Acknowledgments. This work was supported in part by NSF (IIS-0535075, IIS-0534858) and by a gift from Intel Corp. We thank D. Scharstein, S. Baker, R. Szeliski, and L. Williams for hours of helpful discussion about the evaluation of optical flow.
References

1. Black, M.J., Anandan, P.: The robust estimation of multiple motions: Parametric and piecewise-smooth flow fields. CVIU 63, 75–104 (1996)
2. Roth, S., Black, M.J.: On the spatial statistics of optical flow. IJCV 74, 33–50 (2007)
3. Baker, S., Scharstein, D., Lewis, J., Roth, S., Black, M., Szeliski, R.: A database and evaluation methodology for optical flow. In: ICCV (2007)
4. Nagel, H.H., Enkelmann, W.: An investigation of smoothness constraints for the estimation of displacement vector fields from image sequences. IEEE TPAMI 8, 565–593 (1986)
5. Roth, S., Black, M.J.: Steerable random fields. In: ICCV (2007)
6. Wainwright, M.J., Simoncelli, E.P.: Scale mixtures of Gaussians and the statistics of natural images. In: NIPS, pp. 855–861 (1999)
7. Brox, T., Bruhn, A., Papenberg, N., Weickert, J.: High accuracy optical flow estimation based on a theory for warping. In: Pajdla, T., Matas, J. (eds.) ECCV 2004. LNCS, vol. 3024, pp. 25–36. Springer, Heidelberg (2004)
8. Bruhn, A., Weickert, J., Schnörr, C.: Lucas/Kanade meets Horn/Schunck: combining local and global optic flow methods. IJCV 61, 211–231 (2005)
9. Horn, B., Schunck, B.: Determining optical flow. Artificial Intelligence 16, 185–203 (1981)
10. Fermüller, C., Shulman, D., Aloimonos, Y.: The statistics of optical flow. CVIU 82, 1–32 (2001)
11. Gennert, M.A., Negahdaripour, S.: Relaxing the brightness constancy assumption in computing optical flow. Technical report, Cambridge, MA, USA (1987)
12. Haussecker, H., Fleet, D.: Computing optical flow with physical models of brightness variation. IEEE TPAMI 23, 661–673 (2001)
13. Toth, D., Aach, T., Metzler, V.: Illumination-invariant change detection. In: 4th IEEE Southwest Symposium on Image Analysis and Interpretation, pp. 3–7 (2000)
14. Adelson, E.H., Anderson, C.H., Bergen, J.R., Burt, P.J., Ogden, J.M.: Pyramid methods in image processing. RCA Engineer 29, 33–41 (1984)
15. Alvarez, L., Deriche, R., Papadopoulo, T., Sanchez, J.: Symmetrical dense optical flow estimation with occlusions detection. IJCV 75, 371–385 (2007)
16. Fleet, D.J., Black, M.J., Nestares, O.: Bayesian inference of visual motion boundaries. In: Exploring Artificial Intelligence in the New Millennium, pp. 139–174. Morgan Kaufmann Pub., San Francisco (2002)
17. Black, M.J.: Combining intensity and motion for incremental segmentation and tracking over long image sequences. In: Sandini, G. (ed.) ECCV 1992. LNCS, vol. 588, pp. 485–493. Springer, Heidelberg (1992)
18. Simoncelli, E.P., Adelson, E.H., Heeger, D.J.: Probability distributions of optical flow. In: CVPR, pp. 310–315 (1991)
19. Black, M.J., Yacoob, Y., Jepson, A.D., Fleet, D.J.: Learning parameterized models of image motion. In: CVPR, pp. 561–567 (1997)
20. Freeman, W.T., Pasztor, E.C., Carmichael, O.T.: Learning low-level vision. IJCV 40, 25–47 (2000)
21. Scharstein, D., Pal, C.: Learning conditional random fields for stereo. In: CVPR (2007)
22. Roth, S., Black, M.J.: Fields of experts: A framework for learning image priors. In: CVPR, vol. II, pp. 860–867 (2005)
23. Stewart, L., He, X., Zemel, R.: Learning flexible features for conditional random fields. IEEE TPAMI 30, 1145–1426 (2008)
24. Hinton, G.E.: Training products of experts by minimizing contrastive divergence. Neural Computation 14, 1771–1800 (2002)
25. Blake, A., Zisserman, A.: Visual Reconstruction. The MIT Press, Cambridge, Massachusetts (1987)
26. Zhu, S., Wu, Y., Mumford, D.: Filters, random fields and maximum entropy (FRAME): Towards a unified theory for texture modeling. IJCV 27, 107–126 (1998)
27. Lempitsky, V., Roth, S., Rother, C.: FusionFlow: Discrete-continuous optimization for optical flow estimation. In: CVPR (2008)
Optimizing Binary MRFs with Higher Order Cliques

Asem M. Ali¹, Aly A. Farag¹, and Georgy L. Gimel'farb²

¹ Computer Vision and Image Processing Laboratory, University of Louisville, USA
{asem,farag}@cvip.uofl.edu
² Department of Computer Science, The University of Auckland, New Zealand
[email protected]
Abstract. Widespread use of efficient and successful solutions of Computer Vision problems based on pairwise Markov Random Field (MRF) models raises a question: does any link exist between pairwise and higher order MRFs such that similar solutions can be applied to the latter models? This work explores such a link for binary MRFs that allow us to represent the Gibbs energy of signal interaction with a polynomial function. We show how a higher order polynomial can be efficiently transformed into a quadratic function. Then energy minimization tools for the pairwise MRF models can be easily applied to the higher order counterparts. Also, we propose a method to analytically estimate the potential parameter of the asymmetric Potts prior. The proposed framework demonstrates very promising experimental results on image segmentation and can be used to solve other Computer Vision problems.
1 Introduction
Recently, discrete optimizers based on e.g. graph cuts [1,2], loopy belief propagation [3,4], and tree reweighted message passing [5,6] became essential tools in Computer Vision. These tools solve many important Computer Vision problems including image segmentation, restoration, and matching, computational stereo, etc. (see [7] for more detail). The conventional framework for such problems is the search for Maximum A Posteriori (MAP) configurations in a Markov Random Field (MRF) model, where the MAP problem is formulated as minimizing an interaction energy for the model. In this work, we focus only on binary MRFs that play an important role in Computer Vision since Boykov et al. [1] proposed an approximate graph-cut algorithm for energy minimization with iterative expansion moves. The algorithm reduces the problem with multivalued variables to a sequence of subproblems with binary variables. Most of the energy-based Computer Vision frameworks represent the MRF energy on an image lattice in terms of unary and pairwise clique potentials. However, this representation is insufficient for modeling rich statistics of natural scenes [8,9]. The latter require higher order clique potentials that are capable of describing complex interactions between variables. Adding potentials for the
higher order cliques could improve the image model [10,11]. However, optimization algorithms for these models have had too high a time complexity to be practicable. For example, a conventional approximate energy minimization framework with belief propagation (BP) is too computationally expensive for MRFs with higher order cliques, and Lan et al. [8] proposed approximations to make BP practical in these cases. However, the results are competitive only with simple local optimization based on gradient descent. Recently, Kohli et al. [12] proposed a generalized P^n family of clique potentials for the Potts MRF model and showed that optimal graph-cut moves for the family have polynomial time complexity. However, just as in the standard graph-cut approaches based on the α-expansion or αβ-swap, the energy terms for this family have to be submodular. Instead of developing efficient energy minimization techniques for higher order MRFs, this paper chooses an alternative strategy of reusing well established approaches that have been successful for the pairwise models and proposes an efficient transformation of an energy function for a higher order MRF into a quadratic function. It is worth mentioning that Kolmogorov and Rother [13] referred to a similar transformation [14] in their future work. In order to reduce an energy function for a higher order MRF to a quadratic function, we first convert the potential energy for higher order cliques into a polynomial form and show explicitly when this form can be graph representable and how this graph can be constructed for such an energy. Then we reduce the higher-order polynomial to a specific quadratic one. The latter may have submodular and/or nonsubmodular terms, and few approaches have been proposed to minimize such functions. For instance, Rother et al. [15] truncate nonsubmodular terms in order to obtain an approximate submodular function to be minimized. This truncation leads to a reasonable solution when the number of the nonsubmodular terms is small. Recently, Rother et al. [16] proposed an efficient optimization algorithm for nonsubmodular binary MRFs, called the extended roof duality. However, it is limited to only quadratic energy functions. Our proposal notably expands the class of nonsubmodular MRFs that can be minimized using this algorithm. Below, we use it to minimize the proposed quadratic version of the higher order energy. To illustrate the potential of higher order MRFs in modeling complex scenes, the performance of our approach has been assessed experimentally in application to image segmentation. The potential parameter of the asymmetric Potts prior that has been used in the experiments is analytically estimated. The obtained results confirm that the proposed optimized MRF framework can be efficiently used in practice.
2 Preliminaries
The goal image labeling x in the MAP approach is a realization of a Markov-Gibbs random field (MGRF) X defined over an arithmetic 2D lattice V = {1, 2, · · · , n} with a neighborhood system N. The system explicitly specifies neighboring random variables that have spatial interaction. Let X denote a set of all possible configurations of an MGRF. Then the probability of a particular
configuration x ∈ X is given by a Gibbs probability distribution: P(x) = (1/Z) e^{−E(x)}, where Z denotes a normalizing constant (the partition function) and E(x) is the Gibbs energy function. The latter sums Gibbs potentials supported by cliques of the interaction graph. As defined in [17], a clique is a set of sites i ∈ V (e.g. pixels in an image) such that all pairs of sites are mutual neighbors in accord with N. The maximal clique size determines the Gibbs energy order. Energy functions for an MGRF with only unary and pairwise cliques can be written in the following form:

E(x) = Σ_{i∈V} ϕ(xi) + Σ_{{i,j}∈N} ϕ(xi, xj),   (1)

where ϕ(.) denotes the clique potential. The energy minimum E(x*) = min_x E(x) corresponds to the MAP labeling x*. For a binary MGRF, the set of labels consists of two values, B = {0, 1}, each variable xi is a binary variable, and the energy function in (1) can be written in a quadratic form:

E(x) = a_{} + Σ_{i∈V} a_{i} xi + Σ_{{i,j}∈N} a_{i,j} xi xj,   (2)

where the coefficients a_{}, a_{i} and a_{i,j} are real numbers depending on ϕ(0), ϕ(1), . . . , ϕ(1, 1) in a straightforward way. Generally, let B^n = {(x1, x2, · · · , xn) | xi ∈ B; ∀ i = 1, · · · , n}, and let E(x) = E(x1, x2, · · · , xn) be a real valued polynomial function of n bivalent variables and real coefficients defining a Gibbs energy with higher order potentials (in contrast to the above quadratic function E). Such a function E(x) is called a pseudo-Boolean function [18] and can be uniquely represented as a multi-linear polynomial [14] as follows:

E(x) = Σ_{S⊆V} aS Π_{i∈S} xi,   (3)
where aS are non-zero real numbers, and the product over the empty set is 1 by definition.
3 Polynomial Forms of Clique Potentials
To be transformed into a quadratic energy, the higher order energy function should be represented in the multi-linear polynomial form (3). Hereafter, we consider how the clique potentials can be represented in a polynomial form. A unary term has the obvious polynomial form ϕ_{xi} = ϕ(xi) = (ϕ1 − ϕ0) xi + ϕ0, where ϕ1 and ϕ0 are the potential values for the labels 1 and 0 of the variable xi ∈ B, respectively. A clique of size k has a potential energy ϕ(xi, xj, · · · , xk), where k ≤ n. The coefficients of the polynomial that represents the energy of a clique of size k can be estimated using Algorithm 1 below. It is worth mentioning that several works have addressed computing these coefficients, such as the formula proposed by Freedman and Drineas [19], an observation in [20], and Proposition 2 in [14]. However, these works are somewhat complicated and not explicit, and we believe Algorithm 1 is much easier to implement.
Algorithm 1. Computing coefficients aS of energy (3). H denotes the set of pixels in a clique whose potential is represented by (3), S ⊆ H, Z = H − S, and W is a labeling of the pixels in Z. Initially, W = ∅.

1.  aS
2.  if (S = ∅)
3.    Return ϕ(xH = 0)
4.  end if
5.  Z = H − S
6.  if (W = ∅) W = {wi = 0 | i = 1, 2, . . . , |Z|} end if
7.  if (|S| = 1) then
8.    Return ϕ(xS = 1, xZ = W) − ϕ(xS = 0, xZ = W)
9.  else
10.   Select the first element i ∈ S. Then: S ← S − {i}, W1 ← {1} + W and W0 ← {0} + W. Let W = W1 and compute t1 = aS. Then, let W = W0 and compute t0 = aS. Return t1 − t0
11. end if
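For illustration, Algorithm 1 transcribes directly into the following recursive routine (our own sketch, not the authors' code; `phi` is assumed to map a full binary labeling of the clique, in a fixed pixel order, to the potential value).

```python
def make_coefficient(phi, H):
    # Returns a function coefficient(S[, W]) computing a_S for the clique H,
    # following Algorithm 1 (W labels Z = H - S and defaults to all zeros).
    H = list(H)
    def value(assign):                       # evaluate phi on a dict pixel -> label
        return phi(tuple(assign[p] for p in H))
    def coefficient(S, W=None):
        S = list(S)
        if not S:                            # a_{} = phi(x_H = 0)
            return value({p: 0 for p in H})
        Z = [p for p in H if p not in S]
        if W is None:
            W = {p: 0 for p in Z}
        if len(S) == 1:                      # phi(x_S = 1, x_Z = W) - phi(x_S = 0, x_Z = W)
            on, off = dict(W), dict(W)
            on[S[0]], off[S[0]] = 1, 0
            return value(on) - value(off)
        i, rest = S[0], S[1:]                # recurse with x_i = 1 and x_i = 0
        W1, W0 = dict(W), dict(W)
        W1[i], W0[i] = 1, 0
        return coefficient(rest, W1) - coefficient(rest, W0)
    return coefficient

# Example: pairwise potential phi(x_p, x_q) given as a table;
# coefficient(['p', 'q']) returns phi11 + phi00 - phi01 - phi10.
# table = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 0.0}
# coefficient = make_coefficient(lambda x: table[x], ('p', 'q'))
# coefficient(['p', 'q'])   # -> -2.0
```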
To verify the correctness of Algorithm 1, let us use it to estimate the polynomial coefficients of the potential ϕ(xi, xj, xℓ, xk); xi, xj, xℓ, xk ∈ B, for a clique of size 4. This polynomial can be represented as follows¹:

ϕ_{xi xj xℓ xk} = a_{ijℓk} xi xj xℓ xk + a_{ijℓ} xi xj xℓ + a_{ijk} xi xj xk + a_{iℓk} xi xℓ xk + a_{jℓk} xj xℓ xk + a_{ij} xi xj + a_{iℓ} xi xℓ + a_{ik} xi xk + a_{jℓ} xj xℓ + a_{jk} xj xk + a_{ℓk} xℓ xk + a_i xi + a_j xj + a_ℓ xℓ + a_k xk + a_{},   (4)

where the coefficients can be computed using Algorithm 1. Examples of these coefficients' computations are (here H = {i, j, ℓ, k}, and ϕ_{b_i b_j b_ℓ b_k} abbreviates ϕ(xi = b_i, xj = b_j, xℓ = b_ℓ, xk = b_k)):

a_{} = ϕ0000,   a_i = ϕ1000 − ϕ0000,
a_{ij} = (ϕ1100 − ϕ1000) − (ϕ0100 − ϕ0000),
a_{ijℓ} = (ϕ1110 − ϕ1100) − (ϕ1010 − ϕ1000) − (ϕ0110 − ϕ0100) + (ϕ0010 − ϕ0000),
a_{ijℓk} = (ϕ1111 − ϕ1110) − (ϕ1101 − ϕ1100) − (ϕ1011 − ϕ1010) + (ϕ1001 − ϕ1000) − (ϕ0111 − ϕ0110) + (ϕ0101 − ϕ0100) + (ϕ0011 − ϕ0010) − (ϕ0001 − ϕ0000),   (5)

where each expression follows from the recursion in Algorithm 1 (e.g., a_{ij} = a_j(W = {100}) − a_j(W = {000})).
(6)
For brevity, hereafter, the notation is simplified: e.g. a{i,j,,k} becomes aijk and ϕ(xi , xj , x , xk ) becomes ϕxi xj x xk .
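As a quick numerical sanity check of (6) (our own illustration, not part of the paper), the closed-form coefficients can be compared against the generic inclusion–exclusion formula a_S = Σ_{T⊆S} (−1)^{|S|−|T|} ϕ(1_T, 0 elsewhere) on a random potential:

```python
import itertools
import random

def multilinear_coeff(phi, n, S):
    """a_S by inclusion-exclusion, for phi given on all 0/1 labelings of n sites."""
    total = 0.0
    for r in range(len(S) + 1):
        for T in itertools.combinations(S, r):
            x = tuple(1 if i in T else 0 for i in range(n))
            total += (-1) ** (len(S) - len(T)) * phi[x]
    return total

# Random third-order potential phi(x_i, x_j, x_l), indexed as phi[(xi, xj, xl)].
phi = {x: random.random() for x in itertools.product((0, 1), repeat=3)}
p = lambda i, j, l: phi[(i, j, l)]

# Closed forms of (6) vs. the generic formula.
cubic_6 = (p(1,1,1) + p(1,0,0) - p(1,1,0) - p(1,0,1)) - (p(0,1,1) + p(0,0,0) - p(0,0,1) - p(0,1,0))
a_jl_6  =  p(0,1,1) + p(0,0,0) - p(0,0,1) - p(0,1,0)

assert abs(cubic_6 - multilinear_coeff(phi, 3, (0, 1, 2))) < 1e-12
assert abs(a_jl_6  - multilinear_coeff(phi, 3, (1, 2)))    < 1e-12
```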
Indeed, representing potentials in polynomial forms (e.g. (6)) has many advantages. It yields an algebraic proof of the Kolmogorov–Zabih submodularity condition [2]. As we will explain in Section 5, the first (cubic) term in (6) will be reduced to a quadratic term with the same coefficient. Thus, according to a combinatorial optimization theorem [19], such an energy can be minimized via graph cuts if and only if

(ϕ_111 + ϕ_100 − ϕ_110 − ϕ_101) − (ϕ_011 + ϕ_000 − ϕ_001 − ϕ_010) ≤ 0 ,
ϕ_011 + ϕ_000 − ϕ_001 − ϕ_010 ≤ 0 ,
ϕ_101 + ϕ_000 − ϕ_001 − ϕ_100 ≤ 0 , and
ϕ_110 + ϕ_000 − ϕ_100 − ϕ_010 ≤ 0 .    (7)
These inequalities represent all the projections of ϕ_{x_i x_j x_ℓ} onto 2 variables and follow from Definition 1.

Definition 1. [Kolmogorov–Zabih; 2004] A function of one binary variable is always submodular. A function ϕ(x_i, x_j) from the family F² is submodular if and only if ϕ_11 + ϕ_00 ≤ ϕ_01 + ϕ_10. A function from the family F^k is submodular if and only if all its projections on 2 variables are submodular.

Moreover, the polynomial representation can be explicitly related to the corresponding graph. For example, the potential ϕ(x_i, x_j); x_i, x_j ∈ B, for a clique of size two can generally be represented as follows: ϕ(x_i, x_j) = (ϕ_11 + ϕ_00 − ϕ_01 − ϕ_10) x_i x_j + (ϕ_01 − ϕ_00) x_j + (ϕ_10 − ϕ_00) x_i + ϕ_00. This expression explicitly shows the edges in the graph related to the polynomial coefficients. This construction is similar to the one introduced in [2], but here each term in the previous expression, excluding the constant, directly represents a part of the graph.
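As an illustration of conditions (7), the small helper below (our own sketch, not from the paper) tests whether a third-order potential, stored as a table ϕ[x_i][x_j][x_ℓ], satisfies the four inequalities; the example uses the symmetric triple potential γ(1 − δ(x_i = x_j = x_ℓ)) that also appears later in Section 6:

```python
def satisfies_conditions_7(phi):
    """Check the inequalities (7) for a third-order potential.

    phi : nested 3-level table such that phi[xi][xj][xl] is the potential
          value for the labels (xi, xj, xl) in {0, 1}^3.
    Returns True iff all four coefficients are <= 0.
    """
    p = lambda i, j, l: phi[i][j][l]
    pair_jl = p(0,1,1) + p(0,0,0) - p(0,0,1) - p(0,1,0)    # coefficient of x_j x_l
    pair_il = p(1,0,1) + p(0,0,0) - p(0,0,1) - p(1,0,0)    # coefficient of x_i x_l
    pair_ij = p(1,1,0) + p(0,0,0) - p(1,0,0) - p(0,1,0)    # coefficient of x_i x_j
    cubic   = (p(1,1,1) + p(1,0,0) - p(1,1,0) - p(1,0,1)) \
            - (p(0,1,1) + p(0,0,0) - p(0,0,1) - p(0,1,0))   # coefficient of x_i x_j x_l
    return all(c <= 0 for c in (cubic, pair_jl, pair_il, pair_ij))

# Example: gamma * (1 - delta(x_i = x_j = x_l)) is graph-cut representable.
gamma = 1.0
phi = [[[gamma * (0 if i == j == l else 1) for l in (0, 1)] for j in (0, 1)] for i in (0, 1)]
print(satisfies_conditions_7(phi))   # -> True
```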
4 MGRF Parameter Estimation
The scaling parameter of a pairwise homogeneous isotropic MGRF specifying the symmetric Potts prior can be analytically estimated similarly to [21]. However, we focus on asymmetric pairwise co-occurrences of the region labels. The asymmetric Potts model provides more chances to guarantee that the energy function is submodular. The Gibbs potential governing the asymmetric pairwise co-occurrences of the region labels is as follows:

ϕ(x_i, x_j) = γ δ(x_i ≠ x_j) ,    (8)
where γ is the model's parameter and the indicator function δ(C) equals 1 when the condition C is true and zero otherwise. Thus, the MGRF model of region maps is specified by the following Gibbs distribution:

P(x) = (1/Z) exp( − Σ_{{i,j}∈N} ϕ(x_i, x_j) ) = (1/Z) exp( − γ |T| f_neq(x) ) .    (9)

Here, T = {{i, j} : i, j ∈ V; {i, j} ∈ N} is the family of neighboring pixel pairs (second-order cliques) supporting the pairwise Gibbs potentials, |T| is the cardinality of this family, the partition function Z = Σ_{x̂∈X} exp( − γ |T| f_neq(x̂) ), and f_neq(x) denotes the relative frequency of non-equal label pairs over T:
f_neq(x) = (1/|T|) Σ_{{i,j}∈T} δ(x_i ≠ x_j) .    (10)
To completely identify the Potts model, its potential can be estimated for a given training label image x using a reasonably close first approximation of the maximum likelihood estimate (MLE) of γ. It is derived, in accord with [21], from the specific log-likelihood L(x|γ) = (1/|V|) log P(x), which can be rewritten as:

L(x|γ) = −γ ρ f_neq(x) − (1/|V|) log Σ_{x̂∈X} exp( − γ |T| f_neq(x̂) ) ,    (11)
where ρ = |T| / |V|. The approximation is obtained by truncating the Taylor series expansion of L(x|γ) to the first three terms in the close vicinity of the zero potential, γ = 0. Then it is easy to show that the resulting approximate log-likelihood is:
L(x|γ) ≈ −log K + ρ γ ( (K − 1)/K − f_neq(x) ) − (γ²/2) ρ (K − 1)/K² ,    (12)
where K is the number of labels (classes) in a multi-label MRF (in binary MRFs K = 2). For the approximate likelihood (12), setting dL(x|γ)/dγ = 0 yields the approximate MLE of γ:

γ* = K ( 1 − (K/(K − 1)) f_neq(x) ) .    (13)

We tested the robustness of the obtained MLE using label images simulated with known potential values. The simulated images were generated using the Gibbs sampler [22] and four variants of the asymmetric Potts model with 32 color labels (i.e. K = 32). Examples of the generated images of size 128 × 128 are shown in Fig. 1. To get accurate statistics, 100 realizations were generated from each variant, and the proposed MLE (13) for the model parameter γ was computed for these data sets. The mean values and variances of γ* over the 100 realizations for each type are shown in Table 1.
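A direct implementation of the estimate (13) is straightforward; the sketch below (our own, with a 4-neighborhood chosen purely for illustration) computes f_neq(x) over the pairwise cliques of a label image and returns γ*:

```python
import numpy as np

def estimate_gamma(labels, K):
    """Approximate MLE (13) of the Potts parameter from a label image.

    labels : 2D integer array of region labels.
    K      : number of labels (K = 2 for binary MRFs).
    """
    # Pairwise cliques of a 4-neighborhood: horizontal and vertical pairs.
    neq_h = labels[:, 1:] != labels[:, :-1]
    neq_v = labels[1:, :] != labels[:-1, :]
    f_neq = (neq_h.sum() + neq_v.sum()) / float(neq_h.size + neq_v.size)
    return K * (1.0 - K / (K - 1.0) * f_neq)

# Sanity check: for an i.i.d. random 32-label map, gamma* should be close to 0.
rng = np.random.default_rng(0)
x = rng.integers(0, 32, size=(128, 128))
print(estimate_gamma(x, K=32))
```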
Fig. 1. Samples of synthetic images of size 128 × 128: (a) γ = 0.5; (b) γ = 5; (c) γ = 10; (d) γ = 25
Table 1. Accuracy of the proposed MLE in (13): its mean (variance) for the 100 synthetic images of size 128 × 128

Actual parameter γ     0.5           5            10            25
Our MLE γ*             0.51 (0.03)   5.4 (0.05)   10.1 (0.06)   25.7 (0.11)
5 Energy Reduction – The Proposed Approach
We have shown so far how the MGRF parameters can be estimated and how the Gibbs energy can be represented in the polynomial form (3) for any clique size. Now let us discuss the minimization of such energies. Quite successful optimization techniques for graphical models have been proposed to minimize quadratic energies with submodular terms (e.g. [1]) and non-submodular terms (e.g. [16,13]). We intend to apply the same techniques to minimize higher order Gibbs energies by transforming the latter into quadratic ones. The transformation is based on adding dummy variables, each one substituting the product of two initial variables. Some known theoretical works [18,14] consider the reduction, in polynomial time, of the optimization of a general pseudo-Boolean function E(x) to the optimization of a quadratic pseudo-Boolean function. In contrast to [18], Algorithm 2 guarantees that the resulting quadratic function has the same minimum, and at the same variables, as the initial general pseudo-Boolean function. Also, as distinct from what has been proposed in [14], we present both a detailed proof to verify the introduced algorithm and an efficient implementation of the latter on a relevant graph.

To obtain a quadratic pseudo-Boolean function from the pseudo-Boolean function E(x) in (3), we replace each occurrence of x_i x_j by a dummy variable x_{n+1} and add the term N · (x_i x_j + 3x_{n+1} − 2x_i x_{n+1} − 2x_j x_{n+1}) to E(x). This gives the following function Ẽ(x_1, x_2, · · · , x_{n+1}):

Ẽ(x) = N · (x_i x_j + 3x_{n+1} − 2x_i x_{n+1} − 2x_j x_{n+1}) + Σ_{S*⊆V*} a_{S*} Π_{i∈S*} x_i ,    (14)

where

S* = (S − {i, j}) ∪ {n + 1}  if {i, j} ⊆ S ,   and   S* = S  if {i, j} ⊄ S ,

and V* = {S* | S ⊆ V}. To compute the constant N, (3) is first rewritten as follows:

E(x) = a_{} + Σ_{S1⊆V} a^−_{S1} Π_{i∈S1} x_i + Σ_{S2⊆V} a^+_{S2} Π_{i∈S2} x_i ,    (15)

where a_{} is the absolute term, the a^−_{S1} are the negative coefficients, and the a^+_{S2} are the positive coefficients. Then let A = a_{} + Σ_{S1⊆V} a^−_{S1} be the sum of all the negative coefficients in (15) plus the absolute term. Note that A ≤ min_{x∈B^n} E(x). Also, denote by r a real number greater than the minimal value of E(x) on B^n (i.e. r > min_{x∈B^n} E(x)). In practice, r can be any number greater than a particular value of E(x) on B^n. Finally, the chosen value N has to satisfy N ≥ r − A.
This replacement is repeated until we get a quadratic pseudo-Boolean function. Algorithm 2 shows these steps in detail. At each step, Ẽ(x_1, x_2, · · · , x_{n+1}) must satisfy the following.

Algorithm 2. Transforming to Quadratic
Input: general pseudo-Boolean function E(x) (3).
1. Set r > min_{x∈B^n} E(x) (e.g., r = E(0) + 1), set A = a_{} + Σ_{S1⊆V} a^−_{S1}, and set N ≥ r − A
2. while (∃ S ⊆ V and |S| > 2) do
3.    Select a pair {i, j} ⊆ S and update the coefficients: a_{n+1} = 3N, a_{i,n+1} = a_{j,n+1} = −2N, a_{i,j} = a_{i,j} + N, a_{(S−{i,j})∪{n+1}} = a_S, and set a_S = 0 for all S ⊇ {i, j}
4.    n = n + 1, and update the function as shown in (14)
   end while
Output: the quadratic pseudo-Boolean function Ẽ(x)
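The following Python sketch (our own illustration, not the authors' code) carries out the reduction of Algorithm 2 on a pseudo-Boolean function stored as a dictionary {frozenset of variable indices: coefficient}; N is chosen once, following step 1, and a brute-force check that the minima coincide is included for small n:

```python
import itertools

def reduce_to_quadratic(coeffs, n):
    """Algorithm 2: reduce a multi-linear pseudo-Boolean function to a quadratic one.

    coeffs : dict {frozenset({i, ...}): a_S} with variables indexed 0..n-1;
             frozenset() holds the absolute term a_{}.
    Returns (new_coeffs, new_n) with all remaining terms of size <= 2.
    """
    coeffs = dict(coeffs)
    a0 = coeffs.get(frozenset(), 0.0)
    A = a0 + sum(a for S, a in coeffs.items() if S and a < 0)
    r = a0 + 1.0                                    # r = E(0) + 1 > min E
    N = r - A
    while True:
        big = [S for S in coeffs if len(S) > 2]
        if not big:
            break
        i, j = sorted(big[0])[:2]                   # a pair contained in a large term
        z = n                                       # index of the new dummy variable
        n += 1
        # Penalty N * (x_i x_j + 3 x_z - 2 x_i x_z - 2 x_j x_z)
        for P, c in (({i, j}, N), ({z}, 3 * N), ({i, z}, -2 * N), ({j, z}, -2 * N)):
            coeffs[frozenset(P)] = coeffs.get(frozenset(P), 0.0) + c
        # Substitute x_i x_j -> x_z in every higher-order term containing the pair.
        for S in [T for T in coeffs if {i, j} <= T and len(T) > 2]:
            c = coeffs.pop(S)
            T = frozenset(S - {i, j}) | frozenset({z})
            coeffs[T] = coeffs.get(T, 0.0) + c
    return coeffs, n

def evaluate(coeffs, x):
    return sum(a * all(x[i] for i in S) for S, a in coeffs.items())

# Toy check on a random cubic function of 4 variables.
import random
rng = random.Random(0)
V = range(4)
f = {frozenset(S): rng.uniform(-1, 1)
     for k in range(4) for S in itertools.combinations(V, k)}
g, m = reduce_to_quadratic(f, 4)
assert max(len(S) for S in g) <= 2
min_f = min(evaluate(f, x) for x in itertools.product((0, 1), repeat=4))
min_g = min(evaluate(g, x) for x in itertools.product((0, 1), repeat=m))
assert abs(min_f - min_g) < 1e-9
```

Note that, unlike the literal statement of step 3, the sketch leaves existing pairwise terms in place and only substitutes the pair inside terms with |S| > 2, which is an equivalent design choice for obtaining a quadratic function.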
Lemma 1. Let M_E = {y ∈ B^n | E(y) = min_{x∈B^n} E(x)} be the set of all y ∈ B^n such that E(y) is the global minimum of the function E on B^n, and let M_Ẽ be defined likewise for Ẽ on B^{n+1}. Then
1. Ẽ(x_1, x_2, · · · , x_{n+1}) = E(x_1, x_2, · · · , x_n) whenever x_{n+1} = x_i x_j, and
2. (y_1, y_2, · · · , y_{n+1}) ∈ M_Ẽ iff (y_1, y_2, · · · , y_n) ∈ M_E.

Proof. For x, y, z ∈ B, the function g(x, y, z) = xy + 3z − 2xz − 2yz equals 0 if xy = z, and g(·) ≥ 1 otherwise. If x_i x_j = x_{n+1} then Ẽ(x) = N · g(x_i, x_j, x_{n+1}) + Σ_{S*⊆V*} a_{S*} Π_{i∈S*} x_i = 0 + Σ_{S*⊆V*} a_{S*} Π_{i∈S*} x_i = Σ_{S⊆V} a_S Π_{i∈S} x_i, i.e. Ẽ(x) = E(x). In particular, Ẽ(x) has the same minimum value as E(x) on B^n. On the other hand, let y_i y_j ≠ y_{n+1}, which implies g(y_i, y_j, y_{n+1}) ≥ 1. Assuming that (y_1, y_2, · · · , y_{n+1}) ∈ M_Ẽ, we have Ẽ(y) = N · g(y_i, y_j, y_{n+1}) + Σ_{S*⊆V*} a_{S*} Π_{i∈S*} y_i. As follows from the choices of A and r, N > 0. So Ẽ(y) ≥ N + Σ_{S*⊆V*} a_{S*} Π_{i∈S*} y_i ≥ N + A ≥ r due to our choice of N ≥ r − A. Since r > min_{x∈B^n} E(x), this contradicts the assumption (y_1, y_2, · · · , y_{n+1}) ∈ M_Ẽ. Thus, (y_1, y_2, · · · , y_{n+1}) ∈ M_Ẽ only if y_i y_j = y_{n+1}, and the lemma follows.
By repeatedly applying the construction in Lemma 1, we get the following theorem (different versions of this theorem can be found in [14,18]):

Theorem 1. Given a general pseudo-Boolean function E(x_1, x_2, · · · , x_n), there exists an at most quadratic pseudo-Boolean function Ẽ(x_1, x_2, · · · , x_{n+m}), where m ≥ 0, such that
1. (y_1, y_2, · · · , y_{n+m}) ∈ M_Ẽ ⟺ (y_1, y_2, · · · , y_n) ∈ M_E, and
2. the size of the quadratic pseudo-Boolean function is bounded polynomially in the size of E, so the reduction algorithm terminates in polynomial time.

Proof. Repeated application of the construction in the proof of Lemma 1 yields Point 1 of the theorem. To prove Point 2, let us denote by M3 the number of terms with |S| > 2 (i.e. of higher order terms containing more than two variables) in the function E(x_1, x_2, · · · , x_n).²
In the loop of Algorithm 2, a term of size n (i.e. |S| = n) needs at most n − 2 iterations. Also, at each iteration of this loop, at least one of the terms with |S| > 2 decreases in size. Hence, the algorithm must terminate after T ≤ M3(n − 2) iterations, because the average number of iterations per term is less than n − 2. Indeed, a larger number of variables in the energy terms indicates that these terms share several common variables, so that they are reduced concurrently. For example, a function with ten variables contains at most 968 terms with |S| > 2. Using Algorithm 2, it is reduced with T = 68 ≪ 968 × 8 iterations. This proves the claim about complexity.
Although our work in this section is similar to the work in [14], it has to be mentioned that our paper gives a formal derivation of the selection of the value N such that the resulting quadratic energy has the same minimum, and at the same values of the variables, as the initial general pseudo-Boolean function. In [14], a different formula was only stated, not derived, and the corresponding minimization properties were not proved.

5.1 Efficient Implementation
The number of dummy variables in the generated quadratic pseudo-Boolean function depends on the selection of the pairs {i, j} in the loop of Algorithm 2. Finding the optimal selection that minimizes this number is an NP-hard problem [14]. Also, searching for each selected pair in the other terms is exhaustive. However, in most Computer Vision problems we deal with images on an arithmetic 2D lattice V with n pixels. The order of the Gibbs energy function to be minimized depends on the particular neighborhood system and the maximal clique size. The natural 2D image structure helps us to define a general neighborhood system N, e.g. the system of neighbors within a square of a fixed size centered on each pixel [17]. The size of this square determines the order of the system. The neighborhood system specifies the available types of cliques. The prior knowledge about the neighborhood system and the clique size can be used to minimize the number of dummy variables and to eliminate the search for the repeated pair in other terms. We demonstrate this process on the second order neighborhood system and cliques of size 3 and 4, but it can be generalized to higher orders. Figure 2(a) suggests that the second order neighborhood system contains four different cliques of size 3 (i.e. C3^1, C3^2, C3^3, and C3^4). Thus, we can convert the cubic terms that correspond to the cliques of size 3 to quadratic terms as follows:
– At each pixel (e.g. m), select the cubic term that corresponds to clique type C3^2.
– Reduce this term and the cubic term of clique type C3^1 at the diagonal pixel (e.g. i), if possible, by eliminating common variables (e.g. j and ℓ).
² Obviously, a function E of n binary variables contains at most 2^n terms and at most 2^n − (n² + n + 2)/2 terms with more than two variables (|S| > 2).
Fig. 2. The 2nd -order neighborhood on a 2D lattice: cliques of size 3 (a) and 4 (b)
– At each pixel (e.g. m), select the cubic term that corresponds to clique type C3^3.
– Reduce this term and the cubic term of clique type C3^4 at the diagonal pixel (e.g. k), if possible, by eliminating common variables (e.g. j and n).

After a single scan of the image, all the cubic terms will have been converted to quadratic terms, and every term will have been visited only once. As shown in Fig. 2(b), for a second order neighborhood system and cliques of size four, three neighboring cliques have the following potential functions³:

ϕ_{x_i x_m x_j x_n} = x_i x_m x_j x_n + x_i x_m x_j + x_i x_j x_n + x_i x_m x_n + x_m x_j x_n + . . .
ϕ_{x_j x_n x_k x_o} = x_j x_n x_k x_o + x_j x_n x_k + x_j x_n x_o + x_j x_k x_o + x_k x_o x_n + . . .
ϕ_{x_k x_o x_ℓ x_p} = x_k x_o x_ℓ x_p + x_k x_o x_ℓ + x_k x_o x_p + x_k x_ℓ x_p + x_ℓ x_p x_o + . . .

For this configuration, one can notice that the clique potential functions ϕ_{x_i x_m x_j x_n} and ϕ_{x_j x_n x_k x_o} share the elements x_j and x_n. Also, ϕ_{x_j x_n x_k x_o} and ϕ_{x_k x_o x_ℓ x_p} share the elements x_k and x_o. Therefore, by replacing x_j x_n and x_k x_o with two new elements using Algorithm 2, the clique potential function ϕ_{x_j x_n x_k x_o} becomes quadratic. Repeating this for every three neighboring cliques through the whole grid, and assuming a circular grid (i.e. the first and the last column are neighbors), all the clique potential functions are converted to quadratic ones. Notice that using this technique in the reduction yields the minimum number of dummy variables, which equals the number of cliques of size four in the grid. Notice also that these scenarios are not unique: many other scenarios can be chosen for scanning the image and selecting the higher order cliques to be reduced. However, in an efficient scenario every higher order term must be converted to a quadratic term after being visited only once. To illustrate the improvement introduced by the proposed implementation, we give the following example. A linear search in a list runs in O(n), where n is the number of elements. An image of size R × C has 4(R − 1)(C − 1) triple cliques in the 2nd-order neighborhood system. Each triple clique has 4 terms with |S| > 1, with 9 elements in total, as shown in (6). So applying Algorithm 2 directly, without the proposed implementation, has an overhead proportional to 36(R − 1)(C − 1).

³ For brevity, only the higher-order terms that appear in the discussion are typed, assuming that all coefficients are 1.
6 Experimental Results
To illustrate the potential of the higher order cliques in modelling complex objects and assess the performance of the proposed algorithm, let us consider image segmentation into two classes: object and background. Following a popular conventional approach, an input image and the desired region map (the labeled image) are described by a joint MGRF model of independent image signals and interdependent region labels. The desired map is the mapping x : V −→ B, where B is the set of two labels {0 ≡ “background”, 1 ≡ “object”}. The MAP estimate of x, given an input image, is obtained by minimizing an energy function (3) where each label xi is a binary variable in the energy function. The unary term ϕ(xi ) in this function specifies the data penalty. This term is chosen to be ϕ(xi ) = ||Ii − Ixi ||2 where Ii is the input feature vector for the pixel i, e.g. a 4D vector Ii = (ILi , Iai , Ibi , Iti ) [23] where the first three components are the pixel-wise color L*a*b* components and Iti is a local texture descriptor [24]. Seeds selected from the input image can be used to estimate feature vectors for the object, I1 , and background, I0 .
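As an illustration of the data term just described, the sketch below (our own; it assumes a precomputed H × W × 4 feature map with L*a*b* and texture channels, and binary seed masks, none of which are specified in this form by the paper) estimates the class feature vectors from the seeds and fills the unary penalties ϕ(x_i) = ||I_i − I_{x_i}||²:

```python
import numpy as np

def unary_penalties(features, obj_seeds, bg_seeds):
    """Data penalties for the two labels at every pixel.

    features  : float array of shape (H, W, 4), e.g. (L, a, b, texture) per pixel.
    obj_seeds : boolean (H, W) mask of seed pixels for the object.
    bg_seeds  : boolean (H, W) mask of seed pixels for the background.
    Returns (phi0, phi1): penalties for label 0 (background) and label 1 (object).
    """
    I1 = features[obj_seeds].mean(axis=0)        # object feature vector I_1
    I0 = features[bg_seeds].mean(axis=0)         # background feature vector I_0
    phi1 = ((features - I1) ** 2).sum(axis=-1)   # phi(x_i = 1) = ||I_i - I_1||^2
    phi0 = ((features - I0) ** 2).sum(axis=-1)   # phi(x_i = 0) = ||I_i - I_0||^2
    return phi0, phi1
```

An initial binary map, as used below to estimate the pairwise and third-order potentials, can then be obtained by comparing the two penalties pixel-wise, x = (phi1 < phi0).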
Fig. 3. Starfish segmentation – the pairwise (a1 –f1 ) vs. third-order (a2 –f2 ) cliques
Using the feature vectors I_1 and I_0, an initial binary map can be estimated. For the cliques of size 2, the pairwise potentials were analytically estimated from the initial map using the method described in Section 4. The potentials for the third order cliques have the same analytical form (13), but with the frequency f_neq(x) = (1/|T|) Σ_{{i,j,ℓ}∈T} (1 − δ(x_i = x_j = x_ℓ)), where T now denotes the family of third-order cliques. In all our experiments, we selected the second order neighborhood system with clique sizes from 1 to 3. By defining the clique potentials (unary, pairwise, and third-order), we identify the target segmentation energy to be minimized. After that, Algorithm 1 is used to compute the coefficients of the polynomial that represents the segmentation energy, and Algorithm 2 generates a quadratic version of this polynomial. Finally, we use the extended roof duality algorithm (QPBOP) [16] to minimize the quadratic pseudo-Boolean function. For all the presented examples, the QPBOP technique was able to label all pixels. In the experiments below, images are segmented with unary and pairwise cliques and with unary and third order cliques in the MGRF model. Of course, cliques of greater sizes can be more efficient for describing complex regions, but we used the third order for illustration purposes only.
Fig. 4. More segmentation results: the pairwise (a1 –f1 ) and third-order (a2 –f2 ) cliques; numbers in images refer to regions with inhomogeneities (a–e) and partial artificial occlusions (f)
Figure 3 shows the starfish segmentation. Unlike the pairwise interaction in Fig. 3, a1, the higher order interaction in Fig. 3, a2 overcomes the intensity inhomogeneities of the starfish and its background. For more challenging situations, we occluded some parts of the starfish in Figs. 3, b–f. The higher order interaction succeeds in recovering the correct boundary of the starfish, while the pairwise interaction alone could not. The average processing time for this experiment was 6 sec in the third-order case, compared to 2 sec in the pairwise case. More segmentation results are shown in Fig. 4 for different color objects from the Berkeley Segmentation Dataset [25]. Numbers 1 and 2 in Fig. 4, a1 indicate regions with inhomogeneities where the pairwise interaction fails. As expected, the higher order interaction overcomes the problem (Fig. 4, a2). Similar regions exist in Figs. 4, b1 (1, 2, and 3); c1 (1, 2 and 3); d1 (1 and 2), and e1 (1 and 2). In Fig. 4, f artificial occlusions were inserted by letting some object regions take the background color (regions 1 and 2 in f1). These results show that the higher order interactions are instrumental for a more correct segmentation, and the improvement of the segmentation results is clearly noticeable. Of course, if we used cliques of sizes greater than three, we could model more complex interactions that lie outside the domain of uniform spatial interaction assumed in the 3rd-order MGRF model; however, we used the third order MGRF for illustration purposes only. Recall that our goal is to introduce to the Computer Vision community a link between higher order energies and quadratic ones. This makes it possible to use the well-established tools of quadratic energy optimization to optimize higher order energies.
7 Conclusions
This paper has introduced an efficient link between MGRF models with higher order cliques and those with pairwise cliques. We have proposed an algorithm that transforms a general pseudo-Boolean function into a quadratic pseudo-Boolean function and provably guarantees that the obtained quadratic function has the same minimum, and at the same variables, as the initial higher order one. The algorithm is efficiently implemented for image-related graphical models. Thus, we can apply the well-known pairwise MGRF solvers to higher order MGRFs. The MGRF parameters are estimated analytically. Experimental results show that the proposed framework notably improves image segmentation and therefore may be useful for solving many other Computer Vision problems.
References 1. Boykov, Y., Veksler, O., Zabih, R.: Fast Approximation Energy Minimization via Graph Cuts. IEEE Trans. PAMI 23(11), 1222–1239 (2001) 2. Kolmogorov, V., Zabih, R.: What Energy Functions Can be Minimized via Graph Cuts? IEEE Trans. PAMI 26(2), 147–159 (2004) 3. Yedidia, J.S., Freeman, W.T., Weiss, Y.: Generalized Belief Propagation. In: NIPS, pp. 689–695 (2000)
4. Felzenszwalb, P.F., Huttenlocher, D.P.: Efficient Belief Propagation for Early Vision. Int. J. Computer Vision 70(1), 41–54 (2006) 5. Kolmogorov, V.: Convergent Tree-Reweighted Message Passing for Energy Minimization. IEEE Trans. PAMI 28(10), 1568–1583 (2006) 6. Wainwright, M.J., Jaakkola, T., Willsky, A.S.: Tree-Based Reparameterization for Approximate Inference on Loopy Graphs. In: NIPS, pp. 1001–1008 (2001) 7. Szeliski, R., Zabih, R., Scharstein, D., Veksler, O., Kolmogorov, V., Agarwala, A., Tappen, M.F., Rother, C.: A Comparative Study of Energy Minimization Methods for Markov Random Fields. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3952, pp. 16–29. Springer, Heidelberg (2006) 8. Lan, X., Roth, S., Huttenlocher, D.P., Black, M.J.: Efficient Belief Propagation with Learned Higher-Order Markov Random Fields. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3952, pp. 269–282. Springer, Heidelberg (2006) 9. Potetz, B.: Efficient Belief Propagation for Vision Using Linear Constraint Nodes. In: CVPR (2007) 10. Paget, R., Longstaff, I.D.: Texture Synthesis via a Noncausal Nonparametric Multiscale Markov Random Field. IEEE Trans. Image Processing 7(6), 925–931 (1998) 11. Roth, S., Black, M.J.: Fields of Experts: A Framework for Learning Image Priors. In: CVPR, pp. 860–867 (2005) 12. Kohli, P., Kumar, M., Torr, P.: P 3 & beyond: Solving Energies with Higher Order Cliques. In: CVPR (2007) 13. Kolmogorov, V., Rother, C.: Minimizing Nonsubmodular Functions with Graph Cuts-A Review. IEEE Trans. PAMI 29(7), 1274–1279 (2007) 14. Boros, E., Hammer, P.L.: Pseudo-Boolean Optimization. Discrete Appl. Math. 123(1-3), 155–225 (2002) 15. Rother, C., Kumar, S., Kolmogorov, V., Blake, A.: Digital Tapestry. In: CVPR, pp. 589–596 (2005) 16. Rother, C., Kolmogorov, V., Lempitsky, V.S., Szummer, M.: Optimizing Binary MRFs via Extended Roof Duality. In: CVPR (2007) 17. Geman, S., Geman, D.: Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images. IEEE Trans. PAMI 6, 721–741 (1984) 18. Rosenberg, I.G.: Reduction of Bivalent Maximization to The Quadratic Case. Cahiers du Centre d’Etudes de Recherche Operationnelle 17, 71–74 (1975) 19. Freedman, D., Drineas, P.: Energy Minimization via Graph Cuts: Settling What is Possible. In: CVPR, pp. 939–946 (2005) 20. Cunningham, W.: Minimum Cuts, Modular Functions, and Matroid Polyhedra. Networks 15, 205–215 (1985) 21. Gimel’farb, G.L.: Image Textures and Gibbs Random Fields. Kluwer Academic Publishers, Dordrecht (1999) 22. Chen, C.C.: Markov Random Field Models in Image Analysis. PhD thesis, Michigan State University, East Lansing (1988) 23. Chen, S., Cao, L., J.L., Tang, X.: Iterative MAP and ML Estimations for Image Segmentation. In: CVPR (2007) 24. Carson, C., Belongie, S., Greenspan, H., Malik, J.: Blobworld: Image Segmentation Using Expectation-Maximization and Its Application to Image Querying. IEEE Trans. PAMI 24(8), 1026–1038 (2002) 25. Martin, D., Fowlkes, C., Tal, D., Malik, J.: A Database of Human Segmented Natural Images and Its Application to Evaluating Segmentation Algorithms and Measuring Ecological Statistics. In: ICCV, pp. 416–423 (2001)
Multi-camera Tracking and Atypical Motion Detection with Behavioral Maps

Jérôme Berclaz¹, François Fleuret², and Pascal Fua¹

¹ Computer Vision Laboratory, EPFL, Lausanne, Switzerland
² IDIAP Research Institute, Martigny, Switzerland
Abstract. We introduce a novel behavioral model to describe pedestrian motions, which is able to capture sophisticated motion patterns resulting from the mixture of different categories of random trajectories. Due to its simplicity, this model can be learned from video sequences in a totally unsupervised manner through an Expectation-Maximization procedure. When integrated into a complete multi-camera tracking system, it improves the tracking performance in ambiguous situations, compared to a standard ad-hoc isotropic Markovian motion model. Moreover, it can be used to compute a score which characterizes atypical individual motions. Experiments on outdoor video sequences demonstrate both the improvement of tracking performance when compared to a state-of-the-art tracking system and the reliability of the atypical motion detection.
1 Introduction
Tracking multiple people in crowded scenes can be achieved without explicitly modeling human behaviors [1,2,3,4,5] but can easily fail when their appearances become too similar to distinguish one person from another, for example when two individuals are dressed similarly. Kalman filters and simple Markovian models have been routinely used for this purpose but do not go beyond capturing the continuous and smooth aspect of people's trajectories [6]. This problem has long been recognized in the Artificial Intelligence community, and the use of much more sophisticated Markovian models, such as those that involve a range of strategies that people may pursue [7], has been demonstrated. For such an approach to become practical, however, the models have to be learnable from training data in an automated fashion instead of being painstakingly hand-crafted.
Supported in part by the Indo Swiss Joint Research Programme (ISJRP), and in part by funds of the European Commission under the IST-project 034307 DYVINE. Supported by the Swiss National Science Foundation under the National Centre of Competence in Research (NCCR) on Interactive Multimodal Information Management (IM2).
In this paper, we introduce models that can both describe how people move on a location of interest's ground plane, such as a cafeteria, a corridor, or a train station, and be learned from image data. To validate these models, we use a publicly available implementation [8] of a multi-camera multi-people tracking system [2], first to learn them and second to demonstrate that they can help disambiguate difficult situations. We also show that, far from forcing everyone to follow a scripted behavior, the resulting models can be used to detect abnormal behaviors, which are defined as those that do not conform to our expectations. This is a crucial step in many surveillance applications whose main task is to raise an alarm when people exhibit dangerous or prohibited behavior. We represent specific behaviors by a set of behavioral maps that encode, for each ground plane location, the probability of moving in a particular direction. We then associate to people being tracked a probability of acting according to an individual map and of switching from one map to the other based on their location. The maps and model parameters are learned by Expectation-Maximization in a completely unsupervised fashion. At run-time, they are used for robust and near real-time recovery of trajectories in ambiguous situations. Also, the same maps are used for efficient detection of abnormal behavior by computing the probability of retrieved trajectories under the estimated model. The contribution of this paper is therefore to show that the models we propose are both sophisticated enough to capture higher-level behaviors that basic Markovian models cannot, and simple enough to be learned automatically from training data.
2 Related Works
With the advent of video surveillance and real-time people tracking algorithms, we have recently seen an increasing amount of research focused on acquiring spatiotemporal patterns by passive observation of video sequences [9,10,11,12,13]. Our approach shares similarities with [9], since we try to learn trajectory distributions from data as they do. However, while they model the trajectories in the camera view, and handle the temporal consistency using an artificial neural network with a short memory, we propose a more straight-forward modeling under a classical Markovian assumption with an additional behavioral hidden state. The metric homogeneity of the top-view allows for simpler priors, and the resulting algorithm can be integrated seamlessly in a standard HMM-based tracking system. In a relatively close spirit, [10] uses an adaptive background subtraction algorithm to collect patterns of motion in the camera view. With the help of vector quantization, they build a codebook of representations out of this data, which they use to detect unusual events. [12] proceeds in a similar fashion to gather statistics from an online surveillance system. Using this data, they infer higher level semantics, such as the locations of entrance points, stopping areas, etc. More related to our approach is the work of [11], which applies an EM algorithm to cluster trajectories recorded with laser-range finders. From this data,
they derive an HMM to predict future positions of the people. The use of laser-range scanners and their trajectory cluster model makes this approach more adapted to an indoor environment where people have a relatively low freedom of movement, whereas our proposed behavioral maps are more generic and learned from standard video sequences shot with off-the-shelf cameras. A quite different strategy has been chosen by [14]. In their work, they propose a generic behavior model for pedestrians based on discrete choice models, and apply it to reinforce tracking algorithms. As opposed to our method, their framework does not need any training phase, but it is not able to learn the intrinsic specificity of a particular location. Finally, our approach to handling human behaviors can be seen as a simplified version of Artificial Intelligence techniques, such as Plan Recognition [7], where the strategies followed by the agents are encoded by the behavioral maps. This simplification is what lets us learn our models from real data without having to hand-design them, which is a major step forward with respect to traditional Artificial Intelligence problems.
3 Algorithm
We present in this section the core algorithm of our approach, first by describing the formal underlying motion model, and second by explaining both the E-M training procedure and the method through which the adequate training data was collected.

3.1 Motion Model
As described briefly in § 1, our motion model relies on the notion of behavioral map, a finite hidden state associated to every individual present in the scene. The rationale behind that modeling is that an individual trajectory can be described by a deterministic large-scale trajectory both in space and time (i.e. "he is going from door A to door B", "he is walking towards the coffee machine") combined with additional noise. The noise itself, while limited in scale, is highly structured: motion can be very deterministic in a part of a building where people do not collide, and become more random in crowded areas. Hence this randomness is both strongly anisotropic – people in a certain map go in a certain direction – and strongly non-stationary – depending on their location in the area of interest the fluctuations differ. With an adequate class of models for individual maps, combining several of them allows for encoding such a structure. Hence, re-using the formalism of [2], we associate to each individual a random process (L_t, M_t) indexed by the time t and taking its values in {1, . . . , G} × {1, . . . , M}, where G ≈ 1000 is the number of locations in the finite discretization of the area of interest and M is the total number of behavioral maps we consider, typically less than 5. We completely define this process by first making a standard Markovian assumption, and then choosing models for both

P(L_0, M_0)   and   P(L_{t+1}, M_{t+1} | L_t, M_t) .    (1)
Note that the very idea of maps strongly changes the practical effect of the Markovian assumption. For instance, by combining two maps that encode motions in opposite directions and a very small probability of switching from one map to the other, the resulting motion model is a mixture of two flows of individuals, each strongly deterministic. By making the probabilities of transition depend on the location, we can encode behaviors such as people changing their destination and doing a U-turn only at certain locations. Such a property can be very useful to avoid confusing the trajectories of two individuals walking in opposite directions. To define (1) precisely, we first make an assumption of conditional independence between the map and the location at time t + 1 given the same at time t: P(L_{t+1}, M_{t+1} | L_t, M_t) = P(L_{t+1} | L_t, M_t) P(M_{t+1} | L_t, M_t). Due to the 20 cm spatial resolution of our discretization, we have to consider a rather coarse time discretization to be able to model motion accurately. If we were directly using the frame-rate of 25 time steps per second, the location at time t + 1 would be almost a Dirac mass on the location at the previous time step. Hence, we use a time discretization of 0.5 s, which has the drawback of increasing the size of the neighborhood to consider for P(L_{t+1} | L_t, M_t). In practice an individual can move up to 4 or 5 spatial locations away in one time step, which leads to a neighborhood of more than 50 locations. The issue to face when choosing these probability models is the lack of training data. It would be impossible, for instance, to model these distributions exhaustively as histograms, since the total number of bins for G ≈ 1,000 and M = 2, if we consider transitions only to the 50 spatial neighbor locations and all possible maps, would be 1,000 ∗ 2 ∗ 50 ∗ 10 = 10^6, hence requiring that order of number of observations. To cope with that difficulty, we interpolate these mappings with a Gaussian kernel from a limited number Q of control points, hence making a strong assumption of spatial regularity. Finally, our motion model is totally parametrized by fixing the locations l_1, . . . , l_Q ∈ {1, . . . , G}^Q of control points in the area of interest (where Q is a few tens), and for every point l_q and every map m by defining a distribution μ_{q,m} over the maps and a distribution f_{q,m} over the locations. From these distributions, for every map m and every location l, we interpolate the distributions at l from the distributions at the control points with a Gaussian kernel κ:

P(L_{t+1} = l', M_{t+1} = m' | L_t = l, M_t = m)    (2)
  = P(L_{t+1} = l' | L_t = l, M_t = m) P(M_{t+1} = m' | L_t = l, M_t = m)    (3)
  = [ Σ_q κ(l, l_q) f_{q,m}(l' − l) / Σ_r κ(l, l_r) ] · [ Σ_q κ(l, l_q) μ_{q,m}(m') / Σ_r κ(l, l_r) ] .    (4)
There remains the precise definition of the motion distribution itself, f_{q,m}(δ), for which we still have to face the scarcity of training data compared to the size of the neighborhood. We decompose the motion δ into a direction and a distance and make an assumption of independence between those two components:
f_{q,m}(δ) = P(L_{t+1} − L_t = δ | L_t = l_q, M_t = m)    (5)
           = g_{q,m}(||δ||) h_{q,m}(θ(δ)) ,    (6)
where ||·|| denotes the standard Euclidean norm, g is a Gaussian density, θ is the angle quantized in eight values and h is a look-up table, so that h(θ(·)) is an eight-bin histogram. Finally, the complete parametrization of our model requires, for every control point and every map, M transition probabilities, the two parameters of g and the eight parameters of h, for a total of Q ∗ M ∗ (M + 2 + 8) parameters.
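For concreteness, the following sketch (our own illustration; the parameter containers and their names are assumptions, not from the paper) evaluates the kernel-interpolated transition probability (2)–(6) at an arbitrary ground-plane position:

```python
import numpy as np

def transition_prob(pos, m, new_pos, new_m, ctrl_xy, mu, g_params, h_hist, sigma_kernel):
    """P(L_{t+1}=new_pos, M_{t+1}=new_m | L_t=pos, M_t=m), interpolated from control points.

    pos, new_pos : 2D ground-plane coordinates (in grid units).
    m, new_m     : current and next behavioral map indices.
    ctrl_xy      : (Q, 2) array of control-point coordinates l_q.
    mu           : (Q, M, M) map-transition tables mu_{q,m}(m').
    g_params     : (Q, M, 2) mean and std of the Gaussian g_{q,m} on the step length.
    h_hist       : (Q, M, 8) normalized 8-bin direction histograms h_{q,m}.
    """
    pos, new_pos = np.asarray(pos, float), np.asarray(new_pos, float)
    ctrl_xy = np.asarray(ctrl_xy, float)
    delta = new_pos - pos
    length = np.linalg.norm(delta)
    dir_bin = int(np.floor((np.arctan2(delta[1], delta[0]) % (2 * np.pi)) / (np.pi / 4))) % 8

    # Gaussian kernel weights kappa(l, l_q), normalized over the control points.
    d2 = ((ctrl_xy - pos) ** 2).sum(axis=1)
    w = np.exp(-0.5 * d2 / sigma_kernel ** 2)
    w /= w.sum()

    mean, std = g_params[:, m, 0], g_params[:, m, 1]
    g = np.exp(-0.5 * ((length - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))
    f = g * h_hist[:, m, dir_bin]              # f_{q,m}(delta) = g(||delta||) h(theta(delta))
    return float((w * f).sum() * (w * mu[:, m, new_m]).sum())
```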
3.2 Training
We present in this section the training procedure we use to estimate the parameters of the model described in the previous section. We denote by α the parameter vector of our model (of dimension Q ∗ M ∗ (M + 2 + 8)) and index all probabilities with it. Provided with images from the video cameras, the ultimate goal would be to optimize the probability of the said sequence of images under a joint model of the image and the hidden trajectories, which we can factorize into the product of an appearance model (i.e. a posterior on the images, given the locations of individuals) with the motion model we are modeling here. However, such an optimization is intractable. Instead, we use an ad-hoc procedure based on the multi-camera multi-people tracking Probability Occupancy Map (POM) algorithm [2] to extract trajectory fragments and to optimize the motion model parameters to maximize the probability of those fragments.

Generating the Fragments. To produce the list of trajectory fragments we will use for the training of the motion model, we first apply the POM algorithm to every frame independently. This procedure optimizes the marginal probabilities of occupancy at every location in the area of interest so that a synthetic image produced according to these marginals matches the result of a background-subtraction pre-processing. We then threshold the resulting probabilities with a fixed threshold to finally produce, at every time step t, a small number N_t of locations (l^t_1, . . . , l^t_{N_t}) ∈ {1, . . . , G}^{N_t} likely to be truly occupied. To build the fragments of trajectories, we process pairs of consecutive frames and pick the location pairing Ξ ⊂ {1, . . . , N_t} × {1, . . . , N_{t+1}} minimizing the total distance between paired locations Σ_{ξ∈Ξ} ||l^t_{ξ_1} − l^{t+1}_{ξ_2}||. If N_t > N_{t+1}, some points occupied at time t cannot be paired with a point at time t + 1, which corresponds to the end of a trajectory fragment. Reciprocally, if N_t < N_{t+1}, some points occupied at t + 1 are not connected to any currently considered fragment, and a new fragment is started. We end up with a family of U fragments of trajectories

f_u ∈ {1, . . . , G}^{s_u} ,   u = 1, . . . , U .    (7)
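The paper does not name a specific solver for this frame-to-frame pairing; one standard choice is the Hungarian algorithm, as in the following sketch (our own; coordinates and any gating threshold are simplifications):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def pair_detections(prev_xy, next_xy):
    """Pair detections of two consecutive frames, minimizing the total distance.

    prev_xy : (N_t, 2) ground-plane coordinates of the detections at time t.
    next_xy : (N_{t+1}, 2) coordinates at time t + 1.
    Returns a list of (index at t, index at t+1) pairs; unmatched detections
    end a fragment (at t) or start a new one (at t + 1).
    """
    prev_xy = np.asarray(prev_xy, float)
    next_xy = np.asarray(next_xy, float)
    cost = np.linalg.norm(prev_xy[:, None, :] - next_xy[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)    # handles rectangular cost matrices
    return list(zip(rows.tolist(), cols.tolist()))
```

Chaining these pairings over time yields the fragments f_u of (7).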
E-M Learning. The overall strategy is an E-M procedure which alternatively maximizes the posterior distribution on maps of every point of every fragment f_u, and the parameters of our motion distribution. Specifically, let f_u^k denote the k-th point of fragment u in the list of fragments we actually observed. Let F_u^k and M_u^k denote respectively the location and the hidden map of the individual of fragment u at step k under our model. Then, during the E step, we re-compute the posterior distribution of those variables under our model. For every first point of a fragment, we set it to the prior on maps. For every other point we have:

ν_u^k(m)    (8)
  = P_α(M_u^k = m | F_u^1 = f_u^1, . . . , F_u^k = f_u^k)    (9)
  = Σ_{m'} P_α(M_u^k = m | F_u^{k−1} = f_u^{k−1}, F_u^k = f_u^k, M_u^{k−1} = m') ν_u^{k−1}(m')    (10)
  ∝ Σ_{m'} P_α(F_u^k = f_u^k | F_u^{k−1} = f_u^{k−1}, M_u^{k−1} = m') · P(M_u^k = m | F_u^{k−1} = f_u^{k−1}, M_u^{k−1} = m') ν_u^{k−1}(m')    (11)
  = Σ_{m'} [ Σ_q κ(f_u^{k−1}, l_q) f_{q,m'}(f_u^k − f_u^{k−1}) / Σ_r κ(f_u^{k−1}, l_r) ] [ Σ_q κ(f_u^{k−1}, l_q) μ_{q,m'}(m) / Σ_r κ(f_u^{k−1}, l_r) ] ν_u^{k−1}(m') .    (12)
From this estimate, during the M step, we recompute the parameters of μq,m and fq,m for every control point lq and every map m in a closed-form manner, since there are only histograms and Gaussian densities. Every sample fuk is weighted with the product of the posterior on the maps and the distance kernel weight νuk (m) κ(fuk , lq ).
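A compact version of this E step for a single fragment could look as follows (our own sketch; the per-step probabilities are supplied by caller-provided functions, which is an assumption of the sketch, not the paper's interface):

```python
import numpy as np

def e_step_fragment(fragment, M, prior, loc_prob, map_prob):
    """Posterior nu_u^k(m) over behavioral maps along one fragment (eqs. (8)-(12)).

    fragment : list of successive locations f_u^1, ..., f_u^{s_u}.
    M        : number of behavioral maps.
    prior    : length-M prior on maps, used for the first point.
    loc_prob : callable (l_prev, l_next, m) -> P(F^k = l_next | F^{k-1} = l_prev, M^{k-1} = m).
    map_prob : callable (l_prev, m_prev, m) -> P(M^k = m | F^{k-1} = l_prev, M^{k-1} = m_prev).
    Returns an array nu of shape (len(fragment), M).
    """
    nu = np.zeros((len(fragment), M))
    nu[0] = np.asarray(prior, float)
    for k in range(1, len(fragment)):
        prev, cur = fragment[k - 1], fragment[k]
        for m in range(M):
            nu[k, m] = sum(loc_prob(prev, cur, mp) * map_prob(prev, mp, m) * nu[k - 1, mp]
                           for mp in range(M))
        nu[k] /= nu[k].sum()                  # the proportionality in (11)
    return nu

# Toy usage on a 1D corridor with two maps: map 0 drifts right, map 1 drifts left.
step = {0: +1, 1: -1}
loc_prob = lambda lp, ln, m: 0.8 if ln - lp == step[m] else 0.1
map_prob = lambda lp, mp, m: 0.9 if m == mp else 0.1
nu = e_step_fragment([0, 1, 2, 3, 2, 1], M=2, prior=[0.5, 0.5],
                     loc_prob=loc_prob, map_prob=map_prob)
print(nu.round(2))   # the posterior switches towards map 1 after the U-turn
```

The M step then re-estimates μ_{q,m}, g_{q,m} and h_{q,m} from the samples, each weighted by ν_u^k(m) κ(f_u^k, l_q) as described above.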
4 Results
In this section, we first present the video sequences that we acquired to test our algorithm and describe the behavioral models we learned from them. We then demonstrate how they can be used both to improve the reconstruction of typical trajectories and to detect atypical ones.

4.1 Synthetic Data
The first step taken to validate the correct functioning of our algorithm was to test it against synthetic data. We generated synthetic probability occupancy maps of people moving along predefined paths. New people were created at the beginning of paths according to a Poisson distribution. Their speed followed a Gaussian distribution and their direction of movement was randomized around the paths. When two or more paths were connected, we defined transition probabilities between them, and people were switching paths accordingly. The results on the synthetic data have been fully satisfying, as the retrieved behavioral maps correctly reflected the different paths we created.
4.2 Training Sequences
To test our algorithm, we acquired two multi-camera video sequences using 3 standard DV cameras. They were placed at the border of a rectangular area of interest in such a way as to maximize camera overlap, as illustrated by Fig. 1. The area of interest is flat and measures about 10 m by 15 m. The 3 video streams were acquired at 25 fps and later synchronized manually. We use the first video, which lasts about 15 minutes, for training purposes. It shows four people walking in front of the cameras, following the predefined patterns of Fig. 1 that involve going from one entrance point to another. In a second 8-minute-long test sequence, the same 4 people follow the same patterns as in the training sequence for about 50 percent of the time and take randomly chosen trajectories for the rest. These random movements can include standing still for a while, going in and out of the area through non-standard entrance points, taking one of the predefined trajectories backwards, etc. Screen shots of the test sequence with anomaly detection results are displayed in Fig. 7.

4.3 Behavior Model
As described in § 3.2, we first apply the POM algorithm [2] on the video streams, which yields ground plane detections that are used by our EM framework to construct the behavior model.
Fig. 1. Top view of the scenario used for algorithm training, showing the cameras and the entrance / exit points. People are going from one entrance point to an exit point using one of the available trajectories.
(Plot: test data likelihood as a function of the number of maps, from 1 to 7, for the real example.)
Fig. 2. Cross-validation: to find the ideal number of maps to model a given scenario, we run our learning algorithm with different numbers of maps on 80% of the training sequence. We then use the other 20% to compute the likelihood of the data given our model. For our training sequence, shown here, 2 maps are enough to model the situation correctly.
The ground plane of the training sequence is discretized into a regular grid of 30 × 45 locations. Probability distribution maps are built using one control point every 3 locations. The behavioral model of the 15-minute-long training sequence is generated using 30 EM iterations, which takes less than 10 minutes on a 3 GHz PC with no particular optimization. We use cross-validation to choose the number of maps that gives the most significant model. We apply our learning algorithm several times on 80% of the training sequence, each time with a different number of maps, as shown in Fig. 2. The rest of the sequence is used to compute the likelihood of the trajectories under our model. In the end, we choose the smallest number of maps which accurately captures the patterns of motion. On our testing sequence, it turns out that two maps are already sufficient. Figure 3 displays the behavioral maps that are learned in the one-map (left) and two-map (right) cases. By comparing them to Fig. 1, one can see that the two-map case is able to model all trajectories of the scenario. Figure 4 shows the probabilities of staying in the same behavioral map over the next half second. These probabilities are relatively high, but not uniform over the whole ground plane, which indicates that people are more likely to switch between maps in some locations.

4.4 Tracking Results
Here, we discuss the benefits of using behavioral maps learned with our algorithm to improve the performance of a people tracker. To this purpose, we have implemented the multi-people tracking algorithm of [2]. This work combines dynamic programming with a color model and an isotropic motion model to extract people trajectories. To integrate our behavioral maps, we have customized the tracking algorithm by replacing the uniform isotropic motion model with our model.
Fig. 3. Motion maps in the top view resulting from the learning procedure, with one map (a) or two maps (b). The difficulty of modeling a mixture of trajectories under a strict Markovian assumption without a hidden state appears clearly at the center-right and lower-left of (a): since the map has to account for motions in two directions, the resulting average motion is null, while in the two-map case (b) two flows appear clearly.
Fig. 4. Probability of remaining in map 0 (left) and in map 1 (right) in the two-map case. Dark color indicates a high probability.
The behavioral maps had to be adapted to fit into the dynamic programming framework. Specifically, from every behavioral map, we generated a motion map that stores, for each position of the ground plane, the probability of moving into one of the adjacent positions at the next time frame.
Table 1. The false negative value corresponds to the number of trajectories, out of a total of 75, that were either not found or were not consistent with the ground truth. The false positive value stands for the average number of false detections per time frame.
                           R = .5m         R = 1m          R = 2m
                           FN    FP        FN    FP        FN    FP
Fleuret et al.             17    0.18      14    0.15      13    0.15
Our algorithm, one map     15    0.22      13    0.20      12    0.19
Our algorithm, two maps    10    0.21      10    0.18      10    0.17
The main difference with [2] is that a hidden state in the HMM framework is now characterized by both a map and a position. Also, the transition between HMM states is now given by both a transition probability between maps and one between locations. The rest of the tracking framework, however, has been left untouched. To quantify the benefits of the behavioral maps, we started by running the original tracking algorithm [2] on our training sequence. We then ran our modified version on the same sequence, using in turn a one-map behavior model and a two-map one. A ground truth used to evaluate the results was derived by manually marking the position and identity of each person present on the ground plane every 10 frames. Scores for both algorithms were then computed by comparing their results to the ground truth. For this purpose, we define a trajectory as the path taken by a person from the time it enters the area until it exits it. For every trajectory of the ground truth, we search for a matching set of detections in the algorithm results. A true positive is declared when, for every position of a ground truth trajectory, a detection is found within a given distance R, and all detections correspond to the same identity. If there is a change in identity, it obviously means that there has been a confusion between the identities of two people, which cannot be considered as a true positive. The false positive value is the average number of false detections per time frame. Table 1 shows both false positive and false negative values for [2] and for the modified algorithm using a one-map and a two-map behavior model. Results are shown for 3 different values of the distance R. It appears from Table 1 that, for about the same number of false positives, using 1, respectively 2, behavioral maps helps to significantly reduce the number of false negatives. Moreover, one can notice that the paths are found with greater precision when using two behavioral maps, since the number of false negatives is no longer influenced by the distance R.

4.5 Anomaly Detection Results
Detecting unlikely motions is another possible usage of the behavioral maps computed by our algorithm. We show the efficiency of this approach by applying it
Table 2. Error rate for atypical trajectory detection. The total number of retrieved trajectories is 47, among which 16 are abnormal. With either one or two maps, the number of false positives (i.e. trajectories flagged as abnormal while they are not) drops to 1 for a number of false negatives (i.e. non-flagged abnormal trajectories) greater than 2. However, for very conservative thresholds (less than 2 false negatives) the advantage of using two maps appears clearly.
One map 29 7 1
FP Two maps 9 4 1
Fig. 5. Three examples of retrieved "normal" trajectories, according to the scenario illustrated in Fig. 1
for classifying trajectories from the test sequence into a "normal" or "unexpected" category. We start by creating a ground truth for the test sequence. We manually label each trajectory depending on whether it follows the scenario of Fig. 1 or not. For every trajectory, a likelihood score is computed using the behavioral maps. For this we proceed within an HMM framework, in which our hidden state is the behavioral map the person is following. The transition between states is given by the transition probabilities between maps, and the observation probability is the probability of a move, given the map the person is following. Having defined all this, the likelihood of a trajectory is simply computed using the classical forward-backward algorithm. The score is then compared to a threshold to classify the trajectory as "normal" or "unexpected". We classified the 47 trajectories automatically retrieved from the test sequence using a one-map and a two-map behavior model. The results are displayed in Table 2 and show the improvement when using several maps: the behavior model
Fig. 6. Three examples of atypical retrieved trajectories, according to the scenario illustrated in Fig. 1. Unlikely parts are displayed with dotted-style lines. a) The person is taking an unusual path; b) the person is stopping (middle of the trajectory); c) the person is taking a predefined path backward.
Fig. 7. Anomaly detection in camera views. Each row consists of views from three different cameras at the same time frame. A red triangle above a person indicates that it does not move according to the learned model.
with only one map produces 7 (respectively 29) false positives while missing only one (respectively zero) abnormal trajectories, whereas the two-map model reduces this figure to 4 (respectively 9). Instead of computing a score for a complete trajectory, one can also compute a score for a small part of it only, using the very same technique.
Proceeding in this way is more appropriate for monitoring trajectories in real time, for instance embedded in a tracking algorithm. This leads to a finer analysis of a trajectory, where only the unexpected parts of it are marked as such. This procedure can be used directly to "tag" individuals over short time intervals in the test video sequence. Figure 6 shows a selected set of atypical behaviors, according to our two-map model. The unlikely parts of the trajectories are drawn using dotted-style lines. This should be compared to the two right maps of Fig. 3. On the other hand, Fig. 5 shows some trajectories that follow the predefined scenario. Finally, Fig. 7 illustrates the same anomaly detection results, projected onto the camera views.
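The trajectory score described above is the standard forward recursion over the hidden map; the sketch below (our own, reusing hypothetical loc_prob / map_prob callables like those of the E-step example) returns a per-step normalized log-likelihood that can be thresholded either for whole trajectories or over a sliding window:

```python
import numpy as np

def trajectory_log_score(traj, M, prior, loc_prob, map_prob):
    """Average log-likelihood per step of a trajectory under the behavioral-map HMM.

    traj     : list of successive ground-plane locations.
    M        : number of behavioral maps (hidden states).
    prior    : length-M prior over maps.
    loc_prob : callable (l_prev, l_next, m) -> observation probability of the move.
    map_prob : callable (l_prev, m_prev, m) -> map transition probability.
    """
    alpha = np.asarray(prior, float)              # forward variable over hidden maps
    log_lik = 0.0
    for k in range(1, len(traj)):
        prev, cur = traj[k - 1], traj[k]
        new_alpha = np.array([sum(loc_prob(prev, cur, mp) * map_prob(prev, mp, m) * alpha[mp]
                                  for mp in range(M)) for m in range(M)])
        scale = new_alpha.sum()                   # P(move_k | previous moves)
        log_lik += np.log(scale)
        alpha = new_alpha / scale
    return log_lik / max(len(traj) - 1, 1)

# A trajectory (or a window of it) is flagged as "unexpected" when its score
# falls below a threshold tau chosen on held-out data:
# flag = trajectory_log_score(traj, M, prior, loc_prob, map_prob) < tau
```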
5 Conclusion
We have presented a novel model for the motions of individuals which goes beyond the classical Markovian assumption by relying on a hidden behavioral state. While simple, this model can capture complex mixtures of patterns in public places and can be trained in a fully unsupervised manner from video sequences. Experiments show how it can be integrated into a fully automatic multi-camera tracking system and how it improves the accuracy and reliability of tracking in ambiguous situations. Moreover, it allows for the characterization of abnormal trajectories with a very high confidence. Future work will consist of extending the model so that it can cope with a larger class of abnormal behaviors which can be characterized only by looking at statistical inconsistencies over long periods of time or inconsistencies between the individual appearance and the motion.
References 1. Khan, S., Shah, M.: A Multiview Approach to Tracking People in Crowded Scenes Using a Planar Homography Constraint. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3954, pp. 133–146. Springer, Heidelberg (2006) 2. Fleuret, F., Berclaz, J., Lengagne, R., Fua, P.: Multi-camera people tracking with a probabilistic occupancy map. IEEE Transactions on Pattern Analysis and Machine Intelligence 30(2), 267–282 (2007) 3. Zhao, T., Nevatia, R.: Tracking multiple humans in crowded environment. In: Conference on Computer Vision and Pattern Recognition (2004) 4. Smith, K., Gatica-Perez, D., Odobez, J.M.: Using particles to track varying numbers of interacting people. In: Conference on Computer Vision and Pattern Recognition (2005) 5. Kang, J., Cohen, I., Medioni, G.: Tracking people in crowded scenes across multiple cameras. In: Asian Conference on Computer Vision (2004) 6. Oh, S., Russell, S., Sastry, S.: Markov chain monte carlo data association for general multiple-target tracking problems. In: IEEE Conference on Decision and Control, Paradise Island, Bahamas (2004) 7. Bui, H., Venkatesh, S., West, G.: Policy recognition in the abstract hidden markov models. Journal of Artificial Intelligence Research 17, 451–499 (2002)
8. Berclaz, J., Fleuret, F., Fua, P.: Pom: Probability occupancy map (2007), http://cvlab.epfl.ch/software/pom/index.php 9. Johnson, N., Hogg, D.: Learning the distribution of object trajectories for event recognition. In: British Machine Vision Conference (1995) 10. Stauffer, C., Grimson, W.E.L.: Learning patterns of activity using real-time tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence 22(8), 747– 757 (2000) 11. Bennewitz, M., Burgard, W., Cielniak, G.: Utilizing learned motion patterns to robustly track persons. In: Proceedings of the Joint IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance (VS-PETS) (2003) 12. Makris, D., Ellis, T.: Learning semantic scene models from observing activity in visual surveillance. IEEE Transactions on Systems, Man, and Cybernetics, Part B 35(3), 397–408 (2005) 13. Hu, W., Xiao, X., Fu, Z., Xie, D., Tan, T., Maybank, S.: A system for learning statistical motion patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(9), 1450–1464 (2006) 14. Antonini, G., Venegas, S., Thiran, J.P., Bierlaire, M.: A discrete choice pedestrian behavior model for pedestrian detection in visual tracking systems. In: Advanced Concepts for Intelligent Vision Systems (2004)
Automatic Image Colorization Via Multimodal Predictions

Guillaume Charpiat¹,³, Matthias Hofmann²,³, and Bernhard Schölkopf³

¹ Pulsar Team, INRIA, Sophia-Antipolis, France
² Wolfson Medical Vision Lab, Dpt. of Engineering Science, University of Oxford, UK
³ Max Planck Institute for Biological Cybernetics, Tübingen, Germany
Abstract. We aim to color greyscale images automatically, without any manual intervention. The color proposition could then be interactively corrected by user-provided color landmarks if necessary. Automatic colorization is nontrivial since there is usually no one-to-one correspondence between color and local texture. The contribution of our framework is that we deal directly with multimodality and estimate, for each pixel of the image to be colored, the probability distribution of all possible colors, instead of choosing the most probable color at the local level. We also predict the expected variation of color at each pixel, thus defining a non-uniform spatial coherency criterion. We then use graph cuts to maximize the probability of the whole colored image at the global level. We work in the L-a-b color space in order to approximate the human perception of distances between colors, and we use machine learning tools to extract as much information as possible from a dataset of colored examples. The resulting algorithm is fast, designed to be more robust to texture noise, and is above all able to deal with ambiguity, contrary to previous approaches.
1 Introduction
Automatic image colorization consists in adding colors to a new greyscale image without any user intervention. The problem, stated like this, is ill-posed, in the sense that one cannot guess the colors to assign to a greyscale image without any prior knowledge. Indeed, many objects can have different colors: not only can artificial, plastic objects have arbitrary colors, but natural objects like tree leaves can show various nuances of green and turn brown during autumn, without a significant change of shape. The color prior most often considered in the literature is the user: many approaches consist in letting the user determine the color of some areas and then in extending this information to the whole image, either by pre-computing a segmentation of the image into (hopefully) homogeneous color regions, or by
Fig. 1. Failure of standard colorization algorithms in the presence of texture. Left: manual initialization; right: result of Levin et al.'s approach. In spite of the general efficiency of their simple method (based on the mean and standard deviation of local intensity neighborhoods), texture remains difficult to deal with. Hence the need for texture descriptors and for learning edges from color examples.
spreading color flows from the user-defined color points. This last method requires a definition of how difficult it is for the color flow to pass through each pixel, usually estimated as a simple function of local greyscale intensity variations, as in Levin et al. [1], Yatziv and Sapiro [2], or Horiuchi [3], or by predefined thresholds that detect color edges [4]. However, we have noticed, for instance, that the simple, efficient framework by Levin et al. cannot deal with texture examples such as the one in figure 1, whereas simple oriented texture features such as Gabor filters would obviously have solved the problem1. This is why we need to consider texture descriptors. More generally, each hand-made criterion for edge estimation has its drawbacks, and therefore we will learn the probability of color change at each pixel instead of setting a hand-made criterion. User-based approaches have the advantage that the user can interact, adding more color points if needed until a satisfying result is reached, or even placing color points strategically in order to give indirect information on the location of color boundaries. Our method can easily be adapted to incorporate such user-provided color information. The task of first providing a fully automatic colorization of the image, before a possible user intervention if necessary, is however much harder. Some recent attempts at predicting colors have given mixed results. For instance, the problem has been studied by Welsh et al. [5]; it is also one of the applications presented by Hertzmann et al. in [6]. However, the framework developed in both cases is not expressed mathematically; in particular, it is not clear whether an energy is minimized, and the results shown seem to deal with only a few colors, with many small artifacts probably due to the lack of a suitable spatial coherency criterion. Another notable study is by Irony et al. [7]: it consists in finding a few points in the image where a color prediction algorithm reaches the highest confidence, and then applying Levin et al.'s approach as if these points were given by the user. Their approach to color prediction is based on a learning set of colored images, partially segmented by the user into regions. The new image to be colored is then automatically segmented into locally homogeneous regions whose texture is similar to one of the colored regions previously observed, and the colors are transferred. Their method reduces the effort required of the user
Their code is available at http://www.cs.huji.ac.il/~weiss/Colorization/.
but still requires a manual pre-processing step. To avoid this, they proposed to segment the training images automatically into regions of homogeneous texture, but fully automatic segmentation (based on texture or not) is known to be a very hard problem. In our approach we will not rely on an automatic segmentation of the training or test images but will build on more robust ground. Irony et al.'s method also brings more spatial coherency than previous approaches, but the way coherency is achieved is still very local, since it can be seen as a one-pixel-radius filter, and furthermore it relies partly on an automatic segmentation of the new image. Above all, the latter method, as well as all other former methods in the colorization literature (to the knowledge of the authors), cannot deal with the case where an ambiguity concerning the colors to assign can be resolved only at the global level. The point is that local predictions based on texture are most often very noisy and unreliable, so that information needs to be integrated over large regions to become significant. Similarly, the performance of an algorithm based on texture classification, such as Irony et al.'s, would drop dramatically with the number of possible texture classes, so that there is a real need for robustness against texture misclassification and noise. In contrast to previous approaches, we will avoid relying on very local texture-based classification or segmentation and will focus on more global approaches. The color assignment ambiguity also arises when the shape of objects is relevant to determine the color of the whole object. More generally, it appears that boundaries of objects carry much information that is not immediately usable, such as the presence of edges in the color space, and also contain significant details which can help the identification of the whole object, so that the colorization problem cannot be solved at the local level of pixels. In this article we try to make use of all the information available, without neglecting any low probability at the local level which could make sense at the global level. Another source of prior information is motion and time coherency in the case of video sequences to be colored [1]. Though our framework can easily be extended to the film case and benefit from this information, we deal only with still images in this article. First we present, in section 2, the model we chose for the color space, as well as the model for local greyscale texture. Then, in section 3, we state the problem of automatic image colorization in machine learning terms, explain why the naive approach, which consists in trying to predict color directly from texture, performs poorly, and show how to solve the problem by learning multimodal probability distributions of colors, which allows the consideration of different possible colorizations at the local level. We also show how to take into account user-provided color landmarks when necessary. We start section 4 with a way to learn how likely color variations are at each pixel of the image to color. This defines a spatial coherency criterion which we use to express the whole problem mathematically. We solve a discrete version of it via graph cuts, whose result is already very good, and refine it to solve the original continuous problem.
2 Model for Colors and Greyscale Texture
We first model the basic quantities to be considered in the image colorization problem. Let I denote a greyscale image to be colored, p the location of one particular pixel, and C a proposed colorization, that is to say an image of the same size as I but whose pixel values C(p) lie in the standard RGB color space. Since the greyscale information is already given by I(p), we constrain C(p) so that computing the greyscale intensity of C(p) gives I(p) back. Thus the dimension of the color space to be explored is intrinsically 2 rather than 3. We present in this section the model chosen for the color space, the way we discretize it for further purposes, and how to express continuous probability distributions of colors out of such a discretization. We also present the feature space used for the description of greyscale patches.

2.1 L-a-b Color Space
In order to quantify how similar or how different two colors are, we need a metric in the space of colors. Such a metric is also required to associate to any color a corresponding grey level, i.e. the closest unsaturated color. This is also at the core of the color coherency problem: an object with a uniform reflectance will show different colors in its illuminated and shadowed parts since they have different grey levels, so that we need a way to define colors robustly against changes of lightness, that is, to consider how colors are expected to vary as a function of the grey level, i.e. how to project a dark color onto the subset of all colors that share a particular brighter grey level. The psychophysical L-a-b color space was historically designed so that the Euclidean distance between the coordinates of any colors in this space approximates as well as possible the human perception of distances between colors. The transformation from standard RGB colors to L-a-b consists in first applying gamma correction, followed by a linear map in order to obtain the XYZ color space, and then by another highly non-linear map which is basically a linear combination of the cube roots of the XYZ coordinates. We refer to Lindbloom's website2 for more details on color spaces and for the exact formulas. The L-a-b space has three coordinates: L expresses the luminance, or lightness, and is consequently the greyscale axis, whereas a and b stand for two orthogonal color axes. We choose the L-a-b space to represent colors since its underlying metric has been designed to express color coherency. In the following, by grey level and 2D color we mean respectively L and (a, b). With the previous notations, since we already know the grey level I(p) of the color C(p) to predict at pixel p, we search only for the remaining 2D color, denoted by ab(p).
http://brucelindbloom.com/
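The conversion itself is standard and would normally come from a library (e.g. scikit-image's rgb2lab); purely as an illustration, and assuming sRGB input in [0, 1] with a D65 white point, a minimal sketch is:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triple in [0, 1] to CIE L-a-b (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # inverse gamma (sRGB companding)
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    # linear RGB -> XYZ (sRGB primaries, D65)
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = M @ lin
    # normalize by the D65 reference white
    xyz /= np.array([0.95047, 1.0, 1.08883])
    # nonlinear compression (cube root with a linear toe near zero)
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return L, a, b
```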
2.2 Discretization of the Color Space
This subsection contains more technical details and can be skipped on a first reading. In section 3 we will temporarily need a discretization of the 2D color space. Instead of setting a regular grid, we define a discretization adapted to the color images given as examples, so that each color bin contains approximately the same number of observed pixels with this color. Indeed, entire zones of the color space are useless, and we prefer to allocate more color bins where the density of observed colors is higher, so that we can have more nuances where it makes statistical sense. Figure 2 shows the densities of colors corresponding to some images, as well as the discretizations into 73 bins resulting from these densities. We colored each color bin by the average color of the points in the bin. To obtain these discretizations, we used a polar coordinate system in ab and recursively cut the color bins with the highest numbers of points into 4 parts at their average color. The precise discretization algorithm is not important here provided it makes sense statistically; we could have used K-means for instance.
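As an illustration of this adaptive binning, here is a sketch only: it splits in Cartesian (a, b) coordinates rather than the polar parameterization mentioned above, and the function name and stopping rule are ours.

```python
import numpy as np

def adaptive_color_bins(ab_samples, n_bins=73):
    """Recursively split the most populated bin into 4 quadrants at its mean (a, b).
    ab_samples: (N, 2) array of observed 2D colors."""
    bins = [np.asarray(ab_samples, dtype=float)]
    while len(bins) < n_bins:
        i = int(np.argmax([len(b) for b in bins]))   # most populated bin
        pts = bins.pop(i)
        ca, cb = pts.mean(axis=0)                    # split point = average color
        quads = [pts[(pts[:, 0] <  ca) & (pts[:, 1] <  cb)],
                 pts[(pts[:, 0] <  ca) & (pts[:, 1] >= cb)],
                 pts[(pts[:, 0] >= ca) & (pts[:, 1] <  cb)],
                 pts[(pts[:, 0] >= ca) & (pts[:, 1] >= cb)]]
        quads = [q for q in quads if len(q) > 0]
        if len(quads) <= 1:                          # degenerate bin, cannot split further
            bins.insert(i, pts)
            break
        bins.extend(quads)
    return bins
```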
Fig. 2. Examples of color spectra and associated discretizations. For each line, from left to right: color image; corresponding 2D colors; location of the observed 2D colors in the ab-plane (a red dot for each pixel) and the computed discretization in color bins; color bins filled with their average color; continuous extrapolation: influence zones of each color bin in the ab-plane (each bin is replaced by a Gaussian, whose center is a black dot; red circles indicate the standard deviation of colors within the color bin, blue ones are three times bigger).
Further, in section 4, we will need to express densities of points over the whole plane ab, based on the densities computed in each color bin. In order to interpolate continuously the information given by each color bin i, we place Gaussian functions on the average color μi of each bin, with a standard deviation proportional to the statistically observed standard deviation σi of points of each color bin (see last column of figure 2). Then we interpolate the original density d(i) to any point x in the ab plane by:
d_G(x) = \sum_i \frac{1}{2\pi(\alpha\sigma_i)^2} \exp\!\left(-\frac{(x-\mu_i)^2}{2(\alpha\sigma_i)^2}\right) d(i)
We found experimentally that a factor α ≈ 2 significantly improved the shape of the interpolated distributions. Cross-validation could be used to determine the optimal α for a given learning set.
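A direct transcription of this interpolation, with illustrative names and the reported α ≈ 2 as default:

```python
import numpy as np

def interpolate_density(x, bin_means, bin_stds, bin_density, alpha=2.0):
    """Continuous extrapolation d_G(x) of per-bin densities d(i) to any (a, b)
    point x, as a sum of isotropic 2D Gaussians centered on the bin means."""
    x = np.asarray(x, dtype=float)
    diff2 = ((bin_means - x) ** 2).sum(axis=1)      # (x - mu_i)^2 per bin
    s2 = (alpha * bin_stds) ** 2                    # (alpha * sigma_i)^2
    return float(np.sum(bin_density * np.exp(-diff2 / (2 * s2)) / (2 * np.pi * s2)))
```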
2.3 Greyscale Patches and Features
The grey level of one pixel is clearly not informative enough to decide which 2D color we should assign to it. The hope is that texture and local context will give more clues for such a task (see section 3). In order to extract as much information as possible to describe local neighborhoods of pixels in the greyscale image, we compute SURF descriptors [8] at three different scales for each pixel. This leads to a vector of 192 features per pixel. In order to reduce the number of features and to condense the relevant information, we apply Principal Component Analysis (PCA) and keep the first 27 eigenvectors, to which we add, as supplemental components, the pixel grey level as well as two biologically inspired features: a weighted standard deviation of the intensity in a 5 × 5 neighborhood (whose meaning is close to the norm of the gradient), and a smooth version of its Laplacian. We will refer to this 30-dimensional vector, computed at each pixel, as the local description in the following, and denote it by v or w.
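The SURF computation itself comes from [8] and is not reproduced here. As a hedged sketch of the remaining ingredients (the exact neighborhood weights and smoothing scales are not specified in the paper, so the choices below are illustrative), the two supplemental features and the PCA reduction might be computed as follows.

```python
import numpy as np
from scipy import ndimage

def extra_features(grey):
    """Local standard deviation over a 5x5 neighborhood (here uniformly weighted)
    and a smoothed Laplacian; both are assumptions about the unspecified details."""
    mean = ndimage.uniform_filter(grey, size=5)
    mean_sq = ndimage.uniform_filter(grey ** 2, size=5)
    local_std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    smooth_laplacian = ndimage.gaussian_laplace(grey, sigma=1.0)
    return local_std, smooth_laplacian

def reduce_surf(surf_192, n_keep=27):
    """PCA-reduce the per-pixel 192-dim SURF stack (N x 192) to 27 components."""
    X = surf_192 - surf_192.mean(axis=0)
    # principal axes are the rows of Vt from the SVD of the centered data
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_keep].T
```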
3 Color Prediction

Now that we have modelled colors as well as local descriptions of greyscale images, we can now state the image colorization problem. Given a set of example color images and a new greyscale image I to be colored, we would like to extract knowledge from the learning set to predict colors C(p) for the new image.

3.1 Need for Multimodality
One could state this problem in simple machine learning terms: learn the function which associates, to any local description of a greyscale patch, the right color to assign to the center pixel of the patch. This could be achieved by kernel regression tools such as Support Vector Regression (SVR) or Gaussian Processes [9]. There is an intuitive reason why this would perform poorly. Many objects can have different colors: for instance, balloons at a fair could be green, red, blue, etc., so that even if the task of recognizing a balloon were easy and we knew that we should use colors from balloons to color the new one, a regression would recommend the average value of the observed balloon colors, i.e. grey. The problem is however not specific to objects of the same class. Local descriptions of greyscale patches of skin or sky are very similar, so that learning from images
including both would recommend coloring skin and sky purple, ignoring the fact that this average value is never probable. We therefore need a way to deal with multimodality, i.e. to predict different colors if needed, or more exactly, to predict at each pixel the probability of every possible color. This is in fact the conditional probability of colors given the local description of the greyscale patch around the pixel considered. We will also be interested in the confidence in these predictions, in order to know whether some predictions are more reliable than others.

3.2 Probability Distributions as Density Estimations
The conditional probability of the color ci at pixel p, given the local description v of its greyscale neighborhood, can be expressed as the fraction, amongst colored examples ej = (wj, c(j)) whose local description wj is similar to v, of those whose observed color c(j) lies in the same color bin Bi. We thus have to estimate densities of points in the feature space of grey patches. This can be accomplished with a Gaussian Parzen window estimator:

p(c_i \,|\, v) = \frac{\sum_{\{j : c(j) \in B_i\}} k(w_j, v)}{\sum_j k(w_j, v)}

where k(w_j, v) = e^{-\|w_j - v\|^2 / 2\sigma^2} is the Gaussian kernel. The best value for the standard deviation σ can be estimated by cross-validation on the densities. With this framework we can also express how reliable the probability estimation is: its confidence depends directly on the density of examples around v, since an estimation far from the clouds of observed points loses significance. Thus, the confidence in a probability prediction is the density in the feature space itself:

p(v) \propto \sum_j k(w_j, v)
In practice, for each pixel p, we compute the local description v(p), but we do not need to compute the similarities k(v, wj) to all examples in the learning set: in order to save computational time, we only search for the K-nearest neighbors of v in the learning set, with K sufficiently large (as a function of the σ chosen), and estimate the Parzen densities based on these K points. In practice we choose K = 500, and thanks to fast nearest neighbor search techniques such as kD-trees3, the time needed to compute all predictions for all pixels of a 50 × 50 image is only 10 seconds (for a learning set of hundreds of thousands of patches) and this scales linearly with the number of test pixels. Note that we could also have used sparse kernel techniques such as SVR to estimate, for each color bin, a regression between the local descriptions and the probability of falling into the color bin. We refer more generally to [9] for details and discussions about kernel methods.
We use, without particular optimization, the TSTOOL package available at http://www.physik3.gwdg.de/tstool/.
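As a concrete but hedged sketch of the K-nearest-neighbor Parzen estimation described above (function and variable names are illustrative, and the kD-tree here is SciPy's cKDTree rather than the TSTOOL package used by the authors):

```python
import numpy as np
from scipy.spatial import cKDTree

def color_probabilities(v, tree, train_bin, n_bins, sigma, K=500):
    """Parzen-window estimate of p(c_i | v) and of the confidence p(v),
    restricted to the K nearest training patches.

    v         : (30,) local description of the pixel to color
    tree      : cKDTree built once over the (N, 30) training descriptions
    train_bin : (N,) color bin index observed for each training patch
    Returns (probabilities over the n_bins color bins, unnormalized confidence)."""
    dist, idx = tree.query(v, k=K)
    k_vals = np.exp(-dist ** 2 / (2 * sigma ** 2))   # Gaussian kernel k(w_j, v)
    confidence = k_vals.sum()                        # p(v) up to a constant factor
    probs = np.bincount(train_bin[idx], weights=k_vals, minlength=n_bins)
    probs /= max(confidence, 1e-12)
    return probs, confidence
```

Usage would build the tree once, `tree = cKDTree(train_descriptions)`, and then call the function for every pixel of the test image.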
3.3 User-Provided Color Landmarks
We can easily incorporate user-provided information, such as the color c at pixel p, in order to modify a colorization obtained automatically. We set p(c|p) = 1 and set the confidence p(p) to a very large value. Consequently our optimization framework remains usable for further interactive colorization. A re-colorization with new user-provided color landmarks does not require the re-estimation of color probabilities, and therefore lasts only a fraction of a second (see the next section).
4 Spatial Coherency

For each pixel of a new greyscale image, we are now able to estimate the probability distribution of all possible colors (within a large finite set of colors, since we discretized the color space into bins). The benefit of such a computation is that, once we add a spatial coherency criterion, a pixel will be influenced by its neighbors, and the best color to assign will be chosen according to the probability distributions in the neighborhood. Since all pixels are linked by neighborhoods, even if not directly, they all interact with each other, so that the solution has to be computed globally. Indeed, it may happen that, in a region that is supposed to be homogeneous, a few different colors seem the most probable ones at the local level, but that the winning color at the scale of the region is different because, in spite of having only second-rank probability locally, it ensures a good probability everywhere in the whole region. The opposite may also happen: to flip a whole such region to a color, it may be sufficient that this color is considered extremely probable at a few points with high confidence. The problem is consequently not trivial, and the issue is to find a global solution. In this section we first learn a spatial coherency criterion, and then find a good solution to the whole problem with the help of graph cuts.

4.1 Local Color Variation Prediction
Instead of arbitrarily picking a prior for spatial coherency, based either on the detection of edges, or on the Laplacian of the intensity, or on a pre-estimated complete segmentation, we learn directly how likely it is to observe a color variation at a pixel given the local description of its greyscale neighborhood, based on a learning set of real color images. The technique is similar to the one detailed in the previous section. For each example wj of a colored patch, we compute the norm gj of the gradient of the 2D color (in the L-a-b space) at the center of the patch. The expected color variation g(v) at the center of a new greyscale patch v is then:

g(v) = \frac{\sum_j k(w_j, v)\, g_j}{\sum_j k(w_j, v)}

Thus we now have priors both on the colors and on the color variations.
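Under the same K-nearest-neighbor Parzen machinery as before, the expected color variation is just a kernel-weighted average of the training gradient norms; a minimal sketch reusing the kD-tree from the previous snippet (names illustrative):

```python
import numpy as np

def expected_color_variation(v, tree, train_grad_norm, sigma, K=500):
    """Kernel-weighted average g(v) of the ab-gradient norms of the K nearest
    training patches; `tree` is the cKDTree over training descriptions."""
    dist, idx = tree.query(v, k=K)
    k_vals = np.exp(-dist ** 2 / (2 * sigma ** 2))
    return float(np.sum(k_vals * train_grad_norm[idx]) / np.sum(k_vals))
```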
4.2 Global Coherency Via Graph Cuts
The graph cut, or max-flow, algorithm is a minimization technique widely used in computer vision [10,11] because of its suitability for many image processing problems, its guarantee of finding a good local minimum, and its speed. In the multi-label case with α-expansion [12], it can be applied to all energies of the form \sum_i V_i(x_i) + \sum_{i \sim j} D_{i,j}(x_i, x_j), where the x_i are the unknown variables with values in a finite set L of labels, the V_i are arbitrary functions, and the D_{i,j} are pair-wise interaction terms with the restriction that each D_{i,j}(\cdot, \cdot) should be a metric on L. The reason why we temporarily discretized the color space in section 2 was to be able to use this technique. We formulate the image colorization problem as the optimization of the following energy:

\sum_p V_p(c(p)) + \lambda \sum_{p \sim q} \frac{|c(p) - c(q)|_{Lab}}{g_{p,q}}    (1)

where V_p(c(p)) = -\log \big[ p(v(p))\, p(c(p) \,|\, v(p)) \big] is the cost of choosing color c(p) for pixel p (whose neighboring texture is described by v(p)), and where g_{p,q} = 2 \big( g(v(p))^{-1} + g(v(q))^{-1} \big)^{-1} is the harmonic mean of the estimated color variations at pixels p and q. An 8-neighborhood is considered for the interaction term, and p ∼ q means that p and q are neighbors. The term Vp penalizes colors which are not probable at the local level according to the probability distributions obtained in section 3, with a strength depending on the confidence in the predictions. The interaction term between pixels penalizes color variation where it is not expected, according to the variations predicted in the previous paragraph. We use the graph cut package4 provided by [13]. The solution for a 50 × 50 image and 73 possible colors is obtained by graph cuts in a fraction of a second and is generally already very satisfying. The computation time scales approximately quadratically with the size of the image, which is still fast, and the algorithm performs well even on massively down-scaled versions of the image, so that a good initial clue can still be given quickly for very big images too. The computational cost competes with that of the fastest colorization techniques [14] while bringing much more spatial coherency.
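For concreteness, here is a hedged sketch that only evaluates energy (1) for a given labeling; the α-expansion solver of [13] (or an equivalent library) would be used to minimize it, and all names, the 4-neighborhood, and the use of the ab distance between bin centers in place of the full Lab distance are simplifying assumptions.

```python
import numpy as np

def colorization_energy(labels, unary, g_var, bin_colors_ab, lam):
    """Evaluate energy (1) for a labeling of the image into color bins.

    labels        : (H, W) int array, chosen color bin at each pixel
    unary         : (H, W, n_bins) array of V_p(c) = -log[p(v(p)) p(c|v(p))]
    g_var         : (H, W) predicted color variation g(v(p)) at each pixel
    bin_colors_ab : (n_bins, 2) average (a, b) color of each bin
    lam           : weight lambda of the pairwise term"""
    H, W = labels.shape
    e = unary[np.arange(H)[:, None], np.arange(W)[None, :], labels].sum()
    for dy, dx in [(0, 1), (1, 0)]:                  # right and down neighbors only
        a = labels[:H - dy, :W - dx]
        b = labels[dy:, dx:]
        # ab distance between bin centers stands in for |c(p) - c(q)|_Lab here
        diff = np.linalg.norm(bin_colors_ab[a] - bin_colors_ab[b], axis=-1)
        g_a = g_var[:H - dy, :W - dx]
        g_b = g_var[dy:, dx:]
        g_pq = 2.0 / (1.0 / g_a + 1.0 / g_b)         # harmonic mean of variations
        e += lam * (diff / g_pq).sum()
    return float(e)
```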
4.3 Refinement in the Continuous Color Space
We can now go back to the initial problem in the continuous space of colors. We interpolate the probability distributions p(ci|v(p)), estimated at each pixel p for each color bin i, to the whole space of colors with the technique described in section 2, so that Vp(c) is now defined for any color c and not only for color bins. The energy (1) can consequently be minimized in the continuous space of colors. We start from the solution obtained by graph cuts and refine it with a gradient descent. This refinement step will generally not introduce huge changes like flipping of whole regions, but will bring more nuances.
Available at http://vision.middlebury.edu/MRF/code/
5 Experiments
We now show results of automatic colorization. In figure 3 we colored a famous painting by Leonardo Da Vinci with another of his paintings. The paintings are significantly different and the textures are relatively dissimilar. The prediction of color variation performs well and helps considerably in determining the boundaries of homogeneous color regions. The multimodality framework proves extremely useful in areas such as Mona Lisa's forehead or neck, where the texture of skin can easily be mistaken for the texture of sky at the local level. Without our global optimization framework, several entire skin regions would be colored blue, disregarding the fact that skin color is a second probable colorization possibility for these areas, which makes sense at the global level since they are surrounded by skin-colored areas with a low probability of edges. We emphasize that the input of previous texture-based approaches is very similar to the "most probable color" prediction (second line, middle image), whereas we consider the probabilities of all possible colors at all pixels. This means that, given a certain quality of texture descriptors, we handle much more information. In figure 4 we perform similar experiments with photographs of landscapes. The effect of the refinement step can be observed in the sky, where nuances of blue vary more smoothly.
Fig. 3. Da Vinci case: Mona Lisa colored with Madonna. The border is not colored because of the window size needed for SURF descriptors. Second line: color variation predicted (white stands for homogeneity and black for color edge); most probable color at the local level; 2D color chosen by graph cuts. Note that previous algorithms could not deal with regions such as the neck or the forehead, where blue is the most probable color at the local level because greyscale skin looks like sky. Surroundings of these regions and lower-probability colors are decisive for the final color choice.
Fig. 4. Landscape example. Same display as in figure 3, plus last image: colors obtained after the refinement step. Note that the sky is more homogeneous and the color gradients in the sky are smoother than when obtained directly by graph cuts (previous image).
Fig. 5. Comparable results with Irony et al. on their own example [7]. First line: our result. Second line, left: our predicted 2D colors; right: Irony et al.'s result with the assumption that it is a binary classification problem. Note that this example, which they chose, could be solved easily without texture considerations since there is a simple correspondence between grey level intensity and color. Concerning our result, the color variations in the grass around the zebra are probably due to the influence of the grass color for similar patches in the learning image, and this problem should disappear with a larger training set. Note however, if you zoom in, that the contour of the zebra matches exactly the boundaries of the 2D color regions (purple and pink), whereas in Irony et al.'s result a several-pixel-wide band of grass around the zebra's legs and abdomen is colored as if it were part of the zebra. The moral is that we should not try to learn colors from a single overfitted, carefully chosen training image but rather check how the colorization scales with the number of training images and with their non-perfect suitability.
Fig. 6. Charlie Chaplin frame colored using three different images. Second line: prediction of color variations, most probable color at the local level, and final result. This example is particularly difficult because many different textures and colors are involved and because no color images exist that are very similar to the Charlie Chaplin frame. We have chosen three different images, each of which shares a partial similarity with the target image. In spite of the complexity, the prediction of color edges and of homogeneous regions remains significant. The brick wall, the door, the head and the hands are globally well colored. The large trousers are not in the learning set; the mistakes in the colors of Charlie Chaplin's dog are probably due to the blue reflections on the dog in the learning image and to the light brown of its head. Both would probably benefit from larger training sets.
We compare our method with the one by Irony et al. on their own example [7] in figure 5; the task is easier and the results are similar. The boundaries of our color regions even fit the zebra contour better. Grass areas near the zebra are colored according to the grass observed at similar locations around the zebra in the learning image. This bias should disappear with bigger training sets, since the color of the background would become independent of the zebra's presence. We would have liked to compare results on a wider scale, in particular on our difficult examples, but we face the problems of source code availability and of objective comparison criteria. In figure 6 we consider a very difficult task: coloring an image from a Charlie Chaplin movie, with many different objects and textures, such as a brick wall, a door, a dog, a head, hands, a loose suit... Because of the number of objects, and because of their particular arrangement, it is unlikely that one could find a single color image with a similar scene to use as a learning image. Thus we consider a small set of three different images, each of which shares a partial similarity with Charlie Chaplin's image. The underlying difficulty is that each learning image also contains parts which should not be re-used in this target image. The result is however promising, considering the learning set. Dealing with bigger learning datasets should slow the process down only logarithmically, during the kD-tree search.
6 Discussion
Our contribution is both theoretical and practical. We proposed a new approach for automatic image colorization which does not require any intervention by the user, apart from the choice of relatively similar color images. We stated the problem mathematically as an optimization problem with an explicit energy to minimize. Since it deals with multimodal probability distributions until the final step, this approach makes better use of the information that can be extracted from images by machine learning tools. The fact that we solve the problem directly at the global level, with the help of graph cuts, makes the framework more robust to noise and to local prediction errors. It also makes it possible to resolve large-scale ambiguities which previous approaches could not. This multimodality framework is not specific to image colorization and could be re-used in any prediction task on images. We also proposed a way to learn and predict where color variations are probable and how important they are, instead of choosing a spatial coherency criterion by hand, and this performs quite well (see figures 3 and 6 again). It relies on many features and adapts itself to the current training set. Our colorization approach outperforms [5] and [6], whose examples contain only few colors and lack spatial coherency. Our process requires less intervention than, or a similar amount to, [7], but can handle more ambiguous cases and more texture noise. The computational time needed is also very low, enabling real-time user interaction, since a re-colorization with new color landmarks lasts a fraction of a second. The automatic colorization results should not be compared to those obtained by user-assisted approaches, since we do not benefit from such decisive information. Nevertheless, even with such a handicap, our results clearly compete with the state of the art of colorization with few user scribbles.
Acknowledgments We would like to thank Jason Farquhar, Peter Gehler, Matthew Blaschko and Christoph Lampert for very fruitful discussions.
References
1. Levin, A., Lischinski, D., Weiss, Y.: Colorization using optimization. In: SIGGRAPH 2004, pp. 689–694. ACM Press, Los Angeles, California (2004)
2. Yatziv, L., Sapiro, G.: Fast image and video colorization using chrominance blending. IEEE Transactions on Image Processing 15(5), 1120–1129 (2006)
3. Horiuchi, T.: Colorization algorithm using probabilistic relaxation. Image and Vision Computing 22(3), 197–202 (2004)
4. Takahama, T., Horiuchi, T., Kotera, H.: Improvement on colorization accuracy by partitioning algorithm in CIELab color space. In: Aizawa, K., Nakamura, Y., Satoh, S. (eds.) PCM 2004. LNCS, vol. 3332, pp. 794–801. Springer, Heidelberg (2004)
5. Welsh, T., Ashikhmin, M., Mueller, K.: Transferring color to greyscale images. In: SIGGRAPH 2002: Proc. of the 29th Annual Conf. on Computer Graphics and Interactive Techniques, pp. 277–280. ACM Press, New York (2002)
6. Hertzmann, A., Jacobs, C.E., Oliver, N., Curless, B., Salesin, D.H.: Image analogies. In: Fiume, E. (ed.) SIGGRAPH 2001, Computer Graphics Proceedings, pp. 327–340. ACM Press, New York (2001)
7. Irony, R., Cohen-Or, D., Lischinski, D.: Colorization by example. In: Proceedings of the Eurographics Symposium on Rendering 2005 (EGSR 2005), Konstanz, Germany, June 29–July 1, 2005, pp. 201–210 (2005)
8. Bay, H., Tuytelaars, T., Van Gool, L.: SURF: Speeded up robust features. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3954. Springer, Heidelberg (2006)
9. Schölkopf, B., Smola, A.J.: Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge (2001)
10. Boykov, Y., Kolmogorov, V.: An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. In: Figueiredo, M., Zerubia, J., Jain, A.K. (eds.) EMMCVPR 2001. LNCS, vol. 2134, pp. 359–374. Springer, Heidelberg (2001)
11. Kolmogorov, V., Zabih, R.: What energy functions can be minimized via graph cuts? In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) ECCV 2002. LNCS, vol. 2352, pp. 65–81. Springer, Heidelberg (2002)
12. Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. In: ICCV, pp. 377–384 (1999)
13. Szeliski, R., Zabih, R., Scharstein, D., Veksler, O., Kolmogorov, V., Agarwala, A., Tappen, M.F., Rother, C.: A comparative study of energy minimization methods for Markov random fields. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3952, pp. 16–29. Springer, Heidelberg (2006)
14. Blasi, G., Recupero, D.R.: Fast colorization of gray images. In: Proc. of Eurographics Italian Chapter (2003)
CSDD Features: Center-Surround Distribution Distance for Feature Extraction and Matching

Robert T. Collins and Weina Ge
The Pennsylvania State University, University Park, PA 16802, USA
{rcollins,ge}@cse.psu.edu
Abstract. We present an interest region operator and feature descriptor called Center-Surround Distribution Distance (CSDD) that is based on comparing feature distributions between a central foreground region and a surrounding ring of background pixels. In addition to finding the usual light(dark) blobs surrounded by a dark(light) background, CSDD also detects blobs with arbitrary color distribution that "stand out" perceptually because they look different from the background. A proof-of-concept implementation using an isotropic scale-space extracts feature descriptors that are invariant to image rotation and covariant with change of scale. Detection repeatability is evaluated and compared with other state-of-the-art approaches using a standard dataset, while use of CSDD features for image registration is demonstrated within a RANSAC procedure for affine image matching.
1 Introduction
One of the key challenges in object recognition and wide-baseline stereo matching is detecting corresponding image regions across large changes in viewpoint. Natural images tend to be piecewise smooth, and most small image regions are therefore near-uniform or edge-like, ill-suited for accurate localization and matching. Although larger regions tend to be more discriminative, they are more likely to span object boundaries and are harder to match because view variation changes their appearance. The idea behind interest region detection is to find patches that can be reliably detected and localized across large changes in viewpoint. State-of-the-art approaches include regions that self-adapt in shape to be covariant to image transformations induced by changing rotation, scale and viewing angle [1]. To date, the majority of interest region detectors search for image areas where local intensity structure is corner-like (containing gradients of several different orientations) or blob-like (exhibiting a center-surround contrast difference). Since it is not possible to predict a priori the spatial extent of interesting image structures, detection is often performed within a multiresolution framework where the center and size of interesting regions are found by seeking local extrema of a scale-space interest operator. Our work is motivated by the goal of finding larger interest regions that are more complex in appearance and thus more discriminative. We seek to improve
Fig. 1. Detection results for a yin-yang symbol. The leftmost picture shows results from CSDD region detection, which captures the hierarchical composition of the object by correctly finding blobs at the three characteristic scales. The smaller images at right show, from top left to bottom right, the original image and results from five other detectors: Harris-affine, Hessian-affine, IBR, MSER and the Salient region detector. Only the CSDD detector captures the natural location and scale of the symbol.
the utility of larger image regions by using a feature descriptor that is insensitive to geometric deformation. Specifically, we develop an interest region operator and feature descriptor called Center-Surround Distribution Distance (CSDD) based on comparing empirical cumulative distributions of color or texture extracted from a central foreground region and a surrounding ring of background pixels. The center-surround nature of the approach makes it a variant of blob detection. However, in comparison to typical center-surround schemes that measure a difference in average feature contrast, our approach is based on fine-grained histograms of the feature distributions in the central and surrounding regions, compared using a distance measure that takes into account not only the intersection of mass in overlapping bins but also the ground distance between non-overlapping bins. Our long-term goal is to develop a method capable of extracting entire objects as interest regions, since we believe that this kind of figure-ground separation is needed to make progress in extremely wide-baseline matching scenarios. Although we cannot rely on objects and their surrounding background having uniform intensity, the distribution of color and texture on an object often differs from that of the background, and thus a center-surround approach to blob detection using feature distributions should have applications to figure-ground segmentation in addition to interest region detection (Fig. 1).

Related Work. A vast literature exists on local feature detection. Of particular interest are detectors that generate features covariant to image transformations [2,3,4,5,6,7]. Lindeberg studied scale covariant features extensively in his seminal work on scale-space theory [8]. Recent work considers affine covariance to cope with broader types of image deformations. Mikolajczyk and Schmid propose affine covariant detectors based on local image statistics characterized by Harris and Hessian matrices [6]. Tuytelaars and Van Gool incorporate edge information into
the local image structure around corners to exploit geometric constraints that are consistent across viewpoint [4]. Matas et al. extract blob-like maximally stable extremal regions (MSER) whose borders remain relatively unchanged over large ranges of greyscale thresholding [5]. Bay et al. use integral images to build a fast and effective interest point detector and descriptor [7]. Kadir et al. define a region saliency score composed of two factors: the Shannon entropy of local intensity histograms, and the magnitude of change of intensity histograms across scale. Scale-space maxima of the entropy term identify regions with complex, and thus distinctive, greyscale structure. The inter-scale term serves both to downweight salience at edges and to promote the salience of blob-like regions. Indeed, if implemented as a finite difference approximation, this term becomes the L1 distance between histograms across scale, which can be interpreted as a center-surround difference operator applied to multi-channel data. Similar center-surround ideas have long been used in salience detection. Itti and Koch [9] apply a Difference-of-Gaussian filter to each channel of a feature map to mimic receptive cell response and salience detection in human vision. More recently [10], color histograms extracted within rectangular central and surrounding regions have been compared using the χ2 distance as one component of the salience of the central region. To find a distinctive representation for detected interest regions, various region descriptors have been developed. Histograms are a simple yet effective representation. Histograms of intensity, color, intensity gradients [11], and other filter responses are prevalent in vision. We refer to [12] for an extensive discussion of measuring the difference between two histograms.
2 CSDD Features
This section presents an overview of our proposed CSDD feature for interest region extraction. An implementation-level description is provided in Sect. 3.

2.1 Center-Surround Distributions
Given a random variable f defined over a (possibly real and multi-valued) feature space, we define its empirical cumulative distribution function (hereafter referred to simply as a distribution function) within some 2D region of support of the image I as

F(v) = \int_{x \in R} \delta(I(x) \le v)\, w(x)\, dx \Big/ \int_{x \in R} w(x)\, dx    (1)

where δ(·) is an indicator function that returns 1 when the boolean expression evaluates to true and 0 otherwise, R is the real plane, and w(x) is a spatial weighting function defining the spatial extent of the region of support while perhaps emphasizing some spatial locations more than others. For example, a rectangular region with uniform weighting can be defined by

w(x, y) = \delta(a \le x \le b) \cdot \delta(c \le y \le d),    (2)
[Fig. 2 graphic: ∇2 G(x; x0, σ) decomposed into wC(x; x0, σ) and wS(x; x0, σ); darker areas denote higher weights.]
Fig. 2. Weight functions for accumulating center-surround distributions are formed by decomposing the Laplacian of Gaussian ∇2 G(x; x0, σ) into a weight function wC(x; x0, σ) defining a central circular region of radius √2 σ and a weight function wS(x; x0, σ) defining a surrounding annular region. Specifically, wC − wS = ∇2 G.
whereas

w(x) = \exp\left\{ -\frac{(x - x_0)^t (x - x_0)}{2\sigma^2} \right\}

specifies a Gaussian-weighted region centered at point x0 with circular support of roughly 3σ pixels radius. Now, consider two regions of pixels comprised of a compact central region C and a surrounding ring of neighboring pixels S. Fig. 2 illustrates the center-surround regions we use in this paper. The center region is circular, and the surround region is an annular ring. These regions are defined by weighting functions based on an isotropic Laplacian of Gaussian, ∇2 G(x0, σ), with center location x0 and scale parameter σ. Specifically, letting r = ‖x − x0‖ be the distance from the center location x0, the center and surround weighting functions, denoted wC and wS respectively, are defined as:

w_C(x; x_0, \sigma) = \begin{cases} \frac{1}{\pi\sigma^4}\left(1 - \frac{r^2}{2\sigma^2}\right) e^{-r^2/2\sigma^2} & r \le \sqrt{2}\,\sigma \\ 0 & \text{otherwise} \end{cases}    (3)

w_S(x; x_0, \sigma) = \begin{cases} -\frac{1}{\pi\sigma^4}\left(1 - \frac{r^2}{2\sigma^2}\right) e^{-r^2/2\sigma^2} & r > \sqrt{2}\,\sigma \\ 0 & \text{otherwise} \end{cases}    (4)

These nonnegative weighting functions coincide with the positive and negative channels of the LoG function, with the positive channel being the central portion of the "Mexican hat" operator, and the negative channel being the annular ring around it. Thus wC − wS = ∇2 G. Since the LoG operator integrates to zero, we know that the positive and negative channels have equal weight. Integrating in polar coordinates, we find that

\iint w_C(\cdot)\, r\, dr\, d\theta = \iint w_S(\cdot)\, r\, dr\, d\theta = \frac{2}{e\,\sigma^2}    (5)

This term 2/(e σ^2) becomes the normalization factor in the denominator of (1) when we extract feature distributions using wC and wS.
Why We Don’t Use Integral Histograms: Integral histograms are often used to extract feature histograms of rectangular regions efficiently [13]. We choose not to use them for the following reasons: • Integral histograms require space proportional to storing a full histogram at each pixel. Therefore, in practice, integral histograms are only used for coarsely quantized spaces. We use finely quantized feature spaces where histograms have hundreds of bins, making the integral histogram data structure very expensive. • Generating rectangular center-surround regions at multiple scales using integral histograms can be viewed as equivalent to generating a scale space by smoothing with a uniform box filter. It is well-known that smoothing with a box filter can lead to high-frequency artifacts. We prefer to use LoG filtering to generate a scale-space that is well-behaved with respect to non-creation/nonenhancement of extrema [8]. • Integral histograms lead to rectangular center-surround regions, aligned with the image pixel array axes. They are not a good choice for rotationallyinvariant processing. Our approach yields circular center-surround regions that are rotationally invariant, and a minor modification yields anisotropic elliptical regions (see Sect. 4.1). 2.2
Center-Surround Distribution Distance
Consider two feature distributions F and G computed over the center and surround regions described above. Intuitively, if the feature distributions F and G are very similar, it may be hard to visually discriminate the central region from its surrounding neighborhood, and this central region is therefore a bad candidate for a blob feature. On the other hand, if distributions F and G are very different, the central region is likely to be visually distinct from its surroundings, and thus easy to locate. Our hypothesis is that regions that look different from their surroundings are easy to detect and match in new images of the scene taken from a different viewpoint. Many dissimilarity measures are available to compare two cumulative distributions F and G, or density functions dF and dG. In practice, information-theoretic measures such as the χ2 distance or the Bhattacharyya coefficient for comparing two density functions are problematic since they only take into account the intersection of probability mass, not the distance between masses [14]. For example, using these measures to compare greyscale density functions represented as histograms with 256 bins, an image patch with uniform value 0 looks as different from a patch with uniform value 1 as it does from a patch with uniform value 255. This makes such measures unsuitable when viewpoint or lighting variation causes shifts in the underlying intensity/color of corresponding image patches. The Kolmogorov-Smirnov and Kuiper tests [12], based on comparing cumulative distribution functions F and G, are more robust in this regard, as well as being applicable to unbinned data. In this paper, we use Mallow's distance, also known as the Wasserstein distance, and in the present context equivalent to the Earth Mover's Distance (EMD) [15]. Mallow's distance between two d-dimensional distributions F and G can be defined as
M_p(F, G) = \min_J \left\{ \left( E_J \| X - Y \|^p \right)^{1/p} : (X, Y) \sim J,\; X \sim F,\; Y \sim G \right\}    (6)

where the minimum is taken over all d × d joint distributions J such that the d-dimensional marginal distribution wrt X is F, and the marginal wrt Y is G. In general, finding this minimum is equivalent to solving the Monge-Kantorovich transportation problem via, for example, the simplex algorithm [14,16]. In this paper we use the known but still surprising result that, despite its intractability in general dimensions, for 1D distributions the transportation problem is solved as
M_p(F, G) = \left( \int_0^1 \left| F^{-1}(t) - G^{-1}(t) \right|^p dt \right)^{1/p}    (7)
and that, for p = 1, there is a closed-form solution that is the L1 distance between the cumulative distributions F and G [15,17]:

M_1(F, G) = \int_{-\infty}^{\infty} \left| F(v) - G(v) \right| dv    (8)
Although restricting ourselves to 1D distributions seems to be a limitation, in practice it comes down to approximating a joint distribution by a set of 1D marginals. We take care to first transform the original joint feature space into an uncorrelated one before taking the marginals (see next section), and also note results from the literature that show the use of marginals outperforming full joint feature spaces when empirical sample sizes are small [15]. We feel that any drawbacks are more than made up for by the ability to use finely quantized marginal distributions while still computing an efficient, closed-form solution.
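On finely binned 1D marginals, Eq. (8) reduces to an L1 difference of cumulative sums; a minimal sketch (bin width and names are illustrative):

```python
import numpy as np

def mallows_l1(hist_f, hist_g, bin_width=1.0):
    """Closed-form Mallow's/EMD distance M1 between two 1D histograms,
    computed as the L1 distance between their cumulative distributions."""
    F = np.cumsum(hist_f / hist_f.sum())
    G = np.cumsum(hist_g / hist_g.sum())
    return float(np.abs(F - G).sum() * bin_width)
```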
3 An Implementation of CSDD
We now describe a specific practical implementation of CSDD feature extraction. Like many current detectors, the CSDD detector operates over a scale-space formed by a discrete set of scales. At each scale level, center-surround distributions are extracted at each pixel and compared using Mallow's distance to form a CSDD measure. The larger the CSDD value, the more dissimilar the center region is from its surrounding neighborhood. We therefore can interpret these dissimilarity scores within the 3D volume of space (center pixel) and scale (sigma) as values of a scale-space interest operator (see Fig. 3). Similar to other detectors, we extract an interest region for each point in the volume where the interest function achieves a local maximum across both space and scale, as determined by non-maximum suppression within a 5x5x3 (x, y, σ) neighborhood. Each maximum yields the center point (x, y) and size σ of a center-surround region that we can expect to detect reliably in new images. We choose not to do spatial subsampling at higher scales to form a pyramid, but instead keep the same number of pixels at all scale levels. Therefore, features at all scales are spatially
Fig. 3. Sample regions extracted as local maxima of the CSDD interest operator at one scale level. Left: Original color image. Middle: CSDD interest score computed for each pixel, over center-surround regions of scale σ = 9.5 pixels. Overlaid are the 30 most dominant peaks at this scale level, displayed as circles with radius √2 σ. Right: Interest regions identified by these peaks.
localized with respect to the resolution of the original pixel grid, with no need for spatial interpolation. What remains to be seen is how to generate the scale-space volume of CSDD interest values efficiently. As discussed in the last section, Mallow’s/EMD distance can be solved in closed-form if we are willing to approximate a joint RGB distribution using three 1D marginals. To make approximation by marginals better justified, we first transform the color space into a new space where the three random variables representing the color axes are roughly uncorrelated. A simple linear transformation of RGB color space due to Ohta [18] yields a set of color planes that are approximately uncorrelated for natural images. The transformation is I1 = (R + G + B)/3, I2 = R − B, I3 = (2G − R − B)/2. Although the Jacobian of this transformation is one, individual axes are scaled non-uniformly so that color information is slightly emphasized (expanded). We implement the integral in (8) as the sum over a discrete set of sampled color values. Unlike typical histogram-based approaches, we avoid coarse quantization of the feature space or adaptive color clustering. Instead, we sample values finely along each of the marginal color feature axes. In our current implementation, each axis is quantized into 128 values, yielding a concatenated feature vector of size 384 to represent the color distribution of the center region, and another vector of the same size representing the surround region. The heart of the distribution distance computation thus involves computing F (v) − G(v), the difference between cumulative distributions F and G for a sampled color space value v. Referring to (1) and (5) for the definition of how each distribution is computed and for the value of the normalization constant associated with our two center-surround weight functions wC and wS , and the fact that, by construction, wC − wS = ∇2 G, we see that
F(v) - G(v) = \frac{\int_{R^2} \delta(I(x) \le v)\, w_C(x)\, dx}{\int_{R^2} w_C(y)\, dy} - \frac{\int_{R^2} \delta(I(x) \le v)\, w_S(x)\, dx}{\int_{R^2} w_S(y)\, dy}    (9)

= \frac{e\sigma^2}{2} \int_{R^2} \delta(I(x) \le v) \left[ w_C(x) - w_S(x) \right] dx    (10)

= \frac{e\sigma^2}{2} \int_{R^2} \delta(I(x) \le v)\, \nabla^2 G(x; x_0, \sigma)\, dx    (11)
Although we have compressed the notation to save space, distributions F and G are also functions of pixel location x0 and scale level σ, the center and size of the concentric central and surrounding regions of support. Since we want to explicitly compute a difference value for every pixel, the center-surround difference F (v) − G(v) for all pixels at one scale level becomes convolution of the binary indicator function δ(I(x) ≤ v) with a LoG filter of scale σ. An important implementation detail is how to perform efficient LoG filtering, particularly since we have to perform it many times, once for each of a set of finely sampled values v and at each scale level σ. Standard convolution with a 2D spatial LoG kernel takes time quadratic in the radius of the support region. While a separable spatial implementation would reduce this to linear time, this is still expensive for large scale levels. Our implementation is based on a set of recursive IIR filters first proposed by Deriche and later improved by Farneback and Westin [19,20]. These fourth-order IIR filters compute the Gaussian and its first and second derivatives (and through combining these, the LoG operator) with a constant number of floating point operations per pixel, regardless of the spatial size σ of the operator. Although there is a superficial similarity between our implementation and greyscale blob detection by LoG filtering, note that we are filtering multiple binary masks formed from a fine-grained feature distribution and combining the results into a distance measure, not computing a single greyscale convolution. Our approach measures Mallow’s distance between two distributions, not difference between average grey values. It is fair to characterize the difference between our method and traditional LoG blob detection as analogous to using a whole distribution rather than just the mean value to describe a random variable.
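To make the pipeline concrete, here is a hedged sketch of the per-scale CSDD score built from Eqs. (8) and (11); it substitutes SciPy's gaussian_laplace for the recursive IIR filters of [19,20], sums the per-marginal Mallow's distances over the three Ohta channels, and uses an illustrative sampling of each channel's range.

```python
import numpy as np
from scipy import ndimage

def ohta(rgb):
    """Ohta's approximately decorrelated color planes [18]:
    I1 = (R+G+B)/3, I2 = R-B, I3 = (2G-R-B)/2."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([(R + G + B) / 3.0, R - B, (2 * G - R - B) / 2.0], axis=-1)

def csdd_map_one_scale(img_ohta, sigma, n_samples=128):
    """CSDD interest score at every pixel for a single scale sigma.
    For each sampled threshold v, F(v) - G(v) is the LoG response of the
    binary mask I <= v scaled by e*sigma^2/2 (Eq. 11); the score sums
    |F(v) - G(v)| dv over the sampled values of all three marginals (Eq. 8)."""
    H, W, _ = img_ohta.shape
    score = np.zeros((H, W))
    scale = np.e * sigma ** 2 / 2.0
    for c in range(3):
        chan = img_ohta[..., c]
        vs = np.linspace(chan.min(), chan.max(), n_samples)
        dv = vs[1] - vs[0]
        for v in vs:
            mask = (chan <= v).astype(float)
            # scipy's LoG has the opposite sign convention to wC - wS here;
            # the abs() below makes the difference irrelevant
            resp = ndimage.gaussian_laplace(mask, sigma)
            score += np.abs(scale * resp) * dv
    return score
```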
4 Experimental Validation
As discussed, we compute a distance between empirical cumulative feature distributions to measure how similar a circular central region is in appearance to its surrounding annular ring of pixels. Our hypothesis is that regions that look markedly different from their surroundings are likely to be good features to use for detection and matching. One important aspect of an interest region detector’s performance is repeatability of the detections. This measures how often the same set of features are detected under different transformations such as changes in viewing angle or lighting. In Sect. 4.1, we adopt the framework of [1] to compare repeatability of our detector against other state-of-the-art approaches. Moreover, it is also desirable for extracted regions to have high descriptive power. In Sect. 4.2 we demonstrate the utility of CSDD features for correspondence matching under affine image transformation.
4.1 Repeatability Experiments
We test the repeatability of our detector using the standard dataset for evaluation of affine covariant region detectors.1 CSDD regions are detected by the procedure discussed in Sect. 3. To improve the accuracy of scale estimation, for each detection corresponding to a local maximum in the 3D volume of space and scale, we do a 3-point parabolic interpolation of CSDD response across scale to refine the peak scale and response value. We also compute the Hessian matrix of responses around the detected points at each scale level. This matrix of second-order derivatives is used for two purposes: 1) to remove responses due to ridge-like features [11]; and 2) to optionally adapt our circular regions to ellipses, using the eigenvectors and eigenvalues of the Hessian matrix to define the orientation and shape of an ellipse constrained to have the same area as the original circular region of support. Although this is a much simpler and coarser ellipse fitting method than the iterative procedure in [1], it does improve the performance of the CSDD detector for large viewpoint changes. We refer to this version of the implementation as the elliptical CSDD (eCSDD), to distinguish it from the original circular CSDD (cCSDD). We also set a conservative threshold on the CSDD response score to filter out very weak responses. The evaluation test set consists of eight real-world image sets, generated by image transformations with increasing levels of distortion, such as view angle, zoom, image blur, and JPEG compression. For each reference and transformed image pair, a repeatability score is computed as described in [1]. Fig. 5 compares repeatability results from our CSDD detector against the five detectors evaluated
[Fig. 4 panel labels, detections/correspondences per image: CSDD (428/18), CSDD (474/18); Hessian (493/11), Hessian (495/11); CSDD (743/41), CSDD (238/41); MSER (1524/179), MSER (653/179)]
Fig. 4. Regions found by CSDD (left) and the comparison detector (right) for two pairs of test images. The detected regions that fall within the common image area are in yellow. The corresponding regions are in red. Each image is labeled with the total number of detections within the image versus the number of correspondences found in both of the images. Top: Regions found in bark images differing by a scale factor of 4. Bottom: Regions found in boat images differing by a scale factor of 2.8.
1 http://www.robots.ox.ac.uk/~vgg/research/affine/ as of March 2008.
in [1]. We outperform the state of the art in three out of the eight test cases (the orange line in Fig. 5b,c,d) and achieve comparable results for the others, even though our detector is designed to handle only isotropic scale and rotation changes. Two of the three cases where we outperform all other detectors are the ones that test the ability to handle large amounts of zoom and rotation. For cases where viewpoint changes induce a large amount of perspective foreshortening, elliptical adaptation improves the performance (Fig. 5a). We show elliptical CSDD regions side-by-side with regions from the best-performing state-of-the-art comparison detector for two sample cases in Fig. 4. In both cases, the CSDD regions are able to capture the meaningful structures in the scene, such as the green leaves in the bark image and the windows on the boat. Our region density is of a similar order to that of the MSER, EBR, and IBR detectors. For more details on the region density and the computational cost of CSDD feature detection, please refer to our website.2 We did not include the performance curve of the salient region detector [2] because of its absence in the online evaluation package and its consistently low rank for most of the cases in this repeatability test, according to [1].
4.2 Matching Experiments
Ultimately, the best test of a feature detector/descriptor is whether it can be used in practice for correspondence matching. We ran a second series of experiments to test the use of CSDD features to find correspondence matches for image registration. To demonstrate the robustness and the discriminative power of the original circular CSDD regions, we do not include the elliptical adjustment procedure in this matching test. A simple baseline algorithm for planar image registration was used, similar to the one in [21]. The method estimates a 6-parameter affine transformation matrix based on RANSAC matching of a sparse set of feature descriptors. In [21], corner features are detected in each frame, and a simplified version of the linear assignment problem (aka the marriage problem) is solved by finding pairs of features across the two images that mutually prefer each other as their best match, as measured by normalized cross-correlation of 11x11 intensity patches centered at their respective locations. This initial set of candidate matches is then provided to a RANSAC procedure to find the largest inlier set consistent with an affine transformation. We modify this basic algorithm by replacing corner features with our CSDD features, and replacing image patch NCC with the average Mallow's distance between the two center distributions and two surround distributions of a pair of interest regions. Note that the use of CSDD features is a vast improvement over single-scale (11x11) corner patches in terms of being able to handle arbitrary rotation and scaling transformations. Sample results of this baseline matching algorithm are shown in Fig. 6. Higher resolution versions of these pictures, along with more examples and a complete video sequence of matching results on the parking lot sequence, are provided on our website.2 The use of circular CSDD regions limits our performance under
2 http://vision.cse.psu.edu/projects/csdd/csdd.html
Fig. 5. Comparison of repeatability scores between the circular cCSDD detector, the elliptical eCSDD detector, and five other state-of-the-art detectors (the Harris- and Hessian-affine detectors, the MSER detector, the edge-based detector (EBR), and the intensity extrema-based detector (IBR)) for the eight image sequences from the standard evaluation dataset [1].
Fig. 6. Affine Ransac Image Matching Experiments. Row 1: four frames from a parking lot video sequence, showing affine alignment of bottom frame overlaid on top frame. The full video sequence of results is provided on our website. Row 2: left to right: shout3 to shout4; shout2 to was2 (images courtesy of Tinne Tuytelaars); stop sign; snowy stop sign. Row 3: kampa1 to kampa4 (images courtesy of Jiri Matas); bike1 to bike6; trees1 to trees5; ubc1 to ubc6. Row 4: natural textures: asphalt; grass; gravel; stones.
out-of-plane rotations. Although we can get more matching features on tilted planes if we use elliptical adaptation and relax the inlier distance threshold, images with large amounts of planar perspective foreshortening should be handled with a homography-based match procedure.
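As a rough illustration of the baseline matching pipeline of Sect. 4.2 (mutual best matches followed by RANSAC estimation of an affine transform), the sketch below uses a generic descriptor-distance matrix and OpenCV's estimateAffine2D; the distance matrix, point lists and threshold are placeholder assumptions, not the authors' implementation.

import numpy as np
import cv2

def mutual_best_matches(dist):
    # Simplified 'marriage problem': keep pairs (i, j) that are mutually each
    # other's best match under the given distance matrix.
    best_for_a = np.argmin(dist, axis=1)
    best_for_b = np.argmin(dist, axis=0)
    return [(i, j) for i, j in enumerate(best_for_a) if best_for_b[j] == i]

def ransac_affine(pts_a, pts_b, dist):
    # dist[i, j] would hold, e.g., the average Mallow's distance between the
    # center/surround distributions of region i in image A and region j in image B.
    pairs = mutual_best_matches(dist)
    src = np.float32([pts_a[i] for i, _ in pairs])
    dst = np.float32([pts_b[j] for _, j in pairs])
    # RANSAC keeps the largest inlier set consistent with a 6-parameter affine map.
    A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                      ransacReprojThreshold=3.0)
    return A, inliers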
5 Discussion
We have presented a new scale-space interest operator based on center-surround distribution distance (CSDD). The method finds rotationally invariant and scale covariant interest regions by looking for locations and scales where empirical cumulative distributions between a central region and its surrounding local neighborhood are dissimilar. The proposed approach performs competitively on the standard evaluation test for affine covariant region detectors, where it outperforms the state-of-the-art in three out of the eight scenarios. We have also tested the use of CSDD as a feature descriptor within a RANSAC-based affine image matching framework. In our experience, we find that CSDD feature extraction and matching works well on textured, natural images, and performs very well under large changes of scale and in-plane rotation. While the use of Mallow’s/EMD distance is resilient to some changes in lighting, large illumination changes currently defeat our baseline matching algorithm. This is partly due to the simplified marriage problem solution of only picking pairs of candidate features that are mutually each other’s best match. With large changes in lighting, it is often the case that some other feature patch becomes a better match in terms of intensity/color than the correct correspondence. Matching based only on color distributions, without taking into account any spatial pattern information, is a brittle approach, and our results can be viewed as surprisingly good in this regard. Our ability to match at all, using only “color histograms”, is due to the fact that we finely quantize the color spaces, and use a robust measure of color similarity. However, a practical system would also need to incorporate additional feature descriptor information, such as SIFT keys computed from normalized patches defined by CSDD spatial support regions [22].
References
1. Mikolajczyk, K., Tuytelaars, T., Schmid, C., Zisserman, A., Matas, J., Schaffalitzky, F., Kadir, T., Gool, L.V.: A comparison of affine region detectors. International Journal of Computer Vision 65(1-2), 43–72 (2005) 2. Kadir, T., Zisserman, A., Brady, M.: An Affine Invariant Salient Region Detector. In: Pajdla, T., Matas, J(G.) (eds.) ECCV 2004. LNCS, vol. 3021, pp. 345–457. Springer, Heidelberg (2004) 3. Schaffalitzky, F., Zisserman, A.: Multi-view Matching for Unordered Image Sets, or How Do I Organize My Holiday Snaps? In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) ECCV 2002. LNCS, vol. 2350, pp. 414–431. Springer, Heidelberg (2002)
4. Tuytelaars, T., Gool, L.V.: Content-based image retrieval based on local affinely invariant regions. In: International Conference on Visual Information Systems, pp. 493–500 (1999) 5. Matas, J., Chum, O., Urban, M., Pajdla, T.: Robust wide baseline stereo from maximally stable extremal regions. In: British Machine Vision Conference, pp. 384–393 (2002) 6. Mikolajczyk, K., Schmid, C.: An Affine Invariant Interest Point Detector. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) ECCV 2002. LNCS, vol. 2350, pp. 128–142. Springer, Heidelberg (2002) 7. Bay, H., Tuytelaars, T., Gool, L.V.: Surf: Speeded up robust features. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3954, pp. 404–417. Springer, Heidelberg (2006) 8. Lindeberg, T.: Scale-Space Theory in Computer Vision. Kluwer, The Netherlands (1994) 9. Itti, L., Koch, C.: Computational modeling of visual attention. Nature Reviews Neuroscience 2, 194–203 (2001) 10. Liu, T., Sun, J., Zheng, N.N., Tang, X., Shum, H.Y.: Learning to detect a salient object. In: IEEE Computer Vision and Pattern Recognition, pp. 1–8 (2007) 11. Lowe, D.: Distinctive image features from scale-invariant keypoints, cascade filtering approach. International Journal of Computer Vision 60, 91–110 (2004) 12. Rubner, Y., Puzicha, J., Tomasi, C., Buhmann, J.: Empirical evaluation of dissimilarity measures for color and texture. Computer Vision and Image Understanding 84, 25–43 (2001) 13. Porikli, F.: Integral histogram: A fast way to extract higtograms in cartesian spaces. In: IEEE Computer Vision and Pattern Recognition, vol. I, pp. 829–836 (2005) 14. Rubner, Y., Tomasi, C., Guibas, L.: The earth mover’s distance as a metric for image retrieval. International Journal of Computer Vision 40, 91–121 (2000) 15. Levina, E., Bickel, P.: The earth mover’s distance is the Mallows distance: Some insights from statistics. In: International Conference on Computer Vision, vol. II, pp. 251–256 (2001) 16. Zhao, Q., Brennan, S., Tao, H.: Differential emd tracking. In: International Conference on Computer Vision, pp. 1–8 (2007) 17. Villani, C.: Topics in Optimal Transportation. Graduate Studies in Mathematics series, vol. 58. American Mathematical Society, Providence (2003) 18. Ohta, Y., Kanade, T., Sakai, T.: Color information for region segmentation. Computer Graphics and Image Processing 13, 222–241 (1980) 19. Deriche, R.: Fast algorithms for low-level vision. IEEE Trans. Pattern Analysis and Machine Intelligence 12, 78–87 (1990) 20. Farneback, G., Westin, C.F.: Improving Deriche-style recursive gaussian filters. Journal of Mathematical Imaging and Vision 26, 293–299 (2006) 21. Nister, D.: Visual odometry. In: IEEE Computer Vision and Pattern Recognition, vol. I, pp. 652–659 (2004) 22. Mikolajczyk, K., Schmid, C.: A performance evaluation of local descriptors. IEEE Trans. Pattern Analysis and Machine Intelligence 27(10), 1615–1630 (2005)
Detecting Carried Objects in Short Video Sequences
Dima Damen and David Hogg
School of Computing, University of Leeds
{dima,dch}@comp.leeds.ac.uk
Abstract. We propose a new method for detecting objects such as bags carried by pedestrians depicted in short video sequences. In common with earlier work [1,2] on the same problem, the method starts by averaging aligned foreground regions of a walking pedestrian to produce a representation of motion and shape (known as a temporal template) that has some immunity to noise in foreground segmentations and phase of the walking cycle. Our key novelty is for carried objects to be revealed by comparing the temporal templates against view-specific exemplars generated offline for unencumbered pedestrians. A likelihood map obtained from this match is combined in a Markov random field with a map of prior probabilities for carried objects and a spatial continuity assumption, from which we obtain a segmentation of carried objects using the MAP solution. We have re-implemented the earlier state of the art method [1] and demonstrate a substantial improvement in performance for the new method on the challenging PETS2006 dataset [3]. Although developed for a specific problem, the method could be applied to the detection of irregularities in appearance for other categories of object that move in a periodic fashion.
1 Introduction
The detection of carried objects is a potentially important objective for many security applications of computer vision. However, the task is inherently difficult due to the wide range of objects that can be carried by a person, and the different ways in which they can be carried. This makes it hard to build a detector for carried objects based on their appearance in isolation or jointly with the carrying individual. An alternative approach is to look for irregularities in the silhouette of a person, suggesting they could be carrying something. This is the approach that we adopt, and whilst there are other factors that may give rise to irregularities, such as clothing and build, experiments on a standard dataset are promising. Although the method has been developed for the detection of objects carried by people, there could be applications of the approach in other domains where irregularities in the outline of known deformable objects are of potential interest. We assume a static background and address errors in foreground segmentations due to noise and partial occlusions, by aligning and averaging segmentations to generate a so-called ‘temporal-template’ - this representation was originally proposed in [1] for the same application. The temporal template is then matched
Fig. 1. All the frames in the sequence are first aligned. The temporal template represents the frequency of each aligned pixel (relative to the median) being part of the foreground. The exemplar temporal template from a similar viewing angle is transformed (translation, scaling and rotation) to best match the generated temporal template. By comparing the temporal template to the best match, protruding regions are identified. An MRF with a map of prior locations is used to decide on those pixels representing carried objects.
against a pre-compiled exemplar temporal template of an unencumbered pedestrian viewed from the same direction. Protrusions from the exemplar are detected as candidate pixels for carried objects. Finally, we incorporate prior information about the expected locations of carried objects together with a spatial continuity assumption in order to improve the segmentation of pixels representing the carried objects. Figure 1 summarizes, with the use of an example, the process of detecting carried objects. Section 2 reviews previous work on the detection of carried objects. Section 3 presents our new method, based on matching temporal templates. Experiments comparing the performance of the earlier work from Haritaoglu et al. and our new method on the PETS2006 dataset are presented in Section 4. An extension of the method to incorporate locational priors and a spatial continuity assumption for detecting carried objects is presented in Section 5. The paper concludes with an overall discussion.
2 Previous Work
Several previous methods have been proposed for detecting whether an individual is carrying an object. The Backpack [1,2] system detects the presence of carried objects from short video sequences of pedestrians (typically lasting a few seconds) by assuming the pedestrian’s silhouette is symmetric when a bag is not being carried, and that people exhibit periodic motion. Foreground segmentations are aligned using edge correlation. The aligned foreground masks for the complete video segment are combined into the temporal template that records the proportion of frames in the video sequence in which each (aligned) pixel was classified as foreground. Next, symmetry analysis is performed. The principal axis is computed using principal component analysis of 2-D locations, and is
Fig. 2. For each foreground segmentation, the principal axis is found and is constrained to pass through the median coordinate of the foreground segmentation. Light gray represents the two detected asymmetric regions. Asymmetric regions are projected onto the horizontal projection histogram. Periodicity analysis is performed for the full histogram [Freq = 21] and for regions 1 [Freq = 11] and 2 [Freq = 21]. As region 2 has the same frequency as the full body, it is not considered a carried object.
constrained to pass through the median coordinate in the vertical and horizontal directions. For each location x, relative to the median of the blob, asymmetry is detected by reflecting the point in the principal axis. The proportion of frames in which each location was classified as asymmetric is calculated. Consistent asymmetric locations are grouped into connected components representing candidate blobs. Backpack then distinguishes between blobs representing carried objects and those being parts of limbs by analyzing the periodicity of the horizontal projection histograms (see [1] for details). This estimates the periodic frequency of the full body, and that of each asymmetric region. Backpack assumes the frequency of an asymmetric blob that represents a limb is numerically comparable to that of the full body. Otherwise, it is believed to be a carried object. Figure 2 illustrates the process from our re-implementation. From our own evaluation, errors in the Backpack method arise from four sources. Firstly, the symmetry assumption is frequently violated. Secondly, the position of the principal axis is often displaced by the presence of the carried object. It may be possible to reduce this source of error by positioning the major axis in other ways, for example forcing it to pass through the centroid of the head [4] or the ground point of the person walking [5]. Thirdly, accurate periodicity analysis requires a sufficient number of walking cycles to successfully retrieve the frequency of the gait. Fourthly, the periodicity of the horizontal projection histogram does not necessarily reflect the gait's periodicity. Later work by Benabdelkader and Davis [6] expanded the work of Haritaoglu et al. by dividing the person's body horizontally into three slices. The periodicity and amplitude of the time series along each slice are studied to detect deviations from the ‘natural’ walking person and locate the vertical position of the carried object. They verified that the main limitation in Haritaoglu et al.'s method is the sensitivity of the axis of symmetry to noise, as well as to the location and size of the carried object(s). Branca et al. [7] try to identify intruders in archaeological sites. Intruders are defined as those carrying objects such as a probe or a tin. The work assumes
a person is detected and segmented. Their approach thus tries to detect these objects within the segmented foreground region. Detection is based on wavelet decomposition, and the classification uses a supervised three layer neural network, trained on examples of probes and tins in foreground segmentations. Differentiating people carrying objects without locating the carried object has also been studied. Nanda et al. [8] detect pedestrians carrying objects as outliers of a model for an unencumbered pedestrian obtained in a supervised learning procedure based on a three layer neural network. Alternatively, the work of Tao et al. [9] tries to detect pedestrians carrying heavy objects by performing gait analysis using General Tensor Discriminant Analysis (GTDA), and was tested on the USF HumanID gait analysis dataset. Recent work by Ghanem and Davis [10] tackles detecting abandoned baggage by comparing the temporal template of the person before approaching a Region of Interest (ROI) and after leaving it. Carried objects are detected by comparing the temporal templates (the term ‘occupancy map’ was used in their work to reference the same concept) and colour histograms of the ‘before’ and ‘after’ sequences. The approach assumes the person is detected twice, and that the trajectory of the person before approaching the ROI and after departing are always correctly connected. It also assumes all observed individuals follow the same path, and thus uses two static cameras to record similar viewpoints. Our method uses the temporal template but differs from earlier work [1,10] by matching the generated temporal template against an exemplar temporal template generated offline from a 3D model of a walking person. Several exemplars, corresponding to different views of a walking person, were generated from reusable silhouettes used successfully for pose detection [11]. The use of temporal templates provides better immunity to noise in foreground segmentations. Our new approach does not require the pedestrian to be detected with and without the carried object, and can handle all normal viewpoints. It also generalizes to any type of carried object (not merely backpacks), and can be considered a general approach to protrusions from other deformable tracked objects. This work provides the first real test of this task on the challenging PETS2006 dataset, which we have annotated with ground-truth for all carried objects. It is worth mentioning that this dataset does not depend on actors and thus records typical carried objects in a busy station. This enables us to demonstrate the generality of the approach, and clarify the real challenges of this task.
3 Description of the Method
Our method starts by creating the temporal template from a sequence of tracked pedestrians, as proposed by Haritaoglu et al. [2]. However, we introduce two changes to the procedure for creating the temporal template. Firstly, we apply Iterative Closest Point (ICP), instead of edge correlation, to align successive boundaries. ICP is performed on the edge points of the traced boundary around the foreground segmentation. Unlike edge correlation, this does not require a predefined search window, and in our experiments it gives a more accurate alignment
in the presence of shape variations between consecutive frames. Secondly, L1 is used to rank the frames by their similarity to the generated temporal template. The highest ranked p% of the frames are used to re-calculate a more stable template. p was set to 80 in our experiments. The more computationally expensive Least Median of Squares (LMedS) estimator [12] gave similar results. Having derived a temporal template from a tracked pedestrian, one of eight exemplars is used to identify protrusions by matching. These exemplar temporal templates represent a walking unencumbered pedestrian viewed from different directions. A set of exemplars for eight viewing directions was created using the dataset of silhouettes gathered at the Swiss Federal Institute of Technology (EPFL) [11]. The dataset was collected from 8 people (5 men and 3 women) walking at different speeds on a treadmill. Their motion was captured using 8 cameras and mapped onto a 3D Maya model. This dataset comprises all the silhouettes of the mapped Maya model, and has previously been used for pose detection, 3D reconstruction and gait recognition [11,13]. We average the temporal templates of different individuals in this dataset to create the exemplar for each camera view. The eight exemplars (Figure 3) are used for detecting the areas representing the pedestrian. The unmatched regions are expected to correspond to carried object(s). To decide on which exemplar to use, we estimate a homography from the image plane to a coordinate frame on the ground-plane. We then use this to estimate the position and direction of motion of each pedestrian on the ground. The point on the ground-plane directly below the camera is estimated from the vertical vanishing point. The angle between the line connecting this point to the pedestrian and the direction of the pedestrian's motion gives the viewing direction, assuming the pedestrian is facing their direction of motion. We ignore the elevation of the camera above the ground in order to avoid having to generate new exemplars for different elevations, although this approximation may be unnecessary since generating the prototypes is fast and need only be done once. The mean of the computed viewing directions over the short video sequence is used to select the corresponding exemplar. Diagonal views (2,4,6,8) are used to match a wider range of angles (60◦) in comparison to frontal views. This is because the silhouettes change more radically near frontal views. The chosen exemplar is first scaled so that its height is the same as that of the generated temporal template. We align the median coordinate of the temporal template with that of the corresponding exemplar. An exhaustive search is then performed for the best match over a range of transformations.
Fig. 3. The eight exemplar temporal templates, created to represent 8 viewpoints
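A minimal sketch (ours, under simplifying assumptions) of two of the ingredients just described is given below: averaging aligned foreground masks into a temporal template, and choosing one of the eight exemplars from the estimated viewing direction. The masks are assumed to be already aligned (ICP in the paper), and the uniform 45-degree binning is a simplification of the wider bins used for diagonal views.

import numpy as np

def temporal_template(aligned_masks):
    # Each pixel holds the frequency with which it was labelled foreground
    # over the aligned sequence.
    stack = np.stack([m.astype(np.float64) for m in aligned_masks])
    return stack.mean(axis=0)

def select_exemplar(view_angle_deg):
    # Map the viewing direction (angle between the camera-to-pedestrian line on
    # the ground and the walking direction) to one of the 8 exemplar indices.
    return int(((view_angle_deg % 360.0) + 22.5) // 45.0) % 8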
[Fig. 4 plots: matching cost d(M, P) as a function of rotation and scale, panels (a)-(d)]
Fig. 4. The temporal template of the person (a) is matched to the corresponding exemplar (b), the global minimum (d) results in the protruding regions (c)
In our experiments, the chosen ranges for scales, rotations and translations were [0.75:0.05:1.25], [-15:5:15] and [-30:3:30] respectively. The cost of matching two templates is an L1 measure, linearly weighted by the y coordinate of each pixel (plus a constant offset), giving higher weight to the head and shoulder region. Equation 1 represents the cost of matching a transformed model (M_T) to the Person (P), where h represents the height of the matched matrices:

d(M_T, P) = Σ_{x,y} |M_T(x, y) − P(x, y)| (2h − y)    (1)

The best match M̂_T is the one that minimizes the matching cost:

M̂_T = argmin_T d(M_T, P)    (2)

Figure 4 shows an example of such a match and the located global minimum. The best match M̂_T is then used to identify areas protruding from the temporal template:

protruding(x, y) = max(0, P(x, y) − M̂_T(x, y))    (3)

We are only concerned with areas in the person template that do not match body parts in the corresponding best match. Pixels where P(x, y) < M̂_T(x, y) are assumed to have been caused by noise, or poor foreground segmentation. For the initial results in Section 4, the protruding values are thresholded and grouped into connected components representing candidate segmentations of carried objects. Another threshold limits the minimum area of accepted connected components to remove very small blobs. An enhanced approach is presented in Section 5 where segmentation is achieved using a binary-labeled MRF formulation, combining prior information and spatial continuity.
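The weighted-L1 matching cost and the protrusion map of Eqs. (1)-(3) are straightforward to express; the sketch below (ours) shows them in NumPy, with the search over candidate transforms of Eq. (2) left as a loop over the grid of scales, rotations and translations given above.

import numpy as np

def match_cost(exemplar_t, person_t):
    # Eq. (1): L1 difference weighted by (2h - y), giving more weight to the
    # head-and-shoulder region near the top of the template.
    h = person_t.shape[0]
    weights = (2 * h - np.arange(h))[:, None]
    return np.sum(np.abs(exemplar_t - person_t) * weights)

def protrusion_map(best_exemplar_t, person_t):
    # Eq. (3): the part of the person template not explained by the best-matching exemplar.
    return np.maximum(0.0, person_t - best_exemplar_t)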
4 Experiments and Results
We used the PETS2006 dataset and selected the viewpoint from the third camera for which there is a greater number of people seen from the side. The ground-plane homography was established using the ground truth measurements provided as part of the dataset. Moving objects were detected and tracked using
Fig. 5. PETS2006 Third camera viewpoint showing ground truth bounding boxes representing carried objects
a generic tracker [14] to retrieve foreground segmentations. The tracker has an automatic shadow remover that worked efficiently on the dataset. Trajectories shorter than 10 frames in length were discarded. As this method cannot deal with groups of people tracked together, such trajectories were also manually removed. The carried objects in the dataset varied among boxes, handbags, briefcases and suitcases. Unusual objects are also present, such as a guitar in one example. In some cases, people were carrying more than one object. The number of individually tracked people was 106. Ground truth for carried objects was obtained manually for all 106 individuals. 83 carried objects were tracked, and the bounding box of each was recorded for each frame (Figure 5). We chose bounding boxes instead of pixel masks for simplicity. We compare our re-implementation of Backpack as specified in their papers [1,2] with our proposed method (Section 3). To ensure a fair comparison, we use the same temporal templates as the input for both methods. A detection is labeled as true if the overlap between the bounding box of the predicted carried object (B_p) and that of the ground truth (B_gt) exceeds 15% in more than 50% of the frames in the sequence. The measure of overlap is defined by Equation 4 [15]:

overlap(B_p, B_gt) = area(B_p ∩ B_gt) / area(B_p ∪ B_gt)    (4)

A low overlap threshold is chosen because the ground truth bounding boxes enclose the whole carried object, while both methods only detect the parts of the object that do not overlap the body. Multiple detections of the same object are counted as false positives. We first compare the blobs retrieved from both techniques without periodicity analysis. Each of the two algorithms has two parameters to tune, one for thresholding and one for the minimum size of the accepted connected component. Precision-Recall (PR) curves for the two methods are shown in Fig. 6 (left). These were generated by linearly interpolating the points representing the maximum precision for each recall. They show a substantial improvement in performance for the proposed method. Maximum precision on a recall of 0.5, for
[Fig. 6 plots: Precision vs. Recall curves for the Proposed Method and Haritaoglu's Method, without (left) and with (right) periodicity analysis]
Fig. 6. PR curves for the proposed method compared to Haritaoglu et al.’s method without (left) and with (right) periodicity analysis to classify the retrieved blobs
Fig. 7. Three examples (a), along with their temporal templates (b), are assessed using both techniques. Haritaoglu's method (c, top; thresholded in d, top) and our proposed method (c, bottom; thresholded in d, bottom) show how matching retrieves a better estimate of the carried objects than symmetry analysis.
example, was improved from 0.25 using asymmetry to 0.51 using matching. Maximum recall was 0.74 for both techniques, as noisy temporal templates and non-protruding carried objects affect both techniques. Figure 7 shows examples comparing asymmetry analysis with matching temporal templates. To further compare the methods, we present the results after performing periodicity analysis. We thus take all optimal setting points represented by the curves in Fig. 6 (left), and vary the two thresholds for periodicity analysis. Figure 6 (right) shows PR curves analogous to those in Fig. 6 (left) but now
including periodicity analysis, again taking the maximum precision for each recall. The improved performance of our method is still apparent. In addition, comparing the corresponding curves shows that periodicity analysis improves the performance for both methods.
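For reference, the per-frame overlap of Eq. (4) and the more-than-50%-of-frames rule used to label a detection as true can be sketched as follows; boxes are assumed to be axis-aligned tuples (x1, y1, x2, y2), which is a representation choice of ours.

def overlap(bp, bgt):
    # Eq. (4): intersection over union of two bounding boxes.
    ix1, iy1 = max(bp[0], bgt[0]), max(bp[1], bgt[1])
    ix2, iy2 = min(bp[2], bgt[2]), min(bp[3], bgt[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(bp) + area(bgt) - inter
    return inter / union if union > 0 else 0.0

def is_true_detection(pred_boxes, gt_boxes, thresh=0.15):
    # True if the overlap exceeds the threshold in more than 50% of the frames.
    hits = sum(overlap(p, g) > thresh for p, g in zip(pred_boxes, gt_boxes))
    return hits > 0.5 * len(pred_boxes)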
5 Using Prior Information and Assuming Spatial Continuity
The protruding connected components can occur at locations where carried objects are not expected, such as hats on top of heads. We propose training for carried object locations relative to the person's silhouette to better differentiate carried objects from other protruding regions, and at the same time impose a spatial continuity assumption on the pixels corresponding to carried objects. In this section, we show how training was used to generate a map of prior locations. Training values were also used to estimate the distribution of protrusion values conditioned on their labeling. Finally, this information is combined into a Markov random field, determining an energy function which is minimized. Results are presented along with a discussion of the advantages of training for prior locations. We divided the pedestrians into two sets, the first containing 56 pedestrians (Sets 1-4 in PETS2006) and the second containing 50 pedestrians (Sets 5-7). Two-fold cross validation was used to detect carried objects. Training for carried object locations is accomplished by mapping the temporal template, using the inverse of the best transformation, to align it with its corresponding exemplar. During training, we obtain connected components using a threshold of 0.5. Correct detections, found by comparing to bounding boxes from the ground truth, are used to train for locations of carried objects separately for each directionally-specific exemplar. A map of prior probabilities Θd is produced for each viewpoint d. Prior information for each location is calculated as the frequency of its occurrence within a correctly-detected carried object across the training set. To make use of our small training set, we combine the maps of opposite exemplars. For example, the first and the fifth exemplars are separated by 180◦. Θ1 and Θ5 are thus combined by horizontally flipping one and calculating the weighted average Θ1,5 (by the number of blobs). The same applies to Θ2,6, Θ3,7 and Θ4,8. Figure 8 shows Θ2,6 using the two disjoint training sets. We aim to label each location x within the person's temporal template as belonging to a carried object (m_x = 1) or not (m_x = 0). Using the raw protrusion values v = protruding(x) calculated in Equation 3, we model the class-conditional densities p(v|m_x = 1) and p(v|m_x = 0) based on training data (Figure 9). By studying these density distributions, p(v|m_x = 1) was approximated by two Gaussian distributions, one for stable carried objects and another for swinging objects. The parameters of the two Gaussians were manually chosen to approximately fit the training density distributions.

p(v|m_x = 1) = γ N(v; 0.6, 0.3) + (1 − γ) N(v; 1.0, 0.05)    (5)
Fig. 8. For the second exemplar (left), Θ2,6 (middle) was generated using sets 1-4, and Θ2,6 (right) was generated using sets 5-7. The location model Θ has high values where stronger evidence of carried objects had been seen in training. A prior of 0.2 was used when no bags were seen. By symmetry, Θ6 is a horizontal flip.
Fig. 9. Distribution of protrusion values for pixels belonging to carried objects (left) and to non-objects (right). Thresholded pixels (>0.5) that match true detections when compared to the ground truth are used to train p(v|m_x = 1). The rest are used to train p(v|m_x = 0).
γ is the relative weight of the first Gaussian in the training set. Its value turned out to be 0.64 for the first set and 0.66 for the second disjoint set. The density distribution p(v|m_x = 0) resembles a reciprocal function and was thus modeled as

p(v|m_x = 0) = (1/(v + β)) / (log(1 + β) − log(β))    (6)

β was set to 0.01. The denominator represents the area under the curve for normalization. As we believe neighboring locations to have the same label, spatial continuity can be constrained using a Markov Random Field (MRF). The energy function E(m) to be minimized over the image I is given by Equation 7:

E(m) = Σ_{x∈I} ( φ(v|m_x) + ω(m_x|Θ) ) + Σ_{(x,y)∈C} ψ(m_x, m_y)    (7)

φ(v|m_x) represents the cost of assigning a label to the location x based on its protrusion value v in the image:

φ(v|m_x) = −log(p(v|m_x = 1)) if m_x = 1;   −log(p(v|m_x = 0)) if m_x = 0    (8)
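A direct transcription of the class-conditional models of Eqs. (5) and (6) and the resulting unary cost of Eq. (8) might look as follows; γ and β take the values reported in the text, and the vectorization over whole protrusion maps is our convenience choice.

import numpy as np

def gauss(v, mu, sigma):
    return np.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def p_v_given_object(v, gamma=0.64):
    # Eq. (5): two-component Gaussian mixture fitted to protrusion values of
    # carried-object pixels.
    return gamma * gauss(v, 0.6, 0.3) + (1.0 - gamma) * gauss(v, 1.0, 0.05)

def p_v_given_background(v, beta=0.01):
    # Eq. (6): normalized reciprocal density over v in [0, 1].
    return (1.0 / (v + beta)) / (np.log(1.0 + beta) - np.log(beta))

def unary_cost(v, label):
    # Eq. (8): negative log-likelihood of the protrusion value under the label.
    p = p_v_given_object(v) if label == 1 else p_v_given_background(v)
    return -np.log(np.maximum(p, 1e-12))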
[Fig. 10 plot: Precision vs. Recall curves for MRF With Prior and MRF Without Prior]
Fig. 10. PR Curves for detecting carried objects using MRF. Introducing location maps to encode prior information about carried object locations produces better performance.
Fig. 11. The yellow rectangles show the choice of carried objects using MRF with location models. Red rectangles refer to MRF without location models. Prior information drops candidate blobs at improbable locations (a,b), and better segments the object (a,c). It nevertheless decreases support for carried objects in unusual locations (d).
ω(m_x|Θ) is based on the map of prior probabilities Θ given a specified walking direction:

ω(m_x|Θ) = −log(p(x|Θ)) if m_x = 1;   −log(1 − p(x|Θ)) if m_x = 0    (9)

The interaction potential ψ follows the Ising model over the cliques, where C represents all the pairs of neighboring locations in the image I:

ψ(m_x, m_y) = λ if m_x ≠ m_y;   0 if m_x = m_y    (10)

The interaction potential ψ is fixed regardless of the difference in protrusion values v at locations x and y. We did not choose a data-dependent term because the protrusion values represent the temporal continuity, and not the texture information at the neighboring pixels. We use the max-flow algorithm proposed in [16], and its publicly available implementation, to minimize the energy function (Equation 7). Regions representing carried objects were thus retrieved. The smoothness cost term λ was optimized on the training set used. To evaluate the effect of introducing location models, the term ω(m_x|Θ) was removed from the energy function and the results were re-calculated. λ was varied between [0.1:0.1:6] to produce the PR curves in Fig. 10 that demonstrate
Table 1. Better performance was achieved by introducing the MRF representation

                Precision   Recall   TP   FP   FN
Thresholding      39.8%     49.4%    41   62   42
MRF - Prior       50.5%     55.4%    46   45   37
the advantage of introducing location prior models. Examples in Fig. 11 show how prior models affect the estimation of carried objects. In order to compare the MRF formulation with simple thresholding, we optimize the parameters on each training dataset and test on the other. For the MRF, λ was optimized on the training datasets, resulting in 2.2 and 2.5 respectively. Table 1 presents the precision and recall results along with the actual counts combined for the two test datasets, showing that the MRF produces higher precision and recall. Quantitatively, Fig. 12 dissects the 45 false positive and 37 false negative cases according to the reason for their occurrence. Figure 13 presents a collection of results highlighting reasons for success and the main sources of failure. We also demonstrate the video results at http://www.comp.leeds.ac.uk/dima/ECCVDemo.avi.
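To make the full energy of Eqs. (7)-(10) concrete, the sketch below combines the unary data cost with the location prior and an Ising pairwise term, and minimizes the energy with simple synchronous ICM sweeps; the paper itself uses the max-flow implementation of [16], the unary_cost function is the one sketched after Eq. (8), and the wrap-around neighborhood from np.roll is a simplification at the image border.

import numpy as np

def segment_carried(protrusion, prior, lam=2.2, iters=10):
    # protrusion: 2D array of values v from Eq. (3); prior: 2D map Theta of
    # prior probabilities of a carried object; lam: smoothness weight (Eq. 10).
    eps = 1e-12
    # Unary costs phi (Eq. 8) + omega (Eq. 9) for labels 0 and 1.
    u0 = unary_cost(protrusion, 0) - np.log(np.clip(1.0 - prior, eps, 1.0))
    u1 = unary_cost(protrusion, 1) - np.log(np.clip(prior, eps, 1.0))
    labels = (u1 < u0).astype(np.uint8)      # initialise from the unary terms alone
    for _ in range(iters):                    # ICM sweeps as a stand-in for max-flow
        c0, c1 = u0.copy(), u1.copy()
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            neigh = np.roll(labels, shift, axis=axis)
            c0 += lam * (neigh != 0)          # Eq. (10): penalty when neighbours disagree
            c1 += lam * (neigh != 1)
        labels = (c1 < c0).astype(np.uint8)
    return labels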
6 Conclusion
We have proposed a novel method to detect carried objects, aiming at higher robustness than noisy single frame segmentations. Carried objects are assumed to cause protruding regions from the normal silhouette. Like an earlier method we use a temporal template but match against exemplars rather than assuming that unencumbered pedestrians are symmetric. Evaluated on the PETS2006 dataset, the method achieves a substantial improvement in performance over the previously published method. Finally, we train on possible locations of carried objects and use an MRF to encode spatial constraints resulting in a further improvement in performance.

Reasons behind FP detections
  Protruding parts of clothing     15
  Protruding body parts            10
  Extreme body proportions          6
  Incorrect template matching       5
  Noisy temporal template           5
  Duplicate matches                 4
  Total                            45

Reasons behind FN detections
  Bag with little or no protrusion                9
  Dragged bag tracked separately by tracker       6
  Carried object between legs                     5
  Carried object not segmented from background    4
  Little evidence of prior location in training   3
  Swinging small object                           3
  Noisy template                                  3
  Incorrect template matching                     2
  Merging two protruding regions into one         2
  Total                                          37
Fig. 12. Reasons behind False Positive (FP) and False Negative (FN) detections
Fig. 13. The proposed method can identify single (1) or multiple (2,3) carried objects. (4) shows its ability to classify true negative cases. Objects extending over the body are split into two (5). Failure cases may result from poor temporal templates due to poor foreground segmentation (6). The map of prior locations could favor some false positive objects (7). The method is not expected to cope with extreme body proportions (8). The second row shows the detections projected into the temporal templates, and the third row shows detections projected into the images.
Due to its dependence on protrusion, the method may not be able to distinguish carried objects from protruding clothing or non-average build. Future improvements to this method might be achieved using texture templates to assist segmentation based on color information. In addition, the independence assumption in learning prior bag locations could be studied to utilize shapes of previously seen bags in producing better segmentations. When matured, this technique can be embedded into surveillance and security systems that aim at tracking carried objects or detecting abandoned objects in public places.
Acknowledgement

We would like to thank Miodrag Dimitrijevic at the CVLAB, EPFL and his colleagues for providing the dataset of silhouettes used in our research.
References
1. Haritaoglu, I., Cutler, R., Harwood, D., Davis, L.S.: Backpack: detection of people carrying objects using silhouettes. In: Proc. Int. Conf. on Computer Vision (ICCV), vol. 1, pp. 102–107 (1999)
2. Haritaoglu, I., Harwood, D., Davis, L.: W4 : real-time surveillance of people and their activities. IEEE Trans. on Pattern Analysis and Machine Intelligence 22(8), 809–830 (2000) 3. Ferryman, J. (ed.): IEEE Int. Workshop on Performance Evaluation of Tracking and Surveillance (PETS). IEEE, New York (2006) 4. Haritaoglu, I., Harwood, D., Davis, L.: Hydra: Multiple people detection and tracking using silhouettes. In: Proc. IEEE Workshop on Visual Surveillance (1999) 5. Hu, W., Hu, M., Zhou, X., Tan, T., Lou, J., Maybank, S.: Principal axis-based correspondence between multiple cameras for people tracking. IEEE Trans. on Pattern Analysis and Machine Intelligence 28(4), 663–671 (2006) 6. Benabdelkader, C., Davis, L.: Detection of people carrying objects: a motion-based recognition approach. In: Proc. Int. Conf. on Automatic Face and Gesture Recognition (FGR), pp. 378–384 (2002) 7. Branca, A., Leo, M., Attolico, G., Distante, A.: Detection of objects carried by people. In: Proc. Int. Conf on Image Processing (ICIP), vol. 3, pp. 317–320 (2002) 8. Nanda, H., Benabdelkedar, C., Davis, L.: Modelling pedestrian shapes for outlier detection: a neural net based approach. In: Proc. Intelligent Vehicles Symposium, pp. 428–433 (2003) 9. Tao, D., Li, X., Maybank, S.J., Xindong, W.: Human carrying status in visual surveillance. In: Proc. Computer Vision and Pattern Recognition (CVPR) (2006) 10. Ghanem, N.M., Davis, L.S.: Human appearance change detection. In: Image Analysis and Processing (ICIAP), pp. 536–541 (2007) 11. Dimitrijevic, M., Lepetit, V., Fua, P.: Human body pose detection using Bayesian spatio-temporal templates. Computer Vision and Image Understanding 104(2), 127–139 (2006) 12. Rousseeuw, P.J.: Least median of squares regression. Journal of the American Statistical Association 79(388), 871–880 (1984) 13. Fossati, A., Dimitrijevic, M., Lepetit, V., Fua, P.: Bridging the gap between detection and tracking for 3D monocular video-based motion capture. In: Proc. Computer Vision and Pattern Recognition (CVPR) (2007) 14. Magee, D.: Tracking multiple vehicles using foreground, background and motion models. In: Proc. ECCV Workshop on Statistical Methods in Video Processing, pp. 7–12 (2002) 15. Everingham, M., Winn, J.: The PASCAL visual object classes challenge (VOC 2007) development kit. Technical report (2007) 16. Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. IEEE Trans. on Pattern Analysis and Machine Intelligence 23(11), 1222–1239 (2001)
Constrained Maximum Likelihood Learning of Bayesian Networks for Facial Action Recognition
Cassio P. de Campos1, Yan Tong2, and Qiang Ji1
1 Electrical, Computer and Systems Eng. Dept., Rensselaer Polytechnic Institute, Troy, NY, USA
2 Visualization and Computer Vision Lab, GE Global Research Center, Niskayuna, NY, USA
Abstract. Probabilistic graphical models such as Bayesian Networks have been increasingly applied to many computer vision problems. Accuracy of inferences in such models depends on the quality of network parameters. Learning reliable parameters of Bayesian networks often requires a large amount of training data, which may be hard to acquire and may contain missing values. On the other hand, qualitative knowledge is available in many computer vision applications, and incorporating such knowledge can improve the accuracy of parameter learning. This paper describes a general framework based on convex optimization to incorporate constraints on parameters with training data to perform Bayesian network parameter estimation. For complete data, a global optimum solution to maximum likelihood estimation is obtained in polynomial time, while for incomplete data, a modified expectation-maximization method is proposed. This framework is applied to real image data from a facial action unit recognition problem and produces results that are similar to those of state-of-the-art methods.
1 Introduction
Graphical models such as Bayesian Networks are becoming increasingly popular in many applications. During the last few years, the adoption of Bayesian networks in areas of computer vision and pattern recognition has strongly increased. Issues of the most important journals are dedicated to this matter, for instance the Special Issue on Probabilistic Graphical Models in Computer Vision [1] of the IEEE Transactions on Pattern Analysis and Machine Intelligence and the Special Issue on Probabilistic Models for Image Understanding [2] of the International Journal of Computer Vision. Latest research uses Bayesian networks for representing causal relationships in facial expression recognition, active vision, image segmentation, visual surveillance, pattern discovery, activity understanding, amongst others. For example, Delage et al. [3] use Bayesian networks to automatically recover 3D reconstructions from single indoor images. Zhou et al. [4] apply Bayesian networks for visual
tracking. Mortensen et al. [5] present a semi-automatic segmentation technique based on a Bayesian network constructed from a watershed segmentation. Zhang et al. [6] use Bayesian networks for modeling temporal behaviors of facial expressions in image sequences. Tong et al. [7] present a Bayesian network to recognize facial action units. A Bayesian network encodes a joint probability distribution for its variables in a very compact graph structure, relying on a factorization into local conditional probability distributions for efficient inferences. Parameter learning is the problem of estimating probability measures of conditional probability distributions given the structure of the network. Many parameter learning techniques depend heavily on training data. Ideally, with sufficient data, it is possible to learn parameters by standard statistical analysis such as maximum likelihood estimation. In many real-world applications, however, data are either incomplete or scarce, which can cause inaccurate parameter estimation. Incompleteness means that some variable values are missing in the data, while scarceness means that the amount of training data is small. Most of the computer vision problems just mentioned have to deal with scarce and incomplete data. Methods for improving parameter learning will certainly benefit many such applications. When data are incomplete, the Expectation-Maximization (EM) algorithm [8] is often used. Even with incomplete and scarce data, qualitative knowledge about parameters is usually available, and such knowledge might be employed to improve estimations. In this paper, we propose a framework based on non-linear convex optimization to solve the parameter learning problem by combining quantitative data and domain knowledge in the form of qualitative constraints. Many types of qualitative constraints are treated, including range and relationship constraints [9], influences and synergies [10,11], non-monotonic constraints [12], and weak and strong qualitative constraints [13,14]. Experiments with facial expression recognition and real image data show the benefits that qualitative constraints can bring to parameter learning and classification accuracy. Facial expressions are very important in non-verbal human communication [15]. Based on facial action units [16], it is possible to detect and measure a large number of facial expressions by observing a small set of discernible muscular movements [15]. We employed a Bayesian network where nodes are associated with action units and links describe relations among them. Parameter learning is conducted using real image data and qualitative relations from experts for both complete and incomplete datasets. Although we use a simpler model and fewer training data than state-of-the-art algorithms do, inferences with our networks display recognition rates that are comparable to other results [7,17,18,19]. This indicates that our parameter learning procedure achieves high accuracy. We emphasize that we apply the proposed methods to a facial action unit recognition problem, but they are general and can be applied to a wide range of problems, as long as qualitative knowledge is available and data are scarce (it is also possible to work with large amounts of data, but in those cases a standard maximum likelihood might be enough).
Section 2 comments on some related work. Section 3 introduces our notation for Bayesian networks, describes the problem of parameter learning, and details the qualitative constraints, specified by domain experts, that can guide the learning process. Then, for scarce but complete data, we describe a simple but effective procedure to solve parameter learning by reformulating the problem as a constrained convex optimization problem, which ensures global optimality in polynomial time (Section 4). For incomplete data, we describe a constrained EM idea by adding constraints to the maximization step, and iteratively solve the learning problem. Section 5 presents some experiments with real image data from a facial action unit recognition problem. Section 6 concludes the paper and indicates paths for future work.
2 Related Work
Domain knowledge can be classified as quantitative or qualitative, which describe the explicit quantification of parameters and approximate characterizations, respectively. Both are useful for parameter learning, but quantitative knowledge has been widely used while qualitative relations among parameters have not been fully exploited in many domains. Here we focus our attention on related work using qualitative relations. Parameter learning is a well-explored topic and we suggest Jordan's book [20] for a broader view. Concerning the use of qualitative relations, Wittig et al. [21] and Altendorf et al. [22] present methods to integrate qualitative constraints by adding penalty functions to the log-likelihood criterion. Weights for the penalty functions often need to be manually tuned, which relies strongly on human knowledge about such weights. Feelders and Van der Gaag [23] incorporate some simple inequality constraints in the learning process, but they assume that all the variables are binary. Niculescu et al. [9,24] derive closed-form solutions for the maximum likelihood estimation assuming some predefined types of constraints. However, the constraints used in all those methods are restrictive in the number of parameters and in the involvement of distinct distributions (usually there is no overlap between parameters of different constraints and constraints are restricted to single distributions). There are very restricted cases where parameters and constraints can involve distinct distributions. Even simple cases such as influences of Qualitative Probabilistic Networks [10] are not addressed. de Campos and Cozman [25] formulate the learning problem as a constrained optimization problem. However, they are restricted to complete datasets and apply non-convex optimization. We describe a general learning procedure that deals with a wider range of constraints and still finds the global optimum solution in polynomial time.
3 Problem Definition
A Bayesian network (or BN) represents a single joint probability density over a collection of random variables. We assume throughout that variables are categorical; variables are uppercase and their assignments are lowercase.
Definition 1. A Bayesian network is a triple (G, X, P), where: G = (V_G, E_G) is a directed acyclic graph, with V_G a collection of vertices associated with the random variables X (a node per variable), and E_G a collection of arcs; P is a collection of conditional probability densities p(X_i|PA_i), where PA_i denotes the parents of X_i in the graph (PA_i may be empty), respecting the relations of E_G.
In a BN every variable is conditionally independent of its non-descendants given its parents (Markov condition). This structure induces a joint probability distribution by the expression p(X_1, . . . , X_n) = Π_i p(X_i|PA_i). We focus on parameter learning in a BN where the structure is known in advance. Let r_i be the number of discrete categories of X_i, q_i the number of distinct assignments to PA_i (that is, q_i = Π_{X_t∈PA_i} r_t), and θ the entire vector of parameters θ_{ijk} = p(x_i^k|pa_i^j), where i = 1, . . . , n, j = 1, . . . , q_i and k = 1, . . . , r_i. Each j in pa_i^j defines a configuration of the parents of X_i. Whenever necessary and for ease of exposition, we use the notation θ_{ijk} = θ_{i{x_{i_1}^{k_1}, . . . , x_{i_t}^{k_t}}k}, meaning the parameter p(x_i^k|x_{i_1}^{k_1}, . . . , x_{i_t}^{k_t}). We also define an order for the states of each variable X_i such that x_i^1 < x_i^2 < . . . < x_i^{r_i} (if necessary, we exchange positions of states).
3.1 Learning Parameters of a BN
Given a dataset D = {D_1, . . . , D_N}, with D_t = {x_{1,t}^{k_1}, . . . , x_{n,t}^{k_n}} a sample of all BN nodes, the goal of parameter learning is to find the most probable values for θ. These values best explain the dataset D, which can be quantified by the log-likelihood function log(p(D|θ)), denoted L_D(θ). Assuming that samples are drawn independently from the underlying distribution, and based on the conditional independence assumptions of BNs, we have

L_D(θ) = log Π_{i=1}^{n} Π_{j=1}^{q_i} Π_{k=1}^{r_i} θ_{ijk}^{n_{ijk}},

where n_{ijk} indicates how many elements of D contain both x_i^k and pa_i^j. If the dataset D is complete, the Maximum Likelihood (ML) estimation method can be described as a constrained optimization problem, i.e. maximize L_D(θ) subject to the simplex equality constraints

∀_{i=1,...,n} ∀_{j=1,...,q_i}   g_{ij}(θ) = Σ_{k=1}^{r_i} θ_{ijk} − 1 = 0,

where g_{ij}(θ) = 0 imposes that the distribution defined for each variable given a parent configuration sums to one over all variable states. This problem has its global optimum solution at θ_{ijk} = n_{ijk}/n_{ij}, where n_{ij} = Σ_{k=1,...,r_i} n_{ijk}.
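For complete data, the closed-form ML solution above is just a normalization of counts; a small sketch (ours), with the CPT of each node stored as an array of counts indexed by (parent configuration j, state k), is given below. The uniform fallback for parent configurations with no data is our placeholder choice; as Example 2 below illustrates, such distributions cannot actually be estimated from the data alone.

import numpy as np

def ml_estimate(counts):
    # counts[i] has shape (q_i, r_i) and holds n_ijk for node i;
    # returns theta[i][j, k] = n_ijk / n_ij.
    thetas = []
    for n_i in counts:
        n_ij = n_i.sum(axis=1, keepdims=True)
        uniform = 1.0 / n_i.shape[1]
        theta_i = np.where(n_ij > 0, n_i / np.maximum(n_ij, 1), uniform)
        thetas.append(theta_i)
    return thetas

# Example: a binary node with 4 parent configurations, two of which are unseen.
# ml_estimate([np.array([[1, 0], [0, 1], [0, 0], [0, 0]])])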
3.2 Qualitative Constraints
Standard likelihood estimations are usually enough if we have enough data. However, when only a small amount of data is available, the likelihood function may not produce reliable estimates for the parameters.
Example 2. Suppose a BN with three binary variables (with categories x_i^1, x_i^2) and the following simple graph: X_1 → X_2 ← X_3. Suppose further that we have the dataset D = {D_1, D_2}, with D_1 = {x_1^1, x_2^1, x_3^1} and D_2 = {x_1^2, x_2^2, x_3^2}. Using the ML estimation, we have the posterior probabilities θ_{101} = θ_{102} = θ_{301} = θ_{302} = 0.5 and θ_{2j_1 1} = θ_{2j_2 2} = 1, with j_1 = {x_1^1, x_3^1}, j_2 = {x_1^2, x_3^2}, j_3 = {x_1^2, x_3^1}, j_4 = {x_1^1, x_3^2}. The posterior probability distributions θ_{2j_3 k} and θ_{2j_4 k} cannot be estimated, as no data about such configurations are available.
Situations like the one in Example 2 could be alleviated by inserting quantitative prior distributions for the parameters. However, acquiring such quantitative prior information may not be an easy task. An incorrect quantitative prior might lead to bad estimation results. For example, standard methods apply quantitative uniform priors. In this case, if no data are present for a given parameter, then the answer would be 0.5, which may be far from the correct value. A path to overcome this situation is through qualitative information. Qualitative knowledge is likely to be available even when quantitative knowledge is not, and tends to be more reliable. For example, someone will hardly make a mistake about the qualitative relation between the sizes of the Earth and the Sun; almost everyone will fail to specify a quantitative ratio (even an approximate one).
Example 3. Suppose, in addition to Example 2, that the following two constraints are known: θ_{302} + θ_{2j_3 1} ≤ 0.7 and θ_{2j_1 1} ≤ θ_{2j_4 2}. With this knowledge, it is likely that θ_{2j_3 1} ≤ 0.2 and θ_{2j_4 2} = 1, reducing the space of possible parameterizations and alleviating the problem with scarce quantitative data.
We define a very general constraint as the basis for our methods: linear relationship constraints define linear relative relationships between sets of weighted parameters and numerical bounds.
Definition 4. Let θ_A be a sequence of parameters, α_A a corresponding sequence of constant numbers and α also a constant. A linear relationship constraint is defined as

h(θ) = Σ_{θ_{ijk}∈θ_A} α_{ijk} · θ_{ijk} − α ≤ 0,    (1)
that is, any linear constraint over parameters can be expressed as a linear relationship constraint. We describe some well-known constraints that can be specified through linear relationship constraints:
– Qualitative influences of Qualitative Probabilistic Networks [10]: they define some knowledge about the state of a variable given the state of another, which roughly means that observing a greater state for a parent X_a of a variable X_b makes greater states of X_b more likely (for any parent configuration except for X_a). Although influences over non-binary variables can be described by linear relationship constraints, we use a simple binary case to illustrate: θ_{b j_2 2} ≥ θ_{b j_1 2} + δ, where j_k = {x_a^k, pa_b^{j*}} and j* is an index ranging over all parent configurations except for X_a. In this case, the greater state is 2, and observing x_a^2 makes x_b^2 more likely. Note that if these constraints hold for δ > 0, the influence is said to be strong with threshold δ [14]. Otherwise, it is said to be weak for δ. A negative influence is obtained by replacing the inequality operator ≥ with ≤ and making the δ term negative. A zero influence is obtained by changing the inequality into an equality.
– Additive synergies of Qualitative Probabilistic Networks [10]: they define a conjugate influence of two parents acting on the child. This means that observing the same configuration for the parents X_a and X_c of
the variable X_b makes a greater state of X_b more likely. An example over binary variables is: θ_{b j_{1,1} 2} + θ_{b j_{2,2} 2} ≥ θ_{b j_{1,2} 2} + θ_{b j_{2,1} 2} + δ, where j_{k_a,k_c} = {x_a^{k_a}, x_c^{k_c}, pa_b^{j*}} and j* ranges over all parent configurations not including X_a nor X_c, and δ ≥ 0 is a constant. This forces the sum of parameters with equal configurations for X_a and X_c to be greater than the sum of parameters with distinct configurations, for all other parent configurations. Again we have exemplified using a binary case, but synergies involving non-binary variables are also linear relationship constraints. Negative and zero additive synergies, as well as strong and weak versions, are obtained analogously.
– Non-monotonic influences and synergies [26]. They happen when constraints hold only for some configurations of the parents. For example, suppose three binary variables such that X_b has X_a and X_c as parents and that θ_{b{x_a^2,x_c^1}2} ≥ θ_{b{x_a^1,x_c^1}2} holds, but θ_{b{x_a^2,x_c^2}2} ≥ θ_{b{x_a^1,x_c^2}2} cannot be stated. Hence we do not have a positive influence of X_a on X_b, because both constraints would need to be valid to ensure that influence. In fact we might realize that the state of X_c is relevant for the influence. In this case, we may state a non-monotonic influence of X_a on X_b that holds when X_c is x_c^1 but not when it is x_c^2. Situational signs [13] and context-specific signs [27] are some examples of non-monotonic constraints that can be encoded as linear relationship constraints.
– Range, intra- and inter-relationship constraints [9]. Range constraints happen when θ_A has only one parameter θ_{ijk} and α_{ijk} = 1. In this case the constraint becomes an upper bound constraint for θ_{ijk} (we can obtain a lower bound using negative α_{ijk} and α). If all parameters involved in a linear relationship constraint share the same node index i and parent configuration j, the constraint is called an intra-relationship constraint. Otherwise, it is an inter-relationship constraint.
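To make the encoding concrete (this sketch is our illustration, not code from the paper), a linear relationship constraint can be stored simply as the parameter indices it touches, their coefficients α_ijk, and the bound α. A positive binary influence θ_{b j_2 2} ≥ θ_{b j_1 2} + δ then becomes one such record with coefficients +1, −1 and bound −δ; the indices and δ below are purely hypothetical.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LinearRelationshipConstraint:
    """h(theta) = sum_k alpha_k * theta[index_k] - alpha <= 0  (Eq. 1)."""
    indices: List[Tuple[int, int, int]]   # (i, j, k) of each parameter theta_ijk
    coeffs: List[float]                   # alpha_ijk for each index
    alpha: float                          # constant bound

    def value(self, theta):
        """Evaluate h(theta); theta maps (i, j, k) -> probability."""
        return sum(c * theta[idx] for idx, c in zip(self.indices, self.coeffs)) - self.alpha

# Positive influence of a binary parent X_a on X_b for one parent context j*:
#   theta_{b, j2, 2} >= theta_{b, j1, 2} + delta
# rewritten in the <= 0 form of Eq. (1):
#   (+1) * theta_{b, j1, 2} + (-1) * theta_{b, j2, 2} - (-delta) <= 0
b, j1, j2, delta = 1, 0, 1, 0.05          # hypothetical indices, for illustration only
influence = LinearRelationshipConstraint(
    indices=[(b, j1, 2), (b, j2, 2)],
    coeffs=[+1.0, -1.0],
    alpha=-delta,
)
```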
4
Learning through Convex Optimization
Constraints of the previous section can be used to describe our knowledge. As the log likelihood function is concave (a positive sum of concave functions is also concave) and we need to maximize it, our problem is in fact a constrained convex minimization program [28]:

min_θ − ∑_{i,j,k} n_{ijk} · log θ_{ijk}    subject to    (2)
∀_{t=1,...,m}  h_t(θ) ≤ 0,
∀_{i=1,...,n} ∀_{j=1,...,q_i}  g_{ij}(θ) = 0,

where m is the number of linear relationship constraints, and g_{ij} are the simplex constraints. To exactly solve such a convex minimization program, there are many optimization algorithms. We can use specialized interior point solvers [29] or even some general optimization ideas [30], because convex programming has the attractive property that any local optimum is also a global optimum.
Furthermore, such a global optimum can be found in time polynomial in the size of the input [28]. We employ the Mosek software [29] to solve our convex programs. In fact, non-linear convex constraints are also allowed, as convex optimization will still find the global optimum in polynomial time. On the other hand, non-convex constraints imply losing such properties. Hence, we allow constraints that are as general as possible while keeping the problem tractable.
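As an illustration only (the paper solves these programs with the Mosek interior-point solver; here a generic SciPy routine is used just to show the structure), the following sketch minimizes the negative log likelihood of a toy node with two parent configurations under the simplex constraints and one hypothetical qualitative constraint.

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance of Program (2): parameters flattened as theta = [th_11, th_12, th_21, th_22],
# counts n_ijk, and one qualitative constraint th_12 <= th_22 (all values hypothetical).
counts = np.array([3.0, 1.0, 0.0, 0.0])                      # second configuration unobserved

neg_log_lik = lambda th: -np.sum(counts * np.log(th + 1e-12))

constraints = [
    {"type": "eq",   "fun": lambda th: th[0] + th[1] - 1.0},  # simplex constraint g_1(theta) = 0
    {"type": "eq",   "fun": lambda th: th[2] + th[3] - 1.0},  # simplex constraint g_2(theta) = 0
    {"type": "ineq", "fun": lambda th: th[3] - th[1]},        # h(theta) = th_12 - th_22 <= 0
]

res = minimize(neg_log_lik, x0=np.full(4, 0.5),
               bounds=[(1e-6, 1.0)] * 4,
               constraints=constraints, method="SLSQP")
print(res.x.reshape(2, 2))
```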
4.1
Incomplete Data
Incomplete data means that some fields of the dataset are unknown. If the dataset is D = {D_1, . . . , D_N}, then each D_t ⊆ {x_{1,t}^{k_1}, . . . , x_{n,t}^{k_n}} is a sample of some BN nodes. We say that u_t is the missing part in tuple t, that is, u_t ∩ D_t = ∅ and u_t ∪ D_t is a complete instantiation of all BN nodes. Let U be the set of all missing data. In this case, the likelihood function log(p(D|θ)) is not a simple product anymore, and the corresponding optimization program is not convex. A common method to overcome this situation is the standard EM algorithm [8], which starts from some initial guess, and then iteratively takes two types of steps (E-steps and M-steps) to get a local maximum of the likelihood function. Particularly for discrete nodes, the E-step computes the expected counts for all parameters, and the M-step estimates the parameters by maximizing the log likelihood function given the counts from the E-step, just as would be done with a complete dataset. The EM algorithm converges to a local maximum under very few assumptions [31]. Assume θ^0 is an initial guess for the parameters, and θ^t denotes the estimate after t iterations, t = 1, 2, .... Then, each iteration of EM can be summarized as follows:
– E-step: compute the expectation of the log likelihood given the observed data D and the current estimate of the parameters θ^t: Q(θ|θ^t) = E_{θ^t}[log p(U ∪ D|θ) | θ^t, D].
– M-step: find the new parameter θ^{t+1} which maximizes the expected log likelihood computed in the E-step: θ^{t+1} = arg max_θ Q(θ|θ^t).
We propose to extend EM with the formulation of Program (2), that is, the M-step is performed using convex programming. So, θ^{t+1} is arg max_θ Q(θ|θ^t), subject to the linear relationship and simplex constraints, and a polynomial time solver can be employed. Because the parameter space is convex and the enhanced M-step produces a globally optimal solution for the current counts, this modified EM shares the convergence and optimality properties of the standard EM algorithm [31]. Although the modified EM is more computationally expensive than standard EM (but still polynomial), as each M-step requires the solution of a convex optimization program (standard EM may use a closed-form solution for ML), we argue that, just as in standard EM, where an improving solution is enough instead of an optimal one (the so-called Generalized EM), we might stop the convex programming as soon as an improving solution is found.
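A structural sketch of this constrained EM loop is given below (our illustration, not the authors' code). The callables `expected_counts` and `constrained_mstep` are placeholders: the first would run BN inference to compute expected counts given θ^t, and the second would solve the convex program of this section on those counts, e.g. with a generic constrained solver like the sketch above.

```python
def constrained_em(data, theta0, expected_counts, constrained_mstep,
                   max_iters=100, tol=1e-6):
    """EM with a constrained M-step; theta is a dict mapping (i, j, k) -> probability."""
    theta = theta0
    for _ in range(max_iters):
        counts = expected_counts(data, theta)       # E-step: E_theta[n_ijk | D]
        new_theta = constrained_mstep(counts)       # M-step: maximize Q subject to constraints
        if max(abs(new_theta[k] - theta[k]) for k in theta) < tol:
            theta = new_theta
            break
        theta = new_theta
    return theta
```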
5
Experiments
In order to test the performance of our method against standard ML estimation and the standard EM algorithm given scarce and incomplete data, we use randomly generated networks, take one network parametrization as our “truth”, and then generate samples from that network. After training the models, we apply the Kullback-Leibler (KL) divergence criterion to measure the difference between the joint probability distributions induced by the generated networks and the distributions of the true networks. We conduct experiments for datasets with 100 and 1000 samples, using random linear relationship constraints with 2 to 8 terms in the summations. The constraints are created using the true network (so they are certainly correct), in number at most equal to the number of probability distributions in the corresponding network. For each configuration, we work with twenty random sets of data and qualitative constraints. Our results show that in most cases the divergence is substantially reduced (almost 40% average reduction in the divergence) when constraints are employed, which shows that they are actively used during learning. Most importantly, harder problems benefit the most: scarce, incomplete data with constraints performed better than large sample sets without constraints: we verified reductions greater than 100 times in the amount of data needed to achieve the same accuracy results.
We now consider the problem of recognizing facial action units from real image data [18]. Based on the Facial Action Coding System [16], facial behaviors can be decomposed into a set of action units (denoted as AUs), which are related to contractions of specific sets of facial muscles. In this work, we intend to recognize 14 commonly occurring AUs.1 We have chosen these AUs because they appear often in the literature, so it is possible to properly compare our methods with others. There are semantic relationships among them. Some AUs happen together to show a meaningful facial expression: AU6 (cheek raiser) tends to occur together with AU12 (lip corner puller) when someone is smiling. On the other hand, some AUs may be mutually exclusive: AU25 (lips part) never happens simultaneously with AU24 (lip presser) since they are activated by the same muscles but with opposite motion directions. Instead of recognizing each AU individually, a probabilistic network can be employed to explicitly model the relationships among AUs [7]. A BN with 14 hidden nodes is employed, where each node is associated with an AU. The states of the AUs are 1 (activated) and 0 (deactivated). Figure 1 depicts the structure of the BN. Note that every link between nodes has a sign, which is provided by a domain expert. Signs indicate whether there is a positive or negative qualitative influence between AUs and will be discussed later. For example, it is difficult to do AU2 (outer brow raiser) alone without performing AU1 (inner brow raiser), but we can do AU1 without AU2. Hence, a positive influence from AU2 to AU1 is stated. Furthermore, 14 measurement nodes (unshaded in Figure 1,
1
AU1 (inner brow raiser), AU2 (outer brow raiser), AU4 (brow lowerer), AU5 (upper lid raiser), AU6 (cheek raiser and lid compressor), AU7 (lid tightener), AU9 (nose wrinkler), AU12 (lip corner puller), AU15 (lip corner depressor), AU17 (chin raiser), AU23 (lip tightener), AU24 (lip presser), AU25 (lips part), and AU27 (mouth stretch).
Fig. 1. Network for the AU recognition problem
one for each AU) represent classification results derived from computer vision techniques. Links between AU and measurement nodes represent uncertainties in the classifications. To obtain the measurement for each AU, first the face and eyes are detected in the images, and the face region is extracted and normalized based on the detected eye positions. Then each AU is detected individually by a two-class AdaBoost classifier with Gabor wavelet features [32]. The output of the AdaBoost classifier is employed as the AU measurement in the BN model. To parametrize the BN, training data is needed. However, it may be difficult to get enough training data to learn these parameters. The effort for training human experts and manually labeling the AUs is expensive and time consuming, and the reliability of manually coding AUs is inherently attenuated by the subjectivity of the human coder. Furthermore, some AUs rarely occur. Thus, the training data can be incomplete, biased and scarce, which may cause low learning accuracy. Even though quantitative data are very important, combining them with qualitative knowledge may improve learning accuracy. Sometimes it is easier to derive qualitative relations between AUs than to fully label the data. Parameter learning is performed using qualitative influences obtained from experts. They are described in Figure 1 (positive and negative signs mean positive and negative influences, respectively) and processed using linear relationship constraints. They are mainly based on physiological aspects:
– Mouth stretch increases the chance of lips part; it decreases the chance of cheek raiser and lid compressor and lip presser.
– Cheek raiser and lid compressor increases the chance of lip corner puller.
– Outer brow raiser increases the chance of inner brow raiser.
– Upper lid raiser increases the chance of inner brow raiser and decreases the chance of nose wrinkler.
– Nose wrinkler increases the chance of brow lowerer and lid tightener.
– Lip tightener increases the chance of lip presser.
– Lip presser increases the chance of lip corner depressor and chin raiser.
We further extract some generic constraints: AU27 has a small probability of happening, so p(AU27 = 1) ≤ p(AU27 = 0); if AUi has more than one parent
Fig. 2. Difference between unconstrained and constrained percentage rates of false negative and false positive alarms for AU recognition with complete data (but with possible mislabeling). 100 samples were used in the left graph and 1000 samples in the right graph.
node and all of them have positive influence, then p(AUi = 1|pa(AUi) = 1) ≥ 0.8, where pa(AUi) = 1 means the configuration where all parents are activated; if AUi has more than one parent node and all of them have negative influence, then p(AUi = 1|pa(AUi) = 1) ≤ 0.2. Note that these numerical assessments are conservative, as we expect the real probabilities to be greater than 0.8 (or less than 0.2, respectively). Conservative assessments are much more likely to be valid. Furthermore, a domain expert provides ranges (usually tight) for p(Oi = 1|AUi = 1) and p(Oi = 0|AUi = 0), which represent the accuracy of the classifiers. The 8000 images used in the experiments are collected from Cohn and Kanade's DFAT-504 database [33]. We work with three datasets: one generated from computer vision measurements (used as evidence for testing) and two from human labeling (used for training), where one is complete (but with possible incorrect labels) and the other is incomplete (uncertain labels were removed). Thus, in some sense, the incomplete data are more precise. We consider training data with 100 and 1000 samples. Testing is performed over 20% of the data (not chosen for training). This database was chosen because of its size and the existence of results in the literature, so that our approach can be fully compared to others and across different amounts of data. Figure 2 shows the recognition results for complete data. For each AU, black bars indicate the percentage difference between the false negative rates of standard and constrained ML (a positive result means that the constrained version is better). White bars are differences between false positive percentages. The accuracy using qualitative constraints is improved, especially with scarce data. For 100 samples, the average false negative rate using standard ML is 28%, with an average false positive rate of 6.5%. The constrained version obtains 17.8% false negatives, with 6.6% false positives. We have a 10.2% improvement in the false negative rate, without a considerable increase in the false positive rate. With 1000 samples, standard ML has 20.8% false negatives and 6.7% false positives, while the
constrained version has 16.8% and 6.4%, respectively. The decrease is 4% in false negatives, with a decrease also in the false positive rate. Moreover, we emphasize that more than 3000 samples (without constraints) are needed to achieve the same average results as those of 100 samples with constraints (a reduction greater than 30 times in the amount of data). Figure 3 shows results for incomplete data, using standard and constrained EM. Black bars indicate differences between false negative rates while white bars are differences between false positive rates. Again, the constrained version obtains better overall results. For 100 samples, the average false negative rate using standard EM is 16.7%, with a false positive rate of 7.1%. The constrained version obtains a false negative rate of 15.3%, with 6.8% false positives. So, we have a 1.4% improvement in the false negative rate, along with an improvement in the false positive rate. With 1000 samples, standard EM has 16.6% average false negatives and 6.4% average false positives, while the constrained version obtains 14.8% and 6.5%, respectively. This represents an overall recognition rate (percentage of correctly classified cases) of 93.7%. These last results are comparable to state-of-the-art results. For instance, Tong et al. [7] use more sophisticated models such as Dynamic Bayesian networks and employ more data for training, achieving an overall recognition rate of 93.3%. Bartlett et al. [17] report 93.6%, and other state-of-the-art methods [32,34,35] have results with slight variance, even using more data for training. We further emphasize some points: 1) although the average rate gain is not large, we have a great gain in AU9, because it has many missing data and the constraints are fully exploited; 2) overall accuracy with incomplete data is better than that with complete data because the removed labels were uncertainly labeled by human experts, so the chance of labeling error in such cases is high, and the incomplete data have no such errors, which justifies the better accuracy; 3) our methods are general learning procedures that can be straightforwardly applied to other problems. Still, they produce results as good as those of state-of-the-art methods for the AU recognition problem.
Fig. 3. Difference between unconstrained and constrained percentage rates of false negative and false positive alarms for AU recognition with incomplete data. 100 samples were used in the left graph and 1000 samples in the right graph.
We have also explored spontaneous facial expression recognition. The problem is usually much harder, as people are not posing for the camera and data are even scarcer. We have collected 1350 complete samples for training and 450 samples for testing from the Belfast natural facial expression database [36] and internet repositories (e.g. the Multiple Aspects of Discourse Research Lab at the University of Memphis, http://madresearchlab.org/). Using an automatically learned structure, the constrained version obtains an average false negative rate of 28.4% (a decrease of 6.2% with respect to the unconstrained version), with a considerably low average false positive rate of 5.9% (a small increase of 0.6% with respect to the unconstrained version). Although the relationships learned from posed facial expressions may bias the recognition for the spontaneous problem and there is a clear need to refine the system and correct some constraints by using spontaneous data, the initial experiments seem promising. A deeper exploration of qualitative constraints in spontaneous datasets is left for future work.
6
Conclusion
This paper presents a framework for parameter learning when qualitative knowledge is available, which is especially important for scarce data. Even with enough data, qualitative constraints may help to guide the learning procedures. For complete data, we directly apply convex optimization to obtain a global optimum of the constrained maximum likelihood estimation, while for incomplete data, we extend the EM method by introducing a constrained maximization in the M-step. We have applied our methods to a real world computer vision problem of recognizing facial actions. For this study, constraints were elicited from domain experts. The results show that with some simple qualitative constraints from domain experts and using only a fraction of the full training data set, our method can achieve results equivalent to conventional techniques that use the full training data set. This not only demonstrates the usefulness of our work for a real world problem but also indicates its practical importance, since for many applications it is often difficult to obtain enough representative training data. Our experiments show one important application, but these techniques certainly have practical implications for other computer vision problems. Hence, future work may apply the ideas to other datasets with spontaneous facial expressions for action recognition, and also to other problems such as image segmentation and body tracking. Besides that, we plan to explore other properties of the problem structure to develop and improve learning ideas based on nonlinear optimization procedures. Although the idea of using convex optimization for solving parameter learning with qualitative constraints may seem simple, to our knowledge no deep investigation of such properties has been conducted. We see the simplicity of the methods as an important characteristic, because they can be promptly applied to many real problems. While many proposals in the literature try to find specialized methods that only deal with specific constraints, we propose to use convex programming as a systematic framework for parameter learning that deals with a wide range of constraints. Finally, some words about
feasibility and the use of wrong constraints are worth mentioning: if the constraints are valid, infeasible problems never happen. We have assumed that the constraints are valid, which is reasonable as we have worked with very general constraints. A systematic study of possible wrong constraints is left for future work.
References 1. Ji, Q., Luo, J., Metaxas, D., Torralba, A., Huang, T., Sudderth, E. (eds.): Special Issue on Probabilistic Graphical Models in Computer Vision, IEEE Transactions on Pattern Analysis and Machine Intelligence (2008), http://www.ecse.rpi.edu/homepages/qji/PAMI GM.html 2. Triggs, B., Williams, C. (eds.): Special Issue on Probabilistic Models for Image Understanding. International Journal of Computer Vision (2008), http://visi.edmgr.com/ 3. Delage, E., Lee, H., Ng, A.: A dynamic bayesian network model for autonomous 3d reconstruction from a single indoor image. In: Proc. of the IEEE International Conference on Computer Vision and Pattern Recognition (2006) 4. Zhou, Y., Huang, T.S.: Weighted Bayesian network for visual tracking. In: Proc. of the International Conference on Pattern Recognition (2006) 5. Mortensen, E., Jia, J.: Real-time semi-automatic segmentation using a Bayesian network. In: Proc. of the IEEE International Conference on Computer Vision and Pattern Recognition (2006) 6. Zhang, Y., Ji, Q.: Active and dynamic information fusion for facial expression understanding from image sequence. IEEE Trans. on Pattern Analysis and Machine Intelligence 27(5), 699–714 (2005) 7. Tong, Y., Liao, W., Ji, Q.: Facial action unit recognition by exploiting their dynamic and semantic relationships. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1683–1699 (2007) 8. Dempster, A., Laird, N., Rubin, D.: Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B 39(1), 1–38 (1977) 9. Niculescu, R.S.: Exploiting Parameter Domain Knowledge for Learning in Bayesian Networks. PhD thesis, Carnegie Mellon (2005) CMU-CS-05-147 10. Wellman, M.P.: Fundamental concepts of qualitative probabilistic networks. Artificial Intelligence 44, 257–303 (1990) 11. Wellman, M.P., Henrion, M.: Explaining explaining away. IEEE Transactions on Pattern Analysis and Machine Intelligence 15, 287–307 (1993) 12. van der Gaag, L.C., Bodlaender, H.L., Feelders, A.: Monotonicity in Bayesian networks. In: UAI, pp. 569–576. AUAI Press (2004) 13. Bolt, J.H., van der Gaag, L.C., Renooij, S.: Introducing situational influences in QPNs. In: Nielsen, T.D., Zhang, N.L. (eds.) ECSQARU 2003. LNCS (LNAI), vol. 2711, pp. 113–124. Springer, Heidelberg (2003) 14. Renooij, S., van der Gaag, L.C.: Enhancing QPNs for trade-off resolution. In: UAI, pp. 559–566 (1999) 15. Pantic, M., Bartlett, M.: Machine analysis of facial expressions, pp. 377–416. I-Tech Education and Publishing, Vienna, Austria (2007) 16. Ekman, P., Friesen, W.V.: Facial action coding system: A technique for the measurement of facial movement. Consulting Psychologists Press, Palo Alto (1978)
17. Bartlett, M.S., Littlewort, G.C., Frank, M.G., Lainscsek, C., Fasel, I., Movellan, J.R.: Automatic Recognition of Facial Actions in Spontaneous Expressions. Journal of Multimedia 1(6), 22–35 (2006) 18. Pantic, M., Rothkrantz, L.J.M.: Automatic analysis of facial expressions: The state of the art. IEEE Trans. Pattern Analysis and Machine Intelligence 22, 1424–1445 (2000) 19. Tian, Y., Kanade, T., Cohn, J.: Facial expression analysis. Springer, Heidelberg (2004) 20. Jordan, M. (ed.): Learning Graphical Models. The MIT Press, Cambridge (1998) 21. Wittig, F., Jameson, A.: Exploiting qualitative knowledge in the learning of conditional probabilities of Bayesian networks. In: UAI, pp. 644–652 (2000) 22. Altendorf, E., Restificar, A.C., Dietterich, T.G.: Learning from sparse data by exploiting monotonicity constraints. In: UAI, pp. 18–26 (2005) 23. Feelders, A., van der Gaag, L.C.: Learning Bayesian network parameters under order constraints. International Journal of Approximate Reasoning 42(1-2), 37–53 (2006) 24. Niculescu, R.S., Mitchell, T., Rao, B.: Bayesian network learning with parameter constraints. Journal of Machine Learning Research 7(Jul), 1357–1383 (2006) 25. de Campos, C.P., Cozman, F.G.: Belief updating and learning in semi-qualitative probabilistic networks. In: UAI, pp. 153–160 (2005) 26. Renooij, S., van der Gaag, L.C.: Exploiting non-monotonic influences in qualitative belief networks. In: IPMU, Madrid, Spain, pp. 1285–1290 (2000) 27. Renooij, S., van der Gaag, L.C., Parsons, S.: Context-specific sign-propagation in qualitative probabilistic networks. Artificial Intelligence 140, 207–230 (2002) 28. Ben-Tal, A., Nemirovski, A.: Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. MPS/SIAM Series on Optimization. SIAM (2001) 29. Andersen, E.D., Jensen, B., Sandvik, R., Worsoe, U.: The improvements in mosek version 5. Technical report, Mosek Aps (2007) 30. Murtagh, B.A., Saunders, M.A.: Minos 5.4 user’s guide. Technical report, Systems Optimization Laboratory, Stanford University (1995) 31. Wu, C.F.J.: On the convergence properties of the EM algorithm. The Annals of Statistics 11(1), 95–103 (1983) 32. Bartlett, M.S., Littlewort, G., Frank, M.G., Lainscsek, C., Fasel, I., Movellan, J.R.: Recognizing facial expression: Machine learning and application to spontaneous behavior. In: Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, 2nd edn., pp. 568–573 (2005) 33. Kanade, T., Cohn, J.F., Tian, Y.: Comprehensive database for facial expression analysis. In: IEEE International Conference on Automatic Face and Gesture Recognition, pp. 46–53 (2000) 34. Valstar, M.F., Patras, I., Pantic, M.: Facial action unit detection using probabilistic actively learned support vector machines on tracked facial point data. In: Proc. IEEE Int. Conf. Computer Vision and Pattern Recognition, Workshop Vision for Human-Computer Interaction (2005) 35. Tian, Y., Kanade, T., Cohn, J.: Recognizing action units for facial expression analysis. IEEE Trans. Pattern Analysis and Machine Intelligence 23(2), 97–115 (2001) 36. Douglas-Cowie, E., Cowie, R., Schroeder, M.: The description of naturally occurring emotional speech. In: Int’l Congress of Phonetic Sciences (2003)
Robust Scale Estimation from Ensemble Inlier Sets for Random Sample Consensus Methods
Lixin Fan1 and Timo Pylvänäinen2
1
Nokia Research Center
[email protected] 2 Nokia Research Center
[email protected]
Abstract. This paper proposes a RANSAC modification that performs automatic estimation of the scale of inlier noise. The scale estimation takes advantage of accumulated inlier sets from all proposed models. It is shown that the proposed method gives robust results for data with high outlier ratios, even though no user-specified threshold is required. The method also improves sampling efficiency, without requiring any auxiliary information other than the data to be modeled.
1
Introduction
Many computer vision problems boil down to fitting noisy data with parametric models. The problem is difficult due to the presence of outliers, i.e., data points that do not satisfy the underlying parametric models. The solution is often formulated as maximum a posteriori (MAP) estimation of the model parameter θ, given input data D and a user-specified threshold ε differentiating outliers from inliers:

[θ̂] = arg max_θ p(θ|D, ε).    (1)
For specific problems, one is able to evaluate the likelihood function L(θ|D, ε) (or simply L(θ)) at certain discrete values of θ. From Bayesian theorem, the posterior distribution is partially known up to a normalization constant1 : p(θ|D, ε) ∝ L(θ|D, ε)p(θ).
(2)
If the prior distribution p(θ) is assumed to be uniform, one can simply seek maximum likelihood (ML) estimation: ˆ = arg max L(θ|D, ε). [θ] θ
(3)
For many computer vision problems, data with high outlier ratio renders the standard least squares fitting useless. The likelihood function L(θ|D, ε) has 1
The likelihood function is linked to the conditional probability p(D|θ, ε) by a normalization constant L(θ|D, ε) = Z · p(D|θ, ε). This constant Z is irrelevant in the optimization problem.
to account for the statistical nature of inlier and outlier models. Often a set of indicator variables γ = [γ_1 ... γ_N] is assigned such that γ_i = 1 if the ith corresponding data point is an inlier, and γ_i = 0 otherwise. For instance, a mixture model of Gaussian and uniform distributions is used in the context of multi-view geometry estimation [1]:

L(θ|D, ε) = ∑_{i=1}^{N} log[ γ_i (1/√(2πσ²)) exp(−e_i²/(2σ²)) + (1 − γ_i) (1/v) ]
(4)
where e_i stands for the ith data point error, and v is the volume of the uniform distribution for outliers. While γ depends on (θ, D, ε), it is difficult to come up with an analytic formula γ(θ, D, ε) and optimize (3) directly. Instead, the Random Sample Consensus (RANSAC) algorithm [2] and its numerous variants [1,3,4,5,6] are used to make a robust estimate of the inlier set, i.e., γ, followed by a final refinement of the model parameters. One crucial issue concerning RANSAC is its sensitivity to the user-specified threshold, i.e., the scale of inlier noise ε. If no knowledge about the scale is available, one has to simultaneously estimate the model parameter and the scale from the given data, which is the principal problem to be solved in this paper:

[θ̂, ε̂] = arg max_{θ,ε} L(θ, ε|D).
(5)
Since for many practical problems the number of permissible models is often prohibitively huge, it is impossible to perform exhaustive search. RANSAC approach repeatedly evaluates L only at certain randomly drawn sample models and the best model thus far is retained as the optimal solution. The random nature of RANSAC brings about the concern of sampling efficiency, i.e., how likely a new proposed model is better than already explored models. This problem has been recognized and methods of using guided sampling have been proposed [4,5,7,8,9,10]. The contribution of this paper is twofold. Firstly we present a robust method which estimates scales of inlier noise in the course of random sampling. Scales are derived from statistics of repeated inlier data points accumulated from all proposed models. The use of accumulated inlier points, i.e., Ensemble Inlier Sets (EIS), provides a simple and efficient computational tool for classifying inliers vs. outliers. The probabilistic inlier/outlier classification is then used to attenuate effects of extreme outliers. The resulting scale estimator, i.e., Weighted Median Absolute Deviation (WMAD), considerably improves the accuracy of fitting results. Secondly, this paper proposes to use Ensemble Inlier Sets for guided sampling and demonstrates improved sampling efficiency for different fitting problems. The proposed guided sampling approach does not require any auxiliary information such as match scores and can be used in many different fitting problems.
1.1
Previous Work
The least median of squares (LMS) algorithm is widely used to obtain an estimate of the standard deviation of the inlier data error [11]. The Median Absolute Deviation (MAD) was used by Torr et al. to estimate inlier noise scales in the context of multi-view matching [12]. For data with high outlier ratios (> 0.5), the MAD estimator breaks down and tends to overestimate, as observed by Chen et al. [13] and by the present authors. In [6,13], scales were automatically derived from kernel density estimates by using a variable bandwidth mean-shift procedure [14]. Wang et al. [15] proposed an iterative procedure alternating between estimating a scale and using it to evaluate proposed models. Scale estimation was, again, based on a mean-shift procedure with bandwidths linked to sample standard deviations. Singh [16] used a Kernel Maximum Likelihood estimator for decomposing range images into a parametric model with additive unknown-scale noise. RANSAC sampling can be significantly improved by using information other than the data to be modeled. For instance, the methods in [4,5] used matching scores from the putative matching stage to classify inliers and achieved great computational savings. Although effective in practice, these methods were restricted to the multi-view matching problem due to the inclusion of domain-specific knowledge, i.e., matching scores. It is unclear whether the same trick can be applied to other fitting problems, e.g. ellipse detection. Other research efforts to use guided sampling include [7,8,9]. Our contribution along this direction follows our previous work [10], in which a simple data point weighting method favored a local hill climbing search instead of blind random sampling. The method presented in this paper is similar in spirit, but in contrast to [10], data point weights are derived from ensemble inlier sets instead of the best inlier set only. This modification gracefully improves the capability of dealing with local optima for the ellipse fitting problem.
2
Robust Scale Estimators
Let us define a model as a parameterized function f (x | θ), where x is a data point and θ is an unknown model parameter. We aim to find models in a set of N points xn ∈ D, n ∈ 1, 2, ...N. A point xn is said to perfectly fit the model when f (xn | θ) = 0. For a subset of s points, there always exists a unique model θ for which all s points fit perfectly2 . We refer to such a set as a minimal subset and s as the model dimensionality. For instance in line fitting, two points would constitute a minimal subset, i.e., s = 2, while in ellipse fitting s = 5. A 2
Without loss of generality, we assume that the points in question are not degenerate.
permissible model of D is the one defined by a minimal subset. And all permissible models form a finite discrete model set Θ. Given a scale parameter ε, we say that the inlier set for model θ is Iε (θ) = {x ∈ D | |f (x | θ)| ≤ ε},
(6)
in which f(x | θ) is the error of fitting x with θ, and | · | is the absolute value of the error. The value of ε is related to the scale of inlier noise. The inlier set is referred to as the perfect set I_0(θ) of model θ when ε = 0. For the standard RANSAC approach [2], the likelihood L is simply taken as the cardinality of the inlier set, i.e., L = |I_ε(θ)|. More robust results are obtained by taking into account how well inlier points fit the estimated model [1,3]. If the scale ε is unknown and can be chosen freely, the number of inliers takes its maximum as ε̂ → ∞. One obvious solution to this problem is to use the likelihood function suggested in [15]

L(θ, ε|D) = |I_ε(θ)| / ε.    (7)
Unfortunately, this likelihood function may bring about another degenerate solution, ε̂ → 0, when any one of the minimal subsets is selected. As a robust solution to avoid these degenerate cases, one could estimate the median absolute deviation (MAD) of the fitting error

ε̂_MAD = median(| f(x_n|θ) − median(f(x_n|θ)) |);  x_n ∈ D
(8)
and use its inverse as the likelihood

L(θ|D) = 1 / ε̂_MAD.    (9)
Notice that ε̂_MAD is not a free parameter but a maximum likelihood estimate given D and θ [17]. MAD as such is a very robust scale estimator with breakdown point 0.5 and a bounded influence function, and it is often used as an ancillary scale estimate for iterative M-estimators [18]. If the inlier distribution is assumed to be Gaussian, one could estimate the standard deviation:

σ̂ = ε̂_MAD / Φ^{-1}(3/4) = 1.4826 · ε̂_MAD.    (10)
In many computer vision applications, however, MAD tends to over-estimate when the outlier ratio is high (> 0.5). As a modification to MAD, the Weighted Median Absolute Deviation (WMAD) can be used to attenuate the effects of extreme outliers:

ε̂_WMAD = WM(|f(x_n|θ) − WM(f(x_n|θ), w_n)|, w_n);  x_n ∈ D
(11)
in which w_n, n ∈ [1...N], is a non-negative scalar, proportional to the probability of each data point being an inlier. Like ε̂_MAD, the scale estimator in (11) is not a
free parameter, but a quantity uniquely determined by D, θ and the weights W = [w_1 w_2 ... w_N]. How to assign the weight w_n to each data point is of crucial importance for robust estimation of scales. It is worth mentioning that the weighted median was first used by Laplace to characterize the solution of the bivariate computational problem [19]. The term "weighted median", however, was due to Y.F. Edgeworth, who formulated the weighted median as the optimal solution minimizing the weighted sum of absolute residuals for any given ordered sample s_1, ..., s_n and associated weights w_1, ..., w_n [19]. The weighted median is simply s_m such that m = min{j | ∑_{i=1}^{j} w_i ≥ ∑_{i=1}^{n} w_i / 2}. This definition leads to an efficient implementation of WM, which is adopted in our work.
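As a minimal illustration (ours, not the authors' Matlab code), the weighted median defined above and the WMAD estimator of (11) can be implemented by sorting the residuals and accumulating the weights:

```python
import numpy as np

def weighted_median(values, weights):
    """Return s_m with m = min{ j : sum_{i<=j} w_i >= (sum_i w_i) / 2 } on the sorted sample."""
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights, dtype=float)[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, cum[-1] / 2.0)]

def wmad(residuals, weights):
    """Weighted Median Absolute Deviation of Eq. (11)."""
    med = weighted_median(residuals, weights)
    return weighted_median(np.abs(np.asarray(residuals) - med), weights)

# With uniform weights this reduces to the ordinary MAD; the 1.4826 factor of
# Eq. (10) converts it to a Gaussian standard-deviation estimate.
r = np.array([0.1, -0.2, 0.05, 0.15, 8.0, -9.0])
w = np.array([1, 1, 1, 1, 0.1, 0.1])      # low weights attenuate the two extreme outliers
print(1.4826 * wmad(r, w))
```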
3
Ensemble Inlier Sets
It follows from (6) that for any given data set D and scale ε, there exists a mapping from a model parameter θ_i to its inlier set I_ε(θ_i). The inlier set can be represented by an indicator vector γ(θ_i) = [γ_1 ... γ_N] with each component

γ_n = 1 if x_n ∈ I_ε(θ_i), and γ_n = 0 otherwise, for n ∈ 1, 2, ..., N.    (12)

When many new models are proposed, the corresponding inlier sets can be accumulated by summing up γ(θ_i). We end up with a weight vector W = [w_1 ... w_N] associated with all data points:

W = ∑_{i=1}^{T} γ(θ_i).    (13)
The weight for each data point counts how many times the data point has been selected as an inlier by the T models. Notice that accumulated inlier sets are not the union of inlier sets, since repeated inlier data points are counted multiple times. For this reason, we refer to it as Ensemble Inlier Sets (EIS). In the following sections, we explain its rationale and elaborate how to use EIS for robust scale estimation and efficient sampling.
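A small sketch of this accumulation (ours, not the paper's code), assuming a per-point residual vector for each proposed model:

```python
import numpy as np

def inlier_indicator(residuals, eps):
    """gamma(theta_i): 1 for points with |f(x_n | theta_i)| <= eps, else 0 (Eq. 12)."""
    return (np.abs(residuals) <= eps).astype(float)

def update_weights(W, residuals, eps):
    """Ensemble Inlier Sets: W accumulates the indicator vectors of every model (Eq. 13)."""
    return W + inlier_indicator(residuals, eps)

# Hypothetical usage over a few proposed models:
N = 6
W = np.zeros(N)
for residuals in ([0.1, 0.2, 5.0, 0.3, 7.0, 0.2],
                  [0.2, 0.1, 4.0, 0.1, 6.0, 8.0],
                  [3.0, 0.3, 0.2, 0.2, 5.0, 0.1]):
    W = update_weights(W, np.array(residuals), eps=1.0)
print(W)      # points repeatedly selected as inliers get higher weights
```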
3.1
Robust Scale Estimation Using Ensemble Inlier Sets
Given a scale threshold ε, the set of associate models for data point x is defined as the collection of permissible models that accept x as its inlier, i.e., Aε (x) = {θ ∈ Θ | |f (x | θ)| ≤ ε}.
(14)
For all inliers x_k in the set I_ε(θ∗), the corresponding associate models intersect and the intersection contains θ∗:

θ∗ ∈ ∩_{x_k ∈ I_ε(θ∗)} A_ε(x_k).    (15)
If θ∗ is the optimal model and an appropriate ε is given, the associate models of inliers A_ε(x_k) contain many suboptimal models in the vicinity of θ∗. In contrast, the associate models of outliers contain fewer permissible models. Figure 1 illustrates a 2D line fitting problem with associate models of inliers and outliers. It shows that the relative number of associate models, i.e., the count of how many times the data points in question are selected as inliers by all permissible models, can be used to identify inliers of the optimal model. It also shows that classification based on this quantity is insensitive to different scales. While it is practically impossible to enumerate all associate models for high dimensional problems, in this work the Ensemble Inlier Set W is used instead as an estimate of the number of associate models. Throughout the random sampling iterations, each sampled model θ_i, together with W, gives rise to an estimate of the scale ε̂_i according to (11). Since the likelihood function is defined as the inverse of ε̂, the maximum likelihood scale is taken as the minimum of all estimated scales:

ε̂ = min_i(ε̂_i).
(16)
The estimated ε̂ is then used to update W according to (12) and (13) in the next iteration.
Hough domain
9
12
8
10 8
7 X
1
6
6
|A (X )|=114
4
ε
r
5
1
|Aε(X2)|=42
2
4 0 3
−2
2
X2
−4
1
−6 0
2
4
6
8
−8
10
220
220
200
200
180
180
160
160
140
140
|A(outliers)|
|A(inliers)|
0
120 100
−0.5
0 α
4
5 6 outliers
0.5
1
1.5
100 80
60
60
40
40
0
−1
120
80
20
−1.5
20 1
2
3
4
5
6 inliers
7
8
9
10
0
1
2
3
7
8
9
10
Fig. 1. Upper Left: Line fitting with 10 inliers (red circles) and 10 outliers (blue dots). Red dotted line denotes the optimal model, blue dotted line the optimal model ±ε(= 0.45). X1 is an inlier and X2 outlier. Upper Right: Associate models plotted in Hough domain. XY axes represent model parameters, i.e., normal vector angle α and distance to origin r. Inlier X1 has 114 associate models (red circles) which contain the optimal model (red star). Outlier X2 has 42 associate models (blue triangles), 224 green dots denote the rest of permissible models. Bottom Left: Number of associate models for 10 inliers, when overestimated (dotted), true (dashed) and underestimated (solid) scales = {2, 1, 0.5} ∗ ε are used. Bottom Right: Number of associate models for 10 outliers.
Fig. 2. 1D point fitting example with 10 inliers (true σ = 0.98) and 20 outliers uniformly distributed. The four images correspond to iterations 1, 17, 70, 165 respectively. The 11 detected inliers (marked with circles) have σ̂ = 1.4826 ∗ ε̂_WMAD = 1.02. Dotted line: estimated mean. Dashdot line: mean ± σ̂. Notice that σ̂ decreases from 15.76 to 1.02 while W transforms from a uniform to a peaked distribution.
The whole estimation process starts from an initial ε_0 (e.g. ∞ or ε̂_MAD) and iterates between updating W and refining ε̂ separately (see Algorithm 2). Due to (16), ε̂ decreases monotonically. It is worth mentioning that under the two boundary conditions, i.e., ε → ∞ and ε = 0, the number of associate models is constant for any data point, and W always converges to a uniform weighting after many iterations. Therefore, these two boundary conditions ensure that ε̂ converges to some positive fixed point ε∗ ∈ (0, +∞). Figure 2 illustrates a 1D data point fitting example.
Algorithm 1. RANSAC-MAD
1. Initialize ε_0 = ∞.
2. Randomly draw s data points and compute the model θ_i defined by these points.
3. Evaluate the model and estimate ε̂_i according to (8).
4. If ε̂_i < ε̂, set θ̂ = θ_i and ε̂ = ε̂_i.
5. Repeat 2–4 until #MaxIter has been reached.
Algorithm 2. RANSAC-EIS
1. Initialize ε_0 = ∞, w_n = 1 in W = [w_1 w_2 ... w_N] for all points x_n.
2. Randomly draw s data points and compute the model θ_i.
3. Evaluate the model and update W according to (12) and (13).
4. Estimate ε̂_i according to (11) by using the weights W.
5. If ε̂_i < ε̂, set θ̂ = θ_i and ε̂ = ε̂_i.
6. Repeat 2–5 until #MaxIter has been reached.
Algorithm 3. RANSAC-EIS-Metropolis
1. Initialize ε_0 = ∞, assign w_n = 1 in W = [w_1 w_2 ... w_N] and w_n = 1 in W^samp = [w_1 w_2 ... w_N] for all points x_n.
2. Randomly draw s data points with probability proportional to w_n, and compute the model θ_i.
3. Evaluate the model and update W according to (12) and (13).
4. Estimate ε̂_i according to (11) by using the weights W.
5. If ε̂_i < ε̂, set θ̂ = θ_i and ε̂ = ε̂_i.
6. If t = 1, set θ_s^1 = θ_i and update W^samp. Otherwise, calculate the ratio using (8):

α = L_MAD(θ_i) / L_MAD(θ_s^{t−1}).    (17)

7. Draw a uniformly distributed random number U ∈ [0 1]. Set θ_s^t = θ_i and update W^samp if U ≤ min(α, 1), otherwise set θ_s^t = θ_s^{t−1} and keep W^samp unchanged.
8. Repeat 2–7 until #MaxIter has been reached.
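Steps 6–7 amount to the classical Metropolis accept/reject rule on the MAD-based score; a small sketch (ours, not the authors' implementation) assuming a function that returns L_MAD(θ) = 1/ε̂_MAD for a proposed model:

```python
import random

def metropolis_accept(score_new, score_old):
    """Accept the proposed model with probability min(score_new / score_old, 1),
    where the score is L_MAD(theta) = 1 / eps_MAD  (Eqs. 8-9 and 17)."""
    alpha = score_new / score_old
    return random.random() <= min(alpha, 1.0)

# Hypothetical scores: the proposal is slightly worse, so it is accepted only
# with probability ~0.8, which keeps the sampler exploring.
print(metropolis_accept(score_new=4.0, score_old=5.0))
```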
3.2
Efficient Sampling Using Ensemble Inlier Sets
In standard RANSAC sampling, proposed models are uniformly distributed over all permissible models:

q(θ_i) = 1/C;  θ_i ∈ Θ,    (18)

in which C (≈ \binom{N}{s}) is the total number of permissible models. One immediately identifies a major drawback in (18): even if a suboptimal model has been proposed and deemed a good model, the probability of hitting the nearby optimal model is still unchanged. We used in [10] a hill climbing algorithm to explore the neighborhood of the best model. In this work, we use the ensemble inlier sets W to achieve efficient sampling. If we randomly draw s data points with probability proportional to the corresponding weights w_n, the probability of sampling a new model θ is given by

q(θ) ∝ ∏_{n | x_n ∈ I_0(θ)} w_n,    (19)
in which I_0(θ) is the perfect inlier set of θ. We know from (12) and (13) that W has high weights for potentially true inlier points and low weights for outliers. Therefore, this non-uniform sampling distribution revisits high likelihood models more often, while paying less attention to those with low likelihood scores. However, remember that W accumulates the inlier sets of all proposed models. If some models are favored by the current W, they will be more likely to be selected and accumulated again in the updated W. This will lead to a biased histogram, with one or a few models peaked while others are unduly suppressed. Special care has to be taken before applying this sampling strategy. In this work, we incorporate a Metropolis step [20] to ensure that the globally optimal model can be reached in a finite number of iterations. See Algorithm 3 for a detailed description of the Metropolis step.
Notice that the score function L(·|D) in (17) is set as the inverse of ε̂_MAD in (8) instead of ε̂_WMAD in (11). This Metropolis step ensures that the distribution of the models used to update W^samp is independent of W^samp. With this adaptive sampling step, sampling efficiency can be significantly improved, while the accumulated histogram faithfully respects the input dataset.
3.3
Algorithm Implementation
Pseudo code of the three algorithms is outlined in Algorithms 1 to 3. Notice that RANSAC-MAD is implemented as a reference point for performance evaluation. For the RANSAC-EIS-Metropolis algorithm, we need to draw random points according to w_n. This is implemented by first computing the cumulative sum of w_n and finding the index of the cumulative sum element whose ratio to the total sum exceeds a random number U ∈ [0 1]. In order to avoid sampling degenerate models, we do not take repeated points. Also, if a model is numerically ill-conditioned, it is not evaluated and a new sample model is proposed instead.
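A minimal sketch (ours) of this cumulative-sum sampling of s distinct points according to the weights w_n:

```python
import numpy as np

def draw_weighted_subset(weights, s, rng=None):
    """Draw s distinct point indices with probability proportional to the weights,
    using the cumulative-sum trick described above."""
    rng = np.random.default_rng() if rng is None else rng
    weights = np.asarray(weights, dtype=float).copy()
    chosen = []
    for _ in range(s):
        cum = np.cumsum(weights)
        u = rng.random() * cum[-1]                    # U in [0, 1] scaled by the total sum
        idx = int(np.searchsorted(cum, u, side="right"))
        chosen.append(idx)
        weights[idx] = 0.0                            # no repeated points
    return chosen

print(draw_weighted_subset([5, 1, 1, 4, 0.5, 3], s=2))
```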
4
Experimental Results
Three algorithms are implemented in Matlab code and tested on different fitting problems. To compare scale estimation accuracy, following [13], we measure the number of detected inliers vs. true inliers, and the ratio of the estimated inlier standard deviation to the true inlier standard deviation. Sampling efficiency is related to the number of required iterations before reaching the best model. Note that for the RANSAC-EIS-Metropolis algorithm, even if a model is rejected in the Metropolis step, it is still evaluated and counts as one iteration. This gives a fair comparison of the number of required iterations for the different methods. In every iteration, the RANSAC-EIS and RANSAC-EIS-Metropolis methods take on average 0.07 ms (3.6%) and 0.2 ms (10%) extra time compared with RANSAC-MAD. The former is the overhead of evaluating (11) and (13), while the latter is mainly attributed to the weighted sampling in step 2 and evaluating (17) in the Metropolis step. The elapsed time is measured with the Matlab profiler, on a 1.6 GHz Intel Core Duo CPU laptop.
4.1
Line, Ellipse and Hyperplane Fitting
For line fitting, 2 points are randomly drawn from the dataset, and the recovered models are evaluated during each iteration. For ellipse fitting, 5 points are drawn and the algorithm in [21] is used to recover the model parameters. Ground truth line segments and ellipses are randomly generated on a 2D region with x, y coordinates between [0 100]. Each model consists of 100 points and has a random size between 20 and 50. Gaussian noise with standard deviation 2.5 is added to the x, y coordinates of each inlier point. 200 and 160 outliers for line and ellipse respectively are randomly distributed over the same region. 100 datasets are generated and each algorithm repeats 10 times with 5 different threshold values for each
Table 1. Performance comparison for fundamental matrix estimation

F estimation:            #inliers/#true inliers   σ̂_in/σ_t   #Iter
RANSAC-MAD               140/100                  5.3        1623.3
RANSAC-EIS               122/100                  1.7        5383.1
RANSAC-EIS-Metropolis    100/94                   0.95       797.6
Table 2. Performance comparison for line, ellipse and hyperplane fitting. All numbers are averages over 100 datasets with 10 runs for each algorithm. #Iter is the number of iterations when each algorithm reaches the best model.

Line fitting:                  #inliers/#true inliers   σ̂_in/σ_t   #Iter
RANSAC-MAD                     257.3/94.2               17.8       4128.3
RANSAC-EIS                     122.3/95.8               1.79       3137.2
RANSAC-EIS-Metropolis          105.1/96.1               0.95       175.2
Ellipse fitting:               #inliers/#true inliers   σ̂_in/σ_t   #Iter
RANSAC-MAD                     186.3/100                24.3       1158.3
RANSAC-EIS                     114.5/98.2               1.21       843.8
RANSAC-EIS-Metropolis          112.3/99.2               0.97       109.7
Hyperplane fitting (d = 5):    #inliers/#true inliers   σ̂_in/σ_t   #Iter
RANSAC-MAD                     148.3/93.7               5.31       3598.2
RANSAC-EIS                     112.3/96.3               1.23       4178.9
RANSAC-EIS-Metropolis          113.6/98.5               1.07       357.8
Hyperplane fitting (d = 9):    #inliers/#true inliers   σ̂_in/σ_t   #Iter
RANSAC-MAD                     238.2/98.9               32.3       15784.3
RANSAC-EIS                     124.8/92.3               1.43       23174.8
RANSAC-EIS-Metropolis          110.1/96.7               0.93       1073.1
dataset. Randomly selected fitting results are illustrated in Figures 4 and 5. Table 2 summarizes the statistics of the fitting results for the different algorithms. Hyperplanes of dimension d (= 5 and 9) are randomly generated on a (d − 1)-hypercube with side length 1. Gaussian noise with standard deviation 0.02 is added to the inliers. Outliers are generated from a uniform distribution over a 5- and 9-dimensional hypercube which spans from −1 to 1 on each axis. Statistics of the fitting results are summarized in Table 2 as well. As observed from Figures 4 and 5, RANSAC-MAD scale estimates are often far too big to create the correct model. Even if the correct model is found, the inlier noise level is considerably higher than that obtained by the RANSAC-EIS algorithm. In contrast, RANSAC-EIS estimates scales and models more accurately, although sometimes at the cost of more iterations. It is observed that whenever ε̂ is updated, it might take many iterations before the new scale manifests itself in W. For RANSAC-EIS-Metropolis, on the other hand, a sharply peaked W is accumulated in a far smaller number of iterations. Consequently, RANSAC-EIS-Metropolis outperforms in terms of both accuracy and efficiency.
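To make the synthetic hyperplane setup concrete, a data-generation sketch in the spirit of the experiment follows (ours; details such as the exact number of outliers per hyperplane dataset and the plane placement are our assumptions, not specified in the text):

```python
import numpy as np

def hyperplane_dataset(d=5, n_inliers=100, n_outliers=200, noise=0.02, rng=None):
    """Inliers near a random hyperplane through the unit hypercube, with Gaussian
    noise added along the normal, plus outliers uniform in [-1, 1]^d."""
    rng = np.random.default_rng() if rng is None else rng
    normal = rng.normal(size=d)
    normal /= np.linalg.norm(normal)
    pts = rng.uniform(0.0, 1.0, size=(n_inliers, d))
    pts += np.outer(0.5 - pts @ normal, normal)            # project onto the plane n.x = 0.5
    inliers = pts + np.outer(rng.normal(0.0, noise, n_inliers), normal)
    outliers = rng.uniform(-1.0, 1.0, size=(n_outliers, d))
    return np.vstack([inliers, outliers]), normal

X, n = hyperplane_dataset()
print(X.shape)
```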
Fig. 3. Fundamental matrix estimation with corridor sequence (frames 0 and 9). Up: 104 true inliers (pentagons) and 150 outliers (dots) uniformly distributed over images. Bottom: RANSAC-EIS-Metropolis fitting result with 100 inliers, in which 94 are true inliers and 6 mismatch. 144 outliers and 10 true inliers are rejected.
Fig. 4. 2D line segment fitting examples with 100 inliers and 200 outliers. Up to bottom rows: the best model found by RANSAC-MAD, RANSAC-EIS and RANSAC-EISMetropolis for 5 datasets. Dotted line: estimated line model. Solid line: estimated line model ±3 ∗ σ ˆ.
Fig. 5. Ellipse fitting examples with 100 inliers and 160 outliers. Up to bottom rows: the best model found by RANSAC-MAD, RANSAC-EIS and RANSAC-EIS-Metropolis for 5 datasets. Pentagon point: estimated center. Length of solid line = 3 · σ̂ along max/min axes.
Indeed, the speedup is even more considerable for high-dimensional (d = 9) hyperplane fitting.
4.2
Fundamental Matrix Estimation
For fundamental matrix estimation, we use two frames with a wide baseline (frames 0 and 9) from the corridor sequence to test the three algorithms. The ground truth for this sequence is known and there are 104 inlier matches between frames 0 and 9. Eight points are randomly drawn from the dataset, and a standard eight-point algorithm is used to estimate F in each iteration [22]. Gaussian noise with standard deviation 1.0 is added to the 2D coordinates of the inlier points. 150 uniformly distributed outlier points are added to each frame. All algorithms repeat 10 times with 5 different threshold values. Figure 3 illustrates one of the RANSAC-EIS-Metropolis algorithm outputs. Table 1 summarizes the comparison between the different algorithms. Not surprisingly, RANSAC-EIS-Metropolis still outperforms, with the inlier variation ratio closest to 1 and the minimal number of iterations to reach the best model. We applied the proposed method to a realistic fundamental matrix estimation problem, in an application of 3D point cloud reconstruction from a collection of photos. It was observed that the speedup over standard RANSAC ranges from 3 to 10 on average, which is less pronounced than with the synthetic data in Figure 3. This is due to the fact that outlier ratios in realistic data are relatively low, ranging from 10% to 30%. As the proposed modification is easy to implement, it makes sense to use the method for realistic data, regardless of outlier ratios.
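For reference, a bare-bones version of the eight-point estimation step used inside each iteration is sketched below (our unnormalized illustration, not the authors' code; a production version would normalize the coordinates first):

```python
import numpy as np

def eight_point(x1, x2):
    """Estimate the fundamental matrix F from >= 8 correspondences, x2^T F x1 = 0.

    x1, x2: (N, 2) arrays of matched image points in the two frames.
    """
    x1 = np.asarray(x1, dtype=float)
    x2 = np.asarray(x2, dtype=float)
    u1, v1 = x1[:, 0], x1[:, 1]
    u2, v2 = x2[:, 0], x2[:, 1]
    # Each correspondence gives one row of the linear system A f = 0.
    A = np.column_stack([u2 * u1, u2 * v1, u2,
                         v2 * u1, v2 * v1, v2,
                         u1, v1, np.ones_like(u1)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint by zeroing the smallest singular value.
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0
    return U @ np.diag(S) @ Vt
```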
5
Conclusion
The proposed method addresses two important aspects of RANSAC-type methods. Firstly, automatic scale estimation is achieved by using accumulated inlier points from previous RANSAC iterations. Compared with other scale estimation methods, which often rely on mean-shift algorithms, the proposed method is self-contained and requires no additional parameters. The combination of Ensemble Inlier Sets and the Weighted Median Absolute Deviation provides a robust scale estimation approach, which requires very limited modification of the standard RANSAC algorithm. Secondly, RANSAC sampling efficiency can be significantly boosted by using EIS-based sampling. Since no auxiliary information other than the given data is required, this method can be readily applied to different fitting problems, as demonstrated in the paper. Ensemble Inlier Sets is a robust and efficient computational tool suitable for any method of a random sampling nature. However, there are still open questions not addressed here. For instance, an interesting topic is to study the connection between EIS-based sampling and sampling from the parameter space using Markov Chain Monte Carlo (MCMC) approaches (e.g. see pp. 361 in [1]). It follows from (19) that a Gaussian transition distribution in parameter space can be simulated by drawing s data points according to appropriate EIS weights. But what is the exact mapping from some arbitrary non-Gaussian transition distribution to the corresponding weights? And how can proper weight schemes be designed such that the corresponding transition distributions have controlled behavior? All these questions call for more research effort, and we are working on theoretical analysis along these directions.
Acknowledgement. The authors are grateful to Kari Pulli for valuable discussions and to the three anonymous referees for their critical and constructive comments.
References 1. Torr, P.H., Davidson, C.: IMPSAC: Synthesis of importance sampling and random sample consensus. IEEE Transactions on Pattern Analysis and Machine Intelligence 25(3), 354–364 (2003) 2. Fischler, M.A., Bolles, R.C.: Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM 24(6), 381–395 (1981) 3. Torr, P.H.S., Zisserman, A.: MLESAC: A new robust estimator with application to estimating image geometry. Computer Vision and Image Understaring 78(1), 138–156 (2000) 4. Tordoff, B.J., Murray, D.W.: Guided-MLESAC: Faster image transform estimation by using matching priors. IEEE Transactions on Pattern Analysis and Machine Intelligence 27(10), 1523–1535 (2005)
5. Chum, O., Matas, J.: Matching with PROSAC – progressive sample consensus. In: Proceedings of the Conference on Computer Vision and Pattern Recognition, pp. 220–226 (2005)
6. Subbarao, R., Meer, P.: Beyond RANSAC: User independent robust regression. In: Proceedings of the Conference on Computer Vision and Pattern Recognition Workshop (2006)
7. Myatt, D., et al.: NAPSAC: High noise, high dimensional robust estimation – it's in the bag. In: Proceedings of the British Machine Vision Conference, pp. 458–467 (2002)
8. Nistér, D.: Preemptive RANSAC for live structure and motion estimation. In: Proceedings of the International Conference on Computer Vision, pp. 199–206 (2003)
9. Matas, J., Chum, O.: Randomized RANSAC with sequential probability ratio test. In: Proceedings of the International Conference on Computer Vision, pp. 1727–1732 (2005)
10. Pylvänäinen, T., Fan, L.: Hill climbing algorithm for random sampling consensus methods. In: Bebis, G., Boyle, R., Parvin, B., Koracin, D., Paragios, N., Tanveer, S.-M., Ju, T., Liu, Z., Coquillart, S., Cruz-Neira, C., Müller, T., Malzbender, T. (eds.) ISVC 2007, Part I. LNCS, vol. 4841, pp. 672–681. Springer, Heidelberg (2007)
11. Rousseeuw, P.J., Leroy, A.M.: Robust Regression and Outlier Detection. John Wiley & Sons, New York (1987)
12. Torr, P.H.S., Murray, D.W.: The development and comparison of robust methods for estimating the fundamental matrix. International Journal of Computer Vision 24(3), 271–300 (1997)
13. Chen, H., Meer, P.: Robust regression with projection based M-estimators. In: Proceedings of the International Conference on Computer Vision, pp. 878–885 (2003)
14. Chen, H., Meer, P.: Robust computer vision through kernel density estimation. In: Proceedings of the European Conference on Computer Vision, pp. 236–250 (2002)
15. Wang, H., Suter, D.: Robust adaptive-scale parametric model estimation for computer vision. IEEE Transactions on Pattern Analysis and Machine Intelligence 26(11), 1459–1474 (2004)
16. Singh, M.K., Arora, H., Ahuja, N.: A robust probabilistic estimation framework for parametric image models. In: Proceedings of the European Conference on Computer Vision, pp. 508–522 (2004)
17. Hampel, F.: The influence curve and its role in robust estimation. Journal of the American Statistical Association, 383–393 (1974)
18. Huber, P.J.: Robust Statistics. John Wiley and Sons, Chichester (1981)
19. Koenker, R., Bassett, G.: On Boscovich's estimator. The Annals of Statistics (4) (1985)
20. Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H.: Equation of state calculations by fast computing machines. Journal of Chemical Physics 21(6), 1087–1092 (1953)
21. Halir, R., Flusser, J.: Numerically stable direct least squares fitting of ellipses. In: Proceedings of the International Conference in Central Europe on Computer Graphics, Visualization and Interactive Digital Media, pp. 125–132 (1998)
22. Hartley, R.I., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge (2003)
Efficient Camera Smoothing in Sequential Structure-from-Motion Using Approximate Cross-Validation
Michela Farenzena, Adrien Bartoli, and Youcef Mezouar
LASMEA, UMR6602 CNRS, Université Blaise Pascal, Clermont, France
[email protected]
Abstract. In the sequential approach to three-dimensional reconstruction, adding prior knowledge about camera pose improves reconstruction accuracy. We add a smoothing penalty on the camera trajectory. The smoothing parameter, usually fixed by trial and error, is automatically estimated using Cross-Validation. This technique is extremely expensive in its basic form. We derive Gauss-Newton Cross-Validation, which closely approximates Cross-Validation, while being much cheaper to compute. The method is substantiated by experimental results on synthetic and real data. They show that it improves accuracy and stability in the reconstruction process, preventing several failure cases.
1 Introduction
The sequential approach to Structure-from-Motion (SfM) [1,2,3,4] entails starting from a seed reconstruction, then adding one new view at a time, updating the structure accordingly. The strategy that is usually adopted to robustly calculate a new camera pose is to use the already estimated three-dimensional (3D) points to solve a resection problem [1,5,6] within RANSAC [7]. In our experience however, this does not guarantee a good initialisation for bundle adjustment and does not prevent the reconstruction process from failing. Resection indeed uses only local information; it is prone to drifting and local instabilities. It is commonly admitted that using prior knowledge improves the quality of an estimate. In video sequences, it is reasonable to add a continuity or smoothing prior on the camera trajectory, encouraging each camera to lie close to the previous ones. We minimize a compound cost function, which sums the reprojection error and a smoothing penalty. The trade-off between these is regulated by the smoothing parameter. The smoothing parameter is commonly tuned by trial and error, and is kept constant for the whole sequence. We show that accuracy can be enhanced by choosing this parameter automatically, customising the smoothness for each pose. The idea is to estimate the most predictive camera pose, in the sense that it can 'explain' the whole image as well as possible, given a restricted set of data points. This is a typical machine learning problem. Cross-Validation (CV) techniques can be used. The dataset is split into a training set and a test set.
The smoothing parameter for which the trained model minimizes the test error is selected. The main drawback of CV is its computational cost. The greedy formula to compute the leave-one-out CV score (CVloo) for one value of λ requires solving n nonlinear least squares problems, with n the number of points in the dataset. For regular linear least squares there is a simple non-iterative formula that gives the CVloo score without having to solve as many problems as there are measurements [8]. We derive a similar non-iterative formula for the nonlinear least squares resection problem. We define the Gauss-Newton CVloo (GNCVloo) score and show that it closely approximates the true CVloo. The computation of GNCVloo requires solving only one nonlinear least squares problem. This makes the estimation of the smoothing parameter a much cheaper problem. Thus, our method could be embedded in other SfM pipelines, such as [4,9,10], working in real time. The approach is validated by experimental results on synthetic and real data. They show that it increases accuracy and stability in the reconstruction process, preventing several failure cases.
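As background for the classical result referred to above, a minimal NumPy sketch of the non-iterative leave-one-out residuals for ordinary linear least squares, based on the diagonal of the hat matrix (the function name is ours):

    import numpy as np

    def loo_residuals(A, b):
        # Leave-one-out residuals of min_x ||Ax - b||^2 without refitting n times:
        # e_loo[j] = (b[j] - a_j^T x_hat) / (1 - H[j, j]),  H = A (A^T A)^{-1} A^T
        x_hat = np.linalg.lstsq(A, b, rcond=None)[0]
        H = A @ np.linalg.pinv(A)                 # hat matrix
        return (b - A @ x_hat) / (1.0 - np.diag(H))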
2 Background on Sequential Structure-from-Motion
A common scheme for sequential SfM, in both the calibrated and uncalibrated camera cases, has three main steps. Our contribution concerns the last step. Extraction of keyframes and keypoint matching. The first step consists in relating the images. This means extracting keypoints and matching them between the images. In video sequences the baseline (the camera displacement between two views) is however often too limited. This makes the computation of matching tensors (such as the fundamental matrix) ill conditioned. A solution is to select a subset of images (keyframes). Many ways to choose these keyframes have been proposed [2,3,11,12]; they balance the baseline and the number of matched keypoints. Once keyframes are selected, matching tensors are estimated. The initial set of corresponding points is typically contaminated with outliers. Traditional least squares approaches thus fail and a robust method, such as RANSAC, must be used. Initialisation of structure and motion. The first few views and the matched keypoints are used to retrieve a seed 3D structure of the scene and the motion of the camera. Usually two or three views are used [2,3,13]. Sequential processing. Keyframes are sequentially added, calculating the pose of each new camera using the previously estimated 3D points. This is a nonlinear least squares problem, called resection. Robust estimation is usually necessary in order to cope with outliers. Subsequently, the 3D structure is updated by triangulating the 3D points conveyed by the new view. Both structure and motion are finally adjusted using bundle adjustment, with the aim of finding the parameters for the cameras and the 3D points which minimize the mean squared distances between the observed image points and the reprojected image points.
3 Resection with Automatic Camera Smoothing
Each time a new camera is added, bundle adjustment is performed in order to refine both structure and motion. This has been proved to be the essential step to achieve good accuracy and to prevent failures [14]. The initial estimate must however be sufficiently close to the optimal solution. Figure 1 shows an example of reconstruction failure, from a real sequence taken by a handheld camera. At the 47th keyframe the computation stops because there are not enough points to estimate the new camera pose, meaning that all points seen in the last views have been rejected as outliers. At the moment of failure, the camera is rotating. Even if the rotation is not around the camera's optical centre, this is a delicate situation, where the field of view varies rapidly and reconstruction accuracy is crucial. It is evident from Figure 1 that the poses of the last five keyframes are wrongly estimated, and that bundle adjustment cannot fix the problem.
3.1 A Compound Cost Function
In order to refine the initial estimate a common strategy is to minimize a data term representing geometric error, i.e. the reprojection error. The problem is formalised as a least squares minimization of the mean of squared residuals (MSR):

    E_d^2(P) = \frac{1}{n} \sum_{i=1}^{n} \| \Psi(P, Q_i) - q_i \|_2^2 ,    (1)

where P is the projection matrix and Q_i is the 3D position of the image point q_i. The function \Psi(P, Q_i) is the reprojection of Q_i through P, in Cartesian coordinates. The optimal solution is usually obtained by approximating \Psi(P + \Delta, Q_i) \approx \Psi(P, Q_i) + J(P, Q_i)\,\mathrm{vect}(\Delta), where J(P, Q_i) is the Jacobian matrix of
Fig. 1. Reconstruction failure when the camera poses are not smoothed. On the left, the 1st and 47th keyframes; in the centre, a 3D view of the recovered cameras; on the right, the top view of the last keyframes. Visual inspection shows that the last five cameras are misestimated.
\Psi with respect to P evaluated at (P, Q_i), and vect is the operator that rearranges a matrix into a vector. The normal linear least squares equations are then solved in an iterative Gauss-Newton manner. Since the keyframes come from a video, it is reasonable to add a smoothing penalty on the camera trajectory, saying that the position of one keyframe should not differ too much from that of the previous one. If properly weighted, this penalty increases the stability of the camera trajectory estimation. The problem is formalised as the minimization of a compound cost function, which sums the reprojection error E_d and a smoothing term E_s:

    E^2(P, \lambda) = (1 - \lambda)^2 E_d^2(P) + \lambda^2 E_s^2(P) ,    (2)

where \lambda is the smoothing parameter. As a smoothness measure we use the mean of squared residuals between reprojected points in the current and previous keyframes. That is, if the two cameras are close to each other, then the points they reproject should be close as well:

    E_s^2(P) = \frac{1}{n} \sum_{i=1}^{n} \| \Psi(P, Q_i) - \Psi(P_p, Q_i) \|_2^2 ,    (3)

with P_p the projection matrix of the previous keyframe. This is actually a continuity measure, as it can be interpreted as a finite difference approximation to the first derivatives of the predicted tracks. The same scheme holds if higher order derivatives are used. The smoothing penalty usually used in the literature in similar contexts is the norm of the difference between camera matrices, but we found that the resulting cost function does not lead to the expected solution, as shown in Figure 2. Equation (2) depends on a smoothing parameter that must be estimated. In the next section we present an automatic data-driven method solving this problem.
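As an illustration of Equations (1)-(3), a minimal NumPy sketch of the compound resection cost for a candidate camera (here P is taken as a 3x4 projection matrix and the helper names are ours; the Gauss-Newton minimisation itself is omitted):

    import numpy as np

    def project(P, Q):
        # reprojection Psi(P, Q_i) in Cartesian coordinates; Q is (n, 3)
        Qh = np.column_stack([Q, np.ones(len(Q))])
        x = (P @ Qh.T).T
        return x[:, :2] / x[:, 2:3]

    def compound_cost(P, P_prev, Q, q, lam):
        # E^2(P, lambda) = (1 - lambda)^2 E_d^2(P) + lambda^2 E_s^2(P)
        e_d = np.mean(np.sum((project(P, Q) - q) ** 2, axis=1))                    # Eq. (1)
        e_s = np.mean(np.sum((project(P, Q) - project(P_prev, Q)) ** 2, axis=1))   # Eq. (3)
        return (1 - lam) ** 2 * e_d + lam ** 2 * e_s                               # Eq. (2)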
Fig. 2. Examples of resection using the norm of the difference between camera matrices (a,b) or Equation (3) (c,d) as smoothing penalties. The thick black lines show the ground truth for the previous and current poses; the thin green lines show the poses obtained after resection, with λ varying from 0 to 1.
3.2 Smoothing Parameter Estimation by Cross-Validation
We propose to automatically determine both the camera pose and the smoothing parameter. The idea is to find the most predictive camera pose, in the sense that it can 'explain' the whole image, given a restricted set of 3D positions matching 2D points. This concept derives from the machine learning paradigm of supervised learning from examples. The approach we follow is to split the data points into a training set and a test set, and to select the smoothing parameter for which the trained model minimizes the test error. A well-known method, widely applied in machine learning [16], is CV (Cross-Validation), first introduced in [17]. Considering that the number of samples is small, this technique recycles the test set, averaging the test error over several different partitions of the whole data set. There are different kinds of CV techniques; we chose CVloo (leave-one-out CV). The CVloo score is defined as a function of the parameter λ:

    E_g^2(\lambda) = \frac{1}{n} \sum_{j=1}^{n} \| \Psi(\hat{P}_{(j)}(\lambda), Q_j) - q_j \|_2^2 ,    (4)

where \hat{P}_{(j)}(\lambda) is the camera pose estimated with all but the j-th 3D–2D point correspondence. The most predictive camera pose \hat{P} is obtained by solving the following nested optimization problem:

    \hat{P} = \arg\min_{P} E^2(P, \arg\min_{\lambda} E_g^2(\lambda)) .    (5)

This means that the optimal camera pose is obtained by minimizing E(P, \hat{\lambda}), where \hat{\lambda} is the optimal value for λ, i.e. the one that gives the lowest CVloo score. \hat{\lambda} is usually calculated by sampling λ over the range [0, 1]. Computing the CVloo score using (4) is computationally expensive: it requires solving n nonlinear least squares problems. Solving (5) is thus extremely costly. In the next section we propose a non-iterative approximation to the CVloo score. Non-iterative means that it does not require solving the n nonlinear least squares problems that a greedy application of Equation (4) entails. The derivation proceeds in two steps. First we approximate the greedy formula (4) through the Gauss-Newton approximation, then we provide a non-iterative formula that exactly solves such linear least squares problems.
3.3 GNCVloo: Gauss-Newton Leave-One-Out Cross-Validation
We rewrite Equation (2) in matrix form:

    E^2(P, \lambda) = (1 - \lambda)^2 \frac{1}{n} \| B(P) - b \|_2^2 + \lambda^2 \frac{1}{n} \| B(P) - B(P_p) \|_2^2 ,    (6)

with B(P)^T = [\Psi(P, Q_1)\ \Psi(P, Q_2)\ \ldots\ \Psi(P, Q_n)] and b^T = [q_1 \ldots q_n].
Given a certain λ, let \hat{P}_\lambda be the global solution to Equation (6) obtained by Gauss-Newton (GN). Let C = J(\hat{P}_\lambda) be the Jacobian matrix evaluated at \hat{P}_\lambda. It is given by the GN algorithm. The GN approximation to E is:

    E^2(\hat{P}_\lambda + \Delta, \lambda) \approx (1-\lambda)^2 \frac{1}{n} \| B(\hat{P}_\lambda) + C\delta - b \|_2^2 + \lambda^2 \frac{1}{n} \| B(\hat{P}_\lambda) + C\delta - B(P_p) \|_2^2 ,    (7)

with \delta = \mathrm{vect}(\Delta). Substituting s = b - B(\hat{P}_\lambda) and t = B(P_p) - B(\hat{P}_\lambda), we get:

    E^2(\hat{P}_\lambda + \Delta, \lambda) \approx \tilde{E}^2(\delta, \lambda) = (1-\lambda)^2 \frac{1}{n} \| C\delta - s \|_2^2 + \lambda^2 \frac{1}{n} \| C\delta - t \|_2^2 .    (8)

Note that this holds if higher order derivatives are used as a smoothing penalty. The CVloo score can similarly be approximated by:

    \tilde{E}_g^2(\lambda) = \frac{1}{n} \sum_{j=1}^{n} \| c_j^T \hat{\delta}_{(j)} - s_j \|_2^2 ,    (9)

where \hat{\delta}_{(j)} is the solution of the linear least squares system (8) with all but the j-th correspondence. This way we have approximated the nonlinear least squares problem by a linear least squares one. We call this approximation the GNCVloo score. Calculating the GNCVloo score using (9) is cheaper than calculating the CVloo score, but it still requires iterating over the n correspondences. We derive a non-iterative formula that exactly estimates (9). This formula is the following:

    \tilde{E}_g^2(\lambda) = \frac{1}{n} \left\| \mathrm{diag}\!\left(\frac{\mathbf{1}}{\mathbf{1} - \mathrm{diag}(\hat{C})}\right) \left( \hat{C} k - s - \mathrm{diag}(\mathrm{diag}(\hat{C}))(k - s) \right) \right\|_2^2 ,    (10)

where \hat{C} = CC^+, C^+ is the pseudoinverse of C and

    k = \frac{(1-\lambda)^2 s + \lambda^2 t}{(1-\lambda)^2 + \lambda^2} .

\mathbf{1} is a vector of ones and the diag operator, similar to the one in Matlab, extracts a diagonal matrix or constructs one from a vector. The derivation of this formula is shown in the Appendix. Minimizing the GNCVloo score is then done through the global solution of Equation (2) and the closed form (10): instead of n nonlinear least squares problems, only one has to be solved. As is, this CV method is not robust, in the sense that it does not cope with mismatched correspondences. Therefore, we use RANSAC as the initial solution with λ = 0 to robustly estimate \hat{P}_\lambda in Equation (6), and the dataset is restricted to only the correspondences classified as inliers after RANSAC. Moreover, the computation of \hat{\lambda} is carried out by sampling, estimating Equation (2) at steps of 0.01 from 0 to 1. Here 0.01 was experimentally found to be a sufficiently fine discretization step to find the global minimum of the GNCVloo score. Table 1 summarizes the proposed method.
Table 1. Automatic camera resection based on our GNCVloo score

    Camera Resection Method
    Find an initial robust estimation of P;
    On those points classified as inliers:
        for λ = 0 : 0.01 : 1
            Find \hat{P}_\lambda, using Gauss-Newton;
            Estimate the GNCVloo score using (10);
        end
        Select \hat{P}_\lambda with minimum GNCVloo score.
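For concreteness, a minimal NumPy sketch of the closed-form score of Equation (10) and of the λ sampling of Table 1 (the function names are ours; solve_gn is assumed to run Gauss-Newton on Equation (6) for a given λ and return the Jacobian C and the residual vectors s and t at the solution, flattened to vectors):

    import numpy as np

    def gncv_loo_score(C, s, t, lam, n):
        # Eq. (10): C is the Jacobian at P_hat_lambda, s = b - B(P_hat_lambda),
        # t = B(P_p) - B(P_hat_lambda); n is the number of correspondences.
        C_hat = C @ np.linalg.pinv(C)                                   # hat matrix C C+
        k = ((1 - lam) ** 2 * s + lam ** 2 * t) / ((1 - lam) ** 2 + lam ** 2)
        d = np.diag(C_hat)
        r = (C_hat @ k - s - d * (k - s)) / (1.0 - d)
        return np.dot(r, r) / n

    def select_lambda(solve_gn, n):
        # Table 1: sample lambda at steps of 0.01 and keep the minimiser of the score
        lams = np.arange(0.0, 1.0 + 1e-9, 0.01)
        scores = []
        for lam in lams:
            C, s, t = solve_gn(lam)
            scores.append(gncv_loo_score(C, s, t, lam, n))
        return lams[int(np.argmin(scores))]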
4 Implementation Details
We give some details of our implementation of the reconstruction pipeline. We assume that the camera is calibrated. The KLT tracker [18] is used to detect and track keypoints in the sequence. Similarly to [19], the first frame is chosen as the first keyframe I1 . I2 is chosen so that there are as many frames as possible between I1 and I2 with at least N feature points in common. Frame In is selected as a keyframe if: a) there are as many frames as possible between In and In−1 ; b) there are at least N point correspondences between In−1 and In and c) there are at least M point correspondences between In−2 and In . This criterion ensures that there are common matches at least in three consecutive views. In our experiments we used N = 300 and M = 200. In order to initialize the 3D reconstruction we use the first image triplet, computing relative camera motion as described in [13]. This process is coupled with RANSAC, in order to have a robust estimate, and the final solution is further refined with bundle adjustment [20]. The resection problem is solved as described in Section 3, using Fiore’s linear algorithm [5] to find the initial estimate. The 3D points are obtained by triangulation considering all image points of the visible tracks up to the current keyframe. A reconstructed point is considered an inlier if a) its computation is well conditioned – we set a threshold on the condition number of the matrix in the linear system that computes the 3D point – and b) if it projects sufficiently close, say by a distance of one pixel, to all associated image points. This requires us to refine the initial estimation of a 3D point based on all observations, including the latest. Therefore, each time a new keyframe is added the tracks visible in it are checked and the list of inliers updated. For the first 10 views, a full bundle adjustment using all keyframes and all points is performed. After that the computation becomes increasingly expensive, even if the sparseness inherent to the problem is exploited [21]. So we perform local bundle adjustment, i.e. only a subset of keyframe poses are adjusted. Similarly to [14], we choose the last 5 keyframes, while the frames beyond these are locked and not moved. All 3D points visible in the last keyframes are considered,
together with all measurements ever made of these points. That is, the reprojection errors are accumulated for the entire tracks backwards in time, regardless of whether the views where the reprojections reside are locked.
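As an illustration of the keyframe selection criterion described at the beginning of this section, a greedy sketch (the function names are ours; num_matches(i, j) is assumed to count the keypoint correspondences, e.g. surviving KLT tracks, between frames i and j):

    def select_keyframes(n_frames, num_matches, n_min=300, m_min=200):
        # Each new keyframe is pushed as far as possible while keeping at least
        # n_min matches with the previous keyframe and m_min with the one before it,
        # so that common matches exist over three consecutive keyframes.
        keyframes = [0]
        f = 1
        while f < n_frames:
            prev = keyframes[-1]
            prev2 = keyframes[-2] if len(keyframes) > 1 else None
            last_ok = None
            while f < n_frames and num_matches(prev, f) >= n_min and \
                    (prev2 is None or num_matches(prev2, f) >= m_min):
                last_ok = f
                f += 1
            if last_ok is None:
                break                   # tracking broke down before a new keyframe was found
            keyframes.append(last_ok)
            f = last_ok + 1
        return keyframes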
5 Experimental Results
In Figure 3 we show the final camera pose obtained by the proposed nonlinear refinement for the cases depicted in Figure 2. We compare the true CVloo score, calculated in the greedy way, with the approximated GN CVloo score proposed in this paper. The optimal values of λ obtained with the CVloo score are respectively 0.05 and 0, while the values obtained with the GNCVloo score are respectively 0.05 and 0.02. It can be seen that the approximation is close to the true score. We show the effectiveness of our method on synthetic unstable sequences. The dataset consists of 100 points randomly scattered in a sphere of radius 1 meter, centred at the origin. We consider three different scenarios. In the first setting, views are generated by placing cameras along a line in the z-direction, at a distance from the origin of 5.5 up to 7 meters approximately. In the second setting, in order to simulate more unstable cases, the rectilinear trajectory is perturbed along the x-direction, applying Gaussian noise of standard deviation 0.8. In the third setting the trajectory is perturbed in the three directions, with the same noise. In the three cases the number of views is fixed to 10, and Gaussian noise with standard deviation 0.5 is added to the image points.
Fig. 3. In the left column, comparison between the true CVloo score (blue thick line) and its approximation (red dashed line) in the examples of Figure 2. The central column shows a zoom around the minimum. The vertical lines indicate the CVloo score minimum (black line) and the GNCVloo score minimum (black dashed line). In the right column, the final pose estimate (green, thin line). In black thick line the ground truth for the previous and current pose, in red dashed line the initial pose estimated by RANSAC.
Table 2. Results on synthetic scenes. Mean, minimum and maximum distance of the estimated optical centres from the ground truth (in meters) for the three synthetic settings and the cases a) without using nonlinear refinement, b) using nonlinear refinement, but without the smoothing term, c) with the smoothing term and λ carefully set by hand and d) with λ estimated by CV. (*) In this setting 20% of the cases fail.

              Setting 1                       Setting 2                       Setting 3
          a       b       c       d       a       b       c       d       a*      b       c       d
    mean  0.1704  0.0916  0.0793  0.0566  0.0766  0.110   0.091   0.044   0.052   0.072   0.090   0.067
    min   0.001   0.005   0.004   0.002   0.005   0.004   0.007   0.001   0.003   0.003   0.005   0.004
    max   1.018   1.021   0.301   0.383   0.582   0.707   0.614   0.543   0.396   0.479   0.545   0.465
Fig. 4. Thumbnails of the Campus (top) and Laboratory (bottom) sequences
For each scenario we compare results in terms of distance of the estimated cameras to the ground truth, considering four cases: a) without using nonlinear refinement of camera pose, b) using nonlinear refinement, but without the smoothing penalty, c) with the smoothing term and λ carefully set by hand and kept constant for the whole sequence, and finally d) with λ estimated by CV, as proposed in this paper. For each scenario 50 independent trials are carried out. Results are shown in Table 2. The distances between the camera centres of the estimated cameras and the ground truth are reported. In the third scenario without nonlinear pose refinement the computation stopped before estimating all cameras in 20% of cases. Adding the smoothing penalty improves stability and accuracy in pose estimation, and our automatic method gives the best results. For real sequences, we first show the results from a video, Campus, taken by a calibrated handheld camera (see Figure 4). The trajectory executed is an initial rotation, then a rectilinear part and finally another small rotation, without caring too much about shaking. From 1608 frames of resolution 784 × 516, without smoothness the reconstruction process stops at the 47th keyframe, as already displayed in Figure 1. Using the proposed method, instead, all the sequence can be processed, with 135 keyframes extracted, and 5000 points reconstructed
with a mean reprojection error of 0.53 pixel. The 3D map produced, with the estimated camera trajectory, can be seen in Figure 5.

Fig. 5. In the left column, top view of the 3D maps of the Campus (top) and Laboratory (bottom) sequences; in red, the estimated cameras. The right column shows a perspective view of the Campus and Laboratory maps respectively, where, to aid visualisation, one keyframe of the sequence is superimposed.

The second video, Laboratory, is taken with a camera mounted on an Unmanned Autonomous Vehicle (UAV), in an indoor setting. It is made up of 929 frames of resolution 576 × 784 (see Figure 4). 89 keyframes are calibrated, and the final 3D map is composed of 4421 points (Figure 5) with a mean reprojection error of 0.64 pixel. With no smoothness, the reconstruction process stops in the last rotation.
6 Conclusions
Local camera pose estimation might often be unstable. We proposed adding a smoothing penalty on the camera trajectory and automatically estimating the smoothing parameter, usually manually fixed, using Cross-Validation. The noniterative closed-form we proposed allows us to solve the problem very efficiently, dropping the complexity one order of magnitude below the straightforward application of Cross-Validation. Experimental results show that the method is effective in improving accuracy and stability in the reconstruction process, preventing several failure cases.
References
1. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, Cambridge (2003)
2. Klein, G., Murray, D.: Parallel tracking and mapping for small AR workspaces. In: Proc. Sixth IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2007), Nara, Japan (November 2007)
3. Pollefeys, M., Van Gool, L., Vergauwen, M., Verbiest, F., Cornelis, K., Tops, J., Koch, R.: Visual modeling with a hand-held camera. International Journal of Computer Vision 59(3), 207–232 (2004)
4. Clipp, B., Welch, G., Frahm, J.M., Pollefeys, M.: Structure from motion via a two-stage pipeline of extended Kalman filters. In: British Machine Vision Conference (2007)
5. Fiore, P.D.: Efficient linear solution of exterior orientation. IEEE Transactions on Pattern Analysis and Machine Intelligence 23(2), 140–148 (2001)
6. Haralick, R., Lee, C., Ottenberg, K., Nolle, M.: Review and analysis of solutions of the three point perspective pose estimation problem. International Journal of Computer Vision 13(3), 331–356 (1994)
7. Fischler, M.A., Bolles, R.C.: Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM 24(6), 381–395 (1981)
8. Gentle, J.E., Hardle, W., Mori, Y.: Handbook of Computational Statistics. Springer, Heidelberg (2004)
9. Eade, E., Drummond, T.: Scalable monocular vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 469–476 (2006)
10. Davison, A.J., Reid, I.D., Molton, N.D., Stasse, O.: MonoSLAM: Real-time single camera SLAM. IEEE Transactions on Pattern Analysis and Machine Intelligence 29(6), 1052–1067 (June 2007)
11. Torr, P.H.S.: Bayesian model estimation and selection for epipolar geometry and generic manifold fitting. International Journal of Computer Vision 50(1), 35–61 (2002)
12. Thomahlen, T., Broszio, H., Weissenfeld, A.: Keyframe selection for camera motion and structure estimation from multiple views. In: Proceedings of the European Conference on Computer Vision, p. 523 (2004)
13. Nistér, D., Naroditsky, O., Bergen, J.: Visual odometry. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 652–659 (2004)
14. Engels, C., Stewénius, H., Nistér, D.: Bundle adjustment rules. In: Photogrammetric Computer Vision (September 2006)
15. Olsen, S.I., Bartoli, A.: Using priors for improving generalization in non-rigid structure-from-motion. In: British Machine Vision Conference (2007)
16. Bishop, C.M.: Neural Networks for Pattern Recognition. Oxford University Press, Oxford (1995)
17. Wahba, G., Wold, S.: A completely automatic French curve: Fitting spline functions by Cross-Validation. Communications in Statistics 4, 1–17 (1975)
18. Shi, J., Tomasi, C.: Good features to track. Technical Report 93-1399, Department of Computer Science, Cornell University, Ithaca, NY 14853-7501 (November 1993)
19. Royer, E., Lhuillier, M., Dhome, M., Lavest, J.: Monocular vision for mobile robot localization and autonomous navigation. International Journal of Computer Vision 74(3), 237–260 (2007)
20. Lourakis, M., Argyros, A.: The design and implementation of a generic sparse bundle adjustment software package based on the Levenberg-Marquardt algorithm. Technical Report 340, Institute of Computer Science – FORTH, Heraklion, Crete, Greece (August 2004), http://www.ics.forth.gr/~lourakis/sba
21. Triggs, B., McLauchlan, P.F., Hartley, R.I., Fitzgibbon, A.W.: Bundle adjustment – a modern synthesis. In: Triggs, B., Zisserman, A., Szeliski, R. (eds.) ICCV-WS 1999. LNCS, vol. 1883, pp. 298–372. Springer, Heidelberg (2000)
A Deriving a Non-iterative Formula for GNCVloo
We show that there exists a non-iterative formula that exactly estimates (9). First, given the smoothing parameter λ, \hat{\delta} in Equation (8) is solved through:

    \hat{\delta} = \arg\min_{\delta} \tilde{E}(\delta, \lambda)
                 = \arg\min_{\delta} \left\| \begin{bmatrix} (1-\lambda) C \\ \lambda C \end{bmatrix} \delta - \begin{bmatrix} (1-\lambda) s \\ \lambda t \end{bmatrix} \right\|_2^2
                 = \left( (1-\lambda)^2 C^T C + \lambda^2 C^T C \right)^{-1} C^T \left( (1-\lambda)^2 s + \lambda^2 t \right)
                 = \frac{1}{(1-\lambda)^2 + \lambda^2} (C^T C)^{-1} C^T \left( (1-\lambda)^2 s + \lambda^2 t \right)
                 = \frac{1}{\|\omega\|_2^2} C^+ R \omega_s ,    (11)

where C^+ is the pseudo-inverse of C, \omega^T = [(1-\lambda)\ \lambda], \omega_s^T = [(1-\lambda)^2\ \lambda^2] and R = [s\ t]. We define e_j as a zero vector with one at the j-th element and K_j = I - \mathrm{diag}(e_j). We recall that K_j K_j = K_j and K_j^T = K_j. Then \hat{\delta}_{(j)} is solved through:

    \hat{\delta}_{(j)} = \arg\min_{\delta} \left\| \begin{bmatrix} (1-\lambda) K_j C \\ \lambda K_j C \end{bmatrix} \delta - \begin{bmatrix} (1-\lambda) K_j s \\ \lambda K_j t \end{bmatrix} \right\|_2^2
                       = \frac{1}{\|\omega\|_2^2} (K_j C)^+ K_j R \omega_s .    (12)

\hat{\delta}_{(j)} can be expressed alternatively, as the following Lemma says:
Lemma 1. \hat{\delta}_{(j)} as defined by Equation (12) is given by the following equation:

    \hat{\delta}_{(j)} = \frac{1}{\|\omega\|_2^2} C^+ \tilde{R}_j \omega_s , \quad \text{with} \quad \tilde{R}_j = K_j R + \frac{\|\omega\|_2^2}{\|\omega_s\|_2^2} (I - K_j) C \hat{\delta}_{(j)} \omega_s^T .    (13)

Proof. We start by expanding the right-hand side of Equation (13):

    \frac{1}{\|\omega\|_2^2} C^+ \tilde{R}_j \omega_s = \frac{1}{\|\omega\|_2^2} \left( C^+ K_j R \omega_s + \frac{\|\omega\|_2^2}{\|\omega_s\|_2^2} C^+ (I - K_j) C \hat{\delta}_{(j)} \omega_s^T \omega_s \right) ,

and as \omega_s^T \omega_s = \|\omega_s\|_2^2 we can simplify:

    \frac{1}{\|\omega\|_2^2} C^+ \tilde{R}_j \omega_s = \frac{1}{\|\omega\|_2^2} \left( C^+ K_j R \omega_s + \|\omega\|_2^2 C^+ C \hat{\delta}_{(j)} - \|\omega\|_2^2 C^+ K_j C \hat{\delta}_{(j)} \right) .

The second term reduces to \hat{\delta}_{(j)} since C^+ C = I. The third term, replacing \hat{\delta}_{(j)} with its expression (12), expands as:

    \|\omega\|_2^2 C^+ K_j C \hat{\delta}_{(j)} = C^+ K_j C (K_j C)^+ K_j R \omega_s = (C^T C)^{-1} \underbrace{C^T K_j C (C^T K_j C)^{-1}}_{I} C^T K_j R \omega_s = C^+ K_j R \omega_s ,

and the overall expression simplifies to:

    \frac{1}{\|\omega\|_2^2} C^+ \tilde{R}_j \omega_s = \hat{\delta}_{(j)} .

The projection of the j-th data with the global model \hat{\delta} is c_j^T \hat{\delta} and with the partial model \hat{\delta}_{(j)} is c_j^T \hat{\delta}_{(j)}. Using Equations (11) and (12) we can rewrite these projections as:

    c_j^T \hat{\delta} = \frac{1}{\|\omega\|_2^2} c_j^T C^+ R \omega_s = \frac{1}{\|\omega\|_2^2} \hat{c}_j^T R \omega_s ,    (14)

    c_j^T \hat{\delta}_{(j)} = \frac{1}{\|\omega\|_2^2} c_j^T C^+ \tilde{R}_j \omega_s = \frac{1}{\|\omega\|_2^2} \hat{c}_j^T \tilde{R}_j \omega_s ,    (15)

with \hat{c}_j^T the j-th row of the hat matrix \hat{C} = CC^+. We note that (I - K_j) C = e_j c_j^T. Taking the difference between the two predictions and factoring, we obtain:

    c_j^T \hat{\delta} - c_j^T \hat{\delta}_{(j)} = \frac{1}{\|\omega\|_2^2} \hat{c}_j^T (R - \tilde{R}_j) \omega_s
        = \frac{1}{\|\omega\|_2^2} \hat{c}_j^T \left( R - K_j R - \frac{\|\omega\|_2^2}{\|\omega_s\|_2^2} e_j c_j^T \hat{\delta}_{(j)} \omega_s^T \right) \omega_s
        = \frac{1}{\|\omega\|_2^2} \hat{c}_j^T \left( e_j r_j^T \omega_s - \|\omega\|_2^2 e_j c_j^T \hat{\delta}_{(j)} \right)
        = \frac{1}{\|\omega\|_2^2} \hat{c}_{jj} \left( r_j^T \omega_s - \|\omega\|_2^2 c_j^T \hat{\delta}_{(j)} \right) ,
where \hat{c}_{jj} is the j-th diagonal element of the matrix \hat{C}. Rearranging the terms gives:

    c_j^T \hat{\delta} - \frac{1}{\|\omega\|_2^2} \hat{c}_{jj} r_j^T \omega_s = (1 - \hat{c}_{jj}) c_j^T \hat{\delta}_{(j)} .

Subtracting (1 - \hat{c}_{jj}) s_j on both sides we have:

    c_j^T \hat{\delta} - \frac{1}{\|\omega\|_2^2} \hat{c}_{jj} r_j^T \omega_s - s_j + \hat{c}_{jj} s_j = (1 - \hat{c}_{jj}) (c_j^T \hat{\delta}_{(j)} - s_j) ,

from which:

    c_j^T \hat{\delta}_{(j)} - s_j = \frac{1}{1 - \hat{c}_{jj}} \left( c_j^T \hat{\delta} - s_j + \hat{c}_{jj} s_j - \frac{1}{\|\omega\|_2^2} r_j^T \omega_s \right) .

We observe that c_j^T \hat{\delta}_{(j)} - s_j is the residual vector of the j-th measurement, as in Equation (9). Replacing \hat{\delta} from Equation (11) and summing the squared norm over j we get, after some rewriting, the non-iterative formula (10):

    \tilde{E}_g^2(\lambda) = \frac{1}{n} \left\| \mathrm{diag}\!\left(\frac{\mathbf{1}}{\mathbf{1} - \mathrm{diag}(\hat{C})}\right) \left( \hat{C} k - s - \mathrm{diag}(\mathrm{diag}(\hat{C}))(k - s) \right) \right\|_2^2 .
Semi-automatic Motion Segmentation with Motion Layer Mosaics
Matthieu Fradet 1,2, Patrick Pérez 2, and Philippe Robert 1
1 Thomson Corporate Research, Rennes, France
2 INRIA, Rennes-Bretagne Atlantique, France
Abstract. A new method for motion segmentation based on reference motion layer mosaics is presented. We assume that the scene is composed of a set of layers whose motion is well described by parametric models. This usual assumption is compatible with the notion of a motion layer mosaic, which allows a compact representation of the sequence with only a small number of mosaics. We segment the sequence using a reduced number of distant image-to-mosaic comparisons instead of a larger number of close image-to-image comparisons. Apart from the computational advantage, another interest lies in the fact that motions estimated between distant images are more likely to be different from one region to another than motions estimated between consecutive images. This helps the segmentation process. The segmentation is obtained by graph cut minimization of a cost function which includes an original image-to-mosaic data term. At the end of the segmentation process, it may happen that the obtained boundaries are not precisely the expected ones. Often the user has no other possibility than modifying every segmentation manually, one after another, or starting the whole process over again with different parameters. We propose an original, easy way for the user to manually correct the possible errors on the mosaics themselves. These corrections are then propagated to all the images of the corresponding video interval thanks to a second segmentation pass. Experimental results demonstrate the potential of our approach.
1 Introduction
The problem of segmenting a video in regions of similar motion has long been an active topic in computer vision and it is still an open problem today. Motion layer extraction has many applications, such as video compression, mosaic generation, video object removal, etc. Moreover the extracted layers can be used for advanced video editing tasks including matting and compositing. The usual assumption is that a scene can be approximated by a set of layers whose motion in the image plane is well described by a parametric model. Motion segmentation consists in estimating the motion parameters of each layer and in extracting the layer supports. There are many different approaches. Only examples from different classes are mentioned below. One type of motion segmentation methods relies on the partitioning of a dense motion field previously estimated. A K-means clustering algorithm on the motion
estimates is proposed in [1]. The Minimum Description Length (MDL) encoding principle is used in [2] to decide automatically the adequate number of models. Some authors [3,4] propose to combine dense motion estimation and parametric segmentation, while some others extract layers either jointly [5,6], or one after another using a direct parametric segmentation with no optical flow computation [7]. More recently, the video segmentation problem was formulated into the graph cut framework. The sequential approach described in [8] provides a segmentation map for the current image taking into account the previous labeling and matching the current image with the next one only (t → t + 1). In such a sequential method, in poorly textured regions or when the different motions are small, motions estimated between consecutive images are unlikely to be very different from one region to another, which does not help the segmentation process. Simultaneous batch approaches provide jointly the segmentation of N images using a 3D graph while increasing temporal consistency. Temporal constraints between an image and the subsequent ones (1 → 2, 1 → 3, . . . ) are used in [9]. The method presented in [10] considers temporal constraints between successive images (1 → 2 → 3 → . . . ) but uses also more distant image pairs (t → t + 1, t → t + 2, . . . , t → t − 1, t → t − 2, . . . ) to handle small motion cases when computing motion residuals. Note that in [10] the hidden parts of motion layers are extracted too, but in this paper we consider only the visible parts. It is important to note here that such batch methods require that the motion models have already been estimated for the N considered images before the supports extraction, which implies that a clustering has already been done. Moreover the number N of images processed in the 3D graph can certainly not cover a whole sequence due to complexity issues. These methods also impose matching large numbers of image pairs without exploiting the large amount of implied redundancy. Complementary to the motion segmentation task, the layer representation and the extracted supports are used to generate as many layer mosaics as extracted motion layers [1,11]. [12] presents not only static mosaics built in batch mode but also dynamic mosaics corresponding to a sequential update of mosaic images. The recurrent problem of global and local realignment in mosaics is addressed in [13]. In this paper we propose a new motion layer extraction system using reference layer mosaics. “Motion layer ”, or simply “layer ”, designates a set of image regions sharing the same motion. The terminology will also be used to simply designate a segmentation label or a layer index. “Motion layer mosaic”, or simply “mosaic”, designates a still image providing a not necessarily complete planar representation integrating the different elements of a given layer observed on a set of images. In our work a mosaic is associated with a reference instant on which it is aligned. Our motivation is to work with distant images (because motions estimated between distant images are more likely to be different from one region to another than when estimated between consecutive images) while reducing the number
of image pairs to be handled and benefiting from the fact that layer mosaics represent the video sequence in a compact way. Note that mosaics can be reused in region filling or video editing systems, or for compression tasks. We present an iterative technique to estimate the motion parameters, to extract the support of each layer, and to create layer mosaics as the segmentation progresses. At the end of the segmentation, like with every segmentation method, it can happen that the obtained boundaries are not precisely the expected ones. Often the user has no other possibility than interacting on every image one after another to correct the unsatisfactory results. We propose an original way for the user to easily provide additional input on the mosaics themselves. This input is then propagated to the whole video interval during a second segmentation pass. The paper is organized as follows. Section 2 presents our motion layer extraction algorithm and the objective function that we use. Section 3 presents different ways to generate motion layer mosaics, either before or as the segmentation task is conducted. Experimental results are shown in Sect. 4.
2 Motion Segmentation Using Motion Layer Mosaics
According to the assumption usually made in motion segmentation, the scene can be approximated by a set of layers whose motion in the image plane is well described by a low-dimensional parametric model. Additionally, in our work we assume that the number n of layers is known and that the layers keep the same depth order during the whole sequence. We mainly base our work on [8,9,10]. In our approach, instead of linking temporally many image pairs from the sequence, we propose to link temporally the currently processed image with two sets of motion layer mosaics only. Thus a new image-to-mosaic data term replaces the usual image-to-image motion data term in our objective function. Note that motion layer mosaics are mentioned in the following subsections, although their generation is only presented in Sect. 3.
2.1 Extraction of Motion Layers Using Reference Layer Mosaics
First, the user selects reference instants to divide the video into several time intervals. On the reference images, the segmentations and the depth order of the n layers are obtained semi-automatically. We then process sequentially from one time interval [t_a, t_b] to the next. A simplified flow chart of our sequential system after initialization is shown in Fig. 1.

Fig. 1. Simplified flow chart of our algorithm

For a given time interval [t_a, t_b] between two reference instants, our aim is to progress from t_a to t_b while obtaining a succession of temporally consistent segmentations. The independent segmentation maps obtained by the user for t_a and t_b may be seen as boundary conditions that are propagated in the interval. In association with these reference instants, as the segmentation progresses, we generate two sets of reference motion layer mosaics (M_{a,i})_{i \in [0,n-1]} and (M_{b,i})_{i \in [0,n-1]}.
At current time t, the segmentation of the image I_t is first predicted by projection of the previous segmentation. A dense backward motion field between I_t and I_{t_a} and a dense forward motion field between I_t and I_{t_b} are estimated. A reliability index, based on the Displaced Frame Difference (DFD), is computed for every motion vector of both fields:

    \mathrm{reliab}_\alpha(p) = \max\left( 0,\ 1 - \frac{\left| I_t(p) - I_{t_\alpha}(p - dp_\alpha) \right|}{\tau_1} \right) , \quad \forall \alpha \in \{a, b\} ,    (1)
where dp_a is the backward motion vector estimated at pixel p, dp_b the forward one, and \tau_1 is an empirical normalization threshold. For parametric motion modeling, the affine model A with 6 degrees of freedom is adopted. Thus for a pixel p = (x, y) with motion vector dp(x, y):

    dp(x, y) = A(x, y) = \begin{pmatrix} a_{x0} + a_{xx} x + a_{xy} y \\ a_{y0} + a_{yx} x + a_{yy} y \end{pmatrix} .    (2)

Based on the dense motion fields, forward and backward affine motion models are estimated for each layer of the predicted map. The motion parameters are recovered by a standard linear regression technique, but the reliability measure weights the influence of each vector on the estimate. For the motion models, we use the following notations: for layer l, A_{a,l} is the backward affine model estimated between I_t and I_{t_a} and A_{b,l} is the forward one estimated between I_t and I_{t_b}. According to the assumption of brightness constancy, for a pixel p of the image I_t that belongs to the layer l and that is not occluded in I_{t_a} and I_{t_b}:

    I_t(p) \approx I_{t_\alpha}\left( p - A_{\alpha,l}(p) \right) , \quad \forall \alpha \in \{a, b\} .    (3)

Based on these layered motion models, the predicted motion boundaries are finally refined using the graph cut framework. At this step the layers obtained for I_t are warped into the two reference mosaic sets (M_{a,i})_{i \in [0,n-1]} and (M_{b,i})_{i \in [0,n-1]}, using either the forward models or the backward models depending on the mosaic of destination. The stitching of the supports of I_t into the reference mosaics is described in Subsect. 3.3. The prediction of the segmentation map at next time t+1 is done by projection of the segmentation map of I_t, using affine motion models estimated from a dense motion field computed between I_t and I_{t+1}. The layers are projected one after another, from the most distant to the closest. Since the affine predictions rarely arrive at integer positions, the same label is assigned to the four nearest neighbors. This allows attaching a prediction to most of the pixels where no affine prediction arrives. Remaining areas without any predicted label are areas which are occluded in I_t and visible in I_{t+1}. This is continued for all the images of the interval [t_a, t_b].
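As an illustration of the reliability weighting of Eq. (1) and of the reliability-weighted fit of the affine model of Eq. (2), a minimal NumPy sketch (grayscale images and nearest-neighbour sampling are simplifications of ours, and the value of τ1 shown is an assumption):

    import numpy as np

    def reliability(I_t, I_ref, flow, tau1=20.0):
        # Eq. (1): per-pixel weight in [0, 1] from the displaced frame difference.
        # I_t, I_ref: (H, W) grayscale images; flow: (H, W, 2) motion field (x, y)
        # from I_t to I_ref; tau1 is the normalisation threshold (value assumed).
        H, W = I_t.shape
        ys, xs = np.mgrid[0:H, 0:W]
        xw = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, W - 1)
        yw = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, H - 1)
        dfd = np.abs(I_t - I_ref[yw, xw])
        return np.maximum(0.0, 1.0 - dfd / tau1)

    def fit_affine(xy, vec, w):
        # Weighted least-squares fit of the 6-parameter affine model of Eq. (2)
        # over the pixels of one predicted layer: xy (n, 2), motion vectors vec (n, 2),
        # reliability weights w (n,)
        X = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1]])
        sw = np.sqrt(w)[:, None]
        ax = np.linalg.lstsq(sw * X, sw[:, 0] * vec[:, 0], rcond=None)[0]
        ay = np.linalg.lstsq(sw * X, sw[:, 0] * vec[:, 1], rcond=None)[0]
        return ax, ay   # (a_x0, a_xx, a_xy) and (a_y0, a_yx, a_yy)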
2.2 Objective Function
Given the labeling f = (f_p)_{p \in P}, with f_p \in [0, n-1] and P the pixel set to be segmented, we consider the following objective function, which is the sum of two standard terms (color data term and spatial smoothness) described in [14], an original image-to-mosaic data term and a temporal term:

    E(f) = \lambda_0 \sum_{p \in P} C_p(f_p) + \lambda_1 \sum_{(p,q) \in C} V_{p,q}(f_p, f_q) + \lambda_2 \sum_{p \in P} D_p(f_p) + \lambda_3 \sum_{p \in P} \Psi_p(f_p) ,    (4)

where the four sums are denoted E_{color}(f), E_{smooth}(f), E_{mosaic}(f) and E_{temp}(f), respectively.
Here C is the set of neighbor pairs with respect to 8-connectivity, and (\lambda_i)_{i \in [0,3]} are positive parameters that weight the influence of each term. C_p(f_p) is a standard color data penalty term at pixel p, set as the negative log-likelihood of the color distribution of layer f_p [8,15]. This distribution is a Gaussian Mixture Model (GMM) computed on the mosaics M_{a,f_p} and M_{b,f_p} before the segmentation process starts (see Sect. 3 for details on mosaic generation). V_{p,q}(f_p, f_q) is a standard contrast-sensitive regularization term:

    V_{p,q}(f_p, f_q) = \frac{1}{\mathrm{dist}(p, q)} \exp\left( -\frac{\| I_t(p) - I_t(q) \|^2}{2 \sigma^2} \right) \cdot T(f_p \neq f_q) ,    (5)
where \sigma is the standard deviation of the norm of the gradient, T(\cdot) is 1 if the argument predicate is true and 0 otherwise, and dist(\cdot) is the Euclidean distance. D_p(f_p) is a new image-to-mosaic data penalty term at pixel p for the motion model corresponding to layer f_p. It corresponds to a residual computed between the image and the two mosaics of the concerned layer. These mosaics contain more information than the image itself, since they accumulate all the elements that have been visible in the previously segmented images, which may have since disappeared but may also reappear. Thus, with a unique image-to-mosaic matching we can do as well as with numerous image-to-image comparisons. D_p(f_p) is defined as:

    D_p(f_p) = \min_{\alpha \in \{a, b\}} r_\alpha(p, f_p) ,    (6)

    r_\alpha(p, f_p) = \begin{cases} \arctan\left( \| I_t(p) - M_{\alpha,f_p}(p_\alpha) \|^2 - \tau_2 \right) + \frac{\pi}{2} & \text{if } p_\alpha \text{ belongs to the mosaic support} \\ +\beta & \text{otherwise,} \end{cases}    (7)
where p_\alpha is the position associated in the mosaic M_{\alpha,f_p} with p in I_t according to A_{\alpha,f_p}. This smooth penalty and its threshold parameter \tau_2 allow a soft distinction between low residuals (well classified pixels) and high residuals (wrongly classified pixels or occluded pixels). For all our experiments, \tau_2 = 50 and \beta = 10. Figure 2 illustrates this matching between the current image to be segmented and the two sets of reference mosaics.

Fig. 2. Matching between the current image to be segmented and the mosaics of the two layers 0 and 1, which respectively correspond to the background and to the moving van at reference instants t_a and t_b.

The last, temporal term of the energy function enforces temporal consistency of the labeling between consecutive images. It is defined as:

    \Psi_p(f_p) = \begin{cases} 1 & \text{if } f_p \neq \hat{f}_p \text{ and } \hat{f}_p \neq \emptyset \\ 0 & \text{otherwise,} \end{cases}    (8)

where \hat{f}_p is the predicted label of pixel p at instant t, and \emptyset is the blank label for the appearing areas without any predicted label.
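As an illustration of the per-pixel terms of Eqs. (5)-(7), a minimal NumPy sketch for a single pixel and a single candidate layer (the function names are ours; M_a and M_b denote the mosaic colours sampled at the positions predicted by the backward/forward affine models, or None when the position falls outside the mosaic support):

    import numpy as np

    def data_mosaic(I_p, M_a, M_b, tau2=50.0, beta=10.0):
        # Eqs. (6)-(7): image-to-mosaic penalty; tau2 = 50 and beta = 10 as in the paper
        def r(M_p):
            if M_p is None:
                return beta
            return np.arctan(np.sum((I_p - M_p) ** 2) - tau2) + np.pi / 2
        return min(r(M_a), r(M_b))

    def smooth_contrast(I_p, I_q, fp, fq, sigma, dist=1.0):
        # Eq. (5): contrast-sensitive Potts term between 8-connected neighbours p and q
        if fp == fq:
            return 0.0
        return np.exp(-np.sum((I_p - I_q) ** 2) / (2.0 * sigma ** 2)) / dist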
2.3 Minimization
The global energy (4) is minimized using graph-cuts [16,17]. As we want to handle an arbitrary number of layers, we use the α-expansion algorithm [18] to solve the multi-labels problem. We obtain simultaneously all the layer supports for the considered instant. Moreover our algorithm provides dense segmentation maps without any additional label for noise, occluded pixels or indetermination (contrary to [9,10]). Given that we work at the pixel level, building a graph on the whole image and conducting minimization on it is computationally expensive in the context of video analysis and editing. Moreover, because we assume temporal consistency on the segmentation maps, it is not useful to question all of the pixels at every instant. Consequently we propose to build the graph on an uncertainty strip around the predicted motion boundaries and on appearing areas only. Note that appearing areas have no predicted label, so they are also considered as uncertain. Pixels ignored by the graph retain their predicted label that we assume correct. Such a graph restriction constrains the segmentations both spatially and temporally.
Pixels on the outlines of the graph area keep their predicted label and constitute boundary conditions for the segmentation to be obtained. These labeled pixels are considered as seeds and provide hard constraints, which are satisfied by setting the weights of the links that connect them to the terminals to specific values (see [14]).
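A minimal sketch of how such an uncertainty strip can be built from the predicted label map (the band width and the use of -1 for appearing areas without any predicted label are assumptions of ours; SciPy is used for the dilation):

    import numpy as np
    from scipy import ndimage

    def uncertainty_mask(pred_labels, band=5):
        # Pixels submitted to the graph: a strip around predicted motion boundaries
        # plus appearing areas (label -1, no prediction); all other pixels keep
        # their predicted label.
        boundary = np.zeros(pred_labels.shape, dtype=bool)
        dx = pred_labels[:, 1:] != pred_labels[:, :-1]
        dy = pred_labels[1:, :] != pred_labels[:-1, :]
        boundary[:, 1:] |= dx
        boundary[:, :-1] |= dx
        boundary[1:, :] |= dy
        boundary[:-1, :] |= dy
        strip = ndimage.binary_dilation(boundary, iterations=band)
        return strip | (pred_labels == -1)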
3 Generation of Motion Layer Mosaics
The proposed motion segmentation system requires, for each reference instant, as many mosaics as there are layers in the scene. But how can such mosaics be provided as input to our system when layer supports are needed to generate these mosaics? We describe three ways to obtain these motion layer mosaics. For each mosaic, the user decides which way to employ, depending on ease of use.
3.1 Simple Case
The first approach concerns the simplest particular case: that of a foreground object fully visible in the image associated with the chosen reference instant. The interest of such situations can guide the manual choice of reference instants. For such a foreground object the mosaic is made up of the region corresponding to the complete object. The whole remaining area of the mosaic is empty. Since the considered object is fully visible at the reference instant, the mosaic does not need to be further completed. Thus the user can easily provide the mosaic before the motion segmentation step. The object boundaries can be obtained by a semi-automatic tool for still image segmentation like [14,15]. An example of this simple case is illustrated with the white van in Fig. 3.
Fig. 3. Reference mosaics for PETS 2001 sequence. First row, background mosaics obtained by semi-automatic stitching from the two reference images. Second row, foreground mosaics obtained by interactive segmentation in the two still reference images.
3.2 Semi-automatic Stitching
The second approach is inspired by the stitching tools commonly used to generate panoramas from several photographs [13,19]. Such tools usually detect feature points in each photograph of the set. Then, for each pair of images with a common overlapping area, the features belonging to this area are matched to compute the global transformation between the two photographs. In our case, the set of photographs is replaced by two distant images from the original sequence. The user is guided by his/her knowledge of the sequence to choose the two images such that together they provide the whole layer support. While common panorama settings assume a single layer, in our case we have to remember that there are several layers. The user semi-automatically segments both distant reference images and indicates the layer correspondences. Then one transformation per layer is estimated. Instead of automatic feature matching, the user can manually select pairs of control points. This approach is particularly well adapted to the background if the areas occluded by foreground objects in the first image are totally disoccluded in the last image. This example is illustrated by the background layer in Fig. 3.
3.3 Joint Mosaic Generation and Motion Segmentation
The third approach, which can always be applied, does not require any user interaction. The generation of the mosaics is embedded in our time-sequential segmentation system. For each layer, the supports are stitched together automatically as they are extracted. The first support results from the semi-automatic segmentation of the reference image. Motion layer extraction consists in extracting the layer supports but also in estimating the motion model parameters of each layer. Thus, using the motion models, layers are warped into the corresponding reference mosaics we want to build. This means that in this case the reference mosaics evolve as the segmentation process progresses, unlike the mosaics built in the two previous approaches. Regions that appear for the first time are copied into the mosaics. For regions already seen, we simply keep the values of their first appearance, without any update, to preserve the reference data in case of classification errors. Figure 7 shows mosaics obtained this way using 20 images. Given the notations introduced in Sect. 2:
– Either the pixel p of layer f_p in I_t has already been disoccluded in the images previously segmented, in which case the information is already available in the reference mosaics M_{a,f_p} and M_{b,f_p} generated for layer f_p.
– Or the pixel p of layer f_p in I_t occurs for the first time, in which case the information is missing in the reference mosaics, so we copy it (a sketch follows this list):

    M_{\alpha,f_p}\left( p - A_{\alpha,f_p}(p) \right) = I_t(p) , \quad \forall \alpha \in \{a, b\} .    (9)
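A minimal sketch of this stitching rule (Eq. (9)) for one layer, with nearest-neighbour rounding of the warped position and a validity mask marking pixels already filled (the helper names, grayscale images and the rounding are assumptions of ours):

    import numpy as np

    def stitch_layer(mosaic, filled, I_t, support, A):
        # mosaic, filled: (H, W) mosaic values and boolean "already seen" mask, updated in place;
        # I_t: (H, W) current image; support: (H, W) bool mask of the layer in I_t;
        # A(x, y) -> (dx, dy): affine motion model A_{alpha, f_p} of Eq. (2).
        H, W = mosaic.shape
        ys, xs = np.nonzero(support)
        for x, y in zip(xs, ys):
            dx, dy = A(x, y)
            u = int(round(x - dx))
            v = int(round(y - dy))
            if 0 <= u < W and 0 <= v < H and not filled[v, u]:
                mosaic[v, u] = I_t[y, x]     # copy only first appearances, Eq. (9)
                filled[v, u] = True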
Table 1. Summary of steps and degree of automaticity

    reference instants selection          by the user (or automatic if periodic)
    reference images segmentation         semi-automatic
    motion estimation and segmentation    automatic
    mosaic generation (Subsect. 3.1)      cf. reference images segmentation
    mosaic generation (Subsect. 3.2)      semi-automatic
    mosaic generation (Subsect. 3.3)      automatic
    mosaic correction (if required)       manual
In case of segmentation mistakes, the mosaics will be directly affected. In other words, misclassified areas may be stitched into the wrong mosaic. We voluntarily do not try to conceal such errors during mosaic generation. The user is allowed to efficiently remove them at the end of the first segmentation pass. A second segmentation pass with such corrected mosaics as input can improve notably the accuracy of the motion boundaries (see Subsect. 4.3). Table 1 summarizes the steps of our system and their degree of automaticity.
4 Experimental Results
4.1 Validation of Our Image-to-Mosaic Energy Term
First we tested our algorithm on a video surveillance sequence from the PETS 2001 database (courtesy of The University of Reading, UK). The camera is fixed. The scene represents a parking with a single moving object: a white van. We chose to process only one interval of 60 images because the motion of the van is more complex than a planar motion in the rest of the sequence. The van is fully visible in every image of the processed interval. Mosaics were easily obtained before we launched the motion segmentation task. This simple case allowed us to focus on motion layer extraction and not on mosaic generation. Thus we validated our image-to-mosaic energy term by imposing null weights to the color data term and to the temporal one. We present the reference mosaics we used in Fig. 3. The algorithm extracted well both layers (Fig. 4). The only errors occurred when the moving white van goes in front of the parked white car. Our image-to-mosaic term is efficient except in these “white on white” regions. Note that through the windshield of the van, pixels are considered as belonging to the background, which is interesting for some applications. If the white van were to be removed by region filling, the windshield should be classified as foreground. This can be obtained by incorporating the temporal term (see Fig. 5). 4.2
Results on Flower Garden Sequence
We tested our algorithm on the first 20 images of the well known Flower Garden sequence. 0ur results shown in Fig. 6 are as good as the best ones in the literature.
Fig. 4. Results on PETS 2001 sequence without temporal constraints. Motion layer boundaries are superimposed on the images. Note the temporal inconsistencies on the windshield of the van, due to its transparency. λ0 = 0, λ1 = 30, λ2 = 17 and λ3 = 0.
Fig. 5. Results on PETS 2001 sequence with temporal constraints. Motion layer boundaries are superimposed on the images. The windshield of the van is always classified as foreground. λ0 = 0, λ1 = 30, λ2 = 17 and λ3 = 15.
Fig. 6. Results on Flower Garden with simultaneous automatic mosaic generation. Four images with superimposed motion boundaries and corresponding segmentation maps. Depth display convention: the darker the region, the more distant the layer. Please refer to [8,9] for result comparisons. λ0 = 1, λ1 = 11, λ2 = 17 and λ3 = 3.
Fig. 7. Layer mosaics automatically generated from 20 different images of the Flower Garden sequence. Please refer to [1] for a comparison of the "flowerbed" mosaic.
The precise extraction of the branches would require a matting method. The end of the sequence was also satisfactorily processed in a second interval, but results are not shown due to space constraints. Figure 7 shows two of the mosaics generated automatically from the 20 images of the interval as the motion segmentation progressed. Compared with the "flowerbed" accumulation of [1], no parts of the "houses" layer are inserted in our "flowerbed" mosaic, and the quality of the stitching is as good without any temporal median operation.
4.3 Results on Carmap Sequence
The Carmap sequence is also tested in [9,10]. Because of the 3D motion present in the sequence, we divided it into three time intervals to make the problem easier. Segmentation was done on the three intervals; due to space constraints, results on the first two intervals are not supplied. A first experiment was done building the mosaics as the segmentation was performed. Results are shown in Fig. 8. The approximation of the motion of the car by an affine model failed because of the out-of-plane 3D rotation and the large time distance between matched images (relative to the amplitude of apparent motion in the scene). Instead of subdividing the interval into two shorter ones and starting all over again, we added some modest user input on the generated mosaics before launching a second segmentation pass.
Fig. 8. Results on Carmap sequence after first pass. Note the misclassifications of the windshield. λ0 = 1, λ1 = 30, λ2 = 14 and λ3 = 10.
Fig. 9. Mosaic of the background. From left to right: mosaic obtained after the first pass, detail of the white square that wrongly includes some parts of the car, misclassification removal by the user. In black, remaining holes in the mosaic: these regions are never visible in the processed interval.
Fig. 10. Results on Carmap sequence after lightweight mosaic correction by the user and a second segmentation pass. The misclassifications of the windshield are fixed. λ0 = 1, λ1 = 30, λ2 = 14 and λ3 = 10.
Figure 9 shows how the user erased misclassified regions in the background mosaic using a standard image editing eraser. A second segmentation pass was performed with this modified mosaic as the input mosaic for the background layer. The benefit of working with layer mosaics is fully demonstrated here: corrections applied by the user in the still mosaic behave as seeds that are propagated to all the images during the second segmentation pass. Results are presented in Fig. 10. After modest user interaction on one mosaic only, the second segmentation pass provided results at least as good as those of the batch methods [9,10].
4.4 Discussion
Because we do not have any ground truth, the quality of the presented results unfortunately has to be evaluated subjectively. In our opinion, we obtained results of at least similar quality to those of simultaneous batch approaches, which are more expensive. However, remember that manual intervention is integral to the present approach, so comparison to more automatic approaches requires precautions. Our contribution is well adapted to post-production applications, for example. In such a context, user interaction is relevant: the requirement of perfect visual quality of the resulting sequence is so high that it is generally preferable for the approach to rely on some user input, even if this reduces the degree of automation. Consider, for example, the initialization step, which is crucial in motion layer extraction methods, whether they are automatic or semi-automatic. The automatic initialization proposed in [9] uses the N first images of the sequence
and is based on time-consuming steps like seed region expansion and region merging. First, the user can quickly obtain a similar initialization with modest effort, as we do. Second, it may happen that two layers have different motions only at the end of the sequence. Such an automatic initialization will certainly merge them, with no way to separate them efficiently later, whereas a semi-automatic initialization allows the user to distinguish the two layers.
5 Conclusion
In this paper we have presented a new motion layer extraction method based on layer mosaics. For some reference instants of the sequence, we generate motion layer mosaics in which we accumulate the information of each layer. This mosaic generation is done either beforehand or as the segmentation process progresses. We propose to exploit this compact layered representation of the sequence in three ways during the segmentation process. First, we reduce the number of image pairs to be handled: motion is estimated between each image and two of the reference instants only, and not for large numbers of image pairs as with the simultaneous batch approaches. Second, since the previous point implies that we work with distant images, we benefit from the fact that motions estimated between distant images are more likely to differ from one region to another than motions estimated between consecutive images. This helps the segmentation process. Third, if the results after a first segmentation pass are not satisfactory, the user can easily add modest input directly on the generated mosaics before starting a second segmentation pass, during which this input is propagated over the whole video interval. Promising results show the potential of our method. In future work, we will integrate a matting step into our algorithm to handle sequences with significant semi-transparent regions.
References
1. Wang, J., Adelson, E.: Representing moving images with layers. IEEE Trans. on Image Processing 3(5), 625–638 (1994)
2. Ayer, S., Sawhney, H.S.: Layered representation of motion video using robust maximum-likelihood estimation of mixture models and MDL encoding. In: ICCV, pp. 777–784 (1995)
3. Black, M.J., Anandan, P.: The robust estimation of multiple motions: Parametric and piecewise-smooth flow fields. Computer Vision and Image Understanding 63(1), 75–104 (1996)
4. Mémin, E., Pérez, P.: Hierarchical estimation and segmentation of dense motion fields. IJCV 46(2), 129–155 (2002)
5. Bouthemy, P., François, E.: Motion segmentation and qualitative dynamic scene analysis from an image sequence. IJCV 10(2), 157–182 (1993)
6. Cremers, D., Soatto, S.: Motion competition: A variational approach to piecewise parametric motion segmentation. IJCV 62(3), 249–265 (2005)
7. Odobez, J.M., Bouthemy, P.: Direct incremental model-based image motion segmentation for video analysis. Signal Processing 66(2), 143–155 (1998)
8. Dupont, R., Paragios, N., Keriven, R., Fuchs, P.: Extraction of layers of similar motion through combinatorial techniques. In: Rangarajan, A., Vemuri, B.C., Yuille, A.L. (eds.) EMMCVPR 2005. LNCS, vol. 3757, pp. 220–234. Springer, Heidelberg (2005)
9. Xiao, J., Shah, M.: Motion layer extraction in the presence of occlusion using graph cuts. PAMI 27(10), 1644–1659 (2005)
10. Dupont, R., Juan, O., Keriven, R.: Robust segmentation of hidden layers in video sequences. Technical report, CERTIS - ENPC (January 2006)
11. Min, C., Yu, Q., Medioni, G.: Multi-layer mosaics in the presence of motion and depth effects. In: ICPR, pp. 992–995 (2006)
12. Irani, M., Anandan, P., Bergen, J., Kumar, R., Hsu, S.: Efficient representations of video sequences and their applications. Signal Processing: Image Communication 8(4), 327–351 (1996)
13. Shum, H.Y., Szeliski, R.: Construction and refinement of panoramic mosaics with global and local alignment. In: ICCV (1998)
14. Boykov, Y., Jolly, M.P.: Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images. In: ICCV (2001)
15. Rother, C., Kolmogorov, V., Blake, A.: "GrabCut": interactive foreground extraction using iterated graph cuts. ACM Trans. Graph. 23(3), 309–314 (2004)
16. Boykov, Y., Kolmogorov, V.: An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. PAMI 26(9), 1124–1137 (2004)
17. Kolmogorov, V., Zabih, R.: What energy functions can be minimized via graph cuts? PAMI 26(2), 147–159 (2004)
18. Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. PAMI 23(11), 1222–1239 (2001)
19. Brown, M., Lowe, D.: Recognising panoramas. In: ICCV, pp. 1218–1225 (2003)
Unified Frequency Domain Analysis of Lightfield Cameras
Todor Georgiev, Chintan Intwala, Sevkit Babakan, and Andrew Lumsdaine
1 Adobe Systems, 345 Park Ave, San Jose, CA 95110
2 Computer Science Department, Indiana University, Bloomington, IN 47405
Abstract. This paper presents a theory that encompasses both “plenoptic” (microlens based) and “heterodyning” (mask based) cameras in a single frequency-domain mathematical formalism. Light-field capture has traditionally been analyzed using spatio-angular representation, with the exception of the frequency-domain “heterodyning” work. In this paper we interpret “heterodyning” as a general theory of multiplexing the radiance in the frequency domain. Using this interpretation, we derive a mathematical theory of recovering the 4D spatial and angular information from the multiplexed 2D frequency representation. The resulting method is applicable to all lightfield cameras, lens-based and mask-based. The generality of our approach suggests new designs for lightfield cameras. We present one such novel lightfield camera, based on a mask outside a conventional camera. Experimental results are presented for all cameras described.
1 Introduction
A central area of research in computational photography is capturing "light itself" as opposed to capturing a flat 2D picture. Advantages of light-field or integral photography are the ability to obtain information about the 3D structure of the scene, and the new power of optical manipulation of the image, like refocusing and novel view synthesis. The light itself, or light-field, is mathematically described by the radiance density function, which is a complete record of light energy flowing along "all rays" in 3D space. This density is a field defined in the 4D domain of optical phase space, that is, the space of all lines in 3D with symplectic structure [1]. Conventional cameras, based on 2D image sensors, do not record the 4D radiance. Instead, they simply act as "integration devices". In a typical setting, they integrate over the 2D aperture to produce a 2D projection of the full 4D radiance. Integral Photography [2,3] was proposed more than a century ago to "undo" this integration and measure the complete 4D radiance arriving at all points on a film plane or sensor. As demonstrated by Levoy and Hanrahan [4] and Gortler et al. [5], capturing the additional two dimensions of radiance data allows us to re-sort the rays of light to synthesize new photographs, sometimes referred to as "novel views." Recently, Ng et al. [6] have shown that a full 4D light field can be captured even
with a hand-held "plenoptic" camera. This approach makes light field photography practical, giving the photographer the freedom and the power to make adjustments of focus and aperture after the picture has been taken. A number of works have developed techniques for analyzing radiance in the frequency domain [7,8,9]. Several important results have been derived, among which are: applications of the Poisson summation formula to lightfields, lightfield displays, and depth representation of scenes; light transport and optical transforms; the Fourier slice theorem applied to refocusing, and others. However, with the notable exception of "heterodyning" [10], frequency analysis of radiance has not been applied to the general understanding and design of light-field cameras. The main goal of this paper is to expand the results of "heterodyning." The contributions of our paper are:
1. We provide a general mathematical framework for analyzing lightfield cameras in the frequency domain. Our theory is applicable to both lens-based and mask-based light-field cameras.
2. We show that the concept of "heterodyning" is not just a new mask-based camera design. Rather, it is a new method for processing data from any lightfield camera in the frequency domain.
3. We develop a generalized F/number matching condition and show that it is required for all lightfield camera designs, lens-based and mask-based.
4. We propose and demonstrate a new lightfield camera design that has a transparency mask placed in front of the main lens.
2 Prior Work and Basic Equations
2.1 Frequency Domain Representation
Recent works [7,8,9] have analyzed radiance, or light field, in frequency representation. Let r(x) be the radiance in conventional x-space, describing position and angle. In more detail, the spatio-angular coordinates of a ray at a given plane orthogonal to the optical axis are represented as a vector

x = (q, p)ᵀ,   (1)

where q is the location of the ray-plane intersection, and p is a vector defining the two angles of that ray at location q. Following texts on optics [1,11], we use the paraxial approximation, assuming the angle is small. A simplified 2-dimensional vector representation of a ray is shown in Figure 1. The physical ray-space is 4D. In the frequency domain the radiance is represented by the Fourier transform of r(x):

R(ω) = ∫ r(x) e^{iω·x} dx,   (2)

where the spatial frequency ω_q and the angular frequency ω_p are grouped together in a similar way as a 4D vector:

ω = (ω_q, ω_p)ᵀ.   (3)
Fig. 1. (a) Geometric representation of a ray as position and angle in an optical system. (b) Same ray described as a vector x = (q, p), in 2D ray space.
To simplify our text and figures we will often use 2D radiance with 1-dimensional position q and angle p for each ray.
2.2 Optical Transforms
We summarize and extend some results of previous work (e.g., [7,8,12]) about transformations of radiance in optical systems. The operation of a lens L or a translation T can be described by a linear transformation x' = Ax operating on a ray x [11]. If a ray is described as a position-angle vector (3), then L and T are given by the following matrices:

L = [[1, 0], [−1/f, 1]]   and   T = [[1, t], [0, 1]].   (4)

Here, f is the focal length of the lens, and t is the translation (distance of flight). The combined action of several such elements is described by the composition of their transforms. Now, since all elements in an optical system are described only by the above transforms, we can mathematically model any optical system (like a multielement camera lens or a light-field camera) as a single combined transform. Another important observation is that radiance is conserved in a non-absorbing system. In other words, the radiance does not change along a ray during travel or transformation by optical elements. The mathematical representation of this fact is that any optical matrix is symplectic [1]. Therefore, the determinant of any optical matrix is unity, i.e.,

det A = 1,   (5)

which can also be seen directly from equation (4). Based on this conservation property, the radiance r' after a transform is related to the radiance r before the transform by the following equation:

r'(x) = r(x₀) = r(A⁻¹x),   (6)

where x₀ is the ray which has been mapped into x by the optical transformation A, i.e., x = Ax₀.
The above transformation from r to r' can be expressed in frequency representation as:

R'(ω) = ∫ r'(x) e^{iω·x} dx
      = ∫ r(A⁻¹x) e^{iω·x} dx
      = ∫ r(A⁻¹x) e^{i ωᵀA A⁻¹x} dx
      = ∫ r(x₀) e^{i ωᵀA·x₀} dx₀
      = R(Aᵀω),   (7)

where Aᵀ is the transposed matrix, R(ω) is the Fourier transform of r(x), and we have used (6) for the change of variables from x to x₀. Note that this expression is derived for any optical transform A, while previous works have considered special cases [8]. Summarizing the basic equations that will be used throughout the paper, we have:

x' = Ax   (8)
r'(x) = r(A⁻¹x)   (9)
R'(ω) = R(Aᵀω)   (10)
Light-field Cameras in the Frequency Domain Ives’ Camera
The first light-field camera (called “Process of Making Parallax Stereograms”) was invented in 1903 by Frederick Ives [2]. This camera can be described as an array of pinhole cameras with the same focal distance f , as shown in Figure 2. This array of cameras is placed at the focal plane of a conventional large format camera. We will develop the mathematical representation for the radiance transforms inside Ives’ camera in the frequency domain. This representation is a new result, and it will be used extensively throughout the paper.
Fig. 2. The Ives’ light-field camera. Only the focal plane with the pinholes is represented in this figure.
Consider a 1-dimensional Ives' camera and the corresponding 2D radiance. Modeling the array of pinholes as a delta-train, the radiance just before this array is r(x) = r(q, p). Just after the array of pinholes the radiance is

r'(q, p) = r(q, p) Σ_{m=−∞}^{∞} δ(q − mb),   (11)

where b is the pitch (distance between pinholes). In the frequency domain the modulated train of delta-functions (11) can be represented using the Poisson summation formula [13] as

R'(ω) = ∫ r(q, p) Σ_m δ(q − mb) e^{iω·x} dx
      = (1/b) Σ_n ∫ r(q, p) e^{i n 2πq/b} e^{i(ω_q q + ω_p p)} dq dp
      = (1/b) Σ_n R(ω_q + n·2π/b, ω_p).   (12)

Assuming a band-limited signal, this result shows that the radiance after the pinholes consists of multiple copies of the original radiance, shifted in their frequencies by n·2π/b for all integers n, as shown in Figure 3a. Note that the representation in Figure 3 follows the heterodyning method proposed in [10]. Due to traveling a distance f from the pinholes to the image plane, the radiance is transformed by the transpose of the translation matrix T, according to (10). The resultant radiance R_f reaching the film plane is:

R_f(ω) = Σ_{n=−∞}^{∞} R(ω_q + n·2π/b, f ω_q + ω_p).   (13)
Fig. 3. (a) Bandlimited signal after the array of pinholes. (b) Shear of the signal after travelling a distance f . (c) Reconstructing the original signal before the pinholes by combining samples at different intersections with the ωq axis.
This equation shows that the signal is sheared in the direction of angular frequency. This is represented in Figure 3b. The key observation here, first made in [10], is that a different angular-frequency part of each copy of the spectrum intersects the ω_q axis. Since the film responds only to the DC component (zero angular frequency), it records only the thin "slice" where the spectrum intersects the ω_q axis. The above observation suggests a method of reconstructing the complete spatio-angular representation of the radiance r(x): stack the above slices along the ω_p axis and perform an inverse Fourier transform of the volume image. The original idea that we can pick up such "slices" of a similar spatio-angular representation of a periodic mask in the frequency domain, and use them to recover angular information, was first proposed in the context of a different camera in [10]. We will cover this in more detail in Subsection 3.3.
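A minimal numerical sketch of this slice-and-stack reconstruction is given below. It assumes an idealized 1D sensor signal whose spectrum already contains the shifted copies, cuts bands of width 2π/b around the multiples of the base frequency, stacks them along the angular-frequency axis, and inverts the 2D Fourier transform. The discrete windowing details are our own simplifications, not the procedure used in the paper's experiments.

```python
import numpy as np

def demultiplex_1d(sensor_row, n_copies, period_px):
    """Slice-and-stack demultiplexing of one 1D sensor signal.

    Cuts bands of width 2*pi/b (period_px pixels per period b) around the
    multiples of the base frequency, stacks them along the angular-frequency
    axis, and inverts the 2D Fourier transform.  n_copies is assumed odd
    (copies n = -K, ..., K)."""
    F = np.fft.fftshift(np.fft.fft(sensor_row))
    n = len(sensor_row)
    center = n // 2
    step = n // period_px                # base frequency 2*pi/b in FFT bins
    half = step // 2
    slices = []
    for k in range(-(n_copies // 2), n_copies // 2 + 1):
        c = center + k * step
        slices.append(F[c - half:c + half])
    stacked = np.stack(slices, axis=0)   # axis 0 plays the role of omega_p
    return np.fft.ifft2(np.fft.ifftshift(stacked)).real
```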
3.2 Replacing the Pinhole Array with a (Micro) Lens Array
The idea of replacing the array of pinholes in front of the film with lenses was first proposed by Lippmann back in 1908 [3]. Just as with a single pinhole camera, lenses gather much more light and produce better image quality than small holes. Lippmann called his approach integral photography. Different versions of it have been proposed throughout the years, the most recent one being the plenoptic camera [6,14]. Our analysis of the integral camera in frequency space will be done in two steps. (1) We consider an array of pinholes as in the Ives' camera, only shifted by a constant (for all pinholes) vector a. Each pinhole is covered by a prism with an angle of deviation depending on the shift, defined as p_prism = a/f. (2) We consider the superposition of multiple shifted arrays of such pinhole-prisms, and show that they all contribute to the final image in a similar way. Lippmann's integral photography is based on this coherent action of different arrays. It can be viewed as the limiting case where the plane is made completely of pinhole-prisms and all the light goes through. Each microlens is formed by the corresponding prisms, as a Fresnel lens. Following the above derivation for the Ives camera in equation (12), the radiance after the pinhole-prism array can be expressed as

R'(ω) = ∫ r(q, p + a/f) Σ_m δ(q − mb − a) e^{iω·x} dx
      = (1/b) Σ_n ∫ r(q, p + a/f) e^{i n 2π(q−a)/b} e^{i(ω_q q + ω_p p)} dq dp
      = (1/b) Σ_n e^{−i(ω_p a/f + n 2πa/b)} R(ω_q + n·2π/b, ω_p).   (14)
Note that now there exist additional phase multipliers in each term of the sum. After the pinhole-prism array, the light travels a distance f to the film
plane. Using equations (4) and (10) we obtain the following expression for the radiance at the film (sensor):

R_f(ω) = (1/b) Σ_n e^{−i((f ω_q + ω_p) a/f + n 2πa/b)} R(ω_q + n·2π/b, f ω_q + ω_p).

As explained above, the film (or sensor) only records zero angular frequencies. Therefore, by restricting ω to the ω_q axis, we obtain the following final expression:

R_f(ω_q, 0) = (1/b) Σ_n e^{−i(ω_q a + n 2πa/b)} R(ω_q + n·2π/b, f ω_q).   (15)

The effect of coherence would be easily observed for small a. It takes place due to the term ω_q a + n·2πa/b, where ω_q is within π/b of the corresponding center (peak), which is at frequency n·2π/b in each block. For every exponential term with frequency ω_q there is another term with frequency −n·2π/b − ω_q inside the same block, but on the other side of the center. Those two frequencies produce opposite phases, which results in a real positive term, cos((ω_q + n·2π/b)a). This term is close to 1 for all rays. Based on this analysis, the integral camera will also work with lenses for which a can be as big as b/2, and the area of the plane is completely covered. All the terms are still positive, but the efficiency of rays far from the center is lower, and high frequencies will be attenuated. The above analysis leads to a surprising new result: the frequency method of demultiplexing radiance, described in the case of Ives' pinhole camera, is also applicable to Lippmann's microlens-based integral photography. Similarly, the plenoptic camera, as well as other light-field cameras that can be shown to be equivalent to it, can be analyzed using this new formulation.
3.3 Replacing the Pinhole Array with a Mask
A light-field camera that uses a mask instead of pinholes or microlenses in front of the sensor was first proposed in [10]. One way to analyze this camera would be to start again with the pinhole formula we derived for the Ives' camera, and instead of prisms assume appropriate attenuation at each pinhole. On the other hand, it is also possible to directly derive the result for periodic attenuation functions, like (1/2)(1 + cos(ω₀q)). This attenuating mask modulates the light field to produce two spectral copies, seen mathematically as follows:

R'(ω) = (1/2) R(ω) + (1/2) ∫ r(x) cos(ω₀q) e^{iω·x} dx
      = (1/2) R(ω) + (1/4) ∫ r(x) (e^{iω₀q} + e^{−iω₀q}) e^{iω·x} dx
      = (1/2) R(ω) + (1/4) (R(ω_q + ω₀, ω_p) + R(ω_q − ω₀, ω_p)).   (16)
After the mask, the signal travels a distance f to the sensor. Again using equations (4) and (10) we obtain the final expression for the radiance:

R_f(ω_q, ω_p) = (1/2) R(ω_q, f ω_q + ω_p) + (1/4) (R(ω_q + ω₀, f ω_q + ω_p) + R(ω_q − ω₀, f ω_q + ω_p)).   (17)

Again we observe duplication of our bandlimited signal into multiple blocks, and shearing proportional to the travel distance. Any periodic mask can be analyzed this way, based on a Fourier series expansion and considering individual component frequencies. As first observed in [10], samples of the "mask signal" on the ω_q axis can be used to reconstruct the complete spatio-angular attenuation function of the mask. In our case we use the method to reconstruct the radiance R(ω).
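The modulation step in (16) is easy to verify numerically: multiplying a band-limited signal by (1/2)(1 + cos ω₀q) leaves a central spectral copy with weight 1/2 and creates two copies shifted by ±ω₀ with weight 1/4. The test signal and FFT-bin bookkeeping below are illustrative choices of ours.

```python
import numpy as np

n = 4096
q = np.arange(n)
omega0 = 2 * np.pi * 64 / n                       # mask frequency: 64 cycles per window

r = np.cos(2 * np.pi * 3 * q / n) + 0.5 * np.cos(2 * np.pi * 7 * q / n)   # band-limited signal
masked = 0.5 * (1.0 + np.cos(omega0 * q)) * r                             # mask 1/2 (1 + cos w0 q)

R = np.abs(np.fft.fft(r))
Rm = np.abs(np.fft.fft(masked))

# Central copy keeps weight 1/2; copies shifted by +/- omega0 carry weight 1/4.
assert np.isclose(Rm[3], 0.5 * R[3], rtol=1e-2)         # central copy, bin 3
assert np.isclose(Rm[64 + 3], 0.25 * R[3], rtol=1e-2)   # shifted copy, bin 64 + 3
```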
3.4 Placing the Array in Front of the Camera
Another family of light-field cameras can be described as putting any of the arrays used in the previous camera designs in front of a regular camera, and focusing the camera slightly behind the array. The idea for this design is based on the fact that the image inside any camera is 3-dimensional and is a distorted copy of the outside world. Clearly, the structures we place inside the camera have corresponding structures in the outside world. This is based on the 1-to-1 mapping defined by the main camera lens. The sensor plane corresponds to the plane of focus, and any optical elements in front of it could be replaced by their enlarged copies in the real world, in front of the plane of focus. Because of this correspondence, and based on the lens formula, we can build structures in front of the camera and use them as if they were microstructures inside. In the Results section we will demonstrate how a fine mask in front of the sensor, in an area not accessible due to the cover glass, can be replaced by a mosquito mesh in front of the camera. We will return to this idea in section 4.3.
3.5 Generalized F/Number Matching
The angle of the cone of rays coming to a point on the sensor in a camera is Ω = 1/F, where F is the F/number of the main lens. As shown in [6], for a plenoptic camera the F/numbers of the main lens and the microlenses must match. A generalization of the F/number matching rule to other radiance camera systems can be derived using frequency domain analysis. The final expression for the radiance in all cameras has its second (angular frequency) argument in R equal to f ω_q, where f is the distance from the pinholes, microlenses, or mask to the sensor. This is a measure of the amount of shear, which can be seen as the tilt of the line f ω_q in Figure 3b.
Assume a camera samples the angular frequency N times, i.e., sheared copies of the signal intersect the ω_q axis N times. For example, this could be a mask containing N frequencies at interval ω₀. The frequency spectrum of this signal covers an interval of Nω₀ on the horizontal axis. Because of the tilt, those peaks are spread in the vertical ω_p direction to cover an interval of f Nω₀. Therefore, the following expression holds:

2ω_{p0} = f Nω₀,   (18)

where ω_{p0} is the maximal angular frequency of the original signal. If the maximal number of samples in a light-field camera in the angular direction is N, then the maximal angular frequency would be ω_{p0} = 2πN/Ω = 2πNF. By substituting in (18), we obtain

4πF = f ω₀.

Denote by ω_b = 2π/b the base frequency of the modulating structure with period b. We have seen that our bandlimited signal captured with this structure needs to have maximum spatial frequency ω₀/2 = ω_b. Then ω₀ = 4π/b. If we substitute in the above equation,

4πF = f · 4π/b.

In this way we obtain the final result:

F = f/b.   (19)
Here, b is the pitch of the pinholes/microlenses or the period of the lowest frequency in the mask; f is the distance from the array of pinholes/microlenses or the mask to the sensor. All cameras multiplexing in the frequency domain must satisfy this condition. We refer the reader to a series of movies (available in the electronic version of the paper), showing how mask-based cameras work or fail to work for different F/numbers.
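In code, the matching condition (19) is a one-liner; the numbers below echo the prototype described in Sect. 4.1 (largest mask period b ≈ 0.1 mm and an F/4 main lens). The helper function is ours.

```python
def mask_to_sensor_distance(f_number, b):
    """Distance f from the modulating structure to the sensor implied by
    the matching condition F = f / b (eq. 19)."""
    return f_number * b

# With the largest mask period b = 0.1 mm and an F/4 main lens, the mask
# must sit about 0.4 mm in front of the sensor (cf. Sect. 4.1).
print(mask_to_sensor_distance(4.0, 0.1))   # 0.4 (mm)
```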
4 Results
4.1 Mask-Based Cameras
We have built several working prototypes of mask-based cameras. In order to achieve good resolution, we need a small value for the largest period b, on the order of 0.1 mm. With an F/number of the main lens equal to 4, we need to place the mask about 0.4 mm from the surface of the sensor, which is impossible due to the cover glass. This situation forced us to work with a film-based medium format camera. The reason for medium format is the bigger image, which gives potential
Fig. 4. Typical pictures as obtained from our camera designs and their corresponding Fourier transforms (magnitude shown). From left to right are images from (a) mask-based, (b) mosquito-net-based, and (c) lens-based cameras.
for higher resolution, and easier access to the film back, where we make our modifications. We are using a Contax 645 with a film back. We have experimented with two different masks. First, we take a picture of a poster displaying a computer-generated grid, and then use the negative as a mask in front of the film. The computer-generated grid is a 2D cosine mask with 3 harmonics in both spatial dimensions. The spacing of 0.5 mm is achieved by placing the developed negative between two thin glasses. The film that is being exposed slides directly on the surface of the glass. Another mask we used was a 3M computer screen filter. Measurements under magnification showed that the filter contains about 14 black lines/mm, with the lines being sandwiched between transparent plastic material 0.2 mm thick. Accordingly, the F/number of the mask is about 3. In this section we only show results obtained with this second mask, since the results from the first mask are similar. Figure 5 shows a picture of the Contax camera and the film back with the 3M filter glued to the window just in front of the film. A sequence of parallax movies accompanying the electronic version of this paper, generated from pictures at different apertures, shows that the best F/number is 5.6. This value is slightly higher than the expected 3 or 4. Possible reasons are the refractive index of the plastic material, which increases the optical path, and possible micro-spacing between the film and the 3M filter due to mechanical imperfection/dust. Figure 7 shows two stereo views from the light-field generated from an image taken with the mask at F/5.6. The reader is encouraged to see the electronic version of this paper for the original high resolution images. The supplementary videos present sequences of synthetic view sweeps through a range of angles, or refocusing generated with the method of [6]. These videos create a clear sense of depth in the images.
Fig. 5. Our camera with the 3M filter in the film back
Fig. 6. Our "mosquito net" setup (grating with 1 mm line width and 1 mm spacing; distance to the camera a = 2 m, b = 80 mm, in-focus plane da = 10 cm behind the grating)
Fig. 7. Stereo images from our mask-based camera. Stereopsis can be achieved with crossed eye observation. Close examination of the marked areas will also reveal horizontal parallax between the pictures.
4.2 "Mosquito-Net" Camera
The research reported in this paper was initially motivated by an interest in explaining the unusual behavior exhibited by images taken through a mosquito net. We demonstrate the operation of frequency domain demultiplexing with an external mask (a "mesh" or a "mosquito net"). Since real mosquito nets are irregular, we constructed a regular mask by printing a pattern of 250 vertical black lines on a transparency and mounting this transparency on glass. The printed lines were 1 mm in width, with a spacing of 1 mm, i.e., the period is T = 2 mm. Pictures were taken with a 16 megapixel digital camera with an 80 mm lens. The transparency mask was placed approximately 2 m from the camera, and the camera was focused on a plane about 10 cm behind the transparency (see Figure 6).
Fig. 8. Refocused images from our mosquito-net-based camera
With this setting we overcome the difficult problem of implementing a mask inside the camera, which would have to be precisely positioned under the cover glass of the sensor, 0.2 mm from the silicon. We need to find exactly how a movement in depth in object space corresponds to a movement in image space. By differentiating the lens equation 1/a + 1/b = 1/f we obtain db/da = −b²/a². Thus a displacement of da = 10 cm away from the plane of focus and towards the transparency produces a corresponding displacement of −da·b²/a² = 0.16 mm away from the sensor surface. At the same time, the image of the 2 mm grid of the transparency is reduced linearly to T·b/a = 0.08 mm, which gives us an F/number of about 2 and a high effective resolution of the final image (defined by the small effective mesh period of 0.08 mm). In this way an outside mesh is used as a much finer mesh inside the camera. Figure 4b shows an image taken through the transparency. An example of refocusing using this image is presented in Figure 8. It is achieved by generating different views and mixing them to effectively integrate over the virtual aperture. The two images in Figure 8 are produced with two different registrations of the views, one on the background and the other on the foreground.
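The geometry above can be checked with a few lines; the variable names mirror Fig. 6 (a ≈ 2 m, b = 80 mm, da = 10 cm, grid period T = 2 mm), and the value of b is taken directly from the figure label rather than recomputed from the exact focus distance.

```python
a = 2000.0      # mm, transparency-to-camera distance (Fig. 6)
b = 80.0        # mm, taken from the b = 80 mm label in Fig. 6
da = 100.0      # mm, displacement from the plane of focus towards the transparency
T = 2.0         # mm, period of the printed line pattern

db = da * b**2 / a**2          # |db| from differentiating 1/a + 1/b = 1/f
grid_image = T * b / a         # the grid period demagnified into image space
print(db, grid_image, db / grid_image)   # 0.16 mm, 0.08 mm, F/number about 2
```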
4.3 (Micro) Lens-Based Cameras
An example image obtained through our lens-based camera is shown in Figure 4c. Notice the zoomed-in area, which clearly displays the complex structure of the image, encoding 3D information. The FFT of this image is shown in the same figure. To obtain horizontal parallax, we apply the demultiplexing described in section 4.1 to generate 11 views. Two images resulting from this process are shown in Figure 9. They show clear parallax, which can be observed at close examination, for example in the marked region. Note that the left and right images are switched so that stereo parallax can be noticed with proper cross-eyed observation. Supplemental videos show the effect of smoothly changing views for this image and another "Tall Grass" image. We also provide an example of refocusing based on mixing those 11 views using the method of [6].
Fig. 9. Stereo images from our lens-based camera. Stereopsis can be achieved with crossed eye observation. Close examination of the marked areas will also reveal horizontal parallax between the pictures.
5 Conclusions and Future Work
In this paper, we have derived a new mathematical formalism for analyzing light-field cameras in the frequency domain. The method of multiplexing the 4D radiance onto the 2D sensor is shown to work in the frequency domain for a number of light-field cameras, both lens-based and mask-based. The important conclusion we draw from this finding is that the "heterodyning" method is not restricted to a special "heterodyning" type of camera. It is a new method of demultiplexing captured lightfield data from any lightfield camera, mask-based or microlens-based. Analyzing light-field cameras in the frequency domain, and designing new approaches based on that analysis, is an emerging area of research covering both mask- and lens-based cameras. Mask-based cameras are much cheaper and easier to build. Using microlenses, prisms, and other arrays of optical elements with the frequency multiplexing approach might have unexpected new potential. There is much more to be done in this direction. In relation to this approach, we have proposed and implemented a new "mosquito net" lightfield camera based on masks/nets in front of the main lens. We have also built prototypes of all these cameras in order to demonstrate the validity of this formalism. In the last year, a whole new dimension has been added to integral photography due to unexpected possibilities with frequency domain multiplexing. Compact camera designs, and later post-processing based on computer vision, are opening new possibilities. We hope that our work will inspire others to explore this rich domain further.
References
1. Guillemin, V., Sternberg, S.: Symplectic Techniques in Physics (1985)
2. Ives, F.: Patent US 725,567 (1903)
3. Lippmann, G.: Épreuves réversibles donnant la sensation du relief. J. Phys. 7, 821–825 (1908)
4. Levoy, M., Hanrahan, P.: Light field rendering. ACM Trans. Graph., 31–42 (1996)
5. Gortler, S.J., Grzeszczuk, R., Szeliski, R., Cohen, M.F.: The lumigraph. ACM Trans. Graph., 43–54 (1996)
6. Ng, R., Levoy, M., Brédif, M., Duval, G., Horowitz, M., Hanrahan, P.: Light field photography with a hand-held plenoptic camera. Tech. Rep. (2005)
7. Chai, J., Chan, S., Shum, H., Tong, X.: Plenoptic sampling. ACM Trans. Graph., 307–318 (2000)
8. Durand, F., Holzschuch, N., Soler, C., Chan, E., Sillion, F.: A frequency analysis of light transport. ACM Trans. Graph., 1115–1126 (2005)
9. Ng, R.: Fourier slice photography. ACM Trans. Graph., 735–744 (2005)
10. Veeraraghavan, A., Mohan, A., Agrawal, A., Raskar, R., Tumblin, J.: Dappled photography: Mask enhanced cameras for heterodyned light fields and coded aperture refocusing. ACM Trans. Graph. 26(3), 69 (2007)
11. Gerrard, A., Burch, J.M.: Introduction to Matrix Methods in Optics (1994)
12. Georgiev, T., Zheng, K.C., Curless, B., Salesin, D., Nayar, S., Intwala, C.: Spatio-angular resolution tradeoffs in integral photography. In: Rendering Techniques 2006: 17th Eurographics Workshop on Rendering, pp. 263–272 (June 2006)
13. Oppenheim, A.V., Willsky, A.S.: Signals and Systems. Prentice Hall, Upper Saddle River, New Jersey (1997)
14. Adelson, T., Wang, J.: Single lens stereo with a plenoptic camera. IEEE Transactions on Pattern Analysis and Machine Intelligence, 99–106 (1992)
Segmenting Fiber Bundles in Diffusion Tensor Images
Alvina Goh and René Vidal
Center for Imaging Science, Johns Hopkins University, Baltimore, MD 21218, USA
Abstract. We consider the problem of segmenting fiber bundles in diffusion tensor images. We cast this problem as a manifold clustering problem in which different fiber bundles correspond to different submanifolds of the space of diffusion tensors. We first learn a local representation of the diffusion tensor data using a generalization of the locally linear embedding (LLE) algorithm from Euclidean to diffusion tensor data. Such a generalization exploits geometric properties of the space of symmetric positive semi-definite matrices, particularly its Riemannian metric. Then, under the assumption that different fiber bundles are physically distinct, we show that the null space of a matrix built from the local representation gives the segmentation of the fiber bundles. Our method is computationally simple, can handle large deformations of the principal direction along the fiber tracts, and performs automatic segmentation without requiring previous fiber tracking. Results on synthetic and real diffusion tensor images are also presented.
1 Introduction
Diffusion Tensor Imaging (DTI) is a 3-D imaging technique that measures the restricted diffusion of water in living tissues. Water diffusion is represented mathematically with a symmetric positive semi-definite (SPSD) tensor field D : R^3 → SPSD(3) ⊂ R^{3×3} that measures the diffusion in a direction v ∈ R^3 as vᵀDv. The direction of maximum diffusion is indicative of the orientation of fibers in highly anisotropic tissues. Therefore, DTI can be used to analyze the local orientation and anisotropy of tissue structures, and infer the organization and orientation of tissue components. For example, DTI allows one to distinguish the different anatomical structures of the brain such as the corpus callosum, cingulum, or fornix, noninvasively. In order to make DTI beneficial for both diagnostic as well as clinical applications, it is necessary to develop image analysis methods for registering DT images, extracting and tracking fibers, segmenting bundles of fibers with different orientation, etc. However, as the space of diffusion tensors is not Euclidean, traditional image analysis techniques need to be revised to handle the new mathematical structure of the data.
Related work. It is well-known [1, 2, 3, 4, 5] that the traditional Euclidean distance is not the most appropriate metric for the Riemannian symmetric space SPSD(r), where r is the dimension of the matrices. This has motivated several frameworks for tensor computing that incorporate different Riemannian properties of SPSD matrices [1, 2, 3]. Applications to interpolation and filtering of tensor fields have shown encouraging results. Although there exists extensive literature studying the problem of classifying gray matter, white matter and cerebrospinal fluid from MR images, there is relatively less
work done on the problem of segmenting different white matter structures from DTI. A first family of segmentation methods [6] reduces the tensor data to a scalar anisotropy measure and then applies standard image segmentation methods to the scalar data. However, reducing the tensor field to a scalar measure eliminates the directional information, thereby reducing the discriminative power. For instance, when two fiber bundles are oriented in different directions, but have the same anisotropy, this method will fail. A second family of segmentation methods extracts the fiber tracts from the tensor data and then segments these tracts using a measure of similarity between pairs of curves, such as the Euclidean distance between two fibers, or the ratio of the length of corresponding portions of the fibers to the overall length of the pairs [7]. In [8], fibers are reduced to a feature vector extracted from the statistical moments of the fibers, and segmentation is done by applying normalized cuts to these feature vectors. Unfortunately, there are several problems with this approach. Most notably, accurate extraction of fiber tracts in the presence of noise in DT data remains an obstacle. Most tractography methods start at a user-specified point and follow the direction of the principal eigenvector of the tensor until a termination criterion is reached. Observe that a slight error in the estimation of the principal tensor direction at one voxel will likely result in tracking a different fiber. This error will propagate as tracking continues, so the extracted fiber could be completely wrong. Also, it is known that the estimation of a tensor is poor in areas where two different fiber bundles cross at an angle. Thus, the likelihood of fiber tracking veering off course is high in regions of crossing fibers. Even if the fiber tracts were correctly estimated, comparing 3-D curves in a mathematically rigorous manner remains an open question. In order to overcome the shortcomings resulting from the local decision-making of the tractography methods, stochastic approaches [9, 10] use a measure of connectivity between brain regions. However, these methods do not give an explicit segmentation of the fiber bundles. A third family of segmentation methods attempts to sidestep these issues by segmenting the tensor data directly, without first extracting the fiber tracts. These methods make use of a metric on SPSD(3), such as the Euclidean metric trace(D₁D₂) [11, 12], or the normalized tensor scalar product trace(D₁D₂)/(trace(D₁)trace(D₂)) [13]. These metrics are then combined with classical segmentation methods, such as spectral clustering [11, 14] or level set methods [12, 13]. However, as these methods are designed to segment discrete tensors rather than continuous fiber bundles, they fail to segment fiber bundles correctly whenever the tensors in a bundle present high variability, e.g., in a long curved tract. These issues have motivated the use of more sophisticated metrics such as the log-Euclidean metric [1], the information-theoretic metric [4], or the affine-invariant metric [2, 3, 15]. For example, in [4] the diffusion tensor is interpreted as the covariance matrix of a local Gaussian distribution. The distance measure between two matrices is based on the Kullback-Leibler divergence between the two Gaussian probability density functions induced by the two matrices. As the Kullback-Leibler divergence is not symmetric, the J-divergence, which is the mean of the two divergences, is used. More recently, locally-constrained region-based methods that can handle variability in a fiber bundle have been proposed [16, 17, 18]. In these methods, fiber bundles are segmented by minimizing an energy function in a probabilistic framework. These energy minimization techniques, combined with different metrics, work well in general. However,
Fig. 1. Pictorial view of how our algorithm works. Left: Two fiber bundles surrounded by a region of low diffusion. Right: Low-dimensional representation learned by our algorithm, in which different fiber bundles form different clusters.
they suffer from convergence to a local minimum, and often require a user-specified initial segmentation.
Paper contributions. In this paper, we present an algorithm for segmenting fiber bundles in DT images. Our algorithm is algebraic, thus it requires no initialization; operates directly on tensor data, thus it requires no fiber extraction or tracking; and is designed to deal with long curved fiber bundles. More specifically, we assume that tensors at adjacent voxels within a fiber bundle are similar (similar eigenvectors and eigenvalues), while tensors at distant voxels could be very different, even if they lie in the same bundle. Under this assumption, we show that one can map each diffusion tensor to a point in a low-dimensional linear space in such a way that tensors tracing out a fiber bundle are mapped to nearby points, while different fiber bundles are mapped to distinct clusters, as shown in Fig. 1. This is achieved by using a new manifold clustering technique called Locally Linear Diffusion Tensor Clustering (LLDTC), which is a natural generalization of locally linear embedding (LLE) [19] from Euclidean to diffusion tensor data. The generalization is based on the Riemannian framework with the affine-invariant and log-Euclidean metrics [1, 3]. In particular, we will adopt the generic framework first proposed in [20] and extended to clustering of probability density functions in [21]. The combination of techniques from Riemannian geometry and manifold learning has already been used to perform statistical analysis and segmentation of diffusion MRI data [22] by considering the tensors as points lying on a single manifold. However, as noted in [23], diffusion MRI data belongs to a union of manifolds with different dimensions and densities. [23] shows that it is possible to characterize neuro-anatomical areas by considering the data as point clouds, and clustering these points into different groups by estimating the dimension and density around each data point based on its k nearest neighbors. Our algorithm, while modeling the data as a union of manifolds, does not require different fiber bundles to have different dimensions to achieve clustering.
Paper outline. §2 reviews the classical LLE algorithm for a nonlinear manifold using the Euclidean metric. §3 presents the extension of LLE to the space of diffusion tensors
using the Riemannian framework. §4 shows how to segment DTI fiber bundles that correspond to different submanifolds using the LLDTC framework. §5 presents experimental results on synthetic and real data, and §6 gives the conclusions.
2 Review of Locally Linear Embedding in Euclidean Spaces
Let X = {x_i ∈ R^D}_{i=1}^n be a set of n data points sampled from a low-dimensional manifold embedded in R^D. The goal of nonlinear dimensionality reduction (NLDR) is to find a set of n vectors {y_i ∈ R^d}_{i=1}^n, where d ≪ D, such that nearby points remain close and distant points remain far. Existing NLDR techniques can be divided into two main groups. Global NLDR techniques, such as Isomap [26], try to preserve global properties of the data. Local NLDR techniques, such as LLE [19], Laplacian eigenmaps [24] and Hessian LLE [25], try to preserve local properties obtained from small neighborhoods around the data points. In particular, LLE exploits the fact that the local neighborhood of a point on the manifold can be well approximated by the affine subspace spanned by the k nearest neighbors of the point. The key idea of LLE is to find a low-dimensional embedding of the data that preserves the coefficients of such affine approximations. More specifically, the LLE algorithm can be summarized as follows.
1. Nearest neighbor search: For each data point x_i ∈ X, find its k nearest neighbors (kNN) {x_{i_j}}_{j=1}^k according to the Euclidean distance.
2. Least squares fit: Find a matrix of weights W ∈ R^{n×n} whose entries W_ij minimize the reconstruction error

ε(W) = Σ_{i=1}^n ‖x_i − Σ_{j=1}^n W_ij x_j‖² = Σ_{i=1}^n ‖Σ_{j=1}^n W_ij (x_i − x_j)‖² = Σ_{i=1}^n dist²(x_i, x̂_i)   (1)

subject to the constraints (i) W_ij = 0 if x_j is not a k-nearest neighbor of x_i and (ii) Σ_{j=1}^n W_ij = 1. In (1), x̂_i = x_i + Σ_{j=1}^n W_ij (x_j − x_i) is the linear interpolation of x_i from its kNN. The solution to this problem can be computed as

[W_{i i_1}  W_{i i_2}  · · ·  W_{i i_k}] = (1ᵀ C_i^{-1}) / (1ᵀ C_i^{-1} 1) ∈ R^{1×k},   (2)

where 1 is the vector of all ones, and C_i ∈ R^{k×k} is the local Gram matrix at x_i, i.e., C_i(j, l) = (x_j − x_i) · (x_l − x_i). The matrix of weights W is invariant to rotations, scalings and translations of each data point and its neighbors.
3. Sparse eigenvalue problem: Find vectors {y_i ∈ R^d}_{i=1}^n that minimize the error

φ(Y) = Σ_{i=1}^n ‖y_i − Σ_{j=1}^n W_ij y_j‖² = trace(Yᵀ M Y),   (3)

where Y = [y_1, . . . , y_n]ᵀ ∈ R^{n×d}, subject to the constraints (i) Σ_{i=1}^n y_i = 0 (centered at the origin) and (ii) (1/n) Σ_{i=1}^n y_i y_iᵀ = I (unit covariance). The optimal solution is the matrix Y whose columns are the d eigenvectors of the matrix M = (I − W)ᵀ(I − W) associated with its second to (d + 1)-st smallest eigenvalues. The first eigenvector of M is discarded, because it is the vector of all ones, 1 ∈ R^n, with 0 as its eigenvalue. This is because Σ_{j=1}^n W_ij = 1, hence W1 = 1, and M1 = 0.
In principle, the LLE algorithm is designed for data lying in a single connected submanifold of Euclidean space. In the next sections we will show how LLE can be extended to data lying in multiple submanifolds of SPSD(3).
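For reference, the three steps above fit in a short NumPy sketch of the Euclidean case. The small regularization added to the local Gram matrix is a standard implementation detail not discussed in the text, and the code is an illustration of ours rather than a reference implementation.

```python
import numpy as np

def lle(X, k, d, reg=1e-3):
    """Euclidean LLE. X is an n x D data matrix; returns the n x d embedding."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        dists = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dists)[1:k + 1]              # k nearest neighbors, excluding i
        Z = X[nbrs] - X[i]                             # neighbors centered at x_i
        C = Z @ Z.T                                    # local Gram matrix C_i, eq. (2)
        C = C + reg * np.trace(C) * np.eye(k)          # regularization for stability
        w = np.linalg.solve(C, np.ones(k))
        W[i, nbrs] = w / w.sum()                       # rows sum to one
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, evecs = np.linalg.eigh(M)                       # ascending eigenvalues
    return evecs[:, 1:d + 1]                           # drop the constant eigenvector
```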
3 Locally Linear Embedding in the Space of SPSD Matrices
The LLE framework presented in §2 is applicable in the presence of one connected manifold with unknown structure. Therefore, every operation has to be approximated by the corresponding Euclidean operation, e.g., finding the kNN using the Euclidean distance, minimizing the Euclidean reconstruction error, and doing Euclidean interpolation of a point and its neighbors. For diffusion tensors, however, the manifold structure is known. Moreover, previous work has shown that the Euclidean distance is not the most appropriate metric for SPSD matrices [1, 2, 3, 4, 5]. For example, in [3] it is clearly illustrated that Euclidean averaging of tensors leads to a tensor swelling effect in which the resulting determinant of the mean is larger than the original determinants. In this section, we will show how to extend LLE to diffusion tensors using the affine-invariant and log-Euclidean metrics [1, 3] instead of the Euclidean metric. Under these metrics, closed-form formulae for Riemannian operations, such as the geodesic distance, geodesic interpolation, etc., are available. We will make use of the generic framework proposed in [20] for such closed-form Riemannian structures. Since the information about the local geometry of the manifold is essential only in the first two steps of the LLE algorithm, modifications are made only to these two stages, i.e., how to select the kNN and how to compute the matrix W representing the local geometry of fiber bundles using the new metrics. Given W, the calculation of the low-dimensional representation remains the same as in the Euclidean case.
Selection of Riemannian kNN. The first step of the LLE algorithm is the computation of the kNN associated with each data point. To that end, consider any two tensors D(x₁) and D(x₂) at coordinates x₁ and x₂. Notice that there are two ways of measuring similarity: the Riemannian metric between the tensors, μ(D(x₁), D(x₂)), and the Euclidean distance between the coordinates, ‖x₁ − x₂‖. A weighting factor has been used in [11] to control the trade-off between these two distances. However, since our objective here is to cluster fiber bundles, we must choose a distance accordingly. Clearly μ alone does not suffice, because two tensors in a bundle may be very different from each other. Since nearby tensors within a bundle are similar, we select the kNN of D(x) as follows.
Definition 1. The kNN of a tensor D(x) at x are the k tensors D(x₁), . . . , D(x_k) that minimize μ(D(x), D(x_i)), subject to ‖x − x_i‖ ≤ R, for a given radius R > 0.
Notice that our definition is essentially a combination of the kNN and ε-neighborhood rules used to build a graph from similarity matrices in spectral clustering [27].
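Definition 1 translates directly into code: restrict the candidates to a spatial ball of radius R around x, then rank them by the tensor metric μ. The sketch below uses the log-Euclidean distance (introduced in the next subsection) for concreteness and assumes strictly positive-definite tensors; the function names are ours.

```python
import numpy as np

def spd_log(D):
    """Matrix logarithm of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(D)
    return (V * np.log(w)) @ V.T

def riemannian_knn(tensors, coords, i, k, R):
    """k nearest neighbors of tensor i in the sense of Definition 1: candidates
    within spatial radius R of voxel i, ranked by the tensor metric mu
    (log-Euclidean here, for simplicity)."""
    log_i = spd_log(tensors[i])
    spatial = np.linalg.norm(coords - coords[i], axis=1)
    candidates = [j for j in range(len(tensors)) if j != i and spatial[j] <= R]
    mu = [np.linalg.norm(spd_log(tensors[j]) - log_i, 'fro') for j in candidates]
    return [candidates[j] for j in np.argsort(mu)[:k]]
```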
Riemannian Calculation of W. The second step of LLE is to compute the matrix of weights W ∈ R^{n×n}. For this purpose, we need to define a reconstruction error similar to (1), and an interpolation method that allows us to express a tensor D_i ≐ D(x_i) as an "affine combination" of its kNN {D_j ≐ D(x_j)}. Both the reconstruction error and the interpolation method depend on the Riemannian metric chosen. We will illustrate our algorithm using the affine-invariant [3] and log-Euclidean metrics [1].
Affine-invariant metric. From [3], we know that the affine-invariant metric is given by

μ_AI(D_i, D_j) = ‖log(D_i^{-1/2} D_j D_i^{-1/2})‖_F = sqrt( trace( log(D_i^{-1/2} D_j D_i^{-1/2})² ) ),   (4)

where ‖·‖_F is the Frobenius norm and log(·) is the matrix logarithm. We also know that the geodesic linear interpolation of D_i by tensors {D_j}_{j=1}^n with weights {W_ij}_{j=1}^n is

D̂_{AI,i} = D_i^{1/2} exp( Σ_{j=1}^n W_ij log(D_i^{-1/2} D_j D_i^{-1/2}) ) D_i^{1/2},   (5)

where exp(·) is the matrix exponential. Therefore, instead of minimizing the Euclidean reconstruction error (1), we minimize the affine-invariant reconstruction error

ε_AI(W) = Σ_{i=1}^n μ²_AI(D_i, D̂_{AI,i}) = Σ_{i=1}^n ‖ Σ_{j=1}^n W_ij log(D_i^{-1/2} D_j D_i^{-1/2}) ‖²_F,   (6)

subject to W_ij = 0 if D_j is not a kNN of D_i and Σ_j W_ij = 1. Therefore, the optimal weights are obtained as in (2), with the local Gram matrix C_i ∈ R^{k×k} defined as

C_i(j, l) = trace( log(D_i^{-1/2} D_j D_i^{-1/2}) log(D_i^{-1/2} D_l D_i^{-1/2}) ).   (7)
Log-Euclidean metric. From [1], we know that the log-Euclidean metric is given by

μ_LE(D_i, D_j) = ‖log D_i − log D_j‖_F,   (8)

and the geodesic linear interpolation of D_i by tensors {D_j}_{j=1}^n with weights {W_ij}_{j=1}^n is

D̂_{LE,i} = exp( Σ_{j=1}^n W_ij log D_j ).   (9)

Thus, W is obtained by minimizing the log-Euclidean reconstruction error

ε_LE(W) = Σ_i μ²_LE(D_i, D̂_{LE,i}) = Σ_{i=1}^n ‖ Σ_{j=1}^n W_ij (log D_i − log D_j) ‖²_F,   (10)

subject to W_ij = 0 if D_j is not a kNN of D_i and Σ_j W_ij = 1. The optimal weights are obtained as in (2), with the local Gram matrix C_i ∈ R^{k×k} defined as

C_i(j, l) = trace( (log D_i − log D_j)(log D_i − log D_l) ).   (11)
Thanks to (7) and (11), we can calculate W exactly, and the matrix M is computed as before, i.e., M = (I − W)ᵀ(I − W).
Calculation of the Embedding Coordinates. The last step of LLE is to find a Euclidean low-dimensional representation. Given W, this step is independent of the Riemannian structure. Hence, one can find the embedding coordinates as described in §2. That is, the embedding coordinates are the d eigenvectors of the matrix M associated with its second to (d + 1)-st smallest eigenvalues.
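The following sketch shows how the local Gram matrices (7) and (11) and the weights (2) might be computed using eigendecomposition-based matrix logarithms and inverse square roots. It is a minimal illustration under the assumption of strictly positive-definite tensors, not the authors' implementation; the regularization in the linear solve is our own addition.

```python
import numpy as np

def spd_log(D):
    w, V = np.linalg.eigh(D)
    return (V * np.log(w)) @ V.T

def spd_inv_sqrt(D):
    w, V = np.linalg.eigh(D)
    return (V * (1.0 / np.sqrt(w))) @ V.T

def gram_log_euclidean(Di, nbrs):
    """C_i(j, l) = trace((log Di - log Dj)(log Di - log Dl)), eq. (11)."""
    L = [spd_log(Di) - spd_log(Dj) for Dj in nbrs]
    k = len(nbrs)
    return np.array([[np.trace(L[j] @ L[l]) for l in range(k)] for j in range(k)])

def gram_affine_invariant(Di, nbrs):
    """C_i(j, l) built from log(Di^{-1/2} Dj Di^{-1/2}), eq. (7)."""
    S = spd_inv_sqrt(Di)
    L = [spd_log(S @ Dj @ S) for Dj in nbrs]
    k = len(nbrs)
    return np.array([[np.trace(L[j] @ L[l]) for l in range(k)] for j in range(k)])

def local_weights(C, reg=1e-8):
    """Optimal weights from eq. (2): proportional to C^{-1} 1, normalized to sum to one."""
    k = C.shape[0]
    w = np.linalg.solve(C + reg * np.trace(C) * np.eye(k), np.ones(k))
    return w / w.sum()
```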
4 Locally Linear Diffusion Tensor Clustering (LLDTC)
In this section, we present our algorithm for segmenting fiber bundles in the brain that are separated, e.g., segmenting the cingulum and the corpus callosum into different groups. As each fiber bundle defines a different submanifold, the segmentation problem is equivalent to the problem of clustering m submanifolds in the Riemannian space SPSD(3). In particular, we will make use of the Riemannian manifold clustering algorithm in [20]. The LLE algorithm provides a low-dimensional representation of a set of n points under the assumption that the n points are k-connected, i.e., for any two points z₁, z₂ ∈ X there exists an ordered sequence of points in X having z₁ and z₂ as endpoints, such that any two consecutive points in the sequence have at least one k-nearest neighbor in common. We extend the results of §2 and §3 in order to cluster data lying in a union of m k-connected submanifolds. The important assumption we make is that no kNN of a data point in one submanifold lies in a different submanifold. At first, this may seem a very strong assumption. However, Def. 1 ensures that this assumption is approximately true. For instance, consider two spatially close fiber bundles such as the corpus callosum and the cingulum. We know that the corpus callosum is mostly oriented in a left-right direction whereas the cingulum is oriented in the anterior-posterior direction. Even though these two bundles are close to each other spatially, the distance between tensors on different bundles in terms of the Riemannian SPSD metric μ is significantly large. Therefore, by Def. 1, tensors on different bundles are not connected. Consider now two tensors D₁ and D₂ on the same bundle, but spatially separated and having very different orientations. It follows from Def. 1 that these two tensors are not connected. However, as the fiber connecting the two tensors is smooth, there is a sequence of tensors connecting D₁ and D₂. In short, by making use of the locality property in both the coordinate and tensor space to separate two fiber bundles, the aforementioned assumption is fulfilled.
Proposition 1 states the main result of [20] adapted to our scenario. This proposition shows that in the case of a disconnected union of m k-connected submanifolds, the matrix M has at least m zero eigenvalues, whose eigenvectors give the clustering of the data. This is a general result that is applicable to both Euclidean and Riemannian LLE. The interested reader is referred to [20] for the proof of Proposition 1.
Proposition 1. Let {D_i}_{i=1}^n be a set of tensors drawn from a disconnected union of m k-connected d-dimensional submanifolds of SPSD(3). Then, there exist m vectors
\{v_j\}_{j=1}^{m} in the null space of M such that v_j corresponds to the j-th group of points, i.e., v_{ij} = 1 if the i-th data point is in the j-th group, and v_{ij} = 0 otherwise.

With real data, we still have distinct clusters, but the between-cluster weights are not exactly 0. Therefore, the matrix M is a perturbed version of the ideal case. Nevertheless, it is well known from perturbation theory [28] that if the perturbation is small or the eigengap is big, the eigenvector v_j is equal to the ideal indicator vector (0, ..., 1, ..., 0) of the j-th cluster up to a small error term. Hence, it is reasonable to expect that, instead of mapping the data points on m submanifolds to m points, Riemannian LLE will generate a collection of n points distributed around m cluster centers. Therefore, the k-means algorithm will still be able to separate the groups from each other. Notice that when computing a basis for ker(M), we do not necessarily obtain the set of membership vectors, but rather linear combinations of them, including the vector 1. In general, linear combinations of segmentation eigenvectors still contain the segmentation of the data. Hence, we can cluster the data into m groups by applying k-means to the columns of a matrix whose rows are the m eigenvectors in the null space of M. From §3 and Proposition 1, we have the following linear algebraic tensor clustering algorithm.

Locally Linear Diffusion Tensor Clustering Algorithm (LLDTC)

1. Nearest neighbors search: For each tensor D_i \in SPSD(3) at coordinate x_i \in R^3, find the k tensors \{D_{i_j}\} at coordinates x_{i_j} located within a fixed spatial radius R from x_i, i.e., \|x_i - x_{i_j}\| \le R, that have the smallest tensor distance \mu, where \mu is

\mu(D_i, D_j) =
  \|D_i - D_j\|_F,   (Euclidean)
  \|\log D_i - \log D_j\|_F,   (Log-Euclidean)
  \|\log(D_i^{-1/2} D_j D_i^{-1/2})\|_F.   (Affine-invariant)

2. Least squares fit: Compute the k nonzero entries of the i-th row of the weight matrix as

[W_{i i_1} \cdots W_{i i_k}] = \frac{\mathbf{1}^\top C_i^{-1}}{\mathbf{1}^\top C_i^{-1} \mathbf{1}} \in R^{1 \times k},

where C_i is the local Gram matrix for D_i:

C_i(j, l) =
  \mathrm{trace}((D_i - D_j)(D_i - D_l)),   (Euclidean)
  \mathrm{trace}((\log D_i - \log D_j)(\log D_i - \log D_l)),   (Log-Euclidean)
  \mathrm{trace}(\log(D_i^{-1/2} D_j D_i^{-1/2}) \log(D_i^{-1/2} D_l D_i^{-1/2})).   (Affine-invariant)

3. Clustering: Compute the m eigenvectors \{v_j\}_{j=1}^{m} of M = (I - W)^\top (I - W) associated with its m smallest eigenvalues and apply k-means to the rows of [v_1 \cdots v_m] to cluster the tensors into m different groups.
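A hedged Python sketch of the three tensor distances in step 1 and of the clustering in step 3 is given below. It assumes tensors are supplied as 3x3 symmetric positive-definite NumPy arrays and that the weight matrix W has already been assembled from steps 1-2 (e.g., with the earlier weight sketch); k-means from scikit-learn is used as one possible implementation of the final clustering, and all names are our own.

import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def _logm_spd(D):
    w, V = np.linalg.eigh(D)
    return V @ np.diag(np.log(w)) @ V.T

def tensor_dist(Di, Dj, metric="affine"):
    if metric == "euclidean":
        return np.linalg.norm(Di - Dj, "fro")
    if metric == "log-euclidean":
        return np.linalg.norm(_logm_spd(Di) - _logm_spd(Dj), "fro")
    # affine-invariant: || log(Di^{-1/2} Dj Di^{-1/2}) ||_F
    w, V = np.linalg.eigh(Di)
    Di_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    return np.linalg.norm(_logm_spd(Di_inv_sqrt @ Dj @ Di_inv_sqrt), "fro")

def lldtc_cluster(W, m):
    # Step 3: eigenvectors of M = (I - W)^T (I - W) for the m smallest eigenvalues, then k-means.
    n = W.shape[0]
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, vecs = eigh(M)
    V = vecs[:, :m]                       # each row is the embedding of one tensor
    return KMeans(n_clusters=m, n_init=10).fit_predict(V)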
5 Experiments

Synthetic data. We first test our algorithm on synthetic data in order to validate the segmentation performance of the different metrics. For this purpose, we generate a 3D synthetic tensor field containing two distinct fiber bundles (straight and curved), obtained by taking the tensors to be oriented according to the tangential direction of two
Fig. 2. Segmentation of a synthetic dataset. Fig. 2(a) shows the visualization of the data with each tensor at each voxel represented by an ellipsoid. Figs. 2(b)–2(d) show LLDTC clustering results using the Euclidean, log-Euclidean and affine-invariant metrics. Fig. 2(e) shows the fiber tracking results from MedINRIA. Fig. 2(f) shows two ROI masks marked manually at the beginning of the two bundles, and the fiber tracts extracted by TD. Fig. 2(g) shows two ROI masks extracted automatically from the segmentation given by LLDTC, and the fiber tracts extracted by TD.
curves. The background contains tensors without any orientation (isotropic). The eigenvalues of the tensors are independently corrupted by Gaussian noise. Fig. 2(a) shows the dataset with each tensor represented by an ellipsoid whose major axis indicates the dominant diffusion direction. Figs. 2(b)-2(d) show the clustering results using the Euclidean, log-Euclidean, and affine-invariant metrics, respectively. Observe that the Riemannian metrics give the correct segmentation, while the Euclidean metric fails. In order to extract and cluster all the fibers in each one of the two bundles, we can manually select a region of interest (ROI), and then track fibers that pass through voxels in that ROI. Alternatively, the ROI can be defined automatically from the segmentation given by LLDTC. To evaluate which method is able to find most of the fibers in a bundle, we first extracted all the fibers in the two bundles using the freely available software MedINRIA [29]. The fiber tracking algorithm used here is tensor deflection (TD) [30]. Fig. 2(e) shows the extracted fibers. Notice that TD gives good results when the bundles are straight, but fails in regions of high curvature. We then manually marked two ROI masks located at the beginning of each fiber bundle. These masks are shown in red and pink in Fig. 2(f). We also used the segmentation generated by LLDTC with the affine-invariant metric to define two other ROI masks, shown in red and pink in Fig. 2(g). Since we have ground truth for the two synthetic fiber bundles, we can compare the extracted volume with the true volume of the fiber bundle. For the straight fiber bundle, the manual method gave 97.8% of the bundle, whereas LLDTC achieved 98.6%. For the curved fiber bundle, the manual method gave 90.8% of the bundle, while LLDTC achieved 98.6%. Hence, by using LLDTC to automatically generate a ROI, we can obtain a good estimate of the fiber bundles, even when tracking is not completely accurate.
Real data. We also test the LLDTC algorithm on the segmentation of the corpus callosum and the cingulum from real DTI data using the affine-invariant metric. The corpus callosum is the major communication conduit linking the two hemispheres of the human brain. The two cerebral hemispheres are responsible for distinct and dissimilar cognitive processes, as well as control of contralateral motion and proprioception. Consisting of over 200 million individual nerve fibers, it provides not only a physical, but also a functional connection essential for the coordination of our motor, language, and cognitive abilities. The cingulum bundle, measuring 5-7 mm in diameter, runs dorsal to the corpus callosum, and is the most prominent fiber bundle of the limbic lobe. Many studies have suggested that some functions of the cingulate gyrus depend on the integrity of its connections with other parts of the neuronal network. Therefore, the cingulum bundle, which connects the cingulate cortex with other regions, is likely to be important for maintaining cognitive processing. The corpus callosum and cingulum bundles have been studied extensively using DTI in many clinical populations, including Alzheimer's disease [31], schizophrenia [32] and
Fig. 3. Segmenting the corpus callosum and the cingulum in the left hemisphere (LH) using the affine-invariant metric. The first column shows the visualization of the data in five sagittal slices and the tensor at each voxel is represented by an ellipsoid. The second column shows an eigenvector of the LLE matrix M for each of the sagittal slices. The third column shows the clustering result given by LLDTC. The corpus callosum is segmented into the red cluster and the cingulum into the light blue cluster.
autism [33]. For example, patients who have Alzheimer's disease have reduced fractional anisotropy in the cingulum bundle compared to normally aging patients, suggesting that lower anisotropy is associated with cognitive dysfunction and atrophy of the limbic system [31]. The size of the entire DTI volume of the brain is 128 × 128 × 58 voxels and the voxel size is 2 × 2 × 2 mm. From the visualization of the tensor data, we know the approximate location of each cingulum bundle in the left and right hemispheres. Hence, we reduce the input volume to the algorithm by focusing on this location. In addition, we also mask out voxels with fractional anisotropy below a threshold of 0.2 in order to separate white matter from the rest of the brain. We set the value of the spatial radius R to be 5 and the number of nearest neighbors to be 25. Figs. 3 and 4 show the results for the left and right hemispheres, respectively. Figs. 3(a)–3(e) and 4(a)–4(e) show the sagittal slices used and the ellipsoid visualization of the tensors. The corpus callosum is the bundle with the red tensors pointing out of the plane and resembles the letter 'C'. The cingulum, which is significantly smaller,
Fig. 4. Segmenting the corpus callosum and the cingulum in the right hemisphere (RH) using the affine-invariant metric. The first column shows the visualization of the data in five sagittal slices and the tensor at each voxel is represented by an ellipsoid. The second column shows an eigenvector of the LLE matrix M for each of the sagittal slices. The third column shows the clustering result given by LLDTC. The corpus callosum is segmented into the red cluster and the cingulum into the light blue cluster.
is the bundle to the left of the corpus callosum with the green tensors oriented vertically. Figs. 3(f)–3(j) and 4(f)–4(j) show an eigenvector of the matrix M for each of the sagittal slices. We see that the corpus callosum and the cingulum are clustered around different centers. Figs. 3(k)–3(o) and 4(k)–4(o) show the results of LLDTC. In both cases, the corpus callosum forms a distinct cluster (in red). The cingulum is better segmented in the left hemisphere, where it forms the light blue cluster. In the right hemisphere, however, the segmentation of the cingulum is not as distinct. In addition, as our algorithm does not incorporate any smoothness constraint, our segmentation is noisier than that of energy minimization methods such as [16, 17, 18]. However, for the segmentation of the cingulum bundle in [16, 17, 18], a significant effort was required to manually remove voxels in the corpus callosum before running their respective algorithms. Our algorithm, on the other hand, is automatic. Hence, an immediate use for our method is that the output could be used as an automatic initialization for such algorithms.
6 Conclusion

We have presented an algorithm for the automatic segmentation of fiber bundles in DT images. Our method requires no initialization or fiber tracking. Instead, it makes the reasonable assumption that tensors at adjacent voxels within a fiber bundle have similar eigenvectors and eigenvalues. Results on synthetic and real data were encouraging. However, an open problem is to incorporate spatial coherence into the algorithm.

Acknowledgments. This work has been supported by startup funds from JHU, by grants NSF CAREER IIS-0447739, NSF EHS-0509101, NIH RO1 HL082729, and ONR N00014-05-10836, and by contract JHU APL-934652.
References 1. Arsigny, V., et al.: Log-Euclidean metrics for fast and simple calculus on diffusion tensors. Magnetic Resonance in Medicine 56, 411–421 (2006) 2. Fletcher, P.T., Joshi, S.: Riemannian geometry for the statistical analysis of diffusion tensor data. Signal Processing 87 (2007) 3. Pennec, X., Fillard, P., Ayache, N.: A Riemannian framework for tensor computing. International Journal of Computer Vision 66, 41–46 (2006) 4. Wang, Z., Vemuri, B.C.: DTI segmentation using an information theoretic tensor dissimilarity measure. IEEE Trans. on Med. Imag. 24, 1267–1277 (2005) 5. Kindlmann, G., et al.: Geodesic-loxodromes for diffusion tensor interpolation and difference measurement. In: Ayache, N., Ourselin, S., Maeder, A. (eds.) MICCAI 2007, Part I. LNCS, vol. 4791, pp. 1–9. Springer, Heidelberg (2007) 6. Zhukov, L., et al.: Level set segmentation and modeling of DT-MRI human brain data. Journal of Electronic Imaging, 125–133 (2003) 7. Ding, Z., et al.: Classification and quantification of neuronal fiber pathways using diffusion tensor MRI. Magnetic Resonance in Medicine 49, 716–721 (2003) 8. Brun, A., et al.: Clustering fiber tracts using normalized cuts. In: Barillot, C., Haynor, D.R., Hellier, P. (eds.) MICCAI 2004. LNCS, vol. 3216, pp. 368–375. Springer, Heidelberg (2004)
9. Friman, O., Farneback, G., Westin, C.F.: A Bayesian approach for stochastic white matter tractography. IEEE Trans. on Med. Imag. 25, 965–978 (2006) 10. Perrin, M., et al.: Fiber tracking in q-ball fields using regularized particle trajectories. In: Information Processing in Medical Imaging (2005) 11. Wiegell, M., et al.: Automatic segmentation of thalamic nuclei from diffusion tensor magnetic resonance imaging. NeuroImage, 391–401 (2003) 12. Wang, Z., Vemuri, B.: Tensor field segmentation using region based active contour model. In: European Conference on Computer Vision, pp. 304–315 (2004) 13. Jonasson, L., et al.: A level set method for segmentation of the thalamus and its nuclei in DT-MRI. Signal Processing 87, 309–321 (2007) 14. Ziyan, U., Tuch, D., Westin, C.F.: Segmentation of thalamic nuclei from DTI using spectral clustering. In: Larsen, R., Nielsen, M., Sporring, J. (eds.) MICCAI 2006. LNCS, vol. 4191, pp. 807–814. Springer, Heidelberg (2006) 15. Lenglet, C., et al.: A Riemannian approach to diffusion tensor images segmentation. In: Information Processing in Medical Imaging (2005) 16. Melonakos, J., et al.: Locally-constrained region-based methods for DW-MRI segmentation. In: MMBIA (2007) 17. Melonakos, J., et al.: Finsler tractography for white matter connectivity analysis of the cingulum bundle. In: Ayache, N., Ourselin, S., Maeder, A. (eds.) MICCAI 2007, Part I. LNCS, vol. 4791, pp. 36–43. Springer, Heidelberg (2007) 18. Awate, S., et al.: A fuzzy, nonparametric segmentation framework for DTI and MRI analysis. IEEE Trans. on Med. Imag. 26, 1525–1536 (2007) 19. Roweis, S., Saul, L.: Think globally, fit locally: Unsupervised learning of low dimensional manifolds. Journal of Machine Learning Research 4, 119–155 (2003) 20. Goh, A., Vidal, R.: Clustering and dimensionality reduction on Riemannian manifolds. In: IEEE CVPR (2008) 21. Goh, A., Vidal, R.: Unsupervised Riemannian clustering of probability density functions. In: ECML PKDD (2008) 22. Wassermann, D., et al.: Diffusion maps clustering for magnetic resonance Q-Ball imaging segmentation. International Journal in Biomedical Imaging (2008) 23. Haro, G., Lenglet, C., Sapiro, G., Thompson, P.M.: On the non-uniform complexity of brain connectivity. In: ISBI, pp. 887–890 (2008) 24. Belkin, M., Niyogi, P.: Laplacian eigenmaps and spectral techniques for embedding and clustering. Neural Information Processing Systems, 585–591 (2002) 25. Donoho, D., Grimes, C.: Hessian eigenmaps: Locally linear embedding techniques for highdimensional data. PNAS 100, 5591–5596 (2003) 26. Tenenbaum, J.B., de Silva, V., Langford, J.C.: A global geometric framework for nonlinear dimensionality reduction. Science 290, 2319–2323 (2000) 27. von Luxburg, U.: A tutorial on spectral clustering. Stat. and Computing 17 (2007) 28. Horn, R., Johnson, C.: Matrix Analysis. Cambridge University Press, Cambridge (1985) 29. Toussaint, N., et al.: MedINRIA: Medical image navigation and research tool by INRIA. In: Proc. of MICCAI 2007 Workshop on Interaction in med. image analysis and vis. (2007), http://www-sop.inria.fr/asclepios/software/MedINRIA 30. Weinstein, D., et al.: Tensorlines: Advection-diffusion based propagation through diffusion tensor fields. In: IEEE Visualization, San Francisco, pp. 249–254 (1999) 31. Xie, S., et al.: Evaluation of bilateral cingulum with tractography in patients with Alzheimer’s disease. Neuroreport 16, 1275–1278 (2005) 32. Foong, J., et al.: Investigating regional white matter in schizophrenia using diffusion tensor imaging. 
Neuroreport 13, 333–336 (2002) 33. Alexander, A., et al.: Diffusion tensor imaging of the corpus callosum in autism. Neuroimage 34, 61–73 (2007)
View Point Tracking of Rigid Objects Based on Shape Sub-manifolds

Christian Gosch¹, Ketut Fundana², Anders Heyden², and Christoph Schnörr¹

¹ Image and Pattern Analysis Group, HCI, University of Heidelberg, Germany
² Applied Mathematics Group, School of Technology, Malmö University, Sweden
Abstract. We study the task of inferring and tracking the viewpoint onto a 3D rigid object by observing its image contours in a sequence of images. To this end, we consider the manifold of invariant planar contours and learn the low-dimensional submanifold corresponding to the object contours by observing the object off-line from a number of different viewpoints. This submanifold of object contours can be parametrized by the view sphere and, in turn, be used for keeping track of the object orientation relative to the observer, by interpolating samples on the submanifold in a geometrically proper way. Our approach replaces explicit 3D object models by the corresponding invariant shape submanifolds that are learnt from a sufficiently large number of image contours, and is applicable to arbitrary objects.
1 Introduction
Motivation and Contribution. The representation of planar shapes has been a focus of research during the last few years [1, 3, 4, 5]. By mathematically separating similarity transforms and potentially also reparametrisations from other deformations of planar curves, an invariant representation of planar shapes is obtained in terms of a smooth manifold embedded in a euclidean space. Furthermore, distances between shapes can be computed that are only sensitive to shape deformations, by determining the geodesic path between the corresponding points of the shape manifold (Fig. 3 below provides an illustration). In this paper, we adopt this representation and show that it is accurate enough to infer the change in aspect of a given rigid 3D object, represented by a point on the view sphere, just by observing 2D shapes of its silhouette in a given image sequence – see the left panel of Fig. 1 below. To this end, we assume that we are given a collection of silhouettes of a known object, which we represent one-to-one by a corresponding set of points on the view sphere. These data can be acquired off-line by observing the object from different directions. We regard these shapes as samples of an object-specific submanifold of the manifold of all planar shapes, that is parametrized by the view sphere. Taking into account the geometry of this submanifold and interpolating
Funded by the VISIONTRAIN RTN-CT-2004-005439 Marie Curie Action within the EC’s FP6.
Fig. 1. Illustration of a view sphere. Right hand: indicated are three sampled contours of an airplane seen from a camera from points on the view sphere. The object is located in the centre of the sphere. Left hand: illustration of the shape sub-manifold. The green lines between sphere and manifold indicate corresponding points, the blue arrow indicates a point that is interpolated using, in this case, three points which are neighbours on the sphere. This specific object was taken from the Princeton 3D shape benchmark [14].
the shape samples accordingly, we show that either the viewpoint of a moving camera, or the object pose relative to the observer, can be tracked by observing deformations of the object's silhouette in an image sequence. We point out that 3D models are not utilized in this work, apart from graphically illustrating various points below. Rather, a sample set of object contours observed from different viewpoints, along with the information about which object they belong to, defines the input data. Our results are novel and relevant, for instance, for reaching and maintaining a reference position relative to a moving object, through vision-based control, in man-made and industrial scenes. Related work. Related work has been published recently in [2, 10, 11, 12]. Etyngier et al. [10] use Laplacian eigenmaps [16] for embedding a set of training shapes into a low dimensional standard euclidean space. They present a method for projecting novel shapes to the submanifold representing the training samples, in order to model a shape prior for image segmentation. Similarly, Lee and Elgammal [11] use locally linear embedding (LLE) [17] to learn separately a configuration manifold of human gaits and a view manifold corresponding to a circle on the view sphere, based on a tensor factorization of the input data. While nonlinear euclidean embeddings (Laplacian eigenmap, LLE) of locally connected similarity structures (weighted adjacency graphs) are employed in [10, 11], we directly use the intrinsic manifold of invariant shapes as developed in [1, 5]. Statistical models based on this manifold have been elaborated in [2, 12] for deformable objects and shapes of classes of rigid objects, respectively, in connection with image segmentation and computational anatomy. By contrast, we focus on tracking and pose estimation of a single rigid object, based on contour deformations and the corresponding shape submanifold. This approach is novel. Our work may be regarded as a learning-based approach for
associating views and contours of arbitrary objects, that is both more general and easier to apply than earlier work on model-based contour representations of specific objects [25, 26]. Organization. We describe in Section 2 the mathematical shape and object representation, and the corresponding data acquisition. Section 3 details our approach for pose inference and object tracking on the view sphere. For the sake of completeness, we briefly discuss in Section 4 two major approaches to image segmentation for extracting object contours from images, although this is not the main focus of our present work. We validate our approach by numerical experiments in Section 5 and conclude in Section 6.
2 Shape Model, Object Representation, Learning
We work with the elastic closed pre-shape space covering closed regular two-dimensional curves, proposed in [1]. A regular curve \alpha : [0, 1] \to R^2 is represented by

\alpha(t) = \alpha_0 + \int_0^t e^{\Phi(s)} e^{i\Theta(s)} \, ds,

with the integrand denoting a velocity function along the curve. e^{\Phi(t)} describes the curve speed, while \Theta(t) is the tangent angle relative to the real axis in the complex plane. To achieve invariance under translation, the integral constant \alpha_0 is left out, and shapes are represented by pairs (\Phi, \Theta) as elements of a vector space of functions denoted by H. To also achieve scale and rotation invariance and to restrict to closed regular curves, further constraints turn H into the space of pre-shapes C:

C := \Big\{ (\Phi, \Theta) \in H : \int_0^1 e^{\Phi(t)} e^{i\Theta(t)} dt = 0 \ (closure), \ \int_0^1 e^{\Phi(t)} dt = 1 \ (scale), \ \int_0^1 \Theta(t) e^{\Phi(t)} dt = \pi \ (rotation) \Big\}.   (1)
So, curves are restricted to length 1 and an angle function average of \pi. Note that this is an arbitrary choice; we adopted \pi from [1]. Invariance with respect to reparametrisations is not handled intrinsically, since it would raise a considerable additional computational burden. Instead, shapes are matched by dynamic programming, following [15]. The elastic Riemannian metric [6] used on C is

\langle (p_1, t_1), (p_2, t_2) \rangle_{(\Phi,\Theta)} := a \int_0^1 p_1(t) p_2(t) e^{\Phi(t)} dt + b \int_0^1 t_1(t) t_2(t) e^{\Phi(t)} dt   (2)
with constants a, b \in R that weight the amount of stretching and bending, and with tangent vectors (p_{1,2}, t_{1,2}) at (\Phi, \Theta). [1] proposes ways to numerically approximate geodesics on a discrete representation of C, as well as to approximate the inverse exponential map by gradient descent on C. Another recent representation of elastic shape is discussed in [7], also cf. [9], which allows for faster computations. However, rotation invariance is not easy to achieve. [8] introduces an optimisation approach to find minimal geodesics between orbits under the action of rotations and reparametrisations.
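To illustrate the representation, the following Python sketch discretises a closed planar curve into a pair (Phi, Theta) that approximately satisfies the scale and rotation constraints of (1), and evaluates a discrete version of the elastic inner product (2). It is our own simplified illustration: reparametrisation matching and the closure projection are ignored, and all function names are our own.

import numpy as np

def curve_to_preshape(points):
    # points: (N, 2) array of an ordered, closed planar curve with non-degenerate segments.
    z = points[:, 0] + 1j * points[:, 1]
    dz = np.roll(z, -1) - z                        # velocity between consecutive samples
    speed = np.abs(dz)
    Phi = np.log(speed / speed.sum())              # discrete sum of e^Phi equals 1 (scale constraint)
    Theta = np.unwrap(np.angle(dz))
    Theta += np.pi - np.sum(Theta * np.exp(Phi))   # weighted angle average shifted to pi (rotation constraint)
    return Phi, Theta

def elastic_inner(p1, t1, p2, t2, Phi, a=1.0, b=1.0):
    # Discrete version of the elastic metric (2) at the pre-shape (Phi, Theta).
    w = np.exp(Phi)
    return a * np.sum(p1 * p2 * w) + b * np.sum(t1 * t2 * w)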
View Sphere Sampling. The input data of our approach are given samples on the view sphere S2 from any object, at known positions (see Fig. 1). These data are acquired off-line and result in a sample set of points in C.
3 Pose Inference and Tracking on the View Sphere
This section describes a model that we use for modelling motion of a point on the sphere that represents the object's shape in a submanifold of C, as well as a simple scheme to predict positions locally. We also explain how we keep track of points on the view sphere that correspond to shapes measured from images in an image sequence. To avoid confusion with tracking an object in the image plane, we call the process of tracking the position on the view sphere sphere tracking.

Motion Model. We model a mass point on the sphere as motion in a potential field V(x) = m \cdot g \cdot (x - P)^2, together with a friction component. m is a constant inertia, g weights the impact of V, and \beta in Equation (3) weights the impact of friction. The motion is governed by the differential equation

\underbrace{-2 \cdot m \cdot g \cdot (s(t) - P)}_{-\nabla V} \ \underbrace{-\,\beta \cdot \dot{s}(t)}_{\text{Stokes friction}} = m \cdot \ddot{s}(t).   (3)
This is applied to a point in the tangent space of the group of 3D rotations, i.e., s(t), P \in T\,SO_3, with rotations representing motions of a point on the sphere S^2. The corresponding exponential and logarithmic maps for SO_3 can be efficiently computed in closed form. The "centre of gravitation" P is updated whenever a new measurement P_k is available. Fig. 2 shows an illustration of the motion model following a path of points P.

Predictions. Given past measurements p_i \in S^2, we would like to predict s(t) locally. Assume we are given a new measurement P_k at time t_k, and the motion model to be at point s(t_k). We then follow the trajectory governed by (3) until the distance d(s(t_k), P_k) has been travelled, say at time t'_k, so that d(s(t_k), s(t'_k)) = d(s(t_k), P_k), and then continue for an additional fixed time period \Delta t = t'_k - t_k to obtain the prediction

p_{pred} := s(t'_k + \Delta t).   (4)
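The sketch below shows one way such a damped, spring-like motion could be integrated numerically; for simplicity it works directly on the unit sphere S^2 rather than in T SO_3, replaces the Euclidean difference s(t) - P by the Riemannian logarithm, and uses made-up step sizes and constants. It is our own illustration, not the authors' implementation.

import numpy as np

def exp_sphere(p, v):
    # Exponential map on the unit sphere S^2 at p applied to tangent vector v.
    n = np.linalg.norm(v)
    if n < 1e-12:
        return p
    return np.cos(n) * p + np.sin(n) * v / n

def log_sphere(p, q):
    # Inverse exponential map: tangent vector at p pointing towards q.
    w = q - np.dot(p, q) * p
    n = np.linalg.norm(w)
    if n < 1e-12:
        return np.zeros(3)
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)) * w / n

def motion_step(s, v, P, dt=0.05, g=1.0, beta=0.5, m=1.0):
    # One semi-implicit Euler step of (3): the term (s - P) becomes -Log_s(P),
    # friction acts on the tangent velocity v; parameter values here are arbitrary.
    acc = 2.0 * g * log_sphere(s, P) - (beta / m) * v
    v = v + dt * acc
    s_new = exp_sphere(s, dt * v)
    v = v - np.dot(v, s_new) * s_new      # crude transport: re-project v onto the new tangent plane
    return s_new, v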
Fig. 2. Representing and tracking shape changes as motions on the view sphere. Blue: measurements Pk . Red: path s(t) of the mass point. Magenta: predicted points. The start point of the trajectory is at the far left end. The green grid lines indicate the underlying sphere.
Fig. 3. Illustration of shape interpolation with Karcher means in the closed pre-shape space C. The corners represent the original shapes, the other contours are interpolations weighted with their barycentric coordinates. The corner curves are randomly chosen from the MPEG-7-CE1 shape data base.
As illustrated in Fig. 2, this simple “mechanical” model can result in rather sensible paths and corresponding predictions of shape changes, as detailed below. Shape Interpolation. Interpolation of shapes on the view sphere at a point s ∈ S2 is realised by Karcher means using a local neighbourhood M of sampled shapes around s. The empirical Karcher mean is
\mu = \arg\min_{m \in C} \sum_{i=1}^{|M|} a_i \cdot d(m, p_i)^2,   (5)
with d(\cdot, \cdot) the geodesic distance on C, and weights a_i \ge 0 with \sum_i a_i = 1. \mu can in practice be calculated by gradient descent [18]. Fig. 3 illustrates the interpolation of three shapes depicted at the corners of the triangle.

Keeping Track of the Spherical Position. Assume that we know initially a point c_k \in C and the corresponding position t_k \in S^2. Now, suppose a new shape q \in C is to be considered, typically delivered by an image segmentation algorithm that tracks an object over a number of frames (see the next section). Fig. 4 illustrates the following problem: We wish to determine a point c_{k+1} \in C at t_{k+1} \in S^2 on the sub-manifold modeled by the samples p_i from the view sphere at spherical coordinates t_i \in S^2, that is as close as possible to q. That is, we would like to minimise the geodesic distance d(m, q) = \|Log_m(q)\|_m by minimising

F(m, q) = \|Log_m(q)\|_m^2,   (6)
Fig. 4. Keeping track of the spherical position: Shape ck and position tk are known, as well as a new shape q. What is the (approximate) position tk+1 on the view sphere corresponding to q?
where m results from minimizing (5),

m(t) = \arg\min_{\tilde{m} \in C} \Big( \sum_{i=1}^{|M|} a_i(t) \cdot d(\tilde{m}, p_i)^2 \Big)   (7)
with both the neighbourhood M and the weights a_i depending on the spherical position t. We then solve at frame k + 1

t_{k+1} = \arg\min_{t} F(m(t), q)   (8)
using non-linear conjugate gradient descent on the view sphere, as follows: choose b_{\ell,1}, b_{\ell,2} \in R^3 to be orthonormal basis vectors of the tangent space T_{t_\ell}(S^2), and a small constant \Delta > 0. Notice that in the following equations, Exp and Log denote the exponential and inverse exponential maps on the sphere S^2, not on the pre-shape space C.

trans : T(S^2) \times S^2 \times S^2 \to T(S^2),   v_2 = trans(v_1, t_1, t_2)   (9)
is a function that takes a tangent vector at t_1 and translates it along a geodesic from t_1 to t_2. Then, let t_0 = t_k, \beta_{-1} = 0, \tilde{d}_{-1} = 0, and

v_\ell = \sum_{i=1}^{2} b_{\ell,i} \cdot \frac{F(m(Exp_{t_\ell}(\Delta \cdot b_{\ell,i})), q) - F(m(t_\ell), q)}{\Delta}   (10)

d_\ell = -v_\ell + \beta_{\ell-1} \tilde{d}_{\ell-1}   (11)

t_{\ell+1} = Exp_{t_\ell}(\alpha \cdot d_\ell)   (12)

\tilde{d}_\ell = trans(d_\ell, t_\ell, t_{\ell+1})   (13)
Fig. 5. Experiment tracking the view sphere position using only the segmented contours from a sequence of images. Right: shown are measurements obtained on the view sphere, for the complete sequence. Left: a few images from the sequence are shown, the corresponding interpolated contours from the shape space C to their right. The initial position t0 ∈ S2 and shape s0 were given manually. Then for each image, the result from the previous one was used as initialisation. A region based level set segmentation was used, with a curvature regularisation term after [13].
\tilde{v}_\ell = trans(v_\ell, t_\ell, t_{\ell+1})   (14)

\beta_\ell = \frac{[v_{\ell+1} - \tilde{v}_\ell]^\top v_{\ell+1}}{v_\ell^\top v_\ell}.   (15)

v_\ell takes the role of the gradient direction, in the tangent space of S^2 at the current point t_\ell. d_\ell is the search direction, computed from the gradient v_\ell and the previous search direction \tilde{d}_{\ell-1}, with factor \beta_{\ell-1} calculated using the Polak-Ribière variant of the non-linear conjugate gradient method in Equation (15), which is more robust than the Fletcher-Reeves variant according to [19]. The rest of the above equations are needed to adapt to the geometry of the sphere. Specifically, we need to translate tangent vectors to the current iterate t_\ell to be able to combine them, and we need to go back to the sphere using the exponential map. In order to find a step length \alpha > 0 for use in Equation (12), we use a standard line search procedure with the Armijo or sufficient decrease condition

F(m(Exp_{t_\ell}(\alpha \cdot d_\ell)), q) \le F(m(t_\ell), q) + c \cdot \alpha \cdot (v_\ell^\top d_\ell),   0 < c < 1.   (16)
Figures 5 and 6 depict paths on the view sphere.
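For concreteness, the following Python sketch implements the sphere-specific ingredients used above, namely the exponential map, geodesic translation of tangent vectors (the function trans), and the finite-difference gradient of (10). The objective F is left abstract as a user-supplied callable; this is our own illustrative code under those assumptions, not the authors' implementation.

import numpy as np

def exp_s2(t, v):
    # Exponential map on S^2 at t.
    n = np.linalg.norm(v)
    return t if n < 1e-12 else np.cos(n) * t + np.sin(n) * v / n

def trans_s2(v, t1, t2):
    # Translate tangent vector v from T_{t1}S^2 to T_{t2}S^2 along the connecting geodesic
    # (parallel transport on the sphere, realised as a Rodrigues rotation).
    axis = np.cross(t1, t2)
    n = np.linalg.norm(axis)
    if n < 1e-12:
        return v.copy()
    axis /= n
    ang = np.arccos(np.clip(np.dot(t1, t2), -1.0, 1.0))
    return (v * np.cos(ang) + np.cross(axis, v) * np.sin(ang)
            + axis * np.dot(axis, v) * (1.0 - np.cos(ang)))

def fd_gradient(F, t, basis, delta=1e-3):
    # Finite-difference gradient of t -> F(t) in T_t S^2, as in (10);
    # 'basis' is a pair of orthonormal tangent vectors at t, F a user-supplied callable.
    f0 = F(t)
    return sum(b * (F(exp_s2(t, delta * b)) - f0) / delta for b in basis)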
4 Segmentation and Image Contours
There are several possibilities to obtain contours from actual images, and to track contours while they are moving in the image plane. We have so far applied two methods: the well known region based level set method [21] and the related, more recent method from [24]. Since [24] finds a global optimum, it is suitable if the
Fig. 6. Sphere tracking experiment with occlusion. The top row shows the tracked view sphere path on the right (the arrows indicate the direction of motion), and an illustration of the image sequence on the left. The colour coding shows the corresponding contours and view sphere positions. Using the resulting shape from each previous frame to create a prior for the segmentation algorithm enables the sphere tracking to keep going for this sequence, where a small occluding object moves in front of the object. Each row shows the area of interest from 3 subsequent frames with the superimposed segmentation result, followed by the contour representing the shape tracked on the view sphere.
images contain only a more or less homogeneous background and a single object. In more complex scenes containing clutter and heterogeneous background, the level set method that only finds local optima is advantageous. We sketch these two approaches below, and how results from the sphere tracking are used as prior for steering the segmentation process. Level sets. Our implementation of level set segmentation uses the image energy from [21] and additionally the curvature diffusion regularisation term from [13], replacing the more common mean curvature term in the evolution in all our experiments. We also optionally use a prior energy based on [23] and [22]:
E_{shape} = \frac{1}{2} \int_D \big( H(\Phi(x)) - H(\tilde{\Phi}(s\,\Gamma\,x + T)) \big)^2 dx.   (17)

H denotes the Heaviside function, \Phi and \tilde{\Phi} are the embedding functions of the evolving contour and the prior contour, respectively. s, \Gamma, T are transformation parameters as described further below.
Global Segmentation Method. The variational segmentation model of [21] suffers from the existence of local minima due to the non-convexity of the energy functional. Segmentation results depend on the initialisation. To overcome this limitation, Chan et al. [24] propose algorithms which are guaranteed to find global optima as follows: For a normalised grey scale image f (x) : D → [0, 1] on the domain D and constants λ, c1 , c2 ∈ R, a global minimiser u can be found by minimising the convex functional
\min_{0 \le u \le 1} \int_D |\nabla u| \, dx + \lambda \int_D \{(c_1 - f(x))^2 - (c_2 - f(x))^2\} \, u(x) \, dx.   (18)
It is proved in [24] that if u(x) is a solution of (18), then for almost every \mu \in [0, 1], the indicator function 1_{\{x : u(x) > \mu\}}(x) is a global minimizer of [21]. In order to segment an object of interest in the image plane, we modify (18) by adding an additional term as a shape prior
\min_{0 \le u \le 1} \int_D |\nabla u| \, dx + \lambda \int_D \{(c_1 - f(x))^2 - (c_2 - f(x))^2 + (\hat{u}(x) - \tilde{u}(x))\} \, u(x) \, dx,   (19)
where \hat{u} is a 'frozen' u which gets updated after each time step in the numerical implementation, and \tilde{u} is the prior template. We would like (19) to be invariant with respect to euclidean transformations of the object in the 2D image plane. To this end, we add transformation parameters, as in [23], of the fixed \hat{u} with respect to the prior \tilde{u} by minimising
E_{shape} = \int_D [\hat{u}(x) - \tilde{u}(s\,\Gamma\,x + T)] \, u(x) \, dx   (20)
for the scale s, translation T , and rotation matrix Γ (rotation by an angle θ). As a result, we obtain
\min_{u, s, \Gamma, T} \int_D |\nabla u| \, dx + \lambda \int_D \{(c_1 - f(x))^2 - (c_2 - f(x))^2 + (\hat{u}(x) - \tilde{u}(s\,\Gamma\,x + T))\} \, u(x) \, dx,   (21)
which is minimised by gradient descent. This functional is no longer convex in all unknowns, but the convexity with respect to u facilitates the computation of the transformation parameters. Possible Priors. Points on the view sphere predicted by the motion model can be used to provide a prior when segmenting subsequent frames of an image sequence. This can be done in several ways — the most obvious is to take the shape at ppred ∈ S2 from Equation (4) as a template. To incorporate the prior into the segmentation method, it is most appealing to impose a vector field defined on a contour C that drives C along a geodesic in shape space towards the prior; this appears to be a sensible choice and has been proposed amongst others in [2]. Parametric active contour methods seem to be naturally suited for
Fig. 7. Sphere tracking with a real recorded sequence totalling 97 frames. Roughly every 20th is shown, the last three are closer. Indicated in each frame are the segmentation result (green) and aligned interpolated shape (red). Difficult situations where the view tracking goes wrong are indicated in red; yellow marks situations that are just acceptable. The time line on the bottom indicates the situation for the whole 97 frames. The spheres on the right indicate the inferred view positions along the sequence.
this sort of modification, since they work directly on points lying on the contour. For the implicit level set method [20, 21] or the method described in [24], applying a vector field that is defined only on the level set defining the interface is a little more involved. Imposing a flow along a geodesic in the implicit framework for other distance measures has been proposed, e.g., in [27]. The prior we use is a single shape predicted by the motion model on the view sphere. The shape is interpolated using a weighted Karcher mean and converted to a binary image. This binary image is then used as a prior for segmentation.
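As an illustration of how such a prior template enters the segmentation, the sketch below performs one explicit, projected gradient step on a simple discretisation of the functional (19), with the predicted shape already rasterised into a binary prior image. It is our own simplified scheme (fixed step size, forward/backward differences, periodic boundaries, no transformation parameters), not the optimisation used in the paper.

import numpy as np

def grad(u):
    ux = np.roll(u, -1, axis=1) - u
    uy = np.roll(u, -1, axis=0) - u
    return ux, uy

def div(px, py):
    return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

def segmentation_step(u, f, u_hat, u_tilde, c1, c2, lam=1.0, dt=0.1, eps=1e-8):
    # One explicit update of a regularised version of (19):
    # the TV term is approximated by the curvature div(grad u / |grad u|),
    # followed by projection of u onto [0, 1].
    ux, uy = grad(u)
    norm = np.sqrt(ux**2 + uy**2 + eps)
    curvature = div(ux / norm, uy / norm)
    data = (c1 - f)**2 - (c2 - f)**2 + (u_hat - u_tilde)
    u = u + dt * (curvature - lam * data)
    return np.clip(u, 0.0, 1.0)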
5 Experiments and Evaluation
Figures 5 and 6 show the results of the following experiment: for a given sequence \{I_1, \ldots, I_n\} of images depicting a moving object, the contour c_1 and view sphere position t_1 for the first image were initialised manually. Then, using the methods from Sections 3 and 4, for each subsequent image I_{i+1} the contour c_{i+1} and the respective view sphere point t_{i+1} were updated. The contour c_i from the previous image was used for initialisation and as a weak prior for the segmentation of image I_{i+1}. The segmentation result from I_{i+1} was then used to calculate t_{i+1}, starting at t_i, using the method described in Section 3. In Fig. 6, an occluding object was added in a different scene, which could be successfully handled by using c_i as prior template for the segmentation algorithm. For these experiments, the level set algorithm was used. The figures depict a few snapshots from the whole sequences, which consist of 100 and 50 frames, respectively. These experiments show that the sphere tracking mechanism is capable of keeping track of the view sphere position fairly well, given a sufficient number of samples on the view sphere for interpolating the shape submanifold corresponding to the object. Fig. 7 shows results for a real recorded sequence.
6 Conclusions and Further Work
We presented a method that combines techniques from elastic shape manifold modelling, segmentation and optimisation, to track the change of pose of a 3D object through tracking its contour. While the given contours of the object are currently sampled more or less uniformly on the view sphere, an adaptive sampling strategy may be investigated in future work: the amount of contour change depends on the position on S^2 and the object in question. Advanced sampling should adapt the density of points in areas of rapid shape change on S^2, thus exploiting the geometry of the shape submanifold already during data acquisition. However, in our experiments sampling 162 points appeared to be sufficient. Another point concerns initialisation, which is currently done manually. Automatic initialisation may be achieved for example by a voting scheme on the first few frames, for sequences where the first few contours can be extracted well enough by any extraction method. Regarding the segmentation prior, another option is to investigate weighted combination of a local neighbourhood of shapes around p_pred to create a
template with a "fuzzy" boundary, in order to better take into account inherent uncertainties of the predicted path of shapes. A last matter worth mentioning is computation speed. Specifically, potential for speed-up lies in the numerical calculation of the Log map for C.
References 1. Mio, W., Srivastava, A., Joshi, S.: On Shape of Plane Elastic Curves. IJCV 73, 307–324 (2007) 2. Joshi, S.H., Srivastava, A.: Intrinsic Bayesian Active Contours for Extraction of Object Boundaries in Images. ACCV (2006) 3. Mio, W., Srivastava, A., Liu, X.: Contour Inferences for Image Understanding. IJCV 69(1), 137–144 (2006) 4. Srivastava, A., Joshi, S.H., Mio, W., Liu, X.: Statistical Shape Analysis: Clustering, Learning, and Testing. IEEE PAMI 27(4), 590–602 (2005) 5. Klassen, E., Srivastava, A., Mio, W., Joshi, S.H.: Analysis of Planar Shapes Using Geodesic Paths on Shape Spaces. IEEE PAMI 26(3), 372–383 (2003) 6. Younes, L.: Computable Elastic Distances Between Shapes. SIAM J. on App. Math. 58, 565–586 (1998) 7. Joshi, S.H., Klassen, E., Srivastava, A., Jermyn, I.: An Efficient Representation for Computing Geodesics Between n-Dimensional Elastic Shapes. In: CVPR (2007) 8. Joshi, S., Srivastava, A., Klassen, E., Jermyn, I.: Removing Shape-Preserving Transformations in Square-Root Elastic (SRE) Framework for Shape Analysis of Curves. In: Yuille, A.L., Zhu, S.-C., Cremers, D., Wang, Y. (eds.) EMMCVPR 2007. LNCS, vol. 4679, pp. 387–398. Springer, Heidelberg (2007) 9. Michor, P.W., Mumford, D., Shah, J., Younes, L.: A Metric on Shape Space with Explicit Geodesics. ArXiv e-prints, 706 (2007) 10. Etyngier, P., Segonne, F., Keriven, R.: Shape Priors using Manifold Learning Techniques. In: ICCV (2007) 11. Lee, C., Elgammal, A.: Modeling View and Posture Manifolds for Tracking. In: ICCV (2007) 12. Davis, B., Fletcher, P.T., Bullitt, E., Joshi, S.: Population Shape Regression From Random Design Data. In: ICCV (2007) 13. Delingette, H., Montagnat, J.: Shape and Topology Constraints on Parametric Active Contours. CVIU 83, 140–171 (2001) 14. Shilane, P., Min, P., Kazhdan, M., Funkhouser, T.: The Princeton Shape Benchmark (2004) 15. Sebastian, T., Klein, P., Kimia, B.: On Aligning Curves. PAMI 25, 116–125 (2003) 16. Belkin, M., Niyogi, P.: Laplacian Eigenmaps for Dimensionality Reduction and Data Representation. Neural Computation 15, 1373–1396 (2003) 17. Roweis, S.T., Saul, L.K.: Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science 290, 2323–2326 (2000) 18. Pennec, X.: Probabilities And Statistics On Riemannian Manifolds: Basic Tools For Geometric Measurements. In: NSIP (1999) 19. Optimization Technology Center, N. U.: The NEOS Guide, http://www-fp.mcs.anl.gov/OTC/Guide/ 20. Osher, S., Fedkiw, R.: Level Set Methods and Dynamic Implicit Surfaces. Springer, Heidelberg (2003) 21. Chan, T.F., Vese, L.A.: Active Contours Without Edges. IEEE TIP (2001)
22. Riklin-Raviv, T., Kiryati, N., Sochen, N.A., Pajdla, T., Matas, J.: Unlevel-Sets: Geometry and Prior-Based Segmentation. In: Pajdla, T., Matas, J(G.) (eds.) ECCV 2004. LNCS, vol. 3024, pp. 50–61. Springer, Heidelberg (2004) 23. Chen, Y., Tagare, H.D., Thiruvenkadam, S., Huang, F., Wilson, D., Gopinath, K.S., Briggs, R.W., Geiser, E.A.: Using Prior Shapes in Geometric Active Contours in a Variational Framework. IJCV 50, 315–328 (2002) 24. Chan, T.F., Esedoglu, S., Nikolova, M.: Algorithms for Finding Global Minimizers of Image Segmentation and Denoising Models. SIAM J. of App. Math. 66, 1632– 1648 (2006) 25. Eggert, D.W., Stark, L., Bowyer, K.W.: Aspect Graphs and Their use in Object Recognition. Ann. Math. Artif. Intell. 13(3-4), 347–375 (1995) 26. Vijayakumar, B., Kriegman, D.J., Ponce, J.: Invariant-Based Recognition of Complex Curved 3D Objects from Image Contours. CVIU 72(3), 287–303 (1998) 27. Solem, J.E.: Geodesic Curves for Analysis of Continuous Implicit Shapes. ICPR, 43–46 (2006)
Generative Image Segmentation Using Random Walks with Restart

Tae Hoon Kim, Kyoung Mu Lee, and Sang Uk Lee

Dept. of EECS, ASRI, Seoul National University, 151-742, Seoul, Korea
[email protected],
[email protected],
[email protected]
Abstract. We consider the problem of multi-label, supervised image segmentation when an initial labeling of some pixels is given. In this paper, we propose a new generative image segmentation algorithm for reliable multi-label segmentations in natural images. In contrast to most existing algorithms which focus on the inter-label discrimination, we address the problem of finding the generative model for each label. The primary advantage of our algorithm is that it produces very good segmentation results under two difficult problems: the weak boundary problem and the texture problem. Moreover, single-label image segmentation is possible. These are achieved by designing the generative model with the Random Walks with Restart (RWR). Experimental results with synthetic and natural images demonstrate the relevance and accuracy of our algorithm.
1 Introduction
Image segmentation is an important issue in computer vision. Especially, the segmentation of natural images is one of the most challenging issues. Two important difficulties of the segmentation in natural images are the weak boundary problem and the texture problem. The first problem is to find weak boundaries when they are parts of a consistent boundary. The second problem is to separate the texture in the highly cluttered image. In fact, such situations often arise in natural images. In these cases, the segmentations become ambiguous without user-provided inputs, and thus the supervised image segmentation approaches are often preferred. In this paper, we address the supervised image segmentation problem. Recently, several supervised segmentation approaches have been proposed. There are three types of supervised segmentation algorithms according to the user inputs. The first type is that the segmentation is obtained based on pieces of the desired boundary, such as the intelligent scissors [1]. The second type is that an initial boundary that is close to the desired boundary is given, such as Active Contour [2] and Level Set [3]. Finally, the third type is that the user provides an initial labeling of some pixels. We focus on the supervised image segmentation of the third type. One of the popular approaches is the Graph Cuts method (GC) [4] based on energy functionals which are minimized via discrete optimization techniques. The set of edges with minimum total weights is obtained via max-flow/min-cut energy minimization. Since GC treats this minimum cut criterion,
it often causes the small-cut problem when the contrast is low or the number of seed pixels is small. In [5], the geodesic distance from the seeds was used for image segmentation. By assigning each pixel the label with minimum distance, the segmentation is obtained. The geodesic distance between two pixels is simply defined as the smallest integral of a weight function over all paths. However, since it does not consider the global relationship between two pixels, it is not reliable to use the simple geodesic distance as the relevance measure between two pixels. Another approach is the Random Walker image segmentation algorithm (RW) proposed by Grady [6]. After the first-arrival probability that a random walker starting at a pixel first reaches one of the seeds of each label is computed, that pixel takes the label with maximum probability. It was shown in [6] that RW has better performance under difficult conditions than GC. However, the first-arrival probability defined in [6] has some limitations. Since a random walker starting at a pixel must first arrive at the border of the pre-labeled region, it only considers the local relationship between the pixel and that border. Therefore, the information of seeds inside the pre-labeled region is ignored in the absence of higher-order interactions. Also, this probability depends on the number of seeds. If the number of seeds with only one label grows under the weak boundary problem, the first-arrival probability of that label increases without regard to the whole relation between a pixel and the seeds. These limitations explain why RW still suffers from the two problems: the weak boundary problem and the texture problem. Most recently, the segmentation approach defined by an l∞ norm was proposed in [7]. Since RW with constraints was used as a regularization method for yielding a unique solution, this approach still has the limitations of RW. Most previous supervised image segmentation algorithms focus on the inter-label discrimination, not on finding the generative model for each label. Although they tried to solve the weak boundary problem and the texture problem, these two problems in natural images are still the most challenging issues in image segmentation. In this paper, we propose a new generative image segmentation algorithm based on the Random Walks with Restart (RWR) [8] that can solve the weak boundary problem and the texture problem effectively. The key contributions of our proposed algorithm are as follows.

1. We introduce a generative model for image segmentation. From basic decision theory [9], it is known that generative methods are better inference algorithms. In contrast to most existing models which focus on the inter-label discrimination, we address the problem of finding the generative model for each label like [10]. For example, we can consider just a one-label segmentation problem as shown in Fig. 1. Our model can produce the segmentation result with an optimal threshold level as shown in Fig. 1(c). This is possible since the likelihood probability can be generated using our generative model as depicted in Fig. 1(b). Since the generative model of each label is constructed independently, it is also possible to add a new label without altering the models of previous labels.
Fig. 1. An example of the generative segmentation with just one label. Given the seeds with green initial label in (a), the likelihood in (b) is computed using our generative algorithm. The range of this probability is [0, 2.8225 × 10−4 ]. (c) is the resulting segmentation with a threshold level τ = 1.5524 × 10−5 . The foreground label is assigned to the pixels with probability above the threshold τ .
2. We design a generative image segmentation approach using the steady-state probability of RWR as a part of the likelihood term. Since the likelihood of a pixel is defined as the average of all the steady-state probabilities between that pixel and the seeds with the same label, our algorithm can reduce dependence on the number of seeds under the weak boundary problem. RWR, similar to graph-based semi-supervised learning [11], is a very successful technique for defining the relevance relation between two nodes in graph mining [8][12][13][14]. It has good performance on many other applications: cross-model correlation discovery [8], center-piece subgraph discovery [13], content-based image retrieval [12], neighborhood formulation [14], etc. Since this steady-state probability of RWR considers the whole relationship between two pixels, it naturally reflects the effects of texture.

3. Under two challenging problems, the weak boundary problem and the texture problem, our algorithm produces very good segmentation results on synthetic and natural images. It has better performance than RW as well as GC.

The paper is organized as follows. In Section 2, we introduce our proposed generative image segmentation algorithm and explain it in detail. The experimental results are shown in Section 3. Finally, we discuss our approach and give conclusions in Section 4.
2 Generative Image Segmentation
Let us consider the image segmentation as a labeling problem in which each pixel xi ∈ X = {x1 , .., xN } is to be assigned one label lk ∈ L = {l1 , .., lK }. From basic decision theory [9], we know that the most complete characterization of the solution is expressed in terms of the set of posterior probabilities p(lk |xi ). Once we know these probabilities, it is straightforward to assign xi the label having the largest probability. In a generative approach, we model the joint distribution p(lk , xi ) of pixels and labels. This can be done by computing the
label prior probability p(l_k) and the pixel likelihood p(x_i|l_k) separately. The required posterior probabilities are obtained using Bayes' rule:

p(l_k|x_i) = \frac{p(x_i|l_k) \, p(l_k)}{\sum_{n=1}^{K} p(x_i|l_n) \, p(l_n)},   (1)
where the sum in the denominator is taken over all labels. Let X_{l_k} = \{x_1^{l_k}, ..., x_{M_k}^{l_k}\} (X_{l_k} \subset X) be a set of the M_k seeds with label l_k. Then the likelihood p(x_i|l_k) can be obtained by

p(x_i|l_k) = \frac{1}{Z} \sum_{m=1}^{M_k} p(x_i|x_m^{l_k}, l_k) \, p(x_m^{l_k}|l_k) = \frac{1}{Z \times M_k} \sum_{m=1}^{M_k} p(x_i|x_m^{l_k}, l_k),   (2)

where Z is a normalizing constant. Each pixel likelihood is modeled by a mixture of distributions p(x_i|x_m^{l_k}, l_k) from each seed x_m^{l_k}, which has a seed distribution p(x_m^{l_k}|l_k). The pixel distribution p(x_i|x_m^{l_k}, l_k) indicates the relevance score between a pixel x_i and a seed x_m^{l_k}. In this work, we propose to use the steady-state probability defined by RWR [8]. Compared with traditional graph distances (such as shortest path, maximum flow), this steady-state probability can capture the whole relationship between two pixels. The seed distribution p(x_m^{l_k}|l_k) is defined by a uniform distribution, 1/M_k. Since the likelihood p(x_i|l_k) is computed as the average of the pixel distributions of all the seeds with the label l_k, our method is less dependent on the number of seeds. Now, we briefly describe the process of our image segmentation algorithm. First, we construct a weighted graph on the image. Then, we define p(x_i|x_m^{l_k}, l_k) as the steady-state probability that a random walker starting at a seed x_m^{l_k} stays at a pixel x_i in this graph. After computing this steady-state probability using RWR, we can estimate the likelihood p(x_i|l_k) in (2) and, finally, assign the label with maximum posterior probability in (1) to each pixel.

2.1 Graph Model
Given an image I, let us construct an undirected graph G = (V, E) with nodes v \in V, and edges e \in E. Each node v_i in V uniquely identifies an image pixel x_i. The edges E between two nodes are determined by the neighborhood system. The weight w_{ij} \in W is assigned to the edge e_{ij} \in E spanning between the nodes v_i, v_j \in V. It measures the likelihood that two neighboring nodes v_i, v_j have the same label. The weights encode image color changes used in many graph-based segmentation algorithms [15][4][6]. In this work, a weight w_{ij} is defined as the typical Gaussian weighting function given by

w_{ij} = \exp\Big( -\frac{\|g_i - g_j\|^2}{\sigma} \Big),   (3)

where g_i and g_j indicate the image colors at two nodes v_i and v_j in Lab color space. It provides us with a numerical measure, a number between 0 and 1, for
the similarity between a pair of pixels. The Gaussian function has the nature of geodesic distance. For example, the multiplication of two weights w_{ij}, w_{jk},

w_{ij} w_{jk} = \exp\Big( -\frac{\|g_i - g_j\|^2 + \|g_j - g_k\|^2}{\sigma} \Big),

can measure the similarity between the nodes v_i, v_k. This property fits in well with our algorithm. Therefore, we choose this Gaussian function for the edge weights.

2.2 Learning
Suppose a random walker starts from the m-th seed pixel x_m^{l_k} of label l_k in this graph G. The random walker iteratively transmits to its neighborhood with a probability that is proportional to the edge weight between them. Also, at each step, it has a restarting probability c to return to the seed x_m^{l_k}. After convergence, we obtain the steady-state probability r_{im}^{l_k} that the random walker will finally stay at a pixel x_i. In this work, we use this steady-state probability r_{im}^{l_k} as the distribution p(x_i|x_m^{l_k}, l_k) in (2), such that

p(x_i|x_m^{l_k}, l_k) \approx r_{im}^{l_k}.   (4)
By denoting r_{im}^{l_k}, i = 1, ..., N in terms of an N-dimensional vector r_m^{l_k} = [r_{im}^{l_k}]_{N \times 1} and defining an adjacency matrix W = [w_{ij}]_{N \times N} using (3), RWR can be formulated as follows [8]:

r_m^{l_k} = (1 - c) P r_m^{l_k} + c\, b_m^{l_k} = c (I - (1 - c)P)^{-1} b_m^{l_k} = Q\, b_m^{l_k},   (5)
where b_m^{l_k} = [b_i]_{N \times 1} is the N-dimensional indicating vector with b_i = 1 if x_i = x_m^{l_k} and b_i = 0 otherwise, and the transition matrix P = [p_{ij}]_{N \times N} is the adjacency matrix W row-normalized:

P = D^{-1} \times W,   (6)

where D = diag(D_1, ..., D_N), D_i = \sum_{j=1}^{N} w_{ij}. If these steady-state probabilities r_m^{l_k} are inserted into (2) by our definition, the likelihoods p(x_i|l_k) (i = 1, ..., N) are obtained as:

[p(x_i|l_k)]_{N \times 1} = \frac{1}{Z \times M_k} Q \tilde{b}_{l_k},   (7)

where \tilde{b}_{l_k} = [\tilde{b}_i]_{N \times 1} is the N-dimensional vector with \tilde{b}_i = 1 if x_i \in X_{l_k} and \tilde{b}_i = 0 otherwise. In a RWR view, Q = [q_{ij}]_{N \times N} is used for computing the affinity score between two pixels. In other words, q_{ij} implies the likelihood that x_i has the same label as the one assigned to x_j. It can be reformulated as follows:

Q = c (I - (1 - c)P)^{-1} = c \sum_{t=0}^{\infty} (1 - c)^t P^t.   (8)
Q is defined as the weighted sum of all matrices Pt , t = 0, ..., ∞. Note that Pt is the t-th order transition matrix, whose elements ptij can be interpreted as the total probability for a random walker that begins at vj to end up at vi after t iterations, considering all possible paths between two pixels. By varying the number of iterations t, we explicitly explore relationship at different scales in the image, and as t increases, we expect to find coarser structure. Therefore, RWR gives the effects of texture by considering all paths between two pixels at all scales (any iteration number (t = 0, ..., ∞)) in the image. Since close-by pixels are likely to have high similarity value, as t increases, Pt has lower weight, (1−c)t (0 < c < 1). The resulting matrix Q can be solved by using a linear method of matrix inversion. Although it requires more memory space, fast computation is possible if the matrix is sparse. Since the small pixel neighborhood system is used in this paper, normalized matrix P is highly sparse. Therefore, Q can be calculated fast. However, if the number of nearest neighbors is chosen to be a fixed large number, the complexity of matrix inversion is very high. In this case, some approximation methods such as Fast Random Walk with Restart method [16] can be used. 2.3
2.3 Segmentation
Assume that the prior probability p(l_k) in (1) is uniform. Using the likelihood p(x_i | l_k) in (7), the decision rule of each pixel x_i for image segmentation is as follows:

R_i = arg max_{l_k} p(l_k | x_i) = arg max_{l_k} p(x_i | l_k).   (9)

By assigning the label R_i to each pixel x_i, the segmentation is obtained.
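Continuing the earlier sketch, the decision rule (9) is then a per-pixel argmax over the label likelihoods (the names `rwr_likelihoods`, `W`, and `seed_indices_per_label` are the hypothetical ones introduced above, and the image dimensions are assumed to be known):

```python
# likelihoods has shape (N, K); reshape the winning label index back to the image grid
likelihoods = rwr_likelihoods(W, seed_indices_per_label, c=4e-4)
labels = likelihoods.argmax(axis=1).reshape(image_height, image_width)
```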
Fig. 2. Overview of our proposed segmentation algorithm: (a) original image, (b) initial labels, (c) resulting segmentation, (d)–(f) posteriors for the Red, Green, and Blue labels. Given the seeds (Red, Green, and Blue) in (b), the posterior probabilities (d), (e), and (f) are obtained by computing (1) for the three labels, respectively. The segmentation result (c) is obtained by assigning each pixel the label having the maximum posterior probability.
Fig. 2 shows the overall process of our algorithm, from the seeds to the calculation of each label's posterior probability p(l_k | x_i) and the resulting segmentation. It starts with three initial seed labels: Red, Green, and Blue, as shown in Fig. 2(b). After computing the likelihood for each label, we generate the posterior probabilities shown in Fig. 2(d), (e), and (f). By the decision rule (9), each pixel is assigned the label that has the maximum probability. Finally, we obtain the segmentation result in Fig. 2(c), where the object boundary is drawn in red overlaid on the original image.
3 Experimental Results
Our algorithm has two parameters: a color variance σ and a restarting probability c. The parameter σ is used in all graph-based segmentation algorithms; in this work it is fixed to the same value for all the segmentation algorithms we tested. The restarting probability c, however, is used only in our algorithm, for computing the steady-state probability of RWR. In this section, we first analyze the effect of the restarting probability c on image segmentation. We then compare the performance of our algorithm with state-of-the-art methods, including GC [4] and RW [6], on several synthetic and natural images.
3.1 Parameter Setting
RWR needs one parameter: the restarting probability c in (5). Depending on this parameter, the range of propagation of a random walker from the starting node varies, as shown in Fig. 3. If c is decreased, the probability that a random walker travels over a larger area is increased. This means that by varying c, we can control the extent to which the label information of a seed spreads at different scales in the image. Figure 4 shows another example of the segmentation with respect to the variation of the restarting probability c in a natural image; the segmentation results change with the restarting probability c. Therefore, it is important to find a proper probability c according to the quantity (or quality)
Fig. 3. An example of the variability of the steady-state probabilities r according to the restarting probability c: (a) original image, (b) c = 10^{-4}, (c) c = 10^{-5}, (d) c = 10^{-6}. The image in (a) is of size 328 × 310 and has seeds with just one label (red pixels) at the center. (b), (c), and (d) show the variation of r in accordance with the decrease of the restarting probability c in a 4-connected pixel neighborhood system. The display range of r is [0, 0.0006].
Fig. 4. An example of the segmentation with respect to the variation of the restarting probability c in a natural image: (a) original image, (b) c = 10^{-3} (a_o = 0.826165), (c) c = 10^{-4} (a_o = 0.850946), (d) c = 10^{-5} (a_o = 0.877542). For comparison, GC has a_o = 0.789613 and RW has a_o = 0.781148. The accuracy rate a_o is computed by (10).
of the seeds and the image size. In this work, c was chosen empirically, and we set c = 4 × 10^{-4} for all the natural test images. An 8-connected neighborhood was used as the pixel neighborhood system.
3.2 Segmentation Results
We begin by analyzing the performance on two difficult problems: the weak boundary problem and the texture problem. We then compare the segmentation results obtained by the three algorithms on natural images and provide quantitative comparisons. Weak boundary problem. The weak boundary problem is to find weak boundaries when they are parts of a consistent boundary. In [6], RW shows better segmentation results than GC in low contrast with a small number of seeds. Although GC and RW are capable of finding weak boundaries, our algorithm gives more intuitive outputs. In Fig. 5, our algorithm is compared with GC and RW on the weak boundary problem. We used two synthetic examples: a circle and a 3×3 grid, each with four sections erased. Given the seeds (green and blue) in Fig. 5(a), the segmentations were obtained in Fig. 5(b)-(d). Fig. 5(b) shows clearly that GC has the small cut problem. In Fig. 5(c), we can confirm that the segmentation of RW is substantially affected by the difference between the numbers of Green and Blue seeds; namely, RW is sensitive to the number of seeds. In contrast, Fig. 5(d) shows that our algorithm is less dependent on the number of seeds and produces better segmentations, because the likelihood is computed as the average of the relevance scores of all the seeds. Texture problem. In GC and RW, it is hard to separate a textured region without considering higher-order connections, because they deal with the minimum cut criterion and the first arrival probability, respectively. Since these two algorithms do not consider the information of seeds that are inside the prelabeled regions, it is not easy for them to take into account the effects of texture. On the other hand, our algorithm can reflect the texture information by using the steady-state probability of RWR, because RWR considers all possible paths between two nodes in a small neighborhood system. In spite of the use of a small neighborhood system, it captures the textural structure well and obtains
Fig. 5. Comparison of our algorithm with GC and RW for finding weak boundaries. Given two labels (Green and Blue) and the original images in (a) (top: image created with a black circle with four erased sections; bottom: image created with a 3 × 3 black grid with four erased sections), (b), (c), and (d) are the segmentation results of GC, RW, and our algorithm (c = 4 × 10^{-4}), respectively.
Fig. 6. Comparison of our algorithm (c = 10^{-6}) with GC and RW on synthetic textured images: (a) original, (b) GC, (c) RW, (d) our algorithm.
object details. In Fig. 6, we used synthetic images consisting of four or five different kinds of textures. This is a texture segmentation problem in which one texture is extracted from among them. The segmentation results in Fig. 6(d) show that our algorithm produced more reliable texture segmentations on these synthetic textured images than GC and RW. Quantitative comparisons. The previous two situations often arise in natural images. Now, we compare the segmentations obtained from GC, RW, and our
Fig. 7. Comparison of our algorithm with RW and GC on natural images: (a) original, (b) GC, (c) RW, (d) our algorithm (c = 4 × 10^{-4}). a_o is the accuracy rate in (10); the values reported for the six test images are:

(b) GC      (c) RW      (d) Our algorithm
0.795424    0.799471    0.876489
0.628529    0.571411    0.768381
0.754095    0.578121    0.847204
0.613335    0.570816    0.706983
0.767218    0.791435    0.825402
0.466541    0.542821    0.869866
algorithm on natural images. We utilized a dataset of natural images for which human subjects provide foreground/background labels as ground truth (the Berkeley Segmentation Dataset [17]). For quantitative comparisons, the similarity between the segmentation result and the preset ground truth was measured using a normalized overlap a_o [7]:

a_o = |R ∩ G| / |R ∪ G|,   (10)

where R is the set of pixels assigned as the foreground by the segmentation result and G is the corresponding set from the ground truth. In this work, it was used as the accuracy measure of the image segmentation. For these experiments, we chose natural images with highly textured (cluttered) regions or with similar color distributions between the foregrounds and backgrounds. In Fig. 7, the segmentations produced by the three different algorithms on these natural images are shown. Compared with the segmentations from GC and RW in Fig. 7, our algorithm produces better segmentations both qualitatively and quantitatively. The quantitative comparison confirms the relevance and accuracy of our algorithm.
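As a small illustration (our own sketch, not part of the paper), the normalized overlap of (10) can be computed from two binary masks as follows:

```python
import numpy as np

def normalized_overlap(pred_mask, gt_mask):
    """Accuracy rate a_o = |R intersect G| / |R union G| of Eq. (10)
    for boolean foreground masks of the same shape."""
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return intersection / union if union > 0 else 1.0
```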
4 Conclusion
This paper presents a novel generative image segmentation model in the Bayesian framework. More importantly, we provide a new interpretation of RWR for image segmentation. Although RW [6] is also based on the random walks concept, our work is conceptually different from RW and produces a significant improvement in performance, as shown in the experiments. The key difference between RW and our work is "first arrival probability vs. average probability". In [6], the score between a pixel and each label is defined by the first arrival probability that a random walker starting at the pixel reaches a seed. In our work, on the other hand, it is defined as the average probability that a random walker starting at one of the seeds stays at the pixel. Our approach has several advantages for image segmentation. First, owing to the generative segmentation model with RWR, it can obtain segmentations with just a single label. Second, it is less dependent on the number of seeds, because the likelihood is computed as the average of the relevance scores of all the seeds. Finally, it gives qualitatively and quantitatively better segmentations on natural images. Generally, a large neighborhood system is needed for obtaining object details, because it captures image structure well. Since this incurs a high computational cost, many efficient methods, such as multi-scale approaches, have been proposed. Our method is an alternative solution, since RWR considers all possible paths between two nodes in a small neighborhood system. For the computation of RWR, the restarting probability c was chosen empirically in this work. However, it is not optimal for every image; if we can control it well, better segmentation results will be obtained. Thus, our future work will include the automatic selection of the optimal value of this parameter.
Acknowledgement. This research was supported in part by the Defense Acquisition Program Administration and Agency for Defense Development, Korea, through the Image Information Research Center under the contract UD070007AD, and in part by the MKE (Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) Support program supervised by the IITA (Institute of Information Technology Advancement) (IITA-2008-C1090-0801-0018).
References 1. Mortensen, E.N., Barrett, W.A.: Interactive segmentation with intelligent scissors. Graphical Models in Image Process. 60(5), 349–384 (1998) 2. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: Active contour models. IJCV V1(4), 321–331 (1988) 3. Sethian, J.A.: Level Set Methods and Fast Marching Methods: Evolving Interfaces in Computational Geometry, Fluid Mechanics, Computer Vision, and Materials Science. Cambridge University Press, Cambridge (1999) 4. Boykov, Y., Funka-Lea, G.: Graph cuts and efficient n-d image segmentation. IJCV 70(2), 109–131 (2006) 5. Bai, X., Sapiro, G.: A geodesic framework for fast interactive image and video segmentation and matting. In: Proc. ICCV 2007, pp. 1–8 (2007) 6. Grady, L.: Random walks for image segmentation. PAMI 28(11), 1768–1783 (2006) 7. Sinop, A.K., Grady, L.: A seeded image segmentation framework unifying graph cuts and random walker which yields a new algorithm. In: Proc. ICCV (2007) 8. Pan, J.Y., Yang, H.J., Faloutsos, C., Duygulu, P.: Automatic multimedia crossmodal correlation discovery. In: KDD 2004, pp. 653–658 (2004) 9. Bishop, C.M.: Neural Networks for Pattern Recognition. Oxford University Press, Oxford (1995) 10. Grady, L., Schwartz, E.L.: Isoperimetric graph partitioning for image segmentation. PAMI 28(3), 469–475 (2006) 11. Zhou, D., Bousquet, O., Lal, T.N., Weston, J., Scholkopf, B.: Learning with local and global consistency. In: NIPS (2003) 12. He, J., Li, M., Zhang, H.J., Tong, H., Zhang, C.: Manifold-ranking based image retrieval. In: ACM Multimedia, pp. 9–16 (2004) 13. Tong, H., Faloutsos, C.: Center-piece subgraphs: problem definition and fast solutions. In: KDD 2006, pp. 404–413 (2006) 14. Sun, J., Qu, H., Chakrabarti, D., Faloutsos, C.: Neighborhood formation and anomaly detection in bipartite graphs. In: ICDM 2005, pp. 418–425 (2005) 15. Shi, J., Malik, J.: Normalized cuts and image segmentation. PAMI 22(8), 888–905 (2000) 16. Tong, H., Faloutsos, C., Pan, J.Y.: Fast random walk with restart and its applications. In: ICDM 2006, pp. 613–622 (2006) 17. Martin, D., Fowlkes, C., Tal, D., Malik, J.: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: Proc. ICCV, vol. 2, pp. 416–423 (2001)
Background Subtraction on Distributions Teresa Ko, Stefano Soatto, and Deborah Estrin Vision Lab Computer Science Department University of California, Los Angeles 405 Hilgard Avenue, Los Angeles – CA 90095 {tko,soatto,destrin}@cs.ucla.edu
Abstract. Environmental monitoring applications present a challenge to current background subtraction algorithms that analyze the temporal variability of pixel intensities, due to the complex texture and motion of the scene. They also present a challenge to segmentation algorithms that compare intensity or color distributions between the foreground and the background in each image independently, because objects of interest such as animals have adapted to blend in. Therefore, we have developed a background modeling and subtraction scheme that analyzes the temporal variation of intensity or color distributions, instead of either looking at temporal variation of point statistics, or the spatial variation of region statistics in isolation. Distributional signatures are less sensitive to movements of the textured background, and at the same time they are more robust than individual pixel statistics in detecting foreground objects. They also enable slow background update, which is crucial in monitoring applications where processing power comes at a premium, and where foreground objects, when present, may move less than the background and therefore disappear into it when a fast update scheme is used. Our approach compares favorably with the state of the art both in generic lowlevel detection metrics, as well as in application-dependent criteria.
1 Introduction Background subtraction is a popular pre-processing step in many visual monitoring applications, as it facilitates the detection of objects of interest (“foreground”). Even when the cameras are fixed in the infrastructure, however, naive background modeling and subtraction results in large numbers of false detections because of changes in illumination and fine-scale motion in the scene. Natural environments such as the forest canopy present an extreme challenge because the foreground objects, by necessity, blend with the background, and the background itself changes due to the motion of the foliage and the rapid transition between light and shadow. For instance, images of birds at a feeder station exhibit a larger per-pixel variance due to changes in the background than due to the presence of a bird. Rapid background adaptation fails because birds, when present, are often moving less than the background and often end up being incorporated into it. Even a summary inspection of a short video will easily convince the reader that neither analysis of the temporal variation of a single pixel, common to many background subtraction methods, nor analysis of the spatial statistics of each image in isolation, common to many image segmentation algorithms, is sufficient to detect the presence
Fig. 1. Birds are difficult to detect due to their similarity with the background and the large temporal variability of the background. Panels: Feeder Station Webcam data set, Feeder Station Camcorder data set, intensity value of a pixel over time, and patch difference over time. Examination of the intensity value of a pixel over time reveals minimal variability in the presence of birds (gray regions). This motivates the use of distributional signatures.
of birds. This is illustrated in Fig. 1. Therefore, we advocate background processing algorithms that analyze not individual images but video, by comparing not single-pixel statistics, but spatial distributions of pixel intensities or color. An additional peculiarity of environmental monitoring sequences is the coarse temporal sampling, dictated by energy considerations since the cameras deployed in natural environments are batteryoperated. This makes learning the temporal dynamics of the background motion, as done in Dynamic Textures [1], impossible. On the other hand, only coarse localization of foreground objects is required, as input to subsequent stages where scientists can measure biodiversity by counting the number of birds visiting the feeder station, or placing a bounding box for subsequent processing such as species classification. The benefit of an automated approach to background modeling and foreground detection is readily measured in the amount of person-time saved by scientists in analyzing these long and tedious sequences. The resulting approach, consisting of analyzing the temporal variation of intensity distributions, rather than pixel values, is a departure from traditional background subtraction schemes. We represent the signature of each pixel using a distribution of pixel intensities in a neighborhood, and use the Bhattacharyya distance to compare such distributions over time. This distribution signature is relatively insensitive to small movements of the highly textured background, and at the same time is not tied to individual pixel values for detecting foreground objects. This enables slower background updates, and therefore minimizes the probability that the foreground object be incorporated into the background. Indeed, the background update rate can be chosen depending on the application within a broad range. We use bird monitoring in natural scenes as a motivating application to bring attention to a far larger class of significant scenes not previously addressed in the literature. A number of pertinent questions about the impact of climate change on our ecosystem are most readily answered by monitoring fine-scale interactions between animals and plants and their environment. Such fine scale measurements of species distribution, feeding habits, and timing of plant blooming events require continuous monitoring in
the natural environment, and are plagued by the same challenges as the feeder station monitoring described in this paper. There is inherent pressure to increase spatial coverage at the cost of reducing the size of the objects of interest in the image, thereby creating a more challenging detection and recognition task. Similarly, increasing temporal coverage (lifetime) pushes for lower sampling rates limiting the applicable methods. The field of computer vision has made great strides in addressing challenging problems by simplifying the problem with key assumptions. When trying to use techniques developed previously for other use cases, we found that these assumptions did not hold under a large class of our use case scenes. In this paper, we demonstrate an approach that is more generally applicable to a larger set of natural scenes than previous work.
2 Related Work The most straightforward approach to segment foreground from background, frame differencing [2], thresholds the difference between two frames. Large changes are considered foreground. To resolve ambiguity due to slow moving objects, Kameda and Minoh [3] use a “double difference” that classifies foreground as a logical “add” of the pairwise difference between three consecutive frames. Another approach is to build a representation of the background that is used to compare against new images. One such approach captures a background image when no foreground objects are present, assuming some user control over the environment. A compromise between differencing neighboring frames and differencing against a known background image is to adapt the background over time by incrementally incorporating the current image into the background. Migliore et al. [4] integrate frame differencing and background modeling to improve overall performance. As needed, added complexity in the model would allow for added complexity in the background scene. W 4 [5] was one of the first to incorporate more powerful statistics by modeling the variance found in a set of background images with the maximum and minimum intensity value and the maximum difference between consecutive frames. Pfinder [6] uses the mean and the variance of pixel value. If all that is known about a distribution is the mean and variance, the most reasonable assumption based on maximal entropy is the Gaussian distribution. The assumption then is that the pixel value follows a Gaussian distribution, and a likelihood model is used to compare the likelihood of background and foreground for a particular pixel. When this assumption does not adequately account for the variance, a Mixture of Gaussians (MoG) can be used [7,8] to further improve the accuracy of the estimate. A MoG model is capable of handling a range of realistic scenarios, and is widely used [9,10]. Rather than extending the MoG model, Elgammal et al. [11] show it is possible to achieve greater accuracy under the same computational constraints as the MoG when using a non-parametric model of the background. Another significant contribution of this work was the incorporation of spatial constraints into the formulation of foreground classification. In the second phase of their approach, pixel values that could be explained away by distributions of neighboring pixels were reclassified as background, allowing for greater resilience against dynamic backgrounds. Sheikh and Shah unify the temporal and spatial consistencies into a single model [12]. The result is highly accurate
Background Subtraction on Distributions
279
segmentations of objects even when occluding a dynamic background. Similar models include [13,14,15]. A different approach, taken by Oliver et al. [16], looks at global statistics rather than the local constraints used in the previously described work. Similar to eigenfaces, a small number of “eigenbackgrounds” are created to capture the dominant variability of the background. The assumption is that the remaining variability is due to foreground objects. A threshold on the difference between the original image and the part of the image that can be generated by the eigenbackgrounds differentiates the foreground objects from the background. Rather than implicitly modeling the background dynamics, many approaches have explicitly modeled the background as composed of dynamic textures [17]. Wallflower [18] uses a Wiener filter to predict the expected pixel value based on the past K samples whose α’s are learned. Monnet et al. [1] model the background as a dynamic texture, where the first few principal components of the variance of a set of background images (similar to [16]) comprise an autoregressive model in the same vein as [18]. For computational efficiency, Kahl et al. [19] illustrate that using “eigenbackgrounds” on shiftable patches in an image is sufficient to capture the variance in dynamic scenes. The inspiration for this work came from Rathi’s success in single image segmentation using the Bhattacharyya distance [20]. Similarly, the consistency of the Bhattacharyya distance under motion is used for tracking in [21]. While many distances between distributions exist (e.g., Kullback-Leibler divergence [22], χ2 [23], Earth Mover’s Distance [24], etc.), the Bhattacharyya distance is considered here due to its low computational cost.
3 Background Model

In our approach, a background model is constructed for each pixel location, including pixel values with temporal and spatial proximity. A distribution is constructed for each pixel location on the current image and compared to the background model for classification.

3.1 Modeling the Background

The background model for the pixel located at the ith row and jth column is in general a non-parametric density estimate, denoted by p_ij(x). The feature vector, x ∈ R^3, is some color-space representation of the pixel value. For computational reasons, we consider the simplest estimate, given by the histogram

p_ij(x) = (1/|S|) Σ_{s∈S} δ(s − x),   (1)

where S, the set of pixel values contributing to the estimate, is defined as

S = {x_t(a, b) | |a − i| < c, |b − j| < c, 0 ≤ t < T},   (2)
where xt (a, b) is the colorspace representation of the pixel at the ath row and bth column of the image taken at time t. The feature vector, x, is quantized to better approximate the true density.
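A minimal sketch of this estimate (our illustration; the window radius c, the number of bins, and the array layout are assumptions, not values from the paper): the histogram of Eqs. (1)-(2) pools every quantized color in a small spatial window around (i, j) over the T background frames.

```python
import numpy as np

def patch_histogram(frames, i, j, c=3, bins=16):
    """Quantized color histogram over the set S of Eq. (2).

    frames : uint8 array of shape (T, H, W, 3) with values in [0, 256).
    Returns a normalized histogram over bins**3 quantized colors.
    """
    window = frames[:, max(0, i - c + 1):i + c, max(0, j - c + 1):j + c, :]
    q = (window.reshape(-1, 3).astype(int) * bins) // 256   # quantize each channel
    idx = (q[:, 0] * bins + q[:, 1]) * bins + q[:, 2]        # flatten to a single bin index
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / hist.sum()
```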
Elgammal et al. and Sheikh and Shah similarly model the background as a non-parametric density estimate. A generalized form of Eq. (1),

p_ij(x) = (1/|S|) Σ_{s∈S} K(s − x),   (3)

where K is a kernel function that satisfies ∫K(x)dx = 1, K(x) = K(−x), ∫xK(x)dx = 0, and ∫xx^T K(x)dx = I_{|x|}, can better approximate the true distribution when the size of S is small. Elgammal et al. construct a model using an independent Gaussian kernel for each pixel using only sample points of close temporal proximity,

S = {x_t(i, j) | 0 ≤ t < T}.   (4)
While Sheikh and Shah also use the independent Gaussian kernel, they model the entire background with a single density estimate of the same form as Eq. (1), except the feature vector, x, is appended with the pixel location, (i, j). The set of pixels used to construct the estimate is

S = {x_t(a, b) | 0 ≤ a < h, 0 ≤ b < w, 0 ≤ t < T}   (5)
for a w × h image. Also, Sheikh and Shah adopt the simpler δ kernel function when the algorithm is optimized for speed.

3.2 Temporal Consistency

To detect foreground at time τ, a distribution, q_{ij,τ}(x), is similarly computed for the pixel located in the ith row and jth column using only the image at time τ, according to

q_{ij,τ}(x) = (1/|S_τ|) Σ_{s∈S_τ} δ(s − x),   (6)

where S_τ, the set of pixel values contributing to the estimate, is defined as

S_τ = {x_τ(a, b) | |a − i| < c, |b − j| < c}.   (7)
The Bhattacharyya distance between q_{ij,τ}(x) and the corresponding background model distribution for that location, p_{ij,τ−1}(x), calculated from the previous frames, is computed to determine the foreground/background labeling. The Bhattacharyya distance between two distributions is given by

d = ∫_X √( p_{ij,τ−1}(x) q_{ij,τ}(x) ) dx,   (8)

where X is the range of valid x's. d ranges from 0 to 1, and larger values imply greater similarity between the distributions. A threshold on the computed distance, d, is used to distinguish between foreground and background.
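For discrete histograms such as the one sketched above, Eq. (8) reduces to a sum over bins. The following fragment (our own hedged sketch; the threshold value is only a placeholder, since the paper selects it experimentally) shows the comparison used for labeling:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient of Eq. (8) for two discrete histograms."""
    return float(np.sum(np.sqrt(p * q)))

def is_background(bg_hist, cur_hist, threshold=0.7):
    """Label the pixel as background when the current-frame patch histogram
    is similar enough to the stored background histogram."""
    return bhattacharyya(bg_hist, cur_hist) >= threshold
```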
While subtle, enforcing spatial consistency in the current image results in dramatic improvements in the performance of our scheme compared to previous work, as will be demonstrated in the following sections. This approach can be viewed as a hybrid between pixel- and texture-level comparisons. The distribution computed on the image at time τ, q_{ij,τ}(x), could be viewed as a feature vector describing the texture at that location. But, rather than building a density estimate in this feature space as the background model, we treat each pixel independently in our distribution. While previous approaches have mentioned the use of generic image statistics as a pixel-level representation which could take into account neighboring statistics, a direct application of their approach would have been prohibitively expensive in terms of memory and computation. Our approach is one that compromises model precision so as to feasibly fit on commonplace devices and, as we shall show later on, still works well for our application domain. We expect that this approach will work well for large but consistent background changes and foreground objects that are similar in appearance to the background, exhibit sudden motion and periods of stationarity, and do not necessarily dominate the scene.

3.3 Updating the Background

The background is adapted over time so that the probability at time τ is

p_{ij,τ}(x) = (1 − α) p_{ij,τ−1}(x) + α q_{ij,τ}(x),   (9)

where α is the adaptation rate of the background model.
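In histogram form the update of Eq. (9) is a simple convex blend of the stored and current distributions (again a sketch under our own naming; the α value is illustrative, and a selective variant would skip the update wherever a bird was detected):

```python
def update_background(bg_hist, cur_hist, alpha=0.05):
    """Running update of Eq. (9): new model = (1 - alpha) * old + alpha * current."""
    return (1.0 - alpha) * bg_hist + alpha * cur_hist
```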
4 Experimentation We compare our approach against a set of background subtraction methods that handle large variances in the background as well as frame differencing for a baseline. In our experiments with Elgammal’s Non-Parametric Background model, we vary the decision threshold on the difference image and α, which determines the relative pixel values considered as shadowed. A difference image was constructed from the probability of a pixel value coming from the background model. The implementation was kindly provided by the author of the paper. We implemented the speed-optimized version of Sheikh’s Bayesian model due to the computational constraints of the application domain, and varied the number of bins used to approximate the background model. We also tested our implementation of Oliver’s Eigenbackground model with a varying number of eigenbackgrounds used to model the background. The results presented are the best precision-recall pairs found across these parameters. The subsequent parts of the algorithm suggested by Elgammal et al. and Sheikh and Shah used to provide clean segmentations (i.e., morphological operations and minimum cuts on Markov Random Field, respectively) could arguably be used by any of the algorithms tested, so the informativeness of the results could be obscured by the varying abilities of these operations. Therefore, when comparing these algorithms, we look only at the construction of the background model and the subsequent comparison against the current frame.
4.1 Data Sets

We collected video sequences from two independent cameras pointed at a feeder station. This feeder station had been previously set up to aid biologists in observing avian behavior. The characteristics of the data sets evaluated in this paper are:
– Feeder Station Webcam: Images are captured by webcam at 1 frame per second. The size of these images is 480 × 704.
– Feeder Station Camcorder: Images are captured at full NTSC speed. The size of the resulting images is 480 × 640.
– Sheikh’s: This dataset consists of 70 frames of only background, followed by a person and a car traversing the scene in the opposite direction.
Examples of these data sets are shown in Figs. 2, 4, and 5. Each data set was hand labeled by defining the outline of the foreground objects.

4.2 Metrics

We use several metrics in our evaluation to try to best characterize the performance. The most direct measure we use is the precision and recall of each pixel, following the methodology of Sheikh’s work [12]. They are defined as follows:

Precision = #TruePositives / (#TruePositives + #FalsePositives),   (10)

Recall = #TruePositives / (#TruePositives + #FalseNegatives).   (11)
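A short sketch of these two metrics computed from boolean foreground masks (our illustration, not the authors' evaluation code):

```python
import numpy as np

def pixel_precision_recall(pred_mask, gt_mask):
    """Pixel-level precision and recall of Eqs. (10)-(11)."""
    tp = np.logical_and(pred_mask, gt_mask).sum()
    fp = np.logical_and(pred_mask, ~gt_mask).sum()
    fn = np.logical_and(~pred_mask, gt_mask).sum()
    precision = tp / (tp + fp) if tp + fp > 0 else 1.0
    recall = tp / (tp + fn) if tp + fn > 0 else 1.0
    return precision, recall
```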
This measures the accuracy of the approach at the pixel level, but does not capture precisely its ability to give reasonable detections of birds for higher level classification. In some cases, performance may easily be improved by morphological operations or the use of Markov Random Fields, as evidenced in [11,12]. On the other hand, if the predicted foreground pixels that do correspond to birds are not connected (or cannot reasonably be connected through some morphological operation), each disjoint set of foreground pixels could be interpreted as a bird on its own, resulting in partial birds being fed to a classifier or, worse, discarded because the region is too small. To better quantify the performance of our approach, we also look at the precision and recall of birds. If 10% of the bird is detected as a single blob, then it is counted as a true positive. Otherwise, the bird is considered a miss, and counted as a false negative. The smaller blobs are discarded, and those remaining blobs are counted as false positives. These values are used to compute the final precision and recall of birds. More important is the end goal of understanding the nature of a bird’s visit. At the application level, we would like an algorithm that successfully detects a wide range of objects, rather than the one that may happen to dominate our test sequence. To that end, we treat a visit to the feeder station as one event. Visit accuracy is the percent of visits where the bird is detected at least once.
4.3 Results

A challenge of the data sets considered is the absence of long periods of only background. For the Feeder Station data sets, we tested with 20 background images. For Sheikh’s data set, which had more background images, we used 100 background images. As shown in Figure 3, our approach significantly outperforms the others on the Feeder Station data sets, and performs comparably on Sheikh’s data set. Representative frames of the Feeder Station Webcam are shown in Figure 2 and of the Feeder Station Camcorder in Figure 4. While most birds are detected correctly, our approach has trouble when the bird is similar to parts of the background, such as in image A in Figure 2, where the bird is
Fig. 2. Results on the Feeder Station Webcam Dataset (rows A–D; columns: Original, Manual, Frame Diff, Elgammal, Sheikh, Ours, and Original, Manual, Oliver, Kahl, Monnet, Ours). Parameters were varied to find the maximum precision subject to pixel recall > 50%. 20 background images were used to train the background models. Our approach works well under most situations, but is unable to deal well with narrow regions. This is shown in row A, where the tail is lost on the bottom bird. Another weakness arises when the background is of similar color; the bird is broken up into many segments, as shown in row B.
Fig. 3. Comparison of Elgammal et al., Sheikh and Shah, Oliver et al., Kahl et al., Monnet et al. and our approach using Precision-Recall curves on selected data sets. Our approach outperforms other approaches on the Feeder Station data sets and is comparable to others (as shown with Sheikh’s data set).
Fig. 4. Results on the Feeder Station Camcorder Dataset (rows A–C; columns: Original, Manual, Frame Diff, Elgammal, Sheikh, Ours, and Original, Manual, Oliver, Kahl, Monnet, Ours). Parameters were varied to find the maximum precision subject to pixel recall > 50%. 20 background images were used to train the background models. Most of the previous approaches are challenged by the slight movement of the camera during the sequence and respond most strongly to the movement of the leaves in the background. Our approach better filters out this movement. In some cases, the automatic approaches actually outperformed the manual labeling. In row A, the detected region in our approach corresponds to birds, verified after the fact.
separated into multiple segments. Surprisingly, though, in the Feeder Station Camcorder data set, our approach detected birds that were missed by the manual labeling, as in the upper left of image A in Figure 4.
Fig. 5. Results on Sheikh’s dataset (rows A–B; columns: Original, Manual, Frame Diff, Elgammal, Sheikh, Ours, and Original, Manual, Oliver, Kahl, Monnet, Ours). Parameters were varied to find the maximum pixel precision subject to pixel recall > 80%. 100 background images were used to train the background model. Incorporating neighboring pixels in the current image for classification, our approach results in some inherent blurring of decision boundaries, resulting in blob-like classifications rather than the more detailed boundaries of other schemes. In our application domain, this is not a significant drawback.
The suggested methodology for Oliver et al., Kahl et al., and Monnet et al. is to include all images in the eigenbackground computation, with the assumption that foreground objects do not persist in the same location for long periods of time. If they are infrequent, they will not constitute the principal dimensions of variance. Adopting this assumption, we tested all the algorithms against a background model with 100 frames (where birds are at times present) and see in most cases a degradation in performance as compared to when only 20 frames are used for the background, shown in Figure 3. One drawback of our approach (though not particularly relevant for our application) is that accurate boundaries are lost. A comparison is made against the data set used in [12] to illustrate this drawback. We see in this case, as shown in Figure 5, that the algorithm suffers from its inherent blurring, missing the full contour of the person in image A. This is quantified in Figure 3. Interestingly enough, frame differencing works relatively well on this data set, especially if coupled with some morphological filtering. Due to the nature of the monitoring application, background adaptation is particularly difficult, yet obviously necessary. Scientists are interested in monitoring the outdoors for prolonged periods of time and do not necessarily have control over the environment to gather background images whenever desired. Blind adaptation is very sensitive to the rate of adaptation, as shown in Figure 6. If too slow, the background model does not adequately account for the changes. If too fast, stationary birds go undetected due to integration into the background. Updating only the background model of pixel locations that are clearly background (far from the decision boundary of foreground/background) results in less sensitivity in the choice of α. Since we account for most of the pixel value variance as movement, the remaining changes are slow and due mostly to the lighting changes during the day. When sampling one
Fig. 6. When blindly updating the background, only a limited range of α’s (the adaptation rate of the background) maintain high precision at a fixed recall rate. By selectively updating a pixel’s background model when no bird is present at that location, the accuracy is less affected by the selected α.
Fig. 7. Results of our approach on sample images over an hour using a background update every minute (rows A–C; columns: Original, Manual, Ours). Even with very infrequent updates, our approach is able to continually segment birds. Remaining unaddressed situations include specular highlights resulting in false positives, as shown in row C.
frame per minute, we still achieve 85.21% precision at 50% recall. This allows greater flexibility in system design, including situations where full video frame rate capture and analysis is either infeasible or impractical. Sample frames are shown in Figure 7, illustrating the background model’s ability to continuously resolve images over the course of an hour. Figure 8 shows the corresponding bird precision-recall curves for the Feeder Station Webcam and Camcorder using 20 background images. Most missed birds occurred at the boundary of the image. At bird recall > 59.5% on the Feeder Station Webcam data set, our approach detected 37 out of 39 birds during their visit. The two undetected birds were captured only in one frame each and either suffered from interlacing effects or were only partially shown. At bird recall > 48.2% on the Feeder Station Camcorder data set, 4 out of 4 birds were detected. In Figure 9, we provide a subset of cropped images detected
Fig. 8. Precision-Recall Curves for Feeder Station Webcam and Camcorder data sets. Performance suffers in the Feeder Station Webcam when birds are cut off by the capture window. At the point indicated by the purple dot, we detect 37 out of 39 and 4 out of 4 bird visits on the respective data sets.
Fig. 9. Sample segmentations from our approach, including both true detections and false alarms
by our approach. We improved the selected area of the detected bird by lowering the pixel difference threshold around our detection. A simple clustering algorithm could separate these images into a reasonably sized set for a biologist to view and provide domain-specific knowledge, such as the species of the bird.
5 Conclusion In this paper, we call to attention several inherent characteristics of natural outdoor environmental monitoring that pose a challenge to automated background modeling and subtraction. Namely, foreground objects tend to, by necessity, blend into the background, and the background exhibits large variations due to non-stationary objects (moving leaves) and rapid transitions from light to shadow. These conditions present a challenge to the state of the art, which we have addressed with an algorithm that exhibits comparable performance also on standard surveillance data sets.
A side benefit of this approach is that it has relatively low memory requirements, does not require floating point operations, and for the most part, can run in parallel. This makes it a good candidate for embedded processing, where Single Instruction, Multiple Data (SIMD) processors are available. Because the scheme does not depend on a high sampling rate, needed for optical flow or dynamic texture approaches, it lends itself to an adjustable sampling rate. This ability to provide an embedded processor that can easily capture objects facilitates scientific observation of phenomena that are consistently difficult to reach (e.g., environmental, space, underwater, and more general surveillance monitoring).
Acknowledgments. This material is based upon work supported by the Center for Embedded Networked Sensing (CENS) under the National Science Foundation (NSF) Cooperative Agreement CCR-012-0778 and #CNS-0614853, by the ONR under award #N00014-08-1-0414, and by the AFOSR under #FA9550-06-1-0138. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF, the ONR or the AFOSR.
References 1. Monnet, A., Mittal, A., Paragios, N., Ramesh, V.: Background modeling and subtraction of dynamic scenes. In: International Conference on Computer Vision, pp. 1305–1312 (2003) 2. Jain, R., Nagel, H.: On the analysis of accumulative difference pictures from image sequences of real world scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence 1(2), 206–214 (1979) 3. Kameda, Y., Minoh, M.: A human motion estimation method using 3-successive video frames. In: International Conference on Virtual Systems and Multimedia, pp. 135–140 (1996) 4. Migliore, D., Matteucci, M., Naccari, M., Bonarini, A.: A revaluation of frame difference in fast and robust motion detection. In: VSSN 2006. Proceedings of the 4th ACM international workshop on Video surveillance and sensor networks, pp. 215–218 (2006) 5. Haritaoglu, I., Harwood, D., Davis, L.S.: W4: real-time surveillance of people and their activities. IEEE Transactions on Pattern Analysis and Machine Intelligence 22(8), 809–830 (2000) 6. Wren, C., Azarbayejani, A., Darrell, T., Pentland, A.: Pfinder: Real-time tracking of the human body. IEEE Transactions on Pattern Analysis and Machine Intelligence 19(7), 780–785 (1997) 7. Stauffer, C., Grimson, W.: Adaptive background mixture models for real-time tracking. In: Computer Vision and Pattern Recognition, vol. 2, pp. 246–252 (1999) 8. Friedman, N., Russell, S.: Image segmentation in video sequences: A probabilistic approach. In: 13th Conference on Uncertainty in Aritficial Intelligence, pp. 175–181 (1997) 9. Harville, M.: A framework for high-level feedback to adaptive, per-pixel, mixture-ofgaussian background models. In: European Conference on Computer Vision, pp. 543–560 (2002) 10. Tian, Y.L., Lu, M., Hampapur, A.: Robust and efficient foreground analysis for real-time video surveillance. In: Computer Vision and Pattern Recognition, pp. 1182–1187 (2005) 11. Elgammal, A., Harwood, D., Davis, L.: Non-parametric model for background subtraction. In: European Conference of Computer Vision, pp. 751–767 (2000)
12. Sheikh, Y., Shah, M.: Bayesian modeling of dynamic scenes for object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 27(11), 1778–1792 (2005) 13. Mittal, A., Paragios, M.: Motion-based background subtraction using adaptive kernel density estimation. In: IEEE International Conference on Computer Vision and Pattern Recogntion, pp. 302–309 (2004) 14. Pless, R., Larson, J., Siebers, S., Westover, B.: Evaluation of local models of dynamic backgrounds. In: Computer Vision and Pattern Recognition, vol. II, pp. 73–78 (2003) 15. Ren, Y., Chua, C.-S., Ho, Y.-K.: Motion detection with nonstationary background. Machine Vision and Applications 13, 332–343 (2003) 16. Oliver, N.M., Rosario, B., Pentland, A.P.: A bayesian computer vision system for modeling human interactions. IEEE Transactions on Pattern Analysis and Machine Intelligence 22, 831–843 (2000) 17. Doretto, G., Cremers, D., Favaro, P., Soatto, S.: Dynamic texture segmentation. In: International Conference of Computer Vision, pp. 1236–1242 (2003) 18. Toyama, K., Krumm, J., Brumitt, B., Meyers, B.: Wallflower: Principles and practice of background maintenance. In: Seventh International Conference on Computer Vision, pp. 255–261 (1999) 19. Kahl, F., Hartley, R., Hilsenstein, V.: Novelty detection in image sequences with dynamic background. In: 2nd Workshop on Statistical Methods in Video Processing (SMVP), European Conference on Computer Vision (2005) 20. Rathi, Y., Michailovich, O., Malcolm, J., Tannenbaum, A.: Seeing the unseen: Segmenting with distributions. In: Signal and Image Processing (2006) 21. Comaniciu, D., Ramesh, V., Meer, P.: Kernel-based object tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence 25(5), 564–577 (2003) 22. Kullback, S., Leibler, R.: On information and sufficiency. Annals of Mathematical Statistics 22, 79–86 (1951) 23. Lancaster, H.O.: The chi-squared distribution. Biometrics 27, 238–241 (1971) 24. Rubner, Y., Tomasi, C., Guibas, L.J.: A metric for distributions with applications to image databases. In: IEEE International Conference on Computer Vision (1998)
A Statistical Confidence Measure for Optical Flows Claudia Kondermann1, Rudolf Mester2 , and Christoph Garbe1, 1
Interdisciplinary Center for Scientific Computing, University of Heidelberg, Germany {claudia.kondermann,christoph.garbe}@iwr.uni-heidelberg.de 2 Visual Sensorics and Information Processing Lab, University of Frankfurt, Germany
[email protected]
Abstract. Confidence measures are crucial to the interpretation of any optical flow measurement. Even though numerous methods for estimating optical flow have been proposed over the last three decades, a sound, universal, and statistically motivated confidence measure for optical flow measurements is still missing. We aim at filling this gap with this contribution, where such a confidence measure is derived, using statistical test theory and measurable statistics of flow fields from the regarded domain. The new confidence measure is computed from merely the results of the optical flow estimator and hence can be applied to any optical flow estimation method, covering the range from local parametric to global variational approaches. Experimental results using state-of-the-art optical flow estimators and various test sequences demonstrate the superiority of the proposed technique compared to existing ’confidence’ measures.
1 Introduction It is of utmost importance for any optical flow measurement technique to give a prediction of the quality and reliability of each individual flow vector. This was already asserted in 1994 in the landmark paper by Barron et al. [1], where the authors stated that ’confidence measures are rarely addressed in literature’ even though ’they are crucial to the successful use of all [optical flow] techniques’. There are mainly four benefits of confidence measures: 1st ) unreliable flow vectors can be identified before they cause harm to subsequent processing steps, 2nd ) corrupted optical flow regions can be identified and possibly recovered by model-based interpolation (also denoted as ’inpainting’), 3rd ) existing optical flow methods can be improved, e.g. by integrating the confidence measure into variational approaches, 4th ) fast, structurally simple optical flow methods in combination with a confidence measure can replace slow, complicated ones. Yet, the confidence measures known today are inadequate for the assessment of the accuracy of optical flow fields due to the following reasons: First, many confidence measures infer confidence values based on the local structure of the image sequence only, without taking into account the computed flow field. Second, most confidence measures are directly derived from specific optical flow computation techniques and, thus, can only be applied to flow fields computed by this method. In fact, so far no generally applicable
The authors thank the German Research Foundation (DFG) for funding this work within the priority program ”Mathematical Methods for Time Series Analysis and Digital Image Processing” (SPP1114).
confidence measure exists, which takes into account the computed flow field without being limited to a special type of flow computation method. But if the same model for flow and confidence estimation is used the confidence measure only verifies the restrictions already imposed by the flow computation model. Thus, errors are often not detected as the flow follows the model. Hence, we opt against using the same motion model for confidence estimation. Third, none of the proposed measures is statistically motivated despite the notion ’confidence measure’. Therefore, in this paper we propose a statistical confidence measure, which is generally applicable independently of the flow computation method. An additional benefit of our method is its adaptability to application-specific data, i.e. it exploits the fact that typical flow fields can be very different for various applications.
2 Related Work The number of previously proposed confidence measures for optical flow fields is limited. In addition to the comparison by Barron et al. [1], another comparison of different confidence measures was carried out by Bainbridge and Lane [2]. In the following we will present confidence measures that have been proposed in the literature so far. Many of these rely on the intrinsic dimensionality of the image sequence. According to [3] the notion ’intrinsic dimension’ is defined as follows: ’a data set in n dimensions is said to have an intrinsic dimensionality equal to d if the data lies entirely within a ddimensional subspace’. It has been applied to image processing by Zetzsche and Barth in [4] in order to distinguish between edge-like and corner-like structures in an image. Such information can be used to identify reliable locations, e.g. corners, in an image sequence for optical flow computation, tracking and registration. A continuous formulation has recently been proposed by Felsberg et al. [5]. To make statements on the intrinsic dimension of the image sequence and thus on the reliability of the flow vector, Haussecker and Spies [6] suggested three measures for the local structure tensor method [7]: the temporal coherency measure, the spatial coherency measure and the corner measure, which is derived from the two former. All three follow the concept that reliable motion estimation is only possible at those locations in an image sequence where the intrinsic dimension is two, which refers to fully two-dimensional variations in the image plane (e.g. at corners). In case of homogeneous regions and aperture problems, which both correspond to lower intrinsic dimensions, the measures indicate low reliability. Other examples for confidence measures based on the image structure are the gradient or Hessian of the image sequence or the trace or smallest eigenvalue of the structure tensor [1]. All of these measures are examples for confidence measures which assess the reliability of a given flow vector exclusively based on the input image sequence. In this way they are independent of the flow computation method but they do not take into account the computed flow field. Other measures take into account the flow field but are derived from and thus limited to special flow computation methods. Examples are the confidence measure proposed by Bruhn and Weickert [8] for variational optical flow methods, which computes the local inverse of the variational energy to identify locations where the energy could not be
minimized, e.g. in cases where the model assumption is not valid. Hence, their approach assigns a low confidence value to these locations. Another example is our previously proposed measure for local optical flow methods [9]. In that paper, the idea is to learn a linear subspace of correct flow fields by means of principal component analysis and then derive the confidence from the distance of the original training flow and the projection into the learned subspace. Other confidence estimation methods have been suggested by Singh [10], Waxman et al. [11], Anandan [12], Uras et al. [13] as well as by Mester and Hoetter [14]. Yet, these are directly inherent to special optical flow computation methods not applied here. To obtain a globally applicable, statistically motivated confidence measure which takes into account the flow field, we first derive natural motion statistics from sample data and carry out a hypothesis test to obtain confidence values. Probability distributions for the estimation of optical flow fields have been used before by Simoncelli et al. [15], where the flow is estimated as the solution to a least squares problem. Yet, in their paper the distribution is a conditional based on the uncertainty in the brightness constancy equation. In contrast, we estimate the flow distribution from training data without prior assumptions. The errors in flow estimation have been analyzed by Fermüller et al. [16]. Linear prediction of optical flow by means of motion models has been suggested by Fleet et al. [17].
3 Natural Motion Statistics In order to draw conclusions on the accuracy of a flow vector, we examine the surrounding flow field patch (see Figure 1) of a predefined size (n×n×T , where n×n stands for spatial and T for temporal size). To obtain statistical information on the accuracy, we learn a probabilistic motion model from training data, which can be ground truth flow fields, synthetic flow fields, computed flow fields or every other flow field that is considered correct. In this way, e.g. even motion boundaries can be included in the model if they occur in the training data. If motion estimation is performed for an application domain where typical motion patterns are known a priori, the training data should of course reflect this. It is even possible to use the flow field for which we want to compute the confidence as training data, i.e. finding outliers in one single data set. This leads to a very general approach, which allows for the incorporation of different levels of prior knowledge. To compute the statistical model, the empirical mean m and covariance C are computed from the training data set containing the n × n × T flow patches, which are vectorized in lexicographical order. Hence, each training sample vector contains
Fig. 1. Examples of flow field patches from which the motion statistics are computed
p := 2n^2 T components, as it consists of a horizontal and a vertical optical flow component at each patch position. To estimate the accuracy of a given flow vector we carry out a hypothesis test based on the derived statistical model. Note that for results of higher accuracy it is advisable to rotate each training flow field patch four times (each time by 90 degrees) in order to estimate a zero mean vector of the distribution.
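A minimal sketch of this estimation step (our own illustration; the patch size, the array layout, and the omission of the 90-degree rotation augmentation are simplifying assumptions):

```python
import numpy as np

def patch_statistics(flow_fields, n=3, T=1):
    """Empirical mean m and covariance C of vectorized n x n x T flow patches.

    flow_fields : float array of shape (T_total, H, W, 2) holding training
    flows (u, v). Each vectorized patch has p = 2 * n * n * T components.
    """
    T_total, H, W, _ = flow_fields.shape
    half = n // 2
    samples = []
    for t0 in range(T_total - T + 1):
        for i in range(half, H - half):
            for j in range(half, W - half):
                patch = flow_fields[t0:t0 + T,
                                    i - half:i + half + 1,
                                    j - half:j + half + 1, :]
                samples.append(patch.ravel())
    samples = np.asarray(samples)
    return samples.mean(axis=0), np.cov(samples, rowvar=False)
```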
4 Hypothesis Testing

We want to test the hypothesis H0: ’The central flow vector of a given flow field patch follows the underlying conditional distribution given the remaining flow vectors of the patch.’ Let D denote the spatio-temporal image domain and V : D → R^p a p-dimensional real-valued random variable describing possible vectorized flow field patches. Testing the confidence of the central vector of a regarded flow patch boils down to specifying the conditional pdf of the central vector given the remainder of the flow patch, and comparing the candidate flow vector against this prediction, using a metric induced by the conditional pdf.

To define an optimal test statistic, we would need to know the correct distribution underlying the flow field patches. Yet, this distribution is unknown. Following standard practice in statistical test theory, we therefore choose the optimum test statistic for a reasonable approximation of the conditional pdf, namely that the conditional pdf of the central flow vector under H0 is a two-dimensional normal distribution. Even though this approximation is not precisely true, it still leads to a valid test statistic; only the claim that this is the uniformly most powerful test statistic is lost. Hence, to develop the test statistic we now assume that V is distributed according to the multivariate normal distribution described by the estimated parameters m and C,

V \sim \mathcal{N}(m, C)    (1)

with probability density function f : R^p → R,

f(v) = \frac{1}{(2\pi)^{p/2}\,|C|^{1/2}} \exp\left(-\frac{1}{2}(v - m)^T C^{-1} (v - m)\right).    (2)
We now derive the conditional distribution for the central vector given the remaining vectors of the patch. For a given image sequence location (x, y, t) ∈ D let v ∈ R^p correspond to the vectorized flow field patch centered on this location, and let (i, j), i < j, denote the line indices of v corresponding to the horizontal and vertical flow vector component of the central vector of the original patch. We partition v into two disjoint vectors, the central flow vector v_a, and the ’remainders’ v_b of the regarded flow patch:

v_a = (v_i, v_j)^T, \qquad v_b = (v_1, \ldots, v_{i-1}, v_{i+1}, \ldots, v_{j-1}, v_{j+1}, \ldots, v_p)^T.    (3)

The mean vector and covariance matrix C are partitioned accordingly:

m = \begin{pmatrix} m_a \\ m_b \end{pmatrix}, \qquad C = \begin{pmatrix} C_{aa} & C_{ab} \\ C_{ba} & C_{bb} \end{pmatrix}.
Then, the conditional distribution p(v_a | v_b) is a two-dimensional normal distribution with probability density function f_{a|b}, mean vector m_{a|b} and covariance matrix C_{a|b}:

m_{a|b} = m_a + C_{ab} C_{bb}^{-1} (v_b - m_b)    (4)

C_{a|b} = C_{aa} - C_{ab} C_{bb}^{-1} C_{ba}.    (5)
We stress that these first and second order moments of the conditional pdf are valid independent of the assumption of a normal distribution. To derive the test statistic let d_M : R^p → R_0^+,

d_M(v) = (v_a - m_{a|b})^T C_{a|b}^{-1} (v_a - m_{a|b}),    (6)

denote the squared Mahalanobis distance between v_a and the mean vector m_{a|b} given the covariance matrix C_{a|b}. The Mahalanobis distance is the optimal test statistic in case of a normally distributed conditional pdf of the central flow vector. This does not imply that the image data or the flow data are assumed to be normally distributed as well. Even though we do not know the conditional distribution, we choose the squared Mahalanobis distance as test statistic.

To carry out a hypothesis test (significance test), we have to determine quantiles of the distribution of the test statistic for the case that the null hypothesis to be tested is known to be true. To this end, we compute the empirical cumulative distribution function

G : R^+ \to [0, 1]    (7)

of the test statistic from training data. We obtain the empirical quantile function G^{-1} : [0, 1] → R^+ as

G^{-1}(q) = \inf\{x \in \mathbb{R} \mid G(x) \ge q\}.    (8)
Finally, to examine the validity of H0 we apply a hypothesis test

\varphi_\alpha : R^p \to \{0, 1\}    (9)

\varphi_\alpha(v) = \begin{cases} 0, & \text{if } d_M(v) \le G^{-1}(1-\alpha) \\ 1, & \text{otherwise,} \end{cases}    (10)

where \varphi_\alpha(v) = 1 indicates the rejection of the hypothesis H0. Based on this hypothesis test we would obtain a binary confidence measure instead of a continuous mapping to the interval [0, 1]. Furthermore, it would be inconvenient to recompute the confidence measure each time the significance level α is modified. Therefore, we propose to use the concept of p-values introduced by Fisher [18]. A p-value function Π maps each sample vector to the minimum significance level α for which the hypothesis would still be rejected, i.e.

\Pi : R^p \to [0, 1], \qquad \Pi(v) = \inf\{\alpha \in [0,1] \mid \varphi_\alpha(v) = 1\} = \inf\{\alpha \in [0,1] \mid d_M(v) > G^{-1}(1-\alpha)\}.    (11)

Hence, we finally arrive at the following confidence measure c : R^p → [0, 1],

c(v) = \Pi(v) = \inf\{\alpha \in [0,1] \mid d_M(v) > G^{-1}(1-\alpha)\}.    (12)
As the computation of the confidence measure in fact reduces to the computation of the conditional mean vector and covariance matrix given in (4) and (5), it can be carried out efficiently.
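The following NumPy sketch summarizes the whole test (illustrative names only, not the authors' code); it assumes the statistics m, C from Section 3, the index pair idx of the central vector inside the vectorized patch, and an array dM_train of test statistics computed on training patches, from which the empirical CDF G is taken.

```python
import numpy as np

def conditional_moments(m, C, idx):
    """Mean function and covariance of the central vector given the rest of the
    patch, Eqs. (4) and (5). idx = (i, j) are the positions of the central u, v."""
    a = np.array(idx)
    b = np.array([k for k in range(len(m)) if k not in idx])
    Cab = C[np.ix_(a, b)]
    Cbb_inv = np.linalg.inv(C[np.ix_(b, b)])
    mean_ab = lambda v: m[a] + Cab @ Cbb_inv @ (v[b] - m[b])
    C_ab = C[np.ix_(a, a)] - Cab @ Cbb_inv @ Cab.T
    return mean_ab, C_ab

def mahalanobis_sq(v, idx, mean_ab, C_ab):
    """Squared Mahalanobis distance of the central flow vector, Eq. (6)."""
    d = v[np.array(idx)] - mean_ab(v)
    return float(d @ np.linalg.solve(C_ab, d))

def confidence(v, idx, mean_ab, C_ab, dM_train):
    """p-value confidence of Eq. (12): the empirical probability that a training
    patch produces a larger test statistic than the regarded patch."""
    return float(np.mean(dM_train > mahalanobis_sq(v, idx, mean_ab, C_ab)))
```

The last line uses the fact that the infimum in (12) equals 1 − G(d_M(v)), which the empirical CDF turns into a simple counting operation (up to the handling of ties).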
5 Results

As there are several test sequences with ground truth data and numerous optical flow computation methods, each with different parameters, it is impossible to present an extensive comparison between the proposed and previously known confidence measures. Hence, we present results for a selection of typically used real and artificial sequences and flow computation methods. Here, we use the Yosemite, the Marble, the Dimetrodon and the RubberWhale sequence (from the Middlebury database [19]). As optical flow computation methods we use the local structure tensor method [7], the non-linear 2D multiresolution combined local global method (CLG) [20] as well as the methods proposed by Nir [21] and Farnebäck [22]. To quantify the error e(x) ∈ R of a given flow vector at image sequence location x ∈ D the endpoint error [19] is used. It is defined as the length of the difference vector between the ground truth flow vector g(x) ∈ R² and the computed flow vector u(x) ∈ R²:

e(x) := \lVert g(x) - u(x) \rVert_2 .    (13)
We compare our approach to several of the confidence measures described in Section 2. These are the three measures examining the intrinsic dimension of the image sequence by Haussecker and Spies [6] (strCt, strCs, strCc), the inverse of the energy of the global flow computation method by Bruhn et al. [8] (inverse energy), the PCA-based measure by Kondermann et al. [9] (pcaRecon) and the image gradient measure (grad), which is approximated by central differences. In the following, the approach proposed in this paper will be abbreviated as pVal. Note that the inverse energy measure is only applicable to variational approaches and has thus not been applied to the flow fields computed by methods other than CLG. The Yosemite flow field by Nir et al. [21] was obtained directly from the authors; hence, no variational energy is available for the computation of the inverse energy confidence measure.

In order to numerically compare the proposed confidence measure to previously used measures we follow the comparison method suggested by Bruhn et al. in [8] called ’sparsification’, which is based on quantile plots. To this end, we remove n% of the flow vectors (indicated on the horizontal axis in the following figures) from the flow field in the order of increasing confidence and compute the average error of the remaining flow field. Hence, removing fraction 0 means that all flow vectors are taken into account, so the value corresponds to the average error over all flow vectors. Removing fraction 1 indicates that all flow vectors have been removed from the flow field, yielding average error 0. For some confidence measures, the average error even increases after removing a certain fraction of the flow field. This is the case if flow vectors with errors below the average error are removed instead of those with the highest errors. As a benchmark, we also calculate an ’optimal confidence’ copt, which reproduces the correct rank order
[Plots for Figures 2 and 3: remaining mean error (vertical axis) vs. removed fraction (horizontal axis). Figure 2 compares patch sizes 3×3×1, 7×7×1, 11×11×1, 15×15×1 and 21×21×1; Figure 3 compares training data (ground truth Yosemite; Yosemite, Marble; Street, Office; particles PIV; several).]
Fig. 2. Remaining mean error for given fraction of removed flow vectors based on different patch sizes for the proposed confidence measure (Farnebäck method on RubberWhale sequence, trained on ground truth data). The results show that the influence of the chosen patch size on the confidence measure is rather negligible.
Fig. 3. Remaining mean error based on different training sequences for the proposed confidence measure (Farnebäck method on RubberWhale sequence for 3 × 3 × 1 patch size). The results show that the method is hardly sensitive to the choice of training data.
of the flow vectors in terms of the endpoint error (13) and, thus, indicates the optimal order for the sparsification of the flow field:

c_{opt}(x) = 1 - \frac{e(x)}{\max\{e(y) \mid y \in D\}}.    (14)
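The sparsification evaluation itself is a short computation. A sketch under the assumption that the endpoint errors and the confidence values are given as arrays over the same pixel grid (names are illustrative):

```python
import numpy as np

def sparsification_curve(errors, confidences, fractions=np.linspace(0.0, 1.0, 21)):
    """Mean endpoint error of the remaining flow field after removing the given
    fractions of flow vectors in order of increasing confidence (Figs. 2-5)."""
    err = np.asarray(errors).ravel()
    conf = np.asarray(confidences).ravel()
    err_sorted = err[np.argsort(conf)]          # least confident vectors first
    n = err_sorted.size
    curve = []
    for f in fractions:
        kept = err_sorted[int(round(f * n)):]   # drop the f least confident vectors
        curve.append(kept.mean() if kept.size else 0.0)
    return np.asarray(fractions), np.array(curve)

def optimal_confidence(errors):
    """Benchmark confidence of Eq. (14): reproduces the endpoint error rank order."""
    e = np.asarray(errors).ravel()
    return 1.0 - e / e.max()
```

Plotting the curve against the removed fractions for each confidence measure reproduces the quantile plots used in the comparison below.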
For the experiments the patch size n × n × T was not optimized but kept constant at 3 × 3 × 1 for all test sequences. The influence of this parameter is rather negligible, as shown in Figure 2. Figure 3 shows that the performance of the confidence measure is also mostly independent of the training data. If ground truth or similar training data is used the performance improves, but even particle sequence data yields results close to those obtained with ground truth data.

Quantile plots of the average flow field error for the state-of-the-art flow computation methods by Nir et al. [21] and Farnebäck [22], the non-linear 2D CLG method [20] and the structure tensor method [7] have been computed for the Dimetrodon and the RubberWhale sequence proposed in [19] as well as for the standard Yosemite and Marble test sequences. Selected results are shown in Figures 4 and 5. For all test examples except one, the results indicate that the remaining average error for almost all fractions of removed flow vectors is lowest for our proposed confidence measure. As confidence measures are applied to remove only the flow vectors with the highest errors, the course of the curves is most important for small fractions of removed flow vectors and can in practice be neglected for larger fractions. Hence, the results indicate that our proposed confidence measure outperforms the previously employed measures for locally and globally computed optical flow fields on all our test sequences except on the CLG field for the RubberWhale sequence. In this case the inverse energy measure yields results slightly closer to the optimal curve than the proposed measure.

It should be noted that for a flow field density of 90% the average error of the local structure tensor method is already lower than that of the CLG flow fields at 100% density on the Marble, Yosemite and RubberWhale test sequences (see, for example, Figure 5d, e). If the CLG flow field is sparsified to 90% as well, the error
[Figure 4 plots, remaining mean error vs. removed fraction: a) Nir method, Yosemite sequence; b) Farnebäck method, Yosemite sequence; c) Structure tensor method, Yosemite sequence.]
Fig. 4. Average error quantile plot for the comparison of previous confidence measures to the proposed method (pVal); the previous confidence measures are three measures examining the intrinsic dimension of the image (strCt, strCs, strCc) [6], the image gradient (grad), a PCA model based measure [9] (pcaRecon), the inverse of the global energy [8] (Inverse Energy) and the optimal confidence defined in (14) (optConf ); horizontal axis: fraction of removed flow vectors, vertical axis: mean error of remaining flow field.
[Figure 5 plots, remaining mean error vs. removed fraction: d) Structure tensor method, RubberWhale sequence; e) CLG method, RubberWhale sequence; f) CLG method, Marble sequence.]
Fig. 5. Average error quantile plots, see Figure 4 for details
is approximately equal to that of the structure tensor method for the Yosemite and Marble sequences. Yet, the structure tensor approach needs only a fraction of the computation time of the CLG method and is much simpler to implement. Hence, in two out of three cases the proposed confidence measure allowed us to obtain a 90% dense flow field from the local structure tensor method with a quality level equal to that of the CLG method, which clearly shows the benefit of our approach.
[Figure 6 panels: a) optimal; b) pVal; c) pcaRecon; d) strCc; e) grad; f) strCt.]
Fig. 6. Sparsification order of flow vectors based on increasing confidence value for structure tensor flow field on RubberWhale sequence. The proposed confidence measure (pVal) is closest to the optimal confidence.
To graphically compare confidence measure results we use the structure tensor flow field computed on the RubberWhale test sequence as an example, as here the difference between the proposed confidence measure and the previously used ones is most evident. As the scale of confidence measures is not unique, we again only compare the order of removal of the flow vectors based on increasing confidence. Hence, each flow vector is assigned the time step of its removal from the field. The resulting removal orders for several of the confidence measures are shown in Figure 6.
6 Summary and Conclusion

In this paper we have proposed a confidence measure which is generally applicable to arbitrarily computed optical flow fields. As the measure is based on the computation of motion statistics from sample data and a hypothesis test, it is, to the best of our knowledge, the first confidence measure for optical flows for which the notion ’confidence measure’ is in fact justified in a statistical sense. Furthermore, the method can be adapted to specific motion estimation tasks with typical motion patterns by the choice of sample data if prior knowledge on the type of computed flow field is available. In this case the results can even be superior to those shown in this paper, as here we did not assume any prior knowledge. Results for locally and globally computed flow fields on ground truth test sequences show the superiority of our method compared to previously employed confidence measures.

An interesting observation is that, by means of the proposed confidence measure, we were able to obtain lower average errors for flow fields of 90% density computed by the fast structure tensor method than for 100% dense flow fields computed by the non-linear multiresolution combined local global method, and approximately equal error values at 90% density for both methods. Hence, fast local methods combined with the proposed confidence measure can, in fact, obtain results of a quality equal to global methods if only a small fraction of flow vectors is removed.
References

1. Barron, J.L., Fleet, D.J., Beauchemin, S.: Performance of optical flow techniques. International Journal of Computer Vision 12(1), 43–77 (1994)
2. Bainbridge-Smith, A., Lane, R.: Measuring confidence in optical flow estimation. IEEE Electronics Letters 32, 882–884 (1996)
3. Bishop, C.: Neural Networks for Pattern Recognition. Oxford University Press, Oxford (1995)
4. Zetzsche, C., Barth, E.: Fundamental limits of linear filters in the visual processing of two dimensional signals. Vision Research 30, 1111–1117 (1990)
5. Felsberg, M., Kalkan, S., Krüger, N.: Continuous dimensionality characterization of image structures. Journal of Image and Vision Computing (2008)
6. Haussecker, H., Spies, H.: Motion. In: Jähne, B., Haussecker, H., Geißler, P. (eds.) Handbook of Computer Vision and Applications, vol. 2, pp. 336–338. Academic Press, London (1999)
7. Bigün, J., Granlund, G., Wiklund, J.: Multidimensional orientation estimation with applications to texture analysis and optical flow. IEEE Journal of Pattern Analysis and Machine Intelligence 13, 775–790 (1991)
8. Bruhn, A., Weickert, J.: A confidence measure for variational optic flow methods. In: Geometric Properties for Incomplete Data, pp. 283–298. Springer, Heidelberg (2006)
9. Kondermann, C., Kondermann, D., Jähne, B., Garbe, C.: An adaptive confidence measure for optical flows based on linear subspace projections. In: Hamprecht, F.A., Schnörr, C., Jähne, B. (eds.) DAGM 2007. LNCS, vol. 4713, pp. 132–141. Springer, Heidelberg (2007)
10. Singh, A.: An estimation-theoretic framework for image-flow computation. In: Proceedings of IEEE ICCV, pp. 168–177 (1990)
11. Waxman, A., Wu, J., Bergholm, F.: Convected activation profiles and receptive fields for real time measurement of short range visual motion. In: Proceedings of IEEE CVPR, pp. 717–723 (1988)
12. Anandan, P.: A computational framework and an algorithm for the measurement of visual motion. International Journal of Computer Vision 2, 283–319 (1989)
13. Uras, S., Girosi, F., Verri, A., Torre, V.: A computational approach to motion perception. Journal of Biological Cybernetics 60, 79–97 (1988)
14. Mester, R., Hötter, M.: Robust displacement vector estimation including a statistical error analysis. In: Proc. 5th International Conference on Image Processing and its Applications, Edinburgh, UK, pp. 168–172. Institution of Electrical Engineers (IEE), London (1995)
15. Simoncelli, E.P., Adelson, E.H., Heeger, D.J.: Probability distributions of optical flow. In: Proc. Conf. on Computer Vision and Pattern Recognition, pp. 310–315. IEEE Computer Society, Los Alamitos (1991)
16. Fermüller, C., Shulman, D., Aloimonos, Y.: The statistics of optical flow. Journal of Computer Vision and Image Understanding 82, 1–32 (2001)
17. Fleet, D.J., Black, M.J., Yacoob, Y., Jepson, A.D.: Design and use of linear models for image motion analysis. International Journal of Computer Vision 36, 171–193 (2000)
18. Fisher, R.: Statistical Methods for Research Workers. Oliver and Boyd (1925)
19. Baker, S., Roth, S., Scharstein, D., Black, M., Lewis, J., Szeliski, R.: A database and evaluation methodology for optical flow. In: Proceedings of the International Conference on Computer Vision, pp. 1–8 (2007)
20. Bruhn, A., Weickert, J., Schnörr, C.: Lucas/Kanade meets Horn/Schunck: Combining local and global optic flow methods. International Journal of Computer Vision 61, 211–231 (2005)
21. Nir, T., Bruckstein, A.M., Kimmel, R.: Over-parameterized variational optical flow. International Journal of Computer Vision 76, 205–216 (2008)
22. Farnebäck, G.: Very high accuracy velocity estimation using orientation tensors, parametric motion, and simultaneous segmentation of the motion field. In: International Conference on Computer Vision, Proceedings, Vancouver, Canada, vol. I, pp. 171–177 (2001)
Automatic Generator of Minimal Problem Solvers

Zuzana Kukelova¹, Martin Bujnak¹,², and Tomas Pajdla¹

¹ Center for Machine Perception, Czech Technical University, Prague
² Microsoft Corporation
{kukelova,bujnam1,pajdla}@cmp.felk.cvut.cz
Abstract. Finding solutions to minimal problems for estimating epipolar geometry and camera motion leads to solving systems of algebraic equations. Often, these systems are not trivial and therefore special algorithms have to be designed to achieve numerical robustness and computational efficiency. The state-of-the-art approach for constructing such algorithms is the Gröbner basis method for solving systems of polynomial equations. Previously, Gröbner basis solvers were designed ad hoc for concrete problems and could not be easily applied to new problems. In this paper we propose an automatic procedure for generating Gröbner basis solvers which could be used even by non-experts to solve technical problems. The input to our solver generator is a system of polynomial equations with a finite number of solutions. The output of our solver generator is the Matlab or C code which computes solutions to this system for concrete coefficients. Generating solvers automatically opens the possibility to solve more complicated problems which could not be handled manually, or to solve existing problems in a better and more efficient way. We demonstrate that our automatic generator constructs efficient and numerically stable solvers which are comparable to or outperform known manually constructed solvers. The automatic generator is available at http://cmp.felk.cvut.cz/minimal.¹
1 Introduction

Many problems can be formulated using systems of algebraic equations. Examples are the minimal problems in computer vision, i.e. problems solved from a minimal number of point correspondences, such as the five point relative pose problem [20], the six point focal length problem [18], the six point generalized camera problem [19], the nine point problem for estimating para-catadioptric fundamental matrices [9], the eight point problem for estimating the fundamental matrix and a single radial distortion parameter for uncalibrated cameras [11], the six point problem for estimating the essential matrix and a single radial distortion parameter for calibrated cameras [4, 12], and the nine point problem for estimating the fundamental matrix and two different radial distortion parameters for uncalibrated cameras [4, 12]. These are important problems with a broad range of applications [10].
¹ This work has been supported by EC projects FP6-IST-027787 DIRAC and MRTN-CT-2004005439 VISIONTRAIN and grants MSM6840770038DMCMIII, STINT Dur IG2003-2 062 and MSMT KONTAKT 9-06-17.
Often, the polynomial systems which arise are not trivial. They consist of many polynomial equations of higher degree in many unknowns. Therefore, special algorithms have to be designed to achieve numerical robustness and computational efficiency. The state-of-the-art method for constructing such algorithms, the solvers, is the Gröbner basis method for solving systems of polynomial equations. It was used to solve all previously mentioned computer vision problems.

The Gröbner basis solvers in computer vision are mostly designed for concrete problems and in general consist of three key steps. In the first step, the problem is solved using a computer algebra system, e.g. Macaulay 2 or Maple, for several (many) random coefficients from a finite field. This is done using a general technique for solving polynomial equations by finding a Gröbner basis for the original equations [6]. In this phase, the basic parameters of the problem are identified, such as whether there exists a finite number of solutions for general coefficients, how “hard” it is to obtain the Gröbner basis, and which monomials it contains. This procedure is automatic and relies on general algorithms of algebraic geometry such as the Buchberger [1] or F4 [8] algorithm. The computations are carried out in a finite field in order to avoid expensive growth of coefficients and to avoid numerical instability.

In the second step, a special elimination procedure, often called an “elimination template”, is generated. This elimination template says which polynomials from the ideal should be added to the initial polynomial equations to obtain a Gröbner basis, or at least all polynomials needed for constructing a special matrix, called the “action matrix”, and thus solving the initial equations. The goal of this step is to obtain a computationally efficient and numerically robust procedure. Until now, this step has been mainly manual, requiring one to trace the path of elimination for random coefficients from the finite field, to check redundancies and possible numerical pitfalls, and to write down a program in a procedural language such as Matlab or C.

In the last step, the action matrix is constructed from the resulting equations and the solutions to the original problem are obtained numerically as the eigenvalues or eigenvectors of the action matrix [7].

The first and the third step are standard and well understood. It is the second step which still involves a considerable amount of craft and which makes the process of solver generation rather complex and virtually impenetrable for a non-specialist. Moreover, for some problems it need not be clear how their elimination templates were generated, and therefore non-specialists often use them as black boxes and are not able to reimplement them, improve them, or create similar solvers for their own new problems.

In this paper we present an automatic generator of the elimination templates for Gröbner basis solvers for an interesting class of systems of polynomial equations which appear in computer vision problems. We have to accept that there is no hope of obtaining an efficient and robust solver for completely general systems of polynomial equations, since the general problem is known to be NP-complete and EXPSPACE-complete [15]. On the other hand, all known computer vision problems share the property that the elimination path associated with their solution is the same for all interesting configurations of coefficients of a given problem.
Consider, for instance, the classical 5 point problem for estimating the essential matrix from 5 point correspondences [16, 20]. In general, the elimination path used to
solve the 5 point problem depends on the actual measured image coordinates. Fortunately, for non-degenerate image correspondences, i.e. for those which lead to a finite number of essential matrices, the elimination path is always the same. Therefore, it is enough to find the path for one particular non-degenerate configuration of coefficients and then use the same path for all non-degenerate configurations of coefficients. The paths for degenerate configurations of coefficients may be very different, and there may be very many of them, but we need not consider them.

We propose an automatic generator that finds one elimination path. It can find any path. The choice of the path is controlled by the particular coefficients we choose to generate the elimination template. We demonstrate that our automatic generator constructs efficient and numerically stable solvers which are comparable to or outperform known manually constructed solvers.

The input to our automatic generator is the system of polynomial equations which we want to solve, with a particular choice of coefficients from Zp that selects the particular elimination path. For many problems, the interesting “regular” solutions can be obtained with almost any random choice of coefficients. Therefore, we use random coefficients from Zp. The output from the generator is the Matlab or C code solver that returns solutions to this system of polynomial equations for concrete coefficients from R. In online computations, only this generated solver is called.

In the next two sections we review the Gröbner basis method for solving systems of polynomial equations and the solvers based on this method. Section 5 is dedicated to our automatic procedure for generating Gröbner basis solvers. Then, we demonstrate the results of our automatic generator on several minimal problems.
2 Solving Systems of Polynomial Equations

Our goal is to solve a system of algebraic equations

f_1(x) = \ldots = f_m(x) = 0    (1)

which is given by a set of m polynomials F = {f_1, ..., f_m | f_i ∈ C[x_1, ..., x_n]} in n variables x = (x_1, ..., x_n) over the field of complex numbers. Such a system of algebraic equations can be written in the matrix form

M\,X = 0,    (2)

where X is a vector of all monomials which appear in these equations and M is a coefficient matrix. In what follows we mostly consider this matrix representation of systems of equations; for example, by Gauss-Jordan (G-J) elimination of equations we understand G-J elimination of the corresponding coefficient matrix M.

Solving systems of algebraic polynomial equations is a very challenging problem. There is no single robust, numerically stable and efficient method for solving such systems in the general case. Therefore, special algorithms have to be designed for specific problems. The general Gröbner basis method for solving systems of polynomial equations can be quite inefficient in some cases, but it was recently used successfully as the basis for efficient solvers of computer vision minimal problems. We now describe this method and the solvers based on it.
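As a small illustration of the M X = 0 notation, a SymPy sketch for a toy system (not part of the generator; the function name is ours):

```python
import sympy as sp

def coefficient_matrix(polys, variables):
    """Write f_1 = ... = f_m = 0 in the matrix form M X = 0 of Eq. (2):
    X collects all monomials appearing in the system, M the coefficients."""
    terms = [dict(sp.Poly(f, *variables).terms()) for f in polys]
    monoms = sorted({m for t in terms for m in t}, reverse=True)  # fixed monomial order
    M = sp.Matrix([[t.get(m, 0) for m in monoms] for t in terms])
    X = sp.Matrix([sp.Mul(*(v**e for v, e in zip(variables, m))) for m in monoms])
    return M, X

x, y = sp.symbols('x y')
M, X = coefficient_matrix([x**2 + 2*x*y - 1, 3*y**2 - x + 4], (x, y))
# sp.expand(M * X) reproduces the two input polynomials as a column vector.
```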
3 Gröbner Basis Method

The polynomials F = {f_1, ..., f_m | f_i ∈ C[x_1, ..., x_n]} define an ideal I, which is the set of all polynomials that can be generated as polynomial combinations of the initial polynomials F,

I = \left\{ \sum_{i=1}^{m} f_i\, p_i \;\middle|\; p_i \in \mathbb{C}[x_1, \ldots, x_n] \right\},    (3)

where the p_i are arbitrary polynomials from C[x_1, ..., x_n]. We can define division by an ideal I in C[x_1, ..., x_n] as the division by the set F of generators of I. There are special sets of generators, Gröbner bases, of the ideal I, for which this division by the ideal I is well defined in the sense that the remainder on the division does not depend on the ordering of the polynomials in the Gröbner basis G. This means that the remainder of an arbitrary polynomial f ∈ C[x_1, ..., x_n] on the division by G in a given monomial ordering is uniquely determined. Furthermore, f ∈ I if and only if the remainder of f on the division by G is zero (\overline{f}^{\,G} = 0). This implies that \overline{f+g}^{\,G} = \overline{f}^{\,G} + \overline{g}^{\,G} and \overline{fg}^{\,G} = \overline{\overline{f}^{\,G}\,\overline{g}^{\,G}}^{\,G}.

Gröbner bases generate the same ideal as the initial polynomial equations and therefore have the same solutions. However, it is important that they are often easier to solve (e.g. the reduced Gröbner basis w.r.t. the lexicographic ordering contains a polynomial in one variable only). Computing such a basis and “reading off” the solutions from it is one standard method for solving systems of polynomial equations. Although this sounds nice and easy, the reality is much harder. The problem is that the computation of Gröbner bases is in general an EXPSPACE-complete problem, i.e. a large space is necessary for storing intermediate results. Fortunately, in many specific cases the solution to a system of polynomial equations computed using Gröbner bases can be obtained much faster.

For solving systems of polynomial equations, the most suitable ordering is the lexicographic one, which results in a system of equations in a “triangular form” with one equation in one variable only. Unfortunately, computation of such a Gröbner basis w.r.t. the lexicographic ordering is very time consuming and for most problems cannot be used. Therefore, in many cases a Gröbner basis G under another ordering, e.g. the graded reverse lexicographic ordering (grevlex), which is often easier to compute, is constructed. Then, other properties of this basis are used to obtain solutions to the initial system of polynomial equations.

Thanks to the property that the division by the ideal I is well defined for the Gröbner basis G, we can consider the space of all possible remainders on the division by I. This space is known as the quotient ring and we will denote it as A = C[x_1, ..., x_n]/I. It is known that if I is a radical ideal [6] and the set of equations F has a finite number of solutions N, then A is a finite-dimensional space with dim(A) = N.

Now we can use the nice properties of a special action matrix defined in this space to find solutions to our system of equations (1). Consider the multiplication by some polynomial f ∈ C[x_1, ..., x_n] in the quotient ring A. This multiplication defines a linear mapping T_f from A to itself. Since A is a finite-dimensional vector space over C, we can represent this mapping by its matrix M_f with respect to some monomial basis B = \{x^{\alpha} \mid \overline{x^{\alpha}}^{\,G} = x^{\alpha}\} of A, where x^{\alpha} is a monomial x^{\alpha} = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n} and \overline{x^{\alpha}}^{\,G} is the remainder of x^{\alpha} on the division by G. The action matrix M_f can be viewed as a generalization of the companion matrix used in solving one polynomial equation in one unknown, because the solutions to our system of polynomial equations (1) can be easily obtained from the eigenvalues and eigenvectors of this action matrix [7].
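To make the connection to the companion matrix concrete, here is a one-variable NumPy sketch (illustrative, not from the paper): the matrix of multiplication by x in the basis 1, x, ..., x^{n-1} of C[x]/(p) is the companion matrix of p, and its eigenvalues are the roots of p.

```python
import numpy as np

def companion_matrix(coeffs):
    """Matrix of multiplication by x in C[x]/(p) for the monic polynomial
    p(x) = x^n + c_{n-1} x^{n-1} + ... + c_1 x + c_0, with coeffs = [c_0, ..., c_{n-1}]."""
    c = np.asarray(coeffs, dtype=float)
    n = c.size
    M = np.zeros((n, n))
    M[1:, :-1] = np.eye(n - 1)   # x * x^k = x^{k+1} for k < n-1
    M[:, -1] = -c                # x * x^{n-1} = x^n is reduced modulo p
    return M

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
roots = np.linalg.eigvals(companion_matrix([-6.0, 11.0, -6.0]))
print(np.sort(roots.real))       # approximately [1. 2. 3.]
```

In the multivariate case the basis 1, x, ..., x^{n-1} is replaced by the monomial basis B of A and the reduction modulo p by the reduction modulo the Gröbner basis G, which is exactly what the action matrix M_f encodes.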
4 Gröbner Basis Solver

The Gröbner basis method for solving systems of polynomial equations based on action matrices was recently used to solve many minimal problems in computer vision. The solvers to all these minimal problems are very similar and are based on the same concepts. Many minimal problems in computer vision, including all those mentioned above, have the convenient property that the monomials appearing in the set of initial generators F are always the same, irrespective of the concrete coefficients arising from non-degenerate image measurements. Therefore, the leading monomials of the corresponding Gröbner basis, and thus the monomials in the basis B of the quotient ring A, are generically the same and can be found once in advance. This is an important observation which helps to solve all these problems efficiently.

The first step of these solvers is to analyze the particular problem, i.e. whether it is solvable and how many solutions there are, in a randomly chosen finite prime field Zp (Z/p) with p ≫ 7. Coefficients in Zp can be represented accurately and efficiently. This speeds up computations, minimizes memory requirements and especially avoids numerical instability. Computing with floating point approximations of the coefficients may lead to numerical instability, since it may be difficult to determine when the coefficients become zero. Next, the Gröbner basis G and the basis B are found for many random coefficients from Zp. Thanks to an algebraic geometry theorem [21], we know that if the bases G and B remain stable for many different random coefficients, i.e. if B consists of the same monomials, they are generically equivalent to the bases of the original system of polynomial equations with rational coefficients. With this information in hand the solver can be created. The solver typically consists of hand-made elimination templates [9, 11, 18, 19] (or one template [2, 3, 4, 5]) that determine which polynomials from the ideal I should be added to the initial equations to obtain the Gröbner basis G, or at least all polynomials needed for constructing the action matrix M_f. These elimination templates are the crucial part of the solver.

An important observation has been made in [11]: the action matrix can be constructed without computing a complete Gröbner basis G. All we need for creating the action matrix M_f is to construct polynomials from the ideal I with leading monomials from the set (f · B) \ B and the remaining monomials from B. This fact was in some way implicitly used in previous solvers [9, 18, 19] but has not been fully articulated. Consider the basis B = \{x^{\alpha(1)}, \ldots, x^{\alpha(N)}\} of A, which has been found once in advance by computations in Zp. Then the polynomials we need for constructing the action matrix M_f are of the form

q_i = f\, x^{\alpha(i)} + h_i,    (4)

with h_i = \sum_{j=1}^{N} c_{ji}\, x^{\alpha(j)} \in A and x^{\alpha(i)} \in B. This is because to construct the action matrix M_f we need to compute T_f(x^{\alpha(i)}) = \overline{f\, x^{\alpha(i)}}^{\,G} for all x^{\alpha(i)} \in B [7]. If for some x^{\alpha(i)} \in B and chosen f we have f\, x^{\alpha(i)} \in A, then T_f(x^{\alpha(i)}) = \overline{f\, x^{\alpha(i)}}^{\,G} = f\, x^{\alpha(i)} = \sum_{j=1}^{N} d_{ji}\, x^{\alpha(j)} and we are done. For all other x^{\alpha(i)} \in B, for which f\, x^{\alpha(i)} \notin A, we consider the above-mentioned polynomials q_i. For these x^{\alpha(i)}, T_f(x^{\alpha(i)}) = \overline{f\, x^{\alpha(i)}}^{\,G} = \overline{q_i - h_i}^{\,G} = -h_i \in A. Then the action matrix M_f has the form

M_f = \begin{pmatrix} -c_{11} & d_{12} & -c_{13} & \cdots & -c_{1N} \\ -c_{21} & d_{22} & \cdot & \cdots & \cdot \\ \vdots & \vdots & & & \vdots \\ -c_{N1} & d_{N2} & -c_{N3} & \cdots & -c_{NN} \end{pmatrix},    (5)

where columns containing c_{ji} correspond to the monomials x^{\alpha(i)} \in B for which f\, x^{\alpha(i)} \notin A, and columns containing d_{ji} to the monomials x^{\alpha(i)} \in B for which f\, x^{\alpha(i)} = \sum_{j=1}^{N} d_{ji}\, x^{\alpha(j)} \in A.

Since the polynomials q_i are from the ideal I, they can be generated as algebraic combinations of the initial generators F. This can be done using several methods. One possible way is to start with F and then systematically generate new polynomials from I by multiplying already generated polynomials by individual variables and reducing them each time by G-J elimination. This method was, for example, used in [11] and resulted in several G-J eliminations of coefficient matrices M_1, ..., M_l. Another possible way is to generate all new polynomials in one step by multiplying the polynomials from F with selected monomials and reducing all generated polynomials at once using a single G-J elimination of one coefficient matrix M. This method was used in [2] and was observed to be numerically more stable.

Such systematic generation of polynomials q_i results in many unnecessary polynomials, many of which can be eliminated afterwards in a simple and intuitive way [5]. The method starts with the matrix M, which has the property that after its G-J elimination all polynomials q_i necessary for constructing the action matrix are obtained. Since it is known which monomials appear in the q_i for a particular problem to be solved, the number of generated polynomials can systematically be reduced in the following way:

1. For all rows of M, starting with the last row r (i.e. with the highest degree polynomial), do
   (a) Perform G-J elimination on the matrix M without the row r.
   (b) If the eliminated matrix contains all necessary polynomials q_i, then set M := M \ {r}.

All the previous steps, i.e. finding the number of solutions, the basis G, the basis B, the generation of the elimination template(s) and the reduction of unnecessary polynomials, are performed only once, in the automatic generator. The generated online solver, which the user comes into contact with, takes the elimination template, the matrix M, fills it
[Figure 1 blocks: parse equations → extract monomials + coefficients → analyze coefficient matrix (check for a zero-dimensional ideal; quotient ring A, basis B, “action variable”) → generate polynomials up to degree d → test whether all necessary polynomials have been generated (if not, increase d) → reduce unnecessary polynomials → online solver code generation.]
Fig. 1. Block diagram of the automatic generator
with concrete coefficients arising from image measurements and performs its G-J elimination. The rows of M then correspond to the polynomials qi and are used to create the action matrix. Finally, the eigenvalues or the eigenvectors of this action matrix give solutions to the problem.
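A compact sketch of the basic one-row-at-a-time reduction described above, working over Zp as the generator does (the test for "all necessary q_i are present" depends on the problem and is left to the caller; the names and the prime are illustrative):

```python
import numpy as np

def rref_mod_p(M, p=30097):
    """Gauss-Jordan elimination of an integer matrix over the prime field Z_p."""
    M = np.array(M, dtype=np.int64) % p
    rows, cols = M.shape
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i, c] != 0), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]
        M[r] = (M[r] * pow(int(M[r, c]), p - 2, p)) % p    # scale pivot to 1 (p prime)
        for i in range(rows):
            if i != r and M[i, c] != 0:
                M[i] = (M[i] - M[i, c] * M[r]) % p
        r += 1
        if r == rows:
            break
    return M

def reduce_template(M, contains_required, p=30097):
    """Greedily drop rows, starting from the last (highest-degree) one, as long as
    the required polynomials q_i still appear after G-J elimination."""
    M = np.array(M, dtype=np.int64)
    row = M.shape[0] - 1
    while row >= 0:
        candidate = np.delete(M, row, axis=0)
        if contains_required(rref_mod_p(candidate, p)):
            M = candidate
        row -= 1
    return M
```

The paper's generator additionally uses sparse elimination and removes several rows at a time (see Section 5.4); the sketch shows only the basic variant.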
5 The Automatic Procedure for Generating Gröbner Basis Solvers

In this section we describe our approach to the automatic generation of such Gröbner basis solvers for general problems. The input of this automatic generator is a system of polynomial equations with coefficients from Zp and the output is the solver, the MATLAB or C code, which returns solutions to this system of polynomial equations for concrete coefficients from R. Our automatic generator consists of several independent modules (Fig. 1). Since all these parts are independent, they can be further improved or replaced by more efficient implementations. Next we briefly describe each of these parts.

5.1 Polynomial Equations Parser

First, the input equations are split into coefficients and monomials. For the automatic generator, we instantiate each known parameter occurring in the coefficients with a random number from Zp. We assign a unique identifier to each coefficient used. This is important for the code generation module to be able to track the elimination path.

5.2 Computation of the Basis B and the Number of Solutions

This module starts with the system of polynomial equations F which we want to solve, with random coefficients from Zp. For many problems, the interesting “regular” solutions can be obtained with almost any random choice of coefficients. The coefficients from Zp speed up computations, minimize memory requirements and especially avoid numerical instability. The generator first verifies whether the input system of equations F has a finite number of solutions, i.e. whether the initial polynomial equations generate a zero-dimensional ideal, and how many solutions there are. If the system has a finite number of solutions, we compute the Gröbner basis G w.r.t. the grevlex [6] monomial ordering and the basis B of the
quotient ring A. The output of this part of the generator is the basis B of the quotient ring A. To obtain this information we use existing algorithms implemented in the algebraic geometry software packages Macaulay 2 and Maple. Both packages are able to compute in finite prime fields and provide efficient implementations of all functions that we need for our purpose. Moreover, these functions can be called directly from MATLAB and their output can be used further in the generator. The modularity of the generator allows replacing this part of the code by another existing module computing the Gröbner basis G and the basis B [6].

5.3 Single Elimination Template Construction

The input to this third, most important, step of our automatic generator is the basis B of the quotient ring A and the polynomial f for which we want to create the action matrix. We use an individual variable, i.e. f = x_k, called the “action variable”, to create the action matrix. The goal now is to generate all polynomials necessary for constructing the action matrix M_{x_k}. The method described in Section 4 calls for generating polynomials from the ideal I with leading monomials from the set (x_k · B) \ B and the remaining monomials from B, i.e. polynomials of the form q_i = x_k x^{α(i)} + \sum_{j=1}^{N} c_{ji} x^{α(j)} ∈ I.

As explained in Section 4, there are several ways to generate these polynomials from the initial polynomial equations F. We have decided to generate them in one step by multiplying the polynomials from F with selected monomials and reducing all generated polynomials at once using a single G-J elimination of one coefficient matrix. These monomial multiples of the polynomials F, which should be added to the initial equations to obtain all necessary polynomials q_i, are generated in this part of the automatic generator. This is done by systematically generating polynomials of I and testing them. We stop when all necessary polynomials q_i are obtained. The generator tries to add polynomials starting with polynomials of as low a degree as possible. Thus, it first multiplies the input polynomials with the lowest degree monomials and then moves to higher degree monomials. The polynomial generator can be described as follows (a sketch of the enumeration in step 1 is given below):

1. Generate all monomial multiples x^α f_i of degree ≤ d (sorted by leading term w.r.t. the grevlex ordering).
2. Write the polynomials x^α f_i in the form M X = 0, where M is the coefficient matrix and X is the vector of all ordered monomials.
3. Simplify the matrix M by G-J elimination.
4. If all necessary polynomials q_i have been generated, stop.
5. Otherwise set d = d + 1 and go to 1.

In this way we generate polynomials which, after G-J elimination of their corresponding coefficient matrix M, contain all necessary polynomials q_i. These polynomials, i.e. the matrix M, form the so-called elimination template. Unfortunately, we often generate many unnecessary polynomials. In the next part of our automatic generator we try to minimize the number of these unnecessary polynomials.
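A SymPy sketch of the candidate generation in step 1 (a toy illustration with our own function names; the stopping test of step 4 and the G-J elimination are performed elsewhere):

```python
import sympy as sp
from itertools import combinations_with_replacement

def monomials_up_to_degree(variables, d):
    """All monomials in the given variables of total degree <= d (including 1)."""
    monoms = []
    for k in range(d + 1):
        for combo in combinations_with_replacement(variables, k):
            monoms.append(sp.Mul(*combo) if combo else sp.Integer(1))
    return monoms

def multiples_up_to_degree(polys, variables, d):
    """Monomial multiples x^a * f_i whose total degree does not exceed d;
    these are the candidate rows of the elimination template."""
    rows = []
    for f in polys:
        deg_f = sp.Poly(f, *variables).total_degree()
        for mon in monomials_up_to_degree(variables, d - deg_f):
            rows.append(sp.expand(mon * f))
    return rows

x, y = sp.symbols('x y')
candidates = multiples_up_to_degree([x**2 + y**2 - 1, x*y - 2], (x, y), d=4)
# The candidates are then written as a coefficient matrix M (cf. Eq. (2)) and
# reduced by a single G-J elimination.
```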
5.4 Reducing the Size of the Template

This part of the automatic generator starts with the polynomials generated in the previous step. We know that after the G-J elimination of these polynomials (i.e. of the corresponding matrix M) we obtain all polynomials that we need for constructing the action matrix. Starting with the coefficient matrix M and with the information about the form of the necessary polynomials q_i, we systematically reduce the number of generated polynomials using the method described in Section 4.

The algorithm in Section 4 removes polynomials one by one and each time calls an expensive G-J elimination. This is not efficient; it has almost O(n⁴) complexity in the number n of polynomials used. We enhanced this algorithm in several ways: (1) we use sparse G-J elimination, since the elimination template is a quite sparse matrix; (2) we remove more than one polynomial at once with the following heuristic: if we succeeded in removing k polynomials, then we try to remove 2k polynomials in the next step; if we failed to remove k polynomials, we try to remove only k/4 polynomials in the next step. These two steps considerably speed up the reduction process. Moreover, we can exploit the fact that the polynomials in the elimination template are ordered by the degree of their leading monomials; in the G-J elimination of such ordered polynomials we can reuse results from previous G-J eliminations.

5.5 Construction of the Action Matrix

To create the template for the action matrix M_{x_k}, we identify those rows of the eliminated matrix M (the matrix M after the G-J elimination) which correspond to the polynomials q_i. The action matrix will then contain the coefficients from these rows which correspond to the monomials from the basis B, and will have the form (5). For the generated online solver we just need to know which rows and columns to extract. Note that the structure of the eliminated matrix is always the same for all instances of the problem.

5.6 Generating an Efficient Online Solver

The generated online solver consists of the following steps:

1. construction of the coefficient matrix (from the elimination template);
2. G-J elimination;
3. action matrix extraction;
4. solution extraction from the eigenvectors of the action matrix.
To build the coefficient matrix we use the unique identifiers associated with the coefficients of each of the input polynomials. Hence, besides the coefficient matrix in Zp, we maintain a matrix with coefficient identifiers, the index matrix, and apply all operations performed on the actual coefficients, i.e. adding and removing rows and linear combinations of rows, to the index matrix as well. Recall that in the construction of the elimination template we use the input equations and multiply them by monomials. This amounts to nothing more than shifting the identifiers associated with the input polynomials in the index matrix.
Reducing the polynomials and the further optimizations result only in removing rows or columns of the index matrix. Hence, the code generator creates code which simply puts the coefficients of the input polynomials into the correct places using the coefficient identifiers. Then, after G-J elimination, it reads the values from precomputed rows and columns and builds the action matrix.
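A highly simplified sketch of this scatter idea (our own names; it assumes that, as described above, each entry of the final template holds a copy of exactly one input coefficient, identified by a positive integer, with 0 marking structural zeros):

```python
import numpy as np

def build_online_filler(index_matrix):
    """Precompute, from the generator's index matrix, the scatter pattern the
    online solver uses to fill the elimination template with real coefficients."""
    index_matrix = np.asarray(index_matrix)
    rows, cols = np.nonzero(index_matrix)
    ids = index_matrix[rows, cols] - 1      # identifiers are 1-based in this sketch
    shape = index_matrix.shape

    def fill(coefficients):
        """Place the concrete coefficients of the input system into the template."""
        M = np.zeros(shape)
        M[rows, cols] = np.asarray(coefficients, dtype=float)[ids]
        return M

    return fill

# A toy 2 x 3 template referring to input coefficients 1..4:
fill = build_online_filler([[1, 0, 2],
                            [3, 4, 0]])
M = fill([0.5, -1.0, 2.0, 3.0])
# M == [[0.5, 0.0, -1.0],
#       [2.0, 3.0,  0.0]]; the online solver would now run G-J elimination on M.
```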
6 Experiments

In this section we demonstrate that our automatic generator constructs efficient and numerically stable solvers which are comparable to or outperform known manually constructed solvers. For comparison, we consider five recently solved minimal problems which have a broad range of applications and can be used in RANSAC-based estimation. Since the automatic procedure described in this paper generates very similar Gröbner basis solvers to those proposed in the original solutions, there is no point in testing the behavior of the generated solvers under noise, under outliers or on real images. The generated solvers solve the same systems of polynomial equations using the same algebraic method as the original solvers. The difference is only in the number of generated polynomials and therefore in the size of the matrices, the elimination templates, which are used to obtain the solutions. For all considered problems we have obtained elimination templates comparable to or considerably smaller than those used in the original solvers.

We choose two problems, the well-known six point focal length problem [18] and the more complex “radial distortion problem” [4], to evaluate and compare the intrinsic numerical stability of the existing solvers with our automatically generated solvers. For the remaining three problems we compare only the sizes of the generated elimination templates. The numerical stability of the solvers is compared on synthetically generated scenes without noise. These generated scenes consist of 1000 points distributed randomly within a cube. The points were projected onto the image planes of two displaced cameras with the same focal lengths. We use different radial distortions in the “radial distortion problem”.

6.1 Six Point Focal Length Problem

The problem of estimating the relative camera position for two cameras with unknown focal length is a classical and popular problem in computer vision. The minimal number of point correspondences needed to solve this problem is six. This minimal problem was solved by Stewénius et al. [18] using Gröbner basis techniques and has 15 solutions. In this solution the linear equations from the epipolar constraint are first used to parametrize the fundamental matrix with two unknowns, F = F_0 + xF_1 + yF_2. Using this parameterization, the rank constraint for the fundamental matrix and the trace constraint for the essential matrix result in ten third and fifth order polynomial equations in the three unknowns x, y and w = f^{-2}, where f is the unknown focal length.

The Gröbner basis solver [18] starts with these ten polynomial equations, which can be represented by a 10 × 33 matrix. In the first step two new polynomials are added to the initial polynomial equations and eliminated by G-J elimination. After this four
new polynomials are added and eliminated. Finally two more polynomials are added and eliminated. The resulting system then contains the Gr¨obner basis and can be used to construct the action matrix. The resulting solver therefore consists of three G-J eliminations of three matrices of size 12 × 33, 16 × 33 and 18 × 33. More recently, another Gr¨obner basis solver to this problem was proposed in [2]. This solver uses only one G-J elimination of a 34×50 matrix and uses special technique for improving the numerical stability of Gr¨obner basis solvers based on changing the basis B. In this paper it was shown that this solver gives more accurate results than the original solver [18]. Our automatic generator starts with the ten initial polynomial equations in three unknowns. For this problem we have generated three different solvers for all three variables (action matrices for three different action variables x, y and w). For the action variable w our generator first generates all monomial multiples of initial ten polynomial equations up to total degree eight. This results in the 236 × 165 matrix which contains all necessary polynomials qi . After the reduction step only 41 polynomials in 60 monomials remained. For the action variables x and y our generator generates all monomial multiples of initial ten polynomial equations up to total degree seven. This results in the 125 × 120 matrix. In the reduction step 94 polynomials out of these 125 are removed resulting in 31 polynomials in 50 monomials. After the G-J elimination of the corresponding 31 × 50 matrix (in fact 31 × 46 matrix is sufficient thanks to removing columns that do not affect G-J elimination) all necessary polynomial are obtained and the action matrix Mx (My ) is created. Our generated solver results in a little bit smaller matrix than the solver proposed in [2]. Since we did not have the source code of the solver proposed in [2], we have compared our generated solver with the original solver proposed by Stew´enius [18]. The log10 relative errors of the focal length for 10000 runs of both solvers (original Stew´enius solver (Red) and our automatically generated solver (Green)) are shown in Fig. 2. Our generated solver gives a little bit more accurate results than Stew´eniuse’s original solver. As we have already mentioned, we have no source code to the solver proposed in [2], but according to the results presented in this paper our generated solver gives very
120
Fig. 3. The log10 relative errors of the radial distortion parameters λ1 (Left) and λ2 (Right) for 1000 runs of three different solvers. Original solver with the basis selection [4] (Red, darker), original solver without the basis selection (Blue, dotted) and our generated solver (Green, lighter) on the synthetic dataset without noise.
similar results (log10 relative errors around 10^{-13} to 10^{-14}) as this solver, which uses a further special technique for improving the numerical stability.

6.2 Nine Point Radial Distortion Problem

The minimal problem of simultaneous estimation of the fundamental matrix and two different radial distortion parameters for two uncalibrated cameras and nine image point correspondences has been successfully solved in floating point arithmetic only recently [4]. This problem has 24 solutions and results in ten polynomial equations in ten unknowns. These equations can be simplified to four equations in four unknowns. The existing Gröbner basis solver [4] starts with these four equations and, to obtain the action matrix, first generates all monomial multiples of these initial equations up to total degree eight. This gives 497 equations. Using “fine tuning”, the authors reduce the number of used equations to 393 equations in 390 monomials. After the G-J elimination of the corresponding 393 × 390 matrix, all necessary polynomials for constructing the action matrix M_{f_{31}} are obtained.

Our automatic procedure starts with the simplified four polynomial equations in four unknowns. First, the generator also generates all monomial multiples of the initial polynomial equations up to total degree eight. In the reduction step, 318 out of these 497 polynomials are removed, resulting in 179 polynomials in 212 monomials for the action variable f_{31} and also for λ_2. After the G-J elimination of the corresponding 179 × 212 matrix (in fact a 179 × 203 matrix is sufficient) all necessary polynomials q_i are obtained and the action matrix M_{f_{31}} (M_{λ_2}) is constructed.

We have compared our generated solver with the original solver proposed in [4], which uses a special technique for improving the numerical stability based on changing the basis B, and also with the “one elimination solver” (the same solver as in [4]) but without this basis selection. The log10 relative errors of the two radial distortion parameters λ_1 and λ_2 for 1000 runs of these solvers, i.e. the solver [4] with the basis selection (Red, darker), the solver [4] without the basis selection (Blue, dotted) and our generated solver (Green, lighter), are shown in Fig. 3.
Table 1. The comparison of the size of the elimination templates used in our generated solvers with the size of the elimination templates used in the original solvers
Problem                              Original solver                                        Our generated solver
5pt relative pose problem [20]       1 elimination, 10 × 20                                 1 elimination, 10 × 20
6pt focal length problem [2, 18]     3 eliminations [18]: 12 × 33, 16 × 33 and 18 × 33;     1 elimination, 31 × 46
                                     1 elimination [2]: 34 × 50
6pt radial distortion problem [4]    1 elimination, 320 × 363                               1 elimination, 238 × 290
8pt radial distortion problem [11]   3 eliminations: 8 × 22, 11 × 30, 36 × 50               1 elimination, 32 × 48
9pt radial distortion problem [4]    1 elimination, 393 × 390                               1 elimination, 179 × 203
The best results are given by the original solver [4] with the basis selection. The classical Gröbner basis solver (without this basis selection) gives very similar results to our automatically generated solver. Although these results are still very good (log10 relative errors around 10^{-6}), they could be further enhanced using the same technique for improving the numerical stability as used in [4].

6.3 Elimination Template(s) Size

We have compared the size of the elimination templates used in our generated solvers with the size of the elimination templates used in the original solvers for three other minimal problems: (i) the five point relative pose problem [20], (ii) the problem of estimating the epipolar geometry and a single distortion parameter for two uncalibrated cameras and eight point correspondences [11], and (iii) the problem of estimating the epipolar geometry and a single distortion parameter for two calibrated cameras and six point correspondences [4]. The results for these three minimal problems together with the results for the two previously discussed problems are shown in Table 1. For all these problems we have obtained elimination templates smaller than, or of the same size as, those used in the original solvers. Smaller templates result in faster solvers.

6.4 Computation Time

We have implemented the generator in MATLAB. Computation times depend on the problem. For the several problems we have tested, the generator running time was from nine seconds to two minutes. Running times of the resulting automatically generated online solvers were in milliseconds.
7 Conclusion

We have proposed an automatic procedure for generating Gröbner basis solvers for interesting problems which appear in computer vision and elsewhere. This automatic
generator can be easily used even by non-experts to solve their own new problems. The input to the generator is a system of polynomial equations with a finite number of solutions, and the output is MATLAB or C code which computes solutions to this system for concrete coefficients. We have demonstrated the functionality of our generator on several minimal problems. Our generator constructs, in acceptable time, efficient and numerically stable solvers that are comparable with or outperform known manually constructed solvers. The automatic generator is available at http://cmp.felk.cvut.cz/minimal.
References

1. Buchberger, B.: Ein Algorithmus zum Auffinden der Basiselemente des Restklassenringes nach einem nulldimensionalen Polynomideal. PhD Thesis, Mathematical Institute, University of Innsbruck, Austria (1965)
2. Byröd, M., Josephson, K., Åström, K.: Improving numerical accuracy of Gröbner basis polynomial equation solver. In: International Conference on Computer Vision (2007)
3. Byröd, M., Josephson, K., Åström, K.: Fast Optimal Three View Triangulation. In: Yagi, Y., Kang, S.B., Kweon, I.S., Zha, H. (eds.) ACCV 2007, Part I. LNCS, vol. 4843, pp. 549–559. Springer, Heidelberg (2007)
4. Byröd, M., Kukelova, Z., Josephson, K., Pajdla, T., Åström, K.: Fast and robust numerical solutions to minimal problems for cameras with radial distortion. In: CVPR 2008 (2008)
5. Bujnak, M., Kukelova, Z., Pajdla, T.: A general solution to the P4P problem for camera with unknown focal length. In: CVPR 2008 (2008)
6. Cox, D., Little, J., O'Shea, D.: Ideals, Varieties, and Algorithms. Springer, Heidelberg (1992)
7. Cox, D., Little, J., O'Shea, D.: Using Algebraic Geometry. Springer, Heidelberg (2005)
8. Faugere, J.-C.: A new efficient algorithm for computing Gröbner bases (F4). Journal of Pure and Applied Algebra 139(1-3), 61–88 (1999)
9. Geyer, C., Stewenius, H.: A nine-point algorithm for estimating para-catadioptric fundamental matrices. In: CVPR 2007, Minneapolis (2007)
10. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge (2003)
11. Kukelova, Z., Pajdla, T.: A minimal solution to the autocalibration of radial distortion. In: CVPR 2007, Minneapolis (2007)
12. Kukelova, Z., Pajdla, T.: Two minimal problems for cameras with radial distortion. In: OMNIVIS 2007, Rio de Janeiro (2007)
13. Li, H.: A simple solution to the six-point two-view focal-length problem. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3951. Springer, Heidelberg (2006)
14. Li, H., Hartley, R.: Five-point motion estimation made easy. In: ICPR 2006 (2006)
15. Mayr, E.W.: Some complexity results for polynomial ideals. Journal of Complexity 13(3), 303–325 (1997)
16. Nister, D.: An efficient solution to the five-point relative pose. IEEE PAMI 26(6), 756–770 (2004)
17. Stewénius, H.: Gröbner basis methods for minimal problems in computer vision. PhD thesis, Lund University (2005)
18. Stewénius, H., Nister, D., Kahl, F., Schaffalitzky, F.: A minimal solution for relative pose with unknown focal length. In: CVPR 2005, pp. 789–794 (2005)
19. Stewénius, H., Nister, D., Oskarsson, M., Åström, K.: Solutions to minimal generalized relative pose problems. In: OMNIVIS 2005 (2005)
20. Stewénius, H., Engels, C., Nister, D.: Recent developments on direct relative orientation. ISPRS J. of Photogrammetry and Remote Sensing 60, 284–294 (2006)
21. Traverso, C.: Gröbner trace algorithms. In: Gianni, P. (ed.) ISSAC 1988. LNCS, vol. 358, pp. 125–138. Springer, Heidelberg (1989)
A New Baseline for Image Annotation

Ameesh Makadia1, Vladimir Pavlovic2, and Sanjiv Kumar1
1 Google Research, New York, NY
[email protected], [email protected]
2 Rutgers University, Piscataway, NJ
[email protected]
Abstract. Automatically assigning keywords to images is of great interest as it allows one to index, retrieve, and understand large collections of image data. Many techniques have been proposed for image annotation in the last decade that give reasonable performance on standard datasets. However, most of these works fail to compare their methods with simple baseline techniques to justify the need for complex models and subsequent training. In this work, we introduce a new baseline technique for image annotation that treats annotation as a retrieval problem. The proposed technique utilizes low-level image features and a simple combination of basic distances to find nearest neighbors of a given image. The keywords are then assigned using a greedy label transfer mechanism. The proposed baseline outperforms the current state-of-the-art methods on two standard and one large Web dataset. We believe that such a baseline measure will provide a strong platform to compare and better understand future annotation techniques.
1 Introduction

Given an input image, the goal of automatic image annotation is to assign a few relevant text keywords to the image that reflect its visual content. Utilizing image content to assign a richer, more relevant set of keywords would allow one to further exploit the fast indexing and retrieval architecture of Web image search engines for improved image search. This makes the problem of annotating images with relevant text keywords of immense practical interest. Image annotation is a difficult task for two main reasons. The first is the well-known pixel-to-predicate or semantic gap problem, which points to the fact that it is hard to extract semantically meaningful entities using just low-level image features, e.g. color and texture. Explicitly recognizing thousands of objects or classes reliably is currently an unsolved problem. The second difficulty arises due to the lack of correspondence between the keywords and image regions in the training data. For each image, one has access to the keywords assigned to the entire image, and it is not known which regions of the image correspond to these keywords. This makes it difficult to directly learn classifiers by treating each keyword as a separate class. Recently, techniques have emerged to circumvent the correspondence problem under a discriminative multiple instance learning paradigm [1] or a generative paradigm [2]. Image annotation has been a topic of on-going research for more than a decade and several interesting techniques have been proposed [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12].
Most of these techniques define a parametric or non-parametric model to capture the relationship between image features and keywords. Even though some of these techniques have shown impressive results, one thing that is sorely missing in the annotation literature is a comparison with very simple 'straw-man' techniques. The goal of this work is to create a family of baseline measures against which new image annotation methods should be compared to justify the need for more complex models and training procedures. We introduce several simple techniques characterized by minimal training requirements that can efficiently serve this purpose. Surprisingly, we also show that these baseline techniques can outperform more complex state-of-the-art annotation methods on several standard datasets, as well as on a large Web dataset.

Arguably, one of the simplest annotation schemes is to treat the problem of annotation as that of image retrieval. For instance, given a test image, one can find its nearest neighbor (defined in some feature space with a pre-specified distance measure) from the training set, and assign all the keywords of the nearest image to the input test image. One obvious modification of this scheme would be to use K nearest neighbors to assign the keywords instead of relying on just the nearest one. In the multiple neighbors case, as we discuss in Section 3.3, one can easily assign the appropriate keywords to the input image using a simple greedy approach. As we show in Section 4, some simple distance measures defined on even global image features perform similarly to or better than several popular image annotation techniques. The K-nearest-neighbor approach can be extended to incorporate multiple distance measures, possibly defined over distinct feature spaces. Recently, combining different distances or kernels has been shown to yield good performance in object recognition tasks [13]. In this work, we explore two different ways of linearly combining different distances to create the baseline measures. The first one simply computes the average of the different distances after scaling each distance appropriately. The second one is based on selecting relevant distances using a sparse logistic regression method, Lasso [14]. To learn the weights of Lasso, one needs a training set containing similar and dissimilar images. A typical training set provided for the annotation task does not contain such information directly. We show that one can train Lasso by creating a labeled set from the annotation training data. Even such a weakly trained Lasso outperforms the state-of-the-art methods in most cases. Surprisingly, however, the averaged distance performs better than or similarly to the noisy Lasso.

The main contributions of our work are that it (1) introduces a simple method to perform image annotation by treating it as a retrieval problem, in order to create a new baseline against which annotation algorithms can be measured, and (2) provides exhaustive experimental comparisons of several state-of-the-art annotation methods on three different datasets. These include two standard sets (Corel and IAPR TC-12) and one Web dataset containing about 20K images.
2 Prior Work

A large number of techniques have been proposed in the last decade [15]. Most of these treat annotation as translation from image instances to keywords. The translation paradigm is typically based on some model of image and text co-occurrences [3, 16].
The translation approach of [3] was extended to models that ascertain associations indirectly, through latent topic/aspect/context spaces [4, 8]. One such model, the Correspondence Latent Dirichlet Allocation (CorrLDA) [4], considers associations through a latent topic space in a generatively learned model. Despite its appealing structure, this class of models remains sensitive to the choice of topic model, initial parameters, prior image segmentation, and, more importantly, the inference and learning approximations needed to handle the typically intractable exact analysis. Cross Media Relevance Models (CMRM) [5], the Continuous Relevance Model (CRM) [7], and the Multiple Bernoulli Relevance Model (MBRM) [9] assume different, nonparametric density representations of the joint word-image space. In particular, MBRM achieves robust annotation performance using simple image and text representations: a mixture density model of image appearance that relies on regions extracted from a regular grid, thus avoiding potentially noisy segmentation, and the ability to naturally incorporate complex word annotations using multiple Bernoulli models. However, the complexity of the kernel density representations may hinder MBRM's applicability to large data sets. Alternative approaches based on graph representations of joint queries [11], and on cross-language LSI [12], offer means for linking the word-image occurrences, but still do not perform as well as the non-parametric models. Recent research efforts have focused on extensions of the translation paradigm that exploit additional structure in both the visual and textual domains. For instance, [17] utilizes a coherent language model, eliminating independence between keywords. Hierarchical annotations in [18] aim not only to identify specific objects in an image, but also to explicitly incorporate concept ontologies. The added complexity, however, makes the models applicable only to limited settings with small-size dictionaries. To address this problem, [19] developed the real-time ALIPR image search engine, which uses multiresolution 2D Hidden Markov Models to model concepts determined by a training set. While this method successfully infers higher level semantic concepts based on global features, identification of more specific categories and objects remains a challenge. In an alternative approach, [2] relies on a hierarchical mixture representation of keyword classes, leading to a method that demonstrates both computational efficiency and state-of-the-art performance on several complex annotation tasks. However, the annotation problem is treated as a set of one-vs-all binary classification problems, potentially failing to benefit from competition among models during the learning stage. Even though promising results have been reported by many sophisticated annotation techniques, they commonly lack a comparison with simple baseline measures across diverse image datasets. In the absence of such a comparison, it is hard to understand the gains and justify the need for the complex models and training processes required by most of the current annotation methods. Our work addresses this issue by suggesting a family of baseline measures, some of which surprisingly outperform the current state-of-the-art in image annotation on several large real-world datasets.
3 Baseline Methods

We propose a family of baseline methods for image annotation that are built on the hypothesis that images similar in appearance are likely to share keywords. We treat
image annotation as a process of transferring keywords from nearest neighbors. The neighborhood structure is constructed using simple low-level image features, resulting in a rudimentary baseline model. The details are given below.

3.1 Features and Distances

Color and texture are recognized as two important low-level visual cues for image representation. The most common color descriptors are based on coarse histograms, which are frequently utilized within image matching and indexing schemes, primarily due to their effectiveness and ease of computation. Image texture is commonly captured with wavelet features. In particular, Gabor and Haar wavelets have been shown to be quite effective in creating sparse yet discriminative image features. To limit the influence and biases of individual features, and to maximize the amount of information extracted, we choose to employ a number of simple and easy to compute features.

Color. We generate features from images in three different color spaces: RGB, HSV, and LAB. While RGB is the default color space for image capturing and display, both HSV and LAB isolate important appearance characteristics not captured by RGB. For example, the HSV (Hue, Saturation, and Value) colorspace encodes the amount of light illuminating a color in the Value channel, and the Luminance channel of LAB is intended to reflect the human perception of brightness. The RGB, HSV, and LAB features are 16-bin-per-channel histograms in their respective colorspaces. To determine the corresponding distance measures, we evaluated four measures commonly used for histograms and distributions (KL-divergence, the χ2 statistic, L1-distance, and L2-distance) on the human-labeled training data from the Corel5K dataset. L1 performed the best for RGB and HSV, while KL-divergence was found suitable for LAB distances. Throughout the remainder of the paper, RGB and HSV distances imply the L1 measure, and the LAB distance implies KL-divergence.

Texture. We represent texture with Gabor and Haar wavelets. Each image is filtered with Gabor wavelets at three scales and four orientations. From each of the twelve response images, a histogram over the response magnitudes is built. The concatenation of these twelve histograms is a feature vector we refer to as 'Gabor'. The second feature captures the quantized Gabor phase. The phase angle at each response pixel is averaged over 16 × 16 blocks in each of the twelve Gabor response images. These mean phase angles are quantized to 3 bits (eight values), and are concatenated into a feature vector referred to as 'GaborQ'. Haar wavelet responses are generated by block-convolution of an image with Haar filters at three different orientations (horizontal, diagonal, and vertical). Responses at different scales were obtained by performing the convolution with a suitably subsampled image. After rescaling an image to 64 × 64 pixels, a Haar feature is generated by concatenating the Haar response magnitudes (this feature is referred to as 'Haar'). As with the Gabor features, we also consider a quantized version, where the sign of each Haar response is quantized to three values (0, 1, or -1 if the response is zero, positive, or negative, respectively). Throughout the text this quantized feature is referred to as 'HaarQ'. We use the L1 distance for all texture features.
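As an illustration of the color descriptors and distances just described, the following sketch computes 16-bin-per-channel histograms with OpenCV and the corresponding L1 and KL distances. It is a minimal approximation for illustration, not the authors' exact feature extraction code.

```python
import cv2
import numpy as np

def color_hist(img_bgr, space):
    """16-bin-per-channel histogram in the requested color space (normalized)."""
    conv = {'RGB': cv2.COLOR_BGR2RGB,
            'HSV': cv2.COLOR_BGR2HSV,
            'LAB': cv2.COLOR_BGR2LAB}[space]
    img = cv2.cvtColor(img_bgr, conv)
    # OpenCV stores 8-bit hue in [0, 180); the other channels use [0, 256)
    ranges = [(0, 180), (0, 256), (0, 256)] if space == 'HSV' else [(0, 256)] * 3
    hist = [np.histogram(img[..., c], bins=16, range=ranges[c])[0] for c in range(3)]
    h = np.concatenate(hist).astype(np.float64)
    return h / h.sum()

def l1_dist(h1, h2):                       # used for RGB and HSV
    return float(np.abs(h1 - h2).sum())

def kl_div(h1, h2, eps=1e-10):             # used for LAB
    p, q = h1 + eps, h2 + eps
    return float(np.sum(p * np.log(p / q)))

# usage: d_rgb = l1_dist(color_hist(a, 'RGB'), color_hist(b, 'RGB'))
# where a and b are BGR uint8 images loaded with cv2.imread
```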
3.2 Combining Distances

Joint Equal Contribution (JEC). If labeled training data is unavailable, or the labels are extremely noisy, the simplest way to combine distances from different descriptors is to allow each individual distance to contribute equally (after scaling the individual distances appropriately). Let $I_i$ be the i-th image, and say we have extracted N features $f_i^1, \ldots, f_i^N$. Let us define $d^k_{(i,j)}$ as the distance between $f_i^k$ and $f_j^k$. We would like to combine the individual distances $d^k_{(i,j)}$, $k = 1, \ldots, N$, to provide a comprehensive distance between images $I_i$ and $I_j$. Since, in JEC, each feature contributes equally towards the image distance, we first need to find the appropriate scaling term for each feature. These scaling terms can be determined easily if the features are normalized in some way (e.g., features that have unit norm), but in practice this is not always the case. We can obtain estimates of the scaling terms by examining the lower and upper bounds on the feature distances computed on some training set. We scale the distances for each feature such that they are bounded by 0 and 1. If we denote the scaled distance as $\tilde{d}^k_{(i,j)}$, we can define the comprehensive image distance between images $I_i$ and $I_j$ as $\frac{1}{N}\sum_{k=1}^{N} \tilde{d}^k_{(i,j)}$. We refer to this distance as Joint Equal Contribution (JEC).

L1-Penalized Logistic Regression (Lasso [14]). Another approach to combining feature distances is to identify those features that are more relevant for capturing image similarity. This is the well-known problem of feature selection. Since we are using different color (and texture) features that are not completely independent, it is an obvious question to ask: which of these color (or texture) features are redundant? Logistic regression with an L1 penalty, also known as the Lasso [14], provides a simple way to answer this question. The main challenge in applying the Lasso to image annotation lies in creating a training set containing pairs of similar and dissimilar images. Typical training datasets for image annotation contain images and associated text keywords, and there is no direct notion of similarity between images. In this setting, we consider any pair of images that share enough keywords to be a positive training example, and any pair with no keywords in common to be a negative example. Clearly, the quality of such a training set will depend on the number of keywords required to match before an image pair can be called 'similar'. In this work, we obtained training samples from the designated training set of the Corel5K benchmark (Section 4). Image pairs with at least four common keywords were treated as positive samples for training, and those with no common keywords were used as negative samples (training samples are illustrated in Fig. 1).

Combining basic distances using JEC or Lasso gives us a simple way to compute distances between images. Using such composite distances, one can find the K nearest neighbors of an image. In the next section, we present a label transfer algorithm that assigns keywords to any test image given its nearest neighbors.

3.3 Label Transfer

We propose a simple method to transfer n keywords to a query image $\tilde{I}$ from the query's K nearest neighbors in the training set. Let $I_i$, $i = 1, \ldots, K$, be these K nearest neighbors, ordered by increasing distance (i.e., $I_1$ is the most similar image).
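Before turning to label transfer, the JEC combination of Sect. 3.2 can be sketched in a few lines. The per-feature lower and upper distance bounds are assumed to have been estimated on a training set; this is a sketch of the idea, not the authors' implementation.

```python
import numpy as np

def jec_distance(dists, lo, hi):
    """dists, lo, hi: length-N arrays (one entry per feature distance).
    lo/hi are per-feature bounds estimated on training data (assumed hi > lo)."""
    scaled = (np.asarray(dists, float) - lo) / (np.asarray(hi, float) - lo)
    return float(np.clip(scaled, 0.0, 1.0).mean())   # equal-contribution average
```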
Fig. 1. Pairs of images that were used as positive training examples (top row) and negative training examples (bottom row) for Lasso. In positive pairs the images shared at least 4 keywords, while in negative pairs they shared none.
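The construction of the Lasso training set illustrated in Fig. 1 could be sketched as follows; the `annotations` mapping (image id to keyword set) is a hypothetical data structure, not part of the paper.

```python
from itertools import combinations

def lasso_pairs(annotations, min_shared=4):
    """annotations: dict mapping image id -> set of keywords."""
    positives, negatives = [], []
    for i, j in combinations(annotations, 2):
        shared = len(annotations[i] & annotations[j])
        if shared >= min_shared:        # at least 4 common keywords -> similar
            positives.append((i, j))
        elif shared == 0:               # no common keywords -> dissimilar
            negatives.append((i, j))
    return positives, negatives
```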
The number of keywords associated with $I_i$ is denoted by $|I_i|$. The steps of our greedy label transfer algorithm are as follows.

1. Rank the keywords of $I_1$ according to their frequency in the training set.
2. Of the $|I_1|$ keywords of $I_1$, transfer the n highest ranking keywords to the query $\tilde{I}$. If $|I_1| < n$, proceed to step 3.
3. Rank the keywords of neighbors $I_2$ through $I_K$ according to two factors: 1) co-occurrence in the training set with the keywords transferred in step 2, and 2) local frequency (i.e. how often they appear as keywords of images $I_2$ through $I_K$). Select the highest ranking $n - |I_1|$ keywords to transfer to $\tilde{I}$.

This transfer algorithm is somewhat different from other obvious choices. One can imagine simpler algorithms where keywords are selected simultaneously from the entire neighborhood (i.e., all the neighbors are treated equally), or where the neighbors are weighted according to their distance from the test image. However, an initial evaluation showed that these simple approaches underperform in comparison to our two-stage transfer algorithm (see Section 4). In summary, our baseline annotation methods comprise a composite image distance measure (JEC or Lasso) for nearest neighbor ranking, combined with our label transfer algorithm. Is there any hope of achieving reasonable results for image annotation using such simplistic methods? To answer this question, we evaluate our baseline methods on three different datasets, as described in the following section.
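A minimal sketch of this two-stage greedy transfer is given below. The global keyword frequencies and pairwise co-occurrence counts are assumed to be precomputed on the training set, and the particular way the two ranking factors in step 3 are combined (lexicographically) is one simple choice, not necessarily the authors'.

```python
from collections import Counter

def transfer_labels(neighbors, global_freq, cooccur, n=5):
    """neighbors: list of keyword lists for the K nearest training images,
    ordered by increasing distance. global_freq: dict keyword -> training-set
    frequency. cooccur: dict (keyword, keyword) -> co-occurrence count."""
    first = neighbors[0]
    # Steps 1-2: transfer the most frequent keywords of the nearest neighbor.
    ranked = sorted(first, key=lambda w: -global_freq.get(w, 0))
    labels = ranked[:n]
    if len(labels) >= n:
        return labels
    # Step 3: rank remaining keywords of neighbors 2..K by co-occurrence with
    # the already-transferred keywords and by local frequency in the neighborhood.
    rest = [w for img in neighbors[1:] for w in img if w not in labels]
    local = Counter(rest)
    def score(w):
        co = sum(cooccur.get((w, t), 0) for t in labels)
        return (co, local[w])
    for w in sorted(set(rest), key=score, reverse=True):
        labels.append(w)
        if len(labels) == n:
            break
    return labels
```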
4 Experiments and Discussion

Our experiments examined the performance and behavior of the proposed baselines for image annotation on three collections of images.
– Corel5K [3] has become a de-facto evaluation benchmark in the image annotation community. It contains 5,000 images collected from the larger Corel CD set, split into 4,500 training and 500 test examples. Each image is annotated with an average of 3.5 keywords, and the dictionary contains 260 words that appear in both the train and the test set.
– IAPR TC-12 is a collection of 19,805 images of natural scenes that include different sports and actions, photographs of people, animals, cities, landscapes and many other aspects of contemporary life (http://eureka.vu.edu.au/~grubinger/IAPR/TC12 Benchmark.html). Unlike other similar databases, images in IAPR TC-12 are accompanied by free-flowing text captions. While this set is typically used for cross-language retrieval, we have concentrated on the English captions and extracted keywords (nouns) using the TreeTagger part-of-speech tagger (http://www.ims.uni-stuttgart.de/projekte/corplex/TreeTagger). This resulted in a dictionary size of 291 and an average of 4.7 keywords per image. 17,825 images were used for training, and the remaining 1,980 for testing. Samples from IAPR are depicted in Fig. 2.
– ESP Game consists of a set of 21,844 images collected in the ESP collaborative image labeling task [20] (http://www.espgame.org). In the ESP game, two players assign labels to the same image without communicating. Only common labels are accepted. As an image is shown to more teams, a list of taboo words is accumulated, increasing the difficulty for future players and resulting in a challenging dataset for annotation. The set we obtained (http://hunch.net/~jl/) contains a wide variety of images annotated by 269 keywords, and is split into 19,659 train and 2,185 test images. Each image is associated with up to 15 keywords, and on average 4.6 keywords. Examples are shown in Fig. 3.

For the IAPR TC-12 and ESP datasets, we have made public the dictionaries, as well as the training and testing image set partitions, used in our evaluations (http://www.cis.upenn.edu/~makadia/annotation/). On all three

Fig. 2. Sample IAPR data. On the left are 25 randomly selected images from the dataset. On the right is a single image and its associated annotation. Noun extraction from the caption provides keywords for annotation.
Fig. 3. Sample ESP data. On the left are 25 randomly selected images from the dataset, while on the right are two images and their associated keywords. These images are quite different in appearance and content, but share many of the same keywords.
annotation datasets, we evaluated the performance of a number of baseline methods. For comparisons on Corel5K, we summarized published results of several approaches, including the most popular topic model (i.e. CorrLDA [4]), as well as MBRM [9] and SML [2], which have shown state-of-the-art performance on Corel5K. On the IAPR TC-12 and ESP datasets, where no published results of annotation methods are available, we compared the performance of our baseline methods against MBRM [9], which was relatively easier to implement and had comparable performance to SML [2] (no implementation of SML [2] was publicly available). When evaluating the performance of the baseline methods, we focused on three different settings: 1) performance of individual distance measures, 2) performance of the learned weighted distance model (Lasso), and 3) performance of the Joint Equal Contribution (JEC) model, where all features contribute equally to the global distance measure. In the Corel setting, we also examined the impact of leaving out one distance measure at a time in the JEC model. Performance of all models was evaluated using five measures, following the methodology used in [2, 9]. We report the mean precision (P%) and mean recall (R%) rates obtained by different models, as well as the number of total keywords recalled (N+). Precision and recall are defined in the standard way: the annotation precision for a keyword is the number of images assigned the keyword correctly divided by the total number of images predicted to have the keyword; the annotation recall is the number of images assigned the keyword correctly, divided by the number of images assigned the keyword in the ground-truth annotation. Similar to other approaches, we assign the top 5 keywords to each image using label transfer. Additionally, we report two retrieval performance measures based on the top 10 images retrieved for each keyword: the mean retrieval precision (rP%) and the mean retrieval precision for only the recalled keywords (rP+%) [2].

4.1 Corel

The results of experiments on the Corel set are summarized in Table 1(a). The top portion of the table displays published results of a number of standard and top-performing
Table 1. Results on three datasets for different annotation algorithms. Corel5K contains 5,000 images and 260 keywords, IAPR-TC12 has 19,805 images and 291 keywords, and ESP has 21,844 images and 268 keywords. P% and R% denote the mean precision and the mean recall, respectively, over all keywords in percentage points. N+ denotes the number of recalled keywords. rP% and rP+% denote the mean retrieval precision for all keywords and the mean retrieval precision for recalled keywords only, respectively. Note that the proposed simple baseline technique (JEC) outperforms state-of-the-art techniques in all datasets. CorrLDA1 and JEC1 correspond to models built on a reduced 168 keyword dictionary, as in [4].

(a) Corel5K

| Method | P% | R% | N+ | rP% | rP+% |
|---|---|---|---|---|---|
| CRM [7] | 16 | 19 | 107 | – | – |
| InfNet [11] | 17 | 24 | 112 | – | – |
| NPDE [21] | 18 | 21 | 114 | – | – |
| MBRM [9] | 24 | 25 | 122 | 30 | 35 |
| SML [2] | 23 | 29 | 137 | 31 | 49 |
| CorrLDA [4]1 | 6 | 9 | 59 | 27 | 37 |
| RGB | 20 | 23 | 110 | 24 | 49 |
| HSV | 18 | 21 | 110 | 23 | 45 |
| LAB | 20 | 25 | 118 | 25 | 47 |
| Haar | 6 | 8 | 53 | 12 | 33 |
| HaarQ | 11 | 13 | 87 | 16 | 35 |
| Gabor | 8 | 10 | 72 | 11 | 31 |
| GaborQ | 5 | 6 | 52 | 7 | 26 |
| Lasso | 24 | 29 | 127 | 30 | 51 |
| JEC | 27 | 32 | 139 | 33 | 52 |
| JEC1 | 32 | 40 | 113 | 35 | 48 |

(b) IAPR-TC12 & ESP

| Method | IAPR-TC12 P% | R% | N+ | rP% | rP+% | ESP P% | R% | N+ | rP% | rP+% |
|---|---|---|---|---|---|---|---|---|---|---|
| MBRM | 24 | 23 | 223 | 24 | 30 | 18 | 19 | 209 | 18 | 24 |
| RGB | 24 | 24 | 233 | 23 | 29 | 20 | 22 | 212 | 19 | 25 |
| HSV | 20 | 20 | 215 | 18 | 24 | 18 | 20 | 212 | 17 | 21 |
| LAB | 24 | 25 | 232 | 23 | 29 | 20 | 22 | 221 | 20 | 24 |
| Haar | 20 | 11 | 176 | 21 | 32 | 21 | 18 | 205 | 21 | 27 |
| HaarQ | 19 | 16 | 189 | 18 | 28 | 18 | 19 | 207 | 18 | 24 |
| Gabor | 15 | 15 | 183 | 14 | 22 | 15 | 16 | 186 | 15 | 21 |
| GaborQ | 8 | 9 | 137 | 9 | 18 | 14 | 15 | 193 | 13 | 19 |
| Lasso | 28 | 29 | 246 | 26 | 31 | 21 | 24 | 224 | 21 | 25 |
| JEC | 28 | 29 | 250 | 27 | 31 | 22 | 25 | 224 | 21 | 25 |
methods that approach the annotation problem from different perspectives, using different image representations: CRM [7], InfNet [11], NPDE [21], MBRM [9], SML [2], and CorrLDA [4]. The middle part of the table shows the results of using only the distance measures induced by individual features. Finally, the bottom rows list the results of the baseline methods that rely on combinations of distances from multiple features. Individual feature distances show a wide spread in performance scores, ranging from the high-scoring LAB and RGB color measures to the potentially less effective Haar and GaborQ. It is interesting to note that some of the best individual measures perform on par with or better than several more complex published methods. More surprising, however, is that the measures which arise from combinations of individual distances (Lasso and JEC) perform significantly better than most other published methods. In particular, JEC, which emphasizes equal contribution of all the feature distances, dominates in all five performance measures. One reason for this exceptional performance may be the use of a wide spectrum of different features contributing along different "orthogonal" factors. This also points to the well-understood inadequacies and limitations of most image representation models that rely on individual or small subsets of features. Figure 4 shows some images annotated using the JEC baseline. Additionally, we show some retrieval examples using the JEC baseline in Fig. 5.
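For reference, the per-keyword annotation measures used above (P%, R% and N+) can be computed as in the following sketch. The data structures are hypothetical, and the handling of keywords that are never predicted (precision taken as 0) is an assumption rather than a detail stated in the paper.

```python
def annotation_scores(pred, truth, dictionary):
    """pred, truth: dict mapping image id -> set of keywords; dictionary: list of keywords."""
    precisions, recalls = [], []
    for w in dictionary:
        predicted = [i for i in pred if w in pred[i]]       # images predicted to have w
        relevant  = [i for i in truth if w in truth[i]]     # images annotated with w
        correct   = [i for i in predicted if w in truth[i]]
        precisions.append(len(correct) / len(predicted) if predicted else 0.0)
        recalls.append(len(correct) / len(relevant) if relevant else 0.0)
    n_plus = sum(r > 0 for r in recalls)                    # number of recalled keywords
    return (sum(precisions) / len(dictionary),              # mean precision
            sum(recalls) / len(dictionary),                 # mean recall
            n_plus)
```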
grass, rocks, water, tree, sun, water, sea, sand, valley, grass, deer, waves, birds canyon white-tailed rocks,sand, sun, water, tree, forest, valley, canyon clouds, birds deer, white-tailed
325
bear, snow, wood, deer, white-tailed tree, snow, wood, fox
Fig. 4. Predicted keywords using JEC versus the human annotations for a sampling of images in the Corel5K dataset (using all 260 keywords)
Fig. 5. Retrieval results using JEC on Corel5K. Each row displays the first seven images retrieved for a query. From top to bottom, the queries are: sky, street, mare, train.
It should be noted that most top-performing methods in the literature rely on instance-based representations (such as MBRM, CRM, InfNet, and NPDE), which are closely related to our baseline approach. While generative parametric models such as CorrLDA [4] have significant modeling appeal due to the interpretability of the learned models, they fail to stack up to the nonparametric representations on this difficult task. Table 1 confirms that the gap between the two paradigms remains large. Another interesting result is revealed by comparing JEC with Lasso. One might expect the weights learned through Lasso to perform better than the equal contributions in JEC. However, this is not the case, in part because of the different requirements posed by the two models. Lasso relies on the existence of sets of positive (similar) and negative (dissimilar) pairs of images, while JEC is a learning-free model. Since the Lasso training set was created artificially from the annotation training set, the effect of noisy labels is undoubtedly reflected in the model's performance. We further contrast the role of individual features and examine their contribution to the combined baseline models in the experiments summarized in Tables 2(a) and 2(b). The performance of individual features shown in Table 1 may tempt one to leave out the low-performing features, such as the texture-based Haar and Gabor descriptors. However, Table 2(a) suggests that this is not a wise thing to do. Correlated features, such
Table 2. (a) All-but-one testing of the JEC scheme. In each row, a different feature was left out of JEC. It is clear from these results that all seven features make some positive contribution to the combined distances. The last row shows the JEC results for the full set of features for reference. (b) Texture vs. color results for 260 keywords in Corel5K. The texture feature is a weighted average of all four texture features, and the color feature is a weighted average of all three color features. The third row shows the full JEC results with all the texture and color features.

(a) All-but-one

| Feature held out | P% | R% | N+ | rP% | rP+% |
|---|---|---|---|---|---|
| RGB | 27 | 31 | 134 | 32 | 53 |
| HSV | 27 | 31 | 137 | 32 | 52 |
| LAB | 27 | 32 | 134 | 33 | 53 |
| Haar | 26 | 31 | 133 | 32 | 54 |
| HaarQ | 26 | 30 | 130 | 31 | 53 |
| Gabor | 25 | 29 | 128 | 30 | 53 |
| GaborQ | 26 | 31 | 134 | 33 | 53 |
| None | 27 | 32 | 139 | 33 | 52 |

(b) Texture & Color

| Feature class | P% | R% | N+ | rP% | rP+% |
|---|---|---|---|---|---|
| Texture | 16 | 19 | 101 | 24 | 45 |
| Color | 23 | 26 | 120 | 27 | 51 |
| Texture + Color | 27 | 32 | 139 | 33 | 52 |
Table 3. Evaluation of alternative label transfer schemes on Corel5K. In (a), we assess two simple methods. All neighbors equal simultaneously selects keywords from all 5 nearest neighbors. Keywords are ranked by their frequency in the neighborhood. All neighbors weighted applies an additional weighting relative to the distance of the neighbor from the test image. In (b), we evaluate the individual neighbors in isolation (i.e. all keywords transferred from a single neighbor).

(a) Alternative label transfer methods

| Method | P% | R% | N+ | rP% | rP+% |
|---|---|---|---|---|---|
| All neighbors equal | 23 | 24 | 113 | 30 | 56 |
| All neighbors weighted | 25 | 31 | 135 | 32 | 50 |
| Proposed method (Section 3.3) | 27 | 32 | 139 | 33 | 52 |

(b) Single-neighbor performance: bar plot of precision and recall (in percent) for keywords transferred from each individual neighbor (1–5) in isolation.
as HSV and LAB may contribute little jointly and could potentially be left out. While the texture-based descriptors lead to individually inferior annotation performance, they complement the color features. A similar conclusion may be reached when considering joint performance of all color and all texture features separately, as depicted in Table 2(b): either of the two groups alone results in performance inferior to the JEC combined model. Finally, as mentioned earlier, the greedy label transfer algorithm utilized in JEC is not immediately obvious. One straightforward alternative is to transfer all keywords simultaneously from the entire neighborhood while optionally weighting the neighbors according to their distance from the test image. Additionally, by evaluating the labels transferred from a single neighbor, we can estimate the average “quality” of neighbors in isolation. These results are summarized in Table 3. The simple alternative of selecting
Fig. 6. Predicted keywords using JEC versus human annotations for sample images in the IAPR dataset
all keywords simultaneously from the entire neighborhood (with and without weighting the neighbors) underperforms our proposed label transfer algorithm. Regarding individual neighbors, the difference in performance between the first two neighbors is greater than the difference between the second and fifth neighbors. This observation led us to treat the first neighbor specially.

4.2 IAPR TC-12

The Corel set has served as a common evaluation platform for many annotation methods. Nevertheless, it is often criticized for its bias due to insufficiently varying appearance and contrived annotations. We therefore measure the performance of our baseline models, JEC and Lasso, as well as that of individual features, on the more challenging IAPR set. Table 1(b) depicts the performance of different methods on this set. Figure 6 shows some examples of annotated images using the JEC baseline. Trends similar to those observed on the Corel set carry over to the IAPR setting: the JEC baseline leverages multiple, potentially "orthogonal" factors to retrieve the neighboring images most relevant for predicting a reasonable annotation of a query. The baseline also shows performance superior to that of MBRM. While the color features contribute consistently more than the texture descriptors, we observe improved individual performance of the Gabor and Haar measures. This can be due to the presence of a larger number of images exhibiting textured patterns in IAPR compared to the Corel set. It is also interesting to note that selection of relevant features using Lasso exhibits performance on par with JEC in two out of the five measures. This is a potential indicator that the selection criterion for determining the Lasso training set may be more reflective of the true image similarities in IAPR than in Corel.

4.3 ESP

The ESP Game set has arisen from an experiment in collaborative human computing, in this case the annotation of images [20]. An advantage of this set, compared to Corel and IAPR, lies in the fact that its human annotation elicits a collective semantic agreement among annotators, leading to annotations with less individual bias. Table 1(b) depicts
Fig. 7. Predicted keywords using JEC versus human annotations for sample images in the ESP dataset
results of MBRM and our baseline methods on this set. Figure 7 shows some examples of annotated images using JEC. Even though JEC again gives the best performance, the overall low precision and recall rates for this dataset indicate its difficult nature. Also, more so than in the other sets, the texture features play a critical role in the process. For instance, the Haar and Gabor distances fall not far behind the color features.

4.4 Discussion

To be able to solve the image annotation problem at the human level, one perhaps needs to first solve the problem of scene understanding. However, identifying objects, events, and activities in a scene is still a topic of intense research with limited success. The goal of our work was not to develop a new annotation method but to create a family of very simple and intuitive baseline methods. Experiments on three different datasets reaffirm the enormous importance of considering multiple sources of evidence to bridge the gap between the pixel representations of images and their semantic meanings. It is clear that a simple combination of basic distance measures defined over commonly used image features can effectively serve as a baseline method, providing a solid test-bed for developing future annotation methods.

Acknowledgments. Our thanks to Ni Wang for the Lasso training code and Henry Rowley for helpful discussions on feature extraction.
References

1. Yang, C., Dong, M., Hua, J.: Region-based image annotation using asymmetrical support vector machine-based multiple-instance learning. In: Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (2006)
2. Carneiro, G., Chan, A.B., Moreno, P.J., Vasconcelos, N.: Supervised learning of semantic classes for image annotation and retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence 29 (2007)
3. Duygulu, P., Barnard, K., de Freitas, J.F.G., Forsyth, D.A.: Object recognition as machine translation: Learning a lexicon for a fixed image vocabulary. In: European Conference on Computer Vision, pp. 97–112 (2002)
4. Blei, D.M., Jordan, M.I.: Modeling annotated data. In: Proc. ACM SIGIR, pp. 127–134 (2003)
5. Jeon, J., Lavrenko, V., Manmatha, R.: Automatic image annotation and retrieval using cross-media relevance models. In: ACM SIGIR Conf. Research and Development in Information Retrieval, New York, NY, USA, pp. 119–126 (2003)
6. Wang, L., Liu, L., Khan, L.: Automatic image annotation and retrieval using subspace clustering algorithm. In: ACM Int'l Workshop Multimedia Databases (2004)
7. Lavrenko, V., Manmatha, R., Jeon, J.: A model for learning the semantics of pictures. In: Advances in Neural Information Processing Systems, vol. 16 (2004)
8. Monay, F., Gatica-Perez, D.: On image auto-annotation with latent space models. In: ACM Int'l Conf. Multimedia, pp. 275–278 (2003)
9. Feng, S.L., Manmatha, R., Lavrenko, V.: Multiple Bernoulli relevance models for image and video annotation. In: IEEE Conf. Computer Vision and Pattern Recognition (2004)
10. Barnard, K., Johnson, M.: Word sense disambiguation with pictures. Artificial Intelligence 167, 13–30 (2005)
11. Metzler, D., Manmatha, R.: An inference network approach to image retrieval. In: Image and Video Retrieval, pp. 42–50. Springer, Heidelberg (2005)
12. Hare, J.S., Lewis, P.H., Enser, P.G.B., Sandom, C.J.: Mind the gap: Another look at the problem of the semantic gap in image retrieval. Multimedia Content, Analysis, Management and Retrieval (2006)
13. Frome, A., Singer, Y., Sha, F., Malik, J.: Learning globally-consistent local distance functions for shape-based image retrieval and classification. In: Proceedings of the IEEE International Conference on Computer Vision, Rio de Janeiro, Brazil (2007)
14. Tibshirani, R.: Regression shrinkage and selection via the Lasso. J. Royal Statistical Soc. B 58, 267–288 (1996)
15. Datta, R., Joshi, D., Li, J., Wang, J.Z.: Image retrieval: Ideas, influences, and trends of the new age. ACM Computing Surveys (2008)
16. Mori, Y., Takahashi, H., Oka, R.: Image-to-word transformation based on dividing and vector quantizing images with words. In: First International Workshop on Multimedia Intelligent Storage and Retrieval Management (MISRM) (1999)
17. Jin, R., Chai, J.Y., Si, L.: Effective automatic image annotation via a coherent language model and active learning. In: ACM Multimedia Conference, pp. 892–899 (2004)
18. Gao, Y., Fan, J.: Incorporating concept ontology to enable probabilistic concept reasoning for multi-level image annotation. In: 8th ACM International Workshop on Multimedia Information Retrieval, pp. 79–88 (2006)
19. Li, J., Wang, J.: Automatic linguistic indexing of pictures by a statistical modeling approach. IEEE Transactions on Pattern Analysis and Machine Intelligence 25 (2003)
20. von Ahn, L., Dabbish, L.: Labeling images with a computer game. In: ACM CHI (2004)
21. Yavlinsky, A., Schofield, E., Ruger, S.: Automated Image Annotation Using Global Features and Robust Nonparametric Density Estimation. In: Leow, W.-K., Lew, M., Chua, T.-S., Ma, W.-Y., Chaisorn, L., Bakker, E.M. (eds.) CIVR 2005. LNCS, vol. 3568. Springer, Heidelberg (2005)
Behind the Depth Uncertainty: Resolving Ordinal Depth in SFM

Shimiao Li and Loong-Fah Cheong
Department of Electrical and Computer Engineering, National University of Singapore
{shimiao,eleclf}@nus.edu.sg
The support of the POEM project grant R-705-000-018-279 is gratefully acknowledged.
Abstract. Structure from Motion (SFM) is beset by the noise sensitivity problem. Previous works show that some motion ambiguities are inherent and errors in the motion estimates are inevitable. These errors may make an accurate metric depth estimate difficult to obtain. However, can we still extract some valid and useful depth information from the inaccurate metric depth estimates? In this paper, the resolution of ordinal depth extracted from the inaccurate metric depth is investigated. Based on a general depth distortion model, a sufficient condition is derived for ordinal depth to be extracted validly. By studying the geometry and statistics of the image regions satisfying this condition, we found that although metric depth estimates are inaccurate, ordinal depth can still be discerned locally if the physical metric depth difference is beyond a certain discrimination threshold. The resolution level of discernible ordinal depth decreases as the visual angle subtended by the points increases, as the speed of the motion carrying the depth information decreases, and as the points recede from the camera. These findings suggest that accurate knowledge of qualitative 3D structure is ensured in a small local image neighborhood, which might account for biological foveated vision and shed light on the nature of the perceived visual space.
1 Introduction
The Structure from Motion (SFM) problem has attracted much attention over the last two decades from researchers in the computer vision community and many other disciplines. Despite the large number of algorithms proposed, the estimation of motion and structure is beset by the noise sensitivity problem. This has led to many error analysis works trying to understand the behavior of SFM algorithms in the presence of noise [1] [2] [3] [4]. These works have shown that some motion ambiguities are inherent and errors in the motion estimates are inevitable. Since motion errors are inevitable, it is important to understand how the errors and noise may affect the recovered 3D structure information. A few works investigating this problem can be found in the literature [5] [6] [7]. It was found
that errors in motion estimates may cause severe systematic distortion in the estimated depth, and a metrically accurate depth estimate is difficult to obtain [6]. However, despite the above works, there is still little understanding about the nature of the distorted perceived visual space. Are there any rules behind the uncertainty of the recovered structure? Specifically, although the estimated metric depth might differ significantly from the physical value, can we still extract some valid and useful depth information from these inaccurate estimates? Moreover, instead of recovering the depth of individual points, robustly recovering some information about the relative positions among points might be of more importance. The extracted information may be of a less precise form, such as an ordinal or interval depth measurement [8]. It may be qualitative rather than quantitative. It could be obtained more robustly than metric depth estimates and might suffice for many vision tasks such as navigation and recognition. In the computer vision literature, a qualitative description of depth was given in [9]. Qualitative depth representations such as ordinal depth maps have been adopted for visual motion segmentation and visual navigation tasks [10] [11] [12] [13]. On the other hand, some psychophysics experiments were designed to test observers' judgement of interval and ordinal depth relations via depth cues such as texture, shading and stereo [14] [15] [16]. However, despite these works, the computational capability of shape-from-X algorithms to resolve qualitative depth information from inaccurate metric depth estimates is as yet unknown. Such knowledge might provide us with a better understanding of the nature of the perceived visual space and shed light on a proper space representation in which structure information could be obtained robustly via depth cues and applied to vision-based tasks.

In this paper, we aim to investigate the resolution of the ordinal depth extracted via motion cues in the perceived visual space, which is distorted from the physical space due to errors in the motion estimates. Based on a general model describing how recovered depth is distorted by errors in the motion estimates, we derive a sufficient condition under which ordinal depth can be estimated validly. The condition is then explored under orthographic/weak-perspective and perspective projection. Image regions that have valid ordinal depth estimates up to certain resolutions of depth variation are delineated. By studying the geometry and statistics of these regions, we found that although metric depth estimates are inaccurate, ordinal depth can still be discerned reliably if the physical metric depth difference is beyond a certain discrimination threshold. Moreover, the resolution level of discernible ordinal depth decreases as the image distance or visual angle between the point pairs increases. Ordinal depth resolution also decreases as points recede from the camera (as the average depth increases) or as the speed of the motion component carrying the depth information decreases. These findings suggest that accurate knowledge of qualitative 3D structure is ensured in a relatively small local image neighborhood, which might account for biological foveated vision. By fleshing out the computational properties of qualitative visual space perception under estimation uncertainty, we hope
to inspire future computational and psychophysical ventures into the study of visual space representation.

This paper is organized as follows. Section 2 describes a model of depth recovery from motion and of its distortion. Section 3 presents the ordinal depth estimator and the condition for its validity (the valid ordinal depth (VOD) condition). Section 4 investigates the VOD condition under orthographic/weak-perspective projection and presents analytic and delineated results on the various factors that affect the resolution of discernible ordinal depth. Section 5 investigates the VOD condition under perspective projection. Section 6 discusses possible implications. Section 7 presents conclusions and suggestions for future work.
2 Depth from Motion and Its Distortion: A General Model
Notation: In this paper, we denote estimated parameters with the hat symbol $\hat{\ }$ and errors in the estimated parameters with the subscript $e$. The error of any estimated parameter $l$ is defined as $l_e = l - \hat{l}$. $\mathbf{p}^{\perp}$ is the vector perpendicular to vector $\mathbf{p}$.

Generally, 2D image velocities $\dot{\mathbf{p}} = (\dot{p}_x, \dot{p}_y)^T$ due to 3D rigid motion (translation $\mathbf{T} = (U, V, W)^T$ and rotation $\mathbf{\Omega} = (\alpha, \beta, \gamma)^T$) between the camera and the scene under any camera projection model can be written as

$$\dot{\mathbf{p}} = \mathbf{d}\, g(Z) + \dot{\mathbf{p}}_{indep} \qquad (1)$$

where $\mathbf{d}$ is the epipolar direction, $g(Z)$ is a monotonic function of depth $Z$ ($Z > 0$), and $\dot{\mathbf{p}}_{indep}$ is the component of the image velocity independent of depth. The depth information of a scene point can then be recovered up to a scale factor as

$$g(\hat{Z}) = \frac{(\tilde{\dot{\mathbf{p}}} - \hat{\dot{\mathbf{p}}}_{indep}) \cdot \mathbf{n}}{\hat{\mathbf{d}} \cdot \mathbf{n}} \qquad (2)$$

where $\tilde{\dot{\mathbf{p}}} = \dot{\mathbf{p}} + \dot{\mathbf{p}}_n$ is the measured image velocity. $\dot{\mathbf{p}}_n$ is the noise term in the optical flow measurement; it is random, and its distribution depends on the image formation process and the optical flow computation process. $\mathbf{n}$ is a unit vector which specifies a direction; its value depends on the approach used to recover depth. For example, the epipolar reconstruction approach uses $\mathbf{n} = \hat{\mathbf{d}}$; reconstruction from normal flow uses the local image gradient direction as $\mathbf{n}$. Due to errors in the motion estimates and noise in the optical flow measurements, the actual estimate of the depth information $g(\hat{Z})$ ($\hat{Z} > 0$) can be readily shown to be related to the true $g(Z)$ as

$$g(\hat{Z}) = a\, g(Z) + b + c \qquad (3)$$

where $a$, $b$ and $c$ are distortion factors:

$$a = \frac{\mathbf{d} \cdot \mathbf{n}}{\hat{\mathbf{d}} \cdot \mathbf{n}} = 1 + \frac{\mathbf{d}_e \cdot \mathbf{n}}{\hat{\mathbf{d}} \cdot \mathbf{n}}, \qquad b = \frac{\dot{\mathbf{p}}_{indep\,e} \cdot \mathbf{n}}{\hat{\mathbf{d}} \cdot \mathbf{n}}, \qquad c = \frac{\dot{\mathbf{p}}_n \cdot \mathbf{n}}{\hat{\mathbf{d}} \cdot \mathbf{n}} \qquad (4)$$
333
de and p˙ indep e are both functions of image coordinates and motion errors which are random variables whose distribution functions rely on the motion estimation ˆ = 0. process and motion-scene configurations. a , b and c are undefined while d Equation (3) shows us how the errors in motion estimates and noise in image measurements may distort the actual recovered depth. The error in the estimates ˆ while error in of epipolar direction de causes a multiplicative distortion in g(Z); the estimates of depth independent component p˙ indep e and noise in the optical flow measurement p˙ n result in additive distortions. Note that a, b and c are functions of image coordinates. We denote them as ai,j , bi,j , ci,j , where i, j are the indices of image pixels. Let matrices A = [ai,j ], B = [bi,j ], C = [ci,j ]. We call A, B and C the distortion maps, whose entries are random. In each depth recovery process from motion cue, there exist certain realizations of the distortion maps A, B and C. Fig. 1 illustrates realizations of distortion maps A, B for specific motion configurations and realizations of motion errors in image velocities under perspective projection.
Fig. 1. Realization of the distortion maps A, B under perspective projection, isoa contour, iso-b contour are shown. Motion parameters were focus of expansion (FOE):(x0 , y0 ) = (26, 30.5), rotation velocity α = 0.005, β = 0.004, γ = 0.0002. Error in FOE estimates: (x0e , y0e ) = (8, 9), error in rotation: αe = 0.001, βe = 0.001, γe = 0.00005. Focal length: 50 pixel, FOV= 90◦ , epipolar reconstruction scheme was ˆ d adopted (n = d ˆ ), , blue ∗ indicates the true FOE, red ∗ indicates the estimated FOE.
334
S. Li and L.-F. Cheong
The depth distortion model described above can be adopted into any camera projection model, such as orthographic, weak-perspective, perspective and catadioptric cameras. Under perspective projection, distortion factors a, b and c in 1 . the model are related to the distortion factor D in [6] [7] by D = a+(b+c)Z
3 3.1
Estimation of Ordinal Depth Relation Ordinal Depth Estimator
Suppose Z0 and Z1 are the depths of two scene points P0 and P1 , whose image points are p0 and p1 . We denote g(Z0 ) as g0 , g(Z1 ) as g1 . The function sgn(g0 − g1 ), i.e. the sign of (g0 − g1 ), reveals us the ordinal depth relationship between points P0 and P1 , since g(Z) is a monotonic function of depth Z(Z > 0). For example, given g(Z) = Z1 which is the case under perspective projection, we have sgn (g0 − g1 ) = 1 Z0 < Z1 (5) { sgn (g0 − g1 ) = −1 Z0 > Z1 3.2
Valid Ordinal Depth (VOD) Condition and VOD Inequality
g0 − gˆ1 ), However, we only have gˆ0 and gˆ1 at our disposal. Unfortunately, sgn (ˆ may not reveal the correct ordinal relation information, because g Zˆ may not be a monotonic function of Z due to distortion. We now derive the general condition under which sgn (ˆ g0 − gˆ1 ) is a valid estimator for ordinal depth relation. sgn (ˆ g0 − gˆ1 ) is a valid estimator for ordinal depth relation if and only if : sgn (ˆ g0 − gˆ1 ) sgn (g0 − g1 ) > 0
(6)
Referring to (3), the above is the same as ((a0 g0 − a1 g1 ) + (b0 − b1 ) + (c0 − c1 )) (g0 − g1 ) > 0
(7)
where (a0 , b0 , c0 ) and (a1 , b1 , c1 ) are realizations of the distortion factors associated with points p0 and p1 in a structure from motion computation process. Equation (6) or (7) is a sufficient and necessary condition for sgn (ˆ g0 − gˆ1 ) to be a valid estimator for ordinal depth. We call it the Valid Ordinal Depth (VOD) Condition. It reveals how the distortion factors may affect the judgement of the ordinal depth relation. a0 +a1 1 Define g = g0 +g 2 , a = 2 , a = a0 − a1 , b = b0 − b1 , c = c0 − c1 , g = g0 − g1 , then a0 g0 − a1 g1 = ag + ag. The VOD Condition (7) becomes (ag + (ag + b + c)) g > 0
(8)
Generally, given a > 0, it can be shown that a sufficient condition (but not necessary) for (8) to be satisfied is ag + b + c |g| > (9) a
Behind the Depth Uncertainty: Resolving Ordinal Depth in SFM
335
We call (9) the VOD Inequality. It is a sufficient condition for sgn (ˆ g0 − gˆ1 ) to be a valid ordinal depth relation estimator given a > 0. If a < 0, depth order between the two points is ensured to be estimated reversely by VOD Inequality. Equation (8) and (9) reflect that when the average of depth function g, depth function difference g, and the difference of the distortion factors of the two points a, b, and c satisfy certain conditions defined by the inequality, ordinal depth can be validly discerned up to a certain resolution even in the presence of motion errors and image measurement noise. To understand the VOD Condition and VOD Inequality better, we will look into specific projection models, reconstruction schemes, and motion configurations in the Sect. 4 and Sect. 5 to see how various factors may affect the judgement of ordinal depth.
4
Resolving Ordinal Depth under Orthographic/WeakPerspective Projection
We begin our investigation with orthographic/weak-perspective cameras, which are good approximations of perspective camera model under small FOV. Equations associated with these models are relatively simple and easily to be handled. The concepts introduced will also be applied to the investigation with perspective camera model in the next section. 4.1
Depth Recovery and Its Distortion under Orthographic/Weakperspective Projection
Motion field equations under orthographic and weak-perspective cameras can be written as follows [17]: p˙ x = −sZβ − sU + γy + δx,
p˙y = sZα − sV − γx + δy
(10)
is the relative changing rate of the scaling factor s (s = 1 for where δ = orthographic camera; s = Zf for weak-perspective camera, where f is the focal 1 ds s dt
T
length and Z is the average depth of scene points), d = (−β, α) , g(Z) = sZ. Like under perspective projection, depth can only be recovered up to a scale ˆ = 1, which factor. The magnitude of frontal rotation is unsolvable. We set d √ 2 2 means we will recover depth information up to a scale factor k = (α +β ). It is known that in the 2-frame motion estimation process under affine projection, frontal translation can only be estimated in the direction perpendicular to epipolar direction [17], thus p˙ indep can only be partially estimated. Depth can ˆ be recovered as a scaled and offset version of Z. ˜˙ − p ˆ˙ indep−known · n p g Zˆ = ksZˆ + Zc = (11) ˆ·n d T )T ·n ˆ −ˆ ˆ ˆ˙ indep−known = γˆ y + δx, γ x + δy and Zc = (−sU,−sV which is where p ˆ d·n unknown. Depth distortion due to motion errors and noise can be written as g Zˆ = ksZˆ + Zc = a (ksZ) + Zc + b + c (12)
336
S. Li and L.-F. Cheong
where a =
d·n ˆ , d·n
b=
p ˙ indep−known e ·n , ˆ d·n
p˙ indep−known e = (γe y + δe x, −γe x + δe y)T ,
n ·n c = p˙d·n . ˆ In the following discussion in this section, we assume n is the same for every feature point thus Zc is a constant. This allows relative depth between any two points to be recovered up to a scale factor. The scaled relative depth between points p0 and p1 can be recovered as
g(Zˆ0 ) − g(Zˆ1 ) = ksZˆ = g (Zˆ0 ) − g (Zˆ1 ) = a(ksZ) + b + c
(13)
whereZˆ = Zˆ0 − Zˆ1 and Z = Z0 −Z1 , a = a0 = a1 , b = b0 −b1 , c = c0 −c1 . ˆ Specifically, if the epipolar reconstruction scheme (n = d ˆ ) is adopted, we d β ˆ b = have a = cos φe (φ = tan−1 α and φe is the angle between d and d), ⊥ ˆ ˆ γe p + δe p · d and c = p˙ n · d. Note that φ can only be recovered up to a 180◦ ambiguity in the model and thus a may be negative. If a < 0, all the relative depths will be recovered reversely, and the whole scene structure will be recovered up to a mirror transformation. 4.2
VOD Inequality under Orthographic/Weak-Perspective Projection
Here we will consider the VOD Inequality (9) under orthographic/weak-perspective projection. We adopt the epipolar reconstruction scheme (the derivation can be modified using other reconstruction schemes in which n is continuous). We ˆ c = p˙ n · d ˆ where p = p0 − have a = 0, b = γe p⊥ + δe p · d, p1 , p˙ n = p˙ n0 − p˙ n1 . We write p = r(sin θ, cos θ), where r indicates the image distance between p0 and p1 . After manipulation, VOD Inequality under orthographic/weak-perspective cameras takes the form r < ksε (14) |Z| cos φe ,where p˙ n = p˙ n ·dˆ . ε = ∞ in the where ε = γ cos φ−θ b b r ( )+δe sin(φ−θ)+p˙ n e error and noise-free ideal case, which implies that VOD inequality is satisfied in the entire image plane. Equation (14) shows that for two points, if the ratio between the image distance r and depth variation |Z| is less than a certain value ksε defined by a particular realization of motion errors and noise in the optical flow measurements, the SFM system can still get a valid ordinal depth relation judgement even in the presence of errors and noise. 4.3
Ordinal Depth Resolution and Discrimination Threshold(DT)
Equation (14) can be written as : |Z| > DT,
DT =
r ksε
(15)
Behind the Depth Uncertainty: Resolving Ordinal Depth in SFM
337
Equation (15) indicates that when depth variation is larger than a discrimination threshold(DT), ordinal depth relation can be judged validly by the SFM system. DT is an indication of the ordinal depth resolution. It gives us the smallest physical metric depth variation that ensures ordinal depth to be resolved validly by VOD Inequality. The bigger DT is, the poorer the ordinal depth resolution. It is noted that DT is a function of r – the distance between p0 and p1 in the image. Generally, for a certain realization of errors in motion estimates and noise in image velocity, DT increases as r increases. This means that ordinal depth resolution decreases as image distance increases. Equation (15) shows that ordinal depth resolution decreases as motion errors (ε) increase and as the magnitude of the motion component carrying depth information(frontal rotation here) k decreases. 4.4
VOD Function and VOD Region
We define VOD function and VOD region as following: - VOD function: Given certain realization of errors in motion estimates and noise in the optical flow measurements, for an image point p0 , if image point pi satisfies the VOD Inequality (9) with p0 for depth variation |Z| = DT and average depth Z, function V OD(p0 , pi , DT, Z) = 1; otherwise function V OD(p0 , pi , DT, Z) = 0. - VOD region: VOD region R of image point p0 for DT at Z is a set of image points: R(p0 ,DT,Z) = {pi |V OD(p0 , pi , DT, Z) = 1}. VOD region with DT contains all the image points that satisfy the VOD inequality with point p0 for depth variation bigger than DT and thus their ordinal depth relation with p0 can be recovered validly for depth variation bigger than DT when average depth of the two points is Z. Since motion errors and noise are random, VOD region is a random region in the image plane. Figure 2 illustrates the realizations of VOD regions for different DT under certain motion error realizations when effect of noise is ignored. We indicate the width of the region by the biggest circles centered at the investigated point drawn inside the region (as noise in optical flow measurement is ignored). It is shown in Fig. 2 that under motion errors, realization of VOD region of an image point p0 is of band shape under orthographic/weak-perspective projection. The width of the band increases as DT increases. The anisotropic property is due to the dependence of ε on θ. 4.5
Ordinal Depth Resolution and Visual Angle
We now investigate the relationship between ordinal depth resolution and visual r angle. Define the visual angle subtended by two image points as τ = 2 tan−1 2f . VOD inequality can be written in terms of visual angle as | Z| >
τ 2 tan Z εk 2
(16)
338
S. Li and L.-F. Cheong
Fig. 2. Realization of VOD region of p0 = (0, 0)T (denoted by red asterisk) for different DT under weak-perspective projection. VOD region is bounded by black lines. The big red circles show the width of the region bands. τ is the visual angle between points on the circle and p0 . The rainbow at the background shows the change of distortion factor b. . Motion parameters and errors: T = (0.81, 0.2, 0.15)T , Ω = (0.008, 0.009, 0.0001), Z = 35000, δ = −4.2857e − 006, φe = 28.6◦ , δe = 1.0e − 006, γe = 1.0e − 006, p˙ n = 0, f = 250.
This shows that given ε and k, for two image points subtending visual angel τ , the ordinal depth relation between points in this region can be validly resolved 2 when the depth variation | Z| is greater than DT = εk tan τ2 Z . The bigger the visual angle, the higher DT. Therefore, ordinal depth resolution decreases as visual angle increases. Moreover, ordinal depth resolution also decreases as average depth Z increases. Fig. 2 also shows the increase of DT in the direction perpendicular to the band as the visual angle τ increases. 4.6
VOD Reliability
Practically, VOD region is random due to the randomness of errors and noise. To deal with the statistical issue, we define the VOD reliability of image point pi with respect to the investigated point p0 as PV OD(p0 ,pi ,DT,Z) = P (V OD(p0 , pi , DT, Z) = 1) = P (pi ∈ R(p0 ,DT,Z) )
(17)
where P (.) is the probability of certain event. VOD reliability gives us the probability that image point pi falls inside p0 ’s VOD region for DT at Z. It gives us the lower bound of the probability of correct judgement of the depth order relationship between point p0 and pi (for depth variation bigger than DT at average depth Z). Particularly, under orthographic/weak-perspective projection, we have: Z εk ) (18) PV OD(p0 ,pi ,DT,Z) = P (r < |Z|ksε) = P (τ < 2tan−1 Z 2 It is clear that, generally, under certain error and noise level, the VOD reliability decreases as the distance r between the points and the visual angle τ subtended by the points increase.
Behind the Depth Uncertainty: Resolving Ordinal Depth in SFM
339
Fig. 3. Left: VOD Reliability of image points w.r.t. the image center for DT = 100 at Z = 35000. Right: VOD Reliability of image points w.r.t. the image center for different DT at Z = 35000 as visual angle (◦ ) between the point pair changes.
Figure 3(Left) shows the VOD reliability of image points w.r.t. the image center p0 for DT = 100 at average depth Z = 35000. This figure is the result of repeating the SFM process described in [17] 500 times on 1000 randomly generated points when the level of isotropic noise in the optical flow is 10%. Figure 3(Right) shows the result for different DT as visual angle increases. It is shown that VOD reliability drops down significantly as distance between pi and p0 increases. This indicates that for the same depth variation, ordinal depth judgement by SFM systems for a pair of closer points can be considered as more reliable and trustworthy. Pairs of points subtending smaller visual angle have more reliable ordinal depth judgement. Ordinal depth information is strong in the local image areas within small visual angle despite the motion uncertainties and noise. It is noted that the anisotropic property of the VOD regions shown in Fig. 2 disappears because of the randomness of the band direction.
5
Resolving Ordinal Depth under Perspective Projection
In this section, the ordinal depth resolution is investigated under perspective projection. Our analysis is first carried out under pure lateral motion configuration (Sect. 5.1) since it is very similar to the orthographic/weak-perspective projection analysis above in the sense that all points have epipolar lines lying in the same direction. Then the effect of adding forward motion is analyzed in Sect. 5.2. Some detailed derivations are omitted due to the space limitation. 5.1
The Pure Lateral Motion Case
We assume that the SFM system knows that pure lateral motion is executed. ˆ = W = 0. Image velocity equation under the case is Therefore W p˙ x =
αxy βx2 −U f −βf +γy + − , Z f f
p˙ y =
βxy αy 2 −V f +αf −γx− + (19) Z f f
340
S. Li and L.-F. Cheong
Fig. 4. Realization of VOD region of p0 = (0, 0)T (denoted by red cross) for different DT under perspective projection, pure lateral motion. Left: second order flow ignored. Right: second-order flow considered. VOD region is bounded by black lines. The background rainbow shows the change of distortion factor b. . Motion parameters and errors: T = (18, 22, 0)T , Te = (15.3, 24.5, 0)T , (translation direction estimation error is −7.3◦ ), Ωe = (0.00002, 0.00002, 0.00005), Z = 20000, p˙ n = 0, f = 250. (U,V)T √ U2 +V2
as (cos φ, sin φ)T . The ˆ + distortion factors can be written as a = cos φe , b = f (αe sin φˆ − βe cos φ) αe xy βe x2 −βe xy 2 2 ˆ ˆ ˆ ˆ γe (cos φy−sin φx)+O (x, y), where O (x, y) = cos φ( − )+sin φ( + We denote the direction of lateral motion d =
f
αe y 2 f )
f
f
is the second order term, which only exists under errors in frontal rotation estimates. The VOD inequality can be written as |Z| > DT,
DT =
r ksε
(20)
which takes the same √ form as (15), with the meaning of the parameters slightly different here. k = U 2 + V 2 is the magnitude of lateral translation. s = f2 , Z √ where Z = Z0 Z1 is the geometric mean of the depths of the two points. 2 2 cos φe , where O2 = O (x0 ,y0 )−O (x1 ,y1 ) . ε = γ sin φ−θ r ( b )+O2 +p˙ n e Figure 4 shows realization of VOD regions of the image center point under pure lateral motion. When second-order flow is ignored, the shape of the region is the same as that under orthographic/weak-perspective projection (left). While second-order flow considered, the lines change to hyperbolae and the band shapes are distorted (right), though the general topology remains. 5.2
Adding Forward Motion: The Influence of FOE
Here we add the forward translation component in. It is well known that when focus of expansion(FOE) is near the image, recovered depth is highly unreliable. This phenomenon is also shown in Fig. 1, in which the values of distortion factors change fast near the estimated FOE. Ordinal depth recovery in this case is of little practicability. Therefore, our investigation here is restricted to the case | that FOE is far away from the image bound. We use angle μ = arctan √U|W 2 +V 2
Behind the Depth Uncertainty: Resolving Ordinal Depth in SFM
341
Fig. 5. Realization of VOD region of p0 = (0, 0)T (denoted by red cross) for different DT under perspective projection with forward translation added to the motion configuration shown in Fig. 4. Left: μ = 15◦ . Right: μ = 25◦ . μe = 0 in both cases. Only first-order optical flow is considered for the illustration.
to measure the amount of forward translation component added in. The bigger μ, the bigger the forward translation executed, and the nearer FOE is to the image. VOD region realizations with forward translation added are shown in Fig. 5. Several observations are summarized below: 1. Adding forward translation narrows the width of VOD region realization and distorts the band shape. VOD region realization is narrower in the image region nearer to the estimated FOE. However, the topology remains the same as under pure lateral motion. 2. VOD region realization is bounded by curves which can be proved to pass through the estimated FOE. 3. The bigger the forward translation component, the more the VOD region shrinks. Therefore, ordinal depth resolution decreases as image points approach the FOE.
6 6.1
Discussion Psychophysical and Biological Implication
Our results show that SFM algorithms can obtain reliable ordinal depth resolution in small visual angles despite the motion uncertainties. Ordinal depth resolution decreases as visual angle increases. This agrees with the intuition that in human vision, the depth order of two objects close together can be determined with much greater ease than that of objects far apart. Moreover, our result is also consistent with the experimental findings in psychophysics [14] [15] which showed that human vision gives better judgement of ordinal depth relation and depth intervals for pairs of closer points using stereo or texture depth cue. From an evolutionary perspective, foveated vision is adopted for many biological vision systems. For example, humans have a sharp foveated vision. The spatial resolution of the human eye decreases by more than an order of magnitude within a few degrees from the optical axis and at least two orders at ten
342
S. Li and L.-F. Cheong
degrees from the optical axis. One possible explanation for this phenomenon may be that depth cues such as motion can only resolve various levels of depth information precisely in a small visual angle due to errors in ego-motion estimation, as shown by our results. Therefore, foveated vision might be an adaptive result of natural selection in response to the computational capability and limitation of Shape from X modules. 6.2
Space Representation: Global vs. Local
In another aspect, the result that ordinal depth resolution decreases as visual angle increases also suggests that accurate ordinal 3D structure recovery is ensured in small local image neighborhood. If a global space representation is adopted, how to describe the global links between regions with local accuracy of qualitative 3D structure knowledge is an important issue. Such issue can perhaps be called the glocalization problem.
7
Conclusion and Future Work
In this study, the resolution of ordinal depth discernible from the inaccurate metric depth estimates in SFM was investigated theoretically based on a novel general depth distortion model. It was shown that: 1. In SFM algorithms, although accurate metric depth may be difficult to obtained due to motion errors, ordinal depth can still be discerned locally if physical metric depth difference is beyond a certain discrimination threshold. 2. The reliable ordinal depth resolution was found to decrease as visual angle increases, as speed of the motion component carrying depth information decreases, as points recede from the camera, and as image points approach the estimated FOE. These findings in the study are important since they suggest that accurate qualitative 3D structure information is ensured in small local image neighborhood. This may provide possible explanation for biological foveated vision. Future parallel study on qualitative depth information may be carried out for other depth cues such as stereo and texture. Moreover, the problem of describing the global links between different regions each with local accuracy of qualitative 3D structure knowledge, should be given serious theoretical attention.
References 1. Adiv, G.: Inherent ambiguities in recovering 3-d motion and structure from a noisy flow field. IEEE Trans. Pattern Anal. Mach. Intell. 11, 477–489 (1989) 2. Weng, J., Ahuja, N., Huang, T.S.: Optimal motion and structure estimation. IEEE Trans. Pattern Anal. Mach. Intell. 15, 864–884 (1993) 3. Daniilidis, K., Spetsakis, M.E.: Understanding noise sensitivity in structure from motion. In: Aloimonos, Y. (ed.) Visual Navigation, pp. 61–88 (1993)
Behind the Depth Uncertainty: Resolving Ordinal Depth in SFM
343
4. Oliensis, J.: A new structure-from-motion ambiguity. IEEE Trans. Pattern Anal. Mach. Intell. 22, 685–700 (2000) 5. Szeliski, R., Kang, S.B.: Shape ambiguities in structure from motion. IEEE Trans. Pattern Anal. Mach. Intell. 19, 506–512 (1997) 6. Cheong, L.F., Ferm¨ uller, C., Aloimonos, Y.: Effects of errors in the viewing geoemtry on shape estimation. Computer Vision and Image Understanding 71, 356–372 (1998) 7. Cheong, L.F., Xiang, T.: Characterizing depth distortion under dfferent generic motions. International Journal of Computer Vision 44, 199–217 (2001) 8. Stevens, S.S.: On the theory of scales of measurement. Science 103, 677–680 (1946) 9. Aloimonos, Y., Ferm¨ uller, C., Rosenfeld, A.: Seeing and understanding: Representing the visual world. ACM Computing Surveys 27, 307–309 (1995) 10. Ferm¨ uller, C., Aloimonos, Y.: Representations for active vision. In: Proc. Int’l Joint Conf. Artificial Intelliegence, pp. 20–26 (1995) 11. Ogale, A.S., Ferm¨ uller, C., Aloimonos, Y.: Motion segmentation using occlusions. IEEE Trans. Pattern Anal. Mach. Intell. 27, 988–992 (2005) 12. Lourakis, M.: Non-metric depth representations: Preliminary results (1995) 13. Teo, C.-L., Li, S., Cheong, L.-F., Su, J.: 3D ordinal constraint in spatial configuration for robust scene recognition. In: Proc. Int’l Conf. Pattern Recognition (ICPR 2008) (2008) (to be appeared) 14. Todd, J.T., Reichel, F.D.: Ordinal structure in the visual perception and cognition of smoothly curved surfaces. Psychological Review 96, 643–657 (1989) 15. Norman, J.F., Todd, J.T.: Stereoscopic discrimination of interval and ordinal depth relations on smooth surfaces and in empty space. Perception 27, 257–272 (1998) 16. Koenderink, J.J., van Doorn, A.J., Kappers, A.M.L.: Ambiguity and the ’mental eye’ in pictorial relief. Perception 30, 431–448 (2001) 17. Cheong, L.F., Li, S.: Error analysis of sfm under weak-perspective projection. In: Narayanan, P.J., Nayar, S.K., Shum, H.-Y. (eds.) ACCV 2006. LNCS, vol. 3851, pp. 862–871. Springer, Heidelberg (2006) 18. Xiang, T., Cheong, L.F.: Understanding the behavior of SFM algorithms: a geometric approach. Int’l J. Computer Vision. 51, 111–137 (2003)
Sparse Long-Range Random Field and Its Application to Image Denoising Yunpeng Li and Daniel P. Huttenlocher Department of Computer Science, Cornell University, Ithaca, NY 14853 {yuli,dph}@cs.cornell.edu
Abstract. Many recent techniques for low-level vision problems such as image denoising are formulated in terms of Markov random field (MRF) or conditional random field (CRF) models. Nonetheless, the role of the underlying graph structure is still not well understood. On the one hand there are pairwise structures where each node is connected to its local neighbors. These models tend to allow for fast algorithms but do not capture important higher-order statistics of natural scenes. On the other hand there are more powerful models such as Field of Experts (FoE) that consider joint distributions over larger cliques in order to capture image statistics but are computationally challenging. In this paper we consider a graph structure with longer range connections that is designed to both capture important image statistics and be computationally efficient. This structure incorporates long-range connections in a manner that limits the cliques to size 3, thereby capturing important second-order image statistics while still allowing efficient optimization due to the small clique size. We evaluate our approach by testing the models on widely used datasets. The results show that our method is comparable to the current state-of-the-art in terms of PSNR, is better at preserving fine-scale detail and producing natural-looking output, and is more than an order of magnitude faster.
1 Introduction Random fields are among the most common models used in low-level vision problems such as image restoration, segmentation, and stereo. The strength of these models lies in their ability to represent both the interaction between neighboring pixels and the relationship between the observed data values and estimated labels at each pixel. A random field model defines a graph structure with potential functions over the labelings of cliques in this graph. For low-level vision problems the graph generally has a node corresponding to each pixel, edges connecting certain pairs of neighboring pixels, and potentials that encourage neighboring pixels to have similar labels. Two key issues in the application of random field models to a given problem are (i) defining appropriate graph structures and (ii) finding suitable potential functions over the cliques of that graph. Most research has focused on the latter problem, whereas here we focus on the former. In this paper we study sparse long-range random field (SLRF) models, that represent interactions between distant pixels using sparse edges so as to maintain a fixed clique size. The size of the clique is chosen so as to be appropriate for a particular problem. In image denoising, second-order spatial terms are important for representing intensity D. Forsyth, P. Torr, and A. Zisserman (Eds.): ECCV 2008, Part III, LNCS 5304, pp. 344–357, 2008. c Springer-Verlag Berlin Heidelberg 2008
Sparse Long-Range Random Field and Its Application to Image Denoising
345
change. Thus we use a graph structure that has cliques of size three, as discrete approximations to a second order function require three data values. In this framework the potential functions are defined over fixed-size cliques that have different spatial extents, effectively encoding image structure of a fixed order (defined by the clique size) at multiple scales of observation. This enables such models to produce smooth labelings for highly noisy images but at the same time allows efficient solution. In contrast, other recent work using higher-order models and longer-range connections, such as the Field of Experts (FoE) model [1], has large cliques and thus does not support fast optimization. Our main interest is thus in investigating whether simpler models with smaller cliques can produce results comparable to the state-of-the-art achieved with more powerful models, such as FoE, while using much less time. The experiments that we report here, performed on widely used datasets, indicate that this is indeed the case. Not only do we achieve comparable peak signal-to-noise ratio (PSNR) to large-clique methods, our method is also better at avoiding over-smoothing although that is not captured by the PSNR measure. At the same time, our method is over 100 times faster than FoE and at least 10 times faster than other spatial-domain methods that achieve state-of-the-art results. 1.1 Motivation and Related Work Random field models in statistics have existed for decades [2] and also have a long history in computer vision (e.g, [3,4]). A Bayesian formulation of pixel labeling problems using a Markov random field (MRF) model decomposes the problem into a prior that enforces spatial consistency of the labels, and a likelihood function that encourages agreement between the labels and the observed data. We use the more general terminology “spatial term”, rather than “prior”, and “data term”, rather than “likelihood”, as this applies to both Bayesian and non-Bayesian models, as well as to models that do not have a probabilistic interpretation. In recent years, MRF models have been a popular choice for many low-level vision problems such as image restoration (e.g., [1,5,6]). The resurgence in the use of MRF models is largely complemented by the development of efficient approximation algorithms for inference on these models, including discrete methods such as loopy belief propagation (LBP) [7], tree-reweighted message passing [8,9] and graph cuts (GC) [10], as well as gradient-based methods such as diffusion [1] and variational inference [6]. Each of these methods has its own pros and cons, some of which have been investigated in [11]. The more recent conditional random field (CRF) [12] was originally proposed as a tree-structured model to address the label bias of hidden Markov models (HMM) in the context of natural language processing, and has since also been applied to loopy graphs. As a discriminative model, CRF is more convenient in situations where the generative process is unclear, the spatial term (i.e. the prior) depends on the observations, or the label of one site is related to observations at multiple sites (e.g., [13,14]). The most widely used graph structure for random field models in low-level vision is a grid where each node is connected to its four immediate neighbors in the horizontal and vertical direction. While this model is simple and convenient to set up and optimize, it suffers from a number of drawbacks. 
First, as noted above, it can only represent firstorder properties of the image, because it is based on pairwise relations between pixels.
346
Y. Li and D.P. Huttenlocher
Second, it can be sensitive to noise. Consider a connected region of√n nodes in a 4connected grid graph. In this case there are only approximately O( n) connections between nodes in the region and those outside the region, because the boundary grows approximately as the square root of the area. Thus the data term of the n nodes over the region comes to dominate the connections outside the region, especially when robust (i.e. discontinuity preserving) spatial terms are used. For example, in image denoising this can be problematic for high noise levels because good estimates require substantial sized regions. Another way to view this is in terms of the standard deviation of the mean over the region. For concreteness, consider an image with additive Gaussian noise of σ = 25, and a√5 × 5 region of the image. The standard deviation of the mean of that region is σ/ 5 · 5 = 5. At the same time, the perimeter-to-area ratio of such a neighborhood is only 4 · 5/52 = 4/5, or 1/5 that of a single pixel. Hence the collective labeling of the group is dominated by its data term, which is subject to a non-trivial standard deviation of 5 in its mean. The 4-connected grid graph is a special case of graphs that connect a node to all nodes that lie within some distance d. In contrast to our approach, such graphs produce quite dense edges even for moderate values of d. Early work on MRF models in vision, such as [15], used these models but restrict their attention to pairwise clique potentials. However, such pairwise models do not always capture the underlying distribution. For image denoising, in particular, second-order statistics are important, implying a need for cliques of size at least three. Problems with earlier pairwise random field models have led to higher-order models such the Field of Experts (FoE) [1], where overlapping blocks of pixels are used rather than purely local connections. However, such models are computationally intensive due to their relatively large complete subgraphs. In addition, the learnt priors are also unintuitive, despite recent interpretations as derivative filters [6] and frequency filters [16]. This motivates our approach, which uses long-range connections to address the problem of noise but does so in the context of a simple graph structure with cliques of size three, so as to efficiently encode both first- and second-order spatial properties. The work bearing the most similarity to ours is that of [17], which uses long-range edges in the problem of texture synthesis. Clique families are chosen using heuristic search based on the strength of interaction, which is evaluated on the training data. However, the model is restricted to pairwise clique potentials. Moreover each model is trained to synthesize a particular type of texture, which usually consists of some characteristic (and often repeating) patterns. Thus it is not well suited to modeling generic structures, such as those of natural scenes.
2 Sparse Long-Range Random Field We now introduce our model. A sparse long-range random field (SLRF) is constructed so as to have a fixed clique size regardless of the spatial extent of the edges in the grid. Consider a set of nodes V arranged on a grid, where there is a spatial distance defined between each pair of nodes. By choosing edges that increase in length exponentially, we can construct a graph that has a fixed clique size even though there is no bound on the maximum edge length. Consider the case of cliques of size 3, which as noted above (and
Sparse Long-Range Random Field and Its Application to Image Denoising
347
Fig. 1. Horizontal 3-cliques of E42 with edge lengths {1, 2, 4, 8}
discussed in more detail below) are important for image restoration because they enable the representation of second-order spatial statistics. A local 3-clique has edges of length 1 that connect each node to its immediate neighbors and edges of length 2 that connect each node to those two-away. Adding edges of length 4 to each node would then create additional 3-cliques composed of edges of length 2, 2 and 4, but does not increase the maximum clique size. Similarly for edges of length 8 and so on, as illustrated for the one-dimensional case in Figure 1. More formally, each node is connected to other nodes at distance 2k away from it, for integer values k such that 0 ≤ k < K. In other words the density of connections 2 decreases exponentially with distance. We let EK denote this set of edges; for instance, 2 E4 is the set of edges of length {1, 2, 4, 8}. More generally, one could consider graphs where the edges are of length bk for some b > 2 which yields sparser graphs. However, for b = 3 the resulting graphs already have maximum cliques of only size 2, which for image denoising does not allow representing second-order image statistics. In the case of a two-dimensional image grid, edges may correspond to any underlying 2 again orientation. Considering both horizontal and vertical directions using edges in EK yields a graph with maximum cliques of size 3. These cliques correspond to spatial neighborhoods at different scales of observation and at different orientations (horizontal or vertical), but in each case capture second-order spatial information based on three nodes. The inclusion of long-range edges in the SLRF offers the following advantages over a local grid model: – Improved information flow. The graph requires fewer hops to reach one node from another, as is illustrated in Figure 1. In the example shown in the figure, the maximum graph distance between any two nodes is 2. Without long range edges, the corresponding numbers would be 8. In general, it can be shown that any two nodes v1 and v2 with grid distance d have graph distance O(d/bK−1 +bK). The decreased graph distance facilitates the flow of information over the random field. – Increased resistance to noise. Long-range edges address the local bias problem discussed in Section 1. For any n × n neighborhood S with n up to bK−1 (i.e. up to the length of the longest edges), each node in S is connected to at least four nodes outside of S. Hence the total amount of interaction between S and the environment is now proportional to the area of S instead of its perimeter as in the 4-connected grid. This makes the strength of the spatial constraints between pixel blocks comparable to that of the data term over the block, suppressing noise-induced local bias without resorting to increasing the weight of the spatial term (which tends to cause over-smoothing).
348
Y. Li and D.P. Huttenlocher
The sparse nature of the SLRF also has the following computational benefits: – Small, fixed clique size. As previously discussed, the size of the maximal cliques in an SLRF is either 2 or 3 regardless of the span of the longest range interaction being modeled. The low clique size allows arbitrary clique potentials to be optimized globally using efficient approximation algorithms such as belief propagation. In contrast, high-order random fields in general can only be optimized with continuous methods that rely on gradient (e.g. diffusion [1]), which may not exist in problems with discrete labels. Even when gradient-based methods are applicable, the running time is still super linear in the size of the cliques. 1 – Low computational cost. Since SLRF models have only K different edge lengths in an exponential series, the total number of edges in an SLRF is no more than K times of that in the underlying grid. Hence an SLRF model is at most logb d times as costly as one with only short edges, where d is the length of the longest ranged interaction to be modeled. If on the other hand each node is connected to all the nodes near it up to some distance d (such as in [15]), the resulting graph would have Θ(d2 ) edges and hence much higher computational. Although the model can still be called “sparse” from a graph theoretical point of view (as any graph with edge density independent of its size will qualify), it is clearly not so from the aspect of efficient optimization. 2.1 Cliques and Clique Potentials 2 2 Let C = CK denote the set of all cliques in an SLRF with edges EK for a fixed K. There are several distinct types of cliques in this set, which can be characterized by the lengths of their edges. For instance, 2 = C1,1,2 ∪ C2,2,4 ∪ ... ∪ C2K−2 ,2K−2 ,2K−1 CK
(1)
where Ca,b,c is the set of 3-cliques with edge length a, b, and c. Each of these sets of 3-cliques corresponds to observations at a different spatial scale, based on the lengths of the edges. Let T (c) denote the type of clique c, e.g. T (c) = (1, 1, 2) ∀c ∈ C1,1,2 and T (c) = (1) ∀c ∈ C1 . We represent the likelihood of the random field as an exponential family of cost functions f and g parameterized by θ, where fTθ (c) is the spatial term and g θ is the data term. Thus given observation I, pθ (X|I) =
1 exp(− fTθ (c) (xc ; I) − g θ (xv ; I)) Z(θ) c∈C
(2)
v∈V
where X is the labeling of the random field, and xc and xv are the configurations of clique c and node v respectively. The configuration of a clique or node includes its labeling, and may also include input-dependent latent variables such as image gradient. This formulation is similar to a CRF except that parametric functions over the clique 1
The time for computing the gradient is linear in clique size for using linear filters, and quadratic in the general case. At the same time, larger cliques also tend to require more iterations.
Sparse Long-Range Random Field and Its Application to Image Denoising
349
and node configuration space Xf and Xg are used instead of features. The random field becomes Markovian when f is independent of the observed data, i.e. fTθ (c) (xc ; I) = fTθ (c) (xc ) and g is a function only of the observation at a single node, i.e. g θ (xv ; I) = g θ (xv ; I(v)).
3 Parameter Estimation To learn the parameters θ, it is desirable to find the maximum a posteriori (MAP) estimate. By applying Bayes’ rule and assuming a uniform prior over the parameter space, this is equivalent to finding the maximum likelihood (ML) estimate. Computing the maximum likelihood estimate is nevertheless hard on loopy graphs due to the intractability of the partition function Z(θ) in pθ (X|I). This makes it impossible to use the standard CRF learning scheme, since it is designed for tree-structured graphs where the partition function can be computed efficiently using dynamical programming [12]. Various approaches have been proposed to address this difficulty. Gradient descent methods [18] have been used to obtain a local minimum in the negative log-likelihood space. The expectation over the model is nonetheless intractable to compute and often has to be estimated by MCMC sampling [1,18], by loopy belief propagation [7,19], or approximated using the mode (i.e. MAP labeling) [20]. The last case resembles the perceptron algorithms [21], except that the inference is not exact. As recently proposed in [16], a basis rotation algorithm based on expectation maximization (EM) can be used to learn parameters for filter based image models. This comes from a key observation that the partition function can be kept constant by constraining the parameter vectors to have unit norm. An alternative to maximum likelihood is using discriminative training to optimize for some loss function, typically evaluated on the mode. Such a loss can be minimized by descending along its derivative in the parameter space, when the mode has a closed-form solution [14] (or approximate solution [6]). Since some approximation must be used, we take the approach of optimizing for the marginal likelihood of the random field cliques, which effectively approximates the global partition function using the product of local partition functions over the cliques. This can be considered as form of piecewise local training [22,23], which minimizes a family of upper bounds on the log partition function. It can be shown that maximizing the marginal likelihood is equivalent to minimizing the Kullback-Leibler (KL) divergence DKL (p0 ||pθ ) between the empirical distribution p0 and the model distribution pθ for each type of cliques. The minimization can be performed using gradient descent with the standard update rule (as in [1]) ∂fθ ∂fθ δθ = η − , ∂θ pθ ∂θ p0 where ·pθ and ·p0 denote the expectation with respect to the model and the empirical distribution respectively, and η is the learning rate. Unlike in FoE we do not need to sample, since the model expectation can be computed by summing over all possible values of clique configurations. This computation is inexpensive in our model due to the small clique size. As noted in [1] performance
350
Y. Li and D.P. Huttenlocher
can be improved by learning an additional weight for the data term, which we also use for our model.
4 Image Denoising To test the effectiveness of our model, we apply it to the widely studied problem of image denoising. As is conventional in the literature, the image is assumed to be grayscale and have been corrupted by additive white Gaussian noise of known standard deviation. Since this is a well-defined generative process, we model the data term using the known Gaussian noise model and only the spatial term needs to be estimated. As described above we use 3-cliques since they capture second-order properties. In order to illustrate the importance of these second-order statistics we considered the marginal statistics of the images in the Berkeley dataset [24] that is commonly used in evaluations of such methods. These images show a strong correlation between the distribution of neighboring pairs, suggesting that simple pairwise models are less appropriate (see Figure 2). c c , v0c , v+s ), where v0c is the center We denote clique c of type Cs,s,2s as a triplet (v−s c c node of c, v−s is the left node, and v+s is the right node. We limit our discussion to horizontal cliques, as the case for vertical ones is essentially the same. Let d1 (c) = c c c c X(v+s )−X(v−s ) and d2 (c) = X(v−s )+X(v+s )−2X(v0c ), where X is the labeling of the image. Hence d1 and d2 are proportional to the discrete first and second derivatives of the image luminance respectively. In other words, the clique potential couples both first and second order spatial information. The Lorentzian function has been widely used to model the statistics of natural images (e.g., [25,1,6]). In our case, we use a family of 2-dimensional Lorentzian functions for the spatial term, i.e. 1 f (xc ) = α · log(1 + [(β1 d1 )2 + (β2 d2 )2 ]) 2
(3)
where {α, β1 , β2 } is the set of parameters for cliques of type T (c). Hence f is intensityinvariant and regulates both the first and the second derivatives of the spatial signal. We
(a)
(b)
(c)
Fig. 2. Frequency (unnormalized, logarithm scale) plotted against gradients of the two neighboring pairs in a linear 3-clique, from the Berkeley dataset [24]. (a) The empirical marginal distribution. (b) The would-be distribution if gradients of the neighboring pairs were independent. (c) The distribution from a fitted Lorentzian cost function.
Sparse Long-Range Random Field and Its Application to Image Denoising
351
choose this family since it not only fits the statistics of natural images (Figure 2) but is also able to produce smooth gradient while preserving discontinuities. This form is subtly different from filter based models, such as [1,16], that use a linear combination of functions over filter responses; in our case the first and second order derivatives are coupled, that is both orders of derivatives are inputs to the same non-linear function rather than using a linear combination of separate non-linear functions of each spatial filter. It has been noted that natural images are self-similar over different spatial scales [26,27]. As a result, cliques with different scales (i.e. edge lengths) all have very similar marginal distributions. This makes the marginals of cliques at different scales highly correlated, which we also observed empirically. Hence using independently collected marginals as the clique potentials is not a good model when dealing with natural scenes. To account for this factor, we reweigh the distribution of smaller-scale cliques according to the marginals of larger-scale ones, so as to make the former learn different trends from what have already been captured by the latter. 4.1 Inference For denoising, inference can be performed using either belief propagation (BP) [5] or gradient based methods such as limited memory BFGS (L-BFGS) [28]. We experimented with both and found that L-BFGS produces the same quality of results as BP while requiring less running time. Hence the results we report in this paper are based on using L-BFGS. It should be noted, however, that some problems in vision are of a discrete nature and cannot be solved using gradient-based methods. In those cases, discrete optimization techniques such as BP and graph cuts are required.
5 Experimental Results To evaluate the model for image denoising we used the Berkeley Segmentation Dataset and Benchmark [24] in order to compare the results with previous methods. The models were trained on the training images in the dataset and performance was measured on a subset of the testing images, which are the same as the ones used in [1,6,13,14]. In all the experiments we ran L-BFGS for 20 iterations, which we found to be sufficient for our model. 2 This is in contrast to large-clique methods, which usually require many hundred iterations to produce results of good quality [1,14]. Table 1 shows the denoising performance of our model along with the results from the FoE model in [1], the steerable random field (SRF) in [13], the Gaussian CRF in [14], and the variational MRF in [6]. This table reports the peak signal-to-noise ratio (PSNR) of each method averaged over the 68 test images (higher is better). These results demonstrate that the performance of our approach is comparable to that of recent top performing random field methods using the standard measure of PSNR. However, as is widely recognized, PSNR does not tell the entire story, thus we also consider some 2
We also experimented with conjugate gradient as the optimization method, which achieved the same performance but needs a few more iterations (about 30 as opposed to 20 for L-BFGS).
352
Y. Li and D.P. Huttenlocher
Table 1. Denoising performance of SLRF measured in peak signal-to-noise ratio (PSNR), higher is better. Results from other random field based denoising methods are shown for comparison. (Bold indicates the best performance among the 3-clique MRF models, asterisk denotes the best overall result, and “–” indicates no published data available. ) Model \ Noise σ 5 10 15 20 25 SLRF, K=4 36.90 32.71∗ 30.39 28.86∗ 27.73 Local MRF, K=2 36.51 32.04 29.81 27.89 26.41 FoE [1] GCRF [14] Var. MRF [6] SRF [13]
(a)
– – – –
(b)
32.67 30.47∗ 28.79 27.59 – – – 28.04∗ – 30.25 – – – – 28.32 –
(c)
(d)
Fig. 3. Denoising output for a medium-texture scene. (a) Original image. (b) Corrupted by Gaussian noise, σ = 25. (c) Restored using our SLRF model, PSNR = 28.63. (d) Restored using FoE [1], PSNR = 28.72. The magnified view shows that our model, while having comparable PSNR, does a significantly better job at preserving the small and low-contrast structures of the stonework below the windows.
example images in more detail both to show the overall quality and to highlight the extent to which our method removes noise without smearing out the details. Figures 3 and 4 display sample outputs from our model (in c) and from FoE (in d), illustrating the comparable quality of our method and FoE. In particular our method is able to reproduce image texture without yielding to the visually unpleasant blockiness that other methods using small cliques tend to produce [5,29]. The enlarged regions in each of the images illustrate that our method is able to reproduce fine-scale texture better than the FoE. For instance in the castle image (Fig. 3), the stonework detail below the windows is smoothed out in the FoE but preserved in our model. The textured surface of the rocks in the sheep image (Fig. 4) similarly illustrates the ability of our method to preserve realistic texture while removing noise, rather than over-smoothing. Moreover, our method produces a consistent level of sharpness across the whole image, and, unlike
Sparse Long-Range Random Field and Its Application to Image Denoising
(a)
(b)
(c)
353
(d)
Fig. 4. Denoising output for a high-texture scene. (a) Original image. (b) Corrupted by Gaussian noise, σ = 25. (c) Restored using our SLRF model, PSNR = 26.02. (d) Restored using FoE, PSNR = 25.55. Again, the detail illustrates that our model not only achieves good PSNR but also produces less over-smoothing.
FoE, does not tend to make high-contrast regions very sharp while low-contrast regions very smooth (Fig. 3 and 4, compare (c) and (d)). This gives the output of our model a more natural look. Table 1 also shows that the model with long-range edges (K = 4) performed better than the local model (K = 2), in terms PSNR, and that the difference is most pronounced at high noise levels (e.g. σ = 25) as would be expected. Even at low noise levels (e.g. σ = 5), where one would not necessarily expect much help from longerrange connections, the long-range model still slightly outperformed the local model. This suggests that long-range interactions increase robustness of the model without sacrificing fine-scale precision. Figure 5 shows in side-by-side comparison some sample output of the long-range model and the local model. The difference in visual quality between the two emphasizes that longer-range connections are useful and that our simple second-order models are capturing important properties of images, though these are not completely reflected in the PSNR numbers. In addition to the experiments with artificial Gaussian noise, we also test our model on real-world noisy images. For color images, we simply transform them into YCbCr space and apply the model on each channel separately. In all our experiments, Gaussian white noise is assumed. Although this is suboptimal, we obtain qualitatively good results as can be seen in Figure 6. These results illustrate that our model utilizing sparse long-range connections achieves state-of-the-art performance when compared with other random field methods for image denoising. Arguably the better preservation of texture and more natural look compared with FoE, without the blocky effects of other local methods, improves upon previous results. Due to the small clique size and hence low complexity, our model is less prone to artifacts, such as the ringing pattern, which occurs more often with
354
Y. Li and D.P. Huttenlocher
PSNR = 30.96dB
PSNR = 30.08dB
PSNR = 28.78dB
PSNR = 27.65dB
Fig. 5. Comparison of denoising outputs of the long-range and the local models. Input images have Gaussian white noise with σ = 25 (PSNR = 20.17). Left: Results of the long-range (K = 4) model. Right: Results of the local (K = 2) model. Observe that the outputs of the local model is blocky and appear tainted while those of the long-range model are smooth and clean.
Input
SLRF
BLS-GSM [31]
Fig. 6. Results on two real-world noisy images used in [31]. For these two images, our model assumes Gaussian white noise of standard deviation of 50 and 25 respectively. Despite the lack of accurate noise model, the visual quality of the output of our method is at least as good as that of [31].
Sparse Long-Range Random Field and Its Application to Image Denoising
355
Table 2. Running time of various image denoising methods Method SLRF
Image size 481×321
Processor Xeon-3.0GHz
FoE [1] 481×321 Xeon-3.2GHz GCRF [14] 481×321 Xeon-3.2GHz GSM [30] 256×256 PentiumIII-1.7GHz
Running time (sec.) 3.2 376.9 97.8 approx. 40
higher-order models. The highest PSNR has been achieved by wavelet based methods (e.g. [30,31]); nevertheless, such models tend to produce a larger amount of ringing artifacts. Finally we compare in Table 2 the running time of our model with those reported for some other methods, including both random field [1,14] and wavelet-based [30]. These results show that our method is a factor of 30 or more faster than the other random field methods and about 10 times faster than the wavelet-based one (note that while the running time in this last case is for a slower processor, the image is also considerably smaller). The speed of our model makes it a practical denoising method even for highresolution images.
6 Conclusion We have presented a model which explicitly represents long-range interactions but only uses low-order cliques, thereby enabling much faster optimization than other approaches that rely on high-order cliques. For image denoising this model achieves stateof-the-art PSNR results among random field methods, is better at preserving fine-scale detail, and runs at least an order of magnitude faster. The low complexity nature of the model not only reduces artifacts such as ringing, but also makes it readily interpretable and easy to understand. The small clique size enables the use of efficient approximate global inference algorithms for arbitrary clique potentials, whilst the explicit long-range interactions effectively counters noise-induced local bias. The combination of speed and expressiveness makes it an efficient and robust approach for low-level vision problems in noisy domains.
Acknowledgments This work was supported in part by NSF grant IIS-0713185.
References 1. Roth, S., Black, M.J.: Fields of experts: A Framework for Learning Image Priors. In: CVPR (2005) 2. Besag, J.E.: Spatial interaction and the statistical analysis of lattice systems. J. Royal Stat. Soc. B 36(2) (1974)
356
Y. Li and D.P. Huttenlocher
3. Geman, S., Geman, D.: Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. PAMI 6(6), 721–741 (1984) 4. Szeliski, R.: Bayesian modeling of uncertainty in low-level vision. IJCV 5(3) (1990) 5. Lan, X., Roth, S., Huttenlocher, D.P., Black, M.J.: Efficient belief propagation with learned higher-order Markov random fields. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3951. Springer, Heidelberg (2006) 6. Tappen, M.F.: Utilizing variational optimization to learn Markov random fields. In: CVPR (2007) 7. Murphy, K., Weiss, Y., Jordan, M.: Loopy belief propagation for approximate inference: An empirical study. In: UAI (1999) 8. Kolmogorov, V.: Convergent tree-reweighted message passing for energy minimization. PAMI 28(10) (2006) 9. Wainwright, M.J., Jaakkola, T.S., Willsky, A.S.: Map estimation via agreement on (hyper)trees: Message-passing and linear programming approaches. Technical Report UCB/CSD-03-1269, EECS, UC Berkeley (2003) 10. Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. In: ICCV (1999) 11. Szeliski, R., Zabih, R., Scharstein, D., Veksler, O., Kolmogorov, V., Agarwala, A., Tappen, M.F., Rother, C.: A comparative study of energy minimization methods for Markov random fields. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3951. Springer, Heidelberg (2006) 12. Lafferty, J., McCallum, A., Pereira, F.: Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In: ICML (2001) 13. Roth, S., Black, M.J.: Steerable random fields. In: ICCV (2007) 14. Tappen, M.F., Liu, C., Adelson, E.H., Freeman, W.T.: Learning gaussian conditional random fields for low-level vision. In: CVPR (2007) 15. Geman, S., Graffigne, C.: Markov random field image models and their applications to computer vision. In: Intl. Congress of Mathematicians (1986) 16. Weiss, Y., Freeman, W.F.: What makes a good model of natural images? In: CVPR (2007) 17. Gimel’farb, G.L.: Texture modeling by multiple pairwise pixel interactions. PAMI 18(11) (1996) 18. Hinton, G.E.: Training products of experts by minimizing contrastive divergence. Neur. Comput. 14(8) (2002) 19. Yedidia, J.S., Freeman, W.T., Weiss, Y.: Generalized belief propagation. In: NIPS (2000) 20. Scharstein, D., Pal, C.: Learning conditional random fields for stereo. In: CVPR (2007) 21. Collins, M.: Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In: EMNLP (2002) 22. Sutton, C., McCallum, A.: Piecewise training of undirected models. In: UAI (2005) 23. Sutton, C., Minka, T.: Local training and belief propagation. Technical Report TR-2006-121, Microsoft Research (2006) 24. Martin, D., Fowlkes, C., Tal, D., Malik, J.: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: ICCV (2001) 25. Huang, J., Mumford, D.: Statistics of natural images and models. In: CVPR (1999) 26. Freeman, W.T., Pasztor, E.C., Carmichael, O.T.: Learning low-level vision. IJCV 40(1) (2000) 27. Srivastava, A., Lee, A., Simoncelli, E., Zhu, S.: On advances in statistical modeling of natural images. Journal of Mathematical Imaging and Vision (2003)
Sparse Long-Range Random Field and Its Application to Image Denoising
357
28. Nocedal, J.: Updating quasi-newton matrices with limited storage. Mathematics of Computation 35, 773–782 (1980) 29. Felzenszwalb, P.F., Huttenlocher, D.P.: Efficient belief propagation for early vision. IJCV 70(1) (2006) 30. Portilla, J.: Blind non-white noise removal in images using gaussian scale mixtures in the wavelet domain. In: Benelux Signal Processing Symposium (2004) 31. Portilla, J., Strela, V., Wainwright, M.J., Simoncelli, E.P.: Image denoising using scale mixtures of gaussians in the wavelet domain. IEEE Trans. Imag. Proc. 12(11) (2003)
Output Regularized Metric Learning with Side Information

Wei Liu (1), Steven C.H. Hoi (2), and Jianzhuang Liu (1)

(1) Department of Information Engineering, The Chinese University of Hong Kong, Hong Kong, China
{wliu5,jzliu}@ie.cuhk.edu.hk
(2) School of Computer Engineering, Nanyang Technological University, Singapore
[email protected]
Abstract. Distance metric learning has been widely investigated in machine learning and information retrieval. In this paper, we study a particular content-based image retrieval application of learning distance metrics from historical relevance feedback log data, which leads to a novel scenario called collaborative image retrieval. The log data provide side information expressed as relevance judgements between image pairs. Exploiting the side information as well as the inherent neighborhood structure among examples, we design a convex regularizer upon which a novel distance metric learning approach, named output regularized metric learning, is presented to tackle collaborative image retrieval. Different from previous distance metric methods, the proposed technique integrates synergistic information from both log data and unlabeled data through a regularization framework, and pilots the desired metric toward the ideal output that satisfies the pairwise constraints revealed by the side information. Experiments on image retrieval tasks validate the feasibility of the proposed distance metric technique.

Keywords: Distance Metric Learning, Side Information, Output Regularized Metric Learning, Collaborative Image Retrieval.
1 Introduction
Recently, there has been growing research interest in exploring the historical log data of users' relevance feedback in content-based image retrieval (CBIR). Hoi et al. [1] proposed log-based relevance feedback with support vector machines (SVMs) by engaging the feedback log data in traditional online relevance feedback sessions. In this paper, we study distance metric learning to discover the potential of the log data so that the need for online relevance feedback can be avoided. Distance metric learning has attracted increasing attention in recent machine learning and computer vision studies; existing approaches may be classified into two main categories. The first category is supervised learning approaches for classification, where distance metrics are usually learned from the training data associated
with explicit class labels. Representative techniques include Linear Discriminant Analysis (LDA) [2] and several recently proposed methods, such as Neighbourhood Components Analysis (NCA) [3], Maximally Collapsing Metric Learning (MCML) [4], metric learning for Large Margin Nearest Neighbor classification (LMNN) [5], and Local Distance Metric Learning (LDML) [6]. Our work is closer to the second category, i.e., semi-supervised distance metric learning, which learns distance metrics from pairwise constraints, also known as side information [7]. Each constraint indicates whether two data objects are "similar" (must-link) or "dissimilar" (cannot-link) in a particular learning task. A well-known metric learning method with these constraints was proposed by Xing et al. [7], who cast the learning task into a convex optimization problem and applied the resulting solution to data clustering. Following their work, several metric learning techniques have emerged in this "semi-supervised" direction. For instance, Relevance Component Analysis (RCA) learns a global linear transformation by exploiting only the equivalent (must-link) constraints [8]. Discriminant Component Analysis (DCA) improves RCA by incorporating the inequivalent (cannot-link) constraints [9]. Si et al. [10] proposed a regularized metric learning method by formulating the side information as a semidefinite program. In particular, we are aware that routine metric techniques may be sensitive to noise and may fail to learn reliable metrics when handling small amounts of side information. In this paper, we present a new semi-supervised distance metric learning algorithm that incorporates the unlabeled data together with the side information to produce metrics of high fidelity. Specifically, we develop an output regularized framework that integrates the synergistic information from both the log data and the unlabeled data with the goal of coherently learning a distance metric. The proposed output regularized metric learning (ORML) algorithm is elegantly formulated, resulting in a closed-form solution with a global optimum that can be obtained very efficiently.
2 Collaborative Image Retrieval
In the field of CBIR, choosing appropriate distance metrics plays a key role in establishing an effective CBIR system. Regular CBIR systems usually adopt Euclidean metrics to measure distances between images represented in vector form. Unfortunately, the Euclidean distance is generally not effective enough for retrieving relevant images. A main reason stems from the well-known semantic gap between low-level visual features and high-level semantic concepts [11]. To remedy the semantic gap, relevance feedback is frequently engaged in CBIR systems. The relevance feedback mechanism has been studied extensively in the CBIR community and has been demonstrated to improve retrieval performance. However, relevance feedback has some drawbacks in practice. One problem is that it often involves heavy communication between the system and its users, which might not be efficient for real-time applications. Further, relevance feedback often has to be repeated several times to retrieve relevant images, which can be a tedious task for users.
Thus, relevance feedback may not be an efficient and permanent solution for addressing the semantic gap from a long-term perspective. In this paper, we consider an alternative solution, called collaborative image retrieval (CIR), for attacking the semantic gap challenge by leveraging the historical log data of users' relevance feedback. CIR has attracted a surge of research interest in the past few years [1,10]. The key to CIR is to find a convenient and effective way of leveraging the log data gathered during relevance feedback so that the semantic gap can be successfully reduced. Many ways of using the log data to boost retrieval performance could be studied. In this paper, we explore learning distance metrics from the log data for image retrieval tasks, and address some practical problems in applying distance metric techniques to the CIR application.
3 Distance Metric Learning with Side Information

3.1 Side Information
Assume that we have a set of n data points X = {x_i}_{i=1}^{n} ⊂ R^m, and two sets of pairwise constraints on these data points:

S = {(x_i, x_j) | x_i and x_j are judged to be equivalent}
D = {(x_i, x_j) | x_i and x_j are judged to be inequivalent},    (1)

where S is the set of similar pairwise constraints and D is the set of dissimilar pairwise constraints. Each pairwise constraint indicates whether two data points x_i and x_j are judged by users to be equivalent (similar) or inequivalent (dissimilar) under a certain application context. The two types of constraints S and D are referred to as side information. Note that it is not necessary for all the points in X to be involved in S or D. For any pair of points x_i and x_j, let d(x_i, x_j) denote the distance between them. By introducing a symmetric matrix A ∈ R^{m×m}, we can express the distance function as follows:

d_A(x_i, x_j) = \|x_i - x_j\|_A = \sqrt{(x_i - x_j)^T A (x_i - x_j)}.    (2)

In practice, the metric matrix A is a valid metric if and only if it satisfies the non-negativity and triangle inequality conditions; in other words, A must be positive semidefinite, i.e., A ⪰ 0. Generally speaking, the matrix A parameterizes a family of Mahalanobis distances on the vector space R^m. As an extreme case, when A is the identity matrix I_{m×m}, the distance in Eqn. (2) becomes the common Euclidean distance. Following the setting of semi-supervised learning, our learning problem is to learn an optimal square matrix A ∈ R^{m×m} from a collection of data points X ⊂ R^m coupled with a set of similar pairwise constraints S and a set of dissimilar pairwise constraints D. The central issue in metric learning is therefore to design an appropriate optimization objective and then find an efficient algorithm to solve the resulting optimization problem.
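To make the role of the metric concrete, the following minimal sketch (ours, not part of the paper; NumPy, with illustrative variable names) evaluates d_A for a given symmetric positive semidefinite A; with A = I it reduces to the ordinary Euclidean distance.

```python
import numpy as np

def mahalanobis_distance(x_i, x_j, A):
    """d_A(x_i, x_j) = sqrt((x_i - x_j)^T A (x_i - x_j)) for a PSD metric matrix A."""
    diff = x_i - x_j
    return float(np.sqrt(diff @ A @ diff))

# With A = I the measure reduces to the Euclidean distance.
x1, x2 = np.array([1.0, 2.0]), np.array([3.0, 1.0])
print(mahalanobis_distance(x1, x2, np.eye(2)))  # ~2.236, same as np.linalg.norm(x1 - x2)
```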
3.2 Optimization Model
One intuitive yet effective principle for designing metric learning approaches is to minimize the distances between data points with similar constraints and, at the same time, to maximize the distances between data points with dissimilar constraints. We call this the min-max principle. Some existing work [7][10] can be interpreted in terms of the min-max principle. To make metric learning techniques practical, the second principle we highlight is the regularization principle, which is key to endowing the learnt metric with generalization and robustness capabilities. Motivated by the idea of regularization in kernel machines [12], we formulate a general regularization prototype for distance metric learning as follows:

\min_A \; R(A, X, S, D) + \gamma V(A, S, D) \quad \text{s.t.} \quad A \succeq 0,    (3)
where R(·) is some regularizer defined on the target metric A, the raw samples X, and the side information S and D; V(·) is some loss function defined on A and the side information; and γ is a regularization parameter controlling the trade-off between the two terms in Eqn. (3). According to the min-max principle, a good loss function V(·) should be designed so that its minimization simultaneously shrinks the distances between points with similar constraints and elongates the distances between points with dissimilar constraints.

3.3 Dissimilarity-Enhanced Regularizer
There are many options for choosing a regularizer in the above regularization prototype. The simplest one is based on the Frobenius norm, R(A) = \|A\|_F, which simply prevents the elements of the matrix A from becoming too large [10]. However, this regularizer cannot take advantage of any side information. Hence, we formulate a better regularizer by exploiting the side information together with the unlabeled data, which is beneficial for semi-supervised learning tasks. Given the collection of n data points X, including the unlabeled data, and the side information S and D, we define a weight matrix W ∈ R^{n×n} on X:

W_{ij} = \begin{cases} \alpha, & (x_i, x_j) \in S \\ \beta, & (x_i, x_j) \in D \\ 1, & (x_i, x_j) \notin S \cup D \text{ and } (x_i \in N(x_j) \text{ or } x_j \in N(x_i)) \\ 0, & \text{otherwise,} \end{cases}    (4)

where N(x_i) denotes the list of the k nearest neighbors of the point x_i, and α, β > 0 are two weighting parameters corresponding to S and D. It is worth mentioning that W absorbs and encodes the side information as well as the inherent neighborhood structure among examples. We define another weight matrix T ∈ R^{n×n} based on the dissimilarity constraints only:

T_{ij} = \begin{cases} \beta, & (x_i, x_j) \in D \\ 0, & \text{otherwise.} \end{cases}    (5)
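As an illustration of Eqns. (4) and (5), the sketch below builds W and T from index pairs S and D and a k-nearest-neighbour graph. It is our own reading of the construction, with assumed conventions (rows of X are samples; S and D contain index pairs), not the authors' code.

```python
import numpy as np

def build_weight_matrices(X, S, D, k=6, alpha=1.0, beta=2.0):
    """Build W (Eqn. 4) and T (Eqn. 5) from data X (n x m), similar index pairs S,
    dissimilar index pairs D, and a k-nearest-neighbour graph."""
    n = X.shape[0]
    # k nearest neighbours by Euclidean distance, excluding the point itself.
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)
    knn = np.argsort(dists, axis=1)[:, :k]

    W = np.zeros((n, n))
    for i in range(n):
        for j in knn[i]:
            W[i, j] = W[j, i] = 1.0          # neighbourhood edges
    for (i, j) in S:
        W[i, j] = W[j, i] = alpha            # similar (must-link) pairs
    T = np.zeros((n, n))
    for (i, j) in D:
        W[i, j] = W[j, i] = beta             # dissimilar (cannot-link) pairs
        T[i, j] = T[j, i] = beta
    return W, T
```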
To obtain a metric matrix A, one can assume there exists a linear mapping U : R^m → R^r such that A = U U^T, where U = [u_1, …, u_r] ∈ R^{m×r}. We require u_1, …, u_r to be linearly independent so that r is the rank of A. Then the distance under A between two inputs can be written as

d_A(x_i, x_j) = \sqrt{(x_i - x_j)^T A (x_i - x_j)} = \sqrt{(x_i - x_j)^T U U^T (x_i - x_j)} = \|U^T (x_i - x_j)\| = \sqrt{\sum_{d=1}^{r} (u_d^T x_i - u_d^T x_j)^2}.    (6)

Minimizing d_A(x_i, x_j) drives u_d^T x_i - u_d^T x_j → 0 along each projection direction u_d. In addition, we define a new function

h_A(x_i, x_j) = \sqrt{(x_i + x_j)^T A (x_i + x_j)} = \|U^T (x_i + x_j)\| = \sqrt{\sum_{d=1}^{r} (u_d^T x_i + u_d^T x_j)^2}.    (7)

Minimizing h_A(x_i, x_j) drives u_d^T x_i + u_d^T x_j → 0, which actually pushes x_i and x_j far apart along each projection direction u_d. Intuitively, we would like to minimize d_A(x_i, x_j) if x_i and x_j satisfy a similarity constraint or are nearest neighbors of each other, and to minimize h_A(x_i, x_j) if x_i and x_j satisfy a dissimilarity constraint. By leveraging the side information and the neighborhood structure encoded in the weight matrix W, we formulate the regularizer as

R(A, X, S, D) = \frac{1}{2} \Big[ \sum_{(x_i,x_j) \notin D} d_A^2(x_i, x_j) W_{ij} + \sum_{(x_i,x_j) \in D} h_A^2(x_i, x_j) W_{ij} \Big]
  = \frac{1}{2} \sum_{d=1}^{r} \Big[ \sum_{(x_i,x_j) \notin D} (u_d^T x_i - u_d^T x_j)^2 W_{ij} + \sum_{(x_i,x_j) \in D} (u_d^T x_i + u_d^T x_j)^2 W_{ij} \Big]
  = \frac{1}{2} \sum_{d=1}^{r} \Big[ \sum_{i,j=1}^{n} (u_d^T x_i - u_d^T x_j)^2 W_{ij} + 4 \sum_{(x_i,x_j) \in D} u_d^T x_i \, u_d^T x_j \, W_{ij} \Big]
  = \frac{1}{2} \sum_{d=1}^{r} \Big[ 2 \sum_{i=1}^{n} (u_d^T x_i)^2 D_{ii} - 2 \sum_{i,j=1}^{n} u_d^T x_i \, u_d^T x_j \, W_{ij} + 4 \sum_{i,j=1}^{n} u_d^T x_i \, u_d^T x_j \, T_{ij} \Big]
  = \sum_{d=1}^{r} u_d^T X (D - W + 2T) X^T u_d = \sum_{d=1}^{r} u_d^T X M X^T u_d = tr(U^T X M X^T U),    (8)

where D ∈ R^{n×n} is a diagonal matrix whose diagonal elements equal the row sums of W, i.e., D_{ii} = \sum_{j=1}^{n} W_{ij}, M = D - W + 2T ∈ R^{n×n}, and tr(·)
stands for the trace operator. Note that L = D - W is the well-known graph Laplacian; the matrix M = L + 2T is thus the combination of the graph Laplacian L and the dissimilarity matrix T. Importantly, the regularizer R(U) = tr(U^T X M X^T U), viewed as a function of the transform U, is convex because the matrix X M X^T is positive semidefinite (X M X^T ⪰ 0 follows from the derivation in Eqn. (8), since R(U) ≥ 0 for any U). Previous metric learning methods [5][7] treat the dissimilarity side information as hard constraints, whereas we fold the dissimilarity constraints into the convex regularizer, which sheds light on efficient optimization. We call the formulated regularizer R(U) = tr(U^T X M X^T U) the dissimilarity-enhanced regularizer, since the core matrix M engages the dissimilarity information in addition to the similarity information. Our regularizer is similar to the label regularizer proposed in [13] in its use of dissimilarity information.
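A small sketch of how the regularizer can be evaluated once W and T are available is given below; the column-per-sample convention for X and the helper name are our assumptions.

```python
import numpy as np

def dissimilarity_enhanced_regularizer(X, W, T, U):
    """R(U) = tr(U^T X M X^T U) with M = D - W + 2T (Eqn. 8).
    X is m x n (columns are samples), W and T are n x n, U is m x r."""
    Dg = np.diag(W.sum(axis=1))      # degree matrix of W
    M = Dg - W + 2.0 * T             # graph Laplacian L = Dg - W, plus 2T
    S = X @ M @ X.T                  # m x m, positive semidefinite
    return float(np.trace(U.T @ S @ U)), S
```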
3.4 Regularization Framework
Without loss of generality, suppose the first l samples in X are involved in the side information and form X_l = [x_1, …, x_l] ∈ R^{m×l}. Using the above regularizer, we propose a novel distance metric learning approach, called Output Regularized Metric Learning (ORML), based on the following regularization framework:

\min_{U \in R^{m \times r}} \; tr(U^T X M X^T U) + \gamma \|U^T X_l - Y_l\|_F^2    (9)
\text{s.t.} \; U^T U = \Sigma,    (10)

where Y_l ∈ R^{r×l} is the ideal output of some conceived linear transform Ũ applied to the data matrix X_l, such that the output Y_l = Ũ^T X_l perfectly satisfies the pairwise constraints in S ∪ D. The least-squares term \|U^T X_l - Y_l\|_F^2 instantiates the loss function V(A, S, D) of the regularization prototype in Eqn. (3). Σ ∈ R^{r×r} is a diagonal matrix with positive entries, i.e., Σ ≻ 0. More explicitly, the constraint in Eqn. (10) is equivalent to

u_i^T u_j = 0, \quad i, j = 1, …, r, \; i \neq j.    (11)

It indicates that U = [u_1, …, u_r] consists of r orthogonal vectors in R^m. The reason for imposing such an orthogonality constraint is to make the projection vectors u_1, …, u_r explicitly linearly independent and, more notably, uncorrelated. In fact, u_1, …, u_r are principal eigenvectors of the metric matrix A and are thus physically meaningful for constructing the metric as A = U U^T = \sum_{i=1}^{r} u_i u_i^T. The major merit of the proposed regularization framework in Eqns. (9)(10) is that it drops the positive semidefinite constraint A ⪰ 0 of Eqn. (3), which would cast the metric learning problem as a semidefinite program (SDP) [14]. SDP incurs an expensive optimization cost and even becomes computationally prohibitive when the dimension of A is large, e.g., m > 10^3. Instead, we equivalently optimize the transformation matrix U rather than the metric matrix A, and thus formulate the metric learning task as a constrained quadratic optimization problem that can be solved quite efficiently with a globally optimal solution.
364
W. Liu, S.C.H. Hoi, and J. Liu
In the next section, we describe how to find the ideal output Y_l and how to cope with the orthogonality constraint in Eqn. (11).
4 ORML Algorithm for CIR
We now discuss how to apply ORML to collaborative image retrieval (CIR) and how to implement the related optimization in detail. As in previous work [1,10], we assume the log data are collected in the form of log sessions, each of which corresponds to a particular user querying process. During each log session, a user first submits an image example to a CBIR system and then judges the relevance of the top-ranked images returned by the system. The relevance judgements specified by the user and the involved image samples, i.e., log samples, are then saved as the log data. Within each log session of the user's relevance feedback, we can convert the relevance judgements into similar and dissimilar pairwise constraints. For instance, given the query image x_i and a top-ranked image x_j, if they are marked as relevant in log session q, we put (x_i, x_j) into the set of similar pairwise constraints S_q; if they are marked as irrelevant, we put (x_i, x_j) into the set of dissimilar pairwise constraints D_q. Note that the first element x_i of an ordered pair (x_i, x_j) always represents a query image. Consequently, we denote the collection of log data as {(S_q ∪ D_q) | q = 1, …, Q}, where Q is the total number of log sessions. The log data provide exactly the side information needed by distance metric learning.

4.1 Ideal Output
Eqn. (9) is essentially a quadratically constrained quadratic optimization problem and is not easy to solve directly. Here we adopt a heuristic method to explore the solution. First, we obtain an initial transformation matrix V with Principal Component Analysis (PCA) [2]. Without loss of generality, we assume that {x_i}_{i=1}^{n} are zero-centered; this can be achieved simply by subtracting the mean vector from every x_i. Let P contain r ≤ min{m, n} unitary eigenvectors of X X^T, i.e., P = [p_1, …, p_r], corresponding to the r largest eigenvalues λ_1, …, λ_r in nonincreasing order. We define the diagonal matrix Λ = diag(λ_1, …, λ_r) and have P^T X X^T P = Λ. Then we acquire the initial transform V ∈ R^{m×r} by

V = P Λ^{-1/2},    (12)

such that V^T X X^T V = Λ^{-1/2} P^T X X^T P Λ^{-1/2} = I. For any column vector v ∈ R^m of V and any two inputs x_i and x_j, we use \sum_{i=1}^{n} (v^T x_i)^2 = v^T X X^T v = 1 to conclude

|v^T x_i - v^T x_j| = \sqrt{(v^T x_i - v^T x_j)^2} \leq \sqrt{2 ((v^T x_i)^2 + (v^T x_j)^2)} \leq \sqrt{2},    (13)

which indicates that the range of the 1D projections {v^T x_i} onto the vector v is upper-bounded.
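The initialisation of Eqn. (12) can be computed directly from an eigendecomposition of XX^T, as in the sketch below (our own illustration; it assumes zero-centred data and strictly positive leading eigenvalues).

```python
import numpy as np

def pca_whitening_transform(X, r):
    """Initial transform V = P @ diag(lambda)^(-1/2) so that V^T X X^T V = I_r.
    X is m x n with zero-centred columns; r <= min(m, n)."""
    C = X @ X.T                                  # m x m scatter matrix
    eigvals, eigvecs = np.linalg.eigh(C)         # ascending order
    order = np.argsort(eigvals)[::-1][:r]        # keep the r largest eigenvalues
    P, lam = eigvecs[:, order], eigvals[order]   # assumes lam > 0
    return P / np.sqrt(lam)                      # same as P @ np.diag(lam ** -0.5)

# sanity check: V.T @ (X @ X.T) @ V should be close to the identity matrix
```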
Suppose that the pairwise constraints are imposed on the l log samples {x_1, …, x_l}; we thus only need to find the output Y_l for X_l. In light of Eqn. (13), we can correct the output V^T X_l of the initial transform V, guided by the constraints S_q ∪ D_q within each log session q. Concretely, we inspect each row v_d^T X_l of V^T X_l and form each row vector y^{(d)} ∈ R^l of the output Y_l as follows (d = 1, …, r):

y_j^{(d)} = \begin{cases} v_d^T x_i, & (x_i, x_j) \in S_q \\ -\mathrm{sgn}(v_d^T x_i) \big( |v_d^T x_i| + \frac{1}{\sqrt{r}} \big), & (x_i, x_j) \in D_q, \end{cases}    (14)

where sgn(·) denotes the sign function, returning -1 for negative input and 1 otherwise. The idea of setting y_j^{(d)} based on the PCA output v_d^T x_j and the side information S_q ∪ D_q is in tune with the proposed regularizer in Eqn. (8), as it turns out that y_i^{(d)} - y_j^{(d)} = 0 for (x_i, x_j) ∈ S_q and |y_i^{(d)} + y_j^{(d)}| = \frac{1}{\sqrt{r}} for (x_i, x_j) ∈ D_q. The residue 1/\sqrt{r} < 1 prevents the degenerate case y_i^{(d)} = y_j^{(d)} = 0 for (x_i, x_j) ∈ D_q. Going through all log sessions (q = 1, …, Q), we sequentially set up y^{(1)}, …, y^{(r)} using Eqn. (14) and ultimately arrive at

Y_l = [y^{(1)}, …, y^{(r)}]^T = [\tilde{x}_1, …, \tilde{x}_l] ∈ R^{r×l},    (15)

in which \tilde{x}_i is the low-dimensional representation of x_i under some conceived linear mapping Ũ : x_i → \tilde{x}_i = Ũ^T x_i. Importantly, Y_l exactly obeys all the pairwise constraints {S_q ∪ D_q}, because

\|\tilde{x}_i - \tilde{x}_j\| = 0, \; (x_i, x_j) \in S_q; \qquad \|\tilde{x}_i - \tilde{x}_j\| \geq 1, \; (x_i, x_j) \in D_q,    (16)

which implies that the distances between ideal low-dimensional points under similar constraints are zero, while the distances between those under dissimilar constraints are bounded below by a constant. In summary, the constructed output Y_l conforms to the min-max principle by skilfully modifying the heuristic output V^T X_l supplied by PCA.
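The construction of the ideal output in Eqn. (14) can be sketched as follows. The interface is hypothetical: query_of[j] and is_similar[j] encode, for each log sample, its query image and the relevance judgement; the paper itself organises this per log session.

```python
import numpy as np

def ideal_output(V, X_l, query_of, is_similar):
    """Build Y_l (r x l) from the PCA output V^T X_l following Eqn. (14)."""
    r = V.shape[1]
    Z = V.T @ X_l                       # r x l PCA projections of the log samples
    Y = np.empty_like(Z)
    for j in range(Z.shape[1]):
        i = query_of[j]                 # column index of the query image for sample j
        for d in range(r):
            v_xi = Z[d, i]
            sgn = -1.0 if v_xi < 0 else 1.0   # sgn(.) returns 1 for non-negative input
            if is_similar[j]:
                Y[d, j] = v_xi                # relevant: copy the query projection
            else:
                Y[d, j] = -sgn * (abs(v_xi) + 1.0 / np.sqrt(r))
    return Y
```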
4.2 Orthogonal Pursuit
If the constraint in Eqn. (10) is removed, Eqn. (9) can be solved easily and even admits a closed-form solution. Eqn. (9) can be rewritten as

tr(U^T X M X^T U) + \gamma \|U^T X_l - Y_l\|_F^2 = \sum_{d=1}^{r} u_d^T X M X^T u_d + \gamma \sum_{d=1}^{r} \|X_l^T u_d - y^{(d)}\|^2 = \sum_{d=1}^{r} \big( u_d^T X M X^T u_d + \gamma \|X_l^T u_d - y^{(d)}\|^2 \big),    (17)

which guides us to pursue the target vectors u_d (d = 1, …, r) greedily.
Now we tackle the orthogonality constraint in a recursive fashion. Suppose we have obtained d - 1 (1 ≤ d ≤ r) orthogonal projection vectors u_1, …, u_{d-1}, and calculate

V^{(d)} = \prod_{i=1}^{d-1} \Big( I - \frac{u_i u_i^T}{\|u_i\|^2} \Big) [v_d, …, v_r] \in R^{m \times (r-d+1)},    (18)

where V^{(1)} = V = [v_1, …, v_r]. We constrain u_d to be of the form V^{(d)} b, where b is an arbitrary vector, and have the following proposition.

Proposition. u_d = V^{(d)} b is orthogonal to the previous d - 1 vectors {u_1, …, u_{d-1}}.

Proof. For 1 ≤ i ≤ d - 1, we have

u_i^T u_d = u_i^T V^{(d)} b = u_i^T \Big( I - \frac{u_i u_i^T}{\|u_i\|^2} \Big) \prod_{t \neq i} \Big( I - \frac{u_t u_t^T}{\|u_t\|^2} \Big) [v_d, …, v_r] b = (u_i^T - u_i^T) \prod_{t \neq i} \Big( I - \frac{u_t u_t^T}{\|u_t\|^2} \Big) [v_d, …, v_r] b = 0.    (19)

This proposition shows that any vector of the form u_d = V^{(d)} b satisfies the orthogonality constraint. To obtain the exact solution, we substitute u_d = V^{(d)} b into Eqn. (17) and derive

\min_b \; b^T V^{(d)T} X M X^T V^{(d)} b + \gamma \|X_l^T V^{(d)} b - y^{(d)}\|^2,    (20)

whose derivatives with respect to b vanish at the minimizer b^*. We thus obtain the closed-form solution for each projection vector:

u_d = V^{(d)} b^* = V^{(d)} \Big( V^{(d)T} \big( \frac{1}{\gamma} X M X^T + X_l X_l^T \big) V^{(d)} \Big)^{-1} V^{(d)T} X_l y^{(d)}.    (21)
4.3 Algorithm
We summarize the Output Regularized Metric Learning (ORML) algorithm for CIR below. Notably, the number r of learnt projection vectors is independent of the number l of log samples and can be increased up to r = m (m < n in this paper).

1. Compute the regularizer: Build the two weight matrices W and T on all n input samples with Eqn. (4) and Eqn. (5), respectively. Calculate the graph Laplacian matrix L and the matrix M = L + 2T. Compute the matrix S = X M X^T ∈ R^{m×m} used in the dissimilarity-enhanced regularizer.
2. PCA: Run PCA on {x_1, …, x_n} ⊂ R^m to get the matrix V = [v_1, …, v_r] ∈ R^{m×r} (r ≤ m) such that V^T X X^T V = I.
3. Get the output: Given the log data X_l ∈ R^{m×l} and {S_q, D_q}_{q=1}^{Q}, use Eqn. (14) and Eqn. (15) to get the output matrix Y_l = [y^{(1)}, …, y^{(r)}]^T ∈ R^{r×l}.
4. Orthogonal pursuit: For d = 1 to r:
   u_d ← V (V^T ((1/γ) S + X_l X_l^T) V)^{-1} V^T X_l y^{(d)};
   V ← exclude the first column of V;
   V ← (I - u_d u_d^T / \|u_d\|^2) V;
   End.
5. Construct the metric: Form the projection matrix U = [u_1, …, u_r] ∈ R^{m×r}, and then construct the distance metric as A = U U^T.
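Steps 4 and 5 can be written compactly as below. This is a sketch under our conventions (S = X M X^T precomputed, Y_l stored row-wise), not the authors' implementation.

```python
import numpy as np

def orml_orthogonal_pursuit(S, X_l, Y_l, V, gamma=9.0):
    """Greedy orthogonal pursuit of u_1..u_r followed by A = U U^T.
    S = X M X^T (m x m), X_l is m x l, Y_l is r x l, V is the m x r PCA initialisation."""
    m, r = V.shape
    B = (1.0 / gamma) * S + X_l @ X_l.T              # shared m x m matrix
    U = np.zeros((m, r))
    for d in range(r):
        y_d = Y_l[d]                                 # d-th row of Y_l, length l
        G = V.T @ B @ V
        b = np.linalg.solve(G, V.T @ X_l @ y_d)      # b* of Eqn. (21)
        u_d = V @ b
        U[:, d] = u_d
        V = V[:, 1:]                                 # drop the first column
        V = V - np.outer(u_d, u_d @ V) / (u_d @ u_d) # project out u_d
    A = U @ U.T
    return U, A
```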
5 Experiments
In our experiments, we evaluate the effectiveness of the proposed ORML algorithm applied to collaborative image retrieval. We design the experiments from several perspectives for an extensive performance evaluation, comparing ORML with state-of-the-art distance metric learning techniques on normal, limited, and noisy log data. We obtained a standard CBIR testbed from the authors of [1]. The testbed consists of real-world images from COREL image CDs. It contains two datasets: 20-Category (20-Cat), which includes images from 20 different categories, and 50-Category (50-Cat), which includes images from 50 different categories. Each category consists of exactly 100 images randomly selected from relevant examples on the COREL CDs. Every category represents a distinct semantic topic, such as antelope, balloon, butterfly, car, cat, dog, and horse. Sampling the images by semantic category lets us evaluate the retrieval performance automatically, which greatly reduces the subjective errors caused by manual evaluation. In this paper we only employ 50-Cat, since it provides many more samples.

5.1 Image Representation and Log Data
We use color, edge, and texture features to represent images. Three types of color moments (mean, variance, and skewness) are extracted in each color channel (H, S, and V), resulting in a 9-dimensional color moment feature vector. The Canny edge detector is applied to obtain the edge image, from which the edge direction histogram is computed. Each edge direction histogram is quantized into 18 bins of 20 degrees each, yielding an 18-dimensional edge feature vector. To extract texture features, the Discrete Wavelet Transform (DWT) is applied to the image using a Daubechies-4 wavelet filter [15]. For each image, a 3-level DWT decomposition is performed and the entropy values of the 9 resulting subimages are computed, giving a 9-dimensional texture feature vector. Finally, an image is represented by a 36-dimensional feature vector. We obtained real log data related to the COREL testbed, collected with a real CBIR system, from the authors of [1]. Their collection comprises two sets of log data: a set of normal log data, which contains little noise, and a set of noisy log data with relatively large noise. In the log data, a log session is defined as
Table 1. The log data collected from users

Dataset   Normal Log                    Noisy Log
          #Log Sessions    Noise        #Log Sessions    Noise
50-Cat    150              7.7%         150              17.1%
the basic unit. Each log session corresponds to a customary relevance feedback process in which 20 images are judged by a user, so each log session contains 20 log samples marked as either relevant or irrelevant. Table 1 shows the basic statistics of the log data.

5.2 Experimental Setup
We compare ORML with representative metric learning techniques that use side information. Note that we do not compare ORML with supervised metric learning approaches, since they require explicit class labels for classification, which is unsuitable for CIR. The compared approaches include six distance metric techniques:

– Euclidean: the baseline, denoted "EU".
– Xing: a pairwise constraint-based metric learning method with an iterative convex optimization procedure [7].
– RCA: Relevance Component Analysis, which learns linear transformations using only the equivalent (similar) constraints [8].
– DCA: Discriminative Component Analysis, which improves RCA by engaging the inequivalent (dissimilar) constraints [9].
– RML: Regularized Metric Learning, which uses the Frobenius norm as the regularizer [10].
– ORML: the proposed Output Regularized Metric Learning algorithm, which uses the dissimilarity-enhanced regularizer and the output-based loss function.

Recently, an information-theoretic metric learning approach was presented that expresses the metric learning problem as a Bregman optimization problem [16]; it minimizes the differential relative entropy between two multivariate Gaussians under pairwise constraints on the distance function. Due to space limits, we do not contrast ORML with this information-theoretic approach. We follow the standard procedure for CBIR experiments: a query image is picked from the database and the database is then queried with each of the six evaluated distance metrics. The retrieval performance is evaluated on the top-ranked images, with the scope ranging from the top 10 to the top 100. Average precision (AP) and mean average precision (MAP), which are broadly adopted in CBIR experiments, are used as the performance measures. To implement ORML, we use k = 6 nearest neighbors, α = 1, and β = 2 to compute the matrix M used in our regularizer. The regularization parameter γ is simply fixed to 9, and the number of projection vectors that construct the target distance metric is set to r = 15.
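For reference, one common reading of the precision-at-scope and MAP measures is sketched below; the exact averaging used in the paper (e.g. the set of scopes entering MAP) is not spelled out, so treat this as an assumption.

```python
import numpy as np

def precision_at_scope(relevant, scope):
    """Fraction of the top-`scope` returned images that share the query's category.
    `relevant` is a boolean array in rank order."""
    return float(np.mean(relevant[:scope]))

def mean_average_precision(per_query_relevance, scopes=range(10, 101, 10)):
    """Assumed definition: average the precision over the evaluated scopes for each
    query, then average over all queries."""
    ap = [np.mean([precision_at_scope(rel, s) for s in scopes])
          for rel in per_query_relevance]
    return float(np.mean(ap))
```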
(Figure 1 here: Average Precision (%) plotted against scope, top 10-100, for EU, Xing, RCA, DCA, RML, and ORML on 50-Cat with normal log data.)
Fig. 1. Average precision of top ranked images on the 50-Category testbed with normal log data

Table 2. Mean Average Precision (%) of top ranked images on the 50-Category testbed over 5,000 queries with three kinds of log data. The last column shows the relative improvement (%) of the proposed ORML over the baseline (Euclidean).

Datasets             EU      Xing    RCA     DCA     RML     ORML    ORML Improve
50-Cat+normal log    27.99   28.71   31.41   31.72   32.47   34.13   21.94
50-Cat+small log     27.99   28.27   31.31   31.64   31.41   32.68   16.76
50-Cat+noisy log     27.99   27.95   29.79   30.14   31.08   33.48   19.61

5.3 Normal Log Data
First, we perform experiments on the normal log data. Figure 1 shows the results on the 50-Category dataset. Among the four previous metric learning methods (Xing, RCA, DCA, and RML), RML achieves the best overall performance, with a 16.0% improvement in MAP over the baseline, as shown in Table 2, while Xing performs the worst of the four. Overall, the proposed ORML algorithm achieves the best performance, significantly improving the baseline by about 21% in MAP. This demonstrates that ORML is more effective than the previous methods on the normal log data. We also observe that Xing et al.'s method does not perform well on this dataset; one possible reason is that it is too sensitive to noise, since it imposes hard constraints on dissimilar data points.

5.4 Small Log Data
In this experiment, we evaluate the robustness of the metric learning methods on a small amount of normal log data. This situation usually arises at the beginning
(Figure 2 here: Average Precision (%) plotted against scope, top 10-100, for EU, Xing, RCA, DCA, RML, and ORML; panel (a) 50-Cat+Small Log Data, panel (b) 50-Cat+Noisy Log Data.)
Fig. 2. Average precision of top ranked images on the 50-Category testbed with small and noisy log data.
stage of CBIR systems. Figure 2(a) shows the experimental results on the 50-Category testbed with a small subset of the normal log data containing only 50 log sessions randomly selected from the normal log dataset. Again, ORML achieves the largest improvement over the baseline among all compared methods. This also shows that the proposed ORML method learns robust metrics more effectively by utilizing the unlabeled data, even with limited log data.

5.5 Noisy Log Data
To further validate robustness, the third experiment evaluates the metric learning methods on the noisy log data, which carry relatively large noise. Figure 2(b) shows the experimental results on the 50-Category testbed with the noisy log data. From these results, we find that Xing et al.'s method fails to improve over the baseline due to the noise, which also supports our earlier conjecture that this method is too sensitive to noise. Compared with Xing, the other three metric learning methods, RCA, DCA, and RML, are less sensitive to noise, but they still suffer considerably from it. For example, RCA achieves a 12.2% improvement in MAP with the normal log data, as shown in Table 2, but only a 6.4% improvement with the same amount of noisy log data. In contrast, ORML achieves a 21.94% improvement in MAP with the normal log data and retains a 19.61% improvement with the noisier log data, without dropping much. These experimental results again confirm that ORML learns reliable distance metrics effectively from real noisy log data.
6 Conclusion
This paper studies distance metric learning with side information and its application to collaborative image retrieval, in which real log data provided by the user’s relevance feedback are leveraged to improve traditional CBIR performance. To robustly exploit the log data and smoothly incorporate the unlabeled
data, we propose the Output Regularized Metric Learning (ORML) algorithm. ORML uses the dissimilarity-enhanced regularizer and the ideal output of the log samples, guided by side information, to learn a series of orthogonal projection vectors that readily construct an effective metric. The promising experimental results show that the proposed ORML algorithm is more effective than state-of-the-art methods in learning reliable metrics from real log data.
Acknowledgement

The work described in this paper was supported by a grant from the Research Grants Council of the Hong Kong SAR, China (Project No. CUHK 414306).
References

1. Hoi, S.C., Lyu, M.R., Jin, R.: A unified log-based relevance feedback scheme for image retrieval. IEEE Trans. Knowledge and Data Engineering 18(4), 509–524 (2006)
2. Fukunaga, K.: Introduction to Statistical Pattern Recognition. Elsevier, Amsterdam (1990)
3. Goldberger, G.H.J., Roweis, S., Salakhutdinov, R.: Neighbourhood components analysis. In: NIPS 17 (2005)
4. Globerson, A., Roweis, S.: Metric learning by collapsing classes. In: NIPS 18 (2006)
5. Weinberger, K., Blitzer, J., Saul, L.: Distance metric learning for large margin nearest neighbor classification. In: NIPS 18 (2006)
6. Yang, L., Jin, R., Sukthankar, R., Liu, Y.: An efficient algorithm for local distance metric learning. In: Proc. AAAI (2006)
7. Xing, E.P., Ng, A.Y., Jordan, M.I., Russell, S.: Distance metric learning with application to clustering with side-information. In: NIPS 15 (2003)
8. Bar-Hillel, A., Hertz, T., Shental, N., Weinshall, D.: Learning a mahalanobis metric from equivalence constraints. JMLR 6, 937–965 (2005)
9. Hoi, S.C., Liu, W., Lyu, M.R., Ma, W.-Y.: Learning distance metrics with contextual constraints for image retrieval. In: Proc. CVPR (2006)
10. Si, L., Jin, R., Hoi, S.C., Lyu, M.R.: Collaborative image retrieval via regularized metric learning. ACM Multimedia Systems Journal 12(1), 34–44 (2006)
11. Smeulders, A.W.M., Worring, M., Santini, S., Gupta, A., Jain, R.: Content-based image retrieval at the end of the early years. IEEE Trans. PAMI 22(12), 1349–1380 (2000)
12. Vapnik, V.N.: Statistical Learning Theory. John Wiley and Sons, Chichester (1998)
13. Goldberg, A., Zhu, X., Wright, S.: Dissimilarity in graph-based semi-supervised classification. In: Proc. Artificial Intelligence and Statistics (2007)
14. Boyd, S., Vandenberge, L.: Convex Optimization. Cambridge University Press, Cambridge (2003)
15. Manjunath, B., Newsam, P.W.S., Shin, H.: A texture descriptor for browsing and similarity retrieval. Signal Processing Image Communication (2001)
16. Davis, J.V., Kulis, B., Jain, P., Sra, S., Dhillon, I.S.: Information-theoretic metric learning. In: Proc. ICML (2007)
Student-t Mixture Filter for Robust, Real-Time Visual Tracking

James Loxam and Tom Drummond

Department of Engineering, University of Cambridge
Abstract. Filtering is a key problem in modern information theory; from a series of noisy measurements, one would like to estimate the state of some system. A number of solutions exist in the literature, such as the Kalman filter or the various particle and hybrid filters, but each has its drawbacks. In this paper, a filter is introduced based on a mixture of Student-t modes for all distributions, eliminating the need for arbitrary decisions when treating outliers and providing robust real-time operation in a true Bayesian manner.
1 Introduction
Filtering, estimating some hidden dynamical state over time from noisy measurements of that state, in a robust and reliable manner is a challenging but important problem with applications in many fields. Bayes' theorem provides the mathematical framework for solving this problem; however, design decisions impinge upon the implementation of this framework and put limitations on the performance of the final filter. In this paper, an implementation is proposed based on a Student-t mixture model, which provides robust performance in the face of outliers at a speed allowing real-time operation on complex problems.

1.1 Background
Robust methods of estimation have been well covered in the literature, with most methods falling into the class of M-estimators [1]. M-estimation can be adapted through the use of different weighting functions (e.g. Huber, Cauchy and Tukey, among others) to create likelihood functions with different properties, the maximum of which can then be found to provide a solution. The problem with estimation in general, however, is that the computational load increases linearly with the number of measurements, and thus with time. Filtering in a recursive manner (see section 2) circumvents this problem by maintaining a probability distribution over the state which encodes all previous measurements and can simply be updated at each timestep. Doing this in a robust manner, however, is non-trivial, as the probability distributions must be proper (integrate to unity), thus precluding any method which uses a uniform component to model outliers (e.g. the Tukey weighting function).
Filtering Methods. The Kalman filter [2] is often considered the forerunner of modern filtering systems. In the case of normally distributed state and noise distributions, it is optimal. The real world, however, often produces noise which is far from normally distributed, e.g. measurement noise from a point tracker where data association sometimes fails, or the process noise involved in flying a model helicopter in windy conditions, where occasional gusts of wind can cause rapid changes to the system state. In such systems, it is known that the Kalman filter performs poorly. Due to the mathematical niceties of the normal distribution, many attempts have been made to improve performance, e.g. pre-filtering measurements using RANSAC [3] to remove any erroneous measurements. This often involves, however, rather arbitrary decisions about which measurements are erroneous and which are not, and problems can still occur when these decisions turn out to be incorrect. It has long been proposed that there should be no need to make such arbitrary decisions, and that a properly formulated Bayesian approach should handle such problems [4]. The reason the Kalman filter is unable to handle erroneous measurements properly is its light-weight tails, which effectively rule out the idea that any measurement is ever wrong.

Non-Parametric Filters. The Bootstrap filter [5] and the Condensation algorithm [6] led the way in a new form of non-parametric filter. These non-parametric filters escaped the constraints of the normal distribution by representing the state as a set of discrete samples from that distribution, allowing the state to take arbitrary distributions and the use of any evaluable likelihood function (e.g. a Student-t distribution [7]). The problem with this early set of particle filters, however, was the computational cost, as the number of particles scaled exponentially with the dimensionality of the state. Advances in particle filters have mostly been due to improvements in the importance sampling densities used, such as in the Extended Kalman Particle Filter (EKPF) and the Unscented Particle Filter (UPF) [8], which provide a better distribution of particles, reducing the quantity required and thus the computational cost. More recent developments have turned to a class of hybrid or kernel-based particle filters, where parametric representations are used in parts of the algorithm to allow better particle placement [9,10]. The cost of using certain distributions to provide better particle placement, however, is a shift back towards a limited distribution representation. By approximating all samples with a Gaussian distribution at each step (as in [9], or with a mixture of Gaussian distributions as in [10]), any semblance of heavy tails in the state distribution is removed, eliminating the possibility of robustness to process noise outliers. Although the approach in [11] uses a Student-t kernel (and thus maintains the heavy tails), the system is limited to representing the posterior (and prior) by a single Student-t, which fails to deal correctly with the problems associated with data confusion (see section 3.2).
Despite these improvements, particle and hybrid filters are still often expensive to run in high-dimensional spaces and struggle to cope with outliers in the process noise and measurement noise simultaneously.

Parametric Filters. Many papers have discussed introducing heavy-tailed distributions into the filtering problem. Replacing one of the noise distributions (either process noise or measurement noise) with a Student-t distribution has been shown to reject outliers in that noise distribution [7,12,13,14]. Most early work, however, maintained a normal distribution for one part of the framework, thus avoiding the problems associated with data confusion (see section 3.2). The Gaussian Sum Filter (GSF) aims to approximate heavy-tailed distributions by using a mixture of normal distributions [15]. A theoretical problem with this is proved in [13]: for sufficiently large observations the posterior is strictly unimodal (does not reject outliers), and thus the approximation breaks down. While this may be avoidable in a given application provided due care is taken, it is a limitation of Gaussian sums as a general method. Another practical limitation is the reduction of the mixture size, complicated by the fact that different components of the mixture represent different parts of the distribution. One of the first papers to discuss the use of Student-t distributions throughout the system was [16], albeit in a purely one-dimensional setting. It discusses ideas such as data confusion (see section 3.2) but, due to its methodology, does not extend to higher-dimensional spaces.

Section 2 discusses a basic framework for dynamical filtering, followed by an investigation of the implications of using heavy-tailed distributions in section 3. Section 4 introduces the basis of the work, and sections 5 and 6 describe its details. Results are presented in section 7.
2 Filtering Theory
The filtering problem is concerned with determining, at a given time k, the posterior distribution of the state given all the previously gathered information, which comes in the form of measurements:

p(x_k | Z_k),    (1)

where x_k represents the state at time k and Z_k = {z_0, z_1, …, z_k} represents the set of all measurements up to time k. By formulating the filtering problem as a first-order Markov chain, the problem of finding the posterior distribution over all measurements can be reduced to the recursive update of a state estimate:

p(x_k | Z_k) = \frac{p(z_k | x_k, Z_{k-1}) \, p(x_k | Z_{k-1})}{p(z_k | Z_{k-1})},    (2)

p(x_k | Z_{k-1}) = \int p(x_k | x_{k-1}) \, p(x_{k-1} | Z_{k-1}) \, dx_{k-1}.    (3)
Formulating the problem recursively ensures the necessary property that the amount of computation required does not grow over time. The different filtering methods differ in their representation of these distributions: the Kalman filter represents all distributions by normal distributions, making the equations above closed form, whereas particle filters tend to represent the prior and posterior distributions by sets of samples drawn from those distributions. The main requirement when designing a filter is to ensure that it fits into this framework and that, at each time step k, the posterior distribution has a constant structure, so that the recursion can continue. When a non-conjugate prior is used, approximations must therefore be made to represent the posterior distribution.
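As a concrete illustration of Eqns. (2) and (3), the sketch below performs one recursive update on a discretised state space; it is only meant to show the structure of the recursion, not the filter developed in this paper.

```python
import numpy as np

def bayes_filter_step(prior, transition, likelihood):
    """One recursive update on a discretised 1-D state space.
    prior:      p(x_{k-1} | Z_{k-1}) as a vector over grid cells
    transition: p(x_k | x_{k-1})     as a (cells x cells) matrix
    likelihood: p(z_k | x_k)         as a vector over grid cells"""
    predicted = transition @ prior          # Eqn. (3): propagate through the motion model
    posterior = likelihood * predicted      # numerator of Eqn. (2)
    return posterior / posterior.sum()      # normalise by p(z_k | Z_{k-1})
```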
3 Heavy-Tailed Distributions
For a long time, the normal distribution has been the default distribution for approximating random variables. This is due to its nice mathematical properties (it is closed under multiplication and convolution, making it ideal for the Bayesian filtering framework above) and the ease with which it can be fit to data within an ML framework. The normal distribution is, however, also known for its lack of robustness to outliers, a consequence of its light-weight tails, which die away very quickly. Various pre-filtering stages (e.g. RANSAC) have been used to remove outliers ahead of the filtering; however, each of these involves a somewhat arbitrary decision about which measurements actually constitute 'outliers'. Heavy-tailed distributions have non-negligible weight away from the mode, making the probability of outliers occurring non-negligible. By formulating the problem correctly and using appropriately heavy-tailed distributions, outliers need not be treated as a special case, as they are simply taken care of within the Bayesian framework.

3.1 Multivariate Student-t Distribution
The multivariate Student-t distribution represents a generalisation of the Gaussian distribution (the limit ν → ∞ produces the Gaussian distribution), and is defined as

S(x; P, \nu, \mu) = \underbrace{\frac{\Gamma(\frac{\nu+d}{2})}{\Gamma(\frac{\nu}{2})} \cdot \frac{|P|^{1/2}}{((\nu-2)\pi)^{d/2}}}_{1/z} \; \underbrace{\left(1 + \frac{\Delta^2}{\nu-2}\right)^{-\frac{\nu+d}{2}}}_{f(x; P, \nu, \mu)},    (4)
where Δ2 = (x − μ)T P (x − μ) is the squared Mahalanobis distance from x to μ and Γ (.) is the Gamma function. μ and P denote the mean and precision (inverse covariance) matrix respectively, while ν denotes the number of “degrees of freedom” of the filter, which, in this instance, may take non-integer values.
The dimension of the filter is given by d. The Student-t distribution allows the modelling of heavier tails, controllable by ν, which can be directly related to Mardia's measure of multivariate kurtosis [17]:

\gamma_2 = \frac{2 d (d + 2)}{\nu - 4}.    (5)

The most common use of Mardia's measure of kurtosis is to test the validity of Gaussian assumptions. Here, however, it will be used to parameterise the deviation of the Student-t modes from the Gaussian distribution.
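For reference, the density in Eqn. (4) can be evaluated in log form as follows (our own sketch; it assumes a positive-definite precision matrix P and ν > 2).

```python
import numpy as np
from math import lgamma, log, log1p, pi

def student_t_logpdf(x, mu, P, nu):
    """Log of the Student-t density in Eqn. (4), parameterised by the precision
    matrix P and degrees of freedom nu > 2."""
    d = len(mu)
    diff = np.asarray(x) - np.asarray(mu)
    delta2 = float(diff @ P @ diff)                      # squared Mahalanobis distance
    log_norm = (lgamma((nu + d) / 2.0) - lgamma(nu / 2.0)
                + 0.5 * np.linalg.slogdet(P)[1]          # 0.5 * log|P|
                - 0.5 * d * log((nu - 2.0) * pi))
    return log_norm - 0.5 * (nu + d) * log1p(delta2 / (nu - 2.0))
```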
3.2 Data Confusion
In Meinhold's introduction of the Student-t distribution as the basis of a filtering framework [16], he explicitly addressed the problem of data confusion, which had been ignored by earlier works. Data confusion is the tendency of heavy-tailed distributions to generate multiple distinct modes under multiplication with each other (application of Bayes' theorem) and is a natural consequence of heavy tails: logically, by admitting that measurements can be 'wrong', a decision has to be made between them when they disagree. When both the likelihood and the prior have heavy tails, then as the two distributions diverge in their consensus on the current state, the posterior distribution becomes multimodal, effectively representing each of the two possibilities: one where the measurement was 'correct' and the prior 'incorrect', and the other vice versa. Although data confusion may initially appear to be a problem, it merely reflects reality and represents the current estimate as accurately as possible. Provided the distributions are dealt with as accurately as possible, future information will resolve the uncertainty when available, reinforcing one mode and not the other, which will then die away.
4 Student-t Mixture Filter
A mixture of Student-t distributions is proposed as the basis of a filter, both for the internal state estimate and for the measurement noise distributions. This provides a heavy-tailed distribution to deal with any outliers in a Bayesian manner, while allowing multiple modes to be propagated through time enables it to deal with the problems attributed to data confusion. In order to maintain computational efficiency, only the N largest modes in the state estimate are kept after each stage. Since the Student-t distribution is closed neither under multiplication nor under convolution, both the inference (equation 2) and the propagation of the state through time (equation 3) must be approximated to provide a recursive filtering framework.
5 Approximate Inference
Usual methods for approximating distributions involve the minimisation of some error metric (e.g. KL divergence or integrated square error), but due to the non-
integrable form of the poly-t (product of Student-t distributions) distribution, none of these functions is analytically evaluable. Also, due to the multimodal nature of the posterior, moments do not carry sufficient information to make an approximation. As such, an alternative approximation scheme has been developed which concentrates on the key areas of the posterior while maintaining heavy tails away from the peaks. Although it is impossible to give guarantees about the performance of this scheme, in practice performance is good, as demonstrated by a number of experiments in section 7.1.

5.1 Mode Approximation
Although the posterior poly-t distribution is neither integrable nor representable exactly by a sum of Student-t distributions, it does have a analytic representation, albeit with an unknown normalisation constant, C, 1 p(x) p(zi |x) C i M
p(x|z) =
(6)
for M independent measurements, zi i ∈ 1...M . As such, the value of the distribution and its derivatives can be calculated, up to scale. This is a key point to the method: a scaled approximation of the scaled posterior is calculated, which can then be normalised to provide a proper probability distribution (integrates to unity) and an approximation to the real posterior. The proposed method of mode approximation is similar to that put forward in [10]. From a given set of starting points (see section 5.2), a Gauss-Newton optimisation is performed over the real (poly-t ) posterior to find peaks in the state distribution. Gauss-Newton is particularly applicable in this situation as it does not require knowledge of the absolute scales of the Hessian and gradient of the cost function to operate, but merely their ratio, which can be calculated as it is independent of C. Once the locations of the peaks are known, a mode is placed at each. (7) μi = peaki The degrees-of-freedom parameter can easily be determined by considering the decay rate of the real posterior. Each measurement has an exponent of − 21 (ν+dz ) (where dz is the dimension of the measurement) which are summed when the product of the measurements is taken. For the multimodal prior, the decay rate is dominated by the smallest exponent, − 12 (min(ν) + dz ). By equating this sum to the exponent of the approximating mode, a value for the degrees-of-freedom parameter of the approximating mode can be determined. νi = min (νj ) + j∈[1,N ]
M
(νk + dz )
(8)
k=1
The precision matrix is approximated by the Hessian at the peak which this mode is representing. Since the actual Hessian of the posterior at this point
cannot be calculated (due to the lack of a normalisation constant) the Hessians of the real posterior and the approximate mode are equated, each normalised by the value of their own distribution at this point. Once again, this is calculable for the poly-t posterior as it is a ratio, and allowing the precision of the approximate mode to be set.
ν − 2 ∇2 p(x|z)
Pi = − (9) ν + d p(x|z) x=μi With all the parameters for the mode set, the weight of each mode can be set such that the peak of each approximate mode is the same height as the unnormalised poly-t posterior at that point.
p(x|z)
(10) wi = S(x; Pi , νi , μi ) x=μi Since p(x|z) was not a actual distribution and was only known upto scale, once all the modes have been estimated these weights then need to be normalised to provide a proper probability distribution for the posterior. wi = 1 (11) i
5.2
Starting Points for Peak Finding
Since the posterior distribution is expected to contain multiple peaks, peak finding must be performed several times in an attempt to find them all. To maximise the chances of finding all of the peaks, the start points used for the search must cover as large a part of the space that is expected to contain peaks as possible. A preliminary list of start points is generated from two sources. – Peaks in the Prior : For each peak in the prior distribution, a search start point is generated. This set of start points should find all peaks corresponding to little change in the state, even in the presence of measurement outliers. – ML State estimates from random sets of measurements: For a number of different randomly selected sets of measurements, an ML estimate of the state can be generated which can then be used as a start point. These start points are generated in much the same way as hypotheses in a RANSAC test. This set of start points should cover areas of the state space containing peaks due to correct sets of measurements, even when large changes to the state occur and the prior distribution in not close to the posterior. The preliminary list of start points will often contain many overlapping points which would most likely converge to the same peaks. To avoid extra computation, and indeed to bound the amount of processing done, this preliminary list of start points is clustered using the k-means++ algorithm [18]. From each cluster, the start point with the maximum probability (as evaluated in the real poly-t posterior) is used as a start point for peak finding for mode approximation.
Student-t Mixture Filter for Robust, Real-Time Visual Tracking
5.3
379
Multimodal Measurement Distributions
During approximation of the posterior, the only properties requested of the real poly-t posterior (which is being approximating) is that the distribution can be evaluated and the first two derivatives calculated. As such, any measurement distribution could be used in the algorithm, provided these operations can be performed. This provides the opportunity for using mixtures of Student-t s for the measurement distributions. Section 1.1 discussed how an explicit decision on outliers is not required in order to be able to deal with erroneous measurements; in a similar way by allowing multimodal measurement distributions to be incorporated the explicit one-to-one data association stage can be removed.
6
Approximate Time Propagation
Part of the initial Markov assumption was that of a model of how the state of the system will evolve over time and the application of transformation dynamics is the realisation of this model on the current state estimate. For a known process model g(.), xk+1 = g(xk )
(12)
The first two moments have been well studied in the literature and the results of an application to a linear system are well known. μx+1 = g(μx ) Σx+1 = (∇x g) Σx (∇x g)T
(13) (14)
In [17] it was also proved that Mardia’s measure of kurtosis is invariant under non-singular transformations. γ2:k+1 = γ2:k
(15)
Use of an unscented transform instead of this extended transform, for increased accuracy in non-linear environments, is a trivial extension. The state must then be convolved with the process noise distribution, to allow for errors in the model. The Student-t distribution is not closed under convolution and thus as with inference, the result must be approximated. Unlike multiplication, however, under convolution the Student-t distribution is guaranteed to be unimodal, which makes the matching of moments more meaningful. Under the assumption of independence, it is well known that the first two standardised moments simply sum. P (Y ) = P (X1 + X2 + · · · + XN ) μY = μi ΣY = Σi i
i
(16)
380
J. Loxam and T. Drummond
This falls out from the fact that these first two centralised moments are equal to the cumulants. Although the higher order cumulants still sum, since they do not equal the standardised centralised moments (e.g. skew, kurtosis), these moments do not sum. Using algebraic methods developed by [19] however, the following update equation has been derived. T r(ΣY−2 ) T r(Σi2 )γ2 (Xi ) d2 i N
γ2 (Y ) =
(17)
These equations can be used to perform simple moment matching to calculate the new state distribution for each mode in the state.
7 7.1
Results Accuracy of Approximate Inference
Since the method of approximate inference introduced in section 5 is based on numerical methods, it does not provide a guaranteed level of performance. The results of tests indicating the actual level of performance are presented here. In each test, two Student-t distributions were generated with random parameters and multiplied together to produce a poly-t posterior. Two methods were then used to fit a mixture of two Student-t distributions to this poly-t posterior. The first method was a Maximum Likelihood (ML) estimator: 20000 samples were generated from the posterior distribution and a mixture of two Student-t distributions was fit to these samples using an iterative method designed to maximise the likelihood of the observed samples. The second method was the method presented in section 5. The difference between these two approximations and the original poly-t posterior was then measured in terms of the KL divergence between the distributions. The results for the ML fit indicate the suitability of using a mixture of Student-t distributions to approximate a poly-t distribution if one were allowed as much time as necessary to perform this approximation. The results for the method used in the Student-t Mixture Filter (SMF) illustrate the extra information lost in performing the approximation quickly using the method provided in section 5.

Table 1. KL divergence statistics from the actual poly-t distribution to a mixture of Student-t distributions fit by two different methods: an ML-based parameter estimator and the method presented in section 5

                     Poly-t entropy   KL divergence (ML)   KL divergence (SMF)
Mean                 10.09            0.016                0.042
Standard Deviation   0.84             0.031                0.135
Maximum              -                0.109                0.756
Minimum              -                1.97 × 10^-6         2.15 × 10^-6
Fig. 1. Wireframe model of the street scene that was tracked
Fig. 2. A sample frame from one of the video sequences, with the model being tracked by the Student-t mixture filter
The results presented in Table 1 show that in most cases a mixture of Student-t distributions provides a good approximation to a poly-t distribution, as indicated by the low mean KL divergence. They also show that, in the general case, the method presented loses very little extra information beyond that lost in the distributional approximation itself. A shortfall of the proposed method is the maximum possible error, which occurs when two peaks interact with each other, changing the shape of the distribution without actually creating an extra peak. As the results show, however, even these occasions do not have an over-severe impact: the representation still has a relatively small KL divergence.
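For intuition about how such a comparison can be carried out, the sketch below builds a one-dimensional poly-t posterior as the product of two Student-t densities, normalises it numerically on a grid, and evaluates the KL divergence to a candidate two-component Student-t mixture. The distribution parameters are arbitrary placeholders, and the grid-based evaluation is only a stand-in for the sampling-based protocol described above.

```python
import numpy as np
from scipy.stats import t as student_t

x = np.linspace(-30.0, 30.0, 20001)

# Poly-t posterior: unnormalised product of two Student-t terms.
p = student_t.pdf(x, df=3, loc=-2.0, scale=1.0) * \
    student_t.pdf(x, df=4, loc=3.0, scale=2.0)
p = p / np.trapz(p, x)

# A candidate approximation: mixture of two Student-t distributions
# (hand-picked parameters; a fitter would normally supply these).
q = 0.6 * student_t.pdf(x, df=3, loc=-1.5, scale=1.0) + \
    0.4 * student_t.pdf(x, df=4, loc=2.5, scale=1.8)
q = q / np.trapz(q, x)

# KL(p || q) evaluated on the grid.
kl = np.trapz(p * np.log(p / q), x)
print(f"KL divergence from poly-t to mixture: {kl:.4f}")
```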
7.2 Visual Tracking
Figure 1 shows a wireframe model of the street scene that each of the filters was tasked with tracking. The model includes a number of point features (with known 3D location), and it is measurements of these that are supplied as input to each of the filters. For each frame, the FAST [20] feature detector is used to find image features, and normalised cross-correlation is used to match the features in the model to those in the current frame. The strongest matches are then supplied to the filters. The Student-t filter is compared to both the Kalman filter and the EKPF across the sequences. The filters were set up with constant position models. Measurement noise is assumed to be uncorrelated, with 1-pixel variance in each direction. The EKPF was set up to run with 100 particles. Since the Kalman filter and the EKPF are known to be sensitive to outliers, RANSAC is used to remove outlying measurements prior to updating these filters, creating a robust system. Note that this RANSAC stage is not needed with the Student-t mixture filter, as it is inherently robust.
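A minimal sketch of this per-frame measurement step is given below, using OpenCV's FAST detector and normalised cross-correlation as described above. The search radius, patch size, the project() callable (projecting a known 3D model point with the predicted pose), and the stored reference patches are all assumptions introduced for illustration; they are not taken from the paper.

```python
import cv2

def measure_frame(frame_gray, model_points, project, patch_bank,
                  patch=11, search=30, top_n=25):
    """Return up to top_n (model_index, image_point, ncc) measurements.

    model_points : known 3D feature locations of the wireframe model
    project      : hypothetical projection of a 3D point with the predicted pose
    patch_bank   : reference patch (uint8, patch x patch) stored per model feature
    """
    fast = cv2.FastFeatureDetector_create(threshold=20)
    keypoints = fast.detect(frame_gray, None)
    half = patch // 2
    h, w = frame_gray.shape[:2]
    matches = []
    for idx, X in enumerate(model_points):
        u, v = project(X)                       # predicted image position
        best = None
        for kp in keypoints:
            x, y = int(kp.pt[0]), int(kp.pt[1])
            if abs(x - u) > search or abs(y - v) > search:
                continue                        # only search near the prediction
            if x < half or y < half or x + half >= w or y + half >= h:
                continue
            win = frame_gray[y - half:y + half + 1, x - half:x + half + 1]
            ncc = cv2.matchTemplate(win, patch_bank[idx],
                                    cv2.TM_CCOEFF_NORMED)[0, 0]
            if best is None or ncc > best[2]:
                best = (idx, (x, y), float(ncc))
        if best is not None:
            matches.append(best)
    matches.sort(key=lambda m: -m[2])
    return matches[:top_n]
```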
Fig. 3. Proportion of each of five outdoor sequences tracked by each filter. The Student-t filter can clearly be seen to out-perform both the Kalman filter and the EKPF in terms of robustness. Sequence 2 for each filter is included in the supplementary material, as is Sequence 4 for the Student-t mixture filter.
Table 2. Average filter processing time per frame for each of the filters (where RANSAC is included, 500 RANSAC tests were run). The times relating to the most robust implementation are shown in bold.

Filter                     Measurement Update Time   Time inc. RANSAC
Student-t Mixture Filter   33.3 ms                   -
Kalman Filter              0.53 ms                   19.1 ms
EKPF                       40.3 ms                   58.8 ms
Fig. 4. Top to bottom, frames 100, 145 and 190 from Sequence 2 for each of the filters tested: (a) Student-t Mixture Filter, (b) Kalman Filter, (c) EKPF. The Kalman filter and the EKPF lose track of the buildings at various points, whereas the Student-t mixture filter tracks the building for the entire sequence.
A video of the tracking performance is supplied with the supplementary material as outdoortracking.avi, while figure 4 shows a number of frames from each of the trackers. Figure 3 presents the proportion of each sequence for which each filter was able to successfully track the buildings. It can easily be seen that the Student-t filter out-performs both the Kalman filter and the EKPF in terms of robustness: it maintains track for much longer than its competitors in each of the five sequences. Table 2 shows the average update time required by each filter. As expected, due to its simplicity the Kalman filter is the least computationally expensive, but at an average of 33.3 ms per update the Student-t mixture filter falls within the bounds of frame-rate operation. Additionally, the independent nature of the peak-finding process (the most expensive operation for the Student-t mixture
filter) means the algorithm could easily be parallelised and run across multiple processors. These timings are for each filter taking an input of up to 25 measurements. The Student-t Mixture Filter simply took the top 25 matches by NCC score (of which 47% were outliers on average), whereas the Kalman filter and the EKPF had outliers removed beforehand, resulting in 14 inlying measurements per update on average. All filters scale as O(m) for m measurements.
8 Conclusions
In this paper the Student-t Mixture Filter has been introduced, a filter based around mixtures of Student-t distributions. The heavy tails of the Student-t provide an inherent robustness to 'outliers' in the measurement and process noise, negating the need for an explicit step to determine erroneous data. The filter has also been shown to outperform competing filters in real-world scenarios. The Student-t Mixture Filter is also shown to be quick enough to run in real time, facilitating its use in real-time tracking systems. Being parametric, it has polynomial complexity O(n^3), presenting itself as a scalable solution for robust filtering.
References

1. Huber, P.: Robust Statistics. Wiley, New York (1981)
2. Kalman, R.: A new approach to linear filtering and prediction problems. Transactions of the ASME–Journal of Basic Engineering 82, 35–45 (1960)
3. Fischler, M., Bolles, R.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24(6), 381–395 (1981)
4. Finetti, B.D.: The bayesian approach to the rejection of outliers. In: Proc. Fourth Berkeley Symp. on Math. Statist. and Prob., vol. 1, pp. 199–210 (1961)
5. Gordon, N.: A hybrid bootstrap filter for target tracking in clutter. IEEE Transactions on Aerospace and Electronic Systems (1997)
6. Isard, M., Blake, A.: Condensation – conditional density propagation for visual tracking. International Journal of Computer Vision 29(1), 5–28 (1998)
7. Gordon, N., Smith, A.: Approximate non-gaussian bayesian estimation and modal consistency. Journal of the Royal Statistical Society B 55(4), 913–918 (1993)
8. van der Merwe, R., de Freitas, J., Doucet, A., Wan, E.: The unscented particle filter. In: Advances in Neural Information Processing Systems, vol. 13 (2001)
9. Kotecha, J.H., Djuric, P.M.: Gaussian sum particle filtering. IEEE Transactions on Signal Processing 51, 2602–2612 (2003)
10. Han, B., Zhu, Y., Comaniciu, D., Davis, L.: Kernel-based bayesian filtering for object tracking. In: Proc. IEEE CVPR (2005)
11. Li, S., Wang, H., Chai, T.: A t-distribution based particle filter for target tracking. In: Proc. American Control Conference, pp. 2191–2196 (2006)
12. Dawid, A.P.: Posterior expectations for large observations. Miscellanea, 664–667 (1973)
13. O'Hagan, A.: On outlier rejection phenomena in bayes inference. Journal of the Royal Statistical Society B 41(3), 358–367 (1979)
14. West, M.: Robust sequential approximate bayesian estimation. Journal of the Royal Statistical Society B 43(2), 157–166 (1981)
15. Sorenson, H., Alspach, D.: Recursive Bayesian estimation using Gaussian sums. Automatica 7(4), 465–479 (1971)
16. Meinhold, R.J., Singpurwalla, N.D.: Robustification of kalman filter models. Journal of the American Statistical Association 84, 479–486 (1989)
17. Mardia, K.: Measures of multivariate skewness and kurtosis with applications. Biometrika 57, 519–530 (1970)
18. Arthur, D., Vassilvitskii, S.: k-means++: the advantages of careful seeding. In: SODA 2007. Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms, Philadelphia, PA, USA, pp. 1027–1035. Society for Industrial and Applied Mathematics (2007)
19. Jammalamadaka, S.R., Rao, T.S., Terdik, G.: Higher order cumulants of random vectors, differential operators, and applications to statistical inference and time series (1991)
20. Rosten, E., Drummond, T.: Machine learning for high speed corner detection. In: 9th European Conference on Computer Vision (May 2006)
Photo and Video Quality Evaluation: Focusing on the Subject

Yiwen Luo and Xiaoou Tang
Department of Information Engineering, The Chinese University of Hong Kong, Hong Kong
{ywluo6,xtang}@ie.cuhk.edu.hk
Abstract. Traditionally, distinguishing between high quality professional photos and low quality amateurish photos is a human task. Automatically assessing the quality of a photo in a way that is consistent with human perception is a challenging topic in computer vision. Various differences exist between photos taken by professionals and amateurs because of the use of photography techniques. Previous methods mainly use features extracted from the entire image. In this paper, based on professional photography techniques, we first extract the subject region from a photo, and then formulate a number of high-level semantic features based on this subject and background division. We test our features on a large and diverse photo database, and compare our method with the state of the art. Our method performs significantly better, with a classification rate of 93% versus 72% by the best existing method. In addition, we conduct the first study on high-level video quality assessment. Our system achieves a precision of over 95% at a reasonable recall rate for both photo and video assessments. We also show excellent application results in web image search re-ranking.
1 Introduction

With the popularization of digital cameras and the rapid development of the Internet, the number of photos that can be accessed is growing explosively. Automatically assessing the quality of photos in a way that is consistent with human perception has become more and more important with the increasing needs of professionals and home users. For example, newspaper editors can use it to find high quality photos to express news effectively; home users can use such a tool to select good-looking photos to show from their e-photo albums; and web search engines may incorporate this function to display relevant and high quality images for the user. Fig. 1 shows two example photos. Most people agree that the left photo is of high quality and the right one is not. Telling the difference between high quality professional photos and low quality photos is natural for a human, but difficult for a computer. There have been a number of works on image quality assessment concerning image degradation caused by noise, distortion, and compression artifacts [1], [2], [3]. Different from these works, we consider photo quality from an aesthetic point of view and try to determine the factors that make a photo look good in human perception.
Fig. 1. Most people may agree that (a) is of higher quality than (b)
The most related work is published in [4], [5], and [6]. Tong et al. [4] and Datta et al. [5] combined features that had previously mostly been used for image retrieval with a standard set of learning algorithms to classify professional and amateurish photos. For the same purpose, Ke et al. designed their features based on the spatial distribution of edges, blur, and the histograms of low-level color properties such as brightness and hue [6]. Our experiments show that the method in [6] produces better results than those in [4] and [5] with far fewer features, but it is still not good enough, with a classification rate of 72% on a large dataset.

The main problem with existing methods is that they compute features from the whole image. This significantly limits the performance of the features, since a good photo usually treats the foreground subject and the background very differently. Professional photographers usually differentiate the subject of the photo from the background to highlight the topic of the photo. High quality photos generally satisfy three principles: a clear topic, gathering most attention on the subject, and removing objects that distract attention from the subject [7], [8], [9]. Photographers try to achieve this by skillfully manipulating the photo composition, lighting, and focus of the subject. Motivated by these principles, in this paper we first use a simple and effective blur detection method to roughly identify the in-focus subject area. Then, following human perception of photo quality, we develop several highly effective quantitative metrics on subject clarity, lighting, composition, and color. In addition, we conduct the first study on video quality evaluation. We achieve significant improvement over state-of-the-art methods, reducing the error rates severalfold. We also apply our work to on-line image re-ranking of MSN Live image search results, with good performance.

In summary, the main contributions of this paper include: 1) a novel approach to evaluating photo and video quality by focusing on the foreground subject, together with an efficient subject detection algorithm; 2) a set of highly effective high-level visual features for photo quality assessment; 3) the first study of high-level video quality assessment, along with the first database for such a study; 4) the first study of visual quality re-ranking for real-world online image search.
2 Criteria for Assessing Photo Quality

In this section, we briefly discuss several important criteria used by professional photographers to improve photo quality. Notice that most of them rely on different treatment of the subject and the background.
Fig. 2. (a) “Fall on the Rocks” by M. Marjory, 2007. (b) “Mona Lisa Smiles” by David Scarbrough, 2007. (c) “Fall: One Leaf at a Time” by Jeff Day, 2007. (d) “Winter Gets Closer” by Cyn D. Valentine, 2007. (e) “The Place Where Romance Starts” by William Lee, 2007.
2.1 Composition

Composition means the organization of all the graphic elements inside a photo. Good composition can clearly show the audience the photo's topic and effectively express the photographer's feeling. The theory of composition is usually rooted in one simple concept: contrast. Professional photographers use contrast to awaken a vital feeling for the subject through a personal observation [10]. Contrast between light and dark, between shapes, colors, and even sensations, is the basis for composing a photo. In Fig. 2a, for example, the audience can find an obvious contrast between the cool, hard stones in the foreground and the warm, soft river and forest in the background.

2.2 Lighting

A badly lit scene ruins a photo as much as poor composition. The way a scene is lit changes its mood and the audience's perception of what the photo tries to express. Lighting in high quality photos keeps the subjects from appearing flat and enhances their 3D feeling, which helps attract the audience's attention to the subjects. Good lighting results in strong contrast between the subject and the background, and visually distinguishes the subject from the background. The lighting in Fig. 2b isolates the girls from the background and visually enhances their 3D feeling.

2.3 Focus Controlling

Professional photographers control the focus of the lens to isolate the subject from the background. They blur the background but keep the subject in focus, as in Fig. 2c. They may also blur closer objects but sharpen farther objects to express the depth of the scene, as in Fig. 2d. Beyond merely capturing the scene, controlling the lens can create surrealistic effects, as in Figs. 2c and 2e.
2.4 Color

Much of what viewers perceive and feel about a photo comes through its colors. Although color perception depends on context and is culture-related, recent color science studies show that the influence of a certain color or color combination on human emotions is usually stable across varying cultural backgrounds [11], [12]. Professional photographers use various exposure and interpretation methods to control the color palette in a photo, and use specific color combinations to elicit specific emotions in viewers, producing a pleasing affective response. The photographer of Fig. 2a uses the combination of bright yellow and dark gray to produce an aesthetic feeling from the beauty of nature. The photographer of Fig. 2b uses the combination of white and natural skin color to enhance the chaste beauty of the girls.
3 Features for Photo Quality Assessment

Based on the previous analysis, we formulate these semantic criteria mathematically in this section. We first separate the subject from the background, and then discuss how to extract the features for photo quality assessment.

3.1 Subject Region Extraction

Professional photographers usually make the subject of a photo clear and the background blurred. We propose an algorithm that detects the clear area of the photo and considers it as the subject region, with the rest as the background. Levin et al. [13] presented a scheme to identify blur in an image when the blur is caused by 1D motion; we modify it to detect 2D blurred regions in an image. Let us use Fig. 3 as an example to explain the method. Fig. 3a is a landscape photo. We use a kernel of size k × k with all coefficients equal to 1/k² to blur the photo. Figs. 3b, 3c and 3d are the results blurred by 5 × 5, 10 × 10, and 20 × 20 kernels, respectively. The log histograms of the horizontal derivatives of the four images in Fig. 3 are shown in Fig. 3e, and the log histograms of the vertical derivatives are shown in Fig. 3f. It is obvious that the blurring significantly changes the shapes of the curves in the histograms. This suggests that the statistics of the derivative filter responses can be used to tell the difference between clear and blurred regions. Let f_k denote the blurring kernel of size k × k. Convolving the image I with f_k, and computing the horizontal and vertical derivatives from I ∗ f_k, we obtain the distributions of the horizontal and vertical derivatives:

p^x_k ∝ hist(I ∗ f_k ∗ d_x),   p^y_k ∝ hist(I ∗ f_k ∗ d_y)    (1)

where d_x = [1, −1] and d_y = [1, −1]^T. The operations in Eq. (1) are performed 50 times, with k = 1, 2, ..., 50. For a pixel (i, j) in I, we define a log-likelihood of the derivatives in its neighboring window W(i,j) of size n × n with respect to each of the blurring models as:

l_k(i, j) = ∑_{(i',j') ∈ W(i,j)} ( log p^x_k(I_x(i', j')) + log p^y_k(I_y(i', j')) )    (2)
where I_x(i', j') and I_y(i', j') are the horizontal and vertical derivatives at pixel (i', j'), respectively, and l_k(i, j) measures how well the pixel (i, j)'s neighboring window is explained by a k × k blurring kernel. We can then find the blurring kernel that best explains the window's statistics by k*(i, j) = arg max_k l_k(i, j). When k*(i, j) = 1, pixel (i, j) is in the clear area; otherwise it is in the blurred area. With k*(i, j) for all the pixels of I, we can obtain a binary image U that denotes the clear and blurred regions of I, defined as:

U(i, j) = { 1, if k*(i, j) = 1;  0, if k*(i, j) > 1 }    (3)

Two examples of such images are shown in Figs. 4a and 4b, with a neighboring window size of 3 × 3. Next, we find a compact bounding box that encloses the main part of the subject in an image. Projecting U onto the x and y axes independently, we have

U_x(i) = ∑_j U(i, j),   U_y(j) = ∑_i U(i, j)    (4)
On the x axis, we find x_1 and x_2 such that the energy in [0, x_1] and the energy in [x_2, N − 1] are each equal to (1 − α)/2 of the total energy in U_x, where N is the size of the image in the x direction. Similarly, we find y_1 and y_2 in the y direction. Thus, the subject region R is [x_1 + 1, x_2 − 1] × [y_1 + 1, y_2 − 1]. In all our experiments, we choose α = 0.9. Two examples of subject regions, corresponding to Figs. 1a and 1b, are given in Figs. 4c and 4d.
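The subject-region extraction of Eqs. (1)-(4) can be sketched as follows. For brevity the sketch uses a reduced set of kernel sizes and a coarse 64-bin derivative histogram instead of the full k = 1, ..., 50 sweep, so it is meant as an illustration of the procedure rather than a faithful re-implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def log_hist_model(vals, bins):
    """Histogram of derivative values -> smoothed log-probability table."""
    counts, _ = np.histogram(vals, bins=bins)
    p = (counts + 1.0) / (counts.sum() + len(counts))
    return np.log(p)

def subject_region(I, kernel_sizes=(1, 3, 5, 9, 15, 25), alpha=0.9, win=3):
    """Sketch of Eqs. (1)-(4): blur likelihoods, binary map U, bounding box."""
    I = I.astype(np.float64) / 255.0
    bins = np.linspace(-1.0, 1.0, 65)
    Ix = np.diff(I, axis=1, append=I[:, -1:])            # d_x = [1, -1]
    Iy = np.diff(I, axis=0, append=I[-1:, :])            # d_y = [1, -1]^T
    ix_bin = np.clip(np.digitize(Ix, bins) - 1, 0, 63)
    iy_bin = np.clip(np.digitize(Iy, bins) - 1, 0, 63)

    scores = []
    for k in kernel_sizes:
        Ib = uniform_filter(I, size=k)                   # k x k box blur
        logpx = log_hist_model(np.diff(Ib, axis=1, append=Ib[:, -1:]), bins)
        logpy = log_hist_model(np.diff(Ib, axis=0, append=Ib[-1:, :]), bins)
        loglik = logpx[ix_bin] + logpy[iy_bin]           # per-pixel term of Eq. (2)
        scores.append(uniform_filter(loglik, size=win))  # average over the window
    kstar = np.array(kernel_sizes)[np.argmax(np.stack(scores), axis=0)]
    U = (kstar == 1).astype(np.float64)                  # Eq. (3)

    def bounds(profile, n):
        c = np.cumsum(profile)
        total = c[-1] if c[-1] > 0 else 1.0
        lo = np.searchsorted(c, 0.5 * (1 - alpha) * total)
        hi = np.searchsorted(c, (1.0 - 0.5 * (1 - alpha)) * total)
        return lo, min(hi, n - 1)
    y1, y2 = bounds(U.sum(axis=1), U.shape[0])           # Eq. (4), y projection
    x1, x2 = bounds(U.sum(axis=0), U.shape[1])           # Eq. (4), x projection
    return U, (x1, y1, x2, y2)
```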
Fig. 3. Images blurred by different blurring kernels. (a) Original Image. (b) Result blurred by the 5 × 5 kernel. (c) Result blurred by the 10 × 10 kernel. (d) Result blurred by the 20 × 20 kernel. (e) Log histograms of the horizontal derivatives of the original image and the images blurred by the 5 × 5, 10 × 10, and 20 × 20 kernels, respectively. (f) Log histograms of the vertical derivatives of the original image and the blurred images by 5 × 5, 10 × 10, and 20 × 20 kernels, respectively.
Fig. 4. (a) The clear regions (white) of Fig. 1a. (b) The subject region of Fig. 1a. (c) The clear (white) regions of Fig. 1b. (d) The subject region of Fig. 1b.
3.2 Clarity Contrast Feature

To attract the audience's attention to the subject and to isolate the subject from the background, professional photographers usually keep the subject in focus and make the background out of focus. A high quality photo is neither entirely clear nor entirely blurred. We here propose a clarity contrast feature f_c to describe the subject region with respect to the image:

f_c = (M_R / R) / (M_I / I)    (5)

where R and I are the areas of the subject region and the original image, respectively, and

M_I = {(u, v) | |F_I(u, v)| > β max{F_I(u, v)}}    (6)
M_R = {(u, v) | |F_R(u, v)| > β max{F_R(u, v)}}    (7)
F_I = FFT(I),   F_R = FFT(R)    (8)
A clear image has relatively more high frequency components than a blurred image. In Eq. (5), M_R/R denotes the ratio of the area of the high frequency components to the area of all the frequency components in R; a similar explanation applies to M_I/I. In all our experiments, we choose β = 0.2. For the two images in Fig. 1, the clarity contrast features are 5.78 and 1.62, respectively. From our experiments, we have found that the clarity features of high quality and low quality photos mainly fall in [1.65, 20.0] and [1.11, 1.82], respectively.

3.3 Lighting Feature

Since professional photographers often use different lighting on the subject and the background, the brightness of the subject is significantly different from that of the background. However, most amateurs use natural lighting and let the camera automatically adjust a photo's brightness, which usually reduces the brightness difference between the subject and the background. To capture the difference between these two kinds of photos, we formulate the lighting feature as:

f_l = | log(B_s / B_b) |    (9)
where B_s and B_b are the average brightness of the subject region and the background, respectively. The values of f_l for Fig. 1a and Fig. 1b are 0.066 and 0.042, respectively. Usually, the values of f_l for high quality and low quality photos fall in [0.03, 0.20] and [0.00, 0.06], respectively.

3.4 Simplicity Feature

To reduce the distraction of attention by objects in the background, professional photographers keep the background simple. We use the color distribution of the background to measure this simplicity. For a photo, we quantize each of the RGB channels into 16 values, creating a histogram His of 4096 bins, which gives the counts of quantized colors present in the background. Let h_max be the maximum count in the histogram. The simplicity feature is defined as:

f_s = (S / 4096) × 100%    (10)
where S = |{i | His(i) ≥ γ h_max}| is the number of color bins whose count is at least γ h_max. We choose γ = 0.01 in all our experiments. The values of f_s for Fig. 1a and Fig. 1b are 1.29% and 4.44%, respectively. Usually, the simplicity features of high quality and low quality photos fall in (0, 1.5%] and [0.5%, 5%], respectively.

3.5 Composition Geometry Feature

Good geometrical composition is a basic requirement for high quality photos. One of the most well-known principles of photographic composition is the Rule of Thirds. If we divide a photo into nine equal-size parts by two equally-spaced horizontal lines and two equally-spaced vertical lines, the rule suggests that the intersections of these lines should be the centers for the subject (see Fig. 5). Studies have shown that when viewing images, people usually look at one of the intersection points rather than the center of the image. To formulate this criterion, we define a composition feature as

f_m = min_{i=1,2,3,4} √( (C_Rx − P_ix)² / X² + (C_Ry − P_iy)² / Y² )    (11)

where (C_Rx, C_Ry) is the centroid of the binary subject region in U (see Section 3.1), (P_ix, P_iy), i = 1, 2, 3, 4, are the four intersection points in the image, and X and Y are the width and height of the image. For Figs. 1a and 1b, the values of f_m are 0.11 and 0.35, respectively.
Fig. 5. (a) An illustration of the Rule of Thirds. (b) A high quality image obeying this rule.
3.6 Color Harmony Feature

Harmonic colors are sets of colors that are aesthetically pleasing in terms of human visual perception. There are various mathematical models for defining and measuring the harmony of the colors of an image [14], [15]. We have tried to measure the color harmony of a photo based on previous models [15], and found that the single-feature classification rate is low. Here we develop a more accurate feature to measure the color harmony of a photo by learning the color combinations (the coexistence of two colors in the photo) from the training dataset. For each photo, we compute a 50-bin histogram for each of the hue, saturation, and brightness channels. The value of the color combination between hue i and hue j is defined as H_hue(i) + H_hue(j). The definitions for saturation combinations and brightness combinations are similar. For the high quality and low quality photos in the training database, we obtain the histograms of hue combinations with

H_high,hue(i, j) = Average(H_high,hue(i) + H_high,hue(j))    (12)
H_low,hue(i, j) = Average(H_low,hue(i) + H_low,hue(j))    (13)
where H_high,hue (H_low,hue) is the histogram of hue from the high (low) quality training photos. Similarly, we can obtain the histograms of saturation combinations and brightness combinations, H_high,sat(i, j), H_low,sat(i, j), H_high,bri(i, j), and H_low,bri(i, j). We design a feature f_h to measure whether a photo is more similar to the high quality photos or the low quality photos in the color combinations, which is formulated as:

f_h = Hue_s × Sat_s × Bri_s    (14)
where Hue_s = Hue_high / Hue_low, Sat_s = Sat_high / Sat_low, Bri_s = Bri_high / Bri_low; Hue_high is the cross-product distance between H_high,hue and the histogram of hue of the input photo, and Hue_low, Sat_high, Sat_low, Bri_high, and Bri_low are computed similarly. For Figs. 1a and 1b, the values of f_h are 1.42 and 0.86, respectively. Usually, the color combination features of high quality photos fall in [1.1, 1.6], and those of low quality photos in [0.8, 1.2].
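A compact sketch of the clarity (5), lighting (9), simplicity (10) and composition (11) features is shown below, assuming an 8-bit RGB image, a subject bounding box and binary map U as produced by a routine like the earlier subject_region sketch, and grayscale taken as the channel mean. The color harmony feature (14) is omitted since it additionally requires the learned combination histograms.

```python
import numpy as np

def photo_features(img_rgb, box, U, beta=0.2, gamma=0.01):
    """Clarity f_c (5), lighting f_l (9), simplicity f_s (10), composition f_m (11).

    img_rgb : H x W x 3 uint8 array, box : (x1, y1, x2, y2) subject region,
    U       : binary clear-region map (H x W).
    """
    H, W = img_rgb.shape[:2]
    x1, y1, x2, y2 = box
    gray = img_rgb.mean(axis=2)
    region = gray[y1:y2 + 1, x1:x2 + 1]

    def high_freq_ratio(a):
        F = np.abs(np.fft.fft2(a))
        return np.count_nonzero(F > beta * F.max()) / a.size
    fc = high_freq_ratio(region) / high_freq_ratio(gray)             # Eq. (5)

    mask = np.zeros((H, W), dtype=bool)
    mask[y1:y2 + 1, x1:x2 + 1] = True
    Bs, Bb = gray[mask].mean(), gray[~mask].mean()
    fl = abs(np.log(Bs / Bb))                                        # Eq. (9)

    bg = img_rgb[~mask]                                              # background pixels
    q = (bg // 16).astype(int)                                       # 16 levels per channel
    codes = q[:, 0] * 256 + q[:, 1] * 16 + q[:, 2]
    hist = np.bincount(codes, minlength=4096)
    fs = 100.0 * np.count_nonzero(hist >= gamma * hist.max()) / 4096.0   # Eq. (10)

    ys, xs = np.nonzero(U)
    cx, cy = (xs.mean(), ys.mean()) if xs.size else (W / 2.0, H / 2.0)
    thirds = [(W / 3, H / 3), (2 * W / 3, H / 3),
              (W / 3, 2 * H / 3), (2 * W / 3, 2 * H / 3)]
    fm = min(np.sqrt((cx - px) ** 2 / W ** 2 + (cy - py) ** 2 / H ** 2)
             for px, py in thirds)                                   # Eq. (11)
    return fc, fl, fs, fm
```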
4 Features for Video Quality Assessment

A video is a sequence of still images, so the features proposed to assess photo quality are also applicable to video quality assessment. Since a video also contains motion information that can be used to distinguish professional videos from amateurish videos, we design two additional motion-related features in this section.

4.1 Length of Subject Region Motion

Experienced photographers usually adjust the focus and change the shooting angle to tell the story more effectively. For example, Fig. 6a shows a conversation sequence from the movie "Blood Diamond". The photographers change the shooting angle and focus continually to show the audience not only the speaking man's expression but also the listening woman's.
Fig. 6. (a) A sequence of screenshots in "Blood Diamond". (b) A sequence of screenshots in "Love Story". Both of them show how the professional photographers change the shooting angle and focus when taking the videos.
In Fig. 6b, the photographer moves the focus from the ring to the girl's face, showing the girl's expression and feeling when she sees the ring. However, amateurish photographers seldom change the shooting angle and focus when taking videos. Since changes of shooting angle and focus usually change the subject region in the frames, we evaluate these changes by the average moving distance of the subject region between neighboring frames. We sample frame groups from a video, each of which contains P frames at a rate of 5 frames per second. Then this feature is defined as:

f_d = (1 / (P − 1)) ∑_{i=2}^{P} √( (C_i,x − C_{i−1},x)² / X² + (C_i,y − C_{i−1},y)² / Y² )    (15)
where (C_i,x, C_i,y) is the centroid of the binary subject region of frame i, and X and Y are the width and height of the frame. Usually, the values of f_d for high quality and low quality videos fall in [0.05, 0.6] and [0.003, 0.2], respectively.

4.2 Motion Stability

Camera shake is much less pronounced in high quality videos than in low quality videos, so this feature can be used to distinguish between the two kinds of videos. Various methods have been proposed to detect shaking artifacts [16]. We use Yan and Kankanhalli's method [16] for this work due to its simplicity. We sample Q groups of frames from a video, each of which contains several successive frames. Then this feature is defined as:

f_t = Q_t / Q    (16)
where Q is the total number of successive frame triples in all the groups, and Q_t is the number of triples that are detected as shaky. Here we briefly explain how to detect shaky frames. We select the subject region from the first frame of a group as the target region, and then iteratively compute the best motion trajectory of the region, which results in a set of motion vectors. Three successive frames in the group give two motion vectors that form an angle; if the angle is larger than 90 degrees, these three successive frames are considered shaky.
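The two motion features can be written down directly from (15) and (16), assuming the per-frame subject-region centroids and the per-group motion vectors have already been produced by a tracker; a minimal sketch:

```python
import numpy as np

def motion_length(centroids, W, H):
    """Eq. (15): mean normalised displacement of the subject-region centroid
    over a group of P frames sampled at 5 fps."""
    c = np.asarray(centroids, dtype=float)        # shape (P, 2), columns (x, y)
    d = np.sqrt(np.diff(c[:, 0]) ** 2 / W ** 2 + np.diff(c[:, 1]) ** 2 / H ** 2)
    return d.sum() / (len(c) - 1)

def motion_stability(motion_vectors_per_group):
    """Eq. (16): fraction of successive frame triples whose two motion vectors
    form an angle larger than 90 degrees (flagged as shaky)."""
    total, shaky = 0, 0
    for vecs in motion_vectors_per_group:         # one list of 2D vectors per group
        for v1, v2 in zip(vecs[:-1], vecs[1:]):
            total += 1
            if np.dot(v1, v2) < 0:                # angle > 90 degrees
                shaky += 1
    return shaky / total if total else 0.0
```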
5 Experiments

In this section, we demonstrate the effectiveness of our features using the photo database collected by Ke et al. [6] and a large and diverse video database collected from professional movies and amateurish videos. To further test the usability of these features, we apply them to the images returned by MSN Live Search to produce better rankings. To compare different features, we use three popular classifiers: the Bayes classifier, which is also used in [6], the SVM [17], and Gentle AdaBoost [18].

5.1 Photo Assessment

We compare the performance of our features with Ke et al.'s features [6] and Datta et al.'s features [5] on the database collected by Ke et al. [6]. The database was acquired by crawling a photo contest website, DPChallenge.com, which contains a diverse set of high and low quality photos from many different photographers. The 60000 photos obtained were rated by hundreds of users at DPChallenge.com. The top 10%, 6000 photos in total, were taken as high quality photos, and the bottom 10%, 6000 photos in total, as low quality photos. We randomly choose 3000 high quality and 3000 low quality photos as the training set, and use the remaining 3000 high quality and 3000 low quality photos as the testing set. This design of the experiment is the same as that in [6]. We first give the classification results of individual features, and then the combined result using the Bayes classifier. For each feature, we plot a precision-recall curve to show its discriminatory ability. Fig. 7b shows the performance of our features. For comparison, Fig. 7a shows the performance of the features proposed by Ke et al. [6].
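The evaluation protocol (features in, classifier out, precision-recall curves and error rates) can be sketched with scikit-learn as below. The random arrays stand in for the extracted DPChallenge feature vectors and labels, and GaussianNB is used as a stand-in for the Bayes classifier, which is an assumption about its exact form.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import precision_recall_curve, accuracy_score

# X: N x 5 matrix of (fc, fl, fs, fm, fh) features, y: 1 = high quality.
# Random placeholders stand in for the extracted features of the 6000/6000 split.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(3000, 5)), rng.integers(0, 2, 3000)
X_test, y_test = rng.normal(size=(3000, 5)), rng.integers(0, 2, 3000)

for name, clf in [("Bayes", GaussianNB()), ("SVM", SVC(probability=True))]:
    clf.fit(X_train, y_train)
    scores = clf.predict_proba(X_test)[:, 1]
    precision, recall, _ = precision_recall_curve(y_test, scores)
    err = 1.0 - accuracy_score(y_test, clf.predict(X_test))
    best_p = precision[recall >= 0.2].max()   # best precision at recall >= 20%
    print(f"{name}: error rate {err:.3f}, precision at recall >= 0.2: {best_p:.3f}")
```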
Fig. 7. Photo classification performance comparisons. Bayes classifier performance of (a) Ke’s features, (b) our features, (c) combined features. One-dimensional SVM performance of (d) Ke’s features, (e) our features. (f) Classification performance of different methods.
Fig. 8. (a) Five samples from the 1000 top ranked test photos by our features using the Bayesian classifier. (b) Five samples from the 1000 bottom ranked test photos by our features using the Bayesian classifier.
At low recall rates, the precisions of all our four features are over 80%, but only the precision of the blur feature in Ke et al.'s method is over 80%. The clarity feature is the most discriminative of all the features. Fig. 7c shows two curves giving the performance of Ke et al.'s features combined and of our features combined, respectively; it is easy to see that our method outperforms Ke et al.'s. Part of the training and testing samples can be found in the supplementary materials. To further test the performance of our features, we perform one-dimensional SVM classification on individual features. Figs. 7d and 7e show that four of our five features have classification error rates below 30%, while those of Ke's features are all above 30%. We use the Bayesian classifier, SVM, and Gentle AdaBoost to test the performance of Datta et al.'s, Ke et al.'s, and our methods. The results are given in Fig. 7f, from which we can clearly see that our algorithm greatly outperforms the other two. To show some examples, we randomly pick 5 samples from the ranked test photos and display them in Fig. 8. It is easy to tell the quality difference between the two groups. One reason why our features perform much better than Ke et al.'s and Datta et al.'s is that we extract the subject region from a photo first and then define the features based on this region and the entire photo, while they developed their features from the whole photo only. Another reason is that we design our features mostly based on professional photography techniques.

5.2 Video Assessment

To demonstrate the effectiveness of our video quality assessment method, we collect a large and diverse video database from a video sharing website, YouTube.com. There are 4000 high quality professional movie clips and 4000 low quality amateurish clips. We randomly select 2000 high quality clips and 2000 low quality clips as the training set, and take the rest as the test set.
Fig. 9. Video classification performance comparisons. Bayes classifier performance of (a) Ke’s features, (b) our features, (c) combined features. One-dimensional SVM performance of (d) Ke’s features, (e) our features. (f) Classification performance of different methods.
To apply the features for photo assessment to a video, we select a number of frames from the video at a rate of one frame per second, and take the average assessment of these frames as the assessment of the video. Similar to the photo experiment, we first use the Bayesian classifier to plot the precision-recall curve for each of our features and each of Ke et al.'s features. Fig. 9a shows the performance of Ke et al.'s features, and Fig. 9b shows the performance of our features. Most of our features perform better than Ke et al.'s. Fig. 9c shows the performance of Ke et al.'s features combined, our photo features combined, and our photo and video features combined. Then we use the one-dimensional SVM to test individual features; Figs. 9d and 9e show the results. We apply the three classifiers to compute the classification error rates with Datta et al.'s, Ke et al.'s, and our features. The results are shown in Fig. 9f. The improvement of our method over the others is obvious.

5.3 Web Image Ranking

To further evaluate the usability of our image assessment method, we use it to rank the images retrieved by MSN Live Search. 50 volunteers aged between 18 and 30 took part in the experiment. They used 10 keywords randomly selected from a word list to search for images, and the top 1000 images in each search were downloaded. The volunteers then gave them scores ranging from 1 to 5 (5 being the best). We use Ke et al.'s classification method and ours to re-rank these images. Fig. 10c shows the average scores of the top 1 to top 50, top 51 to top 100, ..., and top 951 to top 1000 images, respectively. From Fig. 10c, we can see that the MSN Live Search engine does not consider the quality of the images. After re-ranking these images with our method, the top ranked images are of higher quality.
Fig. 10. (a) The first page of the images searched by MSN Live Search with the keyword "bird". (b) The first page of the images after re-ranking by our classification system. (c) The average scores of the top-ranked images under different ranking systems.
Ke et al.'s method can also improve the original ranking, but it does not perform as well as ours. Figs. 10a and 10b show an example of the images ranked by our system; more examples can be found in the supplementary materials. It should be noted that photo quality ranking is not the only criterion for image search re-ranking: better photo quality does not mean greater relevance. We plan to combine this work with other image search re-ranking work [19] in our future research.
6 Conclusion

In this paper, we have proposed a novel method to assess photo and video quality. We first extract the subject region from a photo, and then formulate a number of high-level semantic features based on professional photography techniques to classify high quality and low quality photos. We have also conducted the first video quality evaluation study, based on professional video-making techniques. The performance of our classification system using these features is much better than that of previous work. Our algorithm can be integrated into existing image search engines to find not only relevant but also high quality photos. Notice that one strength of our algorithm is that we achieve very good results using only very simple features; it is certainly possible to improve upon them with more sophisticated feature design. The data used in this paper can be downloaded at http://mmlab.ie.cuhk.edu.hk.
References

1. Wang, Z., Sheikh, H.R., Bovik, A.C.: No-reference perceptual quality assessment of JPEG compressed images. ICIP (2002)
2. Wang, Z., Bovik, A., Sheikh, H., Simoncelli, E.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Processing 13 (2004)
3. Sheikh, H., Bovik, A., de Veciana, G.: An information fidelity criterion for image quality assessment using natural scene statistics. IEEE Trans. Image Processing 14 (2005)
4. Tong, H., Li, M., Zhang, H., He, J., Zhang, C.: Classification of Digital Photos Taken by Photographers or Home Users. In: Proc. Pacific-Rim Conference on Multimedia (2004)
5. Datta, R., Joshi, D., Li, J., Wang, J.: Studying Aesthetics in Photographic Images Using a Computational Approach. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3951. Springer, Heidelberg (2006)
6. Ke, Y., Tang, X., Jing, F.: The Design of High-Level Features for Photo Quality Assessment. In: CVPR (2006)
7. Freeman, M.: The Complete Guide to Light and Lighting. Ilex Press (2007)
8. Freeman, M.: The Photographer's Eye: Composition and Design for Better Digital Photos. Ilex Press (2007)
9. London, B., Upton, J., Stone, J., Kobre, K., Brill, B.: Photography, 8th edn. Pearson Prentice Hall, London (2005)
10. Itten, J.: Design and Form: The Basic Course at the Bauhaus and Later. Wiley, Chichester (1975)
11. Manav, B.: Color-Emotion Associations and Color Preferences: A Case Study for Residences. Color Research and Application 32 (2007)
12. Gao, X., Xin, J., Sato, T., Hansuebsai, A., Scalzo, M., Kajiwara, K., Guan, S., Valldeperas, J., Lis, M., Billger, M.: Analysis of Cross-Cultural Color Emotion. Color Research and Application 32 (2007)
13. Levin, A.: Blind motion deblurring using image statistics. In: NIPS (2006)
14. Tokumaru, M., Muranaka, N., Imanishi, S.: Color design support system considering color harmony. In: Proc. of the 2002 IEEE International Conference on Fuzzy Systems, vol. 1 (2002)
15. Cohen-Or, D., Sorkine, O., Gal, R., Leyvand, T., Xu, Y.: Color harmonization. ACM Transactions on Graphics (TOG) 25 (2006)
16. Yan, W., Kankanhalli, M.: Detection and removal of lighting & shaking artifacts in home videos. In: Proc. of the tenth ACM international conference on Multimedia (2002)
17. Vapnik, V.: The Nature of Statistical Learning Theory. Springer, Heidelberg (2000)
18. Friedman, J., Hastie, T., Tibshirani, R.: Additive logistic regression: a statistical view of boosting (with discussion and a rejoinder by the authors). Ann. Statist. 28 (2000)
19. Cui, J., Wen, F., Tang, X.: Real Time Google and Live Image Search Re-ranking. In: Proc. of ACM international conference on Multimedia (2008)
The Bi-directional Framework for Unifying Parametric Image Alignment Approaches

Rémi Mégret, Jean-Baptiste Authesserre, and Yannick Berthoumieu
IMS Laboratory, University of Bordeaux, France
{remi.megret,jean-baptiste.authesserre,yannick.berthoumieu}@ims-bordeaux.fr

Abstract. In this paper, a generic bi-directional framework is proposed for parametric image alignment, that extends the classification of [1]. Four main categories (Forward, Inverse, Dependent and Bi-directional) form the basis of a consistent set of subclasses, onto which state-of-the-art methods have been mapped. New formulations for the ESM [2] and the Inverse Additive [3] algorithms are proposed, that show the ability of this framework to unify existing approaches. New explicit equivalence relationships are given for the case of first-order optimization that provide some insights into the choice of an update rule in iterative algorithms.
1 Introduction
Motion estimation is a fundamental task of many vision applications such as object tracking, image mosaicking, video compression or augmented reality. Image alignment based on template matching is a natural approach to image registration, which estimates the parameters that best warp one image onto the other. The optimum is conventionally provided by the minimization of the displaced frame difference between the template and an image. Since the Lucas and Kanade algorithm [4], many algorithms have been proposed to improve performance. Baker and Matthews [1] summarized and experimentally compared template-based techniques divided into four classes (Forwards Additive, Forwards Compositional, Inverse Additive, and Inverse Compositional). Since then, methods such as Efficient Second-order Minimization (ESM) [2], the Symmetrical Gradient Method (SGM) and the Bi-directional Gradient Method (BDGM) [5] have been proposed that do not fit into the initial four classes. To our knowledge, no generic framework has yet been proposed in which all these methods can be classified. In this paper, we develop a generic bi-directional framework which unifies the different template-based approaches. Thanks to this framework, the main contribution of this paper is to show how to rigorously define the alignment problem, and to propose a consistent set of well defined but generic classes. State-of-the-art methods are expressed as instances of this generic formulation; for some of them, this formulation is new, or more general than initially proposed. In Sect. 2, the image alignment problem is formalized, and the bi-directional framework is explained. In Sect. 3, state-of-the-art methods are classified and their interpretation within the framework is made precise. In Sect. 4, a discussion addresses general issues concerning all approaches.
2 Problem Formalization

2.1 Motion Model
The motion model is represented by a warp function W(μ, x) of parameter vector μ ∈ P, applied at position x ∈ IR². In order to facilitate the structured formulation of the framework, we require that the considered motion model form a group with respect to composition. This is the case for most models of interest [1], such as non-degenerate affine motion and homographies. This group property is extended to the parameter space P:

W(μ_0 ◦ δμ, x) = W(μ_0, W(δμ, x))          Composition    (1)
W(μ^{-1}, x) = W^{-1}(μ, x) = {y | W(μ, y) = x}    Inverse    (2)
W(0, x) = x                                Identity    (3)
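To make the group requirements (1)-(3) concrete, the following sketch implements them for an 8-parameter homography, one of the models of interest mentioned above. The particular parametrization (perturbing the first eight entries of the 3 × 3 matrix) is only one possible choice and is not taken from the paper.

```python
import numpy as np

def to_matrix(mu):
    """8-parameter homography; mu = 0 gives the identity warp (Eq. 3)."""
    H = np.eye(3)
    H.ravel()[:8] += np.asarray(mu, dtype=float)
    return H

def from_matrix(H):
    H = H / H[2, 2]
    return H.flatten()[:8] - np.eye(3).flatten()[:8]

def warp(mu, x):
    """W(mu, x) applied to a 2D point x."""
    p = to_matrix(mu) @ np.array([x[0], x[1], 1.0])
    return p[:2] / p[2]

def compose(mu0, dmu):
    """mu0 ∘ dmu, satisfying W(mu0 ∘ dmu, x) = W(mu0, W(dmu, x)) (Eq. 1)."""
    return from_matrix(to_matrix(mu0) @ to_matrix(dmu))

def inverse(mu):
    """mu^{-1}, satisfying Eq. (2)."""
    return from_matrix(np.linalg.inv(to_matrix(mu)))

# Sanity check of the group axioms on a random warp and point.
rng = np.random.default_rng(1)
mu, x = 0.1 * rng.normal(size=8), rng.normal(size=2)
assert np.allclose(warp(compose(mu, inverse(mu)), x), x, atol=1e-8)
```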
In order to apply gradient methods, the smoothness of the warp is additionally required. More precisely, it is assumed that μ → W(μ, x) is a C¹-diffeomorphism, and that δμ → μ ◦ δμ and δμ → δμ^{-1} are C¹-diffeomorphisms in a neighbourhood of δμ = 0. Provided that the parameters are not close to a degenerate configuration, these constraints are again satisfied by the models of interest.

2.2 Bi-directional Image Alignment
The fundamental assumption used for image alignment corresponds to the gray-level constancy equation (see Fig. 1 for an illustration of the concepts presented in this section):

I(W(μ̄, x)) = T(W(μ_ref, x)),   ∀x ∈ R_ref    (4)
where μ̄ represents the parameters of the true displacement between image I and the reference coordinate frame, and μ_ref represents a fixed transformation between the template image T and the reference coordinate frame. R_ref corresponds to the region of interest, expressed in the reference coordinate frame. If both images are considered to be continuous functions of position, then the change of variables x = W(μ_ref^{-1} ◦ μ_T, z), using an arbitrary μ_T, leads to a more generic equation:

I(W(μ_I, z)) = T(W(μ_T, z)),   ∀z ∈ R    (5)
where R = W(μ_T^{-1} ◦ μ_ref, R_ref) is the transformed region of interest and

μ_I = μ̄ ◦ μ_ref^{-1} ◦ μ_T    (6)
The bi-directional image alignment framework can therefore be formalized as finding a pair (μI , μT ) that minimizes the discrepancy between the right and the left hand side of (5).
Fig. 1. General principle of the bi-directional framework, when aligning two images I and T. The initial parameters μ_0 and μ_ref are shown in parallel with the parameters μ_I and μ_T leading to a correct alignment. The support region R is emphasized on the common compensated frames T(W(μ_T, x)) and I(W(μ_I, x)), as well as its corresponding regions on T(x) and I(x). For the Forwards approach, μ_T = μ_ref. For the Inverse approach, μ_I = μ_0. In the general case shown here (Dependent and Bi-directional approaches), both μ_I and μ_T vary during the optimization.
We denote by e(μ_I, μ_T) the N-dimensional vector obtained by concatenating the pixelwise differences e_i between the compensated images over a spatial sampling (x_i)_{i=1..N} of R:

e(μ_I, μ_T) = I(W(μ_I, R)) − T(W(μ_T, R))    (7)

Using the L2 norm of e, the bi-directional error function corresponds to:

E(μ_I, μ_T) = ∑_{x ∈ R} ( I(W(μ_I, x)) − T(W(μ_T, x)) )²    (8)
Once the optimal bi-directional parameters μ_I and μ_T have been estimated, the estimate μ̂ equivalent to the true forwards displacement μ̄ is then computed by applying the update rule derived from (6):

μ̂ ← μ_I ◦ μ_T^{-1} ◦ μ_ref    (9)
The region of interest R that appears in (8) is considered to be constant; we delay the discussion of the implications of this choice to Sect. 4.1. Many different error metrics may be used in place of the L2 norm, based on pixelwise differences [6], or that could, for instance, also use color distributions [7]. In this paper, the focus is on the motion model aspects of image alignment; we therefore limit ourselves to errors similar to (8).
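A minimal sketch of the bi-directional error (7)-(8) and the update rule (9) is given below for a 6-parameter affine warp, with bilinear sampling of both images over a sampled region R. The affine parametrization and the helper names are illustrative assumptions, not part of the paper.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def affine(mu, pts):
    """W(mu, x) for a 6-parameter affine warp; mu = 0 is the identity."""
    mu = np.asarray(mu, dtype=float)
    A = np.eye(2) + mu[:4].reshape(2, 2)
    return pts @ A.T + mu[4:6]

def sample(img, pts):
    coords = np.vstack([pts[:, 1], pts[:, 0]])            # (row, col) order
    return map_coordinates(img.astype(float), coords, order=1, mode='nearest')

def error_vector(I, T, mu_I, mu_T, region_pts):
    """e(mu_I, mu_T) of Eq. (7): pixelwise differences over the sampled region R."""
    return sample(I, affine(mu_I, region_pts)) - sample(T, affine(mu_T, region_pts))

def cost(I, T, mu_I, mu_T, region_pts):
    """E(mu_I, mu_T) of Eq. (8)."""
    e = error_vector(I, T, mu_I, mu_T, region_pts)
    return float(e @ e)

# Update rule of Eq. (9) for the affine group, written with homogeneous matrices.
def to_mat(mu):
    mu = np.asarray(mu, dtype=float)
    M = np.eye(3)
    M[:2, :2] += mu[:4].reshape(2, 2)
    M[:2, 2] = mu[4:6]
    return M

def from_mat(M):
    return np.concatenate([(M[:2, :2] - np.eye(2)).ravel(), M[:2, 2]])

def forwards_estimate(mu_I, mu_T, mu_ref):
    """mu_hat = mu_I ∘ mu_T^{-1} ∘ mu_ref."""
    return from_mat(to_mat(mu_I) @ np.linalg.inv(to_mat(mu_T)) @ to_mat(mu_ref))
```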
2.3 Criteria for the Classification of Methods
Composing μ_I and μ_T to the right by the same parameter μ yields an infinity of pairs (μ_I ◦ μ, μ_T ◦ μ) that satisfy (5). Four main image alignment categories can be proposed, depending on the constraints enforced on μ_I and μ_T. In the following, μ_0 represents an initial estimate of μ̄, and μ_ref the motion parameter vector between the template image and the reference coordinate frame. Aligning the image and the template consists in finding a corrective parameter vector δμ, whose nature depends on the approach.

Forwards (F). The image is warped onto the template with respect to δμ ∈ P: μ_T = μ_ref and μ_I depends on the increment δμ.

Inverse (I). The template is warped onto the image with respect to δμ ∈ P: μ_I = μ_0 and μ_T depends on the increment δμ.

Dependent (D). The image and the template are warped using parameter vectors μ_I and μ_T that both depend on a common source δμ ∈ P. The Symmetric (S) approaches are a special case of dependent approaches, corresponding to applying symmetric corrections to μ_T and μ_I; these are the only dependent approaches detailed in this paper.

Bi-directional (B). The image and the template are respectively warped using independent corrective parameters δμ_I and δμ_T that can be concatenated into a bi-directional parameter vector δμ = (δμ_I, δμ_T)^t ∈ P².

The first three approaches consider a corrective vector in a single parameter space δμ ∈ P, even though the (D) approach warps both images. In contrast, the (B) approach considers that the optimization of (8) takes place inside the complete bi-directional space δμ ∈ P². Inside each category, the methods can be further characterized by:

Meta-parametrization of the motion parameters, which expresses the functional relationship μ_I = μ_I(δμ) and μ_T = μ_T(δμ) of both parameter vectors with respect to the corrective vector δμ. A corrective parameter vector equal to the identity parameter, δμ = 0, should correspond to the initial alignment parameters: μ_I(0) = μ_0 and μ_T(0) = μ_ref. It is called a meta-parametrization in order to differentiate it from the parametrization, which consists in choosing the parametric model W.

Optimization method used to minimize E(μ_I(δμ), μ_T(δμ)), the error function of (8). This may involve gradient-based optimization, such as the Gauss-Newton (GN) and Newton (N) methods [1] or even higher-order methods [8], but also learning approaches, such as the learning of a Linear Estimator (LE) [9]. We emphasize that a particular meta-parametrization is not restricted to any type of optimization.

The forwards and inverse categories overlap the categories of the same name proposed by Baker and Matthews [1], with one special case that will be discussed in more detail in Sect. 3.5. The symmetric and the bi-directional approaches were first used by Keller and Averbuch [5]. To our knowledge, our proposal is the first systematic and generic classification covering all of these approaches as special cases.
3 Classification of Existing Methods
In this section, state-of-the-art image alignment methods are classified and expressed with respect to the criteria we presented. Table 1 sums up the proposed categories and the associated meta-parametrizations, which are used in Table 2 to provide a synoptic view of the classification of existing algorithms. The justification of this classification is done on a case-by-case basis in the indicated subsections. The methods marked with the symbol * are of particular interest and are discussed more specifically. Although we give more details related to Gauss-Newton optimization (GN), because it allows us to give new insights for the Inverse Additive algorithm [3] [1], the Efficient Second-order Minimization algorithm [2] and the Symmetric Gradient Method approach [5], we recall that the choice of a meta-parametrization is distinct from the choice of an optimization method.

3.1 Gauss-Newton Optimization (GN) within the Framework
The Gauss-Newton optimization of the generic error function (8) yields:

δμ = −(J^T J)^{-1} J^T e(μ_0, μ_ref)    (10)

where J = ∂e(μ_I(δμ), μ_T(δμ))/∂δμ |_0 corresponds to the Jacobian matrix of the error vector defined in (7). The considered functions I(x), T(x), W(μ, x), μ_I(δμ) and μ_T(δμ) are assumed to be differentiable w.r.t. x and δμ.

Table 1. Categorized bi-directional meta-parametrizations and corresponding update rules. Categories are defined in Sect. 2.3. The following naming conventions are used: 1st letter: F=Forwards, I=Inverse, S=Symmetric, B=Bi-directional; 2nd letter: C=Compositional, A=Additive; 3rd letter: R=Reverse, D=Direct, M=Midway, E=Exponential map, O=Opposite. Approaches marked with a symbol * correspond to a new or more generic formulation of the problem.

C.     App.    μ_I                μ_T                             μ̄ = μ_I ◦ μ_T^{-1} ◦ μ_ref         Sec
F      FC      μ_0 ◦ δμ           μ_ref                           μ_0 ◦ δμ                             3.2
F      FA      μ_0 + δμ           μ_ref                           μ_0 + δμ                             3.2
I      ICR     μ_0                μ_ref ◦ δμ                      μ_0 ◦ δμ^{-1}                        3.3
I      ICD*    μ_0                μ_ref ◦ δμ^{-1}                 μ_0 ◦ δμ                             3.3
I      IAR     μ_0                μ_ref + δμ                      μ_0 ◦ (μ_ref + δμ)^{-1} ◦ μ_ref      3.4
I      IAD*    μ_0                μ_ref ◦ (μ_0 + δμ)^{-1} ◦ μ_0   μ_0 + δμ                             3.5
D/S    SCM*    μ_0 ◦ (½δμ)        μ_ref ◦ (½δμ)^{-1}              μ_0 ◦ (½δμ) ◦ (½δμ)                  3.6
D/S    SCE*    μ_0 ◦ μ(½δv)       μ_ref ◦ μ(−½δv)                 μ_0 ◦ μ(δv)                          3.6
D/S    SCO*    μ_0 ◦ (½δμ)        μ_ref ◦ (−½δμ)                  μ_0 ◦ (½δμ) ◦ (−½δμ)^{-1}            3.6
B      BCD*    μ_0 ◦ δμ_I         μ_ref ◦ δμ_T^{-1}               μ_0 ◦ δμ_I ◦ δμ_T                    3.7
B      BCO*    μ_0 ◦ δμ_I         μ_ref ◦ (−δμ_T)                 μ_0 ◦ δμ_I ◦ (−δμ_T)^{-1}            3.7
Table 2. Classification of various existing methods. Category: see Sect. 2.3. Meta-parametrization: see Table 1. Optimization: GN=Gauss-Newton, N=Newton, LE=Linear Estimator, O3=Third Order. New insights are obtained for the methods indicated with a symbol *, which are discussed in their respective sections.

C.     Method                                         M.-Param   Optim.     Sect.
F      Forwards Additive [4]                          FA         GN         3.2
F      Forwards Compositional [10]                    FC         GN         3.2
F      Third-order Gradient Method [8]                FA         O3         3.2
I      Inverse Compositional [1] [11]                 ICR        GN, N...   3.3
I      Hyperplane Approximation [9]                   IAR        LE         3.4
I      Inverse Additive [3]                           IAD        GN         3.5*
D/S    Efficient Second-order Minimization [2]        SCE        GN         3.6*
D/S    Symmetric Gradient Method [5]                  SCO        GN         3.6*
D/S    Symmetric Third-order Gradient Method [8]      SCO        O3         3.6*
B      Bi-directional Gradient Method [5]             BCO        GN         3.7*
The Jacobian J is specific to each approach. It can be expressed as the concatenation of the gradients J(x_i) of the pixelwise errors e_i, where

J(x_i) = ∂I(W(μ_I(δμ), x_i))/∂δμ |_0 − ∂T(W(μ_T(δμ), x_i))/∂δμ |_0 = J_I(x_i) − J_T(x_i)    (11)

Due to a lack of space, only the key equations will be given for each approach. The intermediate steps can be obtained by replacing, in equations (8) and (7), μ_I and μ_T by their expressions with respect to δμ from Table 1, and deriving from these equations.
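The Gauss-Newton step (10) itself is independent of the chosen meta-parametrization once the residual e(μ_I(δμ), μ_T(δμ)) is available as a function of δμ. A generic sketch is given below, using a finite-difference Jacobian as a stand-in for the analytic expressions (11)-(15); the residual closure is an assumption to be supplied by the caller.

```python
import numpy as np

def numerical_jacobian(residual, delta_mu, eps=1e-6):
    """Finite-difference Jacobian J = d e(mu_I(d), mu_T(d)) / dd at d = delta_mu."""
    e0 = residual(delta_mu)
    J = np.zeros((e0.size, delta_mu.size))
    for j in range(delta_mu.size):
        d = delta_mu.copy()
        d[j] += eps
        J[:, j] = (residual(d) - e0) / eps
    return J

def gauss_newton_step(residual, n_params):
    """One iteration of Eq. (10): delta_mu = -(J^T J)^{-1} J^T e(mu_0, mu_ref)."""
    delta0 = np.zeros(n_params)
    e = residual(delta0)
    J = numerical_jacobian(residual, delta0)
    return -np.linalg.solve(J.T @ J, J.T @ e)

# `residual` would wrap an error vector with the chosen meta-parametrization,
# e.g. Forwards Compositional: residual = lambda d: e(compose(mu0, d), mu_ref).
```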
3.2 Forwards Additive (FA) and Forwards Compositional (FC)
The forwards approaches, such as Forwards Additive (FA) and Forwards Compositional (FC), fit naturally into the bi-directional framework by setting μ_T = μ_ref. This generalizes the formulation shown in [1], where μ_ref = 0; the two are equivalent by replacing T with T_ref = T(W(μ_ref, ·)). This approach was combined with Newton-type optimization in [1], and a third-order gradient method in [8]. For the FA approach, μ_I = μ_0 + δμ_FA, which yields the following Jacobian:

J^FA(x_i) = J_I^FA(x_i) = ∇I(W(μ_0, x_i)) ∂W(μ, x_i)/∂μ |_{μ_0}    (12)

For the FC approach, μ_I = μ_0 ◦ δμ_FC, which yields

J^FC(x_i) = J_I^FC(x_i) = ∇I(W(μ_0, x_i)) ∂W(μ_0, x)/∂x |_{x_i} ∂W(μ, x_i)/∂μ |_0    (13)
An explicit equivalence relationship between the FA and the FC approaches that extends the equivalence proof proposed in [1] is discussed in Sect. 4.2.
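To make the Gauss-Newton machinery of (10)-(13) concrete, here is a minimal NumPy sketch of one forwards additive step for the simplest warp, a pure 2-D translation, so that ∂W/∂μ is the identity. It is an illustrative sketch, not the authors' implementation: the nearest-neighbour resampling, the use of the full image as the region R, and all function names are assumptions.

```python
import numpy as np

def gn_forwards_additive_step(I, T, mu0):
    """One Gauss-Newton step of forwards additive alignment (Eqs. (10), (12))
    for a translation warp W(mu, x) = x + mu, whose Jacobian dW/dmu is the identity."""
    H, W = T.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Compensate I by the current parameters mu0 (nearest-neighbour resampling for brevity)
    xw = np.clip(np.round(xs + mu0[0]).astype(int), 0, W - 1)
    yw = np.clip(np.round(ys + mu0[1]).astype(int), 0, H - 1)
    Iw = I[yw, xw]
    # Error vector e(mu0, mu_ref) over the region of interest R (here the whole image)
    e = (Iw - T).ravel()
    # Rows of the Jacobian J^FA of Eq. (12): gradient of the compensated image
    gy, gx = np.gradient(Iw)
    J = np.stack([gx.ravel(), gy.ravel()], axis=1)
    # Gauss-Newton increment of Eq. (10): delta_mu = -(J^T J)^(-1) J^T e
    delta_mu = -np.linalg.solve(J.T @ J, J.T @ e)
    return mu0 + delta_mu        # additive update rule of the FA approach
```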
3.3 Inverse Compositional (IC) Variants
The inverse compositional approach proposed in [1] also fits naturally into the framework, using the parameters shown in Table 1. We further classify it as Inverse Compositional Reverse (ICR) because of the presence of δμ^{-1} in its update rule. The Jacobian is expressed w.r.t. gradients of the reference template T_ref, which allows for their pre-computation, thus improving the online performance:

J^ICR(x_i) = −J_T^IC(x_i) = −∇T(W(μ_ref, x_i)) ∂W(μ_ref, x)/∂x |_{x_i} ∂W(μ, x_i)/∂μ |_0   (14)

where the product of the first two factors defines ∇T_ref(x_i).

By replacing δμ with δμ^{-1} in μ_T, we can define a new Inverse Compositional Direct (ICD) approach, which has a simpler update rule. It has the same complexity as the ICR approach for the estimation of δμ in GN optimization, since J^ICD = −J^ICR.
3.4 Inverse Additive Reverse (IAR)
The Inverse Additive Reverse (IAR) approach is the dual of the FA approach, obtained by reversing the roles of I and T. This meta-parametrization (shown in Table 1) was used in [9] with a Linear Estimator (LE): the estimation is based on δμ = A e(μ_0, μ_ref), where A is a learned matrix. The disturbances δμ_k used for learning A are applied to the template through the parameters μ_{Tk} = μ_ref + δμ_k, which makes it an IAR approach. The advantages over gradient-based approaches are that A is computed off-line, thus decreasing the online computational cost, and that it can handle larger motion amplitudes. The use of learning-based optimization is one specific advantage of the Inverse approach compared to the others.
3.5 Inverse Additive Direct (IAD): New Insights on [3]
The algorithm introduced in [3] was called Inverse Additive in the survey [1]; in the sequel, we refer to it as the IA algorithm. The previous justification for this algorithm is based on an FA parametrization, but with the roles of I and T swapped for the computation of the Jacobian, by assuming that the two images are identical up to motion compensation (equation (4)). This allows the authors to derive an efficient algorithm if the factorization of (15) is possible. Because of this strong assumption on the relative content of the two images, the minimized error function cannot be expressed in closed form. We propose a new meta-parametrization (see Table 1), called Inverse Additive Direct (IAD), which has an additive update rule. The GN optimization of its closed-form error function then leads naturally to the IA algorithm. Indeed, the associated Jacobian matches the one used in the IA algorithm:
J^IAD(x_i) = −J_T^IAD(x_i) = −∇T_ref(x_i) [ ∂W(μ_0, x)/∂x |_{x_i} ]^{-1} ∂W(μ, x_i)/∂μ |_{μ_0}   (15)

where the right-hand side is factored as Γ(x_i) Σ(μ_0),
and where ∇T_ref(x_i) was defined in (14). The previous equation, when plugged into (10), corresponds to one iteration of the IA algorithm [3,1], thus allowing us to integrate this algorithm as an instance of the proposed framework.
3.6 Symmetric Compositional (SC): New Insights on [2], [8] and [5]
The Symmetric Compositional Midway (SCM) approach is defined by compensating both I and T towards each other using two transformations that are inverses of one another (see Table 1). In that case the common compensated coordinate frame lies exactly midway between the coordinate frames of the two images, from a compositional point of view. Its update rule is μ̂ ← μ_0 ∘ (½δμ) ∘ (½δμ). Reusing the notations from (13) and (14), its Jacobian is equal to:

J^SCM(x_i) = ½ ( J_I^FC(x_i) + J_T^IC(x_i) )   (16)
In a similar way to the IA algorithm discussed before, the justification of the ESM algorithm [2] relies on assumption (4). We now propose to instantiate this algorithm in the bi-directional framework by adapting the SCM approach. The update rule μ̂ ← μ_0 ∘ μ(δv) used in [2] is slightly different from the SCM rule. The update step δμ = μ(δv) is indeed further parametrized around the identity by a vector δv using an exponential map (associated with a Lie group on projective transformation matrices, denoted by G(x) in [2]). The optimization is then done with respect to δv instead of δμ. This parametrization has two interesting properties: μ(2v) = μ(v) ∘ μ(v) and μ(−v) = μ(v)^{-1}. We can therefore split δv into two symmetric parts, to obtain the Symmetric Compositional Exponential map (SCE) approach:

μ_I = μ_0 ∘ μ(½δv)  and  μ_T = μ_ref ∘ μ(−½δv) = μ_ref ∘ μ(½δv)^{-1}   (17)

Because, additionally, δμ = μ(δv) ≈ μ(0) + δv to the first order around the identity, due to the exponential map properties, the associated Jacobian is identical to (16). We can therefore conclude that the ESM algorithm corresponds to the GN optimization with respect to δv of the closed-form SCE error function. A very similar approach was proposed for GN optimization [5] and higher-order optimization [8]. By explicitly taking into account the compensations that occur in the described algorithms (in a similar way as in Sect. 3.2), this translates into a compositional meta-parametrization, which we call Symmetric Compositional Opposite (SCO), the O corresponding to the term −½δμ that appears in μ_T (see Table 1). According to (9), the associated update rule should be μ̂ ← μ_0 ∘ (½δμ) ∘ (−½δμ)^{-1}. It is different from the rule μ̂ ← μ_0 + δμ used in [5]. This issue is discussed in more detail in Sect. 4.2. One of the advantages of such symmetric approaches is that the estimation is accurate up to second order in δμ, even when using only a first-order approximation, provided (4) is satisfied and the motion increment to estimate is small enough (shown for SCE in [2] and SCO in [5]).
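The exponential-map parametrization used by SCE can be made concrete with the following hedged NumPy/SciPy sketch. It assumes the projective (sl(3)) Lie algebra of [2] with an illustrative choice of generator basis; only the two properties used in the text, μ(2v) = μ(v) ∘ μ(v) and μ(−v) = μ(v)^{-1}, matter here.

```python
import numpy as np
from scipy.linalg import expm

def sl3_generators():
    """An (illustrative) basis of sl(3), the Lie algebra of 3x3 projective warps."""
    G = []
    for i, j in [(0, 2), (1, 2), (0, 1), (1, 0), (2, 0), (2, 1)]:
        M = np.zeros((3, 3)); M[i, j] = 1.0
        G.append(M)
    G.append(np.diag([1.0, -1.0, 0.0]))
    G.append(np.diag([0.0, 1.0, -1.0]))
    return G

def mu(v):
    """Exponential map mu(v) = expm(sum_i v_i G_i); it satisfies mu(2v) = mu(v) mu(v)
    and mu(-v) = mu(v)^(-1), the two properties exploited by the SCE approach."""
    return expm(sum(vi * Gi for vi, Gi in zip(v, sl3_generators())))

def sce_split(mu0, mu_ref, delta_v):
    """Symmetric splitting of Eq. (17); mu0 and mu_ref are 3x3 warp matrices."""
    half = mu(0.5 * np.asarray(delta_v))
    return mu0 @ half, mu_ref @ np.linalg.inv(half)   # (mu_I, mu_T)
```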
3.7 Bi-directional Compositional (BC) Variants
The bi-directional approach was first proposed for gradient methods in [5]. As in the previous subsection, we propose to reformulate their approach by explicitly taking into account the initial parameters μ_0 and μ_ref, which leads to the Bi-directional Compositional Opposite (BCO) meta-parametrization (see Table 1). The corresponding update rule can be approximated by μ̂ ← μ_0 ∘ (δμ_I + δμ_T) for a small δμ, which is different from the rule μ̂ ← μ_0 + (δμ_I + δμ_T) used in [5]. The consequence of this difference is discussed in Sect. 4.2. It is important to note that the corrective parameter vector δμ = (δμ_I, δμ_T) belongs to the full bi-directional parameter space P². The Jacobian of the error can thus be expressed as the horizontal concatenation of the Jacobians of the simple compositional approaches: J^BCO = [ J^FC, −J^ICR ]. In the same spirit as the ICD approach, the Bi-directional Compositional Direct (BCD) approach shown in Table 1 has a slightly simpler update rule, and the same estimation cost as BCO, since J^BCD = J^BCO.
4 Discussion

4.1 Integration into an Iterative Scheme
Up to now, we have mostly derived results corresponding to one estimation step. This step is generally included in an iterative algorithm, in order to refine the estimation. The iterative schemes in the reviewed approaches update the initial parameters from iteration n to n+1 by finding the forwards parametrization equivalent to the estimated bi-directional parameters based on (9): μ_0^{n+1} ← μ_I^n ∘ (μ_T^n)^{-1} ∘ μ_ref^n and μ_ref^{n+1} ← μ_ref^n. This allows R to be kept fixed across iterations, in order to avoid drifting from the initial interest region. The region R in (8) represents the region of interest in the common compensated coordinate frame (see Fig. 1). It is generally chosen such that it corresponds to the reference region of interest in the template at initialization: R = W((μ_ref^n)^{-1} ∘ μ_ref^0, R_ref). Once defined, R is considered fixed for the optimization of (8), in order to avoid the spurious terms in the error derivatives that would reflect the variation of R. These terms have always been neglected in the studied methods. This scheme also leads to fixed derivatives with respect to the template, thus decreasing the computational cost within the (I), (D) and (B) approaches. Figure 2 illustrates the error function E(μ_I, μ_T) in the bi-directional space P². It can be noted that it forms a valley along the curve μ_I = μ̄ ∘ μ_ref^{-1} ∘ μ_T, which is valid for μ_T close to μ_ref = 0. Indeed, when μ_T is too far away, the interest region R includes elements from the background, which increases the error. The trajectories of the initial (μ_0^n, μ_ref^n) and the estimated (μ_I^n, μ_T^n) parameters are plotted for one approach of each category. The differences in the meta-parametrizations are clearly reflected as different types of trajectories in the bi-directional space P².
Fig. 2. Error function E(μ_I, μ_T) corresponding to the image of Fig. 1, displayed on a (μ_{I,1}, μ_{T,1}) slice of the bi-directional space P², where the index 1 stands for the horizontal translation coefficient. Translation was estimated using GN optimization with one method in each category. Each iteration n is drawn as an arrow that links (μ_0^n, μ_ref^n) (numbered bullets) to (μ_I^n, μ_T^n) (plain bullets). The true deformation is a 5-pixel horizontal translation μ̄_1 = 5, and the initialisation is (μ_{0,1}^1, μ_{ref,1}^1) = (0, 0). The dashed line (μ_{I,1}, μ_{T,1}) = (μ_1, 5 + μ_1) represents the set of correct estimates. The shapes of the trajectories reflect the constraints put on the meta-parametrization.
4.2 Equivalences and Incompatibilities in Terms of Update Rules
When using GN optimization, the equivalence between the FA and FC approaches was argued in [1]. On the basis of our formulation, we can extend this result by providing the exact relationship between the two update steps δμ^FA and δμ^FC computed by the two approaches. Indeed, thanks to the regularity of the considered functions, we can show that

J^FC = J^FA M_0,  where  M_0 = ∂(μ_0 ∘ δμ)/∂δμ |_0   (18)

For GN optimization, (10) then yields δμ^FC = M_0^{-1} δμ^FA. From this relationship, we can conclude that the two approaches are equivalent to the first order:

μ_0 ∘ δμ^FC ≈ μ_0 + M_0 δμ^FC = μ_0 + δμ^FA   (19)
The same methodology can be used to show that, when using GN optimization, all proposed approaches are equivalent to the first order within the same category. The equivalences are based on:

δμ^ICR = −δμ^ICD,  δμ^IAR = M_ref δμ^ICR,  δμ^IAD = M_0 δμ^ICD,  where  M_ref = ∂(μ_ref ∘ δμ)/∂δμ |_0   (20)

δμ^SCM = δμ^SCE = δμ^SCO  and  δμ^BCO = δμ^BCD   (21)
One issue that the previous analysis reveals is that, although the final estimate μ̂ may be approximately equal between related additive and compositional approaches, the corresponding δμ are not equal in the general case, because of the presence of the matrix M. Therefore the update step should always be consistent with the meta-parametrization used. When this is not the case, a correctly estimated δμ can lead to an incorrect estimate μ̂. This causes convergence problems, especially when rotations are involved, which seem to appear for example in some experiments from [5] and [8]. These effects are illustrated in Fig. 3, where the error corresponding to an affine motion estimate oscillates without converging around the correct estimate when μ_0 corresponds to a large rotation. The proposed framework offers a systematic methodology to avoid such problems when designing template-based alignment.

Fig. 3. The equivalence of two approaches does not imply the equality of their corrective parameters δμ. This counter-example is based on affine motion estimation using a standard benchmark [1]. The error is expressed in pixels. A rotation μ̄ of 70° around the center of the object is to be estimated, from an initial rotation μ_0 of 35°. All approaches use GN optimization. The hybrid approach composed of an SCO or BCO meta-parametrization combined with an additive update rule [5] converges faster than the F and I approaches when the angle of μ_0 is small, but cannot converge to the correct estimate, as explained in the text. Using instead the rule stemming from the framework corrects this problem and achieves the best results with this type of optimization.

4.3 Practical Considerations on the Classification
In order to facilitate the choice of an approach, here is a short summary of the main properties of the respective categories. The FA and FC approaches have been shown to be equivalent to the first order in [1]. We have shown that this equivalence also holds within each of the Inverse, Symmetric, and Bi-directional categories. The Inverse approach has the fastest step computation thanks to the offline computation of the Jacobian. This category is also the only one to benefit from a learned parameter estimator [9], which yields a direct estimation in one step. Additionally, when (4) is satisfied, the Symmetric approaches (SCE [2], SCO [5]) need fewer GN iterations to converge than the Forwards and Inverse approaches. According to [5], BCO should not perform better than SCO when (4) holds, but may outperform it in the more general case.
5 Conclusion
In this paper, we have presented a formal framework for pixel-based image alignment methods, associated with a simple and consistent classification. The proposed criteria have been applied to a wide range of image alignment methods. In particular, this methodology has led to a new formulation of the IA algorithm and the ESM algorithm, based on a closed-form error function without any assumption on the content of the images. This unification proved useful for making explicit the first-order equivalences between the methods, and for giving new insights with respect to the use of a mismatched update rule. We think such a framework offers a structured formulation of the parametric image alignment problem which, we hope, will help in understanding, designing and evaluating the performance of alignment algorithms. Perspectives are to extend the formalization to include models that are not purely geometric, such as illumination compensation [12], and to study the interaction between model parametrization and meta-parametrization.
References

1. Baker, S., Matthews, I.: Lucas-Kanade 20 years on: A unifying framework. International Journal of Computer Vision 56(3), 221–255 (2004)
2. Benhimane, S., Malis, E.: Real-time image-based tracking of planes using efficient second-order minimization. In: IROS 2004, Sendai, Japan, vol. 1, pp. 943–948 (2004)
3. Hager, G.D., Belhumeur, P.N.: Efficient region tracking with parametric models of geometry and illumination. IEEE Transactions on Pattern Analysis and Machine Intelligence 20(10), 1025–1039 (1998)
4. Lucas, B., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: IJCAI 1981, pp. 674–679 (1981)
5. Keller, Y., Averbuch, A.: Fast motion estimation using bi-directional gradient methods. IEEE Transactions on Image Processing 13(8), 1042–1054 (2004)
6. Baker, S., Gross, R., Matthews, I., Ishikawa, T.: Lucas-Kanade 20 years on: A unifying framework: Part 2. Technical Report CMU-RI-TR-03-01, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA (February 2003)
7. Mégret, R., Mikram, M., Berthoumieu, Y.: Inverse composition for multi-kernel tracking. In: Kalra, P.K., Peleg, S. (eds.) ICVGIP 2006. LNCS, vol. 4338, pp. 480–491. Springer, Heidelberg (2006)
8. Keller, Y., Averbuch, A.: Global parametric image alignment via high-order approximation. Computer Vision and Image Understanding 109(3), 244–259 (2008)
9. Jurie, F., Dhome, M.: Hyperplane approximation for template matching. IEEE Transactions on Pattern Analysis and Machine Intelligence 24(7), 996–1000 (2002)
10. Shum, H.Y., Szeliski, R.: Construction of panoramic image mosaics with global and local alignment. International Journal of Computer Vision 36(2), 101–130 (2000)
11. Matthews, I., Baker, S.: Active appearance models revisited. International Journal of Computer Vision 60(2), 135–164 (2004)
12. Bartoli, A.: Groupwise geometric and photometric direct image registration. IEEE Transactions on Pattern Analysis and Machine Intelligence (2008), preprint (January 14, 2008)
Direct Bundle Estimation for Recovery of Shape, Reflectance Property and Light Position

Tsuyoshi Migita, Shinsuke Ogino, and Takeshi Shakunaga

Department of Computer Science, Okayama University
{migita,ogino,shaku}@chino.cs.okayama-u.ac.jp
Abstract. Given a set of images captured with a fixed camera while a point light source moves around an object, we can estimate the shape, reflectance property and texture of the object, as well as the positions of the light source. Our formulation is a large-scale nonlinear optimization that allows us to adjust the parameters so that the images synthesized from all of the parameters optimally fit the input images. This type of optimization, which is a variation of the bundle adjustment for structure and motion reconstruction, is often employed to refine a carefully constructed initial estimation. However, the initialization task often requires a great deal of labor, several special devices, or both. In the present paper, we describe (i) an easy method of initialization that does not require any special devices or a precise calibration and (ii) an efficient algorithm for the optimization. The efficiency of the optimization method enables us to use a simple initialization. For a set of synthesized images, the proposed method decreases the residual to zero. In addition, we show that various real objects, including toy models and human faces, can be successfully recovered.
1 Introduction
In the present paper, we present a method for estimating the three-dimensional shape and bidirectional reflectance distribution function (BRDF) of an object from a set of images, based on the appearance changes that occur with respect to the changing position of a point light source. The proposed method should fulfill the following two criteria: (i) it should not require any special devices, except for a camera, a darkened room, a light source, and a computer, and (ii) it should not be too theoretically complicated. Although the method proposed herein and that proposed in [1,2] are similar with respect to the formulation and the input data set, we would like to solve the problem using a much simpler framework. The theoretical simplicity requirement enables the method to be easily extended to more complicated models, even though this task will not be examined in the present paper. On the other hand, the minimal equipment requirement is important for the method to be usable by non-professionals who wish to conveniently create three-dimensional models. Several methods for recovering the shape and BRDF of an object have been proposed in the literature [1,2,3,4,5,6,7,8,9,10,11,12,13,14]. Some of these methods,
as well as some earlier methods, proposed the recovery of an object shape by assuming that the lighting conditions are known in a computer-controlled lighting system, or by using a mirror spherical probe. Without prior knowledge of the lighting conditions, a typical method first estimates the shape of an object by using the silhouette intersection method or a range finder, and then estimates the reflectance properties. In the proposed method, the three-dimensional positions of the light source are estimated, as well as the object shape and reflectance properties of the object. Note that, in the previously proposed methods, a distant light source (which only has two degrees of freedom) is typically assumed. However, multiple viewpoints and a complicated lighting environment are beyond the scope of the present paper. The proposed formulation and solution method is a variation of the bundle adjustment [15] used for structure and motion reconstruction. In other words, it is a large-scale nonlinear optimization that adjusts the parameters so that the images synthesized from all parameters optimally fit the input images. In terms of the given cost function, no other method attains a more accurate result. However, one of the difficulties is that the optimization requires a reasonable initial estimation. It is possible to use a sophisticated method, such as that described in Refs. [4,6,8], to initialize the optimization process. However, the required accuracy for the initialization is much lower. In fact, we use a flat plane for our initial shape parameters. Once the initialization is finished, an efficient algorithm is required to perform the optimization. Since we assume the typical number of parameters to be approximately 10^5, a naive optimization method, such as the steepest descent method, is insufficient, and the Levenberg-Marquardt method [16], which exploits the second-order derivative, or Hessian matrix, is required. For solving a large-scale linear equation system with a sparse coefficient (Hessian) matrix at each iteration, the preconditioned conjugate gradient method is more suitable in that it allows us to solve the problem within a limited memory requirement and at a reasonable computational cost. The algorithm can attain an almost zero residual for an input set of synthesized images, which is not possible using naive methods. However, since extra care should be taken to avoid local minima, we gradually increase the number of parameters to be estimated and work to detect abnormally estimated parameters that must be corrected. The methods proposed in [1,2] employ a cost function similar to the proposed function and a very different approach to minimization in order to achieve a feasible computational cost. These methods require that the parameters be updated one by one, using several different algorithms (such as the steepest descent method, Newton's method, DCT, and SVD) based on several different aspects of the reflectance model. However, with our proposed method, all of the parameters are updated simultaneously. Using our proposed method, we demonstrate that various real objects, including a wooden figure, model toys and a human face, can be successfully recovered.
2 Shape Recovery Method

2.1 Image Formation/Reconstruction Model
A set of input images can be described as a collection of the following measurement vectors:

m_{fp} := (r_{fp}, g_{fp}, b_{fp})^T   (1)

which is a three-vector containing the red, green, and blue components of the image intensity at the p-th pixel in the f-th image. Since we assume that the object and the camera are fixed and that a point light source will be moving, m_{fp} is a function of the position of the light l_f. It is also a function of the object shape and its reflectance property. We approximate the surface reflectance by use of the Simplified Torrance-Sparrow Model described in Ref. [3] to describe the measurement as follows:

m_{fp} = η_f ( (w_{1p}, w_{2p}, w_{3p})^T cos β_{fp} + w_{4p} (s_R, s_G, s_B)^T exp(ρ α_{fp}²) / cos γ_p )   (2)

where

cos β_{fp} = n_p^T N[l_f − x_p],   (3)
cos γ_p = n_p^T N[v − x_p],   (4)
cos α_{fp} = n_p^T N[ N[l_f − x_p] + N[v − x_p] ],   (5)
η_f is the emittance of the light source for the f-th image, and (w_{1p}, w_{2p}, w_{3p})^T and w_{4p} are the intrinsic color and the specular reflectance at the p-th pixel; we refer to the w_{mp} as the weights. Then, l_f is the position of the light source for the f-th image, (s_R, s_G, s_B)^T is the color of the light source, v is the camera position, N is the normalization operator such that N[x] := x/|x|, and x_p is the three-dimensional position of the object at the p-th point. Note that the object shape is represented by its depth d_p from the camera for each pixel, not by triangular meshes. In addition, n_p is a unit normal, which is calculated from the three-dimensional positions of neighboring pixels. Finally, ρ is the surface roughness, which is shared by all pixels. However, this constraint does not mean that the object consists of just one material, because the specular reflection weight w_{4p} can change from pixel to pixel. Although Eqs. (3)–(5) assume that the light source is near the object, Eq. (2) does not take into account the attenuation with respect to the distance between the object and the light source. However, this effect can be approximated by considering that the light source emittance η_f decreases as the distance grows. Strictly speaking, the attenuation varies from pixel to pixel, but when the object is sufficiently small compared to the distance, this effect is small and thus the approximation is sufficient for our experimental setup.
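The following NumPy sketch evaluates Eqs. (2)–(5) for a single pixel. It is a hedged illustration: vectorization over pixels and images, the clamping of the cosines, and all variable names are our own assumptions rather than the authors' code.

```python
import numpy as np

def normalize(x):
    return x / np.linalg.norm(x)                 # the operator N[.] of the paper

def render_pixel(eta_f, l_f, v, x_p, n_p, w_diffuse_p, w4_p, s, rho):
    """Synthesize one measurement m_fp from Eqs. (2)-(5).
    w_diffuse_p = (w1p, w2p, w3p), s = (sR, sG, sB); rho is negative in practice."""
    L = normalize(l_f - x_p)                     # unit direction towards the light
    V = normalize(v - x_p)                       # unit direction towards the camera
    Hv = normalize(L + V)                        # halfway vector of Eq. (5)
    cos_beta = float(n_p @ L)                    # Eq. (3)
    cos_gamma = float(n_p @ V)                   # Eq. (4)
    cos_alpha = float(n_p @ Hv)                  # Eq. (5)
    alpha = np.arccos(np.clip(cos_alpha, -1.0, 1.0))
    diffuse = w_diffuse_p * max(cos_beta, 0.0)   # clamping is a safeguard, not part of Eq. (2)
    specular = w4_p * s * np.exp(rho * alpha ** 2) / max(cos_gamma, 1e-6)
    return eta_f * (diffuse + specular)          # Eq. (2)
```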
Using the image formation model, we can formulate the simultaneous recovery as a nonlinear optimization problem:

arg min_u E(u),  where  E(u) = Σ_{fp} r_{fp}^T r_{fp}   (6)
where r_{fp} is the difference between the measured intensity m_{fp} and the synthesized intensity (i.e., the right-hand side of Eq. (2)), and u is a vector containing all the parameters to be estimated, namely the depths d_p and weights w_{mp} for all p and m, the emittances η_f and positions l_f for all f, and the specular parameters ρ and s. Let N denote the dimension of u; we typically assume N ≈ 10^5. The foreground and background are differentiated by thresholding the input images. In other words, constantly dark pixels throughout the images are considered to be in the background, which is not used for the estimation, and each foreground pixel has its own unique identifier p. Even if the p-th pixel is in the foreground, we exclude the error term r_{fp} from the cost function if the pixel is saturated or lies in a shadow in the f-th image; i.e., all components of the pixel must be more than 0 and less than 255 when the intensity is 8 bits. For each p-th pixel, the surface normal n_p is calculated by

n_p = N[ (x_{Rp} − x_{Lp}) × (x_{Tp} − x_{Bp}) ]   (7)

where Rp, Lp, Tp, and Bp indicate the indices of the pixels to the right, left, top, and bottom of the p-th pixel, respectively. However, when these pixels are outside the boundary of the object foreground, Rp, Lp, Tp, or Bp indicates p.
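A hedged sketch of the per-pixel normals of Eq. (7) and of the mask that keeps only unsaturated, non-shadowed residual terms in Eq. (6). The wrap-around treatment of the image border and the array layout are simplifying assumptions; the paper instead substitutes the pixel itself for missing neighbours.

```python
import numpy as np

def surface_normals(X):
    """Eq. (7): X is an (H, W, 3) array of 3-D points x_p, one per pixel."""
    Rn = np.roll(X, -1, axis=1)   # right neighbour x_Rp
    Ln = np.roll(X, 1, axis=1)    # left neighbour  x_Lp
    Tn = np.roll(X, 1, axis=0)    # top neighbour   x_Tp
    Bn = np.roll(X, -1, axis=0)   # bottom neighbour x_Bp
    n = np.cross(Rn - Ln, Tn - Bn)
    return n / np.maximum(np.linalg.norm(n, axis=2, keepdims=True), 1e-12)

def valid_residual(m_fp):
    """True where the observed 8-bit RGB pixel is neither saturated nor in shadow,
    i.e. all components strictly between 0 and 255; only such terms r_fp enter Eq. (6)."""
    return np.all((m_fp > 0) & (m_fp < 255), axis=-1)
```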
2.2 Optimization Method
To achieve an efficient search for the optimal parameter, we first describe the basic idea of the Levenberg-Marquardt (L-M) method, and then describe its efficient implementation by use of the preconditioned conjugate gradient (PCG) method. The selection of the initial parameter vector is discussed in the following section. Letting u_k be the search vector at the k-th iteration, the L-M process is as follows:

u_{k+1} = u_k − (H_k + μ_k I)^{-1} ∇E   (8)

where ∇E is the gradient of the cost function E, and H_k is the Hessian matrix of E, that is, H_k := (∂²E/∂u_i ∂u_j). These are evaluated at u_k, and μ_k is a constant for stabilization. For each iteration of the L-M process, we have to solve the large-scale linear equation system (H_k + μ_k I) q = ∇E. Since the coefficient matrix is sparse, the PCG method is suitable. In this method, solving q so as to satisfy A q = b is equivalently transformed into the minimization of f(q) := (1/2) q^T A q − b^T q, yielding the following process (the symbols α_k, β_k and d_k here are not the same as those in the reflection model):

q_k = { initial guess (k = 0);  q_{k−1} − α_{k−1} d_{k−1} (k > 0) },  where  α_k = arg min_α f(q_k − α d_k)   (9)

d_k = { C^{-1} g_0 (k = 0);  C^{-1} g_k + β_k d_{k−1} (k > 0) },  where  β_k = (g_k^T C^{-1} g_k) / (g_{k−1}^T C^{-1} g_{k−1})   (10)

and g_k = ∇f, evaluated at q_k.   (11)
Here, C is called a preconditioning matrix, which is an approximation of the coefficient matrix A such that C^{-1} g_k is easily obtainable. The structure of the Hessian matrix, or the coefficient matrix, becomes as follows:

C_k = [matrix illustration: dense top rows and left columns for the global and per-image parameters, bordering a banded, mostly empty bottom-right block for the per-pixel parameters]   (12)

when the parameters are ordered in such a manner that the former elements of u are the parameters independent of the position p, and the latter elements are the shape and reflection parameters for each p. It is important that the bottom-right part has a band structure and that there are numerous zero elements inside the band. The reason for this will be described later. The topmost and leftmost parts of the matrix are dense, but their heights and widths are small. Thus, although the size of the matrix is N × N, the required memory size is O(N), as is the computational complexity for approximating C_k^{-1} ∇E, which is obtained by a fixed number of iterations of the PCG process. Note that a naive method requires O(N²) memory and O(N³) computation, and is hardly applicable for large N.

Implementation. The matrix C_k is calculated as follows, based on the approximation of the Hessian matrix used in the Gauss-Newton algorithm:

C_k = Σ_{fp} J_{fp}^T J_{fp} + μ_k I   (13)

where

J_{fp} = ( ∂r_{fp}/∂u_1  ···  ∂r_{fp}/∂u_N ).   (14)
Fig. 1. Initial shape
Most elements in this Jacobian matrix are zero, because the residual vector r_{fp} is affected only by the three-dimensional positions and the weights of the p-th pixel and its direct neighbors, the lighting parameters of the f-th image, and several global parameters. Using this definition, it is easy to show that the Hessian matrix has the structure shown in Eq. (12). To avoid the search for μ_k required for each iteration, as proposed in the original L-M algorithm [16], we again use the PCG algorithm. Note that, if α_k in Eq. (9) is 1 and β_k in Eq. (10) is 0, then the PCG method is exactly the same as the L-M method. Instead, we fix μ_k and search for α_k and calculate β_k for each iteration. In other words, we use a two-layered PCG algorithm, where the upper layer minimizes E in Eq. (6), and the lower layer calculates C_k^{-1} ∇E for each (k-th) iteration of the upper layer. The preconditioning matrix C used for the upper layer is C_k, and, for the lower layer, we use the block-diagonalized version of C_k, which is constructed by simply omitting the off-diagonal blocks of C_k.
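The inner (lower-layer) solve can be sketched as a standard preconditioned conjugate gradient iteration following Eqs. (9)–(11); the sparse matrix and the block-diagonal preconditioner are represented here by callables, and the fixed iteration count, like all names below, is an illustrative choice rather than the authors' code.

```python
import numpy as np

def pcg(apply_A, b, apply_Cinv, n_iter=20):
    """Approximately solve A q = b, with A the matrix C_k + mu_k I of Eq. (13).
    apply_A(q) returns A q; apply_Cinv(g) applies the block-diagonal preconditioner."""
    q = np.zeros_like(b)
    g = apply_A(q) - b                   # gradient of f(q) = 1/2 q^T A q - b^T q
    d = apply_Cinv(g)
    delta = g @ d
    for _ in range(n_iter):
        Ad = apply_A(d)
        alpha = delta / (d @ Ad)         # exact line search for the quadratic f, Eq. (9)
        q = q - alpha * d
        g = g - alpha * Ad
        z = apply_Cinv(g)
        delta_new = g @ z
        d = z + (delta_new / delta) * d  # beta_k of Eq. (10)
        delta = delta_new
    return q
```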
2.3 Initial Parameters
It is normally necessary to prepare an initial parameter carefully to ensure that the nonlinear optimization converges successfully. Special devices or sophisticated algorithms might be used to obtain the initial parameters. However, since the optimization method described in Section 2.2 is fast and powerful, it can recover the parameters from a relatively crude initial estimation. The initialization method used herein is described below. We use a plane perpendicular to the optical axis of the camera as an initial shape (see Fig. 1). Even using such crude initial parameters, the shape quickly converges into an appropriate shape if the light positions are reasonable. In order to prepare the light positions, we use the Lambertian reflection property based on Ref. [17]. If a Lambertian surface is lit by a distant light source, it is observed as a three-vector m^T, which is described by the product of the intensity η and the direction l of the light, and the normal n and the albedo d (an RGB 3-vector) of the surface, as m^T = η l^T n d^T. By collecting the measurements m_{fp} at the p-th pixel in the f-th image, this becomes a matrix relation

[ m_{11}^T ··· m_{1P}^T ]     [ η_1 l_1^T ]
[    ⋮      ⋱     ⋮     ]  =  [     ⋮     ]  ( n_1 d_1^T  ···  n_P d_P^T ),   (15)
[ m_{F1}^T ··· m_{FP}^T ]     [ η_F l_F^T ]
or M = LN. Ideally, the measurement matrix M is easily constructed and decomposed into the product of two rank-3 matrices, which contain the light directions and the shape. Unfortunately, we could not retrieve the correct information directly from this decomposition, because the decomposition is not correct when the measurements contain specular reflections and/or shadows. Nor could we determine the distance between the light and the object. Moreover, M = (LX)(X^{-1}N) is also correct for an arbitrary nonsingular 3 × 3 matrix X, which contains the bas-relief ambiguity [12]. Even so, we can use this decomposition to prepare the initial light positions, which will lead to a correct solution. The decomposition is performed via singular value decomposition, even if specular pixels or pixels in shadow are included. Let the singular value decomposition be M = LN, where L = (u_1 u_2 u_3), N = diag(σ_1, σ_2, σ_3)(v_1 v_2 v_3)^T, and σ_1 ≥ σ_2 ≥ σ_3 ≥ 0. If the light source moves in front of the object, and the mean of the light positions is near the camera, the most significant singular vector v_1 tends to be the mean of all of the input images, and thus u_1 approaches a scalar multiple of (1, 1, ···, 1)^T, because all of the images are approximated by the summation of the mean (v_1) and the relatively small deviations (v_2 and v_3). The structure of u_1 implies that the ideal X has the form

    [ 0  0  ±1 ]
X = [ a  b   0 ],   (16)
    [ c  d   0 ]

if the object is at the origin of the coordinate system and the camera is on the z-axis. The sign of ±1 should be selected so that the light is always positioned between the object and the camera. Assuming the sign is positive, we examine the following candidates for X:

[ 0   0  1 ]   [ 0   0  1 ]
[ ±1  0  0 ] , [ 0  ±1  0 ].   (17)
[ 0  ±1  0 ]   [ ±1  0  0 ]

From the decomposition, only the direction of the light is obtained. Therefore, the positions are determined by projecting them onto a sphere or a flat plane. We then conduct the optimization for each candidate X, and select the best reconstruction. Note that the initial light positions only require qualitative correctness; they are then corrected quantitatively by the subsequent optimization process. Another possible initialization scheme is to reconstruct the shape by using a sophisticated method, such as those described in Refs. [4,6,8]. The computational cost of this scheme could be less than that using the planar initial shape. The other parameters are relatively trivial. The light intensities η_f can be assumed to be uniformly 1, if we move a single light bulb around the object. The albedo at the p-th point, (w_{1p}, w_{2p}, w_{3p})^T, can be prepared as the mean of all observations of the images, Σ_f m_{fp} / F. The specular weight w_{4p} can be chosen as 0, assuming that the specular region is relatively small compared with the entire image. The surface roughness ρ is chosen as −10, because it usually converges at approximately −10 in our experiments. The specular color s is chosen as (1, 1, 1)^T, which assumes that the color of the light source is white.
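A hedged NumPy sketch of this initialization: the rank-3 decomposition of the measurement matrix and the candidate matrices of Eq. (17). The exact arrangement of the candidates follows our reading of Eqs. (16)–(17) and should be treated as an assumption, as should every name below.

```python
import numpy as np

def initial_light_candidates(M):
    """M is the F x 3P measurement matrix of Eq. (15) (one row per image).
    Returns candidate F x 3 matrices whose rows approximate eta_f * l_f^T (up to scale)."""
    U, S, Vt = np.linalg.svd(M, full_matrices=False)   # assumes F >= 3
    L = U[:, :3]                                       # (u1 u2 u3); u1 ~ constant vector
    candidates = []
    base = np.zeros((3, 3))
    base[0, 2] = 1.0                # u1 feeds the z (towards-camera) component
    for s2 in (+1.0, -1.0):
        for s3 in (+1.0, -1.0):
            Xd = base.copy(); Xd[1, 0] = s2; Xd[2, 1] = s3   # first candidate form of (17)
            Xa = base.copy(); Xa[1, 1] = s2; Xa[2, 0] = s3   # second candidate form of (17)
            candidates.extend([L @ Xd, L @ Xa])
    # Only directions are recovered; positions are then obtained by projecting the
    # directions onto a sphere or a plane in front of the object, as described above.
    return candidates
```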
2.4 Incremental Estimation
A complicated reflection model can cause the estimation to be unstable, because such a model produces many local minima. To avoid local minima, we first use a coarse model with a limited number of parameters and then gradually upgrade the model and the estimation.

- Step 1: We assume that there is no specular reflection and that all light emittances are uniform. As a result, we only estimate the shape, diffuse weights, and light positions, without changing the specular reflection parameters and η_f.
- Step 2: We then add the specular reflection parameters to the set of estimated parameters. Note that we can assume that these steps adjust the ambiguity X described previously, although we do not explicitly have X as a parameter.
- Step 3: Finally, we estimate all of the parameters, including the light emittance for each image. This yields the final estimation result.

The required number of iterations differs for each step. We iterate the PCG process a predetermined number of times, typically 100 for Step 1 and 200 for Steps 2 and 3. If the number of iterations were instead determined by analyzing the change in the cost function value over the last few iterations, unnecessary iterations could be omitted or better accuracy attained.
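One possible way to implement this schedule is as a list of boolean masks over the parameter vector u together with the iteration budget of each step; the partition of u and the iteration counts follow the text, while the helper itself is only an illustrative sketch.

```python
import numpy as np

def incremental_schedule(n_params, idx_shape, idx_diffuse, idx_lights,
                         idx_specular, idx_emittance):
    """Return (active-parameter mask, PCG iteration budget) for Steps 1-3."""
    def mask(*index_sets):
        m = np.zeros(n_params, dtype=bool)
        for idx in index_sets:
            m[idx] = True
        return m
    return [
        (mask(idx_shape, idx_diffuse, idx_lights), 100),                      # Step 1
        (mask(idx_shape, idx_diffuse, idx_lights, idx_specular), 200),        # Step 2
        (mask(idx_shape, idx_diffuse, idx_lights, idx_specular,
              idx_emittance), 200),                                           # Step 3
    ]
```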
2.5 Detection and Correction of an Abnormal Estimation
Although the proposed method works well for most parameters, some parameters tend to converge far away from meaningful values, and as a result the entire estimation sometimes becomes meaningless. The specular reflection weight w_{4p} at edge pixels is particularly volatile, which causes the surface normals at the edge pixels to be incorrectly reconstructed and some light positions to be estimated far away from the other positions. This problem is caused by specular reflections, which means that Steps 2 and 3 are vulnerable. Thus, we added a procedure to avoid this problem:
(i) If a specular reflection weight is more than 100 times the median of the weights of the other pixels, it is corrected to the median value. If the value is negative, it is corrected to 0.
(ii) If the distance of a light from the object is greater than 100 times the median of the other distances, it is corrected to the median value.
This procedure often improves our estimation process. However, since this procedure is performed without checking the cost function value, the estimation process sometimes collapses. Thus, a more sophisticated approach would be to formulate the reconstruction problem as a quadratic program with several linear constraints, such as certain parameters not being less than 0, or to add regularization terms to avoid estimates that lie too far away.

2.6 Extensibility
The proposed method can be extended to deal with other image formation models, such as multiple light sources and/or reflection models that are more complicated than the Torrance-Sparrow model. The main difficulty in the implementation is the derivation of the Jacobian matrix J_{fp}. In addition, we can consider interreflections and cast shadows. Even though deriving the Jacobian is the main difficulty, it is straightforward to calculate the residual vector r_{fp} using computer graphics algorithms with respect to these effects.
3 Experiments

3.1 Experimental Setup
In order to validate the proposed method, we estimated the shape and reflectance properties of several objects, as well as the light positions, from several real images and numerically generated images. The real images were taken in a room, as shown in Fig. 2, where the only light source was a light bulb held by a human operator. Photographic images of several static objects were captured by a static camera while the light source was moving. We also used a set of images extracted from the Yale Face Database B [4]. The captured images were RGB color images with 8-bit resolution. We did not use a technique that is required for high dynamic range acquisition. As an evaluation criterion, we used the RMS error of the estimation, which is defined as

√( Σ_{fp} r_{fp}^T r_{fp} / (3M) ),   (18)

where M is the number of terms contained in the cost function, which is at most FP. The proposed method was almost completely validated for a simulation image set, where the RMS error approached 10^{-6}, which is an inevitable error because the input measurements were given in single-precision floating-point variables. For the real images, although we would like to compare the obtained shape with the true shape, we did not have ground-truth data. Therefore, for now, we evaluated the shape by comparing the obtained result and the real object from various viewpoints, and partially validated the proposed method based on the RMS error, Eq. (18). The quantitative evaluation is left as an important future study. For the light positions, we could compare the results with the ground-truth data, because the true light directions were available for one of our data sets and for Yale Database B. We also present several videos of the reconstruction results as supplemental material.

Fig. 2. Experimental setup in a darkened room
3.2 Experiments on Real Images
Wooden Figure: A total of 36 images were taken of this figure; three of them are shown in Figs. 3(a)-(c). Each image was of size 128 × 296, and there were 25,480 foreground pixels. The extrinsic parameters of the camera were not required for the proposed method, and the intrinsic parameters were simply constructed based on the image size and an approximation of the focal length, as follows:

    [ 1000    0   64  0 ]
P = [    0 1000  148  0 ].   (19)
    [    0    0    1  0 ]

Actually, we tested several focal lengths and chose the one that provided the best result. This procedure could be replaced with a camera calibration. We estimated 127,512 parameters, where the initial shape formed a flat plane, and the initial light positions formed another flat plane. The initial estimate for the weights is as shown in Fig. 3(d), which is the average of the input images, including Figs. 3(a)-(c), and the resulting diffuse weights are shown in Fig. 3(e).
Fig. 3. Images of the wooden figure: (a)(b)(c) examples of the image set, (d) mean of the input images, used as the initial weight, (e) estimated diffuse weights.

Fig. 4. Estimated shape of the wooden figure.
Fig. 5. Input and estimation result of the wooden figure: (a) input images, (b) reconstructed images, (c) diffuse components, (d) specular components.

Fig. 6. The object and the circular paths of the light.
Fig. 7. A toy model (a) Overall view and target region, (b) Estimated shape, (c) Input images, (d) Reconstructed images
The estimated shape is as shown in Fig. 4. We can confirm that a rotationally symmetric shape was successfully reconstructed without considerable noise. Figure 5 shows, from left to right, the input images, the reconstructed images, and the reconstruction of the diffuse and specular components, for two different images. The RMS error was approximately 5, which is 2% of the intensity range, and this error indicates that the model of Eq. (2) effectively approximated the input images. In addition, the diffuse and specular components were meaningfully separated. During this experiment, the light bulb traveled along several controlled circular paths, as shown in Fig. 6, even though that information was not provided to the optimization. By comparing the estimated positions with the controlled trajectory, the average error in the estimated direction was calculated to be 25 degrees. This is a rather poor result, even though the reconstructed shape does not seem to be greatly distorted. Intrinsically, the estimation of the light positions is ill-conditioned, since the specular intensity is a consequence of multiple factors, including the positions of the light, the curvature of the object, and the roughness of the surface.

Fig. 8. The other toy models: (a) overall views and target regions, (b) estimated shapes, (c) examples of the input images, (d) reconstructions.

Fig. 9. Input images of a human face.

Fig. 10. Estimated shapes of the human face.

Toy Models: Several vinyl models were also used to test the proposed method. For each model, thirty images were captured in a darkened room. The overall views of the objects are shown in Figs. 7 and 8, along with the estimated shapes. They also show (c) the input images and (d) the reconstructed images. We can confirm that the input images were well reconstructed by the proposed model, Eq. (2). We compared the reconstructed shapes with the real models from various viewpoints and confirmed that the shapes were successfully reconstructed.
Fig. 11. Estimated light positions of the human face (a) True light positions, (b) Estimated light positions
Human Face: Images extracted from the Yale Face Database B [4] were used to validate the proposed method. Figure 9 shows examples of the input images of subject #7 in the database, and Fig. 10 shows the estimated shape from a set of 43 images within Subset 4. The light positions are documented in the database, and the average estimation error of the light direction was 9.5 degrees with a standard deviation of 4.2 degrees. We did not use known light directions for our optimization.
4 Conclusions
In the present paper, we described a method for recovering shape, reflectance properties, and light positions that does not need any special devices other than a camera and a light source in a darkened room. For a set of numerically generated images, the method recovers almost the exact parameters. For real images, the method recovers satisfactory shapes. The method is based on the Levenberg-Marquardt algorithm combined with the preconditioned conjugate gradient algorithm for handling a large-scale nonlinear optimization problem. We used a three-step (coarse-model-to-fine-model) algorithm to increase the stability of the process. In addition, we do not need precise calibration or initialization based on special devices such as range finders, spherical mirrors, or robotic arms. The method is based on the Torrance-Sparrow model and the assumption that a single point light source is used, which might limit the applicability of the method. Future research will include the replacement of the original models with more flexible models and the use of multiple cameras.
References

1. Georghiades, A.: Incorporating the Torrance and Sparrow model of reflectance in uncalibrated photometric stereo. In: ICCV 2003, pp. 816–823 (2003)
2. Georghiades, A.: Recovering 3-d shape and reflectance from a small number of photographs. In: Eurographics Symposium on Rendering, pp. 230–240 (2003)
3. Sato, Y., Ikeuchi, K.: Reflectance analysis for 3d computer graphics model generation. CVGIP 58(5), 437–451 (1996)
4. Georghiades, A., Belhumeur, P., Kriegman, D.: From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Trans. on PAMI 23(6), 643–660 (2001)
5. Lensch, H.P.A., Kautz, J., Goesele, M., Heidrich, W., Seidel, H.P.: Image-based reconstruction of spatial appearance and geometric detail. ACM Trans. on Graphics 22(3), 1–27 (2003)
6. Hertzmann, A., Seitz, S.M.: Example-based photometric stereo: Shape reconstruction with general varying BRDFs. IEEE Trans. on PAMI 27(8), 1254–1264 (2005)
7. Mallick, S.P., Zickler, T.E., Kriegman, D.J., Belhumeur, P.N.: Beyond Lambert: Reconstructing specular surfaces using color. In: CVPR 2005, vol. 2, pp. 619–626 (2005)
8. Sato, I., Okabe, T., Yu, Q., Sato, Y.: Shape reconstruction based on similarity in radiance changes under varying illumination. In: ICCV (2007)
9. Mercier, B., Meneveaux, A., Fournier, A.: A framework for automatically recovering object shape, reflectance and light sources from calibrated images. IJCV 73(1), 77–93 (2007)
10. Yu, Y., Xu, N., Ahuja, N.: Shape and view independent reflectance map from multiple views. IJCV 73(2), 123–138 (2007)
11. Goldman, D., Curless, B., Hertzmann, A.: Shape and spatially-varying BRDFs from photometric stereo. In: ICCV 2005, pp. 230–240 (2005)
12. Belhumeur, P., Kriegman, D., Yuille, A.: The bas-relief ambiguity. IJCV 35(1), 33–44 (1999)
13. Boivin, S., Gagalowicz, A.: Image-based rendering of diffuse, specular and glossy surfaces from a single image. SIGGRAPH, 107–116 (2001)
14. Paterson, J., Claus, D., Fitzgibbon, A.: BRDF and geometry capture from extended inhomogeneous samples using flash photography. EUROGRAPHICS 24(3), 383–391 (2005)
15. Triggs, B., McLauchlan, P.F., Hartley, R., Fitzgibbon, A.W.: Bundle adjustment — a modern synthesis. In: Triggs, B., Zisserman, A., Szeliski, R. (eds.) ICCV-WS 1999. LNCS, vol. 1883, pp. 298–375. Springer, Heidelberg (2000)
16. Marquardt, D.W.: An algorithm for least-squares estimation of nonlinear parameters. SIAM J. Appl. Math. 11, 431–441 (1963)
17. Shashua, A.: Geometry and photometry in 3d visual recognition. Ph.D. thesis, Dept. Brain and Cognitive Science, MIT (1992)
A Probabilistic Cascade of Detectors for Individual Object Recognition

Pierre Moreels (1,2) and Pietro Perona (2)

(1) Ooyala Inc., Mountain View, CA 94040
(2) California Institute of Technology, Pasadena, CA 91125
[email protected], [email protected]
Abstract. A probabilistic system for recognition of individual objects is presented. The objects to recognize are composed of constellations of features, and features from a same object share the common reference frame of the image in which they are detected. Feature appearance and pose are modeled by probabilistic distributions, whose parameters are shared across features in order to allow training from few examples. In order to avoid an expensive combinatorial search, our recognition system is organized as a cascade of well-established, simple and inexpensive detectors. The candidate hypotheses output by our algorithm are evaluated by a generative probabilistic model that takes into account each stage of the matching process. We apply our ideas to the problem of individual object recognition and test our method on several data sets. We compare with Lowe's algorithm [7] and demonstrate significantly better performance.
1 Introduction
Recognizing objects in images is perhaps the most challenging problem currently facing machine vision researchers. Much progress has been made in the recent past in recognizing individual objects [2,7], while some groups have interpreted this task as a wide-baseline matching problem and register pairs of images to build a 3D model used for recognition [9]. However, plenty of progress still needs to be made to reach levels of performance comparable to those of the human visual system. Much of what we know is still a 'bag of tricks' – we need to understand better the underlying principles in order to improve our designs and take full advantage of what we can learn from the statistics of images. In this study we focus on recognition of individual objects in complex images (as opposed to categories as in the 'Pascal challenge'). Our goal is to produce a consistent probabilistic interpretation of the recognition system from Lowe [7], one of the most effective techniques we know for individual object recognition. The techniques we use are inspired by the work of Fergus [3] on the probabilistic 'constellation' model for object categories, and by the work of Fleuret and Geman [4] on coarse-to-fine searching. This work also extends the probabilistic study of Schmid [10], in particular with the introduction of geometrical filtering stages
and a geometric model, which allows us to perform object detection instead of simple image retrieval. The current study makes three novel contributions. First, we incorporate in our recognition algorithm a number of well-established, deterministic modules or 'atomic operations', arranged in a cascade in order to pursue the search for the best interpretation of a test image in a coarse-to-fine fashion. We start with 'statistical' global measurements and eliminate a great number of hypotheses with very little effort, before analyzing the remaining ones in greater detail. Second, we introduce a generative probabilistic model that evaluates the hypotheses taking into account each stage of their formation. Third, although this was not the main goal of our study, our experiments show that our new algorithm and probabilistic scoring model perform substantially better than a state-of-the-art detection system developed independently [7]. Section 2 introduces our generative model. Section 3 describes the coarse-to-fine process used to generate hypotheses and sets of feature assignments. Section 4 explains the probabilistic model used to assign a score to the hypotheses. Section 5 presents and discusses results, and Section 6 contains our conclusions.
2 Generative Model

2.1 Object Recognition Scenario
Our target scenario consists of recognizing individual objects in complex images, similarly to [7] - see Fig. 1. We assume that a number of known objects have been gathered. We collect one or a few images of these objects; the images collected for each object form the model of this object. The set of models forms our database. On the other hand, we are given a query image, which is the photograph of a complex test composition containing some of the known objects - we call this image the test image. In addition to some of the known objects, the test image might contain unknown objects, i.e. objects not included in our training set, as well as background clutter. Our goal is to identify the known objects present in the test composition, along with their pose, i.e. position, orientation and scale.
Fig. 1. Generative model for database and test image. The grey-shaded nodes indicate the variables of the recognition system that are directly observable.
2.2 Modeling Object Images as Collections of Features
The objects in the database and in the test image are represented as a spatially deformable collection of parts, represented by features [3,7]. In this paper, features are characterized by both pose — i.e. feature location, scale, local orientation, and any other geometry-based measurement — and appearance, i.e. local image texture near the feature. This is in contrast with recognition methods based only on pose — e.g. Fleuret [4] — or on appearance only (‘bags of features models’, e.g. [5]).
3 Hypothesis Generation

3.1 Test Image and Models
In this work we use as object features the popular combination of the multi-scale difference-of-Gaussians detector and the SIFT descriptor proposed by Lowe [7], although a few other options are equally good [8]. We call database of features, and denote by M, the set of features extracted from images of known objects, and denote by F the set of features extracted from the test image. The known objects, M in number, are indexed by k and denoted by m_k. The indices i and j are used respectively for test features and database features: f_i denotes the i-th test feature, while f_j^k denotes the j-th feature from the k-th object. The number of features detected in images of object m_k is denoted by n_k; for the M known objects, these cardinalities form the vector n = (n_1 ... n_M). Therefore, M is a set of sets of features: M = {f_j^k}_{j=1...n_k, k=1...M}. Note: throughout this paper, bold notation denotes vectors. Each feature is described by its pose and its appearance: f_i = (X_i, A_i) for a test feature, f_j^k = (X_j^k, A_j^k) for an object feature. The pose information is composed of the position x, y, scale s and orientation θ in the image. We have X_i = (x_i, y_i, s_i, θ_i) for test features, and X_j^k = (x_j^k, y_j^k, s_j^k, θ_j^k) for database features. This pose information is measured relative to the standard reference frame of the image in which the feature has been detected. The appearance information associated with a feature is a descriptor characterizing the local image appearance, or texture, in a region of the image centered at this location. It is denoted by A_i for test feature f_i and A_j^k for database feature f_j^k. A hypothesis H is an interpretation of a test image, i.e. a subset of the known objects m = {m_k}_k together with their poses Θ = {Θ_k}_k. In this paper we consider affine transformations between database and test images; thus, an object's pose is the affine transformation that maps a database model onto the test image. The number of objects specified as present by the hypothesis is denoted by H. A hypothesis can be a no-object (H = 0 or H_0, all detections are clutter detections), single-object (H = 1) or multi-object hypothesis (H > 1). An assignment vector V carries complementary information to a hypothesis: it assigns each feature from the test image either to a database feature (we then call it a foreground feature) or to clutter (background feature). The i-th component V(i) = (k, j) denotes that the test feature f_i is matched to f_j^k, the j-th feature from the k-th object m_k. V(i) = 0 denotes the case when f_i is attributed to clutter.
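For concreteness, the bookkeeping implied by this notation can be sketched as follows (in Python, whereas the paper's implementation is in Matlab). Field names and types are illustrative assumptions, not the authors' data structures.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class Feature:
    pose: Tuple[float, float, float, float]    # X = (x, y, scale s, orientation theta)
    appearance: List[float]                     # A: SIFT descriptor

@dataclass
class Hypothesis:
    # object index k -> 2x3 affine pose Theta_k mapping the model onto the test image
    objects: Dict[int, List[List[float]]] = field(default_factory=dict)

# Assignment vector V: V[i] = (k, j) matches test feature f_i to model feature f_j^k;
# None stands for the paper's V(i) = 0, i.e. f_i is attributed to clutter.
AssignmentVector = List[Optional[Tuple[int, int]]]
```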
Fig. 2. Sequence of steps for the recognition process: feature extraction from the test image, indexing of the features F in the database M (Sect. 3.2, 4.2), coarse Hough transform (Sect. 3.2, 4.3), PROSAC (Sect. 3.2), and combination of the single-object hypotheses with a greedy method (Sect. 3.4); hypotheses can be rejected after each stage, and the final interpretation is accepted or rejected.
3.2 Building Blocks of the Hypotheses Filtering (Figure 2)
Given a test image, our goal is to come up quickly with a likely explanation H. Since there are very many hypotheses to be considered, our strategy is to exclude as many as possible from consideration at an early stage [4]. We choose a sequence of d detectors, cascaded from coarse to fine resolution, that use inexpensive tests on the image features. Each detector narrows down the set of possible explanations to a smaller number, which are explored in greater detail by the next detector. After each detector, we update the probabilities of the possible explanations of the test image (cf. Section 4).
Model voting. The first screening of the hypothesis space is done by searching the features database for candidate matches (f_i, f_j^k) to the features f_i extracted from the test image. This indexing is based on appearance only. The observed variable N̄ = (N_1 ... N_M) indicates how many test features were associated with database features of each object. Only the objects that collected a sufficient number of matches, defined by a threshold T^k_votes, are considered in the subsequent steps.
Coarse Hough transform. We use the Hough transform to enforce pose consistency amongst candidate feature matches. The features encode location, orientation and scale, thus a single candidate match (f_i, f_j^k) characterizes a similarity transform from model to test image. For each known object, the 4-dimensional Hough space of similarity transform parameters is discretized into coarse bins (one table of bins for each known object), and the candidate matches are hashed into these bins. Each bin in Hough space can be considered to be a single-object hypothesis with a coarsely defined pose. The choice of a coarse discretization makes the exploration of the Hough space fast. Besides, the coarse discretization makes the boundary-related hashing issues less severe than in [7]. The variable Ñ = {N_k^b}_{k,b} denotes the number of candidate matches falling in each (object, pose) bin, where k indexes the object and b the pose bin. Only the combinations of object and pose that collected a sufficient number of matches, defined by a threshold T^k_hough, are considered in the subsequent steps.
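The two screening steps can be sketched as follows, assuming appearance-based candidate matches have already been retrieved and reusing the hypothetical Feature fields from the earlier sketch; the bin widths are illustrative, since the paper does not specify them.

```python
import numpy as np
from collections import defaultdict

def model_votes(candidate_matches, num_objects):
    """N_bar: how many candidate matches each known object k collected."""
    votes = np.zeros(num_objects, dtype=int)
    for i, (k, j) in candidate_matches:      # test feature i matched to feature j of object k
        votes[k] += 1
    return votes

def hough_bins(candidate_matches, test_feats, db_feats,
               bin_xy=64.0, bin_logs=1.0, bin_theta=np.pi / 6):
    """N_tilde: hash each candidate match into a coarse 4-D similarity-transform bin."""
    bins = defaultdict(list)                  # key: (object k, bin b); value: matches in the bin
    for i, (k, j) in candidate_matches:
        fi, fj = test_feats[i], db_feats[k][j]
        s = fi.scale / fj.scale               # relative scale of the similarity transform
        dtheta = (fi.theta - fj.theta) % (2 * np.pi)
        c, sn = np.cos(dtheta), np.sin(dtheta)
        # translation that maps the model feature onto the test feature under (s, dtheta)
        tx = fi.x - s * (c * fj.x - sn * fj.y)
        ty = fi.y - s * (sn * fj.x + c * fj.y)
        b = (int(tx // bin_xy), int(ty // bin_xy),
             int(np.log2(s) // bin_logs), int(dtheta // bin_theta))
        bins[(k, b)].append((i, (k, j)))
    return bins
```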
[Figure 3: log-scale plots of the number of hypotheses (left) and of the processing time in seconds (right) after each stage (prior, model voting, coarse Hough transform, PROSAC, hypotheses merging and probabilistic scoring), showing the median and the 25th/75th and 5th/95th percentiles.]
Fig. 3. (left panel) Number of relevant hypotheses after each stage of the coarse-to-fine process. (right panel) Computation time (Matlab implementation on a 3 GHz machine).
Generation of single-object hypotheses using PROSAC. Our outlier rejection stage uses the PROSAC algorithm [1]. PROSAC is similar to the popular RANSAC algorithm in that it repeatedly samples seed subsets, fits a model, and selects the model that obtains the largest consensus set. However, in PROSAC the sampling stage gives higher priority to the tentative correspondences with the highest quality in terms of similarity between descriptors. Each Hough bin that passed the previous tests is considered independently. For a given bin, random subsets of 4 matches (in our case) are repeatedly sampled from the set of candidate matches in this bin. A global affine pose is computed from each sample, and the consistency of all tentative correspondences with this pose is measured. The winning pose is the one that obtains the largest consensus set, i.e. the highest number of consistent correspondences. The following measure of pose consistency is used: the score contributions p_fg(f_i | f_j^k, H) and p_bg(f_i) (see Equations (11) and (13)) are computed for both alternatives, accepting the candidate correspondence as a true match or rejecting it and assigning the test feature to clutter. The candidate matches that satisfy p_fg(f_i | f_j^k, H) > p_bg(f_i) form the consensus set. The output of PROSAC is twofold. On the one hand, we obtain a partial assignment vector V, namely the winning consensus set. On the other hand, we obtain a single-object hypothesis H, i.e. the combination of the object in the largest consensus set and the geometric pose computed from it. Figure 3 shows that PROSAC is very efficient at reducing the number of hypotheses to be considered. We use it in conjunction with the Hough transform rather than as a single filtering stage for two reasons. First, the Hough transform is an efficient tool for selecting clusters of candidate matches with good consistency. Second, without the first outlier filtering performed by the Hough transform, the fraction of outliers would be too high and the number of iterations required by RANSAC prohibitive [7].
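A simplified sketch of this stage. Genuine PROSAC grows its sampling pool progressively over a quality-ordered list of correspondences; the loop below keeps that priority ordering but otherwise behaves like plain RANSAC, with the affine fit and the pose-consistency test of Equations (11) and (13) left as user-supplied callables.

```python
import random
import numpy as np

def prosac_like(matches, quality, fit_affine, is_consistent, n_iters=500, sample_size=4):
    """Estimate an affine pose from the candidate matches of one Hough bin.

    matches       : list of (test_feature, db_feature) pairs
    quality       : descriptor-similarity score per match (higher = better)
    fit_affine    : callable(sample_matches) -> affine pose
    is_consistent : callable(match, pose) -> bool, e.g. p_fg > p_bg
    """
    order = list(np.argsort(quality)[::-1])          # best matches first
    best_pose, best_consensus = None, []
    pool = sample_size                                # sampling pool, enlarged progressively
    for it in range(n_iters):
        if it % 10 == 0:                              # crude progressive growth of the pool
            pool = min(len(matches), pool + 1)
        sample_ids = random.sample(order[:pool], k=min(sample_size, pool))
        pose = fit_affine([matches[s] for s in sample_ids])
        consensus = [m for m in matches if is_consistent(m, pose)]
        if len(consensus) > len(best_consensus):      # keep the largest consensus set
            best_pose, best_consensus = pose, consensus
    return best_pose, best_consensus
```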
3.3 An Example (Figure 4)
The matching process explained in the previous sections is illustrated in Figure 4, on a small data-set that contains 31 images of 31 known objects. Panel a displays a test image; panel b shows the models of the three objects from the data-set that are present in it. Panels c and d show the results of the model voting stage and the Hough transform stage. Black horizontal bars show the thresholds T^k_votes and T^k_hough. Panels e-f-g show the manually labeled ground-truth bounding box (blue) and the predicted bounding box (yellow) for the most populated bins corresponding to the 3 models present in the test image (bins #1, 2, 4 in panel d). The bounding boxes predicted by the matches in the bin under consideration are shown in magenta. A transformation is 'correct' if it overlaps with the ground truth by more than 50%. In panel d, bin #4 (beer bottle) does not meet this criterion and appears 'incorrect'; its bar is therefore red instead of blue. Panels h, i show the results of the PROSAC stage and the scores of the remaining hypotheses. Note that PROSAC is very effective at removing incorrect hypotheses. Panels j, k, l show the remaining matches and predicted locations, for the same bins as in panels e, f, g. Note that the beer bottle now satisfies the requirement on overlap with the ground truth.
3.4 From Single-Object to Multiple-Object Hypotheses
The multiple single-object hypotheses obtained from the previous steps are finally combined into a final multi-object hypothesis H_f (and assignment vector V_f) using a greedy, maximum-likelihood-driven procedure. We start by merging the two hypotheses that obtained the highest scores: 'foreground' features from both hypotheses are declared foreground features in the merged hypothesis, and both objects are added. In case the objects overlap by more than 50%, only one instance is created. The merged hypothesis is accepted only if it scores higher than both individual hypotheses; otherwise we revert to the best previous hypothesis. This greedy process is repeated with all individual hypotheses. Note that a greedy approach was also used, e.g., by Leibe [6].
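A sketch of this greedy merging, assuming a scoring function that implements the probabilistic score of Equation (1) in Section 4; the merge and overlap helpers are placeholders.

```python
def merge_hypotheses(single_hyps, score, merge, overlap):
    """Greedily combine single-object hypotheses into one multi-object hypothesis.

    single_hyps : list of (hypothesis, assignment) pairs, one per surviving Hough bin
    score       : callable(hypothesis, assignment) -> probabilistic score (Eq. 1)
    merge       : callable(hyp_a, hyp_b) -> merged (hypothesis, assignment)
    overlap     : callable(hyp_a, hyp_b) -> fraction of overlap between the two objects
    """
    ranked = sorted(single_hyps, key=lambda ha: score(*ha), reverse=True)
    best = ranked[0]
    for candidate in ranked[1:]:
        if overlap(best[0], candidate[0]) > 0.5:
            continue                                   # same object instance: keep one copy
        merged = merge(best, candidate)
        # accept the merge only if it scores higher than both constituents
        if score(*merged) > max(score(*best), score(*candidate)):
            best = merged
    return best
```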
4 Probabilistic Interpretation of the Coarse-to-Fine Search
Our probabilistic treatment reflects the algorithmic steps described in Section 3.2 and illustrated in Figures 2-4. We develop a principled, probabilistic approach in order to improve upon the 'ad hoc' tuning of the detection system used, e.g., in [7]. Due to space restrictions, only the major steps of the calculations are included. In order to decide which detections should be accepted or rejected we need P(H, V | F, M), i.e. we want to rate combinations of hypotheses and assignment vectors in light of the features detected in the test image and in the database. We have P(H, V | F, M) = P(F, H, V | M) / P(F | M), where P(F | M) is a prior on feature observations which does not depend on H or V and can be omitted. The database of features M is acquired off-line, therefore we can also omit the condition on M.
[Figure 4, panels a-l: the test image and the three matched models; bar plots of votes per model after the model voting stage (with the thresholds T^k_votes), of votes per Hough bin after the Hough transform (with the thresholds T^k_hough), and of the scores of the hypotheses remaining after PROSAC, with bars marked as correct or incorrect sets of matches for the fish, teddy bear and beer bottle; plus the ground-truth and predicted bounding boxes before and after PROSAC.]
Fig. 4. Example of matching process. See Section 3.3 for details about the various panels.
We now examine P(F, H, V), which we will call the score. Using the additional variables N̄ and Ñ defined in Section 3.2 (these variables are deterministic functions of F, H, V, M), we obtain

P(F, H, V) = P(F, H, V, Ñ, N̄) = P(F | V, Ñ, N̄, H) · P(V | Ñ, N̄, H) · P(Ñ | N̄, H) · P(N̄ | H) · P(H)    (1)

4.1 Prior P(H)
P(H) is a prior on all possible coarse hypotheses. It contains information on which objects are most likely present, together with their most probable pose. Let H = ((m_1, Θ_1) ... (m_H, Θ_H)) be a hypothesis. Conditioning on the number H of known objects present in this hypothesis, we decompose P(H) into

P(H) = P((m_1, Θ_1) ... (m_H, Θ_H), H)    (2)
     = ∏_{1≤i≤H} P(Θ_i | m_i) · P(m_1 ... m_H | H) · P(H)    (3)
where we assumed mutual independence between the poses of the objects present in the test image. P(Θ_i | m_i) is taken uniform over the image, P(m_1 ... m_H | H) = 1 / (M choose H), and P(H) is modeled by a Poisson distribution.

4.2 Model Votes P(N̄ | H)

P(N̄ | H) predicts the number of features N_1 ... N_M that will be associated with each model during the model voting phase (this is a 'bag of features' model). We assume that the models are independent of each other and independent of the background, therefore

P(N̄ | H) = ∏_{k=1}^{M} P(N_k | H)    (4)

The numbers of correct detections N_k^1 and spurious detections N_k^0 are hidden variables that satisfy N_k^0 + N_k^1 = N_k (if model m_k is not present in H, we have N_k^0 = N_k). The 'stable' features (N_k^1 in number) originate from the database features (n_k in number for model m_k), each detected in the test image with probability p_det. A natural model for N_k^1 is a binomial distribution P(N_k^1 | H) = B(N_k^1 | n_k, p_det). We take p_det = 0.1, which is consistent with the results from [7,8]. The 'spurious' matches are caused by database features that, by coincidence, look like a feature from the test image. We assume that database features generate such matches with probability p_stray = 0.8. With this model, N_k^0 also follows a binomial distribution, P(N_k^0 | H) = B(N_k^0 | n^0, p_stray). Finally, since N_k^0 + N_k^1 = N_k, we obtain

P(N_k | H) = Σ_{N_k^0 + N_k^1 = N_k} P(N_k^0 | H) · P(N_k^1 | H)   if m_k ∈ H,
P(N_k | H) = P(N_k^0 = N_k | H)                                    if m_k ∉ H.    (5)
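Equation (5) can be evaluated directly as a convolution of two binomials; the sketch below does this, with n0 standing in for the pool of database features that can generate stray matches (the paper leaves its exact value to the model).

```python
from math import comb

def binom_pmf(x, n, p):
    """B(x | n, p): probability of x successes among n Bernoulli trials."""
    if x < 0 or x > n:
        return 0.0
    return comb(n, x) * p**x * (1.0 - p)**(n - x)

def p_votes_given_h(N_k, n_k, n0, present, p_det=0.1, p_stray=0.8):
    """P(N_k | H) as in Eq. (5): correct votes ~ B(. | n_k, p_det), spurious ~ B(. | n0, p_stray)."""
    if not present:                        # m_k not in H: all N_k votes are spurious
        return binom_pmf(N_k, n0, p_stray)
    total = 0.0
    for n_correct in range(N_k + 1):       # N_k^1 = n_correct, N_k^0 = N_k - n_correct
        total += binom_pmf(n_correct, n_k, p_det) * binom_pmf(N_k - n_correct, n0, p_stray)
    return total
```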
4.3 Hough Votes on Pose: P(Ñ | N̄, H)
P(Ñ | N̄, H) models the spread of candidate matches in the Hough space. If all features were detected with exact position, scale and orientation, all correct matches would fall in the same bin, namely the bin that contains the pose parameters specified by H. Errors in the measurement of the features' location, orientation and scale cause these matches to spread to adjacent bins as well. We make the simplifying approximation that bin counts are independent of each other. This approximation carries the idea that what happens in one part of the test image is independent of what happens far away.

P(Ñ | N̄, H) = ∏_{k,b} P(N_k^b | N_k, H)    (6)
If m_k ∉ H, all candidate matches counted in N_k are spurious. For a given candidate match, the probability of hashing into any specific bin is uniform over the set of B possible bins, so the resulting distribution is binomial: P(N_k^b | N_k, H) = B(N_k^b | N_k, 1/B). When m_k ∈ H we need to take into account the correct model votes, N_k^1 in number, as well as the spurious ones, N_k^0 in number. The spurious votes are distributed evenly over all bins, as when m_k ∉ H. The correct votes also follow a binomial distribution, but do not get distributed evenly. Naturally, the bin expected to collect the highest number of votes is the bin that contains the pose specified by H; we denote it by b(H). If the observations and measurements did not contain any error, all votes from N_k^1 would index to b(H). We determined p_H^b, the probability that a correct candidate match indexes into bin b, statistically using the ground-truth data from [8]. We obtained p_H^b = 0.48 for the privileged bin b(H), and p_H^b = 0.06 for its nearest neighbors. Too few candidate matches indexed into second-order neighbors and farther bins to obtain statistically significant data; for these bins we set a fixed value p_H^b = 0.001. We obtain

P(N_k^b | N_k, H) = Σ_{N_k^0 + N_k^1 = N_k}  Σ_{bN_k^0 + bN_k^1 = N_k^b}  B(bN_k^1 | N_k^1, p_H^b) · B(bN_k^0 | N_k^0, 1/B)    (7)
                    · B(N_k^0 | n^0 · n_k / Σ_k n_k, p_stray) · B(N_k^1 | n_k, p_det)    (8)

where bN_k^0 and bN_k^1 denote the numbers of spurious and correct matches, respectively, that fall into bin b.
The inner summation considers all possible combinations of correct and incorrect matches in the bin (k, b) of interest. The outer summation does the same for all the matches that indexed to the model k under consideration.

4.4 Probability of Specific Assignments P(V | Ñ, N̄, H)
The assignment vector V specifies, for each image feature, whether it is associated with a database feature or considered a clutter detection. It does not take into account the features' pose and appearance (information included in F); therefore, this is a purely combinatorial probabilistic expression.
We denote by V_k^b the restriction of V to the bin (k, b), and by |V_k^b| the number of foreground features in V_k^b. We decompose P(V | Ñ, N̄, H) into

P(V | Ñ, N̄, H) = ∏_{k,b} P(V_k^b | N_k^b, N_k, H)    (9)

P(V_k^b | N_k^b, N_k, H) = 1 / binom(N_k^b, |V_k^b|) · B(|V_k^b| | n_k, p_det)    (10)

where binom(·,·) denotes the binomial coefficient. Equation (9) uses the independence of models and of bins. The first term in Equation (10) comes from the fact that all assignments V_k^b with the same number of candidate matches |V_k^b| have the same probability. The second term is identical to P(N_k^1 | H) in Section 4.2, with |V_k^b| in place of N_k^1.

4.5 Pose and Appearance Consistency P(F | V, Ñ, N̄, H)
This term compares the feature values predicted in the test image by (H, V) with the values actually observed. We make the assumption that, conditioned on the reference frame defined by an object's pose in the test image, the test features attributed to this object are independent of each other. This is a 'star model' where the center of the star is a hidden variable, namely the reference frame of the object. Compared to [3], which learns a joint distribution on the object parts, our assumption of conditional independence dramatically reduces the number of parameters one has to learn (linear instead of quadratic). We obtain

P(F | V, Ñ, N̄, H) = ∏_{i | V(i) ≠ 0} p_fg(f_i | f_V(i), H) · ∏_{i | V(i) = 0} p_bg(f_i)    (11)

where p_fg is the probability of the observed feature's appearance and pose if the candidate match is correct, whereas p_bg is the probability of its appearance and pose if the test feature is actually a clutter detection. We call these densities the 'foreground' and 'background' densities respectively. If V(i) ≠ 0, f_i and f_V(i) are believed to be caused by the same object part, in the test image and in a model respectively. Assuming independence between pose and appearance, denoted respectively by X and A, we have

p_fg(f_i | f_V(i), H) = p_fg,A(A_i | A_V(i), H) · p_fg,X(X_i | X_V(i), H)    (12)

If V(i) = 0, f_i is believed to be a clutter detection. Similarly to the 'foreground' treatment, we assume independence between pose and appearance, and decompose p_bg(f_i) into p_bg(f_i) = p_bg,A(A_i) · p_bg,X(X_i). The densities p_fg,A(A_i | A_V(i), H), p_fg,X(X_i | X_V(i), H), and p_bg,A(A_i) are taken to be Gaussian. Their parameters are learned from statistics on ground-truth correct and incorrect matches [8]. p_bg,X is taken as the product of a uniform density over the image for location, a uniform density over [0, 2π] for orientation, and a uniform density over [−4, 4] for log-scale. These density parameters are shared across features, instead of having one set of parameters for each feature as in [3].
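The following sketch evaluates the acceptance ratio R_i of Equation (13), given at the end of this section, for a single candidate match. It assumes the appearance and pose residuals have already been computed, treats the background appearance density as a broader Gaussian on the same residual (a simplification of the learned density p_bg,A), and uses placeholder sigmas in place of the parameters learned from ground truth in [8].

```python
import numpy as np

def gauss(res2, sigma, dim):
    """Isotropic Gaussian density for a squared residual res2 in `dim` dimensions."""
    return np.exp(-0.5 * res2 / sigma**2) / ((2.0 * np.pi) ** (dim / 2) * sigma**dim)

def match_ratio(app_res2, pose_res2, app_dim, pose_dim, img_w, img_h,
                sigma_fg_app=0.3, sigma_fg_pose=0.1, sigma_bg_app=0.6):
    """R_i = p_fg / p_bg for one candidate match (accept the match when R_i > 1).

    app_res2  : squared appearance residual |A_i - A_j^k|^2
    pose_res2 : squared residual between the observed pose X_i and the pose predicted
                by mapping X_j^k through the hypothesized transformation
    """
    p_fg = gauss(app_res2, sigma_fg_app, app_dim) * gauss(pose_res2, sigma_fg_pose, pose_dim)
    # background pose density: uniform over the image for location, over [0, 2*pi) for
    # orientation, and over [-4, 4] for log-scale
    p_bg_pose = (1.0 / (img_w * img_h)) * (1.0 / (2.0 * np.pi)) * (1.0 / 8.0)
    p_bg = gauss(app_res2, sigma_bg_app, app_dim) * p_bg_pose
    return p_fg / p_bg

# Example: a match with small residuals in a 640x480 test image
accept = match_ratio(app_res2=0.05, pose_res2=0.01, app_dim=128, pose_dim=4,
                     img_w=640, img_h=480) > 1.0
```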
[Figure 5: plots of the threshold T^k_votes on the minimum number of model votes (left) and of the threshold T^k_hough on the minimum population allowed in a bin (right).]
Fig. 5. (left) T^k_votes as a function of the number of features n_k in the model. The curve has a staircase appearance because the value of the threshold is always rounded. (right) Variation of the threshold T^k_hough as a function of the number of votes N_k for model k.
Finally, we accept or reject matches in a candidate assignment vector based on the likelihood ratio of the match being correct versus the feature being a clutter detection (the match is accepted if R_i > 1):

R_i = p_fg(f_i | f_j^k, H) / p_bg(f_i) = [ p_fg,A(A_i | A_j^k, H) · p_fg,X(X_i | X_j^k, H) ] / [ p_bg,A(A_i) · p_bg,X(X_i) ]    (13)

4.6 Choice of T^k_votes and T^k_hough (Figure 5)
T^k_votes is the minimum number of votes that the known object m_k must collect to be possibly included in a hypothesis. This threshold is chosen as the minimum value of N_k that satisfies R^k_votes = P(N_k | H) / P(N_k | H_0) > 1. Similarly, T^k_hough is the minimum number of candidate matches that a bin in Hough space must collect to be considered in further stages; we choose it as the minimum value of N_k^b that satisfies R^k_hough = P(N_k^b | N_k, m_k ∈ b) / P(N_k^b | N_k, H_0) > 1.
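The threshold selection can then be sketched as a simple scan, reusing the hypothetical p_votes_given_h helper from the earlier sketch; the clutter-only hypothesis H_0 corresponds to present=False.

```python
def t_votes(n_k, n0, max_votes=500, p_det=0.1, p_stray=0.8):
    """Smallest N_k for which P(N_k | m_k present) / P(N_k | H0) exceeds 1 (Section 4.6)."""
    for N_k in range(max_votes + 1):
        num = p_votes_given_h(N_k, n_k, n0, present=True, p_det=p_det, p_stray=p_stray)
        den = p_votes_given_h(N_k, n_k, n0, present=False, p_det=p_det, p_stray=p_stray)
        if den == 0.0 or num / den > 1.0:
            return N_k
    return max_votes
```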
5 Experimental Results
5.1 Setting and Results
The recognition method presented above was tested on the 'Giuseppe Toys' and the 'Home Objects' data-sets downloaded from the Caltech repository at www.vision.caltech.edu/archive.html. The images in both sets are challenging due to the variety of lighting conditions, occlusions, and the background texture (vegetation, concrete) that generates countless clutter detections. The method presented here was compared against Lowe's voting approach [7], which is a state-of-the-art method for detection of individual objects. The implementation of Lowe's system was provided by the company Evolution Robotics ('ERSP', www.evolution.com/products/ersp/). Figure 7 displays the ROC curves obtained with the two methods, both when the probabilistic score (Equation 1) is used as the threshold for the ROC and when the threshold is simply the number of matches in the hypothesis.
Fig. 6. Samples from the 'Giuseppe Toys' data-set. Detections from our system are overlaid in green and yellow on the test images. Yellow boxes denote objects identified by our system but missed by ERSP; green boxes denote objects identified by both systems.
[Figure 7: ROC curves plotting the detection rate against the false alarm rate per query image, for our system and for Lowe's, with either the probabilistic score or the number of votes as the threshold.]
Fig. 7. Comparative ROCs for the 'Giuseppe Toys' (left) and the 'Home Objects' (right) data-sets, for our system and Lowe's.
Regarding the detection rate, our slightly improved performance is due to less confusion in situations like panel g of Figure 4, since we use the PROSAC filtering stage. On the other hand, the false alarm rate is significantly improved in our system. This is due partly to the efficiency of PROSAC at rejecting pose-inconsistent hypotheses, and partly to our probabilistic model, which systematically checks the appearance and pose consistency of each candidate match with respect to the hypothesis being tested.
5.2 Performance in Specific Environments
In order to obtain a more fine-grained evaluation, we compared both systems on smaller data-sets targeted at specific environments. We first checked the performance on objects with printed text and graphics. Both the Evolution Robotics system and ours performed extremely well, since text and graphics have high contrast and generate SIFT features with good distinctiveness. Next, we investigated the performance on objects with very little texture and shiny surfaces (apple, plate, mug...). Both systems perform very poorly with such objects, as the front-end feature detector generates very few features, with poor localization specificity. More interestingly, both systems were compared in a 'clutter-only' environment. We collected images of 39 different textures, with two images per texture: one to be used as a training image, the other as a test image. The training and test images were taken at separate locations, so that in theory no match should be identified between the training set and the testing set. In other words, in these conditions any detection is a false alarm. The similarity of texture and feature appearance between training and test images might lead to confusion and makes this setup a challenging environment. However, pose consistency, used both by the ERSP system and by ours, should be able to resolve this confusion. On this data-set our system performed significantly better (see Figure 8-d, left).
[Figure 8, panels a-d: database and test images, examples of false alarms for both systems, and a summary table; hypotheses with more than 30 matches are marked as 'high confidence' hypotheses.]
Fig. 8. Comparison on cluttered images. a) Some examples of images from the training and test sets. b) Examples of false alarms identified by the ERSP system; the outline of the hypothesis' predicted object pose in the test image is displayed in green. c) Examples of false alarms for our system. d) Performance comparison summary for both testing conditions: when the training set contains the same textures as the test set, and when the training set consists of the 'Home Objects' images.
Last, both systems were compared on this same test, but with the training set consisting of the ‘Home Objects’ database. This is an easier task as the same texture is never present both in models and in the test images. Again, our system obtained significantly fewer false alarms than ERSP’s, with 12 false alarms for our system versus 30 for ERSP’s (Figure 8-d / right).
6 Conclusion
We presented a consistent probabilistic framework for individual object recognition. The search for the best interpretation of a given image is performed with a coarse-to-fine strategy. The early stages take into account only global counting variables that are inexpensive to compute. We benefit here from model voting and the Hough transform, which yield first estimates of the objects likely contained in the test image and of their poses. A large fraction of irrelevant hypotheses is discarded at a very low computational cost. Further steps refine the hypotheses and specify individual feature assignments. Pose consistency is efficiently enforced by the PROSAC estimator. The search procedure results in a small set of hypotheses whose probability is computed. The use of our probabilistic model allows us to further reduce the rate of false alarms. Moreover, the conditional densities used here are estimated from extensive measurements on ground-truth matches between images of real 3D objects. We tested this recognition method against a state-of-the-art system on multiple data-sets. Our method performed consistently better than Lowe's, and the performance was especially encouraging when the test images contained lots of clutter and texture, a frequent source of confusion for recognition systems.
References
1. Chum, O., Matas, J.: Matching with PROSAC - Progressive Sample Consensus. In: IEEE CVPR (2005)
2. Ferrari, V., Tuytelaars, T., Van Gool, L.: Simultaneous Object Recognition and Segmentation by Image Exploration. In: Pajdla, T., Matas, J. (eds.) ECCV 2004. LNCS, vol. 3021. Springer, Heidelberg (2004)
3. Fergus, R., Perona, P., Zisserman, A.: Object Class Recognition by Unsupervised Scale-invariant Learning. In: IEEE CVPR (2003)
4. Fleuret, F., Geman, D.: Coarse-to-fine Face Detection. IJCV 41, 85–107 (2001)
5. Grauman, K., Darrell, T.: The Pyramid Match Kernel: Discriminative Classification with Sets of Image Features. In: ICCV (2005)
6. Leibe, B., Leonardis, A., Schiele, B.: Combined Object Categorization and Segmentation with an Implicit Shape Model. In: Pajdla, T., Matas, J. (eds.) ECCV 2004. LNCS, vol. 3021. Springer, Heidelberg (2004)
7. Lowe, D.G.: Distinctive Image Features from Scale-Invariant Keypoints. IJCV 60(2), 91–110 (2004)
8. Moreels, P., Perona, P.: Evaluation of Features Detectors and Features Descriptors Based on 3D Objects. IJCV 73(3), 263–284 (2007)
9. Rothganger, F.: 3D Object Modeling and Recognition in Photographs and Video. PhD thesis, UIUC (2004)
10. Schmid, C.: A Structured Probabilistic Model for Object Recognition. In: CVPR, pp. 485–490 (1999)
Scale-Dependent/Invariant Local 3D Shape Descriptors for Fully Automatic Registration of Multiple Sets of Range Images

John Novatnack and Ko Nishino
Department of Computer Science, Drexel University
{jmn27,kon}@drexel.edu

Abstract. Despite the ubiquitous use of range images in various computer vision applications, little has been investigated about the size variation of the local geometric structures captured in the range images. In this paper, we show that, through canonical geometric scale-space analysis, this geometric scale-variability embedded in a range image can be exploited as a rich source of discriminative information regarding the captured geometry. We extend previous work on geometric scale-space analysis of 3D models to analyze the scale-variability of a range image and to detect scale-dependent 3D features, i.e. geometric features with their inherent scales. We derive novel local 3D shape descriptors that encode the local shape information within the inherent support region of each feature. We show that the resulting set of scale-dependent local shape descriptors can be used in an efficient hierarchical registration algorithm for aligning range images with the same global scale. We also show that local 3D shape descriptors invariant to the scale variation can be derived and used to align range images with significantly different global scales. Finally, we demonstrate that the scale-dependent/invariant local 3D shape descriptors can even be used to fully automatically register multiple sets of range images with varying global scales corresponding to multiple objects.
1 Introduction
Range images play central roles in an increasing number of important computer vision applications, ranging from 3D face recognition to autonomous vehicle navigation and digital archiving. Yet, the scale variation of the geometric structures captured in range images is largely ignored, or simply viewed as perturbations of the underlying geometry that need to be accounted for in subsequent processing. Although several methods for extracting scale-invariant or multi-resolution features or descriptors from range images, mostly based on smoothing the 3D coordinates or curvature values of the vertices, have been proposed in the past [1,2,3,4,5,6], they are prone to topological errors induced by the lack of canonical scale analysis, as discussed in [7]. Most importantly, they do not fully exploit the rich discriminative information encoded in the scale-variability of local geometric structures, which can in turn lead to novel computational methods for processing range images.
In this paper, we introduce a comprehensive framework for analyzing and exploiting the scale-variability of geometric structures captured in range images, based on canonical analysis of their geometric scale-space. We derive novel local 3D shape descriptors that naturally encode the inherent scale of local geometric structures. The geometric scale-space analysis of range images can be viewed as an extension of previous work on geometric scale-space analysis of 3D mesh models by Novatnack and Nishino [7]. The key idea underlying the newly derived geometric scale-space construction and analysis is that a range image is readily a 2D projection of the surface geometry of a 3D shape. We show that we may directly compute a geometric scale-space of a range image in this 2D projection with unique 2D operators defined on the surface geodesics. Based on the geometric scale-space analysis we detect scale-dependent geometric features, more specifically corners, and their inherent spatial extents. We show that we can encode the geometric information within the spatial extent of each feature in a scale-dependent local 3D shape descriptor; these descriptors collectively form a sparse hierarchical representation of the surface geometry captured in the range images. We demonstrate how this representation can be exploited to robustly register a set of range images with the same global scale. Furthermore, we show how we may define a local 3D shape descriptor that is invariant to the variation of the inherent local scale of the geometry, which can be used to register a set of range images with unknown or inconsistent global scales. We demonstrate the effectiveness of the novel scale-dependent/invariant local 3D shape descriptors by automatically registering a number of models of varying geometric complexity. These registration results can be used as approximate alignments which can then be refined using a global registration algorithm, together realizing fully automatic registration of multiple range images without any human intervention. We further demonstrate the effectiveness of our framework by fully automatically registering a set of range images corresponding to multiple 3D models, simultaneously. Note that previous work on fully automatic range image registration assumes that the range images capture a single object or scene [8,9,10,11,12,13]. We, on the other hand, show that the novel scale-dependent/invariant descriptors contain rich discriminative information that enables automatic extraction of individual objects from an unordered, mixed set of range images capturing multiple objects. To our knowledge, this work is the first to report such a capability.
2 Geometric Scale-Space of a Range Image
We first construct and analyze the geometric scale-space of a range image. This part of our framework can be viewed as an extension of the work by Novatnack and Nishino [7] to range images. Readers are referred to [7] for details. The key insight underlying the extension of this approach to range images is that each range image is already a dense and regular projection, mostly a perspective projection, of a single view of the surface of the target 3D shape. Furthermore, the distortion map used for accounting for the distortions induced by the embedding
in [7] is unnecessary, as the geodesics can be approximated directly from the range image itself. We build the geometric scale-space of a range image R : D → R^3, where D is a 2D domain in R^2, by first constructing a normal map N in the same domain by triangulating the range image and computing a surface normal for each vertex. As noted in [7], it is important to use the surface normals as the base representation, since directly smoothing 3D coordinates can result in topological changes, and higher-order derivative geometric entities such as the curvatures can be sensitive to noise. In order to construct a geometric scale-space of a range image that accurately encodes the scale-variability of the underlying surface geometry, we define all operators in terms of the geodesic distance rather than the Euclidean 3D or 2D distances. To efficiently compute the geodesic distance between two points in a range image, we approximate it with the sum of Euclidean 3D distances between vertices along the path joining the two points in the range image; given two points u, v ∈ D we approximate the geodesic distance d(u, v) as

d(u, v) ≈ Σ_{u_i ∈ P(u,v), u_i ≠ v} ||R(u_i) − R(u_{i+1})||    (1)
where P is the list of vertex points in the range image on the path between u and v. If the path between u and v crosses an unsampled point in the range image then we define the geodesic distance as infinity. We also parse the range image and detect depth discontinuities by marking vertex points whose adjacent points lie further than a predetermined 3D distance, and define the geodesic distance as infinity if the path crosses such points. When approximating geodesics from a point of interest outwards, the sum can be computed efficiently by storing the geodesic distances of all points along the current frontier and reusing these when considering a set of points further away. We construct the geometric scale-space of a base normal map N by filtering the normal map with Gaussian kernels of increasing standard deviation σ, where the kernel is defined in terms of the geodesic distance. The resulting geometric scale-space directly represents the inherent scale-variability of local geometric structures captured in range images and serves as a rich basis for further scale-variability analysis of range image data.
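A sketch of the approximation of Equation (1) for a path that has already been traced through the range image grid; how the path is chosen (e.g. a pixel line or a Dijkstra front) is left to the caller, and the depth-discontinuity threshold is an illustrative parameter.

```python
import numpy as np

def approx_geodesic(range_img, path, valid_mask, max_jump=0.05):
    """Sum the 3D Euclidean distances between consecutive vertices along `path`.

    range_img  : H x W x 3 array of 3D points R(u)
    path       : list of (row, col) pixels from u to v in the range image
    valid_mask : H x W boolean array, False at unsampled pixels
    max_jump   : 3D step length treated as a depth discontinuity (illustrative value)
    Returns np.inf if the path crosses an unsampled pixel or a discontinuity.
    """
    total = 0.0
    for (r0, c0), (r1, c1) in zip(path[:-1], path[1:]):
        if not (valid_mask[r0, c0] and valid_mask[r1, c1]):
            return np.inf
        step = float(np.linalg.norm(range_img[r0, c0] - range_img[r1, c1]))
        if step > max_jump:
            return np.inf
        total += step
    return total
```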
3 Scale-Dependent Features in a Range Image
We may detect geometric features, in our case corner points, and their associated inherent scale, in other words their (relative) natural support sizes, in the geometric scale-space of a range image. The 3D geometric corners are first detected at each discrete scale by applying a corner detector proposed for the geometric scale-space of a 3D model by Novatnack and Nishino [7]. For a point u in the normal map Nσ at scale σ, the corner response is computed using the Gram matrix M(u; σ, τ ). The Gram matrix is defined in terms of scale-normalized
first-order derivatives of the normal map in the horizontal (N_s^σ) and vertical (N_t^σ) directions, which themselves are the normal curvatures in these directions (see [7] for details):

M(u; σ, τ) = Σ_{v ∈ W} g(v; u, τ) [ N_s^σ(v)^2         N_s^σ(v) N_t^σ(v) ;
                                     N_s^σ(v) N_t^σ(v)  N_t^σ(v)^2 ]    (2)

where W is the local window to be considered, σ is the particular scale in the geometric scale-space representation, and τ is the weighting of the points in the Gram matrix. A set of corner points at each scale is detected by searching for spatial local maxima of the corner detector responses. Corners lying along edge points are pruned by thresholding the variance of the second-order partial derivatives. This results in a set of corner points at a number of discrete scales in the geometric scale-space. In order to determine the intrinsic scale of each corner, we then search for local maxima of the corner detector responses across the scales, as was originally proposed for the 2D scale-space [14]. The result is a comprehensive set of scale-dependent corners, where the support size of each corner follows naturally from the scale in which it was detected. Figure 1 shows the set of scale-dependent corners detected on two range images of a Buddha model. Note that the corners are well dispersed across scales, and that there are a large number of corresponding corner points at the correct corresponding scales.
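A sketch of the corner response of Equation (2) at one pixel. Finite differences of the normal map, multiplied by sigma, stand in for the scale-normalized derivatives, the Gaussian weight uses the pixel distance instead of the geodesic distance, and the scalar response (the smaller eigenvalue of M) is one common choice; the paper itself only specifies the matrix.

```python
import numpy as np

def corner_response(normal_map, u, sigma, tau, window=5):
    """Gram-matrix corner response at pixel u = (row, col), cf. Eq. (2).

    normal_map : H x W x 3 array of unit normals at scale sigma.
    tau        : standard deviation of the weighting g(v; u, tau).
    Assumes u lies at least window // 2 pixels away from the image border.
    """
    # finite-difference derivatives of the normal map, scaled by sigma as a
    # simple scale normalization
    Ns = sigma * (np.roll(normal_map, -1, axis=1) - np.roll(normal_map, 1, axis=1)) / 2.0
    Nt = sigma * (np.roll(normal_map, -1, axis=0) - np.roll(normal_map, 1, axis=0)) / 2.0
    r0, c0 = u
    h = window // 2
    M = np.zeros((2, 2))
    for dr in range(-h, h + 1):
        for dc in range(-h, h + 1):
            r, c = r0 + dr, c0 + dc
            g = np.exp(-(dr**2 + dc**2) / (2.0 * tau**2))    # weight g(v; u, tau)
            ns2 = float(Ns[r, c] @ Ns[r, c])
            nt2 = float(Nt[r, c] @ Nt[r, c])
            nst = float(Ns[r, c] @ Nt[r, c])
            M += g * np.array([[ns2, nst], [nst, nt2]])
    return float(np.linalg.eigvalsh(M)[0])    # smaller eigenvalue as the corner response
```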
4 Local 3D Shape Descriptors
Once we detect scale-dependent features via geometric scale-space analysis, we may define novel 3D shape descriptors that naturally encode the relevant local geometric structure. A wide variety of 3D shape descriptors have been proposed previously [15,16,17,18,19,20]. Many of these suffer from the limitation that they are sensitive to the sampling density of the underlying geometry, and that the size of their support region cannot be canonically determined. In our case, the associated inherent scale of each scale-dependent corner directly tells us the natural spatial extent (the support size) of the underlying local geometric structure. This information can then in turn be used to identify the size of the neighborhood of each corner that should be encoded in a local shape descriptor. At the same time, we construct dense and regular 2D descriptors that are insensitive to the resolution of the input range images.

4.1 Exponential Map
We construct both our scale-dependent and scale-invariant local 3D shape descriptors by mapping and encoding the local neighborhood of a scale-dependent corner to a 2D domain using the exponential map. The exponential map is a mapping from the tangent space of a surface point to the surface itself [21]. Given a unit vector w lying on the tangent plane of a point u, there is a unique geodesic Γ on the surface such that Γ(0) = u and Γ'(0) = w.
Fig. 1. Scale-dependent corners and scale-dependent local 3D shape descriptors computed based on geometric scale-space analysis of two range images. The scale-dependent corners are colored according to their inherent scales, with red and blue corresponding to the coarsest and finest scales, respectively. The scale-dependent local 3D shape descriptors capture local geometric information in the natural support regions of the scale-dependent features.
The exponential map takes a vector w on the tangent plane and maps it to the point on the geodesic curve at a distance of 1 from u, i.e. Exp(w) = Γ(1). Following this, any point v on the surface in the local neighborhood of u can be mapped to u's tangent plane, often referred to as the Log map, by determining the unique geodesic between u and v and computing the geodesic distance and the polar angle of the tangent to the geodesic at u in a predetermined coordinate frame {e1, e2} on the tangent plane. This ordered pair is referred to as the geodesic polar coordinates of v. The exponential map has a number of properties that are attractive for constructing a 3D shape descriptor; most importantly, it is a local operator. Although fold-overs may occur if this neighborhood is too large, the local nature of the scale-dependent and scale-invariant descriptors implies that this will rarely happen. In practice we have observed fold-overs on an extremely small number of features, mostly near points of depth discontinuities. Although the exponential map is not, in general, isometric, the geodesic distances of radial lines from the feature point are preserved. This ensures that corresponding scale-dependent corners will have mostly consistent shape descriptors among different views, i.e. different range images. In addition, because the exponential map is defined at the feature point, it does not rely on the boundary of the encoded neighborhood as harmonic images do [22].
4.2 Scale-Dependent Local 3D Shape Descriptor
We construct a scale-dependent local 3D shape descriptor for a scale-dependent corner at u whose scale is σ by mapping each point v in the neighborhood of u to a 2D domain using the geodesic polar coordinates G defined as
G(u, v) = (d(u, v), θ_T(u, v))    (3)
where, again, d(u, v) is the geodesic distance between u and v, and θ_T(u, v) is the polar angle of the tangent of the geodesic between u and v, defined relative to a fixed basis {e1, e2}. In practice we approximate this angle by orthographically projecting v onto the tangent plane of u and measuring the polar angle of the intersection point. The radius of the descriptor is set proportional to the inherent scale σ of the scale-dependent corner, so as to encode geometric information in the natural support region of each scale-dependent corner. After mapping each point in the local neighborhood of u to its tangent plane, we are left with a sparse 2D representation of the local geometry around u. We interpolate a geometric entity encoded at each vertex to construct a dense and regular representation of the neighborhood of u at scale σ. Note that this makes the descriptor insensitive to resolution changes of the range images. We choose to encode the surface normals from the original range image, rotated such that the normal at the center point u points in the positive z direction. The resulting dense 2D descriptor is invariant up to a single rotation (the in-plane rotation on the tangent plane). We resolve this ambiguity by aligning the maximum principal curvature direction at u with the horizontal axis e1 of the geodesic polar coordinates, resulting in a rotation-invariant shape descriptor. Once this local basis has been fixed we re-express each point in terms of the normal coordinates, with the scale-dependent corner point u at the center of the descriptor. We refer to this dense 2D scale-dependent descriptor of the local 3D shape as G_u^σ for a scale-dependent corner at u with scale σ. Figure 1 shows subsets of scale-dependent local 3D shape descriptors computed at scale-dependent corners in two range images of a Buddha model.
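A sketch of rasterizing such a descriptor: each neighbor is placed at its geodesic polar coordinates (Eq. 3) and its rotated normal is splatted into a fixed-size grid. The geodesic distances, tangent-plane angles and rotated normals are assumed to be precomputed, and the nearest-cell splatting stands in for the interpolation used in the paper.

```python
import numpy as np

def build_descriptor(neighbors, radius, grid_size=32):
    """Rasterize a scale-dependent descriptor G_u^sigma into a grid_size x grid_size image.

    neighbors : list of (d, theta, normal) with geodesic distance d <= radius, polar
                angle theta measured from the maximum-curvature direction e1, and a
                3-vector normal already rotated so that the normal at u is +z.
    radius    : support radius, proportional to the corner's inherent scale sigma.
    """
    desc = np.zeros((grid_size, grid_size, 3))
    weight = np.zeros((grid_size, grid_size))
    for d, theta, normal in neighbors:
        x = d * np.cos(theta) / radius             # geodesic polar -> Cartesian, in [-1, 1]
        y = d * np.sin(theta) / radius
        col = int((x + 1.0) * 0.5 * (grid_size - 1))
        row = int((y + 1.0) * 0.5 * (grid_size - 1))
        desc[row, col] += normal                   # nearest-cell splat of the normal
        weight[row, col] += 1.0
    filled = weight > 0
    desc[filled] /= weight[filled][:, None]        # average the splatted normals per cell
    norms = np.linalg.norm(desc, axis=2, keepdims=True)
    np.divide(desc, norms, out=desc, where=norms > 0)   # re-normalize to unit length
    return desc
```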
4.3 Scale-Invariant Local 3D Shape Descriptor
The scale-dependent local 3D shape descriptors provide a faithful sparse representation of the surface geometry in different range images when their global scales are the same or are known, e.g. when we know that the range images were captured with the same range finder. In order to enable comparison between range images that do not have the same global scale, we also derive a scale-invariant local 3D shape descriptor G̃_u^σ. We may safely assume that the scales of local geometric structures relative to the global scale of a range image remain constant as the global scale of the range image is altered. Note that this assumption holds as long as the geometry captured in the range image is rigid and does not undergo any deformation, for instance as it is captured with possibly different range sensors. We may then construct a set of scale-invariant local 3D shape descriptors by first building a set of scale-dependent local 3D shape descriptors and then normalizing each descriptor's size to a constant radius. Such a scale-invariant representation of the underlying geometric structures enables us to establish correspondences between a pair of range images even when the global scale is different and unknown.
5 Pairwise Registration
The novel scale-dependent and scale-invariant local 3D shape descriptors contain rich discriminative information regarding the local geometric structures. As a practical example, we show the effectiveness of these descriptors in range image registration, one of the most fundamental steps in geometry processing. In particular, we show how the scale-dependent local 3D shape descriptors form a hierarchical representation of the geometric structures that can be leveraged in a coarse-to-fine registration algorithm. We also show how the scale-invariant local shape descriptors can be used to establish correspondences and compute the transformation between a pair of range images with completely different global scales.

5.1 Similarity Measure
Since each descriptor is a dense 2D image of the surface normals in the local neighborhood, we may define the similarity of two local 3D shape descriptors as a normalized cross-correlation of surface normal fields based on angle differences,

S(G_u1^σ, G_u2^σ) = (1 / |A ∩ B|) Σ_{v ∈ A ∩ B} ( π/2 − arccos( G_u1^σ(v) · G_u2^σ(v) ) )    (4)

where A and B are the sets of points in the domains of G_u1^σ and G_u2^σ, respectively. Here the similarity measure is defined in terms of the scale-dependent descriptors, but the definition for the scale-invariant descriptors is the same, with G̃ substituted for G.
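A direct sketch of Equation (4) on two rasterized descriptors, assuming empty cells are flagged by zero normals; the scale-invariant version is identical once the two descriptors have been resampled to a common radius.

```python
import numpy as np

def descriptor_similarity(g1, g2):
    """S(G1, G2) of Eq. (4): average over the common support A ∩ B of pi/2 minus
    the angle between corresponding normals (higher means more similar).

    g1, g2 : H x W x 3 arrays of unit normals; all-zero cells mark empty bins.
    """
    valid = (np.linalg.norm(g1, axis=2) > 0) & (np.linalg.norm(g2, axis=2) > 0)
    if not np.any(valid):
        return 0.0
    dots = np.clip(np.sum(g1[valid] * g2[valid], axis=1), -1.0, 1.0)
    return float(np.mean(np.pi / 2.0 - np.arccos(dots)))
```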
5.2 Pairwise Registration with Scale-Dependent Descriptors
The hierarchical structure of the set of scale-dependent local 3D shape descriptors can be exploited when aligning a pair of range images {R1, R2} with the same global scale. Note that if we know that the range images were captured with the same range scanner, or if we know the metric of the 3D coordinates, e.g. centimeters or meters, we can safely assume that they have, or we can convert them to, the same global scale. Once we have a set of scale-dependent local 3D shape descriptors for each range image, we construct a set of possible correspondences by matching each descriptor to the n most similar ones (in our experiments n is set in the range of 5 to 10). The consistency of the global scale allows us to consider only those correspondences at the same scale in the geometric scale-space, which greatly decreases the number of correspondences that must later be sampled. We find the best pairwise rigid transformation between the two range images by randomly sampling this set of potential correspondences and determining the one that maximizes the area of overlap between the two range images, similar to RANSAC [23].
Fig. 2. (a) Aligning two range images with the same global scale using a set of scale-dependent local 3D shape descriptors. On the left we show the 67 point correspondences found with our matching algorithm, and on the right the result of applying the rigid transformation estimated from the correspondences. (b) Aligning two range images with inconsistent global scales using a set of scale-invariant local 3D shape descriptors. On the left we show the 24 point correspondences found with our matching algorithm, and on the right the results of applying the estimated 3D similarity transformation. Both the scale-dependent and -invariant descriptors realize very accurate and efficient automatic pairwise registration of range images.
However, rather than sampling the correspondences at all scales simultaneously, we instead sample in a coarse-to-fine fashion, beginning with the descriptors at the coarsest scale and ending with the descriptors at the finest scale. This enables us to quickly determine a rough alignment between two range images, as there are, in general, fewer features at coarser scales. For each scale σ_i we randomly construct N^{σ_i} sets of 3 correspondences, where each correspondence has a scale between σ_1 and σ_i. For each correspondence set C we estimate a rigid transformation T, using the method proposed by Umeyama [24], and then add to C all those correspondences (u_j, v_j, σ_j) for which ||T · R1(u_j) − R2(v_j)|| ≤ α and σ_j ≤ σ_i. Throughout the sampling process we keep track of the transformation and correspondence set that yield the maximum area of overlap. Once we begin sampling the next finer scale σ_{i+1}, we initially test whether the correspondences at that scale improve the area of overlap induced by the current rigid transformation. This allows us to quickly add a large number of correspondences at finer scales without drawing an excessive number of samples. Figure 2(a) shows the results of applying our pairwise registration algorithm to two views of the Buddha model. The number of correspondences is quite large and the correspondences are distributed across all scales. Although the result is an approximate alignment, since for instance slight perturbations in the scale-dependent feature locations may amount to slight shifts in the resulting
registration, the large correspondence set established with the rich shape descriptors leads to a very accurate estimate of the actual transformation.
5.3 Pairwise Registration with Scale-Invariant Descriptors
We may align a pair of range images {R1 , R2 } with different global scales using the scale-invariant local 3D shape descriptors, which amounts to estimating the 3D similarity transformation between the range images. Since we no longer know the relative global scales of the range images, we must consider the possibility that a feature in one range image may correspond to a feature detected at a different scale in the second range image. Our algorithm proceeds by first constructing a potential correspondence set that contains, for each scale-invariant local 3D shape descriptor in the first range image R1 , the n most similar in the second range image R2 . We find the best pairwise similarity transformation by applying RANSAC to this potential correspondence set. For each iteration the algorithm estimates the 3D similarity transformation [24] and computes the area of overlap. The transformation which results in the maximum area of overlap is considered the best. Figure 2(b) shows the result of applying our algorithm to two views of the Buddha model with a relative global scale difference of approximately 2.4. Despite the considerable difference in the relative global scales, we can recover the similarity transformation accurately without any initial alignments or assumptions about the models and their global scales.
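For reference, a compact sketch of the closed-form estimation of Umeyama [24] that is used inside such a sampling loop; it recovers scale, rotation and translation from a set of sampled 3D correspondences.

```python
import numpy as np

def umeyama_similarity(src, dst):
    """Least-squares similarity transform (s, R, t) such that dst ≈ s * R @ src + t.

    src, dst : N x 3 arrays of corresponding 3D points (N >= 3, non-degenerate).
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / src.shape[0]                  # cross-covariance of the centered sets
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:    # guard against reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_src = (xs ** 2).sum() / src.shape[0]
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
```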
6 Multiview Registration
Armed with the pairwise registration using scale-dependent/invariant descriptors, we may derive a fully automatic range image registration framework that exploits the geometric scale-variability. We show that the scale-dependent and scale-invariant descriptors can be used to register a set of range images both with and without global scale variations, without any human intervention. Most importantly, we show that we can register a mixed set of range images corresponding to multiple 3D models simultaneously and fully automatically (in all our experiments, we randomized the order of the range images to ensure that no a priori information is given to the algorithm).
6.1 Fully Automatic Registration
Given a set of range images {R1, ..., Rn}, our fully automatic range image registration algorithm first constructs the geometric scale-space of each range image. Scale-dependent features are detected at discrete scales and then combined into a single comprehensive scale-dependent feature set, where the support size of each feature follows naturally from the scale in which it was detected. Each feature is encoded in either a scale-dependent or a scale-invariant local shape descriptor, depending on whether or not the input range images have a consistent global scale.
We then apply the appropriate pairwise registration algorithm, presented in the previous sections, to all pairs of range images in the input set to recover the pairwise transformations. We augment each transformation with the area of overlap resulting from the transformation. Next we construct a graph similar to the model graph [9], where each range image is represented by a vertex and each pairwise transformation and area of overlap is encoded in a weighted edge. We prune edges with an area of overlap below a threshold. In order to construct the final set of meshes {M1, ..., Mm}, we compute the maximum spanning tree of the model graph and register the range images in each connected component using their estimated corresponding transformations. The alignment obtained by our algorithm is approximate, yet accurate enough to be directly refined by any ICP-based registration algorithm without any human intervention, resulting in a fully automatic range image registration algorithm.
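A sketch of the graph step, assuming the pairwise results are available as (i, j, transform, overlap) tuples. It prunes low-overlap edges and builds a Kruskal-style maximum spanning forest with a union-find structure; the overlap threshold is a free parameter.

```python
def maximum_spanning_forest(num_images, pairwise, min_overlap=0.1):
    """Group range images into the connected components of the pruned model graph.

    pairwise    : list of (i, j, transform, overlap) from the pairwise registrations
    min_overlap : edges with smaller overlap are pruned (the paper's threshold)
    Returns the kept edges (a maximum spanning forest) and the union-find parent array.
    """
    parent = list(range(num_images))

    def find(a):                         # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    edges = sorted((e for e in pairwise if e[3] >= min_overlap),
                   key=lambda e: e[3], reverse=True)        # largest overlaps first
    kept = []
    for i, j, transform, overlap in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                     # adding this edge does not create a cycle
            parent[ri] = rj
            kept.append((i, j, transform, overlap))
    return kept, parent
```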
6.2 Range Images with Consistent Global Scale
Figure 3 illustrates the results of applying our framework independently to 15 views of the Buddha model and 12 views of the armadillo model, with consistent global scales.
Fig. 3. Fully automatic registration of 15 views of the Buddha model and 12 views of the armadillo model using scale-dependent local descriptors. The first column shows the initial set of range images; note that no initial alignment is given and they are situated as is. The second column shows the approximate registration obtained with our framework, which is further refined with multi-view ICP [25] in the third column. Finally, a watertight model is built using a surface reconstruction algorithm [26]. The approximate registration obtained with our framework is very accurate and enables direct refinement with ICP-based methods, which otherwise require cumbersome manual initial alignment.
Fig. 4. Automatic registration of a set of range images of multiple objects: total 42 range images, 15 views of the Buddha model, 12 views of the armadillo, and 15 views of the dragon model. The scale-dependent local 3D shape descriptors contain rich discriminative information that enables automatic discovery of the three disjoint models from the mixed range image set. Note that the results shown here have not been postprocessed with a global registration algorithm.
Scale-dependent local shape descriptors were detected at 5 discrete scales, σ = {0.5, 1, 1.5, 2, 2.5}, in the geometric scale-space. The approximate registration results obtained with our matching method using scale-dependent local 3D shape descriptors are refined using multi-view ICP [25], and a watertight model is computed using a surface reconstruction method for oriented points [26]. We may quantitatively evaluate the accuracy of our approximate registration using the local 3D shape descriptors by measuring the displacement of each vertex in each range image from the final watertight model. The average distances over all the vertices in all range images for the armadillo and Buddha models, relative to the diameters of the models, were 0.17% and 0.29%, respectively. The results show that the scale-dependent local 3D shape descriptors provide rich information leading to an accurate approximate registration that enables fully automatic registration without any need for initial estimates. Next, we demonstrate the ability of our framework to simultaneously register range images corresponding to multiple 3D models. In order to automatically discover and register the individual models from a mixed set of range images, we prune the edges of the model graph that correspond to transformations with an area of overlap less than some threshold. In practice, we found this threshold easy to set, as our framework produces approximate alignments that are very accurate. Figure 4 summarizes the results. Note that no refinement using global registration algorithms has been applied to these results, in order to show the accuracy of our method; such refinement can easily be applied without any human intervention.
6.3 Range Images with Inconsistent Global Scale
Next we demonstrate the effectiveness of our framework for fully automatically registering a number of range images with unknown global scales. Figure 5 illustrates the results of applying our framework to 15 views each of the Buddha and dragon models. Each range image was globally scaled by a random factor between 1 and 4 (on average 2.21 and 2.52 for the Buddha and dragon, respectively).
Fig. 5. Automatic registration of 15 views each of the Buddha and dragon models, with a random global scaling from 1 to 4. For each model we visualize the initial set of range images and the approximate alignment obtained by our framework. Even with the substantial variations in the global scale, the scale-invariant local 3D shape descriptors enable us to obtain accurate (approximate) registrations without any assumptions about the initial poses.
Fig. 6. Automatic approximate registration of 42 randomly scaled range images consisting of 15 views of the Buddha model, 12 views of the armadillo and 15 views of the dragon model. Each range image was randomly scaled by a factor between 1 and 4. The scale-invariant local 3D shape descriptors enable automatic (approximate) registration of the 3 models from this mixed set of range images without any a priori information, and the result can be directly refined with any ICP-based registration algorithm to arrive at a set of watertight models.
after our approximate registration using scale-invariant local 3D shape descriptors were 1.6% for the dragon and 0.4% for the Buddha model. These results show that even with substantial variations in the global scale, our method successfully aligns the range images with high accuracy, which is good enough for subsequent refinement with ICP-based methods as in the examples shown in Figure 3 without any manual intervention. Finally, Figure 6 illustrates the results of applying our framework to 42 range images corresponding to three different models that have been randomly scaled by a factor between 1 and 4. Again, despite the significant scale variations, our scale-invariant representation of the underlying local geometric structures
enables us to fully automatically discover and register all three models simultaneously without any human intervention.
7 Conclusion
In this paper, we introduced a comprehensive framework for analyzing and exploiting the scale-variability of geometric structures captured in range images. Based on the geometric scale-space analysis of range images, we derived novel scale-dependent and scale-invariant local 3D shape descriptors. We demonstrated the effectiveness of exploiting scale-variability in these descriptors by using them for fully automatic registration of range images. Most importantly, we showed that the discriminative power encoded in these descriptors is extremely strong, so much so that they enable fully automatic registration of multiple objects from a mixed set of unordered range images. To our knowledge, this work is the first to report such a capability. We strongly believe that these results indicate that our framework, as well as the descriptors themselves, can lead to novel, robust, and efficient range image processing methods in a variety of important applications beyond registration.
Acknowledgment
The range images used in the experiments are courtesy of the Stanford Computer Graphics Laboratory (http://www-graphics.stanford.edu/data/3Dscanrep/). This work was in part supported by the National Science Foundation under CAREER award IIS-0746717.
References
1. Brady, M., Ponce, J., Yuille, A., Asada, H.: Describing Surfaces. Technical Report AIM-822, MIT Artificial Intelligence Laboratory Memo (1985)
2. Ponce, J., Brady, M.: Toward A Surface Primal Sketch. In: IEEE Int'l Conf. on Robotics and Automation, vol. 2, pp. 420–425 (1985)
3. Morita, S., Kawashima, T., Aoki, Y.: Hierarchical Shape Recognition Based on 3D Multiresolution Analysis. In: European Conf. on Computer Vision, pp. 843–851 (1992)
4. Akagunduz, E., Ulusoy, I.: Extraction of 3D Transform and Scale Invariant Patches from Range Scans. In: IEEE Int'l Conf. on Computer Vision and Pattern Recognition, pp. 1–8 (2007)
5. Li, X., Guskov, I.: 3D Object Recognition From Range Images Using Pyramid Matching. In: ICCV Workshop on 3D Representation for Recognition, pp. 1–6 (2007)
6. Dinh, H.Q., Kropac, S.: Multi-Resolution Spin-Images. In: IEEE Int'l Conf. on Computer Vision and Pattern Recognition, pp. 863–870 (2006)
7. Novatnack, J., Nishino, K.: Scale-Dependent 3D Geometric Features. In: IEEE Int'l Conf. on Computer Vision (2007)
8. Chen, C.S., Hung, Y.P., Cheng, J.B.: RANSAC-based DARCES: A New Approach to Fast Automatic Registration of Partially Overlapping Range Images. IEEE Trans. on Pattern Analysis and Machine Intelligence 21(11), 1229–1234 (1999)
9. Huber, D., Hebert, M.: Fully Automatic Registration of Multiple 3D Data Sets. Image and Vision Computing 21(7), 637–650 (2003)
10. Mian, A., Bennamoun, M., Owens, R.: From Unordered Range Images to 3D Models: A Fully Automatic Multiview Correspondence Algorithm. In: Theory and Practice of Computer Graphics, pp. 162–166 (2004)
11. Gelfand, N., Mitra, N., Guibas, L., Pottmann, H.: Robust Global Registration. In: Symposium on Geometry Processing, pp. 197–206 (2005)
12. Makadia, A., Patterson, A., Daniilidis, K.: Fully Automatic Registration of 3D Point Clouds. In: IEEE Int'l Conf. on Computer Vision and Pattern Recognition, vol. 1, pp. 1297–1304
13. ter Haar, F., Veltkamp, R.: Automatic Multiview Quadruple Alignment of Unordered Range Scans. In: IEEE Shape Modeling International, pp. 137–146 (2007)
14. Lindeberg, T.: Feature Detection with Automatic Scale Selection. Int'l Journal of Computer Vision 30, 77–116 (1998)
15. Stein, F., Medioni, G.: Structural Indexing: Efficient 3-D Object Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 14, 125–145 (1992)
16. Chua, C., Jarvis, R.: Point Signatures: A New Representation for 3D Object Recognition. Int'l Journal of Computer Vision 25(1), 63–85 (1997)
17. Johnson, A., Hebert, M.: Using Spin-Images for Efficient Multiple Model Recognition in Cluttered 3-D Scenes. IEEE Trans. on Pattern Analysis and Machine Intelligence 21(5), 433–449 (1999)
18. Sun, Y., Abidi, M.: Surface Matching by 3D Point's Fingerprint. In: IEEE Int'l Conf. on Computer Vision, vol. 2, pp. 263–269 (2001)
19. Frome, A., Huber, D., Kolluri, R., Bulow, T., Malik, J.: Recognizing Objects in Range Data Using Regional Point Descriptors. In: European Conf. on Computer Vision (May 2004)
20. Skelly, L.J., Sclaroff, S.: Improved Feature Descriptors for 3-D Surface Matching. In: Proc. SPIE Conf. on Two- and Three-Dimensional Methods for Inspection and Metrology, vol. 6762, pp. 63–85 (2007)
21. Do Carmo, M.: Differential Geometry of Curves and Surfaces. Prentice Hall, Englewood Cliffs (1976)
22. Zhang, D., Hebert, M.: Harmonic Maps and Their Applications in Surface Matching. In: IEEE Int'l Conf. on Computer Vision and Pattern Recognition (1999)
23. Fischler, M., Bolles, R.: Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. ACM 24(6), 381–395 (1981)
24. Umeyama, S.: Least-Squares Estimation of Transformation Parameters Between Two Point Patterns. IEEE Trans. on Pattern Analysis and Machine Intelligence 13(4), 376–380 (1991)
25. Nishino, K., Ikeuchi, K.: Robust Simultaneous Registration of Multiple Range Images. In: Asian Conf. on Computer Vision, pp. 454–461 (2002)
26. Kazhdan, M., Bolitho, M., Hoppe, H.: Poisson Surface Reconstruction. In: Eurographics Symp. on Geometry Processing, pp. 61–70 (2006)
Star Shape Prior for Graph-Cut Image Segmentation
Olga Veksler
University of Western Ontario, London, Canada
[email protected]
Abstract. In recent years, segmentation with graph cuts has been increasingly used for a variety of applications, such as photo/video editing, medical image processing, etc. One of the most common applications of graph cut segmentation is extracting an object of interest from its background. If there is any knowledge about the object shape (i.e. a shape prior), incorporating this knowledge helps to achieve a more robust segmentation. In this paper, we show how to implement a star shape prior in graph cut segmentation. This is a generic shape prior, i.e. it is not specific to any particular object but rather applies to a wide class of objects, in particular to convex objects. Our major assumption is that the center of the star shape is known; for example, it can be provided by the user. The star shape prior has an additional important benefit: it allows the inclusion of a term in the objective function which encourages a longer object boundary. This helps to alleviate the bias of a graph cut towards shorter segmentation boundaries. In fact, we show that in many cases, with this new term, we can achieve an accurate object segmentation with only a single pixel, the center of the object, provided by the user, which is rarely possible with standard graph cut interactive segmentation.
1 Introduction
In the last decade, two important trends in image segmentation have been the introduction of various user interaction techniques and the development of, and increased reliance on, global optimization methods. Interactive segmentation ([1, 2, 3, 4, 5, 6, 7]) became popular because in many domains user interaction is available, and it can greatly reduce the ambiguity of segmentation caused by complex object appearance, weak edges, etc. Global optimization ([5, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]), often formulated as a graph problem, became popular because it is more robust than local methods. In this paper, we address the segmentation of an object from its background in the graph cut framework [5, 7]. The advantage of this framework is that it guarantees a globally optimal solution for a wide family of energy functions [17], allows incorporation of regional and boundary constraints, and provides a simple user interaction interface. The user has to mark some pixels as object and some pixels as background. Such pixels are usually called "seeds".
Fig. 1. Star shape examples. The first three shapes are convex and therefore are stars with respect to any inside point as the center. The last three shapes are stars with respect to the specified center, although there are multiple other valid centers.
If one has prior knowledge about the shape of an object (or a "shape prior"), incorporating this knowledge makes segmentation more robust. A shape prior reduces ambiguity by ruling out all segments inconsistent with the prior. Using shape priors to improve segmentation has been investigated in the level set and curve evolution frameworks [18, 19, 20, 21]. Level set methods are usually not numerically stable and are prone to getting stuck in a local minimum. There has been some work on shape priors for graph cuts. The authors in [22] use an elliptical prior, which is implemented only approximately within an iterative refinement process. In [23], a prior which encourages the object to be a convex blob centered around a certain point is implemented. Another example of a blob-like prior is in [24]. The above shape prior assumptions are useful, but are quite restrictive on the shape of the object. In [25], an interesting "connectivity" prior is used, that is, they enforce the object region to be connected. In [26, 27], an object-specific shape prior is used. However, a shape model has to be registered to an image, which is challenging and computationally expensive. In this paper, we investigate a generic shape prior for graph cut segmentation. Our prior is generic because it is not based on the shape of a specific object class (like a "cow" class), but rather on simple geometric properties of an object, similar to the ellipse assumption in [22]. Our shape prior is much more general than an ellipse, though. We call it a star shape prior, defined as follows. A star shape is defined with respect to a center point c. An object has a star shape if for any point p inside the object, all points on the straight line between the center c and p also lie inside the object. Some star shapes are shown in Fig. 1. We assume that the user marks the star center. In many cases this information is enough to accurately segment the object; see Sec. 5. Star-shaped objects are abundant in the environment. A special case is a convex shape, and in this case an additional advantage is that the user can choose any point inside the object as the center, since a convex shape is a star with respect to any inside point. For many other shapes there are multiple valid centers, so, in general, the user does not have to be too careful in choosing the center. For example, for the heart shape in Fig. 1, most points, except the ones in approximately the top fifth of the shape, make a valid center. The advantage of using a generic star shape prior is that it can be directly incorporated into the optimization procedure; no expensive registration between the model and the image, as in [26, 27], is required. The disadvantage is that we can only guarantee that the extracted shape obeys the generic star shape constraint; we cannot guarantee that it will be a circle, a rectangle, etc.
An important positive side effect of the star shape prior is that we can include in the objective function a length-based "ballooning" term that encourages a larger object segment. This term helps to counterbalance the known bias of a graph cut towards small segments. It is not as aggressive as the previously used area-based "ballooning" terms. With the new term, it is frequently enough for a user to provide just the object center; additional information about the object may be unnecessary, making segmentation very undemanding in terms of user interaction. Note that [23, 24] also support a single-click segmentation with graph cuts. The paper is organized as follows. Sec. 2 reviews graph cut segmentation, Sec. 3 explains how to incorporate the star shape prior, Sec. 4 explains how to implement the bias toward larger segments, and Sec. 5 presents the experiments.
2 Graph Cut Segmentation
We now briefly review the graph cut segmentation algorithm of [5].
2.1 Graph Cut
Let G = (V, E) be a graph with vertices V and edges E. Each edge e ∈ E has a non-negative cost w_e. There are two special vertices called terminals: the source s and the sink t. A cut C ⊂ E is a subset of edges such that if C is removed from G, then V is partitioned into two disjoint sets S and T = V − S with s ∈ S and t ∈ T. The cost of the cut C is the sum of its edge weights:

|C| = \sum_{e \in C} w_e.
The minimum cut is the cut with the smallest cost. The max-flow/min-cut algorithm [28] can be used to find the minimum cut in polynomial time. We use the max-flow algorithm of [29], which has linear time performance in practice [29].
2.2 Object/Background Segmentation with a Graph Cut
Segmenting an object from its background is formulated as a binary labeling problem, i.e. each pixel in the image has to be assigned a label from the label set L = {0, 1}, where 0 and 1 stand for the background and the object, respectively. Let P be the set of all pixels in the image, and let N be the standard 4- or 8-connected neighborhood system on P, consisting of ordered pixel pairs (p, q) where p < q. Let f_p ∈ L be the label assigned to pixel p, and f = {f_p | p ∈ P} be the collection of all label assignments. The energy function commonly used for segmentation is as follows:

E(f) = \sum_{p \in P} D_p(f_p) + \lambda \sum_{(p,q) \in N} V_{pq}(f_p, f_q)    (1)
In Eq. (1), the first term is called the regional or data term because it incorporates regional constraints. Specifically, it measures how well pixels fit the object or background models. D_p(f_p) is the penalty for assigning label f_p to pixel p. The more likely f_p is for p, the smaller D_p(f_p) is. The object/background
models could be known beforehand, or modeled from the seeds provided by the user. To ensure that the seeds are segmented correctly, for any object seed p one sets D_p(0) = ∞, and for any background seed p one sets D_p(1) = ∞. The second sum in Eq. (1) is called the boundary term because it incorporates the boundary constraints. A segmentation boundary occurs whenever two neighboring pixels are assigned different labels. V_{pq}(f_p, f_q) is the penalty for assigning labels f_p and f_q to neighboring pixels. Most nearby pixels are expected to have the same label; therefore there is no penalty if neighboring pixels have the same label and a penalty otherwise. Typically, V_{pq}(f_p, f_q) = w_{pq} · I(f_p ≠ f_q), where I(·) is 1 if f_p ≠ f_q and 0 otherwise. To align the segmentation boundary with intensity edges, w_{pq} is typically a non-increasing function of |I_p − I_q|, where I_p is the intensity of pixel p. For example, the following is frequently used [5]:

w_{pq} = e^{-\frac{(I_p - I_q)^2}{2\sigma^2}}    (2)
Parameter λ ≥ 0 in Eq. (1) weights the relative importance of the regional and boundary terms; a smaller λ makes the regional terms more important. In [5], it is shown how to construct a graph such that the labeling corresponding to the minimum cut is the one optimizing the energy in Eq. (1). In general, [17] shows which binary energies can be optimized exactly with a graph cut.
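As a concrete illustration of the terms in Eqs. (1) and (2), the following is a minimal sketch (not the implementation used in the paper; the helper names boundary_weights and data_terms are assumptions) that computes the contrast weights on a 4-connected grid and hard data terms for user-marked seeds. A max-flow solver would then be used to minimize the resulting energy.

```python
import numpy as np

def boundary_weights(img, sigma):
    """w_pq = exp(-(I_p - I_q)^2 / (2 sigma^2)) for right and down neighbors."""
    img = img.astype(np.float64)
    w_right = np.exp(-(img[:, :-1] - img[:, 1:]) ** 2 / (2.0 * sigma ** 2))
    w_down  = np.exp(-(img[:-1, :] - img[1:, :]) ** 2 / (2.0 * sigma ** 2))
    return w_right, w_down

def data_terms(shape, obj_seeds, bkg_seeds, inf=1e9):
    """D_p(0) and D_p(1); seeds get an 'infinite' penalty for the wrong label."""
    D0 = np.zeros(shape)   # cost of assigning label 0 (background)
    D1 = np.zeros(shape)   # cost of assigning label 1 (object)
    for (r, c) in obj_seeds:
        D0[r, c] = inf     # an object seed must not be labeled background
    for (r, c) in bkg_seeds:
        D1[r, c] = inf     # a background seed must not be labeled object
    return D0, D1
```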
3 Implementing the Star Shape Prior
We now show how to implement the star shape prior in graph cut segmentation. We assume that the center of the star shape c is known. In interactive segmentation it is provided by the user. In certain restricted domains, such as medical imaging, it may be possible to calculate the center automatically. Consider Fig. 2(a). The center of the star shape is marked with a black dot c, and an example of a star shape is outlined in green. Some of the straight lines
Fig. 2. (a) A star shape is in green, its center is marked with a red dot c. Let p and q be pixels on the line passing through c. If p is labeled as the object, then q must also be labeled as the object; (b) Discretized lines are displayed with random colors.
passing through c are shown in black. Let 1 and 0 be the object and background labels, respectively. To get an object segment of a star shape, for any point p inside the object we have to ensure that every single point q on the straight line connecting c and p is also inside the object. This implies that if p is assigned label 1, then every point between c and p (on a straight line) is also assigned 1. The following pairwise shape constraint term S_{pq} implements this:

S_{pq}(f_p, f_q) = \begin{cases} 0 & \text{if } f_p = f_q, \\ \infty & \text{if } f_p = 1 \text{ and } f_q = 0, \\ \beta & \text{if } f_p = 0 \text{ and } f_q = 1. \end{cases}    (3)

Eq. (3) assumes that q is between c and p. A segmentation with a finite cost never violates the star shape constraints. Parameter β is discussed later. In the discrete implementation, c, p, and q are pixels. Observe that the shape constraint term S_{pq} in Eq. (3) does not need to be placed between all pairs of pixels p, q that lie on a line passing through c. It is enough to put an S_{pq} only between neighboring pixels p and q. Indeed, if the star shape is violated along some line passing through c, then there may be several pairs of pixels p and q (with q between c and p) that violate the constraint. There will be a pair of pixels p and q with the smallest distance between them, and these two pixels must be neighbors. Conversely, if the star shape constraints are not violated between all neighboring pixel pairs, they are not violated between pairs of pixels that are not neighbors, and therefore the shape is a star. Thus the neighborhood system for incorporating the star constraints is the same as for the boundary constraints, making the efficiency overhead of the shape prior negligible. Also note that using the star shape constraints is equivalent to adding a flux field [30]. In practice we have to discretize the set of lines passing through the center c. We consider all the lines that pass through the center pixel c and any other image pixel p. This is the finest possible discretization at the given image resolution. We have to be careful when implementing the shape constraints on discrete lines. Continuous lines intersect only at the center c. Discrete lines can "intersect" at more than one pixel. Consider Fig. 3(a). One discretized line is shown in red, and another line with a larger slope is shown in black. These two lines first intersect at pixel p, and then at pixels q and r. After pixel p, these two lines
Fig. 3. (a) the red and black discrete lines “intersect” at more than one point; (b) the black line is merged into the red line; (c) the red line is merged into the black line
become essentially indistinguishable at image precision. Therefore, at the first detected intersection pixel, in this case pixel p, we merge either the black line into the red one (Fig. 3(b)) or vice versa (Fig. 3(c)), chosen at random. Fig. 2(b) shows with random colors the discrete merged lines that are used for the star shape constraints (generated from a particular example). Closer to the center of the star shape more lines have to be merged together; therefore the density of lines is smaller there compared to the density towards the image borders. With the shape constraints, our energy function becomes:

E(f) = \sum_{p \in P} D_p(f_p) + \lambda \sum_{(p,q) \in N} V_{pq}(f_p, f_q) + \sum_{(p,q) \in N} S_{pq}(f_p, f_q)    (4)
In Eq. (4), the V_{pq} terms are as defined in Sec. 2, and the shape constraint S_{pq} terms are as defined in Eq. (3). According to [17], the energy in Eq. (4) can be optimized exactly with a graph cut if all the pairwise terms are submodular, where a binary function g of two variables is submodular if g(0, 0) + g(1, 1) ≤ g(1, 0) + g(0, 1). Both the V_{pq} and S_{pq} terms are clearly submodular, and, more interestingly, the S_{pq} terms are submodular for any finite choice of β. If we set β = 0, then the labeling minimizing the energy in Eq. (4) is the same as the one optimizing the standard energy in Eq. (1), except that the optimal object segment is star shaped. However, we can do more interesting things. Notice that β can be set to a negative value. This enables a bias towards a longer segmentation boundary, as explained in the next section.
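The sketch below is a simplification (not the paper's exact line-discretization and merging scheme) of how the star constraints of Eq. (3) can be attached to the grid: for every pixel p it picks one neighbor q that lies a rounded step closer to the center c, and the pairwise term forbids labelings in which p is object but q is background.

```python
import numpy as np

INF = 1e9

def star_edges(shape, center):
    """For each pixel p, return the neighbor q lying one step from p toward the center c.
    (At the center itself q == p and the constraint is vacuous.)"""
    H, W = shape
    cy, cx = center
    ys, xs = np.mgrid[0:H, 0:W]
    dy, dx = cy - ys, cx - xs                    # direction from p toward c
    norm = np.maximum(np.hypot(dy, dx), 1e-9)
    qy = ys + np.rint(dy / norm).astype(int)     # rounded unit step toward c
    qx = xs + np.rint(dx / norm).astype(int)
    return qy, qx

def star_term(fp, fq, beta):
    """S_pq of Eq. (3), assuming q is between c and p."""
    if fp == fq:
        return 0.0
    if fp == 1 and fq == 0:                      # p object but q background: forbidden
        return INF
    return beta                                  # fp == 0 and fq == 1
```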
4 Bias Toward Longer Segment Boundaries
4.1 Boundary Based Ballooning
A graph cut has a well-known bias towards shorter boundaries. When a reliable model for the object and background is available, the data term in Eq. (4) can be given a large weight relative to the boundary term by setting λ to a relatively small value. In this case, the bias to shorter boundaries is actually helpful to the segmentation process, since it serves to regularize the data terms. The data term can be known beforehand or it can be estimated from the seeds [7]. In the absence of a reliable model for the foreground/background, the data term has to be weighted low relative to the boundary term. In such a case, the bias towards shorter boundaries is not helpful. The extreme case is when nothing about the appearance is known, and therefore the only non-zero data terms are those for the background/foreground seeds. If the user marks only a few seeds, then in most cases the result will consist of most pixels assigned to the same label. By marking enough seeds, the correct segmentation can always be achieved, but the amount of user interaction may be excessive. If a user enters only a few seeds, estimating a reliable appearance model may be impossible. Furthermore, in the case when the background and foreground objects have similar appearance, it may be difficult or impossible to construct reliable appearance models. Consider the image in Fig. 4(a). The heart object
Fig. 4. (a) The heart object and its background have identical histograms; (b) our result: the seed point is in red, only one object seed pixel is provided by the user, and the border of the image is assumed to be the background
and its background have identical intensity histograms. If the appearance model is based only on the intensity histogram, it cannot distinguish between the foreground and the background. A user has to provide a significant number of seeds to segment this object. Notice that this image is not simple to segment with local algorithms because of intensity variation and weak boundaries. To prevent the shrinking bias of a graph cut in the absence of a strong data term, a bias towards a longer boundary is needed, or, in other words, a "ballooning" force. We can easily incorporate such a bias by setting β in Eq. (4) to a negative value. The last summation term in Eq. (4) is roughly proportional to the length of the boundary, and setting β to a negative value implies that longer segmentation boundaries decrease the energy function more than shorter boundaries.¹ The question is how to choose an appropriate β value, since the best value is likely to be different for each image. In the related work on ratio cycles and regions [9], [12], [31], a ratio energy E_ratio(f) = f_cost1 + β · f_cost2 is considered. Here f_cost1 is usually related to the cost of the object boundary, and f_cost2 is related to the object area or boundary length. A minimum ratio region is found by searching for the β s.t. the optimum value of E_ratio is 0. Usually binary search is used to find such a β, and the energy E_ratio is repeatedly optimized for different β values. The optimum region has the smallest normalized f_cost1, where normalization is by length or by area, depending on f_cost2. Typically f_cost1 is related to the contrast on the boundary, and therefore the region with the highest normalized contrast is found. Our energy in Eq. (4) is basically the same as the ratio energy. Ignoring the data terms, our energy is approximately f_weight + β · f_length, where f_weight is the sum of the w_pq weights on the boundary between the object and the background segments, and f_length is the length of the boundary, or the sum of all S_pq's. Therefore we could follow a strategy similar to the ratio regions by finding the highest contrast boundary. However, we observe that the highest contrast boundary may not be what the user wants. For example, if every image is placed in a "frame" with high contrast, this frame would always be extracted.
¹ This is due to the merging of discrete lines, discussed in the previous section.
Instead, we pursue a different strategy. We find the largest β such that the object segment is at least some minimum specified size, which we set to 100 in all the experiments. Let β1 < β2 < 0, and let f^1 be the labeling minimizing the energy in Eq. (4) with β = β1 and f^2 be the labeling minimizing the energy in Eq. (4) with β = β2. It is easy to see that f^1_weight ≥ f^2_weight and f^1_length ≥ f^2_length. That is, a smaller (i.e. more negative) value of β results in a larger object segment with a larger sum of boundary weights w_pq. The sum of the boundary weights w_pq is just the standard cost of a labeling in Eq. (1), without ballooning (and ignoring the data terms). Therefore our strategy is equivalent to searching for a minimum cost labeling (without ballooning) that gives an object segment of size at least 100. To find such a β, we use binary search, in the range from 0 to 50. To test the effectiveness of our approach, we do not use background/foreground models in any of the experiments in this paper. We set the pixels on the image border to be the background seeds, and the user provides the center of the star shape, which is the single object seed. We could also incorporate the data term, but it would make it harder to evaluate the effectiveness of the shape prior and the parameter search strategy. Fig. 4(b) shows our segmentation result on the image in Fig. 4(a). The smallest β that gave the first large object segment is −1.97. Notice that we do not need to rerun the graph cut algorithm from scratch when searching for the value of β. We can use the idea of [32] to reuse the flow computation from the previous run. Thus the overhead we pay for the search is minimal; on average, the algorithm is 2.3 times slower with the search for β than without.² The algorithm in [32], while performing well in practice, has no guarantees on the computational efficiency in general. The parametric maxflow algorithm of [31] does have theoretical guarantees, but unfortunately their method has certain restrictions that make it not applicable to our approach.
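The following is a hypothetical sketch of the search strategy just described. The function segment_size stands in for one graph-cut run with a given β and is an assumption, as is performing the binary search over the magnitude of β in the stated range; in the paper the flow is recycled between runs using [32] rather than recomputed from scratch.

```python
def find_beta(segment_size, min_size=100, lo=0.0, hi=50.0, iters=20):
    """Binary search for the weakest ballooning (beta closest to 0, beta <= 0)
    that still yields an object segment of at least min_size pixels."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if segment_size(-mid) >= min_size:   # enough ballooning: try a weaker value
            hi = mid
        else:                                # segment still too small: balloon harder
            lo = mid
    return -hi
```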
4.2 Relation to Other "Ballooning" Methods
To encourage a larger object segment, we balloon the boundary, i.e. we have a bias towards a longer boundary. Our ballooning is effectively equivalent to the ratio cycle method in [12]. The difference is that we work in the graph cut framework, and can easily implement user interaction, background models, and all the other advantages of graph cuts. Another difference is that instead of finding the "cycle" with the best ratio (or best average) contrast, we find a large enough "cycle" with a good contrast, which has certain advantages, as already mentioned above. There are other ways to add a "ballooning" force. For example, uniform area-based ballooning [33] can be used, which is implemented by adding a constant bonus to each pixel in the image if it is assigned the foreground label. The disadvantage of uniform ballooning is that the object region is not guaranteed to be connected, as it is with our method. A bigger disadvantage is that area ballooning is more aggressive than boundary ballooning, in the sense that it may prefer a larger region to a smaller, but also reasonable,
² The graph cut code for "recycling" the flow was downloaded from http://www.adastral.ucl.ac.uk/~vladkolm/software.html
cost region. This may also happen with length ballooning, but it is less likely. We can roughly show that if a region can be extracted with area ballooning, then it can be extracted with length ballooning, but not vice versa. Let E_length(f) = f_cost + β · f_length be the energy with length ballooning and E_area(f) = f_cost + β · f_area be the energy with area ballooning, where f_cost is the cost of the boundary related to its contrast, f_length is the length of the boundary of the object segment, and f_area is the area of the object segment. Suppose f̄ can be extracted with area ballooning, that is, there is a β̄ s.t.

\bar{f}_{cost} + \bar{\beta} \cdot \bar{f}_{area} \le f_{cost} + \bar{\beta} \cdot f_{area} \quad \text{for all labelings } f.

Then β̄ ≤ (f_cost − f̄_cost)/(f̄_area − f_area) for all f s.t. f_area ≤ f̄_area, and β̄ ≥ (f_cost − f̄_cost)/(f̄_area − f_area) for all f s.t. f_area > f̄_area. Let f^s be s.t. f^s_area ≤ f̄_area and (f^s_cost − f̄_cost)/(f̄_area − f^s_area) is the smallest possible out of all f with f_area ≤ f̄_area. We can safely assume that f^s_cost − f̄_cost < 0, since if f^s_cost > f̄_cost, then any labeling with smaller area than f̄ has a cost higher than f̄, and therefore no labeling smaller than f̄ can be extracted and we would not even have to consider labelings with smaller area than f̄. Similarly, let f^g be s.t. (f^g_cost − f̄_cost)/(f̄_area − f^g_area) is as large as possible out of all f with f_area > f̄_area. We must have

\frac{f^g_{cost} - \bar{f}_{cost}}{\bar{f}_{area} - f^g_{area}} \le \bar{\beta} \le \frac{f^s_{cost} - \bar{f}_{cost}}{\bar{f}_{area} - f^s_{area}}.

Since area scales quadratically in terms of length, approximately we have f_area = (f_length)^2. Therefore we can approximate

\frac{f^g_{cost} - \bar{f}_{cost}}{(\bar{f}_{length} - f^g_{length})(\bar{f}_{length} + f^g_{length})} \le \bar{\beta} \le \frac{f^s_{cost} - \bar{f}_{cost}}{(\bar{f}_{length} - f^s_{length})(\bar{f}_{length} + f^s_{length})}.

Rearranging the terms we get

\bar{\beta}\,(\bar{f}_{length} + f^g_{length}) \le \frac{(f^s_{cost} - \bar{f}_{cost})(\bar{f}_{length} + f^g_{length})}{(\bar{f}_{length} - f^s_{length})(\bar{f}_{length} + f^s_{length})}.

Now (f̄_length + f^g_length)/(f̄_length + f^s_length) ≥ 1 and f^s_cost − f̄_cost < 0. Therefore

\frac{f^g_{cost} - \bar{f}_{cost}}{\bar{f}_{length} - f^g_{length}} \le \bar{\beta}\,(\bar{f}_{length} + f^g_{length}) \le \frac{f^s_{cost} - \bar{f}_{cost}}{\bar{f}_{length} - f^s_{length}}.

We need to make a couple of reasonable assumptions. Let us assume that f^s is also s.t. f^s_length ≤ f̄_length and (f^s_cost − f̄_cost)/(f̄_length − f^s_length) is the smallest possible out of all f with f_length ≤ f̄_length. Similarly, let f^g also be s.t. (f^g_cost − f̄_cost)/(f̄_length − f^g_length) is as large as possible out of all f with f_length > f̄_length. Rolling back the arguments in the beginning of this paragraph, we get that

\bar{f}_{cost} + \bar{\beta}\,(\bar{f}_{length} + f^g_{length}) \cdot \bar{f}_{length} \le f_{cost} + \bar{\beta}\,(\bar{f}_{length} + f^g_{length}) \cdot f_{length}

for all labelings f, and therefore f̄ can be extracted with length-based ballooning for β = β̄(f̄_length + f^g_length). The reverse is not true; that is, for some choices of f̄, there is a β̄ that allows extraction with length ballooning but not area ballooning. Instead of proving this, we show an example. In Fig. 5, the original image is in (a), our results (with boundary ballooning) are in (b), and the results with uniform area ballooning are in (c). For area ballooning, just as for our algorithm, we chose the smallest β giving an object segment of size at least 100 pixels. Since segments get larger for larger β, with uniform area ballooning there is no choice of β for which it is possible to extract the seal (with only a single seed provided by the user). Another option is non-uniform area ballooning. For example, in [34] the ballooning decreases at the rate of 1/r, where r is the distance from the pixel to the center. The problem is that such ballooning is biased towards including pixels
Fig. 5. (a) Original image; (b) Our algorithm; (c) Area “ballooning”
Fig. 6. (a) Original image; (b) Our results; (c) Non-uniform area “ballooning”
closer to the center, since such pixels get "ballooned" more. Consider Fig. 6. It shows an oval object in front of two smaller oval objects. The contrast between the large oval and the two smaller ones is weak. Because of the star shape, we are able to extract the large oval, shown in (b). With non-uniform ballooning, all three ovals are extracted, because the smaller ovals get stronger "ballooning" than the lower two thirds of the large oval, since they are closer to the seed.
5 Experimental Results
First we summarize the experimental setup. The borders of the image are fixed to be the background seeds. The user provides a single object seed pixel, which is assumed to be the center of the star shape. Using binary search, we find the smallest β resulting in an object segment larger than 100 pixels. Although we could make further corrections to the boundary with the help of user interaction, we choose not to do so to make the evaluation of the star shape effects easier. We set λ = 20, although its particular choice is not important, since changing β changes the relative weight between the regional and boundary terms. The only remaining parameter is σ in Eq. (2). Typically [5], one sets σ to the average absolute intensity difference between neighboring pixels. Such a choice for σ is motivated by the following. If the difference in intensities between two pixels is twice the typical (average) difference, then the weight of the connection w_pq between these pixels is very small, allowing a boundary to be placed between
Fig. 7. Some results
them at a very small cost. If the intensity difference is smaller than the average, then w_pq is large. When the difference is in the range from the average to twice the average, the weights w_pq decrease at an exponential rate. We
use the same idea, except that for pixels p and q we set σ to the average absolute intensity difference in a box around p and q, not in the whole image. The box size is set to 20 by 20. This approach helps to encourage the boundary to follow edges that may be relatively weak as far as the whole image is concerned, but are significantly strong within their local neighborhood. Fig. 7 shows results. Some of the images are from the Berkeley database [35], and others are from the Web. The odd columns show the original images, and the even columns show the segmentation results. All the images are grayscale, which makes the segmentation problem much more challenging compared to color images. The object segment is shown with its original intensities, and the background is shown in black. The seed pixel is a red square. For β = 0, for all the images shown, the object segment consists of either the single seed pixel or just a few pixels. Therefore the standard graph cut algorithm without a bias to longer object boundaries fails without further user interaction. The results are very promising, especially considering that they were obtained with a single user click. In many cases, the extracted segment is a meaningful object or a collection of objects. We are able to deal with weak boundaries and complex foreground and background. In some cases, a meaningful part of the object is found. For example, in the fourth row of Fig. 7, the door part is segmented even though the whole car is a star shape. This is due to a strong intensity edge around the door. In cases when only a part of the object has been segmented, the extracted part can be used for improving segmentation without further user interaction. For example, for the snake image in the bottom row of Fig. 7, we can obtain a reliable texture model from the extracted part, and then rerun the graph cut with this data term without the star shape constraints. This would make the algorithm similar to Grab Cut [36]. In Grab Cut, an initial region containing the object is specified by the user as a box. However, this box usually contains a significant part of the background, making appearance modeling less reliable. Our method can find a large, significant part of the foreground, mostly without background pixels, making it more appropriate for modeling the object appearance. The running times on a 2.66 GHz computer with 2.0 GB of RAM were between a fraction of a second and less than 2 seconds, depending on the image size, which was in the range from 100 by 100 to 512 by 512.
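To make the locally adaptive σ described above concrete, here is a minimal sketch (an illustration only, assuming the 20-by-20 box and a helper name local_sigma that is not from the paper) that averages the absolute 4-neighbor intensity differences in a box around each pixel.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_sigma(img, box=20):
    """Per-pixel sigma: box average of the mean absolute neighbor difference."""
    img = img.astype(np.float64)
    diff = np.zeros_like(img)
    cnt = np.zeros_like(img)
    d = np.abs(img[:, 1:] - img[:, :-1])        # horizontal neighbor differences
    diff[:, 1:] += d; diff[:, :-1] += d; cnt[:, 1:] += 1; cnt[:, :-1] += 1
    d = np.abs(img[1:, :] - img[:-1, :])        # vertical neighbor differences
    diff[1:, :] += d; diff[:-1, :] += d; cnt[1:, :] += 1; cnt[:-1, :] += 1
    per_pixel = diff / cnt                      # average difference at each pixel
    return uniform_filter(per_pixel, size=box)  # 20x20 box average = local sigma
```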
Acknowledgments We thank Daniel Cremers for suggesting the name to our shape prior. We also thank the reviewers for numerous comments which helped to improve the paper.
References
1. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: Active contour models. IJCV 2, 321–331 (1998)
2. Falaco, A.X., Udupa, J., Samarasekara, S., Sharma, S.: User-steered image segmentation paradigms: Live wire and live lane. In: Graphical Models and Image Processing, vol. 60, pp. 233–260 (1998)
3. Mortensen, E.N., Barrett, W.A.: Interactive segmentation with intelligent scissors. In: Graphical Models and Image Processing (GMIP), vol. 60, pp. 349–384 (1998)
4. Osher, S., Sethian, J.: Fronts propagating with curvature dependent speed: Algorithm based on Hamilton-Jacobi formulations. Journal of Computational Physics 79, 12–49 (1988)
5. Boykov, Y., Jolly, M.P.: Interactive graph cuts for optimal boundary and region segmentation. In: ICCV 2001, vol. I, pp. 105–112 (2001)
6. Blake, A., Rother, C., Brown, M., Perez, P., Torr, P.: Interactive Image Segmentation Using an Adaptive GMMRF Model. In: Pajdla, T., Matas, J(G.) (eds.) ECCV 2004. LNCS, vol. 3021, pp. 428–441. Springer, Heidelberg (2004)
7. Boykov, Y., Funka-Lea, G.: Graph cuts and efficient N-D image segmentation. International Journal of Computer Vision 69(2), 109–131 (2006)
8. Wu, Z., Leahy, R.: An optimal graph theoretic approach to data clustering: Theory and its application to image segmentation. PAMI 15(11), 1101–1113 (1993)
9. Cox, I., Rao, S.B., Zhong, Y.: "Ratio regions": A technique for image segmentation. In: ICPR 1996, pp. 557–565 (1996)
10. Shi, J., Malik, J.: Normalized cuts and image segmentation. In: IEEE Conference on Computer Vision (ICCV), pp. 731–737 (1997)
11. Veksler, O.: Image segmentation by nested cuts. In: CVPR 2000, vol. I, pp. 339–344 (2000)
12. Jermyn, I., Ishikawa, H.: Globally optimal regions and boundaries as minimum ratio weight cycles. PAMI 23(10), 1075–1088 (2001)
13. Felzenszwalb, P., Huttenlocher, D.: Efficient graph-based image segmentation. IJCV 59(2), 167–181 (2004)
14. Wang, S., Kubota, T., Siskind, J., Wang, J.: Salient closed boundary extraction with ratio contour. PAMI 27(4), 546–561 (2005)
15. Grady, L., Schwartz, E.L.: Isoperimetric graph partitioning for image segmentation. PAMI 28(3), 469–475 (2006)
16. Schoenemann, T., Cremers, D.: Globally optimal image segmentation with an elastic shape prior. In: ICCV 2007, pp. 1–6 (2007)
17. Kolmogorov, V., Zabih, R.: What energy functions can be minimized via graph cuts? IEEE Transactions on PAMI 26(2), 147–159 (2004)
18. Leventon, M., Grimson, W., Faugeras, O.: Statistical shape influence in geodesic active contours. In: CVPR 2000, vol. I, pp. 316–323 (2000)
19. Tsai, A., Yezzi Jr., A., Wells III, W., Tempany, C., Tucker, D., Fan, A., Grimson, W., Willsky, A.: Model-based curve evolution technique for image segmentation. In: CVPR, vol. I, pp. 463–468 (2001)
20. Rousson, M., Paragios, N.: Shape priors for level set representations. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) ECCV 2002. LNCS, vol. 2351, pp. 78–92. Springer, Heidelberg (2002)
21. Cremers, D., Osher, S., Soatto, S.: Kernel density estimation and intrinsic alignment for shape priors in level set segmentation. IJCV 69(3), 335–351 (2006)
22. Slabaugh, G., Unal, G.: Graph cuts segmentation using an elliptical shape prior. In: ICIP 2005, vol. II, pp. 1222–1225 (2005)
23. Funka-Lea, G., Boykov, Y., Florin, C., Jolly, M., Moreau-Gobard, R., Ramaraj, R., Rinck, D.: Automatic heart isolation for CT coronary visualization using graph-cuts. In: ISBI 2006, pp. 614–617 (2006)
24. Das, P., Veksler, O., Zavadsky, S., Boykov, Y.: Semiautomatic segmentation with compact shape prior. In: CRV 2006, pp. 28–36 (2006)
25. Vicente, S., Kolmogorov, V., Rother, C.: Graph cut based image segmentation with connectivity priors. In: CVPR (2008)
26. Freedman, D., Zhang, T.: Interactive graph cut based segmentation with shape priors. In: CVPR, vol. I, pp. 755–762 (2005)
27. Kumar, M., Torr, P., Zisserman, A.: OBJ CUT. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. I, pp. 18–25 (2005)
28. Ford, L., Fulkerson, D.: Flows in Networks. Princeton University Press, Princeton (1962)
29. Boykov, Y., Kolmogorov, V.: An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. PAMI 26(9), 1124–1137 (2004)
30. Kolmogorov, V., Boykov, Y.: What metrics can be approximated by geo-cuts, or global optimization of length/area and flux. In: ICCV, vol. I, pp. 564–571 (2005)
31. Kolmogorov, V., Boykov, Y., Rother, C.: Applications of parametric maxflow in computer vision. In: ICCV, pp. 1–8 (2007)
32. Kohli, P., Torr, P.: Dynamic graph cuts for efficient inference in Markov random fields. PAMI 29(12), 2079–2088 (2007)
33. Cohen, L., Cohen, I.: Finite-element methods for active contour models and balloons for 2-D and 3-D images. PAMI 15(11), 1131–1147 (1993)
34. Appleton, B., Talbot, H.: Globally optimal geodesic active contours. JMIV 23(1), 67–86 (2005)
35. Martin, D., Fowlkes, C., Tal, D., Malik, J.: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: ICCV, vol. 2, pp. 416–423 (July 2001)
36. Rother, C., Kolmogorov, V., Blake, A.: "GrabCut" - interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics 23(3), 309–314 (2004)
Efficient NCC-Based Image Matching in Walsh-Hadamard Domain
Wei-Hau Pan, Shou-Der Wei, and Shang-Hong Lai
Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan
{whpan,greco,lai}@cs.nthu.edu.tw
Abstract. In this paper, we propose a fast image matching algorithm based on the normalized cross correlation (NCC) that applies the winner-update strategy in the Walsh-Hadamard domain. The Walsh-Hadamard transform is an orthogonal transformation that is easy to compute and has a nice energy packing capability. Based on the Cauchy-Schwarz inequality, we derive a novel upper bound for the cross correlation of image matching in the Walsh-Hadamard domain. Applying this upper bound with the winner-update search strategy skips unnecessary calculations, thus significantly reducing the computational burden of NCC-based pattern matching. Experimental results show that the proposed algorithm is very efficient for NCC-based image matching under different lighting conditions and noise levels. Keywords: pattern matching, image matching, image alignment, normalized cross correlation, winner update.
1 Introduction
Pattern matching is widely used in many applications related to computer vision and image processing, such as stereo matching, object tracking, object detection, pattern recognition, and video compression. The pattern matching problem can be formulated as follows: given a source image I and a template image T of size N-by-N, find the best match of template T in the source image I based on the criterion of minimal distortion or maximal correlation. The most popular similarity measures are the sum of absolute differences (SAD), the sum of squared differences (SSD), and the normalized cross correlation (NCC). For some applications, such as block motion estimation in video compression, the SAD and SSD measures have been widely used. For practical applications, a number of approximate block matching methods [1][2][3] and some optimal block matching methods [4][5][6] have been proposed. The optimal block matching methods can provide the same solution as that of full search but with fewer operations by using early termination schemes in the computation of SAD. A coarse-to-fine pruning algorithm with the pruning threshold determined from the lower resolution search space was presented in [7]. This search algorithm was proved to provide the global solution with considerable reduction in computational cost. Hel-Or and Hel-Or [8] proposed a fast template matching method based on accumulating the distortion in the Walsh-Hadamard domain in the order of the frequency of the
Walsh-Hadamard basis. By using a predefined threshold, they can reject most of the impossible candidates very efficiently at an early stage. However, this algorithm can only be applied for the SSD measure, and it is not guaranteed to find the globally optimal solution. More recently, Pele and Werman [18] proposed a very fast method for pattern matching based on Bayesian sequential hypothesis testing. They used a rank to decide how many pixels can be skipped during the sliding window search process. Similarly, this method cannot guarantee to find the globally optimal solution. Chen et al. [9] proposed a winner-update algorithm for fast block matching based on the SAD and SSD measures. This algorithm significantly reduces the computation and guarantees to find the globally optimal solution. In their algorithm, only the current winner location with the minimal accumulated distortion is considered for updating the accumulated distortion. This updating process is repeated until the winner has gone through all levels in the pyramids that are constructed from the template and the candidate windows for the distortion calculation. The winner update algorithm examines all the candidates in the search image to guarantee the globally optimal solution but skips the unnecessary calculations of the distortion measures for most candidates. The NCC similarity measure is commonly used for pattern matching under different lighting conditions. The definition of NCC is given as follows:

NCC(x, y) = \frac{\sum_{i=1}^{N} \sum_{j=1}^{N} I(x+i, y+j) \cdot T(i, j)}{\sqrt{\sum_{i=1}^{N} \sum_{j=1}^{N} I(x+i, y+j)^2} \cdot \sqrt{\sum_{i=1}^{N} \sum_{j=1}^{N} T(i, j)^2}}    (1)
The NCC measure is more robust than SAD and SSD under uniform illumination changes, so the NCC measure has been widely used in object recognition and industrial inspection. The correlation approach is very popular for image registration [15]. The traditional NCC needs to compute the numerator and denominator at all locations in the image, which is very time-consuming. Lewis [12] employed the sum table scheme [13][14] to reduce the computation in the denominator. After building the sum table for the source image, the block squared intensity sum for any window inside the source image can be calculated very efficiently with 4 simple operations. Although the sum table scheme can significantly reduce the computation of the denominator in NCC-based pattern matching, simplifying the computation involved in the numerator of NCC-based pattern matching is still strongly needed. The FFT-based method has been employed to calculate the cross correlation over the whole image via element-wise multiplication in the frequency domain [16]. It is very effective especially when the size of the template is comparable to the size of the search image. Stefano and Mattoccia [10][11] derived upper bounds for the cross correlation based on Jensen's and Cauchy-Schwarz inequalities to terminate some search points early. They partitioned the search window into two blocks and computed the partial cross correlation for the first block, with the other block bounded by the upper bound. Then, the successive elimination algorithm (SEA) was applied to reject the impossible candidates successively. More recently, Mahmood and Khan [17] proposed new tighter bounds on correlation for fast block matching in video compression. They derived tighter bounds for the B-frames in video by using the correlation between the reference frame and the C-frame.
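For concreteness, the following is a minimal sketch (not the original implementation; the names integral_image and window_sum are assumptions) of the sum-table idea: an integral image of squared intensities lets the windowed sum in the NCC denominator be read off with four operations per location.

```python
import numpy as np

def integral_image(img):
    """S[i, j] = sum of img[:i, :j], zero-padded on the top and left."""
    return np.pad(img.astype(np.float64).cumsum(0).cumsum(1), ((1, 0), (1, 0)))

def window_sum(S, y, x, N):
    """Sum over the N-by-N window whose top-left corner is (y, x): four lookups."""
    return S[y + N, x + N] - S[y, x + N] - S[y + N, x] + S[y, x]

# Usage sketch: Ssq = integral_image(I ** 2); denom_I = window_sum(Ssq, y, x, N)
```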
In this paper, we propose an efficient NCC-based pattern search algorithm that applies the winner update procedure in the Walsh-Hadamard domain. The winner update scheme is employed with the upper boundary value for the NCC derived from the Cauchy-Schwarz inequality in the Walsh-Hadamard basis to skip unnecessary calculations. The rest of this paper is organized as follows: we first briefly review the winner update scheme and the SEA-based method for optimal pattern matching [4][5], as well as the upper bound for the cross correlation derived from the Cauchy-Schwarz inequality [10][11]. Then, we present the proposed efficient NCC-based image matching algorithm that performs the winner update scheme in the Walsh-Hadamard domain in Section 3. The implementation details and the experimental results are shown in Section 4 and Section 5, respectively. Finally, we conclude this paper in the last section.
2 Previous Works
Although the NCC measure is more robust than SAD, the computational cost of NCC is very high. The technique of the sum table [12][13][14] can be used to reduce the computation involved in the denominator of NCC. Any block sum in the source image can be computed with 4 simple operations from the pre-computed sum table. To reduce the computational cost in the numerator, Stefano and Mattoccia [10][11] derived upper bounds for the cross correlation based on Jensen's and Cauchy-Schwarz inequalities to terminate early some impossible search points. Because the bound is not very tight, they partitioned the search window into two blocks and computed the partial cross correlation for the first block (from row 1 to row k), with the other block bounded by the upper bound (from row k to row N). Then they used the SEA scheme to reject the impossible candidates successively. Based on the Cauchy-Schwarz inequality [11] given below,

\sum_{i=1}^{N} a_i \cdot b_i \le \sqrt{\sum_{i=1}^{N} a_i^2} \cdot \sqrt{\sum_{i=1}^{N} b_i^2},    (2)
the upper bound (UB) of the numerator, i.e. the cross correlation, can be derived as follows:

UB(x, y) = \sum_{i=1}^{M} \sum_{j=1}^{k} I(x+i, y+j) \cdot T(i, j) + \sqrt{\sum_{i=1}^{M} \sum_{j=k+1}^{N} I(x+i, y+j)^2} \cdot \sqrt{\sum_{i=1}^{M} \sum_{j=k+1}^{N} T(i, j)^2} \ge \sum_{i=1}^{M} \sum_{j=1}^{N} I(x+i, y+j) \cdot T(i, j)    (3)
Thus, the boundary value of NCC is given by:

BV(x, y) = \frac{UB(x, y)}{\left(\sum_{(i,j) \in W} I(x+i, y+j)^2\right)^{1/2} \cdot \left(\sum_{(i,j) \in W} T(i, j)^2\right)^{1/2}}    (4)
where W is the window for the template. Similar to the SEA scheme, the candidate at position (x, y) of image I is rejected if BV(x, y) < NCC_max, and NCC_max is updated to NCC(x, y) if NCC(x, y) > NCC_max.
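A minimal sketch of this rejection test (an illustration only; the partial cross correlation and the two tail sums of Eq. (3) are assumed to be available as described in the text, and the function name bpc_reject is not from the cited work):

```python
import math

def bpc_reject(partial_cc, tail_I_sq, tail_T_sq, denom, ncc_max):
    """Return True if the candidate can be discarded: its bound cannot beat ncc_max."""
    ub = partial_cc + math.sqrt(tail_I_sq) * math.sqrt(tail_T_sq)  # Eq. (3)
    bv = ub / denom                                                # Eq. (4)
    return bv < ncc_max
    # Otherwise the full NCC is evaluated and ncc_max is updated if it is exceeded.
```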
3 The Proposed Efficient NCC-Based Image Matching Algorithm
Hel-Or and Hel-Or [1] proposed an efficient algorithm that eliminates the impossible candidates for SSD-based pattern search when the accumulated distortions in the Walsh-Hadamard domain exceed a predefined threshold. The elimination is efficient because the first few lowest-frequency Walsh-Hadamard coefficients usually contain most of the energy in the SSD. With an appropriate threshold, they can reject most of the impossible candidates at a very early stage, thus leading to a very efficient algorithm. However, the selection of an appropriate threshold is critical to the performance of the algorithm. In addition, it is not clear how to extend their algorithm to NCC-based image matching. In this section, we derive a novel boundary value for NCC in a projection domain corresponding to an orthogonal transformation by using the Cauchy-Schwarz inequality. Because of its nice energy packing property and ease of computation, we select the Walsh-Hadamard transform as the projection kernel for efficient pattern matching. In addition, we combine the winner update strategy with the hierarchical order of the boundary values in the Walsh-Hadamard domain to efficiently find the best match with the NCC criterion in the source image.
3.1 The Novel Boundary Value of Numerator in NCC Using Orthonormal Projections and Cauchy-Schwarz Inequality
The cross correlation between two vectors (images) is the same as the cross correlation between their transformed vectors when an orthogonal transformation is employed. This statement can easily be proved as follows. Let u and v be two n-dimensional column vectors. Consider an orthogonal transformation matrix P ∈ R^{n×n} applied to transform u and v, and let a and b be the corresponding transformed vectors, respectively, i.e. a = Pu and b = Pv. Thus, we have

a \cdot b = a^T b = (Pu)^T (Pv) = u^T P^T P v = u^T v = u \cdot v    (5)
Similarly, we can easily prove that the 2-norms of the original vectors and their transformed vectors are the same, i.e. ||a|| = ||u|| and ||b|| = ||v||. It is obvious that the numerator and the denominator in NCC are invariant under an orthogonal transformation. To reduce the computational burden of the numerator in NCC, we derive an upper bound for it by using the Cauchy-Schwarz inequality in the projection domain with an orthonormal basis. We can rewrite the cross correlation (CC) in the numerator of NCC by partitioning the summation of the components in the vectors a and b into two parts as below:

\sum_{i=1}^{N} a_i \cdot b_i = \sum_{i=1}^{k} a_i \cdot b_i + \sum_{i=k+1}^{N} a_i \cdot b_i    (6)
The second term on the right hand side of equation (6) can be bounded by the Cauchy-Schwarz inequality as below:

\sum_{i=k+1}^{N} a_i \cdot b_i \le \sqrt{\sum_{i=k+1}^{N} a_i^2} \cdot \sqrt{\sum_{i=k+1}^{N} b_i^2} = \sqrt{\sum_{i=1}^{N} a_i^2 - \sum_{i=1}^{k} a_i^2} \cdot \sqrt{\sum_{i=k+1}^{N} b_i^2}    (7)
Thus, we have an upper bound for the numerator in NCC as follows:

\sum_{i=1}^{N} a_i \cdot b_i \le \sum_{i=1}^{k} a_i \cdot b_i + \sqrt{\sum_{i=1}^{N} a_i^2 - \sum_{i=1}^{k} a_i^2} \cdot \sqrt{\sum_{i=k+1}^{N} b_i^2} = UB_k    (8)
The second term of the upper bound in equation (8) is bounded by the Cauchy-Schwarz inequality, so we have the following hierarchical order for the upper bounds of the cross correlation in NCC:

UB_1 \ge UB_2 \ge \dots \ge UB_N = \sum_{i=1}^{N} a_i \cdot b_i    (9)
Then we can define the boundary value (BV) of NCC at the k-th level by

BV_k = \frac{UB_k}{\sqrt{\sum_{i=1}^{N} a_i^2} \cdot \sqrt{\sum_{i=1}^{N} b_i^2}}    (10)
From equations (9) and (10), we can obtain the hierarchical order of the boundary values for NCC at different levels as follows:

1 = BV_0 \ge BV_1 \ge \dots \ge BV_N = NCC    (11)

The terms \sum_{i=1}^{N} a_i^2 and \sum_{i=1}^{N} b_i^2 in the denominator of equation (10) and in equation (8) can be calculated very efficiently in the original domain by using the integral image. Thus, the calculation of BV becomes very efficient, since we only need to update the accumulated sums \sum_{i=1}^{k} a_i \cdot b_i and \sum_{i=1}^{k} a_i^2 at the different levels.
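The following numerical sanity check is an illustration of our own (not from the paper): it builds an orthonormal, Sylvester-ordered Hadamard matrix and verifies the invariance of Eq. (5) and the bound hierarchy of Eqs. (8)-(11) on random vectors.

```python
import numpy as np

def hadamard(n):                      # Sylvester construction; n must be a power of two
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 16
P = hadamard(n) / np.sqrt(n)          # orthonormal: P.T @ P = I
u, v = np.random.rand(n), np.random.rand(n)
a, b = P @ u, P @ v                   # transformed candidate and template vectors
assert np.isclose(a @ b, u @ v)       # Eq. (5): the cross correlation is preserved
na, nb = u @ u, v @ v
ncc = (u @ v) / np.sqrt(na * nb)
for k in range(n + 1):
    ub = a[:k] @ b[:k] + np.sqrt(max(na - a[:k] @ a[:k], 0.0)) * np.sqrt(b[k:] @ b[k:])
    bv = ub / np.sqrt(na * nb)        # Eq. (10): BV_0 = 1 and BV_n equals the NCC value
    assert bv >= ncc - 1e-9           # Eqs. (9)/(11): the bound never falls below NCC
```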
3.2 The Proposed Method by Combining the Novel Boundary Value with the Winner Update Strategy
With its nice energy-packing property and ease of computation, we use the Walsh-Hadamard transform as our projection kernel, in the order of the frequency of
the Walsh-Hadamard basis, and use the Walsh-Hadamard tree [1] to accelerate the computation of the WH (Walsh-Hadamard) coefficients. The vectors a and b in equation (5) can be taken as the transformed vectors of the candidate and the template in the Walsh-Hadamard domain. We apply the winner update strategy on the novel hierarchical order of the boundary values at different levels, as described in Section 3.1, to find the best match in the source image. The winner update scheme chooses the candidate with the largest boundary value as the temporary winner from the candidate pool. The temporary winner is iteratively selected and updated until one candidate has accumulated the boundary value over all WH coefficients. Similar to Chen et al. [9], we also use a hash table to find the temporary winner very efficiently. In the beginning, we compute the square sum of the template, denoted by Tss, and the windowed square sum for all candidates in the search image, denoted by Iss, as follows:

Tss = \sum_{i=1}^{N} \sum_{j=1}^{N} T^2(i, j)    (12)

Iss(x, y) = \sum_{i=1}^{N} \sum_{j=1}^{N} I^2(x+i, y+j)    (13)
Note that the windowed square sum Iss can be easily computed with an integral square image. We first calculate all the WH coefficients of the template as vector b and the first WH coefficient (DC term) of all the candidates in the source image. We can calculate BV1 from the first WH coefficient and build the hash table to find temporary winner. At each iteration, we find the candidate with the largest BV as the temporary winner from hash table and update the boundary value and the associated level. We compute the next WH coefficient of the temporary winner, denoted by ak , and calculate the upper bound UBk with the following equation. k
$$UB_k = \sum_{i=1}^{k} a_i b_i + \sqrt{\Big(\mathrm{Iss} - \sum_{i=1}^{k} a_i^2\Big)\Big(\mathrm{Tss} - \sum_{i=1}^{k} b_i^2\Big)} \qquad (14)$$
Note that the terms Iss, Tss and the projected template, denoted as vector b, in equation (14) have been calculated in the first step. At each step, we only need to update the accumulated sums

$$\sum_{i=1}^{k} a_i b_i = \sum_{i=1}^{k-1} a_i b_i + a_k b_k, \qquad \sum_{i=1}^{k} a_i^2 = \sum_{i=1}^{k-1} a_i^2 + a_k^2, \qquad \sum_{i=1}^{k} b_i^2 = \sum_{i=1}^{k-1} b_i^2 + b_k^2 .$$
Thus, we can obtain the new BV at the next level from equation (10). After updating the boundary value and the level, we push the temporary winner into the hash table. This winner update procedure is repeated until one candidate has reached the final level.
Algorithm 2. The proposed fast NCC-based pattern matching algorithm
1. Initialization
   1.1: Build the integral square image for the source image.
   1.2: Calculate the square sum of the template, Tss.
   1.3: Calculate the windowed square sum Iss(x,y) of all candidates from the integral square image.
   1.4: Calculate the projected vector b of the template.
   1.5: Calculate BV_1 for all candidates and initialize the hash table.
2. Winner Update Scheme
   Repeat
      2.1: Select the candidate I(x,y) with the maximal BV in the hash table as the temporary winner.
      2.2: Update the level l and the BV of the temporary winner:
           2.2.1: Compute the next WH coefficient a_{l+1} of the temporary winner. Update its level l = l + 1.
           2.2.2: Calculate UB_l for level l. Compute BV_l = UB_l / sqrt(Tss · Iss(x,y)).
           2.2.3: Push the temporary winner back into the hash table.
   Until the winner reaches the maximal level.
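A compact sketch of this search loop is given below (our own illustration, not the paper's code): a max-heap replaces the hash table of [9], and the helper names, the candidate bookkeeping, and the assumption of an orthonormal projection (so that Tss equals the sum of the squared template coefficients) are ours.

```python
import heapq
import numpy as np

def winner_update_match(candidates, b, compute_next_coeff):
    """Winner-update search in the spirit of Algorithm 2.
    `candidates` maps a location (x, y) to (a1, Iss): its DC coefficient and
    windowed square sum.  `compute_next_coeff(loc, level)` returns the next WH
    coefficient of that candidate.  `b` holds all WH coefficients of the template."""
    b = np.asarray(b, dtype=float)
    N = len(b)
    b2_total = float(np.dot(b, b))          # Tss, assuming an orthonormal projection
    b2_partial = np.cumsum(b ** 2)          # sum_{i<=k} b_i^2
    heap, state = [], {}                    # state: loc -> (level, cc, a2, Iss)
    for loc, (a1, Iss) in candidates.items():
        cc, a2 = a1 * b[0], a1 * a1
        ub = cc + np.sqrt(max(Iss - a2, 0.0) * max(b2_total - b2_partial[0], 0.0))
        state[loc] = (1, cc, a2, Iss)
        heapq.heappush(heap, (-ub / np.sqrt(Iss * b2_total), loc))  # min-heap, so negate BV
    while True:
        neg_bv, loc = heapq.heappop(heap)
        level, cc, a2, Iss = state[loc]
        if level == N:                      # winner reached the final level:
            return loc, -neg_bv             # its BV is now the exact NCC
        ak = compute_next_coeff(loc, level) # next WH coefficient of the temporary winner
        level += 1
        cc += ak * b[level - 1]
        a2 += ak * ak
        ub = cc + np.sqrt(max(Iss - a2, 0.0) * max(b2_total - b2_partial[level - 1], 0.0))
        state[loc] = (level, cc, a2, Iss)
        heapq.heappush(heap, (-ub / np.sqrt(Iss * b2_total), loc))
```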
The proposed fast algorithm can be easily extended to the zero-mean normalized cross correlation (ZNCC) by rewriting it in the following form:

$$\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}\big(I(x+i,y+j)-\bar I(x,y)\big)\big(T(i,j)-\bar T\big)}{\sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N}\big(I(x+i,y+j)-\bar I(x,y)\big)^2}\cdot\sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N}\big(T(i,j)-\bar T\big)^2}}
= \frac{\sum_{i=1}^{M}\sum_{j=1}^{N} I(x+i,y+j)\,T(i,j)-MN\,\bar I(x,y)\,\bar T}{\sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N} I^2(x+i,y+j)-MN\,\bar I(x,y)^2}\cdot\sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N} T^2(i,j)-MN\,\bar T^2}} \qquad (15)$$

where

$$\bar I(x,y)=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} I(x+i,y+j), \qquad \bar T=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} T(i,j) \qquad (16)$$
Note that the summations of the image intensities and the squared intensities in the local window at each location in the image can be computed very efficiently by using the corresponding integral images.
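As an illustration of this step (not code from the paper), the sketch below builds the two integral images and returns the windowed sums of I and I² for every window position:

```python
import numpy as np

def windowed_sums(image, M, N):
    """Windowed sums of intensities and squared intensities over M-by-N windows,
    computed with integral images (one pass to build, O(1) per window).
    Returns two arrays indexed by the window's top-left corner."""
    I = np.asarray(image, dtype=np.float64)
    # Integral images padded with a leading row/column of zeros.
    S  = np.pad(I,     ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    S2 = np.pad(I * I, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    def box(T):
        # sum over rows x..x+M-1 and cols y..y+N-1 for every top-left (x, y)
        return T[M:, N:] - T[:-M, N:] - T[M:, :-N] + T[:-M, :-N]
    return box(S), box(S2)
```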
4 Experimental Results

In this section, we demonstrate the efficiency of the proposed NCC-based pattern matching algorithm in the Walsh-Hadamard domain. The proposed algorithm incrementally calculates higher-level Walsh-Hadamard coefficients of the best candidate to obtain successively tighter upper bounds for the normalized cross correlation. To compare the efficiency of the proposed algorithm, we also implemented the BPC [11]
method with the correlation ratio Cr = 50% and Hel-Or & Hel-Or's method [8] (with the default parameter RejectThreshold set to 10). The experiments were performed on a PC with an Intel Pentium M 1.73GHz CPU and 512MB RAM. In the first experiment, we used the sailboat image of size 512-by-512 and its noisy version as the source images, as shown in Figure 1. Six template images of size 64-by-64 were selected from the original sailboat image, as depicted in Figure 2. The templates in Figure 2(d), (e), and (f) are the brighter versions (increased by 30 intensity grayscales for all pixels) of the original templates in Figure 2(a), (b), and (c), respectively. To compare the robustness and the efficiency of the proposed algorithms, we added random Gaussian noise with variance 10 to the search image, as shown in Figure 1(b), and compared the performance of the different pattern matching methods on the noisy image. The full search, BPC and the proposed algorithm are guaranteed to find the optimal NCC solution in the search image, so we focus on the comparison of the search time required for these three algorithms. Note that all three methods used the sum table to reduce the computation in the denominator of NCC. In addition, the pattern search method by Hel-Or and Hel-Or [8] is not guaranteed to find a globally optimal solution. The execution times required for the full search, BPC, Hel-Or & Hel-Or's method and the proposed algorithm are shown in Tables 1 and 2. For a fair comparison, the execution time shown here includes the time required for memory allocation for the sum table and the Walsh-Hadamard transform. We also conducted the same experiments for the airplane image; Tables 3 and 4 summarize the experimental results. All of these experimental results show that the proposed NCC-based pattern matching algorithm in the Walsh-Hadamard domain significantly outperforms the full search and BPC methods. In addition, the computational efficiency of the proposed algorithm is similar to that of Hel-Or and Hel-Or's method, but their method failed to find the optimal solution in several pattern matching experiments, especially when there is intensity scaling in the template.
Fig. 1. (a) The original "sailboat" image and (b) the noisy sailboat image with added zero-mean Gaussian noise with variance 10
Fig. 2. (a), (b) & (c) are three template images of size 64-by-64, and (d), (e) & (f) are their corresponding brighter versions (increased by 30 intensity grayscales for all pixels)
Table 1. The execution time (in milliseconds) of applying the full-search (NCC), BPC, Hel-Or and Hel-Or's method and the proposed algorithm to the NCC-based pattern matching with the six templates shown in Figure 2(a)~(f) and the source image shown in Figure 1(a)

ms                     T(a)   T(b)   T(c)   T(d)   T(e)   T(f)
Full-Search (NCC)      3163   3162   3180   3171   3162   3198
BPC                    1590   1812   1784   1597   1811   1752
Hel-Or & Hel-Or [8]      61     67     70    *74    *52    *54
Proposed Algorithm       66     68     70     70     72     80
Table 2. The execution time (in milliseconds) of applying the full-search (NCC), BPC, Hel-Or and Hel-Or's method and the proposed algorithm to the NCC-based pattern matching with the six templates shown in Figure 2(a)~(f) and the source image shown in Figure 1(b)

ms                      T(a)   T(b)   T(c)   T(d)   T(e)   T(f)
Full-Search (NCC)       3180   3173   3179   3172   3173   3177
BPC                     1625   1806   1817   1614   1801   1755
Hel-Or & Hel-Or's [8]     59    *73     72    *76    *52    *55
Proposed Algorithm        80     95     99     85    110    117
* indicates that method finds an incorrect matching result.
To show the influence of noise corruption on the efficiency of winner update in the proposed algorithm, we show in Figure 3(a) and 3(b) the total numbers of winner updates at all locations in the search image to find the template shown in Figure 2(a) from the original sailboat image and its noisy version shown in Figure 1(a) and Figure 1(b), respectively. The total number of candidates is 201601 (449x449) and the average winner update counts are 1.1506 and 1.2775 for the clean and noisy images, respectively. Note that the peaks of winner update counts shown in the figures correspond to the final searched locations. It is evident that the proposed algorithm is very efficient since all the locations other than the peak solution have very small numbers of winner updates. We can also see that the winner updates in the noisy search image as shown in Figure 3(b) are slightly more than those in the clean image as shown in Figure 3(a). To show the performance of the proposed algorithm for template matching tasks, we randomly selected 500 different templates of sizes 64-by-64 from the sailboat and airplane images with enough gradient magnitude to avoid selecting homogeneous blocks. The histograms of the execution time are depicted in Figure 4(a). From these figures, we can see the proposed method takes about 60-90 milliseconds to find most
Fig. 3. The total number of winner updates at all locations in the search image when applying the proposed method to find the template in Figure 2(a) from the source sailboat image with (a) no noise (Figure 1(a)) and (b) additive Gaussian noise (Figure 1(b))
Fig. 4. (a) The histograms of the execution time (in milliseconds) required for applying the proposed algorithm to find 500 randomly selected templates of size 64-by-64 from the sailboat image. (b) The inefficient template matching block randomly selected from the sailboat image, and (c) the winner update numbers of applying the proposed algorithm to find this template from the original image.
of these randomly selected templates. The execution time required for the proposed template matching algorithm depends on the image content of the template and the search images. The more unique the content of the template, the faster the proposed algorithm finds the template in the image. In Figure 4(a), there are some special cases with execution time of more than 300 milliseconds. For one of these cases, the template from the sailboat image is shown in Figure 4(b), and the winner update numbers of applying the proposed algorithm to this template are depicted in Figure 4(c). The reason for the inefficient matching is that the candidates in the neighborhood of the best match have similar Walsh-Hadamard coefficients. It is clear from Figure 4(c) that there are many winner updates in the neighborhood of the best match.
Fig. 5. (a) The original 700x700 "Bears" image and (b) the darker 700x700 "Bears" image
Fig. 6. Testing 64x64 templates cut from Figure 5(b)

Table 3. The execution time (in milliseconds) of applying the full-search (NCC), BPC, Hel-Or & Hel-Or's method and the proposed algorithm to the NCC-based pattern matching with the six templates shown in Figure 6(a)~(f) and the source image shown in Figure 5(a)
ms                      T(a)   T(b)   T(c)   T(d)   T(e)   T(f)
Full-Search (NCC)       6514   6520   6511   6525   6531   6522
BPC                     4266   4281   4293   3913   4337   4396
Hel-Or & Hel-Or's [8]   *158   *156   *155   *172   *125   *140
Proposed Algorithm       196    420    228    233    211    292
* indicates that method finds an incorrect matching result.
In addition, we applied the proposed fast NCC-based pattern matching algorithm to real source and template images acquired under different illumination conditions. Figure 5(a) and Figure 5(b) are two photographs taken under different illumination conditions. We cut several 64x64 templates from the darker photo, i.e. Figure 5(b), as shown in Figure 6, and applied the different pattern matching methods to find these templates in the source image shown in Figure 5(a). Table 3 summarizes the experimental results.
5 Conclusion

In this paper, we proposed a very efficient algorithm for NCC-based pattern search in the Walsh-Hadamard domain. We derived a novel upper bound for the cross correlation in NCC in an orthogonal transform domain. To achieve very efficient computation, we used the Walsh-Hadamard transform as the projection kernel. For the NCC pattern search, the winner update scheme is applied in conjunction with the novel incremental upper bound for the cross correlation derived from the Cauchy-Schwarz inequality. The experimental results show that the proposed algorithm is very efficient and robust for pattern matching under illumination changes and in noisy environments.
References
1. Zhu, S., Ma, K.K.: A new diamond search algorithm for fast block matching motion estimation. IEEE Trans. Image Processing 9(2), 287–290 (2000)
2. Li, R., Zeng, B., Liou, M.L.: A new three-step search algorithm for block motion estimation. IEEE Trans. Circuits Systems Video Technology 4(4), 438–442 (1994)
3. Po, L.M., Ma, W.C.: A novel four-step search algorithm for fast block motion estimation. IEEE Trans. Circuits Syst. Video Technol. 6, 313–317 (1996)
4. Li, W., Salari, E.: Successive elimination algorithm for motion estimation. IEEE Trans. Image Processing 4(1), 105–107 (1995)
5. Gao, X.Q., Duanmu, C.J., Zou, C.R.: A multilevel successive elimination algorithm for block matching motion estimation. IEEE Trans. Image Processing 9(3), 501–504 (2000)
6. Lee, C.-H., Chen, L.-H.: A fast motion estimation algorithm based on the block sum pyramid. IEEE Trans. Image Processing 6(11), 1587–1591 (1997)
7. Gharavi-Alkhansari, M.: A fast globally optimal algorithm for template matching using low-resolution pruning. IEEE Trans. Image Processing 10(4), 526–533 (2001)
8. Hel-Or, Y., Hel-Or, H.: Real-time pattern matching using projection kernels. IEEE Trans. Pattern Analysis Machine Intelligence 27(9), 1430–1445 (2005)
9. Chen, Y.S., Huang, Y.P., Fuh, C.S.: A fast block matching algorithm based on the winner-update strategy. IEEE Trans. Image Processing 10(8), 1212–1222 (2001)
10. Di Stefano, L., Mattoccia, S.: Fast template matching using bounded partial correlation. Machine Vision and Applications 13(4), 213–221 (2003)
11. Di Stefano, L., Mattoccia, S.: A Sufficient Condition based on the Cauchy-Schwarz Inequality for Efficient Template Matching. In: IEEE International Conf. Image Processing, Barcelona, Spain, September 14-17 (2003)
12. Lewis, J.P.: Fast template matching. Vision Interface, pp. 120–123 (1995)
13. Mc Donnel, M.: Box-filtering techniques. Computer Graphics and Image Processing 17, 65–70 (1981)
14. Viola, P., Jones, M.: Robust real-time face detection. International Journal of Computer Vision 52(2), 137–154 (2004)
15. Zitová, B., Flusser, J.: Image registration methods: a survey. Image Vision Computing 21(11), 977–1000 (2003)
16. Brown, L.G.: A survey of image registration techniques. ACM Computing Surveys 24(4), 325–376 (1992)
17. Mahmood, A., Kahn, S.: Exploiting Inter-frame Correlation for Fast Video to Reference Image Alignment. In: Proc. 8th Asian Conference on Computer Vision (2007)
18. Pele, O., Werman, M.: Robust real time pattern matching using Bayesian sequential hypothesis testing. IEEE Trans. Pattern Analysis Machine Intelligence (to appear)
Object Recognition by Integrating Multiple Image Segmentations
Caroline Pantofaru1, Cordelia Schmid2, and Martial Hebert1
1 The Robotics Institute, Carnegie Mellon University, USA
2 INRIA Grenoble, LEAR, LJK, France
[email protected],
[email protected],
[email protected]
Abstract. The joint tasks of object recognition and object segmentation from a single image are complex in their requirement of not only correct classification, but also deciding exactly which pixels belong to the object. Exploring all possible pixel subsets is prohibitively expensive, leading to recent approaches which use unsupervised image segmentation to reduce the size of the configuration space. Image segmentation, however, is known to be unstable, strongly affected by small image perturbations, feature choices, or different segmentation algorithms. This instability has led to advocacy for using multiple segmentations of an image. In this paper, we explore the question of how to best integrate the information from multiple bottom-up segmentations of an image to improve object recognition robustness. By integrating the image partition hypotheses in an intuitive combined top-down and bottom-up recognition approach, we improve object and feature support. We further explore possible extensions of our method and whether they provide improved performance. Results are presented on the MSRC 21-class data set and the Pascal VOC2007 object segmentation challenge.
1 Introduction
The joint tasks of single-image object class recognition and object segmentation are difficult and important. Deformable objects, however, can take on an intractable number of pixel configurations to explore. Bottom-up image segmentation is one possible method for proposing plausible sets of pixels which may compose an object. Unfortunately, recent extensive experiments in [1] and [2] have shown that a single region generated by an image segmentation can rarely be equated to a physical object or object part. Also, image segmentation quality is highly variable, dependent on both the image data, the algorithm and the parameters used, as is clearly visible in Fig. 1. Most importantly, [1] has argued that a particular algorithm and parameter choice will create segmentations of different quality on different images. Even humans do not agree on a ‘correct’ image partition [3]. In an effort to address these concerns, we join [4,5,6,7,8,9] in suggesting the use of multiple segmentations per image.
The authors thank the INRIA associated team Tethys for support.
Fig. 1. An example of intersections of regions (IofRs) and the 18 segmentations that generated them: 3 from Mean Shift [10], 9 from Ncuts [11,12], and 6 from Felzenszwalb and Huttenlocher’s method [13]
In this paper, we show that a straightforward approach to integrating the information from multiple bottom-up segmentations of an image can provide a more robust basis for object class recognition and object segmentation than one image segmentation alone. Our approach relies on two basic principles: 1) groups of pixels which are contained in the same segmentation region in multiple segmentations should be consistently classified, and 2) the set of regions generated by multiple image segmentations provides robust features for classifying these pixel groups. The core approach involves generating multiple segmentations of each image, classifying each region in each segmentation, and allowing all of the regions to contribute to the final object map. Using multiple segmentations provides multiple opportunities for discovering object boundaries and creating regions which are appropriate feature supports, thereby providing robustness to the outlier poor image segmentations which inevitably occur. This makes it possible to incorporate a segmentation-based approach into a larger system without tedious and potentially futile parameter tuning. In addition to our core object recognition and object segmentation approach, we explore a number of intuitive extensions, questioning whether they provide worthwhile performance gains. The core approach considers all segmentation-generated regions to be equally useful, so the first extension we attempt is to learn the reliability of a region to predict its contents. Second, we attempt to go beyond independent region classification by modeling adjacent regions and utilizing a random field formulation for global consistency. The above system is trained using fully supervised data: images with each object carefully masked by a person. Given the expense of creating such a data set, our third extension considers using additional data with noisy, weaker supervision. Finally, since significant work exists on image classification without object localization, and object detection with bounding boxes (or other fixed shapes), we look at whether our approach improves such object information.
2 Related Work
The idea of using unsupervised image segmentation to obtain good spatial support is not new. In practice, however, approaches which use this idea have made strong and questionable assumptions. Russell et al. [4] assume that the entire object falls within one image segmentation region, which is unlikely given object complexity and the simplicity of bottom-up features. In fact, [1] argues that segmentation is rarely 'correct', and [2] shows that often an object encompasses multiple regions. The approaches in [14,15], as well as others, enforce spatial constraints on object parts that are too rigid for highly deformable objects. Many of the existing approaches to using bottom-up segmentation for recognition have higher complexity and are less intuitive than our own [8,14,16]. The object segmentation problem can also be approached by pixel or patch-based methods that do not use image segmentation regions, as in [17,18,19] and others. These approaches can be useful for repetitive textures like grass, or somewhat rigid objects like faces or cars, but are difficult to apply to deformable objects. They often provide very coarse segmentations by overlapping small patches [20]. The segmentation-integration method we advocate does not make any of these assumptions and thus is successful over a wide range of object classes.
3 Evaluation
The comparisons in this paper are performed on two difficult data sets, the MSRC 21-class data set [21] and the PASCAL Visual Object Challenge 2007 segmentation competition [22] data set. Each data set contains multiple object classes with extreme variation in deformation, scale, illumination, pose, and occlusion. All results are reported with respect to pixel-level performance, requiring that exact object masks be obtained. On the MSRC 21-class data set, we use the same training and test sets as Shotton et al. [21]. We also compare to the more recent work by Verbeek and Triggs [17], although it uses a different data split. On the PASCAL segmentation set, training and testing sets are as in the challenge, using only the 422 fully segmented images to train our core approach.
4 Core Approach
In this section, we describe the details of our core approach to the joint object recognition and object segmentation problem. The process involves three steps: generating multiple segmentations, describing and classifying each region, and combining the region classifications into an object map indicating which pixels belong to each object. We show that using a single segmentation to generate such object maps produces results of varying quality, while using all of the segmentations in concert produces comparable or improved object map accuracy.
4.1 Generating Multiple Segmentations
To capture the variety in color, edge contrast, texture, image size and noise that images possess, we produce multiple segmentations of each image. We assume (although not guarantee) that all of the object edges will be contained in the union of region outlines. We also assume that each pixel is contained in at least one region which has large enough spatial support for feature computation. Any method for generating multiple segmentations of an image could be used provided it satisfies these assumptions. Here, we describe the particular segmentation algorithms used to create the 18 segmentations for our system. The first three segmentations are generated by the mean shift-based segmentation algorithm [10] using pixel position, color (in the L*u*v* color space), and a histogram of quantized textons as features [5]. We perform segmentation of images with dimensions scaled to 0.5, 0.75 and 1 times their original lengths. The second set of nine segmentations is generated using the normalized cuts algorithm with the 'probability of boundary' features as in [11,12]. For each image size, segmentations with 9, 21 and 33 regions are generated (as suggested in [2]). The final six segmentations are generated using the graph-based method by Felzenszwalb and Huttenlocher (F-H) [13]. For each image size we use two values for the parameter k = {200, 500}, affecting the scale of the final regions. Examples of the segmentations we generate can be seen in Fig. 1. The granularity of the regions changes with the parameters. The regions created by each algorithm also have different natures. The mean shift segmentation regions are slightly rounded (due to the texture features), smaller, and with accurate boundaries. The normalized cuts regions are also rounded and tend to be of similar size at the cost of subdividing homogeneous regions or joining different textures. The F-H method captures corners and thin, wiry objects more easily, but also produces imprecise boundaries.
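As a rough illustration of how such a segmentation pool can be produced, the sketch below generates the six Felzenszwalb–Huttenlocher label maps (three image scales times two values of k) with scikit-image; the mean-shift and normalized-cuts segmentations used in the paper would come from other tools, the parameter names follow scikit-image ≥ 0.19, and an RGB input image is assumed.

```python
from skimage.segmentation import felzenszwalb
from skimage.transform import rescale, resize

def fh_segmentations(img):
    """Six F-H segmentations: scales {0.5, 0.75, 1.0} x k in {200, 500}.
    Each label map is resized back to the original grid with nearest-neighbour
    interpolation so that all segmentations are defined over the same pixels."""
    segs = []
    for s in (0.5, 0.75, 1.0):
        small = rescale(img, s, channel_axis=-1, anti_aliasing=True)
        for k in (200, 500):
            labels = felzenszwalb(small, scale=k, sigma=0.8, min_size=20)
            segs.append(resize(labels, img.shape[:2], order=0,
                               preserve_range=True, anti_aliasing=False).astype(int))
    return segs
```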
4.2 Describing and Classifying Regions from a Single Segmentation
An object map can be created from a single segmentation by classifying each region with one of many available classification algorithms. To instantiate this method, support vector machines as implemented in LIBSVM [23] work well. LIBSVM provides P (cr = k|r), the probability that the label of region cr is k, as in [24], and our classification of the region is argmaxk P (cr = k|r). We use three types of features to describe each region. Region position is given by the centroid normalized by the image dimensions. Color is described by a 100-dimensional histogram of quantized hue features [25]. The image structure within and near a region is captured by a 300-dimensional region-based context feature (RCF) [5], which is based on a distance-weighted histogram of quantized SIFTs [26]. This yields a 402-dimensional region-specific feature. Since overall image context is often informative, we also aggregate the color and RCF histograms over the entire image for a final set of 802 features. Examples of good and poor results of using this single-segmentation method can be seen for both data sets in Figs. 3 and 5, columns 5 and 6 respectively.
Fig. 2. Histograms of the number of PASCAL 2007 images (left) or object classes (right) for which each single segmentation provides the best or worst pixel accuracy. Each segmentation is the best or worst on at least one image, and most are the best or worst on at least one object class. This suggests that all of the segmentations are useful, and none should be discarded nor used exclusively.
Fig. 3. Object map results from the MSRC 21-class data set. Each map shows the most likely object at each pixel. The third column results from the core multiple segmentation method in (1), with corresponding confidence maps in column four. For comparison, one high and one low-accuracy result of using single segmentations is given for each image. The black pixels in all maps are 'void' in the ground truth. The top five rows show promising results, the last is less accurate.
Table 1. Pixel accuracy results for the MSRC 21-class data set in the form of the class-averaged pixel accuracy, overall pixel accuracy, and pixel accuracy for each class. The class-averaged and overall accuracies of the multiple segmentation approaches are comparable to using only the best single segmentation, and more importantly they are robust to the worst single segmentation. Using multiple segmentations out-performs the Textonboost approach of Shotton et al. [14], and is comparable to that of Verbeek and Triggs [17] (however [17] uses a different split of the data).

             Shotton [14]  Verbeek [17]  Worst seg  Best seg  All segs (1)  All segs (2)
class avg    57.7          64.0          49.6       59.8      60.3          59.9
pixel avg    72.2          73.5          63.3       72.2      74.3          74.2
building     62            52            48         61        68            68
grass        98            87            80         89        92            92
tree         86            68            69         79        81            81
cow          58            73            51         57        58            57
sheep        50            84            61         66        65            63
sky          83            94            87         92        95            95
aeroplane    60            88            73         81        85            82
water        53            73            71         80        81            81
face         74            70            57         67        75            76
car          63            68            47         63        65            65
bike         75            74            56         66        68            67
flower       63            89            34         52        53            54
sign         35            33            28         31        35            34
bird         19            19            15         26        23            23
book         92            78            75         88        85            84
chair        15            34            16         27        16            16
road         86            89            76         80        83            83
cat          54            46            28         52        48            47
dog          19            49            17         32        29            30
body         62            54            40         45        48            46
boat          7            31            11         30        15            14
Fig. 4. Example of object segmentation results for a PASCAL VOC2007 image generated using single and multiple image segmentations. The top-left image is the original, the top-middle is the ground truth labeling and the top-right shows the most likely class using all of the segmentations combined with (2). The beige pixels are denoted ‘void’ in the ground truth, they are not generated by our method. The last three rows show the object maps generated using each individual segmentation.
Fig. 4 displays all 18 results for one image. It is visually evident that the object map quality is extremely variable. Tables 1 and 2 confirm this quantitatively. The per-pixel accuracies of the best and worst-performing single segmentations are given for each class, and as an average of the classes. The disparities in class-averaged performance between the best and worst single segmentations are large: 10.2% on the MSRC data set, and 5.5% on the PASCAL data set.
Fig. 5. Object map results from the PASCAL VOC2007 segmentation data set. Each map shows the most confident class at each pixel. The third column was generated using multiple segmentations with (2) and the fourth column with the random field method (β = 0.5). For comparison, a good and bad single segmentation result is given for each image. The beige pixels in columns two and three are ‘void’ in the ground truth and not considered in the pixel accuracy results. The first result is promising, the girl and most of the table are correctly labeled. The second result is promising for the difficult dog and cat classes, however the background is misclassified. The third row shows a perfect segmentation but misclassified as ‘cow’, likely due to the relative scarcity of brown sheep. The final result is a complete failure. (Best viewed in color.)
Is there one segmentation/parameter/feature combination which would give the 'best' partition for every image? As shown in [1], the answer is 'no'. Our results on the PASCAL 2007 data suggest the same conclusion. In Table 2, the best overall segmentation has lower classification accuracy than the worst overall segmentation on some classes. This suggests that using only one segmentation is disadvantageous. Fig. 2 shows the number of images and the number of object classes for which each segmentation gives the worst or best accuracy (for the PASCAL data set). Every segmentation is the best or worst on at least one image, and most of the segmentations are the best or worst on at least one object class. This suggests that none of the segmentations should be discarded as they can all produce useful results, but no one segmentation dominates. Instead of trying to choose one segmentation algorithm, we need to combine the strengths of all the algorithms. We next explore how to combine the individual segmentation results into a more robust object delineation.
4.3 Integrating Multiple Segmentations
Table 2. Pixel accuracy results for the PASCAL VOC2007 segmentation data set. Given for each approach are the class-averaged pixel accuracy and pixel accuracy for each class. We compare our approach with that of the Oxford Brookes entry [27] into the segmentation competition. The TKK [28] entry had higher performance, but as an entry into the detection challenge it was trained using a much larger data set of thousands of images. Our overall accuracy is much higher than that of Brookes. Overall, the combined segmentation methods both out-perform the single segmentations.

              Brookes [27]  TKK [28]  Worst seg  Best seg  All segs (1)  All segs (2)  CRF
class avg     8.5           30.4      12.7       18.2      19.1          19.6          19.3
background    78            23        71         60        55            59            47
aeroplane     6             19        10         15        28            27            25
bicycle       0             21        7          1         1             1             1
bird          1             5         1          12        9             8             12
boat          1             16        1          1         2             2             1
bottle        0             3         8          2         1             1             1
bus           9             1         29         29        33            32            34
car           5             78        2          11        13            14            15
cat           10            1         14         18        17            14            16
chair         1             3         3          4         3             4             3
cow           2             1         1          4         8             8             7
dining table  11            23        7          28        31            32            34
dog           1             69        0          7         9             9             6
horse         6             44        13         23        23            24            23
motorbike     6             42        20         13        16            15            14
person        29            0         50         79        80            81            87
potted plant  2             65        0          8         8             11            8
sheep         2             30        5          16        19            26            27
sofa          1             35        9          1         1             1             1
train         11            89        11         21        28            28            28
tv/monitor    1             71        8          32        17            17            18

Our approach to combining multiple segmentations revolves around two principles. First, pixels which are grouped together by every segmentation should be classified consistently. So, the 'basic units' of our approach are intersections of regions (IofRs), pixels which belong to the same region in every segmentation, as in Fig. 1. Region intersections differ from superpixels [29] as they are constructed by intersecting larger regions, not by image segmentation with small kernel bandwidths or enforcing many regions. Thus IofRs may in fact be quite large in homogeneous image sections (such as the wall in Fig. 1), or small in heterogeneous image sections (such as the people). The second principle is that the original regions provide better support for extracting features than the IofRs. The IofRs may be too small for computing features. Also, the variation in the segmentation-generated regions provides multiple features of different scales and content, increasing the information available. Thus, our approach is to classify each IofR by combining the information from all of the individual segmentations. Let i be an IofR, and $r_i^s$ the region which contains i in segmentation s. Let $c_i$ be the class label of i, k a specific class label, and I the image data. Then we define segmentation integration method 1 to be:

$$P(c_i = k \mid I) \;\propto\; \sum_s P(c_i^s = k \mid r_i^s, I) \qquad (1)$$
This average over the individual regions’ confidences amounts to marginalizing over the regions containing i, assuming they are each equally likely. As before, the class assigned to an IofR is argmaxk P (ci = k|I). Fig. 3 shows selected results for the MSRC 21-class data set. Qualitatively, the results of this multiple segmentation approach are comparable to using the best single segmentation, and robust to the poor segmentation. This conclusion is confirmed quantitatively on both data sets in Tables 1 and 2. Using multiple segmentations gives slightly higher class-averaged accuracy than using the best
single segmentation, and the results are robust to the poor performance of the worst single segmentation. For the MSRC data set, our class-averaged and overall pixel accuracy results out-perform those of [14], and are comparable to those of [17]. For the PASCAL 2007 data set, we out-perform the Oxford Brookes entry [27]. The TKK entry [28] does out-perform ours, however it is not directly comparable as it was an entry in the detection challenge and so trained on thousands of additional images not in the 422-image segmentation training set.
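The following NumPy sketch (our own illustration, with hypothetical data layouts) applies integration method 1 of equation (1): every region of every segmentation adds its class-probability vector to the pixels it covers, and pixels that always share a region, i.e. an IofR, accumulate identical scores.

```python
import numpy as np

def integrate_segmentations(segs, region_probs, n_classes):
    """Integration method 1 (Eq. 1).
    segs: list of label maps of equal shape (one per segmentation).
    region_probs[s][r]: class-probability vector for region r of segmentation s."""
    h, w = segs[0].shape
    scores = np.zeros((h, w, n_classes))
    for s, labels in enumerate(segs):
        for r, probs in region_probs[s].items():
            scores[labels == r] += np.asarray(probs)   # P(c_i^s = k | r_i^s, I)
    object_map = scores.argmax(axis=-1)                # most likely class per pixel
    return object_map, scores
```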
5 Extensions
We have shown that a straightforward method for combining multiple segmentations can lead to robust and accurate object recognition and object segmentation. There are many extensions which could be suggested for our system; here we explore a number of them and ask whether the added complexity is worthwhile.

5.1 Determining the Reliability of a Region's Classification
The core approach assumes that all of the segmentations should have an equal vote in the final classification. Since segmentations differ in quality, another reasonable assumption is that the reliability of a region's prediction corresponds to the number of objects it overlaps. Hoiem et al. [6] suggest learning a classifier to predict the 'homogeneity' of a region with respect to the class labels. If we consider the homogeneity as a measure of the likelihood of a particular region, $P(r_i^s \mid I)$, then we can write segmentation integration method 2 as:

$$P(c_i = k \mid I) \;\propto\; \sum_s P(r_i^s \mid I)\, P(c_i^s = k \mid r_i^s, I) \qquad (2)$$
The classifier used to determine $P(r_i^s \mid I)$ is a set of boosted decision trees, trained using logistic AdaBoost [30,31]. We use 20 trees with 16 leaf nodes each to avoid overfitting. The region features used are normalized average position (2D), RCF (300D), color histogram (100D), region size divided by image size (1D), and the number of IofRs encompassed (1D). Figs. 4 and 5 show qualitative results on the PASCAL 2007 images. We can see once again that using multiple segmentations produces robust object maps. Quantitatively, Tables 1 and 2 show the same conclusion for the overall and class-averaged pixel accuracies. Compared to our first method of integrating segmentations, however, the results are mixed. On the MSRC data set, the original method was slightly better, but on the PASCAL data incorporating homogeneity provides a slight improvement. Whether the expense of computing the homogeneity score for a region is justified is questionable. Although the official metric for the PASCAL challenge was class-averaged accuracy, examining the overall pixel accuracy provides a very different picture, with Brookes achieving 58.4%, our method achieving 50.1%, and TKK achieving 24.4% accuracy. This order reversal is due to the trade-off between performance on the background class versus other objects, with our approach being the most balanced. These results also demonstrate the importance of using multiple relevant evaluation metrics.
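A possible rendering of integration method 2 (equation (2)) is sketched below; scikit-learn's gradient-boosted trees stand in for the logistic-AdaBoost homogeneity classifier of the paper, and the feature and probability containers are hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def train_homogeneity_classifier(features, is_homogeneous):
    """Stand-in for the boosted homogeneity classifier: 20 trees, <=16 leaves each."""
    clf = GradientBoostingClassifier(n_estimators=20, max_leaf_nodes=16)
    return clf.fit(features, is_homogeneous)           # 0/1 homogeneity labels

def integrate_weighted(segs, region_probs, region_features, clf, n_classes):
    """Integration method 2 (Eq. 2): weight each region's class distribution
    by its predicted homogeneity P(r | I) before accumulating per pixel."""
    scores = np.zeros(segs[0].shape + (n_classes,))
    for s, labels in enumerate(segs):
        for r, probs in region_probs[s].items():
            w_r = clf.predict_proba([region_features[s][r]])[0, 1]   # P(r | I)
            scores[labels == r] += w_r * np.asarray(probs)
    return scores.argmax(axis=-1), scores
```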
5.2 Incorporating Contextual Information
Our approach thus far has classified regions independently, incorporating contextual and spatial information implicitly by using RCFs, and by using large regions from some segmentations to smooth the labeling of smaller regions in others. One extension is to use explicit spatial information, specifically through a random field formulation of our problem. We can redefine the image labeling problem as an energy minimization, considering potentials of single and pairs of adjacent IofRs in the following manner:

$$E(C) = \sum_i E(c_i) + \sum_{i,j} E(c_i, c_j) \qquad (3)$$

where C is the labeling of the entire image and i, j are neighboring IofRs. The unary potentials are defined as $E(c_i) = -\log P(c_i \mid I)$ to penalize uncertainty, computed using (2). The binary potentials penalize discontinuity between adjacent labels as suggested by [6]:

$$E(c_i, c_j) = \begin{cases} 0 & \text{if } c_i = c_j, \\ \beta\,(\log p_{ij} - \log(1 - p_{ij})) & \text{otherwise.} \end{cases} \qquad (4)$$

We enforce that $E(c_i, c_j) \ge 0$ and use graph cuts with alpha-expansion to minimize the energy [32]. The $p_{ij}$ reflect the likelihood that the parent regions of adjacent IofRs belong to the same object in each segmentation:

$$p_{ij} \propto \prod_s p_{ij}^s, \qquad p_{ij}^s = \begin{cases} 1 & \text{if } r_i^s = r_j^s, \\ P(c_i^s = c_j^s \mid r_i^s, r_j^s, I) & \text{otherwise.} \end{cases} \qquad (5)$$

Classifiers for $P(c_i^s = c_j^s \mid r_i^s, r_j^s, I)$ are learned by logistic AdaBoost [6] with the following features. The union of the two regions is described using normalized average position, RCF, and color histogram. To compare two regions we use the smaller region size divided by the larger region size, the symmetrical KL-divergence between the individual regions' RCFs (scaled between 0 and 1), the KL-divergence between the two color histograms, and the normalized difference in region positions. From Table 2, we can see that the random field results are mixed. We hypothesize that the use of multiple segmentations of various scales, and RCFs which model the image surrounding a region, cause most of the label smoothing to occur without the random field. The random field formulation can also cause undesired over-smoothing, as in Fig. 5. Finally, the pairwise potentials are difficult to learn due to inaccurate ground truth labeling on object boundaries, as seen throughout this paper. Despite these difficulties the random field increases certain class accuracies (bird, person, etc.), so its use warrants further study.
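For concreteness, the sketch below writes out the potentials of equations (3)–(5) (with the clamping to non-negative pairwise costs mentioned above and β = 0.5, the value used in Fig. 5); the minimization by graph cuts with alpha-expansion would be done with an external solver and is not shown.

```python
import numpy as np

def unary_energy(p_class):
    """E(c_i) = -log P(c_i | I), one value per (IofR, label)."""
    return -np.log(np.clip(p_class, 1e-12, 1.0))

def pairwise_energy(p_same, beta=0.5):
    """Cost of Eq. (4) for c_i != c_j, clamped to be non-negative."""
    p = np.clip(p_same, 1e-12, 1.0 - 1e-12)
    return max(beta * (np.log(p) - np.log(1.0 - p)), 0.0)

def total_energy(labeling, unaries, edges, p_same, beta=0.5):
    """Energy of Eq. (3) for a given labeling of the IofRs.
    unaries[i]: per-label unary energies; edges: list of neighboring (i, j) pairs;
    p_same: same-object probabilities p_ij aligned with edges."""
    e = sum(unaries[i][labeling[i]] for i in range(len(labeling)))
    for (i, j), p in zip(edges, p_same):
        if labeling[i] != labeling[j]:
            e += pairwise_energy(p, beta)
    return e
```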
5.3 Incorporating Weakly Labeled Training Data
So far, all of the training data has been fully labeled with object masks. Generating such ground truth for very large data sets is prohibitively expensive. Even
the small number of web-based efforts to label data, such as LabelMe [33] and Peekaboom [34], produce inaccurate labels like the 'void' labels throughout this paper. A possible solution to this problem is the use of weakly labeled data to increase training set size. In this section, we increase the size of the PASCAL 2007 segmentation training set from the original 422 fully labeled images by augmenting it with 400 random, weakly labeled images from the larger PASCAL set. The weak labels will take two forms: bounding boxes as in the PASCAL ground truth, and image-level object labels which contain no localization information. The weakly labeled data is incorporated into our approach by assuming that the weak ground truth labels are noisy object masks. If multiple bounding boxes or image labels exist for one pixel, they are all considered 'correct' for training. We use the augmented training sets to learn the individual region classification probabilities. Since the noise in the bounding box labels lies around the object outlines, the extra images are not used to relearn the homogeneity measure. The procedure is otherwise unchanged. The results of this process can be seen in Fig. 6. Using either augmented data set, the overall class-averaged accuracy increases by nearly 3-5% for both the best single segmentation and the multiple segmentations methods. These results show that a relatively small amount of additional, weakly-labeled training data can improve recognition performance.
5.4 Using Object Detection to Guide Object Segmentation
The final extension we explore is the use of other object recognition systems to provide priors for our object segmentation. Specifically, many object recognition algorithms provide image classification or bounding boxes around identified objects. The other two PASCAL challenges were exactly these tasks. We ask how much using these systems’ outputs could potentially improve our results. We perform a preliminary study by using the ground truth bounding boxes for the PASCAL segmentation challenge test images. Let Wk be a map of pixels
Fig. 6. Class-averaged pixel accuracy on the PASCAL VOC2007 using the 422 fully segmented training images, and augmenting the training set with 400 images with weak image labels (no localization), and weak bounding box labels. Using a relatively small amount of additional, weakly labeled data, the results improve by almost 4%.
Fig. 7. The effects of using bounding boxes of various sizes as priors for object segmentation on three representative classes from the PASCAL VOC2007. The bars show object mask accuracy improvement using our approach versus the boxes alone. Bar height shows the ratio between the percent of background pixels correctly removed from the boxes by object segmentation, and the percent of object pixels incorrectly removed. The bars are all above 1; object segmentation always improves upon the bounding boxes.
inside the bounding boxes of object k. Our confidence in class k at pixel q will be: T (cq = k|I) = Wk (q)P (ci = k|I), where i is the IofR containing q. We repeat the experiment with increasingly larger bounding boxes until they fill the image, generating an image classification. Using the ground truth bounding boxes as a prior improves the class-averaged pixel accuracy to 79.4%, while using image classification improves the accuracy to 58.9%. The accuracy improvement decreases monotonically between these extremes. This is the best performance we can expect with perfect object detection or image classification. Of the pixels labeled ‘object’ by a bounding box mask, some are actually object pixels, while others are actually background. Object segmentation labels only a subset of the bounding box pixels as ‘object’. We evaluate the amount our object segmentation improves on the bounding boxes by computing the percent of background pixels in the boxes we correctly remove versus the percent of object pixels we incorrectly remove. Fig. 7 shows three trends seen in the behavior of this measure with increasing box size. Since there are more background pixels incorrectly contained in larger bounding boxes, the intuitive trend would be for object segmentation to offer increasing improvement with increasing box size, as for the sheep class. However, object classes such as cats show the opposite trend. We speculate this is due to confusion between classes; as the boxes of other objects increase in size they overlap the true cat pixels and the cat is misclassified. The third trend is for minimal change between box sizes and is seen in more difficult objects. These patterns are interesting and suggest that both strong (bounding box) and weak (image classification) priors can provide large overall improvement, but the effects on individual classes are varied. Most importantly, the bars on plots as in Fig. 7 for all classes and box sizes were above 1, showing that our method always improves on the ground truth boxes.
As image classification and object detection systems improve it will be important to compare future results to the ‘ideal’ situation here.
6 Conclusions
We have presented an intuitive method for recognizing and segmenting objects. Our approach relies on multiple bottom-up image segmentations to support top-down object recognition and allows us to use well-established methods for classification. Aggregating knowledge across image scales and features through multiple segmentations smooths our image labeling in a data-driven manner, increasing robustness. We have presented results on the MSRC 21-class data set and the PASCAL VOC2007 segmentation challenge data set which show that the segmentation combination method not only performs well, but is able to cope with large variation in segmentation quality. In addition to our core approach, we have suggested extensions and studied whether they are beneficial. Modeling region reliability proved difficult, although class-specific performance improvement warrants further study. On the other hand, increasing the training set size with a relatively small amount of weakly labeled data significantly improved results, and image-level weak labels were sufficient. We also concluded that explicitly incorporating spatial information in a random field was not worthwhile given the implicit spatial information captured in our approach. Finally, we took a preliminary look at using image classification and object detection as a prior for object segmentation. In conclusion, we believe that this paper stresses two important issues: the importance of algorithm robustness, and the importance of examining whether algorithm extensions reward their added complexity with improved performance.
References
1. Unnikrishnan, R., Pantofaru, C., Hebert, M.: Toward objective evaluation of image segmentation algorithms. PAMI 29 (2007)
2. Malisiewicz, T., Efros, A.A.: Improving spatial support for objects via multiple segmentations. In: BMVC (2007)
3. Martin, D., Fowlkes, C., Tal, D., Malik, J.: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: ICCV (2001)
4. Russell, B., Efros, A., Sivic, J., Freeman, W., Zisserman, A.: Using multiple segmentations to discover objects and their extent in image collections. In: CVPR (2006)
5. Pantofaru, C., Dorkó, G., Schmid, C., Hebert, M.: Combining regions and patches for object class localization. In: Beyond Patches Workshop, CVPR (2006)
6. Hoiem, D., Efros, A., Hebert, M.: Recovering surface layout from an image. IJCV 75 (2007)
7. Azran, A., Ghahramani, Z.: Spectral methods for automatic multiscale data clustering. In: CVPR (2006)
8. Tu, Z., Chen, Z., Yuille, A.L., Zhu, S.C.: Image parsing: Unifying segmentation, detection, and recognition. IJCV (2005)
9. Borenstein, E., Malik, J.: Shape guided object segmentation. In: CVPR (2006)
10. Comaniciu, D., Meer, P.: Mean shift: A robust approach toward feature space analysis. PAMI (2002)
11. Fowlkes, C., Martin, D., Malik, J.: Learning affinity functions for image segmentation: Combining patch-based and gradient-based approaches. In: CVPR (2003)
12. Martin, D., Fowlkes, C., Malik, J.: Learning to detect natural image boundaries using local brightness, color and texture cues. PAMI (2003)
13. Felzenszwalb, P., Huttenlocher, D.: Efficient graph-based image segmentation. IJCV 59, 167–181 (2004)
14. Shotton, J., Winn, J., Rother, C., Criminisi, A.: Textonboost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3951. Springer, Heidelberg (2006)
15. Winn, J., Jojic, N.: Locus: Learning object classes with unsupervised segmentation. In: ICCV (2005)
16. Tu, Z., Zhu, S.C.: Image segmentation by data-driven markov chain monte carlo. PAMI 24, 657–673 (2002)
17. Verbeek, J., Triggs, B.: Region classification with markov field aspect models. In: CVPR (2007)
18. Winn, J., Shotton, J.: The layout consistent random field for recognizing and segmenting partially occluded objects. In: CVPR (2006)
19. Kumar, M., Torr, P., Zisserman, A.: Obj cut. In: CVPR (2005)
20. Leibe, B., Schiele, B.: Interleaved object categorization and segmentation. In: BMVC (2003)
21. Shotton, J., Winn, J., Rother, C., Criminisi, A.: The MSRC 21-class object recognition database (2006)
22. Everingham, M., Van Gool, L., Williams, C., Winn, J., Zisserman, A.: The PASCAL VOC 2007 (2007), http://www.pascal-network.org/challenges/VOC/voc2007
23. Chang, C.C., Lin, C.J.: LIBSVM: a library for support vector machines, Software (2001), http://www.csie.ntu.edu.tw/~cjlin/libsvm
24. Wu, T.F., Lin, C.J., Weng, R.C.: Probability estimates for multi-class classification by pairwise coupling. Journal of Machine Learning Research 5, 975–1005 (2004)
25. van de Weijer, J., Schmid, C.: Coloring local feature extraction. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3951. Springer, Heidelberg (2006)
26. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. IJCV 60, 91–110 (2004)
27. Ladicky, L., Kohli, P., Torr, P.: Oxford Brookes entry, PASCAL VOC 2007 Segmentation Challenge (2007), http://www.pascal-network.org/challenges/VOC/voc2007
28. Viitaniemi, V.: Helsinki University of Technology, PASCAL VOC 2007 Challenge (2007), http://www.pascal-network.org/challenges/VOC/voc2007
29. Ren, X., Malik, J.: Learning a classification model for segmentation. In: ICCV (2003)
30. Friedman, J., Hastie, T., Tibshirani, R.: Additive logistic regression: a statistical view of boosting. Annals of Statistics (2000)
31. Collins, M., Schapire, R., Singer, Y.: Logistic regression, Adaboost and Bregman distances. Machine Learning (2002)
32. Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. PAMI 23, 1222–1239 (2001)
33. Russell, B., Torralba, A., Murphy, K., Freeman, W.T.: Labelme: a database and web-based tool for image annotation. IJCV (2007)
34. von Ahn, L., Liu, R., Blum, M.: Peekaboom: A game for locating objects in images. In: ACM CHI (2006)
A Linear Time Histogram Metric for Improved SIFT Matching
Ofir Pele and Michael Werman
School of Computer Science and Engineering, The Hebrew University of Jerusalem
{ofirpele,werman}@cs.huji.ac.il
Abstract. We present a new metric between histograms such as SIFT descriptors and a linear time algorithm for its computation. It is common practice to use the L2 metric for comparing SIFT descriptors. This practice assumes that SIFT bins are aligned, an assumption which is often not correct due to quantization, distortion, occlusion etc. In this paper we present a new Earth Mover’s Distance (EMD) variant. We show that it is a metric (unlike the original EMD [1] which is a metric only for normalized histograms). Moreover, it is a natural extension of the L1 metric. Second, we propose a linear time algorithm for the computation of the EMD variant, with a robust ground distance for oriented gradients. Finally, extensive experimental results on the Mikolajczyk and Schmid dataset [2] show that our method outperforms state of the art distances.
1 Introduction
Histograms of oriented gradient descriptors [3,4,5,6] are ubiquitous tools in numerous computer vision tasks. One of the most successful is the Scale Invariant Feature Transform (SIFT) [3]. In a recent performance evaluation [2] the SIFT descriptor was shown to outperform other local descriptors. The SIFT descriptor has proven to be successful in applications such as object recognition [3,7,8,9], object class detection [10,11,12], image retrieval [13,14,15,16], robot localization [17], building panoramas [18] and image classification [19]. It is common practice to use the L2 metric for comparing SIFT descriptors. This practice assumes that the histogram domains are aligned. However this assumption is violated through quantization, shape deformation, detector localization errors, etc. Although the SIFT algorithm has steps that reduce the effect of quantization, this is still a liability, as can be seen by the fact that increasing the number of orientation bins negatively affects performance [3]. The Earth Mover's Distance (EMD) [1] is a cross-bin distance that addresses this alignment problem. EMD is defined as the minimal cost that must be paid to transform one histogram into the other, where there is a "ground distance" between the basic features that are aggregated into the histogram. There are two main problems with the EMD. First, it is not a metric between non-normalized histograms. Second, for a general ground distance, it has a high run time.
In this paper we present an Earth Mover’s Distance (EMD) variant. We show that it is a metric, if it is used with a metric ground distance. Second, we present a linear time algorithm for the computation of the EMD variant, with a robust ground distance for oriented gradients. Finally, we present experimental results for SIFT matching on the Mikolajczyk and Schmid dataset [2] showing that our method outperforms state of the art distances such as L2 , EMD-L1 [20], diffusion distance [21] and EMDMOD [22,23]. This paper is organized as follows. Section 2 is an overview of previous work. Section 3 introduces the new EMD variant. Section 4 introduces the new SIFT metric and the linear time algorithm for its computation. Section 5 describes the experimental setup and Section 6 presents the results. Finally, conclusions are drawn in Section 7.
2 Previous Work
Early work using cross-bin distances for histogram comparison can be found in [22,24,25,26]. Shen and Wong [24] proposed to unfold two integer histograms, sort them and then compute the L1 distance between the unfolded histograms. To compute the modulo matching distance between cyclic histograms they proposed taking the minimum from all cyclic permutations. This distance is equivalent to EMD between two normalized histograms. Werman et al. [25] showed that this distance is equal to the L1 distance between the cumulative histograms. They also proved that matching two cyclic histograms by examining only cyclic permutations is in effect optimal. Cha and Srihari [27] rediscovered these algorithms and described a character writer identification application. Werman et al. [22] proposed an O(M log M ) algorithm for finding a minimal matching between two sets of M points on a circle. The algorithm can be adapted to compute the EMD between two N -bin, normalized histograms with time complexity O(N ) (Appendix A in [23]). Peleg et al. [26] suggested using the EMD for grayscale images and using linear programming to compute it. Rubner et al. [1] suggested using EMD for color and texture images. They computed the EMD using a specific linear programming algorithm - the transportation simplex. The algorithm worst case time complexity is exponential. Practical run time was shown to be super-cubic (Ω(N 3 ) ∩ O(N 4 )). Interior-point algorithms with time complexity O(N 3 logN ) can also be used. All of these algorithms have high computational cost. Indyk and Thaper [28] proposed approximating the EMD by embedding it into an Euclidean space. Embedding time complexity is O(N d log Δ), where N is the feature set size, d is the feature space dimension and Δ is the diameter of the union of the two feature sets. Recently Ling and Okada proposed general cross-bin distances for histogram descriptors. The first is EMD-L1 [20]; i.e. EMD with L1 as the ground distance. To execute the EMD-L1 computation, they propose a tree-based algorithm, TreeEMD. Tree-EMD exploits the fact that a basic feasible solution of the simplex algorithm-based solver forms a spanning tree when the EMD-L1 is modeled as a network flow optimization problem. The worst case time complexity is
exponential. Empirically, they show that this new algorithm has an average time complexity O(N 2 ). Ling and Okada also proposed the diffusion distance [21]. They defined the difference between two histograms to be a temperature field. The diffusion distance was derived as the sum of dissimilarities over scales. The algorithm run time is linear. For a comprehensive review of EMD and its applications in computer vision we refer the reader to Ling and Okada’s paper [20].
3 The New EMD Variant - $\widehat{EMD}$
This section introduces $\widehat{EMD}$, a new Earth Mover's Distance variant. We show that it is a metric (unlike the original EMD [1] which is a metric only for normalized histograms). Moreover, it is a natural extension of the L1 metric. The Earth Mover's Distance (EMD) [1] is defined as the minimal cost that must be paid to transform one histogram into the other, where there is a "ground distance" between the basic features that are aggregated into the histogram. Given two histograms P, Q the EMD as defined by Rubner et al. [1] is:

$$EMD(P,Q) = \min_{\{f_{ij}\}} \frac{\sum_{i,j} f_{ij} d_{ij}}{\sum_{i,j} f_{ij}} \quad \text{s.t.} \tag{1}$$

$$\sum_{j} f_{ij} \le P_i\,, \quad \sum_{i} f_{ij} \le Q_j\,, \quad \sum_{i,j} f_{ij} = \min\Big(\sum_i P_i, \sum_j Q_j\Big)\,, \quad f_{ij} \ge 0 \tag{2}$$

where $\{f_{ij}\}$ denotes the flows. Each $f_{ij}$ represents the amount transported from the ith supply to the jth demand. We call $d_{ij}$ the ground distance between bin i and bin j in the histograms. We propose $\widehat{EMD}$:

$$\widehat{EMD}_{\alpha}(P,Q) = \Big(\min_{\{f_{ij}\}} \sum_{i,j} f_{ij} d_{ij}\Big) + \Big|\sum_i P_i - \sum_j Q_j\Big| \times \alpha \max_{i,j}\{d_{ij}\} \quad \text{s.t. Eq. 2} \tag{3}$$
Note that for two probability histograms (i.e. total mass equal to one) EMD and $\widehat{EMD}$ are equivalent. However, if the masses are not equal, $\widehat{EMD}$ adds one supplier or demander such that the masses on both sides become equal. The ground distance between this supplier or demander to all other demanders or suppliers respectively is set to be α times the maximum ground distance. In addition, $\widehat{EMD}$ is not normalized by the total flow. Note that $\widehat{EMD}$ with α = 0.5 and with the Kronecker δ ground distance multiplied by two (d_ij = 0 if i = j, 2 otherwise) is equal to the L1 metric. If α ≥ 0.5 and the ground distance is a metric, $\widehat{EMD}$ is a metric (unlike the original EMD [1] which is a metric only for normalized histograms). A proof is given in Appendix B [23]. Being a metric can lead to more efficient data structures and search algorithms.
We now give two examples of when the usage of $\widehat{EMD}$ is more appropriate (both of which are the case for the SIFT descriptors). The first is when the total mass of the histograms is important. For example, let P = (1, 0), Q = (0, 1), P' = (9, 0), Q' = (0, 9). Using L1 as a ground distance and α = 1, EMD(P, Q) = 1 = EMD(P', Q'), while $\widehat{EMD}$(P, Q) = 1 < 9 = $\widehat{EMD}$(P', Q'). The second is when the difference in total mass between histograms is a distinctive cue. For example, let P = (1, 0), Q = (1, 7). Using L1 as a ground distance and α = 1, EMD(P, Q) = 0, while $\widehat{EMD}$(P, Q) = 7.
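As an illustration of Eq. (3), the following sketch computes $\widehat{EMD}$ for small histograms by casting the flow problem of Eq. (2) as a linear program. The function name emd_hat and the use of scipy.optimize.linprog are illustrative choices of ours; this brute-force formulation is only meant to make the definition concrete, not to replace the linear time algorithm of Sect. 4 or the distributed implementation.

```python
import numpy as np
from scipy.optimize import linprog

def emd_hat(P, Q, D, alpha=1.0):
    """EMD variant of Eq. (3): min-cost flow under the constraints of Eq. (2),
    plus alpha * (maximum ground distance) * |total mass difference|."""
    n, m = len(P), len(Q)
    c = D.reshape(-1)                       # cost of each flow variable f_ij
    A_ub = np.zeros((n + m, n * m))         # row sums <= P_i, column sums <= Q_j
    for i in range(n):
        A_ub[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_ub[n + j, j::m] = 1.0
    b_ub = np.concatenate([P, Q])
    A_eq = np.ones((1, n * m))              # total flow = min(sum P, sum Q)
    b_eq = [min(P.sum(), Q.sum())]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun + abs(P.sum() - Q.sum()) * alpha * D.max()

# The two examples above, with the L1 ground distance on two bins:
D = np.array([[0., 1.], [1., 0.]])
print(emd_hat(np.array([1., 0.]), np.array([0., 1.]), D))   # 1.0
print(emd_hat(np.array([9., 0.]), np.array([0., 9.]), D))   # 9.0
print(emd_hat(np.array([1., 0.]), np.array([1., 7.]), D))   # 7.0
```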
4 The SIFTDIST Metric
This section introduces SIFTDIST, a new metric between SIFT descriptors. It is common practice to use the L2 metric for comparing SIFT descriptors. This practice assumes that the SIFT histograms are aligned, so that a bin in one histogram is only compared to the corresponding bin in the other histogram. This is often not the case, due to quantization, distortion, occlusion, etc. Our distance has three instead of two matching costs: zero-cost for exact corresponding bins, one-cost for neighboring bins and two-cost for farther bins and for the extra mass. Thus the metric is robust to small errors and outliers. The section first defines SIFTDIST and then presents a linear time algorithm for its computation.

4.1 SIFTDIST Definition
This section first describes the SIFT descriptor. Second, it proposes the Thresholded Modulo Earth Mover's Distance (EMDTMOD), an EMD variant for oriented gradient histograms. Finally it defines the SIFTDIST. The SIFT descriptor [3] is an M × M × N histogram. Each of the M × M spatial cells contains an N-bin histogram of oriented gradients. See Fig. 1 for a visualization of SIFT descriptors. Let A = {0, . . . , N − 1} be N points, equally spaced, on a circle. The modulo L1 distance for two points i, j ∈ A is:

$$D_{MOD}(i, j) = \min(|i - j|, N - |i - j|) \tag{4}$$

Fig. 1. (a) 4 × 4 × 8 SIFT descriptor. (b) 4 × 4 × 16 SIFT descriptor.
Fig. 2. The flow network of EMDTMOD for N = 8. The brown vertexes on the inner circle are the bins of the "supply" histogram, P. The cyan vertexes on the outer circle are the bins of the "demand" histogram, Q. We assume without loss of generality that $\sum_i P_i \ge \sum_j Q_j$; thus we add one infinite sink in the middle. The short gray edges are zero-cost edges. The long black edges are one-cost edges. Two-cost edges that turn the graph into a full bi-partite graph between sinks and sources are not colored for visibility.
The thresholded modulo L1 distance is defined as:

$$D_{TMOD}(i, j) = \min(D_{MOD}(i, j), 2) \tag{5}$$

EMDTMOD is defined as $\widehat{EMD}$ (Eq. 3) with ground distance $D_{TMOD}(i, j)$ and α = 1. $D_{TMOD}$ is a metric. It follows from Appendix B [23] that EMDTMOD is also a metric. Using EMDTMOD the transportation cost to the two nearby bins is 1, while for farther bins and for the extra mass it is 2 (see Fig. 2). For example, for N = 16 we assume that all differences larger than 22.5° are caused by outliers and should be assigned the same transportation cost. The SIFTDIST between two SIFT descriptors is defined as the sum over the EMDTMOD between all the M × M oriented gradient histograms. SIFTDIST is also an EMD, where edges between spatial bins have an infinite cost. $\widehat{EMD}$ (Eq. 3) has two advantages over Rubner's EMD (Eq. 1) for comparing SIFT descriptors. First, the difference in total gradient magnitude between SIFT spatial cells is an important distinctive cue. Using Rubner's definition this cue is ignored. Second, $\widehat{EMD}$ is a metric even for non-normalized histograms.
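To make the definition concrete, the sketch below evaluates SIFTDIST directly from its definition: the thresholded modulo ground distance of Eqs. (4)-(5) and a sum of EMDTMOD values over the spatial cells, reusing the emd_hat function from the sketch in Sect. 3. It is a brute-force reference, useful e.g. for checking a fast implementation of the linear time algorithm of Sect. 4.2; the function names are illustrative, not the authors' code.

```python
import numpy as np

def d_tmod(n_bins):
    # thresholded modulo L1 ground distance of Eqs. (4)-(5)
    i = np.arange(n_bins)
    d = np.abs(i[:, None] - i[None, :])
    d = np.minimum(d, n_bins - d)          # modulo (cyclic) L1 distance
    return np.minimum(d, 2)                # threshold at 2

def sift_dist(desc1, desc2, n_ori=8):
    # sum of EMD_TMOD over the M x M oriented gradient histograms
    D = d_tmod(n_ori)
    cells1 = np.asarray(desc1, float).reshape(-1, n_ori)
    cells2 = np.asarray(desc2, float).reshape(-1, n_ori)
    return sum(emd_hat(a, b, D, alpha=1.0) for a, b in zip(cells1, cells2))
```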
4.2 A Linear Time SIFTDIST Algorithm
As SIFTDIST is a sum of EMDTMOD solutions, we present a linear time algorithm for EMDTMOD . Like all other Earth Mover’s Distances, EMDTMOD can be solved by a maxflow-min-cost algorithm. Each bin i in the first histogram is connected to: bin i in the second histogram with a zero-cost edge, two nearby bins with one-cost edges and to all other bins with two-cost edges. See Fig. 2 for an illustration of this flow network. The algorithm starts by saturating all zero-cost edges. As the ground distance (Eq. 5) obeys the triangle inequality, this step does not change the minimum cost
solution [22]. Note that after the first step finishes, for each of the supplier-demander pairs that were connected with a zero-cost edge, either the supplier is empty or the demander is full. To minimize the cost after the first step, the algorithm needs to maximize the flow through the one-cost edges; i.e. the problem becomes a max-flow problem on a graph with N vertexes and at most N edges, where the maximum path length is 1 as flow goes only from supply to demand (see Fig. 2). This step starts by checking whether all vertexes are degree 2; if yes, we remove an arbitrary edge. Second, we traverse the suppliers' vertexes clockwise, twice. For each degree one vertex we saturate its edge. If we did not remove an edge at the beginning of this step, there will be no augmenting paths. If an edge was removed, the algorithm flows through all augmenting paths. All these paths can be found by returning the edge and expanding from it. The algorithm finishes by flowing through all two-cost edges, which is equivalent to multiplying the maximum of the total remaining supply and demand by two and adding it to the distance.
5 Experimental Setup
We evaluate SIFTDIST using a test protocol similar to that of Mikolajczyk and Schmid [2]. The dataset was downloaded from [29]. The test data contain eight folders, each with six images with different geometric and photometric transformations and for different scene types. Fig. 3 shows the first and the third image from each folder. Six image transformations are evaluated: viewpoint change, scale and rotation, image blur, light change and JPEG compression. The images are either of planar scenes or the camera position was fixed during acquisition. The images are, therefore, always related by a homography. The ground truth homographies are supplied with the dataset. For further details about the dataset we refer the reader to [2]. The evaluation criterion is based on the number of correct and false matches obtained for an image pair. The match definition depends on the matching strategy. The matching strategy we use is a symmetric version of Lowe's ratio matching [3]. Let a ∈ A and b ∈ B be two descriptors and n(a, A) and n(b, B) be their spatial neighbors; that is, all descriptors in A and B respectively such that the ratio of the intersection and union of their regions with a and b respectively is larger than 0.5. a and b are matched if all the following conditions hold: a = arg min_{a'∈A} D(a', b), a₂ = arg min_{a'∈A\n(a,A)} D(a', b), b = arg min_{b'∈B} D(a, b'), b₂ = arg min_{b'∈B\n(b,B)} D(a, b') and min(D(a₂, b)/D(a, b), D(a, b₂)/D(a, b)) ≥ R. The R value is varied to obtain the curves. Note that for R = 1 the matching strategy is the symmetric nearest neighbor strategy. The technique of not using overly close regions as a second best match was used by Forssén and Lowe [30]. The match correctness is determined by the overlap error [2]. It measures how well regions A and B correspond under a known homography H, and is defined by the ratio of the intersection and union of the regions: S = 1 − (A ∩ H^T B H)/(A ∪ H^T B H).
Fig. 3. Examples of test images (left): Graf (viewpoint change, structured scene), Wall (viewpoint change, textured scene), Boat (scale change + image rotation, structured scene), Bark (scale change + image rotation, textured scene), Bikes (image blur, structured scene), Trees (image blur, textured scene), Leuven (light change, structured scene), and Ubc (JPEG compression, structured scene)
As in [2] a match is assumed to be correct if S < 0.5. The correspondence number (possible correct matches) is the maximum matching size in the correct match, bi-partite graph. The results are presented with recall versus 1 − precision: recall = #correct matches / #correspondences, 1 − precision = #false matches / #all matches. In all experiments we used Vedaldi's SIFT detector and descriptor implementation [31]. All parameters were set to the defaults except the oriented gradient bins number, where we also tested our method with a SIFT descriptor having 16 oriented gradient bins (see Fig. 1 (b) on page 498).
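The following sketch implements the symmetric ratio-matching strategy in a simplified form, for an arbitrary descriptor distance (for instance sift_dist above). For brevity it drops the spatial-neighbor exclusion n(·,·) described in the text, so a₂ and b₂ are plain second-nearest neighbors and the sketch is only an approximation of the full protocol; it also assumes at least two descriptors per set.

```python
import numpy as np

def symmetric_ratio_matches(A, B, dist, R=1.0):
    """Return index pairs (i, j) that pass the symmetric ratio test."""
    # pairwise distances between the two descriptor sets
    Dm = np.array([[dist(a, b) for b in B] for a in A])
    matches = []
    for i in range(len(A)):
        j = int(np.argmin(Dm[i]))               # b = nearest neighbor of a
        if int(np.argmin(Dm[:, j])) != i:       # a must also be nearest to b
            continue
        d1 = Dm[i, j]
        d2_row = np.partition(Dm[i], 1)[1]      # second best over B (no exclusion)
        d2_col = np.partition(Dm[:, j], 1)[1]   # second best over A (no exclusion)
        if d1 == 0 or min(d2_row, d2_col) / d1 >= R:
            matches.append((i, j))
    return matches
```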
6 Results
In this section, we present and discuss the experimental results. The performance of SIFTDIST is compared to that of L2, EMD-L1 [20], diffusion distance [21] and EMDMOD [22,23] (note that as the fast algorithm for EMDMOD assumes normalized histograms, we normalized each histogram for its computation) for viewpoint change, scale and rotation, image blur, light
change and JPEG compression. The matching was done between SIFT descriptors with eight and sixteen orientation bins (see Fig. 1 on page 498). Finally, we present run time results. Figs. 4, 5, 6, 7 are 1-precision vs. recall graphs. Due to space constraints we present graphs for the matching of the first to the third and fifth images from each folder in the Mikolajczyk and Schmid dataset [29]. Results for the rest of the data are similar and are in [32]. In all of the experiments SIFTDIST computed between SIFT descriptors with sixteen oriented gradient bins (SIFT-16) outperforms all other methods. SIFTDIST computed between the original SIFT descriptors with eight oriented gradient bins (SIFT-8) is usually the second. Also, using SIFTDIST consistently produces results with greater precision and recall for the symmetric nearest neighbor matching (the rightmost point of each curve). Increasing the number of orientation bins from eight to sixteen decreases the performance of L2 and increases the performance of SIFTDIST. This can be explained by the SIFTDIST robustness to quantization errors. Fig. 4 shows matching results for viewpoint change on structured (Graf) and textured (Wall) scenes. SIFTDIST has the highest ranking. As in [2] performance is better on the textured scene. Performance decreases with viewpoint change (compare Graf-3 to Graf-5 and Wall-3 to Wall-5), while the distance ranking remains the same. For large viewpoint change, performance is poor (see Graf-5) and the resulting graphs are not smooth. This can be explained by the fact that the SIFT detector and descriptor are not affine invariant. Fig. 5 shows matching results for similarity transformation on structured (Boat) and textured (Bark) scenes. SIFTDIST outperforms all other distances. As in [2] performance is better on the textured scene. Fig. 6 shows matching results for image blur on structured (Bikes) and textured (Trees) scenes. SIFTDIST has the highest ranking. The SIFT descriptor is affected by image blur. A similar observation was made by Mikolajczyk and Schmid [2]. Fig. 7 - Leuven shows matching results for light change. SIFTDIST obtains the best matching score. The performance decreases with lack of light (compare Leuven-3 to Leuven-5), although not drastically. Fig. 7 - Ubc shows matching results for JPEG compression. SIFTDIST outperforms all other distances. Performance decreases with compression level (compare Ubc-3 to Ubc-5).

Table 1. Run time results in seconds of 10^6 distance computations

            SIFTDIST   (L2)^2   EMDMOD [22,23]   Diffusion [21]   EMD-L1 [20]
  SIFT-8    1.5        0.35     18               28               192
  SIFT-16   2.2        1.3      29               56               637

Table 1 presents run time results. All runs were conducted on a Dual-Core AMD Opteron 2.6GHz processor. The table contains the run time in seconds of
Fig. 4. Results on the Mikolajczyk and Schmid dataset [2]: 1−precision vs. recall curves for Graf-3, Graf-5, Wall-3 and Wall-5, comparing SIFTDIST, L2, EMDMOD [22,23], EMD-L1 [20] and the diffusion distance [21], each with SIFT-8 and SIFT-16. Should be viewed in color.
Fig. 5. Results on the Mikolajczyk and Schmid dataset [2]: 1−precision vs. recall curves for Boat-3, Boat-5, Bark-3 and Bark-5, with the same distances as in Fig. 4. Should be viewed in color.
Fig. 6. Results on the Mikolajczyk and Schmid dataset [2]: 1−precision vs. recall curves for Bikes-3, Bikes-5, Trees-3 and Trees-5, with the same distances as in Fig. 4. Should be viewed in color.
Fig. 7. Results on the Mikolajczyk and Schmid dataset [2]: 1−precision vs. recall curves for Leuven-3, Leuven-5, Ubc-3 and Ubc-5, with the same distances as in Fig. 4. Should be viewed in color.
each distance computation between two sets of 1000 SIFT descriptors with eight and sixteen oriented gradient bins. Note that we measured the run time of (L2 )2 and not L2 as computing the root does not change the order of elements and is time consuming. SIFTDIST is the fastest cross-bin distance.
7 Conclusions
We presented a new cross-bin metric between histograms and a linear time algorithm for its computation. Extensive experimental results for SIFT matching on the Mikolajczyk and Schmid dataset [2] showed that our method outperforms state of the art distances. The speed can be further improved using techniques such as Bayesian sequential hypothesis testing [33], sub linear indexing [34] and approximate nearest neighbor [35,36]. The new cross-bin histogram metric may also be useful for other histograms, either cyclic (e.g. hue in color images) or non-cyclic (e.g. intensity in grayscale images). The project homepage, including code (C++ and Matlab wrappers) is at: http://www.cs.huji.ac.il/∼ofirpele/SiftDist.
References 1. Rubner, Y., Tomasi, C., Guibas, L.J.: The earth mover’s distance as a metric for image retrieval. International Journal of Computer Vision 40(2), 99–121 (2000) 2. Mikolajczyk, K., Schmid, C.: A performance evaluation of local descriptors. IEEE Trans. Pattern Analysis and Machine Intelligence 27(10), 1615–1630 (2005) 3. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vision 60(2), 91–110 (2004) 4. Bay, H., Tuytelaars, T., Gool, L.J.V.: Surf: Speeded up robust features. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3951, pp. 404– 417. Springer, Heidelberg (2006) 5. Dalai, N., Triggs, B., Rhone-Alps, I., Montbonnot, F.: Histograms of oriented gradients for human detection. In: CVPR, vol. 1 (2005) 6. Heikkila, M., Pietikainen, M., Schmid, C.: Description of Interest Regions with Center-Symmetric Local Binary Patterns. In: ICVGIP, pp. 58–69 (2006) 7. Ferrari, V., Tuytelaars, T., Van Gool, L.: Simultaneous object recognition and segmentation by image exploration. In: Pajdla, T., Matas, J(G.) (eds.) ECCV 2004. LNCS, vol. 3021, pp. 40–54. Springer, Heidelberg (2004) 8. Sudderth, E., Torralba, A., Freeman, W., Willsky, A.: Learning hierarchical models of scenes, objects, and parts. In: ICCV, vol. 2, pp. 1331–1338 (2005) 9. Arth, C., Leistner, C., Bischof, H.: Robust Local Features and their Application in Self-Calibration and Object Recognition on Embedded Systems. In: CVPR (2007) 10. Mikolajczyk, K., Leibe, B., Schiele, B.: Multiple object class detection with a generative model. In: CVPR (2006) 11. Dorko, G., Schmid, C., Gravir-Cnrs, I., Montbonnot, F.: Selection of scale-invariant parts for object class recognition. In: ICCV, pp. 634–639 (2003) 12. Opelt, A., Fussenegger, M., Pinz, A., Auer, P.: Weak hypotheses and boosting for generic object detection and recognition. In: Pajdla, T., Matas, J(G.) (eds.) ECCV 2004. LNCS, vol. 3022. Springer, Heidelberg (2004)
13. Sivic, J., Zisserman, A.: Video Google: a text retrieval approach to object matching in videos. In: ICCV, pp. 1470–1477 (2003) 14. Philbin, J., Chum, O., Isard, M., Sivic, J., Zisserman, A.: Object retrieval with large vocabularies and fast spatial matching. In: CVPR (2007) 15. Snavely, N., Seitz, S., Szeliski, R.: Photo tourism: exploring photo collections in 3D. ACM Transactions on Graphics (TOG) 25(3), 835–846 (2006) 16. Sivic, J., Everingham, M., Zisserman, A.: Person Spotting: Video Shot Retrieval for Face Sets. In: Leow, W.-K., Lew, M., Chua, T.-S., Ma, W.-Y., Chaisorn, L., Bakker, E.M. (eds.) CIVR 2005. LNCS, vol. 3568, pp. 226–236. Springer, Heidelberg (2005) 17. Se, S., Lowe, D., Little, J.: Local and global localization for mobile robots using visuallandmarks. In: IROS, vol. 1 (2001) 18. Brown, M., Lowe, D.: Recognising panoramas. In: ICCV, p. 3 (2003) 19. Nowak, E., Jurie, F., Triggs, B.: Sampling strategies for bag-of-features image classification. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3954, pp. 490–503. Springer, Heidelberg (2006) 20. Ling, H., Okada, K.: An Efficient Earth Mover’s Distance Algorithm for Robust Histogram Comparison. IEEE Trans. Pattern Analysis and Machine Intelligence 29(5), 840–853 (2007) 21. Ling, H., Okada, K.: Diffusion distance for histogram comparison. In: CVPR, vol. 1, pp. 246–253 (2006) 22. Werman, M., Peleg, S., Melter, R., Kong, T.: Bipartite graph matching for points on a line or a circle. Journal of Algorithms 7(2), 277–284 (1986) 23. http://www.cs.huji.ac.il/∼ ofirpele/publications/ECCV2008.pdf 24. Shen, H., Wong, A.: Generalized texture representation and metric. Computer vision, graphics, and image processing 23(2), 187–206 (1983) 25. Werman, M., Peleg, S., Rosenfeld, A.: A distance metric for multidimensional histograms. Computer Vision, Graphics, and Image Processing 32(3) (1985) 26. Peleg, S., Werman, M., Rom, H.: A unified approach to the change of resolution: Space and gray-level. IEEE Trans. Pattern Analysis and Machine Intelligence 11(7), 739–742 (1989) 27. Cha, S., Srihari, S.: On measuring the distance between histograms. Pattern Recognition 35(6), 1355–1370 (2002) 28. Indyk, P., Thaper, N.: Fast image retrieval via embeddings. In: 3rd International Workshop on Statistical and Computational Theories of Vision (October 2003) 29. http://www.robots.ox.ac.uk/∼ vgg/research/affine/index.html 30. Forss´en, P., Lowe, D.: Shape Descriptors for Maximally Stable Extremal Regions. In: ICCV, pp. 1–8 (2007) 31. http://vision.ucla.edu/∼ vedaldi/code/sift/sift.html 32. http://www.cs.huji.ac.il/∼ ofirpele/publications/ECCV2008addRes.pdf 33. Pele, O., Werman, M.: Robust real time pattern matching using bayesian sequential hypothesis testing. IEEE Trans. Pattern Analysis and Machine Intelligence 30(8), 1427–1443 (2008) 34. Obdrzalek, S., Matas, J.: Sub-linear indexing for large scale object recognition. In: BMVC, vol. 1, pp. 1–10 (2005) 35. Arya, S., Mount, D., Netanyahu, N., Silverman, R., Wu, A.: An optimal algorithm for approximate nearest neighbor searching fixed dimensions. Journal of the ACM (JACM) 45(6), 891–923 (1998) 36. Beis, J., Lowe, D.: Shape indexing using approximate nearest-neighbour search in high-dimensional spaces. In: CVPR, pp. 1000–1006 (1997)
An Extended Phase Field Higher-Order Active Contour Model for Networks and Its Application to Road Network Extraction from VHR Satellite Images

Ting Peng¹,², Ian H. Jermyn¹, Véronique Prinet², and Josiane Zerubia¹

¹ Project-Team Ariana, INRIA/I3S, 06902 Sophia Antipolis, France
{tpeng,ijermyn,jzerubia}@sophia.inria.fr
² LIAMA & NLPR, CASIA, Chinese Academy of Sciences, Beijing 100190, China
{tpeng,prinet}@nlpr.ia.ac.cn
Abstract. This paper addresses the segmentation from an image of entities that have the form of a ‘network’, i.e. the region in the image corresponding to the entity is composed of branches joining together at junctions, e.g. road or vascular networks. We present a new phase field higher-order active contour (HOAC) prior model for network regions, and apply it to the segmentation of road networks from very high resolution satellite images. This is a hard problem for two reasons. First, the images are complex, with much ‘noise’ in the road region due to cars, road markings, etc., while the background is very varied, containing many features that are locally similar to roads. Second, network regions are complex to model, because they may have arbitrary topology. In particular, we address a severe limitation of a previous model in which network branch width was constrained to be similar to maximum network branch radius of curvature, thereby providing a poor model of networks with straight narrow branches or highly curved, wide branches. To solve this problem, we propose a new HOAC prior energy term, and reformulate it as a nonlocal phase field energy. We analyse the stability of the new model, and find that in addition to solving the above problem by separating the interactions between points on the same and opposite sides of a network branch, the new model permits the modelling of two widths simultaneously. The analysis also fixes some of the model parameters in terms of network width(s). After adding a likelihood energy, we use the model to extract the road network quasi-automatically from pieces of a QuickBird image, and compare the results to other models in the literature. The results demonstrate the superiority of the new model, the importance of strong prior knowledge in general, and of the new term in particular.
1 Introduction

The need to segment network-like structures from images arises in a variety of domains. Examples include the segmentation of road and river networks in remote sensing imagery, and of vascular networks in medical imagery. Extracting automatically the region in the image corresponding to the network is a difficult task, however. Because images often contain confounding elements having similar local properties to the entity of interest, techniques that include no prior knowledge about the region containing the network
cannot succeed. In order to solve the problem, such prior knowledge must be injected somehow, either through the intervention of a user, or by incorporating it into a model. Human users possess very specific prior knowledge about the shape of regions corresponding to networks, and in most applications, this level of knowledge is necessary rather than merely sufficient: generic prior knowledge alone, for example concerning boundary smoothness, is not enough. The need to include more specific prior knowledge raises another, methodological issue, however. The set of network-like regions is complicated: it consists of a large (in principle infinite) number of connected components, corresponding to the different possible topologies of a network (number of connected components in the network, number of loops in each connected component), or equivalently to the set of planar graphs (for 2d data). To this is added a geometric superstructure corresponding to an embedding of the graph in the plane, and to its ‘fattening’ into a region. The construction of a model that favours regions lying in this set as opposed to those outside it is a non-trivial problem. This paper proposes a new model to address this problem, and applies it to the extraction of road networks from very high resolution satellite imagery. The incorporation into models of prior knowledge about a region to be segmented from an image has a long history. The earliest and still most widely used models incorporate local knowledge about the boundary, essentially smoothness: active contours [1] are one example, the Ising model another [2,3]. This degree of prior knowledge is almost never enough to segment an entity of interest automatically, even in relatively simple images. More recent work has focused on models that include more specific prior knowledge [4,5,6,7,8]. This work involves shape priors saying that the region sought must be ‘close’ to an exemplar region or regions. Although useful for many applications, this type of model is not appropriate when the region sought has arbitrary topology. To model families of regions such as networks, Rochery et al. [9] introduced ‘higherorder active contours’ (HOACs). HOACs incorporate not only local, differential knowledge about the boundary, but also nonlocal, long-range interactions between tuples of contour points. Via such interactions, they favour regions with particular geometric characteristics without constraining the topology via use of a reference region. For example, the model used in [9], which uses pairwise interactions, favours, for certain ranges of parameter values, network-like regions composed of branches with roughly parallel borders and a constant width that meet at junctions. The HOAC energy developed in [9] suffers from a serious limitation, however, when it is used to model networks. This is that the interaction between points on the same side of a network branch have the same range of interaction as points on opposite sides. The effect is that a typical maximum curvature of a branch κ is connected to the width of that branch W via κ ∼ 1/W . This is particularly limiting for certain types of networks, e.g. road networks in cities, for which κ 1/W . In this paper, we construct a new HOAC prior energy for modelling networks that overcomes this limitation, allowing separate control of branch straightness and width. The new energy also permits a broader range of widths to be modelled simultaneously, and can even model two disjoint width ranges. 
We test the model by applying it to the problem of road network extraction from very high resolution (VHR) images of
Beijing. This represents an extremely challenging problem due to the amount of ‘noise’ in the road regions (cars, road markings, shadows, . . . ) and the degree of variation and detail in the non-road regions. Nevertheless, the new energy permits a quasi-automatic extraction of the road network. To avoid the complications of expressing regions with arbitrary topology in terms of boundaries and the complexity of the implementation of HOAC terms using standard level-set methods, Rochery et al. [10] reformulated HOAC models as equivalent nonlocal phase field models. Phase fields possess many advantages over more traditional methods for region representation and modelling, even in the non-HOAC case, but are particularly advantageous for HOAC energies. It is often convenient to formulate a model in terms of the contour initially, and then reformulate it as a phase field model for implementation; we follow that procedure in this paper. The paper is organized as follows: section 2 recalls HOAC energies and the phase field framework. In section 3, we introduce our new HOAC energy, and calculate the conditions for which the model allows stable bars. In section 4, we define the overall model, including a data term. The application of the model to road extraction from VHR images is illustrated in section 5. We conclude in section 6.
2 Higher-Order Active Contours and Phase Fields

In [9], Rochery et al. proposed an Euclidean-invariant HOAC energy for modelling network regions:

$$E_C(R) = \lambda_C L(R) + \alpha_C A(R) - \frac{\beta_C}{2}\iint_{(\partial R)^2} dt\, dt'\; \dot\gamma(t)\cdot\dot\gamma(t')\; \Psi\!\left(\frac{|\Delta\gamma(t,t')|}{d}\right), \tag{1}$$

where ∂R is the boundary of region R; γ : S¹ → Ω is a map representing ∂R, parameterized by t; Ω ⊂ R² is the image domain; dots represent differentiation wrt t; L is boundary length; A is region area; Δγ(t, t') = γ(t) − γ(t'); and d is a constant that controls the range of the interaction. The long range interaction between t and t' is modulated by Ψ, the interaction function:

$$\Psi(k) = \begin{cases} \tfrac{1}{2}\left(2 - |k| + \tfrac{1}{\pi}\sin(\pi|k|)\right) & \text{if } |k| < 2\,,\\ 0 & \text{else}\,. \end{cases} \tag{2}$$

It is a smoothly decreasing function from 1 at k = 0 to 0 for k ≥ 2. For many reasons [10], the phase field framework provides a more convenient framework for region modelling than do contours. A 'phase field' is a function φ : Ω → R, which defines a region R ∈ Ω via a threshold z: R = ζ_z(φ) = {x ∈ Ω : φ(x) > z}. The basic phase field energy term E₀ is

$$E_0(\phi) = \int_\Omega dx\; \left(\frac{1}{2}\nabla\phi(x)\cdot\nabla\phi(x) + V(\phi(x))\right), \tag{3}$$

where the 'potential' V is given by

$$V(y) = \lambda\left(\frac{1}{4}y^4 - \frac{1}{2}y^2\right) + \alpha\left(y - \frac{1}{3}y^3\right), \tag{4}$$
where λ and α are constants. For λ > α > 0, V has two minima, at y = −1 and y = 1, and a maximum at y = α/λ. Define φ_R = arg min_{φ: ζ_z(φ)=R} E₀(φ). If we ignore the gradient term in equation (3), and set z = α/λ, we clearly find that φ_R(x) = 1 for x ∈ R and φ_R(x) = −1 for x ∈ R̄ = Ω \ R. Adding the gradient term results in a smooth transition from 1 to −1 over an interface region R_C around the boundary ∂R. Note that to a very good approximation ∇φ is non-zero only in R_C. It can be shown [10] that E₀(φ_R) ≈ λ_C L(R) + α_C A(R). The third, HOAC term in E_C can also be reformulated in terms of an equivalent phase field energy [10]. It becomes

$$E_S(\phi) = -\frac{\beta}{2}\iint_{\Omega^2} dx\, dx'\; \nabla\phi(x)\cdot\nabla\phi(x')\; \Psi\!\left(\frac{|x-x'|}{d}\right). \tag{5}$$

The sum E₀ + E_S is then equivalent to E_C in equation (1) [10].
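As a concrete reading of Eqs. (2)-(5), the sketch below evaluates E₀ + E_S for a discrete 2D phase field on a unit-spacing grid, computing the nonlocal term by convolving each gradient component with a sampled interaction kernel Ψ(|r|/d). The function names and the FFT-based convolution via scipy are our own illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def interaction_kernel(d, spacing=1.0):
    # sample Psi(|r|/d) of Eq. (2) on a grid covering its support |r| < 2d
    rad = int(np.ceil(2 * d / spacing))
    y, x = np.mgrid[-rad:rad + 1, -rad:rad + 1] * spacing
    k = np.hypot(x, y) / d
    return np.where(k < 2, 0.5 * (2 - k + np.sin(np.pi * k) / np.pi), 0.0)

def prior_energy(phi, alpha, lam, beta, d):
    # E_0 + E_S of Eqs. (3)-(5) for a 2D phase field phi (unit grid spacing)
    gy, gx = np.gradient(phi)
    V = lam * (0.25 * phi**4 - 0.5 * phi**2) + alpha * (phi - phi**3 / 3.0)
    e0 = np.sum(0.5 * (gx**2 + gy**2) + V)
    psi = interaction_kernel(d)
    es = -0.5 * beta * np.sum(gx * fftconvolve(gx, psi, mode='same')
                              + gy * fftconvolve(gy, psi, mode='same'))
    return e0 + es
```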
3 Modelling Networks

As explained briefly in section 1, E_C (or equivalently E₀ + E_S) suffers from a significant limitation when it comes to modelling networks. Apart from a sign change, the interaction between two points with parallel tangent vectors is the same, and in particular has the same range, as that between anti-parallel tangent vectors. The former interaction controls the curvature of network branches by trying to align tangent vectors, while the latter controls branch width by creating a repulsive force. Hence we expect that for a stable network branch, typical maximum curvature κ and branch width W will be related approximately by κ ∼ 1/W. Thus, E_C does not model well networks with straight narrow branches or highly curved, wide branches. To overcome these limitations, we will set up a new, Euclidean invariant nonlocal energy term E_L that will act in a complementary way to the HOAC term in E_C. We will also find conditions that ensure that a long bar of a given width is a stable configuration of the new model. This enables the fixing of one of the parameters of the energy in terms of the others, and places constraints on the rest.

3.1 Linear Nonlocal HOAC Term

One general class of quadratic HOAC terms can be written as

$$E_{C,HO}(R) = -\iint_{(\partial R)^2} dt\, dt'\; \dot\gamma(t)\cdot G_C(\gamma(t),\gamma(t'))\cdot\dot\gamma(t')\,, \tag{6}$$

where G_C is a map from Ω² to 2 × 2 matrices. Imposing Euclidean invariance, and choosing G_C(γ(t), γ(t')) = Ψ(|Δγ|)δ, where δ is the unit matrix, leads to the HOAC term in E_C. Choosing G_C(γ(t), γ(t')) = Ψ(|Δγ|)ΔγΔγ^T leads to

$$E_{C,L}(R) = -\iint_{(\partial R)^2} dt\, dt'\; \big(\dot\gamma(t)\cdot\Delta\gamma(t,t')\big)\big(\dot\gamma(t')\cdot\Delta\gamma(t,t')\big)\; \Psi\!\left(\frac{|\Delta\gamma(t,t')|}{d_2}\right), \tag{7}$$

where we use the same Ψ as in E_C, but with a different range d₂.
EC,L compares each tangent vector to the vector Δγ(t, t ) joining the two interacting points. When two points have tangent vectors that are both nearly aligned or anti-aligned with Δγ, the product of the dot products is positive. The energy EC,L can decrease further by further aligning these tangent vectors with Δγ and hence with each other. This situation corresponds to two points on the same side of a network branch, as shown in Fig. 1(a). The energy thus favours straight lines, within a range controlled by d2 . On the other hand, when at least one of the two tangent vectors is nearly orthogonal to Δγ, the product of dot products is small. This means that changing the distance between the two points in the argument to Ψ does not change the energy much, and thus that the force between two such points is small. This situation corresponds to two points on opposite sides of a network branch, as shown in Fig. 1(b). As a result, when EC,L is added to EC , the width of the network branches is controlled largely by the parameter d of EC , while the distance over which the branch will be straight is controlled largely by d2 , if d2 > d. For thin, straight bars, we will indeed fix d2 > d. The exception to this rule is also shown in Fig. 1(b). From the above, γ(t ) exerts no force on γ(t), but γL (t ) and γR (t ) both repel γ(t), as shown by the force arrows FL and FR in the figure. The tangential parts of FL and FR cancel, and there is an overall normal repulsion F . If the weight of EC,L in the model is too large, this repulsion may begin to dominate the bar width, which we want to avoid.
Fig. 1. The effect of EC,L : (a) when two tangent vectors are nearly aligned or anti-aligned with Δγ, the energy EC,L favours their alignment; (b) when at least one of the two tangent vectors is nearly orthogonal to Δγ, there is only a very small force between the two points, but contributions from many points can add up to a significant repulsion
We now reformulate E_{C,L}(R) in the phase field framework [10]. We rotate tangent vectors to normal vectors, and replace the latter by ∇φ. Since ∇φ_R is very small outside R_C, the domains of integration can be extended from ∂R to Ω without significantly changing the energy, except for a multiplicative factor. The new, linear nonlocal HOAC phase field term E_L(φ) becomes (we introduce a weight parameter β₂)

$$E_L(\phi) = -\frac{\beta_2}{2}\iint_{\Omega^2} dx\, dx'\; \big(\nabla\phi(x)\times(x-x')\big)\big(\nabla\phi(x')\times(x-x')\big)\; \Psi\!\left(\frac{|x-x'|}{d_2}\right), \tag{8}$$

where × is the 2D antisymmetric product.

3.2 Stability Analysis

The sum of the three energies we have introduced so far, E_P = E₀ + E_S + E_L, will constitute the prior energy for our model. The behaviour of E_P depends on the six parameters (α, λ, β, β₂, d, d₂), and can vary significantly. If we wish to model networks with
this energy, it is therefore important to ensure that a network branch is a stable configuration. An important side-effect is that some of the (rather abstract) model parameters are effectively replaced by ‘physical’ quantities, such as bar and interface width, which we can reasonably fix from numerical or application considerations. Since network branches are locally like straight bars, we can to a good approximation analyse the stability of a long (because we want to ignore boundary effects) straight bar,
Fig. 2. Top-left: different regions in the $\hat\beta_2$–$\hat\beta$ plane for $\hat d_2$ = 2 < D₂. e_P has either no local minimum (red) or one local minimum (green). Top-right: the associated stable bar width. Bottom-left: e_P with no local minimum ($\hat\beta$ = 0.05, $\hat\beta_2$ = 0.04). Bottom-right: e_P with one local minimum ($\hat\beta$ = 0.2, $\hat\beta_2$ = 0.1).

Fig. 3. Top-left: different regions in the $\hat\beta_2$–$\hat\beta$ plane for $\hat d_2$ = 5.5 > D₂. e_P has either no local minimum (red), one local minimum (green), or two local minima (white). Top-right: the associated stable bar width(s). Bottom-left: e_P with no local minimum ($\hat\beta$ = 0.1, $\hat\beta_2$ = 0.01). Bottom-middle: e_P with one local minimum ($\hat\beta$ = 0.05, $\hat\beta_2$ = 0.015). Bottom-right: e_P with two local minima ($\hat\beta$ = 0.2, $\hat\beta_2$ = 0.013).
of length L and width W ≪ L. Ideally, we should minimize E_P under the constraint that ζ_z(φ) = R_bar, and then expand around that point to test stability, but this is very difficult. Instead, we take a simple ansatz for φ_{R_bar}, and study its stability in a low-dimensional subspace of function space; the results may be justified a posteriori by numerical experiments. In [10] a similar procedure was followed, the results comparing favourably to those obtained by more sophisticated 'matched asymptotics'. The ansatz is as follows. The phase field is given by φ(x) = 1 for x ∈ R \ R_C; φ(x) = −1 for x ∈ R̄ \ R_C, while in R_C, φ changes linearly from 1 to −1. The energy E_P evaluated on this ansatz, per unit length of bar, which we denote e_P, is given by

$$e_P(w, W) = \frac{4}{3}\alpha W + \frac{4}{15}\lambda w + \frac{4}{w} + \frac{4\beta}{d}\int_W^{2d} d\eta\; \sqrt{\eta^2 - W^2}\left(1 - \cos\frac{\pi\eta}{d}\right) + 4\beta_2\int_W^{2d_2} d\eta\; \eta\sqrt{\eta^2 - W^2}\left(2 - \frac{\eta}{d_2} + \frac{1}{\pi}\sin\frac{\pi\eta}{d_2}\right),$$

where w is the width of the interface region R_C. The energy e_P is now minimized with respect to w and W by setting its first derivatives to zero, while ensuring that the second derivatives are positive. For w this is trivial, and leads to λ = 15/w², and thus λ ∼ 1 for reasonable interface widths. For W, the calculation is lengthy and will not be detailed here. Note that stability in fact depends only on the three scaled parameters $\hat\beta$ = β/α, $\hat\beta_2$ = β₂d₂/α and $\hat d_2$ = d₂/d, and on the scaled width $\hat W$ = W/d. The main results are then as follows. If $\hat d_2$ is less than a threshold D₂, at most one minimum can be found. If $\hat d_2$ > D₂, there are three cases, depending on the values of $\hat\beta$, $\hat\beta_2$, and $\hat d_2$: e_P has no local minimum; e_P has one local minimum, with either $\hat W$ ≃ 1 (i.e. W ≃ d) or $\hat W$ ≃ $\hat d_2$ (i.e. W ≃ d₂); or e_P has two local minima, at $\hat W$ ≃ 1 and $\hat W$ ≃ $\hat d_2$. (This behaviour is an example of a swallowtail catastrophe.) The two regimes are further illustrated in Figs. 2 and 3. The variety of behaviour is important for applications. As well as being able to model networks with branches of more or less fixed width, but with greater 'stiffness' than provided by the model in [9], the new energy can model two widths at the same time. At certain 'critical points' in parameter space, essentially where pairs of minima merge, it can also model a large range of widths, all of which are approximately stable.
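The stable bar widths can also be located numerically by scanning e_P over W at fixed w; the sketch below does this using the expression for e_P written above (our reconstruction of the formula) and simply reports the interior local minima, which is enough to distinguish the one-minimum and two-minima regimes of Figs. 2 and 3 for a given parameter choice.

```python
import numpy as np
from scipy.integrate import quad

def e_p(W, w, alpha, lam, beta, beta2, d, d2):
    # bar energy per unit length, as written above
    f1 = lambda n: np.sqrt(n**2 - W**2) * (1 - np.cos(np.pi * n / d))
    f2 = lambda n: n * np.sqrt(n**2 - W**2) * (2 - n / d2 + np.sin(np.pi * n / d2) / np.pi)
    i1 = quad(f1, W, 2 * d)[0] if W < 2 * d else 0.0
    i2 = quad(f2, W, 2 * d2)[0] if W < 2 * d2 else 0.0
    return (4 * alpha * W / 3 + 4 * lam * w / 15 + 4 / w
            + 4 * beta / d * i1 + 4 * beta2 * i2)

def stable_widths(params, W_max, n=400):
    # interior local minima of e_P(W) on a regular grid of candidate widths
    Ws = np.linspace(1e-3, W_max, n)
    e = np.array([e_p(W, **params) for W in Ws])
    idx = [i for i in range(1, n - 1) if e[i] < e[i - 1] and e[i] < e[i + 1]]
    return Ws[idx]
```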
4 Overall Model for Linear Network Extraction

In addition to the prior energy E_P, we also need a likelihood energy linking the region R (which in our case corresponds to the road network) to the data, in our case a VHR optical satellite image. We will also specify some of the implementation details.

4.1 Total Energy

The total energy is the sum of the prior energy E_P and the likelihood energy E_D:

$$E(\phi; I) = E_D(I, \phi) + \theta E_P(\phi)\,, \tag{9}$$
where I : Ω → R is the image, and θ ∈ R⁺ balances the contributions of the two terms. E_D is given by

$$E_D(I, \phi) = -\int_\Omega dx\; \big(\phi_+(x)\ln P_+(I(x)) + \phi_-(x)\ln P_-(I(x))\big). \tag{10}$$
P_±(I) are models of the histograms of the image intensity, inside (+) and outside (−) the road region. They are both mixtures of Gaussians whose parameters are learned a priori, in a supervised way. The quantities φ_± = (1 ± φ)/2 are, by construction, approximately equal to the characteristic functions of R and R̄. The likelihood energy is quite weak, in the sense that maximum likelihood classification produces very poor results (see Fig. 4(d)), mainly due to the 'noise' in the road region and the great variations in the background. No image model with independent pixels can do much better than this, which is why a powerful prior model is needed.

4.2 Optimization and Parameter Settings

To minimize E, we perform gradient descent with the neutral initialisation: the initial value of φ is set equal to the threshold z = α/λ everywhere in Ω [10]. The algorithm is thus quasi-automatic. The functional derivatives of the HOAC terms δE_S/δφ and δE_L/δφ involve convolutions: they are calculated in the Fourier domain, as are all derivatives. The evolution equation is

$$\frac{\partial\phi(x)}{\partial t} = \frac{1}{2}\ln\frac{P_+}{P_-} + \theta\Big(\nabla^2\phi(x) - \lambda(\phi^3(x) - \phi(x)) - \alpha(1 - \phi^2(x)) + \beta\,\mathcal{F}^{-1}\big[k^2 d\,\hat\Psi(kd)\,\hat\phi(k)\big] + \beta_2\,\mathcal{F}^{-1}\big[k\cdot\mathcal{F}\{x^\perp x^{\perp T}\,\Psi(|x|/d_2)\}\cdot k\;\hat\phi(k)\big]\Big), \tag{11}$$

where F and F⁻¹ denote the Fourier and the inverse Fourier transforms respectively, a hat indicates the Fourier transform of a variable, and ⊥ rotates the tangent vectors to the inward normal vectors. The time evolution of φ uses the forward Euler method. The parameters of the prior energy, i.e. θ, α, λ, β, β₂, d, and d₂, are constrained by the stability analysis of section 3.2. This enables a choice of λ, β, β₂, d, and d₂ based on the width(s) to be modelled: only α and θ remain.
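A minimal sketch of the descent loop is given below. It keeps the data term and the local and E_S parts of Eq. (11), evaluates the E_S contribution by convolution in the spatial domain rather than via $\hat\Psi$, and omits the E_L term and all boundary-condition details, so it only illustrates the structure of the update, not the full method; interaction_kernel is the sampled Ψ from the sketch in Sect. 2, and all names are our own.

```python
import numpy as np
from scipy.signal import fftconvolve

def evolve(phi, log_ratio, theta, alpha, lam, beta, d, dt=0.1, steps=200):
    """Forward-Euler gradient descent on Eq. (9) with the E_L term omitted.
    log_ratio = ln(P_+/P_-) evaluated pixelwise on the image."""
    psi = interaction_kernel(d)                       # sampled Psi(|r|/d)
    for _ in range(steps):
        lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
               + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)
        gy, gx = np.gradient(phi)
        # -dE_S/dphi = -beta * div( Psi * grad(phi) ), computed by convolution
        hoac = -beta * (np.gradient(fftconvolve(gx, psi, mode='same'), axis=1)
                        + np.gradient(fftconvolve(gy, psi, mode='same'), axis=0))
        prior = lap - lam * (phi**3 - phi) - alpha * (1.0 - phi**2) + hoac
        phi = phi + dt * (0.5 * log_ratio + theta * prior)
    return phi
```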
5 Experimental Results As input data I, we use a number of images, with average size 1500 × 1500 pixels, extracted from a QuickBird optical panchromatic image of Beijing. The scenes are characteristic of dense urban regions. Fig. 4(a) illustrates one of these images. Our aim is to extract, completely and accurately, the road network from an image. In order to evaluate the performance of our new model, we compare it quantitatively to ground truth and to other methods from the literature. We also analyse the effect of the different terms in our energy.
Fig. 4. Experiments and comparisons. 4(a): a QuickBird image, 0.61 m; 4(b)-4(c): results obtained using the new model, E, at 1/4 resolution and at full resolution. 4(d)-4(h): results obtained using: MLE; the model with β = β₂ = 0 (equivalent to a standard active contour); the model in [9] (β₂ = 0); the methods in [11] and [12]
We will focus on two particular cases of road extraction: extraction of a network consisting of roads of roughly the same width; and extraction of networks containing roads of two different widths. In the former case, we choose the parameters so that eP has one local minimum. The resulting model can extract roads whose widths are close to the minimizing value. In the latter case, we choose the parameters so that eP has two local minima. Again a small range of widths around each minimum is possible. 5.1 Extraction of Roads of Similar Widths We apply our model to both the full-resolution and reduced resolution images. We fixed the parameters as described in section 3.2. For all experiments, the parameters (θ, α, λ, β, β2 , d, d2 ) were (200, 0.15, 4, 0.02, 2 × 10−4 , 4, 12) and (200, 0.15, 4, 0.02, 1.25 × 10−5 , 16, 48) at 1/4 and full resolution respectively. Note that apart from the obvious scaling of d and d2 , and a change in β2 , the other parameters are the same for the two resolutions. The results obtained using the model E (equation (9)), at 1/4 resolution and at full resolution, are shown in Fig. 4. The complete road network is retrieved successfully, at both resolutions. Although the segmentation at 1/4 resolution appears geometrically smoother, the extraction result is actually more accurate at full resolution. Accuracy at 1/4 resolution is limited both directly, by the low resolution of the phase field, and indirectly, because each scaling coefficient in the data at level 2 is the average of 16 pixels at full resolution: coefficients near the road border therefore include both road and background contributions, and the road boundary is thereby blurred.
Table 1. Quantitative criteria at full resolution (except first row) for Fig. 4(a) (T = True, F = False, P = Positive, N = Negative)

  Method                                                  Completeness   Correctness   Quality
                                                          TP/(TP+FN)     TP/(TP+FP)    TP/(TP+FP+FN)
  Our model E (with E_L) at 1/4 resolution (Fig. 4(b))    0.9688         0.8519        0.8292
  Our model E (with E_L) (Fig. 4(c))                      0.8756         0.9693        0.8520
  MLE (Fig. 4(d))                                         0.9356         0.2073        0.2044
  θE₀ + E_D (Fig. 4(e))                                   0.6047         0.8249        0.5359
  θ(E₀ + E_S) + E_D (Fig. 4(f))                           0.6946         0.9889        0.6892
  Wang [11] (Fig. 4(g))                                   0.9350         0.3463        0.3381
  Yu [12] (Fig. 4(h))                                     0.6050         0.3695        0.2977
To evaluate the performance of the new model, we now compare our result with other methods. In order to illustrate the effects of different terms in the model, we computed results using maximum likelihood estimation (MLE, i.e. θ = 0); a standard, non-higher-order active contour, (β = β2 = 0); and the model in [9] (β2 = 0). The results are shown in Figs. 4(d)–4(f). MLE is clearly incapable of distinguishing the roads from the background, while the models without ES and/or EL are not able to recover the complete road network (although that with ES does better than the standard active contour, which has only local prior knowledge). In addition, we apply two other methods, proposed in [11] and [12], and compare them to ours. The approach in [11] is a classification, tracking, and morphology algorithm; [12] introduced a fast but rough segmentation technique based on ‘straight line density’. Without much prior geometric knowledge, they extract many incorrect areas that happen to have similar statistical properties to roads. Moreover, the accuracy of the delineation of the road boundary is poor. Some quantitative evaluations based on standard criteria [13], are shown in Table 1. Ground truth for this evaluation was segmented by hand. The ‘quality’ is the most important measure because it considers both completeness and correctness. Our complete model outperforms all others. Fig. 5 presents more results using E. 5.2 Extraction of Roads of Different Widths Images containing roads of different widths are processed after choosing parameter values for which eP,L has two local minima. Fig. 6(a) shows an input image containing two roads: their widths are approximately 20 pixels and 80 pixels. The results obtained using our complete model E and the model in [9] (with β2 = 0) are illustrated in Figs. 6(b) and 6(c) respectively. The parameter values used in this experiment were (25, 0.15, 5, 0.02, 1.228 × 10−4 , 4, 22). The estimated stable widths for these parameter values are 5.28 and 20.68, corresponding to the road widths at 1/4 resolution, i.e. 5 pixels and 20 pixels. This comparison shows clearly that adding EL enables the detection of roads with both widths, while the model without EL finds only an incomplete network. In practice, the results are not very sensitive to the precise choice of parameter values, provided they lie in the correct subset of the βˆ − βˆ2 − dˆ2 diagram.
Fig. 5. More results using the model E on pieces of a QuickBird image
Fig. 6. Extraction of a road network containing two different widths, at 1/4 resolution. Left to right: image data; results using: the new model E; the model in [9] (β2 = 0).
6 Conclusions We have proposed a new HOAC term for modelling bar shape and embedded it in the phase field framework. Based on a stability analysis of a bar with a desired width, we established constraints linking the parameters of the energy function. We explored the possible behaviours of the resulting prior energy EP as a function of the parameter settings, and showed that as well as separating the interactions between points on the same and opposite sides of a network branch, the new model permits the modelling of two widths simultaneously. The analysis also fixes some of the model parameters in terms of network width(s). Experiments on road network extraction from VHR satellite images demonstrate the superiority of the new model to others in the literature. Our current work is focused on constructing a prior energy EP that has a very flat local minimum in a wide range, instead of two sharp local minima. This might be a better solution for the extraction of roads with multiple widths.
Acknowledgements This work was partially supported by European Union Network of Excellence MUSCLE (FP6-507752) and by INRIA Associated Team “SHAPES”. The work of the first author is supported by an MAE/Thales Alenia Space/LIAMA grant.
References 1. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: Active contour models. Int. J. Comput. Vis. 1, 321–331 (1988) 2. Ising, E.: Beitrag zur theorie des ferromagnetismus. 31, 253–258 (1925) 3. Geman, S., Geman, D.: Stochastic relaxation, Gibbs distribution and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell. 6, 721–741 (1984) 4. Chen, Y., Tagare, H., Thiruvenkadam, S., Huang, F., Wilson, D., Gopinath, K., Briggs, R., Geiser, E.: Using prior shapes in geometric active contours in a variational framework. Int. J. Comput. Vis. 50, 315–328 (2002) 5. Cremers, D., Tischh¨auser, F., Weickert, J., Schn¨orr, C.: Diffusion snakes: Introducing statistical shape knowledge into the Mumford-Shah functional. Int. J. Comput. Vis. 50, 295–313 (2002) 6. Leventon, M.E., Grimson, W.E.L., Faugeras, O.: Statistical shape influence in geodesic active contours. In: Proc. IEEE CVPR, Hilton Head Island, South Carolina, USA, vol. 1 (2000) 7. Rousson, M., Paragios, N.: Shape priors for level set representations. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) ECCV 2002. LNCS, vol. 2351. Springer, Heidelberg (2002) 8. Srivastava, A., Joshi, S., Mio, W., Liu, X.: Statistical shape analysis: Clustering, learning, and testing. IEEE Trans. Pattern Anal. Mach. Intell. 27, 590–602 (2003) 9. Rochery, M., Jermyn, I.H., Zerubia, J.: Higher-order active contours. Int. J. Comput. Vis. 69, 27–42 (2006) 10. Rochery, M., Jermyn, I.H., Zerubia, J.: Phase field models and higher-order active contours. In: Proc. IEEE ICCV, Beijing, China (2005) 11. Wang, R., Zhang, Y.: Extraction of urban road network using Quickbird pan-sharpened multispectral and panchromatic imagery by performing edge-aided post-classification. In: Proc. ISPRS, Quebec City, Canada (2003) 12. Yu, Z., Prinet, V., Pan, C., Chen, P.: A novel two-steps strategy for automatic GIS-image registration. In: Proc. ICIP, Singapore (2004) 13. Heipke, C., Mayr, H., Wiedemann, C., Jamet, O.: Evaluation of automatic road extraction. Int. Arch. Photogram. Rem. Sens. XXXII, 47–56 (1997)
A Generic Neighbourhood Filtering Framework for Matrix Fields

Luis Pizarro, Bernhard Burgeth, Stephan Didas, and Joachim Weickert

Mathematical Image Analysis Group, Faculty of Mathematics and Computer Science, Building E1.1, Saarland University, 66041 Saarbrücken, Germany
{pizarro,burgeth,didas,weickert}@mia.uni-saarland.de, http://www.mia.uni-saarland.de
Abstract. The Nonlocal Data and Smoothness (NDS) filtering framework for greyvalue images has been recently proposed by Mrázek et al. This model for image denoising unifies M-smoothing and bilateral filtering, and several well-known nonlinear filters from the literature become particular cases. In this article we extend this model to so-called matrix fields. These data appear, for example, in diffusion tensor magnetic resonance imaging (DT-MRI). Our matrix-valued NDS framework includes earlier filters developed for DT-MRI data, for instance, the affine-invariant and the log-Euclidean regularisation of matrix fields. Experiments performed with synthetic matrix fields and real DT-MRI data showed excellent performance with respect to restoration quality as well as speed of convergence.
1 Introduction
Image denoising and simplification is a ubiquitous task in image processing, and numerous techniques have been developed over the years. These methods are based e.g. on statistical notions, partial differential equations, variational principles and regularisation methods. Nevertheless, a common feature for most of the techniques is an averaging process over the neighbourhood of each pixel. An early example is the sigma filter of Lee [1], and the M-smoothers of Chu et al. [2] fall also in this category. Polzehl and Spokoiny proposed a technique called adaptive weights smoothing [3]. The W-estimator by Winkler et al. [4] has a close relation to the spatially weighted M-smoothers [5]. The bilateral filter by Tomasi and Manduchi [6] can be described as a weighted averaging filter as well. The energy-based approach recently proposed by Mr´ azek et al. [7] combines M-smoothers with bilateral filtering. It is a fairly general nonlocal filtering framework that takes advantage of the so-called Nonlocal Data and Smoothness terms, hence referred to as NDS in this article. These terms allow for the processing of information from, in principle, arbitrary large neighbourhoods around pixels. The data term rewards similarity of our filtered image to the original one, and hence counteracts the smoothness term which penalises high variations of the evolving image inside a neighbourhood. A thorough investigation of the D. Forsyth, P. Torr, and A. Zisserman (Eds.): ECCV 2008, Part III, LNCS 5304, pp. 521–532, 2008. c Springer-Verlag Berlin Heidelberg 2008
NDS-framework and its relation to other filters for grey scale images has been performed in [8] and [9]. The goal of this article is the extension of this rather general filtering framework to matrix-valued data, so-called matrix- or tensor-fields, which we regard as mappings from points of a set in Rd into the set S(k) of real, symmetric k × k-matrices. Diffusion Tensor Magnetic Resonance Imaging (DT-MRI) is the most prominent source for this data type: This modern medical image acquisition technique associates a real symmetric positive-definite 3 × 3-matrix to each voxel of the volume under consideration. These matrices, visualised by ellipsoids, indicate the diffusive behaviour of water molecules under thermal Brownian motion, and as such reflect the structure of the surrounding tissue. However, symmetric but possibly indefinite matrix-fields also appear, for example in physics and engineering as general descriptors of anisotropic behaviour. In any case, the data are often corrupted by noise and a filtering and simplification of the matrix fields is necessary. Filtering for processing of positive definite matrix-fields, namely DTMRI data, based on diffusion and regularisation concepts have been proposed in [10,11], based on differential geometric considerations in [12,13,14,15,16,17,18]. An alternative framework relying on an operator-algebraic view on symmetric matrices provides the ground for filtering and regularisation of matrix fields, positive definite or not, in [19,20]. A short review of the NDS framework in the subsequent Section 2 will reveal that for its applicability the data to be processed have to be elements of a vector space equipped with a metric, and hence be extended to matrix-fields. In Section 3 we show that various filtering approaches described in the recent literature are particular cases of the general matrix-valued NDS framework. We report on experiments in Section 4 pointing out the capabilities and prospects of the NDS methodology. We summarise our contribution in Section 5.
2 NDS Framework and Its Extension to Matrix Fields
Let f, u ∈ Rd be discrete d-dimensional scalar images. In this article we assume d = 1, . . . , 3, and f stands for the noisy image while u represents a processed version of it. Let J = {1, . . . , n} be the index set of all pixels in the images. The pixel position in the d-dimensional grid is indicated by xi (i ∈ J), and h_{i,j}^2 = |x_i − x_j|^2 stands for the square of the Euclidean distance between the two pixel positions x_i and x_j. This quantity will be referred to as the spatial distance. The tonal distance then is the distance between grey values of two pixels, for example |u_i − f_j|^2. The functional E of the NDS filter presented in [7] is a linear combination of a data and a smoothness term:

$$E(u) = \alpha \sum_{i \in J} \sum_{j \in J} \Psi_D\!\left(|u_i - f_j|^2\right) w_D\!\left(|x_i - x_j|^2\right) \;+\; (1-\alpha) \sum_{i \in J} \sum_{j \in J} \Psi_S\!\left(|u_i - u_j|^2\right) w_S\!\left(|x_i - x_j|^2\right) \qquad (1)$$
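To make the structure of (1) concrete, here is a small numerical sketch of the energy for a 1-D signal. The function and variable names (nds_energy, psi_d, gauss, etc.) are our own illustration rather than part of the original framework, and the penalisers and spatial weights are simple placeholders.

```python
import numpy as np

def nds_energy(u, f, x, alpha, psi_d, psi_s, w_d, w_s):
    """Evaluate the NDS energy (1) for 1-D signals u (evolving) and f (noisy).

    x holds the pixel positions; psi_d/psi_s are tonal penalisers and
    w_d/w_s spatial weights, all acting on *squared* distances.
    """
    du = u[:, None] - f[None, :]          # u_i - f_j for all pairs (i, j)
    uu = u[:, None] - u[None, :]          # u_i - u_j
    h2 = (x[:, None] - x[None, :]) ** 2   # squared spatial distances
    data = np.sum(psi_d(du ** 2) * w_d(h2))
    smooth = np.sum(psi_s(uu ** 2) * w_s(h2))
    return alpha * data + (1.0 - alpha) * smooth

# Example choices: quadratic penalisers, Gaussian spatial windows.
psi = lambda s2: s2
gauss = lambda h2, r=2.0: np.exp(-h2 / (2.0 * r ** 2))

f = np.array([1.0, 1.2, 5.0, 1.1, 0.9])
u = f.copy()
x = np.arange(len(f), dtype=float)
print(nds_energy(u, f, x, alpha=0.5, psi_d=psi, psi_s=psi,
                 w_d=gauss, w_s=gauss))
```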
This formulation combines a similarity constraint and a smoothness constraint, which are weighted by a parameter α ∈ [0, 1]. The spatial weights wD and wS take into account the spatial distance between pixel positions xi and xj, in contrast to the tonal weights ΨD and ΨS penalising high deviations between the corresponding grey values. Omitting the details, which can be found in [8], the minimiser of this functional is obtained through a fixed-point iteration based on

$$u_i^{k+1} = \frac{\alpha \sum_{j \in J} \Psi_D'\!\left(|u_i^k - f_j|^2\right) f_j\, w_D(h_{i,j}^2) \;+\; 2(1-\alpha) \sum_{j \in J} \Psi_S'\!\left(|u_i^k - u_j^k|^2\right) u_j^k\, w_S(h_{i,j}^2)}{\alpha \sum_{j \in J} \Psi_D'\!\left(|u_i^k - f_j|^2\right) w_D(h_{i,j}^2) \;+\; 2(1-\alpha) \sum_{j \in J} \Psi_S'\!\left(|u_i^k - u_j^k|^2\right) w_S(h_{i,j}^2)} \qquad (2)$$

Positivity of the denominator is guaranteed if $\Psi_{\{S,D\}}'(s^2),\, w_{\{S,D\}}(h^2) > 0$, i.e., the penalisers are monotonically increasing; hence the right-hand side of (2) is a convex combination of grey values $u_j, f_j$. We transfer the scalar fixed-point formulation (2) to the matrix-valued setting. We use capital letters Fi, Ui to denote matrices of a matrix field at position xi. An associated fixed-point iteration for matrix fields is given by
$$U_i^{k+1} = H^{-1}\!\left( \frac{\alpha \sum_{j \in J} \Psi_D'\!\left(d(U_i^k, F_j)^2\right) H(F_j)\, w_D(h_{i,j}^2) \;+\; 2(1-\alpha) \sum_{j \in J} \Psi_S'\!\left(d(U_i^k, U_j^k)^2\right) H(U_j^k)\, w_S(h_{i,j}^2)}{\alpha \sum_{j \in J} \Psi_D'\!\left(d(U_i^k, F_j)^2\right) w_D(h_{i,j}^2) \;+\; 2(1-\alpha) \sum_{j \in J} \Psi_S'\!\left(d(U_i^k, U_j^k)^2\right) w_S(h_{i,j}^2)} \right) \qquad (3)$$
where we incorporated the following adjustments: The term d(A, B) denotes a distance measure between the two matrices A, B ∈ S(n). Two instances are of relevance in this article: One is the computationally inexpensive Frobenius norm of matrices,

$$d_F(A, B) := \|A - B\|_F \quad \text{with} \quad \|C\|_F := \sqrt{\operatorname{trace}(C^\top C)}. \qquad (4)$$

The second one is the log-Euclidean distance between matrices in S^+(n), i.e., the set of real symmetric positive-semidefinite n × n-matrices [16],

$$d_{LE}(A, B) := \|\ln(A) - \ln(B)\|_F. \qquad (5)$$
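As a quick illustration, the two distances (4) and (5) can be computed along the following lines. This is only a sketch with names of our own choosing; it assumes the inputs are symmetric, with strictly positive eigenvalues in the log-Euclidean case.

```python
import numpy as np

def sym_log(M):
    """Matrix logarithm of a symmetric positive definite matrix via eigh."""
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

def frobenius_distance(A, B):
    """d_F(A, B) = ||A - B||_F, cf. (4)."""
    return np.linalg.norm(A - B, ord='fro')

def log_euclidean_distance(A, B):
    """d_LE(A, B) = ||ln(A) - ln(B)||_F, cf. (5)."""
    return np.linalg.norm(sym_log(A) - sym_log(B), ord='fro')

A = np.diag([2.0, 1.0, 0.5])
B = np.eye(3)
print(frobenius_distance(A, B), log_euclidean_distance(A, B))
```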
H in (3) is a function which is applied to a symmetric matrix M. To this end let M have the spectral decomposition $M = \Lambda\, \mathrm{diag}(\lambda_1, \ldots, \lambda_n)\, \Lambda^\top$, where Λ is an orthogonal matrix and diag(λ1, . . . , λn) is a diagonal matrix with the eigenvalues λi of M as non-zero diagonal entries. Then $H(M) = \Lambda\, \mathrm{diag}(H(\lambda_1), \ldots, H(\lambda_n))\, \Lambda^\top$, provided H is defined for each of the scalar values λi. In the next section we will see some instances of such mappings that allow us to obtain several filters suggested in the literature as particular cases of our general NDS framework for matrix fields (3).
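The following sketch shows how such a function H can be applied through the spectral decomposition, and how one update of the fixed-point iteration (3) could then look for a single position i. All names are ours; the penaliser derivatives, weights and neighbourhood handling are deliberately simplified placeholders rather than the authors' implementation.

```python
import numpy as np

def apply_sym_func(M, h):
    """Apply a scalar function h to a symmetric matrix M via M = V diag(l) V^T."""
    w, V = np.linalg.eigh(M)
    return (V * h(w)) @ V.T

def nds_matrix_update(U, F, i, positions, alpha, dpsi_d, dpsi_s, w_d, w_s,
                      dist, H, H_inv):
    """One fixed-point update in the spirit of (3) for the matrix at position i.

    U, F: lists of symmetric matrices (current iterate / noisy data),
    positions: pixel coordinates, dist: tensor distance (e.g. Frobenius),
    H/H_inv: the mapping and its inverse (e.g. identity, or log/exp).
    """
    num = np.zeros_like(U[i])
    den = 0.0
    for j in range(len(F)):
        h2 = np.sum((positions[i] - positions[j]) ** 2)
        cd = alpha * dpsi_d(dist(U[i], F[j]) ** 2) * w_d(h2)
        cs = 2.0 * (1.0 - alpha) * dpsi_s(dist(U[i], U[j]) ** 2) * w_s(h2)
        num += cd * apply_sym_func(F[j], H) + cs * apply_sym_func(U[j], H)
        den += cd + cs
    return apply_sym_func(num / den, H_inv)

# NDS-LE-style choices: H = ln, H^{-1} = exp, quadratic penalisers (Psi' = 1).
one = lambda s2: 1.0
gauss = lambda h2, r=1.0: np.exp(-h2 / (2.0 * r ** 2))
frob = lambda A, B: np.linalg.norm(A - B, ord='fro')

F = [np.diag([2.0, 1.0, 0.5]), np.eye(3), np.diag([1.5, 1.0, 1.0])]
U = [M.copy() for M in F]
pos = np.array([[0.0], [1.0], [2.0]])
print(nds_matrix_update(U, F, 1, pos, alpha=0.5, dpsi_d=one, dpsi_s=one,
                        w_d=gauss, w_s=gauss, dist=frob, H=np.log, H_inv=np.exp))
```

With 1 × 1 matrices and H the identity, this update reduces to the scalar fixed-point formula (2).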
3 Related Filters within This Framework
The matrix-valued NDS framework offers many degrees of freedom. It can even be considered as a unified approach to M-smoothing (α = 1) and bilateral filtering (α = 0) for matrix fields. Furthermore, we are able to regain several filtering approaches known from the literature by specifying relevant quantities in (3):

1. Affine-invariant weighted average of diffusion tensors [13,14,15,17] with
   • α = 0, ΨS(d²) = d², wS = Gaussian,
   • ΨD and wD do not play any role,
   • $H(A_j) = A_i^{1/2} \ln\!\big(A_i^{-1/2} A_j A_i^{-1/2}\big)\, A_i^{1/2}$.
2. Log-Euclidean weighted average of diffusion tensors [16] with
   • the same as in 1, but with H(Aj) = ln(Aj).
3. Affine-invariant regularisation/interpolation of tensor fields via a discrete geodesic marching scheme [15] with
   • λ = 2(1 − α)/α, ΨS(d²) arbitrary, wS = unit disk,
   • ΨD(d²) = d², wD = Gaussian,
   • $H(A_j) = A_i^{1/2} \ln\!\big(A_i^{-1/2} A_j A_i^{-1/2}\big)\, A_i^{1/2}$.
4. Log-Euclidean regularisation/interpolation of tensor fields via a discrete geodesic marching scheme [21] with
   • the same as in 3, but with H(Aj) = ln(Aj).
5. A version of bilateral filtering for tensor fields [22] with
   • α = 0, ΨS(d²) = d², wS = μ₁ · dS + μ₂ · |xi − xj| (μ₁, μ₂ > 0),
   • ΨD and wD do not play any role,
   • H(Aj) = ln(Aj).

Note that most of the mentioned methods do not exploit nonlocal information in the data/similarity term, i.e., α = 0, or the radius of action of their smoothness term is restricted to the unit circle. In this sense, we can consider generalised versions of those methods within the scope of the neighbourhood filtering framework for matrix fields proposed in Section 2. Of course, further specialised filters for tensor fields can be generated for specific applications by appropriately setting the matrix-valued NDS model. In Section 4 we will demonstrate the denoising capabilities of our general framework regarding two prominent special cases. Their performance will be evaluated with respect to restoration quality and speed of convergence on synthetically generated tensor fields and on real DT-MRI data.
4 Comparative Results
In this section, we test our general filtering framework for matrix fields (3) on synthetic and real-world data. Fig. 1 shows a 2-D dataset consisting of 32 × 32 matrices. The data are represented as ellipsoids via the level sets of the quadratic
Fig. 1. Synthetic data. Left: Original matrix field with homogeneous structures consisting of four types of matrices. Middle Top: Scaled-up region of the original matrix field. Middle Bottom: Version degraded with σ = 500. Right Top: Version degraded with σ = 1000. Right Bottom: Version degraded with σ = 2000.
form $x^\top A^{-2} x = \text{const.}$, $x \in \mathbb{R}^3$, associated with a matrix A ∈ S^+(3). By using A^{−2}, the lengths of the semi-axes of the ellipsoid correspond directly with the three eigenvalues of the matrix A. To demonstrate the denoising capabilities, we additively degrade our uncorrupted synthetic matrix field (Ui)i∈J, with Ui ∈ S^+(3), with random matrices (Ni)i∈J, i.e., Fi = |Ui + Ni|, where Fi is the corrupted version of Ui. The eigenvalues of the noise matrix Ni stem from a Gaussian distribution with vanishing mean and standard deviation σ. The eigenvectors of the noise matrix result from choosing three uniformly distributed angles and rotating Ni by these angles around the coordinate axes. Finally, we take the absolute value for positive definiteness. Considering that the eigenvalues of the original matrix field are in the range [1000, 4000], the noisy tensor fields for σ = 500, 1000, 2000 are shown in Fig. 1.
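A possible way to generate such noise is sketched below. It follows the description above (Gaussian eigenvalues, random rotations about the coordinate axes, absolute value of the sum, with |·| interpreted as the spectral absolute value), but the concrete routine is our own illustration, not the authors' code.

```python
import numpy as np

def rotation_xyz(a, b, c):
    """Rotation matrix built from rotations about the x-, y- and z-axes."""
    Rx = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
    Ry = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
    Rz = np.array([[np.cos(c), -np.sin(c), 0], [np.sin(c), np.cos(c), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def matrix_abs(M):
    """Matrix absolute value: |M| = V diag(|l|) V^T for symmetric M."""
    w, V = np.linalg.eigh(M)
    return (V * np.abs(w)) @ V.T

def degrade(U, sigma, rng):
    """Corrupt one SPD matrix U as F = |U + N| with a random symmetric N."""
    eigvals = rng.normal(0.0, sigma, size=3)
    R = rotation_xyz(*rng.uniform(0.0, 2.0 * np.pi, size=3))
    N = R @ np.diag(eigvals) @ R.T
    return matrix_abs(U + N)

rng = np.random.default_rng(0)
U = np.diag([4000.0, 2000.0, 1000.0])
print(degrade(U, sigma=1000.0, rng=rng))
```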
4.1 Two Prominent Filtering Models: NDS-I and NDS-LE
We focus on two models: the NDS-I model when choosing H(U) = U, and the NDS-LE (log-Euclidean) model for H(U) = ln(U). Independently of the model, the choice of the tonal penalisers ΨD and ΨS is done following two strategies:

(P.1) Penalisers requiring no parameters at all. We use the Whittaker–Tikhonov penaliser for the data term, i.e., $\Psi_D(d^2) = d^2$, and the Nashed–Scherzer penaliser [23] for the smoothness term, i.e., $\Psi_S(d^2) = \beta d^2 + \sqrt{d^2 + \varepsilon^2}$, with $\varepsilon = 1$ and $\beta = \tfrac{1}{10}$.

(P.2) Penalisers with better edge-preservation properties, paying the price of including a contrast parameter λ. We use the classic Perona–Malik penaliser [24], $\Psi(d^2) = \lambda^2 \ln\!\left(1 + \tfrac{d^2}{\lambda^2}\right)$, in both the data and the smoothness term. In this case the parameter λ is estimated as the 1%-quantile of the distribution of distances for a particular distance measure d and noise level σ.
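For concreteness, the two penaliser strategies and the soft spatial window w_r used later in this section might look as follows in code. The function names and the synthetic distance sample used to estimate λ are our own illustration; the formulas are taken from the text, with squared distances d² as input.

```python
import numpy as np

def whittaker_tikhonov(d2):
    """(P.1) data term: Psi_D(d^2) = d^2."""
    return d2

def nashed_scherzer(d2, beta=0.1, eps=1.0):
    """(P.1) smoothness term: Psi_S(d^2) = beta*d^2 + sqrt(d^2 + eps^2)."""
    return beta * d2 + np.sqrt(d2 + eps ** 2)

def perona_malik(d2, lam):
    """(P.2): Psi(d^2) = lam^2 * ln(1 + d^2 / lam^2)."""
    return lam ** 2 * np.log1p(d2 / lam ** 2)

def soft_window(h2, r):
    """Spatial weight w_r(h^2) = exp(-h^2 / (2 r^2))."""
    return np.exp(-h2 / (2.0 * r ** 2))

# lambda estimated as the 1%-quantile of observed tensor distances
# (here a synthetic sample stands in for the real distance distribution).
distances = np.abs(np.random.default_rng(0).normal(0.0, 500.0, size=10000))
lam = np.quantile(distances, 0.01)
print(perona_malik(np.array([100.0, 1e6]), lam))
```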
Fig. 2. Influence of the parameter α on the NDS-I model (left) and the NDS-LE model (right) under different levels of noise σ = 500, 1000, 2000. The penalisers (P.2) are used in both models. NDS-I uses dF as tensor distance, while NDS-LE uses dLE . The size parameters of the spatial weight functions were set to rD = rS = 1.
Also independent of the filtering model, we consider two tensor distance measures: the Frobenius distance dF (4) and the log-Euclidean distance dLE (5). Last but not least, we use a soft window $w_r(h^2) = \exp\!\left(-\tfrac{h^2}{2r^2}\right)$ as spatial weight function for both the data and the smoothness term, with size parameters rD and rS, respectively.

4.2 Influence of Parameters
Although we have specified the NDS-I and the NDS-LE models in the previous section, note that there are still some free parameters that will directly influence the denoising capabilities of our filters: namely, the parameter α that counterbalances the contributions of the data and the smoothness term in (3), and the size parameters rD and rS of the spatial weight functions that allow smoothing within large neighbourhoods. Fig. 2 shows the influence of the parameter α on the NDS-I and NDS-LE models with respect to the reconstruction quality, measured as the norm of the difference between the original matrix field F and the denoised field U, i.e., $\|F - U\| := \left( \sum_{i=1}^{N} \|F_i - U_i\|_F^2 \right)^{1/2}$. The non-trivial steady-state is shown for α ∈ (0, 1]. We see that there is a value $\hat{\alpha}$ for which the restoration quality is optimal. We now want to quantify the influence of the size parameters rD, rS. Increasing the parameters naturally increases the number of arithmetic operations in (3). However, the restoration quality might be improved and the steady-state can be reached in a shorter time. If we vary the parameters in the range [0, 4] there are 25 possible combinations (rD, rS) that we arrange as O0, . . . , O24 following the ordering shown in Fig. 3 (top). The diagonal lines in the figure group the combinations according to complexity order (CO), i.e., configurations with equal/increasing number of operations. Fig. 3 (bottom) shows the restoration quality (left), the logarithmic computational time (middle), and the overall performance (right) of the NDS-I model. The last measurement is simply the
Fig. 3. Top: Ordering O0 , . . . , O24 for the different combinations of (rD , rS ) grouped according to complexity order COi (i = 1, . . . , 8). Bottom, left to right: Normalised restoration quality, computational time, and overall performance of the NDS-I model in filtering the noisy tensor field with noise level σ = 1000. The penalisers (P.2) are used with distance measure dF .
mean between the first two normalised measurements. We see that the configuration with the best performance in terms of quality and fast convergence is O8 = (rD, rS) = (1, 2) for α = 0.9. It is worth mentioning that the configurations O5 = (2, 0), O6 = (3, 0) and O14 = (4, 0) lead to good results despite the fact that they allow only for the incorporation of neighbourhood information in the data term. This is in agreement with the findings in [9]. The authors argued that filters based only on nonlocal M-smoothers can produce similar results to those obtained via classical variational/regularisation methods. The observations presented here are also valid for the NDS-LE filtering model.
Fig. 4. Left: Restoration quality achieved by the NDS-I model using independently both types of penalisers (P.1) and (P.2), as well as both distance measures dF and dLE. Right: The same for the NDS-LE model. All parameters rD, rS and α were optimised.

Table 1. Best filtering results for both the NDS-I and the NDS-LE models under noise level σ = 500, 1000, 2000. All parameters were optimised.

σ      Model    rD  rS  α    ||F − U||  Iter.  Time (s)
500    NDS-I    2   3   0.9  167        33     0.69
       NDS-LE   1   2   0.9  264        71     19.31
1000   NDS-I    1   2   0.9  409        72     0.71
       NDS-LE   2   0   0.2  568        236    57.97
2000   NDS-I    2   0   0.1  1238       256    1.91
       NDS-LE   2   1   0.9  1214       32     9.50

4.3 Comparing the Models
In this section we juxtapose the NDS-I and the NDS-LE models. We evaluate their performance in filtering the noisy tensor fields shown in Fig. 1 for different levels of noise σ = 500, 1000, 2000. Fig. 4 depicts the restoration quality achieved by both the NDS-I and the NDS-LE frameworks. We notice that both models achieve the best performance when the Perona-Malik penalisers (P.2) are employed. With respect to the tensor distance measures, it turned out that the NDS-I model works better with the Frobenius distance, while the NDS-LE model in principle performs better with the log-Euclidean distance.¹ The best results are outlined in Table 1. It is clear that both models benefit from nonlocal smoothing by considering rD, rS > 0. Note that the NDS-I model is considerably faster than the NDS-LE variant, the latter being burdened with the additional computation of logarithms and exponentials of matrices. Computations have been performed on a 1.86 GHz Intel Core 2 Duo processor (without exploiting multitasking) executing C code.

¹ It is slightly worse for the noise level σ = 2000.
Fig. 5. Left Column, Top to Bottom: Matrix-fields degraded with noise level σ = 500, 1000, 2000. Middle Column, Top to Bottom: Steady-state results of Table 1 for NDS-I filtering of noisy tensor fields with noise level σ = 500, 1000, 2000. Right Column, Top to Bottom: The same for NDS-LE filtering.
Fig. 5 shows the denoised matrix fields for the results presented in Table 1. At any noise level NDS-I filtering produces a slightly more homogeneous output, in accordance with the original, than the NDS-LE model. This effect is most prominent in the case of the filtering of the noisy field with noise level σ = 1000, but it is also present in the filtered version of the noisy field associated with σ = 500, in particular in the lower part of the inner ring. Particularly noticeable in the example for σ = 1000 is that both the edges of the image structures and the anisotropy of the matrices are better preserved if filtered with the NDS-I model than with the NDS-LE variant. Moreover, the eigenvalue-swelling effect on the edges is more perceptible in the NDS-LE model than in the NDS-I model.

4.4 Test on DT-MRI Data
In DT-MRI, noisy diffusion weighted images (DWIs) are used to estimate the diffusion tensors via regression analysis. It is known that DWIs are perturbed by Rician noise [25]. However, the noise distribution of the diffusion tensors obeys a multivariate Gaussian distribution, as has been statistically proven by Pajevic and Basser [26]. Here, as was done in the previous section, we directly apply our filtering framework to the tensor field, and not to the scalar DWIs. We use a
Fig. 6. Denoising capabilities of the NDS-I model on real-world data. Top Left: 2-D section (50 × 70 × 1 voxels) of a 3-D DT-MRI dataset showing the corpus callosum. Top Right: Scaled-up region of the corpus callosum. Bottom Left: Filtered region using the NDS-I model with penalisers (P.2) and distance measure dF , and parameters λ = 140 (0.01%-quantile), rD = 1, rS = 2, and α = 0.9. 386 iterations (≈4 seconds) were needed to reach the steady-state. Bottom Right: The same with parameters λ = 355 (1%-quantile). 184 iterations (≈2 seconds) needed.
real-world 3-D DT-MRI dataset of a human head consisting of a 128 × 128 × 30 field of positive definite matrices. Fig. 6 shows a 2-D section of the corpus callosum, which has been filtered using the NDS-I model. We see that after denoising, edges are well preserved and localised, and zones with different anisotropy are clearly distinguished. These characteristics are important in applications such as tractography [27] and the study of diseases associated with certain abnormalities in the brain anatomy [28].
5 Conclusions
In its fixed-point form, the NDS filtering framework has been extended in full generality to the matrix-valued setting. It generalises several known filtering concepts suggested in the literature for the filtering of DT-MRI data, including those employing the log-Euclidean framework to preserve positive definiteness of the data. Despite its many degrees of freedom it does not require sophisticated tuning to outperform previous related filtering concepts concerning computational time and denoising quality. We emphasise that our methodology is generic and thus not restricted to DT-MRI denoising. It can be applied to any multi-valued image with values in the space of symmetric matrices. In future work we will make full use of the directional and shape information of the local structures to steer the filtering process.

Acknowledgements. We gratefully acknowledge partial funding by the Deutscher Akademischer Austauschdienst (DAAD), grant A/05/21715.
References
1. Lee, J.S.: Digital image smoothing and the sigma filter. Computer Vision, Graphics, and Image Processing 24, 255–269 (1983)
2. Chu, C.K., Glad, I.K., Godtliebsen, F., Marron, J.S.: Edge-preserving smoothers for image processing. Journal of the American Statistical Association 93, 526–541 (1998)
3. Polzehl, J., Spokoiny, V.: Adaptive weights smoothing with applications to image restoration. Journal of the Royal Statistical Society, Series B 62, 335–354 (2000)
4. Winkler, G., Aurich, V., Hahn, K., Martin, A.: Noise reduction in images: Some recent edge-preserving methods. Pattern Recognition and Image Analysis 9, 749–766 (1999)
5. Griffin, L.D.: Mean, median and mode filtering of images. Proceedings of the Royal Society of London A 456, 2995–3004 (2000)
6. Tomasi, C., Manduchi, R.: Bilateral filtering for gray and colour images. In: Proc. of the 1998 IEEE International Conference on Computer Vision, Bombay, India, pp. 839–846. Narosa Publishing House (1998)
7. Mrázek, P., Weickert, J., Bruhn, A.: On robust estimation and smoothing with spatial and tonal kernels. In: Klette, R., Kozera, R., Noakes, L., Weickert, J. (eds.) Geometric Properties for Incomplete Data, pp. 335–352. Springer, Heidelberg (2006)
8. Didas, S., Mrázek, P., Weickert, J.: Energy-based image simplification with nonlocal data and smoothness terms. In: Iske, A., Levesley, J. (eds.) Algorithms for Approximation, pp. 51–60. Springer, Heidelberg (2006)
9. Pizarro, L., Didas, S., Bauer, F., Weickert, J.: Evaluating a general class of filters for image denoising. In: Ersbøll, B.K., Pedersen, K.S. (eds.) SCIA 2007. LNCS, vol. 4522, pp. 601–610. Springer, Heidelberg (2007)
10. Tschumperlé, D., Deriche, R.: Diffusion tensor regularization with constraints preservation. In: Proc. of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), vol. 1, pp. 948–953 (2001)
11. Weickert, J., Brox, T.: Diffusion and regularization of vector- and matrix-valued images. In: Nashed, M.Z., Scherzer, O. (eds.) Inverse Problems, Image Analysis, and Medical Imaging. Contemporary Mathematics, vol. 313, AMS, Providence (2002)
12. Chefd'hotel, C., Tschumperlé, D., Deriche, R., Faugeras, O.: Regularizing flows for constrained matrix-valued images. Journal of Mathematical Imaging and Vision 20, 147–162 (2004)
13. Batchelor, P.G., Moakher, M., Atkinson, D., Calamante, F., Connelly, A.: A rigorous framework for diffusion tensor calculus. Magnetic Resonance in Medicine 53, 221–225 (2005)
14. Moakher, M.: A differential geometry approach to the geometric mean of symmetric positive-definite matrices. SIAM Journal on Matrix Analysis and Applications, 735–747 (2005)
15. Pennec, X., Fillard, P., Ayache, N.: A Riemannian framework for tensor computing. International Journal of Computer Vision 66, 41–66 (2006)
16. Arsigny, V., Fillard, P., Pennec, X., Ayache, N.: Log-Euclidean metrics for fast and simple calculus on diffusion tensors. Magnetic Resonance in Medicine 56, 411–421 (2006)
17. Fletcher, P.T., Joshi, S.: Riemannian geometry for the statistical analysis of diffusion tensor data. Signal Processing 87, 250–262 (2007)
18. Gur, Y., Sochen, N.: Coordinate-free diffusion over compact Lie-groups. In: Sgallari, F., Murli, A., Paragios, N. (eds.) SSVM 2007. LNCS, vol. 4485, pp. 580–591. Springer, Heidelberg (2007)
19. Burgeth, B., Didas, S., Florack, L., Weickert, J.: A generic approach to diffusion filtering of matrix-fields. Computing 81, 179–197 (2007)
20. Steidl, G., Setzer, S., Popilka, B., Burgeth, B.: Restoration of matrix fields by second order cone programming. Computing 81, 161–178 (2007)
21. Fillard, P., Arsigny, V., Pennec, X., Ayache, N.: Joint estimation and smoothing of clinical DT-MRI with a log-Euclidean metric. Research Report RR-5607, INRIA, Sophia-Antipolis, France (2005)
22. Hamarneh, G., Hradsky, J.: Bilateral filtering of diffusion tensor magnetic resonance images. IEEE Transactions on Image Processing 16, 2463–2475 (2007)
23. Nashed, M.Z., Scherzer, O.: Least squares and bounded variation regularization with nondifferentiable functionals. Numerical Functional Analysis and Optimization 19, 873–901 (1998)
24. Perona, P., Malik, J.: Scale space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence 12, 629–639 (1990)
25. Gudbjartsson, H., Patz, S.: The Rician distribution of noisy MRI data. Magnetic Resonance in Medicine 34, 910–914 (1995) [published erratum appears in Magnetic Resonance in Medicine 36, 332 (1996)]
26. Pajevic, S., Basser, P.J.: Parametric and non-parametric statistical analysis of DT-MRI data. Journal of Magnetic Resonance 161, 1–14 (2003)
27. Mori, S., van Zijl, P.C.: Fiber tracking: principles and strategies - a technical review. NMR in Biomedicine 15, 468–480 (2002)
28. Nucifora, P.G., Verma, R., Lee, S.K., Melhem, E.R.: Diffusion-tensor MR imaging and tractography: Exploring brain microstructure and connectivity. Radiology 245, 367–384 (2007)
Multi-scale Improves Boundary Detection in Natural Images

Xiaofeng Ren

Toyota Technological Institute at Chicago
1427 E. 60 Street, Chicago, IL 60637, USA
[email protected]
Abstract. In this work we empirically study the multi-scale boundary detection problem in natural images. We utilize local boundary cues including contrast, localization and relative contrast, and train a classifier to integrate them across scales. Our approach successfully combines strengths from both large-scale detection (robust but poor localization) and small-scale detection (detail-preserving but sensitive to clutter). We carry out quantitative evaluations on a variety of boundary and object datasets with human-marked groundtruth. We show that multi-scale boundary detection offers large improvements, ranging from 20% to 50%, over single-scale approaches. This is the first time that multi-scale is demonstrated to improve boundary detection on large datasets of natural images.
1 Introduction

Edge detection is a fundamental problem in computer vision that has been intensively studied in the past fifty years. Traditional approaches were built on the analysis of ideal edge models and white sensing noise. A variety of edge detectors were developed, mostly based on image gradients and Gaussian derivative filters, leading to the popular Canny edge detector [1] that we still use today.

Edges, like any other image structures, are multi-scale in nature. Early work on multi-scale edge detection used Gaussian smoothing at multiple scales [2]. Scale-Space theory gradually emerged [3] and evolved into a field of its own [4,5]. The Scale-Space theory states that, under a set of mild conditions, the Gaussian function is the unique kernel to generate multi-scale signals. The Scale-Space theory also provides guidelines on the selection and integration of signals across scales [6].

In practice, Gaussian-based edge detectors have considerable difficulty dealing with natural scenes, where idealized edge and noise models do not hold. To address the challenges of natural images, recent approaches have adopted a learning paradigm: large datasets of natural images have been collected and hand-labeled, such as the Berkeley Segmentation Dataset [7], providing both training data and evaluation benchmarks. Boundary detection is formulated as learning to classify salient boundaries against background [8]. State-of-the-art detectors combine local brightness, color and texture contrasts and have been shown to outperform traditional gradient-based approaches [9].

It would be a natural step to combine the strengths of learning-based boundary operators with the insights from classical multi-scale edge detection. Surprisingly, very few efforts have been devoted to this line of research. The analysis in [10] is based
Fig. 1. We run the Probability-of-Boundary operator [9] to generate boundary contrast signals at multiple scales. Here we show both the (soft) contrast map and the boundary map (after nonmaximum suppression), at a small scale and a large scale. Large-scale signals are reliable but poor in localization, and details are smoothed out. On the other hand, small-scale signals capture detailed structures but suffer from false positives in textured regions and clutter. The challenge is how to combine the strengths of them.
on gradient magnitudes only and does not report detection performance. The learning approach in [11] uses a large number of features without any specific discussion on multi-scale. The work of [12] focuses on finding elongated structures in medical images. No benchmarking results have been reported to show that multi-scale detection is better than single-scale approaches. On the other hand, the authors of [9] commented that they found no benefit using multi-scale signals. Beyond local classification, multi-scale has also been extensively explored in midlevel grouping, especially in image segmentation (e.g. [13,14]). The focus of multi-scale segmentation is typically on the pyramid representation of images, such as for Markov random fields, and on global inference and smoothing. More recently, multi-scale intensity cues have been successfully used (e.g. [15]) in segmenting natural images. However, segmentation algorithms typically do not produce boundary-based benchmarking results, partly because many of them focus on large-scale salient regions/boundaries and tend to ignore details in an image. Is boundary detection in natural images so difficult that it eludes multi-scale processing? That would be hard to believe. Studies in natural image statistics have strongly suggested that scale-invariance or multi-scale structure is an intrinsic property of natural images (e.g. [16]). In particular, scaling phenomena have been observed in the statistics of boundary contours [17,18]. This multi-scale nature must have important implications for boundary detection. In Figure 1, we show several examples of multi-scale boundary contrast signals, by running the (publicly available) Probability-of-Boundary operator (Pb) [9] at several disk sizes. Just as we have learned from gradient-based multi-scale processing, signals at different scales exhibit different characteristics: at a large scale, edge detection is reliable, but its localization is poor and it misses small details; at a small scale, details are preserved, but detection suffers greatly from clutters in textured regions. No doubt there is information in multi-scale. The challenge is how to combine the strengths of small and large scales, so as to improve boundary detection performance, not on a single image, but on large collections of natural images.
In this work we empirically study the problem of multi-scale boundary detection in natural images. We explore a number of multi-scale cues, including boundary contrast, localization and relative contrast. We find that multi-scale processing significantly improves boundary detection. A linear classifier combining multi-scale signals outperforms most existing results on the Berkeley Segmentation Benchmark [7]. Extensive experimentation shows that the benefits of multi-scale processing are large and ubiquitous: we improve boundary detection performance by 20% to 50% on four other boundary and object recognition datasets.

Our empirical work is important and of sufficient interest because this is the first time, after over 20 years of active research, that multi-scale is shown to improve boundary detection on large collections of natural images. Our results are useful both in quantifying the benefits of doing multi-scale in local boundary classification, as well as in comparing to alternative approaches to boundary detection (such as mid-level grouping). We obtain these results with a simple algorithm, which can have immediate applications in edge-based object and scene recognition systems.

2 Multi-scale Boundary Detection

A (single-scale) boundary detector finds edges by measuring contrast in a fixed local window. Multi-scale detection varies the scale of the window and combines signals from multiple scales. There are two issues in this process:

Correspondence/Tracking: How do signals across scales correspond to one another? If working with discrete edges (i.e. contrast peaks after non-maximum suppression), one would need to track edge locations across scales.

Cue combination: How does one integrate boundary signals from multiple scales?

Traditional approaches to multi-scale edge detection focus on the correspondence (or tracking) problem. Tracking can either be coarse-to-fine, such as in Bergholm's edge focusing strategy [19], or be fine-to-coarse, such as in Canny's feature synthesis [1]. A large number of multi-scale schemes have been proposed along these lines (see a survey in [20]). Most approaches take a simple view of cue combination: they either accept edges (after thresholding) at all scales, or accept edges that appear at the coarsest scale.

The cue combination problem can be easily accommodated and analyzed in the learning paradigm of boundary detection: cues from multiple scales are inputs to a binary classification problem, and we can maximize performance over the selection of cues and the choice of the classifier. Given the complexities of signals in natural images, the correspondence problem is non-trivial; yet we may quantitatively evaluate and choose between candidate strategies.

We base our analysis on the Berkeley Segmentation Dataset (BSDS) [7] and the Probability-of-Boundary (Pb) operator [9]. The BSDS collection includes 300 images of various natural scenes, each with multiple human-marked segmentations. The Pb operator has been shown to outperform other edge detectors on the BSDS images. Pb measures local contrast by computing histogram differences between brightness, color and texture distributions in two half-disks. If we vary the radius of the half-disks, we obtain contrast signals at multiple scales. For each scale s, we keep two sets of
2 Multi-scale Boundary Detection A (single-scale) boundary detector finds edges by measuring contrast in a fixed local window. Multi-scale detection varies the scale of the window and combines signals from multiple scales. There are two issues in this process: Correspondence/Tracking: How do signals across scales correspond to one another? If working with discrete edges (i.e. contrast peaks after non-maximum suppression), one would need to track edge locations across scales. Cue combination: How does one integrate boundary signals from multiple scales? Traditional approaches to multi-scale edge detection focus on the correspondence (or tracking) problem. Tracking can either be coarse-to-fine, such as in Bergholm’s edge focusing strategy [19], or be fine-to-coarse, such as in Canny’s feature synthesis [1]. A large number of multi-scale schemes have been proposed along these lines (see a survey in [20]). Most approaches take a simple view of cue combination: they either accept edges (after thresholding) at all scales, or accept edges that appear at the coarsest scale. The cue combination problem can be easily accommodated and analyzed in the learning paradigm of boundary detection: cues from multiple scales are inputs to a binary classification problem, and we can maximize performance over the selection of cues and the choice of the classifier. Given the complexities of signals in natural images, the correspondence problem is non-trivial; yet we may quantitatively evaluate and choose between candidate strategies. We base our analysis on the Berkeley Segmentation Dataset (BSDS) [7] and the Probability-of-Boundary (Pb) operator [9]. The BSDS collection includes 300 images of various natural scenes, each with multiple human-marked segmentations. The Pb operator has been shown to outperform other edge detectors on the BSDS images. Pb measures local contrast by computing histogram differences between brightness, color and texture distributions in two half-disks. If we vary the radius of the half-disks, we obtain contrast signals at multiple scales. For each scale s, we keep two sets of
536
X. Ren (s)
(s)
data: Psof t , soft contrast at each pixel before non-maximum suppression, and Ppeak , sparse/localized edges after non-maximum suppression. Small-scale signals are typically better for localization. Hence when we set up the classification, we only consider locations that generates a maximal response at the (1) smallest scale ( where Ppeak > 0). For each such location, we define a set of cues at multiple scales to classify boundary vs non-boundary. In this work we restrict ourselves to local boundary detection, i.e. making independent decisions at each location. 2.1 Multi-scale Boundary Cues There are two perspectives of “scale” in boundary detection: (1) intuitively, edges have intrinsic scales; and (2) we use measurements at multiple scales to capture the intrinsic scales and to improve detection performance. In the examples in Figure 1, the back of the horse is a typical large-scale edge, and the textured region under the penguin contain typical small-scale edges. A large-scale edge is much more likely to be an object boundary, and small-scale edges are mostly false positives. There are several cues that can help us make this distinction: 1. contrast: A large-scale edge has consistent contrast measurements at multiple scales. A small-scale edge would have high contrast at a small observation scale but lower contrasts at larger scales. 2. localization: If we look at the peak response (after non-maximum suppression), a large-scale edge tends to have a consistent location and does not shift much. Peak locations of small-scale edges become unreliable in large-scale measurements or disappear altogether. 3. relative contrast: Also known as contrast normalization, the strength of an edge relative to its surroundings is a cue for boundary saliency. A weak contrast boundary may be salient, at a large observation scale, if other locations around it are significantly lower in contrast. Texture edges, though maybe high contrast, is not salient because many other edges nearby have comparable contrasts. For contrast, we use the soft Pb contrast computed using multiple disk sizes, converted (back) to a linear scale: (s) (s) E (s) = log Psof t /(1 − Psof t ) (s)
We also define a localization cue: we threshold1 the peak signal Ppeak into a binary edge map, and compute its distance transform. The result is a distance d(s) from each pixel to the closest peak location, at each scale s. We define localization as: D(s) = log d(s) + 1 Note that in defining contrast E (s) , we have avoided the correspondence problem by using signals before non-maximum suppression. There are, of course, issues associated with this simple strategy. The soft contrasts at large scales are fairly blurry. That is, contrast generated by a single boundary may extend spatially and boost false positives in surrounding areas. The localization cue compensates this lack of correspondence: for 1
We choose thresholds such that 95% of the groundtruth edges are kept.
Fig. 2. Empirical distributions of boundary cues across scales. (a) Means and standard deviations of boundary contrast cues (E) at all 6 scales; in average, contrast is higher for positives or true boundaries. (b) Means and standard deviations of localization cues (D), distances to closest edge peaks; true boundaries are better localized. (c) Distributions of relative contrast (R); true boundaries are typically higher in contrast relative to its neighbors. We find all the cues informative. However, they are also noisy as shown by the large standard deviations.
an off-boundary point, even though the contrast may be high at a large scale, it is far away from peak locations and hence will be suppressed.

Finally, for contrast normalization, we compute average contrasts $P^{(s)}_{avg,L}$ and $P^{(s)}_{avg,R}$ in the "left" and "right" half disks around each point, and define relative contrast as:

$$R^{(s)} = \log\!\left( P^{(s)}_{soft} \big/ \min\!\left( P^{(s)}_{avg,L},\, P^{(s)}_{avg,R} \right) \right)$$

where the "left" and "right" disks² are oriented using the maximum Pb orientation.

2.2 Cue Validation

Having defined multi-scale boundary cues, the first thing to ask is whether these features are informative for boundary detection. To answer this question empirically, we use the 200 training images in BSDS and compare distributions around on-boundary and off-boundary locations. In Figure 2 we visualize the empirical means and standard deviations of the distributions. Two observations can be made:

– The cues we have defined are indeed informative for boundary detection, as one would have expected from intuition. All the cues, at all scales, exhibit different distributions for positive and negative examples.
– The signals are nevertheless noisy. Individually these boundary cues are weak, as the standard deviations are large compared to the differences between the means.

2.3 Cue Combination

In the learning paradigm of boundary detection, finding boundaries is formulated as a classification between boundary and non-boundary pixels. The boundary classification problem has the characteristics of having a large amount of training data, relatively low dimension, and poor separability (significant overlap between classes). Consistent with the observations in [9], we have found in our experiments that logistic regression, linearly combining cues across scales, performs as well as a fair number of other standard techniques. We choose logistic regression for its simplicity.
² We set the scale to be 2.5 times the disk radius in Pb.
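As an illustration of Section 2.1, the three cues could be assembled from a stack of soft and peak Pb maps roughly as follows. This is a sketch under our own naming; it uses a generic distance transform in place of the author's implementation, and the oriented half-disk averaging for relative contrast is simplified to an isotropic local average.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, uniform_filter

def logit(p, eps=1e-6):
    p = np.clip(p, eps, 1.0 - eps)
    return np.log(p / (1.0 - p))

def multiscale_cues(p_soft, p_peak, thresholds):
    """Compute contrast E, localization D and (simplified) relative contrast R.

    p_soft, p_peak: lists of 2-D arrays, one per scale s.
    thresholds: per-scale thresholds for binarising the peak maps.
    """
    E, D, R = [], [], []
    for ps, pp, t in zip(p_soft, p_peak, thresholds):
        E.append(logit(ps))                                   # E^(s)
        d = distance_transform_edt(pp <= t)                   # distance to peaks
        D.append(np.log(d + 1.0))                             # D^(s)
        local_avg = uniform_filter(ps, size=9)                # crude surround
        R.append(np.log(np.clip(ps, 1e-6, None) /
                        np.clip(local_avg, 1e-6, None)))      # R^(s), simplified
    return np.stack(E), np.stack(D), np.stack(R)

# Features at candidate locations (peaks of the smallest scale) can then be
# stacked and fed to a linear classifier such as logistic regression.
rng = np.random.default_rng(0)
p_soft = [rng.random((32, 32)) for _ in range(3)]
p_peak = [np.where(ps > 0.9, ps, 0.0) for ps in p_soft]
E, D, R = multiscale_cues(p_soft, p_peak, thresholds=[0.5, 0.5, 0.5])
print(E.shape, D.shape, R.shape)
```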
Fig. 3. Precision-recall curves for performance evaluation. (a) Comparing to the standard (scale 1) Pb operator, large-scale Pb has a higher precision (at the low recall range), and small-scale Pb has a higher (asymptotic) recall. Our multi-scale approach combines the strengths of both, dominating any fixed scale Pb. (b) Comparing to gradient-based approaches: scale-space edge detection of Lindeberg [6] offers minor improvements over Canny [1]. Pb performs much better than gradient-based methods. Our multi-scale classifier offers a large and consistent improvement over Pb and achieves an F-measure of 0.68, outperforming existing approaches.
3 Experiments and Empirical Analysis

We carry out empirical analysis of our multi-scale boundary detection scheme using the Berkeley Segmentation Dataset, 200 training and 100 test images of resolution 480-by-320. We run the Pb operator at 6 scales, half-octave apart (√2), starting at one half-octave lower than the default Pb scale. We obtain positive training examples by running a minimum distance matching between groundtruth boundary pixels and edge pixels in the smallest scale (i.e. $P^{(1)}_{peak}$), with a distance threshold of 3 pixels. For negative
examples, we use edges in $P^{(1)}_{peak}$ that are 20 pixels away from groundtruth boundaries. Boundary detection performance is evaluated in the precision-recall framework of [9]. We use both the F-measure (harmonic mean of precision and recall) and average precision (area under a P/R curve) [21]. F-measure selects the best trade-off point of precision vs recall. Average precision summarizes performance using entire curves and is better at revealing differences in sub-ranges of recall.

3.1 Multi-scale Improves Boundary Detection

In Figure 3(a), we show the precision-recall curve for our multi-scale detection, along with the performance of Pb at three scales (0.7, 1 and 2). The performance of Pb at multiple scales is exactly as expected: at a small scale, the precision is lower, but more details are recovered and the asymptotic recall is higher. At a large scale, the precision is higher in the beginning, showing salient boundaries being more reliably detected; however, the curve saturates at a lower recall. Our multi-scale approach combines the strengths of both small- and large-scale, producing a P/R curve that dominates that of Pb at any fixed scale. Our approach achieves the asymptotic recall rate of the smallest
scale while maintaining high precision in the entire range. In particular, the performance in the mid-range (0.5–0.8 recall) is much higher than any of the fixed scale Pb.

In our evaluation we also include two gradient-based approaches: the Canny edge detector [1], which uses single-scale gradients, and the scale-space edge detector of Lindeberg [6]³. The scale (sigma) in Canny is set to be 0.0025. For scale-space edges we use 4 scales, initial search scale 1.5, and search grid spacing of 8, 16 and 32 pixels. In Figure 3(b), we show the precision-recall curves for all four approaches: Canny, scale-space edges, Pb, and our approach "Multi-Pb". We observe that in our setting, scale-space edge detection finds only minor improvements over Canny and drops below Canny at the end. In comparison, our multi-scale approach offers a large and consistent improvement over single-scale Pb⁴. The F-measure of our results is 0.68, higher than most existing results on the Berkeley benchmark [11,12,22,23,24,25]. On the gray-scale version of the BSDS benchmark, our F-measure is 0.66.

It is interesting to compare our results to that in [9], where the authors did not find much advantage of using multi-scale. The two approaches are similar in spirit; the main differences are: (1) they worked with smaller images (half resolution); (2) they used only three scales, half octave apart; (3) they did non-maximum suppression after combining the signals, where we use small-scale Pb only; and (4) we have used additional features such as relative contrast or localization. Table 1 shows an empirical analysis of the differences. We find that our success is a combination of these factors, each contributing to the final improvement. In particular, our non-maximum suppression scheme is better, as it preserves details and prevents them from being smoothed out by large-scale signals (which typically have higher weights).

Table 1. We compare the average precision of standard Pb and Multi-Pb (this work) with several variants: Pb combined at 3 scales, with non-maximum suppression applied after combination; Pb combined at 3 scales (with our non-maximum suppression scheme); and Pb combined at 6 scales. We show results on both the full-sized BSDS images (with 0.75% distance threshold) and half-sized ones (with 1% threshold). These results help explain why our conclusion of multi-scale here is positive while that in [9] was negative.

Avg. Precision       Pb (single)  Pb 3 (soft)  Pb 3   Pb 6   Multi-Pb
full-res (480x320)   0.648        0.647        0.683  0.687  0.712
half-res (240x160)   0.643        0.641        0.666  0.677  0.683
3.2 Cue Evaluation We quantitatively measure individual contributions of the cues by running multi-scale detection with subsets of the features. All these experiments use logistic regression as the classifier. We use average precision for evaluation. Figure 4(a) shows the average precision of detection when we vary the number of scales used. We start with a single scale (the smallest) and gradually expand to 6 scales. 3 4
We thank Mark Dow for his code at http://lcni.uoregon.edu/ mark/SS Edges/SS Edges.html. Note that the BSDS evaluation has changed considerably from that in [9]. In particular, the F-measure for Pb is 0.65 under the current implementation.
Fig. 4. Cue combination evaluated with average precision: (a) improvements in performance as we gradually add the number of scales; (b) improvements over default scale Pb by adding subsets of features, including contrast (E), localization (D), and relative contrast (R); (c) improvements over Pb with different choices of distance tolerance in evaluation
We observe that the improvement is much larger in the first few scales, showing diminishing returns. Nevertheless, the large scales still make contributions, possibly indicating the existence of large-scale boundaries in the dataset that are best captured at a large scale of observation.

In Figure 4(b), we evaluate the contributions of the three sets of cues: contrast (E), localization (D), and relative contrast (R). They are combined with the default (second smallest) scale Pb to show the improvements over Pb. Individually, contrast (E) and relative contrast (R) work better. However, there seems to be a fair amount of redundancy between contrast (E) and relative contrast (R). Localization (D) by itself does not work well; however, it improves performance when combined with contrast (E).

In the precision-recall evaluation, detections are matched to groundtruth boundaries with a distance threshold. The default in the BSDS benchmark is 0.75% of image diagonal, about 5 pixels. We vary this tolerance and show the results in Figure 4(c). It appears that the (absolute) improvement in average precision is about the same for all the choices of tolerance. Relative to Pb, the improvement is larger at small distance tolerances, indicating that the multi-scale approach is better at localizing boundaries.

3.3 Additional Experiments

The empirical evaluations on the Berkeley Segmentation Dataset are very encouraging: we show large and consistent improvements using multi-scale cues. To further verify the benefits of multi-scale detection, we test our approach on four other large datasets with groundtruth segmentation: 30 images from the CMU motion boundary dataset [26], 519 images from the MSR Cambridge object dataset [27,28] (MSRC2), 422 images from the segmentation competition in the PASCAL challenge 2007 [29], and 218 images from a subset (Boston houses 2005) of the LabelMe database [30]. These datasets add a lot of variety to our empirical evaluation. They include large vs. small objects, low- vs. high-resolution images, single-object photos vs. complex outdoor and indoor scenes, and detailed boundary labeling vs coarse polygonal contours. To show the robustness of our approach, in all these experiments we use the same parameters trained from BSDS.
Fig. 5. Precision-recall evaluation on four other datasets with groundtruth segmentation, covering a variety of objects and scenes. We use parameters trained from BSDS. Our multi-scale approach significantly improves over Pb in all four cases, in the entire range of recall.
Figure 5 shows precision-recall evaluations on the four datasets. The CMU motion dataset and the MSRC2 dataset contain large objects; we show P/R curves using distance tolerance 0.75% in benchmarking. The PASCAL and LabelMe datasets have high resolution and tend to have small objects in scenes; we show P/R curves using distance tolerance 0.6%. Table 2 lists average precisions for the five datasets at two thresholds. These experiments show that our multi-scale approach is robust and offers large improvements over single-scale approaches. The amount of improvement differs, from about 20% (in average precision) in PASCAL 07 to about 45% in MSRC2. We see more improvements in CMU motion and MSRC2, probably because they both tend to have large objects. Our results on the CMU motion boundary dataset are comparable to what has been reported in [26], remarkable because we use no motion at all. The precisions on PASCAL and LabelMe images are lower for all approaches, probably because only a subset of objects and boundaries in these images are marked. Nevertheless, we still improve performance there proportionally.

We observe that improvements over Pb are most prominent in the low-recall/high-precision range. Similar phenomena have been found in many related studies on natural image boundary detection [22,24,25]. These approaches typically focus on the
Table 2. Average precision evaluation on all five datasets, comparing four approaches at two distance thresholds in benchmarking. Our approach improves single-scale Pb by about 10% on BSDS, 20% on PASCAL and LabelMe, about 30% on CMU motion and 45% on MSRC2. Highlighted numbers correspond to the curves in Figure 5. Qualitatively there is little difference when we vary the distance threshold to other values.

                      Th=0.6%                              Th=0.75%
Dist. Threshold       Canny   S Grad.  Pb      Multi-Pb    Canny   S Grad.  Pb      Multi-Pb
BSDS (test)           0.554   0.527    0.593   0.663       0.605   0.589    0.648   0.712
CMU Motion            0.245   0.252    0.315   0.413       0.271   0.287    0.350   0.448
MSRC2                 0.170   0.156    0.197   0.283       0.193   0.182    0.228   0.325
PASCAL 07             0.197   0.173    0.196   0.242       0.226   0.204    0.233   0.277
LabelMe (houses)      0.223   0.206    0.211   0.251       0.254   0.235    0.245   0.283
Table 3. Multi-scale processing can be made efficient by working with sub-sampled images at large scales. Average precision hardly decreases when we use an image pyramid.

Avg. Precision   Multi-Pb   Multi-Pb (Pyramid)
BSDS             0.712      0.707
CMU Motion       0.448      0.443
MSRC2            0.325      0.323
PASCAL07         0.277      0.275
LabelMe          0.283      0.295
low-recall range and show little improvement near high recall. In comparison, our approach offers consistent improvements for all recall.

3.4 Pyramid-Based Multi-scale Processing

Our approach is based on the Pb operator, which computes histogram differences between two half disks. The computational cost increases linearly with disk area, and at large scales it becomes prohibitive for large images. A standard solution is to use a pyramid and work with sub-sampled images at large scales. We have tested this pyramid approach: at a large scale s > 1, we resize images by a factor of 1/√s, hence keeping the cost constant across scales. An empirical comparison is shown in Table 3. As expected, we find there is little loss in performance.
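A sketch of this pyramid trick is shown below. The edge-strength operator here is a stand-in (a simple gradient magnitude) rather than the Pb operator, and the function names are ours; the point is only the resize-by-1/√s bookkeeping that keeps the per-scale cost roughly constant.

```python
import numpy as np
from scipy.ndimage import sobel, zoom

def edge_strength(img):
    """Placeholder local contrast operator (gradient magnitude), not Pb."""
    return np.hypot(sobel(img, axis=0), sobel(img, axis=1))

def pyramid_multiscale(img, scales=(1.0, np.sqrt(2), 2.0, 2.0 * np.sqrt(2))):
    """Compute contrast maps at several scales on sub-sampled images.

    For scale s > 1 the image is resized by 1/sqrt(s); the response is
    upsampled back to the original resolution so maps can be stacked.
    """
    h, w = img.shape
    maps = []
    for s in scales:
        factor = 1.0 / np.sqrt(s) if s > 1.0 else 1.0
        small = zoom(img, factor, order=1)
        resp = edge_strength(small)
        resp = zoom(resp, (h / resp.shape[0], w / resp.shape[1]), order=1)
        maps.append(resp[:h, :w])
    return np.stack(maps)

img = np.random.default_rng(0).random((64, 96))
print(pyramid_multiscale(img).shape)   # (num_scales, 64, 96)
```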
4 Discussions

We have studied multi-scale boundary detection in the context of natural images. Conceptually our approach is straightforward: we compute contrast and localization cues for a number of scales, and use logistic regression to linearly combine them. Our multi-scale approach combines the strengths of both large-scale (high precision) and small-scale (high recall and good localization). Our approach outperforms most reported results on the Berkeley Segmentation Benchmark. Significant improvements (20% to 45%) have been demonstrated on four other boundary and object recognition datasets.

Our work has answered two important empirical questions on boundary detection:

Does multi-scale processing improve boundary detection in natural images? Intuition says yes, because boundaries are multi-scale in nature; but we need more than intuition
Fig. 6. Examples shown in three rows: input images, Pb, and multi-Pb (this work). Detected boundaries are discretized into two types: strong edge pixels (ranked 1-2000 in each image) in black, and weak edge pixels (ranked 2001-6000 in each image) in green. In such a way we visualize two points on the precision-recall curves in Figure 3 and 5. Large-scale boundaries are enhanced (i.e. from green to black, or from none to green), while small-scale boundaries, such as those found in texture, are suppressed (from black to green, from green to none).
to move the field forward. Previous studies did not find any empirical evidence in benchmarking [9]. Our work gives an affirmative answer to this question. Moreover, we show that performance continuously improves as we add more scales. This implies
that, because there is a wide range of scales in natural image structures, having a large range of observation scales would be useful.

Is there room for improvement in local boundary detection? The comprehensive and meticulous experiments in [9], along with the psychological evidence in [31], suggest that there is a limited amount of information in local image neighborhoods, and the Pb boundary detector is already close to the limit. This has led many researchers to pursue other paths, such as exploring mid-level grouping [22,32,23,33,24,25], complex example-based models [11], or scene knowledge [34]. Our approach stays within the framework of local boundary detection, making decisions independently at each pixel, and we show significant improvements over Pb. Our results also compare favorably to those of the more sophisticated algorithms above.

In retrospect, there are three reasons why multi-scale edge detection has not become popular: (1) having a high cost; (2) possibly losing details; and most importantly, (3) lack of empirical support. We have "proved", with extensive experimentation on a variety of datasets, that multi-scale processing improves boundary detection, boosting precision at salient boundaries while preserving details. Using an image pyramid keeps computational cost within a constant factor. It is our hope that multi-scale will soon become a standard component of boundary detection approaches.
References
1. Canny, J.: A computational approach to edge detection. IEEE Trans. PAMI 8, 679–698 (1986)
2. Witkin, A.: Scale-space filtering. Int'l. J. Conf. on Artificial Intell. 2, 1019–1022 (1983)
3. Koenderink, J.: The structure of images. Biological Cybernetics 50, 363–370 (1984)
4. Lindeberg, T.: Scale-Space Theory in Computer Vision. Kluwer Academic Publishers, Dordrecht (1994)
5. Perona, P., Malik, J.: Scale-space and edge detection using anisotropic diffusion. IEEE Trans. PAMI 12, 629–639 (1990)
6. Lindeberg, T.: Edge detection and ridge detection with automatic scale selection. Int'l. J. Comp. Vision 30, 117–156 (1998)
7. Martin, D., Fowlkes, C., Tal, D., Malik, J.: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: ICCV, vol. II, pp. 416–423 (2001)
8. Konishi, S., Yuille, A., Coughlan, J., Zhu, S.: Fundamental bounds on edge detection: an information theoretic evaluation of different edge cues. In: CVPR, pp. 573–579 (1999)
9. Martin, D., Fowlkes, C., Malik, J.: Learning to detect natural image boundaries using local brightness, color and texture cues. IEEE Trans. PAMI 26(5), 530–549 (2004)
10. Konishi, S., Yuille, A., Coughlan, J.: A statistical approach to multi-scale edge detection. In: ECCV Workshop on Generative-Model-Based Vision (2002)
11. Dollar, P., Tu, Z., Belongie, S.: Supervised learning of edges and object boundaries. In: CVPR, vol. 2, pp. 1964–1971 (2006)
12. Galun, M., Basri, R., Brandt, A.: Multiscale edge detection and fiber enhancement using differences of oriented means. In: ICCV (2007)
13. Bouman, C., Shapiro, M.: A multiscale random field model for bayesian image segmentation. IEEE Trans. Im. Proc. 3(2), 162–177 (1994)
14. Koepfler, G., Lopez, C., Morel, J.: A multiscale algorithm for image segmentation by variational method. SIAM J. Numer. Anal. 31(1), 282–299 (1994)
15. Sharon, E., Brandt, A., Basri, R.: Segmentation and boundary detection using multiscale intensity measurements. In: CVPR (2001) 16. Ruderman, D.L., Bialek, W.: Statistics of natural images: Scaling in the woods. Physics Review Letters 73(6), 814–817 (1994) 17. Elder, J., Goldberg, R.: Ecological statistics of gestalt laws for the perceptual organization of contours. Journal of Vision 2(4), 324–353 (2002) 18. Ren, X., Malik, J.: A probabilistic multi-scale model for contour completion based on image statistics. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) ECCV 2002. LNCS, vol. 2350, pp. 312–327. Springer, Heidelberg (2002) 19. Bergholm, F.: Edge focusing. IEEE Trans. PAMI 9, 726–741 (1987) 20. Basu, M.: Gradient-based edge detection methods - a survey. IEEE. Trans. System, Man and Cybernatics 32(3), 252–260 (2002) 21. Rijsbergen, C.V.: Information Retrieval, 2nd edn. Univ. of Glasgow (1979) 22. Ren, X., Fowlkes, C., Malik, J.: Scale-invariant contour completion using conditional random fields. In: ICCV, vol. 2, pp. 1214–1221 (2005) 23. Arbelaez, P.: Boundary extraction in natural images using ultrametric contour maps. In: Workshop on Perceptual Organization in Computer Vision (POCV) (2006) 24. Felzenszwalb, P., McAllester, D.: A min-cover approach for finding salient curves. In: Workshop on Perceptual Organization in Computer Vision (POCV) (2006) 25. Zhu, Q., Song, G., Shi, J.: Untangling cycles for contour grouping. In: ICCV (2007) 26. Stein, A., Hoiem, D., Hebert, M.: Learning to find object boundaries using motion cues. In: ICCV (2007) 27. Shotton, J., Winn, J., Rother, C., Criminisi, A.: Textonboost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3951. Springer, Heidelberg (2006) 28. Malisiewicz, T., Efros, A.: Improving spatial support for objects via multiple segmentations. In: BMVC (2007) 29. Everingham, M., Van Gool, L., Williams, C., Winn, J., Zisserman, A.: The PASCAL Visual Object Classes Challenge (VOC 2007) (2007), http://www.pascal-network.org/ challenges/VOC/voc2007/workshop/ 30. Russell, B., Torralba, A., Murphy, K., Freeman, W.: LabelMe: a database and web-based tool for image annotation. Technical Report AI Memo AIM-2005-025, MIT (2005) 31. Martin, D., Fowlkes, C., Walker, L., Malik, J.: Local boundary detection in natural images: Matching human and machine performance. In: ECVP (2003) 32. Yu, S.: Segmentation induced by scale invariance. In: CVPR, vol. I, pp. 445–451 (2005) 33. Estrada, F., Elder, J.: Multi-scale contour extraction based on natural image statistics. In: Workshop on Perceptual Organization in Computer Vision (POCV) (2006) 34. Hoiem, D., Efros, A., Hebert, M.: Recovering occlusion boundaries from a single image. In: ICCV (2007)
Estimating 3D Trajectories of Periodic Motions from Stationary Monocular Views Evan Ribnick and Nikolaos Papanikolopoulos University of Minnesota, USA {ribnick,npapas}@cs.umn.edu
Abstract. We present a method for estimating the 3D trajectory of an object undergoing periodic motion in world coordinates by observing its apparent trajectory in a video taken from a single stationary camera. Periodicity in 3D is used here as a physical constraint, from which accurate solutions can be obtained. A detailed analysis is performed, from which we gain significant insight regarding the nature of the problem and the information that is required to arrive at a unique solution. Subsequently, a robust, numerical approach is proposed, and it is demonstrated that the cost function exhibits strong local convexity which is amenable to local optimization methods. Experimental results indicate the effectiveness of the proposed method for reconstructing periodic trajectories in 3D.
1 Introduction
Periodic motion occurs frequently from both natural and mechanical sources. For example, consider the motion of a person or animal running, or a point on the wheel of a car as it drives. The ability to analyze periodic motions when they appear in video is important in many applications. However, inferring 3D information about a periodic motion remains an open problem that has not been addressed in the literature. In this paper we propose a method for reconstructing the 3D trajectory of an object undergoing periodic motion in the world given its apparent trajectory in a video from a single stationary view. For example, Fig. 1(a) shows the trajectory of a point on the wheel of a vehicle as it moves at a constant speed. While clearly this trajectory is periodic in 3D world coordinates, its appearance in the image is distorted by the perspective projection, and as such is not periodic in image coordinates. However, we understand intuitively that information regarding the 3D trajectory of the object in world coordinates is embedded in its apparent trajectory in the image. The goal here is to infer this 3D information in order to estimate the trajectory of the object in world coordinates. The ability to reconstruct a periodic motion in 3D given only its appearance in a single stationary view has potential applications in many domains. Activity recognition, gait analysis, surveillance applications, and motion analysis for athletic training or physical therapy might all benefit from techniques such as the one proposed here. For example, in gait analysis/recognition, it may be possible to better characterize a gait by examining it in 3D rather than in image coordinates, since a description of the motion in world coordinates will be independent of both the intrinsic and extrinsic parameters of the camera, and will hence be more portable.
Fig. 1. Examples of periodic trajectories
Periodicity is used here as a physical constraint on the motion in the world, which is exploited in the formulation. We require throughout that the period of the motion is known, which can be estimated automatically using one of several existing methods (e.g., [4]). In our analysis we find that the problem is under-constrained in general, and that an additional constraint is needed in order to arrive at a unique solution. This additional constraint, along with the initial constraint of periodicity in the world, makes it possible to estimate the 3D position of each sample in the trajectory, and hence to reconstruct it as completely as possible in 3D at the given sampling rate. The perspective projection is fully accounted for in the formulation; in fact, the problem addressed here could not be solved in an orthographic system. We also require a calibrated camera, where both the intrinsic and extrinsic parameters are known. This was a design choice made by the authors, since without a calibrated camera it would not be possible to obtain an exact Euclidean reconstruction. With an uncalibrated camera it would be possible to obtain at best an affine reconstruction, which would severely limit the usefulness of the proposed method. For this method we assume perfect periodicity, which might not be valid in many real-world applications. However, this assumption is often acceptable over relatively short windows of time for a given motion. Furthermore, experimental results indicate the effectiveness of this approach for real motions, even under this assumption. Development of methods that account for fluctuations in period will be the focus of future work. This work is unique in that it is the first (to the best of the authors' knowledge) to reconstruct a periodic trajectory in 3D. This is done using video from a single stationary camera, from which it is more difficult to infer 3D information than from multi-camera systems. However, since it is often difficult to obtain video of a motion from multiple views in practice, examining such a problem is worthwhile. Existing work related to periodic motion in video has focused mainly on detection and analysis in image coordinates. For example, in [4] and [8] periodic motion is detected via sequence alignment, where the similarity between snapshots of a foreground object is computed and tested for periodicity. In both methods the effects of imaging are taken into account by considering the change in the apparent size of the object in the image as it moves relative to the position of the camera. Another approach is to perform frequency domain analysis of image intensities in order to detect periodic motion in the image ([9]); this approach is used in [2] to detect multiple periodic motions
simultaneously. Similarly, the authors of [11] develop a method for recognition of different classes of periodic motion based on a frequency domain analysis of image intensities. In other work, the assumption of strict periodicity has been relaxed in order to consider the case of cyclic ([13]) or oscillatory ([3]) motion in the image. In [5], a method is proposed for learning cylindrical manifolds for cyclic motion, where the radial parameter accounts for the inherent repetitiveness. Finally, in [1], a method is presented which can be used to infer the 3D structure of an object undergoing periodic motion by considering multiple snapshots of the object separated by one period in time as if they were simultaneous views of the same object. These multiple views are used to perform geometric inference using techniques from multiview geometry.
2 Problem Formulation
The specific problem that we wish to solve can be stated more formally as follows. Given image coordinate samples from the trajectory of an object that is undergoing periodic motion in the world, our goal is to estimate the 3D world position of each sample in the most recent period of motion. An image coordinate sample is defined here as a single point in image coordinates corresponding to the apparent position of the object at the time of sampling. Note that when the 3D position of every sample during one period is known, the periodic motion is fully characterized (at the given sampling rate), since each period is simply a repetition of the preceding period. Before formalizing the problem, we first introduce some notation and define the quantities we wish to estimate. Periodic motion is taken to mean any motion in 3D world coordinates that is periodic in velocity $v = (\dot{X}, \dot{Y}, \dot{Z})$:

$$v(t + nT) = v(t), \qquad (1)$$

where T is the period of the motion and n is an integer. Notice that this definition differs from that in some previous work (e.g., [4]), where periodicity was defined in terms of position rather than velocity. In general, motion that is periodic in velocity (as defined above) includes as a special case motion that is periodic only in position. This definition of periodicity also includes motions for which there is translation from one period to the next, such as a person's foot as he/she walks. In fact, the translational component is necessary in order to solve the problem formulated here, since the technique is based on the change in the appearance of the periodic trajectory in the image as the object displaces relative to the camera. This point will be illustrated in Sec. 3.1. For motion that is periodic in velocity we can write:

$$p(t + T) = p(t) + \Delta p_T, \qquad (2)$$

where $\Delta p_T = (\Delta X_T, \Delta Y_T, \Delta Z_T)$ is the displacement per period of the point, which is constant over any period of length T. This can be used to describe the 3D displacement between any two samples that are separated by exactly one period in time. We next consider the displacement between two samples from the same period. For some length of time τ < T:
$$p(t + \tau) = p(t) + \delta_p, \qquad (3)$$

where $\delta_p = (\delta_X, \delta_Y, \delta_Z)$ is the 3D displacement between the samples at times t and t + τ. Note that this displacement is the same for any pair of samples taken at times t + nT and t + nT + τ, for any integer n. Since samples are taken at discrete times determined by the video frame rate, we represent times using discrete indices of the form $t_k^i$. These are reverse time indices, so that $t_k^i$ is taken to mean the time of the k-th most recent sample in period i, which is the i-th most recent period. Given these definitions and equations (2) and (3), each sample in world coordinates can be written in terms of the 3D position at the most recent sample, $p_0^0 = (X_0^0, Y_0^0, Z_0^0)$, as follows:

$$p_k^i = p_0^0 - i\,\Delta p_T - \delta_{p_k}. \qquad (4)$$
Here $\delta_{p_k}$ is the 3D displacement between the samples $t_k^i$ and $t_0^i$, which is constant for all periods i. Expressing each sample in terms of these quantities will prove to be useful, since these are the quantities that need to be estimated in order to fully characterize one period of motion in 3D. The method described in this paper relies on an accurate camera calibration, which can be obtained using techniques similar to those described in [6]. As a result of the calibration, we obtain the projection matrix $P_c = A_c I_{3\times4} T_c \in \mathbb{R}^{3\times4}$, where $A_c \in \mathbb{R}^{3\times3}$ describes the camera's intrinsic parameters, $I_{3\times4} \in \mathbb{R}^{3\times4}$ is a non-square identity matrix, and $T_c \in \mathbb{R}^{4\times4}$ describes the camera's extrinsic parameters. $P_c$ is used to project points from world coordinates into the image, using homogeneous coordinates, according to:

$$[u\ v\ h]^T = P_c\, [X\ Y\ Z\ 1]^T, \qquad (5)$$

where the image coordinates can be obtained by dividing by the homogeneous coordinate: $x = u/h$, $y = v/h$. So, given the location of the object in the world at time $t_k^i$, denoted in homogeneous coordinates as $q_k^i = [X_k^i\ Y_k^i\ Z_k^i\ 1]^T$, the image coordinates of this point can be written as follows:

$$x_k^i = \frac{P_c^{(1)} q_k^i}{P_c^{(3)} q_k^i}, \qquad y_k^i = \frac{P_c^{(2)} q_k^i}{P_c^{(3)} q_k^i}, \qquad (6)$$

where $P_c^{(i)}$ is the i-th row of $P_c$. Using (4) we can write $q_k^i$ in terms of the quantities for which we wish to solve:

$$q_k^i = \begin{pmatrix} X_0^0 - i\,\Delta X_T - \delta_{X_k} \\ Y_0^0 - i\,\Delta Y_T - \delta_{Y_k} \\ Z_0^0 - i\,\Delta Z_T - \delta_{Z_k} \\ 1 \end{pmatrix}. \qquad (7)$$
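Equations (4)-(7) specify how a candidate solution predicts image coordinates: the world point of sample (i, k) is built from the most recent position, the per period displacement, and the within-period displacement, and is then pushed through the calibrated projection. A minimal sketch of this reprojection step, with an assumed projection matrix and hypothetical parameter values (the numbers are illustrative, not from the paper):

```python
import numpy as np

def reproject(Pc, p00, dpT, dpk, i):
    """Predict image coords of sample k in period i via eqs. (4)-(7).
    p00: (X0, Y0, Z0); dpT: per-period displacement; dpk: within-period
    displacement of sample k (zero for k = 0)."""
    q = np.append(p00 - i * dpT - dpk, 1.0)   # eq. (7), homogeneous world point
    u, v, h = Pc @ q                          # eq. (5)
    return u / h, v / h                       # eq. (6)

# Hypothetical numbers, only to exercise the function:
Pc = np.array([[800., 0., 320., 0.],
               [0., 800., 240., 0.],
               [0., 0., 1., 500.]])           # assumed calibrated projection
p00 = np.array([10., 20., 0.])
dpT = np.array([30., 0., 0.])                 # translation per period
print(reproject(Pc, p00, dpT, np.zeros(3), i=2))
```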
3 Existence of Solution Here we consider the ideal, noise-free version of the problem in order to analyze when it is possible to solve it uniquely. As such, we proceed by splitting the problem into
two subproblems, and analyzing each one separately. First, we consider the problem of estimating the 3D position of the most recent sample $(X_0^0, Y_0^0, Z_0^0)$ and the per period displacement $(\Delta X_T, \Delta Y_T, \Delta Z_T)$ by taking only samples at times separated by exactly one period from $t_0^0$, that is, samples at times $t_0^i$. We find that this problem is underconstrained in general, and therefore an additional constraint must be added in order to arrive at a unique solution. Then, given $(X_0^0, Y_0^0, Z_0^0)$ and $(\Delta X_T, \Delta Y_T, \Delta Z_T)$, we focus on the problem of estimating the 3D positions of the other samples in the most recent period by solving for the inter-sample displacements $(\delta_{X_k}, \delta_{Y_k}, \delta_{Z_k})$. In each case the subproblem is formulated as a linear system in order to make some of its inherent properties more evident. Subsequently, in Section 4, these problems will be reformulated in a more robust way in order to account for measurement noise.

3.1 Position at $t_0^0$ and Per Period Displacement

Only samples at times $t_0^i$ are to be considered for this subproblem. There are six quantities that we wish to estimate ($(X_0^0, Y_0^0, Z_0^0)$ and $(\Delta X_T, \Delta Y_T, \Delta Z_T)$), so clearly at least three image coordinate samples are needed, since each image sample will provide two constraints (6) on the system. These three samples are denoted $(x_0^{i_0}, y_0^{i_0})$, $(x_0^{i_1}, y_0^{i_1})$, and $(x_0^{i_2}, y_0^{i_2})$, respectively. Using these three samples, we can get three pairs of equations (i.e., six equations) of the form (6). Note that here $(\delta_{X_k}, \delta_{Y_k}, \delta_{Z_k}) = 0$, since the samples we consider are taken at integer multiples of the period T from time $t_0^0$. By rearranging the terms, these six equations can be written as a linear system:

$$A_6^x X_6 = B_6^x, \qquad (8)$$

where $X_6 \triangleq (X_0^0\ Y_0^0\ Z_0^0\ \Delta X_T\ \Delta Y_T\ \Delta Z_T)^T \in \mathbb{R}^6$ and $A_6^x \in \mathbb{R}^{6\times6}$ is the coefficient matrix. It is important to note that in order to write the equations as a linear system, we have multiplied the projection equations (6) through by their denominator. Originally, the system of equations was such that each of the samples had equal weight. This multiplication has, in effect, weighted each image sample by the denominator term. For the ideal, noise-free case on which we are currently focusing, this re-weighting will have no effect, and is convenient for the sake of analysis. However, when measurement noise is present, a more robust approach will be necessary, and will be formulated later. Upon analysis it becomes clear that system (8), with six equations and six unknowns, is in fact underdetermined when the image samples are collinear. This will always be the case for periodic motion when the samples are separated by exactly one period in time, since they will be collinear in the world. The result is that $\mathrm{rank}(A_6^x) = 5$ in general. As such, an additional constraint is needed. For all the experiments performed here, we add this constraint by requiring the value of $Z_0^0$ (the Z-coordinate of the object's position at time $t_0^0$) to be known. Note that this condition is not expected to be overly restrictive in practice, since $Z_0^0$ can often be inferred automatically based on a priori knowledge about the motion in the world. For example, if it is known that the object undergoing periodic motion is the foot of a walking person, then it may be inferred that Z = 0 at the lowest point on the trajectory. Additionally, the problem could be formulated in a similar way if either the X- or Y-coordinate was known instead.
With this additional constraint there are now five remaining quantities to be estimated: $(X_0^0, Y_0^0)$ and $(\Delta X_T, \Delta Y_T, \Delta Z_T)$. Since $Z_0^0$ is known, the third column of $A_6^x$ can be eliminated (these coefficients will be absorbed by the vector on the other side of the equation), and one of the equations in the system is discarded. The remaining system of five unknowns and five equations is written as follows:

$$A_5^x X_5 = B_5^x, \qquad (9)$$

where $X_5 \triangleq (X_0^0\ Y_0^0\ \Delta X_T\ \Delta Y_T\ \Delta Z_T)^T \in \mathbb{R}^5$ and $A_5^x \in \mathbb{R}^{5\times5}$. Recall now that the original coefficient matrix $A_6^x$ contained the image coordinates of the samples from which the equations were written: $(x_0^{i_0}, y_0^{i_0})$, $(x_0^{i_1}, y_0^{i_1})$, and $(x_0^{i_2}, y_0^{i_2})$. If we now replace these image coordinate values with their expressions in terms of 3D quantities using the projection (6), we find that the rowspace of $A_5^x$ is spanned by the rows of the following matrix:

$$\begin{pmatrix}
\Delta Z_T & 0 & 0 & 0 & -\frac{D_{123} X_0^0 - D_{234}}{D_{123}} \\
0 & \Delta Z_T & 0 & 0 & -\frac{D_{123} Y_0^0 - D_{134}}{D_{123}} \\
0 & 0 & 0 & 0 & -\frac{D_{123} Z_0^0 - D_{124}}{D_{123}} \\
0 & 0 & \Delta Z_T & 0 & -\Delta X_T \\
0 & 0 & 0 & \Delta Z_T & -\Delta Y_T
\end{pmatrix}, \qquad (10)$$

where $D_{abc}$ is the determinant of the 3 × 3 matrix formed by the a-th, b-th, and c-th columns of the projection matrix $P_c$. Note that $D_{123} \neq 0$, since it is the determinant of $A_c R$, the product of the camera's intrinsic parameter and rotation matrices, both of which are nonsingular by definition. It will be possible to obtain a unique solution to (9) when $\mathrm{rank}(A_5^x) = 5$. Clearly $A_5^x$ will be full rank only when all the rows of the matrix (10) are nonzero. This, in effect, imposes physical conditions under which the problem will become unsolvable. The first important observation is that in order for all the rows of (10) to be nonzero, some per period displacement is required (in other words, some translation is necessary in order for the problem to be solved). In addition, (10) imposes conditions on $X_0^0$, $Y_0^0$, and $Z_0^0$. However, if any of these conditions are satisfied and $A_5^x$ becomes singular, the problem can simply be re-parameterized around a new point $(X_0^0, Y_0^0, Z_0^0)$ such that these conditions are no longer satisfied.

3.2 Position at $t_k^0$

For this subproblem we consider samples at times $t_k^i$. The goal now is to estimate the 3D displacement between samples $t_0^0$ and $t_k^0$, $(\delta_{X_k}, \delta_{Y_k}, \delta_{Z_k})$, given the quantities that are known from solving the previous subproblem (i.e., $(X_0^0, Y_0^0, Z_0^0)$ and $(\Delta X_T, \Delta Y_T, \Delta Z_T)$ are known). Note that samples at times $t_k^i$ can also be expressed in terms of these previously estimated quantities. Since there are three quantities to be estimated, clearly at least three equations are needed, and therefore we consider two image coordinate samples, which are taken at times $t_k^i$ and $t_k^j$, respectively. Since these two samples result in four equations (recall each sample results in two equations as in (6)), we discard one of them to obtain the three equations. As before, we multiply all the equations through by their denominators and write them as a linear system:
$$A_3^\delta \delta_k = B_3^\delta, \qquad (11)$$

where $\delta_k \triangleq (\delta_{X_k}\ \delta_{Y_k}\ \delta_{Z_k})^T \in \mathbb{R}^3$ and $A_3^\delta \in \mathbb{R}^{3\times3}$ is the coefficient matrix. As mentioned earlier, writing these nonlinear equations as a linear system will introduce nonuniform weights on the samples, but is convenient for analysis in the noise-free case. In order to obtain a unique solution for the system (11), matrix $A_3^\delta$ must be nonsingular. Recall that one of the four equations resulting from the two image samples was discarded in order to form this 3 × 3 system. If one of the x-samples was discarded, we find that:

$$\det(A_3^\delta) = -(y_k^j - y_k^i)\, D_{123}. \qquad (12)$$

Similarly, if one of the y-samples was discarded:

$$\det(A_3^\delta) = -(x_k^j - x_k^i)\, D_{123}. \qquad (13)$$

It is guaranteed that at least one of the two determinants above will be nonzero, since the samples (separated by one period) must be distinct and $D_{123}$ is known to be nonzero. As such, this subproblem can always be solved as long as the per period displacement $\Delta p_T$ is nonzero. This analysis is identical for each sample in the period, $t_k^i$, and therefore the trajectory of the object in 3D can be completely characterized.
4 Trajectory Estimation In the previous section it was shown that when one additional physical constraint is added to the problem, a unique solution for the 3D position of each sample in the most recent period can be obtained in most cases by solving systems of linear equations derived from samples of the apparent trajectory in the image. However, in practice, the solution from just two or three image samples is sensitive to measurement noise and is often unreliable. Furthermore, dividing the problem into subproblems and solving each one separately ignores some of the interdependencies between variables, and as such is not as robust to noise. As such, a more robust approach to estimating the 3D position of each sample in the most recent period of motion is to use all of the available samples of the trajectory, rather than just the minimum number required for nonsingularity. In addition, here we estimate all parameters simultaneously instead of solving each subproblem separately. We propose a numerical approach for performing the estimation, in which a cost function is minimized. 4.1 Cost Function As mentioned above, here we estimate all variables of interest simultaneously (recall that Z00 is known). If we have image samples from M periods with N samples per period, there are a total of 5 + 3(N − 1) variables. These include the unknown coordinates of the most recent sample (X00 , Y00 ), the per period displacement (ΔXT , ΔYT , ΔZT ),
Fig. 2. Examples of cross sections of the cost function $F(\hat{X})$ with significant measurement noise. The first eight dimensions are shown. Cross sections are plotted against the offset from the optimal value, which are shown in centimeters.
and the displacements within each period $(\delta_{X_k}, \delta_{Y_k}, \delta_{Z_k})$, k ∈ [1, N − 1]. We concatenate all of these quantities into a single vector of variables, and denote the current estimate as $\hat{X} \in \mathbb{R}^{5+3(N-1)}$. The image samples are denoted as $(x_k^i, y_k^i)$, k ∈ [1, N − 1], i ∈ [1, M − 1]. Then, for all M × N samples, we wish to minimize the squared reprojection error:

$$\hat{X}^* = \arg\min F(\hat{X}), \qquad (14)$$

where

$$F(\hat{X}) \triangleq \sum_{i=0}^{M-1} \sum_{k=0}^{N-1} \left\| \begin{pmatrix} x_k^i \\ y_k^i \end{pmatrix} - \begin{pmatrix} \hat{x}_k^i \\ \hat{y}_k^i \end{pmatrix} \right\|_2^2, \qquad (15)$$

and $\|\cdot\|_2$ is the $\ell_2$ norm. The reprojections in image coordinates $(\hat{x}_k^i, \hat{y}_k^i)$ are functions of the current estimate of the variables, $\hat{X}$, calculated using the projection equations (6). The cost function $F(\hat{X})$ can be rewritten as a sum of ratios of quadratic functions of the form:

$$F(\hat{X}) = \sum_{i=0}^{M-1} \sum_{k=0}^{N-1} \frac{\hat{X}^T A_k^i \hat{X} + (B_k^i)^T \hat{X} + C_k^i}{\hat{X}^T D_k^i \hat{X} + (E_k^i)^T \hat{X} + F_k^i}, \qquad (16)$$

where $A_k^i$ and $D_k^i$ are square coefficient matrices, $B_k^i$ and $E_k^i$ are vectors, and $C_k^i$ and $F_k^i$ are scalars. As such, $F(\hat{X})$ is not globally convex in general; even a sum of linear-fractional functions is known to be nonconvex, and cannot be solved efficiently using global methods for more than ten ratios [12]. Furthermore, the individual subfunctions in the summation are nonconvex themselves, since they are ratios of quadratic functions. However, we have found that, in practice, $F(\hat{X})$ is locally convex around its optimal solution, and that this convex region is typically large, even in the presence of significant measurement noise. An example of some of the cross sections of $F(\hat{X})$ can be seen in Fig. 2. Recall that $F(\hat{X})$ is a function on $\mathbb{R}^{5+3(N-1)}$; cross sections on only the first eight dimensions are shown here. In the next section it will be demonstrated that local optimization methods can be used to reliably converge to the optimal solution in the presence of measurement noise. One important point is that the cost function (15) properly accounts for noise, since it gives equal weight to all samples. This makes it more robust to noise than the linear
systems in Sec. 3, which were formulated by multiplying the projection equations (6) by their denominator, hence giving nonuniform weights to the image samples. Recall that the linear systems were useful only for analyzing the problem in the ideal, noise-free case in order to determine when it is possible to arrive at a unique solution. Although the present formulation is numerical instead of analytic, the analysis in Sec. 3 still gives us significant insight as to the nature of the problem in general. The numerical estimation procedure proposed here provides more information from which to solve the problem, which should only improve our ability to arrive at an appropriate solution. This will be shown subsequently. Furthermore, it is important to note that the optimization here aims to obtain an optimal estimate directly. This is in contrast to techniques such as those proposed in [7], which optimize convex relaxations of their cost functions in order to approximate the optimal solution for similar geometric problems. 4.2 Initial Solution The linear systems formulated in Sec. 3 are not robust to noise, since they introduce nonuniform weights to the image samples and divide the problem into parts. However, we use them to obtain an initial solution to the cost function (15). Specifically, we wish to use all of the available samples in order to solve the first subproblem (9), and the second subproblem (11) for each sample in the period. As such, the coefficient matrix Ax5 and vector Bx5 for the first subproblem, and Aδ3 and Bδ3 , k ∈ [1, N − 1], are augmented with additional rows corresponding to the additional samples that are to be used for estimation. This results in the matrices being non-square. The initial solution to the optimization is then obtained by solving the linear least-squares cost function AX − B22 in each case. In all experiments performed here, we find that the initial solution found in this manner is within the convex region of the original cost function (15), and therefore local optimization methods can be used to converge to the optimal solution.
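The reprojection-error objective (15) and its local minimization are easy to prototype. The sketch below is not the authors' implementation: it stacks (X_0^0, Y_0^0, ΔX_T, ΔY_T, ΔZ_T, δ_{p_1}, ..., δ_{p_{N-1}}) into one vector (with Z_0^0 fixed and known, as in the text), assumes the observations are stored as an M × N × 2 array, and hands the residuals to scipy.optimize.least_squares, whose trust-region solver plays the role of the Levenberg-Marquardt variant used in Sec. 5:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(theta, Pc, Z00, obs, M, N):
    """obs[i, k] = observed (x, y) of sample k in period i (shape M x N x 2)."""
    X00, Y00 = theta[0], theta[1]
    dpT = theta[2:5]
    deltas = np.vstack([np.zeros(3), theta[5:].reshape(N - 1, 3)])  # delta_p0 = 0
    p00 = np.array([X00, Y00, Z00])
    res = []
    for i in range(M):
        for k in range(N):
            q = np.append(p00 - i * dpT - deltas[k], 1.0)    # eq. (7)
            u, v, h = Pc @ q
            res.extend([u / h - obs[i, k, 0], v / h - obs[i, k, 1]])
    return np.asarray(res)

def estimate_trajectory(theta0, Pc, Z00, obs):
    M, N = obs.shape[:2]
    sol = least_squares(residuals, theta0, args=(Pc, Z00, obs, M, N))
    return sol.x   # (X00, Y00, dXT, dYT, dZT, delta_1, ..., delta_{N-1})
```

A linear least-squares solution of the augmented systems of Sec. 3 (as described in Sec. 4.2) would supply theta0.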
5 Experiments In all of the experiments described here, a variant of Levenberg-Marquardt (LM) [10] was used to perform the minimization of the cost function described in Sec. 4. LM is convenient since it automatically interpolates between quasi-Newton’s method in more convex regions, and gradient descent in less convex regions. Convergence was always achieved in a reasonable number of iterations, and in most cases the LM algorithm tended more towards quasi-Newton’s method than gradient descent (this is controlled ˆ is by the LM parameter λ), which supports the assertion that the cost function F (X) locally convex in a large area surrounding the optimal solution. The initial solution was found by minimizing the simple linear least-squares cost functions for each separate subproblem, as described previously. 5.1 Results on Synthetic Data As a proof of concept, some initial experiments were performed on synthetic data in order to judge the effectiveness of the proposed method. For these experiments, four
different synthetic trajectories were generated, sampled, and projected into image coordinates using a user-specified projection matrix. Evaluation of the performance of the proposed method is particularly convenient in this case, since the 3D world position of each sample is known exactly for synthetic trajectories. The projections of the synthetic trajectories in the image are shown in Fig. 1(b), where the trajectories shown in the figure have been rotated/translated in world coordinates in order for them to fit into a single figure. As can be seen, there are various types of trajectories represented, with several different displacement patterns. In all of these experiments the periodic trajectories were accurately reconstructed in 3D world coordinates. The errors (Euclidean distances) between the actual positions and the estimates from our method were on the order of $10^{-12}$ cm in all cases. In other words, since these errors were measured in centimeters, the overall error was negligible. Furthermore, the value of the cost function always converged close to zero (on the order of $10^{-25}$, which is less than machine precision). Convergence occurred in a reasonable number of iterations. These experiments clearly show that the proposed method is feasible, and that the theoretical results discussed earlier do apply in practice.

5.2 Results on Real Data

Next we present results that illustrate the performance of the proposed technique on real data. The trajectories on which the algorithm was tested each represent a different type of periodic motion. The data used for testing can be obtained by contacting the authors. For each experiment the point of interest was tracked manually in image coordinates, and supplied as input to our algorithm. Vehicle Wheel. In this experiment we consider the motion of a point on the wheel of a vehicle as it drives with constant velocity. A snapshot from the video can be seen in Fig. 1(a), with the apparent trajectory of the point in image coordinates superimposed. For this trajectory it was possible to collect ground truth data regarding its motion in 3D, since it is clear that a point on the wheel moves in a vertical plane in world coordinates, and that this plane intersects the ground plane at known positions. The 3D reconstructed trajectory of one period of motion, along with the entire ground truth trajectory, can be seen in Fig. 3. Note that the reconstructed trajectory closely matches the actual positions of the samples. These results are summarized quantitatively in Table 1. The errors are relatively small when compared with the distance of the object from the camera (on the order of 600cm). Errors are higher in the Y-coordinate (depth), since it is more sensitive to noise and inaccuracies in the camera calibration. Hand and Foot of Walking Person. Next we consider the periodic motions of points on a person's hand and foot as he walks. An image from the video is shown in Fig. 4(a). As can be expected from real human motion, these trajectories contain significant noise and small deviations from true periodicity. Accurate ground truth data was collected here using a motion capture system.¹ Fig. 5 shows the 3D reconstructed trajectories of one period for both the hand and foot, along with the full ground truth trajectories. As before, the reconstructed periods
Thanks to Prof. David Nuckley for his assistance in collecting motion capture data.
Fig. 3. The 3D reconstruction of the trajectory of a point on a wheel (red crosses), along with the approximate ground truth (blue circles). Axes are shown in centimeters. This figure is best viewed in color.

Table 1. Absolute errors of the 3D reconstruction of the wheel trajectory for one period of motion. The object's distance from the camera was on the order of 600cm.

                    Mean    Std. Dev.
    X (cm)          4.21    2.09
    Y (cm)          5.65    3.15
    Z (cm)          2.62    1.51
    Euclidean (cm)  7.69    3.72
Table 2. Absolute errors of the 3D reconstruction for the hand and foot trajectories over one period of motion. The person's distance from the camera was on the order of 300cm.

                    Foot Mean   Foot Std. Dev.   Hand Mean   Hand Std. Dev.
    X (cm)          0.79        0.99             4.76        2.51
    Y (cm)          3.16        4.67             2.49        1.58
    Z (cm)          2.26        2.98             0.98        1.10
    Euclidean (cm)  4.02        5.59             5.59        2.92
closely match the ground truth. The reconstruction errors for both trajectories are given in Table 2, where the errors are again in cm. These errors are small, given that the distance of the person from the camera was on the order of 300cm. Foot of Running Person on Stairs. For the final experiment we consider a point on the foot of a person running down a set of stairs. Notice that in this case the person’s foot
Fig. 4. (a) Trajectories of a person’s hand and foot as he walks. (b) Trajectory of a person’s foot as he runs down stairs. Both the actual trajectory in image coordinates (blue) and the reprojection of the trajectory for one period estimated by our method (red) are shown. This figure is best viewed in color.
Fig. 5. The 3D reconstruction of the foot (a) and hand (b) trajectories (red crosses), along with the ground truth (blue circles). Axes are shown in millimeters. This figure is best viewed in color.

Table 3. Dimensions of the stairs inferred from the estimate of the trajectory, compared with the actual dimensions of the stairs. The distance of the person from the camera was on the order of 700cm.

                        Estimated   Actual
    Stair Height (cm)   14.56       15.875
    Stair Depth (cm)    25.52       33.02
displaces also in the vertical direction from one period to another. A snapshot from the video is shown in Fig. 4(b). Since the person in the video steps on the ground at the bottom of the stairs, it could simply be inferred that Z00 = 0. The point tracked in this case was on the tip of the person’s left foot. Fig. 4(b) shows the estimated trajectory reprojected into image coordinates for one period, along with the actual trajectory in image coordinates. In order to demonstrate the accuracy of the reconstruction in 3D, we use
our estimated trajectory to infer the dimensions of the stairs on which the person runs, and compare these to the actual dimensions of the staircase. Specifically, we compared the stair height with |ΔZT |/2, and the stair depth with |ΔYT |/2, and found the dimensions inferred from our estimate to be quite accurate. These results are summarized in Table 3.
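As a small worked example of the relation used above (the division by two presumably reflects that one period of the tracked foot's motion spans two stairs), with hypothetical per-period displacements chosen to reproduce the estimates in Table 3:

```python
# Hypothetical per-period displacement of the foot (cm), chosen so that the
# inferred dimensions match the estimates reported in Table 3.
dZT, dYT = -29.12, -51.04
stair_height = abs(dZT) / 2.0   # -> 14.56 cm (actual: 15.875 cm)
stair_depth = abs(dYT) / 2.0    # -> 25.52 cm (actual: 33.02 cm)
print(stair_height, stair_depth)
```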
6 Conclusions and Future Work We have presented a method that can be used to accurately reconstruct a periodic trajectory in 3D based on samples of the trajectory in image coordinates. Analysis of the problem showed that it is under-constrained in general, and the additional constraint that the value of Z00 is known was added. It was demonstrated that a numerical approach can be used to perform the estimation, and that the cost function has been found to be locally convex around the optimal solution. Experimental results illustrated that this technique can be used to accurately infer 3D information, even in the presence of noise and imperfect periodicity. In light of the fact that real motions are often not perfectly periodic, in future work we plan to develop methods for performing 3D inference in the more general case of cyclic motion. In this case it may be necessary to estimate the instantaneous frequency of the motion at each point.
Acknowledgements This work has been supported by the Department of Homeland Security, the Center for Transportation Studies and the ITS Institute at the University of Minnesota, the Minnesota Department of Transportation, and the National Science Foundation through grants #IIS-0219863, #CNS-0224363, #CNS-0324864, #CNS-0420836, #IIP-0443945, #IIP-0726109, and #CNS-0708344.
References 1. Belongie, S., Wills, J.: Structure from Periodic Motion. In: Proc. Int’l. Worksh. Spatial Coherence for Visual Motion Anal., pp. 16–24 (May 2004) 2. Briassouli, A., Ahuja, N.: Extraction and Analysis of Multiple Periodic Motions in Video Sequences. IEEE Trans. Pattern Anal. Mach. Intel. 29(7), 1244–1261 (2007) 3. Cohen, C.J., et al.: Dynamical System Representation, Generation, and Recognition of Basic Oscillatory Motion Gestures. In: Proc. Int’l. Conf. Automatic Face and Gesture Recognition, pp. 60–65 (October 1996) 4. Cutler, R., Davis, L.S.: Robust Real-Time Periodic Motion Detection, Analysis, and Applications. IEEE Trans. Pattern Anal. Mach. Intel. 22(8), 781–796 (2000) 5. Dixon, M., et al.: Finding Minimal Parameterizations of Cylindrical Image Manifolds. In: Proc. Comp. Vis. and Pattern Recog. Worksh., p. 192 (June 2006) 6. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, Cambridge (2003) 7. Kahl, F., Henrion, D.: Globally Optimal Estimates for Geometric Reconstruction Problems. Int’l. J. Comp. Vis. 74(1), 3–15 (2007)
8. Laptev, I., et al.: Periodic Motion Detection and Segmentation via Approximate Sequence Alignment. In: Proc. IEEE Int’l. Conf. Comp. Vis., vol. 1, pp. 816–823 (October 2005) 9. Liu, F., Picard, R.: Finding Periodicity in Space and Time. In: Proc. IEEE Int’l. Conf. Comp. Vis., pp. 376–383 (January 1998) 10. Nocedal, J., Wright, S.J.: Numerical Optimization. Springer, New York (1999) 11. Polana, R., Nelson, R.C.: Detection and Recognition of Periodic, Nonrigid Motion. Int’l. J. Comp. Vis. 23(3), 261–282 (1997) 12. Schaible, S., Shi, J.: Fractional Programming: the Sum–of–Ratios Case. Optimization Methods and Software 18, 219–229 (2003) 13. Seitz, S.M., Dyer, C.R.: View-Invariant Analysis of Cyclic Motion. Int’l. J. Comp. Vis. 25(3), 231–251 (1997)
Unsupervised Learning of Skeletons from Motion David A. Ross, Daniel Tarlow, and Richard S. Zemel University of Toronto, Canada {dross,dtarlow,zemel}@cs.toronto.edu
Abstract. Humans demonstrate a remarkable ability to parse complicated motion sequences into their constituent structures and motions. We investigate this problem, attempting to learn the structure of one or more articulated objects, given a time-series of two-dimensional feature positions. We model the observed sequence in terms of “stick figure” objects, under the assumption that the relative joint angles between sticks can change over time, but their lengths and connectivities are fixed. We formulate the problem in a single probabilistic model that includes multiple sub-components: associating the features with particular sticks, determining the proper number of sticks, and finding which sticks are physically joined. We test the algorithm on challenging datasets of 2D projections of optical human motion capture and feature trajectories from real videos.
1 Introduction
An important aspect of analyzing dynamic scenes involves segmenting the scene into separate moving objects and constructing detailed models of each object's motion. For scenes represented by trajectories of features on the objects, structure-from-motion methods are capable of grouping the features and inferring the object poses when the features belong to multiple independently-moving rigid objects. Recently, however, research has been increasingly devoted to more complicated versions of this problem, when the moving objects are articulated and non-rigid. In this paper, we investigate this problem, attempting to learn the structure of an articulated object while simultaneously inferring its pose at each frame of the sequence, given a time-series of feature positions. We propose a single probabilistic model for describing the observed sequence in terms of one or more "stick figure" objects. We define a "stick figure" as a collection of line segments (bones or sticks) joined at their endpoints. The structure of a stick figure (the number and lengths of the component sticks, the association of each feature point with exactly one stick, and the connectivity of the sticks) is assumed to be temporally invariant, while the angles (at joints) between the sticks are allowed to change over time. We begin with no information about the figures in a sequence, as the model parameters and structure are all learned. An example of a stick figure learned by applying our model to 2D feature observations from a video of a walking giraffe is shown in Figure 1. Learned models of skeletal structure have many possible uses. For example, detailed, manually-constructed skeletal models are often a key component in full-body tracking algorithms. The ability to learn skeletal structure could help to automate the process, potentially producing models more flexible and accurate than those constructed manually. Additionally, skeletons are necessary for converting feature point positions into
Fig. 1. Four frames from a video of a walking giraffe, augmented with a learned skeleton. Each white line represents a separate stick, the black circles are joints, and the colored markers are tracked features.
joint angles, a standard way to encode motion for animation. Furthermore, knowledge of the skeleton can be used to improve the reliability of optical motion capture, permitting disambiguation of marker correspondence and occlusion [1]. Finally, a learned skeleton might be used as a rough prior on shape to help guide image segmentation [2]. In the following section we discuss other recent approaches to modelling articulated figures from tracked feature points. In Section 3 we formulate the problem as a probabilistic model and describe the optimization of this model, which proceeds in a stagewise fashion, building up the structure incrementally to maximize the joint probability of the model variables. In Section 5 we test the algorithm on a range of datasets. In the final section we describe assumptions and limitations of the approach, and discuss future work.
2 Related Work The task of learning stick figures from a set of 2D feature point trajectories can be thought of as a variant of the structure from motion (SFM) problem. When the trajectories all arise from the motion of one rigid object, Tomasi and Kanade [3] have shown that the matrix of point locations, W, is a linear product of a time-invariant structure matrix, S, and a time-varying matrix of motion parameters, M. M and S can be recovered by singular value decomposition. SFM can also be extended to handle multiple rigid objects moving independently. Costeira and Kanade [4] have shown that this problem, known as multibody SFM, can be solved by grouping the point trajectories according to the object they arise from, then solving SFM independently for each object. Grouping is accomplished by forming a shape-shape interaction or affinity matrix, indicating the potential for each pair of points of belonging to the same object, and using this matrix to cluster the trajectories. Several authors have demonstrated that SFM can be interpreted as a probabilistic generative model, e.g. [5,6,7]. This view permits the inclusion of priors on the motion sequence, thereby leveraging temporal coherence. Furthermore, in the multibody case, Gruber and Weiss have presented a single probabilistic model that describes both the grouping problem and the per-object SFM problems [7]. This produces a single objective function that can be jointly optimized, leading to more robust solutions. Unfortunately, multibody SFM cannot reliably be used to obtain the structure and motion of an articulated figure’s parts since, as shown by Yan and Pollefeys [8], the motions of connected parts are linearly dependent. However, this dependence can be used to form an alternative affinity matrix for clustering the trajectories. Yan and Pollefeys
use this as the basis for a stage-wise procedure for recovering articulated SFM [9]: (1) cluster point trajectories into body parts; (2) independently run SFM on each part; (3) determine connectivity between parts by running (a variant of) minimum spanning tree, where edge weights are the minimum principal angle between two parts' motion matrices (for connected, dependent parts, this should be zero); (4) finally, solve for the joint locations between connected parts. A disadvantage of this method is its lack of an overall objective function that can be optimized globally, and used to compare the quality of alternative models. A number of authors have inferred articulated structures from three-dimensional observations, leveraging the fact that the distance between two points attached to the same rigid body part is constant, e.g. [10,11]. These methods can produce detailed structures from motion capture data; however, although simple to apply in 3D, they have not been extended to 2D observations. Others have inferred two-dimensional structures from 2D data [12,13,14]. Many of these methods focus on a different problem, inferring the correspondence of observations to features in each frame. With the exception of [12] (which is concerned only with the final stage of processing, after the motions of individual parts have been obtained), all of these methods build two-dimensional models directly in image coordinates. Thus, unlike SFM approaches, they are unable to deal with out-of-plane motion; a model trained on side views of a person walking would be inapplicable to a sequence of frontal views. Learning articulated figures can also be interpreted as structure learning in probabilistic graphical models, with nodes representing the positions of parts and edges their connectivity. Learning structure is a hard problem that is usually solved approximately, using greedy methods or by restricting the class of possible structures. Song et al. [13] note that the optimal structure (in terms of maximum likelihood) of a graphical model is the one that minimizes the entropy of each node given its parents. Restricting their attention to graphs in which nodes each have two parents, they propose to learn the structure greedily, iteratively connecting to the graph the node with the smallest conditional entropy given its parents.
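The single-body factorization recalled above (the measurement matrix W decomposed into motion M and shape S via the SVD) can be sketched in a few lines. This is a generic affine-camera illustration with our own variable names, not the articulated pipeline of Yan and Pollefeys, and it omits the metric upgrade that resolves the remaining affine ambiguity:

```python
import numpy as np

def factorize(W, rank=3):
    """W: 2F x P matrix of tracked feature coordinates (x rows and y rows stacked
    per frame). Returns affine motion M (2F x rank), shape S (rank x P), and the
    per-row translation t, so that W is approximately M @ S + t."""
    t = W.mean(axis=1, keepdims=True)          # per-row translation
    U, s, Vt = np.linalg.svd(W - t, full_matrices=False)
    M = U[:, :rank] * np.sqrt(s[:rank])        # rank-limited factors
    S = np.sqrt(s[:rank])[:, None] * Vt[:rank]
    return M, S, t
```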
3 Model Formulation Here we formulate a probabilistic graphical model for sequences generated from articulated skeletons. By fitting this model to a set of feature point trajectories (the observed locations of a set of features across time), we are able to parse the sequence into one or more articulated skeletons and recover the corresponding motion parameters for each frame. The observations are assumed to be 2D, whether tracked from video or projected from 3D motion capture, and the goal is to learn skeletons that capture the full 3D structure. Fitting the model is performed entirely via unsupervised learning; the only inputs are the observed trajectories, with manually-tuned parameters restricted to a small set of thresholds on Gaussian variances. The observations for this model are the locations wpf of feature points p in frames f . A discrete latent variable R assigns each point to one of S sticks. Each stick s consists of a set of time-invariant 3D local coordinates Ls , describing the relative positions of
all points belonging to the stick. Ls is mapped to the observed world coordinate system by a different motion matrix Mfs at every frame f (see Figure 2). For example, in a noiseless system, where rp,1 = 1, indicating that point p has been assigned to stick 1, Mf1 l1,p = wpf . If all of the sticks are unconnected and move independently, then this model essentially describes multibody SFM [4,7]. However, for an articulated structure, with connections between sticks, the stick motion variables are not independent [8]. Allowing connectivity between sticks makes the problems of describing the constraints between motions and inferring motions from the observations considerably more difficult. To deal with this complexity, we introduce variables to model the connectivity between sticks, and the (unobserved) locations of stick endpoints and joints in each frame. Every stick has two endpoints, each of which is assigned to exactly one vertex. Each vertex can correspond to one or more stick endpoints (vertices assigned two or more endpoints are joints). We will let ki specify the coordinates of endpoint i relative to the local coordinate system of its stick, s(i), and vjf and efi represent the world coordinate location of vertex j and endpoint i in frame f , respectively. Again, in a noiseless system, efi = Mfs(i) ki for every frame f . Noting the similarity between the efi variables and the observed feature positions wpf , these endpoint locations can be interpreted as a set of pseudo-observations, inferred from the data rather than directly observed. Vertices are used to enforce a key constraint: for all the sticks that share a given vertex, the motion matrices should map their local endpoint locations to a consistent world coordinate. This restricts the range of possible motions to only those resulting in appropriately connected figures. For example, in Figure 2(Left), endpoint 2 (of stick 1),
is connected to endpoint 4 (of stick 2); both are assigned to vertex 2. Thus in every frame f both endpoints should map to the same world location, the location of the knee joint, i.e. v2f = ef2 = ef4 . The utility of introducing these additional variables is that, given the vertices V and endpoints E, the problem of estimating the motions and local geometries (M and L) factorizes into S independent structure-from-motion problems, one for each stick. Latent variable gi,j = 1 indicates that endpoint i is assigned to vertex j; hence G indirectly describes the connectivity between sticks. The assumed generative process for the feature observations and the vertex and endpoint pseudo-observations is shown in Figure 2(Left), and the corresponding probabilistic model in Figure 2(Right). The complete joint probability of the model can be decomposed into a product of two likelihood terms, one for the true feature observations and the second for the endpoint pseudo-observations, and priors over the remaining variables in the model: P = P(W|M, L, R) P(E|M, K, V, φ, G)
P(V) P(φ) P(M) P(L) P(K) P(R) P(G).    (1)

Assuming isotropic Gaussian noise with precision (inverse variance) $\tau_w$, the likelihood function is

$$P(W|M, L, R) = \prod_{f,p,s} N(w_p^f \,|\, M_s^f l_{s,p},\, \tau_w^{-1} I)^{r_{p,s}}, \qquad (2)$$

where $r_{p,s}$ is a binary variable equal to 1 if and only if point p has been assigned to stick s. This distribution captures the constraint that for feature point p, its predicted world location, based on the motion matrix and its location in the local coordinate system for the stick to which it belongs ($r_{p,s} = 1$), should match its observed world location. Note that dealing with missing observations is simply a matter of removing the corresponding factors from this likelihood expression. Each motion variable consists of a 2 × 3 rotation matrix $R_s^f$ and a 2 × 1 translation vector $t_s^f$: $M_s^f \equiv [R_s^f\ t_s^f]$. The motion prior $P(M)$ is uniform, with the stipulation that all rotations be orthogonal: $R_s^f (R_s^f)^T = I$. We define the missing-data likelihood of an endpoint location as the product of two Gaussians, based on the predictions of the appropriate vertex and stick:

$$P(E|M, K, V, \phi, G) \propto \prod_{f,i} N(e_i^f \,|\, M_{s(i)}^f k_i,\, \tau_m^{-1} I) \prod_{f,i,j} N(e_i^f \,|\, v_j^f,\, \phi_j^{-1} I)^{g_{i,j}}. \qquad (3)$$
Here τm is the precision of the isotropic Gaussian noise on the endpoint locations with respect to the stick, and gi,j is a binary variable equal to 1 if and only if endpoint i has been assigned to vertex j. The second Gaussian in this product captures the requirement that endpoints belonging to the same vertex should be coincident. Instead of making this a hard constraint, connectivity is softly enforced, allowing the model to accommodate a certain degree of non-rigidity in the underlying structure, as illustrated by the mismatch between endpoint and vertex positions in Figure 2(Left).
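To make the two likelihood terms concrete, the following sketch evaluates the log of (2) and (3) for a single frame, treating the endpoint and vertex locations as point estimates; the data structures and function names are our own, not the authors' code:

```python
import numpy as np

def log_gauss_iso(x, mean, prec):
    """Log of an isotropic 2-D Gaussian N(x | mean, prec^{-1} I)."""
    d = x - mean
    return np.log(prec) - 0.5 * prec * (d @ d) - np.log(2 * np.pi)

def frame_loglik(W, M, L, R, E, V, K, G, stick_of, tau_w, tau_m, phi):
    """W[p]: observed 2-D point; M[s]: 2x4 motion [R|t] applied to homogeneous
    local coords; L[(s, p)], K[i]: 3-D local coords; R[p] = s, G[i] = j: hard
    assignments; E[i], V[j]: endpoint / vertex positions; phi[j]: joint precisions."""
    ll = 0.0
    for p, s in R.items():                                   # term (2)
        pred = M[s] @ np.append(L[(s, p)], 1.0)
        ll += log_gauss_iso(W[p], pred, tau_w)
    for i, j in G.items():                                   # term (3)
        pred = M[stick_of[i]] @ np.append(K[i], 1.0)
        ll += log_gauss_iso(E[i], pred, tau_m)
        ll += log_gauss_iso(E[i], V[j], phi[j])
    return ll
```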
The vertex precision variables $\phi_j$ capture the degree of "play" in the joints, and are assigned Gamma prior distributions; the prior on the vertex locations incorporates a temporal smoothness constraint, with precision $\tau_t$. The priors for feature and endpoint locations in the local coordinate frames, L and K, are zero-mean Gaussians, with isotropic precision $\tau_p$. Finally, the priors for the variables defining the structure of the skeleton, R and G, are multinomial. Each point p selects exactly one stick s ($\sum_s r_{p,s} = 1$) with probability $c_s$, and each endpoint i selects one vertex j ($\sum_j g_{i,j} = 1$) with probability $d_j$:

$$P(\phi) = \prod_j \mathrm{Gamma}(\phi_j | \alpha_j, \beta_j) \qquad P(V) = \prod_{f,j} N(v_j^f \,|\, v_j^{f-1}, \tau_t^{-1} I)$$
$$P(L) = \prod_{s,p} N(l_{s,p} \,|\, 0, \tau_p^{-1} I) \qquad P(K) = \prod_i N(k_i \,|\, 0, \tau_p^{-1} I)$$
$$P(R) = \prod_{p,s} (c_s)^{r_{p,s}} \qquad P(G) = \prod_{i,j} (d_j)^{g_{i,j}}$$
4 Learning

Given a set of observed feature point trajectories, we propose to fit this model in an entirely unsupervised fashion, by maximum likelihood learning. Conceptually, we divide learning into two challenges: recovering the skeletal structure of the model, and given a structure, fitting the model's remaining parameters. Structure learning involves grouping the observed trajectories into a number of rigid sticks, including determining the number of sticks, as well as determining the connectivity between them. Parameter learning involves determining the local geometries and motions of each stick, as well as imputing the locations of the stick endpoints and joints, all while respecting the connectivity constraints imposed by the structure. Both learning tasks seek to optimize the same objective function (the expected complete log-likelihood of the data given the model) using different, albeit related, approaches. Given a structure, parameters are learned using the standard variational expectation-maximization algorithm. Structure learning is formulated as an "outer-loop" of learning: beginning with a fully disjoint multibody SFM solution, we incrementally merge stick endpoints, at each step greedily choosing the merge that maximizes the objective. Finally the expected complete log-likelihood can be used for model comparison and selection. A summary of the proposed learning algorithm is provided in Figure 3.

1. Obtain an initial grouping R by clustering the observed trajectories using Affinity Propagation. Initialize G to a fully-disconnected structure.
2. Optimize the parameters M, L, K, V, φ, E, using 200 iterations of the variational EM updates, resampling R every 10 iterations.
3. Loop until no more valid merges, or maximum number of merges reached:
   (a) For all vertex-pair merges, estimate the merge cost of the proposed structure by updating the parameters with 20 EM iterations and noting the change in expected log-probability.
   (b) Choose the merge with the lowest cost, modifying G accordingly. Reoptimize all parameters using 200 EM iterations, resampling R every 10th iteration.

Fig. 3. A summary of the learning algorithm
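A schematic of the outer loop of Fig. 3, with the EM refitting and the expected-complete-log-likelihood scoring left as caller-supplied callables (run_em and score are assumed interfaces, and the handling of "valid" merges is simplified); this mirrors the steps above but is only an illustrative sketch, not the authors' implementation:

```python
import itertools

def merge_vertices(G, ja, jb):
    """Reassign every endpoint of vertex jb to vertex ja (G: endpoint -> vertex)."""
    return {i: (ja if j == jb else j) for i, j in G.items()}

def learn_structure(G0, params0, run_em, score, max_merges=None):
    """Greedy structure search. run_em(G, params, iters) refits the continuous
    parameters; score(G, params) is the expected complete log-likelihood
    (higher is better). Returns the best (score, G, params) found."""
    G = dict(G0)
    params = run_em(G, params0, iters=200)
    models = [(score(G, params), G, params)]
    merges = 0
    while max_merges is None or merges < max_merges:
        pairs = list(itertools.combinations(sorted(set(G.values())), 2))
        if not pairs:
            break
        # Estimate each candidate merge's cost with a short re-fit (step 3a).
        scored = []
        for ja, jb in pairs:
            Gm = merge_vertices(G, ja, jb)
            pm = run_em(Gm, params, iters=20)
            scored.append((score(Gm, pm), Gm))
        # Take the best merge, then re-optimize fully (step 3b).
        _, G = max(scored, key=lambda t: t[0])
        params = run_em(G, params, iters=200)
        models.append((score(G, params), G, params))
        merges += 1
    return max(models, key=lambda t: t[0])   # model selection by the same score
```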
4.1 Learning the Model Parameters

Given a particular model structure, indicated by a specific setting of R and G, the remaining model parameters are fit using the variational expectation-maximization (EM) algorithm. This well-known algorithm takes an iterative approach to learning: beginning with an initial setting of the parameters, each parameter is updated in turn, by choosing the value that maximizes the expected complete log-likelihood objective function, given the values (or expectations) of the other parameters. The objective function (also known as the negative free energy) is formed by assuming a fully-factorized variational posterior distribution Q over a subset of the model parameters, then computing the expectation of the model's log probability (1) with respect to Q, plus an entropy term: $\mathcal{L} = E_Q[\log P] - E_Q[\log Q]$.
(4)
For this model, we define Q over the variables V, E, and φ, involved in the world-coordinate locations of the joints. The variational posterior for $v_j^f$ is a multivariate Gaussian with mean and precision parameters $\mu(v_j^f)$ and $\tau(v_j^f)$; the posterior for $e_i^f$ is also a Gaussian, with mean $\mu(e_i^f)$ and precision $\tau(e_i^f)$; and the posterior for $\phi_j$ is a Gamma distribution with parameters $\alpha(\phi_j)$ and $\beta(\phi_j)$:
$$Q = Q(\mathbf{V})\,Q(\mathbf{E})\,Q(\phi), \qquad Q(\mathbf{V}) = \prod_{f,j} \mathcal{N}\big(v_j^f \mid \mu(v_j^f), \tau(v_j^f)^{-1}\big),$$
$$Q(\mathbf{E}) = \prod_{f,i} \mathcal{N}\big(e_i^f \mid \mu(e_i^f), \tau(e_i^f)^{-1}\big), \qquad Q(\phi) = \prod_j \mathrm{Gamma}\big(\phi_j \mid \alpha(\phi_j), \beta(\phi_j)\big).$$
The EM update equations are obtained by differentiating the objective function L with respect to each parameter and solving for the maximum given the other parameters. We now present the parameter updates; see [15] for the derivation of L and the updates. The constants appearing in these equations denote the number of observation frames F, vertices J, data points P, and sticks S; h(f) = 1 if 1 < f < F and 0 otherwise; and s(i) is the index of the stick to which endpoint i belongs.
$$\tau_w^{-1} = \frac{\sum_{f,p,s} r_{p,s}\,\|w_p^f - M_s^f l_{s,p}\|^2}{2FP} \qquad
\tau_m^{-1} = \frac{\sum_{f,i} \|\mu(e_i^f) - M_{s(i)}^f k_i\|^2}{4FS} + \frac{\sum_{f,i} \tau(e_i^f)^{-1}}{2FS}$$
$$\tau_t^{-1} = \frac{\sum_{f=2}^{F}\sum_{j} \|\mu(v_j^f) - \mu(v_j^{f-1})\|^2}{2(F-1)J} + \frac{\sum_{f,j} 2^{h(f)}\,\tau(v_j^f)^{-1}}{(F-1)J}$$
$$\mu(e_i^f) = \frac{\tau_m M_{s(i)}^f k_i + \sum_j g_{i,j}\,\frac{\alpha(\phi_j)}{\beta(\phi_j)}\,\mu(v_j^f)}{\tau_m + \sum_j g_{i,j}\,\frac{\alpha(\phi_j)}{\beta(\phi_j)}} \qquad
\tau(e_i^f) = \tau_m + \sum_j g_{i,j}\,\frac{\alpha(\phi_j)}{\beta(\phi_j)}$$
$$\mu(v_j^f) = \frac{\frac{\alpha(\phi_j)}{\beta(\phi_j)}\sum_i g_{i,j}\,\mu(e_i^f) + [f>1]\,\tau_t\,\mu(v_j^{f-1}) + [f<F]\,\tau_t\,\mu(v_j^{f+1})}{\frac{\alpha(\phi_j)}{\beta(\phi_j)}\sum_i g_{i,j} + \tau_t\,2^{h(f)}}$$
$$\tau(v_j^f) = \frac{\alpha(\phi_j)}{\beta(\phi_j)}\sum_i g_{i,j} + \tau_t\,2^{h(f)}$$
$$\alpha(\phi_j) = \alpha_j + F\sum_i g_{i,j} \qquad
\beta(\phi_j) = \beta_j + \frac{1}{2}\sum_{f,i} g_{i,j}\,\|\mu(e_i^f) - \mu(v_j^f)\|^2 + \sum_{f,i} g_{i,j}\,\big[\tau(e_i^f)^{-1} + \tau(v_j^f)^{-1}\big]$$
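As a concrete illustration of one of these updates, the sketch below (my own, not taken from the paper) computes the variational posterior means and precisions of the endpoint locations for a single frame, assuming 2-D observations and that the expected joint precisions α(φ_j)/β(φ_j) are already available.

```python
import numpy as np

def update_endpoints(Mk, mu_v, G, tau_m, phi_mean):
    """Variational update of endpoint posteriors for one frame.

    Mk       : (2S, 2) stick-predicted endpoint positions M_{s(i)}^f k_i
    mu_v     : (J, 2)  current vertex means mu(v_j^f)
    G        : (2S, J) 0/1 endpoint-to-vertex assignments g_{i,j}
    tau_m    : scalar  endpoint noise precision
    phi_mean : (J,)    expected joint precisions alpha(phi_j)/beta(phi_j)
    Returns (mu_e, tau_e) of shapes (2S, 2) and (2S,).
    """
    w = G * phi_mean[None, :]            # g_{i,j} * E[phi_j]
    tau_e = tau_m + w.sum(axis=1)        # tau(e_i^f)
    mu_e = (tau_m * Mk + w @ mu_v) / tau_e[:, None]
    return mu_e, tau_e
```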
The update for the motion matrices is slightly more challenging due to the orthogonality constraint on the rotations. A straightforward approach is to separate the rotation and translation components of the motion, $M_s^f = [R_s^f \;\; t_s^f]$, and to solve for each individually. The update for the translation is obtained simply via differentiation:
$$t_s^f = \frac{\tau_w \sum_p r_{p,s}\,(w_p^f - R_s^f l_{s,p}) + \tau_m \sum_{\{i \mid s(i)=s\}} \big(\mu(e_i^f) - R_s^f k_i\big)}{\tau_w \sum_p r_{p,s} + 2\tau_m}$$
To deal with the orthogonality constraint on $R_s^f$, its update can be posed as an orthogonal Procrustes problem [16,17]. Given matrices A and B, the goal of orthogonal Procrustes is to obtain the matrix R that minimizes $\|A - RB\|^2$, subject to the constraint that the rows of R form an orthonormal basis. Computing the most likely rotation involves maximizing the likelihood of the observations (2) and of the endpoints (3), which can be written as the minimization of $\sum_p \|(w_p^f - t_s^f) - R_s^f l_{s,p}\|^2$ and $\sum_{\{i \mid s(i)=s\}} \|(\mu(e_i^f) - t_s^f) - R_s^f k_i\|^2$ respectively. Concatenating the two problems together, weighted by their respective precisions, allows the update of $R_s^f$ to be written as a single orthogonal Procrustes problem, $\arg\min_{R_s^f} \|A - R_s^f B\|^2$, where
$$A = \Big[\sqrt{\tau_w}\, r_{p,s}\,(w_p^f - t_s^f)\Big]_{p=1..P} \;\Big[\sqrt{\tau_m}\,(\mu(e_i^f) - t_s^f)\Big]_{\{i \mid s(i)=s\}}, \qquad
B = \Big[\sqrt{\tau_w}\, r_{p,s}\, l_{s,p}\Big]_{p=1..P} \;\Big[\sqrt{\tau_m}\, k_i\Big]_{\{i \mid s(i)=s\}}.$$
The solution is obtained by computing the singular value decomposition $BA^T = U\Sigma V^T$ and setting $R = V I_{m\times n} U^T$, where m and n are the numbers of rows in A and B respectively. Given $R_s^f$ and $t_s^f$, the updates for the local coordinates are:
$$l_{s,p} = \Big(\sum_f [R_s^f]^T R_s^f + \tfrac{\tau_p}{\tau_w} I\Big)^{-1} \sum_f [R_s^f]^T (w_p^f - t_s^f), \qquad
k_i = \Big(\sum_f [R_{s(i)}^f]^T R_{s(i)}^f + \tfrac{\tau_p}{\tau_m} I\Big)^{-1} \sum_f [R_{s(i)}^f]^T \big(\mu(e_i^f) - t_{s(i)}^f\big).$$
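A short sketch of the stated SVD solution to the Procrustes step, written in general form; the A and B inputs are assumed to have been stacked and precision-weighted as described above.

```python
import numpy as np

def procrustes_rotation(A, B):
    """Solve argmin_R ||A - R B||^2 with R having orthonormal rows.

    A : (m, N) stacked, weighted target vectors
    B : (n, N) stacked, weighted source vectors
    Returns R of shape (m, n), following R = V I_{m x n} U^T with B A^T = U S V^T.
    """
    m, n = A.shape[0], B.shape[0]
    U, _, Vt = np.linalg.svd(B @ A.T)     # B A^T = U S V^T
    return Vt.T @ np.eye(m, n) @ U.T

# Usage check on synthetic data: recover a planted 2x2 rotation.
rng = np.random.default_rng(0)
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
B = rng.normal(size=(2, 50))
A = R_true @ B + 0.01 * rng.normal(size=(2, 50))
R_est = procrustes_rotation(A, B)          # close to R_true
```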
The final issue to address for EM learning is initialization. Many ways to initialize the parameters are possible; here we settle on one simple method that produces satisfactory results. The motions and local coordinates, M and L, are initialized by solving SFM independently for each stick [3]. The vertex locations are initialized by averaging the observations of all sticks participating in the joint: $\mu(v_j^f) = \big(\sum_{i,p} g_{i,j}\, r_{p,s(i)}\, w_p^f\big) / \big(\sum_{i,p} g_{i,j}\, r_{p,s(i)}\big)$. The endpoints are initially coincident with their corresponding vertices, $\mu(e_i^f) = \sum_j g_{i,j}\,\mu(v_j^f)$, and each $k_i$ is initialized by averaging the backprojected endpoint locations: $k_i = \frac{1}{F}\sum_f [R_{s(i)}^f]^T\big(\mu(e_i^f) - t_{s(i)}^f\big)$. All precision parameters are initialized to constant values, as discussed in [15].
4.2 Learning the Skeletal Structure

Structure learning in this model entails estimating the assignments of feature points to sticks (including the number of sticks), and the connectivity of sticks, expressed via the assignments of stick endpoints to vertices. The space of possible structures is enormous. We therefore adopt an incremental approach to structure learning: beginning with a fully-disconnected multibody-SFM model, we greedily add joints between sticks by merging vertices. Each merge forms a new model, its parameters are updated via EM, and the assignments of observations to sticks are resampled. At any step, the optimal model can be determined by simply comparing the expected complete log-likelihood of each model.

The first step in structure learning involves hypothesizing an assignment of each observed feature trajectory to a stick. This is accomplished by clustering the trajectories using the Affinity Propagation algorithm [18]. Affinity Propagation takes as input an affinity matrix, for which we supply the affinity measure from [8,9] as presented in Section 2. During EM parameter learning, the stick assignments R are resampled every 10 iterations using the posterior probability distribution
$$P(r_{p,s} = 1) \propto c_s \exp\Big(-\frac{\tau_w}{2}\sum_f \|w_p^f - M_s^f l_{s,p}\|^2\Big), \quad \text{s.t. } \sum_s r_{p,s} = 1.$$
Instead of relying only on information available before model fitting begins (cf. [4,9,11]), resampling of stick assignments allows model probability to be improved by leveraging current best estimates of the model parameters; a sketch of this step follows below. This is a key advantage of our approach, employing a single model for the entire process.

The second step of structure learning involves determining which sticks' endpoints are joined together. As discussed earlier, connectivity is captured by assigning stick endpoints to vertices; each endpoint must be associated to one vertex, and vertices with two or more endpoints act as articulated joints. (Valid configurations include only cases in which endpoints of a given stick are assigned to different vertices.) We employ an incremental greedy scheme for inferring this graphical structure G, beginning from an initial structure that contains no joints between sticks. Thus, in terms of the model, we start with J = 2S vertices, one per stick endpoint, so $g_{i,j} = 1$ if and only if j = i. Given this initial structure, parameters are fit using variational EM. A joint between sticks is introduced by merging together a pair of vertices. The choice of vertices to merge is guided by our objective function L. At each stage of merging we consider all valid pairs of vertices, putatively joining them and estimating (via 20 iterations of EM) the change in log-likelihood if the merge were accepted. The merge with the highest log-likelihood is performed, by modifying G accordingly, and the model parameters are re-optimized with 200 additional iterations of EM, including resampling of the stick assignments R. This process is repeated until no valid merges remain, or the desired maximum number of merges has been reached. As can be seen from the EM updates, each iteration of parameter learning scales linearly with F, J, P, and S. At each stage of structure learning, determining the locally-optimal merge scales with O(J^2).
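An illustrative sketch (not the authors' code) of the resampling step: each trajectory's squared reprojection error under every stick's current motion is turned into a posterior over sticks, from which a new assignment is drawn. The array shapes and the homogeneous local-coordinate convention here are assumptions made for the illustration.

```python
import numpy as np

def resample_assignments(W, M, L, c, tau_w, rng):
    """Sample r_p from P(r_{p,s}=1) ∝ c_s exp(-tau_w/2 * sum_f ||w_p^f - M_s^f l_{s,p}||^2).

    W : (F, P, 2)    observed 2-D feature positions
    M : (F, S, 2, 3) per-frame stick motions [R | t]
    L : (S, P, 3)    local coordinates in homogeneous form (x, y, 1)
    c : (S,)         stick mixing proportions
    """
    pred = np.einsum('fsij,spj->fspi', M, L)            # (F, S, P, 2)
    sq = ((W[:, None] - pred) ** 2).sum(axis=(0, 3))     # (S, P): summed over frames and x/y
    log_post = np.log(c)[:, None] - 0.5 * tau_w * sq     # unnormalized log posterior
    log_post -= log_post.max(axis=0, keepdims=True)      # numerical stability
    post = np.exp(log_post)
    post /= post.sum(axis=0, keepdims=True)
    S, P = post.shape
    return np.array([rng.choice(S, p=post[:, p]) for p in range(P)])

# Example call: labels = resample_assignments(W, M, L, c, tau_w=10.0,
#                                             rng=np.random.default_rng(0))
```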
5 Experimental Results and Analysis

We now present experimental results on three feature point trajectory datasets (videos of an excavator and a walking giraffe, and 2D projections of human motion capture)
as well as a brief comparison with a related method [9]. Further results are included in [15]. In each experiment a model was learned on the first 70% of the sequence frames, with the remaining 30% held out as a test set used to measure the model's performance. Learning was performed using the algorithm summarized in Figure 3, with greedy merging continuing (generally) until no valid merges remained. After each stage of merging, we saved the learned model and the corresponding expected complete log-likelihood, the objective function that learning maximizes. The likelihoods were plotted for comparison and used to select the optimal model.

The learned model's performance was evaluated based on its ability to impute (reconstruct) the locations of missing observations. For each test sequence we generated a set of missing observations by simulating an occluder that sweeps across the scene, obscuring points as it passes. We augmented this set with an additional 5% of the observations chosen to be "missing at random", to simulate drop-outs and measurement errors, resulting in an overall occlusion rate of 10-15%. The learned model was fit to the un-occluded points of the test sequence, and used to predict the locations of the missing points. Performance was measured by computing the root-mean-squared error between the predictions and the locations of the held-out points. We compared the performance of our model against similar prediction errors made by single-body and multibody structure from motion models.

Our first dataset consisted of a brief video clip of an excavator. We used a KLT tracker [19] with manual clean-up to obtain 35 feature trajectories across 176 frames. Our algorithm processed the data in 4 minutes on a 2.8 GHz processor. The learned model at each stage of greedy merging is depicted in Figure 4 (Top). The optimal structure was chosen by comparing the log-likelihood at each stage, as plotted in Figure 4 (Bottom, left). Using the excavator data, we also examined the model's robustness to learning with occlusions in the training data. The algorithm was able to correctly recover the structure using the occlusion scheme described above, as well as when up to 75% of the training observations were randomly withheld during training. Figure 4 (Bottom, right) shows that the system's predictions for occluded data were significantly better than either multibody or single-body SFM.

Our second dataset consisted of a video of a walking giraffe. As before, feature trajectories were obtained with manual clean-up, giving 60 features tracked across 128 frames. Merging results are depicted in Figure 5. Using the objective function to guide model selection, the best structure corresponded to stage 10, and this model is shown superimposed over the original video in Figure 1.

Our third dataset consisted of optical human motion capture data (courtesy of the Biomotion Lab, Queen's University, Canada), which we projected from 3D to 2D using an isometric projection. The data contained 53 features, tracked across a 1018-frame range-of-motion exercise (training data), and 318 frames of running on an inclined plane (test data). Again the objective function peaks at what is intuitively the best-looking structure (stage 11).

For comparison, we ran a re-implementation of the algorithm of Yan et al. [9] on the Giraffe and 2D-Human datasets. (We note that these results are sensitive to parameter settings that are used to estimate the effective rank of motions; we manually explored a small range of parameter settings and chose the skeleton that was most visually appealing.)
The criterion used by Yan and Pollefeys to determine stick connectivity relies
Fig. 4. Top: Learned structures during greedy merging from the Excavator dataset. Middle: superposition of the structure onto video frames. Bottom: Log-probability scores after each stage of endpoint merging, and prediction errors of occluded feature data for multibody SFM, our articulated model, and single-body SFM.
Fig. 5. Learned structures during greedy merging from the Giraffe dataset
Fig. 6. Top: Learned structures during greedy merging from the 2D-Human dataset. Bottom: Log-probability scores after each stage of endpoint merging, and prediction errors of occluded feature data for multibody SFM, our articulated model, and single-body SFM.
Fig. 7. Optimal structures found by the algorithm of Yan et al. [9] on Giraffe (Left) and 2D-Human (Right) data
on the dependencies between motions. Though two sticks sharing a joint will have intersecting motion subspaces and this method will correctly find these instances, there are other situations where it will choose to join two sticks that have dependent motions but that are not actually connected parts. This is clearly shown in the Giraffe result in Figure 7(Left), where front and back legs that move in phase are found to be connected.
In this case, the more natural representation of our algorithm, where we are hypothesizing a joint location and seeing how well it fits the data, proves beneficial. In the 2D-Human result in Fig. 7(Right), we can see that the effects of these incorrect dependencies are not restricted to be local when the structure learning is based upon a spanning tree. In this case, the spanning tree algorithm chooses to join the two feet together because there is a strong dependence in their motions for this dataset. This decision then causes another error, where the shoulder is connected to the thigh, because connecting each to the torso would no longer produce a tree given the connection between the feet.
6 Discussion

We have demonstrated a single coherent model that can describe the structure and motion of articulated skeletons. This model can be applied to a variety of structures, requiring no input beyond the observed feature trajectories and a minimum of manually-adjusted parameters. The method extends the state of the art in a number of ways. It iterates between updates of the structure and the parameters, allowing information obtained from one stage to assist learning in the other. It is not limited to a single structure (additional results on feature trajectories of two separate objects were omitted due to space restrictions). Also, the noise in our generative model allows a degree of non-rigidity in the motion with respect to the learned skeleton. This not only allows a feature point to move in relation to its associated stick, but also permits complexity in the joints, as the stick endpoints joined at a vertex need not coincide exactly.

To obtain good results, our model requires a certain density of features, in particular because the 2D affinity matrix [8] requires at least 4 points per stick. The flexibility of a learned model is limited to the degrees of freedom visible in the training data; if a joint is not exercised, then the body parts it connects cannot be distinguished. Finally, our model requires that the observations arise from a scene containing roughly articulated figures; it would be a poor model of an octopus, for example. An important extension of the current work is to apply the learned skeleton to feature trajectories from new instances of the same type of articulated structure, allowing for recognition and tracking of a novel moving object.
References

1. Herda, L., Fua, P., Plankers, R., Boulic, R., Thalmann, D.: Using skeleton-based tracking to increase the reliability of optical motion capture. Human Movement Science Journal 20(3), 313–341 (2001)
2. Bray, M., Kohli, P., Torr, P.: Posecut: Simultaneous segmentation and 3d pose estimation of humans using dynamic graph-cuts. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3952, pp. 642–655. Springer, Heidelberg (2006)
3. Tomasi, C., Kanade, T.: Shape and motion from image streams under orthography: a factorization method. International Journal of Computer Vision 9, 137–154 (1992)
4. Costeira, J.P., Kanade, T.: A multibody factorization method for independently moving objects. International Journal of Computer Vision 29(3), 159–179 (1998)
5. Dellaert, F., Seitz, S.M., Thorpe, C.E., Thrun, S.: EM, MCMC, and chain flipping for structure from motion with unknown correspondence. Machine Learning 50(1-2), 45–71 (2003)
6. Torresani, L., Hertzmann, A., Bregler, C.: Learning non-rigid 3d shape from 2d motion. In: Advances in Neural Information Processing Systems (NIPS) (2003)
7. Gruber, A., Weiss, Y.: Multibody factorization with uncertainty and missing data using the EM algorithm. In: Conference on Computer Vision and Pattern Recognition (CVPR) (2004)
8. Yan, J., Pollefeys, M.: A general framework for motion segmentation: Independent, articulated, rigid, non-rigid, degenerate and non-degenerate. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3954, pp. 94–106. Springer, Heidelberg (2006)
9. Yan, J., Pollefeys, M.: Automatic kinematic chain building from feature trajectories of articulated objects. In: Conference on Computer Vision and Pattern Recognition (CVPR) (2006)
10. Liebowitz, D., Carlsson, S.: Uncalibrated motion capture exploiting articulated structure constraints. In: International Conference on Computer Vision (ICCV) (2001)
11. Kirk, A.G., O'Brien, J.F., Forsyth, D.A.: Skeletal parameter estimation from optical motion capture data. In: Conference on Computer Vision and Pattern Recognition (CVPR). IEEE Computer Society (2005)
12. Taycher, L., Fisher III, J.W., Darrell, T.: Recovering articulated model topology from observed rigid motion. In: Becker, S., Thrun, S., Obermayer, K. (eds.) Advances in Neural Information Processing Systems (NIPS), pp. 1311–1318. MIT Press, Cambridge (2002)
13. Song, Y., Goncalves, L., Perona, P.: Unsupervised learning of human motion. IEEE Transactions on Pattern Analysis and Machine Intelligence 25(7), 814–827 (2003)
14. Ramanan, D., Forsyth, D.A., Barnard, K.: Building models of animals from video. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(8), 1319–1334 (2006)
15. Ross, D.A.: Learning Probabilistic Models for Visual Motion. PhD thesis, University of Toronto, Ontario, Canada (2008)
16. Golub, G.H., Van Loan, C.F.: Matrix Computations. The Johns Hopkins University Press, Baltimore (1996)
17. Viklands, T.: Algorithms for the Weighted Orthogonal Procrustes Problem and Other Least Squares Problems. PhD thesis, Umeå University, Umeå, Sweden (2006)
18. Frey, B., Dueck, D.: Clustering by passing messages between data points. Science 315, 972–976 (2007)
19. Shi, J., Tomasi, C.: Good features to track. In: Conference on Computer Vision and Pattern Recognition (CVPR), pp. 593–600 (1994)
Multi-layered Decomposition of Recurrent Scenes

David Russell and Shaogang Gong

Department of Computer Science, Queen Mary, University of London, London E1 4NS, UK
{dave,sgg}@dcs.qmul.ac.uk
Abstract. There is considerable interest in techniques capable of identifying anomalies and unusual events in busy outdoor scenes, e.g. road junctions. Many approaches achieve this by exploiting deviations in spatial appearance from some expected norm accumulated by a model over time. In this work we show that much can be gained from explicitly modelling temporal aspects in detail. Specifically, many traffic junctions are regulated by lights controlled by a timing device of considerable precision, and it is in these situations that we advocate a model which learns periodic spatio-temporal patterns with a view to highlighting anomalous events such as broken-down vehicles, traffic accidents, or pedestrians jaywalking. More specifically, by estimating autocovariance of self-similarity, used previously in the context of gait recognition, we characterize a scene by identifying a global fundamental period. As our model, we introduce a spatio-temporal grid of histograms built in accordance with some chosen feature. This model is then used to classify objects found in subsequent test data. In particular we demonstrate the effect of such characterization experimentally by monitoring the bounding box aspect ratio and optical flow field of objects detected on a road traffic junction, enabling our model to discriminate between people and cars sufficiently well to provide useful warnings of adverse behaviour in real time.
1 Introduction
Currently countless people are deployed to watch and monitor CCTV screens in the hope of identifying criminal activity, untoward behaviour, and serious but non-malicious situations. A fundamental challenge to computer vision research is to devise algorithms capable of isolating and displaying events of interest in a clear, uncluttered way and with a relatively low false alarm rate. Considerable research effort has produced systems which learn statistical scene content both at the pixel level [1] and from a global perspective [2] with a view to segmenting an image into the usual (background) and unusual (foreground). By relating foreground object size, and possibly shape, to areas within the scene, it becomes possible to identify people and vehicles in the 'wrong' place. However, generally such models are oblivious to relative event timing. In this paper, with specific reference to road traffic junctions, we wish to extend the definition of 'unusual' to the temporal domain such that the presence
of an object is treated explicitly in a spatio-temporal context rather than modelled as a deviation from an accumulated distribution. This approach is aimed specifically at modelling scenarios in which periodic behaviour is present. For example, it should be possible to identify a pedestrian trying to cross a road at a time when cars are moving through the junction; namely, this calls for a model possessing a certain temporal context awareness.

1.1 Related Work
Considerable work has been published on the biological aspects of perceptual grouping. In terms of the human visual system this amounts to forming relationships between objects in an image. But such grouping also occurs in the temporal dimension, whereby our attention is drawn to objects whose appearances change together, and those whose appearance changes cyclically or periodically. At this point it is important to make the distinction between these two types of variation: cyclic motion implies events in a certain sequence, whereas periodic motion involves events associated strictly with a constant time interval.

Within the field of biologically inspired computing, systems using networks of Spiking RBF (Radial Basis Function) Neurons have been used in [3] to characterize and identify spatio-temporal behaviour patterns. Such a neuron generates a pulse of activity when the combination of its inputs reaches a critical threshold. The network of connections from input neurons to output neurons contains groups of parallel paths with varying synaptic delays whose relative weights are learned in a Hebbian fashion such that the delay pattern eventually complements (mirrors) the times between events in training data. By this mechanism, an output neuron can 'learn' to fire when the appropriate events occur with correctly matched time delays, since only under this condition will all spikes reach the nucleus simultaneously, causing its threshold to be breached and hence firing. This idea is applied to a practical vision system in [4], whereby relations between pixels in the Motion History Image (MHI) over a sequence are learned for a simple shopkeeper/customer scenario. Abnormal behaviour is detected when a customer takes an item of stock but leaves the shop without paying the shopkeeper. Similarly using MHI, [5] discriminates between actions based on movement of the human body by matching against various learned templates. So far, however, these examples identify sequences of learned events occurring at precise times, while the sequences themselves remain asynchronous events: they might happen only once, or repeatedly but at arbitrary times. A model described in [6] forms relations between asynchronous but related scene events by adding links between parallel Hidden Markov Models, making it ideal for many situations where temporal invariance is paramount.

When it comes to periodic motion, [7] describes a method of modelling moving water, flames, and swaying trees as Temporal Textures. An Autoregressive Model is proposed in which a new frame may be synthesized such that each pixel is described by a weighted sum of previous versions of itself and its neighbours, with an added Gaussian noise process. Similar to the Temporal Textures of [7], [8] applies the Wold decomposition to the 1-D temporal signals derived from
each image pixel giving rise to deterministic (periodic) and non-deterministic (stochastic) components, permitting distinction between various human and animal gaits, and other types of motion. On an apparently unrelated problem, much is to be found in the literature concerning gait characterization, modelling and identification. Generally these methods work by analyzing the relative motion of linked body members, which are of course all related by the same fundamental frequency. However, the parallel between this and modelling traffic at a road junction is surprisingly close. Given extracted features, image areas may be likened to body limbs, sharing fundamental frequency, but being of arbitrary phase and harmonic content. Various forms of periodic human motion are characterized in [9] by tracking candidate objects and forming their ‘reference curves’. After evaluating a dominant spectral component if it exists, an appropriate temporal scale may be identified. This idea is developed in [10] which considers periodic self-similarity, Fisher’s Test for periodicity and Time Frequency Analysis. The Recurrence Plot described in [11] is a useful tool for visualizing the evolution of a process in state-space, showing specifically when the state revisits a previous location. Instead of using Fourier analysis directly, [12] employs Phase Locked Loops (PLLs) to discriminate between different gaits, on the basis that it is more efficient. Having identified some fundamental frequency for an object (person), application of a PLL per pixel in the relevant area permits estimation of the magnitude and relative phase of this fundamental component for each pixel in the object. The idea is that the phase ‘signature’ for every object (person) will be different. The technique is rendered scale and translation invariant by matching these parameters as shapes in the complex plane using the Procrustes mean. In this work we wish to construct an algorithm to characterize the periodicity of a scene based on its temporal statistics rather than explicit object tracking (therefore avoiding the catch-22 problem of determining appropriate scale vs. saliency). Treating the recovered periodicity as a form of ‘temporal background’ we aim to discover anomalies in both space and time simultaneously in unseen images. Expanding on a technique employing self-similarity [10], we describe an algorithm for extracting the fundamental period from a video sequence of a scene, and then use this to facilitate a spatio-temporal data-driven model of scene activity. We show experiments in three traffic junctions scenes where we demonstrate the effectiveness and simplicity of such a model in performing anomaly detection.
2 Our Model
Given a video sequence Ix,y,t consisting of tmax frames each of size xmax × ymax pixels in which (x, y) represents spatial pixel location, t the time index, and I the colour triple {R, G, B}, we split the data into two parts, the first for training and the second for evaluation. Obviously, the first image of the test sequence directly follows the final image from the training sequence - a fact which becomes crucial in ensuring the initialized model is synchronized with the test data. This also
enables a natural way for bootstrapping a model from limited initial exposure to the scene. A background model $I^{B}_{x,y,t}$ is evaluated from and maintained through both the training and test data according to a method detailed in [13]. Our overall algorithm is shown in Figure 1 and described in more detail in the following.

Fig. 1. Steps in our algorithm:
1. Derive a background model from the training sequence
2. Extract the chosen feature from the training sequence
3. Quantize samples to a coarser spatio-temporal grid, forming linear state data
4. Find the dominant fundamental period T_fund for the scene using the linear state data
5. 'Roll up' the linear state data using period T_fund, starting from the end, to form the average State Cycle estimate
6. Use the State Cycle to classify previously unseen frames
7. Synthesize output from the background and mis-matched areas in new frames
2.1 Feature Selection
A feature which summarizes some local characteristic of the image sequence must be chosen. For modelling the traffic junction we start by selecting the aspect ratio of an object's bounding box, anticipating that pedestrians will always be taller than they are wide, and vehicles will rarely be so under the majority of typical poses. In order to ensure symmetrical treatment of ratios greater and less than unity, we further develop a Log Aspect Ratio (LAR) feature $LAR_{x,y}$ at position (x, y) by taking the natural logarithm and clipping to ±1, resulting in ratios from 1/e to e:
$$LAR_{x,y} = \max\Big(-1,\; \min\Big(1,\; \log_e \frac{h_{x,y}}{w_{x,y}}\Big)\Big) \qquad (1)$$
where h and w are box height and width respectively. Bounding boxes are determined after applying morphological operations to a foreground binary mask $M^{fg}_{x,y,t}$, removing shapes below a certain minimum pixel area. The binary mask $M^{fg}_{x,y,t}$ is derived from the difference $D_{x,y,t}$ between the current image and the current background, according to the L1 (Manhattan) norm of the pixel vectors in colour space:
$$M^{fg}_{x,y,t} = \begin{cases} 1 & \text{if } D_{x,y,t} > \tau \\ 0 & \text{otherwise} \end{cases} \qquad (2)$$
where τ is a constant and
$$D_{x,y,t} = \|I_{x,y,t} - I^{B}_{x,y,t}\|_1 \qquad (3)$$
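A compact sketch of equations (1)-(3), written as my own illustration: the per-pixel L1 difference from the background gives the foreground mask, and each detected box is summarized by its clipped log aspect ratio. Morphological clean-up and connected-component extraction are assumed to be handled elsewhere (e.g. by an image-processing library).

```python
import numpy as np

def foreground_mask(frame, background, tau=30):
    """Eqs. (2)-(3): threshold the L1 colour difference from the background model."""
    D = np.abs(frame.astype(np.int32) - background.astype(np.int32)).sum(axis=2)
    return D > tau

def log_aspect_ratio(h, w):
    """Eq. (1): LAR = clip(log(h / w), -1, 1), symmetric for tall and wide boxes."""
    return float(np.clip(np.log(h / w), -1.0, 1.0))
```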
Thus for each frame of video It , a (potentially empty) list Lt of valid bounding boxes Bt,m is produced governed by the above rules Lt = {Bt,1 , Bt,2 , . . . Bt,m }
(4)
Fig. 2. (a) Bounding Box centres accumulated over time at a road junction scene. Colour represents aspect ratio: green samples have h > w (pedestrians), red samples have h < w (vehicles). The ratio for vehicles becomes unreliable here in the far distance. (b) Y-T cut (right) through the spatio-temporal volume showing periodic behaviour of a road junction scene at the vertical yellow line (left).
where the mth bounding box is characterized by the quad Bt,m = {x, y, w, h}
(5)
in which (x, y) is the bounding box centre, and (w, h) are its size, from which the LAR is calculated. The maximum value of m is determined by the number of objects detected in the current image. So the feature we have selected does not exist at every pixel; rather it will exist wherever in the spatio-temporal volume valid objects are detected. Figure 2(a) shows an example of accumulation of LAR over time in the training data, showing how it discriminates between people and vehicles. Meanwhile, with a plot of the image y-axis against time, Figure 2(b) illustrates the inherently periodic nature of activity on a road junction.

2.2 Spatio-temporal Histogram
Thus far the training data is represented by a set of points in a 4-D space (x, y, t, LAR). In order to facilitate comparison of feature occurrence within the spatio-temporal volume, we seek to build a spatio-temporal set of histograms over the feature space. Therefore we split the volume into a grid of h_max × v_max equal-sized square blocks of pixels spatially and n_max equal-sized blocks of frames temporally. At each spatio-temporal grid position, consisting of
$$\frac{x_{max}}{h_{max}} \times \frac{y_{max}}{v_{max}} \times \frac{t_{max}}{n_{max}} \qquad (6)$$
pixels, we construct a histogram $H_{h,v,n}$ of b_max equal-width bins over feature space. For LAR it is a bounded 1-D set
$$H_{h,v,n}(b) = \{b_1, b_2, \ldots, b_{max}\} \qquad (7)$$
where
$$b = \Big\lfloor \frac{b_{max}(LAR + 1)}{2} \Big\rfloor + 1 \qquad (8)$$
such that the range of the LAR feature (−1 ≤ LAR ≤ +1) is mapped uniformly onto bin number b, where 1 ≤ b ≤ b_max. The inherent loss of resolution in all dimensions as a result of this down-sampling operation is countered by the advantage of being able to quantify the similarity between any two spatio-temporal regions on the basis of the selected feature purely by comparing histograms. In fact from this point on, the method becomes independent of the chosen feature, and thus offers a degree of generality and considerable scope for matching any chosen feature(s).
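The grid of histograms and the bin mapping of equation (8) can be sketched as below; this is my own illustration, and the floor in the bin index (with the top bin clipped) is my reading of the layout.

```python
import numpy as np

def lar_to_bin(lar, b_max):
    """Eq. (8): map LAR in [-1, 1] uniformly onto a bin index in {1, ..., b_max}."""
    b = int(np.floor(b_max * (lar + 1.0) / 2.0)) + 1
    return min(b, b_max)                      # LAR == +1 falls into the top bin

def build_histograms(samples, x_max, y_max, t_max, h_max, v_max, n_max, b_max):
    """Accumulate (x, y, t, LAR) samples into an h_max x v_max x n_max grid of histograms."""
    H = np.zeros((h_max, v_max, n_max, b_max))
    for x, y, t, lar in samples:
        h = min(int(x * h_max / x_max), h_max - 1)
        v = min(int(y * v_max / y_max), v_max - 1)
        n = min(int(t * n_max / t_max), n_max - 1)
        H[h, v, n, lar_to_bin(lar, b_max) - 1] += 1
    return H
```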
2.3 The Sparsity Problem

It is quite possible that, given the relatively high dimensionality of the histogram containing the bounding box data points, the density is insufficient to yield meaningful distributions. One potential solution is to decrease the number of blocks in the grid in the dimension(s) causing the deficiency. Alternatively a degree of data smoothing may be applied, both over the bins within each histogram and also between spatio-temporal histograms. It was found that experimental results benefited from convolution of the former with a normalized 1-D Gaussian filter, and of the latter with a 3-D Gaussian kernel having potentially different variance in the spatial and temporal directions. Inevitably there will be some regions which are poorly supported, and steps to mitigate the effects of this may become necessary in later processing.

2.4 Fundamental Period Estimation
To derive an estimate of the fundamental period over which scene changes occur is a non-trivial procedure, and as such it is dealt with separately in Section 3. Suffice to say at this point that a scene may have a number of unrelated fundamental periods (including 'none') distributed over various regions (see Figure 3), and optimally distinguishing them is a topic for future research. In this work we consider applications like the traffic junction, where it is assumed that there is a single dominant effect, for which the period is K_fund blocks, each of t_max/n_max frames. Given a frame rate of F per second, the fundamental period is thus
$$T_{fund} = \frac{K_{fund}\, t_{max}}{F\, n_{max}} \text{ seconds.} \qquad (9)$$
Ideally the training data should be long enough to contain sufficient cycles of the fundamental period that the latter can be distinguished adequately from noise.

2.5 State Cycle and Model Initialization
We define the State Cycle $S^{k}_{h,v}$, $k = \{1 \ldots K_{fund}\}$, of a grid location (h, v) to be a temporal description of how the chosen feature varies throughout a single cycle
Fig. 3. Relative fundamental period distribution of the scene in Figure 2(a) based on per pixel temporal autocorrelation. Intensity representing period is given by the first significant peak. Much of the junction area is the same shade, indicating shared periodicity.
of its fundamental period of K_fund phases. Given that the array $H_{h,v,n}$ contains a number of cycles of this temporal description in succession, we wish to form an 'average histogram' $H_{fund}$ of size h_max × v_max × K_fund representing a summary of the scene's typical behaviour over the c most recent cycles of the fundamental period. Thus, taking the c most recent groups of K_fund blocks, where $c = \lfloor n_{max}/K_{fund} \rfloor$, the kth element of $H_{fund}$ is the mean of the kth elements of the c groups:
$$H_{fund,h,v,k}(b) = \frac{1}{c}\sum_{i=1}^{c} H_{h,v,\,n_{max}-iK_{fund}+k}(b) \qquad (10)$$
where k = {1, 2, ..., K_fund}. Normalization of $H_{fund}$ over b yields an estimate of feature probability $P_{fund}$, which is then our spatio-temporal model of the scene:
$$P_{fund,h,v,k}(b) = \frac{H_{fund,h,v,k}(b)}{\sum_{b=1}^{b_{max}} H_{fund,h,v,k}(b)} \qquad (11)$$
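Equations (10)-(11) amount to averaging the last c complete cycles of the histogram array and normalizing over the bins; a sketch of my own, using 0-based indexing:

```python
import numpy as np

def roll_up(H, K_fund):
    """Average the c most recent cycles of H (h_max, v_max, n_max, b_max) into
    H_fund of shape (h_max, v_max, K_fund, b_max), then normalize over the bins."""
    n_max = H.shape[2]
    c = n_max // K_fund
    recent = H[:, :, n_max - c * K_fund:, :]              # last c complete cycles
    H_fund = recent.reshape(H.shape[0], H.shape[1], c, K_fund, H.shape[3]).mean(axis=2)
    totals = H_fund.sum(axis=-1, keepdims=True)
    P_fund = np.divide(H_fund, totals, out=np.zeros_like(H_fund), where=totals > 0)
    return H_fund, P_fund
```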
Assuming that a continuous test sequence (e.g. real-time streamed video data) directly follows the initial training sequence, the state counter k, initialized to 1, may be updated every t_max/n_max frames according to the relation k = mod(k, K_fund) + 1, in order to keep track of the learned periodic scene behaviour.

2.6 Output Synthesis
The objective is to provide an output sequence from our algorithm showing only objects in the 'wrong place' at the 'wrong time'. For a query test frame $I^{query}$ appearing subsequent to model initialization, the foreground mask $M^{fg}$ is obtained as in equation (2), and valid object bounding boxes $B_{t,m}$ are derived as in (5). For each candidate bounding box, the LAR is evaluated from width and height using equation (1) and b is given by (8). Values for h and v are calculated using $h = x\, h_{max}/x_{max}$ and $v = y\, v_{max}/y_{max}$. Thus the estimated probability of that particular aspect ratio bounding box at that position is given by the model, and may be compared with a threshold α in order to give a binary decision r as to whether the object is sufficiently rare to be displayed
$$r = \begin{cases} 1 & \text{if } P_{fund,h,v,k}(b) < \alpha \\ 0 & \text{otherwise} \end{cases} \qquad (12)$$
On the basis of r being true, for each object in $I^{query}$, a matting mask $M^{matt}$ is used to re-insert pixels according to the bounding box dimensions from the new frame $I^{query}$ into the background $I^{B}$ for all objects determined to be anomalous with respect to the current model. The background with insertions forms the output image from the algorithm.
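A sketch of the output-synthesis decision of equation (12): look up the model probability of the observed bin at the box's grid cell and current phase, and if it falls below α copy the box region from the query frame into the background. Variable names and the simple rectangular matting are my own choices.

```python
import numpy as np

def synthesize_output(frame, background, boxes, P_fund, k, alpha,
                      x_max, y_max, h_max, v_max, b_max):
    """Return the background with anomalous boxes (eq. 12) re-inserted from the frame."""
    out = background.copy()
    for (x, y, w, h) in boxes:                      # box centre (x, y) and size (w, h)
        lar = float(np.clip(np.log(h / w), -1.0, 1.0))
        b = min(int(np.floor(b_max * (lar + 1.0) / 2.0)), b_max - 1)
        gh = min(int(x * h_max / x_max), h_max - 1)
        gv = min(int(y * v_max / y_max), v_max - 1)
        if P_fund[gh, gv, k, b] < alpha:            # r = 1: sufficiently rare to display
            x0, y0 = max(0, int(x - w / 2)), max(0, int(y - h / 2))
            x1, y1 = int(x + w / 2), int(y + h / 2)
            out[y0:y1, x0:x1] = frame[y0:y1, x0:x1]
    return out
```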
3 Determining the Fundamental Period
The method described in the previous section relies totally on obtaining a robust estimate of the fundamental period of a region or the whole image area using the 3-D spatio-temporal grid of histograms $H_{h,v,n}$ defined in (7). We seek to find the most common lag between instances of temporal self-similarity at times n_1 and n_2 over all possible combinations of n_1 and n_2. As a measure of the similarity between any two histograms, we utilize the general definition of the symmetric Kullback-Leibler Divergence (KLD) between distributions P_1 and P_2, given by
$$D_{KL}(P_1, P_2) = \sum_i \Big(P_{1,i}\log_2\frac{P_{1,i}}{P_{2,i}} + P_{2,i}\log_2\frac{P_{2,i}}{P_{1,i}}\Big) \text{ bits} \qquad (13)$$
Thus over an arbitrary spatial region R in our grid, we define the 'average Dissimilarity matrix' S between two temporal planes at times n_1 and n_2 as
$$S_{n_1,n_2} = \frac{1}{R}\sum_{v,h\in R} D_{KL}\big(P_{n_1}(v,h), P_{n_2}(v,h)\big) \qquad (14)$$
which after simplification yields
$$S_{n_1,n_2} = \frac{1}{R}\sum_{v,h\in R}\;\sum_{i=1}^{b_{max}} (P_{n_1,i} - P_{n_2,i})\log_2\frac{P_{n_1,i}}{P_{n_2,i}} \qquad (15)$$
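Equation (15) can be computed directly from the normalized histograms; the vectorized sketch below is my own, with a small ε guarding empty bins (an addition not in the paper).

```python
import numpy as np

def dissimilarity_matrix(P, eps=1e-12):
    """Average symmetric KL divergence (eq. 15) between temporal planes.

    P : (h_max, v_max, n_max, b_max) normalized histograms.
    Returns S of shape (n_max, n_max).
    """
    h_max, v_max, n_max, b_max = P.shape
    Q = P.reshape(-1, n_max, b_max) + eps          # (R, n_max, b_max), R = h_max * v_max
    logQ = np.log2(Q)
    S = np.zeros((n_max, n_max))
    for n1 in range(n_max):
        diff = Q[:, n1:n1 + 1, :] - Q              # broadcast against all other times
        dlog = logQ[:, n1:n1 + 1, :] - logQ
        S[n1] = (diff * dlog).sum(axis=(0, 2)) / (h_max * v_max)
    return S
```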
An example of the symmetric Divergence relative to a single time is illustrated in Figure 4(a), and between all combinations of times as matrix S in Figure 4(b). Because it is the coincidence of minima in S that we are interested in, we subtract its mean to form S':
$$S'(i,j) = S(i,j) - \frac{1}{i_{max}\, j_{max}}\sum_{i,j} S(i,j) \qquad (16)$$
and construct the normalized 2-D autocovariance matrix A at all possible lags (d_i, d_j) in both directions:
$$A(d_i,d_j) = \frac{\sum_{i,j} S'(i,j)\, S'(i+d_i, j+d_j)}{\sqrt{\sum_{i,j} S'(i,j)^2}\;\sqrt{\sum_{i,j} S'(i+d_i, j+d_j)^2}} \qquad (17)$$
Fig. 4. (a) Temporal KL Divergence at a single grid position (corresponding to 50 on the x-axis) relative to all other temporal grid positions. Naturally the divergence is zero with respect to itself. (b) Average ‘Divergence’ matrix between histograms at temporal grid positions n1 , n2 for all combinations of n1 and n2 . Using the Symmetric Kullback-Leibler formula, divergence is summed over all spatial grid positions of the scene, as well as over the histogram bins (equation (15)).
Fig. 5. (a) Lattice for distance d = 15 generated by g(d)g(d)T . Multiplying such a lattice by the autocovariance matrix in (b) for a range of d identifies the fundamental period. (b) Autocovariance of the Divergence matrix in Figure 4(b), showing the strong lattice structure corresponding to a dominant fundamental in the video sequence.
As shown in Figure 5(b), matrix A exhibits a regular structure of peaks spaced at the dominant period, if it exists. The fundamental interval K_fund is identified by exploratory element-wise multiplication of A with a regular matrix of peaks generated by column vector g(d), as shown in Figure 5(a), whereby varying the pitch d yields a peak in the overall temporal scene power observed:
$$K_{fund} = \arg\max_d \big(g(d)^T A\, g(d)\big) \qquad (18)$$
for d_min ≤ d ≤ d_max and binary vector g such that
$$g_i(d) = \delta\big((i - n_{max}) \bmod d\big), \quad 1 \le i \le 2n_{max} - 1 \qquad (19)$$
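A sketch of equations (16)-(19): subtract the mean of S, form the normalized autocovariance over all lags, and score each candidate pitch d with the lattice vector g(d). The brute-force lag loop and the square-root normalization reflect one reasonable reading of the text rather than the authors' implementation.

```python
import numpy as np

def autocovariance(S):
    """Normalized 2-D autocovariance (eqs. 16-17) of the mean-subtracted dissimilarity."""
    Sp = S - S.mean()
    n = S.shape[0]
    A = np.zeros((2 * n - 1, 2 * n - 1))
    for di in range(-(n - 1), n):
        for dj in range(-(n - 1), n):
            a = Sp[max(0, -di):n - max(0, di), max(0, -dj):n - max(0, dj)]
            b = Sp[max(0, di):n - max(0, -di), max(0, dj):n - max(0, -dj)]
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            A[di + n - 1, dj + n - 1] = (a * b).sum() / denom if denom > 0 else 0.0
    return A   # brute force; adequate for a modest n_max

def fundamental_period(A, n_max, d_min=4, d_max=50):
    """Eqs. (18)-(19): pick the pitch d whose lattice g(d) best matches the peaks of A."""
    best_d, best_power = d_min, -np.inf
    for d in range(d_min, d_max + 1):
        i = np.arange(1, 2 * n_max)                 # 1 <= i <= 2*n_max - 1
        g = ((i - n_max) % d == 0).astype(float)
        power = g @ A @ g
        if power > best_power:
            best_d, best_power = d, power
    return best_d
```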
Figure 6(a) shows how the scene's signal power peaks at a given value of d. For our application the region R represents the entire scene, but this technique could equally well work with subsets of the scene, be they rectangular or square blocks, or even arbitrary shapes. A yet more elaborate scheme for analyzing the autocovariance matrix A is described in [10], in particular explaining that a diagonal equivalent of the matrix in Figure 5(a) is necessary to detect periodicity in scenes in which self-similarity of appearance peaks more than once per cycle (e.g. a swinging pendulum).
Fig. 6. (a) Relative spectral power of the scene in Figure 2(a) for values of d between 4 and 50. Note the fundamental at d = 15, giving a period of 15 × 7.5s = 112.5s corresponding to the cycle time of the junction signals. (b) Timing diagram showing correct synchronization of model throughout test sequence. Top: Pixels from closest green traffic light in scene. Middle: Consensus of light over cycles in training data. Bottom: Internal state counter. Note consistent phase relationship between all three.
Fig. 7. Examples from Scenario 1 show how the algorithm discovers objects not matching the learned spatio-temporal template, and thus splits the scene into 3 layers on the basis of its dynamic behaviour. Layer 0 is the continuously updated ‘static’ background, Layer 1 normal scene activity - the ‘dynamic background’, and Layer 2 carries ‘novel’ intrusions with respect to the training data. Some objects cannot be separated, regardless of threshold chosen. In (a) L2 correctly shows a car unusually pulling out onto the main road, whereas with No Temporal Processing (NTP), this cannot be distinguished from normal cars on the right. In (b) L2 spots the car over the waiting line, whereas NTP sees only a passing pedestrian. In (c) L2 finds pedestrians waiting at the crossing, whilst NTP wrongly highlights a car.
4 Experiment
For our experiments we chose three busy city-centre road junctions controlled by traffic lights. Each dataset was made up of 30000 frames of 720×576-pixel colour video at a frame rate of 25 Hz, yielding sequences of 20 minutes duration. The data was spatially down-sampled to 360×288 pixels to ease computational load. The short-term background model was obtained using the method described in [13], based on blocks of 20 frames taken at 12-second intervals. The L1 norm of the background-subtracted data was thresholded at a value of 30, given an intensity range of 0-255 per colour channel, and after morphological clean-up, identified object areas were thresholded to reject those below 70 pixels. The Log Aspect Ratio feature range of +1 to −1 was split into 5 histogram bins, and the spatio-temporal histogram grid was 8×8 pixels wide spatially and 180 frames deep temporally, giving h_max = 45, v_max = 36, and n_max = 167.

For each sequence, we utilized the entire spatio-temporal matrix to estimate the global fundamental period K_fund for the scene using the method described in Section 3. We then allowed c = 5 cycles of this fundamental period to be used for training data, leaving the remainder for testing. Figure 6(b) illustrates how the state counter is correctly and consistently aligned with junction activity throughout the test sequence, as measured by the actual brightness of pixels representing the green traffic light at the bottom of the scene.

The results for Scenarios 1, 2 and 3 are shown in Figures 7, 8, and 9. Figures 7 and 8 show 3 rows of 5 images, with each row representing an example frame from the algorithm output. The left-most image is the original unprocessed frame, whilst the second image is the short-term 'static' background, which we have labelled 'Layer 0'. The objects detected to be anomalous according to our model are shown inserted into the static background and labelled 'Layer 2', the foreground. Similarly, the original image with the background inserted where the object was detected is shown labelled 'Layer 1', the dynamic background. Finally, in the right-hand column, for comparison purposes, we show the result of classification using a non-temporal equivalent model derived from the same training data. To achieve this, bin values of each histogram $P_{h,v,k}(b)$ are marginalized over the time dimension to yield $P_{h,v}(b)$. Overall, when analyzing images, the algorithm achieves 3 FPS throughput on a 2 GHz PC, although initially building the model carries a considerably higher computational cost.
5 Discussion
The results in Figures 7, 8 and 9 demonstrate how, in spite of a background that is non-stationary, our algorithm has managed to split scene activity into 3 distinct layers. This has been achieved partly by being able to make reliable estimates of true background amongst a busy scene, and partly by classifying objects based on a spatio-temporal template learned from the scene during training. What we term Layer 0 takes on the non-stationary background, permitting detection of less persistently occurring objects such as people and vehicles. Having thus obtained reference to the latter in isolation from the background, our
Fig. 8. Examples from Scenario 2, an entirely different traffic junction. From behind, cyclists tend to have an aspect ratio similar to people. Thus in (a) L2 singles out a cyclist close to the pathway, which with No Temporal Processing (NTP), cannot be separated. In (b) L2 has detected a different cyclist, again with the same profile as a person, where there should not be people, whilst NTP sees only part of a car in normal position. In (c) L2 observes a person on the wrong part of the crossing, inseparable from vehicles on the junction with NTP.
Fig. 9. Scenario 3 with Optical Flow as the feature instead of shape ratio. Top: SpatioTemporal model correctly highlights errant vehicle crossing normal traffic from left. Bottom: Spatial-only model (NTP) wrongly highlights normal traffic instead of van jumping the red signal.
spatio-temporal model classifies them into Layer 1, objects of a suitable aspect ratio for the part of state-space they occupy, and Layer 2, objects which contradict the model. Within this framework, Layer 1 has taken on the role of a
'dynamic background' in relation to what might frequently be referred to as 'foreground' objects. Such a dynamic background has three dimensions, and a match in all of them is required, as well as an acceptable value for the feature at those coordinates, in order that the object is deemed acceptable as a dynamic background item. Thus we claim that our spatio-temporal model has more discriminative power than a spatial-only 2-D probabilistic model, which is oblivious to time. By marginalizing out the time dimension, one effectively increases the likelihood of an object at times in the cycle when it should be considered rare, and reduces its likelihood at times when it should be considered common. The overall unwanted result is thus a desensitization of the model.

The upshot of this situation is that with no temporal processing (termed 'NTP') too many unimportant objects are detected, whilst use of our scene-synchronized spatio-temporal model reveals far more salient detection amongst 'higher layers' of temporal change, associated with interesting and unexpected spatio-temporal events. Furthermore, all this may be achieved without prior knowledge of the size and location of potential triggering objects in the scene. In particular, among the results are examples of our model detecting objects of interest, whilst the model without temporal processing fails to highlight these, but identifies less truly interesting objects instead. That this remains so, however one decides to select the detection thresholds for the respective models, strongly supports our claim that the temporal dimension is highly significant.
6 Conclusion and Further Work
We have demonstrated an algorithm capable of automatically learning the global periodicity of scenes, such as that exhibited at junctions controlled by traffic lights. The technique estimates a value for the global fundamental period, and then builds a spatio-temporal model based on this estimate. It has been demonstrated by experiment that the method can be more discriminating with regard to activity of a periodic scene than a model oblivious to repeating temporal trends. As such, we draw the conclusion that the method described has successfully decomposed the scene into separate layers on the basis of its dynamic characteristics. Even using only a restricted feature set, the approach achieves good results. However, as previously alluded to, the histograms defined could readily represent a more diverse range of image features.

In its present form, the model described estimates the period once during training. A practical realization would need to re-evaluate the fundamental period continuously, in order to maintain both frequency and phase lock with respect to current scene activity, especially since many scenes will not be strictly periodic. Both short-term phase noise and longer-term frequency drift problems may be soluble using the Phase Locked Loop approach detailed in [12], whilst an on-line solution which augments the current model with additional data as it becomes available would make for a truly adaptive system. It is clear that many scenes will be composed of more than one harmonically unrelated periodic component. Instead of seeking a single global fundamental,
the scene may be searched in a systematic fashion by using the estimation technique we have described on smaller regions. If near-optimal regions of common periodicity could be found, the 'rolling up' of periodic training data implemented here is equally applicable to different image areas, each with its own K_fund.
References

1. Stauffer, C., Grimson, W.: Adaptive background mixture models for real-time tracking. In: IEEE CVPR, Colorado, pp. 246–252 (1999)
2. Oliver, N., Rosario, B., Pentland, A.: A bayesian computer vision system for modelling human interactions. IEEE PAMI 22(8), 831–843 (2000)
3. Natschläger, T., Ruf, B.: Spatial and temporal pattern analysis via spiking neurons. Network: Computation in Neural Systems 9(3), 319–332 (1998)
4. Ng, J., Gong, S.: On the binding mechanism of synchronised visual events. In: IEEE Workshop on Motion and Video Computing (December 2002)
5. Bobick, A.F., Davis, J.W.: The recognition of human movement using temporal templates. IEEE PAMI 23(3), 257–267 (2001)
6. Xiang, T., Gong, S.: Beyond tracking: Modelling activity and understanding behaviour. IJCV 67(1), 21–51 (2006)
7. Szummer, M.: Temporal texture modeling. Technical Report 346, MIT Media Lab Perceptual Computing (1995)
8. Liu, F., Picard, R.W.: Finding periodicity in space and time. In: ICCV, pp. 376–383 (1998)
9. Polana, R., Nelson, R.C.: Detection and recognition of periodic, nonrigid motion. IJCV 23(3), 261–282 (1997)
10. Cutler, R., Davis, L.S.: Robust real-time periodic motion detection, analysis, and applications. IEEE PAMI 22(8), 781–796 (2000)
11. Casdagli, M.: Recurrence plots revisited. Physica D 108, 12–44 (1997)
12. Boyd, J.: Synchronization of oscillations for machine perception of gaits. CVIU 96(1), 35–59 (2004)
13. Russell, D., Gong, S.: Minimum cuts of a time-varying background. In: BMVC, pp. 809–818 (September 2006)
SERBoost: Semi-supervised Boosting with Expectation Regularization

Amir Saffari (1), Helmut Grabner (1,2), and Horst Bischof (1)

(1) Institute for Computer Graphics and Vision, Graz University of Technology, Austria
{saffari,hgrabner,bischof}@icg.tugraz.at
(2) Computer Vision Laboratory, ETH Zurich, Switzerland
[email protected]
Abstract. The application of semi-supervised learning algorithms to large scale vision problems suffers from the bad scaling behavior of most methods. Based on the Expectation Regularization principle, we propose a novel semi-supervised boosting method, called SERBoost, that can be applied to large scale vision problems. The complexity is mainly dominated by the base learners. The algorithm provides a margin regularizer for the boosting cost function and shows a principled way of utilizing prior knowledge. We demonstrate the performance of SERBoost on the Pascal VOC2006 set and compare it to other supervised and semi-supervised methods, where SERBoost shows improvements both in terms of classification accuracy and computational speed.
1 Introduction
Semi-supervised learning addresses the problem of "How to improve the performance of an adaptive model using unlabeled data together with the labeled data?". Many supervised approaches obtain high recognition rates if enough labeled training data is available. However, for most practical problems there is simply not enough labeled data available, whereas hand-labeling is tedious and expensive, and in some cases not even feasible, while most of the time a large amount of unlabeled data is available. This is especially true for applications in computer vision like object recognition and categorization. Therefore, the central issue of semi-supervised learning is to find a way to exploit this huge amount of obscured information from unlabeled data. Due to the considerably large amount of literature in this field and lack of space, we refrain from mentioning most of the well-known semi-supervised methods and encourage interested readers to refer to a comprehensive overview of this field [1] and also to the recent book of Chapelle et al. [2]. However, since we are directly addressing semi-supervised boosting methods, it should be noted that there are a few attempts to formulate semi-supervised learning as a boosting procedure, started by [3,4]. Recently, based on the idea of graph-based
This work has been supported by the Austrian Joint Research Project Cognitive Vision under projects S9103-N04 and S9104-N04 and the Austrian Science Fund (FWF) under the doctoral program Confluence of Vision and Graphics W1209.
manifold regularization methods, Mallapragada et al. [5] and Chen and Wang [6] have proposed other approaches for semi-supervised boosting. Furthermore, in computer vision, Cohen et al. [7] use both labeled and unlabeled data to improve face detectors. In [8,9] a semi-supervised approach for detecting objects has been developed. Recently, Mann and McCallum [10] have analyzed many semi-supervised learning algorithms and noted that despite the vast amount of literature there are not many practical applications, and they pointed out two main reasons for that: 1) many algorithms (especially those based on EM) are fragile and are heavily dependent on hyper-parameters, and 2) many algorithms are computationally expensive, with a scaling behavior of O(n^3), where n is the number of unlabeled samples. This is counterproductive since the full power of semi-supervised learning can only be obtained with a large amount of unlabeled data. Mann and McCallum proposed a method called Expectation Regularization on the exponential family of parametric models which does not suffer from these problems. The basic idea is to augment the objective function of the labeled data with a term that encourages the model predictions on unlabeled data to match certain expectations. Based on this idea, we propose a novel semi-supervised boosting algorithm which has the following properties:

- It scales reasonably with respect to the number of labeled and unlabeled samples, and in fact provides the same complexity as traditional supervised boosting methods, while being very easy to implement.
- It naturally provides a margin regularizer for the boosting algorithm [11], which has relations to the principles of maximum entropy learning [12].
- It provides a principled way of incorporating prior knowledge, e.g., [13], into the learning process of the semi-supervised boosting model.
- It is robust with respect to variations of its hyper-parameter.
- It is a generalization of the GentleBoost [14] algorithm to the semi-supervised domain.
- On the Pascal VOC2006 datasets, it outperforms the Linear SVM, TSVM [15], Random Forests [16], and GentleBoost [14] by a large margin and gives comparable results to the χ2-SVM [17,18,19] classifier while being considerably faster.

This paper is organized as follows: we first derive the novel boosting formulation based on the idea of expectation regularization in Section 2. In Section 3, we demonstrate the performance of our model and compare it to a few other supervised and semi-supervised methods, while Section 4 provides a conclusion and points out future work.
2 Boosting with Expectation Regularization
We address the problem of semi-supervised binary classification. Assume that we are given a prior conditional probability in the form of $P_p(y|x)$, where $y \in \{-1, 1\}$ is the binary class label and $x \in \mathbb{R}^D$ is a sample instance. This prior knowledge
expresses our belief regarding the conditional distribution of the labels given the input features. This prior knowledge can be obtained in different ways: it can be only the label prior $P_p(y)$ [10]; it can be, as will be shown later in this paper, as weak as the maximum entropy prior $\forall x, y: P_p(y|x) = 0.5$; or it can be the output of another learning method. The latter case is very interesting and important in practice, as it provides solutions for knowledge transfer and for the incorporation of prior knowledge. Given a set of labeled samples, $\mathcal{X}_L$, and unlabeled samples, $\mathcal{X}_U$, defined as
$$\mathcal{X}_L = \{(x_1, y_1), \ldots, (x_{N_L}, y_{N_L})\}, \quad x_i \in \mathbb{R}^D,\ y_i \in \{-1, 1\}, \qquad \mathcal{X}_U = \{x_1, \ldots, x_{N_U}\}, \quad x_i \in \mathbb{R}^D, \tag{1}$$
we denote $\mathcal{X} = \mathcal{X}_L \cup \mathcal{X}_U$ as the overall collection of data samples. Let $P_p(y|x)$ be our assumed prior probability and $\hat{P}(y|x)$ be the probability estimated by the learning model. The goal is to use boosting [20,14] to learn an additive model $F(x) = \sum_{t=1}^{T} f_t(x)$ in a way that its classification accuracy is as high as possible while its probabilistic predictions over the unlabeled samples resemble the given prior.
2.1 Loss Function
We define a loss function for the learning process which contains two components, corresponding to the labeled and unlabeled data:
$$\mathcal{L}(F(x), \mathcal{X}) = \mathcal{L}_L(F(x), \mathcal{X}_L) + \alpha \mathcal{L}_U(F(x), \mathcal{X}_U) \tag{2}$$
where $\mathcal{L}_L$ and $\mathcal{L}_U$ are the loss functions for the labeled and unlabeled samples, respectively, and $\alpha \geq 0$ defines the contribution of the unlabeled loss.
Loss for the Labeled Samples. Since we are trying to formulate our model as a boosting method, we use the traditional exponential loss function for the labeled data samples:
$$\mathcal{L}_L(F(x), \mathcal{X}_L) = \mathbb{E}\big(e^{-yF(x)}\big) = \sum_{x \in \mathcal{X}_L} e^{-yF(x)}. \tag{3}$$
Note that we drop the scaling factors, e.g., $\frac{1}{N_L}$, in calculating the expectations, as these parameters can easily be integrated into $\alpha$. Since boosting algorithms are designed to minimize the exponential loss, it is known [14] that the minimizer of Eq. (3) is
$$F(x) = \frac{1}{2} \log \frac{\hat{P}(y=1|x)}{\hat{P}(y=-1|x)}. \tag{4}$$
Therefore,
$$P^{+}(x) = \hat{P}(y=1|x) = \frac{e^{F(x)}}{e^{F(x)} + e^{-F(x)}}, \qquad P^{-}(x) = \hat{P}(y=-1|x) = 1 - \hat{P}(y=1|x) = \frac{e^{-F(x)}}{e^{F(x)} + e^{-F(x)}}. \tag{5}$$
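Eqs. (4)-(5) amount to a (scaled) logistic link between the additive score and the class posterior. The following short Python/NumPy sketch is only an illustration of these two formulas; the function names are ours and not part of any released implementation.

```python
import numpy as np

def posterior_from_score(F):
    """Class posteriors implied by the boosting score F(x), cf. Eq. (5).

    P+(x) = e^F / (e^F + e^-F), rewritten as 1 / (1 + e^{-2F}).
    """
    F = np.asarray(F, dtype=float)
    p_pos = 1.0 / (1.0 + np.exp(-2.0 * F))   # P(y = +1 | x)
    return p_pos, 1.0 - p_pos                # (P+, P-)

def score_from_posterior(p_pos, eps=1e-12):
    """Inverse mapping of Eq. (4): F(x) = 0.5 * log(P+ / P-)."""
    p = np.clip(p_pos, eps, 1.0 - eps)
    return 0.5 * np.log(p / (1.0 - p))
```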
Note that, for notational brevity, we use superscript symbols to refer to the class labels in conditional probabilities.
Loss for the Unlabeled Samples. It is natural to define the unlabeled cost function as the Kullback-Leibler (KL) divergence between the prior probability and the optimized model [10]:
$$\mathcal{L}_U(F(x), \mathcal{X}_U) = \mathbb{E}\big(D(P_p \,\|\, \hat{P})\big) \tag{6}$$
where
$$D(P_p \,\|\, \hat{P}) = \sum_{y \in \{-1,1\}} P_p(y|x) \log \frac{P_p(y|x)}{\hat{P}(y|x)} = \sum_{y \in \{-1,1\}} P_p(y|x) \log P_p(y|x) - \sum_{y \in \{-1,1\}} P_p(y|x) \log \hat{P}(y|x) = -H(P_p) + H(P_p, \hat{P}) \tag{7}$$
is the KL-divergence. $H(P_p, \hat{P})$ is the cross entropy between the target distribution and the estimated model, and $H(P_p)$ is the entropy of the target distribution. Since $H(P_p)$ is a constant and does not depend on the optimized model, we can simply drop it. Furthermore, since we are dealing with a binary classification problem, by using Eq. (5) we can write:
$$\begin{aligned} H(P_p, \hat{P}) &= -\sum_{y \in \{-1,1\}} P_p(y|x) \log \hat{P}(y|x) \\ &= -\Big( P_p^{+}(x) \log \hat{P}^{+}(x) + (1 - P_p^{+}(x)) \log(1 - \hat{P}^{+}(x)) \Big) \\ &= -\Big( P_p^{+}(x) \log \frac{\hat{P}^{+}(x)}{1 - \hat{P}^{+}(x)} + \log(1 - \hat{P}^{+}(x)) \Big) \\ &= -\Big( 2P_p^{+}(x)F(x) - F(x) - \log(e^{F(x)} + e^{-F(x)}) \Big) \\ &= -(2P_p^{+}(x) - 1)F(x) + \log(e^{F(x)} + e^{-F(x)}). \end{aligned} \tag{8}$$
We define
$$y_p = 2P_p^{+}(x) - 1, \qquad y_p \in [-1, 1] \tag{9}$$
as the prior-label confidence for an unlabeled data sample, induced from the prior knowledge $P_p(y|x)$. In order to facilitate the derivation of our boosting algorithm, we use the exponential transformation of Eq. (8) and write the unlabeled loss function as:
$$\mathcal{L}_U(F(x), \mathcal{X}_U) = \sum_{x \in \mathcal{X}_U} \frac{1}{2}\, e^{-y_p F(x)} \big(e^{F(x)} + e^{-F(x)}\big) = \sum_{x \in \mathcal{X}_U} e^{-y_p F(x)} \cosh(F(x)). \tag{10}$$
This loss function has a structure very similar to the exponential loss of the labeled data, with the prior label $y_p$ interpreted as the target value of $F(x)$. The
role of $\cosh(F(x))$ can also be interpreted as a margin regularizer [11] for the boosting cost function. In fact, $\cosh(\cdot) \geq 1$ is a convex function with a minimum at $F(x) = 0$, which can prevent the learned function from becoming over-confident and hence can prevent over-fitting. Furthermore, if we set the prior knowledge to be the maximum entropy prior, i.e., $P_p(y|x) = 0.5$, then $y_p = 0$ for all unlabeled samples and this loss reduces to a margin cost functional [11]. Therefore, our formulation also explains the relation of this margin regularizer to the maximum entropy learning principle [12], which is best stated by Jaynes [21] as: Information theory provides a constructive criterion for setting up probability distributions on the basis of partial knowledge, and leads to a type of statistical inference which is called the maximum entropy estimate. It is the least biased estimate possible on the given information; i.e., it is maximally noncommittal with regard to missing information.
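As a small numerical illustration of this point (not part of the original work), the snippet below evaluates the per-sample unlabeled loss of Eq. (10); with the maximum entropy prior, y_p = 0, it reduces to cosh(F(x)), which is minimized at F(x) = 0 and therefore penalizes over-confident scores on unlabeled data.

```python
import numpy as np

def unlabeled_loss(F, y_p):
    """Per-sample unlabeled loss of Eq. (10): exp(-y_p * F) * cosh(F)."""
    return np.exp(-y_p * F) * np.cosh(F)

F = np.linspace(-3.0, 3.0, 7)          # grid of ensemble scores F(x)
print(unlabeled_loss(F, y_p=0.0))      # = cosh(F): minimum at F = 0
print(unlabeled_loss(F, y_p=1.0))      # confident positive prior favours F > 0
```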
2.2 Learning
We adopt the functional gradient descent view of boosting [11,14] to derive the loss function of the base models at each boosting iteration. According to the gradient descent principle, at each step of boosting we look for a function $f_t(x)$ which, if added to the current ensemble $F(x)$, would result in an improvement of the underlying objective function. We can write the overall loss function of Eq. (2) as:
$$\mathcal{L}(F(x), \mathcal{X}) = \sum_{x \in \mathcal{X}_L} e^{-yF(x)} + \alpha \sum_{x \in \mathcal{X}_U} e^{-y_p F(x)} \cosh(F(x)). \tag{11}$$
The gradient of $\mathcal{L}(F(x), \mathcal{X})$ with respect to the current model $F(x)$ can be written as:
$$\nabla\mathcal{L}_F = \frac{\partial \mathcal{L}(F(x), \mathcal{X})}{\partial F(x)} = -\sum_{x \in \mathcal{X}_L} y\, e^{-yF(x)} + \alpha \sum_{x \in \mathcal{X}_U} -y_p\, e^{-y_p F(x)} \cosh(F(x)) + \alpha \sum_{x \in \mathcal{X}_U} e^{-y_p F(x)} \sinh(F(x)). \tag{12}$$
Therefore, the overall optimization problem for adding a function at the $t$-th stage of boosting can be formulated as:
$$f_t(x) = \arg\max_{f(x)}\ -\langle \nabla\mathcal{L}_F, f(x) \rangle \tag{13}$$
where $\langle A, B \rangle := \sum_x A(x)B(x)$ is an inner product. We introduce the sample weights as
$$\forall x \in \mathcal{X}_L: w_L(x) = e^{-yF(x)} \quad \text{and} \quad \forall x \in \mathcal{X}_U: w_P(x) = e^{-y_p F(x)} \tag{14}$$
and define the pseudo-labels for the unlabeled samples as
$$\hat{y} = y_p \cosh(F(x)) - \sinh(F(x)). \tag{15}$$
Using Eq. (12), we can write the loss function $\langle \nabla\mathcal{L}_F, f(x) \rangle = \mathcal{L}_f$ to be minimized as:
$$\mathcal{L}_f(\mathcal{X}) = \sum_{x \in \mathcal{X}_L} -y\, w_L(x) f(x) + \alpha \sum_{x \in \mathcal{X}_U} -\hat{y}\, w_P(x) f(x). \tag{16}$$
Therefore, if we define the pseudo-weights for the unlabeled samples as $w_U(x) = |\hat{y}|\, w_P(x)$, we can write Eq. (16) as:
$$\begin{aligned} \mathcal{L}_f(\mathcal{X}) &= \sum_{x \in \mathcal{X}_L} -y\, w_L(x) f(x) + \alpha \sum_{x \in \mathcal{X}_U} -s(\hat{y})\, w_U(x) f(x) \\ &= \sum_{\substack{x \in \mathcal{X}_L \\ y f(x)=1}} -w_L(x) + \sum_{\substack{x \in \mathcal{X}_L \\ y f(x)=-1}} w_L(x) + \alpha \Bigg( \sum_{\substack{x \in \mathcal{X}_U \\ s(\hat{y}) f(x)=1}} -w_U(x) + \sum_{\substack{x \in \mathcal{X}_U \\ s(\hat{y}) f(x)=-1}} w_U(x) \Bigg), \end{aligned} \tag{17}$$
where $s(\cdot)$ is the sign function. If we normalize the sample weights to sum to one, then we know that
$$\sum_{\substack{x \in \mathcal{X}_L \\ y f(x)=1}} w_L(x) + \sum_{\substack{x \in \mathcal{X}_L \\ y f(x)=-1}} w_L(x) = 1, \qquad \sum_{\substack{x \in \mathcal{X}_U \\ s(\hat{y}) f(x)=1}} w_U(x) + \sum_{\substack{x \in \mathcal{X}_U \\ s(\hat{y}) f(x)=-1}} w_U(x) = 1. \tag{18}$$
As a result, we can simplify Eq. (17) further:
$$\mathcal{L}_f(\mathcal{X}) = 2 \sum_{\substack{x \in \mathcal{X}_L \\ y f(x)=-1}} w_L(x) + 2\alpha \sum_{\substack{x \in \mathcal{X}_U \\ s(\hat{y}) f(x)=-1}} w_U(x) - (1 + \alpha). \tag{19}$$
The first and second terms in this equation correspond to the weighted sum of mis-classifications of $f(x)$ with respect to the labeled samples and the unlabeled (pseudo-labeled) samples, respectively. The last term is a constant; thus, minimizing the loss function of Eq. (16) is equivalent to minimizing the weighted mis-classification rate. Consequently, we can use any ordinary classification model as a weak learner by taking the sample weights into account. The overall boosting procedure is depicted in Algorithm 1. The computational complexity of our boosting method is dominated mainly by the complexity of its base models, as the overhead is linear in the number of samples.
2.3 Priors
As discussed earlier, the prior probability can be obtained in different ways, and since our method is general enough, one can use any source of information in this respect. In order to show this fact, we use the following two priors in our experiments:
Algorithm 1. SERBoost: Semi-supervised Expectation Regularization based Boosting
Require: Training samples: X_L and X_U.
Require: Prior knowledge: ∀x ∈ X_U, y: P_p(y|x).
Require: T as the number of base models and α as the unlabeled loss parameter.
1: Set the model F(x) = 0.
2: Set the weights ∀x ∈ X_L: w_L(x) = 1/|X_L| and ∀x ∈ X_U: w_P(x) = 1/|X_U|.
3: Set the prior labels ∀x ∈ X_U: y_p = 2P_p(y = 1|x) − 1.
4: for t = 1 to T do
5:   Compute the pseudo-labels ∀x ∈ X_U: ŷ = y_p cosh(F(x)) − sinh(F(x)).
6:   Compute the weights ∀x ∈ X_U: w_U(x) = |ŷ| w_P(x).
7:   Normalize the weights ∀x ∈ X_L: w_L(x) ← w_L(x) / Σ_{x∈X_L} w_L(x) and ∀x ∈ X_U: w_U(x) ← w_U(x) / Σ_{x∈X_U} w_U(x).
8:   Find the base function f_t(x) = arg min_{f(x)} Σ_{x∈X_L} −y w_L(x) f(x) + α Σ_{x∈X_U} −sign(ŷ) w_U(x) f(x).
9:   Update the model F(x) ← F(x) + f_t(x).
10:  Update the weights ∀x ∈ X_L: w_L(x) ← w_L(x) e^{−y f_t(x)} and ∀x ∈ X_U: w_P(x) ← w_P(x) e^{−y_p f_t(x)}.
11: end for
12: Output the final model: F(x).
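Algorithm 1 maps almost directly onto code. The sketch below is a minimal Python rendition included purely for exposition; it is not the authors' implementation. It uses a weighted scikit-learn decision tree as a stand-in for the per-channel random-forest weak learners of Section 3.3 (any classifier accepting per-sample weights would do), all function and variable names are our own, and the scaling constants that the paper absorbs into α are handled implicitly.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def serboost_fit(X_l, y_l, X_u, p_prior, T=250, alpha=0.25, max_depth=2):
    """Minimal sketch of Algorithm 1 (SERBoost) for binary labels in {-1, +1}.

    X_l, y_l : labeled samples and labels; X_u : unlabeled samples;
    p_prior  : prior probabilities P_p(y = 1 | x) for the unlabeled samples.
    Returns the list of weak learners f_t; F(x) is their sum.
    """
    y_l = np.asarray(y_l, dtype=float)
    y_p = 2.0 * np.asarray(p_prior, dtype=float) - 1.0   # prior labels in [-1, 1]
    F_u = np.zeros(len(X_u))                  # ensemble scores on unlabeled data
    w_L = np.full(len(X_l), 1.0 / len(X_l))   # labeled sample weights
    w_P = np.full(len(X_u), 1.0 / len(X_u))   # prior-based unlabeled weights
    X_all = np.vstack([X_l, X_u])
    ensemble = []

    for _ in range(T):
        y_hat = y_p * np.cosh(F_u) - np.sinh(F_u)        # pseudo-labels, Eq. (15)
        w_U = np.abs(y_hat) * w_P                        # pseudo-weights
        w_L_n = w_L / w_L.sum()
        w_U_n = w_U / w_U.sum() if w_U.sum() > 0 else w_U
        # Weak learner approximately minimizing the weighted
        # mis-classification rate of Eq. (19).
        targets = np.concatenate([y_l, np.where(y_hat >= 0, 1.0, -1.0)])
        weights = np.concatenate([w_L_n, alpha * w_U_n])
        f_t = DecisionTreeClassifier(max_depth=max_depth).fit(
            X_all, targets, sample_weight=weights)
        ensemble.append(f_t)
        # Update the ensemble scores and the sample weights (steps 9-10).
        F_u += f_t.predict(X_u)
        w_L *= np.exp(-y_l * f_t.predict(X_l))
        w_P *= np.exp(-y_p * f_t.predict(X_u))
    return ensemble
```

The final score on a new sample is simply the sum of the weak-learner predictions, which can then be mapped to a posterior via Eq. (5).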
Maximum Entropy. This is the simplest prior one can think of: $\forall x, y: P_p(y|x) = 0.5$. It is the maximum entropy prior, which requires no knowledge of the underlying problem. By using this prior, we can study the effect of our margin regularizer term in Eq. (10).
Knowledge Transfer. In a knowledge transfer scenario, we have a previously estimated model and, with minimal supervision effort, we would like to include its knowledge when training a new model (e.g., [13]). To show how this procedure can be incorporated into our framework, we train another classifier over the labeled data set and use its predictions over the unlabeled samples as priors.
3 Experiments
3.1 Data Sets and Evaluation Methodology
We test the performance of our method on the challenging object category recognition data sets of Pascal Visual Object Class Challenge 2006 [22]. This dataset consists of 2615 training and 2686 test images coming from 10 different
categories. In our experiments, we use a one-vs-rest binary classification strategy for the multi-class problem. In order to observe the effect of including the unlabeled data in the learning process of our boosting algorithm, we randomly partition the training set into two disjoint sets of labeled and unlabeled samples. The size of the labeled partition is set to r = 0.01, 0.05, 0.1, 0.25, and 0.5 times the number of all training samples. We repeat the procedure of producing random partitions 10 times and report the average area under the curve (AUC) for each model described in Section 3.3.
3.2 Feature Extraction
For feature extraction, we use a bag-of-words model which is partially similar to those of the top-ranked participants of the Pascal challenge in 2006 [22]. We first extract three sets of interest points with complementary behaviours: Harris-Laplacian (HL) points [23] for corner-like regions, Difference of Gaussians (DoG) points [24] for blob-like regions, and a regular dense sampling (Reg) with a grid size of 8 pixels. Then we use SIFT [24] to describe these regions. For the dense sampling method, we apply the SIFT descriptor to patches with multiple scales of 8, 16, 24, and 32 pixels. Following [17], we form three channels: HL-SIFT, DoG-SIFT, and Reg-SIFT. For each channel, we find a class-specific visual vocabulary by randomly selecting 50000 interest regions from 10 training images of the target class and forming 100 cluster centers using the k-means method. The final vocabulary is the concatenation of all class-specific cluster centers. Afterwards, each interest point descriptor is assigned to its closest cluster center. We use normalized 2-level spatial pyramids [25] to represent each image, and in this way we construct two channels, L0 and L1, from each of the HL-SIFT, DoG-SIFT, and Reg-SIFT channels. Finally, we concatenate the image descriptors of the six channels to create our feature space. Hence, the dimension of the feature space is 15000 for VOC2006. As a preprocessing step, we normalize each sample to have unit L1-norm.
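Purely as an illustration of this representation (the detector choices, vocabulary construction and per-channel concatenation are omitted, and all names are ours), the following sketch builds the L0/L1 spatial-pyramid histogram of one image for one channel from precomputed local descriptors and a fixed vocabulary.

```python
import numpy as np

def pyramid_bow(descriptors, positions, vocab, img_w, img_h):
    """L0/L1 spatial-pyramid bag-of-words descriptor for one image channel.

    descriptors : (N, 128) local descriptors (e.g. SIFT)
    positions   : (N, 2) x, y keypoint locations
    vocab       : (K, 128) visual words (e.g. k-means cluster centers)
    """
    # Assign each descriptor to its nearest visual word.
    d2 = ((descriptors[:, None, :] - vocab[None, :, :]) ** 2).sum(axis=-1)
    words = d2.argmin(axis=1)
    K = len(vocab)

    hists = [np.bincount(words, minlength=K)]            # L0: whole image
    for qx in range(2):                                  # L1: 2 x 2 grid
        for qy in range(2):
            in_cell = (np.minimum(positions[:, 0] * 2 // img_w, 1) == qx) & \
                      (np.minimum(positions[:, 1] * 2 // img_h, 1) == qy)
            hists.append(np.bincount(words[in_cell], minlength=K))
    h = np.concatenate(hists).astype(float)
    return h / max(h.sum(), 1.0)                         # unit L1 norm
```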
3.3 Models
We compare the performance of the following supervised and semi-supervised classification models:
χ2-SVM. This model is a popular classifier for the bag-of-words approach, with excellent performance in object categorization problems [17,18,19]. The feature kernel is constructed from a combination of χ2 distances between each level of the spatial pyramids, with the same weightings suggested in [25]:
$$K_F(x_i, x_k) = \sum_{l=0}^{1} e^{-d_l(x_i, x_k)/\sigma_l} \tag{20}$$
where $d_l$ is the χ2 distance and $\sigma_l$ is the average distance of the $l$-th level, respectively. The LibSVM package [26] is utilized for training and testing the SVMs.
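A possible implementation of the kernel in Eq. (20) is sketched below (our own helper names, not the LibSVM interface). It uses the standard symmetric χ2 distance and, as in the text, sets σ_l to the average distance of level l; in practice σ_l would be estimated on the training set once and reused for the test kernel.

```python
import numpy as np

def chi2_dist(h1, h2, eps=1e-10):
    """Symmetric chi-square distance between two histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def chi2_kernel(levels_a, levels_b):
    """K_F of Eq. (20): sum over pyramid levels of exp(-d_l / sigma_l).

    levels_a, levels_b : lists [H_0, H_1], one (n_samples, dim_l) histogram
    matrix per pyramid level, for the two image sets.
    """
    K = 0.0
    for Ha, Hb in zip(levels_a, levels_b):
        D = np.array([[chi2_dist(a, b) for b in Hb] for a in Ha])
        sigma = D.mean()                    # average distance of level l
        K = K + np.exp(-D / sigma)
    return K                                # use as a precomputed SVM kernel
```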
It should be noted that, after a few cross-validation experiments, we fix the hinge loss parameter C of the SVMs to 5 in all experiments, as it gives equally good performance for all classes.
Lin-SVM. From a computational complexity point of view, the χ2-SVM model is the heaviest among all methods we study in this paper. In order to provide a similar model which behaves better with respect to the number of samples, we also use a linear SVM by utilizing the LibLinear [27] software package. We also set the hinge loss parameter C to 5 by conducting a 5-fold cross-validation for this model.
TSVM. We also compare the performance of our model with Transductive Support Vector Machines (TSVM) [15], a popular semi-supervised formulation of SVMs. For this model, we use the SVMLight [15] package. Due to its computational cost, we use the same parameter settings as for the Lin-SVM model.
Random Forest. The Random Forest (RF) classifier [16] is a collection of binary decision trees. These trees are grown separately and, by introducing randomness at different levels of their learning process, one obtains an ensemble of trees which collectively has a reasonable performance. Due to their efficiency and fast training/testing characteristics, RFs are gaining more attention in the vision community too (e.g., [28,29]). Following the original idea of Breiman [16], we grow the trees to a maximum depth without pruning by computing a set of random tests at each decision node. For each decision node, we first select a number of features randomly, and then randomly select a few linear hyperplanes constructed from these features. Afterwards, we select the best test according to its Gini index. Bosch et al. [29] used 100 deep trees with a depth of 20, computed a considerably large number of random tests, and performed a random selection of different channels for each test. It should be noted that decision trees are not efficient with respect to their depth, both from a memory and a computational complexity point of view; additionally, in semi-supervised experiments, the number of labeled samples could be too low to create deep trees. Therefore, we grow shallow trees with maximum depth 2, use 10 random hypotheses, and construct a random forest separately for each feature channel, averaging their predictions. As a result, one of the main benefits of our approach is that we can effortlessly create huge forests, with as many as 10000 trees in our experiments.
GentleBoost. If we ignore the unlabeled part of our algorithm (or set α to zero), we end up with the GentleBoost method of Friedman et al. [14]. As a result, a comparison of our method with GentleBoost enables us to study the effect of including the unlabeled data in the learning process of boosting. As weak learners, we use small random forests with 40 shallow trees grown exactly as specified previously. The only difference is that at each stage of boosting, we construct a separate RF for each feature channel and, instead of averaging their results, we let the boosting select the best one. We iterate the GentleBoost
Table 1. First row: the average AUC for the χ2-SVM, Lin-SVM, Random Forest (RF), GentleBoost (GB) models, and the winner of VOC2006 (QMUL LSPCH). Second row: the average computation time for each model in minutes.

Method   χ2-SVM   Lin-SVM   RF                GB                QMUL LSPCH
AUC      0.9243   0.8911    0.8456 ± 0.0025   0.8978 ± 0.0012   0.936
Time     885      82        98                116               -
algorithm for 250 iterations, as with our implementation this gives a computation time comparable to the Lin-SVM model.
SERBoost. The same settings as for GentleBoost are used for training the weak learners of the SERBoost algorithm. We also iterate SERBoost for 250 rounds. In order to simulate knowledge transfer, we use the predictions of the χ2-SVM model over the unseen unlabeled training data (the χ2-SVM is trained only on the labeled partition). One should note that this is just one example of how other classifiers can contribute to the learning process of SERBoost; from an application point of view, the same principle can be applied to other knowledge transfer scenarios.
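The knowledge-transfer prior described above can be emulated around any classifier that outputs class probabilities. The sketch below uses scikit-learn's SVC with a precomputed kernel as a stand-in for the LibSVM-based χ2-SVM and feeds the resulting probabilities to the hypothetical serboost_fit routine sketched after Algorithm 1; it only illustrates the data flow, not the authors' exact setup.

```python
from sklearn.svm import SVC

def knowledge_transfer_prior(K_ll, K_ul, y_l, C=5.0):
    """Train a kernel SVM on the labeled partition and return P_p(y=1|x)
    for the unlabeled samples, to be used as the SERBoost prior.

    K_ll : precomputed kernel between labeled samples (e.g. chi2_kernel)
    K_ul : kernel between unlabeled and labeled samples
    y_l  : labels in {-1, +1}
    """
    svm = SVC(kernel='precomputed', C=C, probability=True).fit(K_ll, y_l)
    pos = list(svm.classes_).index(1)       # column of the positive class
    return svm.predict_proba(K_ul)[:, pos]

# p_prior = knowledge_transfer_prior(K_ll, K_ul, y_l)
# ensemble = serboost_fit(X_l, y_l, X_u, p_prior, alpha=0.25)
```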
3.4 Results
Fully Supervised Methods. Table 1 shows the performance of the supervised models, χ2-SVM, Lin-SVM, Random Forest (RF), and GentleBoost (GB), trained over the full training set (r = 1), together with the average computation time. We also provide the performance of the winner of the VOC2006 challenge [22] as a reference. Comparing the performance of the different models, it is clear that the χ2-SVM produces the best result, followed by Lin-SVM and GentleBoost, while the Random Forest does not seem to be competitive. However, looking at the timings, the χ2-SVM has a considerably larger computational burden compared to all other methods. It should be noted that for the Random Forest, GentleBoost, and SERBoost methods we use our naive and unoptimized C++ implementation, while the packages used for χ2-SVM, Lin-SVM, and TSVM are heavily optimized for computation speed. Figure 1(a) shows the performance of the fully supervised models with respect to the ratio of labeled samples in the training set. As expected, the χ2-SVM produces the best performance, at the price of heavier computations. It is also clear that GentleBoost usually has a better or comparable performance compared to Lin-SVM.
Maximum Entropy Prior. We now turn our attention to the behaviour of the TSVM and our SERBoost models. Figure 2 shows the performance of SERBoost for two different values of the unlabeled loss parameter, α = 0.1 and 0.25, when the maximum entropy prior is used. This figure also shows the performance of TSVM and the performance of χ2-SVM and GentleBoost from Figure 1(a) as
Fig. 1. The performance (a) and computation times (b) of the χ2 -SVM, Lin-SVM, Random Forest (RF), and GentleBoost (GB) models with respect to the ratio of the labeled samples in training set.
Fig. 2. The performance of SERBoost (SB) with the maximum entropy prior (ME) for two different values of the unlabeled loss parameter: (a) α = 0.1, (b) α = 0.25
references. The first notable observation is that SERBoost performs better than or comparably to the χ2-SVM even when no prior knowledge is included in its learning process. As a matter of fact, SERBoost outperforms the χ2-SVM when the number of labeled images is very low; as we continue to add more labels their performances become very close, and eventually, after approximately r = 0.5, the χ2-SVM starts to perform better. It is also clear that TSVM is competitive neither in terms of performance nor in terms of computation time, requiring 518 minutes for a single run. It should be noted that SERBoost has on average a 14-minute computation overhead compared to GentleBoost.
Comparison to GentleBoost. From Figure 2, it is also clear that SERBoost is much better than its fully supervised counterpart, GentleBoost. In order to investigate this further, we conducted another set of experiments where we duplicated the full training set and used the first half as the labeled set and the second half as the unlabeled partition. We applied SERBoost with the maximum
Table 2. The performance comparison of GentleBoost (GB) and SERBoost (SB) when trained on the full labeled training set

Method   GB                SB, α = 0.01      SB, α = 0.1       SB, α = 0.25
AUC      0.8978 ± 0.0012   0.9125 ± 0.0014   0.9153 ± 0.0016   0.914 ± 0.0015
Fig. 3. The performance of SERBoost (SB) with the knowledge transfer prior (KT) for two different values of the unlabeled loss parameter: (a) α = 0.1, (b) α = 0.25
entropy prior to this data set and report the results in Table 2. One can clearly observe that, even when using the full training set and no additional information, SERBoost outperforms GentleBoost by a clear margin in terms of AUC. It is also interesting to see that with this simple approach SERBoost comes closer to the results of the winner of VOC2006.
Knowledge Transfer Prior. Since our method provides a principled way of including prior knowledge as an additional source of information, we conduct experiments by training the χ2-SVM over the labeled partition and using its predictions over the unlabeled partition as priors for SERBoost. The results are shown in Figure 3 for two different values of α. When the number of labeled samples is low, the predictions of the prior are wrong most of the time, and therefore the performance of SERBoost is inferior to that obtained with the maximum entropy prior. However, as the χ2-SVM starts to produce more reliable predictions, our method also starts to improve its performance. As can be seen, moving towards larger labeled sets, our method utilizes the priors well and outperforms the maximum entropy based model.
Hyperparameter Sensitivity. Another notable aspect is the robustness of SERBoost with respect to variations of α within a reasonable working range. If we compare the pairs of left and right plots in Figures 2 and 3, we can see that the performance changes smoothly as α varies. For example, in Figure 3 one can see that when the χ2-SVM predictions are not reliable (lower values of r), a smaller α results in a slight performance gain, while the situation is reversed when the χ2-SVM starts to operate reasonably. However, the overall change in performance is not significant.
4 Conclusion
In this paper, we derived a novel semi-supervised boosting method, called SERBoost, based on the principles of expectation regularization. This algorithm provides a principled way of including prior knowledge in the learning process of a model and naturally gives the resulting boosting margin regularizer a probabilistic interpretation in terms of maximum entropy learning. SERBoost scales very well with respect to the number of labeled and unlabeled samples, is very easy to implement, and is robust with respect to variations of its hyper-parameter. The experimental results show that SERBoost is able to exploit the obscured information hidden in unlabeled data and can easily benefit from prior knowledge or domain expertise. SERBoost is currently designed for binary classification tasks, and we plan to investigate the possibility of extending it to multi-class/multi-label problems.
References
1. Zhu, X.: Semi-supervised learning literature survey. Technical Report 1530, Computer Sciences, University of Wisconsin-Madison (2005)
2. Chapelle, O., Schölkopf, B., Zien, A. (eds.): Semi-Supervised Learning. MIT Press, Cambridge, MA (2006)
3. Buc, D.F., Grandvalet, Y., Ambroise, C.: Semi-supervised MarginBoost. In: Proc. of NIPS, pp. 553–560 (2002)
4. Bennett, K.P., Demiriz, A., Maclin, R.: Exploiting unlabeled data in ensemble methods. In: Proc. of KDD, pp. 289–296 (2002)
5. Mallapragada, P.K., Jin, R., Jain, A.K., Liu, Y.: SemiBoost: Boosting for semi-supervised learning. Technical report, Department of Computer Science, Michigan State University (2007)
6. Chen, K., Wang, S.: Regularized boost for semi-supervised learning. In: Proc. of NIPS (2008)
7. Cohen, I., Sebe, N., Cozman, F.G., Cirelo, M.C., Huang, T.: Learning Bayesian network classifiers for facial expression recognition using both labeled and unlabeled data. In: Proc. of CVPR, vol. 1, pp. 595–604 (2003)
8. Yao, J., Zhang, Z.: Semi-supervised learning based object detection in aerial imagery. In: Proc. of CVPR, vol. 1, pp. 1011–1016 (2005)
9. Leistner, C., Grabner, H., Bischof, H.: Semi-supervised boosting using visual similarity learning. In: Proc. IEEE Conf. on Computer Vision and Pattern Recognition (2008)
10. Mann, G.S., McCallum, A.: Simple, robust, scalable semi-supervised learning via expectation regularization. In: Proc. of ICML, pp. 593–600 (2007)
11. Mason, L., Baxter, J., Bartlett, P., Frean, M.: Functional gradient techniques for combining hypotheses. In: Advances in Large Margin Classifiers, pp. 221–247. MIT Press, Cambridge (1999)
12. Berger, A.L., Della Pietra, V.J., Della Pietra, S.A.: A maximum entropy approach to natural language processing. Comput. Linguist. 22, 39–71 (1996)
13. Schapire, R.E., Rochery, M., Rahim, M., Gupta, N.: Incorporating prior knowledge into boosting. In: Proc. of ICML (2002)
14. Friedman, J., Hastie, T., Tibshirani, R.: Additive logistic regression: a statistical view of boosting. The Annals of Statistics 38, 337–374 (2000)
15. Joachims, T.: Transductive inference for text classification using support vector machines. In: Proc. of ICML, pp. 200–209 (1999)
16. Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001)
17. Marszalek, M., Schmid, C.: Spatial weighting for bag-of-features. In: Proc. of CVPR, pp. 2118–2125 (2006)
18. Zhang, J., Marszalek, M., Lazebnik, S., Schmid, C.: Local features and kernels for classification of texture and object categories: A comprehensive study. IJCV 73, 213–238 (2007)
19. Bosch, A., Zisserman, A., Munoz, X.: Representing shape with a spatial pyramid kernel. In: Proc. of CIVR, pp. 401–408 (2007)
20. Freund, Y., Schapire, R.: Experiments with a new boosting algorithm. In: Proc. of ICML, pp. 148–156 (1996)
21. Jaynes, E.T.: Information theory and statistical mechanics. Physical Review 106, 620–630 (1957)
22. Everingham, M., Zisserman, A., Williams, C.K.I., Van Gool, L.: The PASCAL Visual Object Classes Challenge 2006 (VOC2006) results. Technical report (2006)
23. Mikolajczyk, K., Schmid, C.: Scale and affine invariant interest point detectors. IJCV 60, 63–86 (2004)
24. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. IJCV 60, 91–110 (2004)
25. Lazebnik, S., Schmid, C., Ponce, J.: Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In: Proc. of CVPR, pp. 2169–2178 (2006)
26. Chang, C.C., Lin, C.J.: LIBSVM: a library for support vector machines (2001), http://www.csie.ntu.edu.tw/~cjlin/libsvm
27. Lin, C.J., Weng, R.C., Keerthi, S.S.: Trust region Newton method for large-scale logistic regression. Technical report (2007)
28. Lepetit, V., Fua, P.: Keypoint recognition using randomized trees. PAMI 28, 1465–1479 (2006)
29. Bosch, A., Zisserman, A., Munoz, X.: Image classification using random forests and ferns. In: Proc. of ICCV (2007)
View Synthesis for Recognizing Unseen Poses of Object Classes
Silvio Savarese1 and Li Fei-Fei2
1 Department of Electrical Engineering, University of Michigan at Ann Arbor
2 Department of Computer Science, Princeton University
Abstract. An important task in object recognition is to enable algorithms to categorize objects under arbitrary poses in a cluttered 3D world. A recent paper by Savarese & Fei-Fei [1] has proposed a novel representation to model 3D object classes. In this representation stable parts of objects from one class are linked together to capture both the appearance and shape properties of the object class. We propose to extend this framework and improve the ability of the model to recognize poses that have not been seen in training. Inspired by works in single object view synthesis (e.g., Seitz & Dyer [2]), our new representation allows the model to synthesize novel views of an object class at recognition time. This mechanism is incorporated in a novel two-step algorithm that is able to classify objects under arbitrary and/or unseen poses. We compare our results on pose categorization with the model and dataset presented in [1]. In a second experiment, we collect a new, more challenging dataset of 8 object classes from crawling the web. In both experiments, our model shows competitive performances compared to [1] for classifying objects in unseen poses.
1 Introduction
An important goal in object recognition is to be able to recognize an object or an object category given an arbitrary view point. Humans can do this effortlessly under most conditions. Consider the search for your car in a crowded shopping center parking lot. We often need to look around 360 degrees in search of our vehicle. Similarly, this ability is crucial for a robust, intelligent visual recognition system. Fig. 1 illustrates the problem we would like to solve. Given an image containing some object(s), we want to 1) categorize the object as a car (or a stapler, or a computer mouse), and 2) estimate the pose (or view) of the car. Here by 'pose', we refer to the 3D information of the object that is defined by the viewing angle and scale of the object (i.e., a particular point on the viewing sphere represented in Fig. 6). If we have seen this pose at training time, and have a way of modeling such information, the problem is reduced to matching the known model with the new image. This is the approach followed by a number of existing works, where either each object class is assumed to be seen under a unique pose [3,4,5,6,7,8,9,10] or a class model is associated with a specific pose, giving rise to mixture models [11,12,13]. But it is not necessarily possible for an algorithm to have been trained with all views of the objects. In many situations,
Fig. 1. Categorize an Object Given an Unseen View. Example panels: car (azimuth = 200 deg, zenith = 30 deg), stapler (azimuth = 75 deg, zenith = 50 deg), mouse (azimuth = 60 deg, zenith = 70 deg). azimuth: [front, right, back, left] = [0, 90, 180, 270]°; zenith: [low, med., high] = [0, 45, 90]°
training is limited (either by the number of examples, or by the coverage of all possible poses of the object class); it is therefore important to be able to extrapolate information and make the best guess possible given this limitation. This is the approach we present in this paper. In image-based rendering, new view synthesis (morphing) has been an active and prolific area of research [14,15,16]. Seitz & Dyer [2] proposed a method to morph two observed views of an object into a new, unseen view using basic principles of projective geometry. Other researchers explored similar formulations [17,18] based on multi-view geometry [19] or extended these results to 3-view morphing techniques [20]. The key property of view-synthesis techniques is their ability to generate new views of an object without reconstructing its actual 3D model. It is unclear, however, whether these can be useful as is for recognizing unseen views of object categories under very general conditions: they were designed to work on single object instances (or at most 2), with no background clutter and with given feature correspondences across views (Figs. 6–10 of [2]). In our work we try to inherit the view-morphing machinery while generalizing it to the case of object categories. On the opposite side of the spectrum, several works have addressed the issue of single object recognition by modeling different degrees of 3D information. Again, since these methods achieve recognition by matching local features [21,22,23,24] or groups of local features [25,26] under rigid geometrical transformations, they can hardly be extended to handle object classes. Recently, a number of works have proposed interesting solutions for capturing the multi-view essence of an object category [1,27,28,29,30,31,32]. These techniques bridge the gap between models that represent an object category from just a single 2D view and models that represent single object instances from multiple views. Among these, [32] presents an interesting methodology for re-populating the number of views in training by augmenting the views with synthetic data. In [1] a framework was proposed in which stable parts of objects from one class are linked together to capture both the appearance and shape properties of the object class. Our work extends and simplifies the representation in [1]. Our critical contributions are:
– We propose a novel method for representing and synthesizing views of object classes that are not present in training. Our view-synthesis approach is inspired by previous research on view morphing and image synthesis from
multiple views. However, the main contribution of our approach is that the synthesis takes place at the categorical level as opposed to the single object level (as previously explored).
– We propose a new algorithm that takes advantage of our view-synthesis machinery for recognizing objects seen under arbitrary views. As opposed to [32], where training views are augmented by using synthetic data, we synthesize the views at recognition time. Our experimental analysis validates our theoretical findings and shows that our algorithm is able to successfully estimate object classes and poses under very challenging conditions.
2 Model Representation for Unseen Views
We start with an overview of the overall object category model [1] in Sec. 2.1 and give details of our new view synthesis analysis in Sec. 2.2.
2.1 Overview of the Savarese et al. Model [1]
Fig. 2 illustrates the main ideas of the model proposed by [1]. We use the car category as an example for an overview of the model. There are two main components of the model: the canonical parts and the linkage structure among the canonical parts. A canonical part P in the object class model refers to a region of the object that tends to occur frequently across different instances of the object class (e.g., the rear bumper of a car). It is automatically determined by the model. The canonical parts are regions containing multiple features in the images, and are the building blocks of the model. As previous research has shown, a part-based representation [26,28,29] is more stable for capturing the appearance variability across instances of objects. A critical property introduced in [1] is that the canonical part retains the appearance of a region that is viewed most frontally on the object. In other words, a car's rear bumper could render different appearances under different geometric transformations as the observer moves around the viewing sphere (see [1] for details). The canonical part representation of the car rear bumper is the one that is viewed the most frontally (Fig. 2(a)). Given an assortment of canonical parts (e.g., the colored patches in Fig. 2(b)), a linkage structure connects each pair of canonical parts {Pj, Pi} if they can both be visible at the same time (Fig. 2(c)). The linkage captures the relative position (represented by the 2 × 1 vector tij) and the change of pose of a canonical part given the other (represented by a 2 × 2 homographic transformation Aij). If the two canonical parts share the same pose, then the linkage is simply the translation vector tij (since Aij = I). For example, given that part Pi (left rear light) is canonical, the pose (and appearance) of all connected canonical parts must change according to the transformation imposed by Aij for j = 1 · · · N, j ≠ i, where N is the total number of parts connected to Pi. This transformation is depicted in Fig. 2(c) by showing a slanted version of each canonical part (for details of the model, the reader may refer to [1]). We define a canonical view V as the collection of canonical parts that share the same view V (Fig. 2(c)). Thus, each pair of canonical parts {Pi, Pj} within
Fig. 2. Model Summary. Panel a: A car within the viewing sphere. As the observer moves on the viewing sphere, the same part produces different appearances. The location on the viewing sphere where the part is viewed most frontally gives rise to a canonical part. The appearance of such a canonical part is highlighted in green. Panel b: Colored markers indicate the locations of other canonical parts. Panel c: Canonical parts are connected together in a linkage structure. The linkage indicates the relative position and change of pose of a canonical part given the other (if they are both visible at the same time). This change of location and pose is represented by a translation vector and a homographic transformation, respectively. The homographic transformation between canonical parts is illustrated by showing that some canonical parts are slanted with respect to others. A collection of canonical parts that share the same view defines a canonical view (for instance, see the canonical parts enclosed in the area highlighted in yellow).
V is connected by Aij = I and a translation vector tij. We can interpret a canonical view V as a subset of the overall linkage structure of the object category. Notice that, by construction, a canonical view may coincide with one of the object category poses used in learning. However, not all the poses used in learning will be associated with a canonical view V. The reason is that a canonical view is a collection of canonical parts, and each canonical part summarizes the appearance variability of an object category part under different poses. The relationship of parts within the same canonical view is what the previous literature has extensively used for representing 2D object categories from single 2D views (e.g., the constellation models [4,6]). The linkage structure can be interpreted as its generalization to the multi-view case. Similarly to other methods based on constellations of features or parts, the linkage structure of canonical parts is robust to occlusions and background clutter.
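To fix ideas, the linkage structure can be held in a small graph-like data structure such as the following sketch (our own naming, not the authors' code): each canonical part stores its quadrangle B and appearance histogram h, and each link stores the relative translation t_ij and the part-level transformation A_ij.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple
import numpy as np

@dataclass
class CanonicalPart:
    B: np.ndarray        # 2 x 4 corner coordinates of the part quadrangle
    h: np.ndarray        # M-dim histogram of vector-quantized descriptors
    view_id: int         # canonical view this part belongs to

    @property
    def centroid(self) -> np.ndarray:
        return self.B.mean(axis=1)          # c_i = (1/4) * sum_k b_k

@dataclass
class ObjectClassModel:
    parts: Dict[int, CanonicalPart] = field(default_factory=dict)
    # links[(i, j)] = (A_ij, t_ij): change of pose and relative position of
    # part j given part i; present only if the two parts can be co-visible.
    links: Dict[Tuple[int, int], Tuple[np.ndarray, np.ndarray]] = field(default_factory=dict)

    def canonical_view(self, view_id: int) -> Dict[int, CanonicalPart]:
        """All canonical parts sharing the same canonical view."""
        return {i: p for i, p in self.parts.items() if p.view_id == view_id}
```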
2.2 Representing an Unseen View
The critical question is: how can we represent (synthesize) a novel non-canonical view from the set of canonical views contained in the linkage structure? As we will show in Sec. 3, this ability becomes crucial if we want to recognize an object category seen under an arbitrary pose. Our approach is inspired by previous research on view morphing and image synthesis from multiple views. We show that it is possible to use a similar machinery for synthesizing appearance, pose
and position of canonical parts from two or more canonical views. Notice that the output of this representation (synthesis) is a novel view of the object category, not just a novel view of a single object instance, whereas all previous morphing techniques are used for synthesizing novel views of single objects.
Representing Canonical Parts. In [1], each canonical part is represented by a distribution of feature descriptors along with their x, y locations within the part. In our work, we simplify this representation and describe a canonical part P by a convex quadrangle B (e.g., the bounding box) enclosing the set of features. The appearance of this part is then characterized by a bag-of-codewords model [5], that is, a normalized histogram h of vector-quantized descriptors contained in B. Our choice of feature detectors and descriptors is the same as in [1]. A standard k-means algorithm can be used for extracting the codewords. B is a 2 × 4 matrix encoding the b = [x, y]^T coordinates of the four corners of the quadrangle, i.e., B = [b1 . . . b4]; h is an M × 1 vector, where M is the size of the vocabulary of the vector-quantized descriptors. Given a linked pair of canonical parts {Pi, Pj} and their corresponding {Bi, Bj}, the relative position of the parts {Pi, Pj} is defined by tij = ci − cj, where the centroid ci = (1/4) Σk bk; the relative change of pose is defined by Aij, which encodes the homographic transformation acting on the coordinates of Bi. This simplification is crucial for allowing more flexibility in handling the synthesis of novel non-canonical views at the categorical level.
View Morphing. Given two views of a 3D object, it is possible to synthesize a novel view by using view-interpolating techniques without reconstructing the 3D object shape. It has been shown that a simple linear image interpolation (or appearance morphing) between views does not convey a correct 3D rigid shape transformation unless the views are parallel (that is, the camera moves parallel to the image planes) [15]. Moreover, Seitz & Dyer [2] have shown that if the camera projection matrices are known, then a geometrical-morphing technique can be used to synthesize a new view even without having parallel views. However, estimating the camera projection matrices for an object category may be very difficult in practice. We notice that, under the assumption of having the views in a neighborhood on the viewing sphere, the cameras can be approximated as being parallel, enabling a simple linear interpolation scheme (Fig. 3). Next we show that by combining appearance and geometrical morphing it is possible to synthesize a novel view (meant as a collection of parts along with their linkage) from two or more canonical views.
Two-View Synthesis. We start with the simpler case of synthesizing from two canonical views V^n and V^m. A synthesized view V^s can be expressed as a collection of linked parts morphed from the corresponding canonical parts belonging to V^n and V^m. Specifically, a pair of linked parts {P_i^s, P_j^s} ∈ V^s can be synthesized from the pair {P_i^n ∈ V^n, P_j^m ∈ V^m} if and only if P_i^n and P_j^m are linked by a homographic transformation A_ij ≠ I (Fig. 3). If we represent {P_i^s, P_j^s} by the quadrangles {B_i^s, B_j^s} and the histograms {h_i^s, h_j^s}, respectively, a new view is expressed by:
Fig. 3. View Synthesis. Left: If the views are in a neighborhood on the viewing sphere, the cameras can be approximated as being parallel, enabling a linear interpolation scheme. Middle: 2-view synthesis: a pair of linked parts {P_i^s, P_j^s} ∈ V^s is synthesized from the pair P_i^n ∈ V^n and P_j^m ∈ V^m if and only if P_i^n and P_j^m are linked by a homographic transformation A_ij ≠ I. Right: 3-view synthesis can take place anywhere within the triangular area defined by the 3 views.
$$B_i^s = (1-s)\,B_i^n + s\,A_{ij}B_i^n; \qquad B_j^s = s\,B_j^m + (1-s)\,A_{ji}B_j^m; \tag{1}$$
$$h_i^s = (1-s)\,h_i^n + s\,h_i^m; \qquad h_j^s = s\,h_j^n + (1-s)\,h_j^m. \tag{2}$$
The relative position between $\{P_i^s, P_j^s\}$ is represented as the difference $t_{ij}^s$ of the centroids of $B_i^s$ and $B_j^s$. $t_{ij}^s$ may be synthesized as follows:
$$t_{ij}^s = (1-s)\,t_{ij}^n + s\,t_{ij}^m \tag{3}$$
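Under the parallel-camera approximation, the two-view synthesis of Eqs. (1)-(3) is a plain linear interpolation. The NumPy sketch below shows it for one linked pair of parts; the function name and argument layout are ours, not the authors'.

```python
import numpy as np

def synthesize_pair(B_i_n, B_j_m, h_i_n, h_i_m, h_j_n, h_j_m,
                    A_ij, A_ji, t_ij_n, t_ij_m, s):
    """Morph a linked pair of canonical parts into the view V^s, s in [0, 1]."""
    B_i_s = (1.0 - s) * B_i_n + s * (A_ij @ B_i_n)        # Eq. (1)
    B_j_s = s * B_j_m + (1.0 - s) * (A_ji @ B_j_m)
    h_i_s = (1.0 - s) * h_i_n + s * h_i_m                 # Eq. (2)
    h_j_s = s * h_j_n + (1.0 - s) * h_j_m
    t_ij_s = (1.0 - s) * t_ij_n + s * t_ij_m              # Eq. (3)
    return B_i_s, B_j_s, h_i_s, h_j_s, t_ij_s
```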
In summary, Eqs. (1) and (3) regulate the synthesis of the linkage structure between the pair $\{P_i^s, P_j^s\}$, whereas Eq. (2) regulates the synthesis of their appearance components. By synthesizing parts for all possible values of i and j we obtain a set of linked parts which gives rise to a new view $V^s$ between the two canonical views $V^n$ and $V^m$. Since all canonical parts in $V^n$ and $V^m$ (and their linkage structures) are represented at the categorical level, this property is inherited by the new parts $\{P_i^s, P_j^s\}$, and thus by $V^s$.
Three-View Synthesis. One limitation of the interpolation scheme described above is that a new view can be synthesized only if it lies on the linear camera trajectory from one view to the other. By using a bilinear interpolation we can extend this to a novel view synthesized from 3 canonical views. The synthesis can take place anywhere within the triangular area defined by the 3 views (Fig. 3) and is regulated by two interpolating parameters s and t. Similarly to the 2-view case, 3-view synthesis can be carried out if and only if there exist 3 canonical parts $P_i^n \in V^n$, $P_j^m \in V^m$, and $P_k^q \in V^q$ which are pairwise linked by homographic transformations $A_{ij} \neq I$, $A_{ik} \neq I$ and $A_{jk} \neq I$. The relevant quantities can be synthesized as follows:
$$B_i^{st} = \begin{bmatrix} (1-s)I & sI \end{bmatrix} \begin{bmatrix} B_i^n & A_{ik}B_i^n \\ A_{ij}B_i^n & A_{ik}A_{ij}B_i^n \end{bmatrix} \begin{bmatrix} (1-t)I \\ tI \end{bmatrix} \tag{4}$$
$$h_i^{st} = \begin{bmatrix} (1-s)I & sI \end{bmatrix} \begin{bmatrix} h_i^n & h_i^q \\ h_i^m & h_i^p \end{bmatrix} \begin{bmatrix} (1-t)I \\ tI \end{bmatrix} \tag{5}$$
$$t_{ij}^{st} = \begin{bmatrix} (1-s)I & sI \end{bmatrix} \begin{bmatrix} t_{ij}^n & t_{ik}^q \\ t_{ij}^m & t_{ij}^m + t_{ik}^q - t_{ij}^n \end{bmatrix} \begin{bmatrix} (1-t)I \\ tI \end{bmatrix} \tag{6}$$
Analogous equations can be written for the remaining indices.
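The three-view case only replaces the linear weights with bilinear ones. A compact sketch of Eq. (4) for the part geometry is given below (again just an illustration, with our own naming); the histograms and relative positions of Eqs. (5)-(6) follow the same pattern.

```python
import numpy as np

def synthesize_quadrangle_3view(B_i_n, A_ij, A_ik, s, t):
    """Bilinear synthesis of a part quadrangle, cf. Eq. (4)."""
    corners = [[B_i_n,        A_ik @ B_i_n],
               [A_ij @ B_i_n, A_ik @ (A_ij @ B_i_n)]]
    w_s, w_t = (1.0 - s, s), (1.0 - t, t)
    return sum(w_s[a] * w_t[b] * corners[a][b]
               for a in range(2) for b in range(2))
```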
3 Recognizing Object Class in Unseen Views
Sec. 2.2 has outlined all the critical ingredients of the model for representing and synthesizing new views. We discuss here an algorithm for recognizing the pose and
Algorithm step 1
1. I ← list of parts extracted from test image
2. for each model C
3.   for each canonical view V ∈ C
4.     [R(n), V*(n)] ← MatchView(V, C, I); % return similarity R
5.     n++;
6. L ← KMinIndex(R) % return shortlist L

MatchView(V, C, I)
1. for each canonical part P ∈ V
2.   M(p) ← MatchKPart(P, I); % return K best matches
3.   p++;
4. for each canonical part P̄ ∈ C linked to V
5.   M̄(q) ← MatchKPart(P̄, I); % return K best matches
6.   q++;
7. [M*, M̄*] ← Optimize(V, M, M̄);
8. V* ← GenerateTestView(M*, M̄*, I);
9. R ← Distance(V, V*);
10. return R, V*;

Fig. 4. Pseudocode of the step 1 algorithm. MatchView(V, C, I) returns the similarity score between V and I. KMinIndex() returns pointers to the K smallest values of the input list. MatchKPart(P, I) returns the best K candidate matches between P and I. A match is computed by taking into account the appearance similarity Sa between two parts. Sa is computed as the distance between the histograms of vector-quantized features contained in the corresponding parts' quadrangles B. Optimize(V, M, M̄) optimizes over all the matches and returns the best set of matches M*, M̄* from the candidate matches in M, M̄. The selection is carried out by jointly minimizing the overall appearance similarity Sa (computed over the candidate matches) and the geometrical similarity Sg (computed over pairs of candidate matches). Sg is computed by measuring the distance between the relative positions tij, t̄ij. GenerateTestView(M*, M̄*, I) returns a linkage structure of parts (B, appearances h and relative positions t) given M*, M̄*. This gives rise to the estimated matched view V* in the test image. Distance(Vi, Vj) returns an estimate of the overall combined appearance and geometrical similarity Sa + Sg between the linkage structures associated with Vi, Vj. Sa is computed as in MatchKPart over all the parts. Sg is computed as the geometric distortion between the two corresponding linkage structures.
Algorithm step 2
1. for each canonical view V ∈ L
2.   V* ← L(l)
3.   V′ ← FindClosestView(V, C);
4.   V″ ← FindSecondClosestView(V, C);
5.   for each 2-view synthesis parameter s
6.     V^s ← 2-ViewSynthesis(V, V′, s);
7.     R(s) ← Distance(V^s, V*);
8.   for each pair of 3-view synthesis parameters s and t
9.     V^{s,t} ← 3-ViewSynthesis(V, V′, V″, s, t);
10.    R(s, t) ← Distance(V^{s,t}, V*);
11.  L(l) ← Min(R);
12.  l++;
13. [Cw, Vw] ← MinIndex(L);

Fig. 5. Pseudocode of the step 2 algorithm. FindClosestView(V, C) (FindSecondClosestView(V, C)) returns the closest (second closest) canonical pose on the viewing sphere. 2-ViewSynthesis(V, V′, s) returns a synthesized view between the two views V, V′ based on the interpolating parameter s. 3-ViewSynthesis(V, V′, V″, s, t) is the equivalent function for three-view synthesis. Cw and Vw are the winning category and pose, respectively.
categorical membership of a query object seen under an arbitrary viewpoint. We consider a two-step recognition procedure. The first step is a modified version of [1]. The output of this algorithm is a short list of the K best model views across all views and all categories. The second step is a novel algorithm that refines the error scores of the short list by using the view-synthesis scheme.
3.1 A Two-Step Algorithm
In the first step (Fig. 4), we want to match the query image with the best object class model and pose. For each model, we find hypotheses of canonical parts consistent with a certain canonical view of an object model. Given such canonical parts, we infer the appearance, pose and position of other parts that are not seen in their canonical view (MatchView function). This information is encoded in the object class linkage structure. An optimization process finds the best combination of hypotheses over appearance and geometrical similarity (Optimize). The output is a similarity score as well as a set of matched parts and their linkage structure (the estimated matched view V*) in the test image. The operation is repeated for all possible canonical views and for all object class models. Finally, we create a short list of the N best canonical views across all the model categories, ranked according to their similarity (error) scores. Each canonical view is associated with its own class model label. The complexity of step 1 is O(N² N_v N_c), where N is the total number of canonical parts (typically 200–500), N_v the number of views per model, and N_c the number of models.
In the second step (Fig. 5), we use the view synthesis scheme (Sec. 2.2) to select the final winning category and pose from the short list. The idea is to consider a canonical view from the short list, pick up the nearest (or two nearest) canonical pose(s) on the corresponding model viewing sphere (FindClosestView and FindSecondClosestView ), and synthesize the intermediate views according to the 2-view-synthesis (or 3-view-synthesis) procedure for a number of values of s (s, t) (2-ViewSynthesis and 3-ViewSynthesis). For each synthesized view, the similarity score is recomputed and the minimum value is retained. We repeat this procedure for each canonical view in the short list. The canonical view associated with the lowest score gives the winning pose and class label. The complexity of step-2 is just O(Nl Ns ), where Nl is the size of the short list and Ns is the number of interpolating steps (typically, 5–20).
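The second step is essentially a small search over the interpolation parameter(s). The following sketch mirrors Fig. 5 for the two-view case; distance and synthesize_view are hypothetical helpers standing in for the appearance-plus-geometry score of Fig. 4 and the synthesis of Sec. 2.2, and the field names on the shortlist entries are ours.

```python
import numpy as np

def refine_shortlist(shortlist, distance, synthesize_view, n_steps=10):
    """Step-2 refinement: re-score each shortlist entry with synthesized views.

    Each shortlist entry is assumed to carry:
      V_star      -- the linkage structure matched in the test image,
      V, V_prime  -- the matched canonical view and its closest neighbour,
      model_label -- the class label of the corresponding object model.
    """
    best_label, best_score = None, np.inf
    for entry in shortlist:
        scores = [distance(synthesize_view(entry.V, entry.V_prime, s), entry.V_star)
                  for s in np.linspace(0.0, 1.0, n_steps)]
        if min(scores) < best_score:
            best_score, best_label = min(scores), entry.model_label
    return best_label, best_score
```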
4 Experiments and Results
In this section, we show that our algorithm is able to successfully recognize an object class viewed under a pose that is not seen during training. In addition to classification, we also measure the accuracy in pose estimation of an object.
4.1 Experiment I: Comparison with [1]
In the first set of experiments we compare the performance of our algorithm with that reported in [1]. We use the same dataset as in [1,33] and the same learning and testing methodology. The dataset comprises images of 8 different object categories, each containing 10 different instances. Each of these is photographed under a range of poses, described by a pair of azimuth and zenith angles (i.e., the angular coordinates of the observer on the viewing sphere, Fig. 6) and a distance (or scale). The total number of angular poses in this dataset is 24: 8 azimuth angles and 3 zenith angles. Each pose coordinate is kept identical across instances and categories. Thus, the number and type of poses in the test set are the same as in the training set. The data set is split into a training and a test set as in [1]. To assess the ability of our algorithm to recognize unseen views, we train both the model in [1] and ours using a reduced set of poses in training.
Fig. 6. Left: An object pose is represented by a pair of azimuth and zenith angles. Right: Some of the unseen poses tested during our recognition experiments (Fig. 7).
Fig. 7. Left: Performance of our model (red) and of the Savarese et al. model [1] (blue) as a function of the number of views used in training. Note that the performances shown here are testing performances, obtained by averaging over all 24 testing poses. Middle: Confusion table results obtained by the Savarese et al. model [1] for 8 object classes on a sample of 8 unseen views only (dataset [33]); average accuracy on the 8 unseen views = 46.80%. Right: Confusion table results obtained by our model under the same conditions; average accuracy on the 8 unseen views = 64.78%.
Fig. 8. Left: Confusion table results obtained by [1] for 8 object classes (dataset [35]); average accuracy = 60.5%. Middle: Confusion table results obtained by our model under the same conditions; average accuracy = 72.3%. Right: Performance improvement achieved by our model over [1] for each category.
The reduced set is obtained by randomly removing poses from the original training set. This was done while making sure that no more than one view is removed from any quadruplet of adjacent poses on the viewing sphere¹. The number of poses used in testing is kept constant (to be more specific, all 24 views are used in this case). This means some of the views used in testing have not been presented during training. Fig. 7 illustrates the performance of the two models as a function of the number of views used in training. The plots show that our model systematically outperforms that of [1]. However, notice that the added accuracy becomes negligible as the number of views in training approaches 24. In other words, when no views are missing in training, the performance of the model used in [1] approaches that of our model. For a baseline comparison with a pure bag-of-words model the reader can refer to [1]. Fig. 7 (middle, right) compares the confusion table results obtained by our model and by that of [1] for 8 object classes on a sample of 8 unseen views only.
¹ We have found experimentally that this condition is required to guarantee there are sufficient views for successfully constructing the linkage structure for each class.
Fig. 9. Estimated pose for each object that was correctly classified by our algorithm. Each row shows two test examples (the colored images in column 3 and column 6) from the same object category. For each test image, we report the estimated location of the object (red bounding box) and the estimated view-synthesis parameter s. s gives an estimate of the pose as it describes the interpolating factor between the two closest model (canonical) views selected by our recognition algorithm. For visualization purposes we illustrate these model views by showing the corresponding training images (columns 1-2 and 4-5). (This figure is best viewed in color with PDF magnification.)
4.2 Experiment II: A New Testing Dataset
In this experiment we test our algorithm on a much more challenging test set, which we collected for two reasons. First, we would like to test our models on recognizing objects under arbitrary poses. Second, the dataset in [1] was collected under relatively controlled settings: while training and testing images are well separated, the background, lighting and cameras used for that dataset are similar. In this new dataset of 8 object classes, images for 7 classes (cellphone, bike, iron, shoe, stapler, mouse, and toaster) were collected from the Internet (mostly Google and Flickr) using an automatic image crawler. The initial images were then filtered to remove outliers by a paid undergraduate with no knowledge of our work, yielding a set of 60 images for each category. The 8th class, the car, is taken from the LabelMe dataset [34]. A sample of the dataset is available at [35]. As in the previous experiment, we compare the performance of our algorithm to [1]. This time we trained the models using the full dataset from Savarese et al. [33] (48 available poses, 10 instances, for a total of 480 images per category). Results for both models are reported in Fig. 8. Again, our model achieves better overall results. Fig. 8 (right panel) shows the performance comparison broken down by category. Notice that for some categories, such as cellphones or bikes, the improvement is less significant. This suggests that our algorithm is more effective for categories with richer 3D structure (as opposed to cellphones and bikes, which are almost planar objects). All the experiments presented in this section use the 2-view synthesis scheme; the 3-view scheme is currently being tested and will be presented in future work. Fig. 9 illustrates a range of pose estimation results on the new dataset; see the Fig. 9 caption for details.
5 Conclusion
Recognizing objects in 3D space is an important problem in computer vision, and many recent works have been devoted to it. Beyond the semantic labeling of objects seen under specific views, it is often crucial to recognize the pose of an object in 3D space along with its categorical identity. In this paper, we have proposed an algorithm to deal with unseen (and/or untrained) poses in recognition. We achieve this by modifying the model proposed by Savarese et al. [1] and by taking advantage of a variant of the view morphing technique proposed by Seitz & Dyer [2]. Our initial testing of the algorithm shows promising results, but a number of issues remain. Our algorithm still requires a good number of views during training in order to generalize; more analysis and research are needed to make this requirement as small as possible. Further research is also needed to explore to what degree the inherent nuisances in category-level recognition (lighting variability, occlusions and background clutter) affect the view morphing formulation. Finally, it would be interesting to extend our framework to incorporate the ability to model non-rigid objects.
References 1. Savarese, S., Fei-Fei, L.: 3D generic object categorization, localization and pose estimation. In: IEEE Int. Conf. on Computer Vision, Rio de Janeiro, Brazil (October 2007) 2. Seitz, S., Dyer, C.: View morphing. In: SIGGRAPH, pp. 21–30 (1996) 3. Viola, P., Jones, M.: Rapid object detection using a boosted cascade of simple features. In: Proc. Computer Vision and Pattern Recognition (2001) 4. Weber, M., Welling, M., Perona, P.: Unsupervised learning of models for recognition. In: Vernon, D. (ed.) ECCV 2000. LNCS, vol. 1842, pp. 18–32. Springer, Heidelberg (2000) 5. Dance, C., Willamowski, J., Fan, L., Bray, C., Csurka, G.: Visual categorization with bags of keypoints. In: ECCV International Workshop on Statistical Learning in Computer Vision, Prague (2004) 6. Fergus, R., Perona, P., Zisserman, A.: Object class recognition by unsupervised scale-invariant learning. In: Proc. Comp. Vis. and Pattern Recogn. (2003) 7. Grauman, K., Darrell, T.: The pyramid match kernel: Discriminative classification with sets of image features. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), Beijing, China (2005) 8. Leibe, B., Schiele, B.: Combined object categorization and segmentation with an implicit shape model. In: Proc. Workshop on satistical learning in computer vision, Prague, Czech Republic (2004) 9. Berg, A., Berg, T., Malik, J.: Shape matching and object recognition using low distortion correspondences. In: Proc. Computer Vis. and Pattern Recog. (2005) 10. Todorovic, S., Ahuja, N.: Extracting subimages of an unknown category from a set of images. In: CVPR (2006) 11. Schneiderman, H., Kanade, T.: A statistical approach to 3D object detection applied to faces and cars. In: Proc. CVPR, pp. 746–751 (2000) 12. Weber, M., Einhaeuser, W., Welling, M., Perona, P.: Viewpoint-invariant learning and detection of human heads. In: Int. Conf. Autom. Face and Gesture Rec. (2000) 13. Torralba, A., Murphy, K., Freeman, W.: Sharing features: efficient boosting procedures for multiclass object detection. In: Proc. Conference on Computer Vision and Pattern Recognition (CVPR) (2004) 14. Beier, T., Neely, S.: Feature-based image metamorphosis. In: SIGGRAPH (1992) 15. Chen, S., Williams, L.: View interpolation for image synthesis. Computer Graphics 27, 279–288 (1993) 16. Szeliski, R.: Video mosaics for virtual environments. Computer Graphics and Applications 16, 22–30 (1996) 17. Avidan, S., Shashua, A.: Novel view synthesis in tensor space. In: Proc. Computer Vision and Pattern Recognition, vol. 1, pp. 1034–1040 (1997) 18. Laveau, S., Faugeras, O.: 3-d scene representation as a collection of images. In: Proc. International Conference on Pattern Recognition (1994) 19. Hartley, R.I., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, Cambridge (2004) 20. Xiao, J., Shah, M.: Tri-view morphing. CVIU 96 (2004) 21. Brown, M., Lowe, D.: Unsupervised 3D object recognition and reconstruction in unordered datasets. In: 5th International Conference on 3D Imaging and Modelling (3DIM 2005), Ottawa, Canada (2005) 22. Lowe, D.: Object recognition from local scale-invariant features. In: Proc. International Conference on Computer Vision, pp. 1150–1157 (1999)
23. Ullman, S., Basri, R.: Recognition by linear combination of models. Technical report, Cambridge, MA, USA (1989) 24. Rothganger, F., Lazebnik, S., Schmid, C., Ponce, J.: 3d object modeling and recognition using local affine-invariant image descriptors and multi-view spatial constraints. IJCV 66(3), 231–259 (2006) 25. Ferrari, V., Tuytelaars, T., Van Gool, L.: Simultaneous object recognition and segmentation from single or multiple model views. IJCV (2006) 26. Lazebnik, S., Schmid, C., Ponce, J.: Semi-local affine parts for object recognition. In: Proceedings of BMVC, Kingston, UK, vol. 2, pp. 959–968 (2004) 27. Bart, E., Byvatov, E., Ullman, S.: View-invariant recognition using corresponding object fragments. In: Pajdla, T., Matas, J(G.) (eds.) ECCV 2004. LNCS, vol. 3022, pp. 152–165. Springer, Heidelberg (2004) 28. Thomas, A., Ferrari, V., Leibe, B., Tuytelaars, T., Schiele, B., Van Gool, L.: Towards multi-view object class detection. In: Proceedings IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 1589–1596 (2006) 29. Kushal, A., Schmid, C., Ponce, J.: Flexible object models for category-level 3d object recognition. In: Proc. Conf. on Comp. Vis. and Patt. Recogn. (2007) 30. Hoeim, D., Rother, C., Winn, J.: 3D layoutcrf for multi-view object class recognition and segmentation. In: Proc. In IEEE Conference on Computer Vision and Pattern Recognition (2007) 31. Yan, P., Khan, D., Shah, M.: 3d model based object class detection in an arbitrary view. In: ICCV (2007) 32. Chiu, H., Kaelbling, L., Lozano-Perez, T.: Virtual training for multi-view object class recognition. In: CVPR (2007) 33. http://vangogh.ai.uiuc.edu/silvio/3ddataset.html 34. Russell, B., Torralba, A., Murphy, K., Freeman, W.: Labelme: a database and web-based tool for image annotation. Int. Journal of Computer Vision (2007) 35. http://vangogh.ai.uiuc.edu/silvio/3ddataset2.html
Projected Texture for Object Classification Avinash Sharma and Anoop Namboodiri Center for Visual Information Technology, International Institute of Information Technology, Hyderabad, INDIA - 500 032
Abstract. Algorithms for classification of 3D objects either recover the depth information lost during imaging using multiple images, structured lighting, image cues, etc., or work directly on the images for classification. While the latter class of algorithms is more efficient and robust in comparison, it is less accurate due to the lack of depth information. We propose the use of structured lighting patterns projected on the object, which get deformed according to the shape of the object. Since our goal is object classification and not shape recovery, we characterize the deformations using simple texture measures, thus avoiding the error-prone and computationally expensive step of depth recovery. Moreover, since the deformations encode depth variations of the object, the 3D shape information is implicitly used for classification. We show that the information thus derived can significantly improve the accuracy of object classification algorithms, and derive the theoretical limits on height variations that can be captured by a particular projector-camera setup. A 3D texture classification algorithm derived from the proposed approach achieves a ten-fold reduction in error rate on a dataset of 30 classes, when compared to state-of-the-art image based approaches. We also demonstrate the effectiveness of the approach for a hand geometry based authentication system, which achieves a four-fold reduction in the equal error rate on a dataset containing 149 users.
1 Introduction
Three-dimensional objects are characterized by their shape, which can be thought of as the variation in depth over the object from a particular viewpoint. These variations could be deterministic, as in the case of rigid objects, or stochastic, for surfaces containing a 3D texture. The depth information is lost during the process of imaging, and what remains is the intensity variation induced by the object shape and lighting, as well as focus variations. Algorithms that utilize 3D object shape for classification try to recover the lost depth information from the intensity or focus variations, or using additional cues from multiple images, structured lighting, etc. This process is computationally intensive and error prone. Once the depth information is estimated, one needs to characterize the object using shape descriptors for the purpose of classification. Image-based classification algorithms try to characterize the intensity variations of the image of the object for recognition. As we noted, the intensity
variations are affected by the illumination and pose of the object. Such algorithms therefore attempt to derive descriptors that are invariant to changes in lighting and pose. Although image based classification algorithms are more efficient and robust, their classification power is limited due to the loss of information during the imaging process. We propose the use of structured lighting patterns for the purpose of recognition without shape recovery. The depth variations of the object induce deformations in the projected patterns, and these deformations encode the shape information. We treat the deformed patterns as a texture, referred to as projected texture. The primary idea is to view the projected texture as a characteristic property of the object and use it directly for classification, instead of trying to recover the shape explicitly. To achieve this we need to use an appropriate projection pattern and derive features that sufficiently characterize the deformations. The patterns required could be quite different depending on the nature of the object shape and its variation across objects. In this paper, we primarily concentrate on the problem of 3D texture classification. We propose a set of simple texture features that can capture the deformations of projected lines on 3D textured surfaces. Experiments indicate the superiority of the approach as compared to traditional image based classification algorithms. To demonstrate the flexibility of the idea, we also show the use of projected textures for hand geometry based person authentication. Figure 1 shows the effect of projected texture on two similar texture classes: salt and sugar crystals. The projected texture based features are clearly different, while the image based features look similar. One should note that an approach using structured lighting has its limitations, as it requires some amount of control over the environment. However, it can be useful in a variety of applications such as industrial inspection, robot navigation, biometric authentication, supermarket billing, etc.
[Fig. 1 panels: texton histograms and NHoDG features (with projected pattern) for sugar and salt crystals.]
Fig. 1. Salt and Sugar crystal with and without projected texture and the corresponding feature representations
The use of the Bidirectional Texture Function (BTF), which incorporates the variations in illumination as well as statistical texture variations, is popular in appearance based 3D texture models. Leung and Malik [1] extended the idea of modeling appearance based on texture primitives (textons) using BTFs to define a set of 3D textons. Cula and Dana [2] modified the approach by building dictionaries directly from filter outputs, making the approach less sensitive to illumination and pose. Wang and Dana [3] extended the approach to incorporate geometric information computed from the sampled BTF, making the representation suitable for tasks like texture prediction, synthesis, etc. Although the above algorithms work on 2D image features, their definitions are based on lighting variations in 3D. Varma and Zisserman [4] proposed image level features that are invariant to illumination and pose. They further extended the idea of textons by creating a dictionary from the most responsive filters for an image [5], as well as one based on image patch exemplars [5]. Currently, these approaches are two of the best performing classifiers for texture images, and we use them as benchmarks for comparison. However, these approaches are computationally intensive for both training and testing. We show that a relatively simple texture measure that we propose is sufficient to achieve better performance when combined with projected texture. A different class of approaches uses natural texture in the scene for recognition of objects or people [6,7,8], as well as for depth estimation [9,10]. The primary difference in our approach is that the texture we use is not an inherent property of the object, but is superimposed on it during imaging. We demonstrate the flexibility of our approach with a second application in hand geometry based person authentication, where one is required to capture minor variations between similar samples (hands) belonging to different people. The performance is compared with popular image based features [11,12].
2 Projected Texture for Recognition
The primary idea of the approach, as described before, is to encode the depth variations of an object as deformations of a projected light pattern. There are primarily two categories of objects that we might want to characterize. The first class of objects, such as manufactured parts and the human palm, is characterized by its exact 3D shape, while the second class is characterized by stochastic variations in depth, such as 3D textured surfaces. In this paper we primarily concentrate on the classification of 3D textured surfaces, and the results of hand geometry based authentication are presented briefly. The object is placed in the field of view of the camera and the projector, and a specific light pattern is projected on it. The projected pattern, or the original texture, falling on the surface containing the object gets transformed according to the depth map of the object under illumination. These transformations can be primarily classified into two categories:
– Pattern Shift: The position where a particular projected pattern is imaged by the camera depends on the absolute height from which the pattern is reflected.
Fig. 2. Pattern shift and deformation due to depth variations
Figure 2 illustrates this with a cross section of a projection setup. Note that the amount of shift depends on the height difference between the objects as well as on the angle between the projector and camera axes.
– Pattern Deformation: Any pattern that is projected on an uneven surface gets deformed in the captured image depending on the change in depth of the surface (see Figure 2). These deformations depend on the absolute angle between the projector axis and the normal to the surface at a point, as well as on its derivative.
2.1 Pattern Deformation and Projector Camera Configuration
We now take a closer look at the nature of depth variation on an object's surface and how it affects the projected patterns for a specific set of setup parameters. One of the important factors affecting the deformation is the slope of the surface with respect to the projection axis. We first derive the relationship between the deformation of the pattern, the parameters of the physical setup, and the height variations on the object surface. Figure 3(a) shows the image capture setup and Figure 3(b) shows a schematic diagram with the object surface having slope θ to the Y-axis; we refer to this as the object plane. Figure 3(b) considers the projection of a single horizontal line pattern at an angle φ from the Z-axis, forming a plane that we will call the light plane. The light plane may be represented as x/a + z/b = 1, where a = b tan φ. The equations of the light plane and the object plane can hence be expressed as:
x cot φ + z − b = 0, and    (1)
z − y tan θ = 0    (2)
The line cd shown in the figure is the intersection of these two planes in 3D, and its direction can be expressed as the cross product of the normals of the two intersecting planes:
n3 = [cot φ, 0, 1]^T × [0, tan θ, −1]^T = [−tan θ, cot φ, tan θ cot φ]^T    (3)
Fig. 3. The image capture setup and the pattern deformation geometry
A point p common to both planes can be obtained by solving equations (1) and (2): p = [b tan φ, 0, 0]^T. The equation of the 3D line can then be written as
r = [b tan φ − s tan θ, s cot φ, s tan θ cot φ]^T,    (4)
where s is the line parameter; different values of s give different points on the line. In order to express the 2D projection of this 3D line onto the image plane of the camera, we take two points on the line that lie in the field of view of the camera. Let Q1 and Q2 be two such points, corresponding to s = l1 and s = l2 respectively:
Q1 = [b tan φ − l1 tan θ, l1 cot φ, l1 tan θ cot φ]^T    (5)
Q2 = [b tan φ − l2 tan θ, l2 cot φ, l2 tan θ cot φ]^T    (6)
For simplicity, let us assume a pinhole camera with camera matrix P = K[R|t]. Let K = I, i.e., the internal parameter matrix is the identity, and let R = [R1 R2 R3; R4 R5 R6; R7 R8 R9] and t = [t1 t2 t3]^T. The images of these points in the camera plane are q1 = P Q1 and q2 = P Q2. q1 can be written in terms of R1 to R9, l1, φ and θ as:
q1 = [R1(b tan φ − l1 tan θ) + R2 l1 cot φ + R3 l1 tan θ cot φ + t1,
      R4(b tan φ − l1 tan θ) + R5 l1 cot φ + R6 l1 tan θ cot φ + t2,
      R7(b tan φ − l1 tan θ) + R8 l1 cot φ + R9 l1 tan θ cot φ + t3]^T    (7)
Similarly, q2 can be written in terms of R1 to R9, l2, φ and θ. Let us write q1 and q2 as:
q1 = [X1, Y1, Z1]^T,  q2 = [X2, Y2, Z2]^T    (8)
In the homogeneous coordinate system, q1 and q2 can be represented as:
q1 = [X1/Z1, Y1/Z1, 1]^T,  q2 = [X2/Z2, Y2/Z2, 1]^T    (9)
Thus the equation of the line in the 2D image plane is L : q1 × q2 = 0, i.e.,
L : X(Z1 Y2 − Z2 Y1) − Y(Z1 X2 − Z2 X1) − X1 Y2 + X2 Y1 = 0    (10)
and its slope is
m = (Z1 Y2 − Z2 Y1)/(Z1 X2 − Z2 X1)    (11)
From the equation of the line it can be inferred that the slope m depends on b, φ and θ; thus the slope of the height variation directly affects the orientation of the projection of the 3D line onto the image plane, subject to the specific setup parameters, as shown above. Hence, we can compute the projection angle given the minimum change in orientation that can be detected by the camera and the slope variation of the surface. Another factor is the shadow effect when the slope faces away from the illumination; in that case the response of any transform will be zero or low. Internal reflection of the surface is also important and depends on the physical properties of the object surface. All these factors combine to form a deformation pattern, which we use to recognize the surface.
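To make the relationship above concrete, the following minimal sketch (Python/NumPy) evaluates Eqs. (4)-(11) numerically to obtain the slope of the imaged line for a given setup. The function name, the default extrinsics and the example parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def imaged_line_slope(b, phi, theta, R=np.eye(3), t=np.array([0.0, 0.0, 5.0]),
                      l1=1.0, l2=2.0):
    """Slope m (Eq. 11) of the projected line seen by a pinhole camera with
    K = I and extrinsics [R | t]; phi is the light-plane angle and theta the
    surface slope. Defaults are illustrative, not from the paper."""
    def line_point(s):
        # Point on the 3D intersection line r(s) of the two planes (Eq. 4).
        return np.array([b * np.tan(phi) - s * np.tan(theta),
                         s / np.tan(phi),
                         s * np.tan(theta) / np.tan(phi)])
    q1 = R @ line_point(l1) + t        # Eq. (7): q1 = P Q1 with P = [R | t]
    q2 = R @ line_point(l2) + t
    (X1, Y1, Z1), (X2, Y2, Z2) = q1, q2
    return (Z1 * Y2 - Z2 * Y1) / (Z1 * X2 - Z2 * X1)   # Eq. (11)

# A steeper surface slope changes the orientation of the imaged line:
print(imaged_line_slope(b=1.0, phi=np.radians(45), theta=np.radians(10)))
print(imaged_line_slope(b=1.0, phi=np.radians(45), theta=np.radians(30)))
```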
2.2 Design of Projected Pattern
The choice of an appropriate projection pattern is important due to a variety of factors:
1. For the deformation to be visible at any point in the captured image, the gradient of the projected pattern should not be zero in the direction of the gradient of the object depth.
2. One should be able to capture the deformations of the projected pattern using the texture measure employed for this purpose.
3. The density of the projected pattern, or its spatial frequency, should be related to the frequency of the height variations to be captured. Hence, analyzing the geometry of an object with a high level of detail will require a finer pattern, whereas in the case of an object with smooth structural variations a sparse one will serve the purpose.
4. Factors such as the color and reflectance of the object surface should be considered in selecting the color, intensity and contrast of the projected pattern.
For the purpose of 3D texture recognition, we use a set of parallel lines with regular spacing, where the spacing is determined based on the scale of the textures to be recognized (a sketch of such a pattern is given below). For hand geometry based authentication, we have selected a repetitive star pattern that has gradients in four different directions. The width of the lines and the density of patterns in the texture were selected experimentally so that they capture the height variations between palms at the selected angle of projection.
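As an illustration of the parallel-line patterns referred to above, the following sketch generates such a pattern; the helper name and the default line width are assumptions made for illustration only.

```python
import numpy as np

def parallel_line_pattern(height, width, spacing, line_width=1):
    """Binary image with a horizontal line every `spacing` pixels
    (e.g. spacing=5 corresponds to the W5 pattern used later)."""
    pattern = np.zeros((height, width), dtype=np.uint8)
    for row in range(0, height, spacing):
        pattern[row:row + line_width, :] = 255
    return pattern

w10 = parallel_line_pattern(480, 640, spacing=10)   # a W10-style pattern
```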
2.3 Characterization of Pattern Deformation
An effective method for characterizing the deformations of the projected pattern is critical for the ability to discriminate between different objects. We propose a set of texture features that capture the statistics of deformation in the case of 3D textures. As noted before, the projection pattern used for 3D texture classification is a set of parallel lines; hence the feature set that we propose should capture the deformations of the lines and compute their overall statistics.
Normalized Histogram of Derivative of Gradients (NHoDG). Gradient directions in images are the directions of maximal intensity variation. In our scenario, the gradient directions can indicate the direction of the projected lines. As the lines get deformed with surface height variations, we compute the differential of the gradient directions along both the x and y axes to measure the rate at which the surface height varies. The derivatives of gradients are computed at each pixel in the image, and the texture is characterized by a Histogram of the Derivatives of Gradients (HoDG). The gradient derivative histogram is a good indicator of the nature of surface undulations in a 3D texture. For classification, we treat the histogram as a feature vector to compare two 3D textures. As the distance computation involves comparing corresponding bins from different images, we normalize the counts in each bin of the histogram across all the samples in the training set. This normalization allows us to treat the distances between corresponding bins in the histograms equally (employing the Euclidean distance). The NHoDG is a simple but extremely effective feature for discriminating between different texture classes. Figure 4 illustrates the computation of the NHoDG feature from a simple image with a bell shaped intensity variation. We compare the effectiveness of this feature set under structured illumination in the experimental section using a dataset of 30 3D textures.
[Fig. 4 panels: original image function, gradient field, derivatives of gradient in the X and Y directions, and the normalized histogram of derivative of gradients.]
Fig. 4. Computation of the proposed NHoDG feature vector
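A minimal sketch of the HoDG computation described above is given below (Python/NumPy). The bin layout and the handling of angle wrap-around are our assumptions, and the per-bin normalization over the training set (which turns the HoDG into the NHoDG) is only indicated in the docstring.

```python
import numpy as np

def hodg(image, bin_size_deg=5):
    """Histogram of the derivatives of gradient directions for one image.
    Dividing each bin by its normalization factor computed over the training
    set yields the NHoDG, which is then compared with the Euclidean distance."""
    gy, gx = np.gradient(image.astype(float))
    direction = np.arctan2(gy, gx)                # gradient direction per pixel
    ddir_y, ddir_x = np.gradient(direction)       # derivatives of the direction field
    deriv = np.degrees(np.concatenate([ddir_x.ravel(), ddir_y.ravel()]))
    deriv = (deriv + 180.0) % 360.0 - 180.0       # wrap into (-180, 180]
    edges = np.arange(-180.0, 180.0 + bin_size_deg, bin_size_deg)
    hist, _ = np.histogram(deriv, bins=edges)
    return hist.astype(float)
```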
Characterizing Deterministic Surfaces. In the case of hand geometry based authentication, we need to characterize the exact shape of the object, and not the statistics of its height variations. Hence we divide the hand image into a set of non-overlapping sub-windows, and compute the local textural characteristics of each window using a filter bank of 24 Gabor filters with 8 orientations and 3 scales (or frequencies). In our experiments we have used a grid of 64 sub-windows (8 × 8), and the mean response of each filter in each sub-window forms a 1536-dimensional feature vector that is used to represent each sample.
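A sketch of such a sub-window Gabor representation could look as follows; the specific frequencies and the use of the magnitude response are our assumptions (scikit-image and SciPy are used for the kernels and the filtering).

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gabor_kernel

def hand_gabor_features(image, grid=8, n_orientations=8, frequencies=(0.1, 0.2, 0.3)):
    """Mean Gabor response in each of grid x grid sub-windows for a bank of
    len(frequencies) * n_orientations filters (8 x 8 x 24 = 1536 values)."""
    image = image.astype(float)
    h, w = image.shape
    features = []
    for f in frequencies:
        for k in range(n_orientations):
            kern = gabor_kernel(f, theta=np.pi * k / n_orientations)
            real = ndi.convolve(image, np.real(kern), mode='reflect')
            imag = ndi.convolve(image, np.imag(kern), mode='reflect')
            magnitude = np.hypot(real, imag)
            for i in range(grid):
                for j in range(grid):
                    window = magnitude[i * h // grid:(i + 1) * h // grid,
                                       j * w // grid:(j + 1) * w // grid]
                    features.append(window.mean())
    return np.array(features)
```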
3 Experimental Results and Analysis
The image capture setup consists of a planar surface on which to place the object samples and an LCD projector fixed at an angle to the object surface. The camera is located directly above the object with its axis perpendicular to the object plane (see Figure 3(a)). We considered a set of 30 3D textures with considerable variations in depth profile. The texture surfaces included pebbles, concrete, thermocol, sand, soil, gravel, sponge, ribbed paper, crystal sugar and salt, and a variety of grains and pulses. The materials were chosen to have texture classes with similar scale and nature of depth variations, which makes the problem challenging; however, the overall scales varied considerably, from pebbles to sand. A total of 14,400 images were collected, with 480 samples for each of the 30 classes. The 480 samples consisted of 24 different object samples, each imaged with 5 different projected patterns (including the case of no projection pattern) under 4 different illumination conditions. The projected patterns are parallel lines with uniform spacings of 5, 10, 15 and 20 pixels between them; we refer to these patterns as W5, W10, W15 and W20 in the rest of this section. The overall object surface was nearly planar and normal to the camera axis. Sample images of the 30 different classes along with their feature space representations are shown in Figure 5. We refer to this dataset as the Projected Texture Dataset (PTD) from now on.
Fig. 5. Examples of the 30 texture classes and their NHoDG representations
For evaluation of the hand geometry based person authentication algorithm, we collected a dataset of 1341 images from 149 users, with each user providing 9 samples. For comparison, we collected two sets of images from each user, one with projected texture and one with uniform illumination.
3.1 Texture Classification
We have conducted exhaustive experiments to validate our approach. Our contribution includes the use of deformed projection patterns as texture, as well as an appropriate feature set to capture the deformations. We conducted experiments with and without projection patterns, using both the proposed and traditional 2D features. As mentioned before, we use the maximum filter response (MR) and image patch exemplar based features [5] as benchmarks for comparing our approach. We included four additional filter responses with two higher scales (MR12 instead of MR8) so as to improve the results of the MR approach, as our dataset contains larger scale variations. Patch-based texture representations with three different patch sizes were also used for comparison.

Table 1. Error rates of classification using NHoDG, MR, and Image Patch features on the PTD and Curet datasets (in %)

Dataset  Projection   NHoDG   MR     Image Patch 3x3   5x5    7x7
Curet    Without      12.93   3.15   4.67              4.38   3.81
PTD      Without       2.36   1.18   3.51              1.53   1.46
PTD      With          1.15   0.76   1.60              1.18   0.90
PTD      Combined      0.07   0.31   1.70              0.66   0.62
Table 1 gives the results using the proposed NHoDG feature set as well as the MR and patch based features. Results are presented on the PTD as well as the Curet dataset; note, however, that the results on the Curet dataset are without projected patterns. All results are based on 4-fold cross validation, where the dataset is divided into non-overlapping training and testing sets; this is repeated 4 times and the average results are reported. We note that the best 2D image based approach achieves an error rate of 1.18%, i.e., 34 misclassifications on our dataset of 2880 samples (with no projection pattern). Clearly the MR12 feature set performs better in pure image based classification. However, when combining the image information with the projected texture features, the NHoDG feature achieves an error rate of 0.07%, which corresponds to just 2 samples being misclassified. We also experimented with the patch based approach, which performed worse than the MR filter approach: the best error rates for 3x3, 5x5 and 7x7 patches were 1.70%, 0.66%, and 0.62%, as opposed to 0.31% for the MR12 filters. Figure 6 shows the variation in classification performance as the histogram bin size and the pattern separation are varied. We note that the performance is consistently good, and we selected a bin resolution of 5 degrees and a pattern separation of 5 pixels for the above experiments.
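The evaluation protocol can be sketched as follows. The 1-nearest-neighbour rule with Euclidean distance is an assumption consistent with the distance-based comparison of Sect. 2.3 (the classifier is not spelled out here), and scikit-learn is used only for the folds and the neighbour search.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

def cross_validated_error(features, labels, n_splits=4):
    """4-fold cross-validated error rate (%) of a 1-nearest-neighbour rule
    with Euclidean distance over the feature vectors."""
    errors = []
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in folds.split(features, labels):
        clf = KNeighborsClassifier(n_neighbors=1, metric='euclidean')
        clf.fit(features[train_idx], labels[train_idx])
        errors.append(np.mean(clf.predict(features[test_idx]) != labels[test_idx]))
    return 100.0 * np.mean(errors)
```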
Fig. 6. Performance with varying histogram bin sizes and pattern separations
[Fig. 7 panels: texton histograms and NHoDG features for sample 3 of class 24 and sample 3 of class 16.]
Fig. 7. One of the two misclassifications in the dataset along with the MR and NHoDG feature representation
Figure 7 shows one of the misclassified samples, along with the corresponding MR and NHoDG features. We also note that the proposed feature set is primarily intended for use with projection, and does not perform well on datasets such as Curet that lack projected patterns.
3.2 Hand Geometry Based Authentication
We compare the performance of three different feature sets in this experiment: i) Feat-1: a set of 17 features based on finger lengths, widths and heights, proposed by Jain et al. [11]; ii) Feat-2: a set of 10 features computed from palm contours, proposed by Faundez et al. [12]; and iii) Feat-3: the proposed projected texture based features. Figure 8(a) shows the difference in deformation of the projected pattern based on the 3D shape of the palm. An effective way to compare the utility of a matching algorithm for authentication is the ROC curve, which plots the trade-off between genuine acceptance and false acceptance of users in an authentication system. The ROC curves in Figure 8(b) clearly indicate the superiority of the proposed feature set. As the purpose of this experiment is to compare the feature sets, we have computed the ROC curves from the Euclidean distances between samples of the same user as well as of different users.
[Fig. 8 panels: (a) original image, deformed patterns and a zoomed view; (b) ROC curves (genuine accept rate vs. false accept rate) for Gabor-57, Gabor-1536, Jain et al. and Faundez et al.]
Fig. 8. Deformation in projected texture due to hand geometry, and the ROC curve for the different algorithms
The equal error rate (EER), i.e., the point at which the false reject rate equals the false accept rate, was 4.06% for Feat-1 and 4.03% for Feat-2. In contrast, the proposed feature set achieved an EER of 1.91%. In addition to the equal error rate, we note that the genuine acceptance rate for the proposed features remains above 80% even at a false acceptance rate of 0.001%, while the performance of the 2D image based features degrades considerably at this point. We also conducted a feature selection experiment to choose a subset of the 1536 features in order to reduce the computation required. We note that even with just 57 of the 1536 features, the ROC curve is similar to that of the complete feature set; moreover, the equal error rate improves to 0.84% with the reduced feature set. This is possible because the selection process avoids those sub-windows where the intra-class variations in pose are high. Clearly the projected patterns induce a large amount of discriminating information into the computed features.
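The equal error rate quoted above can be computed from the genuine and impostor distance distributions; a minimal sketch (assuming that a smaller distance means a better match):

```python
import numpy as np

def equal_error_rate(genuine_dists, impostor_dists):
    """EER: sweep a distance threshold and return the point where the false
    reject rate (genuine pairs rejected) equals the false accept rate
    (impostor pairs accepted)."""
    thresholds = np.sort(np.concatenate([genuine_dists, impostor_dists]))
    frr = np.array([np.mean(genuine_dists > t) for t in thresholds])
    far = np.array([np.mean(impostor_dists <= t) for t in thresholds])
    idx = np.argmin(np.abs(frr - far))
    return 100.0 * (frr[idx] + far[idx]) / 2.0   # EER in %
```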
4 Conclusions and Future Work
A novel technique for recognition of 3D objects using projected texture has been proposed. The approach was demonstrated on two different problems: 3D texture classification and hand geometry based person authentication. The approach is robust to occlusions and noise, as we need not find any correspondences or recover the depth map of the object. Moreover, the computational requirements are comparable to the simpler 2D image based recognition approaches, while being far more accurate. We are currently working on extending the approach to object recognition under arbitrary pose, and the initial results are promising. Future work in this direction could address objects with high reflectance or transparency. Automatic adaptation of the projected pattern to a particular application could also be interesting. Temporally varying projection patterns, giving rise to dynamic deformations, could also give us a cue towards designing optimal classifiers for recognition.
References 1. Leung, T., Malik, J.: Representing and recognizing the visual appearance of materials using three-dimensional textons. International Journal of Computer Vision 43(1), 29–44 (2001) 2. Cula, O.G., Dana, K.J.: 3d texture recognition using bidirectional feature histograms. International Journal of Computer Vision 59(1), 33–60 (2004) 3. Wang, J., Dana, K.J.: Hybrid textons: modeling surfaces with reflectance and geometry. In: Proc. of CVPR 2004, vol. 1, pp. 372–378 (2004) 4. Varma, M., Zisserman, A.: Classifying images of materials: Achieving viewpoint and illumination independence. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) ECCV 2002. LNCS, vol. 2352, pp. 255–271. Springer, Heidelberg (2002) 5. Varma, M.: Statistical Approaches To Texture Classification. PhD thesis, University of Oxford (October 2004) 6. Kumar, A., Zhang, D.: Personal recognition using hand shape and texture. IEEE Transactions on Image Processing 15(8), 2454–2461 (2006) 7. Daugman, J.: High confidence visual recognition of persons by a test of statistical independence. IEEE Transactions on PAMI 15(11), 1148–1161 (1993) 8. Jain, A.K., Prabhakar, S., Hong, L., Pankanti, S.: Filterbank-based fingerprint matching. IEEE Transactions on Image Processing 9(5), 846–859 (2000) 9. Forsyth, D.: Shape from texture and integrability. In: Proc. of ICCV 2001, vol. 2, pp. 447–452 (July 2001) 10. Loh, A., Hartley, R.: Shape from non-homogeneous, non-stationary, anisotropic, perspective texture. In: Proc. of the BMVC 2005, pp. 69–78 (2005) 11. Jain, A.K., Ross, A., Pankanti, S.: A prototype hand geometry-based verification system. In: Proc. of the AVBPA 1999, Washington D.C., pp. 166–171 (1999) 12. Faundez-Zanuy, M., Elizondo, D.A., Ferrer-Ballester, M., Travieso-Gonz´ alez, C.M.: Authentication of individuals using hand geometry biometrics: A neural network approach. Neural Processing Letters 26(3), 201–216 (2007)
Prior-Based Piecewise-Smooth Segmentation by Template Competitive Deformation Using Partitions of Unity
Oudom Somphone1,2, Benoit Mory1, Sherif Makram-Ebeid1, and Laurent Cohen2
1 Medisys Research Lab, Philips Healthcare, 33 rue de Verdun, B.P. 313, F-92156 Suresnes Cedex, France
{oudom.somphone,benoit.mory,sherif.makram-ebeid}@philips.com
2 CEREMADE, CNRS UMR 7534, Université Paris Dauphine, France
[email protected]
Abstract. We propose a new algorithm for two-phase, piecewise-smooth segmentation with shape prior. The image is segmented by a binary template that is deformed by a regular geometric transformation. The choice of the template together with the constraint on the transformation introduce the shape prior. The deformation is guided by the maximization of the likelihood of foreground and background intensity models, so that we can refer to this approach as Competitive Deformation. In each region, the intensity is modelled as a smooth approximation of the original image. We represent the transformation using a Partition of Unity Finite Element Method, which consists in representing each component with polynomial approximations within local patches. A conformity constraint between the patches provides a way to control the globality of the deformation. We show several results on synthetic images, as well as on medical data from different modalities.
1 Introduction
Image segmentation is a fundamental topic in computer vision, which has motivated many works to cope with challenging issues such as noise, occlusions and low contrasted regions. A common approach is the introduction of prior knowledge in order to constrain the solution to remain close to a given class of shapes. Statistical models have been proposed, using for example Principal Component Analysis (PCA), like the well-known active shape models [1]. Such techniques require careful training in order to capture the variability of the shapes, and do not enable segmentation if no training database is available. Shape priors have also been incorporated in the level set framework via an additive shape term in the energy that penalizes the dissimilarity between the level set function for segmentation and the one embedding the prior shape [2,3,4,5]. In the model of Leventon et al. [6] and further works along the same line [7,8,9], a PCA of training shapes embedded in level set functions is computed in order to define a linear statistical model for the shape term. Non-linear versions have
also been explored [10]. For some applications, those models may however suffer from the uncontrollable topology changes allowed by the level set representation, which is most often not desirable when one wants to impose a shape prior. A possible alternative to control the topology is to directly apply a geometric deformation to the prior shape, provided that the transformation is diffeomorphic [11,12]. Two-phase segmentation is performed by deforming a binary template towards the image. The prior is the template itself, and the shape constraint is conveyed through the choice of a class of allowed deformations. This choice is crucial: excessive constraint may result in poor segmentation results, whereas insufficient constraint may lead to a final shape too far from the prior. The deformation can be guided by the maximization of the likelihood of userdefined intensity models, given the pixel values observed in both the foreground and the background. The approach bears similarities with Region Competition techniques [13], with the significant difference that the unknown variable is not the partitioning itself but the deformation of a predefined template. We will refer to this method as Competitive Deformation. Obviously, the choice of appropriate region intensity models is also essential. On one hand, piecewise-constant and global models are simple but their applicability is limited. On the other hand, piecewise-smooth and local models are more relevant in many cases, at the cost of an increased computational complexity. We propose a variational formulation of two-phase, piecewise-smooth segmentation based on Competitive Deformation and a Partition of Unity Finite Element Method (PUFEM). The key idea is to represent the deformation field with polynomial approximations within overlapping local patches. The method includes a conformity constraint between the patches, which offers a good control over the range of the smoothness, and hence over the strength of the prior. Moreover, this framework provides efficient smooth representations of the region intensity models that naturally extrapolate beyond the boundary. This paper is organised as follows. In section 2, we first give the basic formulation for two-phase segmentation by template competitive deformation. Then in section 3, we give a brief overview of the PUFEM framework before presenting our formulation of piecewise-smooth segmentation in section 4. In section 5, we show experimental results on illustrative synthetic images, and on medical images.
2 Basic Formulation of Competitive Deformation
Let the open bounded set Ω ⊂ R^d be the domain of a real-valued image I. Two-phase segmentation aims at partitioning Ω into a foreground and a background that are homogeneous in terms of intensity properties. In the classical level set version of Region Competition [13], the optimal partition of Ω is obtained by evolving a level set function that embeds the region boundary, allowing undesirable topology changes in the case of prior-based segmentation. To control the
topology of the result, we instead deform the characteristic function χ of a prior foreground region Σ,
χ(x) = 1 if x ∈ Σ, 0 otherwise,    (1)
with a diffeomorphic geometric transformation ψ : Ω → ψ(Ω). The basic form of the two-phase competitive deformation problem reads:
min_{ψ, α1, α2} { sum_{x∈Ω} χ∘ψ(x) r1(α1, x) + sum_{x∈Ω} (1 − χ∘ψ(x)) r2(α2, x) + γ R(ψ) }    (2)
where the ri : Ω → R are the region model functions that encode the intensity properties in the foreground and the background. Each ri usually depends on a set of parameters αi, e.g. the mean intensity value, the standard deviation, etc. R is the regularization constraint on ψ and γ a positive constant that controls its weighting. The minimization is carried out iteratively by alternating the following two steps (a toy illustration is sketched below):
(A) Considering the transformation ψ fixed, optimize and update the model parameters α1 and α2;
(B) Considering α1 and α2 fixed, minimize with respect to ψ.
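The sketch below illustrates the alternation on a 1-D toy problem, with piecewise-constant region models (Eq. 3) and a pure translation as the transformation; both choices, the brute-force search over shifts, and the omission of the regularization term are ours, for illustration only.

```python
import numpy as np

def alternate_minimization(I, chi, shifts=range(-30, 31), n_iter=5):
    """Alternate step (A) (update region means) and step (B) (update the
    translation of the template chi) for the energy (2) on a 1-D signal I.
    The regularization term R(psi) is omitted in this toy example."""
    shift = 0
    for _ in range(n_iter):
        # Step (A): with psi fixed, update the model parameters m1, m2.
        warped = np.roll(chi, shift)
        m1 = I[warped == 1].mean()
        m2 = I[warped == 0].mean()
        # Step (B): with m1, m2 fixed, minimize the energy over the shift.
        def energy(s):
            w = np.roll(chi, s)
            return np.sum(w * (I - m1) ** 2 + (1 - w) * (I - m2) ** 2)
        shift = min(shifts, key=energy)
    return shift, m1, m2

rng = np.random.default_rng(0)
I = rng.normal(0.0, 0.1, 200); I[60:110] += 1.0      # bright segment at [60, 110)
chi = np.zeros(200); chi[50:100] = 1.0               # prior template at [50, 100)
print(alternate_minimization(I, chi))                # recovered shift close to +10
```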
2.1 Region Intensity Models
In this formulation, simple global statistics can be used to represent the region intensity properties, such as the well-known piecewise-constant case [14,15], assuming Gaussian intensity distributions with known variance, i.e.:
ri(mi, x) = (I(x) − mi)^2    (3)
where mi is the mean intensity value in region i. The Gaussian assumption has practical limitations and is not valid in images showing more complex intensity distributions. In a similar template deformation context, Saddi et al. [12] used the more general non-parametric model:
ri(pi, x) = − log pi(I(x))    (4)
where pi is the intensity probability density function in region i. Estimating global probability densities for the foreground and background using the whole image still has limitations in practice. Especially critical are the cases of cluttered and heterogeneous backgrounds and the presence of low-frequency artifacts such as illumination changes. Moreover, it may be difficult to obtain a precise positioning of the boundary, since the local contributions of nearby pixels from both sides are diluted in the global estimation of the densities. To overcome these limitations, it is natural to turn to local models, assuming smoothly-varying intensity distributions. In the Gaussian case with space-dependent mean values, this leads to piecewise-smooth segmentation. The error function reads:
ri(Ii, x) = (I(x) − Ii(x))^2 + μi |∇Ii(x)|^2    (5)
where Ii is a function that approximates I and is constrained to be smooth inside region i; μi is the smoothing parameter. Vese and Chan [16], and simultaneously Tsai et al. [17], introduced a level-set formulation based on diffusion and on the work of Mumford and Shah [18]. This diffusion scheme requires the iterative resolution of a PDE with new boundary conditions at each update step (B), which is computationally costly. Methods based on Gaussian convolution were recently proposed [19,20,21], giving qualitatively similar results in a more efficient way. They enable the extrapolation of the model beyond the region boundary, to an extent that depends on the kernel's scale (a minimal sketch of this construction is given below). In section 4, we present an alternative method based on finite elements with polynomial extrapolatory properties.
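For concreteness, one common way of building such a convolution-based smooth approximation is a Gaussian-weighted local average of the masked image, which extrapolates naturally beyond the region boundary; this is a generic construction and not necessarily the exact formulation of [19,20,21].

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_region_approximation(I, chi_i, sigma=5.0, eps=1e-8):
    """Smooth approximation I_i of image I inside region i (binary mask chi_i),
    extrapolated outside the region to an extent governed by sigma."""
    num = gaussian_filter(chi_i * I, sigma)
    den = gaussian_filter(chi_i.astype(float), sigma)
    return num / (den + eps)
```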
2.2 Deformation Models
Compliance with the shape prior is determined by the class of deformations allowed by the regularization constraint R. Therefore it has to be chosen carefully. In particular, R must take into account the fact that a shape is invariant to some global geometric transformations such as translation, rotation, scaling, shearing, etc. For example, several instances of the same shape are shown in Fig. 1. Despite their different positions, orientations and sizes, they all represent the same shape. Consequently, a relevant regularizer R should not penalize such transformations.
Fig. 1. Several instances of a same shape
Within the template matching context, a similarity transformation is used in [15] to segment synthetic images, excluding more complex deformations; in the non-rigid case, Saddi et al. [12] constrain the deformation with a diffeomorphic fluid model, thus allowing the result to deviate strongly from the prior shape. Methods based on local basis expansions of the deformation, such as B-splines [22,23] or Radial Basis Functions [24,25], provide a suitable compromise between global and fluid models. As will be seen in the subsequent sections, we use a finite element registration framework based on a partition of unity, which makes it easy to control the globality of ψ. Moreover, unlike Free Form Deformation methods, our regularization term does not penalize globally polynomial transformations.
3 Partition of Unity Finite Element Representation
In this section, we give an overview of the mathematical framework of the PUFEM [26] that we use to represent a given scalar function: in our case, the approximation images Ii in each region, and the components of the transformation ψ in each dimension. The basic idea is to locally fit the said scalar field with d-dimensional polynomials and smoothly blend them afterwards to obtain a regular representation. Let F be a real-valued function defined on Ω. We define a set N of nodes distributed over Ω. A node n ∈ N is characterized by: – – – –
a point c(n) ∈ Ω, called center of the node, an open bounded subdomain Ω (n) ⊂ Rd containing c(n) , called patch, a function ϕ(n) : Rd → R, called PU-function, (n) a set of ρ(n) functions B (n) = {pr : Ω → R | r ≤ ρ(n) }, called the local basis.
We allow the patches to overlap and assume the families (Ω (n) )n∈N and (ϕ(n) )n∈N to fullfil the Partition of Unity conditions: Ω⊂
Ω (n)
and
n∈N
For the sake of computational efficiency, our nodes are distributed over a regular, rectangular array and each patch Ω (n) is a cuboid centered on c(n) . This configuration is illustrated on Fig. 2.a. The PU-function ϕ(n) has a compact support included in Ω (n) ; it is non-negative, equal to 1 at c(n) and vanishes with the distance to c(n) (cf. Fig. 2.b). Thus, any point x ∈ Ω belongs to 4 patches in 2D, and 8 patches in 3D, with weights given by the corresponding ϕ(n) (x). The basis (n) functions pr are the monomials of all degrees up to a user-defined maximum
(a) Nodes and patches
(b) PU-function
Fig. 2. Example of Partition of Unity configuration in 2D
degree (e.g. in 2D, for degrees up to 2: 1, x, y, x^2, xy, y^2), centered on c^(n), so that F is locally modelled at node n by a polynomial F^(n):
F^(n) = sum_{r ≤ ρ^(n)} a_r^(n) p_r^(n)    (7)
where the a_r^(n) are real coefficients. The global representation is then constructed by blending the F^(n) with the PU-functions:
F = sum_{n∈N} ϕ^(n) F^(n) = sum_{n∈N} ϕ^(n) sum_{r ≤ ρ^(n)} a_r^(n) p_r^(n)    (8)
According to (8), F is as regular as the PU-functions themselves. However, we want to impose a controllable, "long range" regularization, or rather, globality. To this end, we introduce the notion of non-conformity between two neighbouring nodes m and n through the energy:
S_κ^(m,n)(F) = ∫_{Ω(m,n)} ϕ^(m) ϕ^(n) sum_{|β| ≤ κ} ( D^β F^(m) − D^β F^(n) )^2    (9)
where β = (β1, β2, . . . , βk) and D^β is the partial derivative operator in standard multi-index notation. This local energy has an intuitive interpretation: it penalizes F if its local representations at nodes m and n, and their derivatives up to order κ, differ in the overlapping region Ω(m,n). The total conformity energy is then defined by:
S_κ(F) = (1/2) sum_{n∈N} sum_{m∈V(n)} S_κ^(m,n)(F)    (10)
where V (n) is the set of neighbours of node n in 4-connexity. This inter-node conformity constraint is a key feature of our method. It enables smooth representations of the region intensity models, that are naturally extrapolated beyond the boundary. As for the representation of the deformation, this energy is zero when all the local representations are equal, i.e. when ψ is globally polynomial. Thus, in the case of local affine bases, global translation, rotation, scaling and shearing are not penalized.
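Before turning to the segmentation formulation, the following 1-D sketch illustrates the blended representation (7)-(8); the triangular PU-functions, the 1-D setting and the example values are illustrative choices, not the configuration used in the paper.

```python
import numpy as np

def pu_blend(x, centers, spacing, coeffs):
    """Evaluate F(x) = sum_n phi_n(x) * F_n(x) for nodes at `centers`, local
    polynomials with coefficients coeffs[n] (monomials centered on c_n), and
    triangular PU weights of half-width `spacing` (they sum to 1 on a regular
    node array)."""
    F = np.zeros_like(x, dtype=float)
    for c, a in zip(centers, coeffs):
        phi = np.clip(1.0 - np.abs(x - c) / spacing, 0.0, None)   # hat function
        local = sum(a_r * (x - c) ** r for r, a_r in enumerate(a))
        F += phi * local
    return F

# With affine local bases, a globally affine F = 2 + 3x is reproduced exactly,
# so the conformity energy of such a deformation would be zero.
x = np.linspace(0.0, 1.0, 101)
coeffs = [[2.0, 3.0], [5.0, 3.0]]      # F_n(x) = F(c_n) + 3 (x - c_n)
print(np.allclose(pu_blend(x, np.array([0.0, 1.0]), 1.0, coeffs), 2.0 + 3.0 * x))
```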
4
Piecewise-Smooth Segmentation with Competitive Deformation
We now detail our two-phase, piecewise-smooth segmentation formulation casting the Competitive Deformation problem into the PUFEM framework. 4.1
Our Formulation
We address the deformation of the template characteristic function χ through its corresponding displacement vector field u = ψ − id, the components of which
634
O. Somphone et al.
are represented as in (8) by sets of coefficients that we pile up into one vector a. The regularization term R(ψ) in (2) takes the form of an inter-node conformity constraint Sκ (a), defined as the sum of the constraints on each component of u. In region i, I is approximated by a locally polynomial image Iiαi , represented over a set of nodes Ni by a set of coefficients αi according to (8) and (7): (n) (n) (n) (n) (n) ϕi Ii where Ii = αir pir (11) Iiαi = (n)
n∈Ni
rρi
Most existing piecewise-smooth methods include regularization inside each phase (e.g. see (5)). Consequently, Ii is not explicitly defined outside region i. We apply an inter-node conformity constraint Sκi (αi ) involving all the nodes of Ni , so that our energy functional reads: 2 2 E(a, α1 , α2 ) = χ ◦ ψ a (I − I1α1 ) + (1 − χ ◦ ψ a ) (I − I2α2 ) Ω
Ω
+ γ Sκ (a) + μ1 Sκ1 (α1 ) + μ2 Sκ2 (α2 )
(12)
We minimize (12) by alternating steps (A) and (B) (see section 2) until convergence. 4.2
Step (A): Estimating the Image Approximations
Minimization with respect to the parameter set α_i is achieved by considering the energy:
E_i(α_i) = ∫_Ω χ_i (I − I_i^{α_i})^2 + μ_i S_κi(α_i)    (13)
where we define χ1 = χ∘ψ and χ2 = 1 − χ∘ψ. The first term is a masked least squares term that enforces I_i to fit the image within region i. The second term is an inter-node conformity constraint that compels I_i to be regular everywhere in Ω. In other words, I_i results from a regularized approximation of I inside region i, and since no fitting constraint is imposed outside, it is extrapolated beyond the region border by regularization only. S_κi is a quadratic function of the parameters α_i, and hence so is E_i; minimization is then achieved by classical linear regression. Setting the derivatives of E_i with respect to the α_ir^(n) to zero provides a system of linear equations which is sparse, since the nodes are only related to each other in 4-connexity due to the overlap pattern between the patches (see Fig. 2(a)). More precisely, we need to solve:
M_i · α_i = g_i    (14)
where M_i is a symmetric, non-negative definite matrix of size sum_n ρ_i^(n), and g_i a vector of length sum_n ρ_i^(n). Their entries are given in the appendix. We use a conjugate gradient descent, well suited for solving sparse linear systems [27]; a minimal sketch is given below.
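The sketch (SciPy) only illustrates the call; the wrapper and its name are our own.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import cg

def solve_region_coefficients(M_i, g_i):
    """Solve M_i * alpha_i = g_i (Eq. 14) by conjugate gradients; M_i is sparse
    because nodes only interact with their 4-connected neighbours."""
    alpha_i, info = cg(csr_matrix(M_i), np.asarray(g_i, dtype=float))
    if info != 0:
        raise RuntimeError("conjugate gradient did not converge (info=%d)" % info)
    return alpha_i
```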
4.3
Step (B): Template Registration
We show that step (B) is equivalent to a classical registration problem based on a Sum of Squared Differences (SSD) criterion. Let χ̂ be the signed characteristic function of the prior foreground Σ:
χ̂(x) = 1 if x ∈ Σ, −1 otherwise    (15)
Then we can replace χ by (1 + χ̂)/2 in (12). The region parameters αi being fixed, the energy to minimize with respect to a is:
−(1/2) ∫_Ω (χ̂∘ψ_a) r + γ S_κ(a)    (16)
where r = (I − I_2^{α2})^2 − (I − I_1^{α1})^2. By writing
χ̂∘ψ_a r = −(1/2) [ (χ̂∘ψ_a − r)^2 − (χ̂∘ψ_a)^2 − r^2 ]    (17)
and since (χ̂∘ψ_a)^2 = 1 and r^2 is independent of the parameters a, we can reduce the energy to minimize to:
E_u(a) = (1/4) ∫_Ω (χ̂∘ψ_a − r)^2 + γ S_κ(a)    (18)
Therefore step (B) boils down to an SSD-based registration problem, with r being the reference and χ ˆ the template. Minimization of Eu follows a global-tolocal strategy: we define a coarse-to-fine dyadic pyramid of node distributions. Once the solution has been computed at one level, it is projected on the basis of the next finer level to provide an initialization. A detailed description of the minimization scheme is out of the scope of this paper and can be found in [28].
5
Results and Discussion
We first present results on illustrative synthetic, noisy images (see Fig. 3). The first image, "χ", consists of two heterogeneous phases. The results show the robustness of our method to noise and strong intensity variations; the extrapolatory property is illustrated by figure (d) of the first line. The second image, "Treble Clef", consists of a homogeneous foreground on a heterogeneous background and contains occluding objects. The final segmentations are robust to occlusions and leaks. We then apply our method to segment heart chambers on cardiac ultrasound (Fig. 4) and cine MR (Fig. 5) images. The final deformations are well-constrained, so that the topologies of the prior shapes are preserved. On the MR images, the foreground region should include the papillary muscles when segmenting the blood pool, which is challenging as they appear darker and may be confused with the myocardium.
“χ” and its prior shape
(a)
(b)
“Treble Clef” and its prior shape
(c)
(d)
(e)
Fig. 3. Synthetic images “χ” and “Treble Clef”: (a) Initializations. (b) Final segmentation. (c) Final piecewise-smooth approximations χI1 + (1 − χ)I2. (d) Final background approximations I2 . (e) Final deformations.
In Fig. 6 we compare three deformation models: affine, PUFEM, and fluid. An affine deformation (a) is obviously too restricted to obtain an accurate segmentation from the prior shape that is used (Fig. 5(a), bottom). At the other extreme, the fluid model (c) allows strong local deformations and hence strong deviations from the prior shape; this is visible in the lower part of the right ventricle segmentation, where we can see a curvature inversion compared to the prior and a consequent exclusion of the papillary muscles. The PUFEM model (b) provides a good compromise, achieving both accuracy and compliance with the prior shape.
(a)
(b)
(c)
(d)
Fig. 4. Two cardiac ultrasound long-axis images, four-chamber views: (a) Initialization (identical for both images). (b) Final segmentations. (c) Final piecewise-smooth approximations χI1 + (1 − χ)I2 . (d) Final deformations.
(a)
(c)
(b)
(d)
Fig. 5. Cardiac MR long-axis images, two-chamber (first line) and four-chamber (second line) views: (a) Initializations. (b) Final segmentations. (c) Final piecewise-smooth approximations χI1 + (1 − χ)I2 . (d) Final deformations. Papillary muscles are pointed at by arrows.
(a)
(b)
(c)
Fig. 6. Comparison between three deformation models. Final segmentations are displayed on the first line and final deformations on the second line. (a) Affine model. (b) PUFEM model. (c) Affine + fluid model [12].
6 Conclusion
We introduced a novel variational approach for two-phase, piecewise-smooth image segmentation based on prior shape Competitive Deformation. We cast our
formulation into the Partition of Unity Finite Element framework, motivated by its long-range regularization properties, suitable for the class of transformations that is needed to abide by the shape prior. Indeed, the inter-node conformity constraint provides a good control over the globality of the deformation and does not penalize basic global transformations. This framework is also well-adapted for representing the region intensity models as smooth approximations of the original image on its whole domain. Our algorithm was successfully applied to challenging synthetic images and to medical images, with robustness to noise, occlusions and leaks.
References 1. Cootes, T.F., Taylor, C.J., Cooper, D.H., Graham, J.: Active shape models: Their training and application. Computer Vision and Image Understanding 61, 38–59 (1995) 2. Paragios, N., Rousson, M., Ramesh, V.: Matching distance functions: A shape-toarea variational approach for global-to-local registration. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) ECCV 2002. LNCS, vol. 2351, pp. 775–789. Springer, Heidelberg (2002) 3. Cremers, D., Sochen, N.A., Schnorr, C.: Towards recognition-based variational segmentation using shape priors and dynamic labeling. In: Scale Space, pp. 388–400 (2003) 4. Chan, T.F., Zhu, W.: Level set based shape prior segmentation. In: IEEE Computer Vision and Pattern Recognition or CVPR, vol. II, pp. 1164–1170 (2005) 5. Raviv, T.R., Kiryati, N., Sochen, N.A.: Prior-based segmentation by projective registration and level sets. In: International Conference on Computer Vision, pp. 204–211 (2005) 6. Leventon, M.E., Grimson, W.E.L., Faugeras, O.D.: Statistical shape influence in geodesic active contours. In: IEEE Computer Vision and Pattern Recognition or CVPR, pp. 316–323 (2000) 7. Tsai, A., Yezzi Jr., A.J., Wells III, W.M., Tempany, C., Tucker, D., Fan, A., Grimson, W.E.L., Willsky, A.S.: Model-based curve evolution technique for image segmentation. In: IEEE Computer Vision and Pattern Recognition or CVPR, pp. 463–468 (2001) 8. Tsai, A., Yezzi Jr., A.J., Wells III, W.M., Tempany, C., Tucker, D., Fan, A., Grimson, W.E.L., Willsky, A.S.: A shape-based approach to the segmentation of medical imagery using level sets. IEEE Trans. Medical Imaging 22, 137–154 (2003) 9. Bresson, X., Vandergheynst, P., Thiran, J.P.: A variational model for object segmentation using boundary information and shape prior driven by the mumford-shah functional. International Journal of Computer Vision 68, 145–162 (2006) 10. Cremers, D., Kohlberger, T., Schnorr, C.: Shape statistics in kernel space for variational image segmentation. Pattern Recognition 36, 1929–1943 (2003) 11. Hong, B.W., Prados, E., Soatto, S., Vese, L.A.: Shape representation based on integral kernels: Application to image matching and segmentation. In: IEEE Computer Vision and Pattern Recognition or CVPR, vol. I, pp. 833–840 (2006)
12. Saddi, K.A., Chefd’hotel, C., Rousson, M., Cheriet, F.: Region-based segmentation via non-rigid template matching. In: Workshop on Mathematical Methods in Biomedical Image Analysis (2007) 13. Zhu, S.C., Yuille, A.: Region competition: Unifying snakes, region growing, and bayes/mdl for multiband image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 18, 884–900 (1996) 14. Chan, T., Vese, L.: Active contours without edges. IEEE Trans. on Image Processing 10, 266–277 (2001) 15. An, J.H., Chen, Y.: Region based image segmentation using a modified mumfordshah algorithm. In: Scale Space and Variational Methods in Computer Vision, pp. 733–742 (2007) 16. Vese, L.A., Chan, T.F.: A multiphase level set framework for image segmentation using the mumford and shah model. International Journal of Computer Vision 50, 271–293 (2002) 17. Tsai, A., Yezzi Jr., A.J., Willsky, A.S.: Curve evolution implementation of the Mumford-Shah functional for image segmentation, denoising, interpolation, and magnification. IEEE Trans. Image Processing 10, 1169–1186 (2001) 18. Mumford, D., Shah, J.: Optimal approximations by piecewise smooth functions and associated variational problems. Comm. on Pure and Applied Math. 42, 577–685 (1989) 19. Mory, B., Ardon, R., Thiran, J.P.: Fuzzy region competition: A convex two-phase segmentation framework. In: International Conference on Scale Space Methods and Variational Methods in Computer Vision, pp. 214–226 (2007) 20. Brox, T., Cremers, D.: On the statistical interpretation of the piecewise smooth mumford-shah functional. In: Scale Space and Variational Methods in Computer Vision, pp. 203–213 (2007) 21. Li, C.M., Kao, C.Y., Gore, J.C., Ding, Z.H.: Implicit active contours driven by local binary fitting energy. In: IEEE Computer Vision and Pattern Recognition or CVPR, pp. 1–7 (2007) 22. Kybic, J., Unser, M.: Multidimensional elastic registration of images using splines. In: International Conference on Image Processing, pp. 455–458 (2000) 23. Rueckert, D., Sonoda, L.I., Hayes, C., Hill, D.L.G., Leach, M.O., Hawkes, D.J.: Nonrigid registration using free-form deformations: Application to breast MR images. IEEE Trans. Medical Imaging 18 (1999) 24. Bookstein, F.: Principal warps: Thin-plate splines and the decomposition of deformations. IEEE Transactions of Pattern Analysis and Machine Intelligence 11, 567–585 (1989) 25. Fornefett, J., Rohr, K., Stiehl, H.: Elastic registration of medical images using radial basis functions with compact support. In: Conference on Computer Vision and Pattern Recognition, pp. 402–409 (1999) 26. Babuska, I., Melenk, J.M.: The partition of unity method. International Journal of Numerical Methods in Engineering 40, 727–758 (1997) 27. Shewchuk, J.: An introduction to the conjugate gradient method without the agonizing pain. Technical report, Carnegie Mellon University, Pittsburgh, PA, USA (1994) 28. Makram-Ebeid, S., Somphone, O.: Non-rigid image registration using a hierarchical partition of unity finite element method. In: International Conference on Computer Vision (2007)
Appendix: Entries of M_i and g_i

Estimating the image approximation in region i (step (A) in the minimization of the functional (12)) boils down to the matrix equation:

M_i · α_i = g_i   (19)

where M_i is a symmetric, non-negative definite matrix of size Σ_n ρ_i^{(n)}, and g_i a vector of length Σ_n ρ_i^{(n)}. Their entries are given by:

m_{i,(r,n)(s,m)} = ∫_Ω φ_i^{(m)} φ_i^{(n)} ( χ_i p_{is}^{(m)} p_{ir}^{(n)} − μ_i Σ_β κ_i^β D_β p_{is}^{(m)} D_β p_{ir}^{(n)} ) + δ_{mn} μ_i ∫_Ω φ_i^{(n)} Σ_β κ_i^β D_β p_{is}^{(n)} D_β p_{ir}^{(n)}   (20)

g_{i,(r,n)} = ∫_Ω χ_i φ_i^{(n)} p_{ir}^{(n)} I   (21)

δ_{mn} being the Kronecker delta, equal to 1 if m = n and 0 otherwise. Since the support of a PU-function is included in the corresponding patch, m_{i,(r,n)(s,m)} equals zero when Ω^{(m)} ∩ Ω^{(n)} = ∅, i.e. when m and n are not neighbours in 4-connectivity. Hence the sparseness of M_i.
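The per-region system M_i α_i = g_i is sparse, since only 4-connected node pairs produce non-zero blocks, so it is natural to store it in a sparse format and solve it iteratively, e.g. with the conjugate gradient method the paper cites [27]. The sketch below (Python/SciPy) is illustrative only: the block values are placeholders rather than the actual integrals of Eqs. (20)–(21), and the node grid and block size rho are assumptions.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def assemble_and_solve(n_nodes_x, n_nodes_y, rho=3):
    # Assemble a sparse, symmetric system over a grid of PUFEM nodes and solve it.
    # Each node carries `rho` polynomial coefficients; only 4-connected node pairs
    # (and a node with itself) produce non-zero blocks, mirroring the sparsity
    # argument of the appendix. Block values here are placeholders.
    n_nodes = n_nodes_x * n_nodes_y
    size = n_nodes * rho
    M = sp.lil_matrix((size, size))
    g = np.zeros(size)

    def node(ix, iy):
        return iy * n_nodes_x + ix

    for iy in range(n_nodes_y):
        for ix in range(n_nodes_x):
            n = node(ix, iy)
            # Diagonal block: stands in for the integrals over patch Omega^(n).
            M[n*rho:(n+1)*rho, n*rho:(n+1)*rho] = 4.0 * np.eye(rho)
            g[n*rho:(n+1)*rho] = 1.0
            # Off-diagonal blocks only for 4-connected neighbours.
            for dx, dy in ((1, 0), (0, 1)):
                jx, jy = ix + dx, iy + dy
                if jx < n_nodes_x and jy < n_nodes_y:
                    m = node(jx, jy)
                    block = -0.5 * np.eye(rho)
                    M[n*rho:(n+1)*rho, m*rho:(m+1)*rho] = block
                    M[m*rho:(m+1)*rho, n*rho:(n+1)*rho] = block  # keep symmetry
    alpha, info = cg(sp.csr_matrix(M), g)
    return alpha, info

alpha, info = assemble_and_solve(8, 8)
print("CG converged" if info == 0 else "CG did not converge", alpha.shape)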
Vision-Based Multiple Interacting Targets Tracking via On-Line Supervised Learning Xuan Song, Jinshi Cui, Hongbin Zha, and Huijing Zhao Key Laboratory of Machine Perception (Ministry of Education), Peking University, China {songxuan,cjs,zha,zhaohj}@cis.pku.edu.cn
Abstract. Successful multi-target tracking requires locating the targets and labeling their identities. This mission becomes significantly more challenging when many targets frequently interact with each other (present partial or complete occlusions). This paper presents an on-line supervised learning based method for tracking multiple interacting targets. When the targets do not interact with each other, multiple independent trackers are employed for training a classifier for each target. When the targets are in close proximity or present occlusions, the learned classifiers are used to assist in tracking. The tracking and learning supplement each other in the proposed method, which not only deals with tough problems encountered in multi-target tracking, but also ensures the entire process to be completely on-line. Various evaluations have demonstrated that this method performs better than previous methods when the interactions occur, and can maintain the correct tracking under various complex tracking situations, including crossovers, collisions and occlusions.
1 Introduction
Multi-target tracking plays a vital role in various applications, such as surveillance, sports video analysis, human motion analysis and many others. Multi-target tracking is much easier when the targets are distinctive and do not interact with each other. It can be solved by employing multiple independent trackers. However, for those targets that are similar in appearance, obtaining their correct trajectories becomes significantly more challenging when they are in close proximity or present partial occlusions. Specifically, maintaining the correct tracking seems almost impossible when the well-known “merge/split” condition occurs (some targets occlude others completely, but they split after several frames). Hence, the goals of this research are: 1) to devise a new method that will help obtain a better tracking performance than those obtained from previous methods when the interactions occur; 2) to make a new attempt to solve the “merge/split” problem in the multi-target tracking area. In this paper, we present an on-line supervised learning based method for tracking a variable number of interacting targets. The essence of this research is that the learning and tracking can be integrated and supplement each other in
Fig. 1. Tracking for learning: When A and B do not interact with each other, we employ independent trackers to track them and the tracking results are used for learning. For each classifier of targets, the positive samples are dependent on its tracking results, and the negative samples are dependent on the other targets. We should update the classifiers per frame when the new samples are coming in.
Fig. 2. Learning for tracking: When the targets are in close proximity or a “merge/split” condition occurs, we use these classifiers to assist in the tracking.
one framework to deal with various complex tracking problems. The core idea of our method is depicted in Fig. 1 and Fig. 2. For purposes of simplicity, we only track two targets, A and B. When the two targets do not interact with each other (see Fig. 1), tracking becomes very easy and multiple independent trackers are employed. Because these tracking results are reliable, they are used as positive or negative samples to train a classifier for each target. When the two targets are in close proximity (see Fig. 2), the learned classifiers are used to assist in tracking. Specifically, when the two targets merge, we assign a new state space and track this “merging target” as one target. When they split, their classifiers are used again to specify a correct identification. In this research, we extend existing learning-based single-target tracking methods [1,2] into the multi-target tracking area. In this procedure, we solve two crucial problems, which is the main contribution of this paper: (1) Tracking for learning: we solve the problem of how to obtain the positive and negative samples with no human interaction to achieve on-line supervised learning. (2) Learning for tracking: we solve the problem of how to use these learned classifiers to deal with difficult problems (interaction or merge/split) encountered in multi-target tracking.
Compared to traditional multi-target tracking algorithms, our method offers several advantages. Firstly, since the appearance of each target is captured by a classifier through a supervised learning process, the appearance models of the targets become increasingly stronger over time and sufficiently exploit the targets' history information. Moreover, since information from the other targets is treated as negative samples for the learning, each classifier takes the other targets into account and has strong discriminative power. Through these classifiers, we can deal with challenging tracking situations easily. Secondly, our method can automatically switch between tracking and learning, which makes them supplement each other in one framework and ensures that the entire process is completely on-line. Lastly, our method is a general method that can be extended in many ways: any better independent tracker, learning algorithm or feature space can be employed in the proposed method. The remainder of this paper is organized as follows. In the following section, related work is briefly reviewed. Section 3 introduces the switching between learning and tracking in different tracking situations. Sections 4 and 5 provide the details of tracking for learning and learning for tracking. Experiments and results are presented in Section 6, and the paper is finally summarized in Section 7.
2 Related Work
Over the last couple of years, a large number of algorithms for multi-target tracking have been proposed. Typically, multi-target tracking can be solved through data association [3]. The nearest neighbor standard filter (NNSF) [3] associates each target with the closest measurement in the target state space. However, this simple procedure prunes away many feasible hypotheses and cannot solve the “labeling” problem when the targets are crowded. In this respect, a widely used approach to multi-target tracking exploits a joint state space representation which concatenates all of the targets' states together [4,5,6], or infers the joint data association problem by characterizing all possible associations between targets and observations, such as the joint probabilistic data association filter (JPDAF) [3,7,8], Monte Carlo technique based JPDA algorithms (MC-JPDAF) [9,10] and Markov chain Monte Carlo data association (MCMC-DA) [11,12,13]. However, with an increasing number of tracking targets, the state space becomes increasingly large and obtaining an accurate MAP estimate in a large state space becomes quite difficult. Furthermore, the computational complexity of most of the methods mentioned above grows exponentially with the number of tracking targets. Additionally, researchers have also proposed multiple parallel filters to track multiple targets [14], that is, one filter per target, where each one has its own small state space. In spite of this, when interactions among targets occur, this method has difficulty maintaining correct tracking. Therefore, modeling the interactions among targets becomes an incredibly important issue. Khan et al. [15] use a Markov random field (MRF) motion prior to model the interactions among targets. Qu et al. [16] proposed a magnetic-inertia potential modeling
to handle the “merge error” problem. Lanz et al. [17] proposed a hybrid jointseparable model to deal with the interactions among targets. Sullivan et al. [18] tracked the isolated targets and the “merging targets” respectively, and then connected these trajectories by a clustering procedure. Nillius et al. [19] employed a track graph to describe when targets are isolated and how they interact. They utilized Bayesian network to associate the identities of the isolated tracks by exploiting the graph. However, most of these methods mentioned above (except [18,19]) consider tracking as a Markov process, which fail to sufficiently exploit target history information and present a strong appearance model of the targets. Recently, there has been a trend of introducing learning techniques into single target tracking problems, and tracking is viewed as a classification problem in the sense of distinguishing the tracking target from the background. Representative publications include [1,2,20,21]. In this work, we extended this concept into the multi-target tracking to deal with complicated problems encountered in multitarget tracking.
3 Tracking Situation Switch
We encounter different situations during tracking, such as tracking of non-correlated targets, tracking of interacting targets (which present partial occlusions or merge with each other), splitting of “merging targets”, and appearing or disappearing of targets. We aim to detect these conditions and make the tracking and learning switch automatically from one to another. In this work, we employ a detection-driven strategy to deal with this task. In each frame, we obtain the detections of the targets, which aid in judging the tracking situation. There have been a large number of human detection algorithms in recent years [22,23], which can provide reliable and accurate detection results. However, the basic assumption of this research is that the background is static. Hence, we only utilize the simple background subtraction method [24,25] to obtain the detections. After the background subtraction, we utilize Mean-shift [26] to search for the centers of the detections. We define four conditions for each target: non-correlated targets condition, correlated targets condition, merge/split condition and appearing/disappearing of targets. A statistical distance function is employed to aid in detecting these conditions:

(X*_{t,k} − x)² / (G² σ_x²) + (Y*_{t,k} − y)² / (G² σ_y²) = 1   (1)

where d*_{t,k} = (X*_{t,k}, Y*_{t,k}) is the predicted position of target k in frame t, G the threshold, and σ_x and σ_y the covariances. We use this ellipse to search for possible detections of targets, and the targets in the different conditions are defined as follows:
Non-correlated targets. If a target locates only one detection and this detection is only possessed by this target, then this is a non-correlated target (as shown in Fig.3-b).
Fig. 3. Different tracking situations. The red points are centers of detections obtained by Mean-shift; the green points are the predicted positions of the targets and the ellipses are the distance function. (b) One target only locates one detection and this detection is only possessed by this target; this is a non-correlated target. (c) One detection is shared by two targets and its area is larger than a threshold; the two targets are correlated targets. (d) One detection is also shared by two targets, but its area is smaller than the threshold. Hence, the two targets are merging. We track them as one target; when this merging target locates more than one detection, we believe that these targets split.
Correlated targets. If one detection is shared by more than one target and the area of this detection is larger than a threshold that depends on the scale parameter, we believe that these targets are in close proximity or present partial occlusions. Hence, they are correlated targets (as shown in Fig.3-c). Merge/Split condition. If one detection is shared by more than one target and the area of this detection is smaller than the threshold, we believe that these targets present complete occlusions. We define these targets as merging. If, several frames later, this merging target locates more than one detection, we believe these targets have split. Hence, this is a merge/split condition (as shown in Fig.3-d). Appearing/Disappearing of targets. If a target on the edge of the coordinate plane cannot locate any detection for several continuous frames, this target may have disappeared. We save the state of this target and stop tracking it. Similarly, if a detection on the edge of the coordinate plane cannot be associated with any target for several continuous frames, it should be a new target. We assign a new state space for this target and start to track it. Therefore, we can detect these conditions easily and automatically switch between tracking and learning.
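As an illustration of how the gating function of Eq. (1) can be used to associate detections with predicted target positions, the sketch below implements the ellipse test in Python/NumPy. The threshold G, the standard deviations and the area threshold are placeholder values, not the ones used in the paper.

import numpy as np

def in_gate(pred, det, G=3.0, sigma_x=10.0, sigma_y=10.0):
    # Eq. (1): a detection lies inside the gating ellipse of a predicted position.
    dx2 = (pred[0] - det[0]) ** 2 / (G ** 2 * sigma_x ** 2)
    dy2 = (pred[1] - det[1]) ** 2 / (G ** 2 * sigma_y ** 2)
    return dx2 + dy2 <= 1.0

def classify_targets(predictions, detections, det_areas, area_thresh=800.0):
    # Label each detection as non-correlated / correlated / merging
    # depending on how many predicted targets fall inside its gate.
    labels = {}
    for j, det in enumerate(detections):
        owners = [k for k, pred in predictions.items() if in_gate(pred, det)]
        if len(owners) <= 1:
            labels[j] = ("non-correlated", owners)
        elif det_areas[j] > area_thresh:
            labels[j] = ("correlated", owners)
        else:
            labels[j] = ("merging", owners)
    return labels

# Toy usage: two predicted targets share one small detection -> "merging".
preds = {0: np.array([100.0, 100.0]), 1: np.array([110.0, 105.0])}
dets = [np.array([105.0, 102.0])]
print(classify_targets(preds, dets, det_areas=[500.0]))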
4 Tracking for Learning
When the targets do not interact with each other (non-correlated targets), tracking becomes relatively easy since it can be solved through multiple independent trackers. Specifically, the obtained results are accurate and credible. Consequently, these tracking results can be utilized as samples for supervised learning. In this section, we provide details about the independent trackers and the on-line supervised learning process.
4.1 Independent Tracker
Our method is a general method that does not depend on the independent tracker. Therefore, any tracker with reasonable performance can be employed, such as Mean-shift [27] and CONDENSATION [28]. In this work, we utilize the color-based tracking model [29] and employ the detection-based particle filter [14] to perform the non-correlated targets tracking. The state space x_{t,k} of target k in frame t is defined by x_{t,k} = [d_t, d_{t−1}, s_t, s_{t−1}], where d = (X, Y) is the center of the bounding box in the image coordinate system, and s is the scale factor. Let y_t denote the observations. A constant-velocity motion model is utilized, which is best described by a second-order autoregressive equation

x_{t,k} = A x_{t−1,k} + B x_{t−2,k} + C N(0, Σ)   (2)

where the matrices A, B, C and Σ are adjusted manually in the experiments, and N(0, Σ) is a Gaussian noise with zero mean and standard deviation of 1. The likelihood for the filtering is represented by HSV color histogram similarity [27], which can be written as

P(y_t | x_{t,k}) ∝ exp(−λ D²(K*, K(d_{t,k})))   (3)

where D is the Bhattacharyya distance [27] on HSV histograms, K* the reference color model, K(d_{t,k}) the candidate color model, and λ is set to 20 in the experiments. After re-weighting and re-sampling in the particle filter, we obtain the new position of each target.
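A minimal sketch of this per-target particle filter step is given below (Python/NumPy): second-order autoregressive propagation as in Eq. (2), HSV-histogram likelihood as in Eq. (3), then re-weighting and re-sampling. The concrete propagation matrices, histogram binning and λ are placeholder assumptions, not the paper's settings.

import numpy as np

def hsv_histogram(patch_hsv, bins=8):
    # Normalized joint histogram over the H and S channels (HSV assumed in [0, 1]).
    h = np.histogramdd(patch_hsv[..., :2].reshape(-1, 2),
                       bins=(bins, bins), range=((0, 1), (0, 1)))[0]
    return h.ravel() / (h.sum() + 1e-12)

def bhattacharyya(p, q):
    return np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(p * q))))

def particle_filter_step(particles_t1, particles_t2, image_hsv, ref_hist,
                         lam=20.0, noise_std=2.0, patch=20):
    # One step for one target: propagate (Eq. 2), weight (Eq. 3), resample.
    n = len(particles_t1)
    # Constant-velocity propagation: x_t = 2*x_{t-1} - x_{t-2} + noise
    # (a simple choice of A and B; the paper tunes these matrices manually).
    pred = 2.0 * particles_t1 - particles_t2 + np.random.normal(0, noise_std, (n, 2))
    weights = np.empty(n)
    H, W = image_hsv.shape[:2]
    for i, (x, y) in enumerate(pred):
        x0 = int(np.clip(x, patch, W - patch))
        y0 = int(np.clip(y, patch, H - patch))
        cand = hsv_histogram(image_hsv[y0 - patch:y0 + patch, x0 - patch:x0 + patch])
        weights[i] = np.exp(-lam * bhattacharyya(ref_hist, cand) ** 2)
    weights = weights + 1e-12
    weights /= weights.sum()
    idx = np.random.choice(n, size=n, p=weights)        # re-sampling
    estimate = np.average(pred, axis=0, weights=weights)
    return pred[idx], particles_t1[idx], estimate       # new particle sets + position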
4.2 Random Image Patches for Learning
Once we obtain the tracking result of each target at time t, a set of random image patches [30] is spatially sampled within the image region of the target. We utilize these random image patches as samples for the on-line supervised learning. In this case, each target is represented by a “bag of patches” model. Extracting distinguishable features from image patches is particularly important for the learning process. There is a large number of derived features that can be employed to represent the appearance of an image patch, such as the raw RGB intensity vector, texture descriptors (Haralick features), the mean color vector and so on. As suggested by [30], we employ the color + texture descriptor to extract features from image patches. We adopt a d-dimensional feature vector to represent each image patch. These feature vectors can then be utilized as samples for learning or testing.
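The following sketch shows one way to sample random patches inside a target's bounding box and turn each into a small feature vector (here simply mean color plus a gradient-energy value, as a stand-in for the color + texture descriptor of [30]); the patch size and feature choice are assumptions for illustration.

import numpy as np

def sample_patch_features(image, box, n_patches=50, patch=12, rng=None):
    # Sample random patches inside box = (x0, y0, x1, y1) of an RGB image and
    # describe each patch by its mean color and a simple texture proxy.
    rng = rng or np.random.default_rng()
    x0, y0, x1, y1 = box
    feats = []
    for _ in range(n_patches):
        x = int(rng.integers(x0, max(x0 + 1, x1 - patch)))
        y = int(rng.integers(y0, max(y0 + 1, y1 - patch)))
        p = image[y:y + patch, x:x + patch].astype(float)
        mean_color = p.reshape(-1, p.shape[-1]).mean(axis=0)
        gray = p.mean(axis=-1)
        gy, gx = np.gradient(gray)
        texture = np.sqrt(gx ** 2 + gy ** 2).mean()   # gradient energy
        feats.append(np.concatenate([mean_color, [texture]]))
    return np.vstack(feats)   # shape: (n_patches, d)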
4.3 On-Line Supervised Learning
For each target, a strong classifier should be trained, which represents the appearance model of the target. Let each image patch be represented as a d-dimensional feature vector. For target k in frame t, {s^i_{t,k}, l^i_{t,k}}_{i=1}^N denote the N samples and their labels, where s ∈ ℝ^d and l ∈ {−1, +1}. The positive samples are the image
patches that come from the region of target k, while the negative samples are the image patches that come from the other targets. In this work, we employ Classification and Regression Trees [31] as weak classifiers. Once new samples are available, the strong classifier is updated accordingly, which makes the classifier stronger and reflects the changes in the object appearance. Therefore, poor weak classifiers are removed and newly trained classifiers are added, as motivated by Ensemble Tracking [1]. The whole learning algorithm is shown below.
Learning Algorithm
Input: feature vectors of image patches and their labels {s^i_{t,k}, l^i_{t,k}}_{i=1}^N, t = 1, . . . , T
Output: the strong classifier H(s_{t,k}) of target k at time t

Train a Strong Classifier (for frame 1):
1. Initialize weights {w_i}_{i=1}^N to be 1/N.
2. For j = 1 . . . M (train M weak classifiers)
   (a) Make {w_i}_{i=1}^N a distribution.
   (b) Train a weak classifier h_j.
   (c) Set err = Σ_{i=1}^N w_i |h_j(s^i_{1,k}) − l^i_{1,k}|.
   (d) Set the weak classifier weight α_j = 0.5 log((1 − err)/err).
   (e) Update the example weights w_i = w_i exp(α_j |h_j(s^i_{1,k}) − l^i_{1,k}|).
3. The strong classifier is given by sign(H(s_{1,k})), where H(s_{1,k}) = Σ_{j=1}^M α_j h_j(s_{1,k}).

Update the Strong Classifier (when a new frame t comes in):
1. Initialize weights {w_i}_{i=1}^N to be 1/N.
2. For j = 1 . . . K (choose the K best weak classifiers and update their weights)
   (a) Make {w_i}_{i=1}^N a distribution.
   (b) Choose the h_j(s_{t−1,k}) with minimal err from {h_1(s_{t−1,k}), . . . , h_M(s_{t−1,k})}.
   (c) Update α_j and {w_i}_{i=1}^N.
   (d) Remove h_j(s_{t−1,k}) from the pool.
3. For j = K + 1 . . . M (add new weak classifiers)
   (a) Make {w_i}_{i=1}^N a distribution.
   (b) Train a weak classifier h_j.
   (c) Compute err and α_j.
   (d) Update the example weights {w_i}_{i=1}^N.
4. The updated strong classifier is given by sign(H(s_{t,k})), where H(s_{t,k}) = Σ_{j=1}^M α_j h_j(s_{t,k}).
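A compact Python sketch of this train/update cycle is shown below, using shallow decision trees from scikit-learn as the weak learners (a stand-in for the CART weak classifiers of [31]); the number of weak classifiers M, the number K of retained ones and the reweighting details are illustrative assumptions.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_weak(S, l, w):
    # Train one weak classifier on weighted samples; labels l are in {-1, +1}.
    h = DecisionTreeClassifier(max_depth=2).fit(S, l, sample_weight=w)
    miss = (h.predict(S) != l).astype(float)
    err = np.clip(np.sum(w * miss), 1e-6, 1 - 1e-6)
    alpha = 0.5 * np.log((1 - err) / err)
    return h, alpha, miss

def train_strong(S, l, M=10):
    # Initial training of the strong classifier (frame 1).
    w = np.full(len(l), 1.0 / len(l))
    ensemble = []
    for _ in range(M):
        w = w / w.sum()                        # keep {w_i} a distribution
        h, alpha, miss = train_weak(S, l, w)
        w = w * np.exp(alpha * miss)           # up-weight misclassified samples
        ensemble.append((h, alpha))
    return ensemble

def update_strong(ensemble, S, l, K=7):
    # Online update: keep the K best weak classifiers, retrain the rest on new data.
    w = np.full(len(l), 1.0 / len(l))
    kept = sorted(ensemble, key=lambda ha: np.mean(ha[0].predict(S) != l))[:K]
    new_ensemble = []
    for h, _ in kept:                          # re-weight the retained classifiers
        w = w / w.sum()
        miss = (h.predict(S) != l).astype(float)
        err = np.clip(np.sum(w * miss), 1e-6, 1 - 1e-6)
        alpha = 0.5 * np.log((1 - err) / err)
        w = w * np.exp(alpha * miss)
        new_ensemble.append((h, alpha))
    for _ in range(len(ensemble) - len(kept)): # train replacement weak classifiers
        w = w / w.sum()
        h, alpha, miss = train_weak(S, l, w)
        w = w * np.exp(alpha * miss)
        new_ensemble.append((h, alpha))
    return new_ensemble

def strong_score(ensemble, S):
    return sum(alpha * h.predict(S) for h, alpha in ensemble)  # sign() gives labels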
5 Learning for Tracking
When the targets are in close proximity, it is difficult to maintain the correct tracking with the independent trackers. Specifically, when the “merge/split” conditions occur, associating the identities of the targets becomes a significantly challenging problem. In this case, the learned classifiers of the targets can be utilized to assist in tracking. In this section, we have provided details on how to employ these classifiers to deal with difficult problems encountered in the tracking.
Fig. 4. Correlated targets tracking: We detected that A and B were correlated targets (Fig. a); some random image patches were sampled in their detected region (Fig. c). We used their classifiers to obtain their score maps (Fig. d). After the particle filtering process, we acquired the tracking results (Fig. e).
5.1 Correlated Targets Tracking
As discussed in Section 3, if the targets are in close proximity or present partial occlusions, we conclude that they are correlated targets. When this condition occurs, a set of random image patches is sampled within the interacting region of the detection map, and the feature vectors of these image patches are input to the classifiers of the interacting targets respectively. The outputs of these classifiers are scores. Hence, we can obtain the score maps of these interacting targets effortlessly. Once we obtain the score maps of the interacting targets, we employ the particle filter technique [32] to obtain the positions of these targets. The likelihood for the update in the particle filter is

P_scores(y_t | x_{t,k}) = Σ_{i=1}^N β_i (1/(√(2π) σ)) exp( −(d(x_{t,k}) − d^i_{t,k})² / σ² )   (4)

where β_i is the normalized score of image patch i, d(x_{t,k}) the center position of candidate target k, d^i_{t,k} the center position of image patch i, and σ is the covariance, which depends on the size of the image patch. For each target, the observation is thus sharply peaked around its real position. As a result, the particles are strongly focused around the true target state after each level's re-weighting and re-sampling. Subsequently, we obtain the new positions of these interacting targets. An overview of the process is shown in Fig. 4.
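The sketch below evaluates this score-map likelihood for a set of candidate particle positions, assuming the classifier scores of the sampled patches have already been computed and normalized; the kernel width σ is a placeholder tied to the patch size.

import numpy as np

def score_map_likelihood(candidates, patch_centers, patch_scores, sigma=12.0):
    # Eq. (4): weight each candidate position by a score-weighted Gaussian
    # kernel placed at every sampled patch center.
    #   candidates:    (P, 2) particle positions d(x_{t,k})
    #   patch_centers: (N, 2) patch centers d^i_{t,k}
    #   patch_scores:  (N,)   normalized classifier scores beta_i
    diff = candidates[:, None, :] - patch_centers[None, :, :]       # (P, N, 2)
    sq_dist = np.sum(diff ** 2, axis=-1)                            # (P, N)
    kernel = np.exp(-sq_dist / sigma ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return kernel @ patch_scores                                    # (P,)

# Toy usage: particles near high-scoring patches receive larger weights.
cands = np.array([[100.0, 100.0], [140.0, 100.0]])
centers = np.array([[102.0, 101.0], [98.0, 99.0]])
scores = np.array([0.6, 0.4])
print(score_map_likelihood(cands, centers, scores))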
5.2 Merge/Split Condition
Sometimes, several targets occlude another target completely. Maintaining the correct tracking of targets seems quite impossible. Once this condition occurs, we deal with it as a merge/split condition. Upon detecting that some targets merge together as discussed in section 3, we initialize the state of the “merging targets” and track it as one target, which is similar to the non-correlated targets tracking depicted in section 4. If we detect that this “merging target” splits and becomes an interacting condition or
Fig. 5. Merge/Split condition: In frame 110, we detected that A and B were merging (Fig. a); we track A and B as one target (Fig. b). After 15 frames, we detected that they split (Fig. c), and some random image patches were sampled in them (Fig. d). We used their classifiers to obtain their score maps (Fig. e). After the particle filtering process, we obtained the tracking results (Fig. f).
Fig. 6. Disposal of uncertain detections: For the OTCBVS dataset, false alarms frequently took place (Fig. b and Fig. c). We accumulated some continuous frames (Fig. d) and used Mean-shift to obtain the detections (Fig. e).
non-correlated condition, we utilize the classifiers of these targets to identify them (as shown in Fig. 5). Hence, we can link the trajectories of these targets without difficulty. With the help of the classifiers, our method is able to deal with various complex situations in the tracking. In addition, the tracking and learning supplement each other in the proposed method, forming an adaptive loop, which ensures that the entire process is completely on-line.
6 Experiments and Results
We evaluated the proposed method on different kinds of videos: the SCEPTRE Dataset [33], the OTCBVS Benchmark Dataset [34] and our own surveillance videos. The data selected for testing were five different clips in which complex interactions frequently took place. All the algorithms mentioned in the experiments were implemented in non-optimized MATLAB code. The results and comparisons are detailed in this section.
6.1 Disposal of Uncertain Detections
For the SCEPTRE Dataset and surveillance video, we achieved reliable detections by using background subtraction, since their background was simple or the targets in the image were large. However, for the OTCBVS Benchmark Dataset, the detections obtained by the background subtractions were unreliable: False
Fig. 7. Disposal of interactions or “merge/split” among targets: The first row shows the tracking results of multiple independent color-based trackers [29]. The second row shows the results of multiple independent Ensemble Trackers [1], and the third shows our tracking results.
alarms or ruptured human bodies frequently occurred (as shown in Fig. 6-b,c), which sometimes influenced the tracking. Hence, we employed a practical method to deal with this problem in the experiments. We accumulated several continuous detection maps and utilized Mean-shift to obtain the centers of the new detections (as shown in Fig. 6-e). We found that this simple strategy could deal with most false alarms or non-connected human bodies, ensuring the robustness of the tracking.
6.2 Tracking Results
Fig. 7 displays the efficacy of the proposed method in dealing with interactions or “merge/split” among multiple targets. In this experiment, we performed the tracking with our method, with multiple independent color-based trackers [29], and with multiple independent Ensemble Trackers [1], respectively. We can see that our method deals with the “merge/split” problem easily and maintains the correct identities of the targets when they split (at frame 323), which is difficult when using only the two kinds of independent trackers. Tracking results on the different datasets under complex tracking situations are displayed in Fig. 8, Fig. 9 and Fig. 10. More tracking results can be seen in our supplementary video.
6.3 Quantitative Comparison
We conducted two groups of comparisons to show the tracking performance of different methods under interacting situations. The first comparison was among some methods which can track a variable number of targets, and the second was among the methods which can track a fixed number of targets. The selected dataset for testing was SCEPTRE which had the most complex interactions in
Fig. 8. Tracking results of surveillance video: The first row is the detection of targets obtained by background subtraction; the second row is the tracking results of our method. Note that targets 5, 6 and 7 were merging in frame 231 and targets 4, 6 and 7 were merging in frame 305; when they split, we were still able to maintain their correct tracking. Please see our supplementary video for more details.
Fig. 9. Tracking results on the SCEPTRE dataset: This dataset is very challenging, with complex interactions occurring frequently. This is an example of our results. Note that there was an interaction between targets 9 and 10 in frame 660; they split in frame 678. In frame 786, we still maintained their correct tracking. Please see our supplementary video for more details.
Fig. 10. Tracking results on the OTCBVS dataset: Note that targets 7, 8, 9 and targets 5, 6 were merging in frame 1518. Please see our supplementary video for more details.
the three kinds of video. The ground truth was obtained with the ViPER-GT software [35], and failed tracking included missed targets, false locations and identity switches.
[Plot: number of failed trackings (top) and interactions (bottom) over 2000 frames for Our Method, MCMC-PF and BPF]

Algorithm   Success Rate
BPF         68.66%
MCMC-PF     75.63%
Ours        83.75%
Fig. 11. Quantitative comparison under interacting situations among three methods which can track a variable number of targets
[Plot: number of failed trackings (top) and interactions (bottom) over 1200 frames for Our Method, MC-JPDAF, JPDAF and NNSF]

Algorithm   Success Rate
NNSF        62.17%
JPDAF       73.13%
MC-JPDAF    82.75%
Ours        86.20%
Fig. 12. Quantitative comparison under interacting situations among four methods which can track a fixed number of targets
In the first experiment, we performed a quantitative comparison with two well-known multi-target tracking algorithms: the Boosted Particle Filter (BPF) [4] and the MCMC-based Particle Filter (MCMC-PF) [15]. For the BPF, AdaBoost detections were replaced by background subtraction. We conducted a statistical survey of 2000 continuous frames to evaluate the tracking performance of these methods under interacting situations. Fig. 11 illustrates the quantitative evaluation of the three methods, and their success rates are shown in the accompanying table. In the second experiment, we conducted a comparison with several classical data association algorithms: the Joint Probabilistic Data Association Filter (JPDAF), the Monte Carlo Joint Probabilistic Data Association Filter (MC-JPDAF) and the Nearest Neighbor Standard Filter (NNSF). JPDAF and MC-JPDAF can only track a fixed number of targets, which must stay in the image all along. Therefore, we tracked seven targets which stayed in the image for 1200 continuous frames. A quantitative evaluation of the four methods is shown in Fig. 12. Although the proposed method obtained better tracking performance than the above methods under interacting situations and could deal with “merge/split” easily, it also has some limitations. On the SCEPTRE dataset, we discovered that when players with similar appearance merged or split, our method might fail due to the similar score maps obtained by the classifiers. The majority of the failed tracking in this dataset was caused by this condition. In the future, a more powerful feature space (including other cues, such as motion or shape) should be exploited to solve this problem.
7 Conclusion
In this paper, a novel on-line supervised learning based method is presented for tracking a variable number of interacting targets. Different evaluations demonstrate the superior tracking performance of the proposed method under complex situations. Our method is a general method that can be extended in many ways: with a more robust independent tracker, a more powerful feature space, a more reliable detection algorithm or a faster supervised learning algorithm. For the present testing datasets, we concluded that a better feature space that can distinguish targets with similar appearance and a more robust detection algorithm can further improve the tracking performance significantly. This is left for future work. Acknowledgments. This work was supported in part by the NKBRPC (No.2006CB303100), NSFC Grant (No.60333010), NSFC Grant (No.60605001), the NHTRDP 863 Grant (No.2006AA01Z302) and (No.2007AA11Z225). We specially thank Kenji Okuma and Yizheng Cai for providing their code on the web. We also thank Vinay Sharma and James W. Davis for providing us with their detection results.
References 1. Avidan, S.: Ensemble tracking. IEEE Trans. PAMI 29, 261–271 (2007) 2. Le, L., Gregory, D.: A nonparametric treatment for location/segmentation based visual tracking. In: Proc. IEEE CVPR, pp. 261–268 (2007) 3. Bar-Shalom, Y., Fortmann, T.E.: Tracking and data association. Academic Press, New York (1998) 4. Okuma, K., Taleghani, A., Freitas, N.D., Little, J.J., Lowe, D.G.: A boosted particle filter: Multitarget detection and tracking. In: Pajdla, T., Matas, J(G.) (eds.) ECCV 2004. LNCS, vol. 3021, pp. 28–39. Springer, Heidelberg (2004) 5. Vermaak, J., Doucet, A., Perez, P.: Maintaining multi-modality through mixture tracking. In: Proc. IEEE ICCV, pp. 1110–1116 (2003) 6. Zhao, T., Nevatia, R.: Tracking multiple humans in complex situations. IEEE Trans. PAMI 7, 1208–1221 (2004) 7. Rasmussen, C., Hager, G.: Probabilistic data association methods for tracking complex visual objects. IEEE Trans. PAMI 23, 560–576 (2001) 8. Gennari, G., Hager, G.: Probabilistic data association methods in visual tracking of groups. In: Proc. IEEE CVPR, pp. 876–881 (2004) 9. Vermaak, J., Godsill, S.J., Perez, P.: Monte carlo filtering for multi target tracking and data association. IEEE Trans. Aerospace and Electronic Systems 41, 309–332 (2005) 10. Schulz, D., Burgard, W., Fox, D., Cremers, A.: People tracking with a mobile robot using sample-based joint probabilistic data association filters. International Journal of Robotics Research 22, 99–116 (2003) 11. Oh, S., Russell, S., Sastry, S.: Markov chain monte carlo data association for general multiple target tracking problems. In: Proc. IEEE Conf. Decision and Control, pp. 735–742 (2004) 12. Khan, Z., Balch, T., Dellaert, F.: Mcmc data association and sparse factorization updating for real time multitarget tracking with merged and multiple measurements. IEEE Trans. PAMI 28, 1960–1972 (2006)
13. Yu, Q., Medioni, G., Cohen, I.: Multiple target tracking using spatio-temporal markov chain monte carlo data association. In: Proc. IEEE CVPR, pp. 642–649 (2007) 14. Cai, Y., Freitas, N.D., Little, J.J.: Robust visual tracking for multiple targets. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3954, pp. 125–135. Springer, Heidelberg (2006) 15. Khan, Z., Balch, T., Dellaert, F.: Mcmc-based particle filtering for tracking a variable number of interacting targets. IEEE Trans. PAMI 27, 1805–1819 (2005) 16. Qu, W., Schonfeld, D., Mohamed, M.: Real-time interactively distributed multiobject tracking using a magnetic-inertia potential model. In: Proc. IEEE ICCV, pp. 535–540 (2005) 17. Lanz, O., Manduchi, R.: Hybrid joint-separable multibody tracking. In: Proc. IEEE CVPR, pp. 413–420 (2005) 18. Sullivan, J., Carlsson, S.: Tracking and labeling of interacting multiple targets. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3953, pp. 661–675. Springer, Heidelberg (2006) 19. Nillius, P., Sullivan, J., Carlsson, S.: Multi-target tracking - linking identities using bayesian network inference. In: Proc. IEEE CVPR, pp. 2187–2194 (2006) 20. Li, Y., Ai, H.Z., Yamashita, T., Lao, S., Kawade, M.: Tracking in low frame rate video: A cascade particle filter with discriminative observers of different life-spans. In: Proc. IEEE CVPR, pp. 1–8 (2007) 21. Grabner, H., Bischof, H.: On-line boosting and vision. In: Proc. IEEE CVPR, pp. 260–267 (2006) 22. Zhe, L., Larry, S.D., David, D., Daniel, D.: Hierarchical part-template matching for human detection and segmentation. In: Proc. IEEE ICCV, pp. 351–358 (2007) 23. Leibe, B., Seemann, E., Schiele, B.: Pedestrian detection in crowded scenes. In: Proc. IEEE CVPR, pp. 661–668 (2005) 24. Stauffer, C., Grimson, W.: Adaptive background mixture models for real-time tracking. In: Proc. IEEE ICCV, pp. 37–63 (1999) 25. Davis, J., Sharma, V.: Fusion-based background-subtraction using contour saliency. In: Proc. IEEE CVPR, pp. 20–26 (2005) 26. Comaniciu, D., Visvanathan, R., Meer, P.: Kernel-based object tracking. IEEE Trans. PAMI 25, 564–575 (2003) 27. Comaniciu, D., Ramesh, V., Meer, P.: Real-time tracking of non-rigid objects using mean shift. In: Proc. IEEE CVPR, pp. 142–149 (2000) 28. Isard, M., Blake, A.: Condensation - conditional density propagation for visual tracking. International Journal of Computer Vision 28, 5–28 (1998) 29. Perez, P., Hue, C., Vermaak, J.: Color-based probabilistic tracking. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) ECCV 2002. LNCS, vol. 2350, pp. 661–675. Springer, Heidelberg (2002) 30. Lu, L., Hager, G.: Dynamic foreground/background extraction from images and videos using random patches. In: Proc. NIPS, pp. 351–358 (2006) 31. Breiman, L., Friedman, J.H., Olshen, R., Stone, C.J.: Classification and regression trees. Wadsworth, Chapman Hall, New York (1984) 32. Doucet, A., Godsill, S.J., Andrieu, C.: On sequential monte carlo sampling methods for bayesian filtering. Statistics and Computing 10, 197–208 (2000) 33. SCEPTRE-Dataset, http://sceptre.king.ac.uk/sceptre/default.html 34. Davis, J., Sharma, V.: Otcbvs benchmark dataset 03, http://www.cse.ohio-state.edu/otcbvs-bench/ 35. ViPER-GT, http://viper-toolkit.sourceforge.net/products/gt
An Incremental Learning Method for Unconstrained Gaze Estimation
Yusuke Sugano1, Yasuyuki Matsushita2, Yoichi Sato1, and Hideki Koike3
1 The University of Tokyo, Tokyo, Japan {sugano,ysato}@iis.u-tokyo.ac.jp
2 Microsoft Research Asia, Beijing, China [email protected]
3 The University of Electro-Communications, Tokyo, Japan [email protected]
Abstract. This paper presents an online learning algorithm for appearance-based gaze estimation that allows free head movement in a casual desktop environment. Our method avoids the lengthy calibration stage using an incremental learning approach. Our system keeps running as a background process on the desktop PC and continuously updates the estimation parameters by taking user’s operations on the PC monitor as input. To handle free head movement of a user, we propose a pose-based clustering approach that efficiently extends an appearance manifold model to handle the large variations of the head pose. The effectiveness of the proposed method is validated by quantitative performance evaluation with three users.
1 Introduction
Gaze estimation is a process of detecting the position the eyes are looking at. It has been an active research topic in computer vision because of its usefulness for a wide range of applications, including human computer interaction, marketing studies and human behavior research. However, despite considerable advances in recent research, current gaze estimation techniques still suffer from many limitations. Creating an accurate gaze estimator that uses simple and low-cost equipment while allowing users to move their heads freely is still an open challenge. Prior approaches are either model-based or appearance-based. Model-based approaches use an explicit geometric model of the eye, and estimate its gaze direction using geometric eye features. For example, one typical feature is the pupil-glint vector [1,2], the relative position of the pupil center and the specular reflection of a light source. While model-based approaches can be very accurate,
This work was done while the first author was visiting Microsoft Research Asia.
they typically need to precisely locate small features on the eye using a highresolution image and often require additional light sources. This often results in large systems with special equipment that are difficult to implement in casual, desktop environments. Appearance-based approaches directly treat an eye image as a high dimensional feature. Baluja and Pomerleau use a neural network to learn a mapping function between eye images and gaze points (display coordinates) using 2,000 training samples [3]. Xu et al . proposed a similar neural network-based method that uses more (3,000) training samples [4]. Tan et al . take a local interpolation approach to estimate unknown gaze point from 252 relatively sparse samples [5]. Recently, Williams et al . proposed a novel regression method called S3 GP (Sparse, Semi-Supervised Gaussian Process), and applied it to the gaze estimation task with partially labeled (16 of 80) training samples [6]. Appearancebased approaches can make the system less restrictive, and can also be very robust even when used with relatively low-resolution cameras. Among model-based methods, one popular approach that handles head movements is to use multiple light sources and camera(s) to accurately locate 3D eye features. Shih and Liu used both multiple cameras and multiple lights for 3D gaze estimation [7]. Zhu et al . use a stereo camera setup with one light source to locate the 3D eye position and estimate 2D gaze positions by considering a generalized gaze mapping which is a function of the pupil-glint vector and the eye position [8,9]. Morimoto et al . propose a single camera method with at least two lights, but show only simulated results [10]. Hennessey et al . develop a similar system with multiple light sources to locate the 3D cornea center by triangulation, and compute the gaze point as the 3D intersection of the monitor surface and the optical axis of the eye [11]. Yoo and Chung use a structured rectangular light pattern and estimate the gaze point from the pupil’s position relative to the rectangle [12]. Coutinho and Morimoto later extended this method with a more precise eye model [13]. In addition to 3D eye features, some methods also use 3D head pose information. Beymer and Flickner, for example, use a pair of stereo systems [14]. The first stereo system computes the 3D head pose, which is then used to guide a second stereo system that tracks the eye region. Matsumoto et al .’s method uses a single stereo system to compute the 3D head pose and estimate the 3D position of the eyeball [15]. A similar approach is also taken by Wang and Sung [16]. These approaches all require special equipment, preventing their use in casual environments. Among the appearance-based approaches, little study has been dedicated to dealing with changes in head pose. Baluja et al .’s method allows some head movement by using training samples from different head poses, but the range of movement is limited. They describe two major difficulties. One is that the appearance of an eye gazing at the same point varies drastically with head motion. Additional information about the head pose is needed to solve this problem. The second difficulty is that the training samples must be collected across the pose space to handle the head movement. This results in a large number of training samples and an unrealistically long calibration period.
Our goal is to make a completely passive, non-contact, single-camera gaze estimation system that has no calibration stage yet still allows changes in head pose. To achieve this goal, we develop a new appearance-based gaze estimation system based on an online learning approach. We assume a desktop environment with a PC camera mounted on the monitor, and observe the fact that the user can be assumed to look at the mouse when he or she clicks. Our system incorporates recent advances in robust single-camera 3D head pose estimation to capture the user's head pose and the eye image continuously. By using the clicked coordinates as gaze labels, the system acquires learning samples in the background while the user uses the PC. Thus, it can learn the mapping between the eye and the gaze adaptively during operation, without a long preliminary calibration. This work has the following major contributions:
– An incremental learning framework. To eliminate the lengthy calibration stage required for appearance-based gaze estimators with free head movement, we employ an incremental learning framework using the user's operations on the PC monitor.
– A pose-based clustering approach. We take a local interpolation-based approach to estimate the unknown gaze point. To use the gaze distance to constrain the appearance manifold in the pose-variant sample space, we propose a method using sample clusters with similar head poses and their local appearance manifolds for the interpolation. The details are described in Section 3.
The outline of the rest of the paper is as follows. In Section 2 we describe the architecture of our system. Section 3 explains our incremental learning algorithm with local clusters. Section 4 provides experimental results, and Section 5 closes with a discussion of the potential of our method and future research directions.
2 Overview
Our gaze estimation system operates in a desktop environment with a user seated in front of a PC monitor, and with a PC camera mounted on the monitor. We assume that the user’s gaze is directed at the mouse arrow on the monitor when he or she clicks the mouse, so we collect learning samples by capturing eye images and mouse arrow positions for all mouse clicks. The architecture of the system is summarized in Fig. 1. The input to the system is a continuous video stream from the camera. The 3D model-based head tracker [17] continuously computes the head pose, p, and crops the eye image, x, as shown in Fig. 2. At each mouse click, we create a training sample by using the mouse screen coordinate as the gaze label g associated with the features (head pose p and eye image x). Using this labeled sample, our system incrementally updates the mapping function between the features and the gaze. This incremental learning is performed in a reduced PCA subspace, by considering sample clusters and the local appearance manifold. The details are described in Section 3. When the user is not using the mouse, the system runs in a prediction loop, and the gaze is estimated using the updated mapping function.
Fig. 1. Learning and prediction flow of the proposed framework
2.1 Head Tracking and Eye Capturing
Here we describe in detail how we capture input features. As stated above, our framework uses the head tracking method of Wang et al . [17]. The tracker estimates the head pose of a person from a single camera, using a 3D rigid facial mesh. The tracker outputs the user’s 7-dimensional head pose p = (tT , r T )T , where t is a 3-dimensional translation and r is a 4-dimensional rotation vector defined by four quaternions. The facial mesh in Fig. 2(a) shows an estimated head pose. The system converts the input image to gray-scale, then crops the eye region as follows. First, it extracts an eye region (the rectangle in Fig. 2(a)) that is predefined on the facial mesh. It then applies a perspective warp to crop the region as a fixed size rectangle. Fig. 2(b) shows the warped result, Isrc . The offset of the eye is still too large for this image to be a useful feature. We reduce this offset error using a two-stage alignment process. For the initial alignment, we apply a vertical Sobel filter to the cropped image, then threshold to create a binary image. We then crop a W1 × H1 image region I1 from Isrc such that the cropped image center corresponds to the average coordinate of the edge points. At this time, we also do some preliminary image processing. We apply histogram equalization to the image, then truncate higher intensities to eliminate the effects of illumination changes. We also apply a bilateral filter to reduce image noise while preserving edges. After this, we improve the alignment using image subspaces. Specifically, we choose the W2 × H2 (W2 < W1 and H2 < H1 ) image region I2 that minimizes the reconstruction error E = ||I2 − I´2 ||2 .
(1)
Here I´2 is the approximated version of I2 using the PCA subspace. As described later, this subspace is updated incrementally using labeled samples.
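A small sketch of this subspace-based refinement: it slides a W2 × H2 window over the coarsely aligned region and keeps the position whose PCA reconstruction error (Eq. (1)) is smallest. The window sizes, search radius and subspace dimension are illustrative; the incremental subspace update itself is covered later.

import numpy as np

def pca_reconstruction_error(patch, mean, U):
    # E = ||I2 - I2_hat||^2 with I2_hat reconstructed from the PCA subspace.
    v = patch.ravel().astype(float) - mean
    recon = U @ (U.T @ v)
    return float(np.sum((v - recon) ** 2))

def refine_alignment(I1, mean, U, w2=75, h2=35, search=5):
    # Search a small neighborhood for the crop minimizing the reconstruction error.
    H1, W1 = I1.shape
    cy, cx = (H1 - h2) // 2, (W1 - w2) // 2
    best, best_xy = np.inf, (cx, cy)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = cx + dx, cy + dy
            if 0 <= x <= W1 - w2 and 0 <= y <= H1 - h2:
                e = pca_reconstruction_error(I1[y:y + h2, x:x + w2], mean, U)
                if e < best:
                    best, best_xy = e, (x, y)
    x, y = best_xy
    return I1[y:y + h2, x:x + w2], best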
Fig. 2. Capturing results. (a) Head pose estimation result. (b) Cropping result around predefined eye region on the facial mesh (the rectangle in (a)). (c) Eye alignment and image preprocessing result (the image feature used in our gaze estimator.)
Fig. 2(c) shows the final result of the cropping process, raster-scanned to create an image vector x. In our experiment, the size of the final image is set to W2 = 75 × H2 = 35 pixels, so x is 2625-dimensional. Finally, we compose the feature vector f = (xT , pT )T , which consists of the eye image x and the head pose p.
3 Gaze Estimation
The goal of our gaze estimator is to learn the mapping between the feature vector f and the gaze g. We use a local linear interpolation method similar to [5,18]. Given an unlabeled feature ḟ, we predict the unknown label ġ by choosing k nearest neighbors from the labeled samples and interpolating their labels using distance-based weights. For our application, it is critical to choose neighbors from a manifold according to the gaze variation. Tan et al. [5] use 2D topological information about the gaze points as constraints. Two points are assumed to be neighbors on the manifold if they are also neighbors in the gaze space. However, this assumption is not always satisfied in our case, because there can be many samples which have different head poses but the same gaze point. To overcome this problem, we construct sample clusters with similar head poses and consider a local manifold for each sample cluster. The architecture of the clusters is partially inspired by Vijayakumar et al.'s work [19]. The distance measure of the cluster, i.e., how close the head pose and the cluster are, is defined as a Gaussian function:

g(p) = Π_i (1 / √(2π κ_g σ_{p,i}²)) exp( −(p_i − p̄_i)² / (2 κ_g σ_{p,i}²) )   (2)
Algorithm 1. Cluster-based gaze estimation.
Learning: given the t-th labeled sample f_t = (x_t^T, p_t^T)^T and g_t
  Update the image subspace using incremental PCA: mean x̄^(t), eigenvectors U^(t), eigenvalues λ^(t), coefficients A^(t), with x_t ≈ x̄^(t) + U^(t) a_t.
  for k = 1 to K do
    if g_k(p_t) > τ_g then
      add the sample to cluster k
    end if
  end for
  if no g_k(p_t) is above the threshold then
    create a new (K+1)-th cluster and add the sample
  end if
Prediction: given an unlabeled feature ḟ = (ẋ^T, ṗ^T)^T
  Project the image ẋ into the current subspace: ȧ = U^(t)T (ẋ − x̄^(t))
  for k = 1 to K do
    calculate the interpolated gaze ġ_k and a prediction confidence c_k
  end for
  Get the final prediction as a weighted sum: ġ = Σ_k c_k ġ_k / Σ_k c_k.
where p_i is the ith element of the pose p, and p̄_i and σ_{p,i}² are the corresponding average and variance calculated from the samples contained in the cluster. The constant weight κ_g is empirically set. The framework is outlined in Algorithm 1. Given a labeled sample, the image x_t is first used to update the PCA subspace. We use the algorithm of Skocaj et al. [20] to update all stored coefficients a_1 . . . a_t. After updating the subspace, the sample is added to all clusters whose weight g_k(p_t) is higher than the predefined constant threshold τ_g. In Algorithm 1, K is the total number of clusters at the time. If no suitable clusters are found, we create a new cluster containing only the new sample. Given an unlabeled feature, the output gaze ġ is calculated as a weighted sum of predictions from each cluster. The following sections describe the processes executed in each cluster.
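To illustrate the cluster assignment of Algorithm 1, the sketch below computes the pose-distance weight of Eq. (2) for each existing cluster and either adds the sample to all sufficiently close clusters or spawns a new one; kappa_g, tau_g and the initial variance are placeholder values.

import numpy as np

class PoseCluster:
    def __init__(self, pose, init_var=0.05):
        self.poses, self.samples = [pose], []
        self.mean, self.var = pose.copy(), np.full(pose.shape, init_var)

    def weight(self, pose, kappa_g=1.0):
        # Eq. (2): Gaussian distance between a head pose and the cluster.
        var = kappa_g * self.var + 1e-12
        norm = 1.0 / np.sqrt(2.0 * np.pi * var)
        return float(np.prod(norm * np.exp(-(pose - self.mean) ** 2 / (2.0 * var))))

    def add(self, pose, sample):
        self.poses.append(pose)
        self.samples.append(sample)
        P = np.vstack(self.poses)
        self.mean, self.var = P.mean(axis=0), P.var(axis=0) + 1e-12

def assign_sample(clusters, pose, sample, tau_g=1e-3):
    hits = [c for c in clusters if c.weight(pose) > tau_g]
    if not hits:                      # no cluster is close enough: create a new one
        clusters.append(PoseCluster(pose))
        hits = [clusters[-1]]
    for c in hits:
        c.add(pose, sample)
    return clusters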
3.1 Learning
Here, we describe the learning process in detail. As stated above, the labeled sample s_t = {a_t, p_t, g_t} is added to a cluster only when its pose p_t is sufficiently close to the cluster average. However, this rule cannot reject outliers (e.g., a mouse click without user's attention). Moreover, the sample density can increase too much if all samples are stored in the cluster. For interpolation, the sample distribution in the gaze space does not have to be too dense. For these reasons, we introduce another constraint on the local linearity between the image distance d_a^{(i,j)} = ||a_i − a_j|| and the gaze distance d_g^{(i,j)} = ||g_i − g_j||. We define a linearity measure l(s_i) for the sample s_i as the correlation between d_a^{(i,j)} and d_g^{(i,j)} among {s_j | d_g^{(i,j)} < r_1}. Here, r_1 is the distance threshold. The sample selection rule
Fig. 3. Gaze triangulation example shown in the screen coordinates. Each eye image (flipped horizontally for presentation clarity) is located at the corresponding gaze point, and the lines indicate Delaunay edges between these gaze points.
The sample selection rule is as follows. If there is more than one sample around the new sample s_t, i.e., {s_j | d_g^{(t,j)} < r_2} ≠ ∅ (r_2 < r_1), keep the sample with the highest l(s) and reject the others. The threshold r_2 controls the sample density and should be chosen according to the size of the target display area and the memory capacity. Next, we update the cluster mean p̄_k and the variance σ_k^2 (in Eq. (2)) to fit the current sample distribution. Furthermore, we compute a Delaunay triangulation of the gaze points for the current point set. Fig. 3 shows an example triangulation. Each point corresponds to the 2D coordinate of a gaze point. This topological information is used in the prediction process.
3.2 Prediction
When the unlabeled data a˙_t and p˙_t are given to the cluster, the system predicts the unknown gaze g˙_k by interpolation. The neighbors used for interpolation are chosen on the manifold: the system selects the triangle of the local triangulation closest to a˙_t (in terms of the average distance d_a to its three vertices), and the points adjacent to this triangle are also chosen as neighbors. Using the chosen set N, interpolation weights w are calculated to minimize a reconstruction error:

w = \arg\min_w \Big\| \dot{a} - \sum_{i \in N} w_i a_i \Big\|^2,    (3)

subject to

\sum_{i \in N} w_i = 1,    (4)

where w_i denotes the weight corresponding to the i-th neighbor. Finally, under the assumption of local linearity, the gaze g˙_k is interpolated as

\dot{g}_k = \sum_{i \in N} w_i g_i.    (5)
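The constrained minimization in Eqs. (3)-(4) has the same form as the local reconstruction step of locally linear embedding [18] and can therefore be solved in closed form from a small local Gram matrix. The Python/NumPy sketch below illustrates this under that assumption; the function names and the regularization constant are illustrative, not taken from the paper.

import numpy as np

def interpolation_weights(a_dot, neighbors_a, reg=1e-3):
    """Eqs. (3)-(4): w = argmin ||a_dot - sum_i w_i a_i||^2 subject to sum_i w_i = 1.

    Standard LLE trick: the constrained minimizer solves C w proportional to 1,
    where C_ij = (a_dot - a_i) . (a_dot - a_j) is the local Gram matrix.
    """
    a_dot = np.asarray(a_dot, dtype=float)
    A = np.asarray(neighbors_a, dtype=float)              # |N| x dim
    diff = a_dot[None, :] - A
    C = diff @ diff.T                                      # local Gram matrix
    C += reg * (np.trace(C) + 1e-12) * np.eye(len(A))      # regularize for stability
    w = np.linalg.solve(C, np.ones(len(A)))
    return w / w.sum()                                     # enforce the sum-to-one constraint

def interpolate_gaze(a_dot, neighbors_a, neighbors_g):
    """Eq. (5): the gaze is the weighted sum of the neighbors' gaze labels."""
    w = interpolation_weights(a_dot, neighbors_a)
    return w @ np.asarray(neighbors_g, dtype=float)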
To reject outliers from clusters that do not contain sufficient samples, we define an interpolation reliability measure that represents how well the input a˙ is described by the selected neighbors:

r(\dot{a}) = \exp\left( -\sum_i \frac{(\dot{a}_i - \bar{a}_i)^2}{2\kappa_r \sigma_{a,i}^2} \right),    (6)

where a˙_i is the i-th element of a˙, and ā_i and σ_{a,i}^2 are the average and variance of the corresponding element among the neighbors N. The factor κ_r is empirically set. The prediction confidence c_k of the cluster is defined as the product of the reliability r(a˙) and the distance g(p˙):

c_k = r(\dot{a}) \cdot g(\dot{p}).    (7)

The final prediction result g˙ is calculated as a weighted sum of the g˙_k, based on c_k. The value r(a˙) is useful for measuring the reliability of the current estimate, so we also output the cluster-weighted average of r(a˙):

\bar{r}(\dot{p}, \dot{a}) = \sum_k g_k(\dot{p}) r_k(\dot{a}) \Big/ \sum_k g_k(\dot{p}).    (8)

4 Experiments
We have conducted experiments to evaluate our system and the effect of the proposed cluster-based learning method. Our system consists of a VGA-resolution color camera (a PointGrey Dragonfly) and a Windows PC with a 3.00 GHz CPU and 1 GB of RAM. In the current implementation, the whole process runs at about 10 fps. In our experiments, no special pattern is used to indicate the learning positions. Users are simply asked to click randomly on the desktop region, without paying any unusual attention, while looking at the mouse pointer. This means the experimental situation is reasonably close to real behavior. During the 10-minute experiment, users are allowed to move their heads freely. Table 1 shows the actual range of head motion for each user during the experiments. The estimation error is evaluated at each time t when a new labeled sample is acquired.

Table 1. Actual range of head motion for each target user. The rotation is defined as a quaternion q_w + q_x i + q_y j + q_z k.
            Translation [mm]           Rotation
            x      y      z      q_w     q_x     q_y     q_z
Person A    170    47     169    0.134   0.011   0.058   0.202
Person B    220    54     203    0.211   0.027   0.342   0.351
Person C    142    32     134    0.242   0.019   0.126   0.277
[Fig. 4 graphs: for Person A, Person B, and Person C, scatter plots of the error [degree] (vertical axis, 0-35) against the prediction reliability (horizontal axis, 0.0-1.0).]
Fig. 4. Angular error against prediction reliability. Each graph shows the scatter plot of the estimation error versus the reliability we defined in Eq.(6) and Eq.(8).
Before adding it to the sample clusters, the angular error θ_t between the true (clicked) gaze position g_t and the estimated position g˙_t (interpolated from the past labeled samples) is calculated as

\theta_t = \tan^{-1}\!\left( \frac{D_m(g_t, \dot{g}_t)}{p_{z,t} - d_{cam}} \right),    (9)

where D_m indicates the distance between the two points in metric units, p_{z,t} is the depth element of the estimated head pose, and d_cam is the pre-calculated distance between the camera and the display. First, Fig. 4 shows the angular error θ_t plotted against the reliability r̄_t. We see that the estimation accuracy increases as our reliability measure increases, and using this measure we can reject outliers that have a large error. Low-reliability estimates are caused by the absence of labeled samples around the estimated gaze position. This can be improved if new labeled samples are added around that region, and results with low reliability can be ignored by the system when a reliable prediction is needed. Fig. 5 shows the cumulative weighted average (C.W.A.) error \sum_{i=1}^{t} \bar{r}_i \theta_i / \sum_{i=1}^{t} \bar{r}_i over time. We also show the result of another experiment (the lower left graph) to validate our cluster-based approach. For this experiment, the system did not create pose clusters, which is equivalent to normal appearance-based estimation with pose-varying input. Naturally, the C.W.A. error gradually increases. By contrast, even if the user moves his/her head, the estimation error of our method (shown in the other graphs) does not increase and converges to a certain range. This shows the effectiveness of our cluster-based solution. Table 2 shows the average estimation error for the three users. The left column is the normal average error, and the center column is the weighted average error throughout the experiment. The right column indicates the number of points clicked during the experiments. The angular error of our gaze estimation is roughly 4-5 degrees. This accuracy may not be sufficient for our method to replace a mouse as a user input device; however, it is helpful for achieving our goal, i.e., estimating the approximate region where the user is looking.
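As a small worked example of Eq. (9), the sketch below converts an on-screen error in metric units into an angular error; the function name and the numbers in the example are ours and only illustrate the order of magnitude (a 4 cm on-screen error seen from an effective distance of about 55 cm corresponds to roughly 4 degrees).

import numpy as np

def angular_error_deg(g_true_mm, g_est_mm, p_z_mm, d_cam_mm):
    """Eq. (9): theta_t = atan( D_m(g_t, g_dot_t) / (p_{z,t} - d_cam) ), in degrees.

    D_m is the metric distance between the clicked and estimated gaze points on
    the display, p_{z,t} the depth of the estimated head pose, and d_cam the
    pre-calculated camera-to-display distance.
    """
    d_m = np.linalg.norm(np.asarray(g_true_mm, float) - np.asarray(g_est_mm, float))
    return np.degrees(np.arctan2(d_m, p_z_mm - d_cam_mm))

# Hypothetical numbers: a 40 mm error viewed from an effective 550 mm distance.
print(angular_error_deg([100.0, 80.0], [140.0, 80.0], 600.0, 50.0))   # about 4.2 degrees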
[Fig. 5 graphs: C.W.A. error [degree] (vertical axis, 0-12) against the number of clicks (horizontal axis) for Person A, Person B, Person C, and Person A without clustering.]
Fig. 5. Time variation of the cumulative weighted average (C.W.A.) estimation error. The lower left graph is the result without the cluster-based solution. The other graphs show results using our method for three different users.

Table 2. Estimation error. The left column is the normal average error, and the center column is the weighted average error throughout the experiment. The right column indicates the number of points clicked during the experiments.

            Average error [deg]      Number of
            Normal     Weighted      clicked points
Person A    5.6        4.4           1796
Person B    7.1        4.4           1748
Person C    6.6        5.8           1095

5 Conclusions
We have proposed a gaze estimation system that learns incrementally as the user clicks the mouse. When the user clicks somewhere on the display, our system uses the captured eye image and the head pose to learn the mapping function, with the clicked coordinate as the learning label. Moreover, we extended an appearance interpolation-based method to the case of free head movement, by clustering learning samples with similar head poses and constructing a local manifold model for each cluster. We showed the efficiency and reasonable estimation accuracy of our method through experiments in an actual environment.
Because our method relies entirely on the image distance between samples, the estimation accuracy mostly depends on the accuracy of the distance measurement. We employed a PCA-based distance measure for the sake of computational efficiency and implementation simplicity. However, it can be too sensitive to appearance variations not related to gaze, such as cropping shift and rotation. To diminish this effect, we conducted subspace-based eye alignment after cropping. Even so, there can be slight jitter and drift in the result. Thus, the estimation accuracy of our method could be improved by using a more precise alignment or a shift-invariant distance measure. We should also mention the memory efficiency of our method. Since there is no scheme to adjust the number of sample clusters, memory usage and computational cost can, in theory, increase without limit. We verified that this does not become a major issue in a usual desktop environment, but some modification will be needed when the method is applied to more general situations with a wide range of head poses. This can be partially achieved with a wider cluster kernel (κ_g in Eq. (2)) or a higher threshold for creating clusters (τ_g in Algorithm 1). In future work, we plan to extend this framework to higher-level regression in the pose space. Our system is less accurate than state-of-the-art gaze estimation methods with an accuracy of less than 1 degree; however, it has the great advantage that it allows free head movement and works with minimal equipment: a single camera without an additional light source. With further investigation, it has considerable potential for developing into a practically ideal gaze estimator.
Acknowledgement This research was supported by the Microsoft Research IJARC Core Project.
References 1. Hutchinson, T.E., White Jr., K.P., Martin, W.N., Reichert, K.C., Frey, L.A.: Human-computer interaction using eye-gaze input. IEEE Transactions on Systems, Man and Cybernetics 19(6), 1527–1534 (1989) 2. Jacob, R.J.: What you look at is what you get: eye movement-based interaction techniques. In: Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 11–18 (1990) 3. Baluja, S., Pomerleau, D.: Non-intrusive gaze tracking using artificial neural networks. Advances in Neural Information Processing Systems (NIPS) 6, 753–760 (1994) 4. Xu, L.Q., Machin, D., Sheppard, P.: A novel approach to real-time non-intrusive gaze finding. In: Proceedings of the British Machine Vision Conference, pp. 428–437 (1998) 5. Tan, K.H., Kriegman, D.J., Ahuja, N.: Appearance-based eye gaze estimation. In: Proceedings of the Sixth IEEE Workshop on Applications of Computer Vision (WACV 2002), pp. 191–195 (2002) 6. Williams, O., Blake, A., Cipolla, R.: Sparse and semi-supervised visual mapping with the S3 GP. In: Proceedings of the 2006 IEEE Conference on Computer Vision and Pattern Recognition, pp. 230–237 (2006)
7. Shih, S.W., Liu, J.: A novel approach to 3-d gaze tracking using stereo cameras. IEEE Transactions on Systems, Man and Cybernetics, Part B 34(1), 234–245 (2004) 8. Zhu, Z., Ji, Q.: Eye gaze tracking under natural head movements. In: Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR 2005), vol. 1, pp. 918–923 (2005) 9. Zhu, Z., Ji, Q., Bennett, K.P.: Nonlinear eye gaze mapping function estimation via support vector regression. In: Proceedings of the 18th International Conference on Pattern Recognition (ICPR 2006), vol. 1, pp. 1132–1135 (2006) 10. Morimoto, C., Amir, A., Flickner, M.: Detecting eye position and gaze from a single camera and 2 light sources. In: Proceedings of the 16th International Conference on Pattern Recognition (ICPR 2002), pp. 314–317 (2002) 11. Hennessey, C., Noureddin, B., Lawrence, P.: A single camera eye-gaze tracking system with free head motion. In: Proceedings of the 2006 symposium on Eye tracking research & applications, pp. 87–94 (2006) 12. Yoo, D.H., Chung, M.J.: A novel non-intrusive eye gaze estimation using crossratio under large head motion. Computer Vision and Image Understanding 98(1), 25–51 (2005) 13. Coutinho, F.L., Morimoto, C.H.: Free head motion eye gaze tracking using a single camera and multiple light sources. In: Proceedings of the Brazilian Symposium on Computer Graphics and Image Processing, pp. 171–178 (2006) 14. Beymer, D., Flickner, M.: Eye gaze tracking using an active stereo head. In: Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR 2003), vol. 2, pp. 451–458 (2003) 15. Matsumoto, Y., Ogasawara, T., Zelinsky, A.: Behavior recognition based on head pose and gaze direction measurement. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2000), vol. 3, pp. 2127–2132 (2000) 16. Wang, J.G., Sung, E.: Study on eye gaze estimation. IEEE Transactions on Systems, Man and Cybernetics, Part B 32(3), 332–350 (2002) 17. Wang, Q., Zhang, W., Tang, X., Shum, H.Y.: Real-time bayesian 3-d pose tracking. IEEE Transactions on Circuits and Systems for Video Technology 16(12), 1533– 1541 (2006) 18. Roweis, S.T., Saul, L.K.: Nonlinear dimensionality reduction by locally linear embedding. Science 290(5500), 2323–2326 (2000) 19. Vijayakumar, S., D’Souza, A., Schaal, S.: Incremental online learning in high dimensions. Neural Computation 17(12), 2602–2634 (2005) 20. Skocaj, D., Leonardis, A.: Weighted and robust incremental method for subspace learning. In: Proceedings of the Ninth IEEE International Conference on Computer Vision (ICCV 2003), pp. 1494–1501 (2003)
Partial Difference Equations over Graphs: Morphological Processing of Arbitrary Discrete Data Vinh-Thong Ta, Abderrahim Elmoataz, and Olivier Lézoray University of Caen Basse-Normandie, GREYC CNRS UMR 6072, Image Team 6 Boulevard Maréchal Juin, F-14050 Caen Cedex, France {vinhthong.ta,abderrahim.elmoataz-billah,olivier.lezoray}@unicaen.fr
Abstract. Mathematical Morphology (MM) offers a wide range of operators to address various image processing problems. This processing can be defined in terms of algebraic sets or as partial differential equations (PDEs). In this paper, a novel approach is formalized as a framework of partial difference equations (PdEs) on weighted graphs. We introduce and analyze morphological operators in local and nonlocal configurations. Our framework recovers classical local algebraic and PDE-based morphological methods in the image processing context, generalizes them to nonlocal configurations, and extends them to the treatment of any arbitrary discrete data that can be represented by a graph. It leads to considering a new field of application of MM processing: the case of high-dimensional multivariate unorganized data.
1 Introduction
Mathematical Morphology (MM) offers an important variety of tools in image processing and computer vision. The two fundamental operators are dilation and erosion. In standard flat (algebraic) MM, these operations employ a so-called structuring element B to process images. Dilation (δ) and erosion (ε) of an image, represented as a scalar function f^0 : Ω ⊂ IR^2 → IR, by a symmetric structuring element B are defined as δ(f^0)(x_i, y_i) = max{f^0(x_i + x_j, y_i + y_j) : (x_j, y_j) ∈ B} and ε(f^0)(x_i, y_i) = min{f^0(x_i + x_j, y_i + y_j) : (x_j, y_j) ∈ B}, with (x_i, y_i) ∈ Ω. The combination of these two operators gives rise to a variety of other MM operators, for instance opening, closing, top hats, and reconstruction [1]. An alternative formulation, based on partial differential equations (PDEs), was also proposed by [2, 3, 4] and references therein. The PDE-based approach generates flat dilation and erosion of a function f by a unit ball B = {z ∈ IR^2 : ||z||_p ≤ 1},
This work was partially supported under a research grant of the ANR Foundation (ANR-06-MDCA-008-01/FOGRIMMI) and a doctoral grant of the Conseil Régional de Basse-Normandie and of the Cœur et Cancer association in collaboration with the Department of Anatomical and Cytological Pathology from Cotentin Hospital Center.
with the following diffusion equations: δ_t(f) = ∂_t f = +||∇f||_p and ε_t(f) = ∂_t f = −||∇f||_p, where ∇ = (∂x, ∂y)^T is the spatial gradient operator, f is the transformed version of the image f^0, and the initial condition is f(x, y, 0) = f^0(x, y, 0) at time t = 0. These PDEs produce continuous-scale morphology and have several advantages. They offer excellent results for non-digitally scalable structuring elements whose shapes cannot be correctly represented on a discrete grid, and they also allow sub-pixel accuracy. They can be made adaptive by introducing a local speed evolution term [5]. However, these methods have several drawbacks. The numerical discretization is difficult for high-dimensional data or irregular domains. They only consider local interactions on the data by using local derivatives, while nonlocal schemes have recently received a lot of attention [6, 7, 8, 9]. Indeed, these latter works have shown their effectiveness in many computer vision tasks. Moreover, MM is a well-known and well-documented approach for binary and grayscale images. Nevertheless, there exists no general extension for the treatment of multivariate and high-dimensional data sets. Several methods address this problem, such as [10] for the particular case of tensor images or [11] for data set and cluster analysis. The latter approach uses only binary MM and has the drawback that it requires the construction of a regular discrete grid to perform MM processing. Inspired by previous work in [9], we propose to consider MM processing over graphs. Graph morphology was already defined in [12, 13], but only algebraic MM operations and particular graphs (binary graphs, minimum spanning trees) are considered. Our work is different.
Contributions. We extend the PDE-based MM operators to a discrete scheme by considering partial difference equations (PdEs) over weighted graphs of arbitrary topologies. To this aim, nonlocal discrete derivatives on graphs are introduced to transcribe MM processing based on the continuous PDEs to PdEs over graphs. Our approach to MM operations has several advantages. Any discrete domain that can be described by a graph can be considered without any spatial discretization. Local and nonlocal processing are naturally and directly enabled within the same formulation. These two points provide novel application fields for MM operations, such as unorganized high-dimensional data processing and nonlocal MM processing for images.
Paper organization. Section 2 recalls some definitions and notations on graphs. Section 3 introduces the family of weighted nonlocal dilations and erosions. The potential of this framework is illustrated in Sect. 4 for the processing of unorganized data and, in the context of image processing, on Region Adjacency Graphs and textured images.
2 Mathematical Preliminaries on Graphs

2.1 Definitions and Notations
We consider any general discrete domain as a weighted graph. Let G = (V, E, w) be a weighted graph composed of a finite set V of vertices, and a finite set
E ⊂ V × V of weighted edges, and a weight function w : V × V → IR^+. An edge of E, which connects two adjacent vertices u and v, is noted uv. In this paper, graphs are assumed to be connected and undirected (see [14] for more details). This implies that the weight function w is symmetric: w_uv = w_vu if uv ∈ E, and w_uv = 0 otherwise. Let H(V) be the Hilbert space of real-valued functions on the vertices. This space is endowed with the usual inner product. Each function f : V → IR ∈ H(V) assigns a real value f(u) to each vertex u ∈ V.
Graph construction. Any discrete domain can be modeled by a weighted graph and by defining an initial function f^0 : V → IR on the vertices. In image processing, graphs are commonly used to represent digital images. In the machine learning community, they are usually used to represent data sets and their relations. Many typical structures can be quoted. (i) k-adjacency grid graphs [15]: vertices represent pixels and edges represent the local pixel adjacency relationship. Two common graphs are the 4- and the 8-adjacency grid graphs. (ii) Region Adjacency Graphs (RAG) [16], which provide useful descriptions of the picture structure: vertices represent image regions and edges represent the region adjacency relationship. (iii) Proximity graphs [17], for instance the k-Nearest Neighbors graph (k-NN graph), where each vertex is associated with a set of k close vertices depending on a similarity criterion. Constructing a graph consists in modeling the neighborhood or the similarity relationship between data. This similarity depends on a pairwise distance measure. Computing distances between data elements consists in comparing their features, which generally depend on the initial function f^0. To this aim, each vertex u ∈ V is assigned a feature vector denoted by F(f^0, u) ∈ IR^n (several choices can be considered for the expression of F, and the simplest one is F(f^0, u) = f^0(u)). For an edge uv ∈ E, the following standard weight functions g : V × V → IR^+ can be used:

g_1(uv) = ( \rho(F(f^0, u), F(f^0, v)) + \epsilon )^{-1}, with ε > 0, ε → 0, and
g_2(uv) = \exp( -\rho(F(f^0, u), F(f^0, v))^2 / \sigma^2 ),

where σ controls the similarity and ρ : V × V → IR^+ is a distance measure. Then, the choice of the graph topology enables several processings that model local or nonlocal interactions between data (especially in the image processing context). Both notions of local and nonlocal interactions are directly integrated into the edge weights by the associated weight function w. One has to note that both local and nonlocal interactions are only expressed by the graph topology in terms of neighborhood connectivity (see [9] for more details on these notions).
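For illustration, a minimal Python/NumPy sketch of these two weight functions is given below, assuming a Euclidean distance ρ between feature vectors; the function and parameter names are illustrative.

import numpy as np

def rho(feat_u, feat_v):
    """Distance measure between the feature vectors F(f0, u) and F(f0, v)."""
    return np.linalg.norm(np.asarray(feat_u, float) - np.asarray(feat_v, float))

def g1(feat_u, feat_v, eps=1e-6):
    """Inverse-distance weight: g1(uv) = (rho + eps)^(-1)."""
    return 1.0 / (rho(feat_u, feat_v) + eps)

def g2(feat_u, feat_v, sigma=1.0):
    """Gaussian weight: g2(uv) = exp(-rho^2 / sigma^2)."""
    return np.exp(-rho(feat_u, feat_v) ** 2 / sigma ** 2)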
2.2 Discrete Derivatives and Gradient Operators
We introduce discrete operator definitions such as derivatives, gradient operators and their norms. These formulations constitute the basis of our morphological operator framework.
We consider the directional derivative of a function f : V → IR ∈ H(V) at vertex u along an edge uv. Following the basic operator defined in [9], we have

\frac{\partial f}{\partial(uv)}\Big|_u = \partial_v f(u) = w_{uv}^{1/2} (f(v) - f(u)).    (1)

This definition is consistent with the continuous definition of the derivative of a function and satisfies the following properties: ∂_v f(u) = −∂_u f(v), ∂_u f(u) = 0, and if f(u) = f(v) then ∂_v f(u) = 0. From (1), we introduce two other directional derivatives based on the min and max operators: ∂_v^+ f(u) = max(0, ∂_v f(u)) and ∂_v^- f(u) = min(0, ∂_v f(u)). The weighted gradient operator of a function f ∈ H(V) at vertex u ∈ V is the vector of all partial derivatives with respect to the set of edges uv: (∇_w f)(u) = (∂_v f(u))_{uv ∈ E}. Then, with this definition one obtains

(\nabla_w^+ f)(u) = (\partial_v^+ f(u))_{uv \in E}  and  (\nabla_w^- f)(u) = (\partial_v^- f(u))_{uv \in E}.    (2)

In the sequel, we use the L^p-norm of the two latter gradients defined in (2),

\|(\nabla_w^+ f)(u)\|_p = \Big[ \sum_{v \sim u} w_{uv}^{p/2} \max(0, f(v) - f(u))^p \Big]^{1/p}  and
\|(\nabla_w^- f)(u)\|_p = \Big[ \sum_{v \sim u} w_{uv}^{p/2} |\min(0, f(v) - f(u))|^p \Big]^{1/p};    (3)

and the L^∞-norm,

\|(\nabla_w^+ f)(u)\|_\infty = \max_{v \sim u} \sqrt{w_{uv}} \max(0, f(v) - f(u))  and
\|(\nabla_w^- f)(u)\|_\infty = \max_{v \sim u} \sqrt{w_{uv}} |\min(0, f(v) - f(u))|.    (4)

Similar definitions can be provided for the norm of the gradient ∇_w f.
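A small, self-contained Python sketch of Eqs. (3)-(4) is shown below. The dictionary-based graph representation is an illustrative choice of ours, and the absolute value around the min terms follows the reconstruction of the norms given above.

def gradient_norms(f, u, neighbors, w, p=2):
    """Norms of the weighted gradients (nabla_w^+ f)(u) and (nabla_w^- f)(u).

    f: dict vertex -> value; neighbors: dict vertex -> iterable of adjacent
    vertices; w: dict (u, v) -> edge weight; p: a positive number or 'inf'.
    """
    plus, minus = [], []
    for v in neighbors[u]:
        d = f[v] - f[u]
        plus.append((w[(u, v)] ** 0.5) * max(0.0, d))        # w^{1/2} max(0, f(v)-f(u))
        minus.append((w[(u, v)] ** 0.5) * abs(min(0.0, d)))  # w^{1/2} |min(0, f(v)-f(u))|
    if p == 'inf':                                           # Eq. (4)
        return max(plus, default=0.0), max(minus, default=0.0)
    return (sum(x ** p for x in plus) ** (1.0 / p),          # Eq. (3)
            sum(x ** p for x in minus) ** (1.0 / p))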
3 PdEs for Morphology on Weighted Graphs: Dilation and Erosion Processes
In this section, we define the discrete analogue of the continuous PDE-based dilation and erosion formulations of a given function f ∈ H(V). To this aim, we use, on the one hand, the decomposition of f into its level sets f^k = H(f − k), where H is the Heaviside function (a step function), and, on the other hand, the notion of graph boundary. Let G = (V, E, w) be a graph and let A be a set of connected vertices with A ⊂ V, i.e., for all u ∈ A, there exists v ∈ A such that uv ∈ E. We denote by ∂^+A and ∂^-A respectively the outer and the inner boundary sets of A in G. Then, for a given vertex u ∈ V:

\partial^+ A = \{ u \in A^c : \exists v \in A, v \sim u \}  and  \partial^- A = \{ u \in A : \exists v \in A^c, v \sim u \},    (5)
Fig. 1. Graph boundary on two different graphs: (a) 4-adjacency image grid graph; (b) arbitrary undirected graph. Gray vertices correspond to the set A. Plus and minus vertices belong respectively to the outer set ∂^+A and the inner set ∂^-A.
where A^c = V \ A is the complement of A. Figure 1 illustrates these notions on a 4-adjacency image grid graph and on an arbitrary graph. One can note that the boundary of V cannot be directly defined by (5); in this case, one assumes that it is given. Then, dilation over A can be interpreted as a growth process that adds vertices from ∂^+A to A. By duality, erosion over A can be interpreted as a contraction process that removes vertices from ∂^-A. The following proposition shows the relation between the graph boundary and the gradients of the level set function, ||(∇_w^+ f^k)(u)||_p and ||(∇_w^- f^k)(u)||_p.

Proposition 1. For any level set f^k, the gradient norms (3) are given by

\|(\nabla_w^+ f^k)(u)\|_p = \Big[ \sum_{v \sim u, v \in A^k} w_{uv}^{p/2} \Big]^{1/p} \chi_{\partial^+ A^k}(u)  and
\|(\nabla_w^- f^k)(u)\|_p = \Big[ \sum_{v \sim u, v \notin A^k} w_{uv}^{p/2} \Big]^{1/p} \chi_{\partial^- A^k}(u),    (6)

where A^k ⊂ V is the set with f^k = χ_{A^k} and χ : V → {0, 1} is the indicator function.

Proof. We prove the first relation in (6). If f^k = χ_{A^k}, then by (3)

\|(\nabla_w^+ f^k)(u)\|_p = \Big[ \sum_{v \sim u} w_{uv}^{p/2} \max(0, \chi_{A^k}(v) - \chi_{A^k}(u))^p \Big]^{1/p}.

We study the cases where u ∈ A^k and u ∉ A^k, and similarly for the neighborhood of u. The only case where the quantity χ_{A^k}(v) − χ_{A^k}(u) > 0 is when u ∉ A^k and its neighbor v ∈ A^k. This configuration corresponds to the definition of the outer set of vertices ∂^+A^k defined in (5). Then, with this property, one can deduce the following relation:

\|(\nabla_w^+ f^k)(u)\|_p = \Big[ \sum_{v \sim u, v \in A^k} w_{uv}^{p/2} \Big]^{1/p} \chi_{\partial^+ A^k}(u).
The second relation in (6) is deduced by the same scheme: the only case where χ_{A^k}(v) − χ_{A^k}(u) < 0 is when we consider the inner set of vertices ∂^-A^k (i.e., u ∈ A^k and v ∉ A^k). From Proposition 1 we can directly obtain the following one.

Proposition 2. For any level set f^k and at vertex u ∈ V, the L^p-norm of the gradient (∇_w f^k)(u) can be decomposed as ||(∇_w f^k)(u)||_p = ||(∇_w^+ f^k)(u)||_p + ||(∇_w^- f^k)(u)||_p.

Proof. Using the outer ∂^+A^k and the inner ∂^-A^k sets of vertices and Proposition 1, we have

\|(\nabla_w f^k)(u)\|_p = \Big[ \sum_{v \sim u,\, u \in \partial^+ A^k} w_{uv}^{p/2} |f^k(v) - f^k(u)|^p \Big]^{1/p} + \Big[ \sum_{v \sim u,\, u \in \partial^- A^k} w_{uv}^{p/2} |f^k(v) - f^k(u)|^p \Big]^{1/p} = \|(\nabla_w^+ f^k)(u)\|_p + \|(\nabla_w^- f^k)(u)\|_p.

Remark 1. Propositions 1 and 2 only consider the L^p-norms. For the L^∞-norm, one can demonstrate and obtain the same results by using the expressions defined in (4).

As in the continuous case, a simple variational definition of dilation applied to f^k can be interpreted as maximizing a surface gain proportional to +||(∇_w f^k)(u)||_p. Similarly, erosion can be viewed as minimizing a surface gain proportional to −||(∇_w f^k)(u)||_p. From Proposition 2, if we consider the case where u ∈ ∂^+A^k, ||(∇_w f^k)(u)||_p reduces to ||(∇_w^+ f^k)(u)||_p and corresponds to dilation over A^k. This process can be expressed by the evolution equation ∂_t f^k(u) = +||(∇_w^+ f^k)(u)||_p. With the same scheme, the erosion process is expressed by the equation ∂_t f^k(u) = −||(∇_w^- f^k)(u)||_p. Finally, by extending these two processes to all the levels of f, we can naturally consider the following two families of dilations and erosions. These two processes are parameterized by p and w, over any weighted graph G = (V, E, w). They are defined as

\delta_{p,t}(f(u)) = \partial_t f(u, t) = +\|(\nabla_w^+ f)(u, t)\|_p  and
\epsilon_{p,t}(f(u)) = \partial_t f(u, t) = -\|(\nabla_w^- f)(u, t)\|_p.    (7)

Dilation process algorithm. To solve the PdE dilation and erosion processes (7), contrary to the PDE case, no spatial discretization is needed, since the derivatives are directly expressed in a discrete form. Then, one obtains the general iterative scheme for dilation, at time t+1, for all u ∈ V:

f(u, t+1) = f(u, t) + \Delta t \, \|(\nabla_w^+ f)(u, t)\|_p,    (8)

where f(·, t) is the parametrization of f by an artificial time t > 0. The initial condition is f(u, 0) = f^0(u), where f^0 ∈ H(V) is the initial function defined on
the graph vertices. With the corresponding gradient ∇_w^+ f norms, (8) becomes, for the L^p and L^∞-norms,

f(u, t+1) = f(u, t) + \Delta t \Big[ \sum_{v \sim u} w_{uv}^{p/2} \max(0, f(v, t) - f(u, t))^p \Big]^{1/p}  and    (9)

f(u, t+1) = f(u, t) + \Delta t \, \max_{v \sim u} \big( w_{uv}^{1/2} \max(0, f(v, t) - f(u, t)) \big).    (10)

The extension to the erosion process can be established by following the corresponding gradient ∇_w^- f norms in (3) and (4). The proposed dilation and erosion framework has several advantages. (i) No spatial discretization is needed, contrary to the continuous case. (ii) The choice of a weight function provides a natural adaptive scheme by including more information on edges and repetitive structures in the processing. Local and nonlocal configurations are unified within the same formulation. (iii) The same scheme works on graphs of arbitrary structure, i.e., any discrete data that can be represented by a graph can be processed with our framework.
Relations with image processing schemes. We show that, with an adapted graph topology and an appropriate weight function, the proposed methodology for dilation and erosion is linked to well-known methods defined in the context of image processing. For simplicity we only consider dilation, but the same remarks apply for erosion.

Remark 2. When p = 2 and the weight function is constant (i.e., w = 1), one recovers from (9) the exact Osher and Sethian first-order upwind discretization scheme [18] for a grayscale image defined as f : V ⊂ IR^2 → IR. Let G = (V, E, 1) be a 4-adjacency grid graph associated to the grayscale image. From (9), with w = 1 and p = 2, we have

f(u, t+1) = f(u, t) + \Delta t \Big[ \sum_{v \sim u} |\max(0, f(v, t) - f(u, t))|^2 \Big]^{1/2}.

Replacing the vertex u and its neighborhood by their spatial image coordinates (x, y), and using the property max(0, a - b)^2 = min(0, b - a)^2, we have

f((x, y), t+1) = f((x, y), t) + \Delta t \Big[ |\min(0, f((x, y), t) - f((x-1, y), t))|^2 + |\max(0, f((x+1, y), t) - f((x, y), t))|^2 + |\min(0, f((x, y), t) - f((x, y-1), t))|^2 + |\max(0, f((x, y+1), t) - f((x, y), t))|^2 \Big]^{1/2}.

One can also note that this discretization corresponds exactly to the Osher and Sethian discretization scheme used by the PDE-based dilation process. Using this expression, the proposed morphological framework can perform sub-pixel approximation. The notion of structuring elements as defined by [2] is recovered.
For a unit ball B = {z ∈ IR^2 : ||z||_p ≤ 1}, if we consider the three special cases p = 1, 2, ∞, an approximation of a square, a circle and a diamond is obtained.

Remark 3. We study the case where p = ∞, with a constant weight function (i.e., w = 1) and a constant time discretization (i.e., Δt = 1). Our formulation recovers the classical algebraic flat morphological dilation over graphs. From (10) we have

f(u, t+1) = f(u, t) + \max_{v \sim u} \big( \max(0, f(v, t) - f(u, t)) \big).

If f(v, t) − f(u, t) ≤ 0 for all v ∼ u, then f(u, t+1) = f(u, t). If f(v, t) − f(u, t) > 0 for some v ∼ u, then we obtain

f(u, t+1) = f(u, t) + \max_{v \sim u} \big( f(v, t) - f(u, t) \big) = f(u, t) + \max_{v \sim u} f(v, t) - f(u, t).

In both cases, by considering that the neighborhood of vertex u includes u itself, we recover the classical algebraic dilation over graphs:

f(u, t+1) = \max_{v \sim u} f(v, t).

In this case, the structuring element is provided by the graph topology and the vertex neighborhoods. For instance, if we consider an 8-adjacency image grid graph, this is equivalent to a dilation by a square structuring element of size 3×3.
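To make the iterative schemes (8)-(10) and Remark 3 concrete, here is a small Python sketch of the dilation iteration on a weighted graph; the dictionary-based representation, the default time step and the iteration count are illustrative choices, and erosion is obtained analogously by subtracting the norm of the negative gradient.

def dilate(f, neighbors, w, p=2, dt=0.25, n_iter=10):
    """Graph dilation of Eqs. (8)-(10): f(u) <- f(u) + dt * ||(nabla_w^+ f)(u)||_p.

    With w = 1, p = 'inf' and dt = 1 this reduces to the classical algebraic
    dilation f(u) <- max over the neighborhood of u (Remark 3).
    """
    f = dict(f)
    for _ in range(n_iter):
        new_f = {}
        for u in f:
            vals = [(w[(u, v)] ** 0.5) * max(0.0, f[v] - f[u]) for v in neighbors[u]]
            if p == 'inf':
                norm = max(vals, default=0.0)                  # Eq. (10)
            else:
                norm = sum(x ** p for x in vals) ** (1.0 / p)  # Eq. (9)
            new_f[u] = f[u] + dt * norm
        f = new_f
    return f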
4 Experimental Results
The proposed morphological framework can be used to process any function defined on the vertices of a graph or on any arbitrary discrete domain. In this section, we illustrate our methodology through basic operations such as dilation, erosion, opening and closing. For a function f ∈ H(V), the simplest way to obtain opening and closing operations is to implement them serially as compositions of dilation δ and erosion ε: opening is δ(ε(f)) and closing is ε(δ(f)). In the sequel, to show the flexibility and the novelty of our framework, we provide examples of morphological operations on arbitrary discrete data. We also consider various graph topologies, and local and nonlocal interactions.
– Morphological image processing results are presented, and in particular fast image processing. Indeed, the proposed formulation allows one to consider image representations other than the usual grid graph, such as the RAG, which decreases the computational complexity while exhibiting a similar processing behavior.
– Processing results on textured images illustrate the benefits of nonlocal interactions, compared to local ones, in preserving fine and repetitive structures.
– Morphological processing results on high-dimensional unorganized discrete data show the potential of our framework to perform processing on arbitrary discrete domains.
For all the examples, we restrict ourselves to the case p = 2 for simplicity. The objective of the following experiments is not to solve a particular application or problem; they only illustrate the potential and the behavior of our morphological framework.
Fig. 2. Dilation and erosion on image-based graphs. (a) Original image (65 536 pixels). (b) Partition (11 853 regions, i.e., 82% reduction). (c) Reconstructed image. (d), (e) and (f): dilation at left and erosion at right. (d) Unweighted and (e) weighted operations performed on the grid graph constructed from (a). (f) Weighted operations performed on a RAG constructed from (b) and (c).
Remark 4. In the case of a vector-valued function f : V → IR^n, with f = (f_i)_{i=1,...,n}, morphological operations are performed on each component f_i independently. This amounts to n morphological processes, where the inner correlation between the vectorial data is expressed by the weight function, which acts as a coupling term.
Image processing on a grid graph and fast processing on a RAG. This experiment compares the behavior of our morphological operations for image processing under different weight functions and graph structures. Figure 2(a) presents an original scalar grayscale image considered as a function f^0 : V ⊂ IR^2 → IR that maps vertices to grayscale values. Figures 2(d) and 2(e) compare local unweighted and local weighted dilation and erosion. The graph associated with these local processings is a 4-adjacency grid graph, where the weight function is w = 1 for the unweighted case and w = g_2 with F(f^0, .) = f^0 for the weighted case. As shown in these examples, weighted processing better preserves edge information and the main image structures, whereas unweighted processing destroys them during the morphological processes. Figure 2(f) illustrates the flexibility of our framework by employing another image graph representation that allows fast processing. Figure 2(c) shows the image reconstructed from a fine partition (Fig. 2(b)) obtained from Fig. 2(a): each pixel value in the fine partition is replaced by the mean color value of its surrounding region. Then, the partition is associated with a RAG where each vertex carries the mean value of its associated region. One can note the reduction of the simplified version as compared to the original one (82% in terms of vertices). Figure 2(f) shows dilation and erosion performed with this RAG, where the weight function is the same as in the grid-graph case. The results exhibit behaviors similar to those in Fig. 2(e) while drastically reducing the computational complexity, due to the reduced number of vertices to consider.
Nonlocal processing of textured images. This experiment shows one of the novelties of our formulation: applying a nonlocal patch-based approach to morphological processing. To this aim, we compare local and nonlocal configurations on textured images. Figure 3 shows the results obtained for three test images. The first row shows the original images (Figures 3(a), 3(c) and 3(e)) and corrupted ones (Figures 3(b), 3(d) and 3(f)) with Gaussian noise where σ = 20. The second-row results are obtained by processing the corrupted images. For each test image, the images at left (Figures 3(g), 3(i) and 3(k)) are results obtained with the usual local closing performed on a 4-adjacency grid graph associated with a constant weight (w = 1). The images at right show closing results obtained with the nonlocal configuration. These results clearly demonstrate that the nonlocal patch-based configuration outperforms the local approach: it better preserves frequent features during the morphological processing, whereas the local one destroys fine structures and repetitive elements. To obtain such nonlocal results, the graph structure needs to incorporate more image feature information than the local one. When f^0 ∈ H(V) is the image to process, the nonlocal features are provided by image patches, i.e., F(f^0, u) is the set of values of f^0 in a square window of size (2s+1)×(2s+1) centered at vertex u, which we denote F_s(f^0, u) ∈ IR^{(2s+1)×(2s+1)}. Then, the graph constructed to obtain the nonlocal patch-based closing corresponds to a modified (undirected) k-NN graph where the nearest neighbors are selected according to a patch distance measure ρ defined as

\rho(F_s(f^0, u), F_s(f^0, v)) = \sum_{i=-s}^{s} \sum_{j=-s}^{s} G_a((i, j)) \, \| f^0(u + (i, j)) - f^0(v + (i, j)) \|_2^2.
Ga is a Gaussian kernel of standard deviation a and the final weight function associated with this graph is w=1. In experiments of Fig. 3, the graph is a 10-NN
Fig. 3. Local and nonlocal closing on textured images. First row: (a), (c), (e) original and (b), (d), (f) corrupted test images (Gaussian noise, σ = 20). Second row: (g), (i), (k) local and (h), (j), (l) nonlocal closing results.
graph with F3 (f 0 , .) as the feature vector within a neighborhood search window of size 21×21. Similar definition and graph construction can be found in [8, 7] and references therein.
Extension to high-dimensional unorganized data set processing. The following experiments present a novel application of morphological operators: the processing of high-dimensional unorganized data sets. Figures 4 and 5 show the opening operation on four independent synthetic data sets and on the United States Postal Service (USPS) handwritten digit image database. Figure 4 shows the opening results on four noisy data sets. To obtain these results, the graphs associated with the original data (Fig. 4(a)) are modified (undirected) 8-NN graphs associated with the weight function w = g_2, where each vertex of each graph corresponds to a data point and is described by a 2-dimensional feature vector. The four constructed graphs are shown in Fig. 4(b). Figure 4(c) shows the results of the opening operation. These results clearly show the filtering, denoising effect of the opening on the noisy original data. The processing tends to group the data in the feature space while preserving the main data structures. Figure 5 shows the processing of high-dimensional real-world image manifolds: the USPS handwritten digit data set. This database consists of grayscale handwritten digit images, scanned from digit 0 to 9, each of size 16×16. To perform the opening operation on the USPS database, we use two randomly subsampled test sets of 100 samples each: one from digit 0 and the other mixed from digits 1 and 3. The graphs associated with the original data (Fig. 5(a)) are modified (undirected) 8-NN graphs associated with the weight function w = g_1, where each vertex of each graph corresponds to an image sample and is described by a 256-dimensional (IR^{16×16}) feature vector in which each feature is a pixel grayscale value. Figure 5(b) presents the opening results. These results clearly show the filtering effect of the opening operation, where all samples tend to become uniformly identical and converge to an artificial mean digit model. Finally, these two experiments show the potential of our morphological approach to process high-dimensional unorganized data sets. This processing can be viewed as a data pre-processing step that can be useful for improving the efficiency of subsequent classification or machine learning tasks.
Fig. 4. Opening on four independent synthetic data sets: (a) original data sets; (b) 8-NN graphs; (c) opening results.
679
(b) Corresponding opening
Fig. 5. Opening on USPS digit 0 and mixed digits 3 and 1
5 Conclusion
In this paper, a novel formalism of Mathematical Morphology operators based on PdEs over weighted graphs of arbitrary topology has been proposed. This provides a framework that extends PDE-based methods to discrete local and nonlocal schemes. Moreover, it enables the processing, by morphological means, of any high-dimensional unorganized multivariate data, which has received very little attention in the literature. Fast morphological processing of images has also been proposed by considering the Region Adjacency Graph instead of the usual grid graph. The integration of a nonlocal patch-based approach was highlighted as an efficient way to preserve fine and repetitive structures in morphological processing. Finally, our proposed framework allows us to apply morphological operations on any discrete domain, which can be useful to filter and denoise manifolds or databases.
References 1. Soille, P.: Morphological Image Analysis, Principles and Applications, 2nd edn. Springer, Heidelberg (2002) 2. Brockett, R., Maragos, P.: Evolution equations for continuous-scale morphology. In: IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 3, pp. 125–128 (1992) 3. Sapiro, G., Kimmel, R., Shaked, D., Kimia, B., Bruckstein, A.: Implementing continuous-scale morphology by curve evolution. Pattern Recognition 26(9), 1363– 1372 (1993) 4. Maragos, P.: PDEs for morphology scale-spaces and eikonal applications. In: Bovik, A. (ed.) The Image and Video Processing Handbook, 2nd edn., pp. 587–612. Elsevier Academic Press, Amsterdam (2004) 5. Breuß, M., Burgeth, B., Weickert, J.: Anisotropic continuous-scale morphology. In: Mart´ı, J., Bened´ı, J.M., Mendon¸ca, A.M., Serrat, J. (eds.) IbPRIA 2007. LNCS, vol. 4478, pp. 512–522. Springer, Heidelberg (2007) 6. Buades, A., Coll, B., Morel, J.: Nonlocal image and movie denoising. International Journal of Computer Vision 76(2), 123–139 (2008) 7. Gilboa, G., Osher, S.: Nonlocal operators with applications to image processing. Report 07-23, UCLA, Los Angeles (July 2007)
8. Peyr´e, G.: Manifold models for signals and images. Technical report, CEREMADE, Universit´e Paris Dauphine (2007) 9. Elmoataz, A., L´ezoray, O., Bougleux, S.: Nonlocal discrete regularization an weighted graphs: a framework for image and manifolds processing. IEEE Transactions on Image Processing 17(7), 1047–1060 (2008) 10. Burgeth, B., Bruhn, A., Didas, S., Weickert, J., Welk, M.: Morphology for matrix data: Ordering versus pde-based approach. Image and Vision Computing 25(4), 496–511 (2007) 11. Postaire, J., Zhang, R., Lecocq-Botte, C.: Cluster analysis by binary morphology. IEEE Trans. Patt. Anal. Machine Intell. 15(2), 170–180 (1993) 12. Heijmans, H., Nacken, P., Toet, A., Vincent, L.: Graph morphology. Journal of Visual Communication and Image Representation 3(1), 24–38 (1992) 13. Meyer, F., Lerallut, R.: Morphological operators for flooding, leveling and filtering images using grpahs. In: Escolano, F., Vento, M. (eds.) GbRPR. LNCS, vol. 4538, pp. 158–167. Springer, Heidelberg (2007) 14. Diestel, R.: Graph Theory. Graduate Texts in Mathematics, vol. 173. Springer, Heidelberg (2005) 15. Chan, T., Osher, S., Shen, J.: The digital TV filter and nonlinear denoising. IEEE Transactions on Image Processing 10(2), 231–241 (2001) 16. Tr´emeau, A., Colantoni, P.: Regions adjacency graph applied to color image segmentation. IEEE Transactions on Image Processing 9(4), 735–744 (2000) 17. Von Luxburg, U.: A tutorial on spectral clustering. Statistics and Computing 17(4), 395–416 (2007) 18. Osher, S., Sethian, J.: Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations. Journal of Computational Physics 79, 12–49 (1988)
Real-Time Shape Analysis of a Human Body in Clothing Using Time-Series Part-Labeled Volumes Norimichi Ukita, Ryosuke Tsuji, and Masatsugu Kidode Nara Institute of Science and Technology, Japan
[email protected]
Abstract. We propose a real-time method for simultaneously refining the reconstructed volume of a human body with loose-fitting clothing and identifying the body-parts in it. Time-series volumes with body-part labels, acquired by a slow but sophisticated 3D reconstruction algorithm, are obtained offline. The time-series sample volumes are represented by trajectories in eigenspaces using PCA. An input visual hull reconstructed online is projected into the eigenspace and compared with the trajectories in order to find similar high-precision samples with body-part labels. A hierarchical search that takes 3D reconstruction errors into account achieves robust and fast matching. Experimental results demonstrate that our method can refine an input visual hull including loose-fitting clothing and identify its body-parts in real time.
1 Introduction
Using human motion information, a number of real-world applications can be realized, for example gesture-based interfaces, man-machine interaction, CG animation, and the computer-supported study of sports/expertise. For acquiring that information, body-part identification (i.e., posture estimation) is an essential technique. Such techniques have been proposed in many studies [1]. In a method based on 2D information obtained by a single camera, human posture is estimated by fitting an approximate human-body model to a human region in an image. The estimation result is, however, not robust to occlusions. To improve the robustness to occlusions, 3D volume reconstruction from multiple views is effective. A reconstructed volume is useful not only for posture estimation but also for 3D shape analysis. Although 3D reconstruction is computationally expensive in general, Shape-From-Silhouette (SFS) can provide the volume (i.e., visual hull) of a moving person stably in real time [2,3]. Online applications using 3D shape and posture are, therefore, feasible by speeding up the 3D posture estimation that follows 3D reconstruction. In most methods based on a 3D volume, as with those based on a 2D image, the posture is estimated so that the overlapping region between the reconstructed
volume and the 3D human model, which consists of simple rigid parts (e.g., cylinders), is maximized (see [4,5], for example). The body parts (e.g., torso and arms) can be given manually or detected from time-series reconstructed volumes by extracting sub-volumes whose motions can each be regarded as a rigid motion (see [6,7], for example). All of these methods can work well under the assumption that each body part can be approximately modeled as a rigid part. Approximation errors can be reduced by using a precise human model obtained by a 3D scanner and by estimating the shape deformation around a joint [8]. In [9], the regions of bending limbs can be identified in the reconstructed volume without the assumption of rigidity. Even with these methods, it is impossible to represent the large variation in the shape of fully non-rigid loose-fitting clothing. Although the shapes/motions of clothing are modeled and simulated in several applications (e.g., CG [10] and non-rigid tracking [11]), it is impossible to estimate them without information about human motion. In [12], the shapes of a skirt and the legs in it are reconstructed simultaneously using a clothing model. Although this method might be the most successful, the observed target is simple (i.e., simple deformation without occlusions) and the computational cost is very expensive (5 min/frame). As far as we know, no existing algorithm can simultaneously achieve shape reconstruction and posture estimation of a human body with loose-fitting clothing in complex motion in real time. In addition to this essential problem, the previous methods tend to fail due to 3D reconstruction errors. Especially with SFS, ghost volumes must be included in concave areas of a human body even if the pre-processes (e.g., camera calibration and silhouette extraction) are achieved without any error. The ghost volumes and other reconstruction errors can be refined by post-processes such as multi-view photo consistencies (e.g., space carving [13]) and by additional restrictions such as silhouette consistencies and temporal smoothness (e.g., the deformable mesh model [14]). Similar sophisticated algorithms allow us to fulfill photo consistencies of a textureless object [15] and to deform a visual hull with silhouette consistencies and estimated surface normals using a template human model [16,17]. Even occluded clothes can be reconstructed using color markers printed on a surface and hole filling [18,19]. None of these methods, however, can achieve real-time 3D reconstruction of a surface with arbitrary texture. Based on the above discussions, we propose a method for analyzing the shape of a human body in loose-fitting clothing, which has the properties below: (1) real-time processable for online applications, (2) identifying body parts with significantly deformable clothing, and (3) refining the volume with arbitrary textures. With our body-part identification, each voxel in the reconstructed volume including clothing is classified into a body-part label. The result does not give the joint positions/angles (i.e., posture), but it enables robust posture estimation using existing methods and their extension to posture estimation of a body wearing loose-fitting clothing; each joint must be in the boundary of the body parts estimated by our method. The purpose of our volume refinement is to fill and remove significant errors due to failures in silhouette extraction and SFS.
2 Basic Schemes for Analyzing Loose-Fitting Clothing
In many methods for body-part identification and posture estimation, knowledge about the human body is useful for improving accuracy and robustness. In recent years, several methods learn and employ observed human motions as training samples (e.g., the movable range of each joint angle [20] and the probabilistic representation of each joint motion [21,22]). The example-based approach is superior to a parametric representation in terms of correctly representing complicated and small variations of posture and motion. In most of the previous example-based methods, the motion data is expressed by a set of joint angles obtained using a motion capture (MoCap) system with markers. Using the real data obtained by MoCap is superior to using CG samples [23] in terms of reality. For our purposes, however, the following problems arise in using MoCap: (1) when a person wears loose-fitting clothes, the joint positions cannot be measured because markers on the clothes cannot stay on their corresponding joints, and (2) total shape information cannot be obtained because only the 3D positions of the markers are measured. Even with a number of markers attached to the surface of a target [24], detailed shape analysis is difficult because of the interpolation among the markers and the large holes caused by occlusions. To realize our objectives while keeping the advantages of the example-based methods (i.e., reality), we therefore employ the following training samples:
– The time-series reliable volumes of a human body wearing clothing; the reliable volumes are reconstructed with fewer errors by employing a slow but sophisticated method such as [13,14].
– The body-part labels of each voxel in the total shape (i.e., volume).
These give us the following advantages: (1) learning without any marker that would prevent free motion of the body and clothing, and (2) analyzing not sparse points on the surface of a body but its volume. In our training scheme, the body-part label of each voxel is obtained from a reconstructed sample volume wearing clothes in which each body part is colored with a different color. In online analysis, the sample volumes with the labels are compared with an input volume (i.e., visual hull) reconstructed online in order to find similar samples. Using PCA, all the volumes are analyzed in a lower-dimensional eigenspace for quick search: (1) the input visual hull is projected into the eigenspace in order to find samples similar to it, and then (2) the reliable part-labels are acquired from the samples. Although a distinctive 3D shape descriptor (e.g., [25]) is effective for similarity retrieval, it requires more computation. In this work, therefore, characteristic features are extracted from the reconstructed volume with PCA for real-time search. In this paper, one of the following 10 part-labels is allocated to each voxel in a human body: head, torso, right upper-arm, right forearm, left upper-arm, left forearm, right thigh, right lower-leg, left thigh, and left lower-leg. In addition, a special label, non-object, is prepared to be allocated to ghost volumes. By allocating these 11 labels to the input visual hull, body-part labeling and volume refinement are realized simultaneously.
684
N. Ukita, R. Tsuji, and M. Kidode
Time-series sample volumes of each sequence are represented by a trajectory in the eigenspace, like a manifold in the parametric eigenspace method [26]. In terms of dealing with 3D volumes, our problem has the following distinctive difficulties:
Huge dimensions. The number of voxels in a human volume is huge. Since dimension reduction using PCA alone is insufficient for real-time processing, dual hierarchical searches in the eigenspace are implemented (Secs. 3.3 and 4.3).
Difference between an input visual hull and samples. While a sample volume is refined, an input visual hull may include large amounts of ghost volumes. A matching scheme robust to this difference is required (Sec. 4.2).
3 Time-Series Volume Learning

3.1 Generating Reliable Volumes with Part-Labels
The visual hull of a human body with part-colored clothing (Figure 1 (a)) is reconstructed by SFS. Ghost volumes and other errors are then refined using the deformable mesh model [14], as shown in Figure 1 (b), in order to approximate the real volume for preparing samples. Next, the part-labeled image (Figure 1 (c)) is generated by color detection. The colors of the part-labeled images are projected onto the refined volume from multiple viewpoints in order to allocate one of the part-labels to each surface voxel. The inside voxels are labeled by finding the nearest surface voxel. Finally, the reliable volume with the part-labels can be acquired, as shown in Figure 1 (d).
3.2 Volume Learning Based on PCA
For PCA, the dimensions of the volume (i.e., the number of voxels) in all frames must be identical. Therefore, the voxels in a fixed-size 3D bounding box, which is defined so that its centroid coincides with that of the volume in each frame, are extracted. The size of the bounding box is determined so that it can cover the whole-body in respective frames.
Fig. 1. Process flow for generating reliable volumes with part-labels: (a) observed image; (b) reliable 3D volume; (c) part-labeled image; (d) part-labeled reliable 3D volume.
Let $v_t = (v_{t,1}, v_{t,2}, \ldots, v_{t,d})^T$ ($v_{t,i} \in \{0, 1\}$, where 0 and 1 denote non-object and body voxels, respectively) be the d-dimensional voxel vector observed at time t (i.e., Fig. 1 (b)). If T sample volumes are observed in total, a matrix consisting of the sample volumes is expressed by $V = (v_1 - m, v_2 - m, \ldots, v_T - m)$, where m denotes the average of the T volumes. The covariance matrix of the sample volumes, $S = V V^T$, is computed in order to acquire a set of eigenvectors $\{e_i \mid i \in \{1, \ldots, d\}\}$ of S (sorted in descending order of their eigenvalues). With the first k (<< d) eigenvectors, a d-dimensional volume $v_t$ can be approximated by a lower-dimensional vector as follows: $v_t$ is transformed to $y_t$ in the k-dimensional eigenspace by the following linear projection using the matrix $E = (e_1, \ldots, e_k)$ consisting of the k eigenvectors:

$y_t = E^T (v_t - m)$                                    (1)
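A minimal sketch of this learning step is given below. It assumes the sample volumes are stacked column-wise and uses an economy SVD rather than forming the d x d covariance matrix explicitly, which yields the same leading eigenvectors; the function names are illustrative only.

import numpy as np

def learn_eigenspace(V_samples, k):
    # V_samples: (d, T) matrix whose columns are the sample volumes v_1, ..., v_T
    m = V_samples.mean(axis=1, keepdims=True)
    V = V_samples - m                         # centred sample matrix
    # economy SVD: the left singular vectors of V are the eigenvectors of S = V V^T
    U, _, _ = np.linalg.svd(V, full_matrices=False)
    E = U[:, :k]                              # first k eigenvectors, shape (d, k)
    return E, m.ravel()

def project(v, E, m):
    # Eq. (1): y_t = E^T (v_t - m)
    return E.T @ (v - m)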
By concatenating the projected volumes of each observed sequence in order of observed time, time-series volume variations can be represented by a trajectory. Let $\{y^L_1, \ldots, y^L_T\}$ be the samples projected into the eigenspace. Each projected point in a trajectory has its original volume with the part-labels (i.e., Fig. 1 (d)). Figure 2 shows examples of the trajectory and the volumes with the part-labels. From the figure, it can be confirmed that similar volumes are located nearby.
3.3 Hierarchical Volume Learning
The detailed shape of a target is lost in significantly lowered dimensions. In our method, for real-time search while retaining the detailed shape, the volume is analyzed in two stages, namely low-resolution whole-body and high-resolution body-part analyses. After identifying the rough locations of body parts in the whole-body analysis, the detailed shape is acquired in the high-resolution analysis of each body part. For the hierarchical analysis, the processes mentioned in Sec. 3.2 (i.e., division into bounding boxes and dimension reduction based on PCA) are applied not only to the whole-body volume but also to all high-resolution body parts. Consequently, 11 eigenspaces (i.e., one whole-body + 10 body parts) are generated in total. Examples of the bounding boxes are shown in Figure 3. Note that a bounding box may partially overlap other bounding boxes.
Fig. 2. Trajectory with part-labels in a (3D) eigenspace.

Fig. 3. Bounding boxes: (a) whole body, (b) body parts.
4 Shape Analysis: Part-Labeling and Volume Refinement
Figure 4 illustrates the process flow of online shape analysis. Each number in the figure indicates the section that introduces the corresponding process in detail.
Fig. 4. Process flow of shape analysis
4.1 Search from Time-Series Sample Volumes
This section outlines the search algorithm that is employed for both the whole-body and body-part analyses. Time-series volumes at the current frame and the n previous frames are projected into the eigenspace using Formula (1). Let $Y^I_t = (y^I_{t-n}, y^I_{t-n+1}, \ldots, y^I_t)$, where $y^I_t$ denotes the projected point of the input visual hull at t, be the input trajectory in the eigenspace. By comparing the input trajectory $Y^I_t$ with sub-trajectories $Y^L_s = (y^L_{s-n}, y^L_{s-n+1}, \ldots, y^L_s)$ (where $s \in \{n+1, \ldots, T\}$) of the trajectory consisting of sample volumes, sample trajectories similar to the input one are searched for. In general, as the number of previous frames (i.e., n) increases, the stability of the search improves. An increase in n, on the other hand, causes the problems below:
– The search slows down.
– Since a sample sequence must coincide with the input sequence for a long time, a partially matched sample sequence (i.e., a sequence similar to the input sequence for a short time) is ignored.
Therefore, n should be determined in accordance not only with search stability but also with processing time and applicability to recognizing combinations of short-period motions, depending on the application; n = 5 in our experiments. For search in the eigenspace, the summation of the distances between corresponding points of the input and sample sequences is evaluated as the similarity between them.
Practically, the $y^L_s$ in $Y^L_s$ that leads to the following minimum summation $D_t$ is considered to be most similar to the input $y^I_t$:

$D_t = \min_{s \in \{n+1, \ldots, T\}} \sum_{i=-n}^{0} \| y^L_{s+i} - y^I_{t+i} \|$,                    (2)
where $\|\cdot\|$ denotes the norm of a vector. If a selected sample correctly shows the shape/posture of the person observed online, the reliable volume with part-labels can be obtained from the selected sample. It is, however, difficult to find such a sample because the input visual hull may include many errors, while all the sample volumes are refined to be without errors. Furthermore, the processing time increases in proportion to the number of samples if all the samples are compared with the input visual hull. To cope with these problems, the following are employed in our method:
Voxel Reliability. The occurrence probability of a ghost volume in each voxel is evaluated. We call this probability the voxel reliability.
Efficient Search. The nearest neighbor of the input visual hull is found from a small number of samples based on hierarchical groups of all samples.
In what follows, these solutions for the ghost volumes and the processing time are introduced in Sections 4.2 and 4.3, respectively, and then the procedures of our online shape analysis are described in practical order.
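A compact sketch of the trajectory comparison of Eq. (2) is given below; it slides a window of n+1 frames over the projected sample trajectory and keeps the best-matching position. The names are illustrative, and the exhaustive loop is only the baseline that the efficient search of Sec. 4.3 accelerates.

import numpy as np

def nearest_sample_trajectory(Y_input, Y_samples, n):
    # Y_input: (n+1, k) projected input points (y_{t-n}, ..., y_t)
    # Y_samples: (T, k) projected sample trajectory (y_1, ..., y_T)
    best_s, best_D = None, np.inf
    for s in range(n, len(Y_samples)):                       # s indexes y_s (0-based)
        window = Y_samples[s - n:s + 1]                      # sub-trajectory (y_{s-n}, ..., y_s)
        D = np.linalg.norm(window - Y_input, axis=1).sum()   # summed distances, Eq. (2)
        if D < best_D:
            best_s, best_D = s, D
    return best_s, best_D                                    # most similar sample and its D_t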
4.2 Ghost Volume Suppression Using the Voxel Reliability in Search
It is hard to predict from observed images where 3D reconstruction errors due to failures in silhouette extraction occur. This is because the difficulty of silhouette extraction changes depending not only on the target but also on the background scene and image noise. On the other hand, ghost volumes generated by SFS occur in concave areas of a target. That is, the voxel reliability can be analyzed only in accordance with the shape of the target. The voxel reliability is given to each voxel in all sample volumes. The voxel reliabilities in each refined sample volume can be estimated by comparing it with its visual hulls (containing ghost volumes) reconstructed by SFS. Note that the ghost volumes change depending not only on the shape of the volume but also on the geometric configuration between the target and the cameras. Although it is very troublesome and difficult to obtain the visual hulls for various positions/orientations of the target in real observations, the visual hulls can be virtually produced from the refined sample volume as follows:
1. Assume that the extrinsic parameters of the real cameras have been obtained. The projective silhouette of a sample volume, which is virtually located in the visual fields of the cameras, is obtained in each camera.
2. The visual hull is computed from the obtained silhouettes using SFS. In each voxel, the difference between the sample volume and the computed visual hull indicates a ghost volume.
3. Steps 1 and 2 are executed while changing the location/rotation of the sample volume in order to obtain a number of visual hulls.
4. In each voxel of the visual hulls, the number of ghost volumes is counted and its rate (denoted by $r_{t,v}$) is computed by $r_{t,v} = \frac{1}{N_{var}} \sum_i \delta^i_{t,v}$, where
– $N_{var}$ denotes the number of observations in the virtual 3D space and
– $\delta^i_{t,v}$ outputs 0 if the reconstructed results of the v-th voxel in the t-th volume differ between the original reliable volume and the i-th position/orientation in the virtual 3D space, and 1 otherwise.
$r_{t,v}$ is the voxel reliability of the v-th voxel in the t-th sample volume.
5. The voxel reliability corresponding to the reliable t-th volume (denoted by $R_t$) is expressed by the following matrix: $R_t = \mathrm{diag}(r_{t,1}, \ldots, r_{t,d})$.
6. Steps 1–5 are executed for every sample volume in order to acquire the voxel reliabilities of all sample volumes.
With the following projection formula, given by applying the voxel reliability to Formula (1), the projected point of an input visual hull $v^I_t$ observed at t (denoted by $\hat{y}^I_t$) is obtained: $\hat{y}^I_t = E^T R_t (v^I_t - m)$. Using $\hat{y}^I_t$, the adverse effects of ghost volumes in the search can be reduced. To estimate the voxel reliability $R_t$, however, the sample corresponding to the input visual hull $v^I_t$ is required. This is a chicken-and-egg problem. Under the assumption that the difference between subsequent volumes is very small, the voxel reliability at t is estimated based on the sample selected at t − 1.
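The two computations above can be sketched as follows: estimating per-voxel reliabilities by counting agreement between a refined sample volume and its virtually reconstructed visual hulls, and then using those reliabilities as a diagonal weighting in the projection. This is a hedged sketch; the virtual SFS reconstruction itself is not shown, and all names are illustrative.

import numpy as np

def voxel_reliability(sample_volume, virtual_visual_hulls):
    # sample_volume: refined volume as a d-vector of {0, 1}
    # virtual_visual_hulls: list of d-vectors reconstructed by SFS from the sample
    # placed at N_var virtual positions/orientations (reconstruction not shown)
    agree = [(vh == sample_volume).astype(float) for vh in virtual_visual_hulls]
    return np.mean(agree, axis=0)          # r_{t,v}: fraction of agreeing reconstructions

def project_with_reliability(v_input, E, m, r_prev):
    # v_input: input visual hull (with ghost volumes) as a d-vector
    # r_prev: reliabilities of the sample selected at t-1, used as R_t = diag(r_prev)
    return E.T @ (r_prev * (v_input - m))  # \hat{y}^I_t = E^T R_t (v^I_t - m)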
4.3 Efficient Search from Time-Series Sample Volumes
For speeding up the search in the eigenspace, the likelihood expressed by Formula (2) should be evaluated only for a small number of samples near an input projected point. To determine these limited samples, the eigenspace is divided into several sub-regions and the samples in each sub-region are counted in advance. In online search, the sub-region SR_P including the input projected point P is first selected. The similar samples are then found from the samples in SR_P. Figure 5 illustrates examples. As a sub-region becomes smaller, as shown in Figure 5 (a), the number of selected samples decreases. However, SR_P may contain no sample, as shown in (a). To cope with this problem, larger sub-regions are searched following a small sub-region (shown in Figure 5 (b)).
Fig. 5. Efficient hierarchical search: (a) m-divided space, (b) n-divided space (n < m).

Fig. 6. Additional search regions: (a) search failure, (b) search-region expansion.
However, the true nearest neighbor (indicated by "True nearest neighbor point" in Figure 6 (a)) may exist in a sub-region other than SR_P. Let (1) NSR_i be one of the sub-regions neighboring SR_P and (2) d_i and d_p be the distances from P to NSR_i and to the nearest neighbor in SR_P, respectively, as illustrated in Figure 6 (b). To always find the true nearest neighbor, the neighboring sub-regions, each of which satisfies d_i < d_p, are also searched. With the above-mentioned search algorithm, both high speed and search stability for always finding the nearest neighbor can be attained.
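The sub-region search can be sketched with a uniform grid over the eigenspace. This is only an illustration under our own assumptions (a fixed cell size, a brute-force scan over occupied cells); the class and variable names are hypothetical.

import numpy as np
from collections import defaultdict

class SubRegionSearch:
    def __init__(self, points, cell):
        # points: (T, k) projected samples; cell: edge length of a sub-region
        self.points, self.cell = points, cell
        self.buckets = defaultdict(list)
        for idx, p in enumerate(points):
            self.buckets[tuple(np.floor(p / cell).astype(int))].append(idx)

    def nearest(self, q):
        key = tuple(np.floor(q / self.cell).astype(int))
        cand = list(self.buckets.get(key, []))
        radius = 1
        while not cand:                      # SR_P empty: expand to larger regions
            for k2, idxs in self.buckets.items():
                if max(abs(a - b) for a, b in zip(k2, key)) == radius:
                    cand.extend(idxs)
            radius += 1
        best = min(cand, key=lambda i: np.linalg.norm(self.points[i] - q))
        dp = np.linalg.norm(self.points[best] - q)
        # also search neighboring sub-regions NSR_i whose boundary distance d_i < d_p
        for k2, idxs in self.buckets.items():
            lo = np.array(k2, dtype=float) * self.cell
            hi = lo + self.cell
            d_i = np.linalg.norm(np.maximum(lo - q, 0) + np.maximum(q - hi, 0))
            if 0 < d_i < dp:
                for i in idxs:
                    d = np.linalg.norm(self.points[i] - q)
                    if d < dp:
                        best, dp = i, d
        return best, dp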
4.4 Rough Shape Analysis of the Low-Resolution Whole-Body
In what follows, the procedures using the above-mentioned functions are described in practical order. A low-resolution volume reconstructed online is projected into the eigenspace in order to estimate the following two pieces of information:
Target orientation. Each volume is reconstructed in the world coordinate system. That is, even if the shapes/postures of a person are identical, the reconstructed volumes differ for different orientations. The orientations of the volumes must therefore be aligned.
Body-part regions. In order to compare the samples of each high-resolution body part with the region of this body part in an input visual hull, this region must be identified in the input.
This information is estimated as follows (a sketch of the orientation search is given after this list):
1. The centroid of the input visual hull is estimated.
2. The input visual hull is rotated by θ around the vertical axis through the centroid.
3. The volume inside the bounding box, whose size is identical to the size of the whole body used in sample training, is projected into the eigenspace of the whole body. The n previous volumes are also projected.
4. With $D_t$ in Formula (2), the sample nearest to the projected point is selected.
5. Steps 2, 3, and 4 are repeated while changing θ.
6. The orientation of the input visual hull (denoted by $\acute{\theta}$) is determined so that $D_t$ with regard to $\acute{\theta}$ is minimum. The input visual hull is then rotated by $\acute{\theta}$, and the sample volume corresponding to the minimum $D_t$ is selected as the nearest neighbor.
7. The bounding box including each body part in the input visual hull is acquired by overlapping the selected sample volume with its body-part labels and their bounding boxes.
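Below is a minimal sketch of the orientation search in steps 2–6, assuming the cropped low-resolution volume has already been centred on its centroid (so rotating about the array centre approximates rotating about the centroid) and using a single-frame distance as a stand-in for the full trajectory distance D_t of Eq. (2). SciPy is used only for the voxel-grid rotation; the names are illustrative.

import numpy as np
from scipy.ndimage import rotate   # nearest-neighbour rotation of the voxel grid

def estimate_orientation(volume, angles, E, m, Y_samples):
    # volume: binary low-resolution volume inside its centred bounding box (3-D array)
    # angles: candidate rotations (degrees) about the vertical axis
    # E, m: whole-body eigenspace; Y_samples: (T, k) projected sample points
    best = (None, None, np.inf)                       # (theta, sample index, distance)
    for theta in angles:
        rot = rotate(volume, theta, axes=(0, 1), reshape=False, order=0)
        y = E.T @ (rot.ravel().astype(float) - m)     # Eq. (1) for this orientation
        dists = np.linalg.norm(Y_samples - y, axis=1)
        s = int(np.argmin(dists))
        if dists[s] < best[2]:
            best = (theta, s, dists[s])
    return best                                       # orientation, nearest sample, distance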
4.5 Detailed Shape Analysis of the High-Resolution Body-Parts
The volume inside the bounding box of each body part is then analyzed in order to estimate the label probabilities in the high-resolution volume. The label probability of each voxel expresses how appropriate each of the 11 labels, including non-object, is as the label of this voxel.
1. An input high-resolution visual hull is rotated by $\acute{\theta}$ and then divided into the regions of its body parts.
2. The volume inside each bounding box is projected into the corresponding eigenspace. The n previous volumes are also projected. The following steps are executed for each body part.
3. $D_t$ in Formula (2) is evaluated. The samples whose $D_t$ is less than a predefined threshold are selected.
4. Let $N_{data}$ be the number of selected samples. The likelihood of a selected sample d (denoted by $L^d_t$, where $d \in \{1, \ldots, N_{data}\}$) is the inverse of the $D_t$ of d. With this likelihood, the probability of the l-th part label in the v-th voxel (denoted by $P_{v,l}$) is defined by $P_{v,l} = \frac{\sum_{d=1}^{N_{data}} L^d_t S^d_{v,l}}{\sum_{d=1}^{N_{data}} L^d_t}$, where $S^d_{v,l}$ outputs 1 if the part-label of the v-th voxel in d is l, and 0 otherwise (a sketch is given below).
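Step 4 can be sketched as below, where the candidate samples and their voxel labels are assumed to be given as arrays; the names and the array layout are our own illustration, not the paper's data structures.

import numpy as np

def label_probabilities(D, labels, threshold, n_labels=11):
    # D: (N,) trajectory distances D_t of the candidate samples for one body part
    # labels: (N, d) integer part-label of every voxel in each candidate sample
    sel = np.where(D < threshold)[0]                 # samples passing the threshold
    L = 1.0 / D[sel]                                 # likelihood L_t^d = inverse of D_t
    d = labels.shape[1]
    P = np.zeros((d, n_labels))
    for w, lab in zip(L, labels[sel]):
        P[np.arange(d), lab] += w                    # accumulate w where S^d_{v,l} = 1
    return P / L.sum()                               # P_{v,l} for every voxel and label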
4.6 Part-Labeling and Volume Refinement
By integrating the label probabilities estimated in all bounding boxes, the high-resolution whole-body volume with the label probabilities is generated. Note that several bounding boxes overlap with each other. In such an overlapping region, the label probabilities of the same voxel may differ among different bounding boxes. The average of the label probabilities with regard to each body-part label, i.e., $\frac{1}{N_{part}} \sum_{p=1}^{N_{part}} P^{p,v,l}$, where $N_{part}$ denotes the number of physical body parts (namely 10 in our method) and $P^{p,v,l}$ denotes $P_{v,l}$ in body part p, is given to a voxel. In addition, spatially neighboring voxels should have the same label because each body part must be one cluster. For harmonization among the neighboring voxels, therefore, the weighted average of the label probabilities in the $N_{nv} \times N_{nv} \times N_{nv}$ neighboring voxels (denoted by $\hat{P}_{v,l}$) is then computed for each voxel:

$\hat{P}_{v,l} = \frac{1}{\sum_{i=1}^{N_{nv}^3} w_i} \sum_{i=1}^{N_{nv}^3} w_i \bar{P}_{i,l}$,

where $w_i$ denotes the distance between v and its neighboring voxel i. In our experiments, $N_{nv} = 3$. Finally, in each voxel of the whole body, the label with the maximum label probability $\hat{P}_{v,l}$ is considered to be the body-part label.
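A minimal sketch of this fusion and smoothing step is shown below. It assumes the averaged probabilities are stored as a 4-D array and, since the paper only states that the weights depend on the voxel distance, uses inverse-distance weights as an assumption; boundary voxels wrap around in this simplified version, and the names are illustrative.

import numpy as np

def fuse_and_label(P, n_nv=3):
    # P: (X, Y, Z, L) label probabilities already averaged over overlapping boxes
    r = n_nv // 2
    offs = [(i, j, k) for i in range(-r, r + 1)
                      for j in range(-r, r + 1)
                      for k in range(-r, r + 1)]
    # assumed inverse-distance weights over the N_nv^3 neighbourhood
    w = np.array([1.0 / (1.0 + np.sqrt(i * i + j * j + k * k)) for i, j, k in offs])
    acc = np.zeros_like(P)
    for (i, j, k), wi in zip(offs, w):
        acc += wi * np.roll(P, shift=(i, j, k), axis=(0, 1, 2))  # shifted neighbours
    P_hat = acc / w.sum()                 # weighted average, an approximation of \hat{P}_{v,l}
    return P_hat.argmax(axis=-1)          # final part label (or non-object) per voxel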
5 Experiments
We conducted experiments with the loose-fitting 10-colored clothing shown in Figure 1 (a). A person wearing this clothing danced while seven synchronized cameras captured the person at 30 fps (1024 × 768 pixels). 5000 frames were captured for training samples. All the sample images were prepared with one subject. With these images, the time-series volumes of the whole body were reconstructed. The voxel sizes in the low- and high-resolution volumes were 60mm3 and 20mm3, respectively. While the voxel dimension of a low-resolution volume was always 23 × 11 × 31 = 8743, those of the high-resolution volumes changed depending on the body part (between 7479 and 26825). The dimension number of each eigenspace is determined based on the cumulative contribution rate. In our experiments, the dimension number, which allows us to analyze the human volumes with satisfactory accuracy, was empirically determined so that the cumulative contribution rate was 75%, which is obtained using 44 eigenvectors in the whole-body volumes, for example.
Fig. 7. Experimental results at frames #50, #100, #150, #200, and #250 (1st row: observed images, 2nd row: low(left)/high(right)-resolution visual hulls, 3rd row: shape-analysis results).
Using these samples recorded in the eigenspaces, shape analysis was conducted. For the input image sequences, two subjects were observed separately; one (155 cm) was the person who was captured for the samples. Since their heights were different (155 cm and 175 cm), the reconstructed volumes of the other subject were normalized based on the ratio between their heights. They wore the same-shape clothing as that used for the samples and performed the same dance as in the samples. Partial results are shown in Figure 7. An example of shape refinement is shown in Figure 8. This result demonstrates that ghost volumes generated in the region surrounded by the two arms could be removed. An example of the results for the other subject is shown in Figure 9. It can be confirmed that the volume could be refined even if the input visual hull was corrupted due to the failure of silhouette extraction, as indicated by the red circles. We also conducted experiments using tight clothes under the same conditions as those of the above experiments; samples with the tight clothes were prepared. An example of the results is shown in Figure 10. It can be confirmed that our method is also effective for this kind of common clothing. The processing rate using a PC with a 1.8 GHz Opteron was 24 fps. Although this is a little slower than the video rate (30 fps), a moving person can be observed as shown in the results. As explained above, we can confirm that our proposed method almost satisfies the purposes introduced at the end of Section 1, namely real-time processing, body-part identification, and volume refinement. For quantitative evaluation, the following two volumes, acquired by observing the colored clothing used for the samples, were compared:
True volume. Volumes reconstructed by the method for obtaining samples, using color information.
Result. Volumes acquired without color information by the proposed method.
Table 1. Error analysis: the percentages of false-positive (FP) and false-negative (FN) voxels in proportion to the size of each body part.

Part              | Same subject   | Another subject | Tight clothing
                  | FP(%)   FN(%)  | FP(%)   FN(%)   | FP(%)   FN(%)
head              |  7.2     5.2   |  7.9     5.7    |  4.7     5.8
torso             |  3.9     3.1   |  5.2     5.8    |  3.2     3.2
right upper-arm   |  9.9     6.4   |  9.1     8.1    |  4.9     5.6
right forearm     | 11.4    10.8   | 13.2    12.2    |  7.2     6.0
left upper-arm    |  6.5    11.1   | 10.1     9.6    |  4.0     3.6
left forearm      |  9.5    10.0   | 14.8    11.0    |  8.3     7.1
right thigh       |  5.1     7.1   |  6.4     4.4    |  4.8     5.1
right lower-leg   | 11.8     8.7   | 11.6     9.4    |  9.5     7.6
left thigh        |  3.3     6.3   |  7.7     5.1    |  6.8     6.5
left lower-leg    |  7.5     6.4   | 12.9     8.1    | 11.4     9.7
Fig. 8. Volume refinement results (original and refined) shown from two viewpoints.

Fig. 9. Results for the other subject.

Fig. 10. Results for another dance.

Fig. 11. Wrong labeling results (Error1 and Error2).
Table 1 shows the results; errors increased due to the following factors:
– Subject change between training and analysis: errors for "Another subject" were worse than those for "Same subject".
– Non-rigid motion of loose-fitting clothing: (1) errors for "Tight clothing" were smaller than those of the other two results, and (2) errors in rigid body parts (e.g., head) were smaller than those in non-rigid parts (e.g., forearm).
Although shape analysis was almost successful in most frames, large errors in body-part labeling occurred in a few frames. In the example shown in Figure 11 (Error1), the right upper-arm label is allocated to a part of the right thigh.
In analyzing the high-resolution body parts, similar samples are searched for independently in each body part. The independent search allows us to recognize a variety of shapes/postures of the whole body, each of which is composed by a combination of body-part samples, even if those whole-body shapes/postures are not included in the samples. However, by simply integrating (i.e., averaging) all the body parts after the independent search, a combination error such as the example in Figure 11 may occur. Consistent results could be acquired by mutually exchanging the search results of the body parts. Furthermore, although our method relies strongly on the search for similar samples, useful restrictions on the shape/posture of a human body could improve the method (e.g., "each body part must be one cluster" and "the volume of each body part should be almost constant"). While a false-negative region in an input visual hull could be recovered in the example shown in Figure 9, a true-positive region was sometimes removed, as shown in the example in Figure 11 (Error2). Such an error is caused by small differences between the input visual hull and the samples. These errors are inevitable when new people not included in the samples are observed. To cope with this problem, a number of samples of the same motion should be used. For using a large data set, a powerful embedding [27] is useful because it can possibly represent the observed volumes well by probabilistic lower-dimensional data that has a beneficial effect on discrimination. It may also improve the versatility of the method, which is one of its critical limitations: how to generalize to new clothes and new motions. In principle, to recognize them, corresponding volumes have to be in the samples, which increases the number of samples. Therefore, a powerful embedding [27] is crucial for versatility.
6 Concluding Remarks
We proposed an online method for simultaneously refining the reconstructed volume of a human body wearing loose-fitting clothing and identifying the body parts in it. In our method, reliable and high-precision time-series volumes with body-part labels are learned in advance. Each volume reconstructed online is compared with these reliable volumes in order to find similar reliable data with body-part labels. The voxel reliability plays a crucial role in making the matching robust to ghost volumes. Applying PCA to the volumes, together with hierarchical and coarse-to-fine analyses, allows us to speed up and stabilize the search procedure.
This study was supported by the National Project on Development of High Fidelity Digitization Software for Large-Scale and Intangible Cultural Assets.
References 1. Poppe, R.: Vision-based human motion analysis: An overview. CVIU 108(2), 4–18 (2007) 2. Cheung, G., Kanade, T., Bouguet, J., Holler, M.: A real time system for robust 3D voxel reconstruction of human motions. CVPR 2, 714–720 (2000)
3. Wu, X., Takizawa, O., Matsuyama, T.: Parallel Pipeline Volume Intersection for Real-Time 3D Shape Reconstruction on a PC Cluster. In: The 4th IEEE International Conference on Computer Vision Systems (ICVS) (2006) 4. Mikic, I., Trivedi, M., Hunter, E., Cosman, P.: Human Body Model Acquisition and Tracking using Voxel Data. IJCV 53(3), 199–223 (2003) 5. Hou, S., Galata, A., Caillette, F., Thacker, N., Bromiley, P.: Real-time Body Tracking Using a Gaussian Process Latent Variable Model. In: ICCV (2007) 6. Kakadiaris, I.A., Metaxas, D.: 3D Human Body Model Acquisition from Multiple Views. IJCV 30(3), 191–218 (1998) 7. Cheung, G., Baker, S., Kanade, T.: Shape-from-silhouette of articulated objects and its use for human body kinematics estimation and motion capture. In: CVPR, vol. 1, pp. 77–84 (2003) 8. Balan, A.O., Sigal, L., Black, M.J., Davis, J., Haussecker, H.W.: Detailed Human Shape and Pose from Images. In: CVPR (2007) 9. Sundaresan, A., Chellappa, R.: Segmentation and Probabilistic Registration of Articulated Body Models. In: ICPR (2006) 10. Bhat, K.S., Twigg, C.D., Hodgins, J.K., Khosla, P.K., Popovic, Z., Seitz, S.M.: Estimating Cloth Simulation Parameters from Video. In: Eurographics/SIGGRAPH SCA (2003) 11. Salzmann, M., Urtasun, R., Fua, P.: Local Deformation Models for Monocular 3D Shape Recovery. In: CVPR (2008) 12. Rosenhahn, B., Kersting, U., Powell, K., Klette, R., Klette, G., Seidel, H.-P.: A system for articulated tracking incorporating a cloth model. Machine Vision and Applications 18(1), 25–40 (2007) 13. Kutulakos, K.N., Seitz, S.M.: A Theory of Shape by Space Carving. IJCV 38(3), 199–218 (2000) 14. Nobuhara, S., Matsuyama, T.: Deformable Mesh Model for Complex Multi-Object 3D Motion Estimation from Multi-Viewpoint Video. In: 3DPVT (2006) 15. Bradley, D., Boubekeur, T., Heidrich, W.: Accurate Multi-View Reconstruction Using Robust Binocular Stereo and Surface Matching. In: CVPR (2008) 16. Vlasic, D., Baran, I., Matusik, W., Popovic, J.: Articulated Mesh Animation from Multi-view Silhouettes. In: ACM SIGGRAPH (2008) 17. Ahmed, N., Theobalt, C., Dobrev, P., Seidel, H.-P., Thrun, S.: Robust Fusion of Dynamic Shape and Normal Capture for High-quality Reconstruction of Timevarying Geometry. In: CVPR (2008) 18. Scholz, V., et al.: Garment Motion Capture Using Color-Coded Patterns. Computer Graphics Forum 24(3), 439–448 (2005) 19. White, R., Crane, K., Forsyth, D.: Capturing and Animating Occluded Cloth. ACM TOG (SIGGRAPH) 26(3) (2007) 20. Herda, L., Urtasun, R., Fua, P.: Hierarchical implicit surface joint limits for human body tracking. CVIU 99(2), 189–209 (2005) 21. Sidenbladh, H., Black, M.J., Sigal, L.: Implicit Probabilistic Models of Human Motion for Synthesis and Tracking. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) ECCV 2002. LNCS, vol. 2350, pp. 784–800. Springer, Heidelberg (2002) 22. Agarwal, A., Triggs, B.: Tracking Articulated Motion using a Mixture of Autoregressive Models. In: Pajdla, T., Matas, J(G.) (eds.) ECCV 2004. LNCS, vol. 3023, pp. 54–65. Springer, Heidelberg (2004) 23. Grauman, K., Shakhnarovich, G., Darrell, T.: Inferring 3D Structure with a Statistical Image-Based Shape Model. In: ICCV, pp. 641–648 (2003)
24. Park, S.I., Hodgins, J.K.: Capturing and Animating Skin Deformations in Human Motion. ACM TOG (SIGGRAPH) 25(3), 881–889 (2006) 25. Kazhdan, M., Funkhouser, T., Rusinkiewicz, S.: Rotation invariant spherical harmonic representation of 3D shape descriptors. In: Eurographics/SIGGRAPH SGP, pp. 156–164 (2003) 26. Murase, H., Nayar, S.K.: Visual learning and recognition of 3-D objects from appearance. IJCV 14(1), 5–24 (1995) 27. Lawrence, N.D.: Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research 6, 1783– 1816 (2005)
Kernel Codebooks for Scene Categorization Jan C. van Gemert, Jan-Mark Geusebroek, Cor J. Veenman, and Arnold W.M. Smeulders Intelligent Systems Lab Amsterdam (ISLA), University of Amsterdam, Kruislaan 403, 1098 SJ, Amsterdam, The Netherlands {jvgemert,J.M.Geusebroek,C.J.Veenman,ArnoldSmeulders}@uva.nl
Abstract. This paper introduces a method for scene categorization by modeling ambiguity in the popular codebook approach. The codebook approach describes an image as a bag of discrete visual codewords, where the frequency distributions of these words are used for image categorization. There are two drawbacks to the traditional codebook model: codeword uncertainty and codeword plausibility. Both of these drawbacks stem from the hard assignment of visual features to a single codeword. We show that allowing a degree of ambiguity in assigning codewords improves categorization performance for three state-of-the-art datasets.
1 Introduction
This paper investigates automatic scene categorization, which focuses on the task of assigning images to predefined categories. For example, an image may be categorized as a beach, office or street scene. Applications of automatic scene categorization may be found in content-based retrieval, object recognition, and image understanding. One particularly successful scene categorization method is the codebook approach. The codebook approach is inspired by the word-document representation used in text retrieval, first applied to images in texture recognition [1]. The codebook approach allows classification by describing an image as a bag of features, where image features, typically SIFT [2], are represented by discrete visual prototypes. These prototypes are defined beforehand in a given vocabulary. A vocabulary is commonly obtained by following one of two approaches: an annotation approach or a data-driven approach. The annotation approach obtains a vocabulary by assigning meaningful labels to image patches [3,4,5], for example sky, water, or vegetation. In contrast, a data-driven approach applies vector quantization on the features using k-means [6,7,8,9,10,11] or radius-based clustering [12]. Once a vocabulary is obtained, this vocabulary is employed by the codebook approach to label each feature in an image with its best representing codeword. The frequencies of these codewords in an image form a histogram, which is subsequently used in the scene categorization task. One drawback of the codebook approach is the hard assignment of codewords in the vocabulary to image feature vectors. This may be appropriate for text but not for sensory data with a large variety in appearance.
Fig. 1. An example showing the problems of codeword ambiguity in the codebook model. The small dots represent image features, the labeled red circles are codewords found by unsupervised clustering. The triangle represents a data sample that is well suited to the codebook approach. The difficulty with codeword uncertainty is shown by the square, and the problem of codeword plausibility is illustrated by the diamond.
The hard assignment gives rise to two issues: codeword uncertainty and codeword plausibility. Codeword uncertainty refers to the problem of selecting the correct codeword out of two or more relevant candidates. The codebook approach merely selects the best representing codeword, ignoring the relevance of other candidates. The second drawback, codeword plausibility, denotes the problem of selecting a codeword without a suitable candidate in the vocabulary. The codebook approach assigns the best fitting codeword, regardless of the fact that this codeword is not a proper representative. Figure 1 illustrates both these problems. Accordingly, the hard assignment of codewords to image features overlooks codeword uncertainty, and may label image features by non-representative codewords.
We propose an uncertainty modeling method for the codebook approach. In effect, we apply techniques from kernel density estimation to allow a degree of ambiguity in assigning codewords to image features. We argue that retaining ambiguity between features is a more suitable representation than hard assignment of a codeword to an image feature. By using kernel density estimation, the uncertainty between codewords and image features is lifted beyond the vocabulary and becomes part of the codebook model.
This paper is organized as follows. The next section gives an overview of the related literature on codebook-based scene categorization. Section 3 introduces four types of ambiguity in the codebook model. We show the performance of our method on three datasets in Sect. 4. Finally, Sect. 5 concludes the paper.
2 Related Work
The traditional codebook approach [1,13] treats an image as a collection of local features, where each feature is represented by a codeword from the codebook vocabulary.
One extension of the traditional codebook approach aims to capture co-occurrences between codewords in the image collection. Typically, this co-occurrence is captured with a generative probabilistic model [14,15]. To this end, Fei-Fei and Perona [7] introduce a Bayesian hierarchical model for scene categorization. Their goal is a generative model that best represents the distribution of codewords in each scene category. They improve on Latent Dirichlet Allocation (LDA) [15] by introducing a category variable for classification. The proposed algorithm is tested on a dataset of 13 natural scene categories, where it outperforms the traditional codebook approach by nearly 30%. The work by Fei-Fei and Perona is extended by Quelhas et al. [11], who investigate the influence of training data size. Moreover, Bosch et al. [6] show that probabilistic latent semantic analysis (pLSA) improves on LDA. The contributions on codeword ambiguity in this paper are easily extended with co-occurrence modeling.
Besides co-occurrence modeling, other improvements on the codebook approach focus on the vocabulary. A semantic vocabulary inspired by Oliva and Torralba [16] is presented by Vogel and Schiele [5]. The authors construct a vocabulary by labeling image patches with a semantic label, for example sky, water or vegetation. The effectiveness of this semantic codebook vocabulary is shown in a scene categorization task. Moreover, a similar approach [4] provides the basis for the successful results on TRECVID news video by Snoek et al. [17], who draw inspiration from Naphade and Huang [18]. Furthermore, Winn et al. [19] concentrate on a universal codebook vocabulary, whereas Perronnin et al. [10] focus on class-specific vocabularies. In contrast to annotating a vocabulary, Jurie and Triggs [12] compare clustering techniques to obtain a data-driven vocabulary. Specifically, they show that radius-based clustering outperforms the popular k-means clustering algorithm, and we will make use of this observation below.
Since the codebook approach treats an image as a histogram of visual words, the spatial structure between words is lost. Spatial structure is incorporated by Lazebnik et al. [8], who extend the work of Grauman and Darrell [20] with a spatial pyramid matching scheme. Further research on incorporating spatial information in the codebook model focuses on regions of interest [21], object segmentation [22], and shape masks [23]. To demonstrate the modularity of our work, we incorporate spatial pyramid matching because of the excellent performance reported by Lazebnik et al. [8].
3 Visual Word Ambiguity by Kernel Codebooks
Given a vocabulary of codewords, the traditional codebook approach describes an image by a distribution over codewords. For each word w in the vocabulary V, the traditional codebook model estimates the distribution of codewords in an image by

$CB(w) = \frac{1}{n} \sum_{i=1}^{n} \begin{cases} 1 & \text{if } w = \arg\min_{v \in V} D(v, r_i); \\ 0 & \text{otherwise,} \end{cases}$                    (1)
where n is the number of regions in an image, $r_i$ is image region i, and $D(w, r_i)$ is the distance between a codeword w and region $r_i$. Basically, an image is represented by a histogram of word frequencies that describes the probability density over codewords. A robust alternative to histograms for estimating a probability density function is kernel density estimation [24]. Kernel density estimation uses a kernel function to smooth the local neighborhood of data samples. A one-dimensional estimator with kernel K and smoothing parameter σ is given by

$\hat{f}(x) = \frac{1}{n} \sum_{i=1}^{n} K_\sigma(x - X_i)$,                    (2)
where n is the total number of samples and $X_i$ is the value of sample i. Kernel density estimation requires a kernel with a given shape and size. The kernel size determines the amount of smoothing between data samples, whereas the shape of the kernel is related to the distance function [14]. In this paper we use the SIFT descriptor, which draws on the Euclidean distance as its distance function [2]. The Euclidean distance assumes a Gaussian distribution of the SIFT features, with identity as the covariance. Hence, the Euclidean distance is paired with a Gaussian-shaped kernel,

$K_\sigma(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{x^2}{2\sigma^2}\right)$.                    (3)

The Gaussian kernel assumes that the variation between a data sample and a codeword may be described by a normal distribution. This normal distribution requires a scale parameter σ, which determines the size of the kernel. The kernel size needs to be tuned to the appropriate degree of smoothing between data samples. This smoothing determines the degree of similarity between data samples, and is dependent on the dataset, the feature length, and the range of the feature values. These dependencies change for various datasets. Therefore, in the experiments we will tune the kernel size by cross-validation. In summary, the size of the kernel depends on the data and the image descriptor, whereas the shape of the kernel follows directly from the distance function.
In the codebook model, the histogram estimator of the codewords may be replaced by a kernel density estimator. Moreover, a suitable kernel (like the Gaussian kernel) allows kernel density estimation to become part of the codewords, instead of the data samples. Specifically, when the used kernel is symmetric, $K_\sigma(x - X_i) = K_\sigma(X_i - x)$, it trivially follows that there is no effective distinction between placing the kernel on the data sample or placing the kernel on a codeword. That is, if the centre of the kernel coincides with the codeword position, the kernel value at the data sample represents the same probability as if the centre of the kernel coincides with the data sample. Hence, a symmetric kernel allows transferring the kernel from the data samples to the codewords, yielding a kernel codebook,

$KCB(w) = \frac{1}{n} \sum_{i=1}^{n} K_\sigma(D(w, r_i))$,                    (4)
Table 1. The relationship between various forms of codeword ambiguity and their properties
                  Best Candidate           Multiple Candidates
Constant Weight   Traditional Codebook     Codeword Uncertainty
Kernel Weighted   Codeword Plausibility    Kernel Codebook
where n is the number of regions in an image, $r_i$ is image region i, $D(w, r_i)$ is the distance between a codeword w and region $r_i$, and σ is the smoothing parameter of kernel K. In essence, a kernel codebook smoothes the hard mapping of features in an image region to the codeword vocabulary. This smoothing models two types of ambiguity between codewords: codeword uncertainty and codeword plausibility. Codeword uncertainty indicates that one image region may distribute probability mass to more than one codeword. Conversely, codeword plausibility signifies that an image feature may not be close enough to warrant representation by any relevant codeword in the vocabulary. Each of these two types of codeword ambiguity may be modeled individually. Codeword uncertainty,

$UNC(w) = \frac{1}{n} \sum_{i=1}^{n} \frac{K_\sigma(D(w, r_i))}{\sum_{j=1}^{|V|} K_\sigma(D(v_j, r_i))}$,                    (5)
distributes a constant amount of probability mass to all relevant codewords, where relevancy is determined by the ratio of the kernel values for all codewords v in the vocabulary V. Thus, codeword uncertainty retains the ability to select multiple candidates, but does not take the plausibility of a codeword into account. In contrast, codeword plausibility,

$PLA(w) = \frac{1}{n} \sum_{i=1}^{n} \begin{cases} K_\sigma(D(w, r_i)) & \text{if } w = \arg\min_{v \in V} D(v, r_i); \\ 0 & \text{otherwise,} \end{cases}$                    (6)

selects for an image region $r_i$ the best fitting codeword w and gives that word an amount of mass corresponding to the kernel value of that codeword. Hence, codeword plausibility will give a higher weight to more relevant data samples, but cannot select multiple codeword candidates. The relation between codeword plausibility, codeword uncertainty, the kernel codebook model, and the traditional codebook model is indicated in Table 1. An example of the weight distributions of the types of codeword ambiguity with a Gaussian kernel is shown in Fig. 2(a). Furthermore, in Fig. 2(b) we show an example of various codeword distributions corresponding to different types of codeword ambiguity. Note the weight difference in codewords for the data samples represented by the diamond and the square. Where the diamond contributes full weight in the traditional codebook, it barely adds any weight in the kernel codebook and codeword plausibility models. This may be advantageous, since it incorporates the implausibility of outliers. Furthermore, in the traditional codebook, the square adds weight to one single codeword, whereas the kernel codebook and codeword uncertainty add weight to the two relevant codewords.
Fig. 2. (a) An example of the weight distribution of a kernel codebook with a Gaussian kernel, where the data and the codewords are taken from Fig. 1. (b) Various codeword distributions, according to Table 1 (Traditional Codebook, Visual Word Uncertainty, Visual Word Plausibility, Kernel Codebook), corresponding to different types of codeword ambiguity. These distributions are based on the kernels shown in Fig. 2(a), where the square, diamond and triangle represent the image features.
In the latter two methods, the uncertainty between the two codewords is not assigned solely to the best fitting word, but divided over both codewords. Hence, the kernel codebook approach can be used to introduce various forms of ambiguity in the traditional codebook model. We will experimentally investigate the effects of all forms of codeword ambiguity in Sect. 4.
The ambiguity between codewords will likely be influenced by the number of words in the vocabulary. When the vocabulary is small, essentially different image parts will be represented by the same vocabulary element. On the other hand, a large vocabulary allows more expressive power, which will likely benefit the hard assignment of the traditional codebook. Therefore, we speculate that codeword ambiguity will benefit smaller vocabularies more than larger vocabularies. We will experimentally investigate the vocabulary size in Sect. 4.
Since codewords are image descriptors in a high-dimensional feature space, we envision a relation between codeword ambiguity and feature dimensionality. With a high-dimensional image descriptor, codeword ambiguity will probably become more significant. If we consider a codeword as a high-dimensional sphere in feature space, then most feature points in this sphere will lie on a thin shell near the surface. Hence, in a high-dimensional space, most feature points will be close to the boundary between codewords, which introduces ambiguity between codewords. See Bishop's textbook on pattern recognition and machine learning [14, Chapter 1, pages 33–38] for a thorough explanation and illustration of the curse of dimensionality. Consequently, increasing the dimensionality of the image descriptor may increase the level of codeword ambiguity. Therefore, our improvement over the traditional codebook model should become more pronounced in a high-dimensional feature space. We will experimentally investigate the effects of the dimensionality of the image descriptor in the next section.
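The four assignment strategies of Eqs. (1) and (4)–(6) can be sketched compactly. The sketch below is our own illustration: the names and array shapes are assumptions, and the distances are computed densely for clarity rather than speed.

import numpy as np

def codeword_histograms(features, vocab, sigma):
    # features: (n, D) region descriptors r_i; vocab: (|V|, D) codewords
    dist = np.linalg.norm(features[:, None, :] - vocab[None, :, :], axis=2)  # D(w, r_i)
    K = np.exp(-dist ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma) # Eq. (3)
    n, V = dist.shape
    nearest = dist.argmin(axis=1)                    # best-fitting codeword per region

    hard = np.bincount(nearest, minlength=V) / n                             # Eq. (1)
    kcb = K.sum(axis=0) / n                                                  # Eq. (4)
    unc = (K / K.sum(axis=1, keepdims=True)).sum(axis=0) / n                 # Eq. (5)
    pla = np.zeros(V)
    np.add.at(pla, nearest, K[np.arange(n), nearest])                        # Eq. (6)
    return hard, unc, pla / n, kcb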
4 Experiments
We experimentally compare codeword ambiguity modeling against the traditional codebook approach for three large and varied datasets: the fifteen natural scene categories from Lazebnik et al. [8], Caltech-101 by Fei-Fei et al. [25], and Caltech-256 by Griffin et al. [26]. We start our experiments with an in-depth analysis of our methods on the set of fifteen natural scene categories, after which we transpose these findings to the experiments on the two Caltech sets. For our experimental setup we closely follow Lazebnik et al. [8], since this work has shown excellent performance on these datasets.
4.1 Experimental Setup
To obtain reliable results, we repeat the experimental process 10 times. Thus, we select 10 random subsets from the data to create 10 pairs of train and test data. For each of these pairs we create a codeword vocabulary on the train set. This codeword vocabulary is used by both the codebook and the codeword ambiguity approaches to describe the train and the test set. For classification, we use an SVM with a histogram intersection kernel. Specifically, we use libSVM [27] and its built-in one-versus-one approach for multi-class classification. We use 10-fold cross-validation on the train set to tune the parameters of the SVM and the size of the codebook kernel. The classification rate we report is the average of the per-class recognition rates, which in turn are averaged over the 10 random test sets. For image features we again follow Lazebnik et al. [8] and use SIFT descriptors sampled on a regular grid. A grid has been shown to outperform interest point detectors in image classification [7,9,12]. Hence, we compute all SIFT descriptors on 16x16 pixel patches, computed over a dense grid sampled every 8 pixels. We create a codeword vocabulary by radius-based clustering. Radius-based clustering ensures an even distribution of codewords over feature space and has been shown to outperform the popular k-means algorithm [12]. Our radius-based clustering algorithm is similar to the clustering algorithm of Jurie and Triggs [12]. However, whereas they use mean-shift with a Gaussian kernel to find the densest point, we select the densest point by maximizing the number of data samples within its radius r.
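A minimal sketch of this radius-based clustering is given below; it assumes the descriptors fit in memory (in practice one would cluster a random subsample), and the names are illustrative.

import numpy as np

def radius_based_vocabulary(X, r, n_words):
    # X: (N, D) descriptors; r: cluster radius; n_words: desired vocabulary size
    X = X.copy()
    vocab = []
    while len(vocab) < n_words and len(X) > 0:
        # densest point: the sample with the most neighbours within radius r
        dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
        counts = (dist < r).sum(axis=1)
        centre = X[counts.argmax()]
        vocab.append(centre)
        # remove all samples covered by this codeword and repeat
        X = X[np.linalg.norm(X - centre, axis=1) >= r]
    return np.array(vocab)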
4.2 Experiment 1: In-Depth Analysis on the Scene-15 Dataset
The first dataset we consider is the Scene-15 dataset, which was compiled by several researchers [7,8,16]. The Scene-15 dataset consists of 4485 images spread over 15 categories. The fifteen scene categories contain 200 to 400 images each and range from natural scenes like mountains and forests to man-made environments like kitchens and offices. Figure 3 shows examples from the scene dataset. We use an experimental setup identical to that of Lazebnik et al. [8], and select 100 random images per category as the train set and the remaining images as the test set.
Kernel Codebooks for Scene Categorization
bedroom
(FP)
highway
(OT)
kitchen
(FP)
coast
(OT)
industrial
forest
(OT)
(L)
inside city
(OT)
(FP)
mountain
(OT)
living room
office
(FP)
open country
street
(OT)
suburb
(OT)
(FP)
store
703
(L)
tall building
(OT)
Fig. 3. Example images from the Scene-15 dataset. Each category is labeled with the annotator, where (OT) denotes Oliva and Torralba [16], (FP) is Fei-Fei and Perona [7], and (L) refers to Lazebnik et al . [8].
We start the experiments with an in-depth analysis of the types of codeword ambiguity, vocabulary size and feature dimensionality. To evaluate feature dimensionality we project the 128 length SIFT descriptor to a lower dimensionality. This dimension reduction is achieved with principal component analysis, which reduces dimensionality by projecting the data on a reduced-dimensional basis while retaining the highest variance in the data. We compute a reduced basis on each complete training set, after which we project the train set and corresponding test set on this basis. We reduce the feature length from 128 dimensions to 12 and 60 dimensions. However, because of space constraints we omit the results for the 60-dimensional features since they show the same trend as the other dimensions. In evaluating vocabulary size, we tune the radius in the radius-based clustering algorithm to construct eight differently sized vocabularies. The vocabulary sizes we consider are {25, 50, 100, 200, 400, 800, 1600, 3200}. The results for all types of codeword ambiguity evaluated for various vocabulary sizes and the two feature dimensionalities (12 and 128) are given in Fig. 4. We start the analysis of the results in Fig. 4 with the various types of codeword ambiguity. The results show that codeword uncertainty outperforms all other types of ambiguity for all dimensions and all vocabulary sizes. This performance gain is not always significant, however. Nevertheless, for 128 dimensions and a vocabulary size of 200 it can be seen that codeword uncertainty already outperforms hard assignment with a 400-word vocabulary, and this trend holds for larger vocabulary size pairs. On the other end of the performance scale there is codeword plausibility, which always yields the worst results. A kernel codebook outperforms hard assignment for smaller vocabulary sizes, however for larger
704
J.C. van Gemert et al.
Dimensionality: 12
Fig. 4. Classification performance of various types of codeword ambiguity for the Scene-15 dataset over various vocabulary sizes and feature dimensions (left: 12 dimensions, right: 128 dimensions).

Fig. 5. Comparing the overlap of the class labels as predicted by various types of codeword ambiguity for the Scene-15 dataset (left: 12 dimensions, right: 128 dimensions).
Kernel Codebooks for Scene Categorization
705
the kernel sizes as found with cross-validation reveals that the kernel size for codeword plausibility is indeed large. The kernel size for codeword plausibility is typically 200, whereas the other types of codeword ambiguity range around 100. Furthermore, this label overlap between hard assignment and codeword plausibility is highest with a small number of dimensions. This may be due to the fact that a higher dimensional space leaves more room for implausible features than a lower dimensional space. On the other end of the spectrum we find the kernel codebook and hard assignment pair, which share the least number of class labels. This low label overlap may be expected, since these two types represent the extremes of the types of codeword ambiguity. Further differences of label overlap can be seen between the low- and the high-dimensional feature space. In a highdimensional feature space there tends to be less correlation between class labels. This reduced label overlap in a high-dimensional space may be explained by the increased effectiveness of codeword ambiguity in a high-dimensional space. A further trend in label overlap is the increased overlap for an increasing vocabulary size. Increasing the vocabulary size yields an increased performance, which requires more labels to be predicted correctly. We attribute the increase in label overlap for all methods to those images that can be predicted correctly by using a larger vocabulary. This link between increased performance and increased class label overlap also explains that the class label overlap is generally high between all types of codeword ambiguity. To show the modularity of our approach and improve results we incorporate the spatial pyramid by Lazebnik et al . [8]. The spatial pyramid divides an image into a multi-level pyramid of increasingly fine subregions and computes a codebook descriptor for each subregion. We use the 128 dimensional feature size since this gives the best results. Moreover, we find a vocabulary of 200 codewords, since this number is also used by Lazebnik et al . [8]. The results for the various forms of codeword ambiguity for the first two levels of the spatial pyramid are shown in Fig. 6. Note that the codeword uncertainty outperforms the hard assignment of the traditional codebook for all levels in the pyramid. Moreover, codeword Scene-15 Classification Rate (%)
0.76 0.74 0.72
0.70 0.68 0
Hard Assignment Only Uncertainty Only Plausibility Kernel Codebook 1 2 Spatial Pyramid Level
forest suburb tallbuilding mountain coast street highway insidecity store opencountry office bedroom kitchen livingroom industrial
Scene-15 per category
0.4
0.5
Hard Assignment Codeword Uncertainty 0.6 0.7 0.8 0.9 Classification Rate (%)
Fig. 6. Comparing the performance on the Scene-15 dataset of various types of codeword ambiguity using the spatial pyramid (left), and per category (right).
For the Scene-15 dataset, codeword uncertainty gives the highest improvement at level 0 of the spatial pyramid, which is identical to a codebook model without any spatial structure. The classification results for level 0 of the pyramid, split out per category, are shown in Fig. 6. Note that by using codeword uncertainty the performance of all categories is similar to or improves upon that of a traditional codebook. Due to small implementation differences, our re-implementation of the original paper [8] performs slightly under their reported results. However, we use the same re-implementation for all methods of codeword ambiguity. Thus we do not bias any method by a slightly different implementation.
4.3 Experiment 2 and 3: Caltech-101 and Caltech-256
Our second set of experiments is conducted on the Caltech-101 [25] and Caltech-256 [26] datasets. The Caltech-101 dataset contains 8677 images, divided into 101 object categories, where the number of images in each category varies from 31 to 800. Caltech-101 is a diverse dataset, but the objects are all centered and artificially rotated to a common position. In Fig. 7 we show some example images from the Caltech-101 set. Some of the problems of Caltech-101 are solved by the Caltech-256 dataset. The Caltech-256 dataset holds 29780 images in 256 categories, where each category contains at least 80 images. The Caltech-256 dataset is still focused on single objects. However, in contrast to the Caltech-101 set, each image is not manually rotated to face one direction. We report classification performance on both sets. Our experimental results for both Caltech-101 and Caltech-256 are generated by using 30 images per category for training. For testing, we use 50 images per category for Caltech-101 and 25 images per category for Caltech-256. These numbers of train and test images are typically used for these sets [8,26]. We use 128 dimensions and compare the traditional hard assignment with codeword uncertainty, since the latter has been shown to give the best results on the Scene-15 dataset. The classification results per spatial pyramid level are shown in Fig. 9. For both sets, the codeword uncertainty method outperforms the traditional codebook.
Fig. 7. Examples from the Caltech-101 set. Top: the top 4 classes where our method improves most (binocular 50 / 60, lobster 23 / 33, bonsai 37 / 47, platypus 27 / 47); Bottom: the 4 classes where our method decreases performance (Leopards 87 / 78, wildcat 20 / 13, waterlilly 48 / 43, flamingo head 60 / 56). The numbers indicate the classification rate (hard / uncertainty).
Fig. 8. Examples from the Caltech-256 set. Top: the top 4 classes where our method improves most (revolver 27 / 35, desk-globe 33 / 41, cereal-box 20 / 29, photocopier 33 / 44); Bottom: the 4 classes where our method decreases performance most (gorilla 18 / 15, goose 7 / 4, cannon 10 / 6, hummingbird 17 / 14). The numbers indicate the classification rate (hard / uncertainty).
Fig. 9. Classification performance (%) of hard assignment and codeword uncertainty over spatial pyramid levels for Caltech-101 (left) and Caltech-256 (right).
4.4 Summary of Experimental Results
The experiments on the Scene-15 dataset in Figures 4 and 6 show that codeword plausibility hurts performance. Codeword plausibility is dominated by those few image features that are closest to a codeword. In essence, codeword plausibility ignores the majority of the features, which leads us to conclude that it is better to have an implausible codeword represent an image feature than no codeword at all. Therefore, codeword uncertainty yields the best results, since it models ambiguity between codewords without taking codeword plausibility into account. The results in Fig. 4 indicate that codeword ambiguity is more effective for higher-dimensional features than for lower dimensions. We attribute this to an increased robustness to the curse of dimensionality. The curse predicts that increasing the dimensionality will increase the fraction of feature vectors on or near the boundary of codewords. Hence, increasing the dimensionality will increase codeword uncertainty. Furthermore, Fig. 4 shows that a larger vocabulary mostly benefits hard assignment, and asymptotically increases performance. Thus, since our ambiguity modeling approach starts with a higher performance, it stands to reason that our model will reach the maximum performance sooner. The results over the Scene-15, Caltech-101, and Caltech-256 datasets are summarized in Table 2. This table shows the relative improvement of codeword uncertainty over hard assignment.
Table 2. The relationship between the data set size and the relative performance of codeword uncertainty over hard assignment for 200 codewords.

Data set      Train set size   Test set size   Performance Increase (%)
Scene-15           1500             2985            4.0 ± 1.7
Caltech-101        3030             5050            6.3 ± 1.9
Caltech-256        7680             6400            9.3 ± 3.0
As can be seen in this table, the relative performance gain of ambiguity modeling increases as the number of scene categories grows. A growing number of scene categories requires a higher expressive power of the codebook model. Since the effects of ambiguity modeling increase with a growing number of categories, we conclude that ambiguity modeling is more expressive than the traditional codebook model. What is more, the results of all experiments show that codeword uncertainty outperforms the traditional hard assignment over all dimensions, all vocabulary sizes, and all datasets.
5 Conclusion
This paper presented a fundamental improvement on the popular codebook model for scene categorization. The traditional codebook model uses hard assignment to represent image features with codewords. We replaced this basic property of the codebook approach by introducing uncertainty modeling, which is appropriate as feature vectors are only capable of capturing part of the intrinsic variation in visual appearance. This uncertainty is achieved with techniques based on kernel density estimation. We have demonstrated the viability of our approach by improving results on recent codebook methods. These results are shown on three state-of-the-art datasets, where our method consistently improves over the traditional codebook model. What is more, we found that our ambiguity modeling approach suffers less from the curse of dimensionality, reaping higher benefits in a high-dimensional feature space. Furthermore, with an increasing number of scene categories, the effectiveness of our method becomes more pronounced. Therefore, as future image features and datasets are likely to increase in size, our ambiguity modeling method will have more and more impact.
Multiple Tree Models for Occlusion and Spatial Constraints in Human Pose Estimation
Yang Wang and Greg Mori
School of Computing Science, Simon Fraser University, Canada
{ywang12,mori}@cs.sfu.ca
Abstract. Tree-structured models have been widely used for human pose estimation, in either 2D or 3D. While such models allow efficient learning and inference, they fail to capture additional dependencies between body parts, other than kinematic constraints between connected parts. In this paper, we consider the use of multiple tree models, rather than a single tree model for human pose estimation. Our model can alleviate the limitations of a single tree-structured model by combining information provided across different tree models. The parameters of each individual tree model are trained via standard learning algorithms in a single tree-structured model. Different tree models can be combined in a discriminative fashion by a boosting procedure. We present experimental results showing the improvement of our approaches on two different datasets. On the first dataset, we use our multiple tree framework for occlusion reasoning. On the second dataset, we combine multiple deformable trees for capturing spatial constraints between non-connected body parts.
1 Introduction
Estimating human body poses from still images is arguably one of the most difficult object recognition problems in computer vision. The difficulties of this problem are manifold – humans are articulated objects, and can bend and contort their bodies into a wide variety of poses; the parts which make up a human figure are varied in appearance (due to clothing), which makes them difficult to reliably detect; and parts often have small support in the image or are occluded. In order to reliably interpret still images of human figures, it is likely that multiple cues relating different parts of the figure will need to be exploited. Many existing approaches to this problem model the human body as a combination of rigid parts, connected together in some fashion. The typical configuration constraints used are kinematic constraints between adjacent parts, such as torso-upper half-limb connection, or upper-lower half-limb connection. This set of constraints has a distinct computational advantage – since the constraints form a tree-structured model, inferring the optimal pose of the person using this model is tractable. However, this computational advantage comes at a cost. Simply put, the single tree model does not adequately model the full set of relationships between parts of the body. Relationships between parts not connected in the kinematic tree cannot be directly captured by this model.
The main contribution of this paper is a framework for modeling human figures as a collection of trees. We argue that this framework has the advantage of being able to locally capture constraints between the parts which constitute the model. With a collection of trees, a global set of constraints can be modeled. We demonstrate that the computational advantages of tree-structured models can be kept, and provide tractable algorithms for learning and inference in these multiple tree models. We present two applications of our framework. The first application uses the multiple tree model framework for occlusion reasoning. The second application combines multiple deformable trees to capture a richer set of spatial constraints between body parts. A preliminary version of this work appeared in a workshop paper [24] in which spatial constraints between parts were modeled. In this paper we demonstrate how the multiple tree model can be used for occlusion reasoning. We provide a model, new inference algorithms, and experimental validation. We also provide an analysis comparing our approach to existing approaches for combining multiple trees [10,23]. The rest of this paper is organized as follows. Section 2 reviews previous work. Sections 3, 4 and 5 give the details of our approach. Section 6 presents our experimental results. Section 7 concludes this paper.
2 Related Work
One of the earliest lines of research related to finding people in images is in the setting of detecting and tracking pedestrians. Starting with the work of Hogg [5], there has been a lot of work in tracking with kinematic models in both 2D and 3D. Forsyth et al. [3] provide a survey of this work. Some of these approaches are exemplar-based. For example, Toyama & Blake [21] use 2D exemplars for people tracking. Mori & Malik [11] and Sullivan & Carlsson [19] address the pose estimation problem as 2D template matching using pre-stored exemplars upon which joint locations have been marked. In order to deal with the complexity due to variations of pose and clothing, Shakhnarovich et al. [14] adopt a brute-force search, using a variant of locality sensitive hashing for speed. Exemplar-based approaches are effective when dealing with regular human poses. However, they cannot handle poses that rarely occur. See Fig. 7 for some examples. There are many approaches which explicitly model the human body as an assembly of parts. Ju et al. [7] introduce the "cardboard people" model, where body parts are represented by a set of connected planar patches. Felzenszwalb & Huttenlocher [2] develop the tree-structured pictorial structure (PS) model and apply it to 2D human pose estimation. There is also some work using non-tree structured models. Sudderth et al. [18] introduce a non-parametric belief propagation method with occlusion reasoning for hand tracking. Sigal & Black [15] use a similar idea for pose estimation. Both use loopy belief propagation (LBP) for inference, so convergence is not guaranteed. Ren et al. [13] use bottom-up detections of parallel lines as part hypotheses, and combine these hypotheses with various pairwise part constraints via integer quadratic programming.
While this sidesteps the problems of LBP, the solution relies heavily on the performance of low-level limb detectors. Song et al. [17] detect corner features in video sequences and model them using a decomposable triangulated graph, where the graph structure is found by a greedy search. Ioffe & Forsyth [6] use a mixture-of-trees model for human tracking. Again, both rely on good low-level detectors, and cannot search the images exhaustively. Our work is closely related to some recent work on learning discriminative models for localization. Ramanan [12] uses a variant of Conditional Random Fields (CRF) [8] for training localization models for articulated objects, such as human figures, horses, etc. Sminchisescu et al. [16] use a "mixture of experts" approach for visual tracking. Our work is also related to boosting on structured outputs. Boosting was originally proposed for classification problems. Recently, it has been adopted for various tasks where the outputs have certain structures (e.g., chains, trees, graphs). For example, Torralba et al. [20] use boosted random fields for object detection with contextual information. Truyen et al. [22] use a boosting algorithm on Markov Random Fields (MRF) for multilevel activity recognition.
3 Modeling the Human Body
In our method we use a combination of tree-structured deformable models for human pose estimation. The basic idea is to model a human figure as a weighted combination of several tree-structured deformable models. The parameters of each tree model are learned from training data in a discriminative fashion. We first describe how we model the human body, and then demonstrate how this model of multiple trees can be used for modeling spatial constraints and occlusion reasoning. We also relate the pictorial structures model [2], defined with pixel likelihoods, to the CRF model [12], defined with patch likelihoods. This connection turns out to be useful when we develop our occlusion reasoning scheme in Sect. 5.

3.1 Single Tree-Structured Deformable Body Models
Consider a human body model with K parts, where each part is represented by an oriented rectangle of fixed size. We can construct an undirected graph G = (V, E) to represent the K parts (Fig. 1(a)). Each part is represented by a vertex v_i ∈ V, and there exists an undirected edge e_ij = (v_i, v_j) ∈ E between vertices v_i and v_j if v_i and v_j have some dependency. Let l_i = (x_i, y_i, θ_i) be a random variable encoding the image position and orientation of the i-th part; we denote the configuration of the K-part model as L = (l_1, l_2, ..., l_K). Given the model parameters Θ and assuming no occlusions, the conditional probability of L in an image I can be written as:

P(L|I, Θ) ∝ P(L|Θ) P(I|L, Θ) = P(L|α) ∏_i P(I|l_i, β_i)    (1)
where we explicitly decompose P(L|I, Θ) into the prior term P(L|α) and the product of local likelihood terms P(I|l_i, β_i). Each P(I|l_i, β_i) is a local likelihood for part i. Assuming pixel independence, we can write each local likelihood as:

P(I|l_i, β_i) = ∏_{u∈Ω(l_i)} P_{l_i(u)}(f(I_u)) ∏_{u∈Υ−Ω(l_i)} P_{bg(u)}(f(I_u)) ∝ ∏_{u∈Ω(l_i)} P_{l_i(u)}(f(I_u)) / P_{bg(u)}(f(I_u))    (2)

In the above equation, we have used the following notation: u is a pixel location in the image I, and I_u is the image evidence at pixel u. In this paper, we use binary edges as our image evidence. f(I_u) is a function returning 1 if I_u is an edge point, and 0 otherwise. Ω(l_i) is the set of pixels enclosed by part i as placed by l_i. Υ is the set of all pixels in the whole image. P_{l_i(u)} is a binomial distribution indicating how likely pixel u is to be an edge point under the part configuration l_i. P_{bg(u)} is a binomial distribution for the background. Letting P_{l_i(u)}(f(I_u)) / P_{bg(u)}(f(I_u)) = exp(β_{i(u)} f(I_u)), we have:

P(I|l_i, β_i) ∝ ∏_{u∈Ω(l_i)} exp(β_{i(u)} f(I_u)) = exp( ∑_{u∈Ω(l_i)} β_{i(u)} f(I_u) )    (3)
             = exp( β_i^T f_i(I(l_i)) )    (4)
where f_i(I(l_i)) is the part-specific feature vector extracted from the oriented image patch at location l_i; in our case, it is a binary vector of edges for part i. β_i is a part-specific parameter that favors certain edge patterns for an oriented rectangular patch I(l_i) in image I. In our formulation, β_i is simply the concatenation of {β_{i(u)}}_{u∈Ω(l_i)}. We visualize β_i in Fig. 1(d). Following previous work [12], we assume the prior term P(L|α) is defined in terms of the relative locations of the parts as follows:

P(L|α) ∝ exp( ∑_{(i,j)∈E} ψ(l_i − l_j) )    (5)
Most previous approaches use Gaussian shape priors, ψ(l_i − l_j) ∝ N(l_i − l_j; μ_i, Σ_i) [2]. However, since we are dealing with images spanning a wide range of poses and aspects, Gaussian shape priors seem too rigid. Instead we choose a spatial prior based on discrete binning (Fig. 1(c)), similar to the one used in Ramanan [12]:

ψ(l_i − l_j) = α_i^T bin(l_i − l_j)    (6)
bin(·) is a vector of all zeros with a single one for the occupied bin. αi is a parameter that favors certain spatial and angular bins for part i with respect to its parent j. This spatial prior captures more intricate distributions than a Gaussian prior.
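For illustration, a minimal sketch of such a binned spatial prior is given below. The bin counts, bin size, and angular resolution are hypothetical choices; the text does not specify them here.

```python
import numpy as np

def bin_feature(li, lj, n_xy_bins=10, bin_size=8, n_theta_bins=12):
    """One-hot vector bin(l_i - l_j) over relative (dx, dy, dtheta) bins.
    li, lj are (x, y, theta) triples; bin counts/sizes are illustrative."""
    dx, dy = li[0] - lj[0], li[1] - lj[1]
    dth = (li[2] - lj[2]) % (2 * np.pi)
    bx = int(np.clip(dx // bin_size + n_xy_bins // 2, 0, n_xy_bins - 1))
    by = int(np.clip(dy // bin_size + n_xy_bins // 2, 0, n_xy_bins - 1))
    bt = int(dth / (2 * np.pi) * n_theta_bins) % n_theta_bins
    feat = np.zeros(n_xy_bins * n_xy_bins * n_theta_bins)
    feat[(bx * n_xy_bins + by) * n_theta_bins + bt] = 1.0
    return feat

def psi(li, lj, alpha_i, **kwargs):
    """psi(l_i - l_j) = alpha_i^T bin(l_i - l_j)."""
    return float(alpha_i @ bin_feature(li, lj, **kwargs))
```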
Fig. 1. Representation of a human body: (a) human body represented as a 10-part model; (b) corresponding kinematic tree structured model; (c) discrete binning for spatial prior; (d) visualization of the learned edge-based appearance model βi for each body part. Dark areas correspond to small values of βi , and bright areas correspond to large values of βi .
Combining (1), (4) and (5), we obtain the following formulation:

P(L|I, Θ) ∝ exp( ∑_{(i,j)∈E} ψ(l_i − l_j) + ∑_{i=1}^{K} φ(l_i) )    (7)
where φ(l_i) is a potential function that models the local image evidence for part i located at l_i, defined as φ(l_i) = β_i^T f_i(I(l_i)). Equation (7) is exactly the Conditional Random Field (CRF) formulation of the human pose estimation problem in Ramanan [12]. This shows that the two formulations of the problem, pictorial structures [2] and the CRF [12], are in fact equivalent; the only difference is the criterion used for parameter learning, i.e., maximizing the joint likelihood (ML) or the conditional likelihood (CL). In the following, we use the CRF formulation in (7), but we will return to the pictorial structure formulation of (1) when we develop our occlusion reasoning method. To facilitate tractable learning and inference, G is usually assumed to form a tree T = (V, E_T) [2,12]. In particular, most work uses the kinematic tree (Fig. 1(b)) as the underlying tree model. Inference in a single tree-structured model can be done by message passing. Using 3D convolution, one can search exhaustively over all part locations in an image without relying on feature detectors. Learning of the model parameters can be done in closed form (ML) or by gradient ascent (CL). See [12,24] for details.
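The exhaustive evaluation of the local potentials φ(l_i) = β_i^T f_i(I(l_i)) over all (x, y, θ) can be sketched as a correlation of the binary edge map with rotated copies of the part template β_i. This is only a sketch of the idea; the template shapes and the orientation set are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import fftconvolve

def part_response_maps(edge_map, beta, angles_deg):
    """Evaluate phi(l_i) = beta_i^T f_i(I(l_i)) for every (x, y, theta):
    cross-correlate the binary edge map (H, W) with rotated copies of the
    upright per-pixel weight template beta (h, w)."""
    responses = []
    for theta in angles_deg:
        b = rotate(beta, theta, reshape=True, order=1)
        # cross-correlation == convolution with the flipped template
        responses.append(fftconvolve(edge_map, b[::-1, ::-1], mode='same'))
    return np.stack(responses)   # (len(angles_deg), H, W)
```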
3.2 Multiple Tree Models
In our work we model the human body by a collection of multiple tree-structured models. For example, one can use the two tree models in Fig. 4 to model the kinematic constraints and the occlusion relationships between the legs. One can also use different tree structures (e.g., Fig. 6) to model the spatial constraints that are not captured in the kinematic tree. The weighting parameters which
combine the multiple tree models can also be learned in a discriminative fashion using boosting (Sect. 4). The final form of our model is:

F(L, I; Θ) = ∑_t w_t f_t(L, I; Θ)    (8)
where f_t(L, I; Θ) is a single tree model with tree structure τ_t, and w_t is the weight associated with this single tree model. The optimal pose L* can be obtained in our model as L* = arg max_L F(L, I; Θ). In the next section, we describe the algorithm for learning all the model parameters Θ = {w_t, Θ_t}.
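A minimal sketch of Eq. (8): assuming the per-tree scores f_t have already been evaluated on a common discretization of candidate configurations (for example, per-part score maps produced by the tree inference of Sect. 3.1), the combined score is simply their weighted sum.

```python
import numpy as np

def combine_tree_scores(tree_scores, tree_weights):
    """F = sum_t w_t f_t over a shared grid of candidates; tree_scores is a
    list of same-shaped arrays holding each f_t evaluated on that grid."""
    F = sum(w * f for w, f in zip(tree_weights, tree_scores))
    best = np.unravel_index(np.argmax(F), F.shape)
    return F, best
```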
4 Spatial Constraints with Multiple Trees
Our learning algorithm for spatial constraints is based on AdaBoost.MRF, proposed by Truyen et al. [22]. Given an image I, the problem of pose estimation is to find the best part labeling L* that maximizes F(L, I), i.e., L* = arg max_L F(L, I). F(L, I) is known as the "strong learner" in the boosting literature. Given a set of training examples (I^i, L^i), i = 1, 2, ..., N, F(L, I) is found by minimizing the following loss function L_O:

L_O = ∑_i ∑_L exp( F(I^i, L) − F(I^i, L^i) )    (9)

We assume F(L, I) is a linear combination of a set of so-called "weak learners", i.e., F(I, L) = ∑_t w_t f_t(L, I). The t-th weak learner f_t(L, I) and its corresponding weight w_t are found by minimizing the loss function defined above, i.e., (f_t, w_t) = arg min_{f,w} L_O. In our case, we choose the weak learner as f(L, I) = log p(L|I). To achieve computational tractability, we assume each weak learner is defined on a tree model. If we can successfully learn a set of tree-based weak learners f_t(L, I) and their weights w_t, the combination of these weak learners captures more dependencies than a single tree model. At the same time, inference in this model is still tractable, since each component is a tree.

Optimizing L_O is difficult, so Truyen et al. [22] suggest optimizing the following alternative loss function: L_H = ∑_i exp( −F(L^i, I^i) ). It can be shown that L_H is an upper bound of the original loss function L_O, provided that we can make sure ∑_j w_j = 1. In Truyen et al. [22], the requirement ∑_j w_j = 1 is met by scaling down each previous weak learner's weight by a factor of 1 − w_t, i.e., w_j ← w_j(1 − w_t) for j = 1, 2, ..., t − 1, so that ∑_{j=1}^{t−1} w_j(1 − w_t) + w_t = 1, since ∑_{j=1}^{t−1} w_j = 1. In practice, we find that this trick sometimes scales previous weak learners down to zero weight. So we use a slightly different method, scaling down every weak learner's weight up to t by a factor of 1/(1 + w_t). It can be shown that we still have ∑_{j=1}^{t} w_j = 1. Figure 2 shows the overall algorithm.
Input: D data pairs (I_i, L_i), i = 1, 2, ..., D; graphs {G_i = (V_i, E_i)}
Output: set of trees with learned parameters and weights
Select a set of spanning trees {τ}; choose the number of boosting iterations T
Initialize {w_{i,0} = 1/D} and w_1 = 1
for each boosting round t = 1, 2, ..., T
    Select a spanning tree τ_t
    /* Add a weak learner */
    Θ_t = arg max_Θ ∑_i w_{i,t−1} log P_{τ_t}(L_i, I_i | Θ)
    f_t = log P_{τ_t}(L | I, Θ_t)
    if t > 1 then select the step size 0 < w_t < 1 using line search
    /* Update the strong learner */
    F_t = 1/(1 + w_t) F_{t−1} + w_t/(1 + w_t) f_t
    /* Scale down the previous learners' weights */
    w_j ← w_j /(1 + w_t), for j = 1, 2, ..., t
    /* Re-weight training data */
    w_{i,t} ∝ w_{i,t−1} exp(−w_t f_{i,t})
end for
Output {τ_t}, {Θ_t} and {w_t}, t = 1, 2, ..., T
Fig. 2. Algorithm of boosted multiple trees
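The loop in Fig. 2 can be transcribed as the following Python skeleton. The helpers fit_tree, log_posterior, and line_search are hypothetical placeholders for the weighted ML tree fitting, the tree log-posterior evaluation, and the step-size search described in the text; this is a sketch, not the authors' implementation.

```python
import numpy as np

def boosted_multiple_trees(data, spanning_trees, fit_tree, log_posterior,
                           line_search, T):
    """Skeleton of Fig. 2. data is a list of (I_i, L_i) pairs."""
    D = len(data)
    data_w = np.full(D, 1.0 / D)          # w_{i,0} = 1/D
    taus, thetas, tree_w = [], [], []
    for t in range(T):
        tau = spanning_trees[t % len(spanning_trees)]   # select a spanning tree
        theta = fit_tree(tau, data, data_w)             # weighted ML step
        f = np.array([log_posterior(tau, theta, L, I) for (I, L) in data])
        w_t = 1.0 if t == 0 else line_search(f, taus, thetas, tree_w, data)
        # add the new learner, then scale all weights by 1/(1 + w_t)
        tree_w = [w / (1.0 + w_t) for w in tree_w] + [w_t / (1.0 + w_t)]
        taus.append(tau)
        thetas.append(theta)
        # re-weight the training data and renormalize
        data_w = data_w * np.exp(-w_t * f)
        data_w /= data_w.sum()
    return taus, thetas, tree_w
```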
developed for density modeling problem. It is not designed for classification or prediction. Although one can use MoT as the spatial prior in a generative fashion, it is not clear how to learn the model in a discriminative way. Instead, our model is trained discriminatively, and our objective function is more closely tied to inference. Another similar work is the tree-reweighted message passing (TRW)[23]. TRW aims to approximate the partition function in MRF, it does not answer the question of learning a good model for recognition, i.e., TRW assumes the MRF model is given, and it simply tries to solve the inference problem. Plus, TRW is an iterative algorithm, and its convergence is still an unsolved problem.
5 Occlusion Reasoning with Multiple Trees
In this section, we apply the multiple tree framework to the “double counting of image evidence” problem in human pose estimation illustrated in the top row of Fig. 3, where the same image patch is used twice to explain two different body parts. Previous approaches [9] have focused on using strong priors of body poses to solve this problem. However, these approaches are limited to the cases of normal poses and known activities. We believe the proper way to solve this problem is to introduce occlusion reasoning in the model. In our multiple tree framework, we can define one tree for the kinematic constraint (e.g., Fig. 4(a)), and a second
Fig. 3. Illustration of the "double counting of image evidence" problem: the top row (single tree model) shows how the same image patch is used to explain two body parts; the bottom row (multiple tree model) shows how our occlusion reasoning mechanism using multiple trees alleviates this problem.
Fig. 4. Two tree structures used on the CMU MoBo dataset. We use dashed lines to indicate occlusion relationships, rather than spatial constraints.
tree for the occlusion relationships (e.g., Fig. 4(b)). In this section, we discuss how to incorporate occlusion reasoning into the human body model introduced in Sect. 3, and how to do inference in a tree model involving occlusion relationships (see Fig. 4(b)). Before we proceed, we first clarify the terminology we are using. By “occlusion reasoning”, we do not necessarily mean the body parts in the image are occluding each other, instead we use “occlusion” to refer to the particular problem of using the same image patch to explain different body parts, as illustrated in Fig. 3. Occlusion-sensitive formulation. The factorization of the global likelihood into local likelihood terms in Eq. 1 is valid only if the local terms P (I|li , βi ) for i ∈ {1..K} are independent. This assumption holds when there are no occlusions among different parts. In order to obtain a similar decomposition (hence distributed inference) when occlusions exist, we augment the configuration li of part i with a set of binary hidden variables zi = {zi(u) }u∈Υ , similar to [18]. Note that there is a binary variable zi(u) for each pixel. Let zi(u) = 0 if pixel u in the area enclosed by part i is occluded by any other part, and 1 otherwise. If a part
is partially occluded, only a subset of these binary variables are zero. Letting Z = {z_1, z_2, ..., z_K}, the local likelihood term (2) can be rewritten as:

P(I|L, Z, Θ) = ∏_i P(I|l_i, z_i, β_i)    (10)
            ∝ ∏_i ∏_{u∈Ω(l_i)} [ P_{l_i(u)}(f(I_u)) / P_{bg(u)}(f(I_u)) ]^{z_i(u)}    (11)
            = ∏_i ∏_{u∈Ω(l_i)} exp( z_i(u) β_{i(u)} f(I_u) )    (12)
It is important to note that if all the occlusion variables z_i are consistent, the global likelihood P(I|L, Z, Θ) truly factorizes as (12). Similar to [18], we enforce the consistency of the occlusion variables using the following function:

η(l_j, z_{i(u)}; l_i) = 0 if l_j occludes l_i, u ∈ Ω(l_j), and z_{i(u)} = 1; 1 otherwise.

The consistency of the occlusion variables z_i and z_j can then be enforced by the following potential function:

ψ^O(l_i, z_i, l_j, z_j) = ∏_{u∈Υ} η(l_j, z_{i(u)}; l_i) η(l_i, z_{j(u)}; l_j)    (13)
Letting E_O be the set of edges corresponding to pairs of parts that are prone to occlusions, and defining P_O(L, Z) ∝ ∏_{(i,j)∈E_O} ψ^O_{i,j}(l_i, z_i, l_j, z_j), we obtain the final occlusion-sensitive version of our model:

P(L|I, Z, Θ) ∝ P(L|α) P_O(L, Z) P(I|L, Z, β)    (14)
Occlusion-sensitive message passing. Now we discuss how to do message passing that involves the occlusion variables z_i. Similar to previous work [18,15], we assume that potentially occluding parts have a known relative depth in order to simplify the formulation. In general, one could introduce another discrete hidden variable indicating the relative depth order between parts and perform inference for each value. Our inference scheme is similar to [18] and is based on the following intuition. Suppose part j is occluding part i and we have a distribution P(l_j); we can use P(l_j) to calculate an occlusion probability P[z_i(u) = 0] for each pixel u. We can then discount the image evidence at pixel u according to P[z_i(u) = 0] when we use that evidence to infer the configuration l_i. If P[z_i(u) = 0] is close to 1, pixel u has a high probability of being claimed by part j, and in this case we discount more of the image evidence at u. In the extreme case where P[z_i(u) = 0] approaches 0 for all u ∈ Υ, the scheme is equivalent to inference without occlusion reasoning. Consider the BP message sent from l_j to (l_i, z_i) during message passing. At this point, we already have a pseudo-marginal P̂(l_j|I) (it is the true marginal P(l_j|I)
if the underlying graph structure is a tree and the message is passed from the root to the leaves). If l_i lies in front of l_j (remember that we know the depth order), the BP message μ_{j,i(u)}(z_{i(u)}) is uninformative. If l_i is occluded and l_j is the only potentially occluding part, we first determine an approximation to the marginal occlusion probability ν_{i(u)} ≈ Pr[z_{i(u)} = 0]. If we think of P̂(l_j|I) as a 3D image over (x, y, θ), then ν_{i(u)} (which can be thought of as a 2D image) can be efficiently calculated by convolving P̂(l_j|I) with a rotated version of a uniform rectangular filter (with size proportional to the size of l_j) for each orientation, and then summing over the θ dimension. The BP approximation for l_i can then be written in terms of these marginal occlusion probabilities (see [18] for the rationale behind (15)):
P(I|l_i) ∝ ∏_{u∈Ω(l_i)} [ ν_{i(u)} + (1 − ν_{i(u)}) P_{l_i(u)}(f(I_u)) / P_{bg(u)}(f(I_u)) ]    (15)
        = ∏_{u∈Ω(l_i)} [ ν_{i(u)} + (1 − ν_{i(u)}) exp( β_{i(u)} f(I_u) ) ]    (16)
        ≈ ∏_{u∈Ω(l_i)} exp( (1 − ν_{i(u)}) β_{i(u)} f(I_u) )    (17)
        = exp( ∑_{u∈Ω(l_i)} (1 − ν_{i(u)}) β_{i(u)} f(I_u) )    (18)
        = exp( β_i^T g_i(I(l_i), ν_i) )    (19)
where g_i(I(l_i), ν_i) is a function similar to f_i(I(l_i)), but instead of returning 1 it returns the fractional value (1 − ν_{i(u)}) at pixel u if I_u is an edge point. The approximation in (17) is based on the fact that the absolute values of β_{i(u)} are usually small (e.g., less than 0.6 in our experiments); when |x| is small, exp(x) can be approximated by 1 + x using the truncated Taylor expansion of exp(x). Unlike previous methods [18,15], which handle occlusion reasoning using sampling, our final result (19) has a surprisingly simple form. It can be efficiently calculated by first obtaining g_i(I(l_i), ν_i) through a simple element-wise product between f(I) (a binary 2D edge map of the whole image I) and (1 − ν_i) (a 2D image of occlusion marginals), and then convolving g_i with rotated versions of β_i. The element-wise product has the nice interpretation of discounting the image evidence by its occlusion probability. Our method can be applied efficiently and exhaustively over all image pixel locations, thanks to the convolution trick. However, if the structure of the graphical model is not a tree, one has to use loopy belief propagation. In that case, the convolution trick is no longer valid, since the message stored at a node is no longer in a simple form that allows the derivation of (19) to go through. This further justifies the advantage of using tree-structured models.
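A minimal sketch of this computation, assuming the occluder's pseudo-marginal is stored as an array over (θ, y, x); the part rectangle size and the orientation set are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import fftconvolve

def occlusion_marginals(pj_tyx, part_size, angles_deg):
    """nu_i(u) ~ Pr[z_i(u) = 0]: convolve the occluder's pseudo-marginal
    (theta, y, x) with a rotated rectangle mask per orientation, then sum
    over the theta dimension."""
    h, w = part_size
    nu = np.zeros(pj_tyx.shape[1:])
    for k, theta in enumerate(angles_deg):
        mask = rotate(np.ones((h, w)), theta, reshape=True, order=0)
        nu += fftconvolve(pj_tyx[k], mask, mode='same')
    return np.clip(nu, 0.0, 1.0)          # clamp; nu is a probability

def discounted_part_responses(edge_map, beta, nu, angles_deg):
    """Eq. (19): g_i = f(I) * (1 - nu_i) elementwise, then correlate g_i with
    rotated copies of beta_i to get log P(I | l_i) up to a constant."""
    g = edge_map * (1.0 - nu)
    out = []
    for theta in angles_deg:
        b = rotate(beta, theta, reshape=True, order=1)
        out.append(fftconvolve(g, b[::-1, ::-1], mode='same'))
    return np.stack(out)
```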
6 Experiments
CMU MoBo dataset. We first test our algorithm on rescaled versions of the side-view persons in the CMU MoBo dataset [4] for occlusion reasoning. Since
Fig. 5. Sample results on the CMU mobo dataset: (a) original images; (b) results of using one kinematic tree; (c) results of using multiple trees for occlusion reasoning
Table 1. Quantitative measurement on the MoBo dataset for the right upper and lower legs. Smaller perplexities mean better performance.

Part     Perplexity (two trees)   Perplexity (one tree)
ru-leg          32.4939                  33.9706
rl-leg          26.7597                  33.6693
people’s right arm in this dataset is almost always occluded, we only try to infer one arm. We use the background subtraction masks that come with this dataset to remove the edges found in the background. We use the two tree structures shown in Fig. 4. The first tree captures the kinematic spatial constraint. The second tree captures the occlusion relationships between the left and right legs. Inference in the second tree uses the message passing algorithm described in Sect. 5. Learning the model parameters is a bit tricky. If we use CMU mobo dataset for training, we will probably end up with a strong spatial prior specifically tuned to side-view walking. Instead, we learn the model parameters Θ = {αi , βi } using the same training set in our second experiment (see below). That dataset contains images of people with a variety of poses. We manually set the weights of these two trees to be equal, since we do not have appropriate datasets with ground truths, and we do not want to learn the parameters from the mobo dataset. In principle, this parameter can be learned from some labeled dataset where the relative depth order of parts is known. Some of the sample results are shown in Fig. 5. We can see that the single tree model tends to put the legs on top of each other. But our method correctly infers the configurations of both legs. To quantify the results, we manually label 300 mobo images as ground truths and measure their perplexity (or negative logprobability [12]) under the learned model. Instead of measuring the perplexity for the whole body pose L, we measure them separately for each body part li (i = 1, 2, ..., K) to emphasize the effect of occlusion reasoning between two legs. As shown in Table 1, our method achieves lower perplexity on the lower and upper right legs. The perplexities for other body parts are not shown in the table since they are the same for both methods. This is because we have only modeled the occlusion relationships between the legs. People dataset. We test our algorithm on the people dataset used in previous work [12]. This dataset contains 305 images of people in various poses. First 100 images and their mirror-flipped versions are used for training, and the remaining
Fig. 6. Three tree structures used on the people dataset.
Fig. 7. Sample results on the people dataset: (a) original images; (b) results of using one kinematic tree; (c) results of using multiple trees
205 images are used for testing. We manually select the three tree structures shown in Fig. 6, although automatically learning the tree structure at each iteration in an efficient way remains interesting future work. We visualize the distribution P(L|I) as a 2D image using the same technique as in [12], where the torso is rendered in red, the upper limbs in green, and the lower limbs and the head in blue. Some of the parsing results are shown in Fig. 7. We can see that our parsing results are much clearer than those obtained using the kinematic tree alone. In many images, the body parts are clearly visible in our parsing results. In the results using the kinematic tree, there are many white pixels, indicating high uncertainty about body parts at those locations. With multiple trees, a lot of these white pixels are cleaned up. It is plausible that if we sample the part candidates l_i according to P(l_i|I) and use them as inputs to other pose estimation algorithms (e.g., Ren et al. [13]), the samples generated from our parsing results are more likely to be the true part locations.
7 Conclusion
We have presented a framework for modeling human figures as a collection of tree-structured models. This framework has the computational advantages of previous tree-structured models used for human pose estimation. At the same time, it models a richer set of constraints between body parts. We demonstrate our results on side-walking persons in CMU mobo dataset, and a challenging people dataset with substantial pose variations. Human pose estimation is an extremely difficult computer vision problem. The solution of this problem probably requires the symbiosis of various kinds of visual cues. Our framework provides a flexible way of modeling dependencies between non-connected body parts.
References
1. Crandall, D., Felzenszwalb, P.F., Huttenlocher, D.P.: Spatial priors for part-based recognition using statistical models. In: IEEE CVPR (2005)
2. Felzenszwalb, P.F., Huttenlocher, D.P.: Pictorial structures for object recognition. International Journal of Computer Vision 61(1), 55–79 (2003)
3. Forsyth, D.A., Arikan, O., Ikemoto, L., O'Brien, J., Ramanan, D.: Computational studies of human motion: Part 1, tracking and motion synthesis. Foundations and Trends in Computer Graphics and Vision 1(2/3), 77–254 (2006)
4. Gross, R., Shi, J.: The CMU motion of body (MoBo) database. Technical Report CMU-RI-TR-01-18, CMU (2001)
5. Hogg, D.: Model-based vision: a program to see a walking person. Image and Vision Computing 1(1), 5–20 (1983)
6. Ioffe, S., Forsyth, D.: Human tracking with mixtures of trees. In: IEEE ICCV (2001)
7. Ju, S.X., Black, M.J., Yacoob, Y.: Cardboard people: A parameterized model of articulated image motion. In: Proc. Automatic Face and Gesture Recognition (1996)
8. Lafferty, J., McCallum, A., Pereira, F.: Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In: ICML (2001)
9. Lan, X., Huttenlocher, D.P.: Beyond trees: Common-factor models for 2D human pose recovery. In: IEEE ICCV (2005)
10. Meila, M., Jordan, M.I.: Learning with mixtures of trees. Journal of Machine Learning Research 1, 1–48 (2000)
11. Mori, G., Malik, J.: Estimating human body configurations using shape context matching. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) ECCV 2002. LNCS, vol. 2352, pp. 666–680. Springer, Heidelberg (2002)
12. Ramanan, D.: Learning to parse images of articulated bodies. In: NIPS 19 (2007)
13. Ren, X., Berg, A., Malik, J.: Recovering human body configurations using pairwise constraints between parts. In: IEEE ICCV (2005)
14. Shakhnarovich, G., Viola, P., Darrell, T.: Fast pose estimation with parameter sensitive hashing. In: IEEE ICCV (2003)
15. Sigal, L., Black, M.J.: Measure locally, reason globally: Occlusion-sensitive articulated pose estimation. In: IEEE CVPR (2006)
16. Sminchisescu, C., Kanaujia, A., Metaxas, D.: BM3E: Discriminative density propagation for visual tracking. IEEE PAMI 29(11), 2030–2044 (2007)
17. Song, Y., Goncalves, L., Perona, P.: Unsupervised learning of human motion. IEEE Transactions on Pattern Analysis and Machine Intelligence 25(7), 814–827 (2003)
18. Sudderth, E.B., Mandel, M.I., Freeman, W.T., Willsky, A.S.: Distributed occlusion reasoning for tracking with nonparametric belief propagation. In: NIPS (2004)
19. Sullivan, J., Carlsson, S.: Recognizing and tracking human action. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) ECCV 2002. LNCS, vol. 2350, pp. 629–644. Springer, Heidelberg (2002)
20. Torralba, A., Murphy, K.P., Freeman, W.T.: Contextual models for object detection using boosted random fields. In: NIPS 17 (2005)
21. Toyama, K., Blake, A.: Probabilistic exemplar-based tracking in a metric space. In: IEEE ICCV (2001)
22. Truyen, T.T., Phung, D.Q., Bui, H.H., Venkatesh, S.: AdaBoost.MRF: Boosted Markov random forests and application to multilevel activity recognition. In: IEEE CVPR (2006)
23. Wainwright, M.J., Jaakkola, T.S., Willsky, A.S.: A new class of upper bounds on the log partition function. IEEE Transactions on Information Theory 51(7), 2313–2335 (2005)
24. Wang, Y., Mori, G.: Boosted multiple deformable trees for parsing human poses. In: ICCV Workshop on Human Motion Understanding, Modeling, Capture and Animation (2007)
Structuring Visual Words in 3D for Arbitrary-View Object Localization
Jianxiong Xiao, Jingni Chen, Dit-Yan Yeung, and Long Quan
Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong
{csxjx,jnchen,dyyeung,quan}@cse.ust.hk
Abstract. We propose a novel and efficient method for generic arbitrary-view object class detection and localization. In contrast to existing single-view and multi-view methods using complicated mechanisms for relating the structural information in different parts of the objects or different viewpoints, we aim at representing the structural information in their true 3D locations. Uncalibrated multi-view images from a hand-held camera are used to reconstruct the 3D visual word models in the training stage. In the testing stage, beyond bounding boxes, our method can automatically determine the locations and outlines of multiple objects in the test image with occlusion handling, and can accurately estimate both the intrinsic and extrinsic camera parameters in an optimized way. With exemplar models, our method can also handle shape deformation for intra-class variance. To handle large data sets from models, we propose several speedup techniques to make the prediction efficient. Experimental results obtained based on some standard data sets demonstrate the effectiveness of the proposed approach.
1 Introduction
In recent years, generic object class detection and localization has been a topic of utmost importance in the computer vision community. Remarkable improvements have been reported in the challenging problem of true 3D generic multi-view object class detection and localization [1,2,3]. In this work, we focus on the problem of automatically determining the locations and outlines of object instances as well as the camera parameters by reconstructing 3D visual word exemplar models. The objects in the test images can be at arbitrary view and the camera parameters are completely unknown. Under this setting, object detection and localization is a very challenging problem.

1.1 Related Work
Most existing approaches for object detection focus on detecting an object class from some particular viewpoints by modeling the appearance and shape variability of objects [4]. These approaches, however, are only limited to a few predefined viewpoints. On another research strand, several powerful systems focus on detecting specific objects in cluttered images in spite of viewpoint changes [5,6,7].
Although the reported results are impressive, they can only find specific objects shown in the training images. In the context of multi-view generic object class modeling and detection, different models with geometric and appearance constraints have been proposed. Thomas et al. [1] developed a system for detecting motorbikes and sport shoes by establishing activation links and selecting working views. Savarese et al. [2] also proposed a model for 3D object categorization and localization by connecting the canonical parts through their mutual homographic transformation. Without a real 3D model, both methods have to use complicated mechanisms for approximately relating the structural information of the training views or different parts of the objects with simplified assumptions. These indirect representations cannot capture the complete spatial relationship of objects, and may fail to recognize objects when the test images are taken from quite different viewpoints from the training images. In this sense, a real 3D model plays an essential role in further improving the performance of multi-view object class detection. A closely related work is [3], which creates a 3D feature model for object class detection. However, in the process of matching between a test image and the 3D model, their method is computationally costly because it directly operates with a SIFT descriptor and has to enumerate a large space of viewing planes. Another closely related work is [8], which renders a synthetic model from different viewpoints and extracts a set of poses and class discriminative features. During detection, local features from real images are matched to the synthetically trained ones. However, since the features are extracted from a synthetic database, they may deviate significantly from those extracted from real-world images. Moreover, the camera poses are still estimated by searching for the registration of 3D models to images.

1.2 Our Approach
In this paper, we propose an exemplar-based 3D representation of visual words for arbitrary-view object class detection and localization. This model produces a powerful yet simple, direct yet compact representation of object classes. During the training process, our method removes the unknown background of images and obtains the region of interest for class instances. Also, given a test image of arbitrary view containing single or multiple object instances, our algorithm detects all the instances and outlines them precisely. For finding the viewing angle, instead of enumerating all the possible viewpoints of the 3D model, it accurately estimates both the intrinsic and extrinsic camera parameters in an optimized way. Moreover, with exemplar models, our method can also handle shape deformation with intra-class variance. To handle large data sets, several speedup techniques are also proposed to make the prediction more efficient.
2 Automatic Training of 3D Visual Word Models
This section presents the training procedure for the automatic training of 3D visual word models from a set of images taken around each object with unknown background and unknown camera parameters.
2.1 Creating Visual Words and Learning Word Discriminability
Local image patches are the basic building blocks of 2D images. In practice, we choose the Hessian-Laplace detector [9] to detect interest points on a set of images and the SIFT descriptor [10] to characterize local features, described by a set of 128-dimensional SIFT vectors. These SIFT vectors are then vector-quantized into visual words by k-means [11]. Each visual word is a cluster center of the quantized SIFT vectors. In our work, this procedure is performed over an image set containing two types of images. One type contains images taken around the objects for reconstructing 3D models. The other type contains the training images from the PASCAL Visual Object Classes (VOC) challenge [12]. We take the visual words as descriptors for the interest points in both 2D images and 3D models. For a particular class, not all the visual words play the same role in detection. For a particular visual word w, its weight with respect to an object class C_i is learned by a ratio discriminability function [13]:

D_i(w) = (# images in C_i containing w) / (# images in the image set containing w)    (1)
The word weight measures the relevance of w with respect to the object class Ci . The higher the value of Di (w), the more discriminative the visual word w is. For each object class, we only preserve the top 512 most discriminative visual words for its 3D models. 2.2
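A minimal sketch of vocabulary construction and of Eq. (1); the vocabulary size, the k-means implementation, and the data layout are assumptions rather than the authors' code:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_sift_descriptors, k=4000, seed=0):
    """Vector-quantize 128-D SIFT descriptors into k visual words."""
    return KMeans(n_clusters=k, n_init=4, random_state=seed).fit(all_sift_descriptors)

def word_discriminability(image_words, image_labels, target_class, k):
    """D_i(w) = (# images of class C_i containing w) / (# images containing w).
    image_words: per-image iterables of word ids; image_labels: class per image."""
    in_class, overall = np.zeros(k), np.zeros(k)
    for words, label in zip(image_words, image_labels):
        for w in set(words):
            overall[w] += 1
            if label == target_class:
                in_class[w] += 1
    return np.where(overall > 0, in_class / np.maximum(overall, 1), 0.0)

# e.g. keep the 512 words with the largest D_i(w) for the class models:
# top512 = np.argsort(-word_discriminability(words, labels, c, k))[:512]
```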
Creating 3D Visual Word Models
With these visual words, several exemplar models for each object class are created. For each exemplar model M, M+ , the training procedure is shown in Fig. 1.
...
Fig. 1. Training procedure for an exemplar model: input images, sparse 3D visual word model M (visual words indexing 3D point coordinates), 3D point model with background, silhouette masks, and dense 3D point model M+.
J. Xiao et al.
In the first step, the input multiple-view images are used for 3D reconstruction by the standard Structure from Motion algorithm [14]. Specifically, the unordered input images are matched in a pairwise manner by the visual words. Taking these sparse pixel-to-pixel correspondences as seeds, a dense matching is obtained by [15]. Then, for three images with more than six mutual point correspondences, a projective reconstruction is obtained by [16]. We merge all the triplet reconstructions by estimating the transformation between those triplets with two common images as in [17]. Finally, the projective reconstruction is metric upgraded to Euclidian reconstruction. In each step, bundle adjustment is used to minimize the geometric error. Since our training data do not contain any label information about the object location in the image, not only the target object but also the background of the scene is reconstructed. However, we only want to preserve the 3D model for the target object. Hence, in the second step, a graph-cut based method [18] is used to automatically identify image regions corresponding to a common space region seen from multiple cameras. Briefly, we assume that the background regions present some color coherence in each image and we exploit the spatial consistency constraint that several image projections of the same space region must satisfy. Each image is iteratively segmented into two regions such that the background satisfies the color consistency constraints, while the foreground satisfies the geometric consistency constraints with respect to the other images. An EM scheme is adopted where the background and foreground model parameters are updated in one step, and the images are segmented in the next step using the new model parameters. Because the silhouette is just used to filter out the background of the 3D model, it does not need to be very precise. In most situations, the above automatic extraction results are satisfactory. In other cases, an interactive method [19] can be used. In our experiment, 8.5% of the silhouettes are annotated manually by [19]. After we have extracted the silhouette of the target object, we filter out all 3D points with projection outside the silhouette of the object and the set of remaining 3D points is the model M+ . To facilitate fast indexing and dramatically accelerate the detection, we record some 3D points in a hash table model M, with visual words as keys and the 3D points with coordinate (x, y, z) as content. The 3D points in the hash table model M are from the sparse matching seeds of M+ and correspond to the top 512 most discriminative visual words.
3
Object Localization and Camera Estimation
Given a new image with single or multiple instances, the task is to detect the locations of objects from a particular class, outline them precisely and simultaneously estimate the camera parameters for the test image. With the trained 3D exemplar models, our method can estimate arbitrary pose of the target object with no restriction to some predefined poses. The flow of the testing procedure is shown in Fig. 2 and Alg. 1.
Fig. 2. Testing procedure: visual word detection and over-segmentation of the input test image, visual word indexing into the exemplar models M_1, M+_1, ..., M_n, M+_n, hypothesis voting with exemplars, and output of the object outline.
Algorithm 1. Simultaneous Object Localization and Camera Estimation
1. Over-segment the test image I.
2. For each small region R_i in the over-segmentation and each exemplar model M_j, M+_j:
   (a) get all 2D-3D correspondence pairs S_ij inside the region R_i;
   (b) compute the camera projection matrix P_ij by SVD;
   (c) project the 3D point model M+_j and vote in the image space for the hypothesis.
3. Take the cumulative voting score as the data cost and the image gradient as the smoothness cost in an MRF to extract the outline O.
4. Use all 2D-3D correspondence pairs S* inside each connected component R′ of the outline O to compute the final camera matrix P*.
3.1 Visual Word Detection and Image Over-segmentation
We follow the same procedure as in training to find local interest points in a test image with the Hessian-Laplace detector [9] and characterize the local features by a set of 128-dimensional SIFT vectors [10]. Each SIFT descriptor is then translated into its corresponding visual word by finding the nearest visual word around it. If the Euclidean distance between the SIFT descriptor of the interest point and that of the nearest visual word is more than twice the mean distance of that cluster's members from its centroid, the interest point is deleted. The mapping from SIFT descriptor to visual word descriptor makes the matching between 2D image interest points and the 3D visual word model very efficient, by simply indexing with the visual word as key. The target object in the test image may be embedded in a complicated background that will affect the overall performance of detection and localization. Over-segmenting the test image can help to improve the accuracy of object detection and produce a much more precise outline of the object. It is also useful for camera hypothesis estimation in the testing stage. Traditionally, over-segmentation is done by the watershed or mean-shift algorithm. In this work we adopt the over-segmentation technique of [20], which is very efficient and also stable, with parameters to control the region size.
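A sketch of the nearest-word assignment with the 2x mean-distance rejection rule described above; the per-cluster mean member-to-centroid distances are assumed to be precomputed during vocabulary construction:

```python
import numpy as np

def assign_visual_words(descriptors, centers, mean_member_dist):
    """Map each SIFT descriptor to its nearest visual word, dropping it when
    the distance exceeds twice that cluster's mean member-to-centroid distance.
    descriptors: (n, 128); centers: (k, 128); mean_member_dist: (k,)."""
    d = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    best = d[np.arange(len(descriptors)), nearest]
    keep = best <= 2.0 * mean_member_dist[nearest]
    return nearest[keep], keep
```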
3.2 Visual Word Indexing and Hypothesis Voting
Suppose a test image I is over-segmented into n regions and there are m exemplar models. For each small region R_i in I and each 3D visual word model M_j, all correspondence pairs of a 2D interest point u_k inside R_i (from the test image I) and a 3D point X_k (from the 3D visual word model M_j) that have the same visual word descriptor are collected: S_ij = {u_k ↔ X_k | w(u_k) = w(X_k), u_k ∈ R_i, X_k ∈ M_j}. Given N correspondence pairs between the 2D projections and 3D points, the camera pose can be directly estimated by a linear unique-solution N-point method [21] with SVD as the solver.

To improve the robustness of the above method, we refine it to automatically filter out obvious erroneous correspondences in S_ij. The filtering algorithm is based on the following locality assumption: the 3D points {X_k} whose 2D projections {P_ij X_k} fall inside the same small over-segmentation region R_i should also be close to each other in 3D space. This assumption empirically holds since the over-segmentation algorithm tries not to cross depth boundaries. With this assumption, we first compute the average 3D position p̄ of the 3D points in S_ij, and then filter out the correspondence pairs whose 3D points are far away from p̄. Specifically, we compute the mean d̄ and standard deviation σ of the distances between p̄ and all the 3D points in S_ij; if the distance between the 3D point of a particular correspondence pair and p̄ is greater than d̄ + 2σ, that correspondence pair is removed from S_ij.

Since the camera matrix P_ij is estimated from a local over-segmentation region R_i, it is likely to be degenerate if the 3D points are nearly planar. Hence, to further improve the robustness of the camera estimation, instead of the sparse visual word model M_j we make use of the dense 3D point model M+_j to increase the number of 2D-to-3D correspondences. In detail, each correspondence u_k ↔ X_k in S_ij is taken as a seed, and the pixels in the neighborhood of u_k in R_i are greedily matched with the points in the neighborhood of X_k in the model M+_j. In this way, a new set S′_ij of 2D-to-3D correspondences is obtained. S′_ij contains many more correspondences that can characterize the local geometry changes and hence greatly improve the camera estimation robustness. With the new correspondence pair set S′_ij, the camera matrix P_ij is computed in the same way as before.

After estimating the camera matrix P_ij, we project the whole 3D model M+_j = {X+_k} onto the test image with projections P_ij X+_k and vote in the image space for the hypothesis P_ij. In detail, we lay over the test image I a regular grid with the same resolution as the image. For each X+_k ∈ M+_j, the value of the cell at position P_ij X+_k is increased by one. Therefore, for each over-segmentation region R_i, there is one vote for each exemplar model M_j, M+_j. Because each over-segmentation region R_i has its own vote, our method is insensitive to occlusion, since un-occluded regions can still vote for the occluded regions.
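The linear N-point (DLT-style) camera estimation with SVD and the d̄ + 2σ locality filtering described above can be sketched as follows (a sketch under the stated assumptions, not the authors' implementation):

```python
import numpy as np

def filter_by_locality(points_2d, points_3d):
    """Drop pairs whose 3D point is farther than mean + 2*std from the 3D centroid."""
    X = np.asarray(points_3d, dtype=float)
    d = np.linalg.norm(X - X.mean(axis=0), axis=1)
    keep = d <= d.mean() + 2.0 * d.std()
    return np.asarray(points_2d, dtype=float)[keep], X[keep]

def dlt_camera(points_2d, points_3d):
    """Linear estimation of the 3x4 projection matrix P from N >= 6 2D-3D
    correspondences, solved with SVD (smallest right singular vector)."""
    A = []
    for (u, v), X in zip(points_2d, points_3d):
        Xh = np.append(X, 1.0)
        A.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        A.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)
```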
To increase the effective region of each point X+_k, the neighboring grid cells of P_ij X+_k also receive scores from X+_k, weighted with a 2D isotropic Gaussian. In our case, the variance is set to be 0.5% of the width of image I. However, if most of a small region does not belong to the object of interest, the estimated camera projection matrix will be completely useless. In order to capture this difference, the hypothesis P_ij is associated with a score c(R_i, M_j) indicating the confidence of the vote:
c(R_i, M_j) = [ median_{X_k∈M_j, u_k∈I, w(u_k)=w(X_k)} ‖u_k − P_ij X_k‖ + 1 ]^(−1)    (2)
The smaller the re-projection error ‖u_k − P_ij X_k‖, the higher the confidence. Here, u_k and X_k form a correspondence pair. However, the 2D and 3D visual word correspondence is not necessarily a bijection. Several 2D interest points {u_k} in the test image may have the same visual word w, and hence may correspond to several 3D points {X_k} in M_j. For such multiply matched pairs {u_k} ↔ {X_k}, the re-projection error is computed as the minimum distance between any 2D interest point {u_k} and the projection {P_ij X_k} of any 3D visual word {X_k}.
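A minimal sketch of the voting grid described above: the dense model points are projected with a hypothesis P and each projection spreads a small isotropic Gaussian over neighboring cells. The text specifies the Gaussian spread as 0.5% of the image width (as a variance), so the exact parameterization below is an assumption.

```python
import numpy as np

def vote_hypothesis(P, model_points_3d, image_shape, sigma_frac=0.005):
    """Accumulate votes on an image-resolution grid: project the dense model
    with camera P and spread each projection with an isotropic Gaussian of
    scale sigma_frac * image width (assumed parameterization)."""
    H, W = image_shape
    grid = np.zeros((H, W))
    sigma = sigma_frac * W
    r = int(np.ceil(3 * sigma))
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    Xh = np.hstack([model_points_3d, np.ones((len(model_points_3d), 1))])
    proj = (P @ Xh.T).T
    proj = proj[:, :2] / proj[:, 2:3]          # perspective division
    for x, y in proj:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < W and 0 <= yi < H:
            x0, x1 = max(xi - r, 0), min(xi + r + 1, W)
            y0, y1 = max(yi - r, 0), min(yi + r + 1, H)
            grid[y0:y1, x0:x1] += kernel[y0 - yi + r:y1 - yi + r,
                                         x0 - xi + r:x1 - xi + r]
    return grid
```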
Outline Extraction and Camera Matrix Re-estimation
The over-segmentation regions are used to construct a Markov random field (MRF) graph. The smoothness cost is defined as the L2-norm of the RGB color difference between the background and the target object, as in [22]. The corresponding voting score is normalized and taken as the data cost in the MRF. An implementation of the graph cut algorithm from [23] is used for optimization and getting the outline O. Inside the outline O, we can obtain several connected components {Ri }. We use all corresponding pairs inside each connected component region Ri and the best matched 3D visual word model M∗ to re-estimate the camera matrix P ∗ by the same method as in Sec. 3.2. Here, the best matched 3D visual word model M∗ for that connected component region R is the one with the highest cumulative voting score summing up all over-segmentation regions Ri in R , i.e., M∗ = arg max c (Ri , Mj ) . Mj
Ri ∈R
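The voting and model-selection steps can be sketched as follows; the grid update uses the Gaussian weighting of Sec. 3.2, and scores is assumed to map (region, model) pairs to the confidence c(Ri, Mj). This is an illustrative CPU sketch, not the GPU implementation used later in the paper.

import numpy as np

def gaussian_kernel(sigma):
    r = int(np.ceil(3 * sigma))
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    return np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma ** 2))

def add_gaussian_vote(grid, x, y, kernel):
    # Spread the vote of one projected point over neighboring grid cells.
    H, W = grid.shape
    if not (0 <= x < W and 0 <= y < H):
        return
    r = kernel.shape[0] // 2
    y0, y1 = max(0, y - r), min(H, y + r + 1)
    x0, x1 = max(0, x - r), min(W, x + r + 1)
    grid[y0:y1, x0:x1] += kernel[y0 - y + r:y1 - y + r, x0 - x + r:x1 - x + r]

def best_model(scores, component_regions, models):
    # M* = arg max over Mj of the cumulative confidence within one component.
    return max(models, key=lambda m: sum(scores[(r, m)] for r in component_regions))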
In fact, for each target object in the test image, what we want to estimate is its relative pose and the camera parameters. Since each 3D point model has its own coordinate system, the camera so estimated is specific to that coordinate system. If multiple object instances exist in the test image, multiple cameras, one for each object instance, should be estimated in the respective coordinate system of the corresponding 3D point model. These multiple cameras do not violate the principle that there is only one camera per image in perspective imaging theory, because they are expressed in different coordinate systems and would align to exactly one camera in the real-world coordinate system (exactly so only in the theoretical noise-free case). In our case, we are more
concerned about the relative pose between each object and the corresponding camera. Hence, we do not try to align the multiple cameras for multiple objects. Now, for multiple object instances from the same object class in the same test image, if the objects do not overlap with each other, the outline O will have several connected components {R'i}, and several best matched models M*j as well as several estimated cameras P*ij are obtained. If the objects overlap greatly with each other, the object outlines can still be estimated correctly although the cameras cannot be estimated well. For objects from different classes, exemplars from different classes vote on different grids. The voting score is normalized as the data cost in the MRF, and multi-label graph cut can be used to find the optimal outline for each class. After that, the same procedure as in the single-class case is used to estimate the cameras for each class separately.
3.4 Acceleration
Unlike previous 2D voting based methods, our method is computationally more expensive due to the larger data size. The bottleneck is that, for each region Ri and each 3D model Mj, there is one SVD operation to compute the camera parameters and many matrix multiplications to project all 3D points onto the 2D grid. However, there is no computational dependency between different over-segmentation regions and different 3D exemplar models, so the hypotheses can be evaluated in parallel. Here, we make use of commercially available programmable graphics hardware, a graphics processing unit (GPU), to speed up the testing procedure. The SVD algorithm is implemented as in [24], which mainly includes two steps: bidiagonalization of the given matrix by applying a series of Householder transformations, and diagonalization of the bidiagonal matrix by iteratively applying the implicitly shifted QR algorithm. In practice, after the camera matrix is computed from the SVD, the projection matrix on the GPU is set to be the same as the camera matrix, and the 3D model is rendered on the GPU while the frame buffer is set to have the same resolution as the test image. To speed up computation and handle intra-class variance, for each class we use only the most similar 3D exemplar models for hypothesis voting. For a rigid class with small intra-class variance, the voting values from the top five most similar exemplar models are added together to improve robustness. For a highly deformable object class such as persons, however, we use only the single most similar exemplar model.
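For reference, the linear pose estimation that the GPU accelerates can be written compactly with the standard direct linear transform (DLT) and an SVD; this sketch is a generic stand-in and not necessarily the exact formulation of the linear N-point method of [21].

import numpy as np

def estimate_camera_dlt(X, u):
    # X: (N, 3) 3D points, u: (N, 2) image points, with N >= 6 correspondences.
    # Each pair contributes two rows to the homogeneous system A p = 0;
    # the solution is the right singular vector of the smallest singular value.
    N = X.shape[0]
    A = np.zeros((2 * N, 12))
    for i in range(N):
        Xh = np.append(X[i], 1.0)
        x, y = u[i]
        A[2 * i, 0:4] = Xh
        A[2 * i, 8:12] = -x * Xh
        A[2 * i + 1, 4:8] = Xh
        A[2 * i + 1, 8:12] = -y * Xh
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)   # 3x4 camera matrix, up to scale

In practice, normalizing the 2D and 3D coordinates before building A improves numerical conditioning.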
4 Experiments
There is a training data set with motorbikes and shoes provided by Leuven [1]. However, this data set is not specialized for 3D reconstruction, since the baseline is too large to achieve a reliable two-view matching for structure from motion. In fact, in our experiment, only two motorbike models can be successfully reconstructed from this data set. Due to the lack of an appropriate multi-view database for 3D reconstruction for the purpose of object class detection, we construct a
Fig. 3. Some 3D exemplar models. The first row shows one of the training images for each model. The second and third rows show two example views of the corresponding 3D point models.
(a) Motorbike
(b) Sport Shoe
Fig. 4. Some output examples. For each subfigure, the first column contains the input test images, the second column contains the over-segmentation images, the third column contains the voting results, the fourth column contains the outlines of the detected objects, i.e., the final result of our method, and the fifth column contains the result from [1].
3D object database with 15 different motorbikes and 30 different sport shoes. For each object, about 30 images with resolution 800 × 600 are taken around it and the camera parameters are completely unknown. Fig. 7 shows some sample images of our data set. Our exemplar models are mainly trained based on this data set. Hence, including the two motorbikes reconstructed from Leuven’s data set [1], there are 17 motorbike exemplar models and 30 shoe exemplar models in our experiments. Some 3D exemplar models are shown in Fig. 3. For a test image with resolution 480 × 360, it takes about 0.1 second for over-segmentation, 6.1 seconds for hypothesis voting, and 0.5 second for outline extraction on a desktop PC with Intel Core 2 Duo E6400 CPU and NVIDIA GeForce 8800 GTX GPU. For voting, we use the five most similar exemplar models. Fig. 4 shows some results of over-segmentation, hypothesis voting and outline extraction. Our method can handle occlusion very well, such as the persons
Fig. 5. Example results of camera estimation. The left of each subfigure is the input test image, and the right is the best matched 3D exemplar model with the estimated camera for the test image shown as the top view in 3D space. The camera is drawn by lines.
on the motorbike. The estimated camera positions of some test images are also shown in Fig. 5.
4.1 Evaluation and Comparison
For comparison with [1] and [2], although our model is obtained from different training data using different kinds of supervision, it can be evaluated on the same test set. We adopt the same evaluation protocol as in the PASCAL VOC Challenge, which is also used in [1, 2]. Precision/recall curves are used to evaluate the localization performance. We adopt the same 179 images from the 'motorbikes-test2' set provided by the PASCAL VOC Challenge 2005 [25] for testing. Fig. 6(a) shows a substantial improvement of our method over [1]. Although our precision is only similar to that of [2], we regard this as satisfactory, given that the number of exemplar models in our motorbike experiment is not large. For further comparison, we use Leuven's multi-view sport shoes data set [1] for testing. The result is shown in Fig. 6(b). Our proposed method is significantly better than [1]; we believe this is partially due to the larger and better training data that we used. [2] did not report results on Leuven's multi-view sport shoes data set, and the shoes in their own data set are mainly leather shoes. Hence, we do not compare with [2] on shoes.
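A sketch of the PASCAL-style precision/recall computation from a ranked list of detections; it assumes each detection has already been labeled as a true or false positive by the overlap criterion, which is part of the official protocol but omitted here.

import numpy as np

def precision_recall(confidences, is_true_positive, num_ground_truth):
    # Sort detections by decreasing confidence and accumulate hits and misses.
    order = np.argsort(-np.asarray(confidences, dtype=float))
    hits = np.asarray(is_true_positive, dtype=float)[order]
    tp = np.cumsum(hits)
    fp = np.cumsum(1.0 - hits)
    recall = tp / float(num_ground_truth)
    precision = tp / np.maximum(tp + fp, 1e-12)
    return precision, recall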
Fig. 6. Precision-recall curves. (a) Motorbike [25]: our method compared with Thomas et al. [1] and Savarese and Fei-Fei [2]. (b) Sport Shoe [1]: our method compared with Thomas et al. [1]. Recall is plotted on the horizontal axis and precision on the vertical axis.
Fig. 7. Sample images from our 3D object category data set
4.2 Discussions
Our approach may be seen as a significant extension of many previous works. The PASCAL VOC 2007 Detection Task winner [26] can be seen as a 2D version of our method, although their method uses histograms due to the lack of explicit structural information. [3] is a much simplified version of our method that does not take the efficiency issue into consideration, while [8] approximates our 3D visual word model by synthetic data; both of them determine the camera matrix through searching. To handle large intra-class shape variance, state-of-the-art representations such as [27] rely on deformable part models. Extending deformable models to 3D is feasible but quite complicated. In our method, instead of explicitly modeling the deformation, we use an exemplar-based method to characterize the intra-class variance. On the other hand, our method extensively uses many standard state-of-the-art methods for different problems in computer vision as building blocks, making it easy to implement and to achieve good performance. The structure from motion algorithm [14] from the multiple view geometry community is used to reconstruct the 3D positions of the visual words. Efficient over-segmentation [20] from the image segmentation community is used to delimit the regions in which visual word matches are collected for hypothesis voting. A max-flow based MRF solver [23] from the energy minimization community is used to extract the object boundary. Moreover, a graphics hardware GPU is used to accelerate the voting procedure, including camera estimation using SVD.
5 Conclusion
We have proposed a novel and efficient method for generic object class detection that aims at representing structural information at its true 3D locations. Uncalibrated multi-view images from a hand-held camera are used to reconstruct the 3D visual word models in the training stage. In the testing stage, beyond bounding boxes, our method determines the locations and outlines of multiple objects in the test image, and accurately estimates the camera parameters in an optimized way. To handle large data sets, we propose several speedup techniques to make prediction efficient. However, as a limitation of our method, more specific training data needs to be collected than for many previous methods. Future work includes conducting more experiments with more object classes, such as persons, and extending our method to estimate the camera parameters for highly overlapping objects.
Acknowledgements. This work has been supported by research grant NHKUST602/05 from the Research Grants Council (RGC) of Hong Kong and the National Natural Science Foundation of China (NSFC), and by research grants 619006 and 619107 from the RGC of Hong Kong.
References 1. Thomas, A., Ferrari, V., Leibe, B., Turtelaars, T., Schiele, B., Gool, L.V.: Towards multi-view object class detection. In: IEEE Conference Computer Vision and Pattern Recognition, vol. 2, pp. 1589–1596 (2006) 2. Savarese, S., Fei-Fei, L.: 3D generic object categorization, localization and pose estimation. In: IEEE International Conference on Computer Vision, pp. 1–8 (2007) 3. Yan, P., Khan, S., Shah, M.: 3D model based object class detection in an arbitrary view. In: IEEE International Conference on Computer Vision, pp. 1–6 (2007) 4. Dorko, G., Schmid, C.: Selection of scale-invariant parts for object class recognition. In: IEEE International Conference on Computer Vision, vol. 1, pp. 634–639 (2003) 5. Ferrai, V., Tuytelaars, T., Gool, L.V.: Simultaneous object recognition and segmentation by image exploration. In: Pajdla, T., Matas, J(G.) (eds.) ECCV 2004. LNCS, vol. 3021, pp. 40–54. Springer, Heidelberg (2004) 6. Lowe, D.: Local feature view clustering for 3D object recognition. In: IEEE Conference Computer Vision and Pattern Recognition, vol. 1, pp. 682–688 (2001) 7. Rothganger, F., Lazebnik, S., Schmid, C., Ponce, J.: 3D object modeling and recognition using affine-invariant patches and multi-view spatial constraints. In: IEEE Conference Computer Vision and Pattern Recognition, vol. 2, pp. 272–277 (2003) 8. Liebelt, J., Schmid, C., Schertler, K.: Viewpoint-independent object class detection using 3D feature maps. In: IEEE Conference Computer Vision and Pattern Recognition (2008) 9. Mikolajczyk, K., Leibe, B., Schiele, B.: Multiple object class detection with a generative model. In: IEEE Conference Computer Vision and Pattern Recognition (2006) 10. Lowe, D.: Object recognition from local scale-invariant features. In: IEEE International Conference on Computer Vision, vol. 2, pp. 1150–1157 (1999) 11. Sivic, J., Zisserman, A.: Video Google: A text retrival approach to object matching in videos. In: IEEE International Conference on Computer Vision, vol. 2, pp. 1470– 1477 (2003) 12. Everingham, M., Zisserman, A., Williams, C.K.I., Van Gool, L.: The PASCAL Visual Object Class challenge 2006 (VOC 2006) results (2006) 13. Dork´ o, G., Schmid, C.: Object class recognition using discriminative local features. IEEE Transaction on Pattern Analysis and Machine Intelligence (2004) 14. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, Cambridge (2004) 15. Xiao, J., Chen, J., Yeung, D.Y., Quan, L.: Learning two-view stereo matching. In: European Conference on Computer Vision (2008) 16. Quan, L.: Invariant of six points and projective reconstruction from three uncalibrated images. IEEE Tranactions on Pattern Analysis and Machine Intelligence 17(1), 34–46 (1995) 17. Lhuillier, M., Quan, L.: A quasi-dense approach to surface reconstruction from uncalibrated images. IEEE Transaction on Pattern Analysis and Machine Intelligence 27(3), 418–433 (2005)
18. Lee, W., Woo, W., Boyer, E.: Identifying foreground from multiple images. In: Yagi, Y., Kang, S.B., Kweon, I.S., Zha, H. (eds.) ACCV 2007, Part II. LNCS, vol. 4844, pp. 580–589. Springer, Heidelberg (2007) 19. Xiao, J., Wang, J., Tan, P., Quan, L.: Joint affinity propagation for multiple view segmentation. In: IEEE International Conference on Computer Vision, pp. 1–7 (2007) 20. Felzenszwalb, P.F., Huttenlocher, D.P.: Efficient graph-based image segmentation. International Journal of Computer Vision 59, 167–181 (2004) 21. Quan, L., Lan, Z.: Linear n-point camera pose determination. IEEE Transactions on Pattern Analysis and Machine Intelligence 21(8), 774–780 (1999) 22. Li, Y., Sun, J., Tang, C.K., Shum, H.Y.: Lazy snapping. ACM Transaction on Graphics 23(3), 303–308 (2004) 23. Boykov, Y., Kolmogorov, V.: An experimental comparison of min-cut/max-flow algorithms for energy minimization in computer vision. IEEE Transaction on Pattern Analysis and Machine Intelligence (2004) 24. Galoppo, N., Govindaraju, N.K., Henson, M., Bondhugula, V., Larsen, S., Manocha, D.: Efficient numerical algorithms on graphics hardware. In: Workshop on Edge Computing Using New Commodity Architectures (2006) 25. Everingham, M., et al.: The 2005 PASCAL Visual Object Class challenge. In: Selected the 1st PASCAL Challenges Workshop (2005) 26. Chum, O., Zisserman, A.: An exemplar model for learning object classes. In: IEEE Conference Computer Vision and Pattern Recognition, pp. 1–8 (2007) 27. Felzenszwalb, P., McAllester, D., Ramanan, D.: A discriminatively trained, multiscale, deformable part model. In: IEEE Conference Computer Vision and Pattern Recognition (2008)
Multi-thread Parsing for Recognizing Complex Events in Videos

Zhang Zhang, Kaiqi Huang, and Tieniu Tan

National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
{zzhang,kqhuang,tnt}@nlpr.ia.ac.cn
Abstract. This paper presents a probabilistic grammar approach to the recognition of complex events in videos. Firstly, based on the original motion features, a rule induction algorithm is adopted to learn the event rules. Then, a multi-thread parsing (MTP) algorithm is adopted to recognize the complex events involving parallel temporal relation in sub-events, whereas the commonly used parser can only handle the sequential relation. Additionally, a Viterbi-like error recovery strategy is embedded in the parsing process to correct the large time scale errors, such as insertion and deletion errors. Extensive experiments including indoor gymnastic exercises and outdoor traffic events are performed. As supported by experimental results, the MTP algorithm can effectively recognize the complex events due to the strong discriminative representation and the error recovery strategy.
1 Introduction
Recently, event recognition in videos has become one of the most active topics in computer vision. A great deal of research has been devoted to it, ranging from the recognition of simple, short-term actions, such as running and walking [1], to complex, long-term, multi-agent events, such as operating procedures or multi-agent interactions [3], [5], [11]. In this paper, we focus on the recognition of complex events involving multiple moving objects. Motivated by the natural cognitive experience that a complex event can be treated as a combination of several sub-events, we propose a solution based on syntactic pattern recognition techniques. The flowchart of our solution is shown in Fig. 1. The motion features of moving objects are obtained by tracking. Then, in the Primitive Modeling and Event Rule Induction procedures, we take advantage of the method developed by Zhang et al. [14] to obtain a number of primitives as well as a set of event rules. The learnt rules extend the Stochastic Context Free Grammar (SCFG) production with Allen's temporal logic [17] to represent the complex temporal relations between sub-events. However, in the recognition module, the commonly used parser cannot handle sub-events with parallel temporal relations. To solve this problem, referring to the idea in [18], where the linearly ordered constraint on the identifier (ID) set is relaxed to an unordered one, we extend the original Earley-Stolcke parser [20]
Fig. 1. Flowchart of the solution to complex event recognition in this work
to a multi-thread parsing (MTP) algorithm. Additionally, a Viterbi-like error recovery strategy is also embedded to correct large time-scale errors, e.g., insertion and deletion errors in the input stream. As examples, experiments on gymnastic exercises and traffic events are performed. As supported by the experimental results, the effectiveness and robustness of the MTP algorithm have been validated.
2 Related Work
There has been much work on event recognition in videos. For simple actions such as "walking" and "running", researchers have developed a number of effective feature descriptors computed from raw images [1], [2], etc. For recognizing complex long-term events, most work is based on modeling the tracks of moving objects. Generally, two main kinds of approaches are used: Dynamic Bayesian Network (DBN) based approaches and rule based approaches. In DBN based approaches [3], [5], [4], [6], trained stochastic state space models are adopted to represent the inherent structure of a complex event. These approaches have the advantage of well studied parameter learning algorithms and the capability to reason with uncertainty. However, for multi-agent interactions involving complex temporal relations, the performance relies heavily on an appropriate model topology, which is difficult to learn from small training data; in most cases, the model topology has to be predefined. In rule based approaches, some primitives (atomic events) are first detected, and complex events are then recognized as combinations of several sub-events according to certain rules [15], [16]. Owing to its convenient representation and efficient parsing algorithms, SCFG has been adopted in applications such as video surveillance [10], indoor operating activities [9], [8], [12] and human interaction [11]. However, in that work the event rules were all predefined manually, which is impractical in real applications. Furthermore, only single-thread events can be tackled, where the temporal relation between sub-events is just the sequential relation. In fact, the problem of parallel relations has been noticed in some previous work. In [11], based on a context-free grammar, Ryoo et al. also used Allen's temporal logic [17] to represent the parallel relations in complex human interactions; in recognition, the parsing problem was turned into a common constraint satisfaction problem. In [18], a multidimensional parsing algorithm is proposed to handle parallel relations in multimodal human computer interaction, where
the linearly ordered constraint for combining two constituents is relaxed to an unordered one. However, the above two methods do not consider large time-scale errors in primitive detection, such as insertion errors and deletion errors. In this study, we focus on developing a more effective parsing algorithm that can simultaneously handle the parallel-relation problem and the uncertainties in primitive detection for recognizing complex visual events.
3 Multi-thread Parsing
As shown in Fig. 1, there are two inputs to the parser: one is a symbol (primitive) stream, the other is a set of event rules. Here, each primitive is represented as a four-tuple {type, ts, tf, lik}, where type is the primitive type, ts and tf are the start and finish time points respectively, and lik is the likelihood probability. The primitives are arranged into a stream in ascending order of tf. Referring to [14], the SCFG production is extended by a relation matrix:

H → λ {R}   [p]    (1)
where R is the temporal relation matrix in which the element r_ij denotes the temporal relation between the ith sub-event and the jth one, and p is the conditional probability of the production being chosen, given the event H. Note that in our experiments the size of an event rule is at most two, so that the temporal relation matrix can be represented by the single element r_12. The parsing task is then to find the most probable derivation (parsing tree) T that interprets a primitive stream S. In this work, for the root symbol A, a set of rules GA is constructed. Therefore, in terms of the Maximum Likelihood (ML) criterion, the event recognition problem can be described as follows:

⟨Ad, Td⟩ = arg max_{A,T} P(S, T | GA)    (2)
where Ad is the final decision on the type of the complex event, Td is the corresponding parsing tree, and P(S, T | GA) is computed as the product of the probabilities of the rules used in the parsing tree.
3.1 Parsing Algorithm
Referring to the idea in [18], we propose the multi-thread parsing algorithm by extending the Earley-Stolcke parser [20], in which three operations, scanning, completion and prediction, are performed iteratively. Here, a parsing state in our algorithm is represented as follows:

I : X → λ · Y μ   [υ]    (3)
where I is the ID set that indicates which input primitives are the constituents of the state, the dot marker is the current parsing position, denoting that the sub-events λ have been observed and the next needed symbol is Y, μ is the unobserved string, and υ
is the Viterbi probability, which corresponds to the maximum-probability derivation of the state. In addition, the temporal attributes are also recorded in the state. Different from a state in the Earley-Stolcke parser, where the ID set must consist of consecutive primitives, here the ID set may contain disconnected identifiers. For example, I = {3, 5, 7} means the state is comprised of the 3rd, 5th and 7th primitives in the input string. The relaxed ID set enables multiple parsing threads to exist simultaneously. Given the current state set StateSet(i) and the current primitive, the following three steps are performed.

Scanning. For each primitive, say d, a pre-nonterminal rule D → d is added, so that scanning can accept the current primitive with the predicted state of the pre-nonterminal rule. The likelihood of the detected primitive is multiplied into the Viterbi probability of the predicted state.

Completion. For a completed state I' : Y → ω · [υ'] in StateSet(i), which denotes that event Y has been recognized, each state Sj in the last state set StateSet(i − 1) is examined against the following conditions:
– Y is one of the unobserved sub-events of Sj.
– I' ∩ I_Sj = ∅, i.e., the ID set of the completed state is disjoint from that of Sj.
– The relations between Y and the observed sub-events of Sj are consistent with the rule definition. The relations are computed with the fuzzy method in [21].
For a state satisfying the above conditions, a further judgment is made according to the position of Y in Sj. If Y is not the first unobserved sub-event (the symbol following the dot), the unobserved sub-events prior to Y are treated as deletion error candidates, which are handled in Section 3.2; otherwise, Sj can be written as I : X → λ · Y μ [υ], and a new state is generated:

I : X → λ · Y μ [υ],  I' : Y → ω · [υ']   ⇒   I'' : X → λ Y · μ [υ'']    (4)

where I'' = I ∪ I' and υ'' = υ υ'. If an identical state already exists in the current state set, the Viterbi probability υc of that state is updated as υc = max{υc, υ''}; otherwise, the new state is added to the current state set. Due to the relaxed ID set, there may be too many combinations of different primitives. Therefore, a beam-width constraint is adopted to prune redundant states: only the first ω states, ranked by Viterbi probability, are kept within each isomorphic state set. Here we define two states to be isomorphic if and only if they share the same rule and the same dot position but have different ID sets.

Prediction. As the next symbol may belong to another parsing thread, in prediction all uncompleted states in the last state set are carried over into the current state set. Note that all non-terminals are predicted in the initialization step.
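The state representation and the core of the completion operation can be sketched as follows; the Allen-relation consistency test is abstracted into a callback, the handling of deletion error candidates (Sec. 3.2) is omitted, and all names and data layouts are assumptions made for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class ParseState:
    rule: tuple        # (lhs, body) of the production, body a tuple of symbols
    dot: int           # current parsing position in the body
    ids: frozenset     # ID set of consumed primitives (may be disconnected)
    viterbi: float     # Viterbi probability of the best derivation

def complete(parent, child, relation_ok):
    # Combine a completed child state into a parent that expects its symbol.
    lhs, body = parent.rule
    if parent.dot >= len(body) or body[parent.dot] != child.rule[0]:
        return None                      # child is not the next needed symbol
    if parent.ids & child.ids:
        return None                      # ID sets must be disjoint
    if not relation_ok(parent, child):
        return None                      # temporal relations violate the rule
    return ParseState(parent.rule, parent.dot + 1,
                      parent.ids | child.ids, parent.viterbi * child.viterbi)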
3.2 Error Recovery Strategy
Commonly, there are three kinds of errors: insertion, deletion, and substitution errors. Insertion errors are spurious detections of primitives that did not actually happen. Deletion errors are missed detections of primitives that actually occurred. Substitution errors are misclassifications between primitives. In [10], insertion errors were accepted by the extended skip productions; nevertheless, deletion errors cannot be handled by such skip productions. In [9], three hypothetical parsing paths corresponding to the three kinds of errors (insertion, deletion, substitution) were generated when the parser failed to accept the current primitive. However, an error may not lead to a failure in the current scanning step but only in a later iteration. Here, referring to the idea of error-correcting parsing [19], a number of error hypotheses are generated along with the parsing process, and a Viterbi-like backtracking finally determines the most probable error distribution. Since a substitution error can be seen as a pair of one insertion error and one deletion error, only insertion and deletion errors are considered in the following.

Insertion Error. Due to the relaxed ID set, in which the identifiers may be disconnected, insertion errors are handled naturally. At the end of parsing, for each completed root state If : 0 → S · [υf], the primitives that are not contained in If are treated as insertion errors of this derivation. The penalties of the insertion errors are applied to the Viterbi probability as follows:

υ' = υf · Π_{i ∈ Īf} ρi    (5)
where ρi is the penalty factor of the ith insertion error, set to a low value, and Īf is the complement of If.

Deletion Error. As presented in Section 3.1, deletion error candidates may be generated in the completion operation. Suppose a state I : X → λ · Y1 Y2 ... Yn Y μ [υ'] where Y1 Y2 ... Yn are hypothesized as deletion errors; Algorithm 1 is performed to transform the old state into a new one, Ie : X → λ Y1 Y2 ... Yn · Y μ [υe]. Here An_s = I : X → λ · Y1 Y2 ... Yn Y μ [υ'], I' = I ∪ I'' denotes the union of the ID sets involved so far, e_position is the position of Y in An_s, and s_set is the last state set. Concretely, given Yi = An_s.predict, the symbol just behind the dot of An_s: if Yi can only be completed by a pre-nonterminal rule Yi → z, the terminal z is recovered and a completed state re_s is generated by the scanning operation; z is assigned a low likelihood as the penalty factor of the deletion error. Else, if Yi is a non-terminal, Max_Ex is performed to find the state re_s = Yi → λ · Z μ that completes Yi with the maximum Viterbi probability in s_set. Then Recovery and Max_Ex are performed repeatedly until re_s becomes a completed state. Next, An_s is combined with re_s to form a new state new_s with the completion operator. Finally, we examine whether the dot position of new_s has reached e_position; if so, the recovery of An_s is finished, else the next sub-event is recovered.
Algorithm 1. Recovery(An_s, I', e_position, s_set)
if An_s.predict is a pre-nonterminal then
    z = Error_Hypothesize(An_s.predict)
    re_s = scanning(An_s, z)
else
    re_s = Max_Ex(An_s.predict, I', s_set)
    while re_s.dot < size(re_s.rule) do
        Recovery(re_s, I' ∪ I_re_s, re_s.dot + 1, s_set)
        re_s = Max_Ex(An_s.predict, I', s_set)
    end while
end if
new_s = completion(An_s, re_s)
if new_s.dot < e_position then
    Recovery(new_s, I' ∪ I_new_s, e_position, s_set)
else
    return
end if
Considering the computing cost, we assume that deletion errors account for only a small proportion of a state. Thus, a maximum-error constraint is proposed to prune states with too many error hypotheses. An exponential distribution is used to model the number of deletion errors; it is written as e^(−θ n^(1/2)), where θ is a control parameter and n is the size of the ID set of the state. For a given state with m deletion errors, if m/n > e^(−θ n^(1/2)), the state is pruned.
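Both the insertion penalty and the pruning test can be written directly; in this sketch a single constant ρ stands in for the per-error penalty factors ρi of Eq. (5), and all names are illustrative.

import math

def insertion_penalized_score(root_viterbi, num_primitives, accepted_ids, rho=0.1):
    # Eq. (5): multiply the root score by one penalty factor per skipped primitive.
    num_insertions = num_primitives - len(accepted_ids)
    return root_viterbi * (rho ** num_insertions)

def prune_by_deletion_ratio(num_deletions, id_set_size, theta=0.2):
    # Maximum-error constraint: drop the state if m/n exceeds exp(-theta * sqrt(n)).
    return num_deletions / id_set_size > math.exp(-theta * math.sqrt(id_set_size))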
4 Experimental Results

4.1 Gymnastic Exercises
First, the recognition process for a person doing gymnastic exercises is presented. Three exercises, called E1, E2 and E3, are selected from a set of broadcast gymnastics. Twenty-nine sequences are collected, with 9, 10 and 10 sequences respectively. Fig. 2 illustrates the routine of exercise E3. Here, the motion trajectories of hands and feet are extracted as the original features. Referring to [7], we use the optical flow magnitude to capture the dominant motion regions. Then hands and feet are located with prior color and spatial information. Note that the tracking technique is not the focus of this paper.
Fig. 2. Illustration of the gymnastic exercise E3
(a) lf 1 2
(b) lh 1 2
(c) lh 1 3
(d) rf 2 1
(e) rh 2 3
Fig. 3. Some of the primitives in the gymnastic exercises. Each primitive describes the movement between different semantic points. For instance, lh 1 2 means move left hand from semantic point #1 to #2.
Fig. 4. The learnt rules of the gymnastic exercise E 3
For primitive modeling, some semantic points can first be learnt by clustering the stop points, since the end of a basic movement is commonly indicated by a hand or foot coming to a stop. The primitives can then be considered as the movements between different semantic points. Here, eighteen primitives are obtained; Fig. 3 illustrates some examples. Finally, the trajectory segments belonging to the same primitive are modeled by an HMM, which is used to compute the likelihood of a detected primitive. After primitive detection, each exercise includes around 23 primitives. For each exercise, a set of rules is learnt by the rule induction algorithm [14]. Fig. 4 shows an example of the learnt rules, corresponding to exercise E3, where E3 is denoted by the non-terminal P40. More details of the rule induction algorithm can be found in [14].
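A possible realization of this primitive-modeling stage, assuming scikit-learn for clustering the stop points and hmmlearn for the per-primitive HMMs; this is only a sketch of the idea, not the authors' pipeline or parameter choices.

import numpy as np
from sklearn.cluster import KMeans
from hmmlearn.hmm import GaussianHMM

def learn_semantic_points(stop_points, k):
    # Cluster the 2D stop points into k semantic points.
    return KMeans(n_clusters=k, n_init=10).fit(stop_points)

def fit_primitive_hmm(segments, n_states=3):
    # segments: list of (T_i, 2) trajectory segments of one primitive type.
    X = np.vstack(segments)
    lengths = [len(s) for s in segments]
    return GaussianHMM(n_components=n_states).fit(X, lengths)

def primitive_likelihood(model, segment):
    # Log-likelihood used as the lik field of a detected primitive.
    return model.score(segment)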
To validate the performance of event recognition, HMM and Coupled Hidden Markov Model (CHMM) classifiers are chosen for comparison, because they can be trained with little human interference, while other DBN based methods usually require manual construction of the model topology. For the HMM, the input is an 8-D vector sequence formed by the four trajectories of hands and feet. In the CHMM, the four trajectories are divided into two parts, one for the input of each chain. Here, we take the first 5 sequences for learning rules or parameters and all sequences for testing, and the control parameter ω in the beam-width constraint is 3. The experimental results are shown in Table 1.

Table 1. Correct classification rate (CCR) on the gymnastic exercises recognition

Event   Truth   MTP (θ=0.7)   MTP (θ=0.5)   MTP (θ=0.2)   HMM     CHMM
E1      9       9             9             9             9       9
E2      10      4             10            10            10      10
E3      10      10            10            10            8       9
Total   29      23            29            29            27      28
CCR             79.3%         100%          100%          93.1%   96.6%
As shown in Table 1, when θ is 0.5 or smaller, the multi-thread parsing (MTP) recognizes all the sequences correctly, whereas the HMM misclassifies two sequences and the CHMM misclassifies one. To further validate the robustness of our algorithm, three kinds of synthetic errors are randomly added to the testing trajectories as follows:
– A deletion error is added by replacing a motion trajectory segment that corresponds to a primitive with a still trajectory that does not correspond to any primitive.
– An insertion error is added by replacing a still trajectory segment with a motion trajectory segment that corresponds to a random primitive.
– A substitution error is added by replacing a motion trajectory segment with another segment that corresponds to a different primitive.
After various numbers of large time-scale errors are added, we compare our MTP parser with the HMM and CHMM classifiers again. The performance is shown in Table 2. When six additional errors are added (one substitution error is equivalent to a pair of one insertion error and one deletion error, so there are over 25% errors in the primitive stream), the multi-thread parsing with θ = 0.2 still achieves a satisfying result of 96.6%, due to the strong discriminative rule representation and the effective error recovery method, while the performance of the HMM and CHMM decreases markedly as the number of errors increases. For θ = 0.5, in terms of the maximum-error constraint in Section 3.2, the maximum tolerable number of deletion errors is 23 · exp(−0.5 · √23) ≈ 2, so the performance decreases rapidly when the number of errors exceeds 2.
Table 2. CCRs on event recognition with synthetic errors

Number of Errors   MTP (θ = 0.5)   MTP (θ = 0.2)   HMM     CHMM
1                  93.1%           100%            86.2%   89.7%
2                  82.8%           100%            82.8%   89.7%
3                  72.4%           100%            75.9%   86.2%
4                  62.1%           100%            69%     79.3%
5                  55.2%           100%            55.2%   75.9%
6                  41.4%           96.6%           51.7%   75.9%

(a) primitive detection    (b) parsing tree
Fig. 5. An example on the recognition process with our methods. The exercise “E 3” is recognized correctly. Here the leaf nodes are primitives. The number under the primitive indicates the corresponding ID in primitive stream.
From the above comparison, the effectiveness and robustness of our methods are validated. Moreover, along with the parsing, a parsing tree is obtained that explicitly expresses the hierarchical structure of each primitive stream. An example of the whole parsing process is shown in Fig. 5. In terms of the parsing tree, each input primitive has two possible outcomes: it is either accepted by the parsing tree or identified as an insertion error. Thereafter, the metric overall correct rate (OCR) is adopted to measure the parsing accuracy; it is defined as (NA + NI)/NP, where NA is the number of correctly accepted primitives in the parsing tree, NI is the number of correctly detected insertion errors, and NP is the total number of primitives in the input stream. Table 3 presents the parsing accuracy on the original data as well as with various numbers of additional errors, where ω is 3 and θ is 0.2.

Table 3. Parsing accuracy in recognizing the gymnastic exercises. Here, #e = 0 denotes the original data, #e = 1 the data with one synthetic error, and so on.

Errors number   #e = 0   #e = 1   #e = 2   #e = 3   #e = 4   #e = 5   #e = 6
OCR             87.6%    86.1%    85.9%    83.3%    81.3%    81.2%    76.8%
Fig. 6. Illustration on the traffic events in the crossroad. The trajectory stream in the scene can be represented by an iteration of three kinds of passing event.
As shown in the table, most correct primitives are accepted by the parsing tree, while the parsing accuracy decreases as the number of added errors increases. The failure to accept a primitive is mainly due to two reasons. One is the uncertainty in computing the temporal relation between sub-events, where the fuzzy method [21] relies on an appropriate threshold. The other is that, in the deletion error recovery, only the state with the maximum Viterbi probability is handled, which may lead to a local optimum instead of the global one.
4.2 Traffic Events in Crossroads
To further validate the effectiveness of our method, we test it in a realistic surveillance scene, shown in Fig. 6. In this scene, a traffic cycle is composed of three sub-events, "Go straight over the crossroad in the main road", "Turn left from the main road to the side road" and "Turn left from the side road to the main road", which happen alternately. Furthermore, "Go straight over the crossroad in the main road" can be decomposed into two parallel sub-events, "Go straight over the crossroad in the left side of the main road" and "Go straight over the crossroad in the right side of the main road". Eventually, each traffic event is comprised of a number of primitives, which are represented as vehicle trajectories in different lanes. We obtain the trajectory data from the previous work by Zhang et al. [14], which focuses on learning the rules from a trajectory stream; in this study, we validate the effectiveness of the MTP parser in recognizing the events in the trajectory stream. Here, a single vehicle passing through the scene is considered as a primitive. By clustering, seventeen primitives are acquired, which describe the main motion patterns between different entries and exits in the scene. The entries and exits can be learnt by work on semantic scene modeling such as [13]. Some of the primitives are presented in Fig. 7. In terms of these clusters, the trajectories that do not belong to any of the clusters are deleted. Furthermore, as reported
(a) v 1 6
(b) v 2 5
(c) v 5 2
(d) v 7 5
(e) v 8 1
Fig. 7. The basic motion patterns in the crossroad scene. The white arrow denotes the motion direction of moving object.
Fig. 8. The learnt rules in traffic event experiment. v i j is the primitive which indicates the basic motion pattern of “moving from the ith entry to the jth exit in the scene”.
in [14], some irrelevant trajectories that are unrelated to the traffic rules would distort the rule induction process; therefore, we manually delete such unrelated trajectories, e.g., v 8 1 in Fig. 7. The learnt rules are shown in Fig. 8. We find that the four main traffic events "P46", "P47", "P49" and "P50" (their meanings can be found in Fig. 8) in the crossroad have been learnt, and the whole traffic cycle is denoted by "P57". With the learnt rules, the MTP parser is adopted to recognize the interesting events in a given primitive stream. Twenty traffic cycles are used for testing. Among these cycles, five lack the main sub-event "P46" or "P47", since no vehicles pass in the corresponding ways. All twenty traffic cycles are recognized correctly, as the
Fig. 9. An example of the parsing result in one traffic cycle. The four main sub-events are all recovered correctly.

Table 4. Parsing accuracy on recognizing traffic events

Event   P46    P47    P49    P50    P57
OCR     100%   100%   96.9%  99.2%  98.9%
root "P57" can be recovered in each parsing tree, and the absent sub-events are recovered as deletion errors. An example of the parsing tree is shown in Fig. 9. Different from the previous gymnastic exercises, we do not use a DBN based method for comparison, because the number of moving objects in one frame or in a uniform time interval is not fixed, so the feature dimensionality cannot be determined. Moreover, in each parsing tree, we examine the parsing accuracies of the whole traffic cycle "P57" as well as of the four main sub-events "P46", "P47", "P49" and "P50". The OCR presented in Section 4.1 is used to measure the parsing accuracy. Table 4 presents the experimental results. The high OCR validates the capability of the event rules to fit the primitive stream as well as the accuracy of our parsing algorithm.
5 Conclusion and Future Work
We have presented a probabilistic grammar approach to the recognition of complex events in videos. Compared with previous grammar based work, our work has three main advantages:
– The event rules are learnt by a rule induction algorithm, while in other work the rules are predefined manually.
– Complex events containing parallel sub-events can be recognized by the MTP parser, while other methods can only handle single-thread events.
– An effective error recovery strategy is proposed to enhance the robustness of the parsing algorithm.
Extensive experiments including indoor gymnastic exercises and outdoor traffic events have been performed to validate the proposed method. In the future, we will adopt probabilistic methods to compute the temporal relations. Furthermore, a more efficient parsing strategy is also needed to reduce the time cost.
Acknowledgement. This work is supported by the National Basic Research Program of China (Grant No. 2004CB318110), the National Natural Science Foundation of China (Grant Nos. 60723005, 60605014, 60332010, 60335010 and 2004DFA06900), and the CASIA Innovation Fund for Young Scientists. The authors also thank Shiquan Wang and Liangsheng Wang for their help with motion tracking.
References 1. Niebles, J.C., Wang, H., Fei-Fei, L.: Unsupervised Learning of Human Action Categories Using Spatial-TemporalWords. In: Proc. Conf. BMVC (2006) 2. Laptev, I., Lindeberg, T.: Space-time interest points. In: Proc. Int. Conf. on Computer Vision (ICCV) (2003) 3. Laxton, B., Lim, J., Kriegman, D.: Leveraging temporal, contextual and ordering constraints for recognizing complex activities in video. In: Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) (2007) 4. Shi, Y., Huang, Y., Minnen, D., Bobick, A., Essa, I.: Propagation networks for recognition of partially ordered sequential action. In: Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) (2004) 5. Nguyen, N.T., Phung, D.Q., Venkatesh, S., Bui, H.: Learning and Detecting Activities from Movement Trajectories Using the Hierarchical Hidden Markov Model. In: Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) (2005) 6. Xiang, T., Gong, S.: Beyond Tracking: Modelling Activity and Understanding Behaviour. International Journal of Computer Vision (IJCV) 67(1) (2006) 7. Min, J., Kasturi, R.: Activity Recognition Based on Multiple Motion Trajectories. In: Proc. Int. Conf. on Pattern Recognition (ICPR), pp. 199–202 (2004) 8. Minnen, D., Essa, I., Starner, T.: Expectation Grammars: Leveraging High-Level Expectations for Activity Recognition. In: Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), vol. 2, pp. 626–632 (2003) 9. Moore, D., Essa, I.: Recognizing Multitasked Activities from Video Using Stochastic Context-Free Grammar. In: Proc. Conf. AAAI (2002) 10. Ivanov, Y.A., Bobick, A.F.: Recognition of visual activities and interactions by stochastic parsing. IEEE TRANS. PAMI 22(8), 852–872 (2000) 11. Ryoo, M.S., Aggarwal, J.K.: Recognition of Composite Human Activities through Context-Free Grammar Based Representation. In: Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) (2006)
12. Yamamoto, M., Mitomi, H., Fujiwara, F., Sato, T.: Bayesian Classification of TaskOriented Actions Based on Stochastic Context-Free Grammar. In: Proc. Int. Conf. on Automatic Face and Gesture Recognition (FGR) (2006) 13. Wang, X., Tieu, K., Grimson, E.: Learning Semantic Scene Models by Trajectory Analysis. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3953, pp. 110–123. Springer, Heidelberg (2006) 14. Zhang, Z., Huang, K.Q., Tan, T.N., Wang, L.S.: Trajectory Series Analysis based Event Rule Induction for Visual Surveillance. In: Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) (2007) 15. Hakeem, A., Shah, M.: Learning, detection and representation of multi-agent events in videos. Artif. Intell. 171(8-9), 586–605 (2007) 16. Nevatia, R., Zhao, T., Hongeng, S.: Hierarchical Language-based Representation of Events in Video Streams. In: Proc. CVPRW on Event Mining (2003) 17. Allen, J.F., Ferguson, F.: Actions and Events in Interval Temporal Logical. J. Logic and Computation 4(5), 531–579 (1994) 18. Johnston, M.: Unification-based Multimodal Parsing. In: Proc. Conf. on COLINGACL, pp. 624–630 (1998) 19. Amengual, J.C., Vidal, E.: Efficient Error-Correcting Viterbi Parsing. IEEE TRANS PAMI 20(10), 1109–1116 (1998) 20. Stolcke, A.: An efficient probabilistic context-free parsing algorithm that computes prefix probabilities. Computational Linguistics 21(2), 165–201 (1995) 21. Snoek, C.G.M., Worring, M.: Multimedia event-based video indexing using time intervals. IEEE TRANS Multimedia 7(4), 638–647 (2005)
Signature-Based Document Image Retrieval

Guangyu Zhu¹, Yefeng Zheng², and David Doermann¹

¹ University of Maryland, College Park, MD 20742, USA
² Siemens Corporate Research, Princeton, NJ 08540, USA
Abstract. As the most pervasive method of individual identification and document authentication, signatures present convincing evidence and provide an important form of indexing for effective document image processing and retrieval in a broad range of applications. In this work, we developed a fully automatic signature-based document image retrieval system that handles: 1) Automatic detection and segmentation of signatures from document images and 2) Translation, scale, and rotation invariant signature matching for document image retrieval. We treat signature retrieval in the unconstrained setting of non-rigid shape matching and retrieval, and quantitatively study shape representations, shape matching algorithms, measures of dissimilarity, and the use of multiple query instances in document image retrieval. Extensive experiments using large real world collections of English and Arabic machine printed and handwritten documents demonstrate the excellent performance of our system. To the best of our knowledge, this is the first automatic retrieval system for general document images by using signatures as queries, without manual annotation of the image collection.
1 Introduction
Searching for relevant documents from large complex document image repositories is a central problem in document image retrieval. One approach is to recognize text in the image using an optical character recognition (OCR) system, and apply text indexing and query. This solution is primarily restricted to machine printed text content because state-of-the-art handwriting recognition is error prone and is limited to applications with a small vocabulary, such as postal address recognition and bank check reading [24]. In broader, unconstrained domains, including searching of historic manuscripts [25] and the processing of languages where character recognition is difficult [7], image retrieval has demonstrated much better results. As unique and evidentiary entities in a broad range of application domains, signatures provide an important form of indexing that enables effective image search and retrieval from large heterogeneous document image collections. In this work, we address two fundamental problems in automatic document image search and retrieval using signatures:

Detection and Segmentation. Object detection involves creating location hypotheses for the object of interest. To achieve purposeful matching, a detected object often needs to be effectively segmented from the background, and represented in a meaningful way for analysis.
Fig. 1. Examples from the Tobacco-800 [1, 17] database (first row) and the University of Maryland Arabic database [18] (second row)
Matching. Object matching is the problem of associating a given object with another to determine whether they refer to the same real-world entity. It involves appropriate choices in representation, matching algorithms, and measures of dissimilarity, so that retrieval results can be invariant to large intra-class variability and robust under inter-class similarity. In the following sub-sections, we motivate the problems of detection, segmentation, and matching in the context of signature-based document image retrieval and present an overview of our system.
1.1 Signature Detection and Segmentation
Detecting and segmenting free-form objects such as signatures is challenging in computer vision. In our previous work [38], we proposed a multi-scale approach to jointly detecting and segmenting signatures from document images with unconstrained layout and formatting. This approach treats a signature generally as an unknown grouping of 2-D contour fragments, and solves for the two unknowns, the identification of the most salient structure in a signature and its grouping, using a signature production model that captures the dynamic curvature of 2-D contour fragments without recovering the temporal information. We extend the work of Zhu et al. [38] by incorporating a two-step, partially supervised learning framework that effectively deals with large variations. A base detector is learned from a small set of segmented images and tested on a larger pool of unlabeled training images. In the second step, we bootstrap these detections to refine the detector parameters while explicitly training against cluttered background. Our approach is empirically shown to be more robust than [38] against cluttered background and large intra-class variations, such as differences across languages. Fig. 4 shows Arabic signatures detected and segmented by our approach (right), in contrast to their regions in documents, which originally contain a significant amount of background text and noise.
1.2 Signature Matching for Document Image Retrieval
Detection and segmentation produce a set of 2-D contour fragments for each detected signature. Given a few available query signature instances and a large database of detected signatures, the problem of signature matching is to find the most similar signature samples from the database. By constructing the list of best matching signatures, we effectively retrieve the set of documents authorized or authored by the same person. We treat a signature as a non-rigid shape, and represent it by a discrete set of 2-D points sampled from the internal or external contours of the object. The 2-D point feature offers several competitive advantages compared to other compact geometrical entities used in shape representation, because it relaxes the strong assumption that the topology and the temporal order need to be preserved under structural variations or cluttered background. For instance, two strokes in one signature sample may touch each other, but remain well separated in another. These structural changes, as well as outliers and noise, are generally challenging for shock-graph based approaches [28, 30], which explicitly make use of the connections between points. In some earlier studies [16, 20, 23, 27], a shape is represented as an ordered sequence of points. This 1-D representation is well suited for signatures collected on-line using a PDA or Tablet PC. For unconstrained off-line handwriting in general, however, it is difficult to recover the temporal information from real images due to large structural variations [9]. Represented by a 2-D point distribution, a shape is more robust under structural variations, while carrying general shape information. As shown in Fig. 2, the shape of a
Fig. 2. Shape contexts [2] and local neighborhood graphs [36] constructed from detected and segmented signatures. First column: Original signature regions in documents. Second column: Shape contexts descriptors constructed at a point, which provides a large-scale shape description. Third column: Local neighborhood graphs capture local structures for non-rigid shape matching.
signature is well captured by a finite set P = {P1 , . . . , Pn }, Pi ∈ R2 , of n points, which are sampled from edge pixels computed by an edge detector.1 We use two state-of-the-art non-rigid shape matching algorithms for signature matching. The first method is based on the representation of shape contexts, introduced by Belongie et al. [2]. In this approach, a spatial histogram defined as shape context is computed for each point, which describes the distribution of the relative positions of all remaining points. Prior to matching, the correspondences between points are solved first through weighted bipartite graph matching. Our second method uses the non-rigid shape matching algorithm proposed by Zheng and Doermann [36], which formulates shape matching as an optimization problem that preserves local neighborhood structure. This approach has an intuitive graph matching interpretation, where each point represents a vertex and two vertices are considered connected in the graph if they are neighbors. The problem of finding the optimal match between shapes is thus equivalent to maximizing the number of matched edges between their corresponding graphs under a one-to-one matching constraint.2 Computationally, [36] employs an iterative framework for estimating the correspondences and the transformation. In each iteration, graph matching is initialized using the shape context distance, and subsequently updated through relaxation labeling for more globally consistent results. Treating an input pattern as a generic 2-D point distribution broadens the space of dissimilarity metrics and enables effective shape discrimination using the correspondences and the underlying transformations. We propose two novel shape dissimilarity metrics that quantitatively measure anisotropic scaling and registration residual error, and present a supervised training framework for effectively combining complementary shape information from different dissimilarity measures by linear discriminant analysis (LDA). We comprehensively study different shape representations, measures of dissimilarity, shape matching algorithms, and the use of multiple query instances in overall retrieval accuracy. The structure of this paper is as follows: The next section reviews related work. In Section 3, we describe our signature matching approach in detail and present methods to combine different measures of shape dissimilarity and multiple query instances for effective retrieval with limited supervised training. We discuss experimental results on real English and Arabic document datasets in Section 4 and conclude in Section 5.
2 Related Work

2.1 Shape Matching
Rigid shape matching has been approached in a number of ways with intent to obtain a discriminative global description. Approaches using silhouette features include Fourier descriptors [33,19], geometric hashing [15], dynamic programming 1 2
¹ We randomly select these n sample points from the contours via a rejection sampling method that spreads the points over the entire shape.
² To robustly handle outliers, multiple points are allowed to match to the dummy point added to each point set.
[13, 23], and skeletons derived using Blum's medial axis transform [29]. Although silhouettes are simple and efficient to compare, they are limited as shape descriptors because they ignore internal contours and are difficult to extract from real images [22]. Other approaches, such as chamfer matching [5] and the Hausdorff distance [14], treat the shape as a discrete set of points in a 2-D image extracted using an edge detector. Unlike approaches that compute correspondences, these methods do not enforce pairing of points between the two sets being compared. While they work well under a selected subset of rigid transformations, they cannot be generally extended to handle non-rigid transformations. The reader may consult [21, 32] for a general survey of classic rigid shape matching techniques. Matching for non-rigid shapes needs to consider unknown transformations that are both linear (e.g., translation, rotation, scaling, and shear) and non-linear. One comprehensive framework for shape matching in this general setting is to iteratively estimate the correspondence and the transformation. The iterative closest point (ICP) algorithm introduced by Besl and McKay [3] and its extensions [11, 35] provide a simple heuristic approach. Assuming two shapes are roughly aligned, the nearest neighbor in the other shape is assigned as the estimated correspondence at each step. This estimate of the correspondence is then used to refine the estimated affine or piece-wise-affine mapping, and vice versa. While ICP is fast and guaranteed to converge to a local minimum, its performance degenerates quickly when large non-rigid deformation or a significant amount of outliers is involved [12]. Chui and Rangarajan [8] developed an iterative optimization algorithm to determine the point correspondences and the shape transformation jointly, using thin plate splines as a generic parameterization of a non-rigid transformation. Joint estimation of correspondences and transformation leads to a highly non-convex optimization problem, which is solved using softassign and deterministic annealing.
2.2
Document Image Retrieval
Rath et al. [26] demonstrated retrieval of handwritten historical manuscripts by using images of handwritten words to query unlabeled document images. The system compares word images based on Fourier descriptors computed from a collection of shape features, including the projection profile and the contours extracted from the segmented word. A mean average precision of 63% was reported for image retrieval when tested on 20 images using optimized 2-word queries. Srihari et al. [31] developed a signature matching and retrieval approach by computing the correlation of gradient, structural, and concavity features extracted from fixed-size image patches. It achieved 76.3% precision on a collection of 447 signature images from the Tobacco-800 database [1, 17]; the signature images were manually cropped because the approach is not translation, scale, or rotation invariant.
3
Matching and Retrieval
3.1
Measures of Shape Dissimilarity
Before we introduce two new measures of dissimilarity for general shape matching and retrieval, we first discuss existing shape similarity metrics. Each of these
dissimilarity measures captures certain shape information from the estimated correspondences and transformation for effective discrimination. In the next subsection, we describe how to effectively combine these individual measures with limited supervised training, and present our evaluation framework. Several measures of shape dissimilarity have demonstrated success in object recognition and retrieval. One is the thin-plate spline bending energy Dbe, and another is the shape context distance Dsc. As a conventional tool for interpolating coordinate mappings from R² to R² based on point constraints, the thin-plate spline (TPS) is commonly used as a generic representation of non-rigid transformation [4]. The TPS interpolant f(x, y) minimizes the bending energy

∫∫_{R²} [ (∂²f/∂x²)² + 2(∂²f/∂x∂y)² + (∂²f/∂y²)² ] dx dy    (1)

over the class of functions that satisfy the given point constraints. Equation (1) imposes smoothness constraints to discourage non-rigidities that are too arbitrary. The bending energy Dbe [8] measures the amount of non-linear deformation needed to best warp the shapes into alignment, and provides a physical interpretation. However, Dbe only measures the deformation beyond an affine transformation, and the functional in (1) is zero if the underlying transformation is purely affine. The shape context distance Dsc between a template shape T composed of m points and a deformed shape D of n points is defined in [2] as
Fig. 3. Anisotropic scaling and registration quality effectively capture shape differences. (a) Signature regions without segmentation. The first two signatures are from the same person, whereas the third one is from a different individual. (b) Detected and segmented signatures by our approach. Second row: matching results of first two signatures using (c) shape contexts and (d) local neighborhood graph, respectively. Last row: matching results of first and third signatures using (e) shape contexts and (f) local neighborhood graph, respectively. Corresponding points identified by shape matching are linked and unmatched points are shown in green. The computed affine maps are shown in figure legends.
Dsc(T, D) = (1/m) Σ_{t∈T} argmin_{d∈D} C(T(t), d) + (1/n) Σ_{d∈D} argmin_{t∈T} C(T(t), d),    (2)

where T(·) denotes the estimated TPS transformation and C(·, ·) is the cost function for assigning correspondence between any two points. Given two points, t in shape T and d in shape D, with associated shape contexts h_t(k) and h_d(k), for k = 1, 2, ..., K, respectively, C(t, d) is defined using the χ² statistic as

C(t, d) ≡ (1/2) Σ_{k=1}^{K} [h_t(k) − h_d(k)]² / [h_t(k) + h_d(k)].    (3)
We introduce a new measure of dissimilarity Das that characterizes the amount of anisotropic scaling between two shapes. Anisotropic scaling is a form of affine transformation that involves a change in the relative directional scaling. As illustrated in Fig. 3, the stretching or squeezing captured by the computed affine map reveals a global mismatch in shape dimensions among all registered points, even in the presence of large intra-class variation. We compute the amount of anisotropic scaling between two shapes by estimating the ratio of the two scaling factors Sx and Sy in the x and y directions, respectively. A TPS transformation can be decomposed into a linear part corresponding to a global affine alignment, together with the superposition of independent, affine-free deformations (or principal warps) of progressively smaller scales [4]. We ignore the non-affine terms in the TPS interpolant when estimating Sx and Sy. The 2-D affine transformation is represented by a 2 × 2 linear transformation matrix A and a 2 × 1 translation vector T:

(u, v)^T = A (x, y)^T + T,    (4)

where Sx and Sy can be computed by singular value decomposition of the matrix A. We define Das as

Das = log( max(Sx, Sy) / min(Sx, Sy) ).    (5)

Note that we have Das = 0 when only isotropic scaling is involved (i.e., Sx = Sy). We propose another distance measure Dre based on the registration residual errors under the estimated non-rigid transformation. To minimize the effect of outliers, we compute the registration residual error from the subset of points that have been assigned a correspondence during matching, and ignore points matched to the dummy point nil. Let the function M : Z+ → Z+ define the matching between two point sets of size n representing the template shape T and the deformed shape D. Suppose ti and d_{M(i)} for i = 1, 2, ..., n denote pairs of matched points in shape T and shape D, respectively. We define Dre as

Dre = ( Σ_{i: M(i) ≠ nil} ‖T(ti) − d_{M(i)}‖ ) / ( Σ_{i: M(i) ≠ nil} 1 ),    (6)

where T(·) denotes the estimated TPS transformation and ‖·‖ is the Euclidean norm.
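As a concrete illustration of the two proposed measures, the short sketch below computes Das from the 2 × 2 affine matrix A via SVD and Dre from already-matched point pairs; the function names and input layout are our own assumptions, not part of the paper.

```python
# Sketch of the two proposed dissimilarity measures: anisotropic scaling (Eq. 5)
# and registration residual error (Eq. 6). Inputs are assumed to come from the
# matching stage; names here are illustrative.
import numpy as np

def anisotropic_scaling_distance(A):
    """D_as = log(max(Sx, Sy) / min(Sx, Sy)) from the 2x2 affine matrix A."""
    s = np.linalg.svd(A, compute_uv=False)   # singular values = scaling factors
    return float(np.log(s.max() / s.min()))

def registration_residual(warped_template_pts, matched_deformed_pts):
    """D_re = mean Euclidean distance over points with a (non-dummy) match.

    warped_template_pts[i] is T(t_i) and matched_deformed_pts[i] is d_{M(i)};
    rows for points matched to the dummy point are assumed already excluded.
    """
    diffs = warped_template_pts - matched_deformed_pts
    return float(np.linalg.norm(diffs, axis=1).mean())
```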
3.2
Shape Distance
After matching, we compute the overall shape distance for retrieval as the weighted sum of the individual distances given by all the measures: shape context distance, TPS bending energy, anisotropic scaling, and registration residual error:

D = wsc Dsc + wbe Dbe + was Das + wre Dre.    (7)
The weights in (7) are optimized by linear discriminant analysis using only a small amount of training data. The retrieval performance of a single query instance may depend largely on the instance used for the query [6]. In practice, it is often possible to obtain multiple signature samples from the same person. This enables us to use them as an equivalence class to achieve better retrieval performance. When multiple instances q1, q2, ..., qk from the same class Q are used as queries, we combine their individual distances D1, D2, ..., Dk into one shape distance as

D = min(D1, D2, ..., Dk).    (8)
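A minimal sketch of this combination step is given below; it assumes each query-candidate comparison yields the four distances of Eq. (7), uses scikit-learn's LDA as one possible way to obtain the weights (the paper does not prescribe a particular implementation), and applies the min rule of Eq. (8) over multiple query instances.

```python
# Sketch: learn the weights of Eq. (7) with LDA on labelled distance vectors,
# then fuse multiple query instances with the min rule of Eq. (8).
# scikit-learn is an assumption of this sketch.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_weights(train_dists, train_labels):
    """train_dists: (N, 4) rows of [D_sc, D_be, D_as, D_re] for training pairs.
    train_labels: 1 = different person, 0 = same person, so that a larger
    combined distance means less similar."""
    lda = LinearDiscriminantAnalysis()
    lda.fit(train_dists, train_labels)
    return lda.coef_.ravel()              # one weight per dissimilarity measure

def combined_distance(weights, dists):
    """Eq. (7): weighted sum of the four individual distances."""
    return float(np.dot(weights, dists))

def multi_instance_distance(weights, per_query_dists):
    """Eq. (8): min over the distances from each query instance to a candidate."""
    return min(combined_distance(weights, d) for d in per_query_dists)
```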
3.3
Evaluation Methodology
We use the two most commonly cited measures, average precision and R-precision, to evaluate the performance of each ranked retrieval. Here we make precise the intuition behind these evaluation metrics, which emphasize the retrieval ranking differently. Given a ranked list of documents returned in response to a query, average precision (AP) is defined as the average of the precisions at all relevant documents. It effectively combines precision, recall, and relevance ranking, and is often considered a stable and discriminating measure of the quality of retrieval engines [6], because it rewards retrieval systems that rank relevant documents higher and at the same time penalizes those that rank irrelevant ones higher. R-precision (RP) for a query i is the precision at rank R(i), where R(i) is the number of documents relevant to query i. R-precision de-emphasizes the exact ranking among the retrieved relevant documents and is more useful when there is a large number of relevant documents. Fig. 4 shows a query example, in which eight of the nine relevant signatures are among the top nine and the remaining relevant signature is ranked 12th in the returned list. For this query, AP = (1+1+1+1+1+1+1+8/9+9/12)/9 = 96.0%, and RP = 8/9 = 88.9%.
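Both measures follow directly from the relevance flags of a ranked list. The sketch below reproduces the worked example, assuming the single irrelevant item in the top nine sits at rank 8, which matches the arithmetic quoted above, and assuming all relevant documents appear in the returned list.

```python
# Average precision and R-precision from a ranked list of relevance flags.
def average_precision(rel):
    """rel[i] is 1 if the document at rank i+1 is relevant (all relevant docs
    are assumed to appear somewhere in the list)."""
    hits, precisions = 0, []
    for rank, r in enumerate(rel, start=1):
        if r:
            hits += 1
            precisions.append(hits / rank)   # precision at each relevant document
    return sum(precisions) / hits

def r_precision(rel, num_relevant):
    """Precision within the top R ranks, R = number of relevant documents."""
    return sum(rel[:num_relevant]) / num_relevant

# Worked example from the text: 7 relevant at ranks 1-7, one at rank 9, one at rank 12.
ranked = [1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1]
print(round(average_precision(ranked) * 100, 1))   # 96.0
print(round(r_precision(ranked, 9) * 100, 1))      # 88.9
```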
4
Experiments
4.1
Datasets
To evaluate system performance in signature-based document image retrieval, we used the 1,290-image Tobacco-800 database [17] and 169 documents from the University of Maryland Arabic database [18]. The Maryland Arabic database consists of 166,071 Arabic handwritten business documents. Fig. 1 shows some
Fig. 4. A signature query example. Among the total of nine relevant signatures, eight appear in the top nine of the returned ranked list, giving average precision of 96.0%, and R-precision of 88.9%. The irrelevant signature that is ranked among the top nine is highlighted with a blue bounding box. Left: signature regions in the document. Right: detected and segmented signatures used in retrieval.
examples from the two datasets. We tested our system using all 66 and 21 signature classes in the Tobacco-800 and Maryland Arabic datasets, respectively, among which the number of signatures per person varies from 6 to 11. The overall system performance across all queries is computed quantitatively as mean average precision (MAP) and mean R-precision (MRP).
4.2
Signature Matching and Retrieval
Shape Representation. We compare shape representations computed using different segmentation strategies in the context of document image retrieval. In particular, we consider the skeleton and the contour, which are widely used mid-level features in computer vision and can be extracted relatively robustly. For comparison, we developed a baseline signature extraction approach by removing machine-printed text and noise from labeled signature regions in the ground truth using a trained Fisher classifier [37]. To improve classification, the baseline approach models the local context among printed text using a Markov Random Field (MRF). We implemented two classical thinning algorithms, one by Dyer and Rosenfeld [10] and the other by Zhang and Suen [34], to compute skeletons from the signature layer extracted by the baseline approach. Fig. 5
Table 1. Quantitative comparison of different shape representations

                                      Tobacco-800        UMD Arabic
                                      MAP      MRP       MAP      MRP
Skeleton (Dyer and Rosenfeld [10])    83.6%    79.3%     78.7%    76.4%
Skeleton (Zhang and Suen [34])        85.2%    81.4%     79.6%    77.2%
Salient contour (our approach)        90.5%    86.8%     92.3%    89.0%
illustrates the layer subtraction and skeleton extraction in the baseline approach, as compared to the salient contours of signatures detected and segmented from documents by our approach. In this experiment, we sample 200 points along the extracted skeleton and salient contour representations of each signature. We use the faster shape context matching algorithm [2] to solve for correspondences between points on the two shapes and compute all four shape distances Dsc, Dbe, Das, and Dre. To remove any bias, the query signature is removed from the test set in that query for all retrieval experiments. Document image retrieval performance of the different shape representations on the two datasets is summarized in Table 1. Salient contours computed by our detection and segmentation approach outperform the skeletons directly extracted from labeled signature regions on both the Tobacco-800 and Maryland Arabic datasets. As illustrated by the third and fourth columns in Fig. 5, thinning algorithms are sensitive to structural variations among neighboring strokes and to noise. In contrast, salient contours provide a globally consistent representation by giving more weight to structurally important shape features. This advantage in retrieval performance is more evident on the Maryland Arabic dataset, in which signatures and background handwriting are closely spaced. Shape Matching Algorithms. We developed signature matching approaches using two non-rigid shape matching algorithms, shape contexts and the local neighborhood
Fig. 5. Skeleton and contour representations computed from signatures. The first column shows labeled signature regions in the ground truth. The second column shows signature layers extracted from the labeled signature regions by the baseline approach [37]. The third and fourth columns show skeletons computed by Dyer and Rosenfeld [10] and by Zhang and Suen [34], respectively. The last column shows salient contours of the signatures actually detected and segmented from documents by our approach.
graph, and evaluate their retrieval performance on salient contours. We use all four measures of dissimilarity, Dsc, Dbe, Das, and Dre, in this experiment. The weights of the different shape distances are optimized by LDA using a randomly selected subset of signature samples as training data. Fig. 6 shows the retrieval performance measured in MAP for both methods as the size of the training set varies. A special case in Fig. 6 is when no training data is used. In this case, we simply normalize each shape distance by the standard deviation computed from all instances in that query, thus effectively weighting every shape distance equally.
Fig. 6. Document image retrieval using a single signature instance as the query, with shape contexts [2] (left) and the local neighborhood graph [36] (right). The weights for the shape distances computed by the four measures of dissimilarity can be optimized by LDA using a small amount of training data.
A significant increase in overall retrieval performance is observed using only a fairly small amount of training data. Both shape matching methods are effective, with no significant difference between them. In addition, the MAP of the two methods deviates by less than 2.55% and 1.83%, respectively, when different training sets are randomly selected. These results demonstrate the generalization ability gained by representing signatures as non-rigid shapes and counteracting large variations among unconstrained handwriting through geometrically invariant matching. Measures of Shape Dissimilarity. Table 2 summarizes the retrieval performance using different measures of shape dissimilarity on the larger Tobacco-800 database. The results are based on the shape context matching algorithm, as it demonstrated a smaller performance deviation in the previous experiment. We randomly select 20% of the signature instances for training and use the rest for testing. The most powerful single measure of dissimilarity for signature retrieval is the shape context distance (Dsc), followed by the affine-transformation-based measure (Das), the TPS bending energy (Dbe), and the registration residual error (Dre). By incorporating rich global shape information, shape contexts are discriminative even under large variations. Moreover, the experiment shows that measures based on transformations (affine for linear and TPS for non-linear
Table 2. Retrieval using different measures of shape dissimilarity

Measure of Shape Dissimilarity    MAP      MRP
Dsc                               66.9%    62.8%
Das                               61.3%    57.0%
Dbe                               59.8%    55.6%
Dre                               52.5%    48.3%
Dsc + Dbe                         78.7%    74.3%
Dsc + Das + Dbe + Dre             90.5%    86.8%
Table 3. Retrieval using multiple signature instances in each query

Number of Query Instances    MAP      MRP
One                          90.5%    86.8%
Two                          92.6%    88.2%
Three                        93.2%    89.5%
transformation) are very effective. The two proposed measures of shape dissimilarity, Das and Dre, improve the retrieval performance considerably, increasing MAP from 78.7% to 90.5%. This demonstrates that we can significantly improve the retrieval quality by combining effective complementary measures of shape dissimilarity through limited supervised training. Multiple Instances as Query. Table 3 summarizes the retrieval performance using multiple signature instances as an equivalence class in each query on the Tobacco-800 database. The queries consist of all the combinations of multiple signature instances from the same person, giving even larger query sets. In each query, we generate a single ranked list of retrieved document images using the final shape distance, defined in Equation (8), between each equivalence class of query signatures and each searched instance. As shown in Table 3, using multiple instances steadily improves the performance in terms of both MAP and MRP. The best results on Tobacco-800 are 93.2% MAP and 89.5% MRP, obtained when three instances are used for each query.
5
Conclusion
In this paper, we described the first signature-based general document image retrieval system that automatically detects, segments, and matches signatures from document images with unconstrained layouts and complex backgrounds. To robustly handle large structural variations, we treated the signature in this unconstrained setting as a non-rigid shape and demonstrated document image retrieval using state-of-the-art shape representations, measures of shape dissimilarity, and shape matching algorithms, as well as multiple instances as queries.
We quantitatively evaluated these techniques in challenging retrieval tests using real English and Arabic datasets, each composed of a large number of classes but relatively small numbers of signature instances per class. In addition to the experiments presented in Section 4, we have conducted field tests of our system using an ARDA-sponsored dataset composed of 32,706 document pages in 9,630 multi-page images. Extensive experimental and field test results demonstrated the excellent performance of our document image search and retrieval system.
References 1. Agam, G., Argamon, S., Frieder, O., Grossman, D., Lewis, D.: The Complex Document Image Processing (CDIP) test collection. Illinois Institute of Technology (2006), http://ir.iit.edu/projects/CDIP.html 2. Belongie, S., Malik, J., Puzicha, J.: Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Anal. Mach. Intell. 24(4), 509–522 (2002) 3. Besl, P., McKay, H.: A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 239–256 (1992) 4. Bookstein, F.: Principle warps: Thin-plate splines and the decomposition of deformations. IEEE Trans. Pattern Anal. Mach. Intell. 11(6), 567–585 (1989) 5. Borgefors, G.: Hierarchical chamfer matching: A parametric edge matching algorithm. IEEE Trans. Pattern Anal. Mach. Intell. 10(6), 849–865 (1988) 6. Buckley, C., Voorhees, E.: Evaluating evaluation measure stability. In: Proc. ACM SIGIR Conf., pp. 33–40 (2000) 7. Chan, J., Ziftci, C., Forsyth, D.: Searching off-line Arabic documents. In: Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 1455–1462 (2006) 8. Chui, H., Rangarajan, A.: A new point matching algorithm for non-rigid registration. Computer Vision and Image Understanding 89(2-3), 114–141 (2003) 9. Doermann, D., Rosenfeld, A.: Recovery of temporal information from static images of handwriting. Int. J. Computer Vision 15(1-2), 143–164 (1995) 10. Dyer, C., Rosenfeld, A.: Thinning algorithms for gray-scale pictures. IEEE Trans. Pattern Anal. Mach. Intell. 1(1), 88–89 (1979) 11. Feldmar, J., Anyche, N.: Rigid, affine and locally affine registration of free-form surfaces. Int. J. Computer Vision 18(2), 99–119 (1996) 12. Gold, S., Rangarajan, A., Lu, C., Pappu, S., Mjolsness, E.: New algorithms for 2-D and 3-D point matching: Pose estimation and correspondence. Pattern Recognition 31(8), 1019–1031 (1998) 13. Gorman, J., Mitchell, R., Kuhl, F.: Partial shape recognition using dynamic programming. IEEE Trans. Pattern Anal. Mach. Intell. 10(2), 257–266 (1988) 14. Huttenlocher, D., Lilien, R., Olson, C.: Comparing images using the Hausdorff distance. IEEE Trans. Pattern Anal. Mach. Intell. 15(9), 850–863 (1993) 15. Lamdan, Y., Schwartz, J., Wolfson, H.: Object recognition by affine invariant matching. In: Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 335–344 (1988) 16. Latecki, L., Lakamper, R., Eckhardt, U.: Shape descriptors for non-rigid shapes with a single closed contour. In: Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 424–429 (2000) 17. Lewis, D., Agam, G., Argamon, S., Frieder, O., Grossman, D., Heard, J.: Building a test collection for complex document information processing. In: Proc. ACM SIGIR Conf., pp. 665–666 (2006)
18. Li, Y., Zheng, Y., Doermann, D., Jaeger, S.: Script-independent text line segmentation in freestyle handwritten documents. IEEE Trans. Pattern Anal. Mach. Intell. 30(8), 1313–1329 (2008) 19. Lin, C., Chellappa, R.: Classification of partial 2-D shapes using Fourier descriptors. IEEE Trans. Pattern Anal. Mach. Intell. 9(5), 686–690 (1987) 20. Ling, H., Jacobs, D.: Shape classification using the inner-distance. IEEE Trans. Pattern Anal. Mach. Intell. 29(2), 286–299 (2007) 21. Loncaric, S.: A survey of shape analysis techniques. Pattern Recognition 31(8), 983–1001 (1998) 22. Mori, G., Belongie, S., Malik, J.: Efficient shape matching using shape contexts. IEEE Trans. Pattern Anal. Mach. Intell. 27(11), 1832–1837 (2005) 23. Petrakis, E., Diplaros, A., Milios, E.: Matching and retrieval of distorted and occluded shapes using dynamic programming. IEEE Trans. Pattern Anal. Mach. Intell. 24(11), 1501–1516 (2002) 24. Plamondon, R., Srihari, S.: On-line and off-line handwriting recognition: A comprehensive survey. IEEE Trans. Pattern Anal. Mach. Intell. 22(1), 63–84 (2000) 25. Rath, T., Manmatha, R.: Word image matching using dynamic time warping. In: Proc. IEEE Conf. Computer Vision and Pattern Recognition (2003) 26. Rath, T., Manmatha, R., Lavrenko, V.: A search engine for historical manuscript images. In: Proc. ACM SIGIR Conf., pp. 369–376 (2004) 27. Sebastian, T., Klein, P., Kimia, B.: On aligning curves. IEEE Trans. Pattern Anal. Mach. Intell. 25(1), 116–124 (2003) 28. Sebastian, T., Klein, P., Kimia, B.: Recognition of shapes by editing their shock graphs. IEEE Trans. Pattern Anal. Mach. Intell. 26(5), 550–571 (2004) 29. Sharvit, D., Chan, J., Tek, H., Kimia, B.: Symmetry-based indexing of image databases. J. Visual Communication and Image Representation 9, 366–380 (1998) 30. Siddiqi, K., Shokoufandeh, A., Dickinson, S., Zucker, S.: Shock graphs and shape matching. Int. J. Computer Vision 35(1), 13–32 (1999) 31. Srihari, S., Shetty, S., Chen, S., Srinivasan, H., Huang, C., Agam, G., Frieder, O.: Document image retrieval using signatures as queries. In: Proc. Int. Conf. on Document Image Analysis for Libraries, pp. 198–203 (2006) 32. Velkamp, R., Hagedoorn, M.: State of the art in shape matching. Utrecht University, Netherlands, Tech. Rep. UU-CS-1999-27 (1999) 33. Zahn, C., Roskies, R.: Fourier descriptors for plane closed curves. IEEE Trans. Computing 21(3), 269–281 (1972) 34. Zhang, T., Suen, C.: A fast parallel algorithm for thinning digital patterns. Comm. ACM 27(3), 236–239 (1984) 35. Zhang, Z.: Iterative point matching for registration of free-form curves and surfaces. Int. J. Computer Vision 13(2), 119–152 (1994) 36. Zheng, Y., Doermann, D.: Robust point matching for non-rigid shapes by preserving local neighborhood structures. IEEE Trans. Pattern Anal. Mach. Intell. 28(4), 643–649 (2006) 37. Zheng, Y., Li, H., Doermann, D.: Machine printed text and handwriting identification in noisy document images. IEEE Trans. Pattern Anal. Mach. Intell. 26(3), 337–353 (2004) 38. Zhu, G., Zheng, Y., Doermann, D., Jaeger, S.: Multi-scale structural saliency for signature detection. In: Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 1–8 (2007)
An Effective Approach to 3D Deformable Surface Tracking
Jianke Zhu¹, Steven C.H. Hoi², Zenglin Xu¹, and Michael R. Lyu¹
¹ Department of Computer Science and Engineering, Chinese University of Hong Kong, Shatin, Hong Kong
{jkzhu,zlxu,lyu}@cse.cuhk.edu.hk
² School of Computer Engineering, Nanyang Technological University, Singapore
[email protected]
Abstract. The key challenge with 3D deformable surface tracking arises from the difficulty in estimating a large number of 3D shape parameters from noisy observations. A recent state-of-the-art approach attacks this problem by formulating it as a Second Order Cone Programming (SOCP) feasibility problem. The main drawback of this solution is the high computational cost. In this paper, we first reformulate the problem into an unconstrained quadratic optimization problem. Instead of handling a large set of complicated SOCP constraints, our new formulation can be solved very efficiently by resolving a set of sparse linear equations. Based on the new framework, a robust iterative method is employed to handle large outliers. We have conducted an extensive set of experiments to evaluate the performance on both synthetic and real-world testbeds, from which the promising results show that the proposed algorithm not only achieves better tracking accuracy, but also executes significantly faster than the previous solution.
1
Introduction
Deformable surface modeling and tracking has attracted extensive research interest due to its significant role in many computer vision applications [1,2,3,4,5,6]. Since the deformable surface is usually highly dynamic and represented by many deformation parameters, the prior models are often engaged in dealing with the ill-posed optimization problem of deformable surface recovery. A variety of methods have been proposed to create these models, such as the interpolation method [1,7], the data embedding method [2,5,8] and physical models [9,10,11]. The major problem of these models is that their smoothness constraints usually limit their capability of recovering sharply folded and creased surfaces accurately. Instead of using the strong prior models, M. Salzmann et al. recently formulated the problem generally as a Second Order Cone Programming (SOCP) problem without engaging the unwanted smoothness constraints [12]. Although they have demonstrated some promising results on tracking deformable surfaces from 3D to 2D correspondences, their approach is computationally expensive
Fig. 1. Recovering highly deformable surfaces from video sequences (a-c). (a) A piece of paper with well-marked creases. (b) Severely bending. (c) A piece of cloth.
while handling a large number of SOCP constraints for a large set of free variables. In this paper, we apply the principles they have described, and investigate new techniques to address the shortcomings. Specifically, we propose a novel unconstrained quadratic optimization formulation for 3D deformable surface tracking, which requires only the solution of a set of sparse linear equations. In our approach, we first show that the SOCP formulation can be viewed as a special case of a general convex optimization feasibility problem. Then, we introduce a slack variable to rewrite the SOCP formulation as a series of Quadratic Programming (QP) problems. Furthermore, we convert the SOCP constraints into a quadratic regularization term, which leads to a novel unconstrained optimization formulation. Finally, we show that the resulting unconstrained optimization problem can be solved efficiently by a robust progressive finite Newton optimization scheme [13], which can handle large outliers. Hence, not only is the proposed solution highly efficient, but it can also directly handle noisy data in an effective way. To evaluate the performance of our proposed algorithm, we have conducted extensive experiments on both synthetic and real-world data, as shown in Fig. 1. The rest of this paper is organized as follows. Section 2 reviews the previous approaches to deformable surface recovery. Section 3 presents the proposed 3D deformable surface tracking solution using a novel unconstrained quadratic optimization method. Section 4 shows the details of our experimental implementation and evaluates the experimental results. Section 5 discusses some limitations and sets out our conclusion.
2
Related Work
We are motivated by the SOCP method [12] and by the convex [14] and quasiconvex [15] optimization approaches to the triangulation problem. Moreover, it is important to note that our work is closely related to previous work on structure from motion [16] as well as nonrigid surface detection and tracking [4,6,11,13]. Factorization methods are widely used in 3D deformable surface recovery. Bregler et al. [16] proposed a solution for recovering 3D nonrigid shapes from video sequences, which factorizes the tracked 2D feature points to build the 3D
shape model. In this approach, the 3D shape in each frame is represented by a linear combination of a set of basis shapes. A similar method was applied to Active Appearance Model fitting results in order to retrieve 3D facial shapes from video sequences [8]. Based on the factorization method, a weak constraint [17] can be introduced to handle the ambiguity problem by constraining the frame-to-frame depth variations. In addition, machine learning techniques have also been applied to building the linear subspace from either collected data or synthetic data. Although some promising results have been achieved in 3D face fitting [18] and deformable surface tracking [2], these methods usually require a large number of training samples to obtain sufficient generalization capability. As for 2D nonrigid surface detection, J. Pilet et al. [11] proposed a real-time algorithm which employs a semi-implicit optimization approach to handle noisy feature correspondences. In contrast, several image registration methods [1,7] tend to be computationally expensive and are mainly aimed at object recognition.
3
Fast 3D Deformable Surface Tracking
In this section, we first formally define the 3D deformable surface tracking problem. Then we present an optimization framework for treating the 3D deformable surface tracking problem as a general convex optimization feasibility problem. We then revisit previous SOCP work that can be viewed as a special case of the general convex optimization framework. With a view to improving the efficiency of the optimization, we present techniques to relax the SOCP constraints properly and propose two new optimization formulations. One is a QP formulation and the other is an efficient unconstrained quadratic optimization.
3.1
Problem Definition
The 3D deformable surface is explicitly represented by triangulated meshes. As shown in Fig. 1, we employ a triangulated 3D mesh with n vertices, which are formed into a shape vector s as below:

s = [x_1 ... x_n  y_1 ... y_n  z_1 ... z_n]^T,

in which we define v_i = (x_i, y_i, z_i) as the coordinates of the i-th mesh vertex. The shape vector s is the variable to be estimated. Given a set of 3D to 2D correspondences M between the surface points and the image locations, a pair of matched points is defined as m = (m_S, m_I) ∈ M, where m_S is the 3D point on the surface and m_I is the corresponding 2D location in the input image. We assume that the surface point m_S lies on a facet whose three vertices' coordinates are v_i, v_j and v_k respectively, where i, j, k ∈ [1, n] are the vertex indices. The piecewise affine transformation is used to express the surface point m_S inside the corresponding triangle in terms of the mesh vertices:
m_S = [x, y, z]^T = [[x_i, x_j, x_k], [y_i, y_j, y_k], [z_i, z_j, z_k]] [ξ1, ξ2, ξ3]^T,

where (ξ1, ξ2, ξ3) are the barycentric coordinates for the surface point m_S. As in [12], we assume that the 3 × 4 camera projection matrix P is known and remains constant. This does not mean that the camera is fixed, since the relative motion with respect to the camera can be recovered during the tracking process. Hence, with the projection matrix P, we can compute m_I = [u, v]^T, the 2D projection of the 3D surface point m_S, as follows:

(u, v)^T = ( (P1,1 x + P1,2 y + P1,3 z + P1,4) / (P3,1 x + P3,2 y + P3,3 z + P3,4),  (P2,1 x + P2,2 y + P2,3 z + P2,4) / (P3,1 x + P3,2 y + P3,3 z + P3,4) )^T.    (1)

In order to directly represent the projection by the variables s, an augmented vector a ∈ R^{3n} is defined as below:

a_i = ξ1 P1,1,  a_{i+n} = ξ1 P1,2,  a_{i+2n} = ξ1 P1,3,
a_j = ξ2 P1,1,  a_{j+n} = ξ2 P1,2,  a_{j+2n} = ξ2 P1,3,
a_k = ξ3 P1,1,  a_{k+n} = ξ3 P1,2,  a_{k+2n} = ξ3 P1,3.

The remaining elements of the vector a are all set to zero. Similarly, we define two other vectors b, c ∈ R^{3n} accordingly, and then rewrite Eqn. 1 as follows:

(u, v)^T = ( (a^T s + P1,4) / (c^T s + P3,4),  (b^T s + P2,4) / (c^T s + P3,4) )^T.    (2)
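To make the construction concrete, the following sketch builds the vectors a, b, c for a single correspondence and checks that Eqn. 2 reproduces the direct projection of Eqn. 1; the mesh, projection matrix and barycentric coordinates are made-up toy values, not from the paper.

```python
# Sketch: build the augmented vectors a, b, c of one 3D-to-2D correspondence and
# verify that Eqn. 2 equals the direct perspective projection of Eqn. 1.
import numpy as np

n = 4                                   # number of mesh vertices (toy example)
verts = np.random.rand(n, 3)            # vertex coordinates v_1..v_n
s = np.concatenate([verts[:, 0], verts[:, 1], verts[:, 2]])   # shape vector
P = np.array([[800., 0., 320., 10.],    # a made-up 3x4 projection matrix
              [0., 800., 240., 20.],
              [0., 0., 1., 1000.]])
i, j, k = 0, 1, 2                       # facet vertex indices
xi = np.array([0.2, 0.3, 0.5])          # barycentric coordinates (sum to 1)

def augmented(row):
    """Vector with entries xi_* P[row, 0..2] at positions i, j, k (+n, +2n)."""
    v = np.zeros(3 * n)
    for idx, bc in zip((i, j, k), xi):
        v[idx], v[idx + n], v[idx + 2 * n] = bc * P[row, 0], bc * P[row, 1], bc * P[row, 2]
    return v

a, b, c = augmented(0), augmented(1), augmented(2)

# Eqn. 2: projection expressed directly in terms of the shape vector s.
u = (a @ s + P[0, 3]) / (c @ s + P[2, 3])
v = (b @ s + P[1, 3]) / (c @ s + P[2, 3])

# Eqn. 1: project the surface point m_S, the barycentric combination of the vertices.
m_s = xi @ verts[[i, j, k]]
hom = P @ np.append(m_s, 1.0)
assert np.allclose([u, v], hom[:2] / hom[2])
```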
3.2
Convex Optimization Formulations
General Convex Formulation. Since it is impossible to find a perfect projection that can ideally match all the 3D to 2D correspondences in practice, we let γ denote the upper bound for the reprojection error of each correspondence pair m ∈ M. As a result, for each 2D image observation m_I = [\hat{u}, \hat{v}]^T, the following inequality constraint will be satisfied:

‖( (a^T s + P1,4)/(c^T s + P3,4) − \hat{u},  (b^T s + P2,4)/(c^T s + P3,4) − \hat{v} )‖_p ≤ γ   for m ∈ M,    (3)

where p ≥ 1 is a constant integer and the inequality constraint is known as a p-norm cone constraint [19]. As a result, the 3D deformable surface tracking problem can be formulated as a general convex optimization problem:

min_{γ≥0, s} γ   s.t.   ‖( (a^T s + P1,4)/(c^T s + P3,4) − \hat{u},  (b^T s + P2,4)/(c^T s + P3,4) − \hat{v} )‖_p ≤ γ   for each m ∈ M.
In the above optimization, γ is usually set by the bisection algorithm [12,14]. Hence, the tracking problem can be regarded as a feasibility problem for the above general convex optimization. When p = 2, the p-norm cone constraint above reduces to the well-known SOCP constraint. In the following discussion, we will show that a recently proposed SOCP formulation can be viewed as a special case of the above general convex optimization feasibility problem.
SOCP Formulation. The recent work in [12] formulated the 3D deformable surface tracking problem as an SOCP feasibility problem, which can be viewed as a special case of the above general convex optimization with p = 2:

min_{γ≥0, s} γ   s.t.   ‖( (a^T s + P1,4)/(c^T s + P3,4) − \hat{u},  (b^T s + P2,4)/(c^T s + P3,4) − \hat{v} )‖ ≤ γ   for each m ∈ M,    (4)
where the 2-norm notation ‖·‖_2 is by default written as ‖·‖ without ambiguity. To handle the outliers, we employ the method [20] to remove the set of matches whose reprojection errors equal the minimal γ. In practice, to regularize the deformable surface, an additional constraint is introduced to prevent irrational changes of the edge orientations between two consecutive frames [12]. We assume that the shape s^t at time t is known, and that the orientation of the edge linking the vertices v_i^t and v_j^t will be similar at time t + 1. For each edge in the triangulated mesh, the corresponding constraint can be formulated as follows:

‖v_i^{t+1} − v_j^{t+1} − θ^t_{ij}‖ ≤ λ L_{i,j},    (5)
where L_{i,j} is the original length of the edge, and θ^t_{ij} is the difference of the two vertices v_i^t and v_j^t at time t, rescaled to the original edge length, namely

θ^t_{ij} = L_{i,j} (v_i^t − v_j^t) / ‖v_i^t − v_j^t‖.

Also, λ is a coefficient that controls the regularity of the deformable surface. Again, the above inequality constraint is also an SOCP constraint. As a result, the tracking problem is formulated as an SOCP feasibility problem¹ with a number of SOCP constraints, which can be solved by a bisection algorithm [12,14]. A major problem of the above formulation is that the number of correspondences |M| is often much larger than the number of variables, in order to ensure sufficiently many correct matches, and thus the SOCP formulation has to engage a large number of SOCP constraints. Specifically, if n_e denotes the number of edges in the mesh model, the above SOCP formulation has (|M| + n_e) SOCP constraints in total. Solving the above SOCP optimization directly leads to very high computational cost in practice.
¹ The SOCP optimization problem is solved by SeDuMi: http://sedumi.mcmaster.ca.
QP Formulation. The drawback of the SOCP formulation lies in the large number of SOCP constraints. In this part, we present a QP formulation by removing the SOCP constraints. Specifically, each of the SOCP constraints in Eqn. 4 can be rewritten equivalently as

[(a − \hat{u}c)^T s + d_u]² + [(b − \hat{v}c)^T s + d_v]² ≤ γ (c^T s + d_w)²,

where d_w = P3,4, d_u = P1,4 − \hat{u} d_w, and d_v = P2,4 − \hat{v} d_w. Further, we can introduce a slack variable ε(m) for each m ∈ M and rewrite the inequality constraint as the following equality:

[(a − \hat{u}c)^T s + d_u]² + [(b − \hat{v}c)^T s + d_v]² + ε(m)² = γ (c^T s + d_w)².

In addition, we can replace the SOCP constraints in Eqn. 5 with 1-norm cone constraints. As a result, we can rewrite the original formulation as a min-max optimization formulation:

min_{γ≥0} max_s  Σ_{m∈M} ε(m)²
s.t.  ‖v_i^{t+1} − v_j^{t+1} − θ^t_{ij}‖_1 ≤ λ L_{i,j}   for each edge (v_i, v_j) in the mesh,

in which the objective function can be expressed as:

Σ_{m∈M} ε(m)² = −(s^T H s + 2 g^T s + d),    (6)

where H ∈ R^{3n×3n}, g ∈ R^{3n×1} and d ∈ R are defined as:

H = Σ_{m∈M} [ (a − \hat{u}c)(a − \hat{u}c)^T + (b − \hat{v}c)(b − \hat{v}c)^T − γ c c^T ],
g = Σ_{m∈M} [ d_u (a − \hat{u}c) + d_v (b − \hat{v}c) − γ d_w c ],
d = Σ_{m∈M} [ d_u² + d_v² − γ d_w² ].

It is clear that the above objective function is quadratic. For the tracking task to be an optimization feasibility problem, γ is assumed to be known. Hence, the min-max optimization becomes a standard QP problem. To solve it, we also employ the bisection algorithm and engage an interior-point optimizer².
² http://www.mosek.com/
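As an illustration of how H, g and d can be accumulated for a given γ, consider the sketch below. The γ d_w c term in g follows from expanding the squared terms above; the helper is a plain dense implementation for clarity (in practice the vectors a, b, c are sparse, so sparse accumulation would be used), and the function name is our own.

```python
# Sketch: accumulate the quadratic-form terms H, g, d of Eqn. 6 for a given gamma.
# Each correspondence contributes its augmented vectors a, b, c and its observed
# image location (u_hat, v_hat). Dense numpy is used purely for clarity.
import numpy as np

def accumulate_H_g_d(corrs, P, gamma):
    """corrs: list of (a, b, c, u_hat, v_hat) with a, b, c of length 3n."""
    dim = len(corrs[0][0])
    H = np.zeros((dim, dim))
    g = np.zeros(dim)
    d = 0.0
    dw = P[2, 3]
    for a, b, c, u_hat, v_hat in corrs:
        du = P[0, 3] - u_hat * dw
        dv = P[1, 3] - v_hat * dw
        au = a - u_hat * c                      # (a - u_hat c)
        bv = b - v_hat * c                      # (b - v_hat c)
        H += np.outer(au, au) + np.outer(bv, bv) - gamma * np.outer(c, c)
        g += du * au + dv * bv - gamma * dw * c
        d += du**2 + dv**2 - gamma * dw**2
    return H, g, d
```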
3.3
Unconstrained Quadratic Optimization
The QP formulation still has to include a number of 1-norm cone constraints. To address it, we present an unconstrained quadratic optimization formulation that completely relaxes all constraints. Specifically, instead of engaging the SOCP
constraints in Eqn. 5, we integrate such constraints into the objective function by treating them as a weighted penalty, which converts the complex SOCP constraints into a simple quadratic term. This leads to the following unconstrained minimization formulation:

min_{γ, s}  − Σ_{m∈M} ε(m)² + μ Σ_{k=1}^{n_e} η_k²,    (7)

where μ is a regularization coefficient, and η_k is a variable that constrains the regularity of the k-th edge:

η_k = ‖v_i^{t+1} − v_j^{t+1} − θ^t_{ij}‖.

Moreover, the edge regularization term can be expressed as:

Σ_{k=1}^{n_e} η_k² = s^T Q s − 2 f^T s + ϕ,    (8)

where Q ∈ R^{3n×3n}, f ∈ R^{3n×1} and ϕ ∈ R are defined as:

Q = Σ_{k=1}^{n_e} ( \tilde{a}\tilde{a}^T + \tilde{b}\tilde{b}^T + \tilde{c}\tilde{c}^T ),
f = Σ_{k=1}^{n_e} ( θ_x \tilde{a} + θ_y \tilde{b} + θ_z \tilde{c} ),
ϕ = Σ_{k=1}^{n_e} ‖θ_k‖²,

where θ_k = (θ_kx, θ_ky, θ_kz)^T is used to denote θ^t_{ij}. For the k-th edge with vertices v_i and v_j, three augmented vectors \tilde{a}, \tilde{b}, \tilde{c} ∈ R^{3n} are defined as follows:

\tilde{a}_i = 1, \tilde{a}_j = −1;   \tilde{b}_{i+n} = 1, \tilde{b}_{j+n} = −1;   \tilde{c}_{i+2n} = 1, \tilde{c}_{j+2n} = −1,

and the remaining elements in \tilde{a}, \tilde{b} and \tilde{c} are all set to zero. By substituting Eqn. 6 and Eqn. 8 into Eqn. 7, we can thus obtain the following unconstrained minimization formulation:

min_{γ≥0, s}  s^T (H + μQ) s + 2(g − μf)^T s + d + ϕ.    (9)
Remark. In the above formulation, H, g and d all depend on the upper bound variable γ, which makes it seem like a complicated optimization problem. Fortunately, we find that the upper bound γ plays the same role as the support of the robust estimator in [11,13], which is able to handle large outliers. Therefore, the above problem can be perfectly solved by the progressive finite Newton method proposed in [6,13], which makes the proposed method capable of handling large outliers. Specifically, the upper bound γ starts at a large value, and then is
progressively decreased at a constant rate. For each value of the upper bound γ, we can simply solve the following linear equation:

(H + μQ) s = −g + μf,    (10)
where H and g are computed with the inlier matches only. We employ the result from the previous step to compute the inlier set. Obviously, the square matrix Q is kept constant for the given triangulated mesh, and f only needs to be computed once for each frame. Since both H and Q are sparse matrices, the above linear system can be solved very efficiently by a sparse linear solver. Owing to its high efficiency, the proposed solution enables us to handle very large-scale 3D deformable surface tracking problems with high-resolution meshes.
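A compact sketch of this progressive loop is shown below. It assumes the H, g accumulator sketched earlier, a precomputed sparse Q and dense f, and treats γ as a bound on the squared reprojection error, consistent with the quadratic form above; the initial bound, decay rate and step count are illustrative values, not the paper's.

```python
# Sketch of the progressive loop of Section 3.3: shrink the upper bound gamma,
# recompute the inlier set from the previous estimate, and solve the sparse
# linear system (H + mu*Q)s = -g + mu*f of Eqn. 10.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def reproj_error(m, P, s):
    a, b, c, u_hat, v_hat = m
    w = c @ s + P[2, 3]
    return np.hypot((a @ s + P[0, 3]) / w - u_hat, (b @ s + P[1, 3]) / w - v_hat)

def track_frame(corrs, P, Q, f, s0, mu=5e4, gamma0=1e4, decay=0.5, n_steps=12):
    """corrs: list of (a, b, c, u_hat, v_hat); Q: sparse 3n x 3n; s0: previous shape."""
    s, gamma = s0.copy(), gamma0
    for _ in range(n_steps):
        # Inliers: correspondences whose (squared) reprojection error under the
        # current estimate lies within the current bound gamma.
        inliers = [m for m in corrs if reproj_error(m, P, s) ** 2 <= gamma]
        if not inliers:
            break
        H, g, _ = accumulate_H_g_d(inliers, P, gamma)   # from the earlier sketch
        A = sp.csr_matrix(H) + mu * Q
        s = spsolve(A, -g + mu * f)
        gamma *= decay                                  # progressively tighten the bound
    return s
```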
4
Experimental Results
In this section, we present the details of our experimental implementation and report the empirical results on 3D deformable surface tracking. First, we perform an evaluation on synthetic data for comparison with the convex optimization method. Then, we show results of our proposed approach in various environments, which demonstrate that our method is both efficient and effective for 3D deformable surface tracking.
4.1
Experimental Setup
All the experiments reported in this paper were carried out on an Intel Core2 Duo 2.0GHz notebook computer with 2GB RAM, and a DV camera was used to capture the videos. For simplicity, our QP formulation is denoted as "QP", and the proposed unconstrained quadratic optimization method is denoted as "QO". All the methods are implemented in Matlab, with some routines written in C. Instead of relying on the 2D tracking results as in [12], we directly employ the SIFT method [21] to build the 3D to 2D correspondences by matching the model image and the input image. A planar surface with a template image is used due to its simplicity; a non-planar surface can also be handled by embedding its texture into 2D space. For the SOCP method, we use parameter settings similar to those given in [12]. Specifically, in our experiments, λ is set to 0.1, and the bisection algorithm stops when the maximal reprojection error is below one pixel. For the proposed QO method, the regularization parameter μ is found by grid search and is set to 5 × 10^4 for all experiments. The decay rate for the upper bound γ is set to 0.5. To initialize the 3D tracking, we register the first frame with the 2D nonrigid surface detection method [11], and then estimate the camera projection matrix P from 3D to 2D correspondences. In practice, the tracking usually starts from a surface that is only slightly deformed. This initialization works well in practice, and the method can automatically fit to the correct positions even when the initialization is not very accurate.
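One concrete way to reproduce the correspondence-building step (not necessarily the authors' exact implementation) is OpenCV's SIFT with Lowe's ratio test, sketched below; mapping the matched template keypoints to 3D surface points through the known template-to-mesh parameterisation is a separate step not shown here.

```python
# Sketch: SIFT matching between the template (model) image and the current frame,
# with Lowe's ratio test. OpenCV is an assumption of this sketch.
import cv2

def match_frame(model_img, frame_img, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(model_img, None)
    kp2, des2 = sift.detectAndCompute(frame_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = []
    for m, n in matcher.knnMatch(des1, des2, k=2):
        if m.distance < ratio * n.distance:          # Lowe's ratio test
            pairs.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
    return pairs                                      # (template xy, frame xy) pairs
```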
Fig. 2. Synthetic meshes with 96 vertices for evaluation. The 2D observations are corrupted by noise having a normal distribution with σ = 2. Results for SOCP (black) and QO (blue) are shown with the ground truth (red), at frames 94, 170 and 220.
[Fig. 3 plots: mean distance (mm) and mean reprojection error (pixel) versus frame # for the QO, QP and SOCP methods; panel (a) σ = 1, panel (b) σ = 2.]
Fig. 3. The performance comparison of the QO, QP and SOCP methods on the 350 synthetic meshes with little added noise. The first row shows the average distance between ground truth and recovery results. The second row is the mean reprojection errors.
4.2
Synthetic Data Comparison
We generate a sequence of 350 synthetic meshes by simulating a surface bending process, as shown in Fig. 2. The total size of the mesh is 280 mm × 200 mm. Given a perspective projection matrix P, the 2D correspondences are obtained by projecting the 3D points defined by the piecewise affine mapping, where the barycentric coordinates are randomly selected. We conduct two sets of experiments on the synthetic data. First, we conduct experiments on 2D observations with a small amount of added noise. Second, we evaluate the performance of the SOCP and QO methods on data with large outliers. We set the number of correspondences in each facet to 5 for the first experiment, and 10 for the second one.
Experiment I. In the first experiment, we consider two cases of noisy data, for which the noise is added to all the 2D observations based on a normal distribution with different standard deviations σ = 1, 2. Fig. 3 shows the results of the comparison between the QO, QP and SOCP methods. We can see that the proposed QO method achieves the lowest reprojection errors for both cases. When σ = 1, both QO and SOCP are more effective than the QP formulation in 3D reconstruction performance. Indeed, there is some large jittering for the QP method in 3D reconstruction. This may be due to the L1 norm relaxation of the constraints that may cause ambiguities in depth. Also, the SOCP method slightly outperforms the QO method when the surface is highly deformed, as observed around frame 170 in Fig. 2. When the standard deviation of the noise increases, we found that the proposed QO method achieves better and more steady results than the other two methods. This shows that the QO method is more resilient to noises. Experiment II. In the second experiment, we conduct experiments on the synthetic data partially corrupted by noises (40% and 60% respectively) with standard deviation σ = 10. The experimental results shown in Fig. 4 demonstrate that the proposed QO approach is very robust, and more effective than the SOCP method in dealing with large outliers. Furthermore, we observe that the results achieved by the QO approach are rather smooth. In contrast, large
1.4
1.5 QO SOCP
QO SOCP mean distance (mm)
mean distance (mm)
1.2 1 0.8 0.6 0.4
1
0.5
0.2 0 0
50
100
150 200 frame #
250
300
0 0
350
2.5 2 1.5 1 0.5 0 0
100
150 200 frame #
250
300
350
6 QO SOCP
50
100
150 200 frame #
(a) 40%
250
300
350
mean reprojection error (pixel)
mean reprojection error (pixel)
3
50
QO SOCP
5 4 3 2 1 0 0
50
100
150 200 frame #
250
300
350
(b) 60%
Fig. 4. Comparison of the performance of the QO and SOCP on the synthetic data with large outliers. The first row shows the average distance between ground truth and recovery results. The second row is the mean reprojection errors.
jittering is observed in the results from the SOCP method. In our experiments, the number of inliers for the QO method is larger than that for the SOCP method. Specifically, when the percentage of outliers is 60%, the average inlier rate is around 39% for QO, and below 30% for the SOCP method. Computational Efficiency. The complexity of the proposed QO method is mainly determined by the order of Eqn. 10, which is equal to 3n. Another important factor is the number of inlier matches, which affects the sparseness of the system matrix. This number usually differs from one frame to another. For the synthetic data with 96 vertices shown in Fig. 2, the proposed method runs at about 29 frames per second, i.e., the QO method takes 0.034 seconds per frame. In comparison, the QP and SOCP methods require 10 seconds and 5 seconds per frame, respectively. On average, the proposed QO method is over 140 times faster than the SOCP method.
4.3
Performance on Real Data
Next, we investigate the 3D deformable surface tracking performance on some real deformable surfaces based on a piece of paper, a bag and a piece of cloth. Since only the QO method is efficient enough in practice, we evaluate only the QO method on the real data. To ensure that a sufficient number of correct correspondences are found, all the objects are well-textured. Paper. As shown in Fig. 5, the proposed method is robust in handling large bending deformations. In practice, the whole process runs at around one frame per second on the DV size video sequence with a 187-vertex mesh model. The SIFT feature extraction and matching takes most of the time, whereas the optimization procedure only requires 0.1 seconds for each frame. Fig. 6 shows that a sharply folded surface is retrieved, and the well-marked creases can be accurately recovered.
Fig. 5. We use a piece of paper as the deformable object. The deformable surface is recovered from a 300-frame video. The first row shows images of size 720 × 576 captured by a DV camera, overlaid with the reprojection of the recovered mesh. The second row shows a different projective view of the recovered 3D deformable surface.
Fig. 6. Tracking the deformable surface with two sharp folds in it. The creases are correctly recovered.
Fig. 7. Recovering the deformation of a bag
Fig. 8. Recovering the deformation of a piece of cloth
Bag and Cloth. To evaluate the performance on materials less rigid than a piece of paper, we reconstruct the surfaces of a bag and a piece of cloth with the proposed method. Owing to the high efficiency of the proposed solution, we can handle real-world objects with high-resolution meshes very quickly. Fig. 7 shows the tracking results for the bag surface. The optimization procedure only takes about 0.2 seconds to process a mesh with 289 vertices. Similarly, Fig. 8 shows the tracking results for a piece of cloth. From these results, we can observe that the proposed method is able to recover the deformable surfaces accurately with high-resolution meshes.
5
Discussions and Conclusion
We have proposed a novel solution for 3D deformable surface tracking by formulating the problem as an unconstrained quadratic optimization. Compared with previous convex optimization approaches, the proposed method enjoys several major advantages. Firstly, our method is very efficient, since it does not involve complicated SOCP constraints. Secondly, the proposed approach can handle large outliers and is more resilient to noise. Compared with the previous SOCP method, we have improved both the efficiency and the robustness significantly. Furthermore, unlike the previous SOCP approach, which usually requires a sophisticated SOCP solver, our proposed method can be implemented easily in practice, requiring the solution of only a set of linear equations. Also, the optimization method used in this paper might be applicable to other similar problems solved by SOCP. We have conducted experimental evaluations on objects made of different materials. The experimental results show that the proposed method is significantly more efficient than the previous approach, and is also rather robust to noise. Promising tracking results show that the proposed solution is able to handle large deformations that often occur in real-world applications. Although promising experimental results have validated the efficiency and effectiveness of our methodology, some limitations should be addressed as future work. First of all, the self-occlusion problem has not yet been studied. Also, in some situations we found that some jitter may occur due to a lack of texture information. To address this problem, we may consider employing a visible surface detection algorithm. Furthermore, global bundle adjustment can be incorporated into our optimization framework, which will help handle the jittering problem. Finally, we will consider an efficient GPU-based point-matching algorithm to facilitate real-time 3D deformable surface tracking applications.
Acknowledgments The work was fully supported by the Research Grants Council Earmarked Grant (CUHK4150/07E), and the Singapore MOE AcRF Tier-1 research grant (RG67/07).
References 1. Bartoli, A., Zisserman, A.: Direct estimation of non-rigid registration. In: Proc. British Machine Vision Conference, Kingston (September 2004) 2. Salzmann, M., Pilet, J., Ilic, S., Fua, P.: Surface deformation models for nonrigid 3d shape recovery. IEEE Trans. Pattern Anal. Mach. Intell. 29(8), 1481–1487 (2007) 3. Tsap, L.V., Goldgof, D.B., Sarkar, S.: Nonrigid motion analysis based on dynamic refinement of finite element models. IEEE Trans. on Pattern Analysis and Machine Intelligence 22(5), 526–543 (2000) 4. White, R., Forsyth, D.A.: Combining cues: Shape from shading and texture. In: Proc. Conf. Computer Vision and Pattern Recognition, pp. 1809–1816 (2006)
5. Zhu, J., Hoi, S.C., Lyu, M.R.: Real-time non-rigid shape recovery via active appearance models for augmented reality. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3951, pp. 186–197. Springer, Heidelberg (2006) 6. Zhu, J., Lyu, M.R., Huang, T.S.: A fast 2d shape recovery approach by fusing features and appearance. IEEE Trans. Pattern Anal. Mach. Intell (to appear, 2008) 7. Chui, H., Rangarajan, A.: A new point matching algorithm for non-rigid registration. Computer Vision and Image Understanding 89(2-3), 114–141 (2003) 8. Xiao, J., Baker, S., Matthews, I., Kanade, T.: Real-time combined 2d+3d active appearance models. In: Proc. Conf. Computer Vision and Pattern Recognition, vol. 2, pp. 535–542 (2004) 9. Fua, P., Leclerc, Y.: Object-centered surface reconstruction: Combining multiimage stereo and shading. Int’l J. Computer Vision 16(1), 35–56 (1995) 10. Ilic, S., Fua, P.: Implicit meshes for surface reconstruction. IEEE Trans. on Pattern Analysis and Machine Intelligence 28(2), 328–333 (2006) 11. Pilet, J., Lepetit, V., Fua, P.: Fast non-rigid surface detection, registration, and realistic augmentation. Int’l J. Computer Vision (2007) 12. Salzmann, M., Hartley, R., Fua, P.: Convex optimization for deformable surface 3-d tracking. In: Proc. Int’l Conf. Computer Vision (October 2007) 13. Zhu, J., Lyu, M.R.: Progressive finite newton approach to real-time nonrigid surface detection. In: Proc. Conf. Computer Vision and Pattern Recognition, pp. 1–8 (2007) 14. Kahl, F.: Multiple view geometry and the l∞ -norm. In: ICCV, pp. 1002–1009 (2005) 15. Ke, Q., Kanade, T.: Quasiconvex optimization for robust geometric reconstruction. IEEE Trans. Pattern Anal. Mach. Intell. 29(10), 1834–1847 (2007) 16. Bregler, C., Hertzmann, A., Biermann, H.: Recovering non-rigid 3d shape from image streams. In: Conf. Computer Vision and Pattern Recognition, pp. 690–696 (2000) 17. Salzmann, M., Lepetit, V., Fua, P.: Deformable surface tracking ambiguities. In: Proc. Conf. Computer Vision and Pattern Recognition (2007) 18. Blanz, V., Vetter, T.: Face recognition based on fitting a 3d morphable model. IEEE Trans. on Pattern Analysis and Machine Intelligence 25(9), 1063–1074 (2003) 19. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004) 20. Sim, K., Hartley, R.: Removing outliers using the L∞ norm. In: Proc. Conf. Computer Vision and Pattern Recognition, pp. 485–494 (2006) 21. Lowe, D.G.: Distinctive Image Features from Scale-Invariant Keypoints. Int’l J. Computer Vision 60(2), 91–110 (2004)
Belief Propagation with Directional Statistics for Solving the Shape-from-Shading Problem
Tom S.F. Haines and Richard C. Wilson
The University of York, Heslington, YO10 5DD, U.K.
Abstract. The Shape-from-Shading [SfS] problem infers shape from reflected light, collected using a camera at a single point in space only. Reflected light alone does not provide sufficient constraint and extra information is required; typically a smoothness assumption is made. A surface with Lambertian reflectance lit by a single infinitely distant light source is also typical. We solve this typical SfS problem using belief propagation to marginalise a probabilistic model. The key novel step is in using a directional probability distribution, the Fisher-Bingham distribution. This produces a fast and relatively simple algorithm that does an effective job of both extracting details and being robust to noise. Quantitative comparisons with past algorithms are provided using both synthetic and real data.
1
Introduction
The classical problem of Shape-from-Shading [SfS] uses irradiance captured by a photo to calculate the shape of a scene. A known or inferred reflectance function provides the relationship between irradiance and surface orientation. Surface orientation may then be integrated to obtain a depth map. Horn [1] introduced this problem with the assumptions of Lambertian reflectance, orthographic projection, constant known albedo, a smooth surface, no surface inter-reflectance and a single infinitely distant light source in a known relation with the photo. This constrained scenario has been tackled many times since [2,3,4,5,6,7, to cite a few], and will again be the focus of this work. Zhang et al. [8] surveyed the area in 1999, concluding that Lee and Kuo [4] was the then state of the art. Lee and Kuo iteratively linearised the reflectance map and solved the resulting linear equation using the multigrid method. More recent methods include Worthington and Hancock [5], which iterated between smoothing a normal map and correcting it to satisfy the reflectance information; Prados et al. [6], which solved the problem with viscosity solutions; and Potetz [7], which used belief propagation. This last work by Potetz is particularly relevant due to it also using belief propagation, though in all further details it differs. Belief propagation estimates the marginals of a multivariate probability distribution, often represented by a graphical model. Potetz makes use of two variables per pixel, δx/δz and δy/δz, and uses various factor nodes to provide the reflectance information, smoothness assumption and integrability constraint.
Whilst this model can be implemented simply with discrete belief propagation, it would never converge and would require a large number of labels; instead, advanced continuous methods are used. The following three sections, 2 through to 4, cover the component details, starting with the formulation, then belief propagation and finally directional statistics. Section 5 brings it all together into a cohesive whole, and is followed by section 6 which solves a specific problem. Following these sections we give results in section 7 and conclusions in the final section.
2 Formulation
Using the previously given assumptions of Lambertian reflectance, constant known albedo, orthographic projection, an infinitely distant light source and no inter-reflection, the irradiance at each pixel in the input image is given by
I_{x,y} = A (\hat{l} \cdot \hat{n}_{x,y})    (1)
where I_{x,y} is the irradiance provided by the input image, A is the albedo and \hat{l} \in R^3, |\hat{l}| = 1 is the direction to the infinitely distant light source; these are both provided by the user. \hat{n}_{x,y} \in R^3, |\hat{n}_{x,y}| = 1 is the normal map to be inferred as the algorithm's output. The normal map can be integrated to obtain a depth map, a step with which we are not concerned. By substituting the dot product with the cosine of the angle between the two vectors we get
I_{x,y} / A = \cos \theta_{x,y}    (2)
where θ is therefore the angle of a cone around ˆl which the normal is constrained to [5]. This leaves one degree of freedom per pixel that is not constrained by the available information. A smoothness assumption provides the extra constraint. Directional statistics is the field of statistics on directions, such as surface normals. Using a directional distribution allows the representation of surface orientation with a single variable, rather than the two used in Potetz [7] and many others. We propose a new SfS algorithm using such distributions within a belief propagation framework. This leads to a belief propagation formulation not dissimilar to Gaussian belief propagation [9] in its simplicity and speed.
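As a small illustration of this cone constraint, the following Python sketch (not from the paper; the helper names are ours) computes θ from an irradiance value and checks that a candidate normal satisfies Eqs. (1)–(2):

```python
import numpy as np

# Minimal sketch of the cone constraint of Eq. (2), assuming a unit light
# direction l_hat and scalar albedo A (both user supplied, as in the text).
def cone_angle(I, A):
    """Angle theta of the cone of admissible normals for irradiance I."""
    return np.arccos(np.clip(I / A, -1.0, 1.0))

def on_cone(n_hat, l_hat, I, A, tol=1e-3):
    """Check whether a unit normal satisfies I = A * (l_hat . n_hat)."""
    return abs(float(np.dot(l_hat, n_hat)) - I / A) < tol

# Example: light along the viewing direction, 60 degree cone.
l_hat = np.array([0.0, 0.0, 1.0])
theta = cone_angle(I=0.5, A=1.0)                       # = pi/3
n_hat = np.array([np.sin(theta), 0.0, np.cos(theta)])  # any normal on the cone
assert on_cone(n_hat, l_hat, I=0.5, A=1.0)
```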
3 Belief Propagation
Loopy sum-product belief propagation is a message passing algorithm for marginalising an equation of the form
P(x) = \prod_{v \in V} \psi_v(y_v)    (3)
where x is a set of random variables and ∀v, yv ⊂ x. Such an equation can be represented by a graphical model where each variable is a node and nodes that
782
T.S.F. Haines and R.C. Wilson
interact via ψ functions are linked. In this case the random variables are directions, represented by normalised vectors. Message passing then occurs within this model, with messages passed along the links between the nodes. As the variables are directions the messages are probability distributions on directions. The method uses belief propagation to obtain the maximum a posteriori estimate of a pairwise Markov random field where each node represents the orientation of the surface at a pixel in the image. The message passed from node p to node q at iteration t is
m^t_{p \to q}(\hat{x}_q) = \int_{\hat{x}_p} \psi_{pq}(\hat{x}_p, \hat{x}_q) \, \psi_p(\hat{x}_p) \prod_{u \in N \setminus q} m^{t-1}_{u \to p}(\hat{x}_p) \, d\hat{x}_p    (4)
where \psi_{pq}(\hat{x}_p, \hat{x}_q) is the compatibility between adjacent nodes, \psi_p(\hat{x}_p) is the prior on each node's orientation and N is the 4-way neighbourhood of each node. Once message passing has iterated sufficiently for convergence to occur the belief at each node is
b_p(\hat{x}_p) = \psi_p(\hat{x}_p) \prod_{u \in N} m^{t-1}_{u \to p}(\hat{x}_p)    (5)
From b_p(\hat{x}_p) the most probable direction is selected as output.
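To make the message-passing structure concrete, here is a schematic Python sketch of Eqs. (4) and (5) on a 4-connected grid. The prior, multiply and convolve_with_pairwise callables are placeholders we introduce for illustration; in this paper they would operate on FB8 distributions as described in Sections 4–6:

```python
# Schematic sum-product loop over a 4-connected grid, following Eqs. (4)-(5).
def neighbours(p, H, W):
    y, x = p
    for q in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
        if 0 <= q[0] < H and 0 <= q[1] < W:
            yield q

def belief_propagation(prior, multiply, convolve_with_pairwise, H, W, n_iter=30):
    # messages[(p, q)] holds the message from node p to node q
    nodes = [(y, x) for y in range(H) for x in range(W)]
    messages = {(p, q): None for p in nodes for q in neighbours(p, H, W)}
    for _ in range(n_iter):
        new = {}
        for (p, q) in messages:
            t = prior[p]                                   # psi_p
            for u in neighbours(p, H, W):
                if u != q and messages[(u, p)] is not None:
                    t = multiply(t, messages[(u, p)])      # product of incoming messages
            new[(p, q)] = convolve_with_pairwise(t)        # integrate against psi_pq
        messages = new
    belief = {}                                            # Eq. (5)
    for p in nodes:
        b = prior[p]
        for u in neighbours(p, H, W):
            b = multiply(b, messages[(u, p)])
        belief[p] = b
    return belief
```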
4 Directional Statistics
The Fisher distribution, using proportionality rather than a normalising constant, is given by
P_F(\hat{x}; u) \propto \exp(u^T \hat{x})    (6)
where \hat{x}, u \in R^3 and |\hat{x}| = 1. Similarly, the Bingham distribution may be defined as
P_B(\hat{x}; A) \propto \exp(\hat{x}^T A \hat{x})    (7)
where A = A^T. By multiplying the above we get the 8 parameter Fisher-Bingham [10] [FB8] distribution
P_{FB8}(\hat{x}; u, A) \propto \exp(u^T \hat{x} + \hat{x}^T A \hat{x})    (8)
All three of these distributions have the advantage that they can be multiplied together without introducing further variables, which is critical in a belief propagation framework. We may decompose the FB8 distribution. As A is symmetric we may apply the eigen-decomposition to obtain A = B D B^T, where B is orthogonal and D diagonal. This allows us to write
P_{FB8}(\hat{x}; u, A) \propto \exp(v^T \hat{y} + \hat{y}^T D \hat{y})    (9)
where v = B^T u and \hat{y} = B^T \hat{x}. As |\hat{y}| = 1 we may offset D by an arbitrary multiple of the identity matrix; this allows any given entry to be set to 0. We can therefore consider it the case that D = Diag(α, β, 0), with α > 0 and β > 0, so that
P_{FB8}(\hat{x}; u, A) \propto \exp(v^T \hat{y} + α \hat{y}_x^2 + β \hat{y}_y^2)    (10)
For convenience we may represent the FB8 distribution as
\exp(u^T \hat{x} + \hat{x}^T A \hat{x}) = Ω[u, A]    (11)
Using this notation multiplication is Ω[u, A]Ω[v, B] = Ω[u + v, A + B]
(12)
Various distributions may be represented by the Fisher-Bingham distribution; of particular use is the Bingham-Mardia distribution [11]
\exp(-k (\hat{u}^T \hat{x} - \cos θ)^2) = Ω[2 k \cos(θ) \hat{u}, -k \hat{u} \hat{u}^T]    (13)
where \hat{u} is the direction of the axis of a cone and θ the angle of that cone. This distribution has a small circle as its maximum, which allows the irradiance information (Eq. 2) to be expressed as a FB8 distribution.
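A minimal Python sketch of this parameterisation (our own helper class, not code from the paper): an FB8 is stored by its natural parameters, multiplication (Eq. 12) is parameter addition, and Eq. (13) builds the cone distribution used for the irradiance term:

```python
import numpy as np

# FB8 distribution stored by its natural parameters, Omega[u, A] in Eq. (11).
class FB8:
    def __init__(self, u, A):
        self.u = np.asarray(u, dtype=float)   # Fisher part, 3-vector
        self.A = np.asarray(A, dtype=float)   # Bingham part, symmetric 3x3

    def __mul__(self, other):                 # Eq. (12): Omega[u,A]*Omega[v,B]
        return FB8(self.u + other.u, self.A + other.A)

    def log_density(self, x_hat):             # up to the normalising constant
        x_hat = np.asarray(x_hat, dtype=float)
        return float(self.u @ x_hat + x_hat @ self.A @ x_hat)

def bingham_mardia(u_hat, theta, k):
    """exp(-k (u_hat.x - cos(theta))^2) as an FB8, Eq. (13)."""
    u_hat = np.asarray(u_hat, dtype=float)
    return FB8(2.0 * k * np.cos(theta) * u_hat, -k * np.outer(u_hat, u_hat))
```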
5 Method
We construct a graphical model, specifically a pairwise Markov random field. Each node of the model is a random variable that represents an unknown normal on the surface. Belief propagation, as described in section 3, is then used to determine the marginal distribution for each node. To define the distribution to be marginalised two sources are used: the irradiance information (Eq. 2) and a smoothness assumption. We model the smoothing assumption on the premise that adjacent points on the surface will be more likely to have a small angular difference than a large angular difference. We can express this idea by setting
\psi_{pq}(\hat{x}_p, \hat{x}_q) = \exp(k (\hat{x}_p^T \hat{x}_q))    (14)
where \psi_{pq}(\hat{x}_p, \hat{x}_q) is from the message passing equation (Eq. 4). This is a Fisher distribution with concentration k. Using FB8 for the messages and dropping equation 14 into equation 4 we have
m^t_{p \to q}(\hat{x}_q) = \int_{S^2} \exp(k (\hat{x}_p^T \hat{x}_q)) \, t(\hat{x}_p) \, d\hat{x}_p    (15)
t(\hat{x}_p) = \psi_p(\hat{x}_p) \prod_{u \in N \setminus q} m^{t-1}_{u \to p}(\hat{x}_p)    (16)
Message passing therefore consists of two steps: calculating t(\hat{x}_p) by multiplying FB8 distributions together using equation 12, followed by convolution of the resulting FB8 distribution by a Fisher distribution to get m^t_{p \to q}(\hat{x}_q). The next section documents a method for doing the convolution. For each node we have an irradiance value. Using equations 2 and 13 we can define a distribution
Ω[2 k (I_{x,y} / A) \hat{l}, -k \hat{l} \hat{l}^T]    (17)
In principle ψ_p(\hat{x}_p), from equation 16, can be set to this Bingham-Mardia distribution to complete the model to be marginalised. This fails however due to the concave/convex ambiguity [12]. The formulation presented so far will converge to a bi-modal distribution at each node, with the modes corresponding to the concave and convex interpretations. A bias towards one of the two interpretations is required, to avoid arbitrarily selecting between them on a per-pixel basis. Taking the gradient vector at each node and rotating it onto the irradiance-defined cone to get \hat{g} provides a suitable bias direction. This is identical to the initialisation used by Worthington & Hancock [5]. We then multiply equation 17 by a Fisher distribution using this direction vector with concentration h to get
\psi_p(\hat{x}_p) = \exp((h \hat{g} + 2 k I_{x,y} A^{-1} \hat{l})^T \hat{x}_p + \hat{x}_p^T (-k \hat{l} \hat{l}^T) \hat{x}_p)    (18)
Using the gradient vector unmodified will produce a concave bias, whilst negating it will produce a convex bias. The pseudo-gradient defined in appendix A is used. Once belief propagation has converged, equation 5 can be used to extract a final FB8 distribution for each node. For output we require directions rather than distributions; a method for finding the maximal mode of the FB8 distribution is given in appendix B. To optimise the method a hierarchy is constructed and belief propagation is applied at each level. Each level's messages are initialised with the previous, lower resolution, level's messages. This results in fewer message passes being required for overall convergence [13].
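The per-pixel prior of Eq. (18) is then just a particular choice of FB8 natural parameters. A hedged sketch, pairing with the FB8 helper above (the function name and argument conventions are ours):

```python
import numpy as np

# Sketch of the per-pixel prior of Eq. (18): the Bingham-Mardia cone term of
# Eq. (17) multiplied by a Fisher bias of concentration h along g_hat.
def node_prior(I, A, l_hat, g_hat, k, h):
    l_hat = np.asarray(l_hat, dtype=float)
    g_hat = np.asarray(g_hat, dtype=float)
    u = h * g_hat + 2.0 * k * (I / A) * l_hat   # Fisher (linear) part
    A_mat = -k * np.outer(l_hat, l_hat)         # Bingham (quadratic) part
    return u, A_mat                             # natural parameters of an FB8
```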
6 Message Passing
As indicated by equation 15, when passing messages we have to convolve a FB8 distribution by a Fisher distribution. Doing this directly is not tractable, so we propose a novel three step procedure to solve this problem:
1. Convert the FB8 distribution into a sum of Fisher distributions.
2. Convolve the individual Fisher distributions.
3. Refit a FB8 distribution to the resulting mixture of Fisher distributions.
All three steps involve approximation; in practice this proves not to be a problem.
Step 1. We approximate the Fisher-Bingham distribution as a sum of Fisher distributions. Starting with equation 10 and rewriting the right-hand side
\exp(v^T \hat{y}) \exp(α \hat{y}_x^2 + β \hat{y}_y^2)    (19)
we may substitute an approximation of the right-hand multiple to get
\exp(v^T \hat{y}) \int_0^{2π} \exp(m \hat{y}_x \cos(θ) + n \hat{y}_y \sin(θ)) \, dθ    (20)
In practice a small number of Fisher distributions will be sampled, to get
\exp(v^T \hat{y}) \sum_i \exp([m \cos(θ_i), n \sin(θ_i), 0] \hat{y})    (21)
which may be re-written as a sum of Fisher distributions¹
\sum_i \exp((v + [m \cos(θ_i), n \sin(θ_i), 0]^T)^T \hat{y})    (22)
m and n need to be determined. To explicitly write the approximation
\exp(α \hat{y}_x^2 + β \hat{y}_y^2) \propto \int_0^{2π} \exp(m \hat{y}_x \cos(θ) + n \hat{y}_y \sin(θ)) \, dθ    (23)
\exp(α \hat{y}_x^2 + β \hat{y}_y^2) \propto 2π I_0(\sqrt{m^2 \hat{y}_x^2 + n^2 \hat{y}_y^2})    (24)
where I0 is the modified Bessel function of the first kind, order 0. Whilst similar2 the two sides of equation 24 are different, and so a match is not possible, ˆ ; [±1, 0, 0]T , [0, ±1, 0]T and [0, 0, ±1]T . however, we may consider six values of y These vectors are the minimas and maximas of the Bingham distribution. Using [0, 0, ±1]T we get exp(0) ∝ 2πI0 (0) ≡ 1 ∝ 2π (25) and, because of normalisation, can write √ exp(α) = I0 ( m2 )
√ exp(β) = I0 ( n2 )
(26)
which can be rearranged to get suitable values of m and n m = I−1 0 (exp(α))
n = I−1 0 (exp(β))
(27)
This approximation leaves the minimas and maximas in the same locations with the same relative values. Step 2. Mardia and Jupp [14, pp. 44] give an approximation of the convolution of two Von-Mises distributions. (i.e. distributions on the circle.) If we represent the n-dimensional von-Mises-Fisher distribution as ˆ ) = ψn [w, ˆ k) ∝ exp(k w ˆTx ˆ k] x; w, PvMF (ˆ
(28)
n
ˆ = 1 then the approximation given is ˆ, w ˆ ∈ R and |ˆ x| = |w| where x ˆ 1 , k1 ] ∗ ψ2 [w ˆ 2 , k2 ] ≈ ψ2 [w ˆ1 + w ˆ 2 , A−1 ψ2 [w 2 (A2 (k1 )A2 (k2 ))]
(29)
Ip/2 (k) Ip/2−1 (k) .
where Ap (k) = This may easily be extended to the Fisher distribution with no angular offset between the distributions ˆ k1 ] ∗ ψ3 [w, ˆ k2 ] ≈ ψ3 [w, ˆ A−1 ψ3 [w, 3 (A3 (k1 )A3 (k2 ))]
(30)
As a computational bonus, A3 (k) may be simplified A3 (k) = 1
2
1 I1.5 (k) = coth(k) − I0.5 (k) k
(31)
Note that they are written here without normalisation terms; to maintain this under the usual mixture model each Fisher distribution has to be weighted by its inverse normalisation term. Written as power series they are identical except for the denominators of the terms, for which the Bessel functions are the square of the exponentials.
786
T.S.F. Haines and R.C. Wilson
Step 3. To derive a Fisher-Bingham distribution from the convolved sum of Fisher distributions we first need the rotational component of the Bingham distribution, which we calculate with principal component analysis. Wi ui ¯ = i (32) m i Wi Wi is the normalisation constant of the indexed Fisher distribution, u ¯i is its direction vector multiplied by its concentration parameter. ⎡ ⎤ ¯ W0 (u0 − m) ⎢ ¯ ⎥ X = ⎣ W1 (u1 − m) (33) ⎦ .. . XT X = RERT
(34)
E is the diagonal matrix of eigenvalues. R is then the rotational component of the Bingham distribution. Given six directions and their associated density function values we may fit the rest of the parameters to get a distribution with matching ratios between the selected directions. Given six instances of3 ˆ+x ˆ T Dˆ exp(vT x x) = p
(35)
ˆ where D is diagonal, we can apply the natural logarithm for a known p and x to both sides to get ˆ+x ˆ T Dˆ vT x x = ln(p) (36) This is a linear set of equations, which can be solved using standard techniques to get v and D. The final FB8 distribution is then proportional to ˆ+x ˆ T RDRT x ˆ) exp((Rv)T x
(37)
The six directions have to be carefully selected to produce a reasonable approximation, as only these sampled directions will be fitted and the convolved distribution can differ greatly from a Fisher-Bingham distribution. The selection strategy used is based on the observation that with no Fisher component the optimal selection is [±1, 0, 0]T , [0, ±1, 0]T and [0, 0, ±1]T (There is also a computational advantage of this selection as they are linearly separable.). Given a Fisher component we may divide through the mixture of Fisher distributions to leave only a (supposed) Bingham component; the estimation procedure will then estimate another Fisher component as well as the Bingham component. This leads to an iterative scheme, where the Fisher component is initialised with the mean of the mixture of Fisher distributions and updated after each iteration. In practise convergence happens after only two iterations. It should be noted that this approach is the inverse operation of the initial conversion to a mixture of Fisher distributions, i.e. it has error precisely opposite the error introduced by step 1, ignoring the use of a finite number of Fisher distributions. 3
It should be noted that equality rather than proportionality is used here. This is irrelevant as multiplicative constants have no effect.
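The following Python sketch collects the numerical pieces of this section — inverting I_0 for Eq. (27), matching concentrations through A_3 for Eq. (30), and solving the linear fit of Eq. (36) — using SciPy root finding and least squares. It is schematic: the direction sampling and selection strategies described in the text are omitted, and all names are ours:

```python
import numpy as np
from scipy.special import i0
from scipy.optimize import brentq

def inv_i0(y):
    """m = I0^{-1}(y) for y >= 1, cf. Eq. (27)."""
    return brentq(lambda m: i0(m) - y, 0.0, 500.0)

def A3(k):
    """A3(k) = coth(k) - 1/k, Eq. (31)."""
    return 1.0 / np.tanh(k) - 1.0 / k

def convolve_fisher_concentration(k1, k2):
    """k such that A3(k) = A3(k1) * A3(k2), cf. Eq. (30); assumes moderate k."""
    target = A3(k1) * A3(k2)
    return brentq(lambda k: A3(k) - target, 1e-6, 1e4)

def fit_v_D(directions, log_densities):
    """Solve v^T x + x^T D x = log p for v and diagonal D (Eq. 36)."""
    rows = [np.concatenate([x, x ** 2]) for x in directions]   # 6 unknowns
    sol, *_ = np.linalg.lstsq(np.asarray(rows),
                              np.asarray(log_densities), rcond=None)
    return sol[:3], np.diag(sol[3:])
```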
7 Results and Analysis
We compare the presented algorithm to two others, Lee & Kuo [4] and Worthington & Hancock [5], using both synthetic and real data. Figure 1 gives the four synthetic inputs used, figure 2 gives the results and ground truth for just one of the four inputs. Qualitatively, looking at figure 2, Lee & Kuo is simply too blurred to be competitive. Worthington & Hancock shows considerably more detail, but suffers from assorted artifacts and is still blurred. The presented algorithm has sharp details and less blurring compared to the others. Figure 3 gives the results of a quantitative analysis of the synthetic results. Each table gives the results for one of the four inputs, with each row dedicated to an algorithm. The columns give the percentage of pixels in each image that are beneath an error threshold, the error being the angle between the ground truth and estimated normals. Sticking to the 90◦ images where the light is at [0, 0, 1]T Lee & Kuo consistently makes fewer large mistakes, this can be put down to its excessive blurring. Worthington & Hancock appears to have an advantage at the very lower ends of the scale, this is presumably because it perfectly matches
Fig. 1. Synthetic inputs, derived from the set used by Zhang et al. [8]. From left to right they are referred to as Vase 90°, Vase 45°, Mozart 90° and Mozart 45°. The light source direction vector for the 90° images is [0, 0, 1]^T, whilst for the 45° images it is [-\sqrt{2}, 0, \sqrt{2}]^T.
Fig. 2. Results for the synthetic Mozart 90◦ input. From left to right they are Lee & Kuo [4], Worthington & Hancock [5], the presented algorithm and then finally ground truth. They represent normal maps, with x → red, y → green and z → blue to represent the surface normal at each pixel. Red and Green are adjusted to cover the whole [−1, 1] range, blue is left covering [0, 1].
788
T.S.F. Haines and R.C. Wilson
Vase 90°                <1°   <2°   <3°   <4°   <5°   <10°  <15°  <20°  <25°
Lee & Kuo               0.8   3.3   7.0   12.0  21.6  75.7  97.7  100.0 100.0
Worthington & Hancock   6.8   13.3  17.8  22.2  26.7  46.5  59.1  67.9  75.3
Presented Algorithm     7.8   13.4  22.5  34.5  39.0  55.9  68.1  76.7  83.9

Vase 45°                <1°   <2°   <3°   <4°   <5°   <10°  <15°  <20°  <25°
Lee & Kuo               0.9   3.9   7.4   11.4  15.7  47.0  73.8  85.1  88.8
Worthington & Hancock   6.6   13.4  17.4  20.4  24.1  37.3  49.1  57.9  65.1
Presented Algorithm     0.3   4.4   10.3  18.4  28.4  44.5  58.0  68.4  76.7

Mozart 90°              <1°   <2°   <3°   <4°   <5°   <10°  <15°  <20°  <25°
Lee & Kuo               0.2   0.7   1.5   2.6   4.1   18.3  36.1  52.5  64.9
Worthington & Hancock   2.7   6.4   10.4  14.3  18.4  34.4  47.2  56.3  63.9
Presented Algorithm     0.9   3.7   8.5   15.4  21.7  42.2  53.5  61.9  68.5

Mozart 45°              <1°   <2°   <3°   <4°   <5°   <10°  <15°  <20°  <25°
Lee & Kuo               0.2   0.7   1.5   2.5   3.8   16.1  35.0  54.7  67.2
Worthington & Hancock   2.4   5.4   8.0   10.4  13.4  25.0  33.4  40.5  46.8
Presented Algorithm     0.2   0.8   2.1   4.5   7.9   21.9  33.3  43.1  50.4
Fig. 3. Synthetic results. Each grid gives results for the input named in the top left. Each row gives results for a specific algorithm. Each column gives the percentage of pixels within a given error bound, i.e. the < 1◦ column gives the percentage of pixels where the estimated surface orientation is within 1 degree of the ground truth. The percentage is only for pixels where ground truth is provided.
the irradiance information, unlike the others. The presented approach is always ahead for the Vase 90° input. For the Mozart 90° input our approach consistently exceeds Lee & Kuo but does not do so well at getting a high percentage of spot-on estimates as Worthington & Hancock. However, for error thresholds of 4° and larger the presented algorithm is again better. Moving to the 45° inputs, where the light source direction vector is [-\sqrt{2}, 0, \sqrt{2}]^T, things do not go so well. For Vase 45° it gets the highest percentage of pixels with an error less than 5°, but above that is exceeded by Lee & Kuo and below that beaten by Worthington & Hancock. For Mozart 45° Worthington & Hancock is the clear victor. The presented algorithm doing poorly as the light source moves away from [0, 0, 1]^T can be put down to the bias introduced to handle the concave/convex ambiguity [12]. The gradient information used for the bias is necessary to avoid a bi-modal result, but also pulls the solution away from the correct answer, this effect being more noticeable as the light deviates away from being at the camera.
Fig. 4. Input and results for the head. From left to right they are input, Lee & Kuo [4] and Worthington & Hancock [5] on the first line and the presented algorithm and then ground truth on the second.
Figure 4 gives a real world input and the results as 3D renders of the integrated output; figure 5 gives the same quantitative analysis used for the synthetic results. This input was captured in a dark room using a camera with a calibrated response curve, and the shape was determined with a head scanner, with the camera calibrated to the scanner's coordinate system so that a ground truth normal map could be produced. Looking at figure 5 Worthington & Hancock is quantitatively ahead, but looking at the actual output it is more blob than face, though some features are recognisable. To use an analogy, an art restorer painting over a canvas with constant colour, knowing that the original artist must have used that colour in some of the areas covered, can get the most matches if the competition is terrible, despite producing a blurred result. Sticking to a qualitative analysis, the presented algorithm is clearly not perfect, but it gives sharper results, with features such as the mouth, eye sockets and hair that are superior to the competition. For the head image the run time is over 12 hours for Lee & Kuo, 54 minutes for Worthington and Hancock and 9.5 minutes for the presented algorithm on a 2 GHz Athlon.
Head                    <1°   <2°   <3°   <4°   <5°   <10°  <15°  <20°  <25°
Lee & Kuo               0.1   0.4   0.8   1.4   2.2   8.6   19.7  32.1  43.5
Worthington & Hancock   0.1   0.6   1.4   2.6   4.0   13.7  23.4  32.6  41.0
Presented Algorithm     0.1   0.5   1.1   1.9   3.0   11.5  21.4  30.9  39.6
Fig. 5. Results for head input. See figure 3 for explanation
8 Conclusion
We have presented a new algorithm for solving the classical shape-from-shading problem, and demonstrated its competitiveness with previously published algorithms. The use of belief propagation with FB8 distributions is in itself new, and a method for the convolution of a FB8 distribution by a Fisher distribution has been devised. The algorithm does suffer from a noticeable flaw in that overcoming the convex/concave problem biases the result, making the algorithm weak in the presence of oblique lighting. An alternative solution to the current bias is an obvious area for future research.
References 1. Horn, B.K.P.: Shape From Shading: A Method For Obtaining The Shape Of A Smooth Opaque Object From One View. PhD thesis, Massachusetts Institute of Technology (1970) 2. Brooks, M.J., Horn, B.K.P.: Shape and source from shading. Artificial Intelligence, 932–936 (1985) 3. Zheng, Q., Chellappa, R.: Estimation of illuminant direction, albedo, and shape from shading. Pattern Analysis and Machine Intelligence 13(7), 680–702 (1991) 4. Lee, K.M., Kuo, C.C.J.: Shape from shading with perspective projection. CVGIP: Image Understanding 59(2), 202–212 (1994) 5. Worthington, P.L., Hancock, E.R.: New constraints on data-closeness and needle map consistency for shape-from-shading. Pattern Analysis and Machine Intelligence 21(12), 1250–1267 (1999) 6. Prados, E., Camilli, F., Faugeras, O.: A unifying and rigorous shape from shading method adapted to realistic data and applications. Mathematical Imaging and Vision 25(3), 307–328 (2006) 7. Potetz, B.: Efficient belief propagation for vision using linear constraint nodes. Computer Vision and Pattern Recognition, 1–8 (2007) 8. Zhang, R., Tsai, P.S., Cryer, J.E., Shah, M.: Shape from shading: A survey. Pattern Analysis and Machine Intelligence 21(8), 690–706 (1999) 9. Haines, T.S.F., Wilson, R.C.: Integrating stereo with shape-from-shading derived orientation information. In: British Machine Vision Conference, vol. 2, pp. 910–919 (2007) 10. Kent, J.T.: The Fisher-Bingham distribution on the sphere. Royal Statistical Society, Series B (Methodological) 44(1), 71–80 (1982) 11. Bingham, C., Mardia, K.V.: A small circle distribution on the sphere. Biometrika 65(2), 379–389 (1978) 12. Ramachandran, V.S.: Perception of shape from shading. Nature 331, 163–165 (1988) 13. Felzenszwalb, P.F., Huttenlocher, D.P.: Efficient belief propagation for early vision. Computer Vision and Pattern Recognition 1, 261–268 (2004) 14. Mardia, K.V., Jupp, P.E.: Directional Statistics. Wiley, Chichester (2000) 15. Hart, J.C.: Distance to an ellipsoid. Graphics Gems IV, 113–119 (1994)
A Pseudo Gradient
A diffusion method is used to calculate an estimate of the gradient direction. This method is robust in the presence of noise and lacks the distortion of methods such as the Sobel operator. It is described here in terms of a random walk.
All walks start at the pixel for which the calculation is being applied and are of fixed length. Each walk contributes a vector going from the walk's start point to the walk's end point; the mean of these vectors is the output gradient direction. At every step the walk moves to one of the four adjacent pixels, the pixels being weighted by α + βI(x, y), where I(x, y) is the irradiance of the pixel and α and β are parameters. This creates a walk that tends towards brighter areas of the image, the mean being a robust gradient direction.
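A possible Python sketch of this diffusion estimate (our reading of the weighting as α + βI(x, y) should be treated as an assumption, and all names are ours):

```python
import numpy as np

# Random-walk pseudo-gradient at pixel (y, x) of irradiance image I.
def pseudo_gradient(I, y, x, walks=200, length=10, alpha=0.1, beta=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    H, W = I.shape
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    total = np.zeros(2)
    for _ in range(walks):
        cy, cx = y, x
        for _ in range(length):
            cand = [(cy + dy, cx + dx) for dy, dx in steps
                    if 0 <= cy + dy < H and 0 <= cx + dx < W]
            w = np.array([alpha + beta * I[p] for p in cand])  # brighter = likelier
            cy, cx = cand[rng.choice(len(cand), p=w / w.sum())]
        total += np.array([cy - y, cx - x])
    return total / walks          # mean end-point offset = gradient direction
```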
B Maximisation of FB8
For visualisation and for integration with non-probabilistic modules, finding the direction with the greatest density is needed. This is the quadratic programming problem of maximising equation 9. It may be solved efficiently by observing that it is the same problem as finding the closest point on an ellipsoid to a given point. This latter problem can be expressed as an order-6 polynomial and solved with Newton's method. As a further convenience the initialisation can be done in such a way that it always converges directly to the maximal root [15]. The Fisher-Bingham distribution is a conditioned multivariate normal distribution [14, pp. 175]
−(D + cI3 )−1 2
μ ¯ = Σv
(38)
where cI3 is a scaled identity matrix selected to make D + cI3 negative definite. The maximal point is therefore the closest point to μ ¯ on the unit sphere using Mahalanobis distance. Mahalanobis distance is (ˆ x−μ ¯ )T Σ −1 (ˆ y−μ ¯) (39) To minimise the above equation we consider that Σ is diagonal and rewrite as
[(ˆ yi − μ ¯i )2 σi−1 ] (40) i
where σi , i ∈ {1, 2, 3} are the elements of Σ, which may be rearranged as
[(zi − σi−1 μ ¯i )2 ]
(41)
i
ˆ i . This is now Euclidean distance when solving for zi , and where zi = σi−1 y ˆ be of unit length becomes the equation of an ellipsoid the constraint that y zi 2 [( ) ]=1 (42) σi−1 i
A Convex Formulation of Continuous Multi-label Problems Thomas Pock1,2, Thomas Schoenemann1 , Gottfried Graber2 , Horst Bischof2 , and Daniel Cremers1 1
2
Department of Computer Science, University of Bonn Institute for Computer Graphics and Vision, Graz University of Technology
Abstract. We propose a spatially continuous formulation of Ishikawa’s discrete multi-label problem. We show that the resulting non-convex variational problem can be reformulated as a convex variational problem via embedding in a higher dimensional space. This variational problem can be interpreted as a minimal surface problem in an anisotropic Riemannian space. In several stereo experiments we show that the proposed continuous formulation is superior to its discrete counterpart in terms of computing time, memory efficiency and metrication errors.
1 Introduction
Many Computer Vision problems can be formulated as labeling problems. The task is to assign a label to each pixel of the image such that the label configuration is minimal with respect to a discrete energy. A large class of binary labeling problems can be globally minimized using graph cut algorithms [1,2]. Applications of binary labeling problems include tworegion image segmentation, shape denoising and 3D reconstruction. On the other hand, multi-label problems in general cannot be globally minimized. They can only be solved approximately within a known error bound [3,4,5,6]. There exists one exception where multi-label problems can be solved exactly. Ishikawa [7] showed that, if the pairwise interactions are convex in terms of a linearly ordered label set, one can compute the exact solution of the multi-label problem. Applications of multi-label problems include image restoration, inpainting, multi-region image segmentation, motion and stereo. The continuous counterpart to discrete labeling problems is the variational approach. Similar to the labeling problem, the aim of the variational approach is to find the minimizer of an energy functional. The major difference between the variational approach and the discrete labeling approach is that the energy functional is defined in a spatially continuous setting and the unknown functions can take continuous values. If the energy functional is convex and the minimization is carried out over a convex set, the globally optimal solution can be computed. On the other hand, it is generally hard to minimize non-convex energy functionals globally. D. Forsyth, P. Torr, and A. Zisserman (Eds.): ECCV 2008, Part III, LNCS 5304, pp. 792–805, 2008. c Springer-Verlag Berlin Heidelberg 2008
In this paper we present a new variational method which allows us to compute the exact minimizer of an energy functional incorporating Total Variation regularization and a non-convex data term. Our method can solve problems of the same complexity as Ishikawa's method. Hence, our method can be seen as its continuous counterpart. Our method comes along with several advantages compared to Ishikawa's approach. First, our method is largely independent from grid bias, also known as metrication error. This leads to more accurate approximations of the continuous solution. Second, our method is based on variational optimization techniques which can be effectively accelerated on parallel architectures such as graphics processing units (GPUs), and third, it requires less memory. Fourth, our method allows us to compute sub-pixel-accurate solutions. The remainder of the paper is as follows. In Section 2 we review the method of Ishikawa. In Section 3 we give the definition of the energy functional which can be solved with our method. We show how this non-convex energy functional can be cast as an equivalent convex optimization problem. In Section 4 we show results of our method applied to stereo. In the last Section we give some conclusions and show directions for future investigations.
2 Ishikawa's Discrete Approach
Ishikawa [7] presents a method to globally solve multi-label problems of a certain class. A less general class was given independently by Veksler [4]. Given a graph with node set V and edge set E and a label set L ⊂ Z, Ishikawa considers the task to compute the optimal labeling l ∈ L^V for an energy of the form
\min_l \sum_{(u,v) \in E} P(l(u) - l(v)) + \sum_{v \in V} D(l(v))    (1)
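As a quick illustration (not from the paper), the energy (1) of a given labeling can be evaluated directly; a minimal Python helper under our own naming:

```python
# Tiny helper (ours, for illustration) evaluating the discrete energy (1) for a
# labeling l over a graph with edges E, nodes V, convex pairwise term P and data term D.
def energy(l, E, V, P, D):
    return sum(P(l[u] - l[v]) for (u, v) in E) + sum(D(v, l[v]) for v in V)

# Example: a 3-node chain, labels {0, 1, 2}, P(d) = |d| and arbitrary data costs.
V = [0, 1, 2]
E = [(0, 1), (1, 2)]
data = {0: [0, 2, 3], 1: [2, 0, 2], 2: [3, 2, 0]}
print(energy({0: 0, 1: 1, 2: 2}, E, V, P=abs, D=lambda v, lv: data[v][lv]))  # -> 2
```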
Such a labeling problem combines a certain pairwise regularity term P (·) with an (arbitrary) data term D(·). Many problems in Computer Vision can be stated in this form, among them are stereo estimation, image restoration and image segmentation. Ishikawa shows that such problems can be solved in a globally optimal manner as long as the function P (·) is convex in l(u) − l(v). This is achieved by computing the minimal cut in an auxiliary graph with extended node set. For each combination of node v ∈ V and label l(v) ∈ L a node in the auxiliary graph is created. For details see [7]. While this approach is able to find global optimizers of a discrete energy, in practice it suffers from several drawbacks: – The algorithm requires a huge amount of memory. In part this is due to the large set of nodes. The true bottleneck however lies in the algorithms to find the minimal cut in the graph: All efficient solvers are based on computing the maximal flow in the graph [8]. This requires the storage of a flow value for each edge and hence an explicit storage of edges.
– Graph-based methods generally suffer from grid bias (also known as metrication errors). To remove this grid bias and get close to rotational invariance, large neighborhood systems are required. The resulting huge number of edges increases the memory consumption even further. – Lastly the efficient parallelization of max-flow-based methods is still an open issue. While current graphics cards offer highly parallel architectures, to date this potential could not be exploited to speed up max-flow algorithms. In this paper we deal with all of these drawbacks. We propose a sub-pixelaccurate continuous formulation which makes use of continuous optimization techniques. As a direct consequence our method does not suffer from grid bias. Moreover, it requires much less memory and is easy to parallelize.
3 A Continuous Approach
This work is devoted to the study of the variational problem
\min_u \int_\Omega |\nabla u(x)| \, dx + \int_\Omega \rho(u(x), x) \, dx ,    (2)
which can be seen as the continuous counterpart of (1), where we used P(·) = |·|. Let u : Ω → Γ be the unknown function, where Ω ⊆ R^2 is the image domain, Γ = [γ_min, γ_max] is the range of u and x = (x, y)^T ∈ Ω is the pixel coordinate. We may assume homogeneous Neumann boundary conditions for u on ∂Ω. The left term of (2) is for regularization, i.e. to obtain smooth results. It is based on minimizing the Total Variation (TV) of u. Note that the gradient operator is understood in its distributional sense. Therefore, the TV energy is also well-defined for discontinuous functions (e.g. characteristic functions).
|\nabla u(x)| = \sqrt{ (\partial u(x)/\partial x)^2 + (\partial u(x)/\partial y)^2 } .    (3)
Let us now discuss whether we can find an exact solution of (2). The regularization term is convex in u and can therefore be globally minimized. However, ρ(u) is by definition non-convex. Hence, we cannot expect to be able to compute the global minimizer in this setting.
3.1 A Convex Formulation Via Functional Lifting
In this section, we will develop a convex formulation of the non-convex variational model (2). The key idea is to lift the original problem formulation to a higher-dimensional space by representing u in terms of its level sets. Consequently, this will allow us to compute the exact solution of the original non-convex problem. Let us first give some definitions.
Definition 1. Let the characteristic function 1_{\{u>\gamma\}}(x) : \Omega \to \{0,1\} be the indicator for the γ-super-levels of u:
1_{\{u>\gamma\}}(x) = 1 if u(x) > \gamma, and 0 otherwise.    (4)
Next, we make use of the above defined characteristic functions to construct a binary function φ which resembles the graph of u.
Definition 2. Let φ : [Ω × Γ] → {0, 1} be a binary function defined as
\phi(x, \gamma) = 1_{\{u>\gamma\}}(x) .    (5)
As a direct consequence of (4) we see that φ(x, γ_min) = 1 and φ(x, γ_max) = 0. Hence, the feasible set of functions φ is given by
D = \{ \phi : \Sigma \to \{0,1\} \mid \phi(x, \gamma_{min}) = 1, \ \phi(x, \gamma_{max}) = 0 \} ,    (6)
where we used the short notation Σ = [Ω × Γ]. Note that the function u can be recovered from φ using the following layer cake formula [10]:
u(x) = \gamma_{min} + \int_\Gamma \phi(x, \gamma) \, d\gamma    (7)
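A small Python sketch of the lifting in Definition 2 and of the layer-cake recovery (7) on a discretised label range (helper names are ours):

```python
import numpy as np

def lift(u, gammas):
    """phi[i, j, k] = 1 if u[i, j] > gammas[k], else 0 (Definition 2)."""
    return (u[..., None] > gammas[None, None, :]).astype(np.float64)

def recover(phi, gammas, gamma_min):
    """u(x) = gamma_min + integral of phi over gamma, rectangle rule (Eq. 7)."""
    dgamma = gammas[1] - gammas[0]
    return gamma_min + phi.sum(axis=-1) * dgamma

u = np.array([[0.0, 1.5], [3.0, 2.0]])
gammas = np.arange(0.0, 4.0, 0.5)             # gamma_min = 0, step 0.5
phi = lift(u, gammas)
u_back = recover(phi, gammas, gamma_min=0.0)  # equals u up to discretisation
```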
Our intention is now to rewrite the variational problem (2) in terms of φ. This can be seen as lifting the variational problem (2) to a higher-dimensional space. This is stated by the following Theorem, which forms the basis of our approach.
Theorem 1. The variational problem (2) is equivalent to the higher dimensional variational problem
\min_{\phi \in D} \int_\Sigma |\nabla \phi(x, \gamma)| + \rho(x, \gamma) |\partial_\gamma \phi(x, \gamma)| \, d\Sigma ,    (8)
in the sense that the minimizer of (8) is related to the minimizer of (2) via the layer cake formula (7).
Proof: First, the TV term of (2) can be easily rewritten in terms of φ, making use of the generalized co-area formula of Fleming and Rishel [11]:
\int_\Omega |\nabla u(x)| \, dx = \int_\Omega \int_\Gamma |\nabla \phi(x, \gamma)| \, d\gamma \, dx ,    (9)
where |∇φ(x, γ)| denotes the Total Variation of the characteristic function of the γ-super-levels of u. The co-area formula essentially states that the TV norm can be decomposed into a sum of the lengths of the level sets of u. Second, we have to rewrite the data term of (2) by means of φ. From (5) we observe that
|\partial_\gamma \phi(x, \gamma)| \equiv \delta(u(x) - \gamma)    (10)
where δ(·) is the Dirac Delta function. As a direct consequence, the data term can be rewritten as
\int_\Omega \rho(u(x), x) \, dx = \int_\Omega \int_\Gamma \rho(\gamma, x) \, \delta(u(x) - \gamma) \, d\gamma \, dx = \int_\Omega \int_\Gamma \rho(\gamma, x) \, |\partial_\gamma \phi(x, \gamma)| \, d\gamma \, dx    (11)
By substitution of the terms (9) and (11) into (2), we arrive at the higher dimensional variational model (8). Although (8) is convex in φ, the variational problem is still non-convex since the minimization is carried out over D, which is a non-convex set. The idea is now to relax the variational problem (8) by allowing φ to vary smoothly in the interval [0, 1]. This leads to the following convex set of feasible solutions of φ:
D = \{ \phi : \Sigma \to [0,1] \mid \phi(x, \gamma_{min}) = 1, \ \phi(x, \gamma_{max}) = 0 \} .    (12)
The associated variational problem is now given by
\min_{\phi \in D} \int_\Sigma |\nabla \phi(x, \gamma)| + \rho(x, \gamma) |\partial_\gamma \phi(x, \gamma)| \, d\Sigma .    (13)
Since (13) is convex in φ and minimization is carried over D, which is a convex set, the overall variational problem is convex. This means that we are able to compute its global minimizer. Our intention, however, is still to solve the binary problem (8). Fortunately, minimizers of the relaxed problem can be transformed to minimizers of the binary problem. Based on [10] we state the following thresholding theorem. Theorem 2. Let φ∗ ∈ D be the solution of the relaxed variational problem (13). Then for almost any threshold μ ∈ [0, 1] the characteristic function 1{φ∗ ≥μ} ∈ D is also a minimizer of the binary variational problem (8).
Proof: (Proof by Contradiction.) Since (13) is homogeneous of degree one, we can make use of the generalized co-area formula to decompose (13) by means of the level sets of φ:
E(\phi) = \int_\Sigma |\nabla \phi| + \rho(x, \gamma) |\partial_\gamma \phi| \, d\Sigma = \int_0^1 \int_\Sigma |\nabla 1_{\{\phi \ge \mu\}}| + \rho(x, \gamma) |\partial_\gamma 1_{\{\phi \ge \mu\}}| \, d\Sigma \, d\mu = \int_0^1 E(1_{\{\phi \ge \mu\}}) \, d\mu    (14)
Assume to the contrary that 1_{\{\phi^* \ge \mu\}} \in D is not a global minimizer of the binary problem, i.e. there exists a binary function φ' ∈ D with E(φ') < E(1_{\{\phi^* \ge \mu\}}) for a measurable set of μ ∈ [0, 1]. This directly implies that
E(\phi') = \int_0^1 E(\phi') \, d\mu < \int_0^1 E(1_{\{\phi^* \ge \mu\}}) \, d\mu = E(\phi^*) ,    (15)
which means that φ* is not a global minimizer of E(·), contradicting our assumption. We have seen that solving the non-convex variational problem (2) amounts to solving the convex variational problem (13). In the following section we will develop a simple but efficient numerical algorithm to compute the solution of (13).
3.2 Computing the Solution of the Relaxed Functional
The fundamental approach to minimize (13) is to solve its associated Euler-Lagrange differential equation:
-\mathrm{div}( \nabla \phi / |\nabla \phi| ) - \partial_\gamma ( \rho \, \partial_\gamma \phi / |\partial_\gamma \phi| ) = 0 , \quad \text{s.t. } \phi \in D .    (16)
It is easy to see that these equations are not defined either as |∇φ| → 0 or |∂_γ φ| → 0. In order to resolve these discontinuities, one could use regularized variants of these terms, e.g. |\nabla \phi|_\varepsilon = \sqrt{|\nabla \phi|^2 + \varepsilon^2} and |\partial_\gamma \phi|_\varepsilon = \sqrt{|\partial_\gamma \phi|^2 + \varepsilon^2}, for some small constant ε. See [12] for more details. However, for small values of ε the equations are still nearly degenerate and for larger values the properties of the model get lost. To overcome the non-differentiability of the term |∇φ| + ρ|∂_γ φ| we employ its dual formulation [13,14,15,16]:
|\nabla \phi| + \rho |\partial_\gamma \phi| \equiv \max_p \{ p \cdot \nabla_3 \phi \} \quad \text{s.t. } p_1^2 + p_2^2 \le 1 , \ |p_3| \le \rho ,    (17)
where p = (p_1, p_2, p_3)^T is the dual variable and ∇_3 is the full (three dimensional) gradient operator. This, in turn, leads us to the following primal-dual formulation of the functional (13):
\min_{\phi \in D} \max_{p \in C} \int_\Sigma \nabla_3 \phi \cdot p ,    (18)
where
C = \{ p : \Sigma \to R^3 \mid p_1(x,\gamma)^2 + p_2(x,\gamma)^2 \le 1 , \ |p_3(x,\gamma)| \le \rho(x,\gamma) \} .    (19)
Note that the primal-dual formulation is now continuously differentiable in both φ and p. In order to solve (18) we exploit a primal-dual proximal point method [17]. The idea of the proximal point method is to generate a sequence of approximate solutions by augmenting the functional by quadratic proximal terms for both the primal and dual variables. We first minimize the functional with respect to the primal variable and then maximize the functional with respect to the dual variable.
1. Primal Step: For fixed p, compute a proximal primal step for φ:
\phi^{k+1} = \min_{\phi \in D} \Big\{ \int_\Sigma \nabla_3 \phi \cdot p^k + \frac{1}{2\tau_p} \int_\Sigma (\phi - \phi^k)^2 \Big\} .    (20)
2. Dual Step: For fixed φ, compute a proximal dual step for p:
p^{k+1} = \max_{p \in C} \Big\{ \int_\Sigma \nabla_3 \phi^{k+1} \cdot p - \frac{1}{2\tau_d} \int_\Sigma (p - p^k)^2 \Big\} .    (21)
The parameters τ_p and τ_d denote the stepsizes of the primal and dual updates. We will now characterize the solutions of the alternating minimization scheme by the following two Propositions.
Proposition 1. The solution of (20) is given by
\phi^{k+1} = P_D( \phi^k + \tau_p \, \mathrm{div}_3 p^k ) ,    (22)
where P_D denotes the projection onto the set D.
Proof: We compute the Euler-Lagrange equation of (20), which provides a necessary optimality condition for φ:
-\mathrm{div}_3 p^k + \frac{1}{\tau_p} (\phi - \phi^k) = 0 .    (23)
Solving this equation for φ directly leads to the presented scheme. Note that the scheme does not ensure that φ^{k+1} ∈ D. Therefore we have to reproject φ^{k+1} onto D using the following Euclidean projector:
P_D(\phi^{k+1}) = \min_{x \in D} \| \phi^{k+1} - x \| ,    (24)
which can be computed by a simple truncation of φ^{k+1} to the interval [0, 1] and setting φ(x, γ_min) = 1 and φ(x, γ_max) = 0.
Proposition 2. The solution of (21) is given by
p^{k+1} = P_C( p^k + \tau_d \nabla_3 \phi^{k+1} ) ,    (25)
where P_C denotes the projection onto the set C.
Proof: The optimality condition for p is given by
\nabla_3 \phi^{k+1} - \frac{1}{\tau_d} (p - p^k) = 0 .    (26)
We solve this equation for p, which results in the presented scheme. Since we need to ensure that p^{k+1} ∈ C we reproject p^{k+1} onto C using the Euclidean projector
P_C(p^{k+1}) = \min_{y \in C} \| p^{k+1} - y \| ,    (27)
which can be computed via
p_1^{k+1} = \frac{p_1^{k+1}}{\max(1, \sqrt{p_1^2 + p_2^2})} , \quad p_2^{k+1} = \frac{p_2^{k+1}}{\max(1, \sqrt{p_1^2 + p_2^2})} , \quad p_3^{k+1} = \frac{p_3^{k+1}}{\max(1, |p_3| / \rho)} .    (28)
3.3 Discretization
In our numerical implementation we are using a three-dimensional regular Cartesian grid
\{ (i, j, k) \mid 1 \le i \le M, \ 1 \le j \le N, \ 1 \le k \le O \} ,    (29)
where (i, j, k) is used to index the discrete locations on the grid and M, N and O denote the size of the grid. We use standard forward differences to approximate the gradient operator
(\nabla_3 \phi)_{i,j,k} = \Big( \frac{\phi_{i+1,j,k} - \phi_{i,j,k}}{\Delta x}, \ \frac{\phi_{i,j+1,k} - \phi_{i,j,k}}{\Delta y}, \ \frac{\phi_{i,j,k+1} - \phi_{i,j,k}}{\Delta \gamma} \Big)^T    (30)
and suitable backward differences to approximate the divergence operator
(\mathrm{div}_3 p)_{i,j,k} = \frac{p^1_{i,j,k} - p^1_{i-1,j,k}}{\Delta x} + \frac{p^2_{i,j,k} - p^2_{i,j-1,k}}{\Delta y} + \frac{p^3_{i,j,k} - p^3_{i,j,k-1}}{\Delta \gamma} ,    (31)
where Δx, Δy denote the width of the spatial discretization and Δγ denotes the width of the disparity discretization.
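A hedged NumPy sketch of these finite-difference operators (our own helper functions; the boundary handling simply treats values outside the grid as zero, which is one possible convention):

```python
import numpy as np

def grad3(phi, dx=1.0, dy=1.0, dg=1.0):
    """Forward-difference gradient (30); zero differences at the last slice."""
    g = np.zeros(phi.shape + (3,))
    g[:-1, :, :, 0] = (phi[1:, :, :] - phi[:-1, :, :]) / dx
    g[:, :-1, :, 1] = (phi[:, 1:, :] - phi[:, :-1, :]) / dy
    g[:, :, :-1, 2] = (phi[:, :, 1:] - phi[:, :, :-1]) / dg
    return g

def div3(p, dx=1.0, dy=1.0, dg=1.0):
    """Backward-difference divergence (31); p is assumed zero outside the grid."""
    d = np.zeros(p.shape[:-1])
    d[1:, :, :] += (p[1:, :, :, 0] - p[:-1, :, :, 0]) / dx
    d[0, :, :]  += p[0, :, :, 0] / dx
    d[:, 1:, :] += (p[:, 1:, :, 1] - p[:, :-1, :, 1]) / dy
    d[:, 0, :]  += p[:, 0, :, 1] / dy
    d[:, :, 1:] += (p[:, :, 1:, 2] - p[:, :, :-1, 2]) / dg
    d[:, :, 0]  += p[:, :, 0, 2] / dg
    return d
```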
3.4 Convergence of the Algorithm
Currently we cannot prove explicit values for τ_p and τ_d which ensure convergence of the proposed algorithm. Empirically we observed that the algorithm converges as long as the product τ_p τ_d ≤ 1/3. We therefore choose τ_p = τ_d = 1/\sqrt{3}.
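Putting the pieces together, a schematic NumPy implementation of the projected primal-dual iteration (Eqs. 22, 25 and 28) could look as follows. It reuses the grad3/div3 helpers sketched above and is an illustration under our own conventions, not the authors' GPU code:

```python
import numpy as np

# rho is the sampled data term rho(x, gamma) on the (i, j, k) grid;
# grad3 and div3 are the finite-difference sketches above.
def solve_relaxed(rho, n_iter=500, tau_p=1 / np.sqrt(3), tau_d=1 / np.sqrt(3)):
    M, N, O = rho.shape
    phi = np.zeros((M, N, O)); phi[:, :, 0] = 1.0     # phi(x, gamma_min) = 1
    p = np.zeros((M, N, O, 3))
    for _ in range(n_iter):
        # primal step + projection onto D (Eqs. 22 and 24)
        phi = np.clip(phi + tau_p * div3(p), 0.0, 1.0)
        phi[:, :, 0], phi[:, :, -1] = 1.0, 0.0
        # dual step + projection onto C (Eqs. 25 and 28)
        p = p + tau_d * grad3(phi)
        scale = np.maximum(1.0, np.sqrt(p[..., 0] ** 2 + p[..., 1] ** 2))
        p[..., 0] /= scale
        p[..., 1] /= scale
        p[..., 2] /= np.maximum(1.0, np.abs(p[..., 2]) / np.maximum(rho, 1e-12))
    return phi

# After convergence, threshold phi at any mu in (0, 1) and apply the layer-cake
# formula to read off a disparity/label map (Theorem 2).
```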
3.5 Interpretation as Anisotropic Minimal Surfaces
In Section 3.1 we showed that the non-convex continuous multi-label problem (2) can be cast as a convex problem (13) by rewriting it in a higher dimensional space. We will now show that this higher-dimensional problem is that of a minimal surface problem in an anisotropic Riemannian space. Specifically, if we replace the anisotropic TV-like term |∇φ| + ρ|∂γ φ| by a weighted TV term ρ|∇3 φ| we obtain a variational model whose minimizer is the minimal surface with respect to an isotropic Riemannian metric ρ. min ρ|∇3 φ| dΣ . (32) φ∈D
Σ
This problem has been studied in the context of Total Variation minimization by Bresson et al. in [18] and in the context of Continuous Maximal Flows by Appleton and Talbot in [19]. Note that the isotropic Riemannian problem does not allow for discontinuities in the solution whereas the anisotropic does. 3.6
Implementation
Numerical methods working on regular grids, can be effectively accelerated by state-of-the-art graphics processing units (GPUs). We employ the huge computational power and the parallel processing capabilities of GPUs to obtain a parallel implementation of our algorithm. The algorithm was implemented on a standard desktop PC equipped with a recent Quadcore 2.66 GHz CPU, 4 GB of main memory and a NVidia GeForce GTX 280 graphics card. The computer is running a 64-bit Linux system. With this GPU implementation we achieved a speedup factor of approximately 33 compared to an optimized C++ implementation executed on the same computer.
4
Experimental Results
In this section we provide experimental results of our algorithm applied to standard stereo benchmark problems. First, we compare our continuous formulation to the discrete approach of Ishikawa. Second we evaluate our method on the standard Middlebury stereo database [20]. Finally, we show results from a real world stereo example. For stereo computation we need a data term measuring the matching quality of a rectified stereo image pair IL and IR for a certain disparity value γ. We use the absolute differences summed over the three color channels of the input images. (i) (i) ρ(x, γ) = λ (33) IL (x) − IR (x + (γ, 1)T . i∈{r,g,b}
4.1
Comparison to Ishikawa’s Approach
In our first experiment, we do a comparison of our continuous method to the discrete approach of Ishikawa using the Tsukuba data set [20]. According to [20],
A Convex Formulation of Continuous Multi-label Problems
801
we used γmin = 0, γmax = 16 and λ = 50. The spatial domain and the disparity space was discretized using Δx = Δy = Δγ = 1.0. We ran our numerical scheme until the decrease of the energy was below a certain threshold. We also set up Ishikawa’s algorithm for different neighborhood connectivities. Since different neighborhood systems result in different weights of the smoothness term, we had to adjust the value of λ for the larger neighborhoods. Fig. 1 shows a qualitative comparison of our continuous algorithm to Ishikawa’s discrete algorithm. In case of a 4-connected neighborhood, one can clearly see blocky structures in the solution. This effect, also known as metrication error, has its origin in the coarse approximation of the smoothness term when using a 4connected or 8-connected neighborhood. We also provide a zoom in of the upper right corner of the lamp for the different results. In this region the metrication error of the discrete approach is clearly visible. In case of a 16-connected neighborhood, the result of the discrete approach is comparable to the result of our continuous approach.
Fig. 1. Qualitative comparison of the proposed continuous approach to Ishikawa's discrete approach: (a) Ishikawa 4-neighborhood, (b) Ishikawa 8-neighborhood, (c) Ishikawa 16-neighborhood, (d) proposed continuous formulation. It clearly shows the metrication error in the case of 4-connected and 8-connected neighborhoods, favoring 90 degree and 45 degree edges.
Table 1. Quantitative comparison of the proposed continuous approach to Ishikawa's discrete 16-connected approach. It shows that our GPU-based algorithm is about 20 times faster while requiring only 3.6% of its memory.

Algorithm                  error (%)   Runtime CPU/GPU (sec)   Memory (MB)
Ishikawa 4-neighborhood    2.90        2.9 / —                 450
Ishikawa 8-neighborhood    2.63        4.9 / —                 630
Ishikawa 16-neighborhood   2.71        14.9 / —                1500
Continuous formulation     2.57        25 / 0.75               54
Table 1 gives a quantitative comparison of our continuous algorithm to the discrete approach of Ishikawa using an error threshold of 1 for wrong pixels. It shows that the proposed continuous formulation provides error statistics slightly superior to its discrete counterpart. One can also see that both the runtime and the memory consumption of Ishikawa's discrete approach significantly increase with larger neighborhoods. Comparing our continuous approach to the 16-connected discrete approach of Ishikawa, we see that our GPU-based algorithm is about 20 times faster while requiring only 3.6% of its memory. This enables our method to compute the solution of stereo problems of much larger size in much shorter time.
4.2 Evaluation on the Middlebury Stereo Database
In this section we provide a full evaluation of our algorithm on the standard Middlebury stereo database [20]. In order to be more insensitive to brightness changes in the input images we applied a high-pass filter to the input images before computing the data term. We ran our algorithm with the following constant parameter settings for the entire data base: λ = 30, Δx = Δy = 1.0 and Δγ = 0.5. The disparity range given by γmin and γmax was set according to [20]. The computing time in this setting varies between 15 seconds for the Tsukuba data set and 60 seconds for the Cones and Teddy data sets. Fig. 2 shows the results of the stereo images. For a sub-pixel accurate threshold of th=0.5, our algorithm is currently ranked as number 15 out of 39 stereo algorithms. Note that Ishikawa’s algorithm failed in this setting due to its immense memory requirements. One should keep in mind that more sophisticated algorithms may provide better quantitative results for the stereo problems. However, our variational model is very simple and does not take into account additional information from image segmentation, plane-fitting and consistency checks. More importantly, our model can be exactly solved, which is not the case for the more sophisticated approaches. 4.3
Real World Example
Finally we give results of our algorithm applied to a real world stereo problem. Fig. 3 shows the estimated depth map from a large aerial stereo pair from Graz.
Fig. 2. Quantitative results from the Middlebury stereo evaluation database. Panels: (a) Tsukuba, (b) Venus, (c) Teddy, (d) Cones; ranks (e) 17, (f) 12, (g) 7, (h) 8; errors (i) 14.3%, (j) 4.99%, (k) 12.5%, (l) 7.25%.
Fig. 3. Estimated depth map of the proposed algorithm applied to a large aerial stereo data set of Graz: (a) left image (1500 × 1400 pixels), (b) estimated depth map.
We ran our algorithm with the following parameter settings: λ = 50, γmin = −30, γmax = 30 Δx = Δy = 1.0 and Δγ = 0.5. This example shows that the proposed algorithm yields promising results for large practical problems.
5 Conclusion
In this paper we proposed a continuous formulation to the discrete multi-label problem of Ishikawa. We showed that the original non-convex problem can be reformulated as a convex problem via embedding into a higher dimensional space. Our formulation removes several shortcomings of Ishikawa’s discrete approach. First, our algorithm is defined in a spatially continuous setting and is therefore free from grid bias. Second, our algorithm is based on variational optimization techniques which can be easily parallelized. Finally our algorithm needs less memory enabling us to compute much larger problems. Results from practical stereo examples emphasize the advantages of our approach over the discrete approach of Ishikawa. For future work we see mainly two directions. One direction is to investigate more sophisticated optimization schemes to achieve an additional speedup in computing the minimizer of the convex formulation. The other direction is to improve the variational model for stereo estimation. Specifically, we plan to incorporate additional cues such as edges into our variational model while still allowing to compute its exact solution.
Acknowledgements This work was supported by the Hausdorff Center for Mathematics and the Austrian Research Promotion Agency within the VM-GPU project (no. 813396). We would also like to thank Microsoft Photogrammetry for providing us the aerial stereo images.
References 1. Greig, D., Porteous, B., Seheult, A.: Exact maximum a posteriori estimation for binary images. J. Royal Statistics Soc. 51(Series B), 271–279 (1989) 2. Kolmogorov, V., Zabih, R.: What energy functions can be minimized via graph cuts. IEEE Trans. Pattern Anal. Mach. Intell. 26(2), 147–159 (2004) 3. Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. IEEE Trans. Pattern Anal. Mach. Intell. 23(11), 1222–1239 (2001) 4. Veksler, O.: Efficient Graph-based Energy Minimization Methods in Computer Vision. PhD thesis, Cornell University (July 1999) 5. Schlesinger, D., Flach, B.: Transforming an arbitrary minsum problem into a binary one. Technical Report TUD-FI06-01, Dresden University of Technology (2006) 6. Werner, T.: A linear programming approach to max-sum problem: A review. IEEE Trans. Pattern Anal. Mach. Intell. 29(7), 1165–1179 (2007) 7. Ishikawa, H.: Exact optimization for markov random fields with convex priors. IEEE Trans. Pattern Anal. Mach. Intell. 25(10), 1333–1336 (2003)
8. Ford, L., Fulkerson, D.: Flows in Networks. Princeton University Press, Princeton (1962) 9. Rudin, L., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D 60, 259–268 (1992) 10. Chan, T., Esedoglu, S., Nikolova, M.: Algorithms for finding global minimizers of image segmentation and denoising models. SIAM Journal of Applied Mathematics 66(5), 1632–1648 (2006) 11. Fleming, W., Rishel, R.: An integral formula for total gradient variation. Arch. Math. 11, 218–222 (1960) 12. Vogel, C., Oman, M.: Iteration methods for total variation denoising. SIAM J. Sci. Comp. 17, 227–238 (1996) 13. Chan, T., Golub, G., Mulet, P.: A nonlinear primal-dual method for total variationbased image restoration. SIAM J. Sci. Comp. 20(6), 1964–1977 (1999) 14. Carter, J.: Dual Methods for Total Variation-based Image Restoration. PhD thesis, UCLA, Los Angeles, CA (2001) 15. Chambolle, A.: An algorithm for total variation minimizations and applications. J. Math. Imaging Vis. (2004) 16. Chambolle, A.: Total variation minimization and a class of binary MRF models. Energy Minimization Methods in Computer Vision and Pattern Recognition, 136– 152 (2005) 17. Rockafellar, R.: Augmented lagrangians and applications of the proximal point algorithm in convex programming. Math. of Oper Res 1, 97–116 (1976) 18. Bresson, X., Esedoglu, S., Vandergheynst, P., Thiran, J., Osher, S.: Fast global minimization of the active contour/snake model. J. Math. Imaging Vis. 28(2), 151–167 (2007) 19. Appleton, B., Talbot, H.: Globally minimal surfaces by continuous maximal flows. IEEE Trans. Pattern Anal. Mach. Intell. 28(1), 106–118 (2006) 20. Scharstein, D., Szeliski, R.: A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int. J. Comp. Vis. 47(1-3), 7–42 (2002)
Beyond Loose LP-Relaxations: Optimizing MRFs by Repairing Cycles Nikos Komodakis1 and Nikos Paragios2 1
University of Crete
[email protected] 2 Ecole Centrale de Paris
[email protected] Abstract. This paper presents a new MRF optimization algorithm, which is derived from Linear Programming and manages to go beyond current state-of-the-art techniques (such as those based on graph-cuts or belief propagation). It does so by relying on a much tighter class of LP-relaxations, called cycle-relaxations. With the help of this class of relaxations, our algorithm tries to deal with a difficulty lying at the heart of MRF optimization: the existence of inconsistent cycles. To this end, it uses an operation called cycle-repairing. The goal of that operation is to fix any inconsistent cycles that may appear during optimization, instead of simply ignoring them as usually done up to now. The more the repaired cycles, the tighter the underlying LP relaxation becomes. As a result of this procedure, our algorithm is capable of providing almost optimal solutions even for very general MRFs with arbitrary potentials. Experimental results verify its effectiveness on difficult MRF problems, as well as its better performance compared to the state of the art.
1 Introduction
Optimization algorithms for discrete MRFs are known to be of fundamental importance for numerous problems from the fields of vision, graphics and pattern recognition. It is also known that many of the most successful of these algorithms are tightly related to Linear Programming (LP) [1][2][3][4][5][6][7]. In particular, they are connected to the following LP relaxation P(\bar{g}, \bar{f}):
P(\bar{g}, \bar{f}) = \min_x \sum_{p \in V} \bar{g}_p \cdot x_p + \sum_{pp' \in E} \bar{f}_{pp'} \cdot x_{pp'}    (1)
\text{s.t. } \sum_{l \in L} x_p(l) = 1, \quad \forall p \in V    (2)
\sum_{l' \in L} x_{pp'}(l, l') = x_p(l), \quad \forall pp' \in E, \ l \in L    (3)
x_p(l) \ge 0, \quad x_{pp'}(l, l') \ge 0 .    (4)
The connection between P(\bar{g}, \bar{f}) and MRFs lies in that if one replaces (4) with the constraints x_p(\cdot), x_{pp'}(\cdot,\cdot) \in \{0, 1\}, then the resulting integer program optimizes the energy of an MRF with unary potentials \bar{g}_p = \{\bar{g}_p(\cdot)\} and pairwise potentials \bar{f}_{pp'} = \{\bar{f}_{pp'}(\cdot,\cdot)\}. This MRF will be denoted by MRF(\bar{g}, \bar{f}) hereafter, and L, V, E (in (2), (3)) represent respectively its labels, graph vertices and graph edges.
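For illustration, the LP relaxation (1)–(4) for a toy problem with a single edge and two labels can be written down directly with an off-the-shelf LP solver; the instance below (potentials, variable ordering and solver choice are ours) is not part of the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance of (1)-(4): one edge (p, q), two labels. Variable order:
# x_p(0), x_p(1), x_q(0), x_q(1), x_pq(0,0), x_pq(0,1), x_pq(1,0), x_pq(1,1).
g_p, g_q = [0.0, 2.0], [1.5, 0.0]                              # unary potentials
f = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 0.0}       # Potts-like pairwise

c = np.array(g_p + g_q + [f[(l, m)] for l in (0, 1) for m in (0, 1)])
A_eq, b_eq = [], []
A_eq.append([1, 1, 0, 0, 0, 0, 0, 0]); b_eq.append(1)          # sum_l x_p(l) = 1
A_eq.append([0, 0, 1, 1, 0, 0, 0, 0]); b_eq.append(1)          # sum_l x_q(l) = 1
A_eq.append([-1, 0, 0, 0, 1, 1, 0, 0]); b_eq.append(0)         # sum_m x_pq(0,m) = x_p(0)
A_eq.append([0, -1, 0, 0, 0, 0, 1, 1]); b_eq.append(0)         # sum_m x_pq(1,m) = x_p(1)
A_eq.append([0, 0, -1, 0, 1, 0, 1, 0]); b_eq.append(0)         # sum_l x_pq(l,0) = x_q(0)
A_eq.append([0, 0, 0, -1, 0, 1, 0, 1]); b_eq.append(0)         # sum_l x_pq(l,1) = x_q(1)

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * 8, method="highs")
print(res.fun, res.x)   # optimal value of P(g, f) and its (here integral) minimizer
```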
Despite their success, LP-based methods are doomed to fail if relaxation P(¯ g, ¯f ) does not approximate well the actual problem MRF(¯ g, ¯f ) (i.e, it is not tight), which is exactly what happens when one has to deal with very hard MRF problems. We believe this is an issue that has to be addressed, if a further advance is to be made in MRF optimization. Motivated by this observation, the present work makes important practical, as well as theoretical, contributions in this regard. In particular, in this work we attempt to go beyond most existing MRF techniques, by deriving algorithms that are based on much tighter LP relaxations. Towards this goal, we try to strengthen relaxation P(¯ g, ¯f ). However, instead of doing that in the primal domain, which is inefficient, our strategy consists of applying this procedure in the dual domain. As a result, we create an hierarchy of tighter and tighter dual relaxations that starts from the dual of P(¯ g, ¯f ) and goes all the way up to a dual relaxation that actually coincides with the original problem MRF(¯ g, ¯f ) (§2). From this hierarchy, we choose to deal with one particular class of relaxations, which we call cycle-relaxations. This is achieved via a dual-based operation called cycle-repairing (§3), which helps us to deal with a difficulty that, we believe, lies at the heart of MRF optimization: the existence of inconsistent cycles (see definition later). As the name of that operation reveals, its role is to eliminate any inconsistent cycles that may appear during optimization. Furthermore, the more the repaired cycles, the tighter the underlying relaxation becomes. One efficient way of how one can actually do this cycle-repairing is described in §3.1, thus leading to a powerful algorithm that further promotes the use of general MRFs in future vision applications. Let us note at this point that Sontag and Jaakkola [8] have very recently tried as well to make use of a tighter LP relaxation for MRF optimization. However, their method relies on a weaker relaxation than ours. Furthermore, they use a primal-based cutting plane algorithm that requires solving a large primal LP (of growing size) at each iteration (i.e., after each new violated inequality is found), which makes their algorithm impractical for large MRFs. On the contrary, by working in the dual domain, our method is able to improve the relaxation (cycle repairing) by reusing work done in previous iterations. It is thus much more efficient, while it is also adaptive as it makes the relaxation tighter only when it needs to be. Moreover, being dual-based, it can provide lower bounds to the optimum MRF energy, which can be useful for verifying a solution’s optimality. Let us also note that two other dual-based methods of similar spirit to ours, that have been proposed concurrently with our work, appear in [9][10]. Finally, it is important to mention that, although one might be easily tempted to try to strengthen other type of MRF relaxations (e.g, quadratic or SOCP), these have been recently shown to be actually weaker than LP relaxation (1) [11].
2  Tighter LP-Relaxations for MRFs
Before examining how cycle-repairing works, let us first describe the hierarchy of dual relaxations that we will use. At one end of this hierarchy, of course, there will lie the dual of relaxation $P(\bar{g},\bar{f})$. For the moment, we will simply refer to
it as $D(\bar{g},\bar{f})$. The dual cost of any feasible solution to that relaxation is a lower bound to the optimum (i.e., minimum) energy of $\mathrm{MRF}(\bar{g},\bar{f})$. However, it is often the case that even the maximum of these bounds will be much lower than the optimum MRF energy, which is exactly what happens when $D(\bar{g},\bar{f})$ is not tight. To counter that, i.e., to raise the maximum lower bound, one has no choice but to introduce extra variables as well as extra constraints to $D(\bar{g},\bar{f})$, which is exactly what relaxation $D^+(\bar{g},\bar{f})$, lying at the other end of our hierarchy, does:

    $D^+(\bar{g},\bar{f}) \,=\, \max_{f}\; D(\bar{g}, f)$                              (5)
    s.t. $f \preceq_{E} \bar{f}$.                                                      (6)
In (6), we have used an abstract comparison operation $\preceq_{E}$ between pairwise potential functions. In general, given any subset $C \subseteq E$ of the MRF edges and any pairwise potential functions $f$, $f'$, the operation $f \preceq_{C} f'$ implies that, for each labeling $l = \{l_p\}$, it should hold:

    $\sum_{pp' \in C} f_{pp'}(l_p, l_{p'}) \;\leq\; \sum_{pp' \in C} f'_{pp'}(l_p, l_{p'}), \quad \forall\, l = \{l_p\}$      (7)
(This means that, instead of comparing pairwise potentials on individual edges, we compare sums of pairwise potentials over all edges in $C$.)

The reason that we expressed $D^+(\bar{g},\bar{f})$ in the above form is to illustrate its relation to $D(\bar{g},\bar{f})$. As can be seen, one difference between $D(\bar{g},\bar{f})$ and $D^+(\bar{g},\bar{f})$ is that the latter contains an additional set of variables $f = \{f_{pp'}(\cdot,\cdot)\}$, which can actually be thought of as a new set of pairwise potentials (also called virtual potentials hereafter). In relaxation $D(\bar{g},\bar{f})$, these virtual potentials $f$ do not appear, as they are essentially kept fixed to the true potentials $\bar{f}$ (i.e., it is as if constraints (6) had been replaced with the trivial constraints $f = \bar{f}$). On the contrary, in $D^+(\bar{g},\bar{f})$, we can vary these new potentials in order to achieve a higher dual objective value (i.e., a higher lower bound to the optimum MRF energy). The restriction, of course, is that we must never allow $f$ to become larger than the actual potentials $\bar{f}$. However, the comparison of $f$ to $\bar{f}$ is done not with the standard operator $\leq$, but with the generalized operator $\preceq_{E}$. As a result, relaxation $D^+(\bar{g},\bar{f})$ is much tighter than $D(\bar{g},\bar{f})$. In fact, as the next theorem certifies, it is so tight that it represents the MRF problem exactly:

Theorem 1 ([12]). $D^+(\bar{g},\bar{f})$ is equivalent to $\mathrm{MRF}(\bar{g},\bar{f})$.

This should, of course, come as no surprise, since $D^+(\bar{g},\bar{f})$ already contains an exponential number of constraints on $f$. One may thus argue that relaxations $D(\bar{g},\bar{f})$ and $D^+(\bar{g},\bar{f})$ lie at opposite ends: $D(\bar{g},\bar{f})$ imposes trivial constraints on $f$ and is thus efficient but not tight, whereas $D^+(\bar{g},\bar{f})$ has an exponential number of constraints on $f$ and is thus tight but not easy to handle. However, one can adjust the number of constraints on $f$ by concentrating only on the virtual potentials at a subset $C \subseteq E$ of the MRF edges. Indeed, assuming that initially $f = \bar{f}$, and that all $f_{pp'}(\cdot,\cdot)$ with $pp' \notin C$ will be kept fixed during the current step, constraints (6) then reduce to the
easier upper-bounding constraints $f \preceq_{C} \bar{f}$, thus creating a relaxation in between $D(\bar{g},\bar{f})$ and $D^+(\bar{g},\bar{f})$. Contrary to $f \preceq_{E} \bar{f}$, the constraints $f \preceq_{C} \bar{f}$ focus only on a subset of the virtual potentials $f_{pp'}(\cdot,\cdot)$, i.e., only on those with $pp'$ in subset $C$, while the rest are left untouched. Not only that, but, as optimization proceeds, one can choose a different local subset $C_i$ to focus on at each step. Indeed, assuming that $f'$ are the virtual potentials from the previous step, one can then simply focus on satisfying the constraints $f \preceq_{C_i} f'$, while keeping all $f_{pp'}(\cdot,\cdot)$ with $pp' \notin C_i$ unchanged (i.e., equal to $f'_{pp'}(\cdot,\cdot)$) during the current step (note that, together, these two things imply $f \preceq_{E} f'$ and, since $f' \preceq_{E} \bar{f}$ already holds from the previous iteration, feasibility constraint (6) is thus maintained). In this manner, different constraints are dynamically used at each step, implicitly creating a dual relaxation that becomes tighter and tighter as time passes. This relaxation (denoted by $D_{\{C_i\}}(\bar{g},\bar{f})$ hereafter) is actually part of a hierarchy of dual relaxations that starts from $D(\bar{g},\bar{f})$ (e.g., if $C_i$ contains only a single edge) and goes all the way up to $D^+(\bar{g},\bar{f})$ (e.g., if $C_i$ contains all MRF edges). Here we will deal with one particular type of relaxation from this hierarchy, where the edges from each set $C_i$ always form a simple cycle on the MRF graph. We will call these "cycle-relaxations" hereafter. Note that these are completely different from the more familiar cycle-inequalities relaxations, which impose constraints not on the dual but on the primal variables.

Let us end this section by describing relaxation $D(\bar{g}, f)$, which forms the building block of our hierarchy. Some terminology must be introduced first. We will hereafter refer to each MRF vertex $p \in V$ as an object, to each object-label combination $(p, l)$ as a node, and to each pair $\{(p, l), (p', l')\}$ of nodes (from adjacent objects in $E$) as a link. Relaxation $D(\bar{g}, f)$ is then defined as follows:

    $D(\bar{g}, f) \,=\, \max_{h,r}\; \sum_{p \in V} \min_{l \in L} h_p(l)$            (8)
    s.t. $r_{pp'}(l, l') \geq 0 \quad \forall\, pp' \in E,\ l, l' \in L$               (9)
Here, the dual variables are all the components of the vectors $h$, $r$. Each variable $h_p(l)$ will be called the height of node $(p, l)$, while variable $r_{pp'}(l, l')$ will be called the residual of link $\{(p, l), (p', l')\}$. Furthermore, if a node $(p, l)$ has minimal height at $p$ (i.e., $h_p(l) = \min_{l'} h_p(l')$) it will be called minimal, while a link $\{(p, l), (p', l')\}$ with zero residual (i.e., $r_{pp'}(l, l') = 0$) will be called tight (see Fig. 1(a)). As can be seen from (8), the goal of the above relaxation is to raise the heights of minimal nodes (i.e., to increase the quantity $\min_{l \in L} h_p(l)$ for each object $p$). But, due to (9), this must be done without any of the residuals becoming negative. Note, however, that the heights $h = \{h_p(\cdot)\}$ and residuals $r = \{r_{pp'}(\cdot,\cdot)\}$ are tightly related to each other, since they are both defined in terms of a third set of dual variables $y = \{y_{pp'}(\cdot)\}$ as follows:

    $h_p(l) \,=\, \bar{g}_p(l) + \sum_{p':\, pp' \in E} y_{pp'}(l),$                   (10)
    $r_{pp'}(l, l') \,=\, f_{pp'}(l, l') - y_{pp'}(l) - y_{p'p}(l').$                  (11)
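To make the bookkeeping of relaxation $D(\bar{g}, f)$ concrete, the following Python sketch computes the heights (10), the residuals (11) and the dual objective (8) for a toy MRF and a given choice of the dual variables $y$; the data layout (dictionaries keyed by objects, ordered edges and labels) and the helper names are our own illustrative choices, not part of the paper.

# A minimal sketch (Python, plain dictionaries) of the quantities used by
# relaxation D(g_bar, f): heights (10), residuals (11), dual objective (8).
# The data layout and helper names are illustrative assumptions, not the
# authors' implementation.

def heights(g_bar, y, edges, labels):
    # h_p(l) = g_bar_p(l) + sum over incident edges pp' of y_pp'(l)   -- Eq. (10)
    h = {p: {l: g_bar[p][l] for l in labels} for p in g_bar}
    for (p, q) in edges:
        for l in labels:
            h[p][l] += y[(p, q)][l]
            h[q][l] += y[(q, p)][l]
    return h

def residuals(f, y, edges, labels):
    # r_pp'(l, l') = f_pp'(l, l') - y_pp'(l) - y_p'p(l')              -- Eq. (11)
    return {(p, q): {(l, lq): f[(p, q)][(l, lq)] - y[(p, q)][l] - y[(q, p)][lq]
                     for l in labels for lq in labels}
            for (p, q) in edges}

def dual_objective(h):
    # sum_p min_l h_p(l)                                              -- Eq. (8)
    return sum(min(hp.values()) for hp in h.values())

# Toy example: 2 objects, 1 edge, 2 labels, y initialized to zero.
labels = ['a', 'b']
edges = [('p1', 'p2')]
g_bar = {'p1': {'a': 0.0, 'b': 1.0}, 'p2': {'a': 0.5, 'b': 0.0}}
f = {('p1', 'p2'): {('a', 'a'): 0.0, ('a', 'b'): 1.0,
                    ('b', 'a'): 1.0, ('b', 'b'): 0.0}}
y = {('p1', 'p2'): {l: 0.0 for l in labels},
     ('p2', 'p1'): {l: 0.0 for l in labels}}

h = heights(g_bar, y, edges, labels)
r = residuals(f, y, edges, labels)
assert all(v >= 0 for rv in r.values() for v in rv.values())   # constraint (9)
print(dual_objective(h))   # a lower bound on the optimum MRF energy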
Fig. 1. We will hereafter visualize each object with a shaded parallelogram, each minimal/non-minimal node with a filled/unfilled circle, and each tight link with a line segment (non-tight links are not drawn at all). (a) No line segment exists between minimal node $(p_1, a)$ and (the only) minimal node $(p_2, b)$ of object $p_2$ (this means their link is non-tight, i.e., $r_{p_1 p_2}(a, b) > 0$). The current dual solution thus does not satisfy Thm. 2. (b) One can therefore perturb this solution by adding $\varepsilon$ to $y_{p_1 p_2}(a)$. Then (due to (11)) the residuals $r_{p_1 p_2}(a, \cdot)$ decrease by $\varepsilon$ and the link between $(p_1, a)$, $(p_2, b)$ becomes tight (see red segment). Hence, the new solution satisfies Thm. 2. Furthermore, it satisfies Thm. 3, since a cyclic path goes through the black (i.e., minimal) nodes $(p_1, a)$, $(p_2, b)$, $(p_3, a)$. Note also that the dual objective has increased, due to the increase by $\varepsilon$ of the height of minimal node $(p_1, a)$. (c) Visualization of a cycle $p_1 p_2 \ldots p_t p_1$ that is consistent with minimal node $(p_1, l_1)$ (see also text).
Hence, if one adds $\varepsilon$ to $y_{pp'}(l)$ in order to raise the height $h_p(l)$ of a minimal node, thus increasing the dual objective (8), the residual $r_{pp'}(l, l')$ will then decrease by $\varepsilon$ due to (11) and, so, (9) may become violated. An observation (that will prove to be crucial later on) is that, due to (11), in a relaxation such as $D_{\{C_i\}}(\bar{g},\bar{f})$ or $D^+(\bar{g},\bar{f})$, one can alter a residual $r_{pp'}(\cdot,\cdot)$ by changing either the variables $y$ or the potentials $f$. The latter, however, is not possible in the weak relaxation $D(\bar{g},\bar{f})$, since $f$ is assumed to be constant in this case, i.e., $f = \bar{f}$. As we shall see later, this difference is exactly the reason why one is able to repair cycles while using relaxation $D_{\{C_i\}}(\bar{g},\bar{f})$.
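As a small numerical illustration of this trade-off (toy numbers of our own, not from the paper), the Python fragment below shows that raising $y_{pp'}(l)$ by $\varepsilon$ lifts the height $h_p(l)$ by $\varepsilon$ while lowering every residual $r_{pp'}(l,\cdot)$ by $\varepsilon$, so the perturbation is admissible only as long as those residuals stay non-negative:

# Toy illustration (our own numbers) of Eqs. (10)-(11): adding eps to y_pp'(l)
# raises the height h_p(l) by eps and lowers the residuals r_pp'(l, .) by eps.
g_bar_p = {'a': 0.0, 'b': 2.0}              # unary potentials of object p
f_pq = {('a', 'a'): 1.0, ('a', 'b'): 3.0}   # (part of) pairwise potentials on edge pq
y_pq, y_qp = {'a': 0.0, 'b': 0.0}, {'a': 0.0, 'b': 0.0}

def h_p(l): return g_bar_p[l] + y_pq[l]                        # Eq. (10), single edge
def r_pq(l, lq): return f_pq[(l, lq)] - y_pq[l] - y_qp[lq]     # Eq. (11)

eps = min(r_pq('a', 'a'), r_pq('a', 'b'))   # largest admissible step for label 'a'
y_pq['a'] += eps
print(h_p('a'))          # height raised from 0.0 to 1.0 -> dual objective grows
print(r_pq('a', 'a'))    # residual hits 0: the link {(p,a),(q,a)} is now tight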
3  Exploiting Tighter LP-Relaxations Via Cycle-Repairing
In order to see how to make use of a tighter relaxation, let us first examine when relaxation $D(\bar{g},\bar{f})$ itself is not tight, i.e., when even an optimal dual solution of $D(\bar{g},\bar{f})$ has a lower cost than the minimum cost of $\mathrm{MRF}(\bar{g},\bar{f})$. To do that, however, we first need to characterize the optimal dual solutions of $D(\bar{g},\bar{f})$.

Theorem 2 ([6]). An optimal dual solution of $D(\bar{g},\bar{f})$ must satisfy the following condition: if $(p, l)$ is a minimal node of object $p$, then for any neighboring object $p'$ there must exist a minimal node $(p', l')$ such that the link $\{(p, l), (p', l')\}$ is tight, i.e., $r_{pp'}(l, l') = 0$.

According to our visualization convention, this means that each minimal node must connect with a solid line to at least one minimal node from each neighboring object. For instance, this property does not hold for the dual solution in Fig. 1(a), since $(p_1, a)$ does not have a tight link with any of the minimal nodes in
Fig. 2. (a) The current dual solution satisfies Thm. 2, but not Thm. 3. The latter is because cycle $C = p_1 p_2 p_3$ is inconsistent, e.g., with both nodes $(p_1, a)$, $(p_1, b)$. (b) To allow $(p_1, a)$ to later raise its height and possibly become non-minimal, cycle_repair$(p_1, a)$ removes the blue tight link (i.e., makes it non-tight) by adding $\varepsilon$ to potential $f_{p_1 p_2}(a, a)$, thus making residual $r_{p_1 p_2}(a, a) = \varepsilon > 0$. However, to also maintain feasibility constraint $f \preceq_{C} f'$ (where $f'$ simply denotes the virtual potentials $f$ before adding $\varepsilon$), it then has to add $-\varepsilon$ to potentials $f_{p_2 p_3}(a, a)$, $f_{p_1 p_3}(a, b)$, thus creating 2 new tight links (red segments). (c) cycle_repair$(p_1, b)$ does the same for node $(p_1, b)$ so as to allow it to later raise its height. As a result, a tight link (blue segment) is again removed and 2 new tight links (red segments) are created. (d) Resulting dual solution after the 2 calls to cycle_repair (same as (c), but reshown for clarity). This solution no longer satisfies Thm. 2, since, due to the cycle-repairs, both $(p_1, a)$, $(p_1, b)$ lack tight links with any of the minimal nodes of $p_2$. (e) Therefore, dual_ascent can raise by $\varepsilon$ the heights of $(p_1, a)$, $(p_1, b)$ (by adding $\varepsilon$ to $y_{p_1 p_2}(a)$, $y_{p_1 p_2}(b)$, thus making all red links tight). Hence, the dual objective increases by $\varepsilon$ as well, and now actually coincides with the global MRF optimum.
$p_2$, but it does hold for the dual solutions in Figs. 1(b), 2(a). The above theorem essentially provides necessary (but not sufficient) conditions for a dual solution to be optimal for $D(\bar{g},\bar{f})$, i.e., to have maximum dual cost. This means that if these conditions do not hold, one knows how to increase the dual objective by repeatedly applying perturbations to the current dual solution (i.e., the current heights and residuals) until these conditions are satisfied. The iterative routine used for this purpose will be called dual_ascent hereafter. Its name comes from the fact that these perturbations can only improve (i.e., increase) the dual objective, never decrease it. Just as an illustration, we show one such perturbation in Fig. 1. In this case, the dual solution of Fig. 1(a) does not satisfy Thm. 2, since node $(p_1, a)$ does not have tight links with any of the minimal nodes in $p_2$. We can therefore raise that node's height (by increasing variable $y_{p_1 p_2}(a)$ until one of these links becomes tight), thus increasing the dual objective and obtaining the dual solution of Fig. 1(b), which now, of course, satisfies Thm. 2 and is actually optimal for $D(\bar{g},\bar{f})$.

Let us now return to the issue of tightness of relaxation $D(\bar{g},\bar{f})$ with respect to problem $\mathrm{MRF}(\bar{g},\bar{f})$ and, to this end, let us assume that, by applying dual_ascent, we have managed to increase the cost of the current feasible dual solution until Thm. 2 holds true. When is the resulting dual cost high enough to have reached the minimum cost of $\mathrm{MRF}(\bar{g},\bar{f})$ (thus proving that $D(\bar{g},\bar{f})$ is tight)? The next theorem provides a sufficient condition for this.

Theorem 3 ([12]). If there is a labeling $\{l_p\}$ such that each node $(p, l_p)$ is minimal and each link $\{(p, l_p), (p', l_{p'})\}$ is tight, then $D(\bar{g},\bar{f})$ is tight. Hence,
$\{l_p\}$ optimizes $\mathrm{MRF}(\bar{g},\bar{f})$, the current feasible dual solution optimizes $D(\bar{g},\bar{f})$, and their costs coincide.

For instance, the labeling $\{l_{p_1} = a, l_{p_2} = b, l_{p_3} = a\}$ in Fig. 1(b) is optimal (w.r.t. $\mathrm{MRF}(\bar{g},\bar{f})$), as it satisfies this theorem. Visually, this can be understood by observing that there is a cyclic path in Fig. 1(b) passing only through the minimal nodes $(p_1, l_{p_1})$, $(p_2, l_{p_2})$, $(p_3, l_{p_3})$. No similar cyclic path exists, however, in Fig. 2(a), despite the fact that the shown dual solution is actually optimal for $D(\bar{g},\bar{f})$ and satisfies Thm. 2 (this is a case where relaxation $D(\bar{g},\bar{f})$ is not tight). Indeed, if we start from any minimal node in Fig. 2(a), say $(p_1, a)$, and keep traversing tight links until we return to that node, we cannot do so without first passing through node $(p_1, b)$. We then say that cycle $p_1 p_2 p_3$ is inconsistent with $(p_1, a)$. More generally, a cycle $p_1 p_2 \ldots p_t p_{t+1}$ (with $p_{t+1} = p_1$) is said to be inconsistent with minimal node $(p_1, l_1)$ if no labeling $\{l_i\}$ exists such that each node $(p_i, l_i)$ is minimal and each link $\{(p_i, l_i), (p_{i+1}, l_{i+1})\}$ is tight (see Fig. 1(c)). This inconsistency is, in fact, one of the main reasons why relaxation $D(\bar{g},\bar{f})$ is often not tight (i.e., why $P(\bar{g},\bar{f})$ may not have integral optimal solutions). In our quest for better solutions, it is therefore crucial to figure out how to eliminate these inconsistent cycles (without, of course, reducing the dual objective). Unfortunately, as demonstrated by the example in Fig. 2(a), this may not be possible when using relaxation $D(\bar{g},\bar{f})$. It is possible, however, if the tighter relaxation $D_{\{C_i\}}(\bar{g},\bar{f})$ is used. As we shall see, the reason is that, in the latter case, if one wishes to modify the residuals (e.g., to change whether a link is tight or not), one may modify for this purpose not only the variables $y = \{y_{pp'}(\cdot)\}$, but also the variables $f = \{f_{pp'}(\cdot,\cdot)\}$, i.e., the virtual potentials.

The process of repairing inconsistent cycles by modifying the virtual potentials $f$ will be named cycle-repairing, and the associated routine will be called cycle_repair. Given such a routine, the following iterative strategy can then be followed (see Fig. 3): initially, we set $f = \bar{f}$. Then, at each iteration, we apply successively the routines dual_ascent and cycle_repair. The former drives the current dual solution towards satisfying Thm. 2 for the relaxation $D(\bar{g}, f)$ at the current iteration, but it may create inconsistent cycles. The latter therefore tries to repair these inconsistent cycles (thus tightening our relaxation even further), but, for this, it has to modify some residuals of the current dual solution by changing the potentials $f$ into $f^{\mathrm{next}}$. Due to this, a new relaxation $D(\bar{g}, f^{\mathrm{next}})$ is formed, which, however, violates Thm. 2 with respect to the current dual solution (dual_ascent must therefore be applied to it again at the next iteration). This process repeats until convergence, i.e., until no more inconsistent cycles exist. Note that both dual_ascent and cycle_repair never reduce the dual objective. We are therefore using a dual ascent procedure, applied, however, not to the weak relaxation $D(\bar{g},\bar{f})$, but to the tighter relaxation $D_{\{C_i\}}(\bar{g},\bar{f})$. Each time cycle_repair is applied, it helps dual_ascent escape from the fixed point it had previously got stuck in and thus further increase the dual objective the next time it runs. Hence, this essentially results in a series of weak
    $f \leftarrow \bar{f}$;
    repeat
        apply dual_ascent to relaxation $D(\bar{g}, f)$;
        get next cycle $C_i = p_1 p_2 \ldots p_n$;
        $f^{\mathrm{next}} \leftarrow$ {apply cycle_repair to cycle $C_i$};
        $f \leftarrow f^{\mathrm{next}}$;
    until all cycles in $\{C_i\}$ are consistent

Fig. 3. Algorithm's pseudocode
relaxations $\{D(\bar{g}, f^k)\}$ with $f^0 = \bar{f}$ and $D(\bar{g}, f^k) \leq D(\bar{g}, f^{k+1})$. We now proceed to describe one way of implementing the cycle_repair routine.
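Before doing so, note that the loop of Fig. 3 runs until all cycles in $\{C_i\}$ are consistent; the following Python sketch spells out one way such a test could be carried out for a single cycle, directly from the definition given above (does some labeling of the cycle use only minimal nodes and tight links?). The data layout and function names are our own illustrative assumptions, not the authors' implementation.

# A minimal sketch (assumed data layout: heights h[p][l], residuals
# r[(p,q)][(l,lq)] keyed in the direction the cycle is traversed) of the
# cycle-consistency test: cycle p1 p2 ... pt (closing back to p1) is consistent
# with minimal node (p1, l1) iff some labeling of the cycle uses only minimal
# nodes and tight links. Not the authors' implementation.
TOL = 1e-9

def is_minimal(h, p, l):
    return h[p][l] <= min(h[p].values()) + TOL

def is_tight(r, p, q, l, lq):
    return abs(r[(p, q)][(l, lq)]) <= TOL

def cycle_consistent(h, r, cycle, labels, l1):
    """cycle: [p1, p2, ..., pt] without repeating p1 at the end."""
    if not is_minimal(h, cycle[0], l1):
        return True            # only minimal nodes matter for inconsistency
    # Propagate along the cycle the set of labels reachable from (p1, l1)
    # through tight links and minimal nodes only.
    reachable = {l1}
    for p, q in zip(cycle, cycle[1:] + cycle[:1]):   # edges p1p2, ..., pt p1
        reachable = {lq for lq in labels
                     if is_minimal(h, q, lq)
                     and any(is_tight(r, p, q, l, lq) for l in reachable)}
        if not reachable:
            return False
    return l1 in reachable      # the cycle must close back at (p1, l1)

In the full algorithm such a test would be run over the chosen family of cycles $\{C_i\}$ (e.g., the small grid rectangles used in the experiments of §4), and any cycle found inconsistent with one of its minimal nodes would be handed to cycle_repair.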
3.1  An Algorithm for Repairing Cycles
To get an intuition of how cycle_repair might work, let us consider the example in Fig. 2(a). The current dual solution is a fixed point of dual_ascent (i.e., it satisfies Thm. 2), yet it has inconsistent cycles; e.g., cycle $C = p_1 p_2 p_3$ is inconsistent with node $(p_1, a)$. One way to repair this is to allow $(p_1, a)$ to raise its height, thus becoming a non-minimal node (if we can manage that, $(p_1, a)$ would no longer be a problem, as one cares only about minimal nodes when checking for inconsistent cycles). To this end, it suffices to find a neighboring object, say $p_2$, such that node $(p_1, a)$ has no tight links with any of the minimal nodes in $p_2$ (indeed, if $(p_1, a)$ has tight links neither with $(p_2, a)$ nor with $(p_2, b)$, then dual_ascent can later raise the height of $(p_1, a)$ simply by increasing the value of $y_{p_1 p_2}(a)$ until one of these links becomes tight). Since link $\{(p_1, a), (p_2, b)\}$ is already non-tight in Fig. 2(a), we are thus left with the task of making link $\{(p_1, a), (p_2, a)\}$ non-tight as well (i.e., making $r_{p_1 p_2}(a, a)$ positive). But how can we achieve that without decreasing the value of the dual objective function¹, i.e., without decreasing the height of any minimal node? This is impossible if we can manipulate only the variables $y$ (i.e., if relaxation $D(\bar{g},\bar{f})$ is used), but it is easy if we can manipulate the variables $f$ as well (i.e., if we use a tighter relaxation). Indeed, in the latter case we can make $r_{p_1 p_2}(a, a)$ equal to $\varepsilon > 0$ by simply adding $\varepsilon$ to $f_{p_1 p_2}(a, a)$ (see blue segment in Fig. 2(b)). Due to this update, however, we have violated the feasibility constraint $f \preceq_{C} f'$ (where $C = p_1 p_2 p_3$), since, for $l_{p_1} = l_{p_2} = a$ and any $l_{p_3}$, it then obviously holds $\sum_{ij} f_{p_i p_j}(l_{p_i}, l_{p_j}) = \sum_{ij} f'_{p_i p_j}(l_{p_i}, l_{p_j}) + \varepsilon$ (here $f'$ simply represents the virtual potentials $f$ before adding $\varepsilon$). To restore that constraint, it suffices to add $-\varepsilon$ to both $f_{p_1 p_3}(a, b)$ and $f_{p_2 p_3}(a, a)$ (see red segments in Fig. 2(b)). In summary, the following update must be applied: $f_{p_1 p_2}(a, a) \mathrel{+}= \varepsilon$, $f_{p_1 p_3}(a, b) \mathrel{-}= \varepsilon$, $f_{p_2 p_3}(a, a) \mathrel{-}= \varepsilon$, which results in $r_{p_1 p_2}(a, a) = \varepsilon$, $r_{p_1 p_3}(a, b) = r_{p_2 p_3}(a, a) = 0$. Note that we chose the value $\varepsilon$ because it is the maximum positive value for which the lowered residuals remain non-negative after the update (as required by (9)), i.e., $\varepsilon = \min(r_{p_1 p_3}(a, b), r_{p_2 p_3}(a, a))$.
¹ Making the residual $r_{p_1 p_2}(a, a)$ non-zero is trivial if we are allowed to decrease the dual objective function. But then, when we reapply dual_ascent, we will obtain the same dual solution that we started with (cycle $C$ will thus remain inconsistent).
A similar procedure can be applied for also repairing cycle $p_1 p_2 p_3$ with respect to node $(p_1, b)$ (see Fig. 2(c)). The dual solution resulting after the above 2 cycle-repairs appears in Fig. 2(d). As can be seen, it violates the conditions of Thm. 2 (e.g., neither node $(p_1, a)$ nor $(p_1, b)$ has tight links with any of the minimal nodes in $p_2$). We can therefore reapply dual_ascent, thus getting the dual solution in Fig. 2(e), which now has no inconsistent cycles and whose cost actually coincides with the global MRF optimum. This means that, contrary to the initial relaxation, the final relaxation after the 2 cycle-repairs is tight. In general, each time we repair a cycle, we effectively make use of additional constraints and thus tighten our current relaxation.

For the general case, cycle_repair may essentially mimic the procedure described above. Let $C = p_1 p_2 \ldots p_n$ (with $p_1 = p_n$) be an inconsistent cycle, e.g., with respect to minimal node $(p_1, l_1)$, which will be called the anchor node hereafter. One way for cycle_repair to repair this inconsistent cycle is to allow the anchor node to later raise its height. For this, as explained in the previous example, it suffices that no tight link exists between the anchor node and any of the minimal nodes in the adjacent object $p_2$. This, in turn, means that if any such link exists, a positive value $\varepsilon$ must be added to its associated virtual potential (the link would then be non-tight, as its residual would be equal to $\varepsilon > 0$). Hence, for all minimal nodes $(p_2, l_2)$ of object $p_2$ that satisfy $r_{p_1 p_2}(l_1, l_2) = 0$, we need to apply the following update:

    $f_{p_1 p_2}(l_1, l_2) \mathrel{+}= \varepsilon$                                   (12)
As a result of this, however, the feasibility constraint $f \preceq_{C} f'$ is now violated (where $f'$ simply denotes the virtual potentials before the update). To restore feasibility, we thus have to add $-\varepsilon$ to the virtual potentials of some subset $S$ of non-tight links (note that the virtual potential of tight links cannot decrease, as this would result in negative residuals, thus violating feasibility). It is not difficult to verify that it suffices for this set $S$ to contain all non-tight links $\{(p_k, l_k), (p_{k+1}, l_{k+1})\}$ between an anchored node $(p_k, l_k)$ (where $2 \leq k \leq n-1$) and a non-anchored node $(p_{k+1}, l_{k+1})$ (where $l_{k+1} = l_1$ if $k+1 = n$). Here we say that a node $(p_k, l_k)$ is anchored if and only if there exists a path $(p_1, l_1) \rightarrow (p_2, l_2) \rightarrow \ldots \rightarrow (p_k, l_k)$ from the anchor node $(p_1, l_1)$ to node $(p_k, l_k)$ such that all the path links are tight (before applying (12)) and all the path nodes are minimal (e.g., in Fig. 2(b), the anchor node is $(p_1, a)$ and the set $S$ consists of the red links). The following transformation must then be applied: $\forall \{(p_k, l_k), (p_{k+1}, l_{k+1})\} \in S$, $f_{p_k p_{k+1}}(l_k, l_{k+1}) \mathrel{-}= \varepsilon$. A possible choice for $\varepsilon$ is any positive value that ensures that none of the decreased residuals of the links in $S$ becomes negative, e.g.:

    $\varepsilon \,=\, \min_{\{(p_k, l_k), (p_{k+1}, l_{k+1})\} \in S}\; r_{p_k p_{k+1}}(l_k, l_{k+1}).$                  (13)
Note that, since $C$ is inconsistent, such a positive $\varepsilon$ always exists.

Finally, one way of implementing the dual_ascent routine is to use the augmenting DAG algorithm [6]. This is an iterative algorithm that increases the dual objective by repeatedly perturbing the current dual solution until it satisfies Thm. 2. To this end, it uses a procedure based on a so-called augmenting directed acyclic graph (DAG) (this somewhat resembles the augmenting-path algorithm for max-flow/min-cut). In our case, an additional advantage comes from the fact that the change from $f$ to $f^{\mathrm{next}}$ after each cycle_repair is always small (i.e., local), which means that the change from the augmenting DAG of relaxation $D(\bar{g}, f)$ to that of $D(\bar{g}, f^{\mathrm{next}})$ will be small as well. Hence, the convergence of dual_ascent for $D(\bar{g}, f^{\mathrm{next}})$ will be extremely fast, given that this routine has already converged for $D(\bar{g}, f)$ (this somewhat resembles the case of path-augmenting max-flow algorithms applied to dynamic graphs).

After our algorithm in Fig. 3 converges, we have to compute an MRF labeling based on the final dual solution, i.e., the final heights and residuals. To this end, one can traverse all objects in some predefined order, say $p_1, p_2, \ldots, p_n$, and then assign to each object $p_i$ a label $\hat{l}_i$ that satisfies the following criteria: it has minimal height (i.e., node $(p_i, \hat{l}_i)$ is minimal) and it also minimizes the non-negative sum of residuals $\sum_{j<i} r_{p_j p_i}(\hat{l}_j, \hat{l}_i)$ over the already-labeled neighboring objects $p_j$.
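To make the general cycle-repairing step concrete, the following Python sketch spells out one possible realization of cycle_repair based on the updates (12)-(13); the positional notion of an anchored node is computed by propagating tight links through minimal nodes along the cycle. The data layout (heights h, residuals r and virtual potentials f keyed by ordered edges and label pairs) and all names are our own illustrative assumptions, not the authors' implementation, and much of the bookkeeping of a full solver is omitted.

# A schematic sketch of cycle_repair for an inconsistent cycle
# C = p1 p2 ... pn (with p1 = pn) and anchor node (p1, l1).
# Assumed data layout (not the authors'): heights h[p][l], residuals
# r[(p,q)][(l,lq)], virtual potentials f[(p,q)][(l,lq)], all keyed in the
# cycle's traversal direction.
TOL = 1e-9

def is_minimal(h, p, l):
    return h[p][l] <= min(h[p].values()) + TOL

def cycle_repair(h, r, f, cycle, labels, l1):
    """cycle: [p1, p2, ..., pn] with cycle[-1] == cycle[0] (i.e., p1 = pn)."""
    p1, p2 = cycle[0], cycle[1]

    # Anchored nodes per position (Sec. 3.1): reachable from (p1, l1) along the
    # cycle through tight links and minimal nodes only.
    anchored = [set() for _ in cycle]
    anchored[0] = {l1}
    for k in range(len(cycle) - 1):
        p, q = cycle[k], cycle[k + 1]
        anchored[k + 1] = {lq for lq in labels
                           if is_minimal(h, q, lq)
                           and any(r[(p, q)][(l, lq)] <= TOL
                                   for l in anchored[k])}

    # Set S of non-tight links to be lowered: from an anchored node at
    # 1-based position k (2 <= k <= n-1) to a non-anchored node at position
    # k+1; on the closing edge (k+1 = n) only links entering the anchor label.
    S = []
    for k in range(1, len(cycle) - 1):          # 0-based indices for positions 2..n-1
        p, q = cycle[k], cycle[k + 1]
        closing = (k + 1 == len(cycle) - 1)
        targets = [l1] if closing else [lq for lq in labels
                                        if lq not in anchored[k + 1]]
        for l in anchored[k]:
            for lq in targets:
                if r[(p, q)][(l, lq)] > TOL:    # non-tight link
                    S.append((p, q, l, lq))
    if not S:
        return 0.0                               # nothing to repair here

    eps = min(r[(p, q)][(l, lq)] for (p, q, l, lq) in S)      # Eq. (13)

    # Update (12): raise the virtual potential of every tight link between the
    # anchor and a minimal node of p2 ...
    for l2 in labels:
        if is_minimal(h, p2, l2) and r[(p1, p2)][(l1, l2)] <= TOL:
            f[(p1, p2)][(l1, l2)] += eps
            r[(p1, p2)][(l1, l2)] += eps        # residual (11) moves with f
    # ... and lower the potentials of the links in S to restore feasibility.
    for (p, q, l, lq) in S:
        f[(p, q)][(l, lq)] -= eps
        r[(p, q)][(l, lq)] -= eps
    return eps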
4  Experimental Results
To verify our algorithm's effectiveness, we tested it on various MRF problems. We began by applying it to a standard set of benchmark MRF problems from vision [13], and Fig. 4 shows some of our results. For instance, Figures 4(a), 4(b) show stereo matching results from the widely used Middlebury stereo data set, Fig. 4(c) contains a result related to image inpainting and denoising, Fig. 4(d) displays an image segmentation result, while Fig. 4(e) contains a result from a panoramic stitching application. Similarly, Fig. 4(f) shows a result from a group photo merging application. In all cases, our method yielded solutions which were either optimal or had energy that was extremely close to the optimum (this can be proved thanks to the fact that the dual costs always form a lower bound on the minimum MRF energy). For instance, the worst accuracy of the labelings obtained for all MRFs was 0.0071, where accuracy is defined as $\frac{\text{Energy of labeling} \,-\, \text{best lower bound}}{\text{best lower bound}}$.

Next, in order to do a more thorough analysis and to see how our algorithm behaves in much more difficult cases, we went on to test it on a set of hard synthetic MRF problems. We thus began by conducting a full set of experiments on binary MRF problems, which are often used as subroutines by many popular MRF algorithms like [14], [15]. The binary MRFs were defined on 4-connected
² Thm. 3 provides only a sufficient (but not a necessary) condition for optimality, since it refers to a dual solution of $D(\bar{g},\bar{f})$ and not of $D^+(\bar{g},\bar{f})$.
Fig. 4. Results from the benchmark MRF problems in [13]
[Fig. 5, panels (a)-(c): relative energy versus coupling strength for ρ = 1%, 25% and 50% non-attractive terms; curves for our method, BP, QPBOP and TRW-S.]
Fig. 5. Comparison results for binary MRFs with different percentages ρ of non-attractive couplings. For each value of ρ (i.e., for each plot), we also vary the coupling strength σ (horizontal axis). The vertical axis represents relative MRF energies (for an algorithm with energy $E$, its relative energy is $(E - E_{\min})/|E_{\min}|$, where $E_{\min}$ equals the minimum energy among all tested algorithms).
30×30 grids and had different percentages of non-attractive couplings³. Unary potentials $\bar{g}_p(\cdot)$ were drawn independently from a Gaussian distribution $N(0, 1)$, while pairwise potentials were set as $\bar{f}_{pp'}(0,0) = \bar{f}_{pp'}(1,1) = 0$, $\bar{f}_{pp'}(0,1) = \bar{f}_{pp'}(1,0) = \lambda_{pp'}$, with $\lambda_{pp'}$ drawn from $|N(0, \sigma^2)|$ for attractive couplings and from $-|N(0, \sigma^2)|$ for non-attractive couplings. The percentage of non-attractive couplings was controlled by the parameter $\rho$, which was set to 1%, 25% and 50% in 3 different experiments⁴. Also, for each experiment, we allowed the parameter $\sigma$ (which controls the strength of the coupling) to vary and take values from $\{0.1, 2, 4, 6, 8\}$. These two parameters, $\rho$ and $\sigma$, affect the difficulty of the resulting MRF. Along with our algorithm, the BP, TRW-S [4] and QPBOP [16] algorithms were tested as well⁵. The results appear in Fig. 5, where each point on a plot corresponds to the average over 20 random MRF instances. As can be seen, our algorithm had the best performance in all cases (i.e., for all combinations of $\rho$ and $\sigma$). Its running times varied depending on the binary MRF's difficulty and
³ For binary (but not for multi-label) MRFs, attractive is equivalent to submodular.
⁴ By flipping labels, the cases ρ = 50 + ε and ρ = 50 − ε reduce to each other.
⁵ We also tested the augmenting DAG method, but it performed similarly to TRW-S.
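For concreteness, the synthetic binary MRF setup described above could be generated as in the following Python sketch (numpy-based; the random-number conventions, seed and function names are our own assumptions, not the authors' code):

# A sketch (not the authors' code) of the synthetic binary MRF setup of Sec. 4:
# 4-connected 30x30 grid, unaries ~ N(0,1); pairwise terms with
# f(0,0)=f(1,1)=0 and f(0,1)=f(1,0)=lambda, where lambda ~ |N(0, sigma^2)| for
# attractive edges and -|N(0, sigma^2)| for the fraction rho of non-attractive ones.
import numpy as np

def synthetic_binary_mrf(size=30, rho=0.25, sigma=4.0, seed=0):
    rng = np.random.default_rng(seed)
    unary = rng.normal(0.0, 1.0, size=(size, size, 2))         # g_bar_p(0), g_bar_p(1)
    edges, couplings = [], []
    for i in range(size):
        for j in range(size):
            for di, dj in ((0, 1), (1, 0)):                     # 4-connectivity
                if i + di < size and j + dj < size:
                    lam = abs(rng.normal(0.0, sigma))
                    if rng.random() < rho:                      # non-attractive edge
                        lam = -lam
                    edges.append(((i, j), (i + di, j + dj)))
                    couplings.append(lam)
    return unary, edges, np.array(couplings)

def energy(labels, unary, edges, couplings):
    # E(l) = sum_p g_bar_p(l_p) + sum_pp' f_bar_pp'(l_p, l_p')
    e = sum(unary[p][labels[p]] for p in np.ndindex(labels.shape))
    e += sum(lam * (labels[p] != labels[q]) for (p, q), lam in zip(edges, couplings))
    return e

unary, edges, couplings = synthetic_binary_mrf(rho=0.5, sigma=4.0)
labels = np.zeros((30, 30), dtype=int)                           # a trivial labeling
print(energy(labels, unary, edges, couplings))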
[Fig. 6, panels (a)-(d): (a), (b) plots of energy bounds for our method and TRW-S (annotated "cycle repairing starts here"); (c) our result, energy = 23822; (d) BP result, energy = 33110.]
Fig. 6. (a) Upper (solid) and lower (dashed) bounds after running TRW-S and our method on 2 different binary MRFs. These bounds correspond to primal costs (i.e., MRF energies) and dual costs, respectively. For a binary MRF with only 1% non-attractive terms, the upper and lower bounds of TRW-S converge (i.e., LP-relaxation (8) is tight). (b) This is not the case, however, for a binary MRF with 50% non-attractive terms. But, thanks to cycle repairing, our algorithm uses a much tighter relaxation, and so its upper and lower bounds converge even in this case. (c) An example of non-rigid point matching. Source points (red) were generated by applying a global rigid motion plus a random non-rigid perturbation to the target points (green). Despite the heavy distortion, our algorithm succeeded in finding a meaningful set of point correspondences. (d) In contrast, BP failed, as it did not manage to effectively minimize the MRF energy.
were up to 1.5 secs using heavily unoptimized code on a 1.4 GHz CPU (compared, e.g., to 0.53 secs for QPBOP). Note that the QPBOP algorithm did not perform very well in this case (we speculate that this may be due to a possibly large percentage of unlabeled nodes). Also note that the method from [7] cannot be applied here, due to the large number of ties that would exist. As can be seen, in the case of binary MRFs it is only for extremely small $\rho$ (i.e., for almost submodular MRFs) that methods such as TRW-S (which was a top performer in [13]) are able to perform as well as our algorithm. The reason is that, in this case, the standard LP relaxation upon which such methods rely is already a very good approximation to the MRF problem (recall that, in the special case of binary MRFs that are submodular, TRW-S can compute the global optimum, just like our method). This is also shown in more detail in Fig. 6(a), where the lower and upper bounds generated while applying TRW-S and our algorithm to a binary MRF with $\rho = 1\%$ have been plotted. In this case, the lower/upper bounds of TRW-S correspond to dual/primal costs of the standard LP relaxation. Since, as shown in Fig. 6(a), these bounds converge, this implies that the LP relaxation is tight. However, as can be observed in Fig. 5(c), the situation is completely different when a mixed MRF (with, say, $\rho = 50\%$) is to be optimized. These are really very hard problems to solve, for which TRW-S performs poorly. Fig. 6(b) shows in more detail what is really happening in this case. Contrary to the plot in Fig. 6(a), the upper and lower bounds of TRW-S no longer converge, and the reason, of course, is that the standard LP relaxation is now not tight. However, this is not at all an issue for our algorithm, thanks to the cycle repairing that it uses. This effectively enables it to take advantage of a much tighter relaxation,
[Fig. 7, panels (a)-(c): relative energy versus coupling strength for ρ = 1%, 25% and 50% non-attractive terms; curves for our method, BP, α-exp P+I and TRW-S.]
Fig. 7. Comparison results for multi-label MRFs
Fig. 8. Deformable feature matching via MRF optimization. From user selected features in a source image, we seek correspondences to automatically extracted features in a target image (feature similarity and geometric structure are both taken into account).
which yields upper and lower bounds far closer to each other (i.e., which almost converge). As a result, a much better optimum is obtained as well.

Using a procedure similar to the binary case, we also conducted another full set of experiments for multi-label MRFs, which are even more popular in vision. The results are shown in Fig. 7 for various combinations of $\rho$ and $\sigma$ and when 5 labels were used. Our algorithm again had the best performance among all methods and did much better than TRW-S (which was a top performer in [13]) in all cases (even for very small $\rho$). Note that QPBOP, unlike our method, is applicable only to binary MRFs and so it had to be replaced with the α-exp P+I algorithm proposed in [16] (essentially a generalization of α-expansion with a QPBOP-based technique used as the binary optimizer at each step). Note also that the set of cycles $\{C_i\}$ (used by our algorithm for the cycle-repairing) consisted only of 1×1, 1×2 and 2×1 rectangles on the grid (these were the settings used for all the grid-based MRFs of this paper).

We also considered the problem of subgraph matching, where one seeks a mapping $T$ between a source and a target graph. This can be formulated as a multi-label MRF problem, where source vertices correspond to MRF sites, while each target vertex represents one possible label. The unary potential $\bar{g}_p(T(p))$ thus measures the "dissimilarity" between source vertex $p$ and target vertex $T(p)$, while the potential $\bar{f}_{pp'}(T(p), T(p'))$ measures the "dissimilarity" between source edge $pp'$ and target edge $T(p)T(p')$. We used this formulation for the purpose of non-rigid registration between geometric point sets. In this case, the unary potentials are all set to zero (as there is no notion of similarity between mere geometric points), while the pairwise potentials (of the fully connected MRF) measure the geometric distortion for each pair of source points as follows: $\bar{f}_{pp'}(T(p), T(p')) = |d(p, p') - d(T(p), T(p'))| \,/\, d(p, p')$,
where $d(\cdot,\cdot)$ denotes Euclidean distance. Results from applying our method and BP to this problem are shown in Figures 6(c), 6(d). Contrary to BP, which failed to effectively minimize the MRF energy in this case, our algorithm managed to find a good minimum and thus extracted a good point matching despite the heavy geometric distortion (the result of the α-exp P+I algorithm is even worse and is thus not shown). Note that, since a fully connected MRF has been used, our algorithm was allowed to use triplets of vertices as the set $\{C_i\}$ of possible cycles to repair. Similarly, we have applied our registration algorithm to the problem of deformable feature matching between images (see Fig. 8), where the unary potentials (instead of being 0) are now based on comparing geometric blur descriptors.
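As an illustration of this matching formulation, the pairwise potentials of the fully connected MRF can be tabulated directly from the two point sets, as in the Python sketch below (the data layout and names are our own assumptions, not the authors' code):

# A sketch (our own, not the authors' code) of the point-matching MRF of Sec. 4:
# source points are MRF objects, target points are labels, unaries are zero, and
# the pairwise potential on every pair of source points penalizes the relative
# change of their distance under the assignment T:
#   f_bar_pp'(T(p), T(p')) = |d(p,p') - d(T(p),T(p'))| / d(p,p').
import numpy as np

def pairwise_distortion_potentials(src, tgt):
    """src: (n,2) source points, tgt: (m,2) target points.
    Returns f with f[p, q, a, b] = |d(src_p,src_q) - d(tgt_a,tgt_b)| / d(src_p,src_q)
    for p != q (a fully connected MRF over the source points)."""
    d_src = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)   # (n,n)
    d_tgt = np.linalg.norm(tgt[:, None, :] - tgt[None, :, :], axis=-1)   # (m,m)
    f = np.abs(d_src[:, :, None, None] - d_tgt[None, None, :, :])
    np.fill_diagonal(d_src, 1.0)              # avoid division by zero on p == q
    f /= d_src[:, :, None, None]
    return f

def matching_energy(T, f):
    # Energy of an assignment T (array: source index -> target index); unaries are 0.
    n = len(T)
    return sum(f[p, q, T[p], T[q]] for p in range(n) for q in range(n) if p != q)

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
tgt = np.array([[0.1, 0.0], [1.1, 0.1], [0.0, 0.9], [2.0, 2.0]])
f = pairwise_distortion_potentials(src, tgt)
print(matching_energy(np.array([0, 1, 2]), f))   # a low-distortion assignment
print(matching_energy(np.array([3, 1, 2]), f))   # a much worse one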
5  Conclusions
We presented a novel LP-based algorithm for MRF optimization which, contrary to most state-of-the-art techniques, no longer relies on a weak LP relaxation. To this end, a hierarchy of dual relaxations (that become tighter and tighter) has been introduced, and it was shown how one can take advantage of a very tight class of relaxations from this hierarchy, named cycle-relaxations. This is done through an operation called cycle-repairing, whereby inconsistent cycles get repaired during optimization instead of simply being ignored, as was the case up to now. The more cycles are repaired, the tighter the underlying relaxation becomes. The outcome is an algorithm that can handle difficult MRF problems with arbitrary pairwise potentials. The impact of our work can be both practical and theoretical. On the one hand, the proposed method makes one more step towards the design of optimization algorithms that can effectively handle very general MRFs, thus allowing users to freely exploit such MRFs for a better modeling of future vision problems. On the other hand, for a further advance to be made in MRF optimization, the use of tighter relaxations is a direction that, we believe, has to be followed. To this end, the idea of cycle-repairing, as well as that of a dynamic hierarchy of dual relaxations (of which only the cycle-relaxations have been explored here), provides an extremely flexible and general framework that can be useful in opening new research avenues in this regard.
References

1. Komodakis, N., Tziritas, G.: Approximate labeling via graph-cuts based on linear programming. In: PAMI (2007)
2. Komodakis, N., Paragios, N., Tziritas, G.: MRF Optimization via Dual Decomposition: Message-passing revisited. In: ICCV (2007)
3. Wainwright, M.J., Jaakkola, T.S., Willsky, A.S.: MAP estimation via agreement on (hyper)trees: Message-passing and linear programming. IEEE Trans. on Information Theory (2005)
4. Kolmogorov, V.: Convergent Tree-reweighted Message Passing for Energy Minimization. PAMI 28(10) (2006)
5. Komodakis, N., Tziritas, G., Paragios, N.: Fast Approximately Optimal Solutions for Single and Dynamic MRFs. In: CVPR (2007)
6. Werner, T.: A Linear Programming Approach to Max-Sum Problem: A Review. PAMI 29(7) (2007)
7. Yanover, C., Meltzer, T., Weiss, Y.: Linear Programming Relaxations and Belief Propagation – An Empirical Study. JMLR 7 (2006)
8. Sontag, D., Jaakkola, T.: New outer bounds on the marginal polytope. In: NIPS (2008)
9. Werner, T.: High-arity interactions, polyhedral relaxations, and cutting plane algorithm for soft constraint optimisation (MAP-MRF). In: CVPR (2008)
10. Sontag, D., Meltzer, T., Globerson, A., Jaakkola, T., Weiss, Y.: Tightening LP relaxations for MAP using message passing. In: UAI (2008)
11. Kumar, M.P., Kolmogorov, V., Torr, P.H.S.: An analysis of convex relaxations for MAP estimation. In: NIPS (2007)
12. Komodakis, N., Paragios, N.: Beyond loose LP-relaxations for MRF optimization. Technical report, Ecole Centrale Paris (2008)
13. Szeliski, R., Zabih, R., et al.: A Comparative Study of Energy Minimization Methods for Markov Random Fields. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3952, pp. 16–29. Springer, Heidelberg (2006)
14. Boykov, Y., Veksler, O., Zabih, R.: Fast Approximate Energy Minimization via Graph Cuts. In: PAMI (November 2001)
15. Raj, A., Singh, G., Zabih, R.: MRF's for MRI's: Bayesian Reconstruction of MR Images via Graph Cuts. In: CVPR (2006)
16. Rother, C., Kolmogorov, V., Lempitsky, V., Szummer, M.: Optimizing binary MRFs via extended roof duality. In: CVPR (2007)
Author Index
Aach, Til I-509 Agarwala, Aseem IV-74 Agrawal, Motilal IV-102 Ahmed, Amr III-69 Ai, Haizhou I-697 Ali, Asem M. III-98 Ali, Saad II-1 Alvino, Christopher I-248 ˚ Astr¨ om, Kalle IV-130 Athitsos, Vassilis I-643 Authesserre, Jean-Baptiste III-400 Babakan, Sevkit III-224 Babenko, Boris I-193, II-211 Bach, Francis III-43 Bagon, Shai IV-30 Bai, Xiang IV-788 B˘ alan, Alexandru O. II-15 Baldrich, Ramon IV-1 Baraniuk, Richard G. II-155 Barbu, Adrian IV-465 Barinova, Olga II-100 Barreto, Jo˜ ao P. IV-609 Bartoli, Adrien III-196 Basu, Anup II-554 Bauer, Joachim IV-873 Belhumeur, Peter N. IV-116, IV-340, IV-845 Belongie, Serge I-193, II-211 Berclaz, J´erˆ ome III-112 Berroir, Jean-Paul IV-665 Berthoumieu, Yannick III-400 Betke, Margrit I-643 Beveridge, J. Ross II-44 Bhat, Pravin II-114 Bhusnurmath, Arvind IV-638 Bibby, Charles II-831 Bischof, Horst I-234, III-588, III-792, IV-677, IV-873 Black, Michael J. II-15, III-83 Blake, Andrew I-99, IV-15 Blas, Morten Rufus IV-102 Blaschko, Matthew B. I-2 Bogoni, Luca IV-465
Boiman, Oren IV-30 Bon´e, Romuald II-392 Bougleux, S´ebastien II-129, III-57 Bouthemy, Patrick I-113 Bowden, Richard I-222 Boyer, Edmond II-30 Bronstein, Alexander M. II-143 Bronstein, Michael M. II-143 Brostow, Gabriel J. I-44 Brox, Thomas I-739 Bujnak, Martin III-302 Burgeth, Bernhard III-521 Burkhardt, Hans II-239 Byr¨ od, Martin IV-130 Calonder, Michael I-58 Campbell, Neill D.F. I-766 Cernuschi-Fr´ıas, Bruno I-113 Cevher, Volkan II-155 Chai, Jinxiang I-657 Chan, Syin IV-817 Chang, Shih-Fu IV-270 Charpiat, Guillaume III-126 Chellappa, Rama II-155 Chen, Daozheng IV-116 Chen, Jianing I-671 Chen, Jingni III-15, III-725 Chen, Tsuhan I-441, II-446 Chen, Yuanhao II-759 Cheng, Irene II-554 Cheong, Loong-Fah III-330 Chi, Yu-Tseh IV-256 Chia, Liang-Tien IV-817 Chli, Margarita I-72 Cho, Minsu IV-144 Chung, Albert C.S. IV-368 Chung, Ronald II-733 Cipolla, Roberto I-44, I-290, I-766 Cohen, Laurent D. II-129, II-392, III-57, III-628 Cohen, Michael II-114 Collins, Brendan I-86 Collins, Robert T. II-474, III-140 Comaniciu, Dorin I-711, IV-465
822
Author Index
Cooper, David B. IV-172 Cour, Timothee IV-158 Cremers, Daniel I-332, I-739, I-752, III-792, IV-677 Criminisi, Antonio I-99 Crivelli, Tom´ as I-113 Cui, Jinshi III-642 Curless, Brian II-114 Dambreville, Samuel II-169 Damen, Dima III-154 Daniilidis, Kostas IV-553 Darzi, Ara IV-492 Davis, Larry S. I-16, II-610, IV-423 Davison, Andrew J. I-72 de Campos, Cassio P. III-168 Delmas, Patrice II-350 Deng, Jia I-86 Denis, Patrick II-197 Dexter, Emilie II-293 Didas, Stephan III-521 Dinerstein, Michael II-321 Dinh, Thang Ba II-678 Doermann, David II-745, III-752 Doll´ ar, Piotr II-211 Donner, Yoni IV-748 Doretto, Gianfranco IV-691 Douze, Matthijs I-304 Drummond, Tom III-372 Du, Wei II-225 Duarte, Marco F. II-155 Durand, Fr´edo IV-88 Ecker, Ady I-127 Eden, Ibrahim IV-172 Edwards, Philip IV-492 Efros, Alexei A. IV-354 Elder, James H. II-197 Elmoataz, Abderrahim III-668 Enqvist, Olof I-141 Escobar, Maria-Jose IV-186 Ess, Andreas II-816 Estrada, Francisco J. II-197 Estrin, Deborah III-276 Fan, Lixin III-182 Farag, Aly A. III-98 Farenzena, Michela III-196 Farhadi, Ali I-154, IV-451 Farzinfar, Mahshid I-167
Fauqueur, Julien I-44 Fehr, Janis II-239 Fei-Fei, Li I-86, III-602, IV-527 Feiner, Steven IV-116 Ferencz, Andras IV-527 Figl, Michael IV-492 Fleischmann, Oliver II-638 Fleuret, Fran¸cois III-112, IV-214 Foroosh, Hassan I-318 Fossati, Andrea IV-200 Fradet, Matthieu III-210 Frahm, Jan-Michael I-427, II-500 Franke, Uwe I-739 Freeman, William T. III-28, IV-88 Fritz, Mario II-527 Fua, Pascal I-58, II-405, III-112, IV-200, IV-214, IV-567, IV-581 Fulkerson, Brian I-179 Fundana, Ketut III-251 Fusiello, Andrea I-537 Galleguillos, Carolina I-193 Gammeter, Stephan II-816 Gao, Jizhou II-624 Garbe, Christoph III-290 Gaspar, Jos´e Ant´ onio IV-228 Ge, Weina III-140 Georgiev, Todor III-224 Geusebroek, Jan-Mark III-696 Gevers, Theo I-208 Gijsenij, Arjan I-208 Gilbert, Andrew I-222 Gimel’farb, Georgy L. II-350, III-98 Gleicher, Michael IV-437 Goh, Alvina III-238 Goldman, Dan B IV-74 Gong, Shaogang III-574, IV-383 Gong, Yihong II-419, III-69 Gonz´ alez, Germ´ an IV-214 Gosch, Christian III-251 Graber, Gottfried III-792 Grabner, Helmut I-234, III-588 Grady, Leo I-248, II-252 Gray, Douglas I-262 Grinspun, Eitan IV-845 Grossmann, Etienne IV-228 Gu, Jinwei IV-845 Gu, Leon I-413 Gu, Xianfeng III-1
Author Index Gupta, Abhinav I-16 Gupta, Raj II-265 Haines, Tom S.F. III-780 Han, Bohyung IV-527 Han, Junwei IV-242 Hartley, Richard I-276 Hasinoff, Samuel W. IV-45 Hebert, Martial III-43, III-481 H´ebert, Patrick I-454 Heitz, Geremy I-30 Herlin, Isabelle IV-665 Hern´ andez, Carlos I-290, I-766 Heyden, Anders III-251 Ho, Jeffrey IV-256 Hofmann, Matthias III-126 Hogg, David III-154 Hoi, Steven C.H. III-358, III-766 Hoiem, Derek II-582 Horaud, Radu II-30 Hu, Weiming IV-396 Hu, Yiqun IV-817 Hua, Gang I-441 Huang, Chang II-788 Huang, Haoda II-759 Huang, Jianguo IV-284 Huang, Kaiqi III-738 Huang, Qingming IV-541 Huang, Thomas II-419 Huang, Xinyu II-624 Huttenlocher, Daniel P. II-379, III-344 Ikeuchi, Katsushi IV-623 Illingworth, John I-222 Intwala, Chintan III-224 Irani, Michal IV-30 Isambert, Till IV-665 Jacobs, David W. IV-116 J¨ aggli, Tobias II-816 Jain, Arpit I-483 Jebara, Tony IV-270 Jegou, Herve I-304 Jepson, Allan D. I-127 Jermyn, Ian H. III-509 Ji, Qiang II-706, III-168 Jia, Jiaya I-671, IV-775 Jiang, Hao II-278 Jiang, Shuqiang IV-541 Jiang, Wei IV-270
Jiang, Xiaoyue IV-284 Jin, Hailin I-576 Jordan, Chris IV-158 Josephson, Klas IV-130 Junejo, Imran N. I-318, II-293 Jung, Ho Yub II-307, IV-298 Kahl, Fredrik I-141 Kanade, Takeo I-413 Karlinsky, Leonid II-321 Karner, Konrad IV-873 Kidode, Masatsugu III-681 Kim, Tae Hoon III-264 Kjellstr¨ om, Hedvig II-336 Klein, Georg II-802 Klodt, Maria I-332 Ko, Teresa III-276 Kobayashi, Takumi I-346 Koch, Reinhard IV-312 Koenderink, Jan J. I-1 Kohli, Pushmeet II-582 Koike, Hideki III-656 Kolev, Kalin I-332, I-752 Koller, Daphne I-30 Kolmogorov, Vladimir II-596 Komodakis, Nikos III-806 Kondermann, Claudia III-290 Kong, Yuk On IV-284 Konolige, Kurt IV-102 Konushin, Anton II-100 Konushin, Vadim II-100 Koppal, Sanjeev J. IV-830 Korah, Thommen I-359 Kornprobst, Pierre IV-186 K¨ oser, Kevin IV-312 Kragi´c, Danica II-336 Krahnstoever, Nils IV-691 Krajsek, Kai IV-326 Kress, W. John IV-116 Krueger, Matthias II-350 Kukelova, Zuzana III-302 Kumar, Neeraj II-364, IV-340 Kumar, Sanjiv III-316 Kuthirummal, Sujit IV-60, IV-74 Kutulakos, Kiriakos N. I-127, IV-45 Kwon, Dongjin I-373 Kwon, Junseok I-387 Lai, Shang-Hong I-589, III-468 Lalonde, Jean-Fran¸cois IV-354
823
824
Author Index
Lampert, Christoph H. I-2 Langer, Michael S. I-401 Lao, Shihong I-697 Laptev, Ivan II-293 Latecki, Longin Jan IV-788 Law, Max W.K. IV-368 Lazebnik, Svetlana I-427 Lee, Hyunjung I-780 Lee, KeeChang II-100 Lee, Kyong Joon I-373 Lee, Kyoung Mu I-387, II-307, III-264, IV-144, IV-298 Lee, Sang Uk I-373, II-307, III-264, IV-298 Lee, Sang Wook I-780 Leibe, Bastian II-816 Leistner, Christian I-234 Lempitsky, Victor IV-15 Leordeanu, Marius III-43 Lepetit, Vincent I-58, II-405, IV-581 Levi, Dan II-321 Levin, Anat IV-88 Lewis, J.P. III-83 L´ezoray, Olivier III-668 Li, Jian IV-383 Li, Kai I-86 Li, Shimiao III-330 Li, Shuda I-631 Li, Xi IV-396 Li, Xiaowei I-427 Li, Yi II-745 Li, Yuan IV-409 Li, Yunpeng II-379, III-344 Liang, Jianming IV-465 Liang, Lin II-72 Liang, Wei II-664 Lim, Hwasup II-100 Lin, Chenxi II-759 Lin, Zhe IV-423 Ling, Haibin IV-116 Liu, Ce III-28 Liu, David I-441 Liu, Feng IV-437 Liu, Jianzhuang I-603, III-358 Liu, Qingshan I-685 Liu, Wei III-358 Liu, Yanxi II-474 Loeff, Nicolas IV-451 Lopez, Ida IV-116 Loui, Alexander C. IV-270
Loxam, James III-372 Lu, Le IV-465 Lucassen, Marcel P. I-208 Lui, Yui Man II-44 Lumsdaine, Andrew III-224 Luo, Yiwen III-386 Lyu, Michael R. III-766 Mairal, Julien III-43 Makadia, Ameesh III-316 Makram-Ebeid, Sherif III-628 Mandal, Mrinal II-554 Marszalek, Marcin IV-479 Martin, David R. II-278 Mart´ınez, David II-336 Matsushita, Yasuyuki II-692, III-656, IV-623 McKenna, Stephen J. IV-242 McMillan, Leonard I-711 Medioni, G´erard II-678 M´egret, R´emi III-400 Mei, Lin IV-492 Mensink, Thomas II-86 Menzel, Marion I. IV-326 Mester, Rudolf III-290 Metaxas, Dimitris I-685 Mezouar, Youcef III-196 Migita, Tsuyoshi III-412 Milborrow, Stephen IV-504 Mille, Julien II-392 Miltsakaki, Eleni IV-158 Mittal, Anurag I-483, II-265 Mordohai, Philippos IV-553 Moreels, Pierre III-426 Moreno-Noguer, Francesc II-405, IV-581 Mori, Greg III-710 Mory, Benoit III-628 Murray, David II-802 Nagahara, Hajime IV-60 Namboodiri, Anoop III-616 Narasimhan, Srinivasa G. IV-354, IV-830 Nayar, Shree K. II-364, IV-60, IV-74, IV-340, IV-845 Nevatia, Ramakant II-788, IV-409 Nickel, Kai IV-514 Nicolls, Fred IV-504 Niebles, Juan Carlos IV-527
Author Index Ning, Huazhong II-419 Nishino, Ko III-440 Nist´er, David II-183 Novatnack, John III-440 Ogino, Shinsuke III-412 Okada, Ryuzo II-434 Oliensis, John I-562 Orabona, Francesco IV-228 Otsu, Nobuyuki I-346 Ouellet, Jean-Nicolas I-454 Pajdla, Tomas III-302 Pal, Christopher J. I-617 Pan, Gang I-603 Pan, Wei-Hau III-468 Pang, Junbiao IV-541 Pantofaru, Caroline III-481 Papadopoulo, Th´eodore II-486 Papanikolopoulos, Nikolaos III-546 Paragios, Nikos III-806 Parikh, Devi II-446 Paris, Sylvain II-460 Park, Minwoo II-474 Patterson, Alexander IV IV-553 Pavlovic, Vladimir III-316 Pele, Ofir III-495 Peng, Ting III-509 P´erez, Patrick II-293, III-210 Perona, Pietro I-523, II-211, III-426 Peyr´e, Gabriel II-129, III-57 Piater, Justus II-225 Pilet, Julien IV-567 Piovano, J´erome II-486 Piriou, Gwenaelle I-113 Pizarro, Luis III-521 Pock, Thomas III-792, IV-677 Pollefeys, Marc II-500 Ponce, Jean III-43 Prinet, V´eronique III-509 Pylv¨ an¨ ainen, Timo III-182 Quan, Long
III-15, III-725
Rabe, Clemens I-739 Rabinovich, Andrew I-193 Raguram, Rahul II-500 Ramamoorthi, Ravi IV-116, IV-845 Ranganathan, Ananth I-468
825
Rasmussen, Christopher I-359 Ravichandran, Avinash II-514 Ravishankar, Saiprasad I-483 Reddy, Dikpal II-155 Reid, Ian II-831 Reisert, Marco II-239 Ren, Xiaofeng III-533 Ribnick, Evan III-546 Rittscher, Jens IV-691 Robert, Philippe III-210 Romeiro, Fabiano IV-859 Romero, Javier II-336 Ross, David A. III-560 Roth, Stefan III-83 Rother, Carsten II-596, IV-15 Rousseau, Fran¸cois I-497 Rueckert, Daniel IV-492 Russell, David III-574 Saffari, Amir III-588 Salganicoff, Marcos IV-465 Salzmann, Mathieu IV-581 Samaras, Dimitris III-1 Sandhu, Romeil II-169 Sankaranarayanan, Aswin II-155 Sato, Yoichi III-656 Savarese, Silvio III-602 Scharr, Hanno I-509, IV-326 Schiele, Bernt II-527, IV-733 Schikora, Marek I-332 Schindler, Konrad II-816 Schmid, Cordelia I-304, III-481, IV-479 Schnieders, Dirk I-631 Schnitzspan, Paul II-527 Schn¨ orr, Christoph III-251 Schoenemann, Thomas I-332, III-792 Sch¨ olkopf, Bernhard III-126 Schuchert, Tobias I-509 Sclaroff, Stan I-643 Sebastian, Thomas IV-691 Seitz, Steven M. II-541 Seo, Yongduek I-780 Shah, Mubarak II-1 Shahed, S.M. Nejhum IV-256 Shakunaga, Takeshi III-412 Sharma, Avinash III-616 Sharp, Toby I-99, IV-595 Shen, Chunhua IV-719 Sheorey, Sameer IV-116 Shi, Jianbo II-774, IV-760
826
Author Index
Shin, Young Min IV-144 Shotton, Jamie I-44 Simon, Ian II-541 Singh, Meghna II-554 Sivic, Josef III-28 Smeulders, Arnold W.M. III-696 Soatto, Stefano I-179, II-434, III-276, IV-705 Sommer, Gerald II-638 Somphone, Oudom III-628 Song, Xuan III-642 Sorokin, Alexander I-548 Spain, Merrielle I-523 Stew´enius, Henrik II-183 Stiefelhagen, Rainer IV-514 Strecha, Christoph IV-567 Sturm, Peter IV-609 Sugano, Yusuke III-656 Sun, Deqing III-83 Sun, Jian II-72, IV-802 Sun, Yi II-58 Syeda-Mahmood, Tanveer II-568 Szummer, Martin II-582 Ta, Vinh-Thong III-668 Tabrizi, Mostafa Kamali I-154 Takamatsu, Jun IV-623 Tan, Tieniu III-738 Tang, Xiaoou I-603, II-720, III-386, IV-802 Tannenbaum, Allen II-169 Tao, Dacheng I-725 Tao, Hai I-262 Tarlow, Daniel III-560 Taskar, Ben IV-158 Taylor, Camillo Jose IV-638 Teoh, Eam Khwang I-167 ter Haar, Frank B. IV-652 Toldo, Roberto I-537 Tong, Yan II-706, III-168 Torralba, Antonio III-28 Torresani, Lorenzo II-596 Tran, Du I-548 Tran, Lam I-617 Tran, Son D. II-610 Trobin, Werner IV-677 Tsuji, Ryosuke III-681 Tu, Peter IV-691 Tu, Zhuowen II-211, IV-788 Tuytelaars, Tinne II-650
Ukita, Norimichi Ullman, Shimon
III-681 II-321
van de Weijer, Joost IV-1 van Gemert, Jan C. III-696 Van Gool, Luc II-650, II-816 Varanasi, Kiran II-30 Vasilyev, Yuriy IV-859 Vaudrey, Tobi I-739 Vazquez, Eduard IV-1 Vedaldi, Andrea I-179, IV-705 Veenman, Cor J. III-696 Veksler, Olga III-454 Veltkamp, Remco C. IV-652 Verbeek, Jakob II-86 Vidal, Ren´e I-276, II-514, III-238 Vogiatzis, George I-290, I-766 Wang, Fei II-568 Wang, Hongzhi I-562 Wang, Jingbin I-643 Wang, Lei IV-719 Wang, Liang I-576 Wang, Liming II-774 Wang, Qiang II-720 Wang, Ruixuan IV-242 Wang, Shu-Fan I-589 Wang, Xianwang II-624 Wang, Yang III-1, III-710 Wang, Yueming I-603 Wedel, Andreas I-739 Wei, Shou-Der III-468 Wei, Xiaolin K. I-657 Weickert, Joachim III-521 Weinman, Jerod J. I-617 Wen, Fang II-72 Werman, Michael III-495 White, Sean IV-116 Wietzke, Lennart II-638 Willems, Geert II-650 Wilson, Richard C. III-780 Wojek, Christian IV-733 Wolf, Lior IV-748 Wolf, Matthias IV-465 Wong, Kwan-Yee K. I-631 Wu, Bo II-788 Wu, Changchang I-427 Wu, Yang II-774, IV-760 Wu, Zheng I-643
Author Index Xiang, Tao IV-383 Xiao, Jianxiong III-15, III-725 Xiao, Rong I-603, II-72 Xing, Eric III-69 Xu, Li I-671, IV-775 Xu, Wei II-419, III-69 Xu, Zenglin III-766 Xue, Zhong I-167 Yakubenko, Anton II-100 Yamazaki, Shuntaro IV-830 Yang, Jie I-725 Yang, Ming-Hsuan I-468, IV-256 Yang, Peng I-685 Yang, Ruigang I-576, II-624 Yang, Wuyi II-664 Yang, Xingwei IV-788 Yao, Bangpeng I-697 Yao, Jian-feng I-113 Yeung, Dit-Yan III-15, III-725 Yezzi, Anthony II-169 Yin, Lijun II-58 Yin, Xiaotian III-1 Yu, Kai III-69 Yu, Qian II-678 Yu, Ting IV-691 Yu, Xiaodong II-745 Yuen, Jenny II-692, III-28 Yuille, Alan II-759 Yun, Il Dong I-373 Zach, Christopher I-427 Zaharescu, Andrei II-30 Zebedin, Lukas IV-873
Zemel, Richard S. III-560 Zeng, Wei III-1 Zeng, Yun III-1 Zerubia, Josiane III-509 Zha, Hongbin III-642 Zhang, Jingdan I-711 Zhang, Lei II-706 Zhang, Li II-364 Zhang, Ling IV-116 Zhang, Shuwu II-664 Zhang, Tianhao I-725 Zhang, Wei II-720 Zhang, Weiwei IV-802 Zhang, Xiaoqin IV-396 Zhang, Yanning IV-284 Zhang, Zhang III-738 Zhang, Zhongfei IV-396 Zhang, Ziming IV-817 Zhao, Huijing III-642 Zhao, Ming II-733 Zhao, Rongchun IV-284 Zheng, Nanning IV-760 Zheng, Yefeng III-752 Zhou, Changyin IV-60 Zhou, Luping IV-719 Zhou, Shaohua Kevin I-711 Zhu, Guangyu II-745, III-752 Zhu, Jianke III-766 Zhu, Long (Leo) II-759 Zhu, Qihui II-774, IV-760 Zickler, Todd IV-859 Zitnick, C. Lawrence II-114, II-446 Zwanger, Michael IV-326
827