Lecture Notes in Computer Science Commenced Publication in 1973 Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Alfred Kobsa, University of California, Irvine, CA, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, University of Dortmund, Germany
Madhu Sudan, Massachusetts Institute of Technology, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Gerhard Weikum, Max-Planck Institute of Computer Science, Saarbruecken, Germany
4931
Gerald Sommer Reinhard Klette (Eds.)
Robot Vision
Second International Workshop, RobVis 2008
Auckland, New Zealand, February 18-20, 2008
Proceedings
Volume Editors

Gerald Sommer
Cognitive Systems Group, Institute of Computer Science
Christian-Albrechts-University of Kiel, 24098 Kiel, Germany
E-mail: [email protected]

Reinhard Klette
The University of Auckland, Department of Computer Science
Auckland, New Zealand
E-mail: [email protected]
Library of Congress Control Number: 2008920291
CR Subject Classification (1998): I.4, I.2.9-10, I.5
LNCS Sublibrary: SL 6 – Image Processing, Computer Vision, Pattern Recognition, and Graphics
ISSN 0302-9743
ISBN-10 3-540-78156-0 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-78156-1 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. Springer is a part of Springer Science+Business Media springer.com © Springer-Verlag Berlin Heidelberg 2008 Printed in Germany Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper SPIN: 12227426 06/3180 543210
Preface
In 1986, B.K.P. Horn published a book entitled Robot Vision, which actually discussed a wider field of subjects, basically addressing the field of computer vision, but introducing "robot vision" as a technical term. Since then, the interaction between computer vision and research on mobile systems (often called "robots", e.g., in an industrial context, but also including vehicles such as cars, wheelchairs, tower cranes, and so forth) has established a diverse area of research, today known as robot vision. Robot vision (or, more generally, robotics) is a fast-growing discipline, already taught as a dedicated program at university level. The term "robot vision" addresses any autonomous behavior of a technical system supported by visual sensory information. While robot vision focuses on the vision process, visual robotics is more directed toward control and automation. In practice, however, both fields interact strongly.

Robot Vision 2008 was the second international workshop in this series, counting a 2001 workshop with an identical name as the first. Both workshops were organized in close cooperation between researchers from New Zealand and Germany, and took place at The University of Auckland, New Zealand. Participants of the 2008 workshop came from Europe, the USA, South America, the Middle East, the Far East, Australia, and of course from New Zealand.

There were 59 submissions for the workshop. From these, 21 papers were selected for oral presentation and 15 papers were selected for poster presentation. There was one invited talk on the topic of 10 years of robotic soccer competitions, including a look into the future, presented by Jacky Baltes (Winnipeg). This talk also highlighted the strong relationship between computer graphics and robotics. Christoph Stiller (Karlsruhe) organized a special session on the Urban Challenge program. Several system demos were also part of the workshop. The workshop provided ample time for discussion and opportunities for enjoying the beauty of the Auckland region.

We thank the General Workshop Co-chairs, Bruce MacDonald (Auckland) and Uwe Franke (Sindelfingen), for their efforts in organizing this event. We thank all the members of the Program Committee and the local organizers for doing an excellent job. We acknowledge the help of Patrice Delmas and Robert Carter (both Auckland) regarding the Internet presence of the workshop. We thank Lennart Wietzke (Kiel) for his engagement in preparing this volume. Thanks also to the sponsors of this workshop: the ECE department of The University of Auckland and Daimler Germany.

February 2008
Gerald Sommer Reinhard Klette
Organization
General Workshop Co-chairs
Bruce MacDonald (The University of Auckland)
Uwe Franke (Daimler AG)
Program Co-chairs
Reinhard Klette (The University of Auckland)
Gerald Sommer (Kiel University)
Program Committee
Kean C. Aw (The University of Auckland, New Zealand)
Hernan Badino (University of Frankfurt, Germany)
Donald Bailey (Massey University, New Zealand)
Jacky Baltes (University of Manitoba, Canada)
John Barron (University of Western Ontario, Canada)
Steven Beauchemin (University of Western Ontario, Canada)
Anko Boerner (German Aerospace Center, Germany)
Phil Bones (Canterbury University, New Zealand)
Thomas Braeunl (The University of Western Australia, Australia)
Hans Burkhardt (Freiburg University, Germany)
Darius Burschka (German Aerospace Center, Germany)
Chaur-Chin Chen (National Tsing-Hua University, Taiwan)
Chi-Fa Chen (I-Shou University, Taiwan)
Chia-Yen Chen (National Chung Cheng University, Taiwan)
Sei-Wang Chen (National Taiwan Normal University, Taiwan)
Yung-Chang Chen (National Tsing Hua University, Taiwan)
Michael Cree (University of Waikato, New Zealand)
Patrice Delmas (The University of Auckland, New Zealand)
Joachim Denzler (Jena University, Germany)
Leo Dorst (University of Amsterdam, The Netherlands)
Uwe Franke (DaimlerChrysler, Germany)
Chiou-Shann Fuh (National Taiwan University, Taiwan)
Stefan Gehrig (DaimlerChrysler, Germany)
Richard Green (University of Canterbury, New Zealand)
Michael Hayes (Canterbury University, New Zealand)
Olaf Hellwich (TU Berlin, Germany)
Heiko Hirschmueller (German Aerospace Center, Germany)
Fay Huang (National Ilan University, Taiwan)
Bob Hodgson (Massey University, New Zealand)
Atsushi Imiya (Chiba University, Japan)
Kenichi Kanatani (Okayama University, Japan)
Reinhard Klette (The University of Auckland, New Zealand)
Carsten Knoeppel (DaimlerChrysler, Germany)
Reinhard Koch (Kiel University, Germany)
Andreas Koschan (The University of Tennessee, USA)
Richard Lane (Applied Research Associates New Zealand Ltd, New Zealand)
Wen-Nung Lie (National Chung Cheng University, Taiwan)
Bruce MacDonald (The University of Auckland, New Zealand)
Takashi Matsuyama (Kyoto University, Japan)
Brendan McCane (University of Otago, New Zealand)
Domingo Mery (Pontificia Universidad Catolica, Chile)
Chris Messom (Massey University, Albany, New Zealand)
Lee Middleton (Southampton University, UK)
Rick Millane (University of Canterbury, New Zealand)
John Morris (The University of Auckland, New Zealand)
Josef Pauli (Duisburg University, Germany)
John Pfaltz (University of Virginia, USA)
Stefan Posch (Halle University, Germany)
Ralf Reulke (Humboldt University, Germany)
Bodo Rosenhahn (Max Planck Institute, Germany)
Gerald Sommer (Kiel University, Germany)
Fridtjof Stein (DaimlerChrysler, Germany)
Christoph Stiller (University of Karlsruhe, Germany)
Karl Stol (The University of Auckland, New Zealand)
Robert Valkenburg (Industrial Research Ltd, New Zealand)
Friedrich Wahl (Braunschweig University, Germany)
Harald Winter (Hella Aglaia Mobile Vision GmbH, Germany)
Christian Woehler (DaimlerChrysler, Germany)
Burkhard Wuensche (The University of Auckland, New Zealand)
Shane Xie (The University of Auckland, New Zealand)
Local Organizing Committee
Bruce MacDonald (Chair)
Patrice Delmas
Reinhard Klette
Lynda Jones
Aruna Shandil
Rick Chen
Karl Stol
Sponsoring Institutions
The University of Auckland
Daimler AG
DAGM
Table of Contents
Motion Analysis

Dynamic Multiresolution Optical Flow Computation . . . . . 1
Naoya Ohnishi, Yusuke Kameda, Atsushi Imiya, Leo Dorst, and Reinhard Klette

Particle-Based Belief Propagation for Structure from Motion and Dense Stereo Vision with Unknown Camera Constraints . . . . . 16
Hoang Trinh and David McAllester

Stereo Vision

Integrating Disparity Images by Incorporating Disparity Rate . . . . . 29
Tobi Vaudrey, Hernán Badino, and Stefan Gehrig

Towards Optimal Stereo Analysis of Image Sequences . . . . . 43
Uwe Franke, Stefan Gehrig, Hernán Badino, and Clemens Rabe

Fast Line-Segment Extraction for Semi-dense Stereo Matching . . . . . 59
Brian McKinnon and Jacky Baltes

High Resolution Stereo in Real Time . . . . . 72
Hani Akeila and John Morris

Robot Vision

Stochastically Optimal Epipole Estimation in Omnidirectional Images with Geometric Algebra . . . . . 85
Christian Gebken and Gerald Sommer

Modeling and Tracking Line-Constrained Mechanical Systems . . . . . 98
B. Rosenhahn, T. Brox, D. Cremers, and H.-P. Seidel

Stereo Vision Local Map Alignment for Robot Environment Mapping . . . . . 111
David Aldavert and Ricardo Toledo

Markerless Augmented Reality for Robotic Helicoptor Applications . . . . . 125
Ian Yen-Hung Chen, Bruce MacDonald, and Burkhard Wünsche

Facial Expression Recognition for Human-Robot Interaction – A Prototype . . . . . 139
Matthias Wimmer, Bruce A. MacDonald, Dinuka Jayamuni, and Arpit Yadav

Computer Vision

Iterative Low Complexity Factorization for Projective Reconstruction . . . . . 153
Hanno Ackermann and Kenichi Kanatani

Accurate Image Matching in Scenes Including Repetitive Patterns . . . . . 165
Sunao Kamiya and Yasushi Kanazawa

Camera Self-calibration under the Constraint of Distant Plane . . . . . 177
Guanghui Wang, Q.M. Jonathan Wu, and Wei Zhang

Visual Inspection

An Approximate Algorithm for Solving the Watchman Route Problem . . . . . 189
Fajie Li and Reinhard Klette

Bird's-Eye View Vision System for Vehicle Surrounding Monitoring . . . . . 207
Yu-Chih Liu, Kai-Ying Lin, and Yong-Sheng Chen

Road-Signs Recognition System for Intelligent Vehicles . . . . . 219
Boguslaw Cyganek

Situation Analysis and Atypical Event Detection with Multiple Cameras and Multi-Object Tracking . . . . . 234
Ralf Reulke, Frederik Meysel, and Sascha Bauer

Urban Vision

Team AnnieWAY's Autonomous System . . . . . 248
Christoph Stiller, Sören Kammel, Benjamin Pitzer, Julius Ziegler, Moritz Werling, Tobias Gindele, and Daniel Jagszent

The Area Processing Unit of Caroline – Finding the Way through DARPA's Urban Challenge . . . . . 260
Kai Berger, Christian Lipski, Christian Linz, Timo Stich, and Marcus Magnor

Sensor Architecture and Data Fusion for Robotic Perception in Urban Environments at the 2007 DARPA Urban Challenge . . . . . 275
Jan Effertz

Poster Session

Belief-Propagation on Edge Images for Stereo Analysis of Image Sequences . . . . . 291
Shushi Guan and Reinhard Klette

Real-Time Hand and Eye Coordination for Flexible Impedance Control of Robot Manipulator . . . . . 303
Mutsuhiro Terauchi, Yoshiyuki Tanaka, and Toshio Tsuji

MFC – A Modular Line Camera for 3D World Modelling . . . . . 319
Börner Anko, Hirschmüller Heiko, Scheibe Karsten, Suppa Michael, and Wohlfeil Jürgen

3D Person Tracking with a Color-Based Particle Filter . . . . . 327
Stefan Wildermann and Jürgen Teich

Terrain-Based Sensor Selection for Autonomous Trail Following . . . . . 341
Christopher Rasmussen and Donald Scott

Generic Object Recognition Using Boosted Combined Features . . . . . 355
Doaa Hegazy and Joachim Denzler

Stereo Vision Based Self-localization of Autonomous Mobile Robots . . . . . 367
Abdul Bais, Robert Sablatnig, Jason Gu, Yahya M. Khawaja, Muhammad Usman, Ghulam M. Hasan, and Mohammad T. Iqbal

Robust Ellipsoidal Model Fitting of Human Heads . . . . . 381
J. Marquez, W. Ramirez, L. Boyer, and P. Delmas

Hierarchical Fuzzy State Controller for Robot Vision . . . . . 391
Donald Bailey, Yu Wua Wong, and Liqiong Tang

A New Camera Calibration Algorithm Based on Rotating Object . . . . . 403
Hui Chen, Hong Yu, and Aiqun Long

Visual Navigation of Mobile Robot Using Optical Flow and Visual Potential Field . . . . . 412
Naoya Ohnishi and Atsushi Imiya

Behavior Based Robot Localisation Using Stereo Vision . . . . . 427
Maria Sagrebin, Josef Pauli, and Johannes Herwig

Direct Pose Estimation with a Monocular Camera . . . . . 440
Darius Burschka and Elmar Mair

Differential Geometry of Monogenic Signal Representations . . . . . 454
Lennart Wietzke, Gerald Sommer, Christian Schmaltz, and Joachim Weickert

Author Index . . . . . 467
Dynamic Multiresolution Optical Flow Computation

Naoya Ohnishi¹, Yusuke Kameda¹, Atsushi Imiya¹, Leo Dorst², and Reinhard Klette³

¹ Chiba University, Japan
² The University of Amsterdam, The Netherlands
³ The University of Auckland, New Zealand
Abstract. This paper introduces a new algorithm for computing multiresolution optical flow, and compares this new hierarchical method with the traditional combination of the Lucas-Kanade method with a pyramid transform. The paper shows that the new method promises convergent optical flow computation. Aiming at accurate and stable computation of optical flow, the new method propagates results of computations from low resolution images to those of higher resolution. In this way, the resolution of the images used in the calculations increases along the sequence. The given input sequence of images defines the maximum possible resolution.
1 Introduction
We introduce in this paper a new class of algorithms for multi-resolution optical flow computation; see [6,8,10,11] for further examples. The basic concept of these algorithms was specified by Reinhard Klette, Leo Dorst, and Atsushi Imiya during a 2006 Dagstuhl seminar (in their working group). An initial draft of an algorithm was mathematically defined for answering the question: Assume that the resolution of an image sequence is increasing; does this allow us to compute optical flow accurately (i.e., observing a convergence to the true local displacement)? This question was raised by Reinhard Klette during these meetings. It is motivated by the general assumption: starting with low resolution images, and increasing the resolution, an algorithm should allow us to compute both small and large displacements of an object in a region of interest, and this even in a time-efficient way. Beyond engineering applications, the answer to this question might clarify a relationship between motion cognition and focusing on a field of attention. For instance, humans see a moving object in a scene as part of a general observation of the environment around us. If we realize that a moving object is important
for the cognition of the environment, we try to attend to the object, and start to watch it more closely "by increasing the resolution locally". The Lucas-Kanade method, see [7], combined with a pyramid transform (abbreviated LKP in the following), is a promising method for optical flow computation. This algorithm is a combination of variational methods and a multiresolution analysis of images. The initial step of the LKP is to use optical-flow computation in a low resolution layer for an initial estimation of flow vectors, to be used at higher resolution layers. For the LKP, we assume (very) high resolution images as the input sequence, and the pyramid transform is applied to each pair of successive images in this sequence to derive low resolution images. With the decrease in image sizes we simplify the computation of optical flow. The result of a computation at one level of the pyramid is, however, only an approximate solution. This approximate solution is used as initial data for the next level, to refine optical flow using a slightly higher resolution image pair. We use an implementation [3] of the LKP as available in OpenCV. There are possible extensions of the LKP. The first one is to adopt different optical flow computation methods at the different layers. For instance, we can apply the Horn-Schunck method, the Nagel-Enkelmann method, a correlation method, or a block-matching method, and they have particular drawbacks or benefits [2,5]. A second possible extension is to compute optical flow from pairs of images having different resolutions. In this paper, we address the second extension, and call it the Dagstuhl Algorithm for optical flow computation.
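The paper uses the OpenCV implementation [3] of the LKP. Purely as an illustration, and not as code from the paper, a typical call of the pyramidal Lucas-Kanade tracker looks as follows; the file names and parameter values are placeholders.

import cv2

# Two consecutive frames of the (high resolution) input sequence; placeholder file names.
prev_img = cv2.imread("frame_k.png", cv2.IMREAD_GRAYSCALE)
next_img = cv2.imread("frame_k_plus_1.png", cv2.IMREAD_GRAYSCALE)

# Feature points at which optical flow is computed.
prev_pts = cv2.goodFeaturesToTrack(prev_img, maxCorners=500,
                                   qualityLevel=0.01, minDistance=5)

# Pyramidal Lucas-Kanade: 'maxLevel' is the number of pyramid layers,
# 'winSize' is the neighbourhood D(x_ij) used in the least-squares fit.
next_pts, status, err = cv2.calcOpticalFlowPyrLK(
    prev_img, next_img, prev_pts, None,
    winSize=(5, 5), maxLevel=3)

flow = (next_pts - prev_pts)[status.ravel() == 1]   # per-feature displacement vectors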
2 Multi-resolution Optical Flow Computation
Lucas-Kanade Method. Let f(x - u, t + 1) and f(x, t) be the images at time t + 1 and t, respectively, with local displacements u of points x. For a spatio-temporal image f(x, t), with x = (x, y)^\top, the total derivative is given as

\frac{d}{dt} f = \frac{\partial f}{\partial x}\frac{dx}{dt} + \frac{\partial f}{\partial y}\frac{dy}{dt} + \frac{\partial f}{\partial t}\frac{dt}{dt}.   (1)

We identify u with the velocity \dot{x} = (\dot{x}, \dot{y})^\top = (\frac{dx}{dt}, \frac{dy}{dt})^\top, and call it the optical flow of the image f at point x = (x, y)^\top. Optical flow consistency [2,4,9], \frac{d}{dt} f = 0, implies that the motion of the point u = (\dot{x}, \dot{y})^\top is a solution of the singular equation f_x \dot{x} + f_y \dot{y} + f_t = 0 (which actually defines a straight line in velocity space). We assume that the sampled image f(i, j) exists in the M x M grid region, that is, we express f_{ij} as the value of f(i, j) at the point x_{ij} = (i, j)^\top \in Z^2. Furthermore, for the spatio-temporal image f(x, y, t), we set f_{ijk} = f(i, j, t)|_{t:=k}, where (i, j) \in Z^2 and k \ge 0. Since optical flow is a spatio-temporal vector function computed from the spatio-temporal function, we set the optical flow u_{ijk} = (u(i, j, t)|_{t:=k}, v(i, j, t)|_{t:=k})^\top. In this section, for a fixed k, we write u_{ijk} as u_{ij}.
Let D(x_{ij}) be the neighbourhood of the point x_{ij}. Assuming that the flow vector is constant in D(x_{ij}), for the computation of the optical flow vector u_{ij} of the point x_{ij} at time k, we define the minimisation problem

E(u) = \sum_{i=1}^{M}\sum_{j=1}^{M} E_{ij}(u_{ij}), \qquad E_{ij}(u_{ij}) = \int_{D(x_{ij})} \{(\nabla f^{\top} u_{ij} + \partial_t f)|_{t:=k}\}^2 \, dx,   (2)

for

u = vec(u_{11}, u_{12}, \cdots, u_{MM}),   (3)

where f_t = \partial_t f = \frac{\partial f}{\partial t}. The variation of E(u) with respect to u_{ij} for a fixed k is

\frac{\delta}{\delta u_{ij}} E(u) = 2(S_{ij} u_{ij} + s_{ij}), \quad 1 \le i, j \le M,   (4)

where

S_{ij} = \int_{D(x_{ij})} \nabla f \, \nabla f^{\top} dx \cong \sum_{(i,j)^{\top} \in D(x_{ij})} (\nabla f_{ij})(\nabla f_{ij})^{\top}, \qquad s_{ij} = (\partial_t f)_{ij} (\nabla f)_{ij}.   (5)

The matrix S_{ij} is the structure tensor of f for the point x_{ij} in the domain D(x_{ij}). Therefore, the optical flow u of the image f at time k is the solution of the system of linear equations

S u + s = 0,   (6)

where

S = Diag(S_{11}, S_{12}, \cdots, S_{MM}), \quad u = vec(u_{11}, u_{12}, \cdots, u_{MM})^{\top}, \quad s = vec(s_{11}, s_{12}, \cdots, s_{MM})^{\top}.   (7)

The gradient and the time derivative are numerically computed as

(\nabla f)_{ijk} = \tfrac{1}{2}\left((\nabla f)_{ijk} + (\nabla f)_{ij\,k+1}\right), \qquad
(\nabla f)_{ijk} \cong \tfrac{1}{2}\left(\tfrac{1}{2}\begin{pmatrix} f_{i+1\,j\,k} - f_{ijk} \\ f_{i\,j+1\,k} - f_{ijk}\end{pmatrix} + \tfrac{1}{2}\begin{pmatrix} f_{i+1\,j+1\,k} - f_{i\,j+1\,k} \\ f_{i+1\,j+1\,k} - f_{i+1\,j\,k}\end{pmatrix}\right),   (8)

(\partial_t f)_{ijk} \cong \tfrac{1}{4}\{(f_{ij\,k+1} - f_{ijk}) + (f_{i+1\,j\,k+1} - f_{i+1\,j\,k}) + (f_{i\,j+1\,k+1} - f_{i\,j+1\,k}) + (f_{i+1\,j+1\,k+1} - f_{i+1\,j+1\,k})\}.   (9)
These numerical approximations are valid if the pair of successive images is sampled at the same resolution.

Definition 1. For a sequence of temporal images f(x, y, t), t \ge 0, if the images f(i, j, t) for all t \ge 0 are sampled at the same resolution, then we say that the resolutions of the temporal images are synchronised.

We call this the synchronisation of the resolutions; it is a mathematical property of the resolutions of the image sequence.
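As a minimal numerical illustration of the least-squares solution (6) for a single window, here is our own Python/NumPy sketch (not code from the paper):

import numpy as np

def lk_flow_in_window(fx, fy, ft):
    """Solve S u + s = 0 of (6) for one neighbourhood D(x_ij).

    fx, fy, ft are arrays with the spatial and temporal derivatives of f
    at the pixels of the window (e.g. computed as in (8) and (9))."""
    S = np.array([[np.sum(fx * fx), np.sum(fx * fy)],
                  [np.sum(fx * fy), np.sum(fy * fy)]])   # structure tensor S_ij
    s = np.array([np.sum(fx * ft), np.sum(fy * ft)])     # vector s_ij
    if np.linalg.matrix_rank(S) < 2:                     # untextured region: flow undefined
        return None
    return -np.linalg.solve(S, s)                        # u_ij = -S^{-1} s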
Gaussian Pyramid. For the sampled function f_{ij} = f(i, j), where (i, j) \in Z^2, the Gaussian-pyramid transform and its dual transform are expressed as

R_2 f_{mn} = \sum_{i,j=-1}^{1} w_i w_j f_{2m-i,\,2n-j}, \qquad E_2 f_{mn} = 4 \sum_{i,j=-2}^{2} w_i w_j f_{\frac{m-i}{2},\,\frac{n-j}{2}},   (10)

where w_{\pm 1} = 1/4 and w_0 = 1/2, and the summation in the dual transform is taken over those (i, j) for which (m - i)/2 and (n - j)/2 are integers. These two operations involve the reduction and expansion of image sizes by the factor 2. We write the n-times application of R_2 as f^{n+1} = R_2^{n+1} f = R_2(R_2^n f). Since the operator R_2 shrinks a k x k image into a k/2 x k/2 image, the maximum number of layers for the transform is n_{max} = [\log_2 N] for an N x N image.

Gaussian Pyramid Optical Flow Computation. Setting

f(x, y, 0), f(x, y, 1), f(x, y, 2), \cdots, f(x, y, t), \cdots   (11)

to be an image sequence whose resolutions are synchronised, we compute the nth-layer image sequence as the sequence of f^n(x, y, t) = R_2^n f(x, y, t). There, the resolutions of f^n(x, y, t) are synchronised. Setting u^n_{ij} = u^n(i, j) = (u^n(i, j, t)|_{t:=k}, v^n(i, j, t)|_{t:=k})^\top, for a fixed k, to be the optical flow of the nth-layer image computed from f(i, j, k) and f(i, j, k+1), the optical flow of the (n-1)th layer is computed from samples of the image pair

f^{(n-1)}(x, y, k), \quad f^{(n-1)}(x - u^n_k, y - v^n_k, k+1),

for u^n_k = (u^n_k, v^n_k)^\top = E_2(u^n) = (E_2(u^n), E_2(v^n))^\top. This operation assumes the relation

u^{n-1} = E_2(u^n) + d^{n-1}.   (12)

If the value |u^{n-1} - E_2(u^n)| is small, this relation provides a good update operation for the computation of optical flow on the finer grid from the coarse grid. These operations are described as Algorithm 1.
Algorithm 1. The Lucas-Kanade Method with Gaussian Pyramid
Data: f_k^N, ..., f_k^0
Data: f_{k+1}^N, ..., f_{k+1}^0
Result: optical flow u_k^0
n := N;
while n != 0 do
    u_k^n := u(f_k^n, f_{k+1}^n);
    u_k^n := E_2(u_k^n);
    f_{k+1}^{n-1} := W(f_{k+1}^{n-1}, u_k^n);
    d_k^{n-1} := u(f_k^{n-1}, f_{k+1}^{n-1});
    u_k^{n-1} := u_k^n + d_k^{n-1};
    n := n - 1
end
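Purely for illustration (our sketch, not code from the paper), the pyramid operators R_2 and E_2 used in Algorithm 1 can be realised with the separable kernel w = (1/4, 1/2, 1/4); boundary handling here is simplified by wrap-around.

import numpy as np

W = np.array([0.25, 0.5, 0.25])          # w_{-1}, w_0, w_{+1}
K = np.outer(W, W)                        # separable 3x3 smoothing kernel

def reduce2(f):
    """R_2: smooth with the kernel and keep every second sample."""
    g = np.zeros_like(f, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            g += K[di + 1, dj + 1] * np.roll(np.roll(f, di, axis=0), dj, axis=1)
    return g[::2, ::2]

def expand2(f):
    """E_2: insert zeros between samples, smooth, and scale by 4."""
    up = np.zeros((2 * f.shape[0], 2 * f.shape[1]))
    up[::2, ::2] = f
    g = np.zeros_like(up)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            g += K[di + 1, dj + 1] * np.roll(np.roll(up, di, axis=0), dj, axis=1)
    return 4.0 * g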
3 Spatio-temporal Multiresolution Optical Flow
In this section, we develop an algorithm for optical flow computation from an image sequence

f^n(x, y, 0), f^{(n-1)}(x, y, 1), \cdots, f^1(x, y, n-1), f^0(x, y, n), \cdots,   (13)

whose resolutions are not synchronised. In this image sequence, the resolution of the images zooms up over time. Therefore, it is impossible to directly use the original pyramid-based multiresolution optical flow computation, since we cannot derive layered multiresolution images for all images in the sequence. Equations (8) and (9) require, for a stable numerical differentiation in the computation of optical flow, that the pair of successive images is sampled at the same resolution. However, the zooming sequence of Equation (13) does not satisfy this assumption. Therefore, we use the pair

f^n(x, y, k), \quad f^n(x, y, k+1) := R_2 f^{(n-1)}(x, y, k+1),   (14)

by reducing the resolution of f^{(n-1)}(x, y, k+1) to synchronise the resolutions of the pair of images for the computation of Equations (8) and (9). Furthermore, we use the optical flow vector E_2(u^{(n+1)}_{ij\,k-1}) computed from

f^{(n+1)}(x, y, k-1), \quad f^{n+1}(x, y, k) := R_2 f^{n}(x, y, k),   (15)

as the marker for the computation of u^{(n)}_{ij\,k+1}. Using this hierarchical resolution synchronisation, Algorithm 1 leads to Algorithm 2, which computes optical flow using f^n_k and f^{(n-1)}_{k+1}, synchronising the resolution of the image pairs for the computation of optical flow. We call Algorithm 2 the Dagstuhl Algorithm for optical flow computation, or just the Dagstuhl Algorithm. Figure 1 shows the time-chart for this algorithm: (a) shows resolutions of images which increase with respect to time. This configuration is required to solve
Algorithm 2. The Optical Flow Computation for Zooming Image Sequences
Data: f_k^N, f_{k+1}^{N-1}, ..., f_{k+N}^0, 0 <= k <= N
Result: optical flow u_N^0
n := N; k := 0;
while n != 0 do
    f_{k+1}^{n-1} := W(f_{k+1}^{n-1}, u_k^n);
    d_k^{n-1} := u(f_k^{n-1}, f_{k+1}^{n-1}); u_k^n := E_2(u_k^n);
    u_k^{n-1} := u_k^n + d_k^{n-1};
    n := n - 1; k := k + 1
end
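The following Python-style sketch (ours, not the authors' implementation) outlines the main loop of Algorithm 2 for a zooming sequence; lk_flow, warp, reduce2 and expand2 stand for the operators u(., .), W(., .), R_2 and E_2 and are assumed to be supplied elsewhere.

def dagstuhl_flow(frames, lk_flow, warp, reduce2, expand2):
    """Optical flow for a zooming sequence: frames[k+1] has twice the
    resolution of frames[k]; each pair is synchronised by reducing the
    finer frame before the flow update."""
    N = len(frames) - 1
    u = None                                   # flow estimate on the current layer
    for k in range(N):
        coarse, fine = frames[k], frames[k + 1]
        fine_reduced = reduce2(fine)           # R_2: synchronise the resolutions of the pair
        if u is None:
            u = lk_flow(coarse, fine_reduced)  # coarsest layer: direct estimate
        else:
            u = expand2(u)                     # E_2: bring the previous estimate to this layer
            d = lk_flow(coarse, warp(fine_reduced, u))   # residual flow d_k^{n-1}
            u = u + d
    return u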
Fig. 1. Two Spatio-Temporal Resolution Conversions for Optical Flow Computation. (a) Resolutions of images increase with respect to time. (b) Traditional pyramid transform based optical flow computation. (c) Signal flow graph of the Gaussian Pyramid Transform Based Dagstuhl Algorithm. (d) Signal flow graph of Gaussian pyramid transform based optical flow computation. (e) Signal flow graph of (c) generates image pairs with synchronised resolutions to compute optical flow using variational method. (f) Images used by the signal flow graph of the Gaussian Pyramid Transform based algorithm for the computation of optical flow.
the problem as stated by Reinhard Klette. In the traditional pyramid transform based optical flow computation, shown in (b), at each time, the algorithm requires many layers of images. (c) is the signal flow graph of the Gaussian Pyramid
Transform Based Dagstuhl algorithm. (d) is the signal flow graph of Gaussian pyramid transform based optical flow computation. (e) The signal flow graph of the Gaussian Pyramid Transform based Dagstuhl algorithm generates image pairs with synchronised resolutions to compute optical flow using a variational method, whereas, as shown in (f), in the Gaussian Pyramid Transform based algorithm optical flow is directly computed from multiresolution image pairs.
4 Mathematical Properties of Algorithms
Minimisation as a Series of Convex Problems. The energy functional

J(u, n, k; f) = \sum_{i,j=1}^{m} \int_{D_n(x_{ij})} \{(\nabla f^{n\top} u^n_{ijk} + \partial_t f^n)|_{t:=k}\}^2 \, dx   (16)

is computed from the pair R_2^n f(x, y, k), R_2^n f(x, y, k+1), as described in the previous section. For this energy functional, we define

u^n_k = \arg\min_u J(u, n, k; f).   (17)

The optical flow u^n is the solution of the equation

S^n_k u^n_k + s^n_k = 0,   (18)

where

S^n_k = Diag(S^n_{11k}, S^n_{12k}, \cdots, S^n_{M(n)M(n)k}), \quad u^n_k = vec(u^n_{11k}, u^n_{12k}, \cdots, u^n_{M(n)M(n)k}), \quad s^n_k = vec(s_{11k}, s_{12k}, \cdots, s_{M(n)M(n)k}),

for M(n) the maximum grid number on the nth layer. Multi-resolution optical-flow computation establishes an algorithm which guarantees the relation

\lim_{n \to 0} u^n_{ijt} = u_{ijt}.   (19)

Since J(u, n, t; f) is a convex functional for a fixed f, this functional satisfies the relation

J(u^n, n, t; f) < J(u, n, t; f), \quad \text{for } u^n \ne u.   (20)

Therefore, we have the relation

J(u^{(n-1)}, n-1, t; f) \le J(u^n, n-1, t; f).   (21)

This relation implies that it is possible to generate a sequence which reaches E(u^0, 0, t; f) from E(u^n, n-1, t; f), since

J(u^{(n-1)}, n-1, t; f) \le J(E_2(u^n), n-1, t; f)
J(u^{(n-2)}, n-2, t; f) \le J(E_2(u^{(n-1)}), n-2, t; f)
\vdots
J(u^1, 1, t; f) \le J(E_2(u^2), 1, t; f)
J(u^0, 0, t; f) \le J(E_2(u^1), 0, t; f)   (22)
Fig. 2. Convergence Geometry. (a) The minimum of a fixed resolution derives an approximation of the minimum for the next fine-resolution for optical flow computation. (b) In classical multi-resolution optical flow computation the approximation of solutions converges in each frame. (c) In the Dagstuhl algorithm, the approximation of solutions converges in across the frames in the image sequence.
for a fixed f, setting u^{(n-1)} = E_2(u^n) + d^{(n-1)} for a fixed time t, where |d^{(n-1)}| is much smaller than |u^{(n-1)}|.

Convergence Conditions. We introduce the following definitions for the optical-flow vector u.

Definition 2. If the flow vectors satisfy the condition |u|_m \le \alpha, for |u|_m = \max_{x \in R^2} |u(x)|, we call the flow vector \alpha-stationary.

Definition 3. For a sufficiently small positive constant \beta, if the optical-flow vector u satisfies the relation |\frac{\partial u}{\partial t}|_m \le \beta, we call the flow field \beta-time stationary. Especially, if \beta = 0, the flow field is stationary in the temporal domain.

Setting |\Omega| to be the area measure of a tessellated region, a possible selection of \alpha is \alpha = c\sqrt{|\Omega|}, 0 < c \le 1, that is, \alpha is in the order of \sqrt{|\Omega|}. In reference [1], \beta is also assumed to be \beta = c\sqrt[3]{|\Omega| \times T}, 0 < c \le 1, where T stands for the time frame interval and is usually set to 1. These conditions are mathematical descriptions of the temporal smoothness of the optical flow vector at each point. Next we introduce the spatial smoothness assumption on the optical flow vectors.

Definition 4. For a sufficiently small positive constant \gamma, if the optical-flow vector u satisfies the relation |\nabla u| = \sqrt{tr\, \nabla u \nabla u^{\top}} \le \gamma in the domain \Omega, we call the flow field \gamma-spatial stationary.

Furthermore, we introduce the cross-layer relation of the flow vectors.

Definition 5. For \bar{u}^n = \frac{1}{|\Omega|} \int_{\Omega} u^n dx, if

|u^n - \bar{u}^n|_m \le \frac{\delta}{3}, \quad |\bar{u}^{(n-1)} - E_2(\bar{u}^n)|_m \le \frac{\delta}{3}, \quad |u^{(n-1)} - \bar{u}^{(n-1)}|_m \le \frac{\delta}{3},   (23)

we call u^n \delta-layer stationary.
From Definition 2, we have the relation |u^n - \bar{u}^n|_m \le 2\alpha. Therefore, the second condition controls the interlayer continuity of the flow vectors. It is possible to assume \delta = c\sqrt{|\Omega|}, 0 < c \le 1. Figure 3 shows geometrical interpretations of these definitions: (a) the displacement of \alpha-stationary optical flow is small; (b) \beta-time-stationary optical flow is smooth in the time domain; (c) \gamma-spatial-stationary optical flow is smooth; (d) \delta-layer-stationary optical flow is smooth across the layers.

Setting \Delta_n u^n(x, t) = E_2(u^n(x, t)) - u^{(n-1)}(x, t), we have the following lemma.

Lemma 1. If u^n is \delta-layer stationary, the relation |\Delta_n u^n(x, t)|_m \le \delta is satisfied.

If u^{(n-1)} is \alpha-stationary, that is, |u^{(n-1)}| \le \alpha, we have the relation |u^n|_m \le \alpha. Since

|u^{(n-1)} - E_2(u^n)|_m \le |u^{(n-1)} - \bar{u}^{(n-1)}|_m + |\bar{u}^{(n-1)} - E_2(\bar{u}^n)|_m + |E_2(\bar{u}^n) - E_2(u^n)|_m,

we have the relation |u^{(n-1)} - E_2(u^n)|_m \le \delta. This relation leads to the next theorem.

Theorem 1. If the optical flow vectors are \delta-layer stationary, the traditional pyramid algorithm converges.
Fig. 3. Convergence Conditions for Optical Flow on the layers. (a) The displacement of α-stationary optical flow is small. (b) The β-stationary optical flow is smooth in time domain. (c) The γ-stationary optical flow is smooth. (d) The δ-stationary optical flow is smooth across the layers.
Fig. 4. Image Sequence 1. Marbled Block.
Fig. 5. Image Sequence 2. Yosemite.
Furthermore, since

u_{t+1}^{(n-1)} - E_2(u_t^n) = u_{t+1}^{(n-1)} - E_2(u_{t+1}^n) + E_2(u_{t+1}^n) - E_2(u_t^n) = -\Delta_n u_{t+1}^n + E_2\!\left(\frac{\partial}{\partial t} u_t^n\right),   (24)

we have the relation

|u_{t+1}^{(n-1)} - E_2(u_t^n)|_m \le \delta + \left| E_2\!\left(\frac{\partial}{\partial t} u_t^n\right) \right|_m \le \beta + \delta.   (25)

This relation leads to the next theorem.

Theorem 2. Let R_2 be an image transform that derives a low resolution image R_2 f from an image f. If \delta is sufficiently small and the motion is \beta-time stationary for a sufficiently small constant \beta, then Algorithm 1 proposed in the previous section converges.
This theorem guarantees the convergence of the multi-resolution optical flow computation for a Gaussian pyramid transform. In the case of the Dagstuhl Algorithm, the energy functional is computed from the pair f^n(x, y, t), R_2 f(x, y, t+1). Theorem 2 guarantees that if the motion is stationary, this energy yields the sequence

J(u_1^{(n-1)}, n-1, 1; f) \le J(E_2(u_0^n), n-1, 0; f)
J(u_2^{(n-2)}, n-2, 2; f) \le J(E_2(u_1^{(n-1)}), n-2, 1; f)
\vdots
J(u_{n-1}^1, 1, n-1; f) \le J(E_2(u_{n-2}^2), 1, n-2; f)
J(u_n^0, 0, n; f) \le J(E_2(u_{n-1}^1), 0, n-1; f)   (26)
This series of inequalities guarantees the convergence of the Dagstuhl Algorithm.
Fig. 6. Results of Computed Optical Flow Sequence for Image Sequence 1. (a) Result by the Classical Pyramid Method. (b) Results by Dagstuhl Algorithm. (c) Difference of optical flow field of (a) and (b).
Fig. 7. Results of Computed Optical Flow Sequence for Image Sequence 2. (a) Result by the Classical Pyramid Method. (b) Results by Dagstuhl Algorithm. (c) Difference of optical flow field of (a) and (b).
5 Numerical Examples
The performance is tested on the image sequences Marbled Block and Yosemite. Figures 6 and 7 show the results for the Marbled Block sequence and the Yosemite sequence; the original sequences are shown in Figures 4 and 5. In these examples, N is three, the size of the window is 5 x 5, and the algorithms extracted the flow vectors whose lengths are longer than 0.03. The two methods yield almost the same results in appearance. We analysed the mathematical properties of these algorithms in Section 4. In the Marbled Block sequence, the LK method failed to compute optical flow in the background region. Since there is no texture pattern in this region, the rank of the 5 x 5 spatio-temporal structure tensor is 1. Therefore, it is impossible to compute the optical flow vector using the LK method. However, the algorithm detected the optical flow in textured regions, since the rank of the 3 x 3 spatio-temporal structure tensor is three in these regions.
Fig. 8. Resolution Time Chart. (a) Dynamic Dagstuhl Algorithm. (b) Layer-Time Chart.
Let r_n = |u^n - \hat{u}^n| / |u^n| be the residual of the two flow fields, for flow vectors \hat{u}^n computed by the Dagstuhl Algorithm, D = \{\hat{u}^n\}_{i=1}^N, and flow vectors u^n computed by the Classical Algorithm, C = \{u^n\}_{i=1}^N. The statistical analysis shows that the Dagstuhl Algorithm numerically derives acceptable results.
Table 1. LKD and LKP: R

sequence   max        mean      variance
block1     29.367246  0.126774  0.734993
Yosemite   3.830491   0.033199  0.035918
As an extension, we have evaluated the performance of the Dagstuhl Algorithm for image sequences with varying resolutions. Figure 8 shows the diagram of the resolutions of the images along the time axis. In this sequence, the resolutions of the images dynamically increase and decrease; this corresponds to a camera system that is focusing and defocusing, so that the resolution of the camera system is unstable over time. In the extended algorithm, optical flow is computed using the pairs

f^n(x, y, k), R_2 f^{(n-1)}(x, y, k+1), \qquad R_2 f^{(n-1)}(x, y, k), f^n(x, y, k+1),   (27)

depending on the resolutions of the images, to synchronise the resolution of the images. We have compared the optical flow vectors of the final image in the sequence with the optical-flow vectors computed by the usual pyramid method. Figure 9 shows the comparison of optical flow: (a) is the result of the Dynamic Dagstuhl Algorithm; (b) is the result of the pyramid algorithm for the final image in the sequence; (c) shows the subtraction of (b) from (a). Table 2 shows the length and angle difference between the optical flow vectors computed by the two methods for the final frame of the sequence. These results show that the Dagstuhl
Fig. 9. Comparison of Optical Flow. (a) Result by the Dagstuhl Algorithm. (b) Result by the pyramid algorithm. (c) Subtraction of (b) from (a).
Table 2. LKD and LKP Length-Error and Angle-Error

Error    max         mean      variance
Length   10.730187   0.247997  1.035227
Angle    171.974622  4.664410  9.180621
Algorithm, which was originally developed for the resolution-zooming sequence, is also valid for a sequence of images whose resolutions are dynamically changing.
6 Conclusions
We introduced a spatio-temporal multiresolution optical flow computation algorithm as an extension of the Lucas-Kanade method with the pyramid transform. The extension is to compute optical flow from a pair of images of different resolutions. If the resolution of each image is higher than that of the previous one, the resolution of the images in the sequence increases. We have addressed this extension.
We developed an algorithm to compute optical flow from an image sequence whose resolutions are time dependent.
References
1. Anandan, P.: A computational framework and an algorithm for the measurement of visual motion. Int. J. Comput. Vision 2, 283–310 (1989)
2. Barron, J.L., Fleet, D.J., Beauchemin, S.S.: Performance of optical flow techniques. Int. J. Comput. Vision 12, 43–77 (1995)
3. Bouguet, J.: Pyramidal implementation of the Lucas-Kanade feature tracker. Intel Corporation, Microprocessor Research Labs, OpenCV Documents (1999)
4. Horn, B.K.P., Schunck, B.G.: Determining optical flow. Artificial Intelligence 17, 185–204 (1981)
5. Handschack, P., Klette, R.: Quantitative comparisons of differential methods for measuring image velocity. In: Proc. Aspects of Visual Form Processing, Capri, pp. 241–250 (1994)
6. Hwang, S.-H., Lee, U.K.: A hierarchical optical flow estimation algorithm based on the interlevel motion smoothness constraint. Pattern Recognition 26, 939–952 (1993)
7. Lucas, B.D., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: Proc. Imaging Understanding Workshop, pp. 121–130 (1981)
8. Mahzoun, M.R., et al.: A scaled multigrid optical flow algorithm based on the least RMS error between real and estimated second images. Pattern Recognition 32, 657–670 (1999)
9. Nagel, H.-H.: On the estimation of optical flow: Relations between different approaches and some new results. Artificial Intelligence 33, 299–324 (1987)
10. Ruhnau, P., et al.: Variational optical flow estimation for particle image velocimetry. Experiments in Fluids 38, 21–32 (2005)
11. Weber, J., Malik, J.: Robust computation of optical flow in a multi-scale differential framework. Int. J. Comput. Vision 14, 67–81 (1995)
Particle-Based Belief Propagation for Structure from Motion and Dense Stereo Vision with Unknown Camera Constraints

Hoang Trinh and David McAllester

Toyota Technological Institute at Chicago, 1427 E. 60th Street, Chicago, Illinois 60637
{ntrinh,mcallester}@tti-c.org
http://www.tti-c.org
Abstract. In this paper, we present a specific use of the Particle-based Belief Propagation (PBP) algorithm as an approximation scheme for the joint distribution over many random variables with very large or continuous domains. After formulating the problem to be solved as a probabilistic graphical model, we show that by running loopy Belief Propagation on the whole graph, in combination with an MCMC method such as Metropolis-Hastings sampling at each node, we can approximately estimate the posterior distribution of each random variable over the state space. We describe in detail the application of the PBP algorithm to the problem of sparse Structure from Motion and to dense Stereo Vision with unknown camera constraints. Experimental results for both cases are demonstrated. An experiment with a synthetic structure from motion arrangement shows that its accuracy is comparable with the state of the art while allowing estimates of state uncertainty in the form of an approximate posterior density function.

Keywords: Belief Propagation, Particle filter, Structure from Motion, Dense Stereo Vision.
1 Introduction

Graphical models are considered a powerful tool to represent structures in distributions over many random variables. Such a structure can then be used to efficiently compute or approximate many quantities of interest such as the posterior modes, means, or marginals of the distribution, often using "message-passing" algorithms such as belief propagation [1]. Traditionally, most such work has focused on systems of many variables, each of which has a relatively small state space (number of possible values), or particularly nice parametric forms (such as jointly Gaussian distributions). For systems with continuous-valued variables, or discrete-valued variables with very large domains, one possibility is to reduce the effective state space through gating, or discarding low-probability states [2,3,4], or through random sampling [5,6,7,8,9]. The best-known example of the latter technique is particle filtering, defined on Markov chains, in which each distribution is represented using a finite collection of samples, or particles. It is therefore natural to consider generalizations of particle filtering applicable
to more general graphs ("particle" belief propagation); several variations have thus far been proposed, corresponding to different choices for certain fundamental questions. As an example, consider the question of how to represent the messages computed during inference using particles. Broadly speaking, one might consider two possible approaches: to draw a set of particles for each message in the graph [5,7,8], or to create a set of particles for each variable, e.g., representing samples from the posterior marginal [6]. This decision is closely related to the choice of proposal distribution in particle filtering; indeed, choosing better proposal distributions from which to draw the samples, or moving the samples via Markov chain Monte Carlo (MCMC) to match subsequent observations, comprises a large part of modern work on particle filtering [10,11,12,13,14]. Either method can be made asymptotically consistent, i.e., will produce the correct answer in the limit as the number of samples becomes infinite. However, consistency is a weak condition; fundamentally, we are interested in the behavior of particle belief propagation for relatively small numbers of particles, ensuring computational efficiency. So far, little theory describes the finite sample behavior of these algorithms. In this work, we demonstrate the application of the Particle-based Belief Propagation algorithm, most closely related to that described in [6], to the problem of Structure from Motion and to Dense Stereo Vision with unknown camera constraints. Experimental results provide evidence of the convergence in accuracy of such a generic PBP algorithm with finite samples. In addition to accuracy, the PBP algorithm also allows us to estimate various properties of the distribution, and thereby to represent state uncertainty, although at some computational cost.
2 Review of Belief Propagation

Let G be an undirected graph consisting of nodes V = {1, ..., k} and edges E, and let \Gamma_s denote the set of neighbors of node s in G, i.e., the set of nodes t such that {s, t} is an edge of G. In a probabilistic graphical model, each node s \in V is associated with a random variable X_s taking on values in some domain, X_s, and the structure of the graph is used to represent the Markov independence structure of the joint distribution p(X_1, ..., X_n). Specifically, the Hammersley-Clifford theorem [15] tells us that if p(.) > 0, the distribution factors into a product of potential functions \psi_c(.) defined on the cliques (fully connected subsets) of G. For simplicity of notation, we further assume that these clique functions are pairwise. Now we can assume that each node s and edge {s, t} are associated with potential functions \Psi_s and \Psi_{s,t} respectively, and given these potential functions we define a probability distribution on assignments of values to nodes as

P(\vec{x}) = \frac{1}{Z} \prod_s \Psi_s(\vec{x}_s) \left( \prod_{\{s,t\}\in E} \Psi_{s,t}(\vec{x}_s, \vec{x}_t) \right).   (1)

Here \vec{x} is an assignment of values to all k variables, \vec{x}_s is the value assigned to X_s by \vec{x}, and Z is a scalar chosen to normalize the distribution P (also called the partition function). We consider the problem of computing marginal probabilities, defined by
P_s(x_s) = \sum_{\vec{x} : \vec{x}_s = x_s} P(\vec{x}).   (2)
In the case where G is a tree and the sets X_s are small, the marginal probabilities can be computed efficiently by belief propagation [1]. This is done by computing messages m_{t\to s}, each of which is a function on the state space of the target node, X_s. These messages can be defined recursively as

m_{t\to s}(x_s) = \sum_{x_t \in X_t} \Psi_{t,s}(x_t, x_s)\,\Psi_t(x_t) \prod_{u \in \Gamma_t \setminus s} m_{u\to t}(x_t).   (3)
When G is a tree this recursion is well founded (loop-free) and the above equation uniquely determines the messages. We will use an unnormalized belief function defined as follows:

B_s(x_s) = \Psi_s(x_s) \prod_{t \in \Gamma_s} m_{t\to s}(x_s).   (4)
When G is a tree the belief function is proportional to the marginal probability P_s defined by (2). It is sometimes useful to define the "pre-message" M_{t\to s} as follows, for x_t \in X_t:

M_{t\to s}(x_t) = \Psi_t(x_t) \prod_{u \in \Gamma_t \setminus s} m_{u\to t}(x_t).   (5)
Note that the pre-message M_{t\to s} defines a weighting on the state space of the source node X_t, while the message m_{t\to s} defines a weighting on the state space of the destination, X_s. We can then re-express (3)–(4) as

m_{t\to s}(x_s) = \sum_{x_t \in X_t} \Psi_{t,s}(x_t, x_s)\, M_{t\to s}(x_t), \qquad B_t(x_t) = M_{t\to s}(x_t)\, m_{s\to t}(x_t).
Although we develop our results for tree–structured graphs, it is common to apply belief propagation to graphs with cycles as well (“loopy” belief propagation). We note connections to and differences for loopy BP in the text where appropriate. Loopy belief propagation maintains a state which stores a numerical value for each message value mt→s (xs ). In loopy BP one repeatedly updates one message at a time. More specifically, one selects a directed edge t → s and updates all values mt→s (xs ) using equation (3). Loopy BP often involves a large number of message updates. As the number of updates increases message values typically diverge — usually tending toward zero but possibly toward infinity if the potentials Ψs and Ψt,s are large. This typically results in floating point underflow or overflow. For reasons of numerical stability, it is common to normalize each message mt→s so that it has unit sum. However, such normalization of messages has no other effect on the (normalized) belief functions (4). We can normalize an entire state by normalizing each message. It is important to note that normalization commutes with update. More specifically, normalizing a message, updating, and then normalizing the resulting state, is the same as updating (without normalizing) and then normalizing the resulting state. This implies that any normalized state of the system can be viewed as the result of running unnormalized updates and
then normalizing the resulting state only at the end. Thus for conceptual simplicity in developing and analyzing particle belief propagation we avoid any explicit normalization of the messages; such normalization can be included in the algorithms in practice. Additionally, for reasons of computational efficiency it is helpful to use the alternative expression m_{t\to s}(x_s) = \sum_{x_t \in X_t} \Psi_{t,s}(x_t, x_s)\, B_t(x_t) / m_{s\to t}(x_t) when computing the messages. By storing and updating the belief values B_t(x_t) incrementally as incoming messages are re-computed, one can significantly reduce the number of operations required. Although our development of particle belief propagation uses the update form (3), this alternative formulation can be applied to improve its efficiency as well.
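As an illustration of the discrete message update (3) and belief (4), here is our own sketch of a loopy BP iteration over a pairwise model with small discrete state spaces; it is not code from the paper, and node names and data layout are our assumptions.

import numpy as np

def loopy_bp(unary, pairwise, edges, n_iter=50):
    """unary[s]: array of Psi_s over the states of node s.
    pairwise[(s, t)]: matrix Psi_{s,t}[x_s, x_t] for each edge (s, t) in 'edges'.
    edges: list of undirected edges given as tuples (s, t)."""
    phi, msgs, neighbours = {}, {}, {s: [] for s in unary}
    for (s, t) in edges:
        phi[(s, t)] = np.asarray(pairwise[(s, t)], dtype=float)
        phi[(t, s)] = phi[(s, t)].T
        msgs[(s, t)] = np.ones(len(unary[t]))        # message m_{s->t}, initialised uniform
        msgs[(t, s)] = np.ones(len(unary[s]))
        neighbours[s].append(t)
        neighbours[t].append(s)
    for _ in range(n_iter):
        for (t, s) in list(msgs):                    # update m_{t->s} via equation (3)
            pre = np.asarray(unary[t], dtype=float).copy()   # pre-message M_{t->s}(x_t)
            for u in neighbours[t]:
                if u != s:
                    pre = pre * msgs[(u, t)]
            m = phi[(t, s)].T @ pre                  # sum over x_t of Psi_{t,s}(x_t, x_s) M_{t->s}(x_t)
            msgs[(t, s)] = m / m.sum()               # normalise for numerical stability
    beliefs = {}
    for s in unary:                                  # belief (4), normalised
        b = np.asarray(unary[s], dtype=float).copy()
        for t in neighbours[s]:
            b = b * msgs[(t, s)]
        beliefs[s] = b / b.sum()
    return beliefs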
3 Particle Belief Propagation

We now consider the case where |X_s| is too large to enumerate in practice and define a generic particle (sample) based algorithm. This algorithm essentially corresponds to a non-iterative version of the method described in [6]. The procedure samples a set of particles x_s^{(1)}, ..., x_s^{(n)} with x_s^{(i)} \in X_s at each node s of the network¹, drawn from a sampling distribution (or weighting) W_s(x_s) > 0 (corresponding to the proposal distribution in particle filtering). First we note that (3) can be written as the following importance-sampling corrected expectation:

m_{t\to s}(x_s) = E_{x_t \sim W_t}\!\left[ \frac{\Psi_{s,t}(x_s, x_t)\,\Psi_t(x_t) \prod_{u\in\Gamma_t\setminus s} m_{u\to t}(x_t)}{W_t(x_t)} \right].   (6)
Given a sample x_t^{(1)}, ..., x_t^{(n)} of points drawn from W_t we can estimate m_{t\to s}(x_s^{(i)}) as follows, where w_s^{(i)} = W_s(x_s^{(i)}):

\hat{m}^{(i)}_{t\to s} = \frac{1}{n} \sum_{j=1}^{n} \frac{\Psi_{t,s}(x_t^{(j)}, x_s^{(i)})\,\Psi_t(x_t^{(j)})}{w_t^{(j)}} \prod_{u\in\Gamma_t\setminus s} \hat{m}^{(j)}_{u\to t}.   (7)
Equation (7) represents a finite sample estimate for (6). Alternatively, (7) defines a belief propagation algorithm where messages are defined on particles rather than the entire set X_s. As in classical belief propagation, for tree structured graphs and fixed particle locations there is a unique set of messages satisfying (7). Equation (7) can also be applied for loopy graphs (again observing that message normalization can be conceptually ignored). In this simple version, the sample values x_s^{(i)} and weights w_s^{(i)} remain unchanged as messages are updated. We now show that equation (7) is consistent—it agrees with (3) in the limit as n \to \infty. For any finite sample, define the particle domain \hat{X}_s and the count c_s(x) for x \in X_s as
¹ It is also possible to sample a set of particles {x_{st}^{(i)}} for each message m_{s\to t} in the network from potentially different distributions W_{st}(x_s), for which our analysis remains essentially unchanged. However, for notational simplicity and to be able to apply the more computationally efficient message expression described in Section 2, we use a single distribution and sample set for each node.
\hat{X}_s = \{x_s \in X_s : \exists k\; x_s^{(k)} = x_s\}, \qquad c_s(x_s) = |\{k : x_s^{(k)} = x_s\}|.

Equation (7) has the property that if x_s^{(i)} = x_s^{(i')} then \hat{m}_{t\to s}^{(i)} = \hat{m}_{t\to s}^{(i')}; thus we can rewrite (7) as

\hat{m}_{t\to s}(x_s) = \sum_{x_t \in \hat{X}_t} \frac{c_t(x_t)}{n}\, \frac{\Psi_{t,s}(x_t, x_s)\,\Psi_t(x_t)}{W_t(x_t)} \prod_{u\in\Gamma_t\setminus s} \hat{m}_{u\to t}(x_t), \qquad x_s \in \hat{X}_s.   (8)
Since we have assumed W_s(x_s) > 0, in the limit of an infinite sample \hat{X}_t becomes all of X_t and the ratio c_t(x_t)/n converges to W_t(x_t). So for sufficiently large samples (8) approaches the true message (3). Fundamentally, we are interested in particle-based approximations to belief propagation for their finite-sample behavior, i.e., we hope that a relatively small collection of samples will give rise to an accurate estimate of the beliefs, the true marginal P_t(x_t). At any stage of BP we can use our current marginal estimate to construct a new sampling distribution for node t and draw a new set of particles {x_t^{(i)}}. This leads to an iterative algorithm which continues to improve its estimates as the sampling distributions become more accurately targeted, although such iterative resampling processes often require more work to analyze; see e.g. [9]. In [6], the sampling distributions were constructed using a density estimation step (fitting mixtures of Gaussians). However, the fact that the belief estimate \hat{M}_t(x_t) can be computed at any value of x_t allows us to use another approach, which has also been applied to particle filters [13,14] with success. By running a short MCMC simulation such as the Metropolis-Hastings algorithm, one can attempt to draw samples directly from the weighting \hat{M}_t. This manages to avoid any distributional assumptions inherent in density estimation methods, but has the disadvantage that it can be difficult to assess convergence during MCMC sampling.
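As an illustration of the particle message estimate (7), the following is our own sketch, not code from the paper; the argument names are our assumptions.

import numpy as np

def particle_message(x_t, w_t, x_s, psi_ts, psi_t, incoming):
    """Particle estimate of m_{t->s}, following equation (7).

    x_t      : list of particles x_t^{(j)} of the source node t
    w_t      : array of sampling weights W_t(x_t^{(j)})
    x_s      : list of particles x_s^{(i)} of the destination node s
    psi_ts   : callable, psi_ts(xt, xs) = Psi_{t,s}(x_t, x_s)
    psi_t    : callable, psi_t(xt) = Psi_t(x_t)
    incoming : list of arrays, each holding the estimated m_{u->t} at the
               particles of t, one array per neighbour u of t other than s."""
    n = len(x_t)
    prod_in = np.ones(n)
    for m in incoming:                           # product over u in Gamma_t \ s
        prod_in = prod_in * np.asarray(m, dtype=float)
    out = np.zeros(len(x_s))
    for i, xs in enumerate(x_s):
        terms = [psi_ts(x_t[j], xs) * psi_t(x_t[j]) / w_t[j] * prod_in[j]
                 for j in range(n)]
        out[i] = sum(terms) / n                  # (1/n) * sum over j
    return out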
4 Experimental Results

In this section, we describe the application of Particle Belief Propagation (PBP) to the problem of Structure from Motion (SfM) and the problem of Dense Stereo Vision with unknown camera constraints.

4.1 Particle Belief Propagation for SfM

In SfM, given a set of 2D images of the same scene, the objective is to simultaneously recover both the camera trajectories and the 3D structure of the scene. For this problem, it is commonly accepted that Bundle Adjustment (BA) [16] is the gold standard. There has been a lot of work on this problem [17,18,19], mostly using geometric based methods. All of this work demonstrated that results using BA as the last step are much better than without BA. With each camera pose and each map object 3D location being represented as a finite set of particles, we show that PBP can successfully estimate their true states by estimating their posterior distributions over the state space given the image observations. The algorithm was tested on both real and synthetic data. An
Fig. 1. The representation of the Structure from Motion problem as a bipartite graphical model
experiment on synthetic data, when the ground truth is available, allows us to compare the performance of our method with BA. First, we represent the problem of SfM in the context of PBP. For each observed image, we detect a sparse set of special image keypoints. These points should be high-level image features, obtained by a feature detector (such as a corner detector or SIFT detector [20]). The correspondences of image points between images are automatically obtained using a feature matching method, such as an efficient nearest neighbour algorithm [20]. The obtained matching result also defines the correspondences between the image points and the map points. The 3D scene can now be represented as a sparse set of 3D points. Each camera pose is assigned a 3D location, combined with 3 angles of rotation, which define the rotation of the camera about 3 axes. Our method is based on the assumption that given a set of image observations and the estimate of all map points (resp. camera poses), there exists a probability distribution for each camera pose (resp. map point) over its state space. The true states of map points and camera poses are hidden variables that we want to estimate. Let P_i denote the random variable for the ith camera pose. Let Y_j denote the random variable for the jth map point. The observed data are image points and their correspondences. We denote by x_{ij} the image point variable associated with map point Y_j as seen from camera P_i. Our graphical model G(V, E) consists of two types of nodes: each camera node i is associated with a camera variable P_i, and each map node j is associated with a map variable Y_j. For simplicity of notation we name each node after its variable. We add to E every edge connecting a camera node and a map point node it observes. No edge connects any two camera nodes, or any two map nodes. If each camera has an entire view of the whole set of map points, we have a complete bipartite graph. Each edge in the graph is associated with a binary potential function, denoted \Psi_{i,j}(p_i, y_j). More specifically, for an edge connecting camera node P_i and map node Y_j, we have:

\Psi_{i,j}(p_i, y_j) = e^{-\frac{\|Q(p_i, y_j) - x_{ij}\|^2}{2\sigma^2}}.   (9)
where Q(pi , yj ) is the reprojection function that takes a camera pose and a map point and returns the reprojected image point, xij is the image observation of map point yj as seen from camera pi , σ 2 is the variance of the Gaussian image noise. This is equivalent to assuming a normal distribution of the true observation around the given observation. This allows us to represent the uncertainty of observation.
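The paper only states that Q is the reprojection function; purely as an illustration, and assuming a simple calibrated pinhole camera (our assumption, with placeholder parameters), the potential of (9) could be computed as:

import numpy as np

def reproject(pose, point, focal=1.0):
    """A toy pinhole reprojection standing in for Q(p_i, y_j); 'pose' is a
    pair (R, t) of a 3x3 rotation matrix and a 3-vector translation."""
    R, t = pose
    pc = R @ np.asarray(point, dtype=float) + t   # map point in camera coordinates
    return focal * pc[:2] / pc[2]                 # perspective division onto the image plane

def edge_potential(pose, point, observation, sigma=1.0):
    """Psi_{i,j}(p_i, y_j) = exp(-||Q(p_i, y_j) - x_{ij}||^2 / (2 sigma^2)), as in (9)."""
    r = reproject(pose, point) - np.asarray(observation, dtype=float)
    return float(np.exp(-np.dot(r, r) / (2.0 * sigma ** 2)))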
At this point the message function and belief function on G are well defined using (3) and (4). However, the state space of each hidden variable in G is too large to do inference with BP. In order to use PBP, we discretize the state space of each variable into a relatively small number of states, each of which is represented by a particle. These states can be sampled from a normal distribution with some initial mean and variance. For example, we can initialize the state of each 3D map point by sampling from a 3D line going through the first camera centre point and the corresponding image point. As we assume that the camera motion is smooth, the initial states of the next camera pose can be sampled from a region nearby the previous one. Now we have at each camera node P_i a set of M particles p_i^{(1)}, ..., p_i^{(M)}, and at each map node Y_j a set of N particles y_j^{(1)}, ..., y_j^{(N)}. If we assume no unary potential at each node, we can define the message going from camera node i to map node j in G by (7) as follows:

\hat{m}_{i\to j}^{(k)} = \frac{1}{M} \sum_{h=1}^{M} w_i^{(h)}\, \Psi_{i,j}(p_i^{(h)}, y_j^{(k)}) \prod_{u\in\Gamma_i\setminus j} \hat{m}_{u\to i}^{(h)}.   (10)

At the beginning, when all particles in a node are assigned uniform weights and no message has been computed, equation (10) becomes:

\hat{m}_{i\to j}^{(k)} = \frac{1}{M} \sum_{h=1}^{M} \Psi_{i,j}(p_i^{(h)}, y_j^{(k)}).   (11)

This can be interpreted as the marginal probability \sum_{h=1}^{M} P(Y_j = y_j^{(k)} \mid x_{ij}, P_i = p_i^{(h)}). It follows that the belief of a map node becomes the marginal probability over all assignments of its neighbouring camera nodes, conditioned on the relevant observations:

\hat{B}_j(y_j^{(k)}) = \prod_{i\in\Gamma_j} \sum_{h=1}^{M} P(Y_j = y_j^{(k)} \mid x, P_i = p_i^{(h)}) = \sum_{p} P(Y_j = y_j^{(k)} \mid x, p),   (12)

where p is an assignment of values to all neighbouring pose nodes. The message from a map node to a pose node and the belief of a pose node can be interpreted similarly. As PBP proceeds, the information from one node is sent to all other nodes in the graph. This allows (2) to be approximated by (12). As shown in Section 3, the posterior marginal estimated by PBP is almost guaranteed to converge to the true posterior marginal distribution estimated by BP. In addition to PBP, the samples at each node are iteratively updated by Metropolis-Hastings sampling from the estimated posterior marginal distribution at that node. Metropolis-Hastings sampling allows particles to freely explore the state space and thus compensates for the inadequacy of representing a large state space by a small number of samples.

4.2 Comparison with Bundle Adjustment

The objective of this experiment is to compare PBP with the well known method BA in terms of reconstruction accuracy, by measuring the deviation of their reconstruction
Particle-Based Belief Propagation for Structure
25
Camera Pose Posteror Estimate
Map Point Posteror Estimate
840
Ground truth PBP BA sample points
20
820
15
800
10
780
X
Error 185.81 ± 66.43 194.00 ± 63.02 11.88 ± 4.40 11.56 ± 4.39
X
Method BA (Map) PBP (Map) BA (Pose) PBP (Pose)
23
5
760
0
740
5
10
Ground truth PBP BA Sample points
720
95
100
105
110
115
700 2800
2900
Z
(a)
(b)
3000
3100
3200
3300
3400
Z
(c)
Fig. 2. Comparing bundle adjustment to PBP in structure from motion. (a) Estimated mean and standard deviation of reconstruction errors for camera pose and map positions; (b) example posterior for one camera pose and (c) for one map point using PBP.
estimates from the ground truth. The model for bundle adjustment is a pairwise graphical model over poses pi and object positions yj given observed locations xij in the images: P ({pi }, {yj }) =
ψi,j (pi , yj )
Ψi,j (pi , yj ) = e
−Q(pi ,yj )−xij 2 2∗σ2
.
Bundle adjustment then searches for a local maximum (i.e., a mode) of the posterior distribution over the {pi , yj }. However, if the intent is to minimize the expected squared error loss, we should prefer the mean estimate of the posterior distribution instead of its mode. Note that these two quantities can be very different in problems where the posterior distribution is very skewed or multi-modal; in such cases, it may be advantageous to estimate the posterior distribution and its mean using methods such as belief propagation. Additionally, an explicit representation of uncertainty can be used to assess confidence in the estimate. We show a comparison between estimates given by BA2 and PBP on synthetic structure from motion data in Figure 2. For the data, we generate a fixed set of map points and a few camera poses. Each point’s observations are obtained by projecting the point on each camera’s image planes and adding Gaussian noise. We assume that the camera calibration parameters and the image correspondences are known, and initialize the estimates for both algorithms to ground truth, then run each to convergence and compare the resulting error. Figure 2(a) shows the estimated mean and standard deviation of reconstruction errors of BA and PBP from 200 different runs (units are synthetic coordinates). This shows that, at the same level of image noise and from the same initial conditions, PBP produces essentially the same accuracy as BA for both camera and map points. However, we expect that in less idealized cases (including, for example, incorrect feature associations or outlier measurements), PBP may perform much better [22]. Moreover, PBP does not require an initialization step to provide a good initial state vales, as we allow the set of initial states to be chosen randomly. In addition, PBP provides an estimate of state uncertainty (although at some computational cost). Figure 2(b)–(c) shows the estimated posterior distributions given by PBP for a camera pose and map point, respectively, 2
We use the sba package in [21].
24
H. Trinh and D. McAllester
Fig. 3. The graphical representation of the Dense Stereo Vision problem
(a)
(b)
Fig. 4. (a) left image of the stereo pair; (b) estimated depth map at convergence of PBP
along with the mean found via PBP (circle), mode found via BA (diamond), and true position (square). 4.3 Particle Belief Propagation for Dense Stereo with Unknown Camera Constraints Classical dense two-frame Stereo algorithms compute a dense disparity map for each pixel given a pair of images under known camera constraints (i.e. the configuration of the 2 cameras and the distance between them are known) [23]. Here, given a pair of stereo images with unknown camera constraints, we use PBP to simultaneously compute the dense depth map of the first image, and the configuration of the second camera relative to the first. Belief propagation [24,25] has been successfully applied to standard binocular stereo. To obtain computational efficiency, we take advantage of the algorithmic techniques in [25] which considerably speed up the the standard BP algorithm. The formulation of the graphical model in this case is quite similar to the previous problem. There are 2 camera nodes, and the number of map nodes equals the number of pixels in the first image. An edge is added between every pixel node and every camera node. (see Figure 3). A new energy function, which evaluates the quality of the labeling
Particle-Based Belief Propagation for Structure Camera Pose Posterior Estimate
Camera Pose Posterior Estimate
Camera Pose Posterior Estimate
10
10
10
Z
15
Z
15
Z
15
5
5
5
10
5
Ground truth PBP sample points
Ground truth PBP sample points 0 0
15
20
25
30
0 0
35
5
10
Ground truth PBP sample points 15
20
25
30
0 0
35
15
20
25
X
Camera Pose Posterior Estimate
Camera Pose Posterior Estimate
Camera Pose Posterior Estimate
10
10
5
Ground truth PBP sample points 10
20
25
30
35
35
30
35
5
Ground truth PBP sample points
Ground truth PBP sample points 15
30
Z
10
Z
15
Z
15
5
10
X
15
0 0
5
X
5
25
0 0
5
X
10
15
20
25
30
35
0 0
5
10
X
15
20
25
X
Fig. 5. Estimated posterior distribution of the camera pose over time
of both the camera poses and the 3D depth of each image point, has the following similar form, E(z, c) =
(p,q)∈N
V (zp , zq ) +
D(zp , c) .
(13)
p∈P
where P is the set of all pixels in the first image, N contains all edges connecting 2 pixel nodes, the labeling z is a function which assigns a depth value zp to each pixel p ∈ P , c is the assigned label for the 3D configuration of the second camera, namely a 6-vector (x, y, z, α, β, γ). V (zp , zq ) denotes the smoothness term, which is the cost of assigning labels zp and zq to two adjacent pixels. In this case we use the following V (zp , zq ) = min(zp − zq , θ), which captures the assumption that the scene has smooth depth almost everywhere, except at certain locations (such as boundaries). Note that this equation also defines the binary potential function associated with edges connecting two pixel nodes. The data term D(zp , c) computes the cost of simultaneously assigning label zp to pixel p, and assigning configuration c to the second camera. We use the following data term, D(zp , c) = min(I2 (Q(p, zp , c)) − I1 (p) , τ ) .
(14)
where the function Q(p, zp , c) takes the pixel p on the first image, its 3D depth zp , the configuration c of the second camera, and returns the corresponding pixel on the second image. Note that we also use a truncation value τ , which allows abrupt changes in image intensity, and makes the data cost robust to occlusion and specularity. (14) also defines the binary potential associated with edges connecting a pixel node and a camera node. Minimizing the energy function E(z, c) is now equivalent to finding the maximum posterior probability (MAP) estimate for the defined graphical model. For an energy
26
H. Trinh and D. McAllester Method nonocc Adapting BP 1.11 Graph cuts with occ 1.19 Our method 2.11 SSD + min-filter 5.23 Phase difference alg 4.89 Infection alg 7.95
all 1.37 2.01 4.22 7.07 7.11 9.54
disc 5.79 6.24 11.1 24.1 16.3 28.9
Fig. 6. Comparative performance on the Tsukuba data with some other well-known stereo algorithms, using the three performance measures, nonocc (ratio of bad pixels at nonoccluded regions), all (ratio of bad pixels at all regions) and disc (ratio of bad pixels at regions near depth discontinuities)
function of type (13), the max-product algorithm [26] can be used as the message updating method, which becomes min-sum when performed in negative log. As shown in Figure 5, we plot the estimated posterior distribution of the second camera pose at each iteration, thus show that the distribution gradually approaches the true state over time. Figure 4 shows the estimated depth map result on frame 3 of the Tsukuba image sequence at convergence. Finally, Figure 6 demonstrates the comparative performance on the Tsukuba data with some other well-known stereo algorithms. As the labels we assign for pixels are their true 3D depths instead of their disparities between the two images, in order to do this, we quantize the output depth into a number of depth range equivalent to the number of disparity levels used in the evaluation routine, as shown in [23] and the Middlebury College Stereo Evaluation webpage (http://www.middlebury.edu/stereo).
5 Summary and Conclusions In this paper we have demonstrated the application of the Particle-based Belief Propagation algorithm to the problems of Structure from Motion, and the problem of Dense Stereo with unknown camera constraints. To handle the cases with continuous-valued variables, or discrete-valued variables with very large domains, our approach creates a set of particles for each variable, representing samples from the posterior marginal. The algorithm then continues to improve the current marginal estimate by constructing a new sampling distribution and draw new sets of particles. It’s shown by experiments that the algorithm is consistent, i.e. approaches the true values of the message and belief functions with finite samples. Besides accuracy, PBP algorithm also provides good estimate for properties of the distribution, and a represention of state uncertainty. Although an “adaptive” choice for the sampling distribution, and such iterative resampling processes require further work to analyze, our results seem to support the notion of sampling from the current marginal estimates themselves, whether from fitted distributions [6] or via a series of MCMC steps. Acknowledgments. The authors would like to express their thanks to Dr. Alexander Ihler of the Toyota Technological Institute at Chicago for his constructive comments and suggestions.
Particle-Based Belief Propagation for Structure
27
References 1. Pearl, J.: Probabilistic Reasoning in Intelligent Systems. Morgan Kaufman, San Mateo (1988) 2. Freeman, W.T., Pasztor, E.C., Carmichael, O.T.: Learning low-level vision. IJCV 40(1), 25– 47 (2000) 3. Coughlan, J.M., Ferreira, S.J.: Finding deformable shapes using loopy belief propagation. ECCV 7, 453–468 (2002) 4. Coughlan, J.M., Shen, H.: Shape matching with belief propagation: Using dynamic quantization to accomodate occlusion and clutter. In: CVPR Workshop on Generative Model Based Vision (2004) 5. Arulampalam, M.S., et al.: A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. SP 50(2), 174–188 (2002) 6. Koller, D., Lerner, U., Angelov, D.: A general algorithm for approximate inference and its application to hybrid Bayes nets. UAI 15, 324–333 (1999) 7. Sudderth, E.B., et al.: Nonparametric belief propagation. In: CVPR (2003) 8. Isard, M.: PAMPAS: Real–valued graphical models for computer vision. In: CVPR (2003) 9. Neal, R.M., Beal, M.J., Roweis, S.T.: Inferring state sequences for non-linear systems with embedded hidden Markov models. In: NIPS, vol.16 (2003) 10. Thrun, S., Fox, D., Burgard, W.: Monte carlo localization with mixture proposal distribution. In: AAAI, pp. 859–865 (2000) 11. van der Merwe, R., et al.: The unscented particle filter. In: NIPS, vol. 13 (2001) 12. Doucet, A., de Freitas, N., Gordon, N. (eds.): Sequential Monte Carlo Methods in Practice. Springer, New York (2001) 13. Godsill, S., Clapp, T.: Improvement strategies for Monte Carlo particle filters. In: Doucet, J.F.G.D.F.A., Gordon, N.J. (eds.) Sequential Monte Carlo Methods in Practice, Springer, New York (2001) 14. Khan, Z., Balch, T., Dellaert, F.: MCMC-based particle filtering for tracking a variable number of interacting targets. IEEE Trans. PAMI, 1805–1918 (2005) 15. Clifford, P.: Markov random fields in statistics. In: Grimmett, G.R., Welsh, D.J.A. (eds.) Disorder in Physical Systems, pp. 19–32. Oxford University Press, Oxford (1990) 16. Triggs, B., et al.: Bundle adjustment - a modern synthesis. In: Vision Algorithms (1999) 17. Hartley, R.I., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, Cambridge (2004) 18. Kaucic, R., Hartley, R., Dano, N.: Plane-based projective reconstruction. In: ICCV 2001. Proceedings of the Ninth IEEE International Conference on Computer Vision. IEEE Computer Society Press, Los Alamitos (2001) 19. Snavely, N., Seitz, S.M., Szeliski, R.: Photo tourism: Exploring photo collections in 3D. ACM Transactions on Graphics 25(3), 835–846 (2006) 20. Lowe, D.G.: Object recognition from local scale-invariant features. In: Proc. of the International Conference on Computer Vision, Corfu., pp. 1150–1157 (1999) 21. Lourakis, M., Argyros, A.: The design and implementation of a generic sparse bundle adjustment software package based on the levenberg-marquardt algorithm. In: Technical Report 340, Institute of Computer Science - FORTH, Heraklion, Crete, Greece (2004), http://www.ics.forth.gr/∼lourakis/sba 22. Ihler, A.T., et al.: Nonparametric belief propagation for self-calibration in sensor networks. IEEE Trans. Jsac., 809–819 (2005) 23. Scharstein, D., Szeliski, R., Zabih, R.: A taxonomy and evaluation of dense two-frame stereo correspondence algorithms (2001)
28
H. Trinh and D. McAllester
24. Sun, J., Zheng, N.N., Shum, H.Y.: Stereo matching using belief propagation. IEEE Trans. Pattern Anal. Mach. Intell. 25(7), 787–800 (2003) 25. Felzenszwalb, P.F., Huttenlocher, D.P.: Efficient belief propagation for early vision. Int. J. Comput. Vision 70(1), 41–54 (2006) 26. Weiss, F.: On the optimality of solutions of the max-product belief-propagation algorithm in arbitrary graphs. IEEETIT: IEEE Transactions on Information Theory 47 (2001)
Integrating Disparity Images by Incorporating Disparity Rate Tobi Vaudrey1 , Hern´ an Badino2 , and Stefan Gehrig3 1
2
The University of Auckland, Auckland, New Zealand Johann Wolfgang Goethe University, Frankfurt am Main, Germany 3 DaimlerChrysler AG, Stuttgart, Germany
Abstract. Intelligent vehicle systems need to distinguish which objects are moving and which are static. A static concrete wall lying in the path of a vehicle should be treated differently than a truck moving in front of the vehicle. This paper proposes a new algorithm that addresses this problem, by providing dense dynamic depth information, while coping with real-time constraints. The algorithm models disparity and disparity rate pixel-wise for an entire image. This model is integrated over time and tracked by means of many pixel-wise Kalman filters. This provides better depth estimation results over time, and also provides speed information at each pixel without using optical flow. This simple approach leads to good experimental results for real stereo sequences, by showing an improvement over previous methods.
1
Introduction
Identifying moving objects in 3D scenes is very important when designing intelligent vehicle systems. The main application for the algorithm, introduced in this paper, is to use stereo cameras to assist computation of occupancy grids [8] and analysis of free-space [1], where navigation without collision is guaranteed. Static objects have a different effect on free-space calculations, compared to that of moving objects, e.g. the leading vehicle. This paper proposes a method for identifying movement in the depth direction pixel-wise. The algorithm, presented in this paper, follows the stereo integration approach described in [1]. The approach integrates previous frames with the current one using an iconic (pixel-wise) Kalman filter as introduced in [10]. The main assumption in [1] was a static scene, i.e. all objects will not move, using the ground as reference. The ego-motion information of the stereo camera is used to predict the scene and this information is integrated over time. This approach lead to robust free-space estimations, using real-world image sequences, running in real-time. This paper extends the idea of integrating stereo iconically to provide more information with a higher certainty. The extension is adding change of disparity in the depth direction (referred to in this paper as disparity rate) to the Kalman filter model. For a relatively low extra computational cost, movement in the depth direction is obtained without the need for computing optical flow. G. Sommer and R. Klette (Eds.): RobVis 2008, LNCS 4931, pp. 29–42, 2008. c Springer-Verlag Berlin Heidelberg 2008
30
T. Vaudrey, H. Badino, and S. Gehrig
Furthermore, depth information on moving objects is also improved. The environment for which the algorithm was modelled are scenes where most of the movement is in the depth direction, such as highway traffic scenes. Therefore, the lateral and vertical velocities are not modelled, so movement in these directions are neglected. In these scenes, the lateral and vertical velocity limitation is not a problem. This paper is structured as follows: Section 2 specifies the outline of the algorithm. Section 3 briefly introduces the concept of iconic representations for Kalman filtering and explains the new Kalman model incorporating disparity rate. Section 4 contains the algorithm in detail. Experimental results are presented in Section 5, covering two main stereo algorithms with the application to the model presented in this paper and also comparing speed and distance estimation improvements over the static stereo integration approach. Finally, conclusions and further research areas are discussed in Section 6.
2 2.1
Outline of the Disparity Rate Algorithm Common Terminology
The algorithm presented in this paper uses some common terminology in computer vision. Here is a list of the nomenclature that is commonly used within this paper (from [4]): Stereo camera: a pair of cameras mounted on a rig viewing a similar image plane, i.e. facing approximately the same direction. The camera system has a base line b, focal length f , and the relative orientation between the cameras, calculated by calibration. Stereo cameras are used as an input for the algorithm presented in this paper (see Figure 1). Rectification: is the transformation process used to project stereo images onto an aligned image plane. Image Coordinates: p = [u , v]T is a pixel position with u and v being the image lateral and height coordinates respectively. Pixel position is calculated from real world coordinates using back projection to the image plane p = f (W). Disparity: the lateral pixel position difference d between rectified stereo images. This is inversely proportional to depth. Disparity Map: an image containing the disparity at every pixel position. Different stereo algorithms are used to obtain correct matches between stereo pairs. An example of a disparity map is seen in Figure 3(b). T Real World Coordinates: W = [X , Y , Z] , where X, Y and Z are the world lateral position, height and depth respectively. The origin moves with the camera system, and a right-hand-coordinate system is assumed. Real world coordinates can be calculated using triangulation of the pixel position and disparity W = f (p, d). Ego-motion: the movement of a camera system between frames. This is represented by a translation vector Tt and a rotation matrix Rt .
Integrating Disparity Images by Incorporating Disparity Rate
31
Ego-vehicle: the platform that the camera system is fixed to, i.e. the camera system moves with the ego-vehicle. For the case of a forward moving vehicle, the ego-motion is based on speed v (velocity in depth direction) and yaw ψ (rotation around the Y axis). 2.2
Disparity Rate Algorithm
The iconic representation of Kalman filtering [10], that the algorithm presented in this paper is based on, still follows the basic iterative steps of a Kalman filter [6] to reduce noise on a linear system. The two steps are predict and update. The iconic representation applies the principles pixel-wise over an entire image. Section 3.1 contains more detail. The algorithm presented in this paper creates a “track” (a set of information that needs to maintained together) for every pixel in a frame, and integrates this information over time. The information that is being tracked is: Pixel position: in image coordinates. Disparity: of the pixel position. Disparity rate: the change in disparity w.r.t. time. Disparity variance: the mean square error of the tracked disparity, e.g. low value = high likelihood. Disparity rate variance: the mean square error of the tracked disparity.
Fig. 1. Block diagram of the algorithm. Rectangles represent data and ellipses represent processes
32
T. Vaudrey, H. Badino, and S. Gehrig
Disparity covariance: the estimated correlation between disparity and disparity rate. Age: the number of iterations since the track was established. No measurement count: the number of iterations a prediction has been made without an update. The tracks are initialised using the current disparity map. This map can be calculated using any stereo algorithm, the requirement is having enough valid stereo pixels calculated to obtain a valid measurement during the update process. Obviously, different stereo algorithms will create differing results. Stereo algorithms tested against the algorithm presented in this paper are shown in Section 5.1. Initialisation parameters, that are supplied by the user, are also required to help initialise the tracks. The parameters supplied here are the initial variances and disparity rate. This is explained in full detail in Section 4.1. For the first iteration, the initialisation creates the current state (see Figure 1). In the next iteration, the current state transitions to become the previous state. The previous state is predicted using a combination of ego-motion and the state transition matrix from the Kalman filter. In the case of a forward moving ego-vehicle, in a static environment, objects will move toward the camera (Kalman state transition matrix is the identity matrix). This is illustrated in Figure 2. This approach was used in [1]. The algorithm, presented in this paper, also uses disparity rate (with associated Kalman state transition matrix) to assist the prediction. This is explained in Section 4.2. The predicted tracks are then validated against a set of validation parameters, such as: image region of interest, tracking age thresholds, and depth range of interest. Also, predicted tracks can have the same pixel position. Due to the iconic representation of the tracks, these pixels are integrated to combine the differing predictions. The full validation rules are found in Section 4.3.
(a) Image coordinates
(b) Real world coordinates
Fig. 2. Figure (a) shows relative movement of a static object between frames, in image coordinates, when the camera is moving forward. Figure (b) shows the real world birdseye view of the same movement, back projected into the image plane.
Integrating Disparity Images by Incorporating Disparity Rate
33
After the validation, the Kalman update is performed on all the tracks. The measurement information that is used here is the current disparity map and corresponding variance. The variance for each disparity is supplied by the initialisation parameters. In some situations, there is no valid disparity measurement at the predicted pixel position. In these situations, the track is either deleted or kept, depending on the age and no measurement count. Acception and rejection criteria are covered in Section 4.4. Finally, any pixels in the current disparity map, that are not being tracked, are initialised and added to the current state. The current state can then be used for subsequent processes, such as free-space calculations and leader vehicle tracking. The static stereo integration approach in [1] generated incorrect depth estimates over time on moving objects, primarily the lead vehicle. The main issue that the algorithm presented in this paper assists with, is improving the depth estimation of forward moving objects. It also provides extra information that was not available previously, pixel-wise disparity rate, which is used to calculate speed in the depth direction. Using this information for subsequent processes is outside the scope of this paper and constitutes further research. The algorithm presented in this paper was tested using real images, recorded from a stereo camera mounted in a car. The major ego-motion component is therefore speed, which is essential for the disparity rate model to work. The limitation is that lateral and vertical movements are not modelled, so can not be detected. Other approaches have already been established to track this type of movement, such as 6D-vision [3]. The advantage the disparity rate model has over such techniques, is that dynamic information is generated more densely for the same computation cost.
3 3.1
Kalman Filter Model Iconic Representation for Kalman Filter
In [10] an iconic (pixel-based) representation of the Kalman filter is introduced. Instead of using a Kalman filter to track flow on features, a filter is used to track each pixel individually. Applying a traditional Kalman filter [6] to an entire image would lead to a very large state vector, with a corresponding large covariance matrix (i.e. the pixel position and disparity, for every pixel, would define the state vector). The main assumption here is that every pixel is independent and there is no relation to adjacent pixels. This allows a simplification of a Kalman filter at each pixel, thus it leads to many small state vectors and covariance matrices, which is computationally more efficient because of the matrix inversions involved with the Kalman filter. The pixel position is estimated using known ego-motion information, so is not included in the Kalman state. To find out the full details of iconic representations, refer to [10]. The main error in triangulation is in the depth component [9]. This triangulation error is reduced by using the ego-motion of the camera to predict flow, then
34
T. Vaudrey, H. Badino, and S. Gehrig
integrating the data over time (as presented in [1]). In this paper, the algorithm is expanded by incorporating disparity rate into the Kalman filter. 3.2
Kalman Filter Model Incorporating Disparity Rate
An example of the general Kalman filter equations can be seen in [11], they are: Prediction Equations: x− t = Axt−1
and
T P− t = APt−1 A + Q
(1)
Update Equations: −1 T T K = P− HP− t H t H +R T − xt = x− t + K zt − H xt
and
Pt = I − KHT P− t
(2) (3)
where x is the estimated state vector, P is the corresponding covariance matrix of x, and Q is the system noise. K is the Kalman gain, zt is the measurement vector, R is the measurement noise and HT is the measurement to state matrix. A superscript “−” denotes prediction to be used in the update step. A subscript t refers to the frame number (time). For the algorithm presented in this paper, a Kalman filter is used inside the track to maintain disparity, disparity rate and associated covariance matrix, represented by: 2 2 σd σd|d˙ d (4) x= ˙ and P = 2 σd| σd2˙ d d˙ d is the disparity and d˙ is the disparity rate. σd2 represents the disparity variance, 2 σd2˙ represents disparity rate variance, and σd| represents the covariance between d˙ disparity and rate. Tracking of the pixel position is not directly controlled by the Kalman filter, but uses a combination of the Kalman state and ego-motion of the camera. This approach is discussed in Section 4.2. The state transition matrix is simply as follows: 1 Δt A= (5) 0 1 where Δt is time between frames t and (t − 1). The measurements are taken directly from the current disparity map (see Figure 1). Disparity rate is not measured directly. Therefore, H = [1 0]T , and both the measurement vector and noise become scalars; namely z for measurement of disparity and R as the associated variance. The update equations are then simplified as follows: σd2− 1 (6) K = 2− 2− σ + R σd|d˙ xt =
x− t
+ K zt − d− t
d
and
Pt = (I − K [1 0]) P− t
(7)
The initial estimates for x and P need to be provided to start a track. These will be discussed further in Section 4.1.
Integrating Disparity Images by Incorporating Disparity Rate
4 4.1
35
Disparity Rate Integration Algorithm in Detail Initialisation of New Points
The only information available at the time of initialisation is the current measured disparity zt at every pixel (some pixels may not have a disparity due to errors or lack of information from the algorithm being used). For every time frame, any pixel that has a valid measurement and is not currently being tracked, is initialised. The no measurement count is set to zero, and the pixel position is the initial location of the pixel. The initialisation for the Kalman filter is as follows: Rε z and P = (8) x= t 0 ε β where R, ε and β are values set by the Initialisation Parameters outlined in Figure 1, which are supplied by the user. Generally speaking, R should be between 0–1; disparity error should not be larger than 1 pixel and will not be “perfect”. ε should be a value close to zero, representing little correlation between the disparity and disparity rate. β will be a large value to represent a high uncertainty of the initial disparity rate, as there is no initial measurement. With the large variance set for disparity rate, the Kalman filter will move closer to the “real” value as it updates the “poor” predictions. 4.2
Prediction
Prediction of the disparity, disparity rate and covariance matrix are mentioned in Section 3. Therefore, the only prediction required is the current pixel position. The prediction part of Figure 1 shows two inputs for prediction, which are the previous state and ego-motion of the camera. First, to explain the prediction more simply, a static camera is assumed (i.e. no ego-motion knowledge is required); so the only input is the previous state. This results in the following equations for pixel position prediction: p− t =
Xt = Xt−1 = X0
f Xt f Yt , Zt Zt and
Zt = Zt−1 + Z˙ t−1 Δt =
T
Yt = Yt−1 = Y0 bf dt−1 + d˙t−1 Δt
(9)
(10)
(11)
Substituting Equations (10) and (11) into Equation (9), and assuming that the ˙ disparity prediction is d− t = dt−1 + dt−1 Δt, provides the current pixel estimation p− t =
d− T t [X0 , Y0 ] b
(12)
36
T. Vaudrey, H. Badino, and S. Gehrig
With the pixel prediction formulation above, the prediction model is expanded to account for ego-motion between frames. The primary assumption here is that the ego-motion of the camera is known and noise-free. The world coordinates are calculated using standard triangulation methods from [4]. For a full ego-motion model, the equation is as follows: Wt = Rt Wt− + Tt
(13)
where Wt− is calculated using triangulation Wt− = f (p− t ). The world coordinate prediction Wt is then back projected, to yield the final pixel prediction pt = f (Wt ). For the case of a camera that is moving forward (i.e. a car movement system presented in [3]), speed and yaw are the only factors to obtain depth and lateral movement (vertical movement is zero). This results in the following variables: ⎛ ⎞ ⎛ ⎞ 1 − cos(ψ) cos(ψ) 0 −sin(ψ) ⎠ ⎠ and Tt = v · Δt ⎝ 0 0 (14) Rt = ⎝ 0 1 ψ −sin(ψ) sin(ψ) 0 cos(ψ) where speed v and yaw ψ are measured between frames t and (t − 1). All other ego-motion components are neglected. 4.3
Validation
The prediction step in the section above is performed for every track individually, with no correlation between tracks. This leads to a need for validating the prediction results. The validation checks, currently in the model, are: 1. Check to see if the pixel position is still in the region of interest. If not, then remove track. 2. Check to see if the disparity is within the depth-range of interest. If it is below or above the threshold, then remove the track. 3. Check if there two or more tracks occupying the same pixel position. If so, integrate the tracks at this pixel position using the general weighting equations as follows: Pint =
−1 P−1 i
(15)
i∈N
xint = Pint
P−1 i xi
(16)
i∈N
where N is the set of tracks occupying the same pixel position. This will give a stronger weighting to tracks with a lower variance (i.e. higher likelihood that the estimate is correct).
Integrating Disparity Images by Incorporating Disparity Rate
4.4
37
Update
From Figure 1, the input to the update process is the current measured disparity map and associated variance (supplied by initialisation parameters). The disparity measurements are obtained by applying a stereo algorithm to a rectified stereo pair; see Section 5.1 for more detail. The Kalman update process, explained in Section 3.2, is performed for every currently tracked pixel. As above, R is the variance and zt is the current measurement at pixel position p. However, there are several reasons why there will be no measurement (or an incorrect measurement) at the predicted pixel position: – The stereo algorithm being used may not generate dense enough stereo matches. Therefore, no disparity information at certain pixel locations. – The model is using a noisy approach to initialise disparity rate. A recently created track will have a bad disparity rate prediction, causing the predicted pixel position to be incorrect. This will increase the likelihood that the prediction will land on a pixel without measurement. – Noisy measured ego-motion parameters or ego-motion components not modelled (such as tilt or roll), may induce an incorrect predicted pixel position. If the above issues were not handled correctly, then the tracks created would contain a lot of noise. The decision on how to handle these situations is as follows: 1. Check to see if there is a measurement in the neighbourhood (controlled by a search window threshold) of the predicted pixel position. (a) Measurement at exact position; then use this disparity measurement with the corresponding disparity. (b) Measurement at neighboring position; then use this disparity measurement but use the disparity measurement with a proportionally higher variance, depending on the distance from the exact position prediction. 2. If there is a valid disparity measurement found; perform a Mahalanobis 3sigma test [7]: (a) Pass: apply the Kalman update and reset the no measurement count. (b) Fail: treat as if there is no valid measurement found. 3. If there is no valid measurement, the variables age and no measurement count are used: (a) Check the age of the Kalman filter. If it is below a certain age (controlled by a threshold), then remove the track. This prevents young noisy tracks affecting the over-all results. (b) If the track is old enough (controlled by threshold above): do not update the track, but keep the predicted track as the truth. This will increase the no measurement count, with the hope there will be a valid measurement in the next few frames. (c) If the track is over the no measurement count maximum (controlled by a threshold), then remove the track. This prevents tracks from being predicted forever without a valid measurement.
38
T. Vaudrey, H. Badino, and S. Gehrig
(a) Correlation Pyramid Stereo
(b) SGM Stereo
(c) Correlation Speed Estimate
(d) SGM Speed Estimate
Fig. 3. Figures (a) and (b): red represents close pixels and green represents distant pixels. Figures (c) and (d): green represents static pixels (w.r.t. the ground), blue represents positive movement and red represents negative movement, both in the depth direction. Figure (c) shows a less noisy speed estimation, compared to SGM, when using Correlation Pyramid Stereo
5 5.1
Experimental Results Stereo Algorithms
The algorithm presented in this paper requires a disparity image as input, created by a stereo algorithm. The algorithm has been tested with several stereo algorithms. Semi-Global-Matching. the algorithm presented in this paper was tested using Hirschm¨ uller’s Semi-Global-Matching (SGM) [5], an efficient “fast” (compared to most dense algorithms) dense stereo algorithm. The SGM approach resulted in a lot of erroneous disparity calculations in low textured areas. This is seen in Figure 3(b), where there are clear mismatches around the number plate of the lead vehicle and the bank on the right-hand-side. SGM still worked as an input for the algorithm presented in this paper, but created lots of noise, propagated from the integration of erroneous disparities. If there is a pixel prediction
Integrating Disparity Images by Incorporating Disparity Rate
39
from a correct match in one frame, to a low textured area in the next frame, a mismatch can easily be found (due to the dense nature of the stereo algorithm). This will result in a large change in disparity (disparity rate), thus it creates noise around the low textured area. This is seen in Figure 3(d), where there is a lot of noise behind the lead vehicle and also on the right-hand-side bank. SGM can not yet be run in real-time. Correlation Stereo. another algorithm tested was a stereo feature based correlation scheme [2]. Correlation Pyramid Stereo “throws away” badly established feature correlations, only using the best features for disparity calculations (see Figure 3(a)). Using this approach lead to less noisy results, as is seen in the comparison of Figure 3(d) and 3(c). All pixels on the lead vehicle show a positive movement in the depth direction (blue) and most static pixels (road and barrier) correctly marked as green. Furthermore, oncoming traffic does create some negative movement estimations (red), although this information is rather noisy. The disparity information provided is not as dense as the SGM stereo method, but is a lot less computationally expensive (factor of 10); and can run in real-time. 5.2
Speed and Distance Estimation Results
Speed (m/s)
The algorithm presented in this paper was tested on real images recorded from a stereo camera mounted in a moving vehicle. The speed and yaw of the egovehicle were measured (but possibly noisy), as required for the model. To test the 26 24 22 20 18 16
Radar Disparity Rate Model 0
50
100 150 200 Time (Frame Number)
250
300
250
300
(a) Speed estimation of lead vehicle. Radar Disparity Rate Model Static Stereo Integration
Distance (m)
40 35 30 25 20 0
50
100 150 200 Time (Frame Number)
(b) Distance estimation of lead vehicle. Fig. 4. Figure (a) shows the comparison between radar and the disparity rate model. Figure (b) compares distance estimates using; radar, the static stereo integration and the Disparity Rate model. More information can be found in the text.
40
T. Vaudrey, H. Badino, and S. Gehrig
algorithm, all the tracks on the lead vehicle were measured over time. For each frame, outlying pixels were eliminated using the 3-sigma Mahalanobis distance test, then the weighted average was calculated using Equations (15) and (16), with N being the set of tracks located on the lead vehicle. The average result was plotted over time. The ego-vehicle also had a leader vehicle tracking radar, which measures speed and distance. The radar measurements were used as an approximate “ground truth” for comparison. An example of one scene where this approach was tested can be seen in Figure 4. In this scene, a van was tracked over time (see Figure 3 for lead vehicle scene). The algorithm provided reasonable speed estimates, as is seen in Figure 4(a). Between frames 200-275, there is an oscillation, this is due to a large lateral movement in the lead vehicle (going around a corner) which is not modelled. Furthermore, the distance estimates on the leader vehicle are improved, compared to the static stereo integration approach (outlined in [1]). When using the static integration, there is a lag behind the radar measurements (ground truth), this is due to the assumption that the scene is static and the back of the moving vehicle will be moving closer to the ego-vehicle. This no longer happens when incorporating disparity rate and is seen by the improved results in Figure 4(b). In both figures, the radar data is lost around frame 275, but the stereo integration still delivers estimations. For the 270 frames where there is valid radar data, the RMS (root mean square) to the radar data highlights the accuracy of the model. The speed estimate yields a RMS of 1.60 m/s. The RMS for the distance estimates using
Speed (m/s) / Distance (m )
(a) Original Image
(b) Estimated Speed Radar Distance Disp. Rate Model Distance Radar Speed Disp. Rate Model Speed
45 40 35 30 25 0
50
100
150
200
250
Time (Frame #)
(c) Results Fig. 5. A wet traffic scene with a car as the leading vehicle
Integrating Disparity Images by Incorporating Disparity Rate
(a) Static Model
41
(b) Disparity Rate Model
Fig. 6. Occupancy grids: Bright locations indicate a high likelihood that an obstacle occupies the location. Dark regions indicate a low likelihood that an obstacle occupies the region. The outline refers to the lead vehicle highlighted in Figure 3(a).
static integration is 2.59 m, compared to 0.43 m for the disparity rate model. This clearly shows an improvement over the static integration approach. Similar results can be seen in Figure 5 during a wet scene. In this scene, a car is tracked over time. The algorithm still provides meaningful results, even with the issues caused by wet weather (e.g. windscreen wiper occlusion, reflection of the road and poor visibility). The RMS for the 290 valid time frames, comparing to the radar data, is 0.59 m for distance and 2.13 m/s for speed. There is also no loss in information when computing occupancy grids, the comparison between the static approach and disparity rate model is seen in Figure 6. The occupancy grids show the same time frame number using the two models (also the same frame number as Figure 3). The improved depth information is highlighted showing a more accurate pixel cloud where the lead vehicle is located, and note that there is no loss of information from the background. The static approach has an average value of 20.26 m with a total spread of 17.8 m, where as the disparity rate model has an average of 22.4 m with a total spread of 10.26 m. The algorithm runs in real-time and takes only 5–10% more computational time than using the static stereo integration approach. To get the results obtained above, the algorithm was running at 20 Hz on an Intel Yonah Core Duo T2400.
6
Conclusions and Further Work
The approach outlined by this paper has shown that using a disparity rate model provides extra information, over static stereo integration, when applied to scenes with movement in the depth direction. Not only are reasonable speed results obtained, but the distance estimations are also improved. If this information is used
42
T. Vaudrey, H. Badino, and S. Gehrig
correctly, better occupancy grids will be created and leader vehicle tracking will be improved. Further work that still needs to be included in the model and is planned: – Using the ego-motion in the initialisation scheme as another hypothesis for speed prediction. This approach aims to create quicker convergence on moving objects. The assumption being that a moving object’s velocity will be closer to that of the ego-vehicle’s velocity, rather than the static ground. A negative ego-motion could also be included as another hypothesis, to allow convergence for oncoming traffic [3]. – Use the disparity rate (speed) estimation in some follow-on processes (e.g. objective function for free-space calculation). – There is a problem with occlusion of an area in one frame, which is not occluded in the next. This causes major changes in the disparity at those pixel positions, giving incorrect disparity rate estimations, which needs to be modeled and compensated. Acknowledgements. The first author would like to thank Reinhard Klette, his supervisor in Auckland, for introducing him to this area of research. He would also like to thank Uwe Franke, at DaimlerChrysler AG, for the opportunities provided to accelerate learning concepts for driver assistance systems.
References 1. Badino, H., Franke, U., Mester, R.: Free Space Computation Using Stochastic Occupancy Grids and Dynamic Programming. In: Dynamic Vision Workshop for ICCV (2007) 2. Franke, U., Joos, A.: Real-time Stereo Vision for Urban Traffic Scene Understanding. In: IEEE Conference on Intelligent Vehicles, Dearborn (2000) 3. Franke, U., et al.: 6D-Vision: Fusion of Stereo and Motion for Robust Environment Perception. In: Kropatsch, W.G., Sablatnig, R., Hanbury, A. (eds.) DAGM 2005. LNCS, vol. 3663, Springer, Heidelberg (2005) 4. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, Cambridge (2003) 5. Hirschm¨ uller, H.: Accurate and Efficient Stereo Processing by Semi-Global Matching and Mutual Information. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2, pp. 807–814 (June 2005) 6. K´ alm´ an, R.E.: A New Approach to Linear Filtering and Prediction Problems. Transactions of the ASME - Journal of Basic Engineering 82, 35–45 (1960) 7. Mahalanobis, P.C.: On the generalised distance in statistics. In: Proceedings of the National Institute of Science of India 12, pp. 49–55 (1936) 8. Martin, M.C., Moravec, H.: Robot Evidence Grids. Technical Report CMU-RI-TR96-06, Robotics Institute, Carnegie Mellon University (March 1996) 9. Matthies, L.: Toward stochastic modeling of obstacle detectability in passive stereo range imagery. In: Proceedings of CVPR, Champaign, IL, USA (1992) 10. Matthies, L., Kanade, T., Szeliski, R.: Kalman Filter-based Algorithms for Estimating Depth from Image Sequences. IJVC 3, 209–238 (1989) 11. Maybeck, P.S.: Stochastic Models, Estimating, and Control, ch.1. Academic Press, New York (1979)
Towards Optimal Stereo Analysis of Image Sequences Uwe Franke1 , Stefan Gehrig1 , Hern´ an Badino2 , and Clemens Rabe1 1
2
Daimler Research, B¨ oblingen Johann Wolfgang Goethe University, Frankfurt am Main
Abstract. Stereo vision is a key technology for understanding natural scenes. Most research concentrates on single image pairs. However, in robotic and intelligent vehicles applications image sequences have to be analyzed. The paper shows that an appropriate evaluation in time gives much better results than classical frame-by-frame reconstructions. We start with the state-of-the art in real-time stereo analysis and describes novel techniques to increase the sub-pixel accuracy. Secondly, we show that static scenes seen from a moving observer can be reconstructed with significantly higher precision, if the stereo correspondences are integrated over time. Finally, an optimal fusion of stereo and optical flow, called 6D-Vision, is described that directly estimates position and motion of tracked features, even if the observer is moving. This eases the detection and tracking of moving obstacles significantly.
1
Introduction
Robotic systems navigating in unknown terrain and driver assistance systems of future cars have one challenge in common: their environment needs to be perceived robustly, precisely and in real-time. Both systems need to know about free space as well as size, position and motion of obstacles. Moving objects are of interest because they represent a potential threat. Besides radar and lidar, stereo vision will play an important role in future driver assistance systems, since the goal of accident free traffic requires a sensor with high spatial and temporal resolution. Stereo vision is a research topic with a long history. For an overview see [1]. For a long time, correlation based techniques were commonly used, because they deliver precise and sufficiently dense measurements in real-time on a PC or on dedicated hardware. Recently, much progress has been achieved in dense stereo. Using stereo vision, the three-dimensional structure of the scene is easily obtained. The standard approach for free space analysis and obstacle detection is as follows: First, the stereo correspondences are computed with pixel accuracy. Sub-pixel accuracy is obtained by fitting a parabolic curve to the goodness-of-fit function, e.g. the correlation coefficient. Then, all 3D points are projected onto an evidence-grid (Section 3.1). In a third step, this grid is segmented and potential obstacles are tracked over time in order to verify their existence and to estimate their motion state. G. Sommer and R. Klette (Eds.): RobVis 2008, LNCS 4931, pp. 43–58, 2008. c Springer-Verlag Berlin Heidelberg 2008
44
U. Franke et al.
This paper discusses the drawbacks of the approach outlined above and presents powerful improvements. Besides better sub-pixel algorithms, the exploitation of the correlation in time leads to more precise and stable results, and allows to estimate the motion state of single image points even before the objects are detected. This ”track-before-detect” approach distinguishes between static and moving pixels before any segmentation has been performed. Using static points, very accurate evidence grids are generated. Also, moving points can be easily grouped. The paper is organized as follows: First we propose three independent methods to increase the sub-pixel accuracy, especially in low-textured regions. They deliver depth estimates with increased precision, which is important if one wants to reliably detect obstacles at large distances. Secondly, we show that the uncertainties of evidence grids are significantly reduced if the stereo information is integrated over time. Finally, section 4 presents a Kalman filter based integration of stereo and optical flow that allows the direct measurement of 3D-position and 3D-motion of all tracked image points.
2
Improvements in Stereo Accuracy
Real-time stereo vision systems typically provide stereo measurements at points with sufficient image structure using local algorithms. See [2] for a correlation variant that we use in our examples. Dense stereo algorithms estimate disparities at all pixels, including untextured regions. For a survey of the current state of the art refer to [1] and to the Middlebury stereo website that hosts the list of top-performing stereo algorithms compared to ground truth data [3]. Most stereo algorithms obtain integer disparities. Our applications need very accurate 3D positions also at large distances, where small disparities occur. Imagine a house at 50m distance and a stereo rig that yields a disparity of 5 pixels. An uncertainty of ± 41 px then corresponds to an uncertainty of about 2.5m in depth. Obviously, such a reconstruction is not very likely to be accurate. A simple sub-pixel interpolation is usually done via parabola fitting of the disparities in the vicinity of the best disparity [4]. However, this interpolation scheme is more likely to yield sub-pixel disparities close to the integer value than around .5 values. The determined disparity deviates up to 0.15 pixels from the actual disparity. This effect is known as the pixel-locking effect [5]. We propose to increase the sub-pixel accuracy in low-textured regions in three ways: First, we present an analysis that shows the benefit of evaluating the disparity space at fractional disparities. Second, we introduce a new disparity smoothing algorithm that preserves depth discontinuities and enforces smoothness on a sub-pixel level. Third, we present a novel stereo constraint (gravitational constraint) that assumes sorted disparity values in vertical direction and guides global algorithms to reduce false matches. We show results for two typical traffic scenes using a dense stereo algorithm. The three presented improvements are independent of the underlying type of stereo algorithm and can be applied to sparse stereo algorithms. For related work and a more detailed description of our algorithms see [6].
Towards Optimal Stereo Analysis of Image Sequences
2.1
45
Semi-global Matching
One of the most promising dense stereo algorithms listed on the Middlebury web page is semi-global matching (SGM) [7]. Our sub-pixel investigations are conducted with this algorithm due to its outstanding computational efficiency. The algorithm can run in real-time on pipe-line architectures such as graphic cards (GPU) [8]. Roughly speaking, SGM aims to minimize the two-dimensional energy in a dynamic-programming fashion on multiple (8 or 16) 1D paths, thus approximating the minimal energy of the 2D image. The energy consists of three parts: a data term for photo-consistency, a small smoothness energy term for slanted surfaces that changes the disparity slightly, and a smoothness energy term to prevent depth discontinuities. One example result is shown in Figure 1. The SGM result (right) provides a 3D measurement for almost every pixel, while local methods such as correlation can only provide reliable measurements for structured points (middle).
Fig. 1. Van scene (rectified left image shown on the left), correlation stereo result (middle) and semi-global matching result. Note the higher measurement density for SGM (the lighter the color the larger the disparity). Unmatched areas are marked in dark green.
2.2
Sampling the Disparity at Fractional Disparities
In Figure 2 on the left side, the 3D reconstruction of the van rear from Figure 1 is shown using standard SGM. High disparity noise occurs, since the SGM smoothness constraint works only at pixel level. One straightforward approach to obtain sub-pixel accuracy is to evaluate the matches at fractional disparity steps [9]. Roughly speaking, one virtually increases the image size, interpolates the intermediate pixels, and performs the same matching algorithm. This results in integer disparities for the enlarged image which corresponds to sub-pixel or fractional disparities for the actual input image size. Parabolic fitting is done in an additional step. We use this idea and extend it to global stereo algorithms. When working on discrete disparity steps, the number of subdivisions of one disparity step is the dominant factor for improvement in low-textured areas since smoothness is now enforced below integer level. The evaluation of the disparity space at fractional disparities is straightforward, instead of enlarging the image
46
U. Franke et al.
we only need to interpolate our matching costs and increase the numbers of disparity steps. A result using the Birchfield-Tomasi (BT) [10] similarity metric is shown in Figure 2. The depth variance of the van rear is significantly reduced when comparing the standard 3D reconstruction on the left with the fractional disparity reconstruction shown in the middle. In reality, the van rear is perfectly planar. This strategy of fractional disparity sampling can be applied to any stereo algorithm that incorporates smoothness constraints.
Fig. 2. 3D reconstruction of the rear of the van from the previous figure viewed from the bird’s eye view. Result for semi-global matching (left). Note the large noise in viewing direction. 3D reconstruction for SGM with fractional disparity sampling (middle) yields less noisy results. SGM with additional disparity smoothing (right) delivers a smooth surface. The van rear is reconstructed as an almost parallel surface.
2.3
Disparity Smoothing
Our second technique for improving the sub-pixel accuracy is also based on the smoothness constraint. This constraint is widely used to obtain good disparity estimates but often on a pixel level only. Commonly used local interpolation schemes neglect this constraint while estimating depth with sub-pixel accuracy. Our approach performs a depth-edge-preserving smoothing on the disparity image, similar to Yang‘s method [11] where bilateral filtering (adaptive filter coefficients depending on adjacency and on depth consistency to the central pixel) is used. Yang’s method yields excellent results but is computationally expensive. Unlike other methods we also exploit the confidence of an established disparity value. We treat the disparity estimation as an energy minimization problem similar to [12], with the parabola-fit quality contributing to the data term and the disparity variance representing the smoothness term. By design, a continuous transition is obtained from smoothed disparity-areas, where adjacent pixels have similar depths, to depth-discontinuities, where smoothing is completely turned off (see [6] for details). This relaxation-like scheme needs only a few iterations to converge. One example of disparity smoothing is shown in Figure 2. 2.4
2.4 Gravitational Constraint
In Figure 4 an urban scene is shown where the sky yields some erroneous correspondences due to matching ambiguities. For cameras whose optical axes point to the horizon, the following observation is made: when traversing the image from the bottom to the top, the distance to the 3D points on the rays tends to increase (see Figure 3). This is due to the fact that objects in the scene are connected to the ground (due to gravity; therefore we refer to the constraint as the gravitational constraint). Therefore, for our scenario we prefer reconstructions with sorted depths for every column in the image. If the optical axes and the ground form only a small angle, the constraint is still valid for most scenes. Even if violations of the gravitational constraint occur (e.g., bridges or traffic signs), this constraint is extremely helpful for disambiguating matches. For any global stereo algorithm, it is easy to incorporate this constraint with an additional "gravitation energy" term that penalizes violations of the constraint. One example that shows the benefit of the gravitational constraint is given in Figure 4: the mismatches in the sky are eliminated using this constraint.

Fig. 3. Principle of the Gravitational Constraint
Fig. 4. Street scene with traffic sign. Left: original image. Middle: disparity image using SGM with BT. Note the wrong depth estimation in the sky. Right: result with the gravitational constraint. Dark green marks unmatched areas.
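For illustration, a possible form of such a penalty is sketched below: a per-column "gravitation energy" that charges every disparity increase encountered when moving from the bottom of the image to the top. The weight and the toy disparity columns are assumptions, not the energy term actually used by the authors.

```cpp
#include <vector>
#include <algorithm>
#include <cstdio>

// 'column' holds disparities from the bottom row (index 0) to the top row.
// Going upwards, disparity should not increase (distance should not shrink);
// every violation adds to the energy that a global optimizer would minimize.
float gravitationEnergy(const std::vector<float>& column, float weight) {
    float energy = 0.0f;
    for (size_t r = 1; r < column.size(); ++r) {
        float violation = std::max(0.0f, column[r] - column[r - 1]);
        energy += weight * violation;
    }
    return energy;
}

int main() {
    std::vector<float> ok  = {40, 35, 20, 10, 2};   // ground ... sky: monotone
    std::vector<float> bad = {40, 35, 20, 28, 2};   // a mismatch "floating" in the sky
    std::printf("energy ok  = %.1f\n", gravitationEnergy(ok, 1.0f));
    std::printf("energy bad = %.1f\n", gravitationEnergy(bad, 1.0f));
    return 0;
}
```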
3 Static Scenes Seen from a Moving Observer
Stereo evaluation on a single image pair reaches its limits at a certain point. For image sequences, integrating the stereo evaluations of subsequent image pairs opens up new potential.
This section presents a method for the computation of free space in static scenes. The free space is the world region where navigation without collision is guaranteed. First, a stochastic occupancy grid based on integrated stereo measurements is computed. The integration of stereo measurements is performed by an effective algorithm which reduces the disparity uncertainty and avoids the propagation of stereo mismatches. Dynamic programming is finally applied to a polar occupancy grid representation, obtaining an optimal solution of the free space problem. The following sections address the main components of this algorithm.
3.1 Occupancy Grids
An occupancy grid is a two-dimensional array or grid which models occupancy evidence of the environment. Occupancy grids were first introduced in [14]; an exhaustive review can be found in [24]. There are two main types of occupancy grids: deterministic and stochastic grids. Deterministic grids are basically two-dimensional histograms counting 3D points. They are usually obtained by projecting the 3D measurements onto the road and counting the number of points falling onto the same cell area ([21], [15]). The noise properties of the 3D measurements are not explicitly modeled. Grid cells with a large number of hits are more likely to be occupied than those with few or no points. The choice of the cell discretization of a deterministic grid requires the usual compromise of any sampling process: small cells might prevent the accumulation of points, while large cells lose spatial resolution and, therefore, estimation accuracy. The cells of a stochastic occupancy grid, on the other hand, maintain a likelihood or probability of occupancy. Instead of counting points, stochastic grids define an update function which specifies the operation to perform on every cell, based on the measurement and its noise properties ([13], [22], [23]). Stochastic occupancy grids are more expensive to compute, but they are more informative and do not suffer from discretization effects, as compared to deterministic grids.

Fig. 5. Block diagram of the free space algorithm
3.2 An Algorithm for Free Space Computation
Figure 5 shows a block diagram of the proposed algorithm. The algorithm starts by computing stereo disparity and variance images given a pair of rectified stereo images. In order to maintain the real-time capability of the system, a correlation-based algorithm with fractional disparity computation is used [2]. An example of the results of this algorithm on a typical traffic scene is shown in Figure 1. A constant variance image is initially assumed. The measured disparity and variance images are used to refine an integrated disparity image by means of Kalman filters. We use an iconic representation, where the state vector of the Kalman filter collects disparity tracks at integer image positions. The tracked disparities are predicted for the next cycle based on the ego-motion of the camera. If the new disparity measurements confirm the prediction, the disparity uncertainty is reduced. This helps to generate more accurate stochastic occupancy grids. Implementation details and additional occupancy grid types are found in [13]. The occupancy grid is also integrated over time to reduce the occupancy noise of the cells. The Occupancy Grid Integration block performs a low-pass filter with the predicted occupancy grid based on the ego-motion of the camera. Finally, the free space is computed using dynamic programming on the polar grid (see Section 3.4).
3.3 Iconic Representation for Stereo Integration
The main error of a triangulated stereo measurement lies in the depth component (see, e.g., [17]). Reducing the disparity noise therefore improves the localization of the estimated 3D points, which yields a better grouping of objects in the grid. Tracking features over multiple frames allows the position uncertainty to be reduced (see Section 4.1). Nevertheless, tracking becomes redundant in static scenes when the ego-motion of the camera is known a priori. The iconic representation initially proposed in [18] is used here. This method exploits the fact that multiple disparity observations of the same static point are available because the camera motion is known. Disparity measurements which are consistent over time are considered as belonging to the same world point, and therefore the disparity variance is reduced accordingly. This stereo integration requires three main steps, as Figure 5 shows (a sketch of the prediction/update cycle is given at the end of this subsection):
– Prediction: the current integrated disparity and variance images are predicted. This is equivalent to computing the expected optical flow and disparity based on ego-motion [18]. Our prediction of the variance image includes the addition of a driving noise parameter that models the uncertainties of the system, such as ego-motion inaccuracy.
– Measurement: disparity and variance images are computed based on the current left and right images.
– Update: if the measured disparity confirms its prediction, then both are fused together, reducing the variance of the estimate. The consistency of the disparity is verified using a standard 3σ test.
Figure 6 shows an example of the improvement achieved using the iconic representation. The occupancy grid shown in the middle was obtained with the standard output of the stereo algorithm, while the occupancy grid on the right was computed with an integrated disparity image. Note the significantly reduced uncertainties of the registered 3D points. A bicyclist approximately 50 meters away is marked in the images. The integrated disparity image shows a clear improvement over the standard measured disparity image.

Fig. 6. Stereo integration and evidence grids. Left: both rectified images. Middle: the evidence grid based only on the stereo information of the current frame. Right: the evidence grid obtained with disparity measurements integrated over time. The occupancy likelihood is encoded as the brightness of the cells.
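The prediction/update cycle for a single pixel can be sketched as a scalar Kalman filter with a 3σ gate, as below. This is an illustration under simplifying assumptions (scalar disparity state, fixed measurement variance, a made-up driving noise value), not the authors' implementation.

```cpp
#include <cstdio>

struct DispEstimate { float d; float var; };

// Predict (inflate variance by driving noise), gate with a 3-sigma test, and
// fuse prediction and measurement if they are consistent.
bool fuse(DispEstimate& pred, float measured, float measVar, float drivingNoise) {
    pred.var += drivingNoise;                              // prediction step
    float innovation = measured - pred.d;
    float s = pred.var + measVar;                          // innovation variance
    if (innovation * innovation > 9.0f * s) return false;  // fails 3-sigma test
    float k = pred.var / s;                                // Kalman gain
    pred.d   += k * innovation;                            // fused disparity
    pred.var *= (1.0f - k);                                // variance shrinks on agreement
    return true;
}

int main() {
    DispEstimate track = {12.0f, 1.0f};
    float measurements[] = {12.3f, 11.8f, 12.1f, 20.0f};   // last one is an outlier
    for (float m : measurements) {
        bool ok = fuse(track, m, 1.0f, 0.05f);
        std::printf("meas %.1f -> %s  d=%.2f var=%.3f\n",
                    m, ok ? "fused " : "reject", track.d, track.var);
    }
    return 0;
}
```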
3.4 Free Space Computation
Cartesian space is not a suitable space in which to compute the free space, because the search must be done in the direction of rays leaving the camera. The set of rays must span the whole grid, which leads to discretization problems. A more appropriate space is the polar space: in polar coordinates every grid column is, by definition, already in the direction of a ray. Therefore, searching for obstacles in the ray direction is straightforward. For the computation of free space, the first step is to transform the Cartesian grid to a polar grid by applying a remapping operation. The polar representation we use is a Column/Disparity occupancy grid [13]. A result of this is shown in the middle image of Figure 7. In the polar representation, the task is to find the first visible obstacle in the positive direction of depth. All the space found in front of an occupied cell is considered free space. The desired solution forms a path from left to right segmenting the polar grid into two regions. Instead of thresholding each column as done in [19,20], dynamic programming is used (a minimal sketch is given at the end of this subsection). The method based on dynamic programming has the following properties:
– Global optimization: every row is not considered independently, but as part of a global optimization problem which is solved optimally.
– Spatial and temporal smoothness of the solution: the spatial smoothness is imposed by penalizing depth discontinuities, while temporal smoothness is imposed by penalizing the deviation of the current solution from the prediction.
– Preservation of spatial and temporal discontinuities: the saturation of the spatial and temporal costs allows the preservation of discontinuities.
Figure 7 shows an example of using dynamic programming for free space analysis.

Fig. 7. Free space computation. The green carpet shows the computed available free space. The free space is obtained by applying dynamic programming on a Column/Disparity occupancy grid, obtained as a remapping of the Cartesian depth map, shown at the right. The free space resulting from the dynamic programming is shown over the grids.
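A minimal sketch of such a column-wise dynamic program on a small synthetic polar grid is given below. The data term, the saturated smoothness penalty and the grid layout are illustrative assumptions, and the temporal smoothness term used by the authors is omitted.

```cpp
#include <vector>
#include <algorithm>
#include <cstdlib>
#include <cstdio>

// For every column of the polar grid choose one "first obstacle" row; the data
// term rewards occupancy evidence and the saturated smoothness term penalizes
// jumps between neighbouring columns. Weights and saturation are assumed.
std::vector<int> freeSpacePath(const std::vector<std::vector<float>>& grid,
                               float smoothWeight, float saturation) {
    int cols = grid.size(), rows = grid[0].size();
    std::vector<std::vector<float>> cost(cols, std::vector<float>(rows));
    std::vector<std::vector<int>> from(cols, std::vector<int>(rows, 0));
    for (int r = 0; r < rows; ++r) cost[0][r] = -grid[0][r];        // data term only
    for (int c = 1; c < cols; ++c)
        for (int r = 0; r < rows; ++r) {
            float best = 1e30f; int bestPrev = 0;
            for (int p = 0; p < rows; ++p) {
                float jump = std::min(smoothWeight * std::abs(r - p), saturation);
                float v = cost[c - 1][p] + jump;
                if (v < best) { best = v; bestPrev = p; }
            }
            cost[c][r] = best - grid[c][r];
            from[c][r] = bestPrev;
        }
    std::vector<int> path(cols);                                     // backtrack
    int r = std::min_element(cost[cols - 1].begin(), cost[cols - 1].end())
            - cost[cols - 1].begin();
    for (int c = cols - 1; c >= 0; --c) { path[c] = r; r = from[c][r]; }
    return path;
}

int main() {
    // 4 columns x 5 depth bins of occupancy evidence.
    std::vector<std::vector<float>> grid = {
        {0, 0, 5, 0, 0}, {0, 0, 6, 0, 0}, {0, 0, 0, 7, 0}, {0, 0, 0, 6, 0}};
    for (int row : freeSpacePath(grid, 1.0f, 3.0f)) std::printf("%d ", row);
    std::printf("\n");
    return 0;
}
```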
4 Dynamic Scenes Seen from a Moving Observer
Until now, we assumed the world to be static and showed how to combine successive stereo image pairs to reduce the variance of the free space estimation.
Fig. 8. A bicyclist appearing behind a fence. The movement is not detected in the corresponding stereo reconstructions (right images).
This information can be used for obstacle detection and obstacle avoidance in a straightforward manner, since all non-free space is considered an obstacle. However, in realistic scenes the world is not completely static, and a system for obstacle detection has to cope with moving objects and estimate their movements precisely in order to predict potential collisions. A common approach is to analyze the evidence grid shown in Section 3 and to track isolated objects over time. The major disadvantage of this approach is that the segmentation of isolated objects is difficult in scenes consisting of multiple nearby objects. This problem is illustrated in Figure 8: a bicyclist moves behind the fence towards the road at a distance of about 35 m. The position of the bicyclist after 0.4 s is shown in the bottom picture. In the corresponding stereo reconstructions, the points belonging to the bicyclist cannot be distinguished from the other static points. It is extremely difficult for an algorithm using only the stereo information to detect the moving object at this point in time. In the literature, many solutions to this problem are based on optical flow. Argyros et al. [25] describe a method to detect moving objects using stereo vision. By comparing the normal flow of the right camera images with the disparities of the stereo image pair, they detect image regions with independent object motion as inconsistencies. Heinrich [26] proposes a similar approach by defining the so-called flow-depth constraint. He compares the measured optical flow with the expectation derived from the ego-motion and the 3D stereo information. Independently moving objects do not fulfil the constraint and are detected. However, this approach is sensitive to small errors in the ego-motion estimation, since only two consecutive frames are considered. In addition, these two approaches lack a precise measurement of the detected movements. For a direct measurement of the object's motion, Waxman and Duncan [27] analyze the relation between the optical flow fields of the two cameras and define the so-called relative flow. Using this information, the relative longitudinal velocity between the observer and the object is directly determined. Direct optical flow analysis provides fast results, but is limited with respect to robustness and accuracy due to the inherent measurement noise. In the next section, we present an algorithm that integrates the optical flow and the stereo information over time to obtain a fast and reliable motion estimation.
4.1 6D-Vision
The moving bicyclist in Figure 8 is not detected in the stereo views because the connection of the points over time is missing. If we know how a stereo point moves from one frame to another, we also know how the corresponding world point moves in that time interval. This leads to the main idea of the proposed algorithm: track an image point in one camera from frame to frame and calculate its stereo disparity in each frame. Together with the known motion of the ego-vehicle, the movement of the corresponding world point can be calculated. In practice, a direct motion calculation based on two consecutive frames is extremely noisy. Therefore, the obtained measurements are filtered by a Kalman filter. Since we allow the observer to move, we fix the origin of the coordinate system to the car. The state vector of the Kalman filter consists of the world point in the car coordinate system and its corresponding velocity vector. The six-dimensional state vector (x, y, z, ẋ, ẏ, ż) gives this algorithm its name: 6D-Vision [29]. The measurement vector used in the update step of the Kalman filter is (u, v, d), with u and v being the current image coordinates of the tracked image point and d its corresponding disparity. As the perspective projection formulae are non-linear, we have to apply the Extended Kalman filter. A block diagram of the algorithm is shown in Figure 9. In every cycle, a new stereo image pair is obtained and the left image is first analyzed by the tracking component. It identifies small distinctive image regions, called features (e.g., edges, corners), and tracks them over time. In the current application we use a version of the Kanade-Lucas-Tomasi tracker [28], which provides sub-pixel accuracy and tracks the features robustly over a long image sequence. The disparities of all tracked features are determined in the stereo module. After this step the estimated 3D position of each feature is known. Together with the ego-motion (see Section 4.2), the measurements of the tracking and the stereo modules are given to the Kalman filter system. One Kalman filter per tracked feature estimates the 6D state vector consisting of the 3D position and the 3D motion.
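As an illustration only, the sketch below shows the 6D state per tracked feature and a constant-velocity prediction step in car coordinates; the ego-motion compensation and the non-linear (u, v, d) measurement update of the Extended Kalman filter are omitted, and the cycle time is an assumed value.

```cpp
#include <array>
#include <cstdio>

struct State6D {
    std::array<float, 6> x;  // (X, Y, Z, Xdot, Ydot, Zdot) in car coordinates

    // x_{k+1} = F x_k with F = [I, dt*I; 0, I] (constant-velocity model).
    void predict(float dt) {
        for (int i = 0; i < 3; ++i) x[i] += dt * x[i + 3];
    }
};

int main() {
    State6D bike;
    bike.x = {0.0f, 0.0f, 35.0f, 1.5f, 0.0f, 0.0f};  // 35 m ahead, moving laterally
    for (int k = 0; k < 3; ++k) {
        bike.predict(0.04f);                         // assumed 25 Hz cycle
        std::printf("X=%.3f  Z=%.2f\n", bike.x[0], bike.x[2]);
    }
    return 0;
}
```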
Fig. 9. The block diagram of the 6D-Vision algorithm
For the next image pair analysis, the acquired 6D information is used to predict the image position of the tracked features. This yields a better tracking performance with respect to speed and robustness. In addition, the predicted depth information is used to improve the stereo calculation. The result of this algorithm is shown in Figure 10. On the left side, the original image is shown with the estimated velocity vectors. They point to the predicted position of the corresponding world point in 0.5 s. The colors encode the estimated lateral speed; the warmer the color, the higher the velocity. It can be seen that this rich information helps to detect the moving cyclist and provides a first prediction of its movement at the same time. On the right side, the same scene is shown in a bird's eye view. More details can be found in [29].
Fig. 10. Velocity estimation results for the cyclist appearing behind a fence. The arrows show the predicted position of the world point in 0.5 s. The color encodes the lateral velocity component: green encodes a lateral velocity of zero, red encodes a high lateral velocity.
4.2 Ego-Motion Analysis
In order to obtain better results in the fusion process, the observer's ego-motion has to be known. The inertial sensors installed in today's cars measure the speed and the yaw rate. However, this information is not sufficient for a full description of the motion of the observer, as it lacks important components such as the pitch and the roll rate. Not compensating for these influences results in poor 3D motion estimation. This is illustrated in the left image of Figure 11: here, the ego-motion was computed using inertial sensors only. As the car undergoes a heavy pitch movement, the world seems to move downwards. Using the estimated 6D information, static world points are identified. Assuming they remain static, the ego-motion of the observer is determined by comparing the predicted world positions with the measured ones. We use a Kalman filter to accumulate all these measurements and estimate a state vector containing all ego-motion parameters [30]. In addition, the inertial sensor data are integrated as an additional source of information. This calculation of the ego-motion fits well into the proposed system, as it uses the already acquired information, including the knowledge of the 6D filtering. Our algorithm runs in less than one millisecond per frame. The benefit of the ego-motion compensation is demonstrated in the right image of Figure 11: compared to the left image, the world appears much more stable.

Fig. 11. Velocity estimation results with and without image based ego-motion compensation
5 Summary
Robots as well as vehicles acting in a dynamic environment must be able to detect any static or moving obstacle. This implies that an optimal stereo vision algorithm has to seek an optimal exploitation of the spatial and temporal information contained in the image sequence. We showed that the sub-pixel accuracy of stereo algorithms can be improved by a factor of 2–3 compared to the standard parabolic fit, especially in low-textured regions. Fractional disparity sampling as well as disparity smoothing are helpful in any stereo application, since they efficiently tackle the pixel-locking effect and favor smoothness at the sub-pixel level. The novel gravitational constraint is of special use for dense stereo vision algorithms, especially the semi-global matching algorithm.
As shown in this paper, the precision and robustness of 3D reconstructions are significantly improved if the stereo information is appropriately integrated over time. This requires knowledge of the ego-motion, which in turn can be efficiently computed from the 3D tracks. It turns out that the obtained ego-motion data outperform commonly used inertial sensors. The obtained depth maps show less noise and uncertainty than those generated by a simple frame-by-frame analysis. A dynamic programming approach allows the free space to be determined without an error-prone obstacle threshold. The algorithm runs in real time on a PC and has proven its robustness in daily traffic, including night-time driving and heavy rain. The tested version assumes static scenes; an extension to situations with leading vehicles is described in [31]. The requirement to detect small or partly hidden moving objects from a moving observer calls for the fusion of stereo and optical flow. This leads to the 6D-Vision approach, which allows position and motion of each observed image point to be estimated simultaneously. Grouping this 6D information is much more reliable than using stereo alone, and a fast recognition of moving objects becomes possible. In particular, objects with a certain direction and motion are detected directly at the image level without further non-linear processing or classification steps that may fail if unexpected objects occur. Since the fusion is based on Kalman filters, the information contained in multiple frames is integrated. This leads to a more robust and precise estimation than differential approaches such as the pure evaluation of the optical flow on consecutive image pairs. Practical tests confirm that a crossing cyclist at an intersection is detected within 4–5 frames. The implementation on a 3.2 GHz Pentium 4 proves real-time capability: currently, we select and track about 2000 image points at 25 Hz (the images have VGA resolution). Future work on 6D-Vision will concentrate on real-time clustering and tracking of obstacles.
References
1. Scharstein, D., Szeliski, R.: A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. IJCV 47(1), 7–42 (2002)
2. Franke, U.: Real-time stereo vision for urban traffic scene understanding. In: Proceedings of the Intelligent Vehicles 2000 Symposium (2000)
3. Scharstein, D., Szeliski, R.: Middlebury online stereo evaluation, http://www.middlbury.edu/stereo
4. Tian, Q., Huhns, M.: Algorithms for subpixel registration. Computer Vision, Graphics, and Image Processing 35, 220–233 (1986)
5. Shimizu, M., Okutomi, M.: Precise sub-pixel estimation on area-based matching. In: ICCV, pp. 90–97 (2001)
6. Gehrig, S., Franke, U.: Improving sub-pixel accuracy for long range stereo. In: ICCV Workshop VRML, Rio de Janeiro, Brazil (October 2007)
7. Hirschmueller, H.: Accurate and efficient stereo processing by semi-global matching and mutual information. In: Proceedings of Int. Conference on Computer Vision and Pattern Recognition 2005, San Diego, CA, vol. 2, pp. 807–814 (June 2005)
8. Rosenberg, I.D., et al.: Real-time stereo vision using semi-global matching on programmable graphics hardware. In: ACM SIGGRAPH 2006 Sketches (2006)
9. Szeliski, R., Scharstein, D.: Sampling the disparity space image. PAMI 26(3), 419–425 (2004)
10. Birchfield, S., Tomasi, C.: Depth discontinuities by pixel-to-pixel stereo. In: Proceedings of Int. Conference on Computer Vision 1998, pp. 1073–1080 (1998)
11. Yang, Q., et al.: Spatial-depth super resolution for range images. In: Proceedings of Int. Conference on Computer Vision and Pattern Recognition 2007 (June 2007)
12. Alvarez, L., et al.: Dense disparity map estimation respecting image discontinuities: A PDE and scale-space based approach. Research Report 3874, INRIA Sophia Antipolis, France (January 2000)
13. Badino, H., Franke, U., Mester, R.: Free space computation using stochastic occupancy grids and dynamic programming. In: Workshop on Dynamical Vision, ICCV (to be published, 2007)
14. Elfes, A.: Sonar-based real-world mapping and navigation. IEEE Journal of Robotics and Automation RA-3(3), 249–265 (1987)
15. Franke, U., et al.: Steps towards an intelligent vision system for driver assistance in urban traffic. In: Intelligent Transportation Systems (1997)
16. Franke, U., et al.: 6D-Vision: Fusion of stereo and motion for robust environment perception. In: Kropatsch, W.G., Sablatnig, R., Hanbury, A. (eds.) DAGM 2005. LNCS, vol. 3663. Springer, Heidelberg (2005)
17. Matthies, L.: Toward stochastic modeling of obstacle detectability in passive stereo range imagery. In: Proceedings of CVPR, Champaign, IL, USA (December 1992)
18. Matthies, L., Kanade, T., Szeliski, R.: Kalman filter-based algorithms for estimating depth from image sequences. IJCV 3(3), 209–238 (1989)
19. Miura, J., Negishi, Y., Shirai, Y.: Mobile robot map generation by integrating omnidirectional stereo and laser range finder. In: Proc. of IROS (2002)
20. Murray, D., Little, J.J.: Using real-time stereo vision for mobile robot navigation. Autonomous Robots 8(2), 161–171 (2000)
21. Nedevschi, S., et al.: A sensor for urban driving assistance systems based on dense stereovision. In: Intelligent Vehicles Symposium (2007)
22. Suppes, S., Suhling, F., Hötter, M.: Robust obstacle detection from stereoscopic image sequences using Kalman filtering. In: 23rd DAGM Symposium (2001)
23. Thrun, S.: Robot mapping: A survey. Technical report, School of Computer Science, Carnegie Mellon University (2002)
24. Thrun, S., Burgard, W., Fox, D.: Probabilistic Robotics. Intelligent Robotics and Autonomous Agents. MIT Press, Cambridge (2005)
25. Argyros, A.A., et al.: Qualitative detection of 3D motion discontinuities. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 1996), November 1996, vol. 3, pp. 1630–1637 (1996)
26. Heinrich, S.: Real time fusion of motion and stereo using flow/depth constraint for fast obstacle detection. In: Van Gool, L. (ed.) DAGM 2002. LNCS, vol. 2449, pp. 75–82. Springer, Heidelberg (2002)
27. Waxman, A.M., Duncan, J.H.: Binocular image flows: steps toward stereo-motion fusion. IEEE Transactions on Pattern Analysis and Machine Intelligence 8(6), 715–729 (1986)
28. Tomasi, C., Kanade, T.: Detection and tracking of point features. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, Tech. Rep. CMU-CS-91-132 (April 1991)
29. Franke, U., et al.: 6D-Vision: Fusion of stereo and motion for robust environment perception. In: Kropatsch, W.G., Sablatnig, R., Hanbury, A. (eds.) DAGM 2005. LNCS, vol. 3663, pp. 216–223. Springer, Heidelberg (2005)
30. Franke, U., Rabe, C., Gehrig, S.: Fast detection of moving objects in complex scenarios. In: Proceedings of the 2007 IEEE Intelligent Vehicles Symposium, pp. 398–403 (2007)
31. Vaudrey, T., Badino, H., Gehrig, S.: Integrating disparity images by incorporating disparity rate (submitted to RobVis 2008)
Fast Line-Segment Extraction for Semi-dense Stereo Matching
Brian McKinnon and Jacky Baltes
University of Manitoba, Winnipeg, MB, Canada, R3T 2N2
[email protected] http://aalab.cs.umanitoba.ca
Abstract. This paper describes our work on practical stereo vision for mobile robots using commodity hardware. The approach described in this paper is based on line segments, since those provide a lot of information about the environment, provide more depth information than point features, and are robust to image noise and colour variations. However, stereo matching with line segments is a difficult problem due to poorly localized end points and perspective distortion. Our algorithm uses integral images and Haar features for line segment extraction. Dynamic programming is used in the line segment matching phase. The resulting line segments track accurately from one frame to the next, even in the presence of noise.
1 Introduction
This paper describes our approach to stereo vision processing for urban search and rescue (USAR) robotics using low-quality cameras. The goal of all stereo vision processing is to produce dense depth maps that can be used to accurately reconstruct a 3-D environment. However, our target applications impose additional constraints: (a) since the stereo system is mounted on a mobile robot, the approach must support real-time depth map extraction, and (b) since we are using cheap off-the-shelf webcams, the approach must be robust to noise and other errors. The two algorithms described in this paper are fast line segment extraction and two-frame line segment matching. The fast line segment extraction makes use of integral images to improve the performance of gradient extraction, as well as to provide an edge representation suitable for binary image segmentation. Line segment matching is achieved using a dynamic programming algorithm, with integral images providing fast feature vector extraction. Most of the focus in stereo vision processing has been on point feature-based matching. The most notable feature extraction methods are corner detection [1,2], SIFT [3], and SURF [4]. Most point feature-based stereo matching relies on the epipolar constraint [5] to reduce the search space of the feature matcher to a 1-D line. The epipolar constraint can be applied using the Essential or Fundamental matrix, which can be determined using several methods [5].
The problem encountered early in our stereo vision research was that the cameras tended to shake while the robot moved. Even small changes in the position of one camera relative to the second require recalculation of the epipolar lines, which can be an expensive and error-prone task during the operation of the robot. This problem demanded a more robust feature set that would be trackable over a larger 2-D search space. In earlier work [6], our approach focused on region-based stereo matching. Region segmentation proved to be a difficult task in complex scenes under real-time constraints. In addition, regions would not always form in a similar manner, making our hull signature matching inaccurate and prone to errors. The most important piece of information gleaned from this work was that most of the information in a region is contained on the boundary between two regions. This intuitively led to a shift in our research towards the extraction and matching of boundaries, in particular line segments. Though several authors have proposed methods for line segment matching [7,8], none could achieve the speed and accuracy needed for our mobile robotics applications. The algorithms described in this paper are being developed with several requirements. The most important requirement is that accurate depth maps be generated in real time. The method has to be robust to jitter in the two cameras; therefore, knowledge of the epipolar lines cannot be assumed. Since the epipolar constraint is not useful, neither is the assumption of rectified stereo frames. Also, commodity cameras are used and are allowed to freely modify the brightness and contrast of an image; therefore, colour calibration cannot be assumed. This paper will begin by introducing existing methods for line segment based stereo processing. Then, both the line segment extraction and matching algorithms will be described in detail. This will be followed by some initial results that have been generated using the described line segment based method. Finally, the paper will conclude and areas for future work will be discussed.
2 Related Work
The review of related work will focus on existing line segment extraction and matching algorithms. Line segment extraction refers to the process of identifying boundaries between image regions and grouping the points into a line segment feature. Matching is the process of identifying the new location of a previously observed feature given some change in the position of the cameras. The extraction of line segments can be approached [9] using either global or local methods. Both rely on identifying boundary pixels using one of several methods of edge detection; for more information, refer to [10]. Most global line segment extraction algorithms are extensions of the generalized Hough transform [11]. In the generalized Hough transform, each boundary pixel votes for candidate lines in the Hough space accumulator. Candidate lines are calculated using the line equation:

y sin(φ) + x cos(φ) = ρ
where x and y are the position of the pixel, φ is the orientation of a line perpendicular to the candidate line, and ρ is the length from the origin to the perpendicular intersection point. The accumulator is a two-dimensional table that plots φ against ρ. For each φ value in the accumulator, ρ is calculated and the resulting point at (φ, ρ) in the accumulator is incremented. Once all boundary pixels are processed, a search is done in the accumulator to find peaks that exceed a threshold number of votes. The Hough transform extracts infinite lines; however, many methods have been presented [11,12] to deal with the extraction of line segments with start and end points. The primary difficulty with global line segment extraction algorithms is that they are computationally expensive and can generate imaginary lines formed by textured surfaces. The simplest form of local line segment extraction uses chain coding [13]. Chain code values represent the location of the next pixel in the chain using either a 4- or 8-connected neighbourhood. The boundary is traced starting from an initial edge point (generally the top-left-most point in the chain) until the chain returns to the start. Noise can be filtered from the chain code using a median filter over the connection direction. The final chain code is then examined for line segments by finding runs of equal-valued neighbour connections. Local line segment methods are more sensitive to noise in the image; therefore, most of the recent work in this field focuses on joining small segments into larger segments [11,9]. Line segment matching is generally considered to be a more difficult problem than interest point matching. Although it seems simple to view a line segment as a connected start and end point, the problem is that the end points of line segments tend to be very unstable [8], making end point matching more difficult than single interest point matching. The two primary features of a line segment used for matching are the colour information around the line segment and the topology of the line segments. Bay, Ferrari, and van Gool [8] used colour information from points three pixels to the left and right of a line segment to generate histograms. The histograms of the two candidate lines were normalized, and the distance between them was calculated using the Euclidean distance between histogram bins in the defined colour space. The colour information produced soft matches [8], reducing the number of potential matches. By applying the sidedness constraint [8], the incorrect matches were filtered out to produce the final set of matched lines. The use of colour information was limited to situations where the capture devices produced very consistent colour output. In the more common situation, colour information varies greatly between the images [14] due to automatic brightness and contrast corrections differing between the capture devices. If colour information is ignored, then the matching must be based on the topology of the images. Hollinghurst [14] has defined a set of rules for topological matching that includes initial matching and refinement operations. Using geometric constraints, initial matches are generated based on overlap, length ratio, orientation difference, and disparity limit. The initial matches are refined by
applying constraints that either support or suppress a potential match. Matches are suppressed if they match multiple line segments, and if they violate the sidedness constraint. Support is generated by collinearity and connectivity. The forms of connectivity that are important include parallelism, junctions at end points, and T-junctions. These constraints are applied to the initial matches until only supported matches remain. The main problem with line segments is that the end points are often incorrect due to noise in the image. To address this problem, Zhang [7] measured the amount of overlapping area in a given calibration. In his experiments, the extrinsic calibration of the two cameras was unknown, and instead the essential matrix was estimated by using rotated points on an icosahedron. He found that, given a set of matched line segments, the essential matrix could be estimated as accurately as with a calibrated trinocular system. The system did show that matched line segments could be used to generate a depth map of the image. One problem is that there is no discussion of the line segment matching problem, and it appears that pre-matched (possibly by a human operator) line segments were used. As a result, it is difficult to predict how line segment-based stereo vision will work in the presence of noise found in real-world images.
3 Implementation
In this section, the line segment extraction and matching algorithm will be outlined. The steps of the algorithm are:
1. Integral Image Generation
2. Haar Feature Extraction
3. Gradient Thinning
4. Binary Segmentation
5. Overlap Clean-up
6. Dynamic Program Matching
For simplicity, speed, and grounding, the method described in this paper will be applied to 8-bit gray-scale images of dimensions 320x240. The sample images used to describe the algorithm are shown in Fig. 1. The integral image, as shown in Fig. 2, is an efficient data structure for extracting large amounts of data from images in constant time. In an integral image, the value stored in each pixel is the sum of all intensities in the area above and to the left of that pixel. This means that the sum of any rectangular subregion inside the integral image can be extracted with only four table look-ups and three addition/subtraction operations. For example, integral images can be used to apply a blur to an image using a box kernel. This is achieved by taking the sum over a rectangular region around each point and normalizing by the area of the rectangle. The benefit of this method over conventional blurs is that the run time is independent of the kernel size and, unlike separable kernels, it requires no additional temporary storage.
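The following sketch shows an integral image and the constant-time box sum it enables; the row-major grayscale layout and the toy image are assumptions for illustration, and a Haar gradient feature would simply be the difference of two such box sums.

```cpp
#include <vector>
#include <cstdio>

struct Integral {
    int w, h;
    std::vector<long long> s;  // (w+1) x (h+1) table; first row/column are zero

    Integral(const std::vector<unsigned char>& img, int width, int height)
        : w(width), h(height), s((width + 1) * (height + 1), 0) {
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                s[(y + 1) * (w + 1) + (x + 1)] = img[y * w + x]
                    + s[y * (w + 1) + (x + 1)] + s[(y + 1) * (w + 1) + x]
                    - s[y * (w + 1) + x];
    }
    // Sum of pixels in [x0, x1) x [y0, y1): four look-ups, three adds/subtracts.
    long long boxSum(int x0, int y0, int x1, int y1) const {
        return s[y1 * (w + 1) + x1] - s[y0 * (w + 1) + x1]
             - s[y1 * (w + 1) + x0] + s[y0 * (w + 1) + x0];
    }
};

int main() {
    std::vector<unsigned char> img(8 * 8, 1);  // 8x8 image of ones
    Integral ii(img, 8, 8);
    std::printf("4x4 box sum = %lld\n", ii.boxSum(2, 2, 6, 6));  // expect 16
    // A horizontal Haar gradient would be boxSum(right half) - boxSum(left half).
    return 0;
}
```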
Fig. 1. The left and right input images used in the description of the algorithm
Fig. 2. The integral images generated for the left and right input images
Fig. 3. The gradient magnitude of the left and right input images
The most practical application of integral images is to extract arbitrarily sized Haar features from an image [15]. There are several Haar features that can be extracted from an image; however, this algorithm focuses on the vertical and horizontal gradient features. During the Haar feature extraction stage of the algorithm, the two gradient features are extracted at kernel sizes of 4x4, 8x8, and 12x12 pixels. The values are cached into a look-up table to improve performance during the matching stage. These features then provide the input for both gradient thinning and dynamic program matching.

The gradient thinning algorithm is based on the Canny method, with the addition of the Haar feature gradients and an efficient data structure. The goal of this method is to extract the peak gradients by keeping only the strongest values along the gradient orientation. From the cached Haar features, one kernel size is selected for thinning. A threshold based on the magnitude of the gradient is initially applied to the pixels. Pixels exceeding the defined threshold are used to activate an edge cell (EC). The EC is an efficient data structure for storing gradient orientations, since it allows binary comparisons of the gradient orientation. Each EC is a dual-layer ring discretized into 8 cells, allowing a resolution of π/4 per ring. The inner layer is offset from the outer layer by a distance of π/8, and the combined layers produce a π/8 resolution for the EC. The gradient orientation of each pixel activates one cell in each layer of the pixel's EC. Using this activation, the EC can be quickly tested for a horizontal, vertical, or diagonal orientation with a binary logic operation. ECs can be compared against each other using either an equal operator, which returns true if both cell layers overlap, or a similar operator, which returns true if at least one cell layer overlaps. The thinning direction is determined by the EC activation, and only the strongest gradient pixels along the thinning orientation are processed by the binary segmentation.

The binary segmentation is the core of the line segment extraction, and makes use of the EC data structure described above. For each active pixel, the EC is separated into two split ECs (SECs), the first containing the inner ring activation and the second containing the outer ring activation. The 8-neighbour binary segmentation then tests each of the two SECs separately against its neighbours. The binary test for a neighbour returns true if that neighbour's EC is similar (at least one cell overlaps) to the current SEC. The final result of the segmentation is that each pixel is a member of two thinned binary regions, or line segments as they will be referred to from now on. The overlapping line segments are useful, but we generally prefer that each pixel only belong to a single line segment.

Fig. 4. The edge cell activation used in the line segment extraction
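A possible encoding of the edge cell as two 8-bit masks is sketched below; the exact discretization and orientation convention are assumptions, but it shows how the "equal" and "similar" tests reduce to bitwise operations.

```cpp
#include <cmath>
#include <cstdio>

struct EdgeCell {
    unsigned char outer = 0, inner = 0;

    // The gradient orientation activates one of 8 cells (pi/4 each) per layer;
    // the inner layer is offset by pi/8 relative to the outer layer.
    void activate(float angle) {                       // angle in radians
        const float pi = 3.14159265f;
        float a = std::fmod(std::fmod(angle, 2 * pi) + 2 * pi, 2 * pi);
        int o = static_cast<int>(a / (pi / 4)) & 7;
        int i = static_cast<int>((a + pi / 8) / (pi / 4)) & 7;
        outer = static_cast<unsigned char>(1u << o);
        inner = static_cast<unsigned char>(1u << i);
    }
    bool equal(const EdgeCell& e) const {              // both layers overlap
        return (outer & e.outer) && (inner & e.inner);
    }
    bool similar(const EdgeCell& e) const {            // at least one layer overlaps
        return (outer & e.outer) || (inner & e.inner);
    }
};

int main() {
    EdgeCell a, b, c;
    a.activate(0.10f);
    b.activate(0.30f);   // close orientation: layers still overlap
    c.activate(1.60f);   // roughly perpendicular
    std::printf("a~b similar=%d equal=%d\n", a.similar(b), a.equal(b));
    std::printf("a~c similar=%d equal=%d\n", a.similar(c), a.equal(c));
    return 0;
}
```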
There are two cases of overlap that need to be cleaned up to produce the final result. The first clean-up step is the removal of any line segment contained mostly within another, longer line segment. This is achieved by simply having each pixel cast a vote for the longer of the two line segments of which it is a member. Any line segment with a number of votes below a threshold is discarded. The second clean-up step involves finding an optimal cut point for any partially overlapping line segments. This is achieved by finding the point in the overlapping area that maximizes the area of the triangle formed between the start of the first line segment, the cut point, and the end point of the second line segment. The overlapping line segments are then trimmed back to the cut point to produce the final line segments. Any line segment smaller than a defined threshold is discarded at the end of this stage. Figure 5 shows the output of our line extraction algorithm. The results are good line segments for stereo vision. However, more post-processing could be performed to improve the line segments and further reduce noise. A few examples include merging line segments separated by noise, or extending end points to junctions. These features await future implementation.
Fig. 5. The final line segments extracted by the described algorithm
The matching of line segments is a difficult task, since the end points are generally poorly localized and the segments can be broken into multiple pieces due to noise in the image. In addition, some line segments may not be completely visible in all images due to occlusion. To address these problems, a dynamic programming solution is applied to the line segment matching problem. The dynamic programming table consists of points from the source line segment, ls1, making up the row header, and points from the matching line segment, ls2, making up the column header. The goal is to find the overlapping area that maximizes the match value between the points of the two line segments. To compare the points of two line segments, we return to the Haar features extracted earlier. The match value for two point feature vectors, v1 and v2, is calculated as

M = Σi (sign(v1i) == sign(v2i)) · min(v1i, v2i) / max(v1i, v2i)
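Read as a sum over the components of the two feature vectors, and comparing magnitudes so that negative components behave symmetrically (both assumptions on our part), the match value can be computed as in the sketch below.

```cpp
#include <vector>
#include <algorithm>
#include <cmath>
#include <cstdio>

// Match value between two Haar feature vectors: components of opposite sign
// contribute nothing, matching signs contribute the ratio of their magnitudes.
float matchValue(const std::vector<float>& v1, const std::vector<float>& v2) {
    float m = 0.0f;
    for (size_t i = 0; i < v1.size() && i < v2.size(); ++i) {
        if ((v1[i] >= 0) != (v2[i] >= 0)) continue;        // signs differ: no support
        float lo = std::min(std::fabs(v1[i]), std::fabs(v2[i]));
        float hi = std::max(std::fabs(v1[i]), std::fabs(v2[i]));
        if (hi > 0) m += lo / hi;                          // ratio in [0, 1]
    }
    return m;
}

int main() {
    std::vector<float> a = {10.0f, -4.0f, 2.0f};
    std::vector<float> b = {8.0f, -5.0f, -2.0f};           // last component disagrees
    std::printf("match = %.2f\n", matchValue(a, b));       // 0.8 + 0.8 + 0 = 1.6
    return 0;
}
```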
Each insertion into the table involves selecting a previous match value from the table, adding the current match value, and incrementing a counter for the number of points contributing to the final result. The insertion of the match values into the dynamic programming table requires certain assumptions to prevent the table from becoming degenerate. The assumptions are defined algorithmically using the variables x for the current column, y for the current row, Dp for the dynamic programming table, St for the match sum table, Ct for the point count table, and M for the current match value. The Dp value is generated from St/Ct and, for simplicity, any assignment Dp[x][y] = Dp[x − 1][y − 1] is actually St[x][y] = St[x − 1][y − 1] + M and Ct[x][y] = Ct[x − 1][y − 1] + 1. With that in mind, the assumptions used to generate the Dp table are:
1. If a line segment diverges from the centre line, it cannot converge back to the centre line. This prevents oscillation in the line match, and is enforced using the rules:
– if x = y then Dp[x][y] = Dp[x − 1][y − 1]
– if x > y then Dp[x][y] = best(Dp[x − 1][y − 1], Dp[x − 1][y])
– if x < y then Dp[x][y] = best(Dp[x − 1][y − 1], Dp[x][y − 1])
2. No point in the first line can match more than two points in the second line. This prevents the table from repeatedly using one good feature to compute all matches, and is enforced by defining the behaviour of the best function as:
– if Ct[x1][y1] > Ct[x2][y2] then best(Dp[x1][y1], Dp[x2][y2]) = Dp[x2][y2]
– else if Ct[x1][y1] < Ct[x2][y2] then best(Dp[x1][y1], Dp[x2][y2]) = Dp[x1][y1]
– else best(Dp[x1][y1], Dp[x2][y2]) = max(Dp[x1][y1], Dp[x2][y2])
The best match value is checked in the last column for all rows with an index greater than or equal to the length of ls2, or for any entries in Ct with a count equal to the size of ls1. Once a best match value is found, a back trace is done through the table to find the point matches that produced the best match. The position difference between the matched points in ls1 and ls2 is used to determine the disparity of the matched pixels. Linear regression is used to find a slope for the disparity over all points in ls1. The best match value is then recalculated using the disparity generated by the linear regression line, since the best match value should be based on the final disparity match. The matching process is repeated for all ls2 segments in the potential match set.
To reduce the number of line segments that are compared, a few simple optimizations are performed. The primary filtering is achieved by only comparing line segments that have similar edge cell activations. The secondary method is to apply a threshold to the maximum search space, and this is done in two ways. The first threshold involves placing a bounding rectangle, extended by the size of the search space, around ls1, with a second rectangle placed around ls2. If the two bounding rectangles overlap, then the comparison continues. The second threshold is done on a point-by-point basis, with points outside the search space receiving a match value of zero. These simple optimizations greatly increase the speed of the algorithm, making real-time performance achievable.
4 Experiments
The results of the described line segment extraction and matching algorithms are compared using images from the Middlebury 2005 and 2006 stereo test sets [16], as well as images captured from our low-quality cameras. Summary statistics are shown for the entire image set, but the Baby, Cloth, Reindeer, and Wood image pairs have been selected from the Middlebury set for illustration (see Fig. 6).
Fig. 6. Above are the left camera images from the Middlebury stereo sets [16] used in these experiments
Using the 27 image pairs from the 2005 and 2006 Middlebury test sets, the proposed algorithm has a matching accuracy of 71.8% ± 11.5% for matches within one pixel of error. If only unoccluded pixels are considered (assuming that some form of occlusion detection has been added to the algorithm), the accuracy rises to 78.8% ± 10.0%. In addition, the line segments cover an average of 7.0% ± 2.4% of the image, providing far more density than point matches. Three sample matches are shown in Fig. 7 on the images of the Middlebury test set. The most important results, however, are observed in natural environments using average quality webcams. In these unstructured environments, the minimum and maximum disparities in the horizontal and vertical directions are unknown. For the sake of execution time, a ±80 pixel horizontal and ±20 pixel vertical disparity limit are applied to the 320x240 webcam images. The results are compared qualitatively against a simple SURF [4] implementation by examining the disparities generated at neighbouring points. The resulting disparities are very accurate. Some of the images show that further filtering may be beneficial, and we are currently investigating methods for improved post-processing of matches.
Fig. 7. In the left column is the depth map generated by the line segment matching algorithm. The right column is the ground truth depth map from the Middlebury stereo images [16]. The depth maps have a small dilation applied to them to improve the visibility.
Fig. 8. The top row shows the left and right web cam images. The bottom row shows the line segment matching disparity on the left, with the SURF feature matching on the right.
The running times for the line segment extraction and matching process varied between 0.45 and 1.5 seconds on a 1.6 GHz Pentium M system. Optimization is still required to get running times down to 0.25 seconds, which is the capture rate of the camera system on our mobile robots. This goal should be achievable, but will receive more attention once the matching quality is improved.
5 Conclusion
The method of line segment extraction described in this paper has been demonstrated to produce good quality matches. These algorithms provide a strong base for future work in line segment based stereo vision. By combining integral images and dynamic programming, it has been shown that real-time processing rates are achievable using commodity hardware. The matching algorithm also shows that good quality matching can be achieved without the epipolar constraint. If exact calibrations are not required, then it is possible to build stereo camera rigs with the ability to converge the cameras and focus on specific distances. There are several improvements to both the line segment extraction and matching algorithms proposed for future development. These include improved gradient extraction, using one of the methods evaluated by Heath et al. [10]. The extraction algorithm is very robust; therefore, future development will focus on merging line segments split by noise, and extending end points to meet at junctions. The matching method must be improved to address inaccuracies when dealing with repeated patterns and occlusion boundaries. Repeated patterns will require more of a global optimization over the matching values, rather than the greedy method currently used. Addressing occlusion boundaries is a difficult problem, and may require that the matching be done using values to the left and right of the line segment, rather than centered as they are now. In addition, a method for estimating the remaining pixels is required. It is believed that if the line segment matching is improved, then disparities may be generated by simply interpolating between neighbouring line segments.
References
1. Shi, J., Tomasi, C.: Good features to track. In: IEEE Conference on Computer Vision and Pattern Recognition (1994)
2. Tomasi, C., Kanade, T.: Detection and tracking of point features. Carnegie Mellon University Technical Report CMU-CS-91-132 (April 1991)
3. Lowe, D.G.: Object recognition from local scale-invariant features. In: Proceedings of the International Conference on Computer Vision (ICCV), Corfu, pp. 1150–1157 (1999)
4. Bay, H., Tuytelaars, T., Van Gool, L.: SURF: Speeded up robust features. In: Proceedings of the Ninth European Conference on Computer Vision (May 2006)
5. Hartley, R.I., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, Cambridge (2004)
6. McKinnon, B., Baltes, J., Anderson, J.: A region-based approach to stereo matching for USAR. In: Bredenfeld, A., et al. (eds.) Proceedings of RoboCup 2005: Robot Soccer World Cup IX, Osaka (2005)
7. Zhang, Z.: Estimating motion and structure from correspondences of line segments between two perspective images. IEEE Trans. Pattern Analysis and Machine Intelligence 17(12), 1129–1139 (1995)
8. Bay, H., Ferrari, V., Van Gool, L.: Wide-baseline stereo matching with line segments. In: IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 329–336 (2005)
9. Jang, J.-H., Hong, K.-S.: Fast line segment grouping method for finding globally more favorable line segments. Pattern Recognition 35(10), 2235–2247 (2002)
10. Heath, M., et al.: A robust visual method for assessing the relative performance of edge-detection algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence 19(12), 1338–1359 (1997)
11. Kim, E., Haseyama, M., Kitajima, H.: Fast line extraction from digital images using line segments. Systems and Computers in Japan 34(10) (2003)
12. Mirmehdi, M., Palmer, P.L., Kittler, J.: Robust line-segment extraction using genetic algorithms. In: 6th IEE International Conference on Image Processing and Its Applications, pp. 141–145. IEE Publications (1997)
13. Gonzalez, R.C., Woods, R.E.: Digital Image Processing. Addison-Wesley Longman Publishing Co., Inc., Boston (2001)
14. Hollinghurst, N.J.: Uncalibrated stereo hand-eye coordination. PhD thesis, University of Cambridge (January 1997)
15. Viola, P., Jones, M.: Rapid object detection using a boosted cascade of simple features. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, p. 501 (2001)
16. Scharstein, D., Szeliski, R.: A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision 47(1-3) (2002)
High Resolution Stereo in Real Time
Hani Akeila and John Morris
Department of Electrical and Computer Engineering, The University of Auckland, Auckland 1000, New Zealand
[email protected],
[email protected]
Abstract. Efficient stereo matching algorithms search for corresponding points along the epipolar lines defined by the camera properties and configuration. In real cameras, lens distortions and misaligned cameras lead to sloping, curved epipolar lines on the raw images. To minimize latencies in a system designed to produce 3D environment maps in real time, positions on the epipolar lines must be computed rapidly and efficiently. We describe the rectification components of a real-time high resolution stereo system in which two high resolution cameras are connected via FireWire links to an FPGA which removes lens distortion and misalignment errors with minimal latency. Look-up tables and interpolation are used to provide a balance between high speed and accuracy. After analysing the loss in accuracy from this approach, we show that, for lenses with significant distortions, tables which contain an entry for every 16th pixel provide accurate interpolated intensities. For better (lower distortion) lenses, the look-up tables can be even smaller. The latency is several scan lines and is determined by the lens distortions and camera misalignments themselves. Keywords: FireWire, real-time rectification, high resolution stereo.
1 Introduction
To acquire high resolution 3D environment maps - suitable for precise navigation of any autonomous vehicle - in real time from stereo camera systems, we need high resolution images which permit a large disparity between matching points. This implies a requirement for high speed data transfer between the cameras and the processing system. Stereo matching algorithms are also computationally intensive: state-of-the-art processors do not have sufficient processing power to handle large disparity ranges in high resolution images at 25 fps video rates. Thus our system uses cameras with IEEE 1394a (FireWire) [1] protocol links to a board with an FPGA which has sufficient parallel processing capability to
– remove lens distortions,
– rectify the images and
– run a stereo matching algorithm
at the desired frame rate. Gribbon et al. reported a system [2] which also used look-up tables to remove barrel distortion, but the approach described here can remove arbitrary distortions, including non-spherical ones. Our approach will work with any smooth distortion, and our models are parametrized so that varying amounts of distortion and misalignment can be handled by replacing the look-up table and adjusting the parameters controlling the number of buffered raw scan lines.
1.1 Communications Protocol
When this project started, cameras with USB or FireWire interfaces were readily available. USB cameras generally have low to moderate resolution and are thus not suitable for high resolution work. The USB protocol also requires a central controller and thus more complex drivers [3]. The IEEE 1394a protocol used in our system can support data rates up to 400 Mbps. The newer 1394b protocol can be used by simply replacing the physical layer chip in our system and allows data rates up to 3.2 Gbps. Cameras with Gigabit Ethernet protocol links are now available, but FireWire continues to have some advantages:
– the connected nodes are automatically configured without the need for initialization by a separate controller - the FireWire link layer controller initializes the link automatically,
– chipsets are available for both the physical and link layers, and
– the newer 1394b protocol provides higher data rates.
Distortion Removal
To match high resolution images, it is essential to remove the (albeit small) distortions introduced by real lens systems. Conventionally, a software preprocessing phase would produce an undistorted image from the ‘raw’ image received from a camera. This introduces a latency of more than a full frame and was considered unacceptable for the real time applications that our system targets. Thus our system computes intensities for the virtual pixels of an ideal virtual undistorted image at throughputs matching the input pixel stream and a latency of a small number of scanlines. Images with higher distortions generate longer latencies because of the need to buffer more scan-lines to compute the virtual image intensities, cf. Section 3.2). 1.3
Rectification
Although it is, in principle, possible to align two cameras in a canonical configuration (images planes parallel, corresponding scan lines collinear and optical axes parallel), in practice this may be difficult and - perhaps more importantly - the precise alignment needed for high resolution work might be difficult to maintain in field instruments. Thus our system was designed to handle small misalignments of the cameras in real time. In practice, we remove distortions
74
H. Akeila and J. Morris
and rectify images in the matching algorithm by determining the location of the corresponding pixel ‘on-the-fly’. In this way, the latency for matching - and thus the calculation of a depth value - can be reduced to a small number of scan lines, compared to the more than a frame (> 1000 scan lines) for a conventional ’remove distortions’, ’apply rectification homography’ then ’match’ approach. To achieve this low latency, the distortion removal and rectification stage cannot simply implement the conventional formulae in hardware: that would require several slow multiplications and introduce unacceptable delays. Instead, we use lookup tables to rapidly determine the location of the corresponding pixel. However, for high resolution images, a naive look-up table - which had one location for every pixel in an image - would require an amount of memory which was not available on even the highest capacity FPGAs. Accordingly, we developed an approach which combines a small look-up table with fast interpolation calculations (two additions and a subtraction) to produce a practical system. Linearly interpolating over a curve which is a higher order polynomial has the potential to introduce significant error and thus introduce an additional source of error in the matching process, so we carefully analyzed the errors introduced by our approach, considering a ‘worst-case’ scenario using cheap lens designed for undemanding applications like security cameras, see Section 3.5. This paper is organized as follows: Section 2 describes the FireWire interface. Section 3.2 discusses our new real-time resampling technique. Section 4 sets out the results of experiments on our ‘worst-case’ lenses which determine the largest expected latencies and Section 6 concludes.
2
FireWire Interface
IEEE Standard 1394 (Firewire)[1,3] is a serial interconnection protocol, which can accommodate up to 63 different high-speed peripherals on one bus. Firewire is a peer-to-peer protocol: data are transferred among connected nodes with no need for central control. This reduces resource demands in the FPGA compared to other protocols such as Gigabit ethernet or USB, which would consume FPGA logic for controllers. We designed a small (81 ×58 mm) daughter board which connects two video cameras to an FPGA development kit[4]. It uses two commercially available controller chips - a physical layer controller (TSB12LV32)[5] and a link layer controller (LLC - TSB12LV32IPZ)[6]. Our current implementation conforms to IEEE 1394a which is limited to 400 Mbps - since we use two ports on a single physical layer chip (cf. Section 2), it is adequate for two images up to 1 Mpixel at 25 fps. The interface can be upgraded to the faster IEEE 1394b protocol by replacing the link layer layer chip with a pin compatible chip[7] enabling transfer rates up to 3.2Gb/s - sufficient to support images up to ∼ 12 Mbytes. 2.1
Data Transfer
To provide maximum flexibility and bandwidth, two types of data transfer were implemented: isochronous and asynchronous transfer. Isochronous transfers
High Resolution Stereo in Real Time
75
Left Pixel Shift Register Right Pixel Shift Register L Camera
1
R Camera IEEE 1394a 1
Firewire Deserializer
IEEE 1394a
PClk
8 PA
Right Pixel Shift Register
PAG
8 PA
PAG E O /2 MClk Disparity Calculator d = 0,1 Disparity Calculator d = 2,3
Predecessor Array
3
Shift
3
3 Disparity Calculator d = 4,5
Disparity Calculator d = 0,1
3
Back−track module Disparity value stream
(a)
(b)
Fig. 1. (a) System block diagram: IPG = Interpolated Pixel Generator (combines Figure 3 and Figure 4) (b) Firewire Deserializer
guarantee reception of video data with a fixed latency. Because it is critical that video data not be interrupted during transfer, isochronous data transfer is used for actual data transfer. Asynchronous data transfer is used for transmitting commands (e.g. configuration directives) to the cameras: the link layer controller contains an asynchronous transmit FIFO that absorbs data rate differences between the controller (implemented on the FPGA) and the LLC. 2.2
Interface Architecture
The interface architecture is shown in Figure 1(b): a layered design provides physical layer (PHY), Link Layer Controller (LLC) and Hardware Abstraction Layer (HAL). The PHY layer handles the electrical signals on the the camera links and converts them into digital data streams to the LLC. The LLC layer is
76
H. Akeila and J. Morris
designed to interface to microprocessors or custom logic circuits using a number of internal FIFOs. HAL is a soft layer, programmed in VHDL and implemented on the FPGA. It handles data streams from both cameras - ensuring orderly streaming of data into the FPGA circuitry. 2.3
Circuit Operation
When the Firewire circuit powers up, the two cameras and the FPGA controller configure themselves, with the controller becoming a root node because it is the only node capable of being a bus manager. Configuration occurs only once on power up or reset; it requires 24 cycles or 488 ns at f = 49.152MHz. Firewire packets are limited to 125μs: the LLC will interleave words (two pixels) from the two cameras in these packets. The HAL starts a cycle by transmitting a cycle start packet to synchronize the nodes. Packet overheads (the expected data length, channel number and reception type) require 81 ns for each channel. This is followed by isochronous pixel data from both cameras at 16bits/40.6 ns. When the 125μs cycle completes, the system goes to an idle state until another transfer cycle starts. The FPGA’s internal phase-locked loop is used to regenerate a clean clock which is in phase with the input clock. Because pixels from the left and right cameras are interleaved by the PHY layer chip, the HAL provides 2 parallel load shift registers to buffer two pixels from the left camera until the right camera pixels arrive and they can be delivered in phase to the matching circuit.
3
Real-Time Rectification
Rectification resamples a pair of 2D images to produce the pair of images that would have been captured if the camera system had the canonical stereo configuration (identical cameras with parallel image planes, colinear scan lines and parallel optical axes). In this configuration, the points of a matched pair lie on the same scan-line and the time to search for corresponding points is reduced from O(n2 ) to O(n). 3.1
Fundamental Matrix Estimation
The relative orientation and position of the two cameras can be described by the fundamental matrix[8,9]. For this work, we assume that any one of the many methods described in the literature has been used to determine the fundamental matrix[10,11]. The fundamental matrix can be used to transform images to the ‘ideal’ ones taken with cameras in the canonical configuration. 3.2
Real-Time Resampling
With large images at the usual video frame rates, the pixel clock will be running at > 30 MHz or a pixel every ∼ 30 ns. This is insufficient time to perform
High Resolution Stereo in Real Time
77
the multiplications that are needed by a conventional approach to correcting distortions and rectifying both images. However, the required transformations depend only on the static camera configuration, so they can be pre-computed and stored in look-up tables. For high resolution images (> 1Mpixel), a simple approach that creates a look-up table entry for each pixel requires a prohibitive amount of memory space since each entry requires (dx, dy) - the displacement of a pixel in the distorted image. Thus an entry per pixel look-up table will need 4 Mbytes or more of memory for the table for each camera. High end FPGAs now generally provide significant amounts of internal fast block memory, but the largest published devices are limited to just over 2Mbytes1 . Sufficient memory could be readily added using a single external static RAM chip: chips with capacities of 4Mbytes or more are readily available. However, the access time for an external chip will add unacceptable propagation delays and the internal RAM blocks in the FPGA must be used. However the corrections required to remove distortions and correct for misalignment vary smoothly: distortions are generally modeled adequately by quadratics and misalignments are corrected by linear transformations. Put another way, real images of straight line grids contain smooth curves which can be accurately modeled with piece-wise linear segments. Therefore, without significant loss of accuracy, the look-up tables can contain an entry for every nth pixel and values for intermediate pixels can be computed by linear interpolations. Larger values of n reduce the size of the lookup table but potentially reduce accuracy and add slightly to circuit complexity so we carried out some experiments to determine how large n could be without sacrificing accuracy. We used relatively cheap security camera lenses so that the results of this study would reflect worst-case scenarios2. Accuracy is also affected by the number of bits used to store the displacement vectors so this has been considered in estimating memory requirements. 3.3
Imaging Model
Matching algorithms generally select one image as the ‘reference’ image and scan through it, determining the disparity, or displacement between matching pixels in the two images. Without loss of generality, we will use the left image as the reference image. In our system, we scan through an ideal left image pixel by pixel. This ideal image is aligned so that its scan lines are parallel to the system baseline and its optical axis is perpendicular to the baseline and parallel to the optical axis of the ideal right image. As pixel indices for this ideal image are generated, the look-up table is accessed to determine the four pixels in the real left image that surround the ideal point. The intensity at the ideal point is calculated by interpolation from the four neighbours. The Interpolated Pixel Generator module (’IPG’ in Figure 1)(a) is responsible for locating and computing the intensity of 1
2
For example, Altera’s largest Stratix III device, the EP3SL340 will provide 17 Mbits - when it becomes available in Q1 2008. With 1 Mpixel cameras, good quality fixed focus C-mount lenses exhibited negligible lens distortion - straight lines in the scene appear as straight lines in the image to within 1 pixel.
78
H. Akeila and J. Morris
pixels in this ideal image. As the matching algorithm searches for corresponding pixels, indices in a corresponding ideal right image are generated which are used to locate the four neighbours and interpolation generates the correct image pixel intensity. Using the look-up tables, the number of (slow) multiplications can be reduced to eight - two groups of four multiplications which can be done in parallel - see Figure 4. 3.4
Pixel Address Generator
The system buffers sufficient scan lines from the real images in shift registers (see Figure 1) and then computes pixels for scan lines of the ‘ideal’ image. For the left image, only two pixels are needed by the SDPS algorithm, and the left register has only two entries. For the right image, one pixel for each discrete disparity value is needed and the shift register is correspondingly larger. Figure 3 shows the datapaths through the Pixel Address Generator. As shown in the diagram, displacements for intermediate values are computed from a displacement stored in the look-up table by adding an increment: for an arbitrary xj,k : j = j/s k = k/s
(1) (2)
dxj,k = Xj ,k + (j − j/s) ∗ (Xj +1,k − Xj ,k )/s
(3)
where s = 2m is the step size (m is a small integer) and Xj,k is the dx value in the look-up table at index (j, k). The circuit is pipelined to reduce latency for calculating the virtual pixel intensity: it relies on the fact that pixels are processed sequentially in both images. 3.5
Re-sampling Errors
This section analyzes the several sources of error in the re-sampling process. Linear interpolation. We approximate a smooth curve by piece-wise linear interpolation from values stored in a look-up table for every s pixels. As the step-size, s, increases, then the maximum error in the computed pixel address increases (a in Figure 2), but the memory required for the look-up table decreases. a is also affected by the curvature of a straight line in an image - see Figure 6 which shows this error for various step sizes for the calibration image in Figure 5. Repeated addition. The second term in Equation 3 can be calculated rapidly and efficiently by repeated additions of (Xj +1,k − Xj ,k )/s to Xj ,k , but, in fixed precision arithmetic, these additions also introduce an error, b , in Figure 2. This error is decreased by decreasing s and by increasing the number of bits stored for each value in the look-up table.
High Resolution Stereo in Real Time
79
LUT yp+1
εa εb
yest
ypLUT LU T Fig. 2. Errors caused by look-up table: ypLU T , yp+1 - y values obtained from lookup table; yest - estimated y value; a - error from linear interpolation; b - error from repeated summation
Intensity interpolation. The intensity of the pixel in the ideal image is derived from intensities of four neighbouring pixels in the raw image. For an ideal pixel located at (i, j) in the ideal image where (i, j) maps to (x, y) in the raw image, the ideal intensity is: x = x; y = y
x = x − x ;y = y − y f
ideal Iij
=
f
(4)
f f raw f (Ixraw ,y (1 − x )(1 − y ) + Ix +1,y x (1 f f f f Ixraw + Ixraw ,y +1 (1 − x )y +1,y +1 x y )
(5) − y )+ f
(6) (7)
raw is ∼ Typically, image intensities are 8-bit values so that the error in Ijk 3 0.5% . We want to ensure that the position (x, y) is sufficiently accurate so that any error in it does not increase the error in I ideal , thus x and y should have an error of no more than 1 in 2000 or 0.05%. For images digitized to 10 bits, this error needs to be correspondingly smaller, ∼ 0.01%.
3.6
Data Precision
Logic resources can be saved in an FPGA by using only the needed number of bits to represent any numeric value. Figure 4 shows the varying data path widths used in that circuit to provide sufficient bits to ensure no effective loss of precision while maximising speed and minimising logic used. Fixed point arithmetic is used: the format of the data at every stage is indicated as [x.y] where x is the number of bits preceding the implicit binary point and y the number of bits following it. 3.7
Camera Misalignment
Misalignment of the cameras (or deviations from the canonical configuration) cause the epipolar lines to lie at angles to the scan lines and the images must be 3
Image noise may make this error considerably larger in practice.
80
H. Akeila and J. Morris
LUT dy dx
x ideal
s
x/s
− dxp+1− dxp
s 0 Step start
0 1
0 1
+ dx
+ xraw
Fig. 3. Pixel Address Generator (PAG) calculates xraw , the x-position of a pixel in the raw (distorted) image from the pixel in the ideal image. A similar module computes yraw . Note s may always be set to be 2k , so that division by s shifts by k bits.
Fig. 4. Intensity interpolation: Iout is computed from four neighbouring pixels in the distorted image. Fixed point arithmetic is used: the notation [x.y] implies x bits before the implicit binary point and y bits after it.
corrected before the matching algorithm is applied. We assumed that it would generally be possible to align cameras easily to better than 5o and therefore that a worst case misalignment would be 5o in any direction. For a simple rotation about the optical axis of 5o , the epipolar lines will run across the image at an angle of 5o . Thus the y-range of the epipoloar line for a 1024 pixel image will be: dy = tan(θ)dx = tan(5o ) × 1023 ∼ 90pixels
(8)
To this value, we must add an additional quantum of pixels to allow for lens distortion. In our example, the maximum deviation from a straight line due to distortion was 11 pixels. Thus 90 + 11 = 101 scan lines from the raw image must be buffered in the right image shift register before computation of the ideal image
High Resolution Stereo in Real Time
81
can proceed. This has considerable effect on storage requirements - of the order of ∼ 10kbytes of additional memory will be needed. It also affects the latency of the calculation: the first correct depth values will be delayed by ∼ 105 pixel clocks. For a 25 fps video rate, this translates to a latency of 40 × 10−9 × 105 = 4 ms or roughly 1/10 of a frame time. Clearly, latency - and thus the time available for interpreting the depth map produced by our system in the host computer - can be significantly reduced by careful camera alignment. A misalignment of 1o reduces the latency to ∼ 18 scan lines or ∼ 0.5 ms or ∼ 106 executed instructions on a fast PC.
4
Experimental Results
The resources used and latencies generated in our system are dependent on the quality of the lens and the alignment accuracy. To determine the feasibility of our approach, we chose suitable worst case scenarios for distortion and misalignment. For distortion, Zhang et al.report that a single parameter radial distortion model4 : (9) xu = xd (1 + κr2 ) yu = yd (1 + κr2 ) where (xu , yu ) and (xd , yd ) are the undistorted and distorted pixel positions, κ is the distortion parameter and r is the radial distance from the centre of the image[12]. An inexpensive zoom lens designed for low resolution security cameras (measured κ = 7 × 10−7 pixels−2 , see Figure 5) was chosen as a worst case - any precise navigation system would need to use a (readily available) higher quality lens. For alignment, less than 5o misalignment is readily achieved, so our example resource usage and latency calculations assume κ ≤ 7 × 10−7 pixels−2 and 5o misalignment. Figure 5 shows an image of a rectangular grid obtained with a Marlin F131C camera capable of capturing 1280 × 1024 pixel images and a low-resolution ∼ 12 mm focal length CCTV lens. The lowest grid line in Figure 5(a) was isolated, the y- coordinates of the centre of the line were then plotted against the x-pixel coordinate and a secondorder polynomial fitted - see Figure 5(b). This figure also shows the piece-wise linear approximation to this line for three diffent step sizes. Figure 6 shows the error between the fitted line (assumed to represent the true distorted image) and the piece-wise interpolations for several step sizes. Note that the error is considerably less than a pixel even for the largest step size (s = 1024/4 = 256 pixels), as might be expected for the smooth curve shown. However, since several arithmetic operations are based on these estimates of the true distortion, it is important to ensure that errors cannot build up and affect the matching operation and inferred depth or distance. Figure 7 shows the maximum error in the estimate of the pixel postion for various step sizes and precision of values stored in the look-up table. For our 4
Note that our LUT approach only assumes that we have a smooth continuous function.
82
H. Akeila and J. Morris
65
60
y co−ordinate
55
50
45
40
Extracted edge Fitted curve step size=1024 step size=512 step size=256
35
30
25
0
100
200
300
400
500
600
700
800
900
1000
x co−ordinate
(a) Image of calibration grid with low resolution lens. The lowest line was used to estimate distortion and interpolation accuracy.
(b) Raw data for the lowest grid line in (a); fitted curve and piece-wise linear approximations with various step sizes.
Fig. 5. Severe distortion test 0.025 Step=256 Step=128 Step=64 Step=32 Step=16
error in y co−ordinate
0.02
0.015
0.01
0.005
0
100
200
300
400
500
600
700
800
900
1000
x co−ordinate
Fig. 6. Error in position calculation across a scan line
example ‘worst case’ scenario, the point marked ’Q’ provides a position accuracy of 1 in ∼ 5000 with 16-bit fixed point values stored in the look-up table.
5
Timing Analysis
In this section, we consider how the circuit can be made to function correctly with a clock cycle time of ∼ 40 ns - the time per pixel for 1 Mbyte images arriving at 25fps. The circuitry described here generates ideal intensities required by the disparity calculators (see Figure 1). Since the stereo matching algorithm proceeds
High Resolution Stereo in Real Time
83
−1
10
fp=11 bits fp=14 bits fp=16 bits fp=21 bits −2
Max Error
10
−3
10
−4
10
Q −5
10
0
50
100
150
200
250
300
number of LUT entries
Fig. 7. Error as a function of step size with number of bits in fixed point representation of values as a parameter. ’Q’ marks the optimum settting.
pixel-by-pixel, line-by-line through an image, it is possible to ‘anticipate’ the values of xideal and yideal , they simply step through the image. Thus the calculation of xraw and yraw via the look-up table in Figure 3 can take place in parallel with the the calculation of the previous pixel’s disparity and introduces no effective delay. However, the intensity interpolation (Figure 4) lies on the critical path and affects latency. Simulation shows that this circuit requires ∼ 26ns in a Altera Stratix device (EPS1S60F1508C6). Thus it can generate ideal pixels at the same rate as the camera interface delivers them and introduces only a single cycle of latency. The disparity calculator similarly introduces only a single cycle of latency[13].
6
Conclusion
We have described a new method for removing distortion from an image and rectifying it which has only a few scanlines of latency. The latency is dependent on the degree of distortion or misalignment and can be reduced by using better lenses or more careful alignment. The reduced latency is achieved by using condensed look-up tables to store the information about distortion and misalignment. When the system is implemented in reconfigurable hardware, it can be reconfigured to accommodate different lenses and camera configurations. Because condensed look-up tables can be used (typically one entry per 16 image pixels - or even larger steps in
84
H. Akeila and J. Morris
low distortion systems) the look-up tables can be rapidly re-loaded if dynamic calibration is necessary. Low distortion lenses and careful camera alignment can significantly reduce the latency with which disparity maps can be streamed to software which is interpreting them (e.g. determining avoidance strategies). A reduction in misalignment from 5◦ to 1◦ allows processing software of the order of a million additional instructions for decision making - or reduces the lag from image to reaction by several milliseconds, so, whilst slow-moving devices may tolerate poor alignment and cheap optics, faster objects will not.
References 1. IEEE: Standard 1394 (Firewire). IEEE (1995) 2. Gribbon, K.T., Johnson, C.T., Bailey, D.G.: A real-time FPGA implementation of a barrel distortion correction algorithm with bilinear interpolation. In: Proc Image and Vision Computing, New Zealand, pp. 408–413 (2003) 3. Anderson, D.: FireWire(R) System Architecture: IEEE 1394A, 2nd edn. Mindshare Inc., (1999) 4. Altera, Inc.: NIOS II development kit; Stratix II edition (2002) 5. Texas Instruments: TSB41AB3 Physical Layer Controller Datasheet. Texas Instruments (2003) 6. Texas Instruments: TSB12LV32 Link Layer Controller Datasheet. Texas Instruments (2003) 7. Texas Instruments: IEEE1394b Physical Layer Controller. Texas Instruments (2003) 8. Hartley, R.I.: Theory and practice of projective rectification. International Journal of Computer Vision 35(2), 115–127 (1999) 9. Hartley, R.I., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge (2000) 10. Salvi, J., Armangue, X., Pages, J.: A survey addressing the fundamental matrix estimation problem. In: International Conference on Image Processing. II, pp. 209– 212 (2001) 11. Sheikh, Y., Hakeem, A., Shah, M.: On the direct estimation of the fundamental matrix. In: IEEE Computer Vision and Pattern Recognition, pp. 1–7 (2007) 12. Zhang, Z.: On the epipolar geometry between two images with lens distortion. In: International Conference on Pattern Recognition I., pp. 407–411 (1996) 13. Morris, J., Campbell, R., Woon, J.: Real-time disparity calculator. Technical report, Electrical and Computer Engineering, The University of Auckland (2007)
Stochastically Optimal Epipole Estimation in Omnidirectional Images with Geometric Algebra Christian Gebken and Gerald Sommer Institute of Computer Science Chair of Cognitive Systems Christian-Albrechts-University of Kiel, Germany {chg,gs}@ks.informatik.uni-kiel.de
Abstract. We consider the epipolar geometry between two omnidirectional images acquired from a single viewpoint catadioptric vision sensor with parabolic mirror. This work in particular deals with the estimation of the respective epipoles. We use conformal geometric algebra to show the existence of a 3×3 essential matrix, which describes the underlying epipolar geometry. The essential matrix is preferable to the 4×4 fundamental matrix, which comprises the fixed intrinsic parameters, as it can be estimated from less data. We use the essential matrix to obtain a prior for a stochastic epipole computation being a key aspect of our work. The computation uses the well-tried amalgamation of a least-squares adjustment (LSA) technique, called the Gauss-Helmert method, with conformal geometric algebra. The imaging geometry enables us to assign distinct uncertainties to the image points which justifies the considerable advantage of our LSA method over standard estimation methods. Next to the stochastically optimal position of the epipoles, the method computes the rigid body motion between the two camera positions. In addition, our text demonstrates the effortlessness and elegance with which such problems integrate into the framework of geometric algebra.
1
Introduction
Epipolar geometry is one way to model stereo vision systems. In general, epipolar geometry considers the projection of projection rays from different cameras. The resulting image curve is called epipolar line. The projection of a focal point of another camera is called epipole. Certainly, all epipolar lines must pass through the epipoles. The advantage of epipolar geometry is the search space reduction when doing stereo correspondence analysis: given a pixel in one image the corresponding pixel (if not occluded) must be located on the respective epipolar line. This relation is also expressed by the singular fundamental matrix F, the quadratic form of which attains zero in case a left-image pixel lies on the epipolar line belonging to the right-image pixel, and vice versa. The fundamental matrix contains all geometric information necessary, that is intrinsic and extrinsic parameters, for establishing correspondences between two images. If the intrinsic parameters as focal length, CCD-chip size or the coordinates of the optical center are known, the so-called essential matrix E describes the imaging in terms G. Sommer and R. Klette (Eds.): RobVis 2008, LNCS 4931, pp. 85–97, 2008. c Springer-Verlag Berlin Heidelberg 2008
86
C. Gebken and G. Sommer
of normalized image coordinates, cf. [10]. The aim of this work is to recover the epipoles, which includes the epipolar geometry, in case of an omnidirectional stereo vision system. Single viewpoint catadioptric vision sensors combine a conventional camera with one or two mirrors and provide a panoramic view of 360◦ . Our device is a folded system consisting of two parabolic mirrors and a lens to provide a scaled, approximately orthographic projection from the main mirror, see section 4. According to the work of Nayar et al [8] we can equivalently treat our vision sensor as a single mirror device. Our initial motivation was to compute a dense 3D-reconstruction from two omnidirectional images. This requires an exact knowledge of the underlying epipolar geometry. We decided on the least-squares adjustment technique called Gauss-Helmert method to account for the invariable uncertainties in observational data. In this way the drawback of nonuniform sampling of the vision sensor turns into an advantage as we can assign distinct uncertainties to the image points depending on their position. Specifically, we do motion estimation from which we ultimately infer the positions of the epipoles. They may then be used for a stereo matching with subsequent 3D-reconstruction. Due to the linearity in the stochastic estimation we have to provide a rough initial estimate for the motion which we refer to as rigid body motion (RBM): in section 5 we derive a formula for the essential matrix E for the omnidirectional case. In the following we do not explicitly evaluate E but estimate it from a set of pairs of corresponding image pixels1 . As the essential matrix reflects the epipolar geometry, we can construct the required first approximation of the RBM from E. It is vital to notice that we use conformal geometric algebra (CGA) in every part of this text. The omnidirectional imaging can be ideally described in terms of an inversion being a fundamental operation in CGA. At the same time it provides a favorable framework for the mentioned computation of the uncertainties. Our derivation of E is founded on geometric concepts deduced in and expressed with CGA, which is a new result. Eventually, the underlying tensor notation of CGA enables the (well-tried) amalgamation of the Gauss-Helmert method and geometry, cf. [11]. Before we explain our proposed method in detail, we give an overview of conformal geometric algebra.
2
Geometry with Geometric Algebra
Recently it has been shown [12,13] that the conformal geometry [9] is very attractive for robot vision. Conformal geometric algebra delivers a representation of the Euclidean space with remarkable features: first, the basic geometric entities of conformal geometry are spheres of dimension n. Other geometric entities as points, planes, lines, circles, . . . may be constructed easily. These entities are no longer set concepts of a vector space but elements of CGA. Second, the special Euclidean group is a subgroup of the conformal group, which is in CGA an or1
We initially perform a feature point based stereo matching yielding 50 correspondences on average.
Stochastically Optimal Epipole Estimation
87
thogonal group. Therefore, its action on the above mentioned geometric entities is linear. Third, the inversion operation is another subgroup of the conformal group which can be advantageously used in robot vision. Fourth, CGA generalizes the incidence algebra of projective geometry with respect to the above mentioned geometric entities. For a detailed introduction to geometric algebra (GA) see e.g. [5,2]. Here we only convey a minimal framework. We consider the geometric algebra 4,1 = C( 4,1 ) ⊃ 4,1 of the 5D conformal space, cf. [1,7]. Let {e1 , e2 , e3 , e+ , e− } denote the basis of the underlying Minkowski space 4,1 , where e21 = e22 = e23 = e2+ = 1 = −e2− . We use the common abbreviations e∞ = e− + e+ and eo = 12 (e− − e+ ) for the point at infinity and the origin, respectively. The algebra elements are termed multivectors, which we symbolize by capital bold letters, e.g. A. A juxtaposition of algebra elements, like CR, indicates their geometric product, being the associative, distributive and noncommutative algebra product. We denote ‘·’ and ‘∧’ the inner and outer (exterior) product, respectively. A point x ∈ 3 ⊂ 4,1 of the Euclidian 3D-space is mapped to a conformal point (null vector) X, with X 2 = 0, by the embedding operator K
K(x) = X = x +
1 2
x2 e∞ + eo .
(1)
A geometric object, say O, can now be defined in terms of its inner product null space (O) = {X ∈ 4,1 |X · O = 0}. This implies an invariance to scalar multiples (O) = (λ O), λ ∈ \{0}. Bear in mind that the object definitions given here remain variable in this respect. Verify that S = s+ 12 (s2 −r2 )e∞ + eo favorably represents a 2-sphere centered at s ∈ 3 with radius r. Similarly, if n ∈ 3 , with n2 = 1, then P = n + d e∞ represents a plane at a distance d from the origin with orientation n. Using these vector valued geometric primitives higher order entities can be built: given two objects, e.g. the planes P1 and P2 , their line of intersection L is simply the outer product L = P1 ∧ P2 . The outer product of linearly independent points leads to an alternative representation of CGA entities which is literally dual to the inner product null space representation. The term K = A1 ∧ A2 ∧ A3 ∧ A4 , for example, represents the sphere that passes through the points {A1 , A2 , A3 , A4 } in that a point X ∈ 4,1 lies on the sphere iff X ∧ K = 0. Now let S be the element dual2 to K. It may then be shown that X ∧ K = 0 iff X · S = 0 and hence X ∈ (S); especially {A1 , A2 , A3 , A4 } ⊂ (S). Depending on the points the outer product may also yield circles, planes, lines, . . . An important GA operation is the reflection. The reflection of A in a vector valued object O is given by the sandwiching product B = OAO. The most general case is the reflection in a sphere, which actually represents an inversion. Note that any RBM can be represented by consecutive reflections in planes. In CGA the resulting elements, for example M = P1 P2 , are generally referred to as , whereby the motors, cf. [12]. Some object C would then be subjected to M C M
2
Although the role of the dual operation is paramount in GA, cf. [2,5], it is only touched on in this text. Building the dual of a multivector amounts to a multiplication with the pseudoscalar I = e1 e2 e3 e+ e− , I 2 = −1.
88
C. Gebken and G. Sommer
= P2 P1 . In symbol ‘ ’ stands for an order reversion, i.e. the reverse of M is M case of only two involved planes we distinguish between a translator T (parallel planes) and a rotor R (intersecting planes). Note that every motor satisfies = 1. Since reflection is an involution, a double the unitarity condition M M by reflection O(OAO)O must be the identity w.r.t (A), therefore O2 ∈ associativity. It looks like a conclusion, but in GA every vector valued element X ∈ 4,1 squares to a scalar by definition X 2 := X · X ∈ . Using the above definitions, we have S 2 = r2 and P 2 = 1.
2.1
Geometric Algebra and Its Tensor Notation
We take a look beyond the symbolic level and question how we can realize the structure of geometric algebra numerically. We show a way that makes direct use of the tensor representation inherent in GA. If {E1 , E2 , . . . , E2n } denotes the 2n -dimensional algebra basis of n , then a multivector A ∈ n can be written as A = ai Ei , where ai denotes the ith n component of a vector3 a ∈ 2 and a sum over the repeated index i is implied. We use this Einstein summation convention also in the following. If B = bi Ei and C = ci Ei , then the components of C in the algebra equation C = A ◦ B can be evaluated via ck = ai bj Gkij . Here ◦ is a placeholder for the algebra n n n product and Gkij ∈ 2 ×2 ×2 is a tensor encoding this product (we use sans serif letters as a, g or G to denote vectors, matrices, tensors or generally any n n regular arrangement of numbers). If we define the matrices U, V ∈ 2 ×2 as i k j k U(a) := a Gij and V(b) := b Gij , then c = U(a) b = V(b) a. This perfectly reveals the bilinearity of algebra products. We define a mapping Φ and can then write Φ(A) = a, Φ(A◦) = U, Φ(A◦B) = ai bj Gkij or if a = ai ei is an element of a Euclidian vector space, Φ(a) = a as well. Note that we reduce the complexity of equations considerably by only mapping those components of multivectors that are actually needed. As an example, a vector in n can have at most n non-zero components. Also, the outer product of two vectors will not produce 3-vector components, which can thus be disregarded. In the following we assume that Φ maps to the minimum number of components necessary.
2.2
Conformal Embedding - The Stochastic Supplement
We have to obey the rules of error propagation when we embed points by means of function K, equation (1). Assume that point x is a random vector with a ¯ is its mean value. Furthermore, we denote the 3×3 Gaussian distribution and x covariance matrix of x by Σx . Let E denote the expectation value operator, ¯ The uncertain representative in conformal space, i.e. the such that E[x] = x. ¯ is determined by a sphere with imaginary stochastic supplement for X = K(x), radius ¯ 2 e∞ + eo + 12 trace Σx e∞ ¯ + 12 x (2) E K(x) = x 3
At least numerically, there is no other way than representing multivectors as vectors.
Stochastically Optimal Epipole Estimation
89
rather than the pure conformal point K E[x] . However, in accordance with [11] we refrain from using the exact term since the advantages of conformal points over spheres with imaginary radius outbalance the numerical error. ¯ by We evaluate the corresponding 5×5 covariance matrix ΣX for X = K(x) means of error propagation and find ¯ Σx JT ¯ , ΣX = JK (x) K (x) where we used the Jacobian of K
⎡
1 ⎢ 0 ⎢ ∂K ¯ := =⎢ JK (x) ⎢ 0 ∂X ⎣x ¯1 0
3
0 1 0 x ¯2 0
⎤ 0 0 ⎥ ⎥ 1 ⎥ ⎥. x ¯3 ⎦ 0
(3)
(4)
Stochastic Estimation Method
In the field of parameter estimation one usually parameterizes some physical process P in terms of a model M and a suitable parameter vector p. The components of p are then to be estimated from a set of observations originating from P. Here, we introduce our two parameter estimation methods, the common GaussMarkov method and the most generalized case of least squares adjustment, the Gauss-Helmert method. Both are founded on the respective homonymic linear models, cf. [6]. The word ’adjustment’ puts emphasis on the fact that an estimation has to handle redundancy in observational data appropriately, i.e. to weight unreliable data to a lesser extent. In order to overcome the inherent noisiness of measurements one typically introduces a redundancy by taking much more measurements than necessary to describe the process. Each observation must have its own covariance matrix describing the corresponding Gaussian probability density function that is assumed to model the observational error. The determination of which is inferred from the knowledge of the underlying measurement process. The matrices serve as weights and thereby introduce a local error metric. The principle of least squares adjustment, i.e. to minimize the sum of squared weighted errors Δyi , is often denoted as ΔyiT Σyi−1 Δyi −→ min , (5) i
where Σyi is a covariance matrix assessing the confidence of yi . Let {b1 , b2 , . . . , bN } be a set of N observations, for which we introduce the abbreviation {b1...N }. Each observation bi is associated with an appropriate covariance matrix Σbi . An entity, parameterized by a vector p, is to be fitted to the observational data. Consequently, we define a condition function g(bi , p) which is supposed to be zero if the observations and the entity in demand fit algebraically.
90
C. Gebken and G. Sommer
Besides, it is often inevitable to define constraints h(p) = 0 on the parameter vector p. This is necessary if there are functional dependencies within the parameters. Consider, for example, the parameterization of a Euclidean normal vector n using three variables n = [n1 , n2 , n3 ]T . A constraint nT n = 1 could be avoided using spherical coordinates θ and φ, i.e. n = [cos θ cos φ, cos θ sin φ, sin θ]T . In the following sections, we refer to the functions g and h as G-constraint and H-constraint, respectively. Note that most of the fitting problems in these sections are not linear but quadratic, i.e. the condition equations require a linearization and estimation becomes an iterative process. An important issue is thus the search for an initial estimate (starting point). If we know an already good estimate ˆp, we can make ˆ) ≈ 0. Hence, p)Δp + g(bi , p a linearization of the G-constraint yielding (∂p g)(bi ,ˆ ˆ): Ui Δp = yi + Δyi , which exactly with Ui = (∂p g)(bi ,ˆ p) and yi = −g(bi , p matches the linear Gauss-Markov model. The minimization of equation (5) in conjunction with the Gauss-Markov model leads to the best linear unbiased estimator. Note that we have to leave the weighting out in equation (5), since our covariance matrices Σbi do not match the Σyi . Subsequently, we consider a model which includes the weighting. ˆ If we take our observations as estimates, i.e. {b 1...N } = {b1...N }, we can make ˆ ˆ) yielding a Taylor series expansion of first order at (bi , p ˆi ,ˆ ˆi ,ˆ ˆi , p ˆ) ≈ 0 . p)Δp + (∂b g)(b p)Δbi + g(b (∂p g)(b
(6)
ˆi ,ˆ p) we obtain Ui Δp + Vi Δbi = yi , which exactly Similarly, with Vi = (∂b g)(b matches the linear Gauss-Helmert model. Note that the error term Δyi has been replaced by the linear combination Δyi = −Vi Δbi ; the Gauss-Helmert differs from the Gauss-Markov model in that the observations have become random variables and are thus allowed to undergo small changes Δbi to compensate for errors. But changes have to be kept minimal, as observations represent the best available. This is achieved by replacing equation (5) with −1 ΔbT (7) i Σbi Δbi −→ min , i
where Δbi is now considered as error vector. The minimization of (7) subject to the Gauss-Helmert model can be done T T T using Lagrange multipliers. By introducing Δb = [ΔbT 1 , Δb2 , . . . , ΔbN ] , Σb = T T T diag([Σb1 , Σb2 , . . . , ΣbN ]), U = [UT , U , . . . , U ] , V = diag([V , V 1 2 , . . . , VN ]) 1 2 N T T and y = [y1T , y2T , . . . , yN ] the Lagrange function Ψ , which is now to be minimized, becomes Ψ (Δp, Δb, u, v) =
T T 1 ΔbT Σb−1 Δb − U Δp + V Δb − y u + H Δp − z v . (8) 2
The last summand in Ψ corresponds to the linearized H-constraint, where H = p) and z = − h(ˆ p) was used. That term can be omitted, if p has no func(∂p h)(ˆ tional dependencies. A differentiation of Ψ with respect to all variables gives an extensive matrix equation, which could already be solved. Nevertheless, it can
Stochastically Optimal Epipole Estimation
91
T T −1 be considerably reduced with the substitutions N = Ui i Ui Vi Σbi Vi T T −1 and zN = y. The resultant matrix equation is free from i Ui Vi Σbi Vi Δb and can be solved for Δp Δp zN N HT . (9) = z v H 0 For the corrections {Δb1...N }, which are now minimal with respect to the Mahalanobis distance (7), we compute −1 yi − Ui Δp . Δbi = Σbi ViT Vi Σbi ViT
(10)
It is an important by-product that the (pseudo-) inverse of the quadratic matrix in equation (9) contains the covariance matrix ΣΔp = Σp belonging to p. The similar solution for the Gauss-Markov model and the corresponding proofs and derivations can be found in [6].
4
Omnidirectional Imaging
Consider a camera, focused at infinity, which looks upward at a parabolic mirror centered on its optical axis. This setup is shown in figure 1. A light ray emitted
Fig. 1. Left: mapping (cross-section) of a world point Pw : the image planes π1 and π2 are identical. Right: mapping of line L to Lπ via great circle LS on S.
from world point Pw that would pass through the focal point F of the parabolic mirror M , is reflected parallel to the central axis of M , to give point p2 on image plane π2 . Now we use the simplification that a projection to sphere S with a subsequent stereographic projection to π1 produces a congruent image on π1 . Accordingly, point Pw maps to PS and further to p1 , see figure 1. Together with the right side of figure 1 it is intuitively clear that infinitely extended lines form great circles on S. Moreover, a subsequent stereographic projection, being
92
C. Gebken and G. Sommer
a conformal mapping, results in circles on the image plane which then are no more concentric. For details refer to [4]. Note that the stereographic projection can also be done in terms of an inversion in a suitable sphere SI centered at the North Pole N of S. For the scenario √ depicted in figure 1 one can easily verify that the radius of SI must be 2 r, if r denotes the radius of S. Using CGA we have PS = SI p1 SI .
5
Discovering Catadioptric Stereo Vision with CGA
We now formulate a condition for the matching of image points. This enables the derivation of the fundamental matrix F and the essential matrix E for the parabolic catadioptric case. Consider the stereo setup of figure 2 in which the imaging of world point Pw is depicted. Each of the projection spheres S and S represents the catadioptric imaging device. The interrelating 3D-motion is denoted RBM (rigid body motion). Note that the left coordinate system is also rotated about the vertical axis. The two projections of Pw are X and Y . Let x and y be their respective image points (on the image plane). The inverse stereographic projection of x and y yields the points X and Y , which here share the camera coordinate system centered at F . In order to do stereo our considerations must involve the RBM, that we denote by the motor M from now on. Hence we can write Y = M Y M . and so {S , F } = M {S, F }M
Fig. 2. Omnidirectional stereo vision: the projection of ray LX (LY ) is the great circle CY (CX ). The 3D-epipole E (E ) is the projection of the focal point F (F ).
As we already know from section 4, lines project to great circles; the projection of LX on S , for example, is the circle CY including Y . This motivates the underlying epipolar geometry since all those great circles must pass through the point E being the projection of F . This must be the case because independent of Pw all triangles F Pw F have the line connecting F and F , called baseline, in common. Subsequently we refer to the 3D-points E and E as 3D-epipoles.
Stochastically Optimal Epipole Estimation
93
Intelligibly, the two projection rays LX and LY intersect if the four points F , Y , X and F are coplanar. We express this condition in terms of CGA. The outer product of four conformal points, say A1 , A2 , A3 and A4 , results in the sphere KA = A1 ∧ A2 ∧ A3 ∧ A4 comprising the points. If these are coplanar the sphere degenerates to the special sphere with infinite radius - which is a plane. Recall from section 2 that a plane lacks the eo -component in contrast to a sphere. The explanation is that the eo -component carries the value (a2 −a1 )∧(a3 −a1 )∧ (a4 − a1 ) which amounts to the triple product (a2 − a1 ) · ((a3 − a1 ) × (a4 − a1 )), where Ai = K(ai ∈ 3 ), 1 ≤ i ≤ 4. Our condition must therefore ensure the e∗o -component4 to be zero
G = F ∧X ∧F ∧Y = (F ∧ X) ∧ M F ∧ Y M
e∗
=o
0.
(11)
Using the abbreviations X = F ∧ X and Y = F ∧ Y the upper formula reads e∗ o = G = X ∧ MY M 0. We now exploit the tensor representation introduced in section 2.1 and obtain gt (m, x, y) = xk Otkc (mi Gail yl Gcab Rbj mj ) ,
(12)
where Φ(G) = g, Φ(X) = x, Φ(Y ) = y and Φ(M ) = m. Note that we only have to take a particular t = t∗ into account which indexes to the e∗o -component of the result. After setting F = eo it can be shown that x and y in fact denote the Euclidean 3D-coordinates on the projection spheres, i.e. x, y ∈ 3 . For the motor M we have m ∈ 8 in CGA, which is an overparameterization as an RBM has six degrees of freedom. The product tensors O, G and R denote the outer product, the geometric product and the reverse, respectively. If we fix a motor ∗ m we get the bilinear form g(x, y) = xk Ekl yl with Ekl = Otkc mi Gail Gcab Rbj mj . The condition is linear in X and linear in Y as expected by the bilinearity of the geometric product. Its succinct matrix notation is
xT E y = 0 ,
(13)
where E ∈ 3×3 denotes the essential matrix of the epipolar geometry. We do not give a proof but mention that equation (13), which ultimately reflects a triple product, is zero if and only if there is coplanarity between the four points F , Y , X and F . Next, if we set Y = E or X = E we get E y = 0 and xT E = 0, respectively. Otherwise, say E y = n ∈ 3 , we could choose an X such that the corresponding x is not orthogonal to n and get xT E y = 0. This would imply that the points F , E , F and the chosen X are not coplanar, which must be a contradiction since F , E and F are already collinear. Hence the 3D-epipoles reflect the left and right nullspace of E and we can infer that the rank of E can be at most two.
Because E does solely depend on the motor M , comprising the extrinsic parameters, it can not be a fundamental matrix, which must include the intrinsic 4
The component dual to eo -component (denoted e∗o ) must be zero as the outer product representation is dual to the sphere representation given in section 2.
94
C. Gebken and G. Sommer
parameters as well. Fortunately, we can easily extend our previous derivations to obtain the fundamental matrix F. Recall the image points x and y. They are related to X and Y in terms of a stereographic projection. As already stated in section 4, a stereographic projection is equal to an inversion in a certain sphere, but inversion is the most fundamental operation in CGA. In accordance with figure 1 we can write X = SI Xπ SI , where Xπ = K(x) denotes the conformal embedding of point x on the image plane π. Note that the inversion sphere SI depends on the focal length of the parabolic mirror and on the CCD-chip size, i.e. a scaling factor. In this way e∗
equation (11) becomes G = F ∧ (SI Xπ SI ) ∧ F ∧ (SI Yπ SI ) =o 0. In addition, the principal point offset (the coordinates of the pixel where the optical axis hits the image plane) can be included by introducing a suitable translator TC . We then would have to replace SI by the compound operator Z = SI TC . We would finally obtain ∧ F ∧ (ZY Z) G = F ∧ (ZXπ Z) π
e∗
=o
0.
(14)
However, equation (14) is still linear in the points Xπ and Yπ . We refrain from specifying the corresponding tensor representation or the fundamental matrix, respectively. Instead we show a connection to the work of Geyer and Daniilidis [3]. They have derived a catadioptric fundamental matrix of dimension 4 × 4 for what they call lifted image points which live in a 4D-Minkowski space (the fourth basis vector squares to −1). The lifting raises an image point p = [u, v]T ˜ ∈ 3,1 is onto a unit sphere centered at the origin such that the lifted point p collinear with p and the North Pole N of the sphere. Thus the lifting corresponds to a stereographic projection. The lifting of p is defined as
˜ = [2u, 2v, u2 + v 2 − 1, u2 + v 2 + 1]T . p
(15)
Compare the conformal embedding P = K(p) = u e1 +v e2 + 12 (u2 +v 2 ) e∞ +1 eo in the e∞ - eo -coordinate system (Φ discards the e3 -coordinate being zero) Φ(P ) = [u, v, 12 (u2 + v 2 ), 1]T . We can switch back to the e+ -e− -coordinate system of the conformal space with the linear (basis) transformation B ⎡ ⎤ ⎤ ⎤ ⎡ ⎡ 100 0 u u ⎢0 1 0 0 ⎥ ⎢ ⎥ ⎥ ⎢ v v ⎥ ⎥ ⎥ ⎢ ⎢ B Φ(P ) = ⎢ ⎣ 0 0 1 − 1 ⎦ ⎣ 1 (u2 + v 2 ) ⎦ = ⎣ 1 (u2 + v 2 − 1) ⎦ . 2 2 2 1 2 2 1 0 0 1 + 12 2 (u + v + 1) The lifting in equation (15) is therefore identical to the conformal embedding ˜ = 2B Φ(Xπ ) = 2B Φ(K(x)). up to a scalar factor 2. At first, this implies that x ˜ ∈ 4×4 denotes a fundamental matrix for lifted points then, by Second, if F ˜ is the fundamental matrix that we would obtain from analogy, F = 4 BT FB equation (14).
Stochastically Optimal Epipole Estimation
6
95
Estimating Epipoles
The results of the previous section are now to be applied to the epipole estimation. We use the essential matrix E to estimate the epipoles for two reasons. First, the intrinsic parameters do not change while the imaging device moves; one initial calibration is enough. Second, the rank-2 essential matrix is only of dimension 3 × 3 and one needs at least eight points for the estimation. We choose nine pairs of corresponding image points. For each image point = {x1...9 } and x we compute x = Φ( eo ∧ K(x)) ∈ 3 . We finally have = {y1...9 }. Every x-y-pair must satisfy equation (13) which can be rephrased as vec(xyT )T vec(E) = 0, where vec(·) reshapes a matrix into a column vector. Hence the best least-squares approximation of vec(E) is the right-singular vector to the smallest singular value of the matrix consisting of the row vectors ˇ be the estimated essential matrix. The left- and )T , 1 ≤ i ≤ 9. Let E vec(xi yT i ˇ are then our approxright-singular vectors to the smallest singular value of E imations to the 3D-epipoles, as described above. The epipoles then serve as a prior for the stochastic epipole estimation explained in the following section.
7
Stochastic Epipole Estimation
Here we describe the second key contribution of our work. The epipoles computed via the essential matrix are now to be refined. We do so by estimating the interrelating motor M , parameterized by m ∈ 8 . The direction to the and M F M . Note 3D-epipoles can then be extracted from the points M F M that the former point equals F . In order to apply the Gauss-Helmert method introduced in section 3 we have to provide the G-constraint, the H-constraint, the observations with associated uncertainties and a rough initial estimate. = As input data we use all N pairs of corresponding points, i.e. we use {x1...N } and = {y1...N }. An observation is a pair (xi , yi ), 1 ≤ i ≤ N . In our case the G-constraint is simply equation (12) with t = t∗
g t (m, x, y) = xkn Otkc (mi Gail yln Gcab Rbj mj ),
1≤n≤N,
(16)
Hence differentiating with respect to x, y and m yields the required matrices V and U, respectively. Since an RBM is defined by six rather than eight parameters, we need the H-constraint. To ensure that m encodes a motor it is sufficient to = 1. We also have to constrain the unitarity of M by demanding that M M constrain that M does not converge to the identity element M = 1. Otherwise the condition equation (11) would become G = F ∧ X ∧ F ∧ Y being zero at , all times. This is achieved by constraining the e∞ -component of F = M F M 5 F = eo , to be 0.5. Thus the distance between F and F is set to one . The ˆ is the unit length translator TE along the direction of the initial estimate m ˆ = Φ(TE ). 3D-epipole E, i.e. m 5
This can be done as the true distance between F and F cannot be recovered from the image data.
96
C. Gebken and G. Sommer
We assume that all image points initially have the same 2D-uncertainty given by a 2×2 identity covariance matrix, i.e. we assume an pixel error of one in row and column. The conformal embedding then adjusts the covariance matrices as explained in section 2.2. Recall that the computation of our observations, say x, involves a stereographic projection that we perform by means of an inversion. The points thereby obtain distinct 3D-uncertainties accounting for the imaging geometry. The mapping of a far image point to a point close to the North Pole N of S, for example, is less affected by noise and will thus inhere with a higher confidence, see figure 1. Mathematically, the uncertainties are computed using standard error propagation, where we profit from the inversion being an element of 4,1 . Note that the obtained inhomogeneity in the uncertainties is one of main justifications for the use of the Gauss-Helmert method.
Fig. 3. The distance between the images is 3.5m. The distance and the ground truth epipoles were measured with a laser device. The red circles symbolize the epipoles found via the essential matrix. The green epipoles reflect the improved solutions of the Gauss-Helmert method. The difference seems to be small but an exact epipole is crucial for further processing.
The results of the Gauss-Helmert method can be seen in figure 3, which shows our most challenging scenario with a translation of 3.5m in between the camera positions. The red circles indicate the epipoles found via the essential matrix approach described in section 5. The green circles show the stochastically optimal solution provided by the Gauss-Helmert method which exactly agree with the ground truth epipoles. These were determined with a laser along which the camera was moved. However, the uncertainty in the laser measured epipole positions is too big compared to the variation in the Gauss-Helmert results, which is why we do not state numerical results.
8 Conclusion
The objective of this work was to estimate the epipoles in omnidirectional images. The expression derived for the essential matrix is a new result and was shown to be correct. It gives some new insight into the structure of the essential matrix and enables a geometric interpretation. Concerning the fundamental matrix we identified an important connection to the work of Geyer and Daniilidis. We use the Gauss-Helmert method for the estimation of the stochastically optimal epipoles. As a byproduct we obtain the motion estimation between two omnidirectional images. The experimental results demonstrate that our method does provide exact results. Evidently, this work is also a valuable contribution to the field of (conformal) geometric algebra, which turned out to be the ideal framework to deal with geometry.
References

1. Angles, P.: Construction de revêtements du groupe conforme d'un espace vectoriel muni d'une métrique de type (p, q). Annales de l'institut Henri Poincaré 33, 33–51 (1980)
2. Dorst, L., Fontijne, D., Mann, S.: Geometric Algebra for Computer Science: An Object-Oriented Approach to Geometry. Morgan Kaufmann Publishers Inc., San Francisco (2007)
3. Geyer, C., Daniilidis, K.: Properties of the catadioptric fundamental matrix. In: Heyden, A., et al. (eds.) ECCV 2002. LNCS, vol. 2352, pp. 140–154. Springer, Heidelberg (2002)
4. Geyer, C., Daniilidis, K.: Catadioptric projective geometry. International Journal of Computer Vision 45(3), 223–243 (2001)
5. Hestenes, D., Sobczyk, G.: Clifford Algebra to Geometric Calculus: A Unified Language for Mathematics and Physics. Reidel, Dordrecht (1984)
6. Koch, K.-R.: Parameter Estimation and Hypothesis Testing in Linear Models. Springer, Heidelberg (1997)
7. Li, H., Hestenes, D., Rockwood, A.: Generalized homogeneous coordinates for computational geometry. In: Geometric Computing with Clifford Algebras: Theoretical Foundations and Applications in Computer Vision and Robotics, pp. 27–59. Springer, London (2001)
8. Nayar, S.K., Peri, V.: Folded catadioptric cameras. In: Proc. IEEE Conference on Computer Vision and Pattern Recognition, Ft. Collins, CO, USA, p. 2217 (1999)
9. Needham, T.: Visual Complex Analysis. Clarendon Press, Oxford (1997)
10. Faugeras, O.: Three-Dimensional Computer Vision. MIT Press, Cambridge (1993)
11. Perwass, C., Gebken, C., Sommer, G.: Estimation of geometric entities and operators from uncertain data. In: Kropatsch, W.G., Sablatnig, R., Hanbury, A. (eds.) DAGM 2005. LNCS, vol. 3663, pp. 459–467. Springer, Heidelberg (2005)
12. Rosenhahn, B., Sommer, G.: Pose estimation in conformal geometric algebra, part I: the stratification of mathematical spaces. Journal of Mathematical Imaging and Vision 22, 27–48 (2005)
13. Rosenhahn, B., Sommer, G.: Pose estimation in conformal geometric algebra, part II: real-time pose estimation using extended feature concepts. Journal of Mathematical Imaging and Vision 22, 49–70 (2005)
Modeling and Tracking Line-Constrained Mechanical Systems

B. Rosenhahn¹, T. Brox², D. Cremers³, and H.-P. Seidel¹

¹ Max Planck Institute for Computer Science, Stuhlsatzenhausweg 85, D-66123 Saarbrücken, Germany
[email protected]
² Department of Computer Science, University of Dresden, 01062 Dresden, Germany
³ Computer Vision Group, University of Bonn, Römerstr. 164, 53117 Bonn, Germany
Abstract. This work deals with modeling and tracking of mechanical systems which are given as kinematic chains with restricted degrees of freedom. Such systems may involve many joints, but due to additional restrictions or mechanical properties the joints depend on each other. So-called closed-chain or parallel manipulators are examples of kinematic chains with additional constraints. Though the degrees of freedom are limited, the complexity of the dynamic equations increases rapidly when studied analytically. In this work, we suggest avoiding this kind of analytic integration of interconnection constraints and instead modeling them numerically via soft constraints.
1 Introduction

In robotics a kinematic chain represents a set of rigid links which are connected by a set of joints. The term forward kinematics of a robot refers to the configuration of an end-effector (the gripper of a robot arm) as a function of the joint angles of the robot links. For many systems it is sufficient to deal with prismatic (sliding) or revolute (rotating) joints in a so-called Denavit-Hartenberg parametrization [8]. The term inverse kinematics refers to the opposite problem: given a desired configuration of the end-effector, we are interested in finding the joint angles that achieve this configuration. Besides its numerous applications in robotics, the computation of the inverse kinematics is also the classical problem statement for motion capturing (MoCap): here the task is to determine the position and orientation as well as the joint angles of a human body from image data. For classical robot manipulators or human beings, the end-effector is (despite physical boundaries) unconstrained, which means that the end-effector can move freely in space. This is also called an open-chain (or serial link) system, and many works exist in the fields of robotics and computer vision to compute pose configurations for robots or human beings [4,9,15,14,11,13,18,5]. Besides open-chain systems, a variety of robots are equipped with so-called closed-chain manipulators. A four-bar linkage system is a famous example [16].
Fig. 1. Example frames of a piston (left) and of connected pistons forming a 3-cylinder motor on a crankshaft (right)
A generalization of closed-chain manipulators means to constrain parts of a kinematic chain to a limited degree of freedom, thereby reducing the number of degrees of freedom. We call this kind of system a constrained manipulator. One example is a piston or connected pistons, see Figure 1, which are constrained to keep a specific orientation due to the cylinder. Though there are three joints involved, just one parameter (e.g. a joint angle) is sufficient to determine the remaining angles. This follows from the fact that the piston can only move along the cylinder while maintaining its orientation. A similar situation arises for the set of pistons on the crankshaft: though 3 × 3 joints are involved, just one single parameter is needed to describe the configuration of the system. Thus, it is important to express the constraint that a piston is only allowed to move up and down inside the cylinder. Though the number of parameters needed to describe the system configuration is reduced, the complexity of the dynamic equations increases rapidly, since there are constraints between the joint coordinates. For controller design, closed-chain manipulators have been an active topic of research for many years [10,1,12,7,20]. In this work, we do not develop a controller system for active and passive joints. We are rather interested in tracking closed-chain and constrained manipulators from image data. A theoretical work dealing with reduced equations of robotic systems using Lie groups can be found in [17]. Figure 2 shows some examples of constrained manipulators which we studied in our experiments. For instance, the pantograph type wardrobe model involves eight joints, but since the midpoints of the limbs are connected, the motion is restricted to a translation of the end-effector forwards and backwards. The second example is a rotating crankshaft and piston in a one-cylinder engine assembly. Our experiments show that one can successfully capture the single degree of freedom of the piston without explicitly modeling the complex dynamics.
2 Twists, Kinematic Chains and Pose Estimation

In this section we recall the mathematical foundations needed for modeling the open-chain and constrained manipulators. We further briefly introduce the pose estimation procedure used for the experiments.
Fig. 2. Top: Examples for a pantograph type of constrained kinematic chains to lift projectors (left), to adjust a cosmetics mirror (middle), or a wardrobe for children (right). Bottom: The second example: a rotating crankshaft and piston in a one cylinder engine assembly. Both types of constrained kinematic chains will be used in the experiments.
2.1 Twists

A rigid body motion of a 3D point $x$ can be expressed in homogeneous coordinates as
$$X' = (x'^\top, 1)^\top = MX = M\,(x^\top, 1)^\top = \begin{pmatrix} R & t \\ 0_{1\times 3} & 1 \end{pmatrix} \begin{pmatrix} x \\ 1 \end{pmatrix}. \tag{1}$$
The matrix $R$ is a rotation matrix, $R \in SO(3)$, and $t$ is a translation vector. The set of all matrices of type $M$ is called the Lie group $SE(3)$. To every Lie group there exists an associated Lie algebra, whose underlying vector space is the tangent space of the Lie group, evaluated at its origin. The Lie algebra associated with $SE(3)$ is $se(3) := \{(v, \omega)\,|\,v \in \mathbb{R}^3, \omega \in so(3)\}$, with $so(3) := \{A \in \mathbb{R}^{3\times3}\,|\,A = -A^\top\}$. Elements of $so(3)$ and $se(3)$ can be written as vectors $\omega = (\omega_1, \omega_2, \omega_3)^\top$, $\xi = (\omega_1, \omega_2, \omega_3, v_1, v_2, v_3)^\top$ or matrices

$$\hat{\omega} = \begin{pmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{pmatrix}, \qquad \hat{\xi} = \begin{pmatrix} \hat{\omega} & v \\ 0_{1\times3} & 0 \end{pmatrix}. \tag{2}$$

A twist $\xi$ can be converted into an element of the Lie group $M \in SE(3)$ by computing its exponential, which can be done efficiently using the Rodrigues formula [16]. For varying $\theta$, the one-parametric Lie subgroup $M_\theta = \exp(\theta\hat{\xi})$ yields a screw motion around a fixed axis in space. A degenerate screw (without pitch component) will be used to model revolute joints.
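For illustration, the twist exponential can be evaluated in closed form as follows. This is a minimal sketch assuming a unit-length rotation axis; the function names are illustrative, not taken from the authors' software.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector, cf. equation (2)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def twist_exp(xi, theta):
    """exp(theta * xi_hat) for a twist xi = (w1, w2, w3, v1, v2, v3) with |w| = 1,
    using the closed-form Rodrigues formula instead of a series expansion."""
    w, v = np.asarray(xi[:3], float), np.asarray(xi[3:], float)
    W = hat(w)
    R = np.eye(3) + np.sin(theta) * W + (1.0 - np.cos(theta)) * (W @ W)
    t = (np.eye(3) - R) @ np.cross(w, v) + np.outer(w, w) @ v * theta
    M = np.eye(4)
    M[:3, :3], M[:3, 3] = R, t
    return M
```

For a revolute joint (a degenerate screw without pitch), one would choose $v = -\omega \times q$ with $q$ a point on the rotation axis.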
2.2 Plücker Lines

3D lines are needed on the one hand for the point-based pose estimation procedure and on the other hand to constrain the constrained manipulators. In this work, we use an implicit representation of a 3D line in its Plücker form [2]. A Plücker line $L = (n, m)$ is given by a normalized vector $n$ (the direction of the line) and a vector $m$, called the moment, which is defined by $m := x \times n$ for a given point $x$ on the line. Collinearity of a point $x$ with a line $L$ can be expressed by

$$x \in L \;\Leftrightarrow\; x \times n - m = 0, \tag{3}$$
and the distance of a point $x$ to the line $L$ can easily be computed as $\|x \times n - m\|$, see [19] for details.

2.3 Kinematic Chains

A kinematic chain is modeled as the consecutive evaluation of exponential functions of twists $\xi_i$, as done in [3,4]. A point at an end-effector, additionally transformed by a rigid body motion, is given as

$$X_i' = \exp(\theta\hat{\xi})\,\big(\exp(\theta_1\hat{\xi}_1)\cdots\exp(\theta_n\hat{\xi}_n)\big)\,X_i. \tag{4}$$
In the remainder of this paper we denote a pose configuration by the $(6+n)$-D vector $\chi = (\xi, \theta_1, \ldots, \theta_n) = (\xi, \Theta)$, consisting of the 6 degrees of freedom for the rigid body motion $\xi$ and the joint angle vector $\Theta$. In our setup, the vector $\chi$ is unknown and has to be determined from the image data. Furthermore, the angle configuration has to satisfy the additional mechanical constraints of the system.

2.4 Registration, Pose Estimation

Assuming an extracted image contour and the silhouette of the projected surface mesh, the closest point correspondences between both contours are used to define a set of corresponding 3D lines and 3D points. Then a 3D point-line based pose estimation algorithm for kinematic chains is applied to minimize the spatial distance between both contours. For point-based pose estimation each line is modeled as a 3D Plücker line $L_i = (n_i, m_i)$ [2]. The reconstructed Plücker lines are combined with the twist representation for rigid motions: incidence of the transformed 3D point $X_i$ with the 3D ray $L_i = (n_i, m_i)$ can be expressed as

$$\pi\big(\exp(\theta\hat{\xi})\,X_i\big) \times n_i - m_i = 0. \tag{5}$$

The function $\pi$ denotes the projection of a homogeneous 4D vector to a 3D vector by neglecting the homogeneous component, which is 1. For the case of kinematic chains, we exploit the property that joints are expressed as special twists with no pitch, of the form $\theta_j\hat{\xi}_j$. The sought parameters are the joint angles
$\theta_j$, whereas $\hat{\xi}_j$ is known (the location of the rotation axes is part of the model). The constraint equation of the $i$th point on the $j$th joint reads

$$\pi\big(\exp(\theta\hat{\xi})\exp(\theta_1\hat{\xi}_1)\cdots\exp(\theta_j\hat{\xi}_j)\,X_i\big) \times n_i - m_i = 0. \tag{6}$$

To minimize over all correspondences in a least squares sense, we optimize
$$\underset{(\xi,\Theta)}{\operatorname{argmin}} \;\sum_i \left\| \pi\!\left( \exp(\hat{\xi}) \prod_{j\in\mathcal{J}(x_i)} \exp(\theta_j\hat{\xi}_j) \begin{pmatrix} x_i \\ 1 \end{pmatrix} \right) \times n_i - m_i \right\|_2^2, \tag{7}$$
where $\mathcal{J}(x_i)$ denotes the set of joints that affect the point $x_i$. Linearization of this equation leads to three linear equations with $6 + j$ unknowns: the six pose parameters and the $j$ joint angles. Collecting enough correspondences yields an over-determined linear system of equations and allows solving for these unknowns in the least squares sense. Then the Rodrigues formula is applied to reconstruct the group action, and the process is iterated until it converges.

2.5 The Silhouette Based Tracking System

To represent the silhouette of an object in the image, a level set function $\Phi \in \Omega \to \mathbb{R}$ is employed. It splits the image domain $\Omega$ into two regions $\Omega_1$ and $\Omega_2$ with $\Phi(x) > 0$ if $x \in \Omega_1$ and $\Phi(x) < 0$ if $x \in \Omega_2$. The zero-level line thus marks the boundary between both regions. For an optimum partitioning, the following energy functional can be minimized. It is an extended version of the Chan-Vese model [6]:

$$E(\Phi, p_1, p_2) = -\int_\Omega \Big( H(\Phi(x))\log p_1(I(x)) + \big(1-H(\Phi(x))\big)\log p_2(I(x)) \Big)\, dx + \nu \int_\Omega \big|\nabla H(\Phi(x))\big|\, dx \tag{8}$$
with a weighting parameter $\nu > 0$ and $H(s)$ being a regularized version of the Heaviside (step) function, e.g. the error function. The probability densities $p_1$ and $p_2$ measure the fit of an intensity value $I(x)$ to the corresponding region. We model these densities by local Gaussian distributions [19]. The partitioning and the probability densities $p_i$ are estimated alternately. The tracking system in [19] builds upon the previous segmentation model by combining it with the pose tracking problem. To this end, the following coupled energy functional is optimized:

$$E(\Phi, p_1, p_2, \chi) = -\int_\Omega \Big( H(\Phi)\log p_1 + \big(1-H(\Phi)\big)\log p_2 \Big) dx + \nu\int_\Omega |\nabla H(\Phi)|\, dx + \lambda \underbrace{\int_\Omega \big(\Phi - \Phi_0(\chi)\big)^2 dx}_{\text{shape constraint}}\,. \tag{9}$$
It consists of the above segmentation model and an additional shape constraint that states the pose estimation task. By means of the contour Φ , both problems are coupled.
In particular, the projected surface model Φ0 acts as a shape prior to support the segmentation [19]. The influence of the shape prior on the segmentation is steered by the parameter λ = 0.05. Due to the nonlinearity of the optimization problem, an iterative, alternating minimization scheme is proposed in [19]: first the pose parameters χ are kept constant while the functional is minimized with respect to the partitioning. Then the contour is kept constant while the pose parameters are determined to fit the surface mesh to the silhouettes (Section 2.4).
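As a small numerical illustration, the coupled functional (9) can be evaluated on a pixel grid as sketched below. The helper names and the use of a per-pixel sum in place of the integral are our own simplifications; p1 and p2 are assumed to be arrays of precomputed density values.

```python
import numpy as np
from scipy.special import erf

def heaviside(phi, eps=1.0):
    """Regularised Heaviside function via the error function, as suggested in the text."""
    return 0.5 * (1.0 + erf(phi / eps))

def coupled_energy(phi, p1, p2, phi0, nu=1.0, lam=0.05):
    """Numerical evaluation of the coupled functional (9) on a pixel grid.
    p1, p2 hold per-pixel density values p_i(I(x)); phi0 is the projected
    surface model Phi_0(chi) acting as shape prior."""
    H = heaviside(phi)
    gy, gx = np.gradient(H)
    region = -(H * np.log(p1) + (1.0 - H) * np.log(p2))
    length = nu * np.sqrt(gx ** 2 + gy ** 2)
    shape = lam * (phi - phi0) ** 2
    return float(np.sum(region + length + shape))
```

The default weight lam=0.05 mirrors the value of the parameter λ given in the text; eps controls how sharply the regularised Heaviside approximates a step.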
3 Closed-Chain and Constrained Manipulators

To determine the degrees of freedom of a closed-chain manipulator, Gruebler's formula can be applied [16]: let $N$ be the number of links in the mechanism, $g$ the number of joints, and $f_i$ the degrees of freedom for the $i$th joint. The number of degrees of freedom of the mechanism is

$$F = 6N - \sum_{i=1}^{g} (6 - f_i) = 6(N - g) + \sum_{i=1}^{g} f_i. \tag{10}$$
For planar motions, the scalar 6 needs to be replaced by 3. The key idea of the present work is to use open-chain models and to add constraint equations in the optimization procedure to enforce their configuration as a constrained manipulator. These further constraints will automatically result in equations of rank g − F, with the degrees of freedom of the mechanism as the remaining unknowns.
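Equation (10) is simple enough to state directly as a small helper; the function name is illustrative only.

```python
def gruebler_dof(num_links, joint_dofs, spatial=True):
    """Degrees of freedom of a mechanism according to equation (10).
    joint_dofs is the list (f_1, ..., f_g); for planar mechanisms the
    factor 6 is replaced by 3, as noted above."""
    k = 6 if spatial else 3
    g = len(joint_dofs)
    return k * (num_links - g) + sum(joint_dofs)
```

As the following example shows, the value returned is only the nominal mobility; it is valid only when the constraints imposed by the joints are independent.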
Fig. 3. Example configurations of the wardrobe model from Figure 2 (right)
Take, for instance, Figure 3. Here we have a pantograph assembly with two open-chain manipulators, where the mid-points of the links have to be on a straight line and the beginning and end of the two chains have to be incident. The joints are encircled in Figure 3 and the dashed line again shows the geometric invariance of the constrained manipulator which has to be satisfied. Following (10), the closed-chain system consists of 8 links and 9 joints, yielding 6(8 − 9) + 8 = −6 + 8 = 2 degrees of freedom. However, Gruebler's formula only holds when the constraints imposed by the joints are independent. As the midpoints of the
limbs are partially connected, here we have only one degree of freedom, resulting in a translation of the end-effector forwards and backwards. We define $L_m = (n_m, m_m)$ as the (dashed) midline of the constrained system, see Figure 3, and collect a set of points $p_i$ on the model which have to be incident with $L_m$:

$$\forall i: \quad p_i \times n_m - m_m = 0. \tag{11}$$
The model in Figure 3 contains 8 joints, each with one degree of freedom, and we gain 6 collinearity equations for the system. If these equations are collected and ordered in a system of equations, we get a system of rank 6, which means at least one additional point correspondence is needed to determine the configuration of the constrained system, even though in this case eight joints are involved. The same is possible for the example in Figure 1 (left): in principle the piston is an open-chain manipulator with the property that the piston has to move along a straight line (the dashed line in the figure). Though 3 joints are involved, only one 3D point is sufficient to determine the configuration of the system. Figure 1 shows on the right hand side a set of connected pistons. Here again, the fact that the pistons are required to move along a line can be applied to constrain one of the joints. The first piston is again determined from the moving point (shown as a black dot). The second and third pistons can then be determined from the configuration of the first one. In the animations of Figures 1 and 3, only the black point is used to determine the joint configuration of the manipulators.
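The collinearity constraints of equation (11) are straightforward to evaluate. The sketch below assumes the midline is given by two model points and uses hypothetical names; the norm of each residual row is the distance of the corresponding point to the midline.

```python
import numpy as np

def plucker_line(p, q):
    """Plucker line L = (n, m) through the 3D points p and q (cf. Sec. 2.2)."""
    n = (q - p) / np.linalg.norm(q - p)
    return n, np.cross(p, n)

def collinearity_residuals(points, n_m, m_m):
    """Stack the point-line constraints of equation (11), p_i x n_m - m_m = 0,
    for all model points that must lie on the midline L_m = (n_m, m_m)."""
    return np.stack([np.cross(p, n_m) - m_m for p in points])
```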
Fig. 4. Pose results of the non-constrained models: the first model tends to break apart and the top of the cylinder in the second model is tilted. Overall, the outcome is less stable.
We therefore use a point-line constraint to model the geometric properties of the constrained manipulator. It is exactly the same constraint we use for optimizing the pose parameters, which means we can directly integrate the equations in the optimization procedure of our tracking system by collecting equations to minimize
$$\underset{(\xi,\Theta)}{\operatorname{argmin}} \;\sum_i \left\| \pi\!\left( \exp(\hat{\xi}) \prod_{j\in\mathcal{J}(p_i)} \exp(\theta_j\hat{\xi}_j) \begin{pmatrix} p_i \\ 1 \end{pmatrix} \right) \times n_m - m_m \right\|_2^2. \tag{12}$$
Note that the unknowns are the same as for Equation (7), the unknown pose parameters. Only the line $L_m = (n_m, m_m)$ does not stem from a reconstructed image point, but from the object model. Since we use a point-line constraint to restrict the movement of the manipulators, we call them line-constrained mechanical systems. The matrix of gathered linear equations is attached to the one presented in Section 2.4. Its effect is to regularize the equations and to constrain the solution to a desired subspace of possible joint configurations. The structure of the generated linear system is $A\chi = b$, with two attached parts generated from equations (7) and (12) in the same unknowns $\chi$. Since this is a soft constraint for the constrained manipulator configuration that is integrated in a least squares sense, it can happen that the constraint is not fully satisfied due to errors in the point correspondences collected from the image data. This effect can be reduced by multiplying the constraint equations by a strong weight. In our experiments, the pose of the open-chain system had a deviation of less than 0.01 degrees compared to an explicitly modeled constrained system, see Figure 6. For most tracking applications, this accuracy is sufficient.
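The stacking itself amounts to a weighted least-squares solve. The following is a minimal sketch with hypothetical names; the linearised equation blocks from (7) and (12) are assumed to be given as matrices and right-hand sides.

```python
import numpy as np

def solve_soft_constrained_pose(A_img, b_img, A_con, b_con, weight=100.0):
    """Least-squares update of chi = (xi, Theta) with soft model constraints.

    A_img, b_img : linearised point-line equations from image correspondences (eq. (7))
    A_con, b_con : linearised point-line equations from the model constraint (eq. (12))
    weight       : large factor pushing the solution towards the constraint subspace
    """
    A = np.vstack([A_img, weight * A_con])
    b = np.concatenate([b_img, weight * b_con])
    chi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return chi
```

The weight value shown is an assumption for illustration; in practice it only needs to be large enough that the residual of the constraint block becomes negligible compared to the image term.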
4 Experiments

In this section we present experimental results of constrained manipulators tracked in stereo setups. For comparison with our proposed method, Figure 4 shows pose results of the models when the additional constraints are neglected. This comes down to an unconstrained
Fig. 5. Pose results of the wardrobe model moving freely in a cluttered scene
Fig. 6. Angles of the linkage system
open-chain model with more degrees of freedom than actually allowed. The first model tends to break apart and the top of the piston in the second model is tilted. Overall, the tracking is erroneous and unreliable. Figure 5 shows four examples of the first test sequence (250 frames). Here the object model moves freely in 3D space. The manual interaction visible in the scene (the arms behind the object model) rules out background subtraction methods for this tracking task. The algorithm successfully tracks the object and maintains its intrinsic constraints. Figure 6 shows the angles of the closed-chain manipulator during tracking. The result clearly reflects the inherent mechanical constraints of the closed chain. It also reveals that in the end only one parameter is needed to determine the configuration of the system. The smooth curves indicate stable tracking. In the second set-up, see Figure 7, we decided on a controlled environment (non-cluttered background) and restricted movements: the object is only moved along the ground plane while it is stretched and squeezed. Figure 7 shows on the left six example frames of a 585-frame stereo sequence and on the right the translation vectors in millimeters. The deviation along the y-axis varies by up to 3 cm and indicates a reasonably stable result. Note that during the last 100 frames the model was kept still, while a hand was moved in front of the object, leading to partial occlusions. The nearly constant
Fig. 7. Left: pose results. During tracking, the model is squeezed and stretched while moving it along a 3D plane. In the last 100 frames, the model is kept still and a hand moving in front of the model causes partial occlusions. Right: the translation vectors along the x, y and z-axes in millimeters. The deviation along the y-axis varies by up to 3 cm, which indicates a reasonably stable result.
Fig. 8. Laser scan of a piston. The scan contains several holes and missing parts, which are taken as an additional challenge for tracking.
translation values indicate a reasonably stable pose result. The average deviation from the ground plane was 6 mm. A motor from a mower was used for the second experimental setup. It was modified by milling off one side of the motor block. Then the motor was painted white and the piston, connecting rod and crankshaft were painted black, see Figure 2 (bottom). The piston, the connecting rod and the crankshaft were then scanned with a laser scanner, see Figure 8, and joints were added to the mesh model. The laser scan contains some holes and artefacts, which have been regarded as an additional challenge for tracking. Figure 9 shows some pose results of a sequence (730 frames) in which the piston rotates several times around the crankshaft. The artefacts of the laser scan are clearly visible. They hinder the correct segmentation, since the shape model does not fully reflect the appearance of the object. Nonetheless, due to the stereo information of both
Fig. 9. Pose and segmentation results during tracking of the piston (images are cropped). The artefacts from the laser scanner are clearly visible. Nonetheless, the tracking is successful.
cameras and the additional implicit constraint that the piston can only move upwards and downwards, the object is tracked successfully. Figure 10 shows the sine-values of the angles during tracking of the piston model. Two aspects can be observed: just one parameter is optimized, so that the three angles are correlated, and their dependency is non-linear. An explicit modeling of the constrained kinematic system would require higher order trigonometric functions, which are avoided here.
5 Summary

In this paper we presented an extension of a tracking system that, so far, only allowed for open-chain manipulators such as human beings or robot arms. The extended system allows tracking of constrained kinematic chains. This includes models which can involve many joints with coupled degrees of freedom. The analytic examination of such constrained kinematic chains leads to rapidly increasing complexity of the dynamic equations. To deal with the inherent restrictions, we suggest using open-chain systems and adding model constraints, such as a set of points that have to be incident with a given line. Implicitly, these model constraints ensure a reduced rank of the system, so that the minimal number of point correspondences is sufficient to compute the configuration of the whole system. At the same time, the model constraints ensure that the dynamical system behaves like a closed-chain system. The experiments show that the developed system works reliably, even in the presence of cluttered and varying background in the scene.
Fig. 10. The sine-values of angles of the piston sequence
Acknowledgments

This work has been supported by the Max-Planck Center for Visual Computing and Communication. We would like to thank Christian Fuchs and Axel Köppel from the Max-Planck Institute for Computer Science for their help with the one-cylinder engine.
References

1. Bicchi, A., Prattichizzo, D.: On some structural properties of general manipulation systems. In: Astolfi, A., et al. (eds.) Modelling and Control of Mechanical Systems, pp. 187–202. Imperial College Press, London, UK (1997)
2. Blaschke, W.: Kinematik und Quaternionen. Mathematische Monographien, vol. 4. Deutscher Verlag der Wissenschaften (1960)
3. Bregler, C., Malik, J.: Tracking people with twists and exponential maps. In: Proc. Computer Vision and Pattern Recognition, Santa Barbara, California, pp. 8–15 (1998)
4. Bregler, C., Malik, J., Pullen, K.: Twist based acquisition and tracking of animal and human kinetics. International Journal of Computer Vision 56(3), 179–194 (2004)
5. Carranza, J., et al.: Free-viewpoint video of human actors. In: Proc. SIGGRAPH 2003, pp. 569–577 (2003)
6. Chan, T., Vese, L.: Active contours without edges. IEEE Transactions on Image Processing 10(2), 266–277 (2001)
7. Cheng, H., Yiu, Y.: Dynamics and control of redundantly actuated parallel manipulators. Trans. on Mechatronics 8(4), 483–491 (2003)
8. Denavit, J., Hartenberg, R.S.: A kinematic notation for lower-pair mechanisms based on matrices. ASME Journal of Applied Mechanics 22, 215–221 (1955)
9. Fua, P., Plänkers, R., Thalmann, D.: Tracking and modeling people in video sequences. Computer Vision and Image Understanding 81(3), 285–302 (2001)
10. Gao, X., Qu, Z.: On the robust control of two manipulators holding a rigid object. Intelligent and Robotic Systems 8, 107–119 (1992)
11. Gavrila, D.M.: The visual analysis of human movement: A survey. Computer Vision and Image Understanding 73(1), 82–92 (1999)
12. Kim, D., Kang, J., Lee, K.: Robust tracking control design for a 6 dof parallel manipulator. Journal of Robotics Systems 17(10), 527–547 (2000)
13. Mikic, I., et al.: Human body model acquisition and tracking using voxel data. International Journal of Computer Vision 53(3), 199–223 (2003)
14. Moeslund, T.B., Hilton, A., Krüger, V.: A survey of advances in vision-based human motion capture and analysis. Computer Vision and Image Understanding 104(2), 90–126 (2006)
15. Moeslund, T.B., Granum, E.: A survey of computer vision based human motion capture. Computer Vision and Image Understanding 81(3), 231–268 (2001)
16. Murray, R.M., Li, Z., Sastry, S.S.: A Mathematical Introduction to Robotic Manipulation. CRC Press, Boca Raton (1994)
17. Ostrowski, J.: Computing reduced equations for robotic systems with constraints and symmetries. Trans. on Robotics and Automation 15(1), 111–123 (1999)
18. Rosenhahn, B.: Pose estimation revisited. Technical Report TR-0308, PhD thesis, Institute of Computer Science, University of Kiel, Germany (October 2003), http://www.ks.informatik.uni-kiel.de
19. Rosenhahn, B., Brox, T., Weickert, J.: Three-dimensional shape knowledge for joint image segmentation and pose tracking. International Journal of Computer Vision 73(3), 243–262 (2007)
20. Zweiri, Y., Senevirante, L., Althoefer, K.: Modelling of closed-chain manipulators on an excavator vehicle. Mathematical and Computer Modeling of Dynamical Systems 12(4), 329–345 (2003)
Stereo Vision Local Map Alignment for Robot Environment Mapping

David Aldavert and Ricardo Toledo

Computer Vision Centre (CVC), Dept. Ciències de la Computació,
Universitat Autònoma de Barcelona (UAB), 08193, Bellaterra, Spain
[email protected],
[email protected]
Abstract. Finding correspondences between sensor measurements obtained at different places is a fundamental task for an autonomous mobile robot. Most matching methods search correspondences between salient features extracted from such measurements. However, finding explicit matches between features is a challenging and expensive task. In this paper we build a local map using a stereo head aided by sonars and propose a method for aligning local maps without searching explicit correspondences between primitives. From the objects found by the stereo head, an object probability density distribution is built. Then, a Gauss-Newton algorithm is used to align the maps, so that no explicit correspondences are needed.
1 Introduction
In order to navigate through an environment and perform complex tasks, an autonomous mobile robot needs an accurate estimation of its location within the environment. Whether the map is known a priori or the robot builds the map itself using a SLAM algorithm, aligning the robot's sensor data obtained from different places is a fundamental task. This is known as the correspondence or data association problem, i.e. the problem of determining whether sensor measurements taken from different places correspond to the same physical object in the world. This problem is usually approached by extracting primitives from robot measurements and establishing correspondences between primitives obtained from different views. From such correspondences an estimation of the robot motion and its uncertainty can be obtained. In [1], Cox extracts points from laser scans and uses them as primitives. Point primitives are then matched to lines from a map given a priori. An extension of Cox's algorithm consists in extracting lines instead of points as primitives [3]. In [4], Lu and Milios propose the IDC (Iterative Dual Correspondence), which is a more general approach that matches points to points. Since Cox's algorithm gives better results than IDC in very structured environments, while IDC gives better results in unstructured environments, Gutmann combines both methods in [3]. The IDC is a variant of the ICP (Iterative Closest Points) algorithm [5] applied to laser range scans. The ICP algorithm is also used to align robot measurements, especially when using 3D range data [6,7,8].
The methods explained above search explicit correspondences between primitives. Computationally, the search for explicit correspondences is the most expensive step, because for each primitive of one measurement set the corresponding primitive of the other measurement set must be found. Therefore, other methods try to avoid this step by aligning sensor measurements without finding direct correspondences between primitives. In [2], Weiss and Puttkamer build angular histograms by accumulating the orientation of sensor measurements. Robot sensor measurements are then rotated by the angle of the strongest response of the angular histogram, and histograms of the x and y coordinates are built. To align two measurement sets, the x and y histograms are correlated. This method is designed to work in very structured environments; when applied in unstructured environments, the obtained results are poor. In [10], Biber and Straßer presented the Normal Distributions Transform, which is a more general approach to align scans obtained from a laser range scanner. In this method the space is divided into cells forming a grid; to each cell they assign a normal distribution which locally models the probability of measuring a point. Finally, Newton's algorithm is used to align an input laser scan to the probability distribution. All these methods only use range information to align sensor measurements. To improve the robustness of the alignment method, visual information can be added to the range measurements. Such information provides a means to obtain a unique and robust description of world points [13]. In the past years, a set of SLAM techniques using alignment methods that rely on image data have been developed [14,15,16]. Such methods use invariant feature detectors [11] that detect image features invariant to noise, lens distortions, point of view and illumination changes. These image feature detectors, together with local image descriptors [12], provide a robust and reliable method to align the robot's visual information. However, there are situations where local invariant features cannot be used to describe the world. For example, in low textured environments, the number of putative matches is usually not enough to ensure that the estimated robot motion is correct. In environments with repetitive textures, the amount of false positive correspondences rises sharply and they cannot be filtered out. These two problems can usually be found in indoor environments. In this paper, we present a method to align local maps obtained from different locations without establishing direct correspondences between map primitives. Local maps are obtained by scanning the environment with a stereo head. From a distribution of 3D objects that is estimated by a dense stereo algorithm aided by sonar measurements, a dense probability distribution is built. It is then used to determine the robot motion using the Gauss-Newton algorithm. The approach shares the idea underlying the Normal Distributions Transform [10], but the probability distribution map has to be built taking into account that stereo points have a heteroscedastic (point dependent) and anisotropic error distribution. Besides, colour image information is added to the probabilistic map in order to increase the convergence ratio and the robustness of the alignment estimation.
The paper is organised as follows: Section 2 presents the method used to create local maps from stereo camera pairs. Section 3 presents the alignment method. Section 4 describes the experimental set-up and the obtained results. Finally, Section 5 summarises the results and conclusions.
2 Local Stereo Maps
Although laser range scanners provide accurate world information, they only acquire range data limited to a world plane. Stereo camera pairs, in contrast, provide illumination, colour or texture information in addition to depth information, and the gathered data is not restricted to a world plane. However, more information requires more processing, and computing depth information requires a stereo matching algorithm [17]. Although dense stereo algorithms are computationally expensive, there are algorithms that, for relatively small resolutions, can obtain a dense stereo map in real time. To build the dense disparity map, an algorithm based on [18] is used. It is a correlation based algorithm that uses the SAD (Sum of Absolute Differences) function as region similarity measure. However, several refinements of the method, such as the left-right consistency check or the median filter, are removed in order to reduce the computational cost of the algorithm. Besides, frontal sonar measurements are taken into account to set the disparity search range, so that the highest disparity corresponds to the nearest object; if obstacles are too near, the disparity map is directly built from sonar data. Finally, the resulting disparity map is segmented using the watershed algorithm [19] and small disparity regions are removed from it.
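For readers unfamiliar with SAD block matching, the following is a minimal sketch of such a matcher for rectified greyscale images. It is not the authors' implementation: the border handling is crude, and, as in the text, left-right checking and median filtering are omitted; the search range max_disp would come from the frontal sonar reading.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sad_disparity(left, right, max_disp, win=7):
    """Winner-takes-all SAD block matching on rectified float greyscale images."""
    h, w = left.shape
    best = np.full((h, w), np.inf)
    disp = np.zeros((h, w), dtype=np.int32)
    for d in range(max_disp + 1):
        shifted = np.empty_like(right)
        shifted[:, d:] = right[:, : w - d]
        if d > 0:
            shifted[:, :d] = right[:, :1]        # replicate the border column
        cost = uniform_filter(np.abs(left - shifted), size=win)
        mask = cost < best
        best[mask] = cost[mask]
        disp[mask] = d
    return disp
```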
Fig. 1. a) Original right stereo pair image. b) Disparity map filtered with “c”. c) Segmented disparity map. d) Occupancy grid obtained from “b”. e) Occupancy grid obtained after filtering small disparity regions. f) Local map composed from all stereo measurements.
Transforming image points from pixel coordinates to image plane coordinates, points can be reprojected to 3D space. Let $m_l = [x_l, y_l]$ and $m_r = [x_r, y_r]$ be a corresponding point pair in image plane coordinates; they can be reprojected to 3D space by simply using a noise-free triangulation operation:

$$X = \frac{b\,x_l}{x_l - x_r}\,, \qquad Y = \frac{b\,y_l}{x_l - x_r}\,, \qquad Z = \frac{b\,f}{x_l - x_r}\,, \tag{1}$$
where b is the baseline and f is the focal length of the camera. The resulting 3D world points that are within a height range are reprojected to a 2D histogram in the X-Z plane. Histogram cells that do not have the minimum support needed to be considered an object, or cells that are isolated and too small to be considered an object, are removed from the 2D histogram. Figure 1 shows how a local map is built: first, a dense disparity map (Fig. 1.b) is obtained from the stereo image pair (Fig. 1.a). Although the disparity maps have gaps in low textured regions, objects are found in the occupancy grid (Fig. 1.d). Filtering small disparity map regions, a more accurate occupancy grid can be obtained (Fig. 1.e). Finally, the local map can be built by rotating the stereo pair using the robot's pan & tilt unit (Fig. 1.f). The local map is built by making a scan from −80° to 80° and taking a stereo head measurement at steps of 10°. To determine the translation and rotation of the stereo head cameras at each scan step, a set of known artificial landmarks is used. As the location and size of the landmarks are known, the camera motion at each step can be straightforwardly estimated. Position errors from the pan & tilt unit servo motor are not taken into account as they are quite small (0.5° to 1°) compared to the stereo depth estimation error. Once the stereo head motion at each measurement is known, the cells of the local map that are seen from each location are also known. As the measurements are taken at steps of 10° and the stereo cameras' field of view is about 42°, several cells can be seen from several stereo head locations. Therefore, as the uncertainty of the depth estimation decreases for points that are near the horizontal central image point, each cell is assigned to the stereo pair that reduces this uncertainty. Instead of using only object information to build local environment maps, the map is enhanced by adding colour information to each 2D histogram cell. Basically, the 2D histogram is divided into 4 layers, where the first layer contains objects with greyish colour, the second layer reddish objects, the third layer greenish objects and the fourth layer bluish objects. Images are transformed to the HSV colour space (Hue-Saturation-Value) and the Hue channel is used to determine to which colour channel a pixel must be assigned. When the difference between the max and min RGB values is small, the Hue value is not well defined. This happens when the pixels have greyish colour; in this case, pixels are added to the first channel. In Fig. 2, a segmentation example is shown. Black pixels correspond to regions assigned to the greyish layer and colour pixels correspond to the other three colour layers. Pixels whose colour is a combination of two primary colours are assigned to both primary components, e.g. a yellow pixel is assigned both to the red and green layers. Once the histogram is built, if a histogram cell
Fig. 2. Original image (a) is segmented obtaining (b) and the different layers of the local map are built: local map with greyish features (c), reddish features (d), greenish features (e) and bluish features (f)
of a colour layer does not have enough support, the cell value is set to zero and its contribution is added to the greyish layer. At each step of the previous scan, a sonar measurement is obtained. Although these sonar measurements are not as precise as the data obtained from stereo images, such depth information is useful to detect objects that are too near or that lack texture, so that their depth cannot be estimated from stereo images. To integrate this information into the map alignment scheme, it is only necessary to estimate the PDF of the obstacles detected by the sonar from the covariance of each sonar measurement. Information from other sensors can be added to the scheme using the same strategy, i.e. estimating a PDF for each detected object. For example, using the NDT of Biber and Straßer [10], laser information could easily be added to the alignment scheme. Sonar measurements are added to the PDF of all stereo layers because it is very difficult to associate a colour with such a measurement, and the sonar can only detect up to 17 obstacles per scan, which is not enough to form a complete layer.
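The map-building step can be summarised in code as follows. This is an illustrative sketch, not the authors' software: the grid dimensions (160 x 120 cells of 0.05 m, i.e. 8 m x 6 m) are taken from the experimental section, while the accepted height range and the example numbers in the last two lines are assumptions made purely for demonstration.

```python
import numpy as np

def triangulate(xl, yl, xr, b, f):
    """Noise-free triangulation of equation (1); inputs are image-plane coordinates."""
    d = xl - xr                      # disparity
    return np.array([b * xl / d, b * yl / d, b * f / d])

def add_to_local_map(hist, point, layer, cell=0.05, x_min=-4.0, z_min=0.0,
                     height_range=(0.1, 1.5)):
    """Accumulate a reconstructed 3D point into the layered X-Z histogram.
    hist has shape (4, 120, 160); layer 0 holds greyish points, layers 1-3
    reddish/greenish/bluish ones.  height_range is an assumed placeholder."""
    X, Y, Z = point
    if not (height_range[0] <= Y <= height_range[1]):
        return
    ix = int((X - x_min) / cell)
    iz = int((Z - z_min) / cell)
    if 0 <= ix < hist.shape[2] and 0 <= iz < hist.shape[1]:
        hist[layer, iz, ix] += 1

local_map = np.zeros((4, 120, 160))
add_to_local_map(local_map, triangulate(0.31, 0.05, 0.27, b=0.12, f=0.8), layer=0)
```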
3 Maps Alignment
In indoor or flat urban environments, robot measurements taken from two places are related by a rigid 2D transformation. To align two local stereo maps, the following method is proposed:

1. Build an object probability distribution from one of the two local stereo maps.
Fig. 3. a) Objects 2D histogram. b) Probability distribution built from the 2D histogram
2. Use the object coordinates of the other local stereo map to initialise the set of points S that will be registered to the first local stereo map. Initialise the motion parameters to zero or to an estimate obtained from the robot odometry.
3. Apply the parameters of the transformation to the set of points S.
4. From the values of the probability distribution at the transformed points, a score value is obtained.
5. Estimate a new parameter value by optimising the score using a Gauss-Newton algorithm.
6. If the convergence criterion is not met, go to 3.
d2
where d = xl − xr is the disparity between xl and xr expressed in image plane coordinates. The rotation matrix R is expressed as follows: cosβ −sinβ R= . (4) sinβ cosβ
Stereo Vision Local Map Alignment for Robot Environment Mapping
117
where β is the pan unit rotation angle. The covariance matrix σo of each probability distribution cell can be calculated a priori because we know the stereo head orientation for each histogram cell. Figure 3 shows how a probability distribution (Fig. 3.b) is formed from a 2D histogram (Fig. 3.a). Now, once the probability distribution is defined, a Gauss-Newton algorithm is used to find the transformation parameters that best align two probability distributions by minimising the following function: P ∗ = arg min{(P D1 (xi , yi ) − P D2 (xi , yi , pi )) } . 2
P
(5)
where P D1 and P D2 represent the probability distributions of each local map. However, to minimise (5) the probability distribution P D2 must be transformed using parameters P at each algorithm step. This operation is a quite expensive. Therefore, instead of using all probability distribution values, only a set of points corresponding to the object of P D2 are used and the alignment is carried out using a single probability distribution. So, let define F (S1 (P )) =
P D1 (xi , yi ) .
(6)
(xi ,yi )∈S1 (P )
where S1 is the point set on probability distribution P D1 and S1 (P ) is the point set transformed by parameter vector P . Maximising F (S(P )) with respect to P entails that the set S1 is near to the mean of the Gaussian of P D1 , so that, point set S1 is aligned to the points that formed the probability distribution P D1 . The alignment of the two local stereo maps is achieved by maximising (6): P ∗ = arg max{F (S1 (P ))} . P
(7)
If S2 is the point set corresponding to objects of the second probability distribution P D2 , then points set S1 is initialised by points S2 (P0 ). P0 is the first estimation of P that could be initialised to zero or by the robot odometry measurement. Hence, the solution of (7), P ∗ maps the set S1 (P ∗ ) to the set S2 , so P ∗ is the robot motion that relates the two local stereo maps. From now, let S1 be S and F (S1 (P )) be F (P ). As F (P ) is non-linear with respect to P , motion parameters P are computed using the Gauss-Newton iterative method [23]. So, the refinement equation is given by: −1 Jpn Fn . (8) Pn+1 = Pn + Jpn Jpn where Jpn is the Jacobian of F n with respect to the parameters p. Let Jpn (xi , yi ) and Fn (xi , yi ) be the Jacobian of Fn and Fn evaluated at pixel (xi , yi ) respectively, then Jpn and Fn are accumulated over the complete pixel set S: Jpn =
(xi ,yi )∈S(P )
Jpn (xi , yi ) .
(9)
118
D. Aldavert and R. Toledo
a)
b)
c)
d)
Fig. 4. Local maps alignment after: a) first iteration, b) 10 iterations, c) 20 iterations and d) final solution after 42 iterations
and Fn =
Fn (xi , yi ) .
(10)
(xi ,yi )∈S(P )
Jpn (xi , yi ) is evaluated using the derivative chain rule: ∂F (xi , yi ) = ∂P ∂F (xi , yi ) ∂x(xi , yi , Pn ) ∂F (xi , yi ) ∂y(xi , yi , Pn ) + . = ∂x ∂P ∂y ∂P
Jpn (xi , yi ) =
(11)
Robot motion in a flat environment can be modelled as a 2D rigid transformation, i.e. translation in the x and z axis and rotation in the y axis: x2i = x1i cosα − yi1 sinα + tx .
(12)
yi2
(13)
=
x1i sinα
+
yi1 cosα
+ ty .
So, the Jacobian matrices corresponding to the robot motion model is: ∂M (xi , yi , Pn ) −xi sinα − yi cosα 1 0 = . (14) xi cosα − yi sinα 0 1 ∂P Therefore, Jpn is: Jpn =
(xi ,yi )∈S(P )
∂M (xi , yi , Pn ) Fn (xi , yi ) . ∂P
(15)
where Fn (xi , yi ) =
∂F (xi ,yi ) ∂F (xi ,yi ) ∂x ∂y
.
(16)
Stereo Vision Local Map Alignment for Robot Environment Mapping
119
is the gradient evaluated at pixel (xi , yi ) at iteration n with respect to the image Cartesian coordinate system. The algorithm is iterated until a max number of iteration is reached or the update of the parameters Pn+1 − Pn < ε. Figure 4 depicts an example where the points set S gradually converges to the means of the dense probability distribution P D1 . To estimate motion parameters using all 2D histogram layers, (9) and (15) are separately estimated at each layer. Then in (8) the values of Jpn and Fn are obtained by summing the results of (9) and (15) values obtained at each layer respectively.
4
Results
In order to perform the experiments, data has been acquired by an autonomous mobile robot developed at our department (Fig. 5). It is based on a Lynxmotion 4WD3 robot kit and has been designed to be as cheap as possible. The robot is controlled by an on-board VIA Mini-ITX EPIA PE computer with a VIA C3 1 GHz CPU and by an AVR ATmega128 microcontroller. The AVR controls all low-level sensors and actuators, such as sonars, infrared range sensors, a digital compass and an accelerometer. The Mini-ITX computer controls the AVR and processes the images obtained by the two Philips SPC900NC webcams, providing a high level programming interface. Indeed, we developed a driver in order to include a robotic software platform called Pyro [24] as the highest software abstraction level. All the experiments with real data have been done using the on-board MiniITX computer, and it is expected that the robot will be able to create a map of an unknown environment without being aided by other computers. A local
Fig. 5. Robot used for the experiments
120
D. Aldavert and R. Toledo
Fig. 6. Alignment angular errors committed by increasing the initial angular error
Fig. 7. Alignment translation errors committed by increasing the initial angular error
Stereo Vision Local Map Alignment for Robot Environment Mapping
121
map is built from a pair of stereo images that have a resolution of 320 × 240 pixels. Computing the dense disparity map from the images and segmenting images by colour takes about 120 ms which is lower than the time needed to stabilise the pan & tilt unit (about 200 ms). Stereo measurements are stored in a 2D histogram that has 160 cell width per 120 cell height. Each cell side is 0.05 meters long, so that, the local map has a width of 8 meters and a height of 6 meters. Finally, from the 2D histogram the object dense probability distribution is built. Although this is the most time consuming step, in average it takes 940 ms, this step has to be done only once per local map. Once the dense probability distribution is built, the alignment process takes between 30 ms and 120 ms depending on the transformation that relates the two local maps and the amount of outliers, i.e. local map objects that are not present at the other local map. Local maps are aligned without taking into account odometry information, so initial parameters P0 are set to zero. To find out which robot displacement leads the convergence of the Gauss-Newton method to an incorrect solution, two similar local maps are transformed by different rotations and translations until the method diverges. The experiments have shown that the alignment method is more sensible to rotations than translation displacements: Two local maps related only by a translation can be correctly aligned, both, if maps are related by a large displacements or if a great number of outliers are present. However, when local maps are related by a rotation, the performance of the alignment method decreases as the rotation increases.
Fig. 8. Alignment angular error committed by increasing the number of objects outliers
122
D. Aldavert and R. Toledo
Fig. 9. Alignment translation error committed by increasing the number of objects outliers
Figures 6 and 7 show the rotational and translational alignment error vs. the rotation that relates both local maps. As the rotation value increases, the number of iterations needed to reach the convergence also increases. For example, with a rotation of 5◦ degrees the method converges after 12 iterations, while with a rotation of 50◦ it converges after 41 iterations. In the experiments, for local maps with a small amount of outliers that are related by a rotation greater than 60◦ degrees, the Gauss-Newton method usually was not able to converge to the correct solution. The Gauss-Newton method is also sensitive to outliers. As the outliers increase, the number of iterations needed to find the correct solution also increase. Figures 8 and 9 depict the rotational and translational alignment errors for different amounts of outliers. Rotation and translation errors are minimised until the parameter vector P is near the solution. There, the increment of P is too big due to the contribution of the outliers, causing oscillations like an under damped system. For example, in Figs. 6 and 7 the error is smoothly minimised. However, in Figs. 8 and 9 the error increases and decreases until the solution is reached. In our experiments, when the number of outliers is greater than 40%, the Gauss-Newton method can hardly converge to the correct solution. Finally, the results commented above have been obtained using colour information. The use of colour information has shown better convergence ratios even under rotation transforms. However, when outliers are present, the use of colour only slightly increases the performance of the alignment method.
Stereo Vision Local Map Alignment for Robot Environment Mapping
5
123
Conclusions and Future Work
In this paper, we have presented a method to build local maps from information acquired by a stereo head aided by robot’s sonars. The local map provides information about the 2D distribution of the 3D objects in the environment and it also stores colour information about this objects. To align local maps obtained from different locations, an object dense probability distribution is built from this information. Then using a Gauss-Newton algorithm over this probability distribution, two corresponding local maps can be aligned without searching explicit correspondences between the map primitives. Our current colour segmentation algorithm is not invariant to illumination conditions, so in a future work we will look for such invariant algorithm. We intend to use this alignment method combined with a method that uses robust invariant image features in a SLAM framework in order to create maps of either indoor or outdoor environments in real-time. We plan to find a measure that determines when the Gauss-Newton is more suitable than an invariant feature based method, as the computational resources of the on-board computer are limited. Besides, as all experiments have been done using indoor data, the performance of the presented method should be tested in unstructured outdoor environments.
Acknowledgements This work has been partially funded by TIN 2006-15308-C02-02 project grant of the Ministry of Education of Spain.
References 1. Cox, I.J.: Blanchean experiment in guidance and navigation of an autonomous robot vehicle. IEEE Transactions on Robotics and Automation 7(2), 193–203 (1991) 2. Weiss, G., von Puttkamer, E.: A map based on laserscans without geometric interpretation. In: Intel. Aut. Systems. Karlsruhe, Germany, March 27-30, pp. 403–407 (1995) 3. Gutmann, J.-S., Schlegel, C.: AMOS: comparison of scan matching approaches for self-localization in indoor environments, p. 61 (1996) 4. Lu, F., Milios, E.: Robot Pose Estimation in Unknown Environments by Matching 2D Range Scans. J. of Intelligent and Robotic Systems 18(3), 249–275 (1997) 5. Zhang, Z.: Iterative point matching for registration of free-form curves and surfaces. International Journal of Computer Vision 13(2), 119–152 (1994) 6. N¨ uchter, A., et al.: 6D SLAM with an Application in autonomous mine mapping. In: Proc. of the IEEE International Conf. of Robotics and Automation, New Orleans, USA, April 2004, pp. 1998–2003 (2004) 7. Nieto, J., Bailey, T., Nebot, E.: Recursive scan-matching SLAM. Robotics and Autonomous Systems 55(1), 39–49 (2007)
124
D. Aldavert and R. Toledo
8. N¨ uchter, A., et al.: 6D SLAM with approximate data association. In: Proceedings of the 12th International Conference on Advanced Robotics (ICAR), Seattle, WA, July 2005, pp. 242–249 (2005) 9. M¨ uhlmann, K., et al.: Calculating Dense Disparity Maps from Color Stereo Images, an Efficient Implementation. International Journal of Computer Vision 47(1–3), 79–88 (2002) 10. Biber, P., Straßer, W.: The Normal Distributions Transform: A New Approach to Laser Scan Matching, IEEE/RJS Int. Conf. on Intel. Robots and Systems (2003) 11. Mikolajczyk, K., et al.: A comparison of affine region detectors. International Journal of Computer Vision 65(1/2), 43–72 (2005) 12. Mikolajczyk, K., Schmid, C.: A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis & Machine Intelligence 10(27), 1615–1630 (2005) 13. Newman, P., Ho, K.: SLAM- Loop Closing with Visually Salient Features. In: International Conference on Robotics and Automation, Barcelona (2005) 14. Levin, A., Szeliski, R.: Visual Odometry and Map Correlation, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Los Alamitos, CA, USA, vol. 1, pp. 611–618 (2004) 15. Se, S., Lowe, D., Little, J.: Vision-based Mobile robot localization and mapping using scale-invariant features. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Seoul, Korea, May 2001, pp. 2051–2058 (2001) 16. Davison, A.J., Mayol-Cuevas, W., Murray, D.W.: Real-Time Localisation and Mapping with Wearable Active Vision. In: IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Washington, DC, USA, October 2003, p. 18 (2003) 17. Brown, M.Z., Burschka, D., Hager, G.D.: Advances in Computational Stereo. IEEE Trans. on Pattern Analysis & Machine Intel. 25(8), 993–1008 (2003) 18. Mhlmann, K., et al.: Calculating Dense Disparity Maps from Color Stereo Images, an Efficient Implementation. International Journal of Computer Vision 47, 79–88 (2002) 19. Vincent, L., Soille, P.: Watersheds in Digital Spaces: An Efficient Algorithm Based on Immersion Simulations. IEEE Transactions on Pattern Analysis and Machine Intelligence 13(6), 583–598 (1991) 20. Shum, H., Szeliski, R.: Panoramic image mosaics, Technical Report TR-97-23, Microsoft Research (1997) 21. Keller, Y., Averbuch, A.: Robust Multi-Sensor Image Registration Using Pixel Migration. In: Keller, Y., Averbuch, A. (eds.) IEEE Sensor Array and Multichannel Signal Processing Workshop (SAM), Washington D.C, USA (August 2002) 22. Baker, S., Matthews, I.: Lucas-Kanade 20 Years On: A Unifying Framework. International Journal of Computer Vision 56(3), 221–255 (2004) 23. Gill, P.E., Murray, W., Wright, M.H.: Practical Optimization. Academic Press, London and New York (1981) 24. Blank, D., et al.: Pyro: A python-based versatile programming environment for teaching robotics. Journal on Educational Resources in Computing (JERIC) 3(4), 1 (2003)
Markerless Augmented Reality for Robotic Helicopter Applications

Ian Yen-Hung Chen1, Bruce MacDonald1, and Burkhard Wünsche2

1 Dept. of Electrical and Computer Engineering, University of Auckland, New Zealand
{i.chen, b.macdonald}@auckland.ac.nz
2 Dept. of Computer Science, University of Auckland, New Zealand
[email protected]
Abstract. The objective of this research is to apply markerless Augmented Reality (AR) techniques to aid in the visualisation of robotic helicopter related tasks. Conventional robotic AR applications work well with markers in prepared environments but are infeasible in outdoor settings. In this paper, we present preliminary results from a real time markerless AR system for tracking natural features in an agricultural scene. By constructing a virtual marker under a known initial configuration of the robotic helicopter, camera and the ground plane, the camera pose can be continuously tracked using the natural features from the image sequence to perform augmentation of virtual objects. The experiments are simulated on a mock-up model of an agricultural farm and the results show that the current AR system is capable of tracking the camera pose accurately for translational motions and roll rotations. Future work includes reducing jitter in the virtual marker vertices to improve camera pose estimation accuracy for pitch and yaw rotations, and implementing feature recovery algorithms.
1 Introduction
The term Augmented Reality (AR) refers to the integration of virtual elements onto a view of a real world environment. This is achieved by superimposing virtual or synthetic computer generated information onto the live video of the real world captured from cameras. While the augmentation is being performed, the synthetic information will be displayed in real time on AR display devices such as monitors, projection screens, or head mounted displays (HMD) [1]. There are a growing number of applications that aim to enhance the AR experience with other sensory cues such as haptic feedback or spatial directional audio [2,3,4]. AR can be found in many application domains, such as entertainment [5], archaeology [6], touring [7], robotics [8], medical uses [9], military training [10], and more [11,12]. One of the main challenges in producing effective AR applications lies in the accurate registration of synthetic information in the real world, that is, the alignment between the virtual and the real [13]. Small errors in the alignment
are easily perceptible to the human eye, and therefore, fast, precise and robust tracking techniques are essential for computing an accurate position and orientation of the camera [13,14]. Marker-based tracking methods can be a fast, low cost solution to many AR applications. By tracking markers of known size in the environment, the marker vertices and direction information can be extracted to determine the camera pose and produce accurate registration results [15]. However, there are a number of drawbacks associated with artificial markers, especially for outdoor AR applications in unprepared environments. Partial occlusion of the markers or direct exposure to strong lighting conditions would cause tracking to produce erroneous registration results. Moreover, the camera will be constrained to look only at regions where the markers are in view, hence limiting the users’ working space [13]. The use of natural features in the scene for tracking is more desirable. No a priori information need be known and modifications to the environment are avoided. You et al. [16] propose a hybrid tracking technique that predicts motion estimates using inertial sensors. The accumulated drift errors are then corrected by vision at regular time intervals. Cornelis et al. [17] develop an AR system from uncalibrated video sequences, based on motion and structure recovery techniques and preprocessing of images to specify key frames for computing the fundamental matrix that relates corresponding points in stereo images. Wu et al. [18] also make use of fundamental matrices at key frames to derive the camera pose, and simplify the registration problem by placing control points of known positions in the scene to resolve scale ambiguity. Yuan et al. [19] propose robust accurate registration in real time by using a projective reconstruction approach to recover camera pose from four planar points. The application of AR technology to the field of robotics visualisation has several benefits. It helps to visualise information, such as robot sensory data or historic paths, in context with the physical world [8]. A number of robotic AR applications exist which aim to improve the effectiveness of the robot software development process by providing real time visual feedback to robot developers. Pettersen et al. [20] use marker-based AR to visualise the results in robot painting tasks and obtain instant feedback. Stilman et al. [21] use AR for debugging vision and motion planning algorithms with humanoid robots, using motion capture systems and markers. Collett and MacDonald [8] also develop a marker-based AR visualisation tool to help robot developers debug robotic software by highlighting the inconsistencies between the robot’s world view and the real world. Moreover, AR is able to help users monitor robots and the environment from remote areas. Sugimoto et al. [22] use AR to aid teleoperation of robots, by presenting an exocentric view of the robot based on images taken from an egocentric camera. Brujic-Okreti et al. [23] assist users in remote-control of robotic vehicles by displaying AR visualisations of the vehicle and the environment using digital elevation models and simulated GPS readings. The research proposed in this paper applies natural feature tracking techniques to the development of AR systems for visualisation in robotic helicopter tasks. In contrast to existing robotic AR applications, the proposed method over-
comes the disadvantages of marker-based tracking techniques and scales robotic AR applications to operate in outdoor environments. It uses a simpler registration process than existing natural feature tracking AR applications. This is made possible by taking advantage of robotic actuator capabilities at initialisation to position the camera at an orientation perpendicular to the target plane. Under this condition, only two parameters are required to define the planar region, the centre point and the size of the virtual object. They are represented by a virtual marker, on which the augmentation will take place. Once the AR system is initiated, the robotic helicopter is free to move and change its orientation, and the virtual elements will remain aligned with the real world. Section 2 describes the problem to be solved. Section 3 presents our solution to markerless tracking and registration. Section 4 summarises our overall AR system design. Section 5 gives results for a simulation. Section 6 discusses the future improvements and Section 7 concludes the paper.
2 Problem Description
Our focus is on using AR visualisation to assist robot developers when they are creating robotic helicopter applications for the agricultural industry. Common vision-related robotic helicopter tasks in agriculture include remote sensing, spraying, and monitoring of crops and livestock for better farm management [24,25,26,27]. The tasks typically use onboard cameras to observe the agricultural objects in the field. The ultimate goal is a framework for applying AR techniques in the agricultural environment with the use of camera images captured from a robotic helicopter. This will involve identifying the important requirements and evaluating the algorithms developed as a result of them. It is also necessary to resolve the registration problem to produce accurate alignment of the virtual objects and the real world, in response to the motions of the robotic helicopter and a pan-tilt camera. The objective is to track the pose of an intrinsically calibrated camera to facilitate the registration of a virtual object under a known initial configuration of the robotic helicopter, camera and the ground plane. It is advantageous to make use of the actuators on the robotic helicopter platform to adjust the orientation of the camera at the initial start-up of the system. The camera is oriented directly downwards at the agricultural field before commencing AR visualisation. This can be achieved by issuing a command to robotic actuator devices such as pan-tilt camera holders to move to the desired orientation. The resulting configuration of the camera mounted on the robotic helicopter and the environment is illustrated in Fig. 1. It is also assumed that there are sensors onboard which are able to give an accurate estimate of the height of the robotic helicopter from the ground to initialise the registration process. The problem then becomes the tracking of the camera position and orientation under this initial configuration and keeping the virtual objects aligned with the real world while the robotic helicopter moves or changes its angle of view. This is a simplified version of a real-world problem. The helicopter would not always be exactly parallel to the ground plane and future improvements must be implemented to address this issue.
Fig. 1. Initial configuration of the camera and the environment with the camera oriented at an angle perpendicular to the ground plane. Xc , Yc and Zc denote the axes of the camera coordinate system, and Xw , Yw and Zw form the world coordinate system
The unique behaviour of robotic helicopters compared to other fixed wing unmanned aerial vehicles is that they have the ability to hover over the agricultural objects and perform both rotational and translational motion with six degrees of freedom as shown in Fig. 2. A yaw rotation in the helicopter will result in a roll of the camera, and similarly, a roll rotation of the helicopter results in a yaw of the camera. The proposed system will apply natural feature tracking techniques to derive the camera pose for accurate registration of virtual objects while dealing with the different motions of the robotic helicopter.
Fig. 2. Robotic helicopter with 6 degrees of freedom: translation in the X, Y and Z directions in the local robot coordinate system, plus pitch, yaw and roll rotations.

3 Markerless Tracking and Registration

3.1 Augmented Reality Initialisation
The Kanade-Lucas-Tomasi (KLT) feature tracker [28,29] has been chosen for real-time tracking of natural features in the scene. The KLT algorithm has been shown to achieve strong performance in terms of speed and accuracy compared to other feature extractors for small camera motions and viewpoint changes between consecutive images [30]. In addition, it provides its own feature matching algorithm for tracking feature points in image pairs based on a model of affine image changes. The KLT feature tracker has been successfully applied in several outdoor AR systems [31,32,33,34]. A number of initial steps must be taken before proceeding to the augmentation of virtual objects. First, two reference images of the scene are required. The KLT tracking algorithm is applied to identify, select, and track natural features in these image frames. The features that are tracked between these two images are used to compute the Fundamental Matrix F using the Normalised Eight-Point algorithm proposed by Hartley [35], which has been shown to improve the estimation of F when working with noisy inputs. The fundamental matrix is essentially an algebraic representation of epipolar geometry in which two corresponding image points, x and x′, are related by the equation

x′ᵀ F x = 0    (1)
In this case, the pair of matching image points are the natural features tracked by the KLT feature tracker in the two reference images. From F, a pair of camera projection matrices, P and P′, can be retrieved in the form

P = [ I | 0 ],   P′ = [ [e′]× F | e′ ]    (2)

where I is the 3×3 identity matrix, 0 is a null 3-vector, and e′ is the epipole of the second reference image. The detailed mathematical derivations and proofs can be found in [36]. The pair of projection matrices, P and P′, establishes a projective space in which the 3D coordinates X of the tracked image point matches can be computed using a linear method from the equations

x = P X,   x′ = P′ X    (3)
The 3D coordinates X will remain constant throughout the execution of the system, assuming that (a) the major portion of the environment remains static
and (b) there are smoothing techniques available to eliminate outliers from the set of matching image points, such as wrongly tracked features or moving objects in the real world. It is also important to note that the two reference images should be captured from well-separated viewpoints to reduce the uncertainties in the reconstructed 3D points [17]. To indicate the position where the virtual object will be overlaid, the user needs only to specify a point on the second reference image which represents the centre of virtual object registration. Since the camera on the robotic helicopter is facing in a direction perpendicular to the ground plane, a square-shaped virtual marker can be rapidly constructed with four virtual vertices equally spaced from the specified point in the 2D image plane. Their corresponding 3D points can be constructed using P′. The method is similar to the projective reconstruction technique proposed by Yuan et al. [19], but in comparison, the registration process is simplified by starting with the camera orientation suggested in this research. It spares users the error-prone process of manually specifying the four coplanar points in the 2D image plane as viewed by a camera under perspective transformation, since small errors in the input would lead to inaccurate registration results. The four vertices of the virtual marker will play an important role, as will be seen later, in estimating the camera pose in consecutive image frames.
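To make this initialisation step concrete, the following sketch reconstructs the projective space from the two reference images using NumPy and OpenCV. It is an illustration only: OpenCV's pyramidal Lucas-Kanade tracker and findFundamentalMat stand in for the system's own KLT and Normalised Eight-Point implementations, the function name and thresholds are ours, and the lifting of the four virtual marker vertices into 3D is omitted.

```python
import numpy as np
import cv2


def initialise_projective_space(ref1_gray, ref2_gray):
    # Track natural features from the first to the second reference image
    # (OpenCV's pyramidal Lucas-Kanade as a stand-in for the KLT tracker).
    pts1 = cv2.goodFeaturesToTrack(ref1_gray, 400, 0.01, 7)
    pts2, status, _ = cv2.calcOpticalFlowPyrLK(ref1_gray, ref2_gray, pts1, None)
    ok = status.ravel() == 1
    x1 = pts1.reshape(-1, 2)[ok]
    x2 = pts2.reshape(-1, 2)[ok]

    # Normalised eight-point estimate of the fundamental matrix F (Equation 1).
    F, _ = cv2.findFundamentalMat(x1, x2, cv2.FM_8POINT)

    # Canonical projective cameras P = [I|0], P' = [[e']x F | e'] (Equation 2),
    # where e' is the left null vector of F (the epipole of the second image).
    e2 = np.linalg.svd(F.T)[2][-1]
    e2_cross = np.array([[0.0, -e2[2], e2[1]],
                         [e2[2], 0.0, -e2[0]],
                         [-e2[1], e2[0], 0.0]])
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([e2_cross @ F, e2.reshape(3, 1)])

    # Linear triangulation of the tracked matches (Equation 3); these 3D
    # coordinates are computed once and kept fixed for the rest of the run.
    Xh = cv2.triangulatePoints(P1, P2, x1.T, x2.T)   # 4 x N, homogeneous
    X = (Xh / Xh[3]).T                               # N x 4

    return F, P1, P2, X, (x1, x2)
```

The four virtual marker vertices can be reconstructed in the same projective space in an analogous way before tracking begins.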
3.2 Camera Tracking
As the helicopter moves, the position and orientation of the virtual camera should also be updated accordingly such that the virtual objects remain aligned with the real world. There must be a new projection matrix associated with every consecutive frame. The problem can be solved by employing triangulation techniques with matching 3D to 2D correspondences. Given that the KLT feature tracker will be able to provide the 2D image coordinates of the natural features tracked at each image frame, and the reconstructed 3D points in the unique projection space remain unchanged, the projection matrix Pk for the k-th image can be calculated using the Gold Standard algorithm [36] with at least six 3D to 2D point correspondences. To summarise, the 2D image points and 3D reconstructed points are first normalised, and the Direct Linear Transformation (DLT) algorithm is then applied to compute an initial estimate of Pk. To obtain a more robust result, the iterative Levenberg-Marquardt algorithm is used to minimise the geometric error

∑i d(xi, P Xi)²    (4)
where xi and Xi are in normalised form and P is parameterised with the twelve elements from the initial linear estimate of Pk . Denormalisation is then performed to obtain the final resulting Pk .
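A minimal sketch of the linear (DLT) step is given below. It assumes at least six 3D-to-2D correspondences and indicates the normalisation and the Levenberg-Marquardt refinement of Equation (4) only in comments, since those follow [36] directly; the helper name is ours.

```python
import numpy as np


def dlt_projection_matrix(X3d, x2d):
    """X3d: (N, 3) reconstructed points, x2d: (N, 2) tracked image points, N >= 6."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X3d, x2d):
        Xh = [Xw, Yw, Zw, 1.0]
        # Standard DLT rows for the correspondence x = P X.
        A.append([0, 0, 0, 0] + [-c for c in Xh] + [v * c for c in Xh])
        A.append(Xh + [0, 0, 0, 0] + [-u * c for c in Xh])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)
    # A subsequent non-linear step would minimise sum_i d(x_i, P X_i)^2 (Eq. 4),
    # e.g. with Levenberg-Marquardt, starting from this linear estimate, after
    # normalising the inputs and denormalising the result as in the text.
    return P / np.linalg.norm(P)
```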
By obtaining the projection matrix for every consecutive image frame in the video sequence, the 2D image coordinates of the virtual vertices can be reprojected using Equation (3) from their corresponding 3D points in the unique projection space. Assuming that the intrinsic parameters of the camera are known in advance and stay constant throughout the entire augmentation, the camera pose for the k-th image frame can be computed by using the reprojected image coordinates of the virtual marker vertices as input to the Robust Pose Estimation Algorithm for Planar Targets [37,38]. The rotation and translation of the camera can be derived relative to the virtual marker's pose. The 3D virtual object will be registered at the origin of the world coordinate system, sitting on the XY plane, and the camera will be initially placed along the positive Z axis looking down at the virtual object, as shown in Fig. 1. The rotation and translation derived at the k-th image frame will be applied to the virtual camera so that the virtual objects remain aligned with the real world when the position and orientation of the real camera change.
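The sketch below illustrates this last step under the stated assumption of known intrinsics. The paper relies on the Robust Pose Estimation Algorithm for Planar Targets [37,38]; here OpenCV's solvePnP is used purely as a stand-in that consumes the same inputs (the four coplanar marker vertices and the camera matrix) and returns the rotation and translation applied to the virtual camera. The marker size and the function name are our assumptions.

```python
import numpy as np
import cv2


def camera_pose_from_marker(marker_px, K, marker_size=1.0):
    """marker_px: (4, 2) reprojected vertex coordinates in frame k (ordered),
    K: 3x3 intrinsic matrix. Returns the rotation matrix R and translation t."""
    s = marker_size / 2.0
    # Virtual marker on the world XY plane, centred at the origin (cf. Fig. 1).
    obj = np.array([[-s, -s, 0], [s, -s, 0], [s, s, 0], [-s, s, 0]], np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj, marker_px.astype(np.float64), K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec   # applied to the virtual camera so the virtual object stays registered
```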
4 System Design
In order to create useful AR applications, the quality of the graphics embedded in the AR system is important. A dedicated open source 3D graphics rendering engine, OGRE 3D [39], has been chosen. It has large community support and provides advanced graphics rendering functionalities including a material shader, scripted animations and other special effects. The use of the OGRE 3D graphics rendering engine allows the creation of high quality computer graphics in a short period of time. Moreover, for future considerations, it has the capability to integrate physics and networking libraries to construct a more powerful AR or Mixed Reality (MR) system. The interface to robot sensors and actuators is made possible by Player/Stage [40], which is a popular open source software tool widely used in the research and development of robot and sensor applications. The Player server runs on the robot platform and provides developers with an interface for controlling the underlying hardware over the IP network. To connect to the Player server, a Player plug-in for OGRE 3D is created. It is essentially a Player client that acquires raw camera image data from the camera mounted on the robot platform. The camera images are compressed on the Player server before being streamed across the network and decompressed on the client side to reduce bandwidth usage. The markerless AR system is implemented by utilising this plug-in and rendering the acquired camera images as dynamic textures on the background of the OGRE 3D window in colour, but they are converted to a grayscale format before being fed into the KLT feature tracker. The virtual objects to be rendered over the camera images can be created in other modeling tools and then loaded into OGRE 3D. Fig. 3 shows the overall system design and the data flow between the different components.
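The sketch below mirrors this client-side image path in Python. The actual system receives frames through a C++ Player plug-in inside OGRE 3D; cv2.VideoCapture is only a stand-in for that camera proxy, and OpenCV's pyramidal Lucas-Kanade tracker stands in for the KLT component.

```python
import cv2

cap = cv2.VideoCapture(0)            # stand-in for the Player camera proxy
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
feats = cv2.goodFeaturesToTrack(prev_gray, 400, 0.01, 7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # colour shown, grayscale tracked
    feats, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, feats, None)
    feats = feats[status.ravel() == 1].reshape(-1, 1, 2)
    if len(feats) < 50:                              # crude re-detection when tracks are lost
        feats = cv2.goodFeaturesToTrack(gray, 400, 0.01, 7)
    prev_gray = gray
    # ...feed the surviving tracks into the registration pipeline of Section 3...
```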
Fig. 3. The overall system design and data flow
Fig. 4. Mock-up model of an agricultural farm

5 Experimental Results from Simulation
As robotic helicopter applications need to deal with noisy images, changes in illumination, and different outdoor scene features, we have designed an outdoor scene which simulates an agricultural environment. The experiments were performed over a mock-up model of an agricultural farm shown in Fig. 4. The model is at the scale 1:200 with a base size of A0 and a maximum height of 10 centimeters. The model was built in the University of Auckland Robotics Laboratory for simulating various robotic helicopter tasks, particularly robot vision related projects, before venturing into real-world experimentation. A variety of realistic agricultural objects are added and lighting equipment is available to simulate a real-life farm under different time-of-day conditions. There are other issues regarding this simulation approach which need to be considered in the future, such as the missing effect of shadows from moving clouds, simulating different weather conditions, and the realism of objects and illumination. However, it provides us with insight into the characteristics of an outdoor environment and the major problems that could arise when operating robotic systems outdoors, which include working with different texture contents in an agricultural scene, repetitive scene patterns, the variety of shapes of agricultural objects, and moving in an unstructured environment. First, two well-spaced reference images of the scene were captured and the natural features were tracked using the KLT feature tracker. A virtual barn was augmented starting at the centre of the image. Then the camera underwent a series of motions.
Fig. 5. The resulting reprojections of the virtual marker vertices (panels a-d) after undergoing a series of camera motions. The marker vertices are blue squares, highlighted in these images by red circles.

Fig. 6. Augmentation of a virtual barn on the virtual marker (panels a-d). The marker vertices are blue squares, highlighted in these images by red circles.
The system was deployed in Linux on a 2.4GHz Intel(R) Core(TM)2 Quad CPU with 1GB of RAM and a NVIDIA Quadro FX 3450/4000 SDI graphics card. A Logitech Quickcam Fusion camera was used to capture the images of size 640x480. The frame rate of the markerless AR system remained at approximately 4-5 frames per second for the entire augmentation. Fig. 5 shows the virtual marker vertices being reprojected in different frames. The red rectangular blobs represent the natural features tracked by the KLT feature tracker and the four blue rectangular blobs are the vertices that form a square-shaped virtual marker. Fig. 5(a), 5(b) show the result of the virtual marker vertices when moving the camera in X and Y directions while undergoing some rotation, and 5(c), 5(d) show the effect of moving the camera along the Z axis. Fig. 6 shows the result of placing the virtual barn in the origin of the world coordinate system while the virtual camera position and orientation are being updated using the Robust Pose Estimation algorithm. An experiment was conducted to measure the residual error between the predicted camera position and the actual world position for translational motions. The camera was placed at 40cm from the ground plane and translated in its local X,Y and Z directions. The virtual object was augmented at the centre of the image as before and the residual error was recorded at increments of 1cm for an overall translation of 15 cm. The experiment was repeated 8 times and the error after the full translation was found to be an average of 1.77cm in the X direction, 1.40cm in the Y direction, and 1.25cm in the Z direction. Equivalently, the expected error when operated on a real farm would be approximately 3.54m, 2.80m, 2.50m in the X, Y, and Z direction, respectively. The error was observed to increase incrementally as the result of losing the tracked natural features, mainly when they moved out of the camera view. Consequently, the registration error became obvious to the human eye when the virtual object was situated near the edges of the image plane.
6 Discussions and Future Work
The AR system was able to correctly update the camera pose as shown in Section 5. The augmentation of virtual objects was accurate for translations of the camera and the errors were minor. The noise in the tracked natural features had a larger impact on pitch and yaw rotations compared to roll rotations. The reason is that the virtual marker will no longer be square-shaped in the image plane due to perspective transformation, and minor pixel errors in the reprojections of the virtual marker vertices will cause the Robust Pose Estimation algorithm to estimate a substantially different camera position and orientation. Ultimately, the accuracy of the registration is highly dependent on the natural features being tracked. There will be errors in the virtual object registration due to jitter in the virtual marker vertices, which would cause the Robust Pose Estimation algorithm to produce unstable results between image frames.
The instability in the virtual marker vertices will have a more significant effect on the computed camera rotations. To solve the problem, a more robust algorithm needs to be applied to filter the noise and wrongly tracked natural features. Other feature tracking algorithms could also be considered which would produce more distinctive features to obtain better results, but the trade-off could be a higher computational cost. This needs to be further investigated. There are also a number of limitations in the current AR system if it is to be deployed for robotic helicopter flight. An automated feature recovery algorithm is necessary. As the robotic helicopter is a noisy and unstable platform, there will be situations where the helicopter will undergo sudden large motions. In this case, the natural features would be lost and augmentation would fail. Relying on vision techniques alone, without artificial landmarks or external inputs in the environment, makes it difficult to recover the camera pose quickly online. Therefore, a hybrid approach incorporating onboard sensors will be considered in the future, to provide rough estimates of the camera pose when features are lost. Then vision techniques can be used to refine these estimates to resume augmentation. To overcome the constraint imposed by the initial configuration mentioned in Section 2, the pose of the ground plane also needs to be determined in the initialization stage. Furthermore, to scale the system to operate in a large working area, more natural features would need to be tracked outside the scene covered by the reference images. The system should be automated to recover the 3D structure information of the scene from the incoming camera images as the robotic helicopter travels between different areas of the agricultural land. Methods to achieve this include capturing more reference images during the preliminary offline process [41] and tracking a virtual 3D plane from the computation of plane homographies over the image sequences [42].
7 Conclusions
This research investigates the essential tracking requirements for applying the augmented reality visualisation technique in robotic helicopter applications. A markerless AR system based on natural feature tracking is presented. We have successfully tracked the camera pose by introducing a virtual marker into the scene, which achieves good results subject to the proposed initial configuration of the camera, robotic helicopter, and the environment. We have developed a simulation using a mock-up model of an agricultural farm for testing the performance of the markerless AR system. From the experiments we can conclude that there are a number of improvements which need to be made before deploying the system in actual flight. The jittering motion of the virtual marker vertices must be reduced for better camera pose estimation, and most importantly, a feature recovery algorithm is crucial to scale the markerless AR system to operate outdoors on the robotic helicopter.
References 1. Milgram, P., Kishino, F.: A taxonomy of mixed reality visual display. IEICE Transactions on Information Systems E77-D, 1321–1329 (1994) 2. Cheok, A.D., et al.: Capture the flag: a multiplayer online game for phone users. In: Proceedings of the Ninth IEEE International Symposium on Wearable Computers, 2005, October 18–21, 2005, pp. 222–223 (2005) 3. Hughes, C.E., et al.: Mixed reality in education, entertainment, and training. IEEE Computer Graphics and Applications 25(6), 24–30 (2005) 4. Sandor, C., et al.: Immersive mixed-reality configuration of hybrid user interfaces. In: Proceedings of the Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality, 2005, October 5–8, 2005, pp. 110–113 (2005) 5. Thomas, B., et al.: Arquake: An outdoor/indoor augmented reality first person application. In: The Fourth International Symposium on Wearable Computers, 2000, October 16–17, 2000, pp. 139–146 (2000) 6. Benko, H., Ishak, E.W., Feiner, S.: Collaborative mixed reality visualization of an archaeological excavation. In: Third IEEE and ACM International Symposium on Mixed and Augmented Reality, 2004. ISMAR 2004, November 2–5, 2004, pp. 132–140 (2004) 7. Vlahakis, V., et al.: Personalized augmented reality touring of archaeological sites with wearable and mobile computers. In: Horrocks, I., Hendler, J. (eds.) ISWC 2002. LNCS, vol. 2342, pp. 15–22. Springer, Heidelberg (2002) 8. Collett, T., MacDonald, B.: Augmented reality visualisation for player. In: Proceedings of the 2006 IEEE International Conference on Robotics and Automation, 2006. ICRA 2006, pp. 3954–3959 (2006) 9. Rosenthal, M., et al.: Augmented reality guidance for needle biopsies: A randomized, controlled trial in phantoms. In: Niessen, W.J., Viergever, M.A. (eds.) MICCAI 2001. LNCS, vol. 2208, pp. 240–248. Springer, Heidelberg (2001) 10. Livingston, M.A., et al.: An augmented reality system for military operations in urban terrain. In: Proceedings of the Interservice / Industry Training, Simulation, & Education Conference (I/ITSEC 2002), Orlando, December 2–5 (2002) 11. Azuma, R.T.: A survey of augmented reality. Presence: Teleoperators and Virtual Environments 6(4), 355–385 (1997) 12. Azuma, R., et al.: Recent advances in augmented reality. IEEE Computer Graphics and Applications 21(6), 34–47 (2001) 13. Azuma, R.T.: 21. In: Mixed Reality: Merging Real and Virtual Worlds, pp. 379– 390. Springer, Heidelberg (1999) 14. Bimber, O., Raskar, R.: Spatial Augmented Reality: Merging Real and Virtual Worlds. A.K. Peters, Ltd., Natick, MA, USA (2005) 15. Kato, H., Billinghurst, M.: Marker tracking and hmd calibration for a video-based augmented reality conferencing system. In: Proceedings of the 2nd IEEE and ACM International Workshop on Augmented Reality, 1999 (IWAR 1999), pp. 85–94 (1999) 16. You, S., Neumann, U., Azuma, R.: Orientation tracking for outdoor augmented reality registration. IEEE Computer Graphics and Applications 19(6), 36–42 (1999) 17. Cornelis, K., et al.: Augmented reality using uncalibrated video sequences. In: Pollefeys, M., et al. (eds.) SMILE 2000. LNCS, vol. 2018, pp. 144–160. Springer, Heidelberg (2001) 18. Wu, J., Yuan, Z., Chen, J.: A registration method of ar based on fundamental matrix. In: 2005 IEEE International Conference Mechatronics and Automation, vol. 3, pp. 1180–1184 (2005)
19. Yuan, M., Ong, S., Nee, A.: Registration using natural features for augmented reality systems. IEEE Transactions on Visualization and Computer Graphics 12(4), 569–580 (2006) 20. Pettersen, T., et al.: Augmented reality for programming industrial robots. In: Proceedings of the Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003, pp. 319–320 (2003) 21. Stilman, M., et al.: Augmented reality for robot development and experimentation. Technical Report CMU-RI-TR-05-55, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA (November 2005) 22. Sugimoto, M., et al.: Time follower’s vision: a teleoperation interface with past images. IEEE Computer Graphics and Applications 25(1), 54–63 (2005) 23. Brujic-Okretic, V., et al.: Remote vehicle manoeuvring using augmented reality. In: International Conference on Visual Information Engineering, 2003. VIE 2003, pp. 186–189 (2003) 24. Sugiura, R., et al.: Field information system using an agricultural helicopter towards precision farming. In: Proceedings of the 2003 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, 2003. AIM 2003, vol. 2, pp. 1073–1078 (2003) 25. Archer, F., et al.: Introduction, overview, and status of the microwave autonomous copter system (macs). In: Proceedings of the 2004 IEEE International Geoscience and Remote Sensing Symposium, 2004. IGARSS 2004, vol. 5, pp. 3574–3576 (2004) 26. Yokobori, J., et al.: Variable management for uniform potato yield using remote sensing images with unmanned helicopter. In: ASAE Annual Meeting (2003) 27. Noguchi, N., Ishii, K., Sugiura, R.: Remote-sensing technology for vegetation monitoring using an unmanned helicopter. In: Biosystems Engineering, pp. 369–379 (2005) 28. Lucas, B.D., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI 1981), April 1981, pp. 674–679 (1981) 29. Shi, J., Tomasi, C.: Good features to track. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1994. Proceedings CVPR 1994, vol. 1994, pp. 593–600 (1994) 30. Klippenstein, J., Zhang, H.: Quantitative evaluation of feature extractors for visual slam. In: Fourth Canadian Conference on Computer and Robot Vision, 2007. CRV 2007, pp. 157–164 (2007) 31. Yao, A., Calway, A.: Robust estimation of 3-d camera motion for uncalibrated augmented reality. Technical Report CSTR-02-001, Department of Computer Science, University of Bristol (March 2002) 32. Kameda, Y., Takemasa, T., Ohta, Y.: Outdoor see-through vision utilizing surveillance cameras. In: Third IEEE and ACM International Symposium on Mixed and Augmented Reality, 2004. ISMAR 2004, pp. 151–160 (2004) 33. Ong, S.K., Yuan, M.L., Nee, A.Y.C.: Tracking points using projective reconstruction for augmented reality. In: Proceedings of the 3rd international conference on Computer graphics and interactive techniques in Australasia and South East Asia. GRAPHITE 2005, pp. 421–424. ACM Press, New York (2005) 34. Behringer, R., Park, J., Sundareswaran, V.: Model-based visual tracking for outdoor augmented reality applications. In: Proceedings of the International Symposium on Mixed and Augmented Reality, 2002. ISMAR 2002, pp. 277–322 (2002) 35. Hartley, R.: In defense of the eight-point algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence 19(6), 580–593 (1997)
36. Hartley, R.I., Zisserman, A.: 8. In: Multiple View Geometry in Computer Vision, 2nd edn., Cambridge University Press, Cambridge (2003) 37. Schweighofer, G., Pinz, A.: Robust pose estimation from a planar target. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(12), 2024–2030 (2006) 38. Wagner, D., Schmalstieg, D.: Artoolkitplus for pose tracking on mobile devices. In: Computer Vision Winter Workshop 2007, February 6-8 (2007) 39. OGRE Ogre 3d: Object-oriented graphics rendering engine, http://www.ogre3d.org 40. Gerkey, B.P., Vaughan, R.T., Howard, A.: The player/stage project: Tools for multi-robot and distributed sensor systems. In: Proceedings of the International Conference on Advanced Robotics (ICAR 2003), June 30–July 3, 2003, pp. 317–323 (2003) 41. Skrypnyk, I., Lowe, D.: Scene modelling, recognition and tracking with invariant image features. In: Third IEEE and ACM International Symposium on Mixed and Augmented Reality, 2004. ISMAR 2004, pp. 110–119 (2004) 42. Lourakis, M., Argyros, A.: Vision-based camera motion recovery for augmented reality. In: CGI 2004. Proceedings of the Computer Graphics International, pp. 569–576. IEEE Computer Society Press, Washington, DC, USA (2004)
Facial Expression Recognition for Human-Robot Interaction – A Prototype

Matthias Wimmer1, Bruce A. MacDonald2, Dinuka Jayamuni2, and Arpit Yadav2

1 Department of Informatics, Technische Universität München, Germany
2 Electrical and Computer Engineering, University of Auckland, New Zealand
Abstract. To be effective in the human world robots must respond to human emotional states. This paper focuses on the recognition of the six universal human facial expressions. In the last decade there has been successful research on facial expression recognition (FER) in controlled conditions suitable for human–computer interaction [1,2,3,4,5,6,7,8]. However the human–robot scenario presents additional challenges including a lack of control over lighting conditions and over the relative poses and separation of the robot and human, the inherent mobility of robots, and stricter real time computational requirements dictated by the need for robots to respond in a timely fashion. Our approach imposes lower computational requirements by specifically adapting model-based techniques to the FER scenario. It contains adaptive skin color extraction, localization of the entire face and facial components, and specifically learned objective functions for fitting a deformable face model. Experimental evaluation reports a recognition rate of 70% on the Cohn–Kanade facial expression database, and 67% in a robot scenario, which compare well to other FER systems.
1 Introduction
The desire to interpret human gestures and facial expressions is making interaction with robots more human-like. This paper describes our model-based approach for automatically recognizing facial expressions, and its application to human-robot interaction. Knowing the human user's intentions and feelings enables a robot to respond more appropriately during tasks where humans and robots must work together [9,10,11], which they must do increasingly as the use of service robots continues to grow. In robot-assisted learning, a robot acts as the teacher by explaining the content of the lesson and questioning the user afterwards. If the robot is aware of human emotion, the quality and success of these lessons will rise because the robot will be able to progress from lesson to lesson just when the human is ready [12]. Studies of human-robot interaction will be improved by automated emotion interpretation. Currently the humans' feelings about interactions must be interpreted using questionnaires, self-reports, and manual analysis of recorded video. Robots will need to detect deceit in humans in security applications, and
facial expressions may help [13]. Furthermore, natural human-robot interaction requires detecting whether or not a person is telling the truth. Micro expressions within the face express these subtle differences. Specifically trained computer vision applications would be able to make this distinction. Today’s techniques for detecting human emotion often approach this challenge by integrating dedicated hardware to make more direct measurements of the human [14,15,16]. Directly connected sensors measure blood pressure, perspiration, brain waves, heart rate, skin temperature, electrodermal activity, etc. in order to estimate the human’s emotional state. In practical human-robot interactions, these sensors would need to be portable, wearable and wireless. However, humans interpret emotion mainly from video and audio information and it would be desirable if robots could obtain this information from their general purpose sensing systems, in the visual and audio domains. Furthermore, these sensors do not interfere with the human being by requiring direct connections to the human body. Our approach interprets facial expressions from video information. Section 2 explains the state of the art in facial expression recognition (FER) covering both psychological theory and concrete approaches. It also elaborates on the specific challenges in robot scenarios. Section 3 describes our model-based approach which derives facial expressions from both structural and temporal features of the face. Section 4 presents results on a standard test set and in a practical scenario with a mobile robot.
2 Facial Expression Recognition: State of the Art
Ekman and Friesen [18] find six universal facial expressions that are expressed and interpreted in the same way by humans of any origin all over the world. They do not depend on the cultural background or the country of origin. Figure 1 shows one example of each facial expression. The Facial Action Coding System (FACS) precisely describes the muscle activity within a human face [19]. So-called Action Units (AUs) denote the motion of particular facial parts and state the involved facial muscles. Combinations of AUs assemble facial expressions. Extended systems such as the Emotional FACS [20] specify the relation between facial expressions and emotions. The Cohn-Kanade Facial Expression Database (CKDB) contains 488 short image sequences of 97 different persons showing the six universal facial expressions [17]. It provides researchers with a large dataset for experimentation and benchmarking. Each sequence shows a neutral face at the beginning and then
Fig. 1. The six universal facial expressions as they occur in [17]: happiness, anger, disgust, sadness, fear, and surprise
Fig. 2. Procedure for recognizing facial expressions according to Pantic et al. [3]
develops into the peak expression. Furthermore, a set of AUs has been manually specified by licensed FACS experts for each sequence. Note that this database does not contain natural facial expressions, but volunteers were asked to act them. Moreover, the image sequences are taken in a laboratory environment with predefined illumination conditions, solid background and frontal face views. Algorithms that perform well with these image sequences are not necessarily appropriate for real-world scenes.
2.1 The Three-Phase Procedure
The computational task of FER is usually subdivided into three subordinate challenges shown in Figure 2: face detection, feature extraction, and facial expression classification [3]. Others add pre– and post–processing steps [4]. In Phase 1, the human face and the facial components must be accurately located within the image. On the one hand, sometimes this is achieved automatically, as in [1,21,22]. Most automatic approaches assume the presence of a frontal face view. On the other hand, some researchers prefer to specify this information by hand, and focus on the interpretation task itself, as in [23,24,25,26]. In Phase 2, the features relevant to facial expressions are extracted from the image. Michel et al. [21] extract the location of 22 feature points within the face and determine their motion between a neutral frame and a frame representative for a facial expression. These feature points are mostly located around the eyes and mouth. Littlewort et al. [27] use a bank of 40 Gabor wavelet filters at different scales and orientations to extract features directly from the image. They perform convolution and obtain a vector of magnitudes of complex valued responses. In Phase 3, the facial expression is derived from the extracted features. Most often, a classifier is learned from a comprehensive training set of annotated examples. Some approaches first compute the visible AUs and then infer the facial expression by rules stated by Ekman and Friesen [28]. Michel et al. [21] train a Support Vector Machine (SVM) that determines the visible facial expression within the video sequences of the CKDB by comparing the first frame with the neutral expression to the last frame with the peak expression. Schweiger and Bayerl [25] compute the optical flow within 6 predefined face regions to extract the facial features. Their classification uses supervised neural networks.
2.2 Challenges of Facial Expression Recognition
FER is a particularly challenging problem. Like speech, facial expressions are easier to generate than to recognize. The three phases represent different challenges. While Phase 1 and Phase 2 are confronted with image interpretation challenges, Phase 3 faces common Machine Learning problems. The challenge of one phase increases if the result of the previous one is not accurate enough. The first two phases grasp image components of different semantic levels. While human faces have many similarities in general shape and layout of features, in fact all faces are different, and vary in shape, color, texture, the exact location of key features, and facial hair. Faces are often partially occluded by spectacles, facial hair, or hats. Lighting conditions are a significant problem as well. Usually, there are multiple sources of light, and hence multiple shadows on a face. The third phase faces typical Machine Learning challenges. The features must be representative of the target value, i.e. the facial expression. The inference method must be capable of deriving the facial expression from the facial features provided. The learning algorithm has to find appropriate inference rules. Being the last phase, it depends most on accurate inputs from the preceding phases. Cohen et al. [24] use a three-dimensional wireframe model consisting of 16 different surface patches representing different parts of the face. Since these patches consist of Bézier volumes, the model's deformation is expressed by the Bézier volume parameters. These parameters represent the basis for determining the visible facial expression. Cohen et al. integrate two variants of Bayesian Network classifiers, a Naive Bayes classifier and a Tree-Augmented-Naive Bayes classifier. Whereas the first one treats the motion vectors as independent, the latter classifier assumes dependencies between them, which facilitate the interpretation task. Further improvements are achieved by integrating temporal information. It is computed from measuring different muscle activity within the face, which is represented by Hidden Markov Models (HMMs). Bartlett et al. [29] compare recognition engines on CKDB and find that a subset of Gabor filters using AdaBoost followed by training on Support Vector Machines gives the best results, with 93% correct for novel subjects in a 7-way choice of expressions in real time. Note that this approach is tuned to the specifics of the images within CKDB. In contrast, approaches for a robot scenario must be robust to the variety of real-world images.
2.3 Additional Challenges in a Robot Scenario
In contrast to human-machine interactions with, say, desktop computers, during interactions with robots the position and orientation of the human is less constrained, which makes facial expression recognition more difficult. The human may be further from the robot's camera, e.g. he or she could be on the other side of a large room. The human may not be directly facing the robot. Furthermore, he or she may be moving, and looking in different directions, in order to take part in the robot's task. As a result it is difficult to ensure a good, well-illuminated view of the human's face for FER.
To make matters worse, robots are mobile. There is no hope of placing the human within a controlled space, such as a controlled lighting situation, because the human must follow the robot in order to continue the interaction. Most image subtraction approaches cannot cope, because image subtraction is intended to separate facial activity from the entire surrounding that is expected to be static. In contrast, our model-based approach copes with this challenge, because of the additional level of abstraction (the model). A technical challenge is to meet the real-time computational constraints for human-robot interaction. The robot must respond to human emotions within a fraction of a second, to have the potential to improve the interaction. Many researchers have studied FER for robots. Some model-based methods cannot provide the real-time performance needed for human-robot interaction, because model fitting is computationally expensive and because the required high-resolution images impose an additional computational burden [30]. Kim et al. [30] use a set of rectangular features and train using AdaBoost. Yoshitomi et al. fuse speech data, face images and thermal face images to help a robot recognize emotional states [31]. An HMM is used for speech recognition. Otsuka and Ohya [32] use HMMs to model facial expressions, for recognition by robots. Zhou et al. use an embedded HMM and AdaBoost for real-time FER by a robot [33].
3 Model-Based Interpretation of Facial Expressions
Our approach makes use of model-based techniques, which exploit a priori knowledge about objects, such as their shape or texture. Reducing the large amount of image data to a small set of model parameters facilitates and accelerates the subsequent facial expression interpretation, which mitigates the computational
Fig. 3. Model-based image interpretation splits the challenge of image interpretation into computationally independent modules
problems often associated with model-based approaches to FER. According to the usual configuration [34], our model-based approach consists of seven components, which are illustrated in Figure 3. This approach fits well into the three-phase procedure of Pantic et al. [3], where skin color extraction represents a pre-processing step, mentioned by Chibelushi et al. [4]. Phase 1 is covered by the core of the model-based techniques: the model, localization, the fitting algorithm, and the objective function. Phase 2 consists of the facial feature extraction, and Phase 3 is the final step of facial expression classification.
3.1 The Deformable Face Model
The model contains a parameter vector p that represents its possible configurations, such as position, orientation, scaling, and deformation. Models are mapped onto the surface of an image via a set of feature points, a contour, a textured region, etc. Our approach makes use of a statistics-based deformable model, as introduced by Cootes et al. [35]. Its parameters p = (tx, ty, s, θ, b)ᵀ comprise the affine transformation (translation, scaling factor, and rotation) and a vector of deformation parameters b = (b1, ..., bB)ᵀ. The latter component describes the configuration of the face, such as the opening of the mouth, the direction of the gaze, and the raising of the eyebrows; compare Figure 4. In this work, B = 17 to cover all necessary modes of variation.
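As an illustration of how such a parameter vector generates the 134 contour points, the sketch below combines the deformation modes with the affine transformation. The mean shape and the deformation modes would be obtained from a PCA of annotated training shapes as in Cootes et al. [35]; they are treated as given inputs here, and the function name is ours.

```python
import numpy as np


def model_points(tx, ty, s, theta, b, mean_shape, modes):
    """mean_shape: (134, 2) mean contour, modes: (268, B) deformation modes,
    b: (B,) deformation parameters. Returns the 134 model points in the image."""
    # Deformation in the normalised shape frame: x = x_mean + Phi * b
    shape = mean_shape.reshape(-1) + modes @ b          # (268,)
    pts = shape.reshape(-1, 2)                          # (134, 2)
    # Similarity transform: scaling, rotation, translation.
    c, sn = np.cos(theta), np.sin(theta)
    R = np.array([[c, -sn], [sn, c]])
    return s * pts @ R.T + np.array([tx, ty])
```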
3.2 Skin Color Extraction
Skin color, as opposed to pixel values, represents highly reliable information about the location of the entire face and the facial components and their boundaries. Unfortunately, skin color occupies a cluster in color space that varies with
Fig. 4. Our deformable model of a human face consists of 134 contour points that represent the major facial components. The deformation by a change of just one parameter is shown in each row, ranging from −2σ to 2σ, as in Cootes et al. [36]. b1 rotates the head. b3 opens the mouth. b10 changes the direction of the gaze.
the scenery, the person, and the camera type and settings. Therefore, we conduct a two-step approach as described in our previous publication [37]. We first detect the specifics of the image, i.e., we describe the skin color as it is visible in the given image. In a second step, the feature values of the pixels are adjusted by these image specifics. This results in a set of simple and quick pixel features that are highly descriptive of the skin color regions of the given image. This approach facilitates distinguishing skin color from very similar colors such as those of the lips and eyebrows. The borders of the skin regions are clearly extracted, and since most of these borders correspond to the contour lines of our face model, see Figure 4, this supports model fitting in the subsequent steps.
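The following fragment is only a rough stand-in for this two-step idea: it estimates the skin colour as it appears in the current image (here from a detected face box) and then scores every pixel against that image-specific description. The colour space, the statistics and the threshold are our assumptions, not the feature set of [37].

```python
import numpy as np
import cv2


def skin_mask(bgr, face_box):
    """face_box: (x, y, w, h) of a detected face used to sample this image's skin colour."""
    x, y, w, h = face_box
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    patch = ycrcb[y:y + h, x:x + w, 1:3].reshape(-1, 2)   # Cr/Cb of the face region
    mu, sigma = patch.mean(0), patch.std(0) + 1e-6
    # Score every pixel against the image-specific skin colour description.
    d = np.abs((ycrcb[:, :, 1:3] - mu) / sigma).max(axis=2)
    return d < 2.5          # pixels within roughly 2.5 sigma of this image's skin colour
```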
3.3 Localization of the Face and the Facial Components
The localization algorithm computes an initial estimate of the model parameters. The subsequent fitting algorithm is intended to refine these values further. Our system integrates the approach of Viola and Jones [38], which detects a rectangular bounding box around the frontal face view. From this information, we derive the affine transformation parameters of our face model. Additionally, rough estimation of the deformation parameters b improves accuracy. A second iteration of the Viola and Jones object locator is used on the previously determined rectangular image region around the face. We specifically learn further algorithms to localize the facial components, such as eyes and mouth.1 In the case of the eyes, the positive training data contains images of eyes, whereas the negative training data consists of image patches that are taken from the vicinity of the eyes, such as the cheek, the nose, or the brows. Note that the learned eye locator is not able to accurately find the eyes within a complex image, because images usually contain a lot of information that was not part of our specific training data. However, the eye locator is well suited to determining the location of the eyes given a pure face image or a facial region within a complex image.
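A sketch of this coarse-to-fine localisation with OpenCV's Viola-Jones cascades is shown below. The cascade files shipped with OpenCV are stand-ins for the specifically trained eye and mouth locators described above.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")


def locate_face_and_eyes(gray):
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]          # second pass only inside the face box
        eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=4)
        results.append(((x, y, w, h), [(x + ex, y + ey, ew, eh)
                                       for (ex, ey, ew, eh) in eyes]))
    return results   # used to initialise the affine parameters of the face model
```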
3.4 The Objective Function
The objective function f(I, p) yields a comparable value that specifies how accurately a parameterized model p describes an image I. It is also known as the likelihood, similarity, energy, cost, goodness, or quality function. Without loss of generality, we consider lower values to denote a better model fit. The fitting algorithm, as described in Section 3.5, searches for the model that describes the image best by determining the global minimum of the objective function. Traditionally, the calculation rules of objective functions are manually specified by first selecting a small number of image features, such as edges or corners, and then combining them by mathematical operators, see Figure 5. Afterwards, the appropriateness of the function is subjectively investigated by inspecting its results on example images and example model parameterizations. If the result is not satisfactory, the function is modified or designed from scratch. This heuristic approach relies on the designer's intuition about a good measure of fitness. Our earlier works [39,40] show that this methodology is erroneous and tedious.

Fig. 5. The traditional procedure for designing objective functions is elaborate and error-prone

To avoid these shortcomings, our former publications [39,40] propose to learn the objective function from training data generated by an ideal objective function, which only exists for previously annotated images. This procedure ensures that the learned function is approximately ideal as well. Figure 6 illustrates our five-step methodology. It splits the generation of the objective function into several partly automated, independent pieces. Briefly, this provides several benefits: first, automated steps replace the labor-intensive design of the objective function. Second, the approach is less error-prone, because annotating example images with the correct fit is much easier than explicitly specifying calculation rules that need to return the correct value for all magnitudes of fitness while considering any facial variation at the same time. Third, this approach requires neither expert knowledge in computer vision nor skills in the application domain, and it is therefore generally applicable. This approach yields robust and accurate objective functions, which greatly facilitate the task of the fitting algorithms.

1 Locators for facial components, which are part of our system, can be downloaded at: http://www9.in.tum.de/people/wimmerm/se/project.eyefinder
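The fragment below sketches the basic training loop implied by this methodology: displaced candidate positions around annotated contour points are labelled with an ideal objective value (their distance to the annotation), local image features are computed there, and a regressor is trained to reproduce that value. The feature function and the tree regressor are placeholders, not the learners used in [39,40].

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor


def learn_objective(images, annotations, local_features, n_offsets=20, radius=15):
    """annotations: per-image arrays of annotated contour points (x, y);
    local_features(img, pos) computes a feature vector at an image position."""
    X, y = [], []
    rng = np.random.default_rng(0)
    for img, contour_pts in zip(images, annotations):
        for p in np.asarray(contour_pts, float):
            for _ in range(n_offsets):
                d = rng.uniform(-radius, radius, 2)   # displaced candidate position
                X.append(local_features(img, p + d))
                y.append(np.linalg.norm(d))           # ideal objective: distance to truth
    return DecisionTreeRegressor(max_depth=12).fit(np.array(X), np.array(y))
```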
3.5 The Fitting Algorithm
The fitting algorithm searches for the model that best describes the face visible in the image, by finding the model parameters that minimize the objective function. Fitting algorithms have been the subject of intensive research and evaluation,
Fig. 6. The proposed method for learning objective functions from annotated training images automates many critical decision steps
see Hanek [41] for an overview and categorization. Since we adapt the objective function rather than the fitting algorithm to the specifics of the application, our approach is able to use any of these standard methods. Due to real-time requirements, the experiments in Section 4 have been conducted with a quick hill-climbing algorithm. Carefully specifying the objective function makes this local optimization method nearly as accurate as a global optimization strategy.
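A minimal version of such a hill-climbing fitter is sketched below; the step sizes, parameter ordering and termination criterion are simplified compared with the actual system, and the function name is ours.

```python
import numpy as np


def fit_model(image, p0, objective, step=1.0, iterations=100):
    """objective(image, p) is the learned objective function; p0 the initial parameters."""
    p = np.asarray(p0, dtype=float)
    best = objective(image, p)
    for _ in range(iterations):
        improved = False
        for i in range(len(p)):
            for delta in (+step, -step):
                q = p.copy()
                q[i] += delta
                value = objective(image, q)
                if value < best:            # lower objective value = better model fit
                    p, best, improved = q, value, True
        if not improved:
            step *= 0.5                     # refine the local search
            if step < 1e-3:
                break
    return p
```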
3.6 Facial Feature Extraction
Two aspects characterize facial expressions: They turn the face into a distinctive state [27] and the muscles involved show a distinctive motion [21,25]. Our approach considers both aspects in order to infer the facial expression, by extracting structural and temporal features. This large set of features provides a fundamental basis for the subsequent classification step, which therefore achieves high accuracy. Structural Features. The deformation parameters b of the model describe the constitution of the visible face. The examples in Figure 4 illustrate their relation to the facial configuration. Therefore, b provides high-level information to the interpretation process. In contrast, the model’s transformation parameters tx , ty , s, and θ do not influence the facial configuration and are not integrated into the interpretation process. Temporal Features. Facial expressions emerge from muscle activity, which deforms the facial components involved. Therefore, the motion of particular feature points within the face is able to give evidence about the facial expression currently performed. In order to meet real-time requirements, we consider a small number of facial feature points only. The relative location of these points is derived from the structure of the face model. Note that these locations are not specified manually, because this assumes the designer is experienced in analyzing
Fig. 7. Model-based techniques greatly support the task of facial expression interpretation. The parameters of a deformable model give evidence about the currently visible state of the face. The lower section shows G = 140 facial feature points that are warped in the area of the face model, showing example motion patterns from facial expressions.
Table 1. Confusion matrix and recognition rates of our approach

                        classified as
ground truth   surprise  happiness  anger  disgust  sadness  fear   recognition rate
surprise          28         1        1       0        0       0        93.33%
happiness          1        26        1       2        3       4        70.27%
anger              1         1       14       2        2       1        66.67%
disgust            0         2        1      10        3       1        58.82%
sadness            1         2        2       2       22       1        73.33%
fear               1         5        1       0        2      13        59.09%
mean recognition rate                                                    70.25%
facial expressions. In contrast, a moderate number G of evenly distributed feature points is generated automatically, as shown in Figure 7. These points should move uniquely and predictably for any particular facial expression. The accumulated motion of each feature point i during a short time period is denoted (g_{x,i}, g_{y,i}) for the motion in x and y, 1 ≤ i ≤ G. In our proof-of-concept, the period is 2 seconds, which covers slowly expressed emotions. Furthermore, the motion of the feature points is normalized by subtracting the model's affine transformation. This separates the motion that originates from facial expressions from the rigid head motion. In order to acquire robust descriptors, Principal Component Analysis determines the H most relevant motion patterns (Principal Components) that are contained within the training sequences. A linear combination of these motion patterns describes each observation approximately. Since H ≪ 2G, this reduces the number of descriptors and enforces robustness towards outliers as well. As a compromise between accuracy and runtime performance, we set G = 140 and H = 14. Figure 7 visualizes the obtained motion of the feature points for some example facial expressions.
3.7
Facial Expression Classification
With the knowledge of the B structural and the H temporal features, we concatenate them into a feature vector t, see Equation (1). It represents the basis for facial expression classification. The structural features describe the model's configuration within the current image and the temporal features describe the muscle activity during a small amount of time.

$$\mathbf{t} = (b_1, \ldots, b_B, h_1, \ldots, h_H)^\top \qquad (1)$$
A classifier is intended to infer the correct facial expression visible in the current image from the feature vector t. 67% of the image sequences of the CKDB form the training set and the remainder the test set. The classifier used is a Binary Decision Tree [42], which is robust and efficient to learn and execute.
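As an illustration of this feature construction and classification step (a sketch under our own assumptions, not the authors' code), the following Python fragment uses scikit-learn's PCA and DecisionTreeClassifier as stand-ins for the H-component motion PCA and the C4.5 decision tree [42]; the random arrays and the value chosen for B are placeholders for the real structural and motion features.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

# Placeholder inputs (names and values are ours, not from the paper):
#   b_train : (n_samples, B)      structural features (model deformation parameters)
#   g_train : (n_samples, 2 * G)  accumulated (x, y) motions of the G feature points
#   y_train : (n_samples,)        facial expression labels (6 classes)
rng = np.random.default_rng(0)
n_samples, B, G, H = 300, 17, 140, 14
b_train = rng.standard_normal((n_samples, B))
g_train = rng.standard_normal((n_samples, 2 * G))
y_train = rng.integers(0, 6, n_samples)

# Temporal features: project the 2G motion descriptors onto the H principal components
pca = PCA(n_components=H).fit(g_train)
h_train = pca.transform(g_train)

# Feature vector t = (b_1, ..., b_B, h_1, ..., h_H)^T, as in Eq. (1)
t_train = np.hstack([b_train, h_train])

# Classification with a decision tree (CART here; the paper cites C4.5 [42])
clf = DecisionTreeClassifier().fit(t_train, y_train)
print(clf.predict(t_train[:5]))
```

In practice the random arrays would be replaced by the fitted model parameters and the tracked feature-point motions of the training sequences.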
4
Experimental Evaluation
We evaluate the accuracy of our approach by applying it to the unseen fraction of the CKDB. Table 1 shows the recognition rate and the confusion matrix of
Table 2. Recognition rate of our system compared to state-of-the-art approaches

facial expression                   Anger   Disgust  Fear    Happiness  Sadness  Surprise  Average
Results of our approach             66.7%   64.1%    59.1%   70.3%      73.3%    93.3%     71.1%
Results of Michel et al. [21]       66.7%   58.8%    66.7%   91.7%      62.5%    83.3%     71.8%
Results of Schweiger et al. [25]    75.6%   30.0%     0.0%   79.2%      60.5%    89.8%     55.9%
each facial expression. The results are comparable to state of the art approaches, shown in Table 2. The facial expressions happiness and fear are confused most often. This results from similar muscle activity around the mouth, which is also indicated by the sets of AUs in FACS for these two emotions. The accuracy of our approach is comparable to that of Schweiger et al. [25], also evaluated on the CKDB. For classification, they favor motion from different facial parts and determine Principal Components from these features. However, Schweiger et al. manually specify the region of the visible face whereas our approach performs an automatic localization via model-based image interpretation. Michel et al. [21] also focus on facial motion by manually specifying 22 feature points predominantly located around the mouth and eyes. They utilize a Support Vector Machine (SVM) for determining one of six facial expressions. 4.1
Evaluation in a Robot Scenario
The method was also evaluated in a real robot scenario, using a B21r robot, shown in Figure 8 [43,44]. The robot's LCD panel shows a robot face, which is able to make facial expressions and moves its lips in synchronization with the robot's voice. The goal of this work is to provide a robot assistant that is able to express feelings and respond to human feelings. The robot's 1/3 in Sony cameras are just beneath the face LCD, and one is used to capture images of the human in front of the robot. Since the embedded robot computer runs Linux and our model-based facial expression recognizer runs under Windows, a file transfer protocol server provided images across a wireless network to a desktop Windows PC. The robot evaluation scenario was controlled to ensure good lighting conditions and that the subject's face was facing the camera and formed a reasonably sized part of the image (at a distance of approximately 1 m). 120 readings of three facial expressions were recorded and interpreted. Figure 8 shows the results, which are very similar to the bench test results above. The model-based FER was able to process images at 12 frames per second, although for the test the speed was reduced to 1 frame per second, in order to eliminate confusion when the human subject changed.
                     classified as
ground truth    neutral  happiness  surprise   recognition rate
neutral            85       11         19           71%
happiness          18       83         25           69%
surprise           17       26         76           63%
mean recognition rate                                67%
Fig. 8. Set up of our B21 robot and the confusion matrix and recognition rates in a robot scenario
5
Summary and Conclusion
To be effective in the human world, robots must respond to human emotional states. This paper focuses on the recognition of the six universal human facial expressions. Our approach imposes lower computational requirements by specializing model-based techniques to the face scenario. Adaptive skin color extraction provides accurate low-level information. Both the face and the facial components are localized robustly. Specifically learned objective functions enable accurate fitting of a deformable face model. Experimental evaluation reports a recognition rate of 70% on the Cohn–Kanade facial expression database and 67% in a robot scenario, which compare well with other FER systems. The technique provides a prototype suitable for FER by robots.
References 1. Essa, I.A., Pentland, A.P.: Coding, analysis, interpretation, and recognition of facial expressions. IEEE Trans. Pattern Anal. Mach. Intell. 19, 757–763 (1997) 2. Edwards, G.J., Cootes, T.F., Taylor, C.J.: Face Recognition Using Active Appearance Models. In: Burkhardt, H., Neumann, B. (eds.) ECCV 1998. LNCS, vol. 1406– 1607, pp. 581–595. Springer, Heidelberg (1998) 3. Pantic, M., Rothkrantz, L.J.M.: Automatic analysis of facial expressions: The state of the art. IEEE TPAMI 22, 1424–1445 (2000) 4. Chibelushi, C.C., Bourel, F.: Facial expression recognition: A brief tutorial overview. In: CVonline: On-Line Compendium of Computer Vision (2003) 5. Wimmer, M., Zucker, U., Radig, B.: Human capabilities on video-based facial expression recognition. In: Proc. of the 2nd Workshop on Emotion and Computing – Current Research and Future Impact, Oldenburg, Germany, pp. 7–10 (2007) 6. Schuller, B., et al.: Audiovisual behavior modeling by combined feature spaces. In: ICASSP, vol. 2, pp. 733–736 (2007) 7. Fischer, S., et al.: Experiences with an emotional sales agent. In: Andr´e, E., et al. (eds.) ADS 2004. LNCS (LNAI), vol. 3068, pp. 309–312. Springer, Heidelberg (2004)
8. Tischler, M.A., et al.: Application of emotion recognition methods in automotive research. In: Proceedings of the 2nd Workshop on Emotion and Computing – Current Research and Future Impact, Oldenburg, Germany, pp. 50–55 (2007) 9. Breazeal, C.: Function meets style: Insights from emotion theory applied to HRI. 34, 187–194 (2004) 10. Rani, P., Sarkar, N.: Emotion-sensitive robots – a new paradigm for human-robot interaction. In: 4th IEEE/RAS International Conference on Humanoid Robots, vol. 1, pp. 149–167 (2004) 11. Scheutz, M., Schermerhorn, P., Kramer, J.: The utility of affect expression in natural language interactions in joint human-robot tasks. In: 1st ACM SIGCHI/SIGART conference on Human-robot interaction, pp. 226–233. ACM Press, New York (2006) 12. Picard, R.: Toward agents that recognize emotion. In: Imagina 1998 (1998) 13. Bartlett, M.S., et al.: Measuring facial expressions by computer image analysis. Psychophysiology 36, 253–263 (1999) 14. Ikehara, C.S., Chin, D.N., Crosby, M.E.: A model for integrating an adaptive information filter utilizing biosensor data to assess cognitive load. In: Brusilovsky, P., Corbett, A.T., de Rosis, F. (eds.) UM 2003. LNCS, vol. 2702, pp. 208–212. Springer, Heidelberg (2003) 15. Vick, R.M., Ikehara, C.S.: Methodological issues of real time data acquisition from multiple sources of physiological data. In: Proc. of the 36th Annual Hawaii International Conference on System Sciences - Track 5, p. 129.1. IEEE Computer Society, Los Alamitos (2003) 16. Sheldon, E.M.: Virtual agent interactions. PhD thesis, Major Professor-Linda Malone (2001) 17. Kanade, T., Cohn, J.F., Tian, Y.: Comprehensive database for facial expression analysis. In: International Conference on Automatic Face and Gesture Recognition, France, pp. 46–53 (2000) 18. Ekman, P.: Universals and cultural differences in facial expressions of emotion. In: Nebraska Symposium on Motivation 1971, vol. 19, pp. 207–283. University of Nebraska Press, Lincoln, NE (1972) 19. Ekman, P.: Facial expressions. In: Handbook of Cognition and Emotion, John Wiley & Sons Ltd., New York (1999) 20. Friesen, W.V., Ekman, P.: Emotional Facial Action Coding System. Unpublished manuscript, University of California at San Francisco (1983) 21. Michel, P., Kaliouby, R.E.: Real time facial expression recognition in video using support vector machines. In: Fifth International Conference on Multimodal Interfaces, Vancouver, pp. 258–264 (2003) 22. Cohn, J., et al.: Automated face analysis by feature point tracking has high concurrent validity with manual facs coding. Psychophysiology 36, 35–43 (1999) 23. Sebe, N., et al.: Emotion recognition using a cauchy naive bayes classifier. In: Proc. of the 16th International Conference on Pattern Recognition (ICPR 2002)., vol. 1, p. 10017. IEEE Computer Society, Los Alamitos (2002) 24. Cohen, I., et al.: Facial expression recognition from video sequences: Temporal and static modeling. CVIU special issue on face recognition 91(1-2), 160–187 (2003) 25. Schweiger, R., Bayerl, P., Neumann, H.: Neural architecture for temporal emotion classification. In: Andr´e, E., et al. (eds.) ADS 2004. LNCS (LNAI), vol. 3068, pp. 49–52. Springer, Heidelberg (2004) 26. Tian, Y.L., Kanade, T., Cohn, J.F.: Recognizing action units for facial expression analysis. IEEE TPAMI 23, 97–115 (2001)
27. Littlewort, G., et al.: Fully automatic coding of basic expressions from video. Technical report, University of California, San Diego, INC., MPLab (2002) 28. Ekman, P., Friesen, W. (eds.): The Facial Action Coding System: A Technique for The Measurement of Facial Movement. Consulting Psychologists Press, San Francisco (1978) 29. Bartlett, M., et al.: Recognizing facial expression: machine learning and application to spontaneous behavior. In: CVPR 2005. IEEE Computer Society Conference, vol. 2, pp. 568–573 (2005) 30. Kim, D.H., et al.: Development of a facial expression imitation system. In: International Conference on Intelligent Robots and Systems, Beijing, pp. 3107–3112 (2006) 31. Yoshitomi, Y., et al.: Effect of sensor fusion for recognition of emotional states using voice, face image and thermal image of face. In: Proc. of the 9th IEEE International Workshop on Robot and Human Interactive Communication, Osaka, Japan, pp. 178–183 (2000) 32. Otsuka, T., Ohya, J.: Recognition of facial expressions using hmm with continuous output probabilities. In: 5th IEEE International Workshop on Robot and Human Communication, Tsukuba, Japan, pp. 323–328 (1996) 33. Zhou, X., et al.: Real-time facial expression recognition based on boosted embedded hidden markov model. In: Proceedings of the Third International Conference on Image and Graphics, Hong Kong, pp. 290–293 (2004) 34. Wimmer, M.: Model-based Image Interpretation with Application to Facial Expression Recognition. PhD thesis, Technische Universitat M¨ unchen, Fakult¨ at f¨ ur Informatik (2007) 35. Cootes, T.F., Taylor, C.J.: Active shape models – smart snakes. In: Proc. of the 3rd BMVC, pp. 266–275. Springer, Heidelberg (1992) 36. Cootes, T.F., et al.: The use of active shape models for locating structures in medical images. In: IPMI, pp. 33–47 (1993) 37. Wimmer, M., Radig, B., Beetz, M.: A person and context specific approach for skin color classification. In: Proc. of the 18th ICPR 2006, vol. 2, pp. 39–42. IEEE Computer Society Press, Los Alamitos, CA, USA (2006) 38. Viola, P., Jones, M.: Rapid object detection using a boosted cascade of simple features. In: CVPR, Kauai, Hawaii, vol. 1, pp. 511–518 (2001) 39. Wimmer, M., et al.: Learning local objective functions for robust face model fitting. In: IEEE TPAMI (to appear, 2007) 40. Wimmer, M., et al.: Learning robust objective functions with application to face model fitting. In: DAGM Symp., vol. 1, pp. 486–496 (2007) 41. Hanek, R.: Fitting Parametric Curve Models to Images Using Local Self-adapting Separation Criteria. PhD thesis, Department of Informatics, Technische Universit¨ at M¨ unchen (2004) 42. Quinlan, R.: C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, California (1993) 43. Yadav, A.: Human facial expression recognition. Technical report, Electrical and Computer Engineering, University of Auckland, New Zealand (2006) 44. Jayamuni, S.D.J.: Human facial expression recognition system. Technical report, Electrical and Computer Engineering, University of Auckland, New Zealand (2006)
Iterative Low Complexity Factorization for Projective Reconstruction Hanno Ackermann and Kenichi Kanatani Department of Computer Science Okayama University, 700-8530 Okayama, Japan {hanno,kanatani}@suri.it.okayama-u.ac.jp
Abstract. We present a highly efficient method for estimating the structure and motion from image sequences taken by uncalibrated cameras. The basic principle is to do projective reconstruction first followed by Euclidean upgrading. However, the projective reconstruction step dominates the total computational time, because we need to solve eigenproblems of matrices whose size depends on the number of frames or feature points. In this paper, we present a new algorithm that yields the same solution using only matrices of constant size irrespective of the number of frames or points. We demonstrate the superior performance of our algorithm, using synthetic and real video images.
1
Introduction
Various methods are known for recovering the structure and motion of a rigid body from image sequences taken by uncalibrated cameras [4]. A popular approach is to reconstruct the structure up to projectivity first and then to modify it into a correct Euclidean shape. The first stage is called projective reconstruction, the second Euclidean upgrading. Some algorithms recover the projective structure by utilizing so-called multilinear constraints, which relate corresponding points in different images. For two images, this constraint is expressed by the fundamental matrix. For three and four views, the trifocal and quadrifocal tensors relate corresponding points, respectively. The use of multilinear constraints means, however, that particular pairs, triplets, or quadruples of images are chosen. For affine reconstruction, on the other hand, the popular Tomasi-Kanade factorization [11] handles all available data in a uniform way. Following this idea, Sturm and Triggs [9] extended the factorization formalism to perspective projection. However, their algorithm still relies on pair-wise epipolar constraints. Later, Heyden et al. [5] and Mahamud and Hebert [7] introduced new factorization algorithms for projective reconstruction. They utilized the fact that the joint coordinates of rigidly moving 3-D points are constrained to be in a 4-dimensional subspace. They estimated the unknown projective depths by minimizing the distances of these joint coordinates to the subspace, alternating the subspace fitting and the projective depth optimization. These algorithms are summarized in [6].
However, it has been recognized that the projective depth optimization requires a long computation time due to the necessity to iteratively solve eigenproblems of matrices whose size depends on the number of frames or the number of points. Recently, Ackermann and Kanatani [1] proposed a simple scheme for accelerating the projective depth optimization, replacing the eigenvalue computation with the power method coupled with further acceleration techniques. They demonstrated that the computation time can be reduced significantly. This work further improves their method. We show that the projective depth optimization can be done by solving eigenproblems of matrices of constant size irrespective of the number of frames or points. In Section 2, we analyze two projective reconstruction algorithms, geometrically dual to each other [1]: one is due to Mahamud and Hebert [7]; the other due to Heyden et al. [5]. In Section 3, we show how the complexity of these algorithms can be reduced. We also point out that the use of SVD for subspace fitting is not necessary at each iteration. Using synthetic and real video images, we demonstrate in Section 4 that our algorithm outperforms the method of Ackermann and Kanatani [1]. Section 5 concludes this paper.
2 Projective Structure and Motion
2.1 Rank Constraint
Suppose N rigidly moving points are tracked through M consecutive frames taken by an uncalibrated camera (possibly with varying intrinsic parameters). The perspective projection of the αth point onto the κth frame can be modeled by

$$z_{\kappa\alpha}\mathbf{x}_{\kappa\alpha} = \Pi_\kappa \mathbf{X}_\alpha, \qquad \mathbf{x}_{\kappa\alpha} = \begin{pmatrix} u_{\kappa\alpha}/f_0 \\ v_{\kappa\alpha}/f_0 \\ 1 \end{pmatrix}, \qquad (1)$$

where (u_κα, v_κα) denote the measured point positions, and f_0 is a scale factor for numerical stabilization [3]. Here, z_κα are scalars called projective depths, Π_κ are 3 × 4 projection (or camera) matrices consisting of the intrinsic and extrinsic parameters of the κth frame, and X_α are the 3-dimensional positions of the αth point in homogeneous coordinates. If we define the 3M × N matrix W, the 3M × 4 matrix Π, and the 4 × N matrix X by

$$W = \begin{pmatrix} z_{11}\mathbf{x}_{11} & \cdots & z_{1N}\mathbf{x}_{1N} \\ \vdots & \ddots & \vdots \\ z_{M1}\mathbf{x}_{M1} & \cdots & z_{MN}\mathbf{x}_{MN} \end{pmatrix}, \quad \Pi = \begin{pmatrix} \Pi_1 \\ \vdots \\ \Pi_M \end{pmatrix}, \quad X = \begin{pmatrix} \mathbf{X}_1 & \cdots & \mathbf{X}_N \end{pmatrix}, \qquad (2)$$

the first of Eqs. (1) for κ = 1, ..., M and α = 1, ..., N is written simply as

$$W = \Pi X. \qquad (3)$$

Since Π is 3M × 4 and X is 4 × N, the matrix W generally has rank 4.
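As a quick illustrative check of this rank property (not part of the paper), the following NumPy snippet builds W from random projection matrices and homogeneous points, choosing the projective depths so that the third component of each x_κα is 1, and confirms that W has rank 4. All names and values are our own.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 20                       # frames and points (illustrative values)

# Random 3x4 projection matrices and homogeneous 3-D points
Pi = rng.standard_normal((M, 3, 4))
X = np.vstack([rng.standard_normal((3, N)), np.ones((1, N))])   # 4 x N

W = np.zeros((3 * M, N))
for k in range(M):
    p = Pi[k] @ X                  # 3 x N, equals z_ka * x_ka for each point
    z = p[2]                       # projective depths z_ka
    x = p / z                      # image vectors with third component 1
    W[3 * k:3 * k + 3] = z * x     # stack z_ka * x_ka = Pi_k X_a

print(np.linalg.matrix_rank(W))    # 4, as predicted by W = Pi X
```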
Let w_α be the αth column of W. The absolute scale of the projective depths is indeterminate, because multiplying z_κα, κ = 1, ..., M, by a constant c is equivalent to multiplying the homogeneous coordinate vector X_α, whose scale is indeterminate, by c. Hence, we can normalize each w_α to ||w_α|| = 1. Now, we rewrite w_α as follows:

$$\mathbf{w}_\alpha = D_\alpha \boldsymbol{\xi}_\alpha, \qquad (4)$$

$$D_\alpha \equiv \begin{pmatrix} \mathbf{x}_{1\alpha}/\|\mathbf{x}_{1\alpha}\| & & 0 \\ & \ddots & \\ 0 & & \mathbf{x}_{M\alpha}/\|\mathbf{x}_{M\alpha}\| \end{pmatrix}, \qquad \boldsymbol{\xi}_\alpha \equiv \begin{pmatrix} z_{1\alpha}\|\mathbf{x}_{1\alpha}\| \\ \vdots \\ z_{M\alpha}\|\mathbf{x}_{M\alpha}\| \end{pmatrix}. \qquad (5)$$

We see that ξ_α is a unit vector:

$$\|\boldsymbol{\xi}_\alpha\|^2 = z_{1\alpha}^2\|\mathbf{x}_{1\alpha}\|^2 + \cdots + z_{M\alpha}^2\|\mathbf{x}_{M\alpha}\|^2 = \|\mathbf{w}_\alpha\|^2 = 1. \qquad (6)$$

Alternatively, let w_κ^i be the (3(κ−1)+i)th row of W for i = 1, 2, 3. This time, we adopt the normalization ||w_κ^1||² + ||w_κ^2||² + ||w_κ^3||² = 1 for each κ. We rewrite w_κ^i as

$$\mathbf{w}_\kappa^i = D_\kappa^i \boldsymbol{\xi}_\kappa, \qquad (7)$$

$$D_\kappa^i \equiv \begin{pmatrix} x_{\kappa 1}^i/\|\mathbf{x}_{\kappa 1}\| & & 0 \\ & \ddots & \\ 0 & & x_{\kappa N}^i/\|\mathbf{x}_{\kappa N}\| \end{pmatrix}, \qquad \boldsymbol{\xi}_\kappa \equiv \begin{pmatrix} z_{\kappa 1}\|\mathbf{x}_{\kappa 1}\| \\ \vdots \\ z_{\kappa N}\|\mathbf{x}_{\kappa N}\| \end{pmatrix}, \qquad (8)$$

where x_κα^i denotes the ith component of x_κα. We see that ξ_κ is a unit vector:

$$\|\boldsymbol{\xi}_\kappa\|^2 = z_{\kappa 1}^2\|\mathbf{x}_{\kappa 1}\|^2 + \cdots + z_{\kappa N}^2\|\mathbf{x}_{\kappa N}\|^2 = \|\mathbf{w}_\kappa^1\|^2 + \|\mathbf{w}_\kappa^2\|^2 + \|\mathbf{w}_\kappa^3\|^2 = 1. \qquad (9)$$
In the following, we consider two methods. In one, we regard each column w_α of W as a point in 3M dimensions; in the other, each triplet of rows {w_κ^1, w_κ^2, w_κ^3} of W is regarded as a point in 3N dimensions. The first, which we call the primal method, uses Eq. (4) for computing W; the second, which we call the dual method, uses Eq. (7). Both methods utilize the rank constraint on W.
2.2
Primal Method
Ideally, i.e., if the projective depths z_κα are correctly chosen, and if there is no noise in the data x_κα, the matrix W should have rank 4. Hence, all columns w_α of W should be constrained to be in a 4-dimensional subspace L. Let U be the 3M × 4 matrix consisting of its orthonormal basis vectors as its columns. Then, we can identify U with Π and X with U^⊤W, which ensures W = ΠX due to the orthonormality of U. We define the reprojection error by

$$\epsilon = f_0\sqrt{\frac{1}{MN}\sum_{\kappa=1}^{M}\sum_{\alpha=1}^{N}\left\|\mathbf{x}_{\kappa\alpha} - Z[\Pi_\kappa\mathbf{X}_\alpha]\right\|^2}, \qquad (10)$$
where Z[·] denotes normalization to make the third component 1. This reprojection error should ideally be 0, but if the projective depths z_κα are not correct or there is noise in the data x_κα, it may not exactly be zero. For starting the algorithm, the unknown projective depths z_κα are initialized to 1, which amounts to assuming affine cameras. Then, we estimate the subspace L by least squares. Namely, we regard the four eigenvectors of W W^⊤ for the largest four eigenvalues, or equivalently the four left singular vectors of W for the largest four singular values, as an orthonormal basis of L. Since the assumed z_κα may not be correct, the columns w_α of W may not exactly be in the subspace L. So, we update z_κα so as to minimize the distance of each w_α from L. Since w_α is a unit vector, the solution is obtained by maximizing its orthogonal projection onto L, namely maximizing

$$\|U^\top\mathbf{w}_\alpha\|^2 = \|U^\top D_\alpha\boldsymbol{\xi}_\alpha\|^2 = \boldsymbol{\xi}_\alpha^\top\left(D_\alpha^\top U U^\top D_\alpha\right)\boldsymbol{\xi}_\alpha, \qquad (11)$$

subject to ||ξ_α|| = 1. The solution is given by the unit eigenvector of D_α^⊤ U U^⊤ D_α for the largest eigenvalue. Since the sign of the eigenvector ξ_α is indeterminate, we choose the one for which the sum of its components is non-negative. Next, we update each column w_α of W by Eq. (4) and identify Π with U and X with U^⊤ W. From the updated W, we compute the matrix U for the new basis of L. Then, we update ξ_α by maximizing Eq. (11) and repeat this procedure until the reprojection error in Eq. (10) becomes smaller than some pre-defined threshold ε_min. This process is a kind of EM algorithm, so the convergence is guaranteed.
2.3
Dual Method
Since the matrix W should ideally have rank 4, all its rows w_κ^i should also be constrained to be in a 4-dimensional subspace L′. Let V be the N × 4 matrix consisting of its orthonormal basis vectors as its columns. Then, we can identify X with V^⊤ and Π with W V, which ensures W = ΠX due to the orthonormality of V. For starting the algorithm, the unknown projective depths z_κα are again initialized to 1. Then, we estimate the subspace L′ by least squares. Namely, we regard the four eigenvectors of W^⊤ W for the largest four eigenvalues, or equivalently the four right singular vectors of W for the largest four singular values, as an orthonormal basis of L′. Since the assumed z_κα may not be correct, the rows w_κ^i of W may not exactly be in the subspace L′. So, we update z_κα so as to minimize the sum of the square distances of the triplet w_κ^i, i = 1, 2, 3, from L′. Since the sum of squares of w_κ^i, i = 1, 2, 3, is normalized to 1, the solution is obtained by maximizing the sum of the squares of their orthogonal projections onto L′,

$$\sum_{i=1}^{3}\|V^\top\mathbf{w}_\kappa^i\|^2 = \sum_{i=1}^{3}\|V^\top D_\kappa^i\boldsymbol{\xi}_\kappa\|^2 = \boldsymbol{\xi}_\kappa^\top\left(\sum_{i=1}^{3}D_\kappa^{i\top} V V^\top D_\kappa^i\right)\boldsymbol{\xi}_\kappa, \qquad (12)$$
subject to ||ξ_κ|| = 1. The solution is given by the unit eigenvector of $\sum_{i=1}^{3} D_\kappa^{i\top} V V^\top D_\kappa^i$ for the largest eigenvalue. Again, we choose the sign of ξ_κ in such a way that the sum of its components is non-negative. Next, we update each row w_κ^i of W by Eq. (7) and identify X with V^⊤ and Π with W V. From the updated W, we compute the matrix V for the new basis of L′. Then, we update ξ_κ by maximizing Eq. (12) and repeat this procedure until the reprojection error in Eq. (10) becomes smaller than some pre-defined threshold ε_min. As in the case of the primal method, this process is guaranteed to converge.
3
Complexity Analysis and Efficiency Improvement
3.1
Subspace Fitting
The basis of the subspace L for the primal method is obtained by computing the eigenvectors of W W^⊤, and the basis of the subspace L′ for the dual method is obtained by computing the eigenvectors of W^⊤ W. Alternatively, they are obtained by computing the singular vectors of W. We first analyze the complexity of this subspace fitting computation. For the primal method, computing W W^⊤ requires (3M)²N operations, where we regard a multiplication followed by an addition/subtraction as one operation (we disregard the fact that, for computing a sum of products, the number of additions/subtractions is smaller than the number of multiplications by one). Its eigenvalue computation takes approximately (3M)³ operations, so we need about (3M)²(3M + N) operations in total. For the dual method, computing W^⊤ W requires 3MN² operations, and its eigenvalue computation takes approximately N³ operations. So, we need about N²(3M + N) operations in total. The singular value decomposition (SVD) of W, on the other hand, requires about (3M)²N or 3MN² operations, depending on whether 3M is smaller or larger than N. In whichever case, SVD is obviously more efficient. This subspace fitting is repeated each time the projective depths are updated. However, we can expect that the basis computed in each step does not differ much from the preceding one. Hence, we can save the SVD computation time if the new basis can be updated by a small number of operations. For this, we use the power method [1]: the basis vectors are multiplied by W W^⊤ (primal) or W^⊤ W (dual), followed by Gram-Schmidt orthogonalization [2]. As pointed out earlier, computing W W^⊤ and W^⊤ W requires (3M)²N and 3MN² operations, respectively. Multiplying a 3M-dimensional vector by W W^⊤ costs (3M)² operations, and multiplying an N-dimensional vector by W^⊤ W costs N² operations. Hence, the complexity of the power method is (3M)²N + 4k(3M)² and 3MN² + 4kN² for the primal and dual methods, respectively, where k is the number of iterations for the power method to converge. Alternatively, multiplication by W W^⊤ can be replaced by multiplication by W^⊤ (3MN operations) followed by multiplication by W (3MN operations). Similarly, multiplication by W^⊤ W can be replaced by multiplication by W (3MN operations) followed by multiplication by W^⊤ (3MN operations). Hence, this alternative is advantageous if

$$(3M)^2 N + 4k(3M)^2 \ge 24kMN, \quad\text{or}\quad M \ge \frac{8k}{3 + 12k/N}, \qquad (13)$$

$$3MN^2 + 4kN^2 \ge 24kMN, \quad\text{or}\quad N \ge \frac{24k}{3 + 4k/M}, \qquad (14)$$
for the primal and dual methods, respectively. Equation (13) is satisfied for M ≥ 3k, and Eq. (14) is satisfied for N ≥ 8k. According to our experiments, the power method, whose convergence is theoretically guaranteed [2], almost always converges after one or two iterations to sufficient precision for both the primal and the dual methods. So, we adopt this alternative scheme. 3.2
Projective Depth Optimization
For computing the projective depths z_κα, we need to compute the dominant eigenvector (i.e., the eigenvector for the largest eigenvalue) of an M × M matrix for the primal method and of an N × N matrix for the dual method. Since solving an eigenproblem has approximately cubic complexity, the projective depth optimization dominates the entire projective reconstruction computation when the number N of points or the number M of frames is large. To resolve this problem, Ackermann and Kanatani [1] introduced a special variant of the power method for eigenvalue computation and demonstrated that this can reduce the computation time significantly. We now introduce a method that results in even better performance. For the primal method, we need to compute the dominant eigenvector ξ_α of the M × M matrix A_α ≡ D_α^⊤ U U^⊤ D_α (see Eq. (11)), which can be written as

$$A_\alpha = (U^\top D_\alpha)^\top(U^\top D_\alpha) = C_\alpha^\top C_\alpha, \qquad C_\alpha \equiv U^\top D_\alpha, \qquad (15)$$

where C_α is a 4 × M matrix. However, ξ_α can be obtained from the dominant eigenvector η_α of the 4 × 4 matrix

$$\tilde{A}_\alpha = C_\alpha C_\alpha^\top, \qquad (16)$$

in the form of ξ_α = C_α^⊤ η_α. In fact, if C_α C_α^⊤ η_α = λ η_α, we have C_α^⊤ C_α C_α^⊤ η_α = λ C_α^⊤ η_α, or A_α ξ_α = λ ξ_α. Once C_α is evaluated, computing A_α costs 4M² additional operations. Its eigenproblem has a complexity of M³. Computing Ã_α costs 16M additional operations, and its eigenproblem has a constant complexity of 64. Hence, it is advantageous to solve the eigenproblem of Ã_α if 4M² + M³ ≥ 16M + 64, or M ≥ 4. Theoretically, we may be able to accelerate the dominant eigenvector computation of Ã_α by the power method, but it has no practical merit; the standard tool is sufficiently fast for such a small size.
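The following NumPy fragment is a small sketch of this reduction for the primal case (ours, not the authors'): the dominant eigenvector of the large M × M matrix C^⊤C is recovered from the dominant eigenvector of the 4 × 4 matrix CC^⊤; the random matrix C stands in for U^⊤D_α.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 500
C = rng.standard_normal((4, M))          # plays the role of C_alpha = U^T D_alpha

# Direct route: dominant eigenvector of the large M x M matrix A = C^T C
w, V = np.linalg.eigh(C.T @ C)           # eigenvalues in ascending order
xi_direct = V[:, -1]

# Reduced route: dominant eigenvector eta of the 4 x 4 matrix A~ = C C^T,
# then xi = C^T eta, normalized to unit length
w_s, V_s = np.linalg.eigh(C @ C.T)
xi_reduced = C.T @ V_s[:, -1]
xi_reduced /= np.linalg.norm(xi_reduced)

# The two routes give the same eigenvector up to sign
print(np.isclose(abs(xi_direct @ xi_reduced), 1.0))
```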
For the dual method, we need to compute the dominant eigenvector ξ_κ of the N × N matrix B_κ ≡ $\sum_{i=1}^{3} D_\kappa^{i\top} V V^\top D_\kappa^i$ (see Eq. (12)), which can be written as

$$B_\kappa = \begin{pmatrix} D_\kappa^{1\top}V & D_\kappa^{2\top}V & D_\kappa^{3\top}V \end{pmatrix}\begin{pmatrix} V^\top D_\kappa^1 \\ V^\top D_\kappa^2 \\ V^\top D_\kappa^3 \end{pmatrix} = C_\kappa^\top C_\kappa, \qquad C_\kappa \equiv \begin{pmatrix} V^\top D_\kappa^1 \\ V^\top D_\kappa^2 \\ V^\top D_\kappa^3 \end{pmatrix}, \qquad (17)$$

where C_κ is a 12 × N matrix. As in the case of the primal method, however, ξ_κ can be obtained from the dominant eigenvector η_κ of the 12 × 12 matrix

$$\tilde{B}_\kappa = C_\kappa C_\kappa^\top, \qquad (18)$$

in the form of ξ_κ = C_κ^⊤ η_κ. Once C_κ is evaluated, computing B_κ costs 12N² additional operations. Its eigenproblem has a complexity of N³. Computing B̃_κ costs 144N additional operations, and its eigenproblem has a constant complexity of 1728. Hence, it is advantageous to solve the eigenproblem of B̃_κ if 12N² + N³ ≥ 144N + 1728, or N ≥ 12. As in the case of the primal method, we need not consider further speedup; the standard tool is sufficient. In the above analysis, we disregarded the fact that the M × M matrix A_α and the N × N matrix B_κ are symmetric and hence only their upper-triangular and diagonal elements need to be evaluated. This reduces the evaluation cost to nearly 50%, yet the advantage of dealing with the 4 × 4 matrix Ã_α and the 12 × 12 matrix B̃_κ is unchanged. The actual procedure of the above computation is summarized in Algorithms 1 and 2. In Algorithms 1 and 2, the reprojection error (Step 6) is evaluated at each iteration step, but this can be omitted; we can simply stop when all values cease to change by setting an appropriate threshold. This can further speed up the computation, since the reprojection error evaluation takes time proportional to the number of points and the number of frames. This also avoids setting too low a reprojection error threshold ε_min, which may never be reached. The reprojection error evaluation in Step 6 is solely for comparing the convergence performance for a common threshold, which is the theme of this paper.
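Putting the two ingredients of this section together, a compact NumPy sketch of one round of the primal scheme might look as follows; the array layout n for the normalized direction vectors, the function name, and the omission of the reprojection-error test are our own simplifications, not the authors' implementation.

```python
import numpy as np

def primal_iteration(W, U, n):
    """One round of the primal scheme (illustrative sketch).

    W : 3M x N measurement matrix with unit-norm columns
    U : 3M x 4 orthonormal basis of the current subspace L
    n : M x 3 x N array, n[k, :, a] = x_{k,a} / ||x_{k,a}||
    """
    M, _, N = n.shape
    for a in range(N):
        # D_alpha has n[k, :, a] in rows 3k..3k+2 of column k, so C_alpha = U^T D_alpha is 4 x M
        C = np.stack([U[3*k:3*k+3].T @ n[k, :, a] for k in range(M)], axis=1)
        _, H = np.linalg.eigh(C @ C.T)          # 4 x 4 eigenproblem only
        xi = C.T @ H[:, -1]                     # xi_alpha = C_alpha^T eta_alpha
        xi /= np.linalg.norm(xi)
        if xi.sum() < 0:                        # fix the sign ambiguity
            xi = -xi
        # w_alpha = D_alpha xi_alpha: block k is xi[k] * n[k, :, a]; it is already unit norm
        W[:, a] = (n[:, :, a] * xi[:, None]).reshape(-1)
    # Subspace update by one power-method step, W W^T applied via two thin products,
    # followed by re-orthonormalization (QR performs the Gram-Schmidt step)
    U, _ = np.linalg.qr(W @ (W.T @ U))
    return W, U
```

In a full implementation this function would be called repeatedly until the reprojection error of Eq. (10) falls below ε_min, as in Algorithm 1 below.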
4 Experimental Results
4.1 Synthetic Example
We now experimentally evaluate the efficiency of our algorithm. Figure 1 shows six out of 256 frames of a synthetic image sequence. The object consists of 256 points inside a cuboid. They are perspectively projected onto 512 × 512 frames with focal length f = 600. Using this motion sequence, we compare our algorithm with the original form [6], which we call the prototype, and the method of Ackermann and Kanatani [1], which we call the power method for short. For our algorithm, we set the reprojection error threshold to ε_min = 0.1 (pixels) and
Algorithm 1. Fast Projective Reconstruction: Primal Method
1: Input: Data vectors x_κα, their norms ||x_κα||, the normalized vectors N[x_κα] = x_κα/||x_κα|| for κ = 1, ..., M, α = 1, ..., N, the minimal reprojection error ε_min, and the threshold δ for terminating the subspace update.
2: Output: Projection matrices Π_κ and 3-D points X_α.
3: Initialize the projective depths to z_κα = 1.
4: Compute the matrix W by stacking the products z_κα x_κα into the vectors w_α and normalizing them to unit length.
5: Compute the matrix U of the subspace L consisting of the first four left singular vectors u_1, u_2, u_3, u_4 of W.
6: while ε (in Eq. (10)) > ε_min do
7:   for α = 1 to N do
8:     Compute the matrix C_α = U^⊤ D_α.
9:     Compute the matrix Ã_α.
10:    Compute the dominant eigenvector η_α of Ã_α.
11:    Compute ξ_α = N[C_α^⊤ η_α].
12:    if Σ_{i=1}^{M} ξ_{αi} < 0 then
13:      ξ_α = −ξ_α.
14:    end if
15:  end for
16:  Update the matrix W.
17:  Compute the projection matrices Π_κ given by the row-triplets of Π = U.
18:  Compute the 3-D positions X_α given by the columns of X = Π^⊤ W.
19:  repeat
20:    Assign u_i^0 ← u_i, i = 1, ..., 4.
21:    Compute ũ_i = W(W^⊤ u_i^0), i = 1, ..., 4.
22:    Let u_i, i = 1, ..., 4, be their Gram-Schmidt orthonormalization.
23:  until max_{i=1,...,4} (1 − Σ_{l=1}^{4} (u_i^⊤ u_l^0)²) < 10^{−δ}
24: end while
the subspace fitting threshold to δ = 1. For the power method, we use the same parameters as in [1]. The computation time for the primal method is approximately linear in the number N of points, because the same computation is repeated for each point, while the optimization of each point is generally nonlinear in the number M of frames because the eigenvalue computation is involved. The computation time for the dual method, on the other hand, is approximately linear in M but generally nonlinear in N, because the roles of points and frames are interchanged. So, we created two series of test sequences. For the primal method, we fixed the number of points to N = 256 and varied M from 32 to 512 by inserting intermediate frames (the first and last frames are the same as in Fig. 1). For the dual method, the number of frames was fixed to M = 256, and N was varied from 32 to 512 by randomly adding new points. We used an Intel Core2Duo 1.8 GHz CPU with 1 GB main memory, running Linux as the operating system. Figure 2 shows the results for the primal and the dual methods on the left and right, respectively. The solid lines indicate the prototype, the dashed lines
Algorithm 2. Fast Projective Reconstruction: Dual Method
1: Input: Data vectors x_κα, their norms ||x_κα||, the normalized vectors N[x_κα] = x_κα/||x_κα|| for κ = 1, ..., M, α = 1, ..., N, the minimal reprojection error ε_min, and the threshold δ for terminating the subspace update.
2: Output: Projection matrices Π_κ and 3-D points X_α.
3: Initialize the projective depths to z_κα = 1.
4: Compute the matrix W by stacking the products z_κα x_κα into the vectors w_κ^i and normalizing them to unit length.
5: Compute the matrix V of the subspace L′ consisting of the first four right singular vectors v_1, v_2, v_3, v_4 of W.
6: while ε (in Eq. (10)) > ε_min do
7:   for κ = 1 to M do
8:     Compute the matrix C_κ by stacking V^⊤ D_κ^1, V^⊤ D_κ^2, V^⊤ D_κ^3 vertically.
9:     Compute the matrix B̃_κ.
10:    Compute the dominant eigenvector η_κ of B̃_κ.
11:    Compute ξ_κ = N[C_κ^⊤ η_κ].
12:    if Σ_{i=1}^{N} ξ_{κi} < 0 then
13:      ξ_κ = −ξ_κ.
14:    end if
15:  end for
16:  Update the matrix W.
17:  Compute the 3-D positions X_α given by the columns of X = V^⊤.
18:  Compute the projection matrices Π_κ given by the row-triplets of Π = W X^⊤.
19:  repeat
20:    Assign v_i^0 ← v_i, i = 1, ..., 4.
21:    Compute ṽ_i = W^⊤(W v_i), i = 1, ..., 4.
22:    Let v_i, i = 1, ..., 4, be their Gram-Schmidt orthonormalization.
23:  until max_{i=1,...,4} (1 − Σ_{l=1}^{4} (v_i^⊤ v_l^0)²) < 10^{−δ}
24: end while
the power method, and the dotted lines our algorithm. The prototype clearly performs very poorly for both the primal and the dual methods; compared with it, the power method runs dramatically faster. As we can see from the figure, however, our algorithm is even faster than the power method.
4.2
Other Examples
Figure 3(a) shows a sequence of 11 frames (six decimated frames are displayed here) of 231 points on a cylindrical surface perspectively projected onto an image frame of size 512 × 512 (pixels) with focal length f = 600 (pixels). We set the reprojection error threshold to ε_min = 0.1 (pixels) as in the previous examples. Figure 3(b) shows a real video sequence of 200 frames (six decimated frames are displayed here) of 16 points detected by KLT (the Kanade-Lucas-Tomasi tracker) [10]; we intervened manually whenever tracking failed. The frame size is 640 × 480 (pixels). For this sequence, we have found that the reprojection
Fig. 1. Synthetic image sequence of 256 points through 256 frames (six frames decimated) 30
400
300 20
200 10
100
0
0 0
100
200
300
400 M 500
0
100
200
300
400 N 500
Fig. 2. Computation time (sec) for the synthetic sequence in Fig. 1 using the prototype (solid lines), the power method (dashed lines), and our algorithm (dotted lines). The left is for the primal method with N = 256, and the right is for the dual method with M = 256.
error cannot be reduced to less than 2.1 pixels however many times the computation is iterated. This indicates that the points tracked by KLT have an uncertainty of about 2.1 pixels. In fact, we visually observed that the tracked points had fluctuations of a few pixels over the sequence. So, we set the reprojection error threshold to ε_min = 2.1 (pixels). Table 1 lists the computation times (sec) for the two sequences. If the prototype is used, the primal method is more efficient than the dual method for the sequence in Fig. 3(a), because the number of points is large while the number of frames is small. For the real video sequence in Fig. 3(b), in contrast, the number of points is small while the number of frames is large, so the primal method requires a much longer time than the dual method. If the power method is used, however, the dual method is faster than the primal method for both sequences. In particular, the reduction of computation time is significant for the sequence in Fig. 3(a). We can also see that the computation time of the primal method for the real video sequence in Fig. 3(b) is dramatically reduced by using the power method. Yet, as we can see, our algorithm further improves efficiency. As for the choice of the primal vs. the dual method, the latter is almost always preferable for practical applications whatever algorithm is used. This is because the number of frames can be arbitrarily large if a video camera is used, while the number of trackable feature points is very much limited.
Fig. 3. (a) Synthetic image sequence of 231 points on a cylindrical surface through 11 frames (six frames decimated). (b) Real video sequence of 16 points through 200 frames (six frames decimated). Table 1. Computation times (sec) for the sequences in Figs. 3(a) (left side) and 3(b) (right side)
                   Fig. 3(a)            Fig. 3(b)
                   Primal    Dual       Primal     Dual
prototype           2.385    4.029      107.338    0.073
power method        0.873    0.050        0.772    0.062
our algorithm       0.801    0.036        0.690    0.056

5
Conclusions
We reformulated existing projective reconstruction algorithms, which solve the eigenproblem of very large size matrices, into a form that involves only small constant size matrices irrespective of the number of points or frames. Using synthetic and real video sequences, we demonstrated that our algorithm requires a very short time as compared with existing methods. The memory requirement also reduces significantly. Our algorithm is based solely on the algebraic structure of the problem. It has often been pointed out that the reconstruction accuracy quickly deteriorates as the noise in the input data becomes larger [4,8] and that we need to incorporate various statistical optimization techniques, a typical one being iterative refinement called bundle adjustment [12]. The focus of this paper is efficiency, so we have not gone into the accuracy issue, which crucially depends on the quality of the tracking data. To do statistical optimization, however, we need a good initial value, which our algorithm can very efficiently provide. Acknowledgments. The authors thank Akinobu Mori of Canon, Inc. for initiating this project. They also thank Hirotaka Niitsuma of Okayama University for helpful discussions. This work is partly supported by Mitsubishi Precision Co., Ltd.
References 1. Ackermann, H., Kanatani, K.: Robust and efficient 3-D reconstruction by selfcalibration. In: Proc. IAPR Conf. Machine Vision Applications, Tokyo, Japan, May 2007, pp. 178–181 (2007) 2. Golub, T.H., Van Loan, C.F.: Matrix Computations, 3rd edn. Johns-Hopkins University Press, Baltimore, MD, U.S.A. (1996) 3. Hartley, R.: In defense of the 8-point algorithm. IEEE Trans. Patt. Anal. Mach. Intell. 19(6), 580–593 (1997) 4. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge (2000) 5. Heyden, A., Berthilsson, R., Sparr, G.: An iterative factorization method for projective structure and motion from image sequences. Image Vis. Comput. 17(13), 981–991 (1999) 6. Kanatani, K.: Latest progress of 3-D reconstruction from multiple camera images. In: Guo, X.P. (ed.) Robotics Research Trends, Nova Science, Hauppauge, NY, U.S.A., pp. 33–75 (2008) 7. Mahamud, S., Hebert, M.: Iterative projective reconstruction from multiple views. In: Proc. IEEE Conf. Comput. Vis. Patt. Recog., Hinton Head Island, SC, U.S.A., June 2000, vol. 2, pp. 2430–2437 (2000) 8. Pollefeys, M., Koch, R., Van Gool, L.: Self-calibration and metric reconstruction in spite of varying and unknown internal camera parameters. Int. J. Comput. Vis. 32(1), 7–25 (1999) 9. Sturm, P., Triggs, B.: A factorization based algorithm for multi-image projective structure and motion. In: Proc. 4th Euro. Conf. Comput. Vis., Cambridge, U.K., April 1996, vol. 2, pp. 709–720 (1996) 10. Tomasi, C., Kanade, T.: Detection and Tracking of Point Features. CMU Tech. Rept. CMU-CS-91132 (1991), http://vision.stanford.edu/∼ birch/klt/ 11. Tomasi, C., Kanade, T.: Shape and motion from image streams under orthography: A factorization method. Int. J. Comput. Vis. 9(2), 137–154 (1992) 12. Triggs, B., et al.: Bundle adjustment—A modern synthesis. In: Triggs, B., Zisserman, A., Szeliski, R. (eds.) Vision Algorithms: Theory and Practice, pp. 298–375. Springer, Berlin (2000)
Accurate Image Matching in Scenes Including Repetitive Patterns
Sunao Kamiya¹ and Yasushi Kanazawa²
¹ Denso Techno Co., Ltd., Nagoya, 450-0002 Japan
² Department of Knowledge-based Information Engineering, Toyohashi University of Technology, Toyohashi, 441-8580 Japan
[email protected]
Abstract. We propose an accurate method for image matching in scenes that include repetitive patterns, such as buildings, walls, and so on. Our matching method consists of two phases: matching between the elements of the repetitive regions, and matching between the points in the remaining regions. We first detect the elements of repetitive patterns in each image and find matches between the elements of the detected regions without using any descriptors that depend on the viewpoint. We then find matches between the points in the remaining regions of the two images using the information provided by the detected matches. The advantage of our method is that it exploits the rich matching information contained in the repetitive patterns. We show the effectiveness of our method with real image examples.
1
Introduction
Detecting correspondences between two or more images is a fundamental and important step for most computer vision applications. Therefore, various methods have been proposed in the past [1,2,3,4,5,6]. Two approaches exist for this purpose: tracking correspondences over successive frames, and direct matching between images taken at different viewpoints. This paper focuses on the latter. In image matching, image features and their descriptors play very important roles. The descriptor is especially important because it is used for computing similarities between the image features [6]. We often use "points," "lines," or "regions" as the image features. A typical descriptor for a point is a subimage centered on the point itself. It is very simple, and we can easily compute a similarity between two points by the SSD or SAD of the two subimages, which is generally called "template matching." This descriptor is invariant only to image translation and depends on the viewpoint. So, this descriptor is only available for limited camera configurations, such as a narrow-baseline stereo. If the baseline of a stereo camera is long, we need higher-level invariants of the image features, such as affine invariants [3,4,5,6,7]. However, when there are repetitive patterns in a scene, we often fail in matching even when using such affine invariants. Needless to say, we also fail when using the descriptor that is invariant only to translation. In addition, and unfortunately, such repetitive
Fig. 1. Images including repetitive patterns and the results of matching. (a) Original images. (b) Result by the method of Kanazawa and Kanatani [2]. (c) Result by the method of Lowe [3].
patterns exist in many city scenes, for example, windows, brick or ceramic tile walls, and roads. Here, we show an example in Fig. 1. Fig. 1 (a) shows original images taken from two different viewpoints. This type of wall is called "namako-kabe" in Japanese. We see that the pattern in the images is arranged precisely and has high contrast. Fig. 1 (b) shows the correspondences obtained by the method of Kanazawa and Kanatani [2], and Fig. 1 (c) shows the correspondences obtained by the method of Lowe [3]. In these results, we show the correspondences by line segments, which connect corresponding feature points when we superpose one image on the other. The method of Kanazawa and Kanatani uses subimages as a descriptor, but imposes some global consistency on the candidate matches. So, when there are no extreme differences in depth in a scene and the baseline of the stereo pair is short, we can obtain many correct matches stably by this method. The method of Lowe uses a scale-invariant descriptor and is well known as a very effective method. In this case, although the baseline is short, the Kanazawa and Kanatani method failed in detecting correspondences. This is because the wrong matches in the repetitive pattern region have stronger correlations under template matching than the correct matches. In addition, such wrong matches are shifted because of the strong correlations. As we also see, we obtained only a few correct matches by Lowe's method. This is because the descriptors in such repetitive regions are almost the same in this case. In this paper, we propose a practical and accurate matching method between two images including repetitive patterns without using any descriptors or invariants that depend on the viewpoint. We first detect the elements of the repetitive patterns in the images independently and next detect their correspondences roughly. We then decide the point matches precisely using the rough correspondences between the elements. We finally detect point correspondences in the remaining regions, which do not include any repetitive patterns. By this approach, we not only reduce the ill effects of the repetitive patterns on image matching, but also use the powerful information of the repetitive patterns for image matching. We show the effectiveness of our method by some real image examples.
2
Two Phase Matching for Repetitive Patterns
There are many repetitive patterns in a city scene, such as windows and ceramic tiles of buildings and paving stones of roads. With the human eye, we can easily find
them because they have remarkable characteristics in the images: even textures and high contrast compared with the other regions. Such remarkable characteristics of repetitive patterns are very useful information for image matching. But, unfortunately, repetitive patterns are hard to detect automatically. In addition, the repetitive patterns in the images mislead correspondences in image matching because of their viewpoint dependency. If the area of the repetitive patterns is large, the misled correspondences also affect finding the correspondences of the points in the other regions. The cause of the failure in image matching is the use of the same descriptors for the repetitive pattern regions and the other regions in the images. Our matching method consists of two phases: the first phase is to find correspondences between only the repetitive pattern regions without using any descriptors that depend on the viewpoint; the second phase is to find correspondences between the remaining regions using the correspondences obtained in the repetitive pattern regions. Here, we use the empirical knowledge that repetitive patterns generally lie on planes in a scene. We show the outline of our method in the following.
1. The first phase: image matching in the repetitive pattern regions.
   (a) We first detect elements of repetitive patterns in the images independently.
   (b) We next divide the detected elements into groups using some criteria.
   (c) We then find rough correspondences between the elements using RANSAC [8].
   (d) We detect feature points in the repetitive regions and find their correspondences using the obtained rough correspondences of elements.
2. The second phase: image matching in the remaining regions.
   (a) We compute the fundamental matrix from the obtained point matches in the repetitive pattern regions.
   (b) Finally, we detect feature points in the remaining regions of the images and find their correspondences using the epipolar constraint.
We show the details of the two phases in the following sections.
2.1
Detection of Elements of Repetitive Patterns
In order to detect the elements of repetitive patterns, we first define an element of a repetitive pattern as a region whose intensities or colors are almost uniform. So, we can detect the elements by using an image segmentation method. In this paper, we use the watershed algorithm [9,10]. Before applying the watershed algorithm, we do the following preprocessing:
Fig. 2. An example of segmentation and grouping
1. Apply Gaussian smoothing to the images.
2. Detect intensity-flat regions, i.e., regions of almost uniform intensity, in the images independently. For judging whether a region is flat or not, we use the variance of the intensities in a local area of the image. If the variance is smaller than the threshold σ_r², we regard the region as a "flat region."
Then, we apply the watershed algorithm, described as follows:
1. Detect the maximum gradient in the eight-neighborhood of a pixel in the non-flat regions, then move the position along the maximum gradient.
2. If the moved position is a minimum or lies in a detected flat region, stop moving.
3. Repeat steps 1 to 2 for all pixels in the non-flat regions.
4. Group the pixels moved into the same minimum or the same flat region into one group.
Here, we call the obtained groups the "elements" of repetitive patterns. For example, if we apply the watershed algorithm to Fig. 2 (a), we obtain the segmented regions shown in Fig. 2 (b).
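One possible realization of this element detection is sketched below, with scikit-image's marker-based watershed used as a stand-in for the authors' gradient-following implementation; the function name, the smoothing and variance thresholds, and the window size are all our own assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import color, filters, io, segmentation, util

def detect_elements(path, sigma=2.0, var_thresh=20.0, win=5):
    """Rough stand-in for the element detection of Section 2.1:
    Gaussian smoothing, flat-region detection by local variance, then watershed."""
    img = io.imread(path)
    gray = color.rgb2gray(img) if img.ndim == 3 else util.img_as_float(img)
    smoothed = filters.gaussian(gray, sigma=sigma)

    # Flat regions: local intensity variance below a threshold
    # (threshold value is an arbitrary placeholder, given here on a 0-255 scale)
    mean = ndi.uniform_filter(smoothed, size=win)
    sq_mean = ndi.uniform_filter(smoothed**2, size=win)
    local_var = sq_mean - mean**2
    flat = local_var < (var_thresh / 255.0) ** 2

    # Connected flat regions act as seeds; flooding the gradient image assigns the
    # remaining pixels to them, so each label is one candidate "element"
    markers, _ = ndi.label(flat)
    gradient = filters.sobel(smoothed)
    labels = segmentation.watershed(gradient, markers)
    return labels

# labels = detect_elements("wall.png")
```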
2.2 Grouping Elements
We next group the detected elements into groups that indicate the repetitive pattern regions. For doing this, we use the following criteria:
– the distance between the elements,
– the difference between the averages of the intensities in the elements,
– the difference between the variances of the intensities in the elements.
Here, we define the distance between two elements as the distance between their centers of gravity. Then, we regard the obtained groups as the repetitive pattern regions in the images. Let p_i, i = 1, ..., L_p, be the detected small regions. We group the detected small regions p_i and p_j if

$$\|\mathbf{c}_i - \mathbf{c}_j\| \le t_g \quad\text{and}\quad |m_i - m_j| \le t_m \quad\text{and}\quad |s_i^2 - s_j^2| \le t_{\sigma^2}, \qquad (1)$$

where c_i and c_j are the coordinates of the centers of the elements, m_i and m_j are the average intensities in the elements, and s_i² and s_j² are the variances of the intensities in the elements. Here, we use the thresholds t_g, t_m, and t_{σ²}.
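A minimal sketch of this pairwise grouping using a union-find structure is shown below; the function name, the data layout, and the threshold values are our own placeholders, not taken from the paper.

```python
import numpy as np

def group_elements(centers, means, variances, t_g=30.0, t_m=10.0, t_var=25.0):
    """Group elements by the three tests of Eq. (1); thresholds are placeholders.
    centers: (L, 2), means: (L,), variances: (L,) for the L detected elements."""
    L = len(centers)
    parent = list(range(L))

    def find(i):                     # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(L):
        for j in range(i + 1, L):
            close = np.linalg.norm(centers[i] - centers[j]) <= t_g
            similar_mean = abs(means[i] - means[j]) <= t_m
            similar_var = abs(variances[i] - variances[j]) <= t_var
            if close and similar_mean and similar_var:
                parent[find(i)] = find(j)          # merge the two elements' groups

    groups = {}
    for i in range(L):
        groups.setdefault(find(i), []).append(i)
    # Groups with several members are taken as candidate repetitive regions P
    return [g for g in groups.values() if len(g) >= 2]
```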
The shape features of a small region, such as its contour, area, and so on, strongly depend on the viewpoint, i.e., they can mislead the grouping of the small regions. So, in this paper, we do not use them. By repeating this grouping, we can detect a group {p_α | α = 1, ..., K} that consists of almost identical elements. We regard the detected group {p_α | α = 1, ..., K} as a "repetitive region" P. For example, a result of this grouping is shown in Fig. 2 (c). Let P_α, α = 1, ..., M, be the repetitive pattern regions detected in image I_1 and Q_β, β = 1, ..., N, be the repetitive pattern regions detected in image I_2.
2.3
Matching in Repetitive Pattern Regions
Let (x, y) be the image coordinates of a 3-D point P projected onto the first image I_1, and (x′, y′) those for the second image I_2. We use the following three-dimensional vectors to represent them:

$$\mathbf{x} = (x/f_0,\; y/f_0,\; 1)^\top, \qquad \mathbf{x}' = (x'/f_0,\; y'/f_0,\; 1)^\top, \qquad (2)$$

where the superscript ⊤ denotes the transpose and f_0 is a scale factor for stabilizing the computation. In this paper, we use the empirical knowledge that many repetitive patterns lie on planes in a scene. If the point P is on a plane, the vectors x and x′ are related in the following form [11]:

$$\mathbf{x}' = Z[H\mathbf{x}], \qquad (3)$$

where Z[·] designates a scale normalization to make the third component 1. The matrix H is a nonsingular matrix and is called the homography. However, the homography between two small regions is very sensitive to the image noise of the corresponding points [12]. So, to find correspondences between the detected repetitive pattern regions, we use the following affine transformation instead of the homography:

$$\mathbf{x}' = A\mathbf{x}, \qquad (4)$$

where the matrix A is also a nonsingular matrix and is called the affine transformation matrix. We show the RANSAC-based [8] procedure for detecting correspondences between the elements of the repetitive patterns as follows.
1. Initialize N_Smax = 0.
2. From P_α, randomly choose three elements p_i, p_j, p_k such that the distances between each pair of elements are almost the same. In the same way, randomly choose three elements q_i, q_j, q_k from Q_β. Then, let {p_i, q_i}, {p_j, q_j}, {p_k, q_k} be tentative correspondences.
3. Using the three pairs of elements, compute an affine transformation matrix A from their centers of gravity.
4. Compute a similarity g(p_a, q_b) between the elements p_a and q_b by

$$g(p_a, q_b) = \frac{\text{Area of the overlap of } A(p_a) \text{ and } q_b}{\text{Area of the union of } A(p_a) \text{ and } q_b}, \qquad (5)$$

where A(p) denotes the region p transformed by the matrix A.
5. Count the number of correspondences with similarities g(p_a, q_b) ≥ t_S and let the count be N_S. We call N_S a "support." If there are two or more overlapping regions for one region, count only the one correspondence whose similarity is the maximum among them.
6. If N_S > N_Smax, then update N_Smax ← N_S.
7. Repeat steps 3 to 6 by permuting the pairs of the tentative correspondences.
8. Repeat steps 2 to 7 until the number of iterations becomes sufficiently large.
By this procedure, we can obtain the pair of repetitive patterns {P_λ, Q_ν} that has the maximum support. If a detected repetitive pattern region includes two or more planes in the scene, we remove the matched elemental regions from the repetitive pattern regions and repeat the above procedure. Then, we can obtain the correspondences of all repetitive patterns.
2.4
Matching Between the Points in the Elements
The correspondences between the repetitive pattern regions obtained by the above procedure are not suitable for reconstructing an accurate 3-D shape of the scene. So, we detect feature points in the repetitive pattern regions and then find correspondences between the feature points for reconstructing an accurate 3-D shape.
1. Detect contours of the elements in the repetitive pattern regions in the two images independently and detect feature points by finding high-curvature points on the contours. Let x_i and x′_j be the detected feature points in the two images.
2. Compute the homography H from the centers of gravity of the corresponding elements.
3. If a pair {x_i, x′_j} satisfies

$$\|\mathbf{x}'_j - Z[H\mathbf{x}_i]\|^2 < t_H, \qquad (6)$$

we regard the pair as a correct correspondence.
Then, let {x_μ, x′_μ}, μ = 1, ..., L, be the obtained correspondences.
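The following Python sketch illustrates steps 2 and 3 under our own assumptions: OpenCV's findHomography is used as a stand-in for the homography computation from the element centers, the test is applied in pixel units rather than in the f_0-normalized coordinates of Eq. (6), and all names and the threshold value are hypothetical.

```python
import numpy as np
import cv2

def match_points_via_homography(src_centers, dst_centers, src_pts, dst_pts, t_H=4.0):
    """Accept a point pair when the transferred point falls within t_H pixels,
    in the spirit of Eq. (6).

    src_centers, dst_centers : (K, 2) centers of gravity of corresponding elements (K >= 4)
    src_pts, dst_pts         : (n, 2), (m, 2) contour corner points in each image
    """
    # Homography from the element centers (least squares over all K correspondences)
    H, _ = cv2.findHomography(src_centers.astype(np.float64),
                              dst_centers.astype(np.float64), 0)

    # Transfer the first image's points and normalize the third component (Z[.])
    src_h = np.hstack([src_pts, np.ones((len(src_pts), 1))])
    proj = (H @ src_h.T).T
    proj = proj[:, :2] / proj[:, 2:3]

    matches = []
    for i, p in enumerate(proj):
        d = np.linalg.norm(dst_pts - p, axis=1)
        j = int(np.argmin(d))
        if d[j] ** 2 < t_H ** 2:               # squared-distance test, as in Eq. (6)
            matches.append((i, j))
    return H, matches
```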
Fig. 3. Real image example #1. (a) Original images. (b) Result by Kanazawa and Kanatani’s method [2]. (c) Result by Lowe’s method [3]. (d) Result by Kanazawa and Uemura’s method [7]. (e) Result by our method. (f) Region grouping results by our method. (g) Rough correspondences on regions. (h) Obtained matches from repetitive pattern regions.
2.5 Matching Between Remaining Regions
We now consider detecting correspondences in the remaining regions. The correspondences obtained by the above procedure give us powerful information, because they satisfy the homography relationship, which is a stronger constraint than the epipolar equation [11,13]. So, we compute the fundamental matrix from {x_μ, x'_μ}, μ = 1, ..., L, and simply use the epipolar constraint with respect to the computed fundamental matrix for detecting correspondences in the remaining regions.

1. Detect feature points in the remaining regions in the two images independently. Here, we use the Harris operator [1]. Let y_i and y'_j be the detected feature points in the respective images.
2. Compute the fundamental matrix F from {x_μ, x'_μ}, μ = 1, ..., L. Here, we use an optimal method called renormalization, proposed by Kanatani [14], instead of the eight-point algorithm [11].
3. If a pair {y_i, y'_j} satisfies

\frac{(y_i, F y'_j)^2}{||P_k F y'_j||^2 + ||P_k F^T y_i||^2} < t_e,    (7)

we regard the pair as a candidate correspondence. Here,
Fig. 4. Real image example #2. (a) Original images. (b) Result by Kanazawa and Kanatani’s method [2]. (c) Result by Lowe’s method [3]. (d) Result by Kanazawa and Uemura’s method [7]. (e) Result by our method. (f) Region grouping results by our method. (g) Rough correspondences on regions. (h) Obtained matches from repetitive pattern regions.
P_k = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}

and (a, b) denotes the inner product of the vectors a and b.
4. Compute the residuals of template matching between the candidate correspondences {y_i, y'_j} in the two images, which are rectified by using the fundamental matrix F. Here, for rectifying an image, we use the method proposed by Sugaya et al. [15]. If the residual is smaller than a threshold, we regard the pair {y_i, y'_j} as a correspondence.
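The candidate test of Eq. (7) can be sketched as follows; the function names and threshold are illustrative, and the rectification and template matching of Step 4 are not included.

```python
# Hedged sketch of the epipolar candidate test of Eq. (7).
import numpy as np

P_k = np.diag([1.0, 1.0, 0.0])

def epipolar_residual(F, y_i, y_j):
    """Left-hand side of Eq. (7) for homogeneous image points y_i, y'_j."""
    num = float(y_i @ (F @ y_j)) ** 2
    den = np.sum((P_k @ (F @ y_j)) ** 2) + np.sum((P_k @ (F.T @ y_i)) ** 2)
    return num / den

def candidate_pairs(F, pts_1, pts_2, t_e=2.0):
    """Collect index pairs whose epipolar residual is below t_e."""
    return [(i, j) for i, y_i in enumerate(pts_1)
                   for j, y_j in enumerate(pts_2)
                   if epipolar_residual(F, y_i, y_j) < t_e]
```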
3 Real Image Examples
In order to show the effectiveness of our method, we show some real image examples. For comparison, we show some results obtained by Kanazawa and Kanatani's method [2], Lowe's method [3], and Kanazawa and Uemura's method [7].¹
¹ We used the programs available at http://www.img.tutkie.tut.ac.jp/programs/index-e.html and http://www.cs.ubc.ca/~lowe/keypoints/, respectively.
Fig. 5. Real image example #3. (a) Original images. (b) Result by Kanazawa and Kanatani’s method [2]. (c) Result by Lowe’s method [3]. (d) Result by Kanazawa and Uemura’s method [7]. (e) Result by our method. (f) Region grouping results by our method. (g) Rough correspondences on regions. (h) Obtained matches from repetitive pattern regions.
Fig. 3 shows an example of a "namako-kabe" scene. Fig. 3 (a) shows the input images. Detecting correspondences between images of such a "namako-kabe" scene is very difficult because of the high contrast and the precise arrangement of the patterns, so most usual methods fail in matching such images. Fig. 3 (b), (c), and (d) show the results obtained by Kanazawa and Kanatani's method [2], Lowe's method [3], and Kanazawa and Uemura's method [7], respectively. Fig. 3 (e), (f), (g), and (h) show the results obtained by our method: Fig. 3 (e) shows the final results, Fig. 3 (f) shows the detected repetitive regions, Fig. 3 (g) shows the correspondences of regions, and Fig. 3 (h) shows the correspondences in the repetitive regions only. In Fig. 3 (f), we show the different detected repetitive pattern regions in different colors. In these results, we indicate the correspondences by line segments in the same way as in Fig. 1. As we can see, the result by Kanazawa and Kanatani's method seems to be correct at a glance. However, the obtained matches are almost all wrong; this is because the strong repetitive patterns in the images mislead the matching. With Lowe's method and Kanazawa and Uemura's method, only a few correct matches are obtained. In contrast, the results by our method are almost all correct. We also see in Fig. 3 (f) that the grouping results differ between the two images, but we can still obtain correct matches between the repetitive pattern regions, as shown in Fig. 3 (g).
Fig. 6. Real image example #4. (a) Original images. (b) Result by Kanazawa and Kanatani’s method [2]. (c) Result by Lowe’s method [3]. (d) Result by Kanazawa and Uemura’s method [7]. (e) Result by our method. (f) Region grouping results by our method. (g) Rough correspondences on regions. (h) Obtained matches from repetitive pattern regions.
Fig. 4 shows an example of a brick fence scene, whose patterns are arranged similarly to those in Fig. 3. As we can see, Kanazawa and Kanatani's method, Lowe's method, and Kanazawa and Uemura's method fail in matching, whereas our method can detect many correct matches between the two images. Fig. 5 shows an example of a scene of a pillar of a building. In this case, the disparity between the two images is very small, so Kanazawa and Kanatani's method and Lowe's method can detect many correct matches; however, we could not obtain a result by Kanazawa and Uemura's method. Our method obtains many correct matches. In the result by our method, the two planes of the pillar were grouped into the same repetitive region, but we can separate them in the rough matching step using the affine transformation.
Table 1. The thresholds in grouping

ex. #   threshold of σ_r^2   threshold of distance t_g
#1      50                   100
#2      40                   50
#3      15                   50
#4      20                   50
Table 2. The elapsed time in the examples (in seconds)

ex. #   our method   Kanazawa & Kanatani   Lowe    Kanazawa & Uemura
#1      747.63       8.78                  10.43   17.44
#2      990.18       7.75                  2.32    324.93
#3      623.59       5.54                  3.66    —
#4      399.67       6.40                  23.05   25.10
Fig. 6 shows an example of a flower bed scene. In this case, many correct matches can be obtained even by the usual methods. This is because the baseline is short and the repetitive pattern regions in the images are not so large, so the repetitive pattern regions do not mislead the correspondence search. In these examples, when the repetitive pattern regions occupy a large share of the images, we can obtain many correct matches stably by our method. On the other hand, when the repetitive pattern regions are small in the images, good results are also obtained by the usual methods. In addition, although Lowe's method is known as a very powerful and robust method, we can also see that it cannot detect matches in the center of the repetitive pattern regions. We show the thresholds used in these examples in Tab. 1 and the elapsed times in Tab. 2. We used a Pentium 4 540J 3.2 GHz CPU with 1 GB main memory and Vine Linux 4.0 as the OS.
4 Conclusions and Future Work
In this paper, we have proposed a practical and accurate method for image matching in scenes including repetitive patterns. Many usual methods are misled by such repetitive patterns and fail in matching. However, the proposed method can find many correct matches stably in such scenes. In the proposed method, we first detect the elements of the repetitive patterns in the two images and find their correspondences roughly. We then decide the point matches precisely using the rough correspondences of the repetitive patterns. We finally detect point correspondences in the remaining regions, which do not include any repetitive patterns. By this approach, we can not only reduce the ill effects of the repetitive patterns, but also use the powerful information they provide. We have shown the effectiveness of our method by some real image examples. Our method is specialized for the case that a scene includes repetitive patterns. As shown in the real image examples, if the repetitive regions are large in the images, our method possesses a great advantage over usual methods. But if the repetitive regions are small, the advantage is small or nonexistent. In particular, if there are no repetitive patterns in the images, our method should not be used. So, we think it is practical to choose between our method and the usual methods depending on the scene. Here, judging whether repetitive patterns are included or not is very important. We think this judgement can be made in the element detection step of our method.
For future work, we must reduce the processing time of our method. We also plan to make our method applicable to wide baseline stereo.
Acknowledgement

This work was supported in part by the Japan Society for the Promotion of Science under the Grant-in-Aid for Scientific Research (C) (No. 19500146).
References
1. Harris, C., Stephens, M.: A combined corner and edge detector. In: Proc. 4th Alvey Vision Conf., Manchester, U.K., pp. 147–151 (1988)
2. Kanazawa, Y., Kanatani, K.: Robust image matching preserving global consistency. In: Proc. 6th Asian Conf. Comput. Vision, Jeju Island, Korea, pp. 1128–1133 (2004)
3. Lowe, D.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vision 60(2), 91–110 (2004)
4. Matas, J., et al.: Robust wide baseline stereo from maximally stable extremal regions. In: Proc. 13th British Machine Vision Conf., Cardiff, U.K., pp. 384–393 (2002)
5. Mikolajczyk, K., Schmid, C.: Scale & affine invariant interest point detectors. Int. J. Comput. Vision 60(1), 63–86 (2004)
6. Mikolajczyk, K., et al.: A comparison of affine region detectors. Int. J. Comput. Vision 65(1–2), 43–72 (2005)
7. Kanazawa, Y., Uemura, K.: Wide baseline matching using triplet vector descriptor. In: Proc. 17th British Machine Vision Conf., Edinburgh, U.K., vol. 1, pp. 267–276 (2006)
8. Fischler, M., Bolles, R.: Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Comm. ACM 24(6), 381–395 (1981)
9. Vincent, L., Soille, P.: Watersheds in digital spaces: An efficient algorithm based on immersion simulations. IEEE Trans. Patt. Anal. Mach. Intell. 13(6), 583–598 (1991)
10. Roerdink, J.B.T.M., Meijster, A.: The watershed transform: Definitions, algorithms and parallelization strategies. Fundamenta Informaticae 41 (2000)
11. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge (2000)
12. Kanatani, K., Ohta, N., Kanazawa, Y.: Optimal homography computation with a reliability measure. IEICE Trans. Inf. & Syst. E83-D(7), 1369–1374 (2000)
13. Kanazawa, Y., Sakamoto, T., Kawakami, H.: Robust 3-D reconstruction using one or more homographies with uncalibrated stereo. In: Proc. 6th Asian Conf. Comput. Vision, Jeju Island, Korea, pp. 503–508 (2004)
14. Kanatani, K.: Optimal fundamental matrix computation: algorithm and reliability analysis. In: Proc. 6th Symposium on Sensing via Imaging Information (SSII), Yokohama, Japan, pp. 291–296 (2000)
15. Sugaya, Y., Kanatani, K., Kanazawa, Y.: Generating dense point matches using epipolar geometry. Memoirs of the Faculty of Engineering, Okayama University 40, 44–57 (2006)
Camera Self-calibration under the Constraint of Distant Plane

Guanghui Wang1,2, Q. M. Jonathan Wu1, and Wei Zhang1

1 Department of Electrical and Computer Engineering, The University of Windsor, 401 Sunset, Windsor, Ontario, Canada N9B 3P4
2 Department of Control Engineering, Aviation University, Changchun, Jilin, 130022, P.R. China
[email protected],
[email protected]
Abstract. Kruppa equation based camera self-calibration is one of the classical problems in computer vision. Most state-of-the-art approaches directly solve the quadratic constraints derived from the Kruppa equations, which are computationally intensive and for which it is difficult to obtain initial values. In this paper, we propose a new initialization algorithm by estimating the unknown scalar in the equation; thus the camera parameters can be computed linearly in closed form and then refined iteratively via global optimization techniques. We prove that the scalar can be uniquely recovered from the infinite homography and propose a practical method to estimate the homography from a physical or virtual plane located at a far distance from the camera. Extensive experiments on synthetic and real images validate the effectiveness of the proposed method.
1 Introduction
Camera calibration is an indispensable step in retrieving 3D metric information from 2D images. The calibration methods can be generally classified into three categories: classical methods, active vision based methods, and self-calibration. The classical methods calibrate the camera from the projected images of a calibration grid or template with known 3D shape or fiducial markers [23][26]. The active vision based methods require a precisely designed platform to control the movement of cameras [5][14]. These methods can compute camera parameters linearly with good precision. However, the calibration grid and active vision platform may be unavailable in real applications. The theory of camera self-calibration was first introduced to the computer vision field by Faugeras and Maybank et al. [3][16]. The method is based on the so-called Kruppa equation, which links the epipolar transformation to the image of the absolute conic (IAC). Self-calibration can be accomplished from only the image measurements, without requiring special calibration objects or known camera motion; thus it is more flexible and attractive. Following this idea, many improvements and related methods were proposed [6][13][25]. However, the constraints provided by the Kruppa equation are quadratic, which involves high computational costs, and it is difficult to give an initial estimate. To avoid
the nonlinearity of the Kruppa equations, some researchers developed methods requiring that the camera undergoes special motions, such as pure rotation [4], planar motion [1][11], or circular motion [27], or they combine scene constraints into the calibration [15]. A survey on self-calibration can be found in [8]. Pollefeys et al. [19] proposed a stratification approach to upgrade the reconstruction from projective to affine and finally to the Euclidean space. They also incorporate the modulus constraint to recover the affine geometry. Triggs [22] proposed to calibrate the cameras from the absolute quadric. Chandraker et al. [2] proposed a method for estimating the absolute quadric where the rank degeneracy and the positive semidefiniteness of the dual quadric are imposed as part of the estimation procedure. Valdés et al. [24] proposed constraints between the absolute line quadric and the camera parameters. Seo and Heyden [21] proposed an algorithm to linearly estimate the dual of the absolute conic and incorporate the rank-3 constraint on the intrinsic parameters. Ronda et al. [20] gave the relationships of the horopter curves associated to each pair of cameras with the plane at infinity and the absolute conic. Most of the above methods work only for constant camera parameters, while the intrinsic parameters may vary from frame to frame in practice. Pollefeys et al. [17] proposed to extend the modulus constraint to allow varying focal lengths, but the method requires a pure translation as initialization. They extended the method in [18] to retrieve a metric reconstruction from an image sequence obtained with uncalibrated zooming cameras. Heyden and Åström [9] proved that self-calibration is possible when the aspect ratio is known and no skew is present. They further extended this to the more general result that the existence of one constant intrinsic parameter suffices for self-calibration [10]. Many Kruppa equation based methods establish the constraints on the dual of the imaged absolute conic (DIAC) by eliminating the unknown scalar in the equation [7][13][25]. The resulting constraints are nonlinear, and the difficulties in solving these equations restrict the applications of the method. Lei et al. [12] proposed to estimate the scale factor by Levenberg-Marquardt (LM) or genetic optimization techniques before establishing the calibration constraints. However, the method is computationally intensive and an initial value is needed. Inspired by this idea, we propose to initialize the scalar via the constraint provided by the homography induced by a remote physical or virtual plane in the scene. The camera parameters are estimated linearly in closed form and finally refined by nonlinear optimization. The remaining part of the paper is organized as follows. Some preliminaries are reviewed in Section 2. The proposed method is elaborated in Section 3. Experimental results are presented in Section 4. Finally, the paper is concluded in Section 5.
2 Preliminaries on Kruppa Equation
In order to facilitate our discussion in the subsequent sections, some preliminaries on epipolar geometry, Kruppa equation and plane induced homography are reviewed in this section. More details can be found in [7].
2.1 Epipolar Geometry and Kruppa Equation
Under perspective projection, a 3D point X in space is projected to an image point m via a rank-3 projection matrix P ∈ R^{3×4} as

s m = P X = K[R, t]X,    (1)

where the points X and m are denoted in homogeneous form, R and t are the rotation matrix and translation vector from the world coordinate system to the camera system, s is a non-zero scalar, and

K = \begin{pmatrix} f_u & ς & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{pmatrix}

is the camera calibration
matrix. Given two images taken at different viewpoints, suppose the space point X is imaged at m in the first view and m' in the second. Then the relationship between the two views can be encapsulated by the epipolar geometry as

m'^T F m = 0,    (2)

where F is the fundamental matrix, which is a rank-2 homogeneous matrix. The camera baseline (the line joining the camera centers) intersects each image plane at the epipoles e and e', which can be computed from the following equations.

F e = 0,   F^T e' = 0    (3)

Suppose the camera parameters are fixed, and denote

C = KK^T = \begin{pmatrix} c_1 & c_2 & c_3 \\ c_2 & c_4 & c_5 \\ c_3 & c_5 & 1 \end{pmatrix},

which is the dual of the imaged absolute conic (DIAC). Then it is easy to obtain the classical Kruppa equation from the epipolar geometry.

λ[e']_× C [e']_× = F C F^T    (4)

where [e']_× is the 3 × 3 antisymmetric matrix derived from the 3-vector e', and λ is an unknown scalar. Let us denote both sides of the Kruppa equation (4) as

[e']_× C [e']_× = \begin{pmatrix} f_1(c) & f_2(c) & f_3(c) \\ f_2(c) & f_4(c) & f_5(c) \\ f_3(c) & f_5(c) & f_6(c) \end{pmatrix}, \quad
F C F^T = \begin{pmatrix} f'_1(c) & f'_2(c) & f'_3(c) \\ f'_2(c) & f'_4(c) & f'_5(c) \\ f'_3(c) & f'_5(c) & f'_6(c) \end{pmatrix},    (5)

where c = [c_1, c_2, c_3, c_4, c_5]^T, and f_i(c) and f'_i(c) are linear functions of the elements in C. Then the Kruppa equation can be written as

\frac{f_1(c)}{f'_1(c)} = \frac{f_2(c)}{f'_2(c)} = \frac{f_3(c)}{f'_3(c)} = \frac{f_4(c)}{f'_4(c)} = \frac{f_5(c)}{f'_5(c)} = \frac{f_6(c)}{f'_6(c)},    (6)
which provides 6 quadratic constraints on the elements of C. It can be verified that only 2 of them are independent.

2.2 Plane Induced Homography
Suppose {R, t} is the movement between two views and π = [n^T, d]^T are the coordinates of a world plane. Then the homography induced by the plane can be computed from

H = αK(R − t n^T / d)K^{-1}    (7)
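As a small numerical illustration of (7) (not taken from the paper), the following sketch shows that the plane-induced homography approaches αKRK^{-1} as the plane distance d grows; the values of K, R, t, and n are made up for the test.

```python
# Hedged sketch: Eq. (7) converges to the infinite homography of Eq. (8).
import numpy as np

K = np.array([[1000.0, 0.0, 400.0],
              [0.0, 1100.0, 400.0],
              [0.0,    0.0,   1.0]])
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([[0.5], [0.1], [0.02]])
n = np.array([[0.0], [0.0], [1.0]])           # plane normal (assumed)

def plane_homography(d):                      # Eq. (7) with alpha = 1
    return K @ (R - (t @ n.T) / d) @ np.linalg.inv(K)

H_inf = K @ R @ np.linalg.inv(K)              # Eq. (8) with alpha = 1
for d in (10.0, 100.0, 16000.0):
    H = plane_homography(d)
    print(d, np.linalg.norm(H / H[2, 2] - H_inf / H_inf[2, 2]))
```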
When the plane is located at infinity, i.e., d → ∞, we obtain the infinite homography from (7):

H_∞ = αKRK^{-1}    (8)

The homography is a 3 × 3 homogeneous matrix, which can be linearly computed from the images of four or more points on the plane. If the fundamental matrix is known, then 3 such points are sufficient to recover the homography [7]:

H ≃ [e']_× F − e'(M^{-1}b)^T,    (9)

where ≃ denotes equality up to scale, M is a 3 × 3 matrix whose rows are the three image points m_i^T, and b is a 3-vector with components
b_i = \frac{(m'_i × ([e']_× F m_i))^T (m'_i × e')}{(m'_i × e')^T (m'_i × e')},  i = 1, 2, 3    (10)
3 The Proposed Self-Calibration Method
3.1 The Constraint Derived from Infinite Homography
Lemma 1. Given the homography H between two views induced by a plane π in space, let e' be the epipole in the second view. Then the fundamental matrix can be expressed as F = μ[e']_× H, with μ a nonzero scalar.

Proof: A 3D point X on the plane π is imaged at m in the first image and m' ≃ Hm in the second. Suppose the epipolar line passing through m' is l'; then from (2) we have

l' ≃ F m    (11)

Since all the epipolar lines in the second view pass through the epipole e', l' can also be computed from

l' ≃ [e']_× m' ≃ [e']_× H m    (12)

According to (11) and (12), we have

F = μ[e']_× H    (13)
Proposition 1. The unknown scalar λ in the Kruppa equation (4) can be uniquely determined if the infinite homography H_∞ is given.

Proof: When the infinite homography is known, according to (8) and (13), we have

F = μ[e']_× H_∞ = μα[e']_× K R K^{-1}    (14)

Then the Kruppa equation (4) can be written as follows.

λ[e']_× C [e']_× = (μα)^2 ([e']_× K R K^{-1})(KK^T)([e']_× K R K^{-1})^T = −(μα)^2 [e']_× C [e']_×    (15)
where μ can be computed from (13) for the given fundamental matrix and homography via least squares,

f(μ) = arg min ||F − μ[e']_× H||_F^2    (16)

The scalar α can be recovered from the determinant of (8) for the given infinite homography,

det(H_∞) = det(αKRK^{-1}) = α^3    (17)

Thus the scalar λ can be directly recovered from λ = −μ^2 α^2.
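A possible implementation of this scalar recovery, under the assumption that F, e' and an (approximate) infinite homography are already available, is sketched below; the function names are ours, not the authors'.

```python
# Hedged sketch of Eqs. (16)-(17): recover mu, alpha and lambda = -(mu*alpha)^2.
import numpy as np

def skew(e):
    """Antisymmetric matrix [e]_x of a 3-vector e."""
    return np.array([[0.0, -e[2], e[1]],
                     [e[2], 0.0, -e[0]],
                     [-e[1], e[0], 0.0]])

def recover_lambda(F, e2, H_inf):
    A = skew(e2) @ H_inf
    # Eq. (16): scalar least squares, mu = <A, F> / <A, A> (Frobenius products)
    mu = np.sum(A * F) / np.sum(A * A)
    # Eq. (17): det(H_inf) = alpha^3, so alpha is the real cube root
    alpha = np.cbrt(np.linalg.det(H_inf))
    return -(mu * alpha) ** 2
```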
When λ is recovered, the Kruppa equation can be converted into the following linear constraints:

\begin{cases} f'_1(c) = λ f_1(c) \\ f'_2(c) = λ f_2(c) \\ \quad \cdots \\ f'_6(c) = λ f_6(c) \end{cases}    (18)

Each set of equations in (18) provides two linearly independent constraints on the five unknowns of c. For a sequence of m images, we can obtain n = m(m−1)/2 sets of equations. Thus c can be recovered from at least three images via linear least squares, and further optimization can be performed based on this initial estimation:

f(c, λ) = arg min \left( \sum_{j=1}^{n} \sum_{i=1}^{6} (f'^{(j)}_i(c) − λ^{(j)} f^{(j)}_i(c))^2 \right)    (19)
The minimization process is similar to the bundle adjustment used in the computer vision community and can be solved via gradient descent or the Levenberg-Marquardt algorithm [7]. Then the camera parameters are factorized from C = KK^T via Cholesky decomposition.
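The linear estimation of c from (18) and the factorization of C can be sketched as follows; this is only an outline under the assumption that λ, F and e' have been estimated for each view pair, and the helper names are illustrative.

```python
# Hedged sketch: stack the linear constraints of Eq. (18) and factor C = K K^T.
import numpy as np

def skew(e):
    return np.array([[0.0, -e[2], e[1]],
                     [e[2], 0.0, -e[0]],
                     [-e[1], e[0], 0.0]])

def sym(i, j):
    B = np.zeros((3, 3)); B[i, j] = B[j, i] = 1.0; return B

# C = c1*B1 + c2*B2 + c3*B3 + c4*B4 + c5*B5 + B0, with C[2,2] = 1 fixed.
BASIS = [sym(0, 0), sym(0, 1), sym(0, 2), sym(1, 1), sym(1, 2)]
B0 = sym(2, 2)

def calibrate(pairs):
    """pairs: iterable of (lam, F, e2) for each view pair; returns K."""
    A_blocks, b_blocks = [], []
    for lam, F, e2 in pairs:
        Ex = skew(e2)
        left = lambda B: lam * (Ex @ B @ Ex)      # lambda*[e']x C [e']x, Eq. (4)
        right = lambda B: F @ B @ F.T             # F C F^T
        # 9 equations per pair (6 distinct, since both sides are symmetric)
        cols = np.stack([(left(Bi) - right(Bi)).reshape(-1) for Bi in BASIS], 1)
        A_blocks.append(cols)
        b_blocks.append((right(B0) - left(B0)).reshape(-1))
    c, *_ = np.linalg.lstsq(np.vstack(A_blocks), np.concatenate(b_blocks),
                            rcond=None)
    C = np.array([[c[0], c[1], c[2]], [c[1], c[3], c[4]], [c[2], c[4], 1.0]])
    # Upper-triangular K with C = K K^T ("reversed" Cholesky); C is assumed
    # positive definite, which may require refinement in the noisy case.
    J = np.eye(3)[::-1]
    K = J @ np.linalg.cholesky(J @ C @ J) @ J
    return K / K[2, 2]
```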
3.2 Recovery of the Infinite Homography
One of the most difficult problems in camera self-calibration is to find the plane at infinity. There are several possible ways to recover the infinite homography. One straightforward method is to use the prior knowledge of the scene, such as the orthogonal sets of parallel lines, to recover the vanishing points between different views. The infinite homography can also be easily recovered when the camera is under pure translation or planar movement [7]. For general scenario and camera motions, the modulus constraint [19] can provide a quartic polynomial constraint on the coordinates of the infinite plane. However, the equation is difficult to solve and the solution is not unique. In our implementation, we adopt the following simple method to approximate the infinite homography. According to equations (7) and (8), we can find that the infinite homography can be approximated by the plane induced homography (7) if the plane is far
away from the camera. In practice, if the scene contains a remote plane, such as the calendar in the teddy bear images (see Fig. 4), we can simply use the features on the plane to compute the homography. However, such a plane may not be available in many cases. Then we can simply select three points that are located at a remote distance in the scene; these three points define a virtual plane, and the homography induced by the plane can be computed from (9). In order to guarantee that the virtual plane is located at a far distance, the homography is further verified from the partition of the scene features.
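A sketch of this computation, following (9) and (10) with three selected remote correspondences, might look as follows; the names are illustrative and the points are assumed to be homogeneous 3-vectors.

```python
# Hedged sketch of Eqs. (9)-(10): homography of a virtual plane through three
# remote correspondences m_i <-> m'_i, given the fundamental matrix F and e'.
import numpy as np

def skew(e):
    return np.array([[0.0, -e[2], e[1]],
                     [e[2], 0.0, -e[0]],
                     [-e[1], e[0], 0.0]])

def homography_from_three_points(F, e2, m, m2):
    """m, m2: 3x3 arrays whose rows are the three points in image 1 and 2."""
    M = np.asarray(m, float)                     # rows m_i^T
    b = np.empty(3)
    for i in range(3):
        lhs = np.cross(m2[i], skew(e2) @ F @ m[i])
        rhs = np.cross(m2[i], e2)
        b[i] = (lhs @ rhs) / (rhs @ rhs)         # Eq. (10)
    return skew(e2) @ F - np.outer(e2, np.linalg.solve(M, b))  # Eq. (9)
```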
Fig. 1. The projection of a space point in two views
As shown in Fig. 1, a space point X projects to m and m' in two views, O and O' are the optical centers of the two cameras, and e and e' are the epipoles of the two images. Then m and m' must lie on the epipolar lines l and l', respectively. For the first view, the space point X lies on the back projection of the image point m, which is the line L defined by O and m. Suppose π_0 is the virtual plane and H_0 is the induced homography of the plane. Under the homography, the image point m is mapped to m'_0 in the second view as follows:

m'_0 ≃ H_0 m    (20)

which is the projection of the intersection point X_0 of the line L with the plane π_0. It is easy to verify that det([e', m'_0, m']) = 0, i.e., m' is collinear with e' and m'_0. Let us denote

m' = δ m'_0 + (1 − δ)e'    (21)
where δ is a scalar and all the points are normalized to have unit homogeneous terms. Then the partition of the space point X with respect to the virtual plane π0 and the camera center O can be determined from the value of δ.
δ < 0      ⇒ X lies behind the camera center
δ = 0      ⇒ X lies on the camera center O
0 < δ < 1  ⇒ X lies between the plane π_0 and O
δ = 1      ⇒ X lies on the plane π_0
δ > 1      ⇒ X lies behind the plane π_0        (22)
The relative position of the space points with respect to π0 can also be inferred from the value of δ or the distance between m and m0 . If there are any points that lie behind the plane, then π0 can be re-hypothesized by replacing the nearest initially assigned points with these points. After camera calibration, a general triangulation algorithm can be adopted to recover the 3D Euclidean structure of the scene, and the 3D structure and camera parameters are further optimized via a bundle adjustment scheme.
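The δ test of (21) and (22) can be sketched as follows; the function names are ours and the points are assumed to be homogeneous image coordinates.

```python
# Hedged sketch of the delta test of Eqs. (21)-(22).
import numpy as np

def normalize(v):
    return v / v[2]

def delta(H0, e2, m, m2):
    """Solve m' = delta*m'_0 + (1-delta)*e' in a least-squares sense, with
    m'_0 = Z[H0 m] and all points scaled to unit third component."""
    m0 = normalize(H0 @ m)
    m2, e2 = normalize(m2), normalize(e2)
    d = m0 - e2
    return float((m2 - e2) @ d) / float(d @ d)

def lies_behind_plane(H0, e2, m, m2):
    return delta(H0, e2, m, m2) > 1.0            # last case of Eq. (22)
```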
4 Experimental Evaluations

4.1 Tests with Synthetic Data
During simulations, the camera parameters were set as: focal lengths f_u = 1000, f_v = 1100, principal point u_0 = v_0 = 400, and image skew ς = 0. We randomly generated 150 space points within a sphere and 50 points on a plane. The distance of the camera to the center of the sphere was set at around 200, and the distance of the camera to the plane was set at around 16000. Three images were simulated with randomly selected rotations and translations. The image resolution was set to 800 × 800. We evaluated the algorithm with respect to different image noise levels. Gaussian white noise was added to each image point, and the noise level was varied from 0 to 2 pixels with a step of 0.2 during the test. We estimated the homography using the points on the plane and recovered the camera parameters according to the proposed algorithm. In order to obtain more statistically meaningful results, 100 independent tests were carried out at each noise level. The mean and standard deviation of the relative errors of the recovered camera parameters are shown in Fig. 2. To establish the superiority of the proposed method, we also implemented the LM-based algorithm proposed in [12]. We can see from Fig. 2 that the proposed algorithm performs better than the LM-based algorithm. It should be noted that the zero-skew constraint was imposed in the test, thus there is no error for the skew. Since the algorithm is based on the homography induced by a remote plane, we also evaluated the algorithm with respect to the relative distance of the plane. The relative distance is defined as the ratio between the distance of the plane to the camera and that of the sphere center to the camera. We varied the relative distance from 5 to 95 with a step of 10, and 100 independent tests of the proposed algorithm were performed at each position. The mean and standard deviation of the relative errors of the recovered camera parameters are shown in Fig. 3.
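For illustration only, a possible synthetic setup along the lines described above is sketched below; the sphere radius, the camera pose and the lateral extent of the plane are assumptions, since the text does not specify them.

```python
# Hedged sketch of the synthetic data generation (parameters follow the text
# approximately; unspecified quantities are placeholders).
import numpy as np

rng = np.random.default_rng(0)
K = np.array([[1000.0, 0.0, 400.0], [0.0, 1100.0, 400.0], [0.0, 0.0, 1.0]])

def random_points_in_sphere(n, center, radius):
    p = rng.normal(size=(n, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    return center + radius * p * rng.uniform(0, 1, size=(n, 1)) ** (1 / 3)

def project(K, R, t, X, noise_sigma):
    x = (K @ (R @ X.T + t[:, None])).T
    x = x[:, :2] / x[:, 2:3]
    return x + rng.normal(scale=noise_sigma, size=x.shape)

sphere_pts = random_points_in_sphere(150, np.array([0.0, 0.0, 200.0]), 30.0)
plane_pts = np.c_[rng.uniform(-2000, 2000, (50, 2)),
                  np.full(50, 16000.0)]          # remote plane at z = 16000
R, t = np.eye(3), np.zeros(3)                    # placeholder camera pose
img = project(K, R, t, np.vstack([sphere_pts, plane_pts]), noise_sigma=1.0)
```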
Fig. 2. The mean and standard deviation of the relative errors (unit: %) of the recovered camera parameters with respect to varying noise levels
Fig. 3. The mean and standard deviation of the relative errors (unit: %) of the recovered camera parameters with respect to different relative distances
We can see from the figure that the algorithm still yields good results even when the plane is close to the camera.

4.2 Tests with Real Images
We tested the proposed algorithm on many real images in order to evaluate its performance. Here we only report the result on the teddy bear scenario. The images were captured by a Canon PowerShot G3 digital camera with a resolution of 1024 × 768. As shown in Fig. 4, three images were used for calibration. The correspondences across the images were established by the matching method presented in [28], and the homography was estimated from the features on the background calendar. The calibration results are shown in Table 1. After calibration, we reconstructed the 3D metric structure of the scene. Fig. 4 shows the reconstructed VRML model and wireframe from different viewpoints. The 3D structure is visually plausible and looks realistic, which verifies that the calibration result is reasonable. For evaluation, we computed the mean of the reprojection errors E_rep of the reconstructed structure to the three images. We also computed the length ratio of the two diagonals of each background calendar grid and the angle formed by
Fig. 4. Reconstruction results of the teddy bear images. (a1)-(a3) Three images used for calibration, where the background calendar shown in the elliptical area in (a1) is used to compute the homography; the correspondences between these images with relative disparities are overlaid on the second and the third images; (b) the reconstructed VRML model of the scene shown from different viewpoints with texture mapping; (c) the corresponding triangulated wireframe of the VRML model.

Table 1. Calibration results and reconstruction errors with real images by the proposed and the LM-based algorithms

Method        f_u       f_v       u_0     v_0     ς       E_rep   E_ratio  E_angle
Hinf-based    2108.65   2125.73   510.28  387.54  0.127   0.153   0.315%   0.287%
LM-based      2117.38   2201.63   513.16  386.69  -0.235  0.186   0.361%   0.326%
the two diagonals. The means of the relative errors of these values are denoted by Eratio and Eangle . As shown in Table 1, the results obtained by the LM-based method are also listed in the table as a comparison. It is clearly evident that the proposed method performs better than the LM-based method in the tests.
5 Conclusion
In this paper, we have proposed and proved that the unknown scalar in the Kruppa equation can be recovered from the homography induced by a remote physical or virtual plane in the scene. Thus the camera parameters are initialized linearly in closed form, and the difficult problem of directly solving the quadratic equations is avoided. The effectiveness of the proposed algorithm is validated by extensive experiments. Compared with previous methods, the proposed calibration scheme is simple and has a high degree of accuracy. The method is suitable for scenarios that contain a remote plane or in which some feature points are located at a far distance from the camera. The method can also be applied to cameras with some varying parameters.
Acknowledgment The work is supported in part by the Canada Research Chair program and the National Natural Science Foundation of China under grant no. 60575015.
References
1. Armstrong, M., Zisserman, A., Hartley, R.: Self-calibration from image triplets. In: Buxton, B.F., Cipolla, R. (eds.) ECCV 1996. LNCS, vol. 1064, pp. 3–16. Springer, Heidelberg (1996)
2. Chandraker, M., et al.: Autocalibration via rank-constrained estimation of the absolute quadric. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2007)
3. Faugeras, O.D., Luong, Q.T., Maybank, S.J.: Camera self-calibration: Theory and experiments. In: European Conference on Computer Vision, pp. 321–334 (1992)
4. Hartley, R.: Self-calibration from multiple views with a rotating camera. In: European Conference on Computer Vision, pp. A: 471–478 (1994)
5. Hartley, R.: Self-calibration of stationary cameras. International Journal of Computer Vision 22(1), 5–23 (1997)
6. Hartley, R.: Kruppa's equations derived from the fundamental matrix. IEEE Trans. Pattern Anal. Mach. Intell. 19(2), 133–135 (1997)
7. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, Cambridge (2004)
8. Hemayed, E.: A survey of camera self-calibration. In: IEEE Conference on Advanced Video and Signal Based Surveillance, pp. 351–357 (2003)
9. Heyden, A., Åström, K.: Euclidean reconstruction from image sequences with varying and unknown focal length and principal point. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 438–443 (1997)
10. Heyden, A., Åström, K.: Minimal conditions on intrinsic parameters for Euclidean reconstruction. In: Asian Conference on Computer Vision, pp. 169–176 (1998)
11. Knight, J., Zisserman, A., Reid, I.: Linear auto-calibration for ground plane motion. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. I: 503–510 (2003)
12. Lei, C., et al.: A new approach to solving Kruppa equations for camera self-calibration. In: International Conference on Pattern Recognition, pp. 308–311 (2002)
13. Luong, Q., Faugeras, O.: Self-calibration of a moving camera from point correspondences and fundamental matrices. International Journal of Computer Vision 22(3), 261–289 (1997)
14. Ma, S.: A self-calibration technique for active vision systems. IEEE Transactions on Robotics and Automation 12(1), 114–120 (1996)
15. Malis, E., Cipolla, R.: Camera self-calibration from unknown planar structures enforcing the multiview constraints between collineations. IEEE Trans. Pattern Anal. Mach. Intell. 24(9), 1268–1272 (2002)
16. Maybank, S., Faugeras, O.: A theory of self-calibration of a moving camera. International Journal of Computer Vision 8(2), 123–151 (1992)
17. Pollefeys, M., Van Gool, L.J., Proesmans, M.: Euclidean 3D reconstruction from image sequences with variable focal lengths. In: Buxton, B.F., Cipolla, R. (eds.) ECCV 1996. LNCS, vol. 1064, pp. 31–42. Springer, Heidelberg (1996)
18. Pollefeys, M., Koch, R., Van Gool, L.: Self-calibration and metric reconstruction in spite of varying and unknown intrinsic camera parameters. International Journal of Computer Vision 32(1), 7–25 (1999)
19. Pollefeys, M., Van Gool, L.: Stratified self-calibration with the modulus constraint. IEEE Trans. Pattern Anal. Mach. Intell. 21(8), 707–724 (1999)
20. Ronda, J., Valdés, A., Jaureguizar, F.: Camera autocalibration and horopter curves. International Journal of Computer Vision 57(3), 219–232 (2004)
21. Seo, Y., Heyden, A.: Auto-calibration by linear iteration using the DAC equation. Image and Vision Computing 22(11), 919–926 (2004)
22. Triggs, B.: Autocalibration and the absolute quadric. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 609–614 (1997)
23. Tsai, R.: An efficient and accurate camera calibration technique for 3-D machine vision. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 364–374 (1986)
24. Valdés, A., Ronda, J., Gallego, G.: The absolute line quadric and camera autocalibration. International Journal of Computer Vision 66(3), 283–303 (2006)
25. Xu, G., Sugimoto, N.: Algebraic derivation of the Kruppa equations and a new algorithm for self-calibration of cameras. Journal of the Optical Society of America A 16(10), 2419–2424 (1999)
26. Zhang, Z.: A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000)
27. Zhong, H., Hung, Y.: Self-calibration from one circular motion sequence and two images. Pattern Recognition 39(9), 1672–1678 (2006)
28. Zhong, Y., Zhang, H.: Control point based semi-dense matching. In: Asian Conference on Computer Vision, pp. 374–379 (2002)
An Approximate Algorithm for Solving the Watchman Route Problem

Fajie Li1 and Reinhard Klette2

1 Institute for Mathematics and Computing Science, University of Groningen, P.O. Box 800, 9700 AV Groningen, The Netherlands
2 Computer Science Department, The University of Auckland, Private Bag 92019, Auckland 1142, New Zealand
Abstract. The watchman route problem (WRP) was first introduced in 1988 and is defined as follows: How to calculate a shortest route completely contained inside a simple polygon such that any point inside this polygon is visible from at least one point on the route? So far the best known result for the WRP is an O(n^3 log n) runtime algorithm (with inherent numerical problems in its implementation). This paper gives a κ(ε) · O(kn) approximate algorithm for the WRP by using a rubberband algorithm, where n is the number of vertices of the simple polygon, k the number of essential cuts, ε the chosen accuracy constant for the minimization of the calculated route, and κ(ε) equals the length of the initial route minus the length of the calculated route, divided by ε.

Keywords: computational geometry, simple polygon, Euclidean shortest path, visual inspection, watchman route problem, rubberband algorithm.
1 Introduction
There are a number of computational geometry problems which involve finding a shortest path [22], for example, the safari problem, the zookeeper problem, or the watchman route problem (WRP). All are of obvious importance for robotics, especially the WRP for visual inspection. This paper presents algorithms for solving the touring polygons problem (TPP), the parts cutting problem, the safari problem, the zookeeper problem, and the watchman route problem. These problems are closely related to one another. A solution to the first problem implies solutions to the other four problems. The safari, zookeeper, and watchman route problems belong to the class of art gallery problems [37].

1.1 Touring Polygons Problem
We recall some notation from [13], which introduced the touring polygons problem. Let π be a plane, which is identified with R^2. Consider polygons P_i ⊂ π, where i = 1, 2, . . . , k, and two points p, q ∈ π. Let p_0 = p and p_{k+1} = q. Let p_i ∈ R^2, where i = 1, 2, . . . , k. Let ρ(p, p_1, p_2, . . . , p_k, q) denote the path p p_1 p_2 . . . p_k q
⊂ R^2. Let ρ(p, q) = ρ(p, p_1, p_2, . . . , p_k, q) if this does not cause any confusion. If p_i ∈ P_i is such that p_i is the first (i.e., along the path) point in ∂P_i ∩ ρ(p, p_i), then we say that the path ρ(p, q) visits P_i at p_i, where i = 1, 2, . . . , k. Let A^• be the topological closure of a set A ⊂ R^2. Let F_i ⊂ R^2 be a simple polygon such that (P_i^• ∪ P_{i+1}^•) ⊂ F_i^•; then we say that F_i is a fence [with respect to P_i and P_{i+1} (mod k + 1)], where i = 0, 1, 2, . . . , k + 1. Now assume that we have a fence F_i for any pair of polygons P_i and P_{i+1}, for i = 0, 1, . . . , k + 1. The constrained TPP is defined as follows: How to find a shortest path ρ(p, p_1, p_2, . . . , p_k, q) such that it visits each of the polygons P_i in the given order, also satisfying p_i p_{i+1} (mod k + 1) ⊂ F_i^•, for i = 1, 2, . . . , k?

1.2 Watchman Route Problem
Assume that for any i, j ∈ {1, 2, . . . , k}, ∂Pi ∩ ∂Pj = ∅, and each Pi is convex; this special case is dealt with in [13]. The given algorithm runs in O(kn log(n/k)) time, where n is the total number of all vertices of all polygons Pi ⊂ π, for i = 1, 2, . . . , k. Let Π be a simple polygon with n vertices. The watchman route problem (WRP) is discussed in [4], and it is defined as follows: How to calculate a shortest route ρ ⊂ Π • such that any point p ∈ Π • is visible from at least one point on the path? This is actually equivalent to the requirement that all points p ∈ Π • are visible just from the vertices of the path ρ, that means, for any p ∈ Π • there is a vertex q of ρ such that pq ⊂ Π • ; see Figure 1. If the start point of the route is given, then this refined problem is known as the fixed WRP.
Fig. 1. A watchman route
A simplified WRP was first solved in 1988 in [10] by presenting an O(n log log n) algorithm to find a shortest route in a simple isothetic polygon. In 1991, [11] claimed to have presented an O(n4 ) algorithm, solving the fixed WRP. In 1993, [30] obtained an O(n3 ) solution for the fixed WRP. In the same year, this was further improved to a quadratic time algorithm [31]. However, four years later, in 1997, [16] pointed out that the algorithms in both [11] and [30] were flawed,
but presented a solution for fixing those errors. Interestingly, two years later, in 1999, [32] found that the solution given by [16] was also flawed! By modifying the (flawed) algorithm presented in [30], [32] gave an O(n^4) runtime algorithm for the fixed WRP. In 1995, [7] proposed an O(n^{12}) runtime algorithm for the WRP. In the same year, [25] gave an O(n^6) algorithm for the WRP. This was improved in 2001 by an O(n^5) algorithm in [33]; this paper also proved the following

Theorem 1. There is a unique watchman route in a simple polygon, except for those cases where there is an infinite number of different shortest routes, all of equal length.

So far the best known result for the WRP is due to [13], which gave in 2003 an O(n^3 log n) runtime algorithm. Given the time complexity of those algorithms for solving the WRP, finding efficient (and numerically stable) approximation algorithms became an interesting subject. In 1995, [19] gave an O(log n)-approximation algorithm for solving the WRP. In 1997, [8] gave a 99.98-approximation algorithm with time complexity O(n log n) for the WRP. In 2001, [34] presented a linear-time algorithm for an approximative solution of the fixed WRP such that the length of the calculated watchman route is at most twice that of the shortest watchman route. The coefficient of accuracy was improved to √2 in [35] in 2004. Most recently, [36] presented a linear-time algorithm for the WRP, calculating an approximative watchman route of length at most twice that of the shortest watchman route. There are several generalizations and variations of watchman route problems; see, for example, [5,6,9,12,14,15,17,21,24,25,26,27,28]. [1,2,3] show that some of these problems are NP-hard and solve them by approximation algorithms. The rest of this paper is organized as follows: Section 2 recalls and introduces useful notions. It also lists some known results which are applied later in this paper. Section 3 describes the new algorithms. Section 4 proves the correctness of the algorithms. Section 5 analyzes the time complexity of these algorithms. Section 6 concludes the paper.
2 Definitions and Known Results
We recall some definitions from [36]. Let Π be a simple polygon. A vertex v of Π is called reflex if v's internal angle is greater than 180°. Let u be a vertex of Π that is adjacent to a reflex vertex v. Let the straight line uv intersect an edge of Π at v'. Then the segment C = vv' partitions Π into two parts. C is called a cut of Π, and v is called a defining vertex of C. The part of Π that does not contain u is called the essential part of C; it is denoted by Π(C). A cut C dominates a cut C' if Π(C) contains Π(C'). A cut is called essential if it is not dominated by another cut. In Figure 2 (which is Figure 1 in [36]), the cuts xx' and yy' are dominated by C_2 and C_5, respectively; the cuts C_1, C_2, C_3, C_4, and C_5 are essential. Let C be the set of all essential cuts. The WRP is reduced to
find the shortest route ρ such that ρ visits in order each cut in C (see Lemma 1, and also see [2] or [6]). Let S_C = {C_1, C_2, . . . , C_k} be the sorted set C such that v_i is a defining vertex of C_i, and the vertices v_i are located in anti-clockwise order around ∂Π, for i = 1, 2, . . . , k.

Definition 1. Let Π and S_C be the input of a watchman route problem. This problem is simplified iff, for each i ∈ {1, 2, . . . , k − 1}, for each p_i ∈ C_i and p_{i+1} ∈ C_{i+1} we have that p_i p_{i+1} ∩ ρ(v_{i+1}, Π, v_i) = ∅.

We list some results used in the rest of this paper. The first one is about "order", and the remaining three are about time complexity.

Lemma 1. ([10], Lemma 3.3) A solution to the watchman route problem (shortest tour) must visit the P_i's in the same order as it meets ∂Π.

Lemma 2. ([22], pages 639–641) There exists an O(n) time algorithm for calculating the shortest path between two points in a simple polygon.

Lemma 3. ([29]) The two straight lines, incident with a given point and tangent to a given convex polygon, can be computed in O(log n) time, where n is the number of vertices of the polygon.

Theorem 2. ([36], Theorem 1) Given a simple polygon Π, the set C of all essential cuts for the watchman routes in Π can be computed in O(n) time.
3 The Algorithms

3.1 A Local Solution for the Constrained TPP
The following procedure handles a degenerate case (see also Section 7.5 of [18]) of the rubberband algorithm, to be discussed later in this paper as “Algorithm 1”. Such a case may occur and should be dealt with when we apply Algorithm 1 to
Fig. 2. Examples for cuts and essential cuts
Fig. 3. Illustration for Procedure 1
the unconstrained TPP when the polygons are not necessarily pairwise disjoint. See Figure 3.

Procedure 1
Input: A point p and two polygons P_1 and P_2 such that p ∈ ∂P_1 ∩ ∂P_2.
Output: A point q ∈ ∂P_1 such that d_e(q, p) ≤ ε and q ∉ ∂P_2.
1. Let ε = 10^{-10} (the accuracy).
2. Find an edge e_j ∈ E(P_j) (i.e., the set of edges of P_j), where j = 1, 2, such that p ∈ e_1 ∩ e_2.
3. Let e_1 = q_1 q_2. Let q_3 and q_4 be two points on the two segments q_1 p and q_2 p, respectively (see Figure 3), such that d_e(q_j, p) ≤ ε and q_j ∉ ∂P_2, where j = 3, 4.
4. Let q = min{q_3, q_4} (with respect to lexicographic order).
5. Output q.

The following Procedure 2 is used in Procedure 3, which will be called in Algorithm 1 below.

Procedure 2
Input: Two polygons P_1 and P_2 and their fence F; two points p_i ∈ ∂P_i, where i = 1, 2, such that p_1 p_2 ∩ ∂F = {q_1, . . . , q_2}.
Output: The set of all vertices of the shortest path which starts at p_1, then visits ∂F, and finally ends at p_2 (not including p_1 and p_2).
Fig. 4. Illustration for Procedure 2
Fig. 5. Illustration for Procedure 3
1. Compute ρ(q_1, F, q_2), which is the subpath from q_1 to q_2 inside of F.
2. Apply the Melkman algorithm (see [20]) to compute the convex path from q_1 to q_2 inside of F, denoted by ρ(q_1, q_2).
3. Compute a tangent p_i t_i of p_i and ρ(q_1, q_2) such that p_i t_i ∩ F = t_i, where i = 1, 2 (see [29]).
4. Compute ρ(t_1, F, t_2), which is the subpath from t_1 to t_2 inside of F.
5. Output V(ρ(t_1, F, t_2)).

See Figure 4 for an example: V(ρ(q_1, F, q_2)) = {q_1, v_1, v_2, v_3, t_1, v_4, v_5, t_2, q_2} (i.e., the set of vertices of the path ρ(q_1, F, q_2)) and V(ρ(t_1, F, t_2)) = {t_1, v_4, v_5, t_2}. – The following Procedure 3 will be called in Step 4 of Algorithm 1 below.

Procedure 3
Input: Three polygons P_1, P_2 and P_3 in order, the fence of P_1 and P_2, denoted by F_{12}, the fence of P_2 and P_3, denoted by F_{23}, and three points p_i ∈ ∂P_i, where i = 1, 2, 3.
Output: The set of all vertices of the shortest path which starts at p_1, then visits P_2, and finally ends at p_3 (see Figure 5).
1.1. Let ε = 10^{-10} (the accuracy).
1.2. If (p_2 = p_1 ∧ p_2 ≠ p_3) ∨ (p_2 ≠ p_1 ∧ p_2 = p_3) ∨ (p_2 = p_1 ∧ p_2 = p_3), then apply Procedure 1 to compute a point to update p_2 such that p_2 ≠ p_1 and p_2 ≠ p_3.
1.3. Compute a point p'_2 ∈ ∂P_2 such that d_e(p_1, p'_2) + d_e(p'_2, p_3) = min{d_e(p_1, p') + d_e(p', p_3) : p' ∈ ∂P_2}.
2. Update V by letting p_2 = p'_2.
3. If p_1 p_2 ∩ F_{12} = ∅, then let V_{12} = ∅.
4. Otherwise suppose that p_1 p_2 ∩ F_{12} = {q_1, . . . , q_2}.
5. Use P_1, P_2, F_{12}, q_1, and q_2 as input for Procedure 2; the output equals V_{12}.
6. Analogously, compute a set V_{23} from p_2, p_3 and F_{23}, as follows:
6.1. If p_2 p_3 ∩ F_{23} = ∅, then let V_{23} = ∅.
6.2. Otherwise suppose that p_2 p_3 ∩ F_{23} = {q_2, . . . , q_3}.
6.3. Use P_2, P_3, F_{23}, q_2, and q_3 as input for Procedure 2; the output equals V_{23}.
7. Let V = {v_1} ∪ V_{12} ∪ {v_2} ∪ V_{23} ∪ {v_3}.
8. Find q'_1 and q'_3 ∈ V such that {q'_1, p_2, q'_3} is a subsequence of V.
9.1. If (p_2 = q'_1 ∧ p_2 ≠ q'_3) ∨ (p_2 ≠ q'_1 ∧ p_2 = q'_3) ∨ (p_2 = q'_1 ∧ p_2 = q'_3), then apply Procedure 1 to compute an update of the point p_2 such that p_2 ≠ q'_1 and p_2 ≠ q'_3.
9.2. Find a point p'_2 ∈ ∂P_2 such that d_e(q'_1, p'_2) + d_e(p'_2, q'_3) = min{d_e(q'_1, p') + d_e(p', q'_3) : p' ∈ ∂P_2}.
10. Update the set V by letting p_2 = p'_2.
11. Output V.

Note that in Steps 1.2 and 9.1, the updated point p_2 depends on the chosen value of ε. – The following algorithm is for the constrained TPP.

Algorithm 1
1. For each i ∈ {0, 1, . . . , k − 1}, let p_i be a point on ∂P_i.
2. Let V = {p_0, p_1, . . . , p_{k−1}}.
3. Calculate the perimeter L_0 of the polygon which has the set V of vertices.
4. For each i ∈ {0, 1, . . . , k − 1}, use P_{i−1}, P_i, P_{i+1} (mod k) and F_{i−1}, F_i (mod k) (note: these are the fences of P_{i−1} and P_i, and of P_i and P_{i+1} (mod k), respectively) as input for Procedure 3, to update p_i and output a set V_i.
5. Let V_1 = V and update V by replacing {p_{i−1}, . . . , p_i, . . . , p_{i+1}} by V_i.
6. Let V = {q_0, q_1, . . . , q_m}.
7. Calculate the perimeter L_1 of the polygon which has the set V of vertices.
8. If L_0 − L_1 > ε, then let L_0 = L_1, V = V_1, and go to Step 4. Otherwise, output the updated set V and (its) calculated length L_1.

If a given constrained TPP is not a simplified constrained TPP, then we can slightly modify Procedure 3: just replace "Procedure 2" in Step 5 of Procedure 3 by the algorithm in [22] on pages 639–641. In this way, Algorithm 1 will still work for solving a non-simplified constrained TPP locally.
3.2 Solution for the Watchman Route Problem
The algorithm for solving the simplified constrained TPP (or the "general" constrained TPP) implies the following algorithm for solving the simplified WRP (or the "general" WRP).

Algorithm 2
We modify Algorithm 1 at two places:
1. Replace polygon "P_j" by segment "s_j" in Steps 1 and 4, for j = i − 1, i, or i + 1.
2. Replace "F_{i−1}, F_i (mod k)" (the fences of P_{i−1} and P_i, and of P_i and P_{i+1}, all mod k, respectively) by the common polygon Π.
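For illustration, the skeleton of the rubberband iteration behind Algorithms 1 and 2 can be sketched as follows; it omits the fence and degenerate-case handling of Procedures 1–3 and uses a simple ternary search for the local step, so it is only an outline, not the authors' implementation.

```python
# Hedged sketch of the main rubberband loop: each route vertex is constrained
# to its segment (essential cut) and moved to minimize the distance to its
# neighbours until the route length improves by less than epsilon.
import numpy as np

def closest_on_segment(a, b, p_prev, p_next):
    """Point of segment ab minimizing |x - p_prev| + |x - p_next|, found by a
    ternary search over t in [0, 1] (the cost is convex in t)."""
    cost = lambda t: (np.linalg.norm(a + t * (b - a) - p_prev)
                      + np.linalg.norm(a + t * (b - a) - p_next))
    lo, hi = 0.0, 1.0
    for _ in range(100):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if cost(m1) < cost(m2):
            hi = m2
        else:
            lo = m1
    t = 0.5 * (lo + hi)
    return a + t * (b - a)

def rubberband(segments, eps=1e-10):
    """segments: list of (a_i, b_i) endpoint pairs of the cuts, in order."""
    pts = [0.5 * (a + b) for a, b in segments]           # Step 1: initialize
    length = lambda P: sum(np.linalg.norm(P[i] - P[(i + 1) % len(P)])
                           for i in range(len(P)))
    L0 = length(pts)
    while True:
        for i, (a, b) in enumerate(segments):            # Step 4: local moves
            pts[i] = closest_on_segment(a, b, pts[i - 1],
                                        pts[(i + 1) % len(pts)])
        L1 = length(pts)
        if L0 - L1 <= eps:                               # Step 8: stop test
            return pts, L1
        L0 = L1
```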
4 Proof of Correctness
A single iteration of a w-rubberband algorithm is a complete run through the main loop of the algorithm. – Let Π be a simple polygon.

Definition 2. Let P = (p_0, p_1, p_2, . . . , p_m, p_{m+1}) be a critical point tuple of Π. Using P as an initial point set, and n iterations of the w-rubberband algorithm, we get another critical point tuple of Π, say P' = (p'_0, p'_1, p'_2, . . . , p'_m, p'_{m+1}). The polygon with vertex set {p'_0, p'_1, p'_2, . . . , p'_m, p'_{m+1}} is called an nth polygon of Π, denoted by AESP_n(Π), or (for short) by AESP_n, where n = 1, 2, . . ..

Let p, q ∈ Π^•. Let d_{ESP}(Π, p, q) be the length of the shortest path between p and q inside of Π. Let AESP_n(Π) be an nth polygon fully contained inside of Π, where n ≥ 1. Let

AESP = lim_{n→∞} AESP_n(Π)

Let p_i(t_{i0}) be the i-th vertex of AESP, for i = 0, 1, . . ., or m + 1. Let

d_{ESP_i} = d_{ESP}(Π, p_{i−1}, p_i) + d_{ESP}(Π, p_i, p_{i+1}),

where i = 1, 2, . . ., or m. Let

d_{ESP}(t_0, t_1, . . . , t_m, t_{m+1}) = \sum_{i=1}^{m} d_{ESP_i}
Fig. 6. Illustration for Definition 3
Definition 3. Let s0 , s1 , s2 , . . . sm and sm+1 be a sequence of segments fully contained in Π and located around the frontier of a simple polygon Π, with points pi ∈ ∂si (see Figure 6),1 for i = 0, 1, 2, . . ., m or m + 1. We call the m + 2 tuple (p0 , p1 , p2 , . . ., pm , pm+1 ) a critical point tuple of Π. We call it an AESP critical point tuple of Π if it is the set of all vertices of an AESP of Π. 1
Possibly, the straight segment pi pi+1 is not fully contained in Π. In this case, replace pi pi+1 by the ESP(Π, pi , pi+1 ) between pi and pi+1 , where i = 1, 2, . . ., or m.
Definition 4. Let

∂d_{ESP}(t_0, t_1, . . . , t_m, t_{m+1}) / ∂t_i |_{t_i = t_{i0}} = 0,

for i = 0, 1, . . ., or m + 1. Then we say that (t_{00}, t_{10}, . . . , t_{m0}, t_{m+1,0}) is a critical point of d_{ESP}(t_0, t_1, . . . , t_m, t_{m+1}).

Definition 5. Let P = (p_0, p_1, p_2, . . . , p_m, p_{m+1}) be a critical point tuple of Π. Using P as an initial point set and n iterations of the w-rubberband algorithm, we calculate an n-w-rubberband transform of P, denoted by P −(w−r−b)^n→ Q, or P → Q for short, where Q is the resulting critical point tuple of Π, and n is a positive integer.

4.1 A Correctness Proof for Algorithm 2
We provide the mathematical fundamentals for our proof that Algorithm 2 is correct for any input (see Theorem 5 below). At first we recall some basic definitions and theorems from topology, using the book [23] as a reference for basic notions such as topology, topologic space, compact, cover, open cover, and so forth. The following is the well-known Heine-Borel theorem for E^n (see [23], Corollary 2.2 on page 128):

Theorem 3. A subset S of R^n is compact iff S is closed and bounded.

Now let X be a set, and let d be a metric on X × X defining a metric space (X, d), for example, such as the Euclidean space (R^n, d). A map f of a metric space (X, d) into a metric space (Y, e) is uniformly continuous iff, for each ε > 0, there is a δ > 0 such that e(f(x), f(y)) < ε, for any x, y ∈ X with d(x, y) < δ. Another well-known theorem (see [23], Theorem 3.2, page 84) is the following:

Theorem 4. Let f be a map of a metric space (X, d) into a metric space (Y, e). If f is continuous and X is compact, then f is uniformly continuous.

Now let s_0, s_1, s_2, . . ., s_m and s_{m+1} be a sequence of segments fully contained in a simple polygon Π, and located around the frontier of Π (see Figure 6). We express a point p_i(t_i) = (x_i + k_{x_i} t_i, y_i + k_{y_i} t_i, z_i + k_{z_i} t_i) on s_i in this general form, with t_i ∈ R, for i = 0, 1, . . ., or m + 1. In the following, p_i(t_i) will be denoted by p_i for short, where i = 0, 1, . . ., or m + 1.

Lemma 4. (t_{00}, t_{10}, . . . , t_{m0}, t_{m+1,0}) is a critical point of d_{ESP}(t_0, t_1, . . . , t_m, t_{m+1}).

Proof. d_{ESP}(t_0, t_1, . . . , t_m, t_{m+1}) is differentiable at each point (t_0, t_1, . . . , t_m, t_{m+1}) ∈ [0, 1]^{m+2}. Because AESP_n(Π) is an nth polygon of Π, where n = 1, 2, . . ., and

AESP = lim_{n→∞} AESP_n(Π),
it follows that d_{ESP}(t_{00}, t_{10}, . . . , t_{m0}, t_{m+1,0}) is a local minimum of d_{ESP}(t_0, t_1, . . . , t_m, t_{m+1}). By Theorem 9 of [18], ∂d_{ESP}/∂t_i = 0, for i = 0, 1, 2, . . ., m + 1.
Let s_0, s_1 and s_2 be three segments. Let p_i(p_{i1}, p_{i2}, p_{i3}) ∈ s_i, for i = 0, 1, 2.

Lemma 5. There exists a unique point p_1 ∈ s_1 such that

d_e(p_1, p_0) + d_e(p_1, p_2) = min{d_e(p', p_0) + d_e(p', p_2) : p' ∈ ∂s_1}.
Proof. We transform the segments from the original 2D coordinate system into a new 2D coordinate system such that s_1 is parallel to one axis, say, the x-axis. Then follow the proof of Lemma 16 of [18].

Lemma 5 and Algorithm 2 define a function f_w which maps [0, 1]^{m+2} into [0, 1]^{m+2}. A point p_i is represented as follows: (a_{i1} + (b_{i1} − a_{i1})t_i, a_{i2} + (b_{i2} − a_{i2})t_i, a_{i3} + (b_{i3} − a_{i3})t_i). Then, following the proof of Lemma 5, we obtain

Lemma 6. The function t_2 = t_2(t_1, t_3) is continuous at each (t_1, t_3) ∈ [0, 1]^2.

Lemma 7. If P −(w−r−b)^1→ Q, then, for every sufficiently small real ε > 0, there is a sufficiently small real δ > 0 such that P' ∈ U_δ(P) and P' −(w−r−b)^1→ Q' implies Q' ∈ U_ε(Q).

Proof. This lemma follows from Lemma 5; also note that Π has m + 2 segments, which means we apply Lemma 6 repeatedly, m + 2 times.

By Lemma 7, we have the following

Lemma 8. If P −(w−r−b)^n→ Q, then, for every sufficiently small real ε > 0, there is a sufficiently small real δ_ε > 0 and a sufficiently large integer N_ε such that P' ∈ U_{δ_ε}(P) and P' −(w−r−b)^n→ Q' implies that Q' ∈ U_ε(Q), where n is an integer and n > N_ε.

Lemma 9. The function f_w is uniformly continuous in [0, 1]^{m+2}.

Proof. By Lemma 8, f_w is continuous in [0, 1]^{m+2}. Since [0, 1]^{m+2} is a compact and bounded set, by Theorem 4, f_w is uniformly continuous in [0, 1]^{m+2}.
By Lemma 9, we are now able to construct an open cover of [0, 1]^{m+2} as follows:

1. For each t_i ∈ [0, 1]:
   1.1. Let P = (p_0(t_0), p_1(t_1), p_2(t_2), ..., p_m(t_m), p_{m+1}(t_{m+1})) be a critical point tuple such that p_i(t_i) ∈ s_i, for i = 0, 1, 2, ..., m + 1.
   1.2. By Lemma 9, there exists a δ > 0 such that, for each Q ∈ U_δ(P) ∩ [0, 1]^{m+2}, it is true that f_w(P) = Q.
   1.3. Let U′_δ(P) = U_δ(P) ∩ [0, 1]^{m+2}.
   1.4. Let S_δ = {U′_δ(P) : t_i ∈ [0, 1]}.
By Theorem 3, there exists a finite subset of S_δ, denoted by S′_δ, such that, for each t_i ∈ [0, 1] and each critical point tuple

P = (p_0(t_0), p_1(t_1), p_2(t_2), ..., p_m(t_m), p_{m+1}(t_{m+1})),

there exists U′_δ(P) ∈ S′_δ such that P ∈ U′_δ(P). This proves the following:

Lemma 10. The number of critical points of d_ESP(t_0, t_1, ..., t_m, t_{m+1}) in [0, 1]^{m+2} is finite.

Furthermore, in analogy to the proof of Lemma 24 of [18], we also have the following:

Lemma 11. Π has a unique AESP critical point tuple.

Analogously to the proof of Theorem 11 of [18], we obtain

Theorem 5. The AESP of Π is the shortest watchman route of Π.

Corollary 1. For each ε > 0, the w-rubberband algorithm computes an approximative solution ρ′ to a WRP such that |l(ρ′) − l(ρ)| < ε, where ρ is the true solution to the same WRP.

Corollary 2. The WRP has a unique solution.

Obviously, Corollary 2 is a stronger result than Theorem 1.
4.2 Another Correctness Proof for Algorithm 2
Let P_0, P_1, ..., P_{k−1} be a sequence of simple polygons (not necessarily convex).

Theorem 6. For the constrained TPP, the number of local minima is finite.
Proof. Let F_i be a fence of P_i and P_{i+1} (mod k), for i = 0, 1, ..., k − 1. For each segment s, let ∂s = {p : p ∈ s} (i.e., ∂s = s). For each polygon P, let ∂P = {p : p ∈ e ∧ e ∈ E(P)}, where E(P) is the set of all edges of P. Let (as default)

∏_{i=0}^{k−1} ∂P_i = ∂P_0 × ∂P_1 × ··· × ∂P_{k−1}

For each p_i ∈ ∂P_i, there exists a unique constrained ESP ρ(p_i q_{i1} q_{i2} ··· q_{im_i} p_{i+1}) ⊂ F_i^• such that q_{ij} ∈ V(F_i), where j = 0, 1, ..., m_i, m_i ≥ 0, and i = 0, 1, ..., k − 1. For a set S, let ℘(S) be the power set of S. We define a map f from

∏_{i=0}^{k−1} ∂P_i   to   ∏_{i=0}^{k−1} ℘(V(F_i))

such that

f : (p_0, p_1, ..., p_{k−1}) → ⋃_{i=0}^{k−1} {q_{i1}, q_{i2}, ..., q_{im_i}}

In general, for a map g : A → B let

Im(g) = {b : b ∈ B ∧ ∃ a [a ∈ A ∧ b = g(a)]}

For each subset B_1 ⊆ B, let

g^{−1}(B_1) = {a : a ∈ A ∧ ∃ b [b ∈ B_1 ∧ b = g(a)]}

Since V(F_i) is a finite set, Im(f) is a finite set as well. Let

Im(f) = {S_1, S_2, ..., S_m}   (m ≥ 1)

Then we have that

⋃_{i=1}^{m} f^{−1}(S_i) = ∏_{i=0}^{k−1} ∂P_i

In other words, ∏_{i=0}^{k−1} ∂P_i can be partitioned into m pairwise disjoint subsets

f^{−1}(S_1), f^{−1}(S_2), ..., f^{−1}(S_m)

For each constrained ESP

ρ(p_0 q_{01} q_{02} ··· q_{0m_0} p_1 ··· p_i q_{i1} q_{i2} ··· q_{im_i} p_{i+1} ··· p_{k−1} q_{k−1,1} q_{k−1,2} ··· q_{k−1,m_{k−1}} p_0)

there exists an index i ∈ {1, 2, ..., m} such that (p_0, p_1, ..., p_{k−1}) ∈ f^{−1}(S_i) and

⋃_{i=0}^{k−1} {q_{i1}, q_{i2}, ..., q_{im_i}} ⊆ S_i

On the other hand, analogously to the proof of Lemma 24 of [18], for each f^{−1}(S_i) there exists a unique constrained ESP. This proves the theorem.

By Theorems 6 and 1, we have a second and shorter proof for Corollary 2, which implies that Algorithm 2 computes an approximative and global solution to a WRP.
5 Time Complexity

5.1 Constrained TPP
Lemma 12. Procedure 1 can be computed in O(|E(P_1)| + |E(P_2)|) time.

Proof. Steps 1 and 5 only need constant time. Step 2 can be computed in time O(|E(P_1)| + |E(P_2)|), Step 3 in time O(|E(P_2)|), and Step 4 in time O(1). Therefore, Procedure 1 can be computed in O(|E(P_1)| + |E(P_2)|) time.

Lemma 13. Procedure 2 can be computed in O(|V(ρ(q_1, F, q_2))|) time.

Proof. Step 1 can be computed in O(|V(ρ(q_1, F, q_2))|) time. According to [20], Step 2 can be computed in O(|V(ρ(q_1, F, q_2))|) time. By Lemma 3, Step 3 can be computed in O(log |V(ρ(q_1, F, q_2))|) time. Steps 4 and 5 can be computed in O(|V(ρ(t_1, F, t_2))|) time. Altogether, the time complexity of Procedure 2 is equal to O(|V(ρ(q_1, F, q_2))|).

Lemma 14. Procedure 3 can be computed in time

O(|E(P_1)| + 2|E(P_2)| + |E(P_3)| + |E(F_12)| + |E(F_23)|)

Proof. Step 1.1 requires only constant time. By Lemma 12, Steps 1.2 and 9.1 can be computed in time O(|E(P_1)| + 2|E(P_2)| + |E(P_3)|). Steps 1.3 and 9.2 can be computed in O(|E(P_2)|), and Steps 2 and 10 in O(1) time. Steps 3–4 can be computed in time O(|E(F_12)|). By Lemma 13, Step 5 can be computed in time O(|V(ρ(q_1, F_12, q_2))|) = O(|V_12|). Since |V_12| ≤ |E(F_12)|, Steps 3–5 can be computed in O(|E(F_12)|) time. Step 6 can be computed in O(|E(F_23)|), and Steps 7, 8, and 11 in O(|V_12|) + O(|V_23|) ≤ O(|E(F_12)|) + O(|E(F_23)|) time. Therefore, Procedure 3 can be computed in O(|E(P_1)| + 2|E(P_2)| + |E(P_3)| + |E(F_12)| + |E(F_23)|) time. This proves the lemma.
Lemma 15. Algorithm 1 can be computed in time κ(ε) · O(n), where n is the total number of vertices of the polygons involved, and κ(ε) = (L_0 − L_1)/ε.

Proof. Steps 1–3 can be computed in O(k) time. By Lemma 14, each iteration in Step 4 can be computed in time

O(∑_{i=0}^{k−1} (|E(P_{i−1})| + 2|E(P_i)| + |E(P_{i+1})| + |E(F_{i−1})| + |E(F_i)|))

Steps 5 and 8 can be computed in O(k), and Steps 6 and 7 in O(|V|) time. Note that

|V| ≤ ∑_{i=0}^{k−1} (|V(P_i)| + |V(F_i)|)

with |V(P_i)| = |E(P_i)| and |V(F_i)| = |E(F_i)|, where i = 0, 1, ..., k − 1. Thus, each iteration (Steps 4–8) in Algorithm 1 can be computed in time

O(∑_{i=0}^{k−1} (|E(P_{i−1})| + 2|E(P_i)| + |E(P_{i+1})| + |E(F_{i−1})| + |E(F_i)|))

Therefore, Algorithm 1 can be computed in

κ(ε) · O(∑_{i=0}^{k−1} (|E(P_{i−1})| + 2|E(P_i)| + |E(P_{i+1})| + |E(F_{i−1})| + |E(F_i)|))

time, where each index is taken mod k. This time complexity is equivalent to κ(ε) · O(n), where n is the total number of vertices of all polygons.

Lemma 15 allows us to conclude the following

Theorem 7. The simplified constrained TPP can be solved locally and approximately in κ(ε) · O(n) time, where n is the total number of vertices of the involved polygons.

At the end of this subsection, we finally discuss the case when the constrained TPP is not simplified. In this case, we replace "Procedure 2" in Step 5 of Procedure 3 by the algorithm in [22], pages 639–641. Then, by Lemma 2 and the proof of Lemma 14, Steps 3–5 can still be computed in O(|E(F_12)|) time, and Step 6 can still be computed in O(|E(F_23)|) time. All the other steps are analyzed in exactly the same way as those in the proof of Lemma 14. Therefore, the modified Procedure 3 has the same time complexity as the original procedure. Thus, we have the following

Theorem 8. The constrained TPP can be solved locally and approximately in κ(ε) · O(n) time, where n is the total number of vertices of the involved polygons.

According to the following theorem, Theorem 8 is in some sense the best possible result:

Theorem 9. ([13], Theorem 6) The touring polygons problem (TPP) is NP-hard, for any Minkowski metric L_p (p ≥ 1), in the case of nonconvex polygons P_i, even in the unconstrained (F_i = R^2) case with obstacles bounded by edges having angles 0, 45, or 90 degrees with respect to the x-axis.
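For intuition about the κ(ε) factor appearing in Lemma 15 and Theorems 7–8, the following is a hedged sketch of the generic outer-loop pattern of a rubberband algorithm (the update step is left abstract; this is not the authors' Algorithm 1): each full pass must shorten the route by at least ε for the iteration to continue, so at most (L_0 − L_1)/ε passes are executed.

```python
def rubberband_outer_loop(route, update_pass, length, eps):
    """Generic outer loop: repeat full update passes until the route length
    improves by less than eps.  With L0 the initial and L1 the final length,
    at most (L0 - L1) / eps passes are executed, which is the kappa(eps)
    factor in the stated time bounds.

    `route` is a list of points, `update_pass` maps a route to an improved
    route (one sweep of local updates), `length` returns the route length.
    """
    L_prev = length(route)
    passes = 0
    while True:
        route = update_pass(route)
        L_curr = length(route)
        passes += 1
        if L_prev - L_curr < eps:
            break
        L_prev = L_curr
    return route, passes

# Toy usage: shrink a one-number "route" toward zero until the gain is tiny.
route, n_passes = rubberband_outer_loop(
    [10.0], lambda r: [0.9 * r[0]], lambda r: r[0], eps=0.01)
print(route, n_passes)
```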
5.2 Watchman Route Problem
We consider the time complexity of Algorithm 2 for solving the simplified watchman route problem.

Lemma 16. Consider Procedure 3. If P_i is an essential cut, for i = 1, 2, 3, and F_12 = F_23 = Π, then this procedure can be computed in time O(|V(ρ(v_1, Π, v_3))|), where v_i is the defining vertex of the essential cut P_i, for i = 1, 3.

Proof. Step 1.1 requires constant time only. By Lemma 12, Steps 1.2 and 9.1 can be computed in O(|E(P_1)| + 2|E(P_2)| + |E(P_3)|) = O(1) time. Steps 1.3 and 9.2 can be computed in O(|E(P_2)|) = O(1) time. Steps 2 and 10 can be computed in constant time. Steps 3–4 can be computed in time O(|E(ρ(v_1, Π, v_2))|), where v_i is the defining vertex of the essential cut P_i, for i = 1, 2. By Lemma 13, Step 5 can be computed in O(|V(ρ(v_1, Π, v_2))|) time. Thus, Steps 3–5 can be computed in O(|V(ρ(v_1, Π, v_2))|) time. Analogously, Step 6 can be computed in time O(|V(ρ(v_2, Π, v_3))|), where v_i is the defining vertex of the essential cut P_i, for i = 2, 3. Steps 7, 8, and 11 can be computed in O(|V(ρ(v_1, Π, v_2))|) + O(|V(ρ(v_2, Π, v_3))|) = O(|V(ρ(v_1, Π, v_3))|) time. Therefore, the time complexity of Procedure 3 is equal to O(|V(ρ(v_1, Π, v_3))|).

Lemma 17. Consider Algorithm 1. If P_i is an essential cut, and Q_i = Π, for i = 0, 1, ..., k − 1, then this algorithm can be computed in time κ(ε) · O(n + k), where n is the number of vertices of Π, and k is the number of essential cuts.

Proof. Steps 1–3 can be computed in O(k) time. By Lemma 16, each iteration in Step 4 can be computed in time

O(∑_{i=0}^{k−1} |V(ρ(v_{i−1}, Π, v_{i+1}))|)

where v_i is the defining vertex of the essential cut P_i, for i = 0, 1, ..., k − 1. This time complexity is actually equivalent to O(|V(Π)|). Steps 5 and 8 can be computed in O(k), and Steps 6 and 7 in O(|V|) time. Note that |V| ≤ k + |V(Π)| and |V(P_i)| = 2, where i = 0, 1, ..., k − 1, and

∑_{i=0}^{k−1} |V(ρ(v_{i−1}, Π, v_{i+1}))| = 2|E(Π)| = 2|V(Π)|.

Thus, each iteration (Steps 4–8) in Algorithm 1 can be computed in time O(n + k). Therefore, Algorithm 1 can be computed in time κ(ε) · O(n + k), where n is the total number of vertices of Π, and k is the number of essential cuts.

Lemma 17 allows us to conclude the following

Theorem 10. The simplified WRP can be solved approximately in κ(ε) · O(n + k) time, where n is the number of vertices of polygon Π, and k is the number of essential cuts.

If the WRP is not simplified, then we have the following
Lemma 18. Consider Algorithm 1. If P_i is an essential cut, and F_i = Π, for i = 0, 1, ..., k − 1, then this algorithm can be computed in κ(ε) · O(kn) time, where n is the number of vertices of Π, and k is the number of essential cuts.

Proof. Steps 1–3 can be computed in O(k) time. By Lemma 16, each iteration in Step 4 can be computed in

O(∑_{i=0}^{k−1} (|E(P_{i−1})| + 2|E(P_i)| + |E(P_{i+1})| + |E(F_{i−1})| + |E(F_i)|))

time, which is equivalent to the asymptotic class O(k|E(Π)|), because |E(P_{i−1})| = |E(P_i)| = |E(P_{i+1})| = 1 and |E(F_{i−1})| = |E(F_i)| = |E(Π)| = |V(Π)|. Steps 5 and 8 can be computed in O(k), and Steps 6 and 7 in O(|V|) ≤ O(|V(Π)|) time. Thus, each iteration (Steps 4–8) in Algorithm 1 can be computed in O(kn) time. Therefore, Algorithm 1 can be computed in κ(ε) · O(kn) time, where n is the total number of vertices of Π, and k is the number of essential cuts.
6 Conclusions
Lemma 18 and Theorem 8 allow us to conclude the following

Theorem 11. The general WRP can be solved approximately in κ(ε) · O(kn) time, where n is the number of vertices of polygon Π, k is the number of essential cuts, and κ(ε) = (L_0 − L_1)/ε, where L_0 is the length of the initial route and L_1 that of the route calculated by the algorithm.

The subprocesses of Algorithms 1 and 2 (as typical for rubberband algorithms in general) are also numerically stable. The proposed algorithms can be recommended for the optimization of visual inspection programs.
References

1. Alsuwaiyel, M.H., Lee, D.T.: Minimal link visibility paths inside a simple polygon. Computational Geometry 3, 1–25 (1993)
2. Alsuwaiyel, M.H., Lee, D.T.: Finding an approximate minimum-link visibility path inside a simple polygon. Information Processing Letters 55, 75–79 (1995)
3. Arkin, E.M., Mitchell, J.S.B., Piatko, C.: Minimum-link watchman tours. Technical report, University at Stony Brook (1994)
4. Asano, T., Ghosh, S.K., Shermer, T.C.: Visibility in the plane. In: Sack, J.-R., Urrutia, J. (eds.) Handbook of Computational Geometry, pp. 829–876. Elsevier, Amsterdam (2000)
5. Carlsson, S., Jonsson, H., Nilsson, B.J.: Optimum guard covers and m-watchmen routes for restricted polygons. In: Dehne, F., Sack, J.-R., Santoro, N. (eds.) WADS 1991. LNCS, vol. 519, pp. 367–378. Springer, Heidelberg (1991)
6. Carlsson, S., Jonsson, H., Nilsson, B.J.: Optimum guard covers and m-watchmen routes for restricted polygons. Int. J. Comput. Geom. Appl. 3, 85–105 (1993)
7. Carlsson, S., Jonsson, H.: Computing a shortest watchman path in a simple polygon in polynomial-time. In: Sack, J.-R., et al. (eds.) WADS 1995. LNCS, vol. 955, pp. 122–134. Springer, Heidelberg (1995)
8. Carlsson, S., Jonsson, H., Nilsson, B.J.: Approximating the shortest watchman route in a simple polygon. Technical report, Lund University (1997)
9. Carlsson, S., Jonsson, H., Nilsson, B.J.: Finding the shortest watchman route in a simple polygon. Discrete Computational Geometry 22, 377–402 (1999)
10. Chin, W., Ntafos, S.: Optimum watchman routes. Information Processing Letters 28, 39–44 (1988)
11. Chin, W.-P., Ntafos, S.: Shortest watchman routes in simple polygons. Discrete Computational Geometry 6, 9–31 (1991)
12. Czyzowicz, J., et al.: The aquarium keeper's problem. In: Proc. ACM-SIAM Sympos. Data Structures Algorithms, pp. 459–464 (1991)
13. Dror, M., et al.: Touring a sequence of polygons. In: Proc. STOC, pp. 473–482 (2003)
14. Gewali, L.P., et al.: Path planning in 0/1/infinity weighted regions with applications. ORSA J. Comput. 2, 253–272 (1990)
15. Gewali, L.P., Ntafos, S.: Watchman routes in the presence of a pair of convex polygons. In: Proc. Canad. Conf. Comput. Geom., pp. 127–132 (1995)
16. Hammar, M., Nilsson, B.J.: Concerning the time bounds of existing shortest watchman routes. In: Chlebus, B.S., Czaja, L. (eds.) FCT 1997. LNCS, vol. 1279, pp. 210–221. Springer, Heidelberg (1997)
17. Kumar, P., Veni Madhavan, C.: Shortest watchman tours in weak visibility polygons. In: Proc. Canad. Conf. Comput. Geom., pp. 91–96 (1995)
18. Li, F., Klette, R.: Exact and approximate algorithms for the calculation of shortest paths. IMA Minneapolis, Report 2141, www.ima.umn.edu/preprints/oct2006
19. Mata, C., Mitchell, J.S.B.: Approximation algorithms for geometric tour and network design problems. In: Proc. Annu. ACM Sympos. Comput. Geom., pp. 360–369 (1995)
20. Melkman, A.: On-line construction of the convex hull of a simple polygon. Information Processing Letters 25, 11–12 (1987)
21. Mitchell, J.S.B., Wynters, E.L.: Watchman routes for multiple guards. In: Proc. Canad. Conf. Comput. Geom., pp. 126–129 (1991)
22. Mitchell, J.S.B.: Geometric shortest paths and network optimization. In: Sack, J.-R., Urrutia, J. (eds.) Handbook of Computational Geometry, pp. 633–701. Elsevier, Amsterdam (2000)
23. Moore, T.O.: Elementary General Topology. Prentice-Hall, Englewood Cliffs (1964)
24. Nilsson, B.J., Wood, D.: Optimum watchmen routes in spiral polygons. In: Proc. Canad. Conf. Comput. Geom., pp. 269–272 (1990)
25. Nilsson, B.J.: Guarding art galleries; Methods for mobile guards. Ph.D. Thesis, Lund University, Sweden (1995)
26. Ntafos, S.: The robber route problem. Inform. Process. Lett. 34, 59–63 (1990)
27. Ntafos, S.: Watchman routes under limited visibility. Comput. Geom. 1, 149–170 (1992)
28. Ntafos, S., Gewali, L.: External watchman routes. Visual Comput. 10, 474–483 (1994)
29. Sunday, D.: Algorithm 14: Tangents to and between polygons (last visit June 2006), http://softsurfer.com/Archive/algorithm_0201/
30. Tan, X., Hirata, T., Inagaki, Y.: An incremental algorithm for constructing shortest watchman route algorithms. Int. J. Comp. Geom. and Appl. 3, 351–365 (1993)
206
F. Li and R. Klette
31. Tan, X., Hirata, T.: Constructing shortest watchman routes by divide-and-conquer. In: Ng, K.W., et al. (eds.) ISAAC 1993. LNCS, vol. 762, pp. 68–77. Springer, Heidelberg (1993)
32. Tan, X., Hirata, T., Inagaki, Y.: Corrigendum to 'An incremental algorithm for constructing shortest watchman routes'. Int. J. Comp. Geom. App. 9, 319–323 (1999)
33. Tan, X.: Fast computation of shortest watchman routes in simple polygons. Information Processing Letters 77, 27–33 (2001)
34. Tan, X.: Approximation algorithms for the watchman route and zookeeper's problems. In: Wang, J. (ed.) COCOON 2001. LNCS, vol. 2108, pp. 201–206. Springer, Heidelberg (2001)
35. Tan, X.: Approximation algorithms for the watchman route and zookeeper's problems. Discrete Applied Mathematics 136, 363–376 (2004)
36. Tan, X.: Linear-time 2-approximation algorithm for the watchman route problem. In: Cai, J.-Y., Cooper, S.B., Li, A. (eds.) TAMC 2006. LNCS, vol. 3959, pp. 181–191. Springer, Heidelberg (2006)
37. Urrutia, J.: Geometric shortest paths and network optimization. In: Sack, J.-R., Urrutia, J. (eds.) Handbook of Computational Geometry, pp. 973–1027. Elsevier, Amsterdam (2000)
Bird’s-Eye View Vision System for Vehicle Surrounding Monitoring Yu-Chih Liu, Kai-Ying Lin, and Yong-Sheng Chen Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan
Abstract. Blind spots usually lead to difficulties for drivers to maneuver their vehicles in complicated environments, such as garages, parking spaces, or narrow alleys. This paper presents a vision system which can assist drivers by providing a panoramic image of the vehicle surroundings in a bird's-eye view. In the proposed system, six fisheye cameras are mounted around a vehicle so that their views cover the whole surrounding area. The parameters of these fisheye cameras were calibrated beforehand so that the captured images can be dewarped into perspective views for integration. Instead of error-prone stereo matching, overlapping regions of adjacent views are stitched together by aligning along a seam with a dynamic programming method, followed by propagating the deformation field of the alignment with Wendland functions. In this way the six fisheye images can be integrated into a single, panoramic, and seamless one from a look-down viewpoint. Our experiments clearly demonstrate the effectiveness of the proposed image-stitching method for providing a bird's-eye view for vehicle surrounding monitoring.
1 Introduction
The major aim of intelligent driving assistant systems is collision avoidance. These systems achieve this goal by providing drivers with warning signals or visual information so that drivers can keep safe distances to obstacles. This kind of system is very helpful, particularly when drivers want to maneuver their cars into garages, back into parking spaces, or drive in narrow alleys. Even though side-view and rear-view mirrors can help the driver to see areas behind, the ground areas all around the vehicle remain difficult to see. Recently, rear-view cameras have become more and more popular because of their capability to reduce invisible areas behind the car. This motivates us to construct a vision system that can provide the driver with an image covering all the vehicle surroundings without blind spots. Consider a bird's-eye view vision system that displays the image all around a vehicle from a look-down viewpoint. This kind of system can be easily constructed by placing a camera facing down on the top of the vehicle. However, this
This work was supported in part by the Ministry of Economic Affairs, Taiwan, under Grant 96-EC-17-A-02-S1-032. Corresponding author.
Fig. 1. The proposed vehicle surrounding monitoring system using multiple cameras. (a) Images I1 , I2 , ..., and I6 captured from the corresponding cameras C1 , C2 , ..., and C6 , respectively. (b) The bird’s-eye view image synthesized from the six fisheye images.
approach is barely feasible because it is very dangerous to hang a camera high above a vehicle. The Matsushita company proposed an image synthesis display system [1] which uses several cameras around the vehicle and synthesizes the images captured from these cameras into a whole picture. Because they simply average the image pixel values in the overlapped regions, the ghost artifact is severe and the driver cannot know the actual location of the objects. Ehlgen and Pajdla mounted four omnidirectional cameras on a truck and found several ways of subdividing the overlap regions to construct the bird's-eye view image [2]. No matter how the subdivision is chosen, there are some blind spots at the subdivision boundaries. In this paper we propose a novel method of synthesizing a seamless bird's-eye view image of vehicle surroundings from six fisheye cameras mounted around the vehicle. As shown in Fig. 1(a), these six fisheye cameras cover the whole surrounding area and have some overlapped areas between adjacent cameras. Our goal is to stitch the six distorted images captured from the fisheye cameras and provide an image of vehicle surroundings from a bird's-eye viewpoint, as shown in Fig. 1(b).
2 Vehicle Surrounding Monitoring System
The flowchart of the proposed vehicle surrounding monitoring system is depicted in Fig. 2. We first calibrate the fisheye cameras to obtain their intrinsic parameters. Then these parameters can be used for correcting distortion and transforming the fisheye images into perspective views. These perspective images are registered to a ground plane by using planar homographies. The objects coplanar with the ground plane can be registered perfectly in this way. For 3-D objects, a more advanced registration process is necessary. We select a seam between each pair of adjacent images according to the residual error obtained in the previous calibration
Bird’s-Eye View Vision System for Vehicle Surrounding Monitoring
209
Fig. 2. Flowchart of the proposed vehicle surrounding monitoring system
step and register image points on this seam by using a dynamic programming approach. Then we use Wendland functions to smoothly propagate the registration results from the seams to the other areas of the whole synthesized image. Finally, we use weighted blending to reduce brightness discrepancy and obtain the seamless bird's-eye view image. In the following we present these procedures in detail.
2.1 Fisheye Camera Calibration
Here we propose a simple and accurate method for calibrating fisheye cameras, which only requires the fisheye camera to observe a planar pattern in a few orientations. We use the Field of View (FOV) model [3] for the radial distortion introduced by fisheye lenses, based on the principle of fisheye projection. With the initial values of the radial parameters as well as the other intrinsic parameters, we can dewarp a fisheye image into a perspective one for a virtual perspective camera. Zhang's method [4] is then used to calibrate the camera parameters of the virtual perspective camera, and the residual error between the model-predicted and the measured positions of the feature points on the planar pattern can be regarded as an evaluation criterion of the accuracy of the parameters in the FOV model. The Levenberg-Marquardt method [5] is then applied to optimize the values of these parameters by minimizing the total residual error of all feature
points. Finally, we rectify the fisheye images into perspective ones by using the FOV model with the estimated parameter values.
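To illustrate the FOV distortion model used above, here is a minimal sketch (ours, not the authors' code) of the radial mapping of the Devernay–Faugeras FOV model and its inverse; the single parameter ω and the use of normalized image coordinates are assumptions of this sketch.

```python
import numpy as np

def fov_distort(r_u, omega):
    """FOV model: undistorted radius r_u -> distorted radius r_d."""
    return np.arctan(2.0 * r_u * np.tan(omega / 2.0)) / omega

def fov_undistort(r_d, omega):
    """Inverse FOV model: distorted radius r_d -> undistorted radius r_u."""
    return np.tan(r_d * omega) / (2.0 * np.tan(omega / 2.0))

def dewarp_point(x_d, y_d, omega):
    """Map a distorted (normalized) image point to its perspective position."""
    r_d = np.hypot(x_d, y_d)
    if r_d < 1e-12:
        return x_d, y_d
    scale = fov_undistort(r_d, omega) / r_d
    return x_d * scale, y_d * scale

# Round-trip check for a sample point and an omega of roughly 140 degrees.
omega = np.deg2rad(140.0)
x, y = dewarp_point(0.3, -0.2, omega)
print(x, y, fov_distort(np.hypot(x, y), omega))  # last value ~ hypot(0.3, -0.2)
```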
2.2 Planar Image Alignment with Homography
Since the goal of this work is to create a whole bird's-eye view of the monitored area, we need to stitch together the images captured by all the fisheye cameras around the vehicle, after the fisheye images are rectified into perspective ones with the calibrated camera parameters. Suppose the partial scene image I_i is captured by the camera C_i. By specifying feature points in a reference image I_r of the ground plane and their corresponding points in image I_i, we can establish a homography H_ri relating the homogeneous coordinate p_i of each point in I_i and the coordinate p_r of its corresponding point in I_r:

p_r = H_ri p_i .   (1)
In this way all images can be transferred into the same coordinate system and then be registered together. This works well for stitching the images of objects coplanar with the ground plane, since the perspective projection is preserved for objects located on the ground plane in world coordinates. On the other hand, it is problematic to stitch the images of objects not coplanar with the ground plane, since they are not consistent with the homography obtained with respect to the ground plane. Therefore, a more advanced registration process is necessary to deal with this problem. Hereinafter, we call the objects not coplanar with the ground plane "the non-ground-level objects."
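A minimal sketch of applying Eq. (1) to map points of a rectified camera image into the common ground-plane (reference) frame; the homography entries below are placeholders, and in practice H_ri would be estimated from at least four point correspondences.

```python
import numpy as np

def warp_points(H_ri, pts_i):
    """Map Nx2 points from image I_i into the reference frame I_r via p_r ~ H_ri p_i."""
    pts_h = np.hstack([pts_i, np.ones((len(pts_i), 1))])   # to homogeneous coordinates
    mapped = (H_ri @ pts_h.T).T                             # apply the homography
    return mapped[:, :2] / mapped[:, 2:3]                   # back to inhomogeneous

# Placeholder homography (identity plus a small projective term), example points.
H_ri = np.array([[1.0,  0.02,  5.0],
                 [0.01, 1.0,  -3.0],
                 [1e-4, 0.0,   1.0]])
print(warp_points(H_ri, np.array([[100.0, 50.0], [320.0, 240.0]])))
```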
2.3 Optimal Seam Selection
Once all images have been registered on a planar surface, the next step is to decide which pixels should contribute to the final composite. We want to create a composite with as few artifacts as possible. Toward this goal, we select an optimal seam which lessens mis-registration; this involves deciding the placement of the seam so as to avoid discontinuities along the seam and to provide a smooth transition between adjacent images. In our application, the source images are captured by fisheye cameras and have severe radial distortion. The accuracy of the camera parameters directly affects the image alignment result. Therefore, we use this property to select the seams. However, we can obtain the residual error only for the feature points used in the camera calibration process. In order to obtain a pixel-by-pixel measurement, which usually increases with the radial distance, we use a quadratic function E_i to model the distribution of the residual error over the whole image captured by camera C_i:

E_i(x, y) = a_i x^2 + b_i xy + c_i y^2 + d_i x + e_i y + f_i ,   (2)

where a_i, b_i, c_i, d_i, e_i, and f_i are the six coefficients of the quadratic function, which can be estimated by using the least squares approach. We can formulate
Bird’s-Eye View Vision System for Vehicle Surrounding Monitoring
211
the cost function for the optimal seam selection according to the residual error as follows:

C(s_ij) = ∑_p E_{L(s_ij, p)}(H_{L(s_ij, p) r} p) ,   (3)

where p is a pixel in the composite, and s_ij is a seam in the overlapped region of the registered images I_i and I_j which divides the whole image into part i and part j, labelled by L(s_ij). The error map E_i is modelled by a strictly monotonically increasing function, hence we can obtain an optimal seam through a greedy approach. For this reason, the optimal seam selection can be reduced to a pixel selection problem. The labelling of the source image for the pixel p can be decided by

L(s_ij, p) = arg min_k E_k(H_kr p) .   (4)
In this manner, we can create the image mosaic by choosing, for each pixel of the mosaic, the point with the least residual error from among the possible viewpoints. Once every pixel in the composite has decided which source image it comes from, the optimal seam can be determined. We can stitch images along these optimal seams to create a composite that has less mis-registration caused by the distortion.
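The per-pixel labelling of Eq. (4) can be sketched as follows (a simplified stand-in for the authors' implementation): every composite pixel is assigned to the camera whose quadratic residual-error model of Eq. (2), evaluated at the pixel's position in that camera image, is smallest. The homographies here map composite pixels into the individual camera images and, like the error coefficients, are placeholders.

```python
import numpy as np

def quad_error(coeffs, x, y):
    """Quadratic residual-error model E_i(x, y) of Eq. (2)."""
    a, b, c, d, e, f = coeffs
    return a * x * x + b * x * y + c * y * y + d * x + e * y + f

def label_pixels(composite_shape, homographies, error_coeffs):
    """Assign each composite pixel p to argmin_k E_k(H_kr p), as in Eq. (4)."""
    h, w = composite_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])     # homogeneous pixels
    costs = []
    for H, coeffs in zip(homographies, error_coeffs):
        q = H @ pts
        qx, qy = q[0] / q[2], q[1] / q[2]                        # position in camera k
        costs.append(quad_error(coeffs, qx, qy))
    return np.argmin(np.stack(costs), axis=0).reshape(h, w)      # label map

# Toy usage: two cameras with identity homographies and different error levels.
labels = label_pixels((4, 6), [np.eye(3), np.eye(3)],
                      [(0, 0, 0, 0, 0, 1.0), (0, 0, 0, 0, 0, 0.5)])
print(labels)  # camera 1 (constant error 0.5) wins everywhere
```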
2.4 Seam Registration Using Dynamic Image Warping
Since we use the optimal seam approach proposed in the previous section to create the composite, the blurring, ghosting, and artifacts (for example, mis-registration and exposure differences) can only lie along the seam. For a smooth transition from one view to another, we only need to find the right correspondences on the seam. Dynamic time warping (DTW) is an efficient technique for solving this problem. Instead of aligning data sequences in the temporal domain, we register the seams by aligning the pixel data of two seams in the spatial domain. Suppose we have two images I_a and I_b to be stitched together on the seam S_ab. Since the seam S_ab lies in the overlapped region of I_a and I_b, we can obtain two series of pixel data A and B which come from I_a and I_b along S_ab:

A = a_1, a_2, ..., a_i, ..., a_n   (5)
B = b_1, b_2, ..., b_j, ..., b_n ,   (6)

where n is the length of S_ab. We use the proposed dynamic image warping (DIW) method, described in the following, to align the two sequences. We construct an n×n matrix C, and each element C(i, j) corresponds to the alignment between the nodes a_i and b_j. A warping path W, as shown in Fig. 3, is a set of matrix elements that describes a mapping between A and B:

W = w_1, w_2, ..., w_k, ..., w_m ,   n ≤ m ≤ 2n − 1 ,   (7)
Fig. 3. Warping path of dynamic image warping. The warping path starts from w1 = (1, 1) and ends at wm = (n, n). The element (i, j) in the warping path describes that the node ai should be aligned to the node bj .
where w_k corresponds to a matrix element C(i, j) and is the k-th element of W, and m is the number of elements of W. The matrix C also defines the cost function for DIW as follows:

C(i, j) = D(a_i, b_j) + H(a_i, b_j) + min{ C(i, j−1) + p_v ,  C(i−1, j−1) ,  C(i−1, j) + p_h } ,   (8)

where p_h and p_v are the standard deviations of the data sequences A and B, and D(a_i, b_j) is the difference measurement between the data points a_i and b_j. This difference consists of two terms, the difference of absolute values and the difference of first derivative angles:

D(a_i, b_j) = d(a_i, b_j) + d′(a_i, b_j) ,   (9)

where

d(a_i, b_j) = (a_i − b_j)^2   (10)
d′(a_i, b_j) = (arctan(a′_i) − arctan(b′_j))^2 .   (11)
The idea of the d′(a_i, b_j) term is inherited from the Derivative Dynamic Time Warping (DDTW) [6] proposed by Keogh et al. Besides, the seam registration result in the current frame may differ from that obtained in the previous frame, which results
Bird’s-Eye View Vision System for Vehicle Surrounding Monitoring
213
in a jittering effect. Therefore, we take the seam registration results of a sequence of images into consideration in the H(a_i, b_j) term:

H(a_i, b_j) = s √( (i − h_b(f, j))^2 + (j − h_a(f, i))^2 ) ,   (12)

where h_a(f, i) is the average index of the point in data series B corresponding to the i-th point in data series A over the f previous frames, h_b(f, j) is the average index of the point in data series A corresponding to the j-th point in data series B over the f previous frames, and s is a scalar. As s increases, the warping path in the current frame emphasizes the history of the seam registration more strongly. Once the term H(a_i, b_j) has been added to the cost function, we can obtain a smooth registration result between consecutive frames.
2.5 Image Deformation Using Wendland ψ-Function
Once we have aligned the seams of the adjacent images, the next step is to propagate this alignment result to the rest of the images. This is a typical image deformation problem: how to produce an overall deformation field for the whole images while preserving the known correspondences as much as possible. A linear deformation method can interpolate smooth surfaces that maintain the overall characteristics, but it is difficult to keep the landmarks in position. The major approaches for image deformation are based on radial basis functions (RBFs). The selection of the type of RBF depends on the trade-off between overall characteristics, such as smoothness, and the locality of the transformation function. In general, image deformation is accomplished by a transformation function T : R^2 → R^2. The interpolating transformation function T(x) must map each landmark p_i = (p_ix, p_iy) ∈ R^2 in the source image to its landmark q_i = (q_ix, q_iy) ∈ R^2 in the target image:

T(p_i) = q_i ,   i = 1, ..., n ,   (13)

where n is the number of landmarks. The transformation functions in the two coordinates are calculated separately:

T(p_i) = (t_x(p_i), t_y(p_i))   (14)
       = (q_ix, q_iy) ,   i = 1, ..., n ,   (15)
where t_x and t_y are the transformation functions in the x and y coordinates, respectively. In the radial basis function approach, the transformation consists of two terms:

t(x) = R_s(x) + L_s(x) ,   (16)

where R_s(x) is the non-linear term of the weighted RBFs, and L_s(x) is the linear term containing m bases of polynomials with degrees up to d:
R_s(x) = ∑_{i=1}^{n} α_i R(x − p_i)   (17)
L_s(x) = ∑_{j=1}^{m} β_j L_j(x)   (18)

where α_i and β_j are coefficients, and R(x − p_i) is the radial basis function centered at landmark p_i, whose value only depends on the Euclidean distance from x to p_i. In order to preserve the overall smoothness as much as possible, the coefficients α_i are typically subject to the constraint

∑_{i=1}^{n} α_i L_j(p_i) = 0 ,   j = 1, ..., m .   (19)

A linear system for the coefficients α = [α_1, ..., α_n]^T and β = [β_1, ..., β_m]^T can be derived from the above equations:

[ K    P ] [ α ]   [ q ]
[ P^T  0 ] [ β ] = [ 0 ] ,   (20)

where K is the n × n sub-matrix in which K_ij = R(p_i − p_j), and P is the n × m sub-matrix in which P_ij = L_j(p_i). Fornefett et al. [7] use the ψ functions of Wendland [8] as RBFs for elastic registration:

ψ_{d,k}(r) = I^k ψ_{⌊d/2⌋+k+1}(r)   (21)

with

ψ_v(r) = (1 − r)^v_+   (22)

(1 − r)^v_+ = { (1 − r)^v ,  0 ≤ r < 1
              { 0 ,          r ≥ 1   (23)

Iψ(r) = ∫_r^∞ t ψ(t) dt ,   r ≥ 0 .   (24)

Because the images to be deformed in our application are 2-D, we prefer RBFs that create a smooth deformation, and choose ψ_{2,1}(r) as our RBF:

ψ_{2,1}(r) = (1 − r)^4_+ (4r + 1) .   (25)
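A minimal sketch of the ψ_{2,1}-based deformation of Eqs. (16)–(25) for one coordinate: the RBF weights and the linear (affine) term are solved from the landmark correspondences via the block system of Eq. (20). The support radius used to normalize distances is an assumption of this sketch, since the compactly supported ψ_{2,1} is defined on r ∈ [0, 1).

```python
import numpy as np

def psi_2_1(r):
    """Wendland function psi_{2,1}(r) = (1 - r)_+^4 (4 r + 1), Eq. (25)."""
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def fit_deformation(src, dst_coord, support):
    """Solve the block system of Eq. (20) for one target coordinate (t_x or t_y).

    src: Nx2 landmarks p_i, dst_coord: N target values (q_ix or q_iy),
    support: assumed support radius used to normalize distances.
    """
    n = len(src)
    r = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2) / support
    K = psi_2_1(r)                                  # n x n RBF matrix
    P = np.hstack([np.ones((n, 1)), src])           # degree-1 polynomial basis (m = 3)
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    rhs = np.concatenate([dst_coord, np.zeros(3)])
    sol = np.linalg.solve(A, rhs)
    return sol[:n], sol[n:]                         # alpha weights, beta (affine part)

def apply_deformation(x, src, alpha, beta, support):
    """Evaluate t(x) = R_s(x) + L_s(x) of Eq. (16) at Mx2 points x."""
    r = np.linalg.norm(x[:, None, :] - src[None, :, :], axis=2) / support
    return psi_2_1(r) @ alpha + beta[0] + x @ beta[1:]

# Tiny example: three landmarks, one of them moved slightly in x.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst_x = np.array([0.1, 1.0, 0.0])
alpha, beta = fit_deformation(src, dst_x, support=2.0)
print(apply_deformation(src, src, alpha, beta, support=2.0))  # ~ dst_x
```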
2.6 Image Fusion Using Weighted Blending
After deforming the images using the Wendland ψ-function, all edges crossing the seams can be stitched together on the seams. But the seams may still look obvious, since the stitched images were not taken under the same exposure conditions. In order to compensate for the exposure difference, we use the bias and gain model to adjust the global exposure [9]:

I′_i = α_i I_i + β_i ,   (26)
Bird’s-Eye View Vision System for Vehicle Surrounding Monitoring
215
where β_i is the bias and α_i is the gain. The bias and gain for each image can be obtained in the least squares manner:

E_i = ∑_j ∑_p [ α_i I_i(H_ir p) + β_i − I_j(H_jr p) ]^2 ,   (27)

where image I_j is an adjacent image of image I_i, and p is an image point in the overlap of image I_i and image I_j. With this approach, the images can be adjusted to a similar exposure. But the seams may still be visible, since the bias and gain model only compensates the global exposure difference. Hence, a further smoothing process is necessary to eliminate the seams. We use an image fusion method based on weighted blending to smooth the seams away. We take the residual error of the camera calibration as the weight to blend the image sources of the same pixels in the final composite. Though the weights of both image sources are equal along the seam, the weighting function is not continuous between the overlapping region and the non-overlapping region.
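For one camera, the gain/bias fit of Eqs. (26)–(27) reduces to a linear least-squares problem in (α_i, β_i); below is a minimal sketch, with the overlap samples passed in as plain arrays (our simplification).

```python
import numpy as np

def fit_gain_bias(own_samples, neighbor_samples):
    """Least-squares gain/bias (alpha_i, beta_i) so that
    alpha_i * I_i + beta_i matches the adjacent images in the overlaps.

    own_samples:      intensities of image I_i at overlap pixels (1-D array)
    neighbor_samples: intensities of the adjacent images at the same pixels
    """
    A = np.stack([own_samples, np.ones_like(own_samples)], axis=1)
    (alpha, beta), *_ = np.linalg.lstsq(A, neighbor_samples, rcond=None)
    return alpha, beta

# Example: the neighbour is roughly 1.1x brighter with an offset of 5.
own = np.array([10.0, 50.0, 120.0, 200.0])
nbr = 1.1 * own + 5.0
print(fit_gain_bias(own, nbr))  # approximately (1.1, 5.0)
```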
Fig. 4. The composite images from six fisheye cameras. (a) The optimal seams between adjacent images. (b) The composite image of the field for registration. (c) The composite image of ground-level objects. (d) The composite image of non-ground-level objects. (e) The composite image after the registration along the seam. (f) The composite image after exposure compensation and blending.
For this reason, we further take the minimum distance to the image boundaries into the weighting function. The proposed blending function is formulated as

I(p) = ( ∑_i E_i(H_ir p) B(H_ir p) I_i(H_ir p) ) / ( ∑_i E_i(H_ir p) B(H_ir p) ) ,   (28)

B(u, v) = (1 − |2u/width − 1|) (1 − |2v/height − 1|) ,   (29)

where I is the image of the final composite, E_i is the calibration error function of the camera C_i, width × height is the resolution of the images, and B is the weighting function with value 1 at the image center and 0 on the image boundary. We can reduce the seams in the integrated image by using this fusion method for intensity unification.
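A sketch of the weighting function of Eq. (29) and the normalized fusion of Eq. (28); it assumes that the per-camera images and their weight maps E_i(H_ir p)·B(H_ir p) have already been resampled into the composite frame, which is a simplification made for this sketch (in the full method, B is evaluated in each source image before warping).

```python
import numpy as np

def boundary_weight(width, height):
    """B(u, v) of Eq. (29): value 1 at the image centre, 0 on the boundary."""
    bu = 1.0 - np.abs(2.0 * np.arange(width) / width - 1.0)
    bv = 1.0 - np.abs(2.0 * np.arange(height) / height - 1.0)
    return bv[:, None] * bu[None, :]            # shape (height, width)

def fuse(warped_images, warped_weights):
    """Normalized weighted blending of Eq. (28).

    warped_images:  list of composite-sized images, one per camera
    warped_weights: list of composite-sized maps E_i(H_ir p) * B(H_ir p),
                    already resampled into the composite frame and set to
                    zero where camera i does not see the pixel
    """
    num = sum(w * img for img, w in zip(warped_images, warped_weights))
    den = sum(warped_weights)
    return num / np.maximum(den, 1e-12)
```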
3 Experiment Results

3.1 Configuration of Image Acquisition
We mounted four fisheye cameras with a 140-degree FOV on the four corners of a vehicle and two fisheye cameras with a 170-degree FOV on the left and right sides of the vehicle. In such a configuration, we can not only acquire images with higher resolution in the front and rear of the vehicle, but also minimize the blind spots using six fisheye cameras.
3.2 Results of Image Stitching
Which pixels contribute to the final composite is decided according to the residual error obtained in the calibration procedure. An optimal seam between adjacent images can thus be determined, as shown in Fig. 4(a). A composite image of the ground can also be created by stitching images along the optimal seams,
Fig. 5. Warped composites in fisheye view. (a) Original composite. (b) Virtual fisheye camera at a certain height.
Bird’s-Eye View Vision System for Vehicle Surrounding Monitoring
217
Fig. 6. Image sequences of a car moving in reverse (a) and moving back into a parking space (b)
as shown in Fig. 4(b). As long as the cameras remain fixed, the perspective transformation from each rectified image to the bird's-eye view and the optimal seam between each pair of adjacent images are invariant. Hence, the images of ground-level objects around the vehicle can be stitched into a bird's-eye view, as shown in Fig. 4(c). But when a non-ground-level object exists in the surrounding scene, such as the car on the left side of the image shown in Fig. 4(d), and the image of this object crosses the seam in the composite, a misalignment will occur on the seam. This misalignment is then removed by the DIW registration along the seam, and the final composite images after the exposure compensation and weighted blending are shown in Figs. 4(e) and 4(f), respectively. Some problems may occur in practice when all surrounding images are stitched in a perspective presentation, such as the distortion of non-ground-level objects, low image resolution, and amplified vibration in the farther surrounding area (see Fig. 5(a)). Warping the composite to a certain distortion level by placing a virtual fisheye camera at a certain height can cope with this problem, as shown in Fig. 5(b). In Figs. 6(a) and 6(b), we show two sampled image sequences in which the vehicle is moving in reverse and moving back into a parking space, respectively.
4 Conclusion
In this paper, we have presented a driving assistant system which can provide a bird's-eye view image of the vehicle surroundings. Multiple fisheye cameras are mounted around the vehicle to capture images of the surroundings in all
orientations. All the fisheye images are rectified to virtual perspective images by using the proposed calibration method, and these rectified images are stitched into a seamless mosaic by using the proposed stitching method. The images coplanar with the ground are registered on a planar surface in a bird's-eye view using a perspective transformation, while the images of 3-D objects are aligned along the stitching seams using the dynamic image warping method. Finally, a seamless bird's-eye view image is created by blending all registered images after exposure compensation. With the support of this vehicle surrounding image, drivers can easily maneuver their vehicles by surveying the surrounding area of their vehicles on a single display. Besides, if a video recording system is mounted, it can also provide direct evidence when a car accident happens.
References

1. Kato, K., et al.: Image synthesis display method and apparatus for vehicle camera. United States Patent 7,139,412 (2006)
2. Ehlgen, T., Pajdla, T.: Monitoring surrounding areas of truck-trailer combinations. In: Proceedings of the 5th International Conference on Computer Vision Systems (2007)
3. Devernay, F., Faugeras, O.: Straight lines have to be straight. Machine Vision and Applications 13(1), 14–24 (2001)
4. Zhang, Z.: A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence 22(11), 1330–1334 (2000)
5. More, J.: Levenberg–Marquardt algorithm: Implementation and theory. In: Conference on Numerical Analysis, vol. 28 (1977)
6. Keogh, E.J., Pazzani, M.J.: Derivative dynamic time warping. In: SIAM International Conference on Data Mining (2001)
7. Fornefett, M., Rohr, K., Stiehl, H.S.: Radial basis functions with compact support for elastic registration of medical images. Image and Vision Computing 19(1), 87–96 (2001)
8. Wendland, H.: Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree. Advances in Computational Mathematics 4(1), 389–396 (1995)
9. Lucas, B.D., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: Proceedings of International Joint Conference on Artificial Intelligence, pp. 674–679 (1981)
Road-Signs Recognition System for Intelligent Vehicles Boguslaw Cyganek AGH - University of Science and Technology, Al. Mickiewicza 30, 30-059 Kraków, Poland
[email protected]
Abstract. Automatic road-sign recognition is becoming a part of Driver Assisting Systems, whose role is to increase safety and driving comfort. In this paper we present the architecture and the obtained results of the Polish project on a road-signs recognition system. A holistic approach was undertaken, focusing on the interaction of the different components of the system.
1 Introduction
Road-sign (RS) recognition is becoming an important research area for automotive applications. Vision systems which are able to recognize road-signs in real time while driving will inevitably be part of the Driver Assisting Systems (DAS) installed in intelligent vehicles. They aim at increasing the safety, comfort, and economy of driving. Much effort has been devoted to developing such systems. For example, Daimler-Chrysler has developed a system called a thinking vehicle [12]. Also Siemens reports a system which is able to recognize speed limits [19]. The main problems encountered by RS recognition systems are the diversity of real traffic conditions, such as fast movements and vibrations, dirt and harsh weather conditions which directly affect visibility, noise, insufficient computational resources, etc. For instance, Fig. 1 depicts materials used for the production of RSs, which result in different reflection and visibility properties. Fig. 2 presents various traffic conditions for a single A-14 sign Road works. We easily notice that even a single sign in comparably similar weather conditions can be viewed at different angles and positions, and that many exemplars of the same sign can have different tints. The system by Maldonado-Bascón et al. [16] is based on the generalization properties of Support Vector Machine (SVM) classifiers. There are three stages of execution: HS colour segmentation, RS detection from shape with a linear SVM, and classification with a Gaussian kernel SVM. The system shows some invariance to rotation, change of scale, and position. However, the used one-versus-all SVMs with about 50 training patterns for each class make the training phase cumbersome. A solution presented by Paclik et al. [17] is based on the original cross-correlation method. The authors propose to correlate only selected regions, characteristic of the searched objects, instead of the whole images (or templates). The characteristic regions
Fig. 1. Different materials used for manufacturing of the road-signs which affect their visibility and light reflection properties
are obtained from a special classifier trained with real RS examples. Comparison results as well as a discussion of the one-nearest-neighbour, Fisher, and so-called SIMCA classifiers are also provided in [17].

In this paper we present a system for RS recognition which takes a different approach. The system aims at fast and accurate processing. Training is based on single exemplars rather than statistical samples. In the presentation a holistic approach is undertaken, with details of the discussed methods provided in the references [3-11]. This paper presents an overview of the RS recognition system developed as a Polish scientific research project. Detailed descriptions of its different modules, as well as their discussion and experimental results, can be found in the references [3-9]. In this paper we focus on the general view, interaction, and performance of the system. Fig. 3 depicts the four main groups of the Polish RSs: warning signs, group A (a); prohibition signs, group B (b); obligation signs, group C (c); and information signs, group D (d). There are also other groups with auxiliary signs [18]. In total, there are about two hundred objects for recognition.

The works [13-17] and the references cited therein report other RS detection and/or classification systems. The system discussed by de Escalera et al. [13] starts with the detection of image regions characterized by strong red/blue colour responses. Then an energy functional is constructed from four terms, of which two account for colour assertions and the other two for shape and distance. Detection is done as a maximization process of the a posteriori probability of the deformation vector. A genetic algorithm with simulated annealing is proposed as the optimization procedure. Finally, sign classification is done by normalized correlation. However, this paper does not report real-time processing. An RS detection and tracking system was proposed by Fang et al. [14]. Their system detects places with colour and shape characteristic of RSs. This is accomplished by neural networks. Then,
Fig. 2. Variability of real traffic conditions on an example of the A-14 sign Road works
a fuzzy method is used for feature integration and detection. Detected objects are verified and tracked through consecutive frames based on a Kalman filter. However, the system is tuned to detect only shapes of a certain size. Also, the applied neural networks are tuned to detect only non-distorted shapes. The system proposed by Gao et al. [15] employs the CIECAM97 colour space model, which is based on the Human Visual System. It allows colour inference that is much more independent of external conditions such as the lighting source or weather. For shape detection the Foveal System for Traffic Signs (FOSTS) is proposed, which is based on a human behavioural model of vision. Then a kind of nearest-neighbour method, operating on the sign features, is employed for classification.
2 Architecture of the Road-Sign Recognition System
Fig. 4 depicts an activity diagram of the proposed system. A number of parallel processes are launched whose role is to recognize signs of different groups. There are four main groups of signs, shown in Fig. 3. However, apart from these there are many other types of signs (groups E to T in [18]), such as railway crossing, yield, etc., some of them also shown in Fig. 4. A separate task is the recognition of traffic lights, which is also envisaged in the system. Recognition usually consists of two separate steps: sign detection and then pictogram classification. The first step determines the main group to which a test object can belong. The second tells a sign from the others in its group. In some cases, however, detection already uniquely determines a sign (see Fig. 4). Detection and classification
Fig. 3. Four groups of the Polish road-signs: Warning A (a), prohibitive B (b), obligation C (c), and information D (d). There are also other groups with auxiliary signs [18].
stages can also be connected together, as for instance in a sign recognizer based on log-polar template matching in the Gaussian scale space [6]. Fig. 5 presents a diagram of the whole system. It consists of three main modules: image acquisition, sign detection, and recognition. These are discussed in the following sections.
2.1 Image Acquisition
Processing always starts with image acquisition. This is done with a system of digital cameras operating at video speed, i.e., 30 frames per second, and a resolution of 640×480. The signal comes in the RGB colour mode since most of the sign detectors require colour information. The system contains two cameras (Fig. 9) to allow the computation of depth information in the observed scene. This is used to constrain the search to objects which are within a certain distance from the vehicle.
Fig. 4. Activity diagram of the road signs recognition system
More details on the camera setup are given in the next section, which presents the experimental results. For different configurations of the system, the input images are down-sampled to 320×240 and also converted to a monochrome representation. Stereo matching constitutes an optional part of the acquisition module. Depth information allows scene partitioning into close regions and background. Due to the optical parameters of the cameras, an object has to belong to the close-range group to be fed into the RS classification modules; otherwise it would be too small for reliable recognition. Depth information can also be used for the detection of other objects on the way, such as pedestrians, cars, etc. In a lane tracking application it can be used for an analysis of the decaying depth. Hence the importance of this information. However, conditions in a moving vehicle place high requirements on the camera setup. The main problems are vibrations, dirt, and weather conditions such as rain, snow, etc. On the other hand, a full calibration of the system is not necessary since only relative depth information is used. In our system an area-based correlation method was employed, which operates on the non-parametric sparse Census representation rather than on the intensity signal [10]. An example of such stereo matching is presented in Fig. 6. This has very desirable properties, such as high tolerance to noise and local light variations. Unfortunately, the software implementation of this module does
Fig. 5. System diagram. The main components: Image acquisition, detection, and recognition. Solid arrows represent data flow, dashed arrows indicate usage of a module by a module.
Fig. 6. A stereo pair of a real traffic scene (a,b). Relative depth information after correlation in the non-parametric space (c), and after rejection of outliers (d).
not meet the hard real-time requirements. Therefore it is currently being ported to a separate hardware board [9].
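To illustrate the non-parametric Census representation used by the stereo module, here is a minimal sketch (our own, not the project's implementation) of a 3×3 Census transform; the window size and the wrap-around border handling are assumptions of this sketch.

```python
import numpy as np

def census_transform(img):
    """3x3 Census transform: for each pixel, encode which of its 8 neighbours
    is darker than the centre as an 8-bit signature (wrap-around at borders)."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint32)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        out = out | ((neighbour < img).astype(np.uint32) << bit)
    return out  # stereo matching then compares signatures by Hamming distance

# Example on a small random patch.
print(census_transform(np.random.randint(0, 256, (5, 5))))
```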
2.2 Detection of the Road-Signs
Road signs are characterized by colours and shapes that distinguish them from the majority of other objects. These supply sufficient cues for detection. However, in real situations both colour and shape undergo great variations due to different factors such as projective distortions, occlusions, as well as colour variations due to aging, weather, camera characteristics, noise, etc. These place high demands on the detection module. There is also an inevitable trade-off between the accuracy and speed of the system. Our observations and experiments led us to the conclusion that properly acquired colour information alone is sufficient for fast and moderately accurate sign detection. Therefore colour segmentation constitutes the first stage of our detection module. For this, three methods have been tested: simple thresholding in the HSI space [5], fuzzy segmentation [7], and SVM classifiers operating in the one-class mode in the RGB space [4]. The former can be characterized by the fastest execution and the simplest implementation. The latter shows more precise segmentation due to the well-known generalization abilities of the SVM [1]. However, this is achieved at a cost of about 20

Shapes are detected from the binary images in two different ways:
Fig. 7. A detector of triangular, rectangular, and diamond shapes based on salient points (a). A detector of ellipse shapes in binary images based on the adaptively growing window (b).
The triangles, rectangles, and diamond shapes are determined exclusively from salient corner points. Those points are found with the symmetric detector of local binary features (Fig. 7a), details of which are given in [5]. A different method is used to detect circular shapes. It relies on a region-growing algorithm which is a simplified version of the mean-shift method [4]. The mode of the colour distribution is determined over a dynamically growing window until convergence conditions are met (Fig. 7b). The detected shapes are subjected to a figure verification procedure which checks conditions predefined for each group of shapes. Thanks to these, only shapes which comply with the formal road-sign definitions are passed to the classification module [4][5][7].

Apart from the above, the system also contains modules for shape detection and analysis, since the conjunction of colour and shape raises the precision of detection. This comes at the cost of additional computations, however. For this purpose the Hough method is employed. However, its complexity grows rapidly with the number of free parameters of the detected shapes. For instance, lines can be detected faster than, e.g., ellipses. Therefore a more efficient method has been proposed which is based on the structural tensor (ST) [2] operating on multi-spectral and multi-scale images [11]. The virtue of the ST is its ability to provide information on local phase and coherency in local neighbourhoods of pixels. Based on these, an upward procedure traces local structures with a requested shape. The latter is set by a set of syntactical rules; more details are provided in [11].

Image registration is the last stage of the shape detection. Its purpose is to adjust the shape of a found object to the reference templates stored in the data bases. Since RSs are planar rigid objects, affine rectification does the job. The transformation parameters are found by solving a simple linear system of equations (details in [4]). Last but not least is the supplementary method which does log-polar template matching in the Gaussian scale-space [6]. It assumes a log-polar representation of the prototype templates as well as the test images. First, an estimate of the position of a template is found at the coarsest level of the scale-space pyramid. Then refinement is done at its finest
level. The most important feature of this method is its significant invariance to changes in scale and rotation, due both to the properties of the log-polar transformation and to the search in the scale-space. It is important to notice that this is not only detection but in fact full recognition of signs. However, the price is a long execution time; for the software implementation it is measured in minutes per image, which is not acceptable for controlling real traffic situations [6]. Nevertheless, the method can be very useful if implemented in hardware. For this purpose new graphics cards can also be useful.
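For the log-polar template matching mentioned above, here is a minimal sketch of the log-polar resampling itself (nearest-neighbour; the centre, numbers of rings and wedges, and minimal radius are arbitrary choices of this sketch); rotation and scaling of the input become shifts in this representation.

```python
import numpy as np

def log_polar(img, center, n_rings=64, n_wedges=64, r_min=1.0):
    """Nearest-neighbour log-polar resampling of a grey image around `center`."""
    h, w = img.shape
    cy, cx = center
    r_max = np.hypot(max(cy, h - cy), max(cx, w - cx))
    # log-spaced radii and uniformly spaced angles
    rho = np.exp(np.linspace(np.log(r_min), np.log(r_max), n_rings))
    theta = np.linspace(0.0, 2.0 * np.pi, n_wedges, endpoint=False)
    rr, tt = np.meshgrid(rho, theta, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]        # shape (n_rings, n_wedges); rotation -> column shift

# Example: a synthetic 64x64 gradient image sampled around its centre.
img = np.add.outer(np.arange(64), np.arange(64)).astype(float)
print(log_polar(img, (32, 32)).shape)
```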
2.3 Road-Signs Classification
Except for the log-polar template matching and some signs which can be uniquely recognized solely from their shapes, all other paths in the system lead from detection to classification (Fig. 4). A full data base of RSs contains more than 150 different objects. There are four main groups of signs, depicted in Fig. 3; the most numerous contains around sixty objects. Thus, a classifier for a single group has to be able to determine a sign from such a set of prototypes.

Designing the presented system, we made two basic postulates. The first assumes that recognition will be based on prototype pictograms obtained from the formal specification [18] (drawings) rather than from real examples. Secondly, we assume that the test patterns coming from a detector, although rectified, can be geometrically distorted, as well as exhibit large changes of the intensity signal. These result from different lighting conditions, variations of the colour of a sign, noise, occlusions, etc. The first assumption is motivated biologically. We know that even two-year-old children are able to recognize objects which they saw, for instance, as a hand drawing, sometimes only once. This assumption greatly simplifies the training phase of the system compared, for example, to systems which require the acquisition of a statistically significant amount of real examples. Such demands are not easy to meet for some rare signs. The assumption is also justified by the very pronounced pictograms of the road-signs, which can be represented as binary (1/0) images. Our observations indicate significant variations of the colours of real RSs, whereas their pictograms are highly similar to the ones in the formal specification [18]. Thanks to this assumption, a change of the prototypes of the system, e.g., to the specification of another country, is straightforward. The second assumption places a request on the classifiers to operate with slightly deformed input patterns. However, contrary to triangular and rectangular shapes, which can be quite precisely registered, the elliptical signs can also be rotated. In practice a rotation can reach up to about ±30◦. It can also happen that a detector passes a false positive, so a classifier should be able to reject such an object.

Considering the aforementioned requirements and assumptions, a system of neural classifiers was designed. It consists of a number of Hamming neural networks (HNN), presented in Fig. 8a. Each HNN classifies a test pattern based on its set of prototypes, which are encoded into the weights of the net [8]. These prototypes are extracted from the prints of the pictograms for each group of signs (Fig. 3). However, each HNN is trained with a geometrically deformed version of the original prototypes. Thanks to this
Fig. 8. A single classifier for one deformation of the prototypes, the Hamming neural network (a), and a committee machine built from the single classifiers and an arbitration module (b)
each network operates on a group of deformable models. For the geometrical deformations we assume that they can be approximated by a linear change of the deformation parameters around a point of attraction on a local manifold. In practice, we allow small horizontal and vertical shifts, as well as some inherent rotations in the case of the circular signs. However, both groups are processed differently, as will be explained later in this section. From a collection of HNNs a committee machine is built which is controlled by an arbitration mechanism (Fig. 8b). It follows the winner-takes-all (WTA) strategy with a mechanism that boosts the response of a sub-group of unanimous experts [8][4]. As alluded to previously, classification of the circular shapes has to take into account their inherent rotations, which cannot be determined by a detector. For this purpose the committee machines classifying the groups of circular signs (groups B and C in [18]) operate simultaneously in the log-polar and spatial domains [4]. The log-polar mapping transforms rotations and changes of scale into horizontal and vertical shifts. Thus, recognition of the circular patterns in the log-polar domain is the same as recognition of the triangular and rectangular objects in the spatial domain and involves analysis of horizontal and vertical deformations of the prototypes. A quite different approach was taken in the classifier of circular RSs which is composed of a probabilistic neural network (PNN) operating on affine moment invariants [3]. The main difference compared to the system with HNNs is that the features of the PNN are computed from samples of real RS examples rather than from prints of the formal specification. However, the problem with statistical moments is that they are greatly affected by noise and distortions of the images, due to the computation of high powers of point coordinates. Hence, even small outliers can have a big influence on the value of the statistical moments. Therefore the PNN is recommended as an auxiliary classifier in the system, and only if there are sufficient computational resources. Recently, another type of classifier has been developed which is based on matching phase histograms obtained from the structural tensor. This method has shown to be very efficient, both in accuracy and time. Moreover, it is invariant to rotations and small local deformations. It relies on the precise ST [2], which is also employed in the stereo matching (1.2) and shape detection (1.3). For the acquisition of real
Road-Signs Recognition System for Intelligent Vehicles
229
examples and for testing, a data base was created which contains a few thousand traffic scene images.
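The following sketch illustrates the classification idea in simplified form: every expert holds binary prototype pictograms for one deformation set, a test vector is scored by its Hamming similarity, and a winner-takes-all arbitration selects the best match over all experts while rejecting weak winners. The data layout, the `minScore` rejection threshold and the returned index pair are assumptions of this sketch, not the weight-based HNN formulation of [8].

```cpp
// Hamming-similarity committee with winner-takes-all arbitration (sketch).
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

using BitVec = std::vector<unsigned char>;   // binary (0/1) pictogram vector

// Hamming similarity: number of matching positions.
std::size_t similarity(const BitVec& a, const BitVec& b) {
    std::size_t s = 0, n = std::min(a.size(), b.size());
    for (std::size_t i = 0; i < n; ++i)
        if (a[i] == b[i]) ++s;
    return s;
}

struct Expert { std::vector<BitVec> prototypes; };   // one deformation set

// Returns (expert, prototype) indices of the best match, or (-1, -1) if the
// winner is too weak, so that false positives from the detector are rejected.
std::pair<int, int> classify(const std::vector<Expert>& committee,
                             const BitVec& test, double minScore = 0.85) {
    int bestExpert = -1, bestProto = -1;
    std::size_t bestSim = 0;
    for (std::size_t e = 0; e < committee.size(); ++e)
        for (std::size_t p = 0; p < committee[e].prototypes.size(); ++p) {
            std::size_t s = similarity(committee[e].prototypes[p], test);
            if (s > bestSim) {
                bestSim = s;
                bestExpert = static_cast<int>(e);
                bestProto = static_cast<int>(p);
            }
        }
    if (test.empty() || static_cast<double>(bestSim) / test.size() < minScore)
        return {-1, -1};
    return {bestExpert, bestProto};
}
```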
3 Experimental Results and Discussion of Further Development
The experimental system for road-sign recognition, depicted in Fig. 9, was installed in a Chrysler Voyager car. It consists of a canonical stereo setup with Marlin C33 cameras mounted on an adjustable tripod; the base line and tilt of the cameras can be regulated as well. The cameras are connected by two IEEE 1394 (FireWire) links to a Dell Precision M90 laptop computer equipped with a FireWire interface. All software modules were written in C++ using Microsoft Visual 6.0 and later the .NET 2005 platform. Additionally, the system is augmented with a GPS positioning device, also connected to the main computer (visible in Fig. 9). Fig. 10 depicts the stages of recognition of the warning signs (group A) with the described detector of salient points and the committee classifier with HNNs. The 320×240 real traffic scene, with the A-14 sign Road works, is visible in Fig. 10a. It is segmented with the SVM classifier (b), and salient points are detected (c) with the method outlined in Fig. 7a. After shape verification one triangle is found (d). Finally, the monochrome region is cropped from the red channel of the image and rectified to the required size and orientation (e). From its pictogram a binary feature vector is composed (f), which is fed to the soft committee classifier that was pre-trained with the locally deformed prototypes of the A signs. Its correct response, with a prototype from the A data base, is depicted in (g). A similar recognition process for the prohibition signs is depicted in Fig. 11 and Fig. 12. Elliptic shapes are detected based on the already
Fig. 9. The experimental road signs recognition system installed in the car
Fig. 10. Recognition of the A-14 sign from a real traffic scene (a). Image segmented with the SVM classifier (b). Detected points of interest (c) and figure after verification (d). The rectified image (e), its binary feature vector (f), the recognized sign from a data-base for group A (g).
mentioned adaptive region growing method (see Fig. 7b). The features are built from the blue channel of the RGB representations. The signs are classified by the soft classifiers operating in different domains, however: the recognition in Fig. 11 is done in the log-polar space, whereas that in Fig. 12 is performed in the spatial domain. Both methods can work in conjunction, increasing the precision of the response [4]. The system recognizes the other groups of signs in the same way [8]. The average execution time of the pure software implementation, for 320×240 RGB input images, is about 0.45 s per sign. The qualitative parameters of the system were measured in terms of precision (P) versus recall (R). Results of these measurements are presented in Table 1 for
Fig. 11. Recognition of the B-25 and B-33 signs in a twin position (a). The SVM segmented image (b). Detected figures (c, d). Results of recognition with the committee machine operating in the log-polar space. LP versions of the candidates (e,g); The found prototypes (f, h).
Fig. 12. Speed-limit sign in a real scene (a). The segmented image (b). The found and verified shapes (c,d). The test pattern after rectification (e). The recognized pattern by the committee machine with HNN experts, operating in the spatial domain (f).
three different configurations of the system. The first one (Conf. 1) corresponds to the committee of soft classifiers operating concurrently in the spatial and log-polar domains. Conf. 2 contains the recognition parameters of the PNN with affine moment invariants, which was tested only with the groups B and C of circular signs. Finally, Conf. 3 denotes the log-polar template matching in the Gaussian scale-space. The first configuration operates on the full road-signs specification [18]; the other two work on limited data bases containing only selected signs.

Table 1. Accuracy of recognition for different groups of road-signs under daylight conditions. Precision (P) versus recall (R). Conf. 1 denotes the soft committee classifiers operating in the spatial and log-polar domains. Conf. 2 corresponds to recognition with the PNN and affine moment invariants. Conf. 3 is the log-polar template matching in the scale-space.

              "A"             "B"             "C"             "D"
            P      R        P      R        P      R        P      R
  Conf. 1  0.91   0.90     0.92   0.90     0.97   0.94     0.92   0.88
  Conf. 2   -      -       0.69   0.89     0.80   0.90      -      -
  Conf. 3  0.93   0.98     0.92   0.98     0.92   0.97     0.93   0.96
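For reference, precision and recall are presumably used here in their standard sense; expressed with the numbers of true positives (TP), false positives (FP) and false negatives (FN) of the detection/classification chain they read

$$P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}.$$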
The values of the recall parameter are affected mostly by the performance of the detectors and to a lesser degree by the classifiers. However, the parameters also depend on the specific settings as well as on the type and quality of the input images. Only results for daylight conditions are provided. Under night or adverse weather conditions the recall parameter drops, because the segmentation method relies on samples acquired in daylight conditions. Regarding further development of the system, the most important planned modifications are as follows:
1. Development of colour segmentation that works in different conditions.
2. Joining shape and colour for detection.
3. Development of efficient stereo methods for automotive applications.
4. Classification with phase histograms from ST.
5. Further work on combining classifiers.
6. Parallelization of the algorithms as well as hardware implementation.
7. Sign tracking.
8. Synchronization with the GPS.
All of the aforementioned steps are under development and are envisaged for the next version of this project.
4 Conclusions
The paper presents a system for road-sign recognition. Although designed to meet the Polish specification, it can be easily adapted to other conditions. The main design goals were accuracy, fast execution, and easy maintenance. Therefore the system was designed to recognize signs based on prototypes provided by a formal specification rather than on real examples. Detection is mostly based on colour information, although shape detection with the structural tensor is also performed. Colour segmentation is based on colour samples from real traffic objects. Two segmentation methods were tested: fuzzy segmentation in the HSI space and an SVM classifier operating in the RGB space. There are two types of detectors, one for circular shapes and one for the other shapes. Shapes corresponding to RSs are verified, rectified, and passed to the classifiers. Classification is based on the pictograms of the signs. For this purpose a committee machine was designed with two groups of soft classifiers operating on deformable models of the prototypes. Additionally, classification is done in the spatial and log-polar domains to cope with rotations. The other tested classifiers were based on the probabilistic neural network and on log-polar template matching in the Gaussian scale-space. However, these two solutions suffer either from poor accuracy in real conditions or from excessive execution time. Nevertheless, the main configuration of the system shows high recognition accuracy and an execution speed of about two frames per second in a software implementation.
Acknowledgement. This work was supported by the Polish funds for scientific research in 2007.
References
1. Abe, S.: Support Vector Machines for Pattern Classification. Springer, Heidelberg (2005)
2. Brox, T., et al.: Adaptive Structure Tensors and their Applications. In: Weickert, J., Hagen, H. (eds.) Visualization and Processing of Tensor Fields, pp. 17–47. Springer, Heidelberg (2006)
3. Cyganek, B.: Circular Road Signs Recognition with Affine Moment Invariants and the Probabilistic Neural Classifier. In: Beliczynski, B., et al. (eds.) ICANNGA 2007. LNCS, vol. 4432, pp. 508–516. Springer, Heidelberg (2007)
4. Cyganek, B.: Circular Road Signs Recognition with Soft Classifiers. Integrated Computer-Aided Engineering 14(4), 323–343 (2007)
5. Cyganek, B.: Real-Time Detection of the Triangular and Rectangular Shape Road Signs. LNCS, vol. 4768, pp. 744–755. Springer, Heidelberg (2007)
6. Cyganek, B.: Road Signs Recognition by the Scale-Space Template Matching in the Log-Polar Domain. In: Martí, J., et al. (eds.) IbPRIA 2007. LNCS, vol. 4477, pp. 330–337. Springer, Heidelberg (2007)
7. Cyganek, B.: Soft System for Road Sign Detection. In: Advances in Soft Computing 41, pp. 316–326. Springer, Heidelberg (2007)
8. Cyganek, B.: Committee Machine for Road-Signs Classification. In: Rutkowski, L., et al. (eds.) ICAISC 2006. LNCS (LNAI), vol. 4029, pp. 583–592. Springer, Heidelberg (2006)
9. Cyganek, B.: Hardware-Software System for Acceleration of Image Processing Operations. In: The International Conference on Computer Vision and Graphics (2006)
10. Cyganek, B.: Matching of the Multi-channel Images with Improved Nonparametric Transformations and Weighted Binary Distance Measures. In: Reulke, R., et al. (eds.) IWCIA 2006. LNCS, vol. 4040, pp. 74–88. Springer, Heidelberg (2006)
11. Cyganek, B.: Object Detection in Multi-channel and Multi-scale Images Based on the Structural Tensor. In: Gagalowicz, A., Philips, W. (eds.) CAIP 2005. LNCS, vol. 3691, pp. 570–578. Springer, Heidelberg (2005)
12. DaimlerChrysler, The Thinking Vehicle (2002), http://www.daimlerchrysler.com
13. de Escalera, A., Armingol, J.A.: Visual Sign Information Extraction and Identification by Deformable Models. IEEE Tr. on Int. Transportation Systems 5(2), 57–68 (2004)
14. Fang, C.-Y., Chen, S.-W., Fuh, C.-S.: Road-Sign Detection and Tracking. IEEE Transactions on Vehicular Technology 52(5), 1329–1341 (2003)
15. Gao, X.W., et al.: Recognition of traffic signs based on their colour and shape. J. Vis. Comm. & Image Repr. (2005)
16. Maldonado-Bascón, S., et al.: Road-Sign Detection and Recognition Based on Support Vector Machines. IEEE Tr. on Intelligent Transportation Systems 8(2), 264–278 (2007)
17. Paclík, P., Novovičová, J., Duin, R.P.W.: Building road sign classifiers using a trainable similarity measure. IEEE Trans. on Int. Transportation Systems 7(3), 309–321 (2006)
18. Road Signs and Signalization. Directive of the Polish Ministry of Infrastructure, Internal Affairs and Administration (Dz. U. Nr 170, poz. 1393) (2002)
19. Siemens VDO (2007), http://usa.siemensvdo.com/topics/adas/traffic-sign-recognition/
Situation Analysis and Atypical Event Detection with Multiple Cameras and Multi-Object Tracking Ralf Reulke, Frederik Meysel, and Sascha Bauer German-Aerospace Center (DLR), Institute for Transport Research, Rutherfordstr. 2, 12489 Berlin, Germany {ralf.reulke,frederik.meysel,sascha.bauer}@dlr.de
Abstract. This paper discusses a trajectory-based recognition algorithm to expand the common approaches for atypical event detection in multi-object traffic scenes and to obtain area-based types of information (e.g. maps of speed patterns, trajectory curvatures or erratic movements). Different views of the same area by more than one camera sensor are necessary because of the typical limitations of single-camera systems, resulting from occlusions by other cars, trees and traffic signs. Furthermore, a distributed cooperative multi-camera system (MCS) enables a significant enlargement of the observation area. The fusion of object data from different cameras is done by a multi-target tracking approach. This approach opens up opportunities to identify and specify traffic objects, their location, speed and other characteristic object information. New and consolidated information about traffic participants is derived from the system. Keywords: Multi-camera sensing, fixed-viewpoint camera, cooperative distributed vision, multi-camera orientation, multi-target tracking.
1 Introduction
Intelligent traffic management is based on exact knowledge of the traffic situation. Therefore, traffic monitoring at roads and intersections is an essential prerequisite for the implementation of an Intelligent Transportation System (ITS). The most common detection and surveillance systems for measuring traffic flow on public roads are inductive loops and microwave radar systems. Non-intrusive video detection for traffic flow observation and surveillance is the primary alternative to conventional inductive loop detectors. Video Image Detection Systems (VIDS) can derive traffic parameters by means of image processing and pattern recognition methods. An important advantage of cameras is the opportunity for large-area traffic observation. Additionally, this field of view can be enlarged by multiple camera systems (MCS). The temporal behavior of every object in the observation area can be described by trajectories.
By fitting trajectories to analytical functions, driver intentions can be determined. In addition, a semantic interpretation can also be derived from statistical information (such as direction and speed) over the observed scene. Deviations from the found statistics can be interpreted as atypical events and used for the detection and prevention of dangerous situations. This paper is organized as follows: after an overview of existing multiple-camera systems, the approach is introduced. Then an example installation is described and its results are presented. This is followed by an application which approximates previously derived traces or trajectories of turning vehicles by hyperbolas. This analytical description of trajectories can be used for traffic scene description. The article closes with a summary and an outlook.
2 Situation Analysis and Atypical Event Detection Overview
Situation analysis in driver assistance applications and road environments aims at the integration and interpretation of data from many sensors. The result of the process is a semantic description of the situation, applicable to higher-level decision processes or to interaction with drivers or operators. Areas for situation analysis other than driver assistance [16] include traffic situation representation, surveillance applications, sports video analysis [6] or even customer tracking for marketing analysis [8]. A variety of solutions already exists for multi-camera observation and tracking, especially for surveillance tasks. For an MCS, the main problem to solve is that the observations of an object in the images of the different cameras must be assigned to the same real object. Therefore, an accurate relation between every pixel and the object coordinates must be available. A real-time cooperative multi-target tracking system for ITS applications was presented by Matsuyama et al. in [11]: in a system of active vision agents (AVAs), where an AVA is a logical model of a network-connected computer with an active camera, the agents cooperatively track their target objects by dynamically exchanging object information with each other. With this cooperative tracking capability, the system is able to track multiple moving objects persistently even in complicated dynamic real-world environments. Collins et al. [4] have described a system for acquiring multi-view videos of a person moving through the environment. A real-time tracking algorithm adjusts the pan, tilt, zoom and focus parameters of multiple active cameras to keep the moving person centered in each view. The output of the system is a set of synchronized, time-stamped video streams showing the person simultaneously from several viewpoints. In [12] Meagher et al. presented a method for tracking an object and determining its absolute position using the image coordinates provided by multiple cameras. The proposed method obtains the image coordinates of an object at known locations and generates “virtual points”.
Mittal et al. [13] described an algorithm for detecting and tracking people in a cluttered scene using multiple synchronized cameras. This camera arrangement results in multiple wide-baseline camera systems. The results from these wide-baseline camera systems are then combined using a scheme that rejects outliers and gives very robust estimations of the 2D locations of the people.
3 Processing Approach
The cameras used cover overlapping or adjacent observation areas. Thus, the same road user can be observed by different cameras from different positions and angles. Using image processing methods, the objects of interest are found in the image data. In order to enable the tracking and fusion of the objects which are detected in the respective observation areas, the image coordinates of these objects are converted to a common world coordinate system. In case of poor quality of the orientation parameters, the same objects will be observed at different positions. To avoid misidentification of objects that were derived from different camera images, high precision in the coordinate transformation from image into object space is required. Therefore, a very exact calibration (interior orientation) as well as knowledge of the position and view direction (exterior orientation) of the cameras is necessary. If the camera positions are given in absolute geographical coordinates, the detected object positions can be provided in world coordinates. The approach presented here can be separated into three main steps (Figure 1). Firstly, all moving objects have to be extracted from each frame of the video sequence. Next, these traffic objects have to be projected onto a georeferenced world plane. Afterwards, these objects are tracked and associated to trajectories. This can be utilized to assess comprehensive traffic parameters and to characterize trajectories of individual traffic participants. These steps are described more precisely below.
Fig. 1. Process chain: video acquisition and object detection, coordinate transformation, tracking, trajectory processing, feature map generation, and atypical event detection
3.1 Video Acquisition and Object Detection
In order to obtain reliable and reproducible results, only compact digital industrial cameras with standard interfaces and protocols (e.g. IEEE 1394, Ethernet) are used. To extract moving objects from an image sequence, different image processing libraries or programs (e.g. OpenCV or HALCON) can be utilized. The algorithm used is based on a Kalman filter background estimator, which adapts to the variable background and extracts the desired objects. The extracted objects are then grouped using a cluster analysis combined with additional filters to avoid object splitting by infrastructure at intersections and roads. The dedicated image coordinates as well as additional parameters like area, volume, color and compactness can be computed for each extracted traffic object.
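A per-pixel Kalman background estimator can be sketched as follows; the noise constants, the foreground threshold and the slowed adaptation of foreground pixels are illustrative assumptions of this sketch, not the parameters of the deployed system, which additionally clusters and filters the resulting mask.

```cpp
// Per-pixel scalar Kalman background model: each pixel's background grey
// value is tracked; a pixel is marked foreground when it deviates strongly
// from the prediction.
#include <cmath>
#include <cstdint>
#include <vector>

struct BackgroundModel {
    std::vector<double> mean, var;           // per-pixel state and variance
    double q = 0.5, r = 16.0;                // process / measurement noise

    explicit BackgroundModel(std::size_t nPixels)
        : mean(nPixels, 0.0), var(nPixels, 100.0) {}

    // Returns the foreground mask (0/255) for one frame of grey values.
    std::vector<std::uint8_t> update(const std::vector<std::uint8_t>& frame,
                                     double thresh = 25.0) {
        std::vector<std::uint8_t> fg(frame.size(), 0);
        for (std::size_t i = 0; i < frame.size(); ++i) {
            double p = var[i] + q;                       // predict
            double k = p / (p + r);                      // Kalman gain
            double innovation = frame[i] - mean[i];
            if (std::fabs(innovation) > thresh) fg[i] = 255;   // foreground
            // Adapt the background slowly even for foreground pixels.
            double gain = (fg[i] ? 0.05 * k : k);
            mean[i] += gain * innovation;
            var[i] = (1.0 - gain) * p;
        }
        return fg;
    }
};
```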
3.2 Coordinate Transformation and Camera Calibration
The employed tracking concept is based on extracted objects, which are georeferenced to a world coordinate system. This concept allows the integration or fusion of additional data sources. Therefore, a transformation between image and world coordinates is necessary. Using the collinearity equations (1), the world coordinates X, Y, Z can be derived from the image coordinates x', y':

$$X = X_0 + (Z - Z_0)\cdot\frac{r_{11}(x'-x_0)+r_{21}(y'-y_0)-r_{31}c}{r_{13}(x'-x_0)+r_{23}(y'-y_0)-r_{33}c}, \qquad Y = Y_0 + (Z - Z_0)\cdot\frac{r_{12}(x'-x_0)+r_{22}(y'-y_0)-r_{32}c}{r_{13}(x'-x_0)+r_{23}(y'-y_0)-r_{33}c} \qquad (1)$$
where X, Y are the world coordinates (to be calculated), Z is the Z-component in world coordinates (assumed to be known), X0, Y0, Z0 is the position of the perspective center in world coordinates (exterior orientation), r11, r12, ..., r33 are the elements of the rotation matrix (exterior orientation), x', y' are the uncorrected image coordinates, and x0, y0 (coordinates of the principal point) and c (focal length) belong to the interior orientation. The Z-component in world coordinates can be deduced by appointing a dedicated ground plane. Additionally needed input parameters are the interior and exterior orientation of the camera. The interior orientation (principal point, focal length and additional camera distortion) can be determined using a well-known lab test field. The 10-parameter Brown camera model was used for describing the interior orientation [3]. The parameters can be determined by a bundle block adjustment as described in [17]. Calculating the exterior orientation of a camera, hence determining its location and orientation in a well-known world coordinate system, is based on ground control points (GCPs) previously measured with differential GPS. The accuracy of these points is better than 5 cm. With these coordinates an approximate orientation can be deduced using the Direct Linear Transformation (DLT) [10]. Eventually the exterior orientation is calculated using the spatial resection algorithm for improvement and elimination of erroneous GCPs.
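A direct implementation of the collinearity equations (1), projecting an (undistorted) image point onto a ground plane of known height, might look as follows; the grouping of parameters into `Interior` and `Exterior` structures is an assumption of this sketch.

```cpp
// Image-to-world projection via the collinearity equations (1).
struct Interior { double x0, y0, c; };                   // principal point, focal length
struct Exterior { double X0, Y0, Z0; double r[3][3]; };  // projection centre, rotation

// Returns world coordinates (X, Y) of image point (xi, yi) on the plane Z = z.
void imageToWorld(double xi, double yi, double z,
                  const Interior& in, const Exterior& ex,
                  double& X, double& Y) {
    const double dx = xi - in.x0, dy = yi - in.y0;
    const double denom = ex.r[0][2] * dx + ex.r[1][2] * dy - ex.r[2][2] * in.c;
    const double kx    = ex.r[0][0] * dx + ex.r[1][0] * dy - ex.r[2][0] * in.c;
    const double ky    = ex.r[0][1] * dx + ex.r[1][1] * dy - ex.r[2][1] * in.c;
    X = ex.X0 + (z - ex.Z0) * kx / denom;
    Y = ex.Y0 + (z - ex.Z0) * ky / denom;
}
```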
Fig. 2. Original images of the example scene
Fig. 3. Orthophoto, generated from images of three different observation positions
The scenario has been tested using three cameras observing the intersection Rudower Chaussee / Wegedornstrasse in Berlin (Germany). The extent of the observed area is about 10000 m2. Figure 2 shows the original images taken from three different positions, and the derived orthophotos. The overlapping area of the three orthophotos verifies the good accuracy of the approach (Figure 3). Figure 4 shows the lateral error of the GCPs in X- and Y-direction. An error of 20 cm at a distance of 100 m from the projection center is achieved by this approach.
3.3 Tracking, Trajectory Creation and Fusion
A number of objects are recognized in each image k, and a set of position data is available for n objects. The aim is to map each observation to an existing object and to update the state values describing this object, e.g. position or shape. Tracking is done using a Kalman-filter approach [1,2]. The basic idea consists of feeding supplementary information concerning the state into the filter in addition to the measurement. This forecast of the measurement results (prediction) is derived from earlier results of the filter, which makes the approach recursive.
Fig. 4. Lateral error of GCPs in X- and Y-direction as a function of point distance from the camera projection center
The system state has to be mapped to the measurement vector in order to describe the state of the observed process:

$$Z_k = H \cdot X_k + \beta_k + \varepsilon_k \qquad (2)$$

where $Z_k$ is the measurement of the sensor at time $t_k$, $X_k$ the object state at $t_k$, $\beta_k$ an unknown measurement offset, $\varepsilon_k$ the random measurement error, $H$ the observation matrix, and $H \cdot X_k$ the measurement (object position). The state vector of each object consists of the position, speed and acceleration of the object in X- and Y-direction. The measurement statistics are described by uncorrelated white noise. The state transition model (plant model) is characterized by uniform motion. Since this is idealized, an additional model error (prediction error, plant noise) was introduced:

$$X_{k+1} = \Phi(\Delta t_k) \cdot X_k + U_k \qquad (3)$$

with $\Phi$ calculated from the movement model and $U_k$ the plant noise. If a (filtered) estimate is given at $t_k$, then the predicted state $X'_{k+1}$ at $t_{k+1}$ is

$$X'_{k+1} = \Phi(\Delta t_k) \cdot X_k, \qquad t_{k+1} = t_k + \Delta t_k \qquad (4)$$

The a posteriori state estimate is a linear combination of the a priori estimate and the weighted difference between the measurement and the forecast:

$$X_{k+1} = X'_{k+1} + K\,(Z_{k+1} - H X'_{k+1}) \qquad (5)$$
The initialization of the state vector is done from two consecutive images. The association of a measurement with an evaluated track is a statistically based decision-making process; errors are related to clutter, object aggregation and splitting, and the decision criteria minimize the rejection probability. The coordinate projection mentioned in the last section and the tracking process provide the opportunity to fuse data acquired by different sensors. The algorithm is independent of the sensors as long as the data are referenced in a joint coordinate system sharing the same time frame. The resulting trajectories are then used for different applications, e.g. for the derivation of traffic parameters (TP).
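A minimal sketch of the prediction and update steps (Eqs. 4 and 5) for one coordinate axis with a constant-velocity state is given below; the process and measurement noise values are illustrative, and the actual system additionally estimates acceleration and handles data association.

```cpp
// Scalar-measurement, constant-velocity Kalman filter for one axis.
#include <cstdio>

struct Kalman1D {
    double x[2] = {0.0, 0.0};                        // state: position, velocity
    double P[2][2] = {{1.0, 0.0}, {0.0, 1.0}};       // state covariance
    double q = 0.5;                                  // plant (process) noise
    double r = 0.25;                                 // measurement noise variance

    // Prediction, Eq. (4): x' = Phi(dt) x, P' = Phi P Phi^T + Q (Q simplified).
    void predict(double dt) {
        x[0] += dt * x[1];
        double P00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1];
        double P01 = P[0][1] + dt * P[1][1];
        double P10 = P[1][0] + dt * P[1][1];
        P[0][0] = P00 + q * dt;
        P[0][1] = P01;
        P[1][0] = P10;
        P[1][1] += q * dt;
    }

    // Update, Eq. (5): x = x' + K (z - H x'), with H = [1 0].
    void update(double z) {
        double S = P[0][0] + r;                      // innovation covariance
        double K0 = P[0][0] / S, K1 = P[1][0] / S;   // Kalman gain
        double innovation = z - x[0];
        x[0] += K0 * innovation;
        x[1] += K1 * innovation;
        double P00 = (1.0 - K0) * P[0][0];           // P = (I - K H) P'
        double P01 = (1.0 - K0) * P[0][1];
        double P10 = P[1][0] - K1 * P[0][0];
        double P11 = P[1][1] - K1 * P[0][1];
        P[0][0] = P00; P[0][1] = P01; P[1][0] = P10; P[1][1] = P11;
    }
};

int main() {
    Kalman1D kfx;                                    // track the X coordinate
    const double measurements[] = {0.0, 1.1, 1.9, 3.2, 4.0};
    double t = 0.0, dt = 0.1;
    for (double z : measurements) {
        kfx.predict(dt);
        kfx.update(z);
        t += dt;
        std::printf("t=%.1f  pos=%.2f  vel=%.2f\n", t, kfx.x[0], kfx.x[1]);
    }
    return 0;
}
```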
4 Results
The following examples were chosen to show the advantage of the trajectory-based object description.
4.1 Statistical Trajectory Classification
In the special case of motion in a plane, which is often assumed, the curvature $c_i^k = v_i^k \times a_i^k$ of the trajectory leads to the classification

$$\text{class}(c_i^k) = \begin{cases} \text{straight travel} & \text{if } |c_i^k| \le \varepsilon \\ \text{right} & \text{if } c_i^k < 0 \\ \text{left} & \text{otherwise} \end{cases} \qquad (6)$$

The mean value and variance of $c_i^k$ can be used to classify trajectories into straight-traveling, left-turning and right-turning objects under some conditions. However,
Fig. 5. The absolute value of the cross product of v and a along the trajectory allows for curvature evaluation
in practice, more sophisticated feature analysis or even semantic trajectory analysis may be necessary for robust trajectory segmentation or classification. Note that trajectories derived by image processing and tracking from stationary platforms [8,14] have quite different characteristics than trajectories obtained from a highly accurate navigation system or inertial measurement unit (IMU). The length of the longest side of the bounding box is a useful feature for a quick filtering of trajectories acquired during scene surveillance. Let the observation vector be

$$x_i^k = \begin{pmatrix} x_{i,1}^k \\ x_{i,2}^k \\ \vdots \\ x_{i,M}^k \end{pmatrix} \qquad (7)$$

where $x_{i,j}^k$ is the $j$-th feature of trajectory $i$ at observation $k$. Then the bounding box around all features is

$$B = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \\ \vdots & \vdots \\ b_{M1} & b_{M2} \end{pmatrix} \qquad (8)$$
Fig. 6. The bounding box B of a trajectory is the smallest simple subspace of the scene containing all of its observations
with $b_{j1} = \min_k x_{i,j}^k$, $b_{j2} = \max_k x_{i,j}^k$ and $1 \le j \le M$. The index of the feature with the longest edge in the bounding box is

$$j_{\max} = \arg\max_j \left( |b_{j1} - b_{j2}| \right) \qquad (9)$$

and the length of the longest edge is

$$B_{\max} = |b_{j_{\max}1} - b_{j_{\max}2}|. \qquad (10)$$
Since trajectories accumulate all insufficiencies and errors that occur in image processing and tracking, they are not very smooth or continuous. Trajectory fragmentation and disrupted observations are normal in outdoor surveillance setups. However, even with limited means, a number of trajectories that fulfil basic quality characteristics can be found in the scene. An example of the use of an algorithmic description of trajectories is given by Koller et al. [7]. Since many image processing artefacts are highly localized, the bounding box can be used as a filter criterion. The minimum number of observations tracked into one trajectory (e.g. N = 5) can be used as another filter parameter. A set of plausible trajectories can be extracted for further examination by applying these filters to a scene, as sketched below.
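The sketch below combines the curvature-based classification (6) with the bounding-box and observation-count filters (7)–(10); the observation layout and the threshold values (eps, minimum length, minimum number of observations) are illustrative assumptions.

```cpp
// Statistical trajectory classification and plausibility filtering (sketch).
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <string>
#include <vector>

struct Obs { double x, y, vx, vy, ax, ay; };   // one tracked observation

std::string classifyTurn(const std::vector<Obs>& traj, double eps = 0.05) {
    if (traj.empty()) return "unknown";
    double meanCross = 0.0;
    for (const Obs& o : traj)
        meanCross += o.vx * o.ay - o.vy * o.ax;    // z-component of v x a
    meanCross /= static_cast<double>(traj.size());
    if (std::fabs(meanCross) < eps) return "straight";
    return (meanCross < 0.0) ? "right" : "left";
}

// Longest edge of the position bounding box, Eq. (10).
double longestBoxEdge(const std::vector<Obs>& traj) {
    if (traj.empty()) return 0.0;
    double xmin = traj[0].x, xmax = traj[0].x, ymin = traj[0].y, ymax = traj[0].y;
    for (const Obs& o : traj) {
        xmin = std::min(xmin, o.x); xmax = std::max(xmax, o.x);
        ymin = std::min(ymin, o.y); ymax = std::max(ymax, o.y);
    }
    return std::max(xmax - xmin, ymax - ymin);
}

// Keep only trajectories with enough observations and sufficient spatial extent.
bool isPlausible(const std::vector<Obs>& traj,
                 std::size_t minObs = 5, double minLength = 2.0 /* metres */) {
    return traj.size() >= minObs && longestBoxEdge(traj) >= minLength;
}
```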
4.2 Deterministic Analysis of Trajectories
A method for deterministic description of trajectories shall be introduced in the following. Functional descriptions for these trajectories should be as simple as possible and allow straightforward interpretations. Linear movements can be
Fig. 7. Left (top) and right (bottom) turning trajectories extracted by positive and negative mean value of their respective cross products
described by simple straight lines. However, there are several possibilities for describing curved tracks by functional dependencies, and a variety of possible functions has been suggested in the literature. Clothoids [9] or G2-splines [5] are curves whose bend depends on the arc length. An alternative is the use of closed functions like B-splines, Cartesian polynomials of fifth degree or polar splines [15]. In [1] a description of tracks by hyperbolas has been proposed. The great advantage of this approach is that the derived parameters directly clarify geometric relations and permit a categorization and derivation of important features of the trajectories. The hyperbola shown in Figure 8 was derived by an estimation algorithm described in [18]. Figure 9 shows an example of the implemented approach; the colored points and crosses belong to the trajectory observed from different cameras. The hyperbola can be used for an automatic classification of right and left turns, in which case the angle φ is positive or negative. With the calculated center (xm, ym), all four possibilities for right/left turning at the junction can be classified.
Fig. 8. Example tracks (left) and an object trajectory observed from three cameras, with a hyperbola fitted to the trajectory (right)
4.3 Trajectory Based Atypical Event Detection
Two examples of approaches for the analytical and statistical description of observed trajectories in a multi-object scene were given. For the detection of atypical events, trajectories that fail to fit into the acquired scheme have to be tagged. Concerning the deterministic approach, valid hyperbola parameters were clustered in a parameter space and atypical trajectories could be sorted out by a threshold on the cluster distance. Regarding the statistical analysis, a different approach was chosen. All statistical features of the trajectories that met the minimum bounding box and observation count requirements were projected into ortho-maps. Figure 10 takes speed as an example and shows such a map for two different camera views right before fusion. The map reflects the normal state of the scene in terms of which speeds are normal in which places. Figure 11 shows examples of typical trajectories. Atypical trajectories were detected in corresponding experiments by comparing currently tracked objects with the results of the map fusion and accumulating the differences. Thus, an object moving slowly (Fig. 12) where multiple sensors normally tracked faster moving objects was tagged by the algorithm (a car in a curve). Furthermore, a pedestrian walking on the road was detected by the speed difference of its trajectory to the underlying speed map of the scene. The detection was done as follows: the position of each trajectory that had been finalized by tracking was projected into a speed map. To ensure comparability with significant map data, a boolean activity map was used for pre-filtering. Only object observations falling into frequented cells of the ortho-plane were considered in the calculation and tagged active. The current speeds of the objects were compared with the mean speed in the corresponding cell, and a tag was applied to the observation for each difference
Fig. 9. Classified object trajectories
exceeding a threshold of tr_vabs = 5 km/h. The total number of deviations over all data points of the trajectory was used for tagging atypical trajectories; here, a limit of 30 % deviating data points was chosen. The trajectory in Figure 12 (right) has 26 observed data points x_i^k with 0 < k ≤ 26, of which 17 (65.38 %) coincide with active cells in the activity map of the scene. The speed difference exceeded tr_vabs for 16 data points. Thus, 61.54 % of all points in the
Fig. 10. Speed maps of two cameras before map-fusion. The maps are generated from accepted trajectories and used for atypical trajectory detection.
Fig. 11. Typical trajectories
Fig. 12. Atypical trajectories
trajectory conflict with the map data, which is more than 94 % of its active data points. The driver was driving particularly slowly in that curve.
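The speed-map comparison can be sketched as follows; the grid geometry, the minimum hit count of the activity mask and the running-mean update are assumptions of this sketch, while the 5 km/h threshold and the 30 % limit follow the text.

```cpp
// Speed-map accumulation and atypical trajectory tagging (sketch).
#include <cmath>
#include <cstddef>
#include <vector>

struct SpeedMap {
    int    cols, rows;
    double cellSize;                        // metres per cell
    std::vector<double> meanSpeed;          // km/h, per cell
    std::vector<int>    hitCount;           // observations per cell

    SpeedMap(int c, int r, double s)
        : cols(c), rows(r), cellSize(s), meanSpeed(c * r, 0.0), hitCount(c * r, 0) {}

    int index(double x, double y) const {
        int u = static_cast<int>(x / cellSize), v = static_cast<int>(y / cellSize);
        if (u < 0 || v < 0 || u >= cols || v >= rows) return -1;
        return v * cols + u;
    }
    // Accumulate accepted ("typical") trajectories into the map.
    void accumulate(double x, double y, double speedKmh) {
        int i = index(x, y);
        if (i < 0) return;
        ++hitCount[i];
        meanSpeed[i] += (speedKmh - meanSpeed[i]) / hitCount[i];   // running mean
    }
    bool active(int i, int minHits = 10) const { return i >= 0 && hitCount[i] >= minHits; }
};

struct TrackPoint { double x, y, speedKmh; };

bool isAtypical(const std::vector<TrackPoint>& traj, const SpeedMap& map,
                double trvAbs = 5.0 /* km/h */, double limit = 0.30) {
    if (traj.empty()) return false;
    std::size_t deviations = 0;
    for (const TrackPoint& p : traj) {
        int i = map.index(p.x, p.y);
        if (!map.active(i)) continue;                      // boolean pre-filter
        if (std::fabs(p.speedKmh - map.meanSpeed[i]) > trvAbs) ++deviations;
    }
    return static_cast<double>(deviations) / traj.size() > limit;
}
```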
5 Conclusion and Outlook
The presented approach to multi-camera, multi-object tracking and surveillance systems has been implemented and tested. It could be shown that relevant surveillance tasks and automatic scene description can be assisted based on video detection, tracking and trajectory analysis. These are necessary steps for the development of next-generation surveillance systems. However, detection errors and tracking problems can deteriorate the trajectory data, which leads to a less usable analysis or less reliable alarm rates for the operators. Methods to recognize object detection errors and to stitch deteriorated trajectories together are key factors in current and future work. Furthermore, the intelligent evaluation of feature maps for atypical trajectory detection will be expanded.
Acknowledgements. We would like to thank Ragna Hoffmann for her support during the preparation of the paper and Marcel Lemke for his support in acquiring the image data.
References
1. Anderson, B., Moore, J.: Optimal Filtering. Prentice-Hall, Englewood Cliffs (1979)
2. Blackman, S.S.: Multiple-Target Tracking with Radar Applications. Artech House, Dedham, MA (1986)
3. Brown, D.C.: Close range camera calibration. Photogrammetric Engineering 37(8), 855–866 (1971)
4. Amidi, O., Collins, R.: An active camera system for acquiring multi-view video. In: Proceedings of the 2002 International Conference on Image Processing (ICIP 2002), pp. 517–520 (2002)
5. Forbes, A.B.: Least-squares best fit geometric elements. Technical report, National Physical Laboratory of Great Britain, NPL report DITC 140/89 (1989)
6. Jiang, S., et al.: A new method to segment playfield and its applications in match analysis in sports video. In: MULTIMEDIA 2004. Proceedings of the 12th Annual ACM International Conference on Multimedia, pp. 292–295. ACM Press, New York (2004)
7. Koller, D., Heinze, N., Nagel, H.-H.: Algorithmic characterization of vehicle trajectories from image sequences by motion verbs. In: IEEE Conf. on Computer Vision and Pattern Recognition, pp. 90–95 (1991)
8. Leykin, A., Tuceryan, M.: A vision system for automated customer tracking for marketing analysis: Low level feature extraction. In: International Workshop on Human Activity Recognition and Modeling, pp. 1–7 (2005)
9. Liscano, R., Green, D.: Design and implementation of a trajectory generator for an indoor mobile robot. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Tsukuba, Japan, pp. 380–385 (1989)
10. Robson, S., Kyle, S., Luhmann, T.: Close-Range Photogrammetry. Whittles Publishing (2006)
11. Matsuyama, T., Ukita, N.: Real-time multitarget tracking by a cooperative distributed vision system. Proceedings of the IEEE 90, 1115–1136 (2002)
12. Maire, F., Meagher, T., Wong, O.: A robust multi-camera approach to object tracking and position determination using a self-organising map initialised through cross-ratio generated "virtual point". In: CIMCA 2004, Gold Coast, Australia (2004)
13. Mittal, A., Davis, L.: Unified multi-camera detection and tracking using region-matching (2001), viewed (August 20, 2007), http://www.umiacs.umd.edu/sim
14. Nagel, H., et al.: Quantitative comparison between trajectory estimates obtained from a binocular camera setup within a moving road vehicle and from the outside by a stationary monocular camera. Image and Vision Computing 18(5), 435–444 (1998)
15. Nelson, W.L.: Continuous steering-function control of robot carts. IEEE Transactions on Industrial Electronics 36(3), 330–337 (1989)
16. Reichardt, D.: A real-time approach to traffic situation representation from image-processing data. In: Proceedings of the Intelligent Vehicles 1995 Symposium (1995)
17. Remondino, F., Fraser, C.: Digital camera calibration methods: Considerations and comparisons. In: ISPRS Commission V Symposium 'Image Engineering and Vision Metrology', pp. 266–272 (2006)
18. Reulke, R., Bauer, S., Spangenberg, R.: Multi-camera detection and multi-target tracking. In: VISAPP - 3rd International Conference on Computer Vision Theory and Applications (to be published, 2008)
Team AnnieWAY's Autonomous System
Christoph Stiller1, Sören Kammel1, Benjamin Pitzer1, Julius Ziegler1, Moritz Werling2, Tobias Gindele3, and Daniel Jagszent3
1 Institut für Mess- und Regelungstechnik, 2 Institut für Angewandte Informatik/Automatisierungstechnik, 3 Institut für Technische Informatik, Universität Karlsruhe (TH), 76131 Karlsruhe, Germany
[email protected]
Abstract. This paper reports on AnnieWAY, an autonomous vehicle that is capable of driving through urban scenarios and that has successfully entered the finals of the DARPA Urban Challenge 2007 competition. After describing the main challenges imposed and the major hardware components, we outline the underlying software structure and focus on selected algorithms. A recent laser scanner plays the prominent role in the perception of the environment. It measures range and reflectivity for each pixel. While the former is used to provide 3D scene geometry, the latter allows robust lane marker detection. Mission and maneuver selection is conducted via a concurrent hierarchical state machine that specifically ascertains behavior in accordance with California traffic rules. We conclude with a report of the results achieved during the competition.
1 Introduction
The capability to seamlessly perceive the vehicle environment, to stabilize the vehicle, and to plan and conduct suitable driving maneuvers is a remarkable competence of human drivers. For the sake of vehicular comfort, efficiency, and safety, research groups all over the world have worked on building autonomous technical systems that resemble such capability (cf. e.g. [1,2,3,4,5]). The DARPA Urban Challenge 2007 has been a competition introduced to expedite mainly US research on autonomous vehicles. Its finals took place on Nov. 3rd, 2007 in Victorville, CA, USA. As in its predecessors, the Grand Challenges 2004 and 2005 [6,7], the vehicles had to conduct missions fully autonomously and unmanned without any intervention of or interaction with the teams. In contrast to the earlier competitions, the Urban Challenge required operation in ’urban’ traffic, i.e. in the presence of other vehicles operated either autonomously themselves or by the organizer. The major challenge imposed was collision-free driving in traffic in compliance with traffic rules (e.g. right of way at intersections) while completing the given missions that included overtaking maneuvers, U-turns, parking, and merging into regular flow of traffic. Finally, recovery strategies had to be demonstrated in deadlock situations or in traffic congestions that
Fig. 1. Terrain of the Urban Challenge finals (high-speed section, offroad section, T-junctions, offset intersection, zones, 4-way stop, parking lots, start chutes)
cannot solely be handled with traffic rules. Fig. 1 shows the site of the finals in Victorville. This competition has significantly enhanced research, progress, and public awareness in the field of autonomous driving. Autonomous driving requires substantiated knowledge in different domains like engineering, computer and cognitive sciences: a robust vehicle architecture and design plays a central role. On-board sensor technology and sensor data analysis, including localization and sensor fusion techniques, are required for a consistent perception of the environment and of the ego pose of the vehicle therein. This information is eventually used for deciding on and planning a behavior that is appropriate in a given traffic situation. The outcome of the planning is fed in real time to a robust controller that keeps the vehicle in a stable condition under all circumstances. Fault and error detection and recovery procedures need to supervise both hardware and software of the autonomous vehicle. Omnipresent measurement uncertainties as well as contradictory sensor data have to be handled consistently. This broad variety of tasks shows that the Urban Challenge connects interdisciplinary areas of research whose relevance to science, industry and community can hardly be overestimated.
Team AnnieWAY mainly gathers partners from the Collaborative Research Centre on Cognitive Automobiles established by the German Research Foundation (DFG) in January 2006. This Collaborative Research Centre - like Team AnnieWAY - comprises partners from Universität Karlsruhe (TH), Forschungszentrum Karlsruhe, Technische Universität München, Fraunhofer-Gesellschaft IITB Karlsruhe, and Universität der Bundeswehr München. The Cognitive Automobiles team conducts fundamental and interdisciplinary research on machine cognition in the context of mobile systems as the basis of machine action, and on the development of a scientific theory thereof. Its verifiability will be exemplarily proven in that the behavior of automobiles in traffic will be perceived, interpreted and even automatically generated. A cognitive automobile will be capable of individual as well as cooperative perception and interaction. A theory of machine cognition will make it feasible to propagate measurement uncertainty and symbolic vagueness throughout the complete cognition chain to give measures of certainty in order to quantify the trust in a specific generated behavior. Cognition includes perception, deduction and recognition, and thereby supplies automobiles with completely novel abilities. Cognitive automobiles are able to sense themselves and their surroundings, as well as to accumulate and organize knowledge [8]. The scope of Team AnnieWAY was to extract early research results from the Cognitive Automobiles project that would allow real-time operation of the vehicle in the restricted traffic environment of the Urban Challenge. Its team members are professionals in the fields of image processing, 3D perception, knowledge representation, reasoning, real-time system design, driver assistance systems and autonomous driving. Some of the team members were in the ’Desert Buckeyes’ team of Ohio State University and Universität Karlsruhe (TH) and developed the 3D vision system for the Intelligent Offroad Navigator (ION) that successfully traveled 29 miles through the desert during the Grand Challenge 2005 [9,10].
2 Hardware Architecture
The basis of the AnnieWAY automobile is a 2006 VW Passat Variant B4 (Figure 2). The Passat was selected for its ability to be easily updated for drive-by-wire use by the manufacturer. AnnieWAY relies on an off-the-shelf AMD dual-core Opteron multiprocessor PC as main computer, whose computing power is comparable to a small cluster, yet which offers low latencies and high bandwidth for interprocess communication. All sensors connect directly to the main computer, which offers enough processing capacity to run almost all software components. The main computer is augmented by a dSpace AutoBox that operates as an electronic control unit (ECU) for low-level control algorithms and directly drives the vehicle's actuators. Both computer systems communicate over a 1 Gbit/s Ethernet network. The drive-by-wire system as well as the car odometry are interfaced via the Controller Area Network (CAN) bus. The DGPS/INS system allows for precise localization and connects to the main computer and to the low-level ECU (AutoBox). The chosen hardware architecture is
Fig. 2. Architecture and hardware components of the vehicle
supported by a real-time-capable software architecture, which will be described in the next section [11].
2.1 Long Distance 2D Lidar
Since lidar units produce their own light, low light conditions have no effect on this kind of sensor. The Velodyne HDL-64E rotating laser scanner (www.velodyne.com/lidar/index.html) comprises 64 avalanche photodiodes that are oriented with constant azimuth and increasing elevation, covering a 26.5-degree vertical field of view. The lasers and diodes are mounted on a spinning platform that rotates at a rate of 600 rpm. Thus, the HDL-64E provides a 360-degree field of view around the vehicle, producing more than 1 million points per second at an angular resolution of 0.09 degrees horizontally and a distance resolution of 5 cm at distances up to 70 m. The result is a dense, highly accurate scan representation of almost the entire scene surrounding the vehicle. For each point, the sensor measures range and reflectivity. The reflectivity map is well suited for monoscopic image analysis tasks like lane
marker detection. The inherent association of each reflectivity pixel with a range measurement facilitates the fusion of these data significantly.
2.2 Stereo Vision
An active camera platform with two cameras that surveys the forward 180° field of view is mounted in the vehicle. The system is able to conduct on-line self-calibration while the cameras are steered individually in the direction of interest [12], i.e. the system continuously calculates the precise orientation of its two cameras. This capability is essential for 3D stereoscopic vision.
2.3 DGPS/INS
For precise localization we use the OXTS RT3003 Inertial and GPS Navigation System, an advanced six-axis inertial navigation system that incorporates a Novatel L1/L2 RTK GPS receiver for position and a second GPS receiver for accurate heading measurements. Odometry is taken directly from AnnieWAY's wheel encoders. The RT3003 delivers better than 0.02 m positioning accuracy under dynamic conditions using differential corrections, and 0.1° heading accuracy using a 2 m separation between the GPS antennas. The RT3003 includes three angular rate sensors (gyros), three servo-grade accelerometers, the GPS receiver and the required processing. It works as a standalone, autonomous unit and requires no user input for operation.
2.4 Parking Lidars
Two additional Sick LMS 291 1D lidar scanners are mounted horizontally on the front and rear bumper to support the HDL-64E sensor and stereo cameras during parking maneuvers.
2.5 E-Stop System
As the vehicle had to operate unmanned, a remote stop system was integrated for safety reasons, as required by the organizer. This E-Stop system allows run, pause, or emergency-stop mode to be commanded remotely via a wireless transmission. The system is connected directly to the ignition and the parking brake to ensure an appropriate emergency stop regardless of the state of the computer system. Run and pause mode are signalled to the low-level control computer.
2.6 Actuators
To actually control the car, actuators for steering, brake, throttle and gear shifting have been mounted. The steering actuator consists of a small electric motor attached to the steering column while the brakes are controlled by an additional brake booster.
3 Software Architecture
The core components of the vehicle are the perception of the environment, an interpretation of the situation in order to select the appropriate behavior, a path planning component, and an interface to the vehicle control. Figure 3 depicts a block diagram of the information flow in the autonomous system. Spatial information from the sensors is combined into a static 2D map of the environment. Moving objects are treated differently. Such dynamic objects also include traffic participants that are able to move but have zero velocity at the moment. To detect moving objects, the spatial measurements of the lidar sensor are clustered and tracked with a multi-hypothesis approach. To detect possibly moving objects (which are currently standing still), a simple form of reasoning is used: if an object has the size of a car and is located on a detected lane, it is considered to be probably moving. Lane markers are detected in the reflectance data of the main lidar. Together with the road network map (RNDF), the absolute position obtained from the DGPS/INS system and the mission plan (MDF), this information serves as input for the situation assessment and the subsequent behavior generation. Most of the time, the behavior will result in a driveable trajectory. If a road is blocked or the car has to be parked, modules for special maneuvers, like the parking lot navigation module, are activated.
Fig. 3. Overview of the software architecture and the information flow
3.1 Environmental Mapping
Accurate and robust detection of obstacles at a sufficient range is an essential prerequisite for avoiding obstacles on the road and in unstructured environments like parking lots. AnnieWAY uses a 2D grid structure g_E(u) in which each cell stores information on the local elevation. The grid map is constructed and updated asynchronously by the lidar sensors. The elevation g_E(u) of the grid is the height range at a given cell position:
$$g_E(u) = \max_{u \in U,\, l \in L} h(u, l) \;-\; \min_{u \in U,\, l \in L} h(u, l),$$
where L denotes the set of points recorded in a single rotation, U is the region representing the size of one grid cell (30 cm × 30 cm), and h is the vertical component of each scan point. The map is shifted along with the motion of the vehicle. This restricts the size of the map to an area around the vehicle while the cells remain bound to an absolute position.
Fig. 4. Example for the mapping of 3D lidar data (left) onto a 2D grid (right)
Figure 4 shows an example of our mapping algorithm: the left side shows the raw sensor information, and the right side shows the map resulting from these data.
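A minimal version of the elevation grid g_E(u) can be sketched as follows; the grid dimensions, the vehicle-centred indexing and the handling of empty cells are simplifications of this sketch, and the scrolling of the map with the vehicle is omitted.

```cpp
// Height-range elevation grid updated from lidar returns (sketch).
#include <algorithm>
#include <limits>
#include <vector>

class ElevationGrid {
public:
    ElevationGrid(int cols, int rows, double cellSize = 0.30)
        : cols_(cols), rows_(rows), cell_(cellSize),
          minH_(cols * rows, std::numeric_limits<double>::max()),
          maxH_(cols * rows, std::numeric_limits<double>::lowest()) {}

    // Insert one scan point given in vehicle-centred metric coordinates.
    void addPoint(double x, double y, double h) {
        int u = static_cast<int>(x / cell_) + cols_ / 2;
        int v = static_cast<int>(y / cell_) + rows_ / 2;
        if (u < 0 || v < 0 || u >= cols_ || v >= rows_) return;
        int i = v * cols_ + u;
        minH_[i] = std::min(minH_[i], h);
        maxH_[i] = std::max(maxH_[i], h);
    }

    // g_E(u): height range of the cell, 0 if the cell holds no measurement.
    double elevation(int u, int v) const {
        int i = v * cols_ + u;
        if (maxH_[i] < minH_[i]) return 0.0;     // empty cell
        return maxH_[i] - minH_[i];
    }

private:
    int cols_, rows_;
    double cell_;
    std::vector<double> minH_, maxH_;
};
```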
3.2 Lane Marker Detection
Lane markers are detected from the intensity readings of the lidar as depicted in the top of Fig. 5. In contrast to camera images, the laser reflectivity map is insensitive to background light and shadows, while producing only a sparse intensity image. In order to increase the density of the lane marker information, subsequent scans are registered and accumulated employing INS information, similar to the obstacle map described in the last subsection. An example of a resulting map is shown in the bottom of Fig. 5. Lane markers are then detected in the Radon transform domain of the accumulated reflectivity data. While Fig. 5 illustrates that the algorithm works well in real environments, it also clearly reveals a significant offset between the road network provided by the organizer and the visible lane markers. Thus even the correct detection of visible lane markers could lead to wrong or misleading lane positions in the competition. Hence the algorithm was suspended during the Urban Challenge.
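Line extraction from the accumulated reflectivity map can be illustrated with a Radon/Hough-style accumulator over (theta, rho); the discretization, the input representation as thresholded bright cells and the restriction to a single dominant line are assumptions of this sketch.

```cpp
// Dominant-line extraction with a (theta, rho) accumulator (sketch).
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

struct Cell { double x, y; };   // bright (lane-marker) cell in map coordinates

// Returns the dominant line as (theta, rho) with x*cos(theta)+y*sin(theta)=rho.
std::pair<double, double> dominantLine(const std::vector<Cell>& cells,
                                       double rhoMax, int nTheta = 180,
                                       int nRho = 400) {
    const double PI = 3.14159265358979323846;
    std::vector<int> acc(static_cast<std::size_t>(nTheta) * nRho, 0);
    const double dTheta = PI / nTheta;
    const double dRho = 2.0 * rhoMax / nRho;
    for (const Cell& c : cells) {
        for (int t = 0; t < nTheta; ++t) {
            double theta = t * dTheta;
            double rho = c.x * std::cos(theta) + c.y * std::sin(theta);
            int r = static_cast<int>((rho + rhoMax) / dRho);
            if (r >= 0 && r < nRho) ++acc[static_cast<std::size_t>(t) * nRho + r];
        }
    }
    std::size_t best = 0;
    for (std::size_t i = 1; i < acc.size(); ++i)
        if (acc[i] > acc[best]) best = i;
    int tBest = static_cast<int>(best) / nRho, rBest = static_cast<int>(best) % nRho;
    return {tBest * dTheta, rBest * dRho - rhoMax};
}
```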
3.3 Tracking of Dynamic Objects
Driving in urban environments requires capturing and estimating the dynamics of other traffic participants in real time. In our vehicle we use a detection and
Fig. 5. Intensity readings of the lidar (top). Lane marker map, the estimated current lane segment and a part of the original road net (bottom).
tracking system that consists of two parts: clustering of point cloud/image data into objects, and tracking of these objects in subsequent sensor frames. Every cluster that cannot be associated with a known object triggers the initialization of a new object on the map. We use a linear Kalman filter both to predict the next location of a moving object and to estimate useful quantities in the tracking process which cannot be measured directly with our sensors, such as velocities or accelerations.
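The clustering step can be illustrated by a simple single-linkage region growing over ground-projected points; the 0.8 m distance threshold and the O(n²) neighbour search are assumptions chosen for clarity, not the on-board implementation.

```cpp
// Euclidean single-linkage clustering of ground-projected lidar points (sketch).
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

struct Pt { double x, y; };

std::vector<std::vector<Pt>> clusterPoints(const std::vector<Pt>& pts,
                                           double maxDist = 0.8 /* m */) {
    std::vector<bool> used(pts.size(), false);
    std::vector<std::vector<Pt>> clusters;
    for (std::size_t seed = 0; seed < pts.size(); ++seed) {
        if (used[seed]) continue;
        std::vector<std::size_t> queue{seed};
        used[seed] = true;
        std::vector<Pt> cluster;
        while (!queue.empty()) {
            std::size_t i = queue.back();
            queue.pop_back();
            cluster.push_back(pts[i]);
            for (std::size_t j = 0; j < pts.size(); ++j) {
                if (used[j]) continue;
                double dx = pts[i].x - pts[j].x, dy = pts[i].y - pts[j].y;
                if (std::sqrt(dx * dx + dy * dy) <= maxDist) {
                    used[j] = true;
                    queue.push_back(j);
                }
            }
        }
        clusters.push_back(std::move(cluster));
    }
    return clusters;
}
```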
3.4 Mission and Maneuver Planning
As a first preprocessing step for the planning, all elements of the RNDF (lanes, checkpoints, exits, etc.) are converted to a graph-based representation. RNDF waypoints form the vertices of the graph; lanes and exits are represented by graph edges. Information such as distances, lane boundaries, and speed limits annotates the respective graph edges. These annotations can be updated dynamically, e.g. to describe a road blockage. The graph is used as input to the mission planner, which finds the optimal route from one checkpoint to another using an A* graph search algorithm. The search process is repeated for every pair of subsequent checkpoints
in the MDF. The resulting route is written into the central real-time database together with all relevant information. Besides the obvious application of driving the shortest route, this enables other modules to make use of the map data; e.g. the lane recognition is supported by the a priori knowledge of the existence and type of lane markings. In addition to the information provided by the RNDF, the map also contains virtual turnoff lanes at intersections. A virtual lane connects each exit waypoint with a corresponding entry waypoint and is generated assuming standard intersection geometry. If deviations between the road network map and reality are detected, the map can be updated dynamically, enabling a safer and faster rerun of the respective lane. High-level behavior decisions of the car are derived from the mission plan, together with the map data and the car's current position, by a concurrent hierarchical state machine. The different states are also responsible for behavior in accordance with the California traffic rules. Figure 6 shows the main level of the state machine used and the sublevel for intersection handling. The hierarchical structure simplifies the design and debugging of the state machine.
Fig. 6. Example from the concurrent hierarchical state machine used to model traffic situations and the behavior
3.5 Vehicle Control
The last step of the processing chain is the vehicle control, which can be separated into lateral and longitudinal control. For the lateral movement of the vehicle, a software state-space controller was chosen. This controller minimizes the distance and heading error of the car as it
moves along the planned curve. A feed-forward term that uses the curvature of the planned curve considerably increases accuracy. Since the vehicle's longitudinal dynamics are nonlinear, a compensation algorithm was chosen that converts a requested acceleration into brake pressure and accelerator pedal values. The nonlinearity was compensated with the inverse of the static characteristic (engine and brake system). Velocity-dependent disturbances like wind drag and the rolling resistance of the wheels were measured in advance and accounted for in the control structure. For unpredictable disturbances, such as additional wind, an integrator was added. A higher-level control strategy ensures bumpless transfer between a velocity, a stopping, and a following controller, whereby the output of each controller is the desired acceleration (see Fig. 7).
Fig. 7. High level strategy for longitudinal control
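The longitudinal control scheme described above can be sketched as follows: a requested acceleration is passed through an inverted static characteristic to obtain a pedal command, while an integrator compensates unpredictable disturbances. The lookup values, gains and the linear brake mapping are purely illustrative assumptions, not the characteristics measured on the vehicle.

import bisect

class LongitudinalController:
    """Illustrative inverse-characteristic compensation with disturbance integrator."""

    def __init__(self, ki=0.05):
        # Assumed static characteristic: accelerator pedal position -> steady-state
        # acceleration (m/s^2). Real values would be measured on the vehicle.
        self.pedal_points = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
        self.accel_points = [0.0, 0.5, 1.2, 2.0, 2.8, 3.5]
        self.ki = ki                 # integrator gain for unpredictable disturbances
        self.integrator = 0.0

    def _inverse_characteristic(self, accel):
        """Invert the static engine characteristic by linear interpolation."""
        i = bisect.bisect_left(self.accel_points, accel)
        if i <= 0:
            return 0.0
        if i >= len(self.accel_points):
            return 1.0
        a0, a1 = self.accel_points[i - 1], self.accel_points[i]
        p0, p1 = self.pedal_points[i - 1], self.pedal_points[i]
        return p0 + (p1 - p0) * (accel - a0) / (a1 - a0)

    def command(self, desired_accel, accel_error, dt):
        """Return (pedal, brake) in [0, 1] for a requested acceleration."""
        self.integrator += self.ki * accel_error * dt   # compensates wind, grade, ...
        compensated = desired_accel + self.integrator
        if compensated >= 0.0:
            return self._inverse_characteristic(compensated), 0.0
        # Negative request: map to brake pressure (assumed linear here).
        return 0.0, min(1.0, -compensated / 5.0)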
4 Results
Originally, 89 teams entered the competition, 11 of which were sponsored by the organizer. After several stages, 36 of those teams were selected for the semifinal. There, AnnieWAY safely performed a variety of maneuvers, including
– regular driving on lanes
– turning at intersections with oncoming traffic
– lane change maneuvers
– vehicle following and passing
– following the order of precedence at 4-way stops
– merging into moving traffic
Although the final event was originally planned to challenge 20 teams, only 11 finalists were selected by the organizers due to safety issues. AnnieWAY entered the final and was able to conduct a variety of driving maneuvers. It drove collision-free, but stopped due to a software exception in one of the modules.
Figure 8 depicts three examples of the vehicle's actual course, taken from a log file and superimposed on an aerial image. The rightmost figure shows the stopping position in the finals.
Fig. 8. Three steps of AnnieWAY's course driven autonomously in the finals
The competition was won by CMU, followed by the teams of Stanford and Virginia Tech.
5 Conclusions
The autonomous vehicle AnnieWAY is capable of driving through urban scenarios and successfully entered the finals of the 2007 DARPA Urban Challenge competition. In contrast to earlier competitions, the Urban Challenge required conducting missions in 'urban' traffic, i.e., in the presence of other autonomous and human-operated vehicles. The major challenge imposed was collision-free and rule-compliant driving in traffic. AnnieWAY is based on a simple and robust hardware architecture. In particular, we rely on a single computer system for all tasks but low-level control. Environment perception is mainly conducted by a roof-mounted laser scanner that measures range and reflectivity for each pixel. While the former is used to provide 3D scene geometry, the latter allows robust lane marker detection. Mission and maneuver selection is conducted via a concurrent hierarchical state machine that specifically ascertains behavior in accordance with California traffic rules. More than 100 hours of urban driving without human intervention in complex urban settings with multiple cars, correct precedence decisions at intersections and, last but not least, the entry into the finals underline the performance of the overall system.
Acknowledgements. The authors gratefully acknowledge the fruitful collaboration of their partners from Universität Karlsruhe, Technische Universität München and Universität der Bundeswehr München. Special thanks are directed to Annie Lien for her instant willingness and dedication as our official team leader. This work would not have been possible without the ongoing research of the Transregional Collaborative Research
Centre 28 Cognitive Automobiles. Both projects cross-fertilized each other and revealed significant synergy. The authors gratefully acknowledge support of the TCRC by the Deutsche Forschungsgemeinschaft (German Research Foundation).
The Area Processing Unit of Caroline - Finding the Way through DARPA's Urban Challenge Kai Berger, Christian Lipski, Christian Linz, Timo Stich, and Marcus Magnor Computer Graphics Lab, Mühlenpfordstrasse 23, 38106 Braunschweig, Germany {berger,lipski,linz,stich,magnor}@cg.tu-bs.de http://www.graphics.tu-bs.de
Abstract. This paper presents a vision-based color segmentation algorithm suitable for urban environments that separates an image into areas of drivable and non-drivable terrain. Assuming that a part of the image is known to be drivable terrain, other parts of the image are classified by comparing the Euclidean distance of each pixel’s color to the mean colors of the drivable area in real-time. Moving the search area depending on each frame’s result ensures temporal consistency and coherence. Furthermore, the algorithm classifies artifacts such as white and yellow lane markings and hard shadows as areas of unknown drivability. The algorithm was thoroughly tested on the autonomous vehicle ’Caroline’, which was a finalist in the 2007 DARPA Urban Challenge.
1 Introduction
The algorithm described in this paper is part of the software used in the autonomous vehicle 'Caroline' that entered the DARPA Urban Challenge 2007 as one of the finalists. Although the autonomous vehicle is able to perform basic driving tasks without the proposed algorithm, it is needed in situations with terrain indistinguishable by other sensors, i.e., sections without proper lane markings, streets without high curbs, and off-road tracks.

Sensor Setup. In order to understand the role of the area processing unit, a look at Caroline's sensor setup is in order. Several active scanners, i.e., three IBEO laser scanners, two radars and two lidars, are used to detect large objects such as walls, traffic cones and other cars (see Fig. 1, red boxes). A sensor fusion module combines the sensor data in order to provide a single source of information. A stereo camera system and a pair of SICK laser sensors facing the ground are utilized by the ground module to reconstruct surface geometry (see Fig. 1, blue grid). This information can be used to avoid small obstacles and uneven terrain. The surface data is enriched by the area processing algorithm described in this paper (see Fig. 1, orange pixels). Lane markings and stop lines are detected by the lane processing module and its four color cameras (see Fig. 1, yellow lines and arrows). The Artificial Intelligence module takes all available sources of information into account and computes a trajectory which is executed by the actorics module (see Fig. 1, green path).
Fig. 1. The Sensor Setup of the autonomous vehicle ’Caroline’
2 Related Work
As the foundation for the area detection algorithm, we used the real-time approach suggested by Thrun et al. [1] for the DARPA Grand Challenge in October 2005. The basic idea is to consider a given region in the current image as drivable. The predominant mean color values in that area are retrieved and compared to the pixel values in the entire image. Similar pixels are marked as drivable. The algorithm was designed for off-road terrain; therefore, it cannot be applied to urban scenarios without fundamental modifications. We describe the algorithm in the next section. The Expectation Maximization (EM) algorithm used for color clustering in this approach is thoroughly described in Refs. [2] and [3]. Instead of the EM algorithm, the KMEANS algorithm is also suitable for color clustering, as described in Ref. [4]. An algorithm similar to the one mentioned above points out the advantage of color spaces other than RGB [5], e.g., the HSI space.
2.1 The Stanford University Algorithm for Detecting Drivable Terrain
The main idea of Algorithm 1 is to use the output of the laser scanner, normally a scan line, which is integrated over time into a height map in world coordinates. A polygon is defined that covers an area in front of the car identified as level and therefore drivable surface. This polygon is transformed into the image coordinates of the camera and clipped to the image boundaries. The resulting polygon is considered the area that is drivable. In this area, the pixels' color values are collected and grouped into color clusters, for example bright gray and yellow.
Algorithm 1. The basic algorithm for detecting drivable terrain

Data: a camera image I_camera and a polygon P_Scanner
Result: a classified image I_area, segmented into drivable and undrivable regions

begin
  Retrieve an image I_camera and a polygon P_Scanner that marks the area which is definitely drivable
  Copy I_camera to I_area
  Fill a list of current colors with the values of each pixel of I_area inside the polygon P_Scanner
  Compute a set of n current color clusters and refine them with a clustering method, for example the Expectation Maximisation algorithm [2], [3]
  /* Insert these n Gaussians into a memory list of m > n Gaussians */
  if the list has not reached its maximum size m then
    insert each of the n Gaussians into the list
  else
    foreach current Gaussian i do
      /* Compare Gaussian i with each of the m memorized Gaussians j using this form of the Mahalanobis distance: */
      d(i, j) = (μ_i − μ_j)^T (Σ_i + Σ_j)^{-1} (μ_i − μ_j)
      save the smallest value d(i, j) and its position j
      if d(i, j) is smaller than or equal to a certain similarity threshold φ then
        /* Gaussian j is adapted to Gaussian i */
        μ_j ← (α_i · μ_i + α_j · μ_j) / (α_i + α_j)
        Σ_j ← (α_i · Σ_i + α_j · Σ_j) / (α_i + α_j)
        α_j ← α_i + α_j
        /* Here α is the weight of the Gaussian, e.g. according to the amount of represented colors */
      else
        erase the memorized Gaussian k whose weight α_k has the smallest value, and insert the current Gaussian i
      end
    end
  end
  foreach memorized Gaussian j do
    multiply its weight α_j by a factor γ < 1
    /* This step ages the color clusters, so that they do not rest too long in the memory list */
  end
  foreach pixel x in the given image I_area do
    foreach memorized Gaussian j do
      compute the probability p(x, μ_j, Σ_j) with the distribution function
    end
    if p(x, μ_j, Σ_j) is higher than a threshold χ for at least one j then
      classify the pixel x as drivable terrain
    else
      classify the pixel x as undrivable terrain
    end
  end
end
These color clusters are compared to the color values of each pixel in the image using distance measurements in color space. If the resulting distance is smaller than a given threshold, the area covered by the pixel is marked as drivable. The main benefit of the algorithm is that the range in which drivability can be estimated is extended from only a few meters to more than 50 meters.
Fig. 2. (a) Original picture; (b) drivability grid, the output of Algorithm 1, with values ranging from black (undrivable) to white (drivable). The yellow line in (a) is marked as undrivable (black in (b)) because its color differs too much from the street color.
2.2 Problems Arising in Urban and Suburban Terrain
Designed for competing in a 132-mile desert course, the basic algorithm succeeds well in explicit off-road areas, which are bounded by sand hills or shrubs. When testing in urban areas, new problems occur, because there are streets with lane marks in different colors and tall houses casting big shadows. The yellow lane marks are often not inside the area of the polygon P_Scanner (the output of the laser scanner), so they are not detected as drivable. Especially non-dashed lines prohibit a lane shift (see Fig. 2), and stop lines seem to block the road. Another problem is shadows cast by tall buildings during the afternoon. Small shadows from trees in fairly diffuse light change the color of the street only slightly and can be adapted easily. But huge and dark shadows appear as a big undrivable area (see Fig. 3). Even worse: once inside a shadowed area, the camera auto exposure adapts to the new light situation, such that the area outside the shadow becomes overexposed and appears again as a big undrivable area (see Fig. 4). Another problem during the afternoon is the car's own shadow, referred to in this paper as 'egoShadow', when the sun is behind the car. Sometimes it is marked as undrivable; sometimes it is completely adapted and marked as drivable, but the whole rest of the street is marked as undrivable (see Fig. 5). A fourth problem occurs when testing on streets without curbs that are bounded by mowed grassland. The laser scanner does not recognize the grassland as undrivable, because its height is about the same as that of the street. This causes the vehicle to move onto the grassland, so that its colors are adapted by the area processing algorithm, which consequently keeps the car on the green terrain.
Fig. 3. (a) Original picture; (b) drivability grid. Huge and dark shadows (a, left) differ too much from the street color (b, dark).
Fig. 4. (a) Original picture; (b) drivability grid. Inside shadows, the exposure is automatically adapted (a); areas outside the shadow are overexposed and are marked as undrivable (b, dark).
Fig. 5. (a) Original picture; (b) drivability grid. The vehicle's own shadow can lead to problems (a), for example if only the shadowed region is used to detect drivable regions (b, white).
3 The Algorithm Suitable to Urban and Suburban Terrain

3.1 Alterations to the Basic Algorithm
Fig. 6. The processing steps of the area detecting algorithm

Differing from the original algorithm, our implementation does not classify regions of the image as drivable and undrivable. The result of our distance function
is mapped to an integer number ranging from 0 to 127, instead of producing a binary decision via a threshold. In addition, a classification into the categories 'known drivability' and 'unknown drivability' is applied to each pixel. These alterations are required because the decision about the drivability of a certain region is not made by the algorithm itself, but by a separate sensor fusion application. For the sake of performance, the KMEANS algorithm is chosen instead of the EM algorithm, because the resulting grids are almost of the same quality but the computation is considerably faster. Tests have shown that better results can be achieved by using a color space which separates luminance and chrominance into different channels, e.g., HSV, LAB or YUV. The problem with HLS and HSV is that the chrominance information is coded in one hue channel and the color distance is radial: for example, the color at 358° is very similar to the one at 2°, but the two are numerically far apart. Thus a color space is chosen where the chrominance information is coded in two channels, for example YUV or LAB, where the similarity between two colors can be expressed as a Euclidean distance.
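A minimal sketch of these alterations, assuming OpenCV and NumPy: colors inside the search polygon are clustered with k-means in YUV space, and each pixel's Euclidean distance to the nearest cluster mean is mapped to a drivability value between 0 and 127. The cluster count and the distance normalization are illustrative parameters, not the values used in Caroline.

import cv2
import numpy as np

def drivability_grid(frame_bgr, polygon_mask, n_clusters=3):
    """Map each pixel to a drivability value in [0, 127].

    frame_bgr:    camera image (BGR, as delivered by OpenCV).
    polygon_mask: boolean mask of the search polygon assumed to be drivable
                  (must contain at least n_clusters pixels).
    """
    yuv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YUV).astype(np.float32)

    # Cluster the colors inside the search polygon with k-means.
    samples = yuv[polygon_mask].reshape(-1, 3)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, _, centers = cv2.kmeans(samples, n_clusters, None, criteria,
                               3, cv2.KMEANS_RANDOM_CENTERS)

    # Euclidean distance of every pixel to its nearest cluster mean.
    pixels = yuv.reshape(-1, 3)
    dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    nearest = dists.min(axis=1).reshape(yuv.shape[:2])

    # Map small distances to high drivability; max_dist is a tuning assumption.
    max_dist = 80.0
    drivability = 127.0 * np.clip(1.0 - nearest / max_dist, 0.0, 1.0)
    return drivability.astype(np.uint8)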
3.2 Preprocessing
To cope with the problems of large shadows and lane marks, a preprocessing system was developed. Before the camera picture is processed, it is handed to the following preprocessors: the white preprocessor (masking out lane marks and overexposed pixels), the black preprocessor (masking out huge and dark shadows), the yellow preprocessor (masking out lane marks), and the egoShadow preprocessor (masking out the car's shadow in the picture). The output of each preprocessor is a bit mask (1: feature detected, 0: feature not detected), which is afterwards used in the pixel classification process to mark the particular pixel as 'unknown', meaning that the vision-based area processor cannot state valid information about the area represented by that pixel. In the following, the concept of each preprocessor is briefly described.

White Preprocessor. In order to cope with overexposed image areas while traversing shadows, pixels whose brightness value is larger than a given threshold are detected. The preprocessor converts the given image into HSV color space and compares each pixel's intensity value with a given threshold. If the value is above the threshold, the corresponding pixel of the output mask is set to 1.

Black Preprocessor. As huge dark shadows differ too much from the street color and would therefore be labeled as impassable terrain, pixels whose brightness value is smaller than a given threshold are masked out. The preprocessor analogously converts the given image into HSV color space and compares each pixel's intensity value with a given threshold. If the value is below the threshold, the corresponding pixel of the output mask is set to 1.
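The white and black preprocessors can be sketched in a few lines, assuming OpenCV; the threshold values are illustrative assumptions rather than the tuned values used on the vehicle.

import cv2
import numpy as np

def white_black_masks(frame_bgr, white_thresh=230, black_thresh=40):
    """Bit masks for overexposed (white) and deeply shadowed (black) pixels.

    A value of 1 marks 'feature detected', i.e. drivability unknown there.
    Threshold values are illustrative; the real system tunes them per camera.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    value = hsv[:, :, 2]                                    # intensity channel

    white_mask = (value > white_thresh).astype(np.uint8)    # overexposure
    black_mask = (value < black_thresh).astype(np.uint8)    # hard shadows
    return white_mask, black_mask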
Yellow Preprocessor. Small areas of the image which are close to yellow in the RGB color space are detected, so that yellow lane markings are not labeled as undrivable but as areas of unknown drivability. For each pixel of the given image, the RGB ratios are checked to detect yellow lane markings. If the green value is larger than the blue value and larger than, or only a little smaller than, the red value, the pixel is not considered yellow. If the red value is larger than the sum of the blue and the green value, the pixel is also not considered yellow. Otherwise, the pixel is set to min(R, G)/B − 1. Afterwards, a duplicate of the computed bit mask is smoothed using the mean filter, dilated, and subtracted from the bit mask to eliminate huge areas. For different areas of the image, different kernel sizes must be applied. In the end, only the relatively small yellow areas remain. A threshold determines the resulting bit mask of this preprocessor.

EgoShadow Preprocessor. When the sun is located behind the car, the vehicle's own shadow appears in the picture and is either marked as undrivable, or it is the only area marked as drivable. Therefore, a connected area right in front of the car whose brightness values are low is searched for. At the beginning of the whole computing process, a set of base points p(x, y) is specified, which mark the border between the engine bonnet and the ground in the picture. The ROI in each given picture is set to y_max, the maximum row of the base points, so that the engine bonnet is cut off. From these base points the preprocessor starts a flood fill in a copy of each given image, taking advantage of the fact that the car's shadow appears in similar colors. Then the given picture is converted to HSV color space and the flood-filled pixels are checked for a sufficiently small intensity value. Finally, the number of flood-filled pixels is compared to a threshold which marks the maximum pixel area that can be covered by the car's own shadow.
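The following sketch is a simplified stand-in for the yellow preprocessor: a crude yellowness score based on RGB ratios, followed by mean filtering, dilation and subtraction to keep only small structures such as lane markings. The ratio test, kernel size and threshold are assumptions and do not reproduce the exact rules described above.

import cv2
import numpy as np

def yellow_lane_mask(frame_bgr, kernel_size=15, diff_thresh=0.5):
    """Simplified sketch of the yellow preprocessor (parameters are assumptions)."""
    b, g, r = cv2.split(frame_bgr.astype(np.float32) + 1.0)  # +1 avoids division by zero

    # Crude yellowness test: red and green clearly above blue, red not dominating.
    yellowish = (g > b) & (r > b) & (r < g * 1.4) & (r + g > 2.0 * b)
    score = np.where(yellowish, np.minimum(r, g) / b - 1.0, 0.0).astype(np.float32)

    # Smooth and dilate a copy, then subtract it to suppress large yellow areas,
    # so that only thin structures such as lane markings remain.
    blurred = cv2.blur(score, (kernel_size, kernel_size))
    dilated = cv2.dilate(blurred, np.ones((kernel_size, kernel_size), np.uint8))
    small_structures = score - dilated

    return (small_structures > diff_thresh).astype(np.uint8)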
3.3 The Dynamic Search Polygon
Using the output of the laser scanner to determine the input polygon works quite well if the drivable terrain is bounded by tall objects like sand hills or shrubs. But in urban terrain, the output of the laser scanner must be sensitive to height differences smaller than curbs (10 to 20 cm), which becomes problematic if the street runs along a hill where the height difference is much larger. So the laser scanner polygon is no longer a reliable source, especially because both modules solve different problems: the laser scanner focuses on range-based obstacle detection [5], which is based on analyzing the geometry of the surrounding environment, whereas the vision-based area processor follows an appearance-based approach. For example, driving through the green grass next to the street is physically possible, and therefore not prohibited by a range-based detection approach, but it must be prevented by the appearance-based system. This led to the idea of implementing a self-moving search polygon which has a static shape but is able to move along both the x- and the y-axis within a given boundary polygon P_boundary. The initial direction is zero. Every movement is computed using the output of the
last frame's pixel classification. For the computation, a bumper polygon P_bumper is added, which surrounds the search polygon. The algorithm proceeds in the following steps:

Algorithm 2. Dynamic Search Polygon Algorithm

Data: the last frame's grid of classified pixels, the current bumper polygon P_bumper
Result: updated position of the polygon P_bumper (and of the search polygon)

begin
  Initialize three variables pixelSum, weightedPixelSumX, weightedPixelSumY to zero
  foreach pixel of the grid which is inside the bumper do
    count the pixel in pixelSum
    if the drivability of the pixel is above a given threshold then
      add the pixel's x-position relative to the midpoint of P_bumper to weightedPixelSumX
      add the pixel's y-position relative to the midpoint of P_bumper to weightedPixelSumY
    end
  end
  Compute x_moment = weightedPixelSumX / pixelSum and y_moment = weightedPixelSumY / pixelSum, and round the results to integer values
  /* x_moment and y_moment give the amount and direction of the movement of P_bumper in the x- and y-direction, respectively */
  Add x_moment and y_moment to the current midpoint of P_bumper to obtain the new midpoint of P_bumper
  Check the new midpoint of P_bumper against the edges of P_boundary and adjust the values if necessary
  Add x_moment and y_moment to the current midpoint of the search polygon to obtain the new midpoint of the search polygon (see Fig. 7)
  To prevent the search polygon from getting stuck in a certain corner, check whether x_moment = 0 or y_moment = 0
  /* For example, if x_moment = 0, it is evaluated whether the midpoint of P_bumper is located to the right or to the left of the midpoint of P_boundary; x_moment is set to 1 if P_bumper is located to the left, otherwise to -1. An analogous check is performed for y_moment. */
end
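A minimal NumPy sketch of the moment computation in Algorithm 2, assuming a rectangular bumper region on the pixel grid; the interface and the clamping against the boundary polygon are simplified assumptions.

import numpy as np

def update_search_polygon(drivability, bumper_center, bumper_halfsize,
                          boundary_min, boundary_max, drivable_thresh=64):
    """Shift the search polygon by the 'moment' of drivable pixels in the bumper.

    drivability:      2D array of per-pixel drivability values (0..127).
    bumper_center:    (cx, cy) midpoint of the rectangular bumper polygon (pixels).
    bumper_halfsize:  (hx, hy) half extents of the bumper polygon (pixels).
    boundary_min/max: limits of the boundary polygon for the midpoint.
    """
    cx, cy = bumper_center
    hx, hy = bumper_halfsize
    ys, xs = np.mgrid[cy - hy:cy + hy, cx - hx:cx + hx]
    region = drivability[cy - hy:cy + hy, cx - hx:cx + hx]

    pixel_sum = region.size
    drivable = region > drivable_thresh
    # Mean offset of drivable pixels relative to the bumper midpoint.
    x_moment = int(round((xs[drivable] - cx).sum() / pixel_sum))
    y_moment = int(round((ys[drivable] - cy).sum() / pixel_sum))

    # Avoid getting stuck: nudge toward the boundary center if a moment is zero.
    if x_moment == 0:
        x_moment = 1 if cx < (boundary_min[0] + boundary_max[0]) // 2 else -1
    if y_moment == 0:
        y_moment = 1 if cy < (boundary_min[1] + boundary_max[1]) // 2 else -1

    new_cx = int(np.clip(cx + x_moment, boundary_min[0], boundary_max[0]))
    new_cy = int(np.clip(cy + y_moment, boundary_min[1], boundary_max[1]))
    return new_cx, new_cy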
4 Experimental Results

4.1 Implementation and Performance
Fig. 7. The dynamic search polygon (green trapezoid) at time t (a) and at time t + 1 (b): the polygon is transposed to the right because the calculated moment is positive in x-direction

The algorithm has been implemented using Intel's OpenCV library. The framework software is installed on an Intel Dual Core car PC with a Linux
operating system and communicates with an IDS uEye camera via USB. The resolution of a frame is 640×480, but the algorithm uses downsampled images of size 160×120 to attain an average performance of 10 frames per second. The algorithm is confined to a region of interest of 160×75, cutting off the sky and the engine bonnet.
Fig. 8. Results with the black preprocessor. (a) Original picture; (b) classification results without the black preprocessor; (c) with the black preprocessor, the critical region is classified as unknown (red).
Fig. 9. Results with the white preprocessor. (a) Original picture; (b) classification results without the white preprocessor; (c) with the white preprocessor, the critical region is classified as unknown (red).
Fig. 10. Results with the yellow preprocessor. (a) Original picture; (b) classification results without the yellow preprocessor; (c) with the yellow preprocessor, the lane marks are classified as unknown (red).
4.2 Preprocessing
In Fig. 8 the difference between normal area processing and processing with the black preprocessor is shown. Without the preprocessor, the large shadow of a building located to the left of the street is too dark to be similar to the street color and is classified as undrivable. The black preprocessor detects the shadowy pixels which are classified as unknown (red). The problem of overexposed areas in the picture is shown in Fig. 9, where the street’s color outside the shadow is almost white and therefore classified as undrivable in the normal process. The white preprocessor succeeds in marking the critical area as unknown, so that the vehicle has no problem in leaving the shadowy area. Yellow lane marks differ from pavement in color space so that a human driver can easily detect them even in adverse lighting conditions. This advantage turns out to be a disadvantage for a standard classification system, which classifies the lane marks also as undrivable, as shown in Fig. 10: Lane marks are interpreted as tiny walls on the street. To counter this problem, we use a preprocessing step which segments colors similar to yellow. To cope with different light conditions, the color spectrum has to be wider so that a brownish or grayish yellow is also detected. This leads to some false positives as shown in Fig. 10, but the
Fig. 11. Results with the egoShadow preprocessor. (a) Original picture; (b) classification results without the egoShadow preprocessor; (c) with the egoShadow preprocessor, the car's own shadow is classified as unknown (red).
disturbing lane marks are clearly classified as unknown. The vehicle is now able to change lanes without further problems. A problem with the vehicle's own shadow only occurs when the sun is located behind the vehicle, but in these situations the classification can deliver insufficient results. Fig. 11 shows that the shadowed area in front of the car is classified as unknown.
4.3 The Dynamic Search Polygon
The benefit of a search polygon that is transposed using the output of the last frame is tested by swerving, so that the car moves very close to the edges of the street. Fig. 12 shows the results when moving the car close to the left edge. As the static polygon touches a small green area, a somewhat green mean value is gathered, and so the resulting grid shows a certain amount of drivability in the grassland, whereas the dynamic polygon moves to the right of the picture to avoid touching the green pixels, so that the resulting grid does not show drivability on the grassland. A clearer difference can be observed in Fig. 13, where the car is positioned close to the right edge and the static polygon adapts to the grass color. The grid is fully drivable both on the street and the grassland.
Fig. 12. The same frame computed first with a static search polygon (a: original picture, b: drivability grid) and then with the dynamic polygon (c: original picture, d: drivability grid). The dynamic movement calculation led the polygon to move to the right (c).
Fig. 13. The same frame computed first with a static search polygon (a: original picture, b: drivability grid) and then with the dynamic polygon (c: original picture, d: drivability grid). The dynamic movement calculation led the polygon to move to the left (c).
This time the dynamic search polygon moves to the left, once again avoiding the grass colors, so that the resulting grid shows a sharp edge between the drivable street and the undrivable grass.
4.4 Sensor Fusion
The output of the area processing unit is finally handled in a ground fusion module, which combines the gradients of the laser scanner's height map with the drivability grid of the area processor. The grid is projected into world space, and its information is used in the areas in front of the car where no information from the laser scanner is obtained. In the cells where drivability values from both sensors are provided, the area information is used to enhance the quality of the information gathered by the laser scanner. All grid cells which are marked undefined both by the laser scanners and by the area processor are left as undefined. Especially in situations where no or only little reliable information from the laser scanner is provided, the benefit of the area processing unit becomes clear. On a driver's test area in northern Germany there is a height difference of only a few centimeters between the road and the grassland, so that no relevant gradients can be spotted. The same problems arise on an off-road course in Texas
(see Fig. 12), where the height field itself does not provide enough information about the drivable area. In both cases, the vision-based area processing unit can solve the problems and guide the car through the course.
5 Conclusion
This paper presented a vision-based color segmentation algorithm suitable for urban environments. Assuming that a region of the initial image is known to be drivable, it is able to track areas of similar colors over time. It copes with artifacts such as white and yellow lane markings and with difficult lighting situations. It also tracks drivable areas when the robot tends to leave them, i.e., the drivable terrain may just cover a small fraction of the overall image. The algorithm was successfully implemented on the autonomous vehicle ’Caroline’ which participated in the 2007 DARPA Urban Challenge finals.
References
1. Thrun, S., et al.: Stanley: The robot that won the DARPA Grand Challenge. Journal of Field Robotics 23(9), 661–692 (2006)
2. Duda, R.O., Hart, P.E.: Pattern Classification and Scene Analysis. Wiley, New York (1973)
3. Bilmes, J.A.: A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models. Technical Report ICSI-TR-97-021, International Computer Science Institute, Berkeley (1997)
4. Bradski, G., Kaehler, A., Pisarevsky, V.: Learning-based computer vision with Intel's Open Source Computer Vision Library. Intel Technology Journal 9(2), 126–139 (2005)
5. Ulrich, I., Nourbakhsh, I.: Appearance-based obstacle detection with monocular color vision. In: Proc. AAAI National Conference on Artificial Intelligence, Austin, TX, pp. 866–871 (2000)
6. Open Source Computer Vision Library, http://www.intel.com/research/mrl/research/opencv
Sensor Architecture and Data Fusion for Robotic Perception in Urban Environments at the 2007 DARPA Urban Challenge Jan Effertz Technische Universität Braunschweig, Institute of Control Engineering, Hans-Sommer-Str. 66, D-38106 Braunschweig, Germany
[email protected] http://www.ifr.ing.tu-bs.de
Abstract. We will demonstrate the sensor and data fusion concept of the 2007 DARPA Urban Challenge vehicle assembled by Team CarOLO, Technische Universität Braunschweig. The perception system is based on a hybrid fusion concept, combining object and grid based approaches in order to comply with the requirements of an urban environment. A variety of sensor systems and technologies is applied, providing a 360 degree view area around the vehicle. Within the object based subsystem, obstacles (static and dynamic) are tracked using an Extended Kalman Filter capable of tracking arbitrary contour shapes. Additionally, the grid based subsystem extracts drivability information about the vehicle's driveway by combining the readings of laser scanners, a mono and a stereo camera system using a Dempster-Shafer based data fusion approach. . . .
1 Introduction
Robotic perception is one of the key issues in autonomous driving. While current automotive approaches commonly target specific driver assistance systems, the 2007 DARPA Urban Challenge called for a more general realization, capable of detecting the broad variety of target types common in an urban environment. In order to fulfill these requirements, the Technische Universität Braunschweig developed a vehicle (further referred to as Caroline) equipped with a distributed sensor network, combining radar, rotating and fixed-beam LIDAR as well as image processing principles in order to create a consistent artificial image of the vehicle's surroundings. Different data fusion concepts have been implemented in a hybrid perception approach, delivering static and dynamic obstacles in an object-based fashion, and drivability as well as 3-dimensional ground profile information in a grid-oriented description.
2 Sensor Concept
A variety of sensor types originating from the field of driver assistance systems has been chosen to provide detection of static and dynamic obstacles in the vehicle's surroundings, as depicted in Fig. 1:
1. A stationary-beam LIDAR sensor has been placed in the front and rear section of the vehicle, providing a detection range of approx. 200 meters with an opening angle of 12 degrees. The unit is equipped with an internal preprocessing stage and thus delivers its readings in an object-oriented fashion, providing target distance, target width and relative target velocity with respect to the car-fixed sensor coordinate frame.
2. 24 GHz radar sensors have been added to the front, rear, rear left and rear right side of the vehicle. While the center front and rear sensors provide a detection range of approx. 150 meters with an opening angle of 40 degrees, the rear right and left sensors operate as blind-spot detectors with a range of 15 meters and an opening angle of 140 degrees due to their specific antenna structure. The front sensor acts as a stand-alone unit delivering object-oriented target data (position and velocity) through its assigned external control unit (ECU). The three radar sensors in the rear section are operated as a combined sensor cluster using an additional ECU, providing object-oriented target data in the same fashion as the front system. Seen from the post-processing fusion system, the three rear sensors can therefore be regarded as one unit.
3. Two Ibeo ALASCA XT laser scanners have been installed in the vehicle's front section, each providing an opening angle of 240 degrees with a detection range of approx. 60 meters. The raw measurement data of both front laser scanners is preprocessed and fused on their assigned ECU, delivering complex object-oriented target descriptions consisting of target contour information, target velocity and additional classification information. Additionally, the raw scan data of both laser scanners can be read by the fusion system's grid-based subsection.
4. One Ibeo ML laser scanner has been added to the rear side, providing detection capabilities similar to the two front sensors, with a reduced opening angle of 180 degrees due to its mounting position. All Ibeo sensors are based on a four-plane scanning principle with a vertical opening angle of 3.2 degrees between the top and bottom scan plane. Due to this opening angle, smaller pitch movements of the vehicle can be covered.
5. Two Sick LMS-291 laser scanners have been mounted on the vehicle's front roof section. The scanners are based on a single-plane technology. They have been set up to measure the terrain profile at 10 and 20 meters, respectively. The view angle has been limited to 120 degrees by software means.
6. A stereo vision system has been mounted behind the vehicle's front window, covering a view area of approx. 60 degrees within a range of 50 meters and providing 3-dimensional terrain profile data for all retrieved stereo vision points. A simple classification into the classes driveway, curb and obstacle is available as well.
7. A USB-based color mono camera has been placed on the vehicle's front roof section, covering an opening angle of approx. 60 degrees.

Fig. 1. Sensor setup: (1) Hella IDIS LIDAR system, (2) SMS UMRR medium range radar and SMS blind spot detectors (right and left), (3) IBEO Alasca XT laser scanners (fusion cluster), (4) Ibeo ML laser scanner, (5) Sick LMS 291 laser scanners, (6) stereo vision system, (7) IDS mono color camera
The above-described sensor architecture reflects the applied hybrid post-processing scheme. While sensors 1 to 4 deliver their data in an object-oriented fashion and are therefore treated within the system's object tracking and data fusion stage, the last three sensors are evaluated based on their raw measurement data in the grid-based subsection. A distributed data fusion system has been set up consisting of three interconnected units, each based on identical 2 GHz automotive personal computers suitable for long-term operation in a rugged environment. In order to balance the available computing power equally, the object tracking system has been split into two independent modules, covering the front and rear section independently. Therefore, two automotive PCs carry out data acquisition and data fusion for the front and rear object-detecting sensors, while the third PC is used to fuse the raw sensor readings of the Sick scanners, stereo vision system and mono color camera (Fig. 2). The units are interconnected using Fast Ethernet in order to provide the necessary data exchange.
3 Object Tracking System
The object tracking subsystem consists of several stages assembled in a pipes-and-filters pattern. First, during data acquisition, each sensor's object data is acquired at the assigned CAN bus interface. Due to the limited bandwidth and determinism of the CAN bus system, which is still standard for the majority of automotive driver assistance sensor systems, a private CAN bus has been chosen for each sensor. This way, conflicts in bus arbitration, which would lead to higher latencies and decrease the overall determinism, can be avoided. Upon reception, the sensor data is immediately timestamped with the vehicle system's global clock, which is
acquired using a GPS time stamp and distributed via the network time protocol (NTP). Furthermore, the sensor data is transformed into a global, earth-fixed reference frame for all further post-processing. This way, static obstacles (which are the majority of all targets) carry zero velocity, which eases state estimation later on. After data acquisition, the incoming object data is piped into the actual tracking system, which consists of three main stages as depicted in Fig. 3.

Fig. 2. Fusion architecture

Fig. 3. Object tracking structure
3.1 Data Association
Data association plays the key role in any tracking system. Imperfect data association leads inevitably to incorrect track data, since no state estimator can handle faulty assignments between an existing model (track) and an incoming new observation. Prior to introducing the actual data association, it is necessary to describe the track and measurement representation within Caroline's fusion system. Two different track representations are possible: a 6-dimensional model,

x_{6D} = \begin{pmatrix} x^{1..n} \\ y^{1..n} \\ v \\ a \\ \alpha \\ \omega \end{pmatrix}, \qquad (1)

with x^{1..n} and y^{1..n} being the global x- and y-coordinates of the n tracked contour points, v being the common contour velocity, a the common contour acceleration,
α the course angle and ω the course angle velocity. Furthermore, a reduced 4-dimensional model,

x_{4D} = \begin{pmatrix} x^{1..n} \\ y^{1..n} \\ v \\ \alpha \end{pmatrix}, \qquad (2)

takes into account that the majority of observed objects is of rather static nature and does not perform any maneuvers. Since the amount of information gathered by the sensor system is limited, it is not recommendable to spread this information over too many state variables if they are not necessary to describe the motion. Model switching is carried out, tracking slow and static objects with the reduced state model and fast-moving, maneuvering dynamic targets with the full 6D state vector. In contrast to other existing tracking systems in the automotive environment, the extension of the track state vector by an arbitrary number of contour points enables the system to detect objects of any shape, which is necessary to represent the full complexity expected in an urban environment. Similar to the track state vector, the measurement vector can be described through

y = \begin{pmatrix} x^{1..m} \\ y^{1..m} \\ v_x \\ v_y \end{pmatrix}, \qquad (3)

with x^{1..m} and y^{1..m} being the m measured contour point x- and y-coordinates with respect to the global earth-fixed reference frame, and v_x as well as v_y being the corresponding object's velocity vector. Depending on the sensor type, some parts of Eq. (3) will remain empty:
– The rotating laser scanners are capable of delivering up to 32 contour points with an assigned common velocity vector, the latter being the result of sensor-internal pretracking.
– The fixed-beam lidar sensors deliver only the positions of the right and left border of any detected object. Due to the underlying measurement principle (time of flight), they also cannot directly measure a target's velocity and therefore rely on differentiating the object's position within their own pretracking stage, which often results in poor velocity information.
– The radar sensors are only capable of delivering a single point of reflection, whose position on the target itself is not precisely known and can be randomly distributed. As an advantage over the laser-based sensor types, however, the relative radial target velocity can be measured precisely by evaluating the signal's Doppler shift.

For each incoming measurement object j, the corresponding existing track i is determined during data association in order to prepare state estimation within the main tracking stage. Due to the extension of the track state vector into a polyline, it is necessary to perform a two-step process. In the first step, a weight function is evaluated that determines the minimum Euclidean distance between the points on the track's polyline and the incoming sensor readings. In addition, the similarity of measured and tracked velocity is determined, and both quantities are summed up into a combined scalar track-to-measurement association weight,
w_{i,j} = a \cdot \min_{k,l} \left( \left| x_i^k - x_j^l \right| + \left| y_i^k - y_j^l \right| \right) + b \cdot \left( \left| v_i \cos(\alpha_i) - v_{x,j} \right| + \left| v_i \sin(\alpha_i) - v_{y,j} \right| \right). \qquad (4)

Gating is then applied to Eq. (4), passing only track-to-measurement pairings whose weight stays below a certain threshold into the second step of data association. Within that second step, the corresponding point-to-point matchings have to be evaluated, which assign the known tracked contour points to the incoming new observations. For this purpose, an association matrix is created,
\Omega = \begin{bmatrix} \left| x_i^1 - x_j^1 \right| + \left| y_i^1 - y_j^1 \right| & \cdots & \left| x_i^1 - x_j^m \right| + \left| y_i^1 - y_j^m \right| \\ \vdots & \ddots & \vdots \\ \left| x_i^n - x_j^1 \right| + \left| y_i^n - y_j^1 \right| & \cdots & \left| x_i^n - x_j^m \right| + \left| y_i^n - y_j^m \right| \end{bmatrix}, \qquad (5)

which can easily be optimized using standard approaches like the Hungarian or nearest-neighbor algorithms [1,2]. The retrieved contour point associations are then stored and later evaluated during the main tracking stage.
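The second association step can be sketched as follows, assuming SciPy: the association matrix Ω of Eq. (5) is built from the point distances and solved with the Hungarian method. Function and variable names are illustrative, not those of the actual implementation.

import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_contour_points(track_pts, meas_pts):
    """Assign tracked contour points to measured ones via Eq. (5).

    track_pts: (n, 2) array of tracked contour point positions.
    meas_pts:  (m, 2) array of measured contour point positions.
    Returns a list of (track_index, measurement_index) pairs.
    """
    # Association matrix Omega: summed coordinate distances between every
    # tracked and measured contour point, as in Eq. (5).
    omega = np.abs(track_pts[:, None, 0] - meas_pts[None, :, 0]) + \
            np.abs(track_pts[:, None, 1] - meas_pts[None, :, 1])

    # Hungarian method: globally optimal one-to-one assignment.
    rows, cols = linear_sum_assignment(omega)
    return list(zip(rows.tolist(), cols.tolist()))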
3.2 Pretracking
While data association handles the necessary updates for tracks that are already known to the system, the pretracking stage is carried out to perform track initialization. While it would be possible to simply initiate a new track for all measurements that could not be associated to any existing track, this would lead
to an abundance of false alarms and decrease the tracking system's performance significantly, as the computational load would rise unnecessarily and false alarms would compete against the correct tracks for incoming sensor readings. In Caroline's fusion system, a database of track candidates is therefore filled with incoming sensor readings that could not be associated. A data association process similar to that of Section 3.1 is carried out in order to justify track candidates prior to their activation. Only if a track candidate has been acknowledged more than once by one or more sensors is it activated and moved to the main tracking memory. Within this justification stage, the known sensor redundancy, the sensor reliability, the position of the target within the vehicle's view area and other factors are evaluated in order to determine the necessary amount of acknowledgment.
3.3 Tracking
An Extended Kalman Filter is used to carry out state estimation within the main tracking stage. Since the standard approach of Kalman filtering would normally only handle state vectors consisting of a single position, velocity or further derivatives, the algorithm had to be extended. By postulating common derivatives (velocity, acceleration) and a common system and measurement noise covariance for all contour points within a track, the Kalman update cycle can be rewritten as:

x_k(v+1|v) = f(x_k(v|v)) \qquad (6)
P(v+1|v) = F^T \cdot P(v|v) \cdot F + Q \qquad (7)
s_{k,l}(v+1) = y_l(v+1) - h(x_k(v+1|v)) \qquad (8)
S(v+1) = H \cdot P(v+1|v) \cdot H^T + R \qquad (9)
K(v+1) = P(v+1|v) \cdot H^T \cdot S^{-1}(v+1) \qquad (10)
r_{k,l}(v+1) = K(v+1) \cdot s_{k,l}(v+1), \qquad (11)

with x_k being the state vector with respect to tracked contour point k and y_l the measurement vector with respect to measured contour point l, f(x) the nonlinear system transfer function and F its associated Jacobian, P the common state noise covariance matrix, Q the system noise covariance matrix, s_{k,l} the innovation vector with respect to tracked and measured contour points k and l, h(x) the nonlinear system output function and H its associated Jacobian, S the innovation noise covariance matrix, R the measurement noise covariance matrix, K the resulting Kalman gain, and r_{k,l} the correction vector for tracked contour point k updated by measured contour point l. While the corrected contour point positions can be retrieved immediately by adding the first two components of r_{k,l} to the position values, the update for the common velocity, course angle, acceleration and course angle velocity (if described by the 6D model) is acquired by calculating the mean of r_{k,l} over all N contour point associations:

r_{mean} = \frac{1}{N} \sum_{\forall k,l \,:\, \Omega[k,l] \equiv \text{true}} r_{k,l}(v+1). \qquad (12)
4 Grid Fusion System
In contrast to the object tracking subsystem, the grid fusion system does not describe agents in the vehicle's environment using discrete state vectors but instead discretizes the whole environment into a rectangular matrix (grid) structure. Each grid cell carries a number of assigned features:
– a height value expressed in the global earth-fixed reference frame,
– a gradient value describing the height difference to neighboring cells,
– a set of Dempster-Shafer probability masses accounting for the hypotheses undrivable, drivable and unknown,
– a status flag stating whether or not measurement data has been stored in the corresponding cell, and
– an update time stamp storing the last time a cell update was carried out.
4.1 Data Structure
The biggest challenge with grid-based models in an automotive environment is the need for real-time operation. The high maneuvering speeds in automotive applications call for an update rate greater than 10 Hz (which already corresponds to a traveling distance of 1.4 meters at normal urban speed). The approach of discretizing the environment into grid cells leads to significantly high memory needs and therefore calls for efficient data structures. As an example, simply storing a view area of 100 x 100 meters with a resolution of 25 centimeters leads to 160,000 grid cells. Assuming a 4-byte floating point value for each feature described above brings this grid to about 3 MByte. Together with an update rate of 10 Hz, this leads to a constant data throughput of 256 MBit/s, which in any case is more than the standard automotive bus infrastructure would be able to handle. Efficient algorithms and data reduction prior to serialization are therefore key to a successful application. Within Caroline's grid fusion system, the underlying data structure is assembled in a spherical fashion, allowing the vehicle to move freely through the data structure without extensive memory allocation or shifting. The vehicle's own position can be stored simply by a pointer to the corresponding grid cell, as depicted with the red coordinate system in Fig. 4. This position is regarded as the virtual origin for all incoming sensor readings, which can then be accessed by moving through the doubly linked data structure relative to that virtual origin. The sphere is subdivided into sub-grids whose sizes match the processor's caching mechanism for optimal usage of the available computing resources. While it would theoretically be possible to make the surface big enough to cover the expected maneuvering area, this would lead to extremely high memory usage and is therefore not feasible. Instead, when the vehicle moves through the world, the reference point is shifted along the doubly linked spherical list. As soon as it crosses the border from one sub-grid to the next, the corresponding sub-grids at the new horizon of the data structure are cleared and therefore become available for new data storage.
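The spherical structure can be approximated in a few lines by a toroidal (wrap-around) array whose virtual origin follows the vehicle; the sub-grid layout and the doubly linked organization are omitted here. This is an assumption-level sketch, not the data structure actually implemented in Caroline.

import numpy as np

class ToroidalGrid:
    """Wrap-around grid whose virtual origin follows the vehicle.

    Cells are addressed relative to the vehicle, and regions that scroll in
    at the far side ('new horizon') are cleared before reuse.
    """

    def __init__(self, size_m=100.0, resolution=0.25, n_features=5):
        self.res = resolution
        self.n = int(size_m / resolution)              # e.g. 400 x 400 cells
        self.data = np.zeros((self.n, self.n, n_features), dtype=np.float32)
        self.origin_cell = np.array([0, 0])            # grid cell of the vehicle

    def _index(self, dx_m, dy_m):
        """Map an offset from the vehicle (meters) to a wrapped cell index."""
        ix = (self.origin_cell[0] + int(round(dx_m / self.res))) % self.n
        iy = (self.origin_cell[1] + int(round(dy_m / self.res))) % self.n
        return ix, iy

    def cell(self, dx_m, dy_m):
        ix, iy = self._index(dx_m, dy_m)
        return self.data[ix, iy]

    def move_vehicle(self, dx_cells, dy_cells):
        """Shift the virtual origin; clear rows/columns that become the new horizon."""
        for _ in range(abs(dx_cells)):
            self.origin_cell[0] = (self.origin_cell[0] + np.sign(dx_cells)) % self.n
            far = (self.origin_cell[0] + self.n // 2) % self.n
            self.data[far, :, :] = 0.0                 # new horizon column
        for _ in range(abs(dy_cells)):
            self.origin_cell[1] = (self.origin_cell[1] + np.sign(dy_cells)) % self.n
            far = (self.origin_cell[1] + self.n // 2) % self.n
            self.data[:, far, :] = 0.0                 # new horizon row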
Fig. 4. Spherical Data Structure
4.2 Treatment of Laser and Stereo Vision Point Data
As the first step within grid data fusion, the 3-dimensional point clouds received from the laser scanners and the stereo vision system are transformed into the global earth-fixed reference frame, taking into account the vehicle pose (roll, pitch and yaw) acquired from the GPS/INS unit and the sensor-specific calibration information. The accuracy of these transformations is crucial to the subsequent post-processing. Especially the vehicle's height delivered by the GPS is therefore subject to further filtering and justification. For each measured point, the corresponding grid cell is retrieved and a ray-tracing algorithm (Bresenham) is carried out to update all cells from the sensor coordinate system's origin to the measured target point. Several versions of the Bresenham algorithm are described in the literature; in this case we introduce the 2D variant following Pitteway [3] for simplicity. With a given start point (x0, y0) and target point (x1, y1) in discrete coordinates, the Bresenham algorithm proceeds basically as follows:
1. Compute Δx = x1 − x0.
2. Compute Δy = y1 − y0. We hereby assume that Δx ≥ Δy and that we want to trace a line through the first octant; otherwise the coordinates must be swapped.
3. Initialize an error variable E to Δx/2.
4. Start with x0 and increase up to x1 with step size 1.
5. For each step, decrease E by Δy.
6. If E ≤ 0, increase E by Δx and increase y by one.
7. Stop if x1 is reached, otherwise continue with step 5.
The lines are traced similarly to the functionality of a plotter, which is basically the origin of this algorithm. Along the traced lines, each cell passed is updated according to the following rules:
– If the cell lies on the path between the sensor origin and the measured target point and its height value exceeds the current Bresenham line height value, reduce the stored value to that of the current Bresenham line.
– If the cell is the end point of the Bresenham line, store the associated height value.
– In both cases, store the update time stamp and mark the cell as measured.

The grid is updated following the direct optical travel path of each laser ray (or virtual stereo vision ray), starting at the sensor origin and ending at the target point, as depicted in Fig. 5. This model follows the assumption that any obstacle would block the passing optical ray, and therefore all cells on the travel path must be lower than the ray itself.
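A minimal sketch of the ray update, combining a Bresenham trace with the rules above. It uses the common integer-error formulation of Bresenham's algorithm rather than the Δx/2 variant listed earlier, and the grid and height representations are simplified assumptions.

def trace_ray_update(height, measured, x0, y0, h0, x1, y1, h1, timestamp):
    """Update grid cells along the ray from the sensor cell (x0, y0, height h0)
    to the target cell (x1, y1, height h1), following the rules above.

    height:   2D array of stored cell heights.
    measured: 2D array of update time stamps (0 = never measured).
    """
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 >= x0 else -1
    sy = 1 if y1 >= y0 else -1
    err = dx - dy                      # integer error term (classic Bresenham)
    steps = max(dx, dy)
    x, y = x0, y0

    for i in range(steps + 1):
        # Height of the optical ray at this step (linear interpolation).
        ray_h = h0 + (h1 - h0) * (i / steps if steps else 1.0)
        if (x, y) == (x1, y1):
            height[x][y] = h1          # end point: store the measured height
        elif height[x][y] > ray_h:
            height[x][y] = ray_h       # cells on the path cannot be higher than the ray
        measured[x][y] = timestamp     # mark cell as measured, store update time

        if (x, y) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return height, measured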
Fig. 5. Ray update mechanism
4.3 Data Fusion
In parallel to entering the 3-dimensional point data acquired from the laser scanners and the stereo vision system, the vision-based classification is processed using a Dempster-Shafer approach [4,5]. A sensor model has been created for each data source, mapping the sensor-specific classification into the Dempster-Shafer probability mass set as defined above, which can then be fused into the existing cell probability masses using Dempster's rule of combination,

m_c^*(A) = (m_c \oplus m_m)(A) = \frac{1}{1-K} \sum_{B \cap C = A \neq \emptyset} m_c(B)\, m_m(C), \qquad (13)
with m_c and m_m being the cell and measurement probability mass sets and m_c^* the combined (fused) new set of masses for the regarded cell, while the placeholders A, B and C can denote any of the three hypotheses drivable, undrivable and unknown. The term K expresses the amount of conflict between the existing cell data and the incoming measurement:

K = \sum_{B \cap C = \emptyset} m_c(B)\, m_m(C). \qquad (14)
The mass set m_m has to be modeled out of the acquired sensor data.
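Dempster's rule of combination for the three hypotheses can be sketched directly from Eqs. (13) and (14), with N standing for the whole frame of discernment {drivable, undrivable}; the function below is an illustrative sketch, not the fusion code used on the vehicle.

def dempster_combine(cell, meas):
    """Combine cell and measurement mass sets over {D, U, N} via Eqs. (13)/(14).

    cell, meas: dicts with keys 'D', 'U', 'N'; 'N' stands for the whole frame
    of discernment {drivable, undrivable}, so it intersects everything.
    """
    # Conflict K: mass assigned to contradictory hypotheses (Eq. 14).
    K = cell['D'] * meas['U'] + cell['U'] * meas['D']
    if K >= 1.0:
        raise ValueError("total conflict - sources cannot be combined")

    norm = 1.0 / (1.0 - K)
    fused = {
        # D results from the D&D, D&N and N&D intersections (Eq. 13); analogously for U.
        'D': norm * (cell['D'] * meas['D'] + cell['D'] * meas['N'] + cell['N'] * meas['D']),
        'U': norm * (cell['U'] * meas['U'] + cell['U'] * meas['N'] + cell['N'] * meas['U']),
        'N': norm * (cell['N'] * meas['N']),
    }
    return fused

# Example: a previously unknown cell updated by a fairly confident 'drivable' reading
# simply adopts the measurement masses.
print(dempster_combine({'D': 0.0, 'U': 0.0, 'N': 1.0},
                       {'D': 0.6, 'U': 0.1, 'N': 0.3}))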
In the case of the stereo vision system, which is capable of classifying retrieved point clouds into the classes road, curb and obstacle, this mapping is trivial and can be done by assigning an appropriate constant mass set to each classification result. The exact values of these masses can then be subject to further tuning in order to trim the fusion system for maximum performance given real sensor data. In the case of the mono vision system, which in Caroline assigns to each pixel in the retrieved image a drivability value P_d between 0.0 (undrivable) and 1.0 (fully drivable), a mapping function is applied which creates the three desired mass values D: drivable, U: undrivable and N: unknown (can be either drivable or undrivable) as follows:

m_m(D) = D_{max} \cdot P_d,
m_m(N) = 1 - D_{max},
m_m(U) = 1 - m_m(D) - m_m(N). \qquad (15)
The value D_max serves as a tuning parameter, limiting the maximum trust that is put into the mono vision system based on the quality of its incoming data. The classification mechanisms of the mono camera and the stereo vision system are beyond the scope of this paper and are therefore not explained in detail. Basically, the classification within the stereo system is based on generating a mesh height model from the obtained point cloud and applying adaptive thresholds to this mesh structure in order to label roadway, curb and obstacles. The mono vision system follows an approach similar to [6]. Prior to mapping the mono vision data into the grid data structure, the image has to be transformed into the global world reference frame using the known camera calibration [7] and height information, which can easily be retrieved from the grid itself. The creation of a sensor model for the 3-dimensional height data is more complex: first of all, a gradient field is calculated from the stored height profile. In Caroline's grid fusion system, the grid is mapped into image space by converting it into a grayscale image structure, with intensity encoding the cell height values. Subsequently, the Sobel operator is applied in both image directions, leading to a convolution of the height image with the two masks

M_x = [ 1 0 −1 ; 2 0 −2 ; 1 0 −1 ],    M_y = [ 1 2 1 ; 0 0 0 ; −1 −2 −1 ].

The results of both convolutions are summed up and, after proper denormalization, transformed back into the grid structure, storing the gradient value ∂h/∂x∂y for each grid cell. Any existing obstacle will usually lead to a large peak within the gradient field, which can easily be detected.
The process of transforming the grid structure into and out of a grayscale image may seem redundant at first view, since the gradient operator could easily be applied to the height field itself. Yet, by transforming the information into a commonly used image format, the broad variety of image processing algorithms and operators found in standard image processing toolkits such as the OpenCV library [8] can easily be applied, which significantly reduces development time. The acquired gradient values are then mapped into a Dempster-Shafer representation, which leads to the desired sensor model combining all acquired height values. Similar to the mono vision system, a mapping function is defined as follows:

m_m(D) = D_max,   if ∂h/∂x∂y ≤ G_Dmax,
m_m(D) = 0,       if ∂h/∂x∂y > G_Dmax,

m_m(U) = 0,                                               if ∂h/∂x∂y ≤ G_Dmax,
m_m(U) = U_max / (G_Umin − G_Dmax) · (∂h/∂x∂y − G_Dmax),  if G_Dmax < ∂h/∂x∂y ≤ G_Umin,
m_m(U) = U_max,                                           if ∂h/∂x∂y > G_Umin,

m_m(N) = 1 − m_m(D) − m_m(U),    (16)
with Dmax and Umax serving as parameters for maximum drivability / undrivability assigned to the gradient field, GDmax being the maximum gradient value that is still considered as fully drivable and GUmin the minimum gradient value that is considered as fully undrivable. By carefully tuning those four parameters, it is possible to suppress unwanted smaller gradients resulting e.g. from unimportant pits and knolls in the road while supporting higher gradients as originating from curbs or road berms in order to correctly fuse this information into the grid cells by using Eq. (13).
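A sketch of this height-gradient sensor model (Eq. 16), combined with the OpenCV-based Sobel gradient computation described above. All parameter values are made-up tuning examples, not the values used on Caroline.

```python
import numpy as np
import cv2

def gradient_masses(height_grid, d_max=0.8, u_max=0.9, g_dmax=0.05, g_umin=0.25):
    """Map a height grid (one height value per cell) to Dempster-Shafer masses.
    Parameter values are illustrative only."""
    img = height_grid.astype(np.float32)            # grid mapped into "image" space
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)  # horizontal Sobel mask
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)  # vertical Sobel mask
    g = np.abs(gx) + np.abs(gy)                     # summed gradient magnitude

    m_d = np.where(g <= g_dmax, d_max, 0.0)                      # drivable mass
    ramp = u_max / (g_umin - g_dmax) * (g - g_dmax)              # linear transition
    m_u = np.where(g <= g_dmax, 0.0,
                   np.where(g <= g_umin, ramp, u_max))           # undrivable mass
    m_n = 1.0 - m_d - m_u                                        # remaining: unknown
    return m_d, m_u, m_n
```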
5 Combination of the Hybrid Models
Regarded separately, the two fusion approaches of Sections 3 and 4 deliver a partly redundant image of the vehicle's surroundings, which has to be handled by the post-processing artificial intelligence, including possible conflicts between both worlds. The artificial intelligence unit of Caroline is based on the DAMN (Distributed Architecture for Mobile Navigation) approach [10], which can, from a certain point of view, again be regarded as another high-level step of sensor data fusion. While a detailed introduction to the artificial intelligence concept would be far beyond the scope of this paper, the driving decisions are basically based on the following process: given the starting position of the vehicle, a set of curvature trajectories is generated and then analyzed against the available sensor data, ego position, digital map and all other information, as depicted in Fig. 6. Regarding the two stimuli generated by the
Fig. 6. DAMN approach to trajectory generation
system described within this contribution, any obstacle detected by the object tracking system leads to a strong vote against a trajectory that would result in a collision with this object. Similarly, any trajectory that would cross a border from drivable to non-drivable terrain, as determined from the results of the grid fusion system, is avoided as well. As a third example, trajectories that point toward the next GPS way point to be approached receive a positive vote. The different votes of the system are finally combined, taking into account specific overall weights for each stimulus. The desired vehicle trajectory is then simply obtained by picking the curvature with the highest vote. While objects detected within the tracking system are regarded as absolutely solid, which can subsequently lead to a zero vote in case of a collision, the drivability information is regarded as a softer, guiding value, pushing the vehicle away from curbs and road berms without completely forbidding entry into a region marked as not drivable, since it may be better to cross a curb than to hit an oncoming vehicle. While this briefly describes the post-processing of grid and object data, another benefit of hybridization can be spotted in Fig. 2: the acquired height profile and drivability information is passed to the object fusion system in order to carry out further plausibility checks on the acquired sensor data. Especially the laser-based sensors are frequently subject to false readings when driving through 3-dimensional terrain that leads to significant pitch and roll movements of the vehicle's body, causing the laser scanners to point directly into the ground. For that reason, the acquired contour point data of each sensor is transformed into the common global reference frame, taking into account the precise pose of the vehicle and the sensor calibration information. While it would primarily not be necessary to perform a full 3-dimensional transformation of the contour point data within the object tracking system, since a planar object model is assumed as described in Eq. (1) and (2),
this complete transformation is useful in combination with the grid-based height and drivability map. Each contour point is transformed and then checked as to whether it lies on or close to the elevation map acquired by the grid-based subsystem. If the contour point is close to that height map, it is checked whether the corresponding cell has been marked as drivable. In this case, the contour point is marked as a potential unwanted ground reflection and can subsequently be disregarded during track initialization. By feeding grid fusion data back into the object tracking system, the system's reliability could be improved significantly, at virtually no extra cost.
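The curvature-voting step of the DAMN-based decision process can be sketched as follows. All behaviour names, vote values, weights and the collision-veto convention are hypothetical placeholders chosen for illustration, not the system's actual configuration.

```python
def select_curvature(curvatures, votes_per_behaviour, weights):
    """Combine weighted votes of several behaviours (obstacle avoidance,
    drivability, way-point attraction, ...) and pick the best curvature.

    votes_per_behaviour: dict  behaviour name -> list of votes in [-1, 1],
                         one vote per candidate curvature.
    weights:             dict  behaviour name -> overall weight.
    """
    best_curvature, best_score = None, float('-inf')
    for i, c in enumerate(curvatures):
        score = sum(weights[name] * votes[i]
                    for name, votes in votes_per_behaviour.items())
        # a hard veto (e.g. predicted collision with a tracked object) is
        # expressed here as a vote of -1 combined with a very large weight
        if score > best_score:
            best_curvature, best_score = c, score
    return best_curvature

# illustrative use with three candidate curvatures
curvatures = [-0.1, 0.0, +0.1]
votes = {
    'object_tracking':  [-1.0, 0.2, 0.5],   # strong vote against a collision course
    'grid_drivability': [0.1, 0.6, 0.4],
    'gps_waypoint':     [0.0, 0.3, 0.8],
}
weights = {'object_tracking': 10.0, 'grid_drivability': 2.0, 'gps_waypoint': 1.0}
print(select_curvature(curvatures, votes, weights))   # picks +0.1 in this example
```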
6 Race Performance
Caroline qualified for the Urban Challenge final event as one of the first teams, even though some problems during NQE Mission A (merging into moving traffic) forced the team into some last-minute software changes. While missions B (urban driving, free navigation zones and parking) and C (urban driving, blocked roads and intersection behavior) were finished almost perfectly, the tight traffic setup provided by DARPA in mission A, the close proximity to the existing k-rail barriers and some minor additional software bugs called for some modifications. First of all, similar to most teams, we had to significantly reduce the minimum safety distance to static obstacles in order to be able to follow the k-rail-guided roadway, which often required a lateral distance to the barriers of less than 50 centimeters. Second, the planar sensor concept revealed some weaknesses when confronted with dense urban traffic. In one particular situation, three DARPA vehicles were approaching a T-intersection from the left side, while a single vehicle arrived from the right side at a somewhat larger distance. Both groups had precedence; our vehicle's objective was to pick a gap big enough to merge into the moving traffic. The vehicle stopped at the intersection and correctly waited for the group of vehicles approaching from the left side, but while they were passing the intersection, they physically blocked the field of view toward the fourth vehicle approaching the intersection from the right side. In the first attempts, our objective was to maximize safety, and therefore we did not enter the intersection until the field of view in both directions was sufficiently free. While this worked well, it led to situations where the vehicle sometimes took considerably long to pass the intersection, since the provided DARPA vehicles were approaching in close proximity to each other. First attempts to make the vehicle a little more aggressive subsequently overshot the goal and led to some unwanted precedence problems. The fifth attempt, however, led to a setup compliant with our judges' expectations and brought us into the Urban Challenge final event. Within the final event, we successfully drove approx. 14 kilometers within 2:30 hours (including pauses), successfully passing 65 intersections and 13 checkpoints. Three situations (some documented in news media coverage) are particularly interesting. First of all, the vehicle left its intended trajectory by approx. one meter on a steep off-road downhill section of the race track, calling
for some human assistance in order to get it out of the sand berm and back on the road. At first we reasoned that probably a cloud of Mojave desert dust - which had interfered with our laser-based sensors frequently before - must have led to a false detection which forced the vehicle off the road. Surprisingly, an analysis of our on-board camera material showed that the vehicle was approaching one of the chase vehicles waiting for some other robot having trouble while merging onto the two-laned road following the off-road passage. Obviously, the vehicle had not been paused early enough on its way down the hill to avoid conflicting with the incident ahead, and instead of hitting the chase vehicle, Caroline decided to try passing it closely on the left side - which ended as described above. The second incident involved the robot of the Massachusetts Institute of Technology in a left-turn situation. The situation remains unclear: Caroline stopped for the MIT robot, which had precedence; the MIT robot first did not move, and then obviously both robots decided to cross the intersection at the same time - which was interrupted by DARPA officials. The third incident involved the MIT robot again. Both robots were driving in a free navigation zone and obviously approached one another. Sadly, we have no video documentation for that part of the race, since our camera system was only able to cover about 90 minutes of recording time, which had already been exceeded. Our log files state that a number of obstacles had been detected within this zone and that the vehicle had stopped several times. Finally, Caroline obviously tried to perform an evasive maneuver to the right side (as required by the DARPA guidelines within free navigation zones), which obviously was not enough to avoid the MIT robot. A minor collision took place, ending in a decalibrated laser unit on our side and no significant damage on the side of MIT. This incident meant the end of the race for us, but taking into account that our team was a first-time competitor in the Grand/Urban Challenge with limited financial resources and an overall development time of approx. one year, we have been more than satisfied with the race outcome.
7 Summary
We presented the sensor and data fusion concept of the 2007 Urban Challenge vehicle of the Technische Universität Braunschweig. The perception approach is based on a hybrid fusion system, combining classical object-based tracking and grid data fusion. As a result, Caroline is able to detect both static and dynamic obstacles in its driveway by evaluating and taking advantage of both models, thus leading to a higher reliability of the driving decisions made by the artificial intelligence. A variety of sensor types is applied, including radar, fixed-beam lidar, rotating laser scanners and image processing units. The grid-based fusion system is a novel approach to multi-sensor data fusion, based on a Dempster-Shafer probability mass model targeting the classification of the environment into drivable, undrivable or unknown. This approach is superior to classical Bayesian occupancy grids [9] with respect to the post-processing algorithms of the artificial intelligence, since it is able to quantify the lack
of information as well as to vote for either drivable or undrivable. The perception system described in this article has been thoroughly tested and proved its quality during the preparation and the final competition of the Urban Challenge event, leading to a ranking of Caroline within the top ten vehicles.
References
1. Frank, A.: On Kuhn's Hungarian Method - A Tribute from Hungary. Egerváry Research Group, Pázmány P. sétány 1/C, H-1117, Budapest, Hungary
2. Becker, J.-C.: Fusion der Daten der objekterkennenden Sensoren eines autonomen Straßenfahrzeugs (in German). Ph.D. thesis, Institute of Control Engineering, Braunschweig, Germany (March 2002)
3. Pitteway, M.L.V.: Algorithm for Drawing Ellipses or Hyperbolae with a Digital Plotter. Computer Journal 10(3), 282-289 (1967)
4. Shafer, G.: A Mathematical Theory of Evidence. Princeton University Press, Princeton (1976)
5. Shafer, G.: Perspectives on the Theory and Practice of Belief Functions. International Journal of Approximate Reasoning 3, 1-40 (1990)
6. Thrun, S., et al.: Stanley: The Robot That Won the DARPA Grand Challenge. Technical report, Stanford University, Stanford (2005)
7. Heikkilä, J., Silvén, O.: Calibration Procedure for Short Focal Length Off-the-shelf CCD Cameras. In: Proc. 13th International Conference on Pattern Recognition, Vienna, Austria, pp. 166-170 (1996)
8. The OpenCV Library, http://opencvlibrary.sourceforge.net
9. Thrun, S., Burgard, W., Fox, D.: Probabilistic Robotics. The MIT Press, Cambridge, Massachusetts (2005)
10. Rosenblatt, J.: DAMN: A Distributed Architecture for Mobile Navigation. Ph.D. dissertation, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA (1997)
Belief-Propagation on Edge Images for Stereo Analysis of Image Sequences
Shushi Guan and Reinhard Klette
Computer Science Department, The University of Auckland, Private Bag 92019, Auckland 1142, New Zealand
Abstract. The history of stereo analysis of pairs of images dates back more than one hundred years, but stereo analysis of image stereo sequences is a fairly recent subject. Sequences allow time-propagation of results, but also come with particular characteristics such as lower resolution or less contrast. This article discusses the application of belief propagation (BP), which is widely used for solving various low-level vision problems, to the stereo analysis of night-vision stereo sequences. For this application it appears that BP often fails on the original frames for objects with blurry borders (trees, clouds, ...). In this paper, we show that BP leads to more accurate stereo correspondence results if it is applied on edge images, for which we chose the Sobel edge operator due to its time efficiency. We present the applied algorithm and illustrate results (without, or with prior edge processing) on seven geometrically rectified night-vision stereo sequences (provided by Daimler research, Germany).
Keywords: Stereo analysis, belief propagation, Sobel operator, image sequence analysis.
1 Introduction
Stereo algorithms often use a Markov Random Field (MRF) model for describing disparity images. The basic task of an MRF algorithm is then to find the most likely setting of each node in the MRF by applying an inference algorithm. Belief propagation (BP) is one of the possible inference algorithms; it can be applied for calculating stereo disparities, or for other labeling processes defined by finite sets of labels. [4] shows that BP is such an algorithmic strategy for solving various problems. BP is recommended for finding minima over large neighborhoods of pixels, and it produces promising results in practice (see, e.g., evaluations at the Middlebury Stereo Vision website http://vision.middlebury.edu/stereo/). A belief propagation algorithm applies a sum-product or max-product procedure, and in this paper we choose the max-product option. The max-product procedure computes the Maximum A-Posteriori (MAP) estimate over a given MRF [10]. [5] reports on the general task of evaluating stereo and motion analysis algorithms on a given test set of seven rectified night-vision image sequences
(provided by Daimler research, Germany). In this article we consider the application of BP algorithms to these seven sequences, each consisting of between 250 and 300 pairs of stereo frames. We show that a straight application of BP fails, but that it leads to promising results after prior application of an edge operator. The article is structured as follows. In Section 2, we briefly introduce the BP algorithm, including the definition of an energy function, max-product, message passing, the Potts model, and also some techniques to speed up the standard BP algorithm, following [3]. In Section 3 we calculate Sobel edge images and show that subsequent BP analysis leads to improvements compared to results on the original sequences, verified by results for those seven test sequences mentioned above. Some conclusions are presented in Section 4.
2 BP Algorithm
Solving the stereo analysis problem is basically achieved by pixel labeling: the input is a set P of pixels (of an image) and a set L of labels. We need to find a labeling

f : P → L

(possibly only for a subset of P). Labels are, or correspond to, disparities which we want to calculate at pixel positions. It is a general assumption that labels should vary only smoothly within an image, except at some region borders. A standard form of an energy function used for characterizing the labeling function f is (see [1]) as follows:

E(f) = Σ_{p∈P} D_p(f_p) + Σ_{(p,q)∈A} V_{p,q}(f_p, f_q)
Since we aim at minimizing the energy, this approach corresponds to the Maximum A-Posteriori (MAP) estimation problem. D_p(f_p) is the cost of assigning label f_p to pixel p. We use the differences in intensities between corresponding pixels (i.e., pixels defined to be corresponding when applying disparity f_p); to be precise, in our project we use absolute differences. A is an assumed symmetric and irreflexive adjacency relation on P. Each pixel p (say, in the left image at time t) may take one disparity at that position, out of a finite subset of L (e.g., defined by a maximum disparity). The corresponding pixel is then in the right image at time t. Because the given image sequences are rectified, we can simply search along identical image rows. Since the given images are gray-level (intensity) images, differences in gray levels define the cost D_p(f_p): the smaller an intensity difference, the higher the compatibility. V_{p,q}(f_p, f_q) is the cost of assigning labels f_p and f_q to two adjacent pixels p and q, respectively. It represents a discontinuity cost. The cost V in stereo analysis is typically defined by the difference between labels; each label is a non-negative disparity. Thus, it is common to use the formula

V_{p,q}(f_p, f_q) = V(f_p − f_q)
The resulting energy is (see [3]) as follows:

E(f) = Σ_{p∈P} D_p(f_p) + Σ_{(p,q)∈A} V(f_p − f_q)
The task is a minimization of E(f).
2.1 Max-Product
A max-product algorithm is used to approximate the MAP solution to MRF problems. J. Pearl showed in 1988 that the max-product algorithm is guaranteed to converge, and guaranteed to give optimal assignment values of a MAP solution based on the messages at the time of convergence [9]. The max-product BP algorithm works by passing messages (beliefs) around in an image grid with 4-adjacency. Messages are updated in iterations, and message passing within one iteration step is done in parallel (from every node).
Fig. 1. Illustration of a message update step in the 4-adjacency graph: the circles represent pixels in an image, and the arrows indicate directions of message passing
Figure 1 illustrates message passing in the 4-adjacency graph. If node i is left of node j, then node i sends a message to node j at each iteration. The message from node i contains the messages already received from its neighbors. In parallel, each node of the adjacency graph computes its message, and then those messages are sent to adjacent nodes in parallel. Based on these received messages, we compute the next iteration of messages. In other words, for each iteration, each node uses the previous iteration's messages from adjacent nodes in order to compute the messages sent to those neighbors next. Meanwhile, the larger D_p(f_p) is, the more difficult it is to pass a message to an adjacent node; that means the influence of an adjacent node decreases when the cost at this node increases. Each message is represented as an array; its size is determined by the maximum disparity (assuming that disparities start at zero and are subsequent integers), denoted by K. Assume that m^t_{p→q} is the message sent from node p to adjacent node q at iteration step t. For each iteration, the new message is given by (see [3]) the following:

m^t_{p→q}(f_q) = min_{f_p} ( V(f_p − f_q) + D_p(f_p) + Σ_{s∈A(p)\q} m^{t−1}_{s→p}(f_p) )
where A(p)\q is the adjacency set of p except node q. The node beliefs are then given by

b_q(f_q) = D_q(f_q) + Σ_{p∈A(q)} m^T_{p→q}(f_q)
after T iterations; see [3]. Each iteration computes O(n) messages, where n is the cardinality of the set P.
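A direct implementation of the message update and belief computation, costing O(K^2) per message, can be sketched as follows. This is a minimal NumPy sketch, not the authors' code: cost arrays are vectors over the K disparity labels, and the grid bookkeeping is omitted.

```python
import numpy as np

def update_message(D_p, V, incoming_msgs):
    """Compute m^t_{p->q}(f_q) = min_{f_p} [ V(f_p - f_q) + D_p(f_p) +
    sum of messages m^{t-1}_{s->p}(f_p) over neighbours s in A(p)\\q ].
    D_p:           data cost array of length K for pixel p
    V:             K x K matrix with V[fp, fq] = discontinuity cost
    incoming_msgs: list of length-K arrays (messages from A(p)\\q)."""
    h = D_p + sum(incoming_msgs)               # length K, indexed by f_p
    # for every f_q take the minimum over f_p (broadcast over columns)
    return np.min(V + h[:, None], axis=0)      # length K, indexed by f_q

def belief(D_q, msgs_into_q):
    """b_q(f_q) = D_q(f_q) + sum of final messages from all neighbours of q;
    the selected disparity is the label minimising the belief."""
    b = D_q + sum(msgs_into_q)
    return np.argmin(b)
```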
2.2 Potts Model
The Potts model [6] is a (simple) method for minimizing energy; see, for example, [2]. In this model, discontinuities between labels are penalized equally, and we only consider two states of label pairs: equality and inequality. We measure differences between two labels; the cost is 0 if the labels are the same, and a constant otherwise. Let the cost function between labels be V(x_i, x_j). Then (see [8]) we have

V(x_i, x_j) = 0 if x_i = x_j, and V(x_i, x_j) = d otherwise.

The Potts model is useful when labels are "piecewise constant", with discontinuities at borders. It was suggested to apply this cost function to the message update formula. The formula is now rewritten (see [3]) as follows:

m^t_{p→q}(f_q) = min_{f_p} ( V(f_p − f_q) + h(f_p) ),

where

h(f_p) = D_p(f_p) + Σ_{s∈A(p)\q} m^{t−1}_{s→p}(f_p).
This form is very similar to that of a minimum convolution. After applying the cost function, the minimization over fp yields a new equation in the following way (see [3]):
m^t_{p→q}(f_q) = min( h(f_q), min_{f_p} h(f_p) + d )
Apart from the minimization over f_p, the message computation time reduces to O(K); see [3]. We first compute min_{f_p} h(f_p), and then use this result to compute each message entry in constant time.
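With the Potts cost, the minimum over f_p is computed once and reused, which reduces each message to O(K). A minimal sketch, assuming h is the length-K array defined above and d the constant penalty:

```python
import numpy as np

def update_message_potts(D_p, incoming_msgs, d):
    """O(K) message update for the Potts discontinuity cost:
    m(f_q) = min( h(f_q), min_{f_p} h(f_p) + d )."""
    h = D_p + sum(incoming_msgs)       # length K
    return np.minimum(h, h.min() + d)  # element-wise truncation at global minimum + d
```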
2.3 Speed-Up Techniques
In this section, we recall some techniques that may be used to speed up a BP analysis. We start with a technique called the multi-grid method in [3]. This technique allows us to obtain good results with just a small number of message passing iterations. [7] shows that message propagation over long distances takes more message update iterations. To circumvent this problem, a common data pyramid was
used. (All nodes in one 2 × 2 array in one layer are adjacent to one node in the next layer; all 2^n × 2^n pixels (nodes) at the bottom layer zero are connected this way with a single node at layer n.) Using such a pyramid, long distances between pixels are shortened, which makes message propagation more efficient. We do not reduce the image resolution, but aggregate data over the connections defined in this pyramid. Such a coarse-to-fine approach allows a small number of iterations (dependent on the level of the pyramid) to be sufficient for a BP analysis. The red-black algorithm provides a second method for speeding up the BP message update algorithm of Section 2.1; see [3]. The message passing scheme adopts a red-black algorithm which allows only half of all messages to be updated at a time. Such a red-black technique is also used for Gauss-Seidel relaxations. A Gauss-Seidel relaxation attempts to increase the convergence rate by using values computed within the kth iteration in subsequent computations of the same iteration. We can think of the image as a checkerboard, so each pixel is "colored" differently from its 4-adjacent pixels. The basic idea is that we update only the messages sent from "red" pixels to "black" pixels at iteration t; in the next iteration t + 1, we only update the messages sent from "black" pixels to "red" pixels. We recall this message updating scheme at a more formal level: assume that B and C represent the nodes of the two defining classes of a bipartite graph. At iteration t, we know the messages M1 sent from nodes in B to those in C; based on M1, we can compute the messages M2 sent from nodes in C to those in B at iteration t + 1. That means we can compute the messages M3 from nodes in B at iteration t + 2 without knowing the messages sent from nodes in B at iteration t + 1. Thus, we only compute half the messages at each iteration. See Figure 2
Fig. 2. These two images illustrate the message passing scheme under a red-black algorithm; the left image shows messages only sent from a "black" (dark gray) to a "red" (light gray) pixel at iteration t; at iteration t + 1, "red" sends messages back to "black" in the right image
for further illustration. This alternating message updating scheme is described as follows:

m = m^t_{p→q} if p ∈ B, and m = m^{t−1}_{p→q} otherwise.

This concludes the specification of the BP algorithm as implemented for our experiments. We aimed at using a standard approach, but with particular attention to ensuring time efficiency.
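The red-black (checkerboard) schedule can be sketched as follows; update_pixel_messages stands for the per-pixel message computation above and is a placeholder name, not part of the original implementation.

```python
def red_black_iterations(width, height, T, update_pixel_messages):
    """Run T iterations, updating only half of all pixels per iteration:
    pixels with (x + y) even on even iterations, odd on odd iterations."""
    for t in range(T):
        parity = t % 2
        for y in range(height):
            for x in range(width):
                if (x + y) % 2 == parity:
                    update_pixel_messages(x, y, t)   # sends messages to 4-neighbours
```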
3 Experiments
The image sequences used for stereo analysis are as described in [5], provided by Daimler research, Germany. These night-vision stereo sequences are rectified. The following figures each address exactly one of those seven sequences, and each figure shows one unprocessed representative frame of the sequence at its upper left. Since the seven sequences are already geometrically rectified, stereo matching is reduced to a search for corresponding pixels along the same horizontal line in both images. The test environment for the following results was: AMD 64-bit 4600 2.4 GHz, 2 Gigabyte memory, NVIDIA GeForce 7900 video card, Windows XP operating system. BP has the potential to allow real-time processing in driver assistance systems, because the BP message updates in each iteration can be done in parallel, which means that multi-CPU hardware may reduce the system running time. The following figures use a uniform scheme of presentation, and we start with explaining Figure 3. A sample of the original unprocessed sequence is shown in the upper left. The described BP stereo analysis algorithm was applied, and the resulting depth map is shown in the upper right. The maximum disparity for the illustrated pair of images (being "kind of representative" for the given stereo sequence) is 70, and 7 message iterations have been used. Table 1 shows these values, as well as the size of the used area of the given 640 × 480 frames (called "image size" in this table), which were the parameters used in our tests, and finally the run time rounded to seconds. Note that no particular efforts have been made for run-time optimization besides those mentioned in the previous section. The experiments quickly indicated two common problems in stereo matching, namely bad matching due to lack of texture (such as in the middle of the road), and mismatching due to "fuzzy depth discontinuities" (such as in sky or in trees). For the other six image sequences, see a "typical" frame of a stereo pair (documented in Table 1) at the upper left of Figures 4 to 9. The depth maps shown at the upper right (BP analysis as described, for the original sequence) all illustrate similar problems. As a result of our analysis of those problems, we expected that the use of some edge enhancement (contrast improvement) could support the message passing
Fig. 3. Image 0001 c0 of Sequence 1 (upper left) and its associated BP result (upper right). The Sobel edge image (lower left) and the corresponding BP result (lower right).
mechanism. Surprisingly, we can already recommend the use of the simple Sobel edge operator. The resulting Sobel edge images are certainly "noisy", but they provide borders or details of the original images which allow the message passing mechanism to proceed more in accordance with the actual data. The applied idea was inspired by Figure 14 in [7]. From that figure, we can see two important properties of a BP algorithm: firstly, information from highly textured regions can be propagated into textureless regions; secondly, information propagation should stop at depth discontinuities. In other words, the influence of a message in a textureless region can be passed along a long distance, and the influence in "discontinuous regions" will fall off quickly. That means that if we emphasize the discontinuous regions (i.e., edges), then this improves the accuracy of the BP algorithm. For example, see the lower left in Figure 7. Sobel edge
Table 1. Parameters used for the BP algorithm and program running times

Figure   Max-disparity   Iterations   Image size        Running time
3        70 pixel        7            633 × 357 pixel   9 s
4        55 pixel        7            640 × 353 pixel   7 s
5        40 pixel        5            640 × 355 pixel   4 s
6        60 pixel        7            640 × 370 pixel   8 s
7        30 pixel        5            631 × 360 pixel   3 s
8        35 pixel        6            636 × 356 pixel   4 s
9        40 pixel        5            636 × 355 pixel   4 s
Fig. 4. Image 0001 c0 of Sequence 2 (upper left) and its associated BP result (upper right). The Sobel edge image (lower left) and the corresponding BP result (lower right).
Fig. 5. Image 0001 c0 of Sequence 3 (upper left) and its associated BP result (upper right). The Sobel edge image (lower left) and the corresponding BP result (lower right).
enhancement “highlights” the border of the road, or changes in vegetation, such as a border of a tree. Messages within a tree propagate from its upper, highly textured region to its more textureless region further below; on the other hand,
Fig. 6. Image 0227 c0 of Sequence 4 (upper left) and its associated BP result (upper right). The Sobel edge image (lower left) and the corresponding BP result (lower right).
Fig. 7. Image 0001 c0 of Sequence 5 (upper left) and its associated BP result (upper right). The Sobel edge image (lower left) and the corresponding BP result (lower right).
the influence of messages outside the tree’s border (in regions of sky or road, etc.) falls off quickly; this “outside influence” should end at the border. See Figures 3 to 9 for “key frames” of these test sequences and calculated disparity maps.
Fig. 8. Image 0001 c0 of Sequence 6 (upper left) and its associated BP result (upper right). The Sobel edge image (lower left) and the corresponding BP result (lower right).
Fig. 9. Image 0184 c0 of Sequence 7 (upper left) and its associated BP result (upper right). The Sobel edge image (lower left) and the corresponding BP result (lower right).
The lower right images in Figures 3 to 9 show the result of the specified BP analysis algorithm, after applying the Sobel operator to the given images. Obviously, the new result is much better than our preliminary result (upper right
images). In a general comparison with the BP analysis for the original image pairs of those seven sequences, major discontinuities are now often correctly detected. For example, the visual border of a tree may be recovered despite an obvious fuzziness of its intensity edge. Especially the road and the sky are now often accurately located. In most cases, a car is also detected if at a reasonable distance. But there are still remaining problems. For example, in Figure 8 the traffic light is not matched correctly; we can see "two ghost traffic lights" in the depth map. The use of the ordering constraint could help in such circumstances. In some images, we cannot identify depth details accurately, especially in images showing many trees. See again Figure 8 for an example: vertical edges disappeared in the depth map. The reason might be that we chose only a small discontinuity penalty (see Section 2.2, the Potts model) for the tests illustrated in the figures. (Using a higher discontinuity penalty in BP produces more edges or details in depth maps, but also more noise or matching errors.) Adaptation might be a good subject here, for identifying a balance point. In Figure 6, the road in the depth map (lower right) is not a smooth, even, level surface; this is caused by the shadows of the trees on the road, which cause roughly horizontal stripes in the images. This means that pixel intensities along an epipolar line are roughly constant, which makes mismatching more likely.
4 Conclusions
In this paper, we propose the use of a simple edge detector prior to using belief propagation for stereo analysis. The proposed method is intended for BP stereo correspondence analysis where borders in the given scenes or images are fuzzy. We detailed the used BP algorithm by discussing the max-product variant of belief propagation and how messages propagate in the graph, in particular under the two strategies used for run-time optimization: one technique reduces the number of message passing iterations, the other halves the number of message computations per iteration. Recently, we integrated the ordering constraint into the BP algorithm, and we also plan to design an adaptive algorithm which calculates discontinuity penalties based on the image intensities of frames. The provided image sequences allowed a much more careful analysis than just using a few image pairs. This improved the confidence in the derived conclusions, but also revealed more cases of unsolved situations. This is certainly just a beginning of utilizing such a very useful data set for more extensive studies.
Acknowledgment. We acknowledge the support of Daimler research, Germany (research group of Uwe Franke) for providing those seven rectified night-vision stereo sequences.
References
1. Boykov, Y., Kolmogorov, V.: An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Trans. Pattern Analysis Machine Intelligence 26, 1124-1137 (2004)
2. Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. IEEE Trans. Pattern Analysis Machine Intelligence 23, 1222-1239 (2001)
3. Felzenszwalb, P.F., Huttenlocher, D.P.: Efficient belief propagation for early vision. Int. J. Computer Vision 70, 41-54 (2006)
4. Frey, B.J., Koetter, R., Petrovic, N.: Very loopy belief propagation for unwrapping phase images. In: Dietterich, T.G., Becker, S., Ghahramani, Z. (eds.) Advances in Neural Information Processing Systems 14, MIT Press, Cambridge (2001)
5. Liu, Z., Klette, R.: Performance evaluation of stereo and motion analysis on rectified image sequences. CITR-TR-207, The University of Auckland, Computer Science Department, Auckland (2007)
6. Potts, R.B.: Some generalized order-disorder transformations. Proc. Cambridge Philosophical Society 48, 106-109 (1952)
7. Sun, J., Zheng, N.N., Shum, H.Y.: Stereo matching using belief propagation. IEEE Trans. Pattern Analysis Machine Intelligence 25, 787-800 (2003)
8. Tappen, M.F., Freeman, W.T.: Comparison of graph cuts with belief propagation for stereo, using identical MRF parameters. In: Proc. IEEE Int. Conf. Computer Vision, pp. 900-907 (2003)
9. Weiss, Y., Freeman, W.T.: On the optimality of solutions of the max-product belief propagation algorithm in arbitrary graphs. IEEE Trans. Information Theory 47, 723-735 (2001)
10. Yedidia, J., Freeman, W.T., Weiss, Y.: Understanding belief propagation and its generalizations. In: Lakemeyer, G., Nebel, B. (eds.) Exploring Artificial Intelligence in the New Millennium, pp. 236-239. Morgan Kaufmann, San Mateo (2003)
Real-Time Hand and Eye Coordination for Flexible Impedance Control of Robot Manipulator
Mutsuhiro Terauchi (1), Yoshiyuki Tanaka (2), and Toshio Tsuji (2)
(1) Faculty of Psychological Sciences, Hiroshima International University
[email protected]
(2) Graduate School of Engineering, Hiroshima University
Abstract. In recent years many versatile robots have been developed to work in environments shared with humans. However, they are neither sufficiently flexible nor safe in terms of interaction with humans. In our approach we focus on hand and eye coordination in order to establish flexible robot control, in which a robot recognizes its environment from input camera images and adopts a soft contact strategy by impedance control. To recognize the environment, we adopt a method to reconstruct motion from a sequence of monocular images by using a pair of parallel straight line segments, which enables us to obtain linear equations to solve the problem. On the other hand, the impedance control strategy provides flexible interaction between robots and humans. The strategy can be considered as passive force control, activated when something contacts the end-effector of the robot. In order to avoid a collision, we introduce a virtual impedance control which can generate force prior to contact. Neural network (hereafter: NN) learning is used to determine the parameters for impedance control, in which the NNs can obtain parameters during the motion (i.e., online learning). The validity of the proposed method was verified through experiments with a multijoint robot manipulator.
1 Introduction
For autonomous robots moving in a workspace it is necessary to obtain and update information on their environment by using sensors. Among all types of sensors, visual information is one of the most popular because of the abundance of its contents. The robots require geometric information of their environment to control their motion for interacting with it. In this paper we focus on a robot working in an artificial workspace where the environment and the object can be modelled as block shapes (polyhedra). Furthermore, we introduce impedance control to avoid damage caused by collisions of the robot with the environment or objects. The control system requires motion information such as position, velocity and acceleration, which can be obtained from the input image sequences.
This study was supported in part by Ministry of Culture, Science and Education of Japan.
In motion-from-image-sequence approaches, a key problem has been to find correspondences of primitives between subsequent images [1]. Ullman studied motion estimation based on point correspondences between sequential images, using the assumption of rigidity of the objects. He showed that it is possible to obtain the motion and structure of an object from the correspondence of 4 points in 3 frames of the sequential images under parallel projection, and of 5 points in 3 frames under perspective projection [2]. Tsai and Huang derived linear equations by using a singular value decomposition of the matrix which contains medial parameters obtained from motion information [3]. The equation is linear in 8 variables, and it enables us to obtain solutions from a correspondence of 8 points in 2 frames under perspective projection. They thereby obtained a linear solution for a problem that had previously been analyzed using nonlinear equations. It is, however, impossible to solve the linear equations if all 8 points lie on 2 planes where one of the two planes intersects the origin of the 3D coordinates (at least 5 of the 8 points satisfy this condition), or on the surface of a cone which intersects the origin (at least 6 of the 8 points satisfy this condition). On the other hand, impedance control has been proposed to equip robots with the ability to behave flexibly with respect to their environment [5]. The control strategy is based on passive force generation from a virtually set mechanical impedance with stiffness, viscosity and inertia. However, such a motion is invoked only after a contact between the robot and its environment occurs. To avoid collisions between them, a virtual impedance scheme was proposed by Tsuji et al. [6]. It makes it possible to avoid a collision with an approaching object, or to set the dynamic properties of a robot adaptively before contact with an object, human or environment. However, it is not easy to decide the impedance parameters beforehand. In this paper we propose a linear algorithm to estimate the motion of a rigid object utilizing a relative expression of coordinates, assuming the existence of a pair of parallel line segments, and also an online learning method with NNs to obtain the impedance parameters during the motion of tasks.
2 Motion Formulation
In this paper we focus on the reconstruction of motion from two subsequent images onto which one rigid object in the scene is projected by perspective projection. In this section we formulate the motion of one end-point of a line segment and the relative motion of the other end-point. The motion of a line segment on the image is shown in Fig. 1. We define (x, y) as one end-point of a line segment and (r, s) as the relative position of the other one; the positions after motion are defined as (x', y') and (r', s'), analogously to the ones before motion.
2.1 Motion of Terminal Points
We show the geometrical relationship between the motion of point P in the scene and the image plane in Fig. 1. The origin of the world coordinate system O-XYZ is set at the lens center of the camera, and the Z axis of the coordinate system is
Fig. 1. Moving line segment and its projection onto an image
placed along the optical axis. The 3D motion of point P can then be generally represented by the rotational component Φ and the translational component Γ as follows,

[X'; Y'; Z'] = Φ [X; Y; Z] + Γ,    (1)

where Φ and Γ are represented as follows,

Φ = [ φ1 φ2 φ3 ; φ4 φ5 φ6 ; φ7 φ8 φ9 ],    Γ = [ ΔX; ΔY; ΔZ ].    (2)
The elements of the rotation matrix Φ can be rewritten by using the rotation angle θ and the rotation axis (σ1, σ2, σ3). On the other hand, a point P(X, Y, Z) in 3D space is projected onto the image plane as p(x, y). The relations between the two points in the 3D world, P(X, Y, Z) and P'(X', Y', Z'), and their projections p(x, y) and p'(x', y') onto the image plane are given as follows,

x = X/Z,   y = Y/Z,   x' = X'/Z',   y' = Y'/Z',    (3)
where P'(X', Y', Z') and p'(x', y') are the points after motion in 3D space and on the image, respectively.
2.2 Relative Expression of Motion
Here we introduce a new Cartesian coordinate system Ô-X̂Ŷ Ẑ whose origin Ô is set at the end-point of line segment P, as shown in Fig. 2. The other end-point can then be represented as M(m1, m2, m3) in this coordinate system. The corresponding point after motion can be represented as M'(m1', m2', m3'), where M and
Fig. 2. Local coordinate system Ô-X̂Ŷ Ẑ and motion of two line segments
M' are the vectors along the line segments. The relations between the vectors M, M' and their projections onto the image plane, (r, s) and (r', s'), are given as follows,

r = (m1 − m3 x) / (Z + m3),   s = (m2 − m3 y) / (Z + m3),    (4)

r' = (m1' − m3' x') / (Z' + m3'),   s' = (m2' − m3' y') / (Z' + m3').    (5)
In Fig. 2, the motion between M and M' is represented only by the rotation matrix Φ as follows,

M' = Φ M.    (6)

Here we introduce parameters t and t' defined as follows,

t = m3 / (Z + m3),   t' = m3' / (Z' + m3').    (7)
These parameters t, t' denote the depth ratios of the end-points of the line segments. Therefore, we obtain the following equations,

M = (Z + m3) N,   N = [ r + t x ; s + t y ; t ],    (8)

M' = (Z' + m3') N',   N' = [ r' + t' x' ; s' + t' y' ; t' ].    (9)

For each line segment before and after the motion we define unit vectors along the line segments as I and I', respectively,

I = N / |N|,   I' = N' / |N'|.    (10)
Substituting Eq. (10) into Eq. (8) and Eq. (9), we get the following equations,

M = (Z + m3) |N| I,    (11)

M' = (Z' + m3') |N'| I'.    (12)
M and M' have the same length, because they are relatively represented and denote the same line segment. As the Z components of the coordinates of the end-points, Z + m3 and Z' + m3', are positive, we can define the following value K from Eq. (11) and Eq. (12),

K = (Z' + m3') / (Z + m3) = |N| / |N'|.    (13)
It is possible to compute the value K if t and t' defined in Eq. (7) can be obtained. Then Eq. (6), representing the rotation in 3D space, can be described by using the coordinates on the image as follows,

K N' = Φ N.    (14)

In this equation the unknown parameters are Φ, t and t'.
3 Motion Estimation
3.1 Motion Parameters
When the correspondence between two line segments in 3D space is found from an image, the parameters in Eq. (14) can be obtained. In this equation Φ is a unitary matrix, therefore we obtain the following equations,

K1^2 |N1'|^2 = |N1|^2,    (15)

K2^2 |N2'|^2 = |N2|^2,    (16)

K1 K2 N1'^T N2' = N1^T N2.    (17)
These equations include the unknown parameters t1, t1', t2, t2'; hence, we need more than the three equations shown above.
3.2 Linearization by Using Parallel Relation
Once the relative lengths t and t' are obtained, it is possible to get the rotation matrix Φ from the relations of the two line segments using Eq. (14). Therefore we assume that a pair of line segments is parallel in 3D space. This assumption is obviously the constraint for obtaining a unique solution for the parameters t and t'.
If the two line segments l1 and l2 are parallel, we can obtain the following equation,

[ r1 + t1 x1 ; s1 + t1 y1 ; t1 ] = α [ r2 + t2 x2 ; s2 + t2 y2 ; t2 ].    (18)

The same relation holds for the line segments after the motion,

[ r1' + t1' x1' ; s1' + t1' y1' ; t1' ] = α' [ r2' + t2' x2' ; s2' + t2' y2' ; t2' ],    (19)

where α and α' are constant values denoting the ratio of the two vectors along the parallel line segments. We can eliminate α and α' in Eq. (18) and Eq. (19) and solve for t1, t2, t1' and t2'. Then we get

t1 = (r2 s1 − r1 s2) / (s2 x1 − s2 x2 + r2 y2 − r1 y1),    (20)

t2 = (r2 s1 − r1 s2) / (s1 x1 − s1 x2 + r1 y2 − r1 y1),    (21)

t1' = (r2' s1' − r1' s2') / (s2' x1' − s2' x2' + r2' y2' − r1' y1'),    (22)

t2' = (r2' s1' − r1' s2') / (s1' x1' − s1' x2' + r1' y2' − r1' y1').    (23)
Thus we can obtain the relative lengths t1, t2, t1' and t2' from the image coordinates of the end points of line segments which are parallel in space.
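A sketch of Eqs. (20) and (21): given the image coordinates of two parallel segments before motion (base points (x_i, y_i), relative end points (r_i, s_i)), the relative lengths follow directly; the primed values t1', t2' are obtained by calling the same function on the coordinates after motion. The function name and the degeneracy check are illustrative additions.

```python
def relative_lengths(x1, y1, r1, s1, x2, y2, r2, s2):
    """Relative depths t1, t2 of the end points of two parallel line segments
    (Eqs. 20 and 21); raises if the configuration is degenerate."""
    num = r2 * s1 - r1 * s2
    den1 = s2 * x1 - s2 * x2 + r2 * y2 - r1 * y1
    den2 = s1 * x1 - s1 * x2 + r1 * y2 - r1 * y1
    if abs(den1) < 1e-12 or abs(den2) < 1e-12:
        raise ValueError("degenerate segment configuration")
    return num / den1, num / den2

# t1', t2' for the frame after motion: reuse the function with primed coordinates
# t1p, t2p = relative_lengths(x1p, y1p, r1p, s1p, x2p, y2p, r2p, s2p)
```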
3.3 Motion Estimation by Using Parallelism
As mentioned above, the parameters t1, t2, t1' and t2' can be obtained assuming two line segments are parallel. It is, however, impossible to compute the motion parameters, i.e. the rotation matrix, by using only the relationship between two parallel line segments, because the equations derived from them are not independent. Therefore, we also introduce virtual lines, which are the line segments obtained by connecting the end-points of the two line segments such that no diagonals are formed, as shown in Fig. 3 a). In Fig. 3 a), the line segments l1 and l2 are real ones, and l3, l4 virtual ones. The Z coordinates of the base points of these four line segments are defined as Z1, Z2, Z3, Z4, and the vectors toward the other end point of each line segment as μ1, μ2, μ3, μ4. These coordinate definitions can be seen in Fig. 3 b).
STEP 1. In Fig. 3 a) the line segments l1 and l2 denote the real lines, and l3 and l4 the virtual lines. The parameters t1, t1', t2 and t2' can be obtained, because the line segments l1 and l2 are parallel.
Fig. 3. Virtual lines are added to form a quadrilateral
STEP 2. From the parameters t1, t1', t2 and t2', we obtain the vectors N1, N1', N2 and N2'. In the next step we derive the parameters K3 and K4 for the virtual line segments by using the obtained parameters K1 and K2 of the real line segments.
STEP 3. As in Eq. (7) for the real line segments, we can define t3 for l3, which can be rewritten as

Z3 + μ3 = Z3 / (1 − t3).    (24)

Here we define the Z coordinate of each base-point (the origin of the local coordinate system) of the line segments as Z1, Z2, Z3 and Z4, and the relative depths of the end-points as μ1, μ2, μ3 and μ4, paying attention to the depth (Z axis) component of all points. As the four line segments share their base and end points, we have the following relationships,

Z1 = Z3 + μ3,    (25)

Z1 + μ1 = Z4 + μ4,    (26)

Z2 + μ2 = Z4,    (27)

Z2 = Z3.    (28)
Here we define the ratio of vector lengths for the virtual line segment l3 as K3, as follows,

K3 = (Z3' + μ3') / (Z3 + μ3).    (29)

Substituting Eq. (25) into Eq. (29) we have

K3 = Z1' / Z1.    (30)

Substituting Eq. (24) into Eq. (30) we obtain the following equation,

K3 = Z1' / Z1 = ((1 − t1') / (1 − t1)) · ((Z1' + μ1') / (Z1 + μ1)) = ((1 − t1') / (1 − t1)) · K1.    (31)
Thus the parameter K3 can be obtained from K1. In the same way, the ratio of vector lengths K4 for the virtual line segment l4 is obtained as follows,

K4 = (Z4' + μ4') / (Z4 + μ4).    (32)

Substituting Eq. (26) into Eq. (32) yields

K4 = (Z1' + μ1') / (Z1 + μ1) = K1.    (33)
Thus, the parameters K for the virtual line segments can be acquired from the K of the real line segments. It is self-evident that we can solve them even after exchanging the end-point and the base-point of a line segment. Finally, we obtain the components of a vector representing the direction of a virtual line segment l3 or l4 in the next step.
STEP 4. Eq. (16) and Eq. (17) hold for the virtual line segments as well as for the real line segments. They are written as follows,

K3^2 |N3'|^2 = |N3|^2,    (34)

K1 K3 N1'^T N3' = N1^T N3,    (35)

where N1, N1' and K1 are given by Eq. (8), Eq. (9) and Eq. (13), and the relation K3 = β K1 is derived from Eq. (31). Hence, Eq. (34) and Eq. (35) contain only the unknown parameters t3 and t3', with

N3 = [ r3 + t3 x3 ; s3 + t3 y3 ; t3 ],   N3' = [ r3' + t3' x3' ; s3' + t3' y3' ; t3' ].    (36)

The unknowns N3 and N3' can be expressed by t3 and t3' using Eq. (8) and Eq. (9), so Eq. (34) and Eq. (35) are rewritten as follows,

β^2 K1^2 { (r3' + t3' x3')^2 + (s3' + t3' y3')^2 + t3'^2 } = (r3 + t3 x3)^2 + (s3 + t3 y3)^2 + t3^2,    (37)

β K1^2 { (r1' + t1' x1')(r3' + t3' x3') + (s1' + t1' y1')(s3' + t3' y3') + t1' t3' } = (r1 + t1 x1)(r3 + t3 x3) + (s1 + t1 y1)(s3 + t3 y3) + t1 t3.    (38)

Then we obtain two solutions of the above simultaneous equations. Furthermore, we can select the unique solution using coefficients of variation, provided the image data contains little error or noise. From the obtained parameters t1, t1', t2, t2', t3 and t3', we can compute the rotation matrix Φ using Eq. (14).
4 Non-contact Impedance Control
4.1 Impedance Control
In general, the motion equation of an m-joint manipulator in the l-dimensional task space can be written as

M(θ) θ̈ + h(θ, θ̇) = τ + J^T(θ) F_int,    (39)
where θ ∈ R^m is the joint angle vector; M(θ) ∈ R^(m×m) is the nonsingular inertia matrix (hereafter denoted by M); h(θ, θ̇) ∈ R^m is the nonlinear term including the joint torques due to centrifugal, Coriolis, gravity and friction forces; τ ∈ R^m is the joint torque vector; F_int ∈ R^l is the external force exerted on the end-effector; and J ∈ R^(l×m) is the Jacobian matrix (hereafter denoted by J). The desired impedance properties of the end-effector can be expressed as

M_e dẌ + B_e dẊ + K_e dX = F_int,    (40)
where M_e, B_e, K_e ∈ R^(l×l) are the desired inertia, viscosity and stiffness matrices of the end-effector, respectively; and dX = X_e − X_d ∈ R^l is the displacement vector between the current position X_e of the end-effector and the desired one X_d. The impedance control law does not use an inverse of the Jacobian matrix and is given as follows:

τ = τ_effector + τ_comp,    (41)
τ_effector = J^T { M_x(θ) [ M_e^(−1) (−K_e dX − B_e dẊ) + Ẍ_d − J̇ θ̇ ] − [ I − M_x(θ) M_e^(−1) ] F_int },    (42)

τ_comp = J^T M_x(θ) J M̂^(−1) ĥ(θ, θ̇),    (43)

M_x(θ) = ( J M̂^(−1)(θ) J^T )^(−1),    (44)
where M_x(θ) = (J M̂^(−1) J^T)^(−1) ∈ R^(l×l) indicates the operational-space kinetic energy matrix, which is proper as long as the joint configuration θ is not singular; τ_effector ∈ R^m is the joint torque vector necessary to realize the desired end-effector impedance; τ_comp ∈ R^m is the joint torque vector for nonlinear compensation; ĥ(θ, θ̇) and M̂ denote the estimated values of h(θ, θ̇) and M(θ), respectively; and I is the l × l unit matrix. The impedance properties of the end-effector can be regulated by the designed controller.
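A sketch of the control law of Eqs. (41)-(44) in NumPy. All dynamic quantities (Jacobian, inertia and nonlinear-term estimates, J̇θ̇) are assumed to be supplied by a robot model; the function and argument names are placeholders, not the authors' implementation.

```python
import numpy as np

def impedance_torque(J, dJdt_dq, M_hat, h_hat, Me, Be, Ke, dX, dXdot, Xdd_d, F_int):
    """Joint torques realizing the desired end-effector impedance (Eqs. 41-44).
    J: l x m Jacobian, dJdt_dq: J_dot @ q_dot (length l), M_hat: m x m inertia
    estimate, h_hat: estimated nonlinear term (length m)."""
    Mx = np.linalg.inv(J @ np.linalg.inv(M_hat) @ J.T)           # Eq. (44)
    Me_inv = np.linalg.inv(Me)
    acc_ref = Me_inv @ (-Ke @ dX - Be @ dXdot) + Xdd_d - dJdt_dq
    tau_eff = J.T @ (Mx @ acc_ref
                     - (np.eye(len(dX)) - Mx @ Me_inv) @ F_int)  # Eq. (42)
    tau_comp = J.T @ Mx @ J @ np.linalg.inv(M_hat) @ h_hat       # Eq. (43)
    return tau_eff + tau_comp                                    # Eq. (41)
```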
4.2 Non-contact Impedance Control
Fig.4 (left) schematically shows the non-contact impedance control. Let us consider the case in which an object approaches a manipulator, and set a virtual sphere with radius r at the center of the end-effector. When the object enters
Fig. 4. Scheme and block diagram of non-contact impedance control
the virtual sphere, the normal vector on the surface of the sphere toward the object, dX_o ∈ R^l, can be represented as

dX_o = X_r − r n,    (45)

where X_r = X_o − X_e is the displacement vector from the center of the sphere (namely, the end-effector position) to the object, and the vector n ∈ R^l is given by

n = X_r / |X_r|   (X_r ≠ 0),
n = 0             (X_r = 0).    (46)

When the object is inside the virtual sphere (|X_r| < r), the virtual impedance acts between the end-effector and the object, so that the virtual external force F_o ∈ R^l exerted on the end-effector is

F_o = M_o dẌ_o + B_o dẊ_o + K_o dX_o   (|X_r| ≤ r),
F_o = 0                                (|X_r| > r),    (47)

where M_o, B_o and K_o ∈ R^(l×l) represent the virtual inertia, viscosity and stiffness matrices. It should be noted that F_o becomes zero when the object is outside the virtual sphere or at the center of the sphere. Thus, the dynamic equation of the end-effector for non-contact impedance control can be expressed as

M_e dẌ + B_e dẊ + K_e dX = F_int + F_o.    (48)
Fig. 4 (right) depicts a block diagram of the non-contact impedance control. The motion equation of the end-effector X_e(s), for external forces depending on the object position X_o(s) and the desired end-effector position X_d(s), yields

X_e(s) = (M_e s^2 + B_e s + K_e) / (M s^2 + B s + K) · X_d(s) + (M_o s^2 + B_o s + K_o) / (M s^2 + B s + K) · X_o(s) + ( F_int(s) − (M_o s^2 + B_o s + K_o) r n ) / (M s^2 + B s + K),    (49)
where M = M_o + M_e, B = B_o + B_e, K = K_o + K_e. The stability condition for this system is therefore M_o ≥ −M_e, B_o ≥ −B_e, K_o ≥ −K_e, where the equalities must not hold simultaneously. With non-contact impedance control, the relative motion between the end-effector and the object can be regulated by the virtual impedance parameters during non-contact movements. These virtual impedance parameters (M_o, B_o, K_o) are determined using NNs, as described in the next section.
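A sketch of the virtual external force of Eqs. (45)-(47). The relative position, velocity and acceleration of the object with respect to the end-effector are assumed to be available (e.g. from the vision system); treating the derivatives of dX_o as equal to those of X_r is a simplification of this sketch, not part of the original formulation.

```python
import numpy as np

def virtual_force(Xr, Xr_dot, Xr_ddot, radius, Mo, Bo, Ko):
    """Virtual force F_o exerted on the end-effector while the object is
    inside the virtual sphere of radius `radius` (Eq. 47)."""
    dist = np.linalg.norm(Xr)
    if dist > radius or dist == 0.0:
        return np.zeros_like(Xr)            # outside the sphere, or at its centre
    n = Xr / dist                           # Eq. (46)
    dXo = Xr - radius * n                   # Eq. (45)
    # simplification: derivatives of the surface offset r*n are neglected here
    dXo_dot, dXo_ddot = Xr_dot, Xr_ddot
    return Mo @ dXo_ddot + Bo @ dXo_dot + Ko @ dXo   # Eq. (47)
```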
5 Learning of Virtual Impedance by NNs
5.1 Structure of Control System
In the proposed control system, the virtual impedance part in Fig. 4 is composed of three multilayered NNs: a virtual stiffness network (VSN) for K_o, a virtual viscosity network (VVN) for B_o and a virtual inertia network (VIN) for M_o. The NNs receive as input the relative motion between end-effector and object (X_r, Ẋ_r, Ẍ_r) and the interaction force F_int, while each NN outputs the corresponding impedance parameter: K_o from the VSN, B_o from the VVN and M_o from the VIN. The NNs use a linear function in the input units and a sigmoid function in the hidden and output units. The input and output of each unit in the i-th layer, x_i and y_i, are therefore given by

x_i = I_i                 (input layer),
x_i = Σ_j w_ij y_j        (middle and output layers),    (50)

y_i = x_i                                            (input layer),
y_i = 1 / (1 + e^(−x_i))                             (middle layer),
y_i = U (1 − e^(−x_i+θ)) / (2 (1 + e^(−x_i+θ)))      (output layer),    (51)
where w_ij indicates the weight coefficient from unit j to unit i; and U and θ are positive constants for the maximum output and the threshold of the NN, respectively.
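A minimal sketch of these unit functions is given below; it only illustrates Eqs. (50)–(51), and the default values of U and θ are placeholders, not values reported by the authors.

```python
import numpy as np

def unit_input(values, weights=None, layer="hidden"):
    """Unit input of Eq. (50): the raw signal I_i for input units,
    a weighted sum of the previous layer's outputs otherwise."""
    if layer == "input":
        return values
    return np.dot(weights, values)

def unit_output(x, layer, U=1.0, theta=0.0):
    """Unit output of Eq. (51): linear input units, sigmoid hidden units,
    and a scaled, shifted sigmoid output unit bounded by U."""
    if layer == "input":
        return x
    if layer == "hidden":
        return 1.0 / (1.0 + np.exp(-x))
    return 0.5 * U * (1.0 - np.exp(-x + theta)) / (1.0 + np.exp(-x + theta))
```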
5.2 On-Line Learning
The learning of the NNs is performed by updating the synaptic weights in the NNs on-line so as to minimize an energy function E(t), depending on the task, under the stability condition. The synaptic weights in the VSN, the VVN and the VIN are updated in the direction of the gradient descent to reduce the energy function E(t) as

Δw_ij = −η ∂E(t)/∂w_ij    (52)

∂E(t)/∂w_ij = [∂E(t)/∂F_act(t)] [∂F_act(t)/∂O(t)] [∂O(t)/∂w_ij]    (53)
where η is the learning rate of each NN, F_act(t) ∈ R^l is the control input and O(t) ∈ R^{l×3} is the NN output. The term ∂F_act(t)/∂O(t) can be computed from Eq. (47), and ∂O(t)/∂w_ij by the error back-propagation learning method. However, the term ∂E(t)/∂F_act(t) cannot be obtained directly because of the nonlinear dynamics of the manipulator. In our method the term ∂E(t)/∂F_act(t) is approximated in the discrete-time system so that Δw_ij can be calculated in real time by using the change of E(t) for a slight variation of F_act(t). Defining the energy function E(t) depending on the end-point position and velocity, X_e(t) and \dot{X}_e(t), the term ∂E(t)/∂F_act(t) can be expanded as

∂E(t)/∂F_act(t) = [∂E(t)/∂X_e(t)] [∂X_e(t)/∂F_act(t)] + [∂E(t)/∂\dot{X}_e(t)] [∂\dot{X}_e(t)/∂F_act(t)].    (54)
A slight change of the control input ΔF_act(t) within a short time yields the following approximations:

ΔX_e(t) ≈ ΔF_act(t) Δt_s²    (55)

Δ\dot{X}_e(t) ≈ ΔF_act(t) Δt_s    (56)
so that ∂X_e(t)/∂F_act(t) and ∂\dot{X}_e(t)/∂F_act(t) can be expressed [8] as follows:

∂X_e(t)/∂F_act(t) = ΔX_e(t)/ΔF_act(t) = Δt_s² I    (57)

∂\dot{X}_e(t)/∂F_act(t) = Δ\dot{X}_e(t)/ΔF_act(t) = Δt_s I    (58)
where Δt_s is the sampling interval and I ∈ R^{l×l} is an identity matrix. Consequently, the term ∂E(t)/∂F_act(t) can be approximately computed. With the designed learning rules, online learning can be carried out so that the output of the NNs, O(t), is regulated to the optimal virtual impedance parameters for the tasks.
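The discrete-time approximation of Eqs. (54)–(58) and the weight update of Eqs. (52)–(53) can be sketched as follows. The array shapes and function names are editorial assumptions for illustration; ∂F_act/∂O and ∂O/∂w_ij are assumed to be supplied by Eq. (47) and by back-propagation, respectively.

```python
import numpy as np

def dE_dFact(dE_dXe, dE_dXedot, dt_s):
    """Approximation of Eq. (54) using Eqs. (57)-(58):
    dX_e/dF_act ~ dt_s**2 * I and dXdot_e/dF_act ~ dt_s * I."""
    return dE_dXe * dt_s**2 + dE_dXedot * dt_s

def weight_update(dE_dF, dFact_dO, dO_dw, eta):
    """Gradient-descent update of Eqs. (52)-(53) for one weight w_ij.

    dE_dF    : shape (l,)   approximated sensitivity of E to the control input
    dFact_dO : shape (l, 3) sensitivity of the control input to the NN outputs
    dO_dw    : shape (3,)   sensitivity of the NN outputs to the weight w_ij
    eta      : learning rate
    """
    dE_dw = dE_dF @ dFact_dO @ dO_dw     # chain rule, Eq. (53)
    return -eta * dE_dw                  # Eq. (52)
```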
6 Experiments with the Robotic Manipulator
To verify the validity of the proposed method, we conducted experiments on two tasks, catching and hitting an object, with a 6-DOF multijoint robotic manipulator as shown in Fig. 5 (right).
6.1 Experimental System
The system for the experiments is outlined in Fig. 5 (left). The motions of the robotic arm and the object are restricted to a vertical 2D plane in the experiments. The object is a wooden cube with several pairs of parallel edges on its surface; it is hung from the ceiling by a metal stick and swung like a pendulum. An image sequence of the motion of the object is captured with a conventional TV camera, which has been calibrated in advance.
Fig. 5. A model and a manipulator of experimental system
6.2 Catching Task
In the catching task, the interaction force between the end-effector and the object should converge to the desired value without overshooting, to avoid exerting a large interaction force on them. To this end, the relative velocity between the end-effector and the object should be reduced before contact, and the end-point force after contact with the object should be controlled. Accordingly, an energy function for the learning of the NNs can be defined as

E_c = E_cv(t) + μ E_cf(t)    (59)

E_cv(t) = (1/2) ||α(X_r) \dot{X}_r(t_c) − \dot{X}_r(t)||²    (60)

E_cf(t) = (1/2) ∫_0^t ||F_cd(u) − F_int(u)||² du    (61)

where t_c is the time when the virtual sphere just contacts the object. In this task, the end-effector should move in the same direction as the approaching direction of the object in the first phase and then slow down gradually in order to catch the object. α(X_r) is a time-varying gain function that should be designed according to the contact task so as to avoid generating an excessive interaction force while performing stable learning of the NNs immediately after the object enters the virtual sphere. This function plays a role in smoothing out the effects of the change of velocity of the object. Consequently, the gain function was designed as

α(X_r) = sin[ (|X_r| − R_b) / (r − R_b) · π/2 ]  (|X_r| ≥ R_b),    α(X_r) = 0  (|X_r| < R_b)    (62)

where r is the radius of the virtual sphere. α(|X_r|) is shown in Fig. 6. The NNs consist of four-layered networks with four input units, two hidden layers with twenty units each, and one output unit.
Fig. 6. A gain function and experimental results for catching task
Fig. 6 shows typical experimental results with and without online learning. In Fig. 6, (a) illustrates the time histories of the end-effector position along the x-axis (solid line) and the object (broken line), while (b) and (c) show the time histories of the interaction force without and with real-time learning. t_c indicates the time when the object came into the virtual sphere. It can be seen that the manipulator moves its end-effector according to the object movements after the learning of the NNs, so that the robot catches the object smoothly by reducing the impact force between the hand and the object.

6.3 Hitting Task
In the hitting task, we need to control the direction of the end-effector motion to be opposite to that of the object. We define the evaluation function for the task as follows:

E_h = E_hv(t) + μ E_hf(t)    (63)

E_hv(t) = (1/2) ||β(X_r) \dot{X}_o(t) − \dot{X}_e(t)||²    (64)

E_hf(t) = (1/2) ∫_0^t ||F_hd(u) − F_int(u)||² du    (65)
where F_hd ∈ R^l denotes the desired interaction force, and μ is a positive constant which regulates the effect of Eq. (65). The difference of the velocities and the forces between the object and the end-effector is evaluated using E_hv(t) and E_hf(t), respectively. The function β(X_r) in Eq. (64) plays a role in smoothing out the effect of the velocity change right after the object enters the virtual sphere, and is defined as

β(X_r) = −sin[ (|X_r| − R_b) / (r − R_b) · π/2 ]  (|X_r| ≥ R_b),    β(X_r) = 0  (|X_r| < R_b)    (66)
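The two gain functions of Eqs. (62) and (66) differ only in sign and are straightforward to evaluate. The sketch below is an editorial illustration with assumed argument conventions (X_r as a vector, R_b and r as scalars).

```python
import numpy as np

def alpha_gain(X_r, r, R_b):
    """Gain of Eq. (62) for the catching task (0 at R_b, rising to 1 at r)."""
    d = np.linalg.norm(X_r)
    if d < R_b:
        return 0.0
    return np.sin((d - R_b) / (r - R_b) * np.pi / 2.0)

def beta_gain(X_r, r, R_b):
    """Gain of Eq. (66) for the hitting task (the negative of alpha)."""
    return -alpha_gain(X_r, r, R_b)
```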
The function β(X_r) is shown in Fig. 7. The structure of the NNs is the same as that used for the catching task.
Fig. 7. A gain function and experimental results for hitting task
In Fig. 7, (a) illustrates the time histories of the positions of the end-effector (solid line) and the object (broken line) along the y-axis, while (b) shows the interaction force between the end-effector and the object with online learning of the NN parameters. It can be seen that the manipulator successfully hits the object periodically and repeatedly.
7 Conclusion
In this paper we proposed recognition and control methods for hand and eye coordination in order to establish a flexible robot system which can interact with artificial environments. From the experimental results, real-time control of a robot manipulator was achieved in which the robot could acquire the parameters for impedance control online using NNs. We utilized a pair of parallel line segments on the object surface. However, it is necessary to incorporate much more visual information in order to improve the reliability of the parameters reconstructed from images.
References
1. Aggarwal, J.K., Nandhakumar, N.: On the Computation of Motion from Sequence of Images - A Review. Proc. of IEEE 76(8), 917–935 (1988)
2. Ullman, S.: The Interpretation of Visual Motion. MIT Press, Boston (1979)
3. Tsai, R.Y., Huang, T.S.: Uniqueness and Estimation of Three-Dimensional Motion Parameters of Rigid Object with Curved Surface. IEEE Trans. of PAMI PAMI-6(1), 13–27 (1984)
4. Kanatani, K.: Constraints on Length and Angle. Computer Vision, Graphics and Image Processing 41, 28–42 (1988)
5. Hogan, N.: Impedance Control: An Approach to Manipulation, Parts I, II, III. ASME Journal of Dynamic Systems, Measurement, and Control 107(1), 1–24 (1985)
6. Tsuji, T., Kaneko, M.: Non-contact Impedance Control for Redundant Manipulator. IEEE Transactions on Systems, Man, and Cybernetics - Part A 29(2), 184–193 (1999)
7. Cohen, M., Flash, T.: Learning Impedance Parameters for Robot Control Using an Associative Search Network. IEEE Trans. on Robotics and Automation 7(3), 382–390 (1991)
8. Tsuji, T., Tanaka, Y.: On-line Learning of Robot Arm Impedance Using Neural Networks. Robotics and Autonomous Systems 52(4), 257–271 (2005)
MFC - A Modular Line Camera for 3D World Modelling

Börner Anko¹, Hirschmüller Heiko², Scheibe Karsten¹, Suppa Michael², and Wohlfeil Jürgen¹

¹ German Aerospace Center, Optical Information Systems, Rutherford Str. 2, D-12489 Berlin, Germany
[email protected]
² German Aerospace Center, Robotics and Mechatronics, Münchner Str. 20, D-82234 Wessling, Germany
[email protected]

Abstract. Quality of 3D modelling, in terms of accuracy and level of detail, depends fundamentally on the parameters of the imaging system. All aspects, from camera design through calibration to processing, have to be considered and coordinated; the whole system chain has to be realized and controlled. The German Aerospace Center has developed a new generation of line cameras called MFC (multi functional camera), which can be applied for spaceborne, airborne and terrestrial data acquisition. Its core is a CCD line module consisting of the imaging detector and the front-end electronics. Any desired number of such modules can be combined to build up a custom-specific imaging system. Flexible usage of different CCD sensors, a USB2 interface, real-time noise reduction and image compression are the main features of this camera. The application of an inertial measurement unit allows data acquisition with moving platforms. This paper gives an overview of the system parameters. All steps of data processing are introduced and examples of 3D models are given. State-of-the-art stereo matching algorithms are of prime importance; therefore, the applied semi-global matching is described briefly. The quality of the data products is evaluated.
1 Introduction
The demand for digital camera systems providing the data base for high-resolution 3D models is increasing drastically. Such cameras have to fulfil a number of strong requirements concerning resolution, calibration accuracy and image quality. Such parameters are correlated with costs. In the field of airborne photogrammetry a number of such sensor systems are available (HRSC, ADS40, DMC, UltraCam). On the other hand, the quality of 3D models depends on the algorithmic chain for processing the image data. Image matching was and is the main challenge. In both fields, camera technologies and processing algorithms, very promising developments have taken place. Inexpensive, high-resolution CCD lines, storage devices and fast processing units (CPUs and FPGAs) are available and allow the deployment of efficient and smart cameras.
The application and further development of matching algorithms developed for robotic applications ensures a new quality of 3D reconstruction. The German Aerospace Center (DLR) is able to realize complete system approaches, starting from camera design and development up to calibration and data processing. The Multi Functional Camera (MFC) is one outstanding example of this expertise.
2 Multi Functional Camera (MFC)
The MFC is a CCD-line-based digital camera (see Figure 1). Each CCD line is glued onto a separate focal plane module. This module includes heat pipes for thermal control. In addition, it carries two boards with the front-end electronics (FEE). These boards are soldered directly onto the CCD, so very short paths for the analogue signals are achieved. The FEE is responsible for A/D conversion, for the reduction of systematic noise effects (e.g., photo response non-uniformity) and for data compression (lossy JPEG or lossless DPCM). All these tasks are performed in an FPGA. Digital data are transmitted via a USB2 interface to a receiving unit (e.g., an internal PC104, an external laptop or a 19" industrial computer) and stored in an internal or external mass memory system (hard disk or solid state). All focal plane modules are aligned and fixed sequentially in relation to the optical axis.
Fig. 1. MFC-1 camera (left), focal plane module (middle) and MFC-3 camera (right)
Special features of the MFC cameras are:
– Modularity (up to 5 focal plane modules in one system)
– Usage of off-the-shelf RGB or monochromatic CCD lines (own specified filter) with identical shape and interfaces and different numbers of pixels
– Heat-pipe and Peltier cooling system for the CCD lines
– Usage of a standard USB2 interface
– Usage of a standard Ethernet interface
– Internal PC104 CPU and storage (MFC-3 and MFC-5)
– Trigger output and 2× event input (PPS and GPS string)
The basic parameters of three standard MFC cameras are listed in Table 1 and are only examples of possible configurations. Currently two camera bodies are available: the smaller one for single-line applications, and the bigger camera body for up to five modules.
Table 1. Individual setup possibilities for different MFC cameras

Parameter                               | MFC-1                 | MFC-3             | MFC-5
Number of modules                       | 1                     | 3                 | 5
Number of pixels                        | 2k, 6k, 8k            | 6k, 8k            | 8k, 10k, 14k*
Focal length                            | 35 mm, 80 mm          | 80 mm             | 100 mm
Stereo angle                            | 0                     | ±12.6°            | ±10.2°, ±19.8°
Radiometric resolution                  | 12 bit                | 12 bit            | 12 bit, 10 bit (optional)*
Mass                                    | 1.5 kg                | 15 kg             | 20 kg
Interface                               | USB 2.0               | Ethernet (1 Gb/s) | Ethernet (1 Gb/s)
Size (W×H×D, without optics and grips)  | 4 × 20 × 15 cm³       | 20 × 30 × 30 cm³  | 20 × 30 × 30 cm³
Application                             | Terrestrial, Airborne | Airborne          | Airborne
3 Photogrammetric Data Processing
The MFC was tested regarding its image quality, geometric resolution and accuracy. For this purpose the camera was used in different measurement campaigns. Because the MFC-3 and MFC-5 are mainly designed for digital aerial photography and 3D stereo reconstruction, several flight campaigns were carried out from October 2006 until October 2007 in Berlin and Bavaria. For evaluation of the system and to support future flight campaigns, a control point field of about 2 km² was surveyed, and a Siemens star (i.e., a resolution test chart) for experimentally measuring the modulation transfer function (MTF) of the system was installed on the premises of the German Aerospace Center. Furthermore, some tests for terrestrial applications were also made with the MFC-3 and MFC-5. For these, the camera was mounted in a car and viewed sideways through an open window. This is of interest, for example, for stereo scanning of larger street segments which cannot be scanned by aerial photographs. The camera was also mounted on a turntable for testing the image quality within standard panoramic imaging (see Figure 2). Actually, the MFC-1 is specially designed for such applications. At the moment, tests with a German laser range finder manufacturer are in progress for implementing a single line camera for coloring 3D scans, both for terrestrial and aerial applications.
Fig. 2. Using the MFC-3 with a turntable for scanning a panoramic image
3.1 Preprocessing
To enable direct georeferencing of the MFC images, an accurate determination of the camera's exterior orientation is needed. For this purpose an integrated GPS/inertial system is used (e.g., the IGI AEROControl IId or the Applanix POS-AV 510), and its IMU is directly mounted onto the camera body. A precise time synchronisation of the MFC and the GPS/INS system is mandatory. Therefore, the MFC receives a PPS signal (pulse per second) from the GPS/INS system and sends synchronisation events to the GPS/INS system. Both devices store the received synchronisation information. The original MFC images, scanned line by line, are heavily distorted due to non-linear movements of the MFC's platform (e.g., aeroplane, car, robot, etc.). After synchronization and correction of the measured orientation, each captured line of the original MFC image is projected onto a virtual reference plane at the average scene depth. The projected images are free from distortion and geometrically corrected.
3.2 Calibration
The radiometric calibration of the MFC consists of the correction of the electronic offset, a correction of the dark signal non-uniformity (DSNU) and a homogenization of the different sensitivities of the single pixels of a CCD line. The latter is corrected by the photo response non-uniformity (PRNU) function. The PRNU also includes the luminosity gradation of the optical system; therefore the PRNU is measured with a diffuse homogeneous emitter (e.g., an Ulbricht sphere). Finally, a linear factor for a white balance correction is estimated for different color temperatures [Scheibe, 2006]. The geometric calibration differs from standard photogrammetric approaches (e.g., [Brown, 1971], [Zhang, 2000]). Instead of describing the deviations from a linear Gaussian optic and an interior orientation of the sensor, a direct measurement of the single pixels of a CCD line is preferred. For this, the camera is mounted on a goniometric measurement station. A widened collimator ray illuminates a single pixel depending on the angles of a two-axis manipulator [Schuster and Braunecker, 2000]. The respective angles for a measured pixel correspond to its effective viewing direction. The resulting geometric calibration achieves high sub-pixel accuracy and was reproducible before and after different measuring campaigns.
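The radiometric chain described above usually amounts to a simple per-pixel correction. The exact correction model of the MFC is not given in the paper; the sketch below only illustrates the generic form such a chain takes (electronic offset, DSNU, PRNU, white balance), with all names chosen by the editor.

```python
import numpy as np

def radiometric_correction(raw, offset, dsnu, prnu, white_balance=1.0):
    """Generic per-pixel radiometric correction of a CCD line:
    subtract the electronic offset and the dark-signal non-uniformity,
    divide by the photo-response non-uniformity, and apply a linear
    white-balance factor.  All arrays have one entry per pixel."""
    raw = np.asarray(raw, dtype=np.float64)
    return white_balance * (raw - offset - dsnu) / prnu
```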
3.3 Boresight Alignment
The angular misalignment between the MFC and the GPS/INS coordinate system (called boresight alignment) has to be determined every time the IMU is (re)mounted on the MFC. As a 3D rotation, the boresight alignment is used to transform the measured orientation to the actual orientation of the camera. One possible approach to determine the boresight alignment is to select ground control points in one of the MFC images after a mission. Next, the 3D rotation that minimizes the divergence of the calculated and the known control point positions is automatically calculated. The other approach needs more MFC images, scanned in different directions, but only needs the selection of corresponding points visible in all images. By minimizing the misalignment-caused fuzziness of the calculation of the selected points' 3D positions, the boresight alignment can also be determined. It has been shown that both approaches produce the same results [Börner et al.].
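The core of both approaches is fitting a fixed 3D rotation to corresponding directions. One standard way to do this (not necessarily the authors' exact estimator) is the SVD-based orthogonal Procrustes/Kabsch solution, sketched below for pairs of unit direction vectors that are assumed to be given.

```python
import numpy as np

def fit_rotation(dirs_measured, dirs_expected):
    """Least-squares rotation R with R @ dirs_measured[i] ~ dirs_expected[i],
    computed with the Kabsch/Procrustes method.  Both inputs are (n, 3)
    arrays of corresponding unit direction vectors."""
    H = dirs_measured.T @ dirs_expected
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```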
3.4 Geometrical Accuracy
The geometrical accuracy of the MFC-3 with an IGI AEROControl IId GPS/INS system has been studied in a test flight over Berlin-Adlershof (Germany), about 900 meters above the ground. The image-based, photogrammetric reconstruction of 30 ground points with known positions could be achieved with a standard deviation of 8 cm on the ground plane [Börner et al.].
3.5 Stereo Matching
The pre-processed images as well as the intrinsic calibration data and the extrinsic movement path serve as input for stereo matching. In order to minimize the search region for correspondence finding, the epipolar geometry is required. In the case of dynamic applications on moving platforms like airplanes or cars, epipolar lines are not straight [Hirschmüller et al., 2005]. The computation of epipolar lines is done point by point, by reconstructing pixels at different distances and projecting them into the other images using the intrinsic calibration data as well as all extrinsic camera positions and orientations. Details are described in [Hirschmüller et al., 2005]. It is assumed that the camera calibration and especially the extrinsic movement path are accurate enough to compute the epipolar lines with an accuracy of 0.5 - 1 pixel. Stereo processing itself relies on the Semi-Global Matching (SGM) method [Hirschmüller, 2005]. This algorithm is based on minimizing a global cost function with a data and a smoothness term. The data term computes pixelwise matching using Mutual Information [Viola and Wells, 1997] in order to be robust against a wide range of radiometric distortions. The smoothness term adds a
penalty for all pixels with neighbours that have a different disparity. All pixels of the image are connected by the smoothness term, which is the reason for calling it a global cost function. The goal of stereo matching is to find the disparity image that minimizes this global cost function. This is in general an NP-complete problem. There are many global stereo algorithms that approximate this minimization, but they are typically slow. SGM is an efficient method for minimizing the global cost function that requires just O(ND) steps like local algorithms, with N being the number of pixels and D the disparity range. The implemented algorithm includes a consistency check and blunder elimination. Sub-pixel accuracy is achieved by applying quadratic interpolation. SGM works with image pairs. If more than two images are available, pairwise matching and subsequent median filtering are executed in order to determine disparity values robustly from redundant measurements. High-resolution images are processed by sub-dividing one data set into small patches. Geo-referencing is realized by projection into a suited coordinate system. Orthophotos can be calculated using the 3D surface model as well as the interior calibration and exterior positions and orientations by common mapping procedures. Figure 3 illustrates results of SGM matching. Even steep slopes (e.g., walls of buildings) are reconstructed exactly and small details are preserved. This is due to the pixelwise matching that is used by SGM.
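For readers unfamiliar with SGM, the sketch below shows its core recurrence: aggregating the pixelwise cost volume along one path direction with a small penalty P1 for disparity changes of one pixel and a larger penalty P2 for all larger changes. A full implementation repeats this along 8 or 16 directions and sums the aggregated costs; the penalty values and variable names here are editorial placeholders, not the authors' settings.

```python
import numpy as np

def aggregate_left_to_right(C, P1=10.0, P2=120.0):
    """Aggregate a pixelwise matching cost volume C (H x W x D) along the
    left-to-right scanline direction using the SGM recurrence."""
    C = np.asarray(C, dtype=np.float64)
    H, W, D = C.shape
    L = np.empty_like(C)
    L[:, 0, :] = C[:, 0, :]
    for x in range(1, W):
        prev = L[:, x - 1, :]                        # costs at the previous pixel
        prev_min = prev.min(axis=1, keepdims=True)   # best previous cost per row
        same = prev                                  # keep the same disparity
        up = np.full_like(prev, np.inf)              # disparity increases by one
        up[:, 1:] = prev[:, :-1] + P1
        down = np.full_like(prev, np.inf)            # disparity decreases by one
        down[:, :-1] = prev[:, 1:] + P1
        jump = prev_min + P2                         # any larger disparity change
        best = np.minimum(np.minimum(same, up), np.minimum(down, jump))
        L[:, x, :] = C[:, x, :] + best - prev_min    # subtraction keeps L bounded
    return L
```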
Fig. 3. MFC data products, test flight on March 22, 2007 over Berlin: textured 3D model (left), untextured 3D model (right)
3.6 Beside Matching
The combination of 3D laser range finder (LRF) data and high-resolution panoramic color image data has been widely used in recent years and offers an alternative for architectural documentation and digitalization. The measurement results of laser scanner systems are high-resolution digital surface models. A combination of those scans allows the creation of real 3D models [Scheibe et al., 2002; Hirzinger et al.]. For merging these data with the texture obtained from a high-resolution panoramic scanner, two strategies are pursued. First, a more or less expensive
mechanical concept ensures a more or less unique projection center for "both" data systems. In this case they can be understood to be one system. The advantage is the direct registration of a corresponding color pixel for each scanned 3D point. Secondly, separate data sets can be referenced to each other using photogrammetric bundle adjustment. The orientation data can be used to merge 3D points from either the LRF or the matching results with the 2D color pixels. In this step the color pixels have to be mapped onto the provided 3D or 2.5D data set. An efficient ray tracing routine allows the mapping of large amounts of high-resolution data [Scheibe, 2006]. Figure 4 shows a panoramic image mapped onto a meshed LRF point cloud. The point cloud was simply meshed point by point for a single LRF scan, and in a second step wrongly meshed triangles were deleted by using connectivity analysis. The black parts are occlusions and are visible because the virtual viewpoint differs from the LRF scanning viewpoint and the real camera position. In [Bodenmüller and Hirzinger] a meshing algorithm is shown which is designed for unorganized point clouds. This approach allows inserting points from different LRF scans into an existing mesh.
Fig. 4. A mapped panoramic image onto a 3D mesh given by an LRF. Using the LRF intensity data (left) and using color information of a 10k CCD-line (right).
4 Outlook
The DLR Institute of Robotics and Mechatronics developed a high-performance camera system for generating 3D world models. Due to its modular and scalable design and the usage of standardized interfaces and off-the-shelf components, the MFC is a very robust and cost-efficient sensor system applicable to a huge number of photogrammetric and mapping tasks. Semi-Global Matching is a very powerful approach for producing very detailed and accurate three-dimensional models. DLR has proven that it can control an entire system chain, starting with camera design and finishing with operational data processing. Future work will focus more on automation, real-time processing and fusion with other sensor systems.
References
[Brown, 1971] Brown, D.: Close-Range Camera Calibration. Photogrammetric Engineering 37(8), 855–866 (1971)
[Zhang, 2000] Zhang, Z.: A Flexible New Technique for Camera Calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence 22(11), 1330–1334 (2000)
[Börner et al.] Börner, A., et al.: Ableitung photogrammetrischer Produkte mit der MFC-Kamera. In: DGPF Tagungsband 16/2007, joint congress of SGPBF, DGPF and OVG, Muttenz, Switzerland (2007)
[Hirschmüller, 2005] Hirschmüller, H.: Accurate and efficient stereo processing by semi-global matching and mutual information. In: IEEE Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, June 2005, vol. 2, pp. 807–814 (2005)
[Hirschmüller et al., 2005] Hirschmüller, H., Scholten, F., Hirzinger, G.: Stereo vision based reconstruction of huge urban areas from an airborne pushbroom camera (HRSC). In: Kropatsch, W.G., Sablatnig, R., Hanbury, A. (eds.) DAGM 2005. LNCS, vol. 3663, pp. 58–66. Springer, Heidelberg (2005)
[Viola and Wells, 1997] Viola, P., Wells, W.M.: Alignment by maximization of mutual information. International Journal of Computer Vision 24(2), 137–154 (1997)
[Schuster and Braunecker, 2000] Schuster, R., Braunecker, B.: Calibration of the LH Systems ADS40 Airborne Digital Sensor. In: International Archives of Photogrammetry and Remote Sensing, vol. XXXIII, part B1, Amsterdam, pp. 288–294 (2000)
[Scheibe et al., 2002] Scheibe, K., Pospich, M., Scheele, M.: Multi-Sensor-Approach in Close Range Photogrammetry. In: Proceedings of Image and Vision Computing New Zealand 2002, Auckland, New Zealand, November 2002, pp. 77–81 (2002)
[Scheibe, 2006] Scheibe, K.: Design and test of algorithms for the evaluation of modern sensors in close-range photogrammetry. Dissertation, University of Göttingen (2006), http://elib.dlr.de/44626/
[Bodenmüller and Hirzinger] Bodenmüller, T., Hirzinger, G.: Online Surface Reconstruction From Unorganized 3D-Points For the (DLR) Hand-guided Scanner System. In: 2nd Symposium on 3D Data Processing, Visualization, Transmission, Salonika, Greece (2004)
[Hirzinger et al.] Hirzinger, G., et al.: Photo-Realistic 3D-Modelling - From Robotics Perception Towards Cultural Heritage. In: International Workshop on Recording, Modelling and Visualization of Cultural Heritage, Ascona, Switzerland (2005)
3D Person Tracking with a Color-Based Particle Filter

Stefan Wildermann and Jürgen Teich

Hardware/Software Co-Design, Department of Computer Science, University of Erlangen-Nuremberg, Germany
{stefan.wildermann, juergen.teich}@informatik.uni-erlangen.de
Abstract. Person tracking is a key requirement for modern service robots. But methods for robot vision have to fulfill several constraints: they have to be robust to errors evoked by noisy sensor data, they have to be able to work under real-world conditions, and they have to be fast and computationally inexpensive. In this paper we present an approach for tracking the position of a person in 3D based on a particle filter. In our framework, each particle represents a hypothesis for the 3D position, velocity and size of the person's head being tracked. Two cameras are used for the evaluation of the particles. The particles are weighted by projecting them onto the camera images and applying a color-based perception model. This model uses skin color cues for face tracking and color histograms for body tracking. In contrast to feature-based approaches, our system even works when the person is temporarily or partially occluded.
1 Introduction

New scenarios for service robots require sophisticated ways for users to control robots. Since such systems are designed to support humans in everyday life, special user interfaces, known as Human Machine Interfaces (HMIs), have to be developed that support the cooperation, interaction and communication between robot and user. A basic requirement for a proper HMI is the awareness of the person that is currently interacting with it. Methods for robot vision have to fulfill several constraints. First, they have to be robust to errors evoked by noisy sensor data, changing lighting conditions, etc. Second, they have to be able to work under real-world conditions with moving objects and persons. Moreover, adequate algorithms have to be computationally inexpensive and fast, since a robot has to simultaneously perform several tasks. Tracking algorithms can be divided into two classes. In bottom-up approaches the camera images are searched for significant features of persons such as the face. The detected features are then used for tracking. In contrast, top-down approaches make assumptions about the possible position of the person. These hypotheses are then evaluated by considering the current camera images. Usually, such approaches are faster than algorithms of the first class, since the images only have to be processed at the hypothetical positions. Service robots have to deal with highly dynamic environments. This implies the use of adaptive methods. In recent years, probabilistic approaches have proven to be appropriate for such tasks, and thus have become quite popular in robotics [1]. The key
idea of the presented system is to use a particle filter as the probabilistic framework for a top-down approach. Particles represent hypotheses for the person’s position in 3D coordinates. In order to evaluate these particles, they are projected onto the image planes. The probability for observing the person at the resulting image coordinates is then calculated according to a color-based perception model. This model uses skin color cues for face tracking and color histograms for body tracking. Furthermore, a method for an automatic initialization of the tracker is provided. The further outline of this paper is as follows. In Section 2 we will give an overview of related work. Then, in Section 3 probabilistic tracking is briefly described. Section 4 outlines the system proposed in this paper. The color-based perception model is presented in Section 5. Section 6 shows, how the tracker can be properly initialized. In Section 7 the results are presented that were achieved with the tracker. We conclude our paper with Section 8.
2 Related Work

Person tracking plays an important role for a variety of applications, including robot vision, HMI, augmented reality, virtual environments, and surveillance. Several approaches based on a single camera exist for this task. Bottom-up approaches for person tracking are based on feature extraction algorithms. The key idea is to find significant features of the human shape that can be easily tracked, such as, e.g., the eyes or the nose [2], but most algorithms rely on face detection. Color can provide an efficient visual cue for tracking algorithms. One of the problems faced here is that the color depends upon the lighting conditions of the environment. Therefore statistical approaches are often adopted for color-based tracking. Gaussian mixture models are used to model colors as probability distributions in [3] and [4]. Another approach is to use color histograms to represent and track image regions. In [5] color histogram matching is combined with particle filtering, thus improving speed and robustness. In recent years, a number of techniques have been developed that make use of two or more cameras to track persons in 3D. Usually, stereo vision works in a bottom-up approach. First, the tracked object has to be detected in the images that are simultaneously taken by at least two calibrated cameras. The object's 3D position is then calculated via stereo triangulation [6,7]. The main problems of bottom-up approaches are that the object has to be detected in all camera images and that the detected features have to belong to the same object, which is known as the correspondence problem. If this is not the case, the obtained 3D position is erroneous. In [8,9] particle filters are applied to track persons in 3D using a top-down approach. Each particle is evaluated for all available cameras by using cascaded classifiers as proposed by Viola and Jones [10]. The disadvantage of these approaches is that they fail to track an object when it is partially occluded, which is often the case in real-world applications. The approach described in this paper overcomes this drawback by using a color-based algorithm. We will show that our method is able to track people even when they are temporarily and partially occluded.
3 Bayesian Tracking

The key idea of probabilistic perception is to represent the internal view of the environment through probability distributions. This has many advantages, since the robot's vision can recover from errors as well as handle ambiguities and noisy sensor data. But the main advantage is that no single "best guess" is used. Rather, the robot is aware of the uncertainty of its perception. Probabilistic filters are based on the Bayes filter algorithm, whose basics with respect to probabilistic tracking will be briefly described here. For a probabilistic vision-based tracker the state of an object at time t is described by the vector x_t. All measurements of the tracked object since the initial time step are denoted by z_{1:t}. The robot's belief of the object's state at time step t is represented by the probability distribution p(x_t|z_{1:t}). This probability distribution can be rewritten using the Bayes rule

p(x_t|z_{1:t}) = p(z_t|x_t, z_{1:t−1}) p(x_t|z_{1:t−1}) / p(z_t|z_{1:t−1}).    (1)

We now define η = 1 / p(z_t|z_{1:t−1}) and simplify Equation 1:

p(x_t|z_{1:t}) = η p(z_t|x_t, z_{1:t−1}) p(x_t|z_{1:t−1})    (2)
             = η p(z_t|x_t) p(x_t|z_{1:t−1})    (3)   (Markov)
             = η p(z_t|x_t) ∫ p(x_t|x_{t−1}, z_{1:t−1}) p(x_{t−1}|z_{1:t−1}) dx_{t−1}    (4)   (Total Prob.)
             = η p(z_t|x_t) ∫ p(x_t|x_{t−1}) p(x_{t−1}|z_{1:t−1}) dx_{t−1}    (5)   (Markov)
In Equation 3 the Markov assumption is applied, which states that no past measurements z_{1:t−1} are needed to predict z_t if the current state x_t is known. In Equation 4 the probability density p(x_t|z_{1:t−1}) is rewritten by using the theorem of total probability. Finally, in Equation 5 the Markov assumption is exploited again, since the past measurements contain no information for predicting state x_t if the previous state x_{t−1} is known. The computation of the integral operation in Equation 5 is very expensive, so several methods were developed to estimate the posterior probability distribution efficiently, such as the Kalman filter or the Information filter. These filters represent the posterior as a Gaussian, which can be described by the first two moments, i.e., its mean and covariance. But most real-world problems are non-Gaussian. So non-parametric probabilistic filters, such as particle filters, are better suited to such tasks [11]. Different from parametric filters, the posterior probability distribution p(x_t|z_{1:t}) is not described in a functional form, but approximated by a finite number of weighted samples, with each sample representing a hypothesis of the object's state, and its weight reflecting the importance of this hypothesis. These samples are called particles and will be denoted as x_t^i with associated weights w_t^i. The particle filter algorithm is iterative. In each iteration, first n particles are sampled from the particle set X_t = {x_t^i, w_t^i}_{i=1...n} according to their weights and then propagated
by a motion model that represents the state propagation p(x_t|x_{t−1}). The resulting particles are then weighted according to a perception model p(z_t|x_t). A problem of particle filters is the degeneracy problem, meaning that after a few iterations all but one particle will have a negligible weight. To overcome this problem, generic particle filter algorithms measure the degeneracy of the algorithm with the effective sample size N_eff introduced in [12] and shown in Equation (6):

N_eff = 1 / ∑_{i=1}^{N} (w_t^i)².    (6)
If the effective sample size is too small, there is a resampling step at the end of the algorithm. The basic idea of resampling is to eliminate particles with small weights and to concentrate on particles with large weights. The purpose of this paper is to show how to use this probabilistic framework to track a person in 3D.
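The generic scheme can be summarized in a few lines of code. The sketch below is a standard sequential-importance-resampling step written by the editor; the motion model `propagate` and the perception model `likelihood` are assumed to be supplied by the application (Sections 4 and 5), and the sketch does not claim to reproduce the authors' exact implementation.

```python
import numpy as np

def effective_sample_size(weights):
    """N_eff of Eq. (6) for normalized weights."""
    return 1.0 / np.sum(weights ** 2)

def particle_filter_step(particles, weights, propagate, likelihood, n_eff_min):
    """One generic sequential-importance-resampling iteration."""
    # propagate every particle through the motion model p(x_t | x_{t-1})
    particles = np.array([propagate(x) for x in particles])
    # re-weight with the perception model p(z_t | x_t) and normalize
    weights = weights * np.array([likelihood(x) for x in particles])
    weights = weights / weights.sum()
    # resample only when the effective sample size indicates degeneracy
    if effective_sample_size(weights) < n_eff_min:
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```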
4 Top-Down Approach to 3D Tracking

The aim of the system presented here is to track the coordinates of a person in 3D. The position of the person's head p = (x, y, z)^T serves as the reference point for our person model. Furthermore, the model includes a sphere with radius r centered at p that represents the person's head, and the velocity of the person v = (\dot{x}, \dot{y}, \dot{z})^T. The particle set X_{t−1} can be seen as a set of weighted hypotheses, with each particle describing a possible state of the person. A particle is stored as the vector

x_t^i = (p_t^i, v_t^i, r_t^i)^T.    (7)
In each iteration, n particles are drawn from the particle set X_{t−1} and then propagated by means of a linear motion model as defined in Equation 8:

x_t^i = [ 1  Δt  0 ;  0  1  0 ;  0  0  1 ] x_{t−1}^i + N_t    (8)

with Δt being the time interval between two subsequent sensor measurements, and N_t a multivariate Gaussian random variable that models the system dynamics. New weights are derived by evaluating the propagated particles. For this purpose, the particle is projected onto the image plane of each camera. The pinhole camera model is used for this projection, as illustrated in Figure 1. Throughout this paper, the left camera will be the reference system. Given a point p in the left camera coordinate system, we can compute its position in the right one by

p' = R · p + t    (9)

where t is a translation vector, and R is a rotation matrix. The intrinsic parameters are the camera's internal parameters specifying the way a three-dimensional point is projected onto the camera's image plane. These parameters consist of the center of the camera image (u_0, v_0) and the focal length f, i.e., the distance between the camera's origin and its image plane.
Fig. 1. Top-down approach to 3D tracking. For evaluation, each particle x_t^i is projected onto the image planes of the left and the right cameras.
The image coordinates of the person's head are calculated according to Equation 10 for both cameras:

u_x = −(p_x · f) / p_z + u_0,    u_y = (p_y · f) / p_z + v_0.    (10)

Furthermore, the radii of the projected sphere are determined. Now that the image coordinates of the person's head u_{l,t}^i and u_{r,t}^i as well as the radii r_{l,t}^i and r_{r,t}^i are known, the probability distributions p(L_t|x_t^i) and p(R_t|x_t^i) for observing the person in the left and right image at the specified position x_t^i are computed for each camera according to a color-based perception model that is introduced in the next section. After applying the model to both cameras, the new weights of the particles are calculated according to

w_t^i = p(L_t, R_t|x_t^i) = p(L_t|x_t^i) · p(R_t|x_t^i).    (11)
Here, for reasons of simplification, probabilistic independence is assumed between random variables Lt and Rt .
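A compact sketch of this projection-and-weighting step is given below. It follows Eqs. (9)–(11); the two perception callables stand for the color-based model of Section 5, the handling of the projected sphere radius is omitted for brevity, and all names are the editor's illustrative choices.

```python
import numpy as np

def project(p, f, u0, v0):
    """Pinhole projection of Eq. (10) for a 3D point p in the camera frame."""
    return np.array([-p[0] * f / p[2] + u0, p[1] * f / p[2] + v0])

def particle_weight(p_head, R, t, f, u0, v0, perception_left, perception_right):
    """Weight of one particle according to Eq. (11): project the hypothesized
    head position into both cameras and multiply the perception probabilities."""
    u_left = project(p_head, f, u0, v0)           # left camera is the reference
    u_right = project(R @ p_head + t, f, u0, v0)  # transform with Eq. (9) first
    return perception_left(u_left) * perception_right(u_right)
```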
5 Color-Based Perception Model

The perception model proposed in this paper uses two kinds of information: skin color is used as a global feature to track the person's face, and a histogram describing the color distribution of the upper part of the person's body is used as a local feature. This histogram not only specifies a distinct feature of the person, but is also independent of the person's current orientation as long as parts of the person's front are visible to the camera. Nonetheless, it has to be determined before tracking the person. A method to automatically initialize the tracker is described in Section 6.
When doing color-based image processing, an appropriate color space has to be selected. By default, today's cameras provide images in the RGB color space. But RGB has several disadvantages; for example, color information is not separated from luminance information. Therefore, RGB images are highly sensitive to varying lighting conditions and thus not adequate in this context. In the following, the HSV color space has been chosen, which represents color by the three components hue, saturation and value (sometimes also called intensity). Hue defines the actual color type (such as red, blue, green), saturation measures the intensity or purity of this color, and the value component expresses the brightness.

5.1 Color Histograms

Color histograms represent the color distributions of color samples C = {c_i}_{i=1,...,n_C} in a discrete, non-parametric form. To achieve a certain tolerance for illumination changes, we separate color information (hue and saturation) from luminance information (value). The HS color space is partitioned into a number of equally sized bins with N_h bins for the hue component and N_s bins for the saturation component. These bins are populated using samples with sufficient color information, i.e., saturation and value larger than two predefined thresholds, see [5]. We use 0.1 and 0.2, respectively. Additionally, N_v bins are used for the value component of the remaining samples. The complete histogram consists of N = N_h · N_s + N_v bins. Each bin stores the number of times a color within its boundaries occurred in the color samples C. Given a color c_i, the index of its associated bin is obtained by the function b(c_i) ∈ {1, ..., N}. The histogram itself is defined as

h = {q_i}_{i=1,...,N}    (12)

with

q_i = α ∑_{j=1}^{n_C} δ(i, b(c_j))    (13)

where α is a scaling factor, and δ(i, b(c_j)) is the Kronecker delta function:

δ(i, j) = 1 if i = j,  0 else.    (14)
5.2 Skin Color Model

Skin color classification is one of the most widely used approaches for fast and easy identification of humans in color images. An overview of several methods for skin color modeling is given in [13]. In this paper, we want to model the distribution of skin color in the HSV space. Since only chromatic information is important for this task, the HS color space is used and illumination information, i.e., the value component, is not considered. As a representation of this distribution we use a color histogram. The histogram h_skin is populated with samples of skin color from several training images. Each bin stores the number of times a color within the bin's boundaries occurred in the training samples, according to Equation 13.
The scaling factor α is set appropriately to ensure that ∑_{i=1}^{N_skin} q_{skin,i} = 1. So the histogram can be used as the discrete probability distribution p(S_t|x_t) for observing skin color at position x_t:

p(S_t|x_t) = q_{skin,j}  with  j = b_{skin}(I(u_t))    (15)

where u_t is the projected coordinate of x_t in image I.
5.3 Histogram Tracking
The skin color model considers single pixels only. In addition, we use a color distribution model for image regions, i.e., a color histogram that represents the color distribution of a certain area of the person to be tracked. Here, the upper part of the person's body is used as the target model. Given the hypothetical person state x = (p, v, r)^T as expressed by a particle, we use a quadratic image region R(x) with center c(x) and side length l(x):

c(x) = (p_x, p_y + 2r)^T,    l(x) = 2r    (16)
where p is the image coordinate and r is the radius of the projected face for the current camera. A histogram with N_{body,h} bins for hue, N_{body,s} bins for saturation, and N_{body,v} bins for the value component is used, N_body = N_{body,h} · N_{body,s} + N_{body,v} bins overall. A histogram generated for the quadratic image region R(x) is denoted as h_body(x). Again, the histogram is normalized by the scaling factor α to ensure that ∑_{i=1}^{N_body} q_{body,i}(x) = 1. To measure the similarity between the target model and a candidate image region R(x) for the person's body position, the Bhattacharyya distance is applied. The target histogram is denoted as h*_body = {q*_i} and has to be determined before the tracking process. An example of how to do this is given in Section 6. The distance between the target and a hypothetical body histogram is calculated as

D(h*_body, h_body(x)) = sqrt( 1 − ∑_{i=1}^{N_body} sqrt( q*_i q_{body,i}(x) ) ).    (17)

We now define the probability distribution p(B_t|x_t) as

p(B_t|x_t) = (1 / √(2πσ²)) exp( −(1/2) [ D(h*_body, h_body(x_t)) / σ ]² ).    (18)
5.4 Particle Evaluation

Weights are assigned to each particle by evaluating the particle for each camera. The image coordinates u_{l,t}^i and u_{r,t}^i as well as the face radii r_{l,t}^i and r_{r,t}^i are calculated by projecting the particle x_t^i onto the image planes of the left and the right camera.
Fig. 2. Two images that were simultaneously taken by the left and right camera to illustrate the person model. The upper rectangle shows the person’s face as it was detected in the user detection phase. The lower rectangle shows the part of the body that is used for tracking.
Afterwards, the skin color probabilities p(S_{l,t}|x_t^i) and p(S_{r,t}|x_t^i) are computed according to Equation 15 for both cameras. Furthermore, using Equation 18 the body color probabilities p(B_{l,t}|x_t^i) and p(B_{r,t}|x_t^i) can be calculated. Again, for simplification, probabilistic independence between the random variables for skin color S_t and body color B_t is assumed. The resulting perception probability of each sample is given by

p(z_{l,t}|x_t^i) = p(S_{l,t}|x_t^i) · p(B_{l,t}|x_t^i).    (19)
The probability for the right camera is obtained analogously.
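The color cues of Sections 5.1–5.3 can be sketched as follows. The binning, the Bhattacharyya distance of Eq. (17) and the Gaussian likelihood of Eq. (18) follow the text; the bin counts are the defaults named in the paper's experiments, while the σ default and all function names are editorial placeholders.

```python
import numpy as np

def hs_histogram(pixels_hsv, n_h=30, n_s=32, n_v=10, s_min=0.1, v_min=0.2):
    """Color histogram of Section 5.1: N_h x N_s chromatic bins for pixels with
    sufficient saturation/value, plus N_v value-only bins for the remaining
    pixels.  `pixels_hsv` is an (n, 3) array with h, s, v scaled to [0, 1)."""
    h, s, v = pixels_hsv[:, 0], pixels_hsv[:, 1], pixels_hsv[:, 2]
    chromatic = (s > s_min) & (v > v_min)
    hist = np.zeros(n_h * n_s + n_v)
    hb = np.minimum((h[chromatic] * n_h).astype(int), n_h - 1)
    sb = np.minimum((s[chromatic] * n_s).astype(int), n_s - 1)
    np.add.at(hist, hb * n_s + sb, 1.0)
    vb = np.minimum((v[~chromatic] * n_v).astype(int), n_v - 1)
    np.add.at(hist, n_h * n_s + vb, 1.0)
    total = hist.sum()
    return hist / total if total > 0 else hist     # normalized as in Eq. (13)

def bhattacharyya_distance(q_target, q_candidate):
    """Distance of Eq. (17) between two normalized histograms."""
    return np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(q_target * q_candidate))))

def body_likelihood(q_target, q_candidate, sigma=0.1):
    """Gaussian likelihood of Eq. (18); sigma is a free parameter here."""
    d = bhattacharyya_distance(q_target, q_candidate)
    return np.exp(-0.5 * (d / sigma) ** 2) / np.sqrt(2.0 * np.pi * sigma ** 2)
```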
6 User Detection

The problem of the approach described so far is that the histogram used as the target body model has to be initialized properly. We therefore propose a system working in two phases. In the user detection phase, the camera images are searched for the person that tries to interact with the system (this interaction can be started via the speech interface). When a person has been detected in both camera images, a histogram of the upper part of the person's body is initialized for each camera using the current image data. After the initialization the system switches into the user tracking phase, where the histogram is used as the target model to track the person via particle filtering. For user detection we have chosen a face recognizer based on the popular algorithm of Viola and Jones [10], an implementation of which can be found in the OpenCV library [14]. Every time a user starts an interaction with the robot, the camera images are searched for frontal faces. We reduce the computational costs of face detection by first calculating regions of interest (ROIs) that limit the search space based on skin color. To this end, each image is segmented into skin color regions and non-skin color regions. To achieve this, the probability p(S|u) of observing skin color is computed for each pixel u according to
Equation 15. Those pixels whose probabilities are below a predefined threshold are set to 0; all other ones are set to 1. We use a threshold of 0.006. After that, skin clusters are calculated by region growing, merging neighboring skin color pixels. For every cluster its enclosing bounding box is calculated. In a final step, boxes intersecting each other are further merged. We end up with distinct boxes that are used as the ROIs for the face detection algorithm. The face detector provides the coordinates u_l and u_r of the detected face as well as the side lengths s_l and s_r in the left and right image. To avoid the correspondence problem, i.e., determining which faces detected in both images belong to the same person, we assume that only one person is visible during the user detection phase. With the coordinates and side length of the candidate face, we can calculate the position of the body area within the camera image according to Equation 16. The histograms of the regions in the left and the right camera image are calculated and used as the target body models h*_{l,body} and h*_{r,body} throughout the tracking phase. For the initialization of the particles, the 3D position of the face is calculated via stereo triangulation. This is done by re-projecting the image points u_l and u_r into 3D, resulting in lines l_l and l_r with

l_l(λ) = λ ( (u_{l,x} − u_0)/f,  (u_{l,y} − v_0)/f,  1 )^T    (20)

and

l_r(λ) = λ · R^{−1} ( (u_{r,x} − u_0)/f,  (u_{r,y} − v_0)/f,  1 )^T − t    (21)
where R and t are the extrinsic parameters of the right camera. The 3D coordinates can now be determined by calculating the intersection between both lines. Since the lines l_l and l_r may not intersect due to measurement errors, the problem is solved using a least-squares method that determines the λ = (λ_l, λ_r)^T that minimizes

ε = || l_l(λ_l) − l_r(λ_r) ||².    (22)
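The minimization of Eq. (22) is a small linear least-squares problem, since both rays are linear in their parameters. A possible sketch is given below; it returns l_l(λ_l) as the 3D estimate, matching the choice x_f = l_l(λ_l) used later in the text, and all names are illustrative.

```python
import numpy as np

def triangulate_face(u_left, u_right, f, u0, v0, R, t):
    """Least-squares intersection of the viewing rays of Eqs. (20)-(22)."""
    a = np.array([(u_left[0] - u0) / f, (u_left[1] - v0) / f, 1.0])    # Eq. (20)
    b = np.array([(u_right[0] - u0) / f, (u_right[1] - v0) / f, 1.0])
    c = np.linalg.inv(R) @ b                                           # Eq. (21)
    # minimize || lam_l * a - (lam_r * c - t) ||^2 over (lam_l, lam_r)
    A = np.column_stack((a, -c))
    lam, *_ = np.linalg.lstsq(A, -t, rcond=None)
    return lam[0] * a            # x_f = l_l(lambda_l)
```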
Fig. 3. Trajectories produced by stereo triangulation (dashed lines) and proposed approach (solid lines). Images associated to marked positions are shown in Figure 4.
The calculated 3D position x_f = l_l(λ_l) of the face is now used to initialize the particles of our tracker according to

x_0^i = (p_0^i, v_0^i, r_0^i)^T = (x_f, 0, r_0)^T + N_σ    (23)

where N_σ is a multivariate Gaussian random variable with a mean value of 0 and variance σ. Furthermore, the weight of each particle is calculated as

w_0^i = (1 / √(2πσ²)) exp( −(1/2) [ d(x_f, x_0^i) / σ ]² )    (24)

where d(x_f, x_0^i) is the distance between both vectors.
7 Experiments

The proposed person tracker has been implemented in C++ and tested on a laptop with a 1.99 GHz Pentium 4 processor and 1 GB memory. Two FireWire web cameras with a maximum frame rate of 15 fps were mounted with a distance of 20 cm between each other. The cameras were calibrated according to the method described in [15]. Training of the skin color classifier was done with 80 training images from the Internet and databases showing persons from different ethnic groups. All samples were converted into the HS color space and sampled into a histogram with 30 × 32 bins. No samples of persons appearing in the experiments were part of the training set. Histograms with 30 × 32 × 10 bins were used for body tracking. The variance in Equation 18 was set to 0.12. The Gaussian random variables used in Equation 8 were set to 0.96 m for position, 1.44 m/s for velocity, and 0.48 m for the scale error of the head radius. The particle filter itself worked with 400 particles. Every time the effective number of particles N_eff was smaller than 100, resampling was performed. The system has been applied to a variety of scenarios. In each test run, the tracker was automatically initialized as described in Section 6. The system worked with 12 to 15 fps, thus achieving real-time capabilities. Several test runs were used to compare the proposed method with a classical bottom-up method where face detection was performed in every image and the 3D position was determined via stereo triangulation. The resulting position was compared to the mean state of the particle set of our approach, which is calculated as

E[X_t] = ∑_{i=1}^{n} w_t^i · x_t^i.    (25)

Table 1. Comparison of bottom-up and proposed top-down approach

          | Mean distance [cm] | Standard deviation [cm]
XZ plane  | 12.39              | 8.81
Y-axis    | 1.32               | 0.92
(a) Frame 12    (b) Frame 50    (c) Frame 54    (d) Frame 56
Fig. 4. Sequence to compare bottom-up tracking (solid lines) with the approach described in this paper (dashed lines). Below each image pair the room is shown in bird's-eye view with marks of the head position calculated bottom-up (red circle), the particles (marked as x) and the mean state of the top-down approach (black circle)
Results from one training sequence can be seen in Figure 4 and are summarized in Table 1. Figure 3 shows the trajectories generated by both methods. The path produced by the bottom-up approach has many leaps, since face detection is erroneous and no additional smoothing filter, such as a Kalman filter, is applied. In Figure 3(b) the data from stereo triangulation was additionally approximated with a Bezier curve, resulting in a curve comparable to the one produced by our technique. Figure 5 shows a second test sequence in which the user's face is temporarily and partially occluded. As can be seen, the system manages to track the person if the face is occluded or if the person is looking in a different direction.
(a) Frame 26    (b) Frame 34    (c) Frame 56    (d) Frame 66    (e) Frame 100    (f) Frame 105
Fig. 5. Test sequence that shows the robustness to occlusion
8 Conclusion

In this paper, we have presented a method for tracking a person in 3D with two cameras. This tracker was designed for robots, so different constraints had to be considered. Since robots work in highly dynamic environments, a probabilistic approach has been chosen. Furthermore, a problem of real-world trackers is that users are often temporarily or partially occluded. That is why a top-down approach based on a particle filter was selected. In this framework, particles represent hypotheses for the person's current state in the world coordinate system. The particles are projected onto the image planes of both cameras, and then evaluated by a color-based perception model. Our model uses skin color cues for face tracking and color histograms for body tracking. We also provided a method to initialize the tracker automatically. We showed that our system tracks a person in real time. The system also manages to track a person even if it is partially occluded, or if the person is looking away from the cameras. In the future, we plan to use the proposed system on a mobile robot. For this purpose, we want to determine the robot's motion via image-based ego-motion estimation.
References
1. Thrun, S.: Probabilistic Algorithms in Robotics. AI Magazine 21(4), 93–109 (2000)
2. Gorodnichy, D.O.: On Importance of Nose for Face Tracking. In: Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, p. 188. IEEE Computer Society, Washington (2002)
3. Raja, Y., McKenna, S., Gong, S.: Segmentation and Tracking using Colour Mixture Models. In: Proceedings of the Third Asian Conference on Computer Vision, pp. 607–614 (1998)
4. McKenna, S., Raja, Y., Gong, S.: Tracking Colour Objects using Adaptive Mixture Models. IVC 17(3/4), 225–231 (1999)
5. Pérez, P., et al.: Color-Based Probabilistic Tracking. In: Heyden, A., et al. (eds.) ECCV 2002. LNCS, vol. 2350, pp. 661–675. Springer, Heidelberg (2002)
6. Focken, D., Stiefelhagen, R.: Towards Vision-Based 3-D People Tracking in a Smart Room. In: Proceedings of the 4th IEEE International Conference on Multimodal Interfaces (ICMI 2002), pp. 400–405 (2002)
7. Gorodnichy, D., Malik, S., Roth, G.: Affordable 3D Face Tracking using Projective Vision. In: Proceedings of the International Conference on Vision Interface (VI 2002), pp. 383–390 (2002)
8. Kobayashi, Y., et al.: 3D Head Tracking using the Particle Filter with Cascaded Classifiers. In: BMVC 2006 - The 17th British Machine Vision Conference, pp. I:37 (2006)
9. Nickel, K., et al.: A Joint Particle Filter for Audio-visual Speaker Tracking. In: Proceedings of the 7th International Conference on Multimodal Interfaces, pp. 61–68. ACM Press, New York (2005)
10. Viola, P., Jones, M.: Rapid Object Detection using a Boosted Cascade of Simple Features. In: Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (2001)
11. Isard, M., Blake, A.: CONDENSATION – Conditional Density Propagation for Visual Tracking. International Journal of Computer Vision 29(1), 5–28 (1998)
340
S. Wildermann and J. Teich
12. Liu, J., Chen, R.: Sequential Monte Carlo Methods for Dynamic Systems. Journal of the American Statistical Association 93(443), 1032–1044 (1998) 13. Kakumanu, P., Makrogiannis, S., Bourbakis, N.: A Survey of Skin-Color Modeling and Detection Methods. Pattern Recognition 40(3), 1106–1122 (2007) 14. OpenCV-Homepage, http://sourceforge.net/projects/opencvlibrary/ 15. Zhang, Z.: Flexible Camera Calibration by Viewing a Plane from Unknown Orientations. In: Proceedings of the International Conference on Computer Vision, pp. 666–673 (1999)
Terrain-Based Sensor Selection for Autonomous Trail Following
Christopher Rasmussen and Donald Scott
Dept. Computer & Information Sciences, University of Delaware, USA
[email protected],
[email protected]
Abstract. We introduce the problem of autonomous trail following without waypoints and present a vision- and ladar-based system which keeps to continuous hiking and mountain biking trails of relatively low human difficulty. Using a RANSAC-based analysis of ladar scans, trail-bordering terrain is classified as belonging to one of several major types: flat terrain, which exhibits low height contrast between on- and off-trail regions; thickly-vegetated terrain, which has corridor-like structure; and forested terrain, which has sparse obstacles and generally lower visual contrast. An adaptive color segmentation method for flat trail terrain and a height-based corridor-following method for thick terrain are detailed. Results are given for a number of autonomous runs as well as analysis of logged data, and ongoing work on forested terrain is discussed.
1 Introduction
Navigationally-useful linear features along the ground, or trails, are ubiquitous in manmade and natural outdoor environments. Spanning carefully engineered highways to rough-cut hiking tracks to above-ground pipelines to rivers and canals, they "show the way" to unmanned ground or aerial vehicles that can recognize them. Built trails also typically "smooth the way," whether by paving, grading steep slopes, or removing obstacles. An ability to recognize and follow trail subtypes amenable to wheeled vehicles such as dirt, gravel, and other marginal roads, as well as true hiking trails, is a vital skill for outdoor mobile robot navigation. The alternative of cross-country route-finding over desert, forested, or mountainous terrain is fraught with difficulties [1,2,3,4,5]. In contrast, a trail is a visibly-marked, precomputed plan that can be followed to lessen danger, minimize energy expenditure, save computation time, and ease localization. The routes given to competitors in the 2004 and 2005 DARPA Grand Challenge Events (GCE) generally followed trails for this very reason. However, as they were densely "marked" with human-chosen GPS waypoints, the burden of sensor-based route-finding was lessened considerably for the robots. In this paper we present a component of a larger project tackling the problem of enabling UGVs to robustly and quickly follow a wide variety of trails without
Fig. 1. Our robot autonomously following flat and thickly vegetated trail segments (see text for definitions)
any waypoints. The subtasks necessary for robust trail-following may be divided into three categories:

Trail keeping. Analogous to the sense of "lane keeping" from autonomous road following, keeping to a trail involves determining the shape and boundaries of a non-branching, non-terminating trail that the robot is already on, and controlling the platform to move along it. For discontinuous trails marked by blazes, footprints, or other sequences of discrete features, the underlying task is successive guided search rather than segmentation.

Trail negotiation. Because trails cannot be assumed to be homogeneously-surfaced free space (as with paved roads lacking other cars), trail negotiation adds vigilance for and responses to in-trail hazards. It includes adjusting direction to skirt solid obstacles such as rocks and fallen logs, as well as slowing, shifting, and other control policy modifications to suit different trail surface materials (e.g., sand, mud, ice) and terrain types (e.g., flat, bumpy, steep slope, etc.).

Trail finding. Finding entails identifying trail splits and dead-ends that violate the assumptions of the trail keeper, as well as searching for trails from a general starting position, whether using onboard sensors or a priori map and aerial data.

The trail-following component reported on here is a joint image and ladar algorithm for keeping to hiking/mountain biking trails which are continuous and relatively free of in-tread hazards. Our platform is a Segway RMP 400 (shown in Figure 1) with a single rigidly-mounted SICK LMS-291 ladar and a color video camera. The robot is controlled via Player client/server software [6]. The types of terrain and vegetation surrounding a hiking trail have a major bearing on the perceptual difficulty of the trail-following task. For simplicity, in this work we assume that trail-bordering terrain types fall into three common categories (examples are shown in Figure 2):
Fig. 2. Examples of main terrain types bordering trail segments considered in this paper (flat, thick, and forested). Synchronized ladar scans are shown at right in ladar coordinates. Each arc represents a 1 m range difference; radial lines are 15 degs. apart.
Flat. Off-trail regions such as grassy fields that have generally poor height contrast with the trail tread, making structural cues from ladar less reliable than appearance cues from the camera. Depending on the season, color contrast between the two regions may be very strong (e.g., in summer) or relatively weak (winter).

Thick. Trail segments bordered by dense vegetation such as bushes, trees, or tall grass which form virtual corridors that are highly amenable to ladar cues. Image-based segmentation is often complicated by shadow and glare issues.

Forested. Areas under canopy with more widely-spaced obstacles like tree trunks. Here the correlation between obstacle distribution and trail direction is typically weak and visual contrast between on- and off-trail regions is also often poor, as they may both be largely dirt or leaf-covered.

One of our key contributions is a method for recognizing which of the above categories a trail segment belongs to in order to discretely switch to a different vision-based or ladar-based algorithm better suited to that terrain. In the next several sections, we describe our system's methods for terrain type classification; an adaptive, vision-based trail segmentation technique for flat terrain; and a ladar-based approach to corridor following in thick terrain. Our results show the robot successfully operating autonomously using these techniques in an integrated fashion. In the conclusion we discuss the next steps necessary to extend the system to further terrain types and increase performance and safety.
Fig. 3. Ladar-based terrain classification. Left and right edge estimates on a thickly-vegetated trail section are shown in green and blue, respectively. The yellow line is the gap direction estimate (explained in Section 4). Saturation of the plotted ladar points is proportional to absolute height in robot coordinates.
2 Terrain Type Classification
Our robot’s ladar is mounted about 0.5 meters off the ground and pitched down 10 degrees. Thus, on planar ground the ladar beam sweeps across the ground nearly s = 3 m in front of the robot. The scans are set to a maximum range of about 8 m at 0.5 degree resolution over an angular range of 180 degrees. The general strategy behind mounting the ladar in this manner is to obtain data about tall obstacles on the robot’s flanks as well as information about the profile of the ground just ahead, including smaller bumps and negative hazards. Both Stanford and Carnegie-Mellon (CMU), which collectively accounted for the top three robots in the 2005 GCE, relied exclusively on multiple ladars for steering decisions (Stanford used vision as an input to speed control) [7,8]. However, we assert that ladar alone is not sufficient for successful trail following in each kind of terrain defined above. Stanford’s and CMU’s approaches worked because the provided GPS waypoints gave complete navigational information, making ladar only necessary for local obstacle avoidance. With no waypoints for guidance, in some areas—particularly flat terrain—ladar is virtually useless to keep the robot on track and appearance cues must be exploited for navigation. As can be seen in the “flat” example of Figure 2, the trail is very difficult to discern in one ladar scan (especially with a large rut to its left) but obvious in the camera image. Although ladar may be inadequate for navigation in certain kinds of terrain, it can be very informative about which terrain type the robot is on. The basic idea is that the gross shape of a single scan is very different depending on which terrain type is present. On level, planar ground, the scan is a horizontal line noisified by grass, ruts, and other features in flat terrain environments. With the range maximum we use for the ladar, nonlinearities due to curvature of the underlying ground are negligible. If the robot is on a sideslope (with the ground higher on one side and lower on the other), then the line is simply diagonal with
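As a quick plausibility check of these numbers (our own calculation, assuming a perfectly flat ground plane and treating the ladar as a single ray from mounting height h at the stated downward pitch), the sweep distance is

$$ s \approx \frac{h}{\tan(\text{pitch})} = \frac{0.5\,\mathrm{m}}{\tan 10^{\circ}} \approx 2.8\,\mathrm{m}, $$

which is consistent with the stated value of roughly 3 m.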
a slight jog in the middle corresponding to the level cross-section of trail. If the robot is traveling along the top of a slight ridge or the bottom of a slight valley, the scan shape will appear as a wide ∨ or ∧, respectively. Ladar scans taken in thickly-vegetated segments with their corridor-like structure, on the other hand, are characterized by two more nearly parallel linear clusters (as in the "thick" example in Figure 2). These are the "walls" of the corridor of foliage that the robot should follow. The difference between this configuration and the ∧ shape of some flat terrain is the acuteness of the angle between the lines. In other words, for flat and thick terrain situations a reasonable fraction of the ladar data should be well-described by one or two lines. Forested terrain segments are the leftover category, characterized by jumbled ladar scans lacking enough coherent structure to fit a line or lines with any significant support.

Based on these observations, our method for categorizing ladar scan shapes into terrain types is as follows. Let the x axis in robot coordinates be defined by the robot's front wheel axle, with +x to the right and −x to the left of the robot center. On each new scan, two RANSAC robust line-fitting procedures [9] are performed in succession. The first is a nominal left edge fit, on which a constraint is enforced that the estimated line must intersect the x axis at a point xL < 0. If no line is found with over some fraction f of inliers (we used 20% here) in a reasonable number of samples, the left edge is considered not found. The second RANSAC procedure is a right edge fit run on the set of original ladar scan points minus the left edge inliers and subject to the constraint that it intersect the x axis at a point xR > 0. If f (with respect to the number of original points) is not exceeded by any line, then the right edge is not found. Note that |xL| or |xR| can be very large, allowing a virtually horizontal line fit. After running these two fitting operations, if neither a left edge nor a right edge is found, the current terrain is considered forested. If only one edge is found, the terrain is considered flat (we currently do not account for the possibility of thick foliage on only one side of the trail, which would permit simple "wall-following", but this could easily be incorporated into the classification logic). If both edges are found, the terrain is considered flat if xR − xL > τ and thick otherwise (τ = 4.0 m was used for the runs in this paper). Somewhat surprisingly, running RANSAC independently on each successive frame yields fairly consistent estimates of the left and right edge lines. We smooth what noise there is with temporal filtering of the terrain classification itself via majority voting over a recent-history buffer. An example of left and right edge estimates on a thick terrain scene that would be especially difficult for a visual method is shown in Figure 3.
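The decision logic can be summarized in a short sketch (our illustration, not the authors' code; the RANSAC fitting itself is abstracted into a caller-supplied function, and the per-scan label would still be smoothed by the majority vote described above):

```python
def classify_terrain(scan_points, ransac_line_fit, f=0.20, tau=4.0):
    """Classify one ladar scan as 'flat', 'thick' or 'forested'.

    scan_points:     list of (x, y) returns in robot coordinates
    ransac_line_fit: callable (points, side, min_inliers) -> (x_intercept,
                     inlier_indices) or None; 'side' enforces x_L < 0 or x_R > 0
    f:               minimum inlier fraction w.r.t. the original point count
    tau:             corridor-width threshold in meters
    """
    n = len(scan_points)
    min_inliers = int(f * n)

    # Nominal left edge: the fitted line must cross the x axis at x_L < 0.
    left = ransac_line_fit(scan_points, "left", min_inliers)

    # Right edge: fit on the points minus the left-edge inliers, x_R > 0.
    if left is not None:
        x_left, left_inliers = left
        rest = [p for i, p in enumerate(scan_points) if i not in left_inliers]
    else:
        rest = scan_points
    right = ransac_line_fit(rest, "right", min_inliers)

    if left is None and right is None:
        return "forested"
    if left is not None and right is not None:
        x_right, _ = right
        return "flat" if x_right - x_left > tau else "thick"
    return "flat"  # exactly one edge found
```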
3 Flat Terrain: Image-Based Trail Segmentation
On flat terrain the preferred sensory modality is camera imagery, since on- vs. off-trail height contrasts are mostly too small for reliable ladar discrimination. Though the trails we are considering are engineered for walking or bike-riding
people only and thus are too small for road-legal cars or trucks, an obvious approach is to consider the task as just a scaled-down version of road following. Vision-based road following has been thoroughly studied on paved and/or painted roads with sharp edges [10,11,12,13], but hiking trails rarely have these. However, region-based methods using color or texture measured over local neighborhoods to measure road vs. background probabilities work well when there is a good contrast for the cue chosen, regardless of the raggedness of the road border [14,13,15,16,17]. The most notable recent work in this area is Stanford's 2005 GCE road segmentation method [18]. Specifically, their approach relies on a driveability map (depicted in Figure 4(a)) accumulated over many timesteps and integrated from multiple ladars. A quadrilateral is fit to the largest free region in front of the vehicle in the latest map, and this is projected into the image as a training region of positive road pixel instances (Figure 4(b)). Road appearance is modeled with a mixture of k Gaussians in RGB space (with temporal adaptation), and the method does not require negative examples: new pixels are classified as road or non-road solely based on a thresholded distance to the current color model. This follows the framework of [19]'s original image-based obstacle avoidance algorithm, which used a fixed free region and a histogram-based color model. For several reasons, neither [18] nor [19] is directly applicable to trail following on flat terrain. First, the ladar is not sufficient for selecting an image region belonging to the trail alone because of the lack of height contrast. Secondly, using a fixed positive example region is problematic because the trail is very narrow in the camera field of view compared to a road and can curve relatively sharply. Even in thickly-vegetated terrain, tall foliage does not always grow to the edge of the trail proper, as Figure 2 shows. In both terrain types, without very precise adaptive positioning this approach would lead to off-trail pixels frequently polluting the trail color model. The very narrowness of the trail suggests a reverse approach: model the off-trail region or background with an adaptive sampling procedure and segment the trail as the region most dissimilar to it. In general the background is less visually homogeneous than the trail region and thus more difficult to model, but in our testing area it is not overly so. Under the assumption that the robot is initially on the trail and oriented along it, the background color model is initialized with narrow rectangular reference areas on the left and right sides of the image as shown in Figure 2. These areas extend from the bottom of the image to a nominal horizon line which excludes sky pixels and more distant landscape features in order to reduce background appearance variability. Following the method of [19], a color model of the off-trail region is constructed from the RGB values of the pixels in the reference areas for each new image as a single 3-D histogram with a few bins per channel. The entire image below the horizon is then classified as on- or off-trail by measuring each pixel's color similarity to the background model and thresholding. In particular, if a particular pixel's histogram bin value is below 25% of the maximum histogram bin value,
Fig. 4. (a,b) Stanford's approach to obtaining road labels (from [18]). (a) Overhead map of obstacle space and free space from multiple ladars integrated over time. Note the quadrilateral fit to free space in front of the vehicle; (b) Quadrilateral projected to image, with result of road segmentation overlaid. (c) Our histogram color classification method for trail segmentation example (frame from one autonomous run). Left and right reference areas are outlined by yellow rectangles; these are the regions from which the non-trail appearance model is built. Pixels classified as on-trail (after density filtering) are highlighted in green. The lateral median of the segmented trail region is indicated by the vertical orange line, which the robot steers to center. The isolated green blobs are gray rocks misclassified as trail.
it is considered a trail pixel. To reduce noisiness in this initial segmentation, this image is filtered using a 3 × 3 majority filter. The resulting binary image contains the pixels we consider most likely to belong to the trail (pixels shaded green in Figure 2). Using the image output from the pixel majority filter the trail image center (the orange line in Figure 2) is estimated as the median x value of the filtered trail pixels. This is converted to a trail direction θ with the camera calibration; steering follows a proportional control law. The reference areas are adapted to exclude the trail region based on a robust estimate of the trail width using the maximum absolute deviation of the lower
section of filtered trail pixels plus a safety margin. If one reference area is squeezed too thin by the trail moving to one side, it is not used. If the trail region is too small or too large, the reference areas are reset to their defaults.
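A condensed version of the segmentation step is sketched below (illustrative NumPy code under our own assumptions; the 3 x 3 majority filter, the adaptive resizing of the reference areas, and the calibration-based conversion of the trail center column to a steering angle are left out):

```python
import numpy as np

def segment_trail(img, left_ref, right_ref, horizon, bins=16, rel_thresh=0.25):
    """Classify pixels below the horizon as trail / non-trail.

    img:                 H x W x 3 uint8 RGB image
    left_ref, right_ref: index tuples selecting the background reference areas
    horizon:             row index of the fixed horizon line
    """
    below = img[horizon:]

    # Off-trail (background) color model: 3D RGB histogram of the reference areas.
    ref_pixels = np.vstack([img[left_ref].reshape(-1, 3),
                            img[right_ref].reshape(-1, 3)])
    hist, _ = np.histogramdd(ref_pixels, bins=(bins,) * 3, range=((0, 256),) * 3)

    # A pixel whose histogram bin count is low is unlike the background -> trail.
    idx = (below.reshape(-1, 3) // (256 // bins)).astype(int)
    counts = hist[idx[:, 0], idx[:, 1], idx[:, 2]].reshape(below.shape[:2])
    trail_mask = counts < rel_thresh * hist.max()

    # Trail image center: median column of the trail pixels (steering target).
    cols = np.where(trail_mask)[1]
    center_col = int(np.median(cols)) if cols.size else img.shape[1] // 2
    return trail_mask, center_col
```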
4 Thick Terrain: Ladar-Based Corridor Following
When the terrain classifier judges the current segment to be thickly-vegetated, purely ladar-based trail following is a very robust option. An obvious cue for which direction θ to steer the robot comes from the already-computed left and right RANSAC edge lines, as their intersection would seem to be a straightforward indicator of the trail corridor direction. However, because of the "nearsightedness" of our ladar configuration, the edge estimates can vary somewhat due to locally nonlinear structures such as bulges or gaps in foliage lining the trail. While these variations are generally not significant enough to confuse the terrain classifier, they contribute to undesirable instability in intersection-derived trail direction estimates. A more reliable alternative is to pick the θ within an angular range in front of the robot associated with the farthest obstacle at distance r. Similar in spirit to vector field histograms [20], the robot avoids nearby obstacles comprising the walls on either side by seeking the empty gap between them. To filter out stray ladar beams which sometimes penetrate deep into trail-adjacent foliage, we calculate obstacle distances for candidate directions using the nearest return within a sliding robot-width window. A simple particle filter [21] is used to temporally smooth θ and r estimates. A proportional control law is then used to generate a steering command. The farthest-obstacle distance r is used to modulate robot speed. If the robot is traveling over planar ground, r should roughly equal the sweep distance s (defined in Section 2 above). Thus, deviations of r from s indicate that the robot is currently pitching up or down or approaching a dip or bump. In any of these cases, the robot is assumed to be on bumpy ground and the speed is lowered proportionally to |r − s| (see [22] for a machine learning approach to this problem at much higher speeds).
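The gap-selection and speed rules can be sketched as follows (our simplified version; the particle-filter smoothing of θ and r and the exact window geometry are omitted, and the numeric constants are placeholders):

```python
import numpy as np

def gap_direction(angles, ranges, robot_width=0.6, fov=np.deg2rad(60), max_range=8.0):
    """Pick the candidate direction theta whose nearest obstacle is farthest away.

    angles, ranges: one ladar scan (radians, meters) in robot coordinates
    """
    best_r, best_theta = -1.0, 0.0
    for theta in np.linspace(-fov / 2, fov / 2, 61):
        # Nearest return inside a robot-width corridor around direction theta.
        lateral = ranges * np.sin(angles - theta)
        ahead = ranges * np.cos(angles - theta)
        in_corridor = (np.abs(lateral) < robot_width / 2) & (ahead > 0)
        r = ahead[in_corridor].min() if in_corridor.any() else max_range
        if r > best_r:
            best_r, best_theta = r, theta
    return best_theta, best_r

def speed_command(r, sweep_dist=3.0, v_max=1.0, gain=0.3, v_min=0.2):
    """Lower the speed proportionally to |r - s|, since a deviation of the
    farthest-obstacle distance from the nominal sweep distance indicates
    pitching, a dip or a bump ahead."""
    return max(v_min, v_max - gain * abs(r - sweep_dist))
```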
5 Results
We developed and tested our algorithms on camera and ladar data that were collected while driving the robot around a 1.5 km loop trail used by hikers and mountain bikers and located in a state park. The trail varies from approximately 0.3 m to 2 m wide and is mostly hard-packed dirt with few sizeable rocks or roots within the trail region. Altitude along the trail gradually changes over about a 10 m range, with a few notable humps and dips. Overall, it would be rated as an “Easy” trail under the system of the International Mountain Bicycling Association [23], which is crafted with wheeled motion in mind and is a reasonable scale match for our robot. An aerial view of the entire trail loop is
Fig. 5. Aerial view of trail loop where testing was conducted. Colors indicate primary vegetation types bordering trail segments: green marks predominantly flat segments, yellow shows where there was thick, wall-like foliage along the trail, and red shows forested trail segments.
shown in Figure 5, with the flat segments drawn in green, the thick segments in yellow, and the forested segments in red. Image histogram-based segmentation as described above was very successful for autonomous guidance of the robot along virtually the entire top flat section of the trail shown in Figure 5. (For all results, the default width of each reference area was 1/6 of the image width and the horizon line was fixed at 2/3 of the image height. The RGB histogram had 16 bins per channel, and the reference areas were reset if less than 1% or more than 50% of the image was classified as trail. Images were captured at a framerate of 5 fps. The processing resolution after downsampling was 80×60; here we are upsampling the logged results for display.) The robot covered a distance of several hundred meters on an initial eastward leg until it entered a narrow, thickly-vegetated segment and was manually turned around. The histogram method for background color modeling is somewhat sensitive to illumination variation, and there are several spots in the thick segment where deep shadows would confuse it. The robot was stopped at the end of the reverse westward journey at the location pictured in Figure 4(c) because of large rocks very close to the trail. In this "flat" trail-following mode, the robot had no structural cues for speed control and thus was set to move at a constant rate of 1.0 m/s. The maximum turning rate was set to 0.5 radians/s. Part of the westward leg is illustrated by images taken at 250-frame intervals in Figure 6. For the most part the segmentation was fairly clean, but there were some sections where the bordering grass contained large brown patches and the
Fig. 6. Part of a completely image-based autonomous run on a flat trail section; image labels are camera frame numbers (0000 through 2000, at 250-frame intervals).
color discrimination faltered temporarily, as with frame 1500. When the area of the segmented region dipped below threshold, the robot no longer “saw” a trail and would continue without turning. However, because the trail was not too sharply curved at these points, the adaptive color model recovered in time to refind the trail and correct the overshoot. The ladar-based terrain mode classifier worked perfectly throughout the runs above, switching from flat to thick correctly toward the end of the eastward leg. This transition is documented in Figure 7 over a sequence of images 100 frames apart. Two strong lines are fit to the ladar data which intersect the robot baseline less than τ apart just before frame 300; this switches the controller from the image segmentation follower to the ladar gap follower. Part of a sequence further into the thickly-vegetated section with the ladar gap follower running is shown in Figure 8. A nearly 16-minute, manually-controlled run on an earlier day showed similarly successful results. The robot was driven along the trail through thick terrain
Fig. 7. Autonomous run: transition from flat to thick trail section. Image labels are camera frame numbers (000 through 500, at 100-frame intervals). The green and blue lines in the ladar images are estimated left and right edges, respectively. The yellow line in the last three ladar images represents the estimated gap direction and distance after the switch to thick terrain mode.
(the western half of the flat terrain above before grass over 1 m tall was cut for hay), followed by flat terrain (short grass in the eastern half), then into the thick terrain alley which begins at frame 300 in Figure 7. Both terrain transitions were recognized appropriately, with no other false switches, and trail tracking was qualitatively accurate. Quantitatively, the trail follower's output steering command predicted the logged human driver's steering command fairly well,
Fig. 8. Ladar-based gap tracking on thickly-vegetated trail section. Image labels are camera frame numbers (000, 100, and 200).
with a measured Pearson's correlation coefficient of ρ = 0.82 for the histogram-based flat mode and ρ = 0.75 for the ladar-based thick mode (p < 0.001). (The correlation was highest with the trail follower predicting the manual steering command 1 s later. We include only data when the robot was traveling more than 0.5 m/s (it was paused several times) and being manually steered left or right at a rate of at least 2 degs./s; smaller rates were a "dead band" treated as straight.)
6 Conclusion
We have presented a vision- and ladar-based system for following hiking trails that works well on a variety of flat and thickly-vegetated terrain. Clearly, the next area needing attention is a robust technique for tackling forested trail segments. As the visual discrimination task is harder, more sophisticated classification methods such as support vector machines and more detailed appearance models including texture [24] may be necessary, though at some cost in speed. Classifying patches or superpixels [25,2] rather than individual pixels would also most likely yield more geometrically coherent segmentations. Incorporating a whole-trail shape prior [26] into the segmentation process, perhaps with recursive updating, would also help isolate the trail region in an otherwise confusing visual environment. Some basic trail negotiation methods are necessary for robot safety, as the system currently has no mechanism for discovering or avoiding in- or near-trail hazards. Although nearly all of the trail tread in our testing area is obstacle-free, several sections contain roots or are bordered by nearby tree trunks or rocks (as
in Figure 4(c)). Existing techniques for visual feature detection and terrain classification [27,2] should help here, but the problem of discriminating soft obstacles like grass and twigs which can be pushed through from hard obstacles like rocks and logs that cannot is a classically difficult one [4].
Acknowledgments. This material is based upon work supported by the National Science Foundation under Grant No. 0546410.
References 1. Nabbe, B., et al.: Opportunistic use of vision to push back the path-planning horizon. In: Proc. Int. Conf. Intelligent Robots and Systems (2006) 2. Kim, D., Oh, S., Rehg, J.: Traversability classification for ugv navigation: A comparison of patch and superpixel representations. In: Proc. Int. Conf. Intelligent Robots and Systems (2007) 3. Coombs, D., et al.: Driving autonomously offroad up to 35 km/h. In: Proc. IEEE Intelligent Vehicles Symposium (2000) 4. Stentz, A., et al.: Real-time, multi-perspective perception for unmanned ground vehicles. In: AUVSI (2003) 5. Vandapel, N., Kuffner, J., Amidi, O.: Planning 3-d path networks in unstructured environments. In: Proc. IEEE Int. Conf. Robotics and Automation (2005) 6. Gerkey, B., Vaughan, R., Howard, A.: The Player/Stage project: Tools for multirobot and distributed sensor systems. In: International Conference on Advanced Robotics (2003) 7. Thrun, S., et al.: Stanley, the robot that won the darpa grand challenge. Journal of Field Robotics 23(9) (2006) 8. Urmson, C., et al.: A robust approach to high-speed navigation for unrehearsed desert terrain. Journal of Field Robotics 23(8), 467–508 (2006) 9. Fischler, M., Bolles, R.: Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM 24(6), 381–395 (1981) 10. Taylor, C., Malik, J., Weber, J.: A real-time approach to stereopsis and lane-finding. In: Proc. IEEE Intelligent Vehicles Symposium (1996) 11. Stein, G., Mano, O., Shashua, A.: A robust method for computing vehicle egomotion. In: Proc. IEEE Intelligent Vehicles Symposium (2000) 12. Southall, B., Taylor, C.: Stochastic road shape estimation. In: Proc. Int. Conf. Computer Vision, pp. 205–212 (2001) 13. Apostoloff, N., Zelinsky, A.: Robust vision based lane tracking using multiple cues and particle filtering. In: Proc. IEEE Intelligent Vehicles Symposium (2003) 14. Mateus, D., Avina, G., Devy, M.: Robot visual navigation in semi-structured outdoor environments. In: Proc. IEEE Int. Conf. Robotics and Automation (2005) 15. Crisman, J., Thorpe, C.: UNSCARF, a color vision system for the detection of unstructured roads. In: Proc. IEEE Int. Conf. Robotics and Automation, pp. 2496– 2501 (1991) 16. Rasmussen, C.: Combining laser range, color, and texture cues for autonomous road following. In: Proc. IEEE Int. Conf. Robotics and Automation (2002)
17. Zhang, J., Nagel, H.: Texture-based segmentation of road images. In: Proc. IEEE Intelligent Vehicles Symposium (1994) 18. Dahlkamp, H., et al.: Self-supervised monocular road detection in desert terrain. In: Robotics: Science and Systems (2006) 19. Ulrich, I., Nourbakhsh, I.: Appearance-based obstacle detection with monocular color vision. In: Proceedings of the National Conference on Artificial Intelligence (2000) 20. Borenstein, J., Koren, Y.: The vector field histogram: Fast obstacle avoidance for mobile robots. IEEE Transactions on Robotics and Automation (1991) 21. Isard, M., Blake, A.: Condensation – conditional density propagation for visual tracking. Int. J. Computer Vision 29, 5–28 (1998) 22. Stavens, D., Thrun, S.: A self-supervised terrain roughness estimator for off-road autonomous driving. In: Proceedings of the Conference on Uncertainty in AI (2006) 23. International Mountain Bicycling Association, Trail difficulty rating system. (Accessed September 12, 2007), http://www.imba.com/resources/trail building/itn 17 4 difficulty.html 24. Varma, M., Zisserman, A.: A statistical approach to texture classification from single images. Int. J. Computer Vision (2005) 25. Felzenszwalb, P., Huttenlocher, D.: Efficient graph-based image segmentation. In: Int. J. Computer Vision (2004) 26. Wolf, L., et al.: Patch-based texture edges and segmentation. In: Proc. European Conf. Computer Vision (2006) 27. Manduchi, R., et al.: Obstacle detection and terrain classification for autonomous off-road navigation. Autonomous Robots (2005)
Generic Object Recognition Using Boosted Combined Features
Doaa Hegazy and Joachim Denzler
Chair of Computer Vision, Institute of Computer Science, Friedrich-Schiller-University, Jena, Germany
{hegazy,denzler}@informatik.uni-jena.de
Abstract. In generic object recognition, the performance of local descriptors varies from one class category to another. A descriptor may have good performance on one category and low performance on another. Combining more than one descriptor in recognition can give a solution to this problem. The choice of descriptor type and the number of descriptors to be used is then important. In this paper, we use two different types of descriptors, the Gradient Location-Orientation Histogram (GLOH) and a simple color descriptor, for generic object recognition. Boosting is used as the underlying learning technique. The recognition model achieves a performance that is comparable to or better than that of state-of-the-art approaches.
1 Introduction
Generic object recognition (GOR) has recently gained a lot of attention due to its importance for computer vision applications. There are three main steps in a generic object recognition system. First, features like points or regions are detected. These features should have certain characteristics to deal with the intra- and inter-class variability within and between object categories. Then, these features are described so that they can be learnt or compared. Finally, a suitable learning algorithm or classification technique is used. In many GOR approaches or systems, distinct features are fed separately into the classification. Combining different types of descriptors or features can improve the recognition performance and yield more accurate results than using each one separately. In this paper, we combine two different types of descriptors, namely the Gradient Location-Orientation Histogram (GLOH) and a simple local color descriptor (opponent angle), for the purpose of GOR. We intend to use only two descriptors (and not more) so as not to increase the computational burden. Of course, increasing the number of descriptors used can increase the recognition performance, but on the other hand it will increase the computational complexity. For learning this descriptor combination, we use boosting. In Section 2 the related work is presented. Our GOR model is explained in Section 3, while experiments and results are presented in Section 4. Conclusions on the performance of the recognition model are drawn in Section 5.
2 Related Work
Many GOR approaches exist in the literature which use many different techniques for object representation (features and descriptors) and learning procedures. Agarwal and Roth [1] presented a recognition method where side views of cars are to be recognized. Training images are represented as binary feature vectors. These vectors encode which image patches from a codebook appear in an image. Winnow is used as the learning technique in their recognition method. A different approach to GOR is presented by Fergus et al. [2]. They represented an object class in terms of a constellation model of learnt parts using a probability model, whose parameters are learnt using the EM algorithm. We differ from this approach in that they incorporate spatial information in addition to the appearance information of objects in their recognition approach, while we use appearance information only (texture and color) in our recognition. Opelt et al. [3] proposed a GOR model using boosting. They combine three interest point detectors together with four types of local descriptors. We combine only two types of local descriptors with only one interest point detection technique. Also, they do not include any color information in their approach. We compare our results to their results in Section 4. Another GOR approach using boosting is presented by Zhang et al. [4]. Their model is a multilayer boosting system with a combination of local texture features (PCA-SIFT), global features (shape context) and spatial features. The first boosting layer chooses the most important features for object-class recognition from a pool of PCA-SIFT descriptors and shape-context descriptors. Spatial relationships between the selected features are computed in the second layer to improve the performance of their classifier. In our approach, learning is performed using one-layer boosting and no spatial information is used in recognition. Also, they do not add color information to their recognition technique. The main recognition framework of our model is similar to the framework presented in [5]. In this paper we use the GLOH features, which are an extension of the SIFT descriptor and are reported to have superior performance in the context of image matching and viewpoint changes [6], to test their suitability and robustness for GOR. We also compare the performance of the SIFT and GLOH descriptors in GOR.
3 The Recognition Model
In our GOR model, objects from a certain class in still images are to be recognized. The objects are not segmented before the learning process, nor is information about their location or position within the images given during learning. Figure 1 gives a brief description of our recognition approach. In the first step, interest regions are detected in the training images. We use the Hessian-affine interest point detector [7]. Then, local descriptors are extracted from the interest regions. Two types of local descriptors are used: the Gradient Location-Orientation Histogram (GLOH) [6] (a grayscale descriptor) and the local color descriptor presented
Fig. 1. The Proposed GOR Model
in [8]. The local descriptors, together with the labels of the training images, are given to the boosting learning technique [9], which produces a final classifier (final hypothesis) as output to predict whether a relevant object is present in a new test image.
3.1 Local Descriptors
We use the Hessian-affine point detector (implementation available at http://www.robots.ox.ac.uk/~vgg/research/affine/) to detect the interest points in images to give invariance to scale change as well as partial invariance to changes caused by viewpoint, zoom, and orientation (Figure 2). Local descriptors are then extracted from regions around the detected interest points. As a grayscale descriptor we use the Gradient Location-Orientation Histogram (GLOH) [6]. GLOH is an extension of the SIFT [10] descriptor, designed to increase its robustness and distinctiveness. Compared to SIFT, which is a 3D histogram of gradient location and orientation computed over a 4×4 location grid and 8 orientation angles, GLOH is a histogram computed for 17 location and 16 orientation bins in a log-polar location grid. PCA is used to reduce the dimension to 128. We use GLOH for two reasons: 1) it exceeds in performance many other descriptors, including the SIFT descriptor, as reported in [6], and 2) to test its adequacy, suitability and robustness for the task of GOR. The second type of descriptor we use is a local
Fig. 2. Example of detected regions using Hessian-affine detector
color descriptor presented by van de Weijer and Schmid [8]. They introduced a set of local color descriptors with different criteria such as photometric robustness, geometric robustness, photometric stability and generality. Among the descriptors introduced in [8], we choose the opponent angle color descriptor, as it is robust with respect to both geometrical variations caused by changes in viewpoint, zoom and object orientation, and photometric variations caused by shadows, shading and specularities. The authors of [8] used their color descriptor concatenated with the SIFT descriptor as one descriptor, which could then be fed to any classifier, and they evaluated the performance of this concatenated descriptor in the context of image matching, retrieval and classification. Here, we do not concatenate the color descriptor with GLOH; instead, we treat them separately. A brief description of how to construct the opponent angle color descriptor is given (according to [8]) as follows:

$$\mathrm{ang}_x^{O} = \arctan\left(\frac{O1_x}{O2_x}\right) \qquad (1)$$

where $O1_x$ and $O2_x$ are derivatives of the opponent colors and are given by

$$O1_x = \frac{1}{\sqrt{2}}(R_x - G_x), \qquad O2_x = \frac{1}{\sqrt{6}}(R_x + G_x - 2B_x) \qquad (2)$$

and $R_x$, $G_x$ and $B_x$ are derivatives of the color channels. The opponent colors and their derivatives are proven to be invariant with respect to specular variations [8]. Before computing the opponent angle, color illumination normalization should first be done as described in [8]. They introduced two methods for normalization, zero-order and first-order. We use the first-order color normalization, as it is recommended in [8] for use with the opponent angle descriptor. The first-order color normalization is given as

$$C^{*}(x) = \frac{C(x)}{\overline{|C_x(x)|}} \qquad (3)$$
where the bar indicates a spatial average, $\bar{a} = \int_S a \, dx \big/ \int_S dx$, $S$ is the surface of the patch, $C \in \{R, G, B\}$ and $C_x$ is the derivative of the color channel. To construct the opponent angle descriptor, the derived invariants are transformed into a robust local histogram. This is done by adjusting the weight of a color value in the histogram according to its certainty, as in [8] (photometric stability). The resulting opponent angle descriptor is of length 37.
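For illustration, the opponent derivatives and angle of Eqs. (1) and (2) can be computed per pixel roughly as below (our sketch only; the certainty-based histogram weighting of [8] and the first-order normalization of Eq. (3) are not shown, and arctan2 is used instead of arctan to avoid division by zero):

```python
import numpy as np

def opponent_angle(patch):
    """Per-pixel opponent angle for an RGB patch of shape (H, W, 3), float."""
    # Horizontal derivatives of the color channels (finite differences).
    Rx = np.gradient(patch[..., 0], axis=1)
    Gx = np.gradient(patch[..., 1], axis=1)
    Bx = np.gradient(patch[..., 2], axis=1)

    # Opponent color derivatives, Eq. (2).
    O1x = (Rx - Gx) / np.sqrt(2.0)
    O2x = (Rx + Gx - 2.0 * Bx) / np.sqrt(6.0)

    # Opponent angle, Eq. (1).
    return np.arctan2(O1x, O2x)

def angle_histogram(angles, n_bins=37):
    """Quantize the angles into a 37-bin descriptor (uniform weights here)."""
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)
```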
3.2 The Learning Algorithm
In our recognition model, objects from a certain class (category) in still images are recognized. Therefore, the learning algorithm used predicts whether a given image contains an instance (object) from this category or not. The AdaBoost learning algorithm [9] is used as the learning algorithm in our recognition model. AdaBoost is a supervised learning algorithm which takes a training set $I = \{I_1, \ldots, I_N\}$ and the associated labels $l = \{l_1, \ldots, l_N\}$, where $N$ is the number of training images and $l_i = +1$ if the object in the training image $I_i$ belongs to the class category and $l_i = -1$ otherwise. Each training image is represented by a set of features $\{F_{i,j}(t_{i,j}, v_{i,j}),\ j = 1 \ldots n_i\}$, where $n_i$ is the number of features in image $I_i$ (which varies from image to image according to the number of interest points detected in the image), $t_{i,j}$ indicates the type of the feature ($g$ for GLOH and $c$ for color) and $v_{i,j}$ is the feature value. AdaBoost puts weights on the training images and requires construction of a weak hypothesis $h_k$ which, relative to the weights, has discriminative power. AdaBoost is run for a certain number of iterations $T$, and in each iteration $k$ one feature is selected as a weak classifier and the weights of the training images are updated. In our model, AdaBoost in each iteration selects two weak hypotheses: one for the GLOH descriptor, $h_k^g$, and one for the color descriptor, $h_k^c$. Each weak hypothesis consists of two components: a feature vector $v_k^x$ and a certain threshold $\theta_k^x$ (a distance threshold), where $x = g$ for GLOH and $x = c$ for color. The threshold $\theta_k^x$ measures whether an image contains a descriptor $v_{i,j}$ that is similar to $v_k^x$. The similarity between $v_{i,j}$, which belongs to the image $I_i$, and $v_k^x$ is measured using the Euclidean distance for both GLOH and color descriptors. The learning algorithm using AdaBoost is as follows:

1. The weights of the training images are initialized to 1.
2. Calculate the weak hypotheses $h_k^g$ and $h_k^c$ with respect to the weights using the weak learner described in [3].
3. Calculate the classification error as
$$\varepsilon_k = \frac{\sum_{i=1}^{N} \left[h_k(I_i) \neq l_i\right] w_i}{\sum_{i=1}^{N} w_i} \qquad (4)$$
4. Update the weights: $w_{i+1} = w_i \,\beta^{-l_i h_k(I_i)}$ for $i = 1$ to $N$, with $\beta = \frac{1 - \varepsilon_k}{\varepsilon_k}$.
5. Repeat steps 2, 3 and 4 for the number of iterations $T$.
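A compact sketch of this training loop is given below (our illustration; the weak learner is passed in as a callable standing in for the procedure of [3], and for brevity a single weak hypothesis is selected per iteration, whereas the model above selects one GLOH and one color hypothesis per iteration):

```python
import numpy as np

def train_adaboost(images, labels, weak_learner, T=150):
    """labels: array of +1 / -1; weak_learner: callable
    (images, weights) -> (hypothesis, predictions) with predictions in {+1, -1}.
    """
    labels = np.asarray(labels)
    N = len(images)
    w = np.ones(N)                                 # step 1: initial weights
    strong = []                                    # list of (ln beta_k, hypothesis)

    for k in range(T):
        h_k, pred = weak_learner(images, w)        # step 2
        eps = w[pred != labels].sum() / w.sum()    # step 3, Eq. (4)
        eps = np.clip(eps, 1e-9, 1.0 - 1e-9)       # numerical safety
        beta = (1.0 - eps) / eps
        w = w * beta ** (-labels * pred)           # step 4: weight update
        strong.append((np.log(beta), h_k))         # step 5: iterate T times
    return strong
```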
After the T iterations, the final hypothesis (classifier) is given by

$$H(I) = \begin{cases} +1 & \text{if } \sum_{k=1}^{T} (\ln \beta_k)\, h_k(I) > \Omega \\ -1 & \text{otherwise} \end{cases} \qquad (5)$$

where Ω is the final classifier's threshold and is varied to get various points of the ROC curve. The threshold could be varied from ∞ to −∞ and a curve traced through ROC space, as is done in much of the literature (e.g., [11]), but computationally this is a poor way of generating an ROC curve, as it is neither efficient nor practical [12]. In [12] an efficient method for generating an ROC curve, which we use here, is presented and explained. It depends on exploiting the monotonicity of thresholded classification. Details about the method are found in [12].
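The efficient construction of [12] exploits exactly this monotonicity: sorting the test images by their strong-classifier score lets the threshold Ω sweep over all distinct operating points in a single pass. A simplified sketch (ignoring tied scores) follows:

```python
import numpy as np

def roc_points(scores, labels):
    """scores: strong-classifier sums; labels: +1 / -1 ground truth.
    Returns a list of (false positive rate, true positive rate) points."""
    order = np.argsort(-np.asarray(scores))     # decreasing score = decreasing Omega
    sorted_labels = np.asarray(labels)[order]
    n_pos = np.sum(sorted_labels == 1)
    n_neg = np.sum(sorted_labels == -1)

    tp = fp = 0
    points = [(0.0, 0.0)]
    for l in sorted_labels:                     # each step turns one prediction to +1
        if l == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / n_neg, tp / n_pos))
    return points

def roc_equal_error_rate(points):
    """Operating point where the false positive rate equals the miss rate."""
    fpr, tpr = min(points, key=lambda p: abs(p[0] - (1.0 - p[1])))
    return (fpr + (1.0 - tpr)) / 2.0
```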
3.3 Recognition
In the recognition step, a test image is presented and Hessian-affine interest points are detected. GLOH and color (opponent angle) descriptors are then extracted. For each weak hypothesis $h_k^g$ and $h_k^c$ and its associated feature value and threshold, we find the GLOH and color features in the test image with the minimum distance to $v_k^g$ and $v_k^c$ and then compare these minimum distances to the thresholds $\theta_k^g$ and $\theta_k^c$, respectively. After all the hypotheses are processed, the output of the final (strong) classifier is compared to the threshold Ω and the test image is then accepted or rejected depending on the output of the classifier.
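To make the test-time procedure explicit (a sketch under our own assumptions; in particular, how the GLOH and color votes of one iteration enter the weighted sum is simplified here to an equally weighted sum of the two ±1 outputs):

```python
import numpy as np

def classify_image(gloh_feats, color_feats, hypotheses, omega):
    """gloh_feats, color_feats: descriptors extracted from the test image.
    hypotheses: list of (ln_beta, v_g, theta_g, v_c, theta_c) from training."""
    def min_dist(v, feats):
        feats = np.asarray(feats, dtype=float)
        return np.min(np.linalg.norm(feats - np.asarray(v, dtype=float), axis=1))

    score = 0.0
    for ln_beta, v_g, theta_g, v_c, theta_c in hypotheses:
        # A weak hypothesis fires (+1) if the most similar descriptor in the
        # test image lies within its learned distance threshold.
        h_g = 1.0 if min_dist(v_g, gloh_feats) < theta_g else -1.0
        h_c = 1.0 if min_dist(v_c, color_feats) < theta_c else -1.0
        score += ln_beta * (h_g + h_c)
    return +1 if score > omega else -1
```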
4 Experiments and Results
We evaluate our recognition model using two different datasets, namely the Caltech 4 (http://www.vision.caltech.edu/html-files/archive.html) and Graz02 (http://www.emt.tugraz.at/~pinz/data/) datasets. Although Caltech 4 has some problems, such as a somewhat limited range of image variability [13], it played a key role, together with the UIUC and Caltech 101 datasets, in the recent research on GOR by providing a common ground for algorithm development and evaluation. Caltech 4 has been used by many state-of-the-art GOR approaches for binary classification, and Caltech 101 has usually been used by approaches which perform multi-class GOR tasks (e.g., [14]). We use the Caltech 4 dataset to be able to compare the performance of our approach to that of the state of the art. At the same time, we use the Graz02 dataset, which is a more recent and more difficult dataset and avoids the problems that exist in Caltech 4.

4.1 Experiments Using Caltech 4 Dataset
To compare our results to the existing approaches, we first use the Caltech 4 dataset to evaluate our recognition model (see Figure 3). We use, for training, 100
images of the object class as positive examples and 100 images of the counter-class as negative examples. For testing, 50 positive examples and 50 negative examples are used. The features of each image are clustered to 100 cluster centers using the k-means clustering algorithm to reduce the dimension of the data. The learning procedure was run for T = 150 iterations (this number was determined by experiments on the Caltech 4 dataset, varying T from 10 to 300; after T = 150 the test error remains constant). The counter-class: we do not use the Caltech 4 background dataset as the counter-class, which is used by almost all recognition approaches, because it is not colored. We use the background dataset of the Graz01 dataset instead. It is more difficult than the background dataset of Caltech 4, as shown by the results in Table 1. The results in Table 1 are the output of recognition experiments we performed using the Caltech 4 dataset with the same previously mentioned experimental settings. In these experiments we use the GLOH descriptor (without combining it with the color features) with the Hessian-affine point detector. We perform them to show how much more difficult the counter-class we use (Graz01 background dataset) is than the one used (Caltech 4 background dataset) by almost all approaches that use the Caltech 4 dataset in their experiments.
Table 1. ROC-equal-error rates of our results using the Caltech 4 dataset with different counter-classes (Caltech 4 background and Graz01 background datasets)

Dataset     Using Caltech 4 background   Using Graz01 background
Motor       100%                         92%
Cars        100%                         80%
Airplanes   96%                          78%
Faces       100%                         92%
First, we evaluate our model using only one descriptor at a time, to be able to see the benefit of combining the two descriptors; then we use a combination of them. The performance is measured in ROC-equal-error rates and is shown in Table 2, which shows the improvement in performance gained by combining the two descriptors. Each descriptor has high performance on some datasets and relatively good performance on others. Using a combination of the two descriptors improves the recognition performance on almost all the datasets. Table 3 shows a comparison of the performance of our recognition approach and state-of-the-art approaches. The comparison shows that our approach outperforms state-of-the-art approaches on almost all the datasets (considering the more difficult counter-class we use in our experiments).
Fig. 3. Example images of the Caltech 4 dataset. The rows beginning from the top show example images of the categories: airplanes, motorbikes, cars-rear and faces.

Table 2. ROC-equal-error rates of our results using the Caltech 4 dataset

Dataset     GLOH   Opponent angle   Comb.
Motor       92%    94%              96%
Cars        80%    100%             100%
Airplanes   78%    80%              84%
Faces       92%    86%              94%

4.2 Experiments Using Graz02 Dataset
Further experiments on our recognition approach are performed using the Graz02 dataset (see Figure 4). The Graz02 dataset is more difficult than Caltech 4. Objects are shown on complex cluttered backgrounds, at different scales and with different object positions. The images include high occlusion of up to 50% [3]. We therefore use 150 positive and 150 negative images for training and 75 positive and 75 negative images for testing, as in [3]. The features of each image are clustered to 100 cluster centers using the k-means clustering algorithm. Table 4 shows the results and compares them to the results of the combination in [3]. As shown in Table 4, our results exceed the results of [3] on the cars and persons datasets and have lower performance on the bikes dataset. As we mentioned before, we use only one interest point detector with two local descriptors, whereas in [3] three point detectors combined with four local descriptors are used.

Table 3. ROC-equal-error rates of our results using the Caltech 4 dataset compared to state-of-the-art approaches

Dataset     Ours   [3]     [2]     [15]    [16]
Motor       96%    92.2%   92.5%   93.2%   88%
Cars        100%   91.1%   90.3%   86.5%   -
Airplanes   84%    88.9%   90.2%   83.8%   -
Faces       94%    93.5%   96.4%   83.1%   93.5%

Fig. 4. Example images of the Graz02 dataset. The rows beginning from the top show example images of the categories: bikes, cars, persons and background.

Table 4. ROC-equal-error rates of our results using the Graz02 dataset compared to the results of Opelt [3]

Dataset     Ours     Opelt [3] (Comb.)
Bikes       74.67%   77.8%
Cars        81.33%   70.5%
Persons     81.33%   81.2%
The previously presented results show that GLOH has good performance in GOR. Now, we compare the performance of GLOH, which is an extension of SIFT, with that of the SIFT descriptor. The experiments are performed without using color information. We use the results of the Caltech 4 experiments with SIFT descriptors reported in [5]. Figure 5 displays the results of each descriptor as well as a comparison between them. SIFT outperforms GLOH on almost all
Fig. 5. Comparison between the performance of the SIFT descriptor and its extension GLOH in GOR using the Caltech dataset: (a) airplanes, (b) motors, (c) cars and (d) faces datasets. Each plot shows ROC curves (true positive rate vs. false positive rate) for SIFT and GLOH.
the datasets. Although GLOH showed higher performance than SIFT in the context of image matching and viewpoint changes [6], our results reveal that SIFT performs better than GLOH in GOR.
5 Conclusions
We have presented a GOR model which uses a combination of local descriptors, namely GLOH and a simple color descriptor (opponent angle). Boosting is the underlying learning technique. Both local descriptors show good performance when used separately in recognition, and higher performance is obtained when they are combined. The performance of our GOR model exceeds that of many state-of-the-art GOR approaches on benchmark and complex datasets. Also, a comparison between GLOH and SIFT is performed, which reveals that SIFT has higher performance than GLOH in GOR.
References 1. Agarwal, S., Roth, D.: Learning a Sparse Representation for Object Detection. In: Heyden, A., et al. (eds.) ECCV 2002. LNCS, vol. 2353, pp. 113–130. Springer, Heidelberg (2002) 2. Fergus, R., Perona, P., Zisserman, A.: Object Class Recognition by Unsupervised Scale-Invariant Learning. In: IEEE Computer Society Conference on computer vision and Pattern Recognition CVPR 2003, vol. 2, pp. 264–271 (2003) 3. Opelt, A., et al.: Generic object recognirion with boosting. IEEE Transactions on Pattern Analysis and Machine Intelligence 28 (2006) 4. Zhang, W., et al.: Object Class Recognition Using Multiple Layer Boosting with Heterogeneous Features. In: IEEE Computer Society Conference on Computer Vision and Patter Recognition CVPR 2005, Washington, DC, USA, pp. 66–73 (2005)
5. Hegazy, D., Denzler, J.: Boosting local colored features for generic object recognition. In: 7th German-Russian Workshop on Pattern Recognition and Image Understanding (2007) 6. Mikolajczyk, K., Schmid, C.: A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI 27, 1615–1630 (2005) 7. Mikolajczyk, K., Schmid, C.: An affine invariant interest point detector. In: Tistarelli, M., Bigun, J., Jain, A.K. (eds.) ECCV 2002. LNCS, vol. 2359, pp. 128– 142. Springer, Heidelberg (2002) 8. Schmid, C., van de Weijer, J.: Coloring Local Feature Extraction. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3952, pp. 334–348. Springer, Heidelberg (2006) 9. Freund, Y., Schapire, R.: A decision theoretic generalization of online learning and application to boosting. Computer and System Sciences 55, 119–139 (1997) 10. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision (2004) 11. Viola, P., Jones, M.: Robust real-time object detection. International Journal of Computer Vision 2, 137–154 (2004) 12. Fawcett, T.: Roc graphs: Notes and practical considerations for researchers (2004) 13. Ponce, J., et al.: Dataset issues in object recognition. In: Ponce, J., et al. (eds.) Toward Category-Level Object Recognition. LNCS, vol. 4170, pp. 29–48. Springer, Heidelberg (2006) 14. Grauman, K., Darrell, T.: The pyramid match kernel: Efficient learning with sets of features. J. Mach. Learn. Res. 8, 725–760 (2007) 15. Thureson, J., Carlsson, S.: Appearance based qualitative image description for object class recognition. In: Pajdla, T., Matas, J(G.) (eds.) ECCV 2004. LNCS, vol. 3024, pp. 518–529. Springer, Heidelberg (2004) 16. Weber, M., Welling, M., Perona, P.: Unsupervised learning of models for recognition. In: Vernon, D. (ed.) ECCV 2000. LNCS, vol. 1843, pp. 18–32. Springer, Heidelberg (2000)
Stereo Vision Based Self-localization of Autonomous Mobile Robots
Abdul Bais (1), Robert Sablatnig (2), Jason Gu (3), Yahya M. Khawaja (1), Muhammad Usman (1), Ghulam M. Hasan (1), and Mohammad T. Iqbal (4)
(1) NWFP University of Engineering and Technology, Peshawar, Pakistan; {bais, yahya.khawja, usman, gmjally}@nwfpuet.edu.pk
(2) Vienna University of Technology, Vienna, Austria; [email protected]
(3) Dalhousie University, Halifax, Canada; [email protected]
(4) Islamia College, Peshawar, Pakistan; [email protected]
Abstract. This paper presents vision-based self-localization of tiny autonomous mobile robots in a known but highly dynamic environment. The problem ranges from tracking the robot position with an initial estimate to global self-localization. The algorithm enables the robot to find its initial position and to verify its location during every movement. The global position of the robot is estimated using trilateration-based techniques whenever distinct landmark features are extracted. Distance measurements are used as they require fewer landmarks compared to methods using angle measurements. However, the minimum features required for global position estimation are not available throughout the entire state space. Therefore, the robot position is tracked once a global position estimate is available. An extended Kalman filter is used to fuse information from multiple heterogeneous sensors. Simulation results show that the new method, which combines global position estimation with tracking, results in a significant performance gain.

Keywords: self-localization, stereo vision, autonomous robots, Kalman filter, soccer robots.
1 Introduction
In an application where multiple robots are autonomously working on a common global task, knowledge of the position of individual robots turns out to be a basic requirement for successful completion of any global strategy. One of the solutions for the robot position estimation is to start at a known location and track the robot position locally using methods such as odometry or inertial navigation [1]. G. Sommer and R. Klette (Eds.): RobVis 2008, LNCS 4931, pp. 367–380, 2008. c Springer-Verlag Berlin Heidelberg 2008
These methods suffer from unbounded error growth due to integration of minute measurements to obtain the final estimate [2]. Another approach is to estimate the robot position globally using external sensors [3]. To simplify the process of global localization the robot environment is often engineered with active beacons or other artificial landmarks such as bar code reflectors and visual patterns. If it is not allowed to modify the robot environment, the global localization has to be based on naturally occurring landmarks in the environment. Such methods are less accurate and demand significantly more computational power [2]. Additionally, the minimum features required for global self-localization are not available throughout the entire state space. Thus it is clear that a solution that is based only on local sensors or the one which computes the global position at every step is unlikely to work. A hybrid approach to self-localization that is based on information from the local sensors as well as the external sensor is required. Such a method is complementary and compensates for shortcomings of each other. Extended Kalman Filter (EKF) [4, 5] has been extensively used for information fusion in robot navigation problems [6, 7]. However, due to its unimodal nature, EKF is not suitable for global localization in recurrent environments. The problem of handling ambiguities can be addressed using methods that are capable of representing position belief other than Gaussian i.e multi hypothesis tracking system [8], Markov localization [9] or Monte Carlo localization [10]. These methods are capable of global self-localization and can recover the robot from tracking failures but are computationally expensive [11]. We propose that the problem of global-localization should be addressed separately at startup and whenever the robot loses track of its position. The global position once calculated is used to initialize the EKF for tracking it. The accumulating errors in robot odometry are mitigated by acquiring information from the robot environment. Simulation results show that this approach results in a marked performance gain over the randomly initialized starting position [12]. Depth information of distinct objects in the robot environment is a requirement for position estimation. This can be done with a wide range of sensors i.e laser range finder and sonar. However, these sensors require integration over time and high-level reasoning to accomplish localization. In contrast, vision has the potential to provide enough information to uniquely identify the robot’s position [13]. Depth estimation using single image is too erroneous and the approach cannot be used all the time [14]. Therefore, a pivoted stereo vision system is used to obtain 3D information from the robot environment. The stereo vision system can be built small enough to fit the dimensions needed. The robot environment is known but highly dynamic. It is not allowed to modify the robot environment with navigational aids. Features found in the robot environment can be divided into different types: these are line based and color transitions [15]. Line based features are extracted using gradient based Hough transform. Corners, junctions and line intersections are determined by semantic interpretation of the detected line segments. Color transitions between distinctly colored patches are found using segmentation methods.
Distinct landmarks are scarce in our application environment. Therefore, our global position estimation algorithm has to be based on minimum number of distinct landmarks. The global position of the robot is estimated whenever range estimate of a single landmark and absolute orientation of the robot are available [16]. If the robot orientation cannot be estimated independently then two distinct landmarks are required [3]. However, even a single distinct landmark is not available throughout the entire state space and the robot position has to be tracked once a global estimate is available. For extensive testing of the algorithm the experiment is repeated in three different scenarios. Firstly, the filter is manually initialized. Secondly, the robot is having a random initial position with a high uncertainty. This is to study the convergence of the robot position belief when features are available. Finally, initialization of the filter is done with the help of global position estimation methods discussed in this paper. The balance of the paper is structured as follows: Section 2 explains the state transition and observation model in the framework of an EKF and presents the geometric construction of a differential drive robot, formation of control vector and its uncertainty analysis. Illustration of the robot observation vector, observation prediction and derivation of the robot observation uncertainty is also included in this section. Estimation of global position and its uncertainty analysis is given in Section 3. Experimental results are presented in Section 4 and finally the paper is concluded in Section 5.
2 Extended Kalman Filter
Position tracking using the EKF is the subject of this section. Using the global localization techniques, the filter is provided with optimal initial conditions (in the sense of minimum mean square error) [3,16]. The state transition and observation models in the framework of an EKF, the geometric construction of a differential drive robot, and the formation of the control vector and its uncertainty analysis are explained.
2.1 State Transition Model
We begin with the assumption of having a function f that models the transition from state p_{k-1}¹ to p_k in the presence of the control vector u_{k-1} at time k. The control vector is independent of the state p_{k-1} and is supposed to be corrupted by an additive zero mean Gaussian noise \tilde{u}_k of strength U_k. This model is formally stated by the following equation:

p_k = f(p_{k-1}, u_{k-1}, k)    (1)

where u_k = \hat{u}_k + \tilde{u}_k. The quantities p_{k-1} and p_k are the desired (unknown) states of the robot. Given the robot observations Z^{k-1}, a minimum mean square estimate of the robot state p_{k-1} at time k-1 is defined as [5]:

\hat{p}_{k-1|k-1} = E\{ p_{k-1} \mid Z^{k-1} \}    (2)

¹ The notation used here is a slightly changed version of [5]. In our case \hat{p}_{i|j} is the minimum mean square estimate of p_i given observations until time j, i.e. Z^j, where j ≤ i. Similarly, P_{i|j} is the uncertainty of the estimate \hat{p}_{i|j}.

The uncertainty of this estimate is denoted by P_{k-1|k-1} and can be written as follows:

P_{k-1|k-1} = E\{ \tilde{p}_{k-1|k-1} \tilde{p}_{k-1|k-1}^T \mid Z^{k-1} \}    (3)

where \tilde{p}_{k-1|k-1} is the zero mean estimation error. The next step is to derive an expression for \hat{p}_{k|k-1}, an estimate of p_k (using all measurements but the one at time k), and its uncertainty P_{k|k-1} using (1). This is called prediction. Similar to (2), this estimate can be written as follows:

\hat{p}_{k|k-1} = E\{ p_k \mid Z^{k-1} \}    (4)

Using a multivariate Taylor series expansion, p_k is linearized around \hat{p}_{k-1|k-1} and \hat{u}_k as given by the following expression:

p_k = f(\hat{p}_{k-1|k-1}, \hat{u}_k, k) + J_1 (p_{k-1} - \hat{p}_{k-1|k-1}) + J_2 (u_k - \hat{u}_k) + \ldots    (5)

where J_1 and J_2 are the Jacobians of (1) w.r.t. p_{k-1} and u_k, evaluated at \hat{p}_{k-1|k-1} and \hat{u}_k, respectively. Substitution of (5) in (4) results in the following:

\hat{p}_{k|k-1} = f(\hat{p}_{k-1|k-1}, \hat{u}_k, k)    (6)

From (5) and (6) the expression for the prediction error, \tilde{p}_{k|k-1}, evaluates to J_1 \tilde{p}_{k-1|k-1} + J_2 \tilde{u}_k, which is used to calculate the prediction covariance as given below:

P_{k|k-1} = J_1 P_{k-1|k-1} J_1^T + J_2 U_k J_2^T    (7)

Equations (6) and (7) comprise the prediction phase of the EKF. The uncertainty of the control vector, U_k, is presented in Section 2.3.
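As an illustration only (not taken from the original implementation; the function names and signatures are our own), the prediction phase of (6)-(7) can be sketched in Python as follows:

    import numpy as np

    def ekf_predict(p_est, P_est, u_hat, U, f, jac_f_p, jac_f_u):
        """One EKF prediction step, eqs. (6) and (7).
        p_est, P_est : previous state estimate and its covariance
        u_hat, U     : odometry-based control vector and its covariance
        f            : state transition function f(p, u)
        jac_f_p, jac_f_u : callables returning the Jacobians J1 and J2
                           of f w.r.t. the state and the control vector
        """
        J1 = jac_f_p(p_est, u_hat)
        J2 = jac_f_u(p_est, u_hat)
        p_pred = f(p_est, u_hat)                     # eq. (6)
        P_pred = J1 @ P_est @ J1.T + J2 @ U @ J2.T   # eq. (7)
        return p_pred, P_pred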
2.2 Geometric Construction of a Differential Drive Robot
It is assumed that the two wheeled differential drive robot is moving on a flat surface with the robot pose having 3 degrees of freedom, i.e. p_k = [x_k \; y_k \; \theta_k]^T. The separation between the two wheels of the robot is w. Movement of the robot center is considered as motion of the whole robot. Fig. 1 shows the robot's trajectory between time steps k-1 and k. The distance covered (per unit time) by the left and right wheels of the robot is denoted by v_{lk} and v_{rk}, respectively. The three components of state change in the robot coordinate system are represented by \delta x_k, \delta y_k and \delta\theta_k, whereas \alpha_k denotes the change in robot orientation, c_k the radius of curvature and v_k the robot velocity. Using the geometric relationship between the different elements of Fig. 1, the control vector can be formally stated as follows:

u_k = \begin{bmatrix} \delta x_k \\ \delta y_k \\ \delta\theta_k \end{bmatrix} = g\!\begin{pmatrix} v_{lk} \\ v_{rk} \end{pmatrix} = \begin{bmatrix} \frac{w(v_{rk}+v_{lk})}{2(v_{rk}-v_{lk})} \sin\!\left(\frac{v_{rk}-v_{lk}}{w}\right) \\ \frac{w(v_{rk}+v_{lk})}{2(v_{rk}-v_{lk})} \left(1 - \cos\!\left(\frac{v_{rk}-v_{lk}}{w}\right)\right) \\ \frac{v_{rk}-v_{lk}}{w} \end{bmatrix}    (8)
Fig. 1. Geometric construction of differential drive robot
This is a generalized expression for the control vector for the case when the robot is following a curved path and is required to predict future state of the robot as given by (6). Uncertainty analysis of the control vector is given in the next section.
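As a minimal sketch (our own code, not the authors'; the straight-line limit v_{rk} ≈ v_{lk}, where (8) is singular, is handled separately), the control vector of (8) can be computed as:

    import numpy as np

    def control_vector(v_l, v_r, w, eps=1e-9):
        """Control vector u_k = (dx, dy, dtheta) of eq. (8) for a
        differential drive robot with wheel separation w, given the
        distances v_l, v_r covered by the left and right wheels."""
        dtheta = (v_r - v_l) / w
        if abs(dtheta) < eps:                 # straight-line limit of eq. (8)
            return np.array([0.5 * (v_r + v_l), 0.0, 0.0])
        c = w * (v_r + v_l) / (2.0 * (v_r - v_l))   # radius of curvature c_k
        return np.array([c * np.sin(dtheta),
                         c * (1.0 - np.cos(dtheta)),
                         dtheta])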
2.3 Control Vector Uncertainty
For the uncertainty analysis of the control vector it is assumed that the robot's odometry is calibrated for systematic errors using methods such as the University of Michigan Benchmark (UMBmark) [17]. For the non-systematic errors, it is assumed that the error in the distance traveled by each wheel of the robot is zero mean Gaussian. This error is then propagated to the robot control vector as follows. Suppose the deviation \tilde{v}_k of the estimated velocity vector \hat{v}_k from its true value v_k is a random vector of zero mean and covariance \Sigma_v. As the two wheels are driven by two different motors and the distance covered by each wheel is measured by two independent encoders, it is reasonable to assume that the error in the distance covered by the left wheel is independent of the error in the distance covered by the right wheel [1]. The covariance matrix of this error is given by the following equation:

\Sigma_v = E\{\tilde{v}_k \tilde{v}_k^T\} = \begin{bmatrix} \sigma_l^2 & 0 \\ 0 & \sigma_r^2 \end{bmatrix}    (9)

where \sigma_l^2 and \sigma_r^2 are the variances of v_{lk} and v_{rk} and are proportional to their absolute values. The covariance matrix of the error in u_k, U_k, can be derived using a Taylor series expansion around \hat{v}_k as follows:

U_k = J_u \Sigma_v J_u^T    (10)

where J_u is the Jacobian of u_k with respect to v_k evaluated at \hat{v}_k.
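A minimal sketch of (9)-(10), reusing the control_vector function given above and approximating J_u numerically; the proportionality constants k_l and k_r are assumed calibration values, not taken from the paper:

    import numpy as np

    def control_covariance(v_l, v_r, w, k_l, k_r, h=1e-6):
        """Propagate wheel-odometry noise to the control vector, eq. (10).
        sigma_l^2 = k_l*|v_l| and sigma_r^2 = k_r*|v_r| implement the
        assumption that the wheel variances are proportional to the
        absolute distances covered."""
        sigma_v = np.diag([k_l * abs(v_l), k_r * abs(v_r)])      # eq. (9)
        u0 = control_vector(v_l, v_r, w)
        J_u = np.column_stack([
            (control_vector(v_l + h, v_r, w) - u0) / h,          # du/dv_l
            (control_vector(v_l, v_r + h, w) - u0) / h,          # du/dv_r
        ])
        return J_u @ sigma_v @ J_u.T                             # eq. (10)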
2.4 Observation Model
The robot observation model links the current state of the robot p_k with its observation z_k and is given by the following transformation:

z_k = h(p_k, k) + w_k    (11)

The robot observation is assumed to be corrupted by zero mean Gaussian noise w_k of strength R. Using a Taylor series expansion of (11) around the predicted state and retaining only the first two terms, the following expression for the observation model can be obtained:

z_k \approx h(\hat{p}_{k|k-1}, k) + J_{zp} (p_k - \hat{p}_{k|k-1}) + w_k    (12)

where \hat{z}_{k|k-1} = E\{z_k \mid Z^{k-1}\} = h(\hat{p}_{k|k-1}, k) is the observation prediction and J_{zp} is the observation Jacobian evaluated at \hat{p}_{k|k-1}. The deviation of the actual observation from the predicted one is called the innovation and is evaluated as follows:

z_k - \hat{z}_{k|k-1} = J_{zp} \tilde{p}_{k|k-1} + w_k    (13)

After having derived the expression for the innovation, the expression for its covariance matrix can be derived as follows [18]:

S = J_{zp} P_{k|k-1} J_{zp}^T + R    (14)

The innovation is a measure of disagreement between the actual and predicted state of the robot and is used to evaluate the state estimate \hat{p}_{k|k} as follows [5]:

\hat{p}_{k|k} = \hat{p}_{k|k-1} + K (z_k - \hat{z}_{k|k-1})    (15)

where K is the gain. A value of K that minimizes the mean square estimation error is called the Kalman gain [5]. Using (15), the estimation error can be calculated as follows:

\tilde{p}_{k|k} = (I - K J_{zp}) \tilde{p}_{k|k-1} - K w_k    (16)

and its covariance matrix by the following expression [18]:

P_{k|k} = (I - K J_{zp}) P_{k|k-1} (I - K J_{zp})^T + K R K^T    (17)
Equations (15) and (17) constitute the update phase of the EKF. Our robot specific observation model is given in the next section.
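A sketch of the update phase (again with illustrative names only); (17) is applied directly, and the bearing component of the innovation is wrapped because our observation (18) contains an angle:

    import numpy as np

    def ekf_update(p_pred, P_pred, z, h, jac_h, R):
        """One EKF update step, eqs. (14)-(17).
        p_pred, P_pred : predicted state and covariance
        z, R           : actual observation and its covariance
        h, jac_h       : observation function and its Jacobian J_zp
        """
        z_pred = h(p_pred)
        J = jac_h(p_pred)
        S = J @ P_pred @ J.T + R                  # innovation covariance, eq. (14)
        K = P_pred @ J.T @ np.linalg.inv(S)       # Kalman gain
        innovation = z - z_pred
        innovation[1] = (innovation[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing
        p_est = p_pred + K @ innovation           # eq. (15)
        I_KJ = np.eye(len(p_pred)) - K @ J
        P_est = I_KJ @ P_pred @ I_KJ.T + K @ R @ K.T   # eq. (17)
        return p_est, P_est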
2.5 Robot Observation
The robot is equipped with a stereo vision system which provides it with the values [x^{C0} \; y^{C0}]^T, i.e. the location of the landmark p_l = [x_l \; y_l]^T in the robot coordinate system. These values are used to construct the robot observation vector z_k.
Fig. 2. Illustration of the robot observation: The robot stereo vision system is used to estimate range and bearing with respect to a landmark feature
The construction of the robot observation vector is illustrated in Fig. 2 and is given by (18):

z_k = \begin{bmatrix} r_k \\ \psi_k \end{bmatrix} = \begin{bmatrix} \sqrt{(x^{C0})^2 + (y^{C0})^2} \\ \mathrm{atan2}(y^{C0}, x^{C0}) \end{bmatrix}    (18)

From its predicted position \hat{p}_{k|k-1}, the robot predicts what it may see, which is used for the position update. The construction of the robot observation prediction vector is illustrated in Fig. 2 and is given by the following expression:

\hat{z}_{k|k-1} = \begin{bmatrix} r_{k|k-1} \\ \psi_{k|k-1} \end{bmatrix} = \begin{bmatrix} \sqrt{(x_l - \hat{x}_{k|k-1})^2 + (y_l - \hat{y}_{k|k-1})^2} \\ \mathrm{atan2}(y_l - \hat{y}_{k|k-1}, x_l - \hat{x}_{k|k-1}) - \hat{\theta}_{k|k-1} \end{bmatrix}    (19)

For the robot observation uncertainty it is assumed that the error [\tilde{u}_l \; \tilde{u}_r]^T in [u_l \; u_r]^T is zero mean Gaussian. The covariance matrix of this error is given by the following expression:

\Sigma_i = E\left\{ \begin{bmatrix} \tilde{u}_l \\ \tilde{u}_r \end{bmatrix} \begin{bmatrix} \tilde{u}_l & \tilde{u}_r \end{bmatrix} \right\} = \sigma_{uu}^2 \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}    (20)

where \sigma_{uu}^2 is the variance of u_l or u_r. In deriving (20) it is assumed that \tilde{u}_l and \tilde{u}_r are not correlated. The error in u_l and u_r is propagated to the observation z_k by the transformation (18). It is assumed that these systems can be represented by the first two terms of a Taylor series expansion around the estimated values of u_l and u_r. This assumption leads to the following expression for the covariance matrix of the error in the robot observation [19]:

R = J_z \Sigma_i J_z^T    (21)

where J_z is the Jacobian of (18) with respect to [u_l \; u_r]^T and \hat{z}_k is the estimated observation. The error model discussed above captures the uncertainty due to image quantization, yet it fails to handle gross segmentation problems [20].

The types of features used for localization play an important role in the success or failure of a method. Features used for localization can be broadly divided into two groups: natural [21, 22] and artificial [23]. Artificial landmarks are specially
designed and placed in the environment as an aid in robot navigation. It is desirable to base the localization algorithm on natural landmarks as engineering the environment is expensive and is not possible under all circumstances. Data from a variety of sensors can be processed to extract natural landmarks. The drawback of using natural landmarks is that they are difficult to detect with high accuracy [2].
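For completeness, the observation (18) and its prediction (19) can be sketched as follows (illustrative only; the stereo triangulation that yields (x^{C0}, y^{C0}) is assumed to be done elsewhere):

    import numpy as np

    def observation_from_stereo(x_c, y_c):
        """Range/bearing observation z_k of eq. (18), given the landmark
        position (x_c, y_c) delivered by the stereo system in robot
        coordinates."""
        return np.array([np.hypot(x_c, y_c), np.arctan2(y_c, x_c)])

    def predicted_observation(p_pred, landmark):
        """Predicted observation of eq. (19) for a landmark (x_l, y_l),
        given the predicted pose p_pred = (x, y, theta)."""
        x, y, theta = p_pred
        dx, dy = landmark[0] - x, landmark[1] - y
        psi = np.arctan2(dy, dx) - theta
        psi = (psi + np.pi) % (2 * np.pi) - np.pi    # wrap bearing
        return np.array([np.hypot(dx, dy), psi])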
3 Global Position Estimation
Global position estimation is a requirement for an initial startup and whenever the robot is lost. The robot is unable to rectify odometry error solely by observing landmarks and applying corrective steps if it is moved to another location or bumps against another robot. In such situations, according to our experience, it is useless to carry on with tracking the wrong position. The best choice is to declare the robot lost and do global localization.

The robot position p = [x \; y]^T can be calculated using the observation [x_1^{C0} \; y_1^{C0}]^T with respect to a landmark p_{l1} = [x_{l1} \; y_{l1}]^T and the absolute orientation \theta as follows [16]:

p = \begin{bmatrix} x \\ y \end{bmatrix} = g\!\left(\begin{bmatrix} u_{l1} \\ u_{r1} \\ \theta \end{bmatrix}\right) = \begin{bmatrix} x_{l1} - r_1 \cos\varphi \\ y_{l1} - r_1 \sin\varphi \end{bmatrix}    (22)

where \varphi = \psi + \theta = \mathrm{atan2}(y_1^{C0}, x_1^{C0}) + \theta and r_1 = \sqrt{(x_1^{C0})^2 + (y_1^{C0})^2}. Here u_{l1} and u_{r1} represent the location of the landmark in the left and right cameras; these values are used to calculate x_1^{C0} and y_1^{C0}. This equation calculates the intersection of line L and circle C as shown in Fig. 3(a). Perfect measurements would result in a single point of intersection between the line and the circle; however, measurements are never perfect, and errors in the input quantities result in an uncertain position estimate, as shown by the gray area
Fig. 3. Global position estimation: (a) the robot position is determined by the intersection of line L and circle C; (b) the intersection of circles C and C' is the robot position
in Fig. 3(a). We assume that each element of the input vector is corrupted by independent zero mean Gaussian noise with covariance \Sigma_i = \mathrm{diag}(\sigma_{uu}^2, \sigma_{uu}^2, \sigma_{\theta\theta}^2), where \sigma_{uu}^2 is the variance of u_{l1} or u_{r1} and \sigma_{\theta\theta}^2 is the variance of the robot absolute orientation. This error is propagated into the position estimate by the transformation (22). The system given by (22) is nonlinear and the resulting error distribution will not be Gaussian. However, we assume that it can be adequately represented by the first two terms of a Taylor series expansion around the estimated input \hat{i}. We proceed as follows:

p = g(\hat{i}) + J_i \tilde{i} + \ldots    (23)

where \hat{p} = g(\hat{i}) is the position estimate, \tilde{p} \approx J_i \tilde{i} its error and J_i is the Jacobian of p with respect to i evaluated at its estimated value \hat{i}. From (23) the expression for the covariance matrix of the position estimate is derived as follows:

\Sigma_p = J_i \Sigma_i J_i^T

When the robot orientation estimate is not reliable or not available, the robot needs to estimate the location of two globally known landmarks in its coordinate system [3]. With this information the robot location is constrained to the intersection of two circles, as shown in Fig. 3(b), and given by the following expression [3]:

p = \begin{bmatrix} x \\ y \\ \theta \end{bmatrix} = \begin{bmatrix} \frac{-D \pm \sqrt{D^2 - 4CE}}{2C} \\ A + B\left(\frac{-D \pm \sqrt{D^2 - 4CE}}{2C}\right) \\ \mathrm{atan2}(y_{l1} - y, x_{l1} - x) - \mathrm{atan2}(y_1^{C0}, x_1^{C0}) \end{bmatrix}    (24)

where A = \frac{r_1^2 - r_2^2 + x_{l2}^2 - x_{l1}^2 + y_{l2}^2 - y_{l1}^2}{2a}, B = \frac{x_{l1} - x_{l2}}{a}, C = B^2 + 1, D = 2AB - 2y_{l1}B - 2x_{l1}, E = A^2 + x_{l1}^2 + y_{l1}^2 - 2y_{l1}A - r_1^2 and a = y_{l2} - y_{l1}. The subscripts 1 and 2 differentiate between quantities related to the two landmarks. The input vector i = [u_{l1} \; u_{r1} \; u_{l2} \; u_{r2}]^T has four components, which correspond to the locations of the two landmarks in the left and right camera images. The imperfection in the estimation of the input vector is propagated into the robot location using (24), which results in an uncertain position estimate as illustrated in Fig. 3(b). We model this imperfection with a zero mean Gaussian having the following covariance matrix:

\Sigma_i = \sigma_{uu}^2 I_{4 \times 4}    (25)
where I_{4 \times 4} is the 4 \times 4 identity matrix. Using the same principles as adopted in the previous section, we arrive at the following expression for the covariance matrix of the position estimate using the new method:

\Sigma_p = J_i \Sigma_i J_i^T    (26)
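A minimal sketch of the single-landmark case (22); the two-landmark solution (24) follows the same pattern using the circle-circle intersection. The absolute orientation theta is assumed to come from an independent sensor, as stated in the text:

    import numpy as np

    def global_position_single_landmark(x_c, y_c, theta, landmark):
        """Global robot position from eq. (22): one landmark observed at
        (x_c, y_c) in robot coordinates plus the absolute orientation
        theta. landmark = (x_l1, y_l1) in world coordinates."""
        r1 = np.hypot(x_c, y_c)
        phi = np.arctan2(y_c, x_c) + theta           # phi = psi + theta
        return np.array([landmark[0] - r1 * np.cos(phi),
                         landmark[1] - r1 * np.sin(phi)])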
4 Experimental Results
The performance of our algorithm is tested with a simulation as real-world implementations are still in progress. Our robot, Tinyphoon [24], is a two wheeled
differential drive, with wheel base of 75 mm. The stereo vision system is mounted at a height of 70 mm. The separation between the two cameras (stereo baseline) is 30 mm. Image resolution for the experiment was set to be QVGA (320 × 240 pixels). The size of the robot and dimension of the field conforms with the FIRA MiroSot Small League2 . Image resolution and other simulation parameters were set to satisfy the constraints of the real robots. In order to improve the statistics regarding the growth of uncertainty in robot position, different types of trajectories with multiple trials were tested. In each of these trials, the robot either follows a circular path of different diameters, or the starting position of the robot and/or direction of motion is changed. As stated earlier, the whole process is repeated three times with different initial conditions. Firstly, the position tracker is initialized manually with a starting position and its corresponding uncertainty. The initial position is obtained by adding random noise to the true position. Secondly, the robot is having a random starting position with a high uncertainty. Our intention behind this is to test if the robot can recover from tracking failures. Providing the starting position and the covariance matrix violates the robot autonomy. For autonomous behavior the robot must be equipped with tools that enable it to localize itself from scratch. This capability of the algorithm is demonstrated in the final run of the experiment where initialization of the tracker is done with the help of global position estimation methods discussed in this paper. A sample trial of the experiment where the robot moves along a circular path is shown in Fig. 4. The ideal path that robot is supposed to follow is a perfect circle. However, due to imperfections of its sensors the robot deviates from the true path. In Fig. 4(a) the robot starts at a known position with low uncertainty. The robot observes its first landmark feature when it has covered more than a quarter of its intended path. As can be seen in the figure, uncertainty (represented by ellipses) of the robot position increases unbounded and its pose drifts away from the true value. When the robot observes one of the two features on the left side, its uncertainty is reduced and position adjusted. However, as soon as the landmarks are out of sight, drift from the true position starts and the uncertainty increases. This drift is corrected when the landmarks are visible again. Fig. 4(b) illustrates the case where the robot is given a random starting point with a very high uncertainty. As can be seen in the figure the robot is constantly moving away from its intended path and its uncertainty is growing rapidly. However, when landmark features become available to the robot, corrective steps are applied and the robot gets closer and closer to its true path. Global self-localization for the same path is shown in Fig. 4(c). 3σ error bound on each component of the robot pose for run of this trial are shown in Fig. 4(d), Fig. 4(e) and Fig. 4(f). Similarly, Fig. 5 shows a sample trial of the category where the robot is moving along a straight line. The robot orientation is fixed at 180 ◦and it starts at a point on the left side and moves in reverse direction towards the right in 2
http://www.fira.net
Fig. 4. Robot desired trajectory is a circle of radius 600 mm (a) known start position (b) random start position (c) here the robot does not initialize its tracker but actively searches for landmarks until it finds enough required for global position estimation (d) 3σ bound on error in x, y and θ for known start position (e) error bounds for random starting position (f) the improvement due to proper initialization of the tracker is obvious
small steps. The orientation of the robot is such that the landmarks are always in sight. First run of this trial where robot starts at a known position with low uncertainty is shown in Fig 5(a). Also in this case we repeat the experiment when the robot starts at a random location with high uncertainty or does global position estimation as shown in Fig. 5(b) and Fig. 5(c), respectively. Error in each component of the robot pose and their corresponding 3σ uncertainty bounds are shown in Fig. 5(d), Fig. 5(e) and Fig. 5(f). When the robot is getting further from the landmarks, effect of the robot’s observation decreases as its uncertainty increases and the accumulation of odometry error dominates. Statistical results for the error in robot observation are shown in Table 1. In this table δr and δϕ refer to error in the range and bearing components of the observation vector. δr is expressed in millimeters and δϕ is given in degrees. The first row shows values for the average error. The standard deviation (Std), minimum (Min) and maximum (Max) values are presented in the second, third and fourth row. The Min and Max shown are the absolute values. The first two columns list error in range and bearing when the robot follows circular paths of different diameters. Results for straight line motion are shown in third and fourth column.
Fig. 5. Robot's desired path is a straight line (a) known starting position (b) starting at a random position (c) the location tracker is initialized by estimating the robot location and its uncertainty using landmark based methods (d) error and 3σ error bound for known starting location (e) error bound for random start position (f) error bound for (c)

Table 1. The robot observation error

         Circular path           Line
         δr (mm)   δϕ (°)        δr (mm)   δϕ (°)
Mean     -5.82     -0.15         -50.808   0.03
Std      22.24      0.13          43.66    0.52
Min       0.07      0.004          0.15    0.001
Max      67.07      0.43         204.60    1.17

Simulation results illustrate that the algorithm can successfully localize the robot despite the fact that landmarks are sparse. The linearization of the observation model introduces problems for distant features. The robot position error is perfectly bounded when it is following a circular path. However, the error is not always bounded in the latter case. The reason for this is that in motion along the circle, features are always observed from a short distance, whereas in the other case features are observed from a distance towards the end of the trials. The prominence of this problem comes from the use of a narrow baseline stereo. Currently we are in the process of implementation and will have real-world results in the near future.
5 Conclusion
In this paper we discussed all aspects of real time mobile robot self-localization in a known but highly dynamic environment. The problem covered both tracking the position given an initial estimate and estimating it without any prior position knowledge. The localization algorithm is not dependent on the presence of artificial landmarks or special structures such as rectangular walls in the robot environment. Furthermore, it is not required that features lie on or close to the ground plane. A stereo vision based approach is used to identify and measure the distance to landmark features. Distance measurements are used as they require fewer landmarks compared to methods using angle measurements. Global self-localization is computationally slow and sometimes impossible if enough features are not available. Therefore the robot position is tracked once a global estimate is available. The simulation parameters correspond to those of our real robots. The next step is to use all features found in the robot environment and to analyze the effect of errors in feature location in the global coordinate system and in the correspondence analysis.
References 1. Chong, K.S., Kleeman, L.: Accurate odometry and error modelling for a mobile robot. In: Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2783–2788 (1997) 2. Borenstein, J., Everett, H.R., Feng, L.: Navigating Mobile Robots: Systems and Techniques. A. K. Peters, Ltd., (1996) 3. Bais, A., Sablatnig, R.: Landmark based global self-localization of mobile soccer robots. In: Narayanan, P.J., Nayar, S.K., Shum, H.-Y. (eds.) ACCV 2006. LNCS, vol. 3852, pp. 842–851. Springer, Heidelberg (2006) 4. Crowley, J.L.: Mathematical foundations of navigation and perception for an autonomous mobile robot. In: Proceedings of the International Workshop on Reasoning with Uncertainty in Robotics, pp. 9–51 (1995) 5. Newman, P.: An Introduction to Estimation and its Application to Mobile Robotics. Robotics Research Group, Department of Engineering Science, University of Oxford. 2.0 edn., Lecture notes (2005) 6. Lee, J.M., et al.: Localization of a mobile robot using the image of a moving object. IEEE Transactions on Industrial Electronics 50, 612–619 (2003) 7. Ohno, K., Tsubouchi, T., Yuta, S.: Outdoor map building based on odometry and rtk-gps positioning fusion. In: Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 684–690 (2004) 8. Arras, K.O., et al.: Feature-based multi-hypothesis localization and tracking using geometric constraints. Robotics and Autonomous Systems 44, 41–53 (2003) 9. Fox, D., et al.: Markov localization for mobile robots in dynamic environments. Artificial Intelligence 11, 391–427 (1999) 10. Thrun, S., et al.: Robust monte carlo localization for mobile robots. Artificial Intelligence 128, 99–141 (2001) 11. Gutmann, J., Fox, D.: An experimental comparison of localization methods continued. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (2002)
12. Bais, A., Deutsch, T., Novak, G.: Comparison of self-localization methods for soccer robots. In: Proceedings of the IEEE International Conference on Industrial Informatics (2007) 13. Lamon, P., et al.: Deriving and matching image fingerprint sequences for mobile robot localization. In: Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1609–1614 (2001) 14. Nickerson, S.B., et al.: The ark project: Autonomous mobile robots for known industrial environments. Robotics and Autonomous Systems 25, 83–104 (1998) 15. Bais, A., Sablatnig, R., Novak, G.: Line-based landmark recognition for selflocalization of soccer robots. In: Proceedings of the IEEE International Conference on Emerging Technologies, pp. 132–137 (2005) 16. Bais, A., et al.: Active single landmark based global localization of autonomous mobile robots. In: Bebis, G., et al. (eds.) ISVC 2006. LNCS, vol. 4291, pp. 202– 211. Springer, Heidelberg (2006) 17. Borenstein, J., Feng, L.: Measurement and correction of systematic odometry errors in mobile robots. IEEE Transactions on Robotics and Automation 12, 869–880 (1996) 18. Bais, A., et al.: Location tracker for a mobile robot. In: Proceedings of the IEEE International Conference on Industrial Informatics, pp. 479–484 (2007) 19. Bais, A.: Real-Time Mobile Robot Self-localization - A Stereo Vision Based Approach. PhD thesis, Vienna University of Technology, Vienna, Austria (2007) 20. Matthies, L., Shafer, S.: Error modeling in stereo navigation. IEEE Transactions on Robotics and Automation RA-3, 239–248 (1987) 21. Panzieri, S., et al.: A low cost vision based localization system for mobile robots. In: Proceedings of the Mediterranean Conference on Control and Automation (2001) 22. Lingemann, K., et al.: High-speed laser localization for mobile robots. Robotics and Autonomous Systems 51, 275–296 (2005) 23. Li, X.J., So, A.T.P., Tso, S.K.: Cad-vision-range-based self-localization for mobile robot using one landmark. Journal of Intelligent and Robotic Systems 35, 61–81 (2002) 24. Novak, G., Mahlknecht, S.: TINYPHOON a tiny autonomous mobile robot. In: Proceedings of the IEEE International Symposium on Industrial Electronics, pp. 1533–1538 (2005)
Robust Ellipsoidal Model Fitting of Human Heads J. Marquez1 , W. Ramirez1 , L. Boyer1, and P. Delmas2 1
2
CCADET, UNAM, A.P 70-186, 04510 D.F., Mexico
[email protected] Dept. of Computer Science, Tamaki Campus, University of Auckland, NZ
[email protected]
Abstract. We report current work on methods for robust fitting of ellipsoids to the shape of the human head in three-dimensional models built from laser scanner acquisitions. A starting ellipsoid is obtained from Principal Component Analysis of the mesh vertices; those regions far from the surface of the ellipsoid are penalized (outlier rejection and/or damping). A first method consists in re-calculating the ellipsoid iteratively by PCA, adjusting scales. Another method projects the outliers onto the ellipsoid. A third method uses a morphological-error approach, via the difference of the distance fields of the head and its fitting ellipsoid. The ellipsoid orientation, position and scales are then changed in search of a set of parameters minimizing the morphological error. To speed up calculations, only a few orthogonal planes are sampled and the distance field is obtained directly from the triangular mesh elements. Among the intended applications, the restoration of the skullcap is presented.
1 Introduction
In anthropometry, medicine and other disciplines, morphological models and digital phantoms of the human body are built, often based on geometrical primitives, from spheroids and cylinders to quartics surfaces and splines. These shapes are used for computer analytical models, 3D-rendering and finite-element models for the simulation of dynamical behaviour, or the physical response to environmental stimuli. For the human head, a cranial region is approximated to a prolate spheroid and the main problem is to define such region. Spheroids have been used as the simplest morphological representation of the human head, as well as for the brain [3, 23]. In previous works, we have built models for the dosimetry and simulation of radiofrequency-power deposition in the head of mobile-phone users [17], the extraction of salient features and their morphological average, such as the auricular region, replacing the ear by interpolating surfaces [19], and population atlases [20]. These works have required not only global registration and scale normalization, but a correct matching of local features, and an iterative approach for successful elastic registration, with maximization of mutual information [10,16], as matching criterion. Altogether, a better 3D alignment and robust ellipsoids G. Sommer and R. Klette (Eds.): RobVis 2008, LNCS 4931, pp. 381–390, 2008. c Springer-Verlag Berlin Heidelberg 2008
that fit the head under different criteria are still needed. Our approach has been applied to a database of forty adult heads, digitized with a laser scanner from Cyberware [4,18]. The goal of this work is the extraction of simplified models of the head, using ellipsoids fitted in such a way as to discard or penalize the most salient features (the ears, the nose, the chin, etc.) and favour the roundest areas of the skull.
2 Methods

2.1 Isotropic Mesh Ellipsoids
Ellipsoid meshes for visualizing fitting results were first obtained by recursive subdivision based on an octahedron [1], projecting vertices to the analytic ellipsoid. A similar method may also use the icosahedron, to obtain geodesic ellipsoids. The recursion levels give 8, 32, 72, 128, 200, and so on, triangles. We stopped at level 25, where the triangle sides are less than 3 mm, which corresponds to the medium resolution size of the mesh models from the scanner. The OpenGL graphics library was employed.

The analytical ellipsoid, specified by center and principal axes, is used for robust fitting by error estimation methods. Principal Component Analysis (PCA) allows the extraction of principal axes from a set of points [5,6,11,26]. In 3D, the principal axes are the eigenvectors, with proportions coded by the eigenvalues, of the inertia tensor [15]. If S is the surface of a head, we represent S with the set of the mesh vertices {(x_k, y_k, z_k)}, k = 1, ..., K. After a reduction of vertex redundancy, the second moments of inertia are defined by:

I_{xx} = \sum_{(x,y,z) \in S} W(x,y,z)\left((y - y_C)^2 + (z - z_C)^2\right)
I_{yy} = \sum_{(x,y,z) \in S} W(x,y,z)\left((x - x_C)^2 + (z - z_C)^2\right)
I_{zz} = \sum_{(x,y,z) \in S} W(x,y,z)\left((x - x_C)^2 + (y - y_C)^2\right)
I_{xy} = I_{yx} = \sum_{(x,y,z) \in S} W(x,y,z)(x - x_C)(y - y_C)
I_{xz} = I_{zx} = \sum_{(x,y,z) \in S} W(x,y,z)(x - x_C)(z - z_C)
I_{yz} = I_{zy} = \sum_{(x,y,z) \in S} W(x,y,z)(y - y_C)(z - z_C)    (1)

where (x_C, y_C, z_C) is the mass center of S and W(x,y,z) are weights associated to each vertex, for the moment equal to 1/K for all (x_k, y_k, z_k) \in S. They are later modified, and renormalized, in order to penalize vertices far from the ellipsoid, at each iteration.

I = \begin{bmatrix} I_{xx} & -I_{xy} & -I_{xz} \\ -I_{yx} & I_{yy} & -I_{yz} \\ -I_{zx} & -I_{zy} & I_{zz} \end{bmatrix}    (2)

The three eigenvalues of this matrix satisfy 0 < \lambda_1 \le \lambda_2 \le \lambda_3 for a human head. The new reference system XYZ_{ELIP} for the inertia ellipsoid, fitting the set
of points S, also includes a translation of the centroid to (0, 0, 0). Human heads have no perfect symmetry, and the ellipsoid is only an approximate criterion for global registration. The ellipsoid is not normalized to the head dimensions and is not strictly an “equivalent ellipsoid”, since scaling is needed to best fit the selected data. An RMS error between corresponding points on the two surfaces is minimized. An approximate correspondence is achieved by projecting a line from the ellipsoid center to each mesh vertex and finding the intersection with the analytical ellipsoid (not a mesh point). Our original contribution consists in the study of the fitting behaviour when excluding or penalizing salient head features, like the nose or the chin, and enhancing the fit to the available information for the skull cap and the rear regions of each head. Figure 1, left, shows the initial fitting, using axes obtained by PCA (only the axes are plotted, for clarity). The volumes of the ellipsoid and of the head, as a closed surface, may be similar, but equal volumes do not guarantee null quadratic differences among corresponding points on both surfaces. Figure 1, right, shows the axes scaled by minimizing square differences.
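A minimal sketch of the PCA step of (1)-(2) (our own code, with uniform or penalizing weights passed in):

    import numpy as np

    def inertia_ellipsoid(vertices, weights=None):
        """Principal axes of the weighted inertia tensor, eqs. (1)-(2),
        for a (K, 3) array of mesh vertices. Returns the centroid, the
        eigenvalues (ascending) and the eigenvectors; the eigenvectors
        give the ellipsoid orientation, the eigenvalues its proportions."""
        K = len(vertices)
        if weights is None:
            weights = np.full(K, 1.0 / K)
        weights = weights / weights.sum()
        c = weights @ vertices                       # mass centre
        d = vertices - c
        x, y, z = d[:, 0], d[:, 1], d[:, 2]
        Ixx = np.sum(weights * (y**2 + z**2))
        Iyy = np.sum(weights * (x**2 + z**2))
        Izz = np.sum(weights * (x**2 + y**2))
        Ixy = np.sum(weights * x * y)
        Ixz = np.sum(weights * x * z)
        Iyz = np.sum(weights * y * z)
        I = np.array([[ Ixx, -Ixy, -Ixz],
                      [-Ixy,  Iyy, -Iyz],
                      [-Ixz, -Iyz,  Izz]])
        eigvals, eigvecs = np.linalg.eigh(I)         # 0 < l1 <= l2 <= l3
        return c, eigvals, eigvecs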
Fig. 1. A first equivalent ellipsoid via PCA (only the main axes are shown). Left: normalized axes; the surface error is very large. Right: eigenvector scaling with distance minimization among corresponding points (vertices and their projection on the analytical ellipsoid); large regions of the rear remain far from the ellipsoid.
The inertia ellipsoid-based registration is only useful when all data belong to the same population, noise is irrelevant and the objects to be registered have perfect correspondences under rigid transformations, that is, they turn out to be identical except for orientation. Many local differences are present in real populations and landmarks are needed, such as identifying common salient features. Shape variation may be caused by natural differences (two individuals), deformation or noise. Model fitting then prompts us to discard highly varying features, which also correspond to salient features far from the model. This makes it possible to represent the roundest regions of the head and to set a reference or “zero ground” surface –the ellipsoid–, from which further modeling is possible. Another
Fig. 2. Example of three orthogonal sample planes from the 3D distance fields from a low resolution model of one head. A few samples are used for calculating (5).
successful registration method between two similar surfaces, represented by clouds of point samples, is the Iterative Closest Points (ICP) method [2]. It does not find the best fitted ellipsoid, but penalization may be included in ICP to enhance alignment.

2.2 Fitting by Outlier Penalization
We define an outlier as a mesh point not to be included (weight W = 0) or with a low contribution to the ellipsoid fitting (weight W less than 1/K), with a renormalization of the other weights such that the sum of all vertex weights is kept equal to 1. The outlier criterion may thus consist of a threshold distance to the ellipsoid, or of defining W(x, y, z) = (1 - \delta^2/\delta_{max}^2)/K, where \delta is the actual separation and \delta_{max} the largest possible separation or mismatch; hence W \in [0, 1/K]. An iterative method was tested, calculating a new ellipsoid E_{n+1} from the new set of weights resulting from error adjusting with respect to ellipsoid E_n. The first ellipsoid E_0 is obtained with all vertex weights W set to 1/K. Convergence is achieved with a tolerance criterion on the decreasing rate of the global error, rather than the error itself. It is interesting to note that outlier vertices are not removed but tagged or just penalized; they may also be repositioned at the ellipsoid surface itself. This allows for applications such as interpolating missing areas, such as the skullcap. A better minimization was achieved via our morphological error approach, as explained next.
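The weight update can be sketched as follows (a simplified illustration of the scheme above; the radial correspondence with the analytic ellipsoid is the one described earlier):

    import numpy as np

    def update_weights(vertices, centroid, axes, radii, K=None):
        """Outlier-penalising weights W in [0, 1/K] before
        renormalisation: W = (1 - delta^2/delta_max^2)/K, with delta the
        radial separation between a vertex and the current ellipsoid.
        axes: 3x3 matrix of principal axes (columns); radii: semi-axes."""
        if K is None:
            K = len(vertices)
        local = (vertices - centroid) @ axes             # ellipsoid frame
        radial = np.linalg.norm(local, axis=1)
        # scale factor putting each vertex onto the analytic ellipsoid surface
        s = 1.0 / np.sqrt(np.maximum(np.sum((local / radii) ** 2, axis=1), 1e-12))
        delta = np.abs(radial * (1.0 - s))               # separation delta
        d_max = delta.max() if delta.max() > 0 else 1.0
        W = (1.0 - (delta / d_max) ** 2) / K
        return W / W.sum()                               # renormalise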
2.3 Ellipsoid Fitting by Morphological Error Minimization
A fitting process considering neighbourhood information, starting from an ellipsoid E_N, consists in varying the orientation as well as the scales and position of the ellipsoid, measuring errors in the distance fields of each shape, and searching the ellipsoid parameter space for the parameters minimizing such an error. The optimization problem, e.g. matching two shapes V_A and V_B (e.g. head and ellipsoid, or two different heads), is in general written as:

T_{min} = \arg\min_T \{\mathrm{ErrorMetric}(V_A, T(V_B))\}    (3)

where the minimization criterion implies a metric to choose. It may consist in a morphological similarity between V_A and V_B. An example is a shape-transformation energy [22], but local information is best included in the distance fields of each shape. Distance fields allow a number of manipulations among 3D shapes [21,27]
Fig. 3. Visualizing the differential distance field between head (left column) and fitted ellipsoid (right column). Colour (central column) indicates the quality of fit in each ROI; the hue shows whether the ellipsoid is inside (blue to cyan) or outside (red to violet) the head.
and we have already used them to extract morphological averages of localized anatomical features [20]. Given a solid V_A and its boundary \partial V_A, we define its Euclidean, signed distance field as:

D_{\pm}(\partial V_A) = \left\{ (p, d(p)) \;\middle|\; d = \mathrm{sgn} \cdot \min_{q \in \partial V_A} \|p - q\| \right\}, \quad \mathrm{sgn} = \begin{cases} +1 & \text{if } p \in (V_A)^c \\ -1 & \text{if } p \in V_A \end{cases}    (4)

where p = (x, y, z) is a point (voxel) of the field D, with attribute d (the minimal distance to \partial V_A). The components D_+ and D_- are also named the external and the internal distance fields, respectively. This distinction allows errors or similarities to be measured indicating whether the ellipsoid is outside or inside the craniofacial surface. We define the global morphological error measure between two objects V_A and T(V_B), when registering V_B with V_A, as:

\mathrm{Err}_D(V_A, T(V_B)) = \int_{D(\partial(V_A \cup V_B))} \left[ D(\partial V_A) - D(\partial(T(V_B))) \right]^2 dx\,dy\,dz    (5)
A large computing load is needed for calculating (3), since each set of T parameters implies re-calculating (5). Monte Carlo methods may be used not only in the integral and in the optimization search, but also in sampling the distance field itself, by only calculating (5) at sample planes of the 3D field D, as shown in figure 3. The morphological error increases or decreases with T, all over the 3D volume or over a subset, such as the orthogonal planes. Figure 3 visualizes the differential distance field, where the components D_-, D_+ were assigned to the red and blue colour channels.
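A sketch of the sampled evaluation of (5) (hypothetical interfaces: the two signed distance fields are assumed to be available as callables, e.g. computed from the triangular meshes):

    import numpy as np

    def morphological_error(sdf_head, sdf_ellipsoid, planes, spacing):
        """Approximate the morphological error of eq. (5) by summing the
        squared difference of the two signed distance fields over a few
        sample planes (each an (N, 3) array of points) instead of the
        full volume; spacing is the sampling step on each plane."""
        err = 0.0
        for pts in planes:
            d = sdf_head(pts) - sdf_ellipsoid(pts)
            err += np.sum(d ** 2) * spacing ** 2     # area element
        return err

An outer search over the ellipsoid position, orientation and scales (the transformation T of (3)) then keeps the parameter set with the smallest returned value.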
Fig. 4. Penalization by projection of outliers to the ellipsoid
Fig. 5. Left: head with outlier penalization by tagging (light blue). Right: asymptotic behaviour of the three tested methods, starting from the preliminary PCA ellipsoid (the global error excludes outliers).
3 Experiments
Figure 4 shows superposed heads and fitted ellipsoids according to the outlier penalization criterion, recalculating the ellipsoid by PCA for a few iterations. Figure 5 shows a head with tagged outliers; at right, a global error plot is shown in terms of the number of iterations. The morphological error is different from the RMS error, since the differential distance field is considered, coding local information about the fit. Figures 6 to 8 show the ellipsoid obtained by morphological error minimization, with only three orthogonal planes, and varying orientation, position and scale parameters. The tests made evident that automated outlier identification is not a trivial task, since the ellipsoid could collapse towards the chin. This can be interpreted as convergence to a local minimum. Further criteria were added to prohibit certain
Fig. 6. Aligned head and equivalent ellipsoid. Left: 1st iteration. Right: 4th iteration.
Fig. 7. Optimal fitting (light red ellipsoid). During acquisition, a swimming cap was used by subjects, in order to avoid noise by hair and the “north pole” effect. The ellipsoid allows to filter out the swimming-cap folding and to restore the missing data at the skullcap.
Fig. 8. Additional views of Fig. 7, showing the coronal region restored with the equivalent ellipsoid
behaviours, by penalizing facial features from the beginning. The skullcap region was reconstructed by using the ellipsoid, but validation was only qualitative, since the scanner acquisitions were the only data available for that anonymous population.
4 Conclusions
This work is part of a larger study on first- and higher-order analytical surface fitting to the shape of human heads. Our tests show that a PCA ellipsoid poorly represents a head if modelling focuses on the skullcap and the rear areas, since the main differences appear at facial features. We also believe that the morphological error better assessed the fit between ellipsoid and head, since local shape information is included. We also found that calculations may be reduced by Monte Carlo estimation, by sampling not only the parameter space and the triple integral calculation, but also the volumetric distance field, since no deformations are being performed. Visualization of the differential distance field also improved the characterisation of penalized regions and of the error distribution in space. Besides skullcap restoration, the ellipsoid is useful as a 3D baseline reference, or “ground zero”, for anthropometry and the study of modes of variation, for eigen-shape assessment, in a population. This is important for estimating representative shapes (head averages) and for identifying homogeneous components in an inhomogeneous population. The ellipsoid also provides analytical representations for the study of transfer functions in acoustic fields and RF power absorption simulations, for example. Together with morphological-error quantification and distance-field based visualizations, the ellipsoid representations, as in the case of Gaussian spheres, will allow us to establish analysis and visualization of differential properties. This constitutes a first step towards building higher-order models, such as radial-harmonic, quartics, splines and NURBS. We are also currently studying steered distance fields, in order to tackle deformation transformations and to weight such fields in a local fashion, according to salient features.
References 1. Ahn, J.: Fast Generation of Ellipsoids. In: Paeth, A.W. (ed.) Graphics Gems V, Systems Engineering Research Institute, KIST, South Korea, pp. 179–190. Academic Press Inc., London (1995) 2. Besl, P.J., Neil, D., McKay, A.: Method for Registration of 3-D Shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence 14(2), 239–256 (1992) 3. Van der Bie, G.: Anatomy - Human Morphology from a phenomenological point of view. Sample of the content: Morphological characteristics based on the skeleton. Driebergen, Netherlands, Louis Bolk Instituut, Publication GVO 03 77 (2003) 4. Inc. Cyberware Laboratory: 4020/rgb 3d scanner with colour digitizer (1990), http://www.cyberware.com/pressReleases/index.html 5. Ding, C., He, X.: Principal Component Analysis and Effective K-means Clustering. University of California, Berkeley. LBNL Tech Report 52983 (2003) 6. Diagnostic Strategies: Calculating Similarity Between Cases, http://www.DiagnosticStrategies.com 7. Frerichs, D., Graham, C.: VRM (1995), http://silicon-valley.siggraph.org/MeetingNotes/VRML.html 8. Halsall, C.: Creating Real Time 3D Graphics With OpenGL (2000), http://www.oreillynet.com/pub/a/network/2000/06/23/magazine/ opengl intro.html
9. Hantke, H.: 3D-Visualization (VRML) of Morphology and Flow in groyne fields. In: EGS XXVI General Assembly, Nice, Francia (2001) 10. Hill, D., Studholme, C., Hawkes, D.: Voxel Similarity Measures for Automated Image Registration. In: SPIE (Eds): Proceedings of the Third Conference Visualization in Biomedical Computing, pp. 205-213 (1994) 11. Hannah, P., et al.: PCA - Principal Component Analysis (2005), http://www.eng.man.ac.uk/mech/merg/Research/datafusion.org.uk/ techniques/pca.html 12. Ieronutti, L.: Employing Virtual Humans for Education and Training in X3D/VRML Worlds. In: Proceedings of LET-Web3D 1st International Workshop of Web3D Technologies in Learning, Education and Training, pp. 23–27 (2004) 13. Jones, M.W.: 3D Distance from a Point to a Triangle. Technical Report CSR-5-95. Department of Computer Science, University of Wales Swansea (1995) 14. Kilgard, M.J.: The OpenGL Utility Toolkit (GLUT) Programming Interface API Version 3 (1997), http://www.opengl.org/resources/libraries/glut/spec3/spec3.html 15. Kreyszig, P.: Wiley & Sons (Eds): Advanced Engineering Mathematics, 1145–1150 (1999) 16. Maes, F., et al.: Multimodality Image Registration by Maximization of Mutual Information. IEEE Transactions on Medical Imaging 16(2), 187–198 (1997) 17. M´ arquez, J., et al.: Laser-Scan Acquisition of Head Models for Dosimetry of HandHeld Mobile Phones. In: BioElectroMagnetics Society (eds.) Proceedings 22th Annual Meeting, Technical University, Technical University, Munich, Germany (2000) 18. M´ arquez, J., et al.: Construction of Human Head Models for Anthropometry and Dosimetry Studies of Hand-Held Phones. Revista Mexicana de Ingenier´ıa Biomedica XXI4, 120–128 (2000) 19. M´ arquez, J., Bloch, I., Schmitt, F.: Base Surface Definition for the Morphometry of the Ear in Phantoms of Human Heads. In: Proceedings 2nd IEEE International Symposium on Biomedical Imaging, Canc´ un, vol. 1, pp. 541–544 (2003) 20. M´ arquez, J., et al.: Shape-Based Averaging for Craniofacial Anthropometry. In: SMCC (eds.) IEEE Sixth Mexican International Conference on Computer Science ENC 2005, Puebla, pp. 314–319 (2005) 21. Payne, B.A., Toga, A.W.: Distance field manipulation of surface models. IEEE Computer Graphics and Applications 1, 65–71 (1992) 22. S´ anchez, H., Bribiesca, E.: Medida de similitud para objetos 2D y 3D a trav´es de una energ´ıa de transformaci´ on o ´ptima. Computaci´ on y Sistemas 7(1), 66–75 (2003) 23. S´egonne, F., et al.: A Hybrid Approach to the Skull Stripping Problem in MRI. Neuroimage 22, 1060–1075 (2004) 24. Satherley, R., Jones, M.W.: Hybrid Distance Field Computation. In: Springer Verlag (eds.) Volume Graphics, pp. 195–209 (2001) 25. Sud, A., Otaduy, M.A., Manocha, D.: DiFi: Fast 3D Distance Field Computation Using Graphics Hardware. EUROGRAPHICS 4, 557–566 (2004) 26. Verleysen, M.: Principal Component Analysis (PCA). PhD Thesis, Universite catholique de Louvain, Microelectronics laboratory, Belgium (2001) 27. Wyvill, B., Wyvill, G.: Field-Functions for Implicit Surfaces. University of California, Visual Computer 5 (1989) 28. Waldo, R.: Simplified Mesh Models of the Human Head. Computer Engineering MSc thesis, School of Engineering, Universidad Nacional Aut´ onoma de M´exico (2005)
Hierarchical Fuzzy State Controller for Robot Vision Donald Bailey1, Yu Wua Wong1, and Liqiong Tang2 1
Institute of Information Sciences and Technology, 2 Institute of Technology and Engineering Massey University, Palmerston North, New Zealand
[email protected],
[email protected],
[email protected]
Abstract. Vision algorithms for robot control are usually context dependent. A state based vision controller can provide both the computational and temporal context for the algorithm. A hierarchical layering enables one or more sub-objectives to be selected. To mimic human behaviour, it is argued that fuzzy logic is better able to manage the subjective data obtained from images. Fuzzy reasoning is used to control both the transitions between states, and also to directly control the behaviour of the robot. These principles are illustrated with an autonomous guide robot. Preliminary results indicate that this approach enables complex control and vision systems to be readily constructed in a manner that is both modular and extensible. Keywords: vision system, service robot, image processing, fuzzy control.
1 Introduction Vision is an important component of modern robotic systems. This is even more the case with mobile and autonomous robots because vision is the most sophisticated way of perceiving the environment [1,2]. As each type of robot has different functionality and different objectives, there is a wide range of different vision systems and there is no common method or technique to build vision systems for such robots. As computer vision methods are highly adapted to meet each robot's objectives, there are various adaptations to each vision system. The image processing methods used for each vision system can differ, but the aims of the vision systems usually coincide. There are at least four broad areas or purposes for which robot vision is used: 1. Navigation and path planning: Vision can be used to determine the location of obstacles and clear regions with the purpose of navigating a suitable path through the environment. This is particularly important if the environment is unknown (to the robot), or dynamic with other objects moving. 2. Location and mapping: Closely related to navigation is building and maintaining a map of the environment. Here, vision can be used to identify landmark features (both known and previously unknown) and by maintaining a database of such features (the map) the robot location may be determined through triangulation and ranging [3]. An existing map can be extended by adding the location of new landmarks relative to those already in the database. G. Sommer and R. Klette (Eds.): RobVis 2008, LNCS 4931, pp. 391–402, 2008. © Springer-Verlag Berlin Heidelberg 2008
3. Visual servoing: Vision is a valuable sense for providing feedback to a robot when performing its designated task [4]. When applied to a mobile robot itself, this comes under the previous two purposes. However, visual servoing also enables manipulators to be positioned accurately relative to the object being manipulated, and can be used to facilitate alignment. Thus vision can be used to enable a robot to achieve an improved precision over that which might otherwise be achieved. 4. Data gathering: The role of many autonomous mobile robots is to gather information about the environment in which they are placed. Such data gathering may be the primary purpose of the robot, or may be secondary, enabling the robot to achieve its primary task. For an autonomous robot to be practical in most situations, its movement must be continuous and at a sufficient speed to complete its given task in an acceptable time period [2]. Hence, there is often an emphasis on maximising image processing speed, while simultaneously reducing the size, weight, and power requirements of the vision system. Such constraints require simplifying the image processing where possible, without compromising on the functionality of the system. Interest in intelligent robot systems has increased as robots move away from industrial applications and towards service areas such as personal service in home and office, medicine, and public services [5]. In the service sector, there are many fields in which autonomous robots can be applied in; it can range from medical to fast food to home and domestic, as there are a variety of services and tasks that is unappealing or dangerous for humans, but are suitable for robots. For a truly autonomous robot to fulfil the necessary tasks in any service environment, it must be able to interact with people and provide an appropriate response. Hence, many robots have been designed to mimic human behaviour and form.
2 Autonomous Guide Robot Currently, we have an ongoing project to build an autonomous robot guide system, which aims to provide services for searching in an unknown office environment [6]. Given a name, the robot needs to be able to enter a corridor, search for office doors, and locate the office associated with the name. In terms of the broad task defined above, such a robot needs to be able to perform a number of sub-tasks: 1. Navigation and path planning: The robot needs to be able to move down the centre of the corridor, while avoiding people and other obstacles within the corridor. 2. Location and mapping: As the robot is searching, it builds an approximate “map” of the office environment. This will facilitate later searches along the same corridor. 3. Visual servoing: While this is related to navigation, the robot must be able to position itself appropriately relative to each door to enable the nameplate to be effectively and efficiently read. 4. Data gathering: As the main task is the location of a particular office, the data gathered at each step is the name on each office door. The image processing requirements of each of these tasks are context dependent. How an image is processed depends on the particular task being performed, which in
turn depends on which step the robot is at within its overall goal. While the image processing requirements for each individual sub-task are straight forward, we would like the system to be able to be extended easily as new tasks are added to the robot system. Without this as an explicit design requirement, there is a danger that the resultant vision system would be monolithic, making it difficult to adapt to new tasks. These objectives may be achieved by using a state-based controller [7]. A finite state machine is used to encapsulate the current state of the robot and vision system, effectively providing the context for the current sub-task. The current state represents the particular step being performed by the robot at the current time. As such, it is able to specify the particular vision algorithms that are relevant to the current step. This is the computational context of the particular step. The state machine can also be augmented with any historical data that has been gathered to provide the temporal context. This would enable it to effectively integrate the secondary mapping tasks with its primary goal of finding a particular office. Such a state based approach enforces a task oriented modular design on both the vision system and robot control. It also decouples the particular vision algorithms associated with each step from the control tasks, which are encapsulated within the state machine. Adding a new task consists of updating the finite state machine, or adding a new state machine that breaks the new task down to atomic steps. For each step, the vision algorithms are very specific, making them easier to design or develop. As many tasks can be broadly specified, and broken down into a number of simpler steps, or sub-tasks, it is convenient to use a layered or hierarchical approach to the design of the individual state machines [7,8,9,10]. By breaking down a complex task into a series of simpler tasks, the components or sub-tasks that recur may be encapsulated within and represented by separate state machines. This enables re-use of common components within a different, but related task. Such a breakdown also enables multiple concurrent tasks to be performed simultaneously, for example maintaining the position in the centre of the corridor, obstacle avoidance, searching for doors, and mapping. With parallel sub-tasks, it is important to be able to prioritise the individual parallel tasks in order to resolve conflicting control outputs. 2.1 Mimicry of Human Behaviour Another principle we used in the design of the guide robot system was to mimic human behaviour [6], particularly with regards to the way in which a person searches for an unknown office. If a person has already visited the office previously, they remember the approximate location of the office. This enables them to proceed directly in the right direction, using vision to confirm previously remembered landmarks and verify that they have located the correct office. In a similar manner, our robot system will use a previously constructed map if it recognises the name. Such a map can speed the search because not every door would need to be checked. The first door or two can be used to validate the previously constructed map, and the task then becomes a simpler one of map following rather than searching. When the destination is reached, the nameplate on the door is located and read to verify that the correct office has been found.
On the other hand, when a person enters an unknown corridor or building, they randomly choose a direction to move in. They will then move in the chosen direction and search for doors. At each door, they will read the nameplate and decide whether it is the correct office, depending on which office they are searching for. Similarly, the robot system will move through a corridor, search for doors, read the nameplate on each door, and decide whether it is the correct office. At the same time, the robot can construct a map of office locations along the corridor to speed later searches.

One of the characteristics of the human vision system is that it does not consciously look too deeply into the details. This is in contrast to conventional machine and robot vision systems. For example, a human identifies a door as a whole because of its general characteristics (e.g. it is rectangular, it has a nameplate and a door handle, etc.) and gauges whether the object is similar to a door. The human vision system also does not explicitly calculate the distance to the door from its relative position. Instead, it will gauge whether the door is far away or close, and whether it is on the left side or the right side. In contrast, conventional vision systems and the associated robot control tend to deal with details, and in particular with numerical details of data extracted from the image. It could be argued that much of this numerical precision is artificial and is not necessary for many control tasks. Everything, including any robot control variable, is calculated with precision according to the particular models used. In many cases, however, and this applies more so with computer vision, the models usually represent only a simplified approximation of reality and are therefore not as precise as the calculations imply. Fuzzy logic provides a way of managing such imprecision, and in particular provides a mechanism for reasoning with imprecise data in a way that more closely resembles the way in which humans reason about imprecise and subjectively defined variables.

2.2 Fuzzy Logic

Fuzzy sets, introduced by Zadeh in 1965 [11], are a generalisation of conventional set theory. They are a mathematical way to represent the subjectiveness and imprecision of everyday life. Even when quantities can be measured precisely, the subjective interpretation of that value is often imprecise, and may well be context dependent. A quantitative measurement may be mapped to a qualitative or subjective description (a fuzzy set) through a membership function. Such membership functions are often overlapping and provide an interpretation of the quantity. Fuzzy logic is derived from fuzzy sets, and provides a mathematical basis for reasoning or deriving inferences from fuzzy sets based on rules. Each rule consists of a set of conditions and produces an inference. Fuzzy operators are used to evaluate the simultaneous membership of the conditions, with the resulting membership evaluation associated with the inference. All applicable rules are evaluated, with the results combined to select one or more actions. A crispening function is usually applied to convert the fuzzy output to a binary membership or a numerical control variable for the action.
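As a concrete illustration of these ideas, the sketch below shows how a measured quantity might be mapped onto overlapping fuzzy sets and how a rule's conditions could be combined with fuzzy operators. It is only a minimal example: the membership shapes, breakpoints, variable names, and the single rule are assumptions for illustration, not the authors' implementation.

```python
# Minimal fuzzy-set sketch (illustrative only, not the authors' implementation).

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises a->b, flat b->c, falls c->d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Overlapping fuzzy sets for a hypothetical "door edge position" (pixels).
door_edge_sets = {
    "far_left": lambda x: trapezoid(x, -1, 0, 60, 120),
    "left":     lambda x: trapezoid(x, 60, 120, 200, 280),
    "centre":   lambda x: trapezoid(x, 200, 280, 360, 440),
}

def fuzzy_and(*memberships):   # conjunction = minimum
    return min(memberships)

def fuzzy_or(*memberships):    # disjunction = maximum
    return max(memberships)

# One rule: IF door_edge IS left AND pan_angle IS front_left THEN move forward fast.
door_edge_px = 150.0
pan_front_left = 0.7           # assumed membership from another variable
rule_strength = fuzzy_and(door_edge_sets["left"](door_edge_px), pan_front_left)
print(rule_strength)           # membership value attached to the rule's action
```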
Within robotics, fuzzy logic can be used for any of the control activities (see for example [12,13]). We are using it both to select the next state within the associated state machine, and to select the robot actions to perform within the current state. This flow is illustrated in Fig. 1. The current state within the state machine associated with a particular sub-task provides the context for this activity. Therefore the current state specifies (in practice, selects from a library) both the image processing algorithm to apply to the input, and the fuzzy rule set to be used. Any numerical outputs from the vision processing are fuzzified, as are any other relevant inputs to the current processing, such as the camera angle. The fuzzy rules are applied to the current inputs, with the results used to drive any actions to be performed by the robot in the current state and, where necessary, to select the next state.
Fig. 1. The working of the fuzzy state control: the current state selects the rule set and imaging algorithm; these are applied to the camera image and other inputs to generate the next state and the robot action
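The loop summarised in Fig. 1 can be sketched as follows. This is a minimal sketch under assumed names: the placeholder imaging and rule functions and the state table are hypothetical, not the authors' code.

```python
# Sketch of the fuzzy state control loop of Fig. 1.
# All names and the toy "imaging" / "rule" functions are hypothetical placeholders.

def imaging_scan_left(image):
    # Placeholder imaging algorithm: pretend to return a door-edge position.
    return {"door_edge": 150.0}

def rules_scan_left(inputs, state):
    # Placeholder rule set: returns (actions, next_state).
    if inputs["door_edge"] < 200.0:
        return ["pan_left"], "align_with_door"
    return ["forward_slow"], state

STATE_TABLE = {
    # state -> (imaging algorithm, fuzzy rule set)
    "scan_left": (imaging_scan_left, rules_scan_left),
}

def control_step(state, image):
    """One pass of the loop in Fig. 1: the current state selects both the imaging
    algorithm and the rule set; their outputs drive actions and the next state."""
    algorithm, rule_set = STATE_TABLE[state]
    measurements = algorithm(image)
    actions, next_state = rule_set(measurements, state)
    return actions, next_state

actions, next_state = control_step("scan_left", image=None)
print(actions, next_state)
```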
3 System Design

3.1 Imaging System

The system is designed to use a lightweight and relatively low-cost FireWire camera to facilitate interfacing to the control computer (a lightweight laptop) on the robot. Such cameras tend to be low resolution, typically 640 × 480 pixels, and usually have a moderate angle of view. When the robot is positioned approximately in the centre of the corridor, it needs to be able to easily and unambiguously read the nameplate on the door. To clearly resolve the individual letters requires a resolution of approximately 1 mm per pixel on the nameplate. While the resolution can be slightly less, this provides a margin to allow for inevitable positioning errors of the robot
within the corridor. However, at this resolution, a typical office door (which is about 0.8 m wide) does not fit within the field of view; at 1 mm per pixel, the 640 pixel image width spans only about 0.64 m. Our first thought, of detecting the four edges of the door (left, right, top and bottom) and using these to estimate the perspective, and hence the relative position between the robot and the door, is therefore impractical. Unless the door is viewed obliquely, there is insufficient resolution to view both sides of the door simultaneously, let alone detect the angle of the door top and bottom. Also, at such an angle, even if the perspective could be corrected, the resolution of the lettering on the nameplate makes it impractical to read reliably.

To turn the limited field of view to our advantage, we need to realise that we are not just processing a single image of each door, but are able to capture many images as the robot moves past. By recognising key features within the image, we are able to use the robot motion, combined with panning of the camera, to servo the robot into the correct position for reading the nameplate. The state machine is then able to record the context consisting of the previous steps leading up to the current step, allowing us to focus the image processing on a single step at a time. This also enables us to simplify the image processing, because we do not need to measure with high precision, and do not necessarily require the whole object to be within the field of view to make useful inferences.

3.2 Robot Capabilities

Because the camera is on a mobile platform, it is necessary to take into consideration the actions that the robot may perform to modify the viewpoint. This is particularly important because we are using visual servoing to position the robot appropriately for reading the doors. The robot itself is wheeled, and is able to move forwards or backwards. Steering is accomplished by differentially driving its wheels. A camera is mounted on the robot, on a pan-tilt base. This allows the camera angle to change as the robot moves, and enables the robot to read the nameplates without having to physically stop and turn the whole robot. The scanning camera is mounted at a height of approximately 1.4 m above the ground. This is sufficiently high to enable an image of the nameplate to be captured free of perspective. By having the camera lower than a typical nameplate, perspective cues may also be used to assist servoing. Under consideration is the use of a second, fixed camera looking forward. This would aid in navigation, and would save having to continuously pan the search camera for this purpose.

3.3 Door Search State Machine

Fig. 2 shows our state machine for locating and reading the door nameplates. The state machine entry conditions assume that the robot is facing along a corridor, positioned approximately in the centre. A separate state machine is used to provide the initial positioning of the robot, and also to maintain the robot approximately centred in the corridor as it travels. Separate state machines are also used to negotiate corners and other obstacles.
[Fig. 2 states: (c) scan for next door on left; (d) scan for next door on right; (e) find door edge with blank wall; (f) use door edge to align with door; (g) use nameplate to align with door; (h) read nameplate; (i) add door to map; exit to negotiate corner; exit when task completed.]
Fig. 2. State transition diagram for searching for a door. First the door edge is detected and this is used to align the robot approximately. Then the nameplate is detected and used to position the robot more carefully. Once positioned, the nameplate can be read.
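For illustration, the transitions sketched in Fig. 2 could be recorded in a simple table. The encoding below is hypothetical: the event names and some transitions (such as the alternation between left and right scanning) are assumptions, not a transcription of the authors' state machine.

```python
# Hypothetical encoding of the door-search state machine of Fig. 2.
# Each state maps the events considered by its fuzzy rules to a next state.
DOOR_SEARCH_FSM = {
    "scan_left":            {"door_found_left": "find_door_edge",
                             "none": "scan_right"},
    "scan_right":           {"door_found_right": "find_door_edge",
                             "none": "scan_left",
                             "end_of_corridor": "EXIT_negotiate_corner"},
    "find_door_edge":       {"edge_located": "align_with_edge"},
    "align_with_edge":      {"aligned": "align_with_nameplate"},
    "align_with_nameplate": {"aligned": "read_nameplate",
                             "no_nameplate": "add_door_to_map"},
    "read_nameplate":       {"read_ok": "add_door_to_map"},
    "add_door_to_map":      {"target_office": "EXIT_task_completed",
                             "not_target": "scan_left"},
}
```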
The first step is to scan the corridor looking for doors along the left and right walls. Which of the two states is entered first depends on the initial orientation of the camera. The primary focus of this scanning step is not to map the corridor, but to determine on which side of the corridor the nearest door is located. Note also that the
state machine is primarily action and processing focused. Additional state information is also maintained that is not directly apparent in the state machine. For example, after first scanning the left side of the corridor for the nearest door (state c), the right side is then scanned (state d). The approximate position of any door found in state c is maintained in state d.

To correctly read the nameplate, the robot must be positioned approximately perpendicular to the door. The first step in accomplishing this is to use the edge of the door to position the robot in such a way that it can obtain a clear view of the nameplate (state f). The nameplate is then used to position the robot such that it can read the nameplate reliably (state g). Precise alignment is not critical in this application; there is a relatively broad range of positions from which the nameplate may be read. Splitting the processing in this way greatly simplifies the image processing tasks.

In state f, the critical aspect is detecting the edge of the door. At a moderate to close range, this edge will fill all of the field of view. A relatively simple contrast enhancement followed by a linear vertical edge detector is sufficient to detect the door. There may also be additional edges from objects such as notice boards, or shadows from lighting in the corridor. These may be distinguished by placing limits on the linearity of the edges, the length of the edge, and the hue contrast across the edge. The context is used to distinguish between the near and far edges of the door. The goal of this step is to position the near edge of a door on the left side of the corridor within the left edge of the image (and the converse for doors on the right side of the corridor). This ensures that the nameplate will be completely visible within the image.

In state g the focus shifts from the door edge to the nameplate. At this stage, most of the image should be filled with the door. The nameplate provides a strong contrast against this, making detection relatively easy; the nameplate is identified as a rectangular object within specified area, aspect ratio, and angle limits. These limits eliminate miscellaneous objects such as door notice boards and other “decorations” from consideration. If the door is unlabelled, wide open, or the nameplate is unable to be distinguished for other reasons, the robot abandons this door (progressing to state i). Otherwise the angle of the nameplate, and its position within the field of view, are used to position the robot in the centre of the door in front of the nameplate.

The high contrast of the letters on the nameplate, combined with the elimination of skew and perspective by robot positioning, makes the nameplate relatively easy to read. A combination of syntactic pattern recognition and fuzzy template matching is used for optical character recognition. The nameplate is parsed, and the individual line segments that make up each character are compared with a preset template. If necessary, multiple successive images of the nameplate may be parsed from slightly different viewpoints (as the robot moves past the door), with majority voting used to improve the reliability of the recognition.

3.4 Membership Functions and Fuzzy Rules

Most of the image processing steps do not require great precision. For example, the position of the edge of the door within the image is used primarily as a cue for detecting the nameplate.
Similarly, the position of the centre of the nameplate within the image, combined with the pan angle of the camera relative to the robot, is used for
alignment. Since the camera is lower than the nameplate, if the robot is not directly perpendicular to the nameplate, the nameplate will appear at an angle within the image. If necessary, the camera can be tilted further to exaggerate this effect. The fuzzy membership functions for these variables are illustrated in Fig. 3. (Note that these are only some of the variables used.)
[Fig. 3 variables: nameplate angle; position of nameplate within image; position of door edge within image; camera pan angle.]
Fig. 3. Graphical illustration of some of the key fuzzy membership functions
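The membership functions of Fig. 3 are defined graphically in the paper; the sketch below merely illustrates how two of them might be coded. The triangular shape and all numeric breakpoints are assumptions for this example.

```python
# Illustrative membership functions for two of the Fig. 3 variables.
# Shapes and breakpoints are assumed; the paper defines them graphically.

def tri(x, a, b, c):
    """Triangular membership peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Position of the door edge within the image (normalised: 0 = far left, 1 = far right).
door_edge = {
    "far_left":  lambda x: tri(x, -0.25, 0.0, 0.35),
    "left":      lambda x: tri(x, 0.0, 0.35, 0.6),
    "right":     lambda x: tri(x, 0.4, 0.65, 1.0),
    "far_right": lambda x: tri(x, 0.65, 1.0, 1.25),
}

# Camera pan angle (degrees, 0 = straight ahead, negative = left).
pan_angle = {
    "left":       lambda a: tri(a, -120.0, -90.0, -55.0),
    "left_front": lambda a: tri(a, -90.0, -55.0, -20.0),
    "front_left": lambda a: tri(a, -55.0, -20.0, 0.0),
    "front":      lambda a: tri(a, -20.0, 0.0, 20.0),
}

print(door_edge["left"](0.3), pan_angle["front_left"](-25.0))
```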
As indicated previously, there are two types of rules: those that result in robot actions, and those that result in state transitions. Examples of some of the rules used within state f are given here:

IF door_edge IS far_left AND pan_angle IS left_front THEN SET_STATE = 5
IF door_edge IS left AND pan_angle IS front_left THEN ACTION robot(forward, fast)
IF door_edge IS left AND pan_angle IS left_front THEN ACTION robot(forward, medium), ACTION pan(left)

All of the rules are evaluated, with the AND operation selecting the minimum of the corresponding membership values, and an OR operation selecting the maximum. A NOT subtracts the membership value from 1. The derived membership value from
the rule is then associated with the rule’s action or consequence. Where multiple rules give the same action, the results are ORed. To derive a control action, each fuzzy output class is associated with a corresponding crisp numerical control value. Where there are multiple conflicting actions, for example move forward fast and move forward slow, the crisp control values associated with each class are weighted by the calculated membership values to determine the final control value. In the case of state transitions, any transition with a membership value less than 0.5 is discarded. If multiple conflicting transitions result, then the transition with the largest membership value is selected. Ties are broken by the precedence determined by the order in which the rules are evaluated, with earlier rules given higher priority. This simplifies the programming, as only one state transition needs to be retained while evaluating the rules. The “next state transition” is initialised to the current state with a fuzzy membership of 0.5. As the state transition rules are evaluated, any rule that gives a higher membership value for a transition overwrites the previous transition.
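The combination scheme just described can be sketched as follows. The rule contents, membership values and crisp speed values are illustrative assumptions; only the min/max combination, the weighted-average defuzzification, and the 0.5 threshold on state transitions follow the description above.

```python
# Sketch of the rule combination and defuzzification scheme described above.
# Rule contents, membership values and crisp speeds are illustrative assumptions.

rules = [
    # (condition memberships, kind, consequence)
    ({"door_edge_left": 0.7, "pan_front_left": 0.4}, "action", ("forward", "fast")),
    ({"door_edge_left": 0.7, "pan_left_front": 0.6}, "action", ("forward", "medium")),
    ({"door_edge_far_left": 0.2, "pan_left_front": 0.6}, "transition", "state_e"),
]

CRISP_SPEED = {"fast": 1.0, "medium": 0.5, "slow": 0.2}   # assumed crisp values

action_strength = {}          # OR (max) of rules giving the same action
next_state, next_membership = "current_state", 0.5        # initialised at 0.5

for conditions, kind, consequence in rules:
    strength = min(conditions.values())                   # AND = minimum
    if kind == "action":
        key = consequence
        action_strength[key] = max(action_strength.get(key, 0.0), strength)
    elif kind == "transition" and strength > next_membership:
        next_state, next_membership = consequence, strength

# Weighted average of conflicting forward-speed actions gives the crisp command.
num = sum(CRISP_SPEED[speed] * w for (act, speed), w in action_strength.items()
          if act == "forward")
den = sum(w for (act, _), w in action_strength.items() if act == "forward")
forward_speed = num / den if den > 0 else 0.0
print(forward_speed, next_state)
```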
4 Results

The concept of fuzzy state control was tested by implementing, in MATLAB, the rules and vision algorithms for the state machine described in the previous section. For input, a large number of images along a corridor were taken at several different camera angles. From a particular view, the fuzzy reasoning was applied to determine the actions to take. These actions were then used to select the next view within the sequence. While the spacing between the views is limited, only a small range of pan angles is available, and the temporal resolution is considerably lower than that on a real robot, this was sufficient to check that the robot could servo onto the door location and successfully read the nameplate. It also enabled the rule set to be tested and debugged for each combination of input variables – something that cannot easily be accomplished when running in real time with the robot.

Our initial tests have verified that doors within corridors can be identified, and that the position of the door can be “measured” relative to the robot position, in a similar manner to human vision. At this stage, we have developed only the door location state machine. For a fully working system, it is also necessary to develop the state machines that maintain the robot path (including obstacle avoidance) and negotiate corners. The mapping algorithms also need to be developed further. In the case of mapping, it is not necessary to create a precise map; it is sufficient to record the sequence of doors along the corridor with approximate measures of distance (such as can be derived from a sensor on the robot wheel). The next step is to implement the system on a physical robot and test the hierarchical fuzzy state controller in practice.
5 Conclusions

A robot vision system using fuzzy logic control to identify objects in real time has been presented. The system is able to search for and detect doors and their positions as the robot
navigates within a corridor. The research contributions of this paper are the concept of using a state-based system to maintain context, and combining it with a fuzzy logic controller to mimic the low-level, imprecise reasoning of humans in order to simplify the vision tasks for a mobile service robot. The use of fuzzy logic means that the data extracted from the images does not need to be precise, enabling less complex image processing operations and algorithms to be employed. This enables real-time operation on a relatively low-power computing platform.

We have demonstrated a proof of concept of this approach for object searching and identification, through locating a specified office along a corridor. While only a single sub-task is described in detail in this paper, we believe that this concept can be readily extended to more complex tasks through the use of multiple concurrent state machines. A hierarchical approach would allow separate state machines to be developed for common tasks, with one state machine effectively calling another to handle those tasks. This enables code reuse, and provides an extensible mechanism by which complex tasks may be decomposed. Optimising the design of such vision systems, which mimic the behaviour of the human vision system, requires more in-depth research; however, this paper outlines the key concepts and illustrates them with a simple example.
References

1. Cokal, E., Erden, A.: Development of an image processing system for a special purpose mobile robot navigation. In: Proceedings of the 4th Annual Conference on Mechatronics and Machine Vision in Practice, pp. 246–252 (1997)
2. Barnes, J., Liu, Z.Q.: Embodied computer vision for mobile robots. In: 1997 IEEE International Conference on Intelligent Processing Systems, pp. 1395–1399 (1997)
3. Spero, D.J., Jarvis, R.A.: On localising an unknown mobile robot. In: The Second International Conference on Autonomous Robots and Agents, pp. 70–75 (2004)
4. Jafari, S., Jarvis, R.: Relative visual servoing: a comparison with hybrid intelligent models. In: The Second International Conference on Autonomous Robots and Agents, pp. 328–332 (2004)
5. Asada, T., et al.: Developing a service robot. In: Proceedings of the IEEE International Conference on Mechatronics & Automation, pp. 1057–1062 (2005)
6. Wong, Y.W., Tang, L., Bailey, D.G.: Vision system for a guide robot. In: 4th International Conference on Computational Intelligence, Robotics and Autonomous Systems (2007)
7. Sen Gupta, G., Messom, C.H., Sng, H.L.: State transition based supervisory control for a robot soccer system. In: 1st IEEE International Workshop on Electronic Design, Test and Applications, pp. 338–342 (2002)
8. Brooks, R.A.: A robust layered control system for a mobile robot. IEEE J. of Robotics and Automation 1, 1–10 (1986)
9. Ahmad, O., et al.: Hierarchical, concurrent state machines for behavior modeling and scenario control. In: Proceedings of the Fifth Annual Conference on AI, Simulation and Planning in High Autonomy Systems, pp. 36–41 (1994)
10. Sen Gupta, G., Messom, C.H., Demidenko, S.: State transition based (STB) role assignment and behaviour programming in collaborative robotics. In: The Second International Conference on Autonomous Robots and Agents, pp. 385–390 (2004)
11. Zadeh, L.A.: Fuzzy sets. Information and Control 8, 338–353 (1965)
12. Wang, Y.L., et al.: Intelligent learning technique based on fuzzy logic for multi-robot path planning. J. of Harbin Institute of Technology 8(3), 222–227 (2001)
13. Shahri, H.H.: Towards autonomous decision making in multi-agent environments using fuzzy logic. In: Mařík, V., Müller, J.P., Pěchouček, M. (eds.) CEEMAS 2003. LNCS (LNAI), vol. 2691, pp. 247–257. Springer, Heidelberg (2003)
A New Camera Calibration Algorithm Based on Rotating Object

Hui Chen¹, Hong Yu², and Aiqun Long¹

¹ School of Information Science and Engineering, Shandong University, Jinan, 250100, China
{huichen, longaiqun}@sdu.edu.cn
² Yantai Normal University, Yantai, Shandong, China
[email protected]
Abstract. In 3D vision measurement, camera calibration is necessary for extracting metric information from 2D images. We present a new algorithm for camera calibration that only requires a spatial triangle of known size. In our method, the camera is fixed while the triangle is rotated freely about one of its vertices. By taking a few (at least two) pictures of the rotating object from an identical camera position and viewing direction, the camera intrinsic matrix can be obtained. The method improves on traditional approaches in that the extrinsic parameters are separated from the intrinsic ones, which simplifies the problem and gains in efficiency and accuracy. Experiments also show that our method is robust and achieves high accuracy. Keywords: camera calibration; pinhole camera model; intrinsic matrix.
1 Introduction
Recovering 3D information from 2D images is a fundamental problem in computer vision. The recovery of metric information requires knowledge of the camera parameters – this is known as camera calibration. Camera calibration is widely used in reverse engineering, object recognition, the synthesis of virtual environments, etc. [1]. The techniques of camera calibration can be roughly classified into three categories: self-calibration, active-vision-based camera calibration, and traditional calibration. Self-calibration does not need a calibration object. It achieves calibration using only a moving camera to take several images and find corresponding points. As the technique is based on the absolute conic and conicoid, it is very flexible but not yet mature. Since there are many parameters to estimate, it cannot always obtain reliable results [7], [8], [9]. Active-vision-based camera calibration requires the movement of the camera to be exactly known [10], [11]. This technique needs precise equipment to track the moving camera, which is expensive. The traditional way of calibration is still the most widely used technique. It is performed by observing a calibration object
whose size or geometry is known, where images of the calibration object are used to acquire the relation between spatial points and their image correspondences for solving the calibration [2,3,4,5,6]. By reasonable design of the calibration object, precise results can be obtained. The technique presented here belongs to the traditional category. The calibration object is designed as a spatial triangle that can be easily made with fine precision and at little cost. In the classical methods, the calibration object is fixed while the camera is moved. However, this couples the changing extrinsic parameters with the estimation of the intrinsic matrix. In our method, the camera is fixed and one vertex of the calibration object is fixed, while the object is rotated freely about that fixed point. In doing so, we can solve for the intrinsic matrix directly using a few (at least two) calibration images.

The paper is organized as follows: Section 2 introduces the basic principles and notations. Section 3 describes the calibration object and explains how to solve the calibration problem in detail. Section 4 provides experimental results with both computer-simulated data and real images. Finally, Section 5 concludes the paper with perspectives on this work.
2 Camera Model and Geometry
We first introduce the pinhole mathematical model and the mapping relation between 3D and 2D points with the notations used in this paper. The pinhole camera model is shown in Fig. 1.
Fig. 1. Pinhole Camera Model
There are three coordinate systems in the model: the world coordinate system $X_W Y_W Z_W$, the camera coordinate system $X_C Y_C Z_C$, and the image coordinate system $UVO$. The mapping relation between a space point $P(X_W, Y_W, Z_W)$ and its corresponding image point $p(u, v)$ is given by

$$s\tilde{p} = s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K[R, T]\begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} = \begin{bmatrix} f_u & \gamma & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix}[R, T]\begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix}. \quad (1)$$
where $s$ is an arbitrary scale factor; $K$ is called the camera intrinsic matrix, with $(u_0, v_0)$ the coordinates of the principal point, $f_u$ and $f_v$ the scale factors along the image $U$ and $V$ axes, and $\gamma$ the parameter describing the skew between the two image axes; and $(R, T)$, called the extrinsic parameters, is the rotation and translation relating the world coordinate system to the camera coordinate system. The task of camera calibration is to determine the five intrinsic parameters in $K$. We use the abbreviation $K^{-T}$ for $(K^{-1})^T$ or $(K^{-T})^{-1}$, and $\tilde{x}$ to denote the augmented vector obtained by adding 1 as the last element: $\tilde{p} = [u\ v\ 1]^T$. Considering that the world coordinate system can be placed anywhere when the camera is fixed while the calibration object moves, we set it to coincide with the camera coordinate system. Thus $R = I$ and $T = 0$, and Eq. (1) becomes

$$Z_W\,\tilde{p} = Z_W \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} = KP. \quad (2)$$

The calibration method of this paper is based on the above relationship.
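The following short sketch simply evaluates Eq. (2) numerically under the paper's assumption $R = I$, $T = 0$; the intrinsic values and the 3D point are arbitrary examples, not values from the paper.

```python
# Numerical illustration of Eq. (2): projection of a 3D point expressed in the
# camera frame (R = I, T = 0). The intrinsic values below are arbitrary examples.
import numpy as np

K = np.array([[1000.0,    0.0, 320.0],   # f_u, gamma, u_0
              [   0.0, 1000.0, 240.0],   # 0,   f_v,   v_0
              [   0.0,    0.0,   1.0]])

P = np.array([0.10, 0.35, 1.50])          # (X_W, Y_W, Z_W)

p_tilde = K @ P / P[2]                    # Z_W * p~ = K P  =>  p~ = K P / Z_W
u, v = p_tilde[:2]
print(u, v)                               # pixel coordinates of the projection
```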
3 Description of Calibration Model and Solving Calibration Problem
As shown in Fig. 2, the calibration object is a spatial triangle $AB_1B_2$ with known lengths of its three sides. One of its vertices, $A$, is fixed, and the triangle can be rotated freely about this fixed point. $C_1$ and $C_2$ are two points on $AB_1$ and $AB_2$ whose positions on the sides can be measured. In one image, we need 3 parameters to define the fixed point $A$ (its space coordinates), and another 3 parameters to define $B_1$. Considering that the length of $AB_1$ is known, we actually only need another 2 parameters to define the point $B_1$. Similarly, point $B_2$ can be defined by 2 parameters given the length of $AB_2$. Because the length of $B_1B_2$ is also known,
Fig. 2. Calibration Object
with this information, only 1 parameter is enough to define $B_2$. Thus there are $8 + 3N$ unknown parameters for $N$ images: the first 8 comprise the 3 parameters defining the fixed point $A$ plus the 5 intrinsic parameters, and the remaining $3N$ are the parameters defining the moving points $B_1$ and $B_2$. By Eq. (2), each point in an image contributes 2 equations, so the five feature points give a total of $2 + 8N$ equations in $N$ images: the fixed point contributes 2 equations and the other four moving points contribute $8N$. Considering the collinearity of the image points $a, c_1, b_1$ and of $a, c_2, b_2$, there are actually $2 + 6N$ independent equations in $N$ images. In theory, therefore, with two or more observations ($2 + 6N \geq 8 + 3N$, i.e. $N \geq 2$) we obtain enough information to solve for all the unknown parameters. Next, we explain the calibration process in detail.

Denoting the squared lengths of the three sides of the calibration triangle by $l_1$, $l_2$, and $l_3$, we have

$$(B_1 - A)^T(B_1 - A) = l_1 \quad (3)$$
$$(B_2 - A)^T(B_2 - A) = l_2 \quad (4)$$
$$(B_2 - B_1)^T(B_2 - B_1) = l_3. \quad (5)$$

The positions of $C_1$, $C_2$ on the sides $AB_1$, $AB_2$ can be measured as

$$C_i = \lambda_{Ai} A + \lambda_{Bi} B_i \quad (6)$$

where $\lambda_{Ai} + \lambda_{Bi} = 1$ ($i = 1, 2$). Substituting the coordinates of $A$, $B_1$, $C_1$ into (2) and (6), it follows that

$$Z_{B_1} = -\frac{Z_A\,\lambda_{A1}\,(\tilde{a} \times \tilde{c}_1)\cdot(\tilde{b}_1 \times \tilde{c}_1)}{\lambda_{B1}\,(\tilde{b}_1 \times \tilde{c}_1)\cdot(\tilde{b}_1 \times \tilde{c}_1)}. \quad (7)$$

Assume

$$P(\tilde{a}, \tilde{b}, \tilde{c}, \lambda_1, \lambda_2) = \frac{\lambda_1\,(\tilde{a} \times \tilde{c})\cdot(\tilde{b} \times \tilde{c})}{\lambda_2\,(\tilde{b} \times \tilde{c})\cdot(\tilde{b} \times \tilde{c})}$$

where $\tilde{a}$, $\tilde{b}$, $\tilde{c}$ are $3 \times 1$ vectors and $\lambda_1$, $\lambda_2$ are constants, and let

$$h_1 = \tilde{a} + P(\tilde{a}, \tilde{b}_1, \tilde{c}_1, \lambda_{A1}, \lambda_{B1})\,\tilde{b}_1.$$

Combining (2) and (7), Eq. (3) becomes

$$Z_A^2\, h_1^T K^{-T} K^{-1} h_1 = l_1. \quad (8)$$

Similarly, (4) and (5) can be written as

$$Z_A^2\, h_2^T K^{-T} K^{-1} h_2 = l_2 \quad (9)$$
$$Z_A^2\, h_3^T K^{-T} K^{-1} h_3 = l_3 \quad (10)$$

where

$$h_2 = \tilde{a} + P(\tilde{a}, \tilde{b}_2, \tilde{c}_2, \lambda_{A2}, \lambda_{B2})\,\tilde{b}_2$$

and

$$h_3 = P(\tilde{a}, \tilde{b}_2, \tilde{c}_2, \lambda_{A2}, \lambda_{B2})\,\tilde{b}_2 - P(\tilde{a}, \tilde{b}_1, \tilde{c}_1, \lambda_{A1}, \lambda_{B1})\,\tilde{b}_1.$$

The above deduction shows that the three equations (8), (9) and (10) can be obtained from one calibration image, each of which has 6 unknown parameters. This means that, in theory, we can get enough information to solve the calibration problem from only a few (at least two) calibration images.

Since $K^{-T}K^{-1}$ is a symmetric matrix, we can define

$$M = K^{-T}K^{-1} = \begin{bmatrix} m_1 & m_2 & m_3 \\ m_2 & m_4 & m_5 \\ m_3 & m_5 & m_6 \end{bmatrix},$$

let

$$m = [m_1, m_2, m_3, m_4, m_5, m_6]^T, \quad h_i = [h_{i1}, h_{i2}, h_{i3}]^T, \quad x = [x_1, x_2, x_3, x_4, x_5, x_6]^T,$$

and let $x = Z_A^2 m$. Combining (8), (9) and (10) yields

$$[v_{11}, v_{22}, v_{33}]^T x = [l_1, l_2, l_3]^T \quad (11)$$

where

$$v_{ij} = [h_{i1}h_{j1},\; h_{i1}h_{j2} + h_{i2}h_{j1},\; h_{i3}h_{j1} + h_{i1}h_{j3},\; h_{i2}h_{j2},\; h_{i2}h_{j3} + h_{i3}h_{j2},\; h_{i3}h_{j3}].$$

From $N$ calibration images, we obtain $N$ such linear equations as (11). Altogether they have the special form

$$Vx = L \quad (12)$$

where

$$V = \left[ v_{11}^{1},\; v_{22}^{1},\; v_{33}^{1},\; \cdots,\; v_{11}^{N},\; v_{22}^{N},\; v_{33}^{N} \right]^T$$

and $L = [l_1, l_2, l_3, \cdots, l_1, l_2, l_3]^T$. The size of $V$ is $3N \times 6$, and that of $L$ is $3N \times 1$. When $N \geq 2$, the least-squares solution is given by

$$x = (V^T V)^{-1} V^T L.$$

Once $x$ is estimated, we can obtain the intrinsic parameters and $Z_A$ without difficulty as follows:

$$v_0 = (x_2 x_3 - x_1 x_5)/(x_1 x_4 - x_2^2)$$
$$Z_A = \sqrt{x_6 - \left[x_3^2 + v_0(x_2 x_3 - x_1 x_5)\right]/x_1}$$
$$f_u = Z_A/\sqrt{x_1}$$
$$f_v = Z_A\sqrt{x_1/(x_1 x_4 - x_2^2)}$$
$$\gamma = -x_2 f_u^2 f_v / Z_A^2$$
$$u_0 = \gamma v_0/f_v - x_3 f_u^2/Z_A^2.$$

At this point, the features $A$, $B_1$, $B_2$, $C_1$, $C_2$ can be computed from Eqs. (2), (6) and (7). In the real case, due to noise in the calibration images, the extracted feature points will not comply exactly with Eq. (2). Therefore, it is not sufficient to use only Eq. (12) to estimate the unknown parameters; optimization of the initial result should be performed. The optimization model is given by

$$E = \min \sum_{i=1}^{N} \left( \|\tilde{a}_i - \phi(K, A)\|^2 + \|\tilde{b}_{1i} - \phi(K, B_{1i})\|^2 + \|\tilde{b}_{2i} - \phi(K, B_{2i})\|^2 + \|\tilde{c}_{1i} - \phi(K, C_{1i})\|^2 + \|\tilde{c}_{2i} - \phi(K, C_{2i})\|^2 \right)$$

where $\phi(K, P) = (1/Z_P)KP$. The above minimization is a nonlinear problem that can be solved by using the Levenberg–Marquardt algorithm [12]. It requires an initial guess of $K$, $A$, $B_{1i}$, $B_{2i}$, $C_{1i}$, and $C_{2i}$, which can be obtained by the technique described in this section.
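A compact numerical sketch of the linear step is given below. It assumes the rows of $V$ and the vector $L$ have already been assembled from the $h_i$ vectors of each image, and simply applies the closed-form expressions above (including the $u_0$ formula as reconstructed here); it is not the authors' code.

```python
# Sketch of the linear solution step (Eqs. (11)-(12)) and the closed-form
# extraction of the intrinsic parameters. It assumes the rows of V and the
# vector L have already been assembled from the h_i vectors of each image.
import numpy as np

def solve_intrinsics(V, L):
    """V: (3N, 6) coefficient matrix, L: (3N,) squared side lengths."""
    x, *_ = np.linalg.lstsq(V, L, rcond=None)   # least squares: x = (V^T V)^-1 V^T L
    x1, x2, x3, x4, x5, x6 = x

    v0 = (x2 * x3 - x1 * x5) / (x1 * x4 - x2**2)
    ZA = np.sqrt(x6 - (x3**2 + v0 * (x2 * x3 - x1 * x5)) / x1)
    fu = ZA / np.sqrt(x1)
    fv = ZA * np.sqrt(x1 / (x1 * x4 - x2**2))
    gamma = -x2 * fu**2 * fv / ZA**2
    u0 = gamma * v0 / fv - x3 * fu**2 / ZA**2

    K = np.array([[fu, gamma, u0],
                  [0.0,   fv, v0],
                  [0.0,  0.0, 1.0]])
    return K, ZA
```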
4 Experimental Results
The proposed algorithm has been tested on both computer-simulated data and real data.

4.1 Computer Simulation
In order to make the simulation general, we set the simulated camera parameters as follows:

(1) The image resolution is 640 × 480, with $u_0 = 320$, $v_0 = 240$, $f_u = 1000$, $f_v = 1000$, $\gamma = 0$.
(2) The simulated calibration object is as shown in Fig. 2: an equilateral triangle with a side length of 100 cm. $C_1$ and $C_2$ are the centre points of $AB_1$ and $AB_2$, so $\lambda_{A1} = \lambda_{B1} = \lambda_{A2} = \lambda_{B2} = 0.5$, and the fixed point $A$ is set at $[0, 35, 150]^T$.
(3) The movement of the simulated object is set randomly, as in the method of [6].
(4) In order to check the robustness to noise, Gaussian noise with zero mean and standard deviation from 0.0 to 1.0 pixel is added to the simulated feature points.

The simulation experiment mainly examines the effect on the calibration result of altering the number of calibration images and the noise level. Let the number of images
Fig. 3. Calibration errors (a) with respect to the number of images (b) with respect to the noise level
range from 2 to 40 and the noise level from 0.1 to 1.0. For each noise level or image number, we perform 40 independent trials; the average results are shown in Fig. 3. Fig. 3 shows that, for a given noise level, the calibration error is reduced by increasing the number of calibration images, and the calibration result tends to converge when the number of images exceeds 20. Conversely, for a given number of calibration images, the calibration error increases with the noise level; when the noise level is less than 0.7, the relative error changes slowly and is small enough to be acceptable. From the simulation results, we can see that the proposed calibration technique is robust to noise.

4.2 Real Data
In the real experiment, the resolution of the calibration images is 640 × 480, and the number of images is 50. As shown in Fig. 4, one vertex of the calibration triangle is fixed, and the lengths of the three sides are $AB_1 = 33$ cm, $AB_2 = 37$ cm and $B_1B_2 = 28$ cm. $C_1$ and $C_2$ are the centre points of $AB_1$ and $AB_2$. The calibration
Fig. 4. Calibration Image
technique requires the feature points to be located precisely, but it is complicated to extract these feature points directly from grey-level images. Instead, we first detect the feature edges of the calibration object, and then compute the intersection points of these edges to determine the features (Fig. 4). The result is shown in Table 1.

Table 1. Calibration Result with Real Data

Num   fu        fv        u0      v0      γ
5     1049.23   1077.35   239.14  343.59  7.86 (89.57°)
10    1076.47   1031.47   259.49  327.44  4.15 (88.78°)
15    1016.65   1058.08   267.49  308.83  3.03 (89.83°)
20    830.57    738.45    316.26  313.86  2.99 (89.79°)
25    889.55    810.32    312.47  302.48  2.71 (89.83°)
30    880.88    809.59    323.22  283.94  2.63 (89.83°)
35    879.24    811.47    331.46  280.43  2.67 (89.83°)
40    879.93    814.89    334.66  279.47  2.69 (89.82°)
50    879.92    809.57    340.64  275.07  2.69 (89.82°)
From the real experimental data, we can see that the result converges when the number of calibration images exceeds 25.
5 Conclusion
In this paper, we have presented an algorithm for camera calibration using a rotating spatial triangle with given edge lengths. The relationship between 3D points and their 2D correspondences is analysed to eliminate the extrinsic parameters from the calibration process. By making full use of the geometric information of the calibration object, only two calibration images, at minimum, are required to solve the calibration. Compared with traditional techniques, the proposed one is more computationally efficient, in that the intrinsic parameters are solved independently using only a few images (at least two), and is therefore more accurate. Simulation and real-data experiments also show that the technique is robust to noise. The present result can be further improved by considering radial distortion.

Acknowledgment. This work is supported by the Grand Project of Shandong Nature Science Fund under No. Z2005G02 and the Shandong Outstanding Middle-Young-Aged Scientist Reward Fund under No. 03BS001.
References

1. Agrawal, M., Davis, L.S.: Camera calibration using spheres: A semi-definite programming approach. In: Proceedings of the Ninth IEEE International Conference on Computer Vision (ICCV 2003) (2003)
2. Tsai, R.Y.: A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robotics and Automation 3(2), 323–344 (1987)
3. Faugeras, O.: Three-Dimensional Computer Vision: A Geometric Viewpoint. MIT Press, Cambridge (1993)
4. Sturm, P., Maybank, S.: On plane-based camera calibration: A general algorithm, singularities, applications. In: Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 432–437 (1999)
5. Zhang, Z.: A flexible new technique for camera calibration. IEEE Trans. Pattern Analysis and Machine Intelligence 22(11), 1330–1334 (2000)
6. Zhang, Z.: Camera calibration with one-dimensional objects. Proc. European Conf. Computer Vision 4, 161–174 (2002)
7. Hartley, R.I.: An algorithm for self calibration from several views. In: IEEE Conf. Computer Vision and Pattern Recognition, pp. 908–912 (1994)
8. Luong, Q.-T.: Matrice Fondamentale et Calibration Visuelle sur l'Environnement: Vers une plus Grande Autonomie des Systemes Robotiques. PhD thesis, Universite de Paris-Sud, Centre d'Orsay (1992)
9. Maybank, S.J., Faugeras, O.D.: A theory of self-calibration of a moving camera. Int'l J. Computer Vision 8(2), 123–152 (1992)
10. Wan, W., Xu, G.Y.: Camera parameters estimation and evaluation in active vision system. Pattern Recognition 29(3), 429–447 (1996)
11. Wei, G.Q., Arbter, K., Hirzinger, G.: Active self-calibration of robotic eyes and hand-eye relationship with model identification. IEEE Trans. Robotics and Automation 29(3), 158–166 (1998)
12. More, J.J.: The Levenberg-Marquardt algorithm: Implementation and theory. In: Watson, G.A. (ed.) Numerical Analysis (1977)
Visual Navigation of Mobile Robot Using Optical Flow and Visual Potential Field

Naoya Ohnishi¹ and Atsushi Imiya²

¹ Graduate School of Science and Technology, Chiba University, Yayoicho 1-33, Inage-ku, Chiba, 263-8522, Japan
[email protected]
² Institute of Media and Information Technology, Chiba University, Yayoicho 1-33, Inage-ku, Chiba, 263-8522, Japan
[email protected]
Abstract. In this paper, we develop a novel algorithm for navigating a mobile robot using the visual potential. The visual potential is computed from an image sequence and the optical flow computed from successive images captured by the camera mounted on the robot. We assume that the direction to the destination is provided at the initial position of the robot. Using the direction to the destination, the robot dynamically selects a local pathway to the destination without collision with obstacles. The proposed algorithm does not require any knowledge or environmental maps of the robot workspace. Furthermore, this algorithm uses only a monocular uncalibrated camera for detecting a feasible region for navigation, since we apply dominant plane detection to detect the feasible region. We present experimental results of navigation in synthetic and real environments. Additionally, we present a robustness evaluation of the optical flow computation against lighting effects and various kinds of textures. Keywords: Robot navigation, Visual navigation, Potential field, Optical flow, Homography, Multiresolution.
1 Introduction
Visual navigation of mobile robots is currently one of the most challenging problems in the fields of robot vision and computer vision [5,8,12,13,24]. The main purpose of the problem is to determine a control force for the mobile robot using images captured by the camera mounted on the robot in a workspace. In this paper, we develop a navigation algorithm for an autonomous mobile robot using a visual potential field on images observed by a monocular camera mounted on the robot. The visual potential field on an image is an approximation of the projection of the potential field in the workspace onto the image plane. The path-planning problem of a mobile robot in the configuration space is to determine the trajectory of the mobile robot. The trajectory is determined as the path from the start point to the destination point without collision with obstacles
Fig. 1. Visual potential in a robot workspace. By our method, a navigation path without collision with obstacles is derived without the need for an environmental map. (a) Configuration of robot workspace. (b) Image captured by the camera mounted on the mobile robot. (c) Repulsive force from the obstacle.
in the configuration space. The potential field method [11] yields a path from the start point to the destination point using a gradient field. The gradient fields are computed from a map of the configuration of the robot workspace [1,23,26]. For path planning by the potential method, a robot is therefore required to store a map of its workspace. On the other hand, the navigation problem of a mobile robot is to determine the robot motion at an arbitrary time [19]. This means that the robot computes its velocity using real-time data obtained in the present situation, such as range data of obstacles, its position in the workspace, and images captured by a camera mounted on the robot. Since in a real environment the payload of mobile robots is restricted, for example by power supply, capacity of input devices, and computing power, mobile robots are required to have simple mechanisms and devices [8]. We use an uncalibrated monocular camera as the sensor for obtaining information on the environment. This vision sensor is a low-cost device that is easily mounted on mobile robots. Using the visual potential field, the robot navigates by referring to a sequence of images, without using the spatial configuration of obstacles. We introduce an algorithm for generating the visual potential field from an image sequence observed by a camera mounted on the robot. The visual potential field is computed on the image plane. Using the visual potential field and optical flow, we define a control force for avoiding obstacles and guiding the mobile robot to the destination, as shown in Fig. 1. The optical flow field [2,10,14,18] is the apparent motion in successive images captured by the moving camera. This is a fundamental feature for recognising environmental information around the mobile robot. Additionally, the optical flow field is considered to be fundamental information for obstacle detection in the context of biological data processing [16]. Therefore, the use of optical flow is an appropriate method for the visual navigation of the mobile robot from the viewpoint of the affinity between robots and human beings. In a previous paper [19], we developed a featureless robot navigation method based on a planar area and an optical flow field computed from a pair of successive images. A planar area in the world is called a dominant plane, and it corresponds to the largest part of an image. This method also yields local
Fig. 2. Perception and cognition of motion and obstacles in workspace by an autonomous mobile robot. The mobile robot has a camera, which corresponds to eyes. The robot perceives an optical flow field from ego-motion.
obstacle maps by detecting the dominant plane in images. We accept the following four assumptions:

1. The ground plane is the planar area.
2. The camera mounted on the mobile robot is downward-looking.
3. The robot observes the world using the camera mounted on itself for navigation.
4. The camera on the robot captures a sequence of images while the robot is moving.

These assumptions are illustrated in Fig. 2. Therefore, if there are no obstacles around the robot, and since the robot does not touch the obstacles, the ground plane corresponds to the dominant plane in the image observed through the camera mounted on the mobile robot. From the dominant plane, we define the potential field on the image. Using this potential field, we have proposed algorithms for obstacle avoidance [21] and for navigating the mobile robot in a corridor environment [22]. In this paper, we present an algorithm for navigating the mobile robot from the start point to the destination point by using the optical flow field as the guide force field to the destination. In mobile robot navigation by the potential method, an artificial potential field defines the navigation force that guides the robot to the destination. Usually, an attractive force to the destination and a repulsive force from obstacles are generated to guide the robot to the destination without collision with the obstacles in the robot workspace [6,25]. Basically, these two guiding forces are generated from the terrain and the obstacle configuration in the workspace, which are usually pre-input to the robot. Here, we use the optical flow field as the guide field to the destination of the robot, since the flow vectors indicate the direction of robot motion to a local destination. Furthermore, using images captured by the camera mounted on the robot, the potential field for the repulsive force from obstacles is generated to avoid collision. The mobile robot dynamically computes the local path while the camera captures the images.
2 Dominant Plane Detection from Optical Flow
In this section, we briefly describe the algorithm for dominant plane detection using the optical flow observed through the camera mounted on the mobile robot. The details of the algorithm are described in [19].

Optical Flow and Homography. Setting $I(x, y, t)$ and $(\dot{x}, \dot{y})^\top$ to be the time-varying grey-scale image at time $t$ and the optical flow, the optical flow $(\dot{x}, \dot{y})^\top$ at each point $(x, y)$ satisfies

$$I_x \dot{x} + I_y \dot{y} + I_t = 0. \quad (1)$$
The computation of $(\dot{x}, \dot{y})^\top$ from $I(x, y, t)$ is an ill-posed problem; therefore, additional constraints are required to compute $(\dot{x}, \dot{y})^\top$. We use the Lucas–Kanade method with pyramids [4] for the computation of the optical flow: Eq. (1) is solved by assuming that the optical flow vector is constant in the neighbourhood of each pixel. We call $u(x, y, t)$, the set of optical flow vectors $(\dot{x}, \dot{y})^\top$ computed for all pixels in an image, the optical flow field at time $t$.

Optical Flow on the Dominant Plane. We define the dominant plane as a planar area in the world corresponding to the largest part of an image. Assuming that the dominant plane in an image corresponds to the ground plane on which the robot moves, the detection of the dominant plane enables the robot to detect the feasible region for navigation in its workspace. Setting $H$ to be a $3 \times 3$ matrix [9], the homography between two images of a planar surface can be expressed as

$$(x', y', 1)^\top = H(x, y, 1)^\top, \quad (2)$$
where $(x, y, 1)^\top$ and $(x', y', 1)^\top$ are homogeneous coordinates of corresponding points in two successive images. Assuming that the camera displacement is small, the matrix $H$ can be approximated by an affine transformation. These geometrical and mathematical assumptions are valid when the camera is mounted on a mobile robot moving on the dominant plane. Therefore, the corresponding points $p = (x, y)^\top$ and $p' = (x', y')^\top$ on the dominant plane are expressed as

$$p' = Ap + b, \quad (3)$$
where $A$ and $b$ are a $2 \times 2$ affine-coefficient matrix and a 2-dimensional vector, respectively, which approximate $H$. We can therefore estimate the affine coefficients using a RANSAC-based algorithm [7,9,20]. Using the estimated affine coefficients, we can estimate the optical flow on the dominant plane $(\hat{x}, \hat{y})^\top$,

$$(\hat{x}, \hat{y})^\top = A(x, y)^\top + b - (x, y)^\top, \quad (4)$$

for all points $(x, y)$ in the image. We call $(\hat{x}, \hat{y})^\top$ the planar flow, and $\hat{u}(x, y, t)$, the set of planar flow vectors $(\hat{x}, \hat{y})^\top$ computed for all pixels in an image, the planar flow field at time $t$.
Fig. 3. The difference in the optical flow between the dominant plane and obstacles. If the camera moves a distance T approximately parallel to the dominant plane, the camera displacement is the same T for both the obstacle area and the dominant plane area, but the observed optical flow vectors in the two areas differ. Therefore, the camera can observe the difference in the optical flow vectors between the dominant plane and obstacles.
Dominant Plane Detection. If an obstacle exists in front of the robot, the planar flow on the image plane differs from the optical flow on the image plane, as shown in Fig. 3. Since the planar flow $(\hat{x}, \hat{y})^\top$ is equal to the optical flow $(\dot{x}, \dot{y})^\top$ on the dominant plane, we use the difference between these two flows to detect the dominant plane. We set $\varepsilon$ to be the tolerance on the difference between the optical flow vector and the planar flow vector. Therefore, if

$$\|(\dot{x}, \dot{y})^\top - (\hat{x}, \hat{y})^\top\| < \varepsilon \quad (5)$$

is satisfied, we accept the point $(x, y)$ as a point on the dominant plane. The image is then represented as a binary image consisting of the dominant plane region and the obstacle region. Therefore, we set $d(x, y, t)$ to be

$$d(x, y, t) = \begin{cases} 255, & \text{if } (x, y) \text{ is on the dominant plane} \\ 0, & \text{if } (x, y) \text{ is on the obstacle area.} \end{cases}$$

We call $d(x, y, t)$ the dominant plane map. Our algorithm is summarized as follows:

1. Compute the optical flow field $u(x, y, t)$ from two successive images.
2. Compute the affine coefficients in Eq. (3) by random selection of three points.
3. Estimate the planar flow field $\hat{u}(x, y, t)$ from the affine coefficients.
4. Match the computed optical flow field $u(x, y, t)$ and the estimated planar flow field $\hat{u}(x, y, t)$ using Eq. (5).
5. Assign the points satisfying $\|(\dot{x}, \dot{y})^\top - (\hat{x}, \hat{y})^\top\| < \varepsilon$ to the dominant plane. If the dominant plane occupies less than half the image, return to step 2.
6. Output the dominant plane $d(x, y, t)$ as a binary image.
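The sketch below illustrates this detection loop. It is not the authors' implementation: dense Farneback flow stands in for the pyramidal Lucas-Kanade flow of step 1, and OpenCV's RANSAC affine estimator stands in for the random three-point selection of step 2.

```python
# Sketch of the dominant plane detection described above (illustration only).
import cv2
import numpy as np

def dominant_plane_map(prev_gray, next_gray, eps=1.0):
    h, w = prev_gray.shape

    # Optical flow field u(x, y, t): per-pixel (dx, dy).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Affine model of the dominant plane from point correspondences (Eq. (3)).
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    matched = pts + flow.reshape(-1, 2)
    A_b, _ = cv2.estimateAffine2D(pts, matched, method=cv2.RANSAC,
                                  ransacReprojThreshold=1.0)

    # Planar flow field from the affine coefficients (Eq. (4)).
    planar = pts @ A_b[:, :2].T + A_b[:, 2] - pts

    # Dominant plane map d(x, y, t) by thresholding the flow difference (Eq. (5)).
    diff = np.linalg.norm(flow.reshape(-1, 2) - planar, axis=1)
    return np.where(diff < eps, 255, 0).astype(np.uint8).reshape(h, w)
```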
Figure 4 shows examples of the captured image $I(x, y, t)$, the detected dominant plane $d(x, y, t)$, the optical flow field $u(x, y, t)$, and the planar flow field $\hat{u}(x, y, t)$. In the dominant plane image, the white and black areas show the dominant plane and the obstacle area, respectively.
Fig. 4. Examples of dominant plane and planar flow field. Starting from the left: the captured image $I(x, y, t)$, detected dominant plane $d(x, y, t)$, optical flow field $u(x, y, t)$, and planar flow field $\hat{u}(x, y, t)$.
3 Determination of Robot Motion Using Visual Potential
In this section, we describe the algorithm used for the determination of robot motion from the dominant plane image $d(x, y, t)$ and the planar flow field $\hat{u}(x, y, t)$.

Gradient Vector of Image Sequence. The robot moves on the dominant plane without collision with obstacles. Therefore, we generate an artificial repulsive force from the obstacle area in the image $d(x, y, t)$ using the gradient vector field. The potential field on the image is an approximation of the projection of the potential field in the workspace onto the image plane. Therefore, we use the gradient vector of the dominant plane as the repulsive force from obstacles. Since the dominant plane map $d(x, y, t)$ is a binary image sequence, the computation of the gradient is numerically unstable and not robust. Therefore, as a preprocess to the computation of the gradient, we smooth the dominant plane map $d(x, y, t)$ by convolution with a Gaussian $G\ast$; that is, we adopt

$$g(x, y, t) = \nabla(G \ast d(x, y, t)) = \begin{pmatrix} \frac{\partial}{\partial x}(G \ast d(x, y, t)) \\ \frac{\partial}{\partial y}(G \ast d(x, y, t)) \end{pmatrix}$$

as the potential generated by obstacles. Here, the 2D Gaussian is

$$G(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2 + y^2}{2\sigma^2}}, \quad (6)$$

and $G \ast d(x, y, t)$ is given as

$$G \ast d(x, y, t) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} G(u - x, v - y)\, d(u, v, t)\, \mathrm{d}u\, \mathrm{d}v. \quad (7)$$
In this paper, we select the parameter $\sigma$ to be half the image size. An example of the gradient vector field is shown in Fig. 5 (middle).

Optical Flow as Attractive Force. From the geometric properties of the flow field and the potential, we define the potential field $p(x, y, t)$ as

$$p(x, y, t) = \begin{cases} g(x, y, t) - \hat{u}(x, y, t), & \text{if } d(x, y, t) = 255 \\ g(x, y, t), & \text{otherwise,} \end{cases} \quad (8)$$

where $\hat{u}(x, y, t)$ is the attractive force field to the destination.
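The following sketch illustrates Eqs. (6)-(8): a Gaussian-smoothed gradient of the dominant plane map as the repulsive field, combined with the planar flow as the attractive field. Array shapes and the choice of sigma are illustrative assumptions, not the authors' code.

```python
# Sketch of Eqs. (6)-(8): repulsive gradient field plus attractive planar flow.
import cv2
import numpy as np

def visual_potential(d_map, planar_flow, sigma):
    """d_map: (H, W) uint8 dominant plane map (255 = plane, 0 = obstacle).
    planar_flow: (H, W, 2) attractive flow field."""
    smoothed = cv2.GaussianBlur(d_map.astype(np.float32), ksize=(0, 0),
                                sigmaX=sigma, sigmaY=sigma)

    # g(x, y, t) = gradient of the smoothed dominant plane map.
    gy, gx = np.gradient(smoothed)
    g = np.stack([gx, gy], axis=-1)

    # p(x, y, t): subtract the attractive flow only on the dominant plane.
    on_plane = (d_map == 255)[..., None]
    return np.where(on_plane, g - planar_flow, g)
```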
Fig. 5. Examples of potential field p(x, y, t) computed from the examples in Fig. 4. Starting from the left, the dominant plane after Gaussian operation G ∗ d(x, y, t), gradient vector field g(x, y, t) as repulsive force from obstacles, and visual potential field p(x, y, t)
In Eq. (8), the gradient vector field $g(x, y, t)$ is a repulsive force from obstacles and the planar flow field $\hat{u}(x, y, t)$ is an artificial attractive force to the destination. Since the planar flow $\hat{u}(x, y, t)$ represents the camera motion, the sum of the sign-inverted planar flow field $-\hat{u}(x, y, t)$ and the gradient vector field $g(x, y, t)$ is the potential field $p(x, y, t)$. However, in the obstacle area of an image, the planar flow field $\hat{u}(x, y, t)$ is set to zero, since the planar flow field represents the dominant plane motion. If no obstacles exist in an image, the above equation becomes

$$p(x, y, t) = -\hat{u}(x, y, t). \quad (9)$$
(10)
ˆ r (x, y) are the pure translational and rotational planar flow ˆ t (x, y) and u where u fields, respectively, as shown in Fig. 6. ˆ d (x, y) is the desired flow generated from images captured The optical flow u by the camera mounted on the robot, if the robot is moving to the direction of
Fig. 6. Optical flow field as attractive force field. (left) Translational planar flow field ˆ r (x, y). ˆ t (x, y). (right) Rotational planar flow field u u
Visual Navigation of Mobile Robot Moving direction R(t)
Visual potential on image plane
y axis p(t)Navigation force
T(t)
θ(t) p(t)
Obstacle
419
θ(t)
x axis
Navigation path Ground plane
Image plane
¯ Fig. 7. Navigation from potential field. Starting from the left, the navigation force p(t), ¯ and y axis is θ(t), and robot displacement the angle between the navigation force p(t) T (t) and rotation angle R(t) at time t determined using θ(t).
Capture image I(x,y,t) Detect the dominant plane d(x,y,t) Compute gradient vector field g(x,y,t) from d(x,y,t) ^ Compute attractive force field u(x,y,t) from the direction θd(t) Compute visual potential field p(x,y,t) ^ from g(x,y,t) and u(x,y,t) Compute navigation force p(t) from p(x,y,t) Robot moves T(t) and R(t) t:=t+1
Fig. 8. Closed loop for autonomous robot motion. Computations of g(x, y, t), p(x, y, t), and T (t) and R(t) are described in sections 3.1, 3.2, and 3.3, respectively. The algorithm for the detection of dominant plane d is described in section 2.
the destination in the workspace without obstacles. The pure optical flow fields ut and ur are computed if the velocity of the robot is constant. Therefore, we use the control force ˆ d (x, y, t), if d(x, y, t) = 255 g(x, y, t) − u (11) pd (x, y, t) = g(x, y, t), otherwise, for the guiding of the robot to the destination. ¯ using the Navigation by Potential Field. We define the control force p(t) average of the potential field pd (x, y, t) as 1 ¯ = p(t) pd (x, y, t)dxdy, (12) |A| (x,y) ∈A where |A| is the size of the region of interest in an image captured by the camera mounted on the mobile robot. ¯ to the nonholonomic mobile robot, we Since we apply the control force p(t) require that the control force is transformed into the translational velocity and
420
N. Ohnishi and A. Imiya
(a)
(b)
(c)
(d)
(e)
(f)
Fig. 9. Experimental results for detecting the dominant plane for some of various illuminations and textures environment. Top row is the captured image. Bottom row is the dominant plane detected from the top image. (a)-(c) Illumination-changed images. (a) Dark medium. (b) Medium medium. (c) Bright illumination. (d)-(f) Texture-changed images. (d) Granite texture. (e) Leopard texture. (f) Agate textured floor and wave textured block.
rotational velocity. We determine the ratio between the translational and rota¯ tional velocity using the angle of the control force p(t). We assume that the camera is attached to the front of the mobile robot. Therefore, we set parameter θ(t) to be the angle between the navigation force ¯ and y axis y = (0, 1) of the image, which is the forward direction of the p(t) mobile robot, as shown in Fig. 7. That is, θ(t) = arccos
¯ p(t), y . ¯ |p(t)||y|
(13)
We define the robot translational velocity T(t) and rotational velocity R(t) at time t as

T(t) = T_m cos θ(t),   R(t) = R_m sin θ(t),      (14)

where T_m and R_m are the maximum translational and rotational velocities, respectively, of the mobile robot between time t and t + 1. Setting X(t) = (X(t), Y(t)) to be the position of the robot at time t, from Eq. (14) we have the relations

√( Ẋ(t)² + Ẏ(t)² ) = T(t),   tan⁻¹( Ẏ(t) / Ẋ(t) ) = R(t).      (15)

Therefore, we have the control law

Ẋ(t) = T(t) cos R(t),      (16)
Ẏ(t) = T(t) sin R(t).      (17)
The algorithm for computing the robot motion T(t) and R(t) and the closed loop for autonomous robot motion are shown in Fig. 8.
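The mapping from the averaged potential field to the velocity commands in Eqs. (12)-(14) is compact enough to sketch directly. The following Python fragment is only an illustration of that step; the array p_d standing in for the potential field over the region of interest and the maximum velocities T_MAX and R_MAX are assumed placeholders, not values from the original implementation.

import numpy as np

T_MAX, R_MAX = 0.3, 0.5   # assumed maximum translational/rotational velocities

def velocity_command(p_d):
    """Map a potential field p_d (H x W x 2) to (T, R) following Eqs. (12)-(14)."""
    p_bar = p_d.reshape(-1, 2).mean(axis=0)             # Eq. (12): average control force
    y_axis = np.array([0.0, 1.0])                       # forward direction of the robot
    cos_theta = p_bar @ y_axis / (np.linalg.norm(p_bar) + 1e-12)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))    # Eq. (13)
    return T_MAX * np.cos(theta), R_MAX * np.sin(theta) # Eq. (14): (T(t), R(t))

The clipping before arccos only guards against numerical round-off; since θ(t) is obtained from arccos, it lies in [0, π], exactly as in the formulation above.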
Fig. 10. (a) Detection rate against the illumination change. (b) Detection rate against the texture change.
4 Experimental Results
Robustness Evaluation of Dominant Plane Detection against Lighting and Texture. We evaluated the performance of the algorithm against differences in texture and lighting conditions using synthetic data. Figure 9 shows the same geometric configuration under different textures and different illuminations; the bottom row shows the results of the dominant plane detection for these conditions. Figure 10 shows the detection rate of the dominant plane against the change of conditions. The detection rate is calculated by comparing the detected dominant plane with the ground truth, which is segmented manually. These results show that the dominant plane is a robust and stable feature against changes in the statistical properties of the images, described by the textures of the objects in the workspace, and in their physical properties, described by the illumination in the workspace.
Fig. 11. Experimental results in the synthetic environment with three different destinations. The sequence of triangles in each map is the trajectory of the mobile robot, the black regions are obstacles, and the gray circle is the destination. (a) The destination is set at the bottom-right corner. (b) The destination is set at the bottom-left corner. (c) The destination is set at the top-right corner. o and d mark the start and destination points, respectively.
Fig. 12. Starting from the left: the robot position in the environment, the captured image I(x, y, t), the attractive force field û(x, y, t), the visual potential field p(x, y, t), and the control force p̄(t) at the 10th, 30th, and 60th frames of the example shown in Fig. 11(a)
Experiments in Synthetic Environment. Figure 11 shows experimental results in the synthetic environment with three different destinations. In this figure, the sequences of triangles in the maps are the trajectories of the mobile robot, the black regions are obstacles, and the gray circles are the destinations. Figure 12 shows the robot position in the environment, the captured image
Fig. 13. Starting from the left: the robot position in the environment, the captured image I(x, y, t), the attractive force field û(x, y, t), the visual potential field p(x, y, t), and the control force p̄(t) at the 10th, 30th, and 60th frames of the example shown in Fig. 11(b)
Fig. 14. Starting from the left: the robot position in the environment, the captured image I(x, y, t), the attractive force field û(x, y, t), the visual potential field p(x, y, t), and the control force p̄(t) at the 10th, 30th, and 60th frames of the example shown in Fig. 11(c)
I(x, y, t), the attractive force field û(x, y, t), the visual potential field p(x, y, t), and the control force p̄(t) at the 10th, 30th, and 60th frames in Fig. 11(a). These experimental results show that our method is effective for robot navigation using visual information captured by the mobile robot itself.

Experiments in Real Environment. We show an experimental result in a real environment with one obstacle. The destination is set beyond the obstacle, that is, the obstacle separates the geodesic path from the origin to the destination. Therefore, in the image, the destination appears on the obstacle. The direction angle θ_d(t) between the destination and the robot direction is computed by

θ_d(t) = ∠[X_D(t), X_D(0)],      (18)

for X_D(t) = X(t) − D, where D is the coordinate of the destination point at t = 0, which is given to the robot at the initial position. Figure 15 shows the snapshot of the robot in the environment, the captured image I(x, y, t), the detected dominant plane d(x, y, t), the visual potential field p(x, y, t), and the control force p̄(t) at the 10th, 50th, 100th, 150th, and 200th frames. To avoid collision with the obstacle on the geodesic path from the origin to the destination, the robot first turned to the left and then turned to the right after passing beside the obstacle. The mobile robot avoided the obstacle and reached the destination.
Fig. 15. Starting from the left: the snapshot of the robot in the environment, the captured image I(x, y, t), the detected dominant plane d(x, y, t), the visual potential field p(x, y, t), and the control force p̄(t) at the 10th, 50th, 100th, 150th, and 200th frames. To avoid collision with the obstacle on the geodesic path from the origin to the destination, the robot first turned to the left and then turned to the right after passing beside the obstacle.
5 Conclusions
We developed an algorithm for navigating a mobile robot to its destination using the optical flow and the visual potential field captured by a camera mounted on the robot. We generated a potential field of repulsive force from obstacles to avoid collision, using images captured with the camera mounted on the robot. Our method enables a mobile robot to avoid obstacles without an environmental map. Experimental results show that our algorithm is robust against fluctuations of the displacement of the mobile robot. Images observed in the RGB color space are transformed to images in the HSV color space. Images in the H-channel are invariant to illumination changes. Therefore, parameters computed from the H-channel image are invariant to illumination changes. This property suggests that the computation of the optical flow and the dominant plane from the H-channel is stable and robust against illumination changes [3,15,17]. The visual potential field used for guiding the robot is generated from visual information captured with a camera mounted on the robot without the use of
any maps that describe the workspace. We showed that optical flow vectors indicate the direction of robot motion to the local destination. Therefore, the optical flow field is suitable as a guide field to the destination of the robot.
References
1. Aarno, D., Kragic, D., Christensen, H.I.: Artificial potential biased probabilistic roadmap method. IEEE International Conference on Robotics and Automation 1, 461–466 (2004)
2. Barron, J.L., Fleet, D.J., Beauchemin, S.S.: Performance of optical flow techniques. International Journal of Computer Vision 12, 43–77 (1994)
3. Barron, J., Klette, R.: Quantitative colour optical flow. International Conference on Pattern Recognition 4, 251–254 (2002)
4. Bouguet, J.-Y.: Pyramidal implementation of the Lucas-Kanade feature tracker: description of the algorithm. Intel Corporation, Microprocessor Research Labs, OpenCV Documents (1999)
5. Bur, A., et al.: Robot navigation by panoramic vision and attention guided features. In: International Conference on Pattern Recognition, pp. 695–698 (2006)
6. Conner, D.C., Rizzi, A.A., Choset, H.: Composition of local potential functions for global robot control and navigation. International Conference on Intelligent Robots and Systems 4, 3546–3551 (2003)
7. Fischler, M.A., Bolles, R.C.: Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Comm. of the ACM 24, 381–395 (1981)
8. Guilherme, N.D., Avinash, C.K.: Vision for mobile robot navigation: A survey. IEEE Trans. on PAMI 24, 237–267 (2002)
9. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge (2000)
10. Horn, B.K.P., Schunck, B.G.: Determining optical flow. Artificial Intelligence 17, 185–203 (1981)
11. Khatib, O.: Real-time obstacle avoidance for manipulators and mobile robots. International Journal of Robotics Research 5, 90–98 (1986)
12. Lopez-Franco, C., Bayro-Corrochano, E.: Omnidirectional vision and invariant theory for robot navigation using conformal geometric algebra. In: International Conference on Pattern Recognition, pp. 570–573 (2006)
13. Lopez-de-Teruel, P.E., Ruiz, A., Fernandez, L.: Efficient monocular 3d reconstruction from segments for visual navigation in structured environments. In: International Conference on Pattern Recognition, pp. 143–146 (2006)
14. Lucas, B., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: International Joint Conference on Artificial Intelligence, pp. 674–679 (1981)
15. Maddalena, L., Petrosino, A.: A self-organizing approach to detection of moving patterns for real-time applications. In: Second International Symposium on Brain, Vision, and Artificial Intelligence, pp. 181–190 (2007)
16. Mallot, H.A., et al.: Inverse perspective mapping simplifies optical flow computation and obstacle detection. Biological Cybernetics 64, 177–185 (1991)
17. Mileva, Y., Bruhn, A., Weickert, J.: Illumination-robust variational optical flow with photometric invariants. In: DAGM-Symposium, pp. 152–162 (2007)
18. Nagel, H.-H., Enkelmann, W.: An investigation of smoothness constraint for the estimation of displacement vector fields from image sequences. IEEE Trans. on PAMI 8, 565–593 (1986)
19. Ohnishi, N., Imiya, A.: Featureless robot navigation using optical flow. Connection Science 17, 23–46 (2005)
20. Ohnishi, N., Imiya, A.: Dominant plane detection from optical flow for robot navigation. Pattern Recognition Letters 27, 1009–1021 (2006)
21. Ohnishi, N., Imiya, A.: Navigation of nonholonomic mobile robot using visual potential field. In: International Conference on Computer Vision Systems (2007)
22. Ohnishi, N., Imiya, A.: Corridor navigation and obstacle avoidance using visual potential for mobile robot. In: Canadian Conference on Computer and Robot Vision, pp. 131–138 (2007)
23. Shimoda, S., Kuroda, Y., Iagnemma, K.: Potential field navigation of high speed unmanned ground vehicles on uneven terrain. In: IEEE International Conference on Robotics and Automation, pp. 2839–2844 (2005)
24. Stemmer, A., Albu-Schaffer, A., Hirzinger, G.: An analytical method for the planning of robust assembly tasks of complex shaped planar parts. In: IEEE International Conference on Robotics and Automation, pp. 317–323 (2007)
25. Tews, A.D., Sukhatme, G.S., Matarić, M.J.: A multi-robot approach to stealthy navigation in the presence of an observer. In: IEEE International Conference on Robotics and Automation, pp. 2379–2385 (2004)
26. Valavanis, K., et al.: Mobile robot navigation in 2D dynamic environments using an electrostatic potential field. IEEE Trans. on Systems, Man, and Cybernetics 30, 187–196 (2000)
Behavior Based Robot Localisation Using Stereo Vision
Maria Sagrebin, Josef Pauli, and Johannes Herwig
Fakultät für Ingenieurwissenschaften, Abteilung für Informatik und Angewandte Kognitionswissenschaft, Universität Duisburg-Essen, Germany
Abstract. Accurate robot detection and localisation is fundamental in applications which involve robot navigation. Typical methods for robot detection require a model of the robot. However, in most applications the availability of such a model cannot be guaranteed. This paper discusses a different approach. A method is presented to localise the robot in a complex and dynamic scene based only on the information that the robot is following a previously specified movement pattern. The advantage of this method lies in the ability to detect differently shaped and differently looking robots as long as they perform the previously defined movement. The method has been successfully tested in an indoor environment.
1 Introduction
Successful robot detection and 3D localisation is one of the most desired features in many high-level applications. Many algorithms for robot navigation depend on robust object localisation. Usually the problem of localising a robot by using two stereo cameras is solved by first creating a model of the robot and then mapping the given image data to this model. Such an approach has two important drawbacks. First, it requires an offline phase during which a model of the robot is learned. In many applications such an offline phase is hardly practicable. Second, it is very hard to find a model which accounts for every possible appearance of the robot. Moreover, it is sometimes simply impossible to extract or find features which are both discriminative against other objects and invariant to lighting conditions, changes in pose, and different environments. A robot which is working on a construction site can change its appearance due to dirt and its form and structure due to possible damage. In such situations most of the common approaches would fail. An online learning algorithm which enables the learning of a model in real time could be a solution. But now we face a chicken-and-egg problem: to apply a learning algorithm we need the position of an object, and to get the position of an object we need a model. The answer to this problem lies in the behavior or motion based localisation of the robot. An algorithm which first tracks all moving objects in the scene and then selects the one which is performing a previously specified motion is the solution to this problem. In the example of a robot working on a construction site it
would be sufficient to let the robot perform some simple recurring movement. A system which is trained to recognise such movements would be able to localise the robot. Another important scenario is that of supplying robots. Differently looking robots which deliver packages to an office building know to which office they have to go but do not know how to get there. A system which is able to localise these differently looking robots in the entrance hall could navigate them to the target office. However, the question that arises here again is how to recognise these robots. Apparently it is not possible to teach the system all the different models. But to train the system to recognise moving objects which perform a special systematic movement is practicable.
1.1 Previous Work
The detection of objects which perform a recurring movement requires an object detection mechanism, an appropriate object representation method and a suitable tracking algorithm. Common object detection mechanisms are point detectors (Moravec's detector [13], the Harris detector [9], the Scale Invariant Feature Transform [11]), background modeling techniques (Mixture of Gaussians [18], Eigenbackground) and supervised classifiers (Support Vector Machines [14], Neural Networks, Adaptive Boosting [21]). Commonly employed object representations for tracking are points [20][16], primitive geometric shapes [6] or skeletal models [1]. Depending on the chosen object representation, different tracking algorithms can be used. Widely used are, for example, the Kalman Filter [4], Mean-shift [6], the Kanade-Lucas-Tomasi feature tracker (KLT) [17], and the SVM tracker [2]. For object trajectory representation different approaches have also been suggested. Chen [5] proposed a segmented trajectory based approach; the trajectories are segmented based on extrema in acceleration measured by high-frequency wavelet coefficients. In [3], Bashir segments the trajectories based on dominant curvature zero-crossings and represents each subtrajectory by using Principal Component Analysis (PCA). In [10] different algorithms for 3D reconstruction and object localisation are presented.
1.2 Proposed Approach at a Glance
In this paper a procedure is presented which solves the task of 3D localisation of a robot based on the robot's behavior. A system equipped with two stereo cameras was trained to recognise robots which perform periodic movements. The specialisation of the system to periodic movements was chosen for two reasons. The first is the simplicity of this movement: almost every mobile robot is able to drive back and forth or in a circle. The second is the easy discriminability of this movement from others.
The overall procedure for 3D localisation of the robot is composed of the following steps:
– In both image sequences, extract moving objects from the background using an adaptive background subtraction algorithm.
– In both image sequences, track the moving objects over some period of time.
– In both image sequences, extract the object which is performing a periodic movement. This step provides a pixel position of the robot in both images.
– By using a triangulation method, compute the 3D location of the robot.
Each of these steps is described in more detail in the following sections.
2 Adaptive Background Subtraction
To extract moving objects from the background an adaptive background subtraction algorithm according to Grimson and Stauffer [18] has been used. It allows the timely updating of a background model to gradual illumination changes as well as to significant changes in the background. In their approach each pixel in the image is modelled by a mixture of K Gaussian distributions, where K has a value from 3 to 5. The probability that a certain pixel has a color value of X at time t can be written as

p(X_t) = Σ_{i=1}^{K} w_{i,t} · η(X_t; θ_{i,t}),
where w_{i,t} is the weight parameter of the i-th Gaussian component at time t and η(X_t; θ_{i,t}) is the normal distribution of the i-th Gaussian component at time t, represented by

η(X_t; θ_{i,t}) = η(X_t; μ_{i,t}, Σ_{i,t}) = 1 / ( (2π)^{n/2} |Σ_{i,t}|^{1/2} ) · exp( −(1/2) (X_t − μ_{i,t})^T Σ_{i,t}^{-1} (X_t − μ_{i,t}) ),
where μ_{i,t} is the mean and Σ_{i,t} = σ_{i,t}² I is the covariance of the i-th component at time t. It is assumed that the red, green and blue pixel values are independent and have the same variances. The K Gaussians are then ordered by the value of w_{i,t}/σ_{i,t}. This value increases both as a distribution gains more evidence and as its variance decreases; thus the ordering keeps the most likely background distributions on top. The first B distributions are then chosen as the background model, where B is defined as

B = arg min_b ( Σ_{i=1}^{b} w_i > T ),
where the threshold T is the minimum portion of the background model. Background subtraction is done by marking a pixel as a foreground pixel if its value is
more than 2.5 standard deviations away from any of the B distributions. The first Gaussian component that matches the pixel value is updated by the following equations:

μ_{i,t} = (1 − ρ) μ_{i,t−1} + ρ X_t
σ_{i,t}² = (1 − ρ) σ_{i,t−1}² + ρ (X_t − μ_{i,t})^T (X_t − μ_{i,t}),
where ρ = α η(X_t | μ_i, σ_i). The weights of the distributions are adjusted as follows:

w_{i,t} = (1 − α) w_{i,t−1} + α M_{i,t},

where α is the learning rate and M_{i,t} is 1 for the distribution which matched and 0 for the remaining distributions. Figure 1 shows the extraction of the moving objects, i.e., two robots and a person, from the background.
Fig. 1. Extraction of the moving objects from the background
After labeling the connected components in the resulting foreground image, axis-parallel rectangles around these components have been computed. The centers of these rectangles have been used to specify the positions of the objects in the image.
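The foreground-extraction stage described above can be sketched with off-the-shelf components. The fragment below is a minimal illustration that substitutes OpenCV's built-in MOG2 background subtractor (a descendant of the Stauffer-Grimson mixture model [18]) for the authors' own implementation; the history length, variance threshold and minimum blob area are assumed placeholder values.

import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

def detect_moving_objects(frame):
    """Return axis-parallel rectangles around moving objects and their centers."""
    mask = subtractor.apply(frame)              # per-pixel foreground mask
    mask = cv2.medianBlur(mask, 5)              # suppress isolated noise pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    objects = []
    for c in contours:
        if cv2.contourArea(c) < 200:            # ignore tiny blobs (assumed threshold)
            continue
        x, y, w, h = cv2.boundingRect(c)        # axis-parallel rectangle
        objects.append(((x, y, w, h), (x + w // 2, y + h // 2)))
    return objects

Here the connected-component labeling is replaced by contour extraction, which yields the same axis-parallel rectangles and rectangle centers used as object positions.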
3 The 2D Tracking Algorithm
The tracking algorithm used in our application is composed of the following steps:
– In two consecutive images, compute the rectangles around the moving objects.
– From each of these consecutive images, extract and find corresponding SIFT features (Scale-Invariant Feature Transform) [11].
– Identify corresponding rectangles (rectangles which surround the same object) in both images by taking into account the corresponding SIFT features. Two rectangles are considered to correspond when the SIFT features they surround correspond to each other.
In this algorithm SIFT features have been chosen because, as is shown in [12], they outperform most other point detectors and are more resilient to image deformations. Figure 2 shows the tracking of the objects in two consecutive images taken from one camera.
Fig. 2. Tracking of moving objects in scene
Next, the advantages of using this algorithm are described.
3.1 Robust Clustering of SIFT-Features
As one can see in the left image, SIFT features are computed not only on the moving objects but all over the scene. Thus the task to solve is the clustering of those features which belong to each moving object, so that the position of an object can be tracked to the consecutive image. The clustering of the SIFT features which represent the same object is done here by the computation of the rectangles around the moving objects. An example of this can be seen in the right image. Although the SIFT tracking algorithm tracks all SIFT features which have been found in the previous image, only those features have been selected which represent the moving object. The short lines in the image indicate the trajectories of successfully tracked SIFT features. An alternative approach for clustering the corresponding SIFT features according to the lengths and the angles of the translation vectors defined by the two positions of the corresponding SIFT features has also been considered, but it led to very unstable results and imposed too many constraints on the scene.
For example, it is not justifiable to restrict the speed of the objects to a minimum velocity. The speed of an object has a direct influence on the length of the translation vector. Thus, to separate SIFT features which represent a very slowly moving object from those of the background, it would be necessary to define a threshold. This is not acceptable.
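The cooperation between rectangles and SIFT features described in this section can be sketched as follows. The fragment below is an illustrative implementation, not the authors' code: it uses OpenCV's SIFT detector and a brute-force matcher, assigns each match to the rectangles that contain its endpoints, and lets the rectangles vote for their correspondence; all function names are ours.

import cv2
import numpy as np

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)

def inside(rect, pt):
    x, y, w, h = rect
    return x <= pt[0] <= x + w and y <= pt[1] <= y + h

def match_rectangles(img1, rects1, img2, rects2):
    """Associate rectangles of two consecutive frames via the SIFT matches they contain."""
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    votes = np.zeros((len(rects1), len(rects2)), dtype=int)
    for m in matcher.match(des1, des2):
        p1, p2 = kp1[m.queryIdx].pt, kp2[m.trainIdx].pt
        for i, r1 in enumerate(rects1):
            if inside(r1, p1):
                for j, r2 in enumerate(rects2):
                    if inside(r2, p2):
                        votes[i, j] += 1     # this match links rectangle i to rectangle j
    # a rectangle corresponds to the rectangle that collected most of its feature matches
    return {i: int(np.argmax(votes[i])) for i in range(len(rects1)) if votes[i].max() > 0}

Because the association relies only on matched features inside the rectangles, no velocity threshold of any kind is needed, which is exactly the property argued for above.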
3.2 Accurate Specification of the Object's Position
Another advantage of the computation of the rectangles is the accurate specification of the object's position in the image. As stated before, the position of an object is specified by the center of the rectangle surrounding this object. Defining the position of an object as the average position of the SIFT features representing this object is not a valuable alternative. Due to possible changes in illumination or a slight rotation of the object, it was shown that neither the number of SIFT features representing this object nor the features themselves can be assumed to be constant over the whole period of time. Figure 3 reflects this general situation graphically.
Fig. 3. Neither the number of SIFT features nor the features themselves can be assumed to be constant over the whole period of time
As one can see, the SIFT features which have been tracked from image 1 to image 2 are not the same as those which have been tracked from image 2 to image 3. Thus the average positions calculated from these two different sets of SIFT features, which nevertheless represent the same object, vary. To detect that these two average points belong to the same trajectory it would be necessary to define a threshold. This is very inconvenient, and the question of how to define this threshold would also have to be answered. The problem of defining a threshold is not an issue when computing rectangles around the objects.
3.3 Effective Cooperation Between SIFT Features and Rectangles
One final question arises: why do we need to compute and cluster SIFT features at all? Why is it not enough to compute only the rectangles and track the object's position by looking at which of the rectangles in the consecutive images overlap? The answer is simple: in the case of a small, very fast moving object the computed rectangles will not overlap and the system will fail miserably. Thus the rectangles are used to cluster the SIFT features which represent the same object, and the SIFT features are used to find corresponding rectangles in two consecutive images. It is exactly this kind of cooperation which makes the proposed algorithm successful. Now that the exact position of every object in the scene can be determined in every image, the tasks to solve are the computation of the trajectory of every object and the selection of the object which performs a systematic movement.
4 Robot Detection in Both Camera Images
In our application robot detection was done by selecting the moving object which performed a periodic movement. An algorithm for detecting such a movement has been developed. It is composed of the following steps:
– Mapping of the object's trajectory to a more useful and applicable form. The trajectory of an object which is performing a periodic movement is mapped to a cyclic curve.
– Detection that a given curve has a periodic form.
These steps are described in more detail below.
4.1 Mapping of Object's Trajectories
The position of the object in the first image is chosen as the start position of this object. Then, after every consecutive image, the euclidean distance between the new position and the start position of this object is computed. Plotting these distances over the number of processed images results in a curve which has a cyclic form when the object is performing a periodic movement. One example of such a curve is shown in figure 4. Often the amplitude and the frequency of the resulting curve cannot be assumed to be constant over the whole period of time. As one can see, both the maximums and minimums vary, but they can be expected to lie in some predefined range. An algorithm has been developed which can deal with such impurities.
4.2 Detection of the Periodic Curve
The detection of the periodic curve is based on the identification of maximums and minimums and on the verification of whether the detected maximums and minimums lie in the predefined ranges.
Fig. 4. Example of a curve which results from a periodic movement
The developed algorithm is an online algorithm and processes every new incoming datum (in our case the newly computed euclidean distance between the new position and the start position of an object) as it arrives. The algorithm is structured as follows:
Initialization Phase: During this phase the values of the parameters predicted maximum, predicted minimum and predicted period are initialized as the averages of the maximums, minimums and periods detected and computed so far. Later these values are used to verify whether, for example, a newly detected maximum lies in the range of the predicted maximum.
Working Phase: During this phase the detection of a periodic curve is performed. For every newly detected maximum or minimum the algorithm first checks whether the reached maximum or minimum lies in the range of the predicted maximum or minimum, respectively. Next it verifies whether the period (the difference in timestamps between the new and the last detected maximum or minimum) lies in the range of the predicted period. When all of these conditions are true, the given curve is identified as representing a periodic movement. The values of the three parameters predicted maximum, predicted minimum and predicted period are updated every time a new maximum or minimum is detected. An implementation of this algorithm in pseudo code is given in figure 5.
The values of the parameters predicted maximum, predicted minimum and predicted period are computed as the averages of the last three maximums, minimums and periods, respectively. If not enough data is available, the values are computed as the averages of the maximums, minimums and periods detected and computed so far. The thresholds of the valid ranges around the predicted maximum, predicted minimum and predicted period have been determined empirically. Through variation of these thresholds the degree of accuracy of the desired periodic movement can be established.

for every new incoming data point {
    if (maximum detected) {
        if (no predicted maximum computed yet) {
            compute predicted maximum
        } else {
            if (new maximum lies in the range of the predicted maximum) {
                if (no predicted period computed yet) {
                    compute predicted period
                } else {
                    compute new period
                    if (new period lies in the range of the predicted period) {
                        set periodic_movement to true
                    } else {
                        set periodic_movement to false
                    }
                    update predicted period with the new period
                }
            }
            update predicted maximum with the new maximum
        }
    }
    ... do the same steps if a minimum has been detected
}

Fig. 5. Pseudo code for an algorithm to detect periodic curves

Now that the robot's position and the SIFT features representing the robot have been determined in both camera images, the next task to solve is the 3D localisation of the robot.
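Before turning to the 3D localisation, the periodicity test of Fig. 5 can also be written as a few lines of Python. The class below is a simplified rendering under two stated assumptions: a fixed relative tolerance band replaces the empirically determined thresholds, and the periods of maxima and minima are pooled rather than tracked separately; all names are ours.

class PeriodicityDetector:
    """Online check whether a distance-to-start curve is periodic (cf. Fig. 5)."""

    def __init__(self, tolerance=0.2):
        self.tol = tolerance                       # assumed relative tolerance band
        self.maxima, self.minima, self.periods = [], [], []
        self.last_extremum_time = None
        self.periodic = False

    def _in_range(self, value, predicted):
        return predicted is not None and abs(value - predicted) <= self.tol * abs(predicted)

    def on_extremum(self, value, t, kind):
        """Call whenever a maximum ('max') or minimum ('min') of the curve is detected."""
        store = self.maxima if kind == 'max' else self.minima
        predicted = sum(store[-3:]) / len(store[-3:]) if store else None
        if self._in_range(value, predicted):
            if self.last_extremum_time is not None:
                period = t - self.last_extremum_time
                predicted_period = sum(self.periods[-3:]) / len(self.periods[-3:]) if self.periods else None
                self.periodic = self._in_range(period, predicted_period)
                self.periods.append(period)        # update the predicted period
        store.append(value)                        # update the predicted extremum
        self.last_extremum_time = t
        return self.periodic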
5 3D Localisation of a Robot
The problem of 3D localisation is very common in the computer vision community. Depending on how much information about the scene, the relative camera positions and the camera calibration is provided, different algorithms have been developed to solve this task. For demonstration purposes a very simple experimental environment has been chosen. The two cameras were positioned at a distance of 30 cm
Fig. 6. Two stereo images taken from two cameras. The red points depict the corresponding SIFT features
from each other, facing the blackboard. The camera orientations and calibrations were known. The robot was moving in front of the blackboard in the view of both cameras. Figure 6 shows two example images taken from the cameras. The corresponding SIFT features in the two images were used as input to the triangulation algorithm. Due to measurement errors the backprojected rays usually do not intersect in space; thus their intersection was estimated as the point of minimum distance from both rays. Figure 7 shows the triangulation results from the top view. The red points correspond to the 3D reconstruction computed at timestamp t and the green points correspond to the reconstruction computed at timestamp t + 1. Due to the chosen simple environment it is easy to recognise which of the reconstructed points belong to which parts of the scene. The obvious procedure to compute the robot's 3D position would be the following:
– Reconstruction of the SIFT features which lie inside the robot's rectangle.
– Computation of the average of the reconstructed 3D points.
Unfortunately one problem with this approach arises. Depending on the form of the robot it happens that some SIFT features inside the robot's rectangle do not represent the robot but lie on the background instead. Thus the simple average of the reconstructed 3D points often does not approximate the true robot position well. This is especially the case if the distance between the robot and the background is large. Hence, before computing the 3D reconstruction of the SIFT features it is necessary to remove outliers. An outlier is here defined as a feature which lies
Fig. 7. Triangulation of the corresponding SIFT features detected in the two stereo images
on the background. This can be done efficiently by sorting out features whose position in two consecutive images taken from one camera did not change.
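The closest-point estimate mentioned above (the backprojected rays do not intersect exactly, so the midpoint of their shortest connecting segment is used) has a simple closed form. The sketch below assumes known projection centers c1, c2 and ray directions d1, d2 for a pair of corresponding SIFT features; the function name is ours.

import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the shortest segment between the rays x = c1 + t1*d1 and x = c2 + t2*d2."""
    c1, d1, c2, d2 = (np.asarray(v, float) for v in (c1, d1, c2, d2))
    # Normal equations for minimising |(c1 + t1*d1) - (c2 + t2*d2)|^2; singular if the rays are parallel.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))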
6 Conclusions
In this paper a behavior based object detection algorithm has been presented. The developed method allows the detection of robots in the scene independently of their appearance and shape. The only requirement imposed on the robot is that it performs a periodic, recurring movement. The presented algorithm circumvents the usual requirement of creating a model of the robot of interest. Therefore it is very well suited for applications where the creation of a model is not affordable or practicable. Moreover, the presented method builds a preliminary step for an online learning algorithm: once the position of the robot is known, a system can start to learn the appearance and the shape of the robot online. Having the model, a wide range of already developed algorithms can then be used, for example for tracking or navigation. Although the system described in this paper was trained to recognise periodic movements, it can easily be modified to recognise other movement patterns. One possible utilisation would then be the recognition of thieves in a shop or in a parking lot based on their suspicious behavior.
References
1. Ali, A., Aggarwal, J.: Segmentation and recognition of continuous human activity. In: IEEE Workshop on Detection and Recognition of Events in Video, pp. 28–35 (2001)
2. Avidan, S.: Support vector tracking. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 184–191 (2001)
3. Bashir, F., Schonfeld, D., Khokhar, A.: Segmented trajectory based indexing and retrieval of video data. In: IEEE International Conference on Image Processing, ICIP 2003, Barcelona, Spain (2003)
4. Broida, T., Chellappa, R.: Estimation of object motion parameters from noisy images. IEEE Trans. Patt. Analy. Mach. Intell. 1, 90–99 (1986)
5. Chen, W., Chang, S.F.: Motion trajectory matching of video objects. IS&T/SPIE, San Jose, CA (January 2000)
6. Comaniciu, D., Ramesh, V., Meer, P.: Kernel-based object tracking. IEEE Trans. Patt. Analy. Mach. Intell. 25, 564–575 (2003)
7. Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification, 2nd edn. A Wiley-Interscience Publication, Chichester (2001)
8. Edwards, G., Taylor, C., Cootes, T.: Interpreting face images using active appearance models. In: International Conference on Face and Gesture Recognition, pp. 300–305 (1998)
9. Harris, C., Stephens, M.: A combined corner and edge detector. In: 4th Alvey Vision Conference, pp. 147–151 (1988)
10. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, Cambridge
11. Lowe, D.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vision 60(2), 91–110 (2004)
12. Mikolajczyk, K., Schmid, C.: A performance evaluation of local descriptors. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1615–1630 (2003)
13. Moravec, H.: Visual mapping by a robot rover. In: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pp. 598–600 (1979)
14. Papageorgiou, C., Oren, M., Poggio, T.: A general framework for object detection. In: IEEE International Conference on Computer Vision (ICCV), pp. 555–562 (1998)
15. Serby, D., Koller-Meier, S., Gool, L.V.: Probabilistic object tracking using multiple features. In: IEEE International Conference of Pattern Recognition (ICPR), pp. 184–187 (2004)
16. Shafique, K., Shah, M.: A non-iterative greedy algorithm for multi-frame point correspondence. In: IEEE International Conference on Computer Vision (ICCV), pp. 110–115 (2003)
17. Shi, J., Tomasi, C.: Good features to track. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 593–600 (1994)
18. Stauffer, C., Grimson, W.E.L.: Adaptive background mixture models for real-time tracking. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, IEEE Comput. Soc., Los Alamitos (1999)
19. Trucco, E., Verri, A.: Introductory Techniques for 3D Computer Vision. Prentice-Hall Inc., Englewood Cliffs (1998)
20. Veenman, C., Reinders, M., Backer, E.: Resolving motion correspondence for densely moving points. IEEE Trans. Patt. Analy. Mach. Intell. 23(1), 54–72 (2001)
21. Viola, P., Jones, M., Snow, D.: Detecting pedestrians using patterns of motion and appearance. In: IEEE International Conference on Computer Vision (ICCV), pp. 734–741 (2003)
22. Yilmaz, A., Javed, O., Shah, M.: Object tracking: A survey. ACM Comput. Surv. 38(4), Article 13 (2006)
Direct Pose Estimation with a Monocular Camera
Darius Burschka and Elmar Mair
Department of Informatics, Technische Universität München, Germany
{burschka,elmar.mair}@mytum.de
Abstract. We present a direct method to calculate a 6DoF pose change of a monocular camera for mobile navigation. The calculated pose is estimated up to a constant unknown scale parameter that is kept constant over the entire reconstruction process. This method allows a direct calculation of the metric position and rotation without any necessity to fuse the information in a probabilistic approach over a longer frame sequence, as is the case in most currently used VSLAM approaches. The algorithm provides two novel aspects to the field of monocular navigation. It allows a direct pose estimation without any a-priori knowledge about the world directly from any two images, and it provides a quality measure for the estimated motion parameters that allows the resulting information to be fused in Kalman filters. We present the mathematical formulation of the approach together with experimental validation on real scene images.
1 Motivation
Localization is an essential task in most applications of a mobile or a manipulation system. It can be subdivided into the two categories of initial (global) localization and relative localization. While the former requires an identification of known reference structures in the camera image to find the current pose relative to a known, a-priori map [9], often it is merely necessary to register correctly the relative position changes in consecutive image frames. Many localization approaches for indoor applications use simplifications like assumptions about planarity of the imaged objects in the scene, or assume a restricted motion in the ground plane of the floor that allows the metric navigation parameters to be derived from differences in the images using Image Jacobians in vision-based control approaches. A true 6DoF localization requires a significant computational effort to calculate the parameters while solving an octal polynomial equation [8] or estimating the pose with a Bayesian minimization approach utilizing intersections of uncertainty ellipsoids to find the true position of the imaged points from a longer sequence of images [3]. While the first solution still requires sampling to find the true solution of the equation due to the high complexity of the problem, the second one can calculate the result only after a
motion sequence with strongly varying direction of motion of the camera that helps to reduce the uncertainty about the position of the physical point. Commonly used solutions to this problem are structure-from-motion approaches like the eight-point algorithm [7] and its derivatives using as few as five corresponding points between the two images. Another approach is to use homographies to estimate the motion parameters in case the corresponding points all lie on a planar surface. These approaches provide a solution for the motion parameters (R, T - rotation matrix, translation vector) but are very sensitive to ill-conditioned point configurations and to outliers. Usually, the noise in the point detection does not allow ill-conditioned cases to be detected, and the system needs to cope with a wrong estimate without any additional information about the covariance of the estimated values. Our method follows the idea of metric reconstruction from the absolute conic Ω∞ [11] for the calculation of the pose parameters for any point configuration, providing correct covariance values for all estimates. Our contribution is a robust implementation of the algorithm suppressing outliers in the matches between the two images using RANSAC and a correct calculation of covariance values for the resulting pose estimation. Since the projection in a monocular camera results in a loss of one dimension, the estimation of the three-dimensional parameters in space usually requires a metric reference to the surrounding world, a 3D model, that is used to scale the result of the image processing back to Cartesian coordinates. Assuming that the projective geometry of the camera is modeled by perspective projection [6], a point ᶜP = (x, y, z)^T, whose coordinates are expressed with respect to the camera coordinate frame c, will project onto the image plane with coordinates p = (u, v)^T, given by

p = (u, v)^T = π(ᶜP) = (f/z) (x, y)^T.      (1)

Points in the image correspond to an internal model of the environment that is stored in 3D coordinates ᵐP. Localization with a monocular camera system is in this case formulated as estimation of the transformation matrix ᶜx_m such that an image point p = (u, v)^T corresponds to an actual observation in the camera image for any visible model point ᵐP:

p = π(ᶜx_m(ᵐP)).      (2)

In many cases the initial position of the system is known or not relevant, since the initial frame may define the starting position. We propose a navigation system that operates in the image space of the camera for this domain of applications. The system maintains the correspondences between the "model" and the world based on simple landmark tracking, since both the model and the perception are defined in the same coordinate frame of the camera projection p = (u, v)^T. The paper is structured as follows: in the following section we present the algorithms used for the estimation of the 3D rotation matrix from the points on the "horizon" and the way how the tracked points are classified if far away from
the camera. We describe the way the system processes the image data to estimate the translational component in all three degrees of freedom. This estimation is possible only up to a scale. In Section 3 we evaluate the accuracy of the system for different motion parameters. We conclude with an evaluation of the system performance and our future plans.
2 Imaging Properties of a Monocular Camera
As mentioned already above, a projection in a camera image results in a reduction of the dimensionality of the scene. The information about the radial distance to the point λi is lost in the projection (Fig. 1).
Fig. 1. Radial distance λi is lost during the camera projection. Only the direction ni to the imaged point can be obtained from the camera image.
Only the direction vector n_i can be calculated from the camera projection:

k_i = ( (u_{p_i} − C_x)·s_x / f,  (v_{p_i} − C_y)·s_y / f,  1 )^T   ⇒   n_i = k_i / ||k_i||,      (3)

with (u_{p_i}, v_{p_i}) being the pixel coordinates of a point. (C_x, C_y) represent the position of the optical center in the camera image, (s_x, s_y) are the pixel sizes on the camera chip, and f is the focal length of the camera. This transformation removes all dependencies on the calibration parameters, like the parameters of the actual optical lens and its mount relative to the camera image. It transforms the image information into a pure geometric line-of-sight representation. An arbitrary motion of the camera in space can be defined by the three translation parameters T = (X, Y, Z)^T and a rotation matrix R. The motion can be observed as an apparent motion of a point from a position P to a position P′:

P′ = R^T P − T.      (4)
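Equation (3) is a direct normalization and can be sketched in a few lines. The intrinsic parameters below are placeholder values chosen only for illustration.

import numpy as np

CAMERA = dict(cx=320.0, cy=240.0, sx=11e-6, sy=11e-6, f=8e-3)   # assumed intrinsics

def direction_vectors(pixels, cx, cy, sx, sy, f):
    """Eq. (3): map pixel coordinates (u, v) to unit line-of-sight vectors n_i."""
    u, v = np.asarray(pixels, float).T
    k = np.stack([(u - cx) * sx / f, (v - cy) * sy / f, np.ones_like(u)], axis=1)
    return k / np.linalg.norm(k, axis=1, keepdims=True)

# e.g. direction_vectors([(400, 260), (120, 300)], **CAMERA)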
Optical flow approaches have already proven that these two motions result in specific shifts in the projected camera image. The resulting transformation on the imaged points corresponds to a sum of the motions induced by the rotation R of the system and the consecutive translation T. Since we can write any point
in the world according to Fig. 1 as a product of its direction of view n_i and the radial distance λ_i, we can write (4) in the following form:

λ′_i n′_i = λ_i R^T n_i − T.      (5)
If we were able to neglect one of the influences on the point projections, then we should be able to estimate the single motion parameters.
2.1 Estimation of the Rotation
There are several methods to estimate the rotation of a camera from a set of two images. Vanishing points can be used to estimate the rotation matrix [11]; the idea is to use points at infinity that do not experience any image shifts due to translation. In our approach, we use a different way to estimate the rotation, which converts the imaged points into their normalized direction vectors n_i (Fig. 1). This allows a direct calculation of the rotation as described below and additionally provides an easy extension to omnidirectional cameras, where the imaged points can lie on a half-sphere around the focal point of the camera. We describe the algorithm in more detail in the following text. The camera quantizes the projected information with the resolution of its chip (s_x, s_y). We can tell from Eq. (1) that a motion ΔT becomes less and less observable for large values of z. That means that points distant to the camera experience only small shifts in their projected position due to the translational motion of the camera (Fig. 2).
Fig. 2. The change in the imaged position for a similar motion in 3D changes with the distance of the observed point to the camera
The motion becomes undetectable once the resulting image shift falls below the detection accuracy of the camera. Depending on the camera chip resolution (s_x, s_y) and the motion T_m between two frames, we can define the set of such points {P_k^∞} as points at a distance of at least

Z_∞ = f T_m / s_x.      (6)
We assumed in (6) that the pixels (s_x, s_y) are square; otherwise the smaller of the two values needs to be taken into account. Additionally, we assumed that the
largest observed motion component occurs for motions coplanar to the image plane and that the feature detection works with an accuracy of up to one pixel. We see that for a typical camera setup with a pixel size of s_x = 11 μm and a focal length of f = 8 mm, objects as close as 14 m cannot be detected at motion speeds lower than T_m = 2 cm/frame. This is a typical speed of a mobile robot in the scene. Typically, this value is significantly smaller because the motion component parallel to the image plane for a camera looking forward is smaller (Fig. 3).
Fig. 3. Motion component Tm that can be observed in the image
We can calculate the motion value T_m from the actual motion ΔT of the vehicle in Fig. 3 as

T_m = (cos γ / cos ϕ) ΔT.      (7)

Therefore, the typical observed motion T_m is smaller than the value assumed above for a motion observed by a camera looking parallel to the motion vector ΔT; that would correspond to a camera looking to the side. An important observation is that T_m is not only scaled by the distance to the observed point, but the angle γ is also important for the actual observability of a point motion. We see from this equation directly that a radial motion, as is the case for points along the optical center, but also any other motion along the line of projection, lets a point become part of the set {P_k^∞} of points from which the translation cannot be calculated. We use this observation to subdivide the observed points in the camera image into points which are at a distance larger than Z_∞ in (6) and those that are closer than this value. We use the Kanade-Lucas tracker (KLT) to track image points in a sequence of images acquired by a monocular camera system. The scene depicted in Fig. 4 contains both physical points {P_l^T} for which both motion types, translation and rotation, can be observed, and points {P_k^∞} for which we observe only the result of the rotation. Before we explain how we decide which points belong to which set, we make the observation that for the point set {P_k^∞} equation (5) simplifies to

∀ {P_k^∞}:  λ′_i n′_i = λ_i R^T n_i,  with λ′_i ≈ λ_i   ⇒   n′_i = R^T n_i.      (8)

It is known that the matrix R̃ and the vector T between two 3D point sets {P_i} and {P_i*} can be recovered by solving the following least-square problem for N landmarks
Direct Pose Estimation with a Monocular Camera
445
Fig. 4. We use Kanade-Lucas tracker (KLT) to track points in a sequence of outdoor images
min_{R̃,T} Σ_{i=1}^{N} || R̃ P_i + T − P_i* ||²,   subject to R̃^T R̃ = I.      (9)
Such a constrained least-squares problem can be solved in closed form using quaternions [5,10], or singular value decomposition (SVD) [4,1,5,10]. The SVD solution proceeds as follows. Let {P_i} and {P_i*} denote lists of corresponding vectors and define

P̄ = (1/n) Σ_{i=1}^{n} P_i,   P̄* = (1/n) Σ_{i=1}^{n} P_i*,      (10)
that is, P̄ and P̄* are the centroids of {P_i} and {P_i*}, respectively. Define

P_i′ = P_i − P̄,   P_i*′ = P_i* − P̄*,      (11)

and

M̃ = Σ_{i=1}^{n} P_i*′ P_i′^T.      (12)
In other words, (1/n) M̃ is the sample cross-covariance matrix between {P_i} and {P_i*}. It can be shown that, if R̃*, T* minimize (9), then they satisfy
(13)
Since we subtract the mean value P¯ in (11), we remove any translation component leaving just a pure rotation. ˜ , Σ, ˜ V˜ ) be a SVD of M. ˜ Then the solution to (9) is Let (U ˜U ˜T ˜∗ = V R
(14)
446
D. Burschka and E. Mair
We use the solution in equation (14) as a least-square estimate for the rotation matrix R in (8). The 3D direction vectors {ni } and {ni } are used as the point → sets {Pi } and {Pi∗ } in the approach described above. It is important to use the − ni → − vectors and not the ki vectors from (3) here. The reader may ask, how can we make this distinction which points belong to {Pk∞ } not knowing the real radial distances λi to the imaged points like the ones depicted in Fig. 4? It is possible to use assumptions about the field of view of the camera to answer this question, where points at the horizon appear in well defined parts of the image. We apply a generic solution to this problem involving the Random Sample Consensus (RANSAC). It was introduced by Bolles and Fischler [2]. RANSAC picks a minimal subset of 3 tracked points and calculates the least-square approximation of the rotation matrix for them (14). It enlarges the set with consistent data stepwise. We used following Lemma to pick consistent points for the calculation of R. Lemma: All points used for the calculation of R belong to the set {Pk∞ }, iff the calculated R warps the appearance of the point features {ni } to the appearance in the second image {ni }. The resulting image error for correct points goes to zero after applying the 3D rotation matrix to their corresponding direction vectors. Note that we estimate here directly the 3 rotational parameters of the camera system and not the rotation in the image space like most other approaches. We can validate it at the following example in Fig. 5. We generated a set of feature points in a close range to the camera in a distance 1-4m in front of the camera. These points are marked with (+) in the images. A second set of points was placed in a distance 25-35m in front of the camera and these points are marked with (x). We estimated the rotational matrix from a set of direction vectors ni estimated with RANSAC that matched the (x) points. These points represent the {Pk∞ } set of our calculation. The calculated 3D rotation matrix had a remaining error of (−0.1215◦, 0.2727◦, −0.0496◦). We used the rotation matrix to compensate all direction vectors in the second image {ni } for the influence of rotation during the camera motion. Fig. 6 3
3 2.5
2 2 1
1.5 1
0 0.5 −1
0 −0.5
−2 −1 −3 −3
−2
−1
0
1
2
3
4
−1.5 −3
−2
−1
0
1
2
3
Fig. 5. Two synthetic sets of feature positions with close (+) and distant points (x) with a camera translation by ΔT = (0.3, 0.2, 0.4)T [m] and a rotation around the axes by (10◦ , 2◦ , 5◦ )
Direct Pose Estimation with a Monocular Camera
447
3
2
1
0
−1
−2
−3 −3
−2
−1
0
1
2
3
4
Fig. 6. Features from Fig. 5 compensated for rotation estimated from (x) feature positions
depicts the positions of all features after compensation of the rotation influence. We see that as we already expected above the distant features were moved to their original positions. These features do not give us any information about the translation. They should not be used in the following processing. The close features (marked with ’+’) in Fig. 6 still have a residual displacement in the image after the compensation of the rotation that gives us information about the translation between the two images. 2.2
Estimation of the Translation
In the equation (5), we identified two possible motion components in the images. It is known that the 3D rotation of the camera relative to the world can always be calculated, but the distances to the imaged points can only be calculated once a significant motion between the frames is present (see Fig. 6). We need some remaining points with significant displacement between the images (after they were compensated for rotation) to estimate the translation parameters. We estimate the translation of the camera from the position of the epipoles.
Fig. 7. In case that the camera is moved by a pure translation, the direction of the epipole T defines the direction of the translational motion
448
D. Burschka and E. Mair
We know from Fig. 1 that the camera measures only the direction ni to the imaged point but looses the radial distance λi . Therefore, we know only the direction to both points (P1 , P2 ) for the left camera position in Fig. 7. If we find correspondences in both cameras then we know that the corresponding point lies somewhere on a line starting at the projection of the focal point of the right camera in the left image (epipole) and that it intersects the image plane of the right camera in the corresponding point. The projection of this line in the left image is called the epipolar line. We know that both corresponding points lie on the epipolar line and that the epipole defines the beginning of the line segment. Therefore, it is possible to find the position of the epipole by intersecting two such lines since all epipolar lines share the same epipole. Since the points (Pi , F1 , F2 ) define a plane, all possible projections of the given point while travelling along the line F1 F2 will appear on the epipolar line. We need at least two corresponding point pairs in the {PlT } to estimate the position of the epipole in the image. For each of the line pairs, we calculate → − − → the following equation using the 2D projections kip of the ki vectors from the equation (3) on the image plane. As simplification, we write in the following → − equation (15) ki for them, but we mean the 2D projections. → − − → − − → − → → − → ka+ μ ∗ (ka − ka ) = kb + ν ∗ ( kb − kb ) μ → − → − → −1 − → − → → − − · ka − kb = ka − ka kb − kb ν
(15)
We can estimate the image position of the epipole Ep from the estimated val→ − ues (μ, ν) using the 3D versions of the vectors ki to for example: → − − → → − → − Ep = ka + μ ∗ (ka − ka )
(16)
Note that the position of the epipole is identical for both images after the compensation of rotation. The epipole defines the direction of motion between the two cameras as depicted in Fig. 7. It is a known fact that we can reconstruct the motion parameters only up to an unknown scale from a monocular camera. Therefore, all we can rely on is the direction of motion c − → − → T = − → Ep ||Ep ||
(17)
The value c in (17) can have a value of ±1 depending on the direction in which the corresponding points move in the images. If the corresponding point moves away from the epipole then we assume c=1 and if the corresponding point moves toward the epipole then we assume c=-1. This explains, why we get two identical epipoles for both translated images. These images have similar translation vectors. The only difference is the sign (c-value). For the example from Fig. 6, we calculate the position of the epipole to be −→ [Ep = (0.7, 0.36, 1)T that matches exactly the direction of the motion that we used to create the data. We can get multiple solutions for the position of the
Direct Pose Estimation with a Monocular Camera
449
epipole in the image. The different results originate from intersection of different line lengths, which allows to pick the lines with the longest image extension to reduce the detection error and increase the accuracy. A better option is to use a weighted ωi average of the estimated values Ep i . The weight ωi depends on the minimal length li of the both line segments and it is calculated to:
$$\omega_i = \begin{cases} \dfrac{l_i}{L}, & l_i < L \\[4pt] 1, & l_i \ge L \end{cases} \qquad\qquad \vec{E}_p = \frac{\sum_i \omega_i\,\vec{E}_{p,i}}{\sum_i \omega_i} \qquad (18)$$
The value L is chosen depending on the actual accuracy of the feature detection. We set it to L = 12 pixels in our system, which means that only displacements larger than 12 pixels are considered accurate for the translation estimation.
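To make the estimation procedure of equations (15)-(18) concrete, the following sketch (not the authors' implementation; a minimal NumPy illustration) assumes that each rotation-compensated correspondence is given as a pair of 2D image points, so that each pair spans one line of the translational flow field; the helper names, the exhaustive pairing strategy and the threshold default are assumptions of this sketch.

```python
import numpy as np

def intersect_lines(pa, pa2, pb, pb2):
    """Intersect line a (through pa, pa2) with line b (through pb, pb2).
    Solves pa + mu*(pa2 - pa) = pb + nu*(pb2 - pb), cf. Eq. (15)."""
    A = np.column_stack((pa2 - pa, pb - pb2))        # 2x2 system matrix
    if abs(np.linalg.det(A)) < 1e-9:                 # nearly parallel lines
        return None, 0.0
    mu, _ = np.linalg.solve(A, pb - pa)
    ep = pa + mu * (pa2 - pa)                        # intersection = epipole estimate, Eq. (16)
    length = min(np.linalg.norm(pa2 - pa), np.linalg.norm(pb2 - pb))
    return ep, length

def weighted_epipole(pairs, L=12.0):
    """Weighted average of all pairwise intersections, cf. Eq. (18).
    'pairs' is a list of (p, p2) point pairs; assumes at least one
    non-parallel line pair."""
    estimates, weights = [], []
    for i in range(len(pairs)):
        for j in range(i + 1, len(pairs)):
            ep, l = intersect_lines(*pairs[i], *pairs[j])
            if ep is None:
                continue
            estimates.append(ep)
            weights.append(min(l / L, 1.0))          # omega_i = l_i / L, capped at 1
    w = np.asarray(weights)
    return (w[:, None] * np.asarray(estimates)).sum(0) / w.sum()
```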
2.3 Calculation of the Translational Error
We need to estimate the covariance of the resulting pose estimate directly from the quality of the data in the images. It is obvious that the accuracy increases with the increasing length of the image vectors of the optical flow, because in these cases detection errors have a smaller influence. In general, the vectors of the translational field do not intersect in one point (the epipole) as in the ideal case, but in an area around the epipole. This area gets smaller if the angle between the vectors is close to 90° and increases the more the vectors become parallel to each other. Parallel vectors intersect at infinity and do not allow any robust motion estimation in our system. We can estimate the variance of the resulting translational vector from the variance of the intersection points of the vector pairs of the translational vector field. It is very important to provide this value as an output of the system, because the calculated translational value may be ill-conditioned due to very short, almost parallel vectors that intersect in a large area around the actual epipole position.
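As a rough sketch of this uncertainty measure (again an illustration only, reusing the hypothetical intersect_lines helper from the previous listing), the scatter of the pairwise intersection points around their weighted mean can serve as a covariance of the epipole estimate:

```python
def epipole_covariance(pairs, L=12.0):
    """Weighted sample covariance of the pairwise intersection points."""
    pts, w = [], []
    for i in range(len(pairs)):
        for j in range(i + 1, len(pairs)):
            ep, l = intersect_lines(*pairs[i], *pairs[j])
            if ep is not None:
                pts.append(ep)
                w.append(min(l / L, 1.0))
    pts, w = np.asarray(pts), np.asarray(w)
    mean = (w[:, None] * pts).sum(0) / w.sum()
    d = pts - mean
    return (w[:, None, None] * np.einsum('ni,nj->nij', d, d)).sum(0) / w.sum()
```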
3 Results
We applied the presented system to a variety of images in indoor and outdoor scenarios. The system was able to calculate the rotation matrix R in all cases, and the direction of the translation depending on the matching criteria in the RANSAC process, which determines the inliers (points used for rotation) and the outliers based on the compensation of the rotation and the evaluation of the resulting re-projection error. This matching can allow deviations in the position of the re-projected points. In the case of poor camera calibration and/or poor feature detection, the matching threshold must be set to higher values allowing larger variations, which automatically requires larger translations between the frames. We validated the system on synthetic and real images to estimate its accuracy.
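A sketch of this inlier/outlier separation is given below. It is not the authors' code: it assumes a hypothetical estimate_rotation(n1, n2) routine (for example a least-squares fit of R to sampled direction pairs, cf. references [1, 4, 5]), unit direction vectors stored row-wise for both frames, and freely chosen threshold and iteration parameters; the error measure is a simplified stand-in for the re-projection error described above.

```python
import numpy as np

def split_rotation_translation(n1, n2, estimate_rotation, thresh=0.01, iters=200):
    """RANSAC-style split: points explained by a pure rotation become inliers
    (used for rotation); the remaining points carry the translational flow."""
    best_inliers = np.zeros(len(n1), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        sample = rng.choice(len(n1), size=3, replace=False)
        R = estimate_rotation(n1[sample], n2[sample])
        err = np.linalg.norm(n2 - n1 @ R.T, axis=1)   # residual after rotation compensation
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers, ~best_inliers                # rotation points, translation points
```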
3.1 Simulation Results
We estimated the accuracy of the algorithm assuming sub-pixel accuracy of the point detection in the images. We added white noise with σ² = 0.05 to the ideal values and 20 outliers to the matches provided to the algorithm, and validated the approach on 2000 images. We observed the following accuracies:

average number of matches                                           89.7725
average rotation error around the x-axis                            0.0113°
average rotation error around the y-axis                            0.0107°
average rotation error around the z-axis                            0.0231°
number of failed estimates of rotation                              0 of 2000
number of failed estimates of translation                           0 of 2000
average error in the direction of the estimated translation vector  2.5888°
The system provides robust estimates of the pose even in the presence of outliers, which is an important extension over typical structure-from-motion algorithms.

3.2 Tests on Real Images
We tested the system using a calibrated digital camera with which we acquired sequences of images in different scenarios. We ran a C++ implementation of the KLT tracker on these sequences, which created a list of tracked positions for the single image features. We used these lists in Matlab to evaluate the rotation and translation parameters.
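For reference, such tracked position lists can be produced with the pyramidal KLT implementation in OpenCV; the snippet below is only one possible way to generate them (it is not the tracker used by the authors), the parameter values are arbitrary, and lost features are simply not extended further, a simplification of a real tracking pipeline.

```python
import cv2

def track_sequence(image_files):
    """Runs pyramidal KLT tracking over a list of image files and returns,
    for each feature, the list of its tracked image positions."""
    prev = cv2.imread(image_files[0], cv2.IMREAD_GRAYSCALE)
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=300, qualityLevel=0.01, minDistance=7)
    tracks = [[tuple(p.ravel())] for p in pts]
    for fname in image_files[1:]:
        curr = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)
        for tr, p, ok in zip(tracks, nxt, status.ravel()):
            if ok:
                tr.append(tuple(p.ravel()))
        pts, prev = nxt, curr
    return tracks
```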
Fig. 8. Horizontal sweep of the camera with mostly rotation
For the sequence in Fig. 8, we obtained the following results:

rotation (α, β, γ) in [°]      inlier   outlier   error
(2.132, -0.077, 0.108)           49        5      0.003066
(4.521, -0.007, 0.0751)          11        7      0.003874
(1.9466, 0.288, -0.071)          56        6      0.004
We tested the estimation of significant rotation around all axes in Fig. 9 with the following results:
rotation (α, β, γ) in [°]      inlier   outlier   error
(1.736, -2.3955, 6.05)           24        1      0.0024
(1.08, 0.64, 4.257)              87        0      0.0012
(2.2201, -0.2934, 6.7948)        56        6      0.004
Fig. 9. Rotation around several axes simultaneously
Fig. 10. Two screenshots of our working navigation system, showing the results for two scene examples: (top) a drive through the Garching campus, (bottom) a drive through the city of Garching
The second sequence in Fig. 9 shows the ability of the system to correctly capture the rotation of the camera in all three rotational degrees of freedom in each frame, without any iterations being necessary. Due to the page limit, we cannot show our further results in indoor scenarios, where we were able to provide some significant translation that allows a simultaneous estimation of the translation direction. We tested the system on real images taken from a car driven on our campus and through the city of Garching close to Munich. Fig. 10 depicts the results of real navigation in a city and a campus scenario. The red points in the top left images are the points used to estimate the rotation parameters, while the green points are the points used to estimate the translation. The top right image shows the resulting translational optical flow and the position of the estimated epipole as a black circle with a cross. The estimated motion parameters with their accuracies are also shown in Fig. 10.
4 Conclusions and Future Work
We presented a way to calculate pose and covariance values for camera-based navigation with a monocular camera. The approach calculates the 6DoF motion parameters directly from a pair of images and estimates the accuracy of the estimated values, which allows a correct fusion of the resulting values in higher-level fusion algorithms used for SLAM, such as Kalman filters. We provide the mathematical framework that allows us to subdivide tracked feature points into two sets: points that are close to the sensor, whose position in the image changes due to a combined rotation and translation, and points that are far away and experience changes only due to the rotation of the camera. The presented system was tested on synthetic and real-world images, proving the validity of this concept. The system is able to calculate the 6DoF parameters in every camera step without any need for an initialization of the scene structure or an initial filtering of uncertainties. In the next step, we will test the translation estimates using external references for the motion of the real camera system and integrate the sensor into our VSLAM system.
References

1. Arun, K.S., Huang, T.S., Blostein, S.D.: Least-squares fitting of two 3-D point sets. IEEE Trans. Pattern Anal. Machine Intell. 9, 698–700 (1987)
2. Fischler, M., Bolles, R.: Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. ACM 24(6), 381–395 (1981)
3. Davison, A.J.: Real-Time Simultaneous Localisation and Mapping with a Single Camera. In: Proc. International Conference on Computer Vision, vol. 2, pp. 1403–1412 (2003)
4. Horn, B.K.P.: Closed-form solution of absolute orientation using orthonormal matrices. J. Opt. Soc. Amer. 5, 1127–1135 (1988)
5. Horn, B.K.P.: Closed-form solution of absolute orientation using unit quaternions. J. Opt. Soc. Amer. A-4, 629–642 (1987)
6. Horn, B.K.P.: Robot Vision. MIT Press, Cambridge (1986)
7. Longuet-Higgins, H.C.: A computer algorithm for reconstructing a scene from two projections. Nature 293(10), 133–135 (1981)
8. Nister, D.: A Minimal Solution to the Generalised 3-Point Pose Problem. In: CVPR 2004 (2004)
9. Thrun, S.: Learning metric-topological maps for indoor mobile robot navigation. Artificial Intelligence 99(1), 21–71 (1998)
10. Walker, M.W., Shao, L., Volz, R.A.: Estimating 3-D location parameters using dual number quaternions. CVGIP: Image Understanding 54(3), 358–367 (1991)
11. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, Cambridge
Differential Geometry of Monogenic Signal Representations

Lennart Wietzke¹, Gerald Sommer¹, Christian Schmaltz², and Joachim Weickert²

¹ Institute of Computer Science, Chair of Cognitive Systems, Christian-Albrechts-University, 24118 Kiel, Germany
² Mathematical Image Analysis Group, Faculty of Mathematics and Computer Science, Building E1 1, Saarland University, 66041 Saarbrücken, Germany
Abstract. This paper presents the fusion of monogenic signal processing and differential geometry to enable the monogenic analysis of local intrinsic 2D features of low-level image data. New rotationally invariant features such as the structure and the geometry (angle of intersection) of two superimposed intrinsic 1D signals will be extracted without the need for any steerable filters. These features are important for all kinds of low-level image matching tasks in robot vision because they are invariant against local and global illumination changes and result from one unique framework within the monogenic scale-space. Keywords: Low-level Image Analysis, Differential Geometry, Geometric (Clifford) Algebra, Monogenic Curvature Tensor, Monogenic Signal, Radon Transform, Riesz Transform, Hilbert Transform, Local Phase Based Signal Processing.
1 Introduction
Local image analysis is the first step and therefore very important for detecting key points in robot vision. One aim is to decompose a given signal into as many features as possible. In low-level image analysis it is necessary not to lose any information but to be invariant against certain features. Two similar key points (i.e., two crossing lines, see figure 1) can have different main orientations in two images, but they should have the same structure (phase) and apex angle (angle of intersection) in both images. The local phase ϕ(t) of an assumed signal model cos(ϕ(t)) and the apex angle of two superimposed signals are rotationally invariant features of images and therefore very important for matching tasks in robot vision. Two superimposed patterns are of great interest for analysis because they are the most frequently appearing 2D structures in images (after intrinsic 0D and 1D signals). In this paper we present how to decompose 2D image signals which locally consist of two intrinsic 1D signals. To extract the essential features such as the main orientation, the apex angle and the phase of those signals, no heuristics will be applied in this paper, unlike in many other works [Stuke2004]; instead, a fundamental access to the local geometry of two superimposed 1D signals will be given. This paper is a first step and also provides the means to analyze an arbitrary number of superimposed signals in future work.

We acknowledge funding by the German Research Foundation (DFG) under the projects SO 320/4-2 and We 2602/5-1.
Fig. 1. This figure illustrates the relation of signal processing and differential geometry in ℝ³. Images will be visualized in their Monge patch embedding with the normal vector of the tangential plane always being at the point of origin (0, 0) ∈ ℝ². From left to right: the image pattern in Monge patch embedding and its corresponding assumed local curvature saddle point model. Note that the normal section in the direction of the two superimposed lines has zero curvature in the image pattern as well as in the curvature model.

In the following, 2D signals f ∈ L²(Ω, ℝ) will be analyzed in the Monge patch embedding (see figure 1)

$$I_f = \{\, x e_1 + y e_2 + f(x, y) e_3 \mid (x, y) \in \Omega \subset \mathbb{R}^2 \,\},$$

which is well known from differential geometry, with {1, e₁, e₂, e₃, e₁₂, e₁₃, e₂₃, e₁₂₃} as the set of basis vectors of the Clifford algebra ℝ₃ [Porteous1995]. A 2D signal f will be classified into local regions N ⊆ Ω of different intrinsic dimension (also known as codimension):

$$f \in \begin{cases} \mathrm{i0D}_N, & f(x_i) = f(x_j)\;\;\forall x_i, x_j \in N \\ \mathrm{i1D}_N, & f(x, y) = g(x\cos\theta + y\sin\theta)\;\;\forall (x, y) \in N,\; g \in \mathbb{R}^{\mathbb{R}},\; f \notin \mathrm{i0D}_N \\ \mathrm{i2D}_N, & \text{else} \end{cases} \qquad (1)$$

The set of signals analyzed in this work can be written as the superposition of two i1D signals, such as two crossing lines or a checkerboard:

$$f \in \left\{ f \in L^2(\Omega, \mathbb{R}) \;\middle|\; f(x) = \sum_{i\in\{1,2\}} f_i(x)\;\;\forall x \in N \subseteq \Omega,\; f_i \in \mathrm{i1D}_N \right\} \subset \mathrm{i2D} \qquad (2)$$
The set of i1D signals can be completely analyzed by the monogenic signal [Felsberg2001], which splits the signal locally into phase, orientation and amplitude information. Two superimposed i1D signals with arbitrary main orientation and phase can be analyzed by the structure multivector [Felsberg2002], with the significant restriction that the two signals must be orthogonal, i.e., their apex angle must be fixed to 90°. Recently the monogenic curvature tensor has been introduced by [Zang2007] to decompose i2D signals into main orientation and phase information; proofs and an exact signal model have not been given. This paper refers to [Felsberg2002, Felsberg2004, Zang2007] and fuses monogenic signal processing with differential geometry to analyze two superimposed i1D signals. The results of this work are proofs concerning the monogenic curvature tensor and, in addition, the exact derivation of the apex angle (the angle of intersection between two crossing lines) without the use of any heuristic. Future work will contain the analysis of an arbitrary number of superimposed i1D signals.
1.1 The 2D Radon Transform
In this paper the monogenic signal [Felsberg2001] and the monogenic curvature tensor [Zang2007] will be interpreted in Radon space, which gives direct access to analyzing the concatenation of Riesz transforms of any order. Remember that the Riesz transform of a 2D signal can be interpreted as the well-known Hilbert transform of 1D signals within the Radon space. Therefore, all odd-order Riesz transforms (i.e., the concatenation of an odd number of Riesz transforms) apply a one-dimensional Hilbert transform to multidimensional signals in a certain orientation, which is determined by the Radon transform [Beyerer2002]. The Radon transform is defined as:
$$r := r(t, \theta) := \mathcal{R}\{f\}(t, \theta) := \int_{(x,y)\in\Omega} f(x, y)\,\delta_0(x\cos\theta + y\sin\theta - t)\,d(x, y) \qquad (3)$$
with θ ∈ [0, π) as the orientation, t ∈ ℝ as the minimal distance of the line from the origin and δ₀ as the delta distribution (see figure 2). The inverse Radon transform exists and is defined by:
$$\mathcal{R}^{-1}\{r(t, \theta)\}(x, y) := \frac{1}{2\pi^2} \int_{\theta=0}^{\pi}\int_{t\in\mathbb{R}} \frac{1}{x\cos\theta + y\sin\theta - t}\,\frac{\partial}{\partial t} r(t, \theta)\,dt\,d\theta \qquad (4)$$

The point of interest, where the phase and orientation information should be obtained within the whole signal, will always be translated to the origin (0, 0) for each point (x, y) ∈ Ω, so that the inverse Radon transform can be simplified to:

$$\mathcal{R}^{-1}\{r\}(0, 0) = -\frac{1}{2\pi^2} \sum_{i\in I} \int_{t\in\mathbb{R}} \frac{1}{t}\,\frac{\partial}{\partial t} r(t, \theta_i)\,dt \qquad (5)$$

This simplification is an important new result, because the integral over θ can be replaced by the finite sum over the discrete angles θi, which enables modeling the superposition of an arbitrary number of i1D signals. The integral over θ vanishes
Fig. 2. All the values lying on the line defined by the distance t and the orientation θ will be summed up to define the value r(t, θ) in the Radon space
because $r(t_1, \theta) = r(t_2, \theta)\;\;\forall t_1, t_2 \in \mathbb{R}\;\;\forall \theta \in [0, \pi) \setminus \bigcup_{i\in I}\{\theta_i\}$ implies $\frac{\partial}{\partial t} r(t, \theta) = 0\;\;\forall t \in \mathbb{R}\;\;\forall \theta \in [0, \pi) \setminus \bigcup_{i\in I}\{\theta_i\}$ for a finite number $|I| \in \mathbb{N}$ of superimposed i1D signals, which build the i2D signal $f = \sum_{i\in I} f_i$, where each single i1D signal $f_i$ has its own orientation $\theta_i$. Note that in this work we restrict the set of signals to $|I| = 2$.
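For experimentation with equation (3), a discrete Radon transform is available in common libraries; the following sketch (not part of the paper; scikit-image is just one possible choice, and the pattern, grid and angles are arbitrary) computes the sinogram of a synthetic crossing-lines signal so that the t-variation concentrated near the orientations θi, which justifies the simplification (5), can be inspected.

```python
import numpy as np
from skimage.transform import radon

# Synthetic i2D pattern: two crossing i1D lines (cf. the test signal (24) in Sect. 4).
x, y = np.meshgrid(np.linspace(-8, 8, 129), np.linspace(-8, 8, 129))
theta1, theta2 = np.deg2rad(20.0), np.deg2rad(80.0)
f = np.maximum(np.exp(-(x * np.cos(theta1) + y * np.sin(theta1)) ** 2),
               np.exp(-(x * np.cos(theta2) + y * np.sin(theta2)) ** 2))

# Sinogram r(t, theta): each column holds the projection for one angle theta.
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(f, theta=angles, circle=False)

# For an infinitely extended pattern the projections away from theta1, theta2 would be
# constant in t (cf. Fig. 3), so d/dt r(t, theta) vanishes there; for this finite patch
# the strongest t-variation still concentrates near the two line orientations
# (up to the angle convention of the library).
variation = np.abs(np.diff(sinogram, axis=0)).sum(axis=0)
```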
2 Interpretation of the Riesz Transform in Radon Space
The Riesz transform R{f} extends the function f to a monogenic (holomorphic) function. The Riesz transform is a possible, but not the only, generalization of the Hilbert transform to a multidimensional domain. It is given by:

$$R\{f\}(x, y) = \begin{bmatrix} R_x\{f\}(x, y) \\ R_y\{f\}(x, y) \end{bmatrix} = \begin{bmatrix} \dfrac{x}{2\pi(x^2+y^2)^{3/2}} * f(x, y) \\[6pt] \dfrac{y}{2\pi(x^2+y^2)^{3/2}} * f(x, y) \end{bmatrix} \qquad (6)$$

(with ∗ as the convolution operation). The Riesz transform can also be written in terms of the Radon transform:

$$R\{f\}(x, y) = \mathcal{R}^{-1}\{h_1(t) * r(t, \theta)\,\vec{n}_\theta\}(x, y) \qquad (7)$$

with $\vec{n}_\theta = [\cos\theta, \sin\theta]^T$ and $h_1$ as the one-dimensional Hilbert kernel in the spatial domain. Applying the Riesz transform to an i1D signal with orientation $\theta_m$ results in

$$\begin{bmatrix} R_x\{f\}(0, 0) \\ R_y\{f\}(0, 0) \end{bmatrix} = \underbrace{\left( -\frac{1}{2\pi^2} \int_{t\in\mathbb{R}} \frac{1}{t}\,\frac{\partial}{\partial t}\big(h_1(t) * r(t, \theta_m)\big)\,dt \right)}_{=:\,s(\theta_m)} \vec{n}_{\theta_m} \qquad (8)$$
Note that $\frac{\partial}{\partial t}(h_1(t) * r(t, \theta)) = h_1(t) * \frac{\partial}{\partial t} r(t, \theta)$. The orientation of the signal can therefore be derived by

$$\arctan\frac{R_y\{f\}(0, 0)}{R_x\{f\}(0, 0)} = \arctan\frac{s(\theta_m)\sin\theta_m}{s(\theta_m)\cos\theta_m} = \theta_m \qquad (9)$$
(see also figure 3). The Hilbert transform of f, and with it also the one-dimensional phase, can be calculated by

$$\sqrt{\big[R_x\{f\}(0, 0)\big]^2 + \big[R_y\{f\}(0, 0)\big]^2} = (h_1 * f_{\theta_m})(0) \qquad (10)$$

with the partial Hilbert transform [Hahn1996]

$$(h_1 * f_\theta)(0) = -\frac{1}{\pi} \int_{\tau\in\mathbb{R}} \frac{f(\tau\cos\theta, \tau\sin\theta)}{\tau}\,d\tau \qquad (11)$$
Fig. 3. From left to right: i1D signal in the spatial domain with x- and y-axis and with main orientation θm, and in Radon space with θ- and t-axis. Even though some artifacts can be seen in the Radon space in the right figure, those artifacts do not exist in the case of an infinite support of the image patterns and are therefore neglected in this work: $\frac{\partial}{\partial t} r(t, \theta) = 0\;\;\forall t\;\;\forall \theta \neq \theta_m$, and in the right figure $r(t_1, \theta) = r(t_2, \theta)\;\;\forall t_1, t_2\;\;\forall \theta \neq \theta_m$.
The first-order Riesz transform of any i2D signal consisting of a number |I| of i1D signals therefore reads:

$$\begin{bmatrix} R_x\{f\}(0, 0) \\ R_y\{f\}(0, 0) \end{bmatrix} = \sum_{i\in I} \underbrace{\left( -\frac{1}{2\pi^2} \int_{t\in\mathbb{R}} \frac{1}{t}\,\frac{\partial}{\partial t}\big(h_1(t) * r(t, \theta_i)\big)\,dt \right)}_{=:\,s(\theta_i)} \vec{n}_{\theta_i} = \sum_{i\in I} s(\theta_i)\,\vec{n}_{\theta_i} \qquad (12)$$

With the assumption that the two superimposed i1D signals have an arbitrary but identical phase, the main orientation θm of the resulting i2D signal can be calculated:

$$s := s(\theta_1) = s(\theta_2) \;\Rightarrow\; \begin{bmatrix} R_x\{f\}(0, 0) \\ R_y\{f\}(0, 0) \end{bmatrix} = s \begin{bmatrix} \cos\theta_1 + \cos\theta_2 \\ \sin\theta_1 + \sin\theta_2 \end{bmatrix} \qquad (13)$$
$$\Rightarrow\; \arctan\frac{R_y\{f\}(0, 0)}{R_x\{f\}(0, 0)} = \frac{\theta_1 + \theta_2}{2} = \theta_m \qquad (14)$$
The most important conclusion of this result is that the monogenic signal can be used not only to interpret i1D signals but also to calculate the main orientation of the superposition of two or more i1D signals, which construct the more complex i2D signals.
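As an illustration of equations (6), (9) and (14) (added here, not taken from the paper), the Riesz transform can be evaluated in the frequency domain, where its transfer function is −i(u, v)/|(u, v)| up to sign and normalization conventions. The sketch below estimates the (main) orientation of an i1D or i2D patch; averaging in the double-angle representation over the patch is a numerical stabilization added in this sketch and is not part of the paper, which evaluates the ratio at the origin.

```python
import numpy as np

def riesz_transform(f):
    """First-order Riesz transform pair (R_x f, R_y f) computed via the FFT.
    Frequency multipliers: -i*u/|w| and -i*v/|w| (the sign convention may differ
    from the paper's; it cancels in the orientation ratio)."""
    rows, cols = f.shape
    u = np.fft.fftfreq(cols)[None, :]
    v = np.fft.fftfreq(rows)[:, None]
    mag = np.sqrt(u ** 2 + v ** 2)
    mag[0, 0] = 1.0                                  # avoid division by zero at DC
    F = np.fft.fft2(f)
    rx = np.real(np.fft.ifft2(-1j * (u / mag) * F))
    ry = np.real(np.fft.ifft2(-1j * (v / mag) * F))
    return rx, ry

def main_orientation(f):
    """Main orientation estimate, cf. Eqs. (9) and (14); double-angle averaging
    over the whole patch is used here for numerical robustness."""
    rx, ry = riesz_transform(f)
    return (0.5 * np.arctan2(np.mean(2.0 * rx * ry), np.mean(rx ** 2 - ry ** 2))) % np.pi
```

Applied to an i1D patch with orientation θm, main_orientation should return approximately θm; applied to the crossing-lines pattern of the Radon sketch above, it should return approximately (θ1 + θ2)/2, as predicted by (14).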
3 Analyzing the Monogenic Curvature Tensor
Recently the monogenic curvature tensor has been introduced by [Zang2007] to extend the monogenic signal so that i2D signals can also be analyzed. It has been shown that the monogenic curvature tensor already contains the monogenic signal. Roughly speaking, the monogenic curvature tensor has been motivated by the concept of the Hessian matrix of differential geometry and so far has only been known in the Fourier domain. This drawback makes interpretation hard when applying the monogenic curvature tensor to a certain signal model with explicit features such as orientation, apex angle and phase information. Instead, this problem can be solved by analyzing the Riesz transform in the spatial domain. Now any i2D signal will be regarded as the superposition of a finite number of i1D signals $f = \sum_{i\in I} f_i$. With the properties $\mathcal{R}\{\mathcal{R}^{-1}\{r\}\} = r$ and $\mathcal{R}\{\sum_{i\in I} f_i\} = \sum_{i\in I} \mathcal{R}\{f_i\}$ [Toft1996], and because $h_1 * h_1 * f = -f$, the even part $T_e$ of the monogenic curvature tensor can also be written in terms of the concatenation of two Riesz transforms and therefore also in terms of the Radon transform and its inverse [Stein1970]:

$$T_e(f) := \mathcal{F}^{-1}\left\{ \begin{bmatrix} \cos^2\alpha & -\sin\alpha\cos\alpha\,e_{12} \\ \sin\alpha\cos\alpha\,e_{12} & \sin^2\alpha \end{bmatrix} \mathcal{F}\{f\}(\alpha, \rho) \right\} \qquad (15)$$

$$= \begin{bmatrix} R_x\{R_x\{f\}\} & -R_x\{R_y\{f\}\}\,e_{12} \\ R_x\{R_y\{f\}\}\,e_{12} & R_y\{R_y\{f\}\} \end{bmatrix} \qquad (16)$$
with $\mathcal{F}\{f\}(\alpha, \rho)$ as the Fourier transform of the signal f in polar coordinates, with α as the angular and ρ as the radial component, and $\mathcal{F}^{-1}$ as the inverse Fourier transform. To understand the motivation of the monogenic curvature tensor, compare the even tensor $T_e(f)$ with the well-known Hessian matrix:

$$M_{\mathrm{Hesse}}(f) := \begin{bmatrix} f_{xx} & -f_{xy}\,e_{12} \\ f_{xy}\,e_{12} & f_{yy} \end{bmatrix} \qquad (17)$$

where the partial derivatives of the signal f have simply been replaced by the corresponding Riesz transforms. Now, as a new result of this work, the even tensor in Radon space reads:

$$T_e(f) = -\begin{bmatrix} \mathcal{R}^{-1}\{\cos^2\theta\,\mathcal{R}\{f\}\} & -\mathcal{R}^{-1}\{\sin\theta\cos\theta\,\mathcal{R}\{f\}\}\,e_{12} \\ \mathcal{R}^{-1}\{\sin\theta\cos\theta\,\mathcal{R}\{f\}\}\,e_{12} & \mathcal{R}^{-1}\{\sin^2\theta\,\mathcal{R}\{f\}\} \end{bmatrix} \qquad (18)$$
The advantage of this form is that signal features such as the orientation θ are explicitly given; in the Fourier form the orientation is not. So, with the previous results of the Radon interpretation of the Riesz transform, the features of the signal can easily be extracted. For the sake of completeness, the odd tensor is defined as the Riesz transform of the even tensor:

$$T_o(f) = T_e\big(R_x\{f\} + R_y\{f\}\,e_{12}\big) \qquad (19)$$
So the monogenic curvature tensor can be written as $T(f) = T_e(f) + T_o(f)$. Only the even part $T_e$ of the monogenic curvature tensor T will be used in the following.
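The scalar entries of the even tensor can also be evaluated numerically from the angular multipliers of equation (15). The sketch below is an illustration under the stated conventions (it is not the authors' code); it subtracts the patch mean to avoid the undefined angular coordinate at DC, and it drops the e12 factor, which only marks the off-diagonal entries as bivector components.

```python
import numpy as np

def even_curvature_tensor(f):
    """Scalar entries (txx, txy, tyy) of the even tensor T_e at the patch centre,
    obtained from the angular multipliers cos^2(a), sin(a)cos(a), sin^2(a) of Eq. (15)."""
    rows, cols = f.shape
    u = np.fft.fftfreq(cols)[None, :]
    v = np.fft.fftfreq(rows)[:, None]
    alpha = np.arctan2(v, u)                         # angular frequency coordinate
    F = np.fft.fft2(f - f.mean())                    # remove DC: its angle is undefined
    centre = (rows // 2, cols // 2)
    def apply(multiplier):
        return np.real(np.fft.ifft2(multiplier * F))[centre]
    txx = apply(np.cos(alpha) ** 2)
    txy = apply(np.sin(alpha) * np.cos(alpha))
    tyy = apply(np.sin(alpha) ** 2)
    return txx, txy, tyy
```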
3.1 Interpretation of Two Superimposed i1D Signals
In the following, two superimposed i1D signals with orientations θ1, θ2 and an arbitrary but identical phase ϕ for both i1D signals will be analyzed in Radon space (see figure 4). With the abbreviations a := cos θ1, b := cos θ2, c := sin θ1, d := sin θ2, the even tensor for two superimposed i1D signals reads

$$T_e = f(0, 0) \begin{bmatrix} a^2 + b^2 & -(ca + db)\,e_{12} \\ (ca + db)\,e_{12} & c^2 + d^2 \end{bmatrix} \qquad (20)$$
According to differential geometry, the well-known Gaussian curvature K := κ1κ2 and the main curvature H := ½(κ1 + κ2) can also be defined for the even tensor. Both features are rotationally invariant. Motivated by differential geometry, it will be shown how to compute the apex angle as an important rotationally invariant feature of i2D signals by means of the Gaussian and the main curvature of the even tensor.

Fig. 4. From left to right: Two superimposed i1D signals with orientations θ1 and θ2 in the spatial domain and in Radon space. Assuming that both signals have the same phase ϕ, the signal information can be separated by the Riesz transform.
Fig. 5. Classification of local image models by Gaussian and mean curvatures (the diagram plots the Gaussian curvature K over the mean curvature H, with regions for the i0D flat model, i1D ridge and valley, i2D peak, pit, saddle ridge, saddle valley and minimal surface, the i2D spherical surface along K = H², and a forbidden zone for K > H²)
3.2 The Apex Angle in Terms of Surface Theory in ℝ³
The apex angle can be calculated indirectly by means of the two orthogonal main curvatures κ1, κ2 of a surface. Actually, a surface at a certain point can have an infinite number of different 1D curvatures or only one unique curvature; this will be explained in detail. Differential geometry considers a certain point with certain differentiability properties. In the following, this is fulfilled only in the infinitesimally small region around that point in the assumed saddle point curvature model (see figure 5). But the region around a point of interest in the image data cannot be assumed to be infinitesimally small. This difference between the curvature model and the true image data is very important for the following argument. At the point of interest, the normal vector of the tangential plane and another arbitrary point, which is not part of the normal vector, span a plane which can be rotated around the normal vector by any angle. That plane is used to intersect the two-dimensional image surface $I_f$ to deliver a one-dimensional function which has a well-defined curvature at that point for every angle. This generated set is called the normal section (see figure 6). To analyze two superimposed i1D structures, their intersection point will be approximated by a local i2D saddle point model (see figure 1). This local model describes the pattern up to the sign of the maximum curvature, which does not affect the results of the global image model. So, when adjusting the global i2D structure, the minimum curvature κ1 and the maximum curvature κ2 of the local curvature model lie on the x-axis and on the y-axis. Because of that, the normal cut curvature in the directions of the two principal axes delivers the minimum and the maximum curvature of the local i2D saddle point model (see figure 7). Remember that in the direction of the i1D structures the normal cut curvature is zero. So, if it is possible to determine the angle where the curvature is zero, the apex angle can be calculated. This can be done by applying the results of Euler's and Meusnier's theorems (see [Baer2001], p. 104):
Fig. 6. The curvature of the normal section is zero iff the present structure is intrinsically one-dimensional in the direction of the section plane
$$\kappa_{\frac{\alpha}{2}} = \lim_{f_{xy}(0,0)\to 0} \left( \kappa_1 \cos^2\frac{\alpha}{2} + 2 f_{xy}(0, 0)\,\sin\frac{\alpha}{2}\cos\frac{\alpha}{2} + \kappa_2 \sin^2\frac{\alpha}{2} \right) \qquad (21)$$
Here α/2 is the angle between the normal section direction with minimum curvature κ1 and the normal section direction with curvature κ_{α/2}. In general, two different solutions exist; because of that, two superimposed i1D signals can be described by this local saddle point model. Since the i1D information has zero curvature, and with the local saddle point model assumption κ1 < 0 and κ2 > 0, the apex angle can be calculated from the minimal information given by κ1 and κ2.
$$\kappa_{\frac{\alpha}{2}} = 0 \;\Rightarrow\; \alpha = 2\arctan\sqrt{\frac{|\kappa_1|}{|\kappa_2|}} \qquad (22)$$
With the abbreviations a := cos θ1, b := cos θ2, c := sin θ1, d := sin θ2, the apex angle of two i1D signals can now be derived from the Gaussian curvature K := det(T_e) = [f(0, 0)]²(a²d² + b²c² - 2abcd) and the main curvature H := ½ trace(T_e) = ½(a² + b² + c² + d²) f(0, 0) = f(0, 0) in a rotationally invariant way. This yields the apex angle:

$$\frac{\theta_1 - \theta_2}{2} = \arctan\sqrt{\frac{H - \sqrt{H^2 - K}}{H + \sqrt{H^2 - K}}} \qquad (23)$$
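Numerically, the apex angle follows from H and K exactly as in (23); the sketch below assumes the tensor entries returned by the hypothetical even_curvature_tensor helper from the earlier listing, and the abs() and clipping are only guards against sign-convention and rounding issues, not part of the formula.

```python
import numpy as np

def apex_angle(txx, txy, tyy):
    """Apex angle from the Gaussian curvature K and the main curvature H of the
    even tensor, cf. Eq. (23)."""
    H = 0.5 * (txx + tyy)
    K = txx * tyy - txy ** 2
    root = np.sqrt(max(H * H - K, 0.0))
    ratio = max((abs(H) - root) / (abs(H) + root + 1e-12), 0.0)
    return 2.0 * np.arctan(np.sqrt(ratio))
```

For a patch sampled on a grid centred at the origin, apex_angle(*even_curvature_tensor(f)) should approach |θ1 - θ2| with increasing patch size, mirroring the reported convergence of the AAE in the next section.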
Fig. 7. The normal sections of the bisectors along the principal x-axis (left figure) and the principal y-axis (right figure) are analyzed to determine the apex angle α
4 Experiments and Results
For the synthetic i2D signal consisting of two crossing lines (see figure 8) the following model is used:

$$f_{\theta_1,\theta_2}(x, y) := \max\!\left( e^{-(x\cos\theta_1 + y\sin\theta_1)^2},\; e^{-(x\cos\theta_2 + y\sin\theta_2)^2} \right) \qquad (24)$$

which will be analyzed at the origin (0, 0). To analyze the apex angle by the even part of the monogenic curvature tensor, the following synthetic checkerboard signal (see figure 8) will be used:

$$f_{\theta_1,\theta_2}(x, y) := \frac{1}{2}\Big[ \cos\big((x\cos\theta_1 + y\sin\theta_1)\,\pi\big) + \cos\big((x\cos\theta_2 + y\sin\theta_2)\,\pi\big) \Big] \qquad (25)$$
The average angular error (AAE) will be defined by:

$$\mathrm{AAE} := \operatorname*{mean}_{\theta_1, \theta_2 \in \{0^\circ,\dots,45^\circ\}} \left|\, |\theta_1 - \theta_2| - 2\arctan\sqrt{\frac{H - \sqrt{H^2 - K}}{H + \sqrt{H^2 - K}}} \,\right| \qquad (26)$$

with H := ½ trace(T_e(f_{θ1,θ2})) and K := det(T_e(f_{θ1,θ2})). The experimental AAE is 0.067° for the checkerboard signal and 0.08° for the two crossing lines, with a total convolution mask size of (2·8+1)². The AAE converges to 0° with increasing convolution mask sizes. Also, the main orientation of an arbitrary number of superimposed i1D signals can be computed on discrete signals without any error as the convolution mask size increases.
Fig. 8. Discrete synthetic i2D signals with different apex angles. From left to right: Two crossing i1D lines and the i2D checkerboard.
5 Conclusion and Future Work
The odd-order Riesz transform of any i1D or i2D signal can be analyzed in Radon space, in which a one-dimensional partial Hilbert transform is applied in the direction of each i1D signal with its individual orientation θi, i ∈ I. Assuming that the two superimposed i1D signals have an arbitrary but identical phase ϕ, the orientations θ1, θ2 and the phase can be calculated. The advantage of analyzing the Riesz transform in Radon space is that the signal properties, consisting of orientation and phase, are explicitly given after applying the general operator (e.g., the monogenic curvature operator) to the specific i1D or i2D signal function. Future work will include analyzing the superposition of i1D signals with individual phases ϕi. Note that an arbitrary but identical phase of both signals has been assumed in this paper for deriving the main orientation and the apex angle, so that only one common phase can be calculated under this assumption. The analysis of i2D signals in Radon space presented in this work makes it possible to interpret the monogenic curvature tensor for the first time in an exact way. Future work will contain the investigation of the promising isomorphism $M(2, \mathbb{R}_3) \cong \mathbb{R}_{4,1}$ [Sobczyk2005] from the set of monogenic curvature tensors to the set of multivectors of the conformal space in Radon space.
References

[Baer2001] Baer, C.: Elementare Differentialgeometrie, de Gruyter, Berlin, New York, vol. 1 (2001)
[Beyerer2002] Beyerer, J., Leon, F.P.: The Radon transform in digital image processing. Automatisierungstechnik 50, part 10, pp. 472–480, Oldenbourg, Germany (2002)
[Felsberg2001] Felsberg, M., Sommer, G.: The monogenic signal. IEEE Transactions on Signal Processing 49(12), 3136–3144 (2001)
[Felsberg2002] Felsberg, M.: Low-Level Image Processing with the Structure Multivector. PhD Thesis, Kiel University (2002)
[Felsberg2004] Felsberg, M., Sommer, G.: The monogenic scale-space: A unifying approach to phase-based image processing in scale-space, vol. 21, pp. 5–26. Kluwer Academic Publishers, Dordrecht (2004)
[Hahn1996] Hahn, S.L.: Hilbert Transforms in Signal Processing. Artech House, Norwood, Maryland (1996)
[Porteous1995] Porteous, I.R.: Clifford Algebras and the Classical Groups, Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge (1995)
[Sobczyk2005] Sobczyk, G., Erlebacher, G.: Hybrid matrix geometric algebra. In: Li, H., Olver, P.J., Sommer, G. (eds.) IWMM-GIAE 2004. LNCS, vol. 3519, pp. 191–206. Springer, Heidelberg (2005)
[Stein1970] Stein, E.M.: Singular Integrals and Differentiability Properties of Functions, Princeton Mathematical Series. Princeton University Press, Princeton (1970)
[Stuke2004] Stuke, I., et al.: Analysing superimposed oriented patterns. In: 6th IEEE Southwest Symposium on Image Analysis and Interpretation, Lake Tahoe, NV, pp. 133–137 (2004)
[Toft1996] Toft, P.: The Radon Transform - Theory and Implementation. PhD Thesis, Department of Mathematical Modelling, Technical University of Denmark (1996)
[Zang2007] Zang, D., Sommer, G.: Signal modeling for two-dimensional image structures. Journal of Visual Communication and Image Representation 18(1), 81–99 (2007)
Author Index

Ackermann, Hanno 153
Anko, Börner 319
Akeila, Hani 72
Aldavert, David 111
Badino, Hernán 29, 43
Bailey, Donald 391
Bais, Abdul 367
Baltes, Jacky 59
Bauer, Sascha 234
Berger, Kai 260
Boyer, L. 381
Brox, T. 98
Burschka, Darius 440
Chen, Hui 403
Chen, Ian Yen-Hung 125
Chen, Yong-Sheng 207
Cremers, D. 98
Cyganek, Boguslaw 219
Delmas, P. 381
Denzler, Joachim 355
Dorst, Leo 1
Effertz, Jan 275
Franke, Uwe 43
Gebken, Christian 85
Gehrig, Stefan 29, 43
Gindele, Tobias 248
Gu, Jason 367
Guan, Shushi 291
Hasan, Ghulam M. 367
Hegazy, Doaa 355
Heiko, Hirschmüller 319
Herwig, Johannes 427
Imiya, Atsushi 1, 412
Iqbal, Mohammad T. 367
Jagszent, Daniel 248
Jayamuni, Dinuka 139
Jürgen, Wohlfeil 319
Kameda, Yusuke 1
Kamiya, Sunao 165
Kammel, Sören 248
Kanatani, Kenichi 153
Kanazawa, Yasushi 165
Karsten, Scheibe 319
Khawaja, Yahya M. 367
Klette, Reinhard 1, 189, 291
Li, Fajie 189
Lin, Kai-Ying 207
Linz, Christian 260
Lipski, Christian 260
Liu, Yu-Chih 207
Long, Aiqun 403
MacDonald, Bruce A. 125, 139
Magnor, Marcus 260
Mair, Elmar 440
Marquez, J. 381
McAllester, David 16
McKinnon, Brian 59
Meysel, Frederik 234
Michael, Suppa 319
Morris, John 72
Ohnishi, Naoya 1, 412
Pauli, Josef 427
Pitzer, Benjamin 248
Rabe, Clemens 43
Ramirez, W. 381
Rasmussen, Christopher 341
Reulke, Ralf 234
Rosenhahn, B. 98
Sablatnig, Robert 367
Sagrebin, Maria 427
Schmaltz, Christian 454
Scott, Donald 341
Seidel, H.-P. 98
Sommer, Gerald 85, 454
Stich, Timo 260
Stiller, Christoph 248
Tanaka, Yoshiyuki 303
Tang, Liqiong 391
Teich, Jürgen 327
Terauchi, Mutsuhiro 303
Toledo, Ricardo 111
Trinh, Hoang 16
Tsuji, Toshio 303
Usman, Muhammad 367
Vaudrey, Tobi 29
Wang, Guanghui 177
Weickert, Joachim 454
Werling, Moritz 248
Wietzke, Lennart 454
Wildermann, Stefan 327
Wimmer, Matthias 139
Wong, Yu Wua 391
Wu, Q.M. Jonathan 177
Wünsche, Burkhard 125
Yadav, Arpit 139
Yu, Hong 403
Zhang, Wei 177
Ziegler, Julius 248