Lecture Notes in Computer Science Commenced Publication in 1973 Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Alfred Kobsa, University of California, Irvine, CA, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, TU Dortmund University, Germany
Madhu Sudan, Microsoft Research, Cambridge, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Gerhard Weikum, Max Planck Institute for Informatics, Saarbruecken, Germany
6754
Mohamed Kamel Aurélio Campilho (Eds.)
Image Analysis and Recognition 8th International Conference, ICIAR 2011 Burnaby, BC, Canada, June 22-24, 2011 Proceedings, Part II
Volume Editors

Mohamed Kamel
University of Waterloo, Department of Electrical and Computer Engineering
Waterloo, ON, N2L 3G1, Canada
E-mail: [email protected]

Aurélio Campilho
University of Porto, Faculty of Engineering, Institute of Biomedical Engineering
Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
E-mail: [email protected]
ISSN 0302-9743 e-ISSN 1611-3349 ISBN 978-3-642-21595-7 e-ISBN 978-3-642-21596-4 DOI 10.1007/978-3-642-21596-4 Springer Heidelberg Dordrecht London New York Library of Congress Control Number: 2011928908 CR Subject Classification (1998): I.4, I.5, I.2.10, I.2, I.3.5, F.2.2 LNCS Sublibrary: SL 6 – Image Processing, Computer Vision, Pattern Recognition, and Graphics
© Springer-Verlag Berlin Heidelberg 2011 This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper Springer is part of Springer Science+Business Media (www.springer.com)
Preface
This is one of two volumes that contain papers accepted for ICIAR 2011, the International Conference on Image Analysis and Recognition, held at Simon Fraser University, Burnaby, BC, Canada, June 22–24, 2011. This was the eighth edition in the ICIAR series of annual conferences alternating between Europe and North America. The idea of organizing these conferences was to foster collaboration and exchange between researchers and scientists in the broad fields of image analysis and pattern recognition, addressing recent advances in theory, methodology and applications. ICIAR was organized at the same time as AIS 2011, the International Conference on Autonomous and Intelligent Systems. Both conferences were organized by AIMI – Association for Image and Machine Intelligence—a not-for-profit organization registered in Ontario, Canada.

For ICIAR 2011, we received a total of 147 full papers from 37 countries. The review process was carried out by members of the Program Committee and other reviewers; all are experts in various image analysis and pattern recognition areas. Each paper was reviewed by at least two reviewers and checked by the Conference Chairs. A total of 84 papers were finally accepted and appear in the two volumes of these proceedings. The high quality of the papers is attributed first to the authors, and second to the quality of the reviews provided by the experts. We would like to sincerely thank the authors for responding to our call, and to thank the reviewers for their careful evaluation and feedback provided to the authors. It is this collective effort that resulted in the strong conference program and high-quality proceedings.

This year ICIAR had a competition on "Hand Geometric Points Detection," which attracted the attention of participants. We were very pleased to be able to include in the conference program keynote talks by well-known experts: Toshio Fukuda, Nagoya University, Japan; William A. Gruver, Simon Fraser University, Canada; Ze-Nian Li, Simon Fraser University, Canada; Andrew Sixsmith, Simon Fraser University, Canada; and Patrick Wang, Northeastern University Boston, USA. We would like to express our sincere gratitude to the keynote speakers for accepting our invitation to share their vision and recent advances in their specialized areas.

Special thanks are also due to Jie Liang, Chair of the local Organizing Committee, and members of the committee for their advice and help. We are thankful for the support and facilities provided by Simon Fraser University. We are also grateful to Springer’s editorial staff for supporting this publication in the LNCS series. We would like to thank Khaled Hammouda, the webmaster of the conference, for maintaining the Web pages, interacting with the authors and preparing the proceedings.
Finally, we were very pleased to welcome all the participants to ICIAR 2011. For those who did not attend, we hope this publication provides a good view of the research presented at the conference, and we look forward to meeting you at the next ICIAR conference. June 2011
Mohamed Kamel
Aurélio Campilho
ICIAR 2011 – International Conference on Image Analysis and Recognition
General Chair
Mohamed Kamel, University of Waterloo, Canada
[email protected]

General Co-chair
Aurélio Campilho, University of Porto, Portugal
[email protected]
Local Organizing Committee Jie Liang (Chair) Simon Fraser University Canada
Carlo Menon Simon Fraser University Canada
Faisal Beg Simon Fraser University Canada
Jian Pei Simon Fraser University Canada
Ivan Bajic Simon Fraser University Canada
Conference Secretary Cathie Lowell Toronto, Ontario, Canada
[email protected]
Webmaster Khaled Hammouda Waterloo, Ontario, Canada
[email protected]
Supported by AIMI – Association for Image and Machine Intelligence
PAMI – Pattern Analysis and Machine Intelligence Group, University of Waterloo, Canada
Department of Electrical and Computer Engineering, Faculty of Engineering, University of Porto, Portugal
INEB – Instituto de Engenharia Biomédica, Portugal
Advisory Committee M. Ahmadi P. Bhattacharya T.D. Bui M. Cheriet E. Dubois Z. Duric G. Granlund L. Guan M. Haindl E. Hancock J. Kovacevic M. Kunt J. Padilha K.N. Plataniotis A. Sanfeliu M. Shah M. Sid-Ahmed C.Y. Suen A.N. Venetsanopoulos M. Viergever
University of Windsor, Canada Concordia University, Canada Concordia University, Canada University of Quebec, Canada University of Ottawa, Canada George Mason University, USA Linköping University, Sweden Ryerson University, Canada Institute of Information Theory and Automation, Czech Republic The University of York, UK Carnegie Mellon University, USA Swiss Federal Institute of Technology (EPFL), Switzerland University of Porto, Portugal University of Toronto, Canada Technical University of Catalonia, Spain University of Central Florida, USA University of Windsor, Canada Concordia University, Canada University of Toronto, Canada University of Utrecht, The Netherlands
B. Vijayakumar J. Villanueva R. Ward D. Zhang
Carnegie Mellon University, USA Autonomous University of Barcelona, Spain University of British Columbia, Canada The Hong Kong Polytechnic University, Hong Kong
Program Committee A. Abate P. Aguiar M. Ahmed J. Alirezaie H. Araújo N. Arica I. Bajic J. Barbosa J. Barron J. Batista C. Bauckhage G. Bilodeau J. Bioucas B. Boufama T.D. Bui X. Cao J. Cardoso E. Cernadas M. Cheriet M. Coimbra M. Correia L. Corte-Real J. Costeira A. Dawoud M. De Gregorio J. Dias Z. Duric N. El Gayar M. El-Sakka D. ElShafie M. Figueiredo G. Freeman L. Guan F. Guibault M. Haindl E. Hancock C. Hong K. Huang
University of Salerno, Italy Institute for Systems and Robotics, Portugal Wilfrid Laurier University, Canada Ryerson University, Canada University of Coimbra, Portugal Turkish Naval Academy, Turkey Simon Fraser University, Canada University of Porto, Portugal University of Western Ontario, Canada University of Coimbra, Portugal York University, Canada École Polytechnique de Montréal, Canada Technical University of Lisbon, Portugal University of Windsor, Canada Concordia University, Canada Beihang University, China University of Porto, Portugal University of Vigo, Spain University of Quebec, Canada University of Porto, Portugal University of Porto, Portugal University of Porto, Portugal Technical University of Lisbon, Portugal University of South Alabama, USA Istituto di Cibernetica “E. Caianiello” - CNR, Italy University of Coimbra, Portugal George Mason University, USA Nile University, Egypt University of Western Ontario, Canada McGill University, Canada Technical University of Lisbon, Portugal University of Waterloo, Canada Ryerson University, Canada École Polytechnique de Montréal, Canada Institute of Information Theory and Automation, Czech Republic University of York, UK Hong Kong Polytechnic, Hong Kong Chinese Academy of Sciences, China
J. Jiang J. Jorge G. Khan M. Khan Y. Kita A. Kong J. Laaksonen Q. Li X. Li J. Liang R. Lins J. Lorenzo-Ginori R. Lukac A. Mansouri A. Marçal J. Marques M. Melkemi A. Mendonça J. Meunier M. Mignotte A. Mohammed A. Monteiro M. Nappi A. Padilha F. Perales F. Pereira E. Petrakis P. Pina A. Pinho J. Pinto P. Quelhas M. Queluz P. Radeva B. Raducanu S. Rahnamayan E. Ribeiro J. Sanches J. Sánchez B. Santos A. Sappa A. Sayedelahl G. Schaefer P. Scheunders
University of Bradford, UK INESC-ID, Portugal Ryerson University, Canada Saudi Arabia National Institute AIST, Japan Nanyang Technological University, Singapore Aalto University, Finland Western Kentucky University, USA University of London, UK Simon Fraser University, Canada Universidade Federal de Pernambuco, Brazil Universidad Central “Marta Abreu” de Las Villas, Cuba University of Toronto, Canada Université de Bourgogne, France University of Porto, Portugal Technical University of Lisbon, Portugal Université de Haute Alsace, France University of Porto, Portugal University of Montreal, Canada University of Montreal, Canada University of Waterloo, Canada University of Porto, Portugal University of Salerno, Italy University of Porto, Portugal University of the Balearic Islands, Spain Technical University of Lisbon, Portugal Technical University of Crete, Greece Technical University of Lisbon, Portugal University of Aveiro, Portugal Technical University of Lisbon, Portugal Biomedical Engineering Institute, Portugal Technical University of Lisbon, Portugal Autonomous University of Barcelona, Spain Autonomous University of Barcelona, Spain University of Ontario Institute of Technology (UOIT), Canada Florida Institute of Technology, USA Technical University of Lisbon, Portugal University of Las Palmas de Gran Canaria, Spain University of Aveiro, Portugal Computer Vision Center, Spain University of Waterloo, Canada Nottingham Trent University, UK University of Antwerp, Belgium
J. Sequeira J. Shen J. Silva B. Smolka M. Song J. Sousa H. Suesse S. Sural S. Suthaharan A. Taboada-Crispí D. Tao M. Vento Y. Voisin E. Vrscay Z. Wang M. Wirth J. Wu F. Yarman-Vural J. Zelek L. Zhang L. Zhang Q. Zhang G. Zheng H. Zhou D. Ziou
École Supérieure d’Ingénieurs de Luminy, France Singapore Management University, Singapore University of Porto, Portugal Silesian University of Technology, Poland Hong Kong Polytechnical University, Hong Kong Technical University of Lisbon, Portugal Friedrich Schiller University Jena, Germany Indian Institute of Technology, India USA Universidad Central “Marta Abreu” de las Villas, Cuba NTU, Singapore University of Salerno, Italy Université de Bourgogne, France University of Waterloo, Canada University of Waterloo, Canada University of Guelph, Canada University of Windsor, Canada Middle East Technical University, Turkey University of Waterloo, Canada The Hong Kong Polytechnic University, Hong Kong Wuhan University, China Waseda University, Japan University of Bern, Switzerland Queen Mary College, UK University of Sherbrooke, Canada
Reviewers A. Abdel-Dayem J. Ferreira D. Frejlichowski M. Gangeh S. Mahmoud A. Mohebi F. Monteiro Y. Ou R. Rocha
Laurentian University, Canada University of Porto, Portugal West Pomeranian University of Technology, Poland University of Waterloo, Canada University of Waterloo, Canada University of Waterloo, Canada IPB, Portugal University of Pennsylvania, USA Biomedical Engineering Institute, Portugal
Table of Contents – Part II
Biomedical Image Analysis
Arabidopsis Thaliana Automatic Cell File Detection and Cell Length Estimation ... Pedro Quelhas, Jeroen Nieuwland, Walter Dewitte, Ana Maria Mendonça, Jim Murray, and Aurélio Campilho
A Machine Vision Framework for Automated Localization of Microinjection Sites on Low-Contrast Single Adherent Cells ... Hadi Esmaeilsabzali, Kelly Sakaki, Nikolai Dechev, Robert D. Burke, and Edward J. Park
A Texture-Based Probabilistic Approach for Lung Nodule Segmentation ... Olga Zinoveva, Dmitriy Zinovev, Stephen A. Siena, Daniela S. Raicu, Jacob Furst, and Samuel G. Armato
Generation of 3D Digital Phantoms of Colon Tissue ... David Svoboda, Ondřej Homola, and Stanislav Stejskal
Using the Pupillary Reflex as a Diabetes Occurrence Screening Aid Tool through Neural Networks ... Vitor Yano, Giselle Ferrari, and Alessandro Zimmer
1
12
21
31
40
Genetic Snake for Medical Ultrasound Image Segmentation . . . . . . . . . . . . Mohammad Talebi and Ahmad Ayatollahi
48
3D-Video-fMRI: 3D Motion Tracking in a 3T MRI Environment ... José Maria Fernandes, Sérgio Tafula, and João Paulo Silva Cunha
59
Classification-Based Segmentation of the Region of Interest in Chromatographic Images ... António V. Sousa, Ana Maria Mendonça, M. Clara Sá-Miranda, and Aurélio Campilho
68
Biometrics A Novel and Efficient Feedback Method for Pupil and Iris Localization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Muhammad Talal Ibrahim, Tariq Mehmood, M. Aurangzeb Khan, and Ling Guan
79
Fusion of Multiple Candidate Orientations in Fingerprints . . . . . . . . . . . . . En Zhu, Edwin Hancock, Jianping Yin, Jianming Zhang, and Huiyao An Fingerprint Pattern and Minutiae Fusion in Various Operational Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Azhar Quddus, Ira Konvalinka, Sorin Toda, and Daniel Asraf
89
101
Fingerprint Verification Using Rotation Invariant Feature Codes . . . . . . . Muhammad Talal Ibrahim, Yongjin Wang, Ling Guan, and A.N. Venetsanopoulos
111
Can Gender Be Predicted from Near-Infrared Face Images? . . . . . . . . . . . Arun Ross and Cunjian Chen
120
Hand Geometry Analysis by Continuous Skeletons . . . . . . . . . . . . . . . . . . . Leonid Mestetskiy, Irina Bakina, and Alexey Kurakin
130
Kernel Fusion of Audio and Visual Information for Emotion Recognition ... Yongjin Wang, Rui Zhang, Ling Guan, and A.N. Venetsanopoulos
Automatic Eye Detection in Human Faces Using Geostatistical Functions and Support Vector Machines ... João Dallyson S. Almeida, Aristófanes C. Silva, and Anselmo C. Paiva
Gender Classification Using a Novel Gait Template: Radon Transform of Mean Gait Energy Image ... Farhad Bagher Oskuie and Karim Faez
Person Re-identification Using Appearance Classification ... Kheir-Eddine Aziz, Djamel Merad, and Bernard Fertil
140
151
161
170
Face Recognition A Method for Robust Multispectral Face Recognition . . . . . . . . . . . . . . . . . Francesco Nicolo and Natalia A. Schmid Robust Face Recognition After Plastic Surgery Using Local Region Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Maria De Marsico, Michele Nappi, Daniel Riccio, and Harry Wechsler SEMD Based Sparse Gabor Representation for Eyeglasses-Face Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Caifang Song, Baocai Yin, and Yanfeng Sun
180
191
201
Face Recognition on Low Quality Surveillance Images, by Compensating Degradation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Shiva Rudrani and Sukhendu Das
212
Real-Time 3D Face Recognition with the Integration of Depth and Intensity Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Pengfei Xiong, Lei Huang, and Changping Liu
222
Individual Feature–Appearance for Facial Action Recognition . . . . . . . . . . Mohamed Dahmane and Jean Meunier
233
Image Coding, Compression and Encryption Lossless Compression of Satellite Image Sets Using Spatial Area Overlap Compensation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Vivek Trivedi and Howard Cheng
243
Color Image Compression Using Fast VQ with DCT Based Block Indexing Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Loay E. George and Azhar M. Kadim
253
Structural Similarity-Based Affine Approximation and Self-similarity of Images Revisited . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Dominique Brunet, Edward R. Vrscay, and Zhou Wang
264
A Fair P2P Scalable Video Streaming Scheme Using Improved Priority Index Assignment and Multi-hierarchical Topology . . . . . . . . . . . . . . . . . . . Xiaozheng Huang, Jie Liang, Yan Ding, and Jiangchuan Liu
276
A Novel Image Encryption Framework Based on Markov Map and Singular Value Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Gaurav Bhatnagar, Q.M. Jonathan Wu, and Balasubramanian Raman
286
Applications A Self-trainable System for Moving People Counting by Scene Partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . G. Percannella and M. Vento
297
Multiple Classifier System for Urban Area’s Extraction from High Resolution Remote Sensing Imagery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Safaa M. Bedawi and Mohamed S. Kamel
307
Correction of Atmospheric Turbulence Degraded Sequences Using Grid Smoothing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Rishaad Abdoola, Guillaume Noel, Barend van Wyk, and Eric Monacelli
317
A New Image-Based Method for Event Detection and Extraction of Noisy Hydrophone Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . F. Sattar, P.F. Driessen, and G. Tzanetakis Detection of Multiple Preceding Cars in Busy Traffic Using Taillights . . . Rachana A. Gupta and Wesley E. Snyder Road Surface Marking Classification Based on a Hierarchical Markov Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Moez Ammar, Sylvie Le H´egarat-Mascle, and Hugues Mounier Affine Illumination Compensation on Hyperspectral/Multiangular Remote Sensing Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Pedro Latorre Carmona, Luis Alonso, Filiberto Pla, Jose E. Moreno, and Crystal Schaaf
328 338
348
360
Crevasse Detection in Antarctica Using ASTER Images . . . . . . . . . . . . . . . Tao Xu, Wen Yang, Ying Liu, Chunxia Zhou, and Zemin Wang
370
Recognition of Trademarks during Sport Television Broadcasts . . . . . . . . Dariusz Frejlichowski
380
An Image Processing Approach to Distance Estimation for Automated Strawberry Harvesting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Andrew Busch and Phillip Palk
389
A Database for Offline Arabic Handwritten Text Recognition . . . . . . . . . . Sabri A. Mahmoud, Irfan Ahmad, Mohammed Alshayeb, and Wasfi G. Al-Khatib
397
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
407
Table of Contents – Part I
Image and Video Processing
Enhancing Video Denoising Algorithms by Fusion from Multiple Views ... Kai Zeng and Zhou Wang
Single Image Example-Based Super-Resolution Using Cross-Scale Patch Matching and Markov Random Field Modelling ... Tijana Ružić, Hiêp Q. Luong, Aleksandra Pižurica, and Wilfried Philips
1
11
Background Images Generation Based on the Nelder-Mead Simplex Algorithm Using the Eigenbackground Model . . . . . . . . . . . . . . . . . . . . . . . . Charles-Henri Quivy and Itsuo Kumazawa
21
Phase Congruency Based Technique for the Removal of Rain from Video . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Varun Santhaseelan and Vijayan K. Asari
30
A Flexible Framework for Local Phase Coherence Computation . . . . . . . . Rania Hassen, Zhou Wang, and Magdy Salama
40
Edge Detection by Sliding Wedgelets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Agnieszka Lisowska
50
Adaptive Non-linear Diffusion in Wavelet Domain . . . . . . . . . . . . . . . . . . . . Ajay K. Mandava and Emma E. Regentova
58
Wavelet Domain Blur Invariants for 1D Discrete Signals . . . . . . . . . . . . . . Iman Makaremi, Karl Leboeuf, and Majid Ahmadi
69
A Super Resolution Algorithm to Improve the Hough Transform . . . . . . . Chunling Tu, Barend Jacobus van Wyk, Karim Djouani, Yskandar Hamam, and Shengzhi Du
80
Fusion of Multi-spectral Image Using Non-separable Additive Wavelets for High Spatial Resolution Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . Bin Liu and Weijie Liu
90
A Class of Image Metrics Based on the Structural Similarity Quality Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Dominique Brunet, Edward R. Vrscay, and Zhou Wang
100
Structural Fidelity vs. Naturalness - Objective Assessment of Tone Mapped Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hojatollah Yeganeh and Zhou Wang
111
Feature Extraction and Pattern Recognition Learning Sparse Features On-Line for Image Classification . . . . . . . . . . . . Ziming Zhang, Jiawei Huang, and Ze-Nian Li
122
Classifying Data Considering Pairs of Patients in a Relational Space . . . . Siti Mariam Shafie and Maria Petrou
132
Hierarchical Spatial Matching Kernel for Image Categorization . . . . . . . . . Tam T. Le, Yousun Kang, Akihiro Sugimoto, Son T. Tran, and Thuc D. Nguyen
141
Computer Vision
Feature Selection for Tracker-Less Human Activity Recognition ... Plinio Moreno, Pedro Ribeiro, and José Santos-Victor
Classification of Atomic Density Distributions Using Scale Invariant Blob Localization ... Kai Cordes, Oliver Topic, Manuel Scherer, Carsten Klempt, Bodo Rosenhahn, and Jörn Ostermann
152
161
A Graph-Kernel Method for Re-identification . . . . . . . . . . . . . . . . . . . . . . . . Luc Brun, Donatello Conte, Pasquale Foggia, and Mario Vento
173
Automatic Recognition of 2D Shapes from a Set of Points ... Benoît Presles, Johan Debayle, Yvan Maillot, and Jean-Charles Pinoli
183
Steganalysis of LSB Matching Based on the Statistical Analysis of Empirical Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hamidreza Dastmalchi and Karim Faez Infinite Generalized Gaussian Mixture Modeling and Applications . . . . . . Tarek Elguebaly and Nizar Bouguila Fusion of Elevation Data into Satellite Image Classification Using Refined Production Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bilal Al Momani, Philip Morrow, and Sally McClean Using Grid Based Feature Localization for Fast Image Matching . . . . . . . Daniel Fleck and Zoran Duric
193
201
211
221
A Hybrid Representation of Imbalanced Points for Two-Layer Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Qi Li
232
Wide-Baseline Correspondence from Locally Affine Invariant Contour Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhaozhong Wang and Lei Wang
242
Measuring the Coverage of Interest Point Detectors . . . . . . . . . . . . . . . . . . Shoaib Ehsan, Nadia Kanwal, Adrian F. Clark, and Klaus D. McDonald-Maier
253
Non-uniform Mesh Warping for Content-Aware Image Retargeting . . . . . Huiyun Bao and Xueqing Li
262
Moving Edge Segment Matching for the Detection of Moving Object . . . Mahbub Murshed, Adin Ramirez, and Oksam Chae
274
Gauss-Laguerre Keypoints Extraction Using Fast Hermite Projection Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Dmitry V. Sorokin, Maxim M. Mizotin, and Andrey S. Krylov
284
Re-identification of Visual Targets in Camera Networks: A Comparison of Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Dario Figueira and Alexandre Bernardino
294
Statistical Significance Based Graph Cut Segmentation for Shrinking Bias . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sema Candemir and Yusuf Sinan Akgul
304
Real-Time People Detection in Videos Using Geometrical Features and Adaptive Boosting ... Pablo Julian Pedrocca and Mohand Saïd Allili
314
Color, Texture, Motion and Shape A Higher-Order Model for Fluid Motion Estimation . . . . . . . . . . . . . . . . . . Wei Liu and Eraldo Ribeiro
325
Dictionary Learning in Texture Classification . . . . . . . . . . . . . . . . . . . . . . . . Mehrdad J. Gangeh, Ali Ghodsi, and Mohamed S. Kamel
335
Selecting Anchor Points for 2D Skeletonization . . . . . . . . . . . . . . . . . . . . . . Luca Serino and Gabriella Sanniti di Baja
344
Interactive Segmentation of 3D Images Using a Region Adjacency Graph Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ludovic Paulhac, Jean-Yves Ramel, and Tom Renard
354
An Algorithm to Detect the Weak-Symmetry of a Simple Polygon ... Mahmoud Melkemi, Frédéric Cordier, and Nickolas S. Sapidis
Spatially Variant Dimensionality Reduction for the Visualization of Multi/Hyperspectral Images ... Steven Le Moan, Alamin Mansouri, Yvon Voisin, and Jon Y. Hardeberg
365
375
Tracking Maneuvering Head Motion Tracking by Coarse-to-Fine Particle Filter . . . Yun-Qian Miao, Paul Fieguth, and Mohamed S. Kamel
385
Multi-camera Relay Tracker Utilizing Color-Based Particle Filtering . . . . Xiaochen Dai and Shahram Payandeh
395
Visual Tracking Using Online Semi-supervised Learning . . . . . . . . . . . . . . . Meng Gao, Huaping Liu, and Fuchun Sun
406
Solving Multiple-Target Tracking Using Adaptive Filters ... B. Cancela, M. Ortega, Manuel G. Penedo, and A. Fernández
416
From Optical Flow to Tracking Objects on Movie Videos . . . . . . . . . . . . . . Nhat-Tan Nguyen, Alexandra Branzan-Albu, and Denis Laurendeau
426
Event Detection and Recognition Using Histogram of Oriented Gradients and Hidden Markov Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chun-hao Wang, Yongjin Wang, and Ling Guan
436
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
447
Arabidopsis Thaliana Automatic Cell File Detection and Cell Length Estimation

Pedro Quelhas1, Jeroen Nieuwland3, Walter Dewitte3, Ana Maria Mendonça1,2, Jim Murray3, and Aurélio Campilho1,2

1 INEB - Divisão de Sinal e Imagem, Instituto de Engenharia Biomédica, Porto, Portugal
2 Universidade do Porto, Faculdade de Engenharia, Departamento de Engenharia Electrotécnica e Computadores, Portugal
3 Cardiff School of Biosciences, Cardiff University, CF10 3AX, Cardiff, United Kingdom
Abstract. In plant development biology, the study of the structure of the plant’s root is fundamental for the understanding of the regulation and interrelationships of cell division and cellular differentiation. This is motivated by the strong connection between cell length, the progression of cell differentiation and the nuclear state. However, the need to analyse a large number of images from many replicate roots to obtain reliable measurements motivates the development of automatic tools for root structure analysis. We present a novel automatic approach to detect cell files, the main structure in plant roots, and extract the length of the cells in those files. This approach is based on the detection of the local symmetry characteristic of cell files, using a wavelet-based image symmetry measure. The resulting detection enables the automatic extraction of important data on the plant development stage and of characteristics of individual cells. Furthermore, the approach presented reduces by more than 90% the time required for the analysis of each root, improving the work of the biologist and allowing an increase in the amount of data analysed for each experimental condition. While our approach is fully automatic, a user verification and editing stage is provided so that any existing errors may be corrected. On five test images it was observed that the user did not correct more than 20% of all automatically detected structure, while taking no more than 10% of the manual analysis time to do so. Keywords: Arabidopsis Thaliana, image symmetry, BioImaging.
1 Introduction
In the plant root, cells grow primarily in the vertical (basal/apical) axis and divide primarily perpendicular to the growth axis, resulting in columns or files of cells that derive from specific stem cells near the tip of the root. The stem cells themselves are organised around a small group of cells in a disc like shape
which divide only rarely, called the quiescent centre (QC) [1]. The cell files are organised in concentric cylinders and their identity appears to be defined by the specific stem cells from which they originate and by their position within the developing root with regard to adjacent cell files [2]. Cells specialise to their specific function through differentiation, which relates to their distance from the QC and stem cells, which remain undifferentiated. In the region directly above the QC, known as the apical meristem, cells are actively dividing and maintain a relatively constant cell size. Further from the QC, division ceases and cells enter first a transition zone or basal meristem region, where they start to expand longitudinally, and then a region of very rapid expansion where they finalize their differentiation [3,4]. The expansion of cells is associated with replication of the genome without cell division, resulting in an increase in the cell DNA content in a process known as endocycling [5]. Therefore, cell lengths in the root appear to correlate with the progression of cell differentiation and the nuclear DNA content.

The different characteristics of cells in the radial dimension, coupled with their changing activity in the longitudinal dimension, make the plant root a highly attractive system for analysing the regulation and interrelationships of cell division and cellular differentiation. The requirement to analyse multiple cell files in many replicate roots to obtain reliable data demands appropriate image analysis tools to assist in the data acquisition.

Most analysis of plant root cellular structure and growth is performed through manually gathered data [6,7,8]. In these conditions, researchers spend a large amount of time performing measurements which may not be reliable due to varying bias. Some plant root analysis applications use image segmentation to detect cells and analyse their properties [9,10,11]. While approaches based on cell segmentation are faster and more objective methods of obtaining cell data, when compared with manual collection, plant cell segmentation is a difficult problem with no complete solution.

As an alternative to the study of plant root images through segmentation, we present here a novel approach that avoids segmentation by recognizing the local symmetry present at the root cell files and root cell walls. Through the use of symmetry, cell files can be detected and analysed without requiring segmentation. Furthermore, after the detection of a cell file, the cell length estimation problem becomes a one-dimensional problem with a much easier solution than that of cell segmentation. In both cases we use the phase-based image symmetry measure proposed by P. Kovesi [12]. This approach allows us to select specific orientations and scales for the symmetry calculation and also allows for the separation of high greyscale image symmetry from low greyscale symmetry (useful in differentiating cell walls from cell files).

While the approach presented here is fully automatic, it is well known that fully automated solutions are unlikely to have zero errors. As such, we implemented a manual post-processing step for error correction that enables the user to verify the results given by the approach and change them when needed.
Fig. 1. 3D transform-domain collaborative filtering (BM3D) applied to Arabidopsis Thaliana confocal fluorescence microscopy images: original image (left), image after filtering (right) [14]
This paper is organized as follows: Section 2 describes the proposed approach, Section 3 presents the user interface, and Section 4 presents and discusses the obtained results. Finally, the conclusion is presented in Section 5.
2 Methodology
Our approach is divided into three main stages: root orientation and scale estimation, cell file detection from darker greyscale image symmetry, and cell wall detection from lighter greyscale image symmetry. The cell files and walls are detected using the scale and orientation information from the previous steps. For our work we used images of the Arabidopsis Thaliana root. This plant is widely used in plant development biology due to its rapid development and simple cellular pattern [13]. To simplify and speed up our methodology we re-scale each input image to one megapixel, which has proven to be an acceptable size for all tested images. An additional noise reduction pre-processing step is also integrated into our approach, since the microscopy images used in much of the work related to plant root analysis are of poor quality. In the next subsections we describe each step of our approach.

2.1 Noise Reduction
While plant root images are gathered using many different techniques, confocal fluorescence microscopy is one of the most widely used approaches, due to the need to extract information from inside the 3D root volume. Unfortunately, images resulting from confocal microscopy have a high level of noise, a consequence of the reduced exposure time necessary to avoid excessive bleaching and subsequent destruction of the root structure under analysis. To improve image quality we filter all images using sparse 3D transform-domain collaborative filtering (BM3D) [14]. This methodology has proven to perform well on this type of image [10]. Figure 1 shows an original image and the resulting image after filtering. After the filtering of the input images we start the process of detecting the cell files, based on the image phase symmetry measure.
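To make this pre-processing stage concrete, the sketch below combines the one-megapixel re-scaling described in Section 2 with BM3D filtering. It is only an illustrative sketch: it assumes the third-party Python `bm3d` package and OpenCV are available, and the noise level `sigma_psd` is a placeholder value, not one used in the paper.

```python
import cv2
import numpy as np


def preprocess_root_image(image, target_pixels=1_000_000, sigma_psd=0.08):
    """Re-scale a confocal root image to roughly one megapixel and denoise it.

    The BM3D step assumes the third-party `bm3d` package; `sigma_psd` is an
    illustrative noise level, not a value reported in the paper.
    """
    img = image.astype(np.float32)
    if img.max() > 1.0:                      # normalise 8/16-bit data to [0, 1]
        img = img / img.max()

    # Re-scale so the image has about one megapixel (only downscale).
    scale = np.sqrt(target_pixels / float(img.shape[0] * img.shape[1]))
    if scale < 1.0:
        img = cv2.resize(img, None, fx=scale, fy=scale,
                         interpolation=cv2.INTER_AREA)

    # Sparse 3D transform-domain collaborative filtering (BM3D) [14].
    import bm3d
    return bm3d.bm3d(img, sigma_psd=sigma_psd)
```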
2.2 Cell File and Cell Walls Analysis Using Symmetry
The Arabidopsis Thaliana root presents a cell structure that can be viewed, using a topological analogy, as walls and valleys. Each of these structures presents symmetry in relation to a certain orientation, a certain scale and a certain greyscale intensity (valleys have low greyscale values while walls have high values). Image Symmetry Measure. To assess local image symmetry we use the phase symmetry measure [12]. This is a phase based measure obtained from the image’s wavelet transform. The symmetry value for each coordinates (x, y) of image I, for scale s and orientation o, is given by the normalized difference of the absolute value between the even-symmetric (M e (o, s)) and odd-symmetric (M o (o, s)) wavelet responses. The symmetry for the whole image is obtained through convolution and the total symmetry over a range of orientations and scales is given by adding the individual responses: no sns P hSym(I, s, no) =
o=1
s=s1
[(|I ∗ M e (o, s)| − |I ∗ M o (o, s)|) − T ] no sns s=s1 A(o, s) + ε o=1
(1)
where I is the input image, A(o, s) = I ∗ M e (o, s)2 + I ∗ M o (o, s)2 , s is the vector of ns scales for which symmetry is computed, no is the number of orientation samples used to cover the 360 degrees and T is the noise level (assumed to be equal to the response for the smallest wavelet). In the frequency domain, symmetry at a location is higher when frequency components within the selected range are at their maxima or minima in their cycles at that location. Since we want to find cell files, which are darker at their point of maximum symmetry we detect only location of components minima, ignoring symmetry of image maxima. The results of phase symmetry analysis on a root image can be observed in Figure 2 (top-right). Also, since we wish to make the symmetry analysis in each orientation separately we use a single orientation version of equation (1): sns P hSym(I, s, o) =
s=s1
[(|I ∗ M e (o, s)| − |I ∗ M o (o, s)|) − T ] sns s=s1 A(o, s) + ε
(2)
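For illustration, the following sketch computes a phase-symmetry map of this kind using log-Gabor filters built in the frequency domain, in the spirit of equations (1) and (2). It is not the authors' implementation: the bandwidth `sigma_on_f`, the angular spread, the noise factor `k_noise`, and the treatment of the scale vector as filter wavelengths in pixels are all illustrative assumptions. Setting `polarity = -1` keeps dark symmetric structures (cell files), while `polarity = +1` keeps bright ones (cell walls); passing a single `orientation` reproduces the behaviour of equation (2).

```python
import numpy as np


def phase_symmetry(image, wavelengths=(4, 8, 12), orientation=None,
                   n_orient=8, sigma_on_f=0.55, k_noise=2.0, eps=1e-4,
                   polarity=-1):
    """Phase-symmetry sketch in the spirit of equations (1) and (2)."""
    rows, cols = image.shape
    IM = np.fft.fft2(image)

    # Normalised frequency coordinates (radius and angle).
    u, v = np.meshgrid(np.fft.fftfreq(cols), np.fft.fftfreq(rows))
    radius = np.sqrt(u ** 2 + v ** 2)
    radius[0, 0] = 1.0                        # avoid log(0) at the DC term
    theta = np.arctan2(-v, u)

    angles = ([orientation] if orientation is not None
              else [o * np.pi / n_orient for o in range(n_orient)])

    energy = np.zeros((rows, cols))
    amplitude = np.zeros((rows, cols))

    for angle in angles:
        # Gaussian angular spread centred on this orientation.
        d_theta = np.arctan2(np.sin(theta - angle), np.cos(theta - angle))
        spread = np.exp(-d_theta ** 2 / (2 * (np.pi / n_orient) ** 2))
        for wavelength in wavelengths:
            f0 = 1.0 / wavelength             # centre frequency of the filter
            log_gabor = np.exp(-np.log(radius / f0) ** 2 /
                               (2 * np.log(sigma_on_f) ** 2))
            log_gabor[0, 0] = 0.0
            # Even (real) and odd (imaginary) responses, M^e and M^o.
            resp = np.fft.ifft2(IM * log_gabor * spread)
            even, odd = resp.real, resp.imag
            amplitude += np.sqrt(even ** 2 + odd ** 2)
            # Signed even response minus |odd|, restricted to one polarity.
            energy += polarity * even - np.abs(odd)

    # Simple noise threshold T, proportional to the mean filter amplitude
    # (a stand-in for the smallest-wavelet estimate described in the text).
    T = k_noise * amplitude.mean() / (len(angles) * len(wavelengths))
    return np.maximum(energy - T, 0) / (amplitude + eps)
```

A map such as the one in Figure 2 (top-right) would correspond to summing all orientations with `polarity = -1`.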
While we could perform the symmetry analysis on the whole image, this could lead to erroneous conclusions, as some images contain noise which may exhibit symmetry. To restrict our analysis to the plant root, we segment the root by thresholding the image with the Otsu segmentation method [15], after filtering the image with a Gaussian with a large sigma (σ = 12). We further use morphological filling to ensure that the root segmentation has no holes inside the root area. The resulting mask is shown in Figure 2 (top-centre). Inside the image spatial coordinates specified by the mask we extract the symmetry of the image based on the phase symmetry measure.
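One possible realisation of this masking step, using standard SciPy/scikit-image operations (Gaussian smoothing with σ = 12, Otsu thresholding [15] and morphological hole filling), is sketched below.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu


def root_mask(image, sigma=12):
    """Binary mask of the root region used to restrict the symmetry analysis."""
    smoothed = ndi.gaussian_filter(image.astype(float), sigma=sigma)
    mask = smoothed > threshold_otsu(smoothed)   # Otsu thresholding [15]
    return ndi.binary_fill_holes(mask)           # remove holes inside the root
```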
Fig. 2. Symmetry measure in a plant root image: original image (top-left), segmentation mask of the root location (top-centre), full phase symmetry measure response map over all orientations (top-right), phase symmetry measure response map for orientations 1, 4 and 7 (bottom left, centre and right respectively)
Root Orientation Estimation Based on Local Symmetry. Using equation (2) we are able to obtain an image with the symmetry response for a range of orientations. To estimate the root orientation we search through a range of orientations for the one which has the largest response. Other alternatives, based on root segmentation (such as the ones presented in [13]), were tested but did not provide stable results. It was found experimentally that analysing the image in eight equally separated directions is sufficient to correctly estimate the root's orientation (no = 8). In Figure 2 (bottom) we can observe the phase symmetry result for orientations 1, 4 and 7 (left, centre and right respectively). Based on the obtained symmetry images we compute the average symmetry value over all locations in the image where the symmetry measure is positive. By choosing the image with the greatest average we detect the dominant orientation of the root (orientation 7 in the case of the root in Figure 2). While we do need to set a range of scales ns for this orientation estimation, this is not a critical parameter, as a large range of parameter selections will provide valid results (ns = [2, 4, 6] in our case).

Root Scale Estimation Based on Segmented Root Size. The estimation of scale is based on the size of the root segmentation mask. This estimation is obtained by using the distance transform of the mask image. Through this approach we obtain a range of scales for symmetry computation that is invariant to changes in image spatial resolution. The measure used is based on the mean value of the distance transform inside the root area. While looking for the maxima of the distance transform was considered, it proved not
Fig. 3. Results of cell file and cell wall detection on the Arabidopsis Thaliana root: original image (left), phase symmetry measure for the detected orientation and scales (centre-left), cell files detected in the image (centre-right) and final detection of the cell walls existing in the cell files (right)
to be robust to errors in root segmentation. Using the average of the distance transform values as a reference, we can define a scale range for the file extraction:

$$ns = \left[ \frac{\mathrm{avg}(I_{dist}(I_{mask} > 0))}{7}, \; \frac{\mathrm{avg}(I_{dist}(I_{mask} > 0))}{3} \right] \tag{3}$$

where $I_{dist}(I_{mask} > 0)$ are the distance transform values for the masked image region. We selected two scales for root file extraction, which proved adequate given the range of cell file scales in the different parts of the root. More scales or a wider range of values may be necessary if the morphology of the root changes significantly. Given the range of scales ns and the known orientation of the root, we can obtain the symmetry map from which it is possible to extract the cell files' locations. This is performed using directional non-maxima suppression, applied along the orientation perpendicular to the estimated root orientation. Figure 3 (b) shows the symmetry response for the automatically estimated orientation and scale, in a detail of an Arabidopsis Thaliana plant root. In Figure 3 (c) we can also see the detection of the cell files. Cell files are represented by their detected maximum symmetry lines.

Cell File Detection Post-Processing. While the results from the cell file detection obtained by non-maxima suppression can be considered final for cell file detection, the detected files are often broken. This is a known problem in edge detection, and many approaches for edge joining exist. To reduce the number of broken cell files we created a search procedure which looks for possible connections between the end of each cell file and all starting points of other cell files within 10 pixels. The start and end of cell files are based on either an x or y direction ordering, which depends on the major orientation of the root. This allowed for the re-connection of an average of 3 split cell files per image.
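The scale range of equation (3), the choice of the dominant orientation, and the directional non-maxima suppression could be sketched as follows. The symmetry maps are assumed to be precomputed (e.g., with the phase-symmetry sketch above); the one-pixel neighbour test, the angle convention and the omission of the endpoint-joining post-processing are simplifications for illustration.

```python
import numpy as np
from scipy import ndimage as ndi


def scale_range(mask):
    """Scale range of equation (3): mean of the distance transform inside
    the root mask, divided by 7 and by 3."""
    dist = ndi.distance_transform_edt(mask)
    mean_dist = dist[mask].mean()
    return mean_dist / 7.0, mean_dist / 3.0


def dominant_orientation(sym_maps, mask):
    """Index of the orientation whose symmetry map has the largest average
    positive response inside the root mask (eight maps for no = 8)."""
    scores = []
    for sym in sym_maps:
        vals = sym[mask]
        pos = vals[vals > 0]
        scores.append(pos.mean() if pos.size else 0.0)
    return int(np.argmax(scores))


def cell_file_ridges(sym, root_angle, mask):
    """Directional non-maxima suppression of the symmetry map `sym`, applied
    across the direction perpendicular to the root orientation `root_angle`
    (in radians; the angle convention must match the orientation sampling)."""
    # Unit step perpendicular to the root direction, in (row, col) order.
    step = (np.cos(root_angle), -np.sin(root_angle))
    ahead = ndi.shift(sym, step, order=1, mode='nearest')
    behind = ndi.shift(sym, (-step[0], -step[1]), order=1, mode='nearest')
    return (sym > 0) & (sym >= ahead) & (sym >= behind) & mask
```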
Fig. 4. User interface for the correction of automated structure analysis result
Cell Wall Position Detection on Cell Files. Using the phase-based symmetry measure we are also able to estimate the location of cell walls in each cell file. Cell walls are highly symmetric locations in the root which have a high greyscale value. Using this knowledge, we first compute the symmetry measure map based only on wavelet component maxima, ignoring the symmetry of image minima. This is performed for a range of scales three times smaller than that used for cell file detection (due to the observable differences in scale). While we could restrict the orientation of the symmetry computation for cell wall detection, empirically it was found to be more robust to perform this computation over the full range of orientations (mostly due to some degree of randomness in the shape of the cell walls). As we perform the search for cell wall locations on the cell file maximum symmetry lines, this detection is one-dimensional. We sample the symmetry measure response along the cell file symmetry line and extract the maxima from that one-dimensional sample. Figure 3 (d) shows the detection of the cell walls in the cell files in a detail of an Arabidopsis Thaliana root. This complete information on the location of cell files and the position of cell walls within those files allows the user to collect all data about the plant root cell structure. This provides the information needed for most structure analysis studies.
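Because the wall search is one-dimensional along each file line, it reduces to peak picking on a sampled symmetry profile, as in the sketch below; the use of `scipy.signal.find_peaks` and the minimum wall separation are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.signal import find_peaks


def cell_wall_positions(bright_sym, file_line, min_separation=5):
    """Locate cell walls along one detected cell file.

    bright_sym     : symmetry map computed for bright structures (walls),
                     at scales about three times smaller than for files.
    file_line      : (N, 2) array of (row, col) points ordered along the
                     file's maximum-symmetry line.
    min_separation : illustrative minimum wall spacing, in samples.

    Returns indices into `file_line` where walls are detected; the spacing
    between successive walls gives the cell lengths along the file.
    """
    rows, cols = np.round(np.asarray(file_line)).astype(int).T
    profile = bright_sym[rows, cols]            # 1D sample along the file
    peaks, _ = find_peaks(profile, distance=min_separation)
    return peaks
```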
3 User Supervision
While the methodology presented is of an unsupervised nature, we wish to allow the user to correct possible mistakes made by the automated approach. This is aimed both at improving the final results and at increasing the user's confidence in the final result, while still greatly reducing the time it takes the user to perform the analysis of the plant root structure. To facilitate the user's corrections we created an interface that allows all user interactions to be performed over the automatically obtained results (Figure 4). The user can delete files, cut or extend existing files, and add files. It is also possible to add or remove wall locations in the cell files. This enables the user
Fig. 5. Test images used in the testing of our approach (these images range in both resolution and spatial extent which can vary greatly since mosaicing of individual scaning frames is used
to correct any error by simply selecting the type of correction and clicking on it in the screen. The collection of the data into a datasheet file is performed automatically whenever the user wishes to save the root structure information. To fairly estimate the cell wall detection methodology we asked the user to first correct the cell file detection result and only then use the automated cell wall detector. As such the results for automated cell wall detection are computed and displayed in all cell files (both automatic and manually obtained). Then the user corrects all cell wall detections (adding or removing cell wall locations).
4 Experimental Results
To test our approach we applied it to the five test images shown in Figure 5. Due to the lack of ground-truth data, and given the objective of granting the user the capability to correct errors through the user interface, we evaluated the results based on the modifications introduced by the user through the correction interface. Given an image, the user first performs automatic detection and then has the opportunity to correct its results. The user was given several ways in which to improve the results, as described in the previous section. Each image was given to a user with the task of correcting the automatic result to the best possible result. Table 1 shows the results of the corrections done by users on each of the five images, as well as the average corrections over all images. Results are presented for each correction operation (splits, adds, deletes, cuts, joins and extends) as a percentage of the total number of cell files detected (n). In parentheses we also provide the percentage of cell file length affected by each operation in relation to the total length of all detected files. Cell wall detection errors are presented as false positives (FP) and false negatives (FN), together with the total number of detected cell walls (N). We can see from Table 1 that the user corrections were below 20% of the total automatically extracted cell file data (considering length). This reveals a high agreement between the automated result and the user's expert measurements. Also in Table 1 we can see that, for the cell wall detection approach, the users found only 7.5% errors on average, leaving most automated detections unchanged. Two automated cell file and cell wall results can be seen in Figure 6.
Table 1. Results for user error correction through interaction after automatic root file detection and cell length estimation (as a percentage of the total number of cell files, n). In parentheses the percentage of cell file length affected is given. Cell wall detection errors are presented as false positives (FP) and false negatives (FN), together with the total detected number (N).

                          Cell files                                                          Cell file walls
Image    n     length   splits  adds      deletes    cuts       joins      extends     N     FP    FN
1        40    11517    7.5     0(0)      12.5(1.8)  22.5(1.0)  27.5(1.8)  5(121)      500   15    5
2        38    8311     7.9     5.2(1.0)  7.9(1.2)   18.4(0.7)  31.6(4.6)  5.2(150)    304   7     12
3        48    10411    8.3     0(0)      20.8(9.1)  8.3(0.6)   29.1(5.5)  2(84)       384   35    21
4        30    6931     0       0(0)      17.1(6.9)  16.7(1.4)  20(22.1)   6.7(78)     215   21    0
5        35    6606     0       2.9(5.6)  15.6(6.1)  0(0)       20(7.2)    7.2(715)    253   8     0
average  38.2  8755     4.7     1.6(1.3)  15.7(5.0)  13.1(0.7)  25.6(8.2)  7.2(3.1)    331   17.2  7.6
Fig. 6. Final detection results (with no correction) for two images of Arabidopsis thaliana root (cropped detail)
The most frequently performed user operation, on average, was the joining of partially detected cell files into larger single ones (25.6%). However, this accounted for only a small amount of data correction (8.3%). It occurs mostly in plant roots where there is low contrast or a badly defined root direction (see Figure 6). It is also important to point out that during the tests of our approach the user spent less than ten minutes from beginning to end (including both the automated detection and the correction processing). This is a considerable reduction from the time it would normally take a researcher to fully measure the structure of a root (over one hour).
5 Conclusion
We presented an automatic approach to detect cell files and estimate the length of the cells in those files. The approach showed promising results, greatly reducing the time it takes biologists to analyse each cell.
As future work we aim to combine methodologies for Arabidopsis Thaliana root cell segmentation and nuclei detection with the presented cell file detection approach, so that we can obtain a more complete model of the root structure from the image data.
Acknowledgements. The authors acknowledge the funding of Fundação para a Ciência e Tecnologia, under contract ERA-PG/0007/2006.
References
1. Terpstra, I., Heidstra, R.: Stem cells: The root of all cells. Semin. Cell Dev. Biol. 20(9), 1089–1096 (2009)
2. Bennett, T., Scheres, B.: Root development-two meristems for the price of one? Curr. Top Dev. Bio. 91, 67–102 (2010)
3. De Smet, I., Tetsumura, T., De Rybel, B., Frey, N.F., Laplaze, L., Casimiro, I., Swarup, R., Naudts, M., Vanneste, S., Audenaert, D., Inze, D., Bennett, M.J., Beeckman, T.: Auxin-dependent regulation of lateral root positioning in the basal meristem of arabidopsis. Development 134(4), 681–690 (2007)
4. Nieuwland, J., Maughan, S., Dewitte, W., Scofield, S., Sanz, L., Murray, J.A.: The d-type cyclin cycd4;1 modulates lateral root density in arabidopsis by affecting the basal meristem region. Proc. Natl. Acad. Sci. U S A 106(52), 22528–22533 (2009)
5. Vanstraelen, M., Baloban, M., Da Ines, O., Cultrone, A., Lammens, T., Boudolf, V., Brown, S.C., De Veylder, L., Mergaert, P., Kondorosi, E.: Apc/c-ccs52a complexes control meristem maintenance in the arabidopsis root. Proc. Natl. Acad. Sci. U S A 106(28), 11806–11811 (2009)
6. Beemster, G., Baskin, T.: Analysis of cell division and elongation underlying the developmental acceleration of root growth in arabidopsis thaliana. Plant Physiology 116, 1515–1526 (1998)
7. Iwamoto, A., Satoh, D., Furutani, M., Maruyama, S., Ohba, H., Sugiyama, M.: Insight into the basis of root growth in arabidopsis thaliana provided by a simple mathematical model. J. Plant Res. 119, 85–93 (2006)
8. Immink, R., Gadella, T., Ferrario, S., Busscher, M., Angenent, G.: Analysis of mads box protein-protein interactions in living plant cells. Proc. Natl. Acad. Sci. USA 99, 2416–2421 (2002)
9. Willemse, J., Kulikova, O., Jong, H., Bisseling, T.: A new whole-mount dna quantification method and the analysis of nuclear dna content in the stem-cell niche of arabidopsis roots. The Plant Journal 10, 1365–1374 (2008)
10. Marcuzzo, M., Quelhas, P., Campilho, A., Mendonça, A.M., Campilho, A.: Automated arabidopsis plant root cell segmentation based on svm classification and region merging. Comp. in Biology and Medicine 39, 785–793 (2009)
11. Marcuzzo, M., Guichard, T., Quelhas, P., Mendonça, A.M., Campilho, A.: Cell division detection on the arabidopsis thaliana root. In: Araujo, H., Mendonça, A.M., Pinho, A.J., Torres, M.I. (eds.) IbPRIA 2009. LNCS, vol. 5524, pp. 168–175. Springer, Heidelberg (2009)
12. Kovesi, P.: Symmetry and asymmetry from local phase. In: Proc. of the 10th Australian Joint Conf. on Artificial Intelligence (December 1997)
13. Campilho, A., Garcia, B., Toorn, H., Wijk, H., Campilho, A., Scheres, B.: Time-lapse analysis of stem-cell divisions in the arabidopsis thaliana root meristem. The Plant Journal 48, 619–627 (2006)
14. Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.: Image denoising by sparse 3d transform-domain collaborative filtering. IEEE Trans. on Image Processing 16(8), 2080–2095 (2007)
15. Otsu, N.: A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics SMC-9(1), 62–66 (1979)
A Machine Vision Framework for Automated Localization of Microinjection Sites on Low-Contrast Single Adherent Cells

Hadi Esmaeilsabzali1, Kelly Sakaki1, Nikolai Dechev2, Robert D. Burke3, and Edward J. Park1

1 School of Engineering Science, Simon Fraser University, Surrey, BC, Canada
2 Department of Mechanical Engineering, University of Victoria, Victoria, BC, Canada
3 Department of Biochemistry and Microbiology, University of Victoria, Victoria, Canada
{hesmaeil,kelly.sakaki,ed_park}@sfu.ca, {dechev,rburke}@uvic.ca
Abstract. To perform high-throughput single-cell studies, automation of such experiments is necessary. Due to their complex morphology, automatic manipulation and visual analysis of adherent cells, which include a wide range of mammalian cell lines, is a challenging task. In this paper, the problem of adherent cell localization for the purpose of automated robotic microinjection is stated and a practical two-stage texture-based solution is proposed. The method has been tested on NIH/3T3 cells and the results are reported. Keywords: cell segmentation, cytoplasmic and nucleic microinjection, adherent cells, region-growing algorithm, mathematical morphology, 3T3 cells.
1 Introduction

1.1 Background and Motivation

The main motivation behind single-cell research is that, by studying an individual cell at a time instead of a population of cells, highly precise information about that particular cell and its different biological functions can be obtained. A fundamental step in many single-cell studies is to manipulate the genetic or biochemical contents of cells by inserting foreign molecules, such as RNAs, DNAs, or other biochemical molecules, into the cell. To introduce external substances into the cell, its bilayer lipid membrane (BLM) should be permeated [1]. Capillary pressure microinjection (CPM) and single-cell electroporation (SCE) are two effective methods being extensively used for this purpose. While the CPM technique involves mechanically penetrating the cell membrane and delivering the desired substance into the cell via the microcapillary, the SCE method can insert the molecules into single cells without piercing the BLM, and is hence much less invasive. SCE can be done by shrinking the electrical field
to target a small region, approx. 1%, of the membrane of a single cell. As a result, tiny pores are formed on the membrane, letting foreign substances enter the cell. Manually performing both CPM and SCE is lengthy, subjective and inefficient [2]. Also, to achieve statistically consistent results in microbiological experiments, it is usually necessary to inject a large number of individual cells [3]. This may not be feasible if manual systems are employed. Hence, automated robotic microinjection systems, which can operate independently of the human operator and would provide higher throughput, efficiency, repeatability, and speed, are being investigated [2], [4], [5]. A proof-of-concept automated robotic single-cell manipulation and analysis system, called RoboSCell, has already been developed and reported in [4]. In that work, the automatic SCE capability of RoboSCell was demonstrated by in vitro insertion of fluorescent dyes into single sea urchin eggs, which are fairly large (approx. 80 μm in diameter) and nearly spherical. To generalize this technology, the automated SCE capability of RoboSCell has to be extended to other cell types, particularly adherent cells, which cover a wide range of mammalian cells. This issue is partly addressed in this paper. 1.2 Machine Vision Task for Adherent Cells Localization In cell biology, cells are usually classified as either suspended (e.g., embryos, oocytes, etc.) or adherent (e.g., fibroblasts, endothelial cells, etc.). Because of their almost round and regular shape, machine vision tasks for suspended cells are routine procedures for which many standard image processing algorithms, such as template matching techniques (e.g., the Hough transform), have already been devised. As a result, a considerable number of studies have been reported on the vision-based automatic microinjection of single suspended cells [2], [4], [5]. In contrast, there has been little similar work on adherent cells. Automated machine vision-based localization of the "injection site," where the external molecules are inserted into the adherent cells, seems to be the main obstacle. Very recently, this problem was partially addressed in [6], where a method was developed for the localization of so-called "deposition destinations" on endothelial cells. Although it has shown satisfactory performance, that method detects injection sites only on the nucleus. However, there are biological applications, such as the study of the intracellular behavior of nano-particles, in which the external materials need to be inserted right into the cytoplasm. The fact that a single-cell microinjection system should be able to manipulate cells at any location has been stated in the literature [2], [7]; however, it has not been properly addressed so far. Problem Statement. Hence, an image processing algorithm that can localize injection sites, both on the nucleus (Nucleic Injection Site (NIS)) and on the cytoplasm (Cytoplasmic Injection Site (CIS)) of adherent cells, is required. This machine vision framework can then be employed in automated robotic single-cell injection systems with either SCE or CPM capabilities. The two injection sites are defined as follows:
- Nucleic Injection Site (NIS): a point inside the nucleus of the adherent cell.
- Cytoplasmic Injection Site (CIS): a point on a part of the cytoplasm which is fairly far from the nucleus, but is thick enough for the purpose of microinjection. This is the criterion used by cell biologists in practice.
Also, the developed machine vision system must be fast enough to be operated in real time. This requirement restricts the application of some image processing algorithms, such as active contours. It should also perform robustly under the different imaging parameters and cell environments described below:
Cell Morphology: Adherent cells mostly have an irregular morphology; this makes the cell segmentation problem more difficult, as many standard image processing techniques cannot be employed.
Occlusion: The fact that these cells usually tend to grow in close vicinity creates a critical problem for the detection of single adherent cells (see Fig. 1). Two or more adjacent cells can easily be detected as a single cell.
Object multiplicity: Cellular debris, extracellular structures, small pieces of membrane, and organelles might be detected as cells.
Poor contrast (short range of grey levels): This is one of the main problems in cell image processing. In particular, because of their low thickness, images taken from adherent cells have lower contrast than those of suspended cells, as the latter are thicker in general. This problem is even worse in the adherent cell's peripheral areas, which correspond to the cytoplasm (see Fig. 1). Because of the potential phototoxicity, fluorescence microscopy, which could potentially solve the poor-contrast problem, cannot be used in this application.
To address the aforementioned problems, a practical and straightforward image processing framework is proposed, which is discussed in the next section. Sections 3 and 4 present the experimental results and the conclusion, respectively.
2 Methodology Fig. 1 shows a sample image of NIH/3T3 cells under DIC microscopy. It can be observed that, due to the presence of nucleoli, ribosomes, and rough endoplasmic reticulum (RER), the region of the cell associated with the nucleus has a higher contrast. This "nucleic region" is often located in the middle of the cell and is fairly distinguishable for almost all cells in typical DIC images, regardless of the cell size and its stage of development. Hence, this characteristic can be used to first detect the cells, and then localize the CIS and NIS. Moreover, the nucleic regions do not touch each other, as cells do in a confluent population. Therefore, by using these two characteristics, two challenges, i.e. poor contrast and occlusion, can be effectively addressed. The proposed algorithm is a hybrid method with two main stages, as follows:
Fig. 1. NIH/3T3 cells under DIC microscopy. The high-contrast "nucleic regions" have been marked with red rectangles. Notice the low contrast of the cytoplasmic regions.
2.1 Nucleic Region Segmentation and NIS Localization The sequential steps taken in this stage are: Gaussian Smoothing: As the pre-processing step, a Gaussian smoothing filter is first applied to remove the high-spatial-frequency noise resulting from the randomness of the photon flux or from camera characteristics, and also to reduce the effect of uneven illumination [8]. Image Gradient: To detect pixels with higher contrast, a gradient operator, which highlights the light intensity variations (i.e. contrast), is applied. In digital image processing, the image gradient is calculated by convolving the image with predefined kernels. To obtain as much information as possible about the high-contrast regions in the image, two independent kernels that calculate the gradient in the vertical and horizontal directions have been used, which results in a higher S/N and better nucleic region segmentation. Image Thresholding: The gradient image obtained in the last step should be examined to find the pixels associated with high-contrast regions. This task can be done by thresholding the gradient image. Those pixels whose value is higher than the threshold represent high-contrast pixels which could potentially belong to the nucleic regions. The Otsu clustering technique, which performs robustly for images with a single peak in their histogram (as is the case for the gradient images considered in this study), has been employed for this purpose [9]. Mathematical Morphology Operations: The result of the thresholding step is a binary image whose pixels' value is "1" if the corresponding pixels in the gradient image have higher values than the threshold. However, these high-contrast regions in the
binary image are usually small segregated groups of pixels which should be properly connected to form uniform clusters of pixels that resemble the nucleic regions. Mathematical morphology (MM) operations can be used for this purpose. All MM operations are based on a so-called structuring element which is used to interact with the binary image. The size of the structuring element and the number of repetitions of each MM operation can significantly change the final result of the MM operation. These parameters are the only inputs that need to be adjusted in this algorithm and have been tuned using the training image library. The binary image is first dilated to "partially" connect the small pixel-sets representing high-contrast nucleic regions. Then, all small particles whose size is less than a threshold are filtered out. These particles do not represent the nucleic organelles, but have been formed after the dilation step. The threshold is set according to the imaging conditions as well as the cell type. Next, the dilation operation of the first step is repeated to "fully" connect the remaining pixel-sets and produce a coherent group of pixels for each nucleic region. It should be noted that this profile of "dilation-filtration-dilation" is necessary to properly address the occlusion problem. In fact, while the first dilation operation produces a semi-coherent pixel-set for each nucleic region, the filtration operation removes the noisy in-between pixels. This filtering prevents the formation of a single pixel-set associated with two or more adjacent cells. The last dilation operation then produces a rough estimation of the nucleic regions. Next, all holes in the resulting pixel-clusters in the binary image are filled. Then, morphological erosion is performed to remove spurious artefacts and to produce well-shaped pixel-clusters. Finally, since sometimes two dividing cells whose nuclei have not completely formed, or cells in a highly confluent population (i.e. a cell monolayer), are detected as one object, in the last step morphological filters which evaluate the size and the shape (i.e. aspect ratio) of the detected clusters have been employed to remove those which most likely do not represent a single nucleus. NIS Detection: Most adherent cells' nuclei, such as those of the NIH/3T3 cells shown in Fig. 1, have ellipsoidal or circular layouts in 2D microscopic images. Therefore, once the clusters are detected, based on their shape (i.e. aspect ratio), they are approximated by either an ellipse or a circle of the same size and spatial properties. Then, the centre of the ellipse or circle is chosen as the NIS. This way, it can be highly expected that the selected NIS is located inside the nucleus, as required by the problem statement.
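For readers who want to reproduce the nucleic-region stage, the sketch below is a minimal Python approximation of the sequence described above (smoothing, two-direction gradient, Otsu thresholding, and the dilation-filtration-dilation profile); it is not the authors' LabVIEW implementation, and the smoothing sigma, structuring-element radii, minimum particle area, and aspect-ratio limit are illustrative placeholders that would have to be tuned on a training image library.

```python
import numpy as np
from scipy import ndimage
from skimage import filters, morphology, measure

def segment_nucleic_regions(img, min_area=150):
    """Rough sketch of the nucleic-region segmentation stage.

    img      : 2D grayscale DIC image (float or integer array)
    min_area : placeholder size threshold for the filtration step
    """
    # 1. Gaussian smoothing to suppress high-spatial-frequency noise
    smooth = ndimage.gaussian_filter(img.astype(float), sigma=1.5)

    # 2. Horizontal and vertical gradient kernels combined into one magnitude image
    gx = ndimage.sobel(smooth, axis=1)
    gy = ndimage.sobel(smooth, axis=0)
    grad = np.hypot(gx, gy)

    # 3. Otsu threshold on the gradient image -> candidate high-contrast pixels
    binary = grad > filters.threshold_otsu(grad)

    # 4. Dilation-filtration-dilation to form coherent nucleic clusters
    binary = morphology.binary_dilation(binary, morphology.disk(2))
    binary = morphology.remove_small_objects(binary, min_size=min_area)
    binary = morphology.binary_dilation(binary, morphology.disk(2))

    # 5. Fill holes and erode to obtain well-shaped clusters
    binary = ndimage.binary_fill_holes(binary)
    binary = morphology.binary_erosion(binary, morphology.disk(2))

    # 6. Keep clusters whose size and aspect ratio plausibly match one nucleus;
    #    the cluster centroid stands in for the centre of the fitted ellipse (NIS)
    labels = measure.label(binary)
    nis_candidates = []
    for region in measure.regionprops(labels):
        if region.area >= min_area and region.minor_axis_length > 0:
            aspect = region.major_axis_length / region.minor_axis_length
            if aspect < 3.0:               # placeholder shape filter
                nis_candidates.append(region.centroid)
    return binary, nis_candidates
```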
To perform this search, the area around each estimated nucleic ellipse or circle will be examined using a number of surrounding “probe-circles” and the low-level
feedback information. This feedback information is used to assess the area covered by each probe-circle and to decide whether or not that area belongs to the cell's cytoplasmic body. The number of probe-circles determines both the precision and the speed of the algorithm. It has been found that using 8 circles provides satisfactory results, both in terms of precision and speed. The low-level information should originate from the cell body. Looking at the image of 3T3 cells in Fig. 1, a slight contrast can be observed in the regions of the image that represent the cytoplasm. This contrast is mainly created by the cytoskeleton. If extracted properly, these weak contrasts can serve as the desired feedback information. To retrieve the low-contrast information from these regions, a highly sensitive edge detection algorithm is needed. The Canny edge detector is a method that can satisfy this requirement. The edge image obtained by the Canny operator vaguely shows the layout of each cell (see Fig. 3.b). Using this edge information, the regions covered by each probe-circle can be evaluated as follows: the higher the edge density in the portion covered by the probe-circle, the thicker the cytoskeleton, and the denser that portion. One straightforward option for the localization of the CIS would obviously be the densest area covered by the initial surrounding probe-circles. However, this is not the optimal location, as microinjection at this point may affect the nucleus contents as well. Hence, a point on a fairly dense portion of the cytoplasm which is also far enough from the nucleus must be sought. The pseudo-region-growing algorithm presented in Fig. 2 has been suggested for this purpose. The algorithm starts with the initial probe-circles which, based on the size of the segmented nucleic clusters, are defined around the estimated nucleic ellipses/circles. Canny edge information is used as the feedback data. In the flow chart shown in Fig. 2, the Average Edge Density (AED) is the mean value of the pixels covered by each probe-circle in the Canny image, and it is used as a measure of the density of the cytoplasmic portion covered by each probe-circle. The maximum number of iterations (count) has been set according to an analysis carried out on the training data. It has been found that the maximum number of iterations which produces a probe-circle whose area equals the area of the corresponding nucleic ellipse/circle is sufficient for an initial probe-circle to cover the whole portion located between the nucleus and the cell boundary. Upon termination of the algorithm for each cell, the centre of the largest probe-circle is chosen as the CIS. Hence, the requirements stated for the CIS are satisfied.
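The sketch below illustrates the feedback step of this search in Python: a Canny edge map is computed, eight probe-circles are placed around the estimated nucleic circle, their average edge densities (AEDs) are measured, and circles with an AED below mean(AED) - variance(AED) are discarded. The Canny sigma, the circle placement rule, and the initial probe-circle radius are our assumptions, not the authors' exact settings; the iterative expansion and the final CIS choice are omitted for brevity.

```python
import numpy as np
from skimage import feature

def average_edge_density(edge_map, cx, cy, radius):
    """Mean edge value inside a probe-circle centred at (cx, cy)."""
    h, w = edge_map.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    return edge_map[mask].mean()

def initial_probe_circles(nucleus_cx, nucleus_cy, nucleus_r, n_circles=8):
    """Place probe-circles evenly around the estimated nucleic circle."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_circles, endpoint=False)
    # Probe-circle area ~ 1/16 of the nucleic circle, i.e. radius / 4
    # (our reading of the sizing rule in Fig. 2; treated as an assumption)
    probe_r = nucleus_r / 4.0
    centres = [(nucleus_cx + (nucleus_r + probe_r) * np.cos(a),
                nucleus_cy + (nucleus_r + probe_r) * np.sin(a)) for a in angles]
    return centres, probe_r

def evaluate_probes(image, nucleus_cx, nucleus_cy, nucleus_r):
    """Score the initial probe-circles and drop the low-density ones."""
    edges = feature.canny(image.astype(float), sigma=1.0).astype(float)
    centres, probe_r = initial_probe_circles(nucleus_cx, nucleus_cy, nucleus_r)
    aeds = np.array([average_edge_density(edges, cx, cy, probe_r)
                     for cx, cy in centres])
    # Discard sparse (non-cytoplasm) circles, as in the flowchart
    keep = aeds >= aeds.mean() - aeds.var()
    return [c for c, k in zip(centres, keep) if k], aeds[keep]
```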
3 Experimental Results To evaluate and demonstrate the performance of the developed machine vision system, NIH/3T3 cells (Swiss mouse embryonic fibroblast cells) have been employed (ATCC, Cat. no. CRL-1658). NIH/3T3 cells exhibit all the characteristics mentioned for adherent cells. Cells were cultured in α-MEM (1X) (Invitrogen, Cat. no. 12571-063) supplemented with 5% FBS (Invitrogen, Cat. no. 16000-036) and 1% Pen-Strep (Invitrogen, Cat. no. 15140-12). Cells were plated 10 hours before being imaged at a concentration of (60±2)×10^4 cells/ml. The image acquisition system consists of an inverted Nikon TE-2000U microscope equipped with a 60× Nikon ELWD Plan Fluor objective with a numerical aperture (NA) of 0.7. The DIC mode has been employed, and a Retiga EXi digital CCD camera with a resolution of 1392×1040 pixels was used to capture images.
Fig. 2. Pseudo-region-growing algorithm for the CIS localization. [Flowchart summary: eight probe-circles, each sized at 1/16 of the associated nucleic ellipse or circle, are initialized around the nucleus; the AED of each (AED(1) to AED(8)) is computed, together with mean(AED) and variance(AED), and probe-circles with an AED below mean(AED) - variance(AED) are discarded. The remaining Nc probe-circles are then expanded iteratively, for at most n iterations counted by the variable count; an expanded probe-circle is kept only if its AED exceeds the reference measure AEDR (the average AED of the remaining probe-circles after removing outliers) and it does not approach other cells; otherwise it is replaced by the previous probe-circle. The algorithm stops when all remaining probe-circles have been processed.]
The algorithm has been implemented in NI-LabVIEW® 9.0 and run on a PC (Intel Core 2, 2.99 GHz, 4 GB RAM) with the 64-bit Windows 7 Professional OS. The algorithm has been tested on an image library containing 170 images, which were captured by manually scanning a square area of the Petri dish. To tune the algorithm parameters, a separate image library containing 15 images, captured under the same imaging conditions, was used. Fig. 3 shows the localization results and the Canny edge images used as the feedback data in the pseudo-region-growing algorithm for the CIS localization. The results of the NIS and CIS localization for each image in the library were analyzed by an operator and evaluated in two respects. First, the total number of detected cells was counted. Detected cells are those clusters that correctly correspond to the cells' nucleic regions. However, this does not necessarily give the number of correct NIS detections, as sometimes the identified NIS does not lie entirely inside the nucleus. Next, the total number of cells for which both the NIS and CIS were detected correctly was counted. The results of the NIS and CIS detection are summarized in Table 1.
Table 1. Summary of NIS and CIS Localization Algorithm Analysis
Number of images: 170
Total number of cells: 522
Number of detected cells: 404
Number of NIS & CIS detections: 359
Overall cell detection ratio: 73.7%
NIS & CIS detection ratio among ALL cells: 68.8%
NIS & CIS detection ratio among DETECTED cells: 88.9%
Fig. 3. (a) NIS (blue point) and CIS (red point) localization results and (b) the pseudo-region-growing algorithm results for the probe-circles
4 Conclusion Single-cell research is a promising field that is likely to change the perspective on human health. High-throughput single-cell studies require performing thousands of experiments and processing large amounts of data in a short time. Hence, automation of the experiments is inevitable. One of the main steps in cell biology experiments is the microinjection of single cells. This paper addressed the practical problem of localizing nucleic and cytoplasmic microinjection sites, which can be employed in a robotic microinjection system. For this purpose, a hybrid image processing algorithm based on the concepts of mathematical morphology and region growing was designed and tested on a library of 170 images. The preliminary results are encouraging, yet potential modifications are being studied. Acknowledgments. We would like to thank Dr. Timothy Beischlag and Mr. Kevin Tam from the Faculty of Health Sciences, Simon Fraser University, for their great hospitality and assistance during the experiments performed in the Beischlag Lab.
References 1. Stein, W.D.: Transport and diffusion across cell membranes. Academic Press, Orlando (1986) 2. Sun, Y., Nelson, B.: Biological Cell Injection Using an Autonomous MicroRobotic System. The International Journal of Robotics Research 21(10-11), 861–868 (2002) 3. Lindström, S., Svahn, H.A.: Overview of single-cell analyses: microdevices and applications. Lab on a Chip, Advance Article (2010) 4. Sakaki, K., Dechev, N., Burke, R.D., Park, E.J.: Development of an autonomous biological cell manipulator with single-cell electroporation and visual servoing capabilities. IEEE Trans. on Biomed. Eng. 56(8), 2064–2074 (2009) 5. Huang, H., Sun, D., Mills, J.K., Cheng, S.H.: Robotic Cell Injection System With Position and Force Control: Toward Automatic Batch Biomanipulation. IEEE Trans. on Robotics 25(3), 727–737 (2009) 6. Wang, W.H., Hewett, D., Hann, C.E., Chase, J.G., Chen, X.Q.: Application of machine vision for automated cell injection. Int. J. Mechatronics and Manufacturing Systems 2, 120–134 (2009) 7. Rae, J.L., Levis, R.A.: Single-cell electroporation. Instruments and Techniques 443(4), 664–670 (2001) 8. Likar, B., et al.: Retrospective shading correction based on entropy minimization. Journal of Microscopy 197, 285–295 (2000) 9. Otsu, N.: A threshold selection method from gray-level histograms. IEEE Trans. on Systems, Man, and Cybernetics 9(1), 62–66 (1979)
A Texture-Based Probabilistic Approach for Lung Nodule Segmentation
Olga Zinoveva1, Dmitriy Zinovev2, Stephen A. Siena3, Daniela S. Raicu4, Jacob Furst4, and Samuel G. Armato4
1 Harvard University, 200 Quincy Mail Ctr., Cambridge, MA
2 College of Computing and Digital Media, DePaul University, 243 S. Wabash Ave, Chicago, IL 60604
3 University of Notre Dame, Notre Dame, IN 46556
4 Comprehensive Cancer Center, The University of Chicago, 5841 South Maryland Avenue, MC 1140, Chicago, IL 60637
Abstract. Producing consistent segmentations of lung nodules in CT scans is a persistent problem for image processing algorithms. Many hard-segmentation approaches have been proposed in the literature, but soft segmentation of lung nodules remains largely unexplored. In this paper, we propose a classification-based approach based on pixel-level texture features that produces soft (probabilistic) segmentations. We tested this classifier on the publicly available Lung Image Database Consortium (LIDC) dataset. We further refined the classification results with a post-processing algorithm based on the variability index. The algorithm performed well on nodules not adjacent to the chest wall, producing a soft overlap of 0.52 between the radiologist-based and computer-based segmentations. In the long term, these soft segmentations will be useful for representing the uncertainty in nodule boundaries that is manifest in radiological image segmentations. Keywords: segmentation, probabilistic, lung, classifier, LIDC.
1 Introduction Most lung cancer treatment methods rely on early detection of malignant tumors. An effective way of measuring the malignancy of a lung nodule is by taking repeated computed tomography (CT) scans at intervals of several months and measuring the change in the nodule's volume[1]. However, the process of segmenting the nodule consistently is challenging, both for human readers and for automated algorithms. One of the greatest difficulties facing automatic lung nodule segmentation algorithms is the absence of a reliable and unambiguous ground truth. Many algorithms are trained on data from the Lung Image Database Consortium (LIDC)[2], which provides a reference truth based on the contours marked by four radiologists. Armato et al. explored the possible reference truths that may be constructed from the sets of nodules detected by different radiologists on the same CT scans, and found significant variations[3]. This alone can greatly affect the results of a detection algorithm. The same is true of segmentation. Siena et al. measured the variability of
radiologist contours in LIDC data and found that there are certain images for which disagreement is extremely high[4]. In such cases, it may be difficult to find a reliable reference truth. The vast majority of lung nodule segmentation algorithms in the past have produced hard (binary) segmentations. Many methods for this kind of classification exist, but it is difficult to compare their effectiveness because they are quantified differently. For instance, Liu et al. used the popular level set technique, though they did not provide an overall quantitative measure of their algorithm's accuracy[5]. Dehmeshki et al. employed region growing and fuzzy connectivity and evaluated segmentation results subjectively with the help of radiologists[6]. Xu et al. used dynamic programming to segment nodules with radiologist-defined seed points, though they did not test their algorithm on a dataset[7]. Q. Wang et al.'s publication on dynamic programming[8] and J. Wang et al.'s paper on a 3D segmentation algorithm[9] evaluated their results by calculating the overlap of computer-generated segmentations against a ground truth. Q. Wang et al. obtained overlaps of 0.58 and 0.66 on two datasets, and J. Wang et al. had overlaps of 0.64 and 0.66, also on two different datasets. Comparison of different methods is further complicated by the use of different datasets and varying methods of constructing the reference truths. The variation in the reference truth, which is usually produced by experienced radiologists, indicates that there may be more than one way to correctly segment lung nodules, so a probabilistic (soft) segmentation, which preserves variation in the data, may be a more natural way to segment nodules. Soft segmentation has been applied to different areas of medical image processing, including segmentations of the kidneys[10] and magnetic resonance images of the brain[11]. However, little work has been done on soft segmentations of lung nodules. Van Ginneken produced a soft segmentation of the 23 nodules in the first version of the LIDC dataset using a region growing technique[12]. In this paper, we propose a new method for soft lung nodule segmentation that investigates the power of texture-based image features in segmenting lung nodules.
2 Materials and Methods Our approach was to train a classifier using texture and intensity features, and then use it to classify pixels of interest. After this initial segmentation, we refined our results using a post-processing algorithm (Variability Index (VI)[4] Trimming) that trims those portions of the segmentation that appear to create the most variation in the data. In the next five sections, we explain our proposed soft segmentation approach. 2.1 LIDC Dataset, Probability Maps, and Data Preprocessing The LIDC dataset is an expanding collection of CT scans analyzed at five US academic institutions in the effort to facilitate the testing of computer-aided diagnosis (CAD) systems. At the time of this study, the second version of the dataset, LIDC85, was available, containing 60 series of chest CT scans representing 149 nodules. Each scan was presented separately to 4 radiologists, who provided contours for all nodules they found between 3 and 30 mm. Each nodule, therefore, was outlined by up to 4
radiologists. Given that we are investigating a soft segmentation approach, for which several boundaries per slice were needed to train and test our method, we created our dataset as a subset of the LIDC85: 39 nodules on 326 images, selected based on the criterion that each nodule would contain at least one 2D slice with 4 contours; 264 images were in this category. The other 62 images were the remaining slices for the 39 nodules that contained fewer than 4 contours, as radiologist opinion may have differed with regard to the superior and inferior slices of a nodule. The contours produced by the radiologists were translated into probability maps for analysis (Fig. 1A). In a probability map (p-map), each pixel of the image is assigned a probability of belonging to the structure of interest (a lung nodule, in our case). Since up to four radiologists annotated each nodule, each pixel can take on 5 discrete probability values (0, 0.25, 0.50, 0.75, and 1), depending on the number of radiologists that included that pixel within their contours. In our algorithms, these probability values were replaced with (0, 1, 2, 3, and 4), which indicate the number of radiologists that included the pixel.
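As a simple illustration of how such p-maps are formed, the following Python sketch sums rasterized radiologist masks into the 0-4 count map and the corresponding 0-1 probability map; the mask rasterization itself and all variable names are ours, not part of the LIDC tooling.

```python
import numpy as np

def build_pmap(radiologist_masks):
    """Combine up to four binary nodule masks into a probability map.

    radiologist_masks : list of 2D boolean arrays, one per radiologist,
                        all with the same shape as the CT slice.
    Returns the integer count map (0-4) used in the algorithms and the
    corresponding probability map (0, 0.25, 0.5, 0.75, 1).
    """
    count_map = np.sum([m.astype(np.uint8) for m in radiologist_masks], axis=0)
    pmap = count_map / 4.0        # four readers -> probabilities in steps of 0.25
    return count_map, pmap

# Toy example with four masks of a 512x512 slice
masks = [np.zeros((512, 512), dtype=bool) for _ in range(4)]
for m in masks:
    m[250:270, 300:322] = True    # pretend each reader outlined roughly the same area
count_map, pmap = build_pmap(masks)
```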
Fig. 1. Important terms. (A) Radiologist outlines for a nodule in the LIDC dataset (top) and the corresponding p-map (bottom), where the level of probability is indicated by the color, from white (0) to dark grey (1). (B) A computer-generated p-map produced for the nodule in A using a decision tree classifier, overlaid on the CT scan (top) and displayed similarly to the radiologist p-map (bottom). (C) Thresholded p-maps for the computer-generated p-map in B (bottom), from 0.25 (leftmost) to 1 (rightmost), and their corresponding contours (top).
Due to the varying properties and settings of the different scanners used to collect the LIDC data, pixel intensity histograms were not consistent on a series-to-series basis. At least four brands of scanners were used, including ones from GE, Toshiba, Siemens, and Philips, and certain settings and data display options were highly variable. The most significant variable that defined the histograms was the rescale intercept b used in the series of scans. In order to make intensity values comparable across images, we modified all intensity histograms by shifting their rescale intercept to -1024 (the most common value). When training the classifier, we selected 10,000 pixels for the nodule class and 10,000 pixels for the non-nodule class. The pixels were selected from the radiological p-maps. Every pixel was assigned a value 0-4, indicating the number of outlines that included it, so that pixels selected by four radiologists would have a value of four. Training pixels were selected only from those slices of the 39 nodules that contained outlines from four radiologists.
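A minimal sketch of this intensity normalization, assuming a rescale slope of 1 (typical for CT series) and that the per-series rescale intercept has already been read from the header, might look as follows; the function and parameter names are illustrative.

```python
import numpy as np

def normalize_to_common_intercept(stored_pixels, rescale_intercept,
                                  target_intercept=-1024, rescale_slope=1.0):
    """Shift stored CT values so every series behaves as if its
    rescale intercept were target_intercept (here -1024).

    stored_pixels     : raw pixel array from the scan
    rescale_intercept : the series' rescale intercept b
    """
    # HU = stored * slope + intercept; re-express the stored values
    # against the common target intercept
    shift = (rescale_intercept - target_intercept) / rescale_slope
    return stored_pixels.astype(np.int32) + int(round(shift))
```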
Fig. 2. An example of a p-map constructed from radiologists’ outlines of a nodule. The shades of gray represent probabilities 0, 0.25, 0.5, 0.75, and 1.
In the selection of random points, the non-nodule pixels were those that lay outside of the p-map, but inside a rectangular box that included the nodule and added 8 pixels in all four directions (Fig. 2). This 8-pixel box was selected because the high running time of certain feature calculation algorithms did not allow for a larger one, and anything smaller would have prevented us from evaluating the algorithm's performance on structures surrounding the nodule. The nodule pixels were selected from the 0.25, 0.50, 0.75, and 1 p-map areas. 2.2 Feature Extraction Once random training pixels were selected, we performed feature extraction on the pixel level. We calculated the following features in a 9×9 neighborhood around the pixel of interest: intensity (including the intensity of the pixel of interest, as well as the minimum, maximum, mean, and standard deviation of the intensities in the neighborhood), Gabor filters, and Markov Random Fields. To extract the Gabor and Markov features, we used an open-source implementation of a feature extractor called BRISC[13]. Gabor filters are harmonic functions modulated by a Gaussian function. As per the algorithm, 12 Gabor filters were used: combinations of four orientations θ (0, π/4, π/2, 3π/4) and three frequencies 1/λ (0.3, 0.4, 0.5). A Markov random field is a matrix of random variables that exhibit a Markov property with respect to their neighbors. In the BRISC implementation, four Gaussian Markov random field parameters (corresponding to four orientations between two neighboring pixels: 0°, 45°, 90°, 135°) and their variance were calculated. 2.3 Decision Tree Classification We built our classifier using the Classification and Regression Trees (C&RT) binary decision tree provided by SPSS. We trained the classifier on 10,000 non-nodule and 10,000 nodule pixels, which represented 3 percent of the entire pixel dataset. We then used the result to classify all 525,084 pixels that lay within the 8-pixel offset box described in Fig. 2. The classifier assigned each pixel a continuous probability (CP')
from 0 to 1 of belonging to the nodule. We constructed computer-generated p-maps (Fig. 1B) by binning these probabilities to make them comparable to the radiologist p-maps during the analysis of the data. We found the discrete probability values CP as follows:

CP = \begin{cases} 1 & \text{if } CP' \geq \frac{\bar{P}_{1} + \bar{P}_{0.75}}{2} \\ 0.75 & \text{if } \frac{\bar{P}_{0.75} + \bar{P}_{0.5}}{2} \leq CP' < \frac{\bar{P}_{1} + \bar{P}_{0.75}}{2} \\ 0.5 & \text{if } \frac{\bar{P}_{0.5} + \bar{P}_{0.25}}{2} \leq CP' < \frac{\bar{P}_{0.75} + \bar{P}_{0.5}}{2} \\ 0.25 & \text{if } \frac{\bar{P}_{0.25} + \bar{P}_{0}}{2} \leq CP' < \frac{\bar{P}_{0.5} + \bar{P}_{0.25}}{2} \\ 0 & \text{otherwise} \end{cases}    (1)
where \bar{P}_{a}, a ∈ {1, 0.75, 0.5, 0.25, 0}, is the average probability assigned by the decision tree to all training set pixels originating from the respective p-map area a, and CP' is the probability value assigned by the classifier to the specific pixel in the test set. For instance, if the average probability assigned by the decision tree to the pixels originating from p-map area 1 is 0.92, and the average probability assigned to those originating from p-map area 0.75 is 0.76, any pixel above 0.84 would be assigned the value of 1 on the p-map by the algorithm. We attempted different methods for finding the thresholds, including hard-coding values, but we have found that this approach performs best. 2.4 Post-processing To improve the results of our initial segmentation, especially with regard to systematic mistakes and over-segmentation, we used a post-processing algorithm called VI Trimming, which requires a seed point selected from the p-map generated by the classifier. First, we constructed thresholded p-maps (Fig. 1C) of our soft segmentations for each of the probability areas (when a scan contained multiple nodules, each of the segmentations was treated separately when finding seed points). These p-maps were then passed through a built-in Matlab implementation of a Savitzky-Golay filter[14]. This filter reduces the impact of noise in an image by moving a frame of a specified size over each column of an image and performing a polynomial regression on the pixels in that frame. The value of each pixel in the frame is then replaced by its value as predicted by the polynomial. We used a filter with a polynomial order of 3 and a frame size of 7. After noise reduction, we produced contours of each of the thresholded p-maps and found the centroid of the most circular contour on the highest-valued p-map, ignoring noise. This centroid was the seed point for the image. The VI Trimming algorithm began with the seed point for the image, then iteratively increased the area around it, starting with a 3×3 square, then growing to 5×5, etc. This square was filled with the computer-generated p-map. For each p-map square, a variability matrix was calculated according to the algorithm developed by Siena et al.[4].
Once calculated, the variability matrix was used to generate a pointer matrix. The pointer matrix is a border that surrounds the outer edge of the variability matrix. Each pixel in the pointer matrix indicates how many variability matrix pixels adjacent to it are below the selected VI threshold. We have found 2 to be the optimal VI threshold value for our purpose. All pixels outside of a 0 in the pointer matrix are reset to 0 in the post-VI Trimming p-map, regardless of their original probabilities. This ensures that the nodule region that contained the seed points is kept in the p-map, while other regions are eliminated. This is the key step for removing misclassified chest wall regions, given that they are separated by at least a small gap from the nodule itself. The matrix stops growing when all the pixels in the pointer matrix are 0. 2.5 Evaluation of the Segmentation We evaluated the quality of our segmentations using two metrics: the soft overlap[12] and the variability index[4] (not to be confused with VI Trimming, which is based on the variability index). The soft overlap is a measure of agreement between two soft segmentations. Values range between 0 for completely dissimilar segmentations, and 1 for identical segmentations. In our case, we compare our computer-generated p-maps against radiologist-generated ones. It is calculated as follows: ∑ ∑
SO_n = \frac{\sum_{i,j} \min\left(CP_n(i,j),\, RP_n(i,j)\right)}{\sum_{i,j} \max\left(CP_n(i,j),\, RP_n(i,j)\right)}    (2)
where CP_n(i,j) is the p-map value for pixel (i,j) in the computer-generated p-map of image n, and RP_n(i,j) is the p-map value for the same pixel in the radiologist-generated p-map. The variability index is a metric for evaluating the variability of a soft segmentation. Given a probability map, it takes into account both the number of pixels with probability below 1 and the shape of each probability area. We used the method described in the VI Trimming section to find the variability matrix for each image. Then, we calculated the variability index VI for each image[4]:
, ,
(3)
where V(i,j) is the variability matrix value at pixel (i,j), and P(i,j) denotes RP(i,j) or CP(i,j), depending on whether the VI is calculated for radiologist or computer segmentations.
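Both metrics are straightforward to compute once the p-maps and the variability matrix are available; the Python sketch below follows equations (2) and (3) as written above, with the construction of the variability matrix (from Siena et al. [4]) left outside the sketch.

```python
import numpy as np

def soft_overlap(cp, rp):
    """Soft overlap (Eq. 2) between a computer p-map and a radiologist p-map."""
    num = np.minimum(cp, rp).sum()
    den = np.maximum(cp, rp).sum()
    return num / den if den > 0 else 0.0

def variability_index(variability_matrix, pmap):
    """Variability index (Eq. 3): total variability normalized by the p-map mass."""
    total_p = pmap.sum()
    return variability_matrix.sum() / total_p if total_p > 0 else 0.0
```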
3 Results We used cross-validation on the training set and obtained the lowest risk estimate (variance about the mean of the node) of 0.06 for a decision tree with a depth of 10 and 53 terminal nodes. Before post-processing, our classifier had a median SO of 0.49 on the 39 nodules in our subset of the LIDC85 (Fig. 3A). The variability index distribution was highly right-skewed, indicating a few outliers with very high variability (Fig. 3B). The median VI was 4.50. Using the Inter-quartile range criterion, there were 42 possible outliers, specifically all images with VI above 16.88.
To improve these results, we used VI Trimming. The median SO rose to 0.52 (Fig. 3C). The large number of nodules with an SO lower than 0.1 is a result of misclassification on specific groups of nodule slices. Specifically, the classifier did not perform well on the superior and inferior slices of each nodule, and on nodule slices in contact with the chest wall. Furthermore, a failure to select good seed points resulted in a lower post-VI Trimming SO for certain nodule slices. After VI Trimming, the median variability index for all images decreased to 2.61. There were 20 outliers, which were all images with VI above 12.09 (Fig. 3D). For examples of VI Trimming results, refer to Fig. 4.
Fig. 3. Soft overlaps and variability indices before (A, B) and after (C, D) VI Trimming
In addition to calculating the variability index for all images, we found it specifically for those that were post-processed with VI Trimming (221 images). For these images, the median VI decreased from 3.63 to 2.01. More significantly, the number of outliers in this case decreased from 28 to 18, indicating that VI Trimming is useful for minimizing the number of highly variable outliers. For a summary of the results, see Table 1.
Table 1. Summary of results before and after post-processing
                                        Algorithm only    Algorithm plus VI Trimming
Soft Overlap                            0.49              0.52
A ratio for 0.25-thresholded p-map      0.48              0.55
Variability Index                       4.50              2.61
4 Discussion Our results indicate that decision tree classifiers trained on Gabor, Markov, and intensity image features can be used to produce soft segmentations of lung nodules.
The classifier successfully distinguished nodules from adjacent blood vessels (Fig. 4 E-F) in the majority of cases, but it failed to differentiate between chest wall and nodule pixels (Fig. 4 A, C). The best way to improve upon these results is to include lung segmentation in the pre-processing step, which we plan to do in future work. Once lung segmentation is performed, we will also run our algorithm on all pixels within the lung, instead of only ones inside the 8-pixel offset box.
Fig. 4. Soft segmentations before and after VI Trimming. (Yellow – 0.25, blue – 0.5, green – 0.75, red – 1). (A) A nodule next to the chest wall before post-processing. The classifier could not distinguish the chest wall from the nodule. (B) The same nodule after VI Trimming. The chest wall has been removed. (C) VI Trimming was unable to correct the classifier’s errors in this case because the nodule is too close to the chest wall. (D) The VI Trimming had no effect on this nodule because the classifier produced a good segmentation. (E-F) The classifier is good at distinguishing nodules from blood vessels, even without VI Trimming.
We compared our soft segmentation results against another soft lung nodule segmentation algorithm. Van Ginneken's region-growing algorithm produced a mean soft overlap of 0.68 for 23 nodules. Although the soft overlaps for these nodules are higher than ours, the algorithm described in van Ginneken's paper included a pre-processing lung segmentation step, which makes the comparison more difficult. Additionally, Kubota et al.'s work on nodule segmentation shows a decrease in performance in moving from the first to the second LIDC dataset, on which they obtained 0.69 and 0.59 mean overlaps, respectively[15]. Kubota et al. conclude that this is due to the second dataset's thicker slices and some subtle nodules, which also complicates the comparison of our results with those of van Ginneken. Due to the possible bias of selecting training pixels from the same nodules that the testing set came from, we also applied our algorithm to four additional nodules from the most recent LIDC dataset. These nodules were not involved in producing the classifier, so they ensured that there was no such bias when they were segmented. We obtained an SO of 0.44 for these nodules. This may have resulted from the higher variability in scanning methods in the most recent LIDC, which results in more variable data. In the future, we plan to run our algorithm on this version of the LIDC and investigate the change in results.
References 1. Ko, J.P., Betke, M.: Chest CT: automated nodule detection and assessment of change over time–preliminary experience. Radiology 218, 267–273 (2001) 2. Armato, S.G., McLennan, G., McNitt-Gray, M.F., Meyer, C.R., Yankelevitz, D., Aberle, D.R., Henschke, C.I., Hoffman, E.A., Kazerooni, E.A., MacMahon, H., Reeves, A.P., Croft, B.Y., Clarke, L.P.: Lung image database consortium: developing a resource for the medical imaging research community. Radiology 232, 739–748 (2004) 3. Armato, S.G., Roberts, R.Y., Kocherginsky, M., Aberle, D.R., Kazerooni, E.A., Macmahon, H., van Beek, E.J., Yankelevitz, D., McLennan, G., McNitt-Gray, M.F., Meyer, C.R., Reeves, A.P., Caligiuri, P., Quint, L.E., Sundaram, B., Croft, B.Y., Clarke, L.P.: Assessment of radiologist performance in the detection of lung nodules: dependence on the definition of “truth”. Acad. Radiol. 16, 28–38 (2009) 4. Siena, S., Zinoveva, O., Raicu, D., Furst, J., Armato III, S.: A shape-dependent variability metric for evaluating panel segmentations with a case study on LIDC. In: Karssemeijer, N., Summers, R.M. (eds.), 1st edn., pp. 762416–762418. SPIE, San Diego (2010) 5. Liu, S., Li, J.: Automatic medical image segmentation using gradient and intensity combined level set method. In: Conf. Proc. IEEE Eng. Med. Biol. Soc., vol. 1, pp. 3118–3121 (2006) 6. Dehmeshki, J., Amin, H., Valdivieso, M., Ye, X.: Segmentation of pulmonary nodules in thoracic CT scans: a region growing approach. IEEE Trans. Med. Imaging 27, 467–480 (2008) 7. Xu, N., Ahuja, N., Bansal, R.: Automated lung nodule segmentation using dynamic programming and EM-based classification. In: Sonka, M., Fitzpatrick, J.M. (eds.), 1st edn., pp. 666–676. SPIE, San Diego (2002) 8. Wang, Q., Song, E., Jin, R., Han, P., Wang, X., Zhou, Y., Zeng, J.: Segmentation of lung nodules in computed tomography images using dynamic programming and multidirection fusion techniques. Acad. Radiol. 16, 678–688 (2009)
9. Wang, J., Engelmann, R., Li, Q.: Computer-aided diagnosis: a 3D segmentation method for lung nodules in CT images by use of a spiral-scanning technique. In: Giger, M.L., Karssemeijer, N. (eds.), 1st edn., pp. 69151–69158. SPIE, San Diego (2008) 10. Tang, H., Dillenseger, J.L., Bao, X.D., Luo, L.M.: A vectorial image soft segmentation method based on neighborhood weighted Gaussian mixture model. Comput. Med. Imaging Graph 33, 644–650 (2009) 11. Hongmin, C., Verma, R., Yangming, O., Seung-koo, L., Melhem, E.R., Davatzikos, C.: Probabilistic segmentation of brain tumors based on multi-modality magnetic resonance images. In: 4th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, ISBI 2007, pp. 600–603 (2007) 12. van Ginneken, B.: Supervised probabilistic segmentation of pulmonary nodules in CT scans. Med. Image Comput. Comput. Assist. Interv. 9, 912–919 (2006) 13. Lam, M.O., Disney, T., Raicu, D.S., Furst, J., Channin, D.S.: BRISC-an open source pulmonary nodule image retrieval framework. J. Digit Imaging 20(suppl. 1), 63–71 (2007) 14. Savitzky, A., Golay, M.J.E.: Smoothing and Differentiation of Data by Simplified Least Squares Procedures. Analytical Chemistry 36, 1627–1639 (1964) 15. Kubota, T., Jerebko, A.K., Dewan, M., Salganicoff, M., Krishnan, A.: Segmentation of pulmonary nodules of various densities with morphological approaches and convexity models. Med. Image Analysis 15, 133–154 (2011)
Generation of 3D Digital Phantoms of Colon Tissue
David Svoboda, Ondřej Homola, and Stanislav Stejskal
Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Botanická 68a, 602 00 Brno, Czech Republic
[email protected]
Abstract. Although the segmentation of biomedical image data has received a great deal of attention for many years, this crucial task still faces the problem of validating the correctness of the obtained results. Especially in the case of optical microscopy, the ground truth (GT), which is a very important tool for the validation of image processing algorithms, is not available. We have developed a toolkit that generates fully 3D digital phantoms representing the structure of the studied biological objects. While former papers concentrated on the modelling of isolated cells (such as blood cells), this work focuses on a representative tissue image type, namely human colon tissue. The phantom image can be submitted to an engine that simulates the image acquisition process. Such a synthetic image can be further processed, e.g. deconvolved or segmented. The results can be compared with the GT derived from the digital phantom, and the quality of the applied algorithm can be measured. Keywords: Digital phantom, Colon tissue, Simulation, Haralick texture features.
1 Introduction
Present biomedical research relies deeply on computer-based evaluation, as the vast majority of the commonly used acquisition techniques produce large amounts of numerical or visual output. The raw image data cannot be used directly, since they are typically affected by imperfections of the acquisition devices. In optical microscopy, for example, the optical system blurs the output image data, and the subsequent use of a digital camera causes the data to be affected by various types of noise. In order to diminish the influence of these imperfections, some preprocessing is required. This mainly includes simple image enhancement or deconvolution techniques. Nevertheless, the use of any preprocessing algorithm raises the question of whether its output is really enhanced and, if so, how much it is improved. The same issue arises when we submit the already preprocessed image data to a selected segmentation algorithm. In the case of microscopic specimens, no one knows how the original data should appear. Therefore, it is really difficult to judge whether the segmentation
results are correct or not. On the other hand, if the GT image data were available, we could compare the outputs of the given algorithm with the provided GT and subsequently validate the quality of the algorithm. The task of obtaining the GT can be solved if a digital phantom, i.e. an estimated discretized model of the observed objects, is available. In the following text, a brief survey of the most common digital phantoms and their generators in optical microscopy is given. Concerning the simplest point-like objects, such as FISH [1] spots, a toolbox for generating large sets of spots was proposed by Grigoryan et al. [2]. Each spot was represented by a sphere randomly placed in 3D space. Two individual spheres were allowed to overlap, but only under given conditions. The issue of virtual spot generation was also addressed by Manders et al. [3], who verified a novel region-growing segmentation algorithm on a large set of Gaussian-like 3D objects arranged in a grid. When creating cell-like or nucleus-like objects, the most popular shapes are employed: circles and ellipses in 2D and spheres and ellipsoids in 3D space. In order to check the quality of their new cell-nuclei segmentation algorithm, Lockett et al. [4] generated a set of artificial spatial objects in the shape of a curved sphere, ellipsoid, disc, banana, satellite disc, and dumbbell. Lehmussola et al. [5] designed a more complex simulator capable of producing large cell populations. However, the toolbox was designed for 2D images only, and the extension to higher dimensions was not straightforward. Later, Svoboda et al. [6] extended this model to enable manipulation of fully 3D image data. Up to now, the majority of authors have focused on the design of spots or nuclei. Recently, Zhao et al. [7] designed an algorithm for generating a whole 2D digital cell phantom. Here, the whole cell, including the nucleus, proteins and cell membrane, was modelled using machine learning from real data, which is a different approach compared to other studies that employ basic shapes and their deformation. In this paper, we introduce a method for the generation of fully 3D digital phantoms of human colon tissue. The generated data are submitted to a simulated optical system and a virtual camera, and the results are compared with real image data to guarantee the plausibility of the phantoms.
2 Reference Data
2.1 Biological Material
Before developing any digital phantom generator, one should have access to a sufficiently large database of reference real image data. Such a source of information is very important, as digital phantom generation is pointless without knowledge of the nature and structure of the images of real objects. The samples of colon tissue we used [8] were obtained from 18 patients suffering from adenocarcinoma (a very common type of cancer) at the Masaryk Memorial Cancer Institute, Brno, and the University Hospital, Brno, Czech Republic. Some samples were blurred or overexposed; hence, we used only 44 of the total of 72 images in our study. Informed consent was obtained from all patients prior to enrollment in the study. Frozen tissue was sliced using a cryo-cut into 15-micrometer-thick tissue sections. Data were anonymized before computer processing.
Fig. 1. Representatives of image data of different origin: (a) real, (b) synthetic. Each 3D figure consists of three individual images: the top-left image contains a selected xy-slice, the top-right image corresponds to a selected yz-slice, and the bottom one depicts a selected xz-slice. The three mutually orthogonal slice planes are shown with ticks.
2.2 Equipment
Image acquisition was performed using a Zeiss S100 microscope (Carl Zeiss, Göttingen, Germany) with a CARV confocal unit (Atto Instruments, Thornwood, USA). Images were captured using a Micromax 1300-YHS camera with a cooled CCD chip (Princeton Instruments, Acton, USA). The camera resolution was 1300×1030 pixels, and the image voxel size was 124×124×200 nm. For the generation of the results and the subsequent measurements, a PC running Linux was employed, equipped with an Intel Xeon Quad Core 2.83 GHz processor and 32 GB RAM. The programming was done in C/C++.
3 Method
3.1 Real Shape Analysis
In the real 3D image data depicted in Fig. 1(a), one can clearly see one or more elliptical clusters composed of nuclei that are usually tightly pressed against each other. These clusters correspond to the cross-sections of villi. Their shape does not form an exact ellipse and may be slightly irregular. The nuclei lie mainly along the ellipse perimeter, while the cluster interior is empty. On the other hand, there are some nuclei in the area among the clusters.
3.2 Synthetic Shape Synthesis
Without loss of generality, we will focus on the generation of one cell cluster and its neighbourhood only. We further presume that the basic shape of a cluster may be described by a toroid. Let a toroid be given whose diameter corresponds to the mean size of a villus and which is obtained by revolving a parallelogram or a trapezoid. We generate a set of points within this toroid; each such point will become the centre of gravity of one nucleus. To ensure some irregularity in the initial distribution of the points, their 3D positions are perturbed by a noise function [9] that modifies the distance of each point from the toroid centre. Aside from the clusters, there is also another set of nuclei, corresponding to those that lie outside the clusters; their positions are generated randomly. As soon as the points are spread over the dedicated space, a 3D Voronoi diagram [10] is generated from these points. For the generation of the Voronoi regions
Fig. 2. The main steps (each view is a 3D image including three mutually orthogonal projections) during the generation of a digital phantom of colon tissue: (a) basic mask created from the Voronoi diagram, (b) slightly deformed mask obtained after the application of a fast level-set based deformation to the previous image, (c) Euclidean distance transform applied to each nucleus, (d) nucleus membrane defined as a distance transform combined with Perlin noise, (e) chromatin structure, (f) final digital phantom as a composition of images (d) and (e).
the Qhull package (http://www.qhull.org/) can be used. Too large regions, too small regions, and regions touching the boundary of the dedicated space are eliminated. Further, the vector-based model of the Voronoi diagram is converted into a voxel-based model, i.e. the model is rasterized. Too sharp Voronoi regions are slightly smoothed. Each Voronoi region is associated with one nucleus. The initial distribution of the generated nuclei is depicted in Fig. 2(a). As the gaps between the individual nuclei in the initial distribution are too uniform, a certain level of deformation [11] is applied (see the result in Fig. 2(b)). This way, we obtain the mask of each nucleus. It can be considered as a GT for image segmentation algorithms, for example.
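As an illustration of the point-generation and Voronoi steps, the following Python sketch places nucleus centres inside a noisy toroidal ring, scatters a few free nuclei around it, and feeds the points to Qhull via SciPy; all sizes and counts are placeholder values, Gaussian perturbation stands in for the Perlin noise used by the toolkit, and the subsequent rasterization, filtering, and deformation steps are not reproduced.

```python
import numpy as np
from scipy.spatial import Voronoi   # SciPy wraps the Qhull library

def sample_nucleus_centres(n_cluster=200, n_free=30, ring_radius=60.0,
                           tube_radius=12.0, z_extent=40.0, noise_amp=5.0,
                           rng=None):
    """Sketch of the point-generation step: nucleus centres are placed
    inside a toroidal ring (the villus cross-section) with a noisy radial
    offset, plus a few free nuclei scattered between the clusters.
    All sizes are placeholder values in voxels, not the toolkit's defaults.
    """
    rng = np.random.default_rng(rng)

    # Points inside the toroid: angle around the ring, offset inside the tube
    theta = rng.uniform(0.0, 2.0 * np.pi, n_cluster)
    radial = ring_radius + rng.uniform(-tube_radius, tube_radius, n_cluster)
    radial += noise_amp * rng.standard_normal(n_cluster)   # stand-in for Perlin noise
    z = rng.uniform(-z_extent / 2.0, z_extent / 2.0, n_cluster)
    cluster_pts = np.column_stack((radial * np.cos(theta),
                                   radial * np.sin(theta), z))

    # Free nuclei scattered in the surrounding space, kept inside the slab
    free_pts = rng.uniform(-1.5 * ring_radius, 1.5 * ring_radius, (n_free, 3))
    free_pts[:, 2] *= z_extent / (3.0 * ring_radius)

    return np.vstack((cluster_pts, free_pts))

# The resulting points then seed a 3D Voronoi diagram before rasterization
vor = Voronoi(sample_nucleus_centres(rng=0))
```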
3.3 Real Structure Analysis
Besides the shape, the internal structure of each nucleus reveals important information about the cell activity. In each stage of the cell cycle, chromatin has different properties and hence, when stained, it looks different. Namely, some parts of the nucleus look either brighter or darker. A deeper analysis of individual nuclei reveals that the majority of the chromatin is concentrated close to the nucleus border. That is why we can more clearly see the membrane regions than the nucleus interior. There is a certain gradient of chromatin concentration, starting from low at the nucleus centroid to high at the membrane regions. The nucleus interior is therefore not empty either: the chromatin is also apparent here, but it is not as condensed.
3.4 Synthetic Structure Synthesis
Utilizing the knowledge from the previous paragraph, the texture representing the chromatin structure can be generated from the distance transform of the mask of each individual nucleus (see Fig. 2(c)). This way, the voxels that are close to the nucleus boundary become brighter, while the others tend to be darker or even black. Further, as the structure around the nucleus border is not expected to be of a regular width, the intensity of the border pixels is weighted by a Perlin noise function [9] (see the result in Fig. 2(d)). Perlin noise (see Fig. 2(e)) is also used for the generation of the nucleus interior. Finally, the interior texture and the border texture are composed to form the final structure (see Fig. 2(f)).
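A rough sketch of this texture-synthesis idea is given below: the Euclidean distance transform of a nucleus mask is inverted so that voxels near the border are bright, and smoothed random fields (used here as a cheap stand-in for the Perlin noise of the toolkit) modulate the rim and fill the interior; the weights and smoothing scales are illustrative only.

```python
import numpy as np
from scipy import ndimage

def chromatin_texture(mask, rim_strength=0.7, interior_strength=0.3, rng=None):
    """Sketch of the texture-synthesis step for one nucleus.

    mask : 2D or 3D boolean array, the nucleus mask produced earlier.
    """
    rng = np.random.default_rng(rng)

    # Distance from the background: small near the border, large in the centre
    dist = ndimage.distance_transform_edt(mask)
    if dist.max() > 0:
        dist = dist / dist.max()
    rim = (1.0 - dist) * mask          # bright close to the membrane

    # Smooth pseudo-random fields as a substitute for Perlin noise
    def smooth_noise(shape, sigma=3.0):
        return ndimage.gaussian_filter(rng.random(shape), sigma)

    rim_noise = smooth_noise(mask.shape)
    interior_noise = smooth_noise(mask.shape)

    # Compose the irregular rim with the faint interior chromatin
    texture = (rim_strength * rim * rim_noise
               + interior_strength * mask * interior_noise)
    return texture / texture.max() if texture.max() > 0 else texture
```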
4 Results
4.1 Phantom Generation
The whole generation process is driven by a set of parameters. These parameters exactly define the shape and texture of the generated objects. In this study, the total number of parameters defining the phantom structure was 41. They were used to define the behaviour of the employed algorithms (Voronoi diagram creation, distance transform, Perlin noise generation, fast level-set methods, ...), shape characteristics (amount of nuclei outside the clusters, affine transform, ...),
and texture characteristics (photobleaching effect, chromatin granularity, . . . ). For more details concerning the parameter settings, we recommend downloading the source codes of this method from http://cbia.fi.muni.cz/simulator/, where the domain of each individual parameter is properly defined and well documented.
(Q-Q plot panels: (a) 2nd central moment, (b) 3rd central moment, (c) 4th central moment, (d) 5th central moment, (e) 6th central moment, (f) entropy; each panel plots synthetic data against real data.)
Fig. 3. Comparison of descriptors computed from 44 real and synthetic images of colon tissue. Quantile-quantile plots illustrate whether the measured datasets come from populations with similar distributions. Generally, if the two sets come from a population with the same distribution, the points should fall approximately along the reference line.
Regarding the time requirements, each generated image (900×800×80 voxels) was created in approximately 10 minutes. The majority of the time was spent on the execution of the two pivotal tasks: the conversion of the vector-based Voronoi model into the digitized voxel-based model and, subsequently, the fast level-set based deformation of the image data. The optimization of these two algorithms might be one of the objectives of our further research. In total, we generated 44 synthetic images, i.e. the amount of real and synthetic image data was identical. Afterwards, the generated digital phantoms were submitted to the simulation toolbox [6] that imitates the behavior of the optical system and the digital camera. This way, we obtained a set of synthetic images (see one representative in Fig. 1(b)) that were further submitted to selected measurements.
4.2 Measurements
Due to the large number of nuclei to be processed (each of the 44 synthetic 3D images contained at least 200 nuclei), we measured only global texture characteristics. In this sense, each individual image was submitted to a selected sequence of 18 measurements (image entropy, five central moments, and twelve 3D Haralick texture features [12]), which produced a feature vector. Analogously, we performed the same number of measurements over the set of all real images (see one representative in Fig. 1(a)). The feature vectors were further submitted to a suitable decision tool that checks the similarity of these vectors. In this work, we employed a common approach based on so-called Q-Q plots [6,7]: if the two sets come from a population with the same distribution, their quantiles should fall approximately along the 45-degree reference line. The greater the deviation from this reference line, the greater the evidence that the two data sets come from populations with different distributions. The Q-Q plots in Fig. 3 show that the feature vectors for both real and synthetic image data follow similar distributions. Concerning the 3D Haralick texture features, which are not depicted, 8 of the 12 plots exhibited similarity between the individual features of the real and synthetic images. Searching for the dissimilarity between the real and the synthetic data that caused the imperfection of some Q-Q plots will be the aim of our further research.
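A sketch of this descriptor comparison is shown below: it computes a few global descriptors (entropy and central moments) for two image sets and compares their distributions quantile against quantile. The descriptor choice and histogram binning are simplified assumptions; the paper's full feature set also includes 3D Haralick features, which are omitted here.

```python
import numpy as np
from scipy.stats import moment, entropy
import matplotlib.pyplot as plt

def descriptors(img):
    """Global texture descriptors of one image: entropy and central moments."""
    hist, _ = np.histogram(img, bins=256, density=True)
    return {
        "entropy": entropy(hist + 1e-12),
        **{f"moment{k}": moment(img.ravel(), moment=k) for k in range(2, 7)},
    }

def qq_compare(real_values, synthetic_values, label, ax):
    """Quantile-quantile comparison of one descriptor over two image sets."""
    q = np.linspace(0.01, 0.99, 50)
    rq = np.quantile(real_values, q)
    sq = np.quantile(synthetic_values, q)
    ax.plot(rq, sq, "o", ms=3)
    lims = [min(rq.min(), sq.min()), max(rq.max(), sq.max())]
    ax.plot(lims, lims, "k--")          # 45-degree reference line
    ax.set(title=label, xlabel="Real data", ylabel="Synthetic data")

# real_imgs / synth_imgs: lists of 3D numpy arrays (loaded elsewhere)
# feats_real = [descriptors(im) for im in real_imgs]
# feats_syn  = [descriptors(im) for im in synth_imgs]
# fig, ax = plt.subplots()
# qq_compare([f["moment2"] for f in feats_real],
#            [f["moment2"] for f in feats_syn], "2nd central moment", ax)
```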
5 Conclusion
The method presented in this paper offers a guideline on how to generate 3D digital phantoms of human colon tissue. The implementation, including the documentation and the source codes, is freely available under the GNU GPL at http://cbia.fi.muni.cz/simulator/. The generated digital phantoms are a useful source of ground truth image data, which can be used in the verification of image enhancement or image segmentation algorithms. Acknowledgement. This work was supported by the Ministry of Education of the Czech Republic (Projects No. LC535 and No. 2B06052).
References
1. Netten, H., Young, I.T., van Vliet, J., Tanke, H.J., Vrolijk, H., Sloos, W.C.R.: FISH and chips: automation of fluorescent dot counting in interphase cell nuclei. Cytometry 28, 1–10 (1997)
2. Grigoryan, A.M., Hostetter, G., Kallioniemi, O., Dougherty, E.R.: Simulation toolbox for 3D-FISH spot-counting algorithms. Real-Time Imaging 8(3), 203–212 (2002)
3. Manders, E.M.M., Hoebe, R., Strackee, J., Vossepoel, A.M., Aten, J.A.: Largest contour segmentation: A tool for the localization of spots in confocal images. Cytometry 23, 15–21 (1996)
4. Lockett, S.J., Sudar, D., Thompson, C.T., Pinkel, D., Gray, J.W.: Efficient, interactive, and three-dimensional segmentation of cell nuclei in thick tissue sections. Cytometry 31, 275–286 (1998)
5. Lehmussola, A., Ruusuvuori, P., Selinummi, J., Huttunen, H., Yli-Harja, O.: Computational framework for simulating fluorescence microscope images with cell populations. IEEE Trans. Med. Imaging 26(7), 1010–1016 (2007)
6. Svoboda, D., Kozubek, M., Stejskal, S.: Generation of digital phantoms of cell nuclei and simulation of image formation in 3D image cytometry. Cytometry, Part A 75A(6), 494–509 (2009)
7. Zhao, T., Murphy, R.F.: Automated learning of generative models for subcellular location: Building blocks for systems biology. Cytometry Part A 71A(12), 978–990 (2007)
8. Jansová, E., Koutná, I., Krontorád, P., Svoboda, Z., Křivánková, S., Zaloudík, J., Kozubek, M., Kozubek, S.: Comparative transcriptome maps: a new approach to the diagnosis of colorectal carcinoma patients using cDNA microarrays. Clinical Genetics 68(3), 218–227 (2006)
9. Perlin, K.: An image synthesizer. In: SIGGRAPH 1985: Proceedings of the 12th Annual Conference on Computer Graphics and Interactive Techniques, pp. 287–296. ACM Press, New York (1985)
10. Aurenhammer, F.: Voronoi diagrams – a survey of a fundamental geometric data structure. ACM Computing Surveys 23(3), 345–405 (1991)
11. Nilsson, B., Heyden, A.: A fast algorithm for level set-like active contours. Pattern Recogn. Lett. 24(9-10), 1331–1337 (2003)
12. Tesař, L., Smutek, D., Shimizu, A., Kobatake, H.: 3D extension of Haralick texture features for medical image analysis. In: SPPR 2007: Proceedings of the Fourth Conference on IASTED International Conference, pp. 350–355. ACTA Press (2007)
Using the Pupillary Reflex as a Diabetes Occurrence Screening Aid Tool through Neural Networks Vitor Yano, Giselle Ferrari, and Alessandro Zimmer Federal University of Parana, Department of Electrical Engineering Centro Politécnico, CEP 81531-970, Curitiba, PR, Brazil
[email protected], {ferrari,zimmer}@eletrica.ufpr.br
Abstract. Diabetes mellitus is a disease that may cause dysfunctions in the sympathetic and parasympathetic nervous system. Therefore, the pupillary reflex of diabetic patients shows characteristics that distinguish them from healthy people, such as the pupil radius and contraction time. These features can be measured noninvasively by dynamic pupillometry, and an analysis of the data can be used to check for the existence of a neuropathy. In this paper, we propose the use of artificial neural networks to help screen for the occurrence of diabetes through the dynamic characteristics of the pupil, with successful results. Keywords: Neural networks, Pattern recognition, Diabetes, Pupillometry.
1 Introduction
Diabetes mellitus is a very common endocrine disease, being among those that cause the most disabilities and deaths. It is caused by insufficient secretion of insulin or by inefficient action of this hormone, leading to a dysfunction of the body metabolism [1]. If not properly treated, the syndrome can cause serious complications, such as myocardial infarction, stroke, kidney failure and peripheral nervous system lesions [2]. Although there is no cure, if properly detected and treated, patients can have a normal life. Among the chronic complications caused by diabetes, there are abnormalities known as diabetic neuropathies, which affect the autonomic nervous system and peripheral nerves [3]. As these dysfunctions alter the sympathetic and parasympathetic activity, the assessment of the pupillary light reflex has been used as a noninvasive clinical test of neuropathy [4-6], since this response is based on the balance of the nervous system. The pupil size is maintained by the sympathetic system, while the variation in reaction to light is due to parasympathetic activity. Therefore, by analyzing the behavior of the pupil stimulated by light, it is possible to evaluate the characteristics of the autonomic nervous system of the person. Ferrari demonstrates in [7] that dynamic pupillometry, a technique that consists of measuring the variation of the pupil diameter in response to a light stimulus of fixed intensity and duration, can be applied to the detection and assessment of peripheral neuropathies. That work developed and used a pupillometer, a system for
the stimulation and recording of the pupillary reflex. Through image processing of the captured video frames, the diameter of the pupil was measured during the period of contraction. The data were statistically analyzed using one-way ANOVA and post hoc Bonferroni tests, showing significant differences in the characteristics of the pupillary reflex of diabetic neuropathy patients compared to healthy people. Dütsch et al. [5] show that pupillary dysfunction occurs in patients with diabetes independently of cardiac autonomic and peripheral somatic neuropathies. The studies of Pittasch et al. [3] and Cahill et al. [6] demonstrate that the pupil size is smaller in diabetics, even in those without clinical evidence of neuropathy. The already proven relationship between the disease and the pupil characteristics [3-7] can be used in the development of an automatic screening aid system, which requires a fast and robust classification method based on pattern recognition. In this paper, neural networks have been chosen because of their properties of nonlinearity, adaptivity [8], quick response and simple implementation, which make them an appropriate tool for an embedded solution. As pupillometry has already been used in clinical tests for the assessment of neuropathies in diabetic patients [4-7], this work verifies the use of this tool for the discrimination between diabetic and healthy subjects.
2 Method
2.1 Videos
For this study, the measurement of the pupil response to a light stimulus was done through the processing of video frames. The video sequences used were captured in the work of Ferrari [7], after approval by the Research Ethics Committee of the Pontifical Catholic University of Parana (CEP-PUCPR) and with the volunteers' agreement, and contain images of 46 healthy and 39 diabetic people, of which 20 were without and 19 with peripheral neuropathy. The videos are all in grayscale. They have a frame rate of 29.97 frames per second, a mean duration of 6 seconds, and dimensions of 320 pixels in width and 240 pixels in height. The duration of the light stimulus applied in the recordings was 10 ms, which is less than the period of a frame capture. That means it can affect at most one frame per video sequence.
2.2 Dynamic Pupillometry
The process of pupillometry in the videos included a preprocessing step, in which artifacts were removed from the images, and then a segmentation step, used for determining the pupil boundaries and their size. After that, the data had to be processed in order to remove outliers generated by errors in the image segmentation. Finally, some features were extracted for the classification process. Fig. 1 shows the block diagram of the whole process with the result after each stage. The procedure performed in the first steps is the same as that used in a previous work of the authors [9] and is briefly described below.
(Fig. 1 block diagram stages: artifacts removal → image segmentation → pupil measurement → data processing → feature extraction, producing features f1–f4 from the pupil radius over time.)
Fig. 1. Block diagram of the dynamic pupillometry process
Artifacts Removal. Specular reflections of the light sources used for illumination of the videos may hinder the image segmentation, since it is based on edge detection. In order to improve accuracy and speed up the next step, these artifacts were detected by comparing the intensity of each pixel with that of its neighbors located at a fixed distance D. This parameter was determined according to the radius of the artifacts, which in this case was 6 pixels. If a neighbor was 1.5 times brighter than the original pixel, it was considered to belong to an artifact, and all pixels at a distance of less than D pixels from it were replaced by the darker pixels. Image Segmentation. As working with videos requires fast processing, the segmentation method proposed by Yuan, Xu and Lin [10] was used, which is very simple and does not demand great computational effort. This algorithm is based on the principle that three non-collinear and non-coincident points define a unique circumference. It starts by searching for a pixel inside the pupil. For this, it uses a gray value accumulator operator to find the coordinates (x0, y0) in which the result is the minimum value, i.e., in the darkest region of the image. From this point, it scans the image horizontally with an edge detector, searching for one pixel at the left border (xL, yL) and one at the right border of the pupil (xR, yR). Once they are found, the image is scanned downwards starting from coordinates (xL + 20, y0) to find the third point (xD, yD). Fig. 2 shows the points used to determine the circumference.
Fig. 2. Points used to determine pupil circumference [9]
Pupil Measurement. Using Equations 1, 2 and 3, the coordinates of the pupil center (xP, yP) and the pupil radius are calculated:

$x_P = \frac{x_L + x_R}{2}$  (1)

$y_P = \frac{20(x_R + x_L) - 400 + y_P^2 - y_D^2}{2(y_R - y_D)}$  (2)

$R_P = \sqrt{(x_P - x_D)^2 + (y_P - y_D)^2}$  (3)
Each video frame provides a sample of the pupil radius, so one video shot originates a data set containing the variation of the pupil size over time.
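For illustration, the following Python sketch recovers the pupil center and radius from the three detected border points. It uses the generic circumcircle of three non-collinear points rather than the paper's specific closed-form expressions, so it should be read as an equivalent formulation under that assumption.

```python
import numpy as np

def circle_from_three_points(p_left, p_right, p_down):
    """Center (xP, yP) and radius RP of the circle through three points."""
    (x1, y1), (x2, y2), (x3, y3) = p_left, p_right, p_down
    # Solve the two perpendicular-bisector equations A [xP, yP]^T = b.
    A = np.array([[x2 - x1, y2 - y1],
                  [x3 - x1, y3 - y1]], dtype=float)
    b = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                        x3**2 - x1**2 + y3**2 - y1**2])
    xP, yP = np.linalg.solve(A, b)
    RP = np.hypot(xP - x3, yP - y3)
    return xP, yP, RP

# example with hypothetical border points from one frame
xP, yP, RP = circle_from_three_points((110, 120), (170, 120), (130, 155))
```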
Data Processing. In some frames there were errors during the pupil segmentation, generating outlier data. This occurred, for example, when the eye blinked. The incorrect values were removed and replaced by the mean of the previous and next ones. After that, a Gaussian filter with the kernel given by Equation 4 was applied in order to smooth the signal. For this case, it was verified that a kernel of seven elements and a resolution σ = 5 was enough to remove the noise in the signal without significantly affecting the pupillary reflex information. The noise was caused basically by different rounding errors between consecutive frames.

$G(i) = \frac{1}{\sigma} e^{-\frac{i^2}{\sigma^2}}$  (4)
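A minimal sketch of this data-processing step is given below, assuming the radius signal is a 1-D NumPy array and that outliers are flagged by a simple deviation test (our assumption, not the paper's exact criterion). The kernel follows Equation 4 with seven elements and σ = 5, normalized to unit sum so that the signal amplitude is preserved.

```python
import numpy as np

def clean_and_smooth(radius, sigma=5.0, half_width=3, z_thresh=3.0):
    """Replace outlier samples and smooth the pupil-radius signal."""
    r = radius.astype(float).copy()
    # crude outlier flagging (assumption): large deviation from the median
    dev = np.abs(r - np.median(r))
    bad = dev > z_thresh * (np.median(dev) + 1e-9)
    for i in np.where(bad)[0]:
        prev_i = max(i - 1, 0)
        next_i = min(i + 1, len(r) - 1)
        r[i] = 0.5 * (r[prev_i] + r[next_i])     # mean of previous and next samples
    # 7-element Gaussian kernel as in Eq. (4), normalized to sum to 1
    i = np.arange(-half_width, half_width + 1)
    g = np.exp(-(i**2) / sigma**2) / sigma
    g /= g.sum()
    return np.convolve(r, g, mode="same")
```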
In order to shorten the video sizes, the pupil size samples before the light stimulus were discarded.
Features Extraction. Based on the differences observed in the gathered data [9], four features were selected for classification (a sketch of their extraction follows the list):
1. the initial pupil radius before the light stimulus;
2. the minimum pupil radius;
3. the time from the light stimulus until maximum contraction; and
4. the pupil radius 2 seconds after the light stimulus.
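The sketch below extracts these four features from a cleaned radius signal, assuming the sample index of the light stimulus and the frame rate are known; both are passed in explicitly since the paper derives them from the recording setup.

```python
import numpy as np

def extract_features(radius, stimulus_idx, fps=29.97):
    """Four pupillary-reflex features (f1-f4) from one video's radius signal."""
    f1 = radius[:stimulus_idx + 1].mean()          # radius before the stimulus (assumption: mean)
    post = radius[stimulus_idx:]
    f2 = post.min()                                # minimum radius (maximum contraction)
    f3 = np.argmin(post) / fps                     # time to maximum contraction (s)
    f4 = post[min(int(round(2 * fps)), len(post) - 1)]  # radius 2 s after the stimulus
    return np.array([f1, f2, f3, f4])
```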
2.3 Classification The four features were normalized through Equation 5, where fi represents feature i, from 1 to 4, and n is the number of the video, from 1 to 85. Thus, all values were mapped between 0 and 1 to be used as inputs in a neural network.
$f_i(n) = \frac{f_i(n) - \min(f_i)}{\max(f_i) - \min(f_i)}$  (5)
The classification was done using a multi-layer perceptron (MLP) neural network, which is characterized by the presence of intermediate hidden layers that allow the solution of nonlinear systems [8]. The target values defined two classes: 0 for healthy and 1 for diabetic individuals, which means only one neuron was necessary in the output layer. One hidden layer with N neurons was used, as illustrated in Fig. 3. The activation function of the hidden layer's neurons was a logarithmic sigmoid [8], and in the output layer, the identity function was used. In order to find the best number of hidden neurons, the network was trained using different values of N, starting from 10 up to 50. The classification errors for each case were measured to select the situation in which they were minimal. Training of the network was supervised with the back-propagation algorithm and the Levenberg-Marquardt optimization method [11] for updating the weight and bias values, which were randomly initialized between -1 and 1. The method is based on Gauss-Newton's, and is used to find the minimum of the sum of squares of nonlinear functions.
Fig. 3. Neural network used for classification
The network was trained by minimizing the cost function ξ(w), in this case given by the mean of squared errors, as shown in Equation 6, where w is the vector of weights and biases, T(n) is the target of sample n, f(n) is the vector of inputs of sample n, k is the total number of samples and S is the output of the network.
$\xi(w) = \frac{1}{2k} \sum_{n=1}^{k} \left[ T(n) - S(f(n), w) \right]^2$  (6)
The optimum adjustment Δw of the vector w for each iteration is given by Equation 7. To reduce the computational cost of the complex calculation of the Hessian matrix, especially when the dimensionality of the vector w is high, H is approximated by Equation 8, where J is the Jacobian of the function S(f(n), w). An adaptive regularizing parameter μ is used to keep the sum matrix positive definite, I is the identity matrix with the same dimensions as H, and g is the gradient of the function ξ(w), given by Equation 9.
$\Delta w = [H + \mu I]^{-1} g$  (7)

$H \approx J^T J = \left[ \frac{\partial S(f(n), w)}{\partial w} \right]^T \left[ \frac{\partial S(f(n), w)}{\partial w} \right]$  (8)

$g = \frac{\partial \xi(w)}{\partial w}$  (9)
It was considered that the back-propagation algorithm had converged (stopping criterion) when the Euclidean norm of the gradient vector g was less than $10^{-10}$.
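As a rough illustration of this classification stage, the sketch below normalizes the four features and trains a small single-hidden-layer MLP with scikit-learn. Note that scikit-learn's MLPClassifier does not offer Levenberg-Marquardt optimization, so a quasi-Newton solver (L-BFGS) is used here as a stand-in, and the 80/20 split and hidden-layer size are taken from the experiments described below.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# X: (85, 4) matrix of features f1-f4, y: 0 = healthy, 1 = diabetic (loaded elsewhere)
def train_pupil_classifier(X, y, n_hidden=50, seed=0):
    # min-max normalization of each feature to [0, 1], as in Eq. (5)
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    X_tr, X_val, y_tr, y_val = train_test_split(
        Xn, y, test_size=0.2, stratify=y, random_state=seed)
    clf = MLPClassifier(hidden_layer_sizes=(n_hidden,), activation="logistic",
                        solver="lbfgs", max_iter=5000, random_state=seed)
    clf.fit(X_tr, y_tr)
    return clf, clf.score(X_val, y_val)
```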
3 Results and Discussion
From all 85 individual samples, 68 (80%) were randomly selected for network training, of which 37 were from healthy people and 31 from diabetic patients. The other 17 (20%) samples were used for validation, comprising 9 non-diabetics and 8 diabetics. This ratio was initially chosen due to the small amount of data available. The results of the experiments using different numbers of neurons in the hidden layer are presented in Table 1.

Table 1. Results of classification with different numbers of hidden neurons

Number of hidden neurons   Total errors
10                         5
20                         4
30                         4
40                         2
50                         0
The network with 50 neurons in the hidden layer presented the best results. Tests with different ratios of training and validation samples were also performed; their results are shown in Table 2.

Table 2. Results of classification with different ratios of training and validation samples

Training samples   Validation samples   Total errors
40%                60%                  12
50%                50%                  9
60%                40%                  7
70%                30%                  5
80%                20%                  0
After the network convergence process, all 68 samples used in training were correctly classified. In the best validation experiment, considering 20% of the database, all 17 samples were also successfully classified, 9 as non-diabetics and 8 as diabetics. In this experiment, there was an overall accuracy rate of 100% during the system training and testing. This indicates the existence of consistent patterns in pupillary reflex characteristics that can distinguish diabetic patients from healthy subjects, even without considering the presence of a neuropathy. However, tests with a larger amount of data are still needed in order to confirm the effectiveness of this method to assist in screening the presence of the disease.
4 Conclusion
This paper proposed the use of artificial neural networks to assist in screening diabetic from non-diabetic individuals through dynamic pupillometry, a fast and
noninvasive technique. Tests showed a good performance of the multi-layer perceptron neural network used in the classification of these subjects, presenting a zero percent error rate by using only the characteristics of pupil size variations in response to a light stimulus as features. This indicates that the pupillary reflex can be useful, in principle, as an aid tool to determine patterns that could differentiate diabetic patients from healthy subjects.
References
1. Mandrup-Poulsen, T.: Recent advances – diabetes. Brit. Med. J. 316, 1221–1225 (1998)
2. Widman, S., Ladner, E., Lottenberg, S.: Diabetes. Senac, São Paulo (1989)
3. Pittasch, D., Lobmann, R., Behrens-Baumann, W., Lehnert, H.: Pupil signs of sympathetic autonomic neuropathy in patients with type 1 diabetes. Diabetes Care 25, 1545–1550 (2002)
4. Fotiou, F., Fountoulakis, K.N., Goulas, A., Alexopoulos, L., Palikaras, A.: Automated standardized pupillometry with optical method for purposes of clinical practice and research. Clin. Physiol. 20, 336–347 (2000)
5. Dütsch, M., Marthol, H., Michelson, G., Neundörfer, N., Hilz, M.J.: Pupillography refines the diagnosis of diabetic autonomic neuropathy. J. Neurol. Sci. 222, 75–81 (2004)
6. Cahill, M., Eustace, P., Jesus, V.: Pupillary autonomic denervation with increasing duration of diabetes mellitus. Brit. J. Ophthalmol. 85, 1225–1230 (2001)
7. Ferrari, G.L., Marques, J.L.B., Gandhi, R.A., Heller, S., Schneider, F.K., Tesfaye, S., Gamba, H.R.: Using dynamic pupillometry as a simple screening tool to detect autonomic neuropathy in patients with diabetes: a pilot study. Biomed. Eng. Online 9, 26–41 (2010)
8. Haykin, S.: Neural Networks and Learning Machines. Prentice Hall, Upper Saddle River (2009)
9. Yano, V., Zimmer, A., Ferrari, G.L.: Análise da Variação do Reflexo Pupilar entre Pacientes Diabéticos para Aplicação em Sistema Biométrico. In: XXII Congresso Brasileiro de Engenharia Biomédica, UFSJ (2010)
10. Weiqi, Y., Lu, X., Zhonghua, L.: A Novel Iris Localization Algorithm Based on the Gray Distributions of Eye Images. In: 27th IEEE Annual International Conference of the Engineering in Medicine and Biology Society, pp. 6504–6507. IEEE Press, Shanghai (2005)
11. Weisstein, E.W.: Levenberg-Marquardt Method (2010), http://mathworld.wolfram.com/Levenberg-MarquardtMethod.html
12. Maffra, F.A., Gattass, M.: Método de Levenberg-Marquardt. PUC-Rio, Rio de Janeiro (2008)
Genetic Snake for Medical Ultrasound Image Segmentation Mohammad Talebi and Ahmad Ayatollahi Department of Electrical Engineering, Iran University of science and Technology, Tehran, Iran
[email protected], [email protected]
Abstract. Active contour, due to its acceptable results in the field of image segmentation, has attracted increasing attention over the last several decades. However, the low quality and the presence of noise in medical images, particularly ultrasound images, have also created some limitations for this method, such as entrapment within local minima and the adjustment of the contour coefficients. In this paper, we present a segmentation algorithm combining active contour and genetic algorithm to remove these limitations and bring some improvements to the segmentation outcome. The experimental results show that our proposed algorithm has an acceptable accuracy. Keywords: Segmentation, Active Contour, Genetic algorithm, Ultrasound Images, Snake.
1 Introduction
The development of imaging techniques in the medical sector, such as CT, MRI, X-ray, and ultrasound, has laid the groundwork for employing image processing in this area as well. By using these methods, it is possible to image various parts of the body and to help physicians with the correct diagnosis of diseases. In recent decades, the ultrasound imaging technique, because of advantages such as being quick, harmless, and less costly compared with the other imaging methods, has attracted more attention. Despite all these advantages, the method used in the ultrasound imaging system causes characteristics such as speckle noise, non-uniform attenuation of ultrasound beams, and low signal-to-noise ratios in ultrasound images. The damaging effects of these characteristics give the ultrasound images a very undesirable quality. This adverse quality leads to complications in the implementation of processing algorithms and also lowers their accuracy. It should be pointed out, however, that one of the most important tasks in medical image processing is the correct segmentation of the image, which can be used to distinguish regions like organs, bones, tissues, etc., from the other parts of the image. This is important with respect to its many applications in clinical analysis, such as tumor and tissue classification. So far, many different methods have been proposed for ultrasound image segmentation, including various methods of edge finding [1], the watershed transform
method [2,3], clustering methods [4], methods based on statistical models [5], wavelet transform [6], and the local histogram range image method [7]. The major problem inherent in most of the suggested methods is that they lose their functionality in the presence of noise, and the segmentation accuracy is reduced. One of the proposed methods is the active contour, which has been utilized extensively in the area of image segmentation in the last few years. According to the mechanisms used for the contour deformation process, active contours can be divided into two models: parametric active contours and geometric active contours. In this paper, we focus on parametric active contours. This model was first proposed by Kass in 1988 and became known as the traditional snake model [8]. The active contour is a deformable curve in two-dimensional space, which is established on the basis of energy minimization and the equilibrium of the internal and external forces. The results obtained from using the active contour show that this technique possesses good accuracy in image segmentation; however, the active contour also has certain limitations. Entrapment within local minima and the resulting deviation from the original path is one of the most severe drawbacks of using the active contour, and the existence of noise and the low quality of the image magnify this difficulty. So far, to solve the problem of contour entrapment in local minimum points, various methods have been suggested, such as the balloon model [9], the distance potential model [10], the GVF model [11], and the topology snake model [12]. In the balloon model, to resolve the dilemma of local minima and to facilitate the deformation of the contour, an additional force called the pressure force is utilized. This force causes the contour to exhibit a balloon-like behavior and to expand and contract with the increasing and decreasing value of the force. In 1993, Cohen & Cohen presented another active contour model for increasing the area of absorption; in this model, a potential function based on distance marking is used as the external force. The GVF model is another type of active contour model, presented in 1998 by Xu & Prince. In this model, the vector fields obtained from the edge demarcation are made to infiltrate the inner regions of the image, and through the increase and compaction of these vector fields, the absorption area is increased. More recently, the use of the genetic algorithm for the optimization of the active contour has been pursued [13-17]. The other limitation that exists for the different models of the active contour is the adjustment of the contour parameters. The adjustment of these parameters is a difficult task, and the appropriate values differ from one image to another. Until now, the adjustment of these parameters has been carried out experimentally. In the algorithm presented in this paper, we use the balloon model for the segmentation of ultrasound images, and we try to remedy the adjustment problem associated with the active contour parameters through the use of the genetic algorithm. A review of the results obtained from the application of the proposed algorithm suggests that this model is highly effective. In the second section of this paper, we illustrate the concept of the parametric active contour and discuss the balloon model. In section three, the fundamentals of the genetic algorithm are discussed.
In section four, the proposed algorithm will be explained, and finally, the results of the ultrasound image segmentation obtained through the proposed model will be presented.
2 Parametric Active Contour
The active contour is a powerful and useful approach for the segmentation of objects within images. The traditional snake, or active contour, is a two-dimensional curve V(s) = (x(s), y(s)) in the image space I(x, y), whose deformation is based on energy minimization. In this definition, x and y are the position coordinates of the curve in the 2-D image and s ∈ [0, 1] is the parametric representation of the contour. In this method, first, a primary contour is defined close to the edge of the object and then, in order to detect the edges, an energy function is specified for the contour deformation. Finally, the curve moves through the spatial domain of the image and, by minimizing the specified energy through various arithmetic techniques, the edge detection and segmentation process is completed. In general, the energy function of the active contour is expressed as [8, 18]:

$E_{snake} = \int_0^1 \left[ E_{int}(V(s)) + E_{ext}(V(s)) \right] ds$  (1)

The defined energy function for the contour consists of two components: the internal energy and the external energy.

Internal Energy $E_{int}$: This term of energy is used to control the rate of stretch and to prevent discontinuity in the contour, and can be defined as follows:

$E_{int} = \frac{1}{2} \left( \alpha(s)\,|V'(s)|^2 + \beta(s)\,|V''(s)|^2 \right)$  (2)
The first term of the internal energy is related to the contour elasticity, and the weight factor α(s) is used to control and adjust it. The second term specifies the contour's strength and resistance against sudden changes, controlled by the factor β(s). In this equation, V'(s) and V''(s) denote the first- and second-order derivatives of V(s). These coefficients have a great effect on the behavior of the active contour: if α(s) and β(s) are close to unity, the internal energy has a major influence on the contour deformation.

External Energy $E_{ext}$: The external energy is generated by using image characteristic features or the limitations imposed on the contour by the user, and it is used for contour displacement. Equation (3) represents a typical external energy defined for the contour:

$E_{ext} = -\gamma \left| \nabla \left( G_\sigma(x, y) * I(x, y) \right) \right|^2$  (3)
In this equation, γ is the external energy weight factor, $G_\sigma(x, y)$ is a two-dimensional Gaussian filter with standard deviation σ, ∇ is the gradient operator, and * denotes the convolution operator applied to the 2-D input image I(x, y). In order to control the contour deformation and displacement, these energy components are converted into an internal and an external force. During the process of contour deformation, the force resulting from the internal energy keeps the contour smooth and prevents breaking and discontinuity of the contour, which are caused mainly by the presence of irregularities and noise in the image. Also, the external force has the task of displacing the contour from its initial position and guiding it towards the subject's edge. When the energy function attains its minimum value, in other words, when the external and internal forces balance out, the contour deformation
will be stopped and the edge detection process reaches its end. This indicates that the contour has coincided with the edge and satisfies the Euler-Lagrange equation:

$\frac{d}{ds}\left( \alpha(s)\, V'(s) \right) - \frac{d^2}{ds^2}\left( \beta(s)\, V''(s) \right) - \nabla E_{ext} = 0$  (4)

Here, ∇ is the gradient operator, and V'(s) and V''(s) denote the first- and second-order derivatives of V(s). The traditional snake has two major limitations:

Sensitivity of the contour to the initial position: if the contour is not close enough to the object edge, it is not attracted by it.
Local minima: minimum-energy points formed due to the presence of noise in the image can entrap the contour and deviate it from the original path.

The existence of these limitations led to the development of new active contour models to overcome these problems. The balloon model is one of them [9]. In this model, to remedy the problems associated with the traditional contour model, changes were made to the definition of the applied external force, and besides the Gaussian potential force, another force, called the pressure force, was also used. Equation (5) represents a general definition of the external force for the balloon model [9]:

$F_{ext} = K_1\, \vec{n}(s) - K\, \frac{\nabla P}{|\nabla P|}$  (5)

In Equation (5), the pressure force is represented by the unit normal vector $\vec{n}(s)$, which acts perpendicular to the contour at each point and is controlled by the coefficient K1, while P denotes the Gaussian potential. The value chosen for K1 should be large enough to carry the contour through the local minimum points resulting from noise and weak, indistinct edges. But at important edges, the Gaussian potential force should be larger than the pressure force in order to stop the contour's expansion and contraction at these edges. The use of the pressure force has brought several advantages for this model, some of which are listed below (a sketch of computing these forces on a discrete image follows the list):

1. The use of the pressure force reduces the sensitivity of the model to the initial position of the contour and facilitates contour displacement and convergence towards the subject's edges.
2. The use of the pressure force, in the absence of the Gaussian potential force, helps the displacement and exit of the contour from homogeneous regions.
3. At the local minimum points (noise), the pressure force disrupts the equilibrium between the internal and external forces, rescues the contour from the entrapment of local minima, and guides it towards the subject's edge.
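As an illustration of how such an external force field can be computed on a discrete image, the following sketch builds the Gaussian potential force from image gradients and adds a pressure term along the contour normals; the smoothing scale and force weights are arbitrary example values, not the coefficients used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def external_force_field(image, sigma=3.0, gamma=1.0):
    """Gaussian potential force: gradient of the edge map |grad(G_sigma * I)|^2."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    edge_map = gamma * (gx**2 + gy**2)
    fy, fx = np.gradient(edge_map)       # force pulls the contour towards edges
    return fx, fy

def balloon_force(contour, k1=0.2):
    """Pressure force along the normals of a closed polygonal contour
    (the sign of the normal depends on the contour orientation)."""
    tangents = np.roll(contour, -1, axis=0) - np.roll(contour, 1, axis=0)
    normals = np.column_stack([tangents[:, 1], -tangents[:, 0]])
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-9
    return k1 * normals
```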
3 Genetic Algorithm
The genetic algorithm is an efficient, adaptive and stable optimization method which has caught the attention of researchers in the last several years. However, these algorithms do not guarantee the optimum solution of a problem. Using them in
optimization problems has shown that these algorithms often find the solution closest to the optimum and, in some cases, they obtain the best solution among the existing solutions in the search space. The genetic algorithm uses a set of population members called chromosomes for optimization. These chromosomes are in fact candidate solutions of the problem. In a search process, the best of them are selected from the solution set available in the search space. Each of these chromosomes is made up of several genes, and these genes represent the parameters that should be optimized. The chromosomes are encoded by numbers; the manner of encoding the genes and chromosomes depends on the type of problem to be solved. Usually, a string of zeros and ones is used for encoding the chromosomes. After determining the population size and the manner of encoding, the appropriateness of each solution should be evaluated. To do this, the fitness function is used. After the fitness function is determined for each member of the population, three genetic operators are applied to them to prevent premature convergence. These three genetic operators are as follows [19]:

Selection Operator: This operator is applied on the members of the population and selects a number of individuals who possess the best fitness function values to reproduce the new generation. Different methods exist for selecting the best members of the society, including selection by the roulette wheel method, selection by the competition method, and the elitist selection method.

Crossover Operator: After the best members of the society are selected for reproduction, it is necessary for them to cross over and produce the new generation. In this situation, from among the selected individuals, certain pairs are identified as parents, and by crossing them over, two offspring are born. To cross over the parent chromosomes, single-point crossover, double-point crossover or uniform crossover methods can be used.

Mutation Operator: Mutation, by changing the genes, can produce a new chromosome. Applying the mutation operator in the genetic algorithm preserves valuable information of the chromosomes which may otherwise be deleted during the execution of the algorithm. On the other hand, it can greatly enhance the algorithm's operation and prevent its quick convergence and its falling into the trap of local minima. The amount of mutation taking place in a chromosome is determined by the mutation rate. First, a random number between zero and one is generated for each gene to carry out the mutation operation. If the generated number is smaller than the value of the mutation probability, the mutation takes place and the gene changes; otherwise, no change is carried out on the gene. To calculate the mutation rate, usually the relation $p_m = 1/L$ is used, where L is the chromosome length [20]. The process of producing a new generation and selecting the best member is repeated continually until the algorithm's termination condition is satisfied.
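The sketch below shows these three operators in a minimal binary-coded genetic algorithm; the population size, roulette-wheel selection, uniform crossover, and the 1/L mutation rate mirror the description above, while the toy fitness function is only a placeholder for the contour-based fitness introduced in the next section.

```python
import numpy as np

rng = np.random.default_rng(1)

def decode(chrom, n_genes=4, bits=10):
    """Map each 10-bit gene to a coefficient in [0, 1]."""
    genes = chrom.reshape(n_genes, bits)
    weights = 2.0 ** np.arange(bits - 1, -1, -1)
    return genes @ weights / (2**bits - 1)

def roulette_select(pop, fitness, n):
    p = fitness / fitness.sum()
    idx = rng.choice(len(pop), size=n, p=p)
    return pop[idx]

def uniform_crossover(a, b):
    mask = rng.random(a.size) < 0.5
    return np.where(mask, a, b), np.where(mask, b, a)

def mutate(chrom, rate):
    flip = rng.random(chrom.size) < rate
    return np.where(flip, 1 - chrom, chrom)

def evolve(fitness_fn, n_pop=40, n_genes=4, bits=10, n_gen=50):
    L = n_genes * bits
    pop = rng.integers(0, 2, size=(n_pop, L))
    for _ in range(n_gen):
        fit = np.array([fitness_fn(decode(c)) for c in pop])
        parents = roulette_select(pop, fit, n_pop)
        children = []
        for i in range(0, n_pop, 2):
            c1, c2 = uniform_crossover(parents[i], parents[(i + 1) % n_pop])
            children += [mutate(c1, 1.0 / L), mutate(c2, 1.0 / L)]
        pop = np.array(children)[:n_pop]
    best = max(pop, key=lambda c: fitness_fn(decode(c)))
    return decode(best)

# placeholder fitness: prefer coefficients near 0.5 (stand-in for 1/(1+E_snake))
best_coeffs = evolve(lambda v: 1.0 / (1.0 + np.sum((v - 0.5) ** 2)))
```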
4 Proposed Genetic Snake Model
As was pointed out in the section introducing the active contour, certain coefficients are used to control the operation of the active contour.
The adjustment of these coefficients, for controlling the contour's behavior, is difficult, and the values of these coefficients differ from one image to another. Until now, the contour coefficients have been adjusted based on experience. Figure 1 shows several segmentation samples performed by the active contour, whose coefficients have been adjusted experimentally through trial and error. As the images clearly show, the image segmentation has not been performed properly due to the imbalance between the internal and external forces. This happens because the adjustment of these coefficients and their interrelation has not been adequately taken into consideration, and this task should continue until each coefficient value is correctly determined.
Fig. 1. (a) Initial contour position for image segmentation; (b) segmentation performed by the contour after 175 iterations with the coefficient set α = 0.08, β = 0.3, γ = 0.5, and κ = 0.2; (c) segmentation performed by the contour after 175 iterations with the coefficient set α = 0.08, β = 0.3, γ = 0.5, and κ = 0.1; (d) segmentation performed by the contour after 500 iterations with the coefficient set α = 0.08, β = 0.3, γ = 0.5, and κ = 0.1.
Thus, it is concluded that finding the contour coefficients experimentally requires a lot of time. The genetic algorithm is one of the methods that can be used to remedy this problem. To implement this algorithm, first, each coefficient of the active contour was assumed to be a gene that should evolve in the course of the segmentation process. Since the balloon model uses four control coefficients to oversee the contour's behavior throughout the segmentation, we produced a set of chromosomes, each consisting of four genes. Figure 2 shows the structure of these chromosomes. After generating the initial coefficients and determining the contour's initial position, each of these coefficient sets should be evaluated with regard to the position of the contour, and ultimately, the best set for contour deformation should be selected. To do this, first, a fitness function should be defined. Since the establishment of active contours is based on energy minimization, the fitness function can be defined as follows:

$fitness = \frac{1}{1 + E_{snake}}$  (6)
Considering the defined function and the contour's position at each step, for the purpose of evaluation and selection of the best set of coefficients, the fitness function can be determined for every coefficient set by calculating the energy of all the contour points. Those contour points that are located on the edge of the tissue or in the
local minima have the least amount of energy; in other words, their fitness functions reach the highest value. Therefore, to resolve the problem of local minima and evaluate the points more correctly, at every step, instead of using each point's fitness function, we calculate the fitness function over all the contour points. The overall fitness function of the contour is the sum of the fitness functions of the individual contour points. After calculating the fitness functions for all the chromosomes, the coefficient set with the highest fitness function value is selected as the best answer and used for a single-step contour deformation.
Fig. 2. The structure of chromosomes that have been used to encode the active contour's control coefficients; N is the number of chromosomes in a society
In the next steps, for the re-deformation of the contour, and considering the selection rate, some of the best individuals of the society are selected and put into the replication pool to produce a new set of coefficients. Then, the members of the pool are selected in pairs and start to reproduce. At this stage, to cross over the parent chromosomes, we used the uniform crossover method. In the next stage, to prevent fast convergence and to obtain a better generation, the mutation operation is performed on the chromosome population. This process of coefficient production and contour deformation is repeated until the algorithm's termination condition is satisfied.
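A compact sketch of this evaluate-and-select step is shown below; `snake_energy` and `decode` stand for the contour-energy evaluation and the chromosome decoding used by a concrete implementation (both are assumptions here, with `decode` matching the GA helpers sketched earlier).

```python
import numpy as np

def best_coefficients(population, contour, image, snake_energy, decode):
    """Evaluate every chromosome's coefficient set on the current contour and
    return the set with the highest fitness 1 / (1 + E_snake), as in Eq. (6)."""
    best, best_fit = None, -np.inf
    for chrom in population:
        alpha, beta, gamma_, kappa = decode(chrom)     # four contour coefficients
        energy = snake_energy(contour, image, alpha, beta, gamma_, kappa)
        fitness = 1.0 / (1.0 + energy)                 # energy summed over all contour points
        if fitness > best_fit:
            best, best_fit = (alpha, beta, gamma_, kappa), fitness
    return best
```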
5 Experimental Results
To evaluate the effectiveness of the proposed algorithm against the gold standard (represented by manual tracing), several breast ultrasound images were acquired and segmented by an expert observer. The expert manually traced the edge of the anatomical structure, obtaining the corresponding areas. Then, we used the Hausdorff distance to compare the manual contour defined by the expert and the genetic snake contour. The Hausdorff distance specifies the degree of mismatch between two finite point sets, A and B, and can be used for computing the difference between two images; it is defined as follows [21, 22]:

$H(A, B) = \max\left( h(A, B),\, h(B, A) \right)$  (7)
In this equation, $A = \{a_1, \dots, a_p\}$ and $B = \{b_1, \dots, b_q\}$ are two finite point sets of the contours, and $h(A, B)$ measures the distance from each point $a$ to its nearest neighbor in $B$ (and vice versa for $h(B, A)$); it is defined as:

$h(A, B) = \max_{a \in A} \min_{b \in B} \| a - b \|$  (8)
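For reference, a direct NumPy implementation of Equations (7) and (8) is sketched below; SciPy's directed_hausdorff could be used instead, and the two example contours are purely illustrative.

```python
import numpy as np

def directed_hausdorff(A, B):
    """h(A, B): for every point in A, distance to its nearest neighbour in B."""
    diff = A[:, None, :] - B[None, :, :]          # pairwise differences
    dists = np.linalg.norm(diff, axis=-1)         # |a - b| for all pairs
    return dists.min(axis=1).max()

def hausdorff(A, B):
    """H(A, B) = max(h(A, B), h(B, A)) as in Eqs. (7) and (8)."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# toy example: two contours given as (N, 2) arrays of pixel coordinates
manual = np.array([[10, 10], [20, 12], [30, 15]])
snake = np.array([[11, 11], [21, 13], [29, 18]])
print(hausdorff(manual, snake))   # maximum mismatch in pixels
```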
In this algorithm, for the purpose of image segmentation, we used the balloon model and the pressure force. Also, for producing the contour coefficients, we considered 40 chromosomes, with genes of a length of 10 bits. Each of these parameters has been produced in the range [0, 1], and taking into consideration the number of bits used for encoding the genes, the precision of each parameter is about 0.001. Figure 3(a) shows the contour's initial position on the ultrasound image. Figure 3(b) illustrates the result of the segmentation performed on the image of Figure 3(a) by the proposed algorithm after 500 iterations. Figure 3(c) shows the image manually segmented by an expert. Compared with the contour outlined by the expert [Figure 3(d)], it can be seen that the tissue has been correctly identified. According to the Hausdorff distance, the maximum error between the manual contour defined by the expert and the genetic snake contour is 5 pixels and the mean error is 1.68 pixels for this sample.
Fig. 3. (a) Initial position. (b) Final result of segmentation by the proposed algorithm after 500 iterations. (c) Manually segmented image by expert. (d) Mismatch between the manual and genetic snake contours.
Also, Figure 4 shows other examples of ultrasound image segmentation by the proposed algorithm, and Table 1 shows the statistical results of the segmentation in Figure 4. The results of implementing the proposed algorithm on breast ultrasound images, with a mean error value of less than 2 pixels in each image, show that the proposed algorithm has an acceptable accuracy.

Table 1. The statistical results of segmentation on Figure 4

Figure number   Maximum error   Mean error    Number of iterations
4-a             6.4 pixel       1.7 pixel     500
4-b             3 pixel         0.8 pixel     550
4-c             4 pixel         1.19 pixel    500
4-d             4 pixel         1.63 pixel    500
4-e             10 pixel        1.58 pixel    700
4-f             11 pixel        2 pixel       550
Fig. 4. Segmentation of the ultrasound breast images by the genetic snake
6 Conclusion
In this paper, we tried to use the genetic algorithm along with the active contour in order to remove and resolve the existing limitations on the adjustment of control parameters related to active contours. The use of the genetic algorithm for the adjustment of contour parameters provides the possibility for the operator to carry out the segmentation process by only determining the contour's initial position; while, in the manual method, the same task was performed through several steps by means of trial and error. So, we can conclude that the proposed method needs less time for the segmentation of the contour. The results obtained from the implementation of the proposed method on ultrasound images demonstrate that, by applying this method, the parameters are well adjusted and the problem arising from the lack of balance between the forces on the edge no longer exists.
References
1. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 2nd edn. Publishing House of Electronics Industry, Beijing (2002)
2. Deka, B., Ghosh, D.: Watershed Segmentation for Medical Ultrasound Images. In: IEEE International Conference on Systems, Man, and Cybernetics, pp. 3186–3191 (2006)
3. Liang, L., Yingxia, F., Peirui, B., Wenjie, M.: Medical ultrasound image segmentation based on improved watershed scheme. In: IEEE International Conference on Bioinformatics and Biomedical Engineering, ICBBE, pp. 1–4 (2009)
4. Chang-ming, Z., Guo-chang, G., Hai-bo, L., Jing, S., Hualong, Y.: Segmentation of Ultrasound Image Based On Cluster Ensemble. In: IEEE International Symposium Knowledge Acquisition and Modeling Workshop, pp. 418–421 (2008)
5. Sarti, A., Corsi, A., Mazzini, E., Lamberti, C.: Maximum Likelihood Segmentation of Ultrasound Images with Rayleigh Distribution. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 52(6), 947–960 (2005)
6. Liu, H., Chen, Z., Chen, X., Chen, Y.: Multiresolution Medical Image Segmentation Based on Wavelet Transform. In: 27th Annual Conference Engineering in Medicine and Biology, pp. 3418–3421 (2005)
7. Kermani, A., Ayatollahi, A., Mirzaei, A., Barekatain, M.: Medical ultrasound image segmentation by modified local histogram range image method. J. Biomedical Science and Engineering, 1078–1084 (2010)
8. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: active contour models. International Journal of Computer Vision, 321–331 (1988)
9. Cohen, D.: On active contour models and balloons. CVGIP: Image Understanding, 211–218 (1991)
10. Cohen, L., Cohen, I.: Finite element methods for active contour models and balloons for 2-D and 3-D images. IEEE Transactions on Pattern Analysis and Machine Intelligence 15(11), 1131–1147 (1993)
11. Xu, C., Prince, C.: Snakes, shapes, and gradient vector flow. IEEE Trans. Image Process., 359–369 (1998)
12. McInerney, T., Terzopoulos, D.: T-snakes: topologically adaptive snakes. Medical Image Analysis, 73–91 (2000)
13. Ballerini, L.: Genetic snakes for medical images segmentation. In: Poli, R. (ed.) Evolutionary Image Analysis, Signal Processing and Telecommunications, pp. 59–73. Springer, London (1999)
14. Rezaei Rad, G., Kashanian, M.: Extraction of the Breast Cancer Tumor in Mammograms Using Genetic Active Contour. In: International Conference on Biomedical and Pharmaceutical Engineering (ICBPE), pp. 30–33 (2006)
15. Hussain, A.R.: Optic Nerve Head Segmentation Using Genetic Active Contours. In: IEEE International Conference on Computer and Communication Engineering, pp. 783–787 (2008)
16. Mun, K.J., Kang, H.T., Lee, H.T., Yoon, Y.S., Lee, Y.S., Park, Y.S.: Active Contour Model Based Object Contour Detection Using Genetic Algorithm with Wavelet Based Image Preprocessing. International Journal of Control, Automation, and Systems 2(1), 100–106 (2004)
17. Rousselle, J.-J., Vincent, N., Verbeke, N.: Genetic Algorithm to Set Active Contour. In: Petkov, N., Westenberg, M.A. (eds.) CAIP 2003. LNCS, vol. 2756, pp. 345–352. Springer, Heidelberg (2003)
18. He, L., Peng, Z., Everding, B., Wang, X., Han, Y., Weiss, K.L., Wee, W.G.: A comparative study of deformable contour methods on medical image segmentation. Image and Vision Computing, 141–163 (2007)
19. Sivanandam, S.N., Deepa, S.N.: Introduction to Genetic Algorithms. Springer, Heidelberg (2008) ISBN 978-3-540-73189-4
20. Back, T.: Optimal Mutation in Genetic Search. In: Fifth International Conference on Genetic Algorithms, pp. 2–8 (1993)
21. Huttenlocher, D.P., Klanderman, D.P., Rucklidge, W.J.: Comparing images using the Hausdorff distance. IEEE Transactions on Pattern Analysis and Machine Intelligence 15(9), 850–863 (1993)
22. Belogay, E., Cabrelli, E., Molter, U., Shonkwiler, R.: Calculating the Hausdorff distance between curves. Information Processing Lett. 64, 17–22 (1997)
3D-Video-fMRI: 3D Motion Tracking in a 3T MRI Environment José Maria Fernandes1,2, Sérgio Tafula2,3, and João Paulo Silva Cunha1,2,3 1
Dep. of Electronics, Telecomm. and Informatics, University of Aveiro, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal 2 IEETA/University of Aveiro, Campus Universitário de Santiago, 3810-193 Aveiro, Portugal 3 Portuguese Brain Imaging Network (ANIFC), Coimbra, Portugal {jfernan,jcunha}@ua.pt,
[email protected]
Abstract. We propose a technical solution that enables 3D video-based in-bore movement quantification to be acquired synchronously with BOLD functional magnetic resonance imaging (fMRI) sequences. Our solution relies on an in-bore video setup with 2 cameras mounted at a 90-degree angle that allows tracking movements while acquiring fMRI sequences. In this study we show that, using 3D motion quantification of a simple finger opposition paradigm, we were able to map two different finger positions to two different BOLD response patterns in a typical block design protocol. The motion information was also used to adjust the block design to the actual motion start and stop, improving the time accuracy of the analysis. These results reinforce the role of video-based motion quantification in fMRI analysis as an independent regressor that allows new findings not discernible when using traditional block designs. Keywords: Motion tracking, 3D-video-fMRI, finger tapping, BOLD analysis.
1 Introduction
Functional magnetic resonance imaging (fMRI) has been a window to study the human motor system, usually using simple in-bore movements such as finger tapping [1] or finger opposition [2-4]. Typically, researchers rely on assuming the subject's correct execution of and compliance with vocal instructions, using the design protocol to distinguish between rest and activation, or use MRI-compatible sensors such as buttons in a response box (e.g. [5]) or more complex devices (e.g. [6-7]). Unfortunately, in many situations, it is impossible to detect in-bore subjects' movements, especially when the response cannot be controlled (e.g. the study of involuntary tremor in Parkinson's disease [8]). In the present study we propose a technical solution that enables 3D video-based in-bore movement quantification to be acquired synchronously with the BOLD fMRI sequences. Using this innovative method we can use millisecond-accurate motor-related information to correct and better support the analysis of the BOLD response.
2 Data and Methods
2.1 Setup
MRI acquisitions were performed in a scanner operating at 3 Tesla (Magnetom Trio Tim, Siemens AG, Erlangen, Germany) at the Portuguese Brain Imaging Network facilities in Coimbra (www.brainimaging.pt). The sessions were recorded using two MRI-compatible PAL colour cameras fixed to an in-house manufactured tubular support that positioned the cameras in perpendicular directions in relation to the subject's hand. This support can be inserted into the bore from both sides of the scanner (Fig. 1 A & C). The two cameras, positioned perpendicular to each other with respect to the hand, enabled 3D motion tracking of each finger. Each fingertip was colour-coded (Fig. 1 B) to simplify the video tracking algorithms [9]. We coded the thumb with yellow (Y), the little finger with red (R) and the index finger with green (G). We used this colour coding to identify the thumb-to-little-finger opposition (YR) and the thumb-to-index-finger opposition (YG) occurrences. The perpendicular positioning of both cameras enabled 3D motion tracking of each finger (Fig. 1 B & D).
Fig. 1. The video setup schematics (A&B) and photographs (C&D). A plastic support (A) allows the cameras to change in height (h) and longitudinal displacement (d) to capture the area of interest in the back or in the front of the scanner. The cameras are placed at a 90-degree angle to allow 3D tracking (detail in B and D). For easier tracking, the fingertips were colour-coded – (Y)ellow for the thumb, (R)ed for the little finger and (G)reen for the index finger.
A video mixer was used to combine both videos into a split-screen PAL video output connected to a video digitizer. The synchronization of the video and the fMRI was based on the audio signature of the EPI sequences, which was fed into a synchronized audio channel of the video digitizer [10]. By aligning the pulse generated by the gradient changes during the EPI sequences (beginning of each fMRI volume acquisition) with the respective artifact in the audio, it was possible to map the video events to the fMRI volume acquisitions. These electromagnetic pulses interfere with the audio [11] and are detectable in the audio stream using a running-window correlation-based detector with a template based on the audio artifact.
2.2 fMRI Acquisition and Protocol
One volunteer (23 years old, female) performed two simple in-bore tasks: T1) consecutive self-paced right-hand finger oppositions (thumb-to-index (YG) and thumb-to-little (YR)) and T2) index finger rotation. We performed 2 sessions of 180 seconds organized in consecutive blocks of 20 s rest-20 s activation. In each session, an echo-planar imaging (EPI) fMRI sequence covering the brain motor areas was used to obtain the BOLD signal (90 volumes, TE=42 ms, TR=2000 ms, 25 slices, voxel size=2.0x2.0x2.0 mm). The EPI fMRI volumes were spatially realigned (to the mean volume), slice-time corrected (to the middle slice) and smoothed with an isotropic 8 mm full-width-half-maximum Gaussian kernel by means of the SPM5 software (http://www.fil.ion.ucl.ac.uk/spm/). The resulting images were then co-registered to high-resolution 3D T1-weighted images and normalized to the standard Montreal Neurological Institute (MNI) template.
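A sketch of the audio-based synchronization step is given below: it slides an artifact template over the digitizer's audio track and marks volume onsets where the normalized cross-correlation exceeds a threshold. The sampling rate, template, and threshold are illustrative assumptions rather than values reported in the paper.

```python
import numpy as np

def detect_epi_onsets(audio, template, fs=48000, threshold=0.8, tr=2.0):
    """Locate EPI-artifact onsets in the audio track by running correlation."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    n = len(t)
    onsets, i = [], 0
    min_gap = int(0.5 * tr * fs)                # refractory period between volumes
    while i < len(audio) - n:
        w = audio[i:i + n]
        c = np.dot((w - w.mean()) / (w.std() + 1e-12), t) / n   # normalized correlation
        if c > threshold:
            onsets.append(i / fs)               # onset time in seconds
            i += min_gap
        else:
            i += 1
    return np.array(onsets)
```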
Fig. 2. Motion and regressors used in the analysis. The quantified motion was used to identify finger positions (A) – opposition of thumb-to-index (YG) and thumb-to-little finger (YR) – and this information was used first to correct the block design (B) to obtain the actual finger-motion block (C), and then to define 2 discrete regressors (duration 0 s) associated with each of the finger positions described earlier (YR and YG).
In the present work we focused our analysis on the finger opposition task. We analyzed the BOLD response using statistical parametric mapping [12] implemented in SPM5. All fMRI sequences were processed using the default procedures and parameters. The preprocessing included a high-pass filter with a 128-second cutoff. We used the canonical hemodynamic response function (HRF) in the analysis. We used several regressors in the analysis (Fig. 2): 1) the block design, 2) the block design corrected based on the measured motion, and 3) two independent series of discrete regressors (0 s duration) extracted from the video occurrences of the YG and YR oppositions. For each regressor, contrasts and t-maps were calculated and thresholded to identify areas of statistical significance. All activation/deactivation extremes were mapped to the Automated Anatomical Labeling (AAL) digital atlas of the MNI template [13], and only findings clearly identified in the atlas were considered.
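The following sketch illustrates how such regressors can be built for a GLM analysis: a block regressor, a motion-corrected block regressor (same function with corrected onsets/durations), and event (0 s duration) regressors, each convolved with a canonical double-gamma HRF. The HRF parameters and the example onsets follow common SPM-style defaults and are stated here as assumptions.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr=2.0, length=32.0):
    """Double-gamma HRF sampled at the TR (SPM-like default parameters)."""
    t = np.arange(0, length, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return h / h.sum()

def block_regressor(n_scans, onsets, durations, tr=2.0):
    """Boxcar from onset/duration pairs (seconds), convolved with the HRF."""
    box = np.zeros(n_scans)
    for on, dur in zip(onsets, durations):
        box[int(on // tr):int((on + dur) // tr)] = 1.0
    return np.convolve(box, canonical_hrf(tr))[:n_scans]

def event_regressor(n_scans, event_times, tr=2.0):
    """Discrete (0 s duration) events, e.g. YG or YR oppositions, convolved with the HRF."""
    stick = np.zeros(n_scans)
    stick[(np.asarray(event_times) // tr).astype(int)] = 1.0
    return np.convolve(stick, canonical_hrf(tr))[:n_scans]

# 90 volumes, 20 s rest / 20 s activation blocks; YG/YR times come from the video
design_block = block_regressor(90, onsets=[20, 60, 100, 140], durations=[20] * 4)
```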
Fig. 3. Finger 3D trajectories extracted from video
3 Results
The in-bore movement quantification procedure successfully extracted precise 3D finger trajectories (Fig. 3). Furthermore, the YG and YR events were detected with a ±40 ms precision. We could quantify the frequency of YG events (0.59 Hz) and the average ± standard deviation of the interval between YG and YR occurrences (0.834 s ± 0.10 s). The results obtained in the SPM analysis of the BOLD responses are depicted in Fig. 4 and Table 1. Slice numbers are in the MNI space and |T|>2.67 (p uncorrected
<0.005). When comparing the areas (de)activated in the block design with the ones in the corrected block design we found some differences: while in the first (Fig. 4 A) clear activations were only found in the postcentral area and SMA on the left and the precentral area on the right, in the corrected block design (Fig. 4 B) we found activity in pre- and postcentral areas but with more complex activation/deactivation patterns (post- vs. precentral area on the left – slice 48) and on the right (precentral area – slices 58 & 68). In the individual analysis of conditions YG and YR we did not obtain clear patterns. YG presented a widespread deactivation pattern along central areas (including SMA) and on both sides in the pre- and postcentral and frontal areas. The YR condition only presented a small deactivation in the precentral area and an activation in the inferior parietal area on the right side. Nevertheless, when comparing YG and YR (condition YG-YR) (Fig. 4 C) we obtained a widespread negative difference along the midline (YG-YR – slice 58). The main differences were in the right inferior parietal area (Fig. 4 C, slice 48) and on both sides in the middle frontal area (Fig. 4, slices 28 & 38).
Fig. 4. BOLD response. Slices are according to MNI reference.
We found activations mainly in the area contralateral to the movement (postcentral and SMA on the left), in agreement with what is found in the literature for finger tapping and opposition tasks [1-4], but we found differences between the BOLD activations and deactivations when considering the original block design and the motion-corrected block design. We found a clear activation/deactivation pattern in areas associated with motor function (post- and precentral areas, including the inferior parietal area - Fig. 4 B – slice 48), suggesting a motion-related time structure in the paradigm that was not visible in the original block design. In relation to the conditions YG and YR, we observed different
BOLD response patterns, supporting once more that both conditions represent different stages of the motor-related brain activity.
Table 1. BOLD responses for conditions block, corrected block, YG and YR. Peak activations and deactivations for |T| > 2.67 (p uncorrected < 0.005). Area denominations according to AAL; x, y, z are MNI positions.
Condition        Side  Area (AAL)            x    y    z  Voxel  t value
block             R    Precentral           40  -16   68   2183    4.98
block             L    Postcentral         -48  -22   54   2155    5.855
block             L    Supp_Motor_Area      -2   -6   54   1619    4.12
block             R    Postcentral          46  -24   40     63    3.47
block             R    Cingulum_Mid         16  -26   40     54    3.56
block             R    Postcentral          28  -32   76     26    2.94
block             R    Precentral           42  -18   52      7    2.73
block             L    Frontal_Sup_Medial   -2   48   28      5    2.74
Corrected block   L    Precentral          -42  -10   42    234    3.77
Corrected block   R    Precentral           20  -24   64    169    3.94
Corrected block   R    Precentral           40  -16   70    130   -4.20
Corrected block   R    Parietal_Inf         46  -36   54     50   -3.60
Corrected block   L    Precuneus            -8  -46   64     41    3.11
Corrected block   L    Parietal_Sup        -16  -62   60     16    2.87
YR                R    Parietal_Inf         50  -50   48     45    3.08
YR                R    Precentral           36  -16   70     31   -3.11
YR                L    Precuneus           -10  -76   50     17    3.12
YR                R    Frontal_Inf_Oper     40   12   30     11    2.97
YR                L    Parietal_Sup        -22  -62   58      9   -2.84
YG                R    Parietal_Inf         56  -48   48  14415   -6.18
YG                L    Postcentral         -48  -16   36    502   -3.94
YG                R    Precentral           40  -16   42   1052   -4.34
YG                L    Parietal_Inf        -42  -44   40     72   -3.25
YG                R    Occipital_Mid        32  -74   36     11   -3.24
YG                L    Parietal_Sup        -36  -52   58    112   -3.37
YG                L    Postcentral         -36  -52   58     10   -2.96
YG                L    Postcentral         -26  -34   68      5   -2.86
4 Discussion and Conclusions
In the present paper, we showed that precise 3D movement trajectories can be extracted from video without any extra setup other than in-bore cameras while acquiring fMRI, and that motion parameters can be used to identify paradigm-related events (e.g. finger opposition of thumb-to-index and thumb-to-little finger) that can support a better comprehension of the motor function.
Firstly, we demonstrated that fine adjustments based on the movement actually performed might alter the results in comparison to assuming total compliance with the original design, usually transmitted by oral communication (Fig. 4 A vs. B). Secondly, we presented results showing that it is possible to discriminate motion-related sub-states in the BOLD analysis in areas typically associated with (de)activation in a traditional movement block design (Fig. 4 C). Furthermore, we showed that, by using movement quantification regressors in the analysis, we are able to describe clear motion-related BOLD responses that are not discernible when using a traditional block design. This can be a valuable feature that may help study brain motor function in situations where it is impossible (or not recommended) to detect in-bore subjects' movements using external devices, especially when it is not possible to control or modulate the motor response. Good examples are neurodegenerative diseases with motor disturbances where the subjects' movements cannot be controlled (e.g. the study of involuntary tremor in Parkinson's disease [8]) and where the use of fMRI can help to further understand the underlying brain processes.
4.1 Video In-Bore Motion Quantification
Other works addressing in-gantry finger motion quantification involved sophisticated solutions using sensors, such as push-button boxes, keyboards or other detection devices [5-7, 14-16], but no solution based only on video was found. Several technical solutions used trigger-based mechanisms to enable the quantification of motion frequency [17], type of motion and duration [18]. In-bore EMG is also used in order to provide control over the actual motion execution, although no quantification of the motor performance is possible [19]. In contrast with the previously referred works, our solution does not require an intrusive setup on the subjects - other than in-bore cameras - and does not constrain the motion under analysis, allowing the quantification of fine finger movements like finger opposition or finger tapping, a result of previous work [10]. We only found one work, from Casellato et al., where video is used to quantify in-bore motion [20]. Casellato et al. use a marker strategy similar to existing solutions outside the scanner [9, 21] to track the motion of the subject. However, their solution has the downside of relying on equipment containing metallic parts, which induces electromagnetic in-bore noise. In their work they successfully characterize the motion of several body parts but, in contrast to our work, they use the motion as a regressor in the BOLD response analysis instead of motion-related events. Their results, as ours, reproduce the expected activations described in the literature [1-4].
Acknowledgments
We would like to thank the volunteer C. Duarte and the ANIFC core staff for their help in the experimental part of this paper. This work was partly supported by FEDER and FCT (Portuguese Science and Technology Agency) grants PTDC/SAU-BEB/72305/2006 and GRID/GRI/81833/2006.
References 1. Witt, S.T., Laird, A.R., Meyerand, M.E.: Functional neuroimaging correlates of fingertapping task variations: an ALE meta-analysis. Neuroimage 42, 343–356 (2008) 2. Guillot, A., Collet, C., Nguyen, V.A., Malouin, F., Richards, C., Doyon, J.: Brain activity during visual versus kinesthetic imagery: an fMRI study. Hum. Brain Mapp. 30, 2157–2172 (2009) 3. Smith, J.F., Chen, K., Johnson, S., Morrone-Strupinsky, J., Reiman, E.M., Nelson, A., Moeller, J.R., Alexander, G.E.: Network analysis of single-subject fMRI during a finger opposition task. Neuroimage 32, 325–332 (2006) 4. Solodkin, A., Hlustik, P., Chen, E.E., Small, S.L.: Fine modulation in network activation during motor execution and motor imagery. Cereb Cortex 14, 1246–1255 (2004) 5. Liu, J.Z., Zhang, L., Yao, B., Yue, G.H.: Accessory hardware for neuromuscular measurements during functional MRI experiments. Magma. 13, 164–171 (2002) 6. De Luca, C., Bertollo, M., Comani, S.: Non-magnetic equipment for the high-resolution quantification of finger kinematics during functional studies of bimanual coordination. J. Neurosci. Methods 192, 173–184 (2010) 7. Schaechter, J.D., Stokes, C., Connell, B.D., Perdue, K., Bonmassar, G.: Finger motion sensors for fMRI motor studies. Neuroimage 31, 1549–1559 (2006) 8. Raethjen, J., Deuschl, G.: Tremor. Curr. Opin. Neurol. 22, 400–405 (2009) 9. Li, Z., Martins da Silva, A., Cunha, J.P.S.: Movement quantification in epileptic seizures: a new approach to video-EEG analysis. IEEE Trans. Biomed. Eng. 49, 565–573 (2002) 10. Fernandes, J.M., Tafula, S., Brandão, S., Bastos Leite, A., Ramos, I., Silva-Cunha, J.P.: Video-EEG-fMRI: Contribution of in-bore Video for the Analysis of Motor Activation Paradigms. In: World Conference on Medical Physics and Biomedical Engineering, pp. 786–789. Springer, Heidelberg (2009) 11. Amaro Jr., E., Williams, S.C., Shergill, S.S., Fu, C.H., MacSweeney, M., Picchioni, M.M., Brammer, M.J., McGuire, P.K.: Acoustic noise and functional magnetic resonance imaging: current strategies and future prospects. J. Magn. Reson. Imaging 16, 497–510 (2002) 12. Statistical Parametric Mapping: The Analysis of Functional Brain Images. Academic Press, London (2007) 13. Tzourio-Mazoyer, N., Landeau, B., Papathanassiou, D., Crivello, F., Etard, O., Delcroix, N., Mazoyer, B., Joliot, M.: Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage 15, 273–289 (2002) 14. Carey, J.R., Kimberley, T.J., Lewis, S.M., Auerbach, E.J., Dorsey, L., Rundquist, P., Ugurbil, K.: Analysis of fMRI and finger tracking training in subjects with chronic stroke. Brain 125, 773–788 (2002) 15. Serrien, D.J.: Functional connectivity patterns during motor behaviour: the impact of past on present activity. Hum. Brain Mapp. 30, 523–531 (2009) 16. Horenstein, C., Lowe, M.J., Koenig, K.A., Phillips, M.D.: Comparison of unilateral and bilateral complex finger tapping-related activation in premotor and primary motor cortex. Hum. Brain Mapp. (2008) 17. Taniwaki, T., Okayama, A., Yoshiura, T., Togao, O., Nakamura, Y., Yamasaki, T., Ogata, K., Shigeto, H., Ohyagi, Y., Kira, J., Tobimatsu, S.: Functional network of the basal ganglia and cerebellar motor loops in vivo: different activation patterns between selfinitiated and externally triggered movements. Neuroimage 31, 745–753 (2006) (Epub. 2006 February 2007)
18. Kansaku, K., Muraki, S., Umeyama, S., Nishimori, Y., Kochiyama, T., Yamane, S., Kitazawa, S.: Cortical activity in multiple motor areas during sequential finger movements: an application of independent component analysis. Neuroimage 28, 669–681 (2005) (Epub. 2005 July 2028) 19. Formaggio, E., Storti, S.F., Avesani, M., Cerini, R., Milanese, F., Gasparini, A., Acler, M., Pozzi Mucelli, R., Fiaschi, A., Manganotti, P.: EEG and FMRI coregistration to investigate the cortical oscillatory activities during finger movement. Brain Topogr. 21, 100–111 (2008) 20. Casellato, C., Ferrante, S., Gandolla, M., Volonterio, N., Ferrigno, G., Baselli, G., Frattini, T., Martegani, A., Molteni, F., Pedrocchi, A.: Simultaneous measurements of kinematics and fMRI: compatibility assessment and case report on recovery evaluation of one stroke patient. J. Neuroeng. Rehabil. 7, 49 (2010) 21. Silva-Cunha, J.P., Fernandes, J.M., Bento, V., Paula, L., Dias, E., Bilgin, C., Noachtar, S.: 2D versus 3D approaches to movement quantification in epileptic seizures: Simulations and real seizures comparative evaluation. In: 9th European Congress on Epileptology, Rhodes, Greece (2010)
Classification-Based Segmentation of the Region of Interest in Chromatographic Images
António V. Sousa1,2, Ana Maria Mendonça1,3, M. Clara Sá-Miranda1,4, and Aurélio Campilho1,3
1 Instituto de Engenharia Biomédica, Universidade do Porto, Rua Roberto Frias, s/n, 4200-465 Porto, Portugal
2 Instituto Superior de Engenharia do Porto, Instituto Politécnico do Porto, Rua Dr. António Bernardino de Almeida 431, 4200-072 Porto, Portugal
[email protected]
3 Faculdade de Engenharia, Universidade do Porto, Rua Roberto Frias, 4200-465 Porto, Portugal
{amendon,campilho}@fe.up.pt
4 Instituto de Biologia Molecular e Celular, Rua do Campo Alegre, 823, 4150-180 Porto, Portugal
Abstract. This paper proposes a classification-based method for automating the segmentation of the region of interest (ROI) in digital images of chromatographic plates. Image segmentation is performed in two phases. In the first phase an unsupervised learning method classifies the image pixels into three classes: frame, ROI or unknown. In the second phase, distance features calculated for the members of the three classes are used for deciding on the new label, ROI or frame, for each individual connected segment previously classified as unknown. The segmentation result is post-processed using a sequence of morphological operators before obtaining the final ROI rectangular area. The proposed methodology, which is the initial step for the development of a screening tool for Fabry disease, was successfully evaluated on a dataset of 58 chromatographic images.
Keywords: Image segmentation, Region of interest delineation, Chromatographic images, Fabry disease.
1 Introduction
Fabry disease (FD) is an X-linked lysosomal storage disorder caused by abnormalities in the GLA gene, which lead to a deficiency in the enzyme α-galactosidase A (α-Gal A) [1]. Due to the abnormal accumulation of glycosphingolipids, several clinical signs and symptoms occur, of which renal failure, cardiomyopathy, acroparesthesias and strokes are the most prominent and debilitating [2]. FD is a rare disorder, with an incidence estimated between 1:40 000 and 1:117 000 [2]. Although the usual onset of the first symptoms is in childhood, by middle age life-threatening complications are often developed in untreated patients. These late complications are the main cause of late morbidity, as well as premature mortality
[3]. The recent availability of enzymatic replacement therapy [2], associated with the progressive nature of the disease, has renewed the interest in this disorder and revealed the need for early diagnosis, only achievable with generalized screening programs. The complete diagnosis of FD is very complex, but its first phase is simply based on the detection of an abnormal quantity of Gb3 in the urine or blood plasma of the patient. The direct measurement of those compounds can be carried out using a micro tandem mass spectrometer (MS/MS), but its use is very expensive. Another, less expensive, approach is the analysis of a patient's urine or blood plasma sample by thin-layer chromatography (TLC) on a silica gel plate, followed by visual inspection of the generated chromatographic pattern. In order to implement a screening tool for FD, we need to develop several procedures for automating the complete image analysis process. One fundamental initial step is the identification of the image region which contains the relevant data resulting from the chromatographic process, usually known as the image region of interest (ROI). This is a fundamental phase, as the frame of the chromatographic plate contains several additional data, such as the identification of patients and the composition/concentration of the standard mixture used as reference for lipid characterization, which are important for the whole process but irrelevant for the image analysis aiming at FD patient classification. In recent years several image processing and pattern recognition techniques have been proposed for automating the analysis of chromatographic images [4-7]. The proposed methods automate the chromatographic profile analysis and identification processes, but most of the solutions start from the image region of interest, without mentioning how it was obtained. An automatic methodology for ROI detection based on maximal responses of differential filters sensitive to vertical and horizontal directions was presented in Sousa et al. [8]. This paper describes a segmentation method for automating the ROI delineation in chromatographic images, which is an improved version of [9]. Image segmentation is performed in two phases, after which each individual pixel is finally labeled as frame or ROI. In the first phase, an unsupervised learning algorithm is used for obtaining a preliminary classification of image pixels into three classes: frame, ROI or unknown. In the second phase, distance features calculated for the members of the three classes are used for deciding on the new label, ROI or frame, for each individual connected segment previously classified as unknown. The segmentation result is then post-processed in order to obtain the final ROI rectangular area. This paper is organized as follows. Section 2 describes the segmentation methodology that was developed for automating the ROI detection. The segmentation results are presented and discussed in Section 3. Finally, Section 4 is dedicated to the conclusions.
2 ROI Segmentation
In a typical image of a chromatographic plate resulting from TLC processing of several biological samples (Fig. 1), the identification of the two basic regions which are the focus of this work is simple: the internal part corresponds to the region of
interest formed by several lanes each one containing a chromatographic pattern associated with an individual sample; the external part, which is the frame of the chromatographic plate, normally containing additional data related to the identification of patients and the composition/concentration of the standard mixture used as a lipid reference.
Fig. 1. Digital image of a chromatographic plate: the image frame, which surrounds the image ROI, is also easily observed
In the proposed approach, image segmentation is performed in two phases. In the first phase, the pixels are classified into three classes, frame, ROI or unknown, using an Expectation-Maximization (EM) clustering algorithm. The unknown pixels usually correspond to image regions with intensity characteristics differing from their neighborhood, namely due to the chromatographic development or the plate digitalization. Afterwards, distance features calculated for the samples of the three classes are used for deciding on the new label, ROI or frame, for each individual connected segment previously classified as unknown. This is a major difference from the algorithm proposed in [9], where all unknown pixels were jointly reassigned to either the ROI or the frame class. An improved post-processing sequence was also defined in order to obtain the final ROI rectangular area.
2.1 Initial Segmentation
Images resulting from the digitalization of chromatographic plates are color images represented in RGB format. As depicted in Fig. 1, the image ROI and the image frame have distinctive color characteristics, and each one of these regions is easily identified by a human observer. Based on this assumption, we have assessed the data associated with image representation in different color spaces, namely RGB, HSV and L*a*b*, aiming at selecting the chromatic features that best discriminate the two image areas. Fig. 2 presents the RGB components of the original image of Fig. 1, showing the different levels of discrimination between frame and ROI background in the three chromatic components. A similar analysis of the individual components of the HSV and L*a*b* representations allowed similar conclusions. In the method proposed in [9], the chromatic data selected as input for the initial classification stage were the B (RGB), S (HSV) and b* (L*a*b*) components. However, in the current proposal the B component was replaced by the minimum of the R, G and B values for each image pixel. This solution improves the initial classification of very low contrast images, as will be demonstrated in Section 3.
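As an illustration of this feature extraction step, the sketch below computes the three chromatic features (min(R,G,B), S and b*), suppresses hand-written annotations with vertical and horizontal linear closings, and projects the features with PCA. It assumes scikit-image and scikit-learn and uses the 21-pixel structuring-element length reported in Section 3; it is a possible implementation, not the authors' code.

```python
import numpy as np
from skimage import color, morphology
from sklearn.decomposition import PCA

def chromatic_features(rgb):
    """Per-pixel features for the initial classification:
    min(R,G,B), saturation (HSV) and b* (L*a*b*)."""
    rgb = rgb.astype(float) / 255.0
    f_min = rgb.min(axis=2)             # replaces the plain B component
    f_sat = color.rgb2hsv(rgb)[..., 1]  # S of HSV
    f_b = color.rgb2lab(rgb)[..., 2]    # b* of L*a*b*
    return np.stack([f_min, f_sat, f_b], axis=-1)

def remove_annotations(features, length=21):
    """Morphological closings with vertical and horizontal linear structuring
    elements to suppress the dark hand-written annotations on the frame."""
    out = np.empty_like(features)
    se_v = np.ones((length, 1), dtype=bool)
    se_h = np.ones((1, length), dtype=bool)
    for k in range(features.shape[-1]):
        out[..., k] = morphology.closing(
            morphology.closing(features[..., k], se_v), se_h)
    return out

def pca_features(features, n_components=2):
    """Project the three (correlated) chromatic features; the first two PCA
    components retain about 98% of the variance."""
    h, w, c = features.shape
    x = features.reshape(-1, c)
    return PCA(n_components=n_components).fit_transform(x).reshape(h, w, -1)
```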
Fig. 2. RGB components of original image of figure 1: a. Red (R) component; b. Green (G) component; c. Blue (B) component
As can be observed in Fig. 1, the image frame is usually used for including hand-written annotations such as patient identification and the composition/concentration of the standard mixture used as reference. These annotations, which are also clearly visible in all RGB components, highly disturb the subsequent steps of the segmentation process and need to be removed. A sequence of two morphological closing operations using linear structuring elements in the vertical and horizontal directions was selected for this purpose. Structuring elements of fixed size were selected, as all images are initially resized to have an identical number of lines. Because the three measured chromatic features (min(R,G,B), S and b*) are not independent, a PCA projection is applied for obtaining the final features for segmentation purposes. Fig. 3 presents both the measured and the projected features. Only the first two PCA components are considered for classification purposes because these two components already retain about 98% of the variance in the data. For implementing the first phase of the segmentation, an EM clustering algorithm was chosen. This approach is more adequate for our data than other solutions, such as K-means, because more flexible cluster shapes are allowed and fewer clusters may be used to approximate the structure in the data [11]. The first two PCA components are used as input for this classification step.
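A minimal sketch of this first classification phase, assuming scikit-learn's GaussianMixture as the EM implementation and the random pixel subsampling mentioned in Section 3. Mapping the three clusters to the frame, ROI and unknown classes is left out here, since that assignment is decided afterwards.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def initial_segmentation(pca_maps, n_train=20000, random_state=0):
    """First phase: EM clustering of the first two PCA components into three
    clusters (frame, ROI, unknown). A random subset of pixels is used to fit
    the mixture, keeping the EM step cheap (cf. Section 3)."""
    h, w, _ = pca_maps.shape
    x = pca_maps.reshape(-1, 2)
    rng = np.random.default_rng(random_state)
    sel = rng.choice(x.shape[0], size=min(n_train, x.shape[0]), replace=False)
    gmm = GaussianMixture(n_components=3, covariance_type='full',
                          random_state=random_state).fit(x[sel])
    labels = gmm.predict(x).reshape(h, w)
    return labels, gmm   # cluster-to-class mapping is decided in a later step
```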
Fig. 3. Measured (left) and projected (right) components: min(R,G,B) (top), Saturation (middle), b* component (bottom), 1st PCA component (top), 2nd PCA component (middle), 3rd PCA component (bottom)
This option for an initial separation of the original image pixels into three classes, instead of the two obvious ones, frame and ROI, was motivated by the fact that some of the images contain artifacts due to chromatographic development or plate digitalization that prevent the correct identification of these two regions based simply on chromatic features. The occurrence of very intense bands inside the ROI also influences the immediate detection of this image area.
2.2 Classification of Unknown Areas
Pixels classified as unknown in the first phase of segmentation usually correspond to regions which are clearly salient from their neighborhood, and appear as image areas mainly embedded into one of the other two classes. Based on the results of the initial segmentation, an obvious conclusion is that chromatic information is no longer discriminative for the pixels belonging to the unknown class. On the other hand, because most of these areas are in fact spatially surrounded by correctly classified pixels, a spatial criterion based on proximity is exploited. The main idea underlying the refinement of the classification of unknown image areas is that each individual connected segment of the unknown class, hereafter designated as
region Uk, should be integrated into the class i, ROI or frame, which minimizes the normalized distance measure expressed by equation (1),
(di − dUk) / sqrt(si^2 + sUk^2)    (1)
where dUk and sUk are respectively the average and the standard deviation of the distance of region Uk to the image boundary, and di and si are the equivalent values for class i. The result of the initial segmentation into three classes is presented in Fig. 4a, while Fig. 4b depicts the unknown region reassignment process just described.
2.3 Final ROI Delineation
After the second stage of segmentation, some small image areas can still remain misclassified, mainly near the image border. These can be observed in the images of Fig. 4, which show that in the second stage of segmentation most of the unknown pixels were included in the ROI, but some small areas near the image borders were integrated into the frame. However, the frame is not fully closed and, due to a careless acquisition procedure, the original image contains some border areas that do not belong to the chromatographic plate, which were incorrectly classified as ROI due to their chromatic characteristics. In order to overcome the aforementioned problems, a post-processing operation consisting of a morphological opening is applied, aiming at reinforcing the frame area connectivity. The final ROI coordinates are obtained from the bounding box that includes all connected ROI segments whose area is greater than a reference value calculated based on the largest region. This is another major difference between the proposed algorithm and the one described in [9].
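The following sketch illustrates the reassignment rule of Eq. (1) and the final delineation step under a few assumptions: the normalized distance is taken in absolute value, and the area threshold for keeping ROI segments is expressed as a fraction of the largest segment (the exact reference value is not specified above).

```python
import numpy as np
from scipy import ndimage

def reassign_unknown(labels, frame_lbl=0, roi_lbl=1, unk_lbl=2):
    """Assign each connected 'unknown' segment to frame or ROI by minimizing
    the normalized distance of Eq. (1), based on distances to the image border."""
    h, w = labels.shape
    yy, xx = np.mgrid[0:h, 0:w]
    border_dist = np.minimum.reduce([yy, xx, h - 1 - yy, w - 1 - xx])

    stats = {}
    for lbl in (frame_lbl, roi_lbl):
        d = border_dist[labels == lbl]
        stats[lbl] = (d.mean(), d.std())

    out = labels.copy()
    segments, n_seg = ndimage.label(labels == unk_lbl)
    for k in range(1, n_seg + 1):
        d = border_dist[segments == k]
        d_u, s_u = d.mean(), d.std()
        # normalized distance of Eq. (1), taken here as a magnitude
        dist = {lbl: abs(m - d_u) / np.sqrt(s ** 2 + s_u ** 2 + 1e-12)
                for lbl, (m, s) in stats.items()}
        out[segments == k] = min(dist, key=dist.get)
    return out

def roi_bounding_box(labels, roi_lbl=1, area_ratio=0.1):
    """Final ROI limits: bounding box of all connected ROI segments whose area
    exceeds a fraction of the largest one (the ratio here is an assumption)."""
    segments, n_seg = ndimage.label(labels == roi_lbl)
    if n_seg == 0:
        return None
    areas = np.bincount(segments.ravel())[1:]
    keep = np.isin(segments, 1 + np.flatnonzero(areas >= area_ratio * areas.max()))
    rows, cols = np.nonzero(keep)
    return rows.min(), rows.max(), cols.min(), cols.max()
```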
3 Results
The proposed ROI delineation methodology was evaluated using a dataset formed by 58 images, with distinct resolutions and dimensions ranging from 569×625 to 2530×4927 pixels. Before applying the proposed methodology, all images are rescaled to a common height of 512 pixels. Linear structuring elements of 21 pixels were used in the initial pre-processing step for removing hand-written annotations. With the aim of reducing the computational burden of the EM clustering step, only a set of randomly selected pixels is used during the learning phase. We have also assessed the use of K-means as an alternative to EM clustering in the initial segmentation phase. Besides an increase in execution time, K-means did not allow a correct final ROI delineation for all images in the dataset.
Fig. 4. Segmentation results: a. EM clustering (Frame-dark gray, Unknown-light gray, ROI-white); b. Final segmentation; c. Post-processing result; d. Final ROI limits superimposed on the original image
Figure 5 shows the intermediate and final results of the proposed algorithm for two images of the dataset. In the example depicted in the left column of this figure, members of the unknown class are distributed all over the image, but after the classification step these segments are reassigned to their correct classes. In the example of the right column, the unknown class is formed by a single region, afterwards classified as ROI. Despite this association, the ROI areas are still disconnected, but the new approach used for calculating the final limits is able to cope with this fact. Figure 6 allows a comparison between the results of the algorithm described in [9] and the new version proposed in this paper for one image of the dataset. The images presented in the left column were generated by the algorithm in [9], while those shown in the right column were obtained using the new version. The inclusion of a new feature combining the three chromatic components of the RGB representation, min(R,G,B), instead of the use of the blue (B) component alone, clearly improved the result of the initial segmentation (Fig. 6a), allowing a significant reduction of the number of connected segments classified as unknown. For these segments, the individual assignment procedure included in the new version led to better intermediate results (Figs. 6b and 6c), and finally to a successful delineation of the image region of interest (Fig. 6d). The method described in this paper was able to correctly delineate the ROI for all 58 images in the dataset, even in the presence of the disturbing artifacts resulting from incorrect acquisition procedures or derived from the chromatographic development process that prevented the original algorithm [9] from delineating the correct ROI in 4 images.
Fig. 5. ROI segmentation results: a. EM clustering (Frame-dark gray, Unknown-light gray, ROI-white); b. Final segmentation; c. Result of post-processing; d. Final ROI limits superimposed on the original image
Fig. 6. ROI delineation results using the proposed method (right column) and the method described in [9] (left column): a. EM clustering (Frame-dark gray, Unknown-light gray, ROI-white); b. Final segmentation; c. Result of post-processing; d. Final ROI limits superimposed on the original image.
4 Conclusions
We have described a method for automating the segmentation of the region of interest in digital images of chromatographic plates based on a two-level classification approach. The method is an improved version of the algorithm recently presented in [9]. Image segmentation is performed in two phases. In the first phase an unsupervised learning method is used for classifying image pixels into three classes: frame, ROI or unknown. The selection of a different set of features allowed a significant decrease in
the number and distribution of image areas classified as unknown after this initial classification task. In the second phase, distance features calculated for the members of the three classes are used for deciding on the new label, ROI or frame, for each individual connected segment previously classified as unknown. This is also a major difference from the algorithm in [9], where the complete set of unknown pixels was reclassified either as ROI or frame. A post-processing sequence is applied before calculating the final ROI coordinates. The new implementation of this last ROI delineation step, which considers all remaining ROI segments instead of only the largest one, is another major enhancement included in the method described in this paper. The methodology was successfully evaluated using a dataset of 58 digital images of chromatographic plates. For each one of these images a ROI was obtained containing all the relevant chromatographic information. The algorithm for the automatic delineation of the region of interest in digital images of chromatographic plates is a fundamental initial step in the development of an integrated image analysis system aiming at implementing a screening tool for Fabry disease.
Acknowledgements The authors would like to thank the Portuguese Foundation for Science and Technology (FCT) (Ref. FCT PTDC /SAU-BEB/100875/2008) and FEDER (Ref. FCOMP-01-0124-FEDER-010913) for co-funding this research project.
References 1. Zarate, Y., Hopkin, R.: Fabry’s Disease. The Lancet. 372(9647), 1427–1435 (2008) 2. Linthorst, G.E., Vedder, A., Aerts, J.M., Hollak, C.E.: Screening for Fabry Disease Using Whole Blood Spots Fails to Identify One-third of Female Carriers. Clinica Chimica Acta. 353(1-2), 201–203 (2005) 3. Eng, C.M., Germain, D.P., Banikazemi, M., Warnock, D.G., Wanner, C., Hopkin, R.J., Bultas, J., Lee, P., Sims, K., Brodie, S.E., Pastores, G.M., Strotmann, J.M., Wilcox, W.R.: Fabry Disease: Guidelines for the Evaluation and Management of Multi-organ System Involvement. Genetics in Medicine 8(9), 539–548 (2006) 4. Gerasimov, A.V.: Use of the Software Processing of Scanned Chromatogram Images in Quantitative Planar Chromatography. J. of Anal. Chem. 59(4), 348–353 (2004) 5. Bajla, I., Hollander, I., Fluch, S., Burg, K., Kollar, M.: An Alternative Method for Electrophoretic Gel Image Analysis in the Gelmaster Software. Comput. Methods Programs Biomed. 77(3), 209–231 (2005) 6. Sousa, A.V., Mendonça, A.M., Campilho, A.: Chromatographic Pattern Classification. IEEE Trans. on Biomedical Engineering 55(6), 1687–1696 (2008) 7. Bajla, I., Rublík, F., Arendacká, B., Farkaš, I., Hornišová, K., Štolc, S., Witkovský, V.: Segmentation and Supervised Classification of Image Objects in Epo Doping-control. Machine Vision and Applications 20(4), 243–259 (2009)
8. Sousa, A.V., Aguiar, R., Mendonça, A.M., Campilho, A.: Automatic Lane and Band Detection in Images of Thin Layer Chromatography. In: Campilho, A.C., Kamel, M.S. (eds.) ICIAR 2004. LNCS, vol. 3212, pp. 158–165. Springer, Heidelberg (2004) 9. Mendonça, A.M., Sousa, A.V., Sá-Miranda, M.C., Campilho, A.: Automatic segmentation of chromatographic images for region of interest delineation. SPIE Medical Imaging (2011) 10. Dempster, A., Laird, N., Rubin, D.: Maximum Likelihood from Incomplete Data via the EM Algorithm. J. of the Royal Statistical Society. Series B 39(1), 1–38 (1997) 11. Heijden, F., Robert, P.W.D., Ridder, D., Tax, D.M.J.: Classification, Parameter Estimation and State Estimation. John Wiley & Sons, Chichester (2004)
A Novel and Efficient Feedback Method for Pupil and Iris Localization
Muhammad Talal Ibrahim1, Tariq Mehmood2, M. Aurangzeb Khan2, and Ling Guan1
1 Ryerson Multimedia Research Lab, Ryerson University, Toronto, Canada
2 Dept. of Electrical Engineering, COMSATS Institute of Information Technology, Islamabad, Pakistan
[email protected], [email protected], [email protected], [email protected]
Abstract. This paper presents a novel method for automatic pupil and iris localization. The proposed algorithm is based on an automatic adaptive thresholding method that iteratively looks for the region that has the highest chance of enclosing the pupil. Once the pupil is localized, the next step is to find the boundary of the iris based on the first derivative of each row of the area within the pupil. We have tested our proposed algorithm on two public databases, namely CASIA v1.0 and MMU v1.0, and the experimental results show that the proposed method has satisfactory performance and good robustness against reflections in the pupil.
1 Introduction
Due to the uniqueness of iris patterns, they are considered the most reliable physiological characteristic of humans and thus the most suitable for security purposes [1]. For a reliable iris identification system, the iris should be segmented properly from an eye image. Iris segmentation deals with the isolation of the iris from other parts of the eye such as the pupil, sclera, surrounding skin, reflections, eyelashes and eyebrows. It has been observed that the most computationally intensive task in iris recognition, especially in iris segmentation, is iris localization. Iris localization means exactly determining the inner and the outer boundaries of the iris. During the past few years, extensive research has been carried out to accurately localize the iris in an image. Generally, the methods proposed for iris localization can be divided into two major categories. The first category is based on circular edge detectors like the circular Hough transform [2,3,4], and the second category is based on histograms [5,6]. Circular Hough transform based methods try to deduce the radius and center coordinates of the pupil and iris regions, but the problems with the Hough transform are thresholding and the intensive computations due to its brute-force approach. The integro-differential operator [7] can be seen as a variation of the Hough transform that overcomes the thresholding problems of the Hough transform, as it works with raw derivative information, but it fails in the case of noisy images (such as reflections in the pupil).
In histogram-based methods, the pupil is considered the darkest region in an eye image and thresholding is used for locating the pupil. Xue Bai et al. [8] use a histogram-based method for the extraction of the pupil. It is an effective method to a certain extent, but if the gray level of eyelashes, eyebrows or some other part of the image is lower than that of the pupil, it is not able to detect the exact boundary of the pupil. Xu Guan-zhu et al. [9] detected the pupil region by first dividing the eye image into small rectangular blocks of fixed size and then finding the average intensity value of each block. The minimum average intensity value was selected as a threshold to find the pupil region. Finally, the iris boundaries were detected in the predefined regions. Z. He et al. [5] also presented a two-stage algorithm. In the first stage, specular reflections were eliminated and an initial approximation of the iris center was obtained by using a cascaded Adaboost method. Later, the points of the iris boundary were detected by using a pulling and pushing model based on Hooke's law. In this paper, a new feedback method for pupil and iris localization is proposed. The method first locates the pupil on the basis of adaptive thresholding, which iteratively seeks the region that has the highest probability of enclosing the pupil. Once the pupil is localized, the boundary of the iris is extracted based on the peaks of the first derivative of the image intensity. For the experimental results, the proposed method has been applied to CASIA v1.0 [10] and MMU v1.0 [11]. The remainder of the paper is organized in the following manner: Section 2 covers our proposed method, Section 3 covers the details of the experimental results and, finally, conclusions are provided in the last section.
2 Proposed Method
The proposed method comprises two stages. In the first stage, the pupil is localized in the given image; in the second stage, the iris is localized based on the coordinates of the pupil.
2.1 Pupil Localization
The proposed method for pupil localization is basically a feedback system that works in an iterative manner. The details of the proposed algorithm are given below:
1. For the very first iteration, apply histogram-based contrast stretching in order to make the dynamic range of the input image span 0 to 255. We have applied the following equation for contrast stretching [12]:
In = (In − min) / (max − min) * 255    (1)
where In is the image of the nth iteration, and min and max are the minimum and maximum grey levels of In , respectively. For the first iteration, I1 will be the original eye image.
2. Find the minimum and the maximum grey levels in the image and name them minn and maxn, respectively.
3. Now calculate the frequency of each grey level in the image and look for the grey level with the maximum frequency in the lower range lrn, i.e., from minn to ceil((minn + maxn)/2), and name it lrleveln. Then look for the grey level with the maximum frequency in the upper range upn, i.e., from ceil((minn + maxn)/2) + 1 to maxn, and name it upleveln.
4. Now find the coordinates of the pixels that have lrleveln and the coordinates of the pixels that have upleveln, as given in the following equation:
[xlrn, ylrn] = find(In == lrleveln), [xupn, yupn] = find(In == upleveln)    (2)
Then take the standard deviations of xlrn, ylrn, xupn and yupn separately and compute the mean standard deviation for lrn and for upn independently, as given below:
sdlrn = (std(xlrn) + std(ylrn)) / 2, sdupn = (std(xupn) + std(yupn)) / 2    (3)
5. Finally, the minimum of sdlrn and sdupn is computed, and the range with the minimum average value is selected as the range that has more chances of containing the pupil. The minimum and maximum grey levels of that range then become minn and maxn. If range lrn has the minimum average, the threshold value is selected as lrleveln; otherwise upleveln is selected as the threshold.
6. Repeat steps 3 to 5 for the nth iteration until the value of the threshold stops changing. To be on the safer side, we add and subtract a small factor ε from this threshold value Tn to give a small range which we consider to correspond to the grey levels of the pupil, as given below. As we are dealing with digital images, the value of ε should be an integer; for the experimental results, we have chosen this value as 5.
indn = find(In >= Tn − ε & In <= Tn + ε)    (4)
where indn are the indices corresponding to the grey levels of the pupil in In, as shown in Fig. 1 (b). Indices corresponding to isolated points in Fig. 1 (b) are removed by some morphological operations, and the remaining horizontal and vertical indices are named xn, yn, as shown in Fig. 1 (c). These indices help us extract the radius and center of the pupil, as shown in the equations below:
rxn = round((max(xn) − min(xn))/2); ryn = round((max(yn) − min(yn))/2)
cxn = round((max(xn) + min(xn))/2); cyn = round((max(yn) + min(yn))/2)    (5)
The maximum of rxn and ryn gives us the radius of the pupil 'Rpupiln' (ideally rxn and ryn should be equal to each other), and cxn and cyn give us the center
Fig. 1. a) Human eye, b) Localized pupil before morphological operations, c) Localized pupil after morphological operations and d) Outer boundary of pupil superimposed on (a)
of the pupil. Now, a circle of radius 'Rpupiln' centered at (cxn, cyn) represents the outer boundary of the pupil, as shown in Fig. 1 (d). We then repeat steps 1 to 6, giving the segmented pupil 'pupiln' as feedback to our method: the pupiln of the nth iteration becomes the input image for the next iteration. From the first iteration we have pupil1, which gives us (cx1, cy1), representing the location of pupil1. In the next iteration, if the difference between cxn−1, cxn and cyn−1, cyn is not equal to zero, it means that the location of the pupil has moved. We apply the whole procedure again on pupiln until this difference is zero, which gives us the exact location and radius of the pupil, as shown in Fig. 2. We name the final location and radius of the pupil (Crow, Ccol) and Rpupil.
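A condensed Python sketch of the adaptive thresholding described in steps 1-6 is given below; it covers a single pass over one image, while the outer feedback loop that re-runs the procedure on the segmented pupil until the centre stops moving is omitted for brevity. The morphological clean-up and the value ε = 5 follow the text; everything else is an illustrative implementation choice.

```python
import numpy as np
from scipy import ndimage

def pupil_mask(img, eps=5, max_iter=64):
    """Adaptive threshold of Sect. 2.1: iteratively narrow the grey-level range
    towards the half whose most frequent level is spatially most compact."""
    img = img.astype(float)
    img = ((img - img.min()) / max(img.max() - img.min(), 1) * 255).astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    lo, hi = int(img.min()), int(img.max())
    t = int(np.argmax(hist))                   # initial guess for the threshold
    for _ in range(max_iter):
        mid = int(np.ceil((lo + hi) / 2))
        if mid + 1 > hi:
            break
        levels = (lo + int(np.argmax(hist[lo:mid + 1])),          # lrlevel_n
                  mid + 1 + int(np.argmax(hist[mid + 1:hi + 1]))) # uplevel_n
        spread = []
        for level in levels:
            ys, xs = np.nonzero(img == level)
            spread.append(np.inf if ys.size == 0 else (ys.std() + xs.std()) / 2)
        if spread[0] <= spread[1]:             # lower half is more compact
            new_t, hi = levels[0], mid
        else:                                  # upper half is more compact
            new_t, lo = levels[1], mid + 1
        if new_t == t:                         # threshold stopped changing
            break
        t = new_t
    mask = (img >= t - eps) & (img <= t + eps)
    return ndimage.binary_opening(mask, iterations=2)   # drop isolated points

def pupil_circle(mask):
    """Radius and centre of the pupil from the mask extremes (Eq. (5))."""
    ys, xs = np.nonzero(mask)
    r = max((ys.max() - ys.min()) // 2, (xs.max() - xs.min()) // 2)
    return r, ((ys.max() + ys.min()) // 2, (xs.max() + xs.min()) // 2)
```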
2.2 Iris Localization
The center of the iris lies within the pupil, but these two centers are not necessarily coincident [7]. So, we have limited our search space by selecting a circular region whose radius is 3 times the radius of the pupil, and we have neglected all the portion of the eye image beyond that circular region, as shown in Fig. 3 (a). Now, we look for the center of the iris within the pupil. The search starts by applying a 1D low-pass filter to the rows within the area of the pupil and then taking the first derivative of each row independently. It is easier to find the boundary of the iris along rows, because there are cases in which the eye is partially
Fig. 2. a) Human eye, b) Pupil after 1st iteration, c) Pupil after 2nd iteration, d) Pupil after 3rd iteration and e) Final Localized Pupil
opened, so it is not possible to locate the exact boundary of the iris by taking the derivative in the vertical direction. Let us explain the proposed algorithm for only one row; it is the same for the rest of the rows that belong to the area within the pupil. Let this row be row1, as shown in Fig. 3 (b). Before taking the first derivative, we apply the 1D low-pass filter in order to remove spurious points caused by noise. It is clear from Fig. 3 (c) that when our first-derivative filter enters from the black region into the grey-level region we will have a positive response, as shown in Fig. 3 (d), and as soon as it moves towards the boundary of the iris there will be a negative change in the grey level, i.e. from high to low, as shown in Fig. 3 (d). We should also keep in mind that such peaks can be caused by eyelashes. To overcome this problem, we apply a 1D accumulator of length 7 to the result of the first derivative, so that the effect of the small peaks caused by the eyelashes is minimized, as shown in Fig. 3 (e). The maximum negative peak from the left will be the boundary of the iris, represented by a red circle, and we name the location of this negative peak p1. The same thing happens when the filter leaves the boundary of the iris: we will have a maximum positive response, as shown in Fig. 3 (f), represented by a green circle, and we name the location of this positive peak p2. Now, we calculate the difference of the locations of the detected peaks p1, p2 from the central column of the pupil 'Ccol', as given in the equation below:
(6)
After testing our algorithm on 1206 images of two databases, we have set the value of D to be 20. If the value of D is greater than 20, then it means that the one of the
Fig. 3. a) Segmented Eye, b) Row within the pupil, c) Result of applying 1D low pass filter on (b), d) Result of applying first derivative on (c), e) Result of applying 1D accumulator of length 7 on (d) and f) Detected points of the boundary of iris represented by circles
extracted peak is because of a very strong eyelash, so we will move to the next row until this D is less than 20. The peaks p1, p2 in a row for which the value of D is less than 20 will give us the Ccoliris by just taking the mean of p1 and p2. As we know, that the number of pixels between the boundary of circle are maximum at the row that carries diameter of the circle. So we will apply the above mentioned algorithm on the rows within the area of the pupil and the row that will give the maximum number of pixels between two peaks satisfying the condition that we have put on the value of D will give us the diameter of the iris and its radius ‘Riris ’ will be the location of the iris represented by ‘Crowiris ’. Now, we will draw a circle of radius Riris at point (Crowiris , Ccoliris ) as shown in Fig. 4.
3 Experimental Results
In order to check the authenticity of our proposed method, we have tested it on the two most widely used public databases, namely the CASIA v1.0 database [10] and the MMU v1.0 database [11]. We have used the accuracy rate [13,6] as a measure to evaluate the performance of our proposed method. The accuracy rate (ACrate) is defined as follows:
ACrate = (Nsuccess / Ntotal) × 100    (7)
Fig. 4. Results of our proposed method on some of the randomly selected images from CASIA v1.0 database.
where Nsuccess is the total number of eye images in which the iris has been successfully localized and Ntotal is the total number of images in the database. The details of the experimental results are presented in the following sections.
3.1 Experiments Set-1
In our first set of experiments, the results were collected on the CASIA v1.0 iris database. It contains 756 eye images of 108 different subjects with a resolution of 320 × 280 pixels. Each subject contributed seven images in two different sessions: in the first session, three images were acquired from each subject, while in the second session four images were acquired. The proposed method was tested on the whole database with an accuracy rate of 99.47%. Table 1 shows the comparison of our proposed method with some of the existing methods on the CASIA v1.0 iris database. Fig. 4 shows the results of our proposed method on some of the randomly selected images from the CASIA v1.0 database.
3.2 Experiments Set-2
In our second set of experiments, the results are collected on MMU v1.0 database. It consists of 450 eye images with a resolution of 320 × 240 pixels from 45 subjects. The proposed method is tested on the whole database with an accuracy rate of 99.11%. Table 2 shows the accuracy rate of our proposed iris localization method. Fig. 5 shows the results on some of the randomly selected images from the MMU v1.0 database.
Fig. 5. Results of our proposed method on some of the randomly selected images from MMU v1.0 database

Table 1. Comparison of Accuracy Rates on CASIA v1.0 database

Masek [14]               82.54%
Alvaro et al. [15]       94.84%
Mateo [16]               95.00%
Xu et al. [17]           98.42%
Daugman [18]             98.60%
Cui [19]                 99.34%
Weiqi Yuan et al. [20]   99.45%
Our Proposed             99.47%

Table 2. Comparison of Accuracy Rates on MMU v1.0 database

Masek [14] as reported in [13]         83.92%
Daugman [21] as reported in [13]       85.64%
Ma et al. [22] as reported in [13]     91.02%
Daugman New [23] as reported in [13]   98.23%
Somnath et al. [13]                    98.41%
Our Proposed                           99.11%
4 Conclusions
In this paper, a novel feedback method for pupil and iris localization has been proposed. It is clear from the results that the performance of our proposed method is not affected by reflections in the pupil and that it is still capable of localizing the iris in the eye image. Taking the first derivative of the horizontal
rows within the pupil, along with the difference condition on D, has overcome the difficulty faced when extracting the boundary of the iris in the case of a partially opened eye image. Judging by the accuracy rates, we can say that our proposed method outperforms many existing methods for the localization of the iris in digital eye images.
References 1. Ross, A.: Iris Recognition: The Path Forward. IEEE Computer 43(2), 30–35 (2010) 2. Wildes, R., Asmuth, J., Green, G., Hsu, S., Kolczynski, R., Matey, J., McBride, S.: A system for automated iris recognition. In: Proceedings IEEE Workshop on Applications of Computer Vision, Sarasota, FL, pp. 121–128 (1994) 3. Tisse, C., Martin, L., Torres, L., Robert, M.: Person identification technique using human iris recognition. In: International Conference on Vision Interface, Canada, vol. 2, pp. 249–299 (2002) 4. Ma, L., Wang, Y., Tan, T.: Iris recognition using circular symmetric filters. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences 2 (2002) 5. He, Z., Tan, T., Sun, Z., Qiu, X.: Toward accurate and fast iris segmentation for iris biometrics. IEEE Transactions on Pattern Analysis and Machine Intelligence 31(9) (2009) 6. Dey, S., Samanta, D.: Fast and accurate personal identification based on iris biometric. International Journal of Biometrics 2(3), 250–281 (2010) 7. Daugman, J.: Biometric personal identification system based on iris analysis. Patent, Patent Number: 5,291,560 (1994) 8. Bai, X., Wenyao, L., et al.: Research on Iris. Image Processing Algorithm. Journal of Optoelectronics-Laser 14, 741–744 (2003) 9. Guang-zhu, X., Zai-feng, Z., Yi-de, M.: A novel and efficient method for iris automatic location. Journal of China University of Mining and Technology 17, 441–446 (2007) 10. Specifications of casia iris image database(ver. 1.0), Chineese Academy of Sciences (March 2007), http://www.nlpr.ia.ac.cn/english/irds/irisdatabase.htm 11. Multimedia university iris image database (2007), http://www.persona.mmu.edu.my.ccteo/ 12. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 3rd edn. Prentice-Hall, Englewood Cliffs (2008) 13. Dey, S., Samanta, D.: A novel approach to iris localization for iris biometric processing. International Journal of Biological, Biomedical and Medical Sciences 3, 180–191 (2008) 14. Masek, L., Kovesi, P.: Matlab source code for a biometric identification system based on iris patterns. the school of computer science and software engineering, the university of western australia (2003) 15. A.M., Zuniga, G.: A fast and robust approach for iris segmentation. In: Symposium II Peruvian Computer Graphics and Image Processing (SCGI 2008), pp. 1–10 (December 2008) ´ G´ 16. Otero-Mateo, N., Vega-Rodr´ıguez, M.A., omez-Pulido, J.A., S´ anchez-P´erez, J.M.: A fast and robust iris segmentation method. In: Mart´ı, J., Bened´ı, J.M., Mendon¸ca, A.M., Serrat, J. (eds.) IbPRIA 2007. LNCS, vol. 4478, pp. 162–169. Springer, Heidelberg (2007)
17. zhu Xu, G., feng Zhang, Z., de Ma, Y.: A novel and efficient method for iris automatic location. Journal of China University of Mining and Technology 17, 441–446 (2007) 18. Daugman, J.G.: High confidence visual recognition of person by a test of statistical independence. IEEE Trans. on Pattern Analysis and Machine Intelligence 15, 1148– 1161 (1993) 19. Cui, J., Wang, Y., Tan, T., Ma, L., Sun, Z.: A fast and robust iris localization method based on texture segmentation. In: Proceedings of the SPIE, vol. 5404, pp. 401–408 (2004) 20. Yuan, W., Lin, Z., Xu, L.: A rapid iris location method based on the structure of human eyes. Engineering in Medicine and Biology Society, 3020–3023 (2005) 21. Daugman, J.G.: How iris recognition works. IEEE Trans. on Circuits and Systems for Video Technology 14(1), 21–30 (2004) 22. Ma, L., Tan, T., Wang, Y., Zhang, D.: Local intensity variation analysis for iris recognition. Pattern Recognition 37, 1284–1298 (2004) 23. Daugman, J.: New methods in iris recognition. IEEE Trans. on Systems, Man and Cybernetics. Part B: Cybernatics 37, 1167–1175 (2007)
Fusion of Multiple Candidate Orientations in Fingerprints
En Zhu1,2, Edwin Hancock2, Jianping Yin1, Jianming Zhang3, and Huiyao An4
1 School of Computer Science, National University of Defense Technology, Changsha 410073, China
2 Department of Computer Science, University of York, Heslington, York, YO10 5DD, UK
3 College of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410114, China
4 School of Electronic Engineering and Computer Science, Peking University, Beijing 100871, China
[email protected]
Abstract. This paper addresses the problem of local ridge orientation estimation in fingerprint images. The proposed method computes multiple candidate orientations for each foreground block. A systematic method, consisting of orientation voting and orientation propagation, for selecting the orientation out of the multiple candidates is designed according to the smooth variation of local ridge orientations. Experiments show that the proposed methods are robust for poor-quality images and that the overall matching performance is improved.
Keywords: fingerprint recognition, local ridge orientation, fingerprint segmentation, multiple candidate orientations.
1 Introduction
The feature extraction of a fingerprint image typically goes through segmentation, ridge orientation estimation, enhancement, ridge skeletonization, and minutiae detection. Local ridge orientation estimation is considered the most important step for a recognition system, because enhancement, minutiae detection, singular point detection, feature matching and sometimes segmentation all rely on the ridge orientations. This paper proposes an orientation estimation method that extracts and fuses multiple candidate local ridge orientations, and it is robust for poor-quality images. There are a variety of methods for estimating local ridge orientation. The most popular is the gradient-based method [1-5]. It computes the gradient vector at each pixel, and the local ridge orientation is estimated by averaging the gradient vector directions of the pixels in the local block. The gradient-based method may produce a wrong ridge orientation in poor-quality regions. In order to regulate wrong local ridge orientations, Hong [1] filters the orientation field using a low-pass filter. Wrong orientations will be regulated to correct orientations if they are surrounded by correct orientations. However, if wrong orientations dominate a local region, it is rather difficult to correctly regulate them. In order to regulate wrong orientations, Zhu and Yin
et al. [6] train a network to classify the local ridge orientation estimated by the gradient-based method into two classes, namely incorrect and correct. The incorrect orientations are then regulated in terms of their neighboring correct orientations. The image is also segmented in the context of orientation estimation and regulation. Some works [7-10] use the gray-value variance of a local slit to estimate the local ridge orientation. The slit parallel to the local ridges has relatively small gray-value variance, and the slit orthogonal to the local ridges has relatively large gray-value variance. Another observation is that the projection along the ridge orientation has the largest variance. The methods based on these observations are known as slit/projection variance-based methods. Other local ridge orientation estimation methods include filter response-based methods, both in the frequency domain [11] and in the space domain [12,13], the network response-based method [14], and the short-time Fourier transform-based method [15]. The above-mentioned methods estimate and regulate the local orientations. A quite different kind of method for estimating the orientation field is to establish a global orientation model using the singular points. Sherlock and Monro [16] model the orientation field using the positions of the core and delta points. This model generates the same orientation field for images with the same core and delta point arrangement, which may well have different orientation field patterns. This model was improved by Vizcaya [17] and Araque [18] to overcome its weakness. Zhou and Gu [19] model the orientation field by estimating initial local orientations and imposing a Point-Charge Model to cope with the discontinuity near the singular points. A review of global orientation modeling can be found in [20]. The main weakness of global orientation model-based methods is that they depend on the positions of the singular points, which themselves depend on the estimation of the orientation field.
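For reference, the classical gradient-based estimate that these works build on can be sketched as follows (squared-gradient averaging per block); this is the standard formulation from the literature [1-5], not code from any of the cited papers.

```python
import numpy as np
from scipy import ndimage

def block_orientations(img, block=16):
    """Gradient-based estimate: average the squared gradient vectors in each
    block; the ridge orientation is orthogonal to the mean gradient direction.
    Angles are returned in [0, pi)."""
    img = img.astype(float)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    rows, cols = img.shape[0] // block, img.shape[1] // block
    theta = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            sl = np.s_[i * block:(i + 1) * block, j * block:(j + 1) * block]
            gxx = np.sum(gx[sl] ** 2 - gy[sl] ** 2)
            gxy = np.sum(2 * gx[sl] * gy[sl])
            # mean gradient direction is 0.5*atan2(gxy, gxx); ridges are orthogonal
            theta[i, j] = (np.arctan2(gxy, gxx) / 2 + np.pi / 2) % np.pi
    return theta
```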
Fig. 1. A portion of a poor quality image (a) and its orientation field (b) estimated by the gradient-based method. The marked block in (a) has a vertical crease, and its orientation is estimated to be the vertical orientation. It can be easily observed that other blocks with creases also have their orientations estimated as the crease orientation.
The challenge of orientation estimation is posed by poor quality images, including those with scratches, creases, and wet or dry regions. It is difficult to correctly estimate the ridge orientation in the poor quality region of an image. If large areas of wrong orientations are estimated, it is difficult to correctly regulate them, and the singular points are also difficult to locate correctly. For a local block with creases, the gradient-based method, projection-based method, and response-based method may well estimate the ridge orientation as the crease orientation. An example is shown in Fig. 1. The existing
methods of local orientation estimation compute a single orientation for each local image block. The gradient-based method computes the local orientation by averaging the squared gradient vectors. The projection-based method finds the orientation with the largest projection variance. The response-based method takes the orientation related to the largest response value as the local orientation of an image block. In fact, the mean squared gradient vector, the projection with the largest variance and the largest response value do not necessarily correspond to the real local orientation, especially for poor quality images. We observed that the real local orientations in poor quality regions are often related to the second or even third largest response value or projection variance. Therefore, our idea in this paper is to take more than one local orientation into consideration, each corresponding to one of the largest response values (or projection variances). For fingerprints, three local orientations are considered. The orientations taken into consideration are regarded as the multiple candidate orientations (MCO) of the local block. One of the candidate orientations is expected to be the true local ridge orientation. A systematic method for selecting the true orientation among the candidate orientations is proposed, and the experiments show it to be effective for poor quality fingerprint images. Section 2 describes the methods for fusing multiple candidate orientations, including orientation voting and propagation. Section 3 describes a real application of the proposed method to fingerprint images. A fingerprint segmentation method is also proposed in this section. Section 4 presents the experimental comparison between the proposed method and related works. Section 5 concludes the paper. The experiments in Section 4 show that the matching performance is improved by using the proposed method.
2 Orientation Voting and Propagation

The image is divided into non-overlapping blocks, and w(i,j) denotes the block at the ith row and jth column. O(i,j) denotes the orientation of w(i,j), and f(i,j) indicates whether w(i,j) is a foreground block (f(i,j)=1) or a background block (f(i,j)=0). The orientation of a local block is regularized to lie in the range [0, π), since θ and θ+π are the same orientation.

Multiple Candidate Orientations

Conventionally, the frequency domain image of a local image block is used to compute the local orientation by locating the highest symmetric peak pair in the frequency image. An example is shown in Fig. 2. A pair of symmetric peaks is present in the frequency image Fig. 2(d) of the local image block Fig. 2(b), and the orientation orthogonal to the line connecting the two peaks can be taken as the local image orientation. For the poor quality local image block of Fig. 2(c), there is more than one pair of symmetric peaks in the frequency domain image Fig. 2(e), and it is ambiguous to determine the orientation. We observed that the orientation related to the highest peak pair is often not the true orientation for a local image block of poor quality. For poor quality image blocks, the true orientation is often related to the second or even third highest peak pair. In this paper, at most the three largest peak pairs are considered in each local frequency domain image, and hence there are at most three candidate orientations for each local image block. One of the candidate orientations is to be
chosen by the following voting and propagation methods. Other methods, such as the projection-based method, can also extract the candidate orientations by looking for a certain number of the largest response/variance values. The frequency domain image is used in this work because it is practically more effective than the projection-based method for fingerprints. Suppose O1(i,j), O2(i,j) and O3(i,j) are the three candidate orientations of the block w(i,j), corresponding respectively to the highest, second highest and third highest peak pairs in the frequency domain image. Ok(i,j)=−1 or Ok(i,j)∈[0,π). If f(i,j)=0, which indicates that w(i,j) is a background block, then Ok(i,j)=−1 for k=1,2,3. If f(i,j)=1, there are three cases for O1(i,j), O2(i,j) and O3(i,j): (1) O1(i,j)∈[0,π), O2(i,j)=−1 and O3(i,j)=−1, which indicates there is only one pair of peaks in the frequency image; (2) O1(i,j)∈[0,π), O2(i,j)∈[0,π) and O3(i,j)=−1, which indicates two pairs of peaks are found in the frequency image; (3) O1(i,j)∈[0,π), O2(i,j)∈[0,π) and O3(i,j)∈[0,π), which indicates three pairs of peaks are found in the frequency image. Some problems may require extracting more candidate orientations; for fingerprint images, extracting three candidate orientations is sufficient.
Fig. 2. A portion of a poor quality image (a) with two labeled local blocks, A and B, their enlarged versions (b) and (c), and their frequency images (d) and (e)
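To make the candidate-orientation extraction concrete, the following is a minimal sketch of how the symmetric peak pairs of a block's 2-D Fourier magnitude can be turned into up to three candidate orientations. It assumes the block is a grayscale NumPy array; the function name, the windowing choice and the peak-suppression radius are ours, not the paper's.

```python
import numpy as np

def candidate_orientations(block, num_candidates=3, dc_radius=2):
    """Extract up to `num_candidates` candidate ridge orientations for one
    image block from the symmetric peak pairs of its 2-D Fourier magnitude."""
    h, w = block.shape
    window = np.outer(np.hanning(h), np.hanning(w))   # reduce boundary leakage
    spec = np.fft.fftshift(np.abs(np.fft.fft2((block - block.mean()) * window)))
    cy, cx = h // 2, w // 2
    spec[cy - dc_radius:cy + dc_radius + 1, cx - dc_radius:cx + dc_radius + 1] = 0

    orientations = []
    for _ in range(num_candidates):
        y, x = np.unravel_index(np.argmax(spec), spec.shape)
        if spec[y, x] <= 0:
            break
        # The ridge orientation is orthogonal to the line joining the peak pair.
        theta = (np.arctan2(y - cy, x - cx) + np.pi / 2) % np.pi
        orientations.append(theta)
        # Suppress this peak and its symmetric counterpart before the next pick.
        for py, px in [(y, x), (2 * cy - y, 2 * cx - x)]:
            spec[max(py - 2, 0):py + 3, max(px - 2, 0):px + 3] = 0
    while len(orientations) < num_candidates:
        orientations.append(-1.0)                     # O_k(i,j) = -1 when absent
    return orientations                               # [O_1, O_2, O_3] in [0, pi) or -1
```

In practice one would also require the second and third peaks to exceed a noise floor before accepting them as candidates.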
Orientation Voting

The problem of voting is: given O1(i,j), O2(i,j) and O3(i,j) for w(i,j) (∀i, ∀j), determine the true orientation, denoted by O(i,j), of w(i,j). In order to vote for an orientation among the three candidates, O(i,j) is initialized to O1(i,j) (since it is related to the highest peaks), and it is updated in the process of voting. The voting procedure iterates over all the local foreground image blocks, and the orientation O(i,j) of w(i,j) is updated at each iteration. The orientation field (O(i,j)) becomes stable as the voting procedure iterates. The voters for each block are its neighbors, whose arrangement is shown in Fig. 3. The neighbors of the block w(i,j) are partitioned into eight separate groups, and each group has M blocks. The orientations of the M blocks in the kth (1≤k≤8) group are Ok1, Ok2, …, and OkM, whose relative positions are shown in Fig. 3. Okl (1≤k≤8, 1≤l≤M) is the current orientation of the block where Okl is hosted; for instance, if Okl is hosted in w(u,v), then Okl=O(u,v). Denote the kth group by Gk={Ok1, Ok2, …, OkM}. If all the blocks in a certain group are foreground blocks and all their orientations are true ridge orientations, then the orientations are highly likely to change smoothly, and in turn, this group can predict the orientation of w(i,j). Each group will vote for the one of the three candidate orientations that is closest to the predicted orientation. Define Cuv as C(Ouv, Ou,v+1) (1≤u≤8, 1≤v≤M−1), where
$$
C(\alpha,\beta)=\begin{cases}
\beta-\alpha & \alpha,\beta\in[0,\pi),\ -\pi/2\le\beta-\alpha\le\pi/2\\
\beta-\alpha-\pi & \alpha,\beta\in[0,\pi),\ \beta-\alpha>\pi/2\\
\beta-\alpha+\pi & \alpha,\beta\in[0,\pi),\ \beta-\alpha<-\pi/2\\
\infty & \text{else}
\end{cases}
$$
For the kth group Gk, given a threshold tC, if (1) ∀l∈[1,M−1], −tC ≤ Ckl ≤ 0, or (2) ∀l∈[1,M−1], tC ≥ Ckl ≥ 0, then Gk is harmonious. Use H(Gk) to indicate whether Gk is harmonious (H(Gk)=1) or not (H(Gk)=0). Gk predicts the orientation, denoted by OkM+1, of w(i,j):

$$
O^{k,M+1}=\begin{cases}
O^{kM}+C^{k,M-1} & H(G_k)=1,\ 0\le O^{kM}+C^{k,M-1}<\pi\\
O^{kM}+C^{k,M-1}-\pi & H(G_k)=1,\ O^{kM}+C^{k,M-1}\ge\pi\\
O^{kM}+C^{k,M-1}+\pi & H(G_k)=1,\ O^{kM}+C^{k,M-1}<0\\
-1 & H(G_k)=0
\end{cases}
$$

Define dkl (1≤k≤8, 1≤l≤3) as

$$
d_{kl}=\begin{cases}
\left|C(O^{k,M+1},\,O_l(i,j))\right| & O^{k,M+1}\ge 0,\ O_l(i,j)\ge 0\\
\infty & \text{else}
\end{cases}
$$
which is the absolute difference between the predicted orientation OkM+1 and the lth candidate orientation Ol(i,j). If min{dkl | 1≤l≤3} < ∞, then Os(i,j) is voted for by Gk, where s = argmin{dkl | 1≤l≤3}; if min{dkl | 1≤l≤3} = ∞, then Gk abstains from voting (there is at least one background block in Gk, or the orientations in Gk do not change smoothly, and therefore it cannot predict a reliable orientation). Finally, the candidate orientation with the largest number of votes is chosen to update the current orientation of w(i,j). The voting procedure votes on candidate orientations and updates the current orientation block by block over all the foreground blocks, and the orientation field becomes stable as the voting procedure iterates. For fingerprint images, M is set to 3 in the experiments, because a larger M is relatively ineffective for regulating orientations near the foreground borders.
Fig. 3. The arrangement of the neighbor blocks of w(i,j) for the orientation voting procedure
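The voting step can be sketched as follows for one block, assuming the three candidates per block have been assembled into an array cand[i, j]. The ray-shaped neighbour groups are a simplified reading of Fig. 3, the group size M = 3 follows the text, and the harmony threshold value and all names are placeholders of ours.

```python
import numpy as np

def ang_diff(a, b):
    """C(a, b): signed orientation difference wrapped into [-pi/2, pi/2];
    returns infinity when either orientation is invalid (-1), as in the paper."""
    if a < 0 or b < 0:
        return np.inf
    d = b - a
    if d > np.pi / 2:
        d -= np.pi
    elif d < -np.pi / 2:
        d += np.pi
    return d

# Eight ray directions approximating the neighbour-group layout of Fig. 3.
DIRECTIONS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def vote_block(i, j, O, cand, fg, M=3, t_c=np.pi / 8):
    """One voting update for block (i, j): every harmonious neighbour group
    predicts an orientation and votes for the closest of the three candidates."""
    rows, cols = O.shape
    votes = np.zeros(3)
    for di, dj in DIRECTIONS:
        # Blocks of the group, ordered from farthest (l = 1) to nearest (l = M).
        group = [(i + di * (M - l + 1), j + dj * (M - l + 1)) for l in range(1, M + 1)]
        if any(not (0 <= u < rows and 0 <= v < cols and fg[u, v]) for u, v in group):
            continue                                   # group abstains
        diffs = [ang_diff(O[group[l]], O[group[l + 1]]) for l in range(M - 1)]
        harmonious = (all(-t_c <= d <= 0 for d in diffs) or
                      all(0 <= d <= t_c for d in diffs))
        if not harmonious:
            continue
        predicted = (O[group[-1]] + diffs[-1]) % np.pi  # extrapolate the last change
        d_cand = [abs(ang_diff(predicted, c)) for c in cand[i, j]]
        if min(d_cand) < np.inf:
            votes[int(np.argmin(d_cand))] += 1
    return cand[i, j][int(np.argmax(votes))] if votes.any() else O[i, j]
```

The full procedure would iterate this update over every foreground block for a fixed number of passes (20 in Section 3) until the field stabilises.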
Orientation Propagation
The voting procedure iterates to regulate the orientation field. For very poor quality images, it is possible that the orientations of some local blocks still cannot be correctly regulated. A second strategy, orientation propagation, is used to solve this problem. The main idea is to locate a maximal orientation-connected (defined below) foreground region, and the orientations in the rest of the foreground region are regulated by selecting among the candidates those that are consistent with the maximal orientation-connected region. Given two blocks, w(i,j) and w(i+di,j+dj), and a threshold tO, they are orientation-connected if the following conditions are satisfied: (1) di=0, dj=±1; or di=±1, dj=0. (2) f(i,j)=1, f(i+di,j+dj)=1. (3) |C(O(i,j),O(i+di,j+dj))| ≤ tO (tO=π/16 in the experiments). Use F(i,j)=0|1 to label the maximal orientation-connected region: if F(i,j)=1, then the block w(i,j) is in this region, otherwise it is not (F(i,j)=0). The orientation-connected region is similar to a four-connected region, but the neighbor relationship is refined by the orientation difference, namely condition (3) above. Denote by RF={w(i,j)|F(i,j)=1} the maximal orientation-connected region, and by Rf={w(i,j)|f(i,j)=1} the foreground region. The orientation propagation procedure selects the orientation for each block w(i,j)∈Rf−RF so that the orientations are consistent with the existing maximal orientation-connected region. It repeatedly moves blocks from Rf−RF to RF, and the block being moved must have at least one adjacent block in the existing RF, so that the orientation can be propagated from the blocks in the existing RF to the block being moved. The propagation is in fact a voting process, which is different from the voting in the previous section: the neighbor blocks that are in RF vote for one candidate to be the orientation of the newly added block. Suppose the block under consideration to be moved is w(i,j), and its neighbor blocks in RF are N(i,j)={w(k,l) | F(k,l)=1, |k−i|≤1, |l−j|≤1, |k−i|+|l−j|>0}. If N(i,j)=∅, then w(i,j) is not moved to RF for the moment, since the orientation cannot be propagated. If N(i,j)≠∅, then the blocks in N(i,j) vote among the candidates using the following steps: (1) Use scorek (k=1,2,3) to record the number of votes for Ok(i,j), initialized to 0. (2) For each w(k,l)∈N(i,j), if |C(Om(i,j),O(k,l))|≤tO, then scorem=scorem+1 (m=1,2,3). (3) If max{scorem|m=1,2,3}=|N(i,j)|, then move w(i,j) from Rf−RF to RF, and the orientation Ov(i,j) is selected, where v=argmax{scorem|m=1,2,3}. In fact, the selected orientation Ov(i,j) is unanimously voted for by the blocks in N(i,j). The propagation procedure repeatedly moves blocks from Rf−RF to RF until no block can be moved, either because Rf−RF becomes empty or because no block satisfies the condition max{scorem|m=1,2,3}=|N(i,j)| in step (3). After the propagation procedure, it is possible that Rf−RF is still not empty. In this case, the blocks in Rf−RF can have their orientations computed as the mean orientation of their neighbor blocks that are in RF. After post-processing, the blocks in Rf−RF can be discarded as background, since their orientations are probably unreliable.
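The propagation step can be sketched in the same style, reusing ang_diff from the voting sketch; the unanimity rule and the threshold t_O = π/16 follow the text, while the loop structure and names are ours. Locating the initial maximal orientation-connected region (the largest 4-connected component under condition (3)) is assumed to have been done already and is passed in as the boolean mask F.

```python
import numpy as np

def propagate_orientations(O, cand, fg, F, t_o=np.pi / 16):
    """Grow the orientation-connected region R_F (mask F) into the remaining
    foreground: a block is admitted when all of its 8-neighbours already in R_F
    unanimously vote for the same candidate orientation."""
    rows, cols = O.shape
    moved = True
    while moved:
        moved = False
        for i in range(rows):
            for j in range(cols):
                if not fg[i, j] or F[i, j]:
                    continue
                nbrs = [(k, l)
                        for k in range(max(i - 1, 0), min(i + 2, rows))
                        for l in range(max(j - 1, 0), min(j + 2, cols))
                        if (k, l) != (i, j) and F[k, l]]
                if not nbrs:
                    continue                          # nothing to propagate from yet
                scores = [sum(abs(ang_diff(cand[i, j][m], O[k, l])) <= t_o
                              for k, l in nbrs) for m in range(3)]
                best = int(np.argmax(scores))
                if scores[best] == len(nbrs):         # unanimous vote
                    O[i, j] = cand[i, j][best]
                    F[i, j] = True
                    moved = True
    return O, F
```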
3 Application to Fingerprints

This section shows a real application of orientation voting and propagation to fingerprint images. The image is divided into non-overlapping blocks of size 8×8 pixels. Use w(i,j) to denote the block at the ith row and jth column, and use f(i,j) to indicate whether w(i,j) is a foreground block (f(i,j)=1) or a background block (f(i,j)=0). Denote by (x,y) the pixel at the yth row and xth column of the image, and denote by G(x,y) the gray value of (x,y). The following steps can be used to determine the orientation field:
(1) Segment the image.
(2) Compute three candidate orientations for each block, and apply the voting procedure to determine the orientation field (O(i,j)).
(3) Locate the maximal orientation-connected foreground region, and apply the orientation propagation procedure to determine the orientations for the rest of the foreground region, i.e., repeatedly move blocks from the rest of the region to the orientation-connected region and vote on their orientations at the same time.
(4) Post-processing: fill holes existing in RF and estimate their orientations as the average orientation of their neighbor elements in RF.

Issues of Segmentation
Three rounds of segmentation are proposed and are sequentially applied to the fingerprint image. The second and the third segmentation change some foreground blocks produced by the previous segmentation into background blocks.
(1) The first segmentation uses the local mean gray value and the local ridge-valley difference. For each block w(i,j), suppose W(i,j) is a larger region (of size 15×15) sharing the same center with w(i,j). We compute the local mean gray value m(i,j) and the local ridge-valley difference d(i,j) of W(i,j), and segment the image (i.e., compute f(i,j)) in terms of the ratio Z(i,j)=m(i,j)/d(i,j). The pixels in W(i,j) whose gray value is less/greater than m(i,j) are pseudo (potential) ridge/valley pixels. d(i,j) is the difference between the mean gray value of the pseudo valley pixels and that of the pseudo ridge pixels. Foreground blocks generally have a small mean gray value and a large ridge-valley difference, while background blocks have a relatively large mean gray value and a small ridge-valley difference. An appropriate threshold is selected to segment the image using Z(i,j)=m(i,j)/d(i,j). (A sketch of this test, together with the frequency-domain test of step (3), is given after this list.)
(2) The second segmentation analyzes the pseudo-ridge pixels in W(i,j) (here the size is set to 30×30), whose gray values are less than the local mean gray value, to segment the image again. In true ridges, the pseudo-ridge pixels should have a relatively large number of neighboring (8-neighbor) pseudo-ridge pixels; in contrast, the pseudo-ridge pixels in non-ridge areas are sparsely distributed. Use R(x,y) to indicate whether (x,y) is a pseudo-ridge (R(x,y)=1) or pseudo-valley (R(x,y)=0) pixel. For a given pseudo-ridge pixel (x,y), supposing the R(·,·) values of its eight neighbor pixels are, beginning with its right neighbor and proceeding anti-clockwise, R0,R1,...,R7: if 0<∑Rk<4, and there is a k satisfying Rk=1, Rk1=0 and Rk2=0, where k1=(k+1) mod 8 and k2=(k+7) mod 8, then (x,y) is defined as a sparse pseudo-ridge pixel (we have tried a number of constraints on the sparse pseudo-ridge pixel, and the given constraints seem to be the best among those we have tried). For each local block w(i,j) (of size 8×8), count the number of sparse pseudo-ridge pixels in the larger block W(i,j), and denote this number by P(i,j). An appropriate threshold is selected to segment the image by P(i,j).
(3) The third segmentation analyzes the distance between the highest pair of peaks present in the frequency domain image. For each local block w(i,j), the larger block W(i,j) sharing the same center with w(i,j) is set to be of size 32×32, and the frequency domain image of W(i,j) is analyzed. For true ridges in an image of 500 dpi, the ridge distance generally lies in the range of 5~15 pixels. Given the distance ζ between the highest peaks, the potential ridge distance is 64/ζ. The blocks whose ridge distance 64/ζ is less than an appropriately small threshold or greater than an appropriately large threshold can then be labeled as background.
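The sketch below illustrates the block-level tests of steps (1) and (3): the mean-gray to ridge-valley-difference ratio Z(i,j) = m(i,j)/d(i,j) and the frequency-domain ridge-distance check 64/ζ. The threshold values are placeholders, since the paper only says that appropriate thresholds are chosen, and all names are ours.

```python
import numpy as np

def first_segmentation_test(W, z_thresh=2.0):
    """Step (1) for one window W(i,j): compute the local mean gray m and the
    pseudo ridge-valley difference d, and accept the block when Z = m/d is
    small enough (z_thresh is a placeholder threshold)."""
    m = W.mean()
    ridge = W[W < m]                                  # pseudo ridge pixels (darker)
    valley = W[W >= m]                                # pseudo valley pixels
    if ridge.size == 0 or valley.size == 0:
        return False
    d = valley.mean() - ridge.mean()
    return d > 1e-6 and (m / d) <= z_thresh

def third_segmentation_test(W, min_dist=5.0, max_dist=15.0):
    """Step (3) for one 32x32 window: locate the highest symmetric peak pair in
    the Fourier magnitude and accept the block only when the implied ridge
    distance 64/zeta lies in a plausible range (5-15 pixels at ~500 dpi)."""
    spec = np.fft.fftshift(np.abs(np.fft.fft2(W - W.mean())))
    cy, cx = W.shape[0] // 2, W.shape[1] // 2
    spec[cy - 1:cy + 2, cx - 1:cx + 2] = 0            # ignore the DC region
    y, x = np.unravel_index(np.argmax(spec), spec.shape)
    zeta = 2.0 * np.hypot(y - cy, x - cx)             # distance between the peak pair
    if zeta == 0:
        return False
    return min_dist <= 64.0 / zeta <= max_dist
```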
Issues of Post-processing

The post-processing fills holes present in the maximal orientation-connected region. A closing operation is used to fill the holes. The average orientation estimation method follows the low-pass filtering method in [1], where local orientations are smoothed by their neighbor elements. A block converted from background to foreground has its orientation computed by averaging the orientations of its neighboring foreground blocks. Finally, the whole orientation field can be smoothed by low-pass filtering.
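A minimal sketch of this post-processing step, assuming SciPy is available: holes in R_F are filled with a morphological closing and each newly added block receives the average orientation of its neighbours in R_F, with the averaging done in the doubled-angle domain as in the low-pass filtering of [1]. The structuring-element and border choices are ours.

```python
import numpy as np
from scipy.ndimage import binary_closing

def postprocess(O, F):
    """Fill holes in the orientation-connected region and assign each filled
    block the average orientation of its 8-neighbours that were already in R_F."""
    F_filled = binary_closing(F)
    O_out = O.copy()
    rows, cols = O.shape
    for i, j in zip(*np.nonzero(F_filled & ~F)):      # blocks added by the closing
        sin2 = cos2 = 0.0
        n = 0
        for k in range(max(i - 1, 0), min(i + 2, rows)):
            for l in range(max(j - 1, 0), min(j + 2, cols)):
                if (k, l) != (i, j) and F[k, l]:
                    sin2 += np.sin(2 * O[k, l])
                    cos2 += np.cos(2 * O[k, l])
                    n += 1
        if n:                                          # doubled-angle average
            O_out[i, j] = (0.5 * np.arctan2(sin2, cos2)) % np.pi
    return O_out, F_filled
```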
Issues of Implementation

Both the third segmentation and the computation of the multiple candidate orientations (MCO) use the frequency domain image. Therefore, in a practical implementation, the third segmentation follows the computation of the multiple candidate orientations, and the implementation steps of the orientation estimation using the proposed method are: (1) first segmentation; (2) second segmentation; (3) computation of multiple candidate orientations; (4) third segmentation; (5) orientation voting; (6) locating the maximal orientation-connected region; (7) orientation propagation; (8) post-processing. In step (5), the voting procedure over the whole orientation field is iterated 20 times. As a whole, the systematic steps of the proposed method on an example poor quality image are shown in Fig. 4.
4 Experiments

This section shows example results of orientation estimation and compares them with the systematic method of segmentation and orientation estimation in [6]. Three typical poor quality images are shown in the left column of Fig. 5, and the orientation fields computed by the proposed method and by the method in [6] are shown in the middle column and right column of Fig. 5, respectively. The method in [6] estimates the correctness of orientations computed by the gradient-based method and regulates the incorrect orientations using their correct neighbor orientations, and the image is also segmented in the context of orientation field computation. The bold short lines in the right column images of Fig. 5 are the final orientations of the foreground blocks, and the thin lines of the background blocks are the initial orientations and indicate the background. Compared with the right column, the middle column of Fig. 5 apparently shows better results of orientation estimation and segmentation. Minutiae detection is a typical approach to comparing orientation estimation and segmentation. However, for very poor quality images, it is quite ambiguous even to manually locate the minutiae. Therefore, we
Fig. 4. An example demonstrating the steps of the proposed method. In steps 4 and 5, a bold line for a block w(i,j) means that f(i,j)=1. In steps 6, 7 and 8, a bold line for a block w(i,j) means that F(i,j)=1. The bold lines in step 4 are the initial orientations (O(i,j)), which are equal to (O1(i,j)). The bold lines in step 5 are the orientations after 20 iterations of orientation voting. In step 6, the bold lines indicate the maximal orientation-connected region. In step 7, the bold lines indicate RF after orientation propagation. The lines in step 8 are the final orientations after post-processing.
[Fig. 5 layout: left column, original image; middle column, the proposed method; right column, method in [6]]
Fig. 5. Example results of orientation estimation by the proposed method and the method in [6]. The blocks with bold lines, which are orientations, are foreground blocks, and the rest are background blocks.
quantitatively compare the proposed method and the method in [6] by applying them to fingerprint matching. Two methods, A and B, are compared: method A uses the proposed segmentation and orientation estimation, and method B uses the methods in [6]. Both methods use the enhancement method in [21] and the matching method in [22], and the matching score is computed as a/(a1+a2), where a is the number of matched minutiae pairs, and a1 and a2 are the numbers of minutiae in the template image and query image, respectively. The two methods are applied to FVC2000 DB1_A, which contains 100×8 images (100 fingers, with 8 samples per finger), and the ROC curves are shown in Fig. 6, which shows that method A is better than method B. The proposed method for segmentation and orientation estimation takes about 400 ms for an image of size 300×300 on an Intel Core 2 U7600 at 1.2 GHz.
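For reference, the ROC comparison of Fig. 6 boils down to sweeping a decision threshold over the genuine and impostor matching scores (the scores a/(a1+a2) described above) and recording the false match and false non-match rates. The helper below is generic evaluation code of ours, not part of the matcher in [22].

```python
import numpy as np

def roc_points(genuine_scores, impostor_scores, num_thresholds=200):
    """Return (FMR, FNMR) arrays over a sweep of thresholds, assuming that a
    higher matching score means a better match."""
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    lo, hi = min(impostor.min(), genuine.min()), max(impostor.max(), genuine.max())
    thresholds = np.linspace(lo, hi, num_thresholds)
    fmr = np.array([(impostor >= t).mean() for t in thresholds])   # impostors accepted
    fnmr = np.array([(genuine < t).mean() for t in thresholds])    # genuines rejected
    return fmr, fnmr
```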
[Fig. 6 plot: FNMR (False Non-Match Rate) versus FMR (False Match Rate), comparing Method A (the proposed) and Method B ([6])]
Fig. 6. ROC curves on FVC2000 DB1_A
5 Conclusion

We have proposed to extract multiple candidate orientations for each foreground block, and orientation voting and propagation procedures are designed to regulate the orientations. The voting procedure uses eight separate groups of neighbors to vote among the candidate orientations of each local foreground block. The orientation propagation procedure propagates orientations from the maximal orientation-connected region to the rest of the foreground region. The proposed methods are applied to fingerprint images and shown to be effective for poor quality images, and the matching performance obtained by using the proposed method is improved. Acknowledgments. This work was supported by the National Natural Science Foundation of China (Grant No. 60603015; 60970034), the Foundation for the Author of National Excellent Doctoral Dissertation (Grant No. 2007B4), and the Foundation for the Author of Hunan Provincial Excellent Doctoral Dissertation.
References
1. Hong, L., Wang, Y.F., Jain, A.K.: Fingerprint Image Enhancement: Algorithm and Performance Evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence 20(8), 777–789 (1998)
2. Bazen, A.M., Gerez, S.H.: Systematic Methods for the Computation of the Directional Fields and Singular Points of Fingerprints. IEEE Transactions on Pattern Analysis and Machine Intelligence 24(7), 905–919 (2002)
3. Ratha, N., Chen, S., Jain, A.K.: Adaptive Flow Orientation-based Feature Extraction in Fingerprint Images. Pattern Recognition 28(11), 1657–1672 (1995)
4. Donahue, M.J., Rokhlin, S.L.: On the Use of Level Curves in Image Analysis. Image Understanding 57(2), 185–203 (1993)
5. Maio, D., Maltoni, D.: Direct Gray-Scale Minutiae Detection in Fingerprints. IEEE Transactions on Pattern Analysis and Machine Intelligence 19(1), 27–39 (1997)
6. Zhu, E., Yin, J.P., Hu, C.F., Zhang, G.M.: A Systematic Method for Fingerprint Ridge Orientation Estimation and Image Segmentation. Pattern Recognition 39(8), 1452–1472 (2006)
7. Mehtre, B.M., Murthy, N.N., Kapoor, S., Chatterjee, B.: Segmentation of Fingerprint Images Using the Directional Image. Pattern Recognition 20(4), 429–435 (1987)
8. Oliveira, M.A., Leite, N.J.: A Multiscale Directional Operator and Morphological Tools for Reconnecting Broken Ridges in Fingerprint Images. Pattern Recognition 41(1), 367–377 (2008)
9. Ji, L., Yi, Z.: Fingerprint Orientation Field Estimation Using Ridge Projection. Pattern Recognition 41(5), 1508–1520 (2008)
10. Sherlock, B.G.: Computer Enhancement and Modeling of Fingerprint Images. In: Ratha, N., Bolle, R. (eds.) Automatic Fingerprint Recognition Systems, pp. 87–112. Springer, New York (2004)
11. Kamei, T., Mizoguchi, M.: Image Filter Design for Fingerprint Enhancement. In: Proc. Int. Symp. on Computer Vision, pp. 109–114 (1995)
12. Hong, L., Jain, A.K., Pankanti, S., Bolle, R.: Fingerprint Enhancement. In: Proc. Workshop on Applications of Computer Vision, pp. 202–207 (1996)
13. Nakamura, T., Hirooka, M., Fujiwara, H., Sumi, K.: Fingerprint Image Enhancement Using a Parallel Ridge Filter. In: Proc. Int. Conf. on Pattern Recognition (17th), vol. 1, pp. 536–539 (2004)
14. Zhu, E., Yin, J.P., Zhang, G.M., Hu, C.F.: Orientation Field Estimation for Fingerprints Based on Neural Network. WSEAS Transactions on Information Science & Applications 3(4) (2006)
15. Chikkerur, S., Cartwright, A.N., Govindaraju, V.: Fingerprint Enhancement Using STFT Analysis. Pattern Recognition 40(1), 198–211 (2007)
16. Sherlock, B.G., Monro, D.M.: A Model for Interpreting Fingerprint Topology. Pattern Recognition 26(7), 1047–1055 (1993)
17. Vizcaya, P.R., Gerhardt, L.A.: A Nonlinear Orientation Model for Global Description of Fingerprints. Pattern Recognition 29(7), 1221–1231 (1996)
18. Araque, J.L., Baena, M., Chalela, B.E., Navarro, D., Vizcaya, P.R.: Synthesis of Fingerprint Images. In: Proc. Int. Conf. on Pattern Recognition (16th), vol. 2, pp. 442–445 (2002)
19. Zhou, J., Gu, J.: A Model-based Method for the Computation of Fingerprints Orientation Field. IEEE Transactions on Image Processing 13(6), 821–835 (2004)
20. Maltoni, D., Maio, D., Jain, A.K., Prabhakar, S.: Handbook of Fingerprint Recognition. Springer, Heidelberg (2009)
21. Zhu, E., Yin, J.P., Zhang, G.M., Hu, C.F.: A Gabor Filter Based Fingerprint Enhancement Scheme Using Average Frequency. International Journal of Pattern Recognition and Artificial Intelligence 20(3), 417–429 (2006)
22. Zhu, E., Yin, J.P., Zhang, G.M.: Fingerprint Matching Based on Global Alignment of Multiple Reference Minutiae. Pattern Recognition 38(10), 1685–1694 (2005)
Fingerprint Pattern and Minutiae Fusion in Various Operational Scenarios Azhar Quddus, Ira Konvalinka, Sorin Toda, and Daniel Asraf Enterprise Access, L1 Identity Solutions Inc., 50 Acadia Avenue, Suite 200, Markham, Ontario, L3R0B3, Canada {AQuddus,IKonvalinka,SToda,DAsraf}@L1ID.com
Abstract. This paper discusses the advantages of score-level fusion between pattern-based and minutiae-based fingerprint verification algorithms in various operational scenarios. The different scenarios considered are sensor interoperability, environmental conditions and low quality enrollments. These are commonly encountered in real-life deployments of fingerprint-based biometric systems, specifically in large-scale distributed systems and physical access control. The approach for jointly utilizing the conceptually different pattern and minutiae algorithms is based on various well-known score-level fusion techniques with single-finger presentations. In contrast to previous studies on multi-matcher score-level fusion for fingerprint verification, where only moderate performance improvements were reported, the results presented here show significant performance gains. The two main contributing factors to these findings are the conceptual difference between the two algorithms and the effects of the different operational scenarios. For the latter, the improvement in accuracy due to fusion is even more significant in non-ideal and challenging operating conditions. Keywords: score-level fusion, pattern-minutiae fusion, fingerprint verification, sensor interoperability, environmental conditions, enrollment quality.
1 Introduction

Biometrics refers to the automatic authentication of individuals based on their anatomical and behavioral characteristics. Typically, the authentication is performed by comparing an acquired sample against a previously enrolled template. It is well known that biometric recognition rates depend on the operational scenario of the system. Some of the most commonly encountered challenges are low quality enrollments, variability in environmental conditions, such as temperature, humidity and ambient light, as well as sensor interoperability. Here sensor interoperability [9, 10] refers to the scenario where the enrollment, i.e. the initial template creation, is performed with one type of sensor and different kinds of sensors are used for verification against the previously enrolled template.
These challenges are particularly pronounced in real-life deployments of large-scale systems such as identity management initiatives across commercial enterprises or in governmental programs such as the Personal Identity Verification (PIV) and Transportation Worker Identification Credential (TWIC). For these deployments, enrollment and verification are often performed on separate systems that utilize different equipment, such as sensors, and even different vendors' biometric algorithms. Moreover, due to technological evolution it is also common that different sensors and biometric algorithms are introduced into already existing systems. To address these challenges, significant efforts have been, and still are, focused on improving biometric recognition accuracy. A commonly used approach is multimodal biometric fusion, which utilizes information from multiple biometric modalities to achieve improved recognition accuracy [4-7]. Various approaches to unimodal multi-matcher fusion have also been considered for improving accuracy [1, 8, 11, 12], but have been reported to deliver less significant gains. In the literature, many fusion approaches have been suggested, such as likelihood-based [2], support vector machines [5], quality-based [6] and Gaussian Mixture Model [7]. In [4], a quality-dependent fusion approach has been proposed. Recently, it has also been shown [3] that weighted sum-rule based fusion has some relationship to Neyman-Pearson optimality subject to certain conditions. In [9], a fusion-based approach is used to verify multiple impressions of the same finger. There are several novel aspects of this research. To our knowledge there is no reported study on single-impression, unimodal fusion in sensor interoperable deployment scenarios and challenging environmental conditions. Moreover, the effect of enrollment quality on fusion performance has not been studied before. In this paper, we study the effects of score-level fusion of two fundamentally different fingerprint verification algorithms, namely, one pattern-based and one minutiae-based. These algorithms are conceptually different in the sense that they utilize different features and matching approaches. Several algebraic relationships for score fusion are considered, such as mean, max, min, geometric mean and the weighted sum-rule. Among these we found the best linear and the likelihood ratio based techniques to be the top performers and most adequate for our data sets. Moreover, the main focus of this paper is on the accuracy gains that can be achieved in the various operational scenarios mentioned above. It is shown in [1] that multi-matcher score-level fusion, using two algorithms on the same data, was much less effective than the other methods. Here, experimental results show that the accuracy improvements due to score-level fusion can be significant in general cases and even further improved in non-ideal operating conditions, such as sensor interoperable deployments, challenging environmental conditions and lower quality enrollment images. The two main contributing factors to these findings are the conceptually different algorithms, which result in reduced correlation between the matchers, and the challenging/non-ideal operational scenarios. This paper is organized as follows; section 2 describes the sensors and various environmental conditions; section 3 briefly discusses the pattern and minutiae based
verification algorithms; section 4 describes the fusion approaches; section 5 presents the results and discussions. Finally, this paper is concluded in section 6.
2 Sensors in Challenging Environmental Conditions

The sensor interoperable scenario requires enrollment using one sensor and verification using a different sensor. In order to evaluate performance in such cases, image datasets are needed with multiple impressions of the same finger on all the sensors. To our knowledge there are no public databases available for such a study. In this research, we used UPEK TCS1 (OEM), Secugen OPP03XC2 and Lumidigm VENUS sensors for the sensor interoperability study. Since Lumidigm sensors are suitable for outdoor conditions, we acquired images in various environmental conditions, viz. wet, hot (+60 °C) and cold (-20 °C). Sensors were kept for several minutes in temperature-controlled chambers before each user presented his or her fingers on the sensor. For the wet condition the sensor was submerged in water at room temperature in a watertight enclosure. The details of the sensors are provided in Table 1.

Table 1. Sensor Details
Sensor Name                  Legend   dpi  Technology     Environment
UPEK TCS1 OEM                NTCS1    508  Silicon        Room
Secugen (OPP03XC2) OEM       NOPP3    500  Optical        Room
Lumidigm Venus OEM           VenusR   500  Multispectral  Room
Lumidigm Venus OEM           VenusW   500  Multispectral  Underwater
Lumidigm Venus OEM           VenusH   500  Multispectral  Hot (60 °C)
Lumidigm Venus OEM           VenusC   500  Multispectral  Cold (-20 °C)
Secugen Hamster3 Plus (PC)   SECH3    500  Optical        Room
In order to study the effect of enrollment quality, we used the Secugen HAMSTER PLUS sensor, for which we have a larger image dataset (last row in Table 1).
3 Verification Algorithms

As already mentioned, two fundamentally different fingerprint verification algorithms were utilized in this study. The first is based on fingerprint ridge pattern correlation and the second is based on the well-known minutiae technology. The pattern-based verification algorithm won FVC2002 and FVC2004, while our minutiae-based verification technology is MINEX approved.
4 Score-Level Fusion

There are a variety of fusion approaches in the literature; we considered a few, such as best linear (weighted sum-rule), max-rule, min-rule, geometric mean, and likelihood ratio. Among these we found the best linear and the likelihood ratio based fusion to be the top performers and most adequate for our data sets.
Best linear is a conceptually simple technique that does not require modeling of distributions [1]. Likelihood-based fusion methods require density estimation; however, they do not require normalization of scores. In [2], likelihood-based score-level fusion is attempted where densities were estimated with a Gaussian Mixture Model (GMM). The estimation in a GMM requires a large dataset in order to separate training and testing sets. Due to the limited dataset we found GMMs quite unstable and biased. Instability was caused by the random nature of selecting the training and testing sets, while bias was caused by the much larger number of impostor attempts than genuine attempts. Nevertheless, we attempted likelihood-based fusion with a bi-modal Gaussian assumption and the results were very close to the best linear. Therefore, fusion results based on best linear (weighted sum) are presented in this paper. The verification scores of the pattern-based algorithm are in the range (0, 1). On the other hand, minutiae scores lie in the range (0, 1000000); therefore minutiae scores were normalized to bring them into the range (0, 1). Here a single finger presentation on the sensor is used for verification by both algorithms before the fusion attempt. In the case of best linear, the fused score is the weighted sum of the scores obtained by the pattern-based and minutiae-based algorithms, which can be defined as

$$ s_f = w_p s_p + w_m s_m \qquad (1) $$

where $w_p$ and $w_m$ are the weights associated with the pattern and minutiae scores ($w_p + w_m = 1$), respectively. The weights were obtained using exhaustive search.
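A minimal sketch of the best linear (weighted sum) fusion with exhaustive weight search is given below. It assumes both score sets have already been normalised to (0, 1); since the paper does not state the criterion optimised by the exhaustive search, the sketch uses the equal error rate, and the small helper at the end anticipates the percentage-gain figures reported in the next section. All names are ours.

```python
import numpy as np

def eer(scores, labels):
    """Equal error rate of a score set; labels are 1 (genuine) / 0 (impostor)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    thr = np.unique(scores)
    fmr = np.array([(scores[labels == 0] >= t).mean() for t in thr])
    fnmr = np.array([(scores[labels == 1] < t).mean() for t in thr])
    k = int(np.argmin(np.abs(fmr - fnmr)))
    return (fmr[k] + fnmr[k]) / 2.0

def best_linear_weights(s_p, s_m, labels, step=0.1):
    """Exhaustive search over w_p (with w_m = 1 - w_p) for the fused score
    w_p*s_p + w_m*s_m of equation (1); the EER criterion is our assumption."""
    s_p, s_m = np.asarray(s_p, float), np.asarray(s_m, float)
    best_w, best_e = 0.0, np.inf
    for w_p in np.arange(0.0, 1.0 + 1e-9, step):
        e = eer(w_p * s_p + (1.0 - w_p) * s_m, labels)
        if e < best_e:
            best_w, best_e = w_p, e
    return best_w, 1.0 - best_w

def percentage_gain(frr_single, frr_fused):
    """Relative FRR improvement (in percent) of fusion over a single matcher
    at a fixed FAR point, mirroring the gain values reported in Section 5."""
    return 100.0 * (frr_single - frr_fused) / frr_single
```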
5 Results and Discussions

In this paper, the improvement due to fusion is presented in the form of the percentage gain in FRR at three FAR points (0.1%, 0.01% and 0.005%) on the Decision Error Tradeoff (DET) curves. The percentage gain in FRR (at a specific FAR) after fusion, computed against the pattern-based verification algorithm, is

$$ G_P = \frac{FRR_P - FRR_F}{FRR_P} \times 100 $$

where $FRR_P$ and $FRR_F$ are the FRRs (at the specific FAR) using the pattern-based algorithm and the fused scores, respectively. Similarly, the percentage gain in FRR (at a specific FAR) after fusion, computed against the minutiae-based verification algorithm, is

$$ G_M = \frac{FRR_M - FRR_F}{FRR_M} \times 100 $$

where $FRR_M$ and $FRR_F$ are the FRRs (at the specific FAR) using the minutiae-based algorithm and the fused scores, respectively. Table 2 shows the various interoperable scenarios along with the weights for the best linear fusion. Here the first legend is the enrollment sensor and the second legend is the verification sensor. Enrollment with VenusW is not present because fingerprint enrollment is advised only at room temperature. The VENUS image dataset was collected under various environmental conditions, viz. wet, hot (+60 °C) and
cold (-20 °C). Images of the index, middle and ring fingers of both hands were acquired in the database. Typically, the genuine attempts are on the order of a few thousand while the impostor attempts are on the order of one hundred thousand.

Table 2. Enrollment and verification scenarios with best linear weights
Sensor pairs    Weights (w_p, w_m)
NTCS1-NTCS1     0.6, 0.4
NOPP3-NOPP3     0.6, 0.4
VenusR-VenusR   0.3, 0.7
NTCS1-NOPP3     0.6, 0.4
NTCS1-VenusR    0.3, 0.7
NTCS1-VenusW    0.3, 0.7
NTCS1-VenusH    0.3, 0.7
NTCS1-VenusC    0.3, 0.7
NOPP3-NTCS1     0.6, 0.4
NOPP3-VenusR    0.3, 0.7
NOPP3-VenusW    0.3, 0.7
NOPP3-VenusH    0.3, 0.7
NOPP3-VenusC    0.3, 0.7
VenusR-NTCS1    0.4, 0.6
VenusR-NOPP3    0.4, 0.6
VenusR-VenusW   0.3, 0.7
VenusR-VenusH   0.3, 0.7
VenusR-VenusC   0.3, 0.7
SECH3-SECH3     0.6, 0.4
5.1 Fusion Performance with Best Linear

Here the performance gains G_P and G_M obtained with best linear fusion are provided. Table 3 shows the performance gains in the non-interoperable scenarios. Tables 4, 5, 6 and 7 show the performance gains with sensor interoperability and various environmental conditions.

Table 3. Fused verification improvement with same sensors
Sensor Pairs    G_P at 0.1% / 0.01% / 0.005% FAR    G_M at 0.1% / 0.01% / 0.005% FAR
NTCS1-NTCS1     2.8 / 10.9 / 19.6                   57.3 / 70.3 / 73.2
NOPP3-NOPP3     31.8 / 35.9 / 31.1                  81.2 / 81.3 / 80.6
VenusR-VenusR   50.0 / 72.7 / 73.3                  0.0 / 66.7 / 63.6
Table 4. Fused verification improvement with NTCS1 enrollment in interoperable scenario and environmental conditions
Sensor Pairs    G_P at 0.1% / 0.01% / 0.005% FAR    G_M at 0.1% / 0.01% / 0.005% FAR
NTCS1-NOPP3     18.8 / 40.0 / 48.4                  84.9 / 91.5 / 91.7
NTCS1-VenusR    72.7 / 61.5 / 50.0                  85.0 / 82.8 / 80.0
NTCS1-VenusW    77.8 / 80.0 / 81.5                  90.5 / 87.1 / 85.3
NTCS1-VenusH    80.0 / 75.0 / 46.2                  91.3 / 90.6 / 82.1
NTCS1-VenusC    25.0 / 25.0 / 31.4                  58.3 / 57.1 / 60.7
Table 5. Fused verification improvement with NOPP3 enrollment in interoperable scenario and environmental conditions
Sensor Pairs    G_P at 0.1% / 0.01% / 0.005% FAR    G_M at 0.1% / 0.01% / 0.005% FAR
NOPP3-NTCS1     16.3 / 18.6 / 22.6                  62.1 / 67.8 / 73.2
NOPP3-VenusR    62.5 / 64.7 / 60.0                  85.7 / 81.3 / 80.0
NOPP3-VenusW    85.7 / 84.2 / 71.1                  92.0 / 85.7 / 75.9
NOPP3-VenusH    25.0 / 50.0 / 54.2                  91.9 / 86.8 / 82.8
NOPP3-VenusC    41.7 / 50.0 / 42.9                  67.4 / 68.3 / 70.4
Table 6. Fused verification improvement with VenusR enrollment in interoperable scenario and environmental conditions
Sensor Pairs    G_P at 0.1% / 0.01% / 0.005% FAR    G_M at 0.1% / 0.01% / 0.005% FAR
VenusR-NTCS1    18.6 / 12.8 / 10.6                  51.4 / 57.3 / 65.6
VenusR-NOPP3    34.7 / 50.0 / 58.6                  73.6 / 81.9 / 82.4
VenusR-VenusW   71.1 / 78.8 / 78.9                  47.6 / 69.6 / 69.8
VenusR-VenusH   25.0 / 70.0 / 80.0                  50.0 / 72.7 / 78.6
VenusR-VenusC   44.4 / 60.9 / 68.6                  68.8 / 67.9 / 71.1
It can be observed from Tables 3-6 that, at 0.01% FAR, on average G_P and G_M can be 39.8% and 72.8% for the same-sensor environment. In challenging operational environments (sensor interoperability and challenging environmental conditions) they can be, on average, 54.8% and 76.6%.

5.2 Effect of Enrollment Quality on Fusion Performance

In the image databases used in the previous analysis we did not have enough image samples of low image quality. Therefore, we used a large image database acquired with the SECUGEN HAMSTER PLUS USB sensor. In order to use complete datasets, enrollment and verification were attempted with the same sensor, resulting in approximately 340,000 genuine and 400,000 impostor verification attempts. The first image quality metric is our proprietary pattern-based metric, while the second metric is the publicly available NFIQ from NIST. The pattern enrollment quality value Q can be between 0 and 100. Enrollment quality is considered adequate when Q ≥ 40. NFIQ provides an integer value in the range of 1 (highest) to 5 (poorest). Table 7 shows that the performance gain against the pattern-based algorithm (G_P) is more pronounced for lower quality enrollments. However, this trend is not observed in the performance gain against the minutiae-based algorithm (G_M) for low quality enrollments.
Fingerprint Pattern and Minutiae Fusion in Various Operational Scenarios
107
Nevertheless, the performance gain (due to fusion) against the minutiae-based algorithm (G_M) is much greater in absolute terms than G_P in all the simulations, except in the VenusR-VenusW sensor interoperable scenario.

Table 7. Effect of enrollment quality on fused performance
Enrolment Quality   G_P at 0.1% / 0.01% / 0.005% FAR    G_M at 0.1% / 0.01% / 0.005% FAR
Q < 40 (low)        33.7 / 44.7 / 40.2                  52.5 / 61.6 / 63.5
Q ≥ 40              22.3 / 31.0 / 37.2                  57.1 / 66.1 / 69.2
NFIQ > 2 (low)      32.9 / 44.6 / 43.2                  57.2 / 61.8 / 64.9
NFIQ ≤ 2            20.8 / 28.8 / 33.0                  55.6 / 64.7 / 69.0
In Figures 1 and 2, two successful fusion verification results are presented for low enrollment quality images. In order to demonstrate their effectiveness, we selected a score of 0.5 as a specific security threshold for successful verification. Figure 1 presents a case where the pattern score s_p is greater than 0.5 and the minutiae score s_m is less than this threshold, while Figure 2 shows a case where the pattern score s_p is less than the security threshold and the minutiae score s_m is greater than it.
Fig. 1. Enrollment and verification images (s_p > 0.5 and s_m < 0.5)

Fig. 2. Enrollment and verification images (s_p < 0.5 and s_m > 0.5)
5.3 Discussions

From Tables 3 to 7 it can be concluded that even when the pattern algorithm is significantly better than the minutiae algorithm, score-level fusion still improves the performance significantly compared to both individually. This is due to the fact that these algorithms are fundamentally different from one another. This contradicts the general conclusion in [1] that multi-matcher score-level fusion, using two matchers on the same data, is much less effective. The extensive simulations and analysis presented in this section highlight the finding that a significant improvement in accuracy can be gained when the algorithms are conceptually different. Moreover, the improvement in accuracy due to fusion is even more pronounced in non-ideal operating conditions, such as sensor interoperable deployment scenarios and challenging environmental conditions, as well as with lower quality enrollment images. Table 8 presents two sets of image samples for successful fusion verification in the sensor interoperable scenario. The first column shows a sample of enrollment and verification images where the pattern score s_p is greater than 0.5 and the minutiae score s_m is less than this threshold, while the second column shows a result where the pattern score s_p is less than 0.5 and the minutiae score s_m is greater than this threshold.

Table 8. Image samples of successful fusion in the sensor interoperable scenario
[Table 8 layout: rows Enroll and Verify contain fingerprint image samples; columns NOPP3-NTCS1 (s_p > 0.5 and s_m < 0.5) and NOPP3-VenusR (s_p < 0.5 and s_m > 0.5)]
Table 9 presents two sets of image samples for successful fusion verification in challenging environmental conditions. The first column shows a sample of enrollment and verification images where the pattern score s_p is greater than 0.5 and the minutiae score s_m is less than this threshold, while the second column shows a result where the pattern score s_p is less than 0.5 and the minutiae score s_m is greater than this threshold.

Table 9. Image samples of successful fusion in challenging environmental conditions
[Table 9 layout: rows Enroll and Verify contain fingerprint image samples; columns VenusR-VenusC (s_p > 0.5 and s_m < 0.5) and VenusR-VenusW (s_p < 0.5 and s_m > 0.5)]
6 Conclusions

This paper presents three scenarios where unimodal score-based fusion of pattern and minutiae fingerprint verification algorithms can be very effective in improving accuracy. The first is sensor interoperability, where one sensor is used for enrollment and a different sensor is used for verification; the second scenario is verification with sensors in challenging environmental conditions; and the third scenario is verification with low quality enrollment images. The results are supported by extensive simulations with various sensors and environmental conditions, which is a significant contribution of this work. The authors believe that these findings can provide useful insights not only to the biometric research community but also to the wider community of biometric users and integrators.
References
1. Ulery, B., Hicklin, A., Watson, C., Fellner, W., Hallinan, P.: Studies of Biometric Fusion. NIST Report (NISTIR 7346) (September 2006)
2. Nandakumar, K., Chen, Y., Dass, S.C., Jain, A.K.: Likelihood Ratio-Based Biometric Score Fusion. IEEE Transactions on Pattern Analysis and Machine Intelligence 30(2), 342–347 (2008)
3. Hube, J.P.: Neyman-Pearson Biometric Score Fusion as an Extension of the Sum Rule. In: Prabhakar, S., Ross, A.A. (eds.) Biometric Technology for Human Identification IV. Proceedings of the SPIE, vol. 6539, p. 65390M (2007)
4. Poh, N., Bourlai, T., Kittler, J., Allano, L., Alonso-Fernandez, F., Ambekar, O., Baker, J., Dorizzi, B., Fatukasi, O., Fierrez, J., Ganster, H., Ortega-Garcia, J., Maurer, D., Salah, A.A., Scheidat, T., Vielhauer, C.: Benchmarking Quality-Dependent and Cost-Sensitive Score-Level Multimodal Biometric Fusion Algorithms. IEEE Transactions on Information Forensics and Security 4(4), 849–866 (2009)
5. Wang, F., Han, J.: Multimodal biometric authentication based on score level fusion using support vector machine. Opto-Electronics Review 17(1), 59–64 (2009)
6. Nandakumar, K., Chen, Y., Jain, A.K., Dass, S.C.: Quality-based Score Level Fusion in Multibiometric Systems. In: Proc. the 18th International Conference on Pattern Recognition, ICPR (2006)
7. Raghavendra, R., Rao, A., Hemantha Kumar, G.: A Novel Approach for Multimodal Biometric Score Fusion using Gaussian Mixture Model and Monte Carlo Method. In: International Conference on Advances in Recent Technologies in Communication and Computing (2009)
8. Ren, C., Yin, Y., Ma, J., Yang, G.: A Novel Method of Score Level Fusion Using Multiple Impressions for Fingerprint Verification. In: Proc. IEEE International Conference on Systems, Man, and Cybernetics, San Antonio, TX, USA, pp. 5051–5056 (October 2009)
9. Alonso-Fernandez, F., Veldhuis, R.N.J., Bazen, A.M., Fierrez-Aguilar, J., Ortega-Garcia, J.: Sensor Interoperability and Fusion in Fingerprint Verification: A Case Study using Minutiae- and Ridge-Based Matchers. In: IEEE 9th Int. Conf. Control, Automation, Robotics and Vision (December 2006)
10. Ross, A., Jain, A.: Biometric Sensor Interoperability: A Case Study In Fingerprints. In: Proc. of International ECCV Workshop on Biometric Authentication (May 2004)
11. Ross, A., Jain, A., Reisman, J.: A Hybrid Fingerprint Matcher. In: IEEE 16th International Conference on Pattern Recognition (2002)
12. Fierrez-Aguilar, J., Nanni, L.: Combining Multiple Matchers for Fingerprint Verification: A Case Study in FVC 2004 (2004)
Fingerprint Verification Using Rotation Invariant Feature Codes Muhammad Talal Ibrahim, Yongjin Wang, Ling Guan, and A.N. Venetsanopoulos Department of Electrical and Computer Engineering Ryerson University, Toronto, Ontario, Canada
Abstract. This paper presents an improved image-based fingerprint verification system. The proposed system enhances an input fingerprint image using a contextual filtering technique in the frequency domain, and uses complex filters to identify the core point. Subsequently, a region of interest (ROI) of a predefined size, centered around the detected core point, is extracted. The resulting ROI is rotated based on the detected core point angle to ensure rotation invariance. The proposed system extracts the absolute average deviation from the outputs of eight oriented Gabor filters applied to the ROI. To reduce the dimensionality of the extracted features while generating a more discriminatory representation, this paper compares the unsupervised principal component analysis and the supervised linear discriminant analysis methods for dimensionality reduction. User-specific thresholding schemes are investigated. The effectiveness of the proposed algorithm is evaluated on the public FVC2002 Set A database. Experimental results demonstrate the superiority of the introduced solution in comparison with existing approaches.
1 Introduction
Fingerprint is one of the most widely used biometric traits for security purposes due to its high accuracy, stability, public acceptability, and low cost. Many different methods have been proposed in the literature for fingerprint verification, which can be roughly categorized as minutiae-based and image-based methods. In minutiae-based methods, a set of unordered minutiae points is identified after some pre-processing steps and compared with the minutiae points of the template [1,2,3]. The dissimilarity measure is based on the set difference. A post-processing step is usually adopted to remove false minutiae points [4]. In general, minutiae-based methods are computationally complex due to the requirement of time-consuming preprocessing, and the performance of such methods is highly dependent on the quality of the fingerprint images. Unlike minutiae-based approaches, image-based methods extract features from the gray level fingerprint directly. They require less preprocessing and hence are computationally more efficient. Moreover, image-based methods generally work well on low quality images in comparison with minutiae-based methods. A
number of image-based methods have been proposed in the past, most of which employ texture analysis using filter banks [5]. In general, local texture analysis has been shown to be more effective than its global counterparts [6][7]. In [8], a local texture analysis based method was proposed, where a set of Gabor filters with four and eight different orientations was applied to an ROI of 96 × 120 pixels, cropped with the core point as its center. The detection of the core point was done manually and the ROI was divided into non-overlapping blocks of size 8 × 8. The magnitudes of each non-overlapping block corresponding to the different orientations of the Gabor filters were used to form a feature vector. Jain et al. [9] proposed a filter-bank based method, where an ROI centered at the core point was divided into a number of concentric bands and each sub-band was further divided into several sectors. A set of eight oriented Gabor filters was applied to the image, and the average absolute deviation (AAD) of the individual sectors was used to build a feature vector. They also proposed a cyclic shift method to make the images invariant to rotations of 11.25°. Euclidean distance was used as a measure of the similarity between the feature vectors of the template and the test image. Their method requires the storage of a large number of cyclically rotated images as templates and is computationally expensive. Furthermore, it is only invariant to rotations of 11.25°, and therefore provides limited rotation invariance. A. B. J. Teoh et al. [10] made use of the Fourier-Mellin transform in combination with wavelets. An ROI of size 128 × 128 centered at the core point was cropped and the wavelet transform up to the second level was applied to it. At each level of the wavelet decomposition, the Fourier-Mellin transform was used to make the ROI translation, rotation and scale invariant. Finally, Euclidean distance was used to find the matching score between two fingerprints. Recently, Hu's invariant moments [11] were used in [12][13]. The ROI of size 96 × 96 centered at the core point was divided into 16 × 16 non-overlapping blocks. The invariant moments were calculated for each non-overlapping block and a feature vector of length 252 was used to represent each fingerprint. The eigenvalue-weighted cosine was used as a distance metric. It was shown that the proposed method outperforms that of [10]. However, their methods only provide local rotation invariance, and it is not clear how the correspondence between local blocks is constructed. In this paper, we introduce an improved filter-bank based fingerprint verification system, featuring automatic and rotation invariant feature extraction. The proposed method applies an image enhancement technique in conjunction with an automatic core detection method as the preprocessing step for noise reduction, while producing a rotation invariant fingerprint image. A filter-bank based method is then employed on the detected ROI to capture the intrinsic information embedded in a fingerprint image, followed by subspace methods to reduce the dimensionality of the features and obtain better verification accuracy. Extensive experimentation demonstrates that the proposed method produces promising results in comparison with other methods. The remainder of this paper is organized as follows. Section 2 introduces the methodology of the proposed solution. The experimental results are presented in Section 3, and conclusions are provided in Section 4.
2 Proposed System

2.1 Fingerprint Enhancement and Core Point Detection
To improve the quality of the captured fingerprint images, the proposed system performs image enhancement first. For this purpose, a contextual filtering based method [14], which consists of two steps, is employed. In the first step, the Short Time Fourier Transform (STFT) is used to calculate the local ridge orientation, local ridge frequency and the region mask, whereas the second step conducts the enhancement in the frequency domain. Once the fingerprint has been enhanced, the next step is to detect the core point. In this work, core detection is performed by utilizing a complex filtering based method, which has been demonstrated to be accurate and robust against noise [15]. This method is based on a multi-scale approach where an input image is first converted into a complex image, and a four-level sub-sampled Gaussian pyramid is then applied. Only the angle of the complex orientation field is used, while the magnitude is set to one in the multi-scale filtering. Filters of first-order symmetry are applied at each resolution for the detection of singular points. Finally, singular points are tracked from the highest to the lowest level. The results of the above mentioned steps are shown in Fig. 1.
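As an illustration of the complex filtering idea, the following single-scale sketch builds the squared complex orientation field, normalises its magnitude to one, and correlates it with a first-order symmetry filter (x + iy) times a Gaussian, taking the strongest response as the core. The method of [15] is multi-scale with a four-level pyramid and peak tracking across levels, so this shows only one level, and all parameter values are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

def detect_core(img, sigma_grad=1.0, sigma_filter=6.0):
    """Single-scale complex-filter core detection sketch."""
    img = img.astype(float)
    gy, gx = np.gradient(gaussian_filter(img, sigma_grad))
    z = (gx + 1j * gy) ** 2                            # squared complex orientation field
    mag = np.abs(z)
    z = np.divide(z, mag, out=np.zeros_like(z), where=mag > 0)  # unit magnitude, keep angle

    r = int(3 * sigma_filter)                          # first-order symmetry filter support
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    h = (xx + 1j * yy) * np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma_filter ** 2))

    resp = fftconvolve(z, h, mode="same")              # complex filter response
    cy, cx = np.unravel_index(np.argmax(np.abs(resp)), resp.shape)
    return (cy, cx), np.angle(resp[cy, cx])            # core position and angle
```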
Fig. 1. Image from FVC2002 Set A: a) original image 1_8 from DB1_a; b) enhanced image 1_8 from DB1_a; c) core point detected in image 1_8.
2.2 ROI and Feature Extraction
One of the most well-known image-based fingerprint feature extraction methods is the filter-bank based approach [9][16], which uses a bank of Gabor filters to capture both local and global characteristics of a fingerprint. The main drawback of this method is that it is computationally expensive and only invariant to rotations of 11.25°. For each fingerprint, one needs to generate ten templates, which increases the enrollment and recognition time as well as requiring larger template storage. In this work, we make an effort to address the limitation of the existing works [9][16] by making the images rotation invariant before feature extraction. Once the core point has been detected, we identify an ROI of size 223 × 223 with the
Fig. 2. a) Region of interest. b) to d) Results after applying three of the eight oriented Gabor filters. e) to g) AAD features corresponding to the images in 2(b) to 2(d), respectively.
core point as its center, as shown in Fig. 2(a). To produce rotation invariance, the detected ROI is rotated by an angle of π/2 − θcore, where θcore is the angle of the core point, such that the core point is aligned at an angle of π/2. The proposed method works as follows:
1. Locate the core point in the fingerprint and extract the ROI of a predefined size of 223 × 223 around that core point.
2. Align the core point at an angle of π/2 by rotating the ROI by an angle of π/2 − θcore, where θcore is the angle of the core point.
3. Divide the ROI into B concentric bands, and further divide each sub-band into k sectors. For a fingerprint scanned at 500 dpi, the value of B is set to 5 and the value of k is set to 16, whereas the width of each sector is 20 pixels, as suggested in [16].
4. In order to reduce the effects of noise, normalize each sector such that the gray levels in each sector have a specified mean and variance.
5. Apply a Gabor filter bank to the region of interest. Gabor filters enhance the ridges and valleys that are oriented at the same angle as the filter and suppress the ridges and valleys oriented at different angles, as shown in Fig. 2.
6. Compute the AAD of the oriented Gabor responses for each sector using the following equation:

$$ V_{i\theta^{\circ}} = \frac{1}{n_i}\sum_{(x,y)\in S_i}\left|F_{i\theta^{\circ}}(x,y)-m_{i\theta^{\circ}}\right| \qquad (1) $$
where F_iθ(x, y) is the image filtered at orientation θ, m_iθ is the mean of the pixel values of F_iθ(x, y) in sector S_i, and n_i is the number of pixels in S_i. In the above equation, the feature value V_iθ is calculated for all i ∈ {0, 1, ..., 79} and θ ∈ {0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°}. Each filtered image is therefore represented by a feature vector of length 16 × 5 = 80, as shown in Fig. 2, and overall a fingerprint is encoded by a feature vector of length 8 × 80 = 640. This relaxes the requirement of generating and storing multiple templates corresponding to different rotations of the original image, and solves the problem of the limited rotation invariance of the filter-bank based methods [9][16].
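As an illustration of steps 5 and 6, the following sketch computes sector-wise AAD features from an ROI that has already been cropped, rotated and normalized. It is only a minimal sketch and not the authors' implementation: the Gabor parameters, the inner radius of the first band, the interpretation of the 20-pixel width as the band width, and the (x, y) center convention are assumptions made for illustration, and the sector normalization of step 4 is omitted.

```python
import cv2
import numpy as np

def sector_indices(shape, center, n_bands=5, n_sectors=16, band_width=20, r0=12):
    """Assign each ROI pixel to one of n_bands * n_sectors sectors.

    center = (col, row); r0 (radius of the unused inner disc) is an
    illustrative choice.  Returns a label image (-1 outside all bands).
    """
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    dx, dy = x - center[0], y - center[1]
    r = np.hypot(dx, dy)
    ang = np.mod(np.arctan2(dy, dx), 2 * np.pi)
    band = ((r - r0) // band_width).astype(int)
    sector = (ang / (2 * np.pi / n_sectors)).astype(int)
    labels = band * n_sectors + sector
    labels[(r < r0) | (band >= n_bands)] = -1
    return labels

def aad_feature_vector(roi, labels, n_sectors_total=80,
                       thetas=np.arange(8) * np.pi / 8):
    """Gabor responses at 8 orientations, AAD per sector -> 8 x 80 = 640 values."""
    feats = []
    for theta in thetas:
        # Gabor parameters below are assumptions, not the paper's exact values.
        kern = cv2.getGaborKernel((33, 33), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=1.0, psi=0)
        resp = cv2.filter2D(roi.astype(np.float32), cv2.CV_32F, kern)
        for s in range(n_sectors_total):
            vals = resp[labels == s]
            feats.append(np.abs(vals - vals.mean()).mean() if vals.size else 0.0)
    return np.asarray(feats)          # 640-dimensional feature code
```

The resulting 640-dimensional vector corresponds to the rotation-invariant fingerprint code described above.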
2.3
Dimensionality Reduction
The Gabor filter bank method captures the characteristics of a fingerprint image with respect to different orientations and outputs a feature vector of length 640. Although it is possible to perform matching using these feature vectors directly, doing so is computationally complex and provides limited discriminatory power. To address this problem, we apply subspace techniques for dimensionality reduction. Principal component analysis (PCA) [17] is one of the most influential subspace techniques for dimensionality reduction. It is an unsupervised learning technique which provides an optimal representation, in the mean-squared-error sense, of the input in a lower-dimensional space. Linear discriminant analysis (LDA) [18] is another commonly used method for dimensionality reduction. In contrast to PCA, LDA is a supervised learning technique which produces a class-specific feature space based on the maximization of the so-called Fisher criterion, defined as the ratio of between-class scatter to within-class scatter. In this work, we compare the performance of these two representative techniques.
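The sketch below shows how the two subspace projections could be realized with off-the-shelf tools; the scikit-learn estimators and the 99% energy criterion (used later in the experiments for PCA) are stand-ins for the authors' own implementation.

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def reduce_features(train_x, train_y, test_x, method="pca", energy=0.99):
    """Project 640-D AAD vectors onto a low-dimensional subspace.

    For PCA, 'energy' mirrors the 99% eigenvalue-energy criterion used in the
    experiments; for LDA the output dimension is at most (#classes - 1).
    """
    if method == "pca":
        model = PCA(n_components=energy)      # keep 99% of the variance
        model.fit(train_x)
    else:
        model = LinearDiscriminantAnalysis()
        model.fit(train_x, train_y)
    return model.transform(train_x), model.transform(test_x)
```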
3
Experimental Results
To evaluate the effectiveness of the proposed method, we conducted experiments on the FVC2002 Set A database, which contains four subsets. Each subset contains 800 images from 100 subjects, with 8 images per subject. The detailed configuration of each database is shown in Table 1.

Table 1. Details of FVC2002 Database Set A

Name    Sensor Type         Image Size         Resolution
DB1_a   Optical Sensor      388 × 374 pixels   500 dpi
DB2_a   Optical Sensor      296 × 560 pixels   569 dpi
DB3_a   Capacitive Sensor   300 × 300 pixels   500 dpi
DB4_a   SFinGe v2.51        288 × 384 pixels   500 dpi
In this work, we focus on the fingerprint verification problem. Verification is a one-to-one match that determines whether the claim of an identity is true or false. During enrollment, a feature vector x_i, i = 1, 2, ..., C, where C is the total number of participants, is extracted from the biometric data of each individual and stored in the system database as a template. During verification, a feature vector y is extracted from the biometric signal of the authenticating individual ID_y and compared with the stored template x_j of the claimed identity ID_j through a distance (dissimilarity) function S. The decision is made based on the system threshold τ: accept if S(y, x_j) ≤ τ, and reject if S(y, x_j) > τ.

In our experiments, we select six of the eight fingerprint images from each subject to train the proposed system and use the remaining two samples for testing. The genuine and impostor match tests were performed on the four subsets of the FVC2002 database respectively. The evaluation is based on the equal error rate (EER), which represents the operating point at which the False Acceptance Rate (FAR) and the False Rejection Rate (FRR) are equal; the lower the EER, the better the verification performance. FAR represents the probability that a false match occurs and is computed as the ratio of the number of accepted impostor claims to the total number of impostor attempts. FRR represents the probability that a false rejection occurs and is computed as the ratio of the number of rejected genuine claims to the total number of genuine attempts. To compute the FAR and the FRR, the genuine match tests compare the fingerprint of each person with the other fingerprints of the same person, while the impostor match tests compare the fingerprints of each person with the fingerprints belonging to other individuals. For each data set there are 200 test patterns, therefore the numbers of genuine and impostor matches are 1 × 200 = 200 and 99 × 200 = 19,800, respectively. We tested the proposed method using four different randomly selected training and testing splits from each of the four subsets of the FVC2002 database, and the average of the four experimental results is reported. All the results are obtained under the same experimental conditions.

For the PCA method, the reduced dimensionality M is selected automatically based on $\sum_{i=1}^{M}\lambda_i \big/ \sum_{i=1}^{J}\lambda_i \geq 99\%$, where J denotes the dimensionality of the original features and λ_i is the i-th largest eigenvalue. For the LDA method, the reduced dimensionality is M = C − 1 = 100 − 1 = 99, where C is the number of human subjects used for training in each data set. For both methods, the Euclidean and Cosine distances are employed as dissimilarity measures for comparison purposes.

In the verification scenario, it is possible to utilize a user-specific threshold for the final decision, based on the characteristics of a specific user. To provide some insight into the design of a user-specific thresholding scheme, in this paper we examine three schemes in which the intra-class, the inter-class, and both the intra- and inter-class distributions are taken into consideration.
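A rough sketch of how FAR, FRR and the EER operating point can be computed from lists of genuine and impostor distances is given below; the threshold sweep and the averaging at the crossing point are illustrative choices, not the exact evaluation protocol.

```python
import numpy as np

def far_frr_eer(genuine, impostor):
    """Sweep a decision threshold over genuine/impostor distance scores.

    Scores are distances (smaller = better match): a claim is accepted when
    its distance is <= threshold.  Returns FAR, FRR and an approximate EER.
    """
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    far = np.array([(impostor <= t).mean() for t in thresholds])  # false accepts
    frr = np.array([(genuine > t).mean() for t in thresholds])    # false rejects
    i = np.argmin(np.abs(far - frr))
    return far, frr, (far[i] + frr[i]) / 2.0
```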
Let μ^k_intra and σ^k_intra denote the mean and standard deviation of the intra-class distance distribution obtained from the training samples of a specific user ID_k, with respect to a certain distance metric, such as the Euclidean or Cosine distance in this paper. Let μ^k_inter and σ^k_inter denote the mean and standard deviation of the inter-class distance distribution obtained for that user, which is estimated by labeling all the samples from other users as one class and computing their distances to the samples of user ID_k. For the intra-class scheme, the user-specific threshold for ID_k is computed as τ_k = μ^k_intra + ξσ^k_intra, where ξ is a parameter that changes the value of the threshold, such that different FAR and FRR can be obtained depending on the requirements of the application. For the inter-class scheme, the user-specific threshold for ID_k is computed as τ_k = μ^k_inter − ξσ^k_inter. For the intra- and inter-class scheme, we assume that the intra-class and inter-class distributions are both Gaussian and compute the point τ_0 at which P_intra(x > τ_0) = P_inter(x < τ_0), where P denotes probability. The user-specific threshold for subject ID_k is then defined as τ_k = ξτ_0, where ξ is a controlling parameter.

Table 2. Verification results of user-specific thresholding schemes (EER in %)

Scheme                  Method  Distance  DB1_a   DB2_a   DB3_a    DB4_a   Average
Intra-class             PCA     Euc.      2.6730  4.6850  8.9135   7.2696  5.8853
                        PCA     Cos.      2.6319  4.4091  9.3864   6.7702  5.7994
                        LDA     Euc.      4.5802  4.6275  12.2330  8.4255  7.4666
                        LDA     Cos.      5.7771  4.0795  16.4792  7.9596  8.5739
Intra- and Inter-class  PCA     Euc.      1.8396  2.9981  6.4514   6.9912  4.5701
                        PCA     Cos.      2.0676  2.9198  6.3144   5.4722  4.1935
                        LDA     Euc.      2.0884  2.5284  8.3870   5.5271  4.6327
                        LDA     Cos.      1.1957  1.2866  5.3864   3.8687  2.9344
Inter-class             PCA     Euc.      1.7241  2.9905  5.9268   6.5657  4.3018
                        PCA     Cos.      1.7538  2.5492  6.6963   5.3125  4.0779
                        LDA     Euc.      0.8340  1.6117  7.1231   4.8062  3.5937
                        LDA     Cos.      0.2740  0.5032  3.1812   2.0044  1.4907
Table 2 compares the results obtained using the different user-specific thresholding schemes, with ξ varied over the range 0 to 10 in our experiments. It can be seen that the proposed inter-class distribution based user-specific thresholding scheme achieves the best verification accuracy when applied to the LDA-reduced low-dimensional Gabor features with the Cosine distance as the dissimilarity metric. The lower performance of the intra-class related schemes is due to the fact that there are only a small number of training samples for each subject, and hence the estimation of the intra-class distribution is not accurate. The inter-class distribution, on the other hand, is computed from a relatively large number of training samples and therefore provides a better estimate of the characteristics of a given user with respect to other users. This observation offers a guideline for the design of a user-specific scheme: when the number of training samples per class is small, exploiting the inter-class distribution may provide better recognition performance.
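As a small illustration of the best-performing scheme, the following sketch computes the inter-class user-specific threshold τ_k = μ^k_inter − ξσ^k_inter; treating all other users' training vectors as a single class and using the Euclidean distance are assumptions taken directly from the description above.

```python
import numpy as np

def inter_class_threshold(other_users_feats, user_feats, xi=1.0):
    """User-specific threshold tau_k = mu_inter - xi * sigma_inter.

    other_users_feats: training vectors of all other users (M x D);
    user_feats: enrolled vectors of user k (K x D); xi trades FAR vs. FRR.
    """
    d = np.linalg.norm(other_users_feats[:, None, :] - user_feats[None, :, :],
                       axis=2)                 # all pairwise Euclidean distances
    return d.mean() - xi * d.std()
```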
Table 3. Comparison of existing methods with the proposed system on FVC2002 Set A

Method           DB1_a  DB2_a  DB3_a  DB4_a  Average
Ross [19]        1.87   3.98   4.64   6.21   4.17
Jin [10]         2.43   4.41   5.18   6.62   4.66
Amornraksa [20]  2.96   5.42   6.79   7.53   5.68
Park [12]        1.63   3.78   4.20   4.68   3.57
Proposed         0.27   0.50   3.18   3.00   1.49
We have compared our proposed method with some of the existing image-based methods for fingerprint verification. In [19], a hybrid fingerprint verification system was proposed in which the scores from minutiae-based and image-based verification systems are combined using the sum rule. The Fourier-Mellin transform was integrated with wavelets in [10], where an ROI of size 128 × 128 centered at the core point was analyzed by wavelets up to the second level and Fourier-Mellin features were extracted at each level. A fingerprint verification system based on the discrete cosine transform was proposed in [20], where an ROI of 64 × 64 was divided into four 32 × 32 non-overlapping blocks and the standard deviations of six predefined blocks in each non-overlapping block were used as features. More recently, Yang and Park made use of Hu's invariant moments for feature extraction and used the Euclidean weighted cosine as the distance metric for fingerprint verification [12]. Table 3 shows the average EER of our proposed system in comparison with the above-mentioned methods. It can be seen that the average EER of our proposed system is 1.49%, whereas the average EERs of [19], [10], [20] and [12] are 4.17%, 4.66%, 5.68% and 3.57%, respectively.
4
Conclusions
This paper has presented an image-based algorithm for automatic and rotation-invariant fingerprint verification. The fingerprint images are first pre-processed using an STFT analysis, which enhances low-quality images and makes them suitable for core point detection. Once the core point is detected, an ROI of a predefined size is cropped with the core point as its center. The fingerprint image is made rotation invariant by rotating the ROI such that its core point is aligned at an angle of π/2. The resulting ROI becomes the input to a bank of eight oriented Gabor filters, and AAD features are extracted for fingerprint representation. To further reduce the dimensionality of the feature space while obtaining discriminatory representations, two dimensionality reduction tools are examined and compared. In addition, we have studied and compared three different user-specific thresholding schemes. The effectiveness of the proposed solution is demonstrated on the well-known public database FVC2002 Set A. The proposed algorithm achieves better verification performance than prominent existing methods.
References

1. Ratha, N.K., Chen, S.Y., Jain, A.K.: Adaptive flow orientation-based feature extraction in fingerprint images. Pattern Recognition 28(11), 1657–1672 (1995)
2. Jain, A.K., Hong, L., Bolle, R.M.: Real-time matching system for large fingerprint databases. IEEE Trans. on Pattern Analysis and Machine Intelligence 19(4), 302–314 (1997)
3. Tico, M., Immonen, E., Ramo, P., Kuosmanen, P., Saarinen, J.: Fingerprint recognition using wavelet features. In: Proc. ISCAS, Australia, vol. 2, pp. 21–24 (May 2001)
4. Hung, D.C.D.: Enhancement and feature purification of fingerprint images. Pattern Recognition 26(11), 1661–1671 (1993)
5. Maltoni, D., Maio, D., Jain, A.K., Prabhakar, S.: Handbook of Fingerprint Recognition, 2nd edn. Springer, London (2009)
6. Khalil, M.S., Mohamad, D., Khan, M.K., Al-Nuzaili, Q.: Fingerprint pattern classification. Digital Signal Processing 20, 1264–1273 (2010)
7. Nanni, L., Lumini, A.: Descriptors for image-based fingerprint matcher. Expert Systems with Applications 36(10), 12414–12422 (2009)
8. Lee, C.J., Wang, S.D.: Fingerprint feature extraction using Gabor filters. Electronics Letters 35(4), 288–290 (1999)
9. Jain, A.K., Prabhakar, S., Hong, L., Pankanti, S.: Filterbank-based fingerprint matching. IEEE Trans. on Image Processing 9, 846–859 (2000)
10. Jin, A.T.B., Ling, D.N.C., Song, O.T.: An efficient fingerprint verification system using integrated wavelet and Fourier-Mellin invariant transform. Image and Vision Computing 22(6), 503–513 (2004)
11. Hu, M.K.: Visual pattern recognition by moment invariants. IRE Transactions on Information Theory, 179–187 (1962)
12. Yang, J.C., Park, D.S.: A fingerprint verification algorithm using tessellated invariant moment features. Neurocomputing 71(10-12), 1939–1946 (2008)
13. Yang, J.C., Park, D.S.: Fingerprint verification based on invariant moment features and nonlinear BPNN. International Journal of Control, Automation, and Systems 6(6), 800–808 (2008)
14. Chikkerur, S., Cartwright, A.N., Govindaraju, V.: Fingerprint enhancement using STFT analysis. Pattern Recognition 40(1), 198–211 (2007)
15. Nilsson, K., Bigun, J.: Complex filters applied to fingerprint images detecting prominent symmetry points used for alignment. In: Biometric Authentication, pp. 39–47 (2002)
16. Prabhakar, S.: Fingerprint Classification and Matching Using a Filterbank. PhD thesis, Michigan State University (2001)
17. Jolliffe, I.T.: Principal Component Analysis. Springer, New York (1986)
18. Belhumeur, P.N., Hespanha, J.P., Kriegman, D.J.: Eigenfaces vs. Fisherfaces: recognition using class specific linear projection. IEEE Transactions on PAMI 19(7), 711–720 (1997)
19. Ross, A., Jain, A.K., Reisman, J.: A hybrid fingerprint matcher. Pattern Recognition 36(7), 1661–1673 (2003)
20. Amornraksa, T., Tachaphetpiboon, S.: Fingerprint recognition using DCT features. Electronics Letters 42(9), 522–523 (2006)
Can Gender Be Predicted from Near-Infrared Face Images? Arun Ross and Cunjian Chen Lane Department of Computer Science and Electrical Engineering West Virginia University, USA
[email protected],
[email protected]
Abstract. Gender classification based on facial images has received increased attention in the computer vision literature. Previous work on this topic has focused on images acquired in the visible spectrum (VIS). We explore the possibility of predicting gender from face images acquired in the near-infrared spectrum (NIR). In this regard, we address the following two questions: (a) Can gender be predicted from NIR face images; and (b) Can a gender predictor learned using VIS images operate successfully on NIR images and vice-versa? Our experimental results suggest that NIR face images do have some discriminatory information pertaining to gender, although the degree of discrimination is noticeably lower than that of VIS images. Further, the use of an illumination normalization routine may be essential for facilitating cross-spectral gender prediction. Keywords: Biometrics, Faces, Gender, Near-Infrared, Cross-Spectral.
1
Introduction
Automated gender identification plays an important role in Human-Computer Interaction (HCI), upon which more complex visual systems are built [13]. Recognizing a person’s gender will enhance the HCI’s ability to respond in a userfriendly and socially acceptable manner. In the realm of biometrics, gender is viewed as a soft biometric trait that can be used to index databases or enhance the recognition accuracy of primary traits such as face [6]. Predicting gender based on human faces has been extensively studied in the literature [14,1]. Two popular methods are due to Moghaddam et al. [14] who utilize a support vector machine (SVM) for gender classification of thumbnail face images, and Baluja et al. [1] who present the use of Adaboost for predicting gender. Gutta et al. [5] consider the use of hybrid classifiers consisting of an ensemble of RBF networks and decision trees. Other approaches utilize gender-specific information, such as hair, to enhance gender prediction [11], or genetic algorithms to select features that encode gender information [15]. A systematic overview on the topic of gender classification from face images can be found in [13].
Funding from the Office of Naval Research is gratefully acknowledged.
Though gender classification has received much attention from the research community, previous work has focused on face images obtained in the visible spectrum (VIS). The aim of this study is to explore gender classification in the near-infrared spectrum (NIR) using learning-based algorithms. The use of NIR images for face recognition has become necessary especially in the context of a night-time environment where VIS face images cannot be easily discerned [9]. Further, NIR images are less susceptible to changes in ambient illumination. Thus, cross-spectral matching has become an important topic of research [8,12,2]. To the best of our knowledge, this is the first work that explores gender recognition in NIR face images. In this regard, we address the following questions:

– Q1. Can gender be predicted from NIR face images?
– Q2. Can a gender predictor learned using VIS images operate successfully on NIR images, and vice-versa?

To answer Q1, we use an existing gender prediction mechanism based on SVM [14]. In order to address Q2, we hypothesize that an illumination normalization scheme may be necessary prior to invoking the gender classifier. In the next section, we describe the design of the gender classifier (predictor), with special emphasis on illumination normalization approaches for cross-spectral gender prediction. Then, we report experimental results that demonstrate the possibility of assessing gender from NIR face images. Finally, we discuss the difficulties in cross-spectral gender classification and indicate future directions.
2
Proposed Methods
In order to address the questions raised above, we utilize a gender prediction scheme based on SVMs. Such a scheme has been shown to be efficient in the VIS domain [14,4]. The SVM-based classification scheme is described below. Given a facial image x_i, in either the VIS or the NIR domain, the feature extractor is applied to obtain a discriminant feature set s_i. The gender classifier G is then invoked to predict the gender from the image (Figure 1):

$$G(\alpha(x_i)) = \begin{cases} \;\;\,1 & \text{if } x_i \text{ is male} \\ -1 & \text{if } x_i \text{ is female} \end{cases} \qquad (1)$$

Here s_i = α(x_i) represents the transformation of the raw image into a feature vector.
2.1
Feature Extractor
Previous work on gender classification in the visible domain utilized features extracted via Principal Component Analysis (PCA) [14,4,15] or Haar-like features [1]. Features based on local binary patterns (LBP) have also been used [10].
Fig. 1. Cross-spectral gender classifier
In this work, we use features derived from PCA since they have been successfully tested in previous literature. Consider a labeled set of N training samples {(x_i, y_i)}, i = 1, ..., N, where x_i is the facial image and y_i is the associated class label. Here, y_i ∈ {−1, 1}, where −1 (+1) indicates a female (male). PCA is performed on the covariance matrix of the vectorized images,

$$\Sigma_g = \frac{1}{N}\sum_{i=1}^{N}(x_i-\bar{x})(x_i-\bar{x})^T, \qquad (2)$$

where x_i is the sample image after vectorization and x̄ is the mean vector of the training set. The eigenvectors can be obtained through the decomposition

$$\Sigma_g \Phi_g = \Phi_g \Lambda_g, \qquad (3)$$

where Φ_g are the eigenvectors of Σ_g and Λ_g are the corresponding eigenvalues. The number of eigenvectors used in our experiment is 60. The gender features are extracted by projecting the sample image into the subspace spanned by the eigenvectors:

$$s_i = \Phi_g^T (x_i - \bar{x}). \qquad (4)$$

Here, s_i is the feature vector representing the gender information of sample x_i. The feature vectors corresponding to the training set and their label information {s_i, y_i} are stored in the database. In the test (i.e., evaluation) stage, when an unknown face image is presented, the same feature extractor is invoked to obtain the feature vector, which is input to the classifier G to predict the gender.
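A minimal sketch of this feature extractor is shown below. It follows Eqs. (2)-(4) but computes the eigenvectors through an SVD of the centered data rather than by forming the D × D covariance matrix explicitly, which is an implementation convenience rather than part of the method; n_eig = 60 follows the number of eigenvectors quoted above.

```python
import numpy as np

def fit_pca(train_images, n_eig=60):
    """Estimate the mean face and the top eigenvectors of Eqs. (2)-(3).

    train_images: N x D matrix of vectorized face images.  The SVD of the
    centered data yields the same eigenvectors as the covariance
    decomposition, ordered by decreasing eigenvalue.
    """
    x_bar = train_images.mean(axis=0)
    centered = train_images - x_bar
    _, _, vt = np.linalg.svd(centered / np.sqrt(len(train_images)),
                             full_matrices=False)
    phi = vt[:n_eig].T                 # D x n_eig matrix of eigenvectors
    return x_bar, phi

def gender_features(x, x_bar, phi):
    """Eq. (4): s = Phi^T (x - x_bar)."""
    return phi.T @ (x - x_bar)
```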
2.2
Gender Classifier
To build the gender classifier G using an SVM, we need a labeled set of N training samples {(s_i, y_i)}, i = 1, ..., N. The gender classifier seeks an optimal hyperplane defined as

$$f(s) = \sum_{i=1}^{M} y_i \alpha_i \, k(s, s_i) + b, \qquad (5)$$

where k(s, s_i) represents the kernel function and the sign of f(s) determines the class label (gender) of s. The linear kernel is the simplest function; it is computed as the dot product ⟨s, s_i⟩ plus an optional constant c.
Fig. 2. Illustration of an SVM-based gender classifier with linear kernel on the HFB database [9]
Any vector s_i that has a non-zero α_i is called a support vector (SV) of the optimal hyperplane that separates the two classes. The commonly used kernels are the radial basis function (RBF) kernel and the linear kernel; in this work, the linear kernel was used. An example of gender classification using an SVM is shown in Figure 2. Here, the dimension of the extracted feature vector is reduced to two by PCA. The classifier was trained using images from the VIS spectrum and tested on images in the NIR spectrum.
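Under the assumption that scikit-learn's SVC is an acceptable stand-in for the authors' SVM implementation, the classifier of Eq. (5) with a linear kernel can be sketched as follows.

```python
from sklearn.svm import SVC

def train_gender_classifier(feats, labels):
    """Linear-kernel SVM realizing the decision rule of Eq. (5);
    labels are +1 (male) / -1 (female)."""
    clf = SVC(kernel="linear")
    clf.fit(feats, labels)
    return clf

# Example use on the PCA features s of a probe image (see the sketch in Sect. 2.1):
#   gender = clf.predict(s.reshape(1, -1))[0]
```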
2.3
Illumination Normalization
As stated earlier, we hypothesize that the use of an illumination normalization scheme may be necessary to accommodate cross-spectral gender prediction, where the training and test sets contain images pertaining to different spectral bands.

Self Quotient Image (SQI): According to the Lambertian model, the image formation process is described as

$$I(x, y) = \rho_w(x, y)\,n(x, y)\,s, \qquad (6)$$
where ρw (x, y) is the albedo of the facial surface, n is the surface normal and s is the lighting reflection. To reduce the impact of illumination, we need to separate out the extrinsic factor s from ρ and n. The self-quotient image, Q, of I is defined as [18],
Q=
I(x, y) ρw (x, y)n(x, y)s , = ˆ G ∗ [ρw (x, y)n(x, y)s] I(x, y)
(7)
where Î is the smoothed version of I and G is the smoothing kernel.

Retinex Model: The retinex approach is based on a reflectance-illumination model instead of the Lambertian model. It is an image enhancement algorithm [7] proposed to account for the lightness and color constancy and the dynamic range compression properties of the human vision system. It tries to compute the invariant property of the reflectance ratio under varying illumination conditions [3,18]. The retinex model is described as

$$I(x, y) = R(x, y)L(x, y), \qquad (8)$$
where I(x, y) is the image, R(x, y) is the reflectance of the scene and L(x, y) is the lighting. The lighting is considered to be the low-frequency component of the image I(x, y) and is thus approximated as

$$L(x, y) = G(x, y) * I(x, y), \qquad (9)$$
where G(x, y) is a Gaussian filter and ∗ denotes the convolution operator. The output of the retinex approach is the image R(x, y), computed as

$$R(x,y) = \frac{I(x,y)}{L(x,y)} = \frac{I(x,y)}{G(x,y)*I(x,y)}. \qquad (10)$$
Discrete Cosine Transform (DCT) Model: Since illumination variations typically manifest in the low-frequency domain, it is reasonable to normalize the illumination by removing the low-frequency components of an image. The DCT can first be applied to transform an image from the spatial domain to the frequency domain, and the illumination of the image can then be estimated via the low-frequency DCT coefficients, which appear in the upper-left corner of the DCT [3]. By setting the low-frequency components to zero and reconstructing the image, variations due to illumination can be reduced. The 2D M × N DCT can be computed as

$$C(u,v) = \alpha(u)\alpha(v)\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} I(x,y)\cos\!\left[\frac{\pi(2x+1)u}{2M}\right]\cos\!\left[\frac{\pi(2y+1)v}{2N}\right]. \qquad (11)$$

Here α(u) and α(v) are the normalization factors.

CLAHE Normalization: The CLAHE (Contrast Limited Adaptive Histogram Equalization) [19] method applies contrast normalization to local blocks in the image such that the histogram of pixel intensities in each block approximately matches a pre-specified histogram distribution. This scheme is applied to blocks of size 16 × 16. CLAHE is effective at improving local contrast without inducing much noise. It utilizes the normalized cumulative distribution of each gray level x in the block [2]:

$$f(x) = \frac{N-1}{M}\sum_{k=0}^{x} h(k). \qquad (12)$$
Here, M is the total number of pixels in the block, N is the number of gray levels in the block, and h is the histogram of the block. To improve the contrast, the CLAHE technique transforms the histogram of the block such that the histogram height falls below a pre-specified threshold. Gray level counts beyond the threshold are uniformly redistributed among the gray levels below the threshold. The blocks are then blended across their boundaries using bilinear interpolation.

Difference-of-Gaussian (DoG) Filtering: Another type of normalization is proposed in [16], where the local image structures are enhanced. One of the key components of [16] is Difference-of-Gaussian (DoG) filtering, which can be computed as

$$D(x, y \,|\, \sigma_0, \sigma_1) = [G(x, y, \sigma_0) - G(x, y, \sigma_1)] * I(x, y), \qquad (13)$$

where ∗ is the convolution operator and the Gaussian kernel with parameter σ is

$$G(x, y, \sigma) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-(x^2+y^2)/2\sigma^2}. \qquad (14)$$

This simple filtering scheme has the effect of subtracting two Gaussian-filtered images. The outputs of the various illumination normalization schemes are presented in Figure 3. The goal of illumination normalization is to facilitate cross-spectral gender classification by mitigating the effect of spectrum-specific features [17].
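Two of these normalization schemes, DoG filtering and CLAHE, can be sketched with OpenCV as below. The σ values, the clip limit, and the derivation of OpenCV's tile grid from the 16 × 16-pixel block size are assumptions made for illustration, not the parameter values used in the paper.

```python
import cv2
import numpy as np

def dog_normalize(img, sigma0=1.0, sigma1=2.0):
    """Difference-of-Gaussian filtering, Eq. (13); sigma values are assumptions."""
    img = img.astype(np.float32)
    low = cv2.GaussianBlur(img, (0, 0), sigma0)
    high = cv2.GaussianBlur(img, (0, 0), sigma1)
    return low - high

def clahe_normalize(img, clip=2.0, block=16):
    """CLAHE with approximately block x block pixel tiles.

    OpenCV's tileGridSize counts tiles, so it is derived from the image size
    (e.g. an 8 x 8 grid gives 16 x 16-pixel blocks on a 128 x 128 image).
    """
    grid = (max(1, img.shape[1] // block), max(1, img.shape[0] // block))
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=grid)
    return clahe.apply(img.astype(np.uint8))
```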
Fig. 3. (a) A VIS image and its corresponding normalized images; (b) a NIR image and its corresponding normalized images
3
Experiments
The HFB face database [9] consists of 100 subjects, including 57 males and 43 females. There are 4 VIS and 4 NIR face images per subject. The following experiments were conducted on this database: (a) Training and testing using VIS images (VIS-VIS); (b) Training and testing using NIR images (NIR-NIR);
(c) Training using VIS images and testing using NIR images (VIS-NIR); (d) Training using NIR images and testing using VIS images (NIR-VIS). In all cases, the subjects used in the training and test sets were mutually exclusive. 20 male and 20 female subjects were randomly selected for training, with 4 samples for each subject. The remaining subjects were reserved for testing. This random partitioning into training and test sets was repeated 10 times for each experiment in order to understand the variance in classification accuracy. The image size used in our work was 128 × 128.

Table 1. Gender classification results on the HFB database when illumination normalization is not used for cross-spectral prediction

Scenario  Classification Rate  Best    Worst
VIS-VIS   0.9067 ± 0.0397      0.9708  0.8458
NIR-NIR   0.8442 ± 0.0264      0.9000  0.8042
VIS-NIR   0.5625 ± 0.1289      0.7083  0.3833
NIR-VIS   0.6021 ± 0.0769      0.6667  0.3875
Table 2. Results for cross-spectral gender classification after applying different normalization schemes

Scheme   VIS-NIR(N)        NIR-VIS(N)
CLAHE    0.6617 ± 0.0724   0.6642 ± 0.0806
DoG      0.6446 ± 0.0331   0.6100 ± 0.0354
SQI      0.4512 ± 0.0693   0.4692 ± 0.0611
Retinex  0.5525 ± 0.0537   0.5921 ± 0.0674
DCT      0.5967 ± 0.0840   0.6392 ± 0.0666
For the VIS-VIS experiments, the average classification rate was 90.67%, with the best performance being 97.08% (Table 1). The performance is comparable to the results reported in previous literature on other datasets [14,1]. This suggests that gender classification can be performed with high accuracy in the VIS domain. For the NIR-NIR experiment, the average performance declined by around 6% compared to VIS-VIS classification resulting in an average accuracy rate of 84.42%. For the VIS-NIR and NIR-VIS experiments, the average classification rates were 56.25% and 60.21%, respectively, suggesting the difficulty in performing cross-spectral gender classification. However, upon applying certain illumination normalization schemes (to both the training and test images), we observed an improvement in classification accuracy (Table 2). Two of the most effective normalization schemes were CLAHE and DoG. In our experiment, the CLAHE gave slightly better performance than DoG. Specifically, the CLAHE normalization scheme improved cross-spectral gender classification for the VIS-NIR and NIR-VIS experiments to 66.17% and 66.42%, respectively - this represents an improvement of 18% and 10%, respectively. The SQI scheme decreased the performance after normalization, while the retinex model did not impact the accuracy. The DCT algorithm gave slightly better performance, but not as significant as that of CLAHE and DoG.
Table 3. Impact of image size on gender classification for the VIS-NIR and NIR-VIS scenarios when the CLAHE normalization method is used

Image Size  VIS-NIR           NIR-VIS
128 × 128   0.6617 ± 0.0724   0.6642 ± 0.0806
64 × 64     0.6958 ± 0.0241   0.6596 ± 0.0856
32 × 32     0.7179 ± 0.0208   0.6917 ± 0.0292
16 × 16     0.6638 ± 0.0362   0.6617 ± 0.0668

4
Discussion
Our experimental results indicate the possibility of performing gender classification using NIR face images, although the performance is slightly inferior to that of VIS images. This indicates that the gender information observed in the NIR domain may not be as discriminative as in the VIS domain. Cross-spectral gender prediction was observed to be difficult; this suggests that the gender-related information available in NIR and VIS face images is significantly different as assessed by the classifier. The key, therefore, is to reduce the variability between these two types of images by applying an illumination normalization routine. The experiments suggest that certain normalization schemes are better than others. In particular, the CLAHE scheme proved superior to the other models considered in this work.

Next, we consider the reasons for the inferior performance of the other normalization models. The Lambertian model usually assumes that the term ρ_w(x, y) is constant across different lighting sources. However, since the lighting conditions under the NIR and VIS spectra are not homogeneous, estimating an illumination-invariant albedo ρ_w(x, y) under the Lambertian model for these two types of images is not possible [12]. Therefore, approaches based on the Lambertian model, such as the self-quotient image and its variants, are not useful in our application. Since the reflectance is not a stable characteristic of facial features for images captured under the NIR and VIS spectra, the retinex model also does not result in good performance. The DCT method fails since the illumination in NIR images cannot simply be estimated by the low-frequency coefficients of the image. Only the normalization methods based on local appearance-based features (i.e., CLAHE and DoG) result in better accuracy. This could partly be due to the use of PCA-based features in our experiments. The use of other, more sophisticated features (such as LBP) for gender classification may be useful when the SQI and retinex models are used for normalization.

When the images (128 × 128) are downsampled by a factor of 16, i.e., to 32 × 32, the average accuracy of VIS-NIR improves from 66.17% to 71.79% (Table 3). Similarly, the average accuracy of NIR-VIS improves from 66.42% to 69.17%. Another observation has to do with the difference in gender classification of males and females. We ran the VIS-NIR experiments on the HFB database 100 times and observed that the female classification rate was 68% while the male classification rate was 77%.
Next, we take a look at the histogram distribution of pixel intensities for a VIS image and a NIR image (Figure 4). The VIS image has a dense histogram, while the NIR image has a relatively sparse histogram distribution. This suggests that the VIS image has more intensity values captured than its counterpart. Such a difference suggests a loss in information when forming NIR images. The hypothesis is that histogram normalization can mitigate some of these differences thereby improving cross-spectral gender prediction. We find that by applying the CLAHE normalization approach, it is possible to reduce the difference between the two histograms (Figure 4).
Fig. 4. (a) VIS image before normalization; (b) NIR image before normalization; (c) VIS image after normalization; (d) NIR image after normalization.
5
Summary
This paper presents initial experimental results on gender classification from NIR face images. A classification accuracy of 84.42% was obtained in the NIR-NIR scenario. The paper also discusses cross-spectral gender recognition, where the training and test images originate from different spectral bands. A pre-processing operation involving illumination normalization was observed to improve cross-spectral classification accuracy by up to 18%, but this is still lower than the performance obtained for intra-spectral classification (i.e., the VIS-VIS and NIR-NIR scenarios). Currently, we are examining the use of fundamental image formation models to better understand the gender-specific details present in NIR and VIS images.
References

1. Baluja, S., Rowley, H.A.: Boosting sex identification performance. IJCV 71(1), 111–119 (2007)
2. Bourlai, T., Kalka, N.D., Ross, A., Cukic, B., Hornak, L.: Cross-spectral face verification in the short wave infrared (SWIR) band. In: ICPR, pp. 1343–1347 (2010)
3. Chen, W., Er, M.J., Wu, S.: Illumination compensation and normalization for robust face recognition using discrete cosine transform in logarithm domain. IEEE SMC-B 36(2), 458–466 (2006)
4. Graf, A.B.A., Wichmann, F.A.: Gender classification of human faces. In: Biologically Motivated Computer Vision, pp. 491–500 (2002)
5. Gutta, S., Wechsler, H., Phillips, P.J.: Gender and ethnic classification of face images. In: FG, pp. 194–199 (1998)
6. Jain, A.K., Dass, S.C., Nandakumar, K.: Can soft biometric traits assist user recognition? In: BTHI, pp. 561–572. SPIE, San Jose (2004)
7. Jobson, D.J., Rahman, Z., Woodell, G.A.: Properties and performance of a center/surround retinex. IEEE TIP 6(3), 451–462 (1997)
8. Klare, B., Jain, A.K.: Heterogeneous face recognition: Matching NIR to visible light images. In: ICPR, pp. 1513–1516 (2010)
9. Li, S.Z., Lei, Z., Ao, M.: The HFB face database for heterogeneous face biometrics research. In: CVPR Workshop, pp. 1–8 (2009)
10. Lian, H.-C., Lu, B.-L.: Multi-view gender classification using local binary patterns and support vector machines. In: Wang, J., Yi, Z., Zurada, J.M., Lu, B.-L., Yin, H. (eds.) ISNN 2006. LNCS, vol. 3972, pp. 202–209. Springer, Heidelberg (2006)
11. Lian, X.-C., Lu, B.-L.: Gender classification by combining facial and hair information. In: ICONIP (2), pp. 647–654 (2008)
12. Liao, S., Yi, D., Lei, Z., Qin, R., Li, S.Z.: Heterogeneous face recognition from local structures of normalized appearance. In: ICB, pp. 209–218 (2009)
13. Makinen, E., Raisamo, R.: Evaluation of gender classification methods with automatically detected and aligned faces. PAMI 30(3), 541–547 (2008)
14. Moghaddam, B., Yang, M.-H.: Learning gender with support faces. PAMI 24(5), 707–711 (2002)
15. Sun, Z., Bebis, G., Yuan, X., Louis, S.J.: Genetic feature subset selection for gender classification: A comparison study. In: WACV, pp. 165–170 (2002)
16. Tan, X., Triggs, B.: Enhanced local texture feature sets for face recognition under difficult lighting conditions. In: AMFG, pp. 168–182 (2007)
17. Štruc, V., Pavešić, N.: Gabor-based kernel partial-least-squares discrimination features for face recognition. Informatica 20, 115–138 (2009)
18. Wang, H., Li, S.Z., Wang, Y., Zhang, J.: Self quotient image for face recognition. In: ICIP, pp. 1397–1400 (2004)
19. Zuiderveld, K.: Contrast limited adaptive histogram equalization, pp. 474–485. Academic Press Professional, San Diego (1994)
Hand Geometry Analysis by Continuous Skeletons

Leonid Mestetskiy¹, Irina Bakina¹, and Alexey Kurakin²

¹ Moscow State University, Moscow, Russia
² Moscow Institute of Physics and Technology, Moscow, Russia
[email protected], [email protected], [email protected]
Abstract. The paper considers a new approach to palm shape analysis based on continuous skeletons of binary images. The approach includes polygonal approximation of the binary image, skeleton construction for the polygonal approximation, and skeleton regularization by pruning. The skeleton of a polygonal shape is the locus of centers of maximum inscribed circles. Both the internal and the external skeleton of the palm shape are used for the analysis. Segmentation of the initial image, identification of the palm orientation and structure, finger segmentation and characteristic point detection are performed based on the image skeleton. An algorithm for the binarization of color palm images and computational experiments with a large database of such images are described in the paper. Keywords: Skeleton, binarization, segmentation, external skeleton, hand geometric points detection.
1
Introduction
Skeleton (or medial axis) representation is a powerful and widely used tool for image shape analysis [1]. Originally, the notion of a skeleton was defined for continuous objects [2]: the skeleton of a closed region in the Euclidean plane is the locus of centers of maximum empty circles in this region, where a circle is considered to be empty if all its internal points are internal points of the region. To use skeletons for image analysis it is necessary to adapt this notion to discrete images. Many papers consider a "discrete skeleton" for the purpose of shape analysis. The discrete skeleton of a binary image is an analogue of the continuous one: it consists of one-pixel-wide lines, and all these lines are approximately equidistant from the edges of the source shape. There are several approaches to discrete skeleton construction: topological thinning, morphological erosion, and the Euclidean distance transform [3]. But discrete skeletons have significant drawbacks in comparison with continuous skeletons. First of all, the values of the radial function for a discrete skeleton can only be calculated approximately. In addition, pruning of noisy branches (regularization) of such skeletons is a hard problem and requires different heuristics in each particular case. There is also a problem of computational efficiency for discrete skeleton construction. At present, algorithms for discrete
skeleton construction are sped up by parallelization [4], but such speedup is limited because the number of steps of discrete skeletonization increases with the image resolution. A continuous approach to binary image skeleton construction is considered in [5,6,8]; it outperforms discrete methods in many aspects. This paper demonstrates the application of continuous skeletons to palm shape analysis. Experiments are conducted on the Hand Geometric Points Detection Competition database [11]. The advantages of the proposed method are especially noticeable for poor palm images in which fingers are stuck together; the use of the external skeleton of the palm helps to segment such images successfully. The paper is organized as follows. The notion of a continuous skeleton is described briefly in Section 2. The basic algorithm for palm characteristic point detection is presented in Section 3; it works in the case of a "good" palm image, i.e., an image in which the fingers are separable and can easily be segmented. The algorithm for segmenting a "poor" image with stuck fingers is considered in Section 4. Section 5 concludes the paper and demonstrates experiments on the HGC-2011 database.
2
Continuous Skeleton of Binary Image
A polygonal shape is a closed connected region in the Euclidean plane whose boundary consists of a finite union of closed polygons without self-intersections. A maximum inscribed circle is a circle C lying completely inside the polygonal shape P such that no other circle lying in P contains C. The skeleton of a polygonal shape is the locus of centers of maximum inscribed circles. The notion of a skeleton can be defined not only for polygons, but this is beyond the scope of the paper. The radius of the maximum inscribed circle is associated with each point of the skeleton; the function R(x) that maps skeleton points to the radii of the maximum inscribed circles is called the radial function. It can be proved [8] that the skeleton of a polygonal shape consists of a finite union of straight line segments and parabolic arcs. Thus, the skeleton of a polygonal shape has a dual nature: it is a union of curves, and it can also be treated as a planar graph.

In applications it is usually required to construct the skeleton of a shape described by a raster binary image. In such a situation the silhouette of the binary image is approximated by a polygon, the continuous skeleton of the polygon is constructed, and, finally, the constructed skeleton is pruned to get rid of unimportant branches. A detailed algorithm of skeleton construction is described in [6]; briefly, it proceeds as follows:

1. Boundary Corridor Construction. For the initial binary image (Fig. 1a) a boundary corridor is calculated based on boundary tracing. The boundary corridor consists of two sequences of pixels: a black sequence and a white sequence.
Fig. 1. The process of skeleton construction: (a) source binary image; (b) boundary corridor and minimal perimeter polygon; (c) initial skeleton of polygon, and (d) skeleton after pruning
The white sequence corresponds to the internal boundary of the corridor and the black one to the external boundary. In Fig. 1b the corridor is drawn in gray.
2. Construction of the Minimal Perimeter Polygon inside the Boundary Corridor. A closed path of minimal perimeter is constructed inside the boundary corridor [5]. Such a path is a closed contour without self-intersections; a physical model of the path is a rubber thread stretched along the boundary corridor. The minimal perimeter polygon is shown in Fig. 1b.
3. Skeleton Construction. The skeleton is constructed for the minimal perimeter polygon. Fig. 1c shows the skeleton and the maximum inscribed circles at the vertices of the axial graph.
4. Skeleton Regularization. Once the skeleton is constructed it is subjected to pruning, performed by sequentially cutting certain terminal edges of the skeleton. The cutting criterion is based on the following principle. The initial polygon can be represented as the union of all inscribed circles with centers at the skeleton points. When a skeleton edge is removed, the associated circles are removed too. The union of the remaining circles forms a figure called the silhouette of the remaining part of the skeleton. This silhouette is a subset of the initial polygon and is situated inside it. If the Hausdorff distance between this silhouette and the initial polygon is less than a threshold, the skeleton edge is cut; otherwise it is kept. In the example in Fig. 1d the threshold equals 2 pixels.

The example in Fig. 1 illustrates the skeleton construction process; the palm size is only 33 × 36 pixels. A sample skeleton for the real palm from Fig. 2a is presented in Fig. 2b, where the palm image size is 540 × 420 pixels. The internal skeleton of the palm was considered above, but the skeleton of the external area of the palm can also be constructed and used for shape analysis. Let S be an initial shape and B_S its external boundary. Consider a rectangle R that contains the entire image of the shape S. The rectangle R and the curve B_S form a polygonal shape S_ext lying between R and B_S. The skeleton of the shape S_ext is called the external skeleton of S. An example of the external skeleton of a palm is depicted in Fig. 2c.
Fig. 2. (a) Binary image of palm; (b) its boundary polygon of minimal perimeter and internal skeleton, and (c) external skeleton
3
Characteristic Points Detection
The procedure described in the previous section allows us to obtain the regularized skeleton of any binary image. Assume that we have a binary image of a palm and its skeleton. Let us consider how it can be used to locate the hand characteristic points, i.e., the finger tips and valleys. The process consists of the following steps:

1. Extraction of the palm center
2. Extraction of fingers
3. Validation of the palm
4. Detection of tips and valleys
First, we determine the position of the palm center. Then, extraction of fingers is performed. Each finger corresponds to one of the skeleton branches, so we extract all skeleton branches, validate them, and leave only those which correspond to potential fingers. In the next step we check the correctness of the segmentation of the whole palm. Finally, we calculate the fingers' tips and valleys.
3.1
Extraction of Hand Center
As described earlier, each skeleton point is associated with the radius of the maximum inscribed circle centered at this point; this association is the radial function. Thus, we define the center of the palm as the skeleton node with the maximal value of the radial function. An illustration is given in Fig. 3a: point O is the extracted center of the hand, and the circle centered at O is the inscribed circle of maximal radius.
Fig. 3. Detection of (a) tops, roots and bottoms of fingers, and (b) their tips and valleys
3.2
Extraction of Fingers
The axial graph of any binary image contains only vertices of degree one, two or three [7]. For each vertex of degree one we extract the skeleton branch starting from it, passing through vertices of degree two and ending at a vertex of degree three. As a result, a list of all skeleton branches is produced. Some of these branches are structural (like those going to the hand wrist) or fake (those which were not removed during the pruning procedure), while others correspond to potential fingers. The originating node of a branch is called the top node; the end point of a branch is called the root node. For example, nodes A5 and O in Fig. 3a are the top and root nodes of the branch A5 B5 O.

Now, consider the sequence of branch nodes from the root node to the top node. For each node we calculate the following characteristics: r, the value of the radial function at this node; r_p, the value of the radial function at the previous node; r_t, the value of the radial function at the top node; R_max, the maximal value of the radial function over the whole skeleton; and α, the angle between the two segments connecting the center of the circle associated with the node with its tangency points. The first node in the sequence to fulfill the following condition is considered to be the bottom node of the branch (as usual, braces designate an "and" condition, square brackets an "or" condition):

$$\left[\;\begin{array}{l}\left\{\begin{array}{l} r < 0.3\,R_{max} \\ r < r_p \\ r < r_t \end{array}\right. \\[4pt] \alpha > \alpha_0 \end{array}\right. \qquad (1)$$
The variable α_0 is a method parameter; in this work we used α_0 = 2.7 radians. An example of a bottom node is given in Fig. 3a: it is point B5 for the branch A5 B5 O. Thus, for each skeleton branch the top, root and bottom nodes are extracted. The line connecting the top and bottom nodes is called the axis of the branch. Next, we check all the branches and keep only those of potential fingers. Consider one of the skeleton branches. We first calculate its length l, the total length of the branch edges between the top and bottom nodes, and r_m, the maximal value of the radial function in the neighborhood of the top node. The skeleton branch is considered to be the branch of a potential finger if

$$\left\{\begin{array}{l} l \in [l_0, l_1] \\ r_m < r_{max} \end{array}\right. \qquad (2)$$

The variables l_0, l_1 and r_max are method parameters. They can be set heuristically or estimated by learning; in our work they were set as l_0 = 60, l_1 = 250 and r_max = 35 pixels.
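The two tests can be expressed directly in code; the sketch below simply restates conditions (1) and (2) with the parameter values quoted in the text.

```python
def is_bottom_node(r, r_prev, r_top, r_max, alpha, alpha0=2.7):
    """Condition (1): braces mean 'and', brackets mean 'or'."""
    return (r < 0.3 * r_max and r < r_prev and r < r_top) or (alpha > alpha0)

def is_potential_finger(length, r_m, l0=60, l1=250, r_max_thr=35):
    """Condition (2) with the parameter values used in this work."""
    return (l0 <= length <= l1) and (r_m < r_max_thr)
```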
3.3
Validation of Hand
As a result of the previous step we have the list of branches which correspond to potential fingers. If their number is less than 5 (the total number of hand fingers) we conclude that the hand was segmented incorrectly, so tips and valleys cannot be extracted. If the number is equal to 5 we proceed to the next step. However, there are situations when the total number of branches is greater than 5. In such a case a special procedure is applied to remove the extra skeleton branches; it is based on the analysis of the hand structure. First, note that all skeleton branches in the list are arranged according to the traversal of the axial graph. Due to this fact we only need to analyze sequences formed of 5 successive branches. The sequence with the minimal angle between the axes of its first and last branches is considered to be the sequence of finger branches. The last step is to establish the correspondence between each finger (little, ring, etc.) and its branch. For this purpose we calculate the angle between the axis of each finger and the axis of the previous finger in the sequence, and then select the branch with the maximal angle and declare it to be the branch of the thumb. Below in the text, the top, root, bottom and axis of a finger branch are called the top, root, bottom and axis of the finger, respectively.
3.4
Detection of Tips and Valleys
Let’s consider a palm with extracted fingers. Denote by Ai the top nodes and by Bi the bottom nodes of fingers, i = 1, . . . , 5 (see Fig.3b). The lines Ai Bi are fingers axes. For each finger its tip Ti is the point of intersection between half-line Bi Ai and hand boundary. The valley between two fingers i and i + 1 is the point Vi that is
on the hand boundary between T_i and T_i+1 that is nearest to the hand center O, i = 1, 2, 3. The initial position V_4' of the valley V_4 between the thumb and the index finger is calculated by the same rule, but an additional, purely heuristic correction is then applied. We go over the sequence of hand boundary points from V_4' to T_5 and calculate d_4 = |P B_4| and d_5 = |P B_5|, where P is the boundary point under consideration. If r_5 is the value of the radial function at the node B_5, then the first point P that fulfills the conditions d_4 > d_5 and d_5 < 2.1 r_5 is taken as the valley V_4. Fig. 3b shows the extracted tips T_i and valleys V_i for a hand.
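A minimal sketch of the basic valley rule (the boundary point between two tips that is nearest to the palm center) is given below; the ordered-contour representation and the assumption that the first tip index precedes the second are illustrative choices, and the heuristic correction for V_4 is not included.

```python
import numpy as np

def valley_point(boundary, i_tip_a, i_tip_b, center):
    """Valley = boundary point between two tips nearest to the palm center.

    boundary: ordered (K, 2) array of contour points; i_tip_a < i_tip_b are
    the contour indices of the two tips; center is the palm center O.
    """
    segment = boundary[i_tip_a:i_tip_b + 1]
    d = np.linalg.norm(segment - np.asarray(center), axis=1)
    return segment[np.argmin(d)]
```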
4
Hand Image Segmentation
The tips and valleys detection algorithm based on the analysis of the internal skeleton of the palm binary image produces good results only if the fingers are separable, as in Fig. 2a. In the case of fingers touching each other (Fig. 4a), the obtained binary image has indistinguishable fingers, so direct construction of the skeleton would not work. However, the problem can be solved if the continuous skeleton approach is combined with other image processing methods. The idea is to enhance the image in Fig. 4d by reducing it to the image in Fig. 4h.

Let us consider the basic operations used in our algorithm. Denote by V_rgb the space of RGB color images, V_gs the space of grey-scale images, V_b the space of binary images, and V_skel the space of continuous skeletons. We introduce the following operations:

Red: V_rgb → V_gs extracts the red component of an RGB color image;
Sob: V_gs → V_gs is the Sobel operator with kernels
$$\begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{pmatrix};$$
Bin: V_gs → V_b produces a binary image from a grey-scale image by thresholding;
Neg: V_b → V_b produces the negative of a binary image;
Skel: V_b → V_skel constructs the continuous skeleton of a binary image;
Prun: V_skel → V_skel performs pruning of a skeleton;
Silh: V_skel → V_b produces the binary image that is the silhouette of a skeleton.

Let H (Fig. 4a) be the initial color palm image, H ∈ V_rgb. The segmentation algorithm is defined as follows.

1. H1 = Red(H), H1 ∈ V_gs. The red component (Fig. 4b) is extracted from the color image H.
2. H2 = H1 − Sob(H1), H2 ∈ V_gs. The Sobel response of the grey-scale image H1 (Fig. 4b) is subtracted from the image itself. The resulting image H2 is shown in Fig. 4c.
3. H3 = Bin(H2), H3 ∈ V_b. Binarization of H2 (Fig. 4c) produces H3 (Fig. 4d).
4. G = Skel(H3), G ∈ V_skel. The continuous skeleton G (Fig. 4e) is constructed for the binary image H3.
5. G1 = Prun(G), G1 ∈ V_skel. The skeleton G is subjected to pruning of two types. First, all terminal edges with a radial function value of less than 3 pixels are removed. Second, regularization (see Section 2) with a threshold of 6 pixels is performed. The result of this step is shown in Fig. 4f.
6. H4 = Silh(G1), H4 ∈ V_b. The silhouette H4 (Fig. 4g) of the pruned skeleton G1 is constructed.
7. H5 = Neg(H4), H5 ∈ V_b. The negative image H5 (Fig. 4h) of the image H4 is produced.

Thereby, we obtain a binary image of the palm from its initial color image. Generally, this image has separable fingers and can be subjected to the characteristic points detection process described in Section 3. The result of segmentation for the color palm image in Fig. 4a is shown in Fig. 4i. It should be noted that the proposed algorithm can be applied to any palm image (either "good" with separable fingers as in Fig. 2a, or "bad" as in Fig. 4a), so this algorithm together with the algorithm from Section 3 gives the entire procedure for detecting the fingers' tips and valleys.
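A rough sketch of the front end of this pipeline is given below. OpenCV is used for the Red, Sob, Bin and Neg operations; the continuous-skeleton operations Skel, Prun and Silh depend on the authors' skeleton implementation and are therefore shown only as hypothetical placeholder calls. Otsu thresholding and the use of the gradient magnitude for Sob are assumptions made for illustration.

```python
import cv2
import numpy as np

def segment_palm(color_img):
    """Steps 1-3 and 7 of the segmentation pipeline described above."""
    h1 = color_img[:, :, 2]                          # Red(H); OpenCV stores BGR
    gx = cv2.Sobel(h1, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(h1, cv2.CV_32F, 0, 1, ksize=3)
    sob = cv2.magnitude(gx, gy)                      # Sob(H1), gradient magnitude
    h2 = np.clip(h1.astype(np.float32) - sob, 0, 255).astype(np.uint8)
    _, h3 = cv2.threshold(h2, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # Bin
    # g  = skeleton(h3)        # Skel  (hypothetical continuous-skeleton call)
    # g1 = prune(g, 3, 6)      # Prun  (hypothetical two-stage pruning)
    # h4 = silhouette(g1)      # Silh  (hypothetical union of inscribed discs)
    h4 = h3                    # stand-in so the sketch runs end to end
    h5 = cv2.bitwise_not(h4)                         # Neg
    return h5
```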
5
Implementation and Experiments
The experiment was carried out for the HGC-2011 competition [11], in which it was required to detect 5 tips and 4 valleys for each provided palm image. The source data included a training set of 300 palm images, which was used for manual tuning of the algorithm parameters (binarization and pruning thresholds). The quality of point detection was evaluated on the testing set (160 palm images), which was unavailable before the publication of the results. According to the competition rules, a point is detected correctly if the distance between the ground-truth point position and the detected point position is less than 20 pixels. Each incorrect detection results in a penalty of 1 point for the participating algorithm, and each refusal to classify a point results in a penalty of 0.7 points (i.e., refusal to classify an entire palm results in a penalty of 6.3 points). In the HGC-2011 competition, our algorithm showed the best result among the 15 registered participants; the second, third and fourth algorithms obtained 57.3, 77.9 and 98.1 penalty points on the testing set. Detailed results of our algorithm are presented in Table 1. Moreover, a similar approach to palm shape analysis has been used successfully in systems for biometric person identification [9] and hand gesture recognition [10], which demonstrates the rich capabilities of skeleton-based shape analysis.
Fig. 4. Palm image segmentation steps

Table 1. Point detection results on the HGC database

Set                                     Detection rate          Running time (sec)  Penalty
Training set (300 palms, 2700 points)   2682 of 2700 (99.33%)   58                  18
Testing set (160 palms, 1440 points)    1415 of 1440 (98.26%)   34                  22.3
Acknowledgments. The authors thank the Russian Foundation for Basic Research for the support on this study (grants 10−07−00609 and 11−01−00783), and the organizing committee of Hand Geometric Points Detection Competition (HGC-2011) [11] for the provided database of hand images used in our experiments.
References

1. Siddiqi, K., Pizer, S.M.: Medial Representations: Mathematics, Algorithms and Applications. Springer, Heidelberg (2008)
2. Blum, H.: A Transformation for Extracting New Descriptors of Shape. In: Proc. Symposium Models for the Perception of Speech and Visual Form. MIT Press, Cambridge (1967)
3. Costa, L., Cesar, R.: Shape Analysis and Classification. CRC Press, Boca Raton (2001)
4. Strzodka, R., Telea, A.: Generalized Distance Transforms and Skeletons in Graphics Hardware. In: Proc. Eurographics IEEE TCVG Symp. Visualization, pp. 221–230 (2004)
5. Mestetskiy, L.: Continuous Skeleton of Binary Raster Bitmap (in Russian). In: Graphicon 1998, International Conference on Computer Graphics, Moscow (1998)
6. Mestetskiy, L., Semenov, A.: Binary Image Skeleton—Continuous Approach. In: Proc. of 3rd Int. Conf. on Computer Vision Theory and Applications, vol. 1, pp. 251–258. INSTICC Press (2008)
7. Mestetskiy, L.M.: Continuous Morphology of Binary Images: Figures, Skeletons and Circulas (in Russian). FIZMATLIT, Moscow (2009)
8. Mestetskiy, L.: Skeleton Representation Based on Compound Bezier Curves. In: Proc. of 5th Int. Conf. on Computer Vision Theory and Applications, vol. 1, pp. 44–51. INSTICC Press (2010)
9. Bakina, I.: Palm Shape Comparison for Person Recognition. In: Proc. of 6th Int. Conf. on Computer Vision Theory and Applications, Portugal (2011)
10. Kurakin, A., Mestetskiy, L.: Hand Gesture Recognition through On-line Skeletonization. Application of Continuous Skeleton to Real-Time Shape Analysis. In: Proc. of 6th Int. Conf. on Computer Vision Theory and Applications, Portugal (2011)
11. Magalhães, F., Oliveira, H.P., Matos, H., Campilho, A.: HGC 2011—Hand Geometric Points Detection Competition Database (published: December 23, 2010), http://paginas.fe.up.pt/~hgc2011/
Kernel Fusion of Audio and Visual Information for Emotion Recognition Yongjin Wang, Rui Zhang, Ling Guan, and A.N. Venetsanopoulos Department of Electrical and Computer Engineering, Ryerson University, Toronto, Ontario, Canada
Abstract. Effective analysis and recognition of human emotional behavior are important for achieving efficient and intelligent human computer interaction. This paper presents an approach for audiovisual based multimodal emotion recognition. The proposed solution integrates the audio and visual information by fusing the kernel matrices of respective channels through algebraic operations, followed by dimensionality reduction techniques to map the original disparate features to a nonlinearly transformed joint subspace. A hidden Markov model is employed for characterizing the statistical dependence across successive frames, and identifying the inherent temporal structure of the features. We examine the kernel fusion method at both feature and score levels. The effectiveness of the proposed method is demonstrated through extensive experimentation. Keywords: Audiovisual emotion recognition, kernel methods, multimodal information fusion.
1 Introduction
Human Computer Interaction (HCI), which embraces applications that are associated with direct interaction between humans and computing technology, is an emerging field that aims at bridging the existing gaps between the various disciplines involved with the design and implementation of computing systems that support people's activities. Emotion, which echoes an individual's state of mind, plays an important role in our daily social interaction and activities, and is an important component of an HCI system. The emotional intention of an individual can be inferred from various sources such as voice, facial expressions, body language, the semantic meaning of speech, ECG, and EEG. Among these modalities, voice and facial expression are two of the most natural, passive, and non-invasive types of traits, and they are primary to the objectives of an emotion recognition system in the field of HCI. Moreover, they can be easily captured by low-cost sensing devices, which makes them more economically feasible for potential deployment in a wide variety of applications. A great deal of research effort has been devoted to the machine recognition of human emotions in the past few years. The majority of these works focus either on speech alone or on facial expression only. However, as shown in [1], some of the
emotions are audio dominant, while the others are visual dominant. When one modality fails or is not good enough to determine a certain emotion, the other modality can help to improve the recognition performance. The integration of audio and visual data will convey more information about the human emotional state. The information presented in different channels, when being utilized for the description of the same perception of interest, i.e. emotion, may provide complementary characteristics, and it is highly probable that a more complete, precise, and discriminatory representation can be derived. The fusion of multimodal information is usually performed at three different levels: data/feature level, score level, and decision level. Data/feature level fusion combines the original data or extracted features through certain fusion strategies. Fusion at score level combines the scores generated from multiple classifiers using multiple modalities through a rule based scheme, or in a pattern classification sense in which the scores are taken as features into a classification algorithm. Fusion at decision level generates the final results based on the decision from multiple modalities or classifiers using methods such as majority voting. The fusion at decision level is rigid due to the limited information left. Many tentative solutions have been introduced for audiovisual based emotion recognition. Go et al. [2] presented a rule-based method for fusion of audio and video recognition results, both using wavelet based techniques for feature extraction. Kanluan et al. [3] used prosodic and spectral features for audio representation, 2-D discrete Cosine transform on predefined image blocks for visual feature extraction, and a weighted linear combination for fusion at the decision level. Metallinou et al [4] proposed to model each individual modality using Gaussian mixture model, and a Bayesian classifier weighting scheme and support vector machine (SVM) for score classification. Han et al. [5] also presented an SVM based score level fusion scheme with prosodic and local features for audio and visual channels respectively. Several hidden Markov model (HMM) based methods, which can be considered as a hybrid of feature level and decision level fusion, have been introduced in [6][7], with different approaches for feature extraction. Wang and Guan [8] proposed a hierarchical multi-classifier scheme for the analysis of combinations of different classes, and the fusion of multimodal data is performed at the feature level. Existing works have demonstrated that the performance of emotion recognition systems can be improved by integrating audio and visual information. However, it is far from a solved problem due to the limits in the accuracy, efficiency, and generality of the proposed systems. In addition, most of these works treat audio and visual channels as independent sources, and fail to identify their joint characteristics. In this work, we examine kernel based fusion methods for human emotion recognition from audiovisual signals. The introduced solution utilizes the kernel trick to map the original audio and visual representations to their respective high-dimensional subspaces by computing the kernel matrices, and the fusion is performed by combining the kernel matrices to obtain a mapping of the features onto a joint subspace. Dimensionality reduction method is then applied on the fused kernel matrix to compute a compact representation.
The HMM method is employed to model the temporal characteristics. Information fusion at both the feature and score levels is examined. Experimental results show that the proposed method produces promising results. The remainder of this paper is organized as follows. Section 2 introduces the kernel based fusion methods. Section 3 presents the multimodal emotion recognition system. The experimental results are reported in Section 4, and conclusions are drawn in Section 5.
2 Kernel Based Information Fusion

2.1 Kernel Method
Kernel method is considered as one of the most important tools in the design of nonlinear feature extraction and classification techniques, and has demonstrated its success in various pattern analysis problems. The premise behind the kernel method is to find a nonlinear map from the original input space R^J to a higher dimensional kernel feature space F^F by using a nonlinear function φ(·), i.e.,

\phi : \mathbf{z} \in \mathbb{R}^J \rightarrow \phi(\mathbf{z}) \in \mathcal{F}^F \quad (1)
In the kernel feature space F^F, the pattern distribution is expected to be simplified so that better classification performance can be achieved by applying traditional linear methodologies [9]. In general, the dimensionality of the kernel space is much larger than that of the original input space, sometimes even infinite. Therefore, an explicit determination of the nonlinear map φ is difficult or intractable. Fortunately, with the so-called “kernel trick”, the nonlinear mapping can be performed implicitly in the original input space R^J by replacing dot products of the feature representations in F^F with a kernel function defined in R^J. Let z_i ∈ R^J and z_j ∈ R^J represent two vectors in the original feature space; the dot product of their feature representations φ(z_i), φ(z_j) ∈ F^F can be computed by a kernel function κ(·) defined in R^J, i.e.,

\phi(\mathbf{z}_i) \cdot \phi(\mathbf{z}_j) = \kappa(\mathbf{z}_i, \mathbf{z}_j) \quad (2)
The function selected as the kernel function should satisfy Mercer's condition, i.e., be positive semi-definite [9]. Some commonly used kernel functions include the linear kernel \kappa(\mathbf{z}_i,\mathbf{z}_j) = \langle \mathbf{z}_i, \mathbf{z}_j \rangle, the Gaussian kernel \kappa(\mathbf{z}_i,\mathbf{z}_j) = \exp\!\left(\frac{-\|\mathbf{z}_i-\mathbf{z}_j\|^2}{2\sigma^2}\right), and the polynomial kernel \kappa(\mathbf{z}_i,\mathbf{z}_j) = (\langle \mathbf{z}_i, \mathbf{z}_j \rangle + 1)^d.

Kernel principal component analysis (KPCA) [10] and generalized discriminant analysis (GDA) [11] are two of the most typical kernel techniques. KPCA is an implementation of the traditional PCA algorithm in the kernel feature space. Let C denote the number of classes and C_i the number of samples of the i-th class; then the covariance matrix \tilde{S}_t defined in F^F can be expressed as follows,

\tilde{S}_t = \frac{1}{N} \sum_{i=1}^{C} \sum_{j=1}^{C_i} (\phi(\mathbf{z}_{ij}) - \bar{\phi})(\phi(\mathbf{z}_{ij}) - \bar{\phi})^T \quad (3)
where \bar{\phi} = \frac{1}{N}\sum_{i=1}^{C}\sum_{j=1}^{C_i} \phi(\mathbf{z}_{ij}) is the mean of the training samples in F^F. The KPCA subspace is spanned by the first M significant eigenvectors of \tilde{S}_t, denoted as \tilde{W}_{KPCA}, corresponding to the M largest eigenvalues, i.e., \tilde{W}_{KPCA} = [\tilde{w}_1, \ldots, \tilde{w}_M] and \tilde{\lambda}_1 > \tilde{\lambda}_2 > \ldots > \tilde{\lambda}_M, where \tilde{w}_k is the eigenvector and \tilde{\lambda}_k is the corresponding eigenvalue. The nonlinear mapping to a high-dimensional subspace can be solved implicitly using the kernel trick. Given N samples \{\mathbf{z}_1, \mathbf{z}_2, \ldots, \mathbf{z}_N\}, the kernel matrix K can be computed as:

K = \begin{bmatrix} \kappa(\mathbf{z}_1,\mathbf{z}_1) & \kappa(\mathbf{z}_1,\mathbf{z}_2) & \cdots & \kappa(\mathbf{z}_1,\mathbf{z}_N) \\ \kappa(\mathbf{z}_2,\mathbf{z}_1) & \kappa(\mathbf{z}_2,\mathbf{z}_2) & \cdots & \kappa(\mathbf{z}_2,\mathbf{z}_N) \\ \vdots & \vdots & \ddots & \vdots \\ \kappa(\mathbf{z}_N,\mathbf{z}_1) & \kappa(\mathbf{z}_N,\mathbf{z}_2) & \cdots & \kappa(\mathbf{z}_N,\mathbf{z}_N) \end{bmatrix} \quad (4)

We can center the kernel matrix by

\tilde{K} = K - I_N K - K I_N + I_N K I_N \quad (5)

where I_N is an N \times N matrix with all elements set to \frac{1}{N}. Then the solution to the above problem can be formulated as the following eigenvalue problem,

\tilde{K} \tilde{\mathbf{u}}_i = \tilde{\lambda}_i \tilde{\mathbf{u}}_i \quad (6)

For a testing sample \mathbf{z}', its projection onto the i-th eigenvector is computed as

\tilde{z}_i = \tilde{\mathbf{u}}_i^T \begin{bmatrix} \kappa(\mathbf{z}', \mathbf{z}_1) \\ \kappa(\mathbf{z}', \mathbf{z}_2) \\ \vdots \\ \kappa(\mathbf{z}', \mathbf{z}_N) \end{bmatrix} \quad (7)
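To make Eqns. (4)–(7) concrete, the sketch below (our own illustration, not the authors' implementation) builds a Gaussian kernel matrix, centers it, solves the eigenvalue problem and projects a test sample; for simplicity it follows Eqn. (7) literally and omits centering of the test kernel vector.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=10.0):
    # kappa(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2)) for all pairs
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kpca_fit(Z, sigma=10.0, n_components=12):
    N = Z.shape[0]
    K = gaussian_kernel(Z, Z, sigma)                    # kernel matrix, Eqn. (4)
    I_N = np.full((N, N), 1.0 / N)
    K_c = K - I_N @ K - K @ I_N + I_N @ K @ I_N         # centering, Eqn. (5)
    eigvals, eigvecs = np.linalg.eigh(K_c)              # eigenvalue problem, Eqn. (6)
    order = np.argsort(eigvals)[::-1][:n_components]
    # Scale eigenvectors so that the projected features are properly normalized
    U = eigvecs[:, order] / np.sqrt(np.maximum(eigvals[order], 1e-12))
    return U, Z, sigma

def kpca_project(z_new, model):
    U, Z, sigma = model
    k = gaussian_kernel(z_new[None, :], Z, sigma).ravel()   # [kappa(z', z_n)]_n, Eqn. (7)
    return U.T @ k
```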
GDA is a generalization of the linear discriminant analysis (LDA) method using the kernel method. It produces the corresponding subspaces by maximizing Fisher's criterion defined in F^F,

\tilde{W}_{GDA} = \arg\max_{\tilde{W}} \frac{|\tilde{W}^T \tilde{S}_b \tilde{W}|}{|\tilde{W}^T \tilde{S}_w \tilde{W}|}, \qquad \tilde{W} = [\tilde{w}_1, \ldots, \tilde{w}_M], \; \tilde{w}_k \in \mathcal{F}^F \quad (8)
where \tilde{w}_k can be obtained by eigen-decomposing the ratio of \tilde{S}_b to \tilde{S}_w, the corresponding between- and within-class scatter matrices defined in F^F,

\tilde{S}_b = \frac{1}{N} \sum_{i=1}^{C} C_i (\bar{\phi}_i - \bar{\phi})(\bar{\phi}_i - \bar{\phi})^T \quad (9)

\tilde{S}_w = \frac{1}{N} \sum_{i=1}^{C} \sum_{j=1}^{C_i} (\phi(\mathbf{z}_{ij}) - \bar{\phi}_i)(\phi(\mathbf{z}_{ij}) - \bar{\phi}_i)^T \quad (10)

where \bar{\phi}_i = \frac{1}{C_i} \sum_{j=1}^{C_i} \phi(\mathbf{z}_{ij}) is the sample mean of the i-th class in F^F.
As discussed in [12], the essence of GDA is KPCA plus LDA, which can be implemented by first applying KPCA, followed by LDA on the subsequent KPCA features.

2.2 Fusion Strategies
In kernel-based methods, the kernel matrix is an embedding of the original features into the kernel feature space, with each entry representing a certain notion of similarity between two specific patterns in a higher-dimensional space. It is possible to combine different kernel matrices using algebraic operations, such as addition and multiplication. Such operations still preserve the positive semi-definiteness of the kernel matrix. For audiovisual based emotion recognition, different channels capture different aspects of the same semantic. The kernel matrix derived from each channel therefore provides a partial description of the specific information of one sample relative to the others. The kernel fusion technique allows for integrating the respective kernel matrices to identify their joint characteristics pertaining to the associated semantic, which may provide a more discriminatory representation. It incorporates features of disparate characteristics into a common format of kernel matrices, and therefore provides a viable solution for fusing heterogeneous data. In addition, kernel-based fusion offers a flexible solution since different kernel functions can be used for different modalities. In this work, we examine and compare two basic algebraic operations, weighted sum and multiplication. Let x_1, x_2, ..., x_N and y_1, y_2, ..., y_N denote the information extracted from the audio and visual channels respectively; the kernel matrices K^x and K^y are then computed as K^x_{ij} = κ_x(x_i, x_j) and K^y_{ij} = κ_y(y_i, y_j). The fused kernel matrix K^f can be computed as:
(11)
f y x = Kij × Kij , Kij
(12)
Substituting the fused kernel K f into Eqn. 5 and 6, we can compute the KPCA ˜ 2 , ..., u ˜ M ). For a pair of testing sample (x , y ), the projection eigenvectors (˜ u1 , u to the i-th eigenvector in the joint space will be computed as: ⎡ ⎤ aκx (x , x1 ) + bκy (y , y1 ) ⎢ aκx (x , x2 ) + bκy (y , y2 ) ⎥ ⎢ ⎥ ˜ Ti ⎢ (13) z˜i = u ⎥ .. ⎣ ⎦ . aκx (x , xN ) + bκy (y , yN )
⎡ ⎢ ⎢ ˜ Ti ⎢ z˜i = u ⎣
κx (x , x1 ) × κy (y , y1 ) κx (x , x2 ) × κy (y , y2 ) .. .
κx (x , xN ) × κy (y , yN )
⎤ ⎥ ⎥ ⎥ ⎦
(14)
Kernel Fusion of Audio and Visual Information for Emotion Recognition
3
145
Emotion Recognition System
The proposed multimodal emotion recognition system analyzes the audio and visual information presented in a short time window w, and the features from successive windows are then modeled by an HMM for capturing the change of audio and visual information with respect to time. In this section, we detail the feature extraction, fusion, and classification methods.

3.1 Feature Extraction
For audio feature extraction, an input audio signal of length w is first preprocessed by a wavelet coefficient thresholding method to reduce the recording machine and background noise [8]. Spectral analysis is then applied on the noise reduced signal. The spectral analysis method is only reliable when the signal is stationary, i.e., the statistical characteristics of a signal are invariant with respect to time. For speech, this holds only within the short time intervals of articulatory stability, during which a short time analysis can be performed by windowing a signal into a succession of windowed sequences, called frames. These speech frames can then be processed individually. In this paper, we use a Hamming window of size 512 points, with 50% overlap between adjacent windows. The pitch, power, and the first 13 mel-frequency cepstral coefficients (MFCC) are then extracted from each frame, and the features of successive frames within w are concatenated as the audio features. For visual feature representation, we perform feature extraction on the middle frame of the corresponding audio time window w. We first detect the face region from the image frame using a color based method [8]. The resulting face region is normalized to an image of size of 64 × 64. A Gabor filter bank of 5 scales and 8 orientations is then applied for feature extraction. Due to the large dimensionality of the Gabor coefficients, we downsample each subband to a size of 32 × 32, and then perform dimensionality reduction on all the downsampled Gabor coefficients using the PCA method.
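As a rough sketch of the audio stream described above, the following code assumes the librosa package for MFCC and pitch estimation; the wavelet-based denoising step and the Gabor/PCA visual stream are omitted, and details such as the pitch search range are our own assumptions rather than the authors' settings.

```python
import numpy as np
import librosa  # assumed available; provides framing, pitch (YIN) and MFCC utilities

def audio_features(signal, sr, n_fft=512, hop=256):
    """Per-frame pitch, power and 13 MFCCs for one analysis window w (illustrative sketch)."""
    # Power of each Hamming-windowed 512-point frame with 50% overlap
    frames = librosa.util.frame(signal, frame_length=n_fft, hop_length=hop)
    frames = frames * np.hamming(n_fft)[:, None]
    power = (frames ** 2).mean(axis=0)
    # Pitch (F0) and the first 13 mel-frequency cepstral coefficients per frame
    f0 = librosa.yin(signal, fmin=60, fmax=400, sr=sr,
                     frame_length=n_fft, hop_length=hop)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13,
                                n_fft=n_fft, hop_length=hop, window="hamming")
    n = min(len(power), len(f0), mfcc.shape[1])
    # Features of successive frames within the window are concatenated into one vector
    return np.concatenate([f0[:n], power[:n], mfcc[:, :n].ravel()])
```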
3.2 Kernel Fusion of Multimodal Information
The information extracted from the audio and visual channels can then be fused by using the proposed kernel fusion method. Specifically, we consider kernel fusion at both the feature and score levels. Fig. 1 depicts a block diagram of feature level kernel fusion. During the training process, for each window, a separate kernel matrix is computed from the extracted audio and visual features respectively. Algebraic operations are then performed on the two matrices (as in Eqns. 11 and 12), and the eigenvectors of the joint subspace can be computed as in Eqn. 6. For a new input, the projection is performed according to Eqns. 13 and 14, and the obtained KPCA features in the transformed domain are considered as an observation of an HMM to characterize the statistical properties in both
Fig. 1. Block diagram of kernel fusion at feature level
Fig. 2. Block diagram of kernel fusion at score level
temporal and ensemble domains. The output of the HMM, which contains the likelihood of the video sample with respect to the different classes, gives the recognition result. For score level kernel fusion, we construct two individual expert systems based on the audio and visual information respectively, as shown in Fig. 2. In each expert, the original features of the audio or visual channel are taken as the input to their respective HMMs, and the outputs of the HMMs are the classification scores. These two streams of scores are then considered as two sets of new features on which the kernel fusion method can be performed. The resulting KPCA representation of the fused classification scores is then considered as an input to a classifier for final recognition.
4 Experimental Results
To evaluate the effectiveness of the proposed solution, we conduct experiments on the RML emotion database [8], which contains video samples from eight human subjects, speaking six languages, and expressing six basic emotions (anger, disgust, fear, happy, sad, surprise). The samples were recorded at a sampling rate of 22050 Hz and a frame rate of 30 fps. In our experiments, we selected 400 video clips, each of which is truncated to 2 seconds so that both audio and visual information are present. Each video sample is then segmented into 10 uniform windows, and the dimensionalities of the audio and visual features are set to 240 and 200, respectively. The evaluation is based on cross-validation, where each time
Fig. 3. Experimental results of feature level fusion (Weighted Sum (WS), Multiplication (M))
75% of the samples are randomly selected for training and the rest for testing. This process is repeated 10 times, and the average of the results is reported. In our experiments, we normalize each audio and visual attribute to zero mean and unit variance based on the statistics of the training data. The number of hidden states of the HMM is set to 3, and a Gaussian mixture with 3 components is chosen to model the probability density function of an observation given a state. The performance of kernel based algorithms is significantly affected by the selected kernel functions and the corresponding parameters. We have conducted extensive experiments using linear, Gaussian (σ = 1, 10, 10^2, 10^3, 10^4, 10^5, 10^6), and polynomial (d = 2, 3, 4, 5) kernel functions. For the weighted sum based kernel fusion, the parameter a is set to 0.1-0.9 with a step size of 0.1. The reported recognition performance is based on the best results obtained in our experiments.
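With these settings, one HMM per emotion class can be configured as in the sketch below; it assumes the hmmlearn package (whose API may vary between versions) and is an illustration of the setup described above, not the authors' implementation.

```python
import numpy as np
from hmmlearn.hmm import GMMHMM  # assumed available

def train_class_hmms(sequences_by_class, n_states=3, n_mix=3):
    """One HMM per emotion: 3 hidden states, 3-component Gaussian mixtures per state,
    trained on the per-window feature sequences (e.g. the 10-window KPCA features)."""
    models = {}
    for label, seqs in sequences_by_class.items():      # seqs: list of (n_windows, d) arrays
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = GMMHMM(n_components=n_states, n_mix=n_mix,
                   covariance_type="diag", n_iter=50, random_state=0)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, seq):
    # Assign the class whose HMM yields the highest log-likelihood for the sequence
    return max(models, key=lambda label: models[label].score(seq))
```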
4.1 Feature Level Kernel Fusion
The feature level fusion is performed by fusing the kernel matrices of the audio and visual features within a short time window w. Note that the emotional intention of an individual is usually expressed over a period of time, e.g., 1-2 seconds, and the analysis should be based on the ensemble of the information. For each window w, it is not appropriate to label its emotional class; therefore supervised learning such as GDA cannot be applied. On the other hand, an unsupervised method such as KPCA can be used to derive a compact representation of the information presented in the current windowed signal in the joint subspace. Fig. 3 depicts the recognition performance of kernel based fusion methods at selected parameters, in comparison with audio only, visual only, and audiovisual information through vector concatenation. A Gaussian kernel with σ = 10 is used. By using the weighted sum based kernel fusion at a = 0.4, it produces
Table 1. Experimental Results (%) of score level fusion (Kernel functions: Linear (L), Gaussian (G), Polynomial (P))

LDA classifier:
  Scores     Concat.   Kernel-(WS)                 Kernel-(M)
  Original   73.94     75.86 (G, σ=10^4, a=0.4)    75.05 (G, σ=10^3)
  Min-Max    73.94     77.37 (P, d=2, a=0.5)       76.77 (L)
  Gaussian   73.94     75.76 (G, σ=10, a=0.5)      75.25 (G, σ=10)

SVM classifier:
  Scores     Concat.   Kernel-(WS)                 Kernel-(M)
  Original   75.05     76.67 (P, d=2, a=0.4)       76.36 (L)
  Min-Max    59.39     76.87 (P, d=3, a=0.5)       76.87 (L)
  Gaussian   77.88     79.09 (L, a=0.5)            53.23 (L)
the best overall recognition accuracy of 82.22%, which significantly outperforms the other methods. In addition, it also provides efficient dimensionality reduction, compared with the original dimensionality of 240 + 200 = 440 for the audio and visual features.

4.2 Score Level Kernel Fusion
The score level fusion is performed by fusing the kernel matrices of the classification scores, which are the outputs of the HMMs that represent the likelihood of being classified as a certain emotion from the audio and visual sources respectively. In addition to using the original scores directly, we also compare two score normalization approaches, Min-Max and Gaussian normalization. Let s_k, k = 1, ..., n, represent the obtained classification scores; the Min-Max normalization can then be formulated as \tilde{s}_k = \frac{s_k - s_{k,\min}}{s_{k,\max} - s_{k,\min}}, where s_{k,\min} and s_{k,\max} are the minimum and maximum of the k-th dimension obtained from the training data. The Gaussian normalization method can be expressed as \tilde{s}_k = \frac{s_k - \mu_k}{\sigma_k}, where μ_k and σ_k denote the mean and standard deviation of the k-th dimension estimated from the training data. Table 1 shows the experimental results of the kernel based score level fusion methods, in comparison with concatenation of the audio and visual scores. An LDA and a linear SVM classifier are compared for classification. The dimensionality of the KPCA projection is set to 12. Note that when LDA is applied on the fused KPCA features, it is equivalent to the GDA method. For concatenation based fusion, because the same normalization parameters are used along each dimension of the original vectors, the Min-Max and Gaussian normalization methods do not change the performance of LDA. It can be seen that the weighted sum based kernel fusion method achieves better recognition performance.
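A minimal sketch of the two normalizations, with the statistics taken from the training data (our own illustration):

```python
import numpy as np

def min_max_normalize(scores, s_min, s_max):
    # Min-Max normalization with per-dimension extrema estimated on the training data
    return (scores - s_min) / (s_max - s_min)

def gaussian_normalize(scores, mu, sigma):
    # Gaussian (z-score) normalization with training-set mean and standard deviation
    return (scores - mu) / sigma
```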
4.3 Discussion
In our experiments, the weighted sum based kernel fusion demonstrates better overall recognition accuracy at both feature and score levels. The concatenation based feature level fusion actually degrades the recognition performance
compared with the monomodal methods. This is due to the disparate characteristics of the audio and visual features. The significant performance improvement of kernel fusion over concatenation based fusion demonstrates its capability of combining heterogeneous data. Weighted sum based kernel fusion at the feature level achieves better performance than score level fusion. For an emotion recognition problem, the audio and visual channels are dependent sources of a certain class. The scores, which are obtained by independent classification of the individual modalities, contain less information about their dependency. Appropriate fusion at the feature level may be able to better describe the relationship between the different modalities, and hence produces better recognition results.
5 Conclusions
This paper has presented a kernel based fusion approach for audiovisual based multimodal emotion recognition. The proposed method combines the information derived from different modalities via algebraic operations on the kernel matrices, and identifies a subspace to describe the joint characteristics of different sources. The introduced kernel fusion strategy is examined at both feature and score levels. Experimental results demonstrate that the presented approach provides efficient dimensionality reduction, and improves the recognition performance. The proposed emotion recognition system is capable of facilitating intelligent human computer interaction, and it is expected that such a system can find applications in a plethora of scenarios such as video conferencing, education and training, health care, database management, security and surveillance, gaming and entertainment.
References 1. De Silva, L.C., Miyasato, T., Nakatsu, R.: ’Facial emotion recognition using multimodal information’. In: Proceedings of IEEE International Conference on Information, Communications and Signal Processing, vol. 1, pp. 397–401 (1997) 2. Go, H., Kwak, K., Lee, D., Chun, M.: Emotion recognition from the facial image and speech signal. In: Proceedings of SICE Annual Conference, Japan, vol. 3, pp. 2890–2895 (2003) 3. Kanluan, I., Grimm, M., Kroschel, K.: Audio-visual emotion recognition using an emotion space concept. In: Proceedings of 16th European Signal Processing Conference, Lausanne, Switzerland (2008) 4. Metallinou, A., Lee, S., Narayanan, S.: Audio-visual emotion recognition using Gaussian mixture models for face and voice. In: Proceedings of 10th IEEE International Symposium on Multimedia, pp. 250–257 (2008) 5. Han, M., Hus, J.H., Song, K.T.: A new information fusion method for bimodal robotic emotion recognition. Journal of Computers 3(7), 39–47 (2008) 6. Song, M., Chen, C., You, M.: Audio-visual based emotion recognition using tripled hidden Markov model. In: Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, Montreal, Canada, vol. 5, pp. 877–880 (2004)
7. Zeng, Z., Tu, J., Pianfetti, B., Huang, T.S.: Audio-visual Affective Expression Recognition through Multi-stream Fused HMM. IEEE Transactions on Multimedia 10(4), 570–577 (2008) 8. Wang, Y., Guan, L.: Recognizing human emotional state from audiovisual signals. IEEE Transactions on Multimedia 10(5), 936–946 (2008) 9. Muller, K.R., Mika, S., Ratsch, G., Tsuda, K., Scholkopf, B.: An introduction to kernel-based learning algorithms. IEEE Transactions on Neural Networks 12, 181–201 (2001) 10. Scholkopf, B., Smola, A., Muller, K.R.: Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation 10(5), 1299–1319 (1998) 11. Baudat, G., Anouar, F.: Generalized discriminant analysis using a kernel approach. Neural Computing 12(10), 2385–2404 (2000) 12. Yang, J., Jin, Z., Yang, J.Y., Zhang, D., Frangi, A.F.: Essence of kernel Fisher discriminant: KPCA plus LDA. Pattern Recognition 37(10), 2097–2100 (2004)
Automatic Eye Detection in Human Faces Using Geostatistical Functions and Support Vector Machines João Dallyson S. Almeida, Aristófanes C. Silva, and Anselmo C. Paiva Federal University of Maranhão (UFMA), Av. dos Portugueses, s/n - 65085-580, São Luís - MA, Brazil
[email protected],
[email protected],
[email protected]
Abstract. Several computational systems which depend on the precise location of the eyes have been developed in the last decades. Aware of this need, we propose a method for the automatic detection of eyes in images of human faces using four geostatistical functions (semivariogram, semimadogram, covariogram and correlogram) and support vector machines. The method was tested using the ORL human face database, which contains 400 images grouped into 40 persons, each having 10 different expressions. The detection achieved a sensitivity of 93.3%, a specificity of 82.2% and an accuracy of 89.4%. Keywords: Automatic eye detection, semivariogram, semimadogram, covariogram, correlogram, support vector machine (SVM).
1 Introduction
Many applications need to detect eyes, among which we may cite: determination of facial characteristics, biometric systems based on the face and iris, analysis of facial expressions, and monitoring of drivers' tiredness. Such detection is fundamental not just to analyse the eyes (open, partially open, closed, sad, scared, etc.) but also to supply the position of the mouth and the nose, and it constitutes a complex problem due to variations of illumination, background and facial expressions. From the geometrical features, contrast and eye movement, information can be extracted for iris recognition/detection systems and human-machine interfaces. The main difficulty in locating the eyes in human face images is the variety of facial expressions and positions that a person can show at the moment of eye detection. In this paper, we propose the analysis of texture by means of geostatistical functions. These functions are widely used in Geographical Information Systems (GIS), but have not yet been explored for eye detection in facial images. They have the advantage of simultaneously analyzing the variability and the spatial correlation of pixels, and they work with textures in 2D and 3D. The textural measure is used as input to a Support Vector Machine. This paper is organized as follows. In Section 2 we present some works related to eye detection in facial images. In Section 3, the geostatistical functions are
presented. Next, in Section 4, the results are shown and the application of the techniques under study is discussed. Finally, Section 5 presents some concluding remarks.
2 Related Works
In this section we present some works that have been developed to automatically detect eyes in digital images. In [10], a method is presented for the detection of eyes in digital facial images using Zernike moments with Support Vector Machines (SVM). The method achieves matching rates of 94.6% for the detection of eyes in facial images from the ORL base. With a similar purpose, the authors in [13] used a template-based method to find the center of the iris, achieving a matching rate of 95.2% for images (without the presence of glasses) from the ORL base. In [15], a probabilistic classifier is used to find the region of the eyes; 500 pairs of eye images from the FERET base were used, obtaining a global matching rate of 94.5%. The authors in [8] proposed a method for real-time monitoring of drivers' tiredness. They used features such as eyelid, gaze, head movement and facial expressions. The method obtained a matching rate of 96.4% when detecting the glint of the pupil. In [16], a method for the detection of eyes using grayscale images, binary template matching and a support vector machine (SVM) was developed. This method obtained a matching rate of 96.8% for 1,521 images from the BioID face base. In [12], a method for eye detection using wavelets and back-propagation neural networks is presented. The matching rate obtained for the ORL images was 88%. In [9], a method is proposed for the identification of persons through the analysis of iris texture using the geostatistical functions of semivariogram and correlogram. The method obtained a success rate of 98.14% on an iris database called CASIA. In [1], we proposed a method for the automatic detection of eyes in images of human faces using semivariogram functions and support vector machines. The detection obtained a sensitivity of 84.6%, a specificity of 93.4% and an accuracy of 88.45%. With the goal of improving the results presented in [1], this work investigates four geostatistical functions (semivariogram, semimadogram, covariogram and correlogram) applied to eye detection.
3 Geostatistical Functions
Geostatistics is the analysis of spatially continuous data. It treats geographic attributes as random variables which depend on joint distributions of their locations. The semivariogram, semimadogram, covariogram, and correlogram functions summarize the strength of associations between responses as a function of distance, and possibly direction [5]. We typically assume that spatial autocorrelation does not depend on where a pair of observations (in our case, voxel or
attenuation coefficient) is located, but only on the distance between the two observations, and possibly on the direction of their relative displacement.

3.1 Semivariogram Function
Semivariance is a measure of the degree of spatial dependence between samples. The magnitude of the semivariance between points depends on the distance between the points. A smaller distance yields a smaller semivariance and a larger distance results in a larger semivariance. The plot of the semivariances as a function of distance from a point is referred to as a semivariogram. The semivariogram is defined by

\gamma(h) = \frac{1}{2N(h)} \sum_{i=1}^{N(h)} (x_i - y_i)^2 \quad (1)

where h is the lag (vector) distance between the head value (target pixel), y_i, and the tail value (source pixel), x_i, and N(h) is the number of pairs at lag h.

3.2 Semimadogram Function
The semimadogram is the mean absolute difference of paired sample features as a function of distance and direction. It is defined by

m(h) = \frac{1}{2N(h)} \sum_{i=1}^{N(h)} |x_i - y_i| \quad (2)

where h is the (vector) lag distance between the head value, y_i, and the tail value, x_i, and N(h) is the number of pairs at lag h.

3.3 Covariogram Function
The covariance function (covariogram) is a statistical measure of the correlation between two variables. In geostatistics, covariance is computed as the overall sample variance minus the variogram value. The covariance function tends to be high when h = 0 (i.e. the correlation function is 1) and tends to zero for points which are separated by distances greater or equal to the range (i.e. uncorrelated). The covariogram is defined by

C(h) = \frac{1}{N(h)} \sum_{i=1}^{N(h)} x_i y_i - m_{-h}\, m_{+h} \quad (3)

where m_{-h} is the mean of the tail values and m_{+h} is the mean of the head values,

m_{-h} = \frac{1}{N(h)} \sum_{i=1}^{N(h)} x_i, \qquad m_{+h} = \frac{1}{N(h)} \sum_{i=1}^{N(h)} y_i \quad (4)

3.4 Correlogram Function
The correlation function (correlogram) is a standardized version of the covariance function; the correlation coefficients range from -1 to 1. The correlation is expected to be high for units which are close to each other (correlation = 1 at distance zero) and tends to zero as the distance between units increases. The correlation is defined by

\rho(h) = \frac{C(h)}{\sigma_{-h}\, \sigma_{+h}} \quad (5)

where \sigma_{-h} is the standard deviation of the tail values and \sigma_{+h} is the standard deviation of the head values,

\sigma_{-h} = \left[ \frac{1}{N(h)} \sum_{i=1}^{N(h)} x_i^2 - m_{-h}^2 \right]^{1/2}, \qquad \sigma_{+h} = \left[ \frac{1}{N(h)} \sum_{i=1}^{N(h)} y_i^2 - m_{+h}^2 \right]^{1/2} \quad (6)
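For a single lag vector h = (dy, dx), the four functions in Eqns. (1)–(6) can be computed over a grey-level window as sketched below (our own illustration; the direction/angular tolerance and lag-distance tolerance used in the experiments are omitted for brevity).

```python
import numpy as np

def lag_pairs(window, dy, dx):
    """Tail (x_i) and head (y_i) pixel values of all pairs separated by the lag (dy, dx)."""
    H, W = window.shape
    ys, ye = max(0, -dy), min(H, H - dy)
    xs, xe = max(0, -dx), min(W, W - dx)
    tail = window[ys:ye, xs:xe].astype(float)
    head = window[ys + dy:ye + dy, xs + dx:xe + dx].astype(float)
    return tail.ravel(), head.ravel()

def geostat_features(window, dy, dx):
    """Semivariogram, semimadogram, covariogram and correlogram for one lag (Eqns. 1-6)."""
    x, y = lag_pairs(window, dy, dx)
    n = len(x)
    gamma = ((x - y) ** 2).sum() / (2 * n)                 # Eqn. (1)
    mado = np.abs(x - y).sum() / (2 * n)                   # Eqn. (2)
    m_tail, m_head = x.mean(), y.mean()                    # Eqn. (4)
    cov = (x * y).mean() - m_tail * m_head                 # Eqn. (3)
    s_tail = np.sqrt(np.maximum((x ** 2).mean() - m_tail ** 2, 0.0))
    s_head = np.sqrt(np.maximum((y ** 2).mean() - m_head ** 2, 0.0))       # Eqn. (6)
    rho = cov / (s_tail * s_head) if s_tail > 0 and s_head > 0 else 0.0    # Eqn. (5)
    return gamma, mado, cov, rho
```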
4 Materials and Methods
Figure 1 illustrates the proposed methodology, which consists of two stages: training and test. The steps of pre-processing with a homomorphic filter, feature extraction using geostatistical functions and classification with a Support Vector Machine (SVM) [2] are executed in both stages. In addition, in the training stage we have the manual segmentation of the region of interest and the selection of the most significant features through Fisher's stepwise discriminant analysis [11]. The test stage additionally includes the automatic extraction of the region of the eyes through the gradient projection and the segmentation of eye candidates by applying region growing [6].

4.1 Database
The ORL [3] image base was used for both stages. It is formed by 40 people with different facial expressions, hair styles and illumination conditions, with and without glasses, totaling 400 images with dimensions of 92x112 pixels in grayscale. In the present work, we have used 164 images for training and 327 for testing.

4.2 Training Methodology
Initially, the images pass through a pre-processing step using a homomorphic filter [16] in order to compensate for luminosity differences. From the 164 training images, 24 regions were manually selected, formed by 15x15-pixel windows, 18 of them associated to the eye class and 6 associated to the non-eye class (eyebrows, glasses, ears, etc.). Next, from each window, texture features are extracted using the geostatistical functions explained in Section 3.4. The parameters used by the correlogram functions for the extraction of features were the directions 0, 45, 90 and 135 with an angular tolerance of 22.5 and a lag
Fig. 1. Proposed methodology
(distance) increment of 1, 2 and 3, corresponding to 14, 7 and 4 lags, with a tolerance for each lag distance of 0.5, 1.0 and 1.5, respectively. The directions adopted are those most commonly used in the literature for image analysis. For the lag tolerance, following Isaaks and Srivastava [7], the most common choice is to adopt half the lag increment. Finally, the selection of the most significant variables is done by using Fisher's stepwise discriminant analysis [11] through the software Statistical Package for the Social Sciences (SPSS) [14]. We used leave-one-out [4] cross-validation in the elaboration of the training model to evaluate its discrimination power. We selected 40 features from 100 (4 directions times 25 lags (14+7+4), corresponding to the increments 1, 2 and 3, respectively). After the selection of variables, the samples are trained in the SVM.

4.3 Test Methodology
In the test stage we have the automatic detection of eyes. It is composed of the following steps: pre-processing using a homomorphic filter, automatic extraction of the region of the eyes by adapting the method proposed in [13], and segmentation of eye candidates through an adaptation of the method proposed in [10]. After executing this stage, we evaluated the results using the measures presented in Section 4.4.

Automatic Extraction of the Region of the Eyes. The automatic extraction of the region of the eyes aims to reduce the search space by generating a sub-image with the region which possibly contains the eyes and excluding regions of no interest (mouth, nose, hair and background). In the detection of the region of the eyes, a smoothing is initially executed using a Gaussian filter with a
3 x 3 mask; next, we calculate the gradient of the input image using the Sobel filter [6]. We apply a horizontal projection to this gradient and take the mean of the three highest peaks of this projection. Knowing that the eyes are found in the upper part of the face and that, together with the eyebrows, they correspond to the two peaks closest to each other, this a priori physiological information can be used to identify the region of interest. The peak of the horizontal projection provides the horizontal position of the eyes. We then apply a vertical projection to all the pixels in this horizontal region, and the valley of this projection provides the centre of the face. At the same time, we apply a vertical projection to the gradient image; the two peaks on the left and on the right correspond to the limits of the face, from which the length of the face is estimated. Combining the results of the projections, we obtain an image that corresponds to the region of the eyes.

Segmentation of the Eye Candidates. In the segmentation stage we use a minimum filter [6] of size 7x7 on the detected eye-region image. Next, we perform thresholding using the mean intensity of the pixels as the threshold. In the binarized image resulting from the thresholding we apply the region growing technique [6] to locate the centre of the objects. This same centre is used as the centre of the 15x15 window from which the region is extracted and to which the geostatistical functions are applied. The samples generated by the application of the geostatistical functions are submitted for classification by the previously trained SVM.
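A loose sketch of the projection-based localization is given below, assuming SciPy for the smoothing and Sobel filtering; the peak selection is simplified compared with the procedure described above, and restricting the search to the upper half of the face encodes the physiological prior stated earlier.

```python
import numpy as np
from scipy import ndimage  # assumed available for the Gaussian and Sobel filters

def eye_row(face_img):
    """Estimate the row (horizontal position) of the eyes from the horizontal
    projection of the gradient image (simplified sketch)."""
    smoothed = ndimage.gaussian_filter(face_img.astype(float), sigma=1.0)  # ~3x3 smoothing
    grad = np.hypot(ndimage.sobel(smoothed, axis=0), ndimage.sobel(smoothed, axis=1))
    h_proj = grad.sum(axis=1)                        # horizontal projection: one value per row
    upper = h_proj[: face_img.shape[0] // 2]         # eyes/eyebrows lie in the upper half
    three_peaks = np.argsort(upper)[-3:]             # rows of the three highest values
    return int(np.round(three_peaks.mean()))         # their mean marks the eye row

def face_limits(face_img):
    """Estimate the left and right limits of the face from the vertical projection."""
    grad = np.hypot(ndimage.sobel(face_img.astype(float), axis=0),
                    ndimage.sobel(face_img.astype(float), axis=1))
    v_proj = grad.sum(axis=0)
    mid = face_img.shape[1] // 2
    return int(np.argmax(v_proj[:mid])), int(mid + np.argmax(v_proj[mid:]))
```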
4.4 Validation
The methodology is evaluated using positive predictive value (PPV), negative predictive value (NPV), sensitivity (SE), specificity (SP) and accuracy (AC). These are metrics commonly used to analyze the performance of systems based on image processing. Positive predictive value is defined by TP/(TP + FP), negative predictive value by TN/(TN + FN), sensitivity by TP/(TP + FN), specificity by TN/(TN + FP), and accuracy by (TP + TN)/(TP + TN + FP + FN), where TN is true-negative, FN is false-negative, FP is false-positive, and TP is true-positive.
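The definitions above translate directly into code; the sketch below is a plain transcription of these formulas.

```python
def validation_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity, predictive values and accuracy from the confusion counts."""
    return {
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "SE":  tp / (tp + fn),
        "SP":  tn / (tn + fp),
        "AC":  (tp + tn) / (tp + tn + fp + fn),
    }

# With the correlogram counts reported in Section 5 (TP=473, FP=49, TN=226, FN=34),
# this gives SE ~ 0.933, SP ~ 0.822 and AC ~ 0.894, matching Table 1.
```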
5 Results and Discussion
In this section, we analyse the efficiency of the proposed methodology, which uses geostatistical functions with an SVM for the detection of eyes in digital human face images. The proposed methodology was evaluated using the ORL image base. First, we tested 400 images in the stage of automatic extraction of the region of the eyes. In this test, it was possible to detect the region in 81.5% of the images, that is, 327 images. In Figure 2 we show some examples of successfully detected
Fig. 2. Region of the eyes correctly detected
Fig. 3. Failure in the detection of the region of eyes
regions. On the other hand, in Figure 3 we have examples of images for which the detection failed. We observed that most of the errors occurred due to the position of the face, especially when the face was turned aside, and due to the various types of glasses and hair styles. With the reduction of the search space through the detection of the region of eyes, we start the stage of eye detection. For this stage we used 164 random images from the ORL base, which corresponds to 50% of the images that had the region of eyes correctly detected, to train the SVM. For each image submitted to the training, 18 eye samples and 6 non-eye samples were taken, totaling 3984 samples. Such samples were manually selected. The SVM library LIBSVM [4] was used for training and testing. A radial basis function was used as the kernel, and the parameters C and γ used for each geostatistical function are listed in Table 1. To evaluate the efficiency of our proposal we applied the methodology to the 327 images on which the detection of the region of eyes succeeded. From the stage of detection of the possible eyes, we obtained 781 samples to be classified by the trained SVM. Table 1 summarizes the results of the studied functions. The correlogram function had the highest rates of sensitivity (93.3%) and accuracy (89.4%), while the semivariogram function had the highest specificity (93.35%).

Table 1. Detailed analysis of the eye vs. non-eye characterization for each geostatistical function
  Features        C     γ     SE (%)   SP (%)   PPV (%)   NPV (%)   AC (%)
  Semimadogram    32    0.5   91       64.75    83.71     78.24     82.20
  Semivariogram    8    2     84.64    93.35    94.29     82.40     88.40
  Correlogram    128    8     93.3     82.2     90.6      86.9      89.4
  Covariogram     32    0.5   89.95    72.35    86.43     78.6      84
As Table 1 shows, the correlogram function achieved the best accuracy rate compared with the other geostatistical functions. Therefore, we discuss and present the results obtained with this function. Figure 4 shows some examples of images for which the methodology succeeded in detecting the eyes using the correlogram function, yielding a TP count of 473. On
Fig. 4. Correct location of the eyes
Fig. 5. Failure in the location of the eyes
the other hand, in Figure 5 we have examples where the methodology failed, giving an FP count of 49. Analyzing the results, we notice that most of the errors occurred in the regions of the eyebrows and the frames of glasses. Analyzing the classification of the non-eye regions using the correlogram function, we observed that the amounts of TN and FN were 226 and 34, respectively. In Figure 6 we have examples of eye regions which were classified as non-eye. We can notice that the errors occurred in images whose eye regions were darkened, becoming similar to eyebrow, hair or background regions.
Fig. 6. Eye region classified as non-eye
Of the 327 images used in the tests, 55% had both eyes correctly located, 20% had only one of the eyes located, 6% had two or more eye regions or other regions of the face located, and 19% were classified as non-eye regions. This result considers the location of both eyes as the region of eyes for the images classified through the correlogram function. In Figure 7 we have examples of images with both eyes located.
Fig. 7. Location of both eyes
In Figure 8 we have examples of images for which the methodology located only one of the eyes. For these images, the methodology could not recover the eyes that were missed in the eye-candidate segmentation stage. Table 2 shows a comparative view of several eye detection methods using the same face database and different techniques, including the proposed method, by examining the accuracy of each method. Examining the results presented
Fig. 8. Location of only one of the eyes
in Table 2, we can observe that the method proposed in this paper supports the detection of eyes in images of human faces in a similar way to the other methods available in the research literature. We can also see an improvement of about 1% using the correlogram function in relation to the semivariogram used in the work presented in [1].

Table 2. Performance results for eye detection methods

  Method               Accuracy (%)
  Kim and Kim          94.6
  Peng et al.          95.2
  Motwani et al.       88
  Almeida et al.       88.45
  Using Correlogram    89.4
Although our results do not exceed those found in the literature, this work contributes by showing that geostatistical functions, often applied in the analysis of soil texture, can discriminate eye regions from the other regions of the face.
6 Conclusion
In this paper, we presented a methodology for the detection of eyes in human faces which can be applied to systems that need to locate the region of the eyes. This methodology is subdivided into training and eye detection. In both stages the pre-processing using a homomorphic filter is performed, the correlogram function is used as the feature descriptor, and the support vector machine is used for training and classification. The number of images studied in the ORL database is too small to reach definitive conclusions, but the preliminary results of this work are very promising, demonstrating that it is possible to detect eyes in facial images using geostatistical functions (semivariogram, semimadogram, covariogram and correlogram) and support vector machines.
References 1. Almeida, J., Silva, A., Paiva, A.: Automatic Eye Detection Using Semivariogram Function and Support Vector Machine. In: 17th International Conference on Systems, Signals and Image Processing (2010)
2. Burges, C.: A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery 2(2), 121–167 (1998) 3. Cambridge, A.L.: ORL Face Database (2009), Database available at http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html 4. Chang, C., Lin, C.: LIBSVM: a library for support vector machines 80, 604–611 (2001), Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm 5. Clark, I.: Practical Geostatistics. Applied Science Publishers, London (1979) 6. Gonzalez, R., Woods, R.: Digital image processing (2002) ISBN: 0-201-18075-8 7. Isaaks, E., Srivastava, R.: An introduction to applied geostatistics (1989) 8. Jiao, F., He, G.: Real-Time Eye Detection and Tracking under Various Light Conditions. Data Science Journal 6(0), 636–640 (2007) 9. Junior, O.S., Silva, A.C., Abdelouah, Z.: Personal Identification Based on Iris Texture Analysis Using Semivariogram and Correlogram Functions. International Journal for Computational Vision and Biomechanics 2(1) (2009) 10. Kim, H.J., Kim, W.Y.: Eye Detection in Facial Images Using Zernike Moments with SVM. ETRI Journal 30(2), 335–337 (2008) 11. Lachenbruch, P.A., Goldstein, M.: Discriminant analysis. Biometrics, 69–85 (1979) 12. Motwani, M., Motwani, R., Jr, F.C.H.: Eye detection using wavelets and ANN. In: Proceedings of Global Signal Processing Conferences & Expos for the Industry, GSPx (2004) 13. Peng, K., Chen, L., Ruan, S., Kukharev, G.: A robust algorithm for eye detection on gray intensity face without spectacles. Journal of Computer Science and Technology 5(3), 127–132 (2005) 14. Technologies, L.: SPSS for Windows vs 12.0. LEAD Technologies (2003) 15. Wang, P., Green, M., Ji, Q., Wayman, J.: Automatic eye detection and its validation. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 164–164 (2005) 16. Wang, Q., Yang, J.: Eye detection in facial images with unconstrained background. Journal of Pattern Recognition Research 1(1), 55 (2008)
Gender Classification Using a Novel Gait Template: Radon Transform of Mean Gait Energy Image Farhad Bagher Oskuie and Karim Faez Dept. Electrical Engineering Amirkabir University of Technology (Tehran Polytechnic) Tehran, Iran
[email protected],
[email protected]
Abstract. Any information about people such as their gender may be useful in some secure places; however, in some occasions, it is more appropriate to obtain such information in an unobtrusive manner such as using gait. In this study, we propose a novel method for gender classification using gait template, which is based on Radon Transform of Mean Gait Energy Image (RTMGEI). Robustness against image noises and reducing data dimensionality can be achieved by using Radon Transformation, as well as capturing variations of Mean Gait Energy Images (MGEIs) over their centers. Feature extraction is done by applying Zernike moments to RTMGEIs. Orthogonal property of Zernike moment basis functions guarantee the statistically independence of coefficients in extracted feature vectors. The obtained feature vectors are used to train a Support Vector Machine (SVM). Our method is evaluated on the CASIA database. The maximum Correct Classification Rate (CCR) of 98.94% was achieved for gender classification. Results show that our method outperforms the recently presented works due to its high performance. Keywords: Gait, Gender Classification, Zernike Moments, Radon Transform, SVM.
1 Introduction

Gait is the style of walking or locomotion. Stevenage et al. [1] showed that gait can be used to recognize people. Recognition and classification based on gait have the unique property of requiring no contact or individual co-operation. This fact makes gait-based systems a hot research topic in recognition and classification for long-range surveillance video in security systems. Automatic 2D or 3D face, fingerprint and iris recognition systems have some shortcomings, as described in [2]. Gait identification, in turn, is affected by variations such as age, clothes, walking surface, kind of shoe, camera view angle with respect to the walking subject, carried objects and gait speed. Nevertheless, the mentioned property of gait is a strong motivation for improving the related methods of classification and identification. At crime scenes and in secure places, any valuable information about a suspected person, such as gender, can be attractive, so new methods are being applied to gender
162
F. Bagher Oskuie and K. Faez
categorization such 3d face [3]. Gait-based gender classification can be done at low resolution where other biometrics cannot be implemented. In this paper we use CASIA database to gender detection to address the problem of unbalanced number of males and females in the training set. CASIA database is also large enough with sequences of 124 objects which 31 of them are female. Ref [4] describes an automated system which classifies gender by utilizing a sequential set of 2D stick figures which is used to represent the gait signature. This signature is the primitive data for the feature extraction based on the motion parameters. Finally, the gender classification is done using an SVM classifier. Performance of 96% was achieved by using this method over 100 subjects. In [5] a novel spatio-temporal gait pattern called Gait Principle Component Image (GPCI) is presented. The dynamic variations of different body parts is amplified and compressed by using the GPCI. New approach is carried out for gait period detection using the Locally Linear Embedding (LLE) coefficients. Then, the KNN classifier is utilized for gender classification. Finally, the proposed algorithm is experimented on the IRIP gait database with not large population (32 males, 28 females) and a maximum correct rate of 92.33% is achieved. L.Lee et al in [6] extracted the gait feature vector based on parameters of moments features in image regions. Each silhouette is divided to seven regions and an ellipse is fitted to each region. Then, the features such as centroid, aspect ratio of major and minor axis of ellipse and etc, are extracted from each region. In the next step, the mean, standard deviation, magnitudes and phases of each region feature over time are calculated as 57-dimentional feature vectors. In the identification step, Mahalanobis distance was used as measure of similarity and the Nearest-Neighbor approach was used to rank a set of gait sequences. Finally, the SVM is trained and tested for gender classification. Performances of 88% and 94% are achieved for linear and polynomial kernels respectively. Therefore, it can be concluded that the boundary of feature vectors are not completely linear. Due to this, some extra complexity will be imposed on the system. In this paper we proposed a new high performance method for gender classification. Our algorithm is based on a novel spatio-temporal gait representation called Radon Transform of Mean Gait Energy Image (RTMGEI). Zernike moments of RTMGEIs are extracted as the feature vector and used to training an SVM for classification. This study mainly is focused on gender classification capability of RTMGEI, but we also reported its performance for individual gait recognition. The structure of the paper is as follows: In section 2, the procedure of creating RTMGEI is describes which consist of sub-sections as overview, Radon Transform, and Zernike Moments. Section 3, describes our experimental results, and, finally, the conclusions are drawn in section 4.
Fig. 1. Block diagram of our proposed system for gender classification
2 Radon Transform of Mean Gait Image

2.1 Overview

Our proposed system is shown in Fig. 1. As in many other studies, the silhouettes are extracted from the original gait video sequences and the pre-processing procedure [7] is applied to them. The denoising process is needed before applying the gait classification algorithm. First, the centers of the silhouettes in the gait sequence are calculated, and then each silhouette is cropped to a predetermined size around its center. The cropped silhouettes are then aligned using their calculated centers. One of the recently developed spatio-temporal gait templates is the Gait Energy Image (GEI), first presented by Ju Han et al. in [8]. The GEI is insensitive to incidental silhouette errors in individual frames. As expected, the GEI represents the fundamental shapes of the silhouettes as well as their changes over the gait cycle [8]. But in gait cycles which have incomplete silhouettes or occlusion by objects, recognition based on the GEI leads to incorrect results. In order to avoid the mentioned problems, we prefer to use the Mean Gait Energy Image (MGEI) as the base representation for extracting the features [9]. The MGEI of the i-th sequence is defined as follows:
MGEI_i(x, y) = \frac{1}{M_i} \sum_{j=1}^{M_i} GEI_{i,j}(x, y) \quad (1)
where M_i is the number of different gait cycles existing in the i-th sequence, and x and y are the two-dimensional image coordinates. GEI_{i,j} is the Gait Energy Image for the j-th cycle of the i-th sequence and is calculated by the following equation:

GEI_{i,j}(x, y) = \frac{1}{N_j} \sum_{t=1}^{N_j} I_{i,j,t}(x, y) \quad (2)

where N_j is the number of frames in the j-th gait cycle and I_{i,j,t}(x, y) is the t-th aligned silhouette frame of that cycle.
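Eqns. (1) and (2) reduce to two averages over aligned binary silhouettes, as in the following short sketch (our own illustration):

```python
import numpy as np

def gait_energy_image(silhouettes):
    """GEI of one gait cycle: the mean of the aligned binary silhouette frames (Eqn. 2)."""
    return np.mean(np.stack(silhouettes).astype(float), axis=0)

def mean_gait_energy_image(cycles):
    """MGEI of a sequence: the mean of the GEIs of all its gait cycles (Eqn. 1)."""
    return np.mean(np.stack([gait_energy_image(c) for c in cycles]), axis=0)
```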
Fig. 2. (a) Calculated MGEI for a sequence; (b) the corresponding RTMGEI; (c) Zernike moments of the RTMGEI
According to Fig. 1, after calculating the MGEI, the Radon Transform is applied and the novel spatio-temporal template is produced. We call this template the RTMGEI. The RTMGEI is defined as follows:
RTMGEI_i = RTMGEI_i(\rho, \theta) \quad (3)
where ρ is the radial distance from the center of the image or data to be transformed, and θ is the angle in the polar coordinate system. Definitions of the Radon Transform and its arguments are given in detail in the next sub-section. Fig. 2 illustrates one calculated MGEI and its corresponding RTMGEI. Since the MGEI and RTMGEI are normalized and transformed from the frames of the sequences, we can display them as viewable images. As is evident in the figure, the RTMGEI is smoother and has less noise than its corresponding MGEI. This is because of the summation property of the Radon Transform, which reduces the noise of the MGEIs and yields a better temporal template in the presence of noise. In fact, the first level of denoising is done by summing single frames to construct the MGEIs, and the second level of denoising is done by applying the Radon Transform to the MGEIs. Using RTMGEIs also results in a considerable reduction of data dimensionality and increases the separability of the data in the classification sub-space. After calculating the RTMGEIs, Zernike moments are employed for size reduction and feature extraction on each sequence. Zernike moments are described in detail in sub-section 2.3. We use Zernike moments of up to the 15th order, which results in a feature vector of 136 coefficients. In the training process, as shown in Fig. 1, the feature vectors of the sequences, produced as mentioned above, are used to train a Support Vector Machine (SVM). In the test process, the feature vector of the probe sequence is produced and fed to the SVM for gender classification.

2.2 Radon Transform
The Hough and especially the Radon Transform have found various applications in computer vision, image processing, pattern recognition and seismic data analysis. One of the main strengths of these two transforms is the mapping of two-dimensional images containing lines into a parameter sub-space, where each line in the image gives a peak positioned at the corresponding line parameters. Several definitions of the Radon Transform can be found in the mathematics literature, but the most popular form is the following [8]:
$$\mathrm{RT}_f(\rho,\theta) = \int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} f(x,y)\,\delta(\rho - x\cos\theta - y\sin\theta)\,dx\,dy, \qquad (4)$$
which expresses lines in the form ρ = x cos θ + y sin θ, where θ is the angle and ρ is the smallest distance of the line to the origin of the coordinate system. The Radon Transform for a set of parameters (ρ, θ) is the line integral through the image f(x, y), where the line is positioned according to the value of (ρ, θ). The Dirac delta function δ(·) is defined as ∞ for argument zero and 0 for all other arguments (it integrates to one). In the digital domain the Kronecker delta is used instead, which is defined as 1 for
argument zero and 0 for all other arguments. Thus, the Radon Transform simplifies to a summation of pixel intensities along discrete lines (Fig. 2(a, b)).

2.3 Zernike Moments
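As a sketch of this discrete Radon Transform step, the snippet below uses scikit-image's radon function to turn an MGEI into an RTMGEI. The choice of 180 projection angles is an assumption, made consistent with the 180x180 template size reported in Section 3.

```python
import numpy as np
from skimage.transform import radon

def rtmgei(mgei, n_angles=180):
    """Discrete Radon Transform of an MGEI (summation of pixel intensities
    along discrete lines), yielding the RTMGEI template."""
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    return radon(mgei, theta=theta, circle=False)

template = rtmgei(mgei)   # 'mgei' from the previous sketch
print(template.shape)     # (number of rho bins, 180 angles)
```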
Zernike moments are a kind of orthogonal complex moments whose useful properties, such as rotation, translation and scale invariance, have been studied and improved in [10][11][12]. The Zernike moment kernels are the complete orthogonal Zernike polynomials, defined over the interior of the unit disc in polar coordinates. Let f(r, θ) be the image intensity function; the two-dimensional Zernike moment of order m with repetition n is defined as:
$$Z_{mn} = \frac{m+1}{\pi}\int_{0}^{2\pi}\!\!\int_{0}^{1} f(r,\theta)\,V^{*}_{mn}(r,\theta)\,r\,dr\,d\theta, \qquad r \le 1, \qquad (5)$$
where V*_{mn}(r, θ) is the complex conjugate of the Zernike polynomial V_{mn}(r, θ); m and n are both integers and must satisfy:

$$(m - |n|) \text{ is even and } |n| \le m. \qquad (6)$$
The Zernike polynomial V_{mn}(r, θ) is defined as:

$$V_{mn}(r,\theta) = R_{mn}(r)\exp(jn\theta), \qquad (7)$$

where j = \sqrt{-1}, and the orthogonal radial polynomial R_{mn}(r) is given by:

$$R_{mn}(r) = \sum_{s=0}^{(m-|n|)/2} (-1)^{s}\,\frac{(m-s)!}{s!\left(\frac{m+|n|}{2}-s\right)!\left(\frac{m-|n|}{2}-s\right)!}\;r^{\,m-2s}. \qquad (8)$$
For a discrete image, let P(r, θ) be the intensity of the image pixel; then (5) can be written as:

$$Z_{mn} = \frac{m+1}{\pi}\sum_{r}\sum_{\theta} P(r,\theta)\,V^{*}_{mn}(r,\theta). \qquad (9)$$
Because the radial polynomials are orthogonal, Zernike moments have low redundancy [13]. The structural and static information of an individual in the RTMGEI is captured by the low-order Zernike moments, while the dynamic information is captured by the high-order moments. The Zernike moments of an RTMGEI are plotted in Fig. 2(c). As shown, the amplitude of the moments decays as the order of the Zernike moments increases.
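A minimal sketch of the Zernike feature extraction step, assuming the mahotas library (not mentioned in the paper). Note that mahotas returns only the moment magnitudes for non-negative repetitions, so the exact coefficient count differs from the 136 reported above, which counts every (order, repetition) pair up to order 15.

```python
import numpy as np
import mahotas.features

def zernike_features(rtmgei, degree=15):
    """Magnitudes of the Zernike moments of an RTMGEI up to the given order.
    The template is treated as an image inscribed in a disc centered on the image."""
    rtmgei = np.asarray(rtmgei, dtype=float)
    radius = min(rtmgei.shape) // 2          # disc radius in pixels (assumption)
    return mahotas.features.zernike_moments(rtmgei, radius, degree=degree)

features = zernike_features(template)        # 'template' from the Radon sketch
```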
3 Experimental Results

Our method is evaluated on the CASIA gait database, which includes three sub-databases named Dataset A, Dataset B and Dataset C. We use the gait sequences of Dataset B, captured from 124 subjects. For each subject there are 10 gait sequences, each recorded from 11 camera views of the same scene; the original image size is 320x240 pixels. For each person there are six normal gait sequences (set A), two bag-carrying sequences (set B) and two coat-wearing sequences (set C). For each set, we take the first sequences of the 90° view of the normal, bag-carrying and coat-wearing sets as the training subsets, named A1, B1 and C1, respectively. The remaining 90°-view sequences are taken as the test subsets, named A2, B2 and C2, respectively. First, each frame of a sequence is segmented, aligned and resized to an image of 201x181 pixels. Then the RTMGEI of each sequence is calculated as described in Section 2; the RTMGEIs can be displayed as 180x180-pixel images. The final block computes the Zernike moments up to the 15th order, which results in a feature vector of 136 coefficients. In the Zernike-moment computation, the pixel coordinates are mapped into the unit circle, i.e. x² + y² ≤ 1, with the center of the image taken as the origin.

Table 1. Results of the proposed algorithm on the CASIA database
Kernel           | Training set | CCR-Train | CCR-Test A2 | A2-B2  | A2-C2  | A2-B2-C2 | SVs
Linear           | A1           | 100%      | 96.66%      | 81.76% | 93.12% | 82.23%   | 17
Linear           | A1-B1-C1     | 98.45%    | -           | -      | -      | 96.16%   | 72
Gaussian (σ=10)  | A1           | 97.22%    | 95.74%      | 94.04% | 92.72% | 92.07%   | 79
Gaussian (σ=10)  | A1-B1-C1     | 97.53%    | -           | -      | -      | 96.03%   | 205
Polynomial (d=2) | A1           | 100%      | 97.77%      | 89.94% | 91.00% | 86.41%   | 45
Polynomial (d=2) | A1-B1-C1     | 100%      | -           | -      | -      | 98.94%   | 97
Polynomial (d=3) | A1           | 100%      | 98.51%      | 89.81% | 96.42% | 90.12%   | 42
Polynomial (d=3) | A1-B1-C1     | 100%      | -           | -      | -      | 98.14%   | 111

(SVs: number of support vectors)
Results of our experiments are shown in Table 1, in which different configurations of the training and test sets are considered. The Correct Classification Rates for the training and test steps are listed as "CCR-Train" and "CCR-Test", respectively. With A1 as the training set and A2 as the test set, the maximum CCR of 98.51% is achieved when a third-order polynomial kernel is used for the SVM; the same kernel also yields the maximum CCR (96.42%) when A2 and C2 form the test set. The Gaussian radial basis function kernel (σ=10) gives the maximum CCR of 94.04% when A2 and B2 are used as the test set and A1 as the training
set. The Gaussian kernel also gives the maximum CCR of 92.07% when A1 is used as the training set and all remaining sets are used for testing. In the other experiments, A1, B1 and C1 are used as the training set and A2, B2 and C2 as the test set; in this case, the maximum CCR of 98.94% is achieved with a second-order polynomial kernel. The minimum number of support vectors is obtained with the linear kernel. On the A2 test set, the CCR of the linear kernel differs only slightly from that of the non-linear kernels; it can therefore be concluded that the feature vectors of normal gait sequences are more linearly separable than those of the bag-carrying and coat-wearing sequences.

Table 2. Comparison of the proposed method with recent works. The number of subjects used in our experiments is larger than in the other works.
Methods                                  | CCR-Test A2 | A2-B2  | A2-C2  | Data set
Maodi Hu [14] (ICPR 2010)                | 96.77%      | 88.71% | 83.87% | 62
Lee et al. [6] (FG 2002)                 | 94%         | -      | -      | 50
Huang et al. [15] (ACCV 2007)            | 89.5%       | -      | -      | 50
Li et al. [2]                            | 93.28%      | -      | -      | 62
Yu et al. [16]                           | 95.94%      | -      | -      | 62
Our method (Zernike moments of RTMGEI):
  Gaussian (σ=10)                        | 95.74%      | 94.04% | 92.72% | 124
  Polynomial (d=2)                       | 97.77%      | 89.94% | 91%    | 124
  Polynomial (d=3)                       | 98.51%      | 89.81% | 96.42% | 124
Table 2 compares the results of our proposed method with some recent works. Maodi Hu et al. [14] extract a low-dimensional discriminative representation, called Gabor-MMI, by applying Gabor filters followed by Maximization of Mutual Information (MMI); gender-related Gaussian Mixture Model-Hidden Markov Models (GMM-HMMs) are then used for classification. Despite this sophisticated procedure, only 62 subjects (the 31 female subjects and 31 randomly selected male subjects) from the canonical view (90°) of CASIA Dataset B are used to evaluate their method, whereas our method is evaluated on all 124 subjects of the canonical camera view of Dataset B. The algorithm proposed by Lee et al. [6] is described in the Introduction; it also uses an SVM to classify gender from model-based gait features, and its database population is smaller than ours. In [15], gender classification is based on the fusion of multi-view gait sequences: features are extracted from each sequence by computing ellipse parameters, and the sum rule and an SVM are applied to fuse the similarity measures from the different view angles. As Table 2 demonstrates, our method outperforms these recent studies. It can thus be concluded that the proposed gait template, RTMGEI, captures more of the static and dynamic
characteristics of individual gait sequences. The summation property of the Radon Transform also reduces the effect of frame noise on classification, and the orthogonality of the Zernike moments keeps the redundancy among the features low. Interestingly, in a few cases, better overall test results are achieved when some training error is allowed in the SVM.
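A minimal sketch of the classification stage, assuming scikit-learn and the Zernike feature vectors produced above. The kernels mirror those in Table 1, but the data arrays and the remaining SVM hyper-parameters (e.g., the regularization constant) are placeholders, not values from the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X_train: one 136-dimensional Zernike feature vector per training sequence,
# y_train: gender labels (0 = female, 1 = male); random placeholders here.
X_train = np.random.rand(120, 136)
y_train = np.random.randint(0, 2, 120)
X_test = np.random.rand(40, 136)

kernels = {
    "linear":       SVC(kernel="linear"),
    "gaussian_s10": SVC(kernel="rbf", gamma=1.0 / (2 * 10.0 ** 2)),  # sigma = 10
    "poly_d2":      SVC(kernel="poly", degree=2),
    "poly_d3":      SVC(kernel="poly", degree=3),
}
for name, svm in kernels.items():
    clf = make_pipeline(StandardScaler(), svm)   # scaling is an added assumption
    clf.fit(X_train, y_train)
    predictions = clf.predict(X_test)
```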
4 Conclusions

In this paper, we proposed a new gait representation, called RTMGEI, which captures the dynamic and static characteristics of gait sequences. RTMGEI can be extracted from incomplete sequences and is shown to be more robust to noise than other gait representations, a property that follows from the summation property of the Radon Transform. Zernike moments are used for feature extraction; due to the orthogonality of the Zernike basis functions, the individual coefficients of the feature vector have minimal redundancy. Finally, an SVM is trained on these feature vectors to classify subjects according to their gender. The algorithm is evaluated on the CASIA gait database using different SVM kernels. The results demonstrate the effectiveness of the proposed gait template (RTMGEI) in capturing the dynamic and static properties of individual gaits, and show significantly better performance than the other methods discussed.
References
[1] Stevenage, S.V., Nixon, M.S., Vince, K.: Visual analysis of gait as a cue to identity. Applied Cognitive Psychology 13(6), 513–526 (1999)
[2] Li, X., Maybank, S.J., Yan, S., Tao, D., Xu, D.: Gait components and their application to gender recognition. IEEE Transactions on Systems, Man, and Cybernetics-Part C 38(2) (2008)
[3] Haihong, S., Liqun, M., Qishan, Z.: Gender categorization based on 3D faces. In: International Conference on Advanced Computer Control (ICACC), vol. 5, pp. 617–620 (2010)
[4] Yoo, J.-H., Hwang, D., Nixon, M.S.: Gender classification in human gait using support vector machine. In: Blanc-Talon, J., Philips, W., Popescu, D.C., Scheunders, P. (eds.) ACIVS 2005. LNCS, vol. 3708, pp. 138–145. Springer, Heidelberg (2005)
[5] Maodi, H., Yunhong, W.: A New Approach for Gender Classification Based on Gait Analysis. In: Fifth International Conference on Image and Graphics, pp. 869–874 (2009)
[6] Lee, L., Grimson, W.E.L.: Gait Analysis for Recognition and Classification. In: Fifth IEEE International Conference on Automatic Face and Gesture Recognition (FG), pp. 148–155 (2002)
[7] Sarkar, S., Phillips, P.J., Liu, Z., Vega, I.R., Grother, P., Bowyer, K.W.: The HumanID Gait Challenge problem: data sets, performance and analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 27(2) (February 2005)
[8] Han, J., Bhanu, B.: Individual Recognition Using Gait Energy Image. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(2) (2006)
[9] Xiang-tao, C., Zhi-hui, F., Hui, W., Zhe-qing, L.: Automatic Gait Recognition Using Kernel Principal Component Analysis. In: IEEE Int. Conference on Biomedical Engineering and Computer Science, Wuhan, pp. 1–4 (April 2010)
[10] Ye, B., Peng, J.: Invariance analysis of improved Zernike moments. Journal of Optics A: Pure and Applied Optics 4(6), 606–614 (2002)
[11] Ye, B., Peng, J.: Improvement and invariance analysis of Zernike moments used as a region-based shape descriptor. Journal of Pattern Recognition and Image Analysis 12(4), 419–428 (2002)
[12] Chong, C.W., Raveendran, P., Mukundan, R.: Translation invariants of Zernike moments. Pattern Recognition 36(8), 765–773 (2003)
[13] Maofu, L., Yanxiang, H., Bin, Y.: Image Zernike Moments Shape Feature Evaluation Based on Image Reconstruction. Geo-spatial Information Science 10(3), 191–195 (2007)
[14] Maodi, H., Yunhong, W., Zhaoxiang, Z., Yiding, W.: Combining Spatial and Temporal Information for Gait Based Gender Classification. In: International Conference on Pattern Recognition (ICPR), Istanbul, pp. 3679–3682 (2010)
[15] Huang, G., Wang, Y.: Gender classification based on fusion of multi-view gait sequences. In: Yagi, Y., Kang, S.B., Kweon, I.S., Zha, H. (eds.) ACCV 2007, Part I. LNCS, vol. 4843, pp. 462–471. Springer, Heidelberg (2007)
[16] Yu, S., Tan, T., Huang, K., Jia, K., Wu, X.: A study on gait-based gender classification. IEEE Transactions on Image Processing 18(8), 1905–1910 (2009)
Person Re-identification Using Appearance Classification Kheir-Eddine Aziz, Djamel Merad, and Bernard Fertil LSIS - UMR CNRS 6168, 163, Avenue of Luminy, 13288 Marseille Cedex 9, France
[email protected], {djamal.merad,bernard.fertil}@univmed.fr http://www.lsis.org/image
Abstract. In this paper, we present a person re-identification method based on appearance classification. It consists a human silhouette comparison by characterizing and classification of a persons appearance (the front and the back appearance) using the geometric distance between the detected head of person and the camera. The combination of head detector with an orthogonal iteration algorithm to help head pose estimation and appearance classification is the novelty of our work. In this way, the is achieved robustness against viewpoint, illumination and clothes appearance changes. Our approach uses matching of interest-points descriptors based on fast cross-bin metric. The approach applies to situations where the number of people varies continuously, considering multiple images for each individual. Keywords: Person re-identification, head detection, head pose estimation, appearance classification, matching features, cross-bin metric.
1 Introduction
Person re-identification is a crucial issue in multi-camera tracking scenarios, where cameras with non-overlapping views are employed. Considering a single camera, the tracking captures several instances of the same individual, providing a volume of frames. Re-identification consists in matching different volumes of the same individual coming from different cameras. In the literature, re-identification methods that focus solely on the appearance of the body are dubbed appearance-based methods and can be grouped into two sets. The first group is composed of the single-shot methods, which model a person by analyzing a single image [5][14][15]; they are applied when tracking information is absent. The second group comprises the multiple-shot approaches, which employ multiple images of a person (usually obtained via tracking) to build a signature [2][4][6][11]. In [2], each person is subdivided into a set of horizontal stripes, and the signature is built from the median color value of each stripe accumulated over different frames. A matching between decomposable triangulated graphs, capturing the spatial distribution of local temporal descriptions, is presented in [4]. In [6], a signature composed of a set of SURF interest points collected over short video
sequences is employed. In [11], each person is described by local and global features, which are fed into a multi-class SVM for recognition. Other approaches simplify the problem by adding temporal reasoning about the spatial layout of the monitored environment in order to prune the candidate set to be matched [7], but these cannot be considered purely appearance-based approaches. In this paper, we present a novel multiple-shot person re-identification method based on classifying the appearance of each person into two classes, frontal and back. A pre-processing step is integrated into the single-camera tracking phase to classify the appearance of each person. Then, complementary aspects of the human body appearance are extracted from each class of images: a set of SIFT, SURF and Spin image interest points. For the matching, we use the robust and fast cross-bin metric based on the Earth Mover's Distance (EMD) variant presented in [12]. Our method reduces the confusion among appearances due to illumination and viewpoint changes (see Figure 1). The rest of the paper is organized as follows: Section 2 details our approach, Section 3 reports several results, and Section 4 draws the conclusions.
Fig. 1. Examples of interest-point matching for a given person seen under the same viewpoint (a, c), where matching improves with appearance classification, and under different viewpoints (b)
2 The Proposed Method
The proposed system is illustrated in Figure 2. In the first phase, an appearance class is built for each detected and tracked person. In the second phase, the appearance of each person is classified by computing the head-to-camera distance. In the third phase, features are accumulated from each appearance class. Finally, the signatures are matched using the cross-bin metric.
Fig. 2. The person re-identification system
2.1 The Appearance Classification
The single-camera tracking output usually consists of a sequence of consecutive images of each individual in the scene. The appearance of the person is determined by computing the distance between the detected head and the camera; for head detection, we adopt our method presented in [3] (see Figure 3). If this distance increases, the person is in frontal pose; otherwise, the person is in back pose (see Figure 4-b). We choose to compute the distance between heads and the camera because, in crowded environments, it is easier to detect people's heads than whole bodies, an advantage discussed in [3].
Fig. 3. Example of the head detection in crowded environment [3]
To compute this distance, we consider the head coordinate frame {Xh, Yh, Zh} and a head model whose four 2D corners are called A, B, C and D, as shown in Figure 4-a; the head model is assumed to measure 20 cm x 20 cm in this frame. The second frame is the camera coordinate system {Xc, Yc, Zc}, which is assumed to be calibrated, and the last one is the image coordinate system {U, V}. The points a, b, c, d are the image-plane coordinates of the corresponding corners of the detected head. Computing the distance amounts to finding the rigid transformation (R, T) between the camera frame C and the head frame H (Equation (1)):

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = [R\,|\,T]\begin{bmatrix} X_h \\ Y_h \\ Z_h \\ 1 \end{bmatrix} \qquad (1)$$

We use the Orthogonal Iteration (OI) algorithm proposed by Lu et al. [10]. To estimate the head pose, this algorithm uses an appropriate error function defined in the local reference frame of the head. The error function is rewritten so that each iteration reduces to the classical solution of the 3D pose
Fig. 4. (a) Head pose estimation. (b) Different appearance of one person.
estimation problem, known as the absolute orientation problem. The algorithm gives accurate results and converges quickly. If the component Tz of the translation vector T increases (see Figure 4-a, Algorithm 1), the person is in frontal pose; otherwise, the person is in back pose. So that the comparison between two consecutive values of Tz is meaningful, we do not use every successive frame but instead images sampled every half-second (time-spaced images). In the profile pose, the global appearance of a person does not change much compared to the frontal or back pose; we therefore treat this pose as frontal or back.

Algorithm 1. Distance head-camera for one person
Require: internal parameters of the calibrated camera
Require: heads_list: list of the tracked heads for one person
Require: [Xh, Yh, Zh] = [(0, 20, 0); (20, 20, 0); (20, 0, 0); (0, 0, 0)]: coordinates of the four corners of the head model in the head coordinate frame
Require: [a, b, c, d]: the four corners of the detected head in the image plane
1: for i = 0 to heads_list.size do
2:   [Ri, Ti] <- OI([Xci, Yci, Zci], [Xh, Yh, Zh])
3:   [Ri+1, Ti+1] <- OI([Xci+1, Yci+1, Zci+1], [Xh, Yh, Zh])
4:   if Tzi > Tzi+1 then
5:     Appearance_Person_i <- Frontal pose
6:   else
7:     Appearance_Person_i <- Back pose
8:   end if
9: end for
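The paper relies on the Orthogonal Iteration algorithm of Lu et al. [10]. As a hedged sketch of the same frontal/back decision, the snippet below uses OpenCV's solvePnP (an assumption, not the authors' implementation) to recover the translation of the 20 cm x 20 cm head model and compares consecutive Tz values as in Algorithm 1.

```python
import numpy as np
import cv2

# 3D corners of the head model (in cm), as in Algorithm 1.
HEAD_MODEL = np.array([[0, 20, 0], [20, 20, 0], [20, 0, 0], [0, 0, 0]], dtype=np.float32)

def head_tz(corners_2d, camera_matrix, dist_coeffs=None):
    """Z component of the head-to-camera translation for one detected head.
    corners_2d: 4x2 array with the image coordinates of corners a, b, c, d."""
    ok, rvec, tvec = cv2.solvePnP(HEAD_MODEL,
                                  np.asarray(corners_2d, dtype=np.float32),
                                  camera_matrix, dist_coeffs)
    return float(tvec.ravel()[2]) if ok else None

def classify_appearance(tz_prev, tz_curr):
    """Frontal/back decision from two time-spaced Tz values (Algorithm 1)."""
    return "frontal" if tz_prev > tz_curr else "back"
```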
2.2 Descriptors
In the following we briefly describe the SIFT [9], SURF [1] and Spin image [8] descriptors, which offer scale and rotation invariance. SIFT (Scale Invariant Feature Transform) descriptors are computed for normalized image patches with the code provided by Lowe [9]. A descriptor is a 3D histogram of gradient location and orientation, where location is quantized into
a 4 x 4 location grid and the gradient angle is quantized into eight orientations, so the resulting descriptor has dimension 128. SURF (Speeded Up Robust Features) [1] is a 64-dimensional descriptor that also captures the spatial distribution of gradient information within the interest-point neighborhood; the interest points themselves can be localized by an interest-point detector or placed on a regular grid. The Spin image is a histogram of quantized pixel locations and intensity values [8]: the intensity of a normalized patch is quantized into 10 bins, and a 10-bin normalized histogram is computed for each of five rings centered on the region, so the dimension of the Spin descriptor is 50.
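A hedged sketch of the descriptor-extraction step using OpenCV's SIFT implementation; the authors' exact detectors and sampling grids are not specified here, and SURF and Spin image extraction are omitted (SURF sits in OpenCV's non-free module and Spin images are usually custom code).

```python
import cv2

def sift_descriptors(gray_image):
    """Detect SIFT keypoints and compute their 128-D descriptors on a grayscale image."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray_image, None)
    return keypoints, descriptors

img = cv2.imread("person_crop.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
kps, descs = sift_descriptors(img)
```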
2.3 Feature Matching
In general, we have two sets of pedestrian images: a gallery set A and a probe set B. Re-identification consists in associating each person of B with the corresponding person of A. This association depends on the content of the two sets: 1) each image represents a different individual appearance (frontal or back); 2) both A and B may contain the same individual appearance (frontal or back). For the matching, several measures have been proposed for the dissimilarity between two descriptors; we divide them into two categories. Bin-by-bin dissimilarity measures only compare the contents of corresponding vector bins. Cross-bin measures also contain terms that compare non-corresponding bins; to this end, a cross-bin distance makes use of the ground distance d_ij, defined as the distance between the representative features of bin i and bin j. Predictably, bin-by-bin measures are more sensitive to the position of bin boundaries. The Earth Mover's Distance (EMD) [13] is a cross-bin distance that addresses this alignment problem: it is defined as the minimal cost that must be paid to transform one vector into the other, given a ground distance between the basic features aggregated into the vectors. Pele et al. [12] proposed a linear-time algorithm for the computation of an EMD variant with a robust ground distance for oriented gradients. Given two histograms P and Q, the EMD as defined by Rubner et al. [13] is:
$$\mathrm{EMD}(P,Q) = \min_{\{f_{ij}\}}\frac{\sum_{i,j} f_{ij} d_{ij}}{\sum_{i,j} f_{ij}} \quad \text{s.t.}\quad \sum_{j} f_{ij} \le P_i,\;\; \sum_{i} f_{ij} \le Q_j,\;\; \sum_{i,j} f_{ij} = \min\Big(\sum_i P_i, \sum_j Q_j\Big),\;\; f_{ij} \ge 0, \qquad (2)$$

where the f_ij denote the flows: each f_ij represents the amount transported from the i-th supply to the j-th demand, and d_ij denotes the ground distance between bin i and bin j of the histograms. Pele et al. [12] proposed the variant:

$$\widehat{\mathrm{EMD}}_{\alpha}(P,Q) = \Big(\min_{\{f_{ij}\}}\sum_{i,j} f_{ij} d_{ij}\Big) + \Big|\sum_i P_i - \sum_j Q_j\Big|\,\alpha \max_{i,j}\{d_{ij}\}. \qquad (3)$$
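For illustration, the classic EMD of Eq. (2) between two small histograms can be computed with OpenCV's cv2.EMD; this is Rubner's formulation, not Pele and Werman's fast thresholded variant of Eq. (3), and the 1-D bin coordinates used as ground positions are an assumption for the example.

```python
import numpy as np
import cv2

def rubner_emd(p, q):
    """EMD between two 1-D histograms; each signature row is (weight, bin position)."""
    bins = np.arange(len(p), dtype=np.float32).reshape(-1, 1)
    sig_p = np.hstack([np.asarray(p, np.float32).reshape(-1, 1), bins])
    sig_q = np.hstack([np.asarray(q, np.float32).reshape(-1, 1), bins])
    emd, _, _ = cv2.EMD(sig_p, sig_q, cv2.DIST_L2)
    return emd

print(rubner_emd([0.2, 0.5, 0.3, 0.0], [0.0, 0.3, 0.5, 0.2]))
```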
It is common practice to use the L2 metric for comparing SIFT descriptors. This practice assumes that the SIFT histograms are aligned, so that a bin in one histogram is only compared to the corresponding bin in the other histogram; this is often not the case, due to quantization, distortion, occlusion, etc. The thresholded ground distance used here has three matching costs instead of two: zero cost for exactly corresponding bins, one for neighboring bins, and two for farther bins and for the extra mass, so the metric is robust to small errors and outliers. The EMD variant of Eq. (3) has two advantages over Rubner's EMD for comparing SIFT descriptors. First, the difference in total gradient magnitude between SIFT spatial cells is an important distinctive cue, which Rubner's definition ignores. Second, the variant is a metric even for non-normalized histograms. In our work, we use the same metric to match the SIFT, SURF and Spin image descriptors. Finally, the association of each person Pi with the corresponding person Pj is done by a voting approach: every interest point extracted from the set Pi is compared to all model points of the person Pj, and a vote is added for each model containing a close-enough descriptor. We only match interest points belonging to the same appearance class. Eventually, the identification is made with the model receiving the highest number of votes (see Figure 5). Two descriptors are matched if the first descriptor is the nearest neighbor to the second and the distance ratio between the first and the second nearest neighbor is below a threshold.
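A hedged sketch of the voting-based association described above, assuming descriptor sets grouped by appearance class; for brevity, a Euclidean nearest-neighbor ratio test stands in for the cross-bin metric, and the 0.8 ratio is an assumption.

```python
import numpy as np

def votes_for_model(query_descs, model_descs, ratio=0.8):
    """Count query descriptors whose nearest model descriptor passes the ratio test."""
    votes = 0
    for q in query_descs:
        d = np.linalg.norm(model_descs - q, axis=1)
        first, second = np.partition(d, 1)[:2]
        if first < ratio * second:
            votes += 1
    return votes

def reidentify(query_by_class, models_by_class):
    """query_by_class / models_by_class: {'frontal': descs, 'back': descs} per person.
    Only descriptors of the same appearance class are compared; the model id with
    the highest total number of votes is returned."""
    scores = {}
    for model_id, model in models_by_class.items():
        scores[model_id] = sum(
            votes_for_model(query_by_class[c], model[c])
            for c in ("frontal", "back") if c in query_by_class and c in model)
    return max(scores, key=scores.get)
```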
Fig. 5. Person re-identification process
Fig. 6. Typical views of the persons used for the test. Line 1, line 2 and line 3 are sets of persons viewed respectively by camera 1, camera 2 and camera 3
Fig. 7. The number of matched interest points with and without appearance classification (with SURF descriptor)
3 Experimental Results
To our knowledge, there is no available benchmark for the evaluation of multi-shot, appearance-based person re-identification with non-overlapping cameras. We therefore conducted a first experimental evaluation of our method on an available series of videos showing persons recorded with three non-overlapping cameras. These videos include images of the same 9 persons seen by the three cameras under different viewpoints¹ (see Figure 6). The re-identification performance is evaluated with the precision-recall metric:
correct matches correct matches+f alse matches ,
Recall =
correct matches queries number
(4)
To evaluate the overall performance of the proposed method, we select the set of persons detected by camera 1 as test data (queries) and the set of persons detected by the other cameras as reference data (models). The model for
http://kheir-eddine.aziz.perso.esil.univmed.fr/demo
Fig. 8. (a) Precision-recall for our person re-identification method. (b) The precision vs recall curves for person re-identification according to method proposed in [6].
Fig. 9. Confusion matrices for the person re-identification experiment with different descriptors: (a) SURF, (b) SIFT, (c) Spin image
each person was built with 8 images (4 for the frontal appearance and 4 for the back appearance, collected during tracking). Likewise, each test query was built with 8 images (see Figure 5). Figure 7 reports the number of matched interest points: it is larger when appearance classification is used, which indicates that classifying appearances greatly reduces the effect of illumination and appearance changes. The resulting performance, computed on a total of 100 queries from camera 1, is illustrated in Figure 8-a, in which we vary the minimum number of matched points (15, 25, 35, 45, 55, 65) between query and model required to validate a re-identification. The precision is higher for all three descriptors (SIFT, SURF, Spin image) compared to the approach proposed by Hamdoun et al. in [6] (see Figure 8-b); consequently, there are fewer false matches, which are mainly due to similarity in appearance. For example, the legs of person 8 are similar to those of persons 5, 7 and 9 (see Figure 9), and part of the torso of person 1 is similar to the legs of persons P7 and P8. This problem could be avoided by using a 2D articulated body model.
4 Conclusions
In this paper, we proposed a new person re-identification method based on appearance classification. It consists in classifying each person into two appearance classes by computing the geometric distance between the head and the camera. Based on head detection using a skeleton graph, the estimation of the distance between a person and a camera is easy even in crowded environments. Employing this classification, we obtained the best performance for person re-identification among the compared methods. As future work, we plan to extend our method with a 2D articulated body model to reduce confusion cases. Though at a preliminary stage, we are releasing the person database and our code.
References 1. Bay, H., Ess, A., Tuytelaars, T., Gool, L.V.: Speeded-up robust features (surf). In: Computer Vision and Image Understanding, vol. 110, pp. 346–359. Elsevier Science Inc., New York (2008) 2. Bird, N.D., Masoud, O.T., Papanikolopoulos, N.P., Isaacs, A.: Detection of loitering individuals in public transportation areas. IEEE Transactions on Intelligent Transportation Systems 6, 167–177 (2005) 3. Merad, D., Aziz, K.E., Thome, N.: Fast people counting using head detection from skeleton graph. In: IEEE Conference, Advanced Video and Signal Based Surveillance, Los Alamitos, USA, pp. 151–156 (2010) 4. Gheissari, N., Thomas, B.S., Hartley, R.: Person reidentification using spatiotemporal appearance. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, vol. 2, pp. 1528–1535 (2006) 5. Gray, D., Tao, H.: Viewpoint invariant pedestrian recognition with an ensemble of localized features. In: Forsyth, D., Torr, P., Zisserman, A. (eds.) ECCV 2008, Part I. LNCS, vol. 5302, pp. 262–275. Springer, Heidelberg (2008)
6. Hamdoun, O., Moutarde, F., Stanciulescu, B., Steux, B.: Person re-identification in multi-camera system by signature based on interest point descriptors collected on short video sequences. In: The third ACM/IEEE International Conference on Distributed Smart Cameras, pp. 1–6 (2008) 7. Javed, O., Shafique, K., Rasheed, Z., Shah, M.: Modeling inter-camera space-time and appearance relationships for tracking across nonoverlapping views. In: Comput. Vis. Image Underst., New York, USA, vol. 109, pp. 146–162 (2008) 8. Johnson, A.E., Hebert, M.: Using spin images for efficient object recognition in cluttered 3d scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence 21, 433–449 (1999) 9. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60(2), 91–110 (2004) 10. Lu, C.P., Hager, G.D., Mjolsness, E.: Fast and globally convergent pose estimation from video images. IEEE Transactions on Pattern Analysis and Machine Intelligence 22(6), 610–622 (2000) 11. Nakajima, C., Pontil, M., Heisele, B., Poggio, T.: Full-body person recognition system. Pattern Recognition 36(9), 1997–2006 (2003) 12. Pele, O., Werman, M.: A linear time histogram metric for improved SIFT matching. In: Forsyth, D., Torr, P., Zisserman, A. (eds.) ECCV 2008, Part III. LNCS, vol. 5304, pp. 495–508. Springer, Heidelberg (2008) 13. Rubner, Y., Tomasi, C., Guibas, L.G.: The earth movers distance as a metric for image retrieval. Int. International Journal of Computer Vision 40, 99–121 (2000) 14. Schwartz, W.R., Davis, L.S.: Learning discriminative appearancebased models using partial least squares. In: Computer Graphics and Image Processing, Brazilian Symposium, Los Alamitos, USA, pp. 322–329 (2009) 15. Zheng, W., Davis, L., Xiang, T.: Associating groups of people. In: British Machine Vision Conference, pp. 1–6 (2009)
A Method for Robust Multispectral Face Recognition Francesco Nicolo and Natalia A. Schmid West Virginia University, Department of CSEE Morgantown, WV, USA 26506-6109
[email protected],
[email protected]
Abstract. Matching Short Wave InfraRed (SWIR) face images against a face gallery of color images is a very challenging task: the photometric properties of images in these two spectral bands are highly distinct. This work presents a new cross-spectral face recognition method that encodes both magnitude and phase of the responses of a classic bank of Gabor filters applied to multi-spectral face images. Three local operators are involved: the Simplified Weber Local Descriptor, the Local Binary Pattern, and the Generalized Local Binary Pattern. The comparison of encoded face images is performed using the symmetric Kullback-Leibler divergence. We show that the proposed method provides high recognition rates at different spectra (visible, Near InfraRed and SWIR), and that in terms of recognition rates it outperforms Faceit G8, a commercial software distributed by L1. Keywords: Face recognition, SWIR, Gabor wavelets, Simplified Weber Local Descriptor, Local Binary Pattern, Generalized Local Binary Pattern, Kullback-Leibler divergence.
1 Introduction
Face recognition has been an active research topic since the 1990s. Different spectral bands of the electromagnetic spectrum, such as visible, Near InfraRed (NIR) (750 nm − 1100 nm) and thermal InfraRed (7 − 14 μm), have been used to collect images and videos of people for testing various face recognition approaches. In the majority of cases, these approaches are designed to perform face recognition within one specific spectral band. However, this scenario often does not support modern face recognition applications. Surveillance cameras, for example, often operate in both visible and NIR bands and switch between the bands depending on night or day conditions. Since galleries and watch lists are traditionally composed of visible-light face images, newly designed face recognition methods are expected to be successful in matching NIR data against color images. Apart from this case of cross-spectral comparison, attention has recently turned to the previously unexplored Short Wavelength IR spectrum (1100 nm − 1700 nm). The SWIR band has several advantages over the NIR spectrum: SWIR imaging produces clear images in the presence of challenging atmospheric conditions such as rain, fog and mist. It also works very
well under low-light/blackout conditions [12]. A SWIR beam of light is completely invisible to the human eye, which makes the modality suitable for covert long-range (up to 800 m) applications [13]. The literature contains a number of works that perform face comparisons within the same spectral band, NIR or IR; however, face recognition systems that perform a cross-spectral comparison between visible and SWIR face images have not been developed and tested. Below we review a few existing publications on the topic. Kong et al. [2] fuse NIR and thermal face images in the Discrete Wavelet Transform domain, employing images from the NIST/Equinox and UTK-IRIS databases. They show that, when the fused images are fed to the Faceit recognition software, the resulting matching performance improves with respect to the case where the same face classes are compared within a single spectral band, NIR or thermal IR. The work of Klare and Jain [1] employs a method based on Local Binary Patterns (LBP) and Histogram of Oriented Gradients (HOG) features, followed by Linear Discriminant Analysis (LDA) to reduce the dimensionality of the feature vectors. This encoding strategy is applied to NIR and color images for their cross matching, and the results are shown to outperform Cognitec's FaceVACS. Chen et al. [3] study face recognition in the thermal IR and visible spectral bands using PCA and Faceit G5: the performance of PCA in the visible band is higher than in the thermal IR band, and fusing these data at the matching-score level results in performance similar to that of the algorithm in the visible band. Li et al. [4] performed a cross-spectral comparison of visible-light and NIR face images; their matcher is based on the LBP operator, applied to NIR data to achieve illumination invariance. In their recent works, Akhloufi and Bendada [5][6] experimented with images from the Equinox database (which includes visible, SWIR, MWIR and LWIR images) and the Laval University Multispectral database (visible, NIR, MWIR, LWIR data). The first work [5] evaluates recognition performance within each spectral band using a set of known face-matching techniques. The second work [6] (performed on the same data) adopts the classic Local Ternary Pattern (LTP) and a new Local Adaptive Ternary Pattern (LATP) operator to cross-match images; the authors employ multiresolution analysis in the "texture space" to fuse images from different spectral bands and report that such fusion can lead to improved recognition rates (93%-99%). In this paper, we show that cross-spectral comparison of visible (color) face images against images from the SWIR band is a very challenging task, producing verification rates lower than those obtained when color face images are matched against NIR images: the photometric properties of images in the visible and SWIR bands are highly distinct. As a solution, we introduce a new method for cross-spectral face recognition. We adopt a Gabor filter-based approach at the initial encoding stage, followed by a novel encoding scheme that involves three operators to extract robust features across different spectral bands. These three operators are
designed to encode both magnitude and phase of the filtered images, resulting in a comprehensive encoding scheme. The obtained verification results are compared against the results obtained with Faceit G8, which is often viewed as a baseline and a state-of-the-art algorithm for face recognition. In addition, we provide the results of performance evaluation within the visible and NIR spectral bands as a basis for comparison. The remainder of the paper is organized as follows. Sec. 2 introduces the face dataset used in this work. Sec. 3 presents a summary of the encoding and matching methods. Experimental results are shown in Sec. 4. Sec. 5 summarizes the contributions.
2 Multispectral Dataset

2.1 Multispectral Dataset
In our experiments we use a collection of images acquired with the TINDERS (Tactical Imager for Night/Day Extended-Range Surveillance) system developed by the Advanced Technologies Group, WVHTC Foundation [13]. The multispectral dataset is composed of 48 face classes (a total of 576 images) at three wavelengths: 980 nm (NIR spectrum), 1550 nm (SWIR spectrum) and the visible spectrum. Four images per class are available for each spectral band: two images have neutral expression and two depict the person talking (open mouth). Sample images from this dataset are shown in Fig. 1. Note the difference in intensity distributions in these images: the human skin and eyes appear very dark in the SWIR spectrum because of the presence of moisture, whereas the hair appears white because it is highly reflective at those wavelengths.
Fig. 1. Images from [13] at visible spectral band (left), 980 nm (center), 1550 nm (right)
3 Encoding and Matching Method

3.1 Face Normalization
We use position of the eyes and nose to normalize the face image to a canonical representation resulting in an image of size 128 × 128 pixels. A similarity transformation S (rotation, scaling and translation) is applied to each image such that the eyes and nose tip are projected into fixed positions. The locations of these landmarks are manually selected. The precise positions of the landmarks, however, are not critical for normalization in our approach; and we can easily
A Method for Robust Multispectral Face Recognition
183
accommodate a displacement of a few pixels. The positions of the landmarks can also be determined automatically, for example with a Haar-based detector [7] trained on multi-spectral images. In this work, we focus on the matcher design rather than on the face detector.
3.2 Preprocessing
Color images are transformed into gray-scale images Ig by using a simple linear combination of the original RGB channels. Our experiments have shown that the outcome of this linear combination is more robust than many other approaches we attempted when matching color images against infrared images encoded using Gabor filters. NIR images (980 nm) are not preprocessed. A SWIR image X (1550 nm) is preprocessed using the following non-linear transformation: Ip = log(1 + αX), where α is a parameter estimated using a single visible image and a single SWIR image. This transformation stretches the histogram of the SWIR image such that the pixel distribution (profile) resembles that of the transformed color image Ig. Note that the gray-value trend of the image pixels is preserved since the logarithm function is monotonic. We empirically found that a good value of α is 0.2 for the TINDERS dataset.
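A hedged sketch of this preprocessing step: grayscale conversion of color images and logarithmic stretching of SWIR images with α = 0.2. The RGB weighting shown is a standard luminance combination and is an assumption, since the paper only states that a simple linear combination is used.

```python
import numpy as np

ALPHA = 0.2  # value reported for the TINDERS dataset

def color_to_gray(rgb):
    """Linear combination of RGB channels (standard luminance weights assumed)."""
    rgb = np.asarray(rgb, dtype=float)
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def preprocess_swir(x, alpha=ALPHA):
    """Non-linear stretching of a 1550 nm image: Ip = log(1 + alpha * X)."""
    return np.log1p(alpha * np.asarray(x, dtype=float))
```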
3.3 Filtering
A bank of Gabor filters is applied to the cropped face images. In particular, we use a set of filters at 4 scales and 8 orientations, resulting in a total of 32 filter responses. The filter is defined as:

$$G(z,\theta,s) = \frac{\|k(\theta,s)\|^{2}}{\sigma^{2}}\exp\!\left(-\frac{\|k(\theta,s)\|^{2}\|z\|^{2}}{2\sigma^{2}}\right)\left[e^{\,i\,k(\theta,s)\cdot z} - e^{-\sigma^{2}/2}\right], \qquad (1)$$

where σ² is the variance of the Gaussian kernel and k(θ, s) is the wave vector, whose magnitude and phase determine the scale and orientation of the oscillatory term; z = (x, y). A normalized and preprocessed face image I(z) is convolved with the Gabor filter G(z, θ, s) at orientation θ and scale s, resulting in the filtered image Y(z, θ, s) = I(z) * G(z, θ, s), with "*" denoting convolution.
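A hedged sketch of the Gabor filter bank, implemented with OpenCV's getGaborKernel rather than the exact complex kernel of Eq. (1); the kernel size, sigma and wavelength values are placeholders, since the paper does not list its numeric filter parameters.

```python
import numpy as np
import cv2

def gabor_bank_responses(img, wavelengths=(4, 8, 16, 32), n_orient=8, ksize=31, sigma=4.0):
    """Magnitude and phase responses of a 4-scale, 8-orientation Gabor bank.
    Real and imaginary parts come from two phase-shifted real kernels."""
    img = np.float32(img)
    responses = []
    for lambd in wavelengths:                  # wavelengths standing in for scales
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            k_re = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5, 0)
            k_im = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5, np.pi / 2)
            re = cv2.filter2D(img, cv2.CV_32F, k_re)
            im = cv2.filter2D(img, cv2.CV_32F, k_im)
            responses.append((np.sqrt(re ** 2 + im ** 2), np.arctan2(im, re)))
    return responses   # 32 (magnitude, phase) pairs
```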
3.4 Encoding of Magnitude and Phase Response
In this work, we encode both magnitude and phase of Gabor responses and demonstrate that both of them are important in a multispectral comparison. The magnitude and phase are encoded separately, and different methods to encode them are applied. The encoded phase and magnitude are later combined at the feature level resulting in a robust representation for each face class. To encode the magnitude we use two distinct operators: Weber Local Descriptor (WLD) and Local Binary Pattern (LBP) operators. For encoding the
phase we adopt a uniform generalized LBP operator. The WLD is an operator recently developed by Chen et al. [10]. It was introduced to characterize textures and has so far been applied only to raw images; applying it to processed or filtered images is one of the novelties claimed in our work. We use a simplified version of the WLD operator (that employs only the differential-excitation part [10][11]) to encode the magnitude filter response, resulting in a very robust representation of face features. Later we show that this operator, when applied to the magnitude Gabor response, outperforms the other two operators in terms of recognition performance. The simplified WLD operator is defined as:

$$\mathrm{SWLD}_{l,r,12}(x) = Q_l\!\left(\tan^{-1}\!\left(\sum_{i=0}^{11}\frac{x_i - x}{x}\right)\right), \qquad (2)$$
where x_i are the neighbors of x at radius r = 1, 2 and Q_l is a uniform quantizer with l quantization levels. In the following experiments we adopt l = 135 levels to discretize the output of the tan⁻¹ function. A uniform LBP operator is the other operator applied to the magnitude response |Y(z, θ, s)|. The main difference with respect to earlier applications (see, for example, [9]) is that we consider the relationship among 12 neighbors at radii of one and also two pixels:

$$\mathrm{LBP}^{u}_{r,12}(x) = U\!\left(\sum_{i=0}^{11} I\{x_i - x\}\,2^{i}\right), \qquad (3)$$
where U is the uniform pattern mapping, x_i are the neighbors of a value x at radius r, and I(·) is the indicator function of the event in {·}. A binary pattern is defined as uniform if it contains at most two bitwise transitions from 0 to 1 or from 1 to 0 when the bit sequence is read circularly. For example, the sequence 011111111000 is a 12-bit uniform pattern, whereas the sequence 010001011111 is not. Denote by U(b) the decision made about the pattern b; then U(b) is defined as

$$U(b) = \begin{cases} b, & \text{if } b \text{ is a uniform sequence} \\ (N+1)_B, & \text{otherwise,} \end{cases}$$
where N is the total number of uniform patterns formed using n bits. In our formulation, we work with n = 12-bit sequences, which results in N = 134 uniform patterns. Note that a binary sequence which is not uniform is replaced with the value (135)_B, where B stands for "expressed in binary base." The SWLD and LBP operators are complementary in terms of the type of information they encode [11]: the SWLD operator detects edges and records their intensity values, while LBP detects the orientation of edges but does not encode intensity values. It is therefore expected that the information encoded by these operators combines well [11].
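A hedged sketch of the simplified WLD (differential excitation) and 12-neighbor LBP encodings of Eqs. (2)-(3), written directly in NumPy. The circular 12-neighbor sampling uses nearest-pixel offsets and a simple uniform quantizer of the arctangent range; both are assumptions, since the paper does not specify its interpolation or quantization details, and the uniform-pattern mapping U is omitted.

```python
import numpy as np

def neighbor_offsets(radius, n=12):
    """Integer offsets of n circular neighbors at the given radius (nearest pixel)."""
    ang = 2 * np.pi * np.arange(n) / n
    return [(int(round(radius * np.sin(a))), int(round(radius * np.cos(a)))) for a in ang]

def swld(mag, radius=1, levels=135):
    """Simplified WLD (Eq. 2): quantized arctan of the summed relative differences."""
    eps = 1e-6
    acc = np.zeros_like(mag, dtype=float)
    for dy, dx in neighbor_offsets(radius):
        acc += (np.roll(mag, (dy, dx), axis=(0, 1)) - mag) / (mag + eps)
    exc = np.arctan(acc)                               # in (-pi/2, pi/2)
    return np.floor((exc + np.pi / 2) / np.pi * (levels - 1)).astype(int)

def lbp12(mag, radius=1):
    """Raw 12-bit LBP code (Eq. 3) before the uniform-pattern mapping U."""
    code = np.zeros(mag.shape, dtype=int)
    for i, (dy, dx) in enumerate(neighbor_offsets(radius)):
        code += (np.roll(mag, (dy, dx), axis=(0, 1)) >= mag).astype(int) << i
    return code
```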
To encode the Gabor phase response (assumed to be defined on the interval [0, 2π]) we adopt a uniform generalized binary operator defined as:

$$\mathrm{GLBP}^{u}_{r,12,t}(x) = U\!\left(\sum_{i=0}^{11} T_t\{x_i - x\}\,2^{i}\right), \qquad (4)$$
where T_t is a thresholding operator based on the threshold t, defined as

$$T_t(u) = \begin{cases} 1, & |u| \le t \\ 0, & \text{otherwise.} \end{cases}$$

The GLBP operator defined here is a generalization of the encoding method introduced in [8]. The main differences are as follows: the operator encodes only uniform binary sequences and, similarly to the two other encoding methods (SWLD, LBP), it considers the relationship among 12 neighbors at both radii r = 1, 2. The values of the threshold were evaluated experimentally; both t = π/2 and t = π were found to be good values when the operator is applied at radii r = 1, 2. In the following we set t = π/2.
3.5 Local Features and Distance Measure
Fig. 2 summarizes the details of the proposed feature extraction method.
Fig. 2. The block diagram of encoding and matching scheme
The preprocessed and normalized images are encoded with a bank of Gabor filters. For each Gabor response we evaluate and store separately the magnitude and phase responses of the individual filters. The magnitude response is further encoded using the SWLD and LBP operators, while the phase response is encoded with the GLBP operator. Each encoded response is divided into non-overlapping blocks of size 8 x 8, resulting in 256 blocks. Each block is represented by a histogram containing 135 bins, and the histograms are concatenated to form a single vector. The three vectors (one for each applied operator) are further concatenated to form a longer feature vector, as shown in Fig. 2. The encoding process is repeated for each filter response of the Gabor bank, and the feature vectors are stored in a matrix H. Note that each of the three operators can also be treated as an independent encoder. To compare feature vectors extracted from two images of two different (or the same) spectral bands we adopt a symmetric Kullback-Leibler distance. Consider
two images A and B. Denote by H_A and H_B the matrices of features extracted from images A and B, respectively. The symmetric Kullback-Leibler distance between these two matrices is defined as

$$d(A,B) = \sum_{k=1}^{K} H_A(k)\,\log\frac{H_A(k)}{H_B(k)} + \sum_{k=1}^{K} H_B(k)\,\log\frac{H_B(k)}{H_A(k)}, \qquad (5)$$
where K is the total length of the feature vectors HA (k) (or HB (k)) obtained by concatenating all rows of the matrix HA (or HB ).
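A minimal sketch of the symmetric Kullback-Leibler comparison of Eq. (5), assuming the two feature matrices have already been flattened into non-negative vectors; the small epsilon that guards against empty histogram bins is an implementation assumption.

```python
import numpy as np

def symmetric_kl(h_a, h_b, eps=1e-12):
    """Symmetric Kullback-Leibler distance (Eq. 5) between two feature vectors
    obtained by concatenating all rows of the matrices H_A and H_B."""
    a = np.asarray(h_a, dtype=float).ravel() + eps
    b = np.asarray(h_b, dtype=float).ravel() + eps
    return float(np.sum(a * np.log(a / b)) + np.sum(b * np.log(b / a)))
```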
4 Experimental Results
In this section we describe three different experiments and present the experimental results obtained using the TINDERS dataset.
4.1 Single Spectral Band
Our first experiment involves matching of images within the same spectral band, that is, (1) visible versus visible, (2) 980 nm versus 980 nm, and (3) 1550 nm versus 1550 nm.
Fig. 3. The left panel shows ROC curves when images are matched within the same spectral band using only SWLD feature histograms. The right panel displays ROC curves generated by Faceit G8 when images from the same spectral band are matched.
Each spectral band in the TINDERS dataset is represented by 192 frontal face images (48 classes, 4 images per class). Images are Gabor filtered and the magnitude of the responses is encoded using the SWLD operator. Fig. 3(a) displays the Receiver Operating Characteristic (ROC) curves obtained with the SWLD matcher. Fig. 3(b) depicts the ROCs obtained using Faceit G8. Note that both the SWLD and Faceit G8 matchers provide perfect recognition performance when applied to data from the visible spectral band and the 980 nm
band. When face matching is performed within the SWIR band (1550 nm), Faceit G8 provides a recognition rate of 98% (at False Accept Rates, FAR, ranging from 10⁻² to 10⁻⁴), whereas the proposed SWLD matcher provides a 100% recognition rate over the entire range of FARs. These results clearly indicate that the SWLD matcher alone is a very robust and effective scheme when comparisons are made within the same spectral band.
4.2 Color vs 980 nm
In this experiment we match color face images against images captured at 980 nm, following the complete matching scheme depicted in Fig. 2. The results of the comparisons are shown in Fig. 4. In this case the Faceit G8 software outperforms the proposed matcher (SWLD+LBP+GLBP). However, the performance of our matcher is comparable to that of Faceit G8 in the range of FAR from 10⁻² to 1, and the values of the Equal Error Rates (EERs) are very similar too. The index of separability, d-prime, is higher for our scheme than for the commercial software.
Fig. 4. ROC curves for comparison of color images versus images at 980nm
4.3 Color vs 1550 nm
In this experiment we compare color face images against images captured at 1550 nm. As indicated earlier, images in the 1550 nm band are nonlinearly transformed and normalized following the equation in Section 3.2 prior to being encoded, and color images are converted to gray scale. Table 1 and Fig. 5(a) show that the SWLD matcher outperforms the matchers based on the other two operators, LBP and GLBP. The SWLD alone, however, does not provide performance comparable to that of Faceit G8. For this reason we employ both the SWLD and LBP operators to encode the magnitude of the Gabor response and complement them with the GLBP operator, which encodes the phase response; all of them provide complementary information about individual face classes. Table 1 summarizes the values of single-point performance measures, such as the EER and d-prime, obtained with the single and combined operators applied to the Gabor responses. The corresponding ROC curves for the individual encoding methods and their various combinations are depicted in Fig. 5(a). The overall best performance is
Table 1. EERs and d-prime values for the case when color images are matched against images from the 1550 nm spectral band

Color vs 1550   | EER    | d-prime
GLBP (PH)       | 0.0867 | 2.4775
LBP (MAG)       | 0.0593 | 3.0454
SWLD (MAG)      | 0.0429 | 3.0608
FACEIT G8       | 0.0769 | 1.4330
LBP+GLBP        | 0.0318 | 3.2934
SWLD+LBP+GLBP   | 0.0284 | 3.3546

Fig. 5. The left panel displays ROC curves for comparison of color images versus images at 1550 nm using different encoding methods. The right panel compares the performance of our method (SWLD+LBP+GLBP) to the performance of Faceit G8.
obtained when all three local operators are involved to encode both the magnitude and phase of the Gabor filters. The results confirm that these three operators complement each other well, providing a robust representation of the face classes. In Fig. 5(b) we compare the verification performance (ROC) of Faceit G8 with the one achieved by our approach. Finally, in Table 2, we compare the identification performance (rank-1 recognition) of our matcher and Faceit G8.

Table 2. Identification (rank-1 recognition) rate for the cross comparison of color images versus images from the 1550 nm spectral band

Color vs 1550   | Rank-1 %
SWLD+LBP+GLBP   | 96.74 %
FACEIT G8       | 86.07 %
Tables 1 and 2 and Fig. 5(b) demonstrate that our approach achieves better verification and identification performance than Faceit G8 when images from the visible band are compared against images from the 1550 nm spectral band. The overall gain in verification performance of our matcher with respect to Faceit G8 ranges from about 10% to 7% at FAR values lower than 10⁻¹, and the improvement in identification performance reaches almost 11%.
5 Conclusions
In this work we introduced a robust method to match visible face images against images from the SWIR spectrum. The SWLD operator was applied to encode the magnitude of the Gabor filter responses, which is claimed here as a new application of this operator. It was employed in the Gabor domain jointly with a uniform 12-bit LBP to encode the magnitude of the Gabor filter responses, and was further complemented with a uniform GLBP operator to encode the phase response. The obtained features are used to match face images within and across three spectral bands (visible, NIR, SWIR). The results of the experiments show that our approach is superior to Faceit G8 when visible face images are matched against face images from the SWIR spectrum (1550 nm).
References 1. Klare, B., Jain, A.K.: Heterogeneous Face Recognition: Matching NIR to Visible Light Images. In: 20th International Conference on Pattern Recognition, pp. 1513–1516 (August 2010) 2. Kong, S.G., Heo, J., Boughorbel, F., Zheng, Y., Abidi, B.R., Koschan, A., Yi, M., Abidi, M.A.: Multiscale Fusion of Visible and Thermal IR Images for IlluminationInvariant Face Recognition. International Journal of Computer Vision 72(2), 215–233 (2007) 3. Chen, X., Flynn, P.J., Bowyer, K.W.: IR and visible light face recognition. Computer Vision and Image Understanding 3, 332–358 (2005) 4. Li, S.Z., Chu, R., Liao, S., Zhang, L.: Illumination Invariant Face Recognition Using Near-Infrared Images. IEEE Transactions on Pattern Analysis and Machine Intelligence 4, 627–639 (2007) 5. Akhloufi, M., Bendada, A.: Multispectral Infrared Face Recognition: a comparative study. In: 10th International Conference on Quantitative InfraRed Thermography, vol. 3 (July 2010) 6. Akhloufi, M., Bendada, A.: A new fusion framework for multispectral face recognition in the texture space. In: 10th International Conference on Quantitative InfraRed Thermography, vol. 2 (July 2010) 7. Viola, P., Jones, M.: Rapid Object Detection using a. Boosted Cascade of Simple Features. In: Proc. of IEEE CVPR, pp. 511–518 (December 2001) 8. Guo, Y., Xu, Z.: Local Gabor phase difference pattern for face recognition. In: 19th International Conference on Pattern Recognition, pp. 1–4 (December 2008) 9. Zhang, W., Shan, S., Gao, W., Chen, X., Zhang, H.: Local Gabor Binary Pattern Histogram Sequence (LGBPHS): A Novel Non-Statistical Model for Face Representation and Recognition. In: Tenth IEEE International Conference on Computer Vision, vol. 1, pp. 786–791 (2005) 10. Chen, J., Shan, S., He, C., Zhao, G., Pietik¨ ainen, M., Chen, X., Gao, W.: WLD: a robust local image descriptor. IEEE Transactions on Pattern Analysis and Machine Intelligence 32(9), 1705–1720 (2009) 11. Chen, J., Zhao, G., Pietik¨ ainen, M.: An improved local descriptor and threshold learning for unsupervised dynamic texture segmentation. In: 12th International Conference on Computer Vision Workshops, pp. 460–467 (October 2009)
12. GoodRich, Surveillance Using SWIR Night Vision Cameras, on line, http://www.sensorsinc.com/facilitysecurity.html (accessed on March 05, 2011) 13. WVHTCF, Tactical Imager for Night/Day Extended-Range Surveillance, on line, http://www.wvhtf.org/departments/advanced_tech/projects/tinders.asp (accessed on March 05, 2011)
Robust Face Recognition after Plastic Surgery Using Local Region Analysis Maria De Marsico1, Michele Nappi2, Daniel Riccio2, and Harry Wechsler3 1
Sapienza Università di Roma, via Salaria 113, 00198, Roma, Italy Università di Salerno, via Ponte don Melillo, 84084, Fisciano, Italy 3 George Mason University, Fairfax, VA 22030-4444, Virginia, USA
[email protected],
[email protected],
[email protected],
[email protected] 2
Abstract. Face recognition in real-world applications is often hindered by uncontrolled settings including pose, expression, and illumination changes, and/or ageing. Additional challenges related to changes in facial appearance due to plastic surgery have become apparent recently. We exploit the fact that plastic surgery bears on appearance in a non-uniform fashion using a recognition approach that integrates information derived from local region analysis. We implemented and evaluated the performance of two new integrative methods, FARO and FACE, which are based on fractals and a localized version of a correlation index, respectively. Experimental results confirm the expectation that face recognition is indeed challenged by the effects of plastic surgery. The same experimental results also show that both FARO and FACE compare favourably against standard face recognition methods such as PCA and LDA. Keywords: face recognition, plastic surgery, local features, regions of interest (ROI).
1 Introduction

Many organizations aim at improving security by deploying biometric systems based on body or behavioural features. Such systems capture and process raw appearance data in order to store relevant biometric information into signatures/templates, which are stored and used for later retrieval and authentication. Such technologies and methods are general enough to be integrated into diverse applications requiring security or access control. The most common and popular biometrics are perhaps fingerprints, but many other human characteristics have been considered, including face, iris, hand geometry, voice, and signature. The use of biometrics, however, has some drawbacks. Iris recognition is extremely accurate, but expensive to implement and not user friendly. The latter is due to the need to position the capture device at a small distance from the subject in order to cope with resolution problems. Fingerprints are reliable and non-intrusive, but they are among the most computationally expensive biometric
methods when identification (1:N matching) is required instead of verification (1:1 matching). Face recognition seems to be a good compromise between reliability and social acceptance, on one side, and security and privacy, on the other side. Face recognition provides a lower security level for uncontrolled settings, but is able to accommodate large public sites with subjects unaware of being under surveillance. Significant progress on the use of face recognition has been made recently, with many systems capable of achieving recognition rates greater than 90%, but under controlled conditions. Real-world mass screening and surveillance scenarios including uncontrolled settings, however, remain a challenge due to the wide range of variations faces can be subject to. Five key factors can significantly affect the performance of a face recognition system: illumination, pose, expression, occlusion, and ageing [1]. Another factor not much considered, that of plastic surgery [6], is the topic of interest for this paper. Availability of advanced technologies, at ever decreasing costs, makes facial plastic surgery increasingly affordable and thus widespread. It can be used for cosmetic purposes to improve the facial appearance, e.g., correcting an excessive nose, or to correct and/or reconstruct from disfiguring defects. In both cases, plastic surgery often requires only a localized modification. The extent to which the subject remains recognizable depends on the number, regions, and span of modification procedures undergone. As plastic surgery can hinder recognition, it can also be misused to deceitfully conceal personal identity. An extensive use of plastic surgery makes it almost impossible to recognize a person. Exceptions are "light" cases of dermoabrasion that only affect skin texture, and therefore may hinder only methods that rely heavily on texture, and "light" lifting. Some local plastic surgery procedures achieve effects similar, though irreversible, to pose or expression variations, or can induce "reverse" ageing. A crucial difference with respect to uncontrolled settings is that, while pose, expression and illumination can be "corrected", possibly by requiring the user to repeat the enrolment procedure, plastic surgery is more like ageing, where a repeated data capture operation cannot be expected to enhance biometric performance. Singh et al. [6, 7] report on an ad-hoc database to test the effects of plastic surgery on face recognition, and compare the results obtained by competitive face recognition algorithms. As all the algorithms tested provided very poor performance, Singh et al. [7] conclude that face recognition is not yet mature enough to handle the effects of plastic surgery. On the other hand, our own empirical analysis of the contributions made by different face regions to the face recognition process suggests the use of localized face recognition approaches. Towards that end, we evaluated face recognition performance subject to specific uncontrolled settings, namely those encountered after plastic surgery. The two face recognition systems that we had implemented for such settings, and that are characteristic of local recognition methods, are: FARO (a fractal technique based on PIFS) [2], and FACE [3], which exploits a localized version of the correlation index used as the similarity measure. The experiments were performed using the same plastic surgery database described and used by Singh et al. [7].
It is worth noting that the primary goal of the research reported here is to evaluate an effective way to handle variations due to plastic
surgery. The reader should be aware, however, that such variations are confounded by other variations, e.g., illumination and pose, which are dealt with only to the extent that they facilitate robust recognition subject to plastic surgery. In other words, the use of methods that are robust with respect to illumination, pose and other variations allows us to better assess the net effect of plastic surgery in face recognition.
2 Plastic Surgery

Plastic surgery regards the correction or restoration of appearance and/or functionality of visible parts of the human body. It includes both aesthetic/cosmetic surgery and reconstructive surgery, e.g., the treatment of burns. It also includes microsurgery, which makes it possible to reconnect ever smaller blood vessels and nerves, and therefore to transfer tissue from one part of the body to another. This paper is only concerned with plastic surgery related to the face, with facial features being modified/reconstructed either globally or locally [7].
• Local Surgery (usually corrects for single defects or diseases): An individual undergoes local plastic surgery for correcting defects or anomalies, or for improving skin texture. Correction of birth anomalies, accidents, or ageing effects, as well as aesthetic improvement, can call for this type of intervention. Local surgery leads to varying amounts of change in the geometry of facial features. The ability to still recognize a person, either using an automatic system or not, depends on the number and mix of modifications, on the regions where these were performed, and on the span of procedures used. In particular, different (main) face regions may influence recognition to a different extent.
• Global Surgery (reconstructs complete facial structure or texture): Functional
defects or damages have to be taken care of. In this type of surgery, the appearance, texture and facial features of an individual may be reconstructed and entirely changed. Exceptions are procedures that, though global in scope, affect the overall individual appearance less; typical examples of these procedures are "light" cases of dermoabrasion, which only affect skin texture, and therefore may hinder only those biometric methods that rely heavily on textural features, and "light" lifting, which may slightly change the proportions holding across the face, including but not limited to the shape of the eyes. Due to its characteristics, global plastic surgery is often misused by criminals to elude law enforcement or security controls. The examples in Figure 1 show that variations due to plastic surgery are often confounded by minor variations, if any, in expression and illumination. This is the case for the database we used in our experiments, which is presented in Section 4. Since different face regions may affect face recognition differently, this is discussed next.
Fig. 1. Some images from the database used for tests. In the upper row, faces before plastic surgery are shown; in the lower row, the same faces after plastic surgery. From left to right, four kinds of procedure ordered by decreasing impact on face features: fat injection, laser skin resurfacing, rhinoplasty, otoplasty.
3 Relative Relevance of Facial Regions for Recognition

The literature highlights a rather well-established trend in partitioning the face into four main regions, one for each relevant component (eyes, nose, and mouth). This is explained by the fact that such regions contain most of the facial information and are easily located even in the presence of distortions. On the other hand, such regions do not hold the same amount of information. We chose the AR-Faces database [4] as a testbed to assess the contribution made to face recognition by the main face components mentioned above, under minor variations, if any, in expression and illumination. The AR-Faces database contains a varied set of distortions (illumination, expression, occlusion, and temporal lag) across a number of subjects (70 men and 56 women). The face biometrics for each subject were acquired twice during two different sessions, with 13 image sets per session. Sets differ in expression (1 neutral, 2 smile, 3 anger, 4 scream), illumination (5 left illumination, 6 right illumination, 7 all encompassing illumination), occlusions (8 sun glasses, 11 scarf) or combinations thereof (9 sun glasses and left illumination, 10 sun glasses and right illumination, 12 scarf and left illumination, 13 scarf and right illumination). Sets 14 to 26 of the second session are acquired using the same conditions as those employed for sets 1 to 13. We first analyzed the facial components' degree of interdependence (see Table 1) using the correlation index c(A,B):
c(A,B) = \frac{\sum_{m}\sum_{n}(A_{mn} - \bar{A})(B_{mn} - \bar{B})}{\sqrt{\left(\sum_{m}\sum_{n}(A_{mn} - \bar{A})^{2}\right)\left(\sum_{m}\sum_{n}(B_{mn} - \bar{B})^{2}\right)}}    (1)
where n and m are the image dimensions, and \bar{A} and \bar{B} are the averages of the corresponding images A and B. This experiment was performed using neutral images from AR-Faces after locating the different facial components. The choice of the image set is due to the need to investigate the net correlations among components when no other factor/variation influences face appearance.

Table 1. Correlation Degree among Facial Components. The right eye coordinates (EYE RX) are reflected with respect to the vertical symmetry axis to yield "EYE LX" coordinates.

FACE COMPONENTS   EYE LX   EYE RX   NOSE    MOUTH
EYE LX             1.00     0.71     0.11   -0.21
EYE RX             0.71     1.00     0.12   -0.22
NOSE               0.11     0.12     1.00   -0.20
MOUTH             -0.21    -0.22    -0.20    1.00
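For concreteness, the following is a minimal NumPy sketch of the correlation index in Eq. (1); it is not taken from the paper and simply assumes two equally sized grayscale regions given as 2-D arrays.

```python
import numpy as np

def correlation_index(A, B):
    """Correlation index c(A, B) of Eq. (1) between two equally sized
    grayscale image regions."""
    A = A.astype(np.float64)
    B = B.astype(np.float64)
    dA = A - A.mean()
    dB = B - B.mean()
    denom = np.sqrt((dA ** 2).sum() * (dB ** 2).sum())
    return float((dA * dB).sum() / denom)
```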
Table 1 confirms the intuitive expectation that the correlation index among the facial components stays constantly low, except for the two eyes, which show high values due to approximate face symmetry. Another crucial consideration is the possible presence of local distortions on the face. Further experiments showed that inhomogeneous shadows diminish the correlation degree between the eyes down to 0.18, when no correction (illumination normalization) is performed. On the other hand, over-illumination in the same region is less serious, as it lowers the correlation index to 0.40. A similar effect can also be noticed regarding some expression variations, e.g., one eye closed, or the pose slightly away from frontal. Even the correlation among the eyes' regions holds only for well-controlled data capture settings. In a second experiment, we analyzed the impact on performance provided by single facial components in terms of recognition accuracy. Towards that end, the feature extraction process takes advantage of FARO, which is a fractal-based approach found to be robust to local distortions [2]. The figures of merit (FOM) for performance (accuracy) evaluation are Recognition Rate (RR) and Equal Error Rate (EER), and we evaluated their degradation when only one element of the face is used. As expected, we found that eyes, nose and mouth weigh differently in recognition: for example, when hiding the nose, the information loss is less serious than when occluding the eyes. Three images with different facial expressions have been used for each subject: set 1 (non-occluded face in neutral condition) as gallery, and sets 2 (angry) and 3 (smile) as probes. The results in terms of RR and EER are shown in Table 2. The RR maxima are at most 0.73 and 0.65, respectively, for angry and smile expression
images, because expression changes introduce a major distortion in the feature vectors face recognition depends on. The results further show that the eyes by themselves provide most of the information needed for recognition, with the contribution of the mouth coming second but also significant.

Table 2. Contribution of Single Face Components to Overall Performances (RR and EER)

SUBSET        FACE COMPONENTS
              EYE LX   EYE RX   NOSE    MOUTH
SET 2   RR     0.73     0.64    0.35     0.46
        EER    0.10     0.13    0.24     0.17
SET 3   RR     0.65     0.57    0.48     0.71
        EER    0.12     0.18    0.27     0.11
Since not all the anatomical face components have the same weight during recognition, one should expect that modifications of the mouth or nose due to plastic surgery will have a lower weight/effect than those affecting the eyes.
4 Plastic Surgery Database

Experiments to assess the feasibility and utility of our recognition methods FARO and FACE on distorted faces were performed using the plastic surgery database proposed in [7], which consists of 1800 full frontal face images of 900 subjects. The database contains cases such as Rhinoplasty (nose surgery), Blepharoplasty (eyelid surgery), brow lift, skin peeling, and Rhytidectomy (face lift). For each individual, there are two frontal face images (before and after plastic surgery). These pairs of images generally present minor variations, if any, in illumination and expression. Examples of exceptions are shown in Figure 1. The database contains 519 image pairs corresponding to local surgeries and 381 cases of global surgery (e.g., skin peeling and face lift). We adopted two different normalization techniques, which vary in complexity. In both cases, the face is located using the Viola-Jones algorithm [8]; an extended active shape model [5] is then applied to the located faces, in order to obtain about 68 reference points (center of the eyes, face contour, center of the mouth, nose tip, etc.). The two normalization techniques differ in how they exploit the reference points to process the face image. In the first case, the normalization process NP1 is simple: the extreme points of the face contour (left and right), the highest point of the eyebrows and the lowest point on the chin are used to clip a Region of Interest (ROI), which contains the face. The ROI is then resized to a resolution of 64×100. In the second case, the normalization process NP2 is more complex, since points are used to correct the face pose, aiming at reconstructing a perfectly frontal image; the Self Quotient Image algorithm [9] is used to compensate for possible illumination variations. More details about the second normalization technique can be found in [3]. Table 3 reports the number of images for each group of plastic surgery procedures after the normalization process. In some cases, the number of images is lower than that reported in [7], due to possible errors encountered during face location (faces not
located or reference points incorrectly located). Notice that the numbers hold for both of the normalization methods, since they are computed at a step preceding their split.

Table 3. Composition of the Plastic Surgery Database after Normalization

Type     Plastic Surgery Procedure                                        Number of Individuals
Local    Dermabrasion                                                      23
         Brow lift (Forehead surgery)                                      50
         Otoplasty (Ear surgery)                                           67
         Blepharoplasty (Eyelid surgery)                                   95
         Rhinoplasty (Nose surgery)                                       149
         Others (Mentoplasty, Malar augmentation, Craniofacial,
                 Lip augmentation, Fat injection)                          44
Global   Skin peeling (Skin resurfacing)                                   60
         Rhytidectomy (Face lift)                                         296
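As an illustration of the simpler normalization NP1 described above, the sketch below clips a face ROI from landmark extremes and resizes it to 64×100. It assumes the face has already been detected and that the roughly 68 reference points are available as an (N, 2) array from a landmark detector such as the extended active shape model; the bounding-box rule used here is only an approximation of the exact point selection described in the text.

```python
import cv2

def normalize_np1(gray_image, landmarks):
    """NP1-style normalization sketch (Section 4): clip the ROI bounded by
    the face contour extremes, the top of the eyebrows and the bottom of
    the chin, then resize it to 64x100.

    `landmarks` is an (N, 2) array of (x, y) reference points; the ROI is
    taken as the bounding box of all points, an approximation of the rule
    stated in the text.
    """
    xs, ys = landmarks[:, 0], landmarks[:, 1]
    left, right = int(xs.min()), int(xs.max())   # face contour extremes
    top, bottom = int(ys.min()), int(ys.max())   # eyebrow top / chin bottom
    roi = gray_image[top:bottom, left:right]
    # The paper reports a 64x100 resolution; width/height order is assumed here.
    return cv2.resize(roi, (64, 100))
```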
5 Experiments

Four different algorithms were tested on the image groups: Principal Component Analysis (PCA) [1], Linear Discriminant Analysis (LDA) [1], the fractal-based FARO [2], and the correlation-based FACE [3]. The first two were chosen for their popularity and for the fact that they are still widely used for benchmark comparison studies. Before presenting the experimental results, we note that the primary goal of the present work is to evaluate an effective way to handle variations due to plastic surgery. Such variations, however, are mostly confounded by other variations, e.g., illumination and pose, even if those are not major. The use of methods that are robust to such further variations allows us to better assess the net effect of plastic surgery in face recognition and to facilitate robust recognition that can contend with it. FARO (FAce Recognition against Occlusions and Expression Variations) is based on PIFS (Partitioned Iterated Function Systems), but, differently from it, is quite robust with respect to expression changes and partial occlusions. Even though plastic surgery does not directly induce such variations, we are interested in addressing them too, since some images in the database we use exhibit them (see Figure 1). In particular, some significant expression changes, as well as some significant effects of specific plastic procedures that affect limited regions of the face, can be considered as localized occlusions and their contribution discounted. Traditional algorithms based on PIFS compute a map of self-similarities inside the whole input image, searching for correspondences among small square regions. Such algorithms suffer from local distortions due to occlusions. FARO overcomes this limitation by locally estimating PIFS information for each face component (eyes, nose, and mouth). Self-similarities found in components are assembled as vectors of features, which are then chained to form the global vector used during image comparisons. The comparison relies on an ad hoc distance measure. Further details on FARO can be found in [2]. The last algorithm, FACE (Face Analysis for Commercial Entities), performs matching
using a localized version of the correlation index c(A,B) between two images (see Equation 1). The correlation index is now evaluated locally over single sub-regions rA and rB of the images A and B. For each sub-region rA we search, in a limited window around the same position in B, for the region rB which maximizes the correlation c(rA, rB). The global correlation C(A,B) is then obtained as the sum of the local maxima. This approach is more accurate, but also more computationally expensive. Pre-computation of some quantities in the matching formulae, code optimization, and lower resolution allow performing a considerable number of matches (hundreds) in less than one second on medium-to-low-end computing equipment. The first experiment has access to face images normalized with NP1, the simpler of the two techniques described in Section 4, which is analogous to that mentioned in [7]. Each subject is characterized by two images, one before and one after the plastic surgical procedure. The face image preceding the procedure is used to enrol the subject in the system, while the second face image (after the procedure) is used for testing. The experiment aims to measure the ability of different algorithms to identify the subject even when his/her appearance has been modified by a plastic surgery operation. The results of this experiment, which are reported in Table 4, show that PCA and LDA are more sensitive to surgically induced variations. Their low performance is due to the global nature of these classification techniques and, especially for PCA, to its sensitivity to pose and even more so to illumination. This is partially true for FARO too. As a matter of fact, though being a local technique, made robust to occlusions, FARO still suffers from another limitation of fractals, namely sensitivity to illumination variations. FARO is based on self-similarities among image sub-regions, which are computed through affine transformations of the texture within the regions: a variation of such texture due to illumination causes a change in self-similarity, and therefore a decrease in performance. These variations are particularly decisive for a significant number of face images in the database, though they are due to image capture conditions rather than to actual plastic surgery effects. FACE, which employs a correlation index computed locally, performs best among the algorithms considered.

Table 4. Accuracy (RR, EER) of the Analyzed Methods Depending on the Type of Plastic Surgery Procedure. Face Images Are Normalized Using NP1 (see text).

Type    Plastic Surgery Procedure   PCA          LDA          FARO         FACE
                                    RR    EER    RR    EER    RR    EER    RR    EER
Local   Dermabrasion                0.20  0.35   0.40  0.28   0.45  0.35   0.81  0.14
        Brow lift                   0.26  0.20   0.45  0.22   0.30  0.29   0.74  0.23
        Otoplasty                   0.38  0.22   0.42  0.23   0.59  0.20   0.73  0.15
        Blepharoplasty              0.30  0.25   0.36  0.27   0.45  0.21   0.60  0.15
        Rhinoplasty                 0.24  0.29   0.32  0.28   0.38  0.25   0.63  0.20
        Others                      0.30  0.30   0.35  0.26   0.40  0.25   0.59  0.19
Global  Skin peeling                0.28  0.25   0.40  0.29   0.50  0.18   0.60  0.20
        Rhytidectomy                0.20  0.32   0.27  0.28   0.36  0.24   0.61  0.19
        Overall                     0.26  0.30   0.35  0.24   0.41  0.26   0.65  0.18
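To make the matching scheme of FACE concrete, here is a minimal sketch of the localized correlation described above: each non-overlapping sub-region of A is compared with candidate regions inside a small search window of B, and the local maxima are summed into C(A, B). The patch size and search radius are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def face_local_correlation(A, B, patch=16, radius=4):
    """Sum of locally maximized correlation indices, C(A, B).

    A, B   : equally sized grayscale face images (2-D arrays)
    patch  : side of the square sub-regions r_A (assumed value)
    radius : half-width of the search window around the same position in B
    """
    H, W = A.shape
    total = 0.0
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            rA = A[y:y + patch, x:x + patch].astype(float)
            dA = rA - rA.mean()
            best = -1.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= H - patch and 0 <= xx <= W - patch:
                        rB = B[yy:yy + patch, xx:xx + patch].astype(float)
                        dB = rB - rB.mean()
                        denom = np.sqrt((dA ** 2).sum() * (dB ** 2).sum()) + 1e-12
                        best = max(best, float((dA * dB).sum() / denom))
            total += best                      # local maximum of c(r_A, r_B)
    return total
```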
The second experiment has access to face images normalized using NP2, the second and more complex technique (see Section 4) [3], while enrolment and testing are performed in a fashion similar to that employed in the preceding experiment. Table 5 reports the results obtained, which confirm the trends identified by the first experiment. Overall, local estimation and processing as embedded in both FARO and FACE lead to much better performance compared to PCA and LDA, which are inherently global methods.

Table 5. Accuracy (RR, EER) of the Analyzed Methods Depending on the Type of Plastic Surgery Procedure. Face Images are Normalized through Pose and Illumination Correction.

Type    Plastic Surgery Procedure   PCA          LDA          FARO         FACE
                                    RR    EER    RR    EER    RR    EER    RR    EER
Local   Dermabrasion                0.35  0.32   0.54  0.19   0.45  0.35   0.82  0.16
        Brow lift                   0.43  0.30   0.45  0.26   0.43  0.25   0.84  0.15
        Otoplasty                   0.46  0.24   0.49  0.21   0.60  0.22   0.72  0.15
        Blepharoplasty              0.38  0.28   0.43  0.20   0.55  0.21   0.72  0.17
        Rhinoplasty                 0.32  0.31   0.38  0.24   0.42  0.22   0.74  0.16
        Others                      0.33  0.28   0.41  0.24   0.44  0.21   0.66  0.21
Global  Skin peeling                0.38  0.26   0.38  0.27   0.70  0.15   0.72  0.17
        Rhytidectomy                0.31  0.29   0.34  0.25   0.39  0.22   0.74  0.17
        Overall                     0.35  0.30   0.40  0.22   0.50  0.24   0.70  0.20
One cannot fail to notice that overall the performance on faces that underwent plastic surgery is lower than that experienced with traditional databases captured under controlled settings. This is consonant with real-world face recognition using uncontrolled settings and provides yet another dimension to image variability that has to be accounted for. Note that plastic surgery, by its very purpose, affects and hinders human recognition too. For comparison purposes, Table 6 tabulates the final RR results from Singh et al. [7] using state-of-the-art methods. The results are very poor for PCA, FDA, LFA and at most at random chance level for CLBP, SURF, and GNN. It becomes apparent (see Table 5) that our FACE method outscores by far existing methods, with results consistently above 70% and some even above 80%.

Table 6. RR for different methods from [7]: Principal Component Analysis (PCA), Fisher Discriminant Analysis (FDA), Local Feature Analysis (LFA), Circular Local Binary Pattern (CLBP), Speeded Up Robust Features (SURF), and Neural Network Architecture based 2D Log Polar Gabor Transform (GNN).

Type    Plastic Surgery Procedure   PCA    FDA    LFA    CLBP   SURF   GNN
Local   Dermabrasion                0.20   0.23   0.25   0.42   0.42   0.43
        Brow lift                   0.28   0.31   0.39   0.49   0.51   0.57
        Otoplasty                   0.56   0.58   0.60   0.68   0.66   0.70
        Blepharoplasty              0.28   0.35   0.40   0.52   0.53   0.61
        Rhinoplasty                 0.23   0.24   0.35   0.44   0.51   0.54
        Others                      0.26   0.33   0.41   0.52   0.62   0.58
Global  Skin peeling                0.25   0.31   0.40   0.53   0.51   0.53
        Rhytidectomy                0.18   0.20   0.21   0.40   0.40   0.42
        Overall                     0.27   0.31   0.37   0.47   0.50   0.53
6 Conclusions

Real-world face recognition has to contend with uncontrolled settings, which include, among others, pose, illumination, expression, and occlusion changes. Another dimension to contend with, that of change in facial appearance after plastic surgery, is addressed in this paper. Towards that end, two integrative region-of-interest (ROI) analysis methods, FARO and FACE, are proposed to address the challenges coming from plastic surgery procedures performed for cosmetic and/or healing reasons. FARO and FACE derive local region of interest (ROI) representations that are properly integrated for robust matching and recognition, which implicitly exploit the relative importance and weight of each face region of interest. FARO and FACE local features are fractal and correlation-based, respectively. Experimental results confirm the expectation that face recognition is indeed challenged by the effects of plastic surgery. The same experimental results show that both FARO and FACE compare favourably against standard face recognition methods such as PCA and LDA, with FACE outscoring by far existing state-of-the-art face recognition methods.
References
1. Abate, A.F., Nappi, M., Riccio, D., Sabatino, G.: 2D and 3D face recognition: A survey. Pattern Recognition Letters 28, 1885–1906 (2007)
2. De Marsico, M., Nappi, M., Riccio, D.: FARO: FAce recognition against occlusions and expression variations. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans 40(1), 121–132 (2010)
3. De Marsico, M., Nappi, M., Riccio, D.: FACE: Face analysis for commercial entities. In: Proceedings of the International Conference on Image Processing (ICIP), Hong Kong, pp. 1597–1600 (2010)
4. Martinez, A.M.: Recognizing imprecisely localized, partially occluded, and expression variant faces from a single sample per class. IEEE Transactions on Pattern Analysis and Machine Intelligence 24(6), 748–763 (2002)
5. Milborrow, S., Nicolls, F.: Locating facial features with an extended active shape model. In: Forsyth, D., Torr, P., Zisserman, A. (eds.) ECCV 2008, Part IV. LNCS, vol. 5305, pp. 504–513. Springer, Heidelberg (2008)
6. Singh, R., Vatsa, M., Noore, A.: Effect of plastic surgery on face recognition: A preliminary study. In: Proceedings of the Workshops of Computer Vision and Pattern Recognition (CVPR), pp. 72–77 (2009)
7. Singh, R., Vatsa, M., Bhatt, H.S., Bharadwaj, S., Noore, A., Nooreyezdan, S.S.: Plastic surgery: A new dimension to face recognition. IEEE Transactions on Information Forensics and Security 5(3), 441–448 (2010)
8. Viola, P., Jones, M.: Robust real-time face detection. International Journal of Computer Vision 57(2), 137–154 (2004)
9. Wang, H., Li, S.Z., Wang, Y., Zhang, J.: Self quotient image for face recognition. In: International Conf. on Image Processing (ICIP), pp. 1397–1400 (2004)
SEMD Based Sparse Gabor Representation for Eyeglasses-Face Recognition

Caifang Song, Baocai Yin, and Yanfeng Sun

Beijing Key Laboratory of Multimedia and Intelligent Software, College of Computer Science and Technology, Beijing University of Technology, Beijing 100124, China
[email protected],
[email protected],
[email protected]
Abstract. Sparse representation for face recognition has been exploited in recent years, and several sparse representation algorithms have been developed. In this paper, a novel eyeglasses-face recognition approach, SEMD Based Sparse Gabor Representation, is proposed. Firstly, for a representation robust to misalignment, a sparse Gabor representation is proposed. Secondly, the spatially constrained earth mover's distance is employed instead of the Euclidean distance to measure the similarity between original data and reconstructed data. The proposed algorithm for eyeglasses-face recognition has been evaluated on different eyeglasses-face databases. The experimental results reveal that the proposed approach is valid and achieves better recognition performance than traditional methods.

Keywords: Eyeglasses-face recognition, Sparse Gabor Representation, Spatially EMD, Virtual sample library.
1 Introduction
Automatic face recognition has become a hot topic of computer vision and pattern recognition due to its potential use in a wide range of applications and its high difficulty in research. Clearly, challenges lie not only at the academic level but also at the application-system design level. A robust face recognition system must recognize faces correctly under different environments, such as different lighting conditions, facial expressions, poses, scales, and occlusion by other objects. Among the last category, eyeglasses are the most common occluding objects, which have a significant effect on the performance of face recognition systems [1]. In order to reduce the impact of eyeglasses on face recognition, traditional methods research how to extract and remove eyeglasses from facial images, and then apply this technique to face recognition systems. Due to the ease of detecting frames, researchers initially proposed approaches to remove the frames of eyeglasses [2][3][4]. These methods only consider the frames, without taking into account the effects of lenses, which may lead to uneven texture transition.
Furthermore, when the frame coincides with the eyebrows, the eyebrows may be removed mistakenly. In 1999, Saito et al. presented a method that used PCA to synthesize eyeglassless facial images [5]. This method considers both the problem of occlusion caused by the glasses frame and that caused by the reflection of the glasses. The representational power of PCA depends on the training set. If the training images do not contain glasses, then the reconstructed faces of input faces cannot represent the glasses properly. This method gives rise to errors which are spread out over the entire reconstructed face, resulting in some degradation of quality, with some traces of the glasses frame remaining. J.-S. Park and Y.-H. Oh proposed a recursive PCA reconstruction method to obtain a natural looking facial image without glasses in 2003 [6]. First, a glasses region is automatically extracted using color and shape information, and then a natural looking facial image without glasses is generated by means of recursive error compensation using PCA reconstruction. This method can produce more natural looking facial images, which look more similar to the original glassless image than those of the previous methods. However, it fails when the glasses have black frames or lenses with graduated color. A novel and comprehensive approach for face recognition was recently proposed based on sparse representation theory [7]. Experimental results showed that the sparse representation recognition approach achieved favorable performance compared with other state-of-the-art methods under various conditions. This method can handle errors due to occlusion and corruption uniformly. However, one limitation of the method is that the training samples are treated as the overcomplete dictionary, which demands that sufficient training samples be available for each class. For the overcomplete dictionary problem, we expand the virtual face database using the genetic algorithm [8]. The other limitation is that it only handles cases in which all training images and the test image are well aligned and have the same pose. There are some unavoidable misalignments in the whole face or eyeglasses between the test image and training image. This raises one problem: how to resolve the misalignment problem of SRC. To solve the misalignment problem, one should further develop a more accurate face alignment method. In consideration of face variations in illumination, alignment, pose, and occlusion, Andrew Wagner et al. proposed deformable sparse recovery and classification (DSRC) [9]. This method uses tools from sparse representation to align a test face image with a set of frontal training images in the presence of significant registration error and occlusion. Junzhou Huang proposed transform-invariant sparse representation (TSR) [10]. This method separates each projection into the aligned projection target and a residue due to misalignment. The desired aligned projection target is then iteratively optimized by gradually diminishing the residue. On the other hand, the robustness of the face representation and classification method to misalignment should be greatly improved. Misalignment usually includes translations, scaling, and rotation. Figure 1 shows examples of misalignment. The curse of misalignment compels us to seek a representation more robust to misalignment. The Gabor wavelet representation seems a natural choice since it
can capture local features corresponding to spatial frequency, spatial localization, and orientation selectivity. As a result, the Gabor wavelet representation of face images should be robust to variations due to misalignment [11]. So we propose a novel face representation, the sparse Gabor representation. It can handle the scaling and rotation components of the misalignment problem. To deal with the spatial misalignment issue in face recognition, Dong Xu et al. proposed the Spatially Constrained Earth Mover's Distance (SEMD) method [12]. So we make use of SEMD in our paper to handle the translation component of the misalignment problem. Based on the above analysis, we propose a sparse Gabor representation based classification algorithm using the spatially constrained earth mover's distance. For feature representation, Gabor wavelets exhibit desirable characteristics of spatial locality and orientation selectivity. For classification, the Spatially Constrained Earth Mover's Distance yields better results than the Euclidean distance under misalignment. The rest of the paper is organized as follows: in Section 2 we introduce the sparse Gabor representation. Then we introduce the SEMD based sparse Gabor representation and its application to face recognition in Section 3. Experimental evaluations of our method are presented in Section 4. Finally, we give the conclusions in Section 5.

Fig. 1. Examples of Misalignment
2 Sparse Gabor Representation
In this section, we first briefly review the basics of sparse representation theory and discuss one of the most recent works on face recognition based on sparse representation. Finally, we propose a novel face representation, the sparse Gabor representation, to address the scaling and rotation components of the misalignment problem.

2.1 Sparse Representation
Sparse representations are representations that account for most or all of the information of a signal with a linear combination of a small number of elementary signals called atoms. Often, the atoms are chosen from a so-called over-complete dictionary. Recent developments in multi-scale and multi-orientation representation of signals, such as the wavelet, ridgelet, curvelet and contourlet transforms, are an
important incentive for the research on sparse representation. The original goal of these works was not inference or classification, but rather representation and compression of signals. Because of the natural discriminative power of the sparsest representation, a face recognition algorithm (called SRC) based on the idea of sparse representation has recently been proposed [7], which appears to be able to handle changing expression, illumination, occlusion and corruption uniformly. The details of the SRC method are introduced as follows. In the SRC algorithm, it is assumed that the whole set of training samples forms a dictionary, and then the recognition problem is cast as one of discriminatively finding a sparse representation of the test image as a linear combination of training images. If the test image is indeed an image of one of the subjects in the training database, the linear combination will only involve training images of that subject. The idea can be formulated in the following way: each image of size w×h is stacked as a vector v_{i,n_i} ∈ R^m, where i is the subject number and n_i is the image number within each subject. The whole training set can be represented as A = [v_{1,1}, v_{1,2}, ..., v_{1,n_1}, v_{2,1}, ..., v_{k,n_k}] ∈ R^{m×n}, with m = w × h, where k is the total number of subjects and n is the total number of training images, n = Σ_{i=1}^{k} n_i. Based on the assumption that the vectors of each subject span a subspace for this subject, a new test image y ∈ R^m of subject i can be represented as a linear combination of the training examples associated with subject i:

y = α_{i,1} v_{i,1} + α_{i,2} v_{i,2} + ... + α_{i,n_i} v_{i,n_i}    (1)
for some scalars α_{i,j} ∈ R, j = 1, 2, ..., n_i. Since the membership i of the test sample is initially unknown, the linear representation of y can be rewritten in terms of all training samples as

y = Ax ∈ R^m    (2)
where x = [0, ..., 0, α_{i,1}, α_{i,2}, ..., α_{i,n_i}, 0, ..., 0] is a coefficient vector whose entries are mostly zero except those associated with the i-th subject. It can be obtained by solving the linear system of equations y = Ax. Obviously, if m > n, the system of equations y = Ax is overdetermined, and the correct x can usually be found as its unique solution. However, in robust face recognition, the system y = Ax is typically underdetermined, and so its solution is not unique. Conventionally, this difficulty is resolved by choosing the minimum ℓ2-norm solution:

x̂_2 = arg min ||x||_2  subject to  Ax = y    (3)
x̂_2 is generally dense, with large nonzero entries corresponding to training samples from many different classes. It is not especially informative for recognizing the test sample y. There is a simple observation: a valid test sample y can be sufficiently represented using only the training samples from the same class. This representation
is naturally sparse if the number of object classes is reasonably large. This motivates us to solve the following optimization problem for a sparse solution:

x̂_0 = arg min ||x||_0  subject to  Ax = y    (4)
where ||·||_0 denotes the ℓ0-norm, which counts the number of nonzero entries in a vector. However, the problem of finding the sparse solution of Eq. (4) is NP-hard and difficult to solve. The theory of compressive sensing reveals that, if the solution x is sparse enough, we can solve the following convex relaxed optimization problem to obtain an approximate solution:

x̂_1 = arg min ||x||_1  subject to  Ax = y    (5)
where ||·||_1 denotes the ℓ1-norm. The classifier of SRC differs significantly from other methods. It uses the sparse representation of each individual test sample directly for classification instead of using sparsity to classify all test samples. For each class i, let δ_i be the function that selects the coefficients associated with the i-th class; δ_i(x) is a new vector whose only nonzero entries are the entries in x that are associated with class i. Then, we determine the class of the test sample from the reconstruction error between this test sample and the training samples of class i:

identity(y) = arg min_i r_i(y),  where r_i(y) = ||y − A δ_i(x̂_1)||_2    (6)
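For illustration, the following is a minimal sketch of the SRC pipeline of Eqs. (2)-(6). The exact ℓ1 minimization of Eq. (5) is replaced here by a simple iterative soft-thresholding (ISTA) approximation of the lasso problem; the regularization weight and iteration count are arbitrary illustrative choices, not part of the original algorithm.

```python
import numpy as np

def src_classify(A, y, labels, lam=0.01, iters=500):
    """Sparse-representation-based classification sketch.

    A      : (m, n) dictionary whose columns are (normalized) training face vectors
    y      : (m,) test face vector
    labels : (n,) class label of each column of A
    """
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):                     # ISTA approximation of Eq. (5)
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    classes = np.unique(labels)
    # Class-wise residuals r_i(y) = ||y - A * delta_i(x)||_2, as in Eq. (6)
    residuals = [np.linalg.norm(y - A[:, labels == c] @ x[labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]
```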
All training images (or their low dimensional versions) have to be stored and accessed during testing, and thus, for a large training set, both the space complexity and the speed performance may pose practical challenges. For these problems, Lishan Qiao et al. proposed an unsupervised dimensionality reduction method called sparsity preserving projection (SPP) [13]. This method designs the weight matrix straightforwardly based on sparse representation theory, through which the sparsity can be optimally and naturally derived. Pradeep Nagesh and Baoxin Li make use of sparse representation theory to find the common component and the innovation component of face images, and proposed the B-JSM method [14]. Considering that the SRC algorithm does not involve some special factors in face recognition, such as measuring similarity with a more suitable distance than the Euclidean distance, Yangfeng Ji proposed Mahalanobis distance based non-negative sparse representation (MDNSR) [15].

2.2 Sparse Gabor Representation
While the SRC model demonstrates the power of harnessing sparsity in the face recognition problem via ℓ1 minimization, it has some disadvantages. In the case of alignment, Wright et al. tested different types of features, including Eigenface, Randomface and Fisherface, for SRC, and they claimed that SRC is insensitive to feature types when the feature dimension is large enough. But the features
tested in [7] are all holistic features. Since in practice the number of training samples is often limited, such holistic features cannot effectively handle the variations of illumination, expression, pose and local deformation. Thus, the claim that feature extraction is not so important to SRC actually holds only for holistic features. Although DSRC and TSR can alleviate the misalignment to some degree, they are time-consuming and complicated. Since Gabor filters detect amplitude-invariant spatial frequencies of pixel gray values, the discriminative features extracted from the Gabor filtered images could be more robust to illumination, facial expression and eyeglasses changes. We make use of the Gabor transform to obtain more robust features. In our sparse Gabor representation, the Gabor filters are formulated as follows:

ψ_{u,v}(z) = (||k_{u,v}||^2 / σ^2) · exp(−||k_{u,v}||^2 ||z||^2 / (2σ^2)) · [exp(i k_{u,v} · z) − exp(−σ^2 / 2)]    (7)
where z = (x, y) denotes the pixel, and the wave vector is k_{u,v} = k_v e^{iφ_u}, with k_v = k_max / f^v and φ_u = uπ/8, where k_max is the maximum frequency and f is the spacing factor between kernels in the frequency domain. In addition, σ determines the ratio of the Gaussian window width to the wavelength. In Eq. (7), u and v are the orientation factor and the scale factor, respectively. The Gaborface, G_{u,v}(z) = Img(z) ∗ ψ_{u,v}(z), representing one face image, is computed by convolving the input face image with the corresponding Gabor filters. The Gabor filtering coefficient G_{u,v}(z) is a complex number, which can be rewritten as G_{u,v}(z) = M_{u,v}(z) · exp(iθ_{u,v}(z)), with M_{u,v}(z) being the magnitude and θ_{u,v}(z) being the phase. It is known that magnitude information contains the variation of local energy in the image, so only the magnitude of the Gabor filter response is considered. The Gaborface can not only enhance the face features but also tolerate local image deformation to some extent. So we propose using the Gaborface to replace the holistic face feature in the SRC framework. Given the Gaborfaces of each face image in the training set and the test set, we can apply sparse representation similarly, so the training set can be represented as follows:

x̂^{(p)} = arg min ||x||_1  subject to  A^p x = y^p    (8)
where p indexes the p-th Gaborface of each face image. Generally, we choose the parameters as follows: five different scales v ∈ {0, ..., 4} and eight directions u ∈ {0, ..., 7}. Hence 40 Gaborfaces are obtained for each face image, p = 0, ..., 39. Figure 2 shows an example of the sparse Gabor representation of a face image.
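A small sketch of building the magnitude Gaborfaces from Eq. (7) is shown below. The kernel size and the common parameter choices σ = 2π, k_max = π/2 and f = √2 are assumptions borrowed from the general Gabor-face literature, not values stated in this paper.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(u, v, size=31, sigma=2 * np.pi, kmax=np.pi / 2, f=np.sqrt(2)):
    """Complex Gabor kernel psi_{u,v}(z) of Eq. (7)."""
    k = (kmax / f ** v) * np.exp(1j * u * np.pi / 8)       # wave vector k_{u,v}
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    k2 = np.abs(k) ** 2
    z2 = xs ** 2 + ys ** 2
    envelope = (k2 / sigma ** 2) * np.exp(-k2 * z2 / (2 * sigma ** 2))
    carrier = np.exp(1j * (k.real * xs + k.imag * ys)) - np.exp(-sigma ** 2 / 2)
    return envelope * carrier

def gaborface(img, u, v):
    """Magnitude Gaborface M_{u,v}(z) = |Img(z) * psi_{u,v}(z)| (Section 2.2)."""
    psi = gabor_kernel(u, v)
    img = img.astype(float)
    real = convolve2d(img, psi.real, mode='same')
    imag = convolve2d(img, psi.imag, mode='same')
    return np.sqrt(real ** 2 + imag ** 2)

# 40 Gaborfaces per image: scales v in {0,...,4} and orientations u in {0,...,7}.
# gaborfaces = [gaborface(img, u, v) for v in range(5) for u in range(8)]
```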
3 SEMD Based Sparse Gabor Representation and Its Application
For classification, there are some works related to sparse representation. The SRC algorithm does not involve some special factors in face recognition, such as measuring similarity with a more suitable distance than the Euclidean distance, etc. [14]
Fig. 2. Sparse Gabor representation of a face image
3.1 Spatially Constrained Earth Mover's Distance
To deal with the spatial misalignment issue in face recognition, Dong Xu et al. proposed SEMD [12] (Spatially Constrained Earth Mover's Distance) to measure the distance between different face images, in which the source image is partitioned into nonoverlapping local patches while the destination image is represented as a set of overlapping local patches at different positions. We formulate the problem of face recognition as matching between two sets, which may have unequal length. The Earth Mover's Distance is suitable for such a problem. To reduce the computation cost, we represent x and y with nonoverlapping and overlapping patches, respectively, and constrain each patch in image x to be matched only to patches within a local spatial neighborhood in image y. We enforce the spatial neighborhood constraint in EMD because faces are already roughly aligned according to the position of the eyes in the preprocessing step. With the spatial neighborhood constraint and the nonoverlapping representation for x [16], the total number of parameters is reduced. Thus, the computational cost is significantly reduced. The distance from image x to image y is defined as

d(x → y) = Σ_i Σ_{j ∈ N^l(i)} f̂_ij g(x_i, y_j)    (9)

where f̂_ij represents the flow from patch x_i to patch y_j, the function g is the Euclidean distance between the sparse coefficients of two patches, and N^l(i) is the index set of the nearest patches in the spatial domain for patch i of image x. The flow f̂ can be obtained by solving the following linear programming problem with the standard Simplex method:

f̂ = arg min_f Σ_i Σ_{j ∈ N^l(i)} f_ij g(x_i, y_j),
s.t.  1. 0 ≤ f_ij ≤ 1, ∀i, j
      2. Σ_{j: j ∈ N^l(i)} f_ij = 1, ∀i
      3. Σ_{i: j ∈ N^l(i)} f_ij = 1, ∀j    (10)
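The flow problem of Eq. (10) can be solved with any off-the-shelf linear-programming routine; the sketch below uses SciPy's linprog and, for brevity, enforces only the bounds and the per-source-patch constraint (2), leaving out constraint (3) on the destination patches. The cost matrix g and the neighborhood mask are assumed to be precomputed.

```python
import numpy as np
from scipy.optimize import linprog

def semd_distance(cost, neighbor_mask):
    """Simplified SEMD flow problem of Eqs. (9)-(10).

    cost[i, j]          : g(x_i, y_j), distance between sparse coefficients
    neighbor_mask[i, j] : True if patch j of y lies in the neighborhood N^l(i)
    """
    n_src = cost.shape[0]
    pairs = [(i, j) for i in range(n_src)
             for j in range(cost.shape[1]) if neighbor_mask[i, j]]
    c = np.array([cost[i, j] for i, j in pairs])         # objective coefficients
    A_eq = np.zeros((n_src, len(pairs)))
    for col, (i, _) in enumerate(pairs):
        A_eq[i, col] = 1.0                                # sum_j f_ij = 1 for each i
    res = linprog(c, A_eq=A_eq, b_eq=np.ones(n_src),
                  bounds=[(0.0, 1.0)] * len(pairs), method="highs")
    return float(res.fun)                                 # d(x -> y)
```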
3.2 SEMD Based Sparse Gabor Representation and Recognition Algorithm
We use all training samples and virtual samples to create the dictionary matrix. Given a test set and a training set, we first compute their Gaborfaces. Then, for each
Gaborface, we partition the images of the test set into 8 nonoverlapping blocks, y = {y^0, y^1, ..., y^7}. Since we constrain the match to be within a distance of 2√2, there are up to l = 25 candidate blocks from image x for consideration, so the images of the training set are partitioned into 8 × 25 overlapping blocks, x = {x^0, x^1, ..., x^199}. For each block, we compute the sparse coefficients and the reconstruction error between the test sample and the training samples. We use SEMD to compute the distance over all Gaborfaces between image x and image y. Finally, we aggregate the results by voting. Fig. 3 shows the main process, and the detailed algorithm is given in Table 1.

Fig. 3. The process of our algorithm
Table 1. Our algorithm

Training samples: A = [v_{1,1}, v_{1,2}, ..., v_{1,n_1}, v_{2,1}, ..., v_{k,n_k}] ∈ R^{m×n}, n = Σ_{i=1}^{k} n_i; test sample: y ∈ R^m.
Step 1: convolve each face image with the Gabor filters to obtain 40 Gaborfaces: A = {A^0, A^1, ..., A^39}, y = {y^0, y^1, ..., y^39}.
Step 2: partition the Gaborfaces of the training images into 200 overlapping local patches, and partition the Gaborfaces of the test image into 8 nonoverlapping local patches.
Step 3: apply sparse representation to each patch: x̂_1^{(pq)} = arg min ||x^{(pq)}||_1 subject to A^{(pq)} x = y^{(pq)}, where p indexes the Gaborface, p = 0, ..., 39, and q indexes the block, q = 0, ..., 199.
Step 4: for each A^p, compute the SEMD: d(x → y) = Σ_i Σ_{j ∈ N^l(i)} f̂_ij g(x^{q_i}, y^{q_j}), where i = 0, ..., 7 and j = 0, ..., 24.
Step 5: aggregate the results by voting.
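Steps 4-5 of Table 1 can be summarized by the following sketch: each of the 40 Gaborfaces votes for its nearest gallery subject under the SEMD distance, and the majority vote gives the final identity. The per-Gaborface distance function is assumed to be supplied (e.g., a routine like the SEMD sketch in Section 3.1); the data layout is illustrative only.

```python
from collections import Counter

def classify_by_voting(probe_gaborfaces, gallery, distance):
    """probe_gaborfaces : list of 40 probe Gaborface descriptors
       gallery          : dict mapping subject id -> list of 40 descriptors
       distance         : callable d(probe_descriptor, gallery_descriptor)"""
    votes = []
    for p, probe_gf in enumerate(probe_gaborfaces):
        nearest = min(gallery, key=lambda sid: distance(probe_gf, gallery[sid][p]))
        votes.append(nearest)                       # vote of the p-th Gaborface
    return Counter(votes).most_common(1)[0][0]      # majority vote (Step 5)
```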
4 Experimental Results
In this section, we test our algorithm on face recognition with two face databases: the CAS-PEAL face database and the FERET database. We showed in [8] that the recognition rates increase along with the number of samples. In our experiments, all virtual samples and one glassless image per subject
are selected as the training set. One eyeglasses image per subject, randomly selected from the remaining images, is used for testing. The average recognition rate is reported as the final recognition rate.

4.1 Experiments on the FERET Face Database
The FERET database contains 14051 gray-scale images of 1199 individuals with varying pose, expression, accessory, and lighting conditions. Since we are concerned with the eyeglasses problem in this paper, we select 288 images of 96 people from this database, each person having one frontal facial image and two eyeglasses facial images. Samples from the 288 selected face images are depicted in Fig. 4.
Fig. 4. Some cropped images with/without glasses from the FERET database. Fig. 5. Recognition rates comparison on the FERET database.
We compare the performance of our algorithm with that of the SRC algorithm and other eyeglasses-face recognition algorithms. As illustrated in Figure 5, our algorithm has better performance than the other algorithms. The best performance of our algorithm in this experiment is 93.75%, while the best performance of the other algorithms is 87.5%. As shown in this experiment, the classification accuracy is improved by our algorithm.

4.2 Experiments on the CAS-PEAL Face Database
The CAS-PEAL face database contains 99,594 images of 1040 individuals (595 males and 445 females) with varying Pose, Expression, Accessory, and Lighting (PEAL). In our experiment, we select 400 images of 100 people from this library, each person having one normal facial image and three glasses facial images. Facial images are cropped out from the selected images and resized to 64×64. Samples from the 400 selected face images are depicted in Figure 6. We also compare the performance of our algorithm with other algorithms. In Figure 7, we can see that our algorithm is slightly better than the other algorithms. The best performance of our algorithm in this experiment is 99%, while the best performance of the other algorithms is 97%. Because of the similarity between the face images of the CAS-PEAL database, the proposed method does not clearly outperform the other methods.
Fig. 6. Some cropped images with/without glasses from the CAS-PEAL database. Fig. 7. Recognition rates comparison on the CAS-PEAL database.
5 Conclusions
In this paper, we propose an improved face recognition algorithm based on a sparse Gabor representation with the spatially constrained earth mover's distance. In view of SRC's sensitivity to misalignment, we address the problem from both feature representation and classification. For feature representation, a sparse Gabor representation is proposed. For classification, SEMD is used as the measure of image similarity instead of the Euclidean distance. In SRC, the training samples are treated as the overcomplete dictionary, which demands that sufficient training samples be available for each class. For the overcomplete dictionary problem, we expand the virtual face database using the genetic algorithm. The experimental results on face recognition show that the performance of our algorithm is better than that of other algorithms.
References
1. Belhumeur, P.N.: Ongoing Challenges in Face Recognition. In: Frontiers of Engineering: Reports on Leading-Edge Engineering from the 2005 Symposium, pp. 5–14 (2005)
2. Saito, Y., Kenmochi, Y., Kotani, K.: Extraction of a Symmetric Object for Eyeglass Face Analysis Using Active Contour Model. In: International Conference on Image Processing, Vancouver, Canada, vol. II, pp. 231–234 (September 2000)
3. Jing, Z., Mariani, R.: Glasses Detection and Extraction by Deformable Contour. In: Proc. Int'l Conf. Pattern Recognition, pp. 933–936 (August 2000)
4. Wu, H., et al.: Glasses Frame Detection with 3D Hough Transform. In: Proc. Int'l Conf. Pattern Recognition, pp. 346–349 (August 2002)
5. Saito, Y., Kenmochi, Y., Kotani, K.: Estimation of Eyeglassless Facial Images Using Principal Component Analysis. In: Proc. Int'l Conf. Image Processing, vol. 4, pp. 197–201 (October 1999)
6. Park, J.-S., Oh, Y.-H., Ahn, S.-C., Lee, S.-W.: Glasses Removal from Facial Image Using Recursive PCA Reconstruction. In: Proc. Int'l Conf. Audio- and Video-based Biometric Person Authentication, pp. 369–376 (June 2003)
7. Wright, J., Yang, A., Ganesh, A., Sastry, S., Ma, Y.: Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence 31(2), 210–227 (2009)
8. Song, C., Yin, B., Sun, Y.: Adaptively Doubly Weighted Sub-pattern LGBP for Eyeglasses-face Recognition. Journal of Computational Information Systems 6(1), 63–70 (2010)
9. Wagner, A., Wright, J., Ganesh, A., Zhou, Z., Ma, Y.: Towards a Practical Face Recognition System: Robust Registration and Illumination by Sparse Representation. In: IEEE Conference on Computer Vision and Pattern Recognition, vol. 4, pp. 597–604 (2009)
10. Huang, J., Huang, X., Metaxas, D.: Simultaneous image transformation and sparse representation recovery. In: IEEE Conference on Computer Vision and Pattern Recognition, June 23-28, pp. 1–8 (2008)
11. Shan, S., Gao, W., Chang, Y., Cao, B., Yang, P.: Review the strength of Gabor features for face recognition from the angle of its robustness to misalignment. In: Proceedings of ICPR 2004, vol. I, pp. 338–341 (2004)
12. Xu, D., Yan, S., Luo, J.: Face recognition using spatially constrained earth mover's distance. IEEE Trans. on Image Processing 17(11), 2256–2260 (2008)
13. Qiao, L., Chen, S., Tan, X.: Sparsity Preserving Projections with Applications to Face Recognition. Pattern Recognition, 331–341 (2010)
14. Nagesh, P., Li, B.: A Compressive Sensing Approach for Expression-Invariant Face Recognition. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1518–1525 (2009)
15. Ji, Y.F., Lin, T., Zha, H.B.: Mahalanobis distance based non-negative sparse representation for face recognition. In: ICMLA (2009)
16. Zhou, W., Ahrary, A., Kamata, S.-i.: Face Recognition using Local Quaternion Patterns and Weighted Spatially Constrained Earth Mover's Distance. In: The 13th IEEE International Symposium on Consumer Electronics, pp. 285–289 (2009)
Face Recognition on Low Quality Surveillance Images, by Compensating Degradation

Shiva Rudrani and Sukhendu Das

Department of Computer Science and Engineering, Indian Institute of Technology Madras, Chennai-600036, India
{srudrani,sdas}@cse.iitm.ac.in
Abstract. Face images obtained by an outdoor surveillance camera are often confronted with severe degradations (e.g., low resolution, low contrast, blur and noise). This significantly limits the performance of face recognition (FR) systems. This paper presents a framework to overcome the degradation in images obtained by an outdoor surveillance camera, in order to improve the performance of FR. We have defined a measure, based on the difference in intensity histograms of face images, to estimate the amount of degradation. In the past, super-resolution techniques have been proposed to increase the image resolution for face recognition. In this work, we attempt a combination of partial restoration (using super-resolution, interpolation, etc.) of probe samples (long-distance outdoor shots) and simulated degradation of gallery samples (indoor shots). Due to the unavailability of any benchmark face database with such gallery and probe images, we have built our own database¹ and conducted experiments on a realistic surveillance face database. PCA and FLDA have been used as baseline face recognition classifiers. The aim is to illustrate the effectiveness of our proposed method of compensating the degradation in surveillance data, rather than designing a specific classifier space suited for degraded test probes. The efficiency of the method is shown by the improvement in face classification accuracy when training with the acquired indoor gallery samples and testing with the outdoor probes.

Keywords: Degradation, Face database, Face recognition, Image quality, Surveillance, Super-resolution.
1 Introduction
The goal of an intelligent surveillance system is to accurately identify "Who is Where?". Face recognition has become more attractive than ever because of the increasing need for security. In a typical surveillance scenario, images used for training a face recognition (FR) system might be available beforehand from sources such as passports, identity cards, digital records, etc.; these snaps are taken under a well-controlled environment in an indoor setup (laboratory, control room), whereas testing images are available when a subject comes into a
¹ This database will be made publicly available for research and academic purposes.
surveillance scene. Images obtained by surveillance security cameras are often confronted with degradations (e.g., low resolution, low contrast, blur and noise). These degradations are due to environmental conditions, interface circuitry (IP, analog camera) or the camera's hardware/software limitations. The recognition accuracy of current intensity-based FR systems drops off significantly if facial images are of low quality. Most face recognition systems [2][13][1][3] have been shown to work in controlled environments, where both training and testing samples are acquired under similar controlled illumination conditions in indoor environments. With ever increasing demands to combine "security with surveillance" in an integrated and automated framework, it is necessary to analyze samples of face images of subjects acquired by a surveillance camera from a long distance (≥ 50 yards). Estimating the degradation parameters is an important problem because it allows a better estimate of the lost information from the observed image data. Once these parameters are estimated, they can be useful for recognition in two ways: we can either simulate the degradation on good quality images or apply an inverse process on low quality images to enhance them. We have adopted the former approach and have used the estimated blur parameter to simulate the degradation on good quality (i.e., training) images. The rest of the paper is organized as follows: a brief description of the data acquisition is given in Section 2. The proposed framework is presented in Section 3, where we describe the steps involved in the parameter estimation process. In Section 4, we give the experimentation details and present the results of two baseline classifiers, namely PCA and FLDA [2][4], for different cases of training and testing.
2
Outdoor Surveillance Setup Used for Data Acquisition
In an outdoor surveillance scenario, the samples used for training the classifier are called gallery images, whereas those used for testing are probe images. Gallery is therefore the term used for the part of the face database obtained in a controlled (indoor) environment for the different persons (subjects), whereas probe images are the face samples obtained from video frames acquired in an uncontrolled (outdoor) environment using a surveillance camera. The outdoor images are captured from a distance of 50 m–100 m, with the camera placed at an elevation of around 20 m–25 m. The face regions were extracted from the video frames using the popular Viola-Jones face detector (V-J-F-D) [10], which is considered a benchmark in face detection. Figure 1 shows a typical example of indoor and outdoor scenes where the face templates are enclosed in rectangles. The enclosed face templates clearly depict the difference in resolution between the two face images. It is clear from the figure that the acquired probe images suffer severe degradation besides low resolution. The complexity of the problem is evident from the degradation (large change in resolution and contrast along with blur and noise) of the outdoor (probe) shots with respect to the indoor (gallery) shots.
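As a rough illustration of this extraction step, the following is a minimal sketch of how face templates could be cropped from video frames with OpenCV's Haar-cascade (Viola-Jones) detector. The cascade file and the detection parameters are illustrative assumptions, not the authors' exact setup.

# Sketch: extracting face templates from surveillance video frames with a
# Viola-Jones (Haar cascade) detector, as bundled with OpenCV.  The cascade
# file, scale factor and minimum size are illustrative choices only.
import cv2

def extract_faces(video_path, min_size=(24, 24)):
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Each detection (x, y, w, h) encloses one face template.
        for (x, y, w, h) in detector.detectMultiScale(
                gray, scaleFactor=1.1, minNeighbors=5, minSize=min_size):
            faces.append(gray[y:y + h, x:x + w])
    cap.release()
    return faces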
Fig. 1. Sample frames from indoor and outdoor videos. The rectangular template around the face indicating the spatial extent of the face, as detected using V-J-F-D[10]: (a) Frame from indoor video, (b) Frame from outdoor video of the same subject.
To our knowledge, this is a unique database providing face images acquired from a long subject-to-sensor distance for biometric applications in a surveillance scenario. Data acquisition was done to simulate a typical surveillance system, and no special equipment was used to magnify or enhance the outdoor images. Thus, this database is a useful resource for the research community.
3
Proposed Framework
The proposed framework has two stages. In the first stage we estimate the degradation parameter, and in the second stage we perform recognition for different cases of training and testing, i.e., for the different combinations of face data types used for training and testing. Figure 2 shows the proposed framework, where the videos obtained from the cameras (indoor for gallery and outdoor for probes) are fed to the V-J-F-D. As described in the previous section, the detected face images are stored in the gallery or the probe database depending on whether they are detected in an indoor or an outdoor scene. The stored gallery and probe images are used to estimate the degradation parameters with the proposed technique described in the next section. After these parameters are estimated, the gallery images are degraded with them to produce (simulated) degraded images at different resolutions, which are used for training the FR system in the second stage. To define the different experimental cases used in the later stage of the proposed framework, we first introduce some symbols to denote the types of training and testing samples. These symbols and the corresponding sample details are presented in Table 1. We also attempt to solve the ill-posed problem of image enhancement and restoration for the probe images. We have used gray-level images in our experimentation; this decision is motivated by the fact that the outdoor images are of low contrast and barely contain any useful color information.
Fig. 2. The proposed framework for estimating and compensating the degradation in face images acquired under surveillance conditions
Table 1. List of acronyms for face data at different resolutions and intermediate stages of processing

Data                  Abbreviation   Resolution   Sample description
Gallery               AG             250x250      Acquired gallery
Downsampled Gallery   LRG            45x45        Low resolution gallery
Downsampled Gallery   LRDG           45x45        Low resolution degraded gallery
Downsampled Gallery   MRG            90x90        Medium resolution gallery
Downsampled Gallery   MRDG           90x90        Medium resolution degraded gallery
Probe                 AP             45x45        Acquired probe
Up-Sampled Probe      INTP           90x90        Interpolated probe
Up-Sampled Probe      SRP            90x90        Super-resolved probe

3.1
Estimating and Simulating the Degradation
This section gives a detailed description of the degradation estimation process. To estimate the degradation, we have defined a measure that is simple, intuitive and based on the gray-level intensity values of the images. This makes the implementation easy, which is a strong characteristic of the proposed methodology. In this work, we consider the degradation due to blurring. A typical formulation of the degraded image p(x, y) in the spatial domain and its relation with the ideal image g(x, y) is given by the following [5]: p(x, y) = h(x, y) ∗ g(x, y) + n(x, y)
(1)
where, h(x, y) is the point-spread function (PSF), ‘*’ denotes the 2D convolution and n(x, y) is the additive noise. Our objective is to obtain an estimate of
h(x, y) and then use it to improve the accuracy of face recognition. In this direction, we have proposed an empirical method to estimate the degradation parameter of the blur PSF. This estimated parameter is later used to degrade the acquired gallery images so that they appear close to the corresponding probe images. In this way, we obtain a set of (simulated) degraded gallery images at different resolutions (details are given in Table 1) that are later used for training in face recognition. In our experiment, we have assumed that the blur is Gaussian and that the parameter to be estimated is the standard deviation (σ) of the Gaussian function. We start by downsampling the gallery images; this step, shown in Fig. 3(a), is required in order to compensate for the difference in resolution between the detected gallery and probe faces, the gallery faces having a higher resolution. This difference is due to the fact that the gallery images are taken in an indoor environment (close range) while the probe images are taken from a distant outdoor surveillance camera. We mention again that we have also tried super-resolution to obtain higher-resolution probe images; however, using super-resolution in an automated way (without human intervention) on free-form face images (V-J-F-D output) is difficult, and successive frames of a video might not be available from the outdoor data owing to the acquisition conditions and camera properties. In addition, due to poor lighting and low resolution, the V-J-F-D failed to detect the face template in a few cases. In Fig. 3(b), the first row shows the blurred downsampled gallery images for different σ values for a chosen gallery image of a particular subject. The probable σ value for blur is expected to lie in the range 0.25–2.5 (determined empirically from visual observation of a large set of test cases). The second row in Fig. 3(b) shows an example of a probe image for the same subject. Probe images are filtered using a Wiener filter [5] in order to minimize the noise and smooth the aliasing effect (due to digitization in the sensor and acquisition hardware). On average, the resolution of the probe images lies in the range 40–50 pixels, while that of the gallery images is 200–300 pixels. Next, the blurred downsampled gallery images (first row of Fig. 3(b)) are normalized with respect to the chosen probe image; the corresponding normalized images are shown in the last row of Fig. 3(b). It is clear from the figure that the normalized gallery images are visually closer to the probe image, for which the histograms are depicted in Figs. 3(c) and 3(d). Figure 3(c) shows the histograms corresponding to the chosen downsampled gallery and probe images. Qualitatively, the histograms differ considerably, due to the difference in global illumination of the pair of images. In Fig. 3(d), the histograms of the normalized blurred gallery images (for some σ values) are shown: the dynamic range of the histogram is altered to make it appear close to the histogram of the probe image. The above process is repeated for different combinations of the available gallery and probe images of a particular subject, i.e., if there are 10 gallery images and 10 probe images for a subject then we use 100 (10×10) combinations, and this process is repeated for all subjects. Note that in each combination we have used gallery and probe images
Fig. 3. Sample images of faces and the intensity histograms of a few, showing the process of estimating the degradation, with different parameter values
for the same subject. This should not be confused with the use of class label information of probe (testing) images, because at this stage we are estimating the degradation parameter for the surveillance system with the help of pre-acquired data (both gallery and probe for a few subject). Once the degradation parameter is obtained we can use it to simulate degradation for any future data. Next, we present the mathematical formulation for the process described above and define a measure used in estimating the degradation parameter σblur (for blur PSF). 3.2
The Measure
The measure namely, SoHD (Sum of Histogram Difference) is defined based on the intensity histograms of Normalized blurred downsampled gallery images and Wiener filtered probe images. SoHD is given as:
SoHD = \frac{1}{m_k \cdot n_k} \sum_{j=1}^{m_k} \sum_{i=1}^{n_k} \mathrm{sum}\{ |HD_i^{\sigma,k} - HP_j^k| \}        (2)

Here,

HD_i^{\sigma,k} = \mathrm{Hist}( NM( g_i^{\sigma,k}(x, y), p_j^k(x, y) ) )        (3)

where g_i^{σ,k}(x, y) is the degraded gallery image obtained by convolving the i-th gallery image of the k-th subject with a Gaussian function of standard deviation σ, Hist() computes the histogram, and NM denotes a normalization operation [7]. Similarly, for the histogram of a probe image we have

HP_j^k = \mathrm{Hist}( p_j^k(x, y) )        (4)
Also, m_k in Eq. (2) denotes the number of probe images used for estimating the degradation for the k-th subject, and n_k is the number of gallery images used for the k-th subject. Notice that the operator sum{ } in Eq. (2) denotes the algebraic sum of the elements of the vector |HD_i^{σ,k} − HP_j^k|. SoHD is obtained for each subject separately. The plot of the measure SoHD, averaged over 51 subjects, is shown in Fig. 4. We observe that SoHD saturates after some value of σ. To find such an optimal value of σ for the k-th subject, we use the following condition:

\sigma_{SoHD_k} = \left\{ \sigma : \left| \frac{d(SoHD_k)}{d\sigma} \right| < Th_{SoHD} \; (\approx 0) \right\}        (5)

where Th_SoHD is some low threshold value. To obtain an average measure over all subjects, we define σ_blur as follows:

\sigma_{blur} = \frac{1}{K} \sum_{k=1}^{K} \sigma_{SoHD_k}        (6)
where K is the total number of subjects. SoHD, when observed with increasing values of σ, saturates after some value of σ, which is determined using Eq. (5). This is the point at which the degraded, blurred (and normalized) gallery image appears qualitatively similar to the probe. With these estimated parameters, we first blur the gallery images. Empirical observation over 51 subjects produced an optimal parameter value of σ_blur = 1.25 for Th_SoHD = 0.2. In this way, we obtain the simulated degraded gallery images, which are comparable with the low quality probe images. Later, these degraded gallery images are used to train the classifier for recognition. When a probe is detected, its face image is extracted and tested for recognition. Training with the acquired gallery images produces a low accuracy because of the large difference in quality between gallery and probe images. In the next section, we show how degradation improves the classification accuracy.
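The estimation procedure of Eqs. (2)–(6) can be sketched as follows. This is only a minimal illustration under stated assumptions: the gallery images are assumed to be already downsampled to the probe resolution, a simple min-max range matching stands in for the NM operation of [7], and the scaling of the saturation threshold is an assumption since the paper does not specify how Th_SoHD relates to the raw histogram counts.

# Sketch of the degradation-parameter estimation (Eqs. 2-6): gallery images are
# blurred with Gaussians of increasing sigma, normalized towards the probe's
# dynamic range, and compared to the probes through the summed absolute
# histogram difference (SoHD); the first sigma at which the SoHD curve
# flattens is kept as sigma_blur.
import numpy as np
from scipy.ndimage import gaussian_filter

def histogram(img, bins=256):
    h, _ = np.histogram(img, bins=bins, range=(0, 255))
    return h.astype(float)

def normalize_to(img, ref):
    # Map img linearly onto the min/max range of the reference (probe) image.
    img = (img - img.min()) / max(np.ptp(img), 1e-6)
    return img * (ref.max() - ref.min()) + ref.min()

def sohd(gallery, probes, sigma):
    total = 0.0
    for g in gallery:
        for p in probes:
            hd = histogram(normalize_to(gaussian_filter(g, sigma), p))
            total += np.abs(hd - histogram(p)).sum()
    return total / (len(gallery) * len(probes))

def estimate_sigma_blur(gallery, probes,
                        sigmas=np.arange(0.25, 2.51, 0.25), threshold=0.2):
    curve = np.array([sohd(gallery, probes, s) for s in sigmas])
    curve = curve / curve.max()                    # normalized SoHD curve
    slope = np.abs(np.diff(curve) / np.diff(sigmas))
    below = np.nonzero(slope < threshold)[0]       # saturation of Eq. (5)
    return sigmas[below[0] + 1] if below.size else sigmas[-1]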
Fig. 4. Plot of the measure SoHD, averaged for 51 subjects
4
Experimental Results
A face recognition system can be claimed to be efficient, robust and reliable only after it has gone through rigorous testing and verification. A real-world database like ours is most suitable for this purpose. Many face databases are available to the research community, but they are still far from real-world conditions for surveillance applications. The proposed database is very challenging because of its large variation between training and testing samples. There are 20 near-frontal samples per subject in both the gallery and the probe set. The different experimental cases used in the second stage of the proposed framework are listed below:
1. Training: LRG; Testing: AP
2. Training: LRDG; Testing: AP
3. Training: MRDG; Testing: INTP
4. Training: MRDG; Testing: SRP
5. Training: MRG; Testing: MRG
We have obtained Cumulative Match Score (CMS) curves for the above experimental cases, which are shown in Fig. 5. For each curve, the training and testing case is denoted by the abbreviations from Table 1. The effectiveness of the estimated degradation parameter is demonstrated with the help of two baseline methods: PCA and FLDA [2]. The green curves (LRG-AP combination) show the worst performance; this situation does not involve any processing of the face samples before they are fed to the classifier. The red curve (MRG-MRG) is an ideal situation, where both the training and testing samples are based on the indoor acquired gallery. For the rest of the curves,
Fig. 5. CMS curves for comparing the performance of the system, when trained and tested as per different experimental cases, using; (a) PCA [9] and (b) FLDA [2] for face recognition
either the training gallery has been degraded with the estimated blur parameter (Eq. 6), or the probe has been enhanced, or both. Experimental results show that training with degraded gallery images provides a much improved performance compared to training with the acquired (downsampled to medium resolution) gallery. This improvement is significant given the complexity of the face samples used as probes in our database. We have obtained these performances by taking a 10-fold study of the classifier output. In each fold, 10 training samples per subject are selected randomly from the set of 20 gallery images; similarly, for testing, 10 samples per subject are selected randomly from the set of 20 probe images. The total number of subjects for which the curves are obtained is 51. As is clear from the results, PCA performs better than FLDA in this scenario, since PCA features are expected to be more robust to noise and degradation.
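For completeness, the CMS curves reported here can be computed from a probe-gallery distance matrix as in the hedged sketch below. It assumes that every probe identity is present in the gallery; variable names are illustrative, not the authors' code.

# Sketch: computing a Cumulative Match Score (CMS) curve from a distance
# matrix dist[i, j] between probe i and gallery sample j.
import numpy as np

def cms_curve(dist, probe_labels, gallery_labels, max_rank=20):
    order = np.argsort(dist, axis=1)                # gallery sorted per probe
    ranked = np.asarray(gallery_labels)[order]      # labels in rank order
    hits = ranked == np.asarray(probe_labels)[:, None]
    first_hit = hits.argmax(axis=1)                 # rank of the correct identity
    return np.array([(first_hit < r).mean() for r in range(1, max_rank + 1)])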
5
Conclusions
The work proposed in this paper concerns a face recognition application under surveillance conditions. It is focused on estimating degradation due to out-of-focus blur, low contrast and low resolution. We define a measure, SoHD, which is intuitive, simple and easy to implement. From this measure, we obtain the parameter σ_blur for out-of-focus blur. Next, we simulate the degradation on the gallery images. Finally, we train the classifier with the degraded gallery instead of the original gallery to obtain significantly improved recognition accuracy. As part of future work, we intend to test our method on a larger database of subjects. A combination of partial restoration or enhancement of probe samples, using filters or more robust super-resolution techniques, along with partial simulation of degradation on the gallery, is to be explored for better results.
State-of-the-art methods, such as K-PCA, K-LDA [12][8], dual-space [11] and SVM [6][8] based face recognizers, may be used with our proposed method to further improve the classification accuracy.
References 1. Bartlett, M.S., Movellan, J.R., Sejnowski, T.J.: Face recognition by independent component analysis. IEEE Transactions on Neural Networks 13(6), 1450–1464 (2002) 2. Belhumeur, P.N.: Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence 19(7), 711–720 (1997) 3. Cevikalp, H., Neamtu, M., Wilkes, M., Barkana, A.: Discriminative common vectors for face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 27(1), 4–13 (2005) 4. Fukunaga, K.: Introduction to Statistical Pattern Recognition. Academic Press, San Diego (1990) 5. Gonzalez, R.C., Woods, R.E.: Digital Image Processing. Prentice Hall, Upper Saddle River (2008) 6. Heisele, B., Ho, P., Poggio, T.: Face recognition with support vector machines: global versus component-based approach. In: Proc. of the Eighth IEEE International Conference on Computer Vision, pp. 688–694 (July 2001) 7. Hong, L., Wan, Y., Jain, A.: Fingerprint image enhancement: Algorithm and performance evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence 20(8), 777–789 (1998) 8. Scholkopf, B., Smola, A.J.: Learning with Kernels. The MIT Press, Massachusetts (2002) 9. Turk, M., Pentland, A.: Eigenfaces for recognition. Journal of Cognitive Neurosicence 3(1), 71–86 (1991) 10. Viola, P., Jones, M.J.: Robust real-time face detection. International Journal of Computer Vision 57(2), 137–154 (2004) 11. Wang, X., Tang, X.: Dual-space linear discriminant analysis for face recognition. In: Proc. Computer Vision and Pattern Recognition, pp. 564–569. IEEE Computer Society, Los Alamitos (2004) 12. Yang, M.-H.: Kernel eigenfaces vs. kernel fisherfaces: Face recognition using kernel methods. In: Proc. of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 215–220. IEEE Computer Society, Washington, DC, USA (2002) 13. Yu, H., Yang, J.: A direct LDA algorithm for high-dimensional data with application to face recognition. Pattern Recognition 34, 2067–2070 (2001)
Real-Time 3D Face Recognition with the Integration of Depth and Intensity Images

Pengfei Xiong 1,2, Lei Huang 2, and Changping Liu 2

1 Graduate University of the Chinese Academy of Sciences
2 Institute of Automation, Chinese Academy of Sciences, Beijing, P.R. China
[email protected], {lei.huang,changping.liu}@mails.ia.ac.cn
Abstract. A novel image-level fusion algorithm is proposed for 3D face recognition, which synthesizes an integrate image from both the 2D intensity and the 3D depth image. Because the same descriptors are used in the 2D and 3D domains, the image combination not only preserves intrinsic facial details, but also provides more distinctive features. Moreover, since recognition operates on images, the low efficiency of 3D surface matching is eliminated and a fast 3D face recognition system is obtained. After the proposed surface preprocessing, an enhanced ULBP descriptor is applied to reduce the feature dimension, and LDA is adopted to extract the optimal discriminative components from the integrate image. Experiments performed on FRGC v2.0 show that this algorithm outperforms existing state-of-the-art multimodal recognition algorithms and realizes a real-time face recognition system. Keywords: 3D face recognition, multimodal, integrate image, ULBP.
1
Introduction
Over the past several years, face recognition research has largely shifted from 2D to 3D approaches, and the fusion of the two modalities has attracted more and more attention. As reported, although it is still unclear which modality is better, the combination of 2D and 3D can provide greater performance [1]. The fusion of modalities can be carried out at different levels. One of the most popular strategies is decision-level fusion, in which a weighted sum of similarity metrics is used as the final decision after the modalities are identified separately. Whether through independent classification [2] or aligned matching [4], the imperfections of each modality can be effectively counterbalanced by the combination. However, the reliability is easily reduced by the different ranges of similarities and the various application scopes of 2D and 3D recognition. The other strategy, called feature-level fusion, combines different feature descriptors computed on 2D images and 3D surfaces as the input of the facial classifier [6][7]. For instance, Mian [7] combined SIFT (Scale Invariant Feature Transform) and structural characteristics from distinctive 2D and 3D points as the facial descriptors. In the feature extraction, several structure descriptors and other image features are applied separately. As a result of the different
Fig. 1. The flow of image-level fusion recognition
representations in the 2D image and the 3D surface, the feature correlations and discriminabilities are easily weakened in the classification. Other methods [8], such as CCA (Canonical Correlation Analysis), have been developed to learn the mapping between the different representations; however, the original features are destroyed. To remedy these defects of multimodal combination, a fusion method based on images is proposed. Since a 3D object can also be represented as a range image, an integrate image synthesized from the depth and intensity images not only preserves texture details and facial structure, but also provides more discriminative facial features. In addition, the low efficiency and instability of 3D surface matching are effectively avoided because recognition is performed on 2D images. Papatheodorou [12] had created a 4D space from the 3D coordinate values and the pixel intensity for ICP (Iterative Closest Point) [9] alignment in the same fused domain; different from it, our method is more efficient and stable. In particular, a real-time 3D face recognition system is built on this image-level fusion. In the image cropping stage, a median segmentation is first developed for facial region extraction, and a linear fitting algorithm then locates the facial profile; both reduce the computational complexity of key point location. After surface alignment with the key points, the corresponding depth and gray images are cropped and compose an integrate image. An enhanced ULBP (Uniform Local Binary Patterns) descriptor [10] combined with LDA (Linear Discriminant Analysis) [11] is then used to select the optimal discriminant information. The flow of this system is illustrated in Fig. 1. Based on it, a high recognition rate is achieved on the FRGC v2.0 database [13], with a verification rate of 0.9899 at 0.001 FAR and a rank-1 identification rate of 0.9793 in the Neutral vs. All test, which is better than almost all existing state-of-the-art multimodal face recognition algorithms. In the rest of this paper, the specific techniques of this system are discussed, followed by the experimental results.
2
Preprocessing
In traditional surface preprocessing methods [15][16][3], surfaces are aligned based on the nose tip and projected into images. Generally, the raw surface not only contains useless points, such as hair, shoulder and neck, but also exhibits
irregular spikes and holes. When searching globally, the precision and efficiency of nose tip location are easily affected by unstable noise and the enormous number of points. Here, an effective location algorithm is developed. As depicted in Fig. 2, the reliable facial region is first extracted to eliminate the redundancy of the raw points. Then, the facial profile is located by applying a linear fitting method. Owing to its geometrical property of maximum depth on the profile, the nose tip is then easily selected.
Fig. 2. The preprocessing of a valid 3D face. (a) the raw point cloud; (b) facial region extracted; (c) nose tip location; (d) cropped surface with an elliptical mask.
2.1
Facial Region Extraction
The facial region can be extracted based on the different depths of the different facial parts. During face capture, the available facial region always faces the camera and therefore has larger depth values than the other areas, whether the facial attitude is frontal or turned sideways. A median segmentation is applied in the depth domain for facial region extraction. Chang [5] detected the facial area using a 2D skin model, but face color is sensitive to the background. Describing the point cloud as depth values I = (I_1, I_2, ..., I_n), where n is the number of points, the segmentation works iteratively (see the sketch after this list):
1. An initial threshold t_0 is set to the average depth, t_0 = \bar{I} = \frac{1}{n} \sum_{i=1}^{n} I_i.
2. The points I are binarized into two parts by the current threshold; the averages of the two parts are calculated as \bar{I}_1 and \bar{I}_2, and the threshold is refreshed as t_k = \frac{\bar{I}_1 + \bar{I}_2}{2}, with k denoting the k-th iteration.
3. With the new threshold, step 2 is repeated until t_k = t_{k+1}.
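A minimal sketch of this iterative thresholding (an ISODATA-style scheme) is given below; the tolerance and iteration cap are illustrative assumptions rather than values from the paper.

# Sketch of the iterative depth thresholding described above: the threshold
# starts at the mean depth and is repeatedly reset to the midpoint of the two
# class means until it stabilizes.
import numpy as np

def depth_threshold(depths, tol=1e-3, max_iter=100):
    d = np.asarray(depths, dtype=float)
    t = d.mean()                                   # t0: average depth
    for _ in range(max_iter):
        high, low = d[d >= t], d[d < t]
        if len(high) == 0 or len(low) == 0:
            break
        t_new = 0.5 * (high.mean() + low.mean())   # tk = (I1_bar + I2_bar) / 2
        if abs(t_new - t) < tol:
            break
        t = t_new
    return t

# Points whose depth exceeds the final threshold form the facial-region candidates.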
After segmentation, several independent components with higher depth are chosen as candidates. Then morphological operations like erosion, dilation and connected component analysis are applied to select the maximal one as the facial region. As shown in Fig. 2(b) and Fig. 5, valid facial regions are correctly labeled. Also in the facial region, spikes with a distance beyond a given threshold from any one of its eight neighbors is eliminated, and holes inside the facial contour are closed by interpolation.
2.2
Key Point Location
According to its geometrical definition, the nose tip is the central point with maximum depth on the face. Several methods, e.g. [17][18][16], extract neighborhood characteristics of each point and search for the distinctive one. Considering the slow speed of such feature extraction, a linear fitting method for the facial profile is developed here to select the nose tip. For each depth image, the facial profile contains the points with maximum depth in each row. As shown in Fig. 2(c), these scattered points C = (C_1, C_2, ..., C_h) are first sought in each row, where C represents the corresponding column values. Then a line L is fitted, with k and b defined as the gradient and offset: L : C = kR + b
(1)
A least-squares fit is applied here. Since noise is present, L is not exactly the facial profile but a reference for it. To accurately locate the nose tip, the search is carried out along L vertically and horizontally. For each point p on L, its neighbour points p_n are collected along the vertical direction, and its depth gradient is calculated by

G = \sum |p - p_n|        (2)

The point p_{gm} with the maximum gradient G_m is chosen as the nose tip candidate. Then a horizontal search is applied on the same row as p_{gm}, where the nose tip is located as the point with the highest depth. Based on the nose tip, the facial profile L is translated to cross it. The other use of L is that k depicts the facial attitude: after nose tip location, the surface is aligned to keep L perpendicular and centered at the nose tip. The depth and intensity images are then cropped and, after smoothing, the final images are obtained. As can be seen in Fig. 2 and Fig. 5, nose tips are accurately located and both the 2D texture images and the 3D range images are generated.
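A hedged sketch of the profile fitting and nose tip search follows. It assumes the depth image is already restricted to the extracted facial region, and the vertical window size is an illustrative choice.

# Sketch of the profile-based nose tip search: per-row maximum-depth columns
# are fitted with a least-squares line C = k*R + b (Eq. 1), the line point with
# the largest vertical depth gradient (Eq. 2) is the candidate, and the nose
# tip is the highest-depth point on the candidate's row.
import numpy as np

def nose_tip(depth, win=5):
    rows = np.arange(depth.shape[0])
    cols = depth.argmax(axis=1)                    # scattered profile points
    k, b = np.polyfit(rows, cols, 1)               # L : C = k*R + b
    best_r, best_g = 0, -np.inf
    for r in rows[win:-win]:
        c = int(round(k * r + b))
        if not (0 <= c < depth.shape[1]):
            continue
        neigh = depth[r - win:r + win + 1, c]      # vertical neighbourhood
        g = np.abs(depth[r, c] - neigh).sum()      # depth gradient G
        if g > best_g:
            best_r, best_g = r, g
    return best_r, int(depth[best_r].argmax())     # horizontal search on the row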
3
Recognition
With the intensity and depth images cropped, an image-level fusion method is investigated. On the integrate image, an enhanced Uniform LBP feature and LDA are developed for the face discrimination. In this section, recognition method is discussed firstly, and the image combination is explained following. 3.1
Feature Representation
The LBP feature [10] is one of the most popular descriptors in 2D face recognition. Compared with the Gabor wavelet descriptor [14] widely used on depth images [6], LBP has a smaller computational complexity. For each pixel in the image, the LBP is defined as a sequential binary code based on comparisons between the gray value and its neighborhood. The scale of the neighbor range R and the number of neighbor points P are the two parameters
for the formation of LBP. After the LBP coding, the image feature is described as the statistic histogram of LBP code values. As can be seen in Fig. 3, the change of P leads to 2P pattern variations. When P = 8, the image feature length is 256. To extract the principal LBP components, ULBP(uniform LBP) is developed, in which, patterns with at most two bitwise transitions from 0 to 1 or vice versa are kept. Based on ULBP transformation, the feature dimension is reduced from 256 to 59 with P = 8. However, it can be further reduced. While ULBP is counted on the times of 0 − 1 transitions, it can be described as two parameters: the beginning position L of the binary 1, and the length C from L to the first binary 0. With the variations of L and C, the ULBP histogram statistics is equal to the histogram of a L ∗ C image. To be simple, this 2D histogram is expended as the combination of two histograms of L and C. When P = 8, the variations of L and C are 8 and 7 respectively, then a 16 dimension codebook can be obtained, which includes the other non-uniform LBPs, as shown in Fig. 3.
Fig. 3. The definition of LBP and an enhanced ULBP codebook with P = 8
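The following sketch illustrates one possible reading of the enhanced ULBP codebook for P = 8: each uniform pattern is described by the start position L of its run of ones and the run length C, and contributes to an 8-bin L histogram and a 7-bin C histogram, with one extra bin for the remaining patterns (16 bins in total). The exact binning, including where the all-zero and all-one patterns go, is an assumption, not the authors' code.

# Sketch of per-pixel enhanced ULBP coding with P = 8 neighbours at radius 1.
import numpy as np

def lbp_bits(img, x, y, radius=1):
    # Circular 8-neighbourhood, assuming (x, y) is at least `radius` from the border.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    c = img[y, x]
    return [1 if img[y + dy * radius, x + dx * radius] >= c else 0
            for dy, dx in offsets]

def enhanced_ulbp_bins(bits):
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    ones = sum(bits)
    if transitions > 2 or ones in (0, 8):
        return [15]                                   # non-uniform / flat patterns
    start = next(i for i in range(8)
                 if bits[i] == 1 and bits[i - 1] == 0)  # L: start of the run of ones
    return [start, 8 + ones - 1]                      # L bin (0-7) and C bin (8-14)

# Usage: accumulate a 16-bin histogram per block.
# hist = np.zeros(16)
# for each (x, y) in the block: hist[enhanced_ulbp_bins(lbp_bits(img, x, y))] += 1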
With the LBP descriptor mask, the original image I is transformed into an LBP image I_c,

I_c = F_{lbp}(I) = M_{lbp} * I        (3)

where M_lbp denotes the LBP mask. The LBP image is then divided into N non-overlapping regions, and the histograms (H^r | r ∈ (1, N)) are calculated from each region based on the enhanced ULBP codebook,

H^r_i = \sum_{x,y \in block_r} B(I_c(x, y) = code_i), \quad i = 1, \ldots, L        (4)

where B is an indicator function and the L labels are defined in the codebook. After that, all of the ULBP histograms are concatenated to form the image feature, as shown in Fig. 4:

H = (H^1, H^2, \ldots, H^N)        (5)
W T SB W W T SW W
(6)
Fig. 4. The process of ULBP extraction (the above) and image integration (the below)
where S_B and S_W are the between-class and within-class scatter matrices, respectively. By optimizing both the between-class and within-class discriminations, distinctive feature components are obtained. The projection onto the LDA subspace also shortens the feature dimension. With N classes in the training set, the most effective number of discriminant vectors is N − 1.
3.2
Image Integration
The combination of the intensity and depth images is straightforward. By normalizing both images into the 0–1 range and connecting them end to end, the integrate image is synthesized. For each pixel p_t = (r, g, b) in the intensity image and its corresponding point p_d = d in the depth image, the integrate pixel contains the four-dimensional value p_c = (r, g, b, d)
(7)
It is similar to [12], which described a 4D point as p4D = (x, y, z, I), but our method strengthens the advantages of image representation instead. While they are both described in image domain, no additional algebraic operation is applied in the image synthesizing. After the combination, ULBP features are extracted from the integrate image, and LDA is applied for the discriminants selection. Different from feature-level fusion, the combined pixel gives prominence to the discrimination ability in LDA projection, while the same representation of 2D and 3D are dependent on each other. Also the final matching result is based on the integrate image features, which is superior to the fusion of classifier results. Without the fusion, the classification can be carried out once.
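A minimal sketch of the image integration is shown below. Concatenating the normalized intensity and depth images along the row axis is an assumption; the paper only states that the two normalized images are connected end to end.

# Sketch: building the integrate image from a gray/intensity image and a
# depth image, both normalized to [0, 1].
import numpy as np

def to_unit(img):
    img = img.astype(float)
    return (img - img.min()) / max(np.ptp(img), 1e-6)

def integrate_image(intensity, depth):
    # One stacked image feeds a single ULBP + LDA pipeline, so classification
    # is carried out once, without a separate fusion step.
    return np.concatenate([to_unit(intensity), to_unit(depth)], axis=0)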
4
Experiments and Comparisons
The standard FRGC v2.0 database[13] is employed for the validation, which contains 4007 scans of 466 individuals under various expressions, illuminations and attitudes. For each human being, one surface with depth and texture is captured.
Fig. 5. Surface preprocessing. For each column, two groups of original image of an individual and their facial regions with nose tip locations are present. The blue points, yellow line, and red point depict the scatter points, facial profile and nose tip respectively (In color printing).
4.1
Experiment 1: Performance on Preprocessing
First, the results of surface preprocessing are verified. Although no profile face exists in the database, several faces are presented with various facial attitudes and different depth ranges. By applying our method, all 4007 facial regions are extracted, and 3973 nose tips are accurately located. As shown in Fig. 5, facial region extraction is achieved under varying conditions. Whether the face is in profile (columns 9 and 10) or looking upward (column 4), the available facial region is only the front of the surface, since the face always faces the camera during capture. This provides strong evidence that our algorithm is insensitive to facial pose variation. The same conclusion can be drawn when the distance between the surface and the camera changes: both column 1 and column 4 in Fig. 5 achieve valid results although they are captured with different depth ranges. In addition, the results of nose tip location are demonstrated, accompanied by the facial profile. In Fig. 5, the profile lines (yellow lines on the surfaces) accurately denote the facial posture. Despite the presence of noise in the scattered points (blue points) with local maximum depth, the linear fitting algorithm developed in our system effectively eliminates the possible negative effects. A 99.1% nose tip location rate is obtained, which provides a reliable basis for image cropping. However, attention should be paid to the fact that the facial profile cannot rectify facial rotation in the X-Z plane. This drawback is diminished in image-based face recognition, but it still needs further research.
4.2
Experiment 2: Performance on FRGC v2.0 Dataset
Then, the performance of our image-level fusion for 3D face recognition is evaluated. After surface preprocessing, a 64 ∗ 64 rectangle mask is applied for image
cropping. For each surface, an intensity image and a depth image of 64 × 64 pixels are obtained. All images are divided into 64 non-overlapping sub-images of 8 × 8 pixels each, and they are described as a 1024-dimensional (8 × 8 × 16) vector by ULBP extraction. After LDA projection, the feature dimension is reduced to 465. In the experiments, Euclidean distance and the nearest-neighbor classifier are applied, and the training set is randomly chosen from the whole database. The first appearance of each individual under a neutral expression composes a gallery set of 466 samples, and the remaining 3541 faces are treated as probes, in which faces with neutral and non-neutral expressions are separated. First, the identification rate of the integrate image is compared with those based on the depth and intensity images alone. As shown in Fig. 6(a), rank-1 identification rates of 0.9793, 0.9769 and 0.9825 are achieved for all probes, probes with neutral expressions and probes with non-neutral expressions, respectively. The traditional conclusion is drawn that the image combination is superior to each single modality: when all images are set as probes, the recognition rate drops 15.5%, from 0.9793 on integrate images to 0.8270 on depth images. We can also see that all matching rates for neutral faces are lower than those for non-neutral ones. Since the bottom of the mouth is excluded by the rectangular mask in image cropping, the impact of expression is greatly reduced. This result illustrates the advantages of image-based recognition and the reliability of our system, whereas expression sensitivity is the biggest obstacle for 3D surface recognition algorithms. The other outcome is that performance on depth images is poorer than on intensity images, because depth images are smoother than texture and provide fewer distinctive features. In the fusion results, the intensity image plays a dominant role.
Fig. 6. Different rank identification results(a) and verification results(b) on the FRGC dataset. all, neu and non denote three probe subsets, and depth, intensity, integrate denote the image modalities, which are present in the three curve clusters from top to bottom(To be clear on a black and white printing). In each part, the rank 1 identification rate and the verification at 0.001 FAR are present corresponding.
Fig. 6(b) shows the ROC curves of our algorithm. At 0.001 FAR (False Acceptance Rate), the verification rates reach 0.9899, 0.9894 and 0.9903 in the case of all probes, probes with neutral and non-neutral expression respectively. The
same conclusion as for identification is obtained: the combined image increases the recognition results by an extraordinary margin, and the results for non-neutral probes are better than for neutral ones. The verification results for All vs. All are calculated at the same time, with a rate of 0.9705 at 0.001 FAR. Even with this large dataset, the verification rate remains high, which provides a further indication of the stability of our approach. Besides the presented performances, comparisons between our system and other state-of-the-art methods are provided. As shown in Table 1, our verification performance is slightly worse than [3][7], which is due to incorrect key point locations. However, while the existing methods pay more attention to the 3D surface (for example, [7] detects expression-insensitive points on the 3D surfaces for recognition), our method avoids the complicated surface matching and magnifies the advantages of image features.

Table 1. Comparisons of verification rate with the state-of-the-art methods
             [19]    [6]     [7]     [3]     proposed
Neu vs. All  0.958   NA      0.986   0.993   0.9899
Neu vs. Neu  0.992   0.975   0.999   0.997   0.9894
Neu vs. Non  0.982   NA      0.966   0.983   0.9903
All vs. All  0.935   NA      NA      NA      0.9705
Finally, to confirm the advantage of our image-level fusion, some traditional multimodal fusion algorithms are evaluated on the same images. Table 2 reports all the recognition results, in which the algorithm "feature" applies feature-level fusion, i.e., the LDA features of the two images are extracted and combined, while "sum", "bin" and "weight" denote three score-level fusion methods: adding the two matching scores, choosing the more similar one by their difference, or judging by the first and second similarities, respectively. Whether the fusion is based on features or on scores, our algorithm shows better performance in both face identification and verification, which confirms the reliability of the proposed image-level fusion method.

Table 2. Comparisons of recognition with other multimodal fusion algorithms. All, Neu and Non denote the three probe subsets, each cell gives the identification/verification (Iden/Veri) rates, and four other algorithms are listed.
Iden/Veri    feature         sum             bin             weight          proposed
Neu vs. All  0.9341/0.9672   0.9375/0.9652   0.8688/0.9542   0.8917/0.8872   0.9793/0.9899
Neu vs. Neu  0.9360/0.9663   0.9392/0.9655   0.8772/0.9559   0.9047/0.8977   0.9769/0.9894
Neu vs. Non  0.9371/0.9720   0.9410/0.9684   0.8634/0.9569   0.8814/0.8807   0.9825/0.9903
All vs. All  1.0/0.8715      1.0/0.8691      1.0/0.6821      1.0/0.5573      1.0/0.9705
4.3
Experiment 3: Efficiency of Our Algorithm
Finally, the computational performance of the recognition is measured, since one of the targets of our algorithm is a real-time recognition system. For fast recognition, the number of 3D surface points and the size of the gallery are the two main obstacles. In our algorithm, the original surface model is registered without down-sampling. Each surface contains nearly 100,000 points, which is reduced to about 50,000 after facial region extraction. Running on a PC with a Core2 2.0 GHz CPU and 2 GB RAM, the average times per surface for facial region extraction, profile fitting, nose tip location and feature extraction are 313 ms, 16 ms, 16 ms and 31 ms, respectively, and the average matching time against the 466-sample gallery is 47 ms. As a comparison, an ICP alignment [9] of the same surface needs about 2500 ms, and a curvature-based nose tip location [17] takes nearly 4000 ms. For one valid surface, our algorithm consumes only about 476 ms for identification with 1000 models in the gallery, which realizes real-time recognition.
5
Conclusions
In this paper, a real-time 3D face recognition system based on the integration of depth and intensity images is proposed. As the results report, the image-level fusion outperforms the other modality-fusion strategies in recognition rate, since the intrinsic features are kept in the integrate images and provide discriminative components. The traditional conclusions are also confirmed: image combination is superior to each single modality, and performance on depth images is poorer than on intensity images due to their smoother texture values. Finally, thanks to image-based recognition, the efficiency and reliability are greatly enhanced.
References 1. Bowyer, P.K., Chang, K.: A survey of approaches and challenges in 3d and multimodal 3d + 2d face recognition. Computer Vision and Image Understanding 101, 1–15 (2006) 2. Wang, Y., Chua, C.S.: Robust face recognition from 2d and 3d images using structural hausdorff distance. Image and Vision Computing 24, 176–185 (2006) 3. Mian, A.: Mohammed, Bennamoun, R.Owens: An efficient multimodal 2d-3d hybrid approach to automatic face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 29, 1927–1943 (2007) 4. Husken, M., Brauckmann, M., Gehlen, S.: V.D.Malsburg: Strategies and benefits of fusion of 2d and 3d face recognition. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 3, pp. 174–182. IEEE, San Diego (2005) 5. Chang, K.I., Bowyer, K.W., Flynn, P.J.: An evaluation of multimodal 2d+3d face biometrics. IEEE Transactions on Pattern Analysis and Machine Intelligence 27, 619–624 (2005)
6. Xu, C.: Stan Li, T.Tan: Automatic 3d face recognition from depth and intensity gabor features. Pattern Recognition 42, 1895–1905 (2009) 7. Mian, A.S., Bennamoun, M., Owens, R.: Keypoint detection and local feature matching for textured 3d face recognition. International Journal of Computer Vision 79, 1–12 (2008) 8. Huang, D., Ardabilian, M., Wang, Y., Chen, L.: Automatic asymmetric 3d-2d face recognition. In: The Twentieth International Conference on Pattern Recognition, IAPR, Istanbul (2010) 9. Besl, P., Mckay, N.: A method for registration of 3-d shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence 14, 239–256 (1992) 10. Ahonen, T., Hadid, A., Pietik¨ ainen, M.: Face recognition with local binary patterns. In: Pajdla, T., Matas, J(G.) (eds.) ECCV 2004. LNCS, vol. 3021, pp. 469–481. Springer, Heidelberg (2004) 11. Belhumur, P.N., Hespanha, J.P., Kriegman, D.J.: Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence 19, 711–720 (1997) 12. Papatheodorou, T., Reuckert, D.: Evaluation of automatic 4D face recognition using surface and texture registration. In: The Sixth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 321–326. IEEE, Seoul (2004) 13. Phillips, P.J., Flynn, P.J., Scruggs, T., Bowyer, K.W., Chang, J., Hoffman, K., Marques, J., Min, J., Worek, W., Phillips, P.J.: Overview of the face recognition grand challenge. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 3, pp. 947–954. IEEE, San Diego (2005) 14. Wiskott, L., Fellous, J., Kruger, N., Malsburg, C.V.: Face recognition by elastic bunch graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence 19, 775–779 (1997) 15. Wang, Y., Tang, X., Liu, J., Pan, G., Xiao, R.: 3D Face Recognition by Local Shape Difference Boosting. In: Forsyth, D., Torr, P., Zisserman, A. (eds.) ECCV 2008, Part I. LNCS, vol. 5302, pp. 603–616. Springer, Heidelberg (2008) 16. Xu, C., Wang, Y., Tan, T., Quan, L.: A robust method for detecting nose on 3d point cloud. Pattern Recognition Letters 27, 1487–1497 (2006) 17. Chang, K.I., Bowyer, W., Flynn, P.J.: Multiple Nose Region Matching for 3D Face Recognition under Varying Facial Expression. IEEE Transactions on Pattern Analysis and Machine Intelligence 28, 1695–1700 (2006) 18. Bronstein, E.M., Bronstein, M.M., Kimmel, R.: Three-dimensional face recognition. International Journal of Computer Vision 64, 5–30 (2005) 19. Maurer, T., Guigonis, D., Maslov, I., Pesenti, B., Tsaregorodtsev, A., West, D., Medioni, G.: Performance of geometrix ActiveIDTM 3D face recognition engine on the FRGC data. In: IEEE Workshop on Face Recognition Grand Challenge Experiments, pp. 154–160. IEEE, Los Alamitos (2005)
Individual Feature–Appearance for Facial Action Recognition

Mohamed Dahmane and Jean Meunier

DIRO, University of Montreal, CP 6128, Succursale Centre-Ville, 2920 Chemin de la tour, Montreal, Canada, H3C 3J7
{dahmanem,meunier}@iro.umontreal.ca
Abstract. Automatic facial expression analysis is the most commonly studied aspect of behavior understanding and human-computer interface. Most facial expression recognition systems are implemented with general expression models. However, the same facial expression may vary differently across humans, this can be true even for the same person when the expression is displayed in different contexts. These factors present a significant challenge for recognition. To cope with this problem, we present in this paper a personalized facial action recognition framework that we wish to use in a clinical setting with familiar faces; in this case a high accuracy level is required. The graph fitting method that we are using offers a constrained tracking approach on both shape (using procrustes transformation) and appearance (using weighted Gabor wavelet similarity measure). The tracking process is based on a modified Gabor-phase based disparity estimation technique. Experimental results show that the facial feature points can be tracked with sufficient precision leading to a high facial expression recognition performance.
1
Introduction
The computer vision community is interested in the development of techniques, such as automated facial expression analysis (AFEA), for decoding the main element of facial human communication, in particular for HCI applications or, with additional complexity, for meeting video analysis and, more recently, clinical research. AFEA is highly sensitive to face tracking performance, a task rendered difficult principally by environment changes, appearance variability under different head orientations, and the non-rigidity of the face. To meet these challenges, various techniques have been developed and applied, which can be divided into two main categories: model-based and image-based approaches [13,6]. However, these methods still provide inaccurate results due to the variation of facial expression across different people, and even for the same individual, since facial expression is context-dependent. Thus, the overall performance of AFEA systems can be severely affected by this variability across and within individuals. To establish an accurate subject-independent facial expression recognition system, the training data must include a significant number of subjects covering all
possible individual expression appearances across different people. This is why, in most general systems, accurate recognition is made so difficult. It is advantageous to identify a subject face before attempting facial expression recognition, since facial physiognomy that characterizes each person leads to a proper facial action display [5]. In [10], for each individual in the datasets a subject-dependent AAM was created for Action Units (AUs) recognition by using similarity normalized shape and similarity normalized appearance. By integrating user identification, the subject–dependent method, proposed in [1], performs better than conventional expression recognition systems, with high recognition rate reaching 98.9%. In [5], the best facial expression recognition results were obtained by fusing facial expression and face identity recognition. In [7], the authors conclude that the recognition rates for familiar faces reached 85%. For unfamiliar faces, the rate does not exceed 65%. This low performance may be justified by the fact that the training collection was less representative. For both subject–independent and subject–dependant approaches, expression recognition rate is highly dependent on facial tracking performance. Among the existing tracking techniques, the feature–based techniques demonstrate high concurrent validity with manual FACS (Facial Action Coding System) coding [2,3]. Furthermore, they have some common advantages such as explicit face structure, practical implementation, and collaborative feature–wide error elimination [8]. However, the tracking performance depends on the precise configuration of the local facial features, which poses a significant challenge to automated facial expression analysis, since subtle changes in the facial expression could be missed due to errors in facial point localization [12]. Though an effective scheme for tracking the facial attributes can compensate for this shortcoming, geometric approaches including only shape information may be rather irrelevant [10]. Figure 1 shows an example of two different facial expressions (fear vs. happy) where the respective appearances are significantly different, while the two expressions have a high degree of shape similarity. Therefore, including appearance matching should improve the recognition rate, which can be done by including the local appearance around each facial feature.
Fig. 1. Facial point position may not be sufficient to achieve reliable FE recognition (e.g. fear vs happy)
In this paper, a subject–dependent facial action recognition system is described using a facial features–based tracking approach and a personalized gallery of facial action graphs. At the first frame of the video, four basic points are automatically found, and then tracked over time. At each frame, from these reference
points, the most similar graph through a gallery is chosen. The search is based on the Procrustes transformation and the set of Gabor jets that are stored at each node of the graph. The rest of the paper is organized as follows. First, we present the approach to identify a familiar face, and we outline the facial features we are using. In section (3), we illustrate the facial localization and tracking approach, then we present the facial action recognition process. Experimental results are presented in section (5), while section (6) concludes this work.
2
Face Localization
An individualized facial expression recognition system needs a face identification stage; for this purpose, a facial graph gallery is constructed as a set of pre-stored facial graphs (Fig. 2). Each graph represents a node configuration that characterizes the appearance of a facial expression to recognize. A set of jets J, describing the appearance around each point of interest, is generated and attached to the corresponding node. In our implementation, a set of 28 nodes defines the facial graph of a given expression (Fig. 2).
Fig. 2. The facial graph corresponding to the fear expression
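A minimal sketch of the data each gallery entry would hold is given below; the field names and array shapes are illustrative assumptions consistent with the description above (28 nodes, Gabor jets per node, one graph per subject and expression peak).

# Sketch of a personalized facial-action graph stored in the gallery.
from dataclasses import dataclass
import numpy as np

@dataclass
class FacialGraph:
    subject: str
    expression: str            # e.g. "neutral", "fear", "happiness"
    nodes: np.ndarray          # (28, 2) image coordinates of the facial points
    jets: np.ndarray           # (28, n_coeffs) complex Gabor responses per node

gallery: list[FacialGraph] = []   # one graph per subject and expression peak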
2.1
Rough Face Localization
The face localization stage is done only once, in the first frame as an initialization step. For this purpose, we consider a set of subgraphs that includes four facial reference points (left and right eye inner corners and the two nose wings) Fig. 3.
Fig. 3. The four tracked (circle) and the twenty four adjusted (diamond) facial points
When the first face image is captured, a pyramidal image representation is created, where the coarsest level is used to find near-optimal starting points for the subsequent facial feature localization and refinement stage. Each graph from the gallery is displaced over the coarsest image. The graph position that maximizes the weighted magnitude-based similarity function (Eqs. 1 and 2) provides the best fitting node positions:

\mathrm{Sim}(I, G) = \frac{1}{L} \sum_{l=1}^{L} S(J_l, J'_l)        (1)
S(J, J') refers to the similarity between the jets of the corresponding nodes (Eq. 2), and L = 4 stands for the total number of subgraph nodes.
S(J, J') = \frac{\sum_j c_j\, a_j a'_j}{\sqrt{\sum_j a_j^2 \sum_j a'^2_j}} \quad \text{with} \quad c_j = \left( 1 - \frac{a_j - a'_j}{a_j + a'_j} \right)^2        (2)
In (eq. 2), cj is used as a weighting factor and aj is the amplitude of the response of each Gabor filter [4]. 2.2
Position Refinement
The rough localizations of the facial nodes are refined by estimating the displacement using the iterative phase-based disparity estimation procedure. The optimal displacement d is estimated, in an iterative manner, through the minimization of a squared error given two jets J and J' corresponding to the same node (Eq. 3).
d(J, J') = \begin{pmatrix} \sum_j c_j k_{jx}^2 & -\sum_j c_j k_{jx} k_{jy} \\ -\sum_j c_j k_{jx} k_{jy} & \sum_j c_j k_{jy}^2 \end{pmatrix}^{-1} \begin{pmatrix} \sum_j c_j k_{jx}\, [\Delta\phi_j]_{2\pi} \\ \sum_j c_j k_{jy}\, [\Delta\phi_j]_{2\pi} \end{pmatrix}        (3)
where (k_{jx}, k_{jy}) defines the wave vector and [Δφ_j]_{2π} denotes the principal part of the phase difference. For more details, the reader is referred to [4]. The procedure is also used to track the positions of the nodes over time. During the refinement stage, the two jets are calculated in the same frame, in which case the disparity represents the amount of position correction, whereas during tracking the two jets are computed from two consecutive frames, in which case the disparity represents the displacement vector.
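A hedged sketch of solving the 2x2 system of Eq. (3) is shown below. The choice of amplitude products as the weights c_j and the sign convention of the off-diagonal terms follow the reconstruction above and are assumptions, not a verified transcription of the authors' implementation.

# Sketch of the phase-based displacement estimate: build the weighted 2x2
# system from the wave vectors k_j and the principal part of the phase
# differences, then solve for the displacement d = (dx, dy).
import numpy as np

def principal(phase_diff):
    # Wrap phase differences into the range [-pi, pi).
    return (phase_diff + np.pi) % (2 * np.pi) - np.pi

def disparity(a, a2, phi, phi2, kx, ky):
    c = a * a2                                   # assumed confidence weights c_j
    dphi = principal(phi - phi2)
    A = np.array([[np.sum(c * kx * kx), -np.sum(c * kx * ky)],
                  [-np.sum(c * kx * ky), np.sum(c * ky * ky)]])
    b = np.array([np.sum(c * kx * dphi), np.sum(c * ky * dphi)])
    return np.linalg.solve(A, b)                 # estimated displacement (dx, dy)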
3
Facial Feature Tracking
3.1
Tracking of Facial Reference Points
From frame to frame, the four reference points (Fig. 3), which are known to have relatively stable local appearance and to be less sensitive to facial deformations, are tracked using the phase-based displacement estimation procedure.
The new positions are used to deform each graph from the collection, using the Procrustes transformation. In this way, a first search-fit strategy using shape information provides a localized appearance–based feature representation in order to enhance the recognition performance. 3.2
Procrustes Transform
Procrustes shape analysis is a method from directional statistics [11] used to compare two shape configurations. A two-dimensional shape can be described by a centered configuration u in C^k (u^* 1_k = 0), where u is a vector containing the 2D shape landmark points, each represented by a complex number of the form x + ıy. The Procrustes transform is the similarity transform (4) that minimizes (5), where α1_k, |β| and ∠β, respectively, translate, scale and rotate u_2 to match u_1.
u_1 = \alpha 1_k + \beta u_2, \qquad \alpha, \beta \in \mathbb{C}, \quad \beta = |\beta|\, e^{ı \angle\beta}        (4)

\left\| \frac{u_1}{\|u_1\|} - \alpha 1_k - \beta \frac{u_2}{\|u_2\|} \right\|^2        (5)

3.3
First Fit-Search
The Procrustes transform is used to adequately deform each reference graph G_i stored in the gallery. Given the positions of the four tracked reference points (circle dots in Fig. 3), the transformation that “best” warps the corresponding points of G_i to fit these positions is used to adjust the positions of the twenty-four remaining points of G_i (diamond dots in Fig. 3). The new adjusted positions form the distorted graph G_d. Here, “best” refers to the minimal-cost transformation of the four-node subgraph of G_i onto the corresponding subgraph of G_d.
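The closed-form complex Procrustes fit can be sketched as follows; function and variable names are illustrative, and the sketch assumes the graph nodes are given as (x, y) coordinates.

# Sketch of the first fit-search: estimate the similarity transform (alpha,
# beta) mapping a gallery graph's four reference nodes onto the four tracked
# positions, then apply it to the remaining 24 nodes to obtain G_d.
import numpy as np

def procrustes_fit(src, dst):
    # src, dst: complex vectors (x + 1j*y) of corresponding landmark points.
    src_c, dst_c = src - src.mean(), dst - dst.mean()
    beta = np.vdot(src_c, dst_c) / np.vdot(src_c, src_c)   # scale + rotation
    alpha = dst.mean() - beta * src.mean()                  # translation
    return alpha, beta

def deform_graph(graph_xy, ref_idx, tracked_xy):
    g = graph_xy[:, 0] + 1j * graph_xy[:, 1]
    t = tracked_xy[:, 0] + 1j * tracked_xy[:, 1]
    alpha, beta = procrustes_fit(g[ref_idx], t)
    warped = alpha + beta * g                               # all 28 nodes of G_d
    return np.column_stack([warped.real, warped.imag])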
4
Facial Action Recognition
The facial action recognition process is based on evaluating the weighted similarity (eq. 1) between the reference graph and its distorted version. The reference graph is the best representative given by the distorted graph with the highest similarity. The node positions give the initial coarse positions of the twenty–eight facial feature points. Refinement stage is performed to obtain the final positions by estimating the optimal displacement of each point by using the Gabor phase–based displacement estimation technique. Then, at each refined position the attached set of jets is recalculated and updated. The similarity measure between these jets and those of Gi defines the facial expression that closely corresponds to the displayed
expression. The scoring value indicates the intensity of a given expression for Gi, that is, referring to the peak of a given facial action. The entire facial action recognition process is summarized in the flow diagram of Fig. 4.

Fig. 4. Flow diagram of the entire facial action estimation process (initialization, tracking and facial action estimation stages)
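The jet-based similarity evaluations referred to above are typically computed as a normalized amplitude correlation, as in elastic bunch graph matching; whether this is exactly the paper's eq. 1 is an assumption, and any node weighting is omitted in this sketch.

```python
import numpy as np

def jet_similarity(a1, a2):
    """Amplitude-based similarity between two Gabor jets (arrays of magnitudes)."""
    return float(np.dot(a1, a2) / np.sqrt(np.dot(a1, a1) * np.dot(a2, a2)))

def graph_similarity(jets1, jets2):
    """Average node-wise jet similarity between two facial graphs."""
    return float(np.mean([jet_similarity(a, b) for a, b in zip(jets1, jets2)]))
```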
5 Results
The videos used in this work for testing are from the Cohn-Kanade database [9]. The sequences consist of a set of prototypic facial expressions (happiness, anger, fear, disgust, sadness, and surprise) that were collected from a group of psychology students of different races, with ages ranging from 18 to 30 years. Each sequence starts from a neutral expression and terminates at the peak of a given expression. First, in order to select the appropriate graphs from the gallery (only one graph per person displaying a given expression at its peak is required), and to properly initialize the positions of the four reference points (the points to track), the subgraph containing the four reference facial features (Fig. 3) is roughly localized in the first frame of the video via an exhaustive search of the subgraph through the coarsest face image level. We used a three-level hierarchical image representation to decrease the inherent average latency of the graph search operation by reducing the image search area and the Gabor-jet sizes. For images at
the finest level (640 × 480 pixel resolution), jets are defined as sets of 40 complex coefficients constructed from a set of Gabor filters spanning 8 orientations and 5 scales, whereas those for images at the coarsest level (320 × 240) are formed by 16 coefficients obtained from filters spanning 8 orientations over 2 scales. The intermediate-level images use jets of (8 × 3) coefficients. From frame to frame, only the four reference points are tracked using the iterative disparity estimation procedure. To avoid drifting during tracking, for each feature point, a local fitting is performed by searching through the gallery for the subgraph that maximizes the jet-based similarity. The rough positions of the four feature points are given by the positions of the nodes of the optimal subgraph. These are then adjusted using again the displacement estimation procedure. Figure 5 shows snapshots of the tracking of the four reference facial feature points. The bottom subfigure shows which facial graph (neutral expression or peak happiness) should be used to refine the positions of the four facial reference points. The position error of the tracked feature points (calculated over the erroneously tracked frames relative to the total number of frames) was found to be 2.4 pixels (measured on level 0). The twenty-eight facial point positions are obtained from the transformed positions of the reference graph using the Procrustes transformation that warps the four-point positions to fit the four tracked positions. Then, the iterative displacement estimation procedure is used as a refinement stage, performed individually on each position of all of the twenty-eight feature points, over the three levels of the face-image pyramid. At each refined position a Gabor jet is calculated. The reference graph that maximizes the similarity (eq. 1) over the gallery defines the final facial action that corresponds to the displayed expression. The two curve profiles in the bottom subgraph of Figure 5 illustrate that in the first 7 frames the displayed expression is neutral with a decreasing intensity, whereas the last 8 frames display a happiness expression with a gradually increasing intensity, as expected. Figure 6 shows how the fear and happiness expressions evolve in time, with the intensity profile going from a neutral display to its peak. The facial action recognition process clearly differentiates between the two facial displays, as shown by the two respective curve profiles in Figure 7. The overall performance on the six basic facial expressions (anger, fear, happiness, sadness, surprise, and disgust) reached 98.7%. The most difficult expression to recognize was the disgust expression, with a rate of 90.0%. Table 1 shows the mean score of the correctly recognized expressions over the 81 videos used for testing.

Table 1. The mean score of the correctly recognized expressions

Expression   Disgust   Angry   Fear   Happiness   Sadness   Surprise
Score        0.95      0.97    0.97   0.97        0.96      0.96
Fig. 5. An example of the tracking performance of the facial reference points. Note that the similarity is computed for each reference graph in the gallery; only two facial graphs (neutral and happiness) are shown for clarity.
Fig. 6. Facial action (FA) recognition performance. The similarity curve reflects the intensity of the FA (peak happiness: top – peak fear: bottom).
Fig. 7. The mutual exclusion between happiness (top) and fear (bottom)
6 Conclusions
Most facial expression recognition systems are based on a general model, leading to poor performance when familiar human face databases are used. Within this context, a personalized feature-based facial action recognition approach was introduced and evaluated in this paper. A facial localization step permits the selection of the most similar graphs from a set of familiar faces. The node positions are used to initialize the positions of the reference points, which are tracked using the iterative phase-based disparity estimation procedure. A Procrustes transformation is used to distort each facial action graph according to the positions of the tracked reference points. The facial action recognition process is based on local appearances, provided by the Gabor jets of the twenty-eight fiducial points corresponding to the refined node positions of the distorted graph. The adopted facial tracking scheme provides a noticeable performance gain for facial action recognition due to the localized appearance-based feature representations. Further work will involve assessing non-verbal communications between a patient and his or her health care professional in a clinical setting, in which the proposed framework can be easily integrated with familiar patient faces to detect some events (e.g., the subject's smile, frown, etc.).
References
1. Chang, C.Y., Huang, Y.C., Yang, C.L.: Personalized facial expression recognition in color image, pp. 1164–1167 (December 2009)
2. Cohn, J.F., Zlochower, A., Lien, J., Kanade, T.: Face Analysis by Feature Point Tracking Has Concurrent Validity with Manual FACS Coding. Psychophysiology 36(1), 35–43 (1999)
3. Cottrell, G., Dailey, M., Padgett, C.: Is All Face Processing Holistic? The View from UCSD. In: Wenger, M., Townsend, J. (eds.) Computational, Geometric and Process Perspectives on Facial Recognition: Contexts and Challenges. Erlbaum, Mahwah (2003)
4. Dahmane, M., Meunier, J.: Constrained phase-based personalized facial feature tracking. In: Blanc-Talon, J., Bourennane, S., Philips, W., Popescu, D., Scheunders, P. (eds.) ACIVS 2008. LNCS, vol. 5259, pp. 1124–1134. Springer, Heidelberg (2008)
5. Fasel, B.: Robust face analysis using convolutional neural networks, vol. 2, pp. 40–43 (2002)
6. Fasel, B., Luettin, J.: Automatic facial expression analysis: a survey. Pattern Recognition 36(1), 259–275 (2003)
7. Hong, H., Neven, H., von der Malsburg, C.: Online facial expression recognition based on personalized galleries. In: Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, pp. 354–359 (April 1998)
8. Hu, Y., Chen, L., Zhou, Y., Zhang, H.: Estimating face pose by facial asymmetry and geometry. In: Sixth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 651–656 (May 2004)
9. Kanade, T., Cohn, J.F., Tian, Y.: Comprehensive database for facial expression analysis. In: Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 46–53 (2000)
10. Lucey, S., Ashraf, A.B., Cohn, J.: Investigating spontaneous facial action recognition through AAM representations of the face. In: Kurihara, K. (ed.) Face Recognition Book. Pro Literatur Verlag (April 2007)
11. Mardia, K., Jupp, P.: Directional Statistics. Wiley, New York (2000)
12. Pantic, M., Bartlett, M.S.: Machine analysis of facial expressions. In: Delac, K., Grgic, M. (eds.) Face Recognition, pp. 40–416. I-Tech Education and Publishing, Vienna (2007)
13. Pantic, M., Rothkrantz, L.: Automatic Analysis of Facial Expressions: The State of the Art. IEEE Transactions on Pattern Analysis and Machine Intelligence 22(12), 1424–1445 (2000), http://pubs.doc.ic.ac.uk/Pantic-PAMI00/
Lossless Compression of Satellite Image Sets Using Spatial Area Overlap Compensation Vivek Trivedi and Howard Cheng Department of Mathematics and Computer Science University of Lethbridge Lethbridge, Alberta, Canada {vivek.trivedi,howard.cheng}@uleth.ca
Abstract. In this paper we present a new prediction technique to compress a pair of satellite images that have significant overlap in the underlying spatial areas. When this prediction technique is combined with an existing lossless image set compression algorithm, the results are significantly better than those obtained by compressing each image individually. Even when there are significant differences between the two images due to factors such as seasonal and atmospheric variations, the new prediction technique still performs very well to achieve significant reduction in storage requirements.
1 Introduction
In this paper, we examine the problem of lossless compression of large sets of satellite images. More specifically, we design an algorithm for the lossless compression of a set of satellite images. These sets consist of many images of similar geographical locations taken at different times as the satellite orbits the Earth. Despite some differences in climate conditions, we expect a high amount of redundancy among these satellite images that represent the same geographic location. The storage of large image collections has traditionally been treated in a straightforward manner by applying well-known compression algorithms to individual images (see, for example, [13]). When the images in the collection are related, there are opportunities to reduce the storage requirements further. There are general-purpose image set compression algorithms for images that are related even though the relationship is not known a priori, including the centroid method and MST-based methods [4–6, 8, 9]. An important component in these algorithms is a compensation algorithm that predicts a target image to be compressed, given a reference image that has already been coded. We propose a new compensation algorithm that takes advantage of the properties of our image sets to reduce redundancies among images. Examples of such compensation in other applications include motion compensation for video compression [13],
This research was supported by a MITACS Accelerate Internship with Iunctus Geomatics Corp. (VT) and an NSERC Discovery Grant (HC).
as well as compensation for the compression of stereo images [12], multiview videos [10], and object movies [2]. The new compensation algorithm is based on metadata associated with each image that specifies its spatial location. Since the orbiting path of the satellite is not the same each time it passes through the same area, it is very rare that two images represent exactly the same areas. In order to compress the target image given the reference image, the overlapped area is first determined. It is shown in this paper that simple subtraction of each target pixel in the overlapped area by the corresponding pixel in the reference image is insufficient due to factors such as varying seasons (e.g. snow cover) and changes in atmospheric conditions (e.g. amount of aerosol or humidity in the air). These issues are dealt with by an intensity map computed from the target and the reference images. Through experiments on real image sets, we show that the new compensation algorithm combined with well-known image compression algorithms is significantly better than compressing each image individually. Although the techniques proposed here can be applied to other satellite image sets with similar properties, we mainly worked on images which are acquired in a single panchromatic band by the SPOT5 satellite [3]. These images are taken in a cloud-free environment and each pixel represents a 10m × 10m area of the surface of the Earth. These images are orthorectified to remove distortions due to the tilt and elevation angles from nadir in the satellite camera sensors. Sample images used for the experiments are shown in Fig. 1. The images are chosen so that there is complete overlap in the area imaged to demonstrate the effectiveness of the proposed compensation algorithm. However, complete overlap is not required for our algorithm.
Fig. 1. Images used in the experiments: (a) I1, (b) I2, (c) I3, (d) I4, (e) I5, (f) I6, (g) I7
2 Image Set Compression Methods
Data compression is typically achieved by removing redundancy in the given data. For lossless compression of individual images, typical algorithms attempt to remove two types of redundancy—interpixel redundancy and coding redundancy [7]. These redundancies are reduced by a mapper (e.g. wavelet transform) and an entropy coder (e.g. arithmetic coding). For sets of similar images, there is an additional type of redundancy that exists among the images. A set mapper is first applied to reduce this interimage redundancy [8, 9]. The output of the set mapper is a set of images that can be processed by ordinary image compression algorithms to reduce the remaining interpixel redundancy and coding redundancy. The centroid method of Karadimitriou and Tyler [8, 9] first computes a centroid image of the set, which is simply the pixelwise average of all images in the set. The set mapper then predicts each image in the set by subtracting the centroid image. If all images are very similar, the difference images contain mostly zeros and can be compressed very efficiently. For our application, however, this method can only work well if every image represents exactly the same geographic location with no displacements. This almost never occurs in the image set, even if we apply a clustering algorithm to group images of similar locations together. A different approach to the set mapping problem is based on minimum spanning trees [4–6]. In this approach, each image (as well as a "zero" root image) is represented as a vertex in a graph. Between every pair of vertices u and v, there is an edge whose weight represents the cost to encode image v when u is known (or vice versa). To compress the entire image set, a minimum spanning tree (MST) is computed from this graph. The zero root image is used as the first reference image. The algorithm repeatedly chooses a target image that is connected to another reference image that has already been coded in the MST. This is done until all images have been compressed. Decompression proceeds from the root and decompresses the images in the same order. When images in the set are not all very similar but there is significant similarity between pairs of images, it has been shown that MST-based set compression algorithms perform very well [4, 6]. Thus, MST-based algorithms are more suitable for our application. Regardless of which of the above set mapping methods is chosen, a key component is the compensation of a target image given a reference image. While simple image subtraction is used in previous works, it does not work well with the image sets in our application.
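For concreteness, a minimal sketch of the centroid set mapper described above (our own illustration): the set mapper output is the centroid plus one difference image per set member, each of which is then handed to an ordinary lossless coder.

```python
import numpy as np

def centroid_set_mapper(images):
    """Centroid method [8, 9]: predict each image by the pixelwise average of the set."""
    centroid = np.mean(np.stack(images), axis=0)
    residuals = [img - centroid for img in images]   # each residual is coded separately
    return centroid, residuals
```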
3 Spatial Area Overlap Compensation
In order to compute the overlap in the geographical area represented by a pair of images, we make use of location metadata that are typically recorded with satellite images. In our application, the geographical location of the rectangular area of the surface of the Earth corresponding to each image is extracted from the
metadata and specified in the Universal Transverse Mercator (UTM) coordinate system. Other geographic coordinate systems can also be used. The spatial area overlap between the target image and the reference image is computed by applying a standard polygon intersection algorithm [11]. Since the overlapped area represents the same geographic area, a high level of inter-image redundancy between the corresponding pixels is expected. As in many image compression algorithms, we attempt to remove this inter-image redundancy through the technique of prediction. Let T be the target image to be encoded, and R be the reference image. We denote by T(i, j) the pixel of T at image coordinates (i, j), and likewise for R(i, j). We now describe how to form the prediction error image E (whose dimensions are the same as those of T) using spatial area overlap compensation. For each pixel T(i, j) in the overlapped area in the target image T, the corresponding image coordinates (i′, j′) in R representing the same spatial location are computed. The coordinates (i′, j′) are used to determine a predicted pixel value T̂(i, j) for T(i, j). Since i′ and j′ are often not integers, interpolation is needed to obtain a value for T̂(i, j). In order to minimize computation, we use nearest neighbour interpolation. That is,

T̂(i, j) = R(round(i′), round(j′)).   (1)
Our images have relatively high resolution, and more computationally intensive interpolation schemes such as bilinear or bicubic interpolation do not result in significant improvements in the accuracy of the prediction. If (i, j) is not in the overlapped area in T, we simply set T̂(i, j) = 0. The prediction error image E is simply formed by

E(i, j) = T(i, j) − T̂(i, j).   (2)

It is clear that if the reference image R and the prediction error image E are available, then the target image T can be recovered, provided that the geographic coordinates of T and R are known. The computation of the prediction error image is illustrated in Fig. 2. It can be seen that the major geographic features such as the rivers and streams in the target image are much less prominent in the prediction error image. As expected, the differences in atmospheric conditions near the top of the main river on the left are captured in the prediction error image. It can also be seen from Fig. 2 that the prediction error image obtained still contains significant interpixel redundancy in the coded overlapped area, so that an image compression algorithm is still needed to encode the prediction error image. Also, in the case of partial overlap, the uncompensated area (the area outside of the spatial overlap) has a significant amount of interpixel redundancy, as in the original image. This interpixel redundancy can be quantified by examining the entropy [7]. In Table 1, we show the first, second, and fourth order entropy of image I1 together with the entropy of the prediction error resulting from using different reference images. The difference between the first and fourth order entropy can be considered as a measure of the amount of interpixel redundancy that exists in the image.
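A sketch of eqs. (1)-(2), assuming a helper `to_ref_coords` that converts a target pixel location into the corresponding (generally non-integer) coordinates in the reference image using the UTM metadata; that helper and the array conventions are assumptions, not part of the paper.

```python
import numpy as np

def prediction_error(T, R, to_ref_coords):
    """Form the prediction error image E = T - T_hat (eqs. 1-2).

    T, R          : target and reference images as 2-D integer arrays
    to_ref_coords : function mapping target indices (i, j) to coordinates (i', j')
                    of the same geographic location in R, or None outside the overlap
    """
    E = T.astype(np.int32).copy()          # outside the overlap, T_hat = 0, so E = T
    h, w = R.shape
    for i in range(T.shape[0]):
        for j in range(T.shape[1]):
            rc = to_ref_coords(i, j)
            if rc is None:
                continue
            ii, jj = int(round(rc[0])), int(round(rc[1]))   # nearest neighbour, eq. (1)
            if 0 <= ii < h and 0 <= jj < w:
                E[i, j] = int(T[i, j]) - int(R[ii, jj])     # eq. (2)
    return E
```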
Fig. 2. Illustration of spatial area overlap compensation: (a) target image, (b) reference image, (c) prediction error image

Table 1. Experiments on a sample target image and the entropy of prediction error from different reference images

           Entropy (bits/pixel)               Space Reduction (JPEG 2000)
Image      1st order  2nd order  4th order
I1         5.33       4.47       3.51        60.6%
I1 − I2    3.74       3.58       3.28        59.0%
I1 − I3    4.03       3.79       3.36        61.5%
I1 − I4    6.04       4.95       3.68        61.8%
I1 − I5    6.09       5.16       3.76        56.0%
I1 − I6    4.03       3.79       3.36        56.2%
I1 − I7    5.15       4.49       3.60        57.4%
Also included is the space reduction obtained by the JPEG 2000 compression algorithm [1] (in lossless mode) on the original target image and the prediction error images, compared to storing the target image uncompressed. We see that a reduction in entropy is often obtained using spatial area overlap compensation, although only a minor improvement is obtained in actual compression performance in some cases. There is no significant improvement when spatial area overlap compensation with (1) is used to predict the target image. Despite the fact that the images have significant overlap, the overlapped areas are not similar enough for simple subtraction
to work well. On closer examination, this is often due to seasonal variations (e.g. snow cover) or atmospheric conditions (such as humidity and pollution) observed between images taken at different times of the year. For example, I1 and I5 in Fig. 1 show two images representing the same area, but they appear very different due to seasonal variations. These results show that a different prediction method is needed.
4 Intensity Mapping
Seasonal variations can significantly impact the intensity of reflected light recorded by the satellite camera sensors. For example, grass may reflect considerably less light in the summer than in the winter when it is covered by snow. Crops in the summer may be harvested in a different season. Even if an image of the same location is taken at different times in the same season, the pixel intensities may be different because of variations in atmospheric conditions. Despite seasonal and atmospheric variations that exist among images in the set, it was observed that pixels of the same "type" in the reference image are often changed in similar ways in the target image. By the type of a pixel we mean the type of surface the pixel corresponds to. At the same time, different types of pixels are changed by different amounts. For example, in Fig. 1 we see that the pixels belonging to a crop field in I1 are changed in I5 because they have been harvested. One can attempt to use region-based techniques to analyze different regions of pixels and compensate appropriately. However, such an approach may be computationally intensive, and side information on the locations and shapes of regions must also be stored. We propose a new simple prediction method called intensity mapping to take advantage of this property and overcome issues arising from variations in seasonal and atmospheric conditions. For each pixel intensity in the overlapped area of the target image, we use (1) to determine the corresponding pixel intensity in the reference image. All such pairs of reference and target intensities are collected. Let

S = {(T̂(i, j), T(i, j)) | (i, j) in the overlapped area in T}   (3)

be the multiset of such pairs with duplicates included. Then, for a pixel T(i, j) in the overlapped area of the target image, we define the intensity mapped predictor

T̄(i, j) = round( median{ t | (T̂(i, j), t) ∈ S } ).   (4)

In other words, T̄(i, j) is simply the median of all target intensities associated with the reference intensity T̂(i, j) in S. An intensity map consisting of the set

{(T̂(i, j), T̄(i, j)) | (i, j) in the overlapped area in T}   (5)
(with all duplicates removed) must also be stored so that the decompression algorithm has access to the same prediction when reconstructing the target image from reference image and the prediction error. The storage requirement for the
intensity map is negligible in relation to the prediction error image. The median is chosen over the mean for intensity mapping because it is less affected by outliers. Since intensity mapping is only a rough approximation of how different types of pixels change from one image to another, outliers often exist and can affect the mean significantly, leading to a larger prediction error for more pixels. Intensity mapped prediction is illustrated in Table 2. In this table, we show the various target pixel intensities associated with a particular reference intensity, as well as the effect of intensity mapping on the prediction error. Although the use of intensity mapped prediction does not change the statistical properties (e.g. entropy, range, etc.) of the target intensities corresponding to a specific reference intensity, the prediction errors over the entire image are reduced. An improvement in the overall prediction error image is achieved because intensity mapped prediction effectively allows a different adjustment in the predicted value based on the type of pixels being coded.

Table 2. Example of intensity mapped (IM) prediction

Reference   Target                              Median   Error (no IM)                       Error (with IM)
70          97, 99, 100, 102, 102, 104, 105     102      27, 29, 30, 32, 32, 34, 35          -5, -3, -2, 0, 0, 2, 3
100         120, 121, 121, 122, 124, 125, 125   122      20, 21, 21, 22, 24, 25, 25          -2, -1, -1, 0, 2, 3, 3
220         180, 182, 182, 183, 184, 184, 185   183      -40, -38, -38, -37, -36, -36, -35   -3, -1, -1, 0, 1, 1, 2
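A sketch of eqs. (3)-(5): the map from each reference intensity to the median of its associated target intensities, and the resulting prediction error. The `overlap_mask` and `T_hat` arrays are assumed to come from the compensation step of Sect. 3.

```python
import numpy as np
from collections import defaultdict

def intensity_map(T_hat, T, overlap_mask):
    """Build the intensity map (eqs. 3-5): reference intensity -> median target intensity."""
    pairs = defaultdict(list)
    for (i, j) in zip(*np.nonzero(overlap_mask)):
        pairs[int(T_hat[i, j])].append(int(T[i, j]))                      # multiset S, eq. (3)
    return {r: int(round(np.median(t))) for r, t in pairs.items()}        # eqs. (4)-(5)

def im_prediction_error(T_hat, T, overlap_mask, imap):
    """Prediction error using the intensity-mapped predictor."""
    E = T.astype(np.int32).copy()                  # outside the overlap, prediction is 0
    for (i, j) in zip(*np.nonzero(overlap_mask)):
        E[i, j] = int(T[i, j]) - imap[int(T_hat[i, j])]
    return E
```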
Table 3 shows the experimental results using intensity mapped prediction. As expected, the fourth order entropy of the prediction error image is significantly reduced in most cases. This is also often seen in actual space reduction achieved by lossless JPEG 2000. In fact intensity mapped prediction performs better in all but one case. Even for I1 and I2 which are quite similar visually, intensity mapped prediction gives very good results. A slight disadvantage for intensity mapped prediction is that two passes are needed to construct the intensity map and to compute the prediction error. Figure 3 shows the prediction error image of I1 . It is visually clear that intensity mapping significantly reduces high frequency details in the error image. For example, the zig-zag patterns in the terrain, the thin lines in harvested fields, and the bright strips of dirt road are apparent in the prediction error image without intensity mapping. On the other hand, the same features are drastically reduced when intensity mapping is used. This can also be verified by examining the histograms of the pixel intensities in the error images. We see in Fig. 4 that the histogram of the prediction errors is much more skewed when intensity mapped prediction is used, so that the resulting error image is easier to compress. Spatial area overlap compensation along with intensity mapping can be used in MST-based set compression algorithms [4–6] to obtain edge weights and to compress chosen edges in the MST. As an example, we may construct a directed
Table 3. Experiments on intensity mapped (IM) prediction

           4th Order Entropy (bits/pixel)   Space Reduction (JPEG 2000)
Image      no IM      IM                    no IM      IM
I1 − I2    3.28       1.03                  59.0%      83.8%
I1 − I3    3.36       1.90                  61.5%      77.3%
I1 − I4    3.68       3.41                  61.8%      48.5%
I1 − I5    3.76       2.53                  56.0%      57.4%
I1 − I6    3.36       3.10                  56.2%      56.4%
I1 − I7    3.60       2.52                  57.4%      65.7%

Fig. 3. Effect of intensity mapping on the prediction error image I1 − I2: (a) error image (no IM), (b) error image (IM)

Fig. 4. Histogram of intensities in prediction error images: (a) no IM, (b) IM
graph where the weight of each edge is the compressed size of the corresponding prediction error image after compensation. Figure 5 depicts the computed directed MST of the image set in Fig. 1, where node I0 represents a zero "root" image that acts as the reference image for encoding image I7. Using the given MST and spatial area overlap compensation with intensity mapped prediction, we obtain a further reduction of 23.9% compared to compressing each image individually (81182 bytes vs. 106663 bytes).
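A sketch of how such a graph and its directed MST might be built, assuming a `coder` that returns the compressed byte string of an image (or error image) and a `compensate` function implementing Sects. 3-4; the use of networkx's minimum spanning arborescence (Edmonds' algorithm) is our choice of tooling, not the paper's.

```python
import networkx as nx

def build_set_mst(images, coder, compensate):
    """Edge weight = compressed size of the prediction error of the target given
    the reference; node 0 is the zero root image."""
    G = nx.DiGraph()
    n = len(images)
    for t in range(1, n + 1):
        G.add_edge(0, t, weight=len(coder(images[t - 1])))         # code target alone
        for r in range(1, n + 1):
            if r != t:
                E = compensate(images[t - 1], images[r - 1])       # Sects. 3-4
                G.add_edge(r, t, weight=len(coder(E)))
    # Since only node 0 has no incoming edges, the arborescence is rooted there.
    return nx.minimum_spanning_arborescence(G)
```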
Fig. 5. Directed minimum spanning tree of the example image set (edge weights are the compressed sizes, in bytes, of the corresponding prediction error images)
5 Conclusions and Future Works
We described a compensation algorithm for images with significant spatial area overlap, taking into account variations in seasonal and atmospheric conditions. We have shown that when combined with MST-based set compression algorithms, the storage requirement is reduced compared to compressing each image individually. The proposed algorithm generates the prediction error image in two passes. During the first pass it constructs the intensity map by scanning the images, and in the second pass it uses the intensity map to compute the prediction error from the target image. To improve efficiency, we are examining an adaptive method that updates the intensity map while encoding the target image in a single pass. This method would also eliminate the need to store the intensity map separately for decoding. The method of intensity mapped spatial area overlap compensation can also be extended to satellite images of different scales. An image of a particular scale can be predicted from images of other scales. For example, we can predict 10-meter images using 5-meter images that have similar spatial coordinates. There are also possibilities of predicting an image from multiple images of the same area, especially if an image overlaps with multiple images in different parts. These possibilities will be examined in the near future.
Acknowledgments The authors would like to thank Iunctus Geomatics Corp. for providing the satellite images for our experiments.
References
1. Adams, M.: JasPer project, http://www.ece.uvic.ca/~mdadams/jasper/
2. Chen, C.-P., Chen, C.-S., Chung, K.-L., Lu, H.-I., Tang, G.: Image set compression through minimal-cost prediction structures. In: Proceedings of the IEEE International Conference on Image Processing, pp. 1289–1292 (2004)
3. Satellite Imaging Corporation: SPOT-5 satellite imagery and satellite system specifications, http://www.satimagingcorp.com/satellite-sensors/spot-5.html
4. Gergel, B.: Automatic Compression for Image Sets Using a Graph Theoretical Framework. Master's thesis, University of Lethbridge (2007)
5. Gergel, B., Cheng, H., Li, X.: A unified framework for lossless image set compression. In: Data Compression Conference, p. 448 (2006)
6. Gergel, B., Cheng, H., Nielsen, C., Li, X.: A unified framework for image set compression. In: Arabnia, H. (ed.) Proceedings of the 2006 International Conference on Image Processing, Computer Vision, & Pattern Recognition (IPCV 2006), vol. II, pp. 417–423 (2006)
7. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 3rd edn. Prentice-Hall, Englewood Cliffs (2008)
8. Karadimitriou, K.: Set redundancy, the enhanced compression model, and methods for compressing sets of similar images. Ph.D. thesis, Louisiana State University (1996)
9. Karadimitriou, K., Tyler, J.M.: The centroid method for compressing sets of similar images. Pattern Recognition Letters 19(7), 585–593 (1998)
10. Merkle, P., Müller, K., Smolic, A., Wiegand, T.: Efficient compression of multiview video exploiting inter-view dependencies based on H.264/MPEG4-AVC. In: IEEE Intl. Conf. on Multimedia and Expo (ICME 2006), pp. 1717–1720 (2006)
11. O'Rourke, J.: Computational Geometry in C, 2nd edn. Cambridge University Press, Cambridge (1998)
12. Perkins, M.G.: Data compression of stereopairs. IEEE Trans. on Communications 40(4), 684–696 (1992)
13. Shi, Y.Q., Sun, H.: Image and Video Compression for Multimedia Engineering: Fundamentals, Algorithms, and Standards, 2nd edn. CRC Press, Boca Raton (2008)
Color Image Compression Using Fast VQ with DCT Based Block Indexing Method

Loay E. George1 and Azhar M. Kadim2

1 Dept. of Computer Science, Baghdad University, Baghdad, Iraq
[email protected]
2 Dept. of Computer Science, Al-Nahrain University, Baghdad, Iraq
[email protected]
Abstract. In this paper, a Vector Quantization (VQ) compression scheme based on block indexing is proposed to compress true color images. This scheme uses an affine transform to represent the blocks of the image in terms of the blocks of the codebook. In this work a template image rich with high contrast areas is used as a codebook to approximately represent the blocks of the compressed image. A time reduction was achieved due to the usage of block descriptors to index the image blocks; these block descriptors are derived from the discrete cosine transform (DCT) coefficients. The DCT-based descriptor is affine-transform invariant. This descriptor is used to filter out the domain blocks, and matching is performed only with similarly indexed blocks. The introduced method led to an encoding time of 1.13 s, a PSNR of 30.09, an MSE of 63.6 and a compression ratio of 7.31 for the Lena image (256×256, 24 bits).

Keywords: Image Compression, DCT, Fractal Image Compression, IFS, Isometric Processes.
1 Introduction

Due to rapid technological growth and the usage of the Internet, today we are able to store and transmit digital images. Image compression is one of the enabling technologies for multimedia applications; it aims to reduce the number of bits needed to represent an image, because it would not be practical to put images on websites without putting them in one of their compressed forms [1]. Vector Quantization (VQ) is an efficient technique for data compression and has been successfully used in various applications. VQ maps a k-dimensional vector in the vector space into a finite set of vectors; each vector is called a codeword and the set of all the codewords is called a codebook. A codebook can be generated by clustering algorithms; the most commonly used method to generate a codebook is the Linde-Buzo-Gray (LBG) algorithm, also called the Generalized Lloyd Algorithm (GLA) [2]. The main problem in VQ is choosing the vectors for the codebook; in this work a model image is used as a codebook instead of generating it. A vector quantizer is composed of two processors: the encoder and the decoder. The encoder takes the input vector and outputs the index of the codeword that offers the lowest distortion. Once the closest codeword is found, its index is sent through a channel (which could be computer storage,
a communication channel, and so on) instead of the pixel values. This conserves space and achieves compression. The decoder receives the index of the codeword and replaces the index with the associated codeword [3]. In this work, the affine transform is used for block matching in a way similar to that used in the fractal image compression (FIC) scheme. The advantage of using an affine mapping is to expand the vector book space, but at the expense of encoding time. The affine mapping coefficients (i.e., the block average and the scale factor) are saved, besides the block index parameters, in the compression stream [4, 5]. The main difficulty with VQ and FIC methods is that they take a long time to compress the image. Moreover, the rate-distortion performance of most fractal image coders is not satisfactory. Some methods to overcome these difficulties have been proposed; some of them combine fractal coding with VQ. Among the few published relevant researches are:
I. R. Hamzaoui and M. Muller [6] presented a novel hybrid scheme combining fractal image compression with mean-removed shape-gain vector quantization. The algorithm, based on a distance-classification fractal coder with fixed cluster centers, decides whether to encode a range block by a cluster center or by a domain block. Their scheme improved the encoding and decoding time.
II. R. Hamzaoui and D. Saupe [7] showed how a few fixed vectors, designed from a set of training images by a clustering algorithm, accelerate the search for the domain blocks and improve both the rate-distortion performance and the decoding speed of a pure fractal coder when they are used as a supplementary vector quantization codebook. Two quadtree-based schemes were implemented in their work: a fast top-down heuristic technique and one optimized using the Lagrange multiplier method.
2 The Proposed Fast VQ

This work presents an enhanced VQ scheme to speed up fractal colored image compression; it is based on image block classification using Discrete Cosine Transform (DCT) coefficients, so that only the codebook blocks whose classification index is similar to that of the coded image block are kept for the matching stage (which has high computational complexity). The DCT represents the image by its elementary frequency components; this means that the most pertinent information of the image is stored, or compacted, in the top-left corner (the low-frequency DCT coefficients). This property of the DCT is called energy compaction. With this property, it is rational to simply leave out the high-frequency coefficients as an approximate means of addressing the block characteristics. Due to the DCT block classification, the similarity matching computation is done only on the blocks belonging to the same class. The proposed method consists of three main stages: (1) codebook generation stage, (2) encoding stage, and (3) decoding stage.

2.1 Codebook

The first step in any VQ is building a codebook. This is usually the most challenging stage, because the system performance depends on: (1) the number of codebook blocks, which should be as small as possible to limit the encoding time loss; (2) the overall closeness of the codebook to all image blocks, in order to keep the image
distortion level as low as possible; and usually we need to increase the number of codebook blocks to ensure a good approximation of the image blocks. In our proposed system a template image rich with high details (i.e., holding different kinds of edges and different areas characterized by various contrast shading) will be utilized as a generic source for generating codebook blocks. In this paper a template color BMP image with a resolution of 24 bits/pixel is used to produce the codebook; it is loaded and decomposed into three 2D arrays (i.e. R: Red, G: Green and B: Blue). The moving window mechanism (with a jump step equal to 1) was adopted to generate the blocks. The set of generated image blocks is considered as the codebook.
Fig. 1. The marble image (the code book)
2.2 Encoding

a. As a first step in the encoding stage, the pair of DCT coefficients {c(0,3), c(3,0)} for each block in the codebook is determined using the formulas [8]:
c(i,0) = Σ_{y=0}^{l−1} Σ_{x=0}^{l−1} b(x, y) · cos( (2x + 1)iπ / 2l )   (1)

c(0,i) = Σ_{y=0}^{l−1} Σ_{x=0}^{l−1} b(x, y) · cos( (2y + 1)iπ / 2l )   (2)
where l is the block length and i = 3.

b. Then, assign the isometry index of each block using the Boolean criteria shown in Table 1 [9].

Table 1. The truth table for the eight block states

Block's Class Index   |C30| ≥ |C03|   C30 ≥ 0   C03 ≥ 0
0                     T               T         T
1                     T               T         F
2                     T               F         T
3                     T               F         F
4                     F               T         T
5                     F               T         F
6                     F               F         T
7                     F               F         F

0 ≡ No Operation; 1 ≡ Rotation_90; 2 ≡ Rotation_180; 3 ≡ Rotation_270; 4 ≡ Reflection; 5 ≡ Reflection+Rotation_90; 6 ≡ Reflection+Rotation_180; 7 ≡ Reflection+Rotation_270
c. Also, the coefficients c(0,3) and c(3,0) are used to obtain the DCT ratio descriptor of each codebook block using the following definition [10]:
D3 = ( c(3,0) / c(0,3) ) · N   if |c(3,0)| ≤ |c(0,3)|
D3 = ( c(0,3) / c(3,0) ) · N   if |c(3,0)| > |c(0,3)|   (3)
where N represents the number of classes (i.e., bins) into which image and codebook blocks are categorized depending on their descriptor value.
d. Now, the codebook blocks are classified and sorted in ascending order according to their descriptor D3 values, in order to speed up the IFS mapping. For each class of blocks having the same D3, two pointers are established to address the start block and the end block of the class.
e. After the preparation of the codebook data, the color BMP image data is loaded and decomposed into Red, Green and Blue arrays.
f. The three loaded basic colors (Red, Green, and Blue) are converted to the YCbCr color representation. This step is important to make the image data representation more suitable for compression.
g. Since YCbCr tolerates lower spatial resolution for the chromatic bands Cb and Cr, we down-sample the Cb and Cr components (by 2) to reduce the coding time and to improve the compression gain.
h. Each color band of the image is partitioned into non-overlapping blocks. Image blocks (m0, m1, … , mn−1) should have the same size as the codebook blocks.
i. Once the codebook blocks and image blocks are generated, then for each image block do the following:
1. Calculate the average of the image block (m̄).
2. Determine the DCT coefficients {c(0,3) and c(3,0)} using eq. 1 and eq. 2. Get the
symmetry index of the image block from Table 1.
3. Compute the descriptor value D3 of the image block using eq. 3.
4. With the help of the start and end pointers, pick up the codebook blocks from the codebook whose class index has the same D3.
5. Calculate the average (c̄b) of the codebook block.
6. Perform the required symmetry process on the codebook block by comparing the symmetry indexes of the image block and the matched codebook block using the lookup table (Table 2) [9].

Table 2. The required isometric operation to convert block state

Image Block Index   Codebook Block Index
                    0   1   2   3   4   5   6   7
0                   0   6   4   2   5   3   1   7
1                   6   0   2   4   1   7   5   3
2                   4   2   0   6   3   5   7   1
3                   2   4   6   0   7   1   3   5
4                   5   1   3   7   0   4   6   2
5                   3   7   5   1   4   0   2   6
6                   1   5   7   3   6   2   0   4
7                   7   3   1   5   2   6   4   0

0 ≡ No Operation; 1 ≡ Rotation_90; 2 ≡ Rotation_180; 3 ≡ Rotation_270; 4 ≡ Reflection; 5 ≡ Reflection+Rotation_90; 6 ≡ Reflection+Rotation_180; 7 ≡ Reflection+Rotation_270
7 7 3 1 5 2 6 4 0
This table addresses the indexes of the required symmetry operation, it is derived to overcome the problem of long computational time that was required to perform the extra matching processes in case of doing eight isometric mappings for each tested codebook block. 7. Determine the IFS scale coefficients (s) by applying the method of least sum of
square errors x2 between the image block mi and its approximation mi , according to the following equation[11]:
mi′ = s(cbi − cb) + m i
(4)
Where mi is the ith pixel value in the image block cbi is the ith pixel value in the codebook block. S is the scalling factor n −1
∑ (m′ − m )
χ = 2
i
2
(5)
i
i =0
The minimum of x2 occurs when: ∂χ 2 =0 ∂s Combining eq.4 and eq.5 and using eq.6 one leads to: n −1
n s=
n −1
(6)
n −1
∑ m cb − ∑ m ∑ cb i
i
i =0
i =0
n −1
n
∑ cb
i
i =0
i
2
i
i =0
⎛ n −1 ⎞ cbi ⎟ −⎜ ⎜ ⎟ ⎝ i =0 ⎠
∑
2
(7)
where n is the number of pixels in each block (i.e., the block size).
8. Apply the following condition to bound the value of the scale coefficient (s):
If s < Smin then s = Smin; else if s > Smax then s = Smax,
where Smax is the highest allowed value of the scale coefficient and Smin is the lowest allowed value.
9. Quantize the value (s) as follows:
Is = round( S / Qs )   (8)

S̃ = Qs · Is   (9)

where

Qs = Smax / ( 2^(bs−1) − 1 )   (10)

Qs is the quantization step of the scale coefficients, Is is the quantization index of the scale coefficients, and bs is the number of bits used to represent the scale coefficients.
10. Compute the approximation error (χ̃²) using eq. 5.
11. After the computation of the IFS code and the matching error (χ²) between the image block and the tested codebook block, χ² is compared with the minimum error (χ²min) registered during the previous matching instances, such that:
If χ² < χ²min then χ²min = χ²; sopt = Is; xopt = xd; yopt = yd; Sym = symmetry index; and the codebook color band is registered. End if
12. The newly registered minimum χ² is compared with the permissible level of error (i.e., the threshold value ε). If χ²min < ε, the search across the codebook blocks in the class is stopped, and the registered codebook block is considered the best matched block. Otherwise, the matching is restarted with the codebook blocks belonging to the neighbouring classes whose D3 is (D3 ± 1) to get the best IFS match; if there is no acceptable match, the codebook blocks whose D3 is (D3 ± 2) are tried, and so on, until either the registered χ²min becomes less than ε or all the codebook blocks have been tested.
13. The output is the set of IFS parameters (i.e., Is, m̄, x, y, color_band, sym), which should be registered as the best IFS match for the tested image block.
14. Repeat steps (1) to (13) for all image blocks.
15. Store all IFS mapping parameters as an array of records in a binary file. The length of this array is equal to the number of image blocks.
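To make the flow of steps a-i concrete, the following sketch computes the DCT descriptors (eqs. 1-3 and Table 1) and the per-block matching with the least-squares scale and its quantization (eqs. 4-10). It omits the isometry operations of Table 2 and the start/end pointer bookkeeping; the use of absolute values in eq. 3 and the assumption Smin = −Smax are our readings, and all names are ours.

```python
import numpy as np

def block_descriptors(b, n_classes):
    """DCT coefficients c(3,0) and c(0,3) (eqs. 1-2), the class index of Table 1,
    and the ratio descriptor D3 (eq. 3) of an l x l block b."""
    l = b.shape[0]
    x = np.arange(l)
    cos3 = np.cos((2 * x + 1) * 3 * np.pi / (2 * l))
    c30 = float(np.sum(b * cos3[None, :]))          # cosine along x
    c03 = float(np.sum(b * cos3[:, None]))          # cosine along y
    idx = ((0 if abs(c30) >= abs(c03) else 4)       # Table 1 truth table
           + (0 if c30 >= 0 else 2)
           + (0 if c03 >= 0 else 1))
    lo, hi = sorted([abs(c30), abs(c03)])           # ratio of the smaller to the larger
    d3 = int(round(n_classes * lo / hi)) if hi > 0 else 0
    return c30, c03, idx, d3

def encode_block(m, codebook_blocks, candidates, s_max=3.0, bs=6, eps=None):
    """Match one image block m against candidate codebook blocks of the same D3 class."""
    n = m.size
    m_mean = m.mean()
    q_s = s_max / (2 ** (bs - 1) - 1)                           # eq. (10)
    best = (None, None, np.inf)                                 # (I_s, block index, chi^2)
    for j in candidates:
        cb = codebook_blocks[j]
        denom = n * np.sum(cb ** 2) - np.sum(cb) ** 2
        s = 0.0 if denom == 0 else (n * np.sum(m * cb) - m.sum() * cb.sum()) / denom  # eq. (7)
        s = max(min(s, s_max), -s_max)                          # step 8: bound s
        i_s = int(round(s / q_s))                               # eq. (8)
        approx = (q_s * i_s) * (cb - cb.mean()) + m_mean        # eqs. (9) and (4)
        chi2 = float(np.sum((approx - m) ** 2))                 # eq. (5)
        if chi2 < best[2]:
            best = (i_s, j, chi2)
        if eps is not None and best[2] < eps:                   # step 12: early exit
            break
    return best
```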
2.3 Decoding
The decoding process can be summarized by the following steps:
1. Loading of the compression stream. The first step in the decoding stage is opening the compression file and reading the overhead information: width and height of the compressed image, block size, and quantization step of the scale coefficients.
2. For each image block of the components (Y, Cb, Cr), produce the attractor as follows:
a. Extract the best matched codebook block (cb) from the IFS parameters (xopt, yopt).
b. Perform the isometric mapping of the extracted codebook block according to the registered sym value.
c. De-quantize the value sopt of the image block as follows:

S = Is · Qs   (11)

d. m′i (i = 0, .., block size) of the image block is obtained according to eq. 4.
3. Up-sample the Cb and Cr components.
4. Convert the reconstructed (YCbCr) color components to RGB components using the inverse YCbCr transform to obtain the decompressed image.
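A matching sketch of the per-block reconstruction in steps 2c-2d (names as in the encoding sketch above; the isometric mapping is assumed to have been applied to cb already, and the block mean m_mean is assumed to be stored in the compression stream).

```python
def decode_block(cb, i_s, m_mean, q_s):
    """Reconstruct one block from its stored IFS parameters."""
    s = i_s * q_s                              # eq. (11): de-quantize the scale
    return s * (cb - cb.mean()) + m_mean       # eq. (4): apply the affine mapping
```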
3 Test Results

The proposed system was implemented using Visual Basic (Ver. 6.0) and the tests were conducted on a Fujitsu PC with a Pentium IV (1862 MHz). As test material, the Lena image (256×256, 24 bits) and the Marble image (256×256, 24 bits) were used; the latter is rich with high contrast areas and for this reason it is adopted as the code book. The size of the image blocks and codebook blocks is taken as 4×4 pixels, and the search step size is set to 1. To evaluate the performance of the proposed compression method, several tests were conducted; the results of these tests show that the best values of the coding parameters are those listed below:

Table 3. The best coding parameters

Parameter                     Value
No. of classes (N)            10000
Window Size (W)               200
Quantization bits for scale   6
Max scale (Smax)              3
Threshold error (ε)           15 × block size²
Fig. 2. Lena's image using different IFS schemes (original, with symmetry predictor, without symmetry predictor), with the corresponding measurements:

                      With Symmetry   Without Symmetry
Encoding Time (sec)   1.17            4.22
MSE                   63.61           47.39
PSNR                  30.09           31.37
CR                    7.314           7.314
Figure 2 shows the reconstructed (decompressed) Lena image using block indexing by the DCT descriptor, with and without the symmetry predictor. It is obvious from Fig. 2 that the symmetry predictor reduces the encoding time to about 25%, because only one orientation instead of eight has to be performed to evaluate the IFS similarity between image and codebook blocks. Table 4 illustrates the effect of the number of classes (N) on the performance of the compression scheme; the results show that ET decreases when N is increased. A good reduction in encoding time occurs with 10000 classes without causing a significant degradation in image quality. Table 5 presents the effects of the window size (W) on the performance parameters. W is used to define the dynamic range of the search within the codebook. It represents the maximum allowable difference between the descriptor value of the tested codebook block and the descriptor value of the image block. The value W = 200 was adopted as the best value.

Table 4. The effect of the number of classes (N)
N       ET (sec)   MSE     PSNR
1000    5.41       45.76   31.5
2000    3.67       49.47   31.18
5000    2.16       56.71   30.59
8000    1.63       60.63   30.30
10000   1.16       63.24   30.12
12000   1.13       65.68   29.95
14000   1.08       67.29   29.85
15000   0.92       68.35   29.78
20000   0.84       73.78   29.45
25000   0.7        77.95   29.21
30000   0.6        80.89   29.05
Table 5. The effect of window size

W     ET (sec)   MSE     PSNR
50    1.3        63.61   30.09
100   1.44       63.61   30.09
150   1.42       63.61   30.09
200   1.13       63.61   30.09
250   1.3        63.61   30.09
300   1.28       63.61   30.09
Table 6 tabulates the results of testing the number of quantization bits of scale.

Table 6. The effect of scale bits

Scale bits   ET (sec)   MSE     PSNR    CR
3            1.92       75.26   29.36   8
4            1.58       67.49   29.83   7.757
5            1.52       64.84   30.01   7.529
6            1.36       63.61   30.09   7.314
7            1.20       63.18   30.12   7.111
8            1.17       63.05   30.13   6.918
The above table shows the variation of the number of scale bits in order to study its effect on the compression performance. It is clear that the quality of the reconstructed image increases with the number of quantization bits. When the number of scale bits exceeds 6, its variation becomes less effective, so the value 6 was adopted as the best compromise.

Table 7. The effect of Smax

Smax   ET (sec)   MSE     PSNR
1      1.42       75.93   29.32
2      1.31       65.03   29.99
3      1.28       63.61   30.09
4      1.38       63.19   30.12
5      1.30       63.26   30.11
6      1.28       63.50   30.10
7      1.27       63.72   30.08
8      1.36       64.07   30.06
9      1.27       64.36   30.04
Table 7 illustrates the effect of varying the maximum scale on the compression performance; the results indicate that the effect on ET is insignificant, and the effects on PSNR and MSE are also small. The value Smax = 3 was adopted as the best value.
The test results listed in Table 8 illustrate the effect of using the error threshold ε as a stopping condition for the search on the compression performance. Both ET and PSNR decrease when ε increases. The value of ε used in the proposed scheme is set to 15 × block size².

Table 8. The effect of the error threshold

ε     ET (sec)   MSE     PSNR
1     2.58       61.83   30.21
5     1.7        62.41   30.17
10    1.39       63.04   30.13
15    1.2        63.61   30.09
20    1.23       64.03   30.06
25    1.05       64.54   30.03
30    0.97       65.01   30.00
4 Conclusions

1. An improvement was introduced to the image compression scheme by making use of a template color image as a VQ codebook instead of generating a small set of vectors.
2. The use of DCT coefficients to index the image blocks filters out blocks, which makes the matching process faster.
3. Our experimental results show that the proposed VQ scheme yields superior performance over conventional image compression, leading to a CR of 7.31 with a PSNR of 30.09 and an MSE of 63.6, without significant degradation in image quality.
4. The proposed scheme reduced the encoding time to approximately 17.2% of that of traditional schemes.
References
1. Kekre, H.B., Sarode, K.: Vector Quantized Codebook Optimization using K-Means. International Journal on Computer Science and Engineering 3, 283–290 (2009)
2. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 2nd edn. Pearson Education International, Prentice Hall, Inc. (2002)
3. Vector Quantization, http://www.mqasem.net/vectorquantization/vq.html
4. Colvin, J.: Iterated Function Systems and Fractal Image Compression: [email protected] (1996)
5. Fisher, Y.: Fractal Image Compression. In: SIGGRAPH 1992 Course Notes, the San Diego Super Computer Center. University of California, San Diego (1992)
6. Hamzaoui, R., Muller, M., Saupe, D.: Enhancing Fractal Image Compression with Vector Quantization. In: Proc. DSPWS (1996)
7. Hamzaoui, R., Saupe, D.: Combining fractal image compression and vector quantization. IEEE Transactions on Image Processing 9(2), 197–208 (2000)
8. Nixon, M.S., Aguado, A.S.: Feature Extraction and Image Processing, an imprint of Elsevier, pp. 57–58 (2005) ISBN 0-7506-5078-8
9. George, L., Al-Hilo, E.: Speeding-Up Color FIC Using Isometric Process Based on Moment Predictor. In: International Conference on Future Computer and Communication, pp. 607–611. IEEE Computer Society, Los Alamitos (2009)
10. Duh, D.J., Jeng, J.H., Chen, S.Y.: Speed Quality Control for Fractal Image Compression. Imaging Science Journal 56(2), 79–90 (2008)
11. George, L.: IFS Coding for Zero-Mean Image Blocks. Iraqi Journal of Science 47(1) (2006)
Structural Similarity-Based Affine Approximation and Self-similarity of Images Revisited

Dominique Brunet1, Edward R. Vrscay1, and Zhou Wang2

1 Department of Applied Mathematics, Faculty of Mathematics, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1
2 Department of Electrical and Computer Engineering, Faculty of Engineering, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1
{dbrunet,ervrscay}@uwaterloo.ca, [email protected]
Abstract. Numerical experiments indicate that images, in general, possess a considerable degree of affine self-similarity, that is, blocks are well approximated in root mean square error (RMSE) by a number of other blocks when affine greyscale transformations are employed. This has led to a simple L2 -based model of affine image self-similarity which includes the method of fractal image coding (cross-scale, affine greyscale similarity) and the nonlocal means denoising method (same-scale, translational similarity). We revisit this model in terms of the structural similarity (SSIM) image quality measure, first deriving the optimal affine coefficients for SSIM-based approximations, and then applying them to various test images. We show that the SSIM-based model of self-similarity removes the “unfair advantage” of low-variance blocks exhibited in L2 based approximations. We also demonstrate experimentally that the local variance is the principal factor for self-similarity in natural images both in RMSE and in SSIM-based models. Keywords: self-similarity, structural similarity index, affine approximation, image model, non-local image processing.
1 Introduction
The effectiveness of a good number of nonlocal image processing methods, including nonlocal-means denoising [1], restoration [2,3], compression [4], superresolution [5,6,7] and fractal image coding [8,9,10], is due to how well pixel-blocks of an image can, in some way, be approximated by other pixel blocks of the image. This property of natural images may be viewed as a form of self-similarity. In [11], a simple model of affine self-similarity which includes a number of nonlocal image processing methods as special cases was introduced. (It was analyzed further in [12].) An image I will be represented by an image function u : X → Rg, where Rg ⊂ R denotes the greyscale range. Unless otherwise specified, we work with normalized images, i.e., Rg = [0, 1]. The support X of an image function u is assumed to be an n1 × n2-pixel array. Let R be a set of
n × n-pixel subblocks Ri, 1 ≤ i ≤ NR, such that X = ∪i Ri, i.e., R forms a covering of X. We let u(Ri) denote the portion of u that is supported on Ri. We examine how well an image block u(Ri) is approximated by other image blocks u(Rj), j ≠ i. Let us consider a block u(Ri) being approximated as the range block and a block u(Rj), j ≠ i, approximating it as the domain block. In order to distinguish the roles of these blocks, we shall denote the domain blocks as u(Dj) with the understanding that Dj = Rj. For two pixel blocks Ri and Dj, the approximation of an image range block u(Ri) by a domain block u(Dj) may be written in the following general form,

u(Ri) ≈ αij u(Dj) + βij ,
i ≠ j .   (1)
The error associated with the approximation in (1) will be defined as

Δij = min_{α,β∈Π} ‖u(Ri) − αu(Dj) − β‖ ,
i ≠ j ,   (2)
where ‖·‖ denotes the L2(X) norm (or RMSE) and where Π ⊂ R² denotes the (α, β) parameter space appropriate for each case. The affine self-similarity model was comprised of four cases. The optimal parameters and associated errors for each case will be given. In what follows, we denote x = u(Ri), y = u(Dj) and N = n². The statistical measures sx, sy, etc., are defined in (7) below.

Case 1: Purely translational. This is the strictest view of similarity: Two image subblocks u(Ri) and u(Dj) are considered to be "close," u(Ri) ≈ u(Dj), if the L2 distance ‖u(Ri) − u(Dj)‖ is small. This is the basis of nonlocal means denoising. There is no optimization here: αij = 1, βij = 0 and the approximation error is simply

Δij^(Case 1) = ‖x − y‖ = N^(−1/2) { (N − 1)[sx² + sy² − 2sxy] + N[x̄ − ȳ]² }^(1/2) .   (3)

Case 2: Translational + greyscale shift. This is a slightly relaxed definition of similarity. Two image subblocks are considered similar if they are close up to a greyscale shift, i.e., u(Ri) ≈ u(Dj) + β. This simple adjustment can improve the nonlocal means denoising method since more blocks are available in the averaging process. In this case, αij = 1 and we optimize over βij:

βij = x̄ − ȳ ,   Δij^(Case 2) = [(N − 1)/N]^(1/2) [sx² + sy² − 2sxy]^(1/2) .   (4)

Case 3: Affine transformation. A further relaxation is afforded by allowing affine greyscale transformations, i.e., u(Ri) ≈ αu(Dj) + β. This method has been employed in vector quantization [4]. We optimize over α and β:

αij = sxy / sy² ,   βij = x̄ − αij ȳ ,   Δij^(Case 3) = [(N − 1)/N]^(1/2) [ sx² − sxy²/sy² ]^(1/2) .   (5)
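A direct transcription of the optimal coefficients and errors above (a sketch; the sample statistics follow the (N−1) normalization used in the text, and the domain block is assumed non-constant in Case 3):

```python
import numpy as np

def affine_selfsim_error(x, y, case):
    """Optimal greyscale coefficients and RMSE error Delta for Cases 1-3.

    x, y : flattened range and domain blocks of the same length N.
    """
    N = x.size
    sx2, sy2 = x.var(ddof=1), y.var(ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    if case == 1:                      # purely translational, eq. (3)
        alpha, beta = 1.0, 0.0
        delta = np.sqrt(np.mean((x - y) ** 2))
    elif case == 2:                    # translation + greyscale shift, eq. (4)
        alpha, beta = 1.0, x.mean() - y.mean()
        delta = np.sqrt((N - 1) / N * (sx2 + sy2 - 2 * sxy))
    else:                              # affine greyscale transformation, eq. (5)
        alpha = sxy / sy2
        beta = x.mean() - alpha * y.mean()
        delta = np.sqrt((N - 1) / N * (sx2 - sxy ** 2 / sy2))
    return alpha, beta, delta
```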
266
D. Brunet, E.R. Vrscay, and Z. Wang
Case 4: Cross-scale affine transformation. u(Ri ) ≈ αu(w(Dj )) + β, where Dj is larger than Ri and where w is a contractive spatial transformation. This is the basis of fractal image coding. The optimization process and the error distributions for Case 4 are almost identical to those of Case 3. For this reason, this case will not be discussed in this paper. Note 1. In both Cases 2 and 3, the means of the range block and optimally ¯ = α¯ transformed range block are equal, i.e., x y + β. Of particular interest in [11] were the distributions of L2 errors denoted as in approximating range blocks u(Ri ) by all other domain blocks u(Dj ), j = i, for the cases 1 ≤ k ≤ 3. In order to reduce the computational cost, we employ nonoverlapping subblocks. Normally, one could consider eight affine spatial transformations that map a square spatial domain block Dj to a square range block Ri . In our computations, however, unless otherwise specified, we shall consider only the identity transformation, i.e., zero rotation. In Fig. 1 are shown the Case 1-3 Δ-error distributions for all possible matches for the Lena and Mandrill images using 8 × 8-pixel blocks. As we move from Case 1 to Case 3 above, the error in approximating a given range block u(Ri ) by a given domain block u(Dj ) will generally decrease, since more parameters are involved in the fitting. It was observed the Case 3 Δerror distributions for images demonstrate significant peaking near zero error, indicating that blocks of these images are generally very well approximated by other blocks under the action of an affine greyscale transformation. For a given Case k, the Δ-error distributions of some images were observed to be more concentrated near zero approximation error than others. The former images were viewed as possessing greater degrees of self-similarity than the latter. A quantitative characterization of relative degrees of self-similarity was also considered in terms of the means and variances of the error distributions. To illustrate, for the seven well-known test images employed in the study, the degree of Case 3 self-similarity could be ordered as follows: (Case k) Δij ,
Lena ≈ San F rancisco > P eppers > Goldhill > Boat > Barbara > M andrill. It is important to note that the above model of self-similarity was based on the L2 distance measure since all Δ-errors were in terms of root mean squared errors (RMSE) and the optimal greyscale coefficients α and β were determined by minimizing the RMSE approximation error. Of course, this is not surprising since L2 -based distance measures, e.g., MSE, RMSE, PSNR, are the most widely used measures in image processing. However, it is well known [13] that L2 -based measures are not necessarily good measures of visual quality. In this paper, we re-examine the above self-similarity model in terms of the structural similarity (SSIM) image quality measure [14]. SSIM was proposed as an improved measure of assessing visual distortions between two images. The first step is to determine the formulas for optimal SSIM-based approximations of image range blocks by
Affine Self-similarity and Structural Similarity
267
domain blocks which correspond to Cases 1-3 above. We then present the distributions of SSIM measures between domain and range blocks for the Lena and Mandrill test images which, from above, lie on opposite ends of the L2 -based self-similarity spectrum. It turns out that our SSIM-based results allow us to address the question, raised in [11], whether the self-similarity of an image is actually due to the approximability of its blocks which, in turn, is determined by their “flatness.” If range blocks of low standard deviation/variance are easier to approximate, then perhaps a truer measure of self-similarity (or lack thereof) may be obtained if their corresponding Δ approximation errors are magnified appropriately to adjust for this “unfair advantage”. We shall show that the SSIM measure, because of its connection with a normalized metric, takes this “unfair advantage” into account, resulting in much less of the peaking near zero error demonstrated by RMSE approximation errors. As shown in [11], the histogram distributions of the standard deviations su(Ri ) of the 8 × 8-pixel range blocks of both are virtually identical to the Case 3 Δerror distributions in Fig. 1. This is to be expected since the standard deviation of the image subblock u(Ri ) is the RMSE associated with the approximation by its mean: u(Ri ) ≈ u ¯(Ri ). This is, in turn, a suboptimal form of the Case 3 approximation obtained by fixing the greyscale parameter α = 0. The distribution of α greyscale parameters is, however, found to be highly concentrated at zero [11], implying that in most cases, the standard deviation is a very good estimate of the Case 3 Δ-error.
(a) Cases 1, 2 and 3: Lena
(b) Cases 1, 2 and 3: Mandrill
Fig. 1. Case 1-3 RMS Δ-error distributions for normalized Lena and Mandrill images over the interval [0, 0.5]. In all cases, nonoverlapping 8 × 8-pixel blocks Ri and Dj were used.
2
Structural Similarity and Its Use in Self-similarity
As mentioned earlier, the structural similarity (SSIM) index [14] was proposed as an improved measure of assessing visual distortions between two images. If one of the images being compared is assumed to have perfect quality, the SSIM value can also be interpreted as a perceptual quality measure of the second image. It is in this way that we employ it in our self-similarity study.
268
D. Brunet, E.R. Vrscay, and Z. Wang
We express the SSIM between two blocks as a product of two components that measure (i) the similarities of their mean values and (ii) their correlation and contrast distortion. In what follows, in order to simplify the notation, we let x, y ∈ RN + denote two non-negative N -dimensional signal/image blocks, e.g., x = (x1 , x2 , · · · , xN ). The SSIM between x and y is defined as follows, ¯ + 1 2¯ xy 2sxy + 2 S(x, y) = S1 (x, y)S2 (x, y) = , (6) ¯ 2 + 1 ¯2 + y s2x + s2y + 2 x where ¯= x
N 1 xi , N i=1
¯= y
N 1 yi , N i=1
N
s2x =
1 ¯ )2 , (xi − x N − 1 i=1
N
s2y =
1 ¯ )2 , (yi − y N − 1 i=1
(7)
N
sxy
1 ¯ )(yi − y ¯) . = (xi − x N − 1 i=1
The small positive constants 1 , 2 1 are added for numerical stability along with an effort to accomodate the perception of the human visual system. ¯ and y ¯ The component S1 in (6) measures the similarity of the mean values, x ¯=y ¯ , then S1 (x, y) = 1, its maximum possible value. of, respectively, x and y. If x Its functional form was originally chosen in an effort to accomodate Weber’s law of perception [14]. The component S2 in (6) follows the idea of divisive normalization [15]. Note that −1 ≤ S(x, y) ≤ 1, and S(x, y) = 1 if and only if x = y. A negative value of S(x, y) implies that x and y are negatively correlated. 2.1
Optimal SSIM-Based Affine Approximation
We now consider the approximation of an image range block u(Ri ) by a domain block u(Dj ) as written in (1) in terms of the structural similarity measure. The SSIM measure associated with the approximation in (1) will be defined as Sij = max S(u(Ri ), αu(Dj ) + β) , α,β∈Π
i = j .
(8)
The optimal parameters and associated SSIM measures are given below, but only for the zero stability parameter case, i.e., 1 = 2 = 0. Because of space restrictions, we omit the algebraic details and simply state the results. In what follows, we once again denote x = u(Ri ), y = u(Dj ) and N = n2 . Case 1: Purely translational. There is no optimization in this case: αij = 1, βij = 0 and the SSIM measure is simply (Case 1)
Sij
= S(x, y) .
(9)
Affine Self-similarity and Structural Similarity
269
Case 2: Translational + greyscale shift. Here, αij = 1 and we optimize over β. 2sxy (Case 2) ¯−y ¯ , βij = x Sij = S2 (x, y) = 2 . (10) sx + s2y Note that the SSIM-optimal β parameter is identical to its L2 counterpart. Case 3: Affine greyscale transformation. We optimize over α and β. αij = sign(sxy )
sx , sy
(Case 3)
¯ , ¯ − αij y βij = x
Sij
=
|sxy | , sx sy
(11)
where sign(t) = 1 if t > 0, 0 if t = 0, and −1 if t < 0. In this case, the SSIM measure Sij is the magnitude of the correlation between x and y.
Note 2. In Cases 2 and 3, the means of the range block and optimally trans¯ = α¯ formed range block are equal, i.e., x y + β, as was the case for L2 -fitting. Since more parameters are involved as we move from Case 1 to Case 3, the associated SSIM measures behave as follows, (Case 1)
Sij
(Case 2)
≤ Sij
(Case 3)
≤ Sij
.
(12)
In Fig. 2 are shown the Case 1-3 SSIM measure distributions over the interval [−1, 1] of the Lena and Mandrill images, once again using 8 × 8-pixel blocks.
(a) Cases 1, 2 and 3: Lena
(b) Cases 1, 2 and 3: Mandrill
Fig. 2. Case 1-3 SSIM measure distributions for normalized Lena and Mandrill images over [−1, 1]. In all cases, nonoverlapping 8 × 8-pixel blocks Ri and Dj were used.
Before commenting on these plots, we briefly discuss the issue of the stability parameters, 1 and 2 in (6). As proposed in [14], in all computations reported below the stability parameters employed were 1 = 0.012 and 2 = 0.032 . In the case 1 = 2 = 0, the Case 1 SSIM measure distributions of the Lena and Mandrill images are almost identical. The slightly nonzero values of the stability parameters will increase the SSIM values associated with domain-range pairs with low variance. Since the Lena image contains a higher proportion of such blocks, there is a slight increase of the distribution for S > 0.
270
D. Brunet, E.R. Vrscay, and Z. Wang
The difference between the two distributions is more pronounced for Case 2. For the Lena image, the better domain-range block approximations yielded by the greyscale shift causes its SSIM measure distribution to increase over the region S ⊂ [0.5, 0.8]. But the situation is most interesting for Case 3, i.e., affine greyscale approximation. For both images, there are no negative SSIM values. This follows from the positivity of Sij in (11) which is made possible by the inclusion of the α scaling factor. When the domain and range blocks are correlated, as opposed to anticorrelated, i.e., sxy > 0 then the optimal α coefficient is positive, implying that S will be positive. When α < 0, the domain and range blocks are anticorrelated – multiplying the domain block by a negative α value will “undo” this anticorrelation to produce a roughly correlated block. The SSIM distribution for the Lena image has a much stronger component near S = 1, indicating that many more blocks are well approximated in terms of the SSIM measure. Conversely, the SSIM measure for the Mandrill image is quite strongly peaked at S = 0. In summary, the SSIM measure corroborates the fact that the Lena image is more self-similar than the Mandrill image. That being said, despite the dramatic peaking of the RMS Δ-error distribution of the Lena image at zero error – primarily due to a high proportion of low-variance blocks – its SSIM measure distribution does not demonstrate such peaking near S = 1. This will be explained in the following section. 2.2
Relation between Optimal L2 - and SSIM-Based Greyscale Coefficients
At this point it is instructive to compare the affine greyscale transformations of the L2 - and SSIM-based approximations. Obviously, for Case 1, no comparison is necessary since no greyscale transformations are employed. For Case 2, the greyscale shift β = u¯(Ri ) − u¯(Dj ) is the same in both approximations. For Case 3, it is sufficient to compare the α greyscale coefficients. Recall that for a given domain block x = u(Dj ) and range block y = u(Ri ), αL2 = It follows that
sxy , s2y
αSSIM = sign(sxy )
αSSIM sx sy ≥1 , = αL2 |sxy |
sx . sy
(13)
(14)
where the final inequality follows from (11). This result implies that the SSIM-based affine approximation αu(Dj ) + β will have a higher variance than its L2 -based counterpart. Such a “contrast enhancement” was also derived for SSIM-based approximations using orthogonal bases [16]. Finally, note that the coefficients αSSIM and αL2 always have the same sign. Numerically, we find that their values generally do not differ greatly: A histogram plot of their ratios is strongly peaked at 1.
Affine Self-similarity and Structural Similarity
2.3
271
SSIM, Normalized Metrics and Image Self-Similarity vs. Image “Approximability”
The fact that S(x, y) = 1 if and only x = y suggests that the function T (x, y) = 1 − S(x, y) ,
x, y ∈ RN + ,
(15)
could be considered a measure of the distance between x and y, since x = y implies that T (x, y) = 0. We now show that for Case 2 and Case 3, the function T (x, y) may be expressed in terms of the L2 distance x − y. First recall that for both Case 2 and Case 3 and for L2 - and SSIM-based approximations of a range block x = u(Ri ), the mean of the best affine approximation y = αu(Dj ) + β is equal to the mean of x. As such, we consider the function ¯=y ¯ . This implies that S(x, y) = S2 (x, y), the T (x, y) in the special case that x second component of SSIM, and that T (x, y) = 1 −
s2x + s2y − 2sxy x − y2 1 2sxy + 2 = = . (16) s2x + s2y + 2 s2x + s2y + 2 N − 1 s2x + s2y + 2
In other words, the function T (x, y) is an inverse variance-weighted squared L2 distance between x and its optimal affine approximation y. In fact, one can show (see [17]) that T (x, y) is indeed a metric when the means are matched. As mentioned earlier, lower-variance blocks are more easily approximated in the L2 sense than higher-variance blocks. Consequently, the Case 3 Δ-error distributions of images with a higher proportion of “flatter,” i.e., low variance, blocks will exhibit a greater degree of peaking near zero, particularly for Case 3. The structural similarity index compensates for this “flatness bias.” The question is whether this greater peaking should actually be interpreted as self-similarity. This is addressed in the next section.
3
Self-similarity of Natural Images vs. Pure Noise Images
The presence of noise in an image will decrease the ability of its subblocks to be approximated by other subblocks. In [11] it was observed that as (independent, Gaussian) noise of increasing variance σ 2 is added to an image, any near-zero peaking of its Δ-error distribution becomes diminished. Moreover, a χ-squared error distribution associated with the noise which peaks at σ eventually dominates the Δ-error distribution. This peaking at σ is actually the basis of the block-variance method of estimating additive noise. Naturally, the SSIM measure distributions will also be affected by the presence of noise. But instead of simply adding noise to natural images, we wish to study pure noise images. Synthesizing such kinds of images allows us to compare the Δ-error distributions of natural images with a benchmark image that possesses no self-similarity. Indeed, for independent pure noise images there is no selfsimilarity between two blocks in the sense that the expectation of the covariance
272
D. Brunet, E.R. Vrscay, and Z. Wang
between them is zero. The only parameters affecting the self-similarity (in RMSE or in SSIM-sense) are the local mean and the local variance of the image. This leads to the following idea: Generate an image from a uniform distribution with the local mean and variance matched to the statistics of a natural image. In our experiments, we chose an uniform distribution, but the histograms would have been similar for Gaussian or Poisson probability distribution. In Fig. 3 are shown two examples of pure noise images for which the local statistics are matched to a natural image. Also shown is a pure noise image following i.i.d. uniform distribution on [0, 1]. Disjoint blocks were used to compute the local statistics to be consistent through the paper, but it is by no means necessary to generate pure noise images block by block.
(a) Lena-like noise
(b) Mandrill-like noise
(c) Uniform noise
Fig. 3. Images made of uniform noise with statistics matching the local mean and variance of natural images: (a) Lena (b) Mandrill (c) Noise image with each pixel value taken from an uniform distribution on [0,1]
In Fig. 4, we compare the RMS Δ-error distribution of natural images with the RMS Δ-error distribution of a pure noise image with the local statistics matched and of an pure noise image following an uniform distribution on [0, 1]. We observe that there is no more self-similarity for natural images than for pure noise images with matched statistics. Notice that all possible blocks were compared, whereas in non-local image processing only a limited number of (best) blocks are usually needed. So even if the best matches are generally more self-similar, on average, natural images are not more self-similar than pure noise images with matched statistics. We conclude that low variance is the principal factor for self-similarity according to RMSE. In order to correct this low variance bias, the same experiment was performed with the SSIM index for Case 1-3. The results are shown in Fig. 5. Now, we can see a difference between the SSIM measure distributions of natural images and pure noise images. We quantify the self-similarity of images by computing the center of gravity (the mean of the distribution) of the SSIM measure distributions. The results are shown in Table 1. Again, the local variance has a major influence on the self-similarity of images, but now we can see, as hoped, that
Affine Self-similarity and Structural Similarity
(a) Lena
Case 1
(b) Mandrill
Case 1
(c) Lena
Case 2
(d) Mandrill
Case 2
(e) Lena
Case 3
(f) Mandrill
Case 3
273
Fig. 4. Comparison of RMS Δ-error distribution of Lena and Mandrill for Case 1-3 (grey histogram) with the RMS Δ-error distribution of pure noise images for which the local mean and local variance are matched (red) and with the RMS Δ-error distribution of a i.i.d. uniform pure noise image on [0, 1] (blue)
Table 1. Mean of the SSIM measure distributions of natural images (NI), pure noise images with matched statistics (MN) and uniform pure noise image (UN) for Case 1-3 Lena and Mandrill Case 1 Lena NI 0.2719 MN 0.2698 UN 0.0057
Case 1 Mandrill 0.0682 0.0684 0.0057
Case 2 Lena 0.3091 0.3074 0.0057
Case 2 Mandrill 0.0731 0.0735 0.0057
Case 3 Lena 0.5578 0.5206 0.1003
Case 3 Mandrill 0.2246 0.1896 0.1004
274
D. Brunet, E.R. Vrscay, and Z. Wang
(a) Lena
(c) Lena
(e) Lena
Case 1
Case 1-2
Case 3
(b) Mandrill
(d) Mandrill
(f) Mandrill
Case 1
Case 1-2
Case 3
Fig. 5. Comparison of SSIM measure distribution of Lena and Mandrill for Case 1-3 (grey histogram) with the SSIM measure distribution of pure noise images for which the local mean and local variance are matched (red) and the SSIM measure distribution of a i.i.d. uniform pure noise image on [0, 1] (blue)
natural images are more self-similar than pure noise images in the SSIM-sense. To determine theoretically the distribution of the structural similarity between two blocks generated by a known probability distribution remains a open question. The difficulty here is the fact that rational functions are involved in the definition of the SSIM measure. Acknowledgements. We gratefully acknowledge the generous support of this research by the Natural Sciences and Engineering Research Council of Canada (NSERC) in the forms of Discovery Grants (ERV, ZW), a Strategic Grant (ZW),
Affine Self-similarity and Structural Similarity
275
a collaborative research and development (CRD) grant (ERV, ZW) and a Postgraduate Scholarship (DB). ZW would also like to acknowledge partial support by the Province of Ontario Ministry of Research and Innovation in the form of an Early Researcher Award.
References 1. Buades, A., Coll, B., Morel, J.M.: A review of image denoising algorithms, with a new one. Multiscale Modelling and Simulation 4, 490–530 (2005) 2. Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.: Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Processing 16, 2080–2095 (2007) 3. Zhang, D., Wang, Z.: Image information restoration based on long-range correlation. IEEE Trans. Circuits and Systems for Video Tech. 12, 331–341 (2002) 4. Etemoglu, C., Cuperman, V.: Structured vector quantization using linear transforms. IEEE Trans. Signal Processing 51, 1625–1631 (2003) 5. Ebrahimi, M., Vrscay, E.R.: Solving the inverse problem of image zooming using “Self-examples”. In: Kamel, M.S., Campilho, A. (eds.) ICIAR 2007. LNCS, vol. 4633, pp. 117–130. Springer, Heidelberg (2007) 6. Elad, M., Datsenko, D.: Example-based regularization deployed to super-resolution reconstruction of a single image. The Computer Journal 50, 1–16 (2007) 7. Freeman, W.T., Jones, T.R., Pasztor, E.C.: Example-based super-resolution. IEEE Computer Graphics and Applications 22, 56–65 (2002) 8. Barnsley, M.F.: Fractals Everywhere. Academic Press, New York (1988) 9. Lu, N.: Fractal Imaging. Academic Press, New York (1997) 10. Ghazel, M., Freeman, G., Vrscay, E.R.: Fractal image denoising. IEEE Trans. Image Processing 12, 1560–1578 (2003) 11. Alexander, S.K., Vrscay, E.R., Tsurumi, S.: A simple, general model for the affine self-similarity of images. In: Campilho, A., Kamel, M.S. (eds.) ICIAR 2008. LNCS, vol. 5112, pp. 192–203. Springer, Heidelberg (2008) 12. La Torre, D., Vrscay, E.R., Ebrahimi, M., Barnsley, M.F.: Measure-valued images, associated fractal transforms and the self-similarity of images. SIAM J. Imaging Sci. 2, 470–507 (2009) 13. Wang, Z., Bovik, A.C.: Mean squared error: Love it or leave it? A new look at signal fidelity measures. IEEE Signal Processing Magazine 26(1), 98–117 (2009) 14. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Processing 13(4), 600–612 (2004) 15. Wainwright, M.J., Schwartz, O., Simoncelli, E.P.: Natural image statistics and divisive normalization: Modeling nonlinearity and adaptation in cortical neurons. In: Rao, R., et al. (eds.) Probabilistic Models of the Brain: Perception and Neural Function, pp. 203–222. MIT Press, Cambridge (2002) 16. Brunet, D., Vrscay, E.R., Wang, Z.: Structural similarity-based approximation of signals and images using orthogonal bases. In: Campilho, A., Kamel, M. (eds.) ICIAR 2010. LNCS, vol. 6111, pp. 11–22. Springer, Heidelberg (2010) 17. Brunet, D., Vrscay, E.R., Wang, Z.: A class of image metrics based on the structural similarity quality index. In: 8th International Conference on Image Analysis and Recognition (ICIAR 2011), Burnaby, BC, Canada (June 2011)
A Fair P2P Scalable Video Streaming Scheme Using Improved Priority Index Assignment and Multi-hierarchical Topology Xiaozheng Huang1, , Jie Liang1 , Yan Ding2 , and Jiangchuan Liu2 1 2
School of Engineering Science, Simon Fraser University School of Computing Science, Simon Fraser University Burnaby, BC, V5A 1S6 Canada {xha13,jiel}@sfu.ca, {yda15,jcliu}@cs.sfu.ca
Abstract. A fair P2P scalable video streaming scheme is proposed in this paper. The contributions of the paper are threefold. First, to improve the quality fairness of multiple video streams, a modified Lloyd-Max quantization-based Priority Index (PID) assignment method for scalable video coding (SVC) is developed, where the base-layer quality is also embedded in the Supplemental Enhancement Information (SEI) message of the SVC bit stream, in addition to the quantized rate-distortion slope of each MGS packet. Secondly, to build a fair ”contribute-and-reward” mechanism for P2P video streaming, we propose a multi-hierarchical topology that is based on peers’ uploading bandwidth. Finally, we combine these two parts to build a SVC-based P2P network, which fully utilizes the quality scalability of SVC, and the end-user quality is determined by its uploading bandwidth contribution. The performance of the scheme is demonstrated by simulation results. Keywords: Scalable Video Coding, Lloyd-Max Quantization, P2P video streaming.
1
Introduction
For a P2P multi-cast video streaming network, the bandwidth and content availability are two main constraints. The bandwidth constraint, especially the uploading bandwidth of each peer, is often the bottleneck of the transmission path in the network. The reasons are threefold [9]. Firstly, due to the inherent characteristics of some networks, like ADSL, the uploading capacity is much less than the downloading capacity. Secondly, the uploading capacity is sometimes limited by the save-and-forward capacity of the computers. Thirdly, the end-users may prefer to impose an upper limit on their uploading bit rate for various reasons, such as saving the energy and computational power. With the rapid improvements of broadband network coverage and the computational power of end-user devices, the third reason is usually the main obstacle
This work was supported in part by the Natural Sciences and Engineering Research Council of Canada under Grants RGPIN312262 and STPGP350416-07.
M. Kamel and A. Campilho (Eds.): ICIAR 2011, Part II, LNCS 6754, pp. 276–285, 2011. c Springer-Verlag Berlin Heidelberg 2011
Fair P2P Streaming Using Improved PID and Topology
277
that causes the bandwidth limit of P2P uploading capacity. The current P2P streaming schemes do not have effective incentives for users to upload more, because this does not guaranee an improved downloading service, especially if the videos that the users are interested in are not very popular, or are particularly bit-rate-demanding. The proposed P2P video streaming method in this paper is based on the scalable extension of the H.264/AVC, or SVC in short, with Medium Grain Scalability (MGS) [10]. SVC offers a variety of advantages, such as rate adaptivity. It is thus a promising technique for P2P video streaming. In [6] and [1], the performance of P2P video streaming using SVC is studied, which shows the benefits of a prioritization mechanism to react to network congestion. In [3], the popularity of videos is considered and the layered structure of SVC is exploited for P2P video streaming. In [2], the encoder configuration of SVC combined with multiple description coding for P2P video streaming is investigated. There are also some other papers which applied SVC to P2P video streaming. However, they only considered existing characteristics of SVC without further exploitation. In this paper, we only consider short videos, such as the videos on YouTube; hence we assume each peer has the capacity to buffer the whole video stream within one peer’s lifetime. We also assume that each peer can at least offer an uploading bandwidth that exceeds the base-layer bit rate of one video stream. A similar P2P structure was used in [9]. To facilitate the comparison of packets from different video streams, we adopt the previously developed Lloyd-Max quantization (LMQ) based Priority Index (PID) assignment method [11], where the reconstructed rate-distortion (R-D) slope of packets with the same PID is embedded in a Supplemental Enhancement Information (SEI) message. Moreover, to reduce the quality variation among different video streams, we also include in the SEI message the average baselayer quality of the video. This will help the server to make the decision. Another contribution of the paper is the development of multi-hierarchical topology for P2P network, which is based on the uploading bandwidth capacity of each peer, the requested video, and the request time instant. By combining the topology construction method and the modified SVC PID assignment method, a fair video quality that depends on on each user’s uploading bandwidth contribution can be achieved. Compared to the PID assignment method in the H.264 SVC reference software [14], significant improvement can be achieved by the proposed method.
2
The Modified LMQ-Based PID Assignment Method
In this section, we first review the LMQ-based PID assignment method developed in [11], then design a video multiplexing algorithm to take advantage of this PID assignment. The SVC extension of the H.264/MPEG-4 AVC allows temporal, spatial and quality scalability. The SVC bit stream contains one base layer (BL) and one or more enhancement layers (EL) [10]. The base layer is fully compatible with
278
X. Huang et al.
the current H.264/AVC, whereas the additional ELs progressively refine the spatial or temporal resolution, or the quality of the decoded video sequence. The scalability of SVC bit stream is realized by discarding part or all of the enhancement layers to meet various network or end-user constraints. In this paper, we focus on medium quality scalability (MGS), which is achieved by splitting the transformed coefficients within each block of the enhancement layer into several fragments. Each is assigned to one quality layer. This approach allows progressively improved quality and graceful degradation when fragments are discarded during adaptation. The SVC bit stream is encapsulated into packets with an integer number of bytes, called Network Abstraction Layer Units (NALU). Each NALU belongs to a specific spatial, temporal, and quality layer. To facilitate bit stream extraction, the H.264 SVC standard defines a 6-bit PID at the NALU header to signal the importance of the NALU. This enables a lightweight bit stream parser in the transmission path to dynamically drop some NALUs to meet the network or enduser constraints, before the decoder is actually invoked to decode the extracted partial bit stream. Therefore the PID assignment algorithm plays an important role in determining the final performance of the SVC. The PID assignment method implemented in the current SVC reference software was developed in [13]. It first treats each NALU as a quality layer and sorts all NALUs according to their R-D slopes. After that, it repeatly merges the two adjacent NALUs with the smallest merging cost until the target number of quality layers is reached, where the merging cost is defined to be the increase of the area under the R-D curve. However, the merging operations have high complexity. Moreover, the PID obtained this way is only valid within one video sequence. Comparing PIDs from different bit streams, as needed in video multiplexing, is usually meaningless, since the contribution of an NALU from one sequence with a lower PID might be much larger than an NALU from another sequence with a higher PID. To solve this problem, in [11], the PID assignment was formulated as a scalar quantization problem. The R-D slopes of all MGS NAL packets are quantized into 63 bins using the Lloyd-Max quantizer. The optimal solution of an M -level LMQ satisfies the following conditions [12]:
tq =
x ˆq−1 + xˆq 2
q = 1, 2, 3...M − 1
(1)
tq+1
xfX (x)dx t x ˆq = qtq+1 fX (x)dx tq
q = 1, 2, 3...M − 1
(2)
where xˆq ’s are the reconstruction points, and tq ’s are the decision boundaries. The optimal solution can be achieved by a fast iterative algorithm using the equations above [12]. In [11], the quantization bin indices are taken as the PIDs and the reconstructed R-D slope values are embedded into a SEI message as an look-up table.
Fair P2P Streaming Using Improved PID and Topology
30
5
4.5
279
Reference Proposed
Reference Proposed
25
4
3.5 2
PSNR Variance dB
Quality variance (dB)
2
20
3
2.5
2
15
10 1.5
1
5 0.5
0 350
400
450 500 Bandwidth Constraint (Kbps)
550
600
0
0
50
100
150 Frame Index
200
250
Fig. 1. PSNR variance of multiplexing Fig. 2. The frame-by-frame PSNR variNews and Coastguard ance of multiplexing Coastguard and News
By obtaining the reconstructed operational R-D slope values, the scheduler at intermediate network nodes along the transmission pathway can compare the RD slopes of MGS packets from multiple video streams and decide which packets to drop if there is not enough bandwidth. In this paper, to further facilitate bit stream extraction in video multiplexing, we also include the average base-layer PSNR of each Group of Pictures (GOP) in the SEI message. The average PSNR is also easy to get since the PSNR of each frame is usually readily available at the encoder during the encoding process. With the base-layer PSNR and the reconstructed R-D slope of each PID as the side information, we can use the following algorithm to achieve a fair video multiplexing scheme when multiple video streams need to share a common transmission channel and are expected to get the same quality: 1. Allocate only the base-layer packets (PID=63) to all videos, and find their PSNRs from the SEI packets. 2. Find the video with the lowest PSNR. 3. For PID = 62 : -1 : 0 (a) Add all MGS packets with the same PID to the current video. (b) Update the PSNR of the current video. (c) If the total bit rate of all videos exceeds the channel bandwidth constraint, stop the bit allocation algorithm. (d) If the current PSNR exceeds the second lowest PSNR, go to Step 2. Since the R-D slopes of packets with the same PID are identical and can be found from the SEI packet, the PSNR improvement in Step 3.b is simply the product of the R-D slope and the total size of these packets. To demonstrate the performance of this algorithm, Fig. 1 shows the video multiplexing results when two videos share a common channel with different bandwidths. The two videos used are News and Coastguard, both have 300 frames. The reference result is obtained using the default PID assignment method in the
280
X. Huang et al.
current JSVM reference software [14]. The bit stream extraction is achieved by keeping adding packets from the two videos with the same PID, starting from the highest PID, until the target bit rate is reached. It can be seen that the PSNR variance of the proposed method is much less than that of the reference software, due to the improved PID assignment method and the capability of monitoring the PSNR closely in the new method. In the old method, since the PSNRs with the base layer and different enhanced layer packets are not available to the multiplexer, it can only use the PID information to extract the bit stream, but packets from different videos with the same PID do not necessarily have similar R-D contributions. Fig. 2 shows an example of the frame-by-frame PSNR variance of News and Coastguard sequences using the two methods, with a channel bandwidth of 550 kbps. It can be observed that the PSNR variance is much smaller using the proposed method.
3 3.1
A Multi-hierarchical Topology for P2P System Overall Structure
The peer structure in P2P video streaming systems can be classified as clusterbased [4,5], tree-based [6,7], mesh-based and hybrid. Cluster-based approach provides good robustness and scalability. Tree-based approach has a pre-determined path and is easy to maintain. These two structures can also be employed together to form a hybrid overlay topology [8,9]. However, these strucures only consider the resources within a single P2P video streaming overlay. As mentioned before, we assume the uploading bandwidth is the main constraint for the P2P network. Meanwhile, the downloading bandwidth is usually not proportional to one’s contribution, which discourages the end-users to allocate more uploading bandwidth, thereby causing more severe resource starvation. In this paper, we propose a scheme in which the available downloading resources are proportional to one’s uploading bandwidth contribution. Hence it can achieve fair video quality among all receivers who are interested in different videos. We define a multi-hierarchical topology for the P2P network, as shown in Fig. 3. The outer-most layer of the structure is based on the uploading bandwidth of a peer, i.e., peers with similar uploading bandwidths are categorized into the same group. The transmission between peers can only happen within one group. As a result, peers who contribute more uploading bandwidths can benefit from peers who also allocate more uploading bandwidths. Thus the policy enjoys the property of ”contributing more, getting more”. Within each group, peers may request different videos, so there are multiple overlays, each corresponds to one video. This constitutes the second hierarchy. Within each overlay, we divide peers into different clusters, based on the time instants these peers request the video. Peers whose requested time instants are within a certain range are grouped as a cluster. Within the same cluster, there
Fair P2P Streaming Using Improved PID and Topology
'ƌŽƵƉ
KǀĞƌůĂLJ ůƵƐƚĞƌ
281
ůƵƐƚĞƌ ͙͙
ůƵƐƚĞƌ
͙͙ KǀĞƌůĂLJ
͙͙ ͙͙
'ƌŽƵƉ
KǀĞƌůĂLJ
͙͙
KǀĞƌůĂLJ
͙͙
͙͙
Fig. 3. The proposed multi-hierarchical topology
is no determined direction of transmission. Peers can transmit the video to each other as in a mesh network. These clusters form the third hierarchy. Also, peers who request the video earlier should have buffered more video contents; hence they should be put closer to the root in the tree. On the other hand, peers who just join the overlay should be the leaves of the tree because they have less to offer. The proposed strucuture is mainly based on the hybrid P2P video streaming strucuture as those in [8,9]. The new features are that we group peers based on their uploading bandwidths, and we also take advantage of the extra uploading bandwidth of other overlays. 3.2
Intra-overlay and Inter-overlay Transmission
In the proposed topology, there could be more than one overlay within the same uploading bandwidth group. Usually, the videos are transmitted within one video overlay. Since the instant transmission bit rate is determined by the server based on the sum of the uploading bandwidths, in some cases there could be some extra uploading bandwidths for one overlay and some shortages for another overlay, depending on the number of peers in each overlay. Another reason of the imbalance of the uploading bandwidth among overlays is the inherent characteristics of the videos. Videos with a lot of scene changes can be bit-rate-demanding while the bit rate of videos with less activities can be quite low. To avoid asking for videos directly from the server, the peers with extra uploading capacity can cache some video contents that they are not using and transmit them to the peers from another overlay who are interested in them, as shown in Fig. 4. The total uploading rate of intra-overlay and inter-overlay transmission will not exceed one’s uploading bandwidth. Once cached, the video can be repeatedly transmitted to different peers from another overlay. The benefit of
282
X. Huang et al.
Overlay 1
Server
Overlay 2
Cluster
Higher bandwidth group Lower bandwidth group
Inter-overlay transmission
Intra-overlay transmission
Fig. 4. The flowchart of P2P video streaming
doing so is to lessen the downloading request from the server and to achieve fair video quality for peers that contribute similar uploading bandwidths but from different overlays. 3.3
The Topology Construction
There are two types of data transmitted in the network. The first type is control messages, which are used for collecting the network condition including the video request, uploading bandwidth information, and topology status. Another type is the actual video contents that the end-users requested to view [9]. Firstly, we assume the server has the full version of every video. It provides the original video contents to peers at the beginning. It also needs to meet downloading requests from the peers when the peers cannot feed themselves. It will be shown later that using the scheme proposed above, this scenario rarely happens. Each peer can get partial or full version of the video based on the instantaneous total uploading bandwidth. For a peer who raises a video request to the server, the server first obtains the uploading bandwidth of this peer, and assigns the peer into the group that matches its uploading bandwidth and the overlay that corresponds to the requested video. The server then assigns the peer to a cluster in that overlay in which all peers have similar requesting time instants. This cluster is usually at the leaves of the tree. Within each group, the server records the uploading bandwidths of all peers. Based on the PID and embedded SEI information of the SVC-coded video, the server uses the PID assignment algorithm in Sec. 2 to calculate the bit rate for each video, while the instant dowloading bit rate of each peer is constrained to be the same by the server and their sum will not exceed the sum of the uploading bit rate within the group. Since our rate allocation algorithm can achieve similar
Fair P2P Streaming Using Improved PID and Topology
283
Table 1. The uploading bandwidth probability distribution ID 1 2 3 4 5
Uploading bandwidth (Kbps) Probability 312 0.1 340 0.1 384 0.3 420 0.4 512 0.1
video quality for all the peers. It is the uploading capacity within the same group that determines the quality of received video for each peer of which it belongs to. The root cluster within each overlay can only asks for the video content from the server, since there are no other peers who have cached the video content. However, except for the start-up, the root cluster usually contains the peers who have already finished viewing the video and only make uploading contribution. The cluster searches for the resources from higher-level clusters until the required downloading bit rate is reached. The transmission direction can only be one-way within a tree, which is from the top to the bottom, because the video requested by a peer in a lower cluster is only cached in the buffers of upper clusters. However, within each cluster, there is a control leader to negotiate with the peers to form a mesh based transmission path since the peers within one cluster requested the video at similar time instants.
4
Simulation and Results
We built a simulation environment to test the performance of the proposed P2P system. We assume that in each minute, the number of peers that join the P2P overlay follows the Poisson distribution, and the lifetime of each peer follows the Weibull distribution. We also assume that the uploading bandwidth distribution follows that in Table 1, and the downloading bandwidth is not constrained. For simplicity, we categorize the peers into two groups based on their uploading bandwidths, and the threshold is chosen as 400 Kbps. We use two repeatedly concatenated videos News and Coastguard as the video contents for the two P2P video streaming overlays. The peers whose requesting time are within 1 minute to each others are considered as in the same cluster. The total simulation time is 5000 minutes and the average number of peers in the system is 326. From the distribution, we can see that the average uploading bit rates are 360 kbps and 438 kbps, respectively. The instant PSNRs of the proposed method is shown in Fig. 5, from which we can see that the average PSNR of all peers in each group is proportional to their uploading bandwidth distribution. Fig. 7 comapres the variance of instant PSNR among peer groups by using the two PID assigning methods. It shows that the PSNR variance with the proposed method is lower than that using the JSVM reference software.
284
X. Huang et al. 6000
39.5 39
5000
Server traffic (Kbps)
PSNR−Y(dB)
38.5 38 37.5 37
4000
3000
2000
36.5 Uppper Group Lower Group
36 35.5
0
1000
2000 3000 Time Instant(Sec)
4000
1000
5000
0
0
1000
2000 3000 Time Instant(Sec)
4000
5000
Fig. 5. The instant average PSNR of two Fig. 6. The instantaneous server traffic groups in the P2P network
PSNR−Y(dB2)
5 Upper Group Lower Group
4 3 2 1 0
0
1000
2000 3000 Time Instant(Sec)
4000
5000
PSNR−Y(dB2)
5 4 3 2
Upper Group Lower Group
1 0
0
1000
2000 3000 Time Instant(Sec)
4000
5000
Fig. 7. The comparison of instant PSNR variance of two groups. (top) By proposed PID method. (Bottom) By the reference method.
Fig. 6 shows the instantaneous server traffic of the whole simulation, from which we can see that, except at the beginning of the simulation, the traffic from the server is zero for most of the time, meaning that the peers can support themselves to stream the videos.
5
Conclusion
In this paper, we first propose a new Lloyd-Max Quantization based Priority Index assigning method for the Scalable Extension of H.264/AVC be embedding the base-quality and incremental quality into SEI message in the SVC bitstream. We show this method can be applied to video multiplexing to ensure the fairness
Fair P2P Streaming Using Improved PID and Topology
285
of quality among multiple videos. We then develop a multi-hierarchical P2P topology, which can provide a fair ”contribute-and-reward” mechanism for a P2P video streaming network. We combine these two parts to build a SVC-based P2P network. The performance of the scheme is demonstrated by simulation results.
References 1. Sanchez, Y., Nchez, Y., Schierl, T., Hellge, C., Wiegand, T.: P2P group communication using Scalable Video Coding. In: 2010 17th IEEE International Conference on Image Processing (ICIP), Hong Kong, September 26-29 (2010) 2. Abanoz, T.B., Tekalp, A.M.: Optimization of encoding configuration in scalable multiple description coding for rate-adaptive P2P video multicasting, Cairo, November 7-10 (2009) 3. Bezerra, A., Melo, C., Vieira, D., Ghamri-Doudane, Y., Fonseca, N.: A contentoriented web cache policy. In: IEEE Latin-American Conference on Communications, LATINCOM 2009, September 10-11 (2009) 4. Xiang, Z., Zhang, Q., Zhu, W., Zhang, Z., Zhang, Y.Q.: Peer-to-peer based multimedia distribution service. IEEE Transactions on Multimedia 6(2), 343–355 (2004) 5. Liu, D.K., Hwang, R.H.: A P2P hierarchical clustering live video streaming system. In: Proc. 12th Intl Conf. on Computer Communications and Networks, pp. 115–120 (October 2003) 6. Baccichet, P., Schier, T., Wiegand, T., Girod, B.: Low-delay peer-to-peer streaming using scalable video coding. In: IEEE Packet Video Workshop, pp. 173–181 (2007) 7. Setton, E., Baccichet, P., Girod, B.: Peer-to-peer live multicast: A video perspective. Proceedings of the IEEE 96(1), 25–38 (2008) 8. Tran, D.A., Hua, K.A., Do, T.T.: A peer-to-peer architecture for media streaming. IEEE Journal on Selected Areas in Communication 22(1), 121–133 (2004) 9. Tunali, E.T., Fesci-Sayit, M., Tekalp, A.M.: Bandwidth-aware multiple multicast tree formation for P2P scalable video streaming using hierarchical clusters. In: 2009 16th IEEE Interna tional Conference on Image Processing (ICIP), Cairo (November 2009) 10. Schwarz, H., Marpe, D., Wiegand, T.: Overviewofthe scalable video coding extension of the H.264/AVC standard. IEEE Trans. Circ. Syst. Video Tech. 17(9), 1103–1120 (2007) 11. Huang, X., Liang, J., Du, H., Liu, J.: Lloyd-max quantization-based priority index assignment for the scal- able extension of h.264/avc. In: Proceedings of 2010 IEEE International Symposium on Circuits and Systems (ISCAS), Paris (June 2010) 12. Taubman, D., Marcellin, M.: JPEG 2000 Image Compression Fundamentals, Standards and Practice, ch. 3, pp. 98–101. Kluwer Academic Publishers, Dordrecht (2002) 13. Schwarz, H., Marpe, D., Wiegand, T.: Closed-loop coding with quality layers. In: JVT-Q030, Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG, Nice (October 2005) 14. Vieron, J.: Draft reference software for SVC. In: JVT-AB203, Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG, Hannover (July 2008)
A Novel Image Encryption Framework Based on Markov Map and Singular Value Decomposition Gaurav Bhatnagar1, Q.M. Jonathan Wu1 , and Balasubramanian Raman2 1
2
University of Windsor, Windsor, ON, Canada N9B 3P4 Indian Institute of Technology Roorkee, Roorkee-247 667, India {goravb,jwu}@uwindsor.ca,
[email protected]
Abstract. In this paper, a novel yet simple encryption technique is proposed based on toral automorphism, Markov map and singular value decomposition (SVD). The core idea of the proposed scheme is to scramble the pixel positions by the means of toral automorphism and then encrypting the scrambled image using Markov map and SVD. The combination of Markov map and SVD changed the pixels values significantly in order to confuse the relationship among the pixels. Finally, a reliable decryption scheme is proposed to construct original image from encrypted image. Experimental results demonstrate the efficiency and robustness of the proposed scheme.
1 Introduction With the ripening in the field of communication and internet technology, multimedia transmission over networks and storage on web servers have become a vital part of it. However, this convenience also causes substantial decrease in multimedia security. Cryptography/Encryption techniques are widely used to ensure security but these techniques are developed for textual data and hence inappropriate for direct implementation on multimedia. This is due to the multimedia properties like high redundancy and large volumes which require specific encryption techniques developed with the consideration of structural and statistical properties of multimedia content. Generally, the process of image encryption is divided into two phases, scrambling the image and then encrypting the scrambled image. Image scrambling cast the image elements into confusion by changing the position of pixel in such a way that the original image is not recognizable. But the original image can be obtained by performing reverse operations. Hence to make process complicated and enhance the security, scrambled image undergoes second phase. This phase essentially changes the pixel values in order to confuse the strong relationship among the pixels. The scrambling is done by various reversible techniques based on magic square transform, chaos system, gray code etc. In second phase, the scrambled image is then passed through some cryptographic algorithm like SCAN based methods [1, 2], chaos based methods [3–8], tree structure based methods [9, 10] and other miscellaneous methods [11, 12]. In this paper, a novel image encryption technique is presented based on the toral automorphism, Markov map and singular value decomposition. The first phase i.e. scrambling of pixels positions is done by toral automorphism. The second phase i.e. changing M. Kamel and A. Campilho (Eds.): ICIAR 2011, Part II, LNCS 6754, pp. 286–296, 2011. c Springer-Verlag Berlin Heidelberg 2011
A Novel Image Encryption Framework Based on Markov Map and SVD
287
the pixel values is done by singular value decomposition. For this purpose, singular values of a random matrix, generated from Markov map, is computed via SVD. Hankel matrix is then created using computed singular values and again decomposed into singular values, left and right singular vectors. Now, a secret image is obtained using left and right singular vectors. Using the secret image, encryption process is done and the encrypted image is sent to insecure network channel. From the results, it is observed that the proposed technique significantly reduces the correlation among the pixels by using Markov map and singular value decomposition framework. This paper is organized as follows: In sections 2 and 3, toral automorphism and singular value decomposition are introduced, followed by the introduction of Markov maps in section 4. The proposed image encryption technique is illustrated in section 5. Section 6, presents experimental results using proposed watermarking scheme and finally the concluding remarks are given in section 7.
2 Toral Automorphism The toral automorphism [13] is the mapping from torus to torus. In 2D case, the torus say T2 , can be viewed as the square where two points (x1 , y1 ) and (x2 , y2 ) are identified by either x1 = x2 , y1 = y2 or one of the two coordinates is 0 and other is 1. The simplest example of trous is quotient group R2 /Z2 , where R2 is a topological group with addition operation and Z2 is a discrete subgroup of it. More precisely, ab toral automorphism is given as r = T (r) = Ar(mod 1) where A = with cd integers a, b, c, d and det(A) = 1. This matrix A plays a vital role in the iterated dynamical system formed by T . Mathematically, the dynamical system based on toral automorphism is expressed as r(n + 1) = Ar(n)(mod 1) i.e. x(n + 1) x(n) =A (mod 1), n = 0, 1, 2, ... (1) y(n + 1) y(n) If a = 1, b = 1, c = 1 and d = 2 then toral automorphism is reduced to cat map. Hence, cat-map is a special case of toral automorphism. The working procedure of toral automorphism is depicted in figure 1. It essentially, stretches the unit square by transformation and then folds it into square by unit modulo operation. Hence, toral automorphism is area preserving. Toral automorphism can be easily be extended from unit square to a square of length N by stretching the square of length N via transformation and then folding it into square by N modulo operation. Hence, the generalized toral automorphism is expressed as r(n + 1) = Ar(n)(mod N ) i.e. x(n + 1) x(n) =A (mod N ) (2) y(n + 1) y(n) Toral automorphism is a special class of Anosov Diffeomorphisms which are extreme chaotic systems obeying local instability, ergodicity with mixing and decay of correlation and periodic. Due to periodicity, the original square will reappear after some large number of iterations.
288
G. Bhatnagar, Q.M.J. Wu, and B. Raman
Mapped Square
Stretching by Transformations Original Square
2
4 3
1 4
2
3
Folded into Square by Modulus Operation
1
Fig. 1. Working Process for Toral Automorphism
3 Singular Value Decomposition In linear algebra, the singular value decomposition(SVD) [14] is an important factorization of a rectangular real or complex matrix with many applications in signal/image processing and statistics. The SVD for square matrices was discovered independently by Beltrami in 1873 and Jordan in 1874, and extended to rectangular matrices by Eckart and Young in the 1930s. Let A be a general real(complex) matrix of order m × n. The singular value decomposition (SVD) of X is the factorization X =U ∗S∗VT
(3)
where U and V are orthogonal(unitary) and S = diag(σ1 , σ2 , ..., σr ), where σi , i = 1(1)r are the singular values of the matrix X with r = min(m, n) and satisfying σ1 ≥ σ2 ≥ ... ≥ σr . The first r columns of V are the right singular vectors and the first r columns of U are the left singular vectors. Use of SVD in digital image processing has some advantages. First, the size of the matrices for SVD transformation is not fixed. It can be a square or rectangle. Secondly, singular values in a digital image are less affected if general image processing is performed. Finally, singular values contain stable intrinsic algebraic image properties such that large difference in singular values does not occur whenever a small perturbation is added to the matrix.
4 1-D Chaotic Map: Linear Markov Maps A one dimensional map M : U → U, U ⊂ R, U usually taken to be [0,1] or [-1,1] is defined by the difference relation x(i + 1) = M(x(i)), i = 0, 1, 2, ...
(4)
where M(·) is a continuous and differentiable function which defines the map and x(0) is called the initial condition. Iterating this function with newly obtained value as initial condition, one can get the sequence of desired length associated with the map i.e. x(0), x(1) = M(x(0)), x(2) = M(x(1)), .... Further, the different values of x(0) are resulted in different sequences. The obtained sequence is called the orbit of the map associated with x(0). In order to check the chaoticity of a map, Lyapunov exponent(LE) and Invariant measure (IM) are considered [15]. The mathematical definition of these measures are given as
A Novel Image Encryption Framework Based on Markov Map and SVD
289
– Lyapunov Exponent (LE): The LE of the map shows the divergence rate between nearby orbits. It is defined as: λ = lim
L→∞
L−1 1 ln |M (x(l))| L
(5)
l=0
– Invariant Measure (IM): Invariant measure ρ(x) determines the density of the map which further shows the uniformity of map and defined as 1 δ|x − M(x(l))| L L
ρ(x) = lim
L→∞
(6)
l=0
If ρ(x) does not depend on the initial condition x(0), the map uniformly covers the interval U and the system is ergodic. The map M is said to be a linear Markov map [16] if it satisfies the following conditions:

1. The map is piecewise linear, that is, there exists a set of points 0 = μ1 < μ2 < · · · < μM = 1, called the partition points.
2. The map satisfies the Markov property, i.e. partition points are mapped to partition points:

∀i ∈ [1, M], ∃j ∈ [1, M] : M(μi) = μj    (7)

3. The map is eventually expanding, i.e. there exists an integer r > 0 such that

inf_{x∈[0,1]} |d M^r(x)/dx| > 1    (8)

For brevity, any map satisfying the above definition will be referred to as a Markov map. Any sequence obtained from a Markov map has an exponential autocorrelation function and a uniform distribution. Another key property, which makes Markov maps preferable to other maps, is that their spectral characteristics are completely controlled by the parameters of the map (μ). An example of a Markov map with very interesting properties is the skew tent map, illustrated in figure 2(I), which can be expressed as

M(x) = x/μ for x ∈ [0, μ];   M(x) = (x − 1)/(μ − 1) for x ∈ (μ, 1]    (9)

The skew tent map is further modified in order to obtain better properties. The extended/generalized skew tent map, illustrated in figure 2(II), is given by

M(x) = (2x + 1 − μ)/(1 + μ) for x ∈ [−1, μ];   M(x) = (2x − μ − 1)/(μ − 1) for x ∈ (μ, 1]    (10)
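A minimal sketch of iterating the generalized skew-tent map of Eqn. 10 to produce a chaotic sequence is given below (illustrative Python only; the initial condition and μ are borrowed from the key values reported later in Section 6):

```python
import numpy as np

def generalized_skew_tent(x, mu):
    """One iteration of the generalized skew-tent map (Eqn. 10) on [-1, 1], mu in (-1, 1)."""
    if x <= mu:
        return (2.0 * x + 1.0 - mu) / (1.0 + mu)
    return (2.0 * x - mu - 1.0) / (mu - 1.0)

def chaotic_sequence(x0, mu, length):
    """Orbit of the map starting from the initial condition x0."""
    seq = np.empty(length)
    x = x0
    for i in range(length):
        x = generalized_skew_tent(x, mu)
        seq[i] = x
    return seq

print(chaotic_sequence(x0=0.8, mu=-0.3456, length=10))
```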
Fig. 2. I) Skew-tent Map II) Generalized Skew-tent Map

Fig. 3. Lyapunov Exponent Curve for Skew-tent map (Eqn. 9) and its generalized version (Eqn. 10)
Unlike the skew-tent map, the generalized skew-tent map maps (−1, 1) onto (−1, 1) with μ ∈ (−1, 1). Therefore, the domain and range of the generalized map are twice as large, with more possible values of μ, compared to the traditional skew-tent map. Using Eqn. 5, one can verify that the LEs of the skew-tent map and its generalized version are

λ_M = −μ ln(μ) + (1 − μ) ln(1/(1 − μ)) > 0   (skew-tent map)
λ_M = ((μ + 1)/2) ln(2/(μ + 1)) + ((1 − μ)/2) ln(2/(1 − μ)) > 0   (generalized skew-tent map)    (11)

The positive value of the LE for all μ (Eqn. 11) shows the chaoticity of the maps. The Lyapunov exponent curves for both maps are depicted in figure 3, which again shows the chaotic nature of the maps over the whole domain. Similarly, using Eqn. 6, one can obtain the IM of the skew-tent map and its generalized version, given by
ρ_M = 1 (skew-tent map)   and   ρ_M = 1/2 (generalized skew-tent map)    (12)

From Eqn. 12 it is clear that the IM is independent of the initial condition. Hence, each map uniformly covers its interval U and the system is ergodic. In the present work, we have used the generalized skew-tent map due to its advantage over the skew-tent map, i.e. the larger range for x and μ.
5 Proposed Technique

In this section, the motivating factors in the design of the proposed image encryption framework are discussed. The proposed technique takes an image and produces an encrypted image which can be decrypted later for various purposes. Without loss of generality, assume that F represents the original image of size M × N (M < N). The proposed technique can be described as follows.

5.1 Encryption Process

1. First Phase: Scramble the image pixel positions using the toral automorphism by iterating it l times with selected a, b, c and d. The values of a, b, c, d and l are secret and are used as keys. Let the l-times scrambled image be denoted by Fs^l.
2. Adopting k0 and μ as keys, iterate the generalized skew-tent map (Eqn. 10) to generate M × N values {ki : i = 1, 2, ..., M × N}.
3. Map the obtained chaotic sequence k into an integer sequence z as follows:

if j/(M × N) ≤ ki < (j + 1)/(M × N), then zi = j    (13)

where j = 1, 2, 3, ..., M × N.
4. Arrange the integer sequence z into a random matrix Z of size M × N and perform SVD on it:

Z = U_Z S_Z V_Z^T    (14)

5. Using the singular values of Z, i.e. S_Z = {σp | p = 1, 2, ..., r (= min(M, N))}, build the Hankel matrix H_Z given by

H_Z = [ σ1     σ2    σ3   · · ·  σr−1   σr ]
      [ σ2     σ3    σ4   · · ·  σr     0  ]
      [ ...    ...   ...  · · ·  ...    ...]
      [ σr−1   σr    0    · · ·  0      0  ]
      [ σr     0     0    · · ·  0      0  ]    (15)

6. Perform SVD on the obtained Hankel matrix:

H_Z = U_{H_Z} S_{H_Z} V_{H_Z}^T    (16)
7. Obtain the matrix key K from U_{H_Z} and V_{H_Z} as

K = U_{H_Z} V_{H_Z}^T    (17)

The matrix key K is an orthogonal matrix, i.e. K K^T = I, since it is the product of two orthogonal matrices.
8. Second Phase: Change the pixel values of the scrambled image using the matrix key K to obtain the encrypted image F^e as

F^e = K Fs^l    (18)
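The following Python/NumPy sketch walks through steps 2-8 for a small image, as one possible reading of the procedure; it is not the authors' implementation. The scrambling of step 1 is omitted, and the quantization of Eqn. 13 is simplified here to a rank-based mapping of the chaotic values onto the integers 1, ..., M×N.

```python
import numpy as np

def skew_tent_gen(x, mu):
    # Generalized skew-tent map (Eqn. 10).
    return (2*x + 1 - mu)/(1 + mu) if x <= mu else (2*x - mu - 1)/(mu - 1)

def matrix_key(M, N, k0=0.8, mu=-0.3456):
    # Steps 2-7: chaotic sequence -> integer matrix Z -> SVD -> Hankel(sing. values) -> SVD -> K.
    k, x = np.empty(M*N), k0
    for i in range(M*N):
        x = skew_tent_gen(x, mu)
        k[i] = x
    # Step 3 (simplified assumption): map chaotic values to integers 1..M*N by their rank.
    z = np.argsort(np.argsort(k)) + 1
    Z = z.reshape(M, N).astype(float)
    # Step 4: singular values of Z.
    sz = np.linalg.svd(Z, compute_uv=False)
    r = len(sz)
    # Step 5: Hankel matrix built from the singular values (Eqn. 15).
    H = np.zeros((r, r))
    for i in range(r):
        for j in range(r - i):
            H[i, j] = sz[i + j]
    # Steps 6-7: SVD of H and orthogonal matrix key K = U_H V_H^T (Eqn. 17).
    Uh, _, Vht = np.linalg.svd(H)
    return Uh @ Vht

M = N = 8
F_scrambled = np.random.randint(0, 256, size=(M, N)).astype(float)  # stands in for Fs^l
K = matrix_key(M, N)
F_enc = K @ F_scrambled                 # step 8 (Eqn. 18)
F_dec = K.T @ F_enc                     # decryption (Eqn. 19), since K is orthogonal
print(np.allclose(F_dec, F_scrambled))  # True
```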
5.2 Decryption Process

The decryption process consists of the following steps:
1. Adopting the keys k0 and μ, steps 2 through 7 of the encryption process are performed to obtain the matrix key K.
2. Obtain the decrypted scrambled image from F^e with the help of the matrix key, i.e.

Fs^l = inv(K) F^e = K^T F^e    (19)

3. Scramble the pixels of Fs^l a further P − l times to obtain the decrypted image F^d, where P is the period of the toral automorphism for the original image F.
6 Experiments and Security Analysis

The performance of the proposed encryption technique is demonstrated using the MATLAB platform. A number of experiments are performed on different grayscale images, namely Barbara, Lena and Cameraman, used as original images of size 256×256. Due to page restrictions, visual results are given only for the Barbara image, whereas numerical results are given for all images. In the proposed technique, seven parameters are used as keys: a, b, c, d, l, μ and k0. The first five keys are the parameters of the toral automorphism and are taken as a = 8, b = 5, c = 3, d = 2 and l = 150. For building the matrix key, μ = −0.3456 and k0 = 0.8 are used as the initial parameters of the Markov map. The encrypted and decrypted images obtained with these keys are shown in figures 4(II, III).

Security is a major issue for encryption techniques. A good encryption technique should be robust against all kinds of cryptanalytic, statistical and brute-force attacks. In this section, a complete investigation of the security of the proposed encryption technique is carried out, including sensitivity analysis, statistical analysis and numerical analysis, to show that the proposed technique is secure against the most common attacks. The detailed analyses are given as follows.

6.1 Key Sensitivity Analysis

For good security, a slight change in the keys should never yield a correct decryption. For this purpose, the key sensitivity of the proposed technique is validated. In the proposed technique, seven keys (a, b, c, d, l, μ and k0) are used. Keys a, b, c, d are used in the toral automorphism to form the matrix A.
Fig. 4. I) Original Image II) Encrypted Image; Decrypted Image III) with all correct keys IV) with swapped values of a and d V) with wrong a, b, c and d (a = 12, b = 7, c = 5, d = 3) VI) with wrong l (l = 151) VII) with wrong k0 (k0 = 0.7999) VIII) with wrong µ (µ = −0.3455)
To check the sensitivity of these keys, the values of the leading diagonal of A are swapped (i.e. a and d are exchanged) while all other keys remain unaltered. The corresponding result is shown in figure 4(IV). It is clear that after swapping only two values one cannot obtain the correct decrypted image. Hence, the proposed technique is highly sensitive to a, b, c, d. Figure 4(V) shows the decrypted image when all of a, b, c, d are changed (a = 12, b = 7, c = 5, d = 3). Figures 4(VI–VIII) show the decrypted images when l, k0 and μ, respectively, are wrong. Since l represents the number of iterations of the toral automorphism, it is always an integer; hence, for a slight change, l is either decreased or increased by 1. Figure 4(VI) shows the result when l is increased by 1. Similarly, figures 4(VII, VIII) show the results of changes in k0 and μ. The changes are made such that the old values (k0 = 0.8, μ = −0.3456) and the new values (k0 = 0.7999, μ = −0.3455) are approximately the same. Hence, the proposed technique is highly sensitive to the keys.

6.2 Statistical Analysis: Histogram and Correlation Analysis

Another way to evaluate an encryption technique is statistical analysis, which here comprises two parts: 1) histogram analysis and 2) correlation analysis. According to the first, for a good encryption technique the image histogram should become uniform after encryption. Figures 5(I, II) show the histograms of the original and encrypted images; it is clear that after encryption the histogram becomes uniform. The second part requires that a good encryption technique break the correlation among adjacent pixels of the image. For this purpose, the correlation between two adjacent pixels is calculated; the encryption is considered good if the correlation is as far from 1 as possible. First, P pairs of adjacent pixels (horizontally, vertically or diagonally adjacent) are selected randomly, and then their correlation is calculated as
r_{x,y} = E[(x − E(x))(y − E(y))] / sqrt( [E(x²) − (E(x))²] [E(y²) − (E(y))²] )    (20)

where x and y are the gray levels of two adjacent pixels in the image and E(·) denotes the expected (mean) value. Figures 5(III, IV) show the correlation distribution of two horizontally adjacent pixels in the original and encrypted images. The correlation coefficients in all directions are reported in table 1; for the encrypted images they are far from 1. Hence, the proposed technique is able to break the high correlation among neighbouring pixels.
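A possible NumPy implementation of this adjacent-pixel correlation test is sketched below (the number of sampled pairs and the random test image are illustrative assumptions):

```python
import numpy as np

def adjacent_pixel_correlation(img, n_pairs=2000, direction="horizontal", seed=0):
    """Correlation coefficient (Eqn. 20) of randomly sampled adjacent pixel pairs."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    dy, dx = {"horizontal": (0, 1), "vertical": (1, 0), "diagonal": (1, 1)}[direction]
    ys = rng.integers(0, h - dy, n_pairs)
    xs = rng.integers(0, w - dx, n_pairs)
    x = img[ys, xs].astype(float)
    y = img[ys + dy, xs + dx].astype(float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return cov / np.sqrt(x.var() * y.var())

img = np.random.randint(0, 256, size=(256, 256))   # stands in for an encrypted image
print(adjacent_pixel_correlation(img))             # close to 0 for a well-encrypted image
```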
Table 1. Correlation coefficients of two adjacent pixels in original and encrypted image

Direction    | Original Image                | Encrypted Image
             | Barbara   Lena     Cameraman  | Barbara   Lena      Cameraman
Horizontal   | 0.9682    0.9826   0.9862     | -0.1107   0.0914    -0.0716
Vertical     | 0.9257    0.9551   0.9675     | -0.1123   -0.0029   -0.0156
Diagonal     | 0.9514    0.9753   0.9539     | 0.0083    0.0601    0.0831
Fig. 5. Histogram of I) Original Image II) Encrypted Image; Correlation plot of two horizontal adjacent pixels in III) Original Image IV) Encrypted image
6.3 Numerical Analysis

Finally, a numerical analysis is carried out to evaluate the proposed framework. The numerical analysis reports the values of objective metrics, i.e. metrics that provide efficient test methods and are suitable for computer simulations. Peak signal-to-noise ratio (PSNR), spectral distortion (SD), normalized singular value similarity (NSvS) [17] and the Universal Image Quality Index (UIQ) [18] are used as the objective metrics to evaluate the proposed technique. Table 2 shows the values of these metrics between the original and encrypted images and between the original and decrypted images (with correct keys) for all experimental images. The table shows that encryption strongly degrades each metric while decryption restores it, in accordance with the definitions given above. Therefore, the proposed technique is able to perfectly encrypt and decrypt the images.

Table 2. Numerical analysis of proposed technique

       | Original vs. Encrypted Image             | Original vs. Decrypted Image
Metric | Barbara      Lena         Cameraman      | Barbara      Lena         Cameraman
PSNR   | 10.3969      9.7845       10.3575        | 235.7433     231.7721     237.0556
SD     | 60.1270      57.7603      62.5469        | 0.0469       0.0344       0.0952
NSvS   | 120.9058     129.5549     121.8048       | 2.1419×10−3  5.8935×10−3  2.0451×10−3
UIQ    | 2.2958×10−4  4.8305×10−4  1.2363×10−4    | 1            0.9994       0.9959
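Of the four metrics, PSNR is the most standard; a minimal implementation is sketched below for reference (SD, NSvS and UIQ follow the definitions in [17] and [18] and are not reproduced here):

```python
import numpy as np

def psnr(original, processed, peak=255.0):
    """Peak signal-to-noise ratio in dB: low between original and encrypted images,
    high between the original and a correctly decrypted image."""
    mse = np.mean((original.astype(float) - processed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```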
7 Conclusions

This paper proposes a simple yet efficient image encryption technique that encrypts an image using the toral automorphism, a Markov map and the singular value decomposition. The toral automorphism is used for scrambling the pixel positions, whereas the Markov map and the singular value decomposition are used for changing the pixel values. A security analysis is also given to demonstrate that the right combination of keys is essential to reveal the original image; it confirms the efficiency and robustness of the proposed technique. The algorithm is suitable for any kind of grayscale image and can be further extended to color images. This extension can easily be done either by applying the proposed technique separately to all color channels or by converting the original image to an independent color space (such as YCbCr) and then applying the proposed technique.
Acknowledgement. This work is supported by the Canada Research Chair program and the NSERC Discovery Grant. One of the authors, Dr. B. Raman, acknowledges DST, India, for supporting the collaboration with the CVSS Lab, University of Windsor, during his BOYSCAST fellowship tenure awarded by DST.
References

1. Maniccam, S.S., Bourbakis, N.G.: Image and Video Encryption using Scan Patterns. Pattern Recognition 37, 725–737 (2004)
2. Bourbakis, N.: Image Data Compression Encryption using G-SCAN Patterns. In: Proceedings of IEEE Conference on SMC, Orlando, FL, pp. 1117–1120 (1997)
3. Guan, Z.H., Huang, F., Guan, W.: Chaos-based Image Encryption Algorithm. Physics Letters A 346, 153–157 (2005)
4. Gao, H., Zhang, Y., Liang, S., Li, D.: A New Chaotic Algorithm for Image Encryption. Chaos, Solitons and Fractals 29(2), 393–399 (2005)
5. Tong, X., Cui, M.: Image encryption scheme based on 3D baker with dynamical compound chaotic sequence cipher generator. Signal Processing 89(4), 480–491 (2009)
6. Gao, T., Chen, Z.: A new image encryption algorithm based on hyper-chaos. Physics Letters A 372(4), 394–400 (2008)
7. Gao, H., Zhang, Y., Liang, S., Li, D.: A new chaotic algorithm for image encryption. Chaos, Solitons and Fractals 29(2), 393–399 (2006)
8. Gao, T.G., Chen, Z.Q.: Image encryption based on a new total shuffling algorithm. Chaos, Solitons and Fractals 38(1), 213–220 (2008)
9. Chang, L.: Large Encrypting of Binary Images with Higher Security. Pattern Recognition Letters 19(5), 461–468 (1998)
10. Li, X.: Image Compression and Encryption using Tree Structures. Pattern Recognition Letters 18(11), 1253–1259 (1997)
11. Chuang, T., Lin, J.: New Approach to Image Encryption. Journal of Electronic Imaging 7(2), 350–356 (1998)
12. Chuang, T., Lin, J.: A New Multiresolution Approach to Still Image Encryption. Pattern Recognition and Image Analysis 9(3), 431–436 (1999)
13. Pollicott, M., Yuri, M.: Dynamical Systems and Ergodic Theory. London Mathematical Society Student Texts, Cambridge University Press (1998)
14. Golub, G.H., Reinsch, C.: Singular value decomposition and least squares solutions. Numerische Mathematik 14(5), 403–420 (1970)
15. Schuster, H.G., Just, W.: Deterministic Chaos. Wiley-VCH (2005)
16. Tefas, A., Nikolaidis, A., Nikolaidis, N., Solachidis, V., Tsekeridou, S., Pitas, I.: Performance analysis of correlation-based watermarking schemes employing Markov chaotic sequences. IEEE Transactions on Signal Processing 51(7), 1979–1994 (2003)
17. Bhatnagar, G., Raman, B.: Distributed multiresolution discrete Fourier transform and its application to watermarking. International Journal of Wavelets, Multiresolution and Information Processing 8(2), 225–241 (2010)
18. Wang, Z., Bovik, A.C.: A Universal Image Quality Index. IEEE Signal Processing Letters 9(3), 81–84 (2002)
A Self-trainable System for Moving People Counting by Scene Partitioning

Gennaro Percannella and Mario Vento

Universita’ di Salerno, Dipartimento di Ingegneria Elettronica ed Ingegneria Informatica, Via Ponte Don Melillo, 84084 Fisciano (SA), Italy
[email protected],
[email protected]
Abstract. The paper presents an improved method for estimating the number of moving people in a scene for video surveillance applications. The performance is measured on the public database used for the PETS international competition and compared, on the same database, with the methods that have participated in that contest so far. The system exhibits a high accuracy, ranking it among the top positions, and proved fast enough to be used in real-time surveillance applications.
1 Introduction

Knowing the number of people present in an area is such an important issue in the framework of video surveillance applications that an increasing number of papers on this topic have been proposed in the recent past. Despite the fact that some pioneering systems have recently been made available, further improvements are still necessary, especially concerning their generality and flexibility. The estimation accuracy of the number of people must be sufficiently high, even in the presence of dense crowds. In this respect, it is worth pointing out that in these situations only parts of people's bodies appear in the image; the occluded parts generally cause significant underestimation in the counting process. This means that partial occlusions must be forecasted and suitably taken into account starting from the information about the crowd density. Another crucial point is the unavoidable presence of perspective distortions: with the optical equipment of the cameras, especially those with wide-angle views, people far from the camera appear small while close ones appear significantly bigger. Methods must therefore deal with this perspective issue, so as to obtain an estimation independent of the local scale of the image. Moreover, it is convenient that the system works with uncalibrated cameras, as calibration is generally time consuming and demands suitable technical skills not owned by the end user.
This research has been partially supported by A.I.Tech s.r.l., a spin-off company of the University of Salerno (www.aitech-solutions.eu).
Consequently, the availability of simple training procedures, not requiring knowledge of the internal organization of the computational model but possibly depending on simple geometric properties derivable from the scene, is an extremely desirable feature.

Recently, the literature has been enriched with indirect methods (also called map-based or measurement-based) that face the problem as follows: the number of people is estimated by measuring the occurrence of suitably defined features that do not require the separate detection of each person in the scene; these features are then related to the number of people. Some methods belonging to this category base the people estimation on the amount of moving pixels [4], blob size [8], fractal dimension [11] or other texture features [13]. Despite their simplicity, promising performance in terms of estimation accuracy is obtained by the approaches proposed in [1], [3] and [5]; all of them have been submitted to the PETS 2009 and 2010 contests on people counting and achieved very encouraging results. In particular, in Albiol's paper [1], the authors use corner points (detected using the Harris algorithm [7]) as features. Although Albiol's method has proved to be quite robust, confirming the validity of its rationale, its accuracy decreases in the presence of highly complex scenes, with a wide depth range (people moving in the direction of the camera) and highly crowded moving groups. The successive paper [5] explicitly deals with perspective effects and occlusions; a trainable regressor (the ε-SVR algorithm) is used to obtain the relation between the number of people and the number of moving points, a relation made complex by the above mentioned effects. Experimental results demonstrated the improvement with respect to the method by Albiol et al.; however, this is obtained at the cost of complex setup procedures for training the ε-SVR regressor. In [6] the same authors present a variant of the above method aimed at making the learning procedure fully automatic. It deals with the perspective effects by subdividing the entire scene into horizontal stripes; the latter have a size depending on their distance from the camera, justifying the hypothesis of a linear relationship between the number of feature points and the number of people contained. Experimental results encourage investigation in this direction: performance remains practically unchanged, but with a simpler system depending on a few parameters obtainable by a linear approximation.

In this paper we present some improvements of the approach in [6]. Firstly, we define a new method for extracting the moving feature points, which proved to be more effective and considerably faster. Moreover, a fully automated procedure for training all the needed parameters is presented: a brief video sequence showing a person walking around the scene is analyzed to directly obtain all the parameters needed by the system. The organization of the paper is the following: in the next section we summarize the approach in [6], while in the following two sections we describe the original contributions provided. Finally, we discuss experimental results and draw some conclusions.
2 Rationale of the Method

The method is based on the assumption that each person in a scene is attributed a number of salient points that depends only on the distance of that person from
the camera. In particular, the authors adopt the SURF algorithm proposed in [2]. SURF is inspired by the SIFT scale-invariant descriptor [10], but replaces the Gaussian-based filters of SIFT with filters based on Haar wavelets, which are significantly faster to compute. The detected points are subsequently classified as static or moving. The classification is aimed at pruning the static points, as they are not associated with persons. Each salient point p(x, y) detected in the frame at time t is attributed a motion vector v(x, y) with respect to a reference frame at time t − k, by using a block-matching technique, and is consequently classified as

p(x, y) = moving point if |v(x, y)| > 0;   static point if |v(x, y)| = 0    (1)

From the above assumptions, the total number P of persons in the scene is estimated as

P = Σ_{i=1}^{N} ω(d(pi))    (2)

where N is the number of moving points and ω(d(pi)) is the weight attributed to the i-th point pi, assigned on the basis of its distance d(pi) from the camera. In order to solve the problem of perspective normalization, the frames are partitioned into longitudinal non-overlapping bands, whose height is equal to the height in pixels of an average person whose feet are on the base of the band itself; with this assumption, the same weight is attributed to all the points falling in a band. Accordingly, equation 2 can be rewritten as

P = Σ_{i=1}^{N} ω(B_{pi})    (3)

where B_{pi} is the band the point pi belongs to. Once the set of band weights Ω = {ω(Bk)} has been determined, it is possible to calculate the total number of persons in the scene by equation 3.
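A sketch of the counting rule of equation 3, assuming the band boundaries and the weight set Ω have already been estimated (all variable names and the toy values below are illustrative):

```python
import numpy as np

def count_people(point_rows, band_tops, band_weights):
    """Estimate P = sum_i w(B_{p_i}): each moving point contributes the weight
    of the horizontal band it falls into.

    point_rows   : image-row coordinate of each moving salient point
    band_tops    : increasing row coordinates where each band starts
    band_weights : one weight per band (persons per salient point)
    """
    bands = np.searchsorted(band_tops, point_rows, side="right") - 1
    bands = np.clip(bands, 0, len(band_weights) - 1)
    return float(np.sum(np.asarray(band_weights)[bands]))

# Toy example: 3 bands starting at rows 0, 120, 200, with weights learned offline.
print(count_people(point_rows=np.array([15, 40, 130, 210, 230]),
                   band_tops=np.array([0, 120, 200]),
                   band_weights=np.array([0.05, 0.08, 0.12])))
```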
3 Improving the Moving Point Classification

The method used for discriminating between static and moving points adopts a block-matching technique to estimate the motion vector. Typically, block matching requires the definition of a suitable search algorithm working on a search area; the latter, which should contain the candidate blocks in the previous frame, can be made more or less wide. A fully exhaustive approach extends the search everywhere in the frame, with a significant computational expense, without considering that the motion of the objects of interest (the persons) is much smaller than the frame size. Simply limiting the search to a window centered on the considered block, with a size slightly larger than the
maximum possible motion of the persons, would significantly reduce the number of candidate blocks without introducing estimation errors. Hereinafter, we will refer to this motion estimation algorithm as window search. A further reduction of the processing time is possible by adopting other search methods (three step search, 2D-logarithmic search, cross search, ...) [9], which determine a suboptimal solution by reducing the number of candidate blocks to analyze. These approaches have a complexity of Θ(log s) when the window has an area of size Θ(2^{2s}). In order to reduce the effect of noise, it is possible to incorporate zero-motion biasing into the block-matching technique. The current block is first compared with the block at the same location in the previous frame, before doing the search: if the difference between these two blocks is below a threshold (γZM), the search is terminated, resulting in a zero motion vector, without analyzing the neighboring points. Zero-motion biasing reduces false motion due to image noise as well as the processing time, by eliminating searches; unfortunately, it may produce some false negatives by assigning a zero motion vector to a non-static point. Hence, the value of γZM has to be determined as the best trade-off between the two opposite effects. Since we are not interested in the exact value of the motion vector, but only in discriminating between moving and static points, we propose to adopt an approach (which we call bias classification) that simply classifies a point as moving or static according to whether the difference between the blocks centered on it in the current and in the reference frame is above or below γZM. In particular,

p(x, y) = moving point if M(x, y) > γZM;   static point if M(x, y) ≤ γZM    (4)

where p(x, y) is the interest point and M(x, y) measures the dissimilarity between the block centered at (x, y) in the current frame and the homologous block in the reference frame. We expect this approach to preserve the classification accuracy of the previous approaches while significantly reducing the processing time, as it analyzes just one candidate block.
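A minimal sketch of the bias classification rule of equation 4 is given below; the mean absolute block difference used as the dissimilarity measure M(x, y) and the default threshold value are assumptions for illustration, not necessarily the authors' choices.

```python
import numpy as np

def is_moving(frame, ref_frame, x, y, gamma_zm=11.5, block=9):
    """Classify the salient point at (x, y) as moving (True) or static (False)
    by comparing the block around it in the current and reference frames.
    Assumes the point lies at least block//2 pixels away from the image border."""
    h = block // 2
    cur = frame[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    ref = ref_frame[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    dissimilarity = np.mean(np.abs(cur - ref))   # M(x, y): mean absolute block difference
    return dissimilarity > gamma_zm
```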
4 Automatic Training Procedure

The set-up procedure of the method first requires the determination of the height of the bands; these depend on the geometric parameters of the system, such as the focal length and the position of the camera in the environment, and a closed formula is obtainable, at least in the most general case, if the camera has been suitably calibrated. Once the bands have been properly determined, the procedure is completed by estimating, for each band, the corresponding counting coefficient ω(Bk). However, camera calibration is a costly procedure that requires skilled personnel, not always available at installation time.
Fig. 1. On top, sample frames with (pi , hi ) couples; on bottom, the analytical expression of the function h = f (p)
Fig. 2. Subdivision of the frames of the video sequences used for the tests: a) S1.L1.13-57 (view 1), b) S1.L2.14-31 (view 2). The height of each band approximately corresponds to the height of a person in real-world coordinates.
To overcome this problem, we propose an automatic procedure using a short video sequence in which a person (with height between 1.6 m and 1.8 m) randomly walks within the scene in different directions, so as to obtain a good coverage of the visual area. From each frame, we extract the moving salient points and automatically determine the position pi of the person and the corresponding height hi; once a sufficient number of these couples (pi, hi) has been extracted, it is possible to obtain, by an approximation method, the analytical expression of the function h = f(p) that gives the height in pixels of a person at position p of the image (see figure 1 for an example). This function is then used for partitioning the frame into bands by an iterative procedure: the first band, say B0, is by definition located at the bottom
of the frame (at p0 = 0) and its height is calculated as h0 = f(p0). Iterating the process, the second band B1 is positioned immediately on top of B0, at row p1 = p0 + h0 + 1 of the image, and its height is h1 = f(p1). In the most general case, the position and the height of the i-th band are pi = pi−1 + hi−1 + 1 and hi = f(pi). The iterative process terminates either when the image has been completely scanned or when the height of a band falls below a certain threshold. The latter situation occurs in installations characterized by a high field depth; in this case the upper part of the frame is excluded from the analysis. An example of the division of a frame into bands is shown in figure 2, where it is possible to visually verify that the height of some bands coincides with the height of a person.

The computation of the set of weights Ω is carried out by acquiring another short video showing a small group of persons randomly crossing the scene. The SURF moving points are extracted from each frame; then the set F of frames in which the fraction of detected moving points falling in a single band is above a certain threshold (fixed to 75% in our tests) is selected. The weight of the i-th band is obtained as

ω(Bi) = ( Σ_{f∈F} P_{f,i} ) / ( Σ_{f∈F} p_{f,i} )    (5)

where P_{f,i} and p_{f,i} are the number of persons and the number of points that in the f-th frame fall in the i-th band, respectively.
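The iterative band-partitioning procedure can be sketched as follows, assuming the person-height function h = f(p) has already been fitted; the linear height model used in the example is a placeholder, not a value from the paper.

```python
def partition_into_bands(frame_height, f, min_band_height=8):
    """Return the starting row (measured from the bottom of the frame) of each band.

    f : fitted function giving the apparent person height (in pixels) at row p.
    """
    bands, p = [], 0
    while p < frame_height:
        h = f(p)
        if h < min_band_height:          # band too small: the scene is too far, stop
            break
        bands.append(p)
        p = p + int(round(h)) + 1        # p_i = p_{i-1} + h_{i-1} + 1
    return bands

# Placeholder height model: people appear ~140 px tall at the bottom, shrinking upwards.
print(partition_into_bands(576, f=lambda p: 140 - 0.2 * p))
```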
5 Experimental Results

The performance of the proposed method has been assessed on the PETS2009 dataset [12]. The videos used in the experiments refer to two different views obtained using two cameras that simultaneously acquired the same scene from different points of view. The videos in the dataset were framed at about 7 fps at 4CIF resolution. We used four videos of view 1, namely S1.L1.13-57, S1.L1.13-59, S1.L2.14-06 and S1.L3.14-17, and four videos of view 2, namely S1.L1.13-57, S1.L2.14-06, S1.L2.14-31 and S3.MF.12-43. The tests were aimed at analyzing the impact of the algorithm used for recognizing the moving SURF points on counting accuracy and computational cost; we considered three approaches for point classification, namely the window search, the three step search and the bias classification (hereinafter indicated as WS, TSS and BC), described in Section 3. In order to compare the above three approaches, we carried out two types of tests: the first was aimed at evaluating their accuracy in static/moving point classification and the respective processing times, while in the second test we assessed the estimation error of the people counting method in [6] when the above approaches are adopted.
Fig. 3. Moving point classification performance for different values of the bias γZM given in terms of (a) the f-index and (b) computational time
5.1 Moving Point Classification

We collected a dozen sample frames equally distributed between view 1 and view 2 of the PETS 2009 dataset. The SURF points within these frames were manually classified as moving or static. The resulting dataset was composed of almost 8,000 points, of which about 10% were moving. Figure 3.a reports the classification performance in terms of f-index (the harmonic mean of Precision and Recall), as a function of the bias threshold γZM. The maximum value of the f-index, 0.925, is obtained by WS with γZM = 4, while TSS and BC achieve f-index = 0.919 with γZM = 11 and f-index = 0.918 with γZM = 11.5, respectively. This means that, at least on the considered dataset, WS guarantees a slightly better accuracy in moving point classification with respect to the other approaches, while TSS does not provide any significant advantage with respect to BC. In figure 3.b, the processing times of WS, TSS and BC for different values of the threshold γZM are shown. These results were obtained using a notebook with an Intel(R) Core(TM)2 Duo CPU L9400 @ 1.86 GHz and the following configuration for the moving point classification algorithms: each point was represented through a 9×9 pixel block with a 21×21 pixel search area (the latter parameter applies only to WS and TSS). It is possible to note that the processing time of WS is one and two orders of magnitude higher than that of TSS and BC, respectively. From the above experiment, we can conclude that for values of γZM higher than 8 the results of the three approaches are quite similar, while BC is extremely faster than the other two search strategies.
Table 1. Counting estimation error and processing time of the method with the WS and the BC searching strategies

                  |          WS             |          BC
Video (view)      | MAE    MRE     time (s) | MAE    MRE     time (s)
S1.L1.13-57 (1)   | 1.37   6.9%    1.730    | 1.36   6.8%    0.208
S1.L1.13-59 (1)   | 2.58   15.6%   1.395    | 2.55   16.3%   0.201
S1.L2.14-06 (1)   | 5.44   20.7%   1.678    | 5.40   20.8%   0.208
S1.L3.14-17 (1)   | 2.74   15.1%   1.629    | 2.81   15.1%   0.218
S1.L1.13-57 (2)   | 9.13   23.9%   0.952    | 4.45   15.1%   0.207
S1.L2.14-06 (2)   | 17.74  43.6%   0.871    | 12.17  30.7%   0.203
S1.L2.14-31 (2)   | 6.61   21.7%   1.347    | 7.55   23.6%   0.222
S3.MF.12-43 (2)   | 1.60   34.6%   0.637    | 1.64   35.2%   0.206

5.2 People Number Estimation
The training of the system was performed using the proposed automatic training procedure on a video obtained by collecting some short clips from the PETS2009 dataset containing just one person walking in the scene at different distances from the camera. The frames were selected from other sequences of the PETS2009 dataset that were not used for the tests. Testing was carried out by comparing the actual number of people in the video sequences and the number of people computed by the algorithm. The indices used to report the performance are the Mean Absolute Error (MAE) and the Mean Relative Error (MRE), defined as

MAE = (1/N) Σ_{i=1}^{N} |G(i) − T(i)|,   MRE = (1/N) Σ_{i=1}^{N} |G(i) − T(i)| / T(i)    (6)

where N is the number of frames of the test sequence and G(i) and T(i) are the guessed and the true number of persons in the i-th frame, respectively. In table 1 we report the performance of the method in [6] when the WS and BC point classification methods are adopted. Performance is reported in terms of the two indices MAE and MRE; we also report the average processing time per frame (the experimental settings are the same used for the experiments in figure 3). It is worth noting that, although the proposed method is simpler than [6], the people estimation accuracy remains practically unchanged; surprisingly, there are two cases with a significant performance improvement (videos S1.L1.13-57 and S1.L2.14-06 of view 2). Furthermore, it is possible to note that using BC drastically reduces the computational cost, making it possible to process the PETS2009 video sequences in real time. Table 2 extends the comparison to other systems participating in the PETS competition, in particular the methods in [1] and [5], both belonging to the category of indirect methods and top ranked as regards counting accuracy.
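The two error indices of equation 6, used throughout Tables 1-3, in a few lines of Python (illustrative):

```python
import numpy as np

def mae_mre(guessed, true):
    """Mean Absolute Error and Mean Relative Error over the frames of a sequence."""
    g, t = np.asarray(guessed, float), np.asarray(true, float)
    abs_err = np.abs(g - t)
    return abs_err.mean(), (abs_err / t).mean()

print(mae_mre(guessed=[10, 12, 8, 15], true=[11, 12, 9, 13]))
```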
Table 2. Counting estimation error of Albiol's algorithm, of Conte's and of the proposed one

                  | Albiol [1]      | Conte [5]      | Our
Video (view)      | MAE    MRE      | MAE    MRE     | MAE    MRE
S1.L1.13-57 (1)   | 2.80   12.6%    | 1.92   8.7%    | 1.36   6.8%
S1.L1.13-59 (1)   | 3.86   24.9%    | 2.24   17.3%   | 2.55   16.3%
S1.L2.14-06 (1)   | 5.14   26.1%    | 4.66   20.5%   | 5.40   20.8%
S1.L3.14-17 (1)   | 2.64   14.0%    | 1.75   9.2%    | 2.81   15.1%
S1.L1.13-57 (2)   | 29.45  106.0%   | 11.76  30.0%   | 4.45   15.1%
S1.L2.14-06 (2)   | 32.24  122.5%   | 18.03  43.0%   | 12.17  30.7%
S1.L2.14-31 (2)   | 34.09  99.7%    | 5.64   18.8%   | 7.55   23.6%
S3.MF.12-43 (2)   | 12.34  311.9%   | 0.63   18.8%   | 1.64   35.2%

Table 3. Counting estimation error and processing time of the proposed approach at 4CIF and CIF resolutions

                  |          CIF            |          4CIF
Video (view)      | MAE    MRE     time (s) | MAE    MRE     time (s)
S1.L1.13-57 (1)   | 2.31   11.4%   0.052    | 1.36   6.8%    0.208
S1.L1.13-59 (1)   | 2.83   17.4%   0.052    | 2.55   16.3%   0.201
S1.L2.14-06 (1)   | 5.60   23.0%   0.053    | 5.40   20.8%   0.208
S1.L3.14-17 (1)   | 2.53   11.2%   0.062    | 2.81   15.1%   0.218
S1.L1.13-57 (2)   | 10.26  26.8%   0.053    | 4.45   15.1%   0.207
S1.L2.14-06 (2)   | 20.89  51.5%   0.051    | 12.17  30.7%   0.203
S1.L2.14-31 (2)   | 8.53   26.4%   0.057    | 7.55   23.6%   0.222
S3.MF.12-43 (2)   | 2.27   42.9%   0.051    | 1.64   35.2%   0.206
From the results reported in table 2, it is evident that the proposed method in almost all cases outperforms Albiol's technique with respect to both the MAE and MRE performance indices, while its performance is always very close to that obtained by Conte's method. This aspect is more evident if we refer to the results obtained on view 2. In table 3, we report the results of a test aimed at assessing the robustness of the approach with respect to the frame resolution. In particular, we considered the same test sequences, but at CIF resolution. The results show that the lower resolution causes a reduction in counting accuracy, which nevertheless remains acceptable, especially considering the relevant decrease of the processing time.
6 Conclusions

In this paper we have proposed an improved method for counting people in video surveillance applications. The experimental results confirmed that the proposed improvements further increase the accuracy of the people counting estimation; moreover, they reduce the computational time by almost an order of magnitude while preserving practically the same counting estimation accuracy. The method has been experimentally compared with the algorithms by
Albiol et al. and by Conte et al., which were among the best performing ones in the PETS 2009 and 2010 contests. The proposed approach is in several cases more accurate than Albiol's, while retaining the robustness and low computational requirements that are considered the greatest strengths of the latter. On the other side, our method obtains results comparable to those yielded by the more sophisticated approach by Conte et al., also on very complex scenarios such as that of view 2 of the PETS2009 dataset, but, differently from the latter, it does not require a complex set-up procedure. In this paper we also addressed the problem of the complexity of the set-up procedures required during installation. In particular, we have proposed a procedure for the automatic training of the system that simply requires the acquisition of two short sequences with a known number of persons randomly crossing the scene.
References

1. Albiol, A., Silla, M.J., Albiol, A., Mossi, J.M.: Video analysis using corner motion statistics. In: IEEE International Workshop on Performance Evaluation of Tracking and Surveillance, pp. 31–38 (2009)
2. Bay, H., Ess, A., Tuytelaars, T., Gool, L.V.: SURF: Speeded up robust features. Computer Vision and Image Understanding 110(3), 346–359 (2008)
3. Chan, A.B., Liang, Z.S.J., Vasconcelos, N.: Privacy preserving crowd monitoring: Counting people without people models or tracking. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–7 (2008)
4. Cho, S.-Y., Chow, T.W.S., Leung, C.-T.: A neural-based crowd estimation by hybrid global learning algorithm. IEEE Transactions on Systems, Man, and Cybernetics, Part B 29(4), 535–541 (1999)
5. Conte, D., Foggia, P., Percannella, G., Tufano, F., Vento, M.: A method for counting people in crowded scenes. In: 2010 Seventh IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) (2010)
6. Conte, D., Foggia, P., Percannella, G., Tufano, F., Vento, M.: An effective method for counting people in video-surveillance applications. Accepted for publication at the International Conference on Computer Vision Theory and Applications (2011)
7. Harris, C., Stephens, M.: A combined corner and edge detector. In: Proceedings of the 4th Alvey Vision Conference, pp. 147–151 (1988)
8. Kong, D., Gray, D., Tao, H.: A viewpoint invariant approach for crowd counting. In: International Conference on Pattern Recognition, pp. 1187–1190 (2006)
9. Love, N.S., Kamath, C.: An Empirical Study of Block Matching Techniques for the Detection of Moving Objects. Tech. Rep. UCRL-TR-218038, University of California, Lawrence Livermore National Laboratory (January 2006)
10. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60(2), 91–110 (2004)
11. Marana, A.N., Costa, L.d.F., Lotufo, R.A., Velastin, S.A.: Estimating crowd density with Minkowski fractal dimension. In: Int. Conf. on Acoustics, Speech and Signal Processing (1999)
12. PETS (2009), http://www.cvg.rdg.ac.uk/PETS2009/
13. Rahmalan, H., Nixon, M.S., Carter, J.N.: On crowd density estimation for surveillance. In: The Institution of Engineering and Technology Conference on Crime and Security (2006)
Multiple Classifier System for Urban Area's Extraction from High Resolution Remote Sensing Imagery

Safaa M. Bedawi and Mohamed S. Kamel

Pattern Analysis and Machine Intelligence Lab, Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1
{mkamel,sbedawi}@pami.uwaterloo.ca
Abstract. In this paper, a thematic land-cover extraction approach for urban areas from very high resolution (VHR) aerial images is presented. Recent developments in sensor technology have increased the challenges of interpreting image content, particularly in the case of complex scenes of dense urban areas. The major objective of this study is to improve the quality of land-cover classification. We investigate the use of multiple classifier systems (MCS) based on dynamic classifier selection. The selection scheme consists of an ensemble of weak classifiers, a trainable selector, and a combiner. We also investigate the effect of using a Particle Swarm Optimization (PSO) based classifier as the base classifier in the ensemble module for the classification of such complex problems. A PSO-based classifier discovers classification rules by simulating the social behaviour of animals. We experiment with the parallel ensemble architecture, wherein the feature space is divided randomly among the ensemble and the selector. We report the results of using separate/similar training sets for the ensemble and the selector, and how each case affects the global classification error. The results show that selection improves the combination performance compared to the combination of all classifiers, with a higher improvement when using different training sets, and they also show the potential of the PSO-based approach for classifying such images.

Keywords: Multiple Classifier Systems, Ensemble of classifiers, Particle Swarm Optimization, Selection, Remote sensing images, Land-cover.
1 Introduction

High resolution data is becoming increasingly available in remote sensing applications. Several high resolution satellites have been launched recently, such as WorldView-2, and there are plans for others to be launched in the near future. New VHR cameras have also been developed for use in digital aerial photography. Remote sensing images have significant applications in different areas such as urban planning, surveying and mapping, environmental monitoring and military intelligence. VHR data provides detailed spectral information about the different land cover objects in urban areas, namely trees, buildings and roads. The increase in image resolution has introduced new challenges in interpreting its content. The challenges arise from the inner-class
complexity of dense urban areas, along with the occlusions and shadows caused by the variety of objects in urban areas, for example buildings, roads and trees. Currently, massive manual interpretation is carried out by human operators, which has proven to be a slow and expensive process and becomes impractical when images are complex and contain many different objects. Several classifiers have been applied for interpreting land cover from remote sensing imagery [1]. Conventional statistical classifiers [2] are based on statistical principles and require the training data to follow a normal distribution; hence the classification accuracy can be limited. Thus, sophisticated classification algorithms are required in order to handle detailed VHR information comprehensively. Machine intelligence techniques have been increasingly incorporated in the classification of remote sensing images [3].

Swarm Intelligence (SI) is a complex multi-agent system consisting of numerous simple individuals (mimicking animals, e.g., ants, birds, etc.), which exhibit their swarm intelligence through cooperation and competition among the individuals. Although there is typically no centralized control dictating the behavior of the individuals, the accumulation of local interactions in time often gives rise to a global pattern. SI has been widely applied to nonlinear function optimization, traveling salesman problems, data clustering, combinatorial optimization, rule induction, and pattern recognition [4]. Although SI is an efficient and effective global optimization approach, using SI in VHR remote sensing classification is a fairly new research area and requires more work [4][5].

Multiple classifier systems (MCS) are another machine learning concept, which has recently been utilized by the remote sensing community [6][7] to improve classification accuracy in comparison to a single classifier [8]. Several efficient multiple classifier systems are based on weakening techniques that create different classifiers. A weak classifier refers to a classifier whose capacity has been reduced so as to increase its prediction diversity: either its internal architecture is simple, or it is prevented from using all the information available. Fusing the whole set of classifiers may, in some cases, not provide the expected improvement in classification accuracy; fusing a subset of the classifiers can provide a larger improvement. Choosing this subset is the objective of classifier selection [9]. Two types of classifier selection techniques have been proposed in the literature. One is called static classifier selection, as the regions of competence are defined prior to classifying any test pattern. The other is called dynamic classifier selection (DCS), as the regions of competence are determined on the fly (during the operation phase) depending on the test pattern to be classified.

It is interesting to use the PSO-based approach as the base classifier in the proposed MCS framework for classifying the VHR data set, in order to investigate its effectiveness on the classification accuracy of such complex images. As in [10], the scheme consists of three modules: a classifier ensemble, a selector and a combiner. The images are separated into two feature sub-spaces that are provided to the ensemble and the selector. The feature sub-space given to the classifier ensemble is further divided among a set of weak PSO-based classifiers.
Each module individually classifies the data, and then the outputs are combined by the combiner module to produce the final output of the MCS scheme. We used the following scenarios in training our system: i) different training sets are used to train the ensemble and the selector, where by different we mean that the training data used for each module was totally unseen by the other module; ii) the same training
sets are provided to both of them. Using separate training sets is expected to reduce the global error of the MCS. The first step of the presented algorithm is to segment the image at the pixel level based on the color similarity of the image in raster format. The segmented regions are then described using a combination of color and texture features. These features are then fed into the MCS to classify the segmented regions. The results demonstrate that the MCS based on PSO-based classifiers outperforms other machine intelligence approaches, such as the neural network classifier, and also outperforms a single PSO-based classifier. This paper is organized as follows: Section 2 introduces the classification techniques, followed by multiple classifier systems in Section 3; Section 4 describes the data set used for the experiments, the adopted selection scheme and the classification results; finally, Section 5 outlines the conclusions.
2 Particle Swarm Optimization

PSO is a population-based evolutionary computation technique that was first introduced in [11]. PSO simulates the social behavior of animals, i.e. birds in a flock or fish in a school. Members of such societies share common goals (e.g., finding food) that are realized by exploring the environment while interacting among themselves. The popularity of PSO is partially due to the simplicity of the algorithm, but mainly due to its effectiveness in producing good results at a very low computational cost. In PSO, each solution can be considered an individual particle in a given D-dimensional search space, which has its own position (xid) and velocity (vid). During movement, each particle adjusts its position by changing its velocity based on its own experience (memory) pid, as well as the experience of its neighboring particles, until an optimum position is reached by itself and its neighbors. All of the particles have fitness values based on the calculation of a fitness function. Particles are updated at each iteration by following two parameters called pbest and pg. Each particle is associated with the best solution (fitness) it has achieved so far in the search space. This fitness value is stored, and represents the position called pbest (the best among pid). The value pg is the global optimum over the entire population. The two basic equations which govern the working of PSO are those of the velocity vector and the position vector, given by

vid(t+1) = w vid(t) + c1 r1(t) (pid(t) − xid(t)) + c2 r2(t) (pg(t) − xid(t))    (1)

xid(t+1) = xid(t) + vid(t+1)    (2)
The first part of (1) represents the inertia of the previous velocity, the second part is the cognition part that reflects the personal experience of the particle, and the third part represents the cooperation among particles and is therefore named the social component. The acceleration constants c1, c2 and the inertia weight w are predefined by the user, and r1, r2 are uniformly generated random numbers in the range [0, 1]. A PSO-based rule discovery algorithm was developed in [12] and used in [5] for the classification of coarse RS imagery. PSO-based rule discovery classification consists of three stages: rule discovery, rule evaluation and the covering algorithm. The classification rule discovery algorithm's task is to find and return the rule that best classifies
the predominant class in a given sample set. The covering algorithm takes the training set and invokes the classification rule discovery algorithm to reduce this set by removing the samples correctly classified by the returned rule. This process is repeated until a pre-defined number of samples is left to classify in the training set.

2.1 Rule Discovery

Each particle searches for the best value in each band and corresponds to a route. For multispectral remote sensing data with n bands, m particles search for the best value in an n-dimensional space. The best values at the different bands can be connected with one another using the operator 'And' and linked with land use types to form classification rules. Particles are randomly distributed in the n-dimensional space. The fitness value of each particle is calculated after each iteration and compared with the particle's individual optimum prior to the iteration. If the current fitness value is better than the individual optimum prior to the iteration, the individual optimum is updated. The best individual optimum is then used to update the global optimum. The loop is terminated when the number of iterations exceeds tmax, the maximum number of iterations, or when the average fitness is smaller than a threshold. A set of classification rules is then generated.

2.2 Rule Evaluation

Rules are evaluated during the training process to establish points of reference for the training algorithm. The rule evaluation function must consider both the correctly classified and the wrongly classified samples, as shown in (3):

FQ = [TP / (TP + FN)] × [TN / (TN + FP)]    (3)

where:
• TP: True Positives (number of samples covered by the rule that are correctly classified)
• FP: False Positives (number of samples covered by the rule that are wrongly classified)
• TN: True Negatives (number of samples not covered by the rule, whose class differs from the training target class)
• FN: False Negatives (number of samples not covered by the rule, whose class matches the training target class)

2.3 Covering Algorithm

The covering algorithm is a divide-and-conquer technique. Given a training set of instances, it runs the rule discovery algorithm to obtain the highest quality rule for the predominant class in the training set. The best position pg found by a particle using the algorithm (the best classification rule) is put into the rule set; then the sequential covering algorithm is employed to remove correctly classified instances from the training set.
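The rule-quality measure of Eqn. 3 and the outer covering loop can be sketched as follows; this is a hedged Python sketch in which the rule object and the discover_best_rule routine (the PSO search) are placeholders, not the authors' code.

```python
def rule_quality(tp, fp, tn, fn):
    """FQ = sensitivity x specificity (Eqn. 3); returns 0 when a factor is undefined."""
    if tp + fn == 0 or tn + fp == 0:
        return 0.0
    return (tp / (tp + fn)) * (tn / (tn + fp))

def covering(training_set, discover_best_rule, min_remaining=10):
    """Sequential covering: repeatedly extract the best rule for the predominant class
    and drop the samples it classifies correctly (placeholder sketch)."""
    rules = []
    while len(training_set) > min_remaining:
        rule = discover_best_rule(training_set)   # PSO search, maximizing rule_quality
        rules.append(rule)
        training_set = [s for s in training_set if not rule.covers_correctly(s)]
    return rules
```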
3 Multiple Classifier Systems

Multiple classifier systems combine variants of the same base classifier or different algorithms. In doing so, the overall accuracy is usually increased in comparison to the classification accuracy achieved by a single classifier [8]. Several different architectures have been introduced, which have also been applied successfully in remote sensing studies [6]; the one most frequently used is the parallel combination architecture. Let Ω be a finite set of C class labels {ω1, ..., ωC} for an image scene and let X ∈ R^d be an input vector with d features to be labeled in Ω. The probability P(ωi|X) gives the likelihood that the correct class is ωi for the d-dimensional feature vector X. The objective of combining classifiers is to find a class label in Ω based on the outputs of the L classifiers. Each of the classifiers D1, ..., DL individually makes a decision about the class of a test input pattern. These decisions are combined by a combiner approach; combiners range from conventional methods to more sophisticated approaches using trainable classifiers [8]. The dynamic classifier selection (DCS) approach [9] selects the best subset of classifiers for each test pattern. We adopted the scheme proposed in [10], in which a feed-forward neural network (FFNN) takes the output of the ensemble of classifiers along with the input feature vector x and learns how to weigh the different classifiers. These weights reflect the degree of confidence in each classifier. A combiner procedure then uses the selector output together with the ensemble outputs to generate the final classification result. We used a parallel architecture in the ensemble module, where the input of each base classifier is a random subset of the feature space. The feature space was divided between the ensemble and the selector modules. The feature sub-space given to the selector was the subset used in the segmentation process (RGB features) plus a texture feature (homogeneity), as this subset experimentally proved to have high discrimination ability in the segmentation process; using it for the selector aims at making the selector output more robust. A block diagram of the MCS architecture is shown in Fig. 1. As shown in the figure, the feature space is randomly divided into two feature sub-spaces: 1) XE, fed into the ensemble of L classifiers and further subdivided into an XE/L sub-space for each of them, and 2) XS, fed to the selector.

3.1 Selector Module

The selector generates weights for the different classifiers based on the output of the classifier ensemble along with its own input feature sub-space. These weights reflect the degree of confidence in each classifier. An FFNN is used to implement the learning mechanism of this module. The input to the FFNN is a vector consisting of the individual classifiers' outputs of size C×L and the input feature sub-space of size XS, for a total size of C×L+XS. Using the training dataset and the target classes, the neural network is trained to predict whether or not the classification result obtained from the L classifiers for an input data vector will be correct. The output weights P are then used to combine the output of the classifiers in several ways: P can be used in weighted-average fusion schemes as the weights for each given input, or P can be used to select a subset of the L classifiers to be fused by calculating a
threshold value β. A proper value of β can also be obtained during the training phase. A scheme such as majority voting is then applied. The training phase of this module is performed using the same or a separate training set with respect to that used for the individual classifiers. These weights are then used to combine the output of the classifier ensemble.

3.2 The Combiner Module

The combiner module uses the output of the selector along with the output of the ensemble to generate the final classification result. The combiner can be a standard combination method or a trained method, such as majority voting, weighted average, Bayesian, neural network or fuzzy integral combiners [8]. The trained combiner operates in two phases: a learning phase and a decision-making phase. The learning phase assigns a weight factor to each input to support each decision maker; this weighting factor represents the degree of confidence in the output of each classifier. These confidences are then combined using standard classifier-combining methods. A neural network approach is selected to perform the weighting of the individual classifiers. Among the standard methods, we used majority voting and weighted average [8].
Fig. 1. Block diagram of the MCS architecture
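To make the selector of Sect. 3.1 concrete, the following sketch assembles its training data (the ensemble outputs of size C×L concatenated with the XS sub-space, with one binary correctness target per base classifier) and instantiates a one-hidden-layer FFNN. This is only an illustration: the array shapes, the helper name, and the use of scikit-learn's MLPClassifier are our assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def build_selector_training_set(ensemble_probs, xs_features, true_labels):
    """Assemble selector inputs/targets from ensemble outputs.

    ensemble_probs : (n_samples, L, C) class posteriors of the L base classifiers
    xs_features    : (n_samples, d_s) selector feature sub-space X_S
    true_labels    : (n_samples,) ground-truth class indices
    """
    n, L, C = ensemble_probs.shape
    # Input vector of size C*L + |X_S|, as described in Sect. 3.1
    inputs = np.hstack([ensemble_probs.reshape(n, L * C), xs_features])
    # Target: one binary flag per classifier - was its decision correct?
    targets = (ensemble_probs.argmax(axis=2) == true_labels[:, None]).astype(int)
    return inputs, targets

# A one-hidden-layer FFNN selector (multi-label: one output weight per classifier)
selector = MLPClassifier(hidden_layer_sizes=(35,), max_iter=500, early_stopping=True)
# selector.fit(inputs, targets); selector.predict_proba(...) then supplies the weights P
```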
a) Voting schemes
The majority voting scheme is well known and results based on this approach are frequently reported. Each classifier produces a unique decision regarding the identity of the sample. This identity can be one of the allowable classes or a rejection when no such identity is considered possible. Ties are broken randomly.
b) Weighted average
The outputs of an individual classifier approximating the posterior probabilities can be denoted as:
$\hat{P}(\omega_i \mid x) = P(\omega_i \mid x) + \varepsilon(\omega_i \mid x)$    (4)

where P(ωi|x) is the "true" posterior probability of the ith class, and ε(ωi|x) is the estimation error. A weighted combination of the outputs of a set of L classifiers can be written as:

$\hat{P}_{ave}(\omega_i \mid x) = \sum_{i=1}^{L} w_i \, \hat{P}_i(\omega_i \mid x)$    (5)

where the weights w_i are obtained using a neural network approach.
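As an illustration of how the selector output P is used at test time, the sketch below applies the threshold-based selection described in Sect. 3.1 and the two combiners considered here: majority voting and the weighted average of Eq. (5). Array shapes, the fallback behaviour, and function names are our own assumptions rather than the authors' implementation.

```python
import numpy as np

def select_and_fuse(ensemble_probs, selector_weights, beta=0.5):
    """Combine L base classifiers given the selector weights P.

    ensemble_probs   : (L, C) class posteriors of the L classifiers
    selector_weights : (L,) selector outputs P (confidence in each classifier)
    beta             : selection threshold; classifiers with P <= beta are discarded
    """
    selected = selector_weights > beta
    if not np.any(selected):                  # fall back to the full ensemble
        selected = np.ones(len(selector_weights), dtype=bool)

    # Weighted average of Eq. (5): selector weights act as the fusion weights w_i
    w = selector_weights[selected] / selector_weights[selected].sum()
    p_ave = (w[:, None] * ensemble_probs[selected]).sum(axis=0)
    weighted_label = int(p_ave.argmax())

    # Majority voting over the selected classifiers (ties broken by lowest index)
    votes = ensemble_probs[selected].argmax(axis=1)
    voted_label = int(np.bincount(votes, minlength=ensemble_probs.shape[1]).argmax())

    return weighted_label, voted_label
```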
4 Experimental Results

In this section, we compare the performance of the selection scheme using two training scenarios in classifying VHR remote sensing imagery, and investigate the ability of the MCS to classify land-use classes in dense urban areas. We also compare the average performance of the PSO-based classifier ensemble to that of the FFNN-based classifier ensemble.

4.1 Dataset

The study area is the city of Kitchener-Waterloo (K-W), Ontario, Canada. The data was provided by the Map Library at the University of Waterloo [13]. It consists of ortho-rectified aerial images taken in April 2006 at 12 cm spatial resolution by a digital color airborne camera with 8-bit radiometric resolution. We cropped a set of forty test images of size 512×512 from the original image. The cropped test images were chosen from high-density urban parts which were highly corrupted by noise. Samples of the test images are shown in Fig. 2.
Fig. 2. Samples of test images corrupted by noise
The major objects of interest in urban planning are roads, buildings and green areas such as parks. The images were manually segmented into four land-use types (roads, buildings, green areas and other). The "other" class represents pixels which are either difficult to interpret or do not correspond to the objects of interest, such as building entrances with very small parking areas, areas alongside roads, swimming pools and other small objects in the image.
The features used in the clustering-based segmentation are the RGB values, which have proved to have good preliminary discrimination power. The clustered images are then mapped to the true class labels using their corresponding ground-truth images. The input features to the MCS are the color and texture of the segmented parts. These features are color (RGB, Lab and HSI) and texture (gray-level co-occurrence matrix). Using the three bands of the image for a window of 5×5 pixels, the feature vector size is: 5×5 blocks × 3 bands × 3 color representations = 225, and 12 texture features × 3 bands = 36. In total, we have 261-dimensional image features. The data set has 10,485,760 samples divided into three sets: training, validation and testing, with 5,242,880 / 2,097,152 / 3,145,728 entries used for training, validation and testing, respectively, keeping the class distributions similar to that of the full dataset. These sets were divided among the ensemble and the selector in the separate-training-set scenario. Here the selector is given a quarter of the different sets while taking the class distribution into account.

4.2 Experiment Setup

We selected seven PSO-based classifiers as members of the classifier ensemble. Each classifier has a different number of particles, ranging from 35 to 65. Each classifier has an input feature vector consisting of 26 sub-features out of the total 261-dimensional feature space. All classifiers were trained and validated independently using the pre-prepared training/validation sets. The classification results were averaged across forty runs. It should be noted that no parameter optimization was performed for the ensemble, as we are targeting a set of weak classifiers. On the contrary, a careful design phase was carried out in order to tune the selector for its best performance. The selector module is also implemented using a one-hidden-layer FFNN. In order to determine the appropriate number of hidden neurons in the FFNN for the selector module, the number of hidden neurons was varied from 5 to 100, leading to 20 different network topologies. For each topology, twenty trials with different initial random weights were performed on the training set, and early stopping was based on the validation set. The best topology was found to be 78-35-7. The network was trained using the back-propagation algorithm with the neural network toolbox in MATLAB.

4.3 Results

The average performance of the MCS methods, focusing on dense urban areas in VHR aerial images, was taken over 40 runs for each module. The standard deviation of the error is around 0.1 to 0.2 in the overall classification. Table 1 lists the average classification accuracy achieved for each module, shown for both the same and separate training sets. The variance of these values is also shown in the table. The table also shows the result of fusing the output of the classifier ensemble directly, without selection, using the majority vote, maximum and average fusion methods [8]. The table compares the average accuracies for the ensemble module based on both the PSO-based and FFNN classifiers. The results show the potential of the MCS in classifying aerial images in the experiment. When extracting road areas, we achieved an average rate of 96% even in the images of noisy residential areas, using a PSO-based classifier as the base classifier in the classifier ensemble.
It can be noted that fusing the output of all classifiers (i.e., without selection) presents an improvement over the single classifier. While testing the selection scheme, the best performance was obtained at β = 0.5; that is, classifiers corresponding to a selector response above 0.5 are fused and the remainder are discarded. For that scheme, the accuracy obtained on the test set is 95.50% with separate training and 93.47% when using the same training set for both the ensemble and the selector modules.

Table 1. Performance of the implemented selection schemes for road extraction from the aerial images test set using the PSO-based and FFNN-based classifier ensembles

Module              Average accuracy, PSO-based classifier ensemble   Average accuracy, FFNN-based ensemble
Base classifier     79.617 ± 1.5310                                   75.857 ± 1.250
Majority voting     92.157 ± 0.181                                    88.814 ± 0.078
Mean                91.431 ± 0.271                                    87.431 ± 0.073
Max                 91.325 ± 0.519                                    87.325 ± 0.097
Weighted average    93.318 ± 0.512 / 94.116 ± 0.375*                  89.763 ± 0.203 / 90.147 ± 0.231*
Majority voting     93.473 ± 0.714 / 95.501 ± 0.878*                  90.537 ± 0.081 / 91.613 ± 0.145*

* Separate training/validation.
The results show that by using the selection scheme there is an improvement of about 15% over the best individual PSO-based classifier and an improvement of about 3% over the MCS scheme without selection. It is also noted that the overall MCS accuracy when using separate training sets for the ensemble and the selector is about 1–2% higher than when using the same training set. The results also show that the PSO-based classifier ensemble outperforms the neural network classifier ensemble by 4%, which reflects the potential of using PSO in classifying such complex data.
5 Conclusion

In this paper, we have investigated the use of a DCS scheme. Our experiments have shown that the selection scheme can improve the classification accuracy over and above the best/average individual classifier's accuracy. Also, the selection scheme gives a slight improvement (3%) over the outcome of fusing all the classifiers' decisions without selection. The experiments have also shown that using separate training/validation sets for the different modules gives a marginal improvement compared to experiments where the same training sets were used. The results also show that the PSO-based classifier ensemble outperforms the neural network classifier ensemble by 4%, which reflects the potential of using PSO in classifying such complex data.
References [1] Lu, D., Weng, Q.: A survey of image classification methods and techniques for improving classification performance. International Journal of Remote Sensing 28(5), 823–870 (2007) [2] Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification. John Wiley & Sons, New York (2001)
[3] Stathakis, D., Vasilakos, A.: Comparison of computational intelligence based classification techniques for remotely sensed optical image classification. IEEE T. Geosci. Remote Sens. 44(8), 2305–2318 (2006) [4] Omran, M., Engelbrecht, A.P., Salman, A.: Particle swarm optimization method for image clustering. International Journal of Pattern Recognition and Artificial Intelligence 19(3), 297–321 (2005) [5] XiaoPing, L., Xia, L., XiaoJuan, P., HaiBo, L., JinQiang, H.: Swarm intelligence for classification of remote sensing data. Science in China Series D: Earth Sciences 51(1), 79–87 (2008) [6] Benediktsson, J.A., Chanussot, J., Fauvel, M.: Multiple Classifier Systems in Remote Sensing: From Basics to Recent Developments. In: Haindl, M., Kittler, J., Roli, F. (eds.) MCS 2007. LNCS, vol. 4472, pp. 501–512. Springer, Heidelberg (2007) [7] Yu-Chang, T., Kun-Shan, C.: An adaptive thresholding multiple classifiers system for remote sensing image classification. Photogrammetry Engineering and Remote Sensing 75(6), 679–687 (2009) [8] Kuncheva, L.I.: Combining Pattern Classifiers: Methods and Algorithms. Wiley, Chichester (2004) [9] Giacinto, G., Roli, F.: Dynamic classifier selection. In: Proceedings of the First International Workshop on Multiple Classifier Systems, pp. 177–189 (2000) [10] Wanas, N., Dara, R., Kamel, M.S.: Adaptive Fusion and Co-operative Training for Classifier Ensembles. Pattern Recognition 39(9), 1781–1794 (2006) [11] Kennedy, J., Eberhart, R.C.: Particle swarm optimization. In: Proceedings of IEEE International Conference on Neural Networks (ICNN 1995), Australia, vol. 4, pp. 1942–1948. IEEE Service Center, Perth (1995) [12] Sousa, T., Neves, A., Silva, A.: A particle swarm data miner. In: 11th Portuguese Conference on Artificial Intelligence, Workshop on Artificial Life and Evolutionary Algorithms, pp. 43–53 (2003) [13] Tri-Cities and Surrounding Communities Orthomosaics 2006 [computer file]. Waterloo, Ontario: The Regional Municipality of Waterloo (2006)
Correction of Atmospheric Turbulence Degraded Sequences Using Grid Smoothing Rishaad Abdoola1, Guillaume Noel1, Barend van Wyk1, and Eric Monacelli2 1
Department of Electrical Engineering, French South African Institute of Technology, Tshwane University of Technology, Staatsartillerie Road, Pretoria 0001, South Africa 2 Department of Electrical Engineering, Laboratoire D'Ingénierie Des Systèmes De Versailles, Université de Versailles Saint-Quentin-en-Yvelines Versailles, France {AbdoolaR,NoelG,vanwykB}@tut.ac.za,
[email protected]
Abstract. Heat scintillation occurs due to the index of refraction of air decreasing with an increase in air temperature, causing objects to appear blurred and waver slowly in a quasi-periodic fashion. This imposes limitations on sensors used to record images over long distances resulting in a loss of detail in the video sequences. A method of filtering turbulent sequences using grid smoothing is presented that can be used to either extract a single geometrically improved frame or filter an entire turbulent sequence. The extracted frame is in general sharper than when utilising simple GFATR (Generalized First Average Then Register). It also better preserves edges and lines as well as being geometrically improved. Keywords: Atmospheric Turbulence, Grid Smoothing, Graph-Based Image, Geometric Distortion.
1 Introduction Atmospheric turbulence imposes limitations on sensors used to record image sequences over long distances. The resultant video sequences appear blurred and waver in a quasi-periodic fashion. This poses a significant problem in certain fields such as astronomy and defence (surveillance and intelligence gathering) where detail in images is essential. Hardware methods proposed for countering the effects of atmospheric turbulence such as adaptive optics and DWFS (Deconvolution from Wave-Front Sensing) require complex devices such as wave-front sensors and can be impractical. In many cases image processing methods proved more practical since corrections are made after the video is acquired [1, 2, 3, 4, 5]. Turbulence can therefore be removed for any existing video independent of the cameras used for capture. Numerous image processing methods have been proposed for the compensation of blurring effects. Fewer algorithms have been proposed for the correction of geometric distortions induced by atmospheric turbulence. Each method has its own set of advantages and disadvantages. M. Kamel and A. Campilho (Eds.): ICIAR 2011, Part II, LNCS 6754, pp. 317–327, 2011. © Springer-Verlag Berlin Heidelberg 2011
In this paper the focus is placed on comparing image processing methods proposed to restore video sequences degraded by heat scintillation. The comparative analysis will be followed by the presentation and comparison of the Grid Smoothing algorithm for the correction and enhancement of turbulent sequences.
2 Algorithms Selected for Comparison

Two algorithms, derived from a number of well-known contributions, were chosen for comparison. The algorithms were chosen based on their post-processing of the turbulence as well as their ability to compensate for the distortion effects caused by turbulence. The following sections outline the background of the algorithms as well as their respective advantages and disadvantages.

2.1 Generalized First Average Then Register

The GFATR algorithm is based on common techniques and components used in a number of algorithms for correcting atmospheric turbulence [2,5,6]. Image registration is performed for each frame with respect to a reference frame. This provides a sequence that is stable and geometrically correct. The reference frame could be selected by using a frame from the sequence with no geometric distortion and minimal blurring. This is, however, impractical since the frame would have to be selected manually and the probability of finding an image that has both minimal distortion and correct geometry is low. A more practical approach to selecting a reference frame is the temporal averaging of a number of frames in the sequence. Since atmospheric turbulence can be viewed as being quasi-periodic, averaging a number of frames provides a reference frame that is geometrically improved, but blurred. For the purpose of registration, an optical flow algorithm as proposed by Lucas and Kanade [7] was implemented. The algorithm performs well when there is no real motion present in the scene, i.e., no moving objects and no panning or zooming of the camera. The video sequence is stabilized, except for a few discontinuities caused by the optical flow calculations and/or the warping algorithm.

2.2 First Register Then Average and Subtract

The FRTAAS algorithm is an improvement of a previously proposed algorithm, FATR (First Average Then Register), proposed by Fraser, Thorpe and Lambert [2]. The FATR method is a special case of the GFATR algorithm. Each pixel in each frame is registered against an averaged reference or prototype frame. The registration technique used in [2] employs a hierarchically shrinking region based on the cross-correlation between two windowed regions. The de-warped frames are then averaged once again to obtain a new reference frame, and the sequence is put through the algorithm once again. As discussed in the GFATR section, the blur due to the temporal averaging will still be present. The FRTAAS algorithm aims to address this problem. In FRTAAS, the averaging approach used to create the reference frame in GFATR is avoided by allowing any one of the frames in a sequence to be the reference frame. However, due to the time-varying nature of atmospheric turbulence, some of the
frames in the sequence will not be as severely degraded as others. This would mean that it would be possible to obtain a reference frame in which the atmospheric induced blur would be minimal. A sharpness metric is used to select the least blurred frame in the sequence. This frame can also be selected manually. The distortion of the frame is not considered when selecting a reference frame. The sharpness metric used to select the sharpest frame is
$S_h = -\int g(x,y)\,\ln[g(x,y)]\,dx\,dy$    (1)
where g(x,y) represents the image and x, y represent pixel co-ordinates. Once the sharp but distorted frame is selected from the video sequence it is used as the reference frame. All frames in the sequence are then warped to the reference frame. The shift maps that are used to warp the frames in the sequence to the reference frame are then used to determine the truth image. In the FATR method the truth image was obtained by temporal averaging. However by instead averaging the shift maps used to register the turbulent frames to the warped reference frame, a truth shift map which warps the truth frame into the reference frame is obtained. The averaging of the shift maps can be used because, as in the case of temporal averaging to obtain a reference frame, atmospheric turbulence can be viewed as being quasiperiodic. The warping using the shift maps, xs and ys, can be described as
$r(x,y,t) = g(x + x_s(x,y,t),\; y + y_s(x,y,t),\; t)$    (2)
representing a backward mapping, where r(x,y,t) is the reference frame and g(x,y,t) is a distorted frame from the sequence. Once the shift maps, x_s and y_s, have been obtained for each frame in the sequence, the centroids, C_x and C_y, which are used to calculate the pixel locations of the truth frame, can be determined by averaging:

$C_x(x,y) = \frac{1}{N}\sum_{t=1}^{N} x_s(x,y,t), \qquad C_y(x,y) = \frac{1}{N}\sum_{t=1}^{N} y_s(x,y,t).$    (3)
It is important to note that, since the warping represents a backward mapping, the shift maps obtained do not tell us where each pixel goes from r(x,y,t) to g(x,y,t) but rather where each pixel in g(x,y,t) comes from in r(x,y,t). Therefore the inverses of C_x and C_y are calculated and used to determine the corrected shift map of each warped frame in the sequence as

$C_x^{-1}(x,y) = -C_x(x - C_x(x,y,t),\; y - C_y(x,y,t))$
$C_y^{-1}(x,y) = -C_y(x - C_x(x,y,t),\; y - C_y(x,y,t))$    (4)
$X_s(x,y,t) = C_x^{-1}(x,y) + x_s(x + C_x^{-1}(x,y),\; y + C_y^{-1}(x,y),\; t)$
$Y_s(x,y,t) = C_y^{-1}(x,y) + y_s(x + C_x^{-1}(x,y),\; y + C_y^{-1}(x,y),\; t)$
where Xs and Ys are the corrected shift maps used to correct the frames in the original warped sequence [4]. Using Xs and Ys one is therefore able to obtain the geometrically improved sequence using
$f(x,y,t) = g(x + X_s(x,y,t),\; y + Y_s(x,y,t),\; t).$    (5)
The registration was done by using a differential elastic image registration algorithm proposed by Periaswamy and Farid [8]. The FRTAAS algorithm performed well with no motion present in the scene. The restored sequences were an improvement over the GFATR. This was achieved by avoiding temporal averaging of the turbulent frames.
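The following is a minimal sketch of the FRTAAS steps described above: the entropy-based sharpness metric of Eq. (1), the averaging of the shift maps in Eq. (3), and the backward warping of Eqs. (2) and (5). It assumes the per-frame shift maps have already been estimated by a registration routine and omits the inverse-map correction of Eq. (4) for brevity; all names and the data layout are illustrative rather than the authors' code.

```python
import numpy as np

def sharpness(img):
    """Entropy-style sharpness metric of Eq. (1); img is a 2-D array of positive values."""
    g = img.astype(np.float64) + 1e-12          # avoid log(0)
    return -np.sum(g * np.log(g))

def frtaas_centroids(xs, ys):
    """Average the per-frame shift maps (Eq. (3)) to estimate the truth warp.

    xs, ys : (N, H, W) arrays of horizontal/vertical shifts registering each of
             the N frames to the chosen (sharp but distorted) reference frame.
    """
    return xs.mean(axis=0), ys.mean(axis=0)

def backward_warp(frame, xshift, yshift):
    """Backward mapping of Eqs. (2)/(5) with nearest-neighbour sampling."""
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    xsrc = np.clip(np.round(xx + xshift).astype(int), 0, w - 1)
    ysrc = np.clip(np.round(yy + yshift).astype(int), 0, h - 1)
    return frame[ysrc, xsrc]

# Usage sketch (assumed data layout): frames is an (N, H, W) turbulent sequence.
# scores = [sharpness(f) for f in frames]   # Eq. (1) for every frame
# The least blurred frame (best score under this metric) is taken as reference.
```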
3 Proposed Algorithm

3.1 Grid Smoothing Algorithm

In the grid smoothing algorithm each image is represented by a graph in which the nodes represent the pixels and the edges reflect the connectivity. A cost function is defined using the spatial coordinates of the nodes and the grey levels present in the image. The minimisation of the cost function leads to new spatial coordinates for each node. Using an adequate cost function, the grid is compressed in the regions with large gradient values and relaxed in the other regions [9].

The common representation of a greyscale image is a matrix of real numbers. The spatial coordinates of a pixel are determined by the row and column numbers of the pixel. In low-resolution images, the shape of an object does not systematically match the matrix, which may lead to severe distortion of the original shape. As a result, a clear straight line at an orientation of 45 degrees is represented by a staircase-like line. This image-as-matrix paradigm may be overcome by the use of a graph-based representation of an image. In this case, an image is represented by a graph in which the nodes represent the pixels and the edges represent the connectivity. The original graph (or grid) is a uniform grid composed of squares. The grid smoothing process modifies the coordinates of the nodes in the (x,y) plane while keeping the greyscale levels associated with the nodes unchanged. The resulting graph is no longer an image in the conventional sense. The grid smoothing relies on the minimisation of a cost function leading to a compression of the grid in the regions with large gradient values and a relaxation in the other regions. As a consequence, the new grid fits the objects in the image. The grid smoothing enhances the original image and does not modify the number of nodes in the image. The main idea of the grid smoothing is, starting from a uniform grid composed of squares or triangles depending on the connectivity chosen, to reshape the grid according to the information (grey levels) contained in the image.

Graph-Based Image Representation. Our input data is a graph G = (V, E), embedded in the 3D Euclidian space. Each edge e in E is an ordered pair (s,r) of vertices, where s and r are the sending and receiving end vertices of e respectively [10]. To each vertex, v, is associated a triplet of real coordinates (xv, yv, zv). Let Cve be the node-edge incidence matrix of the graph G, denoted as
$C_{ve} = \begin{cases} 1 & \text{if } v \text{ is the sending end of edge } e \\ -1 & \text{if } v \text{ is the receiving end of edge } e \\ 0 & \text{otherwise} \end{cases}$    (6)
In the rest of the paper, the node-edge matrix Cve will also be denoted C. Considering an image with M pixels, X, Y and Z represent [x1,…, xM]T, [y1,…, yM]T and [z1,…, zM]T respectively. X and Y are at first uniformly distributed according to the coordinates of the pixels in the plane, while Z represents the grey levels of the pixels. Each pixel in the image is numbered according to its column and then its rows. L is denoted as the number of edges in the graph and C is therefore an L x M matrix. Optimisation-Based Approach to Grid Smoothing. A cost function is introduced to fit the object of the image with the grid. The main idea is that the regions where the variance is small (low gradient) require fewer points than the regions with a large variance (large gradient). The grid smoothing techniques will move the points of the grid from small variance regions to large variance regions. To achieve this, a cost function J is denoted as
$J = J_X + J_Y,$    (7)

where

$J_X = \frac{1}{2}\left[(X - \hat{X})^T Q (X - \hat{X}) + \theta\,(X^T A X)\right]$    (8)

and

$J_Y = \frac{1}{2}\left[(Y - \hat{Y})^T Q (Y - \hat{Y}) + \theta\,(Y^T A Y)\right]$    (9)
The first term in the expression of the cost function is called the attachment as it penalises the value of the cost function if the coordinates are too far from the original values. It is introduced to avoid large movement in the grid [10]. θ is a real number that acts as a weighting factor between the terms of the cost function and can be adjusted according to the application. $\hat{X}$ and $\hat{Y}$ represent the initial values of X and Y respectively, and A is equal to $C^T \Omega C$.
$\Omega_{k,k} = (z_i - z_j)^2,$    (10)
where node i is the sending end of the edge k and node j is the receiving end. Ω and Q are square diagonal matrices with dimensions L × L and M × M respectively. As a result of the definition of Ω, the minimisation of J leads to the reduction of the areas of the triangles formed by two connected points and the projection of one of the points on the Z-axis. The edges in the image act as attractors for the points in the grid. As a consequence, the edges are better defined in terms of location and steepness in the smoothed grid. The conjugate gradient algorithm is used for the minimisation, as it is computationally expensive to determine the inverse of very large matrices.
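The construction above can be illustrated with the short sketch below, which builds the incidence matrix C of Eq. (6) and the edge weights Ω of Eq. (10) for an assumed 4-connected grid, and minimises the cost of Eqs. (7)–(9) with Q = I using a conjugate-gradient solver. The use of SciPy and the function name are our own choices, not the authors' implementation.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import cg

def grid_smooth(z, theta=0.002):
    """Grid smoothing on an H x W greyscale image z; returns new node coords X, Y."""
    h, w = z.shape
    idx = np.arange(h * w).reshape(h, w)

    # Edges of a uniform 4-connected grid: (sending node, receiving node)
    edges = np.vstack([np.column_stack([idx[:, :-1].ravel(), idx[:, 1:].ravel()]),   # horizontal
                       np.column_stack([idx[:-1, :].ravel(), idx[1:, :].ravel()])])  # vertical

    L, M = len(edges), h * w
    rows = np.repeat(np.arange(L), 2)
    cols = edges.ravel()
    vals = np.tile([1.0, -1.0], L)
    C = sparse.csr_matrix((vals, (rows, cols)), shape=(L, M))      # Eq. (6)

    zf = z.ravel().astype(np.float64)
    omega = (zf[edges[:, 0]] - zf[edges[:, 1]]) ** 2               # Eq. (10)
    A = C.T @ sparse.diags(omega) @ C                              # A = C^T Omega C

    # Minimising J (Eqs. (7)-(9)) with Q = I gives (I + theta*A) X = X_hat (same for Y)
    Y0, X0 = np.mgrid[0:h, 0:w].astype(np.float64)
    S = sparse.eye(M) + theta * A
    X, _ = cg(S, X0.ravel(), atol=1e-8)
    Y, _ = cg(S, Y0.ravel(), atol=1e-8)
    return X.reshape(h, w), Y.reshape(h, w)
```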
Fig. 1 shows the results of applying the grid smoothing algorithm on a portion of the Building Site sequence. A value of 0.002 was chosen for θ. A small value of θ will restrict the displacement of the points and can be varied according to the application.
Fig. 1. Grid Smoothing algorithm applied to Building Site sequence with θ=0.002
3.2 Averaging Using Grid Smoothing to Determine a Geometrically Improved Frame

Since atmospheric turbulence can be viewed as being quasi-periodic, averaging a number of frames provides a reference frame that is geometrically improved. Applying simple temporal averaging to the common representation of an image, i.e., one in which the spatial coordinates of a pixel are determined by its row and column numbers, causes the reference image to be further degraded due to temporal blurring. If we instead apply temporal averaging to the grid-smoothed images, where the edges are better defined, we are able to minimize the effects of temporal blurring. The temporal averaging of the grids can be performed by determining the average of the new grid coordinates, X and Y, as well as the grey levels of the pixels, Z.
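A brief sketch of this averaging step, reusing the grid_smooth routine sketched in Sect. 3.1, is given below; stacking the sequence into a single (N, H, W) array is our assumption about the data layout, not the authors' implementation.

```python
import numpy as np

def grid_smoothed_average(frames, theta=0.002):
    """Temporal average of grid-smoothed frames.

    frames : (N, H, W) array of a turbulence-degraded sequence.
    Returns the averaged node coordinates (X, Y) and grey levels Z, which
    together define the geometrically improved reference grid.
    """
    xs, ys = [], []
    for frame in frames:
        X, Y = grid_smooth(frame, theta)   # grid_smooth as sketched in Sect. 3.1
        xs.append(X)
        ys.append(Y)
    X_avg = np.mean(xs, axis=0)
    Y_avg = np.mean(ys, axis=0)
    Z_avg = frames.mean(axis=0)            # average grey levels of the nodes
    return X_avg, Y_avg, Z_avg
```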
Fig. 2. (a) Average and (b) grid smoothed average of Building Site sequence with θ=0.002
4 Results

Table 1 shows the sharpness comparisons of selected sequences. Equation (1) was used as the sharpness metric. It can be seen that the simple averaged reference frame suffers from blurring, whereas the grid smoothing averaged reference frame is sharper.

Table 1. Comparison of sharpness
Sequence                                      Grid Smoothing Average   Turbulent Frames   Average of N Frames
Lenna Sequence (Simulated)                    1.8455 x 10^5            1.8443 x 10^5      1.8617 x 10^5
Flat Sequence (Simulated)                     1.8015 x 10^5            1.8041 x 10^5      1.8112 x 10^5
Building_Site Bricks (Real)                   1.5317 x 10^6            1.5297 x 10^6      1.5326 x 10^6
Armscor Building (Real)                       1.2418 x 10^4            1.2618 x 10^4      1.2451 x 10^4
Tiffany with Building_Site warp (Simulated)   1.1761 x 10^5            1.1759 x 10^5      1.1854 x 10^5
Clock with IP warp (Simulated)                1.0153 x 10^5            1.0212 x 10^5      1.0302 x 10^5
Fig. 3 shows the result of a simulated sequence using motion fields obtained from a real turbulence sequence and Fig. 4 shows the results of a simulated sequence using a 2-pass mesh warping algorithm.
Fig. 3. Comparison of different algorithms on the clock sequence with IP warp
Fig. 4. Comparison of different algorithms on the Girl1 sequence using simulated warp
The real datasets consist of sequences obtained in the presence of real atmospheric turbulence. These datasets were obtained from the CSIR (Council for Scientific and Industrial Research) using their Cyclone camera and vary in range from 5 km to 15 km. The sequences vary from buildings and structures, which contain a large amount of detail, to open areas in which the contrast can be low. The turbulence levels vary from light to severe, with most of the sequences having a medium level of turbulence as determined visually by an expert. In the case of real sequences, no ground truth is available, therefore the sequences cannot be compared with the original. The MSE (mean-square error) was calculated between consecutive frames in a sequence, which shows the stability of the sequence. An intensity-corrected MSE will measure the differences between frames, i.e., if turbulence is present the geometric changes between frames will be large and this will correspond to a high MSE. Fig. 5 shows the results of a real turbulence sequence of the Armscor building taken at a distance of 7 km and Fig. 6 shows the results of a real turbulence sequence taken of a tower at a distance of 11 km. An example of a turbulent frame of the tower sequence is shown in Fig. 7(a). In this sequence the detail is minimal. The turbulence level is however severe, both in terms of geometric distortion and blurring. The corrected frames for the different algorithms are also shown in Fig. 7.
Fig. 5. MSE between consecutive frames of Armscor sequence
Fig. 6. MSE between consecutive frames of Tower sequence
Fig. 7. (a) Real turbulence-degraded frame of a tower at a distance of 11km, (b) frame from FRTAAS corrected sequence, (c) frame from Time Averaged corrected sequence, (d) frame from grid smoothed corrected sequence (θ =0.002), (e) average of 20 frames of turbulent sequence. (CSIR Dataset)
Table 2 shows a comparison of the running times of all the algorithms in MATLAB. The differential elastic image registration algorithm proposed by Periaswamy and Farid [8] was used for both the FRTAAS and the Time-Averaged algorithms as it provided good results with little or no discontinuities. For comparison, the Time-Averaged algorithm was also run using the Lucas-Kanade algorithm with one level and one iteration. While this produced discontinuities in the output image, the registration algorithm is much faster than the elastic registration algorithm.

Table 2. Comparison of MATLAB running times per frame
Sequence (Resolution)       Grid Smoothing Algorithm θ=0.002 (s)   FRTAAS (s)   Time Averaged (Elastic) (s)   Time Averaged (Lucas-Kanade) (s)
Tower (747 x 199)           20.05                                  107.75       108.51                        36.31
Building_Site (587 x 871)   76.15                                  410.24       424.67                        129.31
Armscor (384 x 676)         38.16                                  200.48       198.93                        64.53
Clock (256 x 256)           10.97                                  33.13        32.46                         16.15
5 Conclusions and Further Work

A novel grid smoothing algorithm was shown to be able to compensate for turbulence distortions and was compared to other algorithms proposed to correct turbulence
degraded sequences. It was shown that using the grid smoothing algorithm, a geometrically improved reference frame could be generated that is sharper than using simple temporal averaging while still being comparable in terms of the MSE which measures the geometric improvement of the frames. The algorithm was also shown to be able to correct an entire sequence by applying it as a temporal filter. Further work will involve implementation on a GPU to assess real-time capability. The grid smoothing algorithm will also be enhanced by using statistical methods instead of averaging to obtain a reference frame. Acknowledgements. The authors wish to thank the National Research Foundation, Tshwane University of Technology and French South African Institute of Technology.
References 1. Li, D., Mersereau, R.M., Frakes, D.H., Smith, M.J.T.: A New Method For Suppressing Optical Turbulence In Video. In: Proc. European Signal Processing Conference, EUSIPCO 2005 (2005) 2. Fraser, D., Thorpe, G., Lambert, A.: Atmospheric turbulence visualization with wide-area motion blur restoration. Optical Society of America, 1751–1758 (1999) 3. Kopriva, I., Du, Q., Szu, H., Wasylkiwskyj, W.: Independent Component Analysis approach to image sharpening in the presence of atmospheric turbulence. Optics Communications 233, 7–14 (2004) 4. Tahtali, M., Fraser, D., Lambert, A.J.: Restoration of non-uniformly warped images using a typical frame as prototype. TENCON 2005-2005 IEEE Region 10, 1382–1387 (2005) 5. Zhao, W., Bogoni, L., Hansen, M.: Video Enhancement by Scintillation Remova. In: Proc. of the 2001 IEEE International Conference on Multimedia and Expo., pp. 393–396 (2001) 6. Frakes, D.H., Monaco, J.W., Smith, M.J.T.: Suppression of atmospheric turbulence in video using an adaptive control grid interpolation. In: Proc. of the IEEE Int. Conf. Acoustics, Speech, and Signal Processing, pp. 1881–1884 (2001) 7. Lucas, B.D., Kanade, T.: An iterative image registration technique with an application to stereo vision (darpa). In: Proc. of the 1981 DARPA Image Understanding Workshop, pp. 121–130 (April 1981) 8. Periaswamy, S., Farid, H.: Elastic Registration in the presence of Intensity Variations. IEEE Transactions on Medical Imaging 22, 865–874 (2003) 9. Noel, G., Djouani, K., Hamam, Y.: Grid smoothing: A graph-based approach. In: Bloch, I., Cesar Jr., R.M. (eds.) CIARP 2010. LNCS, vol. 6419, pp. 153–160. Springer, Heidelberg (2010) 10. Hamam, Y., Couprie, M.: An optimisation-based approach to mesh smoothing: Reformulation and extensions. In: Torsello, A., Escolano, F., Brun, L. (eds.) GbRPR 2009. LNCS, vol. 5534, pp. 31–41. Springer, Heidelberg (2009)
A New Image-Based Method for Event Detection and Extraction of Noisy Hydrophone Data F. Sattar1 , P.F. Driessen2 , and G. Tzanetakis3 2
1 NEPTUNE Canada, University of Victoria, Victoria, BC, Canada Dept. of Electrical and Computer Eng., University of Victoria, BC, Canada 3 Dept. of Computer Science, University of Victoria, BC, Canada
[email protected],
[email protected],
[email protected]
Abstract. In this paper, a new image-based method for detecting and extracting events in noisy hydrophone data sequences is developed. The method relies on the dominant orientation map and its robust reconstruction based on a mutual information (MI) measure. This reconstructed dominant orientation map of the spectrogram image can provide key segments corresponding to various acoustic events and is robust to noise. The proposed method is useful for long-term monitoring and proper interpretation of a wide variety of marine mammal and human-related activities using hydrophone data. The experimental results demonstrate that this image-based approach can efficiently detect and extract unusual events, such as whale calls, from highly noisy hydrophone recordings. Keywords: Event detection/extraction, Noisy hydrophone data, Dominant orientation map, Long-term monitoring, Bioacoustics, Marine mammals.
1 Introduction
The term acoustic event refers to a short audio segment that occurs rarely, whose occurrence cannot be predicted, and whose sound is relevant to the given application. For underwater monitoring applications, the goal is to detect sounds representing marine mammals, such as whales, as well as human activities. The task of detecting useful acoustic events from hydrophone data is rather difficult, as such data is usually quite noisy and highly correlated. Acoustic (rather than visual) monitoring is used primarily since acoustic waves can travel long distances in the ocean. Visual monitoring is useful only for short-range observations up to several tens of meters at most, and is not suitable for monitoring whales or shipping which may be many kilometers away. Thus sound information captured using hydrophones plays an important role. This paper therefore deals with audio information, which is relevant for long-term monitoring applications.
Most existing monitoring systems for unusual event detection work with video information. For unusual events in audio, the paper [1] proposes a semi-supervised adapted Hidden Markov Model (HMM) framework, in which usual event models are first learned from a large amount of (commonly available) training data, while unusual event models are learned by Bayesian adaptation in an unsupervised manner; [2] robustly models the background of complex audio scenes with a Gaussian mixture method incorporating the proximity of distributions determined using entropy; [3] investigates a machine learning, descriptor-based approach built on support vector novelty detection that does not require an explicit statistical model of the descriptors; and [4] applies optimized One-Class Support Vector Machines (1-SVMs) as a discriminative framework for sound classification. In this paper, we develop a robust image-based event detection and extraction method which can be useful for efficient monitoring of hydrophone data. The 1-D hydrophone data is converted into a 2-D spectrogram, a linear time-frequency signal-energy representation, by using the short-time Fourier transform (STFT). The proposed method is based on the generation of a dominant orientation map from the maximum moment of the phase congruency of the spectrogram, followed by its robust reconstruction based on MI measures. This orientation map method is more reliable since the dominant points are generated based on information from various scales and correspond to the maximum eigenvalues, which are robust to noise. Note that here, the orientation in the 2-D image plane refers to spatial image phase rather than the phase of the complex FFT values in the spectrogram.
2 Methodology

2.1 Data

Hydrophones are underwater microphones designed to capture underwater sounds by converting acoustic energy into electrical energy. The sound waves recorded come from a wide variety of mammals and human related activities. Different types of hydrophones are used for different tasks. For example, on the NEPTUNE Canada observatory (http://www.neptunecanada.ca), an enhanced version of the Naxys Ethernet Hydrophone 02345 system is used, which includes the hydrophone element (rated to 3000m depth), 40 dB pre-amplifier, 16-bit digitizer and Ethernet 100BaseT communication. This particular hydrophone is of high quality with sufficient bandwidth, and can be integrated into an existing underwater instrument package. The NEPTUNE hydrophones collect at a constant rate and can each generate approximately 5.5 GB of data per day. The sampling frequency of the NEPTUNE data used is 96 kHz.

2.2 Generation of Dominant Orientation Map
A 2-D Gabor function is obtained with a Gaussian function g(x, y) modulated with a sinusoid of the frequency f along the orientation θm relative to the positive x-axis.
$\psi(x,y) = g(x,y)\exp(j2\pi f x + \Phi) = \frac{1}{2\pi\sigma_x\sigma_y}\exp\left[-\frac{1}{2}\left(\frac{x^2}{\sigma_x^2} + \frac{y^2}{\sigma_y^2}\right)\right]\exp(j2\pi f x + \Phi)$    (1)
In Eq. (1), g(x, y) is the 2-D Gaussian function, f denotes the radial frequency of the Gabor function, and the Gaussian envelope along the x and y axes is controlled by the space constants σx and σy. The ratio between σx and σy specifies the ellipticity of the support of the Gabor function, and the phase offset Φ denotes the symmetricality. We set f, σx, σy and Φ to 0.125, 6, 6, and π/18, respectively, since these values are found to give good results in Section 3. The value of θm is defined as π(m − 1)/M, where M represents the total number of orientations and m = 1, ..., M. The Fourier transform of the 2-D Gabor function in Eq. (1) is:

$\Psi(u,v) = \exp\left[-\frac{1}{2}\left(\frac{(u-f)^2}{\sigma_u^2} + \frac{v^2}{\sigma_v^2}\right)\right]$    (2)

where Ψ(u, v) is the Fourier transform of ψ(x, y) and σu = 1/(2πσx), σv = 1/(2πσy). The 2-D Gabor wavelets are then obtained by rotating and scaling the Gabor function ψ(x, y):

$\psi_{s,m}(x, y) = a^{-s}\,\psi(x', y')$    (3)

In Eq. (3),
$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta_m & \sin\theta_m \\ -\sin\theta_m & \cos\theta_m \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}$    (4)
Here $a^{-s}$, s = 0, 1, ..., S − 1 is the scale factor (a = 2). Gabor wavelets thus provide multi-scale and multi-orientation information of the input image I(x, y), which is obtained based on the short-time Fourier transform (STFT) of the 1-D input signal x(t) as
$I(x,y) = X(f,t) = \left|\int x(\tau)\,h(\tau - t)\,e^{j2\pi f(\tau - t)}\,d\tau\right|^2$    (5)
where h(t) is a window function and the pixel value of I(x, y) corresponds to the signal-energy value of X(f, t). The steps involved in the proposed approach for generating the dominant orientation map are as follows; the purpose is to calculate the pixel values which correspond to the dominant orientations of the input image.

• Step 1. The input image I(x, y) is first transformed using the Gabor wavelet along M orientations for S scales, given by

$W_{s,m}(x,y) = \int\!\!\int I(x,y)\,\psi^{*}_{s,m}(x - x_1,\, y - y_1)\,dx_1\,dy_1,$    (6)

for s = 1, 2, ..., S, and m = 1, 2, ..., M. In (6), '*' denotes the complex conjugate, whereas $W_{s,m}(x,y)$ represents the coefficient of the Gabor WT. By utilizing the Gabor WT, the proposed method is robust to noise [5].
• Step 2. The measure of the second moment matrix at each point is then computed as follows:

$a = \sum_{s}\sum_{m} \left(|W_{s,m}|\cos(\theta_m)\right)^2,$    (7)

$b = \sum_{s}\sum_{m} |W_{s,m}|\cos(\theta_m)\,|W_{s,m}|\sin(\theta_m),$    (8)

$c = \sum_{s}\sum_{m} \left(|W_{s,m}|\sin(\theta_m)\right)^2$    (9)

In Eqs. (7)-(9), θm is the angle of the orientation m, and W_{s,m} represents the magnitude of the Gabor WT coefficient at scale s and orientation m. The second moment matrix is then constructed as follows, summing over all scales and angles:

$\Gamma = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$    (10)

By analyzing the eigenvalues of the second moment matrix, the proposed method has isotropic responses.

• Step 3. The eigenvalues of the above matrix are

$\lambda_{1,2} = \frac{1}{2}(a + c) \pm \frac{1}{2}\sqrt{4b^2 + (a - c)^2}$    (11)

We take the higher eigenvalue for the "dominant orientation map" p at each pixel, i.e.,

$p = p(x,y) = p(f,t) = \frac{1}{2}(a + c) + \frac{1}{2}\sqrt{4b^2 + (a - c)^2}$    (12)
Here, the maximum eigenvalue of the second moment matrix for each point of the WT decomposition yields the pronounced orientation information [5].
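A compact sketch of Steps 1–3 is given below. It builds a Gabor filter bank with scikit-image's gabor_kernel (our choice of implementation; the mapping of the parameters f, σx, σy and Φ onto this routine is approximate) and accumulates the second-moment terms of Eqs. (7)–(9) to obtain the dominant orientation map p of Eq. (12).

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage.filters import gabor_kernel

def dominant_orientation_map(image, n_scales=9, n_orient=6, frequency=0.125):
    """Dominant orientation map p (Eq. (12)) of a spectrogram image.

    For every scale s and orientation theta_m the image is filtered with a Gabor
    kernel (Step 1, Eq. (6)); the second-moment terms a, b, c (Eqs. (7)-(9)) are
    accumulated and the larger eigenvalue (Eq. (11)) is returned as p (Eq. (12)).
    """
    a = np.zeros_like(image, dtype=np.float64)
    b = np.zeros_like(a)
    c = np.zeros_like(a)

    for s in range(n_scales):
        for m in range(n_orient):
            theta = np.pi * m / n_orient
            kernel = gabor_kernel(frequency / (2 ** s), theta=theta,
                                  sigma_x=6.0, sigma_y=6.0, offset=np.pi / 18)
            w = np.abs(fftconvolve(image, kernel, mode='same'))   # |W_{s,m}|
            a += (w * np.cos(theta)) ** 2
            b += (w * np.cos(theta)) * (w * np.sin(theta))
            c += (w * np.sin(theta)) ** 2

    # Larger eigenvalue of the 2x2 second-moment matrix Gamma = [[a, b], [b, c]]
    return 0.5 * (a + c) + 0.5 * np.sqrt(4.0 * b ** 2 + (a - c) ** 2)
```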
2.3 Enhancement of Dominant Orientation Map
In order to remove the high-noise components and improve the signal components of the dominant orientation map p from Eq. (12), we perform edge tracking followed by contrast enhancement.

– Edge Tracking: The edge detector used for edge tracking considers the Ratio of Averages (RoA) as a measure of the contrast between different image regions. Within the processing window, a median line is defined by the center row, column, or diagonal. The average gray level on each side of each such line is computed, and the maximum ratio is selected. Using this approach, one can expect to detect an edge delimiting two regions if the contrast between them significantly exceeds the selected threshold. The threshold selection problem can be considered equivalent to setting a constant false alarm ratio (CFAR) [6]. Here, we perform edge tracking in the vertical direction (i.e., along rows) in order to separate the frequencies of the signal components as well as remove the non-impulsive
noise. The results are the opposite if tracking is performed along the horizontal direction, i.e., the result is noisier and less disjoint (more overlap) in frequency.

– Contrast Enhancement: For contrast enhancement, we use a nonlinear and adaptive rational filter [7] which is able to enhance the signal components while smoothing out the noise components. The basic idea is to modify the center pixel of each local square window by modulating it with the contributions of its neighboring pixels. As a consequence, the output value depends more strongly on those neighboring pixels having similar gray levels, while those of differing gray levels have reduced contributions. In this way, the approach emphasizes the signal components while smoothing out the noise components.

2.4 Event Detection and Extraction Based on Reconstruction of Dominant Orientation Map
The event detection and extraction are performed based on reconstructing the enhanced dominant orientation map originally derived from Eq. (12) by measuring the mutual information (MI). In our approach, the MI between two successive image blocks is calculated separately for each frequency bin of the dominant orientation map. At block b_t of size (N × N) samples, a matrix C_{t,t+1} can be created carrying the gray-scale N-level (N = 256) transitions between blocks b_t and b_{t+1}. The element of C_{t,t+1}, with 0 ≤ i ≤ N − 1 and 0 ≤ j ≤ N − 1, corresponds to the probability that a pixel with gray level i in block b_t has gray level j in block b_{t+1}. In other words, C_{t,t+1}(i,j) is the number of pixels which change from gray level i in block b_t to gray level j in block b_{t+1}, divided by the number of pixels in the window block. The MI I_{t,t+1} of the transition from block b_t to block b_{t+1} is expressed by Eq. (13):

$I_{t,t+1} = -\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} C_{t,t+1}(i,j)\,\log\frac{C_{t,t+1}(i,j)}{C_t(i)\,C_{t+1}(j)}$    (13)
A small value of the MI between blocks b_t and b_{t+1} indicates the presence of noise components or outliers. Here, the MI values $I_{t_c, t_c+1}$ along time are calculated with the center of the block t_c shifted by 5 samples in order to detect the peaks which correspond to the key segments. Then the following steps are performed based on the output MI matrix (here, the MI matrix consists of row MI sequences, and each MI sequence comprises the MI values along time of one frequency bin):

Step 1. At each frequency bin k (frequency spacing ≈ 24 Hz with sampling frequency 96 kHz), the peakedness P_k = μ_k/σ_k of the MI matrix is calculated, where μ_k and σ_k denote the mean and the standard deviation of the MI sequence at the kth frequency bin.

Step 2. A contrast function Cr(k) is then calculated, defined as Cr(k) = (max(P_k) − min(P_k))/(max(P_k) + min(P_k)), where max(·) and min(·) represent the maximum and minimum values of the kth MI sequence.
Step 3. The maximum value of the contrast function, Cr_max, is calculated and compared with a threshold Th. If Cr_max < Th, then the process is stopped and the input data is considered to contain no event; otherwise, we proceed to the following steps (Steps 4-7) to search for the key segments corresponding to the targeted events. Note that the threshold Th is chosen empirically.

Step 4. Calculate the derivative PD_k = |P_k − P_{k−1}|, where k = 2, ..., K − 1, and K is the total number of frequency bins used.

Step 5. Find the frequency bin $k_{max} = \arg\max_k \{PD(k)\}$.

Step 6. Obtain the reconstructed orientation map p_r(k, n) from p(k, n) by selecting the key segments based on the frequency bin k_max as well as its neighboring frequency bins, and discarding all other segments corresponding to the remaining frequency bins. Here n denotes the time sample index.

Step 7. Obtain a detection function D(n) by summing the reconstructed dominant orientation map p_r(k, n) along the frequency bins k:

$D(n) = \sum_{k=1}^{K} p_r(k, n)$    (14)
and compare it with the corresponding mean value as the threshold Tr. An event is detected only when D(n) is larger than the threshold Tr, and a binary event extraction function B(n) is generated, showing 1's for the extracted events and 0's otherwise, i.e.,

$B(n) = \begin{cases} 1 & \text{if } D(n) > Tr \\ 0 & \text{if } D(n) \le Tr \end{cases}$    (15)
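The sketch below illustrates the per-block MI of Eq. (13) and the detection and extraction functions of Eqs. (14)–(15). It is a simplified illustration: the normalisation of the map to [0, 1], the block handling, and the function names are our assumptions rather than the authors' code.

```python
import numpy as np

def mutual_information(block_a, block_b, n_levels=256):
    """MI of Eq. (13) between two equal-size blocks (values assumed in [0, 1])."""
    a = np.clip((block_a * (n_levels - 1)).astype(int).ravel(), 0, n_levels - 1)
    b = np.clip((block_b * (n_levels - 1)).astype(int).ravel(), 0, n_levels - 1)
    joint = np.histogram2d(a, b, bins=n_levels,
                           range=[[0, n_levels], [0, n_levels]])[0]
    joint /= joint.sum()
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    ratio = joint[nz] / (pa[:, None] * pb[None, :])[nz]
    return -np.sum(joint[nz] * np.log(ratio))   # sign convention of Eq. (13)

def detect_events(p_r, Tr=None):
    """Detection function D(n) (Eq. (14)) and extraction function B(n) (Eq. (15)).

    p_r : (K, n) reconstructed dominant orientation map (frequency bins x time).
    """
    D = p_r.sum(axis=0)                 # Eq. (14)
    if Tr is None:
        Tr = D.mean()                   # mean value used as threshold
    B = (D > Tr).astype(int)            # Eq. (15)
    return D, B
```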
3 Experimental Results and Discussion

3.1 Illustrative Results for Dominant Orientation Map
The results of the generated dominant orientation map for the NEPTUNE hydrophone data are illustrated here. The results for a noisy hydrophone recording containing events are shown in the left panel of Fig. 1, whereas the results for a hydrophone recording without events are presented in the right panel of Fig. 1. The hydrophone sequence in Fig. 1(a) is first transformed into the 2-D time-frequency plane, or spectrogram, using the Audacity (http://audacity.sourceforge.net) open-source audio analysis software. For the calculation of the spectrogram, a Hann window of length 42 msec and 50% overlap, together with a total number of FFT points N_FFT = 4096, has been used throughout this paper (see Fig. 1(c) and Fig. 1(d)). Note that we have confined the spectrogram to a limited number of frequency bins in order to process approximately the frequency range [0−6000] Hz, which corresponds to 256 frequency bins. For the generation of the dominant orientation map, we empirically adjusted the parameters for computing
(f)
Fig. 1. (a) An input hydrophone signal with events and its corresponding (c) spectrogram and (e) dominant orientation map; (b) an input hydrophone signal without events and its corresponding (d) spectrogram and (f) dominant orientation map
the dominant orientation map obtained as the maximum moment of the phase congruency covariance, following the practice reported in [8]. The number of wavelet scales and the number of filter orientations are set to 9 and 6, respectively. The wavelength of the smallest-scale filter is set to 9 and the filter bandwidth is set to 0.5. The ratio of the angular interval between filter orientations and the standard deviation of the angular Gaussian spreading function is set to 1.2. The above setting creates a set of wavelets that form band-pass filters suitable for tracking unusual events with a wide range of durations. The dominant orientation maps of the spectrograms are displayed in Fig. 1(e) and Fig. 1(f). It can be seen that the presence or absence of events can be clearly tracked on the generated dominant orientation map. Note that we have plotted the spectrograms and dominant orientation maps within a limited number of frequency bins in order to clearly visualize the targeted events around the frequency range [0 − 3000] Hz.

3.2 Illustrative Results for Event Detection and Extraction Based on Reconstruction of Dominant Orientation Map
The results of the proposed method for hydrophone signals with and without events are illustrated in Fig. 2. Fig. 2(a) shows the reconstructed dominant orientation map having key segments for the input signal in Fig. 1(a), as depicted by the bright regions. On the other hand, the dark regions in Fig. 2(a) indicate the absence of any key segments. The corresponding results for detection and extraction of events, e.g., whale calls, are presented in Fig. 2(c) and Fig. 2(e). Similarly, the results in Fig. 2(d) and Fig. 2(f) indicate no presence of events for the
25 (sec)
(f)
Fig. 2. (a) The output dominant orientation map of the signal in Fig. 1(a) and its corresponding (c) event detection function D(n) together with threshold Tr and (e) event extraction function B(n); (b) the output dominant orientation map of the signal in Fig. 1(b) and its corresponding (d) event detection function and (f) event extraction function.
40
45
(d)
Fig. 3. (a) The output dominant orientation map and (c) resulting detection function for the signal in Fig. 1(a); (b) The original spectrogram and (d) resulting detection function for the signal in Fig. 1(a)
input signal in Fig. 2(b). Here the sizes of the windows used for the RoA detector and the rational filter are (5 × 5) and (3 × 3), respectively. For the calculation of the MI matrix, the number of gray-scale levels N and the size of the one-dimensional window N_W used are N = 256 and N_W = 5, respectively, together with the threshold Th = 0.5. We have compared the results obtained for the spectrogram and the reconstructed dominant orientation map; see Fig. 3(a) and Fig. 3(b), respectively. The detection function D(n) for the dominant orientation map is depicted in Fig. 3(c), whereas the corresponding function obtained by summing the spectrogram in Fig. 3(b) along frequency is shown in Fig. 3(d). The results show the effectiveness of the presented method for detecting and extracting events of highly noisy hydrophone data.
Freq.
Time (sec) Event Detection/Extraction 1 0.5 0
0
25 Time (sec)
Fig. 4. Illustrative results for detecting and extracting events of noisy hydrophone recordings based on reconstructed dominant orientation map
In order to illustrate the performance of the proposed method based on the reconstruction of the dominant orientation map, we present event detection and extraction results in Fig. 4. These demonstrate that the proposed method is able to detect and extract various acoustic events effectively from highly noisy recordings, where the extracted events with longer durations represent the whale sounds.
4 Conclusion and Future Works
A new technique for unusual event detection and extraction has been presented. We have developed a new image-based method for long-term monitoring of noisy hydrophone data based on the dominant orientation map and the MI measure. Experimental results show that the proposed method can efficiently capture the
events from the highly noisy data. The preliminary results are presented here, and a more detailed performance evaluation will be presented in a future publication. As far as we know, no other image-based method has been developed previously for event detection or extraction from the noisy hydrophone data we are dealing with.
References 1. Hodge, V., Austin, J.: A Survey of Outlier Detection Methodologies. Artificial Intelligence Review 22(2), 85–126 (2004) 2. Daniel, D.Z., Bengio, G.-P.S., McCowan, I.: Semi-Supervised Adapted HMMs for Unusual Event Detection. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, CVPR 2005 (2005) 3. Davy, M., Desobry, F., Gretton, A., Doncarli, C.: An Online Support Vector Machine for Abnormal Events Detection. Signal Processing 86(8), 2009–2025 (2006) 4. Rabaoui, A., Davy, M., Rossignol, S., Lachiri, Z., Ellouze, N.: Improved One-Class SVM Classifier for Sounds Classification. In: IEEE Conf. on Advanced Video and Signal Based Sound Environment Monitoring, AVSS 2007 (September 2007) 5. Lindeberg, T.: Scale-Space Theory in Computer Vision. Kluwer Academic Publishers, Dordrecht (1994) 6. Touzi, R., Lopes, A., Bousquet, P.: A Statistical and Geometrical Edge Detector for SAR Images. IEEE Trans. Geosci. Remote Sensing 26, 764–773 (1988) 7. Ramponi, G., Moloney, C.: Smoothing Speckled Images Using an Adaptive Rational Operator. IEEE Signal Processing Letters 4(3) (1997) 8. Kovesi, P.: Image Features from Phase Congruency. A Journal of Computer Vision Research 1(3) (1999)
Detection of Multiple Preceding Cars in Busy Traffic Using Taillights Rachana A. Gupta and Wesley E. Snyder Electrical Engineering, North Carolina State University, Raleigh, USA {ragupta,wes}@ncsu.edu
Abstract. This paper presents an improved method for detecting and segmenting taillight pairs of multiple preceding cars in busy traffic, in day as well as night conditions. The novelties and advantages of this method are that it is designed to detect multiple cars simultaneously, it does not require knowledge of lanes, it works in busy traffic in daylight as well as at night, and it is fast irrespective of the number of preceding vehicles in the scene, and therefore suitable for real-time applications. The time to process the scene is independent of the size of the vehicles in pixels and the number of preceding cars detected. One of the previous night-time taillight detection methods in the literature is modified to detect taillight pairs in the scene for both day and night conditions. This paper further introduces a novel hypothesis verification method based on the mathematical relationship between the vehicle distance from the vanishing point and the location of and distance between its taillights. This method enables the detection of multiple preceding vehicles in multiple lanes in a busy traffic environment in real time. The results are compared with state-of-the-art algorithms for preceding vehicle detection performance, time and ease of implementation. Keywords: Taillight detection, preceding vehicle detection, autonomous vehicles, computer vision.
1 Introduction and Background

Recent advances in autonomous vehicle and driver assistive technology have triggered research in the detection of other traffic elements. Considering both the safety of the drivers and the potential for autonomous vehicles, detection of other cars on the road has gained importance over the past decade. Preceding car detection is an important aspect, which is useful not only for collision avoidance but also in leader-follower applications. The monocular-vision-based preceding car detection problem has been looked at from many different perspectives in the literature. Features such as shadows created by cars and the subsequent change of intensity [6], car symmetry [3], [7], color [2], texture, frequency [1], etc., are used for car detection. Ito and Yamada [6] use a histogram matching method to detect cars within a Region Of Interest (ROI), i.e., known lane boundaries. Jiangwei et al. [7] use the change in intensity due to the shadow under the vehicle to hypothesize the presence of a car, and the hypothesis is verified by checking the property of symmetry about the vertical axis within the ROI. Du and Papanikolopoulos [3] also detect and track the preceding vehicle using the vertical symmetry property of the vehicle.
Fu and Huang [4] suggested a fusion of SVM (Support Vector Machines) and particle filters to detect and track the vehicles in different weather conditions. Sun et al. [12] use a Gabor filter with SVM to detect the preceding vehicles. The hypothesis for the presence of the vehicle is formed by histogram-based edge detection methods. Though these methods work very well in daylight conditions, it becomes difficult to track features in low light and at night. Some of these methods assume prior knowledge of correct lane boundaries and use the lane boundaries as the ROI for vehicle detection in their lane. However, in the scenario of busy traffic, lanes are not always properly visible, and therefore cannot be used to mark a correct ROI for preceding vehicle detection. Furthermore, detecting multiple preceding vehicles in surrounding lanes also requires substantial processing time, making the system slower. Because of the challenges mentioned above, taillight detection was suggested in the literature for detecting preceding vehicles in low light conditions. Symmetrical and bright red taillights are the most common, high-intensity and therefore easily visible feature, and thus are easily extractable, especially at night. Taillights are therefore used in many cases to detect, track and follow the preceding vehicle. Sukthankar [11] used a vision-based method called RACOON for detection and tracking of a taillight pair to estimate the position of the lead vehicle for autonomous vehicle following. RACOON tracked illuminated taillights and therefore was suitable only in low light conditions (dawn, dusk, night). RACOON's processing time is mentioned to be proportional to the number of pixels in the ROI and therefore it varies with the preceding vehicle's distance from the ego-vehicle [11]. Thus, due to the requirement for higher processing power to detect multiple vehicles, RACOON seems best suited for detecting and tracking only a single vehicle and was not well suited for high traffic conditions [11]. O'Malley et al. [9] use the knowledge of color, size, symmetry and position of the taillights to detect the preceding vehicle at night. They present results of single vehicle detection and do not mention the suitability of the system to detect multiple vehicles in busy traffic. This paper focuses upon autonomous driving and driver assistive technologies, and therefore the end goal is a system that must be aware of preceding vehicles not only in the same lane but also in other lanes simultaneously, to make better driving and path planning decisions in busy highway traffic scenarios in both day and night conditions. Considering the need for a fast response to cope with the dynamically changing driving environment, such a system will also need to be real-time. Therefore, in this paper, we improve previous taillight detection methods in the literature to be able to achieve the following:
1. No need for knowledge of lanes,
2. Suitable for busy traffic in daylight as well as at night,
3. Capable of detecting multiple preceding vehicles including surrounding lanes, and
4. Fast, irrespective of the number of preceding vehicles to be detected in the scene, and therefore suitable for real-time applications.
This paper starts with the basic concept presented by O'Malley et al. in [9] to detect the pair of taillights indicating the presence of a preceding vehicle. The symmetry of the taillights in shape, size, and location is used to form the hypothesis for the presence of the vehicle taillights, as these symmetries are invariant to the location,
distance and height of the preceding vehicle as well as the time of the day. However, O'Malley et al. filter out the taillights of the vehicles in lanes other than that of the ego-vehicle by using a fixed aspect ratio constraint; therefore, their method is not suitable for detecting multiple preceding vehicles. O'Malley et al. cluster white and red pixels together, as they operate only in night conditions, where red pixels get saturated due to the high exposure time of the camera at night. To be able to operate in both daylight and night conditions, this paper clusters only red pixels. Camera automatic gain control is turned off and the exposure time is kept small to avoid saturation of the red color due to the brightness of the lights, especially at night. To be able to detect multiple vehicles in real time, this paper adapts content-addressable memory based clustering (Snyder and Savage [10]) for clustering of the red pixels in the image. The advantages mentioned above result from the novel contributions of this paper:
1. The introduction of geometrical properties such as the moment of the shape, the taillight vertical location with respect to the horizon, and thresholds varying as a function of the size of the taillights, to improve the hypothesis formation used to pair the red clusters.
2. A novel technique – the "Vanishing Point Constraint" – for hypothesis verification, which allows the detection of multiple preceding vehicles in multiple lanes in real time.
3. An experimental verification of the applicability of the approach.

1.1 Organization

The paper is organized into 5 sections. Section 2 describes the basic principle behind the philosophy and the problem formulation. This section also illustrates the color segmentation and blob clustering. Section 3 explains the hypothesis verification used to detect the preceding vehicles. Various mathematical constraints are designed in this section to model taillight pair verification. This section also illustrates the novel "Vanishing Point Constraint" used to verify the hypothesis for better and more robust results. Section 4 presents results and discussion, followed by the conclusion and future research in Section 5.
2 Basic Principle and Hypothesis Formulation

As discussed before, the color and the symmetry of the size, shape and location of the pair of taillights are unique properties of the vehicle class [9]. We form the following hypothesis H based on the properties of the taillights. H: If bi and bj are two red blobs, they represent the pair of taillights of a vehicle. The desired tolerance level is defined experimentally or by training the system. To form and test the hypothesis, the following steps are followed: (1) color segmentation, (2) hypothesis formation, and (3) hypothesis verification.

2.1 Color Segmentation

In this task, the red pixels in the scene are extracted to form blobs representing taillights. The HSV color transform [8] is used to extract color information. In the HSV
Fig. 1. Color Segmentation for red color
model, the features of interest, the 'hue' and the 'value', are extracted from the other color measurements as separate dimensions, so this representation is less sensitive to lighting conditions than the RGB color system. We have experimentally identified −12 ≤ H ≤ 12, S > 0.5, V > 0.1 as good threshold choices for the color defined as "red." These thresholds indicate the color "red" in both day and night conditions. See Fig. 1 for examples of red pixel identification.

2.2 Segmenting the Red Blobs

The desired color blobs are detected using the content-addressable memory based clustering by Snyder and Savage [10]. The content-addressable memory makes the blob detection very fast, as the complexity of the algorithm is O(n), where n is the total number of pixels in the image. Let b be the set of k blobs detected from the scene image, b = {b1, b2, ..., bk}, and let b̂ be the set of all possible pairs of blobs:

b̂ = {(bi, bj) | bi ∈ b, bj ∈ b, i ≠ j}    (1)

By the defined hypothesis H, b̂ represents the possible pairs of taillights in the scene.
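To make the red-pixel extraction and blob segmentation of Sections 2.1–2.2 concrete, the following minimal sketch applies the hue/saturation/value thresholds quoted above and then labels connected components. The use of scikit-image's labelling in place of the content-addressable-memory clustering of [10], and the minimum blob area, are assumptions of this sketch rather than part of the original method.

```python
# Rough sketch of Sections 2.1-2.2: red-pixel segmentation and blob extraction.
import numpy as np
from skimage.color import rgb2hsv
from skimage.measure import label, regionprops

def red_blobs(rgb, h_deg=12.0, s_min=0.5, v_min=0.1, min_area=4):
    """Return region properties of candidate taillight blobs (red pixels)."""
    hsv = rgb2hsv(rgb)                       # H, S, V all scaled to [0, 1]
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    # Hue is circular: "red" lies within +/- h_deg degrees of 0 (i.e. 360).
    h_frac = h_deg / 360.0
    red = ((h <= h_frac) | (h >= 1.0 - h_frac)) & (s > s_min) & (v > v_min)
    blobs = label(red, connectivity=2)       # 8-connected components
    return [r for r in regionprops(blobs) if r.area >= min_area]
```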
3 Hypothesis Verification

3.1 Shape and Symmetry Check

Considering the unique properties of taillight pairs, the following constraints are defined to compare the shape, size and position of the blobs in a pair (bi, bj) to verify the hypothesis.

Vertical Location Constraint: Eq. 3 makes sure that the centers of both blobs are approximately at the same y-location in the scene image. Define (vi, vj) to be the y-coordinates of the centers of gravity and (ri, rj) to be the estimated radii of the blobs (bi, bj), respectively. vs is the vertical shift between blob centers in pixels, calculated as vs = |vi − vj|. σl is the measure of the vertical location shift between the blob locations. vs is normalized by the blob sizes to define a tolerance level for the vertical shift due to noise, camera vibration, etc. kl is determined experimentally.
Thus the threshold tl for the vertical shift is a function of the blob sizes. At the same time, blobs in the image above the horizon row are filtered out.

Sij = √(ri² + rj²),   σl = vs / Sij    (2)

Constraint: σl < kl. Let tl = kl · Sij.

∴ vs < tl    (3)

Size and Shape Equality Constraint: The constraints in Eq. 5 ensure that both blobs have approximately the same shape and size. As the size of the blobs is small compared to other objects in the scene image, we use the height, width and area of the blobs to compare the shapes. Define (ai, aj), (wi, wj), and (hi, hj) as the areas, widths and heights of the blobs (bi, bj), respectively. To define the tolerance level against noise and camera vibration for the measure of matching, these are normalized to define σw, σh and σa – the measures of equality in width, height and area of both blobs. The measures of equality should be less than their respective thresholds kw, kh and ka, which are determined experimentally. The system can also be trained for these values.

σw = |wi − wj| / √(wi² + wj²),   σh = |hi − hj| / √(hi² + hj²),   σa = |ai − aj| / √(ai² + aj²)    (4)

σw < kw,   σh < kh,   σa < ka.   Let tw = kw · √(wi² + wj²), and so forth.

∴ |wi − wj| < tw AND |hi − hj| < th AND |ai − aj| < ta    (5)
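A minimal sketch of this shape and symmetry check for a single blob pair is given below. The blob attributes follow the scikit-image regionprops convention, the radii are approximated from the blob areas, and the thresholds kl, kw, kh and ka are illustrative placeholders rather than the experimentally determined values of the paper.

```python
# Sketch of the shape and symmetry check of Eqs. (2)-(5) for one blob pair.
import math

def symmetric_pair(bi, bj, horizon_row, kl=0.3, kw=0.5, kh=0.5, ka=0.5):
    (vi, xi), (vj, xj) = bi.centroid, bj.centroid           # (row, col) centers
    if vi <= horizon_row or vj <= horizon_row:               # above horizon: reject
        return False
    ri = math.sqrt(bi.area / math.pi)                        # estimated radii
    rj = math.sqrt(bj.area / math.pi)
    s_ij = math.sqrt(ri**2 + rj**2)
    if abs(vi - vj) >= kl * s_ij:                            # vertical location, Eq. (3)
        return False
    hi, wi = bi.bbox[2] - bi.bbox[0], bi.bbox[3] - bi.bbox[1]
    hj, wj = bj.bbox[2] - bj.bbox[0], bj.bbox[3] - bj.bbox[1]
    return (abs(wi - wj) < kw * math.hypot(wi, wj) and       # Eq. (5)
            abs(hi - hj) < kh * math.hypot(hi, hj) and
            abs(bi.area - bj.area) < ka * math.hypot(bi.area, bj.area))
```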
Fig. 2 shows a few images after applying the size and shape constraints.
Fig. 2. Results after size and shape constraints. Red rectangles represent red blobs. Rectangles represent bounding box for possible preceding vehicles.
3.2 Vanishing Point Constraint

The first step of hypothesis verification uses the basic features of the car taillights to determine the possible vehicle locations in the scene image. It is observed that, due to other clutter (especially in the daytime), the system has many false positives. To increase the robustness of the system, a new constraint – the "Vanishing Point (VP) constraint" – is designed to improve the hypothesis verification. This constraint is based on the relationship between the distance of the preceding car from the ego-vehicle and the vanishing point, and the lateral distance between its taillights. To model this relationship, let us look at the camera configuration in Fig. 3 and the perspective transform relationship in Eq. 6.
Fig. 3. Camera configuration. H: height of the camera, Θ: camera tilt angle and f : focal length of the camera. (X, Y, Z) and (x, y): ground and image plane coordinate system respectively.
y = f·ky·(Z sin Θ − H cos Θ) / (Z cos Θ + H sin Θ),   x = f·kx·X / (Z cos Θ + H sin Θ)    (6)

where kx and ky are the camera resolutions in pixels/meter in the x and y directions, respectively. Let h be the y-location of the horizon line in the image plane, as shown in Fig. 4(a) by the dotted line. As h represents the distance Z = ∞, it can be calculated using Eq. 7.

h = lim_{Z→∞} f·ky·(Z sin Θ − H cos Θ) / (Z cos Θ + H sin Θ)   ∴   h = f·ky·tan Θ    (7)
Fig. 4. Left figure: the effect of projective transform on vehicle location and the distance between the taillights. Right figure: vanishing point constraint verification.
Consider the ground plane distance of the taillights of the closest possible preceding vehicle visible in the scene image to be Z, as shown in Fig. 3. Let v0 (see Fig. 4(a)) be the corresponding image plane distance, i.e., the y-coordinate in pixels representing such taillights in the image, which can be calculated using Eq. 6. Let x be the image pixel location of the center of these taillights, i.e., at v0. Due to the perspective transform illustrated in Fig. 4(a), as the distance of a preceding vehicle from the ego-vehicle increases, the vehicle location in the image tends to move towards the vanishing point V.
Let Dmax and Dmin : R² → R be the reference maximum and minimum possible distances between the two taillights, in pixels, for an average-sized car closest to the ego-vehicle, i.e., at (x, v0). (Dmax, Dmin) is determined by approximating the size of the car considering the cars running on the road today. (Dmax, Dmin) are used to calculate the reference widths between the taillights, (lmax, lmin), at a distance other than v0, say (x, v), considering the perspective transform (see Fig. 4(a)). Both lmax and lmin are functions of (x, v, v0) and can be determined by Eq. 8. Using the similarity property of ΔVCD and ΔVAB, we get

lmax = Dmax·(h − v) / (h − v0),   and similarly,   lmin = Dmin·(h − v) / (h − v0)    (8)

Thus, lmax and lmin define the constraint for the horizontal distance between two blobs representing the taillights of a regular-sized vehicle. Now, consider a pair of blobs (bi, bj) hypothesized as taillights after the size and shape constraints. To further verify this hypothesis, the following is done (see Fig. 4(b)). Let ls be the lateral distance between bi and bj determined after blob detection as ls = |xi − xj|, where xi, xj are the x-coordinates of the centers of bi and bj. Therefore, if (bi, bj) is a valid taillight pair, ls will satisfy the constraint in Eq. 9.

lmin|v=vij < ls < lmax|v=vij    (9)

This VP constraint is applied to the output of the size and shape constraints as a part of hypothesis verification. See Fig. 5. More results are given in the next section.
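The sketch below illustrates how Eqs. 7–9 could be applied to a candidate blob pair; the camera parameters (f, ky, Θ), the reference row v0 and the reference widths Dmin, Dmax are placeholders that would come from a calibrated setup.

```python
# Sketch of the vanishing point constraint of Eqs. (7)-(9).
import math

def horizon_row(f, k_y, theta):
    """Image row of the horizon, Eq. (7): h = f * k_y * tan(theta)."""
    return f * k_y * math.tan(theta)

def taillight_width_bounds(v, v0, h, d_min, d_max):
    """Admissible pixel distance between taillights at image row v, Eq. (8)."""
    scale = (h - v) / (h - v0)
    return d_min * scale, d_max * scale

def satisfies_vp_constraint(bi, bj, v0, h, d_min, d_max):
    vij = 0.5 * (bi.centroid[0] + bj.centroid[0])            # mean row of the pair
    ls = abs(bi.centroid[1] - bj.centroid[1])                # lateral distance, pixels
    l_min, l_max = taillight_width_bounds(vij, v0, h, d_min, d_max)
    return l_min < ls < l_max                                # Eq. (9)
```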
Fig. 5. Left figure shows output after size and shape verification. Right figure shows results after applying VP constraint to the left image: the hypothesis for the bigger rectangle is rejected based on distance constraints.
4 Results and Discussion

The taillight detector was tested on several real-life scene images in the day as well as at night, and Fig. 6 shows some typical results. The taillight prediction is distance, size and lane location invariant.

4.1 Computational Complexity

As mentioned before, the computational complexity of the content-addressable memory based blob detection algorithm used here is O(n), where n is the number of pixels in
Fig. 6. Typical results of taillight detection: (a) VP on Fig. 2(a); (b) VP on Fig. 2(c); (c) results with night image 1; (d) results with night image 2. Rectangles represent the detected preceding vehicles. Average processing time per frame: 25 ms. Images are processed on a 2.8 GHz Intel dual core laptop. Typical image resolution: 640x480.
the image. Therefore, the complexity of the taillight pair detection algorithm is O(n + k²), where k is the total number of blobs detected below the horizon. As k² ≪ n, the computational complexity for preceding car detection using taillights suggested here is normally O(n). Thus, the time required to process any frame for the detection of cars is invariant to the number of cars, the distance of the cars and thus the size of the cars in pixels, and it is suitable for real-time applications.

4.2 Comparison with Other Techniques

The results of taillight detection in this paper are compared with the results of our implementation of Jiangwei et al. [7] (histogram and symmetry based detection) and O'Malley et al. [9] (taillight based detection). As the lane location is assumed to be known in [7], a lane detector from [5] is implemented. Similarly, as [9] is designed for night, night images are used for comparison with this method and daytime results are compared with [7]. Some sample results are shown in Fig. 7. Table 1 shows the performance over a total of 340 images. It is to be noted that the performance, i.e., correct decisions and false alarms, of Jiangwei et al. [7] and O'Malley et al. [9] is calculated with respect to the number of preceding cars only in the same lane as the ego-vehicle, as these algorithms are not designed to detect vehicles in other lanes. On the other hand, the performance of our algorithm is calculated with respect to the total number of preceding vehicles in the scene in all the lanes. We also compared our performance with the best performance quoted by two of the state-of-the-art technologies: Sun et al. [12] and Chen and Lin [1]. Sun et al. use Gabor filter optimization with 24 different Gabor filters to detect multiple cars in the scene (best performance:
Fig. 7. Comparison results: (a) method from [7], processing time 0.23 s; (b) our method, processing time 0.027 s; (c) method from [7], processing time 0.18 s; (d) our method, processing time 0.024 s. Left images: method from [7] to detect cars in the detected lanes, highlighted in white solid lines. Right images: our method without lane detection.

Table 1. Performance comparison over 340 images (image resolution 640x480)

                 Jiangwei et al. [7]   Our method (day)   O'Malley et al. [9]   Our method (night)
Correct          91%                   92.33%             91.2%                 94.66%
False alarm      18                    13                 12                    5
Time per frame   0.195 s               0.025 s            0.016 s               0.02 s
91.77%). Chen and Lin [1] use frequency domain features to detect taillights at night when the brake lights are applied (best performance: 76.9%). Thus, it is observed that our system is equally fast, its performance (93.33%) is better than the best performance quoted by Sun et al., and our method does not require any system training. Sun et al. [12] did not report the false alarm rate for their system.
5 Conclusion and Future Work

This paper not only improved the previous taillight detection algorithms, making them able to work in both daylight and night conditions, but also designed a novel technique to better refine the hypothesis for the presence of preceding vehicles. The novel method for hypothesis verification of the detected taillight pairs of preceding cars is based on the vanishing point constraint, enabling the detection of multiple preceding vehicles in multiple lanes simultaneously, i.e., in one processing cycle. Thus, the main advantages of this method are that it works in busy traffic, and it is fast and therefore suitable for real-time applications. The time to process the scene is independent of the
size of the vehicle subtended in pixels, the number of lanes, and the number of preceding cars. This is especially important for autonomous driving and driver assistive technologies. It is also useful for collision avoidance, path planning and for leader-follower applications. This method could be extended in future research to predict the approximate lane locations in a cluttered traffic environment where lane edges are not visible. Future work will also involve analyzing the motion of the taillights to improve the detection as well as tracking of multiple vehicles.
References 1. Chen, D.-Y., Lin, Y.-H.: Frequency-tuned nighttime brake-light detection. In: International Conference on Intelligent Information Hiding and Multimedia Signal Processing, pp. 619–622 (2010) 2. Crisman, J.D., Thorpe, C.E.: Color vision for road following. In: Thorpe, C. (ed.) Vision and Navigation: The CMU Navlab, pp. 9–24 (1988) 3. Du, Y., Papanikolopoulos, N.P.: Real-time vehicle following through a novel symmetrybased approach. In: IEEE International Conference on Robotics and Automation, vol. 4, pp. 3160–3165 (April 1997) 4. Fu, C.-M., Huang, C.-L., Chen, Y.-S.: Vision-based preceding vehicle detection and tracking. In: Proceedings of the 18th International Conference on Pattern Recognition, vol. 2, pp. 1070–1073 (September 2006) 5. Gupta, R.A., Snyder, W.E., Pitts, W.S.: Concurrent visual multiple lane detection for autonomous vehicles. In: Proceedings of the IEEE International Conference on Robotics and Automation (May 2010) 6. Ito, T., Yamada, K.: Preceding vehicle and road lanes recognition methods for rcas using vision system. In: Proceedings of the Intelligent Vehicles Symposium, October 24-26, pp. 85–90 (1994) 7. Jiangwei, C., Lisheng, J., Lie, G., Libibing, Rongben, W.: Study on method of detecting preceding vehicle based on monocular camera. In: IEEE Intelligent Vehicles Symposium, June 14-17, pp. 750–755 (2004) 8. Kuehni, R.G.: Color Space and Its Divisions: Color Order from Antiquity to the present. Wiley, New York (2003) 9. O’Malley, R., Glavin, M., Jones, E.: Vehicle detection at night based on tail-light detection. In: 1st International Symposium on Vehicular Computing Systems (July 2008) 10. Snyder, W.E., Savage, C.D.: Content-addressable read/write memories for image analysis. IEEE Transactions on Computers C-31(10), 963–968 (1982) 11. Sukthankar, R.: Raccoon: A real-time autonomous car chaser operating optimally at night. In: Proceedings of IEEE Intelligent Vehicles (1993) 12. Sun, Z., Bebis, G., Miller, R.: On-road vehicle detection using evolutionary gabor filter optimization. IEEE Transactions on Intelligent Transportation Systems 6(2), 125–137 (2005)
Road Surface Marking Classification Based on a Hierarchical Markov Model

Moez Ammar¹, Sylvie Le Hégarat-Mascle¹, and Hugues Mounier²

¹ IEF/Univ. Paris-Sud 11, 91405 Orsay Cedex, France
² LSS/Univ. Paris-Sud 11, 91405 Orsay Cedex, France
Abstract. This study deals with the estimation of the road surface markings and their class using an onboard camera in an Advanced Driver Assistance System (ADAS). The proposed classification is performed in 3 successive steps corresponding to 3 levels of abstraction from the pixel to the object level through the connected-component one. At each level, a Markov Random Field models the a priori knowledge about object intrinsic features and object interactions, in particular spatial interactions. The proposed algorithm has been applied to data simulated in various road configurations: dashed or continuous lane edges, road entries, etc. These first results are very promising.
1 Introduction
One of the major challenges of on-board automotive driver assistance systems (for best car safety) is to alert the driver about the driving environment context. This context is measured in particular in terms of potential hazards that could lead to accidents, based on the density of possible events such as collisions with other vehicles or pedestrians (or any 'obstacles'). At a given instant, besides the potential obstacles directly observed in the scene, this density should take into account the unobserved potential obstacles (due to some hidden parts of the 3D scene), e.g., pedestrians obscured by parked cars, cars arriving from another street at an intersection in town, etc. Now, road signs (signs and road surface markings) are an element in estimating the probability of such 'hidden dangers'. Advanced Driver Assistance Systems (ADASs) are now able to provide more and more precise information to the driver [20], [4], such as the road position and the presence of and distance to other vehicles [19] and pedestrians [9], [12]. Part of this information may be derived by simple sensors (such as radar or lidar for obstacle/vehicle distance estimation), with the advantage that they allow measurements (e.g., distance) without requiring powerful computing resources. However, the information about road surface markings can only be derived from image acquisition by an onboard camera (generally frontal) and image processing. Now, most works dealing with road surface markings have focussed on road detection, in particular considering different road models ([1], [21], [17], [14]). Such algorithms may be disturbed by some unexpected road surface markings such as freeway exits or entries or zebra road markings. Moreover,
in most cases the application is the precise guidance of the vehicle in its lane, rather than the road context determination. In this study, we aim at both detecting and recognizing the kind of road surface markings. Detection/recognition of road signs (vertical road markings) is generally performed based on the very strong features of the signs in terms of color and shape. Now, in the case of road surface markings and a frontal camera, the color is generally not discriminant and the shape is deformed due to the projection of the road plane toward the image plane (conversely to the case of the signs, which are in a vertical plane more or less perpendicular to the focal axis). Our problem is then a detection/recognition problem. Besides meta-heuristic approaches (such as neural networks, Adaboost classification, etc.) that require large learning databases and mainly work as 'black boxes', the probabilistic models such as Markov Random Field (MRF) ones allow directly introducing the a priori information about the classes or objects of interest; moreover, due to their graph basis, they may be used at different levels of abstraction. Section 2 gives the state of the art in MRF approaches applied to image classification or object detection/recognition in images, and the positioning of our approach. In Section 3, we present the preprocessing step that leads to the data used for classification. Section 4 presents the MRF model defined at pixel level, Section 5 the MRF model defined at connected component level, and Section 6 the MRF model defined at object level. Section 7 shows some results and Section 8 gives the perspectives of this work.
2 Proposed Approach versus Background
Adopting a probabilistic framework, the problem of classification is classically [10] modeled as follows: there are two random fields, namely the random field of the labels (of the classes) X and the random field of the observations Y ; knowing a realization y of Y , we aim at estimating the corresponding realization x of X. For instance at pixel level, X is the field of the pixel labels, and Y the field of the greyscale or color measurements in every pixel. At object level, X is the field of the labels of the objects identified in the image, and Y is the field of the features characterizing these objects (that may be radiometric but also geometric or topologic features). Adopting a classical Maximum A Posteriori (MAP) criterion that aims at maximizing P (x/y), or equivalently P (y/x)×P (x), one must model these two probabilities (a posteriori and a priori). Markov Random Fields (MRF) allows to introduce the knowledge about interactions between ‘neighbor’ elements (as defined below) in the a priori probability. They are based on a graph where the nodes or vertices represent the random field elements at the considered level, and the edges represent the neighborhood relationships between the field elements at the considered level. According to the neighborhood symmetry, if a vertex v1 is neighbor to a vertex v2 , v2 is also neighbor to v1 , thus the edges are basically non-oriented. Associated to the neighborhood system, we also define a clique system, where a clique is a complete sub-graph of the neighborhood, and to each clique we associate a ‘potential
function’ or energy such that the exponential minus this potential represents the probability of the clique configuration. Practically, the configurations that we want to favor (from a priori information) will be associated to low energies or potential values and the configurations that we do not want to favor will be associated to high potential values. Then, the problem of classification can be formalized as a global energy function E minimize that is decomposed in two energy terms: one derived from the a posteriori probability model, and a second one derived from the a priori probability model. Let us note G the set of the graph nodes, and C the set of the cliques (that are subsets of G such as all vertices are neighbors to each others); the global energy is the sum of so-called ‘data attachment’ energy U0 terms over all the graph vertices and so-called ‘interaction’ energy from U terms (defined the potential functions) over all the graph cliques: E = v∈G U0 (v)+ c∈C U (c). MRFs have been widely used in image processing since they provide a solution to the causality problem: Under MRF assumption, P (xs | xt , ∀t ∈ G) = P (xs | xt , ∀t s.t. (s, t) ∈ C), the energy difference between two configurations only differing by a x value becomes computationally tractable since it depends only of the X values within x neighborhood, and optimization techniques such that simulated annealing [11] may be used to find the global solution. Now since simulated annealing convergence is very long (theoretically infinite), several works have focused on the development of efficient optimization techniques, i.e. in a finite number of iterations. In particular, [5], [13] show that graph cut methods may be used for certain forms of interaction energy, including the Potts model. However, the price of the gain in computing time is a high cost in memory, which can make the approach intractable when the number of classes increases. Some other approaches aim at considering more and more flexible models. For instance, [2], [16] relax the stationarity assumption of the image. Marked Point Processes [8] have been developed to manage some graphs with a variable number of vertices. They have been used to model the data fidelity versus the a priori on the geometry of the considered objects and their spatial interactions (typically applications deal with the extraction of line network, buildings, or tree crowns in remote sensing or aerial images e.g., [18]). MRFs models have also been proposed in a hierarchical way. For instance in [15], an automatic detection of cloud/shadow is proposed based on the formalization of the spatial interactions between a cloud and its shadow using MRF at two levels: pixel level for detection of either cloud or shadow regions, and object level to remove some false positives. In [6], [7], [22], hierarchical MRFs are proposed to detect and parse deformable objects. In order to deal with the different configurations of an object either due to the unknown feature of location, size and orientation of the object or due to the different appearances of sub-parts of the object, a hierarchy is built by composing elementary structures to form more complex structures. Typically, interest points (detected firstly at pixel level, namely level 0) which are recognized to form an elementary structure
Fig. 1. 3 successive processing steps of the classification: Step 1 detects road surface markings at pixel level, then the connected components are computed and step 2 gathers some connected components to form objects, finally step 3 analyzes the object interactions to classify them
of a few points (on which a rotation and scale invariant criterion can be computed) are gathered at level 1. Then these elementary structures or sub-...-sub-parts of the objects are gathered at level 2 within the AND/OR graph that models the alternative relationships between these level 1 structures, and so on, forming larger and larger object subparts until the whole object is reconstructed and recognized. The fundamental difference with the hierarchical MRFs previously proposed is that in our case the different levels are also semantically different (in addition to increasing structure complexity). Here, the classification will be performed in three successive steps corresponding to three different levels of abstraction from pixel to object. At each level, we consider different fields Y and X, in particular different sets of X values, i.e., the sets of the considered level labels, and different a posteriori and a priori probability models. The MRF is used to model a priori knowledge about the features of the solutions. Figure 1 shows the three successive processing steps from pixel to object. At level 0, the MRF models neighbor pixel interactions. The considered graph is the one of image pixels and the neighborhood is the spatial neighborhood in four or eight connectivity. The clique potential models the basic a priori that the sought road surface markings have a constant width. At level 1, the MRF models connected component (CC) interactions. The considered graph is the one of the previously obtained connected components (CCs), and different kinds of local neighborhoods are defined either parallel or perpendicular to the major axis direction of the CCs. The clique potential models the basic a priori that the sought objects have aligned CCs (for instance road lines or pedestrian crossings). At level 2, the MRF models the spatial interactions between objects. The considered graph is the one of the previously obtained objects, and the neighborhood is defined horizontally, since for a horizontal road surface and in the absence of pitch, a same image line corresponds to a same landscape depth. The clique potential models the basic a priori that the geometric interactions between the objects are related to their class: an object class is determined knowing the relative geometric position of the objects (for instance left road edge or right road edge). Before specifying each of these steps, we present the data image.
3 Data Image
The acquired data images are either greyscale or color (three visible channels: red, green and blue). Since the road surface markings are white marks on a road that is generally dark (at least in the case of asphalt roads), road surface markings are first characterized by their high level of contrast. In addition, they generally have a small width. Thus we propose the following transformation of the initial data for classification purposes. Let I be the initial greyscale image (in the case of color images, I is the image intensity), and let ES be a structuring element sufficiently large to erase all the road surface markings when performing a functional erosion of I. Practically, we infer the ES size knowing the maximum width of the considered road surface markings (e.g., the width of a pedestrian crossing) and the camera features, or we learn it on a few images. Then, the data image considered for classification, denoted y in the following, is the functional geodesic reconstruction under I of the erosion of I:

y = E^I_ES(εES(I)) = sup_{n≥0} { [δ^I_ES]^n (εES(I)) },
with [δ^J_ES]^n(I) = I if n = 0, and [δ^J_ES]^n(I) = δES([δ^J_ES]^{n−1}(I)) ∩ J if n > 0,    (1)

and εES(I) and δES(I) denoting respectively the functional erosion and dilation of the greyscale image I by the structuring element ES.
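A possible implementation of this preprocessing with scikit-image is sketched below: a grey-level erosion followed by reconstruction by dilation under the original image, together with the difference I − y (the image actually displayed in Fig. 4). The structuring-element radius is an assumed value.

```python
# Sketch of the preprocessing of Eq. (1): opening by reconstruction and its top-hat.
import numpy as np
from skimage.morphology import disk, erosion, reconstruction

def marking_enhancement(intensity, radius=5):
    """Return the reconstruction y of Eq. (1) and the difference I - y."""
    intensity = np.asarray(intensity, dtype=float)
    eroded = erosion(intensity, disk(radius))          # erases narrow bright markings
    y = reconstruction(eroded, intensity, method='dilation')
    return y, intensity - y                            # markings stand out in I - y
```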
4 Pixel Level
Let nlin be the number of lines and ncol the number of columns of the image. The image lattice is denoted by S, and s refers to a pixel location. At this level, the considered graph is S, and there are two classes: the class of the road surface markings, denoted λmk, and the class of the pixels not belonging to a road surface marking, denoted λ̄mk (for each pixel, the question to solve at this level is: does this pixel belong to a road surface marking or not?). Since the considered data image y is the geodesic reconstruction of an erosion, the road surface marking class is characterized by high grey values, conversely to the non road surface marking class. Figure 2 shows the two empirical distributions of the road surface marking class and the non road surface marking class; they have been estimated from photointerpretation results of eight data images. We approximately model the two class distributions by two sigmoids. The data attachment energy U0 is thus defined as

U0(ys | xs = λmk) = ln(1 + e^{−α·(ys − tmk)}),   U0(ys | xs = λ̄mk) = ln(1 + e^{α·(ys − tmk)}),    (2)

with tmk the threshold for road surface marking, Δtmk the imprecision on tmk, and α = 10/Δtmk. In our case, the parameter tmk is fitted empirically from the mean μy and standard deviation σy of the observed realization y (tmk = μy + 2σy).
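A minimal sketch of this data attachment term is given below; the default value used for Δtmk is an assumption, since only its role (α = 10/Δtmk) is specified in the text.

```python
# Sketch of the pixel-level data attachment energies of Eq. (2).
import numpy as np

def pixel_data_energy(y, delta_t=None):
    """Return U0(y | marking) and U0(y | not marking) for every pixel."""
    t_mk = y.mean() + 2.0 * y.std()
    alpha = 10.0 / (delta_t if delta_t is not None else 0.1 * y.std())  # assumed default
    u_mark = np.logaddexp(0.0, -alpha * (y - t_mk))    # ln(1 + exp(-a(y - t)))
    u_back = np.logaddexp(0.0,  alpha * (y - t_mk))
    return u_mark, u_back
```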
Fig. 2. Empirical distributions of the λmk class and the λ̄mk class, estimated from 8 photointerpreted images.
Fig. 3. Illustration of the computation of connected component (CC) alignment. Either principal axis direction is well-defined for at least one CC or not.
Interacting Term. At the pixel level, the a priori on road surface markings is that they have a constant width. Then, even if, due to the perspective effect, the column width is clearly not constant, it is assumed to vary slowly. Let is and js be the line (or row) and column coordinates of the site s of S, and Δcol(is, js) the horizontal length of the road surface marking in the neighborhood of s (vertical neighborhood of 1 line). Then:

lmk(Vs) = max{Δcol(is − 1, js), Δcol(is + 1, js)},
U(xs | xt, xt ∈ Vs) = |Δcol(is, js) − lmk(Vs)| if lmk(Vs) > 0,   β × δ(xs, λmk) if lmk(Vs) = 0.    (3)
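The following sketch illustrates one possible reading of Eq. 3: the horizontal run length Δcol is computed per row of the current binary labelling, and the energy of the marking label at a pixel is derived from the run lengths of the rows above and below. The value of β is illustrative.

```python
# Sketch of the constant-width prior of Eq. (3).
import numpy as np

def horizontal_run_lengths(mask):
    """Length of the horizontal run of marking pixels through each pixel (Delta_col)."""
    runs = np.zeros(mask.shape, dtype=int)
    for i in range(mask.shape[0]):
        j = 0
        while j < mask.shape[1]:
            if mask[i, j]:
                k = j
                while k < mask.shape[1] and mask[i, k]:
                    k += 1
                runs[i, j:k] = k - j
                j = k
            else:
                j += 1
    return runs

def width_prior_energy(runs, i, j, beta=1.0):
    """U(x_s | neighbours) of Eq. (3) for the marking label at pixel (i, j)."""
    l_nb = max(runs[i - 1, j] if i > 0 else 0,
               runs[i + 1, j] if i + 1 < runs.shape[0] else 0)
    return abs(runs[i, j] - l_nb) if l_nb > 0 else beta  # beta * delta(x_s, lambda_mk)
```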
5 Connected Component Level
A key point for gathering CCs into objects is their spatial relationships and distance. From basic projection geometry, if z is the direction perpendicular to the image plane (whose line and column directions are respectively called y and x), and if the image plane is perpendicular to the horizontal plane (that approximates the soil surface), the further a soil object is from the camera, the more distant its image projection pixels are from the last image line (the lines are numbered in ascending order from the top of the image to the bottom, and the columns are numbered in ascending order from the left of the image to the right): the z distance of a soil object is an increasing function of the opposite of its line position in the image. When this distance tends to infinity, the soil object line position tends to the horizon line position. At this level, the graph nodes are the CCs (the pixel-level step produced a two-class, i.e. binary, image that has been labeled in CCs to produce the graph considered at this step). The classes are the objects present in the image, labeled λo1, ..., λoi, ... (for each CC the question to solve is: to which object does the considered CC belong?). In the absence of features characterizing the different objects, the data attachment term is assumed constant whatever the λoi hypothesis, and the considered energy function is only composed of some neighborhood terms.
Focusing on the road lines, the basic a priori are:
– considering two CCs, they are aligned and not too distant: (i) the direction of the principal axis of at least one of the CCs is well defined (with an estimated imprecision Δα lower than Δαmax) and close to the direction of the vector linking the two CC barycenters; (ii) their distance is lower than a threshold d^max_align;
– considering three CCs at once, they are close and aligned: (i) their distance is lower than a threshold dmax; (ii) the directions of the three vectors, each linking one of the three couples of CC barycenters, are close.
Note that the second condition was defined because, in the case of dashed road lines, the CCs may be too small to have a well-defined principal axis at the resolution of the considered images (see examples in the result section). Figure 3 gives an illustration of the computation of the alignment of CCs. Thus, the energy term is such that

U(xs | xt, xt ∈ Vs) = −β × H(d^max_align − dist(xs, xt)) × H(Δ^max_align − Δalign(xs, xt))    (4)

where d^max_align is the CC distance threshold (which depends in our case on whether the CC principal direction axis is well defined), dist(xi, xj) is the distance between CCs xi and xj in the case of two components considered at once, and the maximum of the distances between couples of components in the case of three CCs considered at once, H(u) is the Heaviside function: H(u) = 1 if u ≥ 0 and H(u) = 0 otherwise, Δ^max_align is the threshold on angle differences, which may take into account the imprecision of the angle estimations, and Δalign(xs, xt) is the CC angle difference, whose computation depends on whether the CC principal axis direction is well defined or not.
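A sketch of this pairwise alignment term is given below for the case where the principal axis of the first CC is well defined; centroids and orientations follow the scikit-image regionprops convention, and the thresholds are illustrative values.

```python
# Sketch of the pairwise alignment energy of Eq. (4) for two connected components.
import math

def cc_pair_energy(cc_a, cc_b, beta=1.0, d_max=40.0, delta_max=math.radians(15)):
    (ya, xa), (yb, xb) = cc_a.centroid, cc_b.centroid
    dist = math.hypot(yb - ya, xb - xa)
    if dist == 0 or dist >= d_max:
        return 0.0
    # Major-axis direction of cc_a in (row, col) coordinates; with skimage's
    # convention this is proportional to (cos(orientation), sin(orientation)).
    axis = (math.cos(cc_a.orientation), math.sin(cc_a.orientation))
    link = ((yb - ya) / dist, (xb - xa) / dist)
    delta = math.acos(min(1.0, abs(axis[0] * link[0] + axis[1] * link[1])))
    return -beta if delta < delta_max else 0.0           # -beta * H(...) * H(...)
```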
6 Object Level
At this level, the graph nodes are the objects and the classes are the road surface marking kinds (for each object, the question to solve here is: to which kind of road surface marking does the considered object belong?). Typically considered classes are: left road edge (rL), right road edge (rR), central or medium road edge (rM), discontinuous (Dr) or continuous (Cr) – left, right or medium – road edges, pedestrian crossing (pd), bus stop (bs), parking (pg), etc. In this work, we limit ourselves to the following class set Ω: Ω = {λrL, λrR, λrM, λDr, λCr, λpd, λother, λmk}, with in addition the following compound hypotheses: λre = {λrL, λrR, λrM} = {λDr, λCr}, λunknown = {λre, λpd, λother}, and any complementary hypotheses of compound or singleton hypotheses, e.g., λ̄re = λunknown \ {λre}, λ̄pd = λunknown \ {λpd}, etc. Road surface markings can first be distinguished by the ratio between the lengths of the major axis and of the minor axis of the CC (we assume that the CCs gathered within a same object all have the same geometric figure). Let yr be this ratio: yr = major axis / minor axis.
U0(yr | xr = λre) = ln(1 + e^{−α·(yr − tRaxis,r)}),   U0(yr | xr = λ̄re) = ln(1 + e^{α·(yr − tRaxis,r)}),    (5)

with tRaxis,r the axis ratio threshold for road edges.

U0(yr | xr = λpd) = −ln( 1/(1 + e^{−α·(yr − tinfRax)}) + 1/(1 + e^{α·(yr − tsupRax)}) − 1 ),
U0(yr | xr = λ̄pd) = −ln( 2 − 1/(1 + e^{−α·(yr − tinfRax)}) − 1/(1 + e^{α·(yr − tsupRax)}) ),    (6)

with tinfRax and tsupRax the limits of the imprecision interval (tinfRax < tsupRax) of the axis ratio threshold for pedestrian crossings. Second, the classes that correspond to road surface markings should a priori be located below the horizon line. Thus, we add an ad hoc term to the data attachment energy:
Uup(i^up_r | xr ∈ λ̄mk) = −βup × ( 1/(1 + e^{−αup·max(fflat×nlin − i^up_r, 0)}) − 0.5 ),
Uup(i^up_r | xr ∈ λmk) = 0,    (7)

where fflat × nlin is the horizon line position, i^up_r is the lowest line number of the region, and βup is a weighting factor tuning the relative importance of the different data attachment energy terms. Some ad hoc terms can also be introduced to characterize specific classes; e.g., discontinuous road surface markings should have several CCs:

Udisc(yr | xr = λ̄Dr) = −βdisc × [1 − ncc(yr)],   Udisc(yr | xr = λDr) = 0,    (8)

with ncc(yr) the number of CCs of object yr. Let us first define the default interacting term: if there is no region-object of label xt in the neighborhood of region-object r, then its energy value is increased:

UN(xr = λi | xt, t ∈ Vr) = βN × min_{t∈Vr} {1 − δ(xr, xt)},    (9)
where βN is a weighting factor tuning the relative importance of the different energy terms (U0, UN and other ones). Here, the neighborhood includes any region-object located at a distance lower than distVr from the region-object r: t ∈ Vr ⇔ dist(r, t) ≤ distVr. As for the data attachment term threshold, distVr is related to the location in line of r (possibly, in the same spirit, it could be related to the size of r). Now there are some classes, namely the lane road lines, that we want to determine more specifically, e.g., distinguishing between the left and the right lines, or recognizing input or output highway lanes, and so on. Let us consider a horizontal (line) neighborhood on the object graph. Given two neighbor objects r and t, let N(r, t) be the set of pixels belonging to r such that there is a pixel of t on the same line: N(r, t) = {s | s ∈ r, ∃s′ ∈ t : is = is′}, and let Dcol(is)
be the linear regression function that gives, versus the column coordinate js of pixel s of N (r, t), the column coordinate of corresponding pixel in t, and finally let rmse[Dcol (r, t)] be the root mean square error associated to Dcol (is ) linear regression between corresponding pixels in objects r and t.
U(xr = λrL | xt, t ∈ Vr) = γ × rmse[Dcol(r, t)] if |N(r, t)| > 0 and r ↑ t,   +A if |N(r, t)| = 0 or r ↓ t,    (10)

where 'r ↑ t' means 'r is at the left of t' and 'r ↓ t' means 'r is not at the left of t', γ is a constant to calibrate the energy terms, and A is a very high constant value.
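The following sketch illustrates the left-road-edge term of Eq. 10 for one pair of objects; the shared-row columns are assumed to have been extracted beforehand, the 'left of' test via medians is our own simplification, and γ and A are illustrative constants.

```python
# Sketch of the left-road-edge energy of Eq. (10).
import numpy as np

def left_edge_energy(cols_r, cols_t, gamma=1.0, big_a=1e6):
    """cols_r[i], cols_t[i]: column of object r (resp. t) on a shared image row."""
    cols_r, cols_t = np.asarray(cols_r, float), np.asarray(cols_t, float)
    if cols_r.size < 2 or np.median(cols_r) >= np.median(cols_t):
        return big_a                       # no overlap, or r is not left of t
    slope, intercept = np.polyfit(cols_r, cols_t, deg=1)
    rmse = np.sqrt(np.mean((slope * cols_r + intercept - cols_t) ** 2))
    return gamma * rmse
```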
7 Result Examples
We consider six examples of images typically acquired from a camera on-board a car. These data are simulated by the CIVIC simulator, developed by the LIVIC Laboratory from the French INRETS and now sold by the Civitech company. The assumed camera is frontal and acquires visible images (i.e., in the range 0.4 − 0.75 μm), either color ('Red', 'Green', and 'Blue' channels) or panchromatic. These six scenes are gradually more complex. In the first one, we find a classic two-lane road. In the second one, there is an output lane of the road. In the third and fourth ones there are some input road lanes. The difference is that, in the third case, the car is located on the main road, whereas in the fourth case the car is on the input lane. In the fifth and sixth ones, there are some pedestrian crossings, of different complexity. For each of the three considered levels (pixel, CC and object), the minimization of the corresponding global energy function is performed using an Iterative Conditional Mode algorithm [3]. It is a local optimization procedure allowing convergence in a few iterations, which provides in most cases results close to the optimal ones.
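A generic sketch of such an ICM loop, usable at any of the three levels, is given below; data_energy and clique_energy stand for the level-specific terms of Sections 4–6, and the iteration count is an arbitrary cap.

```python
# Generic Iterated Conditional Modes sketch: each site keeps the label that
# minimises its data term plus the clique terms with its current neighbours.
def icm(labels, candidates, data_energy, clique_energy, neighbours, n_iter=5):
    """labels: dict site -> label; candidates: iterable of admissible labels."""
    for _ in range(n_iter):
        changed = False
        for s in labels:
            best = min(candidates,
                       key=lambda lab: data_energy(s, lab) +
                       sum(clique_energy(s, lab, t, labels[t]) for t in neighbours(s)))
            if best != labels[s]:
                labels[s], changed = best, True
        if not changed:                      # local minimum reached
            break
    return labels
```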
Fig. 4. Six examples of results with, from top to bottom, the examples, and from left to right: initial data image I, difference with the geodesic reconstruction E^I_ES(εES(I)), result of the classification at the pixel level, and result of the classification at the object level (color codes: black = not road marking, red = pedestrian crossing, green = continuous line or general marking, dark blue = discontinuous road line, yellow = left road edge, magenta = right road edge, cyan = central road edge).
the central edge is also classified as such). Finally, the 6th scene shows a more complex example with some errors: the central edge is classified as the left one, and a part of the right edge is classified as a discontinuous road edge (which is also true). Quantitatively, Table 1 shows the numbers of 'true positives', 'false positives', 'false negatives' and 'true negatives' computed by comparison of the ground truth
Table 1. Quantitative performance of: threshold CLT , pixel CLP , and object CLO classifications, estimated on the 6 images for which ground truth exists
       true positives   false positives   false negatives   true negatives
CLT    10144            749               4085              587138
CLP    10191            752               4038              587135
CLO    9337             271               4892              587616
(GT) and a binary classification into the two classes λmk and λ̄mk. The ground truth was obtained by photointerpretation of the data images, and different classifications are compared: the one obtained by applying the threshold tmk to the E^I_ES(εES(I)) image, denoted CLT, the classification at the pixel level, denoted CLP, and the one at the object level, denoted CLO (for this last case, a binary image of the considered classes is derived). From the threshold classification (CLT) to the pixel one (CLP), we note a slight decrease of false negatives, probably due to a better definition of the borders of the road markings. From the pixel classification (CLP) to the object one (CLO), we note both a noticeable increase of the false negatives and a decrease of the false positives, which is due to a global decrease of the pixels or CCs labeled as λmk (among them, some were erroneously labeled and some rightly). Thus, we conclude that the main interest of the classification at the object level is the refinement of the discernment space: it handles many more classes than the two λmk and λ̄mk.
8 Conclusion and Perspectives
The paper shows how road surface markings can be recognized based on a three-level Markov Random Field model. It was validated using simulated data. This work will be extended in three directions. First, we will increase the number of object-level classes: road inputs and outputs, bus stops, parking and zebra crossings will be considered. Second, we will take into account the previous classification result (obtained at time t − 1): modeling the vehicle movement, some areas of interest will be defined corresponding to previous detections, and only a subpart of the image (the areas of interest plus the areas around the horizon line) will be processed, thus decreasing the computing time. Third, we will test the algorithm on real data: the algorithm robustness will be evaluated with respect to (i) road imperfections such as hills and valleys, or aging of road surface markings (which induces poor contrast), and (ii) reduced general visibility conditions (in case of rain or fog).
References 1. Aufrere, R., Chapuis, R., Chausse, F.: A model-driven approach for real-time road recognition. MVA 13(2), 95–107 (2001) 2. Benboudjema, D., Pieczynski, W.: Unsupervised statistical segmentation of non stationary images using triplet markov fields. IEEE PAMI 29(8), 1367–1378 (2007)
3. Besag, J.: On the statistical analysis of dirty pictures. J. of the Royal Statistical Society, Series B 48, 259–302 (1986) 4. Bishop, R.: Intelligent Vehicle Technologies and Trends. Artech House, Inc., Boston (2005) 5. Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. IEEE PAMI 23(11), 1222–1239 (2001) 6. Chen, Y., Zhu, L., Lin, C., Yuille, A., Zhang, H.: Rapid inference on a novel AND/OR graph for object detection, segmentation and parsing. In: NIPS 2007 Proc., Vancouver, Canada (2007) 7. Chen, Y., Zhu, L., Yuille, A., Zhang, H.: Unsupervised learning of probabilistic object models (POMs) for object classification, segmentation and recognition using knowledge propagation. PAMI 31(10), 1747–1761 (2009) 8. Descombes, X., Zerubia, J.: Marked point processes in image analysis. IEEE Signal Processing Magazine 19(5), 77–84 (2002) 9. Gandhi, T., Trivedi, M.: Pedestrian protection systems: Issues, survey, and challenges. IEEE Trans. on Intelligent Transportation Systems 8, 413–430 (2007) 10. Geman, D.: Random fields and inverse problems in imaging. Lecture Notes in Mathematics, vol. 1427 (2000) 11. Geman, S., Geman, D.: Stochastic relaxation, Gibbs distributions and Bayesian restoration of images. IEEE PAMI 6(6), 721–741 (1984) 12. Gerónimo, D., López, A., Sappa, A., Graf, T.: Survey of pedestrian detection for advanced driver assistance systems. IEEE PAMI 32(7), 1239–1258 (2010) 13. Kolmogorov, V., Zabih, R.: What energy functions can be minimized via graph cuts? IEEE PAMI 26(2), 147–159 (2004) 14. Labayrade, R., Douret, J., Aubert, D.: A multi-model lane detector that handles road singularities. In: IEEE ITSC 2006 Proc., Toronto, Canada, September 17-20 (2006) 15. Le Hégarat-Mascle, S., André, C.: Automatic detection of clouds and shadows on high resolution optical images. J. of Photogrammetry and Remote Sensing 64(4), 351–366 (2009) 16. Le Hégarat-Mascle, S., Kallel, A., Descombes, X.: Ant colony optimization for image regularization based on a non-stationary Markov modeling. IEEE Trans. on Image Processing 16(3), 865–878 (2007) 17. Lombardi, P., Zanin, M., Messelodi, S.: Switching models for vision-based on-board road detection. In: IEEE ITSC 2005 Proc., Austria, September 13-16, pp. 67–72 (2005) 18. Ortner, M., Descombes, X., Zerubia, J.: A marked point process of rectangles and segments for automatic analysis of digital elevation models. IEEE PAMI 30(1), 353–363 (2008) 19. Sun, Z., Bebis, G., Miller, R.: On-road vehicle detection: A review. IEEE PAMI 28(5), 694–711 (2006) 20. Vlacic, L., Parent, M., Harashima, F.: Intelligent Vehicle Technologies. Butterworth-Heinemann (2001) 21. Wang, R., Xu, Y., Libin, and Z.Y.: A vision-based road edge detection algorithm. In: IV 2002 Proc., France (2002) 22. Zhu, L., Chen, Y., Yuille, A.: Learning a hierarchical deformable template for rapid deformable object parsing. IEEE PAMI 32(6), 1029–1043 (2010)
Affine Illumination Compensation on Hyperspectral/Multiangular Remote Sensing Images

Pedro Latorre Carmona¹, Luis Alonso², Filiberto Pla¹, Jose E. Moreno², and Crystal Schaaf³

¹ Dept. Lenguajes y Sistemas Informáticos, Jaume I University, Spain
² Departamento de Física de la Tierra y Termodinámica, Universidad de Valencia
³ Department of Geography and Environment, Center for Remote Sensing, Boston University
{latorre,pla}@lsi.uji.es, {luis.alonso,jose.moreno}@uv.es, [email protected]
Abstract. The huge amount of information that some of the new optical satellites developed nowadays will create demands quick and reliable compensation for changes in the atmospheric transmittance and varying solar illumination conditions. In this paper three different forms of affine transformation models (general, particular and diagonal) are considered as candidates for rapid compensation of illumination variations. They are tested on a group of three pairs of CHRIS-PROBA radiance images obtained in a test field in Barrax (Spain), where there is a difference in the atmospheric as well as in the geometrical acquisition conditions. Results indicate that the proposed methodology is satisfactory for the practical normalization of varying illumination and atmospheric conditions in remotely sensed images required for operational applications. Keywords: Affine illumination compensation, hyperspectral/multiangular images.
1 Introduction

Nowadays, there are satellites that are able to acquire images of the same site every day with a high spatial resolution (FORMOSAT-2). Other satellites with similar capabilities will be launched in the near future (like SENTINEL-2 [14]). A limiting factor in the exploitation of these image series is the need to compensate for illumination effects due to the changing atmospheric transmittance conditions and solar illumination angles. Corrections are typically made using an atmospheric radiative transfer code. The problem, however, is typically the lack of information about the actual atmospheric status (water vapour, aerosol type, etc.). For some systems it is possible to derive this information from the acquired data itself, but this is not always the case (for the MERIS satellite, for instance, this is possible, but not for the SPOT satellite). An alternative would be to
This work was supported by the Spanish Ministry of Science and Innovation under the projects Consolider Ingenio 2010 CSD2007-00018, EODIX AYA2008-05965-C04-04/ESP and ALFI3D TIN2009-14103-C03-01, by the Generalitat Valenciana through the project PROMETEO/2010/028 and by Fundació Caixa-Castelló through the project P11B2007-48.
consider it from an illumination change assessment (normalization-type compensation) point of view, in which the surface is assumed unaltered and only the scattering events in the atmosphere are considered (the absorption effects would be removed by other methods). This approach is currently used in the MERIS/ENVISAT products [15]. A methodology following this strategy is presented in this paper, where the only information required is an image taken as reference. Healey et al. theoretically proved [1], [2] that a change in the illumination conditions can be modeled by an affine transformation. There are three main types of affine transformation models. The simplest one is given by a diagonal (matrix) transform of the feature space (diagonal model) [3]. This model, which corresponds to the so-called von Kries adaptation in human colour vision [4], may be generalized by considering a non-diagonal matrix transform (particular model) [5], and by adding an offset, i.e., a translation vector, to that model (general model) [6]. In this paper we analyze the applicability of the three types of affine transformation models and compare their performance on a group of radiance images from the CHRIS-PROBA mission acquired on July 12 and July 14, 2003 over Barrax (Spain). The CHRIS-PROBA combination provided multi-spectral and multi-angular images of this test site [8]. This paper builds on the research by Latorre Carmona et al. on atmospheric compensation (using affine compensation models) [17] for synthetic radiance images generated with the 6S code [7]. The organization of the paper is as follows: Section 2 introduces the three affine compensation models. Section 3 compares the assumptions made in this paper with those made by the 6S code for atmospheric correction. Section 4 presents the methodology used to register the images and presents and discusses the atmospheric compensation results. Conclusions can be found in Section 5.
2 Affine Illumination Compensation
Assume a vector x ∈ R^D_+ representing a measurement from a D-band linear multispectral sensor. The application of a transformation model is therefore valid whether x is considered as the radiance reaching the sensor or as the response of this sensor. Under a change in the illumination characteristics this vector will undergo a change x → x̃. The most general affine transformation model considers that x and x̃ are related through x̃ = B · x + t [9]. In this equation, B is a D × D matrix and t is a D × 1 vector. If the vector t is considered zero, the transformation model becomes x̃ = A · x, where A is also a D × D matrix. This is the camera model considered by Healey et al. [1]. Under certain conditions on the spectral response of the sensors [3], matrix A may be approximated by a diagonal matrix. The three models will be called hereafter the general, particular and diagonal affine models. One step in the method used to obtain the parameters of the general affine transformation model is the computation of the inverse of the matrix F obtained from the Cholesky decomposition of the covariance matrix of the data x. This computation may present numerical instability problems due to some characteristics of the signal shape, such as abrupt changes in some specific spectral ranges (for instance, when dealing with the radiance coming from a vegetated surface).
The inverse matrix can be obtained by applying the Truncated Singular Value Decomposition (t-SVD) technique [12]. For more details about the methods used to apply the three affine models, see Appendix A and [16], [9]. The t-SVD technique can be found in Appendix B.
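As a side illustration (not from the paper), the following minimal Python sketch shows how the particular and diagonal affine models could be estimated by least squares from two co-registered images reshaped to N × D pixel matrices, following the Moore-Penrose formulation recalled in Appendix A; the function and variable names are hypothetical.

```python
import numpy as np

def fit_particular_model(X, X_tilde):
    """Estimate A in x_tilde = A x by least squares over all pixels.

    X, X_tilde: (N, D) arrays of radiance vectors (rows are pixels)."""
    # A = X_tilde^T (X^T)^+ using the pseudo-inverse of X^T
    return X_tilde.T @ np.linalg.pinv(X.T)

def fit_diagonal_model(X, X_tilde):
    """Estimate the diagonal model: one multiplicative gain per band."""
    gains = np.sum(X_tilde * X, axis=0) / np.sum(X * X, axis=0)
    return np.diag(gains)

def apply_model(X, A, t=None):
    """Apply x_tilde = A x (+ t) to every pixel row of X."""
    Y = X @ A.T
    return Y if t is None else Y + t
```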
3 Comparison with Standard Atmospheric Correction Methods
There are two main atmospheric processes to take into account, gaseous absorption and scattering by molecules and aerosols. In this paper the main interest is in the scattering properties of the atmosphere. To isolate these changes from those of absorption, the peak stripping method [13] was applied, but for absorption valleys. The original method compares the value of channel i with the mean of its two direct neighbours, i.e., m_i = (y_{i-1} + y_{i+1})/2, and if y_i > m_i then y_i ← m_i; otherwise it is left unchanged. In our case the condition is: if y_i < m_i then y_i ← m_i. This process is applied iteratively. Figure 1(a) shows the transmittance due to gases (t_g) in the spectral interval [400, 1100] nm simulated using the 6S code for the US62 atmospheric model, with an O3 (ozone) content of 300 Dobson Units (DU) (i.e., a 3 cm column) and a water vapour column of 2.5 cm (values close to those directly obtained in Barrax during the campaign made in June and July 2003 [8]). Figure 1(a) shows that the main atmospheric absorption valleys due to gases appear in the wavelength region 680 ≤ λ ≤ 1000 nm. A technique to eliminate the atmospheric absorption valleys would simply consist of normalizing the radiance curve of each pixel by the t_g curve. However, this curve must be found first, and this can only happen if we know the atmospheric composition at the time of acquisition or if it is modeled using a radiative transfer code like 6S. The advantage of the method we apply is that no prior knowledge about the atmospheric composition is necessary.
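A minimal sketch of this iterative valley-removal rule, assuming a 1-D numpy array of per-band radiances and a set of band indices that must not be updated (standing in for the fixed wavelengths of Section 4.2); this is an illustration, not the authors' exact implementation.

```python
import numpy as np

def fill_absorption_valleys(y, n_iter, frozen=()):
    """Iteratively fill absorption valleys in a per-pixel radiance curve.

    y       : 1-D array of radiances, one value per spectral band.
    n_iter  : number of iterations (e.g. 40 or 120, see Sect. 4.2).
    frozen  : band indices that are never updated (hypothetical parameter
              standing in for the fixed wavelengths of Sect. 4.2).
    """
    y = y.astype(float).copy()
    frozen = set(frozen)
    for _ in range(n_iter):
        m = 0.5 * (y[:-2] + y[2:])           # mean of the two direct neighbours
        for i, mi in enumerate(m, start=1):  # interior bands only
            if i not in frozen and y[i] < mi:
                y[i] = mi                    # raise the valley point
    return y
```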
4 Results and Discussion
A series of four CHRIS-PROBA images was selected for the assessment of the three illumination compensation algorithms. These images were acquired on July 12 and July 14, 2003 over Barrax (Spain). The image labeled 35A2 was acquired on July 12, 2003 with a Flight Zenith Angle (FZA) of 0◦ and was selected as the reference image. The other three images, obtained on July 14, 2003, were considered as the images to be registered and compensated (called warp images) with respect to the reference one (35A2). Their FZAs are 0◦ (image 3598), +36◦ (image 3599) and −36◦ (image 359A). Figure 1(b) shows a polar plot of the image acquisition geometry for July 12 and 14. Images with such a short time difference were selected in order to make sure that all changes in the radiance came only from the illumination and acquisition geometry, and not from changes in the surface (i.e., soil moisture or vegetation).
4.1 Image Registration
Image 35A2 was taken as reference. The rest of the images were co-registered to it using Ground Control Points (GCPs) with sub-pixel precision.
Fig. 1. (a) t_g plot in the spectral interval [400, 1100] nm simulated using the 6S code with the US62 atmospheric model. (b) Polar plot of the acquisition geometry.
Fig. 2. 35A2, 3598, and registered images
100 points were used for each image, yielding an RMSE of 0.3 with a 4th-order polynomial function. Re-sampling was performed using bi-cubic interpolation. A mask was created to remove some clouds (and the corresponding shadows) detected in the reference image. Other changes in the surface were also included in the mask, such as the harvesting of some crops during the two days between the acquisitions. Figure 2 shows a false colour RGB composite of 35A2, of 3598, and of the registration result.
Fig. 3. Result of the application of the absorption valley removal strategy to the radiance curves of the reference and warp images for the case of a (a) potato crop pixel, (b) dry barley pixel, selected from the 35A2/3598 CHRIS-PROBA image pair.
4.2 Scattering vs. Absorption
The Clayton method was applied with two restrictions: (a) some specific wavelengths were fixed, so that their radiance was not updated at each iteration; (b) two iteration numbers were used, 40 for wavelengths λ ≤ 751 nm and 120 for λ > 751 nm, in order to preserve the chlorophyll activity valley. Part of the wavelengths that were not updated were used to prevent the method from smoothing the chlorophyll absorption valley. The rest were selected as the local maxima of a radiance pixel of the terrain whose radiance curve was as flat and as smooth as possible. Figure 3(a),(b) shows the radiance curves corresponding to a pixel from a potato crop (dry barley in (b)) area of the Barrax test site for the 35A2/3598 image pair, before and after the application of the absorption valley removal strategy. The wavelengths 583 nm, 605 nm and 674 nm were not updated during the application of the algorithm. Iteration numbers higher than 30 for wavelengths below 751 nm created numerical instability problems in the computation of the inverse of the matrix from the Cholesky decomposition of the covariance matrix of the pixel data,
Table 1. Compensation results for the three affine models
Pair        Before compensation   x̃ = A·x + t   x̃ = A·x   x̃ = diag(A)·x
35A2/3598   0.214                 0.059         0.056     0.062
35A2/359A   0.192                 0.051         0.048     0.056
35A2/3599   0.055                 0.050         0.047     0.051
which is needed to apply the general affine compensation model. A Truncated Singular Value Decomposition technique [12] was used to compute the inverse matrices. The value of 40 was considered an intermediate value allowing the elimination of the absorption features, the conservation of the chlorophyll absorption valley, and the computation of the corresponding inverse matrices.
4.3 Illumination Compensation
Suppose two different point sets given as N × D matrices A and B, corresponding to two radiance images under two different illumination conditions. Considering the case of compensating the illumination change of the second image, the aim would be to transform the second image B → B̃ into an image as close as possible to the first one. Thus, with the first image as the target image, the following relative Frobenius index can be established as a measure of the illumination compensation performance: F_I = ||A − B̃||_F / ||A||_F, where the Frobenius norm of an N × D matrix X is ||X||_F = ( Σ_{i=1}^{N} Σ_{j=1}^{D} |x_ij|^2 )^{1/2}. Table 1 shows the compensation capability of the three affine models for each of the registered image pairs. The general and particular affine models are better than the diagonal model for the three pairs of images. The relative Frobenius norm before illumination compensation was particularly low for the 35A2/3599 pair. The highest Frobenius norm before compensation was for the 35A2/3598 pair. That may be due to the fact (see Figure 1(b)) that images 35A2 and 3598 are almost aligned with the Sun, the first in opposition and the second in conjunction. This geometry generates strong angular effects on the surface reflectance. These effects are, however, minimized in the plane perpendicular to the solar plane. Image 3599 is in that plane, whereas image 359A is close to the principal plane. Therefore, the difference between images before compensation is lowest for the 35A2/3599 pair. The capability of the three models when no information allowing the creation of masks is available during/after acquisition was also tested. Table 2 shows the compensation results when all the pixels in the images were considered. As in the previous case, the relative Frobenius norm before illumination compensation was low for the 35A2/3599 pair. For this pair the general affine model gives a relative Frobenius norm higher than before compensation. This was caused by the fact that for a very small number of pixels the norm after compensation was higher than before compensation, but not for the rest of the image pixels. Illumination compensation between images on a pixel-by-pixel basis was also tested. In this case, the ratio of the norm of the difference in the radiance vector between
Table 2. Compensation results without applying the correction masks
Pair        Before compensation   x̃ = A·x + t   x̃ = A·x   x̃ = diag(A)·x
35A2/3598   0.226                 0.101         0.090     0.096
35A2/359A   0.206                 0.093         0.086     0.092
35A2/3599   0.091                 0.092         0.084     0.088
Fig. 4. Relative Frobenius norm for the pair 35A2 and 3598 (a) Before compensation. (b) General affine compensation. (c) Particular affine compensation. (d) Diagonal affine compensation.
each pixel in the reference image and the corresponding pixel in the registered and compensated image, divided by the norm of the pixel in the reference image, was taken as the criterion. Figure 4 shows a colour-coded image of this ratio for the 35A2/3598 image pair. In all these cases, masks for clouds and shadows had been applied. There is a general tendency towards a reduction of the difference between the images. However, there are some parts where this reduction is smaller. That is the case for the two circular crops at the top of the images in Figure 4, as well as for some small areas close to the pixels where a mask had been applied. In the case of the two circular crops, the difference could be attributed to a land-use change rather than to an illumination or acquisition geometry change. Figure 5 shows the radiance curves of a group of four pixels for the reference image, for the image to be compensated, and for the resulting compensated image after the application of
Fig. 5. Radiance curves of pixels of different crops for the reference, warp and warp after illumination compensation images
the general affine model. Three pixels corresponded to crops of different nature (corn, potato and dry barley, Figure 5(a) to (c)). One pixel (Figure 5(d)) corresponded to one of the two circular crops with the highest difference after illumination compensation. In general terms there is only a small difference between the target curve and the curve of the compensated image in each plot, which demonstrates the capability of the method. However, the last plot in Figure 5 shows that the general affine method (like the other models) is not able to compensate for the difference in the radiance curves for that particular pixel (of one of the circular crops, see Fig. 4(b)). This difference could be attributed to a surface change for which no prior information was available.
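As a small illustration (not part of the original paper), the relative Frobenius index F_I of Section 4.3 and the per-pixel ratio maps of Figure 4 could be computed as follows, assuming the two co-registered images are given as N × D matrices of per-pixel radiance vectors.

```python
import numpy as np

def relative_frobenius_index(A, B_comp):
    """F_I = ||A - B_comp||_F / ||A||_F for N x D pixel matrices."""
    return np.linalg.norm(A - B_comp) / np.linalg.norm(A)

def per_pixel_ratio(A, B_comp):
    """Per-pixel ratio used for the colour-coded maps of Fig. 4: norm of the
    radiance difference divided by the norm of the reference radiance vector."""
    num = np.linalg.norm(A - B_comp, axis=1)
    den = np.linalg.norm(A, axis=1)
    return num / np.maximum(den, np.finfo(float).eps)  # avoid division by zero
```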
5 Conclusions
In this paper we have shown that the three affine compensation models (general, particular and diagonal) can be used to compensate for illumination variations in radiance images due to changes in the atmosphere and in the acquisition conditions, with the particular affine model being the best of the three. The methodology presented in this paper is satisfactory for the compensation of varying illumination and atmospheric conditions in remotely sensed images required for operational applications.
A Deduction of the Illumination Compensation Formulae for the 3 Affine Models
Let X and X̃ be two N × D matrices representing two point sets, with N the number of points in each set, and C and C̃ their covariance matrices. Applying the Cholesky factorization to C and C̃, they can be written as C = F · F^t and C̃ = F̃ · F̃^t, where F^t and F̃^t are the transposes of F and F̃, respectively. Points in the data set are first whitened (only shown for the first group), i.e. y = F^{-1} · x̄, where x̄ = x − E{x}. This matrix may be ill-conditioned under some circumstances, and a technique like the Truncated Singular Value Decomposition can be used to compute it (see Appendix B and [12] for details). Taking into account the previous equation, we have F̃ · ỹ = B · F · y, and forming a quadratic expression from this last equation, the following relation is obtained: F̃ · F̃^t = B · F · F^t · B^t. In [10] it was proved that an equation of the form T · T^t = S · S^t has a solution of the form T = S · P, where P is an orthonormal matrix. This will help in finding the final relation y → ỹ. Applying this to the expression of F̃ · F̃^t and solving for B, we have B = F̃ · P^t · F^{-1}. Substitution of B in F̃ · ỹ = B · F · y yields ỹ = P^t · y.
The computation of the P matrix in this context is known as the Orthogonal Procrustes problem (see [11] for details). The solution matrix is P = V · W^t, where V · D · W^t is the Singular Value Decomposition of Y^t · Ỹ, and Y and Ỹ are the N × D matrices formed by the vectors y and ỹ of the data point sets. B is obtained by replacing P in B = F̃ · P^t · F^{-1}. Applying the expectation operator E to x̃ = B · x + t, we get t = E{x̃} − B · E{x}.
The matrix A in the particular affine transformation model can be obtained using the definition of the Moore-Penrose inverse. Following [3], for a D × N matrix X^t of the points under some reference illumination condition, denote by X̃^t the corresponding matrix when there is an illumination change. The matrix A that accomplishes X̃^t ≈ A · X^t is A = X̃^t · [X^t]^+, where [X^t]^+ is the Moore-Penrose inverse of the matrix X^t (i.e., [X^t]^+ = X · (X^t · X)^{-1}). For the diagonal model, following [3], (A_d)_{ii} = (X̃_i^t · X_i) / (X_i^t · X_i), where the single subscript i denotes the i-th matrix row and the double subscript ii denotes the matrix element at row i and column i.
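A compact Python sketch of the Appendix A procedure for the general model is given below. It is an illustration under the stated whitening/Procrustes assumptions, not the authors' code; the pseudo-inverses use numpy's pinv, which plays a role similar to the truncated SVD of Appendix B.

```python
import numpy as np

def fit_general_affine(X, X_tilde):
    """Estimate B and t in x_tilde = B x + t (Appendix A sketch).

    X, X_tilde : (N, D) point sets under the two illumination conditions.
    """
    mx, mxt = X.mean(axis=0), X_tilde.mean(axis=0)
    Xc, Xtc = X - mx, X_tilde - mxt
    # Cholesky factors of the two covariance matrices: C = F F^t
    F = np.linalg.cholesky(np.cov(Xc, rowvar=False))
    Ft = np.linalg.cholesky(np.cov(Xtc, rowvar=False))
    # Whitened point sets  y = F^{-1} x_bar   (pinv ~ truncated-SVD inverse)
    Y = Xc @ np.linalg.pinv(F).T
    Yt = Xtc @ np.linalg.pinv(Ft).T
    # Orthogonal Procrustes:  P = V W^t  with  V D W^t = SVD(Y^t Y~)
    V, _, Wt = np.linalg.svd(Y.T @ Yt)
    P = V @ Wt
    B = Ft @ P.T @ np.linalg.pinv(F)
    t = mxt - B @ mx
    return B, t
```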
B Truncated Singular Value Decomposition Technique
Let F ∈ R^{m×n} be a rectangular matrix with m > n. The Singular Value Decomposition (SVD) of F is given by [12]: F = U · Σ · V^T = Σ_{i=1}^{n} u_i σ_i v_i^T, where U = (u_1, u_2, ..., u_n) and V = (v_1, v_2, ..., v_n) are orthonormal matrices, and the numbers σ_i are called the singular values of F. If the matrix F is ill-conditioned or rank deficient, the closest rank-k approximation F_k to F is obtained by truncating the SVD expansion at k [12], i.e. F_k = Σ_{i=1}^{k} u_i σ_i v_i^T, k ≤ n. Taking into account the properties of the orthonormal matrices U and V (U^{-1} = U^T, V^{-1} = V^T), the inverse matrix of F is F^{-1} = V · Σ^{-1} · U^T = Σ_{i=1}^{n} v_i (1/σ_i) u_i^T, and the closest rank-k approximation (F^{-1})_k of F^{-1} is given by [12]: (F^{-1})_k = Σ_{i=1}^{k} v_i (1/σ_i) u_i^T.
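A small numpy sketch of this truncated-SVD inverse (illustrative; the rank k is assumed to be chosen by the user).

```python
import numpy as np

def truncated_svd_inverse(F, k):
    """Rank-k approximation (F^{-1})_k = sum_{i<=k} v_i (1/sigma_i) u_i^T."""
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :].T
    return Vk @ np.diag(1.0 / sk) @ Uk.T
```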
References
1. Kuan, C.Y., Healey, G.: Retrieving multispectral satellite images using physics-based invariant representations. IEEE Trans. on Pat. Analysis and Mach. Intel. 18, 842–848 (1996)
2. Kuan, C.Y., Healey, G.: Using spatial filtering to improve spectral distribution invariants. In: Proc. SPIE, vol. 6233, pp. 62330G1–62330G12 (2006)
3. Finlayson, G.D., Drew, M.S., Funt, B.V.: Spectral sharpening: sensor transformations for improved color constancy. Journal of the Opt. Soc. of America A 11, 1553–1563 (1994)
4. Wyszecki, G., Stiles, W.S.: Color Science: concepts and methods. Wiley, Chichester (2000)
5. Lenz, R., Tran, L.V., Meer, P.: Moment based normalization of color images. In: IEEE 3rd Workshop on Multimedia Signal Processing, pp. 103–108 (1998)
6. Finlayson, G., Chatterjee, S.S., Funt, B.V.: Color Angular Indexing. In: Buxton, B.F., Cipolla, R. (eds.) ECCV 1996. LNCS, vol. 1065, pp. 16–27. Springer, Heidelberg (1996)
7. Vermote, E.F., Tanré, D., Deuzé, J.L., Herman, M., Morcrette, J.J.: Second Simulation of the Satellite Signal in the Solar Spectrum: an overview. IEEE TGARS 35, 675–686 (1997)
8. "SEN2FLEX Data Acquisition Report", Project Contract No. 19187/05/I-EC (2005)
9. Heikkilä, J.: Pattern Matching with Affine Moment Descriptors. Pattern Recognition 37, 1825–1834 (2004)
10. Sprinzak, J., Werman, M.: Affine Point Matching. Pat. Rec. Letters 15, 337–339 (1994)
11. Schönemann, P.H.: A generalized solution of the orthogonal Procrustes problem. Psychometrika 31, 1–10 (1966)
12. Varah, J.M.: On the numerical solution of ill-conditioned linear systems with applications to ill-posed problems. SIAM Journal on Numerical Analysis 10, 257–267 (1973)
13. Espen, P.V.: Spectrum evaluation. In: Handbook of X-Ray Spectr. Marcel Dekker, New York (2001)
14. Gascon, F., Berger, M.: GMES Sentinel-2 Mission requirements document. T.R., European Space Agency (2007)
15. Weiss, S.: Measurement data definition and format description for MERIS. T.R., Astrium GmbH (2001)
16. Latorre Carmona, P., Lenz, R., Pla, F., Sotoca, J.M.: Affine Illumination Compensation for Multispectral Images. In: Ersbøll, B.K., Pedersen, K.S. (eds.) SCIA 2007. LNCS, vol. 4522, pp. 522–531. Springer, Heidelberg (2007)
17. Latorre Carmona, P., Moreno, J.E., Pla, F., Schaaf, C.B.: Affine Compensation of Illumination in Hyperspectral Remote Sensing Images. In: IEEE IGARSS (2009)
Crevasse Detection in Antarctica Using ASTER Images

Tao Xu (1), Wen Yang (1), Ying Liu (1), Chunxia Zhou (2), and Zemin Wang (2)
(1) School of Electronic Information, Wuhan University, Wuhan 430079, China
(2) Chinese Antarctic Center of Surveying and Mapping, Wuhan 430079, China
{xutao.whu,liuying1129}@gmail.com, {yangwen,zhoucx,zmwang}@whu.edu.cn
Abstract. The crevasse, which has always been one of the most dangerous hazards on the Antarctic continent, threatens the lives of team members during polar expeditions. Crevasse detection is thus an increasingly important issue, as it facilitates the analysis of glacier and ice cap movements, supports research on the effects of climate change, and improves the safety of expedition staff. In this paper, we first analyze the characteristics of crevasses in ASTER images. We then test five features, Gray-Level Co-occurrence Matrices (GLCM), Gabor filters, Local Phase Quantization (LPQ), the Completed Local Binary Pattern (CLBP), and Local Self-Similarity (LSS), for crevasse detection with an SVM classifier. Finally, we evaluate and validate the detection performance on two datasets. Experimental results show that the LSS descriptor performs better than the other descriptors and is thus a promising feature descriptor for crevasse detection.
Keywords: crevasse detection, texture feature, ASTER.
1 Introduction
Antarctica is closely related to the future of humanity because it plays a critical role in the dynamic linkages that couple the spatially and temporally complex components of the Earth system. Since the continent was first sighted in 1821, its exploration has never stopped; however, it is full of dangers due to the harsh natural conditions. Among these dangers, crevasses are one of the most serious threats to team members during field expeditions. Falling into a hidden crevasse can lead to serious injury and damage, so crevasse detection is a very important safety task in polar scientific expeditions. Furthermore, it can help in analyzing glacier and ice cap movements and in studying the effects of climate change. However, the weather in Antarctica is so severe that explorers cannot move around easily, and conventional methods of data collection can hardly be carried out on this continent. Fortunately, remote sensing satellite images help minimize the risk of data acquisition, and with the development of remote sensing they have improved dramatically, not only in spatial resolution but also in temporal resolution. This makes remote sensing the most efficient means of data collection in Antarctica.
Until now, crevasse detection has not received the attention it deserves on a global scale, and there is hardly any effective and efficient method for detecting crevasses. Texture is important in remote sensing images: it reflects the gray-scale statistics, the structural characteristics and the spatial arrangement of the image, and can thus be used to characterize crevasses. A preliminary study on ice crevasse texture analysis and recognition [1,2] was made using the gray-level co-occurrence matrix (GLCM); the experiments demonstrated that GLCM is useful for analyzing crevasses, but no quantitative assessment was carried out and many false alarms remained. [3] proposed the visual analysis and interpretation of crevasses in high-pass filtered images, which is a reliable detection method but requires manual interaction. All these methods therefore involve labor-intensive and time-consuming work, and it is necessary to come up with an automatic and effective method of crevasse detection. Motivated by the above considerations, we present crevasse detection methods based on different feature descriptors. The rest of this paper is organized as follows. Section 2 analyzes the characteristics of crevasses. Section 3 introduces the five feature descriptors and the SVM classifier. Section 4 provides the feature evaluation on an ice crevasse sample dataset and on a large-scale ASTER image. The conclusion is given in Section 5.
2 The Characteristic of Crevasse Texture
A crevasse is a crack in an ice sheet or glacier caused by large tensile stresses at or near the glacier's surface. Accelerations in glacier speed can cause extension and initiate a crevasse. Crevasses come in several types: transverse, marginal and longitudinal crevasses, bergschrunds, etc. Transverse crevasses are the most common type; they form in a zone of extension where the glacier accelerates as it moves down slope. Marginal crevasses extend downward from the edge of the glacier, pointing up-glacier. Longitudinal crevasses form parallel to the flow where the glacier width is expanding. A bergschrund is a crevasse that divides the moving glacier ice below it from the stagnant ice above it and may extend down to the bedrock. In Antarctica, some crevasses are only about as wide as a palm, while others are about 100 m wide or even wider. Their depth ranges from several meters to unfathomable. At the surface, a crevasse may be covered, but not necessarily filled, by a snow bridge made of the previous year's snow. Falling into a hidden crevasse covered by a weak snow bridge is a danger for expedition members and for the snow tractor. Although Radio Echo Sounding and Ground Penetrating Radar systems have been used to collect crevasse information on subsurface features, they can only be used for close inspection. For planning safe routes for tractor traverses in Antarctica, the preliminary mapping of crevasses based upon ASTER scenes has proved extremely useful. Some image examples of ice crevasses are shown in Fig. 1. As seen from these images, crevasses appear as discontinuities on the snow surface, sometimes as open linear features. The fact that the crevasses on
Fig. 1. Examples of crevasses: (a) close-up pictures; (b) samples in an ASTER image
the image are marked by linear discontinuities allows them to be detected both visually and by applying filters. However, a high-pass filter enhances not only the crevasses but also other regions of the snow surface, such as blue ice areas, nunataks and shadows [3]. Owing to their linearity, applying a directional filter to the high-pass image further enhances some crevasses, but this does not work over a wide area because of the different orientations of crevasses across a whole image. Similarly, applying a gray-level co-occurrence texture filter can enhance some features, but due to the varying size of the crevasses this is not universally applicable. In this paper, the study is focused on the Grove Mountains area using an ASTER image with 15 m resolution. The Grove Mountains area is located about 400 km to the south of Zhongshan Station, with geographical coordinates 72°20′–73°12′ south latitude and 73°40′–75°50′ east longitude, and covers about 8000 km². The Grove Mountains have a typical inland character and are also an ideal midway station for expedition teams extending towards the South Pole; however, crevasses are very dense there due to glacier movement. Our work is to detect all the crevasses in this region and help polar investigation staff plan expedition routes.
3 Methodology of Extracting Descriptors
In this section we first briefly introduce the five descriptors; all of them employ the "bag of words" representation except GLCM. The SVM classifier is then introduced in the remainder of the section.
3.1 Texture Descriptors
GLCM. The Gray-Level Co-occurrence Matrix (GLCM) [4] is one of the second-order texture measures; it records how often a pixel with intensity (gray-level) value i occurs in a specific spatial relationship to a pixel with value j. These frequencies depend on the direction θ between the neighboring resolution cells and on the distance d between them. After the GLCMs are created, several statistics can be derived. In our experiments, we first process the images using gray-scale histogram equalization, then create a 16-gray-level GLCM for each image and select eight statistics which proved to be the most discriminative: (1) Contrast; (2) Correlation; (3) Energy; (4) Entropy; (5) Cluster Prominence; (6) Cluster Shade; (7) Dissimilarity; (8) Maximum probability.
GABOR. Gabor descriptors [5] are obtained by filtering the images with Gabor filters. An input image is convolved with a two-dimensional Gabor function to obtain a Gabor feature image. The wavelength parameter is set to 1/4, the number of orientations to 8 and the number of scales to 4, which yields 32-dimensional Gabor descriptors. To obtain these descriptors, we first filter the image with the filter bank. Then we compute the mean response over each patch of a regular grid of 5 × 5 pixels; the averages over all patches are treated as the descriptors of the corresponding patches, giving the descriptors of the whole image. Finally, the descriptors are quantized into 200 visual words.
LPQ. Local Phase Quantization (LPQ) [6] utilizes phase information computed locally in a window at every image position. Ideally, the low-frequency phase components are invariant to centrally symmetric blur. In our experiments, the local phase information is first extracted using the 2-D DFT or, more precisely, a short-term Fourier transform (STFT) computed over 3 × 3 local windows. Then the phases of the four low-frequency coefficients are de-correlated and uniformly quantized in an 8-dimensional space. Finally, we compute the histogram on a regular grid of 5 × 5 pixels and quantize the descriptors into 200 visual words.
CLBP. [7] proposed the Local Binary Pattern (LBP) operator, which assigns a label to every pixel of an image by thresholding the 3 × 3 neighborhood of each pixel with the center pixel value and interpreting the result as a binary number. On this basis, an associated Completed LBP (CLBP) scheme was developed in [8] for texture classification. The CLBP framework is illustrated in Fig. 2. A local region is represented by its center pixel and a Local Difference Sign-Magnitude Transform (LDSMT). These are converted into binary codes, namely CLBP_C, CLBP_S and CLBP_M, which are then combined to form the CLBP feature map of the original image. Finally, a CLBP histogram is built.
Fig. 2. The framework of CLBP
In the experiments, we adopt uniform rotation-invariant LBP codes using 16 sampling points on a circle of radius 3. Since the dimension of CLBP_S/M/C, which performs best in [8], is as high as 648, we choose instead the 54-dimensional CLBP_S_M/C descriptor, which still performs much better than LBP. The descriptors are extracted on a regular patch grid of 5 × 5 pixels and quantized into a vocabulary of 200 words.
LSS. As proposed by [9], Local Self-Similarity (LSS) captures self-similarity within relatively small regions. The LSS descriptor ζp for pixel p measures the similarity of a small patch tp around it with the larger surrounding region Rp. It is computed as follows. First, determine the N × N correlation surface ζp of the ω × ω patch tp with the surrounding N × N region Rp, where both Rp and tp are centered on p; ζp(x) is the correlation of tp with a patch tx centered on x:

ζp(x) = exp( −SSD(tp, tx) / σ ),    (1)

where σ is the maximal variance of the difference of all patches within a very small neighborhood of p (of radius 1) relative to the patch centered at p. Then discretize the correlation surface ζp on a log-polar grid and store the maximal value of ζp within each grid bin:

ζp(p, d) = max_{x ∈ BIN(p, d)} ζp(x).    (2)
In our work, we extract ζp on a regular 5 × 5 pixel grid (with N = 40, ω = 5, 5 radial bins for d and 8 angular bins for p), and assign each ζp to one of the 200 visual words in the codebook.
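The following sketch (an illustration with hypothetical names, not the authors' implementation) computes a single LSS descriptor for one pixel following Eqs. (1)-(2); it assumes the pixel lies at least N/2 pixels from the image border and uses the maximal SSD in the radius-1 neighbourhood as σ.

```python
import numpy as np

def lss_descriptor(img, py, px, N=40, w=5, n_rad=5, n_ang=8):
    """Local Self-Similarity descriptor for pixel (py, px) of a 2-D array."""
    r, h = w // 2, N // 2
    centre = img[py - r:py + r + 1, px - r:px + r + 1].astype(float)

    ys, xs = np.mgrid[-h + r:h - r + 1, -h + r:h - r + 1]  # patch-centre offsets
    ssd = np.zeros(ys.shape)
    for i in range(ys.shape[0]):
        for j in range(ys.shape[1]):
            cy, cx = py + ys[i, j], px + xs[i, j]
            patch = img[cy - r:cy + r + 1, cx - r:cx + r + 1].astype(float)
            ssd[i, j] = np.sum((centre - patch) ** 2)

    # sigma: maximal SSD within a radius-1 neighbourhood of p (cf. Eq. 1)
    near = (np.abs(ys) <= 1) & (np.abs(xs) <= 1)
    sigma = max(ssd[near].max(), 1e-6)
    corr = np.exp(-ssd / sigma)

    # log-polar max-pooling over n_rad x n_ang bins (Eq. 2)
    rad = np.hypot(ys, xs)
    ang = np.mod(np.arctan2(ys, xs), 2 * np.pi)
    rad_bin = np.minimum((np.log1p(rad) / np.log1p(rad.max()) * n_rad).astype(int), n_rad - 1)
    ang_bin = np.minimum((ang / (2 * np.pi) * n_ang).astype(int), n_ang - 1)
    desc = np.zeros((n_rad, n_ang))
    for rb, ab, c in zip(rad_bin.ravel(), ang_bin.ravel(), corr.ravel()):
        desc[rb, ab] = max(desc[rb, ab], c)
    return desc.ravel()
```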
3.2 SVM Classifier
The main idea of SVM [10] is that a classifier needs not only to work well on the training samples, but also to work equally well on previously unseen samples. A linear SVM is a separation hyper-plane with the number of misclassified samples minimized and its separation margin maximized. The basic problem of SVM can be written as:
Min ψ(w, ξ) = (1/2)‖w‖² + C Σ_{i=1}^{n} ξ_i    (3)
subject to y_i[(w · X_i) + b] − 1 + ξ_i ≥ 0, i = 1, ..., n,

where (X_i, y_i), i = 1, ..., n, X_i ∈ R^d, y_i ∈ {+1, −1} are the training samples, and C is a constant controlling the trade-off between maximizing the margin and minimizing the errors. The decision function is:

f(x) = sgn[(w, x) + b] = sgn{ Σ_{i=1}^{n} α*_i y_i (x_i, x) + b* }.    (4)

This optimization problem can be solved through the following dual problem:

Max Q(α) = Σ_{i=1}^{n} α_i − (1/2) Σ_{i,j=1}^{n} α_i α_j y_i y_j (x_i · x_j)    (5)
subject to Σ_{i=1}^{n} y_i α_i = 0, 0 ≤ α_i ≤ C, i = 1, ..., n.
In the final SVM decision function, only a small part of the coefficients α*_i are non-zero. The corresponding training samples are called support vectors, since these samples (and only these samples) support the classification boundary. Using the idea of kernels, a linear SVM can easily be extended to its nonlinear version. Here we use the well-known LIBSVM package for its higher computational efficiency, and a linear kernel is adopted.
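As an illustration of this step (not the authors' code, and using scikit-learn as a stand-in for LIBSVM), a linear SVM on bag-of-words histograms could be trained as follows; LinearSVC solves an equivalent soft-margin problem to Eqs. (3)-(5), and the variable names are hypothetical.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_crevasse_classifier(hist_pos, hist_neg, C=1.0):
    """hist_pos / hist_neg: (n_samples, n_words) bag-of-words histograms
    for crevasse (+1) and background (-1) patches."""
    X = np.vstack([hist_pos, hist_neg])
    y = np.hstack([np.ones(len(hist_pos)), -np.ones(len(hist_neg))])
    clf = LinearSVC(C=C)          # soft-margin linear SVM, cf. Eq. (3)
    clf.fit(X, y)
    return clf

# usage: labels = train_crevasse_classifier(pos, neg).predict(test_histograms)
```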
4 Experimental Evaluation and Analysis
In the experiments, we report results on two datasets: a dataset of crevasse samples and background clutter, and a large satellite image of the Grove Mountains.
4.1 Image Samples Classification Experiment
Data Description. Our first dataset is composed of crevasse samples and complex background clutter sampled from ASTER images covering Antarctica; the size of each sample is 60 × 60 pixels. The dataset includes 60 crevasse patches as positive samples and 180 background patches as negative samples. As presented in Fig. 3, although the crevasse patches display regular texture characteristics, background patches also share some similarities with them, especially some regions on the mountain ridges. Since Antarctica constantly suffers gale scraping whose direction is comparatively fixed, some land surfaces, particularly on the mountain ridges, display a stripe texture, so it is very
Fig. 3. Image sample set: the top line is crevasse, the bottom row is background
Fig. 4. Comparison of SVM classification accuracy: mean accuracy (%) versus the percentage of training images for the GLCM, GABOR, LPQ, CLBP and LSS descriptors
hard to distinguish them from crevasses. Furthermore, the terrain of the Grove Mountains itself is very complicated; what we see in Fig. 3 is merely a very small part of it, so detecting crevasses is both meaningful and challenging. In the classification experiments, 5%, 10%, 15%, 20%, 25% and 30% of the samples of both kinds are chosen as training data, and the rest are used for testing. The final result is recorded as the ratio of correctly classified images to the total number of test images.
Classification Results. Fig. 4 shows the average classification accuracy on the sample dataset. From Fig. 4 we can clearly see that the classification accuracies of all the descriptors approach 70% when the training sample proportion is merely 5%. This shows that all the proposed descriptors characterize crevasses well, and that visual words can enhance the representativeness of image samples to some extent. When the training sample proportion reaches 30%, the classification
Fig. 5. The whole image detection: (a) the ground truth (crevasses are in red circles); (b) the detection result (crevasse is colored red, background is not colored)
Fig. 6. Some false alarms of crevasse detection
accuracies of all the descriptors approach 75%, and the LSS descriptor obtains the best result, with a classification accuracy as high as 85.86%, which validates the effectiveness of our method. We therefore adopt the LSS descriptor to detect crevasses in the large-scale ASTER image in the second experiment.
4.2 The Large Scale Image Detection Experiment
Data Description. Our second experiment is performed on a large satellite image of the Grove Mountains. The whole ASTER image contains 5400 × 5400 pixels. The training set contains all the samples of the first experiment. The testing set is made by a grid partition of this large scene; the size of each grid patch is also 60 × 60 pixels, and adjacent windows overlap by half a window. The evaluation of the results is visual, using an image tagged by experts as ground truth.
Classification Results. As shown in Fig. 5(b), although there are some false alarms, the result is still satisfying as a whole, as almost all crevasse areas are detected.
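A schematic sliding-window detector matching this description is sketched below (illustrative only): a grid of 60 × 60 patches with half-window overlap, where `extract_bow_histogram` is a hypothetical helper standing in for the LSS bag-of-words step and `clf` is the trained SVM.

```python
import numpy as np

def detect_crevasses(image, clf, extract_bow_histogram, win=60):
    """Slide a win x win window with half-window overlap over a 2-D image
    and classify each patch; returns a boolean map of detections."""
    step = win // 2
    h, w = image.shape
    hits = np.zeros((h, w), dtype=bool)
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            patch = image[y:y + win, x:x + win]
            feat = extract_bow_histogram(patch).reshape(1, -1)
            if clf.predict(feat)[0] == 1:        # +1 = crevasse
                hits[y:y + win, x:x + win] = True
    return hits
```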
Due to the low resolution, some places are hard to recognize even for human eyes, especially some areas around the mountains, which have a certain stripe texture of their own. In addition, some land surfaces also display a stripe texture similar to crevasses because of the constant gale scraping. Fig. 6 gives some confusing samples: those patches share many similar structures, and some semantic information about the scene might be helpful.
5 Conclusion and Prospects
In this paper, we have evaluated the performance of five different descriptors for detecting crevasses in ASTER images. The LSS descriptor outperforms all the others in average classification accuracy and stability. The large-scale detection experiment also shows that the LSS descriptor is a promising crevasse detector. Since the LSS descriptor captures the internal geometric layout of local self-similarities within an image, while accounting for small local affine deformations, it is considered applicable to other types of image data as well. Certainly, there may be better descriptors for characterizing crevasses, which requires more experiments to test and evaluate. Moreover, due to the low resolution of ASTER images, we can only detect larger crevasses. Using remote sensing images from high-resolution satellites, such as GeoEye and WorldView, gives researchers more chances to detect small crevasses. Furthermore, compared with optical satellite images, SAR images can detect crevasses under snow bridges because radar can penetrate the shallow snow surface; future work will therefore focus on detecting crevasses by fusing optical and SAR images.
Acknowledgement
This work was supported in part by the National High Technology Research and Development Program of China (No. 2009AA12Z133) and the National Natural Science Foundation of China (No. 40801183, 40606002).
References
1. Zhou, C.X., Wang, Z.M.: Preliminary study on ice crevasse texture analysis and recognition. In: Proc. SPIE, The Fifth International Symposium on Multispectral Image Processing & Pattern Recognition (MIPPR), vol. 6786, p. 67864D (2007)
2. Zhou, C.X., Wang, M.: Ice crevasse detection based on gray level co-occurrence matrix. Chinese Journal of Polar Research 20(1), 23–30 (2008)
3. Rivera, A., Cawkwell, F., Wendt, A., Zamora, P.: Mapping Blue Ice Areas and Crevasses in West Antarctica using ASTER images, GPS and Radar Measurements
4. Haralick, R.M., Shanmugan, K., Dinstein, I.: Textural Features for Image Classification. IEEE Transactions on Systems, Man and Cybernetics 3(6), 610–621 (1973)
5. Daugman, J.G.: Two Dimensional Spectral Analysis of Cortical Receptive Field Profiles. Vision Research 20(10), 847–856 (1980)
6. Ojansivu, V., Heikkilä, J.: Blur insensitive texture classification using local phase quantization. In: Elmoataz, A., Lezoray, O., Nouboud, F., Mammass, D. (eds.) ICISP 2008. LNCS, vol. 5099, pp. 236–243. Springer, Heidelberg (2008)
7. Ojala, T., Pietikäinen, M., Mäenpää, T.: Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence 24(7), 971–987 (2002)
8. Guo, Z., Zhang, L., Zhang, D.: A completed modeling of local binary pattern operator for texture classification. IEEE Transactions on Image Processing 19(6), 1657–1663 (2010)
9. Shechtman, E., Irani, M.: Matching local self-similarities across images and videos. In: IEEE Conf. on Computer Vision and Pattern Recognition (CVPR 2007), pp. 1–8 (2007)
10. Cortes, C., Vapnik, V.: Support-vector networks. Machine Learning 20(3), 273–297 (1995)
Recognition of Trademarks during Sport Television Broadcasts

Dariusz Frejlichowski
West Pomeranian University of Technology, Szczecin, Faculty of Computer Science and Information Technology, Zolnierska 49, 71-210 Szczecin, Poland
[email protected]
Abstract. In this paper the problem of recognizing trademarks placed on banners visible during a sport television broadcast is described and experimentally investigated. It constitutes the second stage of the banner analysis process, e.g. for estimating the time that a particular banner is visible and can influence customers' behaviour. Banners placed near the playing field during football matches were analysed. For this task four algorithms were selected and tested, namely the UNL shape descriptor combined with the Partial Point Matching Algorithm, the Contour Sequence Moments, the UNL-Fourier descriptor and the Point Distance Histogram. Amongst them the best result was obtained with the UNL + PPMA approach, whose average efficiency was equal to 84%.
Keywords: sport television broadcast, trademark recognition, polar shape descriptors, Partial Point Matching Algorithm.
1 Introduction
There are numerous modern advertising techniques. They are diverse and, above all, becoming more and more effective. One of the newest among them is brand and product placement, which is increasingly commonplace nowadays. Although in its basic, primitive version it can be less effective, it is very efficient when applied during an emotional broadcast. Sport broadcasts are useful in this case, especially when a national team is playing. This concerns, for example, football matches, since this sport is one of the most popular around the world (particularly in Europe and South America). Although brand placement during the broadcast of this kind of event is especially profitable, it is also expensive. Hence, it is very important to properly evaluate the time of clear visibility of a particular banner. Therefore, the goal of the work presented in this paper is the development of a fully or semi-automatic algorithm for the identification of trademarks visible during sport television broadcasts. It can serve, for example, as a basis for the efficient calculation of marketing campaign costs. The method was designed for football matches; however, it can be adapted to any other kind of sport. The recognition of trademarks described in the paper
constitutes the second step of the whole approach to the problem. It is preceded by the automatic localisation of the banners, described e.g. in [1]. Many scientific contributions have been made in the area of applying image processing and recognition algorithms to the automatic analysis of sport broadcasts. For example, in [2] a tennis match was processed. In [3,4,5] the positions of football players were localised and extracted. In [3] and [6] the ball was the centre of attention. In [7] events such as shots on goal, free kicks, etc. were detected. In [8] the frames containing players were searched for. The automatic identification of players based on the image constitutes another exemplary task; it is, however, definitely more difficult and relies, for instance, on the recognition of the numbers placed on the backs of the shirts (e.g. [9]) or of the face (e.g. [10]). The problem of identifying trademarks placed on banners close to the playing field is less popular, yet it deserves to be taken up. In the work described in this paper the algorithms operate on single frames extracted from the broadcast. Fig. 1 presents some examples of banners visible during the transmission. The whole approach consists of three main stages. Firstly, the localisation of a banner has to be carried out. Later, the trademark within a banner has to be localised and extracted. In some cases this stage can be omitted, as some banners entirely enclose the trademark. The method for carrying out these two stages, and the problems that can occur during this process, were presented in [1]. The localisation was based on the selection of the regions of an image with the same colours as the searched object. Here, the third and final step is discussed: the recognition of the extracted trademarks. Four algorithms for shape recognition were selected and experimentally compared in order to select the best one for this task. The remaining part of the paper is organised in the following way. The second section presents the algorithms applied to the recognition of extracted trademarks. The third section provides some experimental results. Finally, the last section briefly concludes the paper.
2 The Description of the Approaches Selected for the Recognition of Extracted Trademarks
2.1 The Approach Based on the UNL Transform and Partial Point Matching Algorithm
The first method selected for the recognition of trademarks was based on the use of the UNL shape descriptor ([11]) at the shape representation stage and the Partial Point Matching Algorithm ([12]) at the matching stage. Such a combination of methods is applied here for the first time to the recognition of objects extracted from digital images. The UNL (Universidade Nova de Lisboa) transform was selected for the representation of trademarks as it is very efficient in the recognition of shapes extracted from digital images. One of the most important stages of the algorithm
Fig. 1. Examples of banners visible during a football match and extracted trademarks ([1])
is the transformation of the points belonging to the outline of an object from Cartesian to polar co-ordinates. However, the most important difference is the way the final description is represented: the distances from the centroid are put into the final matrix according to the angular values, not in the order of the particular pixels of the contour. The transform uses a complex representation of the Cartesian coordinates of points and parametric curves in a discrete manner ([11]):

z(t) = (x1 + t(x2 − x1)) + j(y1 + t(y2 − y1)),   t ∈ (0, 1),    (1)
where z1 = x1 + jy1 and z2 = x2 + jy2 are complex numbers and, for i = 1, 2, ..., n, zi denotes a point with coordinates xi, yi. The centroid O is firstly calculated ([11]):

O = (Ox, Oy) = ( (1/n) Σ_{i=1}^{n} xi , (1/n) Σ_{i=1}^{n} yi ),    (2)
and the maximal Euclidean distance between the points and the centroid is found ([11]):

M = max_i {‖zi(t) − O‖},   ∀i = 1...n, t ∈ (0, 1).    (3)
The coordinates are transformed by means of the formula ([11]):

U(z(t)) = R(t) + j × θ(t) = ‖z(t) − O‖ / M + j × atan( (y(t) − Oy) / (x(t) − Ox) ).    (4)
The discrete version can be formulated as follows ([11]):

U(z(t)) = ((x1 + t(x2 − x1) − Ox) + j(y1 + t(y2 − y1) − Oy)) / M + j × atan( (y1 + t(y2 − y1) − Oy) / (x1 + t(x2 − x1) − Ox) ).    (5)
The parameter t is discretized in the interval [0,1] with small steps ([11]). For the derived coordinates, ones are put into a zero matrix, in which the row corresponds to the distance from the centroid and the column to the angle. The obtained matrix is 128 × 128 pixels in size. A pictorial representation of the obtained matrix is provided in Fig. 2 and compared with the regular conversion from Cartesian to polar coordinates.
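A compact sketch of this representation is given below (an illustration, not the author's code); for brevity it samples the given boundary points directly rather than interpolating each segment in t, and accumulates them in a binary 128 × 128 distance-angle matrix.

```python
import numpy as np

def unl_descriptor(contour, size=128):
    """contour: (n, 2) array of (x, y) boundary points of a shape.
    Returns a size x size binary matrix: rows index the normalized
    distance to the centroid, columns index the polar angle."""
    pts = np.asarray(contour, dtype=float)
    centroid = pts.mean(axis=0)
    d = pts - centroid
    rho = np.hypot(d[:, 0], d[:, 1])
    rho = rho / max(rho.max(), 1e-12)                 # R(t) in [0, 1]
    theta = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)
    rows = np.minimum((rho * (size - 1)).round().astype(int), size - 1)
    cols = np.minimum((theta / (2 * np.pi) * (size - 1)).round().astype(int), size - 1)
    U = np.zeros((size, size), dtype=np.uint8)
    U[rows, cols] = 1
    return U
```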
Fig. 2. Comparison of the polar (middle) and UNL-transform (right) for exemplary contour shape (left)
The extracted trademarks have very irregular shapes as a result of the many distortions and problems that occur when working with a TV broadcast. Proper recognition is particularly hampered by the small size of the trademarks and by noise. Therefore, at the matching stage a method robust to the above-mentioned problems has to be used. The Partial Point Matching Algorithm (PPMA, [12]) was applied for this task. It has several advantages: it is not only robust to noise (the level of this robustness can be controlled), but also invariant to the cyclic shift that results from the transformation to polar co-ordinates. The PPMA for matching two objects represented using the UNL descriptor can be formulated as the following steps ([12]):
Step 1. Assume that A represents the base matrix (a component of the database) and B the one being recognised. In both of them 0 means background and 1 a point belonging to the object.
Step 2. For i = 1, 2, ..., a−1, where a is the number of columns in A, do Steps 3–6.
Step 3. Increase the number of ones in A: for each element equal to 1, put 1 several times (e.g. σ = 2) into the rows above and below.
Step 4. Do a logical AND between A and B and put the result into matrix C.
Step 5. Calculate the number of ones in C and put it into the vector MAX at location i.
Step 6. Circularly shift matrix A to the right.
Step 7. Find the maximum value in the vector MAX. It is the value of the similarity between A and B.
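The steps above can be transcribed into Python roughly as follows (illustrative only; dilating the template once up front is used as an equivalent of Step 3, and σ is the vertical tolerance).

```python
import numpy as np

def ppma_similarity(A, B, sigma=2):
    """Partial Point Matching similarity between two binary UNL matrices.

    A is the template (database) matrix, B the matrix being recognised;
    columns correspond to angles, so circular column shifts handle the
    rotation-induced cyclic shift."""
    A = A.copy()
    # Step 3: add ones sigma rows above and below every existing one
    rows, cols = np.nonzero(A)
    for dr in range(1, sigma + 1):
        A[np.clip(rows - dr, 0, A.shape[0] - 1), cols] = 1
        A[np.clip(rows + dr, 0, A.shape[0] - 1), cols] = 1
    best = 0
    for shift in range(A.shape[1]):                       # Steps 2 and 6
        C = np.logical_and(np.roll(A, shift, axis=1), B)  # Step 4
        best = max(best, int(C.sum()))                    # Steps 5 and 7
    return best
```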
2.2 The Approach Based on the Contour Sequence Moments
The Contour Sequence Moments ([13]) method was the second algorithm selected for the comparison. It starts with the representation of the closed boundary of an object by an ordered sequence z(i) whose elements are equal to the Euclidean distance from the centroid to the particular N points of the shape. Then, the one-dimensional normalised contour sequence moments are derived by means of the formulas ([13]):

m_r = (1/N) Σ_{i=1}^{N} [z(i)]^r,   μ_r = (1/N) Σ_{i=1}^{N} [z(i) − m_1]^r.    (6)
The r-th normalised contour sequence moment and normalised central contour sequence moment are written as ([13]):

m̄_r = m_r / (μ_2)^{r/2},   μ̄_r = μ_r / (μ_2)^{r/2}.    (7)
The final description of a shape uses four values that are less sensitive to noise ([13]):

F1 = (μ_2)^{1/2} / m_1,   F2 = μ_3 / (μ_2)^{3/2},   F3 = μ_4 / (μ_2)^2,   F4 = μ̄_5.    (8)
Since the object is represented by means of only four values, the Euclidean distance can be applied to match a test object with the templates.
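A short sketch of the four CSM features of Eqs. (6)-(8), assuming the boundary has already been reduced to the centroid-distance sequence z(i); this is an illustration, not the author's implementation.

```python
import numpy as np

def csm_features(z):
    """z: 1-D array of centroid distances along the closed contour."""
    z = np.asarray(z, dtype=float)
    m1 = z.mean()
    mu = lambda r: np.mean((z - m1) ** r)     # central moments, Eq. (6)
    mu2, mu3, mu4, mu5 = mu(2), mu(3), mu(4), mu(5)
    F1 = np.sqrt(mu2) / m1
    F2 = mu3 / mu2 ** 1.5
    F3 = mu4 / mu2 ** 2
    F4 = mu5 / mu2 ** 2.5                     # normalised 5th moment, Eq. (7)
    return np.array([F1, F2, F3, F4])
```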
2.3 The Approach Based on the UNL-Fourier
The UNL-Fourier transform is an extension of the UNL transform. For the two-dimensional representation of a shape obtained by means of it, the absolute spectrum of the 2D Fourier transform is applied. It can be derived by means of the following formula ([14]):

C(k, l) = (1/(HW)) Σ_{h=1}^{H} Σ_{w=1}^{W} P(h, w) · e^{−i(2π/H)(k−1)(h−1)} · e^{−i(2π/W)(l−1)(w−1)},    (9)
where H and W are the height and width of the image in pixels, k is the sampling rate in the vertical direction (1 ≤ k ≤ H), l is the sampling rate in the horizontal direction (1 ≤ l ≤ W), C(k, l) is the value of the coefficient of the discrete Fourier transform at row k and column l of the coefficient matrix, and P(h, w) is the value of the image plane at coordinates h, w. From the obtained absolute spectrum the subpart with indices 2–10 along both axes is extracted (the first element is omitted) and concatenated. As a result, the one-dimensional Euclidean distance may again be applied for the calculation of the dissimilarity measure.
2.4 The Approach Based on the Point Distance Histogram
The Point Distance Histogram ([15]) was the last of the methods selected for the comparison. Because it combines the polar transform and the derivation of a histogram, it is invariant to scaling, rotation and shifting of a shape. The algorithm begins with the calculation of the polar co-ordinates with respect to the centroid O = (Ox, Oy). The obtained co-ordinates are put into two vectors, Θi for the angles and Pi for the radii ([15]):

ρi = ((xi − Ox)^2 + (yi − Oy)^2)^{1/2},   θi = atan( (yi − Oy) / (xi − Ox) ).    (10)
The resultant angle values are converted into the nearest integers ([15]):

θi = ⌊θi⌋ if θi − ⌊θi⌋ < 0.5,   θi = ⌈θi⌉ if θi − ⌊θi⌋ ≥ 0.5.    (11)
The next step is the rearrangement of the elements in Θi and Pi according to increasing values in Θi. This way we obtain the vectors Θj, Pj. For equal elements in Θj, only the one with the highest corresponding value in Pj is selected. This gives a vector with at most 360 elements, one for each integer angle. For further processing only the vector of radii is taken, Pk, where k = 1, 2, ..., m and m is the number of elements in Pk (m ≤ 360). Now the elements of the vector Pk are normalized ([15]):

M = max_k {ρk},   ρk = ρk / M,    (12)
The elements in Pk are assigned to r bins of the histogram (ρk to lk, [15]):

lk = r if ρk = 1,   lk = ⌈r · ρk⌉ if ρk ≠ 1.    (13)
For the comparison of the obtained representations the 1D Euclidean distance can be applied.
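A sketch of the PDH computation of Eqs. (10)-(13) is given below (illustrative; r is the number of histogram bins and the contour is assumed to be given as an array of (x, y) points).

```python
import numpy as np

def pdh(contour, r=20):
    """Point Distance Histogram of a contour given as an (n, 2) array."""
    pts = np.asarray(contour, dtype=float)
    O = pts.mean(axis=0)                              # centroid
    d = pts - O
    rho = np.hypot(d[:, 0], d[:, 1])                  # Eq. (10)
    theta = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 360
    theta = np.round(theta).astype(int) % 360         # Eq. (11)
    # keep, for every integer angle, the largest radius (at most 360 values)
    radii = np.zeros(360)
    np.maximum.at(radii, theta, rho)
    radii = radii[radii > 0]
    radii = radii / radii.max()                       # Eq. (12)
    bins = np.where(radii == 1.0, r, np.ceil(r * radii)).astype(int)  # Eq. (13)
    return np.bincount(bins, minlength=r + 1)[1:]     # r-bin histogram
```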
3 Conditions and Results of the Experiment
The methods described in the previous section were experimentally evaluated using image sequences recorded from analogue TV during the FIFA World Cup 2006 in Germany. The input interlaced video had 720 × 576 points, in AVI format, RGB colour, MPEG-2. The trademarks were localised and extracted by means of the approach described in [1]. Next, they were converted to binary form. Later, the shapes obtained in this way were represented and identified by means of the methods described in the previous section. The templates matched with the processed objects were prepared before the analysis of a particular football match, based on the knowledge of the banners that could appear. For the purpose of the experiments, twenty 10-minute video sequences from 10 different matches (two sequences per match) were selected. For each case the human operator first analysed the sequence carefully and calculated the display time of each clearly visible trademark. Later, the same work was performed using the automatic algorithms. Firstly, the methods for the localisation of a banner and a trademark were applied, by means of the approach described in [1]. Then, the process of identification of the particular brand mark was executed, separately
Table 1. Experimental results — the average recognition rates for particular explored approaches according to the results provided by human operator
Sequence no.   UNL + PPMA   CSM   UNL-F   PDH
1              73           41    51      64
2              82           52    67      74
3              81           50    74      73
4              87           48    76      78
5              78           28    53      73
6              84           55    71      77
7              89           36    78      83
8              76           29    53      81
9              78           27    52      79
10             92           56    75      86
11             86           44    73      72
12             82           41    68      70
13             91           48    69      81
14             93           62    79      89
15             76           32    45      69
16             89           52    74      83
17             85           46    67      73
18             89           38    69      82
19             79           28    52      71
20             80           35    61      73
Average        84           42    65      77
for each of the algorithms described in the previous section. Since a detailed analysis of the results for each separate sequence and trademark exceeds the available space here, Table 1 provides the average efficiency results for the explored methods. These coefficients were calculated as the proportion of correct results given by the automatic algorithm with respect to the results provided by a human. In order to speed up the process of trademark identification, only two frames per second were analysed, and only frames unaffected by problems hampering the recognition, e.g. blurring or occlusions, were considered. The analysis of Table 1 leads to the conclusion that the best results were achieved by the new approach based on the combination of the UNL descriptor for shape representation and the PPMA algorithm for matching. This method achieved an 84% recognition rate. The PDH descriptor proved to be the second-best approach, scoring a 77% recognition rate. This result can be considered very promising, since in the experiment only the simple Euclidean distance was applied for matching. The PDH results in a histogram; therefore, it is possible that the application of a more sophisticated matching method, designed specifically for histograms, could improve the results. The UNL-Fourier achieved worse results, equal to 65%. The approach based on the contour moments was definitely the worst, resulting in 42% efficiency.
4 Conclusions
In this paper the problem of the automatic recognition of trademarks placed on banners and visible during sport TV broadcasts was analysed. The localised and extracted logos were represented by means of four different shape representation techniques and matched with template images prepared before the experiment. Amongst the evaluated methods, the best result was obtained by the approach that combines the UNL transform at the shape description stage with the PPMA algorithm for matching. This approach achieved an 84% recognition rate. The advantages of the PPMA algorithm were especially fruitful here, amongst them the invariance to circular shift as well as the robustness to some level of noise. The PDH algorithm achieved 77% efficiency. However, in this case the one-dimensional Euclidean distance was applied for matching, and there is a possibility that some other method, designed specifically for the comparison of histograms, would be more suitable. This idea will be examined in future work on the problem. The UNL-Fourier descriptor obtained 65%, which can be considered acceptable, yet there is no obvious way of improving this score. The worst result, equal to 42%, was obtained when using the CSM.
References

1. Frejlichowski, D.: Automatic localisation of trademarks during a sport television broadcast. In: Chmielarz, W., Kisielnicki, J., Parys, T. (eds.) Informatyka ku Przyszłości, pp. 361–369 (2010)
2. Zhong, D., Chang, S.-F.: Real-time view recognition and event detection for sports video. Journal of Visual Communication and Image Representation 15(3), 330–347 (2004)
3. Yow, D., Yeo, B.-L., Yeung, M., Liu, B.: Analysis and presentation of soccer highlights from digital video. In: Li, S., Teoh, E.-K., Mital, D., Wang, H. (eds.) ACCV 1995. LNCS, vol. 1035. Springer, Heidelberg (1996)
4. Assfalg, J., Bertini, M., Colombo, C., Del Bimbo, A., Nunziati, W.: Semantic annotation of soccer videos: automatic highlights identification. Computer Vision and Image Understanding 92(2-3), 285–305 (2003)
5. Seo, Y., Choi, S., Kim, H., Hong, K.S.: Where are the ball and players? Soccer game analysis with color-based tracking and image mosaick. In: Proc. of the Int. Conf. on Image Analysis and Processing, Florence, Italy, pp. 196–203 (1997)
6. Ancona, N., Cicirelli, G., Stella, E., Distante, A.: Ball detection in static images with Support Vector Machines for classification. Image and Vision Computing 21(8), 675–692 (2003)
7. Leonardi, R., Migliorati, P.: Semantic indexing of multimedia documents. IEEE MultiMedia 9(2), 44–51 (2002)
8. Li, B., Errico, J.H., Pan, H., Sezan, I.: Bridging the semantic gap in sports video retrieval and summarization. Journal of Visual Communication and Image Representation 15(3), 393–424 (2004)
9. Andrade, E.L., Khan, E., Woods, J.C., Ghanbari, M.: Player identification in interactive sport scenes using region space analysis, prior information and number recognition. Melisa EU Project (2003)
10. Frejlichowski, D., Wierzba, P.: A face-based automatic identification of football players during a sport television broadcast. Polish Journal of Environmental Studies 17(4C), 406–409 (2008)
11. Rauber, T.W., Steiger-Garcao, A.S.: 2-D form descriptors based on a normalized parametric polar transform (UNL transform). In: Proc. MVA 1992 IAPR Workshop on Machine Vision Applications (1992)
12. Frejlichowski, D.: An algorithm for binary contour objects representation and recognition. In: Campilho, A., Kamel, M.S. (eds.) ICIAR 2008. LNCS, vol. 5112, pp. 537–546. Springer, Heidelberg (2008)
13. Sonka, M., Hlavac, V., Boyle, R.: Image Processing, Analysis, and Machine Vision, 2nd edn. PWS – an Imprint of Brooks and Cole Publishing (1998)
14. Kukharev, G.: Digital Image Processing and Analysis (in Polish). Szczecin University of Technology Press (1998)
15. Frejlichowski, D.: Analysis of four polar shape descriptors properties in an exemplary application. In: Bolc, L., Tadeusiewicz, R., Chmielewski, L.J., Wojciechowski, K. (eds.) ICCVG 2010. LNCS, vol. 6374, pp. 376–383. Springer, Heidelberg (2010)
An Image Processing Approach to Distance Estimation for Automated Strawberry Harvesting

Andrew Busch (1) and Phillip Palk (2)

(1) Griffith University, Brisbane, Australia, [email protected]
(2) Mâgnificent Pty Ltd, Wamuran, Australia, [email protected]
Abstract. In order to successfully navigate between rows of plants, automated strawberry harvesters require a robust and accurate method of measuring the distance between the harvester and the strawberry bed. A diffracted red laser is used to project a straight horizontal line onto the bed, and is viewed by a video camera positioned at an angle to the laser. Using filtering techniques and the Hough transform, the distance to the bed can be calculated accurately at many points simultaneously, allowing the harvester's navigation system to determine both its position and angle relative to the bed. Testing has shown that this low-cost solution provides near-perfect field performance. Keywords: automated harvesting, distance estimation, Hough transform.
1 Introduction
Within the strawberry industry, labour costs have always represented by far the highest proportion of expenditure. Additionally, high competition for the limited labour supply in the horticultural industry has meant that the availability of seasonal workers such as pickers and packers is often inadequate during times of peak output, leading to decreased yield and significant waste [3]. The continual increase of labour costs and the steadily shrinking labour pool have seen strawberry producers look towards technological solutions such as automated harvesting, packing, and grading of fruit. Automated harvesting of strawberries presents a number of unique difficulties. Strawberries are extremely fragile fruit, and easily damaged. In addition, damaged fruit commands a significantly lower price than undamaged fruit, and is usually sold at a net loss for jam and other such secondary uses. Even seemingly minor blemishes on the fruit can significantly reduce the final sale price. For this reason, any automated harvesting must be done in a very precise and gentle manner. If possible, the fruit itself should never be touched, but rather moved only by means of the peduncle (stem) of the fruit, which is significantly more difficult to find than the fruit itself.
Strawberries must also be picked at the right time, as they do not ripen significantly once removed from the plant. If picked too early, the unripe fruit are unsuitable for sale. If picked too late, the fruit will likely be rotten, meaning that not only will that particular piece of fruit be lost, but there will also be a much higher chance of fungus and other diseases for neighbouring plants. For these reasons, human pickers are trained to pick all ripe fruit, and any automated system must match this level of accuracy to be acceptable. Foliage presents yet another difficulty in this regard, as much of the fruit is hidden under leaves which must be moved before picking can commence. Finally, the harvester must be able to quickly and accurately move along the rows of plants, stopping precisely in front of each to harvest. In the uneven, muddy terrain of an outdoor strawberry field, this can present difficulties. Although generally straight, many fields have local curvature, and as such the harvester must be capable of accurately traversing such curves in order to keep the picking head within reach of the fruit and avoid causing damage to the plants themselves. It is this latter problem which is addressed in this paper. As shown in figure 1, strawberries are typically grown on raised beds, with a valley running between adjacent beds. This provides both a natural path for the harvester to travel along and a convenient point of reference for measuring the distance to the plants. By measuring the distance to the bed at both the front and rear of the harvester, both an average distance and an angle can be calculated. As the harvester completely straddles the bed, this can be done on both sides simultaneously, allowing for greater accuracy. Distance measurement can be performed in a number of ways. Ultrasonic distance measurement devices, which function by detecting echoes from a high-frequency sound wave, are quite accurate over small distances and relatively cheap [5]. Unfortunately, these devices only work well for detecting hard surfaces such as metal, glass or wood, and perform poorly in the conditions experienced in a typical strawberry field. Preliminary testing showed that their accuracy was unacceptably low for this application. Another popular method of distance estimation is laser range finding. These systems typically operate by measuring either the time taken for a pulse of coherent light to travel to an object and return, or the phase shift in the returned light. Although these technologies are very accurate, their size and cost make them unsuitable for the harvester application. Image processing techniques also provide a solution to this problem. By shining a line of coherent light onto the bed, and viewing this line with a video camera at an angle, an accurate and robust distance measure can be calculated. As the harvester already contains a number of video cameras, this solution involves only the addition of a single laser source, which is extremely cheap. The physical layout of the system is described in Section 2, and the processing used to determine the distance is explained in Section 3. Experimental results of the proposed technique from both laboratory and field testing are provided in Section 4.
Fig. 1. Layout of strawberry beds, showing position of harvester, laser, and camera
2 Harvester Environment and Camera Setup
The automated strawberry harvester consists of an outer shell, completely enclosed to prevent any natural light from entering, with all robotics and control systems contained internally. As shown in figure 1, the harvester straddles the bed, with plastic and rubber sheeting allowing it to pass smoothly over the plants without letting significant amounts of light in. By avoiding natural light, the illumination of objects of interest can be more accurately controlled, thus enabling simpler processing. The laser used to illuminate the bed has a wavelength of 650 nm, with a nominal power output of 5 W. It is mounted on the side panel of the harvester, at a level which corresponds to roughly half the height of the bed. This can be easily adjusted for use in fields which have variable bed heights. In order to create the horizontal line, a diffraction grating is placed immediately in front of the laser. Strawberry beds are generally covered entirely in plastic sheeting to prevent weeds. This sheeting allows excellent reflection of the laser. The camera used to detect the image is a Tucsen TCA 1.31C, with a Micron MT9M131 1/3" CMOS sensor, and is mounted directly above the laser, with a separation distance of 0.21 m. During testing the camera was angled downward at 41°, as this angle gave the best possible view of the bed. This angle can be modified for situations where the bed is significantly narrower or wider. The camera outputs uncompressed frames at 25 fps, at a resolution of 640 × 480. A physical colour filter matched to the wavelength of the laser was attached to the camera to remove any unwanted signals.
3 Processing Algorithms

3.1 Pre-processing
In order to speed up processing, frames are downsampled to 160x120 pixels. This resolution was found to be sufficient for high accuracy, and allows a much
higher number of frames to be processed per second and thus greater control of the harvester. An example of such an image is shown in figure 2(a). Correction for lens distortion was then applied, in order to obtain a true representation of the image plane. This operation is carried out automatically in software using data obtained with a special calibration image. Following this, the red channel is extracted and it alone used for further processing. The result of this is shown in figure 2(b). Lens distortion correction provides a much straighter laser line, whilst extracting the red channel greatly increases the brightness of the line compared to the rest of the image. These operations significantly improve the results of the Hough transform, and the overall accuracy of the system.
Fig. 2. (a) Original image, and (b) Red channel after lens distortion correction
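For illustration, the pre-processing chain of Section 3.1 could be sketched in Python with OpenCV roughly as follows; the function name, the calibration inputs and the use of cv2.undistort are assumptions based on the description above, not the authors' actual software.

```python
import cv2

def preprocess(frame, camera_matrix, dist_coeffs):
    # Downsample the 640x480 frame to 160x120 to speed up processing (Section 3.1).
    small = cv2.resize(frame, (160, 120), interpolation=cv2.INTER_AREA)
    # Correct lens distortion; camera_matrix and dist_coeffs are assumed to come
    # from a prior calibration at the downsampled resolution (e.g. cv2.calibrateCamera).
    undistorted = cv2.undistort(small, camera_matrix, dist_coeffs)
    # Keep only the red channel for further processing (OpenCV frames are BGR).
    return undistorted[:, :, 2]
```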
3.2 Filtering
Due to reflections and stray external light entering the enclosed area of the harvester, there are significant amounts of noise in the red channel. Simple thresholding is insufficient to remove this noise in all cases, and so a filtering approach is employed. A 5 × 5 highpass kernel is applied to the image, and a simple threshold applied to identify regions with significant high-frequency content. As the lighting conditions are relatively stable, a constant threshold is appropriate and sufficient. As can be seen in figure 3, this operation retains the line and removes the majority of the unwanted areas, leaving only isolated regions which should not unduly affect the Hough transform.
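A rough sketch of this filtering step is given below; the exact kernel weights and the threshold value are not stated in the paper, so the ones used here are illustrative assumptions.

```python
import cv2
import numpy as np

def highpass_mask(red, thresh=40.0):
    # 5x5 high-pass kernel: centre pixel minus the local mean.
    kernel = -np.ones((5, 5), np.float32) / 25.0
    kernel[2, 2] += 1.0
    response = cv2.filter2D(red.astype(np.float32), -1, kernel)
    # Constant threshold, justified by the stable lighting inside the enclosure;
    # the value 40 is an assumption, not the authors' setting.
    return (response > thresh).astype(np.uint8) * 255
```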
3.3 Hough Transform
The Hough transform is a generalised algorithm for detecting both analytic and non-analytic curves in an image using the duality between points on the curve and its parameters [4,2]. When applied to the case of straight lines, the relevant parameters are the angle of the line and its perpendicular distance from the origin. When represented this way, each point in the binary image can be represented in the Hough domain (θ, r) by the equation

    r(θ) = x cos(θ) + y sin(θ)        (1)
Fig. 3. Result of highpass filtering and application of threshold
Fig. 4. Hough transform
By transforming each point in the image into the Hough domain, a likelihood map for each possible line is obtained. For images which contain only a single line, this will appear as a strong peak in the Hough domain, and is easily isolated. Applying the Hough transform to the result of the filtering operation shows such a result, seen in figure 4. A clear peak is evident in this image, corresponding to the laser line in the image. The position of the peak is identified and used to calculate the position of the line in the image. This is shown in figure 5, with the detected line almost perfectly overlaying the laser line in the image. The Hough transform is computationally expensive, as each point in the image must be transformed into a curve in the Hough domain. In order to improve the efficiency of the operation, it is possible to limit the range of the transform in (r, θ) space. In particular, the range of θ can be safely reduced, as the angle of the laser line will not, in practice, generally exceed the range −π/6 ≤ θ ≤ π/6. By limiting the transform to this range, a large increase in speed can be achieved. As a precaution, the range is widened in situations where the previously detected line angle nears these boundaries.
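A minimal from-scratch sketch of the restricted Hough transform is shown below; the accumulator resolution and the peak-extraction step are illustrative choices rather than the authors' implementation.

```python
import numpy as np

def hough_line_peak(mask, n_theta=61, theta_lim=np.pi / 6):
    # Accumulator over (r, theta), with theta restricted to [-pi/6, pi/6].
    h, w = mask.shape
    r_max = int(np.ceil(np.hypot(h, w)))            # largest possible |r|
    thetas = np.linspace(-theta_lim, theta_lim, n_theta)
    acc = np.zeros((2 * r_max + 1, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(mask)
    for j, theta in enumerate(thetas):
        r = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int)
        np.add.at(acc[:, j], r + r_max, 1)          # shift so indices are non-negative
    r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
    return r_idx - r_max, thetas[t_idx]             # (r, theta) of the strongest line
```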
3.4 Distance and Angle Estimation and Error Correction
Once the position of the laser line has been determined, the distance of the harvester to the bed can be calculated. This is done at two points at either end
Fig. 5. Detected line
of the line, in order to also calculate the angle of the harvester relative to the bed. Firstly, the vertical position of the line y is converted to an angle θr relative to the camera position. This can be performed using the lens characteristics, and is easily verified by manual measurement. The actual angle of depression θd can then be calculated by adding this value to the angle of the camera θc (41°). The distance to the bed d can then be calculated by

    d = h / tan(θd)        (2)
where h is the height of the camera relative to the laser. The average distance of the harvester to the bed is then calculated by averaging the values of d for the beginning and the end of the line. The direction of the harvester relative to the bed φ can also be easily calculated using simple trigonometry. These values are then used in a simple control system which manages the steering control motors of the harvester. This control system has additional rules for dealing with erroneous inputs (varying significantly from previous values) in order to provide stability in the case of vision errors.
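The geometry of this section could be coded roughly as follows; the vertical field of view and the baseline between the two measurement points are assumed values, included only to make the example self-contained (h = 0.21 m is the camera-to-laser separation given in Section 2).

```python
import math

def bed_distance(line_row, image_height=120, camera_angle_deg=41.0,
                 vertical_fov_deg=30.0, h=0.21):
    # Convert the row of the detected line into an angle relative to the optical axis
    # (vertical_fov_deg is an assumed lens parameter), add the fixed camera angle to get
    # the angle of depression theta_d, then apply Eq. (2): d = h / tan(theta_d).
    theta_r = math.radians((line_row - image_height / 2.0) / image_height * vertical_fov_deg)
    theta_d = math.radians(camera_angle_deg) + theta_r
    return h / math.tan(theta_d)

def harvester_heading(d_front, d_rear, baseline):
    # Angle of the harvester relative to the bed, from the distances measured at the two
    # ends of the line; 'baseline' is the along-bed separation of those two points.
    return math.degrees(math.atan2(d_front - d_rear, baseline))
```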
4 Experimental Setup and Results
In order to test the algorithm presented above, two experiments were carried out. In the first, images were collected for known positions and orientations of the harvester, and then analysed to determine the accuracy of the system. These images were collected in a real strawberry harvesting environment, taken at regular intervals along the row. Each image was then processed independently, and the results compared to the known data. The results of this testing are shown in table 1. As can be seen from these results, the algorithm performs very well, with over 95% of images giving near-perfect results. Most errors are caused by large amounts of noise in the input image, usually as a result of too much external light entering the enclosure. This is often caused by obstacles on the bed lifting the rubber sheeting for brief periods of time. An example of such an image, with the corresponding erroneous result, is shown in
Table 1. Results of experiment 1, showing the percentage of trials falling into various accuracy ranges

Angle φ (°)      Distance error x (cm)
                 x ≤ 2      2 < x ≤ 5     x > 5
φ ≤ 5            95.6       1.3           0
5 < φ ≤ 10       0          0.4           0
10 < φ ≤ 20      0          0             0.2
φ > 20           0          0             2.5
figure 6. In this case, the algorithm detected the sharp boundary between the plant material and the sheeting as the line, rather than the correct laser line at the bottom of the image. The high amounts of external light present in this image are clearly visible in the lower left.
Fig. 6. Input image with large amounts of external light, resulting in error.
The second experiment tested the actual performance of the entire guidance system. In this test, the harvester was under fully autonomous control, relying completely on the output of the vision system for navigation. Testing was carried out in a number of weather and lighting conditions and in many different row configurations. Success was measured by the ability of the harvester to keep within picking distance of the bed and at a correct angle for the entire row. This was achieved in 100% of trials, with a total of 10 hours of continual operation without error.
5 Conclusions and Future Work
This paper has presented an inexpensive, image processing based approach to distance detection for the specific application of an automated strawberry harvester. By capturing images of a laser-generated line reflected from the bed, and applying a selection of filters and the Hough transform, an accurate and robust estimate of both distance and direction can be obtained. Experimental results show an extremely high accuracy, with field testing when connected to an appropriate control system showing almost perfect performance.
References

1. Arima, S., Shibusawa, S., Kondo, N., Yamashita, J.: Traceability based upon multi-operation robot; information from spraying, harvesting and grading operation robot. In: Proceedings of IEEE/ASME International Conference on Advanced Intelligent Mechatronics, vol. 2, pp. 1204–1209 (2003)
2. Duda, R.O., Hart, P.E.: Use of the Hough transform to detect lines and curves in pictures. Communications of the ACM 15, 11–15 (1975)
3. Horticulture Australia: Strawberry Industry Strategic Plan 2009-2013 (2009)
4. Hough, P.V.C.: Method and means of recognizing complex patterns. US patent 3069654 (1962)
5. Marioli, D., Sardini, E., Taroni, A.: Ultrasonic distance measurement for linear and angular position control. IEEE Transactions on Instrumentation and Measurement 37, 578–581 (2002)
A Database for Offline Arabic Handwritten Text Recognition

Sabri A. Mahmoud, Irfan Ahmad, Mohammed Alshayeb, and Wasfi G. Al-Khatib

King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia
{smasaad,irfanics,alshayeb,wasfi}@kfupm.edu.sa
Abstract. Arabic handwritten text recognition has not received the same attention as that directed towards Latin script-based languages. In this paper, we present our efforts to develop a comprehensive Arabic Handwritten Text Database (AHTD). At this stage, the database will consist of text written by 1000 writers from different countries. Currently, it has data from over 300 writers. It is composed of an images database containing images of the written text at various resolutions, and a ground truth database that contains meta-data describing the written text at the page, paragraph, and line levels. Tools to extract paragraphs from pages and to segment paragraphs into lines have also been developed. Segmentation of lines into words will follow. The database will be made freely available to researchers world-wide. It is hoped that the AHTD database will stir research efforts in various handwriting-related problems such as text recognition, and writer identification and verification. Keywords: Arabic Handwritten Text Database, Arabic OCR, Document Analysis, Form Processing.
1 Introduction
Recent advances in pattern recognition have aided the automation of many demanding tasks in our daily life. Algorithmic analysis of human handwriting has many applications such as on-line and offline handwritten text recognition, writer identification and verification, bank check processing, postal address recognition, etc. Arabic is one of the Semitic languages. According to Ethnologue [1], it is ranked as the fourth most widely spoken language in the world. It is spoken by more than 280 million people and is the official language of 22 countries. Researchers consider the lack of freely available Arabic handwriting databases as one of the reasons for the limited research on Arabic text recognition compared with other languages [2]. There is no generally accepted database for Arabic handwritten text recognition that covers the naturalness of the Arabic language. Hence, different researchers of Arabic text recognition use different databases, and therefore the recognition rates of the different techniques may not be comparable. This raises the need for a comprehensive database for Arabic text recognition [3]. Having such a database is crucial for Arabic handwritten text recognition and identification research. Arabic text is written from
Fig. 1. (a) The 28 basic characters in the Arabic language, (b) Hamza, (c) Letter ALEF with Hamza, (d) Letter BAA and (e) Letter JEEM.
right to left, with Arabic having 28 basic characters as shown in Figure 1(a). Out of the 28 basic characters, 16 have from one to three dots. Those dots differentiate between otherwise similar characters. Additionally, three characters can have a zigzag-like stroke called Hamza, shown in Figure 1(b). The dots and Hamza are called secondaries. They may be located above the character's primary part, as in ALEF (Figure 1(c)), or below its primary part, as in BAA (Figure 1(d)). In only one letter, JEEM, is the dot located in the middle of its primary part, as shown in Figure 1(e). Written Arabic text is cursive in both machine-printed and hand-written form. Within a word, some characters connect to the preceding and/or following characters, and some do not. The shape of an Arabic character depends on its position in the word; a character might have up to four different shapes depending on whether it is isolated, connected from the right (beginning form), connected from the left (ending form), or connected from both sides (middle form). Figure 2(a) shows the various forms of the Arabic character BAA as it appears within text. Figures 2(b), (c) and (d) show the use of diacritics, Hamza and dots, respectively. Arabic characters do not have a fixed size (height and width). The character size varies according to its position in the word. Letters in a word can have short vowels (diacritics). These diacritics are written as strokes, placed either on top of the letters or below them.
Fig. 2. Examples of (a) different shapes of the same character, (b) diacritics, (c) characters with Hamza, and (d) different numbers and locations of dots.
Besides Arabic letters, Indian numerals are commonly used in Arabic scripts, although Arabic numerals have been increasingly used in recent writings. Arabic numerals are used in Latin scripts. As in Latin, the digits in Arabic numbers are written with the most significant digit to the left. This paper is organized
as follows: Section 2 presents the published work related to developing Arabic off-line handwriting databases. The data collection phase is presented in Section 3. Then, the data extraction and pre-processing phases are described in Section 4. In Section 5, the database structure along with the ground-truth data is presented. Finally, we present the conclusions and future work.
2 Related Works
There has not been much effort towards developing comprehensive databases for Arabic handwriting text recognition as compared to that for other languages, for example Latin scripts [4,5,6,7,8]. Among the earliest works in this regard, Abuhaiba et al. [9] developed a database of around 2000 samples of unconstrained Arabic handwritten characters written by four writers. The database contains the basic character shapes without dots, comprising a total of 51 shapes. This database lacks some handwritten character shapes. The Al ISRA database [10] was collected by a group of researchers at the University of British Columbia in 1999. It contains 37,000 Arabic words, 10,000 digits, 2,500 signatures, and 500 free-form Arabic sentences gathered from five hundred randomly selected students at Al Isra University in Amman, Jordan. The IFN/ENIT database [11,12] was developed in 2002 by the Institute of Communications Technology (IFN) at the Technical University of Braunschweig in Germany and the National School of Engineers of Tunis (ENIT). Version one consisted of 26,549 images of Tunisian town/village names written by 411 writers. The images are partitioned into four sets so that researchers can use and discuss training and testing data in this context. It is one of the most widely used databases. However, it lacks the naturalness of handwritten Arabic text as it essentially contains names of towns and villages of Tunisia. Khedher et al. [13] used a database of unconstrained isolated Arabic handwritten characters written by 48 writers, making it suitable for isolated character recognition research. The AHDB database [14] was developed in 2002 by Almaadeed. This database includes words that are used in writing legal amounts on Arabic checks and some free handwriting pages from 100 writers. Al-Ohali et al. [15], of the Center for Pattern Recognition and Machine Intelligence (CENPARMI), developed an Arabic check database in 2003 for research in the recognition of Arabic handwritten bank checks. The database includes Arabic legal and courtesy amounts that were extracted from 3000 bank checks of Al Rajhi Bank, Saudi Arabia. The database contains 2499 legal amounts, 2499 courtesy amounts written in Indian/Arabic digits, 29498 Arabic subwords used in legal amounts and 15175 Indian/Arabic digits extracted from courtesy amounts. This database can mainly be used for Arabic handwritten number, digit and limited-vocabulary word recognition. The ADBase database was presented by El-Sherif et al. [16]. It consists of 70,000 digits written by 700 writers, each writing every digit ten times. The database is partitioned into two sets: a training set consisting of 60,000 digit samples and a test set of 10,000 digit samples. This database can be used for research
in Arabic handwritten digit recognition. A database containing Arabic dates, isolated digits, numerical strings, letters, words and some special symbols was presented by Alamri [17]. A database of Arabic (Indian) digits was presented by Mahmoud [18], in which 44 writers each wrote 48 samples of each digit. This database is suitable for research in isolated digit recognition, but it is limited in size and naturalness.
3 Data Collection
In order to build a database for Arabic handwritten text recognition, a data collection form was designed. Each form consists of four pages. The first page includes fields for writer information: the name, age category, country of upbringing, qualification, gender, left/right-handedness, and a section for management purposes. Figure 3(a) shows the first page of a filled form. The writer name is masked for privacy reasons. The remaining three pages contain six paragraphs, two on each page. The second page, shown in Figure 3(b), consists of two paragraphs; the first is a summary paragraph that covers all Arabic characters and forms (beginning, middle, end, and isolated), while the second contains text randomly selected from an Arabic corpus that we collected from different published works on different topics. The third page has two paragraphs (viz. the third and fourth paragraphs shown in Figure 3(c)). The third paragraph is a randomly selected paragraph (similar to the second one) and the fourth is a repetition of the first paragraph of the second page. The second and third paragraphs are distinct over all the forms, and the first and fourth paragraphs are repeated in all the forms. This was done to enable the database to be used by researchers for writer identification and verification applications in addition to Arabic handwritten text research. Paragraphs 5 and 6 are free-form paragraphs that are kept on the fourth page: the writer writes two paragraphs on any topic he/she likes. The sixth paragraph has writing lines to enable researchers to analyze both lined and unlined text. Figure 3(d) shows a sample of a filled fourth page of the form. The forms were distributed to several countries and entities for data collection, mainly to native Arabic speakers. Currently, most of the participants are from Saudi Arabia.
4 Data Extraction and Pre-processing
Once the forms are filled in by the writers and collected, they are scanned at 200 dpi, 300 dpi, and 600 dpi gray-scale resolution. Tools were developed to binarize, skew-correct, and de-noise the forms. Further, the tool extracts the paragraphs from these forms and saves them separately for researchers who need to address full paragraphs, as is the case with segmentation and writer identification and verification. The ground truth of these paragraphs is also stored. This enables researchers to compare their recognition results with the true text. Then these paragraphs are segmented into lines. These lines are separately stored for researchers who
Fig. 3. A sample filled form. (a) Page one, (b) Page two, (c) Page three, (d) Page four.
need to conduct research on lines of text, for example text recognition using HMMs, or segmentation into words, sub-words, characters, etc. The truth value at the line level is also stored. This can be used by researchers to generate labeled training data in the training phase and to compare with recognized text in the testing phase. These tools have the option to accept user interaction so that the ground truth may be corrected at both the paragraph and line levels. Figure 4 shows samples of extracted paragraphs from a sample form. Figure 5 shows one of the paragraphs along with the segmented lines. Figure 6 shows the segmented lines along with their associated ground truth. The tools work both in an interactive mode (enabling user intervention to correct and/or improve the outputs manually at each stage) and in batch mode (where multiple forms are processed automatically without user intervention).
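As an illustration of the kind of tooling described in this section, a simple Otsu binarization and projection-profile line segmentation could look as follows in Python with OpenCV; this is only a sketch under assumed parameters, not the authors' actual tools.

```python
import cv2
import numpy as np

def binarize(gray):
    # Otsu binarization of a scanned page: ink pixels become 1, background 0.
    _, binary = cv2.threshold(gray, 0, 1, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return binary

def segment_lines(binary, min_gap=5):
    # Horizontal projection profile: rows containing ink belong to a text line; a run of
    # at least `min_gap` empty rows separates two lines.  Returns (top, bottom) row pairs.
    profile = binary.sum(axis=1)
    lines, start, gap = [], None, 0
    for y, count in enumerate(profile):
        if count > 0:
            if start is None:
                start = y
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= min_gap:
                lines.append((start, y - gap + 1))
                start, gap = None, 0
    if start is not None:
        lines.append((start, len(profile)))
    return lines
```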
Fig. 4. Samples of extracted paragraphs from the form: (a) from the top of page 2, (b) from the bottom of page 2, (c) from the top of page 4, (d) from the bottom of page 4.
5 AHTD Database Overview
The AHTD database is composed of two databases: the images database and the ground truth database.

5.1 The Images Database
This database consists of gray scale images scanned at 200, 300 and 600 dpi. Thus at the highest level, the database will have three series. Each series consists of gray scale images of the form for a certain resolution i.e. 200, 300 and 600 dpi respectively. Figure 7 shows the database structure. Apart from the original scanned forms, each series is divided into datasets. There are basically three datasets (viz. datasets of pages, paragraphs and lines). The datasets will be extended to include a dataset of words. Each of these sets has associated
Fig. 5. Sample of a paragraph and its corresponding lines after segmentation
Fig. 6. Segmented lines along with their ground truths
ground truth at the page, the paragraph and the line levels. At the page level, there are 3000 pages consisting of a total of 6000 paragraphs. Out of these 6000 paragraphs, 4000 paragraphs are written based on the printed samples (i.e. pages two and three), whereas the remaining 2000 paragraphs are freely written by the writers. The 4000 paragraphs include 2000 paragraphs with the same truth value.
Fig. 7. The database structure
5.2 Ground Truth Database
The ground truth of the data is available at the form, at the paragraph and at the line levels. In addition, the database contains information at the form level related to the writer of the form. The information includes the writer ID, writer age, country of upbringing, educational qualification, gender and handedness (left or right handed).
6 Conclusions and Future Works
A freely available database for Arabic text recognition that represents a variety of handwriting styles is currently lacking and is crucial for Arabic handwritten text recognition and writer identification research. In this paper, we presented an Arabic handwritten text recognition database. To build a comprehensive database, we designed a form consisting of four pages with a total of six paragraphs. Out of these six paragraphs, one paragraph is repeated twice in each form, two other paragraphs are unique to each form, and the remaining two paragraphs are open for the writers to select a topic of their choice and write about it. Forms were distributed to writers in several countries. The collected written forms are scanned and stored at 200, 300 and 600 dpi. Each form is further processed by binarizing, correcting skew, extracting paragraphs and segmenting the paragraphs into lines. Moreover, ground truth is recorded at the paragraph and line levels for each sample. An additional 1000 forms will be added in the second phase. As an extension of this work, the words of the segmented text lines will be segmented and stored separately, and their ground truth will also be made available to interested researchers. Different experiments in different research areas
related to Arabic handwriting will be conducted on the data. We believe that the database will be of great help and value for the research community. The database may prove useful for various research applications. Arabic handwritten text recognition research at the paragraph and line levels may be conducted. Moreover researchers can propose binarization and noise removal techniques and validate them using gray/binary scale data, propose skew correction techniques at the page level and validate them. Researchers can propose line segmentation techniques and validate them at the paragraph level. Further, researchers can propose slant correction techniques and validate them at the line level. In addition, the database can be used for Arabic handwritten text writer identification and verification research.
Acknowledgement

The authors would like to acknowledge the support provided by King Abdul-Aziz City for Science and Technology (KACST) through the Science & Technology Unit at King Fahd University of Petroleum and Minerals (KFUPM) for funding this work through project no. 08-INF99-4 as part of the National Science, Technology and Innovation Plan. In addition, we would like to thank all the writers for filling in the forms and those who helped in this effort.
References

1. Paul Lewis, M.: Ethnologue: Languages of the World, 16th edn. SIL International, Dallas, Texas (2011), http://www.ethnologue.com/ (last accessed on January 24, 2011)
2. Al-Badr, B., Mahmoud, S.A.: Survey and bibliography of Arabic optical text recognition. Signal Processing 41(1), 49–77 (1995)
3. Al-Muhtaseb, H.A., Mahmoud, S.A., Qahwaji, R.S.: Recognition of off-line printed Arabic text using hidden Markov models. Signal Processing 88(12), 2902–2912 (2008)
4. Märgner, V., El Abed, H.: Databases and competitions: Strategies to improve Arabic recognition systems. In: Doermann, D., Jaeger, S. (eds.) SACH 2006. LNCS, vol. 4768, pp. 82–103. Springer, Heidelberg (2008)
5. Lorigo, L.M., Govindaraju, V.: Offline Arabic handwriting recognition: a survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(5), 712–724 (2006)
6. Marti, U.-V., Bunke, H.: A full English sentence database for off-line handwriting recognition. In: Proceedings of the Fifth International Conference on Document Analysis and Recognition, pp. 705–708 (1999)
7. Hull, J.J.: A database for handwritten text recognition research. IEEE Transactions on Pattern Analysis and Machine Intelligence 16(5), 550–554 (1994)
8. Dimauro, G., Impedovo, S., Modugno, R., Pirlo, G.: A new database for research on bank-check processing. In: Proceedings of the Eighth International Workshop on Frontiers in Handwriting Recognition, pp. 524–528 (2002)
9. Abuhaiba, I.S.I., Mahmoud, S.A., Green, R.J.: Recognition of handwritten cursive Arabic characters. IEEE Transactions on Pattern Analysis and Machine Intelligence 16(6), 664–672 (1994)
10. Kharma, N., Ahmed, M., Ward, R.: A new comprehensive database of handwritten Arabic words, numbers, and signatures used for OCR testing. In: IEEE Canadian Conference on Electrical and Computer Engineering, vol. 2, pp. 766–768 (1999)
11. Pechwitz, M., Snoussi Maddouri, S., Märgner, V., Ellouze, N., Amiri, H.: IFN/ENIT - database of handwritten Arabic words. In: Proceedings of the 7th Colloque International Francophone sur l'Ecrit et le Document, CIFED (2002)
12. El Abed, H., Märgner, V.: The IFN/ENIT-database - a tool to develop Arabic handwriting recognition systems. In: IEEE International Symposium on Signal Processing and its Applications, ISSPA (2007)
13. Khedher, M.Z., Abandah, G.: Arabic character recognition using approximate stroke sequence. In: Arabic Language Resources and Evaluation - Status and Prospects Workshop, Third Int'l Conf. on Language Resources and Evaluation, LREC 2002 (2002)
14. Al-Ma'adeed, S., Elliman, D., Higgins, C.A.: A data base for Arabic handwritten text recognition research. In: Eighth International Workshop on Frontiers in Handwriting Recognition (IWFHR 2002), pp. 485–489 (2002)
15. Al-Ohali, Y., Cheriet, M., Suen, C.: Databases for recognition of handwritten Arabic cheques. Pattern Recognition 36(1), 111–121 (2003)
16. El-Sherif, E.A., Abdelazeem, S.: A two-stage system for Arabic handwritten digit recognition tested on a new large database. In: International Conference on Artificial Intelligence and Pattern Recognition, pp. 237–242 (2007)
17. Alamri, H., Sadri, J., Suen, C.Y., Nobile, N.: A novel comprehensive database for Arabic off-line handwriting recognition. In: Proceedings of the 11th International Conference on Frontiers in Handwriting Recognition (ICFHR), pp. 664–669 (2008)
18. Mahmoud, S.: Recognition of writer-independent off-line handwritten Arabic (Indian) numerals using hidden Markov models. Signal Processing 88(4), 844–857 (2008)
Author Index
Abdoola, Rishaad II-317 Ahmad, Irfan II-397 Ahmadi, Majid I-69 Akgul, Yusuf Sinan I-304 Al-Khatib, Wasfi G. II-397 Allili, Mohand Sa¨ıd I-314 Almeida, Jo˜ ao Dallyson S. II-151 Alonso, Luis II-360 Alshayeb, Mohammed II-397 Ammar, Moez II-348 An, Huiyao II-89 Armato, Samuel G. II-21 Asari, Vijayan K. I-30 Asraf, Daniel II-101 Ayatollahi, Ahmad II-48 Aziz, Kheir-Eddine II-170 Baja, Gabriella Sanniti di I-344 Bakina, Irina II-130 Bao, Huiyun I-262 Bedawi, Safaa M. II-307 Bernardino, Alexandre I-294 Bhatnagar, Gaurav II-286 Bouguila, Nizar I-201 Branzan-Albu, Alexandra I-426 Brun, Luc I-173 Brunet, Dominique I-100, II-264 Burke, Robert D. II-12 Busch, Andrew II-389 Campilho, Aur´elio II-1, II-68 Cancela, B. I-416 Candemir, Sema I-304 Carmona, Pedro Latorre II-360 Chae, Oksam I-274 Chen, Cunjian II-120 Cheng, Howard II-243 Clark, Adrian F. I-253 Conte, Donatello I-173 Cordes, Kai I-161 Cordier, Fr´ed´eric I-365 Cunha, Jo˜ ao Paulo Silva II-59 Dahmane, Mohamed II-233 Dai, Xiaochen I-395
Das, Sukhendu II-212 Dastmalchi, Hamidreza I-193 Debayle, Johan I-183 Dechev, Nikolai II-12 Dewitte, Walter II-1 Ding, Yan II-276 Djouani, Karim I-80 Driessen, P.F. II-328 Du, Shengzhi I-80 Duric, Zoran I-221
Ehsan, Shoaib I-253 Elguebaly, Tarek I-201 Esmaeilsabzali, Hadi II-12 Faez, Karim I-193, II-161 Fernandes, Jos´e Maria II-59 Fern´ andez, A. I-416 Ferrari, Giselle II-40 Fertil, Bernard II-170 Fieguth, Paul I-385 Figueira, Dario I-294 Fleck, Daniel I-221 Foggia, Pasquale I-173 Frejlichowski, Dariusz II-380 Furst, Jacob II-21 Gangeh, Mehrdad J. I-335 Gao, Meng I-406 George, Loay E. II-253 Ghodsi, Ali I-335 Guan, Ling I-436, II-79, II-111, II-140 Gupta, Rachana A. II-338 Hamam, Yskandar I-80 Hancock, Edwin II-89 Hardeberg, Jon Y. I-375 Hassen, Rania I-40 H´egarat-Mascle, Sylvie Le II-348 Homola, Ondˇrej II-31 Huang, Jiawei I-122 Huang, Lei II-222 Huang, Xiaozheng II-276 Ibrahim, Muhammad Talal
II-79, II-111
Kadim, Azhar M. II-253 Kamel, Mohamed S. I-335, I-385, II-307 Kang, Yousun I-141 Kanwal, Nadia I-253 Khan, M. Aurangzeb II-79 Klempt, Carsten I-161 Konvalinka, Ira II-101 Krylov, Andrey S. I-284 Kumazawa, Itsuo I-21 Kurakin, Alexey II-130
Mounier, Hugues II-348 Murray, Jim II-1 Murshed, Mahbub I-274
Laurendeau, Denis I-426 Le, Tam T. I-141 Leboeuf, Karl I-69 Li, Qi I-232 Li, Xueqing I-262 Li, Ze-Nian I-122 Liang, Jie II-276 Lisowska, Agnieszka I-50 Liu, Bin I-90 Liu, Changping II-222 Liu, Huaping I-406 Liu, Jiangchuan II-276 Liu, Wei I-325 Liu, Weijie I-90 Liu, Ying II-370 Luong, Hiˆep Q. I-11
Ortega, M. I-416 Oskuie, Farhad Bagher II-161 Ostermann, J¨ orn I-161
Mahmoud, Sabri A. II-397 Maillot, Yvan I-183 Makaremi, Iman I-69 Mandava, Ajay K. I-58 Mansouri, Alamin I-375 Marsico, Maria De II-191 McClean, Sally I-211 McDonald-Maier, Klaus D. I-253 Mehmood, Tariq II-79 Melkemi, Mahmoud I-365 Mendon¸ca, Ana Maria II-1, II-68 Merad, Djamel II-170 Mestetskiy, Leonid II-130 Meunier, Jean II-233 Miao, Yun-Qian I-385 Mizotin, Maxim M. I-284 Moan, Steven Le I-375 Momani, Bilal Al I-211 Monacelli, Eric II-317 Moreno, Jose E. II-360 Moreno, Plinio I-152 Morrow, Philip I-211
Nappi, Michele II-191 Nguyen, Nhat-Tan I-426 Nguyen, Thuc D. I-141 Nicolo, Francesco II-180 Nieuwland, Jeroen II-1 Noel, Guillaume II-317
Paiva, Anselmo C. II-151 Palk, Phillip II-389 Park, Edward J. II-12 Paulhac, Ludovic I-354 Payandeh, Shahram I-395 Pedrocca, Pablo Julian I-314 Penedo, Manuel G. I-416 Percannella, G. II-297 Petrou, Maria I-132 Philips, Wilfried I-11 Pinoli, Jean-Charles I-183 Piˇzurica, Aleksandra I-11 Pla, Filiberto II-360 Presles, Benoˆıt I-183 Quddus, Azhar II-101 Quelhas, Pedro II-1 Quivy, Charles-Henri I-21 Raicu, Daniela S. II-21 Raman, Balasubramanian II-286 Ramel, Jean-Yves I-354 Ramirez, Adin I-274 Regentova, Emma E. I-58 Renard, Tom I-354 Ribeiro, Eraldo I-325 Ribeiro, Pedro I-152 Riccio, Daniel II-191 Rosenhahn, Bodo I-161 Ross, Arun II-120 Rudrani, Shiva II-212 Ruˇzi´c, Tijana I-11 Sakaki, Kelly II-12 Salama, Magdy I-40
Author Index S´ a-Miranda, M. Clara II-68 Santhaseelan, Varun I-30 Santos-Victor, Jos´e I-152 Sapidis, Nickolas S. I-365 Sattar, F. II-328 Schaaf, Crystal II-360 Scherer, Manuel I-161 Schmid, Natalia A. II-180 Serino, Luca I-344 Shafie, Siti Mariam I-132 Siena, Stephen A. II-21 Silva, Arist´ ofanes C. II-151 Snyder, Wesley E. II-338 Song, Caifang II-201 Sorokin, Dmitry V. I-284 Sousa, Ant´ onio V. II-68 Stejskal, Stanislav II-31 Sugimoto, Akihiro I-141 Sun, Fuchun I-406 Sun, Yanfeng II-201 Svoboda, David II-31 Tafula, S´ergio II-59 Talebi, Mohammad II-48 Toda, Sorin II-101 Topic, Oliver I-161 Tran, Son T. I-141 Trivedi, Vivek II-243 Tu, Chunling I-80 Tzanetakis, G. II-328 Venetsanopoulos, A.N. Vento, M. II-297
II-111, II-140
409
Vento, Mario I-173 Voisin, Yvon I-375 Vrscay, Edward R. I-100, II-264 Wang, Chun-hao I-436 Wang, Lei I-242 Wang, Yongjin I-436, II-111, II-140 Wang, Zemin II-370 Wang, Zhaozhong I-242 Wang, Zhou I-1, I-40, I-100, I-111, II-264 Wechsler, Harry II-191 Wu, Q.M. Jonathan II-286 Wyk, Barend Jacobus van I-80, II-317 Xiong, Pengfei II-222 Xu, Tao II-370 Yang, Wen II-370 Yano, Vitor II-40 Yeganeh, Hojatollah I-111 Yin, Baocai II-201 Yin, Jianping II-89 Zeng, Kai I-1 Zhang, Jianming II-89 Zhang, Rui II-140 Zhang, Ziming I-122 Zhou, Chunxia II-370 Zhu, En II-89 Zimmer, Alessandro II-40 Zinovev, Dmitriy II-21 Zinoveva, Olga II-21