Lecture Notes in Computer Science
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Alfred Kobsa, University of California, Irvine, CA, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, TU Dortmund University, Germany
Madhu Sudan, Microsoft Research, Cambridge, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Gerhard Weikum, Max-Planck Institute of Computer Science, Saarbruecken, Germany
6005
Ajay Kumar David Zhang (Eds.)
Ethics and Policy of Biometrics Third International Conference on Ethics and Policy of Biometrics and International Data Sharing, ICEB 2010 Hong Kong, January 4-5, 2010. Revised Selected Papers
Volume Editors

Ajay Kumar
The Hong Kong Polytechnic University, Department of Computing
PQ 835, Hung Hom, Kowloon, Hong Kong
E-mail: [email protected]

David Zhang
The Hong Kong Polytechnic University, Department of Computing
PQ 803 A, Hung Hom, Kowloon, Hong Kong
E-mail: [email protected]
Library of Congress Control Number: 2010924450
CR Subject Classification (1998): C.2, K.6.5, D.4.6, E.3, J.1, H.4
LNCS Sublibrary: SL 4 - Security and Cryptology
ISSN: 0302-9743
ISBN-10: 3-642-12594-8 Springer Berlin Heidelberg New York
ISBN-13: 978-3-642-12594-2 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

springer.com

© Springer-Verlag Berlin Heidelberg 2010
Printed in Germany

Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper (06/3180)
Preface
The past decade has seen tremendous growth in the demand for biometrics and data security technologies in applications ranging from law enforcement and immigration control to online security. The benefits of biometrics technologies are apparent as they become important technologies for the information security of governments, business enterprises, and individuals. At the same time, however, the use of biometrics has raised concerns as to issues of ethics, privacy, and the policy implications of its widespread use. The large-scale deployment of biometrics technologies in e-governance, e-security, and e-commerce has required that we launch an international dialogue on these issues, a dialogue that must involve key stakeholders and that must consider the legal, political, philosophical and cultural aspects of the deployment of biometrics technologies.

The Third International Conference on Ethics and Policy of Biometrics and International Data Sharing was highly successful in facilitating such interaction among researchers, policymakers, consumers, and privacy groups. This conference was supported and funded as part of the RISE project in its ongoing effort to develop wide consensus and policy recommendations on ethical, medical, legal, social, cultural, and political concerns in the usage of biometrics and data security technologies. The potential concerns over the deployment of biometrics systems can be jointly addressed by developing smart biometrics technologies and by developing policies for the deployment of biometrics technologies that clearly demarcate conflicts of interest between stakeholders.

The conference attracted a large number of participants from 19 countries and regions, witnessed lively debates on various aspects of the 25 oral presentations, and promoted interactions among the participants that stimulated a wealth of new ideas for further work.

We are grateful to Emilio Mordini and René von Schomberg for providing the opportunity and all their support for organizing this conference. We sincerely thank Samson Tam, Alex Wai, and all the keynote speakers for accepting the invitation to join the conference. In addition, we would like to express our gratitude to the RISE project members, contributors, authors of invited papers, reviewers, and Program Committee members who made this conference so successful. We also wish to acknowledge The Hong Kong Polytechnic University and Springer for sponsoring this conference. Special thanks are due to Miranda Leung, Jodie Lee, Carmen Au, Guangming Lu, Yong Xu, Chiu Chi Pang, Siu Chung Lai, Chun Wei Tan, Yingbo Zhou, and Vivek Kanhangad for their dedication and hard work on various aspects of the conference organization. We gratefully acknowledge support from the European Commission-supported RISE project no. 230389, funded under FP7, SP-4 Capacities.
January 2010
Ajay Kumar
David Zhang
Organization

General Chairs
David Zhang, The Hong Kong Polytechnic University, Hong Kong
Emilio Mordini, Centre for Science, Society and Citizenship, Rome, Italy

Program Chair
Ajay Kumar, The Hong Kong Polytechnic University, Hong Kong

Program Committee
Christopher Megone, University of Leeds, UK
Frank Leavitt, Ben Gurion University, Israel
Frederik Kortbak, The European Privacy Institute, Denmark
Glenn McGee, The American Journal of Bioethics, USA
Kai Rannenberg, Goethe University Frankfurt, Germany
Nicolas Delvaux, Sagem Securite, France
Irma van der Ploeg, Zuyd University, The Netherlands
Cong Ya Li, The Chinese Medical Association on Medical Ethics, China
Kamlesh Bajaj, Data Security Council of India, India
Paul Mc Carthy, University of Lancaster, UK
Max Snijder, European Biometric Forum, Brussels
Kush Wadhwa, Global Security Intelligence, USA
Margit Sutrop, University of Tartu, Estonia
Niovi Pavlidou, University of Thessaloniki, Greece
Anil Jain, Michigan State University, USA
Massimo Tistarelli, University of Sassari, Italy
Michael Thieme, International Biometric Group, USA
Roland Chin, Hong Kong University of Science and Technology, Hong Kong
Nigel Cameron, Centre for Policy on Emerging Technologies, USA
Xiaomei Zhai, Chinese Academy of Medical Sciences, Beijing, China
Richa Singh, IIIT Delhi, India
Asbjorn Hovsto, ITS, Norway
Stan Li, Chinese Academy of Sciences, Beijing, China
Jaihie Kim, Biometrics Engineering Research Center, Korea
Mia Harbitz, Inter-American Development Bank, USA
Mark Riddell, SUBITO Project, UK

Organizing Chair
Lei Zhang, The Hong Kong Polytechnic University, Hong Kong
Table of Contents
Privacy Protection and Challenges

Challenges Posed by Biometric Technology on Data Privacy Protection and the Way Forward
    Roderick B. Woo ..... 1

The Dangers of Electronic Traces: Data Protection Challenges Presented by New Information Communication Technologies
    Nataša Pirc Musar and Jelena Burnik ..... 7

Privacy and Biometrics for Authentication Purposes: A Discussion of Untraceable Biometrics and Biometric Encryption
    Ann Cavoukian, Max Snijder, Alex Stoianov, and Michelle Chibba ..... 14

From the Economics to the Behavioral Economics of Privacy: A Note
    Alessandro Acquisti ..... 23

Legal Challenges

Legislative and Ethical Questions regarding DNA and Other Forensic "Biometric" Databases
    Elazar (Azi) Zadok ..... 27

Are We Protected? The Adequacy of Existing Legal Frameworks for Protecting Privacy in the Biometric Age
    Tim Parker ..... 40

Have a Safe Trip: Global Mobility and Machine Readable Travel Documents: Experiences from Latin America and the Caribbean
    Mia Harbitz and Dana King ..... 47

Engineering and Social Challenges

On Analysis of Rural and Urban Indian Fingerprint Images
    C. Puri, K. Narang, A. Tiwari, M. Vatsa, and R. Singh ..... 55

Privacy Protection in High Security Biometrics Applications
    Nalini K. Ratha ..... 62

Face Recognition and Plastic Surgery: Social, Ethical and Engineering Challenges
    H.S. Bhatt, S. Bharadwaj, R. Singh, and M. Vatsa ..... 70

Human Face Analysis: From Identity to Emotion and Intention Recognition
    Massimo Tistarelli and Enrico Grosso ..... 76

Creating Safe and Trusted Social Networks with Biometric User Authentication
    Ho B. Chang and Klaus G. Schroeter ..... 89

Ethical and Medical Concerns

Ethical Values for E-Society: Information, Security and Privacy
    Stephen Mak ..... 96

Ethical Issues in Governing Biometric Technologies
    Margit Sutrop ..... 102

Interdisciplinary Approaches to Determine the Social Impacts of Biotechnology
    Pei-Fen Chang ..... 115

Medical Safety Issues Concerning the Use of Incoherent Infrared Light in Biometrics
    Nikolaos Kourkoumelis and Margaret Tzaphlidou ..... 121

Policy Issues and Deployments in Asia

The Status Quo and Ethical Governance in Biometric in Mainland China
    Xiaomei Zhai and Qiu Renzong ..... 127

Building an Ecosystem for Cyber Security and Data Protection in India
    Vinayak Godse ..... 138

Challenges in Large Scale Biometrics Identification

The Unique Identification Number Project: Challenges and Recommendations
    Haricharan Rengamani, Ponnurangam Kumaraguru, Rajarshi Chakraborty, and H. Raghav Rao ..... 146

The Unique ID Project in India: A Skeptical Note
    R. Ramakumar ..... 154

Author Index ..... 169
Challenges Posed by Biometric Technology on Data Privacy Protection and the Way Forward

Roderick B. Woo

The Privacy Commissioner for Personal Data, Hong Kong
Abstract. Advances in biometric technologies have had a significant impact on data privacy protection in Hong Kong. The Privacy Commissioner for Personal Data shared his experience and explained his role in regulating activities that involve the use of biometric devices to collect personal data. He also discussed the good practices that should be adopted by end users. The recent public consultation in Hong Kong on the review of the Personal Data (Privacy) Ordinance also included a proposal to classify biometric data as sensitive personal data.

Keywords: regulating the collection of biometric data in Hong Kong, good practices to be adopted, greater protection for biometric data currently under review in Hong Kong.
1 Biometrics and Privacy

It is a constant challenge to balance technology and privacy. Undoubtedly, technological advancements have brought efficiency and convenience to our daily lives and to the conduct of business. Increasingly, biometric data, which include fingerprints, DNA samples, iris scans and hand contours, are collected for identification and verification purposes using devices such as fingerprint scanners and facial recognition systems. Biometric data are very personal because they are information about an individual's physical self. They are generally considered sensitive since they are fixed and, unlike a password or a PIN, cannot be reset once they have been inappropriately released. Biometric data can also reveal other sensitive personal information, such as information about one's health or racial or ethnic origin, and are therefore capable of providing a basis for unjustified discrimination against the individual data subjects. Biometric technologies can also help identify individuals without their knowledge or consent.

A biometric sample is taken from an individual and the data from the sample are then analyzed and converted into a biometric template, which is stored in a database or in an object in the individual's possession, such as a smart card. A biometric sample later taken from the individual can then be compared with the stored biometric template to identify the individual.

In most instances, the use of biometric technology involves the compilation of personal data from which an individual is identifiable, and hence the use of the data falls within the purview of the Personal Data (Privacy) Ordinance, Cap. 486, Laws of Hong Kong.
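To make the sample-to-template flow just described concrete, the following is a minimal Python sketch. It assumes, purely for illustration, that capture and feature extraction have already reduced a sample to a short numeric vector; the threshold and values are hypothetical, and production systems use far richer extractors with tuned error rates.

```python
import math

MATCH_THRESHOLD = 0.35  # illustrative; real systems tune this against error rates

def enroll(raw_sample):
    # Convert a captured sample into a stored template. Here the sample is
    # already assumed to be a list of normalized feature values; in practice a
    # vendor-specific extractor produces the template.
    return list(raw_sample)

def verify(stored_template, live_sample):
    # One-to-one matching: normalized distance between the live sample and the
    # reference template, compared against the decision threshold.
    distance = math.sqrt(sum((t - s) ** 2 for t, s in zip(stored_template, live_sample)))
    distance /= math.sqrt(len(stored_template))
    return distance <= MATCH_THRESHOLD

template = enroll([0.12, 0.80, 0.43, 0.51])        # stored in a database or on a smart card
print(verify(template, [0.10, 0.82, 0.40, 0.55]))  # True: close enough to match
```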
The proliferation of biometric technologies has a significant impact on data privacy protection. Improper collection and handling of biometric data can lead to negative consequences such as data mining, data profiling, excessive retention, and the risk of identity theft. There is a valid concern that the data collected might be re-engineered or collated with other databases, leading to uses that are beyond the reasonable expectation of the data subjects. It can also incite fears of constant surveillance, profiling and control, which create adverse effects on data privacy protection. The seriousness of harm is aggravated in the event of unauthorized or accidental access or handling. Furthermore, the safety and integrity of these personal data stored in large digital databases is another significant privacy concern that calls for special care and attention.
2 Regulatory Experience in Hong Kong

In Hong Kong, the use of biometric technologies for identification and security purposes has become increasingly popular, and they infiltrate many aspects of our lives. Many organizations have adopted biometric devices to record access to their facilities and to supervise employees' attendance. Biometric devices are also used on minors: my Office investigated a primary school's fingerprint recognition device, which was used to record students' activities, including attendance, the purchase of lunch and the borrowing of books.

In recent years, there has been a sharp rise in complaints lodged with my Office concerning the collection of biometric data. Most of them concern employers' collection of employees' fingerprints for attendance purposes. The phenomenon sends out a clear message: people do not feel comfortable handing out their biometric data lightly. While the use of biometric devices in the management of human resources is convenient and cost-effective, these activities should not be conducted without consideration for the personal data privacy of the persons involved.

In July 2009, my Office published a report [1] concerning the collection and recording of employees' fingerprint data for attendance purposes by a furniture company. In that case, I found that the company's collection of employees' fingerprint data for monitoring attendance was excessive and that the means of collection was not fair in the circumstances of the case; for those reasons the company's practice was in contravention of Data Protection Principle ("DPP") 1(1) and DPP 1(2) in Schedule 1 of the Ordinance. In gist, DPP 1(1) requires that personal data to be collected should be necessary, adequate but not excessive. DPP 1(2) requires that the means of collection be lawful and fair in the circumstances of the case.

My regulatory experience on this subject has led me to form certain views.

(a) First and foremost, if the act or practice does not involve the collection of "personal data", it is outside the jurisdiction of the Ordinance. For example, if a fingerprint recognition system converts certain features of the fingerprint into a unique value and stores it in a smart card held by the employee, the employer, who does not hold a copy of the data, has not "collected" the fingerprint data. In practice, the employee puts his finger and the smart card on the recognition device to complete the verification process. The system simply compares and matches the value in the smart card with the fingerprint features presented each time. Since the employer has no access to the personal data concerned, he has not collected any "personal data", and the Ordinance as it stands is not concerned with such a practice. (A sketch of this match-on-card arrangement appears after this list.)

(b) If the fingerprint recognition system involves the collection of personal data, employers should be mindful not to collect fingerprint data purely for day-to-day attendance purposes. In many instances, less privacy-intrusive alternatives that can achieve the same purpose are available. Whether or not features of the fingerprints are converted into a value, such an act amounts to collection of excessive personal data, and the employers risk contravening the requirements of DPP 1(1), unless the genuine consent of the data subject has been obtained.

(c) If a data subject provides his fingerprint data voluntarily for a particular purpose, the application of the DPPs should not override the data subject's right to informational self-determination. I shall respect his consent if given voluntarily and explicitly.

(d) Fingerprint data should not be collected from children of tender age, regardless of any consent given by them, for the reason that they may not fully appreciate the data privacy risks involved.

(e) Before collecting employees' fingerprint data for attendance purposes, employers must offer employees a free choice in providing their fingerprint data, and employees must be informed of the purpose of collection and given other less privacy-intrusive options (e.g. using smart cards or passwords).

(f) The means of collecting employees' fingerprint data must be fair. Employees should be able to give their consent voluntarily, without undue pressure from the employers, and should have the choice of other options; otherwise there may be contravention of the requirements of DPP 1(1) and DPP 1(2).
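As a rough illustration of scenario (a), the following Python sketch shows a match-on-card arrangement in which the derived value lives only on the employee's card and the reader emits nothing but pass/fail. The feature vectors and tolerance are hypothetical; real extractors handle sensor noise far more carefully.

```python
TOLERANCE = 0.3  # illustrative matching tolerance for noisy live captures

def enroll_to_card(features):
    # The derived value is written only to the employee's smart card;
    # the employer keeps no copy of it.
    return {"value": list(features)}

def reader_check(card, live_features):
    # The reader matches the card value against the live finger locally and
    # outputs only pass/fail; no fingerprint data reaches the employer.
    diff = sum(abs(a - b) for a, b in zip(card["value"], live_features))
    return diff / len(card["value"]) <= TOLERANCE

card = enroll_to_card([0.2, 0.7, 0.4])
print(reader_check(card, [0.25, 0.65, 0.45]))  # True -> attendance logged as a bare pass
```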
3 No Compilation of Personal Data?

I have come across arguments that the data stored in a fingerprint recognition system are not personal data because: (a) the stored biometric data are just meaningless numbers, and therefore are not personally identifiable information; and (b) a biometric image cannot be reconstructed from the stored template.

Let us look at the first argument. I think no one can deny that these numbers, when linked to other personal identification particulars, are capable of identifying an individual. After all, the purpose of collecting the data and converting them into numbers is to identify and verify a person. The same is true in the second scenario: the templates will ultimately be linked to identify a person. Hence, no matter how the templates are generated (in the form of numerical codes or otherwise), they will be considered "personal data" when combined with other identifying particulars of a data subject.

As to the claim that a fingerprint image cannot be reconstructed from the stored biometric template, I would like to make reference to the paper entitled "Fingerprint Biometrics: Address Privacy Before Deployment" [2], published by the Information and Privacy Commissioner of Ontario in November 2008. The paper explains that reconstruction of a fingerprint image from the minutiae template with striking resemblance is not uncommon, and that there is a positive match in more than 90% of cases for most minutiae matchers. Similar findings have been reported by several researchers [3]. We need no reminder that technology is advancing at an alarming speed: what is regarded as impossible today may no longer be impossible next month or next year.
4 Good Attitude and Practice

I believe a healthy society should embrace different and sometimes conflicting interests. Technology development and privacy can and should exist in harmony. It is important that end users and the various stakeholders recognize and give more thought to the impact of biometric technologies on data privacy protection. From the end users' perspective, less privacy-intrusive alternatives should be offered, and measures to lessen the adverse privacy impact should be taken, before a practice is adopted that involves the collection and processing of biometric data. Before collecting other people's biometric data, data users should always ask themselves whether the degree of intrusion into personal data privacy is proportionate to the purpose to be attained. In this evaluative process, the following questions should be addressed:

(1) What is the scope of the practice?
(2) How many people will be affected?
(3) How vulnerable are the people who may be affected?
(4) Will the biometric data be transferred to third parties?
(5) What security measures will be taken?
(6) What are the risks of identity theft?
(7) How long will the data be retained?
To promote good practice, my Office issued a Guidance Note on Collection of Fingerprint Data in August 2007, which can be downloaded from my Office's website. I wish to highlight a few of the measures that should be taken.

(1) Confine the act or practice to those data subjects from whom the collection of fingerprint data is necessary for attaining the lawful purpose of collection. Avoid universal, wide-scale or indiscriminate collection.

(2) Avoid collecting fingerprints from data subjects who lack the mental capacity to understand the privacy impact (e.g. children of tender age).

(3) Inform the data subjects explicitly of all the uses, including the intended purposes of use, of the fingerprint data collected and of the class(es) of persons to whom the fingerprint data may be transferred.

(4) Take steps to prevent misuse of the fingerprint data, for example through unnecessary linkage with other IT systems or databases.

(5) Ensure that there are sufficient security measures in place to protect the fingerprint data from unauthorized or accidental access. Privacy-enhancing technologies, such as proper encryption, should be adopted to guard against decryption or reverse engineering of the full image of the fingerprints. Personnel entrusted with handling the fingerprint data should possess the requisite training and awareness of personal data privacy protection. (A sketch of encrypted storage with scheduled erasure follows this list.)

(6) The collected fingerprint data should be regularly and promptly erased upon fulfillment of the purpose of collection; excessive retention and hoarding of the data increase the privacy risk.

(7) Where adverse action, for instance disciplinary action or termination of employment, may be taken against the data subject in reliance on the fingerprint data, the data subject should, as far as practicable, be given a chance to respond and to challenge the accuracy of the data so used.

Currently, my Office is working on an update of the Guidance Note, so please stay tuned.
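As an illustration of measures (5) and (6), the following sketch encrypts templates at rest and erases them once the purpose is fulfilled. It assumes the third-party Python `cryptography` package; the storage layout and retention period are illustrative, not prescribed by the Guidance Note.

```python
import time
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, kept in a key vault or HSM, apart from the data
cipher = Fernet(key)

store = {}  # record_id -> (encrypted template, collection timestamp)

def save_template(record_id, template_bytes):
    # Templates are never written to storage in the clear.
    store[record_id] = (cipher.encrypt(template_bytes), time.time())

def load_template(record_id):
    ciphertext, _ = store[record_id]
    return cipher.decrypt(ciphertext)

def purge_expired(max_age_seconds):
    # Regular erasure upon fulfillment of the purpose of collection.
    now = time.time()
    for rid in [r for r, (_, t) in store.items() if now - t > max_age_seconds]:
        del store[rid]
```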
5 Way Forward - Law Amendment

Let us now look ahead. The Ordinance was designed in the mid-1990s. Is it still capable of giving sufficient protection to data privacy? My Office carried out a full review of the Ordinance and submitted some 50 amendment proposals to the Government in December 2007. The Government agreed with most of these proposals and has just concluded a public consultation inviting comments.

One of the proposals my Office made was to classify biometric data as sensitive personal data. My proposal echoed the recent recommendation of the Australian Law Reform Commission to extend the definition of "sensitive personal data" to cover biometric information. I also suggested, broadly in line with EU Directive 95/46/EC [4], that other special categories of personal data should include the racial or ethnic origin of the data subject, his political affiliation, his religious beliefs and affiliations, his membership of any trade union, his physical or mental health or condition, his biometric data and his sexual life. The Government favoured classifying biometric data as sensitive personal data as a start. In my recent response to the Consultation Document, I urged the Government to review its decision and to include the other categories as well.

In the proposal, I suggested that the collection, holding, processing and use ("handling") of sensitive personal data ought to be prohibited except in certain prescribed circumstances:

(a) with the prescribed consent of the data subject;
(b) where it is necessary for the data user to handle the data in order to exercise his lawful rights or to perform his obligations as imposed by law;
(c) where it is necessary for protecting the vital interests of the data subjects or others and prescribed consent cannot be obtained;
(d) where handling of the data occurs in the course of the data user's lawful functions and activities, with appropriate safeguards against transfer or disclosure of the personal data without the prescribed consent of the data subjects;
(e) where the data have been manifestly made public by the data subjects;
(f) where handling the data is necessary for medical purposes and is undertaken by a health professional or a person who in the circumstances owes a duty of confidentiality; and
(g) where handling of the data is necessary in connection with any legal proceedings.

Many stakeholders have expressed concerns about the possible adverse effects and confusion the proposal may bring; they fear a reduction in business opportunities. However, the proposal does envisage a transitional period. Perhaps in the long run the public will support the view that the right to personal data privacy should be properly balanced with, but not sacrificed too readily for, economic gains.
6 Role of the Privacy Regulator

As a privacy regulator, I am concerned with whether the requirements of the Ordinance are complied with, and the question of how particular personal data should be regulated is always on my mind. However, I must stress that the Ordinance is technology-neutral. I have no intention whatsoever of hindering businesses from using advanced technologies or of discouraging development in biometric technologies. My role is to identify privacy risks, consider the views of different sectors, promote and monitor compliance with the Ordinance, and make recommendations to enhance data privacy protection in light of changing social needs and interests. Stakeholders often ask me what they should do. My reply is always the same: recognize the need to cope with the privacy risks involved; understand and comply with the requirements of the Ordinance; and give due consideration and respond positively. I am sure that, with the cooperation of data users and data subjects, we can look forward to a world where personal data privacy is respected and protected and personal information can flow freely in accordance with the law.
References

1. Employer Collecting Employees' Fingerprint Data for Attendance Purpose, http://www.pcpd.org.hk/english/publications/files/report_Fingerprint_e.pdf (accessed January 30, 2010)
2. Cavoukian, A.: Fingerprint Biometrics: Address Privacy Before Deployment. Information and Privacy Commissioner of Ontario (2008)
3. Feng, J., Jain, A.K.: FM Model Based Fingerprint Reconstruction from Minutiae Template. In: Tistarelli, M., Nixon, M.S. (eds.) ICB 2009. LNCS, vol. 5558, pp. 544–553. Springer, Heidelberg (2009)
4. EU Directive 95/46/EC, Guidelines on Protection of Privacy and Transborder Flows of Personal Data
The Dangers of Electronic Traces: Data Protection Challenges Presented by New Information Communication Technologies

Nataša Pirc Musar and Jelena Burnik

Information Commissioner, Republic of Slovenia
{natasa.pirc, jelena.burnik}@ip-rs.si
Abstract. Modern information technologies are increasingly invasive of people's privacy. E-ticketing involves the processing of a significant amount of personal data, both when the cards are issued to users and every time a card is used. Personal data are processed thanks to identifiers that are associated with every subscriber and collected by the validation devices, to be subsequently stored in the databases of transport companies. Biometric data always refer to an identified, or at least identifiable, person; even when stored electronically in the form of a template, they remain personal data, as it is not impossible to identify the individual. Regarding the applicability of data protection rules to online marketing practices, a common misconception, and the position usually held by the industry, is that the processed data are anonymized because the IP address is removed and replaced by a unique identifier that differentiates between users. Data Protection Authorities in Europe hold that the data collected and processed in the course of behavioural advertising definitely represent personal data, as their core purpose is to differentiate between users.

Keywords: biometrics, electronic road toll system, data protection, privacy, behavioural targeting advertising, opt-out principle.
1 Introduction

In the last few decades, the fast development of information and communication technologies, which enable large quantities of information to be collected, stored and passed on at high speed and low cost, has given rise to concerns about information privacy [1] in many areas where individuals could previously operate relatively anonymously, such as driving on motorways or surfing the internet. The use of new technologies, where personal data processing is a necessary collateral effect, is increasing, bringing many benefits to individuals, companies and public institutions. However, the benefits of new communication channels often blur the fact that the same technologies, if used improperly and irresponsibly, present a significant threat to individuals' rights to privacy and data protection.

In the present paper we focus on some of the technological solutions and practices that have entered our lives relatively recently, have significant privacy and data protection implications, and will remain a challenge in the future. We will argue that the
benefits the technologies bring cannot and should not outweigh invasions of individuals' privacy, and that the implementation of such technologies should be assessed from the standpoint of privacy and data protection at every stage of the life cycle. To get the best out of a given technology, trade-offs between the rights of individuals and the interests of the industry are not necessarily the only way: a technology implemented in a privacy-friendly way may bring benefits to all.

We first touch upon electronic road toll systems and electronic ticketing systems for public transport and other services, where the main problem is the collection of location data that can be related to an individual and as such often amounts to excessive collection of personal data out of convenience, simply because the technology enables it. The same argument can be made in the context of biometric measures, which are often used all too carelessly in place of keys and smart cards. The third issue we address concerns recent commercial practices in the online as well as the offline world, namely profiling and behavioural advertising, where the main problems include the lack of transparency of such activities, the poor information available to the individual, and the question of explicit consent. In the last section, some possible improvements and solutions are presented.
2 The Wealth of Location Data

Technological evolution and the search for increased efficiency and cost effectiveness in managing public transport services have resulted in the growing use of innovative e-ticketing systems. Such systems work on the basis of electronic cards, usually personalised, that are used for transport services but may increasingly be used to purchase related services (for instance, commuter parking). These smart cards contain a chip that stores information, including personal information (which may include a chip identifier, the number of the user's subscription contract, and the time, date and code number of the card validation device) [2]. E-ticketing thus involves the processing of a significant amount of personal data, both when the cards are issued to users and every time a card is used. Personal data are processed thanks to the identifiers that are associated with every subscriber and collected by the validation devices, to be subsequently stored (possibly in real time) in the databases of transport companies.

From the perspective of the protection of individuals' rights, this so-called validation data, i.e. location data that can be attributed to an individual, presents a threat because it enables tracking of individual users' movements and whereabouts. The data thus collected draws a map of individuals' journeys, and if privacy concerns are not taken into account we may well end up with yet another "Big Brother" database and an increased danger of the "function creep" effect, where data previously collected for a specific legitimate purpose is used for other purposes.

To avoid the risks of excessive data collection that might lead to breaches of individuals' rights, transport companies should provide ways for customers to travel anonymously, by cash or with an anonymous e-ticket. The companies should also pay special attention to privacy policies and inform customers of what items of personal information concerning them are collected and stored, and of how such information is used. As regards, in particular, the processing of data concerning users'
movements, the information systems of transport companies should be designed and implemented by prioritizing the use of anonymous data. If (directly or indirectly) identifiable information is used, it should be stored for the shortest possible period. The company should also provide for the security of the collected data and should obtain the free and informed prior consent of customers for the use of personal data for its own marketing purposes or for marketing performed by third parties [2].

Another issue with wide data protection implications is the electronic road toll system, which also involves the processing of a significant amount of personal data, namely information on the position and journey time of the driver along his route. With the new road charging systems, drivers can pay the toll without stopping in congested traffic; in the European context, the same device would allow paying the toll on all European motorways defined as payable. The implementation of such systems is in accordance with Directive 2004/52/EC on the interoperability of electronic fee collection systems in Europe [3], which envisages the "pay as you go" principle.

As we emphasized in our Opinion on the protection of personal data in the electronic road toll system [4], in order to ensure the privacy of individuals the most appropriate system would be one in which the data needed for the toll service remain exclusively under the control of the user and are not sent to a control centre. In this case the toll would be calculated by the device in the car, while the control centre would receive only the sum of the toll due. The anonymity of the driver (i.e. his/her identity) would thus be preserved, since all the data on position and journey time would be kept under the sole control of the user. Users would have to identify themselves only if certain irregularities emerged in which identification is required: for example, when the user has not paid a correctly calculated toll fee, when the vehicle has been stolen, or when the user's toll device has broken down or malfunctions whilst driving on a charged road segment. All that needs to be ensured is that the device in the vehicle which calculates the toll works correctly on the roads on which tolls are charged. In such a system the control centre does not have data on the position of the vehicle; it only checks whether the device is operating correctly.
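A minimal sketch of this design, with hypothetical segment identifiers and tariffs: the on-board unit accumulates the toll locally and reports only the aggregate amount, never positions or timestamps.

```python
class OnBoardUnit:
    def __init__(self, tariff_cents):
        self.tariff = tariff_cents  # segment id -> fee in cents
        self.total_due = 0

    def on_segment_passed(self, segment_id):
        # Position data never leaves the vehicle; only the running sum does.
        self.total_due += self.tariff.get(segment_id, 0)

    def report_to_control_centre(self):
        # The control centre learns the amount owed, not the route driven.
        amount, self.total_due = self.total_due, 0
        return {"amount_due_cents": amount}  # no segment ids, no timestamps

obu = OnBoardUnit({"A1-exit3": 120, "A1-exit4": 90})
obu.on_segment_passed("A1-exit3")
obu.on_segment_passed("A1-exit4")
print(obu.report_to_control_centre())  # {'amount_due_cents': 210}
```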
3 Fingerprints as a Remedy for Lost Keys and Forgotten PINs

The increasing use of biometrics is another area where data protection concerns arise. Biometric measures can be spotted in numerous areas and are used for different purposes: defence, state border measures, immigration, passports, banks and financial institutions, information systems, etc. In terms of security and practicality, biometrics certainly has some advantages over other means used to establish or verify the identity of an individual. Such methods are based on items a person has in physical or mental possession (e.g. a magnetic card, or a personal password or PIN code), whereas biometrics is based on who a person is. Magnetic cards may be lost, borrowed or stolen; personal passwords may be forgotten or revealed to others; biometric characteristics, however, remain the same (at least in principle) forever; they cannot be lost or forgotten, and they are very difficult to replicate or transfer to another person.
Biometric measures by their nature represent an intrusion into an individual's privacy and dignity. Biometric data always refer to an identified or at least identifiable person; fingerprints, for example, belong solely to a certain nameable individual. And even when stored electronically in the form of a template, they are still personal data, as it is not impossible to identify the individual. As opposed to passwords or digital confirmations, biometric characteristics are not hidden; they cannot be altered, destroyed or declared invalid; we cannot cancel a fingerprint and issue a new one. Moreover, one of the basic principles of security is that we do not use the same key for everything: we use different keys for the car, the house, the office, the garage, etc. The risk of theft or abuse of any such universal key¹ would be too high. Imagine that some day we "unlock" everything using a single biometric characteristic, a fingerprint say; then we are in the same situation as if we had a single key for everything, the difference being that we cannot "change the lock", let alone all the locks [5].

Like any other technology, biometrics can be used in a manner friendly to the individual's privacy, or it may lead to serious intrusions into the individual's privacy, i.e. the "Big Brother" effect. Hence all the conditions for the use of biometrics have to be interpreted in the light of the protection of privacy and dignity. A real and justifiable reason must underlie any requirement for biometric examination or verification of identity, and it must also be substantiated that the purpose for which the controller is exercising control cannot be satisfactorily achieved using another (non-biometric) means of identification or verification that would not impinge upon the privacy or dignity of the individual. Practice, however, reveals that controllers tend to implement biometric measures, for example in the workplace, merely because it is more practical than a swipe-card system and because they want to prevent the abuse which occurs as a result of the borrowing and lending of cards between employees.

Biometrics can be used to enhance privacy if it is carried out in compliance with the basic tenets and rules governing personal data protection (and such principles as proportionality, transparency, strict application and appropriate protection). On the basis of the European Directive 95/46/EC², Slovenia's legislator decided to introduce statutory examination and assessment of proposed biometric measures, prior to any introduction into the workplace, conducted by the Information Commissioner. The employer thus has to establish why the introduction of biometric measures is really necessary, namely what the purpose and goal of such implementation are. The stated purposes and intentions should be serious, well-founded and supported by evidence; moreover, an assessment must be made as to whether the implementation of biometric measures is a necessary requisite for operational reasons, for the security of people or property, or for the protection of confidential data or business secrets. Before implementing any biometric measures, the employer is obliged to consider whether there is any other suitable non-biometric method for ascertaining or verifying identity that can satisfactorily fulfil the employer's needs or obligations.

¹ Presuming that cross-template matching is possible.
² Directive 95/46/EC of the European Parliament and of the Council of October 24, 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data.
In terms of biometrics used for border control (following the demands of Council Regulation (EC) No 2252/2004³), we are glad that we were able to collaborate with our Ministry of the Interior and to reach a consensus that biometric data will be stored only in the individual's passport and not in a central government database. Moreover, the only permitted use of the biometric data is to verify the identity of the individual whilst crossing the border. This is probably the only solution that mitigates the risk of the function-creep effect, examples of which can already be seen in some countries that have introduced central retention of biometric data and several purposes of use for such data.

³ Council Regulation (EC) No 2252/2004 of 13 December 2004 on standards for security features and biometrics in passports and travel documents issued by Member States.

4 Show Me What You Browse and I'll Tell You Who You Are

Another issue of the future is the new emerging marketing practices (online and offline) that enable the tracking of individuals' behaviour, preferences and attitudes. The collected data are used to build profiles of the individuals, who are later offered goods or services that correspond to their profile. Offline examples of such marketing practices are the loyalty cards offered by practically every retailer. Among other personal data, these cards also record purchase data, which means that a person who buys cat food regularly will most likely receive advertising messages for cat food from the retailer.

Even more alarming is the situation in the online world. Imagine: you decide to book a plane ticket to New York on the internet. Two days later, while reading your online news, an ad presents you with an attractive offer for a rental car in New York [6]. Of course this is no coincidence. It is a behavioural targeting mechanism that is being employed more and more. Behavioural targeting/advertising enables the tracking of internet user activity (web browsing, entered search terms) and the delivery of only relevant advertisements, based on the collected and analysed data. It has been used by content providers such as social networks and by search engines like Google; there have been attempts at use by internet service providers (Phorm in the UK) and also by operators of digital TV platforms.

Behavioural targeting is executed by placing a string of text called a "cookie" on the user's computer each time they visit a website. The cookie is assigned a unique cookie ID, held in the database of the website, allowing it to recognize users when they revisit the website. The danger of this practice is that it may lead to the accumulation of a significant amount of data about a single user [7]. Increasing computing power and constantly improving data mining tools make it conceivable that ever more analyses will be conducted on these personal data warehouses. This risk increases if the collected information includes sensitive data (health data, political opinions, sexual orientation, etc.). Based on that information, one could learn of a disease affecting the user or determine their political opinions.

In terms of the applicability of data protection rules to these online marketing practices, a common misconception, and the position usually held by the industry, is that the processed data are anonymized because the IP address is removed and replaced by a unique identifier (UID) to differentiate between different users. In Europe,
personal data processing is guided mainly by Directive 95/46/EC, which sets a broad definition of personal data. Following the Article 29 Working Party Opinion [8], the data collected and processed in the course of behavioural advertising definitely represent personal data, as their core purpose is to differentiate between users, so that users can be identified and served different adverts, regardless of the identifier that enables the identification (UID or IP address). With this in mind, it is necessary that companies conducting such activities comply with EU data protection and e-privacy regulations, which first of all require that data processing be lawful. A precondition of lawfulness in this case is the user's explicit consent to the tracking and to the related personal data processing. Unfortunately, the industry standard is still opt-out for the time being; however, recent initiatives and attempts to regulate this area will hopefully lead to greater transparency and lawfulness of online marketing activities. The main principles guiding the regulation of behavioural targeting and profiling should also include the provision of all relevant information to users in an accessible language and form, the protection of sensitive information, and the limited retention of collected data.
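The following sketch illustrates why swapping the IP address for a UID, as described above, does not anonymize anything: the identifier exists precisely to single out the same user across visits and to serve them different adverts. All identifiers, URLs and search terms here are invented for illustration.

```python
profiles = {}  # cookie UID -> accumulated browsing profile

def record_visit(uid, url, search_terms):
    # The UID lets the ad network recognize the same user on every visit,
    # exactly the "differentiation between users" the Opinion points to.
    profile = profiles.setdefault(uid, {"urls": [], "terms": []})
    profile["urls"].append(url)
    profile["terms"].extend(search_terms)

def pick_ad(uid):
    # Serving a different advert per UID is the behaviour that makes this
    # data personal under the broad Directive 95/46/EC definition.
    terms = profiles.get(uid, {}).get("terms", [])
    return "car-rental-new-york" if "new york flights" in terms else "generic"

record_visit("uid-7f3a", "travel.example/booking", ["new york flights"])
print(pick_ad("uid-7f3a"))  # car-rental-new-york
```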
5 Conclusion: Room for Improvement

The cases presented above, of new information communication technologies that are being implemented all too carelessly these days, are some of the challenges that Data Protection Authorities (DPAs) are and will be faced with in the years to come, with the goal of protecting individuals' fundamental rights. The invisible and invasive electronic surveillance that arises as a side product of the development and implementation of information technologies most certainly presents a threat to individuals' rights to privacy and data protection; it will therefore be the task of regulators, among others, to follow all developments closely and to assess their implications for privacy and data protection at the earliest possible stage.

A mix of regulatory approaches and tools will be required to protect individuals' rights efficiently, starting with statutory regulatory measures supported by effective enforcement of the rules. Additionally, in this fast-developing field, where statutory regulation often cannot keep up, there is a growing need to support preventive action and industry self-regulation as complements to statutory regulation. One tool of preventive action is certainly awareness-raising campaigns and initiatives, targeted at individuals as well as the industry. Supporting the use of Privacy Enhancing Technologies (PETs) is another possible improvement, alongside Privacy Impact Assessments (PIAs), which are particularly important in the case of larger projects and amendments to legislation. Such assessments are the foundation of the concept of Privacy by Design, which envisages care for the protection of privacy in all phases of a project that includes the processing of personal data. Privacy Impact Assessments are of key importance in the initial phases, because a retrospective solution is often time-consuming and requires radical alterations to the system's concept, which in itself engenders considerable costs.

Self-regulatory codes of practice can strongly support individuals in exercising their rights more efficiently, especially if they set a higher standard than the one provided for by regulation. The main advantage of self-regulatory rules is that the industry sets the rules for itself, and therefore compliance is theoretically
achieved faster; on the other hand, self-regulation without a basis in statutory regulation will not always suffice (as in the case of online behavioural marketing). A potential solution, if appropriately implemented, might also be the certification of the privacy compliance of IT products and IT-based services with data protection regulations, as is being introduced in Europe. The European Privacy Seal certificate (EuroPriSe)⁴ aims to facilitate an increase in market transparency for privacy-relevant products, an enlargement of the market for Privacy Enhancing Technologies, and, finally, an increase of trust in IT. The privacy seal may thus at the same time foster consumer protection and trust and provide a marketing incentive to manufacturers and vendors of privacy-relevant goods and services.

To wrap up: technology can and should be used in privacy-protective ways, but without effort and without appropriate timing (system design), privacy will suffer as collateral damage of technological development.

⁴ See: https://www.european-privacy-seal.eu/
References

1. Bennett, C.J.: Regulating Privacy: Data Protection and Public Policy in Europe and the United States. Cornell University Press, Ithaca (1992)
2. International Working Group on Data Protection in Telecommunications: Working Paper on E-Ticketing in Public Transport (September 2007), http://www.datenschutz-berlin.de/content/europa-international/international-working-group-on-data-protection-in-telecommunications-iwgdpt/working-papers-and-common-positions-adopted-by-the-working-group (accessed January 28, 2010)
3. Directive 2004/52/EC of the European Parliament and of the Council of April 29, 2004 on the interoperability of electronic road toll systems in the Community. Official Journal of the European Union, L 166/124 (2004)
4. Information Commissioner RS: Opinion on the protection of personal data in the electronic road toll system of July 14, 2008, http://www.ip-rs.si/fileadmin/user_upload/Pdf/razno/Opinion_on_electronic_toll_collection_Information_Commissioner_Slovenia.pdf (accessed January 28, 2010)
5. Information Commissioner RS: Guidelines regarding the introduction of biometric measures (2008), http://www.ip-rs.si/fileadmin/user_upload/Pdf/smernice/Guidelines_biometrics.pdf (accessed January 28, 2010)
6. Targeted Online Advertising. Commission Nationale de l'Informatique et des Libertés, France (2009)
7. London School of Economics and Political Science: Online Advertising: Confronting the Challenges (2009), http://www.lse.ac.uk/collections/informationSystems/research/policyEngagement/advertising_paper.pdf (accessed January 28, 2010)
8. Article 29 Data Protection Working Party: Opinion 4/2007 on the concept of personal data, http://ec.europa.eu/justice_home/fsj/privacy/docs/wpdocs/2007/wp136_sl.pdf (accessed January 28, 2010)
Privacy and Biometrics for Authentication Purposes: A Discussion of Untraceable Biometrics and Biometric Encryption

Ann Cavoukian¹, Max Snijder², Alex Stoianov¹, and Michelle Chibba¹

¹ Information and Privacy Commissioner of Ontario, 2 Bloor Street East, Suite 1400, Toronto, Ontario, Canada M4W 1A8
² European Biometrics Group, Pr. Willem van Oranjelaan 4, 1412 GK Naarden, The Netherlands

[email protected], [email protected]
Abstract. In this paper, the authors consider the prospects for embedding privacy into biometric technology through Biometric Encryption (BE), so as to deliver both privacy and security, particularly as more and more countries explore the use of biometric solutions for the purpose of authentication. The discussion arises from Commissioner Ann Cavoukian's seminal work on BE and Max Snijder's early work on anonymous biometrics and e-Identity. It touches on the distinguishing features of the algorithmic process defined by BE and the terminology that best describes this process, and it provides a springboard for further thought leadership on the policies and guidelines needed to distinguish between conventional and untraceable biometrics schemes.

Keywords: biometric keys, helper data, cancelable biometrics, positive-sum paradigm, fair information practices, e-Identity, covert/overt biometrics.
1 Introduction

The initial use of biometrics dates from a time in which the storage medium of any record was ink and paper. Under this system, samples taken from a person, so-called 'live' data, were compared by hand to a reference sample in order to determine a match. The manual nature of this system largely precluded extensive 'one-to-many' matching, in which a sample is compared against many or all stored records; instead, biometrics was generally used in 'one-to-one' matching, to confirm that a sample came from a known individual. Additionally, as the biometric data were all stored in a physical format, the records (and the privacy thereof) could be protected by relatively simple physical access controls. For fingerprints, these methods imposed a natural limit on the size of the databases, generally a maximum of 250K–350K records.

Currently, however, biometric data are stored and matched electronically, making it possible for them to be integrated into a range of large and complex ICT systems and processes
such as immigration and border control systems with millions of records. Access to these data is becoming virtual rather than physical, owing to the many access points. Digital storage makes records, at least conceptually, accessible from anywhere in the world, making it difficult to control and verify authorized access. The digitization of data also broadens the usage possibilities, allowing for easy database-wide matching against live samples, and the large size of the databases makes it increasingly attractive to cross-match the records for multiple or secondary purposes. Such developments make digitally stored biometric data significantly more vulnerable to manipulation, theft and loss of control. The need for a securely stored, non-vulnerable means of verification/authentication is becoming a key topic in the deployment of biometric systems for both government and commercial services [1, 2]. This is the context for this vision paper, which touches on the need for secure storage and privacy protection for biometric reference data.
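The shift the authors describe can be illustrated in a few lines of Python: once templates are digital records, a database-wide scan or a cross-match between two databases is trivial to write. The similarity measure below is a toy stand-in for a real matcher, and the thresholds are arbitrary.

```python
def score(t1, t2):
    # Fraction of agreeing feature values (real matchers are far more elaborate).
    return sum(a == b for a, b in zip(t1, t2)) / len(t1)

def identify(database, live_template, threshold=0.75):
    # One-to-many: compare the live sample against every enrolled record.
    best_id = max(database, key=lambda rid: score(database[rid], live_template))
    return best_id if score(database[best_id], live_template) >= threshold else None

def cross_match(db_a, db_b, threshold=0.75):
    # Linking records across databases collected for different purposes:
    # the secondary use that digitization makes so attractive.
    return [(ida, idb) for ida, ta in db_a.items()
                       for idb, tb in db_b.items()
                       if score(ta, tb) >= threshold]
```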
2 State of the Art: Biometric Encryption (BE)

There is a class of privacy-enhancing technologies that seeks to provide better security and privacy protection than conventional biometrics by irreversibly transforming the biometric data provided by the user. These are referred to as Untraceable Biometrics [3], of which Biometric Encryption (BE) is a leading approach. It is this emerging area of privacy-enhancing biometric technologies that challenges the traditional zero-sum paradigm (privacy must be traded for security, and vice versa) by making possible the enhancement of both privacy and security, in a positive-sum manner.

BE is a process that securely binds a PIN or a cryptographic key to a biometric, or generates a key from the biometric; in either case, neither the key nor the biometric can be retrieved from the stored reference data. Instead, the key is recreated only if the correct live biometric sample is presented on verification. BE is sometimes mistakenly viewed as a technique for encrypting biometric images or template data. This is not the case: with BE, there is no storage (and therefore no potential for decryption or loss) of a biometric image or conventional biometric template. Research on BE started in the mid-1990s and is now sufficiently mature to warrant consideration as a new facet of biometric verification and authentication schemes.

The authors of this paper agree that the benefit of BE lies not only in the manner in which the biometric is rendered anonymous within an authentication scheme, but also in the fact that it can be rendered untraceable. An illustration of this extension from anonymous to untraceable is as follows. Consider a transaction that may require biometric verification or authentication, for instance the presentation of a biometrically protected ticket for an event. It is possible, with conventional biometrics, to show anonymously that the ticket holder is indeed the rightful owner of the ticket, and that the ticket has not been forged. If these tickets were to provide access to several events, however, an individual could be traced through the comparison of biometric templates, even while remaining anonymous at each individual event. With BE, however, no biometric template is stored. Instead, separate, individual and unconnected keys are generated for each
event; though they have been derived from, or encrypted by, the same biometric, they are randomly generated and therefore completely unlinkable and untraceable. Note that the keys are not stored either; only the hashed versions of the keys are stored.
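To make this key-binding idea concrete, the sketch below follows the general shape of the well-known "fuzzy commitment" style of construction sometimes used for BE: a random key is bound to the biometric through an error-correcting code, and only the helper data plus a hash of the key are stored. This is our own minimal illustration, not the scheme of any particular BE product: a toy repetition code stands in for a real error-correcting code, and short random bit-strings stand in for real biometric features.

```python
# Minimal fuzzy-commitment-style sketch of BE key binding (illustrative only:
# a toy repetition code replaces a real error-correcting code, and random
# bit-strings replace real biometric feature vectors).
import hashlib
import secrets

REP = 5  # each key bit is repeated REP times (toy error-correcting code)

def encode(key_bits):
    """Repetition-encode the key: each bit becomes REP copies."""
    return [b for b in key_bits for _ in range(REP)]

def decode(code_bits):
    """Majority-vote each block of REP bits back into one key bit."""
    return [int(sum(code_bits[i:i + REP]) > REP // 2)
            for i in range(0, len(code_bits), REP)]

def enroll(biometric_bits):
    """Bind a fresh random key to the biometric; store only helper data
    and a hash of the key; neither reveals the key or the biometric."""
    key = [secrets.randbelow(2) for _ in range(len(biometric_bits) // REP)]
    helper = [c ^ b for c, b in zip(encode(key), biometric_bits)]
    return helper, hashlib.sha256(bytes(key)).hexdigest()

def verify(helper, key_hash, fresh_bits):
    """Recreate the key from a fresh, slightly noisy sample; release it
    only if its hash matches the stored hash, else report failure."""
    candidate = decode([h ^ b for h, b in zip(helper, fresh_bits)])
    if hashlib.sha256(bytes(candidate)).hexdigest() == key_hash:
        return candidate
    return None  # failure message

enrolled = [secrets.randbelow(2) for _ in range(40)]
helper, key_hash = enroll(enrolled)
noisy = list(enrolled)
noisy[3] ^= 1  # one flipped bit simulates sensor noise
assert verify(helper, key_hash, noisy) is not None  # key is released
```

As in the text, only the helper data and the hashed key are stored; generating a different random key for each event yields stored records that cannot be linked through the biometric.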
3 Ann Cavoukian on the Term Untraceable Biometrics

Biometric data, such as fingerprints and facial images, are unique and permanent identifiers that are widely understood to be private, sensitive forms of personal information, worthy of legal protection. But some biometric data are also arguably semi-public information — our faces may be viewed everywhere we go, and our fingerprints are left behind on everything we touch. However, the ability of biometric information to link a variety of personal data that would otherwise have no linkages makes biometric data privacy-sensitive. This dual nature of biometric data makes it ill-suited for use as secret passwords (e.g., for access control) and for storage in centralized, networked databases, where it can be misused in many ways that can profoundly impact the individual. Yet identification and verification are the primary purposes for which biometric data are being used today. Both depend on the storage and routine matching of biometric data. As the use and deployment of biometrics becomes more widespread across society, so will the privacy and security risks associated with the growing collection, use, disclosure and retention of personal biometric data. These risks include: loss of individual control over one’s personal information; unauthorized cross-matching; unauthorized secondary uses; the prospect of surveillance, profiling and discrimination based upon biometric data; and the loss, theft, misuse and abuse of this data resulting in identity fraud, identity theft and other negative impacts on the individual. At stake is the trustworthiness of biometric systems and the behaviour of organizations that collect biometric data. Moreover, it has been demonstrated in recent publications [4, 5] that a biometric image can be reconstructed with high confidence from a fingerprint minutiae template. This image is known as a “masquerade” image since it is not an exact copy of the original but, nevertheless, will likely fool the system if submitted. See our paper [6], which discusses in more detail the privacy repercussions of this and other new developments in fingerprint biometrics. Techniques such as BE can directly address most of these risks at the source, simply because an individual’s actual biometric data need never be disclosed, collected or retained to begin with. BE embodies three core privacy practices:
• Data minimization: no retention of biometric images or templates, minimizing the potential for unauthorized secondary uses, loss, or misuse;
• Maximum individual control: individuals may restrict the use of their biometric data to the purpose intended, thereby avoiding the possibility of unauthorized secondary uses (function creep); and
• Improved security: authentication, data transmission and data security are all enhanced.
Instead of presenting actual biometric data that can be lost or misused, BE transforms the individual’s primary biometric image into a virtually unlimited — and non-reversible — range of unique single-purpose data strings that can be used without fear of loss, correlation or misuse. These strings can then serve as passwords, encryption keys, or as identifiers for use anywhere that traditional biometrically-enabled systems are presently used (e.g., authentication, access control). Better still, if one’s unique BE identifier ever becomes compromised in any way, BE techniques allow it to be easily revoked and a completely new identifier generated from the same biometric, much like a new credit card number is issued when an old card is lost or stolen. With traditional biometrics, it is impossible to change one’s fingerprints or face if one’s biometric data are compromised. Using BE, this problem is mitigated because the original biometric data are never retained but only used to generate other identifiers. Thus, security is vastly improved. I have been a long-standing proponent of BE technologies since the mid-1990s and devote considerable time and effort to promoting BE. My office continues to press for strong privacy protections in the development and deployment of interoperable biometric technologies.

3.1 Untraceable Biometrics (UB)

Untraceable Biometrics (UB) is a term that my office developed [3] to define this class of emerging privacy-enhancing technologies, such as BE, that seek to irreversibly transform the biometric data provided by a user into a dataset that no longer contains any generic information about the original biometric source. A UB dataset bears no traces of the initial biometric sample and has therefore rendered the original biometrics untraceable. Matching also takes place in the untraceable domain. A UB dataset does not need to be reversed into a classic biometric template or image, so the raw biometric image and the classic template are not exposed to the system, apart from the sensor at the very front end. Summarizing, UB has five distinguishing features:
• There is no storage of a biometric image or conventional biometric template;
• The original biometric image/template cannot be recreated from the stored information, thereby rendering it untraceable;
• A large number of untraceable templates for the same biometric may be created for different applications;
• Untraceable templates from multiple applications cannot be linked together; and
• An untraceable template may, however, be renewed or cancelled.
These features embody standard fair information practices, providing for the time-honoured privacy principles of user control, data minimization, and a high degree of data security. UB includes two major groups of emerging technologies: BE and Cancelable Biometrics (CB). BE technologies securely bind a digital key to a biometric, or generate a key from the biometric, so that neither the key nor the biometric can be retrieved from the stored information (the latter is often called “helper data”). The key
is recreated only if the correct biometric sample is presented upon verification; therefore, the output of BE verification is either a key or a failure message. On the other hand, CB technologies apply a transform (usually kept secret) to the original biometric and store the transformed template. The transform can be either invertible or, preferably, not. On verification, the same transform is applied to a fresh biometric sample, and the matching is done between the two transformed templates. The output of CB verification is a Yes/No response, as in conventional biometrics.

Although we had initially considered using the term “Anonymous Biometrics” for BE, we favoured instead the term “untraceable.” Post 9/11, the term “anonymous” is often associated with a lack of accountability (e.g., an anonymous user makes defamatory comments on a website, or, worse, a terrorist organization abuses anonymity to raise funds or for other illicit purposes). In addition, BE can be used in both anonymous and identifying one-to-one modes. Even in the latter case, however, BE still provides significant privacy and security benefits, such as data minimization (no way to reconstruct the original biometrics) and the ability to change/revoke one’s template. Conventional biometrics can also operate anonymously, if no personal data are attached to the biometric template. However, templates appearing in different databases can be linked together, making this kind of anonymity largely illusory.

We agree that biometric “traceability” may refer both to leaving physical traces (e.g., latent fingerprints) and to tracing back to personal data. The second meaning of traceability, which we have used, i.e., the one referring to data connectivity, appears to be the more commonly used definition. Clearly, the terminology in this area has not been well established. For example, the “Harmonized Biometric Vocabulary” ISO standard [7] does not address these issues. Recent advances in BE allow, at least in principle, biometric matching in the encrypted domain [8, 9]. In fact, these works bridge BE and modern cryptography to greatly enhance data security and privacy protection. It is interesting to note that there has been some work done on template-free biometric key generation [10]. There is a company whose owner claims to have invented “traceless” biometrics — a 4-digit PIN is extracted from the biometric without storing any helper data (i.e., there are no traces in the storage).

For the physical traceability problem, we prefer to use the terms “covert/overt” biometrics, as defined by NIST [11] or IBG [12]. “Overt” means that end users are aware that biometric data are being collected and used, and acquisition devices are in plain view. Indeed, leaving a physical trace (a latent fingerprint or a DNA sample) does not necessarily constitute the highest risk to privacy/security. It is much easier to take a person’s photo than to lift a latent fingerprint. Stored photos are also widely available in digital format, e.g., on Facebook. In other words, a facial image is much more accessible than a fingerprint or DNA. Moreover, the technology is constantly improving, such that it has become possible to take pictures of one’s iris covertly, at a distance. It will likely be possible, in the near future, to also covertly obtain a finger/palm vein pattern. Thus, there will ultimately be no “traceless” biometrics that could not be captured covertly.
This problem should be considered in the application context, i.e., whether the system is designed to operate covertly or overtly.
We agree that untraceable biometrics no longer relate exclusively to “who you are” (as with conventional biometrics); rather, “what you have” (e.g., a smart card, or biometric/encryption key storage media) is now tied intrinsically to “who you are.” UB elevates token- and password-based systems to a new, higher level of security without compromising privacy — positive-sum, not zero-sum.
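To make the BE/CB distinction of Section 3.1 concrete, here is a toy sketch of the CB branch: an application-specific secret transform is applied to the template, matching happens in the transformed domain, the output is Yes/No, and revocation amounts to re-enrolling under a new secret. The seeded permutation-plus-folding transform and all names are our own illustrative inventions, not a vetted CB scheme.

```python
# Toy sketch of Cancelable Biometrics (CB): a secret, per-application
# transform; matching in the transformed domain; Yes/No output.
import random

def cb_transform(template, app_secret):
    """Permute features under an application-specific secret, then fold
    adjacent pairs (many-to-one, so the template cannot be recovered)."""
    rng = random.Random(app_secret)
    order = list(range(len(template)))
    rng.shuffle(order)
    shuffled = [template[i] for i in order]
    return [shuffled[i] + shuffled[i + 1] for i in range(0, len(shuffled), 2)]

def cb_verify(stored_ref, fresh_template, app_secret, tol=3.0):
    """Yes/No response: compare transformed templates via L1 distance."""
    probe = cb_transform(fresh_template, app_secret)
    return sum(abs(a - b) for a, b in zip(stored_ref, probe)) <= tol

template = [4.1, 2.0, 7.3, 1.5, 0.9, 3.2, 5.5, 6.0]  # stand-in features
ref_a = cb_transform(template, "event-ticket-app")
assert cb_verify(ref_a, [v + 0.1 for v in template], "event-ticket-app")
# A second application enrolls the same biometric under its own secret;
# the two stored references are computed under different permutations and
# are therefore unlinkable. Revoking a reference simply means re-enrolling
# with a new secret, without ever changing the underlying biometric.
ref_b = cb_transform(template, "loyalty-card-app")
```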
4 Max Snijder on the Term Anonymous Biometrics

Historically, we associate biometrics with on-site identification: taking fingerprints with ink, storing them on paper and then relying on human inspection of these latent biometric identifiers at a physical, usually well-protected, secure location. However, in the wake of the information technology revolution of the 1990s, we are now moving from a “physical” environment to a new “virtual” domain where biometrics exist in a digital universe. It is now possible to store reference data at any location (worldwide) and to search millions of records for matches in a matter of seconds.

Traditionally, biometrics have entailed the use of fingerprints to establish the identity of a person about whom there is little or no information, and this is still the most common use of biometrics around the world. Nonetheless, biometric technology is evolving into its next phase – “verification” – and in the near future we will be seeing this at border control checkpoints with the increased use of electronic passports. While the biometric data will still be stored at a protected physical location and controlled by the rightful owner of the passport, the passport holder’s identity will be verified by a one-to-one check of the biometrics stored in the protected chip. This is a situation characterized by the physical aspects of the process: you have to cooperate by offering your passport to a border control officer. Yet the link between the biometrics and the identity of the person is less evident. The passport’s biometric is used indirectly to verify identity, not to establish identity. In other words, the biometric reference data are made available to facilitate a single, one-off check but are still retained by the passport holder. As long as the person keeps the reference data protected, covert verification will not be possible, as there are no reference data to compare with.

Even more encouraging, the next evolutionary step after “verification” involves the use of biometrics that have absolutely no link to identity at all – the sole purpose being authentication only. Even in the most extreme case, a service provider wouldn’t need to know which biometric modality and/or whose biometric data are being used. Why? Because biometrics can be converted into secret keys. But how secret can biometrics be? A face is public information. Using a face finder on a Web search engine can reveal a person’s identity within minutes, if not seconds. The same applies to typing in a name, which could generate multiple pictures of a face, including plenty of personal information. As for fingerprints, these can be traced and “stolen” because they can be left anywhere (e.g., on doorknobs, drinking glasses, keypads, etc.). There are now enough freely available databases where unintended links to a given identity can be established. New solutions are required for this problem. The choice of biometrics used (traceable? detectable?) is one of them, as well as the way that
reference data are stored and the performance level of live detection by sensors (automated? human supervision?). Some biometrics, such as the human iris, obviously leave no fingerprint-like trace and are very difficult to “spoof” or fake. The newly available optical vein pattern recognition technologies are another good example of a “difficult-to-spoof” technology. But the matter of storing reference data and performing the matching function still remains. Currently, we need to revert the biometric reference from whichever encrypted domain back into the original image or biometric template in order to use it for matching. That means that the original biometric is exposed not only to the sensor, but also to the wider system behind that sensor – and that is where user control starts to become questionable.

New privacy-enhancing technologies will not only strengthen the protection of biometric data but also make it possible for their owner to control the storage and matching of his or her biometric data. From a single biometric (e.g., one fingerprint) these technologies can generate a non-biometric derivative and multiple unique (disposable) biometric “keys” for carrying out the matching function. This in turn allows for biometrics to be revocable (in case a key is lost or stolen) and to be protected against being used as “leads” to other databases. This “biometric encryption” introduces new horizons for the correct and beneficial use of biometrics in more complex operating environments such as the Internet and large federated systems. Biometric encryption has the potential to make both biometric data and the matching function anonymous by not exposing an individual’s original biometric information to fraud or manipulation. Technology that should accommodate this function is already commercially available. Indeed, we can now clearly see the two emerging faces of biometrics: “identifying” biometrics for establishing or verifying identity, where the link to identity is key; and “un-identifying” biometrics — or anonymous biometrics — for authentication only, where a link to identity is strictly avoided. Given that these two “faces” will exist in parallel to one another and could easily get mixed up with each other, a new level of thinking about biometrics is needed: one that defines what policies and guidelines are required so that both kinds of technology can be used, without one compromising the other.

4.1 Anonymous Biometrics

Anonymous biometrics, in my view [1], refers to a system where the biometric data are not connected to any personal data and where, furthermore, the biometric data are prevented from being taken to another system that is capable of connecting them to personal information. This enables biometrics for authentication only, e.g., by using a trusted third party (TTP) that authenticates the biometrics at the request of a user or a service provider. The only connection will be a transaction number. This is a crucial element for using biometrics in electronic environments in a privacy-protective way (e.g., for e-services), and it requires special management of the biometric data, from capturing to storage, retrieval and matching. Anonymity may very well be associated with a lack of accountability, but, on the other hand, privacy is also considered a basic human right. Untraceable biometrics is a means to protect that right.
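The TTP pattern just described can be sketched as follows. This is our own hypothetical rendering of the flow, under the assumption that the TTP holds only untraceable reference data keyed by an opaque enrolment handle and answers with nothing more than a transaction number and a Yes/No outcome; all names and the stub matcher are invented for illustration.

```python
# Hypothetical sketch: biometric authentication through a trusted third
# party (TTP), where the only link between the parties is a transaction
# number; no personal data and no raw biometric leave the TTP.
import secrets

class TrustedThirdParty:
    def __init__(self):
        self._refs = {}  # opaque enrolment handle -> untraceable reference

    def enroll(self, untraceable_ref):
        handle = secrets.token_hex(8)  # carries no personal data
        self._refs[handle] = untraceable_ref
        return handle

    def authenticate(self, handle, live_probe, match_fn):
        """Return only (transaction_number, Yes/No)."""
        txn = secrets.token_hex(8)
        ok = handle in self._refs and match_fn(self._refs[handle], live_probe)
        return txn, ok

ttp = TrustedThirdParty()
handle = ttp.enroll(untraceable_ref="<BE helper data>")
# The service provider learns the outcome of transaction `txn` and nothing
# about who was authenticated or which biometric modality was used.
txn, ok = ttp.authenticate(handle, "<live sample>",
                           match_fn=lambda ref, probe: True)  # stub matcher
```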
Some observations/remarks on the term Untraceable Biometrics:
• Leaving physical traces is not the key element. Rather, the prime factor is the extent to which people are aware that they are subjected to a biometric check;
• With an eye on ongoing technical developments, any biometric characteristic can be taken covertly, either physically (e.g., a latent fingerprint) or with contactless sensors (cameras, X-ray, etc.). In that view, taking a fingerprint from a person’s drinking glass without his or her knowledge is essentially the same as covertly taking an image of a person’s iris. So the protection will have to lie in the storage;
• Biometrics that are used for establishing/verifying identity have the purpose of connecting the biometrics to an identity, either to the root identity or to an alias. If stored (or encrypted) biometrics are connected to personal data, these personal data can lead to the original biometrics (e.g., through Facebook or through conventional systems that use biometrics for identification, like the central repository of a public institution). So, in order to be totally untraceable, there should be no link to personal data;
• If a biometric reference is untraceable and not connected to any personal data, the biometric is no longer “who you are”, but rather “what you have” and “what you know” (i.e., a secret key). The secret element will be stronger if it is not known which biometric data are being used for a specific application. Instead of knowing in advance that a left or right index finger has been used (as is the case with biometric passports), it can be any finger or any other biometric modality or combination thereof. The only one who knows is the person who was enrolled in the first place.
So, in order to use anonymous biometrics for authentication only, we have to make sure that:
1. it remains a secret which biometric modality/-ties is/are being used (untraceable reference data);
2. the stored reference data do not contain any traces of the original biometric sample or template;
3. the stored reference data cannot be reversed by any means into the original biometric image or template;
4. there is no personal information linked to the stored biometrically generated reference data.
UB will facilitate points 1, 2 and 3, while the fourth point needs to be facilitated by general ICT means and supporting policies. For the first point, the user must also have access to sensors for multiple modalities, which can be chosen freely and in any combination. This technical infrastructure is not yet available, although sensors already exist which combine face/iris and finger/vein. If multiple modalities can be freely combined in any order, it will be hard to find out which biometrics exactly have been used for enrolment.
5 Conclusion

We can no longer accept the status quo, in which a zero-sum approach pits security against privacy. We need to change the paradigm to a positive-sum one, in which privacy and security co-exist. We have initiated this discussion by expressing a vision of implementing truly untraceable biometrics through Biometric Encryption technology, and we encourage others to consider further thought leadership in this area.
References

1. Snijder, M.: Biometrics and (e-)Identity. In: Policy Forum on Outsourcing of Systems for Detection, Identification and Authentication, London, February 9 (2009), http://www.hideproject.org/downloads/HIDE_PF-OutsourcingPresentation_Max_Snijder-20090206.pdf, http://www.bioethics.ie/uploads/docs/Biometrics_Conference_Final.pdf
2. Cavoukian, A., Stoianov, A.: Biometric Encryption: A Positive-Sum Technology that Achieves Strong Authentication, Security AND Privacy (March 2007), http://www.ipc.on.ca/images/Resources/up-1bio_encryp.pdf
3. Cavoukian, A., Stoianov, A.: Biometric Encryption: The New Breed of Untraceable Biometrics. In: Boulgouris, N.V., Plataniotis, K.N., Micheli-Tzanakou, E. (eds.) Biometrics: Fundamentals, Theory, and Systems, ch. 26, pp. 655–718. Wiley-IEEE Press (2009)
4. Cappelli, R., Lumini, A., Maio, D., Maltoni, D.: Fingerprint Image Reconstruction from Standard Templates. IEEE Transactions on Pattern Analysis and Machine Intelligence 29(9), 1489–1503 (2007)
5. Feng, J., Jain, A.K.: FM Model Based Fingerprint Reconstruction from Minutiae Template. In: Tistarelli, M., Nixon, M.S. (eds.) ICB 2009. LNCS, vol. 5558, pp. 544–553. Springer, Heidelberg (2009)
6. Cavoukian, A.: Fingerprint Biometrics: Address Privacy Before Deployment. IPC (November 2008), http://www.ipc.on.ca/images/Resources/fingerprint-biosys-priv.pdf
7. “Harmonized Biometric Vocabulary”. ISO/IEC JTC 1/SC 37/WG 1
8. Bringer, J., Chabanne, H.: An Authentication Protocol with Encrypted Biometric Data. In: Vaudenay, S. (ed.) AFRICACRYPT 2008. LNCS, vol. 5023, pp. 109–124. Springer, Heidelberg (2008)
9. Stoianov, A.: Cryptographically Secure Biometrics. In: Proceedings of Biometric Technology for Human Identification VII, SPIE International Symposium on Defense, Security and Sensing, Orlando, FL (April 2010)
10. Sheng, W., Howells, G., Fairhurst, M., Deravi, F.: Template-Free Biometric-Key Generation by Means of Fuzzy Genetic Clustering. IEEE Transactions on Information Forensics and Security 3(2), 183–191 (2008)
11. http://www.biometrics.gov/Documents/glossary.pdf
12. http://www.bioprivacy.org/bioprivacy_main.htm
From the Economics to the Behavioral Economics of Privacy: A Note

Alessandro Acquisti

Heinz College, Carnegie Mellon University, Pittsburgh, PA 15213, USA
[email protected]
Abstract. In this brief note, I summarize previous work and offer an overview of two related fields of research: the economics and the behavioral economics of privacy. The economics of privacy studies the trade-offs associated with the protection or revelation of personal information. However, theoretical models in this area sometimes make overly restrictive assumptions about individuals’ decision making. The behavioral economics of privacy attempts to address such shortcomings. In doing so, it can help us gain a better understanding of how we value, and make (sometimes biased) decisions about, our personal data.

Keywords: Privacy, Information Security, Economics, Behavioral Economics, Soft Paternalism, Nudging.
1 Introduction

In recent years, researchers in academia and decision-makers in the public and private sectors have come to appreciate the role of incentives and trade-offs in privacy and information security problems. The economics of privacy [21, 22, 26, 11] and the economics of information security [13] have thus risen to relevance. Broadly speaking, the economics of privacy attempts to understand, and sometimes measure, the trade-offs associated with the protection or revelation of personal information. Many of the models used in this area assume rational and economically sound individual decision making. It is legitimate to assume, after all, that privacy- and security-sensitive behavior must be self-interested and rational: systems administrators may not patch their companies’ computer systems, even after new vulnerabilities are discovered, when they fear that patching may prove more damaging to those systems than the vulnerabilities themselves; Internet users may not adopt even freely available privacy technologies to protect their data simply because they do not trust their efficacy. However, a growing body of evidence from behavioral and experimental economics has demonstrated that human beings are prone to systematic decision-making biases and judgment errors in scenarios involving uncertainty and risk. Several of these errors stem from lack of information, insight, or computational ability [19], but many others relate to problems of self-control and limited self-insight, in which case knowledge “is insufficient to motivate behavior change” [19, p. 5]. The complexity of making correct decisions about hiding or sharing personal data in modern information
economies is such that theories of cognitive and behavioral biases proposed in the behavioral economics literature (e.g., [28, 16]) are likely to be useful in understanding privacy and security decision making.
2 Applying Behavioral Economics to the Study of Privacy

Indeed, seemingly contradictory (and sometimes perhaps even self-damaging) behaviors have been documented by researchers in the fields of privacy and information security. With the commercial success of the Internet, privacy has, in theory, become a popular issue for consumers, causing growing concerns documented in several surveys [1]. In practice, however, and with few key exceptions, many technological solutions to privacy problems have met only lukewarm reactions in the marketplace [15]; self-regulatory interventions (such as privacy policies, difficult and time-consuming to understand) have failed to provide a solution [14, 20]; and individuals have engaged in new forms of information sharing through Web 2.0 technologies such as online social networks and blogs. As noted elsewhere [3], anecdotal evidence, surveys, and experiments have shown that even privacy-concerned individuals are willing to trade off privacy for very small benefits, or bargain away the release of very personal information in exchange for relatively small rewards [25].

These apparent dichotomies stem from a combination of incomplete information [12] about risks, consequences, or solutions to privacy problems; bounded cognitive power [24]; and various systematic deviations from the theoretically rational decision process: from hyperbolic discounting and immediate gratification to status-quo bias and loss aversion [16]. In recent years, together with other researchers, we have started investigating the underlying processes which influence privacy- and security-relevant behavior, and their possible fallacies and biases (see, for instance, [2, 8, 3, 9, 4, 17, 7, 10, 18, 5, 23]). Using the lenses of behavioral economics and decision research to understand how people make privacy and security decisions can uncover sometimes surprising outcomes. For instance, in some of our studies we have found that “individuals are less likely to provide personal information to professional-looking sites than unprofessional ones, or when they are given strong assurances that their data will be kept confidential” [6]. Some of our research also suggests that individuals “assign radically different values to their personal information depending on whether they are focusing on protecting data from being exposed, or selling away data which would be otherwise protected” [6].

The research agenda in this area seems rich and promising. It involves not just investigating privacy and security decision making through the lenses of behavioral economics, but also informing the design of privacy and security technologies through behavioral experiments, in order to anticipate and mitigate potential biases. The first objective relates to foundational research about the drivers of human behavior. The second objective is informed by the first, and relates to design research: using the improved understanding of the processes underlying security and privacy behavior to build better technologies and assist human decisions. Such approaches may include “soft” paternalistic solutions [19, 27]: systems and policies that do not
limit individual freedom and choice, but “nudge” users and consumers towards certain behaviors that the users themselves have claimed to prefer (or which sound empirical evidence has demonstrated to be preferable) from a privacy perspective.
3 Conclusion

Combining economics and behavioral economics to understand privacy valuations and decision making can help us not just understand, but also assist, security- and privacy-sensitive behaviors, making sense of actions and outcomes that may sometimes appear puzzling, or even paradoxical. Understanding how individuals navigate the complexity of information systems in modern economies is akin to a public health issue: results in this area can, therefore, inform the work of privacy and security technologists, providing them with insights and methods not simply to design better interfaces for their systems but to actually revisit the very strategies and assumptions underlying those systems. In addition, by exposing conditions under which technology alone may not be sufficient to assist decision making, research in this area can provide tools and opportunities for analysis and management to policy makers.
References

1. Ackerman, M., Cranor, L., Reagle, J.: Privacy in E-Commerce: Examining User Scenarios and Privacy Preferences. In: Proceedings of the ACM Conference on Electronic Commerce (EC 1999), pp. 1–8 (1999)
2. Acquisti, A.: Privacy and Security of Personal Information: Economic Incentives and Technological Solutions. In: Camp, J., Lewis, S. (eds.) The Economics of Information Security. Kluwer, Dordrecht (2004)
3. Acquisti, A.: Privacy in Electronic Commerce and the Economics of Immediate Gratification. In: Proceedings of the ACM Conference on Electronic Commerce (EC 2004), pp. 21–29 (2004)
4. Acquisti, A.: Ubiquitous Computing, Customer Tracking, and Price Discrimination. In: Roussos, G. (ed.) Ubiquitous Commerce, pp. 115–132. Springer, Heidelberg (2005)
5. Acquisti, A.: Identity Management, Privacy, and Price Discrimination. IEEE Security & Privacy 6(2), 46–50 (2008)
6. Acquisti, A.: Nudging Privacy: The Behavioral Economics of Personal Information. IEEE Security & Privacy, 82–85 (November–December 2009)
7. Acquisti, A., Gross, R.: Imagined Communities: Awareness, Information Sharing, and Privacy on the Facebook. In: Danezis, G., Golle, P. (eds.) PET 2006. LNCS, vol. 4258, pp. 36–58. Springer, Heidelberg (2006)
8. Acquisti, A., Grossklags, J.: Losses, Gains, and Hyperbolic Discounting: An Experimental Approach to Information Security Attitudes and Behavior. In: Camp, J., Lewis, S. (eds.) The Economics of Information Security (2004)
9. Acquisti, A., Grossklags, J.: Privacy and Rationality in Decision Making. IEEE Security & Privacy, 24–30 (January–February 2005)
10. Acquisti, A., Grossklags, J.: What Can Behavioral Economics Teach Us about Privacy? In: Digital Privacy: Theory, Technologies and Practices. Auerbach Publications (Taylor and Francis Group), Abington (2007)
11. Acquisti, A., Varian, H.R.: Conditioning Prices on Purchase History. Marketing Science 24(3), 1–15 (2005)
12. Akerlof, G.A.: The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism. Quarterly Journal of Economics 84(3), 488–500 (1970)
13. Anderson, R.: Why Information Security Is Hard: An Economic Perspective. In: Proceedings of the 17th Annual Computer Security Applications Conference, p. 358 (2001)
14. Anton, A., Earp, J., Bolchini, D., He, Q., et al.: The Lack of Clarity in Financial Privacy Policies and the Need for Standardization. IEEE Security & Privacy 2(2), 36–45 (2004)
15. Brunk, B.D.: Understanding the Privacy Space. First Monday 7(10) (2002)
16. Camerer, C., Lowenstein, G.: Behavioral Economics: Past, Present, Future. In: Advances in Behavioral Economics, pp. 3–51. Princeton University Press, Princeton (2003)
17. Gross, R., Acquisti, A.: Information Revelation and Privacy in Online Social Networks. In: Workshop on Privacy in the Electronic Society, WPES 2005 (2005)
18. Grossklags, J., Acquisti, A.: When 25 Cents Is Too Much: An Experiment on Willingness-to-Sell and Willingness-to-Protect Personal Information. In: Workshop on the Economics of Information Security, WEIS 2007 (2007)
19. Loewenstein, G., Haisley, E.: The Economist as Therapist: Methodological Ramifications of ‘Light’ Paternalism. In: Perspectives on the Future of Economics: Positive and Normative Foundations. Oxford University Press, Oxford (2007)
20. Milne, G.R., Culnan, M.J.: Strategies for Reducing Online Privacy Risks: Why Consumers Read (or Don’t Read) Online Privacy Notices. Journal of Interactive Marketing 18(3), 15–29 (2004)
21. Posner, R.A.: An Economic Theory of Privacy. Regulation, 19–26 (May–June 1978)
22. Posner, R.A.: The Economics of Privacy. American Economic Review 71(2), 405–409 (1981)
23. Romanosky, S., Telang, R., Acquisti, A.: Do Data Breach Disclosure Laws Reduce Identity Theft? In: Workshop on the Economics of Information Security, WEIS 2008 (2008)
24. Simon, H.A.: Models of Bounded Rationality. MIT Press, Cambridge (1982)
25. Spiekermann, S., Grossklags, J., Berendt, B.: E-Privacy in 2nd Generation E-Commerce: Privacy Preferences versus Actual Behavior. In: Proceedings of the ACM Conference on Electronic Commerce (EC 2001), pp. 38–47 (2001)
26. Stigler, G.J.: An Introduction to Privacy in Economics and Politics. Journal of Legal Studies 9, 623–644 (1980)
27. Thaler, R.H., Sunstein, C.R.: Nudge: Improving Decisions About Health, Wealth, and Happiness. Yale University Press, New Haven (2008)
28. Tversky, A., Kahneman, D.: The Framing of Decisions and the Psychology of Choice. Science 211, 453–458 (1981)
Legislative and Ethical Questions regarding DNA and Other Forensic "Biometric" Databases

Elazar (Azi) Zadok

Former Director of the Division of Identification and Forensic Science (DIFS), The Israel Police
[email protected]
Abstract. Forensic science is gaining importance in criminal justice, owing to the development of new sensitive and accurate scientific tools and the construction of large computerized databases. Although very effective, these databases raise many legislative and ethical concerns. In the case of DNA, the concerns run deeper, since a sample contains sensitive genetic information about its owner that is not necessarily needed for identification. This paper focuses on issues related to the collection, utilization and retention of DNA samples and profiles of legally innocent populations, illustrated by a case study in which a DNA sample given voluntarily for a murder investigation successfully solved three unrelated rape cases.
1 Introduction

Evidence derived by forensic methods is gaining more and more importance in criminal justice. It is produced using objective scientific methods from samples taken from suspects and victims or from exhibits found at the crime scene. The advantages of forensic evidence over evidence derived from human sources, i.e., by interrogation or by diverse intelligence means such as wiretapping, are clear. Courts are seeking all kinds of forensic evidence, provided its scientific basis is sound and the competence of the laboratories producing it is unequivocally proven. There is nothing like a single fingerprint at the crime scene, whose owner fails to explain how it got there, to significantly enhance the chances of conviction. On top of that, two very significant developments occurring during the last few decades have further enhanced the importance and strength of forensic evidence in criminal justice: (a) scientific detection and identification methods have become much more sensitive, accurate and multi-disciplinary: it is now possible to determine a DNA profile from only a few cells, and the chemical composition of an unknown material from a minute or even invisible amount of it; (b) technologies related to computer science, databasing, and the retrieval and comparison of information have become extremely fast and accurate. It is possible today to establish huge databases and efficiently search them to compare data.
These developments have put in the hands of society efficient tools to fight modern crime patterns across their wide range of occurrence. Modern forensic science laboratories can handle increasing numbers of cases, comprising a large variety of exhibits in smaller quantities, and produce results with better accuracy and success rates.
2 Forensic Databases

Almost any forensic activity is accompanied by a relevant database. The most significant, and by the same token the most sensitive, are databases of information related to individuals, such as criminal photo albums, odontological information, finger (and palm) prints, and DNA, which is a relative newcomer. Legislators are weighing the costs and benefits of including different types of offences or individuals in the databases. For example, many burglars are recidivists, so listing burglary as an offence qualifying for inclusion in forensic databases expresses well one of the databases' strengths: fast identification of suspects without their physical apprehension. Burglars can easily develop into perpetrators of severe crimes; their inclusion in databases can therefore also effectively help in the detection of rapists, murderers, etc. In democratic regimes, issues related to privacy and human rights are raised in the context of forensic databases and weighed against the common interests and well-being of society. Legislators bear the responsibility of making forensic databases available for fighting crime, while putting reasonable constraints and limitations on their use, avoiding unnecessary infringements of privacy rights, and ensuring that they are well secured against improper uses. Legislators have to compromise between the presumably opposing expectations of the private citizen on the one hand, and society on the other. No clear line can be drawn between these two since, in many instances, they are complementary rather than opposed to each other [1]. All forensic "biometric" databases share the same ethical and legislative dilemmas, best demonstrated through DNA because of its sensitivity and complexity.
3 Forensic DNA Profiling and Databasing

Since its introduction by Prof. Sir Alec Jeffreys and his coworkers [2], forensic DNA profiling has become an indispensable tool used by law enforcement agencies around the world. It can be stated that forensic DNA caused a revolution in crime scene investigation similar to the one accomplished by fingerprint identification a century ago. The first forensic DNA database was established in the UK in 1995. It is today the largest of its kind, comprising more than 4.5 million individuals, or more than 6.5% of the UK population [3]. More than 100,000 DNA profiles originating from crime scenes are loaded into the database every year, resulting in a hit rate¹ of 52% [4]. Indeed, the number of cases solved using DNA profiling in the UK is getting closer to
¹ The term "hit rate" is used not only for matching individuals to crime scenes. It is also used for scene-to-scene matching, generating important intelligence. It should also be noted that not every hit leads to the submission of charges and a conviction.
the corresponding number for fingerprints, and will undoubtedly surpass it in the near future. DNA profiling and databasing are thus the most remarkable example of state-of-the-art scientific technologies harnessed for forensic use. Despite their great (and proven) potential, DNA profiling and databasing are regarded cautiously by many. From the scientific point of view, this is still a very young and fast-developing technology. Frequently introduced changes and new applications must be validated and become acceptable within the forensic and scientific communities before entering the court room. DNA profiling is a relatively complicated process, prone to contamination, miscarriages and misinterpretation, putting a heavier burden on laboratories practicing it. The statistical nature of DNA profiling results also leaves a lot of room for debate regarding their interpretation.

Another sort of controversy caused by DNA profiling and databasing lies in the ethical and legislative domains. Ethical questions are frequently raised [5], for example: which populations are to be included in the DNA database (convicted persons, arrestees, suspects, volunteers), and for how long? Should DNA samples be stored, or only DNA profiles? Should other DNA databases (such as "BioBanks") be made available for law enforcement purposes? DNA databases attract special attention since DNA samples (and perhaps also DNA profiles) contain a lot of genetic information about the individual beyond what is needed for identification, and might be misused for other purposes. Three additional issues of fundamental significance are also raised in connection with forensic DNA:

(a) "Familial searching", or genetic proximity testing, is a technique in which, when there is no full match between a crime scene profile and the database, police search the database for individuals whose DNA profiles are closely related to the profile derived from the crime scene. These closely related profiles might suggest a familial connection between a person included in the database and the unknown offender, thus giving the police a clue as to his identity. In several countries this type of search is considered a privacy violation and is unacceptable.

(b) The use of "abandoned" DNA: it is asked whether police can collect and use biological traces left unintentionally by individuals in public places. This technique is commonly used whenever police need physical evidence connecting a suspect to a crime scene. It might open up the possibility of constructing illegal and informal DNA databases in the hands of the authorities.

(c) DNA dragnets, or intelligence-led DNA mass screenings, is a technique used by the police mostly for elimination purposes, i.e., narrowing a long list of people who might potentially be connected to a crime. People are asked to volunteer a DNA sample for profiling and comparison with a DNA profile derived from the crime scene. This technique raises all sorts of ethical and legislative questions regarding the real meaning of voluntarism in this context, the fate of samples and profiles taken from volunteers, etc. This topic well exemplifies the controversial nature of forensic DNA profiling and databasing from ethical and legislative points of view and will be addressed further in this paper. It is also elaborated in detail in a book chapter to be published soon [6].
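As a purely illustrative aside on how a familial search of the kind described in (a) can be ranked, the toy sketch below counts shared alleles per STR locus (an identity-by-state measure). Real kinship analysis uses likelihood ratios over population allele frequencies; the loci, allele values and record names here are invented.

```python
# Toy familial-searching illustration: rank database records by the number
# of alleles they share with the crime-scene STR profile at each locus.
def shared_alleles(profile_a, profile_b):
    """Identity-by-state count: close relatives (parent/child, siblings)
    tend to share far more alleles than unrelated individuals."""
    score = 0
    for locus, (a1, a2) in profile_a.items():
        remaining = list(profile_b.get(locus, ()))
        for allele in (a1, a2):
            if allele in remaining:
                remaining.remove(allele)  # each allele matches at most once
                score += 1
    return score

scene = {"D3S1358": (15, 17), "vWA": (16, 18), "FGA": (21, 24)}
database = {
    "rec-001": {"D3S1358": (15, 16), "vWA": (16, 17), "FGA": (22, 24)},
    "rec-002": {"D3S1358": (12, 13), "vWA": (14, 19), "FGA": (20, 26)},
}
ranked = sorted(database, key=lambda r: shared_alleles(scene, database[r]),
                reverse=True)
# "rec-001" shares 3 alleles with the scene profile and ranks first,
# flagging a possible (but far from certain) relative of the offender.
```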
4 DNA Dragnets or Intelligence-Led DNA Mass Screenings

DNA mass screenings are defined [7] as: ".. [e]ssentially warrantless searches administered en masse to large numbers of persons whose only known connection with a given crime is that authorities suspect that a particular class of individuals may have had the opportunity to commit it."

DNA mass screenings are performed in many countries. In the UK, more than 280 screenings had been carried out by early 2006, involving over 80,000 volunteers, an average of 285 volunteers per screening [8]. Forty such screenings resulted in the positive identification of the offender [9]. A 39.4% success rate was reported where intelligence-led DNA screenings were accompanied and closely monitored by accredited behavioral scientists [10]. At the other extreme stands the USA: a study made in 2005 at the University of Nebraska [11] mentions only 18 screenings performed across the US, only one of them successful. The largest DNA mass screening ever took place in Germany, where 16,400 people were sampled [12].

The "Antony Imiela case" [13] from the UK exemplifies the problematic nature of DNA mass screenings. Six rapes occurred in different parts of the UK between November 2001 and October 2002. The first, which took place in the Kent district, led to the collection of over 2,000 samples from volunteers, yielding no match in the NDNAD (UK National DNA Database). The next four rapes, all taking place in other districts, showed that the rapist was not bound to a specific area (and he was accordingly given the title of "the M-25 rapist"). This led to the sampling of an additional 1,000 volunteers. After a sixth rape took place, the police were able to construct a facial composite of the rapist with the help of the victim, leading to the discovery of his identity. The British Forensic Science Service (FSS) published that over 100 scientists and technicians were involved in this case, and more than 2 million pounds were spent. This raised the question of how well the limited resources available for forensic investigations are being used.

The next example illustrates the operational problems related to DNA mass screenings. In the investigation of a murder case of January 2002 in the USA, known as the "Truro, MA case" [14], the police correctly marked the "garbage man" as one of the potential suspects, but his DNA sample was not taken until March 2004(!), and was not transferred to the lab until five additional months had elapsed. At the beginning of 2005, three years after the murder took place, the police decided to sample, on a voluntary basis, 790 men from the town. These included almost everyone, even the elderly and the disabled, whose sampling should have been avoided in the first place. Just then the lab came across the sample of the "garbage man", which led to the identification of the murderer, three years late.

A completely different example is taken from Israel. In January 2007 a teenage girl, Maayan Ben-Horin, was murdered during a rape attempt in a remote area sparsely inhabited by a Bedouin community. It was quite obvious that the chances that the murderer came from outside the area were very slim. Twenty-one men were immediately sampled, and within a short while the case was solved. This case clearly exemplifies a situation in which a DNA mass screening becomes a very efficient eliminative tool when applied within a small and well-defined population.

The following conclusions can be drawn. It is clear that a DNA mass screening can never be a "stand-alone" operation.
The police must make sure that mass screenings are used skillfully and intelligently, always in conjunction with other investigative
means. It is unwise to rely on a DNA mass screening as a means to reveal the offender only because large numbers of volunteers are sampled. Lists of volunteers should be based not only on geographical or ethnic considerations; in most cases where these were the only considerations, the results were negative. It is advisable that a behavioral scientist or a profiler be involved in constructing the lists of volunteers and prioritizing them. In some cases it is wise to conduct this eliminative search from the very beginning, as shown in the "Ben-Horin case"; in other cases, exemplified by the "Truro case", it is recommended to conduct this search only when all other means, after being skillfully used, are exhausted, thus rendering a mass screening inevitable. In a prolonged screening, all details should be reassessed from time to time and an updated screening policy must be incorporated. A strong cooperation must be established between investigators and forensic scientists, and the whole process should be closely monitored. Mass screenings should not be used too frequently: exhausting this valuable tool undoubtedly damages the innovation and quality of police work, and cannot replace hard and professional investigative work. On top of that, frequent mass screenings might spoil the delicate relations between the police and the community, especially in cases where volunteers are selected mainly on the basis of ethnic ascription. Since the profiling of large quantities of DNA samples is very expensive, the police should be obliged to optimize the distribution of budgets between DNA screenings and other forensic and investigative means.
5 Volunteers' Sample Collection and Utilization Process

DNA mass screenings begin with the voluntary sampling of populations defined by the police according to the considerations examined in the previous section. A volunteer can be a victim of the assault, a family member giving a DNA sample for identification purposes, a witness present at the scene, etc. A volunteer can also be any person who is not directly and individually suspected of being the perpetrator. In the UK, Australia and several states in the USA, a court order is not needed in order to ask volunteers to donate DNA samples.

One of the most important issues regarding volunteers' sampling is the "informed consent" needed as part of the sampling process. In the UK [9], two distinct forms of consent exist. The "limited scope of consent" enables the police to use the DNA profile only in connection with the case the sample was originally taken for. The "unlimited consent" (or "initial voluntariness" [15]) authorizes the police to match the profile against the database, without any time limit. In the UK this consent is irrevocable, thus enabling the police to upload the profile into the NDNAD and also store the sample. The Nuffield report on "The Forensic Use of Bioinformation" [16], published in the UK in September 2007, revealed that more than 40% of the volunteers signed for unlimited consent, thus illustrating the problematic nature of "informed consent" [17]. Many European countries allow the use of voluntarily given samples only for the purposes of the case under investigation. Many countries do not allow the voluntary collection of DNA samples unless there is DNA evidence at the scene itself [18].

Many authors define two conditions that should be fulfilled for a sample taken from a volunteer to be considered as given under "informed consent": (a) samples should be collected under non-coercive circumstances; a volunteer should not
be given the impression that refusal to give a sample will immediately turn him into a suspect [19]; (b) samples should be collected under well-informed, written consent, explaining to the volunteer all options and possibilities, including the meaning of a search against the whole database, entry into the database (where relevant), the terms of revoking the consent, etc.

The first condition might be difficult to implement when a presumably innocent person is called to a police station to give a sample, creating a situation perceived as coercive by its very nature. Moreover, refusal to provide a sample might be deemed hiding information rather than insisting on one's privacy rights. Failure of law enforcement agencies to clarify, both to themselves and to the volunteer, that refusal will not automatically turn him into a suspect might damage the presumed innocence of the individual, transferring the burden of proof from the police to his shoulders, with no real evidence in hand other than the refusal [5]. Nevertheless, there might be cases where refusal has a critical impact on the investigation, e.g., when the number of possible volunteers is small or when a volunteer meets specific criteria relevant only to a narrowly defined group of people.

In the USA, DNA sampling of volunteers is challenged by civil libertarians on the basis of the Fourth Amendment of the Constitution, which presumably serves as a shield for the individual against privacy intrusion by the government [1]. Nevertheless, most authors think it cannot prevent DNA sampling of volunteers, because of arguments such as the "special needs" of the government, or the "society protection needs" mentioned in it [20]. Most courts have turned down claims against volunteers' sampling on this basis. It is also argued that the availability of biological samples, given consciously by almost any citizen for many other purposes, or left unconsciously everywhere, erodes the constitutional ability to oppose their use for purposes beneficial to society [7]. DNA mass screenings are compared to a lawful police operation searching for drunk drivers by stopping them for inspection on the highway. The only concern they raise is that DNA mass screenings turn out to be ethnically biased in many cases, and the claim is that this trend should be avoided.
6 Population-Wide Forensic DNA Databases

Another important issue is population-wide forensic DNA databases, which can be regarded as another sort of voluntary mass screening. Some scholars support the establishment of population-wide databases. Such databases would put an end to the claims that the existing databases are ethnically imbalanced (e.g., the NDNAD contains ca. 40% of the black population of the UK and less than 10% of its white population [21]); they would save police effort and money in the long run; they would minimize the harassment of the population and its involvement with the police; they would cast a different light on the question of privacy and the preservation of human rights, since the whole population would be involved [22]; and, in the case that only DNA profiles are saved and not the samples themselves, the privacy intrusion would be almost unnoticed, since the data comprising the DNA profile are (today) valid only for identification purposes and do not significantly differ from fingerprints [23]. It is also argued that many countries are already on the track of building a comprehensive DNA database, constantly adding
different types of populations to existing databases. In fact, voices all over the free world are heard in favor of such a process: they come from the justice system itself [3] and from high-ranking politicians [24]. However, the vast majority within the legal, social science and political communities firmly opposes this idea. Their main argument is that the expected benefits for law enforcement would be much less pronounced than the anticipated damage to society. It has already been shown that, in spite of its accelerated growth, the "hit rate" of the UK NDNAD did not grow during the last few years [16]. On top of that, the overall forensic coverage of crime scenes is only 20% [3]. Evidently, enlarging the number of crime scene samples stored in any given DNA database through better forensic coverage would amplify its efficiency. It is feared that this kind of database would seriously impair the delicate balance between privacy rights and society's needs and interests, and would lead the way towards a much more pronounced "surveillance society" [25]. Although the costs of determining a single DNA profile are falling dramatically, a huge budget would still be needed in order to create and maintain a comprehensive DNA database. This would come on top of the existing backlogs in most forensic DNA labs in the world, which are suffering from shortages of budgets, skilled people, instrumentation, lab space and accreditation programs [26]. And above all comes the notion that a comprehensive DNA database would turn the whole population into a society of suspects whenever the database is searched, and would not repair the impaired ethnic balance in the attitudes of existing law enforcement agencies, since these are embedded within their systemic conduct itself [27].
7 Retention of DNA Samples and Profiles of Legally Innocent Populations

The preservation and further utilization of DNA samples and profiles raise, according to civil libertarians, much more concern than the act of their voluntary collection. This concern is shared not only by volunteers but also by suspects/arrestees who were not charged, and by people exonerated by the courts, in countries where their DNA samples and profiles can still be lawfully retained and further used by the authorities although they are legally innocent. In the UK, existing law allows the indefinite retention of voluntarily (and in fact all) submitted DNA samples and profiles [9]. This legislation was enacted in 2001, after the House of Lords overturned a judge's decision to exclude DNA evidence based on a sample illegally held by the police, when it should have been expunged [19]. The House of Lords then concluded that this policy is not in conflict with the European Convention on Human Rights (ECHR) [21]. In 2008, the European Court of Human Rights concluded (in the case of S. and Marper vs. the UK) that the practice of retaining the DNA and fingerprints of anyone arrested but not charged or convicted in England and Wales was a violation of the ‘right to respect for private life’ under Article 8 of the Convention [28]. In response, the Home Office issued in May 2009 a consultation paper, ‘Keeping the Right People on the DNA Database: Science and Public Protection’, which specifically addressed this question of keeping and using the samples and profiles of people who were not convicted [29]. This might be a turning point in Home Office policy.
The main argument used in favor of retaining DNA samples is the possible need to update databases in light of future technological developments in forensic DNA [30]. This argument is not easily dismissed, since the technology (and the kits) used to generate profiles has changed considerably over the last two decades, becoming much more efficient and far less expensive and time-consuming. Examples are not scarce: SNP (single nucleotide polymorphism) technology was first used en masse for degraded DNA analysis after the 9/11 terror attacks; forensic Y-chromosome analysis has been gaining more and more attention lately; and in 2006 the German BKA published a unique study in which, in a DNA mass screening, nuclear DNA was used for the first time in conjunction with mtDNA [31]. A question arises, though, whether it would be feasible, on budgetary grounds, to update a whole DNA database comprising millions of profiles whenever such a technological breakthrough is scientifically justified.

Many judicial systems differentiate between DNA samples and the profiles derived from them. The main reason is the individual genetic information inherently contained in the sample, which is not relevant to the identification of its owner [32]. It is feared that samples may be used in research that is not necessarily consistent with the initial definition of forensic DNA databases as a means of identification [33]. Two types of this phenomenon, termed "function creep", may be considered: (a) utilization of the information stored in a given database for the further development of means serving its initial general purpose, and (b) its utilization for completely different purposes [34]. The latter might result in research in fields such as "genetic criminology", aimed at developing genetic means for identifying criminal potential in individuals.

Such fears are based on examples from various countries where information and samples taken for non-forensic DNA databases can be used for criminal investigation purposes. One of the largest DNA databases belongs to the U.S. Army, where over 3 million samples are stored for the sole purpose of identifying soldiers killed in action. In 2002 it was established that, subject to a court order, law enforcement agencies can use it for criminal investigation purposes [35]. This might be interpreted as a violation of the conditions under which these samples were collected from the soldiers; attempts to withdraw samples from the database have failed [25]. The same applies to "biobanks", where sample collection is purely voluntary. In Scotland, a judge authorized the use of information derived from a database of HIV-positive volunteers in order to obtain a conviction. In the UK, data from biobanks can be used by the police under the Police and Criminal Evidence Act 1984 (PACE), by authorization of a circuit judge and in "exceptional circumstances" [36]. In Sweden, information stored in a biobank (the PKU Register) was used in the investigation of the assassination of Foreign Minister Anna Lindh in 2003, causing the withdrawal of hundreds of samples from this voluntary database [37].

Concerns are also raised regarding the DNA profiles of unconvicted individuals retained by the authorities. This seems to be a much less troubling issue, but it is argued that polymorphic DNA that has no genetic meaning known to us today might turn out to be meaningful in the future. A DNA profile submitted by an individual might also be used in "familial searching" [35].
Thus, by retaining the DNA profile of a legally innocent individual, the police also acquire valuable information about his or her first-degree relatives. These concerns should also be considered as part of any decision regarding the fate of the DNA profiles of these populations.
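The identification and familial-searching arguments above can be made concrete with a small sketch. What follows is a simplified, hypothetical illustration, not any agency's actual algorithm: an STR profile is modeled as a pair of allele repeat counts at each of a handful of loci; a full match at every locus identifies an individual, whereas a high proportion of loci sharing at least one allele is the kind of signal a familial search looks for, since a parent and child share one allele at every locus. The locus names are real STR markers, but the profiles and the scoring are invented.

```python
# Simplified, hypothetical sketch of DNA profile matching and familial
# searching. Profiles and scoring are illustrative only.

LOCI = ["D3S1358", "vWA", "FGA", "TH01", "D8S1179"]  # a few real STR loci

# A profile maps each locus to the pair of inherited allele repeat counts.
suspect = {"D3S1358": (15, 17), "vWA": (16, 18), "FGA": (21, 24),
           "TH01": (6, 9.3), "D8S1179": (12, 13)}
relative = {"D3S1358": (15, 16), "vWA": (14, 18), "FGA": (22, 24),
            "TH01": (7, 9.3), "D8S1179": (13, 14)}

def full_match(a, b):
    """Identification: both alleles must agree at every locus."""
    return all(sorted(a[l]) == sorted(b[l]) for l in LOCI)

def shared_allele_fraction(a, b):
    """Familial-search signal: the fraction of loci sharing at least one
    allele. A parent and child share an allele at every locus."""
    shared = sum(1 for l in LOCI if set(a[l]) & set(b[l]))
    return shared / len(LOCI)

print(full_match(suspect, relative))          # False: not the same person
frac = shared_allele_fraction(suspect, relative)
print(f"loci sharing an allele: {frac:.0%}")  # 100%: flag as possible kin
```

The point of the sketch is the asymmetry described in the text: even when a retained profile excludes its owner, a high allele-sharing fraction can point investigators towards untyped relatives who never consented to anything.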
8 A Case Study: The Farhi Case [38]

The following case from Israel illustrates the confrontation between the privacy rights of an individual volunteering a DNA sample, on the one hand, and the needs of a society fighting severe crime, on the other. Anat Fliner, a lawyer and single mother of two, was murdered in April 2006 by a 15-year-old boy carrying a knife during a robbery attempt. The police found his knife and gloves, and in a short while these were connected to the murder, bearing the DNA profiles of both the victim and the presumed killer. A DNA dragnet was conducted over a period of two years, calling in 500 convicted burglars residing in nearby areas, without any success. One of them was Eitan Farhi, who volunteered his sample for the investigation of Fliner's murder only. The hit was achieved only when the boy was caught trying to steal a motorcycle: a DNA sample taken from him revealed that he was the murderer. At the same time, a series of rape cases from other districts was also under investigation. The DNA lab expert working on these cases recalled that she had encountered a DNA profile with similar, relatively rare characteristics in an unrelated case before. A quick glance at the Fliner case Excel file disclosed that the profile of the alleged serial rapist was identical to the profile of Eitan Farhi.

The guiding principle of the judge who was asked to issue an arrest order was the "public interest". According to his judgment, "the police and the court… cannot ignore the existence of any evidential basis connecting beyond a reasonable doubt a person to the execution of such severe offenses. I believe that our judicial system principles, which are manifested in the Yissacharov case, do not prevent the arrest of a person in the above mentioned circumstances" [39]. Farhi claimed that the use of the DNA evidence in his case was illegal, since the basis for his indictment in the rape cases was a DNA profile taken from him in connection with the murder case which, contradicting his limited consent, was illegally used in the rape investigation. He argued that the law sets a clear exclusionary rule. The district court denied his motion to exclude the evidence, ruling that the DNA evidence was admissible according to the Yissacharov case, which dealt with the adoption of a doctrine under which illegally obtained evidence (also termed "fruit of the poisonous tree", a legal metaphor used in the USA to describe evidence gathered with the aid of information obtained illegally) should be inadmissible in the Israeli legal system. According to the Yissacharov case, a court can exclude illegally obtained evidence if it finds that admitting it at trial would harm the rights of the accused in a substantial way and would greatly violate the fairness of the proceedings. The possibility of excluding evidence in restricted circumstances expresses the relativity of the right to a fair trial, and the profound need to balance it against competing values, rights, and interests, such as discovering the truth, fighting crime, and protecting public safety and the rights of potential and actual victims of crime. In dealing with the question of the admissibility of illegally obtained evidence, a court should examine whether "the law enforcement authorities made use of improper investigation methods intentionally and deliberately or in good faith". In cases where the authorities have intentionally violated the provisions of law binding them, or consciously violated a protected right of a person under investigation, there is a possibility
of a serious violation of due process if the evidence obtained is admitted at trial. Another consideration, which might be viewed as a mitigating circumstance reducing the seriousness of the illegality, is "an urgent need to protect public safety". A further relevant consideration under judicial discretion is "the degree to which the illegal or unfair investigation method affected the evidence that was obtained": the court should consider to what extent the illegality involved in obtaining the evidence is likely to affect its credibility and probative value, and whether the existence of the evidence is independent of, and distinct from, the illegality involved in obtaining it. In a case of DNA evidence, as in the Farhi case, the presumably improper investigation methods were not capable of affecting the content of the evidence, and the court therefore ruled that it could be admitted at trial. It is most likely that the case-law exclusionary rule will not be applicable whenever scientifically based evidence is used, mostly because of its objectivity and high credibility. The verdict specified clearly that the "fruit of the poisonous tree" doctrine prevailing in the USA is not adopted by the Israeli case-law doctrine of inadmissibility. In the Farhi case it was evident to the court that the exclusion of the DNA evidence might cause excessive harm to the public interest; in these circumstances, exclusion of the evidence might undermine the administration of justice and public confidence in the legal system. The more serious the offense, the less willing the court will be to exclude illegally obtained evidence. Eitan Farhi was thus convicted in all three rape cases. It is now evident that it is only for the court to decide where limited consent gives way to the public interest.
9 General Conclusions

Modern forensic science combined with comprehensive "biometric" databases forms a powerful tool in the hands of law enforcement agencies. In many circumstances, problems related to breaches of privacy and human rights may arise. Guidelines for the use and retention of DNA samples and profiles should be well defined, recognized by all parties involved (police, laboratories, database holders, prosecution, etc.), and enforced. Any disputes or problems regarding samples and profiles should be decided only by a court, which is qualified to consider all the relevant aspects. Proper protection of human rights does not necessarily lie in restricting databases, but rather in placing restrictions on their use and manipulation. It seems that the vast majority of the population in democratic countries is normally willing to cooperate with the police, especially where investigations of severe crimes are concerned. Public cooperation with mass screenings enables the police to make the most of their elimination abilities, focusing all efforts on a narrower list of potential suspects. Clear definitions and transparent behavior by law enforcement agencies will ensure public cooperation even where an issue as sensitive as DNA sampling is concerned. It is reasonable to expect individuals to give up their privacy rights to some extent for the sake of the common good, provided there are sufficient assurances that these rights are not misused. Building this faith in public opinion will ensure that privacy rights and the welfare of society do not contradict but rather complement each other.
References

1. Etzioni, A.: DNA Tests and Databases in Criminal Justice: Individual Rights and the Common Good. In: Lazer, D. (ed.) DNA and the Criminal Justice System: The Technology of Justice, pp. 197–223. MIT Press, Cambridge (2004)
2. Gill, P., Jeffreys, A.J., Werrett, D.: Forensic Applications of DNA 'Fingerprints'. Nature 316, 76–79 (1985)
3. Whittall, H.: DNA Profiling: Invaluable Police Tool or Infringement of Civil Liberties? Bioethics Forum (2007), http://www.thehastingscenter.org/Bioethicsforum/Post.aspx?id=648
4. Prainsack, B.: An Austrian Perspective. BioSocieties 3, 92–97 (2008)
5. Rothstein, M., Talbott, M.: The Expanding Use of DNA in Law Enforcement: What Role for Privacy? J. of Law, Medicine and Ethics 34, 153–164 (2006)
6. Zadok, E., Ben-Or, G., Fisman, G.: Forensic Utilization of Voluntarily Collected DNA Samples: Law Enforcement vs. Human Rights. In: Hindmarsh, R., Prainsack, B. (eds.) Genetic Suspects: Interrogating Forensic DNA Profiling and Databasing. Cambridge University Press, Cambridge (publication date spring 2010)
7. Harlan, L.: When Privacy Fails: Invoking a Property Paradigm to Mandate the Destruction of DNA Samples. Duke Law Journal 54, 179–219 (2004)
8. Mepham, B.: Comments on the National DNA Database, NDNAD (2006), http://www.nuffieldbioethics.org/fileLibrary/pdf/Professor_Ben_Mepham.pdf
9. Williams, R., Johnson, P., Martin, P.: Genetic Information and Crime Investigation: Social, Ethical and Public Policy Aspects of the Establishment, Expansion and Police Use of the National DNA Database. Research funded by the Wellcome Trust, London (2004)
10. Burton, C.: The UK NDNAD Intelligence Led DNA Screens: A Guide for Senior Investigating Officers. Presented at the 1st Interpol DNA Users Conference, Lyon (1999), http://www.interpol.int/Public/Forensic/dna/conference/DNADbBurton.ppt
11. Walker, S., Harrington, M.: Police DNA 'Sweeps': A Proposed Model Policy on Police Request for DNA Samples. Police Professionalism Initiative, University of Nebraska, Omaha (2005), http://www.unomaha.edu/criminaljustice/PDF/dnamodelpolicyfinal.pdf
12. Halbfinger, D.: Police Dragnets for DNA Tests Draw Criticism. The New York Times (January 4, 2003), http://www.nytimes.com/2003/01/04/us/police-dragnets-for-dna-tests-draw-criticism.html
13. Antoni Imiela: M25 Rapist Trapped by Crucial Forensic Evidence (2004), http://www.forensic.e-symposium.com
14. Bieber, F.R.: Turning Base Hits into Earned Runs: Improving the Effectiveness of Forensic DNA Data Bank Programs. J. of Law, Medicine and Ethics 34, 222–233 (2006)
15. Kaye, D., Smith, M.: DNA Databases for Law Enforcement: The Coverage Question and the Case for a Population-Wide Database. In: Lazer, D. (ed.) DNA and the Criminal Justice System: The Technology of Justice, pp. 247–284. MIT Press, Cambridge (2004)
16. Nuffield Council on Bioethics: The Forensic Use of Bioinformation: Ethical Issues. London (2007), http://www.nuffieldbioethics.org/fileLibrary/pdf/The_forensic_use_of_bioinformation-ethical_issues.pdf
17. Parry, B.: The Forensic Use of Bioinformation: A Review of Responses to the Nuffield Report. BioSocieties 3, 217–222 (2008)
18. Dierickx, K.: A Belgian Perspective. BioSocieties 3, 97–99 (2008)
19. Walsh, S.: Legal Perceptions of Forensic DNA Profiling Part I: A Review of the Legal Literature. Forensic Science International 155, 51–60 (2005)
20. Simoncelli, T.: Dangerous Excursions: The Case against Expanding Forensic DNA Databases to Innocent Persons. J. of Law, Medicine and Ethics 34(2), 390–397 (2006)
21. Privacy International: PHR2006 – Privacy Topics – Genetic Privacy. London (2007), http://www.privacyinternational.org/article.shtml?cmd%5B347%5D=x-347-559080
22. Bikker, J.: Response Submitted to the Consultation Held by the Nuffield Council on Bioethics (2007), http://www.nuffieldbioethics.org/fileLibrary/pdf/Jan_Bikker
23. Smith, M.: Let's Make the DNA Identification Database as Inclusive as Possible. J. of Law, Medicine and Ethics 34, 385–389 (2006)
24. Jones, G.: DNA Database 'Should Include All'. The Telegraph (October 24, 2006), http://www.telegraph.co.uk/news/uknews/1532210/DNA-database-should-include-all.html
25. Nelkin, D., Andrews, L.: Surveillance Creep in the Genetic Age. In: Lyon, D. (ed.) Surveillance as Social Sorting: Privacy, Risk and Digital Discrimination, pp. 94–110. Routledge, Taylor & Francis Group, London (2003)
26. Norton, A.: DNA Databases: The New Dragnet. The Scientist 19, 50–56 (2005)
27. Wadham, J.: Databasing the DNA of Innocent People: Why It Offers Problems, Not Solutions. Liberty press release (September 13, 2002)
28. European Court of Human Rights: Case of S. and Marper vs. the UK (2008), http://cmiskp.echr.coe.int////tkp197/viewhbkm.asp?action=open&table=F69A27FD8FB86142BF01C1166DEA398649&key=74847&sessionId=skin=hudoc-en&attachment=true&16785556
29. Home Office: Keeping the Right People on the DNA Database: Science and Public Protection. London (2009), http://www.homeoffice.gov.uk/documents/cons2009-dna-database/dna-consultation
30. Brettell, T., Butler, J., Almirall, J.: Forensic Science. Analytical Chemistry 79, 4365–4384 (2007)
31. Szibor, R., Plate, I., Schmitter, H., Wittig, H., Krause, D.: Forensic Mass Screening Using mtDNA. International Journal of Legal Medicine 120, 372–376 (2006)
32. Van Camp, N., Dierickx, K.: The Expansion of Forensic DNA Databases and Police Sampling Powers in the Post-9/11 Era: Ethical Considerations on Genetic Privacy. Ethical Perspectives 14(3), 237–268 (2007)
33. Prainsack, B., Gurwitz, D.: Private Fears in Public Places? Ethical and Regulatory Concerns Regarding Human Genomic Databases. Personalized Medicine 4, 447–452 (2007)
34. Asplen, C.: The Non-forensic Use of Biological Samples Taken for Forensic Purposes: An International Perspective. American Society of Law, Medicine and Ethics (2006), http://www.aslme.org/dna_04/spec_reports/asplen_non_forensic.pdf
35. Williams, R., Johnson, P.: Inclusiveness, Effectiveness and Intrusiveness: Issues in the Developing Uses of DNA Profiling in Support of Criminal Investigations. J. of Law, Medicine and Ethics 34, 234–247 (2006)
36. McCartney, C.: Forensic DNA Sampling and the England and Wales National DNA Database: A Skeptical Approach. Critical Criminology 12, 157–178 (2004)
37. Ansell, R., Rasmusson, B.: A Swedish Perspective. BioSocieties 3, 88–92 (2008)
38. Farhi vs. State of Israel: Serious Crime File 1084/06, Tel-Aviv District Court (April 26, 2007) (in Hebrew)
39. Yissacharov vs. Chief Military Prosecutor: CrimA 5121/98 (1998), http://elyon1.court.gov.il/files_eng/98/210/051/n21/98051210.n21.pdf
Are We Protected? The Adequacy of Existing Legal Frameworks for Protecting Privacy in the Biometric Age

Tim Parker

Barrister-at-Law, Denis Chang's Chambers, 1501 Two Pacific Place, 88 Queensway, Admiralty, Hong Kong
[email protected]
Faculty of Law, The University of Hong Kong, Pokfulam Road, Hong Kong
[email protected]
Abstract. The creation of a record containing biometric data would, in most legal systems, engage laws governing when, how and by whom that record may be accessed, stored, copied, destroyed, etc. However, there remain grave concerns amongst privacy advocates that existing privacy laws – which in many jurisdictions are general or 'principle-based' in nature – are insufficient to protect the individual from the specific and distinct threats to privacy said to attach to biometric data. This paper analyses the legal properties intrinsic to biometric data as against other established categories of protected data. The survey is cross-jurisdictional, with emphasis on the Hong Kong SAR.

Keywords: biometrics, law, human rights, privacy.
1 Introduction

The rise of biometrics poses substantial challenges to our privacy. This paper will assess these challenges through the prism of human rights law – specifically, Article 17 of the International Covenant on Civil and Political Rights [1] ("ICCPR" or "the Covenant"). Part 2 offers some discussion on the right to privacy expounded in Article 17 of the Covenant, and the conditions under which interference with that right is permissible in a democratic society. Part 3 appraises the legislative machinations through which biometric identification systems have come to be implemented in various countries. Three case studies are presented: Hong Kong, the United Kingdom and the Netherlands. Finally, in Part 4 some concluding observations are offered on key legislative safeguards that, it is argued, are essential preconditions to the authorisation of any identity regime employing biometric systems.
2 International Human Rights Law

The most widely acceded-to international human rights instrument enshrining the right to personal privacy is the ICCPR. At the time of writing, that Covenant boasts
165 States parties and 8 signatories (i.e. around 90% of all recognised sovereign States). In Hong Kong, the ICCPR is incorporated directly into domestic law by virtue of Article 39 of the Basic Law – the Special Administrative Region's "mini-constitution". (Even prior to the resumption of Chinese sovereignty over Hong Kong on 1 July 1997, the Covenant was incorporated via the Hong Kong Bill of Rights Ordinance (Cap. 383), enacted in 1991.) In relation to privacy, Article 17 of the Covenant provides that:

1. No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honour and reputation.
2. Everyone has the right to the protection of the law against such interference or attacks.
At the most fundamental level, the prohibition against interference with 'privacy' in Paragraph 1 encapsulates the notion that a person is free to reveal herself selectively to others – in other words, to have control over her identity [2]. This is recognised as a fundamental right. The United Nations Human Rights Committee ("UNHRC"), the treaty body mandated to oversee the implementation of the Covenant, has frequently pointed out that Article 17 is a 'gateway right', the protection of which is necessary to ensure the effective enjoyment of other civil and political rights, such as freedom of association, movement, political opinion, and so on [4]. This is because the prying eyes of others may have a chilling impact upon our choices – particularly when we wish to do things that, although perfectly legal, are considered to deviate from social norms or are taboo. It follows from the above that, although the ICCPR is for the most part directed at governments, Article 17 applies to any interference with privacy, whether it emanates from the State itself or from other private persons. Thus, Paragraph 2 obliges States parties to enact laws protecting against such interference. Specifically in relation to data privacy, the UNHRC has opined in its General Comment on Article 17 that: "The gathering and holding of personal information on computers, databanks and other devices, whether by public authorities or private individuals or bodies, must be regulated by law. Effective measures have to be taken by States to ensure that information concerning a person's private life does not reach the hands of persons who are not authorized by law to receive, process and use it, and is never used for purposes incompatible with the Covenant." [3] Although it has been said above that privacy is a fundamental human right, Article 17 reveals that it is not absolute and may properly be limited as is necessary and appropriate in a democratic society. This is because interference with privacy is proscribed only insofar as it is "unlawful" or "arbitrary". A measure impinging upon privacy will be "unlawful" where the intrusion is not authorised in legislation (Toonen v Australia [1994] IHLR 27, UNHRC Communication No. 488/1992, decided under the 1st Optional Protocol to the ICCPR). This is a
procedural/systemic protection that applies to all limitable rights. Its basic purpose is to outlaw the unilateral curtailing of human rights by the executive – which, historical experience indicates, will generally be far more ready to trample on human rights than the legislature. In this way, States are permitted to limit the right, but are required to do so in a manner that is open and subject to the public scrutiny of the political process. As to arbitrariness, the language "unlawful or arbitrary" demonstrates that mere legality (in the sense of being authorised in legislation) is not in and of itself sufficient; interference with privacy may be 'lawful' and yet still be arbitrary. Thus, legislation phrased in general words purporting to authorise sweeping intrusions into privacy will be incompatible with Article 17. The law must state precisely the circumstances under which it will be permissible for privacy to be invaded and, in such cases, to what extent such invasion is permissible. This point is amply demonstrated by a recent Hong Kong case, Leung Kwok Hung & Anor. v HKSAR (Unreported, HCAL 107/2005, 9 February 2006). (The case was appealed to the Court of Final Appeal in [2006] 3 HKLRD 455 on unrelated grounds, and the CFA expressly approved of Hartmann J's judgment.) That case concerned section 33 of the Telecommunications Ordinance (Cap. 106), an old colonial law providing that the Chief Executive could authorise, inter alia, the tapping of any telephone "where the public interest so required". Recognising that section 33 might be vulnerable to challenge on account of its sweeping authorisation of interference with private communication, the Chief Executive promulgated certain rules, in the form of an Executive Order, limiting the circumstances in which the police and security officials could employ telephone interception. The applicants contended that this arrangement was incompatible with Article 17 of the Covenant. Hartmann J (as he then was) ruled in favour of the applicants, holding that section 33 purported to authorise interference with privacy in terms so wide as to render such authorisation 'arbitrary', and thereby contrary to Article 17. The Executive Order did not save the provision; mere executive rules did not meet the 'in law' requirement. Hartmann J's reasoning in the Leung Kwok Hung case in relation to the arbitrariness limb of Article 17 is entirely apposite when we turn to consider the application of that provision to the generation of biometric data and their deployment in various government and private sector applications. In assessing whether or not the measure was arbitrary in the sense contemplated by Article 17, His Lordship adopted the "reasonableness test" propounded by the UNHRC [4] (see also Toonen v Australia, supra). In order to satisfy the requirement of reasonableness, any legislative restriction on the right of privacy must be: (a) rationally connected with one or more legitimate aims; and (b) no more intrusive than is necessary to accomplish the legitimate purpose in question. Summarising the above analysis: the right to privacy is fundamental, but not absolute. It may be limited, but such limitations must be provided for in legislation – which itself must be detailed and specific as to precisely when and to what extent individual privacy may be compromised.
3 Biometrics Viewed through the Prism of ICCPR Article 17

Before turning to consider the legislation governing the use of biometric technologies, it must first be established that biometric data actually attract the protections set out in Article 17 of the Covenant. On this question, it is plain that some categories of biometric data are intrinsically of a private character. Data representing a person's DNA are the strongest example: the data set itself contains information of an inherently sensitive nature. DNA can reveal such fundamental information as our behavioural traits or our predisposition to certain diseases. These are obviously private. The same probably cannot be said of a fingerprint or an iris-scan template; such data are simply representations of physical traits of the data subject, and it would be a stretch to suggest that one's dignity is offended by such data being viewed. However, notwithstanding that there is nothing intrinsically private about the information contained in a fingerprint or iris-scan template, such data are capable of being used to identify the data subject, and therefore of connecting her with places, objects and events. It has been said that this is only true of certain kinds of biometric data, such as fingerprints, that allow a person to be identified without their knowledge and, potentially, against their will. However, it must be borne in mind that any constraints on the capability to use different data types to covertly identify a person are purely technical. Thus, even though it may not presently be feasible to capture a retina image capable of uniquely identifying the subject from, say, CCTV footage, the mere fact that a retina image is capable of being used in an invasive manner in the right technical context is, in the author's view, sufficient to conclude that privacy protection law (including Article 17 of the ICCPR) applies to all biometric data.

Having established this, I now move to the crux of the discussion: an analysis of the legislative approach that has been taken towards the deployment of biometric technologies. As the analysis in Section 2 demonstrates, where a State party to the ICCPR intends to take steps that will infringe upon personal privacy, it may do so only in pursuit of a legitimate public purpose. It cannot be stressed enough that the first step must be to identify and clearly articulate the purpose sought to be achieved. Once this legitimate aim is identified, lawmakers must address themselves to the question of whether or not the proposed measure is rationally connected to the accomplishment of that purpose – i.e. does the measure actually solve the problem identified? If it does, the next question is one of necessity: if the legitimate aim can be accomplished by some other means that does not involve an interference with individual rights, then the intrusive measure is impermissible. The final step is to apply the proportionality test: if the measure is necessary to accomplish the legitimate public aim identified, is the infringement of personal liberty caused by the measure proportionate to the importance of achieving that aim? In relation to the deployment of biometric technologies at the national level, this rights-conscious logic has been completely – and impermissibly – inverted.
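The ordering the author insists upon can be made explicit. The sketch below is merely a schematic restatement of the Article 17 analysis as sequential gates; the function, field names, and example are this illustration's own invention, not a legal instrument. Each gate presupposes the previous one, which is why starting from an existing capability and hunting for uses, as the function-creep examples that follow show, inverts the test.

```python
# Schematic restatement of the Article 17 'reasonableness' analysis as
# sequential gates. Illustrative only; legal judgment is not mechanical.

def article17_permissible(measure: dict) -> tuple:
    """Walk the gates in the order the text prescribes: a clearly
    articulated legitimate aim first, then rational connection,
    necessity, and proportionality."""
    if not measure.get("legitimate_aim"):
        return False, "no clearly articulated legitimate aim"
    if not measure.get("rationally_connected"):
        return False, "measure does not actually address the aim"
    if measure.get("less_intrusive_alternative"):
        return False, "aim achievable by a less intrusive means"
    if not measure.get("proportionate"):
        return False, "intrusion outweighs the aim's importance"
    return True, "permissible limitation of privacy"

# The biometric library card from the Hong Kong example below: the aim
# (running a lending system) needs no biometric certainty at all.
library_card = {
    "legitimate_aim": "maintain a library borrowing system",
    "rationally_connected": True,
    "less_intrusive_alternative": True,   # an ordinary card suffices
    "proportionate": False,
}
print(article17_permissible(library_card))
# -> (False, 'aim achievable by a less intrusive means')
```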
Hong Kong provides an instructive case study of this phenomenon: Mandatory registration of persons was introduced in Hong Kong in 1949 in the context of a huge influx of persons fleeing the bloody civil war raging in Mainland China. The then-Colonial administration issued all persons with a Hong Kong Identity Card (“the HKID”) in what was described as a ‘temporary measure’ justified by the
need to maintain law and order, and to administer food rationing. However, reliance on the HKID quickly became commonplace in the private sector – a fact which was frequently proffered as a justification for keeping it. By 1979, there was another mass influx of persons fleeing civil war – this time from Vietnam. Many Vietnamese arriving in Hong Kong took up employment. In response, the government passed legislation outlawing the employment of persons not holding Hong Kong residence (surprisingly, no such prohibition had previously existed). In order to enforce this new prohibition, the Immigration Ordinance was amended to require all persons to carry their HKID at all times. Police and immigration officers were (and remain today) authorised to carry out HKID checks on any person without cause. In 2003, the HKID photo cards were replaced with biometric "smart" cards containing a stored fingerprint template. The system also involves a central database in which the fingerprints of all Hong Kong residents are stored. The card contains a memory chip onto which data can be loaded to allow the card to interact with a number of other government services, including unmanned automated immigration clearance, tax payment, and access and borrowing facilities for public libraries. The HKID is now the universal – in fact, almost the sole – form of identification used in Hong Kong by the public and private sectors alike. Without one, you cannot take up employment or subscribe to a mobile phone service, and in some cases you cannot enter a private building without registering your unique HKID number with security. The story of the HKID is a classic tale of function creep: an identification facility is adopted in turbulent times to accomplish one specific, legitimate purpose (with which it is rationally connected, necessary and proportionate); over time, it ends up being employed for a variety of other functions for which biometric security is totally unnecessary. Consider the library card function in isolation: it could never be said to be 'necessary' and 'proportionate' (in the sense contemplated by Article 17) to demand that all persons register for a fingerprint ID card in order to maintain a library borrowing system. In the face of this apparent absurdity, biometric library cards are nevertheless a reality in Hong Kong. Plainly, the questions that lawmakers and policymakers have been asking are: "Now that we have this wonderfully reliable (and extremely expensive) ID card, what else can we employ it to do?" and "Given that this ID is more reliable than any other, why would any government department – or indeed private organisation – not use it?" However, it is precisely these questions that underlie the perverse "logic" of function creep. As has been emphasised above, it is critical to start with the goal that is sought to be achieved. Precisely the reverse has occurred in the legislative path that has led to the near-universal application of biometric identity documents in Hong Kong. This is emblematic of systemic failings in the legislative approach to biometric technologies – and indeed to privacy in general. This unfortunate process is not unique to Hong Kong. As the United Kingdom's Information Commissioner has pointed out, precisely the same phenomenon has surrounded the debate over a national identity card there [5].
At the conclusion of the second reading of the Draft Identity Cards Bill in Parliament, the Information Commissioner chastised the administration for failing to identify, even in broad terms, what the purpose of introducing the card was in the first place. The response, when it finally came, was that the purposes of the proposed card were: (1) national security; (2) prevention and detection of crime; (3) enforcement of immigration controls; (4) enforcement of prohibitions on illegal working; and (5) efficient and effective delivery of public
services [5]. These sweeping objectives have been roundly criticised as extremely imprecise and over-broad. They appear to be motivated by an expansive approach to the use of the proposed cards, rather than by any attempt to ring-fence or limit the proliferation of what are in reality highly invasive technologies. Yet another instance of grand-scale function creep in relation to biometric identity documents occurred recently in the Netherlands. In the course of implementing European Regulations relating to passport security, the Netherlands in 2009 implemented legislation for a digital passport system (Wijziging van de Paspoortwet in verband met het herinrichten van de reisdocumentenadministratie, adopted by the Dutch Senate on 9 June 2009; for the text of the law, in Dutch, see Parliamentary Documents I, 2008/09, 31 324 (R1844) A, 20 January 2009). The system involved the creation of a national fingerprint and digital facial image database (whereas the European Regulations in fact required storage of these data only in the document itself). Although the database was authorised for this singular purpose, the legislation was subsequently extended to permit use of the database for criminal investigation purposes (including counter-terrorism). This transformation of travel documents into security instruments for use by the State has been decried by the Dutch Data Protection Authority, privacy experts and human rights advocates as an entirely arbitrary interference with the right to privacy.
4 Conclusion: Stopping the Creep

These examples reveal a legislative approach to the deployment of national-level biometric data systems that is clouded by fuzzy thinking and ignores – or even completely inverts – the logic inherent in Article 17 of the ICCPR. This logic requires policymakers and lawmakers to orient themselves on the basis of clearly articulated (and legitimate) public policy objectives. If a given objective is sufficiently important and cannot be accomplished without curtailing the right to privacy, or without doing so to a lesser extent, then so be it. Such is the nature of democratic society. However, this is not the end of the matter. The critical addendum is that laws authorising resort to such measures must clearly and effectively protect against function creep in any form. This requires lawmakers to step back and consider the issue of biometrics-based systems at a holistic level, and to clearly delineate acceptable uses of this technology from those that are unacceptable or simply unnecessary. As has been seen above, there has been a singular failure in this area in relation to biometric data systems. The United Kingdom's Information Commissioner has cautioned that we are sleepwalking our way into a surveillance society [5]. Biometrics systems are an essential component of any such order. Once implemented, such systems create a momentum of their own: an identity document that is the 'gold standard' (and on the promulgation of which vast amounts of money have been spent) can and will be required for an ever-increasing array of identity-based services – even where there is really no need at all for such a high degree of certainty as to the identity of the user. The end result is that vast amounts of individually innocuous information are unnecessarily generated that can be collated and cross-referenced to create a staggeringly detailed – and highly intrusive – picture of an individual's lifestyle. The very availability of such a
picture is antithetical to a person’s inherent dignity; it has a fundamentally chilling impact upon her ability to make, free from the prying eyes of others, the social, political, economic, sexual, religious and other choices that liberal societies hold to be central to an individual’s personal development and wellbeing. Creating such a picture is an arbitrary attack on privacy (even if purportedly permissible under existing legislation) against which States parties to the ICCPR are obliged to protect in law.
References

1. International Covenant on Civil and Political Rights, 999 United Nations Treaty Series 171
2. Warren, S., Brandeis, L.: The Right to Privacy. Harvard Law Review 4, 193–220 (1890)
3. UN Human Rights Committee: General Comment No. 16: The Right to Respect of Privacy, Family, Home and Correspondence, and Protection of Honour and Reputation (Art. 17). UN Doc HRI/GEN/1/Rev.1 at 21, §10
4. Toonen v Australia [1994] IHLR 27 (UNHRC Communication No. 488/1992, decided under the 1st Optional Protocol to the ICCPR)
5. Information Commissioner's Office: Data Protection, http://www.ico.gov.uk/Global/data_protection_research_archive.aspx (accessed January 31, 2010)
Have a Safe Trip
Global Mobility and Machine Readable Travel Documents: Experiences from Latin America and the Caribbean

Mia Harbitz and Dana King

Inter-American Development Bank, 1300 New York Avenue N.W., Washington D.C. 20577, U.S.A.
{miah,danak}@iadb.org
Abstract. This paper examines the use of personal identification information currently requested by governments to issue international travel documents, as well as the scope of government duties to manage and protect this sensitive information. Many national governments have moved to augment citizen identification requirements for international travel. Increasingly, personal identification information—including biometric data—is collected for issuance of travel documents and required for border crossings. It is of interest to governments to efficiently process travelers, identify potential security threats, and prevent identity fraud. However, less attention is paid to minimizing the intrusiveness of the identity information collected and to ensuring its effective management and security. This paper draws attention to these issues through an examination of case studies in Latin America and the Caribbean, and also makes recommendations for establishing standards in the collection and protection of personal identity information as related to border crossing. Keywords: Identity management, privacy, travel documents.
1 Introduction

As more people are traveling today, global mobility has raised security concerns for a number of countries, especially in the wake of the attacks on the World Trade Center and the Pentagon on September 11, 2001. The increased mobility represents a challenge in terms of tracking who exits and enters any given country. In response, countries have increased security measures in terms of travel documents by implementing machine-readable formats and augmenting the types of personal information required. In the past, passports served principally to establish the bearer's nationality and citizenship and to guarantee safe passage outside his or her home country. Today, passports and other travel documents have moved beyond what was initially envisioned and have become depositories of large amounts of personal
identification information. As such, there is a need to ensure that, in requiring their use and increasing the amount of personal information collected, states provide travelers with a set of benefits beyond safe passage. In 2004, the International Civil Aviation Organization (ICAO), the body responsible for standardizing travel document formats, issued a recommendation that all national governments issue machine-readable travel documents (MRTDs) with biometric data by April 2010. ICAO justified its recommendation by pointing to compelling government interests: promoting "confidence in the integrity of a State's travel documents" (http://www.icao.int/icao/en/atb/fal/fal12/documentation/fal12wp004_en.pdf); enabling rapid and precise traveler identification; promoting efficiency; allowing electronic monitoring and control of all stages of the application and issuance process; tracing stolen and/or fraudulently obtained documents; permitting electronic tracking of passports domestically; and facilitating the creation of databases to track high-risk categories of travelers and to speed the border control process (http://www2.icao.int/en/MRTD/Pages/BenefitstoGovernments.aspx). In light of these new measures, there is growing concern over how to protect travelers' privacy and limit state surveillance.

First of all, an individual's legal identity must be firmly established through birth registration, which gives him or her a name, a nationality, and citizenship, thereby providing access to the full range of human rights: civil, cultural, economic, political, and social. "Legal identity can be understood as a composite condition obtained through birth or civil registration which gives the person an identity (name and nationality) and variables of unique personal identifiers, such as biometrics combined with a unique identity number" [1]. The birth certificate is the feeder, or foundational, document required for a number of services and for obtaining other documents such as identification cards, voter registration cards, and passports. Citizens of developing countries face particular challenges when it comes to establishing their legal identities. Research carried out by the Inter-American Development Bank (IDB) has shown that, among many other challenges, most developing countries must deal with weak institutions and an archaic, complex legal framework when it comes to establishing and securing legal identity for their citizens [2]. Furthermore, the institutions responsible for identity and identification are severely restricted financially and administratively, are often seen as unreliable (even corrupt) by citizens, and do not necessarily provide services to the entire population.

Over the past five years, modernization processes have been initiated in registry systems throughout the Latin American and Caribbean (LAC) region. The shift from a two-book manual system to computerized civil registration is occurring simultaneously with growing government interest in implementing national identification systems that contain one or more forms of biometric information. The decision to computerize the civil registry, to move towards a national identification system, and to incorporate biometric data into identity documents inevitably raises privacy concerns. Privacy rights are at risk of being violated
based on how the initial collector agency handles personal information—the collection, verification, storage, and transfer of the data. Furthermore, security protocols are needed in order to restrict access to the information and provide protection from data mining or function creep. Indeed, as governments engage in more intrusive data collection and expand potential access to this information, there is an urgent need to put stronger privacy controls into place. These controls are also needed in the collection, storage, and management of personal data for issuance of travel documents and in the handling of these data at points of authentication, such as border crossings.
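For readers unfamiliar with what "machine-readable" means concretely, the passport's machine-readable zone (MRZ) defined in ICAO Doc 9303 is plain printed text whose fields are protected by simple check digits rather than by any cryptography; it is the biometric chip and the back-end databases discussed in this paper, not this legacy zone, that raise the harder privacy questions. The sketch below is a minimal illustration of the standard 7-3-1 check-digit computation, using a specimen document number of the kind found in the ICAO 9303 examples.

```python
# Minimal sketch of the ICAO Doc 9303 MRZ check-digit scheme.

def mrz_char_value(c: str) -> int:
    """Map an MRZ character to its numeric value: digits 0-9 as-is,
    letters A-Z as 10-35, and the filler '<' as 0."""
    if c.isdigit():
        return int(c)
    if c == "<":
        return 0
    return ord(c) - ord("A") + 10

def check_digit(field: str) -> int:
    """Weighted sum of character values with repeating weights 7, 3, 1,
    reduced modulo 10."""
    weights = (7, 3, 1)
    total = sum(mrz_char_value(c) * weights[i % 3]
                for i, c in enumerate(field))
    return total % 10

doc_number = "L898902C3"        # specimen document number
print(check_digit(doc_number))  # -> 6
```

Nothing in this zone is secret: anyone who can photograph the data page can read and verify it, which is precisely why the MRZ alone says nothing reliable about who is actually holding the document.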
2 Collection of Personal Information: Countervailing Interests
In the LAC region, most countries have mandatory national identity cards, and a number of countries include, or plan to include, biometric data on the cards. Institutionally, the agency that issues national identity cards is in many cases different from the agency or department that issues passports. This implies that a move towards MRTDs with biometric information may mean additional public databases holding personal data. These second-generation biometric passports are designed to guarantee the identity of the bearer; the question of how the data are managed at points of verification, however, remains open. While building extensive legal identity databases gives rise to both benefits and obligations, in order to ensure justice and equality, identification registries cannot become depositories of information that can be used to limit access to social, civic, or human rights, or to put individuals at risk of physical harm. Governments that collect sensitive biometric data have a duty to balance the interests of the state in using such data against the interests of individuals. In this manner, governments should ensure that the collection and use of data are narrowly tailored to the stated collection purpose, and that individuals who provide such data are both protected against its abuse and see some benefit from the collection. It is important that the individual whose data are being registered has the ability to correct information that has been entered incorrectly, and understands whom to contact if his or her personal information is used in an inappropriate manner after registration. Later, this paper examines three cases from the LAC region in which the countervailing state/citizen interests have not been adequately balanced. Although the examples draw on the region's experiences with civil registration and identification systems, the lessons are applicable to the debate over biometric MRTDs for two reasons. First, the use of biometrics in civil identification provides important information on the potential benefits and risks of collecting biometric information during international travel, which is relatively new. Second, in a world of high mobility and eroding geographical and social barriers, travel documents are becoming important and necessary identity documents. The three cases reviewed herein examine issues of privacy, minimization of information intrusiveness, and potential benefits to be studied with regard to biometric MRTDs.
2.1 Data Collection and Protection
As governments have moved to implement technological innovations in civil registries, there have been notable instances in which citizen data and privacy have been poorly protected. This emphasizes the need for a strong regulatory framework prior to moving forward with technology enhancements. For instance, in assembling digital databases of information, several governments in the LAC region have used private companies to create and maintain the civil registry and identification systems, which can lead to conflicts over ownership of the data, the database, and the maintenance of the identification system. In 2000, El Salvador's national population registry (Registro Nacional de las Personas Naturales, RNPN), responsible for both civil registry and identification, entered into a contract with a private company to accomplish the following: 1) develop, implement, and maintain a system to gather and process the personal information necessary to establish each citizen's identity; 2) produce and issue the identification card; and 3) create an up-to-date database of natural persons to enable the RNPN to fulfill its mandate. Though the contract specified that ownership of the data and the system's technological components belonged to the government of El Salvador, a conflict arose over the transfer of information system licenses and source codes to the RNPN. Though the case was ultimately resolved through arbitration in the government's favor, it raised questions about the importance of having a regulatory framework to guide the way states handle private and personal data and how they interact with private companies to which they may outsource specific responsibilities in the management of these data.
2.2 Minimizing the Intrusiveness of Identity Information Solicited
As in many other parts of the world, the definition of current national borders in the Americas did not follow pre-colonial national boundaries. As a result, current international borders divide several indigenous nations. Examples include the Tohono O'odham Nation, located on the border of the Southwestern United States and Northern Mexico; the Maya Nation, which encompasses areas of Mexico, Guatemala, Belize, and Honduras; the Mohawk Nation, which spans the northeast United States and Canada; and the Wayúu Nation, which spans the border between Colombia and Venezuela. How countries in Latin America and the Caribbean accommodate (or do not accommodate) the continuing cultural, social, economic, and political ties that bind transnational indigenous nations with regard to cross-border travel varies. For example, Colombia and Venezuela both recognize members of indigenous nations located on the Colombia-Venezuela border as dual citizens of the two countries, regardless of the nation state in which they were born (in Venezuela this is a constitutional right, see http://www.reliefweb.int/rw/rwb.nsf/AllDocsByUNID/5ec60ddfcc9b602685256e9b006425d5; for Colombia, see http://www.iadb.org/SDS/ind/ley/docs/CO-19.htm). Citizens from Canadian indigenous
nations are able to cross the U.S. border upon presentation of a tribal enrollment card, whereas indigenous U.S. citizens are not similarly accommodated when crossing the Canadian border. In many cases, tribes sit on land that is remote and not subject to heightened patrolling; as such, members pass freely across the border irrespective of the actual laws. Yet this is a precarious situation and may be subject to sudden disruption when government policies change. When government enforcement of immigration laws was strengthened in the United States following September 11, 2001, many Tohono O'odham families and communities were divided, with members on one side of the border unable to participate in the social, economic, political, and cultural activities of those on the other side [3]. Similarly, several Wayúu communities have been caught up in the conflict between Venezuela and Colombia. While security concerns are valid, particularly with regard to the drug trade on the United States-Mexico border or the arms trade on the Colombia-Venezuela border, agencies setting travel document requirements – especially those from countries that are signatories to the 1989 Indigenous and Tribal Peoples Convention (see Convention 169: http://www.ilo.org/ilolex/gci-lex/convde.pl?c169) – should strive to respect territorial and cultural unity by minimizing barriers when possible. In the case of the Tohono O'odham Nation, transnational families are being required to submit more personal information to a wide array of government agencies in order to participate in the most basic family and communal activities, adding undue burdens to the nation's ability to conduct traditional cultural, economic, and social activities as an integral body. In these instances, the collection of biometric data provides a method to identify and track citizens without providing them any additional benefits.
2.3 The Haitian Case
It is important to note that in some cases the collection of biometric data for the purpose of establishing a unique identity may provide important benefits. For citizens of countries with ineffective civil registries, biometric travel documents may facilitate their identification within their country of origin as well as when they are traveling. For example, in Haiti, where it is estimated that between 30 and 60 percent of the people do not have birth certificates, the inability to prove identity (and thus citizenship) is an important obstacle to obtaining a national identification card and travel documents. This is significant given that many Haitians seek both short- and long-term work outside the country, particularly in the Dominican Republic. Without adequate travel documents, these citizens are essentially stateless and lack the protection that such documents offer. Biometric travel documents might be important tools to establish more quickly a unique, verifiable identity for an individual. In turn, citizens could benefit from the protection of travel documents without having to pass through the labor-, cost-, and time-intensive process of establishing their identities through civil registration. However, there are certain risks associated with this approach; for example, it may make it easier for a person to establish a false identity. Also, while it
could facilitate border crossing for adults, it does not solve the problem of the under-registration of births, nor does it supply the Haitian state with direly needed vital statistics. The best long-term solution for the country would be to rebuild a functional civil registration agency as civil identification takes place.
3 Considerations for Establishing Standards for the Collection and Protection of Personal Identity Information
In this day and age, and in the LAC region, most countries require their citizens to carry some form of identification at all times, and most certainly when they travel. The instances that require a form of identification and authentication are plentiful, and a lack of identity credentials leads to certain social, economic, and political exclusion, giving rise to what Peter Swire and Cassandra Butts call the "ID divide" [4]. Since national identification cards are often issued by different entities than those that issue passports, there is an inherent risk that states will obtain the same information twice or increase the total amount of personal information collected, including individuals' biometric data. Greater collection of personal information poses inherent risks to the safeguarding of the information and the privacy of the individual traveler. Further ethical questions can be raised when citizens are obliged to disclose more and more personal data, especially when the data are not strictly necessary; this can lead to conflict when a third party or the private sector has access to the data, as was the case in El Salvador. There is a growing awareness of and discussion on the privacy and ethics aspects of the collection and management of identity data in other parts of the world. A recent report by the Hong Kong Privacy Commissioner for Personal Data raises the issue of the excessiveness of biometric data collection and use (http://www.pcpd.org.hk/textonly/english/infocentre/press_20090713a.html). The right to privacy is a highly developed area of law in Europe, and the European Union is seeking to further strengthen its 1995 directive to member states (http://europa.eu/legislation_summaries/information_society/l14012_en.htm). This debate is still incipient in the LAC region.

What data should be collected and verified at the initial point of entry (registration) into the database, and what data should be accessed to authenticate identity at border crossings? Is the process verification-based or identification-based? How much identity information should be accessed, and how much retained? On one hand, from the citizen's point of view, it may be preferable not to have any personal identity information retained by third parties. On the other hand, if the information is retained in the new or foreign database established by the receiving country, it may facilitate the citizen's subsequent border crossings by avoiding a false positive response. The laws of the issuing country protect the holders of travel documents within that country, but once they cross a
http://www.pcpd.org.hk/textonly/english/infocentre/press_20090713a.html http://europa.eu/legislation summaries/information society/ l14012 en.htm
Have a Safe Trip Global Mobility and Machine Readable Travel Documents
53
border and become a foreigner (or “alien” in the United States), the protection ceases. If a foreigner’s information is mishandled, misused, or lost, there is no advocate within the national system with a direct interest in ensuring that the problem is rectified. Furthermore, foreigners face barriers to organizing themselves (identification, channels for meeting, etc.) that make it more difficult for them to advocate collectively and effectively push destination governments to address real or potential abuses of their personal information. This is why as nations around the world move to implement ICAO’s recommendation, it is critical that a comprehensive International Identity Bill of Rights is also adopted to ensure that travelers are protected as they provide increasingly invasive personal information to a growing number of State entities. This bill would outline what data should be collected and verified at initial points of entry (registration) into the primary database, and what and how data should be accessed to identify or authenticate identity at border crossings. Also, it would address how much identity information should be accessed by whom and how much information is retained each time a travel document is read. A number of countries have outsourced data gathering to third parties, or nonstate actors, that may have biased economic or political interests in personal data management. Another consideration for establishing standards is the availability and capacity of human resources to embrace and manage new technology. The civil registries and civil identification agencies in the LAC region as a whole have to confront important challenges with regards to the workforce. Identification and authentication of one’s identity does not necessarily equate violation of privacy. Authentication is a means, not an end, and ideally the process both protects the individual against identity theft and gives the relevant authority assurance that the right individual gains access. The successful implementation of this process requires the development of citizen trust in governments and the provision of clear benefits associated with obtaining an identity document or a passport. Biometric identification is also a protection against identity usurpation, and can help individuals gain lawful access to social, civic, and economic rights. The adoption and acceptance of technology in developing countries sometimes precedes regulatory framework, and this carries a grave risk in terms of the potential to mismanage personal information, in addition to financial costs, for the country. Before a system is implemented, there have to be clear guidelines for how personal information is to be managed and how data are to be transmitted and stored. There should be an increased focus on security and efficiency in existing systems and on traveler-centered issues such as privacy, cultural respect, and potential benefits.
4 Conclusions and Implications for Practice
Citizens need the permission of governments to cross borders legally, and in order to request permission, they must have formal identification. It is therefore necessary for all states to establish a unique identity for each citizen when he or she is born, both as a means of protection and so that citizens can exercise their full range of human rights, including the right to free passage. The legal framework that governs international travel needs revision and modernization to reflect justice and equality, and governments must, at all costs, prevent civil registers (or civil identification registers) from becoming repositories of information that can be used to limit access to social, civic, or human rights, or to put individuals at risk of physical harm.

In an identity management framework, identity is both a private and a public matter. While registries that contain unique personal identification information are public property, the information in the registry should be considered the private property of the citizens it concerns. Consideration must be given to the design of databases that contain personal information so as to avoid function creep caused by improper linking of data within or across databases or systems.

This conference should propose an International Identity Bill of Rights with an auditing body that will protect informational privacy and safeguard physical privacy for border crossings. Also, within existing institutional structures such as that of the United Nations, it may be beneficial to explore the creation of a global framework for the interoperability and integrity of identity and authentication in all sectors that seek coordinated action among nation-states on matters of mutual interest.

Acknowledgments. The authors thank all reviewers for their helpful and constructive comments and discussions.
References
1. Harbitz, M., Boekle, B.: Democratic Governance, Citizenship and Legal Identity. Working Paper. Inter-American Development Bank, Washington, D.C. (2009)
2. Ordoñez, D., Bracamonte, P.: Birth Registration: Implications for Access to Rights, Social Services and Poverty Reduction Programs in Six Countries in Latin America. Inter-American Development Bank, Washington, D.C. (2006)
3. Duarte, C.: Tohono O'odham: Campaign for Citizenship. Arizona Daily Star (2001), http://www.azstarnet.com/tohono/
4. Swire, P., Butts, C.: The ID Divide: Addressing the Challenges of Identification and Authentication in American Society. IDB Project Document. Center for American Progress, Washington, D.C. (2008)
5. Salter, M. (ed.): Politics at the Airport. University of Minnesota Press, Minneapolis (2008)
6. Cofta, P.: Towards a Better Citizen Identification System. Identity in the Information Society, ISSN 1876-0678 (2009), http://www.springerlink.com/content/2651u566158n0618/
7. Caplan, J., Torpey, J. (eds.): Documenting Individual Identity. Princeton University Press, Princeton (2001)
On Analysis of Rural and Urban Indian Fingerprint Images

C. Puri, K. Narang, A. Tiwari, M. Vatsa, and R. Singh

IIIT Delhi, India
{charvi08018,kanika08029,anujt,mayank,rsingh}@iiitd.ac.in
Abstract. This paper presents a feasibility study comparing the performance of fingerprint recognition on the rural and urban Indian population. The analysis shows that the rural population is very challenging, and existing algorithms/systems are unable to provide acceptable performance on it. On the other hand, fingerprint recognition algorithms provide comparatively better performance on the urban population. The study also shows that poor image quality, worn and damaged patterns, and some special characteristics affect the performance of fingerprint recognition.
1 Introduction
Fingerprints are considered reliable for identifying individuals and are used in both biometric and forensic applications [1]. In biometric applications, fingerprints are used for physical access control, border security, watch lists, background checks, and national ID systems, whereas in forensic applications they are used for latent fingerprint matching in crime scene investigation and to apprehend criminals and terrorists. The use of fingerprints for establishing identity started in the 16th century, and thereafter several studies were performed on anatomical formation, classification, recognition, feature categorization, the individuality of fingerprints, and many other topics. Over the last two decades, research in fingerprint recognition has seen tremendous growth. Several automated systems have been developed and used for civil, military and forensic applications. FBI-AFIS, US border security, and the EU passport/ID system are a few examples of large-scale applications of fingerprint biometrics.

A carefully designed fingerprint recognition system should be stable and robust to environmental dynamics and variations in data distribution. However, in real-world applications, establishing the identity of individuals can be very challenging. Sensor noise, poor quality, and partial capture may affect the performance of fingerprint recognition algorithms. Further, the interaction between an individual and the sensor can cause variations in image quality and biometric features. Poor quality images therefore contribute to the difficulty of detecting features in the image and hence decrease recognition performance.

The Indian environment is quite different from that of Western and European countries. Due to the amount of manual work an average Indian does every day, fingerprint features are generally affected, and it is very difficult to perform reliable feature extraction and matching. Current state-of-the-art algorithms/systems are also trained for recognizing good quality fingerprints and are not tailored for fingerprints with worn and damaged patterns. Moreover, there are no published results that demonstrate the effectiveness of fingerprint recognition algorithms in the austere Indian environment. In this research, we have performed a study to evaluate the performance of automatic fingerprint recognition algorithms on a small set of images pertaining to the Indian rural and urban population. Our analysis uses verification and identification accuracies in analyzing input biometric samples obtained from diverse and disparate environments. The next section describes the details of the experimental protocol and our analysis.
2 Experimental Evaluation on Indian Rural and Urban Population
The study is performed with two commercial fingerprint recognition systems and implementations of two algorithms from the literature, using two fingerprint databases that contain images from the rural and urban Indian population. In this section, we briefly describe the algorithms/systems, the databases used for validation, and the experimental analysis.

2.1 Algorithms and Systems Used for Experiments
This study is performed using existing algorithms from the literature and state-of-the-art fingerprint recognition systems:
– Implementation of minutiae feature extraction and matching [2], termed as Algorithm - A (see the sketch below)
– Implementation of filterbank-based feature extraction and matching [3], termed as Algorithm - B
– Two commercial fingerprint recognition systems, termed as System - C and System - D (our license agreements for the commercial systems do not allow us to mention product names)
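To give a flavor of minutiae-based matching such as Algorithm - A, here is a simplified sketch of ours (not the implementation of [2]; real matchers first align the two prints and use more elaborate pairing). Two minutiae are paired when they agree in position and ridge orientation within fixed tolerances:

```python
import math

def match_minutiae(set_a, set_b, dist_tol=15.0, angle_tol=math.radians(20)):
    """Count greedily paired minutiae between two pre-aligned fingerprints.

    Each minutia is a tuple (x, y, theta). The tolerances are illustrative.
    """
    unused = list(set_b)
    paired = 0
    for (xa, ya, ta) in set_a:
        for m in unused:
            xb, yb, tb = m
            close = math.hypot(xa - xb, ya - yb) <= dist_tol
            # Smallest difference between two angles on the circle.
            d_theta = min(abs(ta - tb), 2 * math.pi - abs(ta - tb))
            if close and d_theta <= angle_tol:
                paired += 1
                unused.remove(m)
                break
    # Normalized match score in [0, 1], as in many minutiae matchers.
    return paired / max(len(set_a), len(set_b))

probe = [(10, 12, 0.3), (40, 80, 1.2), (55, 33, 2.0)]
gallery = [(12, 11, 0.25), (41, 78, 1.3), (90, 90, 0.1)]
print(f"match score: {match_minutiae(probe, gallery):.2f}")  # 2 of 3 pairs -> 0.67
```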
2.2 Fingerprint Database

To perform this study, two sets of fingerprint databases are prepared. The first fingerprint database is prepared in an urban setting, i.e., from a population including undergraduate/graduate students and working-class individuals. We capture right-hand and left-hand index finger images from 75 individuals using a 1000 ppi optical scanner. There are therefore 150 classes in total, and for each class we capture 10 samples, so the urban fingerprint database contains 1500 images. The second fingerprint database is prepared in a rural setting, in which images are from villagers, farmers, carpenters and housewives. Similar to the urban fingerprint database, the rural fingerprint database contains images from 150 classes (i.e., 75 individuals, and for each individual we capture the right and left index finger) with 10 samples per finger. The total number of images in the rural database is also 1500.

2.3 Experimental Analysis
From both fingerprint databases, we select three samples per class for training and the rest for testing. We perform both verification (1:1 matching) and identification (1:N matching) experiments, and compute both the verification accuracy at 0.01% false accept rate (FAR) and the Rank-10 identification accuracy. Further, random cross-validation is performed 10 times using this training-testing partitioning. At each cross-validation trial, for each database and comparison, there are 525 genuine pairs and 547,575 impostor pairs. We perform two sets of experiments:
– Verification and identification experiments on the urban fingerprint database.
– Verification and identification experiments on the rural fingerprint database.
The ROC plots in Fig. 1 and the identification accuracies in Table 1 summarize the results of the two experiments. The key results and analysis of our experiments are summarized below.
1. Experiments with the urban fingerprint database show that all the algorithms and systems yield verification accuracy in the range of 86–93%. Similarly high performance is achieved in terms of Rank-10 identification accuracy. We observe that with high quality images, the algorithms/systems provide accurate results, whereas the performance suffers when the fingerprint quality is poor or the number of minutiae is limited (i.e., a reduced region of interest).
2. Experiments with rural fingerprints show strikingly different results. Since the database contains several non-ideal, poor quality fingerprint images, the accuracy (both verification and identification) of the algorithms/systems reduces significantly. On this database, the best accuracy is around 60%, which is certainly not acceptable in large-scale real-world applications. The decrease in performance is mainly because of the poor quality images, which cause either missing or spurious fingerprint features.

Table 1. Rank-10 identification accuracy (%) of different fingerprint recognition algorithms/systems

Algorithm/System   Urban   Rural
Algorithm - A      84      48
Algorithm - B      88      56
System - C         90      64
System - D         91      64
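For concreteness, a verification operating point such as the accuracy at 0.01% FAR used in the protocol above can be computed from the genuine and impostor score sets as in the following sketch (ours, not the paper's code; the randomly generated scores stand in for real match scores):

```python
import numpy as np

def verification_accuracy_at_far(genuine_scores, impostor_scores, target_far=0.0001):
    """Genuine accept rate at a target FAR for similarity scores (higher = better match).

    target_far=0.0001 corresponds to the 0.01% FAR used in this study.
    """
    impostor = np.sort(np.asarray(impostor_scores))[::-1]  # descending
    # Highest threshold that accepts at most target_far of impostor pairs.
    k = max(int(np.floor(target_far * len(impostor))), 1)
    threshold = impostor[k - 1]
    gar = np.mean(np.asarray(genuine_scores) >= threshold)
    return 100.0 * gar, threshold

# Hypothetical scores, sized like one cross-validation trial.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 525)        # 525 genuine pairs
impostor = rng.normal(0.3, 0.1, 547575)    # 547,575 impostor pairs
acc, thr = verification_accuracy_at_far(genuine, impostor)
print(f"Verification accuracy at 0.01% FAR: {acc:.1f}% (threshold {thr:.3f})")
```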
Fig. 1. Fingerprint verification performance using (a) the urban fingerprint database and (b) the rural fingerprint database
Fig. 2. Differences in image quality can cause reduced performance: a 35-year-old housewife in rural settings (second image with henna/mehndi applied on the finger)
Fig. 3. An example illustrating a case in which it is difficult to capture good quality fingerprint images
3. We analyzed the fingerprint images in the rural and urban databases, specifically those causing errors. We found that there are some specific causes that are more relevant in the Indian subcontinent than in Western and European countries. For example, as shown in Fig. 2, Lawsonia inermis (commonly known as henna or mehndi) can cause significant differences in the quality of fingerprint images. Widely used by women in the Indian subcontinent during festivals, henna is applied on the hands and fingers; when it is present, fingerprint sensors may not properly capture fingerprint features.
4. Another important cause is manual labor. In general, the rural population performs several manual tasks that cause worn and damaged fingerprint patterns. As shown in Fig. 3, it becomes difficult to capture good quality images due to damage in the fingerprint patterns. Further, we observed that in most of the fingerprint images from rural settings, scars and warts (Fig. 4) are predominant and affect the performance of fingerprint recognition.

Fig. 4. Some features, such as scars and warts, are difficult to analyze with current algorithms/systems

5. In our study, we performed another experiment in which images were shown to 15 individuals who were asked to perform identification. Note that all these individuals were initially trained to recognize fingerprints. We recorded their responses, such as the feature(s) playing an important role in the decision/matching process. We observed that although the human mind processes multiple features, certain features are particularly important while identifying analogous fingerprints. In general, the first feature that is widely used is the central pattern, such as core, delta, loops, and whorls (level-1). The second most important feature is the direction of ridges, followed by the ridge contour (shape). For some specific images, scars and warts play an important role. Other important features are ridge bifurcations and endings (minutiae) in local regions, incipient ridges, clarity of pores, dots, and special features such as hooks and eye formations.
3 Conclusion
In the large-scale deployment of fingerprint recognition systems, especially in the Indian environment, there are some challenges involved. Along with sensor noise and poor image quality, the presence of scars and warts and deteriorating ridge/minutiae patterns in fingerprints from the rural population affect the data distribution. Since there is no research that evaluates the performance of automatic fingerprint verification/identification in the Indian population, we studied the performance using standard fingerprint recognition systems and fingerprint databases collected from the rural and urban Indian population. The results show that on the rural database, these algorithms/systems are unable to provide acceptable performance. However, for the database collected from the urban population, fingerprint systems provide comparatively better performance. The analysis reveals that it is difficult to extract features from the fingerprints of people who do a lot of manual work, which in turn affects the performance of standard systems.
Acknowledgment. The authors would like to acknowledge the volunteers who participated in this study.
References
1. Maltoni, D., Maio, D., Jain, A.K., Prabhakar, S.: Handbook of Fingerprint Recognition. Springer, Heidelberg (2003)
2. Jain, A.K., Bolle, R., Hong, L.: On-line fingerprint verification. IEEE Transactions on Pattern Analysis and Machine Intelligence 19(4), 302–314 (1997)
3. Jain, A.K., Prabhakar, S., Hong, L., Pankanti, S.: Filterbank-based fingerprint matching. IEEE Transactions on Image Processing 9(5), 846–859 (2000)
Privacy Protection in High Security Biometrics Applications

Nalini K. Ratha

IBM T. J. Watson Research Center, Hawthorne, NY 10532, USA
[email protected]
Abstract. While biometrics is useful for secure and accurate identification of a person, it also has serious privacy implications in the handling of large databases of biometrics. Standard encryption techniques have limited use for handling biometric templates and signals, as intra-person variations cannot be handled effectively using information security techniques. As a way to enhance the privacy and security of biometric databases, we present a pattern recognition-based model for analyzing the threats to a biometrics-based authentication system, and also a novel solution that enhances privacy. Cancelable biometrics is an emerging concept in which transformations that hide the biometric signatures (fingerprints, faces and irises) are designed so that identity can be established with added security. Other methods have also been proposed for enhancing privacy in biometrics. In this paper, we describe our threat model for biometric recognition and recent advances proposed for privacy enhancements for fingerprint and iris biometrics.
1 Introduction
Automated biometric authentication systems help to alleviate the problems associated with existing methods of user authentication based on possessions and/or knowledge. The introduction of biometrics is often considered to improve security. However, a thorough security analysis needs to be carried out to identify weak points that exist or will be introduced in any biometric installation, since these weak points will be exploited by hackers during the operation of a system. There have been several instances where artificial fingerprints [2] have been used to circumvent biometric security systems. Similar attacks are possible in other biometric modalities, e.g., face masks to hide identity or designer iris lenses to fool iris recognition systems. The popular press is often much concerned with the spoofing of biometric systems, which creates a very negative impression of biometrics among common users. Although the advantages of biometrics-based human recognition and its role in a secure authentication system are clear and undisputed, particularly when it comes to non-repudiation, biometric system designers need to be aware of the threats and security holes created by biometrics in the overall system and address them at the outset to avoid being hacked later. In order to analyze the possible weak points, we use a pattern recognition model for biometric authentication systems [3].

With the rapid adoption of biometric systems in many countries to enroll very large populations, there is a growing concern that biometric technologies may compromise the privacy and anonymity of individuals. Observing that this is an important area of future research, a 2003 NSF Workshop on a Biometrics Research Agenda [1] identified "anonymous biometrics" as a privacy-enhancing technology of great interest. In this paper, we address two important issues related to large biometric databases: (i) security analysis and enhancements; and (ii) privacy enhancements. The security of large databases is most important, as any loss of data from very large databases can be catastrophic. Privacy and security are two sides of the same coin when it comes to large biometric applications. We will present the privacy issues with large biometric databases and describe the cancelable biometrics transform, which can address these privacy issues. Many other methods for biometric template protection have been proposed in the literature, and readers can find more details about such techniques in [13]. From our perspective, privacy is limited to the use of biometric data beyond the intended purposes without the knowledge of the subjects. Our solution focuses on addressing that aspect of privacy. There may be many other aspects from a legal point of view, and readers interested in those should consult other literature in this area.

This paper is organized as follows. In Section 2, a pattern recognition model of biometric systems is described. Based on this model, we identify several attack points on a biometric system in Section 3, and countermeasures are briefly described. In Section 4, we highlight the privacy issues in a large database of biometrics. The cancelable biometrics transform is introduced in Section 5, and conclusions are presented in Section 6.
2 Pattern Recognition Model

A biometric authentication system is organized as shown in Figure 1. An input device measures a biometric sample from a human being. This sample is converted into a machine representation of the sample (which can be the sample itself) by the "feature extractor." This machine representation is matched against a reference representation, previously stored during enrollment. We will use this model to describe the various attack points. Any biometric system can be described as a four-stage system as in Figure 1. More specifically, these stages are:
A. The first stage is the actual acquisition of the input biometric sample, such as acquiring a fingerprint image using a fingerprint scanner.
B. The second stage is the processing of the biometric sample to obtain a digital machine representation that captures the "uniqueness" of the biometric sample.
C. The third stage is the computation of a match score between two or more representations: one representation of the input sample and one or more stored (enrolled) representations (from database F). There are several factors that determine how precisely this score can be estimated for some input sample. This stage also makes the actual match versus no-match decision, which is best made with the particular application in mind.
D. The final stage is the application, which is protected by the biometric authentication system. This might be immigration control, a financial institution's ATM, or a computer account.
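To make the four-stage decomposition concrete, here is a minimal sketch of such a pipeline (our illustration, not code from the paper; all function names, the toy similarity measure, and the threshold are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Template:
    features: list[float]  # machine representation of a biometric sample

def acquire_sample() -> list[float]:
    """Stage A: sensor acquisition (stubbed with fixed values here)."""
    return [0.11, 0.52, 0.33]

def extract_template(sample: list[float]) -> Template:
    """Stage B: feature extraction producing the machine representation."""
    return Template(features=sample)

def match_score(probe: Template, reference: Template) -> float:
    """Stage C: similarity between the probe and an enrolled representation."""
    diffs = [abs(p - r) for p, r in zip(probe.features, reference.features)]
    return 1.0 - sum(diffs) / len(diffs)

def authenticate(reference: Template, threshold: float = 0.9) -> bool:
    """Stages A-C feeding Stage D: grant access only on a match decision."""
    probe = extract_template(acquire_sample())
    return match_score(probe, reference) >= threshold

enrolled = extract_template(acquire_sample())  # enrollment (stage E in Fig. 1)
if authenticate(enrolled):
    print("Access granted to the protected application (stage D)")
```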
Fig. 1. Stages of a biometrics-based authentication system and enrollment system with identified points of attack (adapted from [3])
3 Attack Points

Attacks on a biometric system can come in various forms: hackers with an interest in breaking the application; phishing and pharming attacks that fake enrollment and/or authentication transactions remotely; and very focused attacks from competitors, or even government-sponsored agencies of rogue nations. Some of these attacks can be averted, and some of them will be extremely difficult to notice. Biometric system designers need to be aware of these attacks so that they can safeguard the system and the assets it is supposed to protect. Using this model (Figure 1), we identify several basic attacks that plague biometric authentication systems.
– Front-end attacks:
• Threat 1 is an attack involving the presentation of fake biometric data at the sensor A. This is the attack most often cited in the popular press and the most common type of attack.
• Attack point 8 is the enrollment center or application (E in Figure 1).
– Trojan horse attacks:
• Threat 3 is an attack by a 'Trojan horse' on the feature extractor B. The feature extractor could be attacked so that it produces a pre-selected feature set at some given time or under some specific condition.
• Attack 5 is again a Trojan horse: the matcher is attacked to directly produce an artificially high or low match score, thereby manipulating the match decisions.
– Communication channel attacks:
• Threat 2 is an attack on the channel between the sensor and the biometric system. This can be a replay attack (resubmission of a previously stored biometric signal) or an electronic impersonation.
• Attack point 4 is the communication channel between the feature extractor and the matcher.
• Threat 6, a very important threat, is the overriding of the output of the matching module C. The output of the matching module could either be a hard match versus no-match decision, or it could be just the probability of a match, where the final decision is left up to the application.
• Threat 7 is another channel attack, this time on the communication between the central or distributed database and the authentication system.
• Threat 9 is also a channel attack. Control of this channel allows an attacker to override the (biometric) representation that is sent from F, the biometric database, to C.
– Back-end attacks:
• Threat 10 is an attack on the database F itself. The database of enrolled (biometric) representations is available locally or remotely, possibly distributed over several servers. This threat is the unauthorized modification of one or more representations in the database.
• As noted in [4], the actual application D is a point of attack too (Threat 11).

These attacks can be thwarted by several methods applicable to each weakness:
– Liveness detection can alleviate issues with fake biometrics used to fool the sensors.
– Channel attacks can be countered by using information security methods such as encrypting the messages.
– Replay attacks can be addressed using challenge-response methods (see the sketch below).
– Policy-based measures can prevent dictionary attacks.
– Smartcard-based secure storage of templates can address the attacks on template databases.
– Trojan horse attacks can be prevented by using secure crypto hardware such as the IBM 4764.
Note that these methods require additional resources in terms of hardware or extra time to protect and defend against the various attacks. For more details, please refer to [3,6,7].
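As a concrete illustration of the challenge-response countermeasure for replay attacks mentioned in the list above (a sketch under our own assumptions, not a protocol from the paper; an HMAC over a server-issued nonce is one standard construction), the sensor can be required to bind each reading to a fresh challenge:

```python
import hmac, hashlib, os

SENSOR_KEY = os.urandom(32)  # secret assumed to be shared by sensor and matcher

def sensor_respond(challenge: bytes, biometric_reading: bytes) -> tuple[bytes, bytes]:
    """Sensor side: tag the reading with an HMAC over the fresh challenge."""
    tag = hmac.new(SENSOR_KEY, challenge + biometric_reading, hashlib.sha256).digest()
    return biometric_reading, tag

def server_verify(challenge: bytes, reading: bytes, tag: bytes) -> bool:
    """Server side: a message replayed under a new challenge fails verification."""
    expected = hmac.new(SENSOR_KEY, challenge + reading, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

challenge = os.urandom(16)                           # fresh nonce per transaction
reading, tag = sensor_respond(challenge, b"fingerprint-template-bytes")
print(server_verify(challenge, reading, tag))        # True
print(server_verify(os.urandom(16), reading, tag))   # False: replayed data is rejected
```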
4 Privacy Issues in Biometrics

Biometrics-based authentication schemes overcome many limitations of conventional authentication systems while offering usability advantages. However, despite its obvious advantages, the use of biometrics raises several security and privacy concerns, outlined below.
1. Biometrics is secure but not secret: Knowledge-based authentication methods rely totally on secrecy. For example, passwords and cryptographic keys are known only to the user, and hence secrecy can be maintained. In contrast, biometrics such as voice, face, signature and even fingerprints can be easily recorded and potentially misused without the user's consent. Face and voice biometrics are vulnerable to being captured without the user's explicit knowledge.
2. Biometrics cannot be revoked or cancelled: Passwords, crypto-keys and PINs can be changed if compromised. When tokens such as credit cards and badges are stolen, they can be replaced. However, a biometric is permanently associated with the user and cannot be revoked or replaced if compromised.
3. Cross-application invariance: Users of traditional authentication systems are strongly encouraged to use different passwords and tokens for different applications. However, biometrics-based authentication methods rely on the same biometrics. If a biometric is lost once, it is compromised forever, and if it is compromised in one application, the same method can be used to compromise all applications where that biometric is used.
4. Persistence: While invariance over time is a boon for biometrics, it can also be a big challenge from a security and privacy point of view when a biometric needs to be changed: the uniqueness contained in it remains the same even though the signal as well as the template can look different.
5. Cross-matching can be used to track individuals without their consent: Since the same biometric is used across all applications and locations, the user can potentially be tracked if one or more organizations collude and share their respective biometric databases.
In relation to privacy violations, the inability to revoke a biometric and cross-matching raise serious questions. A naive approach would be to think of using standard cryptographic techniques such as encryption or hash functions to enhance privacy. If we encrypt the biometric signal or template, it needs to be decrypted to carry out the matching, and that creates a possible attack point for gaining access to the decrypted signal/template. A commonly used method for protecting passwords involves one-way hash functions, which compute a digest from a message and match in the hash domain for exactly reproduced input. These hash functions have been shown to be almost impossible to invert. However, they also produce a significantly different digest even with minor changes in the input. Biometric signals constantly change, even when acquired within a very short time window; this is one of the challenges biometric templates face. The hash digests computed from these signals or templates are also quite different, as shown in Figure 2. Therefore, these functions cannot be used directly, despite being theoretically very strong, as they apply only to exact data.

Fig. 2. Hash functions applied to the fingerprint features of fingerprint images acquired in quick succession produce very different digests (adapted from [9])
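This avalanche behavior is easy to demonstrate (a sketch of ours; the byte strings are toy stand-ins for two template encodings of the same finger). A one-bit difference in the input completely changes the digest:

```python
import hashlib

template_a = bytes([12, 45, 200, 7, 88, 91])   # toy "template" from capture 1
template_b = bytes([12, 45, 201, 7, 88, 91])   # capture 2: one feature off by one

digest_a = hashlib.sha256(template_a).hexdigest()
digest_b = hashlib.sha256(template_b).hexdigest()

print(digest_a)
print(digest_b)
# The two digests share no useful structure, so matching in the hash
# domain is impossible unless the templates are bit-for-bit identical.
matching_hex_chars = sum(a == b for a, b in zip(digest_a, digest_b))
print(f"matching hex positions: {matching_hex_chars}/64")  # ~4 expected by chance
```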
Fig. 3. Principles behind cancelable biometrics for face recognition, where the face image is distorted in the signal domain prior to feature extraction. The distorted version of the face does not match the original biometric, while two instances of the distorted features match each other (adapted from [8])
5 Cancelable Biometrics

'Cancelable biometrics' is one of the original solutions for privacy-preserving biometric authentication. In this technique, instead of storing the original biometric, the biometric is transformed using a one-way function. This transformation can be performed either in the signal domain or in the feature domain [5]. This construct preserves privacy, since it will not be possible (or will be computationally very hard) to recover the original biometric template from such a transformed version. If a biometric is compromised, it can simply be re-enrolled using another transformation function, thus providing revocability. The construct also prevents cross-matching between databases, since each application using the biometric uses a different transformation. The key idea of this approach is explained in Figure 3. Another advantage of this approach is that the feature representation is not changed (in both signal- and feature-domain transformations), which allows us to use existing feature extraction and matching algorithms.

There are several challenges to be overcome before successfully designing a cancelable transform. They are as follows:
1. Registration: In order for the transform to be repeatable, the biometric signal must be positioned in the same coordinate system each time. This requires that an absolute registration be done for each biometric signal acquired prior to the transformation.
2. Entropy retention: The transformed version should not lose any individuality while we generate multiple personalities from the same template.
3. Error tolerance: Another problem to contend with, even after registration, is the intra-user variability that is present in biometric signals. The features obtained after transformation should be robust with respect to this variation.
4. Transformation function design: The transform has to further satisfy the following conditions:
(a) The transformed version of the fingerprint should not match the original. Furthermore, the original should not be recoverable from the transform. This property helps preserve the privacy of the transformed template.
(b) Multiple transforms of the same fingerprint should not match one another. This property of the transform prevents cross-matching between databases.
Cancelable biometrics methods have been applied to fingerprint templates [8,10] and also to iris images and templates [11]. Note that in these constructions the matcher did not change, so the methods retained backward compatibility. Another class of cancelable biometrics, called registration-free methods, relaxes this constraint by allowing the enhanced technique to use a special matching technique. In [12], a registration-free method with superior security is discussed. Several other methods have been described in the literature as well; readers can find more details in [13].
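As a feature-domain illustration of these properties (our sketch, not the construction of [8]; a key-derived random projection is just one simple way to obtain a hard-to-invert, revocable transform), note how re-enrollment amounts to changing a key, and different per-application keys prevent cross-matching:

```python
import numpy as np

def make_transform(app_key: int, dim: int = 8) -> np.ndarray:
    """Derive a revocable projection matrix from an application-specific key."""
    rng = np.random.default_rng(app_key)
    # Non-square random projection: many-to-one, so hard to invert exactly.
    return rng.normal(size=(dim // 2, dim))

def transform_template(features: np.ndarray, app_key: int) -> np.ndarray:
    return make_transform(app_key) @ features

def match(a: np.ndarray, b: np.ndarray, threshold: float = 0.95) -> bool:
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cos >= threshold

rng = np.random.default_rng(42)
enrolled = rng.normal(size=8)                       # toy biometric feature vector
probe = enrolled + rng.normal(scale=0.01, size=8)   # same user, slight noise

app1, app2 = 1111, 2222                             # hypothetical per-application keys
# True: same key, so the noisy re-capture still matches in the transformed domain.
print(match(transform_template(enrolled, app1), transform_template(probe, app1)))
# Almost surely False: different keys prevent cross-matching between databases.
print(match(transform_template(enrolled, app1), transform_template(enrolled, app2)))
```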
6 Conclusions and Future Work

While biometrics presents obvious advantages over password- and token-based security, the security and privacy concerns that biometric authentication raises need to be addressed. We outlined several attack points on a biometrics-based authentication system. To thwart the attacks, biometrics-based authentication methods need to rely on a variety of measures based on information security, liveness detection, and novel challenge/response methods. There are also several privacy issues when dealing with biometrics. We outlined the advantages of our novel cancelable biometrics transform over other approaches. A key step in the cancelable template construction is precise registration, if backward compatibility is desired; registration-free methods also exist to achieve cancelable biometrics. We highly recommend the use of cancelable biometrics methods over traditional biometrics to enhance security and privacy. It has been shown that cancelable/revocable methods are usable today, as they do not have a significant impact on accuracy while enhancing privacy. As the research community advances these methods, third-party evaluation of security attacks and assessment of the revocable methods need to evolve.
References
1. Rood, E., Jain, A.K.: Biometric research agenda: Report of the NSF workshop. In: Workshop for a Biometric Research Agenda, Morgantown, WV (July 2003)
2. Matsumoto, T., Matsumoto, H., Yamada, K., Hoshino, S.: Impact of artificial gummy fingers on fingerprint systems. In: Proceedings of SPIE, Optical Security and Counterfeit Deterrence Techniques IV, vol. 4677 (2002)
3. Bolle, R., Connell, J., Pankanti, S., Ratha, N., Senior, A.: Guide to Biometrics. Springer, Heidelberg (2003)
4. Kansala, I., Tikkanen, P.: Security risk analysis of fingerprint based verification in PDAs. In: Proc. IEEE AutoID 2002, Tarrytown, NY, pp. 76–82 (March 2002)
5. Ratha, N.K., Connell, J.H., Bolle, R.M.: Enhancing security and privacy in biometrics-based authentication systems. IBM Systems Journal 40(3), 614–634 (2001)
6. Ratha, N.K., Connell, J.H., Bolle, R.M.: Biometrics break-ins and band-aids. Pattern Recognition Letters 24(13), 2105–2113 (2003)
7. Bolle, R.M., Connell, J.H., Ratha, N.K.: Biometric perils and patches. Pattern Recognition 35(12), 2727–2738 (2002)
8. Ratha, N.K., Chikkerur, S., Connell, J.H., Bolle, R.M.: Generating cancelable fingerprint templates. IEEE Transactions on PAMI 29(4), 561–572 (2007)
9. Ratha, N.K., Chikkerur, S., Connell, J.H., Bolle, R.M.: Privacy enhancements for inexact biometric templates. In: Tuyls, P., Skoric, B., Kevenaar, T. (eds.) Security with Noisy Data. Springer, Heidelberg (2007)
10. Chikkerur, S., Connell, J., Ratha, N.: Generating registration-free cancelable fingerprint templates. In: Proc. of BTAS 2008 (September 2008)
11. Zuo, J., Ratha, N., Connell, J.: Cancelable iris biometric. In: Proc. of IAPR Conf. ICPR 2008 (December 2008)
12. Farooq, F., Bolle, R.M., Jea, T.-Y., Ratha, N.: Anonymous and revocable fingerprint recognition. In: Proc. of CVPR Workshop on Biometrics, Minneapolis (2007)
13. Cavoukian, A., Stoianov, A.: Biometric Encryption: A Positive-Sum Technology that Achieves Strong Authentication, Security and Privacy (2007), www.ipc.on.ca
Face Recognition and Plastic Surgery: Social, Ethical and Engineering Challenges

H.S. Bhatt, S. Bharadwaj, R. Singh, and M. Vatsa

IIIT Delhi, India
{himanshub,samarthb,rsingh,mayank}@iiitd.ac.in
Abstract. Face recognition systems have attracted much attention and have been applied in various domains, primarily surveillance, security, access control and law enforcement. In recent years, many advances have been made in face recognition techniques to address challenges such as pose, expression, illumination, aging and disguise. However, due to advances in technology, new challenges are emerging for which the performance of face recognition systems degrades, and plastic/cosmetic surgery is one of them. In this paper we comment on the effect of plastic surgery on face recognition algorithms and the various social, ethical and engineering challenges associated with it.
1 Introduction
Face recognition systems have the ability to recognize humans based on facial features such as geometry and texture. These systems are primarily used for ID cards, e-passports, e-borders, border security and law enforcement. The face has the benefit of being a non-invasive biometric when compared to other biometrics such as fingerprint or iris. Though human beings use the face for recognition very effectively, automated systems that use the face as a biometric have just begun to establish their reliability in real-world applications. However, the performance of face recognition systems is affected by well-established covariates such as the amount and direction of light on the face (illumination), pose, expression, and image quality (e.g., noise and blur). Researchers have also worked to develop face recognition systems that are resilient to aging and disguises. In our previous research, we introduced plastic surgery as one of the most challenging covariates in face recognition [1]. A recent incident in China accentuates the intricacies of this covariate. At Hongqiao International Airport's customs, a group of women were stopped because all of them had undergone plastic surgery and had become so unrecognizable that customs officers could not use their existing passport pictures to recognize them [2]. We can, therefore, assert that plastic surgery is a new and equally serious covariate that has sprung from our evolving cultures. It is also entwined with many ethical dilemmas regarding its treatment in biometric technology.

During the last decade, there has been tremendous growth in the number of plastic surgery procedures worldwide. According to the 2008 statistics provided by The American Society for Aesthetic Plastic Surgery [3], in the USA alone the total number of plastic surgeries has increased by about 162% since 1997. Liposuction (341,144 procedures), eyelid surgery (195,104), rhinoplasty (152,434), chemical peel (591,808) and laser skin resurfacing (570,880) are the most common plastic surgery procedures for faces. Looking at the plastic surgery distribution by age, the 0-18 years group contributes 2% of total procedures, 19-34 years around 22%, 35-50 years around 45%, 51-64 years around 26%, and 65 and above 6%. Moreover, 20% of all plastic surgeries are performed on individuals belonging to racial and ethnic minorities. The statistics clearly indicate the popularity of plastic surgery across age groups, ethnicities and genders. Thus there is a great need for future face recognition systems to confront the challenges posed by facial plastic surgery.

In our preliminary study [1], we analyzed the effect of facial plastic surgery procedures on different face recognition systems widely used in the literature. Using pre- and post-surgery images of around 500 individuals, the performance of six face recognition algorithms, namely Principal Component Analysis, Fisher Discriminant Analysis, Geometric Features, Local Feature Analysis, Local Binary Patterns, and a Neural Network Architecture based on the 2D Log Polar Gabor Transform, was evaluated. The results indicate that facial plastic surgery is one of the most challenging covariates for current face recognition algorithms and that more research is required in this direction. Along with performance issues, there are several other challenges associated with the problem of facial plastic surgery. Due to the increased dependence on face recognition systems for law enforcement, security and surveillance at crucial places, face recognition systems must learn to tackle such challenges. In this paper, we discuss the social, ethical and engineering challenges to face recognition posed by facial plastic surgery procedures.
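As background on the appearance-based algorithms listed above, the following minimal sketch (ours, purely illustrative; it is not the evaluation code of [1], and the random data stand in for aligned face images) shows the core of a Principal Component Analysis ("eigenfaces") matcher:

```python
import numpy as np

def train_pca(faces: np.ndarray, n_components: int = 20):
    """faces: (n_images, n_pixels) matrix of vectorized, aligned face images."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Principal directions via SVD of the centered data; rows of vt are components.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(face: np.ndarray, mean: np.ndarray, components: np.ndarray) -> np.ndarray:
    return components @ (face - mean)

def match_distance(gallery_face, probe_face, mean, components) -> float:
    """Smaller distance in eigenface space means a more likely match."""
    g = project(gallery_face, mean, components)
    p = project(probe_face, mean, components)
    return float(np.linalg.norm(g - p))

# Hypothetical data: 100 random "faces" of 32x32 pixels, flattened.
faces = np.random.default_rng(0).normal(size=(100, 32 * 32))
mean, components = train_pca(faces)
d = match_distance(faces[0], faces[0] + 0.01, mean, components)
print(f"distance between a face and its slightly perturbed copy: {d:.3f}")
```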
2 Plastic Surgery
Facial plastic surgery is generally used for correcting facial feature anomalies or improving facial appearance, for example, removing birth marks, moles and scars, and correcting disfiguring defects. These surgical procedures prove beneficial for patients suffering from structural or functional impairment of facial features, but they can also be misused by individuals to conceal their identities with the intent to commit fraud or evade law enforcement. Such procedures may allow anti-social elements to move around freely without any fear of being identified by a face recognition or surveillance system. Because the results of facial plastic surgery are long-lasting or even permanent, it provides an easy and robust way to dupe law enforcement and security mechanisms. Face recognition after plastic surgery can lead to the rejection of genuine users or the acceptance of impostors. Moreover, because these procedures modify the shape or texture of facial features to varying degrees, it is difficult to find correlations between pre- and post-surgery facial geometry.
Facial plastic surgery procedures can be classified as (1) disease-correcting local plastic surgery, in which an individual undergoes local plastic surgery for correcting defects or anomalies or improving skin texture, and (2) global plastic surgery, in which the complete face structure is altered (e.g., a face lift). Local plastic surgery is primarily aimed at reshaping and restructuring facial features to improve appearance. Measurable quantities such as the geometric distances between facial features are altered, nonlinearly, as a result of such procedures, but other aspects, such as the texture of the skin and the overall appearance, remain the same. In global surgery, the complete facial structure can be reconstructed. This type of facial plastic surgery is aimed at reconstructing the features to cure some functional damage as well as to improve appearance. Global plastic surgery entirely changes the face's appearance, skin texture and other facial geometry, making it arduous for any face recognition system to recognize faces before and after surgery. Global plastic surgery may be exploited by perpetrators to dupe law enforcement and security mechanisms, posing a great threat to society despite all the security mechanisms in place, even when the subject appears to be a cooperative user.

2.1 Facial Plastic Surgery Procedures
This section discusses some of the popular facial plastic surgery procedures and how they affect facial features during recognition. The most popular facial plastic surgery is Rhinoplasty, i.e., nasal surgery. It is used to reconstruct the nose in cases involving birth defects or accidents where the nose bones are dented, and also to cure breathing problems caused by the nasal structure. Blepharoplasty (eyelid surgery) is used to remodel the eyelids when an over-growth of skin tissue on the eyelid causes vision problems. Fig. 1 illustrates the effects of Rhinoplasty and Blepharoplasty. The brow lift (forehead surgery) is generally recommended to patients who suffer from sagging eyebrows (because of aging) which obstruct vision. Mentoplasty (chin surgery) is used to reshape the chin, including smooth rounding of the chin and reducing or augmenting the chin bones. Cheek implants are used to improve the facial appearance. This is done by Malar augmentation, in which a solid implant is inserted over the cheek bone, or Sub-Malar augmentation, in which implants are fitted in areas of the cheeks that have a dug-in or hollow look. Otoplasty (ear surgery) brings the ears closer to the face, reduces the size of the ears and prunes some structural ear elements. Liposhaving (facial sculpting) removes excess fat attached to the skin surface of the face (chin, jaws). Skin resurfacing (skin peeling) is used to treat wrinkles, stretch marks, acne and other skin damage caused by aging and sunburn. Rhytidectomy (face lift) treats patients with severe burns on the face and neck, or is used to get a younger look by tightening the face skin and thus minimizing wrinkles. Lip augmentation involves proper shaping and enhancement of the lips with injectable filler substances. Craniofacial surgeries are employed to treat birth anomalies such as cleft lip and palate (a gap in the roof of the mouth), microtia (small outer ear) and other congenital defects of the jaws and bones. Dermabrasion is used to give a smooth finish to the face skin by correcting skin damaged by sunburn or scars (developed as a post-surgery effect), dark irregular patches (melasma) that grow over the face skin, and mole removal.

Fig. 1. Pre and post facial plastic surgery images. The first sample illustrates the effect of Rhinoplasty and the second sample shows the effect of Blepharoplasty.

Among all the techniques listed above, Rhinoplasty, Blepharoplasty, forehead surgery, Otoplasty, lip augmentation, and craniofacial surgery are purely local plastic surgery. Rhytidectomy (face lift) is purely global plastic surgery, whereas Liposhaving, skin resurfacing and Dermabrasion can be local as well as global plastic surgery. In our previous research, we observed that current face recognition systems cannot handle variations due to global plastic surgery procedures, and their performance deteriorates to an unacceptable level. Local plastic surgery procedures alter the geometry of fiducial points and the appearance of local features only. Therefore, face recognition algorithms yield better accuracy for images with local surgery than for images with global surgery. Moreover, the surgical procedures that modify key fiducial points such as the nose, forehead, chin, eyelids, eyebrows, mouth and lips have a much more pronounced effect on face recognition systems than the techniques that deal with the ears, mole removal, and dermabrasion. This is because most face recognition systems do not use the ear region and moles for recognition. The brow lift (forehead surgery) and chin surgery (Mentoplasty) also have a major effect on the performance of face recognition systems, because these features are used to normalize the face and estimate pose variations.
3 Challenges Due to Facial Plastic Surgery
This relatively unexplored research area, i.e., face recognition under variations due to plastic surgery, poses several ethical, social and engineering challenges. We briefly explain these issues in the following subsections.

3.1 Ethical and Social Challenges
The most important ethical and social issue is the invasion of privacy. The exact definition of what constitutes an individual's privacy, and its infringement, is a debated topic and varies hugely in different parts of the world. In general, privacy issues in facial plastic surgery are related to medical information, which is private and protected under law. Since in some cases these types of surgery depend on an individual's choice, they cannot be bound by any legal or social obligations. However, it should be the ethical responsibility of an individual to re-enroll after undergoing a facial plastic surgery procedure that has led to changes in facial features.

Deliberate identity theft, undergoing plastic surgery in order to bear a resemblance to someone else, is another major social challenge. Face recognition systems must be able to distinguish between a genuine and a stolen identity, for which the system must draw on laws and other cross-references apart from a recognition algorithm. In our opinion, as new complexities like facial plastic surgery emerge and gain popularity, we must constantly be a step ahead to ensure the reliability and accountability of face recognition.

Another major challenge in this research is to collect images of patients before and after facial plastic surgery. There are several concerns in collecting such a database, as patients are hesitant to share their images. Apart from the issues of privacy invasion, many who have undergone a disease-correcting facial surgery would also like to be discreet. Hence, there are no large databases available that can be used to train or test current face recognition algorithms, or to develop a new algorithm to match human face images before and after surgery.

3.2 Engineering Challenges
Apart from ethical and social issues, several engineering challenges are also important in developing algorithms to handle variations due to facial plastic surgery. The first is to have an algorithm that can determine whether a false acceptance or rejection is due to facial plastic surgery or to some other covariate such as aging or disguise. Since some local plastic surgery procedures preserve the overall appearance and texture of the face, this challenge may not be significant in some cases. However, in other cases, including global plastic surgery or full face-lift cases where the entire structure of the face is remodeled, it is of paramount interest to automatically differentiate among plastic surgery, aging, and disguise. In other words, a face recognition algorithm must be able to single out the variations in a face due to facial plastic surgery from the variations due to other covariates such as aging, disguise, illumination, expression and pose. Despite many advances in face recognition techniques, to the best of our knowledge, there exists no technique that can perform such classification.

Even if we somehow (e.g., manually) identify that a particular human face has undergone plastic surgery, it is still an arduous task for current face recognition algorithms to effectively match a post-surgery image with a pre-surgery face image. Therefore, another engineering challenge is to design an algorithm that can correlate facial features in pre- and post-surgery images. Local facial regions such as the nose, chin, eyelids, cheeks, lips and forehead play an imperative role in face recognition, and small variations in any of these features partially affect the neighboring features. Further, a combination of local plastic surgery procedures may result in a face fairly distinct from the original. The need to develop an algorithm that can assess such non-linear variations in pre- and post-surgery images makes the engineering challenge fiercer.

Finally, facial plastic surgery may also lead to identity theft. Deliberate identity theft in order to bear a resemblance to someone is a major social issue, but identity theft can also be purely unintentional, where a person who has undergone plastic surgery now closely resembles another person. This situation is a good test for robust face recognition systems. It is our assertion that these challenges should receive immediate attention from the research community, so that efficient face recognition algorithms can be developed that account for the non-linear variations introduced by facial plastic surgery procedures.
4 Conclusion
In this paper we have discussed the significance of facial plastic surgery as a challenging covariate for face recognition algorithms. The popularity of facial plastic surgery is on the rise, owing to advancements in medical technology and greater acceptance in society. Plastic surgery proves highly beneficial for patients with birth anomalies or those recovering from accidents or trauma, but it also poses a threat to society, as it can be exploited by criminals and terrorists to conceal their identity and evade law enforcement and security agencies. There is a lack of research efforts confronting this covariate. This may be because of the privacy issues involved in dealing with pre- and post-surgery facial images, or the engineering complications involved in designing automated algorithms. The paper emphasizes the ethical, social and engineering challenges that need to be addressed by the research community, so that future face recognition systems are capable of addressing the challenges posed by facial plastic surgery.
Acknowledgment. The authors thank the doctors/surgeons for providing useful information and feedback during the course of this research.
References
1. Singh, R., Vatsa, M., Noore, A.: Effect of plastic surgery on face recognition: A preliminary study. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Biometrics Workshop, pp. 72–77 (2009)
2. http://shanghaiist.com/2009/08/04/south_korean_plastic_surgery_trips.php (last accessed on 10/19/2009, 3:25 PM IST)
3. American Society for Aesthetic Plastic Surgery: Statistics (2008), http://www.surgery.org/media/statistics
Human Face Analysis: From Identity to Emotion and Intention Recognition

Massimo Tistarelli and Enrico Grosso

Computer Vision Laboratory, University of Sassari, Italy
{tista,grosso}@uniss.it
Abstract. Face recognition is the most natural means of recognition for humans. At the same time, images (and videos) of human faces can be captured without the user's awareness. The entertainment media and science fiction have greatly contributed to shaping the public view of these technologies, most of the time exaggerating their potential impact on one's privacy. Even though face images can be acquired in any place with hidden cameras, it is also true that face recognition technology is not dangerous per se. Rather, whenever properly deployed, it can serve the protection of citizens and also enhance user convenience. Face recognition today has achieved quite a high performance rate, and most of the problems hindering the use of this technology have now been solved. Faces can be analyzed and characterized on the basis of several features. A face can thus be tagged with several properties: not only the bearer's identity, but also his gender, approximate age and possible familiarity with others. Moreover, the analysis of facial expression may also lead to understanding the mood, and perhaps the emotional state and intentions, of the analyzed subject. May this lead to a "Big Brother" scenario? Is this technology going to hinder a person's freedom or privacy? These questions are still to be answered and mostly depend on tomorrow's good use of this emerging technology. As for today, many scenarios can be envisaged where face recognition technologies can be fruitfully applied. Among them, border control at airports and other ports of entry is just the one most addressed in the recent past. Other applications exist which have been overlooked and are worth more extensive study and deployment by both academia and industry.
1 Introduction
Visual perception is probably the most important sensing ability enabling humans to engage in social interaction and general communication. As a consequence, face recognition is a fundamental skill that humans acquire early in life and which remains an integral part of our perceptual and social abilities throughout our life span [1,2]. Faces provide information not only about the identity of people, but also about their membership in broad demographic categories (including sex, race, and age) and about their current emotional state [3,4,5,6,7]. Humans "sense" this information effortlessly and apply it to the ever-changing demands of cognitive and social interaction. Face recognition now also has a well-established place among current biometric identification technologies. It is clear that machines are becoming increasingly accurate
at the task of face recognition, and not only that. While only limited information can be inferred from a fingerprint or the shape of an ear, the face conveys far more information than the identity of its bearer. In fact, humans often look at faces not just to recognize identity but to infer other things, which may or may not be directly related to the other person's identity. For example, when speaking to someone we generally observe his or her face both to better understand the meaning of words and sentences and, above all, to understand the emotional state of the speaker. In this case, visual perception acts as a powerful aid to our hearing and language understanding capabilities [3]. Even though most past and current research has been devoted to improving the performance of automatic face identification in terms of accuracy, robustness, and speed in the assignment of identities to faces (a typical classification or data labeling problem) [8,9,10], the most recent research is trying to address more abstract issues related to face perception [6,11]. Age, gender, and emotion categorization have thus been addressed with some success (again, as a broader labeling or classification problem), but with limited application in real-world environments. This paper analyzes the potential of current and future face analysis technologies as compared to human perceptual abilities. The aim is to provide a broad view of this technology and to better understand how it can be beneficially deployed and applied to tackle practical problems in a constructive manner.
2 Face Recognition Technologies
First-generation biometrics has struggled to devise a 100% error-free face recognition system. The large variations in face appearance make it rather difficult to capture all relevant facial features regardless of the environmental conditions. Therefore, the most successful face recognition systems are based on imposing constraints on the data acquisition conditions. This is the case for the CASIA F1 system, which is based on active illumination by infrared illuminators and a near-infrared camera sensor. This is also the case for commercial products like the L1 .... portal for automated border checks. This, however, is not the way face recognition is going to revolutionize today's use of identification technologies. We expect face recognition systems to become more and more able to work "in the open air", detecting and identifying people remotely (from a distance), in motion, and under any illumination. The main breakthrough is expected to come from the use of high-resolution images and from exploiting the time evolution of the data rather than single snapshots.
In general, face recognition technologies are based on a two-step approach:
– an off-line enrollment procedure, which builds a unique template for each registered user. The procedure is based on the acquisition of a pre-defined set of face images, selected from the input image stream (or a complete video), and the template is built upon a set of features extracted from the image ensemble;
– an on-line identification or verification procedure, in which a set of images is acquired and processed to extract a given set of features. From these features a face description is built and matched against the user's template.
Regardless of the acquisition devices used to grab the image streams, a simple taxonomy can be based on the computational architecture applied to extract distinctive
and possibly unique features for identification, and to derive a template description for subsequent matching.
The two main algorithmic categories can be defined on the basis of the relation between the subject and the face model, i.e., whether the algorithm is based on a subject-centered (eco-centric) representation or on a camera-centered (ego-centric) representation. The former class of algorithms relies on a more complex model of the face, which is generally 3D or 2.5D and strongly linked to the 3D structure of the face. These methods rely on a more complex procedure to extract the features and build the face model, but they have the advantage of being intrinsically pose-invariant. The most popular face-centered algorithms are those based on 3D face data acquisition and on face depth maps.
The ego-centric class of algorithms relies strongly on the information content of the gray-level structures of the images. The face representation is therefore strongly pose-variant, and the model is rigidly linked to the face appearance rather than to the 3D face structure. The most popular image-centered algorithms are the holistic or subspace-based methods, the feature-based methods, and the hybrid methods.
On top of these elementary classes of algorithms, several elaborations have been proposed. Among them, kernel methods have greatly enhanced the discrimination power of several ego-centric algorithms, while new feature analysis techniques such as the local binary pattern (LBP) representation have greatly improved the speed and robustness of Gabor-filtering-based methods. The same considerations hold for eco-centric algorithms, where new shape descriptors and 3D parametric models, including the fusion of shape information with the 2D face texture, have considerably enhanced the accuracy of existing methods.
2.1 Face Analysis from Video Streams
When monitoring people with surveillance cameras at a distance, it is possible to collect information over time. This allows a richer representation to be built than from a single snapshot, and a "dynamic template" can therefore be defined. Such a representation can encompass both physical and behavioral traits, thus enhancing the discrimination power of the classifier applied for identification or verification. The representation of the subject's identity can be made arbitrarily rich, at the cost of a larger template size.
Several approaches have been proposed to generalize classical face representations from a single view to multiple views. Examples of this kind can be found in [9,13] and [14,15,16], where face sequences are clustered using vector quantization into different views and subsequently fed to a statistical classifier. Recently, Krueger, Zhou and Chellappa [17,18] proposed the "video-to-video" paradigm, in which the whole sequence of faces acquired during a given time interval is associated with a class (identity). This concept implies the temporal analysis of the video sequence with dynamical models (e.g., Bayesian models) and the "condensation" of the tracking and recognition problems. Other face recognition systems, based on the still-to-still and multiple-stills-to-still paradigms, have been proposed [19,20,21]. However, none of them is able to effectively handle the large variability of critical parameters such as pose, lighting, scale, facial expression, and forgeries of the subject's appearance (e.g., a beard). Other interesting approaches are based on the extension of conventional,
parametric classifiers to improve the "face space" representation. Among them are the extended HMMs [22], the pseudo-hierarchical HMMs [23,24], and parametric eigenspaces [25], in which the dynamic information in the video sequence is explicitly used to improve the face representation and, consequently, the discrimination power of the classifier.
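To make the generic two-step pipeline of the previous section concrete, the following minimal sketch implements an ego-centric, LBP-based enrollment and verification flow. It is ours, not any of the cited systems: face crops are assumed to be pre-aligned grayscale NumPy arrays, and the LBP variant, grid size, and distance threshold are illustrative simplifications that a real deployment would have to calibrate.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP code for each interior pixel of a 2-D array."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    # Offsets of the 8 neighbours, clockwise from the top-left corner.
    shifts = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        code |= ((nb >= c).astype(np.int32) << bit)
    return code

def lbp_template(gray, grid=4):
    """Concatenated per-cell LBP histograms: one holistic feature vector."""
    code = lbp_image(gray)
    h, w = code.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = code[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)

def enroll(face_images):
    """Off-line step: build one template from a set of enrollment images."""
    return np.mean([lbp_template(img) for img in face_images], axis=0)

def verify(face_image, template, threshold=0.5):
    """On-line step: chi-square distance between probe and stored template.

    The threshold is a placeholder; it trades false acceptance against
    false rejection and must be tuned on real data.
    """
    probe = lbp_template(face_image)
    d = 0.5 * np.sum((probe - template) ** 2 / (probe + template + 1e-10))
    return d < threshold, d
```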
3 Facial Expression and Emotion Recognition
Rosalind Picard [26] defined the notion of affective computing as computing that relates to, arises from, or deliberately influences emotions. This concept may be formulated simply as giving a computer the ability to recognize and express emotions (which is rather different from the question of whether computers can have feelings) and, consequently, to develop the ability to respond intelligently to human emotion. Emotions are an integral and unavoidable part of our life. As such, should we be scared if machines acquire the ability to discern human emotions? Would it be an advantage or a limitation for the user if a personal computer or a television were capable of sensing our mood and providing appropriate services accordingly? Should we perceive it as a "trespassing" of our privacy if a hidden camera in an airport lounge recognizes our tiredness and activates a signal to direct us to the rest facilities?
The most expressive way humans display emotion, and the easiest to capture, is through facial expressions [36]. Humans detect and interpret faces and facial expressions in a scene with little or no effort. Still, developing an automated system that accomplishes this task is rather difficult. There are several related problems: detecting an image segment as a face, extracting the facial expression information, and accurately classifying the facial expression within a set of pre-assessed emotion categories. This goal has been studied for a relatively long time with the aim of achieving human-like interaction between human and machine.
Since the early 1970s, Paul Ekman and his colleagues have performed extensive studies of human facial expressions [27]. They found evidence for "universal facial expressions" representing happiness, sadness, anger, fear, surprise, and disgust. They studied facial expressions in different cultures, including preliterate cultures, and found much commonality in the expression and recognition of emotions on the face. However, they observed differences in expressions as well, and proposed that facial expressions are governed by "display rules" within different social contexts. In order to facilitate the analytical coding of expressions, Ekman and Friesen [28] developed the Facial Action Coding System (FACS). In this system, the movements of the face that lead to facial expressions are described by a set of action units (AUs), each of which has some related muscular basis. Coding facial expressions in this system is done manually by following a set of prescribed rules. The inputs are still images of facial expressions, often at the peak of the expression.
Ekman's work inspired many researchers to analyze facial expressions by means of image and video processing. By tracking facial features and measuring the amount of facial movement, they attempt to categorize different facial expressions. Past work on facial expression analysis and recognition has used these "basic expressions" or a subset of them. Fasel [29], Pantic [30] and, more recently, Gunes [31] and Zeng [32] provide in-depth reviews of much of the research done in automatic facial expression recognition since the early 1990s.
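As a toy illustration of how FACS-style coding can feed a classifier, the sketch below maps sets of detected action units to the six basic expressions. The AU combinations are commonly cited EMFACS-style prototypes, simplified here for illustration; they do not reproduce the exact rule set of [28], and a real AU detector would supply the input sets.

```python
# Commonly cited action-unit (AU) prototypes for the six "universal"
# expressions (illustrative only; real FACS/EMFACS scoring is far richer).
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},           # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},        # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},     # brow raisers + upper lid raiser + jaw drop
    "fear":      {1, 2, 4, 5, 20},  # raised/lowered brows + lid raiser + lip stretcher
    "anger":     {4, 5, 7, 23},     # brow lowerer + lid raiser/tightener + lip tightener
    "disgust":   {9, 15},           # nose wrinkler + lip corner depressor
}

def classify_expression(detected_aus):
    """Pick the prototype with the greatest overlap with the detected AUs."""
    def score(emotion):
        proto = EMOTION_PROTOTYPES[emotion]
        return len(proto & detected_aus) / len(proto)
    best = max(EMOTION_PROTOTYPES, key=score)
    return best, score(best)

# Example: AU6 + AU12 observed at the expression peak.
print(classify_expression({6, 12}))  # -> ('happiness', 1.0)
```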
Among the different approaches proposed, Chen [33,34] used a suite of static classifiers to recognize facial expressions, reporting both person-dependent and person-independent results. Cohen et al. [35] describe classification schemes for facial expression recognition in two types of settings: dynamic and static. In the static setting, the authors learned the structure of Bayesian network classifiers, using as input 12 motion units given by a face tracking system for each frame of a video. For the dynamic setting, they used a multilevel HMM classifier that exploits temporal information and makes it possible not only to classify video segments with the corresponding facial expressions, as in previous work on HMM-based classifiers, but also to automatically segment an arbitrarily long sequence into its different expression segments without resorting to heuristic segmentation methods [36]. These methods are similar in that they first extract features from the images and then feed them into a classification system whose outcome is one of the preselected emotion categories. They differ mainly in the features extracted from the video images and in the classifiers used to distinguish between the different emotions. An interesting feature of this approach is its commonality with some methods for identity verification from video streams [37]. On the other hand, a recurrent issue is the need to either implicitly or explicitly determine the changes in face appearance. Facial motion is thus a very powerful feature for discriminating facial expressions and the correlated emotional states.
Recently, Gunes et al. [39,31,40] extended the concept of emotion recognition from face movements to a more holistic approach that also includes the movements of the body. In this work the range of detected emotions is also increased, adding common emotions such as boredom and anxiety. The integration of face and body motion improved recognition performance as compared to the analysis of the face alone.
3.1 Computation of Perceived Motion
The motion of 3D objects in space induces an apparent motion on the image plane. The apparent motion can be computed either from the displacement of a few image feature points or as a dense field of displacement vectors over the entire image plane. This vector field is often called optical flow. In general, there is a difference between the displacement field of the image points and the actual projection of the 3D motion of the objects onto the image plane (the velocity field). This difference is due to the ambiguity induced by the loss of dimensionality when projecting 3D points onto the image plane. A simple example is a uniformly colored disc, parallel to the image plane, rotating around its center: as there is no apparent motion on the image plane, the optical flow is zero, but the 3D motion of the object, and consequently its projection, the velocity field, is non-zero. Nonetheless, apart from some degenerate conditions, the optical flow is generally a good approximation of the velocity field.
The optical flow can be computed by processing a set of image frames from a video. The process is based on estimating the displacement of the image points over time. In general terms, the displacements can be computed either explicitly or implicitly. In the first case, the motion of image patterns is computed by matching corresponding patches on successive image frames. In the latter case, the instantaneous velocity is determined by computing
the differential properties of the time-varying intensity patterns. This generally implies computing partial derivatives of the image brightness, or filtering the image sequence with a bank of filters properly tuned in the spatial and temporal frequencies, and enforcing some constraints on the apparent motion [41,42,43]. The combination of multiple gradient-based constraints allows the flow vector best approximating the true velocity field to be computed at each image point. Some examples of flow fields computed from different image streams and different actions are presented in Figures 1 and 2.
3.2 Performance of Expression and Emotion Recognition Algorithms
Several algorithms to detect facial expressions and infer emotions have been developed and tested. The datasets used and the reported performances vary considerably, but all share some common denominators. Firstly, no single system is capable of achieving one hundred percent correct categorization, even with a very limited number of subjects [38,39]. The datasets employed for the experiments are quite varied and include from 10 up to a maximum of 210 subjects, the Cohn-Kanade AU-Coded facial expression database [45] being the largest data collection, with 2105 videos from 100 subjects. Yet the number of captured expressions is not uniform and, more importantly, it is not fully representative of the variety of emotions which can be experienced and displayed by humans in different environments. The datasets and the corresponding experimental results are limited to six basic expressions (fear, anger, disgust, happiness, sadness, surprise). More recently, these expressions have been augmented to a more complete set that also includes uncertainty, anxiety, and boredom, the last three being most important for detecting uncomfortable states of the subject, for example while waiting in a long queue at the post office or at a ticket counter.
The second common denominator is the acquisition condition. All the datasets have been collected in indoor, controlled environments (generally research labs). As a consequence, all expressed emotions are purposively constructed, rather than captured from a real scene. The datasets therefore display a number of "expected" expressions, rather than genuine expressions caused by either external (physical) or internal (emotional) factors. In a few cases the data were acquired from spontaneous emotions elicited by external stimuli [32]. Whether there is a perceivable difference between a real and a constructed facial expression is still not fully understood.
Another similarity is the variation in classification performance across different subjects. Not everybody displays emotions in exactly the same way. Depending on cultural background, there are several differences in the visible effects of emotions. Sometimes the differences among emotions are just subtle movements of the eyes or the mouth; in other cases emotions are expressed by large, evident motions of parts of the face. As a consequence, it is rather difficult to conceive a single system able to perform equally well over a large variety of subjects, especially in real-world conditions. Remarkably, several systems have recently been proposed that improve classification performance by using multiple visual and non-visual cues [32,46,47,48,49,50].
Fig. 1. (Left) Optical flow from an image sequence of a person jumping. (Right) Optical flow from an image sequence of two persons walking.
Fig. 2. (Left) Optical flow from an image sequence of a talking face. (Right) Optical flow from an image sequence of a face changing expression from neutral to anger.
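A minimal sketch of the implicit, gradient-based computation described in Sect. 3.1 follows, in the spirit of [41,42,43]: a local least-squares combination of brightness-constancy constraints (Lucas-Kanade style). It is illustrative only and is not the method used to produce the flow fields shown in the figures; window size and conditioning threshold are placeholders.

```python
import numpy as np

def optical_flow_lk(frame1, frame2, win=7):
    """Dense gradient-based optical flow via local least squares.

    Each pixel's flow (u, v) is the vector best satisfying the brightness
    constancy constraint  Ix*u + Iy*v + It = 0  over a (win x win) window.
    """
    I1 = frame1.astype(np.float64)
    I2 = frame2.astype(np.float64)
    Iy, Ix = np.gradient(I1)   # spatial derivatives (rows, then columns)
    It = I2 - I1               # temporal derivative between the two frames
    half = win // 2
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for i in range(half, I1.shape[0] - half):
        for j in range(half, I1.shape[1] - half):
            ix = Ix[i - half:i + half + 1, j - half:j + half + 1].ravel()
            iy = Iy[i - half:i + half + 1, j - half:j + half + 1].ravel()
            it = It[i - half:i + half + 1, j - half:j + half + 1].ravel()
            A = np.stack([ix, iy], axis=1)   # one constraint per window pixel
            AtA = A.T @ A
            # Solve only where the 2x2 normal matrix is well conditioned;
            # textureless regions give no reliable constraint.
            if np.linalg.det(AtA) > 1e-2:
                u[i, j], v[i, j] = np.linalg.solve(AtA, -A.T @ it)
    return u, v
```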
The work by Sebe et al. [38] and by Gunes et al. [39,40] are examples of how speech and body motion can be exploited to improve the inference of displayed emotions.
If emotional states are detected in order to facilitate human-computer interaction, as in many research projects, the available datasets and emotion categorizations are generally sufficient to describe the range of emotional states that a computer must discover in order to start the necessary actions. In the case of non-cooperative subjects, where a surveillance camera grabs images of people moving freely in the environment, the range of possible emotions, and of the consequent motions and expressions, is considerably larger.
Fig. 3. Sample snapshots from a standard video database for facial expression analysis [44]
It is therefore necessary first to understand the target environmental conditions, and then the range of emotions which may be observed there.
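One simple way to expose the cross-subject variability discussed above is to report per-subject accuracy alongside an overall confusion matrix over the six basic expressions. The sketch below assumes labeled predictions coming from any of the cited classifiers; it is an evaluation aid, not part of any recognition system.

```python
from collections import defaultdict
import numpy as np

EXPRESSIONS = ["fear", "anger", "disgust", "happiness", "sadness", "surprise"]

def per_subject_report(records):
    """records: iterable of (subject_id, true_label, predicted_label)."""
    idx = {e: i for i, e in enumerate(EXPRESSIONS)}
    confusion = np.zeros((len(EXPRESSIONS), len(EXPRESSIONS)), dtype=int)
    hits, totals = defaultdict(int), defaultdict(int)
    for subject, true, pred in records:
        confusion[idx[true], idx[pred]] += 1
        totals[subject] += 1
        hits[subject] += int(true == pred)
    # The spread of per-subject accuracies is what reveals how unevenly a
    # single classifier performs across people, as discussed above.
    per_subject = {s: hits[s] / totals[s] for s in totals}
    return confusion, per_subject
```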
4 Emotion and Intention Recognition
There is a strong relation between emotional states and intentions. A state of anxiety can easily lead to an unpredictable action or a violent reaction. As discussed in the preceding sections, current technology can only discern among basic emotions through the categorization of facial motions. Still, tomorrow's technology may allow subtler expression changes to be discerned, capturing a wider spectrum of emotions and perhaps making it possible to detect dangerous intentions or likely future actions.
Nowadays, there is considerable concern about biometric technologies violating citizens' privacy. The public view of the widespread adoption of surveillance cameras is that the use of face identification systems may lead to a "state of control" scenario in which people are continuously monitored and assessed against their supposed intentions. Science fiction and action movies have greatly contributed to shaping the public view of intelligent surveillance, and many people may see a "preemptive police corps" in action within the next few years. Even though we cannot reasonably predict how technology will advance in the next few years, these concerns have already hindered the adoption of these technologies in many environments, thus depriving citizens of many potential service improvements and benefits. Unfortunately, today as in the past, misinformation and politically driven views impair technological advances which could be adopted for the well-being of citizens. Yet, as with any other technology, whether it is used for good or bad depends entirely on the intentions of the people behind its application.
As stated above, research on facial expression recognition has been largely led by the aim of developing more intuitive and easy-to-use human-computer interfaces. The
overall idea is to "instruct" computers to adapt the services or information they deliver according to the "mood" and appreciation level of their users. On the other hand, if security is the target application of an "emotion detection" surveillance system, at its basis lies a trade-off between security and freedom. We have already traded much of our personal freedom for air travel security: nobody can access any airport's gate area without being checked, we cannot even carry a number of items on board, and exceptions are quite limited and very hard to obtain. Still, millions of people travel by air every day, apparently without complaint.
A more realistic scenario, in which intelligent camera networks continuously check for abnormal and potentially dangerous behaviors in high-risk areas or crowded places, may on the contrary increase citizens' sense of protection and security. The same cameras might then be used in shops and lounges to determine the choices and satisfaction level of users, so as to provide better customized services. In reality, there is no need to impair personal privacy by coupling such data with the identity of the bearer; doing so is far from useful in real practice.
We need not wait for the massive introduction of intelligent, "emotionally driven" surveillance systems: real privacy threats are already a reality. The widespread use of social networks and e-services, coupled with the misuse of personal data cross-checking and fusion, is already a potential danger which may impair our privacy, unless proper regulatory action is taken to hinder the misuse (rather than the use) of personal data. A better spread of knowledge, at all levels, is the first step; this, followed by the appropriate development of pan-national rules supported by technology standards, is the only way to defeat these privacy threats.
5 Conclusion
There is always deep concern about the introduction of new technologies which may have an impact on personal privacy. For this reason, all technologies devoted either to identifying individuals or to understanding some of their personal features are regarded with suspicion by the general public. Nonetheless, there are research projects which are now seriously looking not only into the identification of individuals, but indeed into the understanding of potentially hostile intentions. An example is the FAST (Future Attribute Screening Technologies) project of the US Department of Homeland Security.1 A popular web magazine addressed this project as follows:
The Department of Homeland Security has been researching a sensor system that tries to predict "hostile thoughts" in people remotely for a while, but it's just spoken up about developments and renamed the system "Future Attribute Screening Technologies" FAST, which sounds really nonintimidating. It was called "Project Hostile Intent." But check out the technology's supposed powers for a re-think on how intimidating it sounds: it remotely checks people's pulse rate, breathing patterns, skin temperature and momentary facial expressions to see if they're up to no good.
As with any other technology, biometrics can be used for the good or ill of the citizens, but this is not a reason to ban the technology per se. Rather, it is a strong
1 A description of the project can be found under "Screening Technologies to detect intentions of humans" on the web page http://www.dhs.gov/xabout/structure/gc_1224537081868.shtm
incentive to consider more carefully the potential of these technologies to serve citizens and to improve their quality of life. Of course, the prevention of terrorist attacks is the first application which comes to mind, and probably the most heavily advertised, but it is neither the only one nor the most important. Many potential applications exist, related to both secure and non-secure areas, which are worth deploying. Probably everybody is familiar with the surveillance cameras placed in public areas, such as parks, shopping centers, squares, and metro and train stations, intended to detect potentially dangerous situations after sunset. There are millions of such cameras located in cities throughout the world. Unfortunately, it is simply impossible to make good use of the tremendous amount of data constantly delivered by these cameras: no human operator could constantly watch these videos and reliably identify dangerous situations or suspects. Still, several criminals have been convicted over time thanks to the videos recorded by these cameras. But how many lives could have been saved if the same cameras had been able to automatically detect malevolent behavior in the same environment and simply send an alarm to a human operator? The answer is obvious, and we believe the actions to be taken, at several levels, to promote the proper introduction of these technologies into everyday life are obvious as well.
Acknowledgement. Funding from the European Union COST Action 2101 "Biometrics for Identity Documents and Smart Cards" and from the Regional Research Authority is gratefully acknowledged.
References
1. Allison, T., Puce, A., McCarthy, G.: Social perception from visual cues: role of the STS region. Trends in Cognitive Sciences 4, 267–278 (2000)
2. Bruce, V., Young, A.W.: Understanding face recognition. British Journal of Psychology 77(3), 305–327 (1986)
3. Hornak, J., Rolls, E., Wade, D.: Face and voice expression identification in patients with emotional and behavioral changes following ventral frontal lobe damage. Neuropsychologia 34, 173–181 (1996)
4. Tranel, D., Damasio, A., Damasio, H.: Intact Recognition of Facial Expression, Gender, and Age in Patients with Impaired Recognition of Face Identity. Neurology 38, 690–696 (1988)
5. Calder, A., Young, A., Rowland, D., Perrett, D., Hodges, J., Etcoff, N.: Facial emotion recognition after bilateral amygdala damage: Differentially severe impairment of fear. Cognitive Neuropsychology 13, 699–745 (1996)
6. Humphreys, G., Donnelly, N., Riddoch, M.: Expression is computed separately from facial identity, and it is computed separately for moving and static faces: Neuropsychological evidence. Neuropsychologia 31, 173–181 (1993)
7. Calder, A., Young, A.: Understanding the recognition of facial identity and facial expression. Nature Reviews Neuroscience 6, 641–651 (2005)
8. Mika, S., Ratsch, G., Scholkopf, B., Smola, A., Weston, J., Muller, K.-R.: Invariant Feature Extraction and Classification in Kernel Spaces. In: Advances in Neural Information Processing Systems, vol. 12. MIT Press, Cambridge (1999)
9. Lucas, S.M.: Continuous n-tuple classifier and its application to real-time face recognition. In: IEE Proceedings - Vision, Image and Signal Processing, October 1998, vol. 145(5), p. 343 (1998)
10. Ghosh, J.: Multiclassifier Systems: Back to the Future. In: Roli, F., Kittler, J. (eds.) MCS 2002. LNCS, vol. 2364, pp. 1–15. Springer, Heidelberg (2002)
11. Haxby, J.V., Hoffman, E.A., Gobbini, M.I.: The distributed human neural system for face perception. Trends in Cognitive Sciences 4(6), 223–233 (2000)
12. Movshon, J., Newsome, W.: Neural foundations of visual motion perception. Current Directions in Psychological Science 1, 35–39 (1992)
13. Lucas, S.M., Huang, T.K.: Sequence recognition with scanning N-tuple ensembles. In: Proceedings ICPR 2004 (III), pp. 410–413 (2004)
14. Raytchev, B., Murase, H.: Unsupervised recognition of multi-view face sequences based on pairwise clustering with attraction and repulsion. Computer Vision and Image Understanding 91(1-2), 22–52 (2003)
15. Raytchev, B., Murase, H.: VQ-Faces: Unsupervised Face Recognition from Image Sequences. In: Proceedings ICIP 2002 (II), pp. 809–812 (2002)
16. Raytchev, B., Murase, H.: Unsupervised Face Recognition from Image Sequences. In: Proceedings ICIP 2001 (I), pp. 1042–1045 (2001)
17. Zhou, S.K., Krueger, V., Chellappa, R.: Probabilistic recognition of human faces from video. Computer Vision and Image Understanding 91(1-2), 214–245 (2003)
18. Zhou, S.K., Chellappa, R., Moghaddam, B.: Visual Tracking and Recognition Using Appearance-Adaptive Models in Particle Filters. IEEE Transactions on Image Processing 13(11), 1491–1506 (2004)
19. Li, Y., Gong, S., Liddell, H.: Modelling faces dynamically across views and over time. In: Proceedings IEEE International Conference on Computer Vision, Vancouver, Canada, July 2001, pp. 554–559 (2001)
20. Li, Y., Gong, S., Liddell, H.: Support Vector Regression and Classification Based Multi-view Face Detection and Recognition. In: Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition (FGR 2000), Grenoble, France, pp. 300–305 (2000)
21. Howell, A.J., Buxton, H.: Towards Unconstrained Face Recognition from Image Sequences. In: Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition (FGR 1996), Killington, VT, pp. 224–229 (1996)
22. Liu, X., Chen, T.: Video-based face recognition using adaptive hidden Markov models. In: Proceedings CVPR 2003 (I), pp. 340–345 (2003)
23. Bicego, M., Grosso, E., Tistarelli, M.: Person authentication from video of faces: A behavioral and physiological approach using pseudo hierarchical hidden Markov models. In: Zhang, D., Jain, A.K. (eds.) ICB 2005. LNCS, vol. 3832, pp. 113–120. Springer, Heidelberg (2005)
24. Tistarelli, M., Bicego, M., Grosso, E.: Dynamic face recognition: From human to machine vision. In: Tistarelli, M., Bigun, J. (eds.) Image and Vision Computing: Special Issue on Multimodal Biometrics (2008), doi:10.1016/j.imavis.2007.05.006
25. Arandjelovic, O., Cipolla, R.: Face Recognition from Face Motion Manifolds using Robust Kernel Resistor-Average Distance. In: Proceedings of FaceVideo 2004, p. 88 (2004)
26. Picard, R.W.: Affective Computing. MIT Press, Cambridge (2000)
27. Ekman, P.: Strong evidence for universals in facial expressions: a reply to Russell's mistaken critique. Psychological Bulletin 115(2), 268–287 (1994)
28. Ekman, P., Friesen, W.V.: Facial Action Coding System: Investigator's Guide. Consulting Psychologists Press, Palo Alto (1978)
29. Fasel, B., Luettin, J.: Automatic facial expression analysis: a survey. Pattern Recognition 36(1), 259–275 (2003)
30. Pantic, M., Rothkrantz, L.J.M.: Automatic analysis of facial expressions: the state of the art. IEEE Transactions on PAMI 22(12), 1424–1445 (2000)
31. Gunes, H., Piccardi, M., Pantic, M.: From the Lab to the Real World: Affect Recognition Using Multiple Cues and Modalities. In: Or, J. (ed.) Affective Computing, Focus on Emotion Expression, Synthesis and Recognition, vol. 3, pp. 185–218. I-Tech Education and Publishing, Vienna (2008)
32. Zeng, Z., Pantic, M., Roisman, G.I., Huang, T.S.: A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions. IEEE Transactions on PAMI 31(1), 39–58 (2009)
33. Chen, L.S.: Joint processing of audio-visual information for the recognition of emotional expressions in human computer interaction. PhD Thesis, University of Illinois at Urbana-Champaign (2000)
34. Chen, L.S., Huang, T.S.: Emotional Expressions in Audiovisual Human Computer Interaction. In: Proc. of ICME 2000, New York, July-August 2000, pp. 423–426 (2000)
35. Cohen, I., Sebe, N., Garg, A., Chen, L.S., Huang, T.S.: Facial expression recognition from video sequences: temporal and static modeling. Computer Vision and Image Understanding 91(1-2), 160–187 (2003)
36. Sebe, N., Cohen, I., Cozman, F.G., Gevers, T., Huang, T.S.: Learning probabilistic classifiers for human computer interaction applications. Multimedia Systems 10(6), 484–498 (2005)
37. Shan, C., Braspenning, R.: Recognizing Facial Expressions Automatically from Video. In: Nakashima, H., Augusto, J., Aghajan, H. (eds.) Handbook of Ambient Intelligence and Smart Environments. Springer, Heidelberg (2009)
38. Sebe, N., Cohen, I., Gevers, T., Huang, T.S.: Emotion Recognition Based on Joint Visual and Audio Cues. In: Proc. of the 18th International Conference on Pattern Recognition (ICPR 2006), vol. 1, pp. 1136–1139 (2006)
39. Gunes, H., Piccardi, M.: A Bimodal Face and Body Gesture Database for Automatic Analysis of Human Nonverbal Affective Behavior. In: Proc. of the 18th International Conference on Pattern Recognition (ICPR 2006), vol. 1, pp. 1148–1153 (2006)
40. Gunes, H., Piccardi, M.: Automatic Temporal Segment Detection and Affect Recognition from Face and Body Display. IEEE Transactions on Systems, Man, and Cybernetics, Part B, Special Issue on Human Computing 39(1), 64–84 (2009)
41. Tretiak, O., Pastor, L.: Velocity estimation from image sequences with second order differential operators. In: Proc. of 7th IEEE Intl. Conference on Pattern Recognition, pp. 16–19. IEEE, Los Alamitos (1984)
42. Tistarelli, M.: Multiple Constraints to Compute Optical Flow. IEEE Trans. on PAMI 18(12), 1243–1250 (1996)
43. Horn, B.K.P., Schunck, B.G.: Determining optical flow. Artificial Intelligence 17(1-3), 185–204 (1981)
44. Essa, I., Pentland, A.: Facial Expression Recognition using Visually Extracted Facial Action Parameters. In: Proc. of Int'l Workshop on Automatic Face and Gesture Recognition, Zurich, Switzerland (1995)
45. Kanade, T., Cohn, J., Tian, Y.: Comprehensive Database for Facial Expression Analysis. In: Proceedings of the 4th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2000), March 2000, pp. 46–53 (2000)
46. Busso, C., Deng, Z., Yildirim, S., Bulut, M., Lee, C.M., Kazemzadeh, A., Lee, S., Neumann, U., Narayanan, S.: Analysis of emotion recognition using facial expressions, speech and multimodal information. In: Proceedings of the 6th International Conference on Multimodal Interfaces, ICMI 2004, State College, PA, USA, October 13-15, pp. 205–211. ACM, New York (2004), http://doi.acm.org/10.1145/1027933.1027968
47. Castellano, G., Kessous, L., Caridakis, G.: Emotion recognition through multiple modalities: face, body gesture, speech. In: Peter, C., Beale, R. (eds.) Affect and Emotion in Human-Computer Interaction. LNCS, vol. 4868, pp. 92–103. Springer, Heidelberg (2008)
48. Datcu, D., Rothkrantz, L.: Automatic bi-modal emotion recognition system based on fusion of facial expressions and emotion extraction from speech. In: IEEE Face and Gesture Conference FG 2008 (September 2008) ISBN 978-1-4244-2154-1
49. Ratliff, M.S., Patterson, E.: Emotion Recognition using Facial Expressions with Active Appearance Models. In: Proc. of IASTED Human Computer Interaction 2008, pp. 92–138 (2008)
50. Paleari, M., Benmokhtar, R., Huet, B.: Evidence theory-based multimodal emotion recognition. In: Huet, B., Smeaton, A., Mayer-Patel, K., Avrithis, Y. (eds.) MMM 2009. LNCS, vol. 5371, pp. 435–446. Springer, Heidelberg (2009)
Creating Safe and Trusted Social Networks with Biometric User Authentication Ho B. Chang and Klaus G. Schroeter BioID AG Bruenigstrasse 95, 6072 Sachseln, Switzerland
[email protected] http://www.bioid.com
Abstract. Many people are concerned about engaging in social networks or allowing their children to do so, because the anonymity and perceived lack of security put them at risk of identity theft and of online contacts who misrepresent themselves, such as sexual predators and stalkers. Current password-based authentication neither adequately protects users' accounts from being compromised, nor helps people to trust that the people they contact really are who they say they are. Strong multimodal biometric authentication, which can conclusively link a real person to a digital identity, prevents unauthorized account access and ensures that only the person who created the account can use it. At the same time, it offers an extra level of trust in the identity of other members, and provides a deterrent against potentially criminal abuse of social networks. Keywords: multimodal biometrics, authentication, security, social networks.
1 The Advent of Social Networks
A social network service is a website intended to build communities based on common interests and activities. Typically, users can create profiles with selected personal information and interests, search or view parts or all of the profiles of other users, connect with old friends, and make new ones. Social networking websites such as Facebook, MySpace, LinkedIn, and Twitter have become a popular channel for communication, meeting people, and self-expression, and usage is growing rapidly. According to a June 2009 survey by research group The Conference Board, 43% of US internet users participate in social networks, up from 27% only a year earlier.1 Young people, in particular, have taken to it rapidly; a 2008 study by the Pew Internet & American Life Project found that 65% of US teens (aged 12-17) use online social networks.2 The demographics of several German social networks were analyzed by the AGOF internet facts 2009-I study.3
1 http://www.portfolio.com/news-markets/local-news/atlanta/2009/06/16/conference-board-43-of-internet-users-now-in-social-networks
2 http://www.pewinternet.org/Presentations/2009/17-Teens-and-Social-Media-An-Overview.aspx
3 http://www.agof.de/studie.583.html
Fig. 1. The global growth of social networks

Table 1. Demographic profile of German social networks, March 2009

Age of User    myspace.com    schueler.cc    stayfriends.com    studivz.net
14-19          23%            56%            4%                 17%
20-29          31%            13%            17%                56%
30-39          18%            10%            29%                11%
40-49          16%            14%            27%                7%
50-59          7%             3%             12%                6%
60+            3%             2%             7%                 1%
2 The Real Dangers of Social Networks
Anonymity and physical distance can lead to a false sense of security, so that users give out too much personal information. Even mature adults who should know better are not immune. Take the wife of the newly appointed head of the British Secret Intelligence Service: she posted personal information on her Facebook page, accessible to 200 million users, which compromised the new chief's security and could easily have put the family at risk.4
The lack of control mechanisms makes it easy for malicious users to misrepresent themselves and gain other users' confidence. For example, in 2006 a 14-year-old girl committed suicide after having sex with an older man she met online. Her family has sued MySpace for facilitating the relationship.5 The case highlights one of the most serious dangers of social networks.
4 http://www.timesonline.co.uk/tol/news/uk/article6639521.ece
5 http://www.wired.com/threatlevel/2008/02/myspace-sued-ov/
In 2009, MySpace confirmed that it had identified user profiles belonging to over 90,000 registered sex offenders6 – and that only includes those who registered under their real names. Their accounts were blocked, but nothing prevents them from creating new accounts under false names.
Social media present other dangers as well. Since there is no identity check, anyone can set up a fake profile in someone else's name and post embarrassing or slanderous content. Add to that the prevalence of spam and offensive comments: for instance, one spammer tricked MySpace users into giving away their login information and then sent over 730,000 spam messages to their friends, including links to gambling and pornography sites.7
Given this environment, is social media too dangerous? Or is there a solution that makes virtual socializing safe?
6 http://www.nytimes.com/2009/02/04/technology/internet/04myspace.html
7 http://www.ecommercetimes.com/story/63012.html
3 Social Networking Today
Most major social networking sites today operate on the honor system. While many sites' policies require members to use their real name and age and to open only one account, there is no verification. Users can easily lie about their identity, with no traceability in case of a serious complaint.
The large member base and wealth of personal information make social networks a magnet for unscrupulous people. Many users, wanting to express themselves or forgetting that strangers can also see their profiles, give out too much personal information. Add to that the tendency to use weak passwords – which can sometimes even be guessed from information in the profile – and the success of "phishing" at tricking users into giving away their login information, and you have a perfect recipe for abuse. With access to someone's account, it is possible to spam their friends, to post embarrassing or incriminating information on their profile or elsewhere, and to gather additional private information leading to identity theft or even physical harassment.
Password-based protection is simply too easy to compromise. The average user today has 25 password-protected accounts, and most people tend to use weak passwords and to reuse the same one for several accounts [1]. Besides, even the strongest password does not conclusively identify its user.
4 A Potential Solution: Biometric User Authentication
A password (something you know) cannot prove a user's identity. Only one form of identification truly can: something that cannot be guessed, lost, or stolen – something the user is, a unique feature that is part of the user. Such features are called biometric traits, and they could provide a strong foundation for social network authentication.
A biometric authentication solution for social networks would need to provide highly accurate analysis of unique biometric features to reliably determine whether the person logging in is the same one who registered. Capturing more than one trait would increase accuracy and security, provided the results are properly fused [2]. At each
login, live traits would be captured and compared with the reference data, which could be stored in the social network's database, at an independent party such as a trust center, or by the user on the local PC or on a token or smart card. Since users of such networks have greatly varying degrees of familiarity and comfort with technology, the solution should be easy to use, in a way that is natural to the user and requires no memorization. The use of built-in or common, off-the-shelf equipment would reduce costs. A traceable audit trail at every login (privacy laws permitting) would further increase security. In case of a crime, an accused person would only need to prove to law enforcement personnel, through biometric verification, whether the user data on file "belongs" to them. This model would not only increase the chances of successful prosecution, but would also provide a strong deterrent – who would commit a crime knowing that they could be identified through biometrics?
Finally, such a system could be even more effective and trusted with a reliable means of linking the user to a legal identity. Registering with a government-issued identity document, such as a biometric passport or national identity card, could provide this link.
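A minimal sketch of the model just described: live traits are matched against a reference template whose storage location (network database, trust center, or local token) is abstracted away, and an audit record is kept for every attempt. All names and the similarity interface are hypothetical illustrations, not an existing product's API.

```python
import hashlib
import time
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class AuditEntry:
    user_id: str
    timestamp: float
    accepted: bool
    template_digest: str  # records which reference template was matched against

@dataclass
class BiometricLogin:
    # Where the reference data lives (server, trust center, or a local
    # PC/token/smart card) is hidden behind a loader function.
    load_template: Callable[[str], bytes]
    # Similarity between a live sample and a stored template, in [0, 1];
    # a hypothetical interface standing in for a real biometric engine.
    match: Callable[[bytes, bytes], float]
    threshold: float = 0.8  # placeholder operating point
    audit_log: List[AuditEntry] = field(default_factory=list)

    def attempt(self, user_id: str, live_sample: bytes) -> bool:
        """One login: capture -> compare -> decide, with an audit record."""
        template = self.load_template(user_id)
        accepted = self.match(live_sample, template) >= self.threshold
        self.audit_log.append(AuditEntry(
            user_id=user_id,
            timestamp=time.time(),
            accepted=accepted,
            template_digest=hashlib.sha256(template).hexdigest()))
        return accepted
```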
5 Biometrics for Social Networks: How It Might Work
The technology for such a system already exists, with a combination of human traits well suited to a social networking application. Biometric identification based on face, voice, and iris, individually or in any combination, using a standard computer webcam and microphone, such as the biometric technology developed by BioID, has been available for some time [3].
5.1 Basic User Experience
Registration. To register for the first time, users can verify themselves against the biometric data (face and/or iris) stored on their biometric passport or national identity card [4] using a USB e-passport reader. The user simply looks at the screen while a webcam captures their face and/or iris in a video stream. Each trait is preprocessed separately, and unique features are extracted and compared to the biometric data on the passport or identity card. A user registered in this way would be regarded as a "trusted" user.
Users who do not have a biometric passport or national identity card, or who cannot access the data on their passport, can be cross-verified by a "trusted" user. The new user can then enroll their face/iris biometric data using a webcam. The process is the same as described above, except that the unique biometric features are used to create a reference file for later verification. To make it even more secure, the user can be prompted to say a phrase, such as their name, while looking at the screen. While the webcam captures the face and/or iris, a microphone captures a brief audio stream with the user's voice, which is added to the biometric reference file.
In any case, the biometric reference data can be stored on the social network server, at a trusted third party, or locally on a computer or token for later use. Only digitally signed snapshots of face/iris/voice, in a standard, non-proprietary format, are maintained.
A digital signature includes a time stamp and provides a means of verifying whether a snapshot has been tampered with. To enhance the user experience, the biometric data can be updated as often as needed to adapt to the user's usage behavior and environment, as well as to aging.
Verification. The same simple capture process is repeated every time the user logs in. Biometric algorithms compare the new data with the reference data stored on the biometric passport or identity card, on a token, on the local computer, or on a server, and determine whether the newly captured data matches the reference data. In this way, every session is protected by a reliable login, ensuring that only the owner of the account is able to log in to the network and increasing trust in user identity.
In practice, user acceptance of this type of biometric login process is very high. Users have nothing to memorize – no complicated passwords to be forgotten. They do not have to touch a sensor or hold still; they simply perform one of the most natural of human actions: looking the camera in the "eye" and optionally saying their name. Most people who have used this type of solution in other applications welcome its simplicity; the login process is completely natural and takes virtually no thought. It also has a certain "cool" appeal that especially younger people, the majority of social network users today, may appreciate.
5.2 User Experience versus Security
System settings represent a trade-off between high security (low false acceptance) and greater convenience (low false rejection). The key is to select a decision method best suited to the particular application. In some cases it may be necessary to use a stricter method [5], with which more login attempts may be needed, while in other cases a looser method might be more fitting, if the security concern is less critical. For instance, acceptance might require that only two of the three traits reach a certain threshold score, or all three might have to meet a minimum. Alternatively, the individual scores could be mathematically combined and compared against a single fused threshold.
In a social networking application, while it is vital to keep users enthusiastic and their experience positive, security, as mentioned earlier, is becoming a critical issue which urgently requires action. In the extreme case, a user who had repeated problems logging in would probably become frustrated with the service. Particularly for the most popular social networks, even a relatively low rate of false rejection could mean a large number of complaints to customer service. Likewise, a minor security breach could trigger negative publicity for the whole community. Considering all of this, the decision method for a social network security solution should lean towards user-friendliness while maintaining a strict configuration. Fortunately, when combined and configured properly, multiple biometric traits yield superior overall performance compared to a single trait, enabling suitable results even for very large social networks.
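The two decision methods sketched above might be realized as follows; the scores, weights, and thresholds are illustrative placeholders that would have to be tuned for the deployment at hand.

```python
def accept_k_of_n(scores, thresholds, k=2):
    """Stricter rule: at least k individual traits must clear their threshold."""
    passed = sum(s >= t for s, t in zip(scores, thresholds))
    return passed >= k

def accept_fused(scores, weights, fused_threshold):
    """Looser rule: one weighted combination compared to a single threshold."""
    fused = sum(w * s for w, s in zip(scores, weights)) / sum(weights)
    return fused >= fused_threshold

# Face, voice, and iris match scores from one login attempt (illustrative):
scores = [0.91, 0.62, 0.88]
print(accept_k_of_n(scores, thresholds=[0.8, 0.8, 0.8], k=2))              # True (2 of 3 pass)
print(accept_fused(scores, weights=[1.0, 0.5, 1.5], fused_threshold=0.8))  # True (fused ~0.85)
```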
5.3 Continuous Session Protection
In addition to protecting login to the social network, the system can use face recognition to continuously check for the presence of the active user's face in front of the screen throughout the entire social networking session. If the user steps away for a while, the system can automatically log out, or inform any corresponding parties that the user is currently away, preventing unauthorized access to the user's account by other people with access to the same computer. In this way, the entire duration of the session is protected. Upon return, the user is either automatically recognized or prompted to log in again, and the system automatically informs any corresponding parties that the user is back.
Such a process would offer multiple layers of protection for social networks. First of all, because new video and voice data are obtained at every login and throughout the session and are compared with the original registration data, only the registered user is able to log in to a social network account or to take any actions during an active session. In this way, the dangers of unauthorized account access or impersonation described above can be avoided. It makes a truly "user-managed" social networking experience possible, because the user is the only one who can access the account, and so has full control over it. It also greatly increases trust within the community; because the entire session can be trusted, other parties communicating through the service can be confident that they are communicating with the actual "trusted" user.
5.4 Duplicate Account Detection
Biometric authentication would also make it possible to detect whether the same person is trying to open multiple accounts under different names. During registration, the new reference file can be compared with those of all existing accounts (a minimal sketch of this check is given at the end of this section). If a match is found, the user may be trying to open a duplicate account, and registration can be denied or the account linked internally to the other accounts. The same method can be used to prevent users whose accounts were closed for policy violations from opening a new account under another name.
5.5 Traceability and Deterrence
Because the user's biometric data is kept on file and compared against at every login, all account activity can be traced to the identity established at registration. If privacy laws and storage capacity allow, additional data from subsequent logins could also be stored, providing further accountability.
Perhaps the greatest benefit of biometric authentication for social networks would be its potential deterrent power. For a normal user, it assures that access to their social network account is securely protected. The fact that users must provide biometric information such as their face and voice in order to register for a social network will make anyone think twice before harassing someone or committing a crime. Furthermore, a link between their online identity and their legal identity should make them think three times. Such a solution thus has the potential to significantly reduce the incidence of the abuse described above, and when abuse does occur, it can make it easier to find out who is responsible.
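The 1:N comparison behind the duplicate account detection of Sect. 5.4 might look like the following sketch, reusing the same hypothetical match() similarity function as in the earlier example.

```python
def find_duplicates(new_template, existing_accounts, match, threshold=0.9):
    """1:N comparison of a new enrollment against every stored template.

    existing_accounts: dict mapping account_id -> stored reference template.
    Returns the account ids whose templates match the new one: candidates
    for a duplicate account, or for a previously banned user re-registering.
    The threshold is a placeholder to be set on real match-score data.
    """
    return [account_id
            for account_id, template in existing_accounts.items()
            if match(new_template, template) >= threshold]
```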
6 Conclusion: A Safe and Trusted Social Network Community
Privacy and data protection are, then, possible for social networking. The technology described here could have a considerable effect on user satisfaction and the reputation
of social networks. Stories such as those cited above have made many people hesitant about social media, or at least about their choice of networks. A network that implements a strong identification infrastructure like the one described here would signal that the security of its members is a top priority; as a show of good faith, this should also help to protect against lawsuits when problems do occur. Furthermore, it should strengthen the trust of its members and potential members, increasing membership and overall activity: the factors most necessary for the success of a social network community. In this way, more people can safely reap the many benefits of social networking.
References
1. Florencio, D., Herley, C.: A Large-Scale Study of Web Password Habits. In: Sixteenth International World Wide Web Conference (WWW 2007), pp. 657–665. IW3C2, Banff (2007)
2. Daugman, J.: The Computer Laboratory. Cambridge University, http://www.cl.cam.ac.uk/~jgd1000/combine/combine.html
3. Frischholz, R., Dieckmann, U.: BioID: A Multimodal Biometric Identification System. IEEE Computer 33(2) (2000)
4. Lion, R.: E-Credentials and Identity Management. In: Keesing Journal of Documents & Identity, Annual Report E-Passports 2008-2009, pp. 17–20. Keesing Reference Systems (2009)
5. Frischholz, R., Werner, A.: Avoiding Replay-Attacks in a Face Recognition System Using Head-Pose Estimation. In: AMFG 2003: IEEE International Workshop on Analysis and Modeling of Faces and Gestures (2003)
Ethical Values for E-Society: Information, Security and Privacy Stephen Mak Deputy Government Chief Information Officer, Hong Kong SAR
Abstract. Forensic science is gaining importance in criminal justice, thanks to the development of new sensitive and accurate scientific tools and the construction of large computerized databases. Although very effective, these databases raise many legislative and ethical concerns. In the case of DNA, the concerns run deeper, since a DNA sample contains sensitive genetic information about its owner that is not necessarily needed for identification. This paper focuses on issues related to the collection, utilization, and retention of DNA samples and profiles of legally innocent populations, illustrated by a case study in which a DNA sample given voluntarily for a murder investigation successfully solved three unrelated rape cases. Keywords: security, privacy, information infrastructure and architecture, cloud computing, ethical values, e-society, legal framework, Electronic Transactions Ordinance, Personal Data (Privacy) Ordinance, eHealth, Multi-application Smart ID Card, public key infrastructure (PKI), two-factor authentication, security risk assessments, privacy impact assessments.
1 Information Explosion and Its Ubiquity
In the digital world, information is being generated and collected at phenomenal rates in all walks of society. It literally transcends our daily lives and the operations of businesses, whether conducted locally or globally. With the advent of embedded computers/chips in all sorts of devices and in the human body, the notion of collection, use, distribution, and inference takes on many new forms. By the same token, the inter-related issues of information (explosion), security, privacy, and ethics cannot be considered as individual aspects or as mere necessary steps in the inception, development, and adoption stages of technology.
2 Biometrics as One Form of Information, Whether Direct or Derived

Biometrics has the characteristic of being directly linked to in-born personal attributes. On the one hand, this characteristic facilitates strong identification and authentication of persons as compared with other identification and authentication means, and
in turn enhances information security. On the other hand, any misuse or abuse of the technology can cause serious problems for personal data privacy. Traditionally, the collection of biometric data has been done in more explicit ways, mostly in a one-way fashion, although this has often become a matter of controversy in the contexts of security and privacy. With the increasing connectivity of electronic devices to the Internet, coupled with the embedded capability of the devices themselves to perform biometric-like functions, the notion of “derived” biometrics takes on additional meanings.
3 The Ethics of Biometrics – Normative or Practical? Yours or Mine?

In Hong Kong, the Government encourages innovation and the prudent use of technology, including information and communications technology (ICT). This is in line with the 2008 Digital 21 Strategy. Biometric products and solutions, either as standalone offerings or embodied in all sorts of equipment, appliances and documents, are under continuous development in terms of functionality, price-performance, security and privacy levels. The global market is estimated to be worth multiple billions of dollars. Biometrics is also increasingly taking on expanded meanings and contexts, as user behaviours and information exchange methods evolve with the explosive use of the Internet, social networking/engineering, remote sensing, wearable computers, etc., all of which ride on the basic characteristics of biometrics.

Two considerations stand out in the adoption and governance of the use of biometric data: security and privacy. The balance between the two is a fine one, hence the debatable or contentious issues that arise, depending on which angle prevails in a given situation. When considering the use of biometric data and associated technologies, different angles (e.g., industry facilitation, policing, regulation and consumer applications) will lead to different emphases on the issues. The ethical collection and use of biometrics has been a much-debated issue in public policies and strategies, including those for supporting technological innovation. When considering biometrics and the associated issues, a holistic approach is thus called for.
4 The Scope for Ethics, Policies and Regulations Is Rapidly Changing – The Evolving Nature of the Global Information Infrastructure and Architecture

In the next few years, the global ICT infrastructure will go through some important changes in its architecture, interoperability, standards, products and service offerings, which in turn will have substantial implications for user behaviours. The notion of biometrics “belonging” to individually identified persons is changing, as the ICT industry promises the next big thing in “cloud computing”, “anything-as-a-service” and similar attractions to both individual and corporate users. This, coupled with rapid advances in embedded software and systems in devices that behave as if
they were biometrics themselves, means that the scope for policies, regulations and ethics is changing fast. By way of example, a modern, fully configured digital camera for the enthusiast or professional photographer would typically contain a WiFi Internet connection, a GPS positioning device, a palm/fingerprint scanning device and an iris-scanning capability, all within the confines of the camera's battery grip. The implication is that the user of the camera is knowingly making his biometric data available in multiple ways to serve one or more personal purposes, whether it be sending the pictures quickly home or to the office, placing himself at a specific, traceable location with the GPS data, confirming with his palm/fingerprint that the digital pictures contain intellectual property created by him, or using his iris scan template to digitally sign or encrypt the file containing the picture. What is interesting in this example is that all the digital data mentioned would also be recorded in the picture files and transmitted through the open network, with all the intermediaries involved. The example also suggests that biometrics, in both its direct and derived forms, may increasingly be used as a process control and automation tool by persons or organisations to collect, store and distribute information in the increasingly connected digital world. A similar example could be constructed for even more advanced applications such as medical consultation, diagnosis, monitoring and treatment. The scope for ethical and policy considerations therefore becomes even more multifarious and complex.
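As a rough illustration of how much identifying data can travel with a single picture file, the sketch below bundles a photograph with the kinds of location and biometric-derived fields described above. It is a minimal sketch in Python; the field names and payload format are invented for illustration, whereas real cameras embed comparable data in EXIF metadata tags.

```python
import hashlib
import json

def build_photo_payload(image_bytes: bytes, gps_lat_lon: tuple,
                        fingerprint_template: bytes) -> dict:
    """Bundle a picture with location and identity data, as the connected
    camera described above might do. Everything here travels with the file."""
    return {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "gps_lat_lon": gps_lat_lon,  # a specific, traceable location
        # A digest derived from the owner's biometric template: "derived"
        # biometric data riding along with the picture.
        "owner_template_digest": hashlib.sha256(fingerprint_template).hexdigest(),
    }

payload = build_photo_payload(b"...jpeg data...", (22.3027, 114.1772),
                              b"...fingerprint template bytes...")
print(json.dumps(payload, indent=2))
# Unless it is encrypted end to end, every intermediary on the open
# network can read this bundle.
```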
5 Alternating between Personal Computing and “Cloud Computing” – The Roles of Innovators, Entrepreneurs, Vendors, Regulators and Governments Need to Be Redefined

Technology innovations have often brought about major changes in mindset and actual practice when considering policies, responsibilities and, in this context, ethical issues. In the case of ICT innovations, the past 30 years have seen major, alternating changes in responsibilities for security, privacy and ethics brought about by the personal computer and now by technologies like cloud computing, from everything being “personalized” to everything being “available on demand”. It is therefore important that at every step of the cycle of technology innovation, from inception, R&D, productisation, certification and testing to commercialization and deployment on a massive scale, the parties involved all know that they have a role to play. This is true of innovators, entrepreneurs, solution/platform providers, regulators and, of course, governments.
6 The Foundations for Enhancing Ethical Values for an E-Society in the HKSAR

6.1 Legal Framework to Support Secure Electronic Transactions in the HKSAR

In Hong Kong, the Personal Data (Privacy) Ordinance is in place to protect the personal data privacy of citizens. It protects the privacy interests of living individuals in
relation to personal data. A review of the Ordinance was recently conducted, and the public consultation ended in November 2009. This should ensure adequate protection of personal data in view of technological advancement and our society's growing concern about privacy issues. In addition, the Electronic Transactions Ordinance gives documents and signatures in digital form legal status equivalent to their paper counterparts; I will say more about this later. Within the HKSAR Government, we have also formulated a comprehensive suite of Security Regulations and IT security policies and guidelines for government departments to follow in the area of information security. From time to time we review this set of regulations, policies and guidelines to keep them in step with international developments, industry best practices and emerging threats to information security and personal privacy. Relevant parts of these policies and guidelines are shared with the general public through the government website.

6.2 The eHealth Initiative

Hong Kong has started the eHealth initiative in the context of the wider healthcare reform. It includes the development of a territory-wide electronic health record (eHR) sharing infrastructure. The supporting system would enable both public and private healthcare providers to access and update relevant patient records. The system would include mechanisms for obtaining patients' consent and for authenticating and controlling data access, including possible use of the Smart ID Card. The data privacy and information security of the eHR sharing system are accorded paramount importance and need to be given legal protection. Participation in eHR sharing would be compelling but not compulsory for both patients and healthcare providers. The eHR sharing infrastructure and related systems would be based on open, pre-defined and common technical standards and operational protocols.

6.3 Biometric Applications and Technologies

The use of biometrics in identity documents such as ID cards and passports is a major biometric application around the world. Other identity-document-related applications, such as border-crossing applications, will continue to evolve in the near future. To take a local example, from 10 December 2009 Hong Kong citizens crossing the border between Hong Kong and Macao can use Macao's e-channels and, likewise, Macao citizens can enjoy automated immigration clearance through the e-channels on the Hong Kong side. Cross-border biometric applications involve data exchange between jurisdictions, and the differences in legislation in respect of biometric data need to be addressed in their development and implementation.

Beyond identity documents, it is anticipated that biometrics will have wider use in the community. We can use biometrics to strengthen the security of e-banking, e-payments and mobile phone applications, and in combating certain social problems such as drug abuse. The Government facilitates the development of electronic commerce in accordance with the provisions of the Electronic Transactions Ordinance. The ETO provides a
clear legal framework that gives electronic records and electronic/digital signatures the same legal recognition as records and signatures on paper. This enables businesses and the Government to conduct transactions by electronic means.

Public key infrastructure (PKI) is unique among contemporary technologies in that it gives strong support to the four key attributes of information security, namely confidentiality, integrity, authentication and non-repudiation. Biometrics and PKI are not necessarily competing technologies; they can be positioned as complementary in meeting the specific business and security requirements of different applications. The Hong Kong Multi-Application Smart ID Card is a good example, as it allows both biometrics (i.e., fingerprint) and PKI, delivered in the form of a digital certificate recognized under the law, to coexist and serve different applications. The two technologies may even work together to achieve strong two-factor authentication, as the sketch at the end of this section illustrates.

6.4 Can the Government Develop More and Clearer Guidelines to Encourage Wider Use/Deployment of Biometrics?

In Hong Kong, the Government and the Privacy Commissioner for Personal Data have each made available guidelines on these topics for public reference. Further and more explicit guidelines may need to be considered in the context of industry, users and organizations for more targeted focus and effectiveness. We encourage community and industry leaders, researchers and experts, regulatory bodies and the Government to collaborate to this end. Where better results are expected, consideration may be given to developing guidelines for specific uses.
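A minimal sketch of that two-factor pattern follows, assuming the Python cryptography package and a deliberately simplified matcher; on a real smart card both the fingerprint comparison and the signing key live inside tamper-resistant hardware, and matching is tolerance-based rather than the byte-equality stand-in used here.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

def fingerprint_matches(live_scan: bytes, enrolled_template: bytes) -> bool:
    # Placeholder: real matchers compare minutiae within a tolerance,
    # never byte for byte.
    return live_scan == enrolled_template

def sign_if_cardholder(document: bytes, live_scan: bytes,
                       enrolled_template: bytes, private_key) -> bytes:
    """Factor one (biometric match) gates factor two (use of the PKI key)."""
    if not fingerprint_matches(live_scan, enrolled_template):
        raise PermissionError("biometric verification failed")
    # The resulting signature supports non-repudiation: only the verified
    # cardholder's private key could have produced it.
    return private_key.sign(
        document,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
signature = sign_if_cardholder(b"transaction record", b"scan", b"scan", key)
```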
7 The Importance of Security Risk Assessments and Privacy Impact Assessments

7.1 How to More Effectively Use/Deploy Biometrics?

The question is often asked: given the concerns over security, privacy and ethical issues, how can biometrics be used or deployed more effectively? This calls for a holistic approach. First and foremost, we must not treat the balance among security, privacy and ethics as a zero-sum game, i.e., one achieved at the expense of another. On the contrary, effective use or deployment of biometrics may be achieved by the following (a sketch illustrating the first point follows the list):
• Fundamental respect for privacy designed into products, systems and solutions
• Better informed customers
• Better industry “norms”
• Better integration among systems, toolkits, processes and management practices
• More explicit but “business-friendly” guidelines and regulatory regimes
• Explicit and early conduct of Privacy Impact Assessments and Security Risk Assessments where these are applicable
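As one concrete, hedged reading of “privacy designed into products”, the sketch below keeps enrolled biometric templates encrypted at rest and decrypts them only inside the matching component, so a leaked database file by itself reveals nothing. It assumes the Python cryptography package and is illustrative only; a production design would add hardware key management and proper template protection schemes.

```python
from cryptography.fernet import Fernet

class TemplateStore:
    """Templates are never stored in the clear; only the matcher holds the key."""
    def __init__(self):
        self._fernet = Fernet(Fernet.generate_key())  # in practice: key in an HSM
        self._records = {}                            # user id -> encrypted template

    def enroll(self, user_id: str, template: bytes) -> None:
        self._records[user_id] = self._fernet.encrypt(template)

    def match(self, user_id: str, live_template: bytes) -> bool:
        # Decryption happens only here, inside the matching component.
        stored = self._fernet.decrypt(self._records[user_id])
        return stored == live_template  # placeholder for tolerance-based matching

store = TemplateStore()
store.enroll("user-1", b"template-bytes")
print(store.match("user-1", b"template-bytes"))  # True
```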
8 Concluding Remarks

When assessing the ethical values for an e-society, it is important to recognize the fast-developing nature of the underlying technologies, which in turn affect the very definition of biometrics (including whether they are in-born, artificial or implanted, for medical or other reasons), the resulting user behaviours, business models, the impact on information security and privacy, and the roles that innovators, entrepreneurs, vendors, regulators and governments may play. If these are properly addressed in a timely manner, the real and perceived concerns over ethical issues will take on new or different dimensions. This will also affect considerations on the governance of data sharing, whether in a local or an international context.
Ethical Issues in Governing Biometric Technologies

Margit Sutrop

University of Tartu, Estonia
[email protected]
Abstract. This article deals with the ethical considerations raised by the collection, storage and use of biometric data. Comparisons are drawn with human genetic databases. It will be shown that besides the ethical assessment of new technologies, it is also interesting to see how the advancement of these technologies has posed challenges to existing ethical frameworks, putting collective values and public interests above individual rights. The comparison of biometric databases with medical-research-oriented genetic databases shows that the public interest may be constructed differently: while genetic database projects create a discourse of hope, biometric databases are surrounded by a discourse of threat. What both have in common is an imbalanced discussion of benefits and risks. The lack of knowledge and understanding of the possible benefits and drawbacks of new technologies makes it difficult to build and maintain authentic public trust, which is of crucial importance for good ethical governance of databases.

Keywords: biometrics, ethics, genetic database, biobank, informed consent, security, privacy, governance, trust, discourse of hope, discourse of threat.
1 Introduction

Biometrics is a tool used to identify and reliably confirm an individual’s identity on the basis of physiological or behavioural characteristics. The widespread use of biometric technologies has been made possible by “the explosive advances in computing power and by the near universal connectedness of computers around the world” [1]. In practical systems, several biometric characteristics are used, such as fingerprint, retinal and iris scan, signature, hand geometry, facial feature recognition, ear shape, body odour, skin patterns, foot dynamics, and voice verification. Biometric recognition offers many advantages over traditional personal identification number or password and token-based (e.g., ID cards) approaches [2]. Because a biometric is tightly linked to an individual, it can prevent identity theft; it avoids the use of several identities by a single individual; and a biometric trait is difficult to duplicate. It is also more convenient because, unlike passwords or PIN codes, the characteristics cannot be forgotten and are always at hand. For these reasons biometric systems are increasingly used both by governments and private companies in very diverse settings: social security entitlement, border control, health systems, banking, election management, insurance, commerce, and surveillance. Although biometrics can be used for many purposes, today’s interest in this technology is clearly augmented by society’s increased surveillance needs.
In general, biometrics can be used in two ways: either for the purpose of user identification (“who is the person?”) or for authentication (“is this the person who he claims to be?”) in various information systems [3]; a schematic sketch of these two modes is given at the end of this introduction. Before one can use a biometric system for these purposes, biometric reference information must be collected with a sensing device and stored, either in a centralised database or on a chip card which the user has to present to prove that he is the person he claims to be.

Because biometric technologies are based on measurements of physiological or behavioural characteristics of the human body and on the collection and storage of personal data, they raise a host of ethical concerns. Most resemble general questions raised by the ethics of technology, and are related to the protection of individual values such as privacy, autonomy, bodily integrity, dignity, equity, and personal liberty. There is also an issue of data protection, especially when biometric information is stored in centralised databases. In addition, biometrics is considered to be one of the most significant examples of how complex it is to match individual and collective needs.

In this paper, I will try to find out whether there is anything special about the ethics of biometrics in comparison with the ethics of technology in general, and with the new genetics in particular. As the collection, storage and use of biometric data raise concerns similar to those relating to biobanking, it is interesting and useful to compare the ethical discussions surrounding biometric databases with those surrounding biomedical human genetic databases. It is noteworthy that while the ethical, legal and social issues of genetic databases have been debated all over the world, there has been surprisingly little discussion of the ethical and social costs of the widespread use of biometrics and the creation of huge biometric databases.

In the first part of my paper I will sketch the main lines of argument used in the ethical discussion of biometrics. I will show that, like other technologies, biometrics brings out several value conflicts. But this should not be seen as an unsolvable problem. The solution of a value conflict lies in giving preference to one value in a single specific situation. Thus we should always look at the specific context and situation in which the ranking of values takes place. In the second part of my paper I will compare the discursive strategies of biometric and genetic databases. I will call the discourse surrounding human genetic databases the “discourse of hope”, and the one surrounding biometric databases the “discourse of threat”. Comparing the two sorts of ethical discussion will make it possible to see how the different contexts and motivations behind the applications of new technologies lead to different rankings of values as well as to different ethical frameworks.
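The following sketch makes the identification/authentication distinction concrete, reducing stored references to toy feature vectors and matching to a simple distance threshold; real systems use far richer representations and matchers, so this is illustrative only.

```python
import math

def distance(a, b):
    return math.dist(a, b)  # Euclidean distance, Python 3.8+

def verify(claimed_id, live_sample, templates, threshold=0.5):
    """Authentication: "is this the person who he claims to be?" (one-to-one)."""
    return distance(live_sample, templates[claimed_id]) <= threshold

def identify(live_sample, templates, threshold=0.5):
    """Identification: "who is the person?" (one-to-many search)."""
    best = min(templates, key=lambda uid: distance(live_sample, templates[uid]))
    return best if distance(live_sample, templates[best]) <= threshold else None

templates = {"alice": (0.1, 0.9), "bob": (0.8, 0.2)}  # invented enrolment data
print(verify("alice", (0.12, 0.88), templates))  # True: the claim is confirmed
print(identify((0.79, 0.22), templates))         # 'bob': closest enrolled identity
```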
2 Ethical Assessment of the Biometric Technology

As with any other technology, we can ask two questions about biometrics: first, what are its ethical implications? Secondly, what kind of challenges does this technology present to ethics? In the Gifford Lectures published as Ethics in an Age of Technology (1993), Ian Barbour has shown that appraisals of modern technology diverge widely: “Some see it as the beneficent source of higher life standards, improved health, and better communication. They claim that any problems created by technology are themselves
amenable to technological solutions. Others are critical, holding that it leads to alienation from nature, environmental destruction, the mechanisation of human life, and the loss of human freedom. A third group asserts that technology is ambiguous, its impacts varying according to the social context in which it is designed and used, because it is both a product and a source of economic and political power” [4]. In his book, Barbour groups views of technology under three headings: Technology as Liberator, Technology as Threat, and Technology as Instrument of Power.

When we look at the ethical discussion surrounding the application of biometric technology, the situation is similar. One group of people conceptualises biometrics as Liberator, believing in the power of technology to bring convenience (avoiding queues, faster answers, immediate access to information), efficiency (cutting costs, producing efficiency gains for administration) and mobility in space and finances (enhancing convenient access to government services for citizens, e.g., voting anywhere, and services and movement of capital across borders via e-services1). This group may also welcome more powerful surveillance (to monitor migration and combat identity theft and fraud). The other group sees biometrics as a Threat: they believe that surveillance technology is inhumane, untrustworthy, and destructive of liberty. A third group sees biometrics as “a sword with multiple edges”, whose use depends on who carries it and to what ends.

Clearly, views of technology vary according to the differing needs of people and institutions. However, different views of technology also reflect our different value judgements. Values are rooted in hierarchies of beliefs, cultures and forms of life. Even if we share primary values related to our basic physiological and psychological needs, our secondary values may still differ, depending on our historical, religious and cultural experience, and on our understanding of the good life and ways of living. And even if we all agree that security, privacy, autonomy, and liberty are important, we rank these values differently. The value conflicts (e.g., privacy versus security, autonomy versus solidarity) make it difficult to take a broad and consistent position in favour of, or against, expanding or restricting biometric technologies.

I tend to agree with Gary T. Marx that there is a continuum from genuine to mandatory voluntarism and from open to secret data collection. This is how he puts it: “Diverse settings – national security, domestic law enforcement, maintenance of public order, health and welfare, commerce, banking, insurance, public and private spaces and roles – do not allow for the rigid application of the same policies. The different roles of employer-employee, merchant-consumer, landlord-tenant, police-suspect, and health provider-patient involve legitimate conflicts of interests. Any social practice is likely to involve conflict of values.” [5]

The solution of a value conflict lies in giving preference to one value in a single specific situation, but it does not entail final adherence to an absolute hierarchy of values or to one ethical framework. Pluralists such as John Kekes [6] have suggested that rankings of values are reasonable only in particular situations because they depend on traditions and individual conceptions of the good life. Often there is incommensurability of values at the general level, but this does not necessarily mean that values
1 For example, today the citizens of Portugal, Belgium, Finland and Lithuania can establish an enterprise in Estonia through the Company Registration Portal (https://ettevotjaportaal.rik.ee/index.py?chlang=eng) by using a national ID card or Mobile-ID.
cannot be compared in specific socio-political contexts. “Strong incommensurability”, in which value comparisons at the general level cannot be exercised, does not rule out relating conflicting values to further values (moral or non-moral), resulting in value rankings (“weak incommensurability”). Gary T. Marx argues along similar lines, claiming that “we need a situational or contextual perspective that acknowledges the richness of different contexts, as well as the multiplicity of conflicting values within and across them. /.../ At best we can hope to find a compass rather than a map and a moving equilibrium rather than a fixed point for decision making.” [7]
3 Ethical Issues of Biometrics

On the one hand, the use of biometric technology is expected to help respond to increasing security fears. On the other, it is feared that without the necessary regulations there is a risk of abusing fundamental rights such as privacy and dignity [8]. The application of biometrics also raises concerns in relation to the integrity of the human body. Irma van der Ploeg has pointed out that the way biometrics uses the body for identification purposes calls for a reconceptualization of what might constitute bodily integrity. For example, the act of taking a fingerprint is not considered a violation of bodily integrity in terms of individual rights-based medical ethics. However, one should see this act in a different moral and political context and think of the specific use that is made of bodies. What takes place here is the “inscription of the individual’s body with identity/-fiers that is achieved by the combination of fingerprint-taking, storage in a central database, and the coupling with biometric sensing equipment and automated searches.” [9]

The ethical debate around biometrics has mostly centred on the concept of privacy. Privacy has been construed as a need, a right, a condition or an aspect of respect for human dignity, either as something intrinsically valuable or as something that derives its value from other sources (the concepts of autonomy and freedom). As outlined by Alan Furman Westin, privacy is most often seen in terms of control: “Privacy is the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others” [10].

In the ethical discussion of privacy, two broadly opposed views can again be identified. Whereas most ethicists see biometrics as a threat to privacy, some argue that biometrics is a “friend of privacy”, that it serves to enhance privacy by providing stronger identification than the password and thereby prevents misuse. Jeroen van den Hoven has emphasised the following positions in public debates about privacy. The first is the view that we should stop worrying about privacy because so much personal information is available that anyone who bothers to make the effort can know almost everything about everyone. Secondly, there is Amitai Etzioni’s view that Western democracies cannot afford the high levels of individual privacy that they now attempt to provide [11]. Even if it were technically feasible, high levels of privacy are undesirable. Proponents of this view often also argue along utilitarian lines that modern societies exhibit high degrees of mobility, social and institutional complexity, and anonymity, which facilitate free-riding in the form of criminal behaviour, fraud and tax evasion. In order to mitigate the adverse effects of anonymity and mobility, information about individuals should be made available to governments [12].
Privacy advocates argue that such benefits are not worth the risk. Biometric identification devices may compromise privacy in a deep and thorough fashion, as they can reveal more about a person than just his identity, including medical history (e.g., retinal data may divulge information about diabetes or hypertension). Anton Alterman argues that the general right to privacy includes the right to control the creation and use of biometric images of ourselves. He is one of the few critics to raise the issue of informed consent for the management of biometric technology. Fully acknowledging that his policy recommendations run counter to current practices, he suggests that everyone who is asked to voluntarily submit biometric identifiers should be “1) fully informed of the potential risks; 2) competent to understand the impact of their actions; and 3) under no threat of harm to agree to such an action.” [13]

Such attitudes towards the concept of informed consent show how the ethical discussion of biometrics differs from traditional medical ethics, where the concept of autonomy and the inviolability of informed consent have been codified in various international declarations (e.g., the World Medical Association’s Helsinki Declaration 2002, article 22) after WW II with the intent of denouncing the practices of Nazi medicine. The principle of individual informed consent is meant to protect individual autonomy and self-determination and to guard against the potential excesses of medical research and practice, preventing social interests or the paternalistic impulses of the medical community from overriding the individual.

However, developments in genetics and related disciplines during the last decades of the 20th century have increasingly called into question the applicability, or rather the sufficiency, of the individual-centred value discourse as the sole ethical framework of guidance in these areas. The new genetics has demonstrated our relatedness across families, communities and populations. Because genetic information is shared, the consequences of its use are often not individual but familial or even communal. Genetic databases, especially the large population-based ones (DeCode in Iceland, the Estonian Genome Project Foundation, the UK Biobank, Uman Genomics in Sweden, the Singapore Tissue Network, the Public Population Project in Genomics in Canada), also pose new challenges to the traditional medical ethics framework centred on the individual and his/her informed consent. Participation in such research databases often means that the traditional individual informed consent process might be burdensome or even impossible to follow within each separate research project, as the very rationale of the biobank is to provide means for multiple distinct projects (Árnason [14]; O’Neill [15, 16]; Nõmper [17]). Currently there is growing support for broad consent (also described as open, generic, or blanket consent) to be used for population genetic databases. Two main arguments have been used to justify the use of broad consent rather than informed consent in population genetic databases. The first is that it is impossible to obtain informed consent because, at the time of collection, one does not know what kind of research will be done on the samples. The second is that there is a public interest in keeping the samples for an unlimited time and using them for different research projects.
It would be interesting to compare the consent policies of biomedical human genetic databases with those of forensic databases. Mairi Levitt [18] has compared the largest and most inclusive forensic database in the world, the UK National DNA Database (NDNAD), which applies the knowledge and techniques of genetics to policing and the justice system, with UK Biobank, which collects DNA samples linked to medical
records. One of the lamentable results of her study is that while UK Biobank obtains informed consent from all participants, guaranteeing them the right to withdraw at any time, the forensic database NDNAD collects samples without any consent and without a right of withdrawal. Additionally, Levitt argued that, in contrast to the samples stored in UK Biobank, the blood samples stored in the NDNAD are being used for research without consent and without any approval by an ethics committee.

Garrath Williams and Doris Schroeder have argued that while in the case of forensic databases the discussion is overwhelmingly structured by an opposition between the public good and individual rights, in the medical context human genetic banking has attracted quite a different discourse, with individual rights and the issue of informed consent at the forefront [19]. I tend to disagree with them, since, as we have seen, the public interest argument is also frequently used in the case of human genetic databases. In 2002, the HUGO Ethics Committee’s Statement on Human Genomic Databases declared that “Human genomic databases are global public goods.”2 If we look at the discursive strategies used by the promoters of genetic database projects, we will see that the public interest is not limited to the possible health care benefits.
4 Human Genetic Databases: Discourse of Hope

The turn of the millennium witnessed a veritable boom in the establishment of human genetic databases in different parts of the world. The design and purpose of the genetic database projects vary, but most intend to map genes for common diseases. It is hoped that human genetic databases will enable us to understand the combined effects of genetic, lifestyle and environmental risk factors in the development of a disease, and thus improve our health by delivering “personalized medicine”. Population-based genetic databases rely on the voluntary participation of a large number of research subjects contributing their DNA samples in the form of blood or tissue, which are then linked with medical, genealogical and lifestyle information.

Since human genetic databases are simultaneously scientific, healthcare and sometimes business projects, different interests are entangled here. Researchers may be motivated by intellectual curiosity, obligation, or self-esteem; biotechnology firms and pharmaceutical companies are interested in financial return; patients and their families might be motivated by an interest in the treatment or cure of a disease as well as by altruism or even duty. Thus the risks and benefits arising from a project may affect different areas, such as health, research and the economy, and different levels, from individual persons to communities and mankind [20]. Depending on whether a genetic database project is treated primarily as a scientific venture, a medical or health-care project, or a business enterprise, different benefits and risks have been constructed in the public discourse. As several social scientists have pointed out, the rhetoric used to justify human genetic databases relies in large part on a very simplistic, deterministic view of genes which does not quite fit the view of genes in current science [21]. Piia Tammpuu studied the media coverage of the Estonian Human Genome Project and analysed the discursive
2 http://www.eubios.info/HUGOHGD.htm
strategies of framing and argumentation, and came to the conclusion that the domestic media have focused mainly on the advantages arising from the project, playing down the risks: “while various benefits allegedly arising from the project are commonly treated as certainties, various disadvantages are generally discussed as probabilities or possibilities, thus mitigating their significance” [22]. Appeals have been made to health as a common value, suggesting that it is people themselves who are interested in better diagnostics and treatment. From the medical perspective, the project has been characterised as a “project of hope” for thousands of people who have incurable diseases. All the projects promise to identify susceptibility genes for common diseases and to improve the medical care and health of the populations involved. In addition, related areas of research such as pharmacogenetics and nutrigenetics are considered extremely promising. The problem with all these promises is that they have been presented as certainties, without discussing the accompanying risks (the geneticization of society, the potential stigmatization of certain social groups, discrimination in insurance and employment, the psychological stress accompanying knowledge about one’s genetic makeup) or making it clear to the public that the realisation of these promises may take a long time and sometimes never occur. So the main risk with the hype around human genetic database projects is that it raises expectations which cannot be satisfied. This can provoke a serious backlash in the future, since abused public trust is difficult to restore.
5 Biometric Technologies: Discourse of Threat

The ethical discussion of biometrics has rather been framed as a fight against various security threats: fraud, identity theft, terrorism, criminal attacks. “We live in a globalised world of increasingly desperate and dangerous people whom we can no longer trust on the basis of their identification documents which may have been compromised” [23]. Identity thieves steal PINs to open credit card accounts, withdraw money and take out loans. In the public sector, governments hope to save billions in public spending by launching programs for detecting and preventing so-called “double dipping”, a kind of fraud that involves collecting more benefits than one is entitled to by entering a program under two or more identities [24]. Another area where biometrics finds extensive use is border control: fingerprint registration of applicants for asylum and of apprehended illegal migrants has been introduced in several countries. After the events of September 11, 2001, biometric scanning programs were quickly introduced in airports to combat terrorism. Anton Alterman has pointed out that the attack and subsequent threats by Al Qaeda rapidly changed public attitudes toward biometric imaging: “How long the change in attitudes lasts, and how far the public is willing to go in offering up pieces of themselves for the sake of security, depends on how long the perception of a serious threat lasts, but it is far from letting up yet.” [25] Biometric information systems used for migration and border security insert themselves into a number of highly complex processes, in the EU, nationally and worldwide.
First and foremost, there is the establishment and enlargement of the Schengen area itself, ongoing since 1990. The effort to remove the internal borders of the EU and replace them with one common, external border is an enormous undertaking that increasingly relies upon new technologies for the establishment of uniform routines in the different member states. Second, there is the establishment of a new policy domain, called the “Area of Freedom, Security and Justice”, ongoing since 1999. Here, a number of issues such as citizenship, immigration, asylum, the fight against terrorism and crime, and police and judiciary cooperation come together within the same policy domain. Third, there is the changing global security landscape and the explicit linkage of biometrics to the fight against terrorism. In the US, the 9/11 Commission stated that for terrorists, travel documents are as important as weapons. A similar view has become predominant in EU policies, where in the aftermath of the tragic events of September 11, 2001 the Commission was asked by Member States to take immediate action to improve document security. Therefore, although 9/11 and biometrics entered into processes (the Schengen area, the policy domain “Area of Freedom, Security and Justice”) that were already underway, it seems clear that they also changed the course and pace of events [26].

At present, three large European databases containing biometric data have been or are in the process of being established: EURODAC (fingerprints of asylum seekers), the Visa Information System VIS (visa data) and the Schengen Information System (II), containing alerts on objects and persons. Information technology is at the heart of the current reconstruction of borders. In particular, the combination of biometric information systems and database technologies is leading to an increase in surveillance, the proportions of which are hard to overestimate: faces, licence plates, fingerprints, and hand geometries are registered when individuals pass through borders, apply for visas, or request asylum. Linked to a host of other personal data kept in files that can be accessed from many places besides the border, it is hoped that this securitisation of borders will increase control over migration and combat terrorism, while at the same time facilitating low-risk border traffic. The freedom of increased international mobility for parts of the population thus goes hand in hand with citizens becoming better known and more transparent to authorities than ever before [27].

Critics have pointed out that there is an increasing tension between the principle of “security” and those of privacy and democracy. The public understanding of what happens to personal data is at best weak. In all EU member states the trend is for government agencies and corporate interests using biometricised documents to downplay the technical disadvantages (false positive and false negative matches of all biometric tools, costs, divergent or competing standards, problems with the proper installation of systems, network security, the quality of images and biometric documents, the dangers of inter-operability, susceptibility to capture by ambient technologies, fraud, ID theft, etc.) [28].
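To make the false positive/false negative point concrete, the toy calculation below shows how moving a matcher's acceptance threshold trades false accepts (impostors let through) against false rejects (genuine users turned away). The comparison scores are invented for illustration; real error rates are estimated from large-scale trials.

```python
# Similarity scores from hypothetical biometric comparisons.
genuine_scores  = [0.91, 0.84, 0.78, 0.88, 0.69]   # same-person comparisons
impostor_scores = [0.32, 0.41, 0.72, 0.28, 0.55]   # different-person comparisons

def error_rates(threshold):
    """False accept rate (FAR) and false reject rate (FRR) at a threshold."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

for t in (0.5, 0.7, 0.9):
    far, frr = error_rates(t)
    print(f"threshold={t:.1f}  FAR={far:.0%}  FRR={frr:.0%}")
# threshold=0.5  FAR=40%  FRR=0%   (lenient: impostors slip through)
# threshold=0.9  FAR=0%   FRR=80%  (strict: genuine users turned away)
```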
6 Challenges to Ethics Arising from Different Technologies

We seem to be approaching the edge where solidarity and responsibility are coming to prevail over individual freedom and autonomy. The argument is that the common
good is superior to individual rights. As pointed out above, similar movements are taking place in biomedicine. Thus, while technologies have rapidly evolved to gather and analyse information, strict regulations protecting individual interests have obstructed the use of that information (e.g., epidemiological research is more difficult since, without a subject’s informed consent, it is impossible to gather statistical data [29]). Several authors (Knoppers [30], Chadwick and Berg [31], Knoppers and Chadwick [32]) have insisted on the need to develop new ethical frameworks focusing on values like reciprocity, solidarity and universality, which better reflect the need to “encourage, not prevent, civic participation for the public welfare” [33]. Others (Goldworth [34], Takala [35]) claim that since the right to individual autonomy also benefits the community, one should not sacrifice individual autonomy for the well-being of others, as is often the case in utilitarian decision-making and generally with appeals to the common good. Thus, we see that in the field of bioethics attempts are still made to balance the protection of individual interests against likely communitarian benefits or the common good. Here the difference from biometric databases becomes evident, as in the field of biometrics the public interest in security is often accepted as a valid argument for prioritising the common good over individual rights. As an illustration of this difference, consider the following quote from Frank Leavitt: “Many of the concepts and considerations which are discussed under the heading of ‘Identity, Security and Democracy’ are the same as those which are found in bioethical discussions of access to medical records, the Human Genome Project, and biobanks. But there are big differences when it comes to security. When security is at stake, there are reasons to be more lenient and reasons to be less lenient about violations of privacy.” [36]

The complex notion of public interest has as yet been insufficiently elaborated, thus constituting a new challenge for ethicists. Liberal ethics places more emphasis on the individual right to freedom and autonomy, whereas communitarian ethics stresses solidarity, responsibility and equity. The implicit question is: who defines the common good? Common good and public interest are extremely vague concepts and famously difficult to define. History has shown that such terms can easily be manipulated and used to further particular interests in the guise of common ones. In the case of biometric technologies, there is a latent threat that, for example, radical political parties who want to control migration, or private corporations that want to make a profit by selling biometric devices, may present their particular corporate interests as the public interest.

Biometric systems are being developed in many countries (e.g., technical surveillance-related responses to 9/11), but public discussion of their benefits and drawbacks has been lamentably absent. Such discussion of both the opportunities and risks of biometrics is, however, mandatory if we want to maintain public trust. Trust is typically construed as risk taking: one has to place trust without guarantees. Since trust involves vulnerability and risk, it is important to discuss possible risks. In speaking about risk we are discussing and arguing about something which is not yet the case but in principle could happen.
Ulrich Beck, who coined the term ‘risk society’, has pointed out that the discourse of risk begins where trust in our security and belief in progress end. It ceases to apply when the potential catastrophe actually occurs. Thus the concept of risk points to a “peculiar, intermediate state between
security and destruction, where the perception of threatening risks determines thought and action” [37]. Risk assessment should be an essential part of the public discourse on human genetic databases as well as on the introduction of biometrics. It is based both on our imagination of what might happen and on our empirical knowledge of what is (the progress made) and, at the same time, on our unawareness of what might come. The task of ethicists is to analyze and weigh the risks, as well as to disclose and justify the values endangered by perceived risks. Awareness of risks gives protection against breaches and potential violations of trust. Restoring breached trust takes a long time, and it is therefore important to engage in continuous reflection about how to create and maintain trust [38].

In the case of databases, public trust depends on various aspects and elements. It is equally important to have trustworthy institutions, to choose trusted persons to run them and, thirdly, to provide reliable systems for data protection and for securing privacy and confidentiality. All this brings us to the issue of how to achieve good governance of databases. As Mark Stranger and Jane Kaye point out in their introduction to the recent volume Principles and Practice in Biobank Governance (2009), the term ‘governance’ relates to the business structures set up to manage an institution and ensure that it achieves the desired outcomes within the relevant systems of regulation. A key concern of many governance structures established for biobanks is to ensure that research is carried out ethically and that research participants are not harmed by involvement in the biobank. They also stress that the link between donor participation, public trust and public consultation has been widely recognised, and public engagement is considered the key to achieving good outcomes in the broader model of governance [39]. Unfortunately, nothing similar can be claimed about the governance of biometric databases. From the guidelines of the OECD, Bernard Didier and Loic Bournon have derived a set of principles relevant to the governance of identity security systems: adopting an ethical conduct, awareness of risks, responsibility of actors, and response to and evaluation of threats [40]. However, further substantial research is needed on the ethical governance of biometric databases.
7 Conclusions

We have seen that there are several differences between biometric and biomedical genetic databases, in terms of their ethical frameworks as well as their discursive strategies. In the case of human genetic databases, there is a slow tendency to move from individual rights-based ethics (concentrating on the right to autonomy, informed consent, and privacy) towards a common-good-based communitarian ethics which puts more stress on solidarity and responsibility. However, although the public interest argument plays some role in the ethical discussion of human genetic banking, it is usually not set in direct opposition to individual rights. On the contrary, in the field of biometrics the public interest in security is increasingly ranked above the individual right to autonomy, bodily integrity and/or privacy. And the principle of informed consent is simply ignored in the ethical discussion of biometrics.
It also emerged from our discussion that the discursive strategies of the two types of databases are different. In the discussion of biomedical-research-oriented genetic databases there is much talk of promised individual benefits (better diagnosis and treatment of diseases, tailor-made drugs, etc.) as well as of collective benefits (better and more efficient healthcare, new scientific knowledge, economic profit), whereas in the case of biometrics the discussion centres on the risks society faces today (terrorism, criminal behaviour, uncontrolled migration, fraud, tax evasion, free-riding), while the intended benefits (security, convenience, efficiency) are not discussed at all but are simply taken for granted. Therefore I have called the discourse surrounding human genetic databases the “discourse of hope”, and the one surrounding biometric databases the “discourse of threat”. What both discourses have in common is the lack of proper assessment of risks and benefits. Without a public discussion of benefits and burdens it is, however, difficult to build and maintain authentic public trust. As biometric and genetic database projects progress, one becomes increasingly aware that in the end their success will depend on whether we are able to create trustworthy institutions and secure their ethical governance.

Acknowledgments. This paper was produced as part of the RISE project (Rising pan-European and International Awareness of Biometrics and Security Ethics), financed between 2009 and 2012 by the European Commission’s 7th Framework Programme (Grant agreement no. 230389), and of the TECHNOLIFE project (Transdisciplinary Approach to the Emerging Challenges of Novel Technologies: Lifeworld and Imaginaries in Foresight and Ethics), financed by the European Commission’s 7th Framework Programme (Grant agreement no. 230381). I also gratefully acknowledge the support of the Estonian Science Foundation grant “New ethical frameworks for genetic and electronic health record databases”, funded under the EEA/Norwegian financial mechanisms. I would like to thank Tiina Kirss and Kadri Simm for their useful suggestions and help with my English, and Laura Lilles for technical assistance. I also owe many thanks to Kristi Lõuk and Katrin Laas-Mikko for introducing me to the discussion on privacy and biometrics. And last but not least, I would like to thank the editor Ajay Kumar for his encouragement and patience.
References

1. Nanavati, S., Nanavati, R., Thieme, M.: Biometrics: Identity Verification in a Networked World. Wiley, New York (2002)
2. Dass, S.C., Jain, A.K.: Fingerprint-Based Recognition. Technometrics 49(3), 262–276 (2007)
3. Mordini, E., Petrini, C.: Ethical and Social Implications of Biometric Identification Technology. Annali dell’Istituto Superiore di Sanità 1(43), 5–11 (2007)
4. Barbour, I.G.: Ethics in an Age of Technology. The Gifford Lectures 1989-1991, vol. 2. Harper, San Francisco (1993)
5. Marx, G.T.: Hey Buddy Can You Spare a DNA? New Surveillance Technology and the Growth of Mandatory Volunteerism in Collecting Personal Information. Annali dell’Istituto Superiore di Sanità 1(43), 12–19 (2007)
6. Kekes, J.: The Morality of Pluralism. Princeton University Press, Princeton (1993)
7. Marx, G.T.: Hey Buddy Can You Spare a DNA? New Surveillance Technology and the Growth of Mandatory Volunteerism in Collecting Personal Information. Annali dell’Istituto Superiore di Sanità 1(43), 12–19 (2007)
8. Tomova, S.: Ethical and Legal Aspects of Biometrics (Convention 108). In: Mordini, E., Green, M. (eds.) Identity, Security and Democracy, pp. 111–114. IOS Press, Amsterdam (2009)
9. van der Ploeg, I.: The Illegal Body: ‘Eurodac’ and the Politics of Biometric Identification. Ethics and Information Technology 1, 295–302 (1999)
10. Westin, A.F.: Privacy and Freedom. Atheneum, New York (1968)
11. Etzioni, A.: The Limits of Privacy. Basic, New York (1999)
12. van den Hoven, J., Weckert, J.: Information Technology and Moral Philosophy. Cambridge University Press, Cambridge (2008)
13. Alterman, A.: Ethical Issues in Biometric Identification. Ethics and Information Technology 5, 139–150 (2003)
14. Árnason, V.: Coding and Consent: Moral Challenges of the Database Project in Iceland. Bioethics 18, 36–61 (2004)
15. O’Neill, O.: Informed Consent and Genetic Information. Studies in History and Philosophy of Biological and Biomedical Sciences 32(4), 689–704 (2001)
16. O’Neill, O.: Some Limits of Informed Consent. Journal of Medical Ethics 29 (2003)
17. Nõmper, A.: Open Consent – a New Form of Informed Consent for Population Genetic Databases. Tartu University Press, Tartu (2005)
18. Levitt, M.: Forensic Databases: Benefits and Ethical and Social Costs. British Medical Bulletin 83, 235–248 (2007)
19. Williams, G., Schroeder, D.: Human Genetic Banking: Altruism, Benefit and Consent. New Genetics and Society 23(1), 89–103 (2004)
20. Simm, K.: Benefit-Sharing: An Inquiry Regarding the Meaning and Limits of the Concept in Human Genetic Research. Genomics, Society and Policy 1(2), 29–40 (2005)
21. Árnason, G.: Genetics, Rhetoric and Policy. In: Häyry, M., Chadwick, R., Árnason, V., Árnason, G. (eds.) The Ethics and Governance of Human Genetic Databases. European Perspectives, pp. 227–235. Cambridge University Press, Cambridge (2007)
22. Tammpuu, P.: Constructing Public Images of New Genetics and Gene Technology: The Media Discourse on the Estonian Genome Project. Trames 8(58/53), 1/1, 192–216 (2004)
23. Dass, S.C., Jain, A.K.: Fingerprint-Based Recognition. Technometrics 49(3), 262–276 (2007)
24. van der Ploeg, I.: Biometrics and Privacy: A Note on the Politics of Theorizing Technology. Information, Communication & Society 6(1), 85–104 (2003)
25. Alterman, A.: Ethical Issues in Biometric Identification. Ethics and Information Technology 5, 139–150 (2003)
26. Biometrics at the European Borders. Scoping paper. 7th FP project Technolife outcome (2009)
27. van der Ploeg, I.: Biometric Identification Technologies: Ethical Implications of the Informatization of the Body. BITE Policy Paper No. 1 (2005)
28. Lodge, J.: Trends in Biometrics: Briefing Paper. Liberty & Security (December 4, 2006), http://www.libertysecurity.org/article1191.html
29. Personal Data for Public Good: Using Health Information in Medical Research. A Report from the Academy of Medical Sciences (2006)
30. Knoppers, B.M.: From Medical Ethics to “Genethics”. Lancet 356(Suppl.), 38 (2000)
31. Chadwick, R., Berg, K.: Solidarity and Equity: New Ethical Frameworks for Genetic Databases. Nature Reviews: Genetics 2, 318–321 (2001)
32. Knoppers, B.M., Chadwick, R.: Human Genetic Research: Emerging Trends in Ethics. Nature Reviews: Genetics 6, 75–79 (2005)
33. Knoppers, B.M.: From Medical Ethics to “Genethics”. Lancet 356(Suppl.), 38 (2000)
34. Goldworth, A.: Informed Consent in the Genetic Age. Cambridge Quarterly of Healthcare Ethics 8, 393–400 (1999)
35. Takala, T.: Why We Should Not Relax Ethical Rules in the Age of Genetics. In: Árnason, G., Nordal, S., Árnason, V. (eds.) Blood & Data: Ethical, Legal and Social Aspects of Human Genetic Databases, pp. 135–140. University of Iceland Press, Iceland (2004)
36. Leavitt, F.: Privacy and Security. In: Mordini, E., Green, M. (eds.) Identity, Security, and Democracy, p. 38. IOS Press, Amsterdam (2009)
37. Beck, U.: Risk Society Revisited: Theory, Politics and Research Programmes. In: Adam, B., Beck, U., van Loon, J. (eds.) The Risk Society and Beyond, pp. 211–229. Sage, London (2000)
38. Sutrop, M.: In: Häyry, M., Chadwick, R., Árnason, V., Árnason, G. (eds.) The Ethics and Governance of Human Genetic Databases. European Perspectives, pp. 190–198. Cambridge University Press, Cambridge (2007)
39. Stranger, M., Kaye, J. (eds.): Principles and Practice in Biobank Governance. Ashgate, Surrey (2009)
40. Didier, B., Bournon, L.: Towards a Governance of Identity Security Systems. In: Mordini, E., Green, M. (eds.) Identity, Security, Democracy, pp. 19–25. IOS Press, Amsterdam (2009)
Interdisciplinary Approaches to Determine the Social Impacts of Biotechnology

Pei-Fen Chang

Graduate Institute of Learning and Instruction, National Central University, Chung-Li 320, Taiwan
[email protected]
Abstract. The internationalization of education and practice has progressed rapidly in the field of engineering. Accrediting organizations are seeking to establish codes of ethics that will not only find favor with the many communities involved in the fields of engineering but also help the disciplines achieve their various professional and educational goals. In order to actively sanction the use of biotechnology, mobilize political support and opposition, and promote the activities of relevant social movements, this paper attempts to discuss the importance of trust in the relationship between engineers and the public. It also examines the roles and responsibilities of engineers and behavioral scientists within the broader context of the societal impacts of biotechnology. The study first reviews the history of codes of ethics and the process by which a code becomes the source of trust between the profession and the public. Next, using the 2008 China earthquake as an example, the study discusses the ethical dilemmas between the profession and the public. Finally, future challenges in the investigation of the societal impacts of biotechnology are presented.

Keywords: ethics, societal impacts, biotechnology, interdisciplinary.
1 Introduction
The internationalization of education has perhaps progressed more rapidly in the field of engineering than in many other fields, with individuals from diverse cultural backgrounds now being found in the classrooms, faculty rooms, and boardrooms of engineering institutes all around the world. This diversity in backgrounds has given rise to a concern among stakeholders (not just engineers, but modern society at large) regarding the standards and standardization of engineering education. In recent years, there have been many attempts to develop and implement equivalency and accreditation systems for international engineering education and practice. One such system, the Washington Accord, was established in 1989 and was initially applicable to English-speaking countries. It is now recognized in nine countries and regions, including Hong Kong, Japan, Singapore, Taiwan, and Korea as full signatories [1]. ABET (the Accreditation Board for Engineering and Technology) is a founding member of the Washington Accord and leads its operation. In Europe, using a process known as the Bologna Process [2], the Euro-Ace organization is also attempting to establish equivalency for qualifications, which will allow practitioners to easily move
for work across borders in its member countries. A similar effort resulted from the 1995 NAFTA Forum held in Puerto Vallarta, Mexico, at which a set of "Principles of Ethical Conduct in Engineering Practice under the North American Free Trade Agreement" was approved; this is especially relevant to the focus of this paper. The topic of "ethics" is increasingly being included in undergraduate and graduate engineering curricula [3]. There are a number of problems associated with this. For a start, engineering is, on the one hand, highly internationalized while, on the other, many ethical views are deeply culturally embedded. Obviously, then, it is not an easy task to harmonize an internationalized codification of ethics satisfactorily enough to achieve the cooperation of all stakeholders in approving it, implementing it, and acting upon it. For instance, when Taiwan wished to accede to the Washington Accord [3], this was possible only when the professional engineering community accepted and adopted two apparently contradictory principles: (1) the code should reflect a member's own culture and practices, and (2) the code must be substantively equivalent to those of the international communities. These two requirements cannot easily be satisfied simultaneously. This study takes a brief and somewhat discursive look at some basic issues that need to be considered as the engineering field attempts to define what an engineering ethics code might look like in the modern international setting.
2 Engineers and Ethics Codes
The Hippocratic Oath, famously taken by medical doctors around the world, enjoins physicians to "do no harm" as their first duty. This in fact means only that, in their professional capacities, physicians should do no harm to the patient's health. This is a narrow, professional injunction. It certainly has no status in law and indicates no specific behaviors. It is general and abstract: a "principle." In this, it is unlike legislation or rules, which may forbid or require specifically delineated actions. This is characteristic of ethics, and this characteristic is a prime difficulty in their codification. The abovementioned difficulty of describing an unambiguous and consistently applicable ethics code for the sciences is well demonstrated in the Encyclopedia of Science, Technology, and Ethics [4], in which bio-engineers are urged to (1) protect the public interest, (2) prevent harm and conflict over justice and fairness, and (3) foster sensitivity to ethical issues and responsibilities at every level of decision-making that involves either technicians or those in charge of policy. Yet these are injunctions that, obviously, no person or even any group can ever fully follow. What is "public interest" or "justice" or "fairness"? There is no agreement on these issues. And even if it were possible to describe these ends, what would be the optimal means to achieve them? No one can say. There have been other attempts to create ethical guidelines for engineers that could be regarded as more discipline-specific. Some researchers [5] proposed three points of ostensibly ethical guidance as follows:
1. Engineers should endeavor, on the basis of their expertise, to keep members of the public safe from serious negative consequences resulting from the development and implementation of their technology.
2. Engineers should endeavor to keep the public informed of their decisions, which have the potential to seriously affect the public, and to be truthful and thorough in their disclosures.
3. Engineers should endeavor to understand and respect the non-moral cultural values of the people they encounter in the course of fulfilling their engineering duties.
3 Are Global Bioethics Possible?
It is certainly creditable that engineering educators and professional organizations have expressed a concern with and wished to codify their ethics. Yet it is worth asking whether or not there is a conflict between discipline and ethics. After all, the sciences are founded on rationalism, but as the Scottish philosopher David Hume (1711–1776) said, "The rules of morality are not the conclusions of our reason." [9] Further, ethics are not rules, and they do not specify procedures, the familiar and easily applied methodologies of professionals. As Friedrich Hayek, the 1974 winner of the Nobel Memorial Prize in Economics [11], discussed at length in The Fatal Conceit, ethics are customary behaviors that have evolved over periods of time so as to provide general guidance to human beings regarding their moral behavior in circumstances where the full ramifications of specific behaviors cannot be satisfactorily described or predicted in terms of specific outcomes [10]. In other words, ethics are not the isolated or rationalistic functions of a cause-and-effect world. While technocrats may be inclined to think that they (and often enough they alone) have the intellectual wherewithal and information at hand that would allow them to make complex decisions at all times about all issues affecting all people in all circumstances, they simply do not have either the power or the knowledge. In short, ethics are not a field of rational study. Ethics cannot be reduced to utilitarian formulations for maximizing happiness, and we certainly cannot derive ethics from reason. Reason does not create ethics; ethics creates reason. A good example of how ethics are the ancestors of reason is found in the history of science itself. Science depends on free speech and the free exchange of ideas. Thus, we must have free speech before we have reason, and there can be no reason without free speech. There seem to be many obstacles to a rational field such as biotechnology codifying and endorsing any kind of culture-specific ethics. The basic ethics of biotechnology might promote the freedom and independence required for scientific enquiry, the validity and enforceability of contracts, transparency, and consistency. They may also include respect for privacy, be it personal or commercial. And they would no doubt include truthfulness and probity in private dealings and public forums. Other issues highly relevant and specific to the practices of biotechnology that are already often outlined in the basic standards of ethical research (for example, in disclaimers in grant applications) include respect and care for human life and health; avoidance of cruelty to animals; avoidance of practices that are polluting, unsustainable, or may cause permanent harm to the environment; and so on. While perhaps inherently impossible, given that ethics and rationalism appear to be orthogonal, it has nonetheless long been a goal of the engineering community to establish an ethics code for itself. Codes of ethics have a long history in U.S. engineering
societies. The first code of ethics was developed by the AIEE in 1912 [6]. The prima facie value of a code is that it can support professional standards and establish, for the groups that propose them, claims of sovereignty over a field of study. It also allows a disciplinary group to present itself to the public as having high ideals and standards, thereby lending credibility to the group. Lastly, it is a strategy that a professional group may use to preclude having external standards imposed upon it.
4 A Case for the Discussion of Ethical Dilemmas and Legal Issues
Both Chinese and Western philosophers have encouraged people to be more sensitive to nature, to consider the interests of others, and to contribute to the well-being of all individuals and humankind as a whole [7]. More than any other situation, disasters make it obvious that engineers cannot be concerned with just themselves and their own welfare, and that it is an essential aspect of their professional duty to be concerned about the safety, health, and welfare of society, as is proposed in almost all engineering codes of ethics. From the intertwining issues mentioned above, it is sufficient for our purpose to establish that there are several ethical dimensions to the Chinese earthquake of 2008. The ethical issues in this situation can be discussed on the basis of the following questions:
1. What is the responsibility of engineers to ensure that people escape safely if a disaster strikes?
2. How should safety checks be executed at schools across the country? Who should be involved during the decision-making process?
3. Was the local population sufficiently aware of the potential risks associated with the inadequate design of the legal system governing construction activities?
4. How should multi-disciplinary professionals be involved in reconstruction planning after the earthquake? Or would it be sufficient to leave this responsibility in the hands of local governmental agencies instead?
5. What are the limits of engineers' professional duty in terms of awareness and utilization of prior knowledge in their construction designs?
6. Should engineers be responsible for utilizing important anti-earthquake designs from other countries such as the US, India, and Japan?
7. To what extent should engineers consider the long-term responsibilities of their decisions, even though many of these consequences could not have been known before the earthquake struck?
5 Conclusions and Implications
We have outlined some theoretical issues in the formulation of ethics and of disciplinary ethical codes and have shown that a purely theory-led, rationalistic approach may lead us to less than practical results. Thinkers in this area should take up a more grounded, evolutionary, and empirical view of the task, resisting any temptation to predict and instead taking interesting or problematic examples of ethical dilemmas in
a particular field and considering, based on observations, what practical disciplinary ethics would entail. In other words, the domain and content of disciplinary ethics might best be established empirically. At first glance, this may appear to be a task unsuited to engineers and more suited to, say, ethnographers, but however any disciplinary group may perceive this task and whatever methodology they may choose to apply, in the end, the ethics of engineers will be the ethics that evolve from engineering. Currently, it seems that we do not even have the basic data required for such an enquiry. Many questions need to be answered. What are the ethical issues in a field such as biotechnology? How do we establish that they are "issues", and for whom? Have the discipline, researchers, and practitioners been surveyed? What part of their research work makes them feel uneasy because they feel it is immoral or dubious? How do they cope with that? How do they grade it in terms of ethical seriousness? What action would an individual think is most suitable to take when faced with an ethical choice of varying degrees of seriousness? Have they ever been faced with the most serious level of ethical conflict? What action did they then take? Are they aware of the ethical standards of others in their field, and if so, how? Can they provide anecdotal support?
6 Future Challenges for Conducting Interdisciplinary Research in Social Acceptance of Biotechnology
A massive disaster such as the 2008 China earthquake not only brings to light many engineering design issues but also shows that individual issues are not isolated from one another. By discussing the series of questions provided above, we can help the public explore and rethink the issues from multiple roles, rather than merely reaching a dichotomous, subjective judgment. For social scientists to study the social implications of biotechnology, they have to understand how emerging technologies (e.g., biometrics¹, nanotechnology, neurology, etc.) are advancing, how information on these advances circulates, and how to make necessary course corrections [5], [8]. There are four possible approaches to conducting multidisciplinary research on the social implications of biotechnology:
1. Using representative sample surveys supplemented by focus groups and open-ended interviews for the measurement of psychosocial parameters.
2. Tracking the changing public perceptions of biotechnology by charting the processes of regulatory review and approval, court decisions that actively sanction the use of biotechnology, mobilization of political support and opposition, and activities of relevant social movements.
3. Using multiple feedback loops in which society interacts with the biotechnology industry and thereby transforms the context in which innovation occurs.
4. Focusing on multidisciplinary work on key research and education issues concerning the socioeconomic implications of biotechnology. This would involve supporting interdisciplinary interactions between the physics, chemistry, biology, material sciences, and engineering communities on the one hand, and the social sciences and economic science research communities on the other.
¹ Biometrics is itself a multidisciplinary subject that integrates engineering, statistics, mathematics, computing, physiology, psychology, and policy [12].
Ultimately, we hope that the discussion presented in this paper will promote further inquiry into engineers' role in public participation. By recommending partnerships among interdisciplinary researchers, our focus has been to demonstrate how interdisciplinary approaches to determining the social impacts of biotechnology may result in open communication between professionals and the public, who are concerned about the ethical issues of the life sciences.
Acknowledgments. We gratefully acknowledge financial support from the National Science Council, R.O.C., through project NSC 96-2628-S-008-002-MY3. We also want to express our gratitude to Professor Ajay Kumar and Mr. C. S. M Kyle for the suggestions received while writing this paper.
References
1. Liu, M., Chang, P.F., Wo, A.M., et al.: Quality Assurance of Engineering Education through Accreditation of Programs in Taiwan. International J. of Engineering Education 24(5), 854–863 (2008)
2. Luegenbiehl, H.C.: Disasters as Object Lessons in Ethics: The Example of Katrina. TASM, Winter (2007)
3. Chang, P.F., Wang, D.C.: Contribution of Asian Values for Improving the Quality Standards of Global Ethics across the Nation Borders. In: International Conference on Engineering Education (ICEE), Pécs and Budapest, Hungary (2008)
4. Mitcham, C. (ed.): Encyclopedia of Science, Technology, and Ethics. Thomson Gale, MI (2005)
5. Roco, M.C., Bainbridge, W.S. (eds.): Societal Implications of Nanoscience and Nanotechnology. Kluwer Academic Pub., MA (2001)
6. Whitbeck, C.: Ethics in Engineering Practice and Research. Cambridge University Press, New York (1998)
7. Bucciarelli, L.L.: Ethics and Engineering Education. International J. of Engineering Education 33(2), 141–149 (2008)
8. Allhoff, F., Lin, P., Moore, J., et al.: Nanoethics: The Ethical and Social Implications of Nanotechnology. John Wiley and Sons, New York (2007)
9. Hume, D.: A Treatise of Human Nature. Dover, New York (2003)
10. Hayek, F.A., Bartley III, W.W. (eds.): The Fatal Conceit: The Errors of Socialism. The Collected Works of F.A. Hayek, vol. 1. Routledge, London (1998)
11. Nobel Memorial Prize in Economic Sciences. Wikipedia, http://en.wikipedia.org/wiki/Nobel_Memorial_Prize_in_Economics (accessed 4 February 2010)
12. Ives, R.W., Du, Y., Etter, D.M., Welch, T.B.: A Multidisciplinary Approach to Biometrics. IEEE Transactions on Education 48(3), 462–471 (2005)
Medical Safety Issues Concerning the Use of Incoherent Infrared Light in Biometrics

Nikolaos Kourkoumelis and Margaret Tzaphlidou

Department of Medical Physics, Medical School, University of Ioannina, 45110 Ioannina, Greece
[email protected],
[email protected]
Abstract. Several biometric devices use illumination in the infrared spectral region. Typical examples are (i) iris and retina recognition and (ii) vascular pattern recognition. Exposure of living tissues to infrared light results in biological effects which are expressed macroscopically as heat. The medical implications of biometrics play a major role in public acceptance. Although infrared radiation is considered safe when certain criteria established by international committees are followed, the current literature is inconclusive on chronic low-intensity exposure in the infrared spectral range. In this study, we summarize current data on the biological effects of the infrared wavelengths used in biometric systems.
1 Introduction
It is common knowledge that all biometric systems must fulfil a number of properties such as universality, uniqueness, permanence, collectability, accuracy, and acceptability-safety. The latter feature mostly concerns a measurable biological characteristic of a subject's body and is a source of fear of, and unwillingness towards, biometric technologies. "User acceptance" of a biometric transaction is directly correlated with the medical implications resulting from such a process, excluding parameters like psychology and ergonomics [1]. These medical implications can be divided into two categories: indirect (IMI) and direct (DMI). IMI involve information disclosure from a medical database without permission, while DMI denote a potential threat to the user's tissues. The purpose of this paper is to review, in brief, the literature on the current status of the possible risks associated with electromagnetic radiation, and more specifically with infrared radiation as a DMI parameter. A different expression of DMI might be in the form of contamination from a contact sensor; this is, however, much more easily controlled through the use of mild antiseptics, as in the case of the recent H1N1 virus. Infrared radiation lies between the visible and microwave portions of the electromagnetic spectrum and is arbitrarily divided into the following three bands: IR-A (near IR) between 760 nm and 1,400 nm, IR-B (mid IR) between 1,400 nm and 3,000 nm, and IR-C (far IR) between 3,000 nm and 10^6 nm (1 mm) [2]. Vein pattern biometrics [3] relies on far and near infrared (NIR) imaging, while iris and retina recognition are based mainly on NIR illumination [4]. The basic principle of vein pattern biometrics is the high penetration depth of IR-A into the tissues. Blood vessels absorb more infrared radiation than the surrounding
tissues because veins contain deoxygenated haemoglobin, which absorbs strongly at about 760 nm within the near-infrared region. This causes the blood vessels to appear as black patterns in the resulting image captured by a charge-coupled device (CCD) camera. Furthermore, a recent improvement in face recognition systems uses near-infrared illumination to achieve illumination-invariant face recognition for indoor, cooperative-user applications [5]. Face thermography, on the other hand, utilizes far infrared radiation, since all objects emit heat as infrared radiation; in particular, the human body emits infrared radiation in the range of 3,000–14,000 nm. Using calibrated thermal cameras, the structure of the vein pattern can be imaged because superficial human veins have a higher temperature than the surrounding tissues. This technique is totally non-invasive and, as such, we will not discuss it any further. Iris recognition analyses the random textured patterns of the iris under NIR illumination. The increase in apparent detail compared to visible light is due to the principal iris pigment, melanin, which is responsible for eye colour and absorbs poorly in the infrared; hence it allows the structural patterns of the iris to be imaged with greater contrast. Retinal scanning is accomplished by illuminating the retina with low-intensity NIR light and imaging the patterns formed by the blood vessel network. NIR illumination has also been used in anti-spoofing techniques based on liveness detection [6]. The aim of liveness testing is to ensure the liveness of the source of the biometric data being recorded. All biometric devices use incoherent illumination produced by light emitting diodes (LEDs, or IREDs in the infrared spectral region), whose radiant power is approximately 1,000 times lower than that of a laser.
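As a concrete illustration of the imaging principle just described, the following is a minimal sketch (not from this paper) of how a grayscale NIR hand image might be enhanced and binarized so that the darker, haemoglobin-absorbing vessels stand out. The input file name is hypothetical, and the OpenCV/NumPy pipeline and its parameters are illustrative choices, not a reference implementation of any deployed system:

```python
import cv2
import numpy as np

# Load a grayscale NIR capture; 'nir_hand.png' is a hypothetical file name.
img = cv2.imread("nir_hand.png", cv2.IMREAD_GRAYSCALE)
assert img is not None, "could not read input image"

# Local contrast enhancement (CLAHE) makes the dark vein pattern more visible.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

# Smooth sensor noise, then adaptively threshold: veins absorb NIR strongly,
# so they are darker than surrounding tissue and survive THRESH_BINARY_INV.
blurred = cv2.GaussianBlur(enhanced, (5, 5), 0)
veins = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                              cv2.THRESH_BINARY_INV, 21, 5)

# Morphological opening removes isolated speckles, leaving the vessel network.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
pattern = cv2.morphologyEx(veins, cv2.MORPH_OPEN, kernel)
cv2.imwrite("vein_pattern.png", pattern)
```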
2 Infrared Radiation
Infrared radiation is classified as non-ionizing radiation, which commonly means electromagnetic radiation whose quantum energy is less than 12 eV; as a result, it has insufficient energy to cause ionizations when it interacts with living tissue. Instead, it may produce excitations that are responsible for a variety of biological effects of thermal and photochemical origin. The term "biological effect" does not necessarily mean biological hazard, since the effect must "cause detectable impairment of the health of the individual or of his offspring" [7] in order to be characterized as such. The extent of biological effects from non-ionizing radiation exposures depends mainly on the energy of the incident radiation, the power density of the field or beam, the duration of exposure, and the spatial orientation and biological characteristics of the irradiated tissues (molecular composition, blood flow, pigmentation, functional importance, etc.) [8]. Specifically, infrared radiation induces molecular vibrations and rotations and interacts with tissues principally by generating heat. It mainly affects the eye and the skin, since it exhibits a limited penetration length. It is unperceived by any of the human senses except as a heat stimulus resulting from high intensity. Table 1 summarizes the harmful biological effects of infrared radiation.
Table 1. Adverse biological effects on skin and eye tissues [9,10]

Spectral Region | Skin Effect    | Eye Effect
IR-A            | Thermal injury | Thermal injury of crystalline lens; retinal burns; thermal cataract
IR-B            | Thermal injury | Thermal injury of cornea; thermal cataract; aqueous flare
IR-C            | Thermal injury | Thermal injury of cornea
The effect is thermal in nature, and injury thresholds vary with wavelength. Thermal injury does not follow the Bunsen–Roscoe law [11] and is primarily affected by the conduction of heat away from the irradiated tissue. Since biometric methodologies rely on IR-A radiation, we will focus on this spectral region.
3 Damage Mechanisms of IR-A Radiation
3.1 Eye
Radiation from 400 to 1,400 nm is focused on the retina. In this spectral region, thermal effects dominate, and upon intense exposure, tissue coagulates within seconds. Thermal injury is a rate process and depends on energy absorption in a volume of tissue. Under normal circumstances, 45 °C is the lower limit for thermal injury, but the critical temperature is a function of duration and irradiance [12]. Exposure spot size and ambient temperature are two additional parameters that play a major role in tissue cooling, since cooling is performed through conduction. Sudden temperature elevations of the anterior of the eye due to high-level IR exposure are readily sensed, causing pain, and the eye moves or blinks in order to increase the area of illumination or to limit further exposure, respectively.
3.2 Skin
IR-A is the most penetrating (to several mm) radiation in the IR spectral region. This means, however, that it is far safer than IR-B and IR-C, because it reaches the subcutaneous tissue without increasing the surface temperature of the skin. Very intense radiance is needed to produce pain and burns. In the case of modest exposures, thermal injury is avoided by vascular flow, which conducts excess heat to cooler parts of the body. Recent studies have shown that IR-A irradiation of dermal fibroblasts can lead to proteolytic degradation of dermal collagen fibers [13], but this finding applies to an exposure of 200 J cm-2, far higher than the exposures of biometric devices. Hence, biometric non-coherent radiation seems to pose no threat to skin.
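The statement that thermal injury is a rate process is often made quantitative, in the general thermal-damage literature rather than in this paper, with an Arrhenius damage integral, Omega = integral of A exp(-Ea/(R T(t))) dt, where Omega >= 1 is conventionally taken to mark irreversible injury. A minimal numerical sketch, using the classic Henriques skin-burn coefficients purely for illustration:

```python
import numpy as np

# Arrhenius damage integral: Omega = sum over time of A * exp(-Ea / (R * T)) * dt.
# A and Ea below are the classic Henriques skin-burn coefficients, used here
# only as an illustration; they are not specific to ocular tissue or IR-A sources.
R = 8.314    # gas constant, J mol^-1 K^-1
A = 3.1e98   # frequency factor, s^-1
Ea = 6.28e5  # activation energy, J mol^-1

def damage_integral(temps_kelvin: np.ndarray, dt: float) -> float:
    """Accumulated Arrhenius damage over a sampled temperature history."""
    return float(np.sum(A * np.exp(-Ea / (R * temps_kelvin))) * dt)

# Example: tissue held at 45 C (318.15 K) for 10 s, sampled every millisecond.
temps = np.full(10_000, 318.15)
omega = damage_integral(temps, dt=1e-3)
print(f"Omega = {omega:.2e}")  # well below 1: 45 C injures only over long durations
```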
4 Exposure Limits
Existing guidelines [14] from the International Commission on Non-Ionizing Radiation Protection (ICNIRP) assume that the IR energy from IR-A poses a risk to the human eye. As a general hazard criterion for IR-A and IR-B, the ocular exposure should not exceed 10 mW cm-2 for lengthy exposures (>1,000 s) and 1.8 t-3/4 W cm-2 for shorter exposure durations, in order to avoid thermal injury of the cornea and lens (delayed thermal cataractogenesis). For exposure times over 10 s, the retinal limit requires that IR-A radiation be restricted according to the following formula:
Σλ Lλ × R(λ) × Δλ ≤ (0.6 W cm-2 sr-1) / α    (1)
where Lλ is the measured spectral radiance (W m-2 sr-1 nm-1), R(λ) the retinal thermal hazard weighting function, Δλ the wavelength interval, and α the angular subtense of the source (in radians), which in general is different from the beam spread angle. For shorter exposures, the previous formula, 1.8 t-3/4 W cm-2, applies. It must be noted here that IR-A therapeutic apparatuses currently used for the hyperthermic treatment of cancers, Raynaud's syndrome, etc., do not meet the above criteria, albeit having beneficial biological effects. Therapeutic IR devices typically operate at treatment irradiances in the vicinity of 800 W m-2 [15].
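To make the two criteria concrete, here is a minimal numerical sketch of how formula (1) and the corneal/lens criterion quoted above might be checked. The spectrum, the source subtense, and the simplification R(λ) ≈ 1 are all assumptions for illustration; a real assessment must use the measured weighting function and the measurement geometry prescribed by the guidelines:

```python
import numpy as np

# --- Retinal limit, formula (1), for exposures over 10 s ---
wavelengths = np.arange(780, 941, 10)         # nm, a typical NIR LED band
L_spectral = np.full(wavelengths.size, 1e-4)  # W cm^-2 sr^-1 nm^-1 (placeholder)
R_weight = np.ones(wavelengths.size)          # R(lambda) taken as 1 for simplicity;
                                              # the real weighting declines in the NIR
alpha = 0.011                                 # angular subtense of source, rad (assumed)
delta_lambda = 10.0                           # wavelength interval, nm

lhs = np.sum(L_spectral * R_weight) * delta_lambda
retinal_limit = 0.6 / alpha                   # W cm^-2 sr^-1
print(f"retinal: {lhs:.3g} vs limit {retinal_limit:.3g} W cm^-2 sr^-1")

# --- Corneal/lens criterion: 10 mW cm^-2 for t > 1,000 s, else 1.8 t^(-3/4) W cm^-2 ---
def corneal_limit(t_seconds: float) -> float:
    return 0.010 if t_seconds > 1000 else 1.8 * t_seconds ** -0.75

irradiance = 5e-3                             # measured irradiance, W cm^-2 (placeholder)
for t in (2, 10, 2000):
    ok = irradiance <= corneal_limit(t)
    print(f"t = {t:5d} s: limit {corneal_limit(t):.4g} W cm^-2, compliant: {ok}")
```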
5 Biometric Exposure
Near-infrared LEDs with wavelengths ranging from 780 nm to 940 nm are normally used for iris recognition applications, while retina recognition is still under research. All biometric installations must adhere to the illumination safety standards ANSI/IESNA RP-27.1-96 and IEC 60825-1 Amend. 2, Class 1 LED, to ensure the safe use of infrared technology [16]. Eye safety is not compromised by a single LED source using today's LED technology and following the above guidelines. Multiple LED illuminators can, however, produce eye damage if not carefully designed and used. Since brightness improves image quality, several biometric devices use arrays of LEDs. These implementations threaten eye safety because they can easily exceed the irradiance safety limits discussed in the previous section. Exposure time remains short for iris identification, since the capture of an image suitable for iris recognition algorithms can typically be achieved within 2–10 s [17]. The key issue here is not single exposure but long-term effects resulting from repeated illumination. The current literature is inconclusive on chronic low-intensity exposure in this wavelength range. Furthermore, a rigorous theoretical model must be applied to simulate such energy deposition, taking into account that biometric devices use both visible and IR-A radiation. Through theoretical calculations, however, Okuno [18] has suggested that the energy of IR-A is insufficient to cause thermal cataract, in contrast to IR-B and IR-C. Long-term and synergistic effects must also be taken into consideration in the case of skin, since cutaneous exposure stimulates molecular response mechanisms [13].
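A toy estimate, not drawn from the paper, of why arrays matter: treating each LED as a point source under a simple inverse-square model, the on-axis irradiance of N identical emitters adds linearly, so an array can approach limits that a single emitter stays far below. The radiant intensity and geometry below are assumed, illustrative values only:

```python
# Toy inverse-square estimate of the combined irradiance of an LED array.
# Real compliance testing must follow the measurement procedures of the
# standards cited above; this only illustrates the linear scaling with N.
RADIANT_INTENSITY = 0.025    # W sr^-1 per LED (assumed datasheet-style value)
DISTANCE_CM = 15.0           # assumed eye-to-illuminator distance
LONG_EXPOSURE_LIMIT = 0.010  # 10 mW cm^-2 criterion for t > 1,000 s

def array_irradiance(n_leds: int) -> float:
    """On-axis irradiance in W cm^-2 for n identical point-source LEDs."""
    return n_leds * RADIANT_INTENSITY / DISTANCE_CM ** 2

for n in (1, 16, 128):
    e = array_irradiance(n)
    verdict = "exceeds" if e > LONG_EXPOSURE_LIMIT else "within"
    print(f"{n:3d} LEDs: {e * 1e3:6.2f} mW cm^-2 ({verdict} the long-exposure limit)")
```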
6 Conclusions
There are still many doubts about the severity of the biological effects of both acute and chronic exposure to various types of electromagnetic radiation. On the other hand, safety practices are often overwhelmed by technological progress in current biometrics research. This study is not meant to be an extensive review, but rather to summarize current knowledge on this topic. Biometric technology is relatively new but very solid. Medical issues are still under investigation, but for the time being, manufacturers of biometric devices must comply with the rules of international organizations and make all technical data sheets available to the scientific community in order to prevent possible health effects. This information, combined with experimental data, could lead to improved guidelines for public exposure levels during biometric transactions. Academia and industry should cooperate to provide each biometric system with a unique official certification, available to every user before or during biometric data collection, describing exposure safety levels in detail.
References
1. Biohealth Project, http://biohealth.helmholtz-muenchen.de/
2. International Commission on Illumination: International Lighting Vocabulary. CIE 17.4, Vienna (1987)
3. Lingyu, W., Leedham, G.: Near- and Far-Infrared Imaging for Vein Pattern Biometrics. In: Proceedings of the IEEE International Conference on Video and Signal Based Surveillance, p. 52 (2006)
4. Woodward Jr., J.D., Orlans, N.M., Higgins, P.T.: Biometrics. McGraw-Hill/Osborne (2003)
5. Li, S.Z., Chu, R., Liao, S., Zhang, L.: Illumination Invariant Face Recognition Using Near-Infrared Images. IEEE Trans. Pattern Analysis and Machine Intelligence 29, 627–639 (2007)
6. Reddy, P.V., Kumar, A., Rahman, S.M.K., Mundra, T.S.: A New Antispoofing Approach for Biometric Devices. IEEE Trans. Biomedical Circuits & Sys. 2, 328–337 (2008)
7. International Commission on Non-Ionizing Radiation Protection: Guidelines for Limiting Exposure to Time-varying Electric, Magnetic, and Electromagnetic Fields (Up to 300 GHz). Health Phys. 74, 494–520 (1998)
8. Ng, K.-H.: Electromagnetic Fields and Our Health; Non-Ionizing Radiations – Sources, Biological Effects, Emissions and Exposures. In: Proceedings of the International Conference on Non-Ionizing Radiation at UNITEN, ICNIR 2003 (2003)
9. Driscoll, C.M.H., Whillock, M.J.: The Measurement and Hazard Assessment of Sources of Incoherent Optical Radiation. J. Radiol. Prot. 10, 271–278 (1990)
10. Sliney, D.H., Wolbarsht, M.L.: Safety with Lasers and Other Optical Sources. Plenum Press, New York (1980)
11. Merwald, H., Klosner, G., Kokesch, C., Der-Petrossian, M., Honigsmann, H., Trautinger, F.: UVA-Induced Oxidative Damage and Cytotoxicity Depend on the Mode of Exposure. J. Photochem. Photobiol. B: Biol. 79, 197–207 (2005)
12. Sliney, D., Aron-Rosa, D., DeLori, F., Fankhauser, F., Landry, R., Mainster, M., Marshall, J., Rassow, B., Stuck, B., Trokel, S., West, T.M., Wolffe, M.: Adjustment of Guidelines for Exposure of the Eye to Optical Radiation from Ocular Instruments: Statement from a Task Group of the International Commission on Non-Ionizing Radiation Protection (ICNIRP). Appl. Optics 44, 2162–2176 (2005)
13. Schieke, S.M., Schroeder, P., Krutmann, J.: Cutaneous Effects of Infrared Radiation: From Clinical Observations to Molecular Response Mechanisms. Photodermatol. Photo. 19, 228–234 (2003)
14. International Commission on Non-Ionizing Radiation Protection: Guidelines on Limits of Exposure to Broad-band Incoherent Optical Radiation (0.38 to 3 μm). Health Phys. 73, 539–554 (1997)
15. International Commission on Non-Ionizing Radiation Protection: ICNIRP Statement on Far Infrared Radiation Exposure. Health Phys. 91, 630–645 (2006)
16. U.S. National Science and Technology Council - Subcommittee on Biometrics: Iris Recognition (2006), http://www.biometrics.gov
17. Wayman, J., Jain, A., Maltoni, D., Maio, D. (eds.): Biometric Systems: Technology, Design and Performance Evaluation. Springer, Heidelberg (2005)
18. Okuno, T.: Thermal Effect of Visible Light and Infra-Red Radiation (I.R.-A, I.R.-B and I.R.-C) on the Eye: A Study of Infra-Red Cataract Based on a Model. Ann. Occup. Hyg. 38, 351–359 (1994)
The Status Quo and Ethical Governance of Biometrics in Mainland China*

Xiaomei Zhai and Qiu Renzong

Centre for Bioethics, Chinese Academy of Medical Sciences/Peking Union Medical College
[email protected]
Abstract. This article starts with a brief introduction and then describes the status quo of biometrics in mainland China. The main part of the article focuses on the ethical concerns raised by biometrics, including privacy, stigma and discrimination, dangers to owners of secured items, loss of identity, and other sources of unease and worry. It concludes with recommendations on the ethical governance of biometrics and suggests a number of ethical principles with which to build an ethical framework for evaluating conduct in biometrics.
Keywords: Biometrics, Biometric Technologies, Biometrics Recognition, Ethical Governance.
1 Brief Introduction
The 1982 American science fiction film Blade Runner depicts a story set in Los Angeles in November 2019, where genetically engineered beings called Replicants (visually indistinguishable from adult humans) are manufactured by the all-powerful Tyrell Corporation. As a result of a violent Replicant uprising, their use on Earth is banned, and Replicants are exclusively used for dangerous or unskilled work as slaves in Earth's colonies. Any Replicant who defies the ban and returns to Earth is hunted down by police assassins known as "blade runners". Replicants can be identified only by using a machine which analyzes iris contractions and dilatations. The machine that allows the Replicants to be identified is a biometric device [1]. No longer science fiction, biometric technologies are the most important innovation in the IT industry for the coming years, and the biometric industry was projected to grow from $600 million in 2002 to $4 billion by 2007.¹ The term biometric comes from the Greek words bios (life) and metrikos (measure). Biometrics is the measurement of individuals' physiological and/or behavioral characteristics. Biometric technologies can be defined as automated methods of recognizing or verifying the identity of a living person based on a physiological or behavioral characteristic. Biometrics comprises methods for uniquely recognizing humans based upon one or more intrinsic physical or behavioral traits.
* This article is written on the basis of my presentation at the Third International Conference on Ethics and Policy of Biometrics and International Data Sharing.
¹ Op. cit.
In IT, biometrics is used as a form
of identity access management and access control. It is also used to identify individuals in groups that are under surveillance. Biometric recognition means verifying somebody's identity from her/his characteristics, or identifying who he/she is from what he/she possesses. There are two classes of biometric characteristics. Physiological characteristics are related to the shape of the body; examples include, but are not limited to, fingerprints, face recognition, DNA, hand and palm geometry, iris recognition (which has largely replaced retina recognition), and odor/scent. Behavioral characteristics are related to the behavior of a person; examples include, but are not limited to, typing rhythm, gait, and voice. Some researchers have coined the term behaviometrics for this class of biometrics. Parameters for a human characteristic to be used for biometrics include: universality – each person should have the characteristic; uniqueness – how well the biometric separates one individual from another; permanence – how well the biometric resists aging and other variance over time; collectability – ease of acquisition for measurement; performance – accuracy, speed, and robustness of the technology used; acceptability – degree of approval of the technology; and circumvention – ease of use of a substitute. A biometric system operates in one of two modes. One is verification: a one-to-one comparison of a captured biometric with a stored template to verify that the individual is who he claims to be; it can be done in conjunction with a smart card, username, or ID number. The other is identification: a one-to-many comparison of the captured biometric against a biometric database in an attempt to identify an unknown individual. Both verification and identification are recognition [2]. Applications of the technology include checking the identity of passengers at borders, checking the identity of entrants at the gates of public events, proving the identity of payment, social security, and other benefit claimants, restricting access to secure premises, checking the identity of voters at polling booths, identifying known criminals, etc. In comparison with traditional technologies, biometric technologies seem to be more accurate, more reliable, more effective, and more confidential and convenient, and they should become cheaper with technical simplification and the mass manufacture of biometric products. For example, a fingerprint scanner that cost $3,000 five years ago, with software included, and $500 two years ago, costs $100 today [1].
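A minimal sketch of the two modes just described, assuming some upstream feature extractor has already produced fixed-length feature vectors; the cosine-similarity score, the threshold, and all names are illustrative conventions, not those of any particular biometric system:

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, template, threshold=0.85):
    """Verification: one-to-one comparison against the claimed identity's template."""
    return similarity(probe, template) >= threshold

def identify(probe, database, threshold=0.85):
    """Identification: one-to-many search over the enrolled template database."""
    scores = {uid: similarity(probe, tpl) for uid, tpl in database.items()}
    best_uid, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_uid if best_score >= threshold else None

# Illustrative use with random stand-in "templates":
rng = np.random.default_rng(0)
db = {f"user{i}": rng.normal(size=128) for i in range(5)}
probe = db["user3"] + rng.normal(scale=0.05, size=128)  # a noisy re-capture
print(verify(probe, db["user3"]))  # True: the claimed identity matches
print(identify(probe, db))         # "user3": best match above threshold
```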
2 Status Quo of Biometrics in Mainland China
The use of fingerprints in commercial and judicial practices in China has a history of thousands of years. It is a long-standing practice in commercial transactions for both parties to a contract to press their fingerprints on the contract, and for witnesses and plaintiffs to press their fingerprints on records of their oral testimony or statements. The practice continues even now for illiterates. Starting in the 1990s, five major centres for biometric research and development, among others, have been established with the support of the 863 and 973 focus programmes, apart from the most advanced Biometric Research Centre at The Hong Kong Polytechnic University in Hong Kong. The five centres include the Centre for Biometrics and Security Research (CBSR), Institute of Automation, Chinese Academy of Sciences (CASIA); the Joint Research and Development Laboratory for Advanced Computer and Communication
Technologies (JDL), Institute of Computing Technology, Chinese Academy of Sciences (CASICT); the Electric Engineering Department, Tsinghua University; the Centre for Information Research, Peking University; and the Centre of Forensic Sciences, Beijing Genomics Institute. The biometric technologies they develop include face recognition (visible light and near infrared) at CASIA, the Institute of Computing Technology, CAS, and Tsinghua University; fingerprint recognition at CASIA and Peking University; iris recognition at CASIA; and palmprint recognition at CASIA [3]. In addition, nearly 200 enterprises have joined the R&D and marketing of biometric products, with a market output value of nearly CNY 300 million [4]. Significant biometric applications in mainland China include, for governmental departments, self-service border crossing, in use at the Shenzhen–Hong Kong border since June 2005 and at the Zhuhai–Macau border since April 2006; a biometric e-passport is being researched and developed. For enterprises, biometrics are used in time attendance and access control. For consumer products, face logon is used for notebook PCs, finger logon for mobile phones, and finger locks are also used [3]. The prospects for biometric applications are very promising, and demand is ever increasing. Officials in the Ministry of Social Security hope to take more effective measures, such as fingerprint and facial recognition or any other biometric technology, to prevent ever-increasing false claims. The department of public security is a major demander of biometric products: of the CNY 250 million output value of biometric products, more than 40% is used in public security and policing. Demands from finance departments are also urgent for preventing ever-increasing fraud and false claims. If the government decides to use biometric identification cards and passports, there will be huge demand [4]. The following are some cases of biometric applications:
2.1 Case in the Olympic Games
At the opening and closing ceremonies of the 2008 Beijing Olympic Games, about one hundred thousand spectators passed through 100 gates via speedy identity verification with facial recognition systems. When spectators bought their tickets, they had to provide personal information and a photo, which enabled a one-to-one comparison of the captured biometric with a stored template for verification [5].
Fig. 1. High-speed face recognition system used during the Beijing Olympic Games in 2008²
² Figures courtesy of http://www.tsingdavision.com/en/rongyu.html
Fig. 2. Fingerprint device used in an intelligent driving training management system³
³ Figure courtesy of http://su.people.com.cn/GB/channel231/374/200803/13/11298.html
2.2 Case in Shenzhen
Shenzhen is the largest border-crossing point in the world: 600,000 passengers exit from or enter Shenzhen per day. After facial recognition devices manufactured by the AuthenMetric Company were introduced, the time for a customs check was reduced from 13 seconds to 6 seconds per passenger. AuthenMetric's facial recognition system is also used in the People's Bank of China for work licenses [6].
2.3 Case in Suzhou
Since 1 April 2008, an intelligent driving training management system has been in use. When applicants are initially enrolled, their fingerprints are collected. At the beginning and end of each training session, trainees have to present their training card and fingerprints to record the training time. This improves the quality of training, reduces the number of car accidents among novices, and prevents fraud. In mainland China, nearly 200 enterprises have joined the R&D and marketing of biometric products, with a market output value of nearly CNY 300 million [7]. China will be a great market for biometric products and an important provider of biometric technologies as well. However, biometrics in mainland China is still at a preliminary stage: there is no national standard for biometric products and applications, nor public discussion of the ethics, policy, and governance issues in biometric R&D and applications.
3 Ethical Concerns
Increasingly, the application of advanced, cutting-edge, or sophisticated technology causes public concerns, which may focus on: the pervasiveness of a technology which
many people do not understand; the lack of transparency of the workings of the technology and its effects on individuals and society; the difficulty of respecting privacy and confidentiality when third parties may have a strong interest in getting access to recorded and stored personal data; the difficulty of ensuring the security of shared personal data; and the lack of adequate infrastructure, which may reinforce existing inequalities. As Mordini and Petrini point out, many problems are related to individual rights, such as the protection of personal data, confidentiality, personal liberty, and the relationship between individual and collective rights [8].
3.1 Privacy
According to the Confucian concept of personhood, any individual exists in a network of interpersonal relationships. This concept was sometimes interpreted as meaning that there is, or should be, no privacy. Even the term "privacy" was interpreted as referring to something shameful or even scandalous (见不得人, jian bu de ren). A Chinese saying goes: "If you have something that you don't want other people to know, you should not do it in the first place" (若要人不知，除非己莫为, ruo yao ren bu zhi, chufei ji mo wei). So the awareness of privacy among the public, and even among professionals, is relatively weak. In the past there was no law or legal article on privacy; neither the Constitution (1982, 1988, 1993, 1999, 2004) nor the General Provisions of Civil Law (1987) contains an article on privacy. Recently, many laws and regulations related to health care or public health have included an article on protecting privacy. The Tort Liability Law, promulgated in December 2009 and taking effect in July 2010, contains an article which reads: "Medical institutions and medical personnel shall protect patients' privacy and keep confidentiality. Those who leak a patient's privacy, or disclose medical records without the patient's consent and do harm to the patient, shall bear the tort liability." (Article 62) [9]
3.1.1 Case of Data Gate
On 2 January, CCTV reported one of the most notorious cases, called the "Data Gate" case. A network advertising company in mainland China claimed to provide property management offices with computer software, free of charge, that could facilitate communication between these offices and property owners. As a result, all owners' detailed personal data was transferred to the company's server within one minute of the offices installing this software on their computers. The Shenzhen branch of the company collected these data and its Beijing branch sold them. This led to serious harassment, blackmail, and even crimes.⁴ The following two cases show that there is a black market in data, and that the protection of personal data privacy has emerged as a grave issue of public concern in mainland China:
• Case in Hangzhou. On 3 December 2009, the Procuratorate of Gongshu District, Hangzhou City, the capital of Zhejiang Province, approved the arrest of Cui Zhengdong, a suspect in the
⁴ "Data Gate in Guangzhou", People's Daily Online, Feb. 22, 2008; source: Guangzhou Daily. The report, titled 三百楼盘数十万业主资料遭泄露 ("Personal data of hundreds of thousands of owners of three hundred housing estates leaked"), quoted "industry insiders" as saying the data were in no way inferior to the data and private materials held internally by the public security bureau and the government.
crime of illegally stealing personal information. In September 2009, Mr. Lin, a Hangzhou citizen, received a strange text message on his mobile phone which said, in effect: a client has asked our company to cut off one of your hands and one of your feet in revenge; for the sake of your happiness and well-being, if you pay us CNY 50,000, we will refuse to take the order. Mr. Lin remitted 3,500 yuan to the designated account and then reported the matter to the police. The police found that the blackmailer was one Huang, who had spent 1,580 yuan to buy more than 10,000 records of car-owner data in Hangzhou, Nanjing, and Shanghai via the Internet; the data seller was Cui, a computer science graduate who had spent 300,000 yuan to buy a huge quantity of data on car owners, insurance, real estate, stock investors, mobile phone owners, and apartment owners from a man in Shenzhen via the Internet. The data occupied 30 GB of his computer's hard disk. He had already sold 250,000 records in 80 transactions, obtaining illegal profits of 150,000 yuan [10].
• Case in Zhuhai. On 3 January 2010, the People's Court of Xiangzhou District, Zhuhai City, sentenced Lin Guiyu and six other defendants to 3–11 years in prison, fining them 40,000–150,000 yuan, for the crime of swindling. They pretended to be 14 officials, such as the Vice-Mayor of Zhuhai City, the Party Secretary of Enping City, and the Secretary of the Discipline Committee of Foshan City, called the officials' relatives, and deceived them into believing that the officials urgently needed money, obtaining illegal profits of 880,000 yuan. The phone numbers and phone bills of these 14 officials were provided by Zhou Jianping, who had set up an investigation company and illegally stole data on phone bills, mobile bills, and personal information. Zhou sold the 14 phone bills to Lin and was paid 160,000 yuan; he was sentenced to 18 months in prison and fined 2,000 yuan. This was the first sentence handed down for the crime of illegally selling personal data [11].
As physical or mental characteristics or conditions might be deducible from biometric measurements, the most significant privacy concerns raised by biometrics relate to the threat of "function creep", by which the original purpose for obtaining the information is widened to include other purposes without the informed and voluntary consent of the participants [1]. Mordini and Ottolini pointed out that the health care sector is second only to the financial sector in terms of the number of biometric users, and that there is a risk that biometric authentication devices can reveal significant health information, so their use requires careful ethical and political scrutiny [12]. Redpath argued that it is necessary to explore the impact of the use of biometrics in the management of migration on the individual's right to privacy and ability to move freely and lawfully, and suggested building domestic and international frameworks to govern the use of biometric applications in the migration/security context [13]. However, personal information has to be accessed for security reasons or societal needs, so a proper balance between personal interests and public interests is needed. Article 8 of the Charter of Fundamental Rights of the European Union stipulates that "Everyone has the right to the protection of personal data concerning him or her. Such data must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law.
Everyone has the right of access to data that has been collected concerning him or her, and the right to have it rectified. Compliance with these rules shall be subject to control by an independent authority." [14], [15] This provision will be helpful to all countries where biometric technologies come to be widely applied.
3.2 Stigma and Discrimination
Recent scientific research suggests that biometric features can per se disclose medical information. Certain chromosomal disorders, such as Down's syndrome, Turner's syndrome, and Klinefelter's syndrome, are known to be associated with characteristic fingerprint patterns in a person. Thus, biometrics might become not only an identifier but also a source of information about an individual. The likely future use of genetic test information and DNA profiles in biometrics bears many risks of discrimination and of the multiplication of compulsory testing procedures. Various groups, including the elderly and the disabled, face the risk of discrimination. Fingerprints become less readable with age, while those who are visually impaired or have a limb missing may not be able to provide the requisite biometric data; and severe pain and serious injuries may prevent some patients in emergency wards from providing biometric characteristics. As Wickins pointed out, social exclusion might arise from the use of biometric data, but the possibility of social exclusion resulting from the use of biometric data for such purposes as identity cards has not yet been fully explored. We need a balancing of individual interests with which to analyse whether it is justified to run the risk of excluding some members of society for the benefit of others [16].
3.3 Dangers to Owners of Secured Items
When thieves cannot get access to secured properties, there is a chance that they will assault the property owner to gain access. If the item is secured with a biometric device, the damage to the owner could be irreversible, and could potentially cost more than the secured property. A car thief may cut off the finger of the owner of a very expensive car when attempting to steal it.
3.4 Loss of Identity
Some worry that today's citizens will be reduced to biological data, as name, age, address, and other traditional identifying characteristics are replaced by biometrics, which could be used by companies and governments alike. On the other hand, many people in developing countries do not possess any documents with which they can prove who they are. These people are already vulnerable on account of their poverty, and the fact that they are unable to provide evidence of their identity makes it difficult to empower them [16].
3.5 Unease and Worry
Taking the fingerprints of staff may cause them unease: they may feel their boss is looking over their shoulder, and the associations with criminality worry a surprising number of people. A huge database containing this sort of personal information would
unnerve staff. If such a system were to be compromised, the results could be devastating. Staff may also worry about the possibility of infectious diseases being transmitted via fingerprint-scanner devices. One advantage of passwords over biometrics is that they can be re-issued: if a password is lost or stolen, it can be cancelled and replaced by a newer version. This is not naturally available in biometrics; if someone's face is compromised from a database, they cannot cancel or reissue it. And there are many ways to cheat the technology. Artificial devices could be used for mimicry, and the reliability of the data is dependent upon the source that provided it. Biometric identification could be fooled by a latex finger, a prosthetic eye, a plaster hand, or a DAT (digital audio tape) voice recording [16].
4 Ethical Governance
4.1 Biometrics as Social Needs
Biometrics R&D and its application answer social needs. More secure, accurate, reliable, effective, efficient, and convenient methods are needed for preventing terrorism and fraud, to the benefit of individuals, enterprises, and society. The introduction of biometrics would result in billions in savings on public spending by preventing the wide spread of fraud. The advance of biometrics is an outcome of globalisation. Commerce, migration, and trusted exchanges of all types of information now take place globally on a daily basis. Transactions and mobility have increased the need for individuals to prove their identity and to be sure of the identity of others, meaning that more effective forms of identification need to be developed.
4.2 Good Governance
One option for governance is to "develop (pollute) first, and govern second": scientists concern themselves only with R&D, government and enterprises only with investment, the public only with enjoyment and consumption, and humanists and social scientists only with comments made with hindsight (马后炮, ma hou pao). This is an outdated view of development, and it is not good governance. The other option is that governance should accompany development from the initial stage, and that all stakeholders, including government, scientists, engineers, humanists and social scientists, lawyers, businessmen, and the public, need to engage in governance from the very beginning. This is good governance. In BIONET's Recommendations on Best Practice in the Ethical Governance of Sino-European Biological and Biomedical Research Collaboration [17], it is suggested that the concept of ethical governance arises from our understandings of the ways in which a governance system can be made both practical and just, in diverse historical, cultural, and normative contexts. The following aspects in particular define ethical governance:
• The rule of law: regulatory structures need to be in place; ethical institutions must be established; law must be implemented; regulations and ethical guidelines must be secured.
• Transparency (of scientific practice, medical applications, biomedical research and the translation of research into practice, and of funding procedures). Free and independent media are a key resource and an institutional prerequisite for transparency in this area.
• Accountability that is clearly established and agreed, including such issues as: who is responsible for what, under which conditions, and with which consequences?
• Respect for human rights in biomedical research, with regard to patients and research subjects.
• Participation in processes of decision-making.
• Absence of corruption (in research and hospital settings, in the implementation of existing rules, and in obtaining science funding).
4.3 Building an Ethical Framework for Evaluating Conduct in Biometrics
What do we need? We need an ethical framework for evaluating any conduct undertaken in biometrics R&D and its application. This framework will be formed by a set of principles of ethical governance; this set of principles also represents core values shared and committed to by the stakeholders who engage in biometrics R&D and its application.
4.3.1 Principle 1
The fundamental purpose of biometric R&D and its application is to promote the well-being and quality of life of people through safer, more effective, and more advanced science and technology (以人为本, yi ren wei ben, "take people as the foremost"). Biometric technologies should be used solely for legal, ethical, and non-discriminatory purposes (International Biometric Industries Association, 1999 [18]). Any action in biometrics should be evaluated on the principles of beneficence and non-maleficence, which serve as a basis for attempts to weigh anticipated benefits against foreseeable risks. In biometric applications, individual interests and public interests should be properly balanced. Where individual rights and interests are limited for the public good, the limitation should be necessary, proportional, and minimal.
4.3.2 Principle 2
Biometric R&D and its application should maintain high standards of responsible research, i.e., adhering to research integrity and committing to safeguarding and protecting people's rights and interests. "They are therefore committed to the highest standards of systems integrity and database security in order to deter identity theft, protect personal privacy, and ensure equal rights under the law in all biometric applications" (International Biometric Industries Association, 1999 [18]).

4.3.3 Principle 3
Conflicts of interest between professionals, companies and users in biometric R&D and its application should be handled properly. In no case may the well-being of people (the vulnerable in particular) be compromised for the interests of professionals or companies.
4.3.4 Principle 4
Respect for persons, mainly respect for autonomy, requires self-determination. The principle of informed consent must be adhered to. Where personal information is re-used for a purpose different from the one stated at enrolment, consent has to be obtained anew.

4.3.5 Principle 5
Human dignity requires us to protect privacy, confidentiality and medical secrecy. It requires us not only not to infringe upon the individual right to privacy/confidentiality, but also to do our best to prevent improper or illegal disclosures of private information.

4.3.6 Principle 6
Justice requires the equitable distribution of limited resources, and the prevention of possible stigma and discrimination due to improper disclosure of individual information.

4.3.7 Principle 7
Solidarity requires us to safeguard the right of everyone to enjoy the benefits of biometrics R&D and application, with a special concern for vulnerable groups in society.

4.3.8 Principle 8
Transparency requires us to make biometrics R&D and its application transparent to the public, i.e., the taxpayers, and to help them understand what biometrics is and what the benefits and risks of its application are.

4.3.9 Principle 9
Public engagement requires us to take measures (such as dialogue between biometrics professionals and the public or its representatives, e.g., NGOs) to facilitate public understanding of biometrics and to enable public consultation, engagement or involvement in the process of biometrics R&D and its application.
References
1. Biometrics Identification Technology Ethics. CSSS Policy Brief N° 1/03, http://danishbiometrics.files.wordpress.com/2009/08/news_2.pdf (accessed January 30, 2010)
2. Jain, A.K., Ross, A., Prabhakar, S.: An introduction to biometric recognition. IEEE Trans. on Circuits and Systems for Video Technology 14(1), 4–20 (2004)
3. Biometrics in China - A National Activity Report. China National Body, http://www.sinobiometrics.com/publications/Biometrics-inChina.pdf (accessed January 30, 2010)
4. Preliminarily building of the market of biometric application. Computer World Newspaper
5. 我国研制的人脸识别技术成功用于奥运会 [Face recognition technology developed in China successfully used at the Olympic Games]. XinhuaNet, http://news.xinhuanet.com/tech/200808/30/content_9738626.htm (accessed January 30, 2010)
6. Interview with Li Ziqing. CAST, http://www.cast.org.cn/n35081/n10371505/10374732.html (accessed January 30, 2010)
7. 苏州: 4月1日起凭指纹学驾车 [Suzhou: fingerprints required for driving lessons from April 1]. Biometric, http://www.shengwushibie.com/news/Industry/200803/85.shtml (accessed January 30, 2010)
8. Mordini, E., Petrini, C.: Ethical and social implications of biometric identification technology. Ann Ist Super Sanità 43(1), 5–11 (2007)
9. National People's Congress: Tort Liability Law. ChinaEClaw, http://www.chinaeclaw.com/readarticle.asp?id=15921 (accessed January 30, 2010)
10. 蒋萍, 徐寒萍: 杭州: 出卖个人信息嫌犯被批捕, 全国首例因出卖个人信息获刑 [Jiang, P., Xu, H.: Hangzhou: suspect arrested for selling personal information, the first conviction in China for selling personal information]. SouFun (2009), http://hzbbs.soufun.com/2010166302~1~671/61698388_61698388.htm (accessed January 30, 2010)
11. Lawtime.cn, http://www.lawtime.cn/info/xingfa/xfnews/2010010438839.html (accessed January 30, 2010)
12. Mordini, E., Ottolini, C.: Body identification, biometrics and medicine: ethical and social considerations. Ann Ist Super Sanità 43(1), 51–60 (2007)
13. Redpath, J.: Biometrics and international migration. Ann Ist Super Sanità 43(1), 27–35 (2007)
14. The Charter of Fundamental Rights of the European Union. European Parliament, http://www.europarl.europa.eu/charter/default_en.htm (accessed January 30, 2010)
15. Gutwirth, S.: Biometrics between opacity and transparency. Ann Ist Super Sanità 43(1), 61–65 (2007)
16. The ethics of biometrics: the rights and wrongs of new ID cards technologies. European Commission: Research, http://ec.europa.eu/research/research-foreurope/science-eco-bite_en.html (accessed January 30, 2010)
17. Recommendations on best practice in the ethical governance of Sino-European biological and biomedical research collaboration. bionet, http://www.bionet-china.org/ (accessed January 30, 2010)
18. IBIA Statement of Principles and Code of Ethics. IBIA, http://www.biteproject.org/documents/ibia_code_ethics.pdf (accessed January 30, 2010)
Building an Ecosystem for Cyber Security and Data Protection in India

Vinayak Godse

Security Practices, Data Security Council of India, Niryat Bhawan, 3rd Floor, Rao Tula Ram Marg, New Delhi – 110057, India
[email protected] http://www.dsci.in
Abstract. Governments across the globe are gearing up, through policy enactments and the necessary investments, to fight the menace of rising cyber crime. These policies and investments also assure citizens of their privacy rights in cyber space. India, with its high growth rate, is rapidly integrating itself with the Internet economy, where transactions are predominantly carried out electronically. While the Internet offers a new means of expanding economic and business avenues, with ease of operation and broad outreach, it is subject to the ever increasing dangers of cyber crime and the escalating misuse of personal information collected by businesses. Individuals need legal protection to protect their personal rights and secure their transactions in cyber space. Institutions, industry and government need assurance that sufficient steps have been taken to secure the flourishing growth of the economy. This requires setting up an ecosystem that is capable of understanding new-age complexities and offering a swift response mechanism. The ecosystem for cyber security and data protection necessitates a strong legal framework, proactive government initiatives, active involvement of and contribution by the industry, and an effective law enforcement mechanism. This paper discusses how India is responding to cyber security and data protection challenges, and how a new ecosystem has been taking shape in recent years.

Keywords: Cyber Security, Data Protection, IT (Amendment) Act, 2008.
1 Introduction

The Internet has brought fundamental changes in the way society interacts, governments offer services, organizations expand business and individuals transact. While the Internet brings immense benefits, saves cost, and brings unparalleled efficiency, it by its nature exposes entities (institutions, organizations and individuals) to the ever expanding threats of cyber space. These cyber threats have been augmenting their capability to damage national interests, jeopardize industry functions and seriously harm individuals and end users. This has led to the positioning of cyber security as an important element of national security, which is increasingly reflected in state policies and regulatory initiatives aimed at the protection of public, private and individual property. This protection provides a necessary impetus for the expansion of the Internet economy and attracts increased trust from end users.
The increased thrust of the Government on Information Technology in offering public services to citizens, and the exploitation of new technology-enabled channels by industry, give end users very effective and effortless options to avail themselves of services and perform financial transactions. However, they have to negotiate with the Government and industry by sharing their personal information. Personal information, shared to avail a specific service or perform a specific transaction, attracts immense value. Private bodies are likely to use this information for their commercial benefit; Government tends to use this information in the name of national security. The ability and scale of technology to gather, process, use, and share information gradually invite concerns among end users about their personal information being mishandled, and possibly used against them. The concept of privacy is undergoing a change, and the Government of India, its citizens and industry have been living through this dynamic over the last decade. In a bid to balance the benefits of technology against its inherent risks, protect national infrastructure from increasing cyber threats, secure the growth of the economy, protect the interests of industry, and provide comfort to citizens about their personal rights in cyberspace, the Government of India, with the help of industry, is trying to evolve a new ecosystem for cyber security and data protection. The Indian economy is being transformed by its increasing dependence on Information Technology. It is therefore interesting to review the different initiatives undertaken by the government towards building an ecosystem that promises security in cyberspace and guarantees the privacy rights of individuals, and to examine how industry is contributing to this ecosystem.
2 Indian Economy Becoming an E-economy

The story of India's growth continues even against the backdrop of the global recession. In recent years, the country has witnessed an increasing thrust in the use of Information Technology. The Government of India is investing more than 10 billion dollars in e-governance through many mission-mode projects that would transform its functioning into Government @ 24x7. The Indian government, a big consumer of Information Technology, is not only using IT to create new-age channels for public services, but is also using IT to manage the critical infrastructure of the country. Industry, on the other hand, is extensively using technology to expand its business outreach. E-commerce in India is growing by 30% a year, with travel, downloads and e-tailing being the preferred e-commerce services. Although Internet penetration in India, at about 7.1%, is low in comparison to global peers, Forrester, a market research company, estimates the number of Internet users in India currently to be 52 million and expects it to increase at an average growth rate of 10-20%. Forrester estimates that by 2013, India will be the third largest user of the Internet. The Indian banking industry has observed that the Internet is overwhelming other channels for executing banking transactions. Retail e-payment1 is growing fast at the rate of 70% a year and will reach $180 billion by 2010. E-transactions currently account for 30% of total transactions; however, 75% of total payment value is found in electronic form. Card circulation1 (credit and debit) will hit 210 million by 2010.
1 Payments in India is going e-way, Celent report.
The mobile subscriber base is expanding at an even greater pace. According to TRAI, by August 2009 the total wireless subscriber base had crossed the figure of 456 million, and it is expected to cross 1 billion by 2014. With a larger base of mobile consumers, and the ease of transactions that these channels offer, m-commerce is expected to pick up in India within a short span of time. The Indian IT and IT Services industry is growing multifold, offering not only cost reduction and scalability, but also quality. According to a McKinsey-NASSCOM study, the outsourcing industry will reach between $225 billion and $350 billion by 2020, from the current $50 billion. Indian society is gradually transforming from a joint-family to a nuclear-family structure, climbing the individualism ladder fast. The emerging segment of the population, in the age bracket of 25 to 35 years, is increasingly using the Internet to avail itself of both public and private services. They have been developing their understanding of the use of technology as well as of the risks of cyber space. Moreover, the media is becoming active in covering national and international data breaches. The increased exposure of the IT and ITES industry to global data protection regulations also makes Indians aware of privacy perceptions in other nation states.
3 An Ecosystem for Cyber Security and Data Protection

In discussing an ecosystem for cyber security and data protection, five important aspects need to be evaluated:

(1) Legal Framework: Does the country have an adequate legal model for security and privacy? Does the current legislative ecosystem understand new-age complexities? Has special legislation been enacted to deal with the specific challenges posed by Information Technology?
(2) Government Initiatives: Is the government pro-active enough in policy enablement? Does it invest enough to address increasing challenges? How does it partner and collaborate with industry, academia and other stakeholders?
(3) Special Projects: What projects have been undertaken at the national level that affect cyber space and privacy? How will these projects benefit the cause?
(4) Industry Initiatives: How is industry participating and collaborating in the ecosystem? Is there any special-purpose mechanism established that provides a suitable platform for the industry?
(5) Law Enforcement: Is law enforcement in the country effective enough to handle new-age crimes? What initiatives have been taken to improve law enforcement?

3.1 Legal Framework for Cyber Security and Data Protection

The recent enactment of the IT (Amendment) Act, 2008, brings India into the league of countries that have a legal regime for cyber security and privacy. The Indian Constitution, through Article 21, guarantees citizens the right to privacy, and Indian courts of law have upheld this right in many cases. In various judgments, the Supreme Court of India has
upheld individual privacy rights, fixed the liability of offenders, and recognized a constitutional right to privacy against unlawful government invasions. Moreover, cyber security and data protection measures are supported by various enactments, namely, (i) The Indian Telegraph Act, 1885, (ii) The Indian Contract Act, 1872, (iii) The Specific Relief Act, 1963, (iv) The Public Financial Institutions Act, 1983, (v) The Consumer Protection Act, 1986 and (vi) The Credit Information Companies (Regulations) Act, 2005.

The IT (Amendment) Act, 2008, provides a comprehensive definition of the computer system and tries to bring clarity to the definition of cyber crimes. It introduces the concept of "sensitive personal information" and fixes the liability of the 'body corporate' to protect it. It also makes it possible to take legal action against an individual for breach of confidentiality and privacy under a lawful contract. The data protection regime in India emerges with these provisions. These changes may help gain the confidence of global clients who send their data to Indian IT/BPO companies as part of outsourcing. The amended Act also enables the setting up of a nodal agency for critical infrastructure protection, and strengthens the role of CERT-In. The Act now enables the central government to define an encryption policy for strengthening the security of electronic communications; until now, a uniform national policy for encryption was absent. This will help the growth of e-governance and e-commerce. The Cyber Appellate Tribunal, which is now operational, is expected to accelerate the legal proceedings of cyber crime cases.

IT (Amendment) Act, 2008: Avoiding duplication of legislation. This Act includes provisions for digital signatures, e-governance, e-commerce, data protection, cyber offences, critical information infrastructure, interception and cyber terrorism in an omnibus, comprehensive legislation. It has been observed that in western countries the legal regime that governs Information Technology is complex and redundant; in the United States, for example, there are 45 Federal enactments and about 598 State enactments that can be attributed to security and privacy.

Reasonable Security Practices for Data Protection. The IT (Amendment) Act, 2008, adopts a route of 'reasonable security practices' for the protection of 'sensitive personal information'. The Act refrains from creating an infrastructure in terms of a privacy commissioner's office, as prevails in European countries, which might have led to the formation of a rigid bureaucratic mechanism impeding routine business functions. Instead, it tries to address the data protection concerns of citizens by fixing the liability of an organization that fails to implement the practices, to the extent of unlimited liability. It remains to be seen how the Act will be enforced, and how it will help the cause of data protection in India.

3.2 Regulatory Bodies

Regulatory bodies such as the Reserve Bank of India (RBI) are becoming sensitive to the increasing concerns of individuals about the personal information gathered by organizations. The RBI, in 2007, issued guidelines to the banking industry for upholding privacy principles while processing personal information. Similarly, the Telecom Regulatory Authority of India (TRAI), through its 'Do Not Call Registry', assures protection to consumers from telemarketers that would otherwise infringe the privacy of telecom customers.
In comparison to the Financial Services Authority of the UK or the Information/Privacy Commissioner offices in European countries, the role of Indian regulatory bodies in data protection is limited to guidance to industry. The regulatory bodies, in due course of time, will have to complement the changed legal regime in the country.

3.3 Government Initiatives

The Government of India has undertaken various policy, project, program and awareness initiatives, some of which are summarized in the following.

(1) CERT-In: The Government of India has set up the Computer Emergency Response Team, India (CERT-In) as a nodal agency for incident management. CERT-In2, through a dedicated infrastructure, monitors threats that affect computer systems, collaborates internationally on incident response, tracks incidents affecting both the public and private sectors, and issues security guidelines. It also engages with its counterparts in other countries, such as CERT/CC, US-CERT, JPCERT, APCERT, and the Korean CERT. The IT (Amendment) Act, 2008, recognizes CERT-In as the nodal agency for security incident management and authorizes it with the power to 'call for information' from Indian corporates. This may help CERT-In establish a data breach notification mechanism, which across the globe has been recognized as an important facet of a 'national data protection regime'.
(2) E-Governance projects: The Government of India promotes international standards for securing critical infrastructure. Special bodies like the National Institute for Smart Government (NISG) define standards for e-governance projects.
(3) Unique Identification Authority of India (UIDAI): This project aims at issuing a unique identification number (UID) to all Indian residents. UIDAI promises that this identity will be robust enough to eliminate duplicate and fake identities through effective verification and authentication. If this project delivers on its promises, it will go a long way towards curbing identity theft.
(4) Information Security Education and Awareness: In view of the growing importance of information security, DIT has formulated and initiated the Information Security Education and Awareness (ISEA3) program. The objective of this program is to create information security awareness among students, teachers and parents. It relies on an education exchange program, security research in engineering and PhD programs, and the training of system administrators, professionals and government officers.
(5) Cyber Forensics: Under the Directorate of Forensic Science, a part of the Ministry of Home Affairs, three Central Forensic Science Labs (CFSLs) have developed capabilities in cyber forensics. There are also 28 State Forensic Science Labs (SFSLs) now acquiring cyber forensics techniques and skills. The Resource Centre for Cyber Forensics (RCCF) has been set up at Thiruvananthapuram with the objectives of developing cyber forensic tools and providing technical support and necessary training to law enforcement agencies in the country. Many private players now also offer services in this field.
2 http://www.cert-in.org.in/
3 http://www.iitr.ac.in/ISEA/pages/About_Us.html
3.4 Law Enforcement Challenges

Law enforcement has always been a challenge when it comes to dealing with cyber crimes or data breaches. While the current situation leaves much to be desired, many steps are in the making to revamp the internal security structure of the country. Projects such as NATGRID and CCTNS will enable this; CCTNS, in particular, will connect 14,000 police stations and 6,000 higher-level police offices to a centralized database. There has been an increased thrust on investing in capabilities and skills to cope with new-age criminal challenges. NASSCOM-DSCI, in a bid to offer industry help to law enforcement bodies and to help ensure the speedier trial of cyber crimes, has established cyber labs in some of the important IT cities. These labs train police officers in basic forensic skills. The program is modeled on the FBI-funded National Cyber-Forensics and Training Alliance (NCFTA) in the United States. The challenge is aggravated by the transborder nature of cyber crimes, which requires healthy and responsive international collaboration. This challenge has been recognized, and more international cooperation is being sought by Indian law enforcement bodies through formal communication channels and joint programs.
4 Industry Initiatives

The Indian IT and ITES industry, an important player in the ecosystem, has gained significant experience in cyber security and data protection. In its bid to protect client data, which these companies process as part of outsourcing or the provision of services against specific security requirements, the industry has acquired significant skills in this field. All global security vendors have a presence in India; many of them source their research talent from India and have built research facilities there. A recent DSCI-KPMG survey confirms that Indian industry is catching up fast with information security trends, confidently facing new-age security challenges through the use of technology and leading practices. The survey also highlights that increased sensitization to protecting the personal information being processed here is driving the industry's privacy initiatives.

4.1 NASSCOM Trusted Sourcing Initiative

Industry feels the need for a joint effort to promote a security culture in the country and to promote India as a trusted outsourcing destination. NASSCOM, the industry body of the IT and ITES industry, initiated a 4E initiative for the outsourcing industry for the promotion and enforcement of security. It relies on Engagement with all stakeholders involved, Education of service providers, Enactment to create a policy environment, and Enforcement of standards through constant checks. NASSCOM's trusted sourcing initiative culminated in the setting up of the Data Security Council of India (DSCI), a focal body dedicated to the cause of cyber security and data protection.

4.2 Data Security Council of India (DSCI)

Different models prevail across the globe for regulating organizations (data controllers) on data protection in the interest of end users, consumers or citizens (data subjects).
The European Union recommends setting up data protection authorities that enjoy statutory powers. By contrast, the United States uses a sectoral approach that relies on a mix of legislation, regulation and self-regulation; its Safe Harbor Program relies on voluntary compliance with privacy principles and advocates reasonable security practices. The Indian IT/ITES industry set up the Data Security Council of India as a self-regulatory organization. DSCI relies on a best-practices approach as a practical and realistic way to enhance global adherence to data security standards. In this direction, it has come up with its Security Framework4 (DSF©) and Privacy Framework (DPF©), under which it advocates security practices to outsourcing service providers. DSCI collaborates with industry, international bodies, governments and different knowledge sources for the cause of cyber security and data protection, and is playing an important role in establishing the new ecosystem in India that is sensitive to that cause. It has been working closely with the Government of India in its effort to enable an appropriate policy environment. Through its interaction with industry, DSCI is creating important enablers that help government bodies take important policy decisions.
5 Conclusion

Expanding cyber security threats, the evolution of cyber terrorism, requirements for the speedier trial of cyber crimes, increasing awareness of the security of personal data gathered by organizations, the data protection requirements of Indian IT/BPO companies serving global clients, and the increased security requirements of expanding e-governance and e-commerce demand a national-level security ecosystem. This should be supported by government initiatives, a strong legal framework, effective law enforcement and industry initiatives. The IT (Amendment) Act, 2008, operational since October 27, 2009, offers a strong lever to address contemporary cyber security challenges and promises to establish a strong data protection regime in India. Measures such as passing investigation powers to lower-level officers and setting up the Cyber Appellate Tribunal will ensure the speedier trial of cyber crimes. The strengthened role of CERT-In will contribute to improving the security of critical infrastructure. The government's policy initiative on encryption will provide the necessary impetus to the growth of e-commerce and e-governance. Law enforcement bodies are gearing up to these new challenges by upgrading their skills and building their competencies. Industry, on the other hand, is extending its support, working closely with government through the Data Security Council of India, collectively contributing to building a strong national security ecosystem.
References
1. Consumer E-Commerce Market in India 2006/07, Internet and Mobile Association of India, http://www.iamai.in/Upload/Research/final_ecommerce_report07.pdf
2. Payments in India is going e-way, Celent report, http://reports.celent.com/PressReleases/20081008/IndiaEPayments.asp
4 http://www.dsci.in/index.php?option=com_content&view=article&id=80&Itemid=100
3. Forrester Forecast: Global Online Population To Hit 2.2 Billion By 2013, http://www.forrester.com/ER/Press/Release/0,1769,1296,00.html
4. Sharma, V.: Information Technology - Law & Practice, 2nd edn. Universal Law Publishing Co. Pvt. Ltd, New Delhi (2007)
5. Overall Spectrum Management and review of license terms and conditions, TRAI, http://www.trai.gov.in/WriteReadData/trai/upload/.../704/pr16oct09no71.pdf
6. DSCI-KPMG Data Security Survey (2009), http://www.dsci.in/images/stories/data_security_survey_2009_report_final_30th_dec_2009.pdf
The Unique Identification Number Project: Challenges and Recommendations

Haricharan Rengamani1, Ponnurangam Kumaraguru2, Rajarshi Chakraborty1, and H. Raghav Rao1

1 University at Buffalo, SUNY, New York, USA
{harichar,rc53,mgmtrao}@buffalo.edu
2 Indraprastha Institute of Information Technology, Delhi, India
[email protected]
Abstract. This paper elucidates the social, ethical, cultural, technical and legal implications and challenges around the implementation of a biometric-based unique identification (UID) number project. The Indian government has undertaken a huge effort to issue UID numbers to its residents. Apart from the challenges expected in the implementation of the UID, the paper also draws parallels with the Social Security Number (SSN) system in the US. We discuss the setbacks of using the Social Security Number as a unique identifier and how to avoid them in the system being proposed in India. We also discuss the various biometric techniques used and offer a few recommendations on the use of biometrics.

Keywords: Biometric, ethics, culture, legal, technical, challenges, and issues.
1 Introduction

The Unique Identification (UID) is a forthcoming and highly ambitious project in India under which each resident of India will be issued a unique number that can be used lifelong and anywhere.1 The central government created the Unique Identification Authority of India (UIDAI), under the leadership of Mr. Nandan Nilekani, Minister of State, as the agency responsible for implementing the envisioned Multipurpose National Identity Card or Unique Identification card (UID Card) project in India. UIDAI will be responsible for issuing the numbers, but individual government ministries and agencies will be responsible for integrating the numbers with their respective databases (e.g., the Union Labour Ministry will be responsible for the Employees' Provident Fund Organisation (EPFO)). The Indian government believes this unique identification will save identity verification costs for businesses2, especially by virtue of a system that facilitates online verification and authentication of identity. To enable this fast verification, even over a cell phone, UIDAI will be
1 http://timesofindia.indiatimes.com/news/india/Here-to-make-a-difference-not-to-give-Infycontracts-Nilekani/articleshow/4977357.cms
2 http://www.thehindubusinessline.com/2009/10/01/stories/2009100151301100.htm
building a central database of details of every Indian resident, including demographic and biometric information.3 Such a centralized system invites questions about a single point of failure and potential abuse from inside the central government. Given that UIDAI aims to make verification easier for organizations in both the private and public sectors through a central database, it is important to understand potential loopholes, which include breaches of privacy and targeted advertising, in the context of cultural, administrative and bureaucratic practices as well as the contemporary technological and legal issues in India.

Principally, we assume that the Unique Identifier is similar to the Social Security Number (SSN) in the US, and based on this assumption we draw inferences that can be helpful in designing the UID. The US federal government has been mulling over the idea of replacing the SSN with a biometric unique identifier, and the challenges listed below pave the way for one. Before citing the similarities and differences of the SSN, it is worth mentioning its evolution. A closer look suggests that the purpose of the SSN was primarily to enable the Social Security Board to maintain accurate records of the earnings of individuals who worked in jobs covered under the Social Security program [6]. Although the Social Security Administration (SSA) has made the card counterfeit-resistant, the card does not contain information that would allow it to be used as proof of identity. However, the simplicity and efficiency of using a unique number that most people already possess has encouraged widespread use of the SSN by both government agencies and private enterprises, especially as they have adapted their recordkeeping and business systems to automated data processing.

In the next section we briefly discuss the challenges faced with the SSN. In Section 3, we discuss in detail the challenges that we anticipate in implementing the UID, and in Section 4 we provide specific recommendations for making the UID a success. Finally, in Section 5, we conclude by presenting some high-level thinking with respect to the UID.
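To make the envisioned online, yes/no verification concrete, the sketch below shows the shape of such a service: a relying party submits a UID number plus a fresh biometric sample and receives only a match/no-match answer, never the stored record. This is a minimal illustration under our own assumptions; the names (UidVerifier, enroll, verify), the toy cosine-similarity matcher, and the threshold are all hypothetical, not the actual UIDAI design.

```python
# Hypothetical sketch of a yes/no UID verification service. The matcher,
# threshold, and record layout are illustrative assumptions only.

def similarity(a: list[float], b: list[float]) -> float:
    """Toy matcher: cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

class UidVerifier:
    THRESHOLD = 0.95  # operating point; trades false accepts vs. false rejects

    def __init__(self) -> None:
        self._templates: dict[str, list[float]] = {}  # UID -> enrolled template

    def enroll(self, uid: str, template: list[float]) -> None:
        self._templates[uid] = template

    def verify(self, uid: str, fresh_sample: list[float]) -> bool:
        """Return only match/no-match; the enrolled record never leaves."""
        enrolled = self._templates.get(uid)
        if enrolled is None:
            return False  # unknown UID answers exactly like a failed match
        return similarity(enrolled, fresh_sample) >= self.THRESHOLD

v = UidVerifier()
v.enroll("999912345678", [0.11, 0.80, 0.59, 0.02])
print(v.verify("999912345678", [0.12, 0.79, 0.60, 0.03]))  # True: close sample
print(v.verify("999912345678", [0.90, 0.10, 0.05, 0.40]))  # False: different person
```

Note that, unlike passwords, biometric samples never repeat exactly, which is why the service compares a similarity score against a threshold rather than testing equality.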
2 Challenges Faced with the SSN

The implementation of the SSN across all domains has had repercussions in other areas, and a few of the major issues cited are privacy [3], identity theft [2, 14], terror-related crimes [13], and other issues [7]. Many incidents have been reported owing to the loss of SSNs. Attackers can exploit various public- and private-sector online services, such as online "instant" credit approval sites, to test subsets of variations and verify which number corresponds to an individual with a given birth date. ID theft is one of the fastest growing financial crimes; it has affected nearly 10 million Americans and accounted for 42% of the complaints received by the Federal Trade Commission in 2003 [4]. The danger of identity theft is illustrated by the TriWest Healthcare Alliance case, in which several TriWest beneficiaries became victims of identity theft in 2003 [2]. Terrorists use Social Security number fraud and ID theft to assimilate themselves into society. According to FBI testimony before the Senate Committee on the Judiciary, "Terrorists have long utilized ID theft as well as Social Security number fraud to enable them to obtain such things as cover employment and access to secure locations [4]."
3 http://beta.thehindu.com/opinion/interview/article19518.ece
Researchers have shown that Social Security Numbers can be predicted using publicly available data [1].
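The predictability result rests on structure: before the SSA randomized assignment in 2011, the three-digit area number correlated with the state of application and the two-digit group number was issued in a known order, so knowledge of birth state and date collapses the search space. A back-of-the-envelope sketch of the effect follows; the counts are illustrative assumptions, not the actual model of [1].

```python
# Rough illustration of why structured identifiers are guessable.
# The area/group counts are illustrative assumptions, not measured values.
import math

total_space = 10 ** 9            # nine decimal digits, if fully random

areas_for_known_state = 10       # assume a state that used ~10 area numbers
groups_active_on_birth_date = 2  # assume ~2 group numbers in use that period
serials = 10 ** 4                # four-digit serial portion

candidates = areas_for_known_state * groups_active_on_birth_date * serials
print(f"blind search space : {total_space:,}")
print(f"with birth data    : {candidates:,}")
print(f"reduction          : {total_space / candidates:,.0f}x")
print(f"entropy            : {math.log2(total_space):.1f} -> {math.log2(candidates):.1f} bits")
```

Under these assumptions the effective search space drops from a billion numbers to a few hundred thousand, small enough to test against the "instant" online services mentioned above.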
3 Challenges Anticipated in the UID

For the implementation of the UID project in a country with a population of a billion people and upwards, the challenges are manifold. The anticipated challenges can be broadly classified as administrative, legal and technological.

Legal Challenges: The SSN was primarily used for tracking the benefits and income of citizens and migrant workers. As time went by, the SSN became the primary identifier and a supporting document facilitating applications for loans, insurance, or even a driver's license. As for the UID project, the legal system in the country should decide, mandate and regulate the benefits that citizens can acquire through the use of the UID. While the SSN is an identifier for both migrant workers and citizens, the UID is planned to be for the citizens of the country. The judicial and executive machinery of the country needs to stipulate eligibility criteria for allotting the UID to a given person. Apart from this, the system could also be extended to include migrants to the nation, which may help in several employment- and security-related issues.

Administrative Challenges: Considering the current population of India and its growth rate, data collection is a massive task. Data collection can also be influenced by other factors like ethnic diversity, geographical variation, accessibility, gender, and climatic conditions. Data collection usually comprises manually gathering information such as age, gender, marital status, etc., which can be facilitated by existing identification cards. A separate body overseeing the data collection should be constituted, and its branches, supervised by appropriate bodies, should be established to carry out the survey effectively. This council should mandate the manual proofs/documents needed to corroborate the identity of an individual.

Technological Challenges: The requirements for the design and implementation of the project will vary with its legal and administrative scope. Given the scale and complexity of the project, it can be broken down into many subprojects, which in turn span many modules. Depending on the functional requirements of the modules, each can follow its own software lifecycle model, and combining the individual modules into subprojects and integrating all subprojects will be an ambitious task. With respect to the choice of technology, the factors that should be considered are robustness, scalability, performance, complexity, extensibility, maintenance and cost. With the increasing adoption of the Internet and the rising mobility of Internet users in India, the core of the technological part of the UID design and implementation will have to be geared towards data gathering from several types of user-end devices such as laptops and smart phones. With respect to the privacy of data, data needs to be protected by
secure encryption methods and security certificates, and the implementation of self-check digits is recommended. The system should also be capable of thwarting denial-of-service and other malicious attacks. Risk factors need to be identified and risk mitigation plans built around them; risk factors may include natural calamities such as earthquakes, floods and tsunamis, as well as acts of crime. Information recovery needs to be included in the risk mitigation plan in the event of critical data loss. Other technology considerations include standard, secure ways to store and retrieve the data in a microchip format, wherein a single swipe of the card allows authorized persons to view the holder's details. The architecture of the database should be designed so that it handles more than a billion unique entries reliably.

3.1 Biometric Approach to the UID

In this section we discuss the idea of biometrics, its varied uses, best practices and some concerns with the use of this technology. Since the first forms of biometric identification, biometric technology has undergone numerous advancements. Today there are different types of biometrics that can be used to accurately identify a person: facial recognition, vocal recognition and iris scanning are a few biometric identification methods widely used today, apart from fingerprinting. Biometrics today mainly finds its use in identity access management and access control [10, 11]. Physiological biometric characteristics are based on the shape of the body and other physical traits; examples include face recognition, DNA, and hand and finger geometry. Behavioral biometrics are based on the behavior of an individual; examples include gait and voice. Voice is generally a physiological trait, but voice recognition is mainly based on the study of the way a person speaks and is therefore commonly classified as behavioral. Human characteristics can be used as a biometric based on parameters such as uniqueness (how well the biometric distinguishes one individual from another), permanence (how well a biometric resists variance over time), collectability (ease of data capture for measurement), and performance (robustness of technology, accuracy).

Because biometric systems based solely on a single biometric may not always meet performance requirements, the development of systems that integrate two or more biometrics is emerging as a trend (a minimal score-fusion sketch appears at the end of this section). Multiple biometrics could combine two modalities, such as facial and iris recognition. Multiple biometrics could also involve multiple instances of a single biometric, such as 1, 2, or 10 fingerprints, 2 hands, and 2 eyes. Prototype systems are trying to integrate fingerprint and facial recognition technologies to improve identification.

The greatest social concern with respect to the implementation of the UID with biometric information is misuse and possible profiling with respect to gender or a specific race or sector of people. Judicial regulation and executive supervision need to be in place to avoid misuse of biometric information, and subsequently to avoid privacy invasion. Privacy advocates might protest the "informatization of the human body" [9], where the human body is seen as an entity of information.
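To make the multi-biometric idea concrete, here is a minimal sketch of score-level fusion, one common way to combine matchers: each matcher's raw score is normalized to [0, 1] and the normalized scores are combined by a weighted sum against a decision threshold. The score ranges, weights and threshold below are illustrative assumptions, not values from any deployed system.

```python
# Minimal sketch of score-level fusion for a multi-biometric system.
# Ranges, weights, and the threshold are illustrative assumptions.

def min_max_normalize(score: float, lo: float, hi: float) -> float:
    """Map a raw matcher score onto [0, 1] given its observed range."""
    return (score - lo) / (hi - lo)

def fuse(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted-sum fusion of already-normalized scores."""
    total_weight = sum(weights[m] for m in scores)
    return sum(weights[m] * s for m, s in scores.items()) / total_weight

# Raw outputs from two hypothetical matchers with different native scales.
raw = {"fingerprint": 142.0, "iris": 0.81}
ranges = {"fingerprint": (0.0, 200.0), "iris": (0.0, 1.0)}
weights = {"fingerprint": 0.4, "iris": 0.6}  # iris matcher trusted more here

normalized = {m: min_max_normalize(s, *ranges[m]) for m, s in raw.items()}
fused = fuse(normalized, weights)
print(f"fused score = {fused:.3f}, accept = {fused >= 0.7}")  # 0.770, True
```

The design choice is that a strong score from one modality can compensate for a borderline score from another, which is precisely why fused systems can outperform any single matcher.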
Table 1. Privacy aspects with respect to various biometric technologies

Fingerprint
  Positive: can provide different fingers for different systems; large variety of vendors with different templates and algorithms.
  Negative: strong de-identification capabilities.

Face recognition
  Positive: changes in hairstyle, facial hair, texture, position and lighting reduce the ability of the technology to match without user intervention.
  Negative: easily captured without user consent or knowledge.

Iris recognition
  Positive: current technology requires a high degree of user cooperation; difficult to acquire an image without consent.
  Negative: very strong de-identification capabilities; development of the technology may lead to covert acquisition capability; most iris templates can be compared against each other (no vendor heterogeneity).

Retina scan
  Positive: requires a high degree of user cooperation; the image cannot be captured without user consent.
  Negative: very strong de-identification capabilities.

Voice scan
  Positive: voice is text dependent; the user has to speak the enrollment password to be verified.
  Negative: can be captured without the consent or knowledge of the user.

Hand geometry
  Positive: physiological biometric, but not capable of identification yet; requires a proprietary device.
  Negative: none.
Biometric technology, when combined with other technologies, can allow covert monitoring of citizens at any given place or time; this could well be the most dramatic surveillance technology, and it could emerge as a standard for terrorism prevention. Some specific implications of using a biometric approach for UID-based authentication are as follows:

Privacy invasion: Billions of records can be linked to people with respect to their identity, health information and/or other private information. Sophisticated data-mining techniques can enable the discovery of unknown and non-obvious
relationships among these data sets, raising concerns of social insecurity among people. The efficiency and convenience of this technology also enable the traceability of private data, and its subsequent loss [15].

Social implications: Europe has instituted directives with respect to data processing that reveals racial or ethnic origin [5]. Other sensitive data might include political opinions, religious beliefs and/or anything concerning health information. The European Union's Charter of Fundamental Rights addresses some of the issues that affect biometric technologies: it mandates that everyone has the right to protect and access his or her personal data, and it also ordains the right to edit personal data. If not safeguarded, data collected during screening and identification procedures might become a tool for racial or ethnic classification, which might have appalling political consequences [8].

Ethics: Medical research shows that certain medical disorders are directly associated with certain behavioral characteristics. In such cases a biometric might not only become an identifier for an individual; it could also prove to be a source of information about that individual. Furthermore, DNA profiles might carry the risk of discrimination and the multiplication of compulsory testing procedures. These ethical reasons might justify the need for a strong legal framework. Instances where biometric information is used for reasons beyond its original purpose, and without the consent of the participants whose data is involved, are areas of major ethical concern.
4 Recommendations

While designing and implementing the UID, we suggest that the following factors be taken into consideration.

• Access to UIDs by the public and private sectors should be restricted or, if possible, prohibited, and any means of exploiting UIDs should be prevented.
• UID lookup, if provided online, should carry additional security measures and validations, and individuals bearing UIDs should be protected by prohibiting persons from obtaining their IDs in order to find a person with the intent to physically injure or harm them.
• The process of issuing UIDs should be tightened by some simple measures, say, requiring photo ID to get a UID or replacement card, restricting the issuance of multiple replacement cards, and pushing for the verification of legitimate documents at the time of issuance.
• A self-check digit should be implemented to prevent transcription errors and to ensure accuracy in the system (see the sketch after this list).
• An encryption and decryption scheme should be implemented to protect the privacy of the identifier. The proposed enhanced SSN encryption and decryption scheme is intended to aid access security without compromising an individual's privacy.
• When designing the numbers, a random-number-generation algorithm should be used rather than deriving numbers from date of birth, state and other known fields. This resolves issues through a "content-free identifier", thereby improving security [12]; this too is illustrated in the sketch after this list.
• A process/mechanism should be introduced to handle persons who do not have a UID (immigrants, dual citizenship holders).
• Public awareness of the importance of this system should be created through a nationwide campaign, and the public should be alerted to its misuse and other fraudulent activities.
• The IT infrastructure should be designed so that disruptions to information systems critical to the infrastructure of the nation are infrequent, manageable, of minimum duration, and result in the least damage possible, with effective backup measures and contingency plans.
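The check-digit and content-free-identifier recommendations above can be made concrete in a few lines. The sketch below draws an identifier from a cryptographically secure random source (so the number encodes no date of birth, state or other field) and appends a Luhn check digit, the scheme used by payment card numbers, to catch single-digit transcription errors. The 12-digit length is an illustrative assumption, not a proposed UID format.

```python
# Sketch of the "content-free identifier" recommendation: an 11-digit random
# body from a CSPRNG plus a Luhn check digit to catch transcription errors.
import secrets

def luhn_check_digit(body: str) -> str:
    """Compute the Luhn check digit for a string of decimal digits."""
    total = 0
    # Double every second digit from the right, since the check digit
    # will occupy the rightmost position of the final number.
    for i, ch in enumerate(reversed(body)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def new_uid(length: int = 12) -> str:
    """Random identifier carrying no birth date, state, or other fields."""
    body = "".join(str(secrets.randbelow(10)) for _ in range(length - 1))
    return body + luhn_check_digit(body)

def is_valid(uid: str) -> bool:
    return uid[-1] == luhn_check_digit(uid[:-1])

uid = new_uid()
print(uid, is_valid(uid))          # e.g. 482910375246 True
typo = uid[:-2] + str((int(uid[-2]) + 1) % 10) + uid[-1]
print(typo, is_valid(typo))        # single-digit error is detected: False
```

Encrypting the stored identifier, as recommended above, sits on top of this: a content-free number leaks nothing by itself, and a check digit lets data-entry errors be caught before a lookup ever reaches the central database.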
5 Conclusion

The introduction of the UID in India has the potential to give citizens better access to a host of government services. While this forces governments to improve their citizen services, it also helps eliminate fake and duplicate identities under various government schemes, helping governments stem exchequer losses arising from ghost identities and duplication. Lack of ID proof generally results in harassment and denial of services, and the introduction of the UID would specifically improve the delivery of flagship schemes. Indian banking rules mandate the use of a document to verify the identity of an individual before opening a bank account; this can also be extended to securing loans and other financial products. Use of the UID card as a means of identifying an individual will quicken the process of securing a credit card or a loan. It will pave the way for governments to effectively build new schemes (employment generation, food and nutrition, disability benefits) and ensure that they reach the targeted strata of the country, thereby cleaning up the delivery of social sector services and subsidies. The issuance of passports and driving licenses can be facilitated through the use of UIDs, which will be photo ID cards. The ensuing text elucidates the impact UID cards are expected to bring about in India.

Issuance of the cards will also help the government gain a clearer view of the population and other demographic indicators. UID cards can be a major impetus to e-governance programs and services. When the UID system matures, technology systems can be built around the UID framework to formalize a 'grievances cell' through which basic amenities such as public sanitation, roads, and power may be provided or grievances redressed. The internal security scenario in the country can also be monitored, with UIDs being used to track criminals. The UID can help deter illegal immigration and help curb terrorism. Citizens need not carry multiple documentary proofs of their identity to avail themselves of government services. Even though in the Indian scenario patient health information is not linked to any specific financial records, similar scenarios might arise if the UID comes to be used for a variety of purposes. The judiciary and executive must bring out laws and legislation to stipulate the use and scope of UIDs in each sector.

Another recommendation is to form a dedicated cyber wing post-implementation of the project. This wing can deal with all sorts of cyber security related issues and can design a forensic response plan. This plan should mandate how groups can form a resource pool that is well aware of the UID system and can work closely with the cyber wing to combat any issues involving breaches of security
and others, without disrupting normal functioning. These mitigation plans can then be amended frequently to accommodate requirements that arise from different scenarios. These professionals should also be included in the review committee, and their inputs should be considered in any upgrade or optimization activities. Even though we have cited many issues related to the UID, we believe that this project can be successful if the issues raised in this paper are considered while designing the system. If the UIDAI takes an inclusive approach of bringing different stakeholders together to provide a solution, it will be very helpful for the project.

Acknowledgement. This research is supported in part by NSF Grant No. 0916612. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
References
1. Acquisti, A., Gross, R.: Predicting Social Security numbers from public data. Proceedings of the National Academy of Sciences (2009)
2. American Health Information Management Association: Using the SSN as a Patient Identifier. Journal of AHIMA 77(3), 56A–D (2006)
3. Todorova, A.: Is hiding your Social Security Number worth it? SmartMoney (2009)
4. Thomas, B.: Committee on Ways and Means. Fact Sheet: Social Security Identity Theft (2004)
5. Biometric Identification Technology Ethics (BITE) project, http://www.biteproject.org
6. Puckett, C.: Social Security Administration. Social Security Bulletin 69(2) (2009)
7. Trumbull, D.: The problems facing Social Security (2005), http://www.trumbullofboston.org/writing/2005-03-11.htm (accessed February 10, 2010)
8. Homeland Security, Biometric Identification and Personal Detection Ethics (HIDE) project, http://www.hideproject.org/ (accessed February 10, 2010)
9. Ploeg, I.: Genetics, biometrics and the informatization of the body. Ann Ist Super Sanità 43(1), 44–50 (2007)
10. Jain, A., Flynn, P., Ross, A.: Handbook of Biometrics. Springer, Heidelberg (2007)
11. Rhodes, K.: Information Security: Challenges in Using Biometrics. United States General Accounting Office (2003), http://www.gao.gov/new.items/d031137t.pdf (accessed February 10, 2010)
12. National Committee on Vital and Health Statistics: Enhanced Social Security Number, http://www.ncvhs.hhs.gov/app7-1.htm
13. O'Carroll, P.: Testimony Before the Subcommittee on Social Security of the House Committee on Ways and Means. Social Security Administration (2004)
14. Ryan, P.: Social Security. United States House of Representatives (2009), http://www.house.gov/ryan/issuepapers/socialsecurityissuepaper.html (accessed February 10, 2010)
15. Prabhakar, S., Pankanti, S., Jain, A.K.: Biometric Recognition: Security and Privacy Concerns. IEEE Security & Privacy (2003)
The Unique ID Project in India: A Skeptical Note

R. Ramakumar

School of Social Sciences, Tata Institute of Social Sciences, Mumbai – 400 088
[email protected]
Abstract. In this note, I discuss certain social and ethical aspects of a new national project to supply unique ID (UID) numbers to Indian residents. The UID project is presented as a "technology-based solution" that would change the face of governance in India. I argue in this note that the UID project would actually lead to the violation of a large number of freedoms of the Indian people. No amount of assertion vis-à-vis improved service delivery can justify the violation of citizens' freedoms and liberties. Next, I argue that there is a misplaced emphasis on the benefits of technology in this project, when the robustness of that technology in handling large populations remains largely unproven. Further, I argue that no detailed cost-benefit analysis of the project has been carried out yet. Finally, I try to show, with an illustration, that the roots of inefficiency in public welfare schemes in India are policy-induced and do not lie in the absence of identity proofs.

Keywords: Identity Cards, Unique Identification Numbers, Technology and Governance, ICT, India.
1 Introduction

The intensified use of science and technology in matters of public administration and governance is a phenomenon of the 1980s and after. While many basic facets of technology, such as photography, have been used in governance earlier as well, the use of high-end forms of information and communication technology (ICT), like centralized national databases, biometrics and satellite imagery, is more recent. Concurrently, there has been much euphoria in the mainstream literature on governance regarding the impact of ICT on the evolution of societies and their socio-economic development (see Friedman, 2005) [1]. On the other hand, a parallel stream of literature has argued that it may be erroneous to assume a simple linear relationship between the development of technology and the development of society. This literature, while looking at the growth in productive forces as integral to the evolution of humanity itself, underlines the complex and intimate intertwining of society and technology. These studies call for a detailed understanding of how technology and social relations interact, and caution against viewing this as a simple cause-and-effect sequence. Thus, in the study of e-governance initiatives, as well as of projects that involve the intensive utilization of ICT, social scientists emphasise the study of society itself as a starting point. Yet, notwithstanding
this literature, governments across the world have tended to see the hastened adoption of ICT in governance as a panacea for the problems of inefficiency in administration and service delivery.
2 The Unique ID (UID) Project in India

In this note, I discuss certain social and ethical aspects of a new national project to supply unique ID numbers to Indian residents. In 2009, soon after assuming power, the new Indian government announced the formation of the Unique Identification Authority of India (UIDAI) and appointed Nandan Nilekani (formerly the Chairman of INFOSYS, a private corporate information technology and consulting firm) as its Chairperson. As per a working paper of the UIDAI, the Authority proposes to "issue a unique identification number (UID) to all Indian residents that is (a) robust enough to eliminate duplicate and fake identities; and (b) can be verified and authenticated in an easy, cost-effective way" (UIDAI, 2009a, pp. 4-5). The UIDAI is envisaged to enroll all Indian residents in a centralized database, along with their demographic and biometric (fingerprint and iris scan) information. It is argued by the UIDAI that there are

…immense benefits from a mechanism that uniquely identifies a person, and ensures instant identity verification. The need to prove identity only once will bring down transaction costs for the poor. A clear identity number would also transform the delivery of social welfare programs by making them more inclusive of communities now cut off from such benefits due to their lack of identification. It would enable the government to shift from indirect to direct benefits, and help verify whether the intended beneficiaries actually receive funds/subsidies… This will result in significant savings to the state exchequer (UIDAI, 2009a, p. 1).

In other words, the UID project appears to have been envisaged from a clear developmental angle rather than a security angle, as was the case in earlier attempts to issue citizen identity cards. In fact, the original project to issue unique ID cards to Indian citizens was initiated by the right-wing National Democratic Alliance (NDA) government that was in power between 1999 and 2004. The first steps to issue unique ID cards began with the controversial report of the Kargil Review Committee of 1999, appointed in the wake of the Kargil War between India and Pakistan [2]. In its report submitted in January 2000, this Committee noted that immediate steps were needed to issue ID cards to villagers in border districts, pending extension to other parts of the country. In 2001, a Group of Ministers (GoM) submitted a report to the government titled Reforming the National Security System. This report was based largely on the findings of the Kargil Review Committee. The report noted that:

Illegal migration has assumed serious proportions. There should be compulsory registration of citizens and non-citizens living in India. This will facilitate preparation of a national register of citizens. All citizens should be
   given a Multi-purpose National Identity Card (MNIC) and non-citizens should be issued identity cards of a different colour and design.

In 2003, the NDA government initiated a series of steps to ensure the smooth preparation of the national register of citizens, which was to form the basis for the preparation of ID cards. It was decided to link the preparation of this register with the decennial census of India. The Census of India, however, has always had very strong clauses protecting the privacy of its respondents, and linking it to a citizens' register therefore required legislative change. Thus, the Citizenship Act of 1955 was amended in 2003, soon after the MNIC was instituted. The amendment created the post of Director of Citizen Registration, who was also to function as the Director of Census in each State. According to the citizenship rules notified on 10 December 2003, the onus for registration was placed on the citizen himself: "it shall be compulsory for every Citizen of India to…get himself registered in the Local Register of Indian Citizens [3]." The rules also specified punishments for citizens who failed to do so; any violation was to be "punishable with fine, which may extend to one thousand rupees." The privacy clauses of the census surveys were thus diluted significantly in 2003 itself.

The first United Progressive Alliance (UPA) government that came to power in 2004 carried forward the plans of the NDA government under a new name: the MNIC project was replaced by the UID project in January 2009. Indicating a shift from a security angle to a developmental angle, a press release of the government dated 10 November 2008 noted that the UID project would serve a variety of purposes: "better targeting of government's development schemes, regulatory purposes (including taxation and licensing), security purposes, banking and financial sector activities, etc [4]." According to the government, the UID will be "progressively extended to various government programmes and regulatory agencies, as well as private sector agencies in the banking, financial services, mobile telephony and other such areas." A number of similar claims have been made by Nandan Nilekani after taking charge as Chairman of the UIDAI. According to news reports, he has argued that the UID would make it possible to open a bank account in India with no supporting documents, thus expanding "financial inclusion" [5]; that it would make it easier to obtain a mobile telephone connection than at present [5]; that it would ensure that the public food distribution system (PDS) in India ceases to be wasteful [6]; that it would eliminate corruption from the National Rural Employment Guarantee Scheme (NREGS) [6]; and that it would help ensure and monitor the attendance of teachers in schools [7]. Overall, the UID project is presented as a "technology-based solution" that would change the face of governance in India.
3 Debating the Claims

If the living conditions of its citizens are any indicator, India can safely be termed a backward economy and society. According to a survey of the government's National Sample Survey Organisation (NSSO) in 2004-05, about 77 per cent of India's population lived at a monthly per capita consumption expenditure (MPCE) equivalent to Rs 16 (about US$ 0.34) per day. The median number of years of schooling of an Indian rural woman in 2005-06 was zero. More than 1 in 18 children in India died
within the first year of life, and 1 in 13 children died before reaching age five, in 2005-06. Among children under age three, about 38 per cent were stunted and about 46 per cent were underweight in 2005-06. In sum, the reach of basic social services to the Indian population is extremely poor. The role of the state in transforming such backwardness in the lives of its citizens is therefore central, and both the levels of state expenditure and the efficient implementation of the state's welfare programmes have great instrumental value.

Social scientists have long argued that the poor state of governance in India, particularly in areas like poverty alleviation, demands a closer look at the nature of the Indian state itself. In rural India, where the majority of Indians live, the continuing concentration of political power in the hands of the landed elite is one of the fundamental barriers to improving the quality of governance. The lack of implementation of land reforms has aided the continued domination of these landed classes, drawn largely from the upper caste groups, and stymied democratization in the rural areas. According to John Harriss, "the structure and functioning of 'local agrarian power', and the relations of local and state-level power-holders, do exercise a significant influence on policy processes and development outcomes" in rural India (Harriss, 1999, p. 3376) [8]. It follows that any effort to comprehensively improve governance has to begin with the overhauling of local power structures and the devolution of political power to under-privileged sections.

Of course, advances in technology can be a major supplement to these efforts at democratizing state and society. In certain spheres, technology can hasten change and reduce the drudgery of manual work. Further, a technology-based solution works best, and gives the best results, when it is implemented in societies that are ready to absorb the technology. As Thomas and Parayil (2008, p. 431) argue with respect to governance, "social structures that tolerate illiteracy, landlessness and other inequities among large sections of the population deprive the individual of the capabilities to use ICTs and to benefit from the information that ICTs provide" [9]. However, a perusal of the claims made in favour of the UID project in India would have us believe that the introduction of modern technology can help the state bypass fundamental reforms aimed at social transformation.

I argue in this note that the UID project, while presented as a tool of "good governance", would actually lead to the violation of a large number of freedoms of the Indian people; no amount of assertion regarding improved service delivery can justify the violation of citizens' freedoms and liberties. Next, I argue that there is a misplaced emphasis on the benefits of technology in this project, when the robustness of that technology in handling large populations remains largely unproven. Further, I argue that no detailed cost-benefit analysis of the project has yet been carried out. Finally, I try to show, with an illustration, that the roots of inefficiency in public welfare schemes in India do not lie in the absence of identity proofs. In arguing all of the above, I draw on the literature on the experiences of other nations in providing people with unique ID cards and numbers.
3.1 Privacy and Civil Liberties

International experience shows that very few countries have provided national ID cards or numbers to their citizens. The most important reason has been the unsettled
debate on the protection of the privacy and civil liberties of people. It has been argued that the data collected in the course of providing ID cards or numbers, and the information stored therein, may be misused for a variety of purposes. For instance, there is the problem of "functionality creep", where the card or number comes to serve purposes other than its original intent. Some have argued that ID cards or numbers can be used to profile citizens in a country and initiate a process of racial or ethnic cleansing, as during the genocide of the Tutsis in Rwanda in 1994. Legislation on privacy cannot satisfactorily guarantee against the possible misuse of ID cards or numbers.

Learning from Western experiences. Chronologically, Australia was one of the first countries in recent decades to attempt a national ID card scheme. In 1986, the Australian government introduced a Bill in Parliament to legalise the issue of national ID cards, to be called "Australia Cards". The government's declared intention, cited in the Bill, was to check tax evasion and reduce illegal immigration. However, citizens' groups launched a major agitation against the Bill, citing concerns over the violation of privacy and civil liberties. Though the government tried hard to push the Bill, it finally had to withdraw it in 1987. Despite the failure of the ID card scheme in Australia, other countries such as Canada, New Zealand and the Philippines initiated steps in the early 1990s to introduce national ID cards. In each of these countries, the scheme had to be withdrawn after a strong public backlash (see "National Identification System: Do We Need One?", Senate Economic Planning Office, Government of the Philippines, December 2005). In Canada, the Parliamentary Standing Committee on Citizenship and Immigration that examined the case for ID cards noted in its report that:

   It is clear that this is a very significant policy issue that could have wide implications for privacy, security, and fiscal accountability. Indeed, it has been suggested that it could affect fundamental values underlying Canadian society. A broad public review is therefore essential. The general public must be made more aware of all aspects of the issue, and we must hear what ordinary citizens have to say about the timeliness of a national identity card (cited in Davies, 2005; emphasis added) [10].

In the early 2000s, China declared its intention to introduce national ID cards carrying biometric information. However, on the understanding that biometric technology is liable to major failures when applied to populations as large as China's, the Chinese government in 2006 withdrew the clause providing for biometric data to be stored in such cards. Among European nations, public sentiment has governed the form that identity cards take (see Davies, 2005). Sweden and Italy, for instance, have extraordinary regulations on the use of data in citizens' registries. In Germany, the collection of biometric information is not allowed. In France, the ID card is not mandatory for citizens. In Greece, after public protests, regulators were forced to remove details of religious faith, profession and residence from ID cards. Two countries where the issue of national ID cards has been debated most extensively are the US and the UK; in both, the project was shelved after massive public protests.
In the US, privacy groups have long opposed ID cards; there was strong opposition, too, when the government tried to expand the use of the Social Security number in the 1970s and 1980s [10]. The disclosure of the Social Security number to private agencies had to be stopped in 1989 after public protests. A health security card project proposed by the Bill Clinton administration was set aside even after the government promised "full protection for privacy and confidentiality." Finally, the George W. Bush administration settled in 2005 for an indirect method of providing ID cards to US citizens. In what came to be called a "de facto ID system", the REAL ID Act made it mandatory for all US citizens to have their driver's licenses re-issued, replacing old licenses. In the application form for re-issue, the Department of Homeland Security added new questions that became part of the database on driving license holders. As almost all citizens of the US held a driving license, this became an informal electronic database of citizens. Nevertheless, these cards cannot be used in the US for any other requirement, such as in banks or airlines. The debate on the confidentiality of the data collected by the US government remains live even today [11].

The most interesting debate on national ID cards has been in the United Kingdom. With the introduction of the Identity Cards Bill of 2004, the Tony Blair government declared its intent to issue ID cards to all UK citizens; public protests have forced the Labour government to shelve the policy to date. The debate in the UK has centred mainly on the critical arguments of an important research report on the desirability of national ID cards prepared by the Information Systems and Innovation Group at the London School of Economics (LSE). The LSE report is worth reviewing here (see the web site of the Identity Project at LSE, http://identityproject.lse.ac.uk; the full report is available at http://identityproject.lse.ac.uk/identityreport.pdf and its executive summary at http://identityproject.lse.ac.uk/identitysummary.pdf).

The LSE report identified key areas of concern with the Blair government's plans, including their high risk and likely high cost, as well as technological and human rights issues. The report noted that the government's proposals "are too complex, technically unsafe, overly prescriptive and lack a foundation of public trust and confidence." While accepting that preventing terrorism is a legitimate role of the state, the report expressed doubts on whether ID cards would prevent terror attacks carried out through identity theft:

   …preventing identity theft may be better addressed by giving individuals greater control over the disclosure of their own personal information, while prevention of terrorism may be more effectively managed through strengthened border patrols and increased presence at borders, or allocating adequate resources for conventional police intelligence work… A card system such as the one proposed in the Bill may even lead to a greater incidence of identity fraud… In consequence, the National Identity Register may itself pose a far larger risk to the safety and security of UK citizens than any of the problems that it is intended to address.

In conclusion, the LSE report noted that
   …identity systems may create a range of new and unforeseen problems. These include the failure of systems, unforeseen financial costs, increased security threats and unacceptable imposition on citizens. The success of a national identity system depends on a sensitive, cautious and cooperative approach involving all key stakeholder groups including an independent and rolling risk assessment and a regular review of management practices. We are not confident that these conditions have been satisfied in the development of the Identity Cards Bill. The risk of failure in the current proposals is therefore magnified to the point where the scheme should be regarded as a potential danger to the public interest and to the legal rights of individuals.

The Western debates reviewed here raise serious questions about the potential of national ID cards to subvert people's hard-won rights to privacy and civil liberties in the modern world. National debates in each of these countries have shaped the final outcomes of these schemes, and citizens have reacted collectively to threats of intrusion into their basic democratic rights. In fact, in most of the few countries that have introduced national ID cards, the period of introduction coincided with either authoritarian government or war.

Issues of Privacy and the UID Project in India. The UIDAI has declared that the UID would not confer citizenship on any individual and that enrolment in the scheme would not be mandatory. However, other pronouncements from the UIDAI have made it clear that the UID is likely to be required across a wide variety of welfare schemes. Even without de jure insistence on the UID, then, citizens would de facto be forced to apply for a UID in order to access many welfare schemes. This "indirect compulsoriness" is a central feature of the UID project in India.

What is most disturbing in the Indian scenario is that concerns of privacy and civil liberties are not discussed in any substantive form in any of the documents of the government or the UIDAI. It is made to appear as if the purported, and unsubstantiated, "good governance" benefits of the project eclipse concerns regarding human rights. The information that is available points to the possibility of serious misuse of personal information if the UID is extended to a spectrum of social services, most of which are increasingly being privatized in India.

Take an example. The UID project is being implemented as part of the Eleventh Five Year Plan of the government. In 2006, a working group was appointed by the Planning Commission to examine the possibilities and potential of an Integrated Smart Card System for improving the entitlements of the poor. In its report, the working group noted that the:

   …unique ID could form the fulcrum around which all other smart card applications and e-governance initiatives would revolve. This could also form the basis of a public-private-partnership wherein unique ID based data can be outsourced to other users, who would, in turn, build up their smart card based applications… (GoI, 2007, p. 2; emphasis added) [12].

   …In the context of the unique ID, part of this data base could be shared with even purely private smart card initiatives such as private
   banking/financial services on a pay-as-you-use principle…. (p. 8; emphasis added).

   These agencies [private utility services providers or financial and other institutions] can 'borrow' unique ID and related information from the managers of these data bases and load further applications in making requirement specific smart-cards. While the original sources of data can be updated by the data managers, the updating of supplementary parts will remain the responsibility of the service providers (p. 24; emphasis added).

Personal information of citizens is rendered all the more vulnerable to misuse in a policy atmosphere that explicitly encourages private participation in social service delivery. Citing the case of the privacy of citizens' health records, an observer of the UIDAI noted recently that:

   …the Apollo Hospitals group has offered to manage health records through the UIDAI. It has already invested in a company called Health Highway that reportedly connects doctors, hospitals and pharmacies who would be able to communicate with each other and access health records. In August 2009, Business Standard reported that Apollo Hospitals had written to the UIDAI and to the Knowledge Commission to link the UID number with health profiles of those provided the ID number, and offered to manage the health records. The terms 'security' and 'privacy' seem to be under threat, where technological possibility is dislocating many traditional concerns (Ramanathan, 2010) [13].

At present, the UIDAI has only affirmed a commitment to the protection of privacy; no substantial information is yet available on how the database of citizens would be protected from misuse in the future. As argued earlier, promises to introduce privacy-protection legislation are poor tools for gaining the trust of citizens, who face real threats of misuse of their personal information.

3.2 Technological Determinism in Addressing Social Problems

An interesting aspect of the discussion on the UID project in India has been the level of technological determinism on display. The fact that the UIDAI is headed by a technocrat like Nandan Nilekani, and not by a demographer or social scientist, is evidence of the technological bias in the project. The problems of enumeration in a society like India's, marked by illegal immigration as well as internal migration, especially of people from poor labour households, are too enormous to be handled effectively by a technocrat. It is also intriguing that the duties of the Census Registrar and the UIDAI Chairman have been demarcated, and that the UIDAI Chairman has been placed in the rank of a Cabinet Minister, above the Census Registrar.

Among all the technological features of the UID project, the collection and storage of the biometric information of residents is the most significant. The UIDAI plans to collect a basic set of personal information from all residents and store it in a centralised database along with their biometric information, such as fingerprints of
all ten fingers as well as iris scans. The users of this massive centralised database would be public and private service providers, all of whom would have access to it for purposes of verification. Biometric data would be used to verify the identity of a person whenever a UID-compliant service is provided: machines that verify the biometric data of citizens would be installed at all sites of service provision, and access to the service would depend on a positive verification of the biometric information.

For a country with more than a billion residents, the sheer scale of the envisaged project is mind-boggling. As per available estimates, there are about half a million public food distribution outlets in India and about 265,000 gram panchayats (decentralised local bodies of governance) through which social service provision is managed, apart from millions of other offices of the government and public institutions that take part in everyday governance. The crucial questions, then, are these: can the technological infrastructure of the project carry the burden of such massive data storage, networking, live sharing and verification? If so, what are the associated costs of the project (see the next section)? What are the probabilities of system failures of different degrees? What are the probabilities of errors? And what are the "social" costs of these errors? No clear answers to these important questions are available.

The use of biometrics. The use of biometrics is the central feature of the UID project; apart from biometrics, there is no valid identity check in the system. There appears to be an extraordinary level of faith among the proponents of the project in the infallibility of biometric verification. Among biometric scientists and legal experts, by contrast, there is broad agreement on the critical drawbacks of the technology in proving identity beyond doubt.

First, many biometric and legal experts have argued that no accurate information exists on whether the errors in matching fingerprints are negligible or non-existent (see Koehler, 2008) [14]. It is acknowledged that a small percentage of users would always be either falsely matched or not matched at all against the database. Fears have also been raised about the ways in which users could bypass the verification process, using methods such as "gummy fingers" and "latent fingerprinting"; in other words, a completely new identity, different from the original, could be created and used consistently over a period of time.

Secondly, the concern remains that the biometric information collected as part of the UID project could be used for policing purposes. In a typical case of "functionality creep", police and security forces, if allowed access to the biometric database, could use it extensively for routine surveillance and investigation. Regular use of biometric data in policing can, in itself, lead to a large number of human rights violations; coupled with the possibility of errors in fingerprint matching, it can further aggravate the extent and depth of those violations. For instance, a recent concern in legal circles in the US is whether validation checks of fingerprints are conclusive (see Cole, 2006) [15]. A growing number of detentions in the US based on false fingerprint matches is cited as evidence of the human rights violations that could result.
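The force of this error-rate argument comes from multiplication at scale, which a few lines of arithmetic make concrete. The sketch below is illustrative only: the false non-match rate (FNMR), false match rate (FMR), and daily transaction volume are hypothetical placeholder values chosen for exposition, not figures published by the UIDAI or measured on any Indian database.

```python
# Back-of-the-envelope arithmetic on biometric verification errors at
# national scale. All rates and volumes are hypothetical assumptions
# for exposition; none are published UIDAI figures.

DAILY_AUTHS = 10_000_000   # assumed UID verifications per day across all services
FNMR = 0.01                # assumed false non-match rate: genuine users rejected
FMR = 0.0001               # assumed false match rate: impostors accepted

# Genuine beneficiaries wrongly rejected each day, i.e. denied rations,
# wages, or pensions at the point of delivery.
false_rejections_per_day = DAILY_AUTHS * FNMR

# Impostors wrongly accepted per million impostor attempts.
false_accepts_per_million = 1_000_000 * FMR

print(f"False rejections per day at {FNMR:.0%} FNMR: "
      f"{false_rejections_per_day:,.0f}")
print(f"False accepts per million impostor attempts at {FMR:.2%} FMR: "
      f"{false_accepts_per_million:,.0f}")
```

Even under these assumed rates, an error share that sounds "small" in percentage terms translates into a hundred thousand wrongly rejected genuine users every single day.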
In 2004, New Scientist ran an investigative story titled "Forensic Evidence in the Dock" [16]. The authors argued that the "supposed infallibility of fingerprint evidence, which has been used to convict countless people over the past century" was still routinely accepted by US
courts. In 1999, a US court agreed to hold a "Daubert hearing" in a robbery case registered against Byron Mitchell, whose fingerprints "matched" fingerprints recorded at the site of the crime. A Daubert hearing is a special hearing in which the judge decides on the scientific validity and reliability of forensic evidence before it is admitted; in doing so, the judge must consider the known error rate of the forensic technique in question. It turned out that no study of fingerprint error rates existed. The court then asked the FBI to conduct one, and the resulting study has been argued by many legal observers to be methodologically erroneous [16]. The debate on the validation of fingerprints remains unsettled within the US legal system.

For purposes of illustration, I shall cite two instances, from Cole (2005, pp. 986-987) [17], of false matching of fingerprints in the US that led to massive protests from human rights activists:

   …the case of Brandon Mayfield, an Oregon attorney and Muslim convert who was held for two weeks as a material witness in the Madrid bombing of March 11, 2004…Mayfield, who claimed not to have left the United States in ten years and did not have a passport, was implicated in this attack almost solely on the basis of a latent fingerprint found on a bag in Madrid containing detonators and explosives in the aftermath of the bombing. Unable to identify the source of the print, the Spanish National Police emailed it to other police agencies. Federal Bureau of Investigation (FBI) Senior Fingerprint Examiner Terry Green identified Mayfield as the source of the latent print. Mayfield's print was in the database because of a 1984 arrest for burglary and because of his military service. The government's affidavit stated that Green "considers the match to be a 100% identification" of Mayfield…

   …A few weeks later the FBI retracted the identification altogether and issued a rare apology to Mayfield. The Spanish National Police had attributed the latent print to Ouhnane Daoud, an Algerian national living in Spain…

   …But the Mayfield case was not the first high-profile fingerprint misattribution to be exposed in 2004. In January, Stephan Cowans was freed after serving six and a half years of a 30- to 45-year sentence for shooting and wounding a police officer. Cowans had been convicted solely on fingerprint and eyewitness evidence, but post-conviction DNA testing showed that Cowans was not the perpetrator. The Boston Police Department then admitted that the fingerprint evidence was erroneous, making Cowans the first person to be convicted by fingerprint evidence and exonerated by DNA evidence.

Thirdly, the UIDAI has noted that it plans to introduce the project in a set of flagship schemes of the government, including the National Rural Employment Guarantee Scheme (NREGS). In other words, access to this important employment scheme for rural labourers would be made completely dependent on biometric verification. It is estimated that more than 30 million persons in India possess "job cards" (that is, are beneficiaries) of the NREGS.
A fundamental issue that biometric experts do not dismiss is the possibility of individuals' fingerprints changing over time, particularly among manual labourers. Given the heavy manual labour that the rural poor regularly perform (and cases of accidental damage to fingers and hands from burns, chemicals, and other agents), the fingerprints of manual labourers are highly likely to be worn or eroded, inviting frequent false rejections during validation at the sites of wage payment. Globally, about 2 to 5 per cent of the population is held to have noisy or bad fingerprint data; in other words, their fingerprints are damaged to the extent that they cannot be recorded in the first place [18]. According to some estimates, in developing countries like India the share of persons with noisy or bad data could go up to 15 per cent, given the larger share of the population dependent on hard manual labour [19]. In a country of more than one billion people, a 15 per cent share would mean at least 150 million persons. That is likely to be a rough count of the extent of exclusion from welfare schemes under the UID project.

The report of the UIDAI's internal Biometrics Standards Committee actually accepts these concerns as real. The Committee recognises that "a fingerprints-based biometric system shall be at the core of the UIDAI's de-duplication efforts" (UIDAI, 2009b, p. 4). It further notes that it is:

   …conscious of the fact that de-duplication of the magnitude required by the UIDAI has never been implemented in the world. In the global context, a de-duplication accuracy of 99% has been achieved so far, using good quality fingerprints against a database of up to fifty million. Two factors however, raise uncertainty about the accuracy that can be achieved through fingerprints. First, retaining efficacy while scaling the database size from fifty million to a billion has not been adequately analyzed. Second, fingerprint quality, the most important variable for determining de-duplication accuracy, has not been studied in depth in the Indian context (UIDAI, 2009b, p. 4) [20].

Yet, UIDAI Chairman Nandan Nilekani declared in November 2009 that he planned to "issue the first UID number in next 12-18 months and cover 600 million in the next five and half years" (see the interview with Nandan Nilekani (2009), "We'll use best biometric, storage & search solns", available at http://igovernment.in/site/Well-use-best-biometric-storage--search-solns).
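The two magnitudes discussed above, exclusion through failure to enrol and the growth of the de-duplication search space, can be sketched with the shares cited in the text. The population figure is a round assumption of this sketch, and the pairwise-comparison count is the naive upper bound, not a description of any actual matcher the UIDAI might deploy.

```python
# A minimal sketch of the exclusion and de-duplication magnitudes discussed
# above. The population is a round assumption; the failure-to-enrol shares
# are the 2-5 per cent and 15 per cent figures cited in the text.

POPULATION = 1_000_000_000

for fte in (0.02, 0.05, 0.15):
    print(f"Failure-to-enrol share {fte:.0%}: "
          f"~{POPULATION * fte / 1e6:,.0f} million people excluded")

# De-duplication must check each new enrolment against every record already
# on file, so the total number of pairwise comparisons grows quadratically.
def dedup_comparisons(n: int) -> int:
    return n * (n - 1) // 2

for n in (50_000_000, 1_000_000_000):
    print(f"Database of {n / 1e6:,.0f} million: "
          f"~{dedup_comparisons(n):.1e} pairwise comparisons")
```

Moving from fifty million records to a billion multiplies the pairwise search space roughly four-hundred-fold, which is one reason the 99 per cent de-duplication accuracy reported at the smaller scale cannot simply be extrapolated.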
The case of the UK. Technological determinism has also marked efforts to introduce ID cards in other countries, such as the UK. The rhetorical confidence of the UK government in its scheme has always sat uncomfortably with its own technological uncertainty regarding the project. Critics pointed out that a slight failure in any of the technological components could immediately undermine public confidence in the scheme as a whole. As the LSE report noted:

   The technology envisioned for this scheme is, to a large extent, untested and unreliable. No scheme on this scale has been undertaken anywhere in the world. Smaller and less ambitious systems have encountered substantial technological and operational problems that are likely to be amplified in a large-scale, national system. The proposed system unnecessarily introduces,
   at a national level, a new tier of technological and organisational infrastructure that will carry associated risks of failure. A fully integrated national system of this complexity and importance will be technologically precarious and could itself become a target for attacks by terrorists or others.

3.3 The Unknown Costs of the UID Project

The costs involved in a project of the size and scale of the UID project are always enormous, and have to be weighed against the limited benefits that are likely to follow. The estimated costs of implementing the project have not yet been disclosed by the government, and media reports indicate varying figures. According to information that has trickled out of the Planning Commission, the estimated initial cost of the project would be anywhere above Rs 20,000 crore (about US$ 4,348 million). Even after the commitment of such levels of expenditure, the uncertainty over the technological options and the ultimate viability of the scheme remains; Nilekani himself noted in November 2009 that "no exact estimation of the savings can be made at this juncture" (in the interview cited above). In addition, it is unclear whether the recurring costs of maintaining the networked system necessary for the UID to function effectively have been accounted for by the government.

In the case of the UK, the LSE report noted that the costs of the scheme had been significantly underestimated by the UK government. The LSE group's critique of the UK government's costing exercise is a good case study of why the costs of such schemes are typically underestimated: the group estimated that costs would lie between £10.6 billion and £19.2 billion, excluding public and private sector integration costs, considerably higher than the government's own estimate.
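For a sense of proportion, the reported figure can be reduced to per-resident terms. The exchange rate and resident count in the sketch below are assumptions of mine, chosen to be consistent with the US-dollar figure quoted above; they are not official estimates.

```python
# Per-resident cost arithmetic using the figures quoted above. The exchange
# rate and resident count are assumptions, not official data.

INITIAL_COST_RS = 20_000 * 10_000_000  # Rs 20,000 crore; 1 crore = 10 million
RS_PER_USD = 46.0                      # assumed 2009-10 exchange rate
RESIDENTS = 1_170_000_000              # assumed resident population

cost_usd = INITIAL_COST_RS / RS_PER_USD
print(f"Initial cost: ~US$ {cost_usd / 1e9:.2f} billion")
print(f"Per resident: ~Rs {INITIAL_COST_RS / RESIDENTS:,.0f} "
      f"(~US$ {cost_usd / RESIDENTS:.2f}), before recurring costs")
```

The division only restates the scale of the initial commitment; the recurring costs noted above remain unaccounted for on top of it.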
3.4 The Efficiency of Social Sector Schemes

Would the UID increase the efficiency of the government's poverty alleviation schemes? According to the Chairman of the UIDAI, the UID "will help address the widespread embezzlement that affects subsidies and poverty alleviation programmes [21]." This conviction rests on a basic diagnosis of the UIDAI: that the inability to prove identity is one of the biggest barriers preventing the poor from accessing benefits and subsidies. However, it is difficult to foresee any major shift in the efficiency frontiers of poverty alleviation programmes once the UID is introduced, because the premise of the UIDAI's claim (that proven identities would expand access to schemes) is itself erroneous. The poor efficiency of government schemes in India is not due to the absence of technological monitoring; the reasons are structural, and these structural barriers cannot be transcended by a UID. I shall illustrate this using one important social sector scheme: the public distribution system (PDS), which supplies subsidised food grains to the people.
The case of the PDS. In the 1990s and 2000s, economic policy in India took a major turn towards neo-liberal policies. Until 1996-97, the PDS in India was universal in character: all households that owned a ration card were eligible to purchase commodities at subsidised prices. Under the neo-liberal policy regime, the PDS ceased to be universal. Instead, a Targeted Public Distribution System (TPDS) was introduced, under which all households were divided into two categories, Below Poverty Line (BPL) and Above Poverty Line (APL), and only households classified as BPL remained eligible for the subsidised purchase of commodities. Such a shift towards targeting is not specific to India; globally, targeting has been the most important instrument of structural adjustment policy in the provision of social security assistance. Targeting reduces the burden the state has to bear for social assistance by narrowing the "eligible" proportion of the population to a minimum.

The introduction of the TPDS has led to major problems in the functioning of the food distribution system. A widespread complaint from rural India after the introduction of the TPDS has been the existence of a major mismatch between households classified as BPL by the government and their actual standard of living (Swaminathan, 2000 [22]; Swaminathan and Misra, 2002 [23]; GoI, 2002 [24]). A high-level committee appointed by the government in 2002 concluded that "the narrow targeting of the PDS based on absolute income-poverty is likely to have excluded a large part of the nutritionally vulnerable population from the PDS" (GoI, 2002) [24]. In other words, the poor "efficiency" of the PDS and its limited reach have been a policy-induced phenomenon under the neo-liberal regime.

While the real reason for the inefficiency of the PDS in India is the policy of narrow targeting, the claim of the UIDAI has been that the UID would plug leakages in the functioning of the PDS; that is, the UID would ensure that targeting is as accurate as possible and that no "ineligible" person buys subsidised food grains. In simple terms, this is inverted logic. The most important problem with the PDS is not that non-BPL households benefit, but that large sections are not classified as BPL in the first place. Further, there are major problems with classifying households as BPL or APL on the basis of a survey conducted in one year and then retaining that classification for many years. The incomes of rural households, especially rural labour households, fluctuate considerably: a household may be non-poor in the year of the survey but become poor the next year owing to uncertainties in the labour market. How would a UID solve this most important barrier to efficiency in the PDS? While the real challenge in the PDS is to expand coverage to new sections of the population, the UID is showcased as an intervention that would make the system as narrowly targeted as possible.

Yet another claim is that the introduction of a UID would make possible a simple cash-transfer scheme that could replace the existing poverty alleviation programmes. To begin with, cash-transfer schemes have not been found to be efficient substitutes for public works schemes in any part of the developing world. In addition, for the same reasons discussed in the context of the PDS, a cash-transfer scheme would also exclude a large number of the needy from cash benefits. A UID cannot be of any help in such scenarios.
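The structural point, that the PDS fails through misclassification rather than misidentification, can be illustrated with a toy simulation. All numbers below (incomes, the poverty line, the fluctuation band) are invented for illustration and carry no empirical weight.

```python
# Toy illustration: identity verification cannot repair targeting errors.
# All incomes, the poverty line, and the fluctuation band are invented.

import random

random.seed(1)
N = 100_000
POVERTY_LINE = 100.0

# Year 1: a one-time survey fixes the BPL classification.
income_y1 = [random.uniform(50, 200) for _ in range(N)]
has_bpl_card = [y < POVERTY_LINE for y in income_y1]

# Year 2: incomes fluctuate, but the year-1 classification stays in force.
income_y2 = [y * random.uniform(0.6, 1.4) for y in income_y1]
poor_now = [y < POVERTY_LINE for y in income_y2]

# Exclusion error: households poor this year but holding no BPL card.
# Perfect biometric identification of cardholders leaves them excluded,
# because the error lies in the beneficiary list, not in impersonation.
excluded = sum(p and not c for p, c in zip(poor_now, has_bpl_card))
print(f"Currently poor but excluded by the stale classification: "
      f"{excluded / sum(poor_now):.0%}")
```

A UID guarantees at most that the person presenting a BPL card is its rightful holder; it says nothing about whether the right households hold the cards.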
4 Concluding Notes

In conclusion, the UID project of the Indian government appears to fail on most criteria. There is no reason to discount the concern that a centralized database of citizens' personal and biometric information could be misused to profile citizens in undesirable and dangerous ways. Behind the project lies the unrealistic assumption that technology can be used to fix the ills of social inefficiency. The benefits of the project, in terms of raising the efficiency of government schemes, appear to be limited, and, on the available information, the scheme appears extraordinarily expensive, without concomitant benefits.

This is not to argue against every form of electronic management of data or provision of services, including a regulated and sector-specific use of biometrics. When the intended beneficiary population is smaller, as in an old-age pension scheme, for instance, such an initiative may function far better than one applied on a massive scale to a billion people. The central problem with the UIDAI initiative is that technology is treated as a short cut that allows difficult and more fundamental societal changes to be bypassed. The lesson of history, on the other hand, is that there are no short cuts to progressive social change. The worldview that drives the UIDAI, unfortunately, is the former.
References

1. Friedman, T.L.: The World is Flat: A Brief History of the Twenty-First Century. Farrar, Straus and Giroux, New York (2005)
2. Taha, M.: Notes from a Contested History of National Identity Card in India: 1999-2007, http://www.sacw.net/article391.html (accessed January 28, 2008)
3. Ministry of Home Affairs Notification (December 10, 2003), http://www.censusindia.gov.in/Acts_and_Rules/citizenship_rules_2003.pdf (accessed February 9, 2010)
4. Press Release of the Press Information Bureau, Government of India, http://pib.nic.in/release/release.asp?relid=44711 (accessed January 28, 2010)
5. Dhoot, V.: Unique ID cards will make bank account, phone connection easy. Financial Express (2009), http://www.financialexpress.com/news/unique-id-cards-will-make-bank-account-phone-connection-easy/529183 (accessed January 28, 2010)
6. UIDAI: Creating a Unique Identity Number for Every Resident in India, working paper. Unique Identification Authority of India, New Delhi (2009)
7. Unique ID number for checking absenteeism of teachers? Press Trust of India, New Delhi, http://www.ptinews.com/news/470034_Unique-ID-number-for-checking-absenteeism-of-teachers (accessed February 9, 2010)
8. Harriss, J.: Comparing Political Regimes across Indian States: A Preliminary Essay. Economic and Political Weekly 34(48), 3367–3377 (1999)
9. Thomas, J.J., Parayil, G.: Bridging the Social and Digital Divides in Andhra Pradesh and Kerala: A Capabilities Approach. Development and Change 39(3), 409–435 (2008)
10. Davies, S.: The Complete ID Primer. Index on Censorship 3 (2005), http://www.eurozine.com
11. For a useful set of questions and answers on the national ID scheme in the US, see http://www.privacyinternational.org/article.shtml?cmd%5B347%5D=x-347-61881 (accessed January 28, 2010)
12. Government of India: Entitlement Reform for Empowering the Poor: The Integrated Smart Card (ISC). Report of the Eleventh Plan Working Group on Integrated Smart Card System, Planning Commission, New Delhi (2007)
13. Ramanathan, U.: The Personal is the Personal. Indian Express (2010), http://www.indianexpress.com/news/the-personal-is-the-personal/563920/0 (accessed January 28, 2010)
14. Koehler, J.J.: Fingerprint Error Rates and Proficiency Tests: What They Are and Why They Matter. Hastings Law Journal 59(5), 1077–1100 (2008)
15. Cole, S.A.: Is Fingerprint Identification Valid? Rhetorics of Reliability in Fingerprint Proponents' Discourse. Law and Policy 28(1), 109–135 (2006)
16. Randerson, J., Coghlan, A.: Investigation: Forensic Evidence in the Dock. New Scientist (2004), http://www.newscientist.com/article/dn4611-investigation-forensic-evidence-in-the-dock.html (accessed January 28, 2010)
17. Cole, S.A.: More than Zero: Accounting for Error in Latent Fingerprint Identification. The Journal of Criminal Law & Criminology 95(3), 985–1078 (2005)
18. Jain, A.K., Dass, S.C., Nandakumar, K.: Can Soft Biometric Traits Assist User Recognition? Proceedings of SPIE 5404, 561–572 (2004)
19. De-duplication: The Complexity in the Unique ID Context. 4G Identity Solutions, http://www.4gid.com/De-dupcomplexity%20unique%20ID%20context.pdf (accessed January 28, 2010)
20. UIDAI: Biometric Design Standards for UID Applications. Report of the UID Committee on Biometrics, Unique Identification Authority of India, New Delhi (2009)
21. Ramakumar, R.: High-cost, High-risk. Frontline 26(15) (2009), http://www.hinduonnet.com/fline/fl2616/stories/20090814261604900.htm (accessed January 28, 2010)
22. Swaminathan, M.: Weakening Welfare: The Public Distribution of Food in India. LeftWord Books, New Delhi (2000)
23. Swaminathan, M., Misra, N.: Errors in Targeting: Public Food Distribution in a Maharashtra Village, 1995-2000. Economic and Political Weekly, 2447–2450 (2002)
24. Government of India (GoI): High Level Committee on Long-Term Grain Policy. Ministry of Food and Public Distribution, Government of India, New Delhi (2002)