Lecture Notes in Computer Science Edited by G. Goos, J. Hartmanis and J. van Leeuwen
1740
Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Singapore Tokyo
Rainer Baumgart (Ed.)
Secure Networking – CQRE [Secure] ’99 International Exhibition and Congress, Düsseldorf, Germany, November 30 – December 2, 1999, Proceedings
Series Editors: Gerhard Goos, Karlsruhe University, Germany; Juris Hartmanis, Cornell University, NY, USA; Jan van Leeuwen, Utrecht University, The Netherlands

Volume Editor: Rainer Baumgart, Security Networks GmbH, Weidenauer Str. 223-225, 57076 Siegen, Germany
E-mail: [email protected]
Cataloging-in-Publication data applied for
Die Deutsche Bibliothek - CIP-Einheitsaufnahme Secure networking - CQRE (Secure) ’99 : international exhibition and congress, Düsseldorf, Germany, November 30 - December 2, 1999 / Rainer Baumgart (ed.). Berlin ; Heidelberg ; New York ; Barcelona ; Hong Kong ; London ; Milan ; Paris ; Singapore ; Tokyo : Springer, 1999 (Lecture notes in computer science ; Vol. 1740) ISBN 3-540-66800-4
CR Subject Classification (1998): C.2, E.3, D.4.6, K.6.5 ISSN 0302-9743 ISBN 3-540-66800-4 Springer-Verlag Berlin Heidelberg New York This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law. © Springer-Verlag Berlin Heidelberg 1999 Printed in Germany Typesetting: Camera-ready by author SPIN: 10749957 06/3142 – 5 4 3 2 1 0
Printed on acid-free paper
Preface

The CQRE [Secure] conference provides a new international forum giving a close-up view on information security in the context of rapidly evolving economic processes. The unprecedented reliance on computer technology has transformed the previous technical side-issue "information security" into a management problem requiring decisions of strategic importance. Thus one of the main goals of the conference is to provide a platform for both technical specialists and decision makers from government, industry, commercial, and academic communities. The target of CQRE is to promote and stimulate dialogue between managers and experts, which seems to be necessary for providing secure information systems in the next millennium. Therefore CQRE consists of two parts: Part I mainly focuses on strategic issues of information security, while the focus of Part II is more technical in nature. This volume of the conference proceedings consists of the reviewed and invited contributions of the second part.

The program committee considered 46 papers and selected only 15 for full presentation. For the participants' convenience we have also included the notes of the invited lectures and short workshop talks in this volume. The selection of papers was a difficult and challenging task. I wish to thank the program committee members, who indeed did an excellent job in reviewing and selecting the papers and providing useful feedback to authors. Each submission was blindly refereed by at least three reviewers to make the selection process as fair and objective as possible. The program committee was assisted by many colleagues who reviewed submissions in their field of expertise. My thanks to all of them.

I would also like to thank the entire CQRE team for their kind assistance in organizing this event. My special thanks go to our hosts from Messe-Düsseldorf GmbH, and especially to N. Mizera, M. Kotschedoff, S. Spamer, A. Viefers, and B. Wagner, who greatly contributed to the success of this challenging project with their untiring engagement and timely decisions. Furthermore I would like to thank the team from Brodeur-Kohtes & Klewes around B. Boendel, and my colleagues T. Gawlick, A. M. Schlesinger, and D. Hühnlein for kindly assisting me in administrative tasks. Last but not least, I wish to thank all the authors who submitted papers, making this conference possible, and the authors of accepted papers for updating their work in a timely manner, allowing the production of these proceedings.

September 1999
Rainer Baumgart
Table of Contents
Risk Management Developing Electronic Trust Policies Using a Risk Management Model ..................... 1 Dean Povey
Security Design SECURE: A Simulation Tool for PKI Design............................................................ 17 Luigi Romano, Antonino Mazzeo, Nicola Mazzocca Lazy Infinite-State Analysis of Security Protocols..................................................... 30 David Basin
Electronic Payment Electronic Payments – Where Do We Go from Here? ............................................... 43 Moti Yung, Yiannis Tsiounis, Markus Jakobsson, David M'Raïhi
SmartCard Issues PCA: Jini-based Personal Card Assistant ................................................................... 64 Roger Kehr, Joachim Posegga, Harald Vogt An X.509-Compatible Syntax for Compact Certificates ............................................ 76 Magnus Nyström, John Brainard
Applications Secure and Cost Efficient Electronic Stamps ............................................................. 94 Detlef Hühnlein, Johannes Merkle Implementation of a Digital Lottery Server on WWW..............................................101 Kazue Sako
PKI-experiences (Workshop Notes) Cert’eM: Certification System Based on Electronic Mail Service Structure .............109 Javier Lopez, Antonio Mana, Juan J. Ortega A Method for Developing Public Key Infrastructure Models....................................119 Klaus Schmeh The Realities of PKI Inter-operability .......................................................................127 John Hughes
Mobile Security Mobile Security – An Overview of GSM, SAT and WAP ........................................133 Malte Borcherding Secure Transport of Authentication Data in Third Generation Mobile Phone Networks ...................................................................................................................142 Stefan Pütz, Roland Schmitz, Benno Tietz
Cryptography Extending Wiener’s Attack in the Presence of Many Decrypting Exponents............153 Jean-Pierre Seifert, Nick Howgrave-Graham Improving the Exact Security of Fiat-Shamir Signature Schemes.............................167 Silvio Micali, Leonid Reyzin
Network Security (Workshop Notes) On Privacy Issues of Internet Access Services via Proxy Servers ............................183 Yuen-Yan Chan Cryptanalysis of Microsoft’s PPTP Authentication Extensions (MS-CHAPv2)........192 Bruce Schneier, Mudge, David Wagner
Key Recovery Auto-recoverable Auto-certifiable Cryptosystems (A Survey)..................................204 Moti Yung, Adam Young
Intrusion Detection A Distributed Intrusion Detection System Based on Bayesian Alarm Networks.......219 Dusan Bulatovic, Dusan Velasevic
Interoperability Interoperability Characteristics of S/MIME Products................................................229 Sarbari Gupta, Jerry Mulvenna, Srinivas Ganta, Larry Keys, Dale Walters The DEDICA Project: The Solution to the Interoperability Problems between the X.509 and EDIFACT Public Key Infrastructures ......................................................242 Montse Rubia, Juan Carlos Cruellas, Manel Medina
Biometrics Multiresolution Analysis and Geometric Measures for Biometric Identification Systems......................................................................................................................251 Raul Sanchez-Reillo, Carmen Sanchez-Avila, Ana Gonzales-Marco Author Index............................................................................................................259 Dates and Deadlines of CQRE [Secure] 2000.........................................................261
Developing Electronic Trust Policies Using a Risk Management Model
Dean Povey
Security Unit, Cooperative Research Centre for Enterprise Distributed Systems*, Level 12, S-Block, Queensland University of Technology, Brisbane Qld 4001, Australia,
[email protected]
Abstract. Trust management systems provide mechanisms which can enforce a trust policy for authorisation and web content. However, little work has been done on identifying a process by which such a policy can be developed. This paper describes a mechanism for developing trust policies using a risk management model, and relates this to a conceptual framework of trust. The process uses an extended risk management model that takes into consideration beliefs about the principals being trusted and the impersonal structures and systems involved. The paper also applies the extended risk management model to a hypothetical case study in which an individual is making investments using an electronic trading service.
1 Introduction
Regardless of the strength or robustness of a given security mechanism, its effectiveness is limited without the existence of trust. Security protocols, cryptographic devices and digital signatures rely on the ability to trust one or more parties, mechanisms or equipment to be sure that the assets they protect remain safe. In the physical world we derive much of our notion of trust from the tangible nature of things. For example, we perceive the information in a book to be worth reading because we know that it costs a lot of money to print a book, because the logo on the side shows that it has been reviewed by a publisher of repute, and often because a library has thought it worthwhile enough to stick it on their shelf. Similarly, we are convinced of the stability and trustworthiness of a bank because the difficulty of licensing a fraudulent organisation and the cost of setting up branches, ATM networks, marketing etc. would make it prohibitively expensive. However, the shift toward e-commerce means that we can no longer infer trust from physical, tangible things. We need to rethink our approach to trust
* The work reported in this paper has been funded in part by the Co-operative Research Centre Program through the Department of Industry, Science & Tourism of the Commonwealth Government of Australia.
so that we can rely on the information and actions of people in a virtual world with the same degree of confidence that we do in the real world.

Trust management systems such as PolicyMaker [1], KeyNote [2], and REFEREE [3] provide mechanisms that can enforce a trust policy for authorisation and web content. However, little work has been done on identifying a process by which a trust policy for such systems can be developed. This paper describes a mechanism for developing trust policies using a risk management model, and outlines a hypothetical case study to illustrate the usefulness of such a scheme.
2 Risk Management
Risk management is the total process of identifying, controlling, and minimising the impact of uncertain events [4]. The Common Criteria [5] outlines a model for relating different elements of the risk management process, which is given in figure 1. In general, risk management for information security involves the following process:

1. Identify the assets to be protected, the threats to these assets, and the expected impact if those assets are compromised.
2. Identify the vulnerabilities or weaknesses which can lead to these threats arising.
3. Analyse the risk (i.e. the likelihood and consequences) of the vulnerabilities leading to these threats being exploited.
4. Determine whether to accept or treat the risk.

Risk is treated using countermeasures which seek to reduce either the likelihood or consequence of a risk, or defer the risk to some third party (e.g. insurance). Implementing a countermeasure has a cost associated with it, which must be balanced against the expected utility of implementing the measure. Countermeasures may also expose additional risks, or retain residual risk which must be considered in the risk management process.

Risk management is well understood, and numerous standards and methodologies exist to describe the process (e.g. [6][7][8]). Integrating risk management into the trust management process is therefore useful, as it will enable us to leverage off this existing body of work.
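To make the four-step process concrete, the sketch below scores and triages risks in Python. It is an illustration only, not part of the paper's model: the asset names, the qualitative scales (borrowed from the case study in section 6), the additive risk level, and the acceptance threshold are all assumptions chosen for the example.

```python
# A minimal qualitative risk assessment: rate likelihood and consequence,
# then decide whether each risk is acceptable or needs treatment.
from dataclasses import dataclass

LIKELIHOOD = ["RARE", "UNLIKELY", "MODERATE", "LIKELY", "CERTAIN"]
CONSEQUENCE = ["INSIGNIFICANT", "LOW", "MODERATE", "SIGNIFICANT", "CATASTROPHIC"]

@dataclass
class Risk:
    asset: str        # what is threatened (step 1)
    threat: str       # how it could be compromised (steps 1-2)
    likelihood: str   # one of LIKELIHOOD (step 3)
    consequence: str  # one of CONSEQUENCE (step 3)

    def level(self) -> int:
        # Simple additive scale; real methodologies use a risk matrix.
        return LIKELIHOOD.index(self.likelihood) + CONSEQUENCE.index(self.consequence)

def decide(risk: Risk, acceptance_threshold: int = 3) -> str:
    # Step 4: accept low risks; treat the rest (countermeasure, insurance,
    # or, in the extended model of section 4, trust).
    return "accept" if risk.level() <= acceptance_threshold else "treat"

register = [
    Risk("credit card number", "disclosure in transit", "UNLIKELY", "LOW"),
    Risk("invested cash", "poor investment advice", "MODERATE", "SIGNIFICANT"),
]
for r in register:
    print(f"{r.asset}: {decide(r)} (level {r.level()})")
```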
3 Trust
To integrate trust with risk management, it is necessary to provide a framework by which different aspects of trust can be described and related. One of the more comprehensive frameworks for trust was developed by McKnight, Cummings and Chervany, and results from a survey of sixty papers across a wide range of disciplines [9][10]. McKnight et al's model provides a classification system for different aspects of trust, as well as a system for showing how trust can influence behaviour, and defines the following constructs:
Fig. 1. Security concepts and relationships from the Common Criteria
Trusting behaviour – the extent to which one person voluntarily depends on another person in a specific situation with a feeling of relative security, even though negative consequences are possible. This construct is in effect describing the "act" of trusting, and implies acceptance of risk (negative consequences) by the trusting party.

Trusting intention – the extent to which one party is willing to depend on the other party in a given situation with a feeling of relative security, even though negative consequences are possible. A trusting intention usually leads to trusting behaviour. Trusting intentions relate directly to the security policy which determines how entities in the system are trusted. A trusting intention essentially specifies a willingness to trust a given individual in a given context, and implies that the trusting entity has made decisions about the various risks and benefits of allowing this trust.

Trusting beliefs – the extent to which one believes and feels confident in believing that the other person is willing and able to act in the trusting party's best interests. A trusting intention will be largely based on the trusting
party's cognitive beliefs about the other person. McKnight et al describe four categories of trust belief:

1. Benevolence – the belief that a person cares about the welfare of the other person;
2. Honesty – the belief that a person makes agreements in good faith;
3. Competence – the belief that a person has the ability to perform a particular task; and
4. Predictability – the belief that a person's actions are consistent enough to forecast what they will do in a given situation.

Trusting beliefs characterise the information by which we make our trusting decision about a given individual. They may be based on evidence, recommendations from third parties (which themselves must be trusted), and often on simple intuition. We can think of trusting beliefs as being the measures by which we will determine whether a given entity should be trusted given a specific risk profile. It should be noted that not all beliefs need to be strong in order to trust an individual in a given context. In business transactions, the issue of benevolence is rarely important (although the presence of malevolence may be) when compared to the issues of honesty, predictability, and most importantly competence. Also, some beliefs are easier to be confident about than others. It is usually simpler to obtain a measure of an organisation's competence (by accreditation and recommendations) and predictability (by past dealings) than it is to obtain a measure of their benevolence and honesty. Like trusting intentions, beliefs may also be specific to a context (e.g. belief in the competence of a lawyer to write contracts does not extend to their competence to perform neurosurgery). As we shall see, it is trusting beliefs which are the most important to ascertain, as they will determine the confidence with which we establish our trusting intentions.

System trust – the extent to which one believes that proper impersonal or institutional structures are in place to enable one to anticipate a successful future endeavour. An important difference between system trust and trusting beliefs is that while trusting beliefs relate to the attributes of another person who is being trusted, system trust relates to the actual system/infrastructure under which the trusted action is taking place. System trust is important, as it provides stability to our interactions with people and organisations. Legal and regulatory systems provide punitive mechanisms to discourage malicious behaviour, and accreditation and certification schemes provide systems which allow us to evaluate an organisation's competence. Like trusting beliefs, system trust is a critical component of determining a trusting intention.

Dispositional trust – the extent to which one has a consistent tendency to trust across a broad spectrum of situations and persons. A person may have dispositional trust because they either believe in the general good nature of people, or they believe that they will achieve better outcomes by tending to trust people.
Situational trust – the extent to which one intends to depend on a non-specific party in a given situation. Situational trust is related to dispositional trust in that it is a general intention. However, it is differentiated by the fact that where dispositional trust refers to a broad spectrum of situations and persons, situational trust is related only to a specific situation.

Belief formation processes – the process by which new beliefs are developed and integrated into our schema about the world.

These constructs do not exist in isolation, but have well-defined relationships between them. We can clearly see that a trusting behaviour relies on the existence of a trusting intention, which in turn is created through the existence of one or more of trusting beliefs, system, dispositional or situational trust. Figure 2 shows the various constructs and their dependencies.
Fig. 2. Related Trust Constructs
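To show how the constructs fit together in software, here is a hedged Python sketch of the model as data. The construct names follow McKnight et al; the numeric confidence scale and the simple rule for forming an intention are assumptions invented for the illustration, not part of their model.

```python
# Sketch of McKnight et al's trust constructs as data.
# Confidences are modelled as probabilities in [0, 1] (an assumption).
from dataclasses import dataclass

@dataclass
class TrustingBeliefs:
    benevolence: float
    honesty: float
    competence: float
    predictability: float

@dataclass
class TrustContext:
    description: str
    system_trust: float        # confidence in legal/regulatory structures
    situational_trust: float   # general intention for this kind of situation

def trusting_intention(beliefs: TrustingBeliefs, ctx: TrustContext,
                       dispositional_trust: float,
                       required_confidence: float) -> bool:
    """Form a trusting intention when the weakest relevant input still meets
    the required confidence. Real decisions weight the inputs per context
    (e.g. benevolence rarely matters in business transactions)."""
    inputs = [beliefs.honesty, beliefs.competence, beliefs.predictability,
              ctx.system_trust, max(ctx.situational_trust, dispositional_trust)]
    return min(inputs) >= required_confidence

beliefs = TrustingBeliefs(benevolence=0.3, honesty=0.9,
                          competence=0.95, predictability=0.9)
ctx = TrustContext("online share purchase", system_trust=0.9,
                   situational_trust=0.5)
print(trusting_intention(beliefs, ctx, dispositional_trust=0.7,
                         required_confidence=0.7))  # True
```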
McKnight et al's conceptualisation of trust as multi-dimensional is both powerful and compelling. It also goes some way to explaining the difficulty that researchers in many disciplines have encountered in the formulation of a single broad definition of what trust is. In addition, their wide consultation of literature from many disciplines, including management, communication, sociology, social psychology and economics, positions their model within a context that is sufficiently broad to categorise most definitions of trust.
4 An Extended Risk Model
To extend the risk model to encompass trust, it is important to see the goal we are trying to achieve in developing a trust policy. A risk management process seeks to identify risks, and to determine whether those risks should be treated or accepted. Trust management, on the other hand, seeks to identify the circumstances under which we are prepared to accept risks that may be exposed by relying on certain entities. The key to merging these two concepts is to focus on risk as the common element. We can see that the definition for trust management can be related to the decision about risk acceptance/treatment. In effect, trust becomes a risk treatment option, i.e. you are prepared to accept risks if you trust the entities that can expose them. This fact is intuitively obvious to most people. The more someone is trusted, the more we feel we can rely on them, and consequently the more risk we expose ourselves to. When we talk about levels of trust, we are really discussing the level of risk that we are prepared to accept for relying on a trusted entity.
4.1 Relating Trust Policies to McKnight et al's Model
The constructs described in section 3 provide a vocabulary for describing how trust is formed. By combining this process with the risk management process, we can show how trust policies can be captured from the environment using a structured process.

In McKnight et al's model, the trusting intentions form the trust policy, which is essentially a statement of the conditions under which we are prepared to trust a given entity. As noted in section 3, these intentions are formed from a number of sources: our dispositional trust, our beliefs about the entity we are trusting, how we trust the systems which we look to to support and protect us, and our tendency to trust in the given situation.

As described, it is important to consider the risks of the behaviour of entities that we intend to trust. However, it is also important to consider the utility or value of trusting this entity, as this can considerably alter the decision to accept or treat a risk, or to not allow the behaviour to occur. On further analysis, we see that there are also important interactions between components of the trust framework that we must consider. One of the most important elements of forming the trusting intention is the existence of trusting beliefs about an entity. These are important, as they are the only input into the trusting intention decision which is specific to a given entity. McKnight et al's model identifies a belief formation process, which is an iterative mechanism that uses information and experience gathered from the environment to form one or more trusting beliefs about an individual. In this extended risk management model, the information that is input into this process is called a trust metric. Trust metrics contribute to our understanding about the four trusting beliefs (competence, predictability, honesty and benevolence), and include:

– information based on previous experience;
– recommendations from third parties;
– certifications or qualifications;
– memberships of professional organisations;
– certified histories (criminal records, credit reports etc.); and
– brand.
As we can see from this list, the trust metrics themselves can be subject to trust decisions about their accuracy. Thus, the belief formation process is recursive. Another important observation is that metrics may have a cost associated with them (e.g. obtaining a credit report may cost money). In developing a trust policy, we must be careful to ensure that the costs of gathering metrics do not outweigh the utility gained from trusting, and that we maximise the value of our metrics, such that the cost reflects the contribution to our understanding of the trusting beliefs. Figure 3 shows how these constructs relate to form an extended risk model.
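Because metric gathering has a cost, selecting metrics is a small optimisation problem. The following sketch is an assumed illustration (the paper prescribes no algorithm): it greedily picks metrics by confidence-per-cost until a required confidence is reached, combining quantitative confidences by summation as section 5.2 later suggests.

```python
# Assumed illustration: choose a cheap subset of trust metrics whose
# combined confidence in a belief reaches a required level.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    confidence: float  # probability the belief is correct given this metric
    cost: float        # cost of gathering the metric

def select_metrics(metrics, required_confidence):
    """Greedy pick by confidence-per-cost; combine by summing probabilities
    (the paper's rule for quantitative confidences), capped at 1.0."""
    chosen, total = [], 0.0
    for m in sorted(metrics, key=lambda m: m.confidence / m.cost, reverse=True):
        if total >= required_confidence:
            break
        chosen.append(m)
        total = min(1.0, total + m.confidence)
    return chosen, total

metrics = [
    Metric("previous experience", 0.40, cost=1.0),
    Metric("third-party recommendation", 0.30, cost=2.0),
    Metric("credit report", 0.35, cost=5.0),
]
chosen, conf = select_metrics(metrics, required_confidence=0.6)
print([m.name for m in chosen], round(conf, 2))
# ['previous experience', 'third-party recommendation'] 0.7
```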
4.2 Using the Extended Risk Model for Trust Management
By combining the concepts from risk management with the extended risk model, we can establish the following process for establishing a trust policy:

1. Identify the entities and situations you want to determine a trust policy for. This allows the establishment of trust contexts, which encapsulate the security context within which trust decisions will be made. Note that such a context should include all probable trusted entities and threat agents.
2. Identify the assets to be protected within this trust management context, the threats to these assets, and the expected impact if those assets are compromised.
3. Calculate the expected utility of trusting entities in the given situations.
4. Identify the vulnerabilities or weaknesses which can lead to these threats arising.
5. Analyse the risk (i.e. the likelihood and consequences) of the vulnerabilities leading to these threats being exploited.
6. Determine the adequacy of existing countermeasures which may mitigate these risks.
7. Determine the required beliefs, and the confidences in these beliefs, required to trust (or distrust) entities which may expose the given risks.
8. Identify the various impersonal structures or systems which have an impact on the given trust context. Common systems will include legal or regulatory frameworks. Analyse our confidence in these systems to mitigate risks.
9. Identify metrics which can help make decisions about the required trusting beliefs, and determine the confidence we have in the accuracy of these metrics (in itself a mini trust-management decision).
10. Evaluate the costs of gathering these metrics, and relate this to the expected utility and their contribution to confidence in the trusting beliefs. Use this evaluation to select the subset of metrics which can be used to establish the trusting beliefs.
11. Using the metrics, establish the beliefs identified in step 7 and determine whether they meet the required confidence levels.
12. Based on this evidence and the levels of system trust, either unconditionally accept a trusting intention for the evaluated entity in the given situation; reject the trusting intention; or treat the risk and re-evaluate.
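The twelve steps can be read as a decision loop. The skeleton below is hypothetical: the values stand in for analyst judgement, and the thresholds and names are invented, but it shows how metric confidences (steps 9-11) and system trust (step 8) feed the final accept/reject/treat decision (step 12).

```python
# Hypothetical skeleton of the twelve-step process in section 4.2.
# Numbers stand in for analyst judgement; thresholds are invented.

def establish_policy(context, entities, required=0.7):
    policy = {}
    for entity in entities:
        # Steps 9-11: combine metric confidences (summing, per section 5.2).
        confidence = min(1.0, sum(m["confidence"] for m in entity["metrics"]))
        # Step 8: confidence in legal/regulatory systems for this context.
        system_trust = context["system_trust"]
        # Step 12: accept, treat and re-evaluate, or reject.
        if confidence >= required and system_trust >= required:
            policy[entity["name"]] = "trust"
        elif context["utility"] > context["treatment_cost"]:
            # Treat: add countermeasures, insure, or gather better metrics.
            policy[entity["name"]] = "treat-and-reevaluate"
        else:
            policy[entity["name"]] = "reject"
    return policy

context = {"system_trust": 0.9, "utility": 10.0, "treatment_cost": 2.0}
entities = [
    {"name": "E-Shares", "metrics": [{"confidence": 0.5}, {"confidence": 0.3}]},
    {"name": "unknown-broker", "metrics": [{"confidence": 0.2}]},
]
print(establish_policy(context, entities))
# {'E-Shares': 'trust', 'unknown-broker': 'treat-and-reevaluate'}
```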
Fig. 3. Extended Risk Model
Trusted entities and threat agents may be either known or unknown. In the case that they are known, the policy should include the actual measurements for each entity obtained using the trust metrics. In the case that they are unknown, the policy should contain the list of metrics which are required to determine whether an entity should or should not be trusted.

If a trusting intention is rejected, then the risk may be treated by a number of mechanisms:

1. Add countermeasures which decrease the risk;
2. Defer the risk to a third party (e.g. insurance); or
3. Increase the required trusting belief confidences by obtaining more or better metrics.
We can see that this process extends the risk management process by integrating components of the trust model.
5 Writing Trust Policies
The outcome of the trust management process should be a policy which documents the decisions made. The policy should include:
Trust metrics – A list of the metrics used in a trust policy, the trusting beliefs they measure, and their appropriateness for given trust contexts.

Confidence levels – A description of the list of qualitative or quantitative labels that indicate our level of confidence in a given trusting belief.

Trust context policies – An articulation of the policy for making trusting decisions for each of the identified trust contexts.
These items are described below.
5.1 Trust Metrics

One important component of the extended risk model is the use of trust metrics. These are mechanisms which can be used to enhance our confidence about certain beliefs. An important thing to note is that the trust metrics themselves need to be trusted, and we will have a confidence level associated with their precision. A trust policy should begin by evaluating the trust metrics which it will use, and providing the confidence levels which we have in their measurements in a given context. When specifying the metrics used in the policy, the policy writer should state:
– the contexts in which that metric is trusted;
– the belief(s) that they measure;
– the confidence in that metric for those contexts in which it is trusted, and how this is measured (NB: metrics can be evaluated using other metrics); and
– the cost of evaluating that metric.

In general, metrics should require close scrutiny, as we are exposed to more systemic risk by trusting them.
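As a purely illustrative example, a metric entry in a policy document might be recorded as follows; the field names and values are assumptions, not a format defined by the paper.

```python
# Hypothetical trust metric specification entries for a policy document.
metric_specs = [
    {
        "metric": "third-party audit report",
        "trusted_contexts": ["online payment", "credential storage"],
        "beliefs_measured": ["competence"],
        "confidence": "MEDIUM-HIGH",  # confidence in the metric itself
        # A metric can be evaluated using other metrics:
        "measured_by": "reputation of the auditing firm",
        "cost": "fee for obtaining the report",
    },
    {
        "metric": "previous experience",
        "trusted_contexts": ["any"],
        "beliefs_measured": ["predictability", "honesty"],
        "confidence": "MEDIUM-HIGH",
        "measured_by": "length and consistency of the dealing history",
        "cost": "none (already available)",
    },
]
```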
5.2 Confidence Labels

The policy writer should include confidence labels which may be attached to particular beliefs in the trusting decision. Confidence labels can be either qualitative or quantitative, and are similar to the likelihood measurements which are commonly used in risk management. The confidence label represents the likelihood that the belief it is attached to is correct, i.e. high confidence means a high probability of correctness. Figure 4 gives an example of qualitative and quantitative labels which might be used in a trust policy.
Label     | Qualitative                                                                                               | Quantitative
Very Low  | A belief with this label has very low confidence; it should only be relied on if the risk is negligible. | p <= 0.5
Low       | A belief with this label has low confidence; it may be relied on only if the risk is low.                | p > 0.5
Medium    | A belief with this label has medium confidence; it may be relied on for contexts at medium risk.         | p > 0.95
High      | A belief with this label has high confidence; it may be relied on in high risk situations.               | p > 0.995
Very High | A belief with this label has very high confidence; it may be relied on in all situations.                | p > 0.999

Fig. 4. Example confidence labels
In the quantitative descriptions, p is the probability that a belief with the given label is correct. Confidence levels may be combined to obtain new confidence measures. This is useful when, for example, a number of metrics are being used to determine the level of a trusting belief. Quantitative metrics can be combined simply by summing the probabilities (i.e. only one of the metrics has to be correct for the belief to be correct). Qualitative metrics can be combined either on an ad hoc basis or by using rules to combine levels (e.g. HIGH = 3 × MEDIUM).
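The combination rules translate into a few lines of code. In this hedged sketch the probability bounds come from figure 4, while the cap at 1.0 and the generalisation of the "HIGH = 3 × MEDIUM" rule to every level are assumptions made to keep the example executable.

```python
# Combining confidence measures (section 5.2).
# Quantitative: sum the probabilities that at least one source is correct,
# capped at 1.0 (the cap is an assumption; the paper just says "sum").
def combine_quantitative(probs):
    return min(1.0, sum(probs))

# Qualitative: example rule "HIGH = 3 x MEDIUM" applied over ranked levels.
LEVELS = ["VERY LOW", "LOW", "MEDIUM", "HIGH", "VERY HIGH"]

def combine_qualitative(labels):
    # Three occurrences of a level promote it by one (assumed generalisation).
    counts = {lvl: labels.count(lvl) for lvl in LEVELS}
    for i, lvl in enumerate(LEVELS[:-1]):
        counts[LEVELS[i + 1]] += counts[lvl] // 3
    return next(lvl for lvl in reversed(LEVELS) if counts[lvl] > 0)

print(combine_quantitative([0.6, 0.5]))                    # 1.0
print(combine_qualitative(["MEDIUM", "MEDIUM", "MEDIUM"]))  # HIGH
```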
5.3 Trust Contexts
These are all the situations and environments that are under consideration for the trust policy. For each trust context, the trust policy should detail:

– a description of the context;
– the risks inherent in trusting entities for the given context;
– the expected utility of trusting entities for the given context;
– a list of the possible trusted entities and threat agents;
– a list of the beliefs and confidences required to trust/distrust entities in the trust context; and
– a list of the required/available metrics appropriate to establish these beliefs.
Contexts may be included within other contexts; for example, a context which covers user access to a web site may also include a sub-context for privileged access to files. This allows a simple hierarchical organisation of trust policies. If specific entities are to be trusted for a given context, these entities should be listed along with the rationale for trusting them. Where the policy is specifying criteria for trusting unknown entities, it is sometimes useful to separate out the requirements in terms of the type of entity which is to be trusted. For example, entities could be divided into customers, employees and contractors. The policy writer may wish to express differing levels of required beliefs and confidences in each of these, as there are varying levels of utility for differing classes to exploit threats, and as such varying likelihoods of threats occurring.
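A trust context entry, with the hierarchical nesting described above, might then look like this sketch; every name and value is invented for illustration.

```python
# Hypothetical trust context entries, nested hierarchically.
web_site_context = {
    "description": "user access to the web site",
    "risks": ["defacement", "disclosure of member data"],
    "expected_utility": "customer self-service, lower support cost",
    "trusted_entities": ["customers", "employees"],
    "threat_agents": ["hackers", "competitors"],
    "required_beliefs": {"competence": "MEDIUM", "honesty": "MEDIUM"},
    "metrics": ["previous experience", "recommendations"],
    "sub_contexts": [
        {
            "description": "privileged access to files",
            "risks": ["tampering with content"],
            "trusted_entities": ["employees"],
            # Stricter beliefs for the higher-risk sub-context.
            "required_beliefs": {"competence": "HIGH", "honesty": "HIGH"},
            "metrics": ["certified history", "contractual obligations"],
        }
    ],
}
```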
6 Hypothetical Case Study
In this section, the extended risk management model is applied to a hypothetical case study in which an individual is making investments using an electronic trading service. The case study serves to illustrate the complexity involved in evaluating a given trust decision, as it shows how making one trust decision relies on many other trust decisions. It should be noted that in the following example, some of the steps described in section 4.2 have been consolidated. The aim is to give a general feel for how a trust policy can be developed using the mechanism, not to explicitly show how the policy should be expressed.
6.1 Scenario
Bob is a naïve investor with a small amount of cash to spend. He is contemplating some direct share investments, and so asks his friend Alice, who is wise in the ways of the sharemarket, for her advice. Alice suggests he make a number of investments, but recommends in particular a recently listed small Internet company, ComDot.com. She says that she has heard on the grapevine that this company is likely to do spectacularly well once it releases the next version of its new Web-site construction software. Alice also suggests that rather than fork out for brokerage fees, Bob purchase the shares directly from E-Shares, an online brokering firm which allows small purchases using a credit card. Bob contemplates whether to take Alice's advice.
6.2 Trust Management Process
Based on the information from Alice, Bob has to make a number of decisions about whether to invest in ComDot.com shares. Doing this requires a number of trusting decisions, which may also involve gathering information and determining whether that should be trusted. Following the trust management process outlined in section 4.2, Bob sets about determining his trust policy.

Establishing the Trust Management Context. The scenario above constitutes Bob's trust management context, i.e. he is making decisions about trust within the context of making a specific decision about buying a certain set of shares using an electronic trading service. There are a number of trusting intentions which Bob must have before he can make this decision:
1. Bob must trust Alice to give good advice about the shares;
2. Bob must trust ComDot.com to conduct their business competently; and
3. Bob must trust E-Shares to respect his privacy and keep his credit card details secure.

In addition, Bob must also consider threats from the following sources:
– hackers, who wish to steal Bob's credit card details and make fraudulent purchases;
– ComDot.com's competitors, who may wish to spread misinformation in order to gain market advantage; and
– marketeers, who may wish to use knowledge of Bob's share purchase as fuel for direct marketing campaigns.
Calculate Expected Utility. By purchasing the shares, Bob aims to make at least an 8% per annum return on his investment. By using the online trading scheme he hopes to save up to 20% in brokerage fees.
Identify Assets to be Protected. In this scenario, Bob determines that he has three main assets under threat:

1. The cash he is investing (which could be lost due to poor investment);
2. His credit card number (there is a threat of disclosure leading to fraudulent transactions on his credit card); and
3. His privacy (Bob doesn't want people knowing how he spends his money).
Vulnerability Analysis. By analysing his assets and the possible threats, Bob determines the set of vulnerabilities which may lead to those threats being realised:

1. Information Bob uses to make decisions could be inaccurate.
2. Companies which Bob invests in might go out of business.
3. E-Shares might disclose private information.
4. E-Shares might disclose Bob's credit card details.
5. Hackers might intercept Bob's credit card details over the Internet.
Risk Analysis. For each of the above vulnerabilities, Bob identifies the likelihood and consequences of these vulnerabilities causing threats to be realised. Likelihood is measured qualitatively (RARE, UNLIKELY, MODERATE, LIKELY, CERTAIN), and the label UNKNOWN is used where making this judgement is not possible in this first analysis (usually due to lack of knowledge about trust levels). Consequences are also indicated qualitatively, with the labels INSIGNIFICANT, LOW, MODERATE, SIGNIFICANT and CATASTROPHIC. This analysis is summarised in figure 5.
Identify Required Beliefs and Confidences. Bob now needs to determine the level of required beliefs in order to accept the risks he has identified. We shall briefly outline these decisions for two of the identified vulnerabilities:

Information Bob uses to make decisions could be inaccurate. Given the risk identified, Bob determines that he has to trust the information he receives about given shares with a HIGH degree of confidence (see figure 4). In order to trust the information he receives, Bob determines he has to know that the sources of the information are competent, honest and predictable, and that his confidence in these beliefs must either be HIGH, or the information must be confirmed from other sources, such that the total confidence for each of these beliefs is HIGH.
Developing Electronic Trust Policies Using a Risk Management Model Item
1. 2. 3. 4. 5.
#
Likelihood
13
Consequences Comments
UNKNOWN SIGNIFICANT Likelihood depends on how much we trust the source of information MODERATE SIGNIFICANT { MODERATE SIGNIFICANT { MODERATE LOW Low consequences, as vendor bares liability for all but $50 of fraudulent transactions UNLIKELY LOW As above, but SSL encrypted link which makes it less likely. Fig. 5.
Risk analysis Summary
E-Shares might disclose Bob's credit card details. Given the identified level of risk, Bob decides he needs only a MODERATE level of confidence in E-Shares' competence to protect his credit card details.
Identify and Evaluate Metrics. When relying on information or actions, Bob determines the following metrics to be used to determine the confidence he has in certain beliefs about that entity:

– previous experience with the entity (MEDIUM-HIGH confidence);
– recommendations from other trusted sources (MEDIUM confidence);
– established brands (MEDIUM confidence);
– contractual obligations (HIGH confidence); and
– regulatory controls (MEDIUM confidence).

In addition, he determines the following additional metrics to be used where specific software countermeasures (e.g. the SSL-enabled browser he uses) are used to combat risk:

– ITSEC or Common Criteria evaluation (HIGH);
– open source software which has been heavily scrutinised (MEDIUM);
– well known product or vendor (MEDIUM); and
– recommendations from other trusted sources (LOW-MEDIUM).

Lastly, Bob determines the following metrics, which are used where he is relying on a third-party security system:

– disclosure of security practices and procedures (LOW);
– third-party audit by a trusted auditor (MEDIUM-HIGH); and
– certified quality system (e.g. ISO9000) (MEDIUM-HIGH).
Belief Analysis.

Information Bob uses to make decisions could be inaccurate. Bob has already determined the following beliefs about two entities he will rely on for information:

– Alice: Competence (MEDIUM), Honesty (HIGH), and Predictability (HIGH). Alice can be trusted for information, providing the information is confirmed from at least one other medium-trusted source. These beliefs were determined solely from a long history of past experience with Alice.
– Reuters News-Wire service: Competence (MEDIUM-HIGH), Honesty (HIGH), and Predictability (HIGH). Reuters can be trusted to report information, providing it can be confirmed by at least one other LOW-MEDIUM trusted source. These beliefs are determined by Reuters' good brand, recommendations from Alice and other friends, and previous experience.

E-Shares might disclose Bob's credit card details. Bob determines a HIGH level of confidence about E-Shares' competence to keep his credit card details secure. This belief is determined from the existence of a certified ISO9000 quality system and a third-party audit from KPMG, which E-Shares describe on their web site.
Trusting Decisions. Figure 6 summarises Bob's trusting decisions for each of the identified vulnerabilities.
Item # | Trust decision | Comments
1      | Accept Risk    | Trust Alice's information (confirmed by a Reuters article); a policy is described for trusting subsequent information
2      | Accept Risk    | Sufficient information is available to trust ComDot.com's competence to do well. A policy is described for obtaining the required trust in other companies whose shares Bob wants to purchase
3      | Accept Risk    | Bob determines E-Shares' privacy policy is sufficient, and trusts them to enforce it
4      | Accept Risk    | Bob is convinced by third-party evidence that E-Shares is competent at keeping its site secure enough to mitigate this risk
5      | Accept Risk    | Bob trusts the SSL mechanism used to secure communications with E-Shares, and trusts his browser and E-Shares' web server to implement this mechanism correctly

Fig. 6. Trusting decisions summary

6.3 Summary
This hypothetical case study outlines the application of the trust management process based on the extended risk model. It should be noted that only a subsection of the full analysis is presented. Nevertheless, it serves to illustrate the plausibility of such a technique in a real-world situation.
7 Related Work
Khare and Rifkin [11] describe how trust management philosophies can be applied to the World Wide Web, and describe how trust policies can be designed. However, Khare and Rifkin's work is very much focused on the expression of trusting intentions, i.e. they describe how to express a trust policy, but do not provide a methodology for how to derive it.

In [12] Jøsang describes general criteria for modelling trust in information security and critiques some other existing formal schemes. Further work by Jøsang [13] develops these ideas into a formal model based on a concept called subjective logic. Subjective logic allows us to reason about beliefs or opinions using an algebraic notation, and would be useful in the context of working with trusting beliefs in the extended risk model.

As indicated, there have been several attempts to build trust management systems [1][2][3]. Of these, REFEREE [3] is probably the most notable, as it provides a way to integrate with third-party recommender systems like the PICS [14] labelling scheme. The REFEREE architecture is also extensible, making it simple to integrate new components into the system. Future work on automating the trust management process could benefit highly by utilising REFEREE as a platform for gathering and evaluating information.
8 Future Work
Decision support systems is a catch-all term for a wide variety of systems which provide computer support for decision making [4]. There is a significant body of work on using decision support systems for risk management [15][8], which could be leveraged to develop similar systems for trust management based on the extended risk model.

Another direction for this work might be the development of trust metrics which could be used to automatically establish beliefs about pages on the World Wide Web. Examples of such metrics might include:

– the number of pages linking to a given web page;
– trusted pages linking to given web pages;
– third-party recommendations (e.g. PICS labels); and
– the number of hits on a given web page.

Search engines might be useful sources for such information. In particular, the Google search engine [16] already uses link counts in order to rate matched pages.

Lastly, the importance of considering dynamically changing policies needs to be investigated. Beliefs and trust are not static, but change as new information is received. It would be useful to investigate how policies could be defined which cope with dynamic changes.
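As a hedged sketch of how such web trust metrics might be combined into a single score (the weights, normalising constants, and function name are all invented; the paper only proposes the idea):

```python
# Assumed illustration: score a page's trustworthiness from link metrics.
# Weights and normalising constants are arbitrary choices for the example.
def web_trust_score(inlinks: int, trusted_inlinks: int,
                    pics_recommendations: int, hits: int) -> float:
    score = (0.2 * min(inlinks / 1000, 1.0)
             + 0.5 * min(trusted_inlinks / 10, 1.0)
             + 0.2 * min(pics_recommendations / 5, 1.0)
             + 0.1 * min(hits / 100_000, 1.0))
    return score  # in [0, 1]; usable as a trust metric in belief formation

print(round(web_trust_score(inlinks=350, trusted_inlinks=4,
                            pics_recommendations=1, hits=20_000), 3))  # 0.33
```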
9 Conclusions
This paper has presented a scheme for developing trust policies based on an extended risk management model. The scheme was applied to a hypothetical case study, which shows the utility of the process in real-world applications. The paper has also discussed related work and given some firm directions for future research in this area.
References

1. M. Blaze, J. Feigenbaum, and J. Lacy. Decentralized trust management. In Proceedings of the 1996 Symposium on Security and Privacy, pages 164-173, 1996.
2. Matt Blaze, Joan Feigenbaum, and Angelos D. Keromytis. KeyNote: Trust management for public-key infrastructures. In Cambridge 1998 Security Protocols International Workshop, England, 1998.
3. Yang-Hua Chu, Joan Feigenbaum, Brian LaMacchia, Paul Resnick, and Martin Strauss. REFEREE: Trust management for web applications. In Proceedings of the 6th International WWW Conference, 1997.
4. Dennis Longley, Michael Shain, and William Caelli. Information Security: Dictionary of Concepts, Standards and Terms. Macmillan, 1992.
5. Common Criteria for Information Technology Security Evaluation – Part 1: Introduction and general model, May 1998.
6. Standards Australia/Standards New Zealand. AS/NZS 4360:1999 Risk Management, 1999.
7. Communications Security Establishment (CSE), Government of Canada. A Guide to Security Risk Management for Information Technology Systems, MG-2, 1992. URL: http://www.cse.dnd.ca/cse/english/Manuals/mg2int-e.htm.
8. Dennis Longley, Michael Shain, and William Caelli. Information Security: Dictionary of Concepts, Standards and Terms, pages 450-453. Macmillan, 1992.
9. D. Harrison McKnight, Larry L. Cummings, and Norman L. Chervany. Trust formation in new organizational relationships. In Information and Decision Sciences Workshop, October 1995. URL: http://www.misrc.umn.edu/wpaper/wp96-01.htm.
10. D. Harrison McKnight and Norman L. Chervany. The meanings of trust. Technical report, MISRC Working Papers Series, 1996. URL: http://www.misrc.umn.edu/wpaper/wp96-04.htm.
11. Rohit Khare and Adam Rifkin. Weaving a web of trust. World Wide Web Journal, 2(3), 1997.
12. Audun Jøsang. Prospectives for modelling trust in information security. In Vijay Varadharajan, editor, Proceedings of the 1997 Australasian Conference on Information Security and Privacy. Springer-Verlag, 1997.
13. Audun Jøsang. A model for trust in security systems. In Proceedings of the Second Nordic Workshop on Secure Computer Systems, 1997.
14. W3C. Platform for Internet Content Selection (PICS) technical specification. URL: http://www.w3.org/PICS/.
15. Giampiero E.G. Beroggi and William A. Wallace, editors. Computer Supported Risk Management. Kluwer Academic Publishers, 1995.
16. Google Inc. Why use Google?, 1999. URL: http://www.google.com/why_use.html.
SECURE: A Simulation Tool for PKI Design

L. Romano¹, A. Mazzeo², N. Mazzocca²

¹ Università degli Studi di Napoli “Federico II”, Via Diocleziano, 328, I-80124 Napoli, Italy
[[email protected]]
² II Università degli Studi di Napoli, Via Roma, 29, I-81031 Aversa (CE), Italy
[mazzeo, [email protected]]
Abstract. This work presents a novel methodology for the security analysis of computer systems. The suggested approach, called simulated hazard injection, is a variant of simulated fault injection, which has already been employed with success in the design and evaluation of fault-tolerant computer systems. The paper describes the key ideas underlying the proposed methodology, and defines a portfolio of security measures to be extracted from experimental data. These concepts are incorporated in a tool for the dependability analysis of Public Key Infrastructure (PKI) based systems. The tool is called SECURE and is currently under development at the University of Naples. The paper describes the architecture of the tool and discusses its potential.
1 Introduction

Security is of crucial importance in all automated, business-related transactions. Forms of electronic commerce, such as communication via electronic mail, Electronic Data Interchange (EDI), or the World Wide Web, are just a few examples of crucial fields of application where a security breach may have significant economic impact and/or legal consequences. The deployment of paperless mechanisms can be highly beneficial in reducing business costs and in creating opportunities for new and/or improved customer services. However, the electronic systems and infrastructures that support electronic transactions are susceptible to abuse, misuse, and failure in many ways. All participants, i.e. commercial traders, financial institutions, service providers, and consumers, are exposed to a variety of potential damages, which are often referred to as electronic risks [1]. These may include direct financial loss resulting from fraud, theft of valuable confidential information, loss of business opportunity through disruption of service, unauthorized use of resources, loss of customer confidence or respect, and costs resulting from uncertainty. In order to mitigate risks and promulgate the deployment of information security technology on a wide scale in the commercial environment, appropriate security countermeasures, and business and legal frameworks, must be established. The following services must be provided [2]:
• Confidentiality – Provides privacy for messages and stored data by hiding information using encryption techniques;
• Message Integrity – Provides assurance to all parties that a message remains unchanged from the time it was created to the time it was opened by the recipient;
• Non-repudiation – Can provide a way to prove that a document came from someone even if he/she tries to deny it;
• Authentication – Provides two services. The first is to identify the origin of a message and provide some assurance that it is authentic. The second is to verify the identity of a person logging onto a system and, after doing so, to continue to verify that person's identity in case someone tries to break into the connection and masquerade as the user.

For the most part, these services are enabled through public key (asymmetric) schemes rather than private (symmetric) schemes, for these are best able to cope with scalability problems. The distribution of keys, however, is difficult even in the public scheme if the Internet is the communication channel. On the Internet, obtaining a public key requires a certain level of trust. One must know that the public key belongs to the person one thinks it does. Someone might be masquerading as someone else. A solution is to work with a trusted third-party organization, called a Certificate Authority, that distributes public keys for people and organizations and that verifies the credentials of the people associated with the public keys. In this way trust is transferred from a people-trusting-people to a people-trusting-an-organization scheme. This leads to a more complex organism (eventually to a world-wide global organism) incorporating independent certification authorities that can transfer trust among themselves. Such an organism is called a Public Key Infrastructure (PKI).

As is evident from the above description, a Public Key Infrastructure (PKI) is a complex organization, consisting of policies, services, and professionals. In this context, a party who acts or is in a position to act in reliance upon a certificate and its subject public key is referred to as a Certificate User or Relying Party. A certificate will become a sort of global passport and a personal database that holds a wealth of information about the certificate subject in a very secure way. In order to make public-key-based technologies usable on a wide scale, PKIs must support a variety of services, such as:

• Registering users – This also entails authenticating certificate applicants. This task can be performed either by the Certification Authorities (CAs) or by separate entities, called Registration Authorities (RAs), that front-end the Certification Authority service;
• Issuing certificates – A CA has to issue certificates to Subscribers, i.e. parties who are the subject of a certificate and who are capable of using, and are authorized to use, the private key that corresponds to the public key listed in the certificate;
• Providing Information about Certificate Status – Certificates and other relevant information about certificates must be delivered or made accessible online to Certificate Users;
• Issuing Certificate Revocation Lists – If a certificate is to be revoked, the Certification Authority needs to make potential users of the certificate aware of the revocation.
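To make the service list concrete, here is a hedged structural sketch of the core CA operations in Python. It models registration, issuance, revocation, and CRL-based status checking as plain data; real PKIs use X.509 certificates and cryptographic signatures, which are deliberately omitted here.

```python
# Structural sketch of core CA services (no real cryptography).
from dataclasses import dataclass, field

@dataclass
class Certificate:
    serial: int
    subject: str
    public_key: str  # placeholder for the subject's public key

@dataclass
class CertificationAuthority:
    name: str
    issued: dict = field(default_factory=dict)
    crl: set = field(default_factory=set)  # serials of revoked certificates
    _next_serial: int = 1

    def register_and_issue(self, subject: str, public_key: str) -> Certificate:
        # Authenticating the applicant is assumed to be done by an RA.
        cert = Certificate(self._next_serial, subject, public_key)
        self.issued[cert.serial] = cert
        self._next_serial += 1
        return cert

    def revoke(self, serial: int) -> None:
        self.crl.add(serial)  # published periodically as a CRL

    def status(self, serial: int) -> str:
        if serial not in self.issued:
            return "unknown"
        return "revoked" if serial in self.crl else "valid"

ca = CertificationAuthority("ExampleCA")
cert = ca.register_and_issue("Alice", "pk-alice")
ca.revoke(cert.serial)
print(ca.status(cert.serial))  # revoked
```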
Such services must be provided in accordance with a set of well-defined policies and enforced rules. These must be clearly stated in a document (or a set of documents) called a Certification Practice Statement (CPS).
2 Issues

From the technological perspective, there are no major outstanding challenges. The field of information security has been studied for many years by governments, academia, and a small industry sector of specialists, and solutions to most of the technical problems are well understood by the technology specialists. Until recently, however, these information security solutions have received little use, except for national security and certain internal banking purposes. Therefore, there is still a tremendous amount to be learned about deploying information security technology on a wide scale. In addition, diverse legal and business practices and controls must be addressed in conjunction with the deployment of technological security countermeasures. When trying to enforce security in this context, involving highly diverse organizations and communities which need to work together in complex ways, many interesting and subtle issues arise, of both a technical and a legal nature.

A variety of products exist from many different vendors, which provide the mechanisms needed to build PKI-based systems. None of them, however, incorporates the means to assist the system designer in identifying the optimal solution for a specific scenario. This may involve choosing between different potential architectures, and setting the most appropriate values for crucial configuration parameters, in order to maximize interoperability while minimizing the risk and the impact of a security compromise.

Increased security requirements have created an urgent need for methodologies, models, and automated, cost-effective design and validation tools for trusted computer systems and infrastructures. The success of the engineering process will rely on the capability of the designers to measure or evaluate the security of each component, as well as of the overall architecture. Thus, security prediction and evaluation must become an integral part of the system design activity. Predicting, at design time, the security level a system will achieve at operation time is quite a hard task. Design methodologies and tools must be provided which allow the developer to address issues such as: ways to structure relationships between multiple certification authorities, to associate different certification policies or practices with different certification paths, to find and validate certification paths, to develop and test certificate management protocols, and to enact legislation regarding PKIs to support digital signatures on commercial and governmental business transactions. To be efficient, the analysis must be conducted under realistic operational conditions, which take into account intentional and unintentional attacks, and other exceptional conditions.
3 Approach

The approach we suggest here is based on simulation. In the design phase of complex systems, simulation is an important experimental means for evaluating a system under a variety of aspects [3]. Simulation has many advantages over analytical modeling, some of which are reported here:

• Compared to analytical modeling, simulation has the capability to model complex systems to a high degree of fidelity, without being restricted to assumptions made to keep an analytical model mathematically tractable;
• Analytical modeling tools only use probabilistic models to represent the behavior of a system. In essence, the effect of an event on the system is predefined by a set of probabilities and distributions. Functional simulation tools not only use stochastic modeling, they also permit behavioral modeling (which does not require that the effect of an event be predefined), and in some cases they allow the actual software to be integrated and executed within the simulated model. If this is the case, a number of system parameters are the results of (and not inputs to) the simulation experiment. In addition, unlike analytical modeling, in which only a few types of distribution are commonly used for the tractability of models, the simulation method can handle any form of distribution, empirical or analytical;
• Too many factors affect the behavior of a system in the field, which cannot easily be modeled analytically.

Even with simulation, however, a number of issues arise. A fundamental issue is simulation time explosion. This occurs in two cases:

1. When too much detail is simulated, such as modeling processes at an extreme level of detail;
2. When the target system is already characterized by a high security level, i.e. the probability of experiencing security breaches is extremely low, which means simulation sessions require a very long time in order to collect a statistically significant amount of experimental results.

Several techniques, including mixed-mode simulation [4], importance sampling [5], and hierarchical simulation [6], can be used to address the time explosion problem.

Another fundamental issue involves workloads. The impact of hazards on system security is workload dependent. Hence, it is important to analyze a system while it is executing representative workloads. Workloads for simulation experiments can be trace files of real applications, selected benchmarks, or synthetic programs. If the goal of the study is to assess the security level attained by the system in a well-defined operational context, a model of the real applications to be run in the target configuration should be used in the simulation. If the goal is to study hazard impact with regard to general workloads, several representative benchmarks should be selected for the simulation. If the objective is to exercise every functional unit and location, neither real applications nor benchmarks may be appropriate. In this case, synthetic workloads may have to be designed to achieve the goal. The workload issue complicates simulation models and increases simulation time. It is essential to develop techniques to represent realistic workloads while maintaining reasonable simulation times.
With these ideas in mind, we have developed a novel methodology and a framework for system security analysis. The technique we suggest, hereinafter called simulated hazard injection, is a variation of simulated fault injection. Simulated fault injection has been successfully employed for dependability (reliability, availability, and performability) evaluation of fault-tolerant computer systems [7]. In simulated fault injection, faults, i.e. pathological events which may originate failures, are injected into a simulated model of the system, in order to evaluate the capability of the system to cope with errors. Simulated hazard injection consists of simulating the behavior of the target system while hazards, i.e. attacks on system security which may originate security compromises, are injected into its components, in order to evaluate the security level attained by the system. To the best of our knowledge, such an approach has never been proposed in the literature before. The following definitions are used:

• Security hazard – An unintentional or intentional attack on system security, which leaves the system exposed to potential security breaches;
• Security compromise – A security breach which manifests itself in the system. It is the consequence of a security hazard.

Simulated hazard injection can be used to pick out the key features, define the structure, and specify the configuration parameters of the target system.
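To make the notion of simulated hazard injection concrete, here is a minimal, self-contained C++ sketch. It is purely illustrative; the component, the probabilities, and the injection policy are assumptions, not the SECURE implementation. A hazard is injected into a simulated component with some probability per step, and it turns into a compromise when a hazard-sensitive operation touches the component:

// Illustrative sketch of simulated hazard injection (not the SECURE API).
#include <iostream>
#include <random>

struct Component {
    bool hazard_present = false;   // a hazard has hit this component
    int  compromises    = 0;       // hazards that were activated by an operation
};

int main() {
    std::mt19937 rng(7);
    std::bernoulli_distribution hazard_hits(0.05);   // assumed per-step injection probability
    std::bernoulli_distribution op_sensitive(0.30);  // operation touches the hazardous state

    Component ca;                                    // e.g. a simulated Certification Authority
    for (int step = 0; step < 10000; ++step) {
        if (hazard_hits(rng))
            ca.hazard_present = true;                // hazard injected into the component
        if (ca.hazard_present && op_sensitive(rng)) {
            ++ca.compromises;                        // activation: the hazard originates
            ca.hazard_present = false;               // a security compromise
        }
    }
    std::cout << "compromises observed: " << ca.compromises << "\n";
}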
4 Measures

In order to evaluate the security level of the system, quantitative measures must be defined, and support must be made available to extract such measures from experimental data. Based on the previously defined concepts of hazard and compromise, we have identified a portfolio of parameters which are suited for use as security measures. Some basic measures are defined in the following:

• Mean Number of Transactions Executed (MNTE) - The mean number of transactions executed in a secure way before the occurrence of a security compromise;
• Mean Time To Compromise (MTTC) - The mean time elapsed before the occurrence of a security compromise;
• Mean Time Between Compromises (MTBC) - The mean time between the occurrences of security compromises;
• Mean Time To Detection (MTTD) - The mean time elapsed before the detection of a hazard/compromise;
• Mean Time To Removal (MTTR) - The mean time elapsed before the removal of a hazard/compromise.

A fundamental measure is latency. Extra care must be devoted to the evaluation of security hazard/compromise latency. In this context, an event is said to be latent if the following conditions hold:

1. It has occurred;
2. It has not been activated (i.e., it has not caused any other remarkable event);
3. It has not been detected (i.e., the system is unaware of it);
4. It has not been removed (i.e., it has not been eliminated from the system).

According to the above definition, it is possible to distinguish between three different kinds of latency, namely:

• Hazard activation latency - The amount of time an undetected hazard stays latent before it is activated (i.e., originates a security compromise);
• Hazard/compromise detection latency - The amount of time a hazard/compromise, which is present in the system, stays undetected;
• Hazard/compromise removal latency - The amount of time an undetected hazard/compromise persists in the system before it is eliminated.

The three contributions are shown in Figure 1. In a) a hazard hits a system entity at time t_o (time of occurrence) and an operation (which is sensitive to the presence of the hazard in the entity) is performed at time t_a (time of activation). The time elapsed between t_o and t_a is the hazard activation latency. In b) a hazard/compromise affects an entity at time t_o and is detected at time t_d (time of detection). The time elapsed between t_o and t_d is the hazard/compromise detection latency. In c) a hazard hits the entity at time t_o and is removed at time t_r (time of removal). The time elapsed between t_o and t_r is the hazard/compromise removal latency.
Fig. 1. Contributions to security hazard/compromise latency: a) activation latency - a hazard hits a system entity at time t_o (time of occurrence) and an operation is performed at time t_a (time of activation); b) detection latency - a hazard/compromise affects an entity at time t_o and is detected at time t_d (time of detection); c) removal latency - a hazard hits the entity at time t_o and is removed at time t_r (time of removal)
It is worth noting that the three contributions may combine in a variety of different ways, thus leading to more complicated scenarios. This makes latency evaluation quite a hard task. Nevertheless, careful investigation of latency data is of foremost importance in many cases, since it makes it possible to:

• Capture the underlying mechanisms which determine the security level attained by the system;
• Get valuable feedback about the security bottlenecks of the current design;
• Evaluate the effectiveness of potential modifications and alternative strategies.

In particular, latency evaluation is of foremost importance in all situations where the resolution of disputes depends largely upon the accuracy with which the times of events are known. A typical example of such a scenario, in PKI based systems, is the resolution of disputes upon revocation.
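By way of example, the following C++ fragment shows how two of the measures defined above, MTTC and MTBC, could be extracted from a trace of compromise timestamps produced by one simulation run; in practice the estimates would be averaged over many runs. The trace values are invented for illustration:

// Extracting MTTC and MTBC from one run's compromise timestamps
// (invented data; in practice, average over many simulation runs).
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    // simulated times at which security compromises occurred
    std::vector<double> compromise_times = {12.3, 40.1, 41.8, 97.5};

    double mttc = compromise_times.front();          // time to the first compromise
    double sum_gaps = 0.0;
    for (std::size_t i = 1; i < compromise_times.size(); ++i)
        sum_gaps += compromise_times[i] - compromise_times[i - 1];
    double mtbc = sum_gaps / (compromise_times.size() - 1);

    std::cout << "MTTC = " << mttc << ", MTBC = " << mtbc << "\n";
}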
5 Hierarchical Simulation

The approach we suggest favors hierarchical simulation. Hierarchical simulation is based on analyzing the system behavior at different levels of abstraction, with a simulation sub-model associated with each level. For all levels, the workload might be a real trace file collected in the field, or, alternatively, it might be generated from a synthetic distribution. The effects of hazards injected at a given level are characterized by statistical distributions and hazard models (e.g., probability and number of hazards affecting a component, and their effects on the component behavior). These distributions are to be used as inputs for hazard injection in the higher level model. As a consequence, hierarchical simulation requires that:

• Distinct levels of abstraction be identified;
• Hazard dictionaries (i.e., a mechanism to propagate hazard effects from lower level models to higher level models) be defined;
• Experimental results from lower levels be propagated to upper levels.

If properly implemented, hierarchical simulation provides extremely detailed modeling of specific aspects at an acceptable computing cost. However, establishing the proper number of hierarchical levels and their boundaries is not trivial. Several factors must be considered to find an optimal hierarchical decomposition that provides a significant simulation speed-up with a minimum loss of accuracy, and in particular:

1. The system complexity;
2. The level of detail of the analysis;
3. The kind of security measures to be evaluated;
4. The strength of system component interactions (weak interactions favor hierarchical decomposition, whereas strong coupling hinders it).
Simulation for security analysis involves the injection and the propagation of hazards into the system under study at different levels of abstraction, such as the physical level, the system level, the network level, the application level, and the personnel administration level. We envision three fundamental hierarchical levels, which are illustrated in Figure 2. We believe these levels provide an efficient framework for accurate security analysis of a wide class of systems. The simulation, however, can be very time-consuming and memory-bound, since it has to track the propagation of hazards from lower levels to higher levels. There are several common issues that apply to hazard injection at all levels. The first issue is: what is the appropriate hazard model at the chosen level of abstraction?
There is no easy answer to this question. Only field data and experience are valuable guides.
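One way lower-level experience can nevertheless be packaged for use at a higher level is the hazard dictionary mentioned above. The following C++ sketch shows the basic idea, with invented effect classes and probabilities: system-level experiments are summarized as a discrete distribution of effects, which the network-level model then samples instead of re-simulating the details.

// Sketch of a hazard dictionary: lower-level results summarized as a
// discrete distribution of effects, sampled by the higher-level model.
// Effect classes and probabilities are invented for the example.
#include <iostream>
#include <random>

enum class Effect { None, KeyDisclosure, DataTampering };

int main() {
    std::mt19937 rng(1);
    // effect probabilities estimated from system-level experiments (assumed):
    std::discrete_distribution<int> dictionary({0.90, 0.07, 0.03});

    int disclosures = 0;
    for (int i = 0; i < 100000; ++i)                 // network-level model draws
        if (static_cast<Effect>(dictionary(rng)) == Effect::KeyDisclosure)
            ++disclosures;                           // propagate the effect upward
    std::cout << "key disclosures propagated upward: " << disclosures << "\n";
}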
Fig. 2. Hierarchical simulation for security analysis. Hazards propagate from lower levels to higher levels. At the System Level a hazard may be represented by an attacker tampering with sensitive data. This security breach may lead, at the Network Level, to the disclosure of confidential information. By using such information, a malicious user might be able, at the Application Level, to perform an unauthorized transaction
The second issue is: for a given hazard model (e.g. the disclosure of a private key) and hazard type (e.g. a transient hazard), where should the hazard be injected? A straightforward approach is to randomly choose a location from the injection space (e.g. all private keys of certificate subjects in the community). This scheme is easy to implement, but it has two major drawbacks. The first is that many hazards may have similar impact (e.g. the misuse of the private keys of ordinary users may have comparable effects). The second is that many hazard locations may not be exercised at all. An alternative approach is to inject hazards into a few representative locations under selected workloads. This technique can be used to evaluate the impact of locations or workloads on system security. Alternative injection strategies should be used, so as to provide a broad evaluation of the system.
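The two location-selection strategies just described can be contrasted in a few lines of C++; the injection space (a pool of subject private keys) and the representative locations are, of course, invented for the example:

// Random versus representative selection of the injection location
// (the injection space and the representative set are invented).
#include <cstddef>
#include <iostream>
#include <random>
#include <vector>

int main() {
    std::mt19937 rng(3);
    const int n_keys = 1000;                         // private keys of all subjects

    // Strategy 1: pick any location from the whole injection space.
    std::uniform_int_distribution<int> any_key(0, n_keys - 1);
    int random_target = any_key(rng);

    // Strategy 2: pick from a few representative locations, e.g. an
    // ordinary user, an RA operator, and the CA signing key.
    std::vector<int> representative = {17, 500, 999};
    std::uniform_int_distribution<std::size_t> pick(0, representative.size() - 1);
    int rep_target = representative[pick(rng)];

    std::cout << "random target: key " << random_target
              << ", representative target: key " << rep_target << "\n";
}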
6 Tool Architecture

The result of the above considerations is a simulation tool, called SECURE, currently under development at the University of Naples. SECURE is a powerful tool for security analysis, which supports hierarchical and hybrid simulation. It represents a versatile means of evaluating system security as early as the first design steps. It makes for effective testing, since it is possible to analyze the system under realistic operational conditions, including driving the simulation with real traces collected in the field. The hierarchical approach allows the behavior of components at a given level to be detailed in the lower level model.
The granularity of the simulated activities and the quantitative measures evaluated are refined from one level to another. The tool provides support to rapidly model fundamental components found in most PKI based environments, to represent functional relationships and timing dependencies between them, to inject hazards into system components, to investigate the effects of alternative policies, to mimic the execution of procedures for the detection and handling of security compromises, and to evaluate the effectiveness of different protection mechanisms and strategies. This makes it possible to extract quantitative measures characterizing the probability and the criticality of potential security breaches, and ultimately to evaluate the security level attained by the system. The system's ability to cope with security attacks is evaluated with respect to a number of different factors, and under varying load conditions. In the following, we describe the structure of the SECURE integrated tool. This structure is illustrated in Figure 3.
Fig. 3. The SECURE simulation environment. The simulation skeleton defines interactions between simulated system components. CSIM provides the simulation engine and the basic features to produce estimates of time and performance. The SECURE facilities (the Component Libraries, the Hazard Injectors, and the Tracing Facilities) are specifically tailored to addressing security-related issues
As shown in the figure, SECURE incorporates CSIM [8][9]. CSIM is a process-oriented discrete-event simulation package for use with C or C++ programs. It provides a convenient tool which programmers can use to create models of a system and to produce estimates of time and performance. By incorporating CSIM objects and features, SECURE is able to take performance-related issues into account. The SECURE simulation environment augments CSIM with a number of facilities specifically tailored to addressing security-related issues. The main components of SECURE are:
• The component libraries
• The hazard injectors
• The tracing facilities
6.1 Component Libraries

The component libraries provide a number of objects and features which are a generalization of those typically found in most PKI based systems. Since SECURE is intended for use by designers of real PKI systems, the names of the base classes have been chosen to be as close as possible to the standard ones (we did not want to burden the designers with new fancy names). The main object classes of the current implementation are:

• The Certification Authority (CA) – Simulates an entity that issues Public Key Certificates (PKCs) and Certificate Revocation Lists (CRLs). Certificate applicants may enroll (either directly, or via a Registration Authority) and receive PKCs, which convey identity information about the certificate subject. This is done in accordance with well-defined rules, as specified in the policy and in the Certification Practice Statement [10];
• The Authorization Authority (AA) – Simulates an entity that issues Attribute Certificates (ACs) and Attribute Certificate Revocation Lists (ACRLs). Certificate applicants may enroll (either directly, or via a Registration Authority) and receive ACs, which convey authorization information about the subject of the public key certificate pointed to by the attribute certificate [11];
• The Registration Authority - Simulates an entity that front-ends a Certification Authority service or an Authorization Authority service. It is in charge of authenticating certificate applicants, according to the enforced rules;
• The Relying Party (or Certificate User) - Simulates a party who acts (or is in a position to act) in reliance upon a certificate and its subject public key;
• The Subscriber - Represents a party who is the subject of a certificate and who is capable of using, and is authorized to use, the private key that corresponds to the public key listed in the certificate;
• The Repository - A database of certificates and other relevant information, accessible online;
• The Link - Comes in two flavors: the Generic Link object and the Secure Link object. The former acts as a conduit, and is the basic means of communication. The latter is an embellished version of the same object, which provides a secure communication channel between two entities.
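To give a feel for the granularity of these classes, here is a deliberately minimal C++ sketch of a Certification Authority object. It is not the actual SECURE class (which is built on CSIM and far richer), and all names and fields are illustrative:

// Deliberately minimal sketch of a CA component (illustrative only).
#include <iostream>
#include <string>
#include <vector>

struct PublicKeyCertificate {
    std::string subject;
    double      expiry;    // simulated time at which the PKC expires
    bool        revoked;
};

class CertificationAuthority {
public:
    // Enroll a subject and issue a PKC valid for `lifetime` time units.
    PublicKeyCertificate issue(const std::string& subject,
                               double now, double lifetime) {
        issued_.push_back({subject, now + lifetime, false});
        return issued_.back();
    }
    // Mark a certificate as revoked (a real CA would also publish a CRL entry).
    void revoke(PublicKeyCertificate& pkc) { pkc.revoked = true; }

private:
    std::vector<PublicKeyCertificate> issued_;   // stands in for a Repository
};

int main() {
    CertificationAuthority ca;
    PublicKeyCertificate pkc = ca.issue("alice", /*now=*/0.0, /*lifetime=*/365.0);
    ca.revoke(pkc);
    std::cout << "alice revoked: " << std::boolalpha << pkc.revoked << "\n";
}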
6.2 Hazard Injectors

The hazard injectors enable the designer to mimic hazard occurrences in the system components according to realistic scenarios. This is achieved by means of a utility class that has the capability of injecting hazards into other objects, thus providing the user with an external mechanism for injecting hazards into a large number of components. Such a strategy increases the control one has over the actions of the individual pieces. Since several independent external injectors can be created and used, this provides the means for simulating quite complex hazard scenarios. As an alternative to using an external entity, injectors can be incorporated in the objects. This provides the components with a built-in injection system, which greatly simplifies the simulation. It is up to the user whether to use the simple route, or to employ the more customizable route.

As far as hazard duration is concerned, we distinguish between transient hazards (i.e., hazards which disappear after some time), and permanent hazards (i.e., hazards which persist in the system if proper actions are not taken). A transient hazard occurs, for example, if a private key is disclosed. In this case, the hazard will automatically disappear upon expiration of the validity period of the corresponding public key certificate. The hazard may also be removed prior to the expiration date of the certificate, if the certificate is successfully revoked. A typical example of a permanent hazard is a breach into a system which hosts sensitive data. In this case a security hazard is present until the breach is detected and proper remedy action is taken.

SECURE provides many options to tailor how the injection process acts. For transient hazards, it is possible to set the hazard duration to a constant value, or, for random-duration ones, to set the mean duration and the standard deviation (for normal sampling). It is also possible to have the injector read hazard data from a file collected in the field. As far as hazard occurrence is concerned, different analytical models for both transient and permanent hazards are available. We can set the injection model to one of a number of predefined types, such as constantly occurring, exponentially based, Weibull-distribution based, or Erlang-distribution based. Again, it is also possible to have the injector read hazard data from a file collected in the field.

6.3 Tracing Facilities

To help the designer evaluate the security level attained by the system or system prototype under test, support has been incorporated into SECURE to extract quantitative measures from experimental data. The tracing facilities make it possible to monitor a number of events, and in particular:

• Hazard occurrence – a security hazard manifests itself in a system component;
• Hazard activation – a hazard, which was present in a system component, leads to a security compromise. A hazard activation thus corresponds to a compromise occurrence;
• Hazard/compromise detection – a hazard/compromise, affecting the system or a system component, is detected;
• Hazard/compromise removal – a hazard/compromise, affecting the system or a system component, is eliminated.

The tracing facilities also provide a number of functions to extract from the collected data the security measures and the latency information described in Section 4.
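The occurrence and duration options described above map naturally onto standard distribution sampling. The following C++ sketch of an injector interface is illustrative only (the real SECURE injectors are CSIM-based; the class name and methods here are assumptions), using <random> for exponential occurrence times and Weibull-distributed transient durations:

// Illustrative injector interface (not the SECURE/CSIM implementation):
// exponential occurrence times, Weibull-distributed transient durations.
#include <iostream>
#include <random>

class HazardInjector {
public:
    explicit HazardInjector(unsigned seed) : rng_(seed) {}

    // Time until the next hazard occurrence (exponentially based model).
    double next_occurrence(double rate) {
        return std::exponential_distribution<double>(rate)(rng_);
    }
    // Duration of a transient hazard (Weibull-distribution based model);
    // shape and scale are chosen by the experimenter.
    double transient_duration(double shape, double scale) {
        return std::weibull_distribution<double>(shape, scale)(rng_);
    }

private:
    std::mt19937 rng_;
};

int main() {
    HazardInjector inj(11);
    double t_next = inj.next_occurrence(0.01);       // mean 100 time units
    double d      = inj.transient_duration(1.5, 20.0);
    std::cout << "next hazard at +" << t_next
              << ", lasting " << d << " time units\n";
}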
7 Conclusions and Directions of Future Work

This work has presented a novel methodology for system security analysis, called simulated hazard injection. The approach consists of simulating the behavior of the target system while hazards, i.e. attacks on system security which may originate security compromises, are injected into its components. To the best of our knowledge, such an approach has never been proposed in the literature before. The methodology is augmented by the definition of a set of parameters which are suited for use as security measures. Among these, latency is particularly relevant: the latency of a security hazard/compromise must be evaluated in great detail in order to assess the security level attained by the system. The suggested analysis technique and metrics have been integrated in a simulation tool for designing PKI based systems. The tool is called SECURE, and it provides support to rapidly model fundamental components found in most PKI based environments, to represent functional relationships and timing dependencies between them, to inject hazards into system components, to investigate the effects of alternative policies, to mimic the execution of procedures for the detection and handling of security compromises, to evaluate the effectiveness of different protection mechanisms and strategies, and to extract quantitative measures characterizing the probability and the criticality of potential security breaches. This ultimately allows the system developer to evaluate the trade-offs of alternative design solutions with respect to a number of different factors. Future work will aim at:

• Demonstrating the potential of the suggested approach by applying the methodology and the tool to the case study of a real system;
• Combining the measures provided by the tool, in order to reflect standard criteria for product evaluation (such as, for example, ITSEC or the Common Criteria).
Acknowledgements

This work was supported in part by the MOSAICO project, in cooperation with the Universities of Naples.
References

1. Ford, W., Baum, M. S.: Secure Electronic Commerce. Prentice Hall Inc., Upper Saddle River (1997)
2. Atkins, D., et al.: Internet Security Professional Reference. 2nd edn. New Riders Publishing, Indianapolis (1997)
3. Iyer, R. K., Tang, D.: Experimental Analysis of Computer Systems Dependability. In: Pradhan, D. K.: Fault-Tolerant Computer System Design. Prentice Hall Inc., Upper Saddle River (1996)
4. Saleh, R. A., Newton, A. R.: Mixed-Mode Simulation. Kluwer Academic Publishers (1990)
5. Obal II, W. D., Sanders, W. H.: An Environment for Importance Sampling Based on Stochastic Activity Networks. In: Proceedings of the 13th Symposium on Reliable Distributed Systems, Dana Point, CA (1994) 64-73
6. Kaâniche, M., Romano, L., Kalbarczyk, Z., Iyer, R. K., Karcich, R.: A Hierarchical Approach for Dependability Analysis of a Commercial Cached RAID Storage Architecture. In: Proceedings of the Twenty-Eighth Annual International Symposium on Fault-Tolerant Computing (FTCS-28), IEEE-CS, Los Alamitos (1998) 6-15
7. Goswami, K. K., Iyer, R. K., Young, L.: DEPEND: A Simulation-Based Environment for System Level Dependability Analysis. In: IEEE Transactions on Computers, Vol. 46, No. 1 (1997) 60-74
8. Schwetman, H.: Using CSIM to model complex systems. In: Proceedings of the 1988 Winter Simulation Conference, ed. M. Abrams, P. Haigh, and J. Comfort, San Diego (1988) 246-253
9. CSIM18 User Guides (C++ version), http://www.mesquite.com/
10. PKIX Working Group: Internet X.509 Public Key Infrastructure Certificate Policy and Certification Practices Framework. INTERNET-DRAFT, April 1998
11. PKIX Working Group: An Internet Attribute Certificate Profile for Authorization. INTERNET-DRAFT, April 1999
Lazy Infinite-State Analysis of Security Protocols

D. Basin

R. Baumgart (Ed.): CQRE'99, LNCS 1740, pp. 30-42, 1999. © Springer-Verlag Berlin Heidelberg 1999

[The body of this paper, pp. 30-42 of the proceedings, is not recoverable from the source text; only the title, author, and publication data above survive.]
½
»
»
=
Ã
º
©
@
½
¼
5
Ç
5
¿
»
Ã
»
Â
@
½
@
À
À
ñ
A
½
ä
z
Ã
=
=
8
ñ
»
Â
Ã
Ã
À
@
º
À
Å
»
Å
z
»
?
8
¼
=
ï
Ã
»
p
¼
:
µ
Ã
À
r
»
À
Ç
º
Ã
»
6
½
½
½
:
5
À
d
¿
¼
´
º
»
|
=
Â
>
W
»
»
¨
Ã
v
»
¼
Î
»
Ã
=
º
¾
»
½
=
À
Ï
¿
Ä
Ê
á
8
¹
=
Á
º
»
5
=
@
»
Æ
¼
Â
À
»
@
Ã
:
À
=
¿
Â
Ä
5
Ä
º
Ç
5
Á
=
P
¼
í
º
R
À
@
Å
À
8
:
@
»
Ã
=
»
½
5
»
@
=
:
Ã
Û
Ã
º
»
Lazy Infinite-State Analysis of Security Protocols h
o
p
r
O
W
I
l
n
z
p
r
z
ê
q
g
q
M
Z
g
p
M
g
Z
p
k
R
g
l
k
R
z
l
h
z
h
n
n
î
Q
39
r
o
|
z
g
W
ò
ê
Q
o
|
z
g
î
q
g
M
î
q
g
M
Z
p
g
k
R
l
z
h
n
ò
î
n
W
ò
ò
M
i
ê
h
l
z
p
o
p
r
z
O
W
n
I
n
ê
h
o
î
p
r
M
Z
p
v
k
I
R
z
q
M
ò
M
R
q
M
î
Q
o
|
z
j
r
n
ñ
d
ñ
T
ò
î
z
W
Q
b
o
|
r
l
z
r
z
h
z
j
n
ñ
{
l
r
z
ò
ê
p
ê
ê
b
ê
î
M
r
l
z
z
p
z
ñ
ê
z
p
k
¬
z
W
h
z
p
l
j
d
«
z
b
k
o
r
H
r
z
j
{
l
r
î
l
z
g
|
r
ñ
ò
®
b
M
z
ò
ê
z
ð
î
n
p
o
q
H
M
I
z
j
{
l
r
î
l
z
g
|
r
b
ò
ò
M
p
M
o
p
r
M
o
H
p
r
o
p
o
H
j
n
j
n
H
g
I
h
H
g
ê
ê
ê
ê
î
ê
ê
H
g
M
d
î
g
d
r
l
r
l
K
p
z
M
p
z
I
p
I
M
p
v
k
z
n
g
r
ê
n
H
Ã
º
8
5
Â
½
Ç
»
í
À
8
=
:
¾
½
¾
8
º
5
½
8
Ã
½
º
8
»
À
Ã
º
ñ
º
»
8
8
ñ
½
»
½
=
Ï
=
g
b
å
º
½
=
½
»
5
8
½
Ã
¼
º
=
8
Â
º
8
¼
ò
º
Î
À
»
Ã
5
»
=
5
Ð
¿
¿
Ä
Ã
Ä
Â
Ê
À
»
½
º
»
½
ñ
½
8
5
½
Â
g
j
ñ
î
î
P
{
z
k
r
d
ò
î
Q
o
k
R
z
k
ò
ò
ò
ò
ò
ò
ê
p
î
Q
o
k
R
z
k
g
ò
î
Q
o
k
R
z
k
f
ò
ò
ê
ë
Q
o
z
ê
î
Ã
Â
k
ñ
ê
R
z
k
f
ò
ò
ò
ê
ë
H
x
=
Ã
M
I
ò
M
8
@
8
¿
¼
8
5
À
À
8
8
r
p
Ã
Ã
o
l
r
z
H
k
ë
r
=
z
z
M
î
Á
»
Ã
Â
=
Â
W
l
h
z
z
g
¯
|
»
»
Ã
Ã
@
½
º
Ã
ò
T
ê
g
ò
r
Ã
@
º
»
À
5
Á
Ã
½
Ã
8
º
¼
=
»
À
Ã
@
8
½
¼
»
8
5
8
8
²
Î
@
@
À
»
¿
»
8
5
:
5
5
½
Ã
Ã
Ã
:
=
5
À
¼
¿
»
È
½
Ã
Ä
5
=
Á
½
Ã
@
8
=
¼
Ï
Ã
5
8
5
@
Â
Ã
Ä
¼
º
º
»
5
Ã
Ä
8
@
Ã
8
Ã
º
»
»
À
=
¼
5
Ã
»
=
Ã
5
»
Ï
@
À
Â
Â
=
»
8
Ï
=
º
½
»
¼
8
Â
Î
Ã
Ê
½
5
½
Á
Ã
5
Á
É
»
»
À
Î
:
:
½
Ä
Â
Ä
Ã
º
¿
Â
5
Â
=
=
6
@
¿
Ä
¾
8
À
8
½
5
8
»
»
Á
5
½
=
5
½
Ã
@
½
»
¿
Â
»
@
º
»
8
:
½
Ã
½
Ä
=
Ç
À
Ã
=
@
»
8
À
É
Â
ñ
¿
½
Ë
½
»
Ê
@
@
Á
»
@
Å
À
À
¼
º
=
8
¼
¼
@
¾
5
Ã
=
»
8
»
½
Ã
¹
=
Ê
À
»
6
Á
»
:
5
5
5
Ê
=
:
8
¿
5
½
÷
=
8
½
º
º
=
8
¿
Î
»
Ã
»
¾
Ã
5
@
¼
:
½
:
À
»
:
Â
À
¼
5
½
@
$
@
8
¿
5
=
º
=
»
Ã
¼
º
Ç
@
8
Ã
8
º
5
Ã
»
½
=
5
Â
@
=
»
:
¿
8
»
Ã
@
Ï
Â
º
Ã
:
8
º
»
»
Î
Ä
½
º
¹
Î
Á
6
¿
Ç
È
@
À
8
8
5
»
»
¼
:
À
Ã
Ê
Ï
¼
8
=
Ã
6
À
Ã
@
@
À
=
À
@
À
Ï
»
½
¼
Ä
ð
:
Ã
@
»
º
Ã
º
5
¼
Ã
@
»
Ã
¿
½
Ï
À
5
Â
Ã
@
Î
8
¿
»
º
Á
5
À
»
Ä
5
6
5
=
Â
º
5
Â
º
Ã
Å
¾
Î
=
Ã
À
@
Ã
¿
@
¾
¿
Ã
é
Ã
8
=
Ç
Æ
À
5
@
À
»
½
\
Ã
8
=
:
=
Ê
:
¿
=
½
=
º
\
5
=
¼
$
À
¼
»
@
Ã
»
º
À
½
=
»
À
Á
»
À
À
O
º
Ä
8
Ã
»
:
Ã
Ã
=
5
¼
@
=
5
6
Ã
»
:
=
º
:
Â
¾
º
½
»
5
º
À
Â
=
F
Ê
=
5
½
Ã
@
@
¹
À
5
º
»
Ê
8
Â
¼
À
¹
\
Ã
»
8
Ç
»
=
6
½
Æ
½
Ê
¿
Á
»
¼
8
:
À
5
À
Ã
¼
6
=
½
À
À
@
:
:
Á
¼
8
»
¼
=
Ã
»
½
²
À
»
»
=
À
@
=
Ä
Ä
¼
5
8
=
»
À
½
F
@
Ã
»
Å
b
5
5
»
º
5
Ã
»
¼
½
¾
B
»
Â
º
½
H
Ç
»
»
5
O
½
º
5
Ã
@
Ã
¼
\
½
5
Ä
À
½
8
»
¼
=
¾
8
½
º
5
:
@
À
º
Ã
Î
»
8
8
º
Ã
8
@
»
¿
Å
Ä
½
½
¿
À
½
»
8
Ã
@
½
À
À
=
Â
=
@
@
»
Á
=
5
¿
º
½
Ä
»
»
º
Ã
¿
¼
O
Ã
»
8
8
Ç
¼
=
º
Ã
R
Á
=
Ï
Ç
»
5
»
8
½
\
6
Ï
Î
Ç
:
Æ
º
À
º
À
Å
»
Ã
Î
O
¾
5
Ã
¾
Î
»
Å
Â
@
»
=
B
@
8
À
Ã
:
Ï
º
Ã
=
»
½
¿
Ã
@
À
Á
½
8
8
Ä
Á
Å
=
º
5
Å
5
»
Ä
¿
5
Ã
Ï
À
»
=
º
Å
=
»
T
:
5
=
À
=
»
¾
Ã
¼
¿
@
¼
8
Â
½
Ç
@
½
Â
À
=
À
5
:
b
¼
»
Å
»
B
@
»
@
½
»
À
º
À
8
º
½
ñ
Ã
Å
½
Ã
½
À
8
@
ñ
Ï
=
5
H
Ï
=
Ã
5
»
¼
B
@
»
º
È
=
½
8
K
½
8
º
@
=
O
Æ
Ã
Ã
¿
»
Ï
Å
5
Ã
º
Â
:
@
»
¹
»
5
=
5
»
Ê
=
8
»
»
¿
Î
8
@
Á
=
À
8
Î
O
½
6
5
½
=
½
º
Ã
8
»
½
Ç
Ã
¼
»
6
O
z
ñ
b
h
R
»
=
»
:
À
»
¼
Á
½
Ã
:
=
Ä
»
»
Á
5
º
z
»
»
z
M
¼
º
½
@
@
6
À
º
º
=
¿
ð
Ã
¾
õ
»
»
5
Â
º
Ã
Ç
Ä
3
Ã
8
À
8
h
î
p
S
l
@
¼
Ã
»
»
=
5
»
½
W
j
ë
ê
D
»
À
»
5
Ç
¾
º
R
Á
¿
»
¼
H
@
»
ñ
:
»
@
¿
5
h
f
g
k
F
=
º
8
Ç
À
À
»
¼
@
È
:
»
Î
¾
Á
@
g
ñ
ñ
W
z
I
g
D
À
½
º
»
À
½
À
¿
À
Ã
»
H
:
Ã
:
Å
6
½
=
@
½
:
Ð
Ï
¿
=
»
r
g
d
M
p
O
Ã
À
=
=
»
À
5
º
Ã
@
5
=
Ã
Ã
Ã
8
¿
»
½
=
Ã
@
À
Ã
Ä
Ä
Z
K
Ä
=
Â
½
¿
Ã
Å
8
Ä
»
Ã
5
Ã
Â
=
8
»
¼
¿
À
Á
@
Ï
¿
Ã
»
ï
@
Â
À
@
»
Â
Å
¿
Ã
6
@
Á
5
H
p
\
Â
Â
5
À
ê
z
»
»
Â
Â
Â
¼
»
Ã
»
À
»
Á
=
=
6
¿
À
½
À
8
»
Å
=
¼
»
6
»
@
»
Ã
8
5
Á
8
Â
r
@
È
5
Â
¿
À
8
º
»
=
W
g
z
M
O
z
5
Â
=
Ã
À
½
Â
À
=
¾
À
Ã
Ã
Ã
»
Ã
8
=
À
Ç
Â
8
À
°
º
@
Ï
:
½
¼
»
8
=
À
6
Ï
Á
Ã
5
Ä
5
j
ê
£
¼
Ã
ñ
K
Á
5
¿
Ä
Á
»
»
»
Ã
Â
é
½
»
»
Æ
º
»
½
º
À
5
Ê
5
:
»
Á
½
½
8
½
Ã
¿
:
@
Ã
¾
Î
»
=
=
Ã
8
ô
=
»
º
Ã
Ç
½
¼
=
Î
º
Ã
ó
Ã
I
g
r
k
î
h
ë
h
î
ñ
W
b
K
z
z
M
k
S
j
r
ñ
K
î
ñ
I
k
î
f
f
ñ
l
ñ
h
I
r
g
g
I
î
d
ê
H
ñ
h
f
I
d
I
g
M
z
M
n
î
z
r
H
j
M
r
r
î
z
H
q
M
r
r
p
n
z
H
o
M
r
À
º
@
Ê
¾
¾
=
40
D. Basin
»
»
8
Ã
Å
»
½
À
Ç
Á
5
Ã
À
@
»
:
:
=
»
Â
Ç
Æ
¼
À
À
Á
º
=
@
»
Ã
¼
Á
:
»
È
=
Ã
»
:
Å
º
Á
:
»
Â
»
8
5
¿
Ã
Ç
¿
@
8
º
½
Ç
ñ
À
6
¿
½
=
Â
Ã
»
»
:
=
Ç
=
»
»
5
»
5
@
½
Ã
¼
½
5
¼
º
Ã
º
8
Ï
=
À
8
¿
@
Ã
5
À
º
¼
Å
Æ
»
8
À
Á
Ã
»
Á
=
½
@
8
5
@
Ã
:
¾
8
=
=
6
Ã
»
5
:
Î
@
»
í
:
»
Î
5
¿
ô
½
»
Î
:
Ê
@
Ê
=
8
Ê
@
»
5
¾
½
=
¼
»
º
5
5
½
¼
Â
º
¾
À
Î
½
8
+
Ã
º
3
Ç
Î
Ã
=
º
Ê
5
Ã
Ê
S
j
|
j
|
h
M
h
j
p
k
z
|
r
k
M
n
p
z
k
z
W
h
ê
ê
z
j
l
|
î
ê
n
z
n
p
o
p
z
r
z
g
Q
ë
M
W
r
|
z
h
F
g
W
î
r
z
k
k
g
p
z
z
l
|
r
l
j
z
r
j
ò
M
p
î
n
W
h
k
k
M
p
z
|
r
k
T
x
F
ë
t
t
]
]
ê
n
k
ê
|
z
g
g
ì
r
|
g
p
r
z
z
h
r
r
l
z
ò
k
î
z
q
g
k
F
W
h
g
z
M
]
z
p
î
j
z
|
h
h
W
h
z
F
]
ò
r
k
î
k
x
ñ
ò
M
p
z
|
ò
W
ò
¹
º
»
½
»
À
Â
Ã
5
½
Ã
5
5
º
¼
Â
À
Ç
½
r
r
g
R
s
g
r
r
g
R
s
n
z
p
g
á
g
W
h
z
k
r
h
z
k
r
h
r
l
z
F
Ã
½
À
À
@
Å
Å
¿
5
À
Å
»
Ã
8
¼
Ð
{
5
ë
P
Î
=
Ã
½
Ã
»
=
5
8
Î
¾
º
8
»
@
½
º
Ã
¼
½
@
8
=
=
À
Ç
¼
À
»
Ã
@
º
É
Å
Ã
Ä
Â
»
6
Ú
»
@
=
@
ï
À
@
=
À
¾
»
Ã
Ã
@
@
5
»
ê
Ê
8
Á
É
=
@
Ä
Ê
8
5
:
Ã
½
5
=
=
Â
5
5
:
6
»
5
Â
Â
Ä
Á
À
½
º
Â
Ï
8
Ç
Ã
Ã
À
Î
6
À
8
»
º
»
Ã
À
5
»
Å
½
º
ð
5
½
=
=
»
º
Á
Ç
6
ð
+
¼
»
»
8
è
»
»
Â
Â
»
á
Â
º
º
5
¼
»
5
Ã
»
@
=
º
¾
ð
=
Ã
Ç
8
º
»
¼
8
½
¼
ð
=
8
º
Â
@
8
»
À
5
5
Å
Î
8
½
º
Å
Â
@
8
Â
Ç
¿
5
»
Ï
5
5
º
Ã
@
À
½
Ï
¼
Î
½
Ã
À
Ã
½
5
=
»
Å
¼
Ä
»
=
»
À
½
5
8
@
»
@
=
¿
Ã
»
¾
Å
5
Ç
8
¼
5
=
=
Ã
5
¼
»
º
È
@
|
»
Ã
¼
¾
Ã
Ã
Á
º
5
»
»
5
Ã
5
½
Ï
»
Ã
:
@
»
Ã
5
Ã
8
=
=
º
å
6
»
Å
Ã
@
»
Â
À
Ê
À
5
¼
Ã
¼
$
á
+
À
=
Ã
º
Ã
=
¿
À
Ï
=
Â
»
Ç
5
Ú
ô
Á
È
5
8
¼
5
¼
Â
ç
º
8
»
Ä
Å
Ã
:
½
ð
À
½
Á
»
¿
5
:
=
¿
»
Ä
»
Â
»
À
½
Ã
Ã
=
Â
À
5
¼
Î
»
¼
=
Å
ð
Ê
8
@
¾
=
¿
=
Ã
À
8
Ú
5
@
Ã
:
»
À
Ã
½
Ç
Ã
8
»
»
@
Ã
5
=
»
5
=
½
ð
8
»
5
Ç
Ã
»
@
8
¼
8
Ã
º
Ç
½
¼
½
Ã
½
»
Â
»
Á
½
À
5
5
½
8
¿
=
8
Ã
»
½
½
¼
=
Ã
=
5
»
5
5
Á
@
º
¿
¿
@
Å
Ã
@
Ã
À
5
Ê
z
f
î
k
î
]
¼
=
=
8
g
ñ
Â
8
¶
h
f
8
½
=
8
=
5
À
Ä
½
»
»
:
Ã
{
Ã
·
I
8
5
Ã
»
8
¿
Æ
Â
3
À
À
Â
½
Å
@
5
+
½
Ä
:
Á
¿
Ã
¿
8
À
¼
Ä
¿
Ã
6
¼
|
»
»
Á
=
8
»
Î
ê
H
½
Æ
5
»
@
=
Æ
¾
ê
z
Ã
:
@
À
º
»
Â
]
î
ê
É
À
5
F
:
½
À
Ã
Ã
f
l
¼
¾
@
5
»
5
Ã
»
Â
À
@
»
g
j
Ã
8
»
Á
»
@
@
¾
@
À
½
@
»
Ï
¼
Á
8
½
@
Ã
ð
½
»
Î
8
»
8
Á
=
»
Ã
½
½
Â
Ã
=
½
Ã
º
8
5
¿
8
8
Ã
5
5
:
=
Ï
8
»
Â
5
@
¼
¿
Á
Ç
=
½
Å
Ã
½
À
»
:
Á
À
8
¼
@
À
=
À
ê
K
p
I
h
·
z
M
r
k
g
f
r
W
ñ
h
r
î
p
¸
Q
o
k
¸
R
g
k
z
k
g
W
f
I
ò
m
z
ò
ì
ò
r
p
z
ò
W
h
ê
z
g
r
r
g
R
s
r
p
z
î
H
î
g
f
I
h
ñ
f
ê
b
ê
R
f
S
z
g
g
r
r
g
R
l
z
g
5
¿
»
î
¿
Â
8
Â
»
»
F
H
g
H
h
I
h
I
h
H
g
I
h
º
5
»
¼
o
P
Î
P
j
½
å
R
5
¼
Â
H
»
8
¼
j
R
H
o
Î
»
M
î
»
=
5
Ã
5
K
p
:
½
Ã
Ã
=
p
K
p
À
M
R
ñ
ê
î
k
f
O
g
ñ
j
¸
p
î
¸
Q
o
R
ê
k
R
z
ê
d
R
ñ
ò
¸
î
¸
Q
o
k
R
R
a
z
k
ê
g
f
¸
ñ
ò
¸
ò
ò
R
ì
a
z
h
ê
p
f
z
h
r
ò
ê
ò
z
z
h
k
r
R
r
z
z
k
h
p
f
z
ò
î
h
r
g
k
g
W
m
g
î
h
z
z
h
H
M
I
r
p
ò
ò
p
5
I
M
I
Ã
@
:
Î
Ï
P
À
¿
»
¼
5
j
@
@
À
Î
Ã
À
Â
¼
Q
o
8
k
=
À
Ã
Â
O
o
R
Ï
Ã
Ú
ñ
º
½
P
Ç
î
ò
ò
À
:
»
Â
É
¼
º
»
¼
È
Ê
¹
º
»
¼
À
Ç
Ç
5
@
:
P
j
R
W
R
ë
=
=
í
z
ò
î
o
Q
¼
Ã
Ê
Ã
î
º
Q
k
o
À
¹
Ç
¿
º
»
Á
»
½
Ç
8
»
:
Ã
=
5
Ã
Á
:
Â
Â
»
8
À
@
Ã
À
À
ë
Å
5
Ã
Å
Ã
Ã
Ã
º
5
8
¼
º
»
=
ñ
=
È
À
»
½
5
=
½
@
Ã
Ã
¼
º
º
»
Ê
î
Q
»
Â
@
ò
î
º
Á
8
ì
R
Ã
»
@
ò
z
»
½
Ä
j
z
¼
5
Æ
r
k
½
»
Ç
:
W
À
º
è
»
k
o
Å
Ã
»
ñ
P
ò
6
=
8
z
Q
+
=
o
R
k
k
R
z
R
z
ë
z
ë
ò
b
ò
ò
ò
ò
ò
ò
ò
Y
Y
ò
Y
Y
]
»
Ã
º
Ã
{
ò
ô
5
r
b
»
=
@
k
p
b
:
Ã
»
z
z
z
=
{
j
R
»
Å
:
î
P
g
k
@
À
8
p
î
î
Q
j
:
½
@
Ã
g
@
Á
À
=
p
z
î
î
@
À
R
5
Ã
8
½
O
j
Î
»
=
ñ
g
I
»
½
½
Î
î
O
»
=
»
Â
I
W
M
f
Ï
½
6
M
½
8
À
î
H
Ã
º
¼
f
o
:
¼
5
À
H
o
Z
»
=
r
r
r
ð
8
Ã
r
M
»
8
º
À
M
:
º
p
k
º
@
½
I
¿
Ã
h
o
@
Ï
Z
p
I
Q
Ç
»
r
K
î
=
¿
M
î
I
8
Î
»
K
I
z
M
f
½
p
r
ê
h
r
8
Ã
½
î
K
z
s
=
Ç
½
»
I
M
f
ê
î
R
¿
5
¿
:
I
ò
À
»
»
»
î
W
Z
½
p
k
r
q
»
i
½
K
¸
ò
g
¼
»
Ã
À
f
z
I
º
Ã
½
»
k
r
z
r
M
=
5
º
8
s
Ã
z
o
f
M
Ã
È
R
Z
W
H
j
R
W
r
z
h
z
¿
»
¼
z
g
»
À
º
é
I
Z
g
¹
W
M
Ã
Ã
É
P
H
H
»
Ç
º
º
p
ê
g
Ã
h
ê
Ã
3
z
z
r
Å
¼
5
h
+
À
5
º
I
I
g
½
:
g
H
ì
ò
Â
r
d
m
p
Â
g
Ã
I
r
5
=
Ã
»
ì
h
@
W
d
º
|
»
5
»
j
8
Ç
=
N
Ã
|
Ã
î
g
î
8
r
k
s
k
¸
S
h
h
î
b
Ã
»
»
@
¿
Ã
á
=
Ã
Ï
º
8
5
Ã
º
Ã
¼
Ã
º
À
@
»
=
é
Ã
¿
8
Ã
Ä
Á
Ê
Ã
¹
»
Ã
º
»
º
»
é
5
¿
Ã
Ä
Ã
Ã
5
5
¼
È
È
Ê
»
=
ï
@
5
Ã
:
6
º
5
8
@
=
5
Ã
5
Ã
¾
É
»
Lazy Infinite-State Analysis of Security Protocols À
Å
Ã
¿
5
º
=
8
=
=
8
Æ
@
Ä
¾
=
À
Ã
@
5
½
å
Ã
Â
8
8
@
¼
¾
»
ê
5
=
@
À
Ç
Ã
»
=
º
=
»
5
½
¾
½
»
Á
Ê
@
À
º
Å
»
Ã
º
@
»
<
¿
À
½
Æ
À
½
Ã
»
À
=
¼
¿
À
À
Â
Ï
@
8
:
=
Ã
º
=
Ï
8
Ã
À
Ç
º
»
º
8
À
=
Ã
À
º
Ï
»
½
@
5
@
¾
À
»
@
@
¼
Ã
»
Î
Ú
º
Ã
41
»
º
8
½
½
»
<
:
=
À
Ã
Æ
»
Î
¿
á
Î
z
å
Â
é
8
¼
¿
5
º
Ã
À
5
Ä
Ã
Ã
»
Ú
Ã
5
Å
¼
»
Ï
¼
Á
º
=
»
Ç
Å
Ç
½
Â
»
¼
5
5
½
Æ
=
Â
»
¼
Á
Ã
¼
Ã
º
º
Ï
5
À
5
½
½
Ã
5
:
Â
8
Ê
@
:
@
¼
8
8
ô
6
î
Æ
Â
Î
»
Ã
¼
¼
»
»
º
»
<
À
:
¼
À
@
»
»
@
@
Ã
8
Ä
@
º
º
Ã
»
Ã
Ï
Ã
º
Æ
»
@
À
:
:
=
é
5
Ã
À
Ê
=
5
»
@
º
º
"
»
»
Ã
¼
»
=
8
=
@
:
Æ
=
º
å
Ã
¿
º
¼
Â
»
Î
5
8
8
È
¼
À
»
»
¿
º
Ú
Ï
=
Ã
½
ñ
½
À
»
Å
=
Ã
@
Ã
=
Å
¼
@
º
À
À
À
Ã
5
À
¼
»
Ã
¿
@
Â
»
á
5
8
=
Ã
Ã
À
Ê
Ã
À
Ã
º
¹
5
º
¼
À
»
8
È
»
Ï
=
½
»
Î
5
È
Ê
5
À
ñ
»
Å
=
=
Ï
Á
Â
½
Â
À
8
@
¾
5
=
»
»
@
5
»
À
Â
@
Ã
5
Ã
R
Á
5
º
Ã
:
8
»
À
Î
8
Ë
@
÷
5
=
»
8
Á
À
À
Á
½
Â
»
»
½
:
¿
À
»
¿
8
½
5
:
À
»
Ã
6
8
½
@
º
Å
¾
@
Ã
è
»
5
À
Á
º
Á
Ã
=
@
»
½
Ã
À
:
6
@
À
ð
À
ñ
»
@
Ã
º
½
:
Á
¾
½
Ê
5
5
Ç
Ã
Æ
8
6
Â
@
»
Ê
¾
»
º
»
Ë
8
5
Ã
ñ
À
Ç
@
Ã
½
Ã
8
À
Â
Á
ñ
:
Ã
8
À
8
:
@
Å
»
Á
5
º
¿
¼
»
È
Ã
Â
Ã
¼
Ê
º
Â
»
Ä
¼
8
À
È
Ã
8
º
Å
@
¾
À
Á
Ã
Ê
½
8
À
Ç
:
½
:
¼
»
Â
»
À
@
Å
=
À
=
½
Æ
À
Á
¼
»
Ç
=
8
º
Ã
8
Â
Ç
:
À
½
¼
8
8
5
Â
¼
Ä
8
@
¾
8
5
Ã
»
=
Ï
Ã
@
8
»
@
Ã
¼
@
Ã
À
»
:
»
Â
º
»
5
¼
»
¿
À
Ï
º
=
Ç
Å
Ã
=
Å
=
»
»
=
:
À
À
»
À
»
Ã
á
8
»
À
»
=
º
6
¼
=
Æ
Â
R
»
@
Â
À
º
¿
5
Ã
»
Æ
@
¿
º
8
Ã
º
»
À
Â
À
í
=
»
Á
È
»
=
=
È
8
Ã
½
À
Å
8
Â
Ã
8
=
¿
Á
»
Ã
Â
Ï
=
Ã
Î
=
º
8
È
¼
À
Å
Î
Ã
Î
8
»
=
»
»
Ä
Ã
»
8
6
Â
Â
=
@
Ã
Ê
Â
À
Ï
:
»
:
¾
¿
=
Ä
Â
8
Â
5
½
¼
Â
À
»
½
º
»
ñ
º
=
Â
¼
Á
@
=
Á
5
º
»
@
5
»
5
¼
Á
=
Ï
¾
8
8
8
Æ
À
Æ
Ã
»
Î
À
:
Â
Ã
Ã
Ê
Å
À
»
8
¼
5
:
»
»
8
À
Ï
Â
:
=
Ã
½
½
Á
=
Â
8
5
@
5
Á
½
¼
6
»
»
:
À
Á
8
»
¾
½
@
Î
Â
º
Ú
¿
6
Æ
8
»
À
5
@
¿
¿
Ï
º
5
Ç
Ï
=
»
:
É
»
5
Ã
º
5
=
=
Î
»
¼
5
Ã
@
Ã
»
º
À
Î
½
Ã
8
Ê
»
»
Á
=
Î
8
º
½
¾
Á
@
Â
¼
Ã
Á
Ã
8
¿
÷
À
¿
Î
½
Á
»
Á
Á
5
º
»
=
»
À
½
5
»
»
Î
=
½
/
Æ
Ï
¼
»
=
ô
5
½
=
Ä
»
=
»
À
8
@
¾
½
À
½
=
À
5
Á
Å
8
8
8
Á
Ñ
º
Ã
Ê
À
Ê
Ã
½
5
¾
=
Ä
È
¼
¼
@
Â
Ã
Ã
=
Ç
õ
8
»
º
Â
=
È
:
¹
Â
»
½
À
Â
õ
¿
Ã
À
Ç
5
½
»
ñ
5
À
8
½
½
:
»
ô
8
5
Ï
Å
À
5
=
º
À
Å
Ã
»
¾
À
º
¼
¼
¼
Â
Â
À
À
8
Â
¼
8
Ï
½
¾
À
Æ
¿
»
@
Ã
5
»
»
6
8
À
Ã
¿
6
»
@
8
½
»
½
8
Á
Ã
8
º
8
:
»
½
Ã
Ã
½
»
Â
6
Ç
Á
Á
=
Á
5
Å
Ã
8
¿
=
Á
Ã
Ç
Ä
»
É
Å
Ç
¿
½
º
¼
5
Ã
À
ö
À
»
=
=
Á
»
½
=
À
@
¿
:
Â
¿
@
Ã
Ã
À
¾
@
Â
¾
5
½
8
º
¹
8
»
Á
8
À
@
Â
8
Ç
Ã
Ç
8
È
:
5
5
=
Î
À
À
»
ð
¿
»
¼
Æ
Ç
Â
Ç
»
À
Á
º
»
À
Â
Ã
8
º
½
Å
Ê
½
¾
Á
½
=
»
À
Ã
À
»
Ç
Ç
5
Â
À
¼
½
Å
8
Ã
½
»
À
=
:
@
¿
=
Å
@
8
Ä
Å
»
Á
½
ð
À
À
½
½
½
=
Ã
È
Ç
»
¿
À
»
8
À
5
=
¼
Ã
Å
Å
Â
@
Ã
=
»
5
6
¼
@
8
»
=
Â
È
@
=
:
»
À
5
Ä
»
»
Ç
5
º
Ã
À
»
»
½
8
Å
Â
8
=
¼
½
º
:
Ã
5
À
:
½
8
:
¾
Å
5
5
Ã
À
6
:
À
5
Ã
@
¼
Ã
:
Ð
»
Å
À
»
¼
»
»
¼
@
8
½
½
Ê
Ç
½
À
Ã
Ä
8
À
»
Ã
Î
¼
@
»
5
=
Ã
½
À
À
»
Ï
º
5
»
¾
½
Á
Ã
Ç
Ã
Ã
Ã
8
¿
6
=
5
@
»
À
Â
Á
À
:
Á
5
Ã
Î
¿
8
À
=
À
Ê
Ç
8
8
»
6
:
»
»
À
@
@
=
8
5
Î
È
=
Ã
=
Á
@
º
½
Ã
Ç
À
»
»
¼
»
8
8
ì
»
¼
Ã
Ã
½
=
8
:
Â
½
½
À
»
8
5
Ã
:
¼
=
À
@
½
@
º
»
À
Å
À
Ã
½
Æ
Æ
=
¿
À
º
Æ
»
Á
8
Ã
Ä
"
8
»
¾
À
»
Â
¿
À
¿
º
½
¼
Á
À
8
=
Ã
¼
»
Â
»
=
5
»
¿
Ç
Å
À
:
¼
Ã
:
Ç
»
¼
=
À
Ï
8
»
Ç
@
¾
Ã
À
é
½
Â
@
8
»
»
Ï
N
8
Ã
º
Å
@
5
Ã
Ê
¿
å
5
À
:
Ä
½
8
=
À
@
=
»
»
»
5
Á
5
½
½
:
º
Ç
@
À
@
»
»
À
Ã
À
6
º
¾
8
Ã
@
»
P
»
@
6
@
À
5
5
½
8
À
8
Ã
¾
»
»
:
»
@
À
½
»
½
Ã
<
»
Ã
8
½
@
»
½
»
Ç
=
Á
»
¿
»
8
Ç
Ä
:
Ê
¼
Ã
»
8
5
Ã
¾
º
=
À
»
8
¾
½
Ã
½
Î
Á
í
Â
¼
=
»
º
¾
¿
=
5
À
Á
±
=
Â
:
5
»
Â
»
=
À
È
8
À
Ã
Ã
Â
»
6
Ç
Ã
È
º
@
À
5
5
Ã
8
»
8
Ã
Ã
»
½
=
»
À
Ã
Å
»
¼
º
½
@
¿
´
5
¼
8
Ã
8
Ã
¼
8
»
=
Â
:
8
Á
Á
»
:
@
½
Á
À
½
@
º
=
Ã
»
À
½
À
Á
=
¼
Á
»
Ä
»
5
À
º
½
:
÷
¸
»
Ã
Ç
Á
å
º
Ç
¾
À
»
»
À
¶
À
@
5
À
Å
=
Ï
Æ
¼
Ç
ð
Î
A
Ç
º
8
@
¹
¼
¼
¼
5
»
8
Á
¾
·
»
¼
º
8
Á
Î
½
Ã
Ï
Ã
º
È
½
8
á
»
º
@
Ã
5
Ã
¼
¼
Á
±
Ã
=
½
5
@
Ï
À
´
5
À
8
À
¿
º
Á
5
½
5
Æ
5
5
Î
»
@
Ã
º
Ã
5
Ã
@
¾
Ã
=
Ä
»
Á
@
½
é
Â
Ã
=
Ï
5
½
8
¼
º
Â
=
º
¼
=
¿
@
Ã
=
»
»
À
5
8
º
Á
½
8
º
Ç
º
º
Ã
Ã
Ã
½
Á
Ã
ö
8
¿
À
Á
=
¹
²
=
È
@
5
=
¾
»
È
8
Â
Ä
@
¾
Ê
¿
?
À
?
³
?
±
·
?
¾
¿ Â
Ã
Å
B
\
O
D
R
B
T
B
Ã
¬
\
\
T
S
O
R
b
O
T
S
S
H
B
b
T
F
T
B
D
R
S
K
F
\
O
A
B
Æ
û
ù
ù
Ç
°
ý
È
¯ É
É ÿ
ý
ÿ
ý
Ê
û
ü
µ
W
[
R
K
S
\ Â
Ë
Ë
R Ë
M
Ì
Í
Æ
W
T
b
\
D
Î
Ï
Ð
Â
O
H
B
b
\
O
Ñ
\
O
T
b
W Â
Ò
Ò
Ó
F
_
R
O
F
R
D
\
M
T
S
H
H
T
O
H
F
Ô
R
\
D
T
F
F
T
U
W
R
K
O
F
H
O
D
F
\
R
O
O
\
D
H
D
T
F
F
T
U
«
´
R Â
-
R
B
S
F
\
O
R
Å
b
\
«
H
\
Å
¢
H
a
D
\
B
D
H
D
S
\
D
D
T
b
\
F
R
F
d
T
B
´
Î
F
\
B
d
D
D
F
\
R
B
F
D
R
F
D
\
F
\
S
\
D
D
H
D
T
B
b
\
-
B
\
\
R
\
Õ
T
B
T
B
B
R
F
F
\
\
U
F
T
F
R
K
D
B
\
B
\
O
D
R
\
K
O
O
S
F
R
\
\
S
H
\
D
B
b
D
W
T
F
b
\
\
H
D
\
B
F
D
F
\
R
D
F
F
\
O
\
D
\
W
D
T
B
R
H
H
B
F
\
R
\
\
a
D
\
B
R
F
S
T
F
F
\
O
\
F
\
O
H
F
R
S
\
D
M
O
R
S
R
a
R
O
F
\
d
Å
F
T
F
S
T
F
F
\
O
D
F
R
Å
H
\
H
D
F
T
F
H
F
H
D
Õ
H
B
F
\
M
R
O
S
T
F
R
M
D
F
\
F
R
M
R
O
\
O
O
K
B
H
F
F
\
d
D
F
¤
O
\
\
H
B
\
H
F
H
D
W
D
\
T
B
D
\
O
D
H
F
D
F
\
42 Î
D. Basin
×
H
T
\
K
O
O
R
D
W
×
T
O
F
H
B
Å
a
T
H
W
T
B
Ã
R
b
\
O
¬
\
\
T
S
Å
R
b
H
R
M
T
K
F
\
B
F
H
T
F
H
R
B
Õ
¯
Ø
É Æ
Ù
Ê
È
µ
µ
ÿ
°
û
Æ
û
ù
Ç
°
ý
È
°
ý
ù
W
Û
´
« Â
Û Â
Ð
Ü
Ï
W Â
Ò
Ò
Ë
-
Ü
Ý
R
Ñ
B
\
O
Þ
D
H
R
T
B
O
U
Â
T
B
Ë
Å
Ý
[
T
\
H
O
T
\
a
S
d
\
Ý
T
T
F
F
R
a
\
Y
Å
Ã
D
K
O
[
\
d
R
M
T
K
F
\
B
F
H
T
F
H
R
B
O
R
F
R
R
Ô
H
F
\
O
T
F
K
O
\
«
l
r
r
M
ì
a
a
t
R
h
t
I
o
p
s
t
g
R
t
v
s
a
à
g
R
a
¿
c
á
\
B
U
\
O
W
Ý
×
\
D
\
b
K
\
O
W
T
B
Þ
â
T
R
F
F
O
R
F
R
R
D
\
H
§
T
F
H
R
B
T
B
×
T
K
\
A
B
¬
£
\
H
B
F
\
T
B
Ý
H
¯
B
b
W
\
H
F
R
O
D
W
ã
È
û
ÿ
ý
ý
ü
T
B
T
d
D
H
D
H
B
û
ä
°
ý
å
û
È
û
Ç
û
ç
û
È
ù
µ
þ
É Ù
ý
°
û
ü
µ
ü
ý
ÿ
È
°
ã
È
û
°
û
ÿ
û
þ
W
T
b
\
D
Ü Ò
Ð Ò
Ò
W
A
B
H
T
B
T
R
H
D
W
A
B
H
T
B
T
W
Ý
K
B
\ Â
Ò
Û Ò
Ó
á
á
R
\
[
T
B
Å
è
T
R
¢
B
F
\
D
\
K
O
H
F
d
R
M
K
a
H
U
\
d
O
R
F
R
R
D
é
ê
ê
ê
Ê
È
µ
µ
ÿ
°
û
¯
û
é
ä
û
È
ù
µ
°
û
Ê
ý
û
È
W
Î
« Ò
Â
Û Ò
Ð
¿ Ï
á
T
Î
Û Ë
W Â
Û Ò
Ü
B
_
O
H
\
S
T
B
T
B
á
T
[
H
H
D
\
Þ
R
B
D
D
R
K
B
R
F
\
[
T
K
T
F
\
H
F
D
T
O
b
K
S
\
B
F
D
Ø A
B
×
H
T
\
D
R
B
T
B
Ã
×
H
B
\
O
W
\
H
F
R
O
D
W
°
û
ù
µ
°
µ
ë
Ì
µ
µ
ý
µ
ü
ã
È
û
È
µ
ù
ù
W
¿
T
b
¬
í
\
\
D
[
Î
H
Ó
B
Ð í
£
\
Î
H
Û
B
F
\
î
W
Ý
H
B
á
a
K
O
â
b
d
b
Y
T
O
B
W
Ý
H
\
[
\
T
O
B
D
B
H
\
F
d
F
O
F
\
\
×
D
D
W Â
H
Ò
B
Ï
í
b
W
T
B
£
T
R
Þ
H
R
B
b
×
R
\
\
U
É
H
B
b
û
\
ê
\
þ
F
ý
ÿ
O
°
R
È
B
û
H
¿ Û
ÿ
£
K
T
U
S
Æ
R
S
û
\
ù
O
ù
ý
\
È
ÿ
ý
O
R
F
Â
R
Ò
R
Ï
D
A
B
ã
È
û
ÿ
ý
ý
ü
û
ä
°
ý
ï
ê
Í
é
ð
ñ
ò
ò
ó
å
û
È
û
Ç
Ò
¿
W
¿
\
d
F
R
B
Ý
R
B
\
D
W
T
B
T
\
O
´
î
H
F
R
O
D
Ã
\
R
O
F
R
B
F
\
O
R
b
O
T
S
S
H
B
b
-
Ø
T
B
b
K
T
b
\
£
T
D
U
\
«
Å
B
R
B
D
F
O
H
F
W
K
O
\
d
M
K
B
F
H
R
B
T
T
B
b
K
T
b
\
´
[
\
O
D
H
R
B
Â
Î
Æ
Ù
-
Ø É é
õ
ã
Ì
Í
Í
û
°
ÿ
ý
W
Î
´ í
W Ó
Â
Ò
Î
Ò
-
Ã
Ò
H
T
O
ö
\
S
S
\
O
\
O
W
Þ
T
F
\
O
H
B
\
×
\
T
R
D
W
T
B
Ý
R
O
d
F
R
b
O
T
H
O
R
F
R
R
T
B
T
d
D
H
D
ø
B
T
F
T
B
¯
û
È
µ
þ
û
ä
Æ
×
H
\
B
â
O
\
\
D
d
D
F
\
S
D
M
R
O
¯
È
Ç
°
û
þ
û
W
´ í
Î
« í
Ð Ò
Ü Â
W Ë
Â
Ò
Ò
-
Â
Ë
c
T
H
B
Ô
R
\
O
\
T
U
H
B
b
T
B
§
H
B
b
F
\
¬
\
\
T
S
O
R
\
\
O
K
a
H
U
\
d
O
R
F
R
R
Õ
Ø
K
D
H
B
\
b
O
_
H
B
á
W
Ã
Â
Ò
A
Ï Ò
B
ã
È
û
ÿ
ý
ý
ü
û
ä
Ø
Ê
É
ù
Æ
ò
ó
W
Ô
¬
Þ Â
Ë
Ó
W Ó
T
b
\
D
Â
Ð í
Ï Â
Ï
O
H
B
b
\
O
W
Õ
Â
Â
\
B
a
R
×
T
R
Å
B
T
K
b
S
\
B
F
T
F
H
R
B
R
M
Å
¬
H
U
\
R
b
H
D
A
B
ã
È
û
ÿ
ý
ý
ü
û
ä
°
ý
ú
°
é
ê
ê
ê
Õ
¯
É Æ
û
ù
Ç
°
ý
È
ý
ÿ
È
°
ç
û
ü
µ
°
û
å
û
È
û
Ç
W
T
b
\
D
Ð
Ï Ó
A
î
î
î
Þ
R
S
K
F
\
O
R
H
\
F
d
¿ O
\
D
D
W Â
Ò
Ò
Ó
Î Â
×
T
O
O
\
O
R
W
î
Þ
T
O
U
\
W
T
B
Ý
È
û
ÿ
ý
ý
ü
T
×
R
\
Ø
ã
û
ä
°
ý
û
é
\
U
H
B
b
M
R
O
D
\
K
O
H
F
d
O
R
F
R
R
É
Ù
D
A
B
¯
É
Æ
å
û
È
û
Ç
û
û
ý
µ
ü
ü
ý
È
ý
ÿ
µ
°
û
û
ä
ý
ÿ
È
°
ã
È
û
þ
°
û
ÿ
û
þ
Â
Ò
Ò
í
Ü Â
Ý
R
B
T
F
T
B
ö
×
H
\
B
W
Þ
Þ
T
O
U
W
T
B
¿
_
O
\
\
S
T
B
â
\
A
B
F
\
O
O
R
b
T
F
R
O
«
O
R
F
R
Õ
É
R
D
\
K
O
H
F
d
T
B
T
d
D
H
D
é
ê
ê
ê
Ê
È
µ
µ
ÿ
°
û
û
û
ä
°
ÿ
µ
È
ý
ê
ý
ý
È
W
Ü Â
´
Î
«
Î
í
Ð
Î
Û
Û
W
-
Â
Û Ò
í
Â
Ã
R
b
\
O
×
¬
\
\
T
S
T
B
×
H
T
\
á
O
R
\
\
O
Y
D
H
B
b
\
B
O
d
F
H
R
B
M
R
O
T
K
F
\
B
F
H
T
F
H
R
B
Ø
H
B
T
O
b
\
B
\
F
R
O
U
D
R
M
R
S
K
F
\
O
D
Æ
û
ù
ù
ÿ
µ
°
û
û
ä
°
ý
Æ
Ù
W
Î
´ Â
Î Â
« Ò
Ü Ò
Ð Ò
W
Ò
Ò
-
Â
Ò
Û í
¿
Â
Ô
Ó
T
O
\
B
\
Þ
T
K
D
R
B
â
\
H
B
K
F
H
[
\
T
O
R
T
F
R
[
\
O
H
M
d
H
B
b
O
d
F
R
b
O
T
H
O
R
F
R
R
D
¯
É ø
û
È
µ
þ
û
ä
Æ
û
ù
Ç
°
ý
È
ý
ÿ
È
°
W
Ï
«
Û
Ð Ó
Î Â
Û
W Â
Ò
Û Ò
Ï Â
Å
Ã
R
D
R
\
×
R
\
H
B
b
T
B
[
\
O
H
M
d
H
B
b
U
\
d
\
T
B
b
\
O
R
F
R
¯
R
D
K
D
H
B
b
¿
Þ
T
B
É _
á
Ã
A
B
ã
È
û
ÿ
ý
ý
ü
û
ä
ñ
A
î
î
î
Þ
R
S
K
F
\
ò
ò
é
ê
ê
ê
Æ
û
ù
Ç
°
ý
È
ý
ÿ
È
°
ç
û
ü
µ
°
û
å
û
È
û
Ç
R
H
\
F
d
O
\
D
D
W Â
Ò
Ò
Ó
Â
í
F
\
[
\
B
\
H
\
O
Ñ
\
O
H
M
d
H
B
b
T
K
F
\
B
F
H
T
F
H
R
B
O
R
F
É û
û
ä
°
ÿ
µ
È
ý
ê
¿
O
ý
ý
È
W
Î
´
« Ò
-
í
Â
Ð í
Ó
Û
W Â
Ò
Ò
Û
R
R
D
H
B
Þ
¿
é
ê
ê
ê
Ê
È
µ
µ
ÿ
°
û
Electronic Payments: Where Do We Go from Here?
Markus Jakobsson (1), David M'Raihi (2), Yiannis Tsiounis (3), and Moti Yung (4)
(1) Information Sciences Research Center, Bell Labs, Murray Hill, New Jersey 07974. www.bell-labs.com/user/markusj
(2) Gemplus, 3 Lagoon Drive, Suite 300, Redwood City, CA 94065. [email protected]
(3) SpendCash.com, Inc., New York, NY. [email protected]
(4) CertCo, Inc., New York, NY. [email protected]
Abstract. Currently, the Internet and the World Wide Web on-line business is booming, with traffic, advertising and content growing at sustained exponential rates. However, the full potential of on-line commerce has not been realized, due to the lack of convenient and secure electronic payment methods (e.g., for buying e-goods and paying with e-money). Although it became clear very early that it is vital for payments to be safe and efficient, and to avoid requiring complicated user intervention, it is still the case that the Internet payment method of choice today is that of traditional credit cards. Despite their widespread use and market penetration, these have a number of significant limitations and shortcomings, including lack of security, lack of anonymity, inability to reach all audiences due to credit requirements, large overhead with respect to payments, and the related inefficiency in processing small payment amounts. These limitations (some of which are present in the real world) prompted the design of alternative electronic payment systems very early in the Internet age, even before the conception of the World Wide Web. Such designs promised the security, anonymity, efficiency, and universal appeal of cash transactions, but in an electronic form. Some early schemes, such as the one proposed by First Virtual, were built around the credit card structure; others, such as the scheme developed by DigiCash, offered a solution with cryptographic security and payer anonymity. Still others, such as Millicent, introduced micropayment solutions. However, none of these systems managed to proliferate in the marketplace, and most have either ceased to exist or have only reached a limited audience. This paper is associated with a panel discussion whose purpose is to address the reasons why the international e-commerce market has rejected proposed solutions, and to suggest new ways for electronic payments to be used over the Internet, avoiding the problems inherent in credit card transactions. The purpose of this paper is to set the stage for such a discussion by presenting, in brief, some of the payment schemes currently available and to discuss some of the basic problems in the area.
Keywords: anonymity, e-cash, e-commerce, electronic payments, security.
1 Introduction
Quite a large number of years had passed from the introduction of Arpanet/Internet until it started to become clear that the Internet would become a vehicle for carrying e-commerce. In the beginning, this network was largely of military interest and used by academics, and traffic was limited to email and file transfers using ftp. Large collaborative distributed computing was thought to be an application, but did not materialize. With the introduction of easy-to-use user interfaces based on HTML, access became possible for the masses, causing both the number of users and the interest in conducting commerce to grow rapidly. One of the next major steps which promises to bring a large increase in Internet use and effectiveness is an improved payment infrastructure (in a very general sense). One factor commonly believed to have dampened the possibilities for, and interest in, electronic commerce has been the lack of such an infrastructure. Thus, practically employable e-commerce has to date been based on existing payment structures, viz. credit cards. These, however, have several properties that make them inappropriate for use over the Internet; some of these include their large overhead, risks related to inappropriate use, and inconvenience of use, particularly for small payments. So, it seems that alternative and simpler methods of payment are required. The lack of such simple schemes can be explained by "the chicken and the egg problem," namely, without a large existing merchant base, the need for payment schemes is less acute, and without a working payment scheme, merchants are unable to enter the Internet market. Another problem has been that financial institutions are traditionally very conservative, particularly when it comes to trying out new and heretofore unproven payment methods. All of these problems are, however, gradually fading away: substantial work is being performed on implementing public key infrastructures. Merchants are becoming aware of the strong potential of the Internet marketplace and are making themselves ready to enter it quickly, and some banks are starting to employ cryptographers and security experts, making it easier for them to evaluate technology-related risks. It seems that it is no longer a question whether there will be Web-based payment schemes. However, a question that remains is what type of scheme(s) will be employed and how soon. To some extent the question of what schemes will be dominant may be resolved not by the consumer, but via government intervention and bank preferences, and by corporate sponsorship. It is likely, though, that many schemes will co-exist at least for a few years, allowing the consumer to state desired preferences. Cryptographic research has produced several important payment-related notions and properties of schemes over the last few years. These include, among others, the issues of anonymity, revokable anonymity and fairness, crime prevention, micropayments, smart-card- and PDA-based schemes, and software-only schemes. It seems that much has been achieved, yet since global or wide usage has not been achieved, it may be the case that there are still a lot of new issues to be dealt with and much technological work to be done. This is an interesting
issue that needs to be discussed and examined. The suitability of the research results to the actual problems faced by financial institutions and the merchant base is another motivating issue. This gives rise to the following characterization of categories.
1.1 Payment Categories
At this point we should clarify that electronic payments can be classified according to the acting parties. The parties can be business-to-business, consumer-to-business, business-to-consumer, business-to-government, etc. However, most of these electronic payment needs have been covered. Businesses can transfer funds to each other via ACH or Wire transfers. Similarly, they can transfer funds to governments. Furthermore, even though there is still the possibility of enabling electronic checks among these entities, it is still unclear whether this is an enhancement of current possibilities or a true business enabler. Much of the business-related payments and commerce may rely on systems which follow bank-aided business-to-business practices, which may be constructed as enhancements to "public-key infrastructure," as opposed to dedicated payment systems. See the "four-corner model" of typical commerce banking transactions that was put forth in [FKMY98]. It seems to us that the open problem demanding immediate attention in current electronic payment methods is the lack of efficient consumer-oriented payment methods (either consumer-to-business or business-to-consumer). This paper and discussion are therefore focused on this particular part of the market (of course, this segment of payments has to be connected to other e-payments).

Organization. The paper is organized as follows. Section 2 discusses credit cards, which are by far the most prominent method for electronic payments. Section 3 discusses electronic checks, and how they fit in the on-line payment arena. Section 4 categorizes the various proposed "cash-like" methods. Section 5 gives representative examples of a variety of payment systems. We are obviously not exhaustive in covering the many suggested schemes and apologize for omitting many interesting designs. Some of the business and political issues are mentioned in Section 6. Then, Section 7 touches on some possible future scenarios, constraints and implications. Section 8 concludes the paper.
2 Credit Card Payments
The most common type of payment used on-line is the credit card payment. The main reasons for this are convenience, ease of use, and the fact that credit cards are ubiquitous. However, as noted above, they are insecure, offer no anonymity, and do not allow small payments.
– High costs and inability to allow small payments. Each credit card payment has a fixed cost of 20-40 cents, plus a variable cost of 2-4.5%, depending on the method used and the negotiated contract. The fixed costs originate from the cost of performing a transaction, since transactions usually involve some type of paperwork, and the traversal of a proprietary network rented by Visa, Mastercard, or some other credit card provider. US banking regulations exist which mandate that users' accounts be maintained so as to enable a mechanism for disputing payments. This makes relatively high fixed costs unavoidable. The variable costs are a reflection of the security problems associated with credit cards. In other words, the credit card issuers recover their costs from fraud by charging the merchants a percentage on their customers' purchases. For this reason this fee is variable, and is much higher for, say, Internet or telephone purchases than it is for purchases where the physical card is presented. It also varies by industry sector, with certain high-fraud businesses being penalized with higher fees. In short, the main reason for these high fees is the insecurity of the original credit card design, which allows merchants to view (and copy, and reuse) all of the customer's private information. As a result of these high fees, payments of less than $10 cannot be made with credit cards while leaving the merchants a reasonable profit (especially on-line merchants, who incur higher charges); the short calculation at the end of this section makes this concrete. Aggregating small payments into one reasonably sized amount before charging one's credit card is the solution currently used, but this poses too many unnecessary restrictions on both users and merchants.
– All purchases are traceable. Despite the convenience of a full history of one's purchases, as well as the ability to dispute payments made with a credit card (especially in the US), the fact that credit card issuers have all the users' spending information available poses serious privacy concerns. This information is sold to advertisers, and is utilized internally by credit card issuers to target advertisements to their audience. From both an ethical and a practical perspective, giving someone the ability to conduct payments should not go hand-in-hand with knowing their whereabouts, their spending patterns, and their personal preferences.
– Security problems for the customers. One of the bigger problems with credit card payments is that all the user's private information is exposed to the merchants. This allows merchants to effectively steal and use their customers' credit cards. Obviously, this is a much greater threat over the Internet, where the merchant can be located anywhere in the world. This security problem is manifested in two different ways, depending on where the credit card has been issued:
– For credit cards issued outside the US, the end-customer is held liable for all purchases. Thus, a stolen credit card number has a direct impact on the consumer. Clearly, this is a serious security problem, especially since the customers have little or no control whatsoever over the merchants' handling of their credit card information.
– For US-issued credit cards, there is a regulatory limit of $50 on the consumer's liability in case of a lost or stolen card number. In addition, most credit cards will typically refund the whole amount of a fraudulent purchase, so more likely than not the customer's liability is nil. Credit card issuers often take advantage of the fact that consumers are afraid of losing their credit cards by offering them additional "security guard" features. In essence, this is insurance against theft or loss of one's credit card; the problem is that the fee for this insurance is extremely high, typically 0.5 to 1% of the customer's purchases.
Thus, in either case consumers are unfairly penalized for the credit cards' own inappropriate security design.
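To make the fee levels quoted above concrete, the following short calculation computes the effective fee on payments of different sizes. This is a minimal sketch in Python; the fixed and percentage costs used are illustrative midpoints of the quoted ranges, not figures from any actual contract.

# Effective credit-card fee as a fraction of the payment amount.
# Assumed midpoints of the quoted ranges: $0.30 fixed plus 3% variable.
FIXED_FEE = 0.30
VARIABLE_RATE = 0.03

def effective_fee(amount: float) -> float:
    """Total fee as a fraction of the payment amount."""
    return (FIXED_FEE + VARIABLE_RATE * amount) / amount

for amount in (100.0, 10.0, 1.0, 0.25):
    print(f"${amount:>6.2f} payment -> {effective_fee(amount):7.1%} fee")

# $100.00 payment ->    3.3% fee
# $ 10.00 payment ->    6.0% fee
# $  1.00 payment ->   33.0% fee
# $  0.25 payment ->  123.0% fee

The fixed component dominates as the amount shrinks, which is exactly why sub-$10 payments leave no reasonable merchant profit and micropayments are out of reach under this fee structure.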
3 Electronic Checks
Following the model of physical payments, where credit cards, cash, and checks combine to dominate the market, a logical next step for electronic payments is the electronic check. Described in an abstract fashion, these are sequences of bits that encode a value; digital signatures or other cryptographic constructions allow a receiver to distinguish between valid and invalid bit sequences (a minimal sketch of such a signed check appears at the end of this section). Some methods have indeed been put to practice, but there has been no large-scale adoption to date. The biggest missing link for these schemes is to put in place legislation governing the use of digital signatures and other cryptographic functions, so that the types of digital agreements which can be seen as binding can be determined. This is therefore an adaptation of the interpretation of how written signatures are binding. Even though digital signatures were put into practice many years ago, and even though they are much harder to forge than handwritten signatures, they are not yet legally binding to the same extent that handwritten signatures are (except in places where laws were put in place). This point creates a severe problem for issuing banks: lack of a clear regulatory framework. One of the largest components in the cost of checks is the physical delivery into and out of the clearing houses. Attempts to mimic checks electronically by presenting an electronic image of checks cause large traffic over electronic networks, so they solve the cost problem only partially (whereas digital-signature-based checks have the potential to be much cheaper to implement). So, while we believe that check-based payments are viable, and that they will turn out to be important once they are successfully introduced, this is not likely to occur before more specific legislative structures are in place. Similarly, these payment methods depend on a comprehensive public key infrastructure being in place before they can become common and widespread. While this is on the way to happening, it has not materialized yet. It is important to note that banks and financial institutions are relatively conservative due to the heavily regulated nature of their industry. Therefore, e-payment implementations to date are a very close reflection of payments in the physical world, and do not incorporate features that would normally come
to mind in an electronic scenario. For example, there is no real-time clearing method for electronic checks, which, although impractical in the physical-check world, would make perfect sense electronically. One possible reason for this is that banks have built their business models around a particular way of handling checks, which would be invalidated by the availability of real-time clearing. However, as technology progresses the banks will have to catch up, or risk being bypassed.
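As a minimal sketch of the "bits plus a signature" view of an electronic check described above: the check is an ordinary byte string encoding a value, and a digital signature is what lets the receiving bank separate valid from invalid bit sequences. The field names and the use of the Python cryptography package's Ed25519 primitive are assumptions for illustration only; this does not reproduce any of the deployed check schemes mentioned here.

import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for a bank-certified check-signing key held by the payer.
payer_key = Ed25519PrivateKey.generate()

# An electronic check: a byte string that encodes a value ...
check = json.dumps({
    "payer": "alice",
    "payee": "bobs-books",
    "amount": "17.50",
    "serial": 1042,        # lets the clearing bank reject a second deposit
}, sort_keys=True).encode()

# ... plus a digital signature over those bytes.
signature = payer_key.sign(check)

# The receiver distinguishes valid from invalid bit sequences by verifying;
# any altered or forged check raises InvalidSignature here.
payer_key.public_key().verify(signature, check)
print("check verifies")

Real-time clearing, as discussed above, would then amount to the bank also checking the serial number against the account at verification time.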
4 Types of Cash-like Schemes
In this section, we will discuss some different types of cryptographically based payment schemes, broadly referred to as "e-cash" or "cash-like" schemes. This categorization is necessary due to the multitude of proposed systems and the differences between their approaches.
4.1 Categorizing by Privacy Notions
As highlighted in Section 2, consumer privacy is a major concern, considering the ease with which data mining can be performed electronically. Therefore, a significant portion of electronic payment systems afford some level of consumer privacy. We briefly outline the levels of available privacy in this section.

Schemes with Perfect Privacy. Information which can be considered personal can be gathered at several stages in a payment process. To begin with, for every Internet connection the IP address of the consumer is exposed; this can be used for various types of tracing and is certainly private information. On the other end, the merchant may explicitly request personal user information in order to complete a purchase. Clearly, a payment mechanism cannot deal with these "out of band" information leaks. Therefore, in our context "perfect privacy" means that the payment mechanism itself hides all consumer-specific information. Perfect privacy, frequently also referred to as "user anonymity," can be achieved in many ways. Anonymity may be established at the time of acquisition of some type of bearer instrument, similar to the way physical cash provides anonymity. Or, anonymity may be established at the time of payment, with the use of cryptographic techniques; in this case, the consumer can "convince" a merchant that the payment information supplied is correct, without revealing any information that could link this payment to the acquisition process, and therefore her/his identity. These types of techniques are called "zero-knowledge proofs". From a cryptographic perspective, the initial schemes (based on off-line coins) were based on "blind signature techniques", which are more efficient than generic zero-knowledge proofs. The notion was put forth by Chaum [C82], who has been for many years a major proponent of digital cash within the cryptographic community. The notion has been investigated in the initial papers in the cryptographic literature [CFN88,OO89,OO91,FY93,B93,F93,O95].
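To give a flavor of the blind-signature technique, here is a toy Chaum-style RSA blinding sketch. The key sizes are deliberately tiny and there is no padding, so every parameter below is an assumption for exposition only; this is neither DigiCash's actual protocol nor production-grade cryptography.

import hashlib, secrets

# Toy RSA key (insecure sizes, exposition only): n = p*q, public e, secret d.
p, q = 1000003, 1000033
n = p * q
e = 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                       # bank's private signing exponent

def h(msg: bytes) -> int:
    """Hash the coin's serial number into Z_n."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# 1. Withdrawal: the payer blinds the coin before sending it to the bank.
coin = b"serial-number-42"
r = secrets.randbelow(n - 2) + 2          # blinding factor; gcd(r, n) = 1 w.h.p.
blinded = (h(coin) * pow(r, e, n)) % n    # the bank sees only this value

# 2. The bank signs the blinded value without learning the coin.
blind_sig = pow(blinded, d, n)

# 3. The payer unblinds: dividing out r leaves a plain RSA signature on h(coin).
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Anyone (merchant, bank) can verify with the public key alone.
assert pow(sig, e, n) == h(coin)
print("valid bank signature on a coin the bank never saw")

The bank can later recognize the signature as its own (and refuse a second deposit of the same coin), but it cannot link the coin to any particular withdrawal session, which is the perfect-privacy property described above.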
Schemes with Revokable Privacy. Privacy, no matter how desirable, may cause problems at the regulatory and legal levels. In particular, since a bearer instrument is, by definition, valid for payments in an open environment, there exists the potential for money laundering, buying illegal goods, blackmailing, and other attacks [vSN92]. To protect against these, some anonymous systems allow an administrative party or a collection of parties to revoke the consumer's anonymity under certain circumstances, such as a court order. Such revocation is usually made possible by forcing the consumer to encrypt their private information under the key(s) of the administrative authority(ies). When revocation is ordered, the encrypted data are given to the authority(ies), which can then decrypt to obtain the consumer's identity. An alternative to revocation, which has recently been proposed, is a publicly auditable file of coins, with access to revoked coins within this context.

Schemes without Privacy or with Limited Privacy Mechanisms. There are also systems which do not employ anonymity, usually in the interest of simplicity. Despite the obvious consumer advantage of the availability of a personalized dispute mechanism, complete lack of anonymity usually limits consumer appeal. Thus, some systems employ a middle way for privacy. Usually the entity issuing the bearer instrument (the "bank") possesses the consumer's private information but prevents its disclosure to third parties, including merchants. Some of these types of schemes are frequently confused with "perfect privacy" schemes, but the fact remains that the bank can still perform data mining on users' personal information; furthermore, the bank is the most likely party to perform such mining anyway, since it possesses the largest database of customer data.
4.2 Categorizing by Size of Payment
In principle, a payment mechanism should be able to handle payments of arbitrary size. However, there are technical as well as regulatory reasons which prevent a single scheme from covering all possible payment types.

Schemes for Large and Medium Payments. Generally, when a large single payment is involved there exist regulatory requirements to record the payment amounts, or potentially to allow dispute of payments. But even in the absence of regulation, consumers are unlikely to use a payment mechanism for large or medium value payments if they cannot (a) easily obtain transaction records and dispute payments, and (b) be assured that the security of the mechanism is adequate to protect the transmitted funds. On the other hand, processing costs, as well as the time to complete a purchase, are of lesser importance, since large payments are conducted with relatively small frequency from the consumer's side. Also, anonymity is of lesser importance, since a payment trail is usually desired by the (lawful) consumers to allow for transaction records and potential disputes.
Schemes for Small Payments. In contrast to the requirements for large payments, the priorities for schemes that can be used with small payment denominations are (a) efficiency, (b) anonymity, and (c) simplicity. Accountability, recording of transactions, and dispute resolution are of lesser importance, except when payments are aggregated to larger amounts; but this can be seen as a form of a large payment and treated accordingly. To this effect, a special category of schemes has been developed, traditionally called "micropayments" since they allow payments as low as cents or fractions of cents. It is important to note that the micropayment computational cost cannot be too large or resource-consuming (which would increase the cost and defeat the purpose). Thus, technology like blind signatures, which could have provided anonymity for small payments, is not useful due to its computational cost.
4.3 Allowing Use of Randomness (Probabilistic Schemes)
In the majority of cases, payment mechanisms employ deterministic techniques during the payment verification process. This type of assurance is traditional in the banking industry. However, there are systems which can obtain (computational and other) efficiency advantages by performing some payment-related functions in a probabilistic way, thus spreading the effective "cost" of an operation across multiple transactions and consequently achieving higher overall efficiency. Here we describe systems in which consumers pay according to a probabilistic model, either honor-based (you have to pay each time, and if you are caught not paying in a random "check" you are charged a multiple of the purchase price) or lottery-based (you only pay infrequently, but you pay a multiple of the purchase amount). Some examples of such schemes are:
– Probabilistic Polling. The idea behind the probabilistic polling construction is to integrate a probabilistic function defining the frequency with which payments are sent to the bank. These schemes propose a probabilistic deposit at the time of the transaction, correlating the risk of overspending to the frequency of on-line verification of payments (a sketch of the vendor-side decision follows this list). Drawbacks of the method are the need for on-line verification of users' solvency, and black-listing (which requires maintaining a black-list and keeping vendors informed of any newly revoked user).
– Probabilistic Auditing. In this setting, a hardware-based deterministic scheme is combined with a probabilistic auditing of spending records (to detect overspending).
– Probabilistic Paying. The idea is to let users send bids and randomly pick a transaction (or several transactions, depending on the scheme setting) as a "contract" that is declared as a payment. The user committed to the contract must finalize the transaction and actually pay the merchant.
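A minimal sketch of the vendor-side polling decision follows. The particular deposit-probability function (a base rate plus a term growing with the amount) is an assumption for illustration; the polling schemes discussed in Section 5.2 define their own functions.

import random

POLL_BASE = 0.05       # assumed base deposit probability

def deposit_probability(amount: float, max_amount: float = 100.0) -> float:
    """Larger payments are more likely to trigger an immediate deposit."""
    return min(1.0, POLL_BASE + amount / max_amount)

def on_payment(amount: float) -> str:
    """Vendor-side decision: verify with the bank now, or batch for later."""
    if random.random() < deposit_probability(amount):
        return "deposit now (on-line solvency check)"
    return "accept off-line (deposited with the next batch)"

random.seed(7)
for amount in (0.10, 5.00, 80.00):
    print(f"${amount:6.2f}: {on_payment(amount)}")

The expected exposure per transaction is the amount times the probability of not checking, which is how the overspending risk is tied to the on-line verification frequency.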
4.4 Categorizing by Implementation Platforms
Hardware-based Schemes. Many schemes rely to some extent on hardware implementations and assumptions. These schemes are of two major types:
– Security relies on hardware. Some schemes, such as [Mon], derive their security entirely from the hardware used. In [Mon], users carry hardware, and the hardware keeps a state corresponding to the balance. When a transaction is performed, this balance is altered correspondingly. Clearly, such a scheme would not be a good idea if implemented in software, as it would allow users to increase their balance by increasing the counter or, even simpler, to "rewind" to a previous state after a payment is performed. In schemes like the above, a probabilistic approach can be employed to limit the cost of the check, by only performing on-line verification for a certain fraction of the transactions, as discussed above.
– Security improved by hardware. In other schemes, such as off-line coin-based schemes, the hardware is used to prevent overspending. Even though these schemes have mechanisms in place to detect and trace overspending, and some schemes allow the bank to block other coins issued to the fraudulent user, the use of hardware can reduce the amount of litigation, blacklisting, and complicated cases involving more than one country.

Software-only Schemes. There are many proposed schemes that do not rely on hardware. Two different categories can easily be distinguished:
– Fraud is preventable. In on-line schemes (for a clarified example see [S96]) the bank or a clearing agency gets involved in every transaction, and verifies that funds are available. It is therefore possible for the bank to ascertain that nobody spends funds he/she is not entitled to. The bank can verify that a user is entitled to spend an amount by verifying that he has a coin (or similar) bearing a valid signature, and also that this coin has not been previously spent. Alternatively, the transaction may be account-based, allowing users to access funds only by identifying themselves as having access to an account that the bank keeps. In the latter case, the bank determines the presence of the account, as opposed to the absence of a previously spent coin with the label in question. Both of these approaches have the drawbacks of slowing down the transaction, due to the on-line connection with the bank, and of increased cost, due to on-line availability requirements.
– Fraud is unprofitable. In micro-payment schemes, each unit of funds is so small that there is no significant risk of fraud, as the amount to be gained is not substantial. Also, this type of payment scheme is likely only to be used in situations where there is no clear benefit associated with tremendous overspending (such as access to home pages, etc.). It is important to design the supporting architecture to prevent accumulation of vast amounts of small payments to be used for something of high value that can be delivered before the bank detects overspending. (This type of delay is an important but little studied tool for reducing the incentive for misbehavior.)
4.5 Categorizing by the Infrastructure
Phone-based. Specialized companies have been proposing to pay for transactions by charging these to the payer's phone bill. In principle, any bill could be
used for this, e.g., the gas and electric bill, but it makes more sense to charge purchases to the phone bill, as in many cases the phone would already be involved in performing the transaction as well.

Internet-based. The next wave of payment schemes, integrating privacy protection and non-repudiation features and enlarging the customer base of electronic commerce, is obviously based on the Internet revolution. Indeed, the Internet is the new arena for commerce and many other human activities. The examples of e-commerce services like amazon.com or eBay demonstrate the impact and potential of the Internet context on old business and trade models. The new setting enables any customer to choose and decide with ever better knowledge of the relative value of goods and services. Other business models are direct purchasing of PCs and other computer equipment; these show another novel business model that the Internet enables. The direct access to customers and reduction of supply chains within and between organizations are expected to further enhance the economic value of the Internet. However, risks and problems of this new medium exist as well [JaYu98]. We believe that it is quite an acceptable prediction to declare that dedicated payment systems within the Internet arena are of prime importance.
5 Examples of Schemes
We will now mention a few schemes, categorize them given the above payment scheme taxonomy, and briefly discuss what types of situations they appear to be best suited for.
5.1 Credit Card Setting and On-line Schemes
– NetBill: This scheme [CTS95] is based on the on-line paradigm using basic authentication methods. The work includes some novel and interesting features such as atomicity of payments (a fault tolerance feature whereby a user pays only for transactions he receives) and anonymity by usage of pseudonyms. The drawbacks may be the number of messages (8) to process for a transaction, and the mandatory on-line communication with the intermediary NetBill server. We comment that the issue of fault tolerance raised by the atomicity concern is real and important in deployed systems (see also [BBC94,CHTY96,T96,PW97,XYZZ99]).
– NetCheque and NetCash: This project [NM95], managed by the University of Southern California, is another on-line scheme where users issue checks using a secret key (shared between a user and the bank) as a certificate of validity. A weakness may be the need for users to register at the banks, and the on-line verification of check correctness and fund availability which is required for each payment. Off-line verification is a technical possibility, but at the cost of possible fraud (non-detection of bad checks). This project is an extension of NetCash [MN94], which implemented electronic currency in
a way somewhat similar to the Digicash scheme. However, the NetCheque system only keeps track of tokens in circulation, i.e., those issued but not already spent.
– SET: VISA and MASTERCARD's analog to a credit-card setting, which incorporates legally-binding signatures and implements digital signatures as a tool for authenticating users, merchants, and banks. This reduces the possibility of fraudulent transactions, thus bringing on-line transactions on par with physical-card solutions. The technical details, originated by technology companies (IBM, Microsoft, and Netscape) working together with the credit card companies, are solid. Note, however, that the complexity of SET has, so far, hampered its full-scale deployment. It is, in fact, too expensive for most merchants to implement, and it also requires end-users to download specific software and to participate in a public key infrastructure, which is not yet firmly in place. Also, even SET does not raise credit cards to a sufficiently high security standard to completely overcome fraud; hence credit cards still carry merchant fees, which makes micropayments prohibitive.
– DigiCash: The Digicash payment scheme involved so-called blind signatures, which are standard signatures generated in a manner that does not allow the signer to learn the message or the actual signature, but only their general format (see the toy sketch in Section 4.1). This is done in a withdrawal phase, enabling the entity to later become the payer who holds a signature by the bank. Then, in a payment phase, the payer sends this signature to the merchant, who forwards it to the bank. Since the signature was withdrawn in a blinded fashion, the bank cannot determine what withdrawal session it belongs to, but only that it is a valid signature. Upon seeing such a valid signature, the bank verifies that it has not been deposited already, and acknowledges the transaction to the merchant if this has not taken place. Related versions allowing off-line purchases have been developed; these, however, were not implemented due to the risk of short-term high-volume overspending, while being computationally intensive (as opposed to micropayments, below). The Digicash on-line scheme has been implemented on various platforms. Before the end of DigiCash operation, three smart-card based versions, the so-called DigiCash Blue Mask, existed. These schemes are closer to hardware-protected micropayments. The larger of these versions (10K ROM, 8K EEPROM and 256 RAM) implemented what is known as the optimal Fast Debit command with compression, enabling many fast payments with transaction times around 20 ms.
5.2 Probabilistic Payment Schemes
While most schemes are deterministic in the way they treat legality of payments, a few probabilistic methods have been discussed.
– Polling Schemes: Gabber and Silberschatz [GS96] and Jarecki and Odlyzko [JO97] proposed schemes where:
1. users register by giving a first payment, which is a signed note including a bank certificate;
2. subsequent payments sent by users (depending on the underlying payment scheme) are received by the vendor and probabilistically sent to the bank for deposit at the time of the transaction.
The overspending risk can be limited to a known value by defining the probabilistic checking as a function of the transaction size (making large payments more likely to be checked).
– Yacobi's Auditing Scheme: In [Yac97] a hardware-based deterministic scheme with a probabilistic auditing of spending records (to detect overspending) is proposed. This project at Microsoft Research includes the following features:
– smart-card id-based wallet (tamper-resistant device)
– e-coins signed by the bank and stored in the smart-card
– duplication (double-spending prevention) controlled by probabilistic checking in the device
– Rivest's Lottery: In this lottery-based scheme [Riv97a], the idea is to use a chain of values as a book of lottery tickets. The user pays with the next value (or pre-image) in the book (as will be described in the coupon section 5.5), but with the twist that the bank later announces one of the tickets as a winning ticket. If the user spent the corresponding ticket, then he is responsible for paying the vendor the ticket value. The lottery must be held after the book (of the day, of the week) is no longer in use, to prevent cheating users from trying to never spend a winning ticket. In a variation of the scheme in [Riv97b], the decision to perform a payment is made by both the payer and the merchant, who execute a standard coin-flipping protocol (the merchant commits to a random number, the payer sends a guess, and the merchant de-commits) to decide jointly whether the user should pay or not; a sketch of this coin flip follows.
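The coin flip in the [Riv97b] variation can be sketched with a standard hash commitment. Everything below (the salted-hash commitment, the probability 1/K, the parameter names) is an illustrative assumption; the cited paper fixes the actual details.

import hashlib, secrets

K = 64   # pay with probability 1/K; a winning flip costs K times the unit price

def commit(value: int) -> tuple:
    """Merchant commits to a value with a salted hash (hiding and binding)."""
    salt = secrets.token_bytes(16)
    return hashlib.sha256(salt + value.to_bytes(8, "big")).digest(), salt

# 1. Merchant commits to a random number; the payer sees only the commitment.
m_value = secrets.randbelow(K)
c, salt = commit(m_value)

# 2. The payer sends a guess without knowing the committed value.
p_guess = secrets.randbelow(K)

# 3. Merchant de-commits; the payer checks the opening before accepting it.
assert hashlib.sha256(salt + m_value.to_bytes(8, "big")).digest() == c

# 4. Jointly decided outcome: the payer pays K units only on a match.
print("payer owes", K if p_guess == m_value else 0, "units this round")

Since the match occurs with probability 1/K, the expected payment per transaction equals the unit price, while most transactions require no transfer at all.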
5.3 Hardware Based Schemes
– Small Value Payments: Stern and Vaudenay's scheme SVP [SV97] proposed to deliver to each vendor a smart-card containing a MAC master key. Users buy tokens certified by the bank using the private-key MAC scheme; they perform a payment by sending a token to the vendor's device, which checks the certificate in order to validate the transaction. The idea is that only the bank knows the secret key, while any vendor's device can properly verify the tag's authenticity. The main security issue is related to the storage of the master key on each individual card, since breaking one card is equivalent to obtaining the scheme's master key.
– Micro-Mint: Rivest and Shamir [RS96] proposed a scheme in which many collisions are found by (substantial) precomputation by the bank, and such collisions are handed out to users later. Collisions, which are hard for users to find, thereby become the tokens used for commerce, much as precious metals were long used for coins. The principle of the scheme is that whereas a low number of
collisions is hard to find, a large number of collisions is not much harder to find, thereby allowing amortization (the sketch after this list illustrates the effect). Methods for distributing the effort of finding collisions were recently introduced in [JJ99].
– Hybrid Schemes: The advantage of having a unified scheme which works in software and hardware (assuming a card reader/writer in the PC) is advocated in [BGJY98]. The scheme combines a software-based (on-line) scheme with a synchronized smartcard-based scheme where loading can be done via the network.
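The birthday-paradox economics behind Micro-Mint can be seen in a toy sketch: with a k-bit hash, the first collision costs on the order of 2^(k/2) evaluations, but each further collision comes almost for free. The truncated hash and the tiny difficulty below are assumptions for illustration, far below anything a real mint would use.

import hashlib
from collections import defaultdict

BITS = 20                      # toy mint difficulty (real parameters are far larger)

def short_hash(x: int) -> int:
    digest = hashlib.sha256(x.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - BITS)

# The mint hashes many preimages; two preimages in the same bucket form a coin.
buckets = defaultdict(list)
coins = []
x = 0
while len(coins) < 5:
    h = short_hash(x)
    if buckets[h]:
        coins.append((buckets[h][0], x, h))   # a 2-way collision = one coin
    buckets[h].append(x)
    x += 1

# A vendor validates a coin with just two hash evaluations, no signatures.
x1, x2, h = coins[0]
assert x1 != x2 and short_hash(x1) == short_hash(x2) == h
print(f"minted {len(coins)} coins after {x} hash evaluations")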
5.4 Phone Based Payment
– TelPay: This Canadian company [Tel] proposes a payment-by-phone service to registered customers. During registration, each user is given a unique registration number associated with a personal identification number (PIN). Once set up with a registration number and PIN, users must also set up the accounts they wish to pay. This requires identifying the payee (company to be paid) and entering an account number with that payee. This information is checked by the TelPay system to ensure, as far as possible, that it is valid. From then on, to make a payment to that account all that is required is that users dial the last three digits of the account number. The amount is entered, and the system confirms the name of the payee and the amount to be paid. Any user can modify his profile and enter additional details, either regarding bills already registered or new bills to append to the list of payees.
– ibill/eCharge: Two recent services, ibill [ibill] and eCharge [eCharge], also allow merchants to charge transactions to the phone bills of the payer, but in a slightly more streamlined manner. Whereas ibill focuses on the adult market, and particularly subscriptions, eCharge expressly excludes this market. Both use a concept closely related to 900 numbers, requiring the payer to make a phone call in order for the fund transfer to occur. In ibill, this involves the user manually, whereas in the scheme by eCharge it is done via a modem. Both of these services allow only fixed charges from a small variety of denominations, and so do not allow shopping-cart-style purchases. An advantage of this type of service is that it is easy for the average consumer to use and understand; a drawback is that it requires the phone service provider to accept the risks of the types of purchases involved, which is outside the typical business model of these companies.
5.5 Coupon-based Schemes
– Lamport one-time-password based schemes: Various schemes rely on an idea from Lamport: Rivest and Shamir's PayWord [RS96], Anderson et al.'s NetCard [AMS96], Pedersen's scheme [Ped95], Jutla and Yung's PayTree [JY96b], and Hauser et al.'s Micro-iKP [HSW96]. The idea is the
following: take a one-way permutation f (or a hash function), pick a random input x, and iterate the application of f a large number n of times to produce y = f^n(x) = f(f(...f(x)...)); then authenticate y with a public-key signature scheme. The chain of values y, f^{-1}(y), f^{-1}(f^{-1}(y)), ..., x has the property that, given any element of the chain, it is hard to compute its preimage (due to the one-wayness property) but easy to verify that the chain leads to y, which is authenticated by the bank. The general construction of a payment scheme based on this idea is to deliver to users triples of the form (x, y, sign(y)); when users want to pay, they spend an inverse as a micro-payment unit (a sketch of such a hash-chain payment appears at the end of this subsection). Jutla and Yung generalized the chain idea to trees. The drawback is again the double-spending attack; prevention against this attack is either to check on-line (which is expensive) or to blacklist malevolent users (but then the user's identity must be properly built in, so that forging or changing the identity is hard to do).
– N-count Protocol: The N-count protocol designed by the company QC Technology [qct] is based on the chain-value idea. The card contains a key index k and the terminal an N-counter denoted x_n. The payment protocol consists of the following steps:
1. The card sends the key index k to the terminal.
2. The terminal replies with the following data: the chain parameters (N, TID, CID, u), the amount m to be paid, and the current counter value x_n.
3. The card computes x_0 = G(S_k, TID, CID, N, u) and x_1, x_2, ..., x_{n-m}, where x_i = F^i(x_0).
4. The card sends x_{n-m} and decreases its balance by the value m * u.
5. The terminal checks that F^m(x_{n-m}) = x_n.
Here S_k is a secret key and F and G are two one-way functions. The length of the chain depends on the nature of the spending. Spending money on the road can be done at a location with one beacon or between two beacons. A two-beacon setting gives enough time to prepare the transaction (between the readings), the time requirement being less critical. On the contrary, in a one-beacon situation, the total transaction must be processed in less than 20 ms. In this case, the chain is minimal (length 1), the empty N-counter at the terminal is x_1 = F(x_0), and the card simply computes and sends x_0.
– MilliCent: The Digital (currently Compaq) Research scheme MilliCent [GMA+95] is a private-key solution where brokers, connected to a certain subset of vendors, are in charge of selling e-coins related to a vendor. These vendor-specific coins can only be authenticated by the vendor, using his private key. The brokers must be trusted and must have agreements with vendors (certification). This scheme is one of the initial micropayment schemes.
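The following sketch illustrates the hash-chain mechanism shared by these schemes. It is a minimal illustration, assuming SHA-1 as the one-way function; the bank's signature on y and the binding of the user's identity are omitted, and the class name is invented.

    import java.security.MessageDigest;
    import java.security.SecureRandom;
    import java.util.Arrays;

    // The user commits to y = f^n(x); each payment reveals the next
    // preimage, which the vendor verifies with a single hash against
    // the last accepted value.
    public class HashChainPayment {
        static byte[] f(byte[] in) throws Exception {
            return MessageDigest.getInstance("SHA-1").digest(in);
        }

        public static void main(String[] args) throws Exception {
            int n = 100;                          // chain length = number of coins
            byte[][] chain = new byte[n + 1][];
            chain[0] = new byte[20];
            new SecureRandom().nextBytes(chain[0]);  // the secret x
            for (int i = 1; i <= n; i++)
                chain[i] = f(chain[i - 1]);
            byte[] y = chain[n];                  // y = f^n(x), signed by the bank

            // Vendor side: paying coin i means revealing chain[n - i].
            byte[] last = y;
            for (int i = 1; i <= 3; i++) {        // spend three coins
                byte[] payment = chain[n - i];
                if (!Arrays.equals(f(payment), last))
                    throw new IllegalStateException("invalid coin");
                last = payment;                   // accept and advance
            }
            System.out.println("3 coins accepted");
        }
    }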
5.6 Schemes with Revokability
The anonymity that comes with the unrestricted use of blind signature mechanisms could lead to attacks by large-scale criminal organizations. In order to
reduce such risks and improve the control and reliability of anonymous payment schemes, the concept of revocable privacy was introduced. In such a setting, privacy can be removed to identify malevolent users or to trace improperly withdrawn coins. The escrowed cash schemes introduced in [vSN92], like schemes based on the fair blind signature primitive [CPS95], give a good flavor of the concept but require the trustees to get involved during withdrawals (see also [BGK95]), drastically decreasing the overall performance of the scheme. Recent works introduced the first revocable off-line (with respect to the trustees) schemes, based on publicly verifiable secret sharing techniques [CMS96,Sta96] or on indirect discourse proofs [FTY96] (see also [DFTY97,dST98,FTY98]). An interesting model from Jakobsson and Yung [JY96a] introduced the notion of an Ombudsman (a government official in charge of defending customers against abuses), yielding an efficient electronic money system where tracing does not depend on the bank alone but requires the combined endeavors of the bank and the Ombudsman. Furthermore, the paper introduced new types of attacks, including the bank robbery attack, corresponding to an adversary able to access secret pieces of information, and ways of protecting users and issuers against these. Several implementations were realized on smart-cards, proving the practical validity of such concepts: [CPS96], based on the fair blind signature primitive, and [M'R96], which sub-contracts the blinding to a trustee and uses an identity-based piece of information to achieve provable privacy and security. Schemes based on public auditing for crime prevention rather than revocation were given in [ST99a,ST99b].
6 Technology and Business/Political Issues
In order to appeal to mainstream customers, attract a large merchant base, and get many parties involved in payment systems, technology and other factors (business, legal, policy and political ones) have to be reconciled. While the richness of available schemes is justified technically (and has to be pursued by scientists), the mainstream solutions have to account for many concerns. The integration of the solutions, especially global large-scale ones, requires a deep understanding of the regulatory, financial, social and other aspects of the user base (clients, merchants, financial institutions old and new, governments, regions and global markets). Integrating payment technology with other technologies (fault tolerance, distributed systems, Internet interfaces/APIs, other e-commerce infrastructure, etc.) is still challenging, since cryptography is merely a component of the entire system. Some open issues are discussed in [S97,W98]. It is interesting to note that many of the issues dealt with by the technical community are also issues expressed in the business world (sometimes after they were recognized technically). Many of the technical concerns in the cryptographic literature we reviewed so far indeed have parallels in the business, legal and policy literature. The banking industry has reported concerns regarding counterfeiting on the user side [Ba96,Ba97a] and risk supervision avoiding commitments to unbacked funds on the bank side [Ba97b,Ba98]. Many of the concerns regarding money laundering are expressed in numerous policy works, e.g. in [FATF96,GAO98,MMW98,OECD96]. Possible legal problems with anonymity are discussed in [F95]. However, large-scale studies of comprehensive technological solutions have not been done yet. The economic stability assurances that new currencies should maintain (technically and otherwise) are an important issue. The development of stable and well-recognized business models will help in integrating operational payment schemes into the business world. The education of the user base, market penetration, and the overall integration of payments as an infrastructure component with emerging e-commerce applications are interesting challenges. The dependencies between the growth of commerce in content, consumer habits and the need for e-cash have to be better understood as well. A recent interesting analysis of some of the reasons for the initial business failures of micropayments is presented in [Cro99]; Crocker also tries to explain why a current trend of reverse micropayments (from companies to consumers, as rewards for reading advertisements or participating in some activity) may be more successful at the moment.
7 Future Directions
In this section, we briefly treat some potential scenarios and discuss what constraints and implications they may have for e-commerce. By the nature of the discussion, it is impossible to be exhaustive in this exposé; we focus only on a few potential events and do not consider the implications of combinations or extensions of these.
7.1 Legal Restrictions on "Financial Cryptography"
In light of the current debate, it is not unlikely that some countries will impose restrictions on the type of cryptography used by their citizens. Currently such limitations concern bulk encryption and are set from a national security perspective. However, this may change with the likely growth of e-commerce in terms of its impact on local economies. In this case, the flow of money becomes as important to control as the flow of information (recall the policy papers mentioned above on threats of e-cash). Therefore, even if payment schemes are not easily abused for secret communication, local governments are likely to want to control the flow of services and funds, much as customs does for their physical counterparts. This desire may further limit what kinds of payment schemes are employed, and may, for example, force privacy to become more of a legislative measure than a technical one. Alternatively, it may create markets for local payment schemes (for which users enjoy privacy, but taxes are automatically charged by the local government as part of any transaction), and global schemes employed mainly for exchanges of currency between local schemes. These, in turn, would work as the interfaces between different legislative and tax domains, and would have taxation as a main objective. In such a situation, "black market" exchange of funds may become a problem much resembling what piracy is today, and would have to be battled with a combination of legislative and technical measures. Another limitation may prevent or restrict certain types of cryptographic tools from being employed, either globally or in particular countries. This in itself may cause different schemes to be employed, as will local requirements on the functionality of the schemes (something we can already witness today with the division between European cash cards and U.S. credit cards). Existing cultural differences that typify variations in physical payment methods may migrate into e-payment systems. Additionally, as we will discuss in the next subsection, a multitude of different schemes may evolve and be employed in the same market.
7.2 Many Parallel Standards
Payment schemes today give the impression of being on the way to becoming a niche market, in which we have a few leaders for common types of payments and special schemes used only in particular situations. There are many reasons why such a variety of schemes may be deployed, either symbiotically or in competition with each other, ranging from corporate interests to varying requirements on payment schemes based on their usage. One example of symbiotic use was given in the previous section, while others could arise to give users better functionality and to cover a variety of situations. For example, fast and low-overhead schemes are useful for situations like paying daily commuting tolls, while frequent-flier programs and the like require no speed of transaction and may put restrictions on how funds are transferred and used (or taxed when doing so). Still, incorporating schemes that allow for a consolidated presentation to the user, and allowing for (potentially automatic) transfer possibilities, gives rise to a much more versatile construction. Whereas much of the problem remaining to be solved is that of building an appropriate infrastructure, it is also important to implement mechanisms for monitoring (by law enforcement, customs, arbiters, and others). It is interesting to notice the tradeoff between monitoring and privacy here, giving rise to a much more severe potential privacy intrusion than what has previously been considered.
7.3 Advances in Cryptanalysis
Advances in cryptanalysis have the same potential as legislation to change the payment scene by limiting the allowed types of operations. This may restrict the use of certain schemes or types of schemes. It is noteworthy that a vast majority of the payment schemes discussed in the cryptographic community are based on public-key cryptography. In the unlikely event of a major cryptanalytic breakthrough, a new approach may be needed. This, along with concerns about legal restrictions, calls for careful studies of how to implement desirable payment schemes relying on secret-key cryptography or on other methods, or combinations of methods, to ensure the correctness of payments.
7.4 Social and Technical Changes
Clearly, social changes can be expected to have a major impact on the field. For example, if PDAs become as common as credit cards are, it will drastically simplify the building of a new infrastructure for payments. Similarly, technical changes, such as a substantial increase of the available communication bandwidth (and a drop in its price), may affect what types of schemes are employed. As an example, it makes little sense to implement off-line payment schemes if the cost of communication falls drastically, given the higher costs and complexity of such schemes. There is a trend towards both of these changes. However, at the same time as such changes simplify the deployment of payment schemes, they also increase the security concerns, as should be evident from the existence of viruses. So far, these have not started to surface on PDAs, but such an event is likely to be only a matter of time. Likewise, due to the lack of electronic payment schemes in common use, viruses have not yet started to target the wallets of users. This, too, may simply be a question of time. From a technical point of view, this should prompt more secure operating systems to be constructed for these devices (or schemes not relying completely and solely on the computing platform), as well as recovery mechanisms for payment schemes. These may be based on automatic arbitration, supported by tracing mechanisms and detection mechanisms controlling "unusual" flow patterns. The latter, in turn, forces pattern categorization, which may be questionable in terms of privacy concerns if it is not performed by the user himself or made non-interpretable to a third party looking for differences in behavior.
8 Conclusion
We have presented some of the past issues concerning the technology of electronic payments. This area is challenging and promising. We believe that the need for it is inherent, though the difficulties in achieving it are extensive and interrelated with many more general e-commerce and secure-infrastructure issues. We have surveyed some prototypical examples from the past and the present, and have categorized the technical solutions. We further related the technology to many non-technological constraints and discussed possible future needs, directions and possibilities. While we could not possibly cover all areas and systems in this very prolific field, we hope we have presented the basic technological developments (granted, with unintentional omissions!). We believe we have pointed to various interesting and challenging issues for further activities. These activities are needed in numerous areas: research, business development, technical research and development, social, legal and political studies, and other interdisciplinary areas related to e-commerce and payment mechanisms.
References
AMS96. R. Anderson, C. Manifavas, and C. Sutherland. NetCard – A Practical Electronic Cash System. In Fourth Cambridge Workshop on Security Protocols. Springer-Verlag, 1996. Available at http://www.cl.cam.ac.uk/users/rja14.
Ba96. Report by the Committee on Payment and Settlement Systems and the Group of Computer Experts of the Central Banks of the Group of Ten Countries. Security of Electronic Money. http://www.bis.org/publ/index.html, 1996.
Ba97a. Report of the Working Party on Electronic Money of the Group of Ten Countries. Electronic Money: Consumer Protection, Law Enforcement, Supervisory and Cross Border Issues. http://www.bis.org/publ/index.html, 1997.
Ba97b. Basle Committee on Banking Supervision. Core Principles for Effective Banking Supervision. http://www.bis.org/publ/index.html, 1997.
Ba98. Basle Committee on Banking Supervision. Risk Management for Electronic Banking and Electronic Money Activities. http://www.bis.org/publ/index.html, 1998.
B93. S. Brands. Untraceable Off-Line Cash in Wallets with Observers. Crypto '93.
BBC94. J. Boly, A. Bosselaers, R. Cramer, R. Michelsen, S. Mjolsnes, F. Muller, T. Pedersen, B. Pfitzmann, P. de Rooij, B. Schoenmakers, M. Schunter, L. Vallee, and M. Waidner. The ESPRIT Project CAFE: High Security Digital Payment Systems. ESORICS '94.
BGJY98. M. Bellare, J. Garay, C. Jutla, and M. Yung. VarietyCash: A Multi-Purpose Electronic Payment System (Extended Abstract). USENIX Workshop on Electronic Commerce '98.
BGK95. E. F. Brickell, P. Gemmell, and D. Kravitz. Trustee-Based Tracing Extensions to Anonymous Cash and the Making of Anonymous Change. SODA '95.
BP89. H. Bürk and A. Pfitzmann. Digital Payment Systems Enabling Security and Unobservability. Computers & Security, 8/5, 1989, 399–416.
C82. D. Chaum. Blind Signatures for Untraceable Payments. Crypto '82.
CFN88. D. Chaum, A. Fiat, and M. Naor. Untraceable Electronic Cash. Crypto '88.
CFT98. A. Chan, Y. Frankel, and Y. Tsiounis. Easy Come – Easy Go Divisible Cash. Eurocrypt '98.
CHTY96. J. Camp, M. Harkavy, J. D. Tygar, and B. Yee. Anonymous Atomic Transactions. 2nd USENIX Workshop on Electronic Commerce, 1996.
CMS96. J. Camenisch, U. Maurer, and M. Stadler. Digital Payment Systems with Passive Anonymity-Revoking Trustees. In ESORICS '96, LNCS 1146. Springer-Verlag, 1996.
CPS95. J. Camenisch, J.-M. Piveteau, and M. Stadler. Fair Blind Signatures. In Eurocrypt '95, LNCS 921, pages 209–219. Springer-Verlag, 1995.
CPS96. J. Camenisch, J.-M. Piveteau, and M. Stadler. An Efficient Fair Payment System. In Proc. of the 3rd CCCS, pages 88–94. ACM Press, 1996.
CTS95. B. Cox, D. Tygar, and M. Sirbu. NetBill Security and Transaction Protocol. In First USENIX Workshop on Electronic Commerce, 1995. Available at http://www.ini.cmu/NETBILL/home.html.
Cro99. S. Crocker. The Siren Song of Internet Micropayments. iMP Magazine, April 1999. http://www.cisp.org/imp/april 99/04 99crocker.htm.
DFTY97. G. Davida, Y. Frankel, Y. Tsiounis, and M. Yung. Anonymity Control in E-Cash. In 1st Financial Cryptography, LNCS 1318. Springer.
dST98. A. de Solages and J. Traore. An Efficient Fair Off-Line Electronic Cash with Extensions to Checks and Wallets with Observers. In 2nd Financial Cryptography.
eCharge. eCharge. http://echarge.com.
FATF96. FATF-VII Report on Money Laundering Typologies. Financial Crimes Enforcement Network Publications, http://www.treas.gov/fincen/pubs.html, 1996.
F93. N. Ferguson. Extensions of Single-Term Coins. Crypto '93.
FKMY98. Y. Frankel, D. Kravitz, C. T. Montgomery, and M. Yung. Beyond Identity: Warranty-Based Digital Signature Transactions. Financial Cryptography '98.
FTY96. Y. Frankel, Y. Tsiounis, and M. Yung. Indirect Discourse Proofs: Achieving Fair Off-Line E-Cash. Asiacrypt '96.
FTY98. Y. Frankel, Y. Tsiounis, and M. Yung. Fair Off-Line E-Cash Made Easy. Asiacrypt '98.
FY93. M. K. Franklin and M. Yung. Secure and Efficient Off-Line Digital Money. ICALP '93, LNCS 700. Springer-Verlag, 1993.
F95. M. Froomkin. Anonymity and Its Enmities. 1 JOL (Journal of On-Line Law), art. 4.
GAO98. General Accounting Office (GAO). Private Banking: Raul Salinas, Citibank, and Alleged Money Laundering. http://www.gao.gov, 1998.
GMA+95. S. Glassman, M. Manasse, M. Abadi, P. Gauthier, and P. Sobalvarro. The Millicent Protocol for Inexpensive Electronic Commerce. In Fourth International World Wide Web Conference, 1995. Available at http://www.research.digital.com/SRC/millicent.
GS96. E. Gabber and A. Silberschatz. Agora: A Minimal Distributed Protocol for Electronic Commerce. In USENIX Workshop on Electronic Commerce, 1996.
HSW96. R. Hauser, M. Steiner, and M. Waidner. Micro-Payments Based on iKP. In Worldwide Congress on Computer and Communications Security Protocol, 1996. Available at http://www.zurich.ibm.com/Technology/Security/publications/1996/HSW96-new.ps.gz.
ibill. ibill. http://www.ibill.com.
qct. QC Technology. http://www.qctechnology.nl.
J95. M. Jakobsson. Ripping Coins for a Fair Exchange. Eurocrypt '95.
JM98. M. Jakobsson and D. M'Raïhi. Mix-Based Electronic Payments. Workshop on Selected Areas in Cryptography, 1998.
JY96a. M. Jakobsson and M. Yung. Revokable and Versatile Electronic Money. In Proc. of the 3rd CCCS, pages 76–87. ACM Press, 1996.
JaYu98. M. Jakobsson and M. Yung. On Assurance Structures for WWW Commerce. Financial Cryptography '98.
JJ99. M. Jakobsson and A. Juels. Proofs of Work and Bread Pudding Protocols. CMS '99.
JO97. S. Jarecki and A. Odlyzko. An Efficient Micropayment Scheme Based on Probabilistic Polling. In Financial Cryptography '97, LNCS 1318. Springer-Verlag, 1997.
JY96b. C. S. Jutla and M. Yung. PayTree: "Amortized-Signature" for Flexible MicroPayments. In Second USENIX Workshop on Electronic Commerce, 1996.
MN94. G. Medvinsky and C. Neuman. NetCash: A Design for Practical Electronic Currency on the Internet. In Second ACM Conference on Computer and Communications Security, 1994.
MMW98. R. C. Molander, D. A. Mussington, and P. Wilson. Cyberpayments and Money Laundering. http://www.rand.org/publications/MR/MR965/MR965.pdf, 1998.
Mon. Mondex. http://www.mondex.com.
M'R96. D. M'Raïhi. Cost-Effective Payment Schemes with Privacy Regulation. In Asiacrypt '96, LNCS 1163, pages 266–275. Springer-Verlag, 1996.
NM95. C. Neuman and G. Medvinsky. Requirements for Network Payment: The NetCheque Perspective. In IEEE COMPCON, 1995. Available at ftp://prospero.isi.edu/pub/papers/security/.
OECD96. Organization for Economic Co-operation and Development. OECD Workshops on the Economics of the Information Society, 1997.
O95. T. Okamoto. An Efficient Divisible Electronic Cash Scheme. Crypto '95.
OO89. T. Okamoto and K. Ohta. Disposable Zero-Knowledge Authentications and Their Applications to Untraceable Electronic Cash. Crypto '89.
OO91. T. Okamoto and K. Ohta. Universal Electronic Cash. Crypto '91.
Ped95. T. Pedersen. Electronic Payments of Small Amounts. Technical report DAIMI PB-495, Aarhus University, Computer Science Department, 1995.
PW97. B. Pfitzmann and M. Waidner. Strong Loss Tolerance of Electronic Coin Systems. ACM Transactions on Computer Systems, 15/2, 1997, 194–213.
Riv97a. R. Rivest. Electronic Lottery Tickets as Micro-Cash. In Financial Cryptography '97, LNCS 1318. Springer-Verlag, 1997.
Riv97b. R. Rivest. Lottery Tickets as Micro-Cash. Financial Cryptography '97, Rump Session, 1997.
RS96. R. Rivest and A. Shamir. PayWord and MicroMint: Two Simple Micropayment Schemes. May 1996 (also in CryptoBytes).
ST99a. T. Sander and A. Ta-Shma. Flow Control: A New Approach for Anonymity Control in Electronic Cash Systems. Financial Cryptography '99.
ST99b. T. Sander and A. Ta-Shma. Auditable, Anonymous Electronic Cash (Extended Abstract). Crypto '99.
S97. B. Schoenmakers. Basic Security of the ecash Payment System. In Computer Security and Industrial Cryptography: State of the Art and Evolution, LNCS series, 1997.
S96. D. Simon. Anonymous Communication and Anonymous Cash. Crypto '96.
Sta96. M. Stadler. Publicly Verifiable Secret Sharing. In Eurocrypt '96, LNCS 1070, pages 190–199. Springer-Verlag, 1996.
SPC95. M. Stadler, J.-M. Piveteau, and J. Camenisch. Fair Blind Signatures. Eurocrypt '95.
SV97. J. Stern and S. Vaudenay. Small-Value Payment: A Flexible Micropayment Scheme. In Financial Cryptography '97, LNCS 1318. Springer-Verlag, 1997.
T96. J. D. Tygar. Atomicity in Electronic Commerce. ACM Symposium on Principles of Distributed Computing, 1996.
Tel. TelPay. http://www.telpay.ca.
vSN92. S. von Solms and D. Naccache. On Blind Signatures and Perfect Crimes. Computers & Security, 11:581–583, 1992.
W98. M. Waidner. Open Issues in Secure Electronic Commerce, 1998.
XYZZ99. S. Xu, M. Yung, G. Zhang, and H. Zhu. Money Conservation via Atomicity in Fair Off-Line E-Cash. In ISW '99, Kuala Lumpur, Malaysia. Springer-Verlag, 1999.
Yac97. Y. Yacobi. On the Continuum Between On-Line and Off-Line E-Cash Systems. In Financial Cryptography '97. Springer-Verlag, 1997.
PCA: Jini-based Personal Card Assistant
Roger Kehr, Joachim Posegga, and Harald Vogt
Deutsche Telekom AG, Technologiezentrum, IT Security Research/FE34, D-64307 Darmstadt
{Kehr|Posegga|VogtH}@tzd.telekom.de
Abstract. We describe the Personal Card Assistant, a scenario that brings together PDAs and smartcards. The underlying idea is that a PDA acts as a personal device for controlling a smartcard attached to it using an asymmetric key pair. We describe how such an approach can be used for creating digital signatures: in particular, we can circumvent the problems involved with untrusted document viewers in this context. We consider what sort of network infrastructure is required for using the PCA and outline how Jini can be used for integrating the PDA and smartcards into unknown service networks a mobile user is confronted with.
1 Introduction
Cryptography can provide security services based on well-founded mathematics. A key problem with applying cryptography to real-world problems is, however, the interface to real life. In this paper we investigate an application area where this problem is very evident: the presentation of a document that is to be digitally signed. Digital signatures for applications like electronic commerce require high security standards. Some countries have already proposed to embed digital signatures into legal frameworks, the most prominent example being the German digital signature law "Signaturgesetz" [1,2]. This law requires (among other things) the following ITSEC security levels for a system used for dealing with digital signatures:
– the storage component for the secret key (usually a smartcard) must meet the criteria of ITSEC E4 [3], and
– the component for presenting a document (the document viewer) must meet ITSEC E2.
Both requirements form the basis for digital signatures that are legally binding under this law. When considering this legal framework from a technological perspective, it is evident that one of the weakest components in practice is the document viewer running on a PC with a standard operating system like Windows: even if
evaluated at ITSEC E2, the PC software offers little protection against manipulation. This is particularly problematic if the platform used for viewing such a document is not "under the control" of the digitally signing party but belongs to the other party, which wants someone to sign a document: it is fairly trivial to manipulate such a system, so a person signing a contract or a money order in an unknown, untrusted environment cannot be sure what her smartcard actually signs. This could turn out to be a major obstacle to the widespread use of digital signatures in practice. This problem is, in principle, easy to solve: raise the security level and require a closed, trustworthy system for applying digital signatures. Unfortunately, this solution is extremely hard to put into practice, both because it is expensive and because the dedicated hardware that would be required simply does not fit into today's computing world. This paper proposes a pragmatic approach that reduces the risks of using digital signatures by integrating a customer's PDA into the creation of digital signatures: the PDA is used as a document viewer, and it controls the smartcard by unlocking the card's signing function using cryptographic means. We refer to this approach as the Personal Card Assistant (PCA). A PCA does not increase security per se, since a PDA can be attacked similarly to a PC. However, as we assume that a PDA belongs to, and therefore is under the control of, the person who wishes to apply a digital signature, such a device will in practice be more trustworthy for that person than, for instance, a vendor's PC. We therefore regard our approach as pragmatically more secure, making users of digital signatures feel more comfortable with the technology. The notion of a "trust amplifier" for such a PCA captures this quite precisely. We will see in the sequel of this paper that this idea, very straightforward at first sight, opens up a number of very interesting questions from a technological and research perspective. The paper is organised as follows: Section 2 introduces the concepts and components behind the Personal Card Assistant, and Sect. 3 describes and analyses the cryptographic protocol used in our scenario. Section 4 investigates the requirements of a network and service infrastructure for implementing the PCA scenario and proposes a Jini-based approach for it. We discuss related work in Section 5 and finally draw conclusions from our work in Section 6.
2 The Personal Card Assistant
A smartcard is a (comparably) tamper-proof device that offers cryptographic and other functions that can be accessed over a simple I/O interface. For performing critical functions, the legitimate user is required to authorise himself to the card by entering a PIN code (often referred to as card holder verification, CHV). As a smartcard has no interface for interacting directly with a human being, all communication is done via a card reader, using a keyboard and screen that are either built into the reader or attached to a computer. It is planned to integrate keyboards, biometric sensors and screens directly on the card, but that is not yet common.
Thus, smartcard-based applications rely upon the trustworthiness of the environment the card is working in. Our PCA scenario aims at improving this situation; the PCA consists of a secure core component, the smartcard, and a conventional personal computing device, the PDA. The two can either be tightly coupled, by integrating the card into the PDA, or they can be coupled by cryptographic means. We consider the latter case, where the coupling is achieved by the fact that each component knows the public key of the other. Key exchange takes place in a secure environment, e.g. when the smartcard is personalised or purchased. In the PCA scenario, the role of the smartcard is to provide both secure storage and a trusted platform for cryptographic computations, while the PDA provides a user interface, computing power, and additional storage. The sharing of public keys enables both to establish a secure communication channel even if they are physically separated. An application will typically run on the PDA, making use of its I/O capabilities, and access the smartcard for cryptographic functions. But it is also possible to run the application on the smartcard and use the PDA simply as a supplementary device; this is comparable to the GSM SIM Application Toolkit approach [4], where an application running on the smartcard controls the operation of a mobile phone. A PDA is open to attacks similar to those applicable to a PC. However, it is likely that the PDA owner accepts a much more restrictive security policy on her PDA than on her workstation, e.g. concerning the download and execution of unknown software. It is also realistic to set a separate PDA aside for performing critical transactions such as digital signatures. From a pragmatic viewpoint, one may accept the PCA as a "trust amplifier" due to its nature of being directly associated with a person. To its owner, it is much more trustworthy than an unknown terminal, controlled by strangers, located in an untrusted environment.
3 The PCA for Digital Signatures
We present a scenario where the use of the PCA can enhance the process of creating a digital signature. Our example describes a setting where a document created by one party, e.g. a contract offered by a vendor, is to be signed by a second party, the customer. Our approach involves the following components:
– a PC or workstation that is used to create the document to be signed; this could be a vendor's terminal;
– a smartcard reader, either connected to this PC or being a separate device;
– a PDA that belongs to the person who wants to sign the document;
– a smartcard for signing the document by encrypting a hash value.
We require that the PDA and the smartcard each have the other's public key stored, i.e. together they constitute a PCA. We assume that the components can communicate over arbitrary communication channels; as an example, one can imagine using the PDA's infrared interface.
Fig. 1. The PCA in the Context of Signing Documents
3.1 A Bird's Eye View of the Scenario
Figure 1 outlines the interworking of the components forming our approach:
1. A document to be signed is created on the PC, and this document is stored in a format that can be displayed on the PDA.
2. The document is transferred to the PDA.
3. The user checks the document on the PDA and approves it by signing the document's hash with the PDA's secret key.
4. The document hash is transferred to the smartcard, which extracts the document's hash value again and creates the final signature.
This procedure differs from the standard approach to using digital signatures in two important points: first, we "route" the document over the PDA so it can be checked by the signing person; second, we assume that the smartcard of the signing person and the PDA form a pair, tied together by their public keys. In particular, the card will not sign any data unless the data were "approved" by the PDA's secret key. We elaborate the concrete procedure subsequently.
3.2 The Underlying Cryptographic Protocol
Hereafter, we will use the identifiers E_Crd and D_Crd to denote the smartcard's public and private keys, respectively, and similarly E_PDA and D_PDA for the PDA. The application of a key K to a message M, e.g. encrypting the message, will be denoted by K(M). Figure 2 visualizes the communication between the components forming our approach:
Fig. 2. Cryptographic View of Information Flow
PC → PDA: The PC sends a document D to the PDA.
PDA's function: The PDA displays the document D and computes h = hash(D). If the user wishes to sign the document, she approves it. We propose to implement this by having the user enter the card's PIN, which has the side-effect that the PDA is used as a PIN pad.¹
PDA → Smartcard: The PDA sends the message M = E_Crd(D_PDA(h, PIN)) to the card. In plain English: the PDA signs the document hash h and the PIN with its private key and encrypts the resulting data with the card's public key. Note that the contents of M can only be reconstructed with the secret key D_Crd matching E_Crd.
Card's function: The card deciphers the message by computing (h, PIN) = E_PDA(D_Crd(M)), i.e., the card extracts the PIN and the hash h from the message M using its own private key and the PDA's public key. The procedure aborts if verification of the PIN fails.
Card → PC: The card sends D_Crd(h) to the PC, which is the document hash signed with the card's secret key. This constitutes the final signature.
¹ The scenario and the subsequent protocol can easily be modified to allow a user to enter the PIN using a (secure) PIN pad attached to the card reader. In this case, the user's approval shall be implemented by other appropriate means, like pressing an "OK" button.
By signing the data sent to the card, the PDA assures the authenticity of the data. This is necessary since the smartcard will only sign a hash value that originates from the PDA. Thanks to the PDA's signature, separate steps for authentication and key exchange are avoided. Entering the PIN ensures that the signing process is authorised by the owner of the PCA; this addresses the issue that PDAs are not very well protected against unauthorised use. To protect the PIN from attackers intercepting the message to the card, the message is encrypted with the card's public key.
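For concreteness, the following sketch runs both sides of the protocol in one program. It is an illustration only: the class name is hypothetical, RSA with SHA-1 and PKCS#1 v1.5 padding is an assumption (the paper does not fix the algorithms), and the paper's signature with message recovery is replaced here by a signature with appendix that is encrypted together with h and the PIN.

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.MessageDigest;
    import java.security.Signature;
    import java.util.Arrays;
    import javax.crypto.Cipher;

    public class PcaProtocolSketch {
        public static void main(String[] args) throws Exception {
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(1024);
            KeyPair pda = gen.generateKeyPair();      // (E_PDA, D_PDA): 128-byte signatures
            gen.initialize(2048);
            KeyPair card = gen.generateKeyPair();     // (E_Crd, D_Crd)
            byte[] referencePin = "1234".getBytes();  // stored inside the card

            // PDA side: h = hash(D); M = E_Crd(h, PIN, D_PDA(h, PIN)).
            byte[] d = "contract text shown on the PDA".getBytes();
            byte[] h = MessageDigest.getInstance("SHA-1").digest(d);
            Signature s = Signature.getInstance("SHA1withRSA");
            s.initSign(pda.getPrivate());
            s.update(h); s.update(referencePin);      // PIN entered by the user
            byte[] approval = s.sign();
            byte[] payload = new byte[20 + 4 + 128];
            System.arraycopy(h, 0, payload, 0, 20);
            System.arraycopy(referencePin, 0, payload, 20, 4);
            System.arraycopy(approval, 0, payload, 24, 128);
            Cipher rsa = Cipher.getInstance("RSA/ECB/PKCS1Padding");
            rsa.init(Cipher.ENCRYPT_MODE, card.getPublic());
            byte[] m = rsa.doFinal(payload);          // 152 bytes fit under 2048-bit RSA

            // Card side: decrypt M, check PDA approval and PIN, then sign h.
            rsa.init(Cipher.DECRYPT_MODE, card.getPrivate());
            byte[] p = rsa.doFinal(m);
            byte[] h2 = Arrays.copyOfRange(p, 0, 20);
            byte[] pin2 = Arrays.copyOfRange(p, 20, 24);
            Signature v = Signature.getInstance("SHA1withRSA");
            v.initVerify(pda.getPublic());
            v.update(h2); v.update(pin2);
            if (!v.verify(Arrays.copyOfRange(p, 24, p.length))
                    || !Arrays.equals(pin2, referencePin))
                throw new SecurityException("PDA approval or PIN check failed");
            Signature fin = Signature.getInstance("SHA1withRSA");
            fin.initSign(card.getPrivate());
            fin.update(h2);                           // D_Crd(h): the final signature
            System.out.println("final signature: " + fin.sign().length + " bytes");
        }
    }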
3.3 Informal Threat Analysis
Under the assumption that the PDA and the card of the scenario described in Section 3.2 are trustworthy, the protocol can only be attacked by manipulating data sent between the components:
PC → PDA: An attacker does not gain anything from manipulating D, since the document will be checked by the signer. Neither does replaying this message, or preventing it from arriving, offer any advantage to attackers.
PDA → Card: Under the assumption of secure cryptographic algorithms and sufficient key lengths, the contents of the message M cannot be reconstructed.² Since the range of the PIN is restricted, there is a slight chance that a forged message gets signed by the card, even if D_PDA is not known. However, the signature produced will surely not be valid for any document, as the hash value reconstructed by the card will be totally random and will not correspond to any meaningful document. A replay of this message to the card would create only duplicates of the signature computed by the card, which is acceptable. Blocking the communication between the PDA and the card prevents only the generation of signatures.
Card → PC: Since only the completely generated signature is transmitted, there is no meaningful attack left. The signature can easily be verified by the vendor or anybody else who is interested.
Note that the assumption about the PDA's trustworthiness made above is not necessarily justified: a PDA is usually not a secure system and is, in principle, as easy to manipulate as a PC if an attacker can temporarily control the device. However, in practice it is certainly more difficult to attack such a mobile device than a PC.
² Note that encrypting the PIN alone, without the hash h, changes this situation: since usually only 10^|PIN| values for the PIN exist, a brute-force attack enumerating possible PINs would be possible. The attached hash value h (though known) prevents such attacks, since the contents of the encrypted message become too long to be enumerated.
4 Infrastructures for the PCA
In the previous sections we have shown the usefulness of the Personal Card Assistant by giving an example application: digitally signing documents in untrusted environments. Still missing are an evaluation of the practical feasibility of our approach and suggestions for how the infrastructure of environments that support PCAs could look. The PCA scenario is supposed to be used in environments where the network architecture, the names and locations of servers, card readers, etc. are unknown, so we need means to integrate the PCA into a local service network.
4.1 Jini
In distributed and mobile scenarios like ours it is important that communication partners such as the PDA and a vendor terminal find each other seamlessly and efficiently. Jini [5,6] is a recently released technology from Sun Microsystems for federating network devices and services. It explicitly addresses many of the problems involved in establishing spontaneous client-server interaction in a-priori unknown environments. Jini is based on the Java facilities for code shipping among different Java virtual machines (JVMs). A so-called lookup service [7] acts as the central registration authority for services. Arbitrary services register with the lookup service using a bootstrapping protocol [8] and provide a serialised Java object, called a service proxy, together with some additional descriptive information. Potential Jini clients contact the lookup service to query for services they are interested in, then download and invoke the associated proxy objects. These proxies are executed in the virtual machine of the client and implement the communication with the (mostly) remote Jini service they were launched from. We have chosen Jini as a middle tier for dynamically establishing connections between the different services that comprise our scenario, since it seems to offer much of the functionality needed.
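To illustrate how a provider joins a federation, the following sketch, based on the Jini 1.0 discovery and lookup APIs, registers a proxy object with every lookup service it discovers. The SigningServiceProvider class name and the proxy object are hypothetical, and lease renewal and robust error handling are omitted.

    import java.io.IOException;
    import java.rmi.RemoteException;
    import net.jini.core.lease.Lease;
    import net.jini.core.lookup.ServiceItem;
    import net.jini.core.lookup.ServiceRegistrar;
    import net.jini.discovery.DiscoveryEvent;
    import net.jini.discovery.DiscoveryListener;
    import net.jini.discovery.LookupDiscovery;

    // Registers a (serialisable) service proxy with every discovered
    // lookup service, as a Document Signing Service provider would.
    public class SigningServiceProvider implements DiscoveryListener {
        private final Object proxy;  // the downloadable service proxy

        public SigningServiceProvider(Object proxy) throws IOException {
            this.proxy = proxy;
            LookupDiscovery disco = new LookupDiscovery(LookupDiscovery.ALL_GROUPS);
            disco.addDiscoveryListener(this);  // multicast bootstrap protocol [8]
        }

        public void discovered(DiscoveryEvent ev) {
            ServiceRegistrar[] regs = ev.getRegistrars();
            for (int i = 0; i < regs.length; i++) {
                try {
                    regs[i].register(new ServiceItem(null, proxy, null),
                                     Lease.FOREVER);  // renew the lease in practice
                } catch (RemoteException e) {
                    // registrar unreachable; ignore and try the others
                }
            }
        }

        public void discarded(DiscoveryEvent ev) { /* nothing to clean up */ }
    }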
4.2 PCA-Scenario with a Jini Infrastructure
In our PCA scenario we envision a Jini-enabled infrastructure consisting of a lookup service, a PDA, a card reader, and some other service such as a vendor's terminal. The PDA and the card reader must themselves register with the lookup service to offer the services relevant to the application. In our scenario the PDA would offer a Document Signing Service that displays the contract document to the customer (cf. Figure 3, upper part). If she accepts the document after reading it on the PDA, she inserts her signature smartcard into the vendor's card reader. The reader recognises the smartcard and registers a PCA Signing Service with the lookup service (cf. Figure 3, bottom left). The PDA receives the respective proxy object and passes the encrypted and signed document hash value to the proxy. The proxy encodes the data and sends it to the smartcard (cf. Figure 3, bottom right).
[Figure 3 (labels reconstructed from the garbled drawing): upper part, the PDA discovers the Jini lookup service and registers its "Document Signing Service", storing a Java interface for it; bottom left, the card reader joins as a Jini service, and the PDA checks the document and queries the lookup service for the PCA Signing Service; bottom right, a CAD (card reader) proxy implementing the communication with the PDA is downloaded, and the communication between card reader and PDA is established.]
Fig. 3. PCA and Jini
The smartcard reconstructs the document's hash value from the received data. It applies the signature key to it, thus creating the digital signature, and returns it to the PDA's proxy object, which in turn returns the final signature to its client.
4.3 Integration of the PDA into the Jini Federation
We have chosen the 3COM Palm Pilot III for implementing our prototype, since it is in widespread use and many tools are available on the net. It offers a serial connection via the cradle and an infrared device that is compliant with the IrDA standard [9]. Future PDAs might also use wireless technologies such as Bluetooth [10,11]. On the hardware level the local net must offer an entry point for the PDA; in any case there must be a point-of-presence the PDA can be connected to, which in our case is a standard PC. Since, to our knowledge, no fully Java 2-compliant JVM exists for the Pilot yet, we needed to separate the functionality of the Document Signing Service into a Jini part, which runs on the host the PDA is connected to, and the application on the Pilot, which implements the display engine and the encryption algorithms. Both parts of the service communicate via a protocol running over
the serial line. Hence, the registration of the service with the lookup service is done on the host, whereas the security-critical part of the application runs on the Pilot.
4.4 Integration of Smartcards into the Jini Federation
A smartcard reader device can support Jini either by directly integrating a Java VM within the reader, or by attaching it to a device bay (a technique for making non-Java devices available to Jini federations [12]) or to a workstation where a dedicated process performs the Jini tasks. Thus, it is able to act as an ordinary Jini device, registering services that employ smartcards. The types of services offered can range from basic interfaces accessing the primitive functions of the reader to sophisticated services corresponding to the functionality of the inserted smartcards. This is similar to approaches like the OpenCard Framework [13,14] and PC/SC [15]. These frameworks hold static information about smartcards and the services they offer. Both supply high-level interfaces for applications to access smartcards, as well as direct access to readers. For Jini, it is desirable to detect the functionality of a smartcard dynamically, when it is inserted into the reader. After that, corresponding services can be registered with the lookup service. The service proxies should offer application-level services which hide the characteristics of their implementation on the smartcard. Java smartcards in particular have dynamic functionality, i.e. new applets can be downloaded to the card while others may be deleted from it. Unfortunately, the directory services keeping track of the installed applets are manufacturer-specific; a generic service detection facility must take this into account. The ATR (answer-to-reset) string sent by a card when connected gives at least static information about the card, e.g. the manufacturer, card type, etc. This information can be used to load a procedure (implemented by a Java class) from a dedicated location, which could, for instance, be a unique Web address depending on the card's ATR string. Such an executable could then run a card-specific protocol to create a directory of the installed applets. This information can be passed to the reader device, which registers corresponding proxies for the applications inside the card with the lookup service. We have realized a Jini smartcard service which implements a primitive interface to a card reader attached to a workstation. It allows for the exchange of card APDUs (the data packets used in communication with smartcards), which is the lowest common denominator among smartcard interfaces. Applications must be aware of the APDUs a certain smartcard understands and must themselves encode requests according to the card's own protocol. This heavily limits the range of cards an application can interact with. However, this primitive interface is the base upon which higher-level services can be built. The smartcard service is registered with the lookup service by the workstation the card reader is connected to. Proxy objects communicate through Java RMI with a server object on that workstation, which passes the requests on to the card reader.
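As an illustration of the primitive service just described, its remote interface might take the following shape; the interface and method names are hypothetical, not taken from the paper.

    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // A minimal APDU-level smartcard service: clients send command APDUs
    // and receive the card's response APDUs; the ATR allows a client to
    // select a card-specific protocol.
    public interface SmartcardService extends Remote {
        byte[] getATR() throws RemoteException;
        byte[] exchangeAPDU(byte[] commandApdu) throws RemoteException;
    }

Higher-level proxies, e.g. one generated from a Java Card applet description, could be built on top of such an interface while hiding the APDU encoding from applications.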
4.5 Protection of Services
Providing access to smartcards is potentially dangerous in an open environment like a Jini federation, especially when offering a simple service like the exchange of APDUs. A malicious client may perform a brute-force attack on the authentication codes of the card, which will either reveal the codes or lock up the card, i.e. make it unusable even for the legitimate user. It is therefore desirable that only authorised clients can make use of such services. There is as yet no standard mechanism in Jini which provides this kind of security. A possible solution might be a Kerberos-like authentication service [16] that makes services available to trusted clients only. The PCA has the advantage that the PDA (the client) and the smartcard (the server) are tightly coupled: the smartcard accepts only authorised requests, which must be signed by the PDA. A separate authentication service is not required in this case.
5 Related Work
PDAs are becoming popular devices, but PDA-based applications solving security-related problems are sparsely described in the literature. Pfitzmann et al. [17] discuss portable end-user devices (POBs) and security modules and define a number of requirements for such devices. They observe that trustworthy POBs do not exist and conclude that therefore "the development of secure applications should concentrate on protocols and procedures"; we have presented an instance of such an approach. Daswani and Boneh [18] consider PDAs for performing cryptographic computations for electronic commerce applications. Their approach differs from ours in that they describe a scenario where the PDA replaces the smartcard, rather than complementing the card as we propose. Recently, the Safe Internet Programming Group at Princeton University published on the Web information about a project that aims at integrating smartcards and PDAs [19]; this approach points in a similar direction to ours, but no results seem to have been published yet. Our approach to integrating smartcards into a Jini-based infrastructure is related to the OpenCard Framework (OCF) as well as the PC/SC architecture. Within these frameworks, the mapping from ATR strings (card types) to services is triggered by a separate installation process which introduces new smartcards or changed functionality. Both are restricted to the domain of a single workstation or PC, rather than offering the corresponding services in a local network. A similar, but more low-level approach is followed by the Direct Method Invocation (DMI) mechanism proposed by Gemplus [20]. When a Java Card applet is designed, an interface is fixed from which a stub object is created. A call to the stub is translated into APDUs which are sent to the smartcard. There, the method call is reconstructed and the respective method is executed. This approach hides the nasty details of creating APDUs from applications but
is only applicable to new applications implemented on Java Cards. Stub objects could, e.g., be used to implement service objects in OCF. An extended approach would be useful for Java Cards in Jini environments: a proxy object could be created automatically from a Java Card applet description. As mentioned in our scenario description, that object would have to be initialised with a channel to the card reader.
6 Conclusion and Future Work
In this paper we have presented the Personal Card Assistant (PCA), which comprises two different devices, a PDA and a smartcard, that together implement a security-sensitive application. Both devices are tightly bound together by the public/private keys they share: the smartcard does not perform its task without the PDA, and the PDA cannot perform the task without the help of the smartcard. We have shown the applicability and usefulness of our approach with the scenario of digitally signing documents. Our prototype not only verifies the underlying cryptographic protocol and its implementation on the PDA and smartcard, but furthermore aims at a general solution in terms of establishing "spontaneous" networking among the participating devices using Jini. We believe that networking infrastructures that solve problems similar to those addressed by Jini, such as the Service Location Protocol [21] or the Secure Directory Service [22], will be of considerable importance in the future for enabling the widespread use of PCA-based applications. Subject to further research is the question whether the PCA can be applied to problems in the domains of electronic commerce and electronic cash.
Acknowledgements. We would like to thank Dr. Klaus Huber for useful comments and suggestions on an earlier version of this paper.
References
1. Deutscher Bundestag. Gesetz zur digitalen Signatur. http://www.regtp.de/Fachinfo/Digitalsign/neu/rechtsgr.htm, 22 July 1997. English version ("Digital Signature Act") available from http://www.regtp.de/English/laws/download.htm.
2. Deutscher Bundestag. Verordnung zur digitalen Signatur. http://www.regtp.de/Fachinfo/Digitalsign/neu/rechtsgr.htm, 22 July 1997. English version ("Digital Signature Ordinance") available from http://www.regtp.de/English/laws/download.htm.
3. Commission of the European Communities. Information Technology Security Evaluation Criteria. Directorate XXIII/F, SOG Information Security, 1991.
4. European Telecommunications Standards Institute. Specification of the SIM Application Toolkit (GSM 11.14), 1998. http://www.etsi.org.
5. Sun Microsystems Inc. Jini Architecture Specification, Revision 1.0, January 1999.
6. Joachim Posegga. Die Sicherheitsaspekte von Java. Informatik-Spektrum, 21(1):16–22, 1998.
7. Sun Microsystems Inc. Jini Lookup Service Specification, Revision 1.0, January 1999.
8. Sun Microsystems Inc. Jini Discovery and Join Specification, Revision 1.0, January 1999.
9. Infrared Data Association. http://www.irda.org/standards/specifications.asp.
10. Jaap Haartsen, Mahmoud Naghshineh, Jon Inouye, Olaf J. Joeressen, and Warren Allen. Bluetooth: Vision, Goals, and Architecture. ACM Mobile Computing and Communications Review, 2(4), October 1998.
11. Bluetooth Technology Overview. http://www.bluetooth.com.
12. Sun Microsystems Inc. Jini Device Architecture Specification, Revision 1.0, January 1999.
13. Dirk Husemann and Reto Hermann. OpenCard Framework. Technical report, IBM Corporation, 1998.
14. OpenCard Forum. http://www.opencard.org.
15. Specifications for PC–ICC Interoperability. http://www.smartcardsys.com.
16. B. Clifford Neuman and Theodore Ts'o. Kerberos: An Authentication Service for Computer Networks. IEEE Communications Magazine, 32(9):33–38, September 1994.
17. A. Pfitzmann, B. Pfitzmann, M. Schunter, and M. Waidner. Vertrauenswürdiger Entwurf portabler Endgeräte und Sicherheitsmodule. In H. H. Brüggemann and W. Gerhardt-Häckl, editors, Verläßliche IT-Systeme, Braunschweig, 1995.
18. Neil Daswani and Dan Boneh. Experimenting with Electronic Commerce on the PalmPilot. In Financial Cryptography '99, Conference Pre-Proceedings, Anguilla, BWI, 22 February 1999.
19. Safe Internet Programming Group, Princeton University. Smarter Smartcards – Using Devices That Support User Interaction. http://www.cs.princeton.edu/sip/projects/handheld/, 1999.
20. Jean-Jacques Vandewalle and Eric Vétillard. Developing Smart Card-Based Applications Using Java Cards. In Proceedings of the Third Smart Card Research and Advanced Application Conference (CARDIS '98), Louvain-la-Neuve, Belgium, September 1998.
21. J. Veizades, E. Guttman, C. Perkins, and S. Kaplan. Service Location Protocol (SLP). Internet RFC 2165, June 1997.
22. Steven Czerwinski, Ben Y. Zhao, Todd Hodes, Anthony Joseph, and Randy Katz. An Architecture for a Secure Service Discovery Service. In Fifth Annual International Conference on Mobile Computing and Networks (MobiCom '99), Seattle, WA, August 1999. Draft version, accepted for publication.
Jini and Java are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries.
An X.509-Compatible Syntax for Compact Certificates
Magnus Nyström¹ and John Brainard²
¹ RSA Laboratories, Box 10704, S-121 29 Stockholm, Sweden
² RSA Laboratories, 20 Crosby Drive, Bedford MA 01730, USA
{magnus, jbrainard}@rsa.com
Abstract. Given an identified need for a compact format for digital certificates in constrained environments like embedded or high-volume systems, an X.509 [22] compatible proposal is described and compared with previous and related work.
1 Introduction
The use of public-key cryptography on the Internet has, to date, been driven by two major applications: SSL-protected Web pages and S/MIME secure e-mail. Both of these applications utilize public-key certificates in the format specified in the X.509 standard. A typical user certificate may easily exceed 1,000 bytes, while a CA certificate may be twice that.¹ These certificate sizes are not a major concern in the desktop and portable PC environment, where multiple megabytes of memory and high-speed network connections are available. Recently, however, a new class of devices has been gaining in popularity for Internet use: personal digital assistants (PDAs), cellular phones, pagers and other such devices. These typically have limited memory and a wireless network interface of relatively low bandwidth. For these devices, storing and transmitting certificates and chains of certificates, where each certificate exceeds a kilobyte, may be problematic (cf. [28], [18], [19]). Another area where large-footprint certificates may create difficulties is the use of smart cards in conjunction with public-key infrastructures. Mobile users may wish to use the same set of credentials with multiple systems. Smart cards offer the possibility of truly portable credentials by removing the credentials from the desktop and storing and using them within the secure perimeter of the smart card. Most smart cards have relatively small memories, typically a few kilobytes, and a low-bandwidth I/O connection to the card reader. Here, as in the case of wireless devices, the transfer and storage of significant numbers of "heavyweight" certificates is not viable. If secure messaging and Internet e-commerce are to be extended into today's wireless devices and smart cards, a more compact representation of the information in existing public-key certificates is required.
¹ Higher security requirements for CA keys imply larger key sizes, CA policy statements are stored in certificate extensions, etc.
2 Goals and Criteria for Compact Certificates
The most obvious goal for a compact certificate is that it be compact. Other goals include limited resource requirements for processing the certificate, maintenance of adequate security, and compatibility with existing certificate-based services and applications.
Reduced Footprint
In defining a compact certificate format we need to reduce the storage requirements for the certificate, but without losing the functionality that makes the certificate meaningful and useful. The certificate must, within an appropriate context, identify the holder of the public key. It must also provide a secure binding between the key and its holder.

A combination of prudent security practices and digital-signature legislation (cf. [4]) is moving the industry toward a model where each user has different keys for encryption, authentication, and non-repudiation. A plausible goal, therefore, would be the storage of three different private keys and the corresponding public-key certificates in a device with four kilobytes of memory. If we assume that the device will need one kilobyte for basic operation and transient application data, and that the private keys and associated parameters will occupy another kilobyte, we are left with two kilobytes of storage for the certificates. This limits the average size of a certificate to just above half a kilobyte, or 512 bytes.

2.2 Limited Processing Requirements
In developing software, it is often possible to reduce the memory requirements of an application by increasing the processing time. This type of time-memory tradeoff cannot be used extensively for compact certificates, since devices that have restricted memory size are also likely to have very limited processing power. In addition to the burden on the constrained device, additional computation requirements may also cause problems for application servers that interact with these devices. While application servers are free of the memory and processing limitations of wireless devices and smart cards, they must perform operations on behalf of a large number of devices in a small amount of time. Any significant increase in the time required to process a certificate could impair the server’s ability to process transactions at the needed rate. Some additional processing, above that required for existing X.509 certificates, may be required. In particular, a more computationally intensive data encoding method may be used. This additional processing required for encoding should be small in comparison to that required for public or private key operations.
2.3 Security
Public-key certificates are used to provide a secure binding between an entity and its public key. Any modification to the certificate format must provide at least the same level of security as existing certificates. As an example, it is tempting to consider reducing the modulus size of the public/private key pair. This could be accomplished without significant changes to existing infrastructure, but unfortunately moduli are chosen for their security, and a substantial reduction would create keys that can provide only short-term security.

In any new format, the security of the data representation must be examined as well. Signatures must be properly padded to avoid possible forgery, and parameter specifications must have sufficient integrity protection to avoid substitution attacks.

2.4 Compatibility with Existing Infrastructure
To the extent possible, compact certificates should work interchangeably with existing X.509 certificates in existing applications, in order to leverage the existing infrastructure. Any new format that requires wholesale changes to the browser, server, and CA infrastructure is unlikely to be widely adopted.
3 Status of Compact Certificate Activities

By defining an "X.509-compatible certificate" as a certificate which is a DER-encoding [24] of a signed ASN.1 [23] value which is syntactically compatible with the syntax defined in [22], it is possible to characterize compact certificate approaches into various classes, as follows:

– X.509-compatible but constrained certificates;
– non-X.509-compatible certificates convertible to X.509-compatible ones;
– non-X.509-compatible certificates not possible to convert to X.509-compatible ones; and
– approaches representing new paradigms.

This section presents a survey of approaches, classified with regard to this model.

3.1 X.509-Compatible Compact Certificate Approaches
The Swedish "Allterminal" Specifications. These specifications, created by the Swedish Agency for Administrative Development ("Statskontoret"), are a family of interface specifications that forms a security architecture for IT systems, taking end-user workstations as a basis. A basic concept in this architecture is end-user authentication, and for this reason, IC cards are to be used whenever possible. Since these specifications were developed in the early '90s, and storage space on IC cards was even more constrained then than it is now, some tricks were used in order to fit end-user certificates on these cards.

The ASL 101-1 [2] specification requires end users to have at least two certificates on their cards, one for authentication and key-exchange purposes and the other for digital-signature purposes. By storing fields common to both certificates in a certain file ("EFCCert", for Elementary File Common Certificate data), some savings are possible. The common fields are certificate version, signature algorithm, issuer name, validity, and subject name. Other fields (certificate serial number, public key, and signature) are stored separately for each certificate, with a possibility to override the common fields. For two X.509 certificates of 500 bytes each (ASL 101-1 assumes 512-bit RSA keys), this normally yields a 30% saving: the common certificate data file will be approximately 300 bytes and the individual data approximately 200 bytes per certificate, or about 700 bytes in total instead of 1,000.

3.2 Non-X.509-Compatible Certificates Convertible to X.509 Certificates
SET Compressed Certificates. The Secure Electronic Transaction protocol [16] is used to securely process credit card transactions over the Internet. The user is authenticated using a chain of X.509 certificates, from the end user through a well-defined CA hierarchy to a single root certificate. Such a certificate chain, when stored in DER-encoded X.509 format, may require between four and eight kilobytes of storage. A working group has proposed [19] a method for compressing these certificate chains to make them suitable for storage in smart cards.

The SET working group proposal achieves most of its savings by exploiting the properties of the certificate chain. Since a certificate is always preceded in the chain by its issuer, the issuer name and public-key algorithm information may be omitted from the stored certificate. The resulting representation, in ASN.1 notation, is as follows:

SETCardChain ::= SEQUENCE {
    version  INTEGER { cc1(1) } DEFAULT cc1,
    root     RootCertificate,                  -- Root CA
    bca      CompressedCertificate,            -- Brand CA
    gca      CompressedCertificate OPTIONAL,   -- Geo-political CA
    cca      CompressedCertificate,            -- Cardholder CA
    ch       CompressedCertificate             -- Cardholder
}

RootCertificate ::= CHOICE {
    iRoot    INTEGER,   -- Identifies generation of root certificate
    cRoot    CompressedCertificate
}
CompressedCertificate ::= SEQUENCE {
    version       INTEGER { v3(2) } DEFAULT v3,
    serialNumber  INTEGER (0..MAX),
    signature     CAlgorithmIdentifier DEFAULT sha1WithRSAEncryption,
    validity      CValidity,
    subject       CompressedName (SIZE(1..5)),
    subjectPKI    CSubjectPublicKeyInfo,
    extensions    CExtensions,
    signed        Signed
}

A typical SET certificate chain comprising four levels (omitting the optional geo-political CA), in compressed form and encoded using the Packed Encoding Rules (PER) from [25], can be represented in less than 2,000 bytes. This method works well in the well-defined hierarchy of SET, but it may not be usable in applications where the CA hierarchy is more complex and where the path from the user certificate to the root is not as obvious.

"Parameterized Certificates". Ford and Solo recently proposed [7] parameterized certificates, intended as an alternative to ANSI's X9.68 effort at the time. The proposal defines a parameterized certificate as an X.509 certificate involving two separate constructs:

– a certificate template, which is common to a family of certificates all of which have the same values in certain fields or sub-fields of the certificate (the other fields are considered parameterized); and
– a mini-certificate, which supplies values for the parameterized fields of the certificate template, corresponding to one particular certificate instance.

Portable client devices need to store and transmit only the mini-certificate. Receivers are assumed to have access to the certificate template and need to reconstruct the original certificate from the template and the mini-certificate before validation. The proposal does not describe the encoding of templates and mini-certificates, but mentions that XML or DER-encoded ASN.1 could be used. Whatever the solution, receivers have to be able to reconstruct the original DER-encoded X.509 certificate from its parts, as illustrated by the sketch below. As is apparent from this description, this proposal is very similar to the technique described in Section 3.1, but it belongs in a different category in our X.509-compatibility model, since parameterized certificates need to be converted before being used by ordinary X.509-certificate processing systems.
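The reconstruction step itself is trivial, as the following toy sketch illustrates. Note that the proposal leaves the concrete encoding open, so the dictionary-based template format and all field names below are our own illustration, not part of the Ford–Solo proposal:

def reconstruct(template: dict, mini_cert: dict) -> dict:
    """Rebuild a full certificate: template fields are shared by a whole
    family; the mini-certificate supplies the parameterized fields."""
    cert = dict(template)          # copy the shared fields
    cert.update(mini_cert)         # fill in the per-certificate values
    return cert

template = {                       # common to a family of certificates
    "version": "v3",
    "signatureAlgorithm": "sha1WithRSAEncryption",
    "issuer": "CN=Example CA",
    "validity": ("990503104300Z", "990510104300Z"),
}
mini_cert = {                      # stored and transmitted by the client
    "serialNumber": 1234567890,
    "subject": "CN=Some User",
    "subjectPublicKey": "3048...",
    "signature": "0A06...",
}

full_cert = reconstruct(template, mini_cert)
# 'full_cert' would then be re-encoded in DER and validated with the
# ordinary X.509 logic of the receiving application.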
Compressed Certificates. Perhaps the most obvious method for achieving smaller certificates is to apply compression algorithms to DER-encoded certificates. Tests performed by the authors with X.509 certificates containing 1024-bit and 2048-bit RSA keys reveal, however, that the reduction is fairly small (~10%) unless the certificate contains long distinguished names or extensions with a lot of redundancy. The approach was proposed at the 1998 PKCS Workshop [14] for storage of certificates on cryptographic tokens, but was rejected due to the rather minimal savings and the problem of finding patent-unencumbered compression algorithms (which usually is a goal in the PKCS process).

3.3 Non-X.509-Compatible Certificates Not Convertible to X.509 Certificates
SPKI. The Simple Public Key Infrastructure (SPKI, [6]) working group of the IETF has published a set of drafts that describe an alternative to X.509 certificates based on a different view of the purpose of certificates. The SPKI philosophy holds that binding entities in the physical world to keys in the digital world is not a solvable problem. Instead, SPKI focuses on assigning authorizations to keys and delegating those authorizations to other keys. SPKI also abandons the attempt to define globally unique, X.500-style names for all entities. SPKI names use the SDSI [15] naming system, where all names are defined in terms of a local namespace. Namespaces may be nested and linked together to cover larger domains. The public key of a user named "John Smith" in the engineering department of the "Acme" corporation might be referenced as "Acme's engineering's jsmith." A public key may be referenced in many different ways, through many different namespaces, but it always resolves to the same key value.

SPKI syntax is a departure from X.509 as well. Instead of the standard ASN.1 syntax, SPKI uses S-expressions similar to those used in the Lisp programming language. These expressions may be represented in an advanced notation that emphasizes readability, or in a canonical form, which is base-64 encoded and more efficient for storage and transmission. All expressions are reduced to canonical form before any processing, including hashing or signing.

SPKI certificates offer several possible ways of saving memory. Principals may be represented directly as keys, with no need for any form of name. Names are relative to a local namespace and may be shorter than X.500 distinguished names. Finally, keys may be included as hashes rather than complete key values. Representing issuers and subjects as public keys only, without names, does save some memory space (a typical such SPKI certificate, representing a principal as a 1024-bit public RSA key, decodes in the canonical form to ~300 bytes), but it may prove inconvenient for human users in many applications. If a human-readable name is required, it must be kept in another certificate, negating any savings in space. Using local namespaces also saves considerable memory over full X.500 distinguished names, but only for local applications. An Internet-wide application will need a hierarchical namespace that will be comparable in complexity to X.500 [21]. The use of key hashes rather than full public-key values can make the certificate considerably smaller, especially for large moduli. This, however, assumes that the public key itself is available elsewhere. If we store the public key outside the certificate, the total memory usage increases rather than decreases.
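The linked-namespace idea can be sketched with a toy model. Real SPKI/SDSI resolution operates on signed name certificates; here each namespace is reduced to a dictionary owned by a principal (a key), so only the name-linking mechanics are shown, and all key names are placeholders:

namespaces = {
    "KEY_acme":     {"engineering": "KEY_acme_eng"},   # Acme's local names
    "KEY_acme_eng": {"jsmith": "KEY_jsmith"},          # engineering's local names
}

def resolve(start_key, name_chain):
    """Resolve e.g. "Acme's engineering's jsmith" starting from Acme's key."""
    key = start_key
    for name in name_chain:
        key = namespaces[key][name]    # follow one local-name link
    return key

assert resolve("KEY_acme", ["engineering", "jsmith"]) == "KEY_jsmith"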
To summarize, SPKI offers significant efficiency for local, key-centric applications. For large-scale applications, where human-readable names are required, it does not represent a great deal of savings over X.509.

ANSI X9.68. As a result of requests from the banking industry and from vendors of equipment with limited storage capability, ANSI recently started a new work item, X9.68, with the goal of specifying compact certificates. The following description is based on the eighth draft of X9.68, available in June 1999 [28]. The compact certificate model defined in X9.68 is oriented towards a couple of specific usage scenarios, such as:

– account-based financial transaction systems; and
– efficient control and rights distribution for information-processing systems.

It is considered likely that these categories will employ communications with resource-limited mobile devices or systems having a high volume of transactions. X9.68 explicitly states that the defined compact certificate type is not intended for general-purpose certification², but is created to allow businesses to use public-key technology in an efficient manner (e.g. for transaction and account handling). Enrollment into X9.68-based domains may be carried out with X.509 v3 identity certificates, but this is not regarded as a requirement. Part two of X9.68 [29] describes inter-operation with X.509: how to include X9.68 certificates in X.509 revocation lists, how to map between X9.68 certificates and X.509 certificates, and a possible way to convert from X9.68 certificates to X.509 certificates. The latter, however, requires the X.509 certificate signature to be stored inside the X9.68 certificate, and hence a good deal of compactness is lost.

The X9.68 model separates PKIs into domains, where each domain is defined by a root CA that defines the policies, the public-key system, and the parameters that shall be used in the domain. Since these parameters will be known (through the root CA's certificate), end-entity certificates do not have to carry them. A domain root CA identifier is defined as the hash of the CA's public key. This ensures that different domains will have different root identifiers. End-entities (or leaf-entities in X9.68 terminology) are identified with local identifiers, which are only valid inside a domain and could be account numbers, etc. Local identifiers must be unique within a domain. By combining local identifiers with domain root CA identifiers, it is possible to construct globally unique identifiers for all members of all X9.68 domains. Each domain must have at least one Domain Registration Authority (DRA), responsible for entering members into the domain, and at least one Domain Certification Authority (DCA), responsible for certificate management; it may in addition have one or more Domain Attribute Authorities (DAA), responsible for issuing attribute certificates.

The current draft indicates that a certificate validation service in each domain is anticipated, and contains a partial specification of messages for such a service.

² This should probably read "general-purpose identification".
A distinction is made between intra-domain validation servers and inter-domain validation servers. An intra-domain validation server is only required to handle the algorithms defined in its own domain. Certificate-validation-service clients contact inter-domain validation servers when cross-certificates need to be validated. An inter-domain validation server is therefore required to recognize and handle the algorithms from all domains it recognizes from a cross-certification standpoint. In addition to signed responses like OCSP [12], X9.68 also anticipates environments in which signed requests to validation servers are required. A form of revocation lists, called certificate status lists, is defined, with a syntax similar to the X.509 v1 type CertificateList. A PDU for active revocation notices is also defined, to be used on a subscription basis. End-entities are not expected to send servers their certificates when authenticating; instead, they will send their local identifiers, assuming that servers will have appropriate access to certificate repositories. Attribute certificates (not public-key bearing) are expected to be used whenever suitable.

The certificate syntax is defined in ASN.1 but is not at all compatible with X.509. There are separate syntaxes for root CA certificates, CA certificates, cross-certificates, and leaf-entity certificates. Attribute certificates and public-key-bearing leaf certificates share the same syntax (a public key is treated as an attribute). Specific attributes are to be included from X9.59 [26]. One interesting thing to note is that it is optional for a CA to include validity periods in issued certificates. Instead, CAs can include the issuing time in the certificate, leaving the decision on certificate validity periods to verifiers, similar to the model described in [6]. PER encoding is apparently allowed, yielding some further compression.

In summary, the main concepts in X9.68 are the notion of domains, in which each domain has its own public-key system and associated parameters, and the use of local identifiers instead of the (intended) global namespace of X.509 certificates. The domain concept is said to simplify large-volume transactions and the use of PKIs in resource-constrained environments. It has been reported by the editor of X9.68 [8] that public-key-bearing certificates of this type can be as small as 120 bytes, but clearly this indicates the usage of elliptic curve (EC) algorithms. Assuming a 160-bit EC public key and a signature made with a key of the same size, this means that an X9.68 certificate with a 1024-bit RSA public key and with a signature made with a key of the same size will be approximately 300 bytes.

WAP WTLS Certificates. The Wireless Application Protocol Forum (cf. [10]) is a membership organization devoted to developing protocols and applications for wireless devices such as mobile phones and palmtop devices. Since communication with these devices is bandwidth-constrained, a set of lightweight versions of standard protocols has been defined for this environment. Among these protocols is TLS [5]. The WAP version of TLS is called WTLS [20], and in this version of TLS a special kind of server certificate, called a WTLS certificate, is used. WTLS certificates are a compact form of X.509 certificates which contain support for all X.509 v1 certificate fields but only a small subset of public-key algorithms. The encoding of these certificates is very similar to XDR [17] and is quite efficient. Excluding the size of subject and issuer names, an ordinary WTLS server certificate containing a 1024-bit RSA key and signed with a 1024-bit RSA key will in its encoded form be approximately 300 bytes or less. This is almost as small as DER-encoded X9.68 certificates if subject and issuer names are reasonably long.

The WAP Forum term for end-entities is subscribers, and one important objective of the WAP architecture is to enable secure identification of subscribers to WAP services. For this reason, subscribers will be issued IC cards (SIM cards, "Subscriber Identity Module"), which are to be inserted in phones. The cards will contain subscriber certificates that are to be used in client-side authentication. Currently, WTLS allows these end-entity certificates to be X.509 certificates, WTLS certificates, or X9.68 certificates.

3.4 Compact Certificate Approaches Representing New Paradigms
Certicom "Implicit Certificates". Certicom has recently (cf. [18]) presented a certificate concept, implicit certificates (or bullet certificates), offering a very small footprint and, allegedly, substantial computational savings. Implicit certificates are defined for use with elliptic curve (EC) cryptosystems, although other cryptosystems are also conceivable (but in the current form not RSA). One of the main differences between these certificates and traditional ones is that, unlike traditional certificates, implicit certificates bind entities to their private keys only after they use their private keys, since they are really just a combination of a signature and a public key.

In short, to create an implicit certificate, a CA defines the EC domain, i.e. the field F_q, a curve E over this field, and some base point G on E (with order n). Furthermore, the CA selects a random integer c (c ∈ [1, n−1]) as its private key and publishes its public key cG. An entity A, wishing to receive an implicit certificate, sends its identity ID_A together with a point tG to the CA (t ∈ [1, n−1]). The CA, after having validated the identity of A and tG, selects a random k (k ∈ [1, n−1]) and computes γ = tG + kG and s = SHA1(ID_A, γ)c + k mod n. After having received s and γ from the CA, A can compute its private key a as a = t + s mod n and the public key as aG = tG + sG. A's implicit certificate is ID_A and γ. Certificate verification is done by verifying that aG = SHA1(ID_A, γ)cG + γ, and assurance about the public key will be given once the private key has been used.

The size of these implicit certificates will be the sum of the size of ID_A (which can be perceived as a traditional certificate without a signature and a public key) and the size of γ, which will be a point on E. The point γ will probably be around 20 bytes if 160-bit curves are being used. If ID_A is defined in ASN.1 (which is not necessary), the typical size, when DER-encoded, will presumably be around 60-120 bytes, which implies a total storage requirement of 80-140 bytes for implicit certificates in most cases.
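The issuance and verification equations above can be exercised with a small self-contained sketch. It uses a deliberately tiny curve whose group order is found by brute force, so it is a toy model only; a real deployment would use a standardized curve of around 160 bits, and the identity string is a placeholder:

import hashlib, random

p, A, B = 10007, 2, 3          # toy curve y^2 = x^3 + A*x + B over F_p

def add(P, Q):                 # group law on E; None denotes the point at infinity
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):                 # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

def sqrt_mod(r):               # smallest square root of r mod p, or None
    return next((y for y in range(p) if y * y % p == r % p), None)

# Base point G and its order n, found by brute force (toy sizes only).
gx = next(i for i in range(p) if sqrt_mod(i ** 3 + A * i + B) is not None)
G = (gx, sqrt_mod(gx ** 3 + A * gx + B))
n, Q = 1, G
while Q is not None:
    Q, n = add(Q, G), n + 1

def H(identity, point):        # SHA1(ID_A, gamma), reduced mod n
    digest = hashlib.sha1(identity.encode() + str(point).encode()).digest()
    return int.from_bytes(digest, "big") % n

c = random.randrange(1, n); cG = mul(c, G)     # CA key pair
ID_A, t = "alice", random.randrange(1, n)      # A's identity and secret t
tG = mul(t, G)                                 # A sends ID_A and tG to the CA
k = random.randrange(1, n)                     # CA picks k and issues:
gamma = add(tG, mul(k, G))                     #   gamma = tG + kG
s = (H(ID_A, gamma) * c + k) % n               #   s = SHA1(ID_A, gamma)*c + k mod n
a = (t + s) % n                                # A's private key: a = t + s mod n
# Anyone holding (ID_A, gamma) and cG can check aG = SHA1(ID_A, gamma)*cG + gamma:
assert mul(a, G) == add(mul(H(ID_A, gamma), cG), gamma)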
Signature verification has the added advantage of requiring just one EC point multiplication, but one must bear in mind that this architecture would require substantial changes to current infrastructures in order to be accommodated. A similar construction has been presented by B. Arazi [1].

3.5 Related Activities
Since quite a bit of the size of X.509 certificates can be attributed to the use of long object identifiers, the notion of relative object identifiers has been discussed lately within the joint ISO/IEC/ITU-T ASN.1 working group. As the name implies, these object identifiers are relative to the outermost previous full object identifier in a certain structure, e.g. a distinguished name. Consistent use of these identifiers could, in many circumstances, reduce the size of X.509 certificates by ~15% (see below).
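As a rough illustration of the saving, the sketch below encodes one absolute and one relative object identifier following the BER rules for the two types (tags 0x06 and 0x0D respectively); the choice of sample OID, and of the arc it is taken relative to, is ours:

def base128(n):                    # 7-bit groups, continuation bit on all but the last
    out = [n & 0x7F]
    n >>= 7
    while n:
        out.insert(0, (n & 0x7F) | 0x80)
        n >>= 7
    return bytes(out)

def encode_oid(arcs):              # OBJECT IDENTIFIER: first two arcs are combined
    body = base128(40 * arcs[0] + arcs[1])
    body += b"".join(base128(a) for a in arcs[2:])
    return bytes([0x06, len(body)]) + body

def encode_relative_oid(arcs):     # RELATIVE-OID: the trailing arcs only
    body = b"".join(base128(a) for a in arcs)
    return bytes([0x0D, len(body)]) + body

full = encode_oid([1, 3, 6, 1, 5, 5, 7, 1, 1])   # a typical PKIX extension OID
rel = encode_relative_oid([1, 1])                # relative to the 1.3.6.1.5.5.7 arc
print(len(full), len(rel))                       # 10 versus 4 octets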
4 A Possible Approach Within the Framework of X.509
The following design goals, based on objectives described in [7], have been used as a basis for the work described here:

– retain all the semantics of X.509 unchanged;
– allow a certificate-consuming application to have uniform certificate-processing logic, i.e., X.509-based validation logic for all certificates, including compact certificates;
– allow the use of more efficient encodings of the certificate for storage and communication in those environments in which DER-encoded X.509 certificates would constitute performance problems; and
– leverage the installed base of X.509 products and infrastructure as much as possible.

Our approach has been to constrain certain fields in the X.509 definition of certificates, and by doing this achieve a more compact form of certificates while maintaining basic compatibility. Hence, certificates issued in accordance with the profile defined here should be directly usable in standard X.509-based environments. With the exception of the absence of X.509's authorityKeyIdentifier certificate extension, end-entity certificates should be compliant with the certificate profile defined in [9] as well. Our certificate format is also well suited for PER encoding, yielding further compression. The syntax is defined in ASN.1 and presented below.

CompactCertificate ::= SIGNED { SEQUENCE {
    version               [0] EXPLICIT CompactVersion DEFAULT v1,
    serialNumber          CompactCertificateSerialNumber,
    signature             CompactAlgorithmIdentifier
                            {{CompactSignatureAlgorithms}},
    issuer                CompactName,
    validity              CompactValidity,
    subject               CompactName,
    subjectPublicKeyInfo  CompactSubjectPublicKeyInfo,
    extensions            [3] EXPLICIT SEQUENCE SIZE
                            (1..compact-certs-ub-extensions)
                            OF CompactExtension
}}
The SIGNED parameterized type is imported from X.509.

4.1 The CompactVersion Type
CompactVersion ::= INTEGER { v1(0), v2(1), v3(2) } (v1|v2|v3,...)

The CompactVersion type is equivalent to the X.509 Version type, and shall be set to v3 if any extensions are present.

4.2 The CompactCertificateSerialNumber Type
CompactCertificateSerialNumber ::= INTEGER (1..2147483647)

The CompactCertificateSerialNumber type is equivalent to the X.509 CertificateSerialNumber type, but constrained to serial numbers less than 32 bits long. This gives approximately 2 billion certificates per CA, which should be sufficient. The constraint, while not giving any space savings for DER-encoded certificates, is PER-visible and will have an impact on PER-encoded certificate sizes.

4.3 The CompactAlgorithmIdentifier Type
CompactAlgorithmIdentifier {ALGORITHM:IOSet} ::= SEQUENCE {
    algorithm   ALGORITHM.&id ({IOSet}),
    parameters  ALGORITHM.&Type ({IOSet}{@algorithm}) OPTIONAL
}

This type is equivalent to the AlgorithmIdentifier type defined in X.509.

4.4 The CompactName Type
CompactName ::= SEQUENCE SIZE (1..compact-certs-ub-depth) OF
    CompactRelativeDistinguishedName

CompactRelativeDistinguishedName ::= SET SIZE (1..compact-certs-ub-width) OF
    CompactAttributeTypeAndValue

CompactAttributeTypeAndValue ::= SEQUENCE {
    type   ATTRIBUTE.&id ({CompactAttributes}),
    value  ATTRIBUTE.&Type ({CompactAttributes}{@type})
}
This type is a restricted version of the Name type defined in [21]. The imposed restrictions are:

– an upper bound on the number of relative-distinguished-name components (depth) in a name; and
– an upper bound on the number of components in a relative distinguished name (width).

These restrictions not only (obviously) restrict the size of encoded data, but are also PER-visible, making the PER encoding of CompactName values even more compact. Let us also define one attribute for use in conjunction with CompactNames:

compactIdentifier ATTRIBUTE ::= {
    WITH SYNTAX             CompactIdentifier
    EQUALITY MATCHING RULE  octetStringMatch
    SINGLE VALUE            TRUE
    ID                      compact-certs-at-compactIdentifier
}

CompactIdentifier ::= OCTET STRING (SIZE(20)) -- Could be key hash

CompactAttributes ATTRIBUTE ::= {
    compactIdentifier,
    ... -- For future extensions
}

The intention is that values of the compactIdentifier attribute will contain unique identifiers for entities. By basing distinguished names on these identifiers, and basing the identifiers on key hashes, one achieves an architecture similar to the one described in [6].
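A minimal sketch of one possible derivation is to hash the subject's DER-encoded public-key information with SHA-1, which yields exactly the 20 octets the syntax requires; the choice of hash input is our own assumption (the ASN.1 comment above merely says the value "could be" a key hash):

import hashlib

def compact_identifier(spki_der: bytes) -> bytes:
    digest = hashlib.sha1(spki_der).digest()   # SHA-1 yields exactly 20 octets
    assert len(digest) == 20                   # matches OCTET STRING (SIZE(20))
    return digest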
4.5 The CompactValidity Type
CompactValidity ::= SEQUENCE {
    notBefore  [UNIVERSAL 23] VisibleString (FROM ("0".."9"|"Z") ^ SIZE(13)),
    notAfter   [UNIVERSAL 23] VisibleString (FROM ("0".."9"|"Z") ^ SIZE(13))
}

This type is a constrained version of the Validity type defined in X.509. The constraints are PER-visible, yielding some compression in the case of PER encoding.
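A minimal sketch of how the constrained 13-character time strings ("YYMMDDHHMMSS" followed by "Z") can be produced; the output matches the example values in Appendix A:

from datetime import datetime, timedelta

def utc_time(t: datetime) -> str:
    s = t.strftime("%y%m%d%H%M%S") + "Z"
    assert len(s) == 13 and all(c.isdigit() or c == "Z" for c in s)
    return s

not_before = datetime(1999, 5, 3, 10, 43, 0)
print(utc_time(not_before))                        # "990503104300Z"
print(utc_time(not_before + timedelta(days=7)))    # "990510104300Z"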
4.6 The CompactSubjectPublicKeyInfo Type
CompactSubjectPublicKeyInfo ::= SEQUENCE {
    algorithm         CompactAlgorithmIdentifier
                        {{CompactPublicKeyAlgorithms}},
    subjectPublicKey  BIT STRING (SIZE(80..2192))
}

This type is a constrained version of the corresponding type defined in X.509. The size constraint on the bit string, while not a limitation in any practical case, is PER-visible and yields some compression in the case of PER encoding.

4.7 The CompactExtension Type
CompactExtension ::= SEQUENCE {
    extnId     EXTENSION.&id ({CompactExtensionSet}),
    critical   BOOLEAN DEFAULT FALSE,
    extnValue  OCTET STRING
} (CONSTRAINED BY {
    -- Shall contain a value of type EXTENSION.&ExtnType for the
    -- extension object identified by extnId --
})

The EXTENSION object class is imported from X.509. The CompactExtension type is equivalent to the corresponding type defined in X.509, except for the fact that we do not require the value inside the extnValue OCTET STRING to be DER-encoded. This gives an opportunity for more efficient encodings of proprietary extensions.

4.8 Storage Space
Table 1 shows results that were obtained after experiments with the certificate type defined in the previous section, together with a compact extension for X9.68 compatibility (see [13] and Appendix A).

Table 1. Certificate sizes for examples of the CompactCertificate type.

                1024-bit RSA certificates   163-bit EC certificates
PER-encoding    323 bytes                   180 bytes
DER-encoding    385 bytes                   240 bytes
The certificates used in this experiment were created in the following manner:

– 32-bit certificate serial number;
– standard algorithm identifiers;
– "compressed" RSA public keys (cf. [11]); and
– "compressed" EC public keys (cf. [27]).

Certificates were signed with a CA key corresponding to the subject's public key (a 163-bit EC key or a 1024-bit RSA key). Actual values of the example certificates may be found in Appendix A (in the value notation defined in [23]).

A comparison of these certificate sizes with similar certificates in the X9.68 proposal shows, not surprisingly, that X9.68 certificates are smaller (by approximately 60 bytes in the PER case and 95 bytes in the DER case). The vast majority of this difference (approx. 75%) is due to the use of object identifiers to distinguish attributes, algorithms, and extensions. As more information is added to the two types of certificates, however, they will grow at roughly the same rate. For more information, see [13].
5 Conclusions and Future Work
We have described an alternative certificate syntax, generating compact certificates. The syntax is fully compatible with the certificate syntax defined in X.509 v3. Compared to, e.g., draft 8 of ANSI X9.68, the syntax does not sacrifice interoperability, but leverages existing experience and implementations. The price for this is slightly larger certificates than for non-X.509-compatible approaches, but we do not believe this difference to be a limiting factor. In particular, when studying the ongoing development and evolution of storage technology, it seems fairly clear that this small difference will have only minor impact on future systems. Furthermore, this storage disadvantage can be somewhat remedied by using PER encoding instead of DER encoding. If the signature is done on the DER-encoded certificate, a certificate-processing system will only have to re-generate the DER encoding from the PER encoding before usage. It is an interesting exercise to investigate how much work (and extra certificate-processing code) this would require.

It may seem that the advance of technology will obviate the need for compact certificates, since even low-end devices will have both increased storage and faster processors. This is not necessarily the case, however, since the increased use of public-key technology is likely to require that devices store a larger number of keys, and security requirements may increase the lengths of those keys. Compact profiles for certificates for particular applications need to be considered as those applications are developed. It may also be useful to study some more radical options, such as including multiple public keys, each with its own attributes, in a single certificate. Certificate compression should be a useful area of research for many years.
References

1. B. Arazi, "Certification of DL/EC keys," submission to the IEEE P1363 August 1998 meeting. Available from http://grouper.ieee.org/groups/1363.
2. ASL 101-1, "Allterminal Security Layer Specification - Interface A: IC card - card reader, structure 'ALLTERM0'," Statskontoret (Swedish Agency for Administrative Development), July 1994.
3. D. Boneh, communications with RSA Labs, February 1999.
4. Bundestag, Digital Signature Law, BGBl I S. 1872, July 22, 1997. English translation available at http://www.kuner.com.
5. T. Dierks, C. Allen, "The TLS Protocol - Version 1.0," IETF RFC 2246, January 1999.
6. C. Ellison et al., "SPKI Certificate Theory," work in progress, IETF SPKI WG, June 1999.
7. W. Ford, D. Solo, "Parameterized Certificates: Contribution to X9.68 short certificates project," submission to ANSI X9F1's April 1999 meeting, April 1999.
8. R. Geiger, private communication, February 1999.
9. R. Housley et al., "Internet X.509 Public Key Infrastructure - Certificate and CRL Profile," IETF RFC 2459, January 1999.
10. P. King, "The Wireless Application Protocol (WAP)," in Proceedings of the RSA Data Security Conference '99, San Jose, USA, January 1999.
11. A. Lenstra, "Generating RSA Moduli with a Predetermined Portion," in Advances in Cryptology - ASIACRYPT '98, LNCS, Springer-Verlag, October 1998.
12. M. Myers et al., "X.509 Internet Public Key Infrastructure - Online Certificate Status Protocol - OCSP," IETF RFC 2560, June 1999.
13. M. Nyström, "X.509-Compatible Compact Certificates," submission to ANSI X9F1's July 1999 meeting, July 1999.
14. RSA Laboratories, "Minutes from the 1998 PKCS Workshop," available from http://www.rsa.com/rsalabs/pkcs.
15. R. L. Rivest, B. Lampson, "SDSI - A Simple Distributed Security Infrastructure," 1996.
16. "SET Secure Electronic Transaction Specification," Visa International and MasterCard, 1997.
17. R. Srinivasan, "XDR: External Data Representation Standard," IETF RFC 1832, August 1995.
18. S. Vanstone, "ECC Standards, Current Status & Future Developments," in Proceedings of the 1999 PKS Conference, Toronto, Canada, April 1999 (available from http://205.150.149.57/pks99/index.htm).
19. VISA International, "Compression of SET 1.0 Cardholder Certificate Chains for chip card storage," proposed SET specification, July 1998.
20. WAP, "Wireless Application Protocol - Wireless Transport Layer Security Protocol Specification," Wireless Application Protocol Forum, April 1998.
21. ISO/IEC 9594-2, "Information technology - Open systems interconnection - The Directory: Models," International Organization for Standardization, 1997.
22. ISO/IEC 9594-8, "Information technology - Open systems interconnection - The Directory: Authentication Framework," International Organization for Standardization, 1997.
23. ISO/IEC 8824-1, "Information Technology - Abstract Syntax Notation One (ASN.1): Specification of basic notation," International Organization for Standardization, 1995.
24. ISO/IEC 8825-1, "Information technology - ASN.1 encoding rules: Specification of Basic Encoding Rules (BER), Canonical Encoding Rules (CER) and Distinguished Encoding Rules (DER)," International Organization for Standardization, 1995.
25. ISO/IEC 8825-2, "Information technology - ASN.1 encoding rules: Specification of Packed Encoding Rules (PER)," International Organization for Standardization, 1995.
26. ANSI X9.59, "Digital Certificates for the Financial Service Industry: Account-Based Secure Payment Objects for the Financial Service Industry," draft document, American National Standards Institute, 1999.
27. ANSI X9.62, "Public Key Cryptography for the Financial Services Industry: The Elliptic Curve Digital Signature Algorithm (ECDSA)," American National Standards Institute, 1999.
28. ANSI X9.68, "Digital Certificates for Mobile, Account Based, and High Transaction Volume Financial Systems," 8th draft document, American National Standards Institute, June 1999.
29. ANSI X9.68, "Digital Certificates for Mobile, Account Based, and High Transaction Volume Financial Systems - Part 2: Inter-Operation with X.509v3," draft document, American National Standards Institute, June 1999.
A Example Compact Certificates

This appendix contains the certificates used in the examples in Section 4. The certificates are presented here in the value notation defined in [23].

A.1 An Example RSA CompactCertificate
exampleRSACert CompactCertificate ::= {
    toBeSigned {
        version v3,
        serialNumber 1234567890,
        signature { algorithm md5WithRSAEncryption },
        issuer {
            { { type compact-certs-at-compactIdentifier,
                value CompactIdentifier :
                    '0123456789ABCDEF0123456789ABCDEF01234567'H } }
        },
        validity {
            notBefore "990503104300Z",
            notAfter "990510104300Z"
        },
        subject {
            { { type compact-certs-at-compactIdentifier,
                value CompactIdentifier :
                    '1234554321123455432112345543211234554321'H } }
        },
        subjectPublicKeyInfo {
            algorithm { algorithm rsaEncryption },
            subjectPublicKey '3048024100A0658F...0203010001'H
        },
        extensions {
            { extnId compact-certs-ce-ansi-x9-68BasicExtension,
              extnValue 'A0'H }
        }
    },
    algorithmIdentifier { algorithm md5WithRSAEncryption },
    encrypted '0A0658FCBB9BF8C6A0F66D60B7A554E2...'H -- 1024-bit signature
}

A.2 An Example EC CompactCertificate
exampleECCert CompactCertificate ::= {
    toBeSigned {
        version v3,
        serialNumber 1234567890,
        signature { algorithm ecdsa-with-SHA1 },
        issuer {
            { { type compact-certs-at-compactIdentifier,
                value CompactIdentifier :
                    '0123456789ABCDEF0123456789ABCDEF01234567'H } }
        },
        validity {
            notBefore "990503104300Z",
            notAfter "990510104300Z"
        },
        subject {
            { { type compact-certs-at-compactIdentifier,
                value CompactIdentifier :
                    '1234554321123455432112345543211234554321'H } }
        },
        subjectPublicKeyInfo {
            algorithm {
                algorithm id-ecPublicKey
                -- parameters namedCurve : c2pnb163v1 (X9.62)
            },
            subjectPublicKey '0307AF69989546...D74880F33BBE803CB'H
        },
        extensions {
            { extnId compact-certs-ce-ansi-x9-68BasicExtension,
              extnValue 'A0'H }
        }
    },
    algorithmIdentifier { algorithm ecdsa-with-SHA1 },
    encrypted '302E021507AF69989546103D79329FCC3D748...'H
}

A.3 An Example of a CompactExtension

This extension is defined in [13].

exampleExtension ANSI-X9-68BasicExtension ::= {
    keyUsage { digitalSignature, dataEncipherment }
}
-- PER-encoded, this becomes '0xA0'
Secure and Cost Efficient Electronic Stamps

Detlef Hühnlein and Johannes Merkle

secunet Security Networks AG
Mergenthalerallee 77-81, D-65760 Eschborn, Germany
{huehnlein,merkle}@secunet.de
Abstract. Even small companies do not use physical stamps to prepay postal services, but have a franking machine which can be logically loaded with a certain amount of stamps, which are then printed on letters, for example. Special-purpose franking machines, which are still widely used in practice, however, have security and handling problems. It was relatively easy to forge stamps, and in some cases one needed to bring the franking machine to the post office to get it loaded. Therefore the USPS (US Postal Service) initiated an information based indicia program (IBIP) [1] to provide "electronic stamps". While the two problems mentioned above were solved by applying (asymmetric) digital signatures and a special-purpose hardware device on the client side, it seems that this approach introduces an unreasonable overhead. In fact, the problem of verifying the huge number of stamps/signatures in reasonable time is not addressed in [1] at all. Therefore many international postal service providers, like the German Post AG, for example, are hesitating to implement this concept. In this short note we introduce an alternative approach using symmetric algorithms and general-purpose smart cards to provide electronic stamps. Thus it will be less expensive to implement this concept. Furthermore, our approach allows an efficient verification of all stamps in a very short time, because one does not need to contact a certificate directory. Note that in this special scenario it is no problem at all that only the postal service provider is able to verify the "symmetric signatures" and hence the authenticity of the stamps.
1 Introduction
While emails increasingly replace paper-bound mail, there is still a large need for conventional postal services, and it cannot be expected that the electronic analogue will supersede conventional mail entirely in the future, because emails obviously lack some properties of snail mail. Therefore it is necessary to integrate interfaces to postal services into the existing office communication environment.

Even small companies apply franking machines, which are issued by postal service providers (PSP). Currently all such franking machines are expensive special-purpose machines, whereby most of them, or at least the security component, have to be carried to a post office to be loaded after prepaying a certain amount of stamps. The security of this procedure, i.e. that it is not possible to forge stamps, rests on the secrecy of the interfaces and tools. Even more modern franking machines with integrated modems, which perform the loading remotely, require a secure direct connection to the PSP and some sort of out-of-band payment for the bought stamps.

It is clear that it would be desirable to use existing office communication components, like multi-purpose printers and PCs with a connection to the Internet, to replace the expensive special-purpose franking machines and to avoid the annoying trip to the post office. But performing the loading process via open networks like the Internet and using standard peripherals to produce and print stamps evidently bears some risks. Hence it is necessary to integrate security mechanisms which prevent unauthorized loading of the franking machine and the forging or copying of stamps. Therefore the USPS initiated the information based indicia program [1]. In this concept the authenticity of stamps is ensured by RSA, DSA or ECDSA signatures, which are coded in a two-dimensional barcode and printed as part of the stamp on a letter, for example. When a letter arrives at the PSP, the stamp is scanned and the signature is checked after obtaining the certificate from a directory. Because one has to connect to a possibly remote directory server to look up the certificate to verify the signature, this step is certainly the bottleneck in the verification procedure. With current technology it seems impossible to check a non-negligible fraction of the huge number of letters. Unauthorized stamping is prevented by using special-purpose hardware at the client system.

Because the copying of authentic stamps cannot be prevented, it is necessary to integrate data characteristic for the letter, like the ZIP code, the address of the recipient, and the date, into the stamp. Thus, copying stamps only makes sense in circumstances where one needs to send many letters with identical characteristics. The possibility of copying stamps can be further restricted by limiting the time of validity. However, one can still imagine situations where illegal duplication of stamps may be a concern. Therefore it will be necessary to log the verified and unexpired stamps.

As noted above, the application of digital signatures and the accompanying public-key infrastructures introduces an unreasonable overhead in the verification step. Because the signatures are exclusively checked by the PSP, it is clear that one may as well use symmetric algorithms with derived keys to obtain the same security features. This alternative approach, which is discussed in this work, will allow all arriving letters to be checked during the sorting process. Furthermore, we will see that the special-purpose hardware device with a real-time clock is not necessary. Hence it will be much cheaper to implement our concept compared to [1].

This paper is organized as follows: In Section 2 we will briefly explain the central features of USPS's information based indicia program [1] and point out the deficiencies for broad application. In Section 3 we will introduce our approach using symmetric algorithms and general-purpose smart cards.
1.1 Previous Work
There have been several publications treating the realization of secure electronic stamps. In [2] Pastor outlines how such a system might work. In [3] Tygar and Yee give a detailed discussion of the requirements and possible solutions, but in contrast to our concept they only consider protection by digital signatures. Furthermore, there is a patent application on cryptographically secured electronic franking systems [4].
2 USPS's Information Based Indicia Program (IBIP)
In order to facilitate electronic franking and prevent fraud, the USPS initiated IBIP [1]. In this program the authenticity of an information based indicium (electronic stamp) is ensured by applying cryptographic mechanisms to data which are related to the piece of mail under consideration. In the following we will briefly highlight the main issues of IBIP and point out the problems for large-scale application.

2.1 A Brief Overview of IBIP
Every customer who is willing to use electronic stamps buys a Postal Security Device (PSD) as specified in [1, Part B]. This device is a special piece of cryptographic hardware with a real-time clock, which can be connected to the parallel port, for example. "For security reasons, the PSD will not be a generalized digital signature device." [1, page B-4]. For the private key in the PSD the USPS creates a certificate containing the corresponding public key, which is stored in a certificate directory. The PSD can be loaded with a certain amount of stamps by connecting to the PSP and triggering some sort of payment mechanism for the stamps. A loaded PSD is then used to issue electronic stamps, which are coded in a two-dimensional barcode and printed on the letter.

The stamp as specified in [1, Part A] consists of 49 bytes of letter-specific data (e.g. customer ID, date of mailing, destination, postage, serial numbers, ...) and a digital signature of these data, which is generated by the PSD using its private key. The allowed signature mechanisms and key sizes are 1024-bit RSA, 1024-bit DSA, or 160-bit EC-DSA. Thus the size of the machine-readable stamp is 177 bytes for RSA (49 bytes of data plus a 128-byte signature) or 89 bytes when using DSA-type signatures (49 bytes of data plus the 40-byte signature). Hence one has the choice between a barcoded stamp of reasonable size (DSA) or efficient verification (RSA).

The part of the program which is not yet specified in [1] is the verification procedure at the USPS. This verification step will need to consist of reading the barcoded stamp, connecting to the directory, and verifying the signature. There will be billions of letters which have to be handled by the USPS every day. Thus it is clear that for performance reasons it will not be possible to verify every stamp. This may lead to "calculated" fraud.
2.2 Summarizing IBIP's Problems for Large Scale Application
In this section we briefly summarize the major problems of IBIP for large-scale application:

– IBIP requires special-purpose hardware, which leads to higher initial costs and hence may deter potential customers.
– The barcode which carries the stamp is relatively big, which leads to problems when stamping regular letters or postcards.
– When using DSA-type signatures one obtains smaller signatures and barcodes but has to perform a less efficient verification procedure.
– In both cases (RSA, DSA-type) one needs to look up the certificate in a directory, which makes the verification procedure inefficient and the verification of all stamps impossible.
3 A More Reasonable Concept to Realize Practical and Secure Electronic Stamps
In this section we introduce a way to realize electronic stamps which solves the above problems without reducing security. On the contrary, our approach allows much more efficient verification and hence makes the verification of all stamps possible, which should lead to even less fraud.

In our concept we assume that every customer has access to a PC equipped with a printer, a smart card reader, and an Internet connection. It is widely believed that within a few years smart card readers will become standard for PCs. Further, the PSP issues to all customers who want to use electronic stamps a dedicated piece of software, called the estamp program, and a smart card. Each smart card implements a symmetric encryption function F_SK and securely stores a distinguished secret key. This secret key is derived from a master key of a local PSP office (e.g. one for every ZIP code) and the customer's ID, using an arbitrary secure hash function h or, alternatively, a symmetric cipher in hash mode. Further, each smart card has two internal counters, z1 and z2, which cannot be accessed from the outside.

Like the approach of the USPS, our system consists of three stages: charging the smart card, generating stamps, and verification of the authenticity of the stamps by the PSP.

3.1 Charging the Smart Card
Before a customer can create electronic stamps, he has to charge his smart card. This is initiated by sending the command GEN REQUEST together with the amount x to the smart card. The smart card increments the internal counter z2 and uses its secret key to compute the encrypted charge request. This request contains the counter z2, the amount x, the customer's ID, and the keyword REQUEST. Then the estamp program sends this request to the local PSP office (e.g. via email, HTTP, or TCP).
After receiving this request, the PSP derives the customer's secret key from its local office key (which itself may be derived from a global master key) and decrypts the request, thereby verifying its authenticity. If positive, the PSP uses the customer's secret key to generate an encrypted charge command containing the counter z2, the amount x, and the keyword CHARGE, and sends it to the customer. Finally the customer forwards the received message to the card, which decrypts the charge command. If the charge command is verified, the card increments its counter z1 by x.

1. The customer sends (GEN REQUEST, x) to the card.
2. The card sets z2 = z2 + 1 and sends Y := F_{SK_C}(REQUEST, z2, x) to the customer.
3. The customer sends (Y, ID_C) to the PSP.
4. The PSP computes SK_C = h(SK_LO, ID_C) and (REQUEST, z2, x) = F^{-1}_{SK_C}(Y).
5. The PSP sends Z := F_{SK_C}(CHARGE, z2, x) to the customer.
6. The card verifies F^{-1}_{SK_C}(Z) = (CHARGE, z2, x). If this is OK, it sets z1 = z1 + x.
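A sketch of this exchange follows. Since only the authenticity of the request and the charge command matters for the protocol's goal (the paper speaks of "symmetric signatures"), we model the cipher F_SK by an HMAC-based authenticator, so the protected fields travel in the clear together with a tag. All class and function names are our own:

import hashlib, hmac

def derive_key(sk_lo: bytes, customer_id: bytes) -> bytes:
    return hashlib.sha256(sk_lo + customer_id).digest()    # SK_C = h(SK_LO, ID_C)

def tag(key, *fields):
    msg = b"|".join(str(f).encode() for f in fields)
    return hmac.new(key, msg, hashlib.sha256).digest()

class SmartCard:
    def __init__(self, sk_c: bytes):
        self.sk, self.z1, self.z2 = sk_c, 0, 0    # z1: balance, z2: request counter

    def gen_request(self, x: int):                # steps 1-2
        self.z2 += 1
        return self.z2, x, tag(self.sk, "REQUEST", self.z2, x)

    def charge(self, z2: int, x: int, t: bytes):  # step 6
        if z2 == self.z2 and hmac.compare_digest(t, tag(self.sk, "CHARGE", z2, x)):
            self.z1 += x

class LocalPSPOffice:
    def __init__(self, sk_lo: bytes):
        self.sk_lo = sk_lo

    def handle_request(self, customer_id, z2, x, t):   # steps 4-5
        sk_c = derive_key(self.sk_lo, customer_id)
        assert hmac.compare_digest(t, tag(sk_c, "REQUEST", z2, x))
        # ... payment for the amount x would be settled here ...
        return z2, x, tag(sk_c, "CHARGE", z2, x)

sk_lo = b"local-office-master-key"
card = SmartCard(derive_key(sk_lo, b"customer-42"))
psp = LocalPSPOffice(sk_lo)
card.charge(*psp.handle_request(b"customer-42", *card.gen_request(500)))  # step 3
assert card.z1 == 500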
3.2 Generating Stamps
When a customer wants to stamp a letter, he uses his estamp program to send a stamp request, containing the postage amount y (which can be determined by the estamp program), a hash value v of the specific parameters of the letter, and the keyword STAMP, to the smart card. The specific parameters of the letter contain the address of the recipient, the customer's ID, and the date, and may contain other data as well, making the specific parameters unique. Note that the customer is responsible for using the correct date. If the date is not within a specific time frame when checked at the PSP, the letter is considered more closely, as it might carry a fraudulent stamp. Thus, if the date is not correct, the letter will take a longer time to be delivered.

The card checks z1 ≥ y and, if positive, decrements z1 by y and generates the stamp for the letter by encrypting v concatenated with y. If z1 < y, the card returns EMPTY. Finally the stamp and the specific parameters of the letter are printed onto the letter using an arbitrary machine-readable encoding.

1. The customer determines the postage amount y and the specific parameters of the letter D, calculates v := h(D), and sends (STAMP, v, y) to the card.
2. The card checks y ≤ z1. If positive, it sets z1 = z1 − y and sends X := F_{SK_C}(v, y) to the customer. If negative, it sends EMPTY.
3. The customer prints (D, X) onto the letter.
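A corresponding sketch of the stamping step, with the same HMAC stand-in for F_SK as in the charging sketch above; in this variant the amount y must be printed alongside D, since there is no ciphertext for the PSP to decrypt:

import hashlib, hmac

def tag(key, *fields):
    msg = b"|".join(str(f).encode() for f in fields)
    return hmac.new(key, msg, hashlib.sha256).digest()

def h(data: bytes) -> str:                  # hash of the specific parameters D
    return hashlib.sha256(data).hexdigest()

def make_stamp(card_state: list, D: bytes, y: int):
    """card_state = [sk_c, z1]; returns X = F_{SK_C}(v, y) or EMPTY."""
    sk_c, z1 = card_state
    if z1 < y:
        return "EMPTY"                      # insufficient balance on the card
    card_state[1] = z1 - y                  # decrement the stamp counter z1
    return tag(sk_c, "STAMP", h(D), y)

# (D, X), plus the cleartext amount y in this MAC variant, is then printed
# onto the letter in a machine-readable encoding.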
3.3 The Verification of the Validity of Stamps by the PSP
The PSP can verify the validity of stamps without connecting to a database for every stamp. First it checks that the specific parameters are consistent with the letter (e.g. that the address and the date are correct). If the date is not within a certain time frame (i.e. dated in the future or older than, say, three days), the PSP redirects the letter to a place where stamps which might have been forged are considered more closely. Only in this case does the PSP store the suspicious stamps in a database to recognize copied stamps. Note that the customer himself is responsible for the date in the stamp, to allow timely processing without second-level checking. This strategy makes the presence of a secure real-time clock at the client system obsolete.

After this checking, the PSP computes the customer's secret key using the secret key of the local office and the customer's ID contained in the specific parameters of the letter. Using the customer's secret key, the PSP decrypts the stamp, yielding a pair (v, y). Finally, it checks that the amount y is sufficient as postage for this letter and that v is the hash value of the specific parameters of the letter.

1. The PSP reads (D, X) and checks the consistency of D (e.g. consistency of the address, expiration of validity).
2. The PSP extracts ID_C from D and computes SK_C = h(SK_LO, ID_C).
3. The PSP computes (v, y) = F^{-1}_{SK_C}(X).
4. The PSP verifies v = h(D) and checks that the amount y is sufficient as postage.
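The matching PSP-side sketch, consistent with the stamping sketch above; the customer ID and the mailing date, which are contained in D, are passed separately here for brevity:

import hashlib, hmac
from datetime import date, timedelta

def derive_key(sk_lo: bytes, customer_id: bytes) -> bytes:
    return hashlib.sha256(sk_lo + customer_id).digest()    # SK_C = h(SK_LO, ID_C)

def tag(key, *fields):
    msg = b"|".join(str(f).encode() for f in fields)
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_stamp(sk_lo: bytes, customer_id: bytes, mailing_date: date,
                 D: bytes, X: bytes, y: int, required_postage: int) -> bool:
    # Step 1: consistency of D; stale or future dates go to second-level checking.
    if not (date.today() - timedelta(days=3) <= mailing_date <= date.today()):
        return False
    # Step 2: derive the customer's key from the local office key.
    sk_c = derive_key(sk_lo, customer_id)
    # Steps 3/4: recompute v = h(D), check the authenticator and the amount.
    v = hashlib.sha256(D).hexdigest()
    return hmac.compare_digest(X, tag(sk_c, "STAMP", v, y)) and y >= required_postage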
3.4 Replay Detection
Although a stamp is tied to a fixed set of characteristics of the letter, one still has to consider replay attacks. It is not unlikely that a company needs to send many letters with the same characteristics within a short time (e.g. the correspondence with one of its branch offices). In this case the company could save a lot of money by illegally copying and reusing stamps. The only way to detect illegal copying is to log all verified stamps in databases. Since mail is usually verified at a post office located in the same region as the customer, this can be done in a decentralized way, i.e. the stamps of a certain customer are logged at the regional post office. The data of mail which is verified by different post offices can be exchanged over network connections.

Furthermore, since stamps are likely to be verified in essentially the same order as they have been generated, the logging can be done very space-efficiently: For each customer C, let i_C be the greatest number i such that all stamps of customer C having a serial number smaller than i are either expired or have already been verified. Then for each customer C it is sufficient to store only i_C and the (compressed) list of the serial numbers greater than i_C of the verified stamps from customer C. For concrete estimates of the expected size of the databases we refer to [3].
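This windowed log can be sketched as follows, assuming each stamp carries a per-customer serial number among its specific parameters:

class ReplayLog:
    def __init__(self):
        self.low = 0          # i_C: all serials <= low are verified or expired
        self.seen = set()     # verified serials above the watermark

    def accept(self, serial: int) -> bool:
        """Return True if the stamp is fresh, False if it is a replay."""
        if serial <= self.low or serial in self.seen:
            return False                      # already verified: a copy
        self.seen.add(serial)
        while self.low + 1 in self.seen:      # advance the contiguous watermark
            self.low += 1
            self.seen.remove(self.low)
        return True

log = ReplayLog()
assert log.accept(1) and log.accept(3) and log.accept(2)
assert not log.accept(2)                      # replayed stamp detected
assert log.low == 3 and log.seen == set()     # storage collapses to the watermark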
4 Conclusion
We conclude this work by briefly comparing our approach to IBIP.

Advantages of IBIP:

– If a secret key SK_LO of a local post office is compromised, in our approach the PSP has to replace a set of smart cards – all smart cards whose key is derived from SK_LO. In IBIP, if a secret CA key were compromised, one would only need to replace the certificates signed by this key. This is no real threat, as SK_LO and the secret CA key are additionally secured by strict organizational means.
– By using the special-purpose hardware with a real-time clock, it would be harder for an attacker to change the time to produce forged stamps.

Advantages of our approach:

– Our approach does not require special-purpose hardware, but only simple smart cards at the client, which is much more cost efficient.
– The "signature" in the stamp in our concept is at most 16 bytes, which is less than half as big as in IBIP using DSA-type signatures, let alone RSA. Thus our stamps do not cause problems, even if they are used for postcards or small letters.
– The verification of stamps in our approach is much more efficient than in IBIP, because one does not need to connect to a directory to obtain certificates, but derives the corresponding symmetric key by simple operations. Thus it will be feasible to verify all stamps, recognize forged stamps, and hence prevent fraud.

Comparing the arguments for IBIP and for our approach, we think that our approach is much more suitable for implementing a large-scale system for electronic franking.
References
1. United States Postal Service: Performance criteria for information-based indicia and security architecture for IBI postage metering systems, August 19th, 1998, via http://www.usps.com/ibip
2. José Pastor: CRYPTOPOST (TM): A universal information based franking system for automated mail processing, Journal of Cryptology 3(2):137-146, 1991
3. J.D. Tygar and Bennet Yee: Cryptography: It's not just for electronic mail anymore, Technical Report CMU-CS-93-107, School of Computer Science, Carnegie Mellon University, Pittsburgh, 1993
4. World Intellectual Property Organization, International Application under the Patent Cooperation Treaty (PCT): System and method for retrieving, selecting and printing postage indicia on documents, International Application Number WO 97/14117, April 17th, 1997
Implementation of a Digital Lottery Server on WWW
Kazue Sako
C&C Research Laboratories, NEC Corporation
4-1-1 Miyazaki, Miyamae, Kawasaki 216-8555, Japan
[email protected]
Abstract. This paper presents the implementation of a digital lottery server on WWW. The aim of this server is to offer an outcome that a group of users can agree to be random. The server allows users to define and start a lottery session, participate in that session, and verify its outcome. While the scheme employs cryptographic tools for purposes of security, it keeps their complexity to a minimum, so as not to complicate actual operations or adversely affect the ease of actual use.
Keywords: lottery, fairness, multiparty protocol, implementation, hash function
1
Introduction
The widespread use of digital networks has created a kind of cyber-society whose citizens are increasingly able to participate in the full range of activities associated with a real community. One convenience which they still appear to lack, however, is a handy system for drawing lots. In a real community, a group of people can gather in one place to draw lots by themselves or to observe that a representative indeed drew in a fair manner. Being physically present is essential to be assured of fairness, i.e., that there was no cheating in the process, but this is essentially impossible in a cyber-society. Cryptographic protocols provide a theoretical basis for achieving assured fairness, and studies on secure multi-party computation demonstrate the possibility of determining a winner randomly [1,2]. Goldschlag and Stubblebine present a simple lottery scheme based on a delaying function [3]. These schemes can be used in principle to build a lottery system, given their specifications. However, an actual tool to support a flexible, universal lottery, whose design and purposes might be modified, has, to the author's best knowledge, yet to be reported. This paper presents the implementation of a convenient digital lottery server to be used on the WWW. It offers an outcome that a group of users can agree to have been determined randomly. The server may be used for selecting at random a single winning participant, or multiple winners at various levels of winning (e.g. a single 1st prize winner and five 2nd prize winners). It can also be used for the random ordering of participants (e.g. to determine a draw for an athletic competition).
The server allows an initiator of a lottery session to determine its purpose and design its rules. In designing such a lottery server, security requirements and applicability features often conflict with each other. We have carefully designed the lottery scheme that the server adopts to meet the following requirements, from the aspects of both security and applicability.
Fairness: Users can be assured that outcomes are generated in a fair manner, under reasonable assumptions.
Verifiability: The server provides users with means of verification to detect any inconsistencies introduced during the session.
Simplicity of procedures: The round complexity of the scheme is kept as low as possible.
Robustness: The server does not fail to produce an outcome, even in the presence of lazy players, i.e. those who for some reason fail to participate as they should.
Flexibility: The server can handle many of the types of lotteries that are carried out in practice.
Descriptive menu: The server provides a simple template menu by which the initiator can describe in detail the characteristics of the desired lottery.
Feeling of reality: Care must be taken to design an appropriate human interface that adds a sense of "reality" to the act of participation.
We believe lottery servers constructed in this way will be useful on the Internet in a number of ways: an individual might use one, for example, to choose, from friends distributed over the net, the persons to whom to give spare movie tickets; managers of public facilities might use one to choose among applicants for the use of those facilities; newly developed electronic auction systems or electronic voting systems might use one to hold a lottery in case of a tie; etc. The rest of the paper is organized as follows: in Section 2, we describe the basic process involved in a lottery session, and in Section 3, we present the implementation more specifically.
2
Lottery Scheme
The lottery scheme that we have employed involves two kinds of users: a dealer, who initiates a lottery session on the lottery server, and players, who access the server and participate in the lottery session. The lottery server itself manages and carries out multiple lottery sessions initiated by the users. More specifically, the trustworthy server:
– provides users a means to start new lottery sessions and become dealers,
– maintains the secrecy of the initial values of each session until its closure,
– provides users a means to participate in sessions they wish to participate in,
– on the due date, executes a lottery and computes its outcome, and
– displays the outcome of each executed lottery session in a verifiable manner.
2.1
Outline
The outline of how a session proceeds is as follows:
1. The dealer initiates a lottery session on the lottery server. He describes its design and purpose using a template, then determines an initial value x and submits it.
2. The server, on accepting the request, determines a server's initial value y for the session using a random number generator. The server assigns a unique session ID sid.
3. The server adds to a session list the description of the new session, which is published on the web, together with commitments to the dealer's initial value x and the server's initial value y. Here, the commitments are computed as H(sid ◦ x) and H(sid ◦ y) using a cryptographically secure hash function [4], H.
4. Each player i chooses the session he is to participate in, and enrolls his name together with a string ri freely created by himself.
5. On the date of execution, the server computes the outcome from the two initial values and the strings created by the players. The result of the session is mapped from the hashed value H(x ◦ y ◦ r1 ◦ · · · ◦ rn ◦ DESC ◦ sid), where DESC denotes a string uniquely converted from the description of the session (see footnote 1).
6. The outcome is published on the web, together with the revealed initial values x, y and the strings ri created by the players.
7. Each player may verify the following: that the string he has created is indeed included, that the outcome has been correctly computed, and that the initial values had been correctly committed, by computing H(x ◦ y ◦ r1 ◦ · · · ◦ rn ◦ DESC ◦ sid), H(sid ◦ x) and H(sid ◦ y).
2.2
Features
We assume that a cryptographically secure and ideal hash function achieves the following properties:
– One-wayness. Given H(x), it is hard to compute x. Moreover, given H(x) and any partial string constituting x, it is hard to recover x entirely.
– Collision-freeness. It is hard to find x and x′ that yield H(x) = H(x′).
– Random oracle model. The distribution of H(x) can be regarded as random.
Based on these properties of the hash function, the lottery scheme described above provides the following features:
– Strings chosen by the players contribute equally to computing the outcome. The output of an ideal hash function depends on each bit of the input; controlling a part of the input cannot control the output.
Footnote 1: In the implementation, we added the name of the dealer and the name and registration number of each participant to the input to the hash function.
– Players are not allowed to gain any unfair advantage. In order for a player to select a string advantageous to himself, knowledge of all the other players' strings and of the initial values x and y is required. However, the public information regarding x and y is H(sid ◦ x) and H(sid ◦ y), which leaks no information on x and y due to the one-wayness of the hash function. Even if a player colludes with the dealer, the value of y is never leaked beyond the trustworthy server.
– Neither dealers nor the server can alter the initial values without being detected. In order to replace the initial value x by x′, or y by y′, H(x) = H(x′) or H(y) = H(y′) must hold for the change to go undetected. Such collisions are hard to find due to the collision-free property of the hash function.
2.3
Security Enhancement and Its Cost
One possible concern regarding the proposed system is the power centralized at the server, i.e., it knows all the initial values of the sessions. One way to decentralize this power is to keep some of these values from the server and distribute them among the dealer and/or some of the participants of each session. For example, a dealer may not submit his initial value x in plain text when initiating a session, but instead submit H(sid ◦ x) from the beginning. Thus the server is prevented from knowing x. Even the participants, some or all of them, can voluntarily take part in setting initial values. That is, they can send the hashed value of a string of their choice, which will be published, before the session begins. They reveal the string only after the closing of the session. In this case, each participant can be completely assured of fairness, because no cheating is possible as long as he keeps the string to himself. The biggest drawback to this approach is that both the dealer and the volunteer participants must be available when the outcome is computed. This affects the scheme's robustness, in that the session may terminate without an outcome. We feel that a more suitable approach is to introduce multiple independent entities within a server, which is always assumed to be present when executing a lottery. On initiating a session, each entity generates an initial value and broadcasts its hashed value to the other entities. The set of all hashed values can be considered the server's hashed value, or, more conveniently, the hash of all hashed values from the entities can be used. This improvement does not cause any change in the user procedure. Other conventional threshold techniques can also be employed.
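A sketch of this last variant, assuming three entities and SHA-256 as the hash (both our own choices):

import hashlib, os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

sid = b"session-0001"
# Each independent entity generates its own initial value ...
ys = [os.urandom(16) for _ in range(3)]
# ... and broadcasts its hashed value to the other entities.
entity_commitments = [H(sid + y) for y in ys]
# The hash of all entity commitments serves as the server's single hashed value.
server_commitment = H(b"".join(entity_commitments))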
3
Digilot: WWW Lottery Server
We have constructed a WWW lottery server on which to implement the proposed scheme. For the implementation, we have specifically designed:
– a template menu which allows a dealer to describe the design and the purpose of his lottery, and
– a lottery engine which specifies the input to the hash function and the mapping of a hashed value to the outcome of the lottery.
Fig. 1. Main Page of Lottery Server
Starting a new session. A dealer, the person who opens a new lottery session, specifies the following using the template menu:
– the title of the session
– the aim of the session
– participants' qualifications: Sessions may be either open to all or limited to members. In the latter case, a list of members must be given.
– selection field: Selections can be made from a field comprising the participants, the limited members, or others. In the third case, a list of items in the selection field must be given, for example spades, hearts, clubs and diamonds for a card game, or 1-6 for a dice game. The second case applies when one needs to select among the members with equal probability, despite the possible laziness of individual members. That is, even if a member fails to participate, he still has a chance of winning (or, similarly, losing) in the lottery.
– the outcome to be determined: its item description and number. Items can be plural, e.g. one 1st prize winner and five 2nd prize winners.
– opening and closing dates
– the date to execute the lottery
– an initial value x
Although this template menu does not serve to represent all types of possible lotteries, we believe it compactly describes most of the designs and purposes of lotteries.
Fig. 2. Starting of a New Lottery Session
When a request for a new lottery session with its description is submitted, the server asks the dealer to choose an initial value x. The server then generates a random string y to be the server's initial value for that session, and assigns a session number (session ID, sid). The description of the new session, together with the values H(sid ◦ x) and H(sid ◦ y), is published on the WWW.
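In outline, the server-side session setup could look as follows (SHA-256 and the concrete byte encodings stand in for the unspecified hash function H and the encoding of sid):

import hashlib, os

def H(data: bytes) -> bytes:
    # The cryptographically secure hash function of the scheme.
    return hashlib.sha256(data).digest()

def open_session(sid: bytes, x: bytes):
    """Accept a dealer's initial value x and publish the two commitments."""
    y = os.urandom(20)                       # server's initial value
    published = {
        "sid": sid,
        "commit_x": H(sid + x).hex(),        # H(sid o x)
        "commit_y": H(sid + y).hex(),        # H(sid o y)
    }
    return y, published                      # y stays secret until closure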
Participating in a session. Each participant specifies a session number, either one obtained from the list of public sessions, or one provided by the dealer or another participant. The server displays the aim and description of the session, together with the hashed values H(sid ◦ x) and H(sid ◦ y). By typing in his name and a freely chosen string, he completes the participation procedure (see footnote 2).
Footnote 2: If only qualified members are to participate, they need to enter a secret password of which the server informs them beforehand.
On accepting his entry (see footnote 3), the server displays his registration number together with the string he has submitted.
Footnote 3: The current open-to-all sessions identify and distinguish users only by their e-mail addresses. For more rigorous user identification, the use of digital signature techniques is appropriate.
Result of a session. After the closing date, the lottery server computes the outcome and publishes the result. The server provides a verification tool so that all participants can verify that the result is consistent with the specified procedure. The tool also allows users to edit an input to the outcome-computing function; thus one can observe how an alternative choice of string would have changed the outcome.
Specification of the lottery engine. The following are the inputs to the lottery engine:
– the dealer's name
– the dealer's initial value and its published hashed value
– the server's initial value and its published hashed value
– each participant's name, his registration number, and his string, in the order of participation
– the list of items in the selection field, in a specified order
– the list of outcome descriptions and their number, in the announced order
– the session ID
The engine concatenates the above inputs in the specified order and computes their hashed value. The hashed value is then mapped to a number in the list of selection field items using modular computation. When multiple items are to be selected, the hashed values are sequentially hashed to obtain the necessary distinct winning items.
Implementation details. We have implemented a demonstration system which works on Windows 95 and Windows NT Server/Workstation with a 100 MHz Pentium CPU. The system requires a web server such as Personal Web Server or Internet Information Server, and Netscape Navigator. The program is written in C and has 6K lines, less than a tenth of which is devoted to describing the lottery engine. For actual use, we are also developing a system that uses an Oracle 8 database for session management, with e-mail services to notify users of the status of their lottery sessions.
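A sketch of the engine's core mapping (the concatenation encoding is our own choice, and the loop assumes the number of winners does not exceed the size of the selection field):

import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def run_engine(inputs: list, selection_field: list, num_winners: int) -> list:
    """Map the concatenated session inputs to distinct winning items.

    inputs          -- the engine inputs in the specified order, as bytes
    selection_field -- ordered list of items to select from
    num_winners     -- how many distinct items to draw
    """
    digest = H(b"".join(inputs))
    winners = []
    while len(winners) < num_winners:
        index = int.from_bytes(digest, "big") % len(selection_field)
        if selection_field[index] not in winners:   # distinct winners only
            winners.append(selection_field[index])
        digest = H(digest)      # sequentially hash for the next draw
    return winners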
4
Concluding Remarks
In this paper, we have presented the design and implementation of a lottery server on WWW. This is an actual tool to support a flexible, universal lottery,
where a user can define and start a lottery session for his own purpose. The server provides users with a means of verification, which helps them to be assured of fairness. Through the use of a trustworthy server that maintains secrets, the scheme does not complicate actual operations or adversely affect the ease of actual use.
References
1. Manuel Blum: Coin Flipping by Telephone. In IEEE COMPCON, pages 133-137, 1982.
2. Oded Goldreich, Silvio Micali, and Avi Wigderson: How to Play Any Mental Game. In STOC, pages 218-229, 1987.
3. David M. Goldschlag and Stuart G. Stubblebine: Publicly Verifiable Lotteries: Applications of Delaying Functions. In Financial Cryptography, 1998.
4. Douglas Stinson: Cryptography - Theory and Practice. CRC Press, 1995.
5. Ronald Rivest: Electronic Lottery Tickets as Micropayments. In Financial Cryptography, 1997.
6. Eyal Kushilevitz, Yishay Mansour, and Michael Rabin: On Lotteries with Unique Winners. SIAM Journal on Discrete Mathematics, Vol. 8, No. 1, pp. 93-98, 1995.
Cert'eM: Certification System Based on Electronic Mail Service Structure
Javier Lopez, Antonio Mana, and Juan J. Ortega
Computer Science Department, E.T.S. Ingenieria Informatica, Universidad de Malaga, 29071 Malaga, Spain
{jlm, amg, juanjose}@lcc.uma.es
Abstract. Public-Key Infrastructures are considered the basis of the protocols and tools needed to guarantee the security demanded for new Internet applications like electronic commerce, government-citizen relationships and digital distribution. This paper introduces a new infrastructure design, Cert’eM, a key management and certification system that is based on the structure of the electronic mail service and on the principle of near-certification. Cert’eM provides secure means to identify users and distribute their public-key certificates, enhances the efficiency of revocation procedures, and avoids scalability and synchronization problems. The system, developed and tested at the University of Malaga, was recently selected by RedIRIS, the National Research and Academic Network in Spain, to provide the public key service for its secure electronic mail.
1
Introduction
There is wide agreement on the immense potential of the Internet, especially for exciting new applications like electronic commerce, government-citizen relationships and digital distribution, but a significant part of the users are still reluctant to use the network for financially or legally sensitive data due to the lack of security. The growth and performance of the Internet are adversely affected by security issues and by the open design of the network itself. Thus, despite its enormous possibilities, the Internet has not yet become a common vehicle for those applications because it is still too easy to intercept, monitor and forge messages, and even impersonate users [1]. Several systems, such as Kerberos [2,3], have been proposed to protect communications over public networks using symmetric-key cryptography. Those systems are not easily scalable for large groups of users belonging to different organizations, although some efforts have been made to solve this problem [4,5,6]. On the other hand, public-key cryptography [7] seems to be well suited to satisfy the requirements of the Internet, and is fast becoming the foundation for those applications that require confidentiality and authentication in an open network. The widespread use of a global public-key cryptosystem is complemented by a Public-Key Infrastructure (PKI), an efficient and trustworthy means to manage
public-key values. A PKI is a vital element because it enables the application of the cryptosystem to the exchange of sensitive information between parties that do not have a face-to-face interaction. This paper introduces Cert'eM, a new key management and certification system based on the electronic mail service structure. It is organized as follows: Section 2 presents the system structure and operation; Section 3 summarizes additional features that improve the efficiency of the system; Section 4 describes the protocol used to access the key servers; and, finally, Section 5 presents concluding remarks.
2
Description of the System
The fundamental principles of Cert'eM can be summarized in the following design goals:
• to use a CA architecture that satisfies the needs of near-certification, so that trust can be based on whatever criteria are used in real life;
• to eliminate the problems associated with revocation procedures and to simplify the validation of certificates;
• to avoid architectures that yield scalability problems;
• to avoid the synchronization problems associated with schemes that keep multiple copies of keys and certificates; and
• to minimize the network traffic, especially that generated by management operations.
2.1
Structure
The main element in the hierarchy is the Keys Service Unit (KSU), which integrates both key certification and certificate management functions. Cert'eM uses a scheme with various KSUs operating over disjoint groups of users, forming a predefined hierarchy. Figure 1 shows the system structure. The KSU hierarchy defined by Cert'eM is parallel to the hierarchy of Internet domains. A relevant feature is that KSUs are associated with the corresponding e-mail offices. As shown in Figure 2, every KSU is managed by a Certification Authority (CA). Additionally, it contains a database to store the certified keys of its users; each user public-key certificate is stored exclusively in the database of his/her KSU. The third component of the KSU is the key server, which receives requests and delivers the certificates to the requesters. A key server also manages a certificate cache that keeps some of the external certificates recently received. The certificate cache, carefully designed, enhances the efficiency of the system without introducing any security risk. Furthermore, any CA can define its own cache policy according to its users' needs. Each CA can set restrictions to limit the users or KSUs allowed to access the server. This feature provides the CA with a useful tool to avoid abuse and to balance the workload between different servers.
[Figure: tree of KSUs mirroring the Internet domain hierarchy - e.g. the nodes es, uma.es and lcc.uma.es - with end users hanging from the leaf KSUs.]
Fig. 1. Hierarchy of Cert’eM Nodes
The certified keys are managed solely by the corresponding CA; therefore, key updating and revocation are local operations without influence on the rest of the system. We must underline that no Certificate Revocation List (CRL) is used in the design. The validation of a certificate is achieved using the Validity Statement (VS), a timestamped statement signed by the CA attesting that the certificate had not been revoked at the time of issuance of the VS. A certificate is considered expired if its validity period has finished. If a certificate has not expired, we call it an active certificate. An active certificate is valid if it has not been revoked; therefore, in order to validate active certificates, the CA simply issues a VS.
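A rough sketch of VS issuance and checking; the signature primitives, field encoding and freshness window are abstracted away, since the paper does not fix them:

import time

def issue_vs(ca_sign, serial: int) -> dict:
    """CA-side issuance of a Validity Statement for an active certificate."""
    body = {"serial": serial, "issued_at": int(time.time())}
    return {"body": body, "sig": ca_sign(repr(body).encode())}

def accept_vs(ca_verify, vs: dict, serial: int, max_age: int = 300) -> bool:
    """Requester-side check: matching serial, fresh timestamp, valid signature."""
    body = vs["body"]
    return (body["serial"] == serial
            and time.time() - body["issued_at"] <= max_age
            and ca_verify(repr(body).encode(), vs["sig"]))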
[Figure: a KSU comprising the CA, its database and the key server; (1) insertion, updating or revocation of certificates; (2) certificate extraction for delivery.]
Fig. 2. KSU Components
2.2
System Operation
Cert'eM defines a special user, ca@<domain>, in every KSU in order to denote the corresponding CA. The certificate of any CA is stored exclusively in the database of its parent KSU. Exceptionally, the key of a CA located at a top-level domain is stored in the database of its own KSU, certified by the domain registering authority (e.g. ICANN). Keys distributed by any KSU are always certified by the corresponding CA; thus, in the subsequent discussion, we will use the terms 'key' and 'certificate' equivalently.
The logical structure of the data transmitted by a KSU in response to a certificate request is important in order to clarify the key distribution procedure. A certification response consists of two components:
• an X.509v3 certificate [8] containing, among other information, a serial number and the expected life of the certificate (the validity information);
• the VS signed by the CA, containing the certificate serial number and the time of issuance.
Other systems, like SPKI/SDSI [9,10], propose a similar mechanism called one-time revalidation (OTR), but for our purposes this solution is not convenient because it does not provide tools to limit the use of such a "pre-validated" certificate in the future. Therefore, in our scheme, the certificate does not need to be issued on-line; however, it still provides a good degree of security against attacks that try to use revoked certificates.
We now describe the sequence of actions that are carried out when any user (the requester) wants to get the public key of another user (the addressee). This process starts when the e-mail address of the latter is provided by the requester to his/her KSU, which in turn forwards the request to the addressee's KSU, whose database contains the key. This operation is easily done because the system can determine the KSU to be contacted from the e-mail address provided. These actions are shown in Figure 3 (left). In this case, the figure depicts the information flow produced when user Bob ([email protected]) requests the key of user Alice ([email protected]). As shown, Bob requests Alice's key from his own KSU and this one directs the request to the KSU located at the x.y.z node. The response from Alice's KSU is then forwarded to Bob. Bob must request the key from his KSU due to the access restrictions that other KSUs set, and also to take advantage of the certificate cache of his KSU. If desired, Bob can also request the certificate of [email protected] from the KSU located at y.z, obtaining a new certificate that proves the authenticity of the first one. This is depicted in Figure 3 (right). The ascending validation process can continue until a top-level node is reached. If no KSU is present at y.z (i.e. the domain does not support the Cert'eM system), the key of [email protected] is automatically requested from the parent node, that is, z. This allows Cert'eM to be used even in the case of incomplete structures.
[Figure: two KSU trees rooted at z and t, with intermediate nodes y.z, s.t and leaf KSUs x.y.z (alice) and r.s.t (bob); C_M denotes the certificate and VS of user M; arrows show information flow, requests, responses and the certification route.]
Fig. 3. Certificate Request (left) and Validation (right)
Some similarities can be found between Cert'eM and the Secure-DNS proposal [11,12]. Both use the Internet domain name hierarchy to find the location where a particular key is stored, but Secure-DNS uses the name server files while Cert'eM uses the e-mail offices. Our choice is based on the following reasons:
• Unlike e-mail offices, several domains usually share the same DNS; therefore, DNSs are frequently not closely related to their users, and their CAs may not have direct knowledge of the users' identities, which makes them more vulnerable to impersonation.
• DNSs are intended to store information about domains, not about users. As a consequence, there is a registration procedure for a new domain but not for a new user of one of the registered domains. In fact, a final user never needs to interact with the DNS to get access to the Internet, but users are forced to interact with e-mail offices to set up an e-mail account.
• DNSs use caching and lifetime mechanisms that could yield inaccurate or false information in some situations. This feature can be used to attack the system.
For these reasons the Secure-DNS scheme cannot guarantee the link between real-world users and keys (not conforming with article 8.2 in [13]).
3
Additional Features
One of the advantages of Cert'eM is that, in case the private key of a user is compromised or lost, the associated public key can be revoked or replaced without possessing the private one. This is possible because there is an entity (the CA), responsible for the maintenance of the database of certificates, which can perform a real-world user identification.
Unlike other systems that require the user to generate a "suicidal note" to be used in case the key is compromised or lost [14], Cert'eM users do not need to take any preventive measures for this circumstance. In case the key of a CA is changed, existing certificates must be discarded and the CA must reissue all the certificates. Other systems need to notify this event to users and request the old certificates in order to re-certify their keys and distribute the new certificates. In Cert'eM, any CA keeps the certificates of its users in the local database of the KSU, and there is no need to send new certificates and notify the invalidation of the previous ones. Consequently, the change of the CA key is transparent to users. Usually, the need to check CRLs for certificate revocations becomes a performance handicap. For this reason, systems that use CRLs or similar mechanisms (e.g., the On-line Certificate Status Protocol [15], or Suicidal Bureaus [14]) to invalidate certificates incorporate solutions to minimize the number of accesses needed to verify a certificate, but these solutions are sometimes artificial and not efficient. Therefore, avoiding the use of CRLs has been considered one of the priority goals in the design of Cert'eM. In order to achieve a design that does not expose the problems of using CRLs while still retaining their benefits, all the information related to the certification of a specific user must be located and managed at the corresponding KSU. In case a CA decides to record certificate invalidation events, a Local Invalidation Log (LIL) can be managed locally. Notice that a LIL is completely different from a CRL because the LIL is used exclusively by the CA. When a user certificate needs to be invalidated (because his/her key has been lost or compromised, or because the CA has reasons to cease certifying the user), the CA simply deletes the certificate from its database and, if appropriate, stores the revoked certificate in the LIL. This procedure is simple and immediate, requires no communication, and can provide proofs of certificate revocations in case the CA needs them. Once the revocation takes place, existing active certificates are not useful any more because no VS will be issued to make them valid. The use of the VS prevents attacks based on the reuse of old certificates.
3.1
User Identification
When designing a key management system that achieves secure user identification, it is necessary to take into account the difference between the real world (where people, companies and computers are) and the Internet world (where names, keys and certificates are). It must be pointed out that many of the identity certificates presently in use are based exclusively on a contact, through the Internet, between the user and the CA. This is clearly unsatisfactory because the requester of a certificate will usually require some guarantee of the link between the identity of the user in the real world and his name in the Internet world. Therefore, in these certificates, trust is misinterpreted from the start.
The design of Cert'eM guarantees that a CA will only certify the keys of those users close to it. Therefore, a formal identity verification procedure has been established to give a legal meaning to the certification process [16]. Consequently, a link is established between the identity documents (valid in the real world), a distinguished name in the Internet world (the e-mail address) and a cryptographic key. It has been described how Cert'eM uses e-mail addresses to identify users. There are two common criticisms of the use of e-mail addresses as distinguished names. Firstly, it is claimed that the relationship between a person in the real world and an electronic mail address is not one-to-one, because a user can have several e-mail accounts and different aliases; besides, there are certain e-mail addresses that do not represent a single user but a group of them. Secondly, it is also claimed that, in some cases, the alias file can be modified without administrator or root permissions. Cert'eM has been designed to overcome these problems by isolating the certification management from the e-mail account management.
4
Key Server Access Protocol
In this section we introduce the protocol that describes how both individual users and other key servers access a KSU. A TCP connection to port 850 is used for the Cert'eM service. The requests are represented in a client/server scenario, where individual users or key servers can play the client role; for instance, consider a request from user [email protected] (client) to the KSU located at r.s.t (server), followed by a request from the KSU located at r.s.t (now client) to the KSU located at x.y.z (server). In the subsequent description, C will be used to denote a generic client and S to denote a generic server. 4.1
Protocol Data
We will use the following data structures as part of the protocol:
<clientID>: Identification of the client.
<userID>: The e-mail address (with format <user>@<domain>) of the user whose key (certificate) is requested. Cert'eM uses the <domain> to determine in which KSU the key resides.
<certificate>: An X.509v3 certificate containing, among other information, the user identification (equivalent to <userID>), the user's public key, a certificate serial number that is unique for the issuing CA, and the expected activity period of the certificate. This record is kept in the KSU database, so there is no need to produce it online.
<VS>: A timestamped statement containing a certificate serial number and the time of issuance of this <VS>, signed by the CA. It is used to guarantee that the certificate with that serial number was not revoked at the time of issuance. Opposite to the <certificate>, this record is produced online.
<certID>: Certificate identification consisting of the <userID> of the addressee user and the certificate serial number of the active certificate to be checked.
<NACK>: Negative acknowledgement. It guarantees that there is no key associated with the <userID> requested.
4.2
Protocol Description
The protocol is structured in three phases: connection, transaction and termination.
Connection Phase. The connection is established with the following message:
C: HELLO [<clientID>]
where <clientID> is optional, depending on the particular KSU security policy to be implemented. Each CA can set restrictions to limit the users or computers allowed to access the server. When a server receives this message, it checks whether or not <clientID> is allowed to establish the connection. Afterwards, the server sends one of the following messages as a response:
S: +OK – the client has permission
S: -ERR1 – the client host is not allowed
S: -ERR2 – the client is not allowed
Transaction Phase. When the connection is successfully established, the client can start requesting keys. For this purpose the following message is used:
C: GET KEY <userID>
When the server receives the previous message, the following situations can arise:
1. The requested <domain> coincides with the <domain> of S (i.e. the requested key belongs to a local user of S). The response is:
S: KEY <certificate> <VS>
if the key was found, or
S: -NSK <NACK> – no such key
if the key was not found.
2. The requested <domain> does not correspond with that of S.
a) The requested <user> is ca.
i. If the <domain> of S corresponds to the parent of the requested <domain>, then the key should reside in the database of S; therefore, the case is managed as a local certificate request (case 1).
ii. Otherwise, the key is requested from the KSU located at the upper node of <domain>. If there is no KSU in that node, the request is redirected to the succeeding upper one until the top-level node or S is reached.
b) The requested <user> is not ca.
i. If <domain> does not exist, the server returns an error message:
S: -ERR3
ii. Otherwise, a new connection is established to request the key from the KSU located at <domain>. The result of this new request is forwarded to the requester.
In case a client already has an active certificate, there is no need to request the complete certification information. The key check message is used in this case:
C: CHK KEY <certID>
To which the server responds:
S: VS <VS>
if the key is found and has not been revoked; otherwise, the request is carried out as a GET KEY request.
Termination Phase. This phase is meant to inform the server that the client has finished requesting keys. To do so the client sends this message:
C: EXIT
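To illustrate the three phases from the client side, here is a hypothetical exchange over TCP port 850 (the CRLF framing, the error handling and the function name are our assumptions; only the message keywords are fixed by the protocol above):

import socket

def get_key(host: str, user_id: str, client_id: str = "anonymous") -> str:
    """Minimal Cert'eM client: connection, one GET KEY transaction, termination."""
    with socket.create_connection((host, 850)) as s:
        f = s.makefile("rw", newline="")
        f.write(f"HELLO {client_id}\r\n"); f.flush()
        if not f.readline().startswith("+OK"):
            raise PermissionError("client not allowed")
        f.write(f"GET KEY {user_id}\r\n"); f.flush()
        reply = f.readline()      # "KEY <certificate> <VS>" or "-NSK <NACK>"
        f.write("EXIT\r\n"); f.flush()
    if reply.startswith("-NSK"):
        raise LookupError(f"no such key for {user_id}")
    return reply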
5
Conclusions and Future Work
Several PKIs have been proposed in the literature to meet the security needs of different network applications. This paper has presented a new scheme, Cert'eM, a key management and certification system that is based on the structure of the electronic mail service and on the principle of near-certification. It provides secure means to identify users and distribute their certificates, eliminating the problems associated with common revocation procedures and simplifying the validation of certificates. The system has been deployed for certified electronic mail at the University of Malaga, and presently serves about forty thousand users distributed over more than twenty KSUs. Additionally, the system was recently selected by the National Research and Academic Network in Spain to provide the public key service for its secure electronic mail service, and is presently being tested by a restricted group of users as the previous step to its distribution to the community of RedIRIS users. This is producing valuable information for future improvements. Among the ongoing projects, we point out the utilization of Cert'eM in corporate extranets, as well as several applications in the university environment, like computer system access controls and the secure exchange of official documents.
References
1. U.K. Department of Trade and Industry, "Building Confidence in Electronic Commerce - A Consultation Document", March 1999.
2. J. Kohl, "The Use of Encryption in Kerberos for Network Authentication", Advances in Cryptology, Proceedings of CRYPTO '89, Springer-Verlag, 1989, pp. 35-43.
3. J. Kohl, B. Neuman, "The Kerberos Network Authentication Service (V5)", RFC 1510, 1993. http://www.ietf.org/rfc/rfc1510.txt
4. D. Davis, "Kerberos Plus RSA for World Wide Web Security", First USENIX Workshop on Electronic Commerce, 1995, pp. 185-188.
5. R. Ganesan, "Yaksha: Augmenting Kerberos with Public Key Cryptography", Internet Society Symposium on Network and Distributed Systems Security, IEEE Press, 1995, pp. 132-143.
6. J. Schiller, D. Atkins, "Scaling the Web of Trust: Combining Kerberos and PGP to Provide Large Scale Authentication", USENIX Technical Conference, 1995.
7. W. Diffie, M. Hellman, "New Directions in Cryptography", IEEE Transactions on Information Theory, IT-22, n. 6, 1976, pp. 644-654.
8. International Telecommunication Union, ITU-T Recommendation X.509, Information Technology - Open Systems Interconnection - The Directory: Authentication Framework, 1997.
9. C. Ellison, "SPKI Requirements", Internet draft, May 1999. http://www.ietf.org/internet-drafts/draft-ietf-spki-cert-req-03.txt
10. C. Ellison, W. Frantz, B. Lampson, R. Rivest, B. Thomas, T. Ylonen, "SPKI Certificate Theory", Internet draft, June 1999. http://www.ietf.org/internet-drafts/draft-ietf-spki-cert-theory-05.txt
11. D. Eastlake, "Domain Name System Security Extensions", RFC 2535, March 1999. http://www.ietf.org/rfc/rfc2535.txt
12. D. Eastlake, O. Gudmundsson, "Storing Certificates in the Domain Name System (DNS)", RFC 2538, March 1999. http://www.ietf.org/rfc/rfc2538.txt
13. European Commission, "Proposal for a European Parliament and Council Directive on a Common Framework for Electronic Signatures", COM(1998) 297 final, 1998. http://www.ispo.cec.be/eif/policy/com98297.html
14. R. Rivest, "Can We Eliminate Revocation Lists?", Proceedings of the Second International Conference on Financial Cryptography, FC '98, Springer-Verlag, 1998.
15. C. Adams, M. Myers, A. Malpani, R. Ankney, S. Galperin, "X.509 Internet Public Key Infrastructure Online Certificate Status Protocol - OCSP", Internet draft, April 1999. http://www.ietf.org/internet-drafts/draft-ietf-pkix-ocsp-08.txt
16. B. Wright, "Making Numbers Ceremonial: Signing Tax Returns with Personal Identification Numbers", personal communication, 1998.
A Method for Developing Public Key Infrastructure Models
Klaus Schmeh
secunet Security Networks AG, Im Teelbruch 116, 45219 Essen, Germany
[email protected]
Abstract. This paper introduces a method for modelling Public Key Infrastructures (PKIs). This method is referred to as 3PPM Method (Three Part PKI Model Method). The resulting models are referred to as 3PPMs. 3PPMs are based on the Unified Modelling Language (UML). The 3PPM Method can be used in an early stage of PKI setup. It provides for an easy way to obtain a model that can be used as a basis for further planning, training and documentation. The 3PPM method has already been used in practice by the author of this document.
1
Introduction
The conception and the setup of Public Key Infrastructures (PKIs) can be regarded as one of the main issues in the field of computer security today. Before the set up of a PKI starts, it is important to develop a sophisticated model, which visualises the main PKI components and procedures. With a well-designed model it is easier to prepare the PKI setup, to estimate costs and to avoid misunderstandings. The scope of this paper is to introduce a method for modelling PKIs. This technique is referred to as 3PPM Method (Three Part PKI Model Method), the resulting models are named 3PPMs. 3PPMs are based on the Unified Modelling Language (UML). The 3PPM Method has already been used in practice by the author, who has two years of practical experience in the PKI area.
2
A PKI Set Up Procedure
For the set up of a PKI, an appropriate procedure has to be found. According to the author's experience the following procedure can be applied:
• Modelling of the PKI: In the first step a PKI model should be developed according to the operator's requirements. This model should be based on a requirement analysis and can be used for all future work.
• Product evaluation and basic tests: Before the PKI installation starts, some basic tests can be performed. This is the second step of the set up procedure.
• Pilot: The third step is a pilot, where a small user group works with a test PKI.
• PKI roll-out with one application: When the pilot is completed, the company-wide roll-out of the PKI can start. It is recommended to start with only one application (for example e-mail signatures and encryption).
• Adding applications: When the PKI works with the first application, other applications can be added.
This procedure has many advantages in practice. The experience gained in a certain step can be used directly in the next one. Mistakes detected in one step can be avoided in the next one. It is important to note that PKI modelling is the first step in this procedure. It is virtually impossible to understand a PKI or to exchange any thoughts about it without an appropriate model. How such a model can be developed with the 3PPM Method is described in this paper.
3
Basic PKI Units
The 3PPM Method for PKI modelling uses the terms component, role and use case. These terms are defined in this chapter. 3.1
Components
A component is a basic unit of a PKI. A component consists of hardware and/or software; typically it is one computer running a certain piece of software. Typical PKI components are the following:
• Certification Authority: This component is the core part of a PKI. It is responsible for generating and signing certificates.
• Certificate Server: The Certificate Server is a directory server which provides certificates to the user. The user can connect to the Certificate Server to obtain certificates or to check the status of a certificate. For the latter, revocation lists can be used.
• Timestamp Server: This component creates timestamps, which are needed to bind a digital signature to the time when it was created. A Timestamp Server is not a compulsory component of a PKI, but it makes a PKI more secure. In many cases, the Timestamp Server is omitted in the beginning.
• Local Registration Authority: This component is responsible for accepting certification requests and for passing them to the Certification Authority.
• Revocation Service: This component is necessary for accepting revocation requests.
• User Components: These are the components used by the PKI user for signing, encrypting and interacting with the central PKI components. Examples are e-mail crypto programs, crypto-enabled web browsers, hardware crypto components and the like.
• Personal Security Environment (PSE): This component is used by the user to store his private keys. It can be a file on a hard disk or floppy disk, or a smart card.
In any case, a component is a machine; a person is not considered a component.
3.2
Roles
Apart from hardware and software, humans play a role in a PKI. A PKI is operated, administered and used by humans. For this reason, roles are defined. A role is a set of rights and responsibilities that is connected to one or several persons. One person can have several roles; each role can be carried out by one or more persons. The roles that have to be determined for a PKI depend heavily on the PKI products used. Typical roles are the following:
• PKI Planner: This is the chief of the whole PKI environment. The PKI Planner commands all other roles, but he is not responsible for administration or routine tasks.
• CA Administrator: This role administers the Certification Authority. A CA Administrator is responsible for certificate generation and certificate revocation.
• Certificate Server Administrator: This role administers the Certificate Server.
• LRA Administrator: This role is responsible for a Local Registration Authority.
• User: The PKI user is considered a role, too.
Usually there are several people connected to one role (for example if two CA Administrators are needed).
3.3
Use Cases
Apart from persons and machines, processes play an important role in a PKI. For this purpose, use cases are introduced. A use case (sometimes also referred to as a business process) is a process that appears repeatedly inside a productive organisation. Usually the following use cases appear in a PKI:
• User Registration: This use case needs to be carried out to register a user for obtaining a certificate.
• Certificate Generation: This use case is carried out to generate a key pair for the user and to create a certificate around it. This use case is necessary after the user has been registered.
• Certificate Revocation: A certificate has to be revoked when it shall not be used any more. The revocation of a certificate is a use case.
• Certificate Server Inquiry: This use case is carried out when a user wants to access the Certificate Server to obtain a certificate or certificate status information.
• Timestamp Inquiry: This use case is carried out when a user needs a timestamp from the Timestamp Service.
Additional use cases may be defined for certificate renewal, CA key change, information distribution and other procedures.
Fig. 1. The components of a typical PKI shown in a component diagram. Central components are the Revocation Service, the Certificate Server and Certification Authority. Each user works with a User Component and a Personal Security Environment (PSE) respectively
4
Identification of Components, Roles and Use Cases
The first step of the 3PPM Method is the determination of components, roles and use cases. There is no algorithm for finding components, roles or use cases, so it is more a question of experience and creativity. The following subchapters give some guidelines. 4.1
Determining Components
To determine the PKI components, it must be decided which kinds of components will be used and how many of each are needed. A Certification Authority is always necessary, unless certificate generation is outsourced. In a large PKI, several Certificate Servers and Timestamp Servers can be used. It goes without saying that each user should have his own User Component and his own PSE. Complex PKIs may also include more than one Certification Authority. For example, hierarchies of Certification Authorities may occur. In this case, there is one Certification Authority issuing certificates for other Certification Authorities, which themselves may certify subordinate Certification Authorities or users.
Another option is a cross-certification, in which two Certification Authorities certify each other. In any case, each Certification Authority is a component. Local Registration Authorities are another kind of component which may appear more than once in a PKI. If a high degree of security is to be reached, each user should be required to show up personally at a Registration Authority, which means that a Local Registration Authority should be reachable at each location of an organisation. On the other hand, one central Registration Authority is sufficient if personal registration is not required. In any case, each Registration Authority is a component. Of course, it must also be determined what kinds of components are used by the user. This means that the functionality of the user client must be defined. Usually, the user uses tools for mail encryption and file encryption. Special clients for securing WWW connections or SAP R/3 transactions are also possible.
[Figure: the use cases User Registration and Certificate Generation connected to the roles User, LRA Administrator and CA Administrator.]
Fig. 2. A Use Case/Role Diagram showing two Use Cases and three Roles
4.2
Determining Roles
It is clear that user roles have to be defined. For the central part of the PKI, usually all the roles mentioned in chapter 3.2 have to be introduced. The whole PKI operation and PKI construction should be managed by a PKI Planner. For a Certification Authority and for a Certificate Server, administration roles have to be defined. Of course, administration roles must be adjusted to the software used, but all major PKI software systems support administration roles. The user role must also be determined: it must be clear which members of an organisation may act as users and which other persons (e.g. customers) are accepted. Additionally, it must be determined how many people carry out a role. Privileged roles should always be carried out by more than one person in order to make sure that there is always a person with privileged permissions available.
4.3
Determining Use Cases
The use cases appearing in a PKI are usually always the same (see chapter 3.3), so it is not difficult to identify them. The more critical part is to determine what exactly they look like. This is largely dependent on the security policy and on the IT infrastructure of the organisation setting up the PKI.
[Figure: sequence diagram columns for User, User Component, Local Registration Authority, Certification Authority and Certificate Server, exchanging Credentials and Certification Request messages.]
Fig. 3. A PKI Use Case modelled with a sequence diagram
A crucial part of a PKI set up is the determination of the use cases for user registration and key generation. It must be clear whether a user must show up personally at a Registration Authority to register for a certificate or whether an e-mail registration is sufficient. It is also important to know whether users generate their keys themselves or whether centralised key generation is applied.
5
Developing A Three Part PKI Model
The second step of the 3PPM Method is the development of the model itself. The model consists of three parts; each part is expressed with a certain kind of diagram. Which diagrams are used is described in the following subchapters. 5.1
Component Diagram
The first part of the model is a component diagram. A component diagram contains all the components of a PKI. For a better understanding, some of the roles can be included, too.
The units interacting with each other are connected with a line. Figure 1 shows an example of a component diagram. 5.2
Use Case/Role Diagram
The second part of a 3PPM is a Use Case/Role Diagram. In this diagram, every use case is modelled with an ellipse and every role is modelled with a symbol for a person. Figure 2 shows an example of a Use Case/Role Diagram. Each role is connected to the use cases in which it is involved. 5.3
Sequence Diagram
The third part of a 3PPM is a Sequence Diagram. A sequence diagram lists components and roles in a table; communication actions are modelled with arrows. An example of a sequence diagram is shown in Figure 3. Sequence diagrams should be used to model the more complicated use cases. To get a sequence diagram for a use case, all communication actions must be found and modelled with arrows. On each arrow a description of the data transported is written. If an action takes place with only one component or role involved, the arrow points to the place where it starts. Not all use cases have to be modelled with a sequence diagram. For the less complicated use cases a text description is usually sufficient. According to the author's experience, the use cases for user registration and certificate generation should be modelled with a sequence diagram. For all others this is not necessary.
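As an aside, the three model parts lend themselves to a simple machine-readable form, which can be handy for keeping a 3PPM consistent; all class and field names below are invented for this sketch and are not part of the 3PPM Method:

from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    roles: list                                     # Use Case/Role Diagram
    messages: list = field(default_factory=list)    # (sender, receiver, data)

# Part 1: components and their connections (Component Diagram).
components = ["Certification Authority", "Certificate Server",
              "Local Registration Authority", "User Component", "PSE"]
links = [("User Component", "Local Registration Authority"),
         ("Local Registration Authority", "Certification Authority"),
         ("Certification Authority", "Certificate Server")]

# Parts 2 and 3: a use case with its roles and, for the complicated
# cases, the message sequence (Sequence Diagram).
registration = UseCase(
    name="User Registration",
    roles=["User", "LRA Administrator"],
    messages=[("User", "Local Registration Authority", "Credentials"),
              ("Local Registration Authority", "Certification Authority",
               "Certification Request")])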
6
Benefits of the 3PPM Method
The 3PPM Method enables the development of PKI models which are easy to understand, even for people not knowing this technique. On the other hand, the method is powerful enough to produce a model that covers all of the main PKI issues. The 3PPM Method has already been used in practice. The author has used it in several PKI projects to develop a model for a company planning a PKI set up. Such a model can be used to discuss PKI details and it is a part of the specification. Most of all, a sophisticated but simple model enables the detection of problems and mistakes in an early stage of the set up project.
7
Summary
This paper has introduced a method for modelling Public Key Infrastructures. This method, referred to as 3PPM Method, is based on the Unified Modelling Language (UML). To understand this method, the concept of components, roles and use cases has to be understood. The method consists of two steps: In the first step, components, roles and use cases are determined. In the second step, the model itself is developed.
The benefit of this method is that all major aspects of a PKI can be modelled with it. It is easy to develop a 3PPM and it can be used easily in all later stages of the PKI set up.
The Realities of PKI Inter-operability
John Hughes
Claridge House, 29 Barnes High Street, London SW13 9LW, England
[email protected]
Many vendors are claiming that their products are "open" and "inter-operable". This paper explores what this could mean, and what is actually available in the marketplace, highlighting issues found by the author.
1
Introduction
Public Key Infrastructure (PKI) offers a method of protecting an enterprise's electronic communications using public key cryptography. Because PKI is an "infrastructure", it is usually necessary to obtain board-level approval for funding or to find a sponsor in the enterprise. As this is sometimes difficult to justify, local business units are implementing secure solutions that just happen to be PKI-based. As a result of this situation, PKI Islands are developing. PKI Islands have the advantage of allowing an enterprise to seed PKI throughout its business without the need to undergo the turmoil of a "big bang" – an approach sometimes termed "PKI by Stealth". However, at some stage an enterprise will need to connect the Islands together and also secure its communications with trading partners. If inter-operability issues have been ignored at the design stage, problems will almost certainly emerge later. This paper summarises some of the technical issues of making various PKI components inter-operate (or why they cannot operate with each other!). Entegrity Solutions have defined an Inter-operability Model which is used within the company to examine areas of product deficiency and product enhancements providing additional flexibility.
2 Inter-operability Model
Whilst PKIX has defined a Certificate and CRL profile, profiles have not been produced for all elements of a PKI. The intent of the Model is to assist in formulating such profiles. Following a summary of the model, the paper summarises some of our experiences in achieving inter-operability. The major elements that the Inter-operability Model covers are summarised in the following paragraphs.
Key Generation. Three basic key generation schemes are possible: centralised at the CA, at the EE client, or split keys. In this last case one set of keys is generated at the EE for a particular set of purposes and another set is generated at the CA.

Encapsulation Protocol. If the public key is generated at the EE, then a means of securely sending the public key and receiving back the certificate is required. Four main encapsulation protocols are (or will soon be) available in products: PKCS#10-PKCS#7, VeriSign's CRS, PKIX CMP, and PKIX CMC.

Transport. The encapsulation protocols provide message protection, but the messages still have to be transported around the network. Whilst web-based HTTP is clearly the natural (and dominant) technique, it is actually quite complicated, partly due to the authentication issue (see below). As those with Internet-issued certificates will be aware, obtaining them for a web browser is a painful multi-step process, involving a number of web dialogues plus an e-mail transaction. Because it is a manual, user-driven process, most web browsers can interact quite successfully with CAs. However, dealing with other applications illustrates the point that, in general, PKI-enabled applications are not well integrated with CAs.

Authentication. The owner of a public key must be authenticated prior to its certification. In the case of class 1 VeriSign certificates, no real authentication is actually performed; therefore anyone could claim a false identity and obtain a certificate issued in that name. This is not the case for higher grade certificates, such as VeriSign class 3 certificates. The authentication schemes generally available are:
- manual approval at the CA,
- automated approval at the CA using some type of "secret value" look-up,
- centralised token issuing with pre-authorisation information.

Token/PSE Format. There is no widely deployed standard that currently defines the format and contents of the PSE. A new standard, PKCS#15, is currently being drafted with the intention of defining the PSE for smart cards.

Token Plug and Play. When using physical cryptographic tokens, the emerging dominant API standard is PKCS#11. Most smart card vendors and high-end cryptographic accelerators support PKCS#11, which defines an API to add, modify, and use cryptographic keys on the token. The intention is for a PKI application supporting PKCS#11 to be able to plug in any PKCS#11 token. In reality it is not quite that simple.

Publication/Retrieval Protocol. The market leader in this area is the LDAP protocol. The version of LDAP most widely used at present is version 2 (LDAPv2), but LDAPv3 server products are increasingly appearing.

Publication Protection. Certificates and CRLs are self-protecting objects, so the connection for publishing them on the LDAP server does not, at first sight, seem to require protection. However, as the connection requires "write access" to the server, it is important to control access in order to limit the network entities that can write to the server. The techniques used to protect this connection include SSL.
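To make the PKCS#10 route named under Encapsulation Protocol above concrete, the following sketch generates a key pair at the EE and builds a certification request ready for transport to a CA. It uses the pyca/cryptography Python library purely for illustration; the library and the subject DN are our assumptions, not part of any product discussed here.

    # Sketch: EE-side key generation plus a PKCS#10 certification request.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # Key pair generated at the EE client (one of the three schemes above).
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # PKCS#10 request: subject DN plus a self-signature proving possession
    # of the private key; the DN here is a made-up example.
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.COMMON_NAME, u"example end entity"),
        ]))
        .sign(key, hashes.SHA256())
    )

    # DER is what most CA-side parsers expect (see also section 3.2 below).
    der_request = csr.public_bytes(serialization.Encoding.DER)

The self-signature inside the request is what proves to the CA that the requester actually holds the corresponding private key.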
Schema. Publishing the PKI information on the LDAP server requires the server to be configured with defined X.500 attributes and object classes. X.500 defines the required attributes, such as userCertificate. However, the object classes as originally defined are not suitable for PKI deployment; PKIX has defined new schema object classes that solve the original problems.

OCSP. CRLs are not the only mechanism available to determine whether a certificate is still valid and has not been revoked. A new standard and technology called the On-line Certificate Status Protocol (OCSP) is being developed. Certificate Revocation Trees (CRT), offered by ValiCert, are yet another certificate revocation mechanism. All three mechanisms have different characteristics, and each one has its benefits and problems.

Certificate. A certificate is a very complicated structure and can contain many optional fields. An X.509 v3 certificate can have none, one, or more extension fields. A number of standard extensions are defined by ISO/IETF, but it is also possible for various industry groupings to define their own extensions. Given the complexity and richness of the certificate together with its various optional fields, inter-operability manifests itself as a problem. Therefore one activity of the PKIX working group has been to develop a certificate profile to increase the probability of inter-operability between systems using PKIX-conformant certificates. This is defined in RFC 2459.

CRL. RFC 2459 also contains a profile for CRLs based on the ITU-T X.509 CRL version 2 standard.

Cross-Certification. In some topologies there is a requirement for peer CAs to certify each other's public keys and then publish them in the form of cross-certificates. Cross-certification can be performed either manually or automatically.
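As an illustration of the profile issues above, the following hedged sketch shows how a relying party might dump the extensions of a DER-encoded certificate and look for an RFC-822 name in the subjectAltName. Again the pyca/cryptography Python library is used purely for illustration; this is in no way a complete RFC 2459 profile check.

    # Sketch: inspect the extensions discussed above in a DER certificate.
    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID

    def summarize(der_bytes: bytes) -> None:
        cert = x509.load_der_x509_certificate(der_bytes)
        for ext in cert.extensions:
            # A critical private extension is exactly the inter-operability
            # trap mentioned above.
            print(ext.oid.dotted_string,
                  "critical" if ext.critical else "non-critical")
        try:
            san = cert.extensions.get_extension_for_oid(
                ExtensionOID.SUBJECT_ALTERNATIVE_NAME).value
            print("RFC-822 names:", san.get_values_for_type(x509.RFC822Name))
        except x509.ExtensionNotFound:
            print("no subjectAltName - the e-mail address may be in the DN")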
3 Our Experiences of Interoperability
Entegrity Solutions is focused on delivering secure applications based on the Entegrity Secured Application Platform™, working within many different CA vendor environments; our CA partners include CyberTrust, VeriSign, and IBM. Our intent is to be inter-operable in as many PKI environments as possible, covering both the infrastructure and the PKI-enabled applications. This paper concentrates on our experiences of inter-operability testing with many different CA vendors, but also touches upon other areas.
3.1 Certificate and Certificate Path Processing
In general, all certificates from the main CA vendors seem to be well constructed and can be decoded. However, there are some aspects that could give rise to inter-operability issues:
- Some CAs do not have the ability to support RFC-822 names in the X.509v3 alternative name extension. It is quite common to use the "EA=" attribute in the DN rather than an RFC-822 alternative name. This can cause problems for secure e-mail packages expecting the alternative name extension (or vice versa).
- Sometimes non-standard OID encoding appears in the DN (e.g. OID.2.5.4.5=123). This technique is also sometimes used to carry RFC-822 names in the DN.
- Obviously, private extensions in X.509v3 certificates can give rise to problems, especially if they are marked critical (which fortunately is rare). The biggest culprit in this area is Microsoft.
- The Key Usage, Basic Constraints, and Subject KeyID extensions are now widely used, although some older products do not support these features.
- There seems to be wide variety concerning the Authority KeyID extension: it is either not used, or either the hash or the issuer name method is used. It is not clear whether all EE PKI-enabled software can cope with this variety.
We have found that, using our technology, certificate chain validation is not a problem: all CA vendor products we have tested successfully pass our tests.
3.2 ASN.1 Encoding Problems
Given that ASN.1 is such a rich standard with various encoding rules, yet is frequently specified partially in English, it is not surprising that ambiguities or implementation problems manifest themselves. Surprisingly, they do not appear in the encoding of X.509 certificates. Recently we have discovered two problems with products in other areas:
- If an S/MIME-PKCS#7 message with a mixed BER/DER encoding is sent to a well-known web browser/e-mail client, it crashes (although earlier versions handled it correctly).
- If a PKCS#10 certificate request containing any BER encoding is sent to a well-known CA product, it cannot parse the request.
This illustrates the point that not all PKI-based products, whether infrastructure components or PKI-enabled applications, can cope with the vagaries of a full ASN.1 implementation. Some products only expect DER encoding.
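One concrete difference behind such failures is the BER indefinite-length form, which DER forbids. The following minimal Python sketch (ours, not taken from any product mentioned here) detects the indefinite-length marker at the start of a TLV:

    def uses_indefinite_length(tlv: bytes) -> bool:
        # Looks only at the outermost TLV with a single-byte tag; a real
        # ASN.1 checker walks the whole structure recursively.
        return len(tlv) >= 2 and tlv[1] == 0x80  # 0x80 = indefinite length

    # BER encoding of SEQUENCE { INTEGER 5 } using the indefinite form,
    # terminated by the 00 00 end-of-contents octets; DER forbids this.
    ber = bytes([0x30, 0x80, 0x02, 0x01, 0x05, 0x00, 0x00])
    assert uses_indefinite_length(ber)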
3.3 LDAP
In theory LDAPv3 is a superset of LDAPv2, and hence any LDAPv2 client (for instance a secure e-mail client) should be able to retrieve certificates and CRLs from an LDAPv3 server. In reality this is not true. Fetching certificates and CRLs means that they need to be transferred as binary objects, and LDAPv3 states that the LDAP client, when requesting a binary object, needs to specify the ";binary" keyword with the name of the attribute being fetched.
Publishing to an LDAP server is also problematic, although in the main it is an easy problem to resolve. The original definition of the PKI attributes, such as userCertificate, meant that the user's LDAP entry and this attribute had to be created in one atomic operation. To get around this problem, CA vendors defined new object classes that made all CA information written to the LDAP server optional. Until recently all products had their own definitions and names for these object classes, although in general they were very similar. Recently PKIX has standardised on two new object classes, pkiUser and pkiCA. This approach demands that the LDAP server can be configured with these new object classes.
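A sketch of the retrieval side follows, using the ldap3 Python library for illustration. The host name, base DN, and anonymous bind are hypothetical; the ";binary" attribute option and the pkiUser object class are the points made above.

    # Sketch: fetching a certificate with the ";binary" transfer option.
    from ldap3 import Connection, Server

    server = Server("ldap.example.com")              # hypothetical directory
    conn = Connection(server, auto_bind=True)        # anonymous bind, read only

    conn.search(
        search_base="cn=Alice,o=Example,c=DE",       # hypothetical entry
        search_filter="(objectClass=pkiUser)",       # PKIX object class
        attributes=["userCertificate;binary"],       # note the ;binary option
    )
    for entry in conn.entries:
        der_cert = entry["userCertificate;binary"].value  # raw DER bytes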
3.4 Certification Requests
The prevalent certification model deployed uses a combination of Web and e-mail. The steps one would take are similar to the following:
- The user browses to the CA's web page. One option on the page is to generate a key pair and request a certificate. Prior to this the user is prompted to enter personal information used both for identity authentication and for creation of the certificate.
- Selecting this option causes an HTTP message to be sent to the browser that triggers a key pair to be generated and the public key to be sent back to the CA (usually in the form of a PKCS#10 request); the browser is triggered to perform these functions by specific MIME types in the HTTP message.
- The CA responds with a message saying that the certification request is being processed and that an e-mail will be sent explaining how to pick up the certificate. Typically a request number and password are provided (or defined by the user).
- When the certificate has been generated, instigated either via a manual or an automatic approval process, an e-mail is sent to the user. The user then goes to the "pick up certificate" web page and requests the certificate, entering authentication information as appropriate. An HTTP message is sent to the browser and, because it carries a particular MIME type, it causes the browser to "swallow" the certificate and place it in the appropriate certificate store.
On-line certification outside a browser environment is not as well specified. Whilst most CA products permit this approach, they all have different methods to achieve it: different MIME types are used by the CA products, along with various techniques for parameter passing.
3.5 PKCS#11
Whilst PKCS#11 was created a number of years ago, it is only recently that smart card and High Security Module (HSM) suppliers have started to release PKCS#11-based device drivers. Because there is no recognised conformance test suite, in our experience the quality of the implementations is in general not that good.
3.6 PKCS#12
PKCS#12 is a useful mechanism to transfer information between different PKI components - e.g. transferring EE information from an RA/CA into the EE's PSE. However, not all RA/CA products can create PKCS#12 files containing the full certificate path; that is, they can only populate the file with a key pair and the user certificate. If an RA/CA has this limitation, it is not possible to create an EE PSE in a single step, as the trusted certificate and any other subordinate CA certificates have to be loaded via other means.
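For products without this limitation, a full-path PSE can be produced in one step. The sketch below uses the pyca/cryptography PKCS#12 serializer purely for illustration; ee_key, ee_cert, sub_ca_cert, and root_cert are assumed to have been generated or loaded elsewhere and are not defined here.

    # Sketch: a PKCS#12 PSE carrying the EE key pair and certificate plus
    # the subordinate and root CA certificates, i.e. the full path.
    from cryptography.hazmat.primitives.serialization import BestAvailableEncryption
    from cryptography.hazmat.primitives.serialization.pkcs12 import (
        serialize_key_and_certificates,
    )

    # ee_key, ee_cert, sub_ca_cert and root_cert: assumed loaded elsewhere.
    pfx = serialize_key_and_certificates(
        name=b"alice",
        key=ee_key,
        cert=ee_cert,
        cas=[sub_ca_cert, root_cert],   # the rest of the certificate path
        encryption_algorithm=BestAvailableEncryption(b"transport-password"),
    )
    with open("alice.p12", "wb") as f:
        f.write(pfx)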
4 Conclusions
Certificate and certificate path processing are becoming "trivial", and there is widespread inter-operability between PKI products. However, there are some immediate problem areas that need to be addressed:
- tighter CA and EE integration and standardisation, in particular in the area of on-line certification not using browsers,
- more robust and compliant PKCS#11 implementations,
- standardised PSE/token formats (soft and hard).
An area where more work is required is the new generation of PKIX CMP and CMC management protocols. To date there are very few products that support these, and therefore limited inter-operability work has been accomplished. Whatever inter-operability problems exist will become evident by the beginning of next year, as products supporting these protocols are released into the market.
Mobile Security – An Overview of GSM, SAT and WAP
Malte Borcherding
BROKAT Infosystems AG, Industriestr. 3, D-70565 Stuttgart, Germany
[email protected]
Abstract. Mobile networks have become a very attractive channel for the provision of electronic services, as they are available almost anytime and anywhere. But for a service provider, there are several mobile communication standards to choose from. They differ in market penetration, flexibility, and security. This paper gives a comparative overview of the security features of GSM, SIM Application Toolkit and WAP (Wireless Application Protocol). It describes the trust relations involved and gives examples of typical applications suitable for each of these standards. The results are that pure GSM is suitable only for applications with low sensitivity, as its security features are limited. SIM Toolkit allows for the implementation of application-specific end-to-end security and is thus suitable for sensitive, personalized applications like banking or brokerage. Finally, WAP defines a security standard with choices of differently strong algorithms; in order to be suitable for secure applications, the models for local storage have to be settled, and there must be sufficiently many WAP phones with support for strong security on the market.
1 Introduction
Mobile networks have become a very attractive channel for the provision of electronic services: they are available almost anytime, anywhere, and user acceptance of mobile devices is high. As a result, there is a strongly increasing number of services offered through mobile networks, ranging from simple information services to sensitive applications like banking or electronic commerce. As a related development, standards for mobile applications are maturing, and new standards are being defined. This leads to a set of possible technologies a service provider can choose from, which differ in depth of standardization, market penetration, flexibility, and security. This paper focuses on the security features of GSM, SIM Application Toolkit and WAP (Wireless Application Protocol). It compares the security-related properties and the trust relations involved, and gives examples of typical applications suitable for each of the standards.
2 GSM
GSM ("Global System for Mobile Communications" or "Groupe Spéciale Mobile") is a standard for digital mobile telephony, defined by the European Telecommunications Standards Institute (ETSI). The first GSM services were started around 1992. Today, the standard is used globally in more than 300 networks operating in more than 100 countries. A basic design requirement of GSM was security of communication. The following paragraphs describe the security mechanisms employed, the implicit trust relations, and the types of application suitable for the given security features.

2.1 Security Features
GSM offers confidentiality, subscriber authentication, and subscriber identity confidentiality [2, 5]. The security mechanisms are only defined for the air interface, i.e., security of transport through the fixed networks behind the base stations is left to the network providers. The security mechanisms are applied to all traffic, including short messages.

Key Infrastructure: GSM security is based on subscriber-individual symmetric keys shared between the home network and each SIM card (subscriber identity module). More precisely, there is one key ki of 128 bit length per IMSI (International Mobile Subscriber Identity). The SIMs are initialized with the kis during personalization. The individual subscriber keys are usually not transmitted over the network, but used in a challenge-response protocol for authentication and key agreement.

Authentication: In an initial phase of a communication, the network sends a random challenge RAND of 128 bit length to the end device. The device computes a 32 bit response SRES = A3(ki, RAND), where A3 is an authentication algorithm implemented in the SIM and the network. SRES is sent back, and the network compares the received SRES with the expected value.

Encryption: The symmetric encryption key kc is derived using the same parameters ki and RAND which have been used for authentication: kc = A8(ki, RAND), where A8 is again implemented in the SIM and the network. The actual data encryption is done using the stream cipher A5, which is implemented in the end device (not the SIM) and the network. The maximum effective length of kc is 64 bit.

Subscriber Identity Confidentiality: The objective of subscriber identity confidentiality is to conceal the IMSI during normal operation by the use of temporary IDs (TMSI), such that an attacker cannot easily figure out who is participating in a connection.

As described above, the algorithms for authentication and key generation (A3 and A8) are implemented in the SIMs and the home networks. If the user is outside the home network, the visited network can request sets of corresponding triples (RAND, SRES, kc) from the home network. This allows the visited network to communicate with the handset without gaining access to A3 and A8.
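The following sketch traces this challenge-response flow with the parameter sizes given above. Since A3/A8 are operator-specific and not published, a truncated HMAC-SHA-1 stands in for them here; it is purely illustrative and has nothing to do with COMP128.

    # Sketch of the GSM challenge-response flow (illustrative stand-ins).
    import hmac, hashlib, os

    ki = os.urandom(16)   # 128-bit subscriber key, shared between SIM and AuC

    def a3_a8_stand_in(ki: bytes, rand: bytes):
        # HMAC-SHA-1 truncation as a stand-in for the operator's A3/A8.
        digest = hmac.new(ki, rand, hashlib.sha1).digest()
        return digest[:4], digest[4:12]   # SRES (32 bit), kc (64 bit)

    # Network side: 128-bit challenge, precomputed (RAND, SRES, kc) triple.
    rand = os.urandom(16)
    expected_sres, kc_network = a3_a8_stand_in(ki, rand)

    # SIM side: same computation; only SRES travels back over the air.
    sres, kc_sim = a3_a8_stand_in(ki, rand)
    assert hmac.compare_digest(sres, expected_sres)  # subscriber authenticated
    assert kc_sim == kc_network                      # shared cipher key for A5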
As the SIMs and their contents (including A3 and A8) are controlled by the respective networks, this structure leaves room for national and business policy enforcement. A5, on the other hand, has to be supported by all networks and end devices in order to interoperate properly. Although the algorithms are not published officially, one widely employed implementation of A3/A8 called COMP128 and an algorithm compatible with A5 have been published [9, 12]. COMP128 has been shown to leak ki, and attacks against an algorithm similar to A5 have been published in [10]; further publications on A5 are expected in the near future. This has led to some uncertainty with regard to the actual strength of encryption and authentication in productive GSM networks.

2.2 Trust Relations
Users and service providers relying on GSM security have to trust the network providers in the following aspects: they have to trust all network providers involved in the communication regarding privacy of information and keying material, and the home network provider regarding the proper choice of the algorithms A3/A8.

2.3 Area of Application
Given the security infrastructure described above, "pure" GSM should only be used for applications with low sensitivity, like public information services. Examples of such services include:
- general information (e.g., weather forecasts, sports results),
- cell broadcast (e.g., nearest restaurant),
- non-sensitive personalized financial information (e.g., stock information according to a customer's profile).
Sensitive applications like account statements or financial transactions should employ additional security mechanisms, as described in the following sections.
3 SIM Application Toolkit
The SIM Application Toolkit is a GSM specification which defines an interface between GSM handsets and subscriber identity modules (SIMs) [8]. As described in the previous section, SIMs are smart cards which carry information related to the subscriber and the GSM provider, like individual secret keys, algorithms for key generation and authentication, and the subscriber's address book. The SIM Application Toolkit allows applications stored on the SIM to communicate through the handset with the user and the network. In other words, applications on the SIM can use the handset as an I/O device with the help of the SIM Application Toolkit. For example, applications can define simple menu structures which set up calls to service numbers, but they can also be used to add security to data communication via GSM. More specifically, the SIM Application Toolkit [8] includes the following functionality:
- Display text and menus
- Receive input from the keypad
- Send and receive short messages
- Set up calls
- Communicate with a secondary smart card (for dual-slot handsets)
Although a SIM Toolkit application can be any kind of application running on a given SIM, there are standardization efforts underway within ETSI to define an application programming interface for higher-level languages (SIM API). The goal is to have a framework where a SIM application can access SIM Toolkit features through a standardized API for each relevant programming language. The general framework is specified in [3], and specific Java bindings are given in [4]; a similar specification for Visual Basic is under consideration. Apart from defining an interface to SIM Toolkit, the SIM API also comprises functions for access to GSM files on the SIM and low-level functionality, such that an applet can act as the basic GSM application towards the handset. These standardization efforts lead to a simplified development of SIM Toolkit applications: a Java programmer can use the standardized interfaces to develop an applet for a Java SIM card which interacts with the external world through the SIM Application Toolkit, without dependencies on the specific platform.

3.1 Security Features
SIM applications have access to incoming short messages, and can send short messages by themselves. Hence, they can be used to add encryption and authentication to short messages. There are no limitations to the security mechanisms employed, except those imposed by the technical limits of the SIM. In contrast to basic GSM security, the SIM Application Toolkit allows for end-to-end security between the subscriber and a service (content) provider, such that messages can be encrypted, for example, between a SIM and a banking server. This makes security independent of the limitations of the GSM algorithms. SIMs capable of RSA computations are available, such that RSA-based public-key systems can be used directly on the SIM; alternatively, implementations of elliptic curve cryptography can be used. But since most GSM providers currently do not use cards suitable for public key systems, most of today's applications are secured by symmetric algorithms.
Short message formats including security features are defined in [6]. The specification covers encryption, authentication, redundancy checks, counter management, and proof of reception handling. For this purpose, it defines a header to be included in protected short messages. Currently, identifiers for DES and triple DES (with two or three keys) are specified for encryption as well as for message authentication. In addition, there are identifiers defined for proprietary algorithms and for algorithms known implicitly by sender and receiver. This allows for the application of arbitrary algorithms. For key agreement, there are four bits to indicate one of several keys (separately for encryption and message authentication). The actual keys have to be agreed upon through a channel outside the scope of the specification. There are different possibilities for key distribution, which can also be combined:
- hardcoded keys which are stored on the card during personalization,
- distribution of keys over the air, using a transport key for security,
- user input of key material (complete keys or a seed).
The selection of an appropriate scheme depends on the required flexibility of the application and on the security requirements. For example, distribution over the air is useful for key rollover, and user input of key material can be used to establish a shared secret between the SIM and the application server, thus taking the network provider out of the loop. If a dual-slot handset is used, the SIM Toolkit application can make use of a secondary cryptographic smart card. This is a useful feature if, for example, users already have a standardized signature card. The SIM application can then coordinate the display of the data to be signed, the PIN input, and the actual signature generation on the secondary card.

3.2 Trust Relations
Although the SIM Application Toolkit allows for end-to-end security, the GSM provider is still involved in the trust relations because it usually owns the SIM. All data put on the SIM, including applications and keys, is in principle under the control of the network provider. This control can be tight, where the network provider actually puts applications and secret keys on the card, or loose, where the provider gives download keys to the service provider. The communication parties have to trust the network provider and the card manufacturer in that they do not misuse or leak the (potential) knowledge of information stored on the SIM. A lower level of trust is necessary when the application allows for entering additional keying material shared between the end-user and the service provider. In any case, no trust is necessary concerning intermediate networks, as is the case for pure GSM security.
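A rough sketch of the kind of protection section 3.1 describes for short messages follows: triple-DES encryption combined with a CBC-MAC over a counter and the payload. The packet layout below is ad hoc; the real header and padding rules are those of GSM 03.48 [6]. Python with the pyca/cryptography library (older releases keep TripleDES under the standard algorithms module), for illustration only.

    # Sketch: 03.48-style protection of a short message payload.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives.padding import PKCS7

    kic = os.urandom(24)   # ciphering key (three-key triple DES)
    kid = os.urandom(16)   # integrity key (two-key triple DES)

    def cbc_mac(key: bytes, data: bytes) -> bytes:
        # CBC-MAC: last cipher block of a zero-IV 3DES-CBC encryption.
        padder = PKCS7(64).padder()
        padded = padder.update(data) + padder.finalize()
        enc = Cipher(algorithms.TripleDES(key), modes.CBC(bytes(8))).encryptor()
        return (enc.update(padded) + enc.finalize())[-8:]

    counter = (1).to_bytes(5, "big")            # anti-replay counter
    payload = b"PAY 100 EUR to account 1234"   # illustrative banking command
    mac = cbc_mac(kid, counter + payload)

    # Encrypt counter, payload and MAC under the ciphering key.
    iv = os.urandom(8)
    padder = PKCS7(64).padder()
    padded = padder.update(counter + payload + mac) + padder.finalize()
    enc = Cipher(algorithms.TripleDES(kic), modes.CBC(iv)).encryptor()
    protected_sm = iv + enc.update(padded) + enc.finalize()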
3.3 Area of Application
The SIM Application Toolkit is most suitable for sensitive, personalized services, such as banking and brokerage. Security mechanisms can be agreed upon individually per application, and the SIM is a very suitable storage device for secret application keys. One basic application useful as a building block for many solutions is a signature application. In such an application, the SIM contains a private signing key and some application logic which controls the signature process. It receives short messages which contain the document to be signed, displays the content to the user, and generates a signature on user demand. The signatures can then be sent back to the source of the document, or to a different recipient. This way, the handset becomes a general-purpose and highly secure signature device. The signature process can be triggered by any kind of external application, like a Web application or a brokerage system driven by stock trading events.
4 Wireless Application Protocol (WAP)
WAP [13] is a protocol stack for mobile environments which enables services similar to those of the Internet, in particular the WWW. The stack is based on a bearer like SMS messaging or GPRS (General Packet Radio Service). Further layers include transport, security, session, and application layers. The application layer defines a markup language called WML which is interpreted in a browser on the client side. WAP is being defined by the WAP Forum, an industry group comprising handset manufacturers, wireless service providers, infrastructure providers, and software developers. The WAP specification was released in its first version in April 1998. Since then, most cellular vendors have been active in developing network components and terminals for WAP. The first services were shown in early 1999.

4.1 Security Features
The security layer protocol in the WAP architecture is called Wireless Transport Layer Security (WTLS) [15]. The primary goal of the WTLS layer is to provide privacy, data integrity, and authentication between two communicating applications. WTLS provides functionality similar to TLS 1.0 [1], but it is optimized for low-bandwidth bearer networks with relatively high latency. Differences to TLS include specifications for elliptic curve cryptography, small-sized digital certificates, an optimized handshake, and dynamic key refreshing.
Like TLS, WTLS defines a set of cipher suites, including weak and strong ones. The cipher suite for a connection is agreed upon during an initial handshake phase. Key exchange ciphers include RSA, Diffie-Hellman, and elliptic curve Diffie-Hellman, all with different key lengths and partly without authentication. It is also possible to start from a shared secret (established on a different channel), such that no public key cryptography needs to be used. For bulk encryption, the algorithms RC5, DES, triple DES, and IDEA are defined, each with different effective key lengths in order to cover export control requirements. For symmetric message authentication, WTLS specifies keyed MACs based on SHA-1 (160-bit key) and MD5 (128-bit key).
For the storage and usage of key material and related personal information, WAP defines a WIM (WAP Identity Module) [14]. A WIM can in principle be any kind of module, but the standard explicitly notes the possibilities of including a WIM application in the SIM or of using an external smart card. The functionality of the WIM is to support the WTLS protocol, but also to provide application-level functionality. For WTLS, it can perform functions such as the generation of random numbers, storage and usage of private and public keys, and computation of the various symmetric keys. Functionality offered to applications includes unwrapping of symmetric keys with the help of a securely stored private key, and signing of hashes. These operations can be called from WAP applications through WMLScript (a scripting language similar to JavaScript), or by applications external to WAP. Security-related data is stored according to PKCS#15 [11]. This allows non-WAP applications to have standardized access to the keys. It is thus conceivable that a user uses the same smart card for authentication in WAP with WTLS, and in the Internet using SSL or TLS.

4.2 Trust Relations
The trust relations in WAP depend on the position of the WAP server in the network architecture: it can be hosted either by the network provider or directly by the application service provider. In the first case, all parties have to trust the network provider, as the WAP server is one endpoint of the security relation. In the second case, information is transmitted securely between the user and the service provider, such that the network provider has no access to the data.
In contrast to SAT, the network provider has only limited or no control over the applications the user is accessing. This shifts additional responsibility onto the users: they have to assure themselves that the application they use really is what they intend to use. As a prerequisite, it is necessary that the CA certificates stored in the end device or the WIM for server authentication are correct and trustworthy. When a SIM is to be used as a WIM, the different trust models of SAT and WAP start to interfere: in SAT, the SIM is used by applications which are known and trusted by the card issuer, while in WAP, the applications are trusted by the user (and not necessarily by the card issuer). At the time of writing, the exact models as to what extent SIMs are opened up to WAP applications are not completely sorted out.
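As a small illustration of the keyed MACs named in section 4.1, the following Python sketch computes an HMAC-SHA-1 tag over a record together with its sequence number. The input layout is simplified and is not the actual WTLS record format.

    # Sketch: keyed message authentication in the style of the SHA-1 suite.
    import hmac, hashlib, os

    mac_key = os.urandom(20)   # 160-bit MAC key agreed during the handshake

    def record_mac(seq_no: int, payload: bytes) -> bytes:
        # The sequence number prevents replay/reordering of records.
        return hmac.new(mac_key, seq_no.to_bytes(2, "big") + payload,
                        hashlib.sha1).digest()

    tag = record_mac(0, b"GET /quotes.wml")
    assert hmac.compare_digest(tag, record_mac(0, b"GET /quotes.wml"))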
Table 1. Comparison of security algorithms and key lengths
          Secured relation          Bulk encryption        Effective key      Authentication1          Effective key
                                                           length (bit)                                length (bit)
GSM [2]   Handset to base station   A5 (A8 for key gen.)   max. 64            A3 (client auth only)    128
SAT [6]   SIM to server             DES                    56                 DES                      56
                                    3DES                   112, 168           3DES                     112, 168
                                    any                    any                any                      any
WAP [15]  Handset to server         RC5                    40, 56, 128        RSA                      512, 768, any
                                    DES                    40, 56             ECDH/ECDSA               any
                                    3DES                   168                SHA-1                    160
                                    IDEA                   40, 56, 128        MD5                      128

1 For WAP WTLS, authentication is achieved through a combination of the asymmetric algorithms and keyed hashes. The mechanisms for anonymous key exchange are not mentioned in this table.
4.3 Area of Application
WAP is currently suited best for non-personalized information services which do not require strong client authentication, as the models for local storage of key material and other personalized information are not yet completely settled. Once end devices with support for strong security are on the market, sensitive data can be transported via WAP. It has to be noted that secure WAP applications require a more security-aware and educated user than in the case of SIM Toolkit, as there is no pre-evaluation of applications by the network provider, and the users have to verify the authenticity of the servers by themselves.
5 Summary
For applications over GSM-based mobile networks, there are currently three implementation alternatives: using standard GSM mechanisms, implementing an application on the SIM using the SIM Application Toolkit, or using the Wireless Application Protocol (WAP). There is no single best choice among these services: pure GSM offers only limited security, but has the fewest restrictions concerning the capabilities of the handsets. SIM Toolkit allows for the implementation of application-specific end-to-end security, but it is restricted to handsets capable of SIM Toolkit. Furthermore, SIM applications can be used only by those subscribers who have obtained suitable SIMs from their GSM providers.
Finally, WAP defines a security standard with choices of differently strong algorithms. Compared to SIM Toolkit, it is much more standardized at the application and transport security level, such that any WAP browser can basically connect to any WAP server (provided they can agree on a common cipher suite). In order to be suitable for secure applications, the models for local storage of key material have to be settled, and there must be sufficiently many WAP phones with support for strong security on the market. Table 1 gives an overview of the algorithms and key lengths specified for the three standards under consideration.
References
1. T. Dierks et al.: The TLS Protocol, RFC 2246, January 1999, ftp://ftp.isi.edu/in-notes/rfc2246.txt
2. ETSI: GSM 02.09: "Security-related Network Functions", February 1992, http://www.etsi.org
3. ETSI: GSM 02.19: "Digital cellular telecommunications system (Phase 2+); Subscriber Identity Module Application Programming Interface (SIM API); Service Description; Stage 1", to appear
4. ETSI: GSM 03.19: "Digital cellular telecommunications system (Phase 2+); Subscriber Identity Module Application Programming Interface (SIM API); SIM API for Java Card™; Stage 2", to appear
5. ETSI: GSM 03.20: "Security Aspects", June 1993, http://www.etsi.org
6. ETSI: GSM 03.48: "Digital cellular telecommunications system (Phase 2+); Security Mechanisms for the SIM application toolkit", http://www.etsi.org
7. ETSI: GSM 11.11: "Digital cellular telecommunications system (Phase 2+); Specification of the Subscriber Identity Module – Mobile Equipment (SIM – ME) interface", http://www.etsi.org
8. ETSI: GSM 11.14: "Digital cellular telecommunication system (Phase 2+); Specification of the SIM Application Toolkit for the Subscriber Identity Module – Mobile Equipment (SIM – ME) interface", http://www.etsi.org
9. ISAAC Research Group at the University of California, Berkeley: http://www.isaac.cs.berkeley.edu/isaac/gsm.html
10. Jovan Dj. Golic: Cryptanalysis of alleged A5 stream cipher, Proceedings of EUROCRYPT '97, LNCS 1233, Springer-Verlag, 1997
11. RSA Laboratories: "PKCS #15: Cryptographic Token Information Standard", Version 1.0, April 1999, ftp://ftp.rsa.com/pub/pkcs/pkcs-15/pkcs15v1.doc
12. GSM Pages at the Smart Card Developers Association: http://www.scard.org/gsm/body.html
13. WAP Forum: WAP Architecture Specification, April 30, 1998, http://www.wapforum.org/
14. WAP Forum: Identity Module Specification, Proposed Version July 5, 1999, http://www.wapforum.org/
15. WAP Forum: Wireless Transport Layer Security Protocol, April 30, 1998, http://www.wapforum.org/
Secure Transport of Authentication Data in Third Generation Mobile Phone Networks
Stefan Pütz1, Roland Schmitz2, and Benno Tietz3
1 T-Mobil (DeTeMobil, Deutsche Telekom MobilNet GmbH), PO Box 30 04 63, D-53184 Bonn, Germany, [email protected]
2 T-Nova Innovationsgesellschaft mbH, Deutsche Telekom, D-64307 Darmstadt, Germany, [email protected]
3 Mannesmann Mobilfunk GmbH, Am Seestern 1, D-40547 Düsseldorf, Germany, [email protected]
Abstract. In this paper a mechanism for securing sensitive MAP messages between network elements belonging to different network operators is described. The mechanism is currently under discussion in the security group of the Third Generation Partnership Project, a joint project of ETSI and Japanese, American, Korean and Chinese standardisation bodies working on the security specifications for UMTS. The proposed mechanism provides confidentiality, authenticity and integrity of the messages exchanged; however, there may be messages where no confidentiality or no protection at all is needed. Therefore, three levels of protection have been defined that are applied to the various MAP messages according to their sensitivity.
1. Introduction
The security of the global Signaling System No. 7 (SS7) network as a transport system for sensitive signaling messages between different telecommunication network elements is open to major compromise: messages can be eavesdropped, altered, injected or deleted in an uncontrolled manner. For example, in mobile phone networks based on the GSM (Global System for Mobile Communication) standard, particularly sensitive authentication data of mobile subscribers have to be transported from the Authentication Centre (AuC) to the Visitor Location Register (VLR)1 in order to authenticate the subscriber. For the first phases of the third generation mobile system UMTS (Universal Mobile Telecommunications System), a similar approach is foreseen. Transportation of the data will be done via the MAP (Mobile Application Part) protocol [3], a mobile-phone-specific application protocol of the SS7 protocol stack (cf. [4], chapter 17).2
1 For an overview of security-related signaling in GSM and similar systems, see e.g. [1], chapter 7; more specific information can be found in [2].
2 Note, however, that there are plans to have an alternative, all-IP based network solution for UMTS with Release '00. In this case, an equivalent to the MAP protocol will handle the security-related information exchange; clearly, the mechanisms presented in this document will hold accordingly.
If an intruder succeeds in eavesdropping on these sensitive data, serious impersonation attacks or eavesdropping of user traffic on the air interface may result (cf. section 2). In addition, there are several other sensitive MAP messages. Although no attack of this kind has been reported for GSM networks to date, it is intended for UMTS to protect against these kinds of attacks in order to achieve a consistently increased security level. Therefore, this document describes a mechanism for securing sensitive MAP messages between network elements. The mechanism is currently under discussion in the security group of 3GPP (Third Generation Partnership Project), a joint project of ETSI and Japanese, American, Korean and Chinese standardisation bodies working on the security specifications for UMTS. The proposed mechanism provides confidentiality, authenticity and integrity of the messages exchanged; however, there may be messages where no confidentiality or no protection at all is needed. Therefore, three levels of protection have been defined that are applied to the various MAP messages according to their sensitivity: protection mode 0 is identical to the original MAP message in cleartext and thus provides no protection, while protection mode 1 provides integrity and authenticity, and protection mode 2 provides confidentiality, integrity and authenticity of MAP messages.
2. The Main Threat: Compromise of Authentication Data
In mobile phone networks using a similar approach for authenticating the user as the GSM network, authentication data can get compromised, either during its transport between the home environment and the serving network, or by unauthorised access to databases. This can lead to various serious attacks, including the following:
Forcing use of a compromised cipher key. The intruder obtains a sample of authentication data and uses it to convince the user that he is connected to a proper serving network, forcing the use of a compromised cipher key. The intruder may force the repeated use of the same authentication data to ensure that the same encryption key will be used for many calls. This leads to continuous eavesdropping.
Impersonating the user. The intruder obtains a sample of authentication data and uses it to impersonate a user towards the serving network.
Although no attacks of this kind have been reported for second generation mobile networks to date, the security level for third generation mobile systems will be increased. The security improvements comprise the access as well as the core networks [5]. The present paper concentrates on the core network security features "Entity Authentication", "Data Confidentiality" and "Data Integrity" as defined in [6], section 5.2. In order to provide these features, a mechanism to effectively protect authentication and other sensitive signaling data transmitted between network nodes of one operator (internal use) or between network nodes of different operators (external use) is proposed.
3. Overview of Mechanism
The proposed mechanism consists of three layers.

3.1 Layer I
Layer I is a secret key transport mechanism based on an asymmetric3 crypto-system and is aimed at agreeing on a symmetric session key for each direction of communication between two networks X and Y. The party wishing to send sensitive data initiates the mechanism and chooses the symmetric session key it wishes to use for sending the data to the other party. The other party may choose a symmetric session key of its own, used for sending data in the other direction. The symmetric session keys are protected by asymmetric techniques. They are exchanged between certain newly defined elements called the Key Administration Centres (KAC) of the network operators X and Y. The format of the Layer I transmissions is based on ISO/IEC 11770-3: Key Management – Mechanisms using Asymmetric Techniques [7].4 It is proposed that public keys be exchanged between a pair of network operators when setting up their roaming agreement.5 In this case no general Public Key Infrastructure (PKI) is required. For the transmission of the messages, no special assumptions regarding the transport protocol are made; a possible example would be IP.
3 For UMTS a large number of network operators is expected. In this case key transport mechanisms based on asymmetric algorithms offer advantages regarding key management. Therefore, we propose to use an asymmetric scheme in Layer I.
4 For a general overview of key transport mechanisms based on asymmetric techniques, see chapter 12.5 of [8].
5 In general a Public Key Infrastructure is required to handle public keys and the appropriate certificates.

3.2 Layer II
In Layer II the agreed symmetric session keys for sending and receiving data are distributed by the KACs in each network to the relevant network elements. For example, an AuC will normally send sensitive authentication data to VLRs belonging to other networks and will therefore get a session key for sending from its KAC. Layer II is carried out entirely inside one operator's network. However, it is clear that the distribution of the symmetric keys to the network elements must be carried out in a secure way, so as not to compromise the whole system.
3.3 Layer III
Layer III uses the distributed symmetric keys for securely exchanging sensitive data between the network elements of one operator (internal use) or of different operators (external use) by means of a symmetric encryption algorithm. The encrypted (resp. authenticity/integrity-protected) messages will be transported via the MAP protocol.

3.4 General Overview
Figure 1 may help to clarify the proposal by providing an overview of the whole mechanism. Note that the messages are not fully specified in this figure; rather, only the "essential" parts of the messages are given. More details on the format of the messages in the single layers will be provided in subsequent chapters.
[Figure: In Layer I, KACX of network X sends the session key KSXY to KACY of network Y, which answers with a Key Distribution Complete message. In Layer II, each KAC distributes KSXY to its network elements, e.g. the sending NEX (such as AuCX) and the receiving NEY (such as VLRY). In Layer III, NEX sends EKSXY(data) to NEY.]
Fig. 1. Overview of proposed mechanism6
6 For details on the abbreviations, see the appendix.

4. Layer I Message Format
Layer I describes the communication between two newly defined network entities belonging to different networks, the so-called Key Administration Centres (KAC). We do not make any assumptions about the protocols to be used for this communication, although IP might be the most likely candidate.

4.1 Properties and Tasks of Key Administration Centres
It is assumed that there is only one KAC per network operator. As will become evident from the following, KACs are needed to perform the following tasks:
• Generation and storage of its own asymmetric key pairs (different key pairs are used for signing/verifying and encrypting/decrypting)
• Storage of the public keys of the KACs of other network operators
• Generation and storage of symmetric session keys for sending/receiving sensitive information to/from network entities of other networks
• Secure distribution of symmetric session keys to network entities in the same network
Due to these sensitive tasks, a KAC has to be physically secured.

4.2 Transport of Session Keys
The transport of session keys in Layer I is based on asymmetric cryptographic techniques (cf. [8]). In what follows, it is assumed that the involved networks have exchanged their respective public keys in the course of a roaming agreement; therefore, no public key certificates are needed. In order to establish a symmetric session key with version no. i to be used for sending data from X to Y, KACX sends a message containing the following data to KACY:
EPK(Y){X||Y||i||KSXY(i)||RNDX||Text1||DSK(X)(Hash(X||Y||i||KSXY(i)||RNDX||Text1))||Text2}||Text3
The reasons for this message format are as follows:
• Encrypting the message with the public key of the receiving network Y (used for encrypting) provides message confidentiality, while "decrypting" the message body with the private key of the sending network X (used for signing, i.e. producing the DSK(X) signature) provides message integrity and authenticity.
• X includes RNDX to make sure that the message contents contain some random data before signing.
The symmetric session keys KSXY(i) should be periodically updated by this process, thereby moving on to KSXY(i+1). For each new session key KSXY the version no. i is incremented by one. After having successfully decrypted the key transport message, verified the digital signature of the sending network including the hash value, and checked the received i, the receiving network starts Layer II activities. If anything goes wrong, e.g. computing the hash value of X||Y||i||KSXY(i)||RNDX||Text1 does not yield the expected result, a RESEND message should be sent by Y to X in the form
RESEND||Y||X
Y shall reject messages with i smaller than or equal to the currently used i.
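The following Python sketch (pyca/cryptography, for illustration only) traces the construction of this key transport message. RSA-PSS stands in for the DSK(X) signature, and since a single 2048-bit RSA block cannot hold the whole body, the outer EPK(Y) encryption is realised here by RSA-OAEP wrapping of a one-time AES key; the paper leaves the concrete ISO/IEC 11770-3 mechanism open, so this hybrid step is our simplification. Text1-Text3 are omitted.

    # Sketch: Layer I message EPK(Y){X||Y||i||KSXY(i)||RNDX||DSK(X)(Hash(...))}
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    sk_x = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # KACX signing key
    sk_y = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # KACY decryption key
    pk_y = sk_y.public_key()

    x, y = b"NETX", b"NETY"        # 32-bit network identifiers (see appendix)
    i = (1).to_bytes(4, "big")     # session key version number
    ks_xy = os.urandom(16)         # 128-bit session key being transported
    rnd_x = os.urandom(16)         # 128-bit random value RNDX

    body = x + y + i + ks_xy + rnd_x
    signature = sk_x.sign(
        body,
        padding.PSS(mgf=padding.MGF1(hashes.SHA1()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA1())

    # Hybrid realisation of EPK(Y){...}: wrap a fresh AES key under pk_y,
    # then encrypt body plus signature under that key. KACY unwraps with
    # its private key, decrypts, and verifies the signature.
    cek = AESGCM.generate_key(bit_length=128)
    wrapped_cek = pk_y.encrypt(cek, padding.OAEP(
        mgf=padding.MGF1(hashes.SHA1()), algorithm=hashes.SHA1(), label=None))
    nonce = os.urandom(12)
    ciphertext = AESGCM(cek).encrypt(nonce, body + signature, None)
    layer1_message = wrapped_cek + nonce + ciphertext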
After having successfully distributed the symmetric session key received from network X to its network entities, network Y sends a KEY_DIST_COMPLETE message to X. This is an indication to KACX to start distributing the key to its own entities, which can then start to use the key immediately. The message takes the form
KEY_DIST_COMPLETE||Y||X||i||RNDY||DSK(Y)(Hash(KEY_DIST_COMPLETE||Y||X||i||RNDY))
where i indicates the distributed key and RNDY is a random number generated by Y. Network Y includes RNDY to make sure that the message contents determined by X are modified before signing. The digital signature is appended for integrity and authenticity purposes.
5. Layer II Message Format
In Layer II, symmetric session keys (to encrypt/decrypt data before sending/after receiving) are distributed by the KACs in each network to the relevant network elements. For example, an AuCX will normally send sensitive authentication data to VLRY and will therefore get a session key KSXY from its KACX. Layer II is carried out entirely inside one operator's network. However, in order to achieve a more consistent overall scheme, it is suggested in this section to use for Layer II the same mechanism for distributing the keys as in Layer I. This requires the KAC of each network to generate and distribute asymmetric key pairs for the network elements of that network. These key pairs will then be used to transfer the symmetric session keys in the same way as in Layer I.
The public and private key pairs needed for the network entities should be distributed to the entities in a secure way, which is in principle an operation & maintenance task. One way to do this is to distribute the key pairs, along with the necessary crypto-software, to the network entities in the form of chipcards, which can also carry out the necessary computations. Therefore, all that has to be added to the present network entities are chipcard readers with a standardised interface. Thus, on adoption of this proposal, in addition to their present tasks, the network entities would have to:
• Store the symmetric session keys to encrypt/decrypt data before sending/after receiving to/from network entities of other networks (external) and of their own network (internal)
• Encrypt/decrypt MAP messages according to their mode of protection (see section 6); the necessary computations may be carried out by chipcards.
In addition to their tasks listed in section 4.1, the KACs would have to:
• Generate and store asymmetric key pairs for network entities in the same network
• Distribute asymmetric key pairs to network entities in the same network.
The Layer II messages themselves take the same form as in section 4, where the 'receiving network Y' is replaced by the 'receiving network entity NEY' (or X by NEX). Further, the Key Distribution Complete message is not needed in Layer II.
In order to ensure that no network element starts enciphering with a key that not all potentially corresponding network elements have received yet, the following approach is suggested: the distribution of the session keys KSXY in network X, having initiated the Layer I message exchange, should not begin before the Key Distribution Complete message from the receiving network Y has been received by KACX in Layer I. As soon as a network element of X has received a session key KSXY, it may start enciphering with this key. A similar statement holds if the transported keys are used internally only: in this case, all network elements of X should first get the symmetric session key KSXX to be used internally for decryption (marked with the flag RECEIVE); once all network elements have acknowledged that they have recovered these keys, KACX sends the same key again (marked with the flag SEND). Again, as soon as a network element has received the session key KSXX (marked with the flag SEND), it may start enciphering with this key. This results in the message format described in the following. As for Layer I, no assumptions about the transport protocol are made, although IP might be a good candidate.

5.1 Sending a Session Key for Decryption
In order to transport a symmetric session key (marked with the flag RECEIVE) with version no. i, to be used in NEY to decrypt data received from network elements of network X, KACY sends a message containing the following data to NEY:
EPK(NEY){X||NEY||RECEIVE||i||KSXY(i)||RNDY||Text1||DSK(Y)(Hash(X||NEY||RECEIVE||i||KSXY(i)||RNDY||Text1))||Text2}||Text3
After having successfully decrypted the key transport message and verified the digital signature of the sending network including the hash value, the receiving network entity sends a key installed message to its Key Administration Centre KACY. The message takes the form
KEY_INSTALLED||X||NEY||RNDY||i
This message can only be sent by the receiving network entity, because only this entity can know RNDY. If anything goes wrong, e.g. computing the hash value of X||NEY||RECEIVE||i||KSXY(i)||RNDY||Text1 does not yield the expected result, a RESEND message should be sent by NEY to KACY in the form
RESEND||NEY
5.2 Sending a Session Key for Encryption
In order to transport a symmetric SEND key with version no. i, to be used for sending data from NEX to network elements of network Y, KACX sends a message containing the following data to NEX:
EPK(NEX){NEX||Y||SEND||i||KSXY(i)||RNDX||Text1||DSK(X)(Hash(NEX||Y||SEND||i||KSXY(i)||RNDX||Text1))||Text2}||Text3
6. Layer III Message Format

6.1 General Structure of Layer III Messages
Layer III messages are transported via the MAP protocol; that means they form the payload of a MAP message after the original MAP message header. For Layer III messages, three levels of protection (or protection modes) are defined, providing the following security features:
Protection mode 0: no protection
Protection mode 1: integrity, authenticity
Protection mode 2: confidentiality, integrity, authenticity
Layer III messages consist of a security header and the Layer III message body. Depending on the protection mode, Layer III message bodies are protected by a symmetric encryption algorithm, using the symmetric session keys that were distributed in Layer II. Layer III messages have the following structure:
| Security Header | Layer III Message Body |
In all three protection modes, the security header is transmitted in cleartext. It shall comprise the following information:
• Protection mode
• Other security parameters (if required, e.g. IV, version no. of the key used, encryption algorithm identifier, mode of operation of the encryption algorithm, etc.)
Both parts of the Layer III message, security header and message body, become part of the "new" MAP message body. Therefore, the complete "new" MAP messages take the following form:
| MAP Message Header | MAP Message Body |
where the MAP message body is the Layer III message, i.e.:
| MAP Message Header | Security Header | Layer III Message Body |
Like the security header, the MAP message header is transmitted in cleartext. In protection mode 2, which provides confidentiality, the Layer III message body is essentially the encrypted "old" MAP message body. For integrity and authenticity, an encrypted hash value calculated on the concatenation of the MAP message header, the security header and the "old" MAP message body in cleartext is included in the Layer III message body in protection modes 1 and 2. In protection mode 0, no protection is offered; the Layer III message body is therefore identical to the "old" MAP message body in cleartext. In the following subchapters, the contents of the Layer III message body for the different protection modes are specified in greater detail.

6.2 Format of Layer III Message Body

6.2.1 Protection Mode 0

Protection mode 0 offers no protection at all. Therefore, the Layer III message body in protection mode 0 is identical to the original MAP message body in cleartext.

6.2.2 Protection Mode 1

The message body of Layer III messages in protection mode 1 takes the following form:

   Cleartext || TVP || E_KSXY(i)(Hash(MAP Header || Security Header || Cleartext || TVP))

where "Cleartext" is the message body of the original MAP message in clear. Authentication of origin is achieved by encrypting the hash value of the cleartext with a symmetric encryption algorithm, since only a network element knowing KSXY(i) (footnote 7) can encrypt in this way. Message integrity and validation is achieved by hashing and encrypting the cleartext. Note that protection mode 1 is compatible with the present MAP protocol, since everything appended to the cleartext may be ignored by a receiver incapable of decrypting.

6.2.3 Protection Mode 2

The Layer III message body in protection mode 2 takes the following form:

   E_KSXY(i)(Cleartext || TVP || Hash(MAP Header || Security Header || Cleartext || TVP))

where "Cleartext" is the message body of the original MAP message in clear. Message confidentiality is achieved by encrypting with the symmetric session key. This also provides authentication of origin, since only a network element knowing KSXY(i) can encrypt in this way. Message integrity and validation is achieved by hashing the cleartext. TVP is a random number that avoids traceability (footnote 8).

Footnote 7: The case X = Y, i.e. only one key for sending and receiving, corresponds to internal use inside network X.
Footnote 8: By using a TVP as a timestamp (perhaps derived from an overall present master time), replay attacks could be avoided.
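As an illustration only, the two protected body formats could be assembled as sketched below. The paper deliberately leaves the concrete algorithms to the security header's algorithm identifier, so AES-CBC (via PyCryptodome) and SHA-256 are stand-in choices of ours, and all function and parameter names are hypothetical.

```python
# Minimal sketch of the mode-1 and mode-2 Layer III bodies (illustrative only;
# the cipher, hash and padding are our assumptions, not the paper's choices).
import os
import hashlib
from Crypto.Cipher import AES  # PyCryptodome

def _pad(data: bytes, block: int = 16) -> bytes:
    n = block - len(data) % block
    return data + bytes([n]) * n

def mode1_body(ks_xy: bytes, iv: bytes, map_header: bytes,
               sec_header: bytes, cleartext: bytes) -> bytes:
    tvp = os.urandom(8)  # 64-bit time-variant parameter, as proposed in the appendix
    digest = hashlib.sha256(map_header + sec_header + cleartext + tvp).digest()
    enc = AES.new(ks_xy, AES.MODE_CBC, iv).encrypt(_pad(digest))
    # Cleartext || TVP || E_KS(Hash(...)): a receiver that cannot decrypt may
    # simply ignore everything appended after the cleartext (mode-1 compatibility).
    return cleartext + tvp + enc

def mode2_body(ks_xy: bytes, iv: bytes, map_header: bytes,
               sec_header: bytes, cleartext: bytes) -> bytes:
    tvp = os.urandom(8)
    digest = hashlib.sha256(map_header + sec_header + cleartext + tvp).digest()
    # E_KS(Cleartext || TVP || Hash(...)): the whole body is encrypted.
    return AES.new(ks_xy, AES.MODE_CBC, iv).encrypt(_pad(cleartext + tvp + digest))
```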
Secure Transport of Authentication Data in Third Generation Mobile Phone Networks
151
7. Discussion

7.1 Mapping of MAP Messages and Modes of Protection

It is proposed that each network operator should be able to assign the mode of protection of each MAP message, in order to adapt the level of protection to its own security policy.

7.2 Some Possible Problems

In protection mode 2, the original MAP message body is encrypted in order to achieve confidentiality. For integrity and authenticity, an encrypted hash value calculated on the MAP message header and body in cleartext (i.e. the original MAP message) is appended to the messages in protection modes 1 and 2. All protection modes require a security header to be added. When implementing these changes, care has to be taken that the maximum length of a MAP message (approx. 250 bytes) is not exceeded by the protected MAP messages of Layer III; otherwise, substantial changes to the underlying SS7 protocol levels (TCAP and SCCP) would have to be made.
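A back-of-the-envelope length check makes this constraint concrete. In the sketch below, everything except the 250-byte ceiling, the 160-bit hash and the 64-bit TVP (both from the appendix's proposed parameter lengths) is an assumption of ours, including the 16-byte cipher block.

```python
# Rough length budget for a mode-1 protected MAP message (illustrative figures).
MAX_MAP = 250  # approximate TCAP/SCCP ceiling cited in the paper, in bytes

def mode1_overhead(sec_header_len: int, hash_len: int = 20, tvp_len: int = 8,
                   cipher_block: int = 16) -> int:
    padded_hash = -(-hash_len // cipher_block) * cipher_block  # ceil to block size
    return sec_header_len + tvp_len + padded_hash

def fits(cleartext_len: int, sec_header_len: int) -> bool:
    return cleartext_len + mode1_overhead(sec_header_len) <= MAX_MAP
```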
Appendix: Abbreviations and Proposed Key Lengths

The following abbreviations are used in this paper:

AuC              Authentication Centre
DSK(X)(data)     Decryption of "data" with secret key of X (used for signing)
EKSXY(i)(data)   Encryption of "data" with symmetric session key #i for sending data from X to Y
EPK(X)(data)     Encryption of "data" with public key of X
ETSI             European Telecommunications Standards Institute
GSM              Global System for Mobile Communication
Hash(data)       The result of applying a collision-resistant one-way hash function to "data"
IV               Initialisation Vector
KACX             Key Administration Centre of network X
KSXX(i)          Symmetric session key #i for sending data within network X
KSXY(i)          Symmetric session key #i for sending data from X to Y
m1||m2           Concatenation of messages m1 and m2
MAP              Mobile Application Part
NEX              Network Element of network X
RNDX             Unpredictable random value generated by X
SCCP             Signaling Connection Control Part
SS7              Signaling System No. 7
TCAP             Transaction Capabilities Applications Part
Text1            Optional data field
Text2            Optional data field
Text3            Public key algorithm identifier and public key version number (possibly included in a public key certificate)
TVP              Time Variant Parameter
UMTS             Universal Mobile Telecommunications System
VLR              Visitor Location Register
X, Y             Network identifier
The following parameter lengths are proposed:

TVP           64 bit
RND           128 bit
X, Y          32 bit
Hash(data)    160 bit
Public Key    2048 bit
Secret Key    2048 bit
KSXX, KSXY    128 bit
Extending Wiener's Attack in the Presence of Many Decrypting Exponents

Nicholas Howgrave-Graham (1) and Jean-Pierre Seifert (2)

(1) Mathematical Sciences Department, University of Bath, Bath, BA2 7AY, U.K. [email protected]
(2) Department of Mathematics, Johann Wolfgang Goethe-University Frankfurt am Main, Germany [email protected]
Abstract. Wiener has shown that when the RSA protocol is used with a decrypting exponent, d, which is less than N^{1/4} and an encrypting exponent, e, approximately the same size as N, then d can usually be found from the continued fraction approximation of e/N. We extend this attack to the case when there are many ei for a given N, all with small di. For the case of two such ei, the di can (heuristically) be as large as N^{5/14} and still be efficiently recovered. As the number of encrypting exponents increases, the bound on the di that enables their efficient recovery increases (slowly) to N^{1−ε}. However, the complexity of our method is exponential in the number of exponents present, and it is therefore only practical for a relatively small number of them.
1 Introduction
In the RSA protocol (see [RSA]), Alice publishes a public modulus N and an encrypting exponent e. The modulus N should be the product of two large distinct primes p and q which are kept secret. To make the factoring of N hard, p and q are often chosen with about the same number of digits. With the knowledge of p and q, Alice can also calculate d such that ed ≡ 1 (mod λ(N)), where λ(N) = lcm(p − 1, q − 1). Anyone wishing to encrypt a message m for Alice then raises it to the power e modulo N. This can then be decrypted (hopefully only by Alice) by another exponentiation, since (m^e)^d ≡ m (mod N). Clearly, if one can factor N then one can also decrypt any messages sent to Alice. Despite twenty years of intensive research on the RSA cryptosystem, no devastating attacks on it have been discovered so far. However, under certain circumstances attacks more efficient than simply factoring the modulus N are known (see Boneh [B] for a recent survey). One of these is the use of a small private exponent d, and another is the use of a common modulus N for several key pairs ei, di. Let us elaborate on these attacks a little further. For efficient RSA signature generation it may be tempting to use a small private exponent d. Unfortunately, Wiener [W] has shown that when the RSA protocol is used with a decrypting exponent, d, less than N^{1/4} and an encrypting
exponent, e, approximately the same size as N, then the RSA system can be broken. Very recently, Boneh and Durfee [BD] managed to improve Wiener's result by showing how to break the RSA system even when decrypting exponents of size less than N^{0.292} are used.

In order to simplify RSA key management one may be tempted to use a single modulus for several key pairs ei, di. However, as pointed out by Simmons [Si], whenever a message m is sent to two participants whose public exponents happen to be relatively prime, the message m can easily be recovered without breaking the system. DeLaurentis [D] described two further attacks in which a participant can break such a common modulus cryptosystem. In particular, he showed that knowledge of one key pair ei, di gives rise to an efficient probabilistic algorithm for factoring the modulus N. Moreover, he also showed that knowledge of one key pair ei, di gives rise to an efficient deterministic algorithm to generate other key pairs without determining λ(N). For a thorough discussion of the common modulus situation when using RSA we refer to Moore [M]. However, we stress that Simmons' attack does not break the RSA system at all, and that the attack of DeLaurentis assumes that the attacker is also given the secret exponent.

Having said all this, it seems natural to study the more realistic problem of what an opponent might do, given only several public exponents for a given modulus and the knowledge that the corresponding private exponents are quite small. This is the purpose of this paper. Although, as explained before, this situation is not common in present-day RSA systems, an analysis of this problem sheds some light on the gain of additional public information in attacking RSA and on the security of re-using the modulus N. Moreover, it seems a natural way to better understand and extend Wiener's original idea, which might also be useful in other circumstances.

The question of how to combine several public exponents for a given modulus in order to reduce the size constraint on the private exponents for their efficient reconstruction was only very recently initiated by Guo [G]. Still based on the continued fraction approach of Wiener, Guo showed how to break RSA given 3 public exponents even when their corresponding decrypting exponents are of size less than N^{1/3}. Using instead a lattice basis reduction approach, we continue this study here, generalising (and improving) the result up to an arbitrary number of exponents. In particular, we show that with n encrypting exponents ei, our lattice basis approach allows the di to be as large as N^{α_n}, where

\alpha_n = \begin{cases} \dfrac{(2n+1)2^n - (2n+1)\binom{n}{n/2}}{(2n-2)2^n + (4n+2)\binom{n}{n/2}} & \text{if } n \text{ is even,} \\[2ex] \dfrac{(2n+1)2^n - 4n\binom{n-1}{(n-1)/2}}{(2n-2)2^n + 8n\binom{n-1}{(n-1)/2}} & \text{if } n \text{ is odd.} \end{cases}

It is interesting to note that for 2 encrypting exponents our method already allows a decrypting exponent bound of N^{5/14}, which is superior to the N^{1/3} bound of Guo for 3 encrypting exponents. As our approach combines ideas from both Wiener and Guo into a single lattice, the next section reviews the approaches of Wiener and Guo and gives
an overview of our extension approach. Our solution to the general problem of n encrypting exponents is given in section 3, starting with some preliminaries and examining first the cases of 2, 3 and 4 exponents before generalising the approach to n exponents. Section 4 then describes experimental results for our lattice basis approach.
2 Low Private Exponent Attacks on RSA

2.1 Wiener's Approach
It was shown in Wiener [W] that, if one assumes λ(N) and e are both approximately as large as N, and if the decrypting exponent d is less than N^{1/4}, then the modulus N can be factored by examining the continued fraction approximation of e/N. This follows because e and d satisfy the relationship ed − kλ(N) = 1. So letting λ(N) = (p − 1)(q − 1)/g and s = 1 − p − q, we have that

edg − kN = g + ks.    (1)

Dividing both sides by dgN gives

\frac{e}{N} - \frac{k}{dg} = \frac{g + ks}{dgN} = \frac{k}{dg}\cdot\frac{s}{N} + \frac{1}{dN}.
Now using the assumption that e ≈ N, and that s ≈ N^{1/2}, means (from examining equation 1) that k/(dg) ≈ 1, so that the right-hand side of the above equation is approximately N^{−1/2}. It is well known (see for instance [HW]) that if |x − a/b| < 1/(2b^2), then a/b is a continued fraction approximant of x. Thus if N^{−1/2} < 1/(2(dg)^2), then k/(dg) will be a continued fraction approximant of e/N. This is true whenever

d < 2^{−1/2}(1/g)N^{1/4},    (2)

and g will be small under the assumption that λ(N) ≈ N (though clearly g ≥ 2 since both p and q are odd). Given dg one may calculate

r = (p − 1)(q − 1) = \frac{edg}{k} - \frac{g}{k} = \lfloor edg/k \rceil \quad (since g is small),

and then we can factor N since the factors p and q satisfy the quadratic relationship x^2 − (N + 1 − r)x + N = 0.
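For reference, Wiener's procedure is only a few lines of code. The sketch below is ours, not the paper's, and makes the textbook simplifying assumptions g = 1 and ed ≡ 1 (mod (p − 1)(q − 1)); all names are hypothetical.

```python
# A minimal sketch of Wiener's attack via continued-fraction convergents of e/N.
from math import isqrt

def convergents(e, N):
    """Yield the continued-fraction convergents (k, d) of e/N."""
    cf = []
    num, den = e, N
    while den:
        cf.append(num // den)
        num, den = den, num % den
        a, b = 1, 0
        for x in reversed(cf):   # rebuild the convergent from the quotients
            a, b = x * a + b, a
        yield a, b               # a/b approximates e/N

def wiener(e, N):
    for k, d in convergents(e, N):
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k          # candidate phi, from ed - k*phi = 1
        b = N + 1 - phi                 # p + q; p, q solve x^2 - b*x + N = 0
        disc = b * b - 4 * N
        if disc >= 0 and isqrt(disc) ** 2 == disc:
            p = (b + isqrt(disc)) // 2
            if p > 1 and N % p == 0:
                return d, p, N // p
    return None
```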
2.2 Guo's Approach
The approach taken in Guo [G] assumes that one has more than one ei for a given N, and that each of these ei has a relatively small di. Guo only considers the problem for 2 and 3 encryption exponents. For 2 exponents we have the following relations:

e1 d1 g − k1 (p − 1)(q − 1) = g
e2 d2 g − k2 (p − 1)(q − 1) = g,

so multiplying the first by k2, the second by k1, and subtracting gives

k2 d1 e1 − k1 d2 e2 = k2 − k1.    (3)
Dividing both sides of equation 3 by k2 d1 e2 implies

\frac{e_1}{e_2} - \frac{k_1 d_2}{k_2 d_1} = \frac{k_2 - k_1}{k_2 d_1 e_2},

and assuming that the di (and hence the ki, if the ei are large) are at most N^α means that the right-hand side is about N^{−(1+α)}. For the fraction k1d2/(k2d1) to be a continued fraction approximant of e1/e2, we must therefore have 2(k2d1)^2 < N^{1+α}, and with the assumptions that k2 and d1 are at most N^α and that g is small, this condition holds whenever α = 1/3 − ε for some ε > 0.

However, unlike the situation with Wiener's attack, the fraction k1d2/(k2d1) does not break the RSA cryptosystem, for two reasons:
– Firstly, knowing, say, the numerator k1d2 does not allow us to find d2 or k1 without factoring this number.
– Secondly, there may be a factor in common between d1k2 and d2k1, in which case the continued fraction method would not give a fraction with numerator k1d2 and denominator k2d1, but rather the fraction with the common factor removed.

Guo assumes that the second problem does not exist, i.e. that gcd(k1d2, k2d1) = 1, and it is estimated that this happens with probability 6/π^2 ≈ 0.61. To get around the first problem, Guo suggests that one could either try to factor k1d2 (a number of size about N^{2/3} and not typically of a hard factorisation shape), or alternatively assume that one has another encrypting exponent e3 with d3 < N^{1/3}. Then (repeating the above procedure with e3 and e2) one can also find k3d2, and computing gcd(k1d2, k3d2) will hopefully (if gcd(k1, k3) = 1) give d2 and thus allow the factoring of N. The probability of this attack working under the given assumptions is (6/π^2)^3 ≈ 0.23.
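The final gcd step is mechanical. The sketch below is our own illustration of it; the denominator bound N^{2/3} is an illustrative choice (k2d1 and k2d3 are each about N^{2/3} under the stated assumptions), and all names are ours.

```python
# Sketch of Guo's three-exponent step: convergents of e1/e2 and e3/e2 yield the
# numerators k1*d2 and k3*d2 (when already in lowest terms), and
# gcd(k1*d2, k3*d2) = d2 whenever gcd(k1, k3) = 1.
from math import gcd

def convergents(a, b):
    """Yield the continued-fraction convergents (p, q) of a/b."""
    cf = []
    while b:
        cf.append(a // b)
        a, b = b, a % b
        p, q = 1, 0
        for x in reversed(cf):
            p, q = x * p + q, p
        yield p, q

def guo_d2_candidates(e1, e2, e3, N):
    bound = 1 << (2 * N.bit_length() // 3)  # ~ N^(2/3), illustrative
    nums12 = [p for p, q in convergents(e1, e2) if 0 < q < bound]
    nums32 = [p for p, q in convergents(e3, e2) if 0 < q < bound]
    return {gcd(p, r) for p in nums12 for r in nums32 if gcd(p, r) > 1}
```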
2.3 Overview of our Extension Approach
As already said in the introduction, our approach also assumes that we have more than one ei for a given N, and that each of these ei has a relatively small di. In the remainder we will use, among others, ideas from both Wiener and Guo to solve the general problem of breaking RSA in the presence of n encrypting exponents ei, all with relatively small di < N^{α_n}, i = 1, . . . , n. The main technique used in deriving these results is the creation and subsequent reduction of certain lattices. The approach taken by us, however, can currently only be classed as a heuristic method because, although the vectors we search for can be shown to be relatively short, we cannot yet prove that they are indeed among the shortest vectors (and hence bound to be found by lattice basis reduction algorithms). Nevertheless, in section 4 it is shown that our approach performs well in practice, and that the following theoretically derived bounds are frequently achieved. In particular, in the presence of n encrypting exponents ei, our approach allows the di to be as large as N^{α_n}, where

\alpha_n = \begin{cases} \dfrac{(2n+1)2^n - (2n+1)\binom{n}{n/2}}{(2n-2)2^n + (4n+2)\binom{n}{n/2}} & \text{if } n \text{ is even,} \\[2ex] \dfrac{(2n+1)2^n - 4n\binom{n-1}{(n-1)/2}}{(2n-2)2^n + 8n\binom{n-1}{(n-1)/2}} & \text{if } n \text{ is odd.} \end{cases}

The first few (from n = 1) are 1/4, 5/14, 2/5, 15/34, 29/62. In section 3.5 it is shown that α_n → 1 as n → ∞. If the LLL algorithm (see [LLL]) is used in order to reduce the lattices underlying our approach, and the (pessimistic) estimate for its complexity of O(m^6 log^3 B) is assumed (given a lattice of dimension m with largest norm B), then the complexity of our method is O(2^{6n} n^3 log^3 N), so the attack is clearly only practical for small n.
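These bounds are straightforward to evaluate mechanically. The short check below is our own; it reproduces the sequence just quoted using exact rational arithmetic.

```python
# Quick check of the alpha_n bound formula (reproduces 1/4, 5/14, 2/5, 15/34, 29/62).
from fractions import Fraction
from math import comb

def alpha(n: int) -> Fraction:
    if n % 2 == 0:
        c = comb(n, n // 2)
        return Fraction((2*n + 1) * 2**n - (2*n + 1) * c,
                        (2*n - 2) * 2**n + (4*n + 2) * c)
    c = comb(n - 1, (n - 1) // 2)
    return Fraction((2*n + 1) * 2**n - 4*n * c,
                    (2*n - 2) * 2**n + 8*n * c)

print([alpha(n) for n in range(1, 6)])
# [Fraction(1, 4), Fraction(5, 14), Fraction(2, 5), Fraction(15, 34), Fraction(29, 62)]
```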
3 An Extension in the Presence of Many Small Decryption Exponents

3.1 Preliminaries
In extending the analysis to n encrypting exponents ei (with small decrypting exponents di), we use both Wiener's and Guo's ideas. We shall refer to relations of the form

di g ei − ki N = g + ki s

as Wiener equations, and we shall denote them Wi (see equation 1 for an example). Similarly, we shall refer to relations of the form

ki dj ej − kj di ei = ki − kj

as Guo equations, and shall denote them Gi,j (see equation 3 for an example). We shall also assume, for a given n, that the di and ki are at most N^{α_n}, that g is small, and that s is around N^{1/2}. Notice that the right-hand sides of Wi and Gi,j are therefore quite small; in fact at most N^{(1/2)+α_n} and N^{α_n} respectively. Finally, we often refer to composite relations, e.g. WuGv,w, by which we mean the relation whose left-hand (resp. right-hand) side is the product of the left-hand (resp. right-hand) sides of Wu and Gv,w. For example, WuGv,w has a relatively small right-hand side, bounded in size by N^{(1/2)+2α_n}.

In the following analysis we examine the cases of 2, 3 and 4 exponents before generalising the approach to n exponents. This is done both to give explicit examples of the approach in the presence of a small number of exponents, and also because it is not until the presence of 4 exponents that the general phenomenon becomes clear. The relations that we choose for the cases of 2, 3 and 4 exponents may seem "plucked from the air", but the pattern is made clear in section 3.5.

3.2 RSA in the Presence of 2 Small Decryption Exponents
Assuming that we have two small decryption exponents, the following relations hold: W1, G1,2, W1W2; or more explicitly:

d1 g e1 − k1 N = g + k1 s,
k1 d2 e2 − k2 d1 e1 = k1 − k2,
d1 d2 g^2 e1 e2 − d1 g k2 e1 N − d2 g k1 e2 N + k1 k2 N^2 = (g + k1 s)(g + k2 s).

Multiplying the first of these by k2 means that the left-hand sides are all in terms of d1d2g^2, d1gk2, d2gk1, and k1k2, and hence we may write these equations in the matrix form below:

(k_1k_2,\, d_1gk_2,\, d_2gk_1,\, d_1d_2g^2) \begin{pmatrix} 1 & -N & 0 & N^2 \\ & e_1 & -e_1 & -e_1N \\ & & e_2 & -e_2N \\ & & & e_1e_2 \end{pmatrix} = (k_1k_2,\, k_2(g+k_1s),\, g(k_1-k_2),\, (g+k_1s)(g+k_2s)).

The sizes of the entries of the vector on the right-hand side are at most N^{2α_2}, N^{(1/2)+2α_2}, N^{α_2}, and N^{1+2α_2} respectively. These size estimates may be made roughly equivalent by multiplying the first three columns of the matrix by N, M1 = N^{1/2}, and M2 = N^{1+α_2} respectively, which gives the following matrix:

L_2 = \begin{pmatrix} N & -M_1N & 0 & N^2 \\ & M_1e_1 & -M_2e_1 & -e_1N \\ & & M_2e_2 & -e_2N \\ & & & e_1e_2 \end{pmatrix}.

In this case the vector b = (k1k2, d1gk2, d2gk1, d1d2g^2) will be such that ||bL2|| < 2N^{1+2α_2}.
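For concreteness, this reduction step can be scripted. The sketch below is our own construction of a matrix with the shape of L2, handed to the LLL implementation in fpylll (one choice among many). Powers of two stand in for the scalings N^{1/2} and N^{1+α_2}, since only their orders of magnitude matter; recovering the coefficient vector b from the reduced basis additionally requires tracking the unimodular transformation, which is omitted here.

```python
# Sketch of building and LLL-reducing the 2-exponent lattice L2 (illustrative).
from fpylll import IntegerMatrix, LLL

def reduce_L2(e1: int, e2: int, N: int):
    lbits = N.bit_length()
    M1 = 1 << (lbits // 2)                 # ~ N^(1/2)
    M2 = 1 << (lbits + (5 * lbits) // 14)  # ~ N^(1 + alpha_2), alpha_2 = 5/14
    rows = [
        [N, -M1 * N,        0,    N * N],
        [0, M1 * e1, -M2 * e1,  -e1 * N],
        [0,       0,  M2 * e2,  -e2 * N],
        [0,       0,        0,  e1 * e2],
    ]
    A = IntegerMatrix.from_matrix(rows)
    LLL.reduction(A)   # reduces in place; a short vector now sits in the basis
    return A
```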
We must now make the assumption that, in the lattice generated by the rows of L2, the shortest vector has length Δ^{1/4−ε}, where Δ := det(L2) ≈ N^{(13/2)+α_2}, and moreover that the next shortest linearly independent vector has a significantly larger norm than the shortest vector in L2. Indeed, if the lattice L2 is pretty "random", there are almost surely no lattice points of L2 significantly shorter than the Minkowski bound 2Δ^{1/4}. Under these assumptions, bL2 is the shortest vector in the lattice if

N^{1+2α_2} < (1/c_2)\left(N^{(13/2)+α_2}\right)^{1/4}

for some small c2, which is true if α2 < 5/14 − ε′. This implies that the vector b = (b1, b2, b3, b4) can be found via lattice basis reduction algorithms (e.g. LLL) if α2 < 5/14 − ε′, and then d1g/k1 = b2/b1 can be calculated, which leads to the factoring of N as shown in section 2.1.

3.3 RSA in the Presence of 3 Small Decryption Exponents
This method extends easily to 3 encrypting exponents. We now have the quantities 1, e1, e2, e1e2, e3, e1e3, e2e3 and e1e2e3 from which to form linear relationships, and we already have relationships concerning the first four of these from the 2 exponent case, namely 1, W1, G1,2 and W1W2. For the remaining relationships we choose G1,3, W1G2,3, W2G1,3 and W1W2W3. These relations imply looking for the vector

b = (k1k2k3, d1gk2k3, k1d2gk3, d1d2g^2k3, k1k2d3g, d1k2d3g^2, k1d2d3g^2, d1d2d3g^3),

by reducing the rows of the following lattice:

L_3 = \begin{pmatrix} 1 & -N & 0 & N^2 & 0 & 0 & 0 & -N^3 \\ & e_1 & -e_1 & -e_1N & -e_1 & 0 & e_1N & e_1N^2 \\ & & e_2 & -e_2N & 0 & e_2N & 0 & e_2N^2 \\ & & & e_1e_2 & 0 & -e_1e_2 & -e_1e_2 & -e_1e_2N \\ & & & & e_3 & -e_3N & -e_3N & e_3N^2 \\ & & & & & e_1e_3 & 0 & -e_1e_3N \\ & & & & & & e_2e_3 & -e_2e_3N \\ & & & & & & & e_1e_2e_3 \end{pmatrix} \times D,

where D is the diagonal matrix diag(N^{3/2}, N, N^{(3/2)+α_3}, N^{1/2}, N^{(3/2)+α_3}, N^{1+α_3}, N^{1+α_3}, 1) used to maximise the determinant of L3 while still keeping ||bL3|| < √8 · N^{(3/2)+3α_3}.
Again, using the assumptions that the shortest vector in the lattice generated by the rows of L3 has length det(L3)^{(1/8)−ε}, and is also significantly shorter than the next shortest linearly independent vector in L3, means that bL3 will be the shortest vector in the lattice L3 if

N^{(3/2)+3α_3} < (1/c_3)\left(N^{20+4α_3}\right)^{1/8}

for some small c3, which is true if α3 < 2/5 − ε′. By using again the first two components of b, as in the 2 exponent case, one may now factor the modulus N as shown in section 2.1.

3.4 RSA in the Presence of 4 Small Decryption Exponents
In the presence of 4 exponents we can now use linear relationships among the quantities 1, e1, e2, e1e2, e3, e1e3, e2e3, e1e2e3, e4, e1e4, e2e4, e3e4, e1e2e4, e1e3e4, e2e3e4 and e1e2e3e4. As before, we already have linear relationships for the first half of these quantities from the analysis in the presence of 3 equations. For the remaining quantities we use the relations G1,4, W1G2,4, G1,2G3,4, G1,3G2,4, W1W2G3,4, W1W3G2,4, W2W3G1,4 and W1W2W3W4. Putting these relations in matrix form, and multiplying the columns by appropriate factors to make all the relations of size at most N^{2+4α_4}, results in a 16 × 16 matrix, L4, which has determinant N^{(109/2)+13α_4}. The vector b we are now looking for is

b = (k1k2k3k4, d1gk2k3k4, k1d2gk3k4, d1d2g^2k3k4, k1k2d3gk4, d1k2d3g^2k4, k1d2d3g^2k4, d1d2d3g^3k4, k1k2k3d4g, d1k2k3d4g^2, k1d2k3d4g^2, k1k2d3d4g^2, d1d2k3d4g^3, d1k2d3d4g^3, k1d2d3d4g^3, d1d2d3d4g^4).

Therefore, again making the same assumptions as before, the vector bL4 is the shortest vector in the lattice generated by the rows of L4 if

N^{2+4α_4} < (1/c_4)\left(N^{(109/2)+13α_4}\right)^{1/16}

for some small c4, and this is true if α4 < 15/34 − ε′. Using again the first two components of b, as in the 2 and 3 exponent cases, one may again factor the modulus N as shown in section 2.1.
3.5 The General Approach
Due to space limitations, we defer the subtle computation of the general allowable bound on the di when we have n encrypting exponents ei, i = 1, . . . , n, to the appendix, and show below simply the graph of

\alpha_n = \begin{cases} \dfrac{(2n+1)2^n - (2n+1)\binom{n}{n/2}}{(2n-2)2^n + (4n+2)\binom{n}{n/2}} & \text{if } n \text{ is even,} \\[2ex] \dfrac{(2n+1)2^n - 4n\binom{n-1}{(n-1)/2}}{(2n-2)2^n + 8n\binom{n-1}{(n-1)/2}} & \text{if } n \text{ is odd.} \end{cases}

Fig. 1. Graph of the bounds α_n for n ≤ 100. (Plot not reproduced; the series "bounds" rises on a y-axis from 0 to 1, over x-axis n from 0 to 100.)
4 Practical Results
Although our method is at the current time only heuristic, it works well in practice, as can be seen from our experimental results below. Our implementation uses the NTL library [Sh] of Victor Shoup. Timings are given for a 300 MHz AMD K6 running under Linux.

RSA-500 with 2 public exponents
α2      bit length of di   avg. time in secs.   success rate
0.356   178                0.441                40%
0.354   177                0.421                100%
Fig. 2. Average running time (in seconds) and success rate for 10 random experiments as a function of α2.

RSA-700 with 2 public exponents
α2         bit length of di   avg. time in secs.   success rate
0.357143   250                1.075                0%
0.355714   249                1.117                70%
0.354286   248                0.93                 80%
0.352857   247                1.33                 100%

Fig. 3. Average running time (in seconds) and success rate for 10 random experiments as a function of α2.

RSA-500 with 3 public exponents
α3      bit length of di   avg. time in secs.   success rate
0.4     200                3.632                0%
0.398   199                3.567                40%
0.396   198                3.599                90%
0.394   197                3.726                90%
0.392   196                3.595                90%
0.39    195                3.529                100%

Fig. 4. Average running time (in seconds) and success rate for 10 random experiments as a function of α3.

RSA-200 with 4 public exponents
α4      bit length of di   avg. time in secs.   success rate
0.44    88                 14.538               0%
0.435   87                 14.496               50%
0.43    86                 14.328               80%
0.425   85                 14.159               100%

Fig. 5. Average running time (in seconds) and success rate for 10 random experiments as a function of α4.

RSA-200 with 5 public exponents
α5      bit length of di   avg. time in secs.   success rate
0.45    90                 424.756              0%
0.445   89                 427.275              60%
0.44    88                 422.74               100%

Fig. 6. Average running time (in seconds) and success rate for 10 random experiments as a function of α5.
5 Open Problems
The major open problem raised by our work is the following. To work out the manageable bound on α_n for the secret exponents, we had to make two heuristic assumptions concerning "random" lattices. As the experimental results strongly support the derived bounds, it is natural to ask whether our attack can be turned into a rigorous theorem.
References

[B]   D. Boneh, "Twenty years of attacks on RSA", Notices of the AMS, Vol. 46, pp. 203–213, 1999.
[BD]  D. Boneh, G. Durfee, "New results on the cryptanalysis of low exponent RSA", to appear in Proc. of EUROCRYPT '99.
[D]   J. M. DeLaurentis, "A further weakness in the common modulus protocol for the RSA cryptoalgorithm", Cryptologia, Vol. 8, pp. 253–259, 1984.
[G]   C. R. Guo, "An application of diophantine approximation in computer security", to appear in Mathematics of Computation.
[HW]  G. H. Hardy, E. M. Wright, An Introduction to the Theory of Numbers, 5th edn., Oxford University Press, 1979.
[LLL] A. K. Lenstra, H. W. Lenstra, L. Lovász, "Factoring polynomials with rational coefficients", Mathematische Annalen, Vol. 261, pp. 513–534, 1982.
[M]   J. H. Moore, "Protocol failures in cryptosystems", in G. J. Simmons (ed.), Contemporary Cryptology, IEEE Press, 1992.
[RSA] R. L. Rivest, A. Shamir, L. Adleman, "A method for obtaining digital signatures and public-key cryptosystems", Commun. ACM, Vol. 21, pp. 120–126, 1978.
[Sh]  V. Shoup, "Number Theory Library (NTL)", http://www.cs.wisc.edu/~shoup.ntl.
[Si]  G. J. Simmons, "A 'weak' privacy protocol using the RSA cryptalgorithm", Cryptologia, Vol. 7, pp. 180–182, 1983.
[VvT] E. R. Verheul, H. C. A. van Tilborg, "Cryptanalysis of 'Less Short' RSA secret exponents", Applicable Algebra in Engineering, Communication and Computing, Vol. 8, pp. 425–435, 1997.
[W]   M. Wiener, "Cryptanalysis of short RSA exponents", IEEE Trans. on Information Theory, Vol. 36, pp. 553–558, 1990.
Appendix

We now work out the general bound on the di when we have n encrypting exponents. The reader is encouraged to refer back to the previous sections (when n = 2, 3 and 4) as examples. Given that there are n exponents ei, there are 2^n different quantities, hj, involving the ei's, and the product of all of these (assuming e ≈ N) is N^{n2^{n−1}}. This means that one considers a triangular matrix, L_n, of dimension 2^n, and that the determinant of this matrix, before multiplying the rows to increase the allowable bound, is N^{n2^{n−1}}.
The last relation W1W2 . . . Wn has a right-hand side of at most N^{(n/2)+nα_n}, and thus we increase the right-hand side of all the other relations up to this bound, making the desired vector b such that ||bLn||_∞ is (still) approximately N^{(n/2)+nα_n}. The general form of the desired vector b is that its j-th entry is the product of n unknown quantities ai for i = 1 . . . n, where ai is either di g or ki, depending on whether ei is present in the j-th quantity hj or not.

We now consider the interesting problem of which relations to consider for n equations. Observe that a general relation of the form

R_{u,v} = W_{i_1} \cdots W_{i_u} G_{j_1,l_1} \cdots G_{j_v,l_v}

(where the i_1, . . . , i_u, j_1, . . . , j_v, l_1, . . . , l_v are distinct) has a left-hand side composed of products of (u + 2v) of the ei's, with coefficients that are products of (u + 2v) of the unknown quantities ai (where ai is again either di g or ki). Also notice that the right-hand side of Ru,v has size at most N^{(u/2)+(u+v)α_n}.

Our method requires all the coefficients to be roughly the same size (a product of n of the quantities ai). This means that relations which have coefficients less than this must be multiplied (on both sides) by some missing ki. For example, in the 2 exponent case we multiplied the first equation by k2 to make all the coefficients of size N^{2α_2}. This has the effect of increasing the right-hand side of relation Ru,v to a size bounded by N^{(u/2)+(n−v)α_n}. Given this new relation Ru,v, we now need to make its right-hand side as large as the right-hand side of W1W2 . . . Wn, which means multiplying (both sides) by N^{(n−u)/2+vα_n}. For example, these multiplication factors are the (diagonal) entries of the diagonal matrix D in the example when n = 3. Say that the product of these multiplication factors (i.e. the determinant of D in the n = 3 example) is N^{β_n}, where β_n = x + yα_n, and let Ln denote the lattice of (modified) relations as before. This means that (under the usual assumptions) the vector bLn is the shortest vector of the lattice if

N^{n/2+nα_n} < (1/c_n)\left(N^{n2^{n−1}+x+yα_n}\right)^{1/2^n}

for some small cn, i.e. when

α_n < \frac{x}{n2^n − y} − ε′.    (4)
In order to maximise α_n we wish both x and y to be large. This means that the relations should be chosen to maximise v (and minimise u). For instance, when n = 2 we choose the relations W1, G1,2 and W1W2 rather than W1, W2 and W1W2, because β2 = 2 in the latter case rather than 5/2 + α2 in the former. With this general principle in mind we still need to explain exactly which relations we use. In order to maintain the triangularity of Ln we only consider relations which introduce one new quantity hj. The choices for n ≤ 5 can be seen in the figure below.
hj            relation        size of   size of   size of        contribution
                              coeffs    hj        rhs            to βn
1             —               0         0         0              (n/2)
e1            W1              1         1         (1/2)+αn       (n−1)/2
e2            G1,2            2         1         αn             (n/2)+αn
e1e2          W1W2            2         2         1+2αn          (n−2)/2
e3            G1,3            2         1         αn             (n/2)+αn
e1e3          W1G2,3          3         2         (1/2)+2αn      (n−1)/2+αn
e2e3          W2G1,3          3         2         (1/2)+2αn      (n−1)/2+αn
e1e2e3        W1W2W3          3         3         (3/2)+3αn      (n−3)/2
e4            G1,4            2         1         αn             (n/2)+αn
e1e4          W1G2,4          3         2         (1/2)+2αn      (n−1)/2+αn
e2e4          G1,2G3,4        4         2         2αn            (n/2)+2αn
e3e4          G1,3G2,4        4         2         2αn            (n/2)+2αn
e1e2e4        W1W2G3,4        4         3         1+3αn          (n−2)/2+αn
e1e3e4        W1W3G2,4        4         3         1+3αn          (n−2)/2+αn
e2e3e4        W2W3G1,4        4         3         1+3αn          (n−2)/2+αn
e1e2e3e4      W1W2W3W4        4         4         2+4αn          (n−4)/2
e5            G1,5            2         1         αn             (n/2)+αn
e1e5          W1G2,5          3         2         (1/2)+2αn      (n−1)/2+αn
e2e5          G1,2G3,5        4         2         2αn            (n/2)+2αn
e3e5          G1,3G4,5        4         2         2αn            (n/2)+2αn
e4e5          G1,4G2,5        4         2         2αn            (n/2)+2αn
e1e2e5        W1W2G4,5        4         3         1+3αn          (n−2)/2+αn
e1e3e5        W1G2,3G4,5      5         3         (1/2)+3αn      (n−1)/2+2αn
e1e4e5        W1G2,4G3,5      5         3         (1/2)+3αn      (n−1)/2+2αn
e2e3e5        W2G1,3G4,5      5         3         (1/2)+3αn      (n−1)/2+2αn
e2e4e5        W2G1,4G3,5      5         3         (1/2)+3αn      (n−1)/2+2αn
e3e4e5        W3G2,4G1,5      5         3         (1/2)+3αn      (n−1)/2+2αn
e1e2e3e5      W1W2W3G4,5      5         4         (3/2)+4αn      (n−3)/2+αn
e1e2e4e5      W1W2W4G3,5      5         4         (3/2)+4αn      (n−3)/2+αn
e1e3e4e5      W1W3W4G2,5      5         4         (3/2)+4αn      (n−3)/2+αn
e2e3e4e5      W2W3W4G1,5      5         4         (3/2)+4αn      (n−3)/2+αn
e1e2e3e4e5    W1W2W3W4W5      5         5         (5/2)+5αn      (n−5)/2

A table showing the chosen relations for n ≤ 5; the "size" columns give the number of unknown factors in the coefficients, and the exponents of N for hj, the right-hand side and the contribution to βn.

After the initial "base relation" (which requires that the first component of b should be small), we seek a linear relation between e1 and 1 (or a multiple of this, e.g. N), and our only choice for this is W1. With the introduction of the next exponent e2, we now look for a relation between 1, e1 and e2. For this we can either choose W2 or G1,2, and as explained above, G1,2 is the right choice.
A more interesting situation arises when the fourth exponent e4 has been introduced, and one looks for a relation regarding e1e4 and the previous ones. The best choice in this case turns out to be W1G2,4. However, when considering the next relation, regarding e2e4 and the previous ones, we may now use G1,2G3,4, because the left-hand side of this relation contains e1e3, e1e4, e2e3 and e2e4, all of which are now present. In general, when looking for a relation regarding e_{i_1}e_{i_2} · · · e_{i_s} and the previous ones, one can use any relation Ru,v where u + v = s, subject to the required hj being present earlier. It can be shown that the number of relations Ru,v with v = t should be \binom{n}{t} − \binom{n}{t−1}, regardless of the size s = u + v of the relation (though of course this is subject to t ≤ s and s + t ≤ n). The contribution to βn for such a relation is (n − s + t)/2 + tα_n, and thus (summing over the possible relations) the total contribution to βn is shown below:

β_n = \sum_{s=0}^{n} \sum_{t=0}^{\min(s,\,n-s)} \left( \binom{n}{t} - \binom{n}{t-1} \right) \left( \frac{n-s+t}{2} + t\alpha_n \right).
Assuming n is even, this sum can be simplified to

β_n = \frac{(2n+1)2^n - (2n+1)\binom{n}{n/2}}{4} + \frac{(n+1)2^n - (2n+1)\binom{n}{n/2}}{2}\,\alpha_n,

or, if n is odd, the sum becomes

β_n = \frac{(2n+1)2^n - 4n\binom{n-1}{(n-1)/2}}{4} + \frac{(n+1)2^n - 4n\binom{n-1}{(n-1)/2}}{2}\,\alpha_n.
(2n + 1)2n − (2n + 1) (2n − 2)2n + (4n + 2)
n n/2 n , n/2
(5)
whilst if n is odd, then αn =
(2n + 1)2n − 4n (2n − 2)2n + 8n
n−1 (n−1)/2 . n−1 (n−1)/2
√ Either way, using Stirling’s formula n! ' 2πnnn e−n we get that 1 2k (2k)! √ 22k 22k = 2 ' k πk (k!) as k → ∞, and then we have that αn → 1 as n → ∞.
(6)
Improving the Exact Security of Fiat-Shamir Signature Schemes Silvio Micali and Leonid Reyzin? MIT Laboratory for Computer Science, Cambridge, MA 02139, USA
Abstract. We provide two contributions to exact security analysis of digital signatures: 1. We put forward a new method of constructing Fiat-Shamir-like signature schemes that yields better “exact security” than the original Fiat-Shamir method; and 2. We extend exact security analysis to exact cost-security analysis by showing that digital signature schemes with “loose security” may be preferable for reasonable measures of cost.
1
Introduction
1.1
Exact Security of Signature Schemes
Goldwasser, Micali and Rivest’s ([GMR88]) classical notion of security for a digital signature scheme is asymptotic in nature. In essence, a proof of security amounts to a reduction from forging a signature to solving a computationally hard problem: if a polynomial-time forger exists, then we can use it to solve the hard problem in polynomial time. It has been often pointed out that this asymptotic approach, which uses notions such as “polynomial time” and “sufficiently large,” is too coarse for practical security recommendations. Knowing that no polynomial-time adversary has a better than exponentially small chance of forgery for a sufficiently large security parameter does not provide one with an answer to the practical problem of finding the appropriate security parameters to ensure security against adversaries with certain concrete capabilities. Bellare and Rogaway ([BR96]) argue that, in order to be able to deduce concrete security recommendations, it is important to be precise in the reduction from a forger to the algorithm that solves the hard problem. For example, if one knows that factoring integers of length l is no more than 100 times harder than breaking a certain signature scheme with security parameter l, then one could pick l so that even 1% of the work required to factor integers of length l is considered infeasible. ?
A full version of the paper is available from http://theory.lcs.mit.edu/˜reyzin/. The second author was supported in part under a National Science Foundation Graduate Fellowship.
R. Baumgart (Ed.): CQRE’99, LNCS 1740, pp. 167–182, 1999. c Springer-Verlag Berlin Heidelberg 1999
168
S. Micali and L. Reyzin
A reduction in which the difficulty of forging and the difficulty of solving the underlying hard problem are close is called tight; otherwise, it is called loose. (Naturally, “close,” “tight” and “loose” are imprecise terms and make more sense when used in the comparative.) A scheme whose exact security is tightly related to the difficulty of factoring is also proposed in [BR96]. 1.2
Loose Security of Fiat-Shamir-like Signature Schemes
A fruitful method for constructing signature schemes was introduced by Fiat and Shamir ([FS86]). Although claimed for a specific ID scheme, the method works with a general commit-challenge-respond ID scheme. The method consists of replacing the verifier’s random challenge by a publicly known “random” function H computed on the prover’s commitment and the message being signed. This removes interaction and adds the message into the picture, thus changing an ID scheme into a signature scheme. Many of such signature schemes have been proven secure when the “random” function is modeled as a random oracle ([BR93] provide a formal treatment of this model). However, the reductions in these proofs are quite loose, thus necessitating larger key sizes. Unless a tighter reduction has been overlooked, the only way to improve the security of such signature schemes is to modify them to allow for tighter reductions. 1.3
The Contributions of this Paper
This paper makes two contributions. First, we show how to modify the factoring-based Fiat-Shamir-like schemes to makes their security very tightly related to the problem of integer factorization. Our modification is quite general and can be applied, in particular, to the schemes from [FS86], [FFS88], [OO88], [MS88], [OS90], [Oka92], [Mic94], [Sho96] and [Sch96]. To exemplify our method and make the description concrete, we picked one of the simpler and more efficient schemes from the above list, the one of [Mic94]. As it is only an example of a scheme that our method applies to, we shall henceforth call it the “E” scheme. We first present an exact analysis of the loose security of E, then propose the modification (called the “swap method”) and present an exact analysis of the tight security of the modified scheme (called “E-swap”). Note that both E and E-swap are quite practical, with the performance that is comparable to that of the schemes currently used in practice. Second, after proposing a method for creating signature schemes with tight reductions, we demonstrate that tightness of a reduction alone is insufficient if one wishes to maximize security while minimizing costs other than key length (e.g., signing time). While it is indeed true that a tighter reduction allows for a lower security parameter, a “loose” scheme can be so efficient that, though requiring a larger security parameter for a specified level of security, may deliver better performance (e.g., in signing time) than a “tight” scheme for the same level of security.
Improving the Exact Security of Fiat-Shamir Signature Schemes
169
Specifically, we demonstrate that although E-swap has better exact security than the E scheme, which of the two schemes to pick depends on what the main factor in the cost is. If the efficiency of verifying is of main concern, then E-swap should be chosen. If, however, the efficiency of signing is the main concern, then the E scheme can deliver more security for less cost. In fact, in that case the E scheme can deliver more security for less cost than even the schemes of [BR96]. We highlight that our point is not sacrificing security for efficiency. Quite contrary, we leverage efficiency in order to achieve better security. We submit that measuring the cost of a signature scheme accurately is just as important as measuring security accurately, and demonstrate that schemes with worse exact security may actually achieve better security for the same cost. We hope that this may further the applicability of exact security analysis. 1.4
Roadmap
We begin by introducing formal definitions and dealing with other preliminaries in Section 2. We introduce the E signature scheme and analyze its security in Section 3. Our new method for constructing signature schemes is given in Section 4. We then show in Section 5 how to apply exact security analysis to choosing a digital signature scheme so as to optimize a given cost for a given level of security.
2
Definitions
In the interests of space, we omit the explanations of commonly used notation and the often-used definition of a signature scheme as a triple Gen? , Sign? , V er? of algorithms that are given access to a common oracle (see, e.g., [BR93] and [BR96]). They are available in the full paper. We will, however, provide a more detailed discussion of what it means for a signature scheme to be secure. Our definition of security is a modified version of that in [BR96], which is based on [BR93] and [GMR88]. This definition concerns itself with exact, rather than asymptotic, security. Intuitively, we want to capture the following in our definition of security: there is no algorithm (called “forger”) that, for a random oracle H, is able to produce new valid signatures with reasonable probability in reasonable time without knowing the secret key sk. Moreover, we should assume that an attacker can coerce the signer into signing some number of messages of the attacker’s choice—to carry out the so-called “adaptive chosen-message attack” [GMR88]. We model this by giving the forger oracle access to the oracle H and to the algorithm SignH (sk, ·). Definition 1. A forger F ?,? is a probabilistic two-oracle algorithm that is given a security parameter k and a public key pk as input. The first oracle of F is called a hashing oracle and the second oracle is called a signature oracle. Let H be a hashing oracle, and let (pk, sk) = GenH (1k ) for some k. We say that the
170
S. Micali and L. Reyzin H
forger succeeds if (M, x) ← F H,Sign (sk,·) (1k , pk) and x is a valid signature on M and F did not query its signature oracle on M . We say that a forger (t, qsig , qhash , ε, δ)-breaks the signature scheme if, for a security parameter k, the following holds: – – – –
its running time (plus the size of its description) does not exceed t(k) the number of its queries to the signature oracle does not exceed qsig (k) the number of its queries to the hashing oracle does not exceed qhash (k) with probability at least δ(k), GenH (1k ) generates such a key (pk, sk) that the probability of the forger’s success on input (1k , pk) is at least ε(k) (here, the probability of the forger’s success is taken over a random choice of the oracle H, the random tape of the forger, the random tape of the signer to whom the forger addresses the chosen-message queries, but not the choice of pk)
Finally, we say that a signature scheme is (t, qsig , qhash , ε, δ)-secure if no forger (t, qsig , qhash , ε, δ)-breaks it. (As an aside for the reader familiar with the definition of [BR96], we point out that if a scheme is (t, qsig , qhash , εδ)-secure in the sense of the [BR96], then it is (t, qsig , qhash , ε, δ)-secure in the sense of the above definition. We simply separate the component of the probability that is due to the selection of the public key.) Now that we have defined what it means for a signature scheme to be secure, how do we actually prove anything about security? We will relate the security of a signature scheme to the difficulty of some problem; in our case, the difficulty of factoring. Let Gen(1l ) be an algorithm generating l-bit products of two primes. Definition 2. We will say that an algorithm A (t, ε, δ)-factors integers generated by Gen if, for a given parameter l, – A’s running time (plus the size of its description) does not exceed t(l) – with probability at least δ(l), Gen(1l ) generates such an integer n that A has at least ε(l) probability (taken over only the random choices of the algorithm, not the choice of n) of producing the correct factors of n on input n We will say that factoring integers generated by Gen is (t, ε, δ)-secure if no such A exists. Given this definition of the difficulty of a problem, we can then explain the security of a signature scheme Π in the following terms, as suggested by [BR96]: if some problem is (t0 , ε0 , δ 0 )-secure, then Π scheme is (t, qsig , qhash , ε, δ)-secure. If t is not much smaller than t0 and ε, δ are not much larger than ε0 , δ 0 , even for a reasonably large qsig and qhash , then the reduction proving the security is called tight.
Improving the Exact Security of Fiat-Shamir Signature Schemes
3 3.1
171
The E Scheme Signature and Verification Algorithms
We describe the following ID and signature scheme from [Mic94], with similarities to the Ong-Schnorr ([OS90]) and the Guillou-Quisquater ([GQ88]) schemes. Number Theory. Let k and l be two security parameters. Let p1 ≡ 3 (mod 8) and p2 ≡ 7 (mod 8) be two primes of approximately equal size and n = p1 p2 be an l-bit integer (such n is called a Williams integer [Wil80]). To simplify further computations, we will assume not only that n > 2l−1 , but also that |Zn∗ | = n − p1 − p2 + 1 > 2l−1 , and that p1 + p2 − 1 < 2l/2+1 . Let Q denote the set of non-zero quadratic residues modulo n. Note that |Q| > 2l−3 . Note also that for x ∈ Q, exactly one of its four square roots is also in Q (this follows from the fact that −1 is a non-square modulo p1 and p2 and the Chinese remainder theorem). Thus, squaring is a permutation over Q. From now on, when we speak −k of “the square root of x,” we mean the single square root in Q; by x2 we will k 2 is a non-square denote the single y ∈ Q such that x = y 2 .Also note that 2 (p2 −1)/8 ), so 1 ∈ Q and modulo p1 and a square modulo p2 (because p = (−1) −1, 2, −2 ∈ / Q. In general, for any x ∈ Zn∗ , exactly one of x, −x, 2x, −2x is in Q. Following [GMR88], define F0 (x) = x2 mod n, F1 (x) = 4x2 mod n, and, for an m-bit binary string σ = b1 . . . bm , define Fσ : Q → Q as Fσ (x) = m Fbm (. . . (Fb2 (Fb1 (x))) . . .) = x2 4σ mod n (note that 4σ is a slight abuse of notation, because σ is a binary string, rather than an integer; what is really meant here is 4 raised to the power of the integer represented in binary by σ). Because squaring is a permutation over Q and 4 ∈ Q, Fσ is a permutation over Q. Note that Fσ (x) can be efficiently computed by anybody who knows n. Also, if one knows p1 and p2 , one can efficiently compute x = Fσ−1 (y) (as shown −|σ| mod n and then letting by Goldreich in [Gol86]) by computing s = 1/42 −|σ| x = y2 sσ mod n (these calculations can be done modulo p1 and p2 separately, and the results combined using the Chinese remainder theorem). However, if one does not know p1 and p2 , then Fσ−1 is hard to compute, as shown in the Lemma below. Lemma 1. If one can compute, for a given y ∈ Q and two different strings σ and τ of equal length, x1 = Fσ−1 (y) and x2 = Fτ−1 (y), then one can factor n. Proof. The proof is by induction on the length of the strings σ and τ . If |σ| = |τ | = 1, then assume, without loss of generality, that σ = 0 and τ = 1. Then F0 (x1 ) ≡ F1 (x2 ) ≡ y mod n, i.e., x21 ≡ 4x22 mod n, i.e., n|(x1 − 2x2 )(x1 + 2x2 ). Note that x1 , x2 ∈ Q and ±2 ∈ / Q, so ±2x2 ∈ / Q, so x1 ≡ / ± 2x2 (mod n), so n does not divide either x1 − 2x2 or x1 + 2x2 . Thus, by computing the gcd of x1 + 2x2 and n, we can get either p1 or p2 . For the inductive case, let σ and τ be two strings of length m + 1. Let σ 0 and 0 τ be their m-bit prefixes, respectively. If Fσ0 (x1 ) ≡ Fτ 0 (x2 ) (mod n), we are done by the inductive hypothesis. Otherwise, the last bit of σ must be different
from the last bit of τ, so, without loss of generality, assume the last bit of σ is 0 and the last bit of τ is 1. Then F0(Fσ′(x1)) ≡ F1(Fτ′(x2)) (mod n), and the same proof as for the base case works here.

The ID Scheme. The above lemma naturally suggests the following ID scheme. A user has n as the public key and p1, p2 as the secret key. To prove his identity (i.e., that he knows p1 and p2) to a verifier, he commits to a random X ∈ Q and sends it to the verifier. The verifier produces a random k-bit challenge σ and sends it to the prover (the user). The prover responds with z = F_{0σ}^{−1}(X) (note that σ here is prefixed with a single 0 bit, whose use will be explained shortly). The verifier checks that X = F_{0σ}(z) = Fσ(z^2) and that z ≢ 0 (mod n). Informally, the security of this protocol is based on the fact that if the prover is able to respond to two different challenges σ and τ, then, by Lemma 1, he knows p1 and p2. The 0 bit in front of σ is to save the verifier from having to check that the prover's response is in Q (which is a hard problem in itself)—instead, she just squares the prover's response and thus puts it in Q. We will say no more about the security of this ID scheme because we are not concerned with it in this paper.

We will, however, point out an efficiency improvement for the prover. First, as part of key generation, the prover computes, using the Chinese remainder theorem, s = 1/4^{2^{−(k+1)}} mod n. Then, when committing to a random X ∈ Q, the prover randomly selects an x ∈ Zn* and sets X = x^{2^{k+1}} mod n (note that X gets selected with uniform distribution as long as x is so selected). Now, to respond to a challenge σ, the prover simply computes z = x s^σ mod n.

The E Scheme. The standard way to change the above ID scheme into a signature scheme is to replace the verifier with a random function H : {0, 1}* → {0, 1}^k. The exact steps of the algorithms Gen, Sign and Ver follow.

Key Generation
1. Generate two random primes p1 ≡ 3 (mod 8) and p2 ≡ 7 (mod 8) and n = p1p2 so that n < 2^l, n − p1 − p2 + 1 > 2^{l−1} and p1 + p2 − 1 < 2^{l/2+1}.
2. Generate the coefficient c = p2^{−1} mod p1 for use in the Chinese remainder theorem.
3. Compute ui = ((pi + 1)/4)^{k+1} mod (pi − 1)/2 for i = 1, 2 (note that ui is such that raising a square to the power ui modulo pi will compute its 2^{k+1}-th root).
4. Compute si = ((pi + 1)/4)^{ui} mod pi for i = 1, 2.
5. Compute v = (s1 − s2)c mod p1 and s = s2 + vp2 to get s = 1/4^{2^{−(k+1)}} mod n.
6. Output n as the public key and (n, s) as the secret key.

Signing
1. Generate X by picking a random x ∈ Zn* and computing X = x^{2^{k+1}} mod n (note that this step can be done off-line, before the message is known).
2. Compute σ = H(X, M) and z = F_{0σ}^{−1}(X) via t = s^σ mod n (this can be done via ti = si^σ mod pi for i = 1, 2, v = (t1 − t2)c mod p1, t = t2 + vp2) and z = xt mod n.
3. Output (z, σ).
Verifying
1. Verify that z ≢ 0 (mod n) and compute X = Fσ(z^2) via t1 = z^{2^{k+1}} mod n, t2 = 4^σ mod n, X = t1t2 mod n.
2. Verify that σ = H(X, M).
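To make the algebra concrete, here is a small illustrative rendering (ours, not the paper's) of Fσ and the verification equation. Instantiating the random oracle H with SHA-256 truncated to k bits is our assumption.

```python
# Illustrative sketch of F_sigma and E-scheme verification (names are ours).
import hashlib

def F_sigma(x: int, sigma: str, n: int) -> int:
    """F_sigma(x) = x^(2^m) * 4^sigma mod n for an m-bit string sigma."""
    for bit in sigma:
        x = x * x % n        # F_0: squaring
        if bit == '1':
            x = 4 * x % n    # F_1: squaring then multiplying by 4
    return x

def H(X: int, M: bytes, k: int) -> str:
    d = hashlib.sha256(X.to_bytes((X.bit_length() + 7) // 8 or 1, 'big') + M)
    return bin(int.from_bytes(d.digest(), 'big'))[2:].zfill(256)[:k]

def verify(n: int, k: int, M: bytes, z: int, sigma: str) -> bool:
    if z % n == 0:
        return False
    X = F_sigma(z * z % n, sigma, n)  # X = F_sigma(z^2) = F_{0 sigma}(z)
    return sigma == H(X, M, k)
```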
3.2 Security of the E Scheme
We state the following two theorems, which give two different views of the exact security of the E scheme and demonstrate the tradeoff between running time and success probability. Their proofs use known methods (see Pointcheval and Stern [PS96] and Ohta and Okamoto [OO98]). Our probability analysis is new, however, and results in slightly tighter reductions.

Theorem 1. If there exists a forger that (t, qsig, qhash, ε, δ)-breaks the E scheme with security parameters l and k, then there exists an algorithm that (t′, ε′, δ)-factors integers generated by Gen, for

t′ = 2t + 2(qsig + 1)T1 + T2,
ε′ = (ε − 2^{−k}(qhash + 1)) · ε(1 − 2γ)/(qhash + 1),

where T1 is the time required to perform an E signature verification, T2 is the time required to factor n given the conditions of Lemma 1 (essentially, a gcd computation), and γ = qsig(qhash + 1)2^{−l+3} (note that γ is close to 0 for a large enough l).

Proof. Let F be a forger that (t, qsig, qhash, ε, δ)-breaks the E signature scheme. We will construct a factoring algorithm A that uses F to produce y, z ∈ Zn* and σ ≠ τ ∈ {0, 1}^k such that Fσ(z^2) = Fτ(y^2). This will allow A to factor n by using the method given in the proof of Lemma 1. The main idea of this proof is given by the "forking lemma" of [PS96]. It is to allow F to run once to produce one forgery—a signature (z, σ) on a message M such that σ = H(X, M), where X = Fσ(z^2). Note that F had to ask a hashing-oracle query on (X, M)—otherwise its probability of success is at most 2^{−k}. Then, run F a second time, giving the same answers to all the oracle queries before the query (X, M). For (X, M) give a new answer τ. Then, if F again forges a signature (y, τ) using X and M, we will have achieved our goal. Assuming n is such that F has probability at least ε of success, the probability that A will factor n using this approach is roughly ε^2/qhash, because F needs to succeed twice and we have no guarantee that F will choose to use (X, M) for its second forgery and not any of its other qhash oracle queries. The complete details of the proof are available in the full version of this paper and are omitted here in the interests of space.

Theorem 2. If there exists a forger that (t, qsig, qhash, ε, δ)-breaks the E scheme with security parameters l and k, such that ε > 2^{−k+1}(qhash + 1)/(1 − qsig(qhash + 1)2^{−l+3}),
then there exists an algorithm that (t′, ε′, δ)-factors integers generated by Gen, for

t′ = \frac{(2q_{hash} + 3)(t + q_{sig} T_1) + T_2}{ε(1 − γ) − 2^{−k+1}(q_{hash} + 1)},
ε′ = \frac{1}{2}\left(1 − \frac{1}{e}\right)^2 > 0.199,

where T1, T2 and γ are as in Theorem 1.

Proof. The idea is to iterate the algorithm A from Theorem 1 sufficiently many times to get a constant probability of success. More specifically, run F about 1/ε times the first time, to achieve a constant probability of a successful forgery, and about 2qhash/ε times the second time, to achieve a constant probability of a successful forgery that uses the pair (X, M). The complete details of the proof are available in the full version of this paper and are omitted here in the interests of space.

The following two statements follow directly from the theorems just proved, once we fix the parameters to be high enough to avoid dealing with small terms.

Corollary 1. If factoring l-bit integers generated by Gen is (t′, ε′, δ)-secure, then the E signature scheme is (t, qsig, qhash, ε, δ)-secure for

qsig ≤ 2^{l−5}/(qhash + 1),
t = t′/2 − (qsig + 1)T1 − T2/2,
ε = 2^{−k}(qhash + 1) + √(2ε′(qhash + 1)).

Proof. Note that the value for t follows directly by solving for t the equation for t′ in the statement of Theorem 1. The value for ε is computed as follows: solve for ε the quadratic equation that expresses ε′ in terms of ε to get

ε = \left(2^{−k}(q_{hash} + 1) + \sqrt{2^{−2k}(q_{hash} + 1)^2 + 4ε′(q_{hash} + 1)/(1 − 2γ)}\right)/2 ≤ 2^{−k}(q_{hash} + 1) + \sqrt{ε′(q_{hash} + 1)/(1 − 2γ)}.

Observe that we are allowed to increase ε, as this will only weaken the result. Note that the condition on qsig ensures that 1 − 2γ ≥ 1/2, so setting ε to 2^{−k}(qhash + 1) + √(2ε′(qhash + 1)) will not decrease it.

Corollary 2. If factoring l-bit integers generated by Gen is (t′, 0.199, δ)-secure, then the E signature scheme is (t, qsig, qhash, ε, δ)-secure for

t = \frac{(t′ − T_2)ε}{4q_{hash} + 6} − q_{sig} T_1,
(t0 − T2 )ε − qsig T1 4qhash + 6
Improving the Exact Security of Fiat-Shamir Signature Schemes
175
as long as qsig ≤ 2l−5 /(qhash + 1) ε ≥ 2−k+3 (qhash + 1). Proof. The proof is similar to that of of Corollary 1, and is given in the full paper.
4 4.1
The Swap Method Motivation
As exemplified by the proof of Theorems 1 and 2 above, all known results for the security of Fiat-Shamir-like signature schemes involve losing a factor of qhash (in either time or success probability) in the reduction from a forger to an algorithm that breaks the underlying hard problem (see, for example, [FS86], [Sch96], [PS96], [Sho96], [OO98]). While no proof exists that the loss of this factor is necessary, the problem seems inherent in the way signature schemes are constructed from ID schemes, as explained below. The security of an ID scheme usually relies on the fact that a prover would be unable to answer two different challenges for the same commitment without knowing the private key. Therefore, in the proof of security of the corresponding signature scheme, we need to use the forger to get two signatures on the same commitment, as we did in the proof of Theorems 1 and 2. The forger, however, has any of its qhash queries to pick for the commitment for the second signature— hence, our loss of the factor of qhash . We want to point out that qhash is a significant factor, and its loss definitely makes a reduction quite loose. This is because a reasonable bound on the number of possible hash queries of committed adversaries is about qhash = 280 (see Section 4.4). We therefore devise a new method of constructing signature schemes from ID schemes so that any one signature from the forger is enough to break the underlying hard problem. 4.2
Method
Recall that in Fiat-Shamir-like signature schemes, the signer comes up with the commitment and then uses H applied to the commitment and the message to produce the challenge. We propose that instead the signer first come up with the challenge and then use H applied to the challenge and the message to produce the commitment. In a way, we swap the challenge and the commitment. This method applies whenever the signer can compute the response given only the challenge and the commitment. It does not apply when information used during the generation of the commitment is necessary to compute the response. For example, it does not apply to discrete-logarithm-based ID schemes (such as the Schnorr scheme [Sch89]) in which the prover needs to know the discrete logarithm of the commitment in order to provide the response.
Additionally, in order to use this method, one needs to get around the problem that the commitment is selected from some structured set (such as Q in the case of E), while H returns a random binary string. This problem can usually be easily solved. The only case known to us when it seems to present a real obstacle is in the scheme of Ohta and Okamoto ([OO88]) in the case when an exponent L is used such that gcd(L, (p1 − 1)(p2 − 1)) > 2. The key-generation algorithm and the private key may need to be modified slightly in order to provide the signer with the additional information needed to compute the response from a random commitment, rather than from a commitment that it generated. The verification algorithm remains vastly unchanged. In the next section, we exemplify our proposed method and explain why it results a tighter security reduction. 4.3
4.3 E-swap
Description. The scheme depends on two security parameters: k and l. Let H : {0, 1}^* → {0, 1}^{l−1} be a random function.
Key Generation. The key generation is the same as in the E scheme, except for one additional step (step 6) and extra information in the private key:
1. Generate two random primes p1 ≡ 3 (mod 8) and p2 ≡ 7 (mod 8) and n = p1 p2 so that n < 2^l, n − p1 − p2 + 1 > 2^{l−1} + 1 and p1 + p2 − 1 < 2^{l/2+1}.
2. Generate the coefficient c = p2^{−1} mod p1 for use in the Chinese remainder theorem.
3. Compute u_i = ((p_i + 1)/4)^{k+1} mod (p_i − 1)/2 for i = 1, 2 (note that u_i is such that raising a square to the power u_i modulo p_i will compute its 2^{k+1}-st root).
4. Compute s_i = ((p_i + 1)/4)^{u_i} mod p_i for i = 1, 2.
5. Compute v = (s1 − s2)c mod p1 and s = s2 + v p2 to get s = (1/4)^{2^{−(k+1)}} mod n.
6. If u_i is odd, make it even by setting u_i = u_i + (p_i − 1)/2 for i = 1, 2 (note that now u_i is such that raising a square or its negative to the power u_i modulo p_i will compute its 2^{k+1}-st root).
7. Output n as the public key and (n, s, u1, u2, p1, p2) as the secret key.
Signing.
1. Generate a random σ and compute t = s^σ mod n (note that this step can be done off-line, before the message is known).
2. Compute X = H(σ, M). We will assume X ∈ Zn^* (i.e., (X, n) = 1), because the probability of X ∉ Zn^* is at most 2^{−l/2+2}. If the Jacobi symbol (X/n) = −1, set X = 2X mod n. Now either X or n − X is in Q. Compute z = F′σ^{−1}(±X) via x_i = X^{u_i} mod p_i for i = 1, 2, v = (x1 − x2)c mod p1, x = x2 + v p2 and z = x t mod n.
3. Output (z, σ).
Verifying.
1. Verify that z ≢ 0 (mod n) and compute X = Fσ(z^2) via t1 = z^{2^{k+1}} mod n, t2 = 2^σ mod n, X = t1 t2 mod n (this step is the same as for the E scheme).
2. Let X′ = H(σ, M). If X ≡ ±X′ (mod n) or X ≡ ±2X′ (mod n), accept the signature (this step differs slightly from the E scheme).
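To make the ordering concrete, the following is a minimal toy sketch of the swap idea in a Rabin-Williams-style setting (essentially a k = 0 analogue with a single squaring, not the full E-swap); sympy is assumed for primality testing, SHA-256 stands in for the random function H, and all sizes and names are illustrative:

import hashlib
import secrets
from sympy import isprime

def gen_prime(bits, residue):
    # Random prime p with p mod 8 == residue, as in key-generation step 1.
    while True:
        p = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if p % 8 == residue and isprime(p):
            return p

p1, p2 = gen_prime(256, 3), gen_prime(256, 7)
n = p1 * p2

def H(sigma, M):
    return int.from_bytes(hashlib.sha256(sigma + M).digest(), "big") % n

def is_qr(a, p):
    return pow(a, (p - 1) // 2, p) == 1     # Euler's criterion

def sign(M):
    sigma = secrets.token_bytes(16)         # the challenge is chosen *first*
    X = H(sigma, M)                         # the commitment is derived from it
    # With p1 = 3 (mod 8) and p2 = 7 (mod 8), exactly one of X, n-X, 2X, n-2X
    # is a square mod n (when gcd(X, n) = 1); take its square root via CRT.
    for c in (X, n - X, 2 * X % n, (n - 2 * X) % n):
        if is_qr(c % p1, p1) and is_qr(c % p2, p2):
            r1 = pow(c, (p1 + 1) // 4, p1)  # root mod p1 (p1 = 3 mod 4)
            r2 = pow(c, (p2 + 1) // 4, p2)  # root mod p2 (p2 = 3 mod 4)
            z = (r1 * p2 * pow(p2, -1, p1) + r2 * p1 * pow(p1, -1, p2)) % n
            return z, sigma
    raise ValueError("H(sigma, M) shares a factor with n (negligible)")

def verify(M, z, sigma):
    X = H(sigma, M)
    return pow(z, 2, n) in (X, n - X, 2 * X % n, (n - 2 * X) % n)

z, sigma = sign(b"hello")
assert verify(b"hello", z, sigma)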
Security of E-swap.
Theorem 3. If there exists a forger that (t, qsig, qhash, ε, δ)-breaks the E-swap scheme with security parameters l and k, then there exists an algorithm that (t0, ε0, δ)-factors integers generated by Gen for
t0 = t + 2(qsig + qhash + 1)T1 + T2
ε0 = ε(1 − γ) − (qhash + qsig + 1) 2^{−l/2+2},
where T1 is the time required to perform an E-swap signature verification, T2 is the time required to factor n given the conditions of Lemma 1 (essentially, a gcd computation) and γ = qsig(qhash + 1) 2^{−k} (note that γ is close to 0 for a large enough k).
Proof. Familiarity with the proof of Theorem 1 will be helpful for understanding this proof. Let F be a forger that (t, qsig, qhash, ε, δ)-breaks the E-swap signature scheme. Similarly to the proof of Theorem 1, we will construct an algorithm that, after interacting with F, will produce y, z ∈ Zn^* and σ ≠ τ ∈ {0, 1}^k such that Fσ(z^2) = Fτ(y^2). The main idea is to answer each hash query on (σ, M) with an X computed via X = Fτ(y^2) for a random y ∈ Zn^* and an arbitrary τ that is different from σ. Then if F forges a signature (z, σ) on M, we will have Fσ(z^2) = Fτ(y^2) and will be able to factor n. The complete details of the proof are available in the full version of this paper and are omitted here in the interests of space.
Theorem 4. If there exists a forger that (t, qsig, qhash, ε, δ)-breaks the E-swap scheme with security parameters l and k, such that ε > (qhash + qsig + 1) 2^{−l/2+2}/(1 − qsig(qhash + 1) 2^{−k}), then there exists an algorithm that (t0, ε0, δ)-factors integers generated by Gen for
t0 = (t + 2(qsig + qhash + 1)T1 + T2) / (ε(1 − γ) − (qhash + qsig + 1) 2^{−l/2+2})
ε0 = 1 − 1/e > 0.632,
where T1 and T2 are as in Theorem 3. Proof. Let
α = ε(1 − γ) − (qhash + qsig + 1) 2^{−l/2+2}.
By assumption, α > 0. So if we repeat the algorithm constructed in the proof of Theorem 3 up to 1/α times (except for the final gcd computation, which need only be done once), we will get the desired ε0 , similarly to the proof of Theorem 2.
Similarly to the E scheme, we have the following two corollaries.
Corollary 3. If factoring l-bit integers generated by Gen is (t0, ε0, δ)-secure, then the E-swap signature scheme is (t, qsig, qhash, ε, δ)-secure, where
qsig ≤ min(2^{k−2}/(qhash + 1), ε 2^{l/2−4} − qhash − 1)
t = t0 − 2(qsig + qhash + 1)T1 − T2
ε = 2ε0.
Proof. The condition on qsig ensures that ε(1 − γ) − (qhash + qsig + 1) 2^{−l/2+2} ≥ ε/2. The rest follows, similarly to the proof of Corollary 1, from solving the equations of Theorem 3 for t and ε.
Corollary 4. If factoring l-bit integers generated by Gen is (t0, 0.632, δ)-secure, then the E-swap signature scheme is (t, qsig, qhash, ε, δ)-secure for
qsig ≤ min(2^{k−2}/(qhash + 1), ε 2^{l/2−4} − qhash − 1)
t = ((t0 − T2)ε − 2(qsig + qhash + 1)T1)/2.
Proof. The condition on qsig ensures that ε(1 − γ) − (qhash + qsig + 1) 2^{−l/2+2} ≥ ε/2. The rest follows, similarly to the proof of Corollary 2, from solving the equations of Theorem 4 for t and ε.
4.4 Parameter Choice
The formulas in Corollaries 1–4 are quite different. Nonetheless, it is immediately clear that E-swap loses no factor of qhash, neither in time nor in probability. This is a big advantage for E-swap because qhash can be quite big. A fuller comparison, provided in the next section, depends on the actual values of the parameters qsig, qhash, k and l. Let us deal here, however, with the preliminary problem of assigning reasonable values to these parameters. We believe it reasonable to set qsig = 2^{30} and qhash = 2^{80} − 1. This is so because signature queries have to be answered by the honest signer (who may not be willing or able to sign more than a billion messages), while hash queries can be computed by the adversary alone (who may be willing to invest extraordinary resources). Notice that we recommend a higher value for qhash than suggested in [BR96]. We recommend setting k = 100 for the E scheme and k = 112 for E-swap. For the E scheme, this is so because, from Corollaries 1 and 2, we see that 2^{−k}(qhash + 1) has to be small (the value of 2^{−k}(qhash + 1) is essentially the success probability of the simple attack that relies on correctly guessing one hash value among qhash + 1 hash queries). Therefore, we need 2^{−k+80} to be small, and by setting k = 100 we make it less than 10^{−6}. For E-swap, this is so because 2^{k−2} has to be at least qsig(qhash + 1) = 2^{110} from Corollaries 3 and 4.
As for l, notice that both E and E-swap are immediately broken if the adversary succeeds in factoring the l-bit modulus. Therefore, l ought to be at least 1000. Given the above choices for the other parameters, such a minimum value for l is large enough to satisfy all the constraints involving l in Corollaries 1–4 (for any reasonable ε in the case of Corollaries 3 and 4). Thus, the value of l depends on the presumed security of factoring, as discussed in the next section.
5 The Case for Exact Security-Cost Analysis
5.1 The Costs of Security
The desired level of security is usually dictated by the specific application. It is after settling on the desired amount of security that choosing among the various secure schemes becomes crucial. Indeed, when choosing a signature scheme, the goal is to maintain the desired level of security at the lowest possible cost. In a sense, picking a signature scheme is similar to shopping for an insurance policy of the desired face value. The costs of a signature scheme, however, are quite varied. They may include the sizes of keys and signatures, the efficiencies of generating keys, signing and verifying, the amounts of code required, and even "external" considerations, such as the availability of inexpensive implementations or off-the-shelf hardware components. In this paper, we focus on the efficiencies of signing and verifying. These are particularly important when signing or verifying is performed by a low-power device, such as a smart card, or when signing or verifying needs to be performed in bulk quantities, as on a secure server. It is for these costs, then, that below we compare the E and E-swap schemes. We also provide a comparison of the E scheme with the PRab scheme from [BR96], arguably the most practical among those tightly related to factoring. (The reason for choosing PRab rather than its PSS variant is that the latter is tightly related to RSA, and thus potentially less secure than factoring.)
5.2 Comparison of E and E-swap
The efficiency of signature verification in E is about the same as in E-swap. The security of E-swap is generally higher than the security of E for the same security parameters. Therefore, if the efficiency of verifying is the only significant component in the cost, E-swap will be able to provide the same amount of security for less cost than E. A more difficult case to analyze is the case when the efficiency of signing is of main concern. We will limit our analysis to the case when we are only concerned with the on-line part of signing. In both cases, this involves mainly a modular exponentiation. Therefore, a variety of sophisticated algebraic methods can be used here, but these methods apply equally to E and E-swap. We thus find it simpler to compare the two under “standard” implementations using the Chinese
remainder theorem (CRT). Then the total amount of time required for on-line signing in the E scheme is about 3kl^2/4, and the total required for on-line signing in E-swap is about 3l^3/8, not counting the (relatively small) cost of computing the Jacobi symbol. (In sum, on-line signing is l/(2k) times faster for E than for E-swap if one uses the same value of l for both.^1) Let us now see how the security of the two schemes compares assuming the on-line signing costs are the same. Let lE and kE be the security parameters for E, and lES and kES be the security parameters for E-swap. The on-line signing costs for E and E-swap are the same if
lES = (2 kE lE^2)^{1/3}.    (1)
The best known factoring algorithms take time about
T(l) = C exp((64/9)^{1/3} · l^{1/3} · (ln l)^{2/3})
for some constant C [LL93]. Therefore, we will assume that factoring l-bit integers generated by Gen is (C′ T(l), 0.199, δ)-secure for some δ and some constant C′. Using the formulas given by Corollaries 2 and 4 and the values for qsig, qhash, kE and kES as given in Section 4.4, we can now find out when the E scheme becomes more secure than E-swap if we keep the signing costs equal. The details of further algebraic manipulations are omitted here and given in the full paper. The result is that at lE = 6109, lES = 1954, kE = 100, kES = 112, E and E-swap provide about the same security and the same performance for on-line signing. Beyond this point, the gap in security for the same performance increases exponentially in favor of E. Thus, the signing algorithm of the E scheme is so fast that provable security and signing efficiency are the same when E uses 6109-bit moduli and E-swap 1954-bit moduli. In both cases, the security is that of factoring a 1954-bit integer generated by Gen. (The E scheme may actually be even more secure, but we cannot prove it!) It just so happens that this computed level of security is currently considered adequate for many applications. (Therefore, for these applications E-swap is preferable: E-swap has faster verification for the same level of security, as well as shorter keys and, therefore, shorter signatures.) However, whenever the application calls for a higher level of security, and the dominant cost is that of signing, then the "loosely"-secure E becomes preferable because the security gap between E and E-swap, given the same performance, increases exponentially.
1 Moreover, an optimization available to E but not to E-swap is precomputing some powers of the fixed base; this requires additional memory, so we will assume it is not implemented for the purposes of this analysis.
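As a quick sanity check of equation (1) at the quoted cross-over point (the numbers lE = 6109 and kE = 100 are taken from the text above):

# Equal on-line signing cost: l_ES = (2 * k_E * l_E^2)^(1/3), equation (1).
k_E, l_E = 100, 6109
l_ES = (2 * k_E * l_E ** 2) ** (1 / 3)
print(round(l_ES))  # -> 1954, matching the cross-over point reported above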
5.3 Comparison of the E Scheme with Bellare-Rogaway's PRab
The security of PRab is tightly related to that of modular square roots, rather than factoring. A factor of 2 in probability is lost (as compared to E-swap) when one relates the security of PRab to that of factoring. PRab's performance for on-line signing is about the same as E-swap's (PRab requires a few more Jacobi symbol computations, but no separate modular multiplication).^2 A very similar analysis leads to the following conclusion: provable security and signing efficiency are the same when E uses 5989-bit moduli, and PRab 1929-bit moduli. Also here this is a "cross-over" point: the gap in security for the same performance increases exponentially in favor of the E scheme. As we can see, this cross-over point is just slightly more in favor of E than the cross-over point of E and E-swap. This is because of the factor of 2 difference in the security of E-swap and PRab.
2 PRab has no off-line component in signing and has more efficient verification.
Acknowledgments We would like to thank Salil Vadhan for pointing out an error in an earlier version of this work and Mihir Bellare for suggesting an improvement in the security analysis of the E scheme using an idea from [BM99].
References
BM99. Mihir Bellare and Sara Miner. A forward-secure digital signature scheme. In Michael Wiener, editor, Advances in Cryptology—CRYPTO '99, volume 1666 of Lecture Notes in Computer Science. Springer-Verlag, 15–19 August 1999. Revised version is available from http://www.cs.ucsd.edu/~mihir/.
BR93. Mihir Bellare and Phillip Rogaway. Random oracles are practical: A paradigm for designing efficient protocols. In Proceedings of the 1st ACM Conference on Computer and Communication Security, November 1993. Revised version appears at http://www-cse.ucsd.edu/users/mihir/papers/crypto-papers.html.
BR96. Mihir Bellare and Phillip Rogaway. The exact security of digital signatures: How to sign with RSA and Rabin. In Maurer [Mau96], pages 399–416. Revised version appears at http://www-cse.ucsd.edu/users/mihir/papers/crypto-papers.html.
Dam90. I. B. Damgård, editor. Advances in Cryptology—EUROCRYPT 90, volume 473 of Lecture Notes in Computer Science. Springer-Verlag, 1991, 21–24 May 1990.
FFS88. Uriel Feige, Amos Fiat, and Adi Shamir. Zero-knowledge proofs of identity. Journal of Cryptology, 1(2):77–94, 1988.
FS86. Amos Fiat and Adi Shamir. How to prove yourself: Practical solutions to identification and signature problems. In Odlyzko [Odl86], pages 186–194.
GMR88. Shafi Goldwasser, Silvio Micali, and Ronald L. Rivest. A digital signature scheme secure against adaptive chosen-message attacks. SIAM Journal on Computing, 17(2):281–308, April 1988.
Gol86. Oded Goldreich. Two remarks concerning the Goldwasser-Micali-Rivest signature scheme. In Odlyzko [Odl86], pages 104–110.
Gol88. S. Goldwasser, editor. Advances in Cryptology—CRYPTO '88, volume 403 of Lecture Notes in Computer Science. Springer-Verlag, 1990, 21–25 August 1988.
GQ88. Louis Claude Guillou and Jean-Jacques Quisquater. A "paradoxical" identity-based signature scheme resulting from zero-knowledge. In Goldwasser [Gol88], pages 216–231.
LL93. A. Lenstra and H. Lenstra, editors. The Development of the Number Field Sieve, volume 1554 of Lecture Notes in Mathematics. Springer-Verlag, 1993.
Mau96. Ueli Maurer, editor. Advances in Cryptology—EUROCRYPT 96, volume 1070 of Lecture Notes in Computer Science. Springer-Verlag, 12–16 May 1996.
Mic94. Silvio Micali. A secure and efficient digital signature algorithm. Technical Report MIT/LCS/TM-501, Massachusetts Institute of Technology, Cambridge, MA, March 1994.
MS88. Silvio Micali and Adi Shamir. An improvement of the Fiat-Shamir identification and signature scheme. In Goldwasser [Gol88], pages 244–247.
Odl86. A. M. Odlyzko, editor. Advances in Cryptology—CRYPTO '86, volume 263 of Lecture Notes in Computer Science. Springer-Verlag, 1987, 11–15 August 1986.
Oka92. Tatsuaki Okamoto. Provably secure and practical identification schemes and corresponding signature schemes. In Ernest F. Brickell, editor, Advances in Cryptology—CRYPTO '92, volume 740 of Lecture Notes in Computer Science, pages 31–53. Springer-Verlag, 1993, 16–20 August 1992.
OO88. Kazuo Ohta and Tatsuaki Okamoto. A modification of the Fiat-Shamir scheme. In Goldwasser [Gol88], pages 232–243.
OO98. Kazuo Ohta and Tatsuaki Okamoto. On concrete security treatment of signatures derived from identification. In Hugo Krawczyk, editor, Advances in Cryptology—CRYPTO '98, volume 1462 of Lecture Notes in Computer Science, pages 354–369. Springer-Verlag, 23–27 August 1998.
OS90. H. Ong and C. P. Schnorr. Fast signature generation with a Fiat-Shamir-like scheme. In Damgård [Dam90], pages 432–440.
PS96. David Pointcheval and Jacques Stern. Security proofs for signature schemes. In Maurer [Mau96], pages 387–398.
QV89. J.-J. Quisquater and J. Vandewalle, editors. Advances in Cryptology—EUROCRYPT 89, volume 434 of Lecture Notes in Computer Science. Springer-Verlag, 1990, 10–13 April 1989.
Sch89. C. P. Schnorr. Efficient identification and signatures for smart cards. In Quisquater and Vandewalle [QV89], pages 688–689.
Sch96. C. P. Schnorr. Security of 2^t-root identification and signatures. In Neal Koblitz, editor, Advances in Cryptology—CRYPTO '96, volume 1109 of Lecture Notes in Computer Science, pages 143–156. Springer-Verlag, 18–22 August 1996.
Sho96. Victor Shoup. On the security of a practical identification scheme. In Maurer [Mau96], pages 344–353.
Wil80. Hugh C. Williams. A modification of the RSA public-key encryption procedure. IEEE Transactions on Information Theory, IT-26(6):726–729, November 1980.
On Privacy Issues of Internet Access Services via Proxy Servers
Yuen-Yan Chan
Department of Information Engineering, Chinese University of Hong Kong, Shatin, Hong Kong
[email protected]
Abstract. When you access the Internet via an Internet services provider (ISP), information such as when and for how long you access the modem pool and which objects you request is logged in the proxy servers. Such data enables one's usage habits to be traced and analyzed. This is referred to as 'clicktrails' data collection and is a threat to user privacy. In this paper, existing legal and technical solutions for protecting user privacy in Internet service provision are discussed. We also propose a cryptographic solution that allows anonymous Internet connection via an ISP proxy, while the user's identity is revealed when he/she misbehaves. In this way, users can access the Internet anonymously while the anonymity cannot be abused.
1 Introduction
The rapid development of the Internet has revolutionized our daily life. Information is abundantly and easily accessible to everyone who has a connection to the worldwide information superhighway. Internet applications like electronic commerce, electronic messaging (e.g., e-mail) and the World Wide Web provide great convenience to modern society and are transforming commerce, education, the provision of government services and almost every other aspect of modern life. However, the privacy issues accompanying these innovations cannot be neglected. The potential interception or misuse of personal data collected in the provision of Internet services is a threat to user privacy. Moreover, the rapid development of data mining technologies makes this threat even more severe. This has led to calls for anonymous Internet access. We believe that anonymity should be provided conditionally. In the later part of this paper, we will propose a cryptographic solution that achieves conditional anonymous Internet connections, developed based on the electronic cash protocol introduced in [2]. In our protocol, user anonymity is maintained so long as the user does not misbehave. In this way, user privacy is protected while anonymity is not abused.
2 Privacy Issues of Internet Access Services
Internet services providers can learn much about their customers, as all information that passes to an Internet user must first pass through the proxies residing on their servers. Although encryption techniques such as SSL are used so that third parties cannot interpret the contents being transmitted, the ISP can still determine which web sites or even which articles a particular user has visited. This is because every Internet object request originated by a user is logged in the ISP's proxy cache. This is referred to as 'clicktrails' data collection. Collecting and analyzing the clicktrails data can reveal much information about a person.
2.1 Present Privacy Laws and Policies
The collection of personal data is unavoidable on many occasions (e.g., opening a bank account). Some existing privacy ordinances, such as [1], allow service providers to collect a user's private information for the purpose intended, but prevent them from changing the usage of such data. In the case of Internet service provision, at present, an ISP has the right to collect clicktrails data and hold log files of user Internet usage for the purposes of system maintenance and troubleshooting. Individual ISPs also have their own privacy policies; however, the standards diverge and user privacy is sometimes not properly protected.
2.2 Present Anonymous Internet Services Solutions
Several anonymous Internet services technologies have been developed. They aim at hiding the user's identity from the remote sites. For example, anonymous web servers such as [3] fetch the requested objects on behalf of the users, so that the remote host receives a request apparently originating from the server. There are also technologies such as Onion Routing [4] that provide anonymous connections in which routing information is hidden.
2.3 Conditional Anonymous Internet Access Services
In order to prevent the abuse of the data in log files, users should remain anonymous to the ISP during Internet object requests. This can be achieved by using cryptographic methods. In Section 3 of this paper, we will present a conditional anonymous Internet access protocol that has the following features:
– The user is anonymous to the ISP during Internet access.
– The ISP has no way to relate a requested object to its requester, even if the object is fetched via the ISP's proxy.
– The anonymity is conditional: the user's identity is revealed when any misbehavior is detected.
– The protocol is transparent to other applications, and it is interoperable among different ISPs.
Here we assume the employment of caller-ID blocking, so that the ISP cannot trace the user's identity from the phone number.
3 The Protocol
We developed our protocol motivated by the E-Cash protocol proposed in [2]. In our system, the user logs in with a pass
(x, f(x)^{1/3} (mod n)),
in which the first term is the pseudonym of the user and the second term is the ISP's public-key signature [5]. Here n is a published composite whose factors only the ISP knows, f(.) is a one-way function known to both the user and the ISP, and x is the pseudonym chosen by the user. We first introduce a simpler version of the protocol, in which anonymous Internet connection without user identity recovery is achieved. In a later section, we will discuss how the protocol is modified so that the user's identity can be revealed in case of misbehavior.
3.1 ISP Issues a New Pass to Alice Using the Blind Signature Scheme [6]
To open a new account, Alice generates x and r, where x is a pseudonym which she will use to access the Internet services and r is a blinding factor. These numbers are known only to Alice. She presents the following token to the ISP:
T = r^3 · f(x) (mod n).
Upon receiving T and having verified Alice's identity, the ISP signs T by calculating the third root of T modulo n and returns T^{1/3} (mod n), i.e., r · f(x)^{1/3} (mod n), to Alice. It is assumed that only the server has the knowledge to compute third roots modulo n [5]. Alice then extracts f(x)^{1/3} (mod n) from the returned token by dividing T^{1/3} (mod n) by r, and forms her pass:
pass = (x, f(x)^{1/3} (mod n)).
This pass is saved on Alice's side. She logs in with this anonymous pass instead of her login name from now on. The server has no way to relate Alice to her pseudonym x, because it cannot see the value of x when it signs T.
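The following minimal sketch illustrates this issuance flow with RSA-style parameters (public exponent 3) and SHA-256 standing in for f; the key size, hash choice and variable names are illustrative assumptions, and "dividing by r" is realized as multiplication by r^{-1} mod n:

import hashlib
import secrets
from math import gcd
from sympy import randprime

# ISP key setup: published n; secret d with (m^d)^3 = m (mod n).
while True:
    p, q = randprime(2**255, 2**256), randprime(2**255, 2**256)
    phi = (p - 1) * (q - 1)
    if gcd(3, phi) == 1:
        break
n = p * q
d = pow(3, -1, phi)

def f(x):  # one-way function known to both the user and the ISP
    return int.from_bytes(hashlib.sha256(x).digest(), "big") % n

# Alice: blind the hashed pseudonym with r^3.
x = secrets.token_bytes(16)          # pseudonym
r = secrets.randbelow(n - 2) + 2     # blinding factor
T = pow(r, 3, n) * f(x) % n

# ISP: sign blindly by extracting a cube root of T.
signed_T = pow(T, d, n)              # = r * f(x)^(1/3)  (mod n)

# Alice: unblind and form the pass.
sig = signed_T * pow(r, -1, n) % n
pass_ = (x, sig)

# The pass verifies against the ISP's public operation (cubing).
assert pow(sig, 3, n) == f(x)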
3.2 Account Operations
An account for x is created, where x is the pseudonym of Alice. When Alice logs in, she presents pass instead of her own username and password. Authentication is done by verifying the value of f(x)^{1/3} (mod n) from the pass. All other account operations are similar to those in the existing system. Since the server has no knowledge of the identity behind x, anonymous Internet service provision is achieved.
4 Modified Version with Key Escrow on User Identity
In Section 3 we presented the simpler version of our protocol. In this section, we modify it so that the following desirable additional properties can be achieved:
– Alice is the only legitimate user of the pass.
– Alice's identity will be revealed when necessary.
The modification is based on the double-spending prevention solution presented in [2]. The latter property enables identity revocation in critical situations. In the modified version of the protocol, a trusted third party (TTP) is involved, and a valid pass has the following format:
(pseudonym, {pseudonym}ISP sign, {pseudonym}TTP sign).
Here we make the same assumption that only the ISP has the knowledge to compute third roots modulo n.
4.1 Getting a New Pass
Let f and g be two-argument collision-free functions as described in [2], and let u be a unique identifying number of Alice (e.g., the account number). Instead of producing a single blinding factor r as in the previous section, four independent sets of random numbers a, c, d and r, each consisting of k elements, are generated. In order to obtain the blind signature from the ISP, Alice forms and sends k values T_i in the following manner:
T_i = r_i^3 · f(x_i, y_i) (mod n),
where i = 1, ..., k, x_i = g(a_i, c_i) and y_i = g(a_i XOR u, d_i). Notice that at this stage the ISP knows Alice's identity u. In order to verify the T_i's presented by Alice, the ISP undergoes the following steps:
1. It randomly chooses a set of k/2 integers, R = {i_j}, where 1 ≤ i_j ≤ k and 1 ≤ j ≤ k/2.
2. It asks Alice to show the values of r_i, a_i, c_i and d_i for every i in R.
3. It checks whether each of the k/2 corresponding T_i's can be derived from these r_i, a_i, c_i, d_i and u.
After that, the ISP gives Alice
∏_{i∉R} T_i^{1/3} (mod n),
and Alice can easily extract the following component:
∏_{i∉R} f(x_i, y_i)^{1/3} (mod n),
which corresponds to the ISP's blind signature on the pseudonym
p = ∏_{i∉R} f(x_i, y_i) (mod n).
Notice that the ISP has no way to relate u to p, because it cannot see x_i and y_i for i ∉ R. Alice also needs to get the signature from the TTP. Before signing p, the TTP verifies the validity of p and writes part of the information about Alice's identity into its database. Notice that Alice is anonymous to the TTP, and the information obtained by the TTP is not enough for it to compute Alice's identity. To perform this task, the same set of k T_i's is presented to the TTP. The TTP then performs the following procedures:
1. It asks Alice to give the values of x_i, (a_i XOR u), and d_i for every i.
2. It verifies whether the corresponding T_i's can be derived from the presented values.
3. If the verification succeeds, the TTP stores the values (a_i XOR u) along with p in its database.
The TTP then verifies and signs the pseudonym p:
1. For every i ∈ R′, where R′ = {i ∈ Z : i ∉ R, 1 ≤ i ≤ k}, it checks whether the values of x_i, (a_i XOR u), and d_i match the corresponding T_i's involved in p.
2. It signs the pseudonym using a standard public-key signature scheme.
Upon receiving the TTP's signature, Alice can then form the pass:
(pseudonym, {pseudonym}ISP sign, {pseudonym}TTP sign).
In the pass verification process just mentioned, the cryptographic method of zero-knowledge proof is employed: it enables one to prove his/her identity to the other party without revealing the identity. More details can be found in [7].
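As an illustration, here is a small self-contained sketch of the cut-and-choose step of issuance (SHA-256 models the two-argument functions f and g, the modulus is freshly generated with sympy, and all encodings and sizes are assumptions):

import hashlib
import secrets
from sympy import randprime

n = randprime(2**255, 2**256) * randprime(2**255, 2**256)
k = 16
u = 12345678                         # Alice's identifying number

def h2(tag, a, b):                   # generic two-argument hash
    m = hashlib.sha256(tag + a.to_bytes(32, "big") + b.to_bytes(32, "big"))
    return int.from_bytes(m.digest(), "big")

def g(a, b): return h2(b"g", a, b)
def f(a, b): return h2(b"f", a, b) % n

a = [secrets.randbits(64) for _ in range(k)]
c = [secrets.randbits(64) for _ in range(k)]
d = [secrets.randbits(64) for _ in range(k)]
r = [secrets.randbelow(n - 2) + 2 for _ in range(k)]

x = [g(a[i], c[i]) for i in range(k)]
y = [g(a[i] ^ u, d[i]) for i in range(k)]
T = [pow(r[i], 3, n) * f(x[i], y[i]) % n for i in range(k)]

# ISP: open a random half R and re-derive each opened T_i from u.
R = set(secrets.SystemRandom().sample(range(k), k // 2))
for i in R:
    xi, yi = g(a[i], c[i]), g(a[i] ^ u, d[i])
    assert T[i] == pow(r[i], 3, n) * f(xi, yi) % n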
4.2 Account Operations
Account operations can be performed as usual. However, there are some differences in pass verification. The verification of a pass includes three procedures, namely the verification of the ISP signature, the verification of the TTP signature, and the process of ensuring that Alice (who is anonymous to the ISP) is indeed a valid holder of the pass. The first two can be performed directly using public-key signature verification schemes, while the last is done as follows:
1. The ISP generates a random binary vector Z = (z_1, z_2, ..., z_{k/2}), where the element z_i corresponds to the ith number in the set R′.
2. Alice responds according to the following rule:
– when z_i = 1, Alice sends the ISP a_i, c_i, and y_i;
– when z_i = 0, Alice sends the ISP x_i and y_i.
3. From the received values, the ISP checks whether the corresponding T_i's satisfy the pass.
4.3 Identity Revocation
In this protocol, a user remains anonymous so long as he/she does not misbehave. In this way, a limited anonymity is provided so that user privacy cannot be abused. This is done by the cryptographic method of secret splitting, in which a piece of secret is divided among two or more parties and each party alone has no knowledge about the secret [8]. When misbehavior of the holder of p is detected, or in case of any appeals, the court asks the ISP to present the pass p along with the values a_i for i ∈ R′, which it obtained during the pass verification process. The court also gathers the corresponding values (a_i XOR u) for i ∈ S, where S = R ∪ R′, from the TTP. Notice that S ∩ R′ ≠ ∅; let e be an element of S ∩ R′. The user identity u is revealed as:
u = a_e XOR (a_e XOR u).
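A tiny sketch of the underlying XOR secret splitting (all values are illustrative):

import secrets

u = 0x1234ABCD                 # user's identifying number
a_e = secrets.randbits(32)     # share held by the ISP after the challenge
ttp_share = a_e ^ u            # share stored by the TTP at issuance

# Either share alone is uniformly random; together they reveal u.
assert a_e ^ ttp_share == u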
5 Security Analysis
This section analyzes the strength of our protocol against different potential threats.
5.1 Anonymity
Alice remains anonymous to the ISP. This is because during the pass issuing stage Alice prepares k candidate T_i's, and the ISP randomly challenges only k/2 of them. The other k/2 values, which are used to form the pass, are never seen by the ISP. During the authentication process, the ISP randomly challenges only the k/2 values it has not seen. Therefore the ISP cannot relate the identity of Alice to the pass that she possesses.
5.2 Masquerade
During the pass issuing stage, Alice is not anonymous, and the ISP should verify Alice's identity before signing the pass. This can be done by employing a digital certificate scheme, in which one's identity is proved by a recognized digital certificate. In this way, the identity of the pass receiver can be ensured.
5.3 Alice Cheats
Since the ISP views only half of the k candidates of r_i, a_i, c_i and d_i (those with i ∈ R), Alice may have a chance to cheat in the pass issuing process. She does this by not using a valid u in the calculation of those k/2 T_i's which are not viewed by the ISP. However, her chance of successful cheating decreases exponentially with the value of k. For example, when k equals 16, the chance of the ISP choosing none of Alice's cheated T_i's is 1/2^8 = 0.0039. When k increases to 32, the chance further decreases to 1/2^16 ≈ 1.526 × 10^{−5}.
5.4 Stolen Pass
Suppose Alice’s pass is stolen by Carol during the pass issuing stage, this will not bring any lost to Alice because Carol does not know the secrets that Alice is holding. That is, Carol does not know ai , ci and yi for every i ∈ R0 which involve in the random challenge process during future logins. Therefore cannot use the pass. Suppose Carol steals the pass at later stages so that she also steals the numbers ai , ci and yi for some i ∈ R0 . In this case, however, she cannot use the pass until she obtains ai , ci and yi for every i ∈ R0 . This is because different elements in a, c and y are challenged randomly each time so Carol can only obtain some of them each time. When Carol has waited long enough to collect ai , ci and yi for every i ∈ R0 , Alice may has already changed her pass.
6 Efficiency
Compared with a non-anonymous Internet access scheme, our secure anonymous Internet access protocol may require more computational power. In this section we analyze the computational effort involved.
6.1 Random Number Generation
During the initial stage, where the pass is issued, a total of 4k random integers need to be generated, where k is large enough to prevent cheating by any potential party, as explained in Section 5.3. For example, if k = 32, then 128 random numbers are generated. The bit length of these random numbers is arbitrary. For example, when 32-bit binary numbers are used, the number of possible values of
r_i, a_i, c_i, d_i equals 2^{32} = 4294967296. When 64-bit binary numbers are used instead, the number of possible values increases to 2^{64} = 18446744073709551616. The longer the bit length, the more secure but slower the system is.
6.2 Signing on the Pass
Blind signature by the ISP. When the ISP makes a blind signature on Alice's pass, it needs to verify the pass using the cut-and-choose method. This involves the verification of the k/2 presented T_i's, and each verification involves 3 hashes. The ISP also needs to verify Alice's identity, which involves one public-key certificate verification. Therefore the ISP undergoes 3k/2 hashes, one certificate verification and one public-key signature when it signs a pass.
Signature by the TTP. For the TTP, two procedures are involved in the signing process. First, it verifies x_i, (a_i XOR u), and d_i for every i; this involves 2k hashes. Second, it checks whether the k/2 T_i's involved in the pass are valid; this requires another k hashes. Therefore the TTP undergoes 3k hashes and one public-key signature when it signs a pass.
6.3 Pass Validation
When a user logs in, the ISP performs the random challenge and checks whether the user is a legitimate holder of the pass. This involves the generation of a k/2-bit random binary vector and 2z hashes, where z is the number of 1's in the vector and z ≤ k/2.
6.4 Identity Recovery
When a user misbehaves and his/her identity is to be revealed, this simply requires one database search and one XOR computation. To conclude, most operations in our protocol are hashes, which are light in terms of computational power [9].
7 Conclusion
In this paper we have pointed out the privacy problems involved in Internet access. We have also proposed a cryptographic solution to the problem, motivated by electronic cash protocols. Our protocol supports anonymous user login to a proxy server so that the Internet usage habits of a user cannot be traced and analyzed. However, the user cannot abuse his/her anonymity, because our protocol enables a misbehaving user's identity to be revealed. This is achieved by a key escrow method in which a trusted third party keeps half of the secret about the user's identity. In addition, our protocol resides in the application layer and does not require changes in other layers during implementation. With suitable legislation, the privacy of Internet access users can be properly protected.
8 Acknowledgement
The author would like to thank Professor Victor K. W. Wei for his supervision of this research.
References
1. Hong Kong SAR of the People's Republic of China: Personal Data (Privacy) Ordinance, version date 20 Dec 1996
2. David Chaum, Amos Fiat, Moni Naor: Untraceable Electronic Cash. Advances in Cryptology - Proceedings of CRYPTO '88 (1988)
3. The Anonymizer. [web page] (1999). http://www.anonymizer.com/3.0/index.shtml. [Accessed 6 Sept 1999]
4. Reed, M.G., Syverson, P.F., Goldschlag, D.M.: Proxies for Anonymous Routing. Proceedings, 12th Annual Computer Security Applications Conference (1996) 95-104
5. Rivest, R.L., Shamir, A., Adleman, L.: A Method for Obtaining Digital Signatures and Public-Key Cryptosystems. Communications of the ACM, v. 21, n. 2 (1978) 120-126
6. David Chaum: Blind Signatures for Untraceable Payments. Advances in Cryptology - Proceedings of CRYPTO '82, Plenum Press (1983) 199-203
7. Bruce Schneier: Applied Cryptography, 2nd Edition (1996) 102-104
8. H. Feistel: Cryptographic Coding for Data-Bank Privacy, RC 2827, Yorktown Heights, NY. IBM Research (1970)
9. D. O'Mahony, M. Peirce, H. Tewari: Electronic Payment Systems (1997) 213
Cryptanalysis of Microsoft's PPTP Authentication Extensions (MS-CHAPv2)
Bruce Schneier (Counterpane Systems, [email protected]), Mudge (L0pht Heavy Industries, [email protected]), and David Wagner (UC Berkeley, [email protected])
Abstract. The Point-to-Point Tunneling Protocol (PPTP) is used to secure PPP connections over TCP/IP links. In response to [SM98], Microsoft released extensions to the PPTP authentication mechanism (MS-CHAP), called MS-CHAPv2. We present an overview of the changes in the authentication and encryption-key generation portions of MS-CHAPv2, and assess the improvements and remaining weaknesses in Microsoft's PPTP implementation.
1 Introduction
The Point-to-Point Tunneling Protocol (PPTP) [HP+97] is a protocol that allows Point-to-Point Protocol (PPP) connections [Sim94] to be tunneled through an IP network, creating a Virtual Private Network (VPN). Microsoft has implemented its own algorithms and protocols to support PPTP. This implementation of PPTP, called Microsoft PPTP, is used extensively in commercial VPN products precisely because it is already a part of the Microsoft Windows 95, 98, and NT operating systems. The authentication protocol in Microsoft PPTP is the Microsoft Challenge / Reply Handshake Protocol (MS-CHAP) [ZC98]; the encryption protocol is Microsoft Point to Point Encryption (MPPE) [PZ98]. After Microsoft's PPTP was cryptanalyzed [SM98] and significant weaknesses were publicized, Microsoft upgraded their protocols [Zor98a,Zor98b,Zor99]. The new version is called MS-CHAP version 2 (MS-CHAPv2); the older version has been renamed MS-CHAP version 1 (MS-CHAPv1). MS-CHAPv2 is available as an upgrade for Microsoft Windows 95, Windows 98, and Windows NT 4.0 (DUN 1.3) [Mic98a,Mic98b]. Even though this upgrade is available, we believe that most implementations of PPTP use MS-CHAPv1. This paper examines MS-CHAPv2 and discusses how well it addresses the security weaknesses outlined in [SM98].
The most significant changes from MS-CHAPv1 to MS-CHAPv2 are: – The weaker LAN Manager hash is no longer sent along with the stronger Windows NT hash. This is to prevent automatic password crackers like L0phtcrack [L99] from first breaking the weaker LAN Manager hash and then using that information to break the stronger NT hash [L97]. – An authentication scheme for the server has been introduced. This is to prevent malicious servers from masquerading as legitimate servers. – The change password packets from MS-CHAPv1 have been replaced by a single change password packet in MS-CHAPv2. This is to prevent the active attack of spoofing MS-CHAP failure packets. – MPPE uses unique keys in each direction. This is to prevent the trivial cryptanalytic attack of XORing the text stream in each direction to remove the effects of the encryption [SM98]. These changes do correct the major security weaknesses of the original protocol: the inclusion of the LAN Manager hash function and the use of the same OFB encryption key multiple times. However, many security problems are still unaddressed: e.g., how the client protects itself, the fact that the encryption key has the same entropy as the user’s password, and the fact that enough data is passed on the wire to allow attackers to mount crypt-and-compare attacks. This being said, Microsoft obviously took this opportunity to not only fix some of the major cryptographic weaknesses in their implementation of PPTP, but also to improve the quality of their code. The new version is much more robust against denial-of-service style attacks and no longer leaks information regarding the number of active VPN sessions.
2 MS-CHAP, Versions 1 and 2
The MS-CHAPv1 challenge/response mechanism was described in [SM98]. It consists of the following steps:
1. Client requests a login challenge from the Server.
2. The Server sends back an 8-byte random challenge.
3. The Client uses the LAN Manager hash of its password to derive three DES keys. Each of these keys is used to encrypt the challenge. All three encrypted blocks are concatenated into a 24-byte reply. The Client creates a second 24-byte reply using the Windows NT hash and the same procedure.
4. The Server uses the hashes of the Client's password, stored in a database, to decrypt the replies. If the decrypted blocks match the challenge, the authentication completes and the Server sends a "success" packet back to the Client.
This exchange has been modified in MS-CHAPv2. The following is the revised protocol:
1. Client requests a login challenge from the Server.
2. The Server sends back a 16-byte random challenge.
3a. The Client generates a random 16-byte number, called the "Peer Authenticator Challenge."
3b. The Client generates an 8-byte challenge by hashing the 16-byte challenge received in step (2), the 16-byte Peer Authenticator Challenge generated in step (3a), and the Client's username. (See Section 3 for details.)
3c. The Client creates a 24-byte reply, using the Windows NT hash function and the 8-byte challenge generated in step (3b). This process is identical to MS-CHAPv1.
3d. The Client sends the Server the results of steps (3a) and (3c).
4a. The Server uses the hashes of the Client's password, stored in a database, to decrypt the replies. If the decrypted blocks match the challenge, the Client is authenticated.
4b. The Server uses the 16-byte Peer Authenticator Challenge from the Client, as well as the Client's hashed password, to create a 20-byte "Authenticator Response." (See Section 5 for details.)
5. The Client also computes the Authenticator Response. If the computed response matches the received response, the Server is authenticated.
A general description of the changes between MS-CHAPv1 and MS-CHAPv2 is given in Figure 1.

MS-CHAP Version 1:
– Negotiates CHAP with an algorithm value of 0x80.
– Server sends an 8-byte challenge value.
– Client sends 24-byte LANMAN and 24-byte NT response to the 8-byte challenge.
– Server sends a response stating SUCCESS or FAILURE.
– Client decides to continue or end based upon the SUCCESS or FAILURE response above.
MS-CHAP Version 2:
– Negotiates CHAP with an algorithm value of 0x81.
– Server sends a 16-byte value to be used by the client in creating an 8-byte challenge value.
– Client sends a 16-byte peer challenge that was used in creating the hidden 8-byte challenge, and the 24-byte NT response.
– Server sends a response stating SUCCESS or FAILURE and piggybacks an Authenticator Response to the 16-byte peer challenge.
– Client decides to continue or end based upon the SUCCESS or FAILURE response above. In addition, the Client checks the validity of the Authenticator Response and disconnects if it is not the expected value.
Fig. 1. Some basic differences between MS-CHAPv1 and MS-CHAPv2 authentication

This protocol works, and eliminates the most serious weaknesses that plagued MS-CHAPv1. In MS-CHAPv1, two parallel hash values were sent from the Client to the Server: the LAN Manager hash and the Windows
NT hash. These were two different hashes of the same user password. The LAN Manager hash is a much weaker hash function, and password-cracker programs such as L0phtcrack were able to break the LAN Manager hash and then use that information to break the Windows NT hash [L97]. By eliminating the LAN Manager hash in MS-CHAPv2, Microsoft has made this divide-and-conquer attack impossible. Still, the security of this protocol is based on the password used, and L0phtcrack can still break weak passwords using a dictionary attack [L99]. As we will discuss later, multiple layers of hashing are used in the different steps of MS-CHAPv2. While this hashing serves to obscure some of the values, it is unclear what their cryptographic significance is. All they seem to do is slow down the execution of the protocol. We also have concerns over the amount of influence the Client has on the ultimate 8-byte challenge that is used, though we have not yet been able to come up with a viable attack that exploits this. Certainly it opens the possibility of subliminal channels, which can be exploited in other contexts.
3 MS-CHAPv2: Deriving the 8-byte Challenge for the 24-byte Response
In MS-CHAPv1, the Server sends the Client an 8-byte random challenge. This challenge is used, together with the Client's password and a hash function, to create a pair of 24-byte responses. In MS-CHAPv2, the Server sends the Client a 16-byte challenge. This challenge is not used by the Client directly; the Client derives an 8-byte value from this 16-byte challenge. The derivation process is as follows:
1. The Client creates a 16-byte random number, called the Peer Authenticator Challenge.
2. The Client concatenates the Peer Authenticator Challenge with the 16-byte challenge received from the Server and the Client's username.
3. The Client hashes the result with SHA-1 [NIST93].
4. The first eight bytes of the hash become the 8-byte challenge.
It is these 8 bytes that the Client will use to encrypt the 16-byte local password hash (using the Windows NT hash function) to obtain the 24-byte response, which the Client will send to the Server. This method is identical to MS-CHAPv1, and has been described in [SM98].
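Reading the steps above, the derivation can be sketched as follows (the function name and byte encodings are our assumptions; the construction corresponds to the ChallengeHash operation later specified in RFC 2759):

import hashlib

def challenge_hash(peer_challenge: bytes, server_challenge: bytes,
                   username: bytes) -> bytes:
    sha = hashlib.sha1(peer_challenge + server_challenge + username)
    return sha.digest()[:8]    # the first eight bytes become the challenge

c = challenge_hash(b"\xaa" * 16, b"\xbb" * 16, b"alice")
assert len(c) == 8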
3.1 Analysis
It is unclear to us why this protocol is so complicated. At first glance, it seems reasonable that the Client not use the challenge from the Server directly, since it is known to an eavesdropper. But instead of deriving a new challenge from some secret information—the password hash, for example—the Client uses a unique random number that is sent to the Server later in the protocol. There is no reason why the Client cannot use the Server’s challenge directly and not use the Peer Authenticator Challenge at all.
4 MS-CHAPv2: Deriving the 24-byte Response
Both MS-CHAPv1 and MS-CHAPv2 use the same procedure to derive a 24-byte response from the 8-byte challenge and the 16-byte NT password hash:
1. The 16-byte NT hash is padded to 21 bytes by appending five zero bytes.
2. Let X, Y, Z be the three consecutive 7-byte blocks of this 21-byte value, and let C be the 8-byte challenge. The 24-byte response R is calculated as
R = ⟨DES_X(C), DES_Y(C), DES_Z(C)⟩.
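A sketch of this computation, assuming the third-party pyDes module for single DES (the 7-to-8-byte expansion spreads the 56 key bits over eight bytes, leaving the ignored parity-bit position in each byte):

import pyDes

def expand_des_key(key7: bytes) -> bytes:
    bits = int.from_bytes(key7, "big")           # 56 key bits
    return bytes(((bits >> (49 - 7 * i)) & 0x7F) << 1 for i in range(8))

def challenge_response(challenge8: bytes, nt_hash16: bytes) -> bytes:
    padded = nt_hash16 + b"\x00" * 5             # pad to 21 bytes
    keys = (padded[0:7], padded[7:14], padded[14:21])   # X, Y, Z
    return b"".join(
        pyDes.des(expand_des_key(k)).encrypt(challenge8) for k in keys)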
4.1 Analysis
This complicated procedure creates a serious weakness in the MS-CHAP protocols: it allows the attacker to speed up dictionary keysearch by a factor of 2^16, which is a pretty devastating effect given the relatively low entropy of most user passwords. Suppose that we eavesdrop on an MS-CHAP connection. The response R is exposed in the clear, and the challenge C may be derived easily from public information. We will attempt to recover the password, using the knowledge that many passwords are closely derived from dictionary words or otherwise readily guessable. Note first that the value of Z can be easily recovered: since there are only 2^16 possibilities for Z, and we have a known plaintext-ciphertext pair ⟨C, DES_Z(C)⟩ for Z, we may try each of the possibilities for Z in turn with a simple trial encryption.^1 This discloses the last two bytes of the NT hash of the password. We will use this observation to speed up dictionary search. In a one-time precomputation, we hash each of our guesses at the password (perhaps as minor variations on a list of words in a dictionary). We sort the results by the last two bytes of their hash and burn this on a CD-ROM (or a small hard drive). Then, when we see an MS-CHAP exchange, we may recover the last two bytes of the NT hash (using the method outlined above) and examine all the corresponding entries on the CD-ROM. This gives us a list of plausible passwords which have the right value for the last two bytes of their NT hash; then we can try each of those possibilities by brute force. Suppose a naive dictionary attack would search N passwords. In our attack, we try only the passwords which have the right value for the last two bytes of their NT hash, so we expect to try only about N/2^16 passwords. This implies that the optimized attack runs about 2^16 times faster than a standard dictionary attack, if we can afford the space to store a precomputed list of possible passwords. This attack is applicable to both MS-CHAPv1 and MS-CHAPv2. However, the weakness is much more important for MS-CHAPv2, because for MS-CHAPv1 it is easier to attack the LanManager hash than to attack the NT hash. This is a serious weakness which could have been easily avoided merely by using a standard cryptographic hashing primitive. For instance, merely generating the response as R = SHA-1(NT hash, C) would be enough to prevent this attack.
1 This has been independently observed by B. Rosenburg.
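To make the observation concrete, here is a hedged sketch of the 2^16 search for Z (again assuming pyDes; note that Z consists of the last two NT-hash bytes followed by the five zero padding bytes):

import pyDes

def expand_des_key(key7: bytes) -> bytes:
    bits = int.from_bytes(key7, "big")
    return bytes(((bits >> (49 - 7 * i)) & 0x7F) << 1 for i in range(8))

def recover_hash_tail(challenge8: bytes, response24: bytes) -> bytes:
    third = response24[16:24]                    # the block DES_Z(C)
    for z in range(1 << 16):
        key7 = z.to_bytes(2, "big") + b"\x00" * 5
        if pyDes.des(expand_des_key(key7)).encrypt(challenge8) == third:
            return z.to_bytes(2, "big")          # last two NT-hash bytes
    raise ValueError("no candidate found")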
Note also that the MS-CHAP response generation algorithm is a weak link, even when passwords contain adequate entropy. It is clear that the NT hash can be recovered with just two DES exhaustive keysearches (about 2^56 trial DES decryptions on average), or in just 9 days using a single EFF DES Cracker machine [Gil98]. Once the NT hash is recovered, all encrypted sessions can be read and the authentication scheme can be cracked with no effort. This shows that, even when using 128-bit RC4 keys, the MS-CHAP protocol provides at most the equivalent of 57-bit security.^2 This weakness could also have been avoided by the simple change suggested above, R = SHA-1(NT hash, C). It is not clear to us why the MS-CHAPv2 designers chose such a complicated and insecure algorithm for generating 24-byte responses, when a simpler and more secure alternative was available.
2 This has been independently observed by P. Holzer.
5 MS-CHAPv2: Deriving the 20-byte Authenticator Response
In MS-CHAPv2, the Server sends the Client a 20-byte Authenticator Response. The Client calculates the same value, and then compares it with the value received from the Server in order to complete the mutual authentication process. This value is created as follows:
1. The Server (or the Client) hashes the 16-byte NT password hash with MD4 [Riv91] to get password-hash-hash. (The Server stores the Client's password hashed with MD4; this is the NT password hash value.)
2. The Server then concatenates the password-hash-hash, the 24-byte NT response, and the literal string "Magic server to client constant", and then hashes the result with SHA.
3. The Server concatenates the 20-byte SHA output from step (2), the initial 8-byte generated challenge (see Section 3) and the literal string "Pad to make it do more than one iteration", and then hashes the result with SHA.
The resulting 20 bytes are the mutual authenticator response.
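A sketch of these three steps follows; it assumes hashlib exposes MD4 through OpenSSL (hashlib.new("md4"), not available on every build) and uses the literal strings exactly as quoted above:

import hashlib

MAGIC1 = b"Magic server to client constant"
MAGIC2 = b"Pad to make it do more than one iteration"

def authenticator_response(nt_hash16: bytes, nt_response24: bytes,
                           challenge8: bytes) -> bytes:
    hash_hash = hashlib.new("md4", nt_hash16).digest()  # password-hash-hash
    digest = hashlib.sha1(hash_hash + nt_response24 + MAGIC1).digest()
    return hashlib.sha1(digest + challenge8 + MAGIC2).digest()  # 20 bytes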
5.1 Analysis
Again, this process is much more complicated than required. There is no reason to use SHA twice; a single hashing has the same security properties.
6 Analysis of MS-CHAPv2
We do not know why Microsoft chose such a complicated protocol, since it is not stronger than the following:
1. The Server sends the Client an 8-byte challenge.
2. The Client encrypts the 16-byte local password hash with the 8-byte challenge and sends the Server the 24-byte response, an 8-byte challenge of its own, and the username.
3. The Server sends a pass/fail packet with a 24-byte response to the Client's challenge, which is the user's password-hash-hash encrypted with the Client's 8-byte challenge.
The downside to the MS-CHAPv2 protocol is that an eavesdropper can obtain two copies of the same plaintext, encrypted with two different keys. However, in the current model, watching the network for any length of time will still give you multiple copies of a user challenge/response as the user logs in and out, which will be encrypted with different keys. As it stands, a passive listener is still able to get the 8-byte challenge and the 24-byte response from the information sent. The popular hacker tool L0phtcrack [L97], which breaks Windows NT passwords, works with this data as input. This task was much easier with MS-CHAPv1, since the weaker LAN Manager hash was sent alongside the stronger Windows NT hash; L0phtcrack first broke the former and then used that information to break the latter [L99]. L0phtcrack can still break most common passwords from the Windows NT hash alone [L97]. And this still does not solve the problem of using the user's hash for MPPE keying, PPTP authentication, etc., without negotiating, at least, machine public-key/private-key methods of exchanging such an important key.
6.1 Version Rollback Attacks
Since Microsoft has attempted to retain some backwards compatibility with MS-CHAPv1, it is possible for an attacker to mount a “version rollback attack” against MS-CHAP. In this attack, the attacker convinces both the Client and the Server not to negotiate the more secure MS-CHAPv2 protocol, but to use the less secure MS-CHAPv1 protocol. In its documentation, Microsoft claims that the operating systems will try to negotiate MS-CHAPv2 first, and only drop back to MS-CHAPv1 if the first negotiation fails [Mic99]. Additionally, it is possible to set the Server to require MS-CHAPv2. We find this scenario implausible for two reasons. One, the software switches to turn off backwards compatibility are registry settings, and can be difficult to find. And two, since older versions of Windows cannot support MS-CHAPv2, backwards compatibility must be turned on if there are any legacy users on the network. We conclude that version rollback attacks are a significant threat.
7 Changes to MPPE
The original encryption mechanism in Microsoft’s Point to Point Encryption protocol (MPPE) used the same encryption keys in each direction (Client to Server, and Server to Client). Since the bulk data encryption routine is the RC4
stream cipher [Sch96], this enabled a cryptographic attack: XORing the two streams against each other and performing standard cryptanalysis against the result. In the more recent version, the MPPE keys are derived from MS-CHAPv2 credentials and a unique key is used in each direction. The keys for each direction are still derived from the same value (the Client's NT password hash), but differently depending on the direction.
7.1 Deriving MPPE Keys from MS-CHAPv2 Credentials
MPPE keys can be either 40 bits or 128 bits, and they can be derived from either MS-CHAPv1 credentials or MS-CHAPv2 credentials. The original derivation protocol (from MS-CHAPv1) was described in [SM98]. Briefly, the password hash is hashed again using SHA, and then truncated. For a 40-bit key, the SHA hash is truncated to 64 bits, and then the high-order 24 bits are set to 0xD1269E. For a 128-bit key, the SHA hash is truncated to 128 bits. This key is used to encrypt both traffic from the Client to the Server and traffic from the Server to the Client, opening a major security vulnerability. This has been corrected in MS-CHAPv2. Deriving MPPE keys from MS-CHAPv2 credentials works as follows:
1. Hash the 16-byte NT password hash, the 24-byte response from the MS-CHAPv2 exchange, and a 27-byte constant (the string "This is the MPPE Master Key") with SHA. Truncate to get a 16-byte master-master key.
2. Using a deterministic process, convert the master-master key to a pair of session keys.
For 40-bit session keys, this is done as follows:
1. Hash the master-master key, 40 bytes of 0x00, an 84-byte constant and 40 bytes of 0xF2 with SHA. Truncate to get an 8-byte output.
2. Set the high-order 24 bits to 0xD1269E, resulting in a 40-bit key.
The magic constants are different, depending on whether the key is used to encrypt traffic from the Client to the Server, or from the Server to the Client. For 128-bit session keys, the process is as follows:
1. Hash the master-master key, 40 bytes of 0x00, an 84-byte constant (magic constant 2 or 3), and 40 bytes of 0xF2 with SHA. Truncate to get a 16-byte output.
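A sketch of the export-weakened 40-bit derivation recapped above (input formatting simplified; the 84-byte direction-dependent constants of the MS-CHAPv2 path are omitted because their values are not given here):

import hashlib

def mppe_40bit_key(password_hash16: bytes) -> bytes:
    key8 = hashlib.sha1(password_hash16).digest()[:8]   # hash, then truncate
    return bytes.fromhex("d1269e") + key8[3:]           # fix the high 24 bits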
7.2
Analysis
This modification means that unique keys are used in each direction, but it does not solve the serious problem of weak keys. The keys are still a function of the password, and hence contain no more entropy than the password. Even though the RC4 algorithm may theoretically support 128 bits of entropy, the actual passwords used for key generation have much less. This having been said, using different keys in each direction is still a major improvement to the protocol.
7.3
Trapdoors in the Magic Constants?
We are very concerned with the magic constants embedded in the key derivation algorithm for export-weakened keys. The protocol weakens RC4 keys to 40 bits by fixing the high bits of the 64-bit RC4 key to 0xD1269E. But this seems dangerous. It is known that, if an adversary is allowed to choose the high bits of the RC4 key, the adversary can force you into a weak key class for RC4 [Roo95,Wag95]. Therefore, if the MS-CHAP designers—or the NSA export-reviewer folks—wanted to embed a trapdoor in the protocol, they could exploit the presence of magic constants to weaken RC4. We do not know whether keys prefixed with 0xD1269E are unusually weak, but in our preliminary statistical tests we have found some suspicious properties of such keys that leave us with some cause for concern. To give two examples:
– Empirical measurements show that the first few bytes of output are biased, for keys which start with 0xD1269E. The first and second keystream bytes take on the values 0x09 and 0x00 with probabilities 0.0054 and 0.0060, respectively. This is noticeably higher than the 1/256 ≈ 0.0039 probability one would expect from a good cipher.
– The key schedule mixes some entries in the state table poorly, for this class of keys. For instance, S[1] = 0xF8 holds with probability 0.38 ≈ 1/e, and S[2] = 0x98 holds with a similar probability.
These statistical properties are worrisome. Because no information is given on how the value 0xD1269E was chosen, one has to worry that it could well be a "trapdoor choice" which forces all 40-bit keys into some weak key class for RC4. We invite the MS-CHAP designers to openly disclose how all magic constants were chosen and to provide concrete assurances that those magic values do not create any hidden trapdoors. In the meantime, we leave it as an open question to ascertain whether RC4 is secure when used with the fixed key-prefix 0xD1269E.
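These measurements are straightforward to reproduce. The following sketch (our reconstruction, not the authors' test harness) implements textbook RC4 and estimates the first-byte distribution over random 64-bit keys with the fixed 0xD1269E prefix; the trial count is illustrative.

```python
import os
from collections import Counter

def rc4_first_bytes(key: bytes, n: int = 2) -> bytes:
    # Standard RC4 key schedule (KSA).
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # PRGA: emit the first n keystream bytes.
    out, i, j = [], 0, 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

# Distribution of the first keystream byte over random 40-bit keys that are
# prefixed with 0xD1269E, as in the export-weakened derivation.
counts = Counter()
for _ in range(100_000):
    key = b"\xd1\x26\x9e" + os.urandom(5)
    counts[rc4_first_bytes(key)[0]] += 1
print(counts.most_common(3))   # compare against the uniform 1/256 ≈ 0.0039
```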
8
Attack on Export-Weakened Key Derivation
In this section we present a very serious attack on the way that exportable 40-bit session keys are generated. This weakness is present in both MS-CHAPv1 and MS-CHAPv2, but had not been discovered until now. The end result is that the so-called "40-bit keys" really only have an effective strength of about 26 bits. As a result, the export-weakened protocol can be cracked in near-real-time with only a single computer.³
³ Today's computers seem to be able to try 2^16–2^17 keys/second, which suggests that each key can be cracked in something like a quarter of an hour. (In lieu of an implementation, these estimates will necessarily be very rough.) With a small cluster of computers, the cracking performance can be greatly increased.
We recall that the key derivation process appends 40 secret bits (generated in some way which is irrelevant to our attack) to the fixed value 0xD1269E. The resulting 64-bit session key is used to RC4-encrypt the transmitted data. The problem is that this process introduces no per-session salt (compare to, e.g., SSL), and thus can be broken with a time-space tradeoff attack. For the remainder of this section, we assume that we can obtain a short segment of known plaintext (40 bits should suffice) at some predictable location. The known plaintext need not even occur at consecutive bit locations; the only requirement is that the bit positions be predictable in advance. This seems to be a very plausible assumption, when one considers the quantity of known headers and other predictable data that is encrypted. Let us assume for simplicity of description that this known plaintext occurs at the start of the keystream. We will attack this protocol with a time-space tradeoff. The cost of a lengthy precomputation is amortized over many sessions, so that the incremental cost of breaking each additional session key is reduced to a very low value. A naive attacker might consider building a lookup table with 2^40 entries, listing for each possible 40-bit key the value of the first 40 bits of keystream that results. This requires a 2^40 precomputation, but then each subsequent session key can be broken extremely quickly (with just a single table lookup). In practice, however, this attack is probably not very practical because it requires 2^40 space. A time-space tradeoff allows us to reduce the space requirements of the naive attack by trading off memory for additional computation. Consider Hellman's time-space tradeoff [Hel80]. For an n-bit key, Hellman's tradeoff requires a 2^n precomputation and 2^(2n/3) space, and then every subsequent session key can be broken with just 2^(2n/3) work. (Other tradeoffs are also possible.) For MS-CHAP's 40-bit keys, n = 40 and 2n/3 ≈ 26, so one gets an attack that breaks each session key with approximately 2^26 work. The attack requires a 2^40 precomputation and 2^26 space, but these requirements are easily met. This means that the export-weakened versions of MS-CHAP offer an effective keylength of only about 26 bits or so, which is much less than the claimed 40 bits of strength. This is a deadly weakness.
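To illustrate the structure of the tradeoff, the sketch below builds a single Hellman table over a stand-in one-way map (SHA-1 truncated to 40 bits in place of "first 40 known-plaintext keystream bits of RC4 under 0xD1269E || key"), with parameters far below 2^40. A real attack would use many tables with distinct reduction functions and would have to handle false alarms and coverage gaps.

```python
import hashlib, os

def f(x: bytes) -> bytes:
    # Stand-in one-way map on 40-bit values; see the lead-in for the real map.
    return hashlib.sha1(x).digest()[:5]

T = 1 << 11   # chain length (illustrative)
M = 1 << 11   # number of chains; T*M here is ~2^22, far below full coverage

table = {}
for _ in range(M):                 # precomputation: keep only chain endpoints
    start = x = os.urandom(5)
    for _ in range(T):
        x = f(x)
    table[x] = start

def lookup(y: bytes):
    # Online phase: walk y forward; on an endpoint hit, replay that chain
    # from its start looking for a preimage of y (false alarms are possible).
    x = y
    for _ in range(T):
        if x in table:
            cand = table[x]
            for _ in range(T):
                if f(cand) == y:
                    return cand    # candidate 40-bit key
                cand = f(cand)
        x = f(x)
    return None
```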
9
Conclusions
Microsoft has improved PPTP to correct the major security weaknesses described in [SM98]. However, the fundamental weakness of the authentication and encryption protocol is that it is only as secure as the password chosen by the user. As computers get faster and distributed attacks against password files become more feasible, the list of bad passwords—dictionary words, words with random capitalization, words with the addition of numbers, words with numbers replacing letters, reversed words, acronyms, words with the addition of punctuation—becomes larger. Since authentication and key-exchange protocols which do not allow passive dictionary attacks against the user’s password are possible—Encrypted Key Exchange [BM92,BM94] and its variants
[Jab96,Jab97,Wu98], IPSec—it seems imprudent for Microsoft to continue to rely on the security of passwords. Our hope is that PPTP continues to see a decline in use as IPSec becomes more prevalent.
References
BM92. S.M. Bellovin and M. Merritt, "Encrypted Key Exchange: Password-Based Protocols Secure Against Dictionary Attacks," Proceedings of the IEEE Symposium on Research in Security and Privacy, May 1992, pp. 72–84.
BM94. S.M. Bellovin and M. Merritt, "Augmented Encrypted Key Exchange: A Password-Based Protocol Secure Against Dictionary Attacks and Password File Compromise," AT&T Bell Laboratories, 1994.
Gil98. J. Gilmore, ed., Cracking DES, The Electronic Frontier Foundation, San Francisco, CA, O'Reilly and Associates, 1998.
HP+97. K. Hamzeh, G.S. Pall, W. Verthein, J. Taarud, and W.A. Little, "Point-to-Point Tunneling Protocol," Internet Draft, IETF, Jul 1997. http://www.ietf.org/internet-drafts/draft-ietf-pppext-pptp-10.txt.
Hel80. M.E. Hellman, "A cryptanalytic time-memory trade-off," IEEE Transactions on Information Theory, vol. IT-26, no. 4, July 1980, pp. 401–406.
Jab96. D. Jablon, "Strong Password-Only Authenticated Key Exchange," ACM Computer Communications Review, Oct 1996, pp. 5–26.
Jab97. D. Jablon, "Extended Password Key Exchange Protocols Immune to Dictionary Attacks," Proceedings of the Sixth Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises, IEEE Computer Society, 1997, pp. 248–255.
L97. L0pht Heavy Industries, Inc., "A L0phtCrack Technical Rant," Jul 1997. http://www.l0pht.com/l0phtcrack/rant.html.
L99. L0pht Heavy Industries, Inc., L0phtcrack, 1999. http://www.l0pht.com/l0phtcrack/.
Mic96a. Microsoft Corporation, Advanced Windows NT Concepts, New Riders Publishing, 1996. Relevant chapter at http://www.microsoft.com/communications/nrpptp.htm.
Mic96b. Microsoft Corporation, "Point-to-Point Tunneling Protocol (PPTP) Frequently Asked Questions," Jul 1996.
Mic98a. Microsoft Corporation, "Frequently Asked Questions about Microsoft VPN Security," Dec 1998. http://www.microsoft.com/NTServer/commserv/deployment/moreinfo/VPNSec FAQ.asp.
Mic98b. Microsoft Corporation, "Microsoft Windows 95 Dial-Up Networking 1.3 Upgrade Release Notes," 1998. http://support.microsoft.com/support/kb/articles/q154/0/91.asp.
Mic99. Microsoft Corporation, "Windows 98 Dial-Up Networking Security Upgrade Release Notes," Feb 1999. http://support.microsoft.com/support/kb/articles/Q189/7/71.asp.
NIST93. National Institute of Standards and Technology, "Secure Hash Standard," U.S. Department of Commerce, May 1993.
PZ98. G.S. Pall and G. Zorn, "Microsoft Point-to-Point Encryption (MPPE) Protocol," Network Working Group Internet Draft, IETF, Mar 1998. http://www.ietf.org/internet-drafts/draft-ietf-pppext-mppe-03.txt.
Riv91. R. Rivest, "The MD4 Message Digest Algorithm," Advances in Cryptology—CRYPTO '90 Proceedings, Springer-Verlag, 1991, pp. 303–311.
Roo95. A. Roos, "Weak Keys in RC4," sci.crypt post, 22 Sep 1995.
Sim94. W. Simpson, "The Point-to-Point Protocol (PPP)," Network Working Group, STD 51, RFC 1661, Jul 1994. ftp://ftp.isi.edu/in-notes/rfc1661.txt.
Sch96. B. Schneier, Applied Cryptography, 2nd Edition, John Wiley & Sons, 1996.
SM98. B. Schneier and Mudge, "Cryptanalysis of Microsoft's Point-to-Point Tunneling Protocol (PPTP)," Proceedings of the 5th ACM Conference on Communications and Computer Security, ACM Press, pp. 132–141. http://www.counterpane.com/pptp.html.
Wag95. D. Wagner, "Re: Weak Keys in RC4," sci.crypt post, 25 Sep 1995. http://www.cs.berkeley.edu/~daw/my-posts/my-rc4-weak-keys.
Wu98. T. Wu, "The Secure Remote Password Protocol," Proceedings of the 1998 Internet Society Network and Distributed System Security Symposium, Mar 1998, pp. 97–111.
ZC98. G. Zorn and S. Cobb, "Microsoft PPP CHAP Extensions," Network Working Group Internet Draft, Mar 1998. http://www.ietf.org/internet-drafts/draft-ietf-pppext-mschap-00.txt.
Zor98a. G. Zorn, "Deriving MPPE Keys from MS-CHAP V1 Credentials," Network Working Group Internet Draft, Sep 1998. http://www.ietf.org/internet-drafts/draft-ietf-pppext-mschapv1-keys-00.txt.
Zor98b. G. Zorn, "Deriving MPPE Keys from MS-CHAP V2 Credentials," Network Working Group Internet Draft, Nov 1998. http://www.ietf.org/internet-drafts/draft-ietf-pppext-mschapv2-keys-02.txt.
Zor99. G. Zorn, "Microsoft PPP CHAP Extensions, Version 2," Network Working Group Internet Draft, Apr 1999. http://www.ietf.org/internet-drafts/draft-ietf-pppext-mschap-v2-03.txt.
Auto-recoverable Auto-certifiable Cryptosystems (A Survey)

Adam Young¹ and Moti Yung²

¹ Currently: Columbia University
² Currently: CertCo LLC
Abstract. In this paper we survey the recent work on Auto-Recoverable Auto-Certifiable Cryptosystems. This notion has been put forth to solve the "software key escrow" problem in an efficient manner within the context of a Public Key Infrastructure (PKI). This survey presents the exact specification of the problem, which is based on what software key escrow can hope to achieve. The specification attempts to separate the truly difficult technical issues in the area from the ones that are only seemingly difficult. We then review the work in Eurocrypt '98 and PKC '99, which gives an efficient reduction to a software key escrow system from a certified public key system (PKI). Namely, we show how to construct an escrowed PKI for essentially the same cost and effort required for a regular PKI. More specifically, the schemes presented are as efficient for users to use as a PKI, do not require tamper-resistant hardware (i.e., they can be distributed in software to users), and the schemes are shadow public key resistant as defined in Crypto '95 by Kilian and Leighton (namely, they do not allow the users to publish public keys other than the ones certified). The schemes enable the efficient verification of the fact that a given user's private key is escrowed properly. They allow the safe and efficient recovery of keys (and plaintext messages), which is typical in emergency situations such as in the medical area, in secure file systems, and in criminal investigations. We comment that we do not advocate nor deal with the policy issues regarding the need of governments to control access to messages; our motivation is highly technical: in cases where escrow is required or needed, we would like to minimize its effect on the overall PKI deployment. We then briefly mention forthcoming developments in the area, which include further flexibility/compatibility requirements for auto-recoverable cryptosystems, as well as the design of such systems based on traditional public key methods (RSA and discrete logs).
1
Introduction
We are currently at the point, due to the enormous surge of Internet use, where a large-scale Public Key Infrastructure (PKI) is about to be deployed. On the other hand, another set of requirements suggests that decryption keys should be
escrowed. This is easily justifiable in the worldwide deployment of systems where medical records can be accessed, typically by the client, but only in an emergency using escrow mechanisms. Also, governments are interested in securing access to telephony systems for law enforcement (this last issue is politically controversial, but our treatment is only technical). Most if not all of the early proposed key escrow schemes suffer from one drawback or another (typically from inherent incompatibilities with software-based regular PKIs). These include the need for "tamper-resistant" devices, e.g., Clipper and Capstone, added overhead of protocol interaction between users and the escrow authorities, the need for "trusted third parties" to generate cryptographic keys and be active in the user-to-user transactions, and requiring changes in protocols which are outside the cryptographic system. In fact, the problem of implementing an escrowed PKI efficiently is regarded as too difficult a problem to achieve by a number of researchers, cryptographers, and security experts [K-S]. In another paper [FY97], formal arguments are presented explaining why building key escrow on top of a public-key system is a non-trivial task (even when third parties are allowed to be present). The early attempts to present escrowed encryption, indeed, proposed systems quite different from a regular PKI. Auto-Recoverable Auto-Certifiable cryptosystems attempt to solve the efficiency (and compatibility) problems that are posed to escrowed PKIs, and do not claim to resolve the ongoing conflict between privacy advocates and those seeking access to escrowed keys. We remark that unlike recent escrow proposals which give the escrow authorities only access to some fraction of the escrowed information (e.g., as in [BGw97]), Auto-Recoverable Auto-Certifiable cryptosystems give the escrow authorities access to all encrypted information when authorized. However, the access does not have to be to a key; rather, it can have very small granularity, which enables access to an individual message. We believe that granular access [FY95] is more acceptable than partial access, which implies further computational costs in recovering the message (the cost effectiveness is not justified from an engineering point of view, and the delay of partial recovery may not be tolerable).
2
Initial Problem Specification
The following are specifications of a software escrowed PKI, as can be derived from existing documents, discussions in the cryptographic community, and approaches to systems development:
1. Software implementation: No system component requires tamper-proof hardware.
2. Software distribution: The software that users employ is public (and hence is easily distributed).
3. Key self-generation: Users generate their own private keys independently and efficiently. The private keys (or messages encrypted by these keys) are recoverable by the escrow authorities only.
4. Escrow authorities' minimal intervention: The escrow authorities act only at the system's set-up, and when key recovery is needed.
5. PKI-compatible certification process: To certify a key, a user sends one message requesting certification to the Certification Authority (CA), as in a regular public key infrastructure. This message is created by an efficient procedure, performed independently by the user alone.
6. Certified keys are recoverable: A user's public key is certified by a Certification Authority (CA) only if the corresponding private key is verified to be recoverable by the authorities. This verification is conducted solely from the message that forms the request for certification; the verification is successful if and only if the key is recoverable with very high probability.
7. PKI-compatible certificates: A user's key in the certificate should include the same information as in a regular public key.
8. Universal verifiability of recoverability: Upon request, a user can present the message that forms the request for certification to any party, and this party can verify that the private key is recoverable by the authorities.
9. Efficient recovery: The key recovery procedure is efficient (it can preferably be done by distributed parties, e.g., using threshold cryptography [FD89] and verifiable secret sharing, which were developed in the '80s).
10. PKI-compatible user system: The system is as easy for users as a public key cryptosystem, and can be implemented in software. Such a solution therefore constitutes a reduction of a PKI with a Certification Authority [Koh78] to an escrowed public key infrastructure with the same configuration. Since such a solution can be implemented securely in software, it can be implemented and distributed in source code form, thus making it as easy to distribute and use as a public key software package (e.g., PGP).
11. PKI-compatible software/architecture layers: From an infrastructure and systems integration perspective, these specifications differentiate between various independent layers. The first layer consists of the escrow authorities, who act only at the time the system is established and when a private key needs to be recovered. The latter action is performed without interfacing with users. The second layer is the public-key infrastructure, where users and CAs generate certified keys whose corresponding private keys are private to the users. The third layer is the use of the certified public keys within communication and storage applications. In such a system the third layer is related to the second layer as in a regular public key infrastructure.
12. Communication Protocol Compatibility: The solution should not change headers and messages outside the PKI protocols; e.g., communicating parties use existing communication protocols.
13. Compliance assurance: In [FY95,FY97] Frankel and Yung note that an escrow encryption scheme can always be bypassed (in hardware or software). This is due to under-encrypting, over-encrypting, etc. Thus, governments cannot hope to solve misbehaviors in general. What is important is the definition given in [FY95], which says that "as long as the parties employ the mechanisms provided for confidentiality by the system, the escrow capability
should be enabled". In an Auto-Recoverable Auto-Certifiable system, the CA ties the certification of keys to the assurance that secret keys are escrowed. This seems a proper choice of control, since in order to bypass the system, users would have to use another system or an unauthorized modification of the system. Also, not performing the 'assurance of escrow' by the CA at the infrastructure level may cause problems in system design, as pointed out in [FY97].
14. Security: The public key is as secure as a key in a PKI against all parties ([YY98] and [YY99] each require an additional assumption, but only for arguing security against the CA; otherwise it is possible to reduce the security of the key to a known assumption).
15. Shadow-public-key resistance: Another aspect of security is that the system should not contain a subliminal channel that enables "shadow public-key" distribution [KL95]. Such a property is hard to prove, but at the very least it should be required that what is published to the general public in the key escrow system is the same information as what is published in a regular public key system.
There are three additional requirements which are often desired in many applications:
16. Low Cost: The system should be of relatively low resource cost. In particular, whereas the user should have no additional cost when compared to an unescrowed PKI user, the CA may have some additional cost (e.g., a moderate increase in memory and processing, though this memory may be maintained at the escrow authorities as well), and the only real additional cost is in managing and operating the escrow authorities (which is a required cost). A typical cost of the authorities, and the needs for their security, should be compatible with the corresponding cost and needs of a (perhaps distributed) CA.
17. Granularity of Escrow: Another property which may be required is that rather than opening keys of users, the authorities open session keys which are encrypted under the public keys of users. The session key is openable regardless of which of the two users in the session has been authorized as a target for key escrow. The notion of granularity in taking a key out of escrow was dealt with in [DDFY94,FY95] and by Lenstra, Winkler, and Yacobi [LWY95]. This property is typically a function of the key of the escrow authority (but can always be achieved if the authorities, rather than a large number of users, are implemented in tamper-proof hardware).
18. CA as the source of trust: In a PKI setting the system's trust is with the CA; it may be desirable in an escrowed PKI setting that trust remain with the CA. In Auto-Recoverable Auto-Certifiable cryptosystems it is possible for the CA to retain critical escrowed information for each user (though the CA cannot access it), and thus CA collaboration can be made necessary to take information out of escrow, thus making the trust remain with the CA.
3
Auto-Recoverable Auto-Certifiable Cryptosystems: General Structure
The papers [YY98] and [YY99] describe two different Auto-Recoverable Auto-Certifiable Cryptosystems such that, when run, each produces for the user an ElGamal [ElG85] public/private key pair, an escrowed encryption of the private key, and a proof that the escrow authorities can recover the private key. The proof, in addition to the 'encryption' of the private key, has been called a certificate of recoverability. This certificate is publicly verifiable and assures that the private key is escrowed properly. In short, each algorithm describes how to construct a string which constitutes an implicit encryption of the private key x under the escrow authorities' key, and a non-interactive zero-knowledge (NIZK) proof that allows a prover to prove to a verifier that her private key x in y = g^x mod p is the same as in the implicit encryption. Hence, a Certification Authority (CA) can insist that a user who submits her own public key for publication also submit the certificate just described. Having done so, the CA can be certain that x is escrowed properly, without ever learning x itself. The primary difference between the two algorithms is that the public key of the escrow authorities in [YY98] is a discrete log based public key, whereas in [YY99] it is an RSA modulus. We emphasize that the proofs and encryptions employed are efficient, and do not contain encryptions of circuits and general proofs which employ such constructions (which are typically plausibility results rather than actual systems). Once the keys are certified by the CA, their use within the system is as in a regular PKI based on ElGamal/Diffie-Hellman keys. Key recovery is an efficient procedure between the CA and the escrow authorities, who are otherwise not active. The cooperation of the CA is needed, yet the CA cannot recover the keys. For security, the primary cryptographic assumption made is that the Diffie-Hellman (DH) problem is hard. This assumption is used for security against adversaries. For the user to be secure from the CA, each of the aforementioned Auto-Recoverable Auto-Certifiable Cryptosystems requires a new cryptographic assumption. Note that the DH assumption is already required, because the ElGamal PKCS is secure if and only if the DH problem is hard. Due to the non-interactive nature of the certificates of recoverability, a random oracle cryptographic hash assumption (for SHA1) is also required for the validity of the proofs within the certificate. The certificate of recoverability sent to the CA is not made public, to avoid shadow public-key abuse.
Related Work
Various tamper-resistant hardware solutions have been proposed, like the U.S. government's Clipper chip and Capstone chip. These solutions are undesirable for users since they require special hardware, and since secret unscrutinized
algorithms have to be trusted (see [YY96,YY97a,YY97b] for potential problems with such designs). Also, attacks have been found (e.g., [FY95]). Fair Public Key Cryptosystems (FPKC) is a public key escrow system that can be implemented in software [Mi92]. One of the problems with FPKCs is that every user must split his or her private key and send the shares to the authorities. The authorities must then interact to ensure that the key is escrowed. So, the system has more communication overhead than a typical public key system. Also, FPKCs can be abused via the use of shadow public key cryptosystems, as shown by Kilian and Leighton [KL95]. Kilian and Leighton proposed a key escrow solution called Failsafe Key Escrow (FKE) to fix the shadow public key abuse problem. However, in so doing, FKEs require even more protocol interaction to escrow keys than FPKCs. A "Fraud-Detectable Alternative to Key-Escrow Proposals" based on ElGamal was described in [VT97]. This system, called binding ElGamal, allows users to send encrypted information along with a poly-sized proof that the session key that was used in the encryption can be recovered by the escrow authorities. It was pointed out in two different rump session presentations at Eurocrypt '97 that it is possible to use the means provided by the binding ElGamal system to defeat the escrow capability [PW97,T97].
4
Definitions
Informally, an Auto-Recoverable and Auto-Certifiable cryptosystem is a system that allows a user to generate auto-certifiable keys (keys with a proof of the method of generation) efficiently. The following is a formal definition.
Definition 1. An Auto-Recoverable and Auto-Certifiable Cryptosystem is a triple (GEN, VER, REC) (where REC may be an m-tuple (REC_1, REC_2, ..., REC_m)) such that:
1. GEN is a publicly known poly-time probabilistic Turing Machine that takes no input and generates the triple (K1, K2, P), which is left on the tape as output. Here K2 is a randomly generated private key and K1 is the corresponding public key. P is a poly-sized certificate that proves that K2 is recoverable by the escrow authorities using P.
2. VER is a publicly known poly-time deterministic Turing Machine that takes (K1, P) on its input tape and returns a boolean value. With very high probability, VER returns true iff P can be used to recover the private key K2.
3. REC is a deterministic Turing Machine with a private input. For a distributed implementation: REC_i, where 1 ≤ i ≤ m, is a poly-time deterministic Turing Machine with a private input that takes P as input and returns share i of K2 on its tape as output, assuming that K2 was properly escrowed. (Subsets of) the Turing Machines REC_i for 1 ≤ i ≤ m can be used collaboratively to recover K2.
4. It is intractable to recover K2 given K1 and P without REC (or REC_1, ..., REC_m).
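In code, Definition 1 amounts to an interface of the following shape (a sketch only: the byte-string encodings and the per-share method signature are our assumptions, not part of the definition):

```python
from typing import Protocol, Tuple

class AutoRecoverableCryptosystem(Protocol):
    def GEN(self) -> Tuple[bytes, bytes, bytes]:
        """Generate (K1, K2, P): public key, private key, and a poly-sized
        certificate proving K2 is recoverable by the escrow authorities."""
        ...

    def VER(self, K1: bytes, P: bytes) -> bool:
        """Publicly verify that P can be used to recover the private key K2."""
        ...

    def REC(self, i: int, P: bytes) -> bytes:
        """Authority i's share of K2, computed using its private input;
        a qualified subset of shares reconstructs K2."""
        ...
```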
Next we will define informally the steps taken in a Public Key Infrastructure (PKI) and in an Auto-Recoverable Auto-Certifiable PKI. The following is the structure (protocol) of a Public Key Infrastructure:
1. CA's addresses and parameters are published and distributed.
a) Each user generates a public/private key pair, and submits the public key, along with an ID string, to a CA.
b) The CA verifies the ID string, certifies the public key (by signing it), and enters the certification in the public key database.
c) To send a message, a user queries the CA to obtain the public key of the recipient, and verifies the signature of the CA on the public key.
d) The user then encrypts the message with the recipient's public key and sends the corresponding ciphertext to the recipient.
e) The recipient decrypts the ciphertext with his or her own private key.
The following is an Auto-Recoverable Auto-Certifiable PKI:
1. A set of system parameters are agreed upon. The escrow authorities generate an escrowing public key with corresponding private shares. The public parameters and CA's parameters are distributed (e.g., in software).
a) Each user generates a public/private key pair, and submits the public key, along with an ID string and a certificate of recoverability, to a CA.
b) Using the escrowing public key, the CA verifies the certificate of recoverability. Provided that this verification holds, and that the ID string is valid, the CA certifies the public key (by signing it), and enters the certification in the public key database.
c) To send a message, a user queries the CA to obtain the public key of the recipient, and verifies the signature of the CA on the public key.
d) The user then encrypts the message with the recipient's public key and sends the corresponding ciphertext to the recipient.
e) The recipient decrypts the ciphertext with his or her own private key.
2. If a wire-tap is authorized for a given user, the escrow authorities obtain the certificate of recoverability of that user (from the CA), and recover the key or cleartext under the key.
Note that (a) through (e) above are functionally equivalent in both systems. The only difference is that in the escrow system, the CA is able to verify that the private key is recoverable by the escrow authorities. The only items added in the auto-recoverable PKI, beyond what is required for a PKI, are the extra set-up work in step 1 and step 2. Step 2 is necessary by definition, and the additional work in step 1 seems necessary to bind the system to the escrow authorities. In an Auto-Recoverable Auto-Certifiable system, the i-th escrow authority EA_i knows only REC_i, in addition to what is publicly known. To publish a public key, user U runs GEN() and receives (K1, K2, P). U keeps K2 private and sends the pair (K1, P) to the CA. The CA then computes VER(K1, P), and publishes a signed version of K1 in the database of public keys iff the result is true. Otherwise, U's submission is ignored. In either case the certificate of recoverability is not published. Suppose that U's public key is accepted and K1
appears in the database of the CA. Given P obtained from the CA, the escrow authorities can recover K2 as follows: EA_i computes share i of K2 by running REC_i(P). The authorities then pool their shares and recover K2.
5
The Auto-Recoverable Auto-Certifiable Cryptosystems
5.1
The Scheme from [YY98]
Mathematical Preliminaries
Let Z*_2q denote the multiplicative group of canonical elements relatively prime to 2q (we use this notation for both the group and its elements). Here q is a large odd prime. It is straightforward to show that Z*_2q is a cyclic group (it possesses a primitive root). In fact, if s is a primitive root modulo q and s is odd, then s is also a primitive root modulo 2q. If s is a primitive root modulo q and s is even, then s + q is a primitive root modulo 2q. See [Ro93] for details. It can be shown that there exists a generator s for all groups Z*_2q; thus, there is a probabilistic poly-time algorithm to find a generator of Z*_2q. The first two of the following simple claims are used to show that if the discrete log problem is hard, then the discrete log problem in Z*_2q is hard.
Claim 1. (s^k mod 2q) mod q = s^k mod q.
Claim 2. If (s^k mod 2q) mod q = s^k' mod q, then k' = k.
Claim 3. If DH mod q is hard, then DH mod 2q is hard.
Proof. We prove this by proving the contrapositive. Suppose we are given a box X that takes A and B and returns s^ab mod 2q, where A = s^a mod 2q and B = s^b mod 2q. We need to show that we can use X to perform DH given A' = s^a' mod q and B' = s^b' mod q. To do this, we choose r1, r2 ∈R Z*_{q-1} such that A'^r1 and B'^r2 are odd mod q, provided that s is odd (if s is even, we make sure these two values are even). We then compute t = X(A'^r1 mod q, B'^r2 mod q). By Claim 2 it follows that t = s^(a'b'·r1·r2) mod q. We then output t^((r1·r2)^-1) mod q. Note that r1·r2 has a unique inverse mod q-1, since r1, r2 ∈ Z*_{q-1}. Our algorithm thus outputs s^(a'b') mod q as needed. QED.
From Claim 3 it follows that if the DH problem is hard, then the discrete log problem in Z*_2q is hard.
Problem 1: Let p = 2q + 1 and let q = 2r + 1, where p, q, and r are prime. Find t^k mod 2q given s^k mod 2q and g^(t^k) mod p. Here g, s, t, and p are public; g generates Z_p, s generates Z*_2q, and t generates a large subgroup of Z*_2q.
The difficulty of Problem 1 is a cryptographic assumption in [YY98]. Clearly, Problem 1 is not hard if the discrete-log problem is not hard, or if DH is not hard. We note that Stadler [St96] initiated the use of double-decker exponentiation in his publicly-verifiable secret sharing (PVSS) work, prior to our use of it. He also notes that his PVSS can be used in the model of Micali's "Fair Cryptosystems". However, this means that the application suffers from problems similar to those of the original fair cryptosystems, which our work has attempted to overcome.
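A quick numeric check of Claims 1 and 2, with the toy values q = 11 and s = 7 (an odd primitive root mod q, hence also mod 2q; this example is ours):

```python
s, q = 7, 11                     # s is an odd primitive root mod q and mod 2q
for k in range(10):              # exponents range over the group order 2r = 10
    assert pow(s, k, 2 * q) % q == pow(s, k, q)        # Claim 1
# Claim 2 amounts to injectivity: distinct k give distinct s^k mod q
assert len({pow(s, k, q) for k in range(10)}) == 10
```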
System Setup
A large prime r is agreed upon s.t. q = 2r + 1 is prime and s.t. p = 2q + 1 is prime. We have produced such large values efficiently. A generator g is agreed upon s.t. g generates Z_p, and an odd value g1 is agreed upon s.t. g1 generates Z*_2q. The values (p, q, r, g, g1) are made public. One example of organizing the escrow authorities is given; other settings of threshold schemes, or even schemes where users decide on which authorities to bundle together, are possible. There are m authorities. Each authority EA_i chooses z_i ∈R Z_2r. They each compute Y_i = g1^z_i mod 2q. They then pool their shares Y_i and compute the product Y = ∏(i=1..m) Y_i mod 2q. Note that Y = g1^z mod 2q, where z = ∑(i=1..m) z_i mod 2r. The authorities choose their z_i over again if (g1/Y) mod 2q is not a generator of Z*_2q. Each authority EA_i keeps z_i private. The public key of the authorities is (Y, g1, 2q). The corresponding shared private key is z.
Key Generation
GEN uses "double decker" exponentiation and operates as follows. It chooses a value k ∈R Z_2r and computes C = g1^k mod 2q. GEN then solves for the user's private key x in Y^k · x = g1^k mod 2q. GEN computes the public key y = g^x mod p. GEN computes a portion of the certificate, v, to be g^(Y^-k) mod p. GEN also computes three NIZK proof transcripts P1, P2, P3 (which are generated by the NIZK proof systems ZKIP_1, ZKIP_2, and ZKIP_2, described below). The certificate P is the 5-tuple (C, v, P1, P2, P3). GEN leaves ((y, g, p), x, P) on the output tape (note that y need not be output by the device, since y = v^C mod p). The user's public key is (y, g, p).
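A toy Python sketch (ours, not from [YY98]) makes the double-decker relations concrete. It uses tiny parameters r = 5, q = 11, p = 23, a single authority in place of the m shared ones, omits the NIZK transcripts, and requires Python 3.8+ for modular inverses via pow():

```python
import secrets

# Toy chain p = 2q + 1, q = 2r + 1 with p, q, r prime (real values are large).
r, q, p = 5, 11, 23
g, g1 = 5, 7          # g generates Z_p*; g1 is odd and generates Z_2q*

# Setup (one authority standing in for the m shared authorities):
z = secrets.randbelow(2 * r)
Y = pow(g1, z, 2 * q)                       # escrowing public key

# GEN: double-decker key generation.
k = secrets.randbelow(2 * r)
C = pow(g1, k, 2 * q)                       # goes into the certificate P
x = C * pow(Y, -k, 2 * q) % (2 * q)         # private key: Y^k * x = g1^k (mod 2q)
y = pow(g, x, p)                            # user's ElGamal public key
v = pow(g, pow(Y, -k, 2 * q), p)            # certificate component v = g^(Y^-k)

assert y == pow(v, C, p)                    # the relation checked by VER

# REC: the authority recovers x from C alone, using z (note C^z = Y^k mod 2q).
x_rec = C * pow(pow(C, z, 2 * q), -1, 2 * q) % (2 * q)
assert x_rec == x
```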
Public Escrow Verification
VER takes ((y, g, p), P) on its input tape and outputs a boolean value. VER verifies the following things:
1. P1 is valid, which shows that U knows k in C
2. P2 is valid, which shows that U knows k in v
3. P3 is valid, which shows that U knows k in v^C mod p
4. y = v^C mod p
VER returns true iff all four criteria are satisfied. P1 is essentially the same as the proof described first in [GHY85] for isomorphic functions, but the operations here are in Z*_2q. ZKIP_2, which is the basis for P2 and P3, is explained in the following section.
ZKIP
In ZKIP_2, the prover wishes to interactively prove to a verifier that the prover knows k in T = g^(s^k) mod p. It is assumed that the verifier does not know s^k mod 2q (and hence does not know k). The values T, g, s, and p are public. The quantity g generates Z_p. The following three-pass protocol is repeated n times:
1. The prover chooses e ∈R Z_2r and sends I = T^(s^e) mod p to the verifier.
2. The verifier sends b ∈R Z_2 to the prover.
3. The prover sends z = e + bk mod 2r to the verifier.
4. The verifier verifies that I = (T^(1-b) · g^b)^(s^z) mod p.
The verifier accepts the proof iff step 4 passes in all n rounds of the protocol. ZKIP_1 is a three-pass protocol that uses values (I, b, z) which are very similar to the values (I, b, z) used in ZKIP_2. It remains to show how the NIZK proofs in P are constructed. Let e_{i,j} denote the prover's random choice for iteration j of proof P_i, where 1 ≤ i ≤ 3 and 1 ≤ j ≤ n.
1. P = (C, v)
2. The prover chooses values e_{1,1}, ..., e_{1,n}, e_{2,1}, ..., e_{2,n}, e_{3,1}, ..., e_{3,n} ∈R Z_2r. Note that the e's must be in Z_2r, otherwise information about k may be leaked in step 8. To see this, note that e is needed to blind kb perfectly.
3. The prover computes I_{1,j} = g1^e_{1,j} mod 2q, I_{2,j} = v^(Y^-e_{2,j}) mod p, and I_{3,j} = y^((g1/Y)^e_{3,j}) mod p, for 1 ≤ j ≤ n.
4. The prover includes all the values I_{i,j} in P, where 1 ≤ i ≤ 3 and 1 ≤ j ≤ n.
5. The prover computes rnd = H(I_{1,1}, I_{1,2}, ..., I_{1,n}, I_{2,1}, ..., I_{2,n}, I_{3,1}, ..., I_{3,n}), where H is a cryptographic one-way function.
6. The prover gets the 3n values b_{i,j}, for 1 ≤ i ≤ 3 and 1 ≤ j ≤ n, from the 3n least significant bits of rnd. These are the challenge bits. Note that the verifier can calculate these bits given the values for I.
7. The prover computes z_{i,j} = e_{i,j} + b_{i,j}·k for 1 ≤ i ≤ 3 and 1 ≤ j ≤ n.
8. The prover includes the values z_{i,j} in P, where 1 ≤ i ≤ 3 and 1 ≤ j ≤ n.
The verifier accepts the proof iff all 3n checks pass and y = v^C mod p. This method of making a ZKIP non-interactive is due to Fiat and Shamir [FS86].
Key Recovery
REC_i recovers share i of the user's private key x as follows. REC_i takes C from P. It then computes share s_i to be C^z_i mod 2q, and outputs s_i on its tape. The authorities then pool their shares and each computes Y^k = ∏(i=1..m) s_i mod 2q. From this they can each compute x = C · Y^-k mod 2q, which is the user's private key. The escrow authorities can also recover the plaintext of users suspected of criminal activity without recovering the user's private key itself. To decrypt the ciphertext (a, b) of user U, the escrow authorities proceed as follows:
1. Each of the m escrow authorities receives C corresponding to U.
2. Escrow authority 1 computes s_1 = a^(C^-z_1) mod p.
3. Escrow authority i + 1 computes s_{i+1} = s_i^(C^-z_{i+1}) mod p.
4. Escrow authority m decrypts (a, b) by computing b/(s_m^C) mod p.
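Continuing the toy parameters of the sketch above (Python 3.8+; the values and the three-way split are illustrative), the chained session decryption can be checked end to end. x appears below only to build the test keys; in the protocol no single authority computes it:

```python
import secrets

r, q, p = 5, 11, 23
g, g1 = 5, 7

# Three authorities with shares z_i; Y = g1^(z1+z2+z3) mod 2q.
z = [secrets.randbelow(2 * r) for _ in range(3)]
Y = pow(g1, sum(z), 2 * q)

# User key, as in the GEN sketch above.
k = secrets.randbelow(2 * r)
C = pow(g1, k, 2 * q)
x = C * pow(Y, -k, 2 * q) % (2 * q)
y = pow(g, x, p)

# A sender encrypts m under the user's ElGamal key: (a, b) = (g^w, m*y^w).
m, w = 13, secrets.randbelow(2 * q)
a, b = pow(g, w, p), m * pow(y, w, p) % p

# Chained recovery: s_1 = a^(C^-z1), s_{i+1} = s_i^(C^-z_{i+1}), then b/(s_3^C).
s = a
for zi in z:
    s = pow(s, pow(C, -zi, 2 * q), p)
recovered = b * pow(pow(s, C, p), -1, p) % p
assert recovered == m
```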
This system allows for multiple CAs to be associated with the escrow authorities. Escrowing across escrow authorities' domains (e.g., different countries) can be solved by the users employing the long-lived Diffie-Hellman key as their
common key (which is recoverable by either country) or by bilateral agreements. For proofs of security we refer the reader to [YY98]. Note that only the user's public key is published, as in a regular public key system. In fact, it is insisted that the certificate of recoverability not be published. This is to prevent the establishment of a shadow public key for each user.
5.2
The Scheme from [YY99]
Mathematical Preliminaries
The system requires the following cryptographic assumption.
Problem 2: Without knowing the factorization of n, find x, where x ∈ Z*_2tn, given x^e mod 2tn and g^x mod p. Here p = 2tn + 1 and n = qr; p, q, and r are large primes; t is a small prime; g generates a large subgroup of Z_p; and gcd(e, φ(tn)) = 1. In this work e = 3.
It is also assumed that it is hard to compute the entire plaintext if reductions are performed modulo 2tn, as opposed to reducing modulo n as in RSA. Recall that t is a small prime number.¹ Intuitively, it seems that Problem 2 should be hard, since x^e mod 2tn is a presumed one-way trapdoor function of x, and g^x mod p is a presumed one-way function of x. Clearly, Problem 2 is not hard if cracking RSA is not hard, or if computing discrete logs is not hard.
System Setup
The escrow authority (or authorities) generate a shared Blum integer n = qr, where q and r are prime. The escrow authorities then make sure that gcd(3, φ(n)) = 1. If this condition does not hold, then the escrow authorities generate a new n. The escrow authorities then compute p = 2tn + 1, where t is drawn from the first, say, 256 strong primes starting from 11, inclusive. If p is found to be prime using one of these values for t, then the values for n and p have been found. If none of the values for t causes p to be prime, this entire process is repeated as many times as necessary. Note that t = 2t' + 1, where t' is prime. Since we insist that t > 7, we are guaranteed that gcd(3, φ(tn)) = 1. Once n and p are found, the escrow authorities generate the private shares d_1, d_2, ..., d_m corresponding to e = 3. A value g ∈R Z*_2tn is chosen such that g has an order that is at least as large as the smaller of q and r, in the field Z_p (recall that the factorization of n is not known). The values t, n, and g are made public. This system can be set up much faster than [YY98], since the escrow authority can generate a composite modulus very quickly, and in order to find a prime p, t can be varied as needed. The expected time to find such a p is inversely proportional to the density of primes. In contrast, in [YY98] the system setup relied on finding three primes with a rigid relationship between them. Heuristically, this means that sampling such primes may take an expected time which is inversely proportional to the density of the primes cubed.
¹ The CA can be given the value mod n, and the user can always choose values which are fixed and known mod 2t.
Key Generation
GEN operates as follows. It chooses a value x ∈R Z*_2tn and computes C = x^3 mod 2tn. x is the user's ElGamal private key. GEN then computes y = g^x mod p. The user's ElGamal public key is (y, g, p). Note that g may not necessarily generate Z_p, but we can make sure that it generates a large subgroup of Z_p. GEN also computes a non-interactive zero-knowledge proof based on C and y. The proof is constructed as follows:
1. Choose r_1, r_2, ..., r_N ∈R Z*_2tn.
2. Compute C_i = r_i^3 mod 2tn for 1 ≤ i ≤ N.
3. Compute v_i = y^r_i mod p for 1 ≤ i ≤ N.
4. Compute b = H((C_1, v_1), (C_2, v_2), ..., (C_N, v_N)) mod 2^N.
5. Set b_i = ((2^i AND b) > 0) for 1 ≤ i ≤ N.
6. Compute z_i = r_i · x^b_i mod 2tn for 1 ≤ i ≤ N.
Here N is the number of iterations in the NIZK proof (e.g., N = 40). Concerning step 1, technically the prover has a chance that one of the r_i will have q or r in its factorization, but this is highly unlikely. Note that b_i in step 5 results from a boolean test: b_i is 1 if the logical AND of 2^i and b is greater than zero, and 0 otherwise. The proof P is (C, (C_1, v_1), (C_2, v_2), ..., (C_N, v_N), z_1, z_2, ..., z_N). GEN leaves ((y, g, p), x, P) on the output tape.
Public Escrow Verification
VER takes ((y, g, p), P) on its input tape and outputs a boolean value. VER verifies the following things:
1. C^b_i · C_i = z_i^3 mod 2tn for 1 ≤ i ≤ N
2. v_i = (y^(1-b_i) · g^b_i)^z_i mod p for 1 ≤ i ≤ N
VER returns true iff both criteria are satisfied. Note that skeptical verifiers may also wish to check the parameters supplied by the escrow authorities (e.g., that n is composite, p is prime, etc.).
Key Recovery
REC_i recovers share i of the user's private key x as follows. REC_i takes C from P. It then recovers share s_i using the private share d_i. It outputs s_i on its tape. The authorities then pool their shares and x is computed.
Recovering Plaintext Data
The escrow authorities can recover the plaintext of users suspected of criminal activity without recovering the user's private key itself. In this section, it is assumed that the method being used is [BF97]. In this case the private decryption exponent is d = ∑(i=1..m) d_i mod φ(tn), and d is the inverse of 3 mod φ(tn). To decrypt the ElGamal ciphertext (a, b) of a user U, the escrow authorities proceed as follows:
1. Each of the m escrow authorities receives C corresponding to U.
2. Escrow authority 1 computes s_1 = a^(C^d_1) mod p.
3. Escrow authority i + 1 computes s_{i+1} = s_i^(C^d_{i+1}) mod p.
4. Escrow authority m decrypts (a, b) by computing b/(s_{m-1}^(C^d_m)) mod p.
Since the escrow authorities do not reveal the values C^d_i, no one can recover x. For proofs of security we refer the reader to [YY99].
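The key generation and verification of this scheme can likewise be exercised end to end. The following sketch is our toy instantiation, with q = 11, r = 23, the strong prime t = 83 (giving the prime p = 2tn + 1 = 41999), g = 2, N = 20 rounds, and SHA-1 standing in for H; a real deployment uses large primes and samples x and the r_i from Z*_2tn.

```python
import hashlib, secrets

# Toy parameters (ours): n = qr with q = 11, r = 23 (a Blum integer with
# gcd(3, phi(n)) = 1), and strong prime t = 83 making p = 2tn + 1 prime.
q, r = 11, 23
n = q * r
t = 83
p = 2 * t * n + 1               # 41999, prime
M = 2 * t * n                   # cubes and exponents live mod 2tn = p - 1
g, N = 2, 20                    # toy generator and round count (text: N = 40)

# GEN: private key x, escrowed as C = x^3 mod 2tn, public key y = g^x mod p.
x = secrets.randbelow(M - 2) + 2          # a careful GEN samples from Z*_2tn
C, y = pow(x, 3, M), pow(g, x, p)

# NIZK proof (Fiat-Shamir, H = SHA-1) tying C and y together.
rs = [secrets.randbelow(M - 2) + 2 for _ in range(N)]
Cs = [pow(ri, 3, M) for ri in rs]
vs = [pow(y, ri, p) for ri in rs]
h = int.from_bytes(hashlib.sha1(repr(list(zip(Cs, vs))).encode()).digest(), "big")
bits = [(h >> i) & 1 for i in range(N)]   # challenge bits from the hash
zs = [ri * pow(x, bi, M) % M for ri, bi in zip(rs, bits)]

# VER: both checks must hold in every round.
assert all(pow(C, bi, M) * Ci % M == pow(zi, 3, M)
           for Ci, bi, zi in zip(Cs, bits, zs))
assert all(vi == pow(pow(y, 1 - bi, p) * pow(g, bi, p) % p, zi, p)
           for vi, bi, zi in zip(vs, bits, zs))
```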
6
Depth-3 Escrow Hierarchy
The last solution can be combined with [YY98] to implement a depth-3 escrow hierarchy. The following is how to realize such a system. The escrow authorities generate a shared composite n such that q' = 2tn + 1 is prime, and such that p = 2q' + 1 is prime. Here t is a small prime of the form 2t' + 1, where t' is prime. Thus, from the root of the tree to the children of the root, the escrow system that is used is the one described in this section. It is somewhat more difficult to generate an appropriate prime 2tn + 1 in this case, since 4tn + 3 must also be prime (so we have the same inefficiency as in [YY98]). Each child of the root (intermediate node) then generates a (potentially shared) public key Y mod 2q'. Thus Y is an ElGamal public key in ElGamal mod 2q'. The leaves corresponding to (i.e., under) each of these intermediate children then generate escrowed keys based on the values for Y, using the algorithm from [YY98]. Thus, the [YY98] algorithm is used between the intermediate nodes and the users at the leaves. Note that in this case the generator that is used in Y may only generate a large subgroup of Z*_2q'.
7
Recent Developments
There are a number of things that could improve on these Auto-Recoverable Auto-Certifiable cryptosystems. For instance, it is desirable to eliminate the assumption that Problem 1 and Problem 2 are hard. Also, note that in both systems the user's public keys have a special algebraic form, as dictated by their reliance on the shared public key of the escrow authorities. In a generic system one would like to be able to escrow generic keys (RSA and regular ElGamal). This gives rise to the following additional requirements.
19. Employing Generic Keys: The systems use the traditional public keys: RSA/factoring-based or ElGamal variants.
20. Compatible User: The only change for the user is additional information sent during key registration (or re-registration).
21. No Cascading Changes: The users do not have to change their applications which employ cryptography, not even within the PKI applications (namely, they use the same general cryptographic functions and the same software; all we change is some added procedure in registration).
22. Separation of Users and Escrow Agents: The escrow authorities are managed and constructed independently of the users (only their public key(s) need be known).
23. Independent User Keys: The user's key is independent of any third party key and is produced in much the same way as in an unescrowed PKI. The users can keep their basic cryptographic algorithms (key generation, encryption, etc.).
24. Multiple Escrow Authorities: Users can register for escrow with multiple escrow authorities.
25. Coexistence of Escrow and Non-Escrow: Users can, under the same CA, have both unescrowed keys and escrowed ones (and can transfer unescrowed keys to escrowed ones).
26. Escrow Hierarchy: A multi-level security system can be implemented where escrow authorities at each level can access all information below them in the hierarchy, and none of the information above.
8
Forthcoming Work
In work that is yet to appear, we present an Auto-Recoverable Auto-Certifiable cryptosystem with ElGamal user keys that does not involve any new cryptographic assumptions (like Problem 1 or Problem 2 being hard). In fact, all that is assumed is the existence of a semantically secure PKCS (though the random oracle model is still used). Hence, the shared public key of the escrow authority can have any algebraic form, so long as it is part of a semantically secure PKCS. The solution decouples the algebraic connection between the user keys and the shared public key of the escrow authority, and thus gives rise to a completely new feature in Auto-Recoverable Auto-Certifiable cryptosystems. It enables "drop-in replacement" of certified public keys. This results from the fact that the public key of the user can be generated in exactly the same way as in an unescrowed PKI. Thus, should a user decide to escrow his or her public key, he or she can do so at any time, even after the public key is made public. The user need only construct the certificate of recoverability at a later time and submit it for verification by the CA. In future work we will be presenting such a system where the user's public key is an RSA public key [YY-ms]. This decoupling of the algebra behind the public keys also enables arbitrary-depth key escrow hierarchies. The new systems have the properties specified above.
References
BGw97. M. Bellare, S. Goldwasser. Verifiable Partial Key Escrow. In ACM CCCS '97.
BF97. D. Boneh, M. Franklin. Efficient Generation of Shared RSA Keys. In Advances in Cryptology—CRYPTO '97, 1997. Springer-Verlag.
DDFY94. A. De Santis, Y. Desmedt, Y. Frankel, M. Yung. How to Share a Function Securely. In ACM Symp. on Theory of Computing, pages 522–533, 1994.
DH76. W. Diffie, M. Hellman. New Directions in Cryptography. In volume IT-22, n. 6 of IEEE Transactions on Information Theory, pages 644–654, Nov. 1976.
ElG85. T. ElGamal. A Public-Key Cryptosystem and a Signature Scheme Based on Discrete Logarithms. In CRYPTO '84, pages 10–18.
FD89. Y. Frankel, Y. Desmedt. Threshold Cryptosystems. In CRYPTO '89, pages 307–315.
FS86. A. Fiat, A. Shamir. How to Prove Yourself: Practical Solutions to Identification and Signature Problems. In CRYPTO '86, pages 186–194.
FY95. Y. Frankel, M. Yung. Escrow Encryption Systems Visited: Attacks, Analysis and Designs. In CRYPTO '95, pages 222–235.
FY97. Y. Frankel, M. Yung. On Characterization of Escrow Encryption Schemes. In ICALP '97.
GHY85. Z. Galil, S. Haber, M. Yung. Symmetric Public-Key Encryption. In CRYPTO '85, pages 128–137, 1985.
K-S. H. Abelson, R. Anderson, S. Bellovin, J. Benaloh, M. Blaze, W. Diffie, J. Gilmore, P. Neumann, R. Rivest, J. Schiller, B. Schneier. The Risks of Key Recovery, Key Escrow, and Trusted Third-Party Encryption. Available at http://www.crypto.com/key study
KL95. J. Kilian and F.T. Leighton. Fair Cryptosystems Revisited. In CRYPTO '95, pages 208–221, 1995. Springer-Verlag.
Koh78. L. Kohnfelder. A Method for Certification. MIT Lab. for Computer Science, Cambridge, Mass., May 1978.
LWY95. A. Lenstra, P. Winkler, Y. Yacobi. A Key Escrow System with Warrant Bounds. In CRYPTO '95, pages 197–207, 1995.
Mi92. S. Micali. Fair Public-Key Cryptosystems. In CRYPTO '92, pages 113–138, 1992. Springer-Verlag.
PW97. B. Pfitzmann, M. Waidner. How to Break "Fraud-Detectable Key Escrow". Eurocrypt '97 rump session.
Ro93. K. H. Rosen. Elementary Number Theory and Its Applications. 3rd edition, Theorem 8.14, page 295, 1993. Addison-Wesley.
St96. M. Stadler. Publicly Verifiable Secret Sharing. In Eurocrypt '96, pages 190–199, 1996. Springer-Verlag.
T97. H. Tiersma. Unbinding ElGamal - An Alternative to Key-Escrow? Eurocrypt '97 rump session.
VT97. E. Verheul, H. van Tilborg. Binding ElGamal: A Fraud-Detectable Alternative to Key-Escrow Proposals. In Eurocrypt '97, pages 119–133, 1997.
YY96. A. Young, M. Yung. The Dark Side of Black-Box Cryptography. In CRYPTO '96, pages 89–103.
YY97a. A. Young, M. Yung. Kleptography: Using Cryptography against Cryptography. In Eurocrypt '97, pages 62–74.
YY97b. A. Young, M. Yung. The Prevalence of Kleptographic Attacks on Discrete-Log Based Cryptosystems. In CRYPTO '97, pages 264–276. Springer-Verlag.
YY98. A. Young, M. Yung. Auto-Recoverable and Auto-Certifiable Cryptosystems. In Advances in Cryptology—Eurocrypt '98.
YY99. A. Young, M. Yung. Auto-Recoverable Cryptosystems with Faster Initialization and The Escrow Hierarchy. In PKC '99.
YY-ms. A. Young, M. Yung. Manuscript (available from the authors).
A Distributed Intrusion Detection System Based on Bayesian Alarm Networks

Dusan Bulatovic¹ and Dusan Velasevic²

¹ Informatika, Jevrejska 32, 11000 Belgrade, Yugoslavia
[email protected]
² University of Belgrade, School of Electrical Engineering, 11000 Belgrade, Yugoslavia
[email protected]
Abstract. Intrusion detection in a large network must rely on many distributed agents instead of one large monolithic module. Agents should have some kind of artificial intelligence in order to cope successfully with different intrusion problems. In this paper, we suggest a Bayesian alarm network that works as an independent Network Intrusion Detection Agent. We show that when the system is narrowed to detecting one specific type of attack in a large network, for example a denial of service, virus, worm, or privacy attack, much more prior knowledge regarding the attack can be induced into the system. Different nodes of the network can develop their own model of a Bayesian alarm network, and agents can communicate among themselves and with a common security database. The networks should be organized hierarchically, so that on the higher level of the hierarchy the Bayesian alarm network, thanks to its interconnections with lower-level networks and data, acts as a distributed Intrusion Detection System.
1 Introduction

Due to increased connectivity (especially on the Internet) and the vast financial possibilities that are opening up in electronic trade, more and more computer networks and hosts are subject to attack. One way to prevent subversion is to build a completely secure system. However, this is not possible in practice. The vast installed base of systems worldwide guarantees that any transition to a secure system and network, if ever attempted, would be a long time in coming. It seems obvious that we cannot prevent subversion. Tools are therefore necessary to monitor systems, to detect attacks, and to respond actively to them. This is essentially what an Intrusion Detection System (IDS) is expected to do. An intrusion is defined [1] as any set of actions that attempt to compromise the integrity, confidentiality, or availability of a resource. It is a violation of the security policy of the system. Any definition of an intrusion is, of necessity, imprecise, as security policy requirements do not always translate into a well-defined set of actions. Intrusion detection, on the other hand, is the methodology by which intrusions are detected. This methodology can be divided into two categories of intrusion, misuse intrusion and anomaly intrusion, which can be described as follows:
• Misuse intrusions are well-defined attacks on known weak points of a system. They can be spotted by watching for certain actions being performed on certain objects.
• Anomaly intrusions are based on observations of deviations from normal system usage patterns. They are detected by building up a profile of the system being monitored, and detecting significant deviations from this profile.
As misuse intrusions follow well-defined patterns, they can be detected by performing pattern matching on audit-trail information. Anomaly intrusions, however, are harder to detect. There are no fixed patterns that can be monitored for, and so we need a system that combines human-like pattern matching capabilities with the vigilance of a computer program. It would always be monitoring the system for potential intrusions, but would be able to ignore spurious false intrusions if they resulted from legitimate user actions; so another goal is to minimize the probability of incorrect classification. In large networks, Intrusion Detection Systems must rely on network-wide information. Often, the use of many distributed agents instead of one large monolithic IDS module will give better results. Agents should have some kind of artificial intelligence in order to cope successfully with different intrusion problems. As a future direction in developing IDSs, it is believed that Bayesian networks should be used. In the general case it is not clear how to do this, but we will show that when detection is narrowed to one specific type of attack, for example a denial of service, virus, worm, or privacy attack, we can induce much more prior knowledge into the system regarding the attack. Before we present our solution, we will first describe three corresponding methods of network intrusion detection.
2 Use of Genetic Programming in Intrusion Detection
Many seemingly different problems in artificial intelligence can be viewed as requiring the discovery of a computer program that produces some desired output for particular inputs. When viewed in this way, solving these problems becomes equivalent to searching a space of possible computer programs for the fittest individual program. This is the approach chosen in [2] for building an IDS. Instead of one large monolithic IDS module, a finer-grained approach is used, with a group of free-running processes that can act independently of each other and of the system. They are called autonomous agents. An agent is defined [3] as a system that tries to fulfill a set of goals in a complex, dynamic environment. In this context, every agent tries to detect anomalous intrusions in a computer system under continually changing conditions; in other words, the agent is the IDS. If an IDS can be split up into multiple functional entities that can operate in their own right, each of them can be an agent. This gives multiple intrusion detection systems running simultaneously. The agents run in parallel in the system. Each agent is a lightweight program that observes only one small aspect of the overall system. A single agent alone cannot form an effective intrusion detection system, since its view of the overall system is too limited in scope. However, if
many agents all operate on a system, then a more complicated IDS can be built. Agents are independent of each other; they can be added to and removed from the system dynamically. The agent code is composed of a set of operators (arithmetic, logical, and conditional) and a set of primitives that obtain the values of metrics. As is usual with genetic programming, these sets can be combined in any way during an evaluation run to generate parse trees for solution programs.
Fig. 1: Sample internal parse tree for an agent. For each packet of data obtained from the lower layer of the system, the agent evaluates the tree IF (IP-DEST NEQ MY-IP) then RAISE, where RAISE sends a broadcast message to all other agents.
Figure 1 shows a sample parse tree for an agent. The terminals in the parse tree (the primitives IP-DEST, MY-IP, and RAISE) obtain their values from the system abstraction layer. In this simple example, the primitive IP-DEST would obtain the IP destination address of the current packet from the abstraction layer. The advantages of using genetic programming, viewed through the model of autonomous agents, are efficiency, fault tolerance, resilience to degradation, extensibility, and scalability. Having many small agents has a number of advantages over a single monolithic IDS. A clear analogy can be drawn between this proposal and the human immune system: the immune system consists of many white blood cells dispersed throughout the body, which must attack anything they consider alien before it poses a threat to the body. The foreseen drawbacks include the overhead on both hosts and network caused by so many processes, long training times, and the fact that if the system is subverted, it becomes a security liability. An interesting possibility the agents open up is that of an active defense that responds to intrusions actively instead of passively reporting them (it could kill suspicious connections, for example). Developing good training scenarios is an important issue with this model and should be an area for future investigation.
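For concreteness, the agent of Figure 1 could be evaluated per packet roughly as follows; this evaluator is our own sketch of the idea, not code from [2], and MY_IP and the packet contents are invented:

```python
# Parse tree from Fig. 1: IF (IP-DEST NEQ MY-IP) then RAISE.
# Trees are nested tuples; bare strings are primitives resolved
# through the system abstraction layer.
AGENT_TREE = ("IF", ("NEQ", "IP-DEST", "MY-IP"), ("RAISE",))

MY_IP = "10.0.0.5"   # illustrative address of the monitored host

def evaluate(node, packet):
    """Recursively evaluate an agent parse tree against one packet."""
    if node == "IP-DEST":
        return packet["ip_dest"]          # primitive: packet's destination
    if node == "MY-IP":
        return MY_IP                      # primitive: this host's address
    op = node[0]
    if op == "NEQ":
        return evaluate(node[1], packet) != evaluate(node[2], packet)
    if op == "IF":                        # run the branch only if the test holds
        return evaluate(node[2], packet) if evaluate(node[1], packet) else None
    if op == "RAISE":                     # broadcast a message to all agents
        return "RAISE: suspicious packet seen"

print(evaluate(AGENT_TREE, {"ip_dest": "192.168.1.9"}))
```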
3 Graph-Based Intrusion Detection
This approach to intrusion detection will be described using the model developed by a group of authors at the University of California, Davis [4]. Their work was inspired by the Internet Worm attack (1988), which caused the Internet to be unavailable for about five days [5]. They designed GrIDS, a Graph-based Intrusion Detection System, in order to develop a secure infrastructure capable of defending the Internet and other large networks. Its primary function is to detect and analyze large-scale attacks, although it also has the capability of detecting intrusions on individual hosts. The operation of the GrIDS system is illustrated by the simple example of tracking a worm and building the corresponding activity graph. In Figure 2 the worm begins on host A, then initiates connections to hosts B and C, which causes them to be infected. The two connections are reported to GrIDS, which creates a new graph representing this activity and records when it occurred. The two connections are placed in the same graph because they are assumed to be related; in this case, because they overlap in the network topology and occur close together in time.
Fig. 2: The beginning of a worm graph (A connecting to B and C), and the graph after the worm has spread (to D and E)
If enough time passes without further activity from hosts A, B, or C, the graph will be discarded. However, if the worm spreads quickly to hosts D and E, as in the figure, then this new activity is added to the graph and the graph's time stamp is updated. Graph-based intrusion detection is a helpful step toward defending against widespread attacks on networks. It presents network activities to humans as highly comprehensible graphs. In addition, its policy mechanisms give organizations much greater control over the use of their networks than is possible, for example, with firewalls alone. GrIDS, the implementation of graph-based intrusion detection, is designed to detect large-scale attacks or violations of an explicit policy. However, a widespread attack that progresses slowly will not be diagnosed by its aggregation mechanism. Additional safeguards must also be taken to ensure the integrity of communications between GrIDS modules, and to prevent an attacker from replacing parts of GrIDS with malicious software of her own.
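The aggregation behaviour just described can be sketched as follows; this is our simplification of the idea, not the GrIDS implementation [4], and the timeout value is arbitrary:

```python
import time

TIMEOUT = 60.0   # seconds of inactivity before a graph is discarded (arbitrary)

class ActivityGraph:
    """One time-stamped graph of related network activity."""
    def __init__(self):
        self.nodes, self.edges = set(), []
        self.last_update = time.time()

    def add(self, src, dst):
        self.nodes |= {src, dst}
        self.edges.append((src, dst))
        self.last_update = time.time()    # refresh the graph's time stamp

graphs = []

def report_connection(src, dst):
    """Merge a reported connection into a topologically overlapping graph
    that is still fresh, or start a new graph."""
    graphs[:] = [g for g in graphs
                 if time.time() - g.last_update < TIMEOUT]   # drop stale graphs
    for g in graphs:
        if src in g.nodes or dst in g.nodes:                 # overlap: related
            g.add(src, dst)
            return
    g = ActivityGraph()
    g.add(src, dst)
    graphs.append(g)

# The worm of Fig. 2: A infects B and C, then the worm spreads to D and E.
for conn in [("A", "B"), ("A", "C"), ("B", "D"), ("C", "E")]:
    report_connection(*conn)
print(graphs[0].edges)   # all four connections aggregated into one graph
```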
4 Cooperative Intrusion Detection for Detecting Denial of Network Service
Denial of service in routing infrastructures (routers and routing protocols) may be caused by natural faults as well as by malicious attacks. To protect network infrastructures from routers that incorrectly drop or misroute packets, Cheung and Levitt [7] used a detection-response approach. They presented protocols that detect and respond to such misbehaving routers. The protocols are designed to detect and respond to two types of denial of service: "black hole" routers and routers that misroute packets. One of the proposed protocols, distributed probing, is applicable to detecting network sinks and misrouting routers that cause denial of service, that is, misrouted packets that cannot reach their destinations. With distributed probing, a router can diagnose its neighboring routers by sending them directly (i.e., without passing through intermediate routers) a test packet whose destination router is the tester itself. Based on whether the tester gets the test packet back within a certain time interval, it can deduce the goodness of the tested router. The network is modeled by a directed graph G = (V, E) where vertices denote routers and edges denote communication channels. An edge (i, j) ∈ E is called testable if cost(j, i) is strictly less than the cost of any other path from j to i in G, where the cost of a path is the sum of the costs of all its edges. Figure 3 shows a network example with three routers, namely a, b, and c, and three edges (b,c), (b,a), and (c,b). If (c,b) is testable and router c sends a packet p whose destination is c itself to b, then, in the distributed probing protocol, p will return to c if and only if b does not misbehave on p.
Fig. 3: Testable edges (routers a, b, and c, with edge costs c1, c2, c3 and test packet p)
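The probing rule itself is simple; the sketch below is our illustration of the idea in [7], with the channel to the neighbour abstracted into a function argument and the timing details omitted:

```python
def probe_neighbor(tester, neighbor, send_via):
    """Distributed probing: diagnose `neighbor` by sending it a test packet
    whose destination is the tester itself (meaningful only over a testable
    edge). `send_via(packet)` stands in for the real channel and returns the
    packet if it came back within the time limit, else None."""
    packet = {"src": tester, "dst": tester, "via": neighbor}
    echoed = send_via(packet)
    # Over a testable edge, the packet returns iff the neighbor behaves well.
    return echoed is not None and echoed["dst"] == tester

# A well-behaved router forwards the packet back toward its destination;
# a "black hole" silently drops it.
good_router = lambda p: p
black_hole = lambda p: None

print(probe_neighbor("c", "b", good_router))   # True:  b looks good
print(probe_neighbor("c", "b", black_hole))    # False: c suspects b
```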
This model of cooperative work for detecting denial of service unfortunately does not solve the entire denial of service problem of routing infrastructures. There are router failures not covered by these failure models; for example, a compromised router may modify the body of a transit packet. Also, link failures are not modeled, so a link failure that results in packet loss may be viewed as a node failure. Finally, these models only consider transit traffic; in other words, packets sent by source hosts to source routers and those sent by destination routers to destination hosts are not addressed. Nevertheless, this model represents a first step in protecting routing infrastructures from denial of service using an intrusion detection approach.
5 Our Proposal: A Bayesian Alarm Network as an Independent Intrusion Detection Agent
The Bayesian approach to probability and statistics differs from the classical one. Whereas a classical probability is a physical property of the world (e.g., the probability that a coin will land heads), a Bayesian probability is a person's degree of belief in an event. An important difference between physical probability and personal probability is that, to measure the latter, we do not need repeated trials. While the classical statistician has a hard time measuring the probability that a cube will land with a particular face up, the Bayesian simply restricts his attention to the next toss and assigns a probability. For some events it is not possible to measure the probability, which is why Bayesian classification represents an interesting tool for intrusion detection. This technique of unsupervised classification of data, and its implementation AutoClass [8], searches for classes in given data using Bayesian statistical techniques. It attempts to determine the most likely process(es) that generated the data. It does not partition the given data into classes but defines a probabilistic membership function of each datum in the most likely classes. Bayes' rule does not provide an algorithm for classification: the designers of a Bayesian classifier are faced with the computationally intractable problem of searching the hypothesis space for the optimal distribution that produced the observed data, and with the controversial problem of estimating the priors. In the case where we are faced with a large number of variables and relationships among them, a Bayesian network is a representation suited to solving the problem. It is a graphical model (a directed acyclic graph, DAG) that can efficiently encode the joint probability distribution (physical or Bayesian) of a large set of variables. The idea of using Bayesian or other belief networks in Intrusion Detection Systems comes from the need to combine different anomaly measures when detecting intrusions. Bayesian networks [10] allow the representation of causal dependencies between random variables in graphical form and permit the calculation of the joint probability distribution of the random variables by specifying only a small set of probabilities, relating only to neighboring nodes. This set consists of the prior probabilities of all the root nodes (nodes without parents) and the conditional probabilities of all the non-root nodes given all possible combinations of their direct predecessors. Bayesian networks, which are DAGs with arcs representing causal dependence between parent and child, permit the absorption of evidence when the values of some random variables become known, and provide a computational framework for determining the conditional values of the remaining random variables given that evidence. Figure 4 gives a trivial Bayesian network modeling intrusive activity. Each box represents a binary random variable with values representing either its normal or abnormal condition. If we can observe the values of some of these variables, we can use the Bayesian network calculus to determine P(Intrusion | Evidence).
However, to determine the a priori probability values of the root nodes and the link matrices for each directed arc in the general case, where many different intrusions are possible, we must incorporate a substantial amount of knowledge concerning the different types of attacks that can be used to compromise system security, as well as the conditional probabilities that various well-defined events will occur given that those attacks are in progress.
Fig. 4: Bayesian alarm network, general case (nodes: Intrusion; Too Many Users; Too Many CPU-Intensive Jobs; Too Many Disk-Intensive Jobs; Disk I/O; Fragmentation; CPU; Net I/O; Newly Available Program on the Net; Thrashing)
Unfortunately, the intrusion-detection community is at the moment only at the first stage of trying to assemble this type of knowledge. Our proposal, because of the complexity of finding a general solution, is not to use the Bayesian alarm network as a universal, standalone Intrusion Detection System. Instead, it can be used as an independent intrusion detection agent for detecting one specific type of network attack. This way we need to induce into the system prior knowledge regarding that type of attack only. At the same time, other nodes of a large network can develop their own models of the alarm network for detecting the same kind of attack, freely entering local beliefs about data sensitivity and expectation of the attack. These agents should be able to communicate among themselves on a broadcast or search principle, as required. Besides being able to communicate among themselves, this approach requires the agents to communicate with a common database, the Bayesian Management Information Base (BMIB), which contains information regarding the attacks in progress. However, different sites will normally select different vendors and, since network incidents are often distributed over multiple sites, it is likely that a single incident will be visible in different ways. Clearly, it would be necessary for these diverse intrusion detection systems to be able to share data. A solution to that problem could result from the work of the new Intrusion Detection working group established in the Security Area of the IETF to define data formats and exchange procedures for sharing information of interest to intrusion detection and response systems and the management systems that have to interact with them. For our model, however, not only the data format in the BMIBs and the exchange procedure must be standardized, but also the notation of the network attacks, such as P for privacy, V for virus, W for worm, D for denial of service, etc.
For the definition of an architectural model that could be used in the implementation of the security management system, a hierarchical organization of the networks and BMIBs is suggested [12]. The lower level of the hierarchy should include a small number of interconnected physical networks. A network at the upper level interconnects the lower-level networks, and its BMIB contains relevant information regarding attacks in the wider area. Due to the sophisticated nature of network attacks, security management cannot rely only on real-time monitoring of security measures. The network manager also needs to store historical security information in a database and analyze it, in order to detect an attack as a symptom of past correlated events and to discover the attacker. As an illustration of solving the specific problem of detecting a privacy attack on sensitive medical records, Figure 5 gives the simplified structure of a corresponding Bayesian alarm network. One possible choice of variables for this problem is Intrusion (I), Aids (A), External (E), Medical (M), Nonmedical (N), and Outsider (O), where I represents that the current access to sensitive records is an intrusion of privacy, A an access to records with a diagnosis of AIDS, and E an access to external sensitive records (another ward or hospital). Variables M, N, and O denote that the access is performed by medical staff, nonmedical staff, or an outsider.
Fig. 5: One specific attack (privacy intrusion), a simpler Bayesian alarm network (nodes: I - Intrusion; M - Medical staff; N - Nonmedical; O - Outsider; A - AIDS; E - External)
In this example, using the ordering (I, M, N, O, E, A), we have the following conditional independencies:
p(m | i) = p(m),  p(n | i, m) = p(n),  p(o | i, m, n) = p(o),  p(e | i, m, n, o) = p(e | i),  p(a | i, m, n, o, e) = p(a | i, m, n, o).   (1)
As can be seen, we consider accesses to sensitive records at different places to be conditionally independent. Also, accesses by medical staff, nonmedical staff, or an outsider are mutually conditionally independent. Our judgments about conditional independence between the various variables lead us to a network structure in which it is easier to compute the probability of interest (the probability of intrusion):
p(i | m, n, o, e, a) = p(i, m, n, o, e, a) / p(m, n, o, e, a) = p(i, m, n, o, e, a) / Σ_{i′} p(i′, m, n, o, e, a).   (2)
Given the conditional independencies in Equation (1), we can make this computation more efficient:
p(i | m, n, o, e, a) = [p(i) p(m) p(n) p(o) p(e | i) p(a | i, m, n, o)] / [Σ_{i′} p(i′) p(m) p(n) p(o) p(e | i′) p(a | i′, m, n, o)]   (3)
i.e.:
p(i | m, n, o, e, a) = [p(i) p(e | i) p(a | i, m, n, o)] / [Σ_{i′} p(i′) p(e | i′) p(a | i′, m, n, o)].   (4)
From the presented example it is also possible to conclude that the requirement for prior knowledge regarding the attack is not a drawback in the case of a Bayesian network developed to detect one specific kind of intrusion. As we concentrate here on one type or a small subset of intrusions, it can be expected that we have more knowledge regarding the matter. At the same time, thanks to interconnections with other alarm networks at the same level of the hierarchy and with the corresponding BMIB, we should be able to collect more knowledge and data regarding the same type of attack. (The connection to other alarm networks is symbolically denoted here by the variable E, access to external sensitive records in another ward or hospital.) At a higher level of the hierarchy, based on its interconnections with lower-level networks and on data from the Bayesian Management Information Base, the Bayesian alarm network will be able to monitor network attacks in a wider area and can act as a Distributed Intrusion Detection System. Using recorded data from the BMIBs, such a distributed Intrusion Detection System, with knowledge assembled from different Bayesian alarm networks, could have integrated human-like and computer-program intrusion detection capabilities.
6 Discussion
We have shown that our model allows a Bayesian alarm network to work as an independent Intrusion Detection System and, at the same time, to be part of a larger distributed IDS. In contrast to other agent-based intrusion detection approaches, our agents do not have limited capabilities but can work as standalone IDSs. Different sites, having selected different vendors, can independently develop their own models of the alarm network for detecting the same kind of attack, and the agents will still be able to communicate among themselves thanks to the standard data format, exchange procedure, and notation of the attacks.
Besides standardization of messaging, an important element of our approach is the introduction of the Bayesian Management Information Base (BMIB) concept. The BMIB stores information regarding attacks in progress as well as historical security information. Based on a hierarchical organization of networks, it is possible to develop a distributed Intrusion Detection System that uses information from the BMIBs and communicates with lower-level networks. With a Bayesian alarm network in conjunction with Bayesian statistical techniques, we can easily overcome the problem of missing data and facilitate the combination of prior knowledge and data, especially in the case, usual in intrusion detection, where no experiments are available. Finally, with a Bayesian alarm network we have no problem with different types of data, as different types of attributes may be freely mixed. A Bayesian alarm network in the described framework can be used not only to detect intrusions but also to play an active role in protecting networks. Due to the nature of Bayesian probability, it could be able to prevent an ongoing attack even if that kind of attack has not been evidenced before.
References
1. R. Heady, G. Luger, A. Maccabe, and M. Servilla: The Architecture of a Network Level Intrusion Detection System. Technical report, University of New Mexico, Department of Computer Science, August 1990.
2. M. Crosbie: Defending a Computer System using Autonomous Agents. In Proceedings of the 18th NISSC Conference, October 1995.
3. P. Maes: Modeling adaptive autonomous agents. Artificial Life 1(1/2), 1993.
4. S. Staniford-Chen, S. Cheung, R. Crawford, M. Dilger, J. Frank, J. Hoagland, K. Levitt, C. Wee, R. Yip, D. Zerkle: "GrIDS - A Graph-Based Intrusion Detection System for Large Networks". The 19th National Information Systems Security Conference, 1996.
5. M. Eichin and J. Rochis: With microscope and tweezers: An analysis of the Internet worm of November 1988. IEEE Symposium on Research in Security and Privacy, 1989.
6. D. Seely: A tour of the worm. IEEE Trans. on Soft. Eng., November 1991.
7. S. Cheung, K. N. Levitt: Protecting Routing Infrastructure from Denial of Service Using Cooperative Intrusion Detection. In Proceedings of the New Security Paradigms Workshop, Cumbria, UK, September 1997.
8. P. Cheeseman, J. Stutz, and R. Hanson: Bayesian classification with correlation and inheritance. Proceedings of the 12th International Joint Conference on Artificial Intelligence, pages 692-698, San Mateo, California, 1991.
9. K. Fukunaga: Introduction to Statistical Pattern Recognition. Academic Press, New York, 1990.
10. D. Heckerman: Probabilistic Similarity Networks. MIT Press, 1991.
11. G. Finn: "Reducing the Vulnerability of Dynamic Computer Networks," ISI Research Report RR-88-201, University of Southern California, June 1988.
12. T. Apostolopoulos, V. Daskalou: "On the Implementation of a Prototype for Performance Management Services", Proceedings of the IEEE Int. Symp. on Computers and Communications, ISCC'95, 1995.
13. D. Comer: "Internetworking with TCP/IP," Vol. 1, Prentice Hall, 1991.
14. R. Perlman: "Interconnections: Bridges and Routers," Addison-Wesley, 1992.
15. B. Mukherjee, L. T. Heberlein, and K. N. Levitt: Network Intrusion Detection. IEEE Network, May/June 1994, pages 26-41.
Interoperability Characteristics of S/MIME Products

Sarbari Gupta¹, Jerry Mulvenna², Srinivas Ganta¹, Larry Keys², and Dale Walters²

¹ CygnaCom Solutions, Inc., 7927 Jones Branch Drive Suite 100W, McLean, VA
{sgupta, sganta}@cygnacom.com
² National Institute of Standards and Technology, Gaithersburg, MD
{mulvenna, keys, walters}@csmes.ncsl.nist.gov
Abstract. S/MIME is based upon the popular MIME standard, and describes a protocol for adding cryptographic security services through MIME encapsulation of digitally signed and encrypted objects. The S/MIME version 2 specification was designed to promote interoperable secure electronic mail. However, because the specification allows multiple interpretations and implementations, and is sometimes silent about key aspects that affect interoperability, a number of “S/MIME Enabled” products are available on the market that are incapable of fully interacting with one another. In this paper, we present a set of characteristics that affect the interoperability profile for a given S/MIME application, and illustrate how they may be used to achieve a higher level of interoperability within the family of S/MIME compliant products. We also analyze the S/MIME version 3 specification to determine what subset of the identified interoperability characteristics still remain to be adequately addressed.
1 Introduction
S/MIME (Secure / Multipurpose Internet Mail Extensions) is a specification for securing electronic mail. S/MIME is based upon the popular MIME standard, and describes a protocol for adding cryptographic security services through MIME encapsulation of digitally signed and encrypted objects. The exact security services offered by S/MIME are authentication, non-repudiation, message integrity, and message privacy. The S/MIME Version 2 specifications were designed to promote interoperable secure electronic mail, such that two compliant implementations would be able to communicate securely with one another [6, 7]. However, because the specification allows multiple interpretations and implementations, and is sometimes silent about key aspects that affect interoperability, what has resulted is the availability of multiple S/MIME compliant commercial products that are not capable of fully interoperating with one another with respect to secure messaging. Recently, the S/MIME Version 3 specifications were passed by the IESG (Internet Engineering Steering Group) and are in the process of being published as RFC (Request For Comment) standards by the IETF (Internet Engineering Task Force) [8, 9]. However, this paper describes the findings of a set of interoperability experiments that were conducted using commercial-off-the-shelf (COTS) S/MIME version 2 products from different vendors. The experiments were
designed to test the interoperability between peer S/MIME applications, between S/MIME applications and Certification Authority products, and between S/MIME applications and Directories. Other groups have also conducted tests on S/MIME applications and have published results [10]. All of the S/MIME implementations tested have been awarded the “S/MIME Enabled” seal based upon compliance tests conducted by RSA Labs. [Appendix A lists the actual products that were used in the tests.] Yet, there were a significant number of scenarios, where interoperability between the implementations was either limited or unachievable. From the test results, we concluded that there are a number of characteristics or properties that affect the interoperability of a given S/MIME application with other S/MIME applications, Certification Authority products and Directory products. These characteristics are neither part of the S/MIME version 2 specifications, nor do they appear in the S/MIME compliance testing methodology adopted by RSA. The recently approved S/MIME version 3 standard addresses some, but not all of the characteristics described in this paper. In this paper, we discuss these characteristics and illustrate how they affect the interoperability profile for a given S/MIME application. Interoperability is a prime concern of users of S/MIME implementations. Awareness of these characteristics may help to fine tune the S/MIME specifications to support a greater level of interoperability. They may also help the developers of S/MIME applications to make design decisions that would further the cause of interoperability. Additionally, these characteristics may help individuals who are procuring S/MIME applications to differentiate between the available implementations and select the one that most closely meets their interoperability needs. Finally, although these characteristics were derived from tests conducted upon S/MIME implementations, they may be applied to any end-user security application that requires a public key infrastructure. The rest of this paper is organized as follows. Section 2 describes the necessary background including the evolution and current status of the S/MIME specification. Section 3 describes a categorized set of characteristics that impact the ability of an S/MIME implementation to interoperate with other implementations. Section 4 discusses how the findings in this paper can be used to attain a higher level of awareness about the potential bottlenecks to interoperability. Section 5 analyzes the S/MIME version 3 specifications to identify the S/MIME interoperability characteristics that have been adequately addressed, and the ones that still need more attention at the specification level. Finally, our conclusions are presented in Section 6.
2 Background

2.1 Evolution of the S/MIME Standard
The Multipurpose Internet Mail Extension (MIME) was also developed by the IETF, and was designed to support non-textual data (such as graphics data or video data) as the content of an Internet message [4,5]. Additional structure was imposed on the MIME message body to provide an encryption and digital signature service as part of the S/MIME specification.
2.2 S/MIME Version 2
The S/MIME specification uses data structures that conform to Public Key Cryptographic Standard (PKCS) #7 [1]. PKCS #7 is a cryptographic message syntax designed to specify the content and form of the information required to provide an encryption and digital signature service. S/MIME implementations support several different symmetric content encryption algorithms. The RC2 algorithm with a key size of 40 bits is supported, even though it provides weak encryption, in order to comply with U.S. export regulations. In addition, in most S/MIME implementations, the user can choose DES, Triple DES, or RC2 with a key size greater than 40 as the content encryption algorithm. The user can normally select either SHA-1 or MD5 as the message digest algorithm; the receiver's application must be able to process both algorithms. The sender's system must use the RSA public key algorithm with a key size ranging from 512 to 1024 bits to sign a message digest or to encrypt the content encrypting key. A Certification Authority (CA) issues certificates that bind a user's identity to a public key. This binding is only as strong as the out-of-band verification that the CA performs before issuing the certificate. Since many CAs can issue certificates, there must be a method of establishing trust among CAs so that each user can trust the information in a certificate issued by a CA other than his own. After the public certificate is issued, there must be a method by which the certificate is made available to other users. The certificate must be in a standard format so that the information in the certificate can be processed by applications built by different vendors. Deployment of S/MIME secure e-mail implementations requires a supporting Public Key Infrastructure (PKI) to provide solutions for the issues listed above. In some cases, standards have already been developed and implemented to provide this infrastructure. There is agreement that the certificate format will conform to Version 3 of the International Telecommunications Union (ITU) X.509 Recommendations. There is agreement that the Lightweight Directory Access Protocol (LDAP) is the protocol that will be used to access the directories that function as certificate repositories. PKCS #10 specifies the format for a request for a CA to issue a certificate [2].
2.3 S/MIME Version 3
The S/MIME Version 3 specification [8, 9] is based on the usage of data structures from the Cryptographic Message Syntax (CMS) published as an RFC [11] by the IETF. CMS is derived from PKCS#7 version 1.5. The changes were designed to accommodate key agreement techniques for key management and the support of attribute certificates. Version 3 products are mandated to support the use of DSA (Digital Signature Algorithm) for signatures, and DH (Diffie-Hellman) for key establishment. The use of RSA for signature and key exchange is not mandated, but is specified as desirable. The symmetric encryption algorithm that must be supported by all Version 3 implementations is Triple DES (DES EDE3 CBC). 40 bit RC2 is supported as a nonmandatory algorithm to allow backward compatibility with Version 2 implementations.
Version 3 specifies a number of attributes that may be sent within the CMS message as either signed or unsigned attributes. Receiving agents must be able to process these attributes. The signed attributes that may be included in a Version 3 message are: signing time, S/MIME capabilities, S/MIME encryption key preference, and signing certificate. It may be noted that the Version 3 specification implicitly supports the usage of separate key pairs (and hence certificates) for signature and key exchange. The S/MIME capabilities signed attribute allows senders to specify their algorithmic preferences in order of preference, which allows a peer to select appropriate algorithms. Both opaque and multipart formats are supported for signed messages in Version 3, but neither one is specified as mandatory for sending or receiving agents. Messages that carry only certificates to the peer are supported in Version 3, thus allowing in-band certificate distribution. Certificates and certificate revocation lists used within Version 3 implementations must be compliant with [1]. Receiving agents must validate peer certificates (including revocation checking) for all messages. Version 3 also supports the use of X.509 attribute certificates. Receiving agents must be able to handle messages that contain no certificates using a database or directory lookup scheme.
2.4 S/MIME Compliance Tests from RSA
S/MIME products are being developed to interoperate with the products of different vendors. When they purchase an S/MIME product, users want to know that they can exchange signed and encrypted messages with any other S/MIME user. RSA Data Security has set up an S/MIME Interoperability Center that allows vendors to perform interoperability testing on their products and to have the results published. The RSA Interoperability Test Center was established in 1997. Participating vendors test against WorldTalk's WorldSecure Client, which is the designated reference implementation. All vendors participating in the testing use Verisign's Class 1 public key certificates. The vendor first sends a signed message containing a public key certificate to the reference implementation and receives two signed and encrypted messages in return. One message uses RC2 as the content encryption algorithm; the second message uses Triple-DES for content encryption. Both messages contain a secret phrase. The vendor decrypts the messages, extracts the secret phrases, and includes them in the messages sent back to the reference implementation, using the same content encryption algorithm. If the reference implementation can recover the secret phrases, the successful test results will be posted on the S/MIME Interoperability Test Center Web Page (www.rsa.com/smime). As of January 1999, more than 20 different S/MIME products have been successfully tested. [Appendix B lists the products that have been awarded the S/MIME compliance seal by RSA Labs.] The testing, while providing useful information, is limited in scope. It doesn't test the ability of an S/MIME implementation to interact with a certificate repository in order to publish or obtain a public key certificate. It doesn't test the ability to process certificates issued by different Certification Authorities or the ability to process Certification Revocation Lists. It also doesn't follow that, because the
implementations test successfully with the reference implementation, they will successfully test with each other.
3 Interoperability Characteristics
This section describes characteristics and properties that are pertinent to the ability of an S/MIME implementation to interoperate with peer implementations, Certificate Authorities, and repositories. The properties are categorized into sets that affect a particular area of operation of a specific implementation.
3.1 Certificate Handling
This section describes characteristics related to the management and use of certificates within an S/MIME implementation.
Managing Certificates for the Local User. The local user is the human entity that controls an S/MIME application to send and receive secure email with its peer entities.
Distinct Signing and Encryption Certificates for the Local User. The S/MIME Version 2 specification calls for the use of a single certificate for signing outgoing email as well as receiving incoming encrypted email. Most currently available S/MIME implementations support a single certificate for the local user running the S/MIME application. S/MIME Version 3, however, supports the use of separate certificates for signatures and encryption, and a small set of S/MIME implementations implement this two-certificate scheme [8, 9]. An S/MIME application that only supports a single certificate for encryption and signatures may be unable to communicate securely with a peer that supports a dual-certificate scheme. For example, a typical S/MIME implementation will try to use the certificate used to validate a signed message from a peer to send an encrypted message to that peer entity. However, if the peer happens to be a dual-certificate-based implementation, it will reject the incoming encrypted message, since it will not be able to use its own encryption certificate to decrypt the message. Thus, single-certificate implementations provide the greatest level of interoperability in the current S/MIME Version 2 space of products. If dual-certificate implementations are used, it is recommended that users identify the same certificate as both the signature and the encryption certificate.
Self-Signed Certificate Support for the Local User. The use of the security features of S/MIME within a group of peer entities is predicated upon the availability of a PKI that allows an entity within the group to establish trust in the public key certificates of every other entity within the group. However, the deployment of large-scale public key infrastructures has been neither easy nor widespread. In the absence of a PKI, certain trust models allow a small group of peers to trust one another implicitly. This is typically achieved by exchanging certificates via some secure means and trusting
peer certificates implicitly, as opposed to trusting them via certificate path validation to a trusted anchor or root certificate. A subset of the currently available S/MIME implementations support the use of an implicit trust model using self-signed certificates. Self-signed certificates accompanying incoming signed messages from peers can be implicitly trusted and used to send encrypted messages to the peer entity. Other S/MIME implementations do not allow the use of self-signed certificates, either for the local user or for their peers. To allow rapid deployment of S/MIME in an environment where PKI path-based trust cannot be established, it is preferable to use S/MIME implementations that support an implicit trust model.
Single / Multiple Certificates for the Local User. Some S/MIME applications have the capability to support multiple certificates for the local user. This allows the local user to belong to multiple PKI hierarchies simultaneously, selecting the certificate to use when interacting with a particular peer. For example, user A belongs to infrastructures X and Y and has certificates Kx and Ky from infrastructures X and Y respectively. Entity B belongs to infrastructure X and can only validate certificates in X; entity C belongs to infrastructure Y and can only validate certificates within Y. When interacting with B, A selects certificate Kx. Likewise, A selects certificate Ky when interacting with C. Support for multiple certificates for the local user is thus a very desirable attribute in an S/MIME application.
Ability to Import PKCS #12 Credentials for the Local User. PKCS (Public Key Cryptography Standards) #12 is a de-facto standard from RSA Laboratories for securely packaging credentials (public and private key pairs) for transport or storage [3]. Many S/MIME applications have built-in or companion modules that generate key pairs and are able to dispatch certificate requests to Certification Authorities using the newly generated public key. In such cases, the ability to import PKCS #12 objects is not necessary. However, there are two situations where it becomes important for an S/MIME application to import PKCS #12 objects. In the first situation, a Certification Authority may perform key pair generation for every certificate issued by it; a PKCS #12 object is then sent back to the S/MIME user for import into the S/MIME application. In the second case, a key pair and certificate may be held within an external module (such as a browser), and the user may be interested in importing the same set of credentials for use within the S/MIME application. The ability of an S/MIME implementation to import and use PKCS #12 objects thus affects its interoperability with CAs and its ability to share digital credentials with other PKI-based applications.
Managing Peer Certificates.
Self-Signed Peer Certificate Support. The ability to support an implicit trust model using self-signed certificates from peers allows an S/MIME application to be fit for quick deployment in communities where a pervasive PKI is lacking.
Acquiring Certificates for Peers. Peer certificates are acquired by S/MIME applications in any of the following three ways (a sketch of the third follows the list):
• Extracting certificates from incoming signed messages from peers
• Loading certificates from *.p7c files
• Lookup of peer certificates in an LDAP repository
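As an illustration of the third mechanism, a directory lookup might look as follows with the ldap3 and cryptography Python libraries, which are our stand-ins for whatever LDAP client an implementation embeds; the host, search base, and peer name are placeholders:

```python
from cryptography import x509
from ldap3 import ALL, Connection, Server

# Placeholder directory; real deployments use the organizational repository.
server = Server("ldap.example.org", get_info=ALL)
conn = Connection(server, auto_bind=True)          # anonymous bind

# Fetch the peer's certificate by common name.
conn.search("ou=people,o=example", "(cn=Alice Peer)",
            attributes=["userCertificate;binary"])

for entry in conn.entries:
    der = entry["userCertificate;binary"].raw_values[0]
    cert = x509.load_der_x509_certificate(der)     # parse the X.509v3 cert
    print(cert.subject.rfc4514_string())
```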
The lack of support for one or more of the above may hinder an S/MIME application from obtaining certificates for peer users, and therefore from being able to communicate securely with them. For example, if an S/MIME client application only has the capability to extract certificates from signed messages, then it cannot interact with a peer S/MIME application that does not send certificates along with a signed message.
Support for Selective Trust of Peer Certificates. Occasionally, peer certificates that are acquired (through any of the mechanisms discussed above) cannot be validated using any of the known trusted root keys embedded within the S/MIME application. In such cases, it is very useful if the S/MIME application provides the local user with the ability to selectively trust acquired peer certificates. Once the local user designates the peer certificate as trusted, secure, encrypted email can be sent to that peer.
Managing Root Certificates. Most S/MIME implementations come preloaded with a set of root certificates, all or a subset of which may be designated as trusted. These trusted root certificates are used to validate the certificates of peers. This section describes some attributes that affect the management of root certificates.
Acquiring Certificates for Roots. Root certificates may be acquired in the same three ways (mentioned in the last subsection) used to acquire peer certificates. Support for various means of acquiring root certificates for use within an S/MIME application allows it to use additional roots to establish trust in peer certificates. Conversely, lack of support for one or more of these ways may disallow the import of a particular root certificate and prevent interoperability with a peer that is certified by that root authority.
Selectively Trusting Root Certificates. Having acquired or imported additional root certificates into an S/MIME application, it is very useful to have the ability to selectively trust one or all of the newly imported root certificates. Thus, if the local user is given the opportunity to designate newly imported roots as trusted, it may allow the local user to establish trust in all certificates issued by these additional trusted roots. Conversely, if additional trusted roots cannot be established within an S/MIME application, it may be impossible to communicate with a large set of potential peers.
3.2 Interaction with Certificate Authorities
S/MIME users need to obtain certificates signed by Certification Authorities (CA) to communicate securely with peers. The only exception is when self-signed certificates
are used within a small, well-known community to establish implicit trust in peers. Most S/MIME applications have associated modules or software tools that allow the generation of a key pair on behalf of the local user, and the construction and dispatch of a certification request to a CA. The certificate request message is based upon the PKCS #10 format as specified in the S/MIME Version 2 specification.
Support for Multiple Mechanisms for Requesting Certificates from CAs. Certification Authorities or their delegates support one or more of the following transport mechanisms for incoming certification requests and distribution of issued certificates:
• Web: The user's web browser connects to the CA's website to dispatch certification requests, or to collect an issued certificate.
• Email: The user sends an email to the CA's email address with the certification request. The CA may send an email back to the user with a reference to the location where the issued certificate may be picked up.
• In-Person/Floppy/Smart Card: The user places the certification request on a floppy or similar physical medium and transports it to the CA or its delegate. The CA or its delegate may return the issued certificate on a floppy or other medium (such as a smart card) for import and use by the user's application.
S/MIME applications that support all of the above mechanisms for interaction with a CA are able to request and receive certificates from the majority of CA products.
3.3 Interaction with Repositories
Certificate distribution in a small community may be achieved by users exchanging certificates with one another. However, the S/MIME Version 2 specification calls for the use of LDAP (Lightweight Directory Access Protocol) to interface with directories/repositories to obtain certificates and revocation information for users.
Publishing the Local User's Certificate. Typically, the CA that issues a certificate is responsible for publishing it in a repository. However, some S/MIME implementations also have the ability to publish the local user's certificate in a chosen directory. This feature is very useful in a domain where peers obtain each other's certificates from an organizational directory. Publication in the directory makes the user's certificate readily available to a large community of peers, and thus promotes interoperability.
Peer Certificate Lookup. When an S/MIME application supports the lookup of peer certificates in LDAP-based directories, it gives the local user access to a large set of potential peer certificates, and the ability to interact with these peers.
3.4 Signing Outgoing Messages
This section describes various issues arising during the signing of messages that may determine an implementation's level of interoperability.
Support for Opaque/Clear Signed Message Formats. S/MIME Version 2 provides for two data signing formats. In the "clear" or multipart format, the signature is separated from the signed data and is sent as an attachment. There is both an advantage and a disadvantage in using this signing format. The advantage is that the recipient can always read the message, even if the recipient's e-mail application is not an S/MIME client and the signature cannot be verified. The disadvantage is that the message may undergo some format conversion as it transits a mail gateway that is not S/MIME-aware; this will cause the receiving S/MIME application to invalidate the signature. This can be corrected by binding the signature with the message in a single binary file. The resulting format is labeled the "opaque" format. No conversion will be performed by a mail gateway on the binary file, and the message can be verified by an S/MIME application that serves the recipient. However, because the message text is wrapped in a binary file, the recipient cannot read it if the recipient's e-mail application is not an S/MIME client. The existence of two possible signing formats has led to some difficulties in S/MIME interoperability. Some applications sign in "clear" format, some sign in "opaque" format; others give the user a choice. The applications that support both formats for outgoing signed messages are guaranteed to be able to interoperate successfully with every other S/MIME application.
Support for Multiple Algorithms and Key Sizes. All currently available S/MIME implementations use RSA for signatures; the keys that are used vary between sizes of 512/768/1024/2048 bits. The hashing algorithm used within the signature can be SHA-1 or MD5. Some S/MIME applications support only a subset of the above algorithms for incoming signed messages. In order for two S/MIME implementations to exchange signed messages, they must support a common set of algorithms and key sizes. Thus, the implementations that support both hash algorithms and various RSA moduli, and allow the local user to select the algorithms to use for specific outgoing signed messages, enable the greatest level of interoperability with other S/MIME implementations.
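In a modern toolkit, the clear/opaque choice described above reduces to an option flag. A sketch using Python's cryptography package, which is our stand-in for whatever S/MIME engine a product embeds (certificate and key loading are omitted; SHA-1 is shown because Version 2 clients used SHA-1 or MD5):

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.serialization import pkcs7

def sign_smime(body: bytes, cert, key, clear: bool) -> bytes:
    """Sign `body`, producing multipart/signed ("clear") or
    application/pkcs7-mime ("opaque") S/MIME output."""
    builder = (pkcs7.PKCS7SignatureBuilder()
               .set_data(body)
               .add_signer(cert, key, hashes.SHA1()))
    if clear:
        # Detached signature: the body stays readable by non-S/MIME clients,
        # but a non-S/MIME-aware gateway may re-encode it and break the signature.
        options = [pkcs7.PKCS7Options.DetachedSignature, pkcs7.PKCS7Options.Text]
    else:
        # Opaque: body and signature travel in one binary object, gateway-safe,
        # but unreadable without an S/MIME client.
        options = [pkcs7.PKCS7Options.Text]
    return builder.sign(serialization.Encoding.SMIME, options)
```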
3.5 Validating Incoming Signed Messages
Support for Opaque/Clear Signed Message Formats. Support for both signed message formats when validating incoming signed messages provides the highest level of interoperability with other S/MIME implementations that may support only one of the formats for outgoing signed messages. See the similar subsection above for further details.
Support for Multiple Algorithm Choices and Key Sizes. Support for multiple hash algorithms and various moduli for the RSA signature keys when validating incoming signed messages promotes interoperability with a large number of sending clients. See the similar subsection above for further details.
X.509v3 Certificate Path Validation. S/MIME Version 2 specifies the use of X.509v3 certificate path validation mechanisms for S/MIME implementations;
support for this type of path validation allows an S/MIME application to parse complex certificate chains to establish trust in peer certificates. All S/MIME applications that we have tested have the capacity to validate flat certification hierarchies, in which the CA issues certificates to S/MIME users in a one-level-deep hierarchy. However, many implementations do not support the validation of certificates that are part of a multiple-level hierarchy. In order to interoperate with the largest possible set of peers (some of which may send out signed messages with certificate chains that are part of a multiple-level hierarchy), it is very useful if an S/MIME implementation supports fully compliant X.509v3 path validation.
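A bare-bones version of walking a chain up to a trusted root is sketched below with the Python cryptography library; it is a deliberate simplification of our own, since full X.509v3 path validation also checks validity periods, extensions such as basic constraints, and revocation:

```python
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

def find_root(cert, trusted_roots):
    """Pick the trusted root whose subject matches the last issuer name."""
    for root in trusted_roots:
        if root.subject == cert.issuer:
            return root
    raise ValueError("no trusted root for this chain")

def validate_chain(chain, trusted_roots):
    """`chain` is leaf first; raises InvalidSignature on a bad link.
    Assumes RSA keys, as used by the S/MIME products discussed here."""
    issuers = chain[1:] + [find_root(chain[-1], trusted_roots)]
    for cert, issuer in zip(chain, issuers):
        issuer.public_key().verify(
            cert.signature,
            cert.tbs_certificate_bytes,
            padding.PKCS1v15(),
            cert.signature_hash_algorithm,
        )
    return True
```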
3.6 Encrypting Outgoing Messages
In S/MIME Version 2, an encrypted message is constructed as follows: a random symmetric key is used to encrypt the message, and the recipient's public key is used to wrap the symmetric key for key transfer purposes. On the recipient's side, the corresponding private key is used to unwrap the symmetric decryption key, and the latter is used to decrypt the message.
Support for Multiple Algorithm Choices and Key Sizes. The S/MIME Version 2 specification allows the use of various symmetric algorithms and key sizes for message encryption, and various RSA moduli for key exchange. Currently, S/MIME applications support one or more of the symmetric encryption algorithms DES, Triple DES, and RC2, with various key sizes. In order for an encrypted message to be passed between two S/MIME applications, both sides must support the same encryption algorithm and key size, and the same modulus for RSA key exchange. Some implementations support only a single algorithm and key size for encryption, or a single modulus for RSA keys. The implementations that support all or a large subset of the available algorithms provide the greatest level of interoperability with peer implementations that have a limited set of algorithms.
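The encrypt-then-wrap construction described at the start of this section can be sketched with cryptography-library primitives as follows; it mirrors the Version 2 algorithms (Triple DES content encryption, RSA PKCS#1 v1.5 key transport) but omits the ASN.1 framing of a real PKCS#7 EnvelopedData object, and recent library releases deprecate 3DES:

```python
import os
from cryptography.hazmat.primitives import padding as sym_padding
from cryptography.hazmat.primitives.asymmetric import padding as rsa_padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def envelope(message: bytes, recipient_rsa_public_key):
    """Encrypt `message` under a fresh Triple DES key and wrap that key
    with the recipient's RSA public key."""
    session_key, iv = os.urandom(24), os.urandom(8)   # DES-EDE3 key, 8-byte IV

    padder = sym_padding.PKCS7(64).padder()           # pad to the 64-bit block
    padded = padder.update(message) + padder.finalize()

    encryptor = Cipher(algorithms.TripleDES(session_key), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(padded) + encryptor.finalize()

    wrapped_key = recipient_rsa_public_key.encrypt(   # RSA key transport
        session_key, rsa_padding.PKCS1v15())
    return wrapped_key, iv, ciphertext
```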
3.7 Decrypting Incoming Messages
Selection of the Local User Certificate for Decryption. When the local user possesses more than one certificate and receives an encrypted S/MIME message, the correct certificate and private key need to be selected to decrypt the message. Some implementations leave the selection of the appropriate private key (from the set of available private keys) to the user. Others allow a transparent selection of the appropriate private key for decryption; this is a very useful feature in environments where users routinely possess certificates from multiple public key infrastructures and use them for communicating with peers from disparate trust domains.
Support for Multiple Algorithms and Key Sizes. See the similar subsection above for details.
4 Usefulness of the Interoperability Characteristics
The characteristics and properties outlined in this paper provide a greater insight into the issues that affect the interoperability of an S/MIME implementation in a real-world scenario. Ideally, the S/MIME specification should address each of these issues and set minimum requirements to allow a base level of interoperability between all compliant implementations. Understanding the intricacies of the various choices that can be made within the scope of the S/MIME Version 2 specification may help to fine-tune future S/MIME specifications. Understanding the characteristics that affect interoperability also helps vendors of S/MIME products understand the implications of the implementation and design choices they make for their products. Knowledge of these characteristics is also important to the community of S/MIME product users and procurers. Users who are aware of their own environments with respect to the deployment of PKI products will be able to make an informed decision about which subset of the characteristics presented in this paper is relevant to their interoperability needs. Having defined their idealized profile for S/MIME products, they can then evaluate the available implementations from the various vendors and select the one that scores highest in an evaluation based upon their customized needs. The characteristics described in this paper were derived through a study of the S/MIME specification and experimentation with S/MIME implementations. However, we believe that a large subset of these characteristics is also applicable to most other public-key-infrastructure-based secure communication protocols and their implementations. The lessons learned through the study of S/MIME should be easily transferable to other similar domains.
5 Analysis of the S/MIME Version 3 Specifications with Respect to the Interoperability Characteristics
S/MIME Version 3 uses the CMS instead of the PKCS#7 standard to build the S/MIME objects. CMS supports a set of signed attributes that are encapsulated within the signerInfo data type that is a part of each S/MIME signed object. CMS allows object identifiers (OIDs) for preferred algorithms to be conveyed using these signed attributes. However, there does not appear to be a way to support the conveyance of other critical information for the sender, such as signature format preferences, or trust anchors known to the sender, etc. Additionally, these capabilities seem to be supported only when a signed message is sent. When the enveloped data content type is used, only a limited set of originator information (certificates and CRLs only) may be included in the message – there does not appear to be a way for the originator to include their algorithmic preferences to their peers. A deficiency that continues to exist in the Version 3 specification is that there is no mandate to support a particular signature format (opaque versus multipart). As we have noted in this paper, a number of the interoperability problems were related to the support of only one or the other signature formats in Version 2 products that we tested. It would be desirable to establish a baseline for the supported signature formats – this would allow a minimal level of interoperability between all S/MIME
implementations. Thus, we would recommend that the S/MIME specification be augmented to require that both sending as well as receiving agents MUST support the opaque signature format. In addition, sending and receiving agents SHOULD support the clear signing format to allow non-S/MIME capable mail agents to display the message contents. A desirable feature of Version 3 specification is that it supports the ability to dynamically import additional trust anchors into an S/MIME product. Receiving agents MUST support the import of additional trusted roots and certificate chains from incoming S/MIME messages. During the import of additional trust anchors, receiving agents SHOULD allow the user to select whether or not to trust new root certificates that were imported. Other methods to allow import of additional trust anchors would also be desirable (for example, the import of self-signed .p7c files from a floppy).
6
Conclusions
In this paper, we have described a number of important properties that affect the ability of an S/MIME implementation to interoperate with its peer implementations. However, there are other issues that also affect the suitability of an implementation within a particular environment. The usability characteristics of an implementation go a long way towards promoting the usage of the product. If secure email products provide daunting user interfaces, they will not be widely used. One obvious recommendation to heighten user friendliness would be to transparently support the digital certificates of peers within the address book mechanisms provided by the basic email package. Thus, when a signed message comes in, the local user can add the sender to their local address book and thereby transparently add the sender’s certificate to the address book entry. Conversely, when sending out encrypted email, the local address book could be used to select the receiver and transparently select the receiver’s certificate (if present as part of the address book entry). Most current implementations also have little or no support for revocation checking of certificates. As public key infrastructures become widely deployed, the very real management problems such as certificate revocation need to be handled within the applications using the infrastructure. In conclusion, we would like to point out that it is heartening to see the widespread adoption of the S/MIME secure electronic mail standard, and the availability of commercial products based upon the standard. Despite the fact that public key infrastructure technology is still in its infancy, and the standards are continuously evolving, the S/MIME vendors are making considerable progress in resolving the existing barriers to interoperability. In the near future, users will find that security services are integrated into most e-mail applications.
7
References
[1] Kaliski, B., “PKCS #7: Cryptographic Message Syntax Version 1.5”, RFC 2315, March 1998.
[2] Kaliski, B., “PKCS #10: Certification Request Syntax Version 1.5”, RFC 2314, March 1998.
[3] Kaliski, B., “PKCS #12: Personal Information Exchange Syntax Standard, Version 1.0 Draft”, 30 April 1997.
[4] Postel, J., “Simple Mail Transfer Protocol”, RFC 821, Aug. 1982.
[5] Crocker, D., “Standard for the format of ARPA Internet text messages”, RFC 822, Aug. 1982.
[6] Dusse, S., Hoffman, P., Ramsdell, B., Lundblade, L., and L. Repka, “S/MIME Version 2 Message Specification”, RFC 2311, March 1998.
[7] Dusse, S., Hoffman, P., Ramsdell, B., and J. Weinstein, “S/MIME Version 2 Certificate Handling”, RFC 2312, March 1998.
[8] Ramsdell, B., “S/MIME Version 3 Message Specification”, RFC 2633.
[9] Ramsdell, B., “S/MIME Version 3 Certificate Handling”, RFC 2632.
[10] Backman, D., “Secure E-Mail Clients: Not Quite Ready For S/MIME Prime Time. Stay Tuned”, Network Computing Online, http://www.networkcomputing.com/902/902r22.html.
[11] Housley, R., “Cryptographic Message Syntax”, RFC 2630.
8
Appendix
These are the products that were tested in order to derive the characteristics described in this paper:
• Baltimore Technologies MailSecure Exchange Plug-in Version 2.1
• WorldTalk WorldSecure Eudora Plug-in Version 3.05
• WorldTalk WorldSecure Exchange Plug-in Version 3.0
• Netscape Messenger Version 4.05
• Microsoft Outlook Express Version 5.0 Beta 2
• Microsoft Outlook 98
The DEDICA Project: The Solution to the Interoperability Problems between the X.509 and EDIFACT Public Key Infrastructures
Montse Rubia 1,2, Juan Carlos Cruellas 1, and Manel Medina 1
1 Telematics Applications Group - Department of Computer Architecture, Universitat Politècnica de Catalunya, c/ Jordi Girona 1-3, Mòdul D6, 08034 Barcelona (SPAIN), {montser, cruellas, medina}@ac.upc.es
2 Safelayer Secure Communications S.A., Edificio World Trade Center (s4), Moll de Barcelona s/n, 08039 Barcelona (SPAIN), [email protected]
Abstract. This paper introduces the interoperability barriers that exist between the X.509 and EDIFACT Public Key Infrastructures (PKI), and proposes a solution to remove them. The solution is provided by the DEDICA 1 (Directory based EDI Certificate Access and management) project. The main objective of this project is to define and to provide the means to make these two infrastructures interoperable without increasing the amount of information to be managed by them. The proposed solution is a gateway tool interconnecting both PKIs. The main goal of this gateway is to act as a TTP that "translates" certificates issued by one PKI into the other’s format, and then signs the translation to make it a new certificate. The gateway will, in fact, act as a proxy Certification Authority (CA) of the CAs of the other PKI, and will take responsibility for the authenticity of the certified data on behalf of the original CA.
1. Introduction
The security services based on asymmetric cryptography require a Public Key Infrastructure (PKI) to make the public key values available. Several initiatives around the world have caused the emergence of PKIs based on X.509 certificates, such as SET (Secure Electronic Transaction) or PKIX (Internet Public Key Infrastructure). Another PKI type is the one based on the EDIFACT certificate. These infrastructures are not interoperable, mainly due to the fact that the certificates and messages are coded in different ways (ASN.1 and DER are used for the X.509 PKI, whilst EDIFACT syntax is used for the EDIFACT PKI).
1 This project has been funded by the EU Telematics program and the Spanish CICYT, and has been selected by the G7 as one of the pilot projects to promote the use of telematic applications by SMEs.
DEDICA (Directory based EDI Certificate Access and management) is a research and development project. Its main objective is to define and to provide means to make the two above-mentioned infrastructures interoperable without increasing the amount of information to be managed by them. The proposed solution involves the design and implementation of a gateway tool interconnecting both PKIs: the certification infrastructure based on standards produced in the open systems world, and the existing EDI applications, which follow the UN/EDIFACT standards for certification and electronic signature mechanisms. The main goal of the gateway proposed by DEDICA is to act as a Trusted Third Party (TTP) that “translates” certificates issued in one PKI into the other’s format, and then signs the translation to make it a new certificate: a derived certificate. In this way, any user certified, for instance, within an X.509 PKI could get an EDIFACT certificate from this gateway without having to register with an EDIFACT authority, not only saving time and money, but also allowing users to use the same private key in both environments. The gateway will act, in fact, as a proxy Certification Authority (CA) of the CAs of the other PKI. Figure 1 shows the DEDICA gateway context. Each user is registered in his PKI and accesses the certification objects repository related to his PKI. The DEDICA gateway must be able to interact with the users of both PKIs in order to serve requests from any of them. It must also be able to access the security object stores of both PKIs, and to be certified by both EDIFACT and X.509 CAs.
Fig. 1. DEDICA gateway context (users X and E, their CAs, the certificate stores, and the gateway certified by X.509 and EDIFACT CAs)
2. Functionality of the gateway
The DEDICA project addressed the interoperability problem between the X.509 and EDIFACT PKIs at two levels:
1. The different formats of the certificates: the DEDICA consortium, after an in-depth study of the contents of both types of certificates, specified a set of mapping rules that make possible the two-way translation of both types of certificates.
2. The different messages interchanged by the entities of the PKIs: whereas in the EDIFACT world the UN/EDIFACT KEYMAN message is used to provide certification services, in the X.509 world a set of messages specified for each PKI (PKIX on the Internet, for instance) is used.
The DEDICA gateway assumes the role of a TTP for users of both infrastructures. The gateway accomplishes a process of certificate translation from EDIFACT to X.509 and conversely; however, this translation process is not strictly a mapping at the level of certificate formats, since the gateway adds a digital signature to the mapped data. In addition, in some cases it is not possible simply to move data from one certificate to the other, due to format restrictions (size, encoding). In these cases the gateway has to generate tagged data for the derived certificate, which will allow it to reproduce the original data, kept in internal records (e.g. the names mapping table, see Fig. 3). When the X.509 certificate has private extensions, the gateway will simply ignore them, since they are assumed to be relevant only to other applications. Full details of the mapping mechanism between both infrastructures may be found at http://www.ac.upc.es/DEDICA/ and in DEDICA CEC-Deliverable WP03.DST3: Final Specifications of CertMap Conversion Rules [5].
The DEDICA gateway is able to offer a basic set of certificate management services to users of the different infrastructures:
1. Request of an EDIFACT certificate from an X.509 certificate generated by an X.509 CA.
2. Verification of an EDIFACT certificate generated by the DEDICA gateway (coming from the mapping of an X.509 certificate).
3. Request of an X.509 certificate from an EDIFACT certificate generated by an EDIFACT CA.
4. Verification of an X.509 certificate generated by the DEDICA gateway (coming from the mapping of an EDIFACT certificate).
The above requests are carried out making use of the appropriate messages of each infrastructure: KEYMAN packages for EDIFACT, and PKIX for X.509.
2.1. Request of a derived certificate
In the scenario shown in Figure 2, an X.509 user (user X) may want to send EDIFACT messages to an EDIFACT user (user E) using digital signatures or any security mechanism that involves the management of certificates. This user needs a certificate from the other Public Key Infrastructure (in this case, the EDIFACT PKI). He then sends an interchange to the gateway requesting the production of an EDIFACT certificate “equivalent” to the provided X.509 one. This interchange will
contain a KEYMAN message (indicating a request for an EDIFACT certificate) and the X.509 certificate of this user in an EDIFACT package (an EDIFACT structure capable of containing binary information). The gateway will validate the X.509 certificate. If the certificate is valid (the signature is correct, it has not been revoked, and it has not expired), the gateway will perform the mapping process and generate the new EDIFACT certificate. After that, the gateway will send it to user X within a KEYMAN message. Now user X can establish a communication with user E using security mechanisms that involve the use of electronic certificates through the new EDIFACT certificate, sending him an EDIFACT interchange with this certificate.
2.2. Validation of a derived certificate
Following the process described in the previous section, user E, after receiving the interchange sent by user X, requests validation of the certificate generated by the DEDICA gateway by sending the corresponding KEYMAN message to the gateway. The gateway determines whether the EDIFACT certificate has been generated by itself, and proceeds with the validation of the original X.509 certificate, to find out whether it has been revoked or not, and of the derived EDIFACT certificate. The EDIFACT user can only check the derived certificate, since he has no access to the original environment. The general process of validation of derived certificates is as follows (a sketch is given below):
1. The gateway verifies the validity of the derived certificate. This requires checking: (a) the correctness of the signature, using the public key of the gateway; (b) whether the certificate can be used in view of the validity period.
2. The gateway accesses the X.500 Distributed Directory in order to get the original X.509 certificate and the necessary Certificate Revocation Lists (CRLs).
3. It verifies the signature of the original certificate, and checks the validity period.
4. The gateway verifies the certification path related to the original X.509 certificate, and checks that its certificates have not been revoked.
The DEDICA gateway then sends the positive or negative validation response to the EDIFACT user within a KEYMAN message.
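A minimal, self-contained Python sketch of this validation flow is given below. The Cert structure is a simplified stand-in for a real certificate (with precomputed signature, revocation and path-check results); it is our illustrative abstraction, not a DEDICA data type, and in the real gateway the original certificate and CRLs would be fetched from the X.500 Directory.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Cert:                     # simplified stand-in for a certificate
        not_before: datetime
        not_after: datetime
        signature_ok: bool          # signature check result, precomputed here
        revoked: bool
        path_ok: bool

    def validate_derived(derived: Cert, original: Cert, now: datetime) -> str:
        """Steps 1-4 of the derived-certificate validation described above."""
        # 1. Derived certificate: gateway signature and validity period.
        if not derived.signature_ok:
            return "invalid: bad gateway signature"
        if not (derived.not_before <= now <= derived.not_after):
            return "invalid: derived certificate outside validity period"
        # 2./3. Original certificate: signature and validity period.
        if not original.signature_ok:
            return "invalid: bad signature on original certificate"
        if not (original.not_before <= now <= original.not_after):
            return "invalid: original certificate outside validity period"
        # 4. Certification path and revocation status (CRLs from X.500).
        if original.revoked or not original.path_ok:
            return "invalid: original revoked or certification path untrusted"
        return "valid"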
3. Gateway architecture
The DEDICA gateway has two main architectural blocks: the CertMap and the MangMap modules.
3.1. CertMap module
The CertMap module is responsible for performing the certificate translations following the mapping rules specified by the DEDICA consortium in Deliverable WP03.DST3 [5]. The CertMap is composed of three main modules: the CM_Kernel module, the EDIFACT certificate coding/decoding module, and the set of APIs needed to allow
the CM_KE to interact with external software tools (the ASN.1 API and the Cryptographic API). The CM_Kernel module (CM_KE) coordinates the operations performed by all the other CertMap modules. Four groups of information present in both certificates have been identified: Names, Algorithms, Time and Keys. For each of these groups, a software module inside the CM_Kernel implements the relevant translation process: the CM_Names, CM_Algorithm, CM_Time and CM_Keys modules.
Fig. 2. Functionality of the DEDICA gateway (top: request of a derived EDIFACT certificate via KEYMAN; bottom: validation of a derived certificate)
Mapping between X.509 and EDIFACT certificates. The Certificate Mapping Rules developed in DEDICA were designed in such a way that the translated information was relayed as precisely as possible from the original certificate to the derived one. A number of issues had to be taken into account:
• Specification syntax and transfer syntax for the transmission. The EDIFACT certificates are specified following the EDIFACT syntax, and they are transmitted coded in printable characters. However, in the X.509 environment the ASN.1 abstract syntax and the DER rules are used.
• Naming system. In the X.509 world, the basic mechanism of identification is the DN (Distinguished Name) [6], which is associated with an entry in the DIT (Directory Information Tree) of the X.500 Distributed Directory. On the other hand, the EDIFACT certificate supports both codes (i.e., identifiers assigned by authorities) and EDI party names. The DEDICA gateway performs a name mapping between the DNs and the EDI names, according to guidelines defined in EDIRA (EDIRA Memorandum of Understanding) [7]. EDIRA proposes an identification mechanism compatible with the DN strategy in X.500. The DEDICA Deliverable WP03.DST2 [4] contains the specifications of the conversion rules that are used by the CertMap module to execute the mapping between DNs and EDI names.
• Extension mechanism. Version 3 of the X.509 certificate has an extension mechanism that allows the semantics of the information it carries to be extended. However, at present the EDIFACT certificate does not have any extension mechanism, and its syntax specification does not allow such a wide variety of information to be specified. In the mapping of X.509 version 3 certificates, only the following extensions are mapped: keyUsage and subjectAltName. Other extensions, mainly the private ones, even if they are tagged as critical for the intended applications of the original certificate, are ignored, since we assume that user and issuer know and accept the EDIFACT certificate format when they apply for a derived certificate.
• Digital signature. When the gateway finishes the mapping process, it automatically generates a new digital signature. In the certificate field identifying the issuer entity, the DEDICA gateway identifier will appear, instead of the original certificate issuer identification.
Figure 3 shows the internal structure of the CertMap module. It also shows the sequence of operations that take place inside the CertMap to generate an EDIFACT certificate from the initial X.509 one.
Fig. 3. Mapping process from X.509 to EDIFACT: (1) DER to internal representation (ASN.1 tool); (2) mapping process (CM_Names, CM_Algorithm, CM_Time, CM_Keys); (3) internal representation to EDIFACT coding; (4) signature of the EDIFACT coding (crypto tool); (5) append signature (USC-USA(3)-USR); (6) add new entry to the names mapping table; (7) return certificate to MangMap
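The pipeline of Fig. 3 can be illustrated with the toy Python sketch below. The 35-character name limit, the keyed hash standing in for the gateway's CA signature, and the repr-based "EDIFACT coding" are all illustrative assumptions of ours, not DEDICA interfaces; step (1), DER decoding, is assumed already done by the ASN.1 tool.

    import hashlib

    names_mapping_table = {}   # internal record: EDI name -> original DN

    def map_name(dn: str) -> str:
        # Toy stand-in for the DN -> EDI-name rules of WP03.DST2: shorten
        # the DN to a size EDIFACT can carry and keep the original in the
        # names mapping table so it can be reproduced later.
        edi_name = dn.replace(" ", "")[:35]   # illustrative size limit
        names_mapping_table[edi_name] = dn
        return edi_name

    def sign(data: bytes, key: bytes) -> str:
        # Toy "signature" (keyed hash); the real gateway signs with its CA key.
        return hashlib.sha256(key + data).hexdigest()

    def map_x509_to_edifact(cert: dict, gateway_key: bytes) -> dict:
        # Steps (2)-(7) of Fig. 3 on an already-decoded certificate.
        fields = {
            "name": map_name(cert["subject_dn"]),   # CM_Names
            "algorithm": cert["algorithm"],         # CM_Algorithm
            "validity": cert["validity"],           # CM_Time
            "public_key": cert["public_key"],       # CM_Keys
        }
        body = repr(sorted(fields.items())).encode()   # (3) toy EDIFACT coding
        return {"body": body,
                "signature": sign(body, gateway_key)}  # (4)-(5) derived cert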
3.2. MangMap module
The MangMap module of the DEDICA gateway converts certain operations of the KEYMAN message into equivalent operations (messages) in the X.509 PKI (including X.500 access).
MangMap is also the general management module of the DEDICA gateway. It receives all the requests sent to the gateway and decides which information has to be recovered from external repositories, what type of translation is needed, and what results must be generated and sent to the requesting entity.
Internal structure of the MangMap module. The main blocks of the MangMap are shown in Figure 4, and their functionality may be summarised as follows:
• MangMap Kernel (MK) module. The Kernel module of the MangMap controls all the actions within the DEDICA gateway and co-ordinates the co-operation between the different DEDICA modules.
• KEYMAN Handling (KH) module. On reception of KEYMAN messages from an end user, it checks the protection applied to the KEYMAN, analyses it, interprets the message and converts it into an internal request to the MangMap Kernel block. On reception of requests from the MangMap Kernel block, it builds KEYMAN messages, applies the required protection and makes the KEYMAN available to the communication services.
• X.509 Public Key Infrastructure Messages Handling (XH) module. On reception of relevant X.509 public key infrastructure messages from an end user, the XH module checks the protection applied to the message, analyses it and converts the message into an internal request to the MK.
Fig. 4. Structure of the DEDICA gateway (MangMap Kernel, KEYMAN Handling, X.500 Access Handling and Security Module, connected to the CertMap module, the EDIFACT certificates directory and the X.500 Directory)
XH is also able to access the X.500 Directory in order to get X.509 certificates, revocation lists and certification paths. XH is able to send requests to X.500 and to obtain and interpret answers from it. Due to the complexity of DAP, the XH module uses the LDAP (Lightweight Directory Access Protocol) [8] interface to access the X.500 Directory. LDAP offers all the functionality needed to interact with the X.500 Directory at a much lower cost. On reception of requests from the MK, it builds the relevant X.509 public key infrastructure messages, applies the required protection and makes the messages available to the communication service.
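As an illustration of this kind of directory access, the sketch below retrieves a user certificate and a CRL over LDAP using the third-party Python library ldap3. The server address and the two distinguished names are hypothetical, and error handling is omitted; the attribute names, however, are the standard LDAP schema attributes for certificates and CRLs.

    from ldap3 import Server, Connection, ALL

    server = Server("ldap://directory.example.org", get_info=ALL)  # hypothetical host
    conn = Connection(server, auto_bind=True)                      # anonymous read

    # The user's certificate is stored in the standard attribute
    # userCertificate;binary of his directory entry (hypothetical DN).
    conn.search("cn=User X,o=Example,c=ES", "(objectClass=*)",
                attributes=["userCertificate;binary"])
    cert_der = conn.entries[0]["userCertificate;binary"].raw_values[0]

    # The issuing CA publishes its CRL in certificateRevocationList;binary.
    conn.search("cn=CA X1,o=Example,c=ES", "(objectClass=*)",
                attributes=["certificateRevocationList;binary"])
    crl_der = conn.entries[0]["certificateRevocationList;binary"].raw_values[0]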
4. Conclusions
This work has proved the feasibility of launching a TTP service to translate security objects between different protocol environments. The problems found in this project are of a general nature, and the solutions adopted here may be extrapolated to any other pair of environments. Other emerging PKI services, like SPKI, UMTS (Universal Mobile Telephone System) or XML-EDI, are potential candidates to use the results of this work. It will also be possible to extend the results of this work to other TTP services, like time stamping, attribute certification, etc. The data type conversion based on a translation table may solve any format incompatibility, and the message mapping strategy used to handle the different certificate management strategies may also overcome almost any mismatch between the services of the two protocols being linked. As long as both environments and protocols have the same goals, data and service elements without a corresponding element in the other environment may either: a) simply be omitted, because they are not useful in the destination application, or b) be replaced by an equivalent data or service element with a similar meaning in the destination protocol.
The interoperability between the X.509 and EDIFACT PKIs can be greatly enhanced by facilities such as the DEDICA gateway, which acts as a TTP capable of offering a basic set of certificate management services to users of both infrastructures. The DEDICA project has set up a gateway to translate the security objects between X.509 and EDIFACT. This solution also provides interoperability between EDIFACT and all the other tools used in electronic commerce, since all of them authenticate the entities using X.509 certificates. The DEDICA gateway is being integrated in several pilot schemes and projects in the context of electronic certification, such as the TEDIC system, the AECOC-UPC EDI over Internet project, and the SAFELAYER 2 X.509 Certification Authority.
The DEDICA service is of interest to both large enterprises and SMEs, although it is most interesting to SMEs, because it allows them to use security in the interchange of messages without the need to pay registration fees
2 Safelayer Secure Communications S.A. is a provider of PKI and SET software solutions.
in several infrastructures. This was the reason why DEDICA was selected as one of the G7 pilot projects to promote the use of information technology by SMEs. The main advantage for users will be the ability to share the authentication mechanism (digital signature, tools, etc.) between the various applications where it can be applied, avoiding the burden of having to register with various services in order to satisfy one single user requirement. Moreover, the service has been quickly deployed and made available, thanks to the fact that no additional registration infrastructure is needed, due to its compatibility with the EDIFACT and X.509 infrastructures. This service will promote the use of the Internet by EDI applications, since it will allow them to secure their interchanges (the lack of which has been identified as one of the major barriers to the deployment of EDI over the Internet in the past). Within the project, several pilot schemes have been launched to demonstrate the system in the following fields: customs, electronic chambers of commerce, tourism, electronic products manufacturers, EDI software providers, and electronic payment in banking and public administration.
References
1. Security Joint Working Group, Proposed Draft of a MIG Handbook UN/EDIFACT Message KEYMAN, 30 June 1995.
2. Security Joint Working Group: Committee Draft UN/EDIFACT CD 9735-5, Electronic Data Interchange for Administration, Commerce and Transport (EDIFACT) - Application Level Syntax Rules, Part 5: Security Rules for Batch EDI (Authenticity, Integrity and Non-Repudiation of Origin), Release 1, 14 December 1995.
3. DEDICA Consortium, CEC Deliverable WP03.DST1: Technical Description of X.509 and UN/EDIFACT Certificates, July 1996.
4. DEDICA Consortium, CEC Deliverable WP03.DST2: Naming Conversion Rules Specifications Requirements, July 1996.
5. DEDICA Consortium, CEC Deliverable WP03.DST3: Final Specifications of CertMap Conversion Rules, July 1996.
6. Network Working Group, RFC 1779: A String Representation of Distinguished Names, ISODE Consortium, 1995.
7. EDIRA - Memorandum of Understanding for the Operation of EDI Registration Authorities, Final Draft, November 1993.
8. Network Working Group, RFC 1959: An LDAP URL Format, 1996.
9. PKIX Working Group, INTERNET-DRAFT: Internet Public Key Infrastructure, X.509 Certificate and CRL Profile, 1997.
10. Fritz Bauspieß, Juan Carlos Cruellas, Montse Rubia: DEDICA: Directory based EDI Certificate Access and Management, Digital Signature Conference, July 1996.
11. Juan Carlos Cruellas, Damián Rodriguez, Montse Rubia, Manel Medina, Isabel Gallego: WP07.DST2: Final Specification of MangMap Conversion Rules, DEDICA Project, 1996.
12. Juan Carlos Cruellas, Damián Rodriguez, Montse Rubia, Manel Medina, Isabel Gallego: WP07.DST1: Final Specifications of MangMap, DEDICA Project, 1996.
Multiresolution Analysis and Geometric Measures for Biometric Identification Systems
Raul Sanchez-Reillo 1, Carmen Sanchez-Avila 2, and Ana Gonzalez-Marcos 1
1 Universidad Politecnica de Madrid, E.T.S.I. Telecomunicacion, Dpt. Tecnologia Fotonica, Ciudad Universitaria s/n, E-28040 Madrid, Spain, {reillo, agonmar}@tfo.upm.es
2 Universidad Politecnica de Madrid, E.T.S.I. Telecomunicacion, Dpt. Matematica Aplicada, Ciudad Universitaria s/n, E-28040 Madrid, Spain, [email protected]
Abstract. The authors describe here their work on two different biometric techniques: iris pattern recognition and hand geometry. These two techniques require very different processing algorithms and achieve good results for two different security environments. While iris pattern recognition is a high-cost and very high reliability technique, hand geometry is low cost, highly accepted and provides medium/high security. Adaptive techniques have been studied to increase the reliability of hand geometry, while new processing algorithms, such as symmetric real Gabor filters, have been used to decrease the computational cost involved in iris pattern recognition. The goal of adapting these biometric techniques to small embedded systems comes with the design of new biometric systems, where the user’s template is stored in a portable storage medium. This medium can also be used to store some of the user’s sensitive information, such as his health record, and new access conditions are designed to prevent the reading of this data unless the biometric verification has been performed. The medium chosen is a smart card, and a new requirement is established: biometric verification (hand or iris) should be performed inside the smart card, so that the user’s template cannot be extracted from the card.
1
Introduction
Nowadays, one of the main threats to IT systems and security environments is the possibility of having intruders in the system. This is normally addressed by user authentication schemes based on passwords, secret codes and/or identification cards or tokens. Schemes based only on passwords or secret codes can be cracked by intercepting the presentation of the password, or even by counterfeiting it (via password dictionaries or, in some systems, via brute-force attacks). On the other hand, an intruder can attack systems based on identification cards or tokens by stealing, copying or simulating the card or token. If the scheme used in the system is based both on a card and a password (usually called a Personal Identification Number - PIN), the intruder must apply
more effort to gain entry to the system, and with more advanced technologies, such as smart cards, some vulnerabilities of the system can be avoided (e.g. brute-force attacks are impossible against a well-designed smart card). But even with the most advanced techniques, authentication schemes based on passwords and identification cards have a legal limitation: a person cannot be legally identified by the knowledge or possession of something, but only by the measurement of some biological or behavioural features. This requirement is met only by biometric techniques such as speaker verification, signature recognition, or the measurement of fingerprints, iris patterns or hand geometry. Each biometric technique has its own advantages and disadvantages. While some of them provide more security, i.e. a lower False Acceptance Rate (FAR) and False Rejection Rate (FRR), other techniques are cheaper or better accepted by final users. The authors report here a biometric R&D project where two biometric techniques have been analysed and developed: iris pattern recognition and hand geometry. Iris pattern recognition was chosen as providing extremely high reliability biometric identification. Its reliability is surpassed only by retinal scanning, which is strongly rejected by final users because it uses laser scanning inside the eye. The main disadvantage of this technique is its high cost, both economic and computational. On the other hand, hand geometry measurement was chosen as a medium/high security technique with a medium equipment cost, a low computational cost and a very small template size. After this introduction, a brief explanation of both systems (first iris pattern recognition, followed by hand geometry) will be given. Then the main results achieved with both techniques will be shown, ending this work with the final conclusions obtained.
Fig. 1. Samples of iris (left) and hand (right), before preprocessing and feature extraction
2
Iris Pattern Recognition
Of all biometric techniques known today, iris recognition is considered to be the most promising for high-security environments. This technique presents
several advantages compared to other techniques, with only one main disadvantage: the cost. But considering the overall cost of installing a high-security system, this disadvantage can be minimized. Iris recognition is based on the characterization of the pattern of the iris. Medical and forensic studies have proven that the iris has higher unicity than the features used by other techniques, i.e. the probability of finding two identical iris patterns is nearly null (identical twins do not have the same iris pattern, and even the two eyes of the same person have different patterns). Another strong characteristic of this technique is the stability of the pattern. After adolescence the pattern has completely evolved, and the protection of the cornea makes any modification of the pattern impossible, unless a major injury destroys part of the eye and, of course, the vision of the user. Biological studies have affirmed that the iris pattern is not influenced by age, and even common vision illnesses such as myopia or cataracts do not affect irises in any way. The excellent unicity of this technique leads to a False Acceptance Rate (FAR) that is nearly null, while its stability allows really low False Rejection Rates (FRR) to be reached. Iris recognition systems do not suffer from high user rejection, since they use video or photographic cameras instead of laser beams such as the ones used for retinal scanning, which leads the latter technique to be considered somewhat invasive. Counterfeiting an iris is nearly impossible unless full surgery is performed on the eye, with the threat of losing the vision of that eye. The use of contact lenses with a copy of another user’s pattern printed on them is easily detected because of the continuous involuntary movement of the iris, which is not present in a printed one.
2.1 Preprocessing, Feature Extraction and Verification
As the first step, the iris is located and isolated. This is done by taking advantage of the circular pattern of the iris within the eye, studying the first derivative of the intensity of the image around a circle with fixed centre and variable radius:

\[ \frac{\partial}{\partial r} \oint_{x_0, y_0, r} \frac{I(x, y)}{2 \pi r} \, ds \]  (1)

The same process is performed to eliminate the pupil from the iris. Once isolated, the resulting image is stretched for better processing. Then wavelet multiresolution analysis is carried out. Several multiresolution algorithms have been studied (e.g. [2], [5] and [6]), achieving the best results with real symmetric Gabor filtering:

\[ G(x, y) = \exp\left( -\frac{1}{2} \left[ \frac{(x \cos\theta + y \sin\theta)^2}{\delta_x^2} + \frac{(x \sin\theta - y \cos\theta)^2}{\delta_y^2} \right] \right) \cdot \cos\bigl( 2 \pi \omega (x \cos\theta + y \sin\theta) \bigr) \]  (2)
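A direct NumPy rendering of the filter in Eq. (2) is sketched below; the kernel size and the parameter values in the example call are illustrative choices of ours, not the ones used by the authors.

    import numpy as np

    def real_gabor_kernel(size, delta_x, delta_y, omega, theta):
        """Real symmetric Gabor filter of Eq. (2) on a size x size grid."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
        xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinates
        yr = x * np.sin(theta) - y * np.cos(theta)
        envelope = np.exp(-0.5 * ((xr / delta_x) ** 2 + (yr / delta_y) ** 2))
        return envelope * np.cos(2 * np.pi * omega * xr)

    # Illustrative parameters: a 31x31 kernel tuned to a horizontal frequency.
    kernel = real_gabor_kernel(31, delta_x=4.0, delta_y=4.0, omega=0.1, theta=0.0)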
After obtaining the wavelet coefficients, a reduced set of them is selected through principal component analysis. With the set of data chosen, a primary pattern is formed. This pattern is used in three statistical decision schemes for the verification process: Euclidean and Hamming distances and Gaussian Mixture Modelling (GMM) [7]. In this application, the distances between authentic eyes and impostors are so different (around 0.1 for authentic eyes and 0.5 for impostors) that a universal threshold can be applied, making Euclidean and Hamming distances sufficient instead of GMMs, with the advantage of the lower computational cost of the former two compared with the latter.
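The decision rule this describes is a simple threshold test. The sketch below assumes binary iris codes and a mid-way threshold of 0.3, which is our illustrative choice between the reported 0.1 and 0.5 cluster centres, not a value stated by the authors.

    import numpy as np

    def normalized_hamming(code_a: np.ndarray, code_b: np.ndarray) -> float:
        """Fraction of disagreeing bits between two binary iris codes."""
        return float(np.mean(code_a != code_b))

    def verify(code, template, threshold=0.3):
        """Accept if the distance falls on the 'authentic' side (~0.1)
        rather than the 'impostor' side (~0.5) of the universal threshold."""
        return normalized_hamming(code, template) <= threshold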
3
Hand Geometry Measurements
The importance of hand geometry as a biometric technique relies on its medium/low cost and on its great acceptance by users, based on the following three main points: it has no police implications (e.g. fingerprint verification is closely associated with the police for most users), the system is really easy to use (unlike speaker verification not performed over a telephone, or iris recognition), and it does not imply physical invasion of the user’s vital organs (such as the eyes in retinal scanning). As will be shown, there are other facts that make hand geometry an optimal solution for some security environments, such as its template size, which is the smallest of all the biometric techniques existing today. The main disadvantage of this technique, however, is its ”lack” of security. Compared to commonly considered high-security methods (like fingerprint, iris and retinal scanning), hand geometry FAR and FRR figures are much higher, due to the lower unicity and stability of the hand template compared with the above-mentioned techniques. Unicity and stability are two characteristics of a biometric technique that can be studied separately. Because of the low unicity of the geometry of the hand (i.e. the possibility of finding two identical hands), this technique is not recommended for high-security environments, but the level of security it gives makes it valid for medium-security access control systems or ATMs. The low stability of hand geometry is a problem that can be solved in two ways: by performing relative measurements and/or by adapting the template each time the user is authenticated by the system.
3.1 Capture, Preprocessing, Extraction and Verification
As in any biometric system, the first step is the signal capture. In hand geometry a digital camera is used for this task. The image captured shows not only the back of the palm but also a lateral view of the hand, which serves as a weighting factor for the features extracted. The photo is taken when the hand is correctly placed in the system, as indicated by position and pressure sensors. Applying gradient ascent techniques, the edge of the hand is quickly obtained, ready for feature extraction. Several parameters were measured, from the width of the palm and fingers to the length and height of the latter. The
angles formed at the phalange joints were also taken into account. A principal component analysis was performed to select the parameters to be used in the user’s template. Absolute measurements, as well as relations between them, took part in the analysis. Corrections were made according to the pressure applied by the user. The analysis showed that even with a low number of parameters, such as 10 of them, satisfactory figures for FAR and FRR are obtained. By increasing the number of parameters, the FAR and FRR figures can be decreased further. With proper parameter codification, the template size can be from 9 to 25 bytes, which makes this technique ideal for minimizing template storage and lowering the computational cost of the verification process. After analysing different verification methods, from the ones based on metrics (e.g. Hamming and Euclidean distances) to the ones based on statistical decision theory (e.g. Gaussian mixture modelling), and considering the neural network approach with MLP and radial basis functions, the results obtained show that the method with the best trade-off between computational cost and FAR+FRR is the Hamming distance, although for best reliability GMMs are recommended (as seen in the next section).
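To illustrate how such a compact template can be obtained, the sketch below quantizes ten hand-geometry measures into one byte each. The feature count, the per-feature ranges and the random example data are hypothetical, chosen only to match the 9-25 byte template size quoted above.

    import numpy as np

    def encode_template(features, lo, hi):
        """Quantize each measure into one byte within its expected range."""
        f = np.clip((np.asarray(features, dtype=float) - lo) / (hi - lo), 0.0, 1.0)
        return (f * 255).round().astype(np.uint8).tobytes()  # 10 features -> 10 bytes

    # Hypothetical expected ranges (e.g. normalized finger/palm widths).
    lo = np.zeros(10)
    hi = np.ones(10)
    template = encode_template(np.random.rand(10), lo, hi)   # 10-byte template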
3.2 Stability Improvement
With the system developed as explained above, the overall error rate is below 5%. The relation between FAR, FRR and this rate can be modified (as said) by playing with the threshold value, depending on the specific needs of the system. But the stability limitations of this technique show up when the system is used over a long period of time. Adult users gaining or losing weight can change absolute measurements and, if the weight difference is large enough, even a change in the relative measurements can appear. On the other hand, non-adult users (i.e. users in their growing years) change their measurements continuously. Although the latter case is not important in most environments, the improvement reported by the authors also solves it. This improvement is based on two hypotheses:
Hypothesis 1. The gaining or losing of weight or height involves homogeneous changes in hand geometric measures, changing absolute measurements but not relative ones.
Hypothesis 2. The speed of change of the hand features is slow enough to consider that no important variation will occur in a period of one week.
The first hypothesis solves most of the problems by taking relative measurements instead of absolute ones. Thanks to the second hypothesis, stronger protection can be included in the biometric templates through adaptation. This adaptation should consider not the change in a single attempt, but the evolution over a set of them. For example, the last five successfully recognized attempts could be averaged with the template, after applying a weighting factor to each of these measurements that depends on the time elapsed between them. With these improvements, the stability of the features is increased and, therefore, the FRR figures remain the same throughout time. With this averaging process, the influence of a potential intruder wrongly recognized as the authenticated user is reduced. If the system is based on the storage of the templates in a user’s smart card, instead of in a central database, the threat is decreased even more, and because this technique uses a very small template, no memory limitations exist and the average could be performed even with the last ten or twenty successful attempts. Furthermore, the mathematical operations involved in the whole process are simple enough to enable processing times much below the ones needed for other techniques, and with less powerful microprocessors (e.g. 8-bit processors).
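A minimal sketch of this time-weighted template adaptation is given below; the exponential weighting scheme, the one-week time constant and the 50/50 blend with the old template are our illustrative assumptions, not the authors' exact choice.

    import numpy as np

    WEEK_SECONDS = 7 * 24 * 3600

    def adapt_template(template, attempts, ages, tau=WEEK_SECONDS):
        """Average the template with the last successful attempts, giving
        recent measurements (small age, in seconds) a larger weight."""
        attempts = np.asarray(attempts, dtype=float)       # shape (n, n_features)
        w = np.exp(-np.asarray(ages, dtype=float) / tau)   # one weight per attempt
        averaged = (w[:, None] * attempts).sum(axis=0) / w.sum()
        return 0.5 * np.asarray(template, dtype=float) + 0.5 * averaged

    # Example: update a 10-feature template with the last five attempts,
    # aged 1 to 5 days.
    tpl = np.random.rand(10)
    tries = np.random.rand(5, 10)
    ages = [d * 86400 for d in range(1, 6)]
    new_tpl = adapt_template(tpl, tries, ages)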
4
Results Obtained and Conclusions
After designing and developing the biometric systems explained above, the main results obtained have been as expected, with a null FAR for iris pattern recognition and an FRR much lower than the one obtained for hand geometry (see Fig. 2). However, iris feature extraction should be improved to lower the FRR without increasing the computational cost.
Fig. 2. FAR and FRR for iris pattern recognition and hand geometry measurement
On the other hand, the results obtained with hand geometry measurements are very satisfactory, achieving overall error rates below 5% with a GMM-based verification algorithm. If computational cost is a strong restriction, Hamming distances can be an alternative, at the sacrifice of increasing the overall error rate up to 12%, as can be seen in Fig. 3. Finally, the performance of the hand biometric system has been analysed over 9 months, with and without the stabilization improvement, showing an increase of the FRR when no adaptation is present and absolute measurements are used, while with the stabilization the overall error rate has been kept at similar figures.
Fig. 3. Overall error rates and time consumed in verification (as a fraction of the whole process time) in the hand geometry biometric system, with Hamming- and GMM-based verification algorithms
Fig. 4. Stability improvement in the hand geometry biometric system
With the results obtained, both techniques can be implemented in small embedded systems, such as a smart card. The verification rates obtained are satisfactory for most applications, since the error rates are less than 10%, although for high-security environments iris recognition is recommended.
References
1. Berliner, M.L.: Biomicroscopy of the Eye. Paul B. Hoeber, Inc. (1949)
2. Daugman, J. G.: High Confidence Visual Recognition of Persons by a Test of Statistical Independence. IEEE Trans. PAMI, vol. 15, no. 11, Nov. (1993) 1148-1161
3. Duda, R. O., Hart, P. E.: Pattern Classification and Scene Analysis. John Wiley & Sons (1973)
4. Jain, A. K.: Fundamentals of Digital Image Processing. Prentice Hall (1989)
5. Jain, A. K., Bolle, R., Pankanti, S.: Biometrics: Personal Identification in Networked Society. Kluwer Academic Publishers (1999)
6. Jain, L. C., Halici, U., Hayashi, I., Lee, S. B., Tsutsui, S.: Intelligent Biometric Techniques in Fingerprint and Face Recognition. CRC Press (1999)
7. Reynolds, D. A., Rose, R. C.: Robust Text-Independent Speaker Identification Using Gaussian Mixture Speaker Models. IEEE Trans. on Speech and Audio Processing, vol. 3, no. 1 (1995) 72-83
8. Sanchez-Reillo, R., Gonzalez-Marcos, A.: Access Control System with Hand Geometry Verification and Smart Cards. Proceedings of the IEEE ICCST Conference (to be published), October (1999)
9. Schalkoff, R. J.: Digital Image Processing and Computer Vision. John Wiley & Sons (1989)
10. Schürmann, J.: Pattern Classification. A Unified View of Statistical and Neural Approaches. John Wiley & Sons, Inc. (1996)
11. Zoreda, J.L., Oton, J.M.: Smart Cards. Artech House Inc. (1994)
Author Index
Basin, D. 30
Borcherding, M. 133
Brainard, J. 76
Bulatovic, D. 219
Chan, Y.-Y. 183
Cruellas, J.C. 242
Ganta, S. 229
Gonzales-Marcos, A. 251
Gupta, S. 229
Howgrave-Graham, N. 153
Hühnlein, D. 94
Hughes, J. 127
Jakobsson, M. 43
Kehr, R. 64
Keys, L. 229
Lopez, J. 109
M'Raihi, D. 43
Mana, A. 109
Mazzeo, A. 17
Mazzocca, N. 17
Medina, M. 242
Merkle, J. 94
Micali, S. 167
Mudge 192
Mulvenna, J. 229
Nyström, M. 76
Ortega, J. J. 109
Posegga, J. 64
Povey, D. 1
Pütz, S. 142
Reyzin, L. 167
Romano, L. 17
Rubia, M. 242
Sako, K. 101
Sanchez-Avila, C. 251
Sanchez-Reillo, R. 251
Schmeh, K. 119
Schmitz, R. 142
Schneier, B. 192
Seifert, J.P. 153
Tietz, B. 142
Tsiounis, Y. 43
Velasevic, D. 219
Vogt, H. 64
Wagner, D. 192
Walters, D. 229
Young, A. 204
Yung, M. 43, 204
CQRE [Secure]
November 30 - December 2, 1999, Düsseldorf, Germany
Program Chair
Rainer Baumgart, Secunet, Germany
Program Committee
Johannes Buchmann, University of Technology Darmstadt, Germany
Dirk Fox, Secorvo, Germany
Walter Fumy, Siemens, Germany
Rüdiger Grimm, GMD, Germany
Helena Handschuh, Gemplus / ENS, France
Pil Joong Lee, Postech, South Korea
Alfred Menezes, University of Waterloo / Certicom, Canada
David Naccache, Gemplus, France
Clifford Neumann, University of Southern California, USA
Joachim Posegga, Deutsche Telekom, Germany
Mike Reiter, Bell Labs, USA
Matt Robshaw, RSA Labs, USA
Richard Schlechter, European Commission, Belgium
Bruce Schneier, Counterpane, USA
Tsuyoshi Takagi, NTT, Germany
Yiannis Tsiounis, Spendcash, USA
Michael Waidner, IBM, Switzerland
Moti Yung, CertCo, USA
Robert Zuccerato, Entrust Technologies, Canada